Wavelet Paradise

Fundamentals of wavelet analysis, software implementation of wavelet analysis, and the current state and prospects of wavelet applications
Main text

Wavelet Introduction (4)

(2004-12-17 04:55:28)

An Idea with No Name

Over the course of the twentieth century, scientists in different fields struggled to get around these limitations, in order to allow representations of the data to adapt to the nature of the information. In essence, they wanted to capture both the low-resolution forest—the repeating background signal—and the high-resolution trees—the individual, localized variations in the background. Although the scientists were each trying to solve the problems particular to their respective fields, they began to arrive at the same conclusion—namely, that Fourier transforms themselves were to blame. They also arrived at essentially the same solution: Perhaps by splitting a signal up into components that were not pure sine waves, it would be possible to condense the information in both the time and frequency domains. This is the idea that would ultimately be known as wavelets.

The first entrant in the wavelet derby was a Hungarian mathematician named Alfred Haar, who introduced in 1909 the functions that are now called “Haar wavelets.” These functions consist simply of a short positive pulse followed by a short negative pulse. Although the short pulses of Haar wavelets are excellent for teaching wavelet theory, they are less useful for most applications because they yield jagged lines instead of smooth curves. For example, an image reconstructed with Haar wavelets looks like a cheap calculator display, and a Haar wavelet reconstruction of the sound of a flute is too harsh.
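For reference, the “positive pulse followed by a negative pulse” can be written out explicitly. The Haar mother wavelet is the step function

    ψ(t) = +1 for 0 ≤ t < 1/2,   −1 for 1/2 ≤ t < 1,   0 otherwise,

and the full Haar basis is obtained by dilating and translating it, ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k). Because ψ jumps abruptly between +1 and −1, any signal rebuilt from finitely many of these terms is piecewise constant, which is exactly why Haar reconstructions look blocky on images and sound harsh on audio.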

From time to time over the next several decades, other precursors of wavelet theory arose. In the 1930s, the English mathematicians John Littlewood and R.E.A.C. Paley developed a method of grouping frequencies by octaves, thereby creating a signal that is well localized in frequency (its spectrum lies within one octave) and also relatively well localized in time. In 1946, Dennis Gabor, a British-Hungarian physicist, introduced the Gabor transform, analogous to the Fourier transform, which separates a wave into “time-frequency packets” or “coherent states” that have the greatest possible simultaneous localization in both time and frequency. And in the 1970s and 1980s, the signal processing and image processing communities introduced their own versions of wavelet analysis, going by such names as “subband coding,” “quadrature mirror filters,” and the “pyramidal algorithm.”
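In modern notation, Gabor’s “time-frequency packets” amount to analyzing a signal f against a sliding Gaussian window. The Gabor transform is usually written as

    G_f(τ, ω) = ∫ f(t) g(t − τ) e^{−iωt} dt,

where g is a Gaussian centered at time τ. The Gaussian window is what gives these packets their optimal joint localization, since it minimizes the time-frequency uncertainty product; the wavelet transforms described next differ mainly in that the analyzing function is stretched and compressed rather than kept at a fixed width.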

While not precisely identical, all of these techniques had similar features. They decomposed or transformed signals into pieces that could be localized to any time interval and could also be dilated or contracted to analyze the signal at different scales of resolution. These precursors of wavelets had one other thing in common: No one knew about them beyond individual specialized communities. But in 1984, wavelet theory finally came into its own.
