Harmonic Analysis and Wavelets
Harmonic Analysis is nowadays one of the most important branches of Mathematical Analysis. Some notable theoretical aspects are, e.g., Fourier Analysis on groups, or the study of classical function spaces (e.g. Hilbert spaces such as L^2(R) or L^2(T)) by typical means such as Fourier transforms or bases (and their generalizations). But there are also many applications of the concepts and techniques of Harmonic Analysis: from the most classical Fourier transform (used for decades by physicists, astronomers, engineers, geologists...) to the relatively new applications of Wavelets to signal processing.
Joseph Fourier can be considered, for both historical and intellectual reasons, the father of Harmonic Analysis. In the first years of the nineteenth century (around 1807, though he published his work on the theory of heat only in 1822) he conjectured that every (or almost every) function could be written as a sum (either finite or infinite) of sines and cosines. He was led to this conjecture while studying the heat equation (which he had discovered himself), which he solved by writing the solutions as superpositions of sinusoidal distributions of heat.
Did those series converge? This wasn't clear at all, and the study of the convergence of Fourier series was indeed far from simple: great mathematicians tried to establish when (and in which sense) the Fourier series of a function converges to the function, and only in 1966 was one of the last conjectures on the topic settled affirmatively, by Carleson: Lusin's conjecture, which stated that the Fourier series of a square-integrable (in particular, of a continuous) function converges to the function pointwise almost everywhere (with respect to Lebesgue measure).
The applications of concepts from Harmonic Analysis are important in many fields, such as the solution of differential equations, the analysis of vibrations, images, sounds, earthquakes... and data compression.
Wavelets and the concept of Multi-Resolution Analysis (MRA) have become powerful tools in applied mathematics. The interesting and important applications of these concepts make Wavelets one of the mathematical objects most studied by researchers in many fields. Many good sites on them can be found on the Web, and below I've listed some.
In the following sections, I'd like to say a few words about Fourier Analysis and Wavelets. After this you will find (I do hope) useful links to other selected pages which contain material (such as introductory and/or advanced papers, software...) related to Fourier Analysis and Wavelets.
Fourier seems to have been one of the first mathematicians to promote the idea that "every" function can be written as a finite or infinite sum of sines and cosines; D. Bernoulli probably had a similar idea, but he didn't exploit it. Fourier was trying to solve the heat equation (which he had discovered!), and found solutions by writing them as superpositions of an infinite number of sinusoidal waves. It wasn't very clear whether those series really converged or defined a solution, but they worked in most real problems.
The theory, of course, came later, giving birth to that branch of Analysis today called Harmonic Analysis. For a while, I will adopt the language of signal analysis, since this may help the imagination and since important applications of the mathematical theory lie in this direction.
The basic idea of classical Fourier Analysis is the notion of the "frequency content" of a signal (mathematically represented by a function with certain properties): a superposition of sinusoidal waves with different frequencies can give birth to a complex signal (no longer sinusoidal, and perhaps not even smooth) and, conversely, the frequencies of large classes of signals can be analyzed and then used to reconstruct the signal as a superposition of simple waves. It is a sort of bridge between the space (or time) domain in which the signal lives and the frequency domain which describes how it oscillates.
From a mathematical point of view, we have two domains: the "space domain" G (e.g. an interval, or the whole real line), and the "frequency domain" G^ (which is in some manner related to the "space domain"):
    Space domain G                         Frequency domain G^
    ------------------------------         ------------------------------
    T = [0,L]                              Z = {...,-2,-1,0,+1,+2,...}
    R                                      R
    Z = {...,-2,-1,0,+1,+2,...}            T
A function f (representing a signal) defined on the space domain is analyzed and gives birth to a function f^ (its Fourier transform) on the frequency domain. Conversely, from a function g on the frequency domain we can get back (by the inverse Fourier transform) a function g~ defined on the space domain. Moreover, if we take the inverse transform of the transform, we are back to our original signal: (f^)~ = f (this is the so-called reconstruction formula, since it allows the reconstruction of a signal from its frequencies).
    Space domain G                         Frequency domain G^

    f                ----------->          f^
    g~               <-----------          g
For large classes of functions (at least from the standpoint of common applications) it is possible to define the Fourier transform (a harder step is to characterize the functions that are the transform of some function). Everything goes very well in special function spaces: the Hilbert spaces, such as L^2(R) or L^2(R^n), which can be thought of as containing all the functions that represent, from a physical point of view, signals with finite energy. Here this "energy" is given by:
    Space domain                            Energy (squared norm)                       Energy (squared norm)                       Frequency domain
    L^2(T) = L^2([0,L])                     1/(2L) * Integral of |f(x)|^2 over [0,L]    Sum of |f^(n)|^2 over all n in Z            L^2(Z)
    L^2(R)                                  Integral of |f(x)|^2 over the real line     Integral of |f^(w)|^2 over the real line    L^2(R)
    L^2(Z)                                  Sum of |f(n)|^2 over all n in Z             1/(2L) * Integral of |f^(x)|^2 over [0,L]   L^2(T)

(Here L^2(Z) = L^2({...,-2,-1,0,+1,+2,...}); the exact normalization constants depend on the convention chosen for the Fourier transform.)
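As a purely illustrative aside (not in the original text), this conservation of energy can be checked numerically. The small Python sketch below uses NumPy's discrete Fourier transform, whose normalization convention puts a factor 1/N on the frequency side of Parseval's identity.

    # A minimal numerical sketch of energy conservation (Parseval/Plancherel),
    # using NumPy's FFT normalization: sum |f|^2 == sum |F|^2 / N.
    import numpy as np

    N = 1024
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    f = np.sin(2 * np.pi * 5 * x) + 0.5 * np.cos(2 * np.pi * 17 * x)

    F = np.fft.fft(f)

    energy_space = np.sum(np.abs(f) ** 2)
    energy_freq = np.sum(np.abs(F) ** 2) / N   # the 1/N comes from this FFT convention

    print(energy_space, energy_freq)           # the two numbers agree up to rounding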
These spaces are infinite-dimensional generalizations of the usual Euclidean spaces R^n and C^n. Every point of such a space is a whole function, and we can sum two points and scale every point by an arbitrary factor, just as we can do in a finite-dimensional vector space such as R^3.
In R^n we can choose a frame of reference (i.e. a basis, which is a set of n independent vectors) which allows us to associate to every vector n numbers (called its components relative to the chosen basis). These numbers can then be used to reconstruct the vector.
In Hilbert spaces we need infinitely many numbers to determine a vector of the space (which represents a function): we have bases with infinitely many vectors (for example the classical basis made up of all the sines and cosines with integer frequencies) and an infinite set of numbers (called the Fourier coefficients of the function with respect to the basis) that are the components of the vector in that basis.
Moreover, in a Hilbert space it still makes sense to speak of orthogonality, so we have infinite-dimensional versions of Pythagoras' theorem and of intuitive geometric constructions and approximations. The sines and cosines are the analogue of an orthonormal basis in R^n, the Fourier coefficients are the projections of a function onto the vectors of this basis, and the Fourier series and integrals are the formulas for the reconstruction of a function by means of its coefficients.
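To make the "coefficients are projections" picture concrete, here is a small Python sketch of my own (the interval [0, 2*pi] and the square-wave test signal are assumptions made purely for illustration): each Fourier coefficient is obtained as an inner product with an orthonormal sine or cosine, and summing the projections rebuilds an approximation of the signal.

    # Sketch: Fourier coefficients as inner products (projections) onto the
    # orthonormal trigonometric system over [0, 2*pi], and reconstruction of
    # the signal as a finite sum of its projections.
    import numpy as np

    N = 4096                                      # quadrature points (an assumption)
    x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    dx = x[1] - x[0]
    f = np.sign(np.sin(x))                        # a square wave as test signal

    recon = np.full_like(x, np.mean(f))           # projection onto the constant function
    for k in range(1, 20):
        ck = np.sqrt(1 / np.pi) * np.cos(k * x)   # orthonormal basis vectors
        sk = np.sqrt(1 / np.pi) * np.sin(k * x)
        a_k = np.sum(f * ck) * dx                 # inner product = Fourier coefficient
        b_k = np.sum(f * sk) * dx
        recon += a_k * ck + b_k * sk              # add the projection on each basis vector

    print(np.mean((f - recon) ** 2))              # mean square error shrinks as terms are added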
It turns out that a function in these spaces has a Fourier transform which is again in a space of the same type (not necessarily the same one: the space domain and the frequency domain may be different; remember that if G=[a,b] then G^=Z, but if G=R then G^=R). The Fourier transform becomes an isometric isomorphism (i.e. it sends sums and scalar multiplications in one space into the same operations in the other space, and moreover conserves the total energy) between these spaces.
In particular the "self-duality" of R=G=G^ implies that we have two exact copies of the same space, and the Fourier transform can be regarded in a natural way as a transformation of the space itself. In fact, it can be considered as a rotation of that space, or, conceptually, a change of point of view (or of reference frame).
    L^2(R)        ------------------->        L^2(R)
                    Fourier transform

    a * f         ------------------->        a * f^
    f + g         ------------------->        f^ + g^
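The diagram can also be checked numerically. The following Python sketch (my illustration, using the discrete Fourier transform as a stand-in for the continuous one) verifies that the transform of a*f + g is indeed a*f^ + g^.

    # Sketch: the (discrete) Fourier transform respects sums and scalar
    # multiples, i.e. it is linear, as in the diagram above.
    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.standard_normal(256)
    g = rng.standard_normal(256)
    a = 3.7

    lhs = np.fft.fft(a * f + g)
    rhs = a * np.fft.fft(f) + np.fft.fft(g)

    print(np.allclose(lhs, rhs))   # True: (a*f + g)^ = a*f^ + g^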
The analysis in these spaces is achieved by writing the signal as a superposition of classical sinusoidal waves of all the possible frequencies, which is called "expanding" the signal on the "basis" of sinusoidal functions. The idea is that the more a signal is concentrated on a certain frequency, the larger the related coefficient in the expansion will be.
Also, once we've got the decomposition of the signal into frequencies, we could discard the coefficients smaller than a certain value, i.e. we could "forget" those frequencies that contribute less to the signal, and thus achieve a compression of the signal (however, by doing this, we lose a part of the signal considered a priori negligible). Or, for denoising purposes, we could discard high-frequency components. It is easy to imagine how many applications these ideas have found.
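Here is a small Python sketch of this "keep only the large coefficients" idea (the test signal and the 10% threshold are assumptions of mine, chosen only for illustration):

    # Sketch of compression by discarding small Fourier coefficients:
    # threshold the spectrum, keep the few large coefficients, reconstruct.
    import numpy as np

    N = 1024
    t = np.linspace(0.0, 1.0, N, endpoint=False)
    rng = np.random.default_rng(1)
    signal = (np.sin(2 * np.pi * 7 * t)
              + 0.3 * np.sin(2 * np.pi * 40 * t)
              + 0.02 * rng.standard_normal(N))                      # a little noise

    coeffs = np.fft.fft(signal)
    threshold = 0.1 * np.max(np.abs(coeffs))
    compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0)   # discard the small ones

    reconstructed = np.real(np.fft.ifft(compressed))

    print("kept", np.count_nonzero(compressed), "of", N, "coefficients")
    print("max reconstruction error:", np.max(np.abs(signal - reconstructed)))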
We've seen that Fourier analysis is a useful mathematical approach to the analysis of the frequencies of a signal. Nevertheless, in many respects Fourier analysis is not the best way to solve the problem of analysis and reconstruction. What is the main problem with Fourier analysis?
The answer can be summed up in a magic phrase: lack of localization.
Let me explain what is meant by localization. Sinusoidal waves look the same all over the space domain (they are periodic, hence translation-invariant): this means they are not localized. So what does non-localization imply? Let me explain with a typical example. Suppose we have a simple signal which is quite flat all over the (time-)line, except for a sudden peak around a point, oscillating at a certain frequency phi. When we decompose this signal on the standard basis of sinusoidal waves, we get not only large coefficients for frequencies near phi, but also significantly large coefficients across a wide spectrum. This is due to the fact that once we've captured the frequencies around phi, we're left with a sinusoidal wave, which may approximate the peak well but certainly doesn't look flat far from it. Since, by the reconstruction formula, we must get back our original signal from the inverse transform, the oscillations away from the peak have to be cancelled to recreate the flat signal. How does this happen? By adding higher and higher frequencies, each partially cancelling the others. So we get a transform that is large over many frequencies, and it is not easy to read information about the space-domain signal from it. This is why the classical Fourier transform is so ineffective in handling peaks and sharp edges (in images). It is very useful, nevertheless, in studying signals that statistically show the same characteristics at all times, whose behaviour doesn't vary abruptly in time.
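A quick numerical sketch of this phenomenon (my own illustration, not part of the original text): a pure sinusoid has essentially two significant Fourier coefficients, while a single narrow spike spreads its energy over the whole spectrum.

    # Sketch: frequency spread of a pure sinusoid vs. a single spike.
    import numpy as np

    N = 1024
    n = np.arange(N)

    sinusoid = np.sin(2 * np.pi * 50 * n / N)   # perfectly localized in frequency
    spike = np.zeros(N)
    spike[N // 2] = 1.0                         # perfectly localized in time

    def spread(x):
        """Fraction of frequencies carrying at least 1% of the peak magnitude."""
        mag = np.abs(np.fft.fft(x))
        return np.mean(mag >= 0.01 * mag.max())

    print("sinusoid spectrum spread:", spread(sinusoid))   # close to 0
    print("spike spectrum spread:   ", spread(spike))      # equal to 1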
The problem of localization is solved by wavelets. These are a family of functions that, like sinusoidal waves, form a good (orthonormal and unconditional) basis in certain spaces (such as Hilbert spaces, but also many others). But, unlike sines (optimally localized in frequency, yet oscillating forever in time with the same amplitude), they are localized both in time and in frequency.
A family of wavelets (the classical ones; there are many others with different characteristics and arising in different ways) is obtained by translations and dilations (generally by a factor of 2, though the case of an integer factor greater than 2 has also been studied) of a single wavelet (often called the "mother wavelet"):
Psi(x;j,m) = 2^(j/2)*Mother_Psi((2^j)x-m)
The index j gives the level of dilation, m the translation. The factor 2^(j/2) is introduced so that all the wavelets have the same energy. By translating we obtain localization in space, by dilating we obtain better and better resolutions. We can parametrize the family of wavelets with two parameters as above, one indicating where (in a spatial sense) the wavelet is centered, the other indicating at what resolution the wavelet is analyzing. The two parameters are independent. By fixing the first and varying the second, we can analyze the signal as accurately as we wish around a certain position in space; the other way round, we can analyze at a fixed resolution in different places.
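As a rough illustration of these formulas (assuming, for concreteness, the simple Haar mother wavelet, which the text above does not specify), the following Python sketch builds two members of the family on a grid and checks that the 2^(j/2) factor indeed gives unit energy and that distinct wavelets are orthogonal.

    # Sketch: the family Psi(x; j, m) = 2**(j/2) * Mother_Psi(2**j * x - m)
    # for the (assumed) Haar mother wavelet; energy and orthogonality check.
    import numpy as np

    def mother_psi(x):
        """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
        return np.where((x >= 0) & (x < 0.5), 1.0,
                        np.where((x >= 0.5) & (x < 1.0), -1.0, 0.0))

    def psi(x, j, m):
        return 2 ** (j / 2) * mother_psi(2 ** j * x - m)

    N = 2 ** 14
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    dx = x[1] - x[0]

    w1 = psi(x, 3, 2)                                # level j = 3, translation m = 2
    w2 = psi(x, 4, 5)                                # level j = 4, translation m = 5

    print("energy of w1:  ", np.sum(w1 ** 2) * dx)   # approximately 1
    print("energy of w2:  ", np.sum(w2 ** 2) * dx)   # approximately 1
    print("inner product: ", np.sum(w1 * w2) * dx)   # approximately 0 (orthogonal)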
A signal such as that described above can be easily analyzed with wavelets: we could start by inspecting the space domain, varying the spatial parameter. By doing so, we would find a lot of nearly zero coefficients, except for a few, namely those relative to the wavelets centered near the peak. This tells us where a more accurate analysis is required (if desired): it is enough to fix the translation parameter and increase the resolution of the wavelets to get an accurate description of the peak.
The concept of Multi-Resolution Analysis (MRA from now on) is strongly related to that of wavelets. The idea is to decompose the space of signals into a chain of particular nested subspaces {V(j)}, with the index j taking values in Z. They are "particular" in the sense that they are required to possess the following properties:
the subspaces are nested: each contains the preceding one and is contained in the next:
{0} < ... < V(-2) < V(-1) < V(0) < V(1) < V(2) < ... < L^2
the intersection of all these spaces contains only the element 0, while their union is ("almost") the whole space;
there is a function f such that its integer translates (the functions of the form f(x-k) for k in Z) generate V(0);
every space is obtained by rescaling the preceding one by a factor of 2:
g(x) is in V(j) iff g(2x) is in V(j+1)
The idea underlying all this is as follows. If we have a function (signal) in the space, we can project it onto a certain subspace V(j). By doing so, we discard some of the information in the original signal (which will lie in the orthogonal complement of that subspace), and are left with an approximation. The larger j is, the better this approximation will be, since a larger j means "more detailed functions" in V(j).
Where are the wavelets in the context of an MRA? A beautiful construction of Yves Meyer shows that if we consider the space W(j) that "fills in the details" between V(j) and V(j+1) (i.e. the orthogonal complement of V(j) in V(j+1)), we find that each W(j+1) is a dilated W(j), that all the W(j) are mutually orthogonal, and that there exists a family of wavelet functions Psi(x;j,m) = 2^(j/2)*Mother_Psi((2^j)x-m) such that each W(j) is generated by the translated versions of Psi(x;j):
W(j) = < Psi(x;j,m) > for m varying in Z
So, for each level j of detail, the translates of the dilated f, i.e. the functions f((2^j)x-k), k in Z, generate V(j), and the translates of the dilated mother wavelet, i.e. the functions Mother_Psi((2^j)x-m), m in Z, generate W(j).
The technique for analyzing a signal is now the following: start from its approximation in V(j); project it onto V(j-1) and save the details that are in W(j-1); project onto V(j-2) and save the details in W(j-2)... repeat until the approximation in V(J) is coarse enough for your purpose. To reconstruct, start from the coarsest approximation in V(J), add the details in W(J) to get the approximation in V(J+1), ..., and add the details in W(j-1) to obtain your original approximation in V(j).
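The following Python sketch shows this analyze/reconstruct pyramid in the simplest possible case, the Haar MRA (an assumption of mine, used only for illustration): projecting onto the coarser space amounts to taking pairwise averages, the details are pairwise differences, and adding the saved details back recovers the original samples exactly.

    # Sketch of the pyramid scheme above in the Haar case: averages give the
    # coarser approximation (V spaces), differences give the details (W spaces),
    # and the reconstruction undoes the splitting exactly.
    import numpy as np

    def haar_analyze(signal, levels):
        """Split off `levels` layers of Haar details from a length-2**k signal."""
        approx = np.asarray(signal, dtype=float)
        details = []
        for _ in range(levels):
            avg = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # projection onto V(j-1)
            det = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # details in W(j-1)
            details.append(det)
            approx = avg
        return approx, details

    def haar_reconstruct(approx, details):
        """Add the detail layers back, coarsest first."""
        for det in reversed(details):
            upsampled = np.empty(2 * approx.size)
            upsampled[0::2] = (approx + det) / np.sqrt(2)
            upsampled[1::2] = (approx - det) / np.sqrt(2)
            approx = upsampled
        return approx

    signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    coarse, details = haar_analyze(signal, levels=3)
    print(np.allclose(haar_reconstruct(coarse, details), signal))   # True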
Wavelets still suffer from some lack of localization in the frequency domain, at least at high resolutions (small details). This problem is addressed by wavelet packets, which allow a further splitting of the spaces W(j) into orthogonal subspaces, some of which have better frequency localization than others. They form a sort of library of special wavelet-like functions, some better localized in frequency than others. A choice adapted to the signal currently being analyzed is also possible, in an efficient way, thus noticeably increasing the "frequency resolution power" of the analysis. This technique is particularly useful when great accuracy is required in distinguishing different frequencies: such a need arises, for example, in speech recognition applications.
In these few lines I have tried to give an idea of why wavelets can be used so effectively for accurate signal analysis, edge detection (where Fourier series fail), sound and image compression (only a few large coefficients are enough to get a good approximation of the signal, and moreover the algorithmic complexity of the fast wavelet transform is even lower than that of the fast Fourier transform!)...
Here I'll give only the basic idea of this theory. Even this little bit is rather technical, and difficult to explain without using mathematical language.
This theory is a generalization of classical Fourier analysis.
The latter was developed by studying periodic functions (i.e. functions on T, a finite interval of the real line), functions on the real line R, and their analogues in several dimensions. The first (trivial, but important) observation is that all these domains are groups (i.e. they are endowed with a binary operation with good properties), so we will denote them by G. On R we have the usual sum of real numbers, on T the sum modulo the length of the interval, on R^n the sum of vectors.
Let S be the unit circle in the complex plane C: it is a group with the usual multiplication of complex numbers.
Let us now consider the homomorphisms from G to S. These are maps that send the operation in G to the operation in S, i.e., if e is such a map, we have
e(a+b) = e(a)*e(b)
where + is the operation in G and * is complex multiplication in S, for every a, b in G.
The complex exponentials of classical Fourier Analysis (whose "real" incarnations are the sines and cosines) are exactly such homomorphisms: for the torus, when the exponent is an integer, and for the real line, when the exponent is a real number. In fact the property we are interested in is exactly
exp(i(a+b)) = exp(ia)*exp(ib)
which means they are homomorphisms from T or R into the unit circle of C. Note that + is the operation in T or R and * is the operation in S.
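For what it's worth, this homomorphism property can even be checked numerically. The tiny Python sketch below (my illustration; the exponent n = 3 is an arbitrary choice) evaluates e(t) = exp(i*n*t) at random points and verifies that e(a+b) = e(a)*e(b) up to rounding.

    # Sketch: t -> exp(i*n*t) sends sums in R (or T) to products on the unit circle S.
    import cmath
    import random

    def character(n):
        """The character t -> exp(i*n*t); n an integer for T, any real number for R."""
        return lambda t: cmath.exp(1j * n * t)

    e = character(3)
    random.seed(0)
    for _ in range(5):
        a, b = random.uniform(-10, 10), random.uniform(-10, 10)
        print(abs(e(a + b) - e(a) * e(b)) < 1e-12)   # True: e(a+b) = e(a)*e(b)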
It is a fact that on many groups it is possible to define a topology, which gives the notion of open set and hence of continuous function (the group operation is also required to be continuous with respect to such a topology: this makes the group a so-called topological group). It is also possible to endow a large family of groups with a measure (called the Haar measure, which furthermore is invariant under the operation of the group), which lets us talk of measurable and integrable functions, and of function spaces on these groups.
The characters are exactly those complex
homomorphisms from the group to S
which are also continuous. They form a group (with pointwise
product), which is called the dual group G^.
We can also define the dual of the dual, but this turns out to be the
original G. The dual of T
is Z, the dual of R
is R again. This is why we use
"discrete frequencies" to analyze functions on T,
but we pass to a "continuous parameter" when we pass to
functions on the real line.
This way of looking at Fourier
Analysis makes many things clearer, and this is why it is
appreciated and adopted by many mathematicians. It also offers
powerful and general tools which could hardly have been discovered by
other means.
Here are some links I consider interesting. I hope you'll find them useful and that they will save you time in searching the Web for what you are really looking for. There are both introductions to wavelets and research papers, neatly divided into categories according to their topic.
Wavelets
Resources at MathSoft
A good, vast bibliography, with lots of papers (preprints) by leading authors, subdivided by topic. Links to wavelet-related software are also available. A very good page!
Rice
DSP Publication Archives
Lots (really lots) of papers on Wavelets, the FFT, software... The Rice University site is well organized and rich in material. If you are interested in the latest research, you can also find the Technical Reports of past years, all available in electronic format (see CML).
Downloads are generally quite fast. Reserve a bookmark for this one!
I.
Daubechies' home page
The home page of the great wavelet researcher. Daubechies' studies in wavelet theory have led to many important achievements. One of the most important is her construction and characterization of compactly supported (i.e. with "finite length in time") wavelets, which are, quite rightly, named after her.
An
Introduction to Wavelets
A simple introductory article (by Amara Graps) which presents the basic ideas of Wavelet theory and some of their applications (did you know that FBI fingerprint archives are compressed with wavelets?). The home page is at Amara's page.
Wavelet.org
This is a really good site on wavelets, featuring sections on the latest news in the wavelet world and on current research. Here you will also be given the opportunity to subscribe (free of charge) to the Wavelet Digest, an electronic journal you can periodically receive by e-mail: it contains links to papers available on the web, current lines of research in related topics, calendars of wavelet-related events, and questions and answers from other waveleteers.
On the site you will also find pages of links
to other sites, to papers, books, software...
Don't miss this!
Numerical
Harmonic Analysis Group (NUHAG)
This group's research interests are principally in Gabor Analysis, Banach algebras, and their applications. You can also subscribe (free of charge) to the group's electronic journal, dealing mainly with current lines of development and research in Gabor Analysis, group representations, Wiener algebras... and much, much more. A wide collection of papers by NUHAG members (available in electronic format) can also be found there.
Wavelet
Papers
A site
collecting some papers on wavelets.
Wim Sweldens' Selected publications Many good papers by recognized researchers in Wavelets and on new lines of research in related areas.
Wavelets at MIT Research group on Wavelets at MIT. Applications of wavelets to image compression and to algorithms for solving partial differential equations are considered. Some papers are also available.
Wavelets
References and Homepages
Page containing a list of links to homepages on wavelets and to
other wavelet related sites on the Web.
Wavelets
- Internet Resources
Lots of good links to available papers, homepages, and courses on Wavelets and their applications. The last time I visited, some of the suggested links were broken, but you can find so many here that you won't mind if a few aren't up to date.
Wavelet
links at WashU of St.Louis
List of Web resources on wavelets and related topics.
Wavelet
Image Compression
The application of Wavelets to image compression has given very good results, and studies aimed at improving the efficiency and speed of these methods are making them more and more widespread. You won't be surprised if, within a year, your digital camera uses wavelet compression to hold more films and photos.
Finally, the cutting edge of compression seems to lie in combining Wavelet and Fractal techniques. See for example Fractal Image Compression (site and introductory article) or the Fractal Image Compression Bibliography for a (quite long) bibliography (without links: only books and papers).
Wavelets
for retarded
A
simple introduction to wavelets.
Dragon
Applet
Applet that
illustrates the links between iterative methods and wavelets.