Fourier series


Template:Short description Template:Redirect Template:Fourier transforms A Fourier series (Template:IPAc-en[1]) is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series.Template:Sfn By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions, described in Template:Slink.

The study of the convergence of Fourier series focuses on the behavior of the partial sums, which means studying the behavior of the sum as more and more terms from the series are summed. The figures below illustrate some partial Fourier series results for the components of a square wave.

Fourier series are closely related to the Fourier transform, a more general tool that can even find the frequency information for functions that are not periodic. Periodic functions can be identified with functions on a circle; for this reason Fourier series are the subject of Fourier analysis on the circle group, denoted by 𝕋 or S¹. The Fourier transform is also part of Fourier analysis, but is defined for functions on ℝⁿ.

Since Fourier's time, many different approaches to defining and understanding the concept of Fourier series have been discovered, all of which are consistent with one another, but each of which emphasizes different aspects of the topic. Some of the more powerful and elegant approaches are based on mathematical ideas and tools that were not available in Fourier's time. Fourier originally defined the Fourier series for real-valued functions of real arguments, and used the sine and cosine functions in the decomposition. Many other Fourier-related transforms have since been defined, extending his initial idea to many applications and birthing an area of mathematics called Fourier analysis.

Template:See also

The Fourier series is named in honor of Jean-Baptiste Joseph Fourier (1768–1830), who made important contributions to the study of trigonometric series, after preliminary investigations by Leonhard Euler, Jean le Rond d'Alembert, and Daniel Bernoulli.Template:Efn-ua Fourier introduced the series for the purpose of solving the heat equation in a metal plate, publishing his initial results in his 1807 Mémoire sur la propagation de la chaleur dans les corps solides (Treatise on the propagation of heat in solid bodies), and publishing his Théorie analytique de la chaleur (Analytical theory of heat) in 1822. The Mémoire introduced Fourier analysis, specifically Fourier series. Through Fourier's research the fact was established that an arbitrary (at first, continuous[2] and later generalized to any piecewise-smooth[3]) function can be represented by a trigonometric series. The first announcement of this great discovery was made by Fourier in 1807, before the French Academy.[4] Early ideas of decomposing a periodic function into the sum of simple oscillating functions date back to the 3rd century BC, when ancient astronomers proposed an empiric model of planetary motions, based on deferents and epicycles.

Independently of Fourier, the astronomer Friedrich Wilhelm Bessel introduced Fourier series to solve Kepler's equation. He published his work in 1819, unaware of Fourier's work, which remained unpublished until 1822.[5]

The heat equation is a partial differential equation. Prior to Fourier's work, no solution to the heat equation was known in the general case, although particular solutions were known if the heat source behaved in a simple way, in particular, if the heat source was a sine or cosine wave. These simple solutions are now sometimes called eigensolutions. Fourier's idea was to model a complicated heat source as a superposition (or linear combination) of simple sine and cosine waves, and to write the solution as a superposition of the corresponding eigensolutions. This superposition or linear combination is called the Fourier series.

From a modern point of view, Fourier's results are somewhat informal, due to the lack of a precise notion of function and integral in the early nineteenth century. Later, Peter Gustav Lejeune Dirichlet[6] and Bernhard Riemann[7][8][9] expressed Fourier's results with greater precision and formality.

Although the original motivation was to solve the heat equation, it later became obvious that the same techniques could be applied to a wide array of mathematical and physical problems, and especially those involving linear differential equations with constant coefficients, for which the eigensolutions are sinusoids. The Fourier series has many such applications in electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics,[10] shell theory,[11] etc.

Beginnings

Joseph Fourier wrote[12]

Template:Blockquote

This immediately gives any coefficient a_k of the trigonometric series for φ(y) for any function which has such an expansion. It works because if φ has such an expansion, then (under suitable convergence assumptions) the integral \int_{-1}^{1}\varphi(y)\cos(2k+1)\frac{\pi y}{2}\,dy = \int_{-1}^{1}\left(a\cos\frac{\pi y}{2}\cos(2k+1)\frac{\pi y}{2} + a'\cos 3\frac{\pi y}{2}\cos(2k+1)\frac{\pi y}{2} + \cdots\right)dy can be carried out term-by-term. But all terms involving \cos(2j+1)\frac{\pi y}{2}\cos(2k+1)\frac{\pi y}{2} for j ≠ k vanish when integrated from −1 to 1, leaving only the kth term; since \int_{-1}^{1}\cos^2(2k+1)\frac{\pi y}{2}\,dy = 1, the integral equals the coefficient of the kth cosine.

In these few lines, which are close to the modern formalism used in Fourier series, Fourier revolutionized both mathematics and physics. Although similar trigonometric series were previously used by Euler, d'Alembert, Daniel Bernoulli and Gauss, Fourier believed that such trigonometric series could represent any arbitrary function. In what sense that is actually true is a somewhat subtle issue and the attempts over many years to clarify this idea have led to important discoveries in the theories of convergence, function spaces, and harmonic analysis.

When Fourier submitted a later competition essay in 1811, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded: ...the manner in which the author arrives at these equations is not exempt of difficulties and...his analysis to integrate them still leaves something to be desired on the score of generality and even rigour.[13]

Fourier's motivation

The resulting heat distribution in a metal plate, easily solved using Fourier's method

The Fourier series expansion of the sawtooth function (below) looks more complicated than the simple formula s(x) = x/\pi, so it is not immediately apparent why one would need the Fourier series. While there are many applications, Fourier's motivation was in solving the heat equation. For example, consider a metal plate in the shape of a square whose sides measure π meters, with coordinates (x, y) \in [0, \pi]\times[0, \pi]. If there is no heat source within the plate, and if three of the four sides are held at 0 degrees Celsius, while the fourth side, given by y = \pi, is maintained at the temperature T(x, \pi) = x degrees Celsius for x in (0, \pi), then one can show that the stationary heat distribution (or the heat distribution after a long time has elapsed) is given by

T(x, y) = 2\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(nx)\,\frac{\sinh(ny)}{\sinh(n\pi)}.

Here, sinh is the hyperbolic sine function. This solution of the heat equation is obtained by multiplying each term of the equation from Analysis § Example by sinh(ny)/sinh(nπ). While our example function s(x) seems to have a needlessly complicated Fourier series, the heat distribution T(x,y) is nontrivial. The function T cannot be written as a closed-form expression. This method of solving the heat problem was made possible by Fourier's work.
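For readers who want to see the series in action, here is a minimal numerical sketch (not part of the original article) that evaluates a truncation of the series above with NumPy; the truncation order N and the sample points are arbitrary choices.

import numpy as np

def T(x, y, N=50):
    """Partial sum of T(x,y) = 2 * sum_{n>=1} (-1)^(n+1)/n * sin(n x) * sinh(n y)/sinh(n pi).

    The ratio sinh(n y)/sinh(n pi) is rewritten with exponentials so that it
    stays finite even when sinh(n pi) itself would overflow for large n.
    """
    total = 0.0
    for n in range(1, N + 1):
        ratio = (np.exp(n * (y - np.pi)) - np.exp(-n * (y + np.pi))) / (1.0 - np.exp(-2.0 * n * np.pi))
        total += 2.0 * (-1) ** (n + 1) / n * np.sin(n * x) * ratio
    return total

# On the heated side y = pi the boundary condition T(x, pi) = x is recovered,
# up to truncation error and Gibbs oscillation near x = pi.
for x in (0.5, 1.0, 2.0):
    print(x, T(x, np.pi))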

Other applications

Another application is to solve the Basel problem by using Parseval's theorem. The example generalizes and one may compute ζ(2n), for any positive integer n.

Definition

The Fourier series of a complex-valued P-periodic function s(x), integrable over the interval [0, P] on the real line, is defined as a trigonometric series of the form \sum_{n=-\infty}^{\infty} c_n\, e^{i 2\pi\frac{n}{P}x}, such that the Fourier coefficients c_n are complex numbers defined by the integralTemplate:SfnTemplate:Sfn c_n = \frac{1}{P}\int_0^P s(x)\, e^{-i 2\pi\frac{n}{P}x}\,dx. The series does not necessarily converge (in the pointwise sense) and, even if it does, it is not necessarily equal to s(x). Only when certain conditions are satisfied (e.g. if s(x) is continuously differentiable) does the Fourier series converge to s(x), i.e., s(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{i 2\pi\frac{n}{P}x}. For functions satisfying the Dirichlet sufficiency conditions, pointwise convergence holds.Template:Sfn However, these are not necessary conditions and there are many theorems about different types of convergence of Fourier series (e.g. uniform convergence or mean convergence).Template:Sfn The definition naturally extends to the Fourier series of a (periodic) distribution s (also called a Fourier-Schwartz series).Template:Sfn In that case the Fourier series converges to s(x) in the distribution sense.Template:Sfn

The process of determining the Fourier coefficients of a given function or signal is called analysis, while forming the associated trigonometric series (or its various approximations) is called synthesis.
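As a concrete, hedged illustration of analysis and synthesis (not taken from the article), the following NumPy sketch approximates the coefficients c_n of a test signal by a Riemann sum and then re-synthesizes a partial sum; the period P, the grid size and the test function are all assumptions made for the example.

import numpy as np

P = 2.0                                        # assumed period
x = np.linspace(0.0, P, 4096, endpoint=False)  # uniform grid over one period

def s(x):
    # Arbitrary P-periodic test signal: two harmonics plus a constant.
    return 0.7 + np.cos(2 * np.pi * x / P) + 0.5 * np.sin(6 * np.pi * x / P)

def c(n):
    """Analysis: Riemann-sum approximation of (1/P) * integral_0^P s(x) e^{-i 2 pi n x / P} dx."""
    return np.mean(s(x) * np.exp(-1j * 2 * np.pi * n * x / P))

def partial_sum(t, N):
    """Synthesis: sum_{n=-N}^{N} c_n e^{+i 2 pi n t / P}."""
    return sum(c(n) * np.exp(1j * 2 * np.pi * n * t / P) for n in range(-N, N + 1))

print(c(0))                                    # ~0.7 (the mean value of s)
print(np.real(partial_sum(0.3, N=5)), s(0.3))  # the two values agree closely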

Synthesis

A Fourier series can be written in several equivalent forms, shown here as the Nth partial sums sN(x) of the Fourier series of s(x):[14]

Fig 1. The top graph shows a non-periodic function s(x) in blue, defined only over the red interval from 0 to P. The function can be analyzed over this interval to produce the Fourier series in the bottom graph. The Fourier series is always a periodic function, even if the original function s(x) is not.
Sine-cosine form

s_N(x) = a_0 + \sum_{n=1}^{N}\left(a_n\cos\!\left(\tfrac{2\pi n}{P}x\right) + b_n\sin\!\left(\tfrac{2\pi n}{P}x\right)\right)


Exponential form

s_N(x) = \sum_{n=-N}^{N} c_n\, e^{i 2\pi\frac{n}{P}x}

The harmonics are indexed by an integer, n, which is also the number of cycles the corresponding sinusoids make in interval P. Therefore, the sinusoids have:

  • a wavelength equal to P/n, in the same units as x.
  • a frequency equal to n/P, in the reciprocal units of x.

These series can represent functions that are just a sum of one or more frequencies in the harmonic spectrum. In the limit N \to \infty, a trigonometric series can also represent the intermediate frequencies and/or non-sinusoidal functions because of the infinite number of terms.

Analysis

The coefficients can be given or assumed, as with a music synthesizer or the time samples of a waveform. In the latter case, the exponential form of the Fourier series synthesizes a discrete-time Fourier transform where the variable x represents frequency instead of time. In general, the coefficients are determined by analysis of a given function s(x) whose domain of definition is an interval of length P.Template:Efn-uaTemplate:Sfn

Fourier coefficients

a_0 = \frac{1}{P}\int_P s(x)\,dx, \qquad a_n = \frac{2}{P}\int_P s(x)\cos\!\left(\tfrac{2\pi n}{P}x\right)dx, \qquad b_n = \frac{2}{P}\int_P s(x)\sin\!\left(\tfrac{2\pi n}{P}x\right)dx

The 2/P scale factor follows from substituting Template:EquationNote into Template:EquationNote and utilizing the orthogonality of the trigonometric system.[15] The equivalence of Template:EquationNote and Template:EquationNote follows from Euler's formula \cos x = \frac{e^{ix} + e^{-ix}}{2},\ \sin x = \frac{e^{ix} - e^{-ix}}{2i}, resulting in:

Exponential form coefficients

c_n = \begin{cases} \frac{1}{2}(a_n - i\,b_n) & \text{if } n > 0,\\ a_0 & \text{if } n = 0,\\ \frac{1}{2}(a_{-n} + i\,b_{-n}) & \text{if } n < 0,\end{cases}

with c0 being the mean value of s on the interval P.Template:Sfn Conversely:

Inverse relationships

a_0 = c_0, \qquad a_n = c_n + c_{-n}\ \text{for } n > 0, \qquad b_n = i\,(c_n - c_{-n})\ \text{for } n > 0
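A small sketch of this conversion back and forth between the sine-cosine and exponential coefficients (an illustration added here, with made-up coefficient values):

import numpy as np

def exp_from_sincos(a_n, b_n):
    """For n > 0: c_n = (a_n - i b_n)/2 and c_{-n} = (a_n + i b_n)/2."""
    return 0.5 * (a_n - 1j * b_n), 0.5 * (a_n + 1j * b_n)

def sincos_from_exp(c_pos, c_neg):
    """Inverse relations: a_n = c_n + c_{-n}, b_n = i (c_n - c_{-n})."""
    return c_pos + c_neg, 1j * (c_pos - c_neg)

a_n, b_n = 1.2, -0.7                    # made-up real coefficients
c_pos, c_neg = exp_from_sincos(a_n, b_n)
print(sincos_from_exp(c_pos, c_neg))    # recovers (1.2+0j) and (-0.7+0j)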

Example

Plot of the sawtooth wave, a periodic continuation of the linear function s(x) = x/\pi on the interval (-\pi, \pi]
Animated plot of the first five successive partial Fourier series

Consider a sawtooth function: s(x) = s(x + 2\pi k) = \frac{x}{\pi}, \quad \text{for } -\pi < x < \pi \text{ and } k \in \mathbb{Z}. In this case, the Fourier coefficients are given by a_0 = 0, \quad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} s(x)\cos(nx)\,dx = 0 \text{ for } n \ge 1, \quad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} s(x)\sin(nx)\,dx = -\frac{2}{\pi n}\cos(n\pi) + \frac{2}{\pi^2 n^2}\sin(n\pi) = \frac{2\,(-1)^{n+1}}{\pi n} \text{ for } n \ge 1. It can be shown that the Fourier series converges to s(x) at every point x where s is differentiable, and therefore: s(x) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(nx) + b_n\sin(nx)\right] = \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(nx) whenever x - \pi is not a multiple of 2\pi. When x = \pi, the Fourier series converges to 0, which is the half-sum of the left- and right-limits of s at x = \pi. This is a particular instance of the Dirichlet theorem for Fourier series.

This example leads to a solution of the Basel problem.
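In outline (a sketch of the standard computation, not spelled out in the text above): applying Parseval's identity \frac{1}{2\pi}\int_{-\pi}^{\pi}|s(x)|^2\,dx = \sum_{n=-\infty}^{\infty}|c_n|^2 to the sawtooth s(x) = x/\pi gives \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{x^2}{\pi^2}\,dx = \frac{1}{3} on the left, while on the right |c_n|^2 + |c_{-n}|^2 = \frac{1}{2}b_n^2 = \frac{2}{\pi^2 n^2} for each n \ge 1. Hence \frac{1}{3} = \frac{2}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}, i.e. \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.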

Amplitude-phase form

If the function s(x) is real-valued then the Fourier series can also be represented asTemplate:Sfn

Amplitude-phase form

s_N(x) = a_0 + \sum_{n=1}^{N} A_n\cos\!\left(\tfrac{2\pi n}{P}x - \varphi_n\right)

where An is the amplitude and φn is the phase shift of the nth harmonic.

The equivalence of Template:EquationNote and Template:EquationNote follows from the trigonometric identity: \cos\!\left(\tfrac{2\pi n}{P}x - \varphi_n\right) = \cos(\varphi_n)\cos\!\left(\tfrac{2\pi n}{P}x\right) + \sin(\varphi_n)\sin\!\left(\tfrac{2\pi n}{P}x\right), which implies[16] a_n = A_n\cos(\varphi_n) \quad\text{and}\quad b_n = A_n\sin(\varphi_n)

Fig 2. The blue curve is the cross-correlation of a square wave and a cosine template, as the phase lag of the template varies over one cycle. The amplitude and phase at the maximum value are the polar coordinates of one harmonic in the Fourier series expansion of the square wave. The corresponding rectangular coordinates can be determined by evaluating the correlation at just two samples separated by 90°.

are the rectangular coordinates of a vector with polar coordinates A_n and \varphi_n given by A_n = \sqrt{a_n^2 + b_n^2} \quad\text{and}\quad \varphi_n = \operatorname{Arg}(c_n) = \operatorname{atan2}(b_n, a_n), where \operatorname{Arg}(c_n) is the argument of c_n.

An example of determining the parameter \varphi_n for one value of n is shown in Figure 2. It is the value of \varphi at the maximum correlation between s(x) and a cosine template, \cos\!\left(\tfrac{2\pi n}{P}x - \varphi\right). The blue graph is the cross-correlation function, also known as a matched filter:

X(\varphi) = \int_P s(x)\cdot\cos\!\left(\tfrac{2\pi n}{P}x - \varphi\right)dx, \quad \varphi\in[0, 2\pi] = \cos(\varphi)\underbrace{\int_P s(x)\cdot\cos\!\left(\tfrac{2\pi n}{P}x\right)dx}_{X(0)} \;+\; \sin(\varphi)\underbrace{\int_P s(x)\cdot\sin\!\left(\tfrac{2\pi n}{P}x\right)dx}_{X(\pi/2)}

Fortunately, it is not necessary to evaluate this entire function, because its derivative is zero at the maximum: X'(\varphi) = -\sin(\varphi)\,X(0) + \cos(\varphi)\,X(\pi/2) = 0 \quad\text{at } \varphi = \varphi_n. Hence \varphi_n = \arctan(b_n/a_n) = \arctan\bigl(X(\pi/2)/X(0)\bigr).
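A brief numerical sketch of this polar conversion (added here as an illustration; the coefficient values, period and sample point are made up):

import numpy as np

def amplitude_phase(a_n, b_n):
    """Polar form of one harmonic: A_n = sqrt(a_n^2 + b_n^2), phi_n = atan2(b_n, a_n)."""
    return np.hypot(a_n, b_n), np.arctan2(b_n, a_n)

a_n, b_n = 3.0, 4.0                            # made-up rectangular coefficients
A_n, phi_n = amplitude_phase(a_n, b_n)
print(A_n, phi_n)                              # 5.0, 0.927...

# The same harmonic written both ways agrees pointwise:
P, n, x = 2.0, 1, 0.37
lhs = a_n * np.cos(2 * np.pi * n * x / P) + b_n * np.sin(2 * np.pi * n * x / P)
rhs = A_n * np.cos(2 * np.pi * n * x / P - phi_n)
print(np.isclose(lhs, rhs))                    # True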

Common notations

The notation cn is inadequate for discussing the Fourier coefficients of several different functions. Therefore, it is customarily replaced by a modified form of the function (s, in this case), such as s^(n) or S[n], and functional notation often replaces subscripting:

s(x) = \sum_{n=-\infty}^{\infty}\hat{s}(n)\, e^{i 2\pi\frac{n}{P}x} \quad\text{(common mathematics notation)} \;=\; \sum_{n=-\infty}^{\infty} S[n]\, e^{i 2\pi\frac{n}{P}x} \quad\text{(common engineering notation)}

In engineering, particularly when the variable x represents time, the coefficient sequence is called a frequency domain representation. Square brackets are often used to emphasize that the domain of this function is a discrete set of frequencies.

Another commonly used frequency domain representation uses the Fourier series coefficients to modulate a Dirac comb:

S(f) \;\triangleq\; \sum_{n=-\infty}^{\infty} S[n]\,\delta\!\left(f - \frac{n}{P}\right),

where f represents a continuous frequency domain. When variable x has units of seconds, f has units of hertz. The "teeth" of the comb are spaced at multiples (i.e. harmonics) of 1/P, which is called the fundamental frequency. s(x) can be recovered from this representation by an inverse Fourier transform:

\mathcal{F}^{-1}\{S(f)\} = \int_{-\infty}^{\infty}\left(\sum_{n=-\infty}^{\infty} S[n]\,\delta\!\left(f - \tfrac{n}{P}\right)\right) e^{i 2\pi f x}\,df = \sum_{n=-\infty}^{\infty} S[n]\int_{-\infty}^{\infty}\delta\!\left(f - \tfrac{n}{P}\right) e^{i 2\pi f x}\,df = \sum_{n=-\infty}^{\infty} S[n]\, e^{i 2\pi\frac{n}{P}x} \;\triangleq\; s(x).

The constructed function S(f) is therefore commonly referred to as a Fourier transform, even though the Fourier integral of a periodic function is not convergent at the harmonic frequencies.Template:Efn-ua

Table of common Fourier series

Some common pairs of periodic functions and their Fourier series coefficients are listed below.

  • s(x) designates a periodic function with period P.
  • a_0, a_n, b_n designate the Fourier series coefficients (sine-cosine form) of the periodic function s(x).

  • Full-wave rectified sine: s(x) = A\left|\sin\!\left(\tfrac{2\pi}{P}x\right)\right| for 0 \le x < P.
    Coefficients: a_0 = \tfrac{2A}{\pi}; a_n = \tfrac{-4A}{\pi(n^2-1)} for n even, 0 for n odd; b_n = 0. [17]Template:Rp
  • Half-wave rectified sine: s(x) = A\sin\!\left(\tfrac{2\pi}{P}x\right) for 0 \le x < P/2, and s(x) = 0 for P/2 \le x < P.
    Coefficients: a_0 = \tfrac{A}{\pi}; a_n = \tfrac{-2A}{\pi(n^2-1)} for n even, 0 for n odd; b_n = \tfrac{A}{2} for n = 1, 0 for n > 1. [17]Template:Rp
  • Rectangular pulse with duty cycle D, 0 \le D \le 1: s(x) = A for 0 \le x < DP, and s(x) = 0 for DP \le x < P.
    Coefficients: a_0 = AD; a_n = \tfrac{A}{n\pi}\sin(2\pi n D); b_n = \tfrac{2A}{n\pi}\sin^2(\pi n D).
  • Rising sawtooth: s(x) = \tfrac{Ax}{P} for 0 \le x < P.
    Coefficients: a_0 = \tfrac{A}{2}; a_n = 0; b_n = \tfrac{-A}{n\pi}. [17]Template:Rp
  • Falling sawtooth: s(x) = A - \tfrac{Ax}{P} for 0 \le x < P.
    Coefficients: a_0 = \tfrac{A}{2}; a_n = 0; b_n = \tfrac{A}{n\pi}. [17]Template:Rp
  • Shifted parabola: s(x) = \tfrac{4A}{P^2}\left(x - \tfrac{P}{2}\right)^2 for 0 \le x < P.
    Coefficients: a_0 = \tfrac{A}{3}; a_n = \tfrac{4A}{\pi^2 n^2}; b_n = 0. [17]Template:Rp
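The following NumPy sketch (added here, not part of the original list) spot-checks the full-wave rectified sine entry by evaluating the sine-cosine analysis integrals with a Riemann sum; the values of A and P are arbitrary.

import numpy as np

A, P = 1.5, 2.0                                   # arbitrary amplitude and period
x = np.linspace(0.0, P, 200000, endpoint=False)
s = A * np.abs(np.sin(2 * np.pi * x / P))         # full-wave rectified sine

a0 = np.mean(s)                                   # (1/P) * integral of s over one period
print(a0, 2 * A / np.pi)                          # both ~0.9549

for n in (2, 3, 4):
    an = 2.0 * np.mean(s * np.cos(2 * np.pi * n * x / P))
    expected = -4 * A / (np.pi * (n**2 - 1)) if n % 2 == 0 else 0.0
    print(n, an, expected)                        # matches: nonzero for even n, ~0 for odd n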

Table of basic transformation rules

Template:See also The list below shows some mathematical operations in the time domain and the corresponding effect on the Fourier series coefficients. Notation:

  • Complex conjugation is denoted by an asterisk.
  • s(x), r(x) designate P-periodic functions or functions defined only for x \in [0, P].
  • S[n],R[n] designate the Fourier series coefficients (exponential form) of s and r.
Each rule below maps a time-domain operation (left) to the corresponding effect on the exponential-form coefficients (right).

  • Linearity: a\cdot s(x) + b\cdot r(x) \longleftrightarrow a\cdot S[n] + b\cdot R[n], for a, b \in \mathbb{C}.
  • Time reversal / frequency reversal: s(-x) \longleftrightarrow S[-n]. [18]Template:Rp
  • Time conjugation: s^*(x) \longleftrightarrow S^*[-n]. [18]Template:Rp
  • Time reversal and conjugation: s^*(-x) \longleftrightarrow S^*[n].
  • Real part in time: \operatorname{Re}(s(x)) \longleftrightarrow \tfrac{1}{2}\left(S[n] + S^*[-n]\right).
  • Imaginary part in time: \operatorname{Im}(s(x)) \longleftrightarrow \tfrac{1}{2i}\left(S[n] - S^*[-n]\right).
  • Real part in frequency: \tfrac{1}{2}\left(s(x) + s^*(-x)\right) \longleftrightarrow \operatorname{Re}(S[n]).
  • Imaginary part in frequency: \tfrac{1}{2i}\left(s(x) - s^*(-x)\right) \longleftrightarrow \operatorname{Im}(S[n]).
  • Shift in time / modulation in frequency: s(x - x_0) \longleftrightarrow S[n]\cdot e^{-i 2\pi\frac{x_0}{P}n}, for x_0 \in \mathbb{R}. [18]Template:Rp
  • Shift in frequency / modulation in time: s(x)\cdot e^{i 2\pi\frac{n_0}{P}x} \longleftrightarrow S[n - n_0], for n_0 \in \mathbb{Z}. [18]Template:Rp
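As an added illustration (not part of the original list), the time-shift rule can be checked numerically; the test signal, the period P, the shift x_0 and the harmonic index n are arbitrary choices.

import numpy as np

P, x0, n = 2.0, 0.31, 3                          # arbitrary period, shift and harmonic index
x = np.linspace(0.0, P, 20000, endpoint=False)

def s(t):
    # Arbitrary smooth P-periodic test signal.
    return np.exp(np.cos(2 * np.pi * t / P)) + 0.2 * np.sin(6 * np.pi * t / P)

def coeff(values, n):
    """Riemann-sum estimate of S[n] = (1/P) * integral_0^P f(x) e^{-i 2 pi n x / P} dx."""
    return np.mean(values * np.exp(-1j * 2 * np.pi * n * x / P))

lhs = coeff(s(x - x0), n)                                     # coefficient of the shifted signal
rhs = coeff(s(x), n) * np.exp(-1j * 2 * np.pi * x0 * n / P)   # S[n] times the predicted phase factor
print(np.isclose(lhs, rhs))                                   # True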

Properties

Symmetry relations

When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:Template:SfnTemplate:Sfn

\begin{array}{rccccccccc}
\text{Time domain:} & s & = & s_{\mathrm{RE}} & + & s_{\mathrm{RO}} & + & i\,s_{\mathrm{IE}} & + & i\,s_{\mathrm{IO}} \\
 & \Big\downarrow\mathcal{F} & & \Big\downarrow\mathcal{F} & & \Big\downarrow\mathcal{F} & & \Big\downarrow\mathcal{F} & & \Big\downarrow\mathcal{F} \\
\text{Frequency domain:} & S & = & S_{\mathrm{RE}} & + & i\,S_{\mathrm{IO}} & + & i\,S_{\mathrm{IE}} & + & S_{\mathrm{RO}}
\end{array}

From this, various relationships are apparent, for example:

  • The transform of a real-valued function (s_{\mathrm{RE}} + s_{\mathrm{RO}}) is the conjugate symmetric function S_{\mathrm{RE}} + i\,S_{\mathrm{IO}}. Conversely, a conjugate symmetric transform implies a real-valued time domain.
  • The transform of an imaginary-valued function (i\,s_{\mathrm{IE}} + i\,s_{\mathrm{IO}}) is the conjugate antisymmetric function S_{\mathrm{RO}} + i\,S_{\mathrm{IE}}, and the converse is true.
  • The transform of a conjugate symmetric function (s_{\mathrm{RE}} + i\,s_{\mathrm{IO}}) is the real-valued function S_{\mathrm{RE}} + S_{\mathrm{RO}}, and the converse is true.
  • The transform of a conjugate antisymmetric function (s_{\mathrm{RO}} + i\,s_{\mathrm{IE}}) is the imaginary-valued function i\,S_{\mathrm{IE}} + i\,S_{\mathrm{IO}}, and the converse is true.

Riemann–Lebesgue lemma

Template:Main If s is integrable, then \lim_{|n|\to\infty} S[n] = 0, \lim_{n\to+\infty} a_n = 0 and \lim_{n\to+\infty} b_n = 0.

Parseval's theorem

Template:Main If s belongs to L^2(P) (periodic over an interval of length P) then: \frac{1}{P}\int_P |s(x)|^2\,dx = \sum_{n=-\infty}^{\infty}\bigl|S[n]\bigr|^2.
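A numerical sanity check of this identity (added here as an illustration) for a square wave, with the integral and the coefficients both approximated by Riemann sums and the coefficient sum truncated at an arbitrary cutoff:

import numpy as np

P = 2.0
x = np.linspace(0.0, P, 20000, endpoint=False)
s = np.where(x < P / 2, 1.0, -1.0)                 # square wave of period P

power_time = np.mean(np.abs(s) ** 2)               # left-hand side: exactly 1.0
S = [np.mean(s * np.exp(-1j * 2 * np.pi * n * x / P)) for n in range(-500, 501)]
power_freq = sum(abs(c) ** 2 for c in S)           # truncated right-hand side
print(power_time, power_freq)                      # 1.0 and a value slightly below 1.0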

Plancherel's theorem

Template:Main If c_0, c_{\pm 1}, c_{\pm 2}, \ldots are coefficients and \sum_{n=-\infty}^{\infty}|c_n|^2 < \infty, then there is a unique function s \in L^2(P) such that S[n] = c_n for every n.

Convolution theorems

Template:Main

Given P-periodic functions, s_P and r_P, with Fourier series coefficients S[n] and R[n], n \in \mathbb{Z} (a numerical check of the first rule appears after this list):

  • The pointwise product: h_P(x) \triangleq s_P(x)\cdot r_P(x) is also P-periodic, and its Fourier series coefficients are given by the discrete convolution of the S and R sequences: H[n] = \{S * R\}[n].
  • The periodic convolution: h_P(x) \triangleq \int_P s_P(\tau)\cdot r_P(x - \tau)\,d\tau is also P-periodic, with Fourier series coefficients: H[n] = P\cdot S[n]\cdot R[n].
  • A doubly infinite sequence \{c_n\}_{n\in\mathbb{Z}} in c_0(\mathbb{Z}) is the sequence of Fourier coefficients of a function in L^1([0, 2\pi]) if and only if it is a convolution of two sequences in \ell^2(\mathbb{Z}). See [19]
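A numerical check of the first rule (added here as an illustration); s and r are arbitrary trigonometric polynomials, so their coefficient sequences have finite support and the convolution can be truncated exactly:

import numpy as np

P = 1.0
x = np.linspace(0.0, P, 8192, endpoint=False)

def coeff(values, n):
    """Riemann-sum estimate of the n-th exponential Fourier coefficient."""
    return np.mean(values * np.exp(-1j * 2 * np.pi * n * x / P))

s = 1.0 + 2.0 * np.cos(2 * np.pi * x / P)          # S[0]=1, S[1]=S[-1]=1
r = np.sin(4 * np.pi * x / P)                      # R[2]=-i/2, R[-2]=+i/2
h = s * r                                          # pointwise product

n = 3
direct = coeff(h, n)
convolved = sum(coeff(s, k) * coeff(r, n - k) for k in range(-5, 6))
print(np.isclose(direct, convolved))               # True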

Derivative property

If s is a 2π-periodic function on \mathbb{R} which is k times differentiable, and its kth derivative is continuous, then s belongs to the function space C^k(\mathbb{T}).

  • If s \in C^k(\mathbb{T}), then the Fourier coefficients of the kth derivative s^{(k)} can be expressed in terms of the Fourier coefficients \hat{s}[n] of s, via the formula \widehat{s^{(k)}}[n] = (in)^k\,\hat{s}[n]. In particular, since for any fixed k \ge 1 the coefficients \widehat{s^{(k)}}[n] tend to zero as n \to \infty, it follows that |n|^k\,\hat{s}[n] tends to zero, i.e., the Fourier coefficients converge to zero faster than the kth power of |n|. A numerical check follows.
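A numerical check of this rule (an added illustration) for the smooth 2π-periodic function s(x) = e^{cos x}, an arbitrary choice, whose derivative is computed by hand:

import numpy as np

x = np.linspace(0.0, 2 * np.pi, 40000, endpoint=False)
s = np.exp(np.cos(x))
ds = -np.sin(x) * np.exp(np.cos(x))                # s'(x)

def coeff(values, n):
    """Riemann-sum estimate of (1/2pi) * integral_0^{2pi} f(x) e^{-i n x} dx."""
    return np.mean(values * np.exp(-1j * n * x))

for n in (1, 2, 5):
    print(n, np.isclose(coeff(ds, n), 1j * n * coeff(s, n)))   # True for each n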

Compact groups

Template:Main

One of the interesting properties of the Fourier transform which we have mentioned is that it carries convolutions to pointwise products. If that is the property which we seek to preserve, one can produce Fourier series on any compact group. Typical examples include those classical groups that are compact. This generalizes the Fourier transform to all spaces of the form L^2(G), where G is a compact group, in such a way that the Fourier transform carries convolutions to pointwise products. The Fourier series exists and converges in similar ways to the [-\pi, \pi] case.

An alternative extension to compact groups is the Peter–Weyl theorem, which proves results about representations of compact groups analogous to those about finite groups.

The atomic orbitals of chemistry are partially described by spherical harmonics, which can be used to produce Fourier series on the sphere.

Riemannian manifolds

Template:Main

If the domain is not a group, then there is no intrinsically defined convolution. However, if X is a compact Riemannian manifold, it has a Laplace–Beltrami operator. The Laplace–Beltrami operator is the differential operator that corresponds to the Laplace operator for the Riemannian manifold X. Then, by analogy, one can consider heat equations on X. Since Fourier arrived at his basis by attempting to solve the heat equation, the natural generalization is to use the eigensolutions of the Laplace–Beltrami operator as a basis. This generalizes Fourier series to spaces of the type L^2(X), where X is a Riemannian manifold. The Fourier series converges in ways similar to the [-\pi, \pi] case. A typical example is to take X to be the sphere with the usual metric, in which case the Fourier basis consists of spherical harmonics.

Locally compact Abelian groups

Template:Main

The generalization to compact groups discussed above does not generalize to noncompact, nonabelian groups. However, there is a straightforward generalization to Locally Compact Abelian (LCA) groups.

This generalizes the Fourier transform to L^1(G) or L^2(G), where G is an LCA group. If G is compact, one also obtains a Fourier series, which converges similarly to the [-\pi, \pi] case, but if G is noncompact, one obtains instead a Fourier integral. This generalization yields the usual Fourier transform when the underlying locally compact Abelian group is \mathbb{R}.

Extensions

Fourier-Stieltjes series

Template:See also Let F(x) be a function of bounded variation defined on the closed interval [0, P]. The Fourier series whose coefficients are given byTemplate:Sfn c_n = \frac{1}{P}\int_0^P e^{-i 2\pi\frac{n}{P}x}\,dF(x), \quad n \in \mathbb{Z}, is called the Fourier-Stieltjes series. The space of functions of bounded variation BV is a subspace of L^1. As any F \in BV defines a Radon measure (i.e. a locally finite Borel measure on \mathbb{R}), this definition can be extended as follows.

Consider the space M of all finite Borel measures on the real line; as such L^1 \subset M.Template:Sfn If there is a measure \mu \in M such that the Fourier-Stieltjes coefficients are given by c_n = \hat{\mu}(n) = \frac{1}{P}\int_0^P e^{-i 2\pi\frac{n}{P}x}\,d\mu(x), \quad n \in \mathbb{Z}, then the series is called a Fourier-Stieltjes series. Likewise, the function \hat{\mu}(n), where \mu \in M, is called a Fourier-Stieltjes transform.Template:Sfn

The question whether or not μ exists for a given sequence of cn forms the basis of the trigonometric moment problem.Template:Sfn

Furthermore, M is a strict subspace of the space of (tempered) distributions \mathcal{D}', i.e., M \subset \mathcal{D}'. If the Fourier coefficients are determined by a distribution F \in \mathcal{D}', then the series is described as a Fourier-Schwartz series. Contrary to the Fourier-Stieltjes series, deciding whether a given series is a Fourier series or a Fourier-Schwartz series is relatively trivial due to the characteristics of its dual space, the Schwartz space \mathcal{S}(\mathbb{R}^n).Template:Sfn

Fourier series on a square

We can also define the Fourier series for functions of two variables x and y in the square [-\pi, \pi]\times[-\pi, \pi]: f(x, y) = \sum_{j,k\in\mathbb{Z}} c_{j,k}\, e^{ijx} e^{iky}, \qquad c_{j,k} = \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} f(x, y)\, e^{-ijx} e^{-iky}\,dx\,dy.

Aside from being useful for solving partial differential equations such as the heat equation, one notable application of Fourier series on the square is in image compression. In particular, the JPEG image compression standard uses the two-dimensional discrete cosine transform, a discrete form of the Fourier cosine transform, which uses only cosine as the basis function.

For two-dimensional arrays with a staggered appearance, half of the Fourier series coefficients disappear, due to additional symmetry.[20]
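A minimal sketch (added here) of the two-dimensional coefficients defined above, approximated on a grid; the test function is an arbitrary trigonometric polynomial, so the expected values are known exactly:

import numpy as np

N = 256
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 0.5 + np.cos(2 * X) * np.sin(3 * Y)            # arbitrary test function

def c(j, k):
    """Riemann-sum estimate of c_{j,k} = (1/4 pi^2) * double integral of f e^{-i j x} e^{-i k y}."""
    return np.mean(f * np.exp(-1j * j * X) * np.exp(-1j * k * Y))

print(c(0, 0))    # ~0.5     (the constant term)
print(c(2, 3))    # ~-0.25j  (the coefficient of e^{i 2x} e^{i 3y} contributed by cos(2x) sin(3y))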

Fourier series of a Bravais-lattice-periodic function

A three-dimensional Bravais lattice is defined as the set of vectors of the form \mathbf{R} = n_1\mathbf{a}_1 + n_2\mathbf{a}_2 + n_3\mathbf{a}_3, where n_i are integers and \mathbf{a}_i are three linearly independent vectors. Assuming we have some function, f(\mathbf{r}), such that it obeys the condition of periodicity for any Bravais lattice vector \mathbf{R}, f(\mathbf{r}) = f(\mathbf{R} + \mathbf{r}), we could make a Fourier series of it. This kind of function can be, for example, the effective potential that one electron "feels" inside a periodic crystal. It is useful to make the Fourier series of the potential when applying Bloch's theorem. First, we may write any arbitrary position vector \mathbf{r} in the coordinate system of the lattice: \mathbf{r} = x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3}, where a_i \triangleq |\mathbf{a}_i|, meaning that a_i is defined to be the magnitude of \mathbf{a}_i, so \hat{\mathbf{a}}_i = \frac{\mathbf{a}_i}{a_i} is the unit vector directed along \mathbf{a}_i.

Thus we can define a new function, g(x_1, x_2, x_3) \triangleq f(\mathbf{r}) = f\!\left(x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3}\right).

This new function, g(x_1, x_2, x_3), is now a function of three variables, each of which has periodicity a_1, a_2, and a_3 respectively: g(x_1, x_2, x_3) = g(x_1 + a_1, x_2, x_3) = g(x_1, x_2 + a_2, x_3) = g(x_1, x_2, x_3 + a_3).

This enables us to build up a set of Fourier coefficients, each being indexed by three independent integers m_1, m_2, m_3. In what follows, we use function notation to denote these coefficients, where previously we used subscripts. If we write a series for g on the interval [0, a_1] for x_1, we can define the following: h^{\mathrm{one}}(m_1, x_2, x_3) \triangleq \frac{1}{a_1}\int_0^{a_1} g(x_1, x_2, x_3)\, e^{-i 2\pi\frac{m_1}{a_1}x_1}\,dx_1

And then we can write: g(x_1, x_2, x_3) = \sum_{m_1=-\infty}^{\infty} h^{\mathrm{one}}(m_1, x_2, x_3)\, e^{i 2\pi\frac{m_1}{a_1}x_1}

Further defining: h^{\mathrm{two}}(m_1, m_2, x_3) \triangleq \frac{1}{a_2}\int_0^{a_2} h^{\mathrm{one}}(m_1, x_2, x_3)\, e^{-i 2\pi\frac{m_2}{a_2}x_2}\,dx_2 = \frac{1}{a_2}\int_0^{a_2} dx_2\,\frac{1}{a_1}\int_0^{a_1} dx_1\, g(x_1, x_2, x_3)\, e^{-i 2\pi\left(\frac{m_1}{a_1}x_1 + \frac{m_2}{a_2}x_2\right)}

We can write g once again as: g(x_1, x_2, x_3) = \sum_{m_1=-\infty}^{\infty}\sum_{m_2=-\infty}^{\infty} h^{\mathrm{two}}(m_1, m_2, x_3)\, e^{i 2\pi\frac{m_1}{a_1}x_1}\, e^{i 2\pi\frac{m_2}{a_2}x_2}

Finally, applying the same for the third coordinate, we define: h^{\mathrm{three}}(m_1, m_2, m_3) \triangleq \frac{1}{a_3}\int_0^{a_3} h^{\mathrm{two}}(m_1, m_2, x_3)\, e^{-i 2\pi\frac{m_3}{a_3}x_3}\,dx_3 = \frac{1}{a_3}\int_0^{a_3} dx_3\,\frac{1}{a_2}\int_0^{a_2} dx_2\,\frac{1}{a_1}\int_0^{a_1} dx_1\, g(x_1, x_2, x_3)\, e^{-i 2\pi\left(\frac{m_1}{a_1}x_1 + \frac{m_2}{a_2}x_2 + \frac{m_3}{a_3}x_3\right)}

We write g as: g(x_1, x_2, x_3) = \sum_{m_1=-\infty}^{\infty}\sum_{m_2=-\infty}^{\infty}\sum_{m_3=-\infty}^{\infty} h^{\mathrm{three}}(m_1, m_2, m_3)\, e^{i 2\pi\frac{m_1}{a_1}x_1}\, e^{i 2\pi\frac{m_2}{a_2}x_2}\, e^{i 2\pi\frac{m_3}{a_3}x_3}

Re-arranging: g(x_1, x_2, x_3) = \sum_{m_1, m_2, m_3 \in \mathbb{Z}} h^{\mathrm{three}}(m_1, m_2, m_3)\, e^{i 2\pi\left(\frac{m_1}{a_1}x_1 + \frac{m_2}{a_2}x_2 + \frac{m_3}{a_3}x_3\right)}.

Now, every reciprocal lattice vector can be written (but this does not mean it is the only way of writing it) as \mathbf{G} = m_1\mathbf{g}_1 + m_2\mathbf{g}_2 + m_3\mathbf{g}_3, where m_i are integers and \mathbf{g}_i are reciprocal lattice vectors satisfying \mathbf{g}_i\cdot\mathbf{a}_j = 2\pi\delta_{ij} (\delta_{ij} = 1 for i = j, and \delta_{ij} = 0 for i \ne j). Then for any arbitrary reciprocal lattice vector \mathbf{G} and arbitrary position vector \mathbf{r} in the original Bravais lattice space, their scalar product is: \mathbf{G}\cdot\mathbf{r} = \left(m_1\mathbf{g}_1 + m_2\mathbf{g}_2 + m_3\mathbf{g}_3\right)\cdot\left(x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3}\right) = 2\pi\left(x_1\frac{m_1}{a_1} + x_2\frac{m_2}{a_2} + x_3\frac{m_3}{a_3}\right).

So it is clear that in our expansion of g(x_1, x_2, x_3) = f(\mathbf{r}), the sum is actually over reciprocal lattice vectors: f(\mathbf{r}) = \sum_{\mathbf{G}} h(\mathbf{G})\, e^{i\mathbf{G}\cdot\mathbf{r}},

where h(\mathbf{G}) = \frac{1}{a_3}\int_0^{a_3} dx_3\,\frac{1}{a_2}\int_0^{a_2} dx_2\,\frac{1}{a_1}\int_0^{a_1} dx_1\, f\!\left(x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3}\right) e^{-i\mathbf{G}\cdot\mathbf{r}}.

Assuming \mathbf{r} = (x, y, z) = x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3}, we can solve this system of three linear equations for x, y, and z in terms of x_1, x_2 and x_3 in order to calculate the volume element in the original rectangular coordinate system. Once we have x, y, and z in terms of x_1, x_2 and x_3, we can calculate the Jacobian determinant: \left|\begin{matrix}\frac{\partial x_1}{\partial x} & \frac{\partial x_1}{\partial y} & \frac{\partial x_1}{\partial z}\\ \frac{\partial x_2}{\partial x} & \frac{\partial x_2}{\partial y} & \frac{\partial x_2}{\partial z}\\ \frac{\partial x_3}{\partial x} & \frac{\partial x_3}{\partial y} & \frac{\partial x_3}{\partial z}\end{matrix}\right|, which after some calculation and applying some non-trivial cross-product identities can be shown to be equal to: \frac{a_1 a_2 a_3}{\mathbf{a}_1\cdot(\mathbf{a}_2\times\mathbf{a}_3)}

(it may be advantageous, for the sake of simplifying calculations, to work in such a rectangular coordinate system in which it just so happens that \mathbf{a}_1 is parallel to the x axis, \mathbf{a}_2 lies in the xy-plane, and \mathbf{a}_3 has components along all three axes). The denominator is exactly the volume of the primitive unit cell which is enclosed by the three primitive vectors \mathbf{a}_1, \mathbf{a}_2 and \mathbf{a}_3. In particular, we now know that dx_1\,dx_2\,dx_3 = \frac{a_1 a_2 a_3}{\mathbf{a}_1\cdot(\mathbf{a}_2\times\mathbf{a}_3)}\,dx\,dy\,dz.

We can now write h(\mathbf{G}) as an integral with the traditional coordinate system over the volume of the primitive cell, instead of with the x_1, x_2 and x_3 variables: h(\mathbf{G}) = \frac{1}{\mathbf{a}_1\cdot(\mathbf{a}_2\times\mathbf{a}_3)}\int_C f(\mathbf{r})\, e^{-i\mathbf{G}\cdot\mathbf{r}}\,d\mathbf{r}, writing d\mathbf{r} for the volume element dx\,dy\,dz, and where C is the primitive unit cell, so that \mathbf{a}_1\cdot(\mathbf{a}_2\times\mathbf{a}_3) is the volume of the primitive unit cell.
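As a small added illustration (not from the original text), the reciprocal vectors g_i satisfying g_i · a_j = 2π δ_ij can be built directly from the cell volume a_1 · (a_2 × a_3) that appears in h(G); the primitive vectors below are arbitrary:

import numpy as np

a1 = np.array([1.0, 0.0, 0.0])                    # arbitrary primitive vectors
a2 = np.array([0.5, np.sqrt(3.0) / 2.0, 0.0])
a3 = np.array([0.0, 0.0, 2.0])

volume = np.dot(a1, np.cross(a2, a3))             # primitive-cell volume a1 . (a2 x a3)
g1 = 2 * np.pi * np.cross(a2, a3) / volume
g2 = 2 * np.pi * np.cross(a3, a1) / volume
g3 = 2 * np.pi * np.cross(a1, a2) / volume

G = np.array([g1, g2, g3])
A = np.array([a1, a2, a3])
print(np.allclose(G @ A.T, 2 * np.pi * np.eye(3)))   # g_i . a_j = 2 pi delta_ij -> True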

Hilbert space

Template:See also

As the trigonometric series is a special class of orthogonal system, Fourier series can naturally be defined in the context of Hilbert spaces. For example, the space of square-integrable functions on [-\pi, \pi] forms the Hilbert space L^2([-\pi, \pi]). Its inner product, defined for any two elements f and g, is given by: \langle f, g\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\overline{g(x)}\,dx. This space is equipped with the orthonormal basis \{e_n = e^{inx} : n \in \mathbb{Z}\}. Then the (generalized) Fourier series expansion of f \in L^2([-\pi, \pi]), given by f(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx}, can be written asTemplate:Sfn f = \sum_{n=-\infty}^{\infty}\langle f, e_n\rangle\, e_n.

Sines and cosines form an orthogonal set, as illustrated above. The integral of sine, cosine and their product is zero (green and red areas are equal, and cancel out) when m, n or the functions are different, and equals π only if m and n are equal and the function used is the same. They would form an orthonormal set if the integral equaled 1 (that is, each function would need to be scaled by 1/\sqrt{\pi}).

The sine-cosine form follows in a similar fashion. Indeed, the sines and cosines form an orthogonal set: \int_{-\pi}^{\pi}\cos(mx)\cos(nx)\,dx = \frac{1}{2}\int_{-\pi}^{\pi}\bigl[\cos((n - m)x) + \cos((n + m)x)\bigr]\,dx = \pi\delta_{mn}, \quad m, n \ge 1, \qquad \int_{-\pi}^{\pi}\sin(mx)\sin(nx)\,dx = \frac{1}{2}\int_{-\pi}^{\pi}\bigl[\cos((n - m)x) - \cos((n + m)x)\bigr]\,dx = \pi\delta_{mn}, \quad m, n \ge 1 (where \delta_{mn} is the Kronecker delta), and \int_{-\pi}^{\pi}\cos(mx)\sin(nx)\,dx = \frac{1}{2}\int_{-\pi}^{\pi}\bigl[\sin((n + m)x) + \sin((n - m)x)\bigr]\,dx = 0. Hence, the set \left\{\frac{1}{\sqrt{2}}, \frac{\cos x}{\sqrt{2}}, \frac{\sin x}{\sqrt{2}}, \ldots, \frac{\cos(nx)}{\sqrt{2}}, \frac{\sin(nx)}{\sqrt{2}}, \ldots\right\} also forms an orthonormal basis for L^2([-\pi, \pi]). The density of their span is a consequence of the Stone–Weierstrass theorem, but follows also from the properties of classical kernels like the Fejér kernel.
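A quick numerical confirmation of these orthogonality relations (added here; the indices and grid size are arbitrary):

import numpy as np

x = np.linspace(-np.pi, np.pi, 100000, endpoint=False)
dx = 2 * np.pi / len(x)

def integral(values):
    return np.sum(values) * dx            # Riemann sum over [-pi, pi)

print(integral(np.cos(3 * x) * np.cos(5 * x)))   # ~0    (m != n)
print(integral(np.cos(4 * x) * np.cos(4 * x)))   # ~pi   (m == n)
print(integral(np.sin(2 * x) * np.sin(7 * x)))   # ~0
print(integral(np.cos(3 * x) * np.sin(3 * x)))   # ~0    (cosine times sine always integrates to zero)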

Fourier theorem proving convergence of Fourier series

Template:Main

In engineering, the Fourier series is generally assumed to converge except at jump discontinuities since the functions encountered in engineering are usually better-behaved than those in other disciplines. In particular, if s is continuous and the derivative of s(x) (which may not exist everywhere) is square integrable, then the Fourier series of s converges absolutely and uniformly to s(x).[21] If a function is square-integrable on the interval [x0,x0+P], then the Fourier series converges to the function almost everywhere. It is possible to define Fourier coefficients for more general functions or distributions, in which case pointwise convergence often fails, and convergence in norm or weak convergence is usually studied.

The theorems proving that a Fourier series is a valid representation of any periodic function (that satisfies the Dirichlet conditions), and informal variations of them that don't specify the convergence conditions, are sometimes referred to generically as Fourier's theorem or the Fourier theorem.[22][23][24][25]

Least squares property

The earlier Template:EquationNote:

s_N(x) = \sum_{n=-N}^{N} S[n]\, e^{i 2\pi\frac{n}{P}x},

is a trigonometric polynomial of degree N that can be generally expressed as:

p_N(x) = \sum_{n=-N}^{N} p[n]\, e^{i 2\pi\frac{n}{P}x}.

Parseval's theorem implies that:

Template:Math theorem

Convergence theorems

Template:See also Because of the least squares property, and because of the completeness of the Fourier basis, we obtain an elementary convergence result.

Template:Math theorem If s is continuously differentiable, then (in)\,S[n] is the nth Fourier coefficient of the first derivative s'. Since s' is continuous, and therefore bounded, it is square-integrable and its Fourier coefficients are square-summable. Then, by the Cauchy–Schwarz inequality,

\left(\sum_{n \ne 0} |S[n]|\right)^2 \le \sum_{n \ne 0}\frac{1}{n^2}\cdot\sum_{n \ne 0} |n\,S[n]|^2.

This means that the Fourier series of s is absolutely summable. The sum of this series is a continuous function, equal to s, since the Fourier series converges in L^1 to s:

Template:Math theorem

This result can be proven easily if s is further assumed to be C^2, since in that case n^2 S[n] tends to zero as n \to \infty. More generally, the Fourier series is absolutely summable, thus converges uniformly to s, provided that s satisfies a Hölder condition of order \alpha > 1/2. In the absolutely summable case, the inequality:

\sup_x |s(x) - s_N(x)| \le \sum_{|n| > N} |S[n]|

proves uniform convergence.

Many other results concerning the convergence of Fourier series are known, ranging from the moderately simple result that the series converges at x if s is differentiable at x, to more sophisticated results such as Carleson's theorem which states that the Fourier series of an L2 function converges almost everywhere.

Divergence

Since Fourier series have such good convergence properties, many are often surprised by some of the negative results. For example, the Fourier series of a continuous T-periodic function need not converge pointwise. The uniform boundedness principle yields a simple non-constructive proof of this fact.

In 1922, Andrey Kolmogorov published an article titled Une série de Fourier-Lebesgue divergente presque partout in which he gave an example of a Lebesgue-integrable function whose Fourier series diverges almost everywhere. He later constructed an example of an integrable function whose Fourier series diverges everywhere.Template:Sfn

It is possible to give explicit examples of a continuous function whose Fourier series diverges at 0: for instance, the even and 2π-periodic function f defined for all x in [0,π] by[26]

f(x) = \sum_{n=1}^{\infty}\frac{1}{n^2}\sin\!\left[\left(2^{n^3} + 1\right)\frac{x}{2}\right].

Because the function is even the Fourier series contains only cosines:

\sum_{m=0}^{\infty} C_m\cos(mx).

The coefficients are:

C_m = \frac{1}{\pi}\sum_{n=1}^{\infty}\frac{1}{n^2}\left\{\frac{2}{2^{n^3} + 1 - 2m} + \frac{2}{2^{n^3} + 1 + 2m}\right\}

As m increases, the coefficients will be positive and increasing until they reach a value of about C_m \approx 2/(n^2\pi) at m = 2^{n^3}/2 for some n, and then become negative (starting with a value around -2/(n^2\pi)) and getting smaller, before starting a new such wave. At x = 0 the Fourier series is simply the running sum of C_m, and this builds up to around

\frac{1}{n^2\pi}\sum_{k=0}^{2^{n^3}/2}\frac{2}{2k+1} \;\sim\; \frac{1}{n^2\pi}\ln 2^{n^3} = \frac{n}{\pi}\ln 2

in the nth wave before returning to around zero, showing that the series does not converge at zero but reaches higher and higher peaks. Note that though the function is continuous, it is not differentiable.

See also

Template:Cols

Template:Colend

Notes

Template:Notelist-ua

References

Template:Reflist

Bibliography

Template:Refbegin

Template:Refend

Template:PlanetMath attribution Template:Series (mathematics) Template:Authority control

  1. Template:Dictionary.com
  2. Template:Cite book
  3. Cite error: Invalid <ref> tag; no text was provided for refs named iit.edu
  4. Template:Cite book
  5. Template:Cite journal
  6. Template:Cite journal
  7. Template:Cite web
  8. Template:Citation
  9. Template:Cite book
  10. Template:Cite book
  11. Wilhelm Flügge, Stresses in Shells (1973) 2nd edition. Template:Isbn. Originally published in German as Statik und Dynamik der Schalen (1937).
  12. Template:Cite book Template:Pb Whilst the cited article does list the author as Fourier, a footnote on page 215 indicates that the article was actually written by Poisson and that it is, "for reasons of historical interest", presented as though it were Fourier's original memoire.
  13. Template:Cite book
  14. Template:Citation
  15. Template:Cite web
  16. Template:Cite web
  17. 17.0 17.1 17.2 17.3 17.4 Template:Cite book
  18. 18.0 18.1 18.2 18.3 Template:Cite book
  19. Template:Cite web
  20. Vanishing of Half the Fourier Coefficients in Staggered Arrays
  21. Template:Cite book
  22. Template:Cite book
  23. Template:Cite book
  24. Template:Cite book
  25. Template:Cite book
  26. Template:Cite book