Gaussian quadrature



Comparison between 2-point Gaussian and trapezoidal quadrature. The blue curve shows the function whose definite integral on the interval [-1, 1] is to be calculated (the integrand). The trapezoidal rule approximates the function with a linear function that coincides with the integrand at the endpoints of the interval (the orange dashed line); the approximation is poor, so the error is large. To obtain a more accurate result, the interval must be partitioned into many subintervals and the composite trapezoidal rule used, which requires many more calculations.
Gaussian quadrature instead chooses more suitable points, so even a linear function approximates the integrand better (the black dashed line). As the integrand is a third-degree polynomial, the 2-point Gaussian quadrature rule even returns an exact result.

In numerical analysis, an $n$-point Gaussian quadrature rule, named after Carl Friedrich Gauss,[1] is a quadrature rule constructed to yield an exact result for polynomials of degree $2n - 1$ or less by a suitable choice of the nodes $x_i$ and weights $w_i$ for $i = 1, \ldots, n$.

The modern formulation using orthogonal polynomials was developed by Carl Gustav Jacobi in 1826.[2] The most common domain of integration for such a rule is taken as $[-1, 1]$, so the rule is stated as
$$\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i f(x_i),$$

which is exact for polynomials of degree $2n - 1$ or less. This exact rule is known as the Gauss–Legendre quadrature rule. The quadrature rule will only be an accurate approximation to the integral above if $f(x)$ is well-approximated by a polynomial of degree $2n - 1$ or less on $[-1, 1]$.

The Gauss–Legendre quadrature rule is not typically used for integrable functions with endpoint singularities. Instead, if the integrand can be written as

$$f(x) = (1 - x)^\alpha (1 + x)^\beta g(x), \qquad \alpha, \beta > -1,$$

where $g(x)$ is well-approximated by a low-degree polynomial, then alternative nodes $x_i$ and weights $w_i$ will usually give more accurate quadrature rules. These are known as Gauss–Jacobi quadrature rules, i.e.,

$$\int_{-1}^{1} f(x)\,dx = \int_{-1}^{1} (1 - x)^\alpha (1 + x)^\beta g(x)\,dx \approx \sum_{i=1}^{n} w_i g(x_i).$$

Common weights include $\frac{1}{\sqrt{1 - x^2}}$ (Chebyshev–Gauss) and $\sqrt{1 - x^2}$. One may also want to integrate over semi-infinite intervals (Gauss–Laguerre quadrature) and infinite intervals (Gauss–Hermite quadrature).

It can be shown (see Press et al., or Stoer and Bulirsch) that the quadrature nodes Template:Mvar are the roots of a polynomial belonging to a class of orthogonal polynomials (the class orthogonal with respect to a weighted inner-product). This is a key observation for computing Gauss quadrature nodes and weights.

Gauss–Legendre quadrature


Graphs of the first few Legendre polynomials.

For the simplest integration problem stated above, i.e., $f(x)$ is well-approximated by polynomials on $[-1, 1]$, the associated orthogonal polynomials are Legendre polynomials, denoted by $P_n(x)$. With the $n$-th polynomial normalized to give $P_n(1) = 1$, the $i$-th Gauss node, $x_i$, is the $i$-th root of $P_n$ and the weights are given by the formula[3]
$$w_i = \frac{2}{\left(1 - x_i^2\right)\left[P_n'(x_i)\right]^2}.$$

Some low-order quadrature rules are tabulated below (over the interval $[-1, 1]$; see the section below for other intervals).

Number of points, n | Points, x_i | Weights, w_i
1 | 0 | 2
2 | ±1/√3 (±0.57735...) | 1
3 | 0 | 8/9 (0.888889...)
  | ±√(3/5) (±0.774597...) | 5/9 (0.555556...)
4 | ±√(3/7 - (2/7)√(6/5)) (±0.339981...) | (18 + √30)/36 (0.652145...)
  | ±√(3/7 + (2/7)√(6/5)) (±0.861136...) | (18 - √30)/36 (0.347855...)
5 | 0 | 128/225 (0.568889...)
  | ±(1/3)√(5 - 2√(10/7)) (±0.538469...) | (322 + 13√70)/900 (0.478629...)
  | ±(1/3)√(5 + 2√(10/7)) (±0.90618...) | (322 - 13√70)/900 (0.236927...)
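
These values can be reproduced numerically. The sketch below assumes NumPy is available and uses its Gauss–Legendre routine numpy.polynomial.legendre.leggauss; it is a convenience check, not part of the derivation of the table.

```python
import numpy as np

# Print the n-point Gauss-Legendre nodes and weights for n = 1..5;
# the output should match the table above to the digits shown.
for n in (1, 2, 3, 4, 5):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    print(n, np.round(nodes, 6), np.round(weights, 6))
```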

Change of interval

An integral over $[a, b]$ must be changed into an integral over $[-1, 1]$ before applying the Gaussian quadrature rule. This change of interval can be done in the following way:
$$\int_a^b f(x)\,dx = \int_{-1}^{1} f\!\left(\frac{b - a}{2}\xi + \frac{a + b}{2}\right) \frac{dx}{d\xi}\,d\xi$$

with
$$\frac{dx}{d\xi} = \frac{b - a}{2}.$$

Applying the $n$-point Gaussian quadrature rule $(\xi_i, w_i)$ then results in the following approximation:
$$\int_a^b f(x)\,dx \approx \frac{b - a}{2} \sum_{i=1}^{n} w_i\, f\!\left(\frac{b - a}{2}\xi_i + \frac{a + b}{2}\right).$$
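
As an illustration, here is a minimal sketch of this change of interval (assuming NumPy; the helper name gauss_legendre_ab is ours, not a library function):

```python
import numpy as np

def gauss_legendre_ab(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point
    Gauss-Legendre rule, using the change of interval above."""
    xi, w = np.polynomial.legendre.leggauss(n)     # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)         # map nodes into [a, b]
    return 0.5 * (b - a) * np.dot(w, f(x))

# Example: the integral of x^3 over [0, 2] is 4, and the 2-point rule is
# exact for cubics, so the printed value should be 4 up to round-off.
print(gauss_legendre_ab(lambda x: x**3, 0.0, 2.0, 2))
```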

Example of two-point Gauss quadrature rule

Use the two-point Gauss quadrature rule to approximate the distance in meters covered by a rocket from $t = 8\,\mathrm{s}$ to $t = 30\,\mathrm{s}$, as given by
$$s = \int_{8}^{30} \left(2000 \ln\!\left[\frac{140000}{140000 - 2100t}\right] - 9.8t\right) dt.$$

Change the limits so that one can use the weights and abscissae tabulated above. Also, find the absolute relative true error. The true value is given as 11061.34 m.

Solution

First, changing the limits of integration from $[8, 30]$ to $[-1, 1]$ gives

$$\int_{8}^{30} f(t)\,dt = \frac{30 - 8}{2} \int_{-1}^{1} f\!\left(\frac{30 - 8}{2}x + \frac{30 + 8}{2}\right) dx = 11 \int_{-1}^{1} f(11x + 19)\,dx.$$

Next, get the weighting factors and function argument values from the table above for the two-point rule:

  • c_1 = 1.000000000
  • x_1 = -0.577350269
  • c_2 = 1.000000000
  • x_2 = 0.577350269

Now we can use the Gauss quadrature formula
$$11 \int_{-1}^{1} f(11x + 19)\,dx \approx 11\left[c_1 f(11x_1 + 19) + c_2 f(11x_2 + 19)\right]$$
$$= 11\left[f(11(-0.5773503) + 19) + f(11(0.5773503) + 19)\right]$$
$$= 11\left[f(12.64915) + f(25.35085)\right]$$
$$= 11\left[(296.8317) + (708.4811)\right]$$
$$= 11058.44$$
since
$$f(12.64915) = 2000 \ln\!\left[\frac{140000}{140000 - 2100(12.64915)}\right] - 9.8(12.64915) = 296.8317,$$
$$f(25.35085) = 2000 \ln\!\left[\frac{140000}{140000 - 2100(25.35085)}\right] - 9.8(25.35085) = 708.4811.$$

Given that the true value is 11061.34 m, the absolute relative true error $|\varepsilon_t|$ is
$$|\varepsilon_t| = \left|\frac{11061.34 - 11058.44}{11061.34}\right| \times 100\% = 0.0262\%.$$
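
The same computation can be checked numerically; the sketch below assumes NumPy and simply re-evaluates the two-point rule above.

```python
import numpy as np

def f(t):
    # integrand of the rocket problem above
    return 2000.0 * np.log(140000.0 / (140000.0 - 2100.0 * t)) - 9.8 * t

xi, w = np.polynomial.legendre.leggauss(2)       # nodes +-1/sqrt(3), weights 1
approx = 11.0 * np.dot(w, f(11.0 * xi + 19.0))   # factor 11 from the change of interval
true_value = 11061.34
rel_err = abs((true_value - approx) / true_value) * 100.0
print(round(approx, 2), round(rel_err, 4))       # about 11058.44 and 0.0262 (percent)
```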


Other forms

The integration problem can be expressed in a slightly more general way by introducing a positive weight function $\omega$ into the integrand, and allowing an interval other than $[-1, 1]$. That is, the problem is to calculate
$$\int_a^b \omega(x)\,f(x)\,dx$$
for some choices of $a$, $b$, and $\omega$. For $a = -1$, $b = 1$, and $\omega(x) = 1$, the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for Abramowitz and Stegun (A & S).

Interval | ω(x) | Orthogonal polynomials | A & S | For more information, see ...
[-1, 1] | 1 | Legendre polynomials | 25.4.29 | Gauss–Legendre quadrature (above)
(-1, 1) | (1 - x)^α (1 + x)^β,  α, β > -1 | Jacobi polynomials | 25.4.33 (β = 0) | Gauss–Jacobi quadrature
(-1, 1) | 1/√(1 - x²) | Chebyshev polynomials (first kind) | 25.4.38 | Chebyshev–Gauss quadrature
[-1, 1] | √(1 - x²) | Chebyshev polynomials (second kind) | 25.4.40 | Chebyshev–Gauss quadrature
[0, ∞) | exp(-x) | Laguerre polynomials | 25.4.45 | Gauss–Laguerre quadrature
[0, ∞) | x^α exp(-x),  α > -1 | Generalized Laguerre polynomials | | Gauss–Laguerre quadrature
(-∞, ∞) | exp(-x²) | Hermite polynomials | 25.4.46 | Gauss–Hermite quadrature
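
For the semi-infinite and infinite intervals in the table, NumPy ships ready-made rules; the sketch below (assuming NumPy) applies them to two integrals whose exact values are known in closed form.

```python
import numpy as np

xh, wh = np.polynomial.hermite.hermgauss(10)   # weight exp(-x^2) on (-inf, inf)
xl, wl = np.polynomial.laguerre.laggauss(10)   # weight exp(-x) on [0, inf)

# integral of exp(-x^2) cos(x) over the real line equals sqrt(pi) exp(-1/4)
print(np.dot(wh, np.cos(xh)), np.sqrt(np.pi) * np.exp(-0.25))

# integral of exp(-x) x^3 over [0, inf) equals 3! = 6
print(np.dot(wl, xl**3), 6.0)
```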

Fundamental theorem

Let $p_n$ be a nontrivial polynomial of degree $n$ such that
$$\int_a^b \omega(x)\,x^k\,p_n(x)\,dx = 0, \qquad \text{for all } k = 0, 1, \ldots, n - 1.$$

Note that this will be true for all the orthogonal polynomials above, because each $p_n$ is constructed to be orthogonal to the other polynomials $p_j$ for $j < n$, and $x^k$ is in the span of that set.

If we pick the $n$ nodes $x_i$ to be the zeros of $p_n$, then there exist $n$ weights $w_i$ which make the Gaussian-quadrature computed integral exact for all polynomials $h(x)$ of degree $2n - 1$ or less. Furthermore, all these nodes $x_i$ will lie in the open interval $(a, b)$.[4]

To prove the first part of this claim, let $h(x)$ be any polynomial of degree $2n - 1$ or less. Divide it by the orthogonal polynomial $p_n$ to get
$$h(x) = p_n(x)\,q(x) + r(x),$$
where $q(x)$ is the quotient, of degree $n - 1$ or less (because the sum of its degree and that of the divisor $p_n$ must equal that of the dividend), and $r(x)$ is the remainder, also of degree $n - 1$ or less (because the degree of the remainder is always less than that of the divisor). Since $p_n$ is by assumption orthogonal to all monomials of degree less than $n$, it must be orthogonal to the quotient $q(x)$. Therefore
$$\int_a^b \omega(x)\,h(x)\,dx = \int_a^b \omega(x)\left(p_n(x)\,q(x) + r(x)\right)dx = \int_a^b \omega(x)\,r(x)\,dx.$$

Since the remainder $r(x)$ is of degree $n - 1$ or less, we can interpolate it exactly using $n$ interpolation points with Lagrange polynomials $l_i(x)$, where
$$l_i(x) = \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}.$$

We have
$$r(x) = \sum_{i=1}^{n} l_i(x)\,r(x_i).$$

Then its integral will equal
$$\int_a^b \omega(x)\,r(x)\,dx = \int_a^b \omega(x) \sum_{i=1}^{n} l_i(x)\,r(x_i)\,dx = \sum_{i=1}^{n} r(x_i) \int_a^b \omega(x)\,l_i(x)\,dx = \sum_{i=1}^{n} r(x_i)\,w_i,$$

where $w_i$, the weight associated with the node $x_i$, is defined to equal the weighted integral of $l_i(x)$ (see below for other formulas for the weights). But all the $x_i$ are roots of $p_n$, so the division formula above tells us that
$$h(x_i) = p_n(x_i)\,q(x_i) + r(x_i) = r(x_i),$$
for all $i$. Thus we finally have
$$\int_a^b \omega(x)\,h(x)\,dx = \int_a^b \omega(x)\,r(x)\,dx = \sum_{i=1}^{n} w_i\,r(x_i) = \sum_{i=1}^{n} w_i\,h(x_i).$$

This proves that for any polynomial $h(x)$ of degree $2n - 1$ or less, its integral is given exactly by the Gaussian quadrature sum.
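
The theorem can be checked numerically for the Legendre case (ω = 1 on [-1, 1]); the sketch below, assuming NumPy, integrates a random polynomial of degree 2n - 1 exactly with an n-point rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x, w = np.polynomial.legendre.leggauss(n)

coeffs = rng.standard_normal(2 * n)          # random polynomial of degree 2n - 1
h = np.polynomial.Polynomial(coeffs)
H = h.integ()                                # antiderivative for the exact value

exact = H(1.0) - H(-1.0)
quadrature = np.dot(w, h(x))
print(np.isclose(exact, quadrature))         # expected: True
```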

To prove the second part of the claim, consider the factored form of the polynomial $p_n(x)$. Any complex conjugate roots will yield a quadratic factor that is either strictly positive or strictly negative over the entire real line. Any factors for roots outside the interval from $a$ to $b$ will not change sign over that interval. Finally, for factors corresponding to roots $x_i$ inside the interval from $a$ to $b$ that are of odd multiplicity, multiply $p_n(x)$ by one more factor to make a new polynomial
$$p_n(x) \prod_i (x - x_i).$$

This polynomial cannot change sign over the interval from $a$ to $b$ because all its roots there are now of even multiplicity. So the integral
$$\int_a^b p_n(x) \left(\prod_i (x - x_i)\right) \omega(x)\,dx \ne 0,$$
since the weight function $\omega(x)$ is always non-negative. But $p_n(x)$ is orthogonal to all polynomials of degree $n - 1$ or less, so the degree of the product $\prod_i (x - x_i)$ must be at least $n$. Therefore $p_n$ has $n$ distinct roots, all real, in the interval from $a$ to $b$.

General formula for the weights

The weights can be expressed as

$$w_i = \frac{a_n}{a_{n-1}} \frac{\int_a^b \omega(x)\,p_{n-1}(x)^2\,dx}{p_n'(x_i)\,p_{n-1}(x_i)} \qquad (1)$$

where $a_k$ is the coefficient of $x^k$ in $p_k(x)$. To prove this, note that using Lagrange interpolation one can express $r(x)$ in terms of $r(x_i)$ as
$$r(x) = \sum_{i=1}^{n} r(x_i) \prod_{\substack{1 \le j \le n \\ j \ne i}} \frac{x - x_j}{x_i - x_j}$$
because $r(x)$ has degree less than $n$ and is thus fixed by the values it attains at $n$ different points. Multiplying both sides by $\omega(x)$ and integrating from $a$ to $b$ yields
$$\int_a^b \omega(x)\,r(x)\,dx = \sum_{i=1}^{n} r(x_i) \int_a^b \omega(x) \prod_{\substack{1 \le j \le n \\ j \ne i}} \frac{x - x_j}{x_i - x_j}\,dx.$$

The weights $w_i$ are thus given by
$$w_i = \int_a^b \omega(x) \prod_{\substack{1 \le j \le n \\ j \ne i}} \frac{x - x_j}{x_i - x_j}\,dx.$$
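
For the Legendre case this integral definition of the weights can be evaluated directly, since the Lagrange basis polynomials can be integrated exactly; the sketch below (assuming NumPy) compares the result with the weights returned by leggauss.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

n = 4
nodes, ref_weights = np.polynomial.legendre.leggauss(n)

weights = []
for i, xi in enumerate(nodes):
    li = P([1.0])                                  # build l_i(x) as a product of factors
    for j, xj in enumerate(nodes):
        if j != i:
            li = li * P([-xj, 1.0]) / (xi - xj)    # factor (x - x_j) / (x_i - x_j)
    F = li.integ()                                 # antiderivative of l_i
    weights.append(F(1.0) - F(-1.0))               # w_i = integral of l_i over [-1, 1]

print(np.allclose(weights, ref_weights))           # expected: True
```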

This integral expression for $w_i$ can be expressed in terms of the orthogonal polynomials $p_n(x)$ and $p_{n-1}(x)$ as follows.

We can write
$$\prod_{\substack{1 \le j \le n \\ j \ne i}} (x - x_j) = \frac{\prod_{1 \le j \le n} (x - x_j)}{x - x_i} = \frac{p_n(x)}{a_n (x - x_i)}$$

where $a_n$ is the coefficient of $x^n$ in $p_n(x)$. Taking the limit of $x$ to $x_i$ and using L'Hôpital's rule yields
$$\prod_{\substack{1 \le j \le n \\ j \ne i}} (x_i - x_j) = \frac{p_n'(x_i)}{a_n}.$$

We can thus write the integral expression for the weights as

$$w_i = \frac{1}{p_n'(x_i)} \int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx. \qquad (2)$$

In the integrand, writing
$$\frac{1}{x - x_i} = \frac{1 - \left(\frac{x}{x_i}\right)^k}{x - x_i} + \left(\frac{x}{x_i}\right)^k \frac{1}{x - x_i}$$

yields
$$\int_a^b \omega(x)\,\frac{x^k\,p_n(x)}{x - x_i}\,dx = x_i^k \int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx,$$

provided $k \le n$, because
$$\frac{1 - \left(\frac{x}{x_i}\right)^k}{x - x_i}$$
is a polynomial of degree $k - 1$, which is then orthogonal to $p_n(x)$. So, if $q(x)$ is a polynomial of at most $n$th degree we have
$$\int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx = \frac{1}{q(x_i)} \int_a^b \omega(x)\,\frac{q(x)\,p_n(x)}{x - x_i}\,dx.$$

We can evaluate the integral on the right hand side for $q(x) = p_{n-1}(x)$ as follows. Because $\frac{p_n(x)}{x - x_i}$ is a polynomial of degree $n - 1$, we have
$$\frac{p_n(x)}{x - x_i} = a_n x^{n-1} + s(x),$$
where $s(x)$ is a polynomial of degree $n - 2$. Since $s(x)$ is orthogonal to $p_{n-1}(x)$ we have
$$\int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx = \frac{a_n}{p_{n-1}(x_i)} \int_a^b \omega(x)\,p_{n-1}(x)\,x^{n-1}\,dx.$$

We can then write
$$x^{n-1} = \left(x^{n-1} - \frac{p_{n-1}(x)}{a_{n-1}}\right) + \frac{p_{n-1}(x)}{a_{n-1}}.$$

The term in the brackets is a polynomial of degree $n - 2$, which is therefore orthogonal to $p_{n-1}(x)$. The integral can thus be written as
$$\int_a^b \omega(x)\,\frac{p_n(x)}{x - x_i}\,dx = \frac{a_n}{a_{n-1}\,p_{n-1}(x_i)} \int_a^b \omega(x)\,p_{n-1}(x)^2\,dx.$$

According to equation (2), the weights are obtained by dividing this by $p_n'(x_i)$, which yields the expression in equation (1).

$w_i$ can also be expressed in terms of the orthogonal polynomials $p_n(x)$ and now $p_{n+1}(x)$. In the three-term recurrence relation $p_{n+1}(x_i) = (a)\,p_n(x_i) + (b)\,p_{n-1}(x_i)$, the term with $p_n(x_i)$ vanishes, so $p_{n-1}(x_i)$ in Eq. (1) can be replaced by $\tfrac{1}{b}\,p_{n+1}(x_i)$.

Proof that the weights are positive

Consider the following polynomial of degree $2n - 2$:
$$f(x) = \prod_{\substack{1 \le j \le n \\ j \ne i}} \frac{(x - x_j)^2}{(x_i - x_j)^2},$$
where, as above, the $x_j$ are the roots of the polynomial $p_n(x)$. Clearly $f(x_j) = \delta_{ij}$. Since the degree of $f(x)$ is less than $2n - 1$, the Gaussian quadrature formula involving the weights and nodes obtained from $p_n(x)$ applies. Since $f(x_j) = 0$ for $j$ not equal to $i$, we have
$$\int_a^b \omega(x)\,f(x)\,dx = \sum_{j=1}^{n} w_j\,f(x_j) = \sum_{j=1}^{n} \delta_{ij}\,w_j = w_i.$$

Since both $\omega(x)$ and $f(x)$ are non-negative functions, it follows that $w_i > 0$.

Computation of Gaussian quadrature rules

There are many algorithms for computing the nodes $x_i$ and weights $w_i$ of Gaussian quadrature rules. The most popular are the Golub–Welsch algorithm requiring $O(n^2)$ operations, Newton's method for solving $p_n(x) = 0$ using the three-term recurrence for evaluation requiring $O(n^2)$ operations, and asymptotic formulas for large $n$ requiring $O(n)$ operations.

Recurrence relation

Orthogonal polynomials $p_r$ with $(p_r, p_s) = 0$ for $r \ne s$ for a scalar product $(\cdot, \cdot)$, degree$(p_r) = r$ and leading coefficient one (i.e. monic orthogonal polynomials) satisfy the recurrence relation
$$p_{r+1}(x) = (x - a_{r,r})\,p_r(x) - a_{r,r-1}\,p_{r-1}(x) - \cdots - a_{r,0}\,p_0(x)$$

with the scalar product defined as
$$(f(x), g(x)) = \int_a^b \omega(x)\,f(x)\,g(x)\,dx$$

for $r = 0, 1, \ldots, n - 1$, where $n$ is the maximal degree, which can be taken to be infinity, and where
$$a_{r,s} = \frac{(x p_r, p_s)}{(p_s, p_s)}.$$
First of all, the polynomials defined by the recurrence relation starting with $p_0(x) = 1$ have leading coefficient one and correct degree. Given the starting point $p_0$, the orthogonality of $p_r$ can be shown by induction. For $r = s = 0$ one has
$$(p_1, p_0) = ((x - a_{0,0})\,p_0, p_0) = (x p_0, p_0) - a_{0,0}(p_0, p_0) = (x p_0, p_0) - (x p_0, p_0) = 0.$$

Now if $p_0, p_1, \ldots, p_r$ are orthogonal, then so is $p_{r+1}$, because in
$$(p_{r+1}, p_s) = (x p_r, p_s) - a_{r,r}(p_r, p_s) - a_{r,r-1}(p_{r-1}, p_s) - \cdots - a_{r,0}(p_0, p_s)$$
all scalar products vanish except for the first one and the one where $p_s$ meets the same orthogonal polynomial. Therefore,
$$(p_{r+1}, p_s) = (x p_r, p_s) - a_{r,s}(p_s, p_s) = (x p_r, p_s) - (x p_r, p_s) = 0.$$

However, if the scalar product satisfies $(xf, g) = (f, xg)$ (which is the case for Gaussian quadrature), the recurrence relation reduces to a three-term recurrence relation: for $s < r - 1$, $x p_s$ is a polynomial of degree less than or equal to $r - 1$. On the other hand, $p_r$ is orthogonal to every polynomial of degree less than or equal to $r - 1$. Therefore, one has $(x p_r, p_s) = (p_r, x p_s) = 0$ and $a_{r,s} = 0$ for $s < r - 1$. The recurrence relation then simplifies to
$$p_{r+1}(x) = (x - a_{r,r})\,p_r(x) - a_{r,r-1}\,p_{r-1}(x)$$

or
$$p_{r+1}(x) = (x - a_r)\,p_r(x) - b_r\,p_{r-1}(x)$$

(with the convention $p_{-1}(x) \equiv 0$) where
$$a_r := \frac{(x p_r, p_r)}{(p_r, p_r)}, \qquad b_r := \frac{(x p_r, p_{r-1})}{(p_{r-1}, p_{r-1})} = \frac{(p_r, p_r)}{(p_{r-1}, p_{r-1})}$$

(the last equality holds because $(x p_r, p_{r-1}) = (p_r, x p_{r-1}) = (p_r, p_r)$, since $x p_{r-1}$ differs from $p_r$ by a polynomial of degree less than $r$).
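
The recurrence coefficients can be generated numerically from the inner products above; the sketch below (assuming NumPy, and taking ω = 1 on [-1, 1] so the inner products are exact polynomial integrals) builds the monic polynomials and their coefficients a_r and b_r.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def inner(f, g, a=-1.0, b=1.0):
    """Scalar product (f, g) = integral of f(x) g(x) over [a, b] (omega = 1)."""
    F = (f * g).integ()
    return F(b) - F(a)

def monic_orthogonal(n):
    """Monic orthogonal polynomials p_0..p_n and their recurrence
    coefficients a_r, b_r from p_{r+1} = (x - a_r) p_r - b_r p_{r-1}."""
    x = P([0.0, 1.0])
    p = [P([1.0])]                       # p_0 = 1
    a_list, b_list = [], []
    for r in range(n):
        a_r = inner(x * p[r], p[r]) / inner(p[r], p[r])
        b_r = 0.0 if r == 0 else inner(p[r], p[r]) / inner(p[r - 1], p[r - 1])
        a_list.append(a_r)
        b_list.append(b_r)
        p.append((x - a_r) * p[r] - (b_r * p[r - 1] if r > 0 else P([0.0])))
    return p, a_list, b_list

p, a, b = monic_orthogonal(5)
print(np.round(a, 12))   # a_r = 0 by symmetry of the interval
print(np.round(b, 12))   # b_r should equal r^2 / (4 r^2 - 1) for r >= 1 (Legendre case)
```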

The Golub–Welsch algorithm

The three-term recurrence relation can be written in matrix form $J\tilde{P} = x\tilde{P} - p_n(x)\,\mathbf{e}_n$, where $\tilde{P} = \begin{bmatrix} p_0(x) & p_1(x) & \cdots & p_{n-1}(x) \end{bmatrix}^\mathsf{T}$, $\mathbf{e}_n$ is the $n$th standard basis vector, i.e., $\mathbf{e}_n = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^\mathsf{T}$, and $J$ is the following tridiagonal matrix, called the Jacobi matrix:
$$\mathbf{J} = \begin{bmatrix} a_0 & 1 & 0 & \cdots & 0 \\ b_1 & a_1 & 1 & \ddots & \vdots \\ 0 & b_2 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & a_{n-2} & 1 \\ 0 & \cdots & 0 & b_{n-1} & a_{n-1} \end{bmatrix}.$$

The zeros $x_j$ of the polynomial $p_n$, which are used as nodes for the Gaussian quadrature, can be found by computing the eigenvalues of this matrix. This procedure is known as the Golub–Welsch algorithm.

For computing the weights and nodes, it is preferable to consider the symmetric tridiagonal matrix $\mathcal{J}$ with elements
$$\mathcal{J}_{k,k} = J_{k,k} = a_{k-1}, \qquad k = 1, 2, \ldots, n,$$
$$\mathcal{J}_{k-1,k} = \mathcal{J}_{k,k-1} = \sqrt{J_{k,k-1}\,J_{k-1,k}} = \sqrt{b_{k-1}}, \qquad k = 2, \ldots, n.$$

That is,

$$\mathcal{J} = \begin{bmatrix} a_0 & \sqrt{b_1} & 0 & \cdots & 0 \\ \sqrt{b_1} & a_1 & \sqrt{b_2} & \ddots & \vdots \\ 0 & \sqrt{b_2} & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & a_{n-2} & \sqrt{b_{n-1}} \\ 0 & \cdots & 0 & \sqrt{b_{n-1}} & a_{n-1} \end{bmatrix}.$$

$\mathbf{J}$ and $\mathcal{J}$ are similar matrices and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: if $\phi^{(j)}$ is a normalized eigenvector (i.e., an eigenvector with Euclidean norm equal to one) associated with the eigenvalue $x_j$, the corresponding weight can be computed from the first component of this eigenvector, namely
$$w_j = \mu_0 \left(\phi_1^{(j)}\right)^2,$$

where $\mu_0$ is the integral of the weight function,
$$\mu_0 = \int_a^b \omega(x)\,dx.$$

See the references for further details.
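
A minimal sketch of the Golub–Welsch procedure for the Legendre case (assuming NumPy, and using the monic-Legendre recurrence coefficients a_r = 0 and b_r = r^2 / (4 r^2 - 1), checked numerically in the sketch above):

```python
import numpy as np

def golub_welsch_legendre(n):
    """Nodes and weights of the n-point Gauss-Legendre rule via the
    eigendecomposition of the symmetric Jacobi matrix described above."""
    r = np.arange(1, n)
    b = r**2 / (4.0 * r**2 - 1.0)                          # monic Legendre recurrence coefficients
    J = np.diag(np.sqrt(b), 1) + np.diag(np.sqrt(b), -1)   # a_r = 0 on the diagonal
    nodes, vecs = np.linalg.eigh(J)                        # eigenvalues are the nodes
    mu0 = 2.0                                              # integral of omega = 1 over [-1, 1]
    weights = mu0 * vecs[0, :]**2                          # squared first components of eigenvectors
    return nodes, weights

x, w = golub_welsch_legendre(5)
x_ref, w_ref = np.polynomial.legendre.leggauss(5)
print(np.allclose(x, x_ref), np.allclose(w, w_ref))        # expected: True True
```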

Error estimates

The error of a Gaussian quadrature rule can be stated as follows.[5] For an integrand which has $2n$ continuous derivatives,
$$\int_a^b \omega(x)\,f(x)\,dx - \sum_{i=1}^{n} w_i\,f(x_i) = \frac{f^{(2n)}(\xi)}{(2n)!}\,(p_n, p_n)$$
for some $\xi$ in $(a, b)$, where $p_n$ is the monic (i.e. the leading coefficient is 1) orthogonal polynomial of degree $n$ and where
$$(f, g) = \int_a^b \omega(x)\,f(x)\,g(x)\,dx.$$

In the important special case of $\omega(x) = 1$, we have the error estimate[6]
$$\frac{(b - a)^{2n+1}\,(n!)^4}{(2n + 1)\,[(2n)!]^3}\,f^{(2n)}(\xi), \qquad a < \xi < b.$$

Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the derivative of order $2n$, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful.
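
A crude version of this difference-based error estimate can be sketched as follows (assuming NumPy; pairing an n-point with a (2n + 1)-point Gauss–Legendre rule is only illustrative, since Gauss–Kronrod pairs re-use function values and are the more economical choice):

```python
import numpy as np

def gauss_with_error_estimate(f, n):
    """Integrate f over [-1, 1] with an n-point Gauss-Legendre rule and
    estimate the error as the difference from a (2n + 1)-point rule."""
    x1, w1 = np.polynomial.legendre.leggauss(n)
    x2, w2 = np.polynomial.legendre.leggauss(2 * n + 1)
    coarse = np.dot(w1, f(x1))
    fine = np.dot(w2, f(x2))
    return fine, abs(fine - coarse)

value, err_estimate = gauss_with_error_estimate(np.exp, 5)
print(value, err_estimate)   # integral of exp over [-1, 1] is e - 1/e, about 2.3504
```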

Gauss–Kronrod rules


If the interval $[a, b]$ is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at zero for odd numbers of points), and thus the integrand must be evaluated at every point. Gauss–Kronrod rules are extensions of Gauss quadrature rules generated by adding $n + 1$ points to an $n$-point rule in such a way that the resulting rule is of order $2n + 1$. This allows for computing higher-order estimates while re-using the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error.

Gauss–Lobatto rules

Gauss–Lobatto quadrature, also known as Lobatto quadrature,[7] is named after the Dutch mathematician Rehuel Lobatto. It is similar to Gaussian quadrature with the following differences:

  1. The integration points include the end points of the integration interval.
  2. It is accurate for polynomials up to degree $2n - 3$, where $n$ is the number of integration points.[8]

Lobatto quadrature of a function $f(x)$ on the interval $[-1, 1]$:
$$\int_{-1}^{1} f(x)\,dx = \frac{2}{n(n-1)}\left[f(1) + f(-1)\right] + \sum_{i=2}^{n-1} w_i\,f(x_i) + R_n.$$

Abscissas: $x_i$ is the $(i - 1)$st zero of $P_{n-1}'(x)$; here $P_m(x)$ denotes the standard Legendre polynomial of $m$-th degree and the prime denotes the derivative.

Weights:
$$w_i = \frac{2}{n(n-1)\left[P_{n-1}(x_i)\right]^2}, \qquad x_i \ne \pm 1.$$

Remainder:
$$R_n = \frac{-n\,(n - 1)^3\,2^{2n-1}\,[(n - 2)!]^4}{(2n - 1)\,[(2n - 2)!]^3}\,f^{(2n-2)}(\xi), \qquad -1 < \xi < 1.$$

Some of the weights are:

Number of points, n | Points, x_i | Weights, w_i
3 | 0 | 4/3
  | ±1 | 1/3
4 | ±√(1/5) | 5/6
  | ±1 | 1/6
5 | 0 | 32/45
  | ±√(3/7) | 49/90
  | ±1 | 1/10
6 | ±√(1/3 - 2√7/21) | (14 + √7)/30
  | ±√(1/3 + 2√7/21) | (14 - √7)/30
  | ±1 | 1/15
7 | 0 | 256/525
  | ±√(5/11 - (2/11)√(5/3)) | (124 + 7√15)/350
  | ±√(5/11 + (2/11)√(5/3)) | (124 - 7√15)/350
  | ±1 | 1/21

An adaptive variant of this algorithm with 2 interior nodes[9] is found in GNU Octave and MATLAB as quadl and integrate.[10][11]
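
A minimal sketch of an n-point Lobatto rule built from the formulas above (assuming NumPy; the helper name lobatto_nodes_weights is ours, not a library routine):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, legval

def lobatto_nodes_weights(n):
    """Nodes and weights of the n-point Gauss-Lobatto rule on [-1, 1],
    using the abscissa and weight formulas given above."""
    # interior nodes: zeros of P'_{n-1}
    interior = Legendre.basis(n - 1).deriv().roots().real   # drop round-off imaginary parts
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    # w_i = 2 / (n (n - 1) P_{n-1}(x_i)^2); at x = +-1 this reduces to 2 / (n (n - 1))
    c = np.zeros(n)
    c[-1] = 1.0                                   # coefficient vector selecting P_{n-1}
    w = 2.0 / (n * (n - 1) * legval(x, c) ** 2)
    return x, w

x, w = lobatto_nodes_weights(5)
print(np.round(x, 6))   # expected: -1, -sqrt(3/7), 0, sqrt(3/7), 1
print(np.round(w, 6))   # expected: 1/10, 49/90, 32/45, 49/90, 1/10 (matching the table)
```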

References
