Order of accuracy


In numerical analysis, order of accuracy quantifies the rate of convergence of a numerical approximation of a differential equation to the exact solution. Consider u, the exact solution to a differential equation in an appropriate normed space (V, ||·||). Consider a numerical approximation u_h, where h is a parameter characterizing the approximation, such as the step size in a finite difference scheme or the diameter of the cells in a finite element method. The numerical solution u_h is said to be nth-order accurate if the error E(h) := ||u − u_h|| is bounded by the step size h to the nth power:[1]

E(h) = ||u − u_h|| ≤ Ch^n

where the constant C is independent of h and usually depends on the solution u.[2] Using big O notation, an nth-order accurate numerical method is written as

||u − u_h|| = O(h^n)

This definition depends strictly on the norm used in the space; the choice of norm is fundamental to estimating the rate of convergence and, more generally, to measuring all numerical errors correctly.

The size of the error of a first-order accurate approximation is directly proportional to h. For partial differential equations that vary in both time and space, a numerical solution is said to be accurate to order n in time and to order m in space.[3]
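The order of accuracy can be estimated numerically: if E(h) ≈ Ch^n, then computing the error at two step sizes h₁ and h₂ gives n ≈ log(E(h₁)/E(h₂)) / log(h₁/h₂). A minimal sketch, using finite difference approximations of a derivative as the numerical method (the function names and step sizes here are illustrative choices, not from the source):

```python
import math

def estimate_order(approx, exact, hs):
    """Estimate the order n from errors at successive step sizes,
    via n ~ log(E(h1)/E(h2)) / log(h1/h2)."""
    errors = [abs(approx(h) - exact) for h in hs]
    return [math.log(errors[i] / errors[i + 1]) / math.log(hs[i] / hs[i + 1])
            for i in range(len(hs) - 1)]

# Approximate f'(x0) for f = sin at x0 = 1, so the exact value is cos(1).
f = math.sin
x0 = 1.0
exact = math.cos(x0)

forward = lambda h: (f(x0 + h) - f(x0)) / h            # first-order scheme
central = lambda h: (f(x0 + h) - f(x0 - h)) / (2 * h)  # second-order scheme

hs = [0.1, 0.05, 0.025, 0.0125]
print(estimate_order(forward, exact, hs))  # estimates close to 1
print(estimate_order(central, exact, hs))  # estimates close to 2
```

As h is refined, the estimated order approaches 1 for the forward difference and 2 for the central difference, matching E(h) = O(h) and E(h) = O(h²) respectively.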

References