Komlós' theorem


Komlós' theorem is a theorem from probability theory and mathematical analysis about the Cesàro convergence of a subsequence of random variables (or functions) and of all its further subsequences to an integrable random variable (or function). As such, it is also an existence theorem for an integrable random variable (or function). There are a probabilistic and an analytic version for finite measure spaces.

The theorem was proven in 1967 by János Komlós.[1] A generalization was given in 1970 by Srishti D. Chatterji.[2]

Komlós' theorem

Probabilistic version

Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\xi_1, \xi_2, \dots$ a sequence of real-valued random variables defined on this space with

$$\sup_{n} \mathbb{E}\bigl[|\xi_n|\bigr] < \infty.$$

Then there exist a random variable $\psi \in L^1(P)$ and a subsequence $(\eta_k) = (\xi_{n_k})$ such that for every further subsequence $(\tilde{\eta}_n) = (\eta_{k_n})$, as $n \to \infty$,

$$\frac{\tilde{\eta}_1 + \cdots + \tilde{\eta}_n}{n} \to \psi$$

$P$-almost surely.
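A minimal numerical sketch (assuming Python with only the standard library; the exponential sequence is an illustration chosen here, not part of the theorem): for an i.i.d. sequence with finite mean, the full sequence itself already satisfies the conclusion by the strong law of large numbers, and Komlós' theorem can be viewed as a subsequence analogue of that law for sequences that are merely bounded in $L^1$.

```python
import random

# Illustrative special case: i.i.d. exponential random variables with
# E[|xi_n|] = 1, so sup_n E[|xi_n|] < infinity holds trivially.
# Here the subsequence can be taken to be the whole sequence, and the
# Cesàro averages converge almost surely to psi = E[xi_1] = 1 (SLLN).
random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100_000)]

cesaro = []
running = 0.0
for n, x in enumerate(xs, start=1):
    running += x
    cesaro.append(running / n)  # (xi_1 + ... + xi_n) / n

print(cesaro[-1])  # close to the limit psi = 1
```

The same convergence holds along every subsequence of this i.i.d. sequence, which is exactly the stability property that the theorem extracts in general.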

Analytic version

Let $(E, \mathcal{A}, \mu)$ be a finite measure space and $f_1, f_2, \dots$ a sequence of real-valued functions in $L^1(\mu)$ with

$$\sup_{n} \int_E |f_n| \,\mathrm{d}\mu < \infty.$$

Then there exist a function $\upsilon \in L^1(\mu)$ and a subsequence $(g_k) = (f_{n_k})$ such that for every further subsequence $(\tilde{g}_n) = (g_{k_n})$, as $n \to \infty$,

$$\frac{\tilde{g}_1 + \cdots + \tilde{g}_n}{n} \to \upsilon$$

$\mu$-almost everywhere.
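A concrete illustration of the analytic version (the example sequence is an assumption chosen here, not taken from the source): on $[0,1]$ with Lebesgue measure, the functions $f_k = k \, \mathbf{1}_{(0, 1/k]}$ satisfy $\int |f_k| \,\mathrm{d}\mu = 1$ for every $k$, so the $L^1$ bound holds, and the Cesàro averages converge to $\upsilon = 0$ almost everywhere, even though each average still integrates to 1; the convergence asserted by the theorem is almost everywhere, not in $L^1$.

```python
def cesaro_average(x, n):
    """Value of (f_1(x) + ... + f_n(x)) / n at a point x in (0, 1],
    where f_k = k * indicator of (0, 1/k]."""
    return sum(k for k in range(1, n + 1) if x <= 1.0 / k) / n

# At a fixed point x, only finitely many f_k are nonzero, so the
# Cesàro average shrinks like C/n and tends to 0.
vals = [cesaro_average(0.3, n) for n in (10, 100, 1000)]
print(vals)  # decreasing toward the almost-everywhere limit 0
```

At $x = 0.3$ only $f_1, f_2, f_3$ are nonzero (their values sum to 6), so the averages are $6/n$, which tends to 0 as $n \to \infty$.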

Explanations

The theorem thus says that the sequence $(\eta_k)$ and all of its subsequences converge in the Cesàro sense, and all to the same limit.

Literature

  • Kabanov, Yuri; Pergamenshchikov, Sergei (2003). Two-Scale Stochastic Systems: Asymptotic Analysis and Control. Springer. doi:10.1007/978-3-662-13242-5. p. 250.

References