Kolmogorov's two-series theorem

In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.

Statement of the theorem

Let $(X_n)_{n=1}^{\infty}$ be independent random variables with expected values $\mathbf{E}[X_n] = \mu_n$ and variances $\mathbf{Var}(X_n) = \sigma_n^2$, such that $\sum_{n=1}^{\infty} \mu_n$ converges in $\mathbb{R}$ and $\sum_{n=1}^{\infty} \sigma_n^2$ converges in $\mathbb{R}$. Then $\sum_{n=1}^{\infty} X_n$ converges in $\mathbb{R}$ almost surely.
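A standard example is the random harmonic series $\sum_{n=1}^{\infty} \varepsilon_n / n$, where the signs $\varepsilon_n$ are independent with $\mathbb{P}(\varepsilon_n = 1) = \mathbb{P}(\varepsilon_n = -1) = 1/2$: here $\mu_n = 0$ and $\sigma_n^2 = 1/n^2$, both series converge, so the theorem gives almost sure convergence. The following Python snippet is a minimal illustrative simulation of this example (it assumes NumPy is available; the variable names and parameters are arbitrary choices, not taken from any reference). The partial sums of each simulated path change very little between $N = 10^3$ and $N = 10^5$, consistent with almost sure convergence.

    import numpy as np

    # Random harmonic series: X_n = eps_n / n with independent random signs eps_n.
    # Then mu_n = 0 and sigma_n^2 = 1/n^2, so both series in the theorem converge.
    rng = np.random.default_rng(0)
    paths, N = 5, 100_000                      # independent sample paths, terms per path
    signs = rng.choice([-1.0, 1.0], size=(paths, N))
    terms = signs / np.arange(1, N + 1)        # X_n = eps_n / n
    partial_sums = np.cumsum(terms, axis=1)    # S_1, ..., S_N along each path

    # The tail of each path barely moves, as expected under almost sure convergence.
    for p in range(paths):
        print(f"path {p}: S_1000 = {partial_sums[p, 999]:+.4f}, S_100000 = {partial_sums[p, -1]:+.4f}")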

Proof

Assume without loss of generality that $\mu_n = 0$ for all $n$: since $\sum_{n=1}^{\infty} \mu_n$ converges, replacing $X_n$ by $X_n - \mu_n$ changes the partial sums only by the convergent deterministic sequence $\sum_{n=1}^{N} \mu_n$. Set $S_N = \sum_{n=1}^{N} X_n$; we will show that $\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = 0$ with probability 1.

For every $m \in \mathbb{N}$,

$$\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = \limsup_{N\to\infty}\,(S_N - S_m) - \liminf_{N\to\infty}\,(S_N - S_m) \le 2 \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right|,$$

since subtracting the fixed quantity $S_m$ shifts $\limsup$ and $\liminf$ by the same amount, and every $S_N - S_m$ with $N > m$ is of the form $\sum_{i=1}^{k} X_{m+i}$.

Thus, for every $m \in \mathbb{N}$ and $\epsilon > 0$,

$$\begin{aligned}
\mathbb{P}\left(\limsup_{N\to\infty}\,(S_N - S_m) - \liminf_{N\to\infty}\,(S_N - S_m) \ge \epsilon\right)
&\le \mathbb{P}\left(2 \max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \ge \epsilon\right) \\
&= \mathbb{P}\left(\max_{k \in \mathbb{N}} \left| \sum_{i=1}^{k} X_{m+i} \right| \ge \frac{\epsilon}{2}\right) \\
&\le \limsup_{N\to\infty} \frac{4}{\epsilon^2} \sum_{i=m+1}^{m+N} \sigma_i^2
= \frac{4}{\epsilon^2} \lim_{N\to\infty} \sum_{i=m+1}^{m+N} \sigma_i^2,
\end{aligned}$$

where the second inequality is a consequence of Kolmogorov's inequality.
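For concreteness, the form of Kolmogorov's inequality being used (stated here as a brief aside; it applies because the $X_{m+i}$ are independent with mean zero and finite variance) is

$$\mathbb{P}\left(\max_{1 \le k \le N} \left| \sum_{i=1}^{k} X_{m+i} \right| \ge \frac{\epsilon}{2}\right) \le \frac{4}{\epsilon^2} \sum_{i=m+1}^{m+N} \sigma_i^2,$$

and letting $N \to \infty$ (the events on the left increase to the event with the maximum taken over all $k \in \mathbb{N}$) gives the bound above.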

By the assumption that $\sum_{n=1}^{\infty} \sigma_n^2$ converges, the last term tends to $0$ as $m \to \infty$, for every $\epsilon > 0$. Since the left-hand side does not depend on $m$, it follows that $\mathbb{P}\left(\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N \ge \epsilon\right) = 0$ for every $\epsilon > 0$; hence $\limsup_{N\to\infty} S_N = \liminf_{N\to\infty} S_N$ almost surely, and so $S_N$ converges almost surely.

References


  • Durrett, Rick. Probability: Theory and Examples. Duxbury Advanced Series, Third Edition, Thomson Brooks/Cole, 2005, Section 1.8, pp. 60–69.
  • Loève, M. Probability Theory. Princeton Univ. Press, 1963, Section 16.3.
  • Feller, W. An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley, 1971, Section IX.9.