Donsker's theorem (also known as Donsker's invariance principle, or the functional central limit theorem) is a functional extension of the central limit theorem: it states that a suitably rescaled random walk converges in distribution to Brownian motion.
Let $\{X_n : n \ge 1\}$ be a sequence of independent and identically distributed random variables, and assume that they are normalised so that $E[X_n] = 0$ and $\operatorname{Var}(X_n) = 1$. This assumption is no loss of generality for $X_n$ with finite variance, since we can always consider the normalisation
$$\frac{X_n - E[X_n]}{\sqrt{\operatorname{Var}(X_n)}}.$$
We look at the random walk generated by the sequence
$$S_n = \sum_{k=1}^{n} X_k,$$
and interpolate linearly between the integer points, i.e.
$$S(t) = S_{\lfloor t \rfloor} + (t - \lfloor t \rfloor)\big(S_{\lfloor t \rfloor + 1} - S_{\lfloor t \rfloor}\big).$$
This defines a random function $S \in C[0,\infty)$. We now define a sequence $\{S_n^* : n \ge 1\}$ of random functions in $C[0,1]$ by
$$S_n^*(t) = \frac{S(nt)}{\sqrt{n}} \quad \text{for all } t \in [0,1].$$
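As a concrete illustration (not part of the proof), the interpolation and rescaling above can be sketched in a few lines of Python; the function name `S_star` and the list-based representation of the increments are our own choices:

```python
import math

def S_star(increments, t):
    """Evaluate the rescaled, linearly interpolated random walk S_n^*(t).

    `increments` holds X_1, ..., X_n and t lies in [0, 1].  We build the
    partial sums S_0 = 0, S_1, ..., S_n, interpolate linearly between the
    integer points to get S(.), and return S(n t) / sqrt(n).
    """
    n = len(increments)
    partial = [0.0]
    for x in increments:
        partial.append(partial[-1] + x)
    u = n * t                              # position on the unscaled time axis
    k = min(int(math.floor(u)), n - 1)     # integer part, clamped so t = 1 works
    frac = u - k
    interpolated = partial[k] + frac * (partial[k + 1] - partial[k])
    return interpolated / math.sqrt(n)
```

For instance, with increments $X_1 = 1, X_2 = -1, X_3 = 1$ one gets $S_3^*(1/3) = S_1/\sqrt{3} = 1/\sqrt{3}$.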
Theorem (Donsker's Invariance Principle). On the space $C[0,1]$ of continuous functions on the unit interval with the metric induced by the sup-norm, the sequence $\{S_n^* : n \ge 1\}$ converges in distribution to a standard Brownian motion $\{B(t) : t \in [0,1]\}$.
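The "invariance" in the name refers to the limit being the same whatever the law of the increments. A minimal Monte Carlo sketch of this (with our own illustrative choices of sample sizes, and standardised uniform increments rather than Gaussian ones) checks that the endpoint $S_n^*(1) = S_n/\sqrt{n}$ has mean $\approx 0$ and variance $\approx 1$, as it must if it converges to $B(1) \sim N(0,1)$:

```python
import math
import random

random.seed(0)

# Uniform(-1, 1) has mean 0 and variance 1/3; multiplying by sqrt(3)
# standardises it to mean 0 and variance 1.
def increment():
    return random.uniform(-1.0, 1.0) * math.sqrt(3.0)

n = 500          # steps per walk (illustrative choice)
m = 4000         # number of independent walks (illustrative choice)
endpoints = []
for _ in range(m):
    s = sum(increment() for _ in range(n))
    endpoints.append(s / math.sqrt(n))   # S_n^*(1) = S_n / sqrt(n)

mean = sum(endpoints) / m
var = sum((e - mean) ** 2 for e in endpoints) / m
```

This only probes the one-dimensional marginal at $t = 1$ (the classical CLT); the theorem asserts much more, namely convergence of the whole path.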
Proof of Donsker's invariance principle
The following proof is taken from Theorem 5.22, pp. 131-133 of [MörtersPeres].
The idea of the proof is to construct the random variables $X_1, X_2, X_3, \dots$ on the same probability space as the Brownian motion, in such a way that $\{S_n^* : n \ge 1\}$ is with high probability close to a scaling of this Brownian motion.
Construction of stopping times
Using Skorokhod embedding, we define $T_1$ to be a stopping time with $E[T_1] = 1$ such that $B(T_1) \overset{d}{=} X_1$. By the strong Markov property,
$$\{B_2(t) : t \ge 0\} = \{B(T_1 + t) - B(T_1) : t \ge 0\}$$
is a Brownian motion independent of $\mathcal{F}^+(T_1)$ and, in particular, of $(T_1, B(T_1))$. Hence we can define a stopping time $T_2'$ for the Brownian motion $\{B_2(t) : t \ge 0\}$ such that $E[T_2'] = 1$ and $B_2(T_2') \overset{d}{=} X_2$. Then $T_2 = T_1 + T_2'$ is a stopping time for the original Brownian motion with $E[T_2] = 2$, such that $B(T_2)$ is the second value of a random walk with increments distributed as $X_1$. We can proceed inductively to get a sequence $0 = T_0 \le T_1 \le T_2 \le T_3 \le \dots$ such that $S_n = B(T_n)$ is the embedded random walk and $E[T_n] = n$.
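In the simplest case $P(X = \pm 1) = 1/2$, the Skorokhod embedding is explicit: $T = \inf\{t : |B(t)| = 1\}$ gives $B(T) = \pm 1$ with probability $1/2$ each, and $E[T] = E[B(T)^2] = 1$ by Wald's second identity. The sketch below illustrates this numerically, approximating Brownian motion by a simple random walk with steps of size $1/m$ at time increments $1/m^2$; the helper name `embed_one` and the discretisation level $m = 20$ are our own choices:

```python
import random

random.seed(1)

def embed_one(m=20):
    """One draw of (T, B(T)) for the first hitting time of {-1, +1},
    using a simple-random-walk approximation of Brownian motion
    (space step 1/m, time step 1/m**2).  Since 1 is an exact multiple
    of the step size, B(T) is exactly -1 or +1."""
    pos, steps = 0, 0
    while abs(pos) < m:                # walk on the integers until it hits +-m
        pos += random.choice((-1, 1))
        steps += 1
    return steps / m**2, pos / m       # rescale time and space

samples = [embed_one() for _ in range(2000)]
mean_T = sum(t for t, _ in samples) / len(samples)
endpoints = {b for _, b in samples}
```

Here `mean_T` should be close to $1$ and `endpoints` a subset of $\{-1, +1\}$, matching $E[T_1] = 1$ and $B(T_1) \overset{d}{=} X_1$.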
How close are $S_n^*(t)$ and $B(nt)/\sqrt{n}$?
Abbreviate $W_n(t) = B(nt)/\sqrt{n}$ and let $A_n$ be the event that there exists $t \in [0,1)$ such that $|S_n^*(t) - W_n(t)| > \varepsilon$. We have to show that
$$P\{\exists t \in [0,1) : |S_n^*(t) - W_n(t)| > \varepsilon\} = P(A_n) \longrightarrow 0 \quad \text{as } n \to \infty.$$
Let $k = k(t)$ be the unique integer with $(k-1)/n \le t < k/n$. Since $S_n^*$ is linear on such an interval, we have
$$A_n \subset \Big\{\exists t \in [0,1) : \big|S_k/\sqrt{n} - W_n(t)\big| > \varepsilon\Big\} \cup \Big\{\exists t \in [0,1) : \big|S_{k-1}/\sqrt{n} - W_n(t)\big| > \varepsilon\Big\}.$$
As $S_k = B(T_k) = \sqrt{n}\, W_n(T_k/n)$ we obtain
$$A_n \subset A_n^* := \Big\{\exists t \in [0,1) : \big|W_n(T_k/n) - W_n(t)\big| > \varepsilon\Big\} \cup \Big\{\exists t \in [0,1) : \big|W_n(T_{k-1}/n) - W_n(t)\big| > \varepsilon\Big\}.$$
Now we deal with the probabilities of events involving differences of the same process. For given $0 < \delta < 1$, the event $A_n^*$ is contained in the union of
$$\{\exists s, t \in [0,2] \text{ such that } |s - t| < \delta,\ |W_n(s) - W_n(t)| > \varepsilon\} \tag{1}$$
and
$$\{\exists t \in [0,1) \text{ such that } |T_k/n - t| \ge \delta \text{ or } |T_{k-1}/n - t| \ge \delta\}. \tag{2}$$
Note that the probability of (1) does not depend on $n$, because each $W_n$ is itself a standard Brownian motion. Choosing $\delta > 0$ small, we can make this probability as small as we wish, since Brownian motion is uniformly continuous on $[0,2]$.
Convergence of $T_k/n$ to $t$
It remains to show that for arbitrary fixed $\delta > 0$, the probability of (2) converges to zero as $n \to \infty$. To prove this we use that
$$\lim_{n\to\infty} \frac{T_n}{n} = \lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} (T_k - T_{k-1}) = 1 \quad \text{almost surely}.$$
This is Kolmogorov's law of large numbers for the sequence $\{T_k - T_{k-1}\}$ of independent identically distributed random variables with mean $1$. Observe that for every sequence $\{a_n\}$ of reals one has
$$\lim_{n\to\infty} \frac{a_n}{n} = 1 \implies \lim_{n\to\infty} \sup_{0 \le k \le n} \Big|\frac{a_k}{n} - \frac{k}{n}\Big| = 0.$$
This is a matter of plain (deterministic) arithmetic and easily checked. Hence we have, for every $\delta > 0$,
$$\lim_{n\to\infty} P\Big\{\sup_{0 \le k \le n} \Big|\frac{T_k}{n} - \frac{k}{n}\Big| > \delta\Big\} = 0. \tag{3}$$
Now recall that $t \in [(k-1)/n, k/n)$ and let $n > 2/\delta$. Then
$$P\Big\{\exists t \in [0,1) \text{ such that } \Big|\frac{T_k}{n} - t\Big| \ge \delta \text{ or } \Big|\frac{T_{k-1}}{n} - t\Big| \ge \delta\Big\} \le P\Big\{\sup_{1 \le k \le n} \Big(\Big|\frac{T_k}{n} - \frac{k-1}{n}\Big| \vee \Big|\frac{k}{n} - \frac{T_{k-1}}{n}\Big|\Big) > \delta\Big\}$$
$$\le P\Big\{\sup_{1 \le k \le n} \Big|\frac{T_k}{n} - \frac{k}{n}\Big| \ge \delta/2\Big\} + P\Big\{\sup_{1 \le k \le n} \Big|\frac{k}{n} - \frac{T_{k-1}}{n}\Big| \ge \delta/2\Big\},$$
and by (3) both summands converge to $0$.
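For completeness, the "easily checked" deterministic implication can be spelled out as follows (a sketch):

```latex
\text{Fix } \varepsilon > 0.
\text{ Since } a_k/k \to 1, \text{ choose } K \text{ with } |a_k - k| \le \varepsilon k
\text{ for all } k \ge K. \text{ Then, for every }
n \ge \max\Bigl(K,\ \tfrac{1}{\varepsilon}\max_{0 \le k < K}|a_k - k|\Bigr),
\\[4pt]
\sup_{0 \le k \le n}\Bigl|\frac{a_k}{n} - \frac{k}{n}\Bigr|
\;\le\; \max\Bigl(\max_{0 \le k < K}\frac{|a_k - k|}{n},\;
                  \max_{K \le k \le n}\frac{\varepsilon k}{n}\Bigr)
\;\le\; \varepsilon,
\\[4pt]
\text{since the first maximum is at most } \varepsilon \text{ by the choice of } n
\text{ and the second is at most } \varepsilon \text{ because } k \le n.
```

As $\varepsilon > 0$ was arbitrary, the supremum tends to $0$.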
Putting the two steps together: the probability of (1) can be made arbitrarily small by choosing $\delta$ small, and for each such $\delta$ the probability of (2) converges to $0$ as $n \to \infty$. This implies
$$\lim_{n\to\infty} P\Big\{\sup_{0 \le t \le 1} \Big|\frac{B(nt)}{\sqrt{n}} - S_n^*(t)\Big| \ge \varepsilon\Big\} = 0. \tag{4}$$
Approximating the random walk by Brownian motion
Choose the sequence of stopping times as in the previous subsection, and recall from the scaling property of Brownian motion that the random functions $\{W_n(t) : 0 \le t \le 1\}$ given by $W_n(t) = B(nt)/\sqrt{n}$ are standard Brownian motions. Suppose that $K$ is a closed subset of $C[0,1]$ and define the enlarged set
$$K[\varepsilon] = \{f \in C[0,1] : \|f - g\|_{\sup} \le \varepsilon \text{ for some } g \in K\}.$$
If $S_n^* \in K$ and $\|S_n^* - W_n\|_{\sup} \le \varepsilon$, then $W_n \in K[\varepsilon]$; hence we obtain the inequality for probabilities
$$P\{S_n^* \in K\} \le P\{W_n \in K[\varepsilon]\} + P\{\|S_n^* - W_n\|_{\sup} > \varepsilon\}.$$
As $n \to \infty$, the second term goes to $0$ because of (4), whereas the first term does not depend on $n$ and is equal to $P\{B \in K[\varepsilon]\}$ for a Brownian motion $B$. As $K$ is closed, we have
$$\lim_{\varepsilon \downarrow 0} P\{B \in K[\varepsilon]\} = P\Big\{B \in \bigcap_{\varepsilon > 0} K[\varepsilon]\Big\} = P\{B \in K\}.$$
Putting these facts together, we obtain $\limsup_{n\to\infty} P\{S_n^* \in K\} \le P\{B \in K\}$, which is condition (ii) in the Portmanteau theorem. Hence Donsker's invariance principle is proved. $\Box$
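As a numerical sanity check of the conclusion (not part of the proof), the continuous-mapping theorem applied to $f \mapsto \max_{0 \le t \le 1} f(t)$ gives $\max_t S_n^*(t) \to \max_t B(t)$ in distribution, and by the reflection principle $P\{\max_{0 \le t \le 1} B(t) \le x\} = 2\Phi(x) - 1$. The sketch below compares the empirical distribution function of the maximum of a scaled $\pm 1$ walk with this limit; the sample sizes and the evaluation point $x = 0.5$ are our own illustrative choices:

```python
import math
import random

random.seed(2)

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, m, x = 400, 4000, 0.5
hits = 0
for _ in range(m):
    s, peak = 0.0, 0.0
    for _ in range(n):
        s += random.choice((-1.0, 1.0))
        peak = max(peak, s)
    if peak / math.sqrt(n) <= x:         # running max of the scaled walk
        hits += 1

empirical = hits / m                     # P{max_t S_n^*(t) <= x}, estimated
exact = 2.0 * phi(x) - 1.0               # reflection principle for BM
```

For moderate $n$ the two values already agree to within a few percent, even though the increments are far from Gaussian.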