In probability theory, the characteristic function of a random variable completely determines its distribution. The method of characteristic functions is one of the main tools of the analytic theory of probability. This appears very clearly in the proofs of limit theorems and, in particular, in the proof of the central limit theorem, which generalizes the de Moivre-Laplace theorem.
Definition. If $X$ is a scalar random variable with distribution function $F_X$, then its characteristic function is
$$\varphi_X(t) = \mathbb{E}e^{itX} = \int_{\mathbb{R}} e^{itx}\, dF_X(x),$$
where $i$ is the imaginary unit and $t \in \mathbb{R}$. If $F_X(x)$ has a density $f = f(x)$, then
$$\varphi_X(t) = \mathbb{E}e^{itX} = \int_{\mathbb{R}} e^{itx} f(x)\, dx.$$
In other words, in this case the characteristic function is just the Fourier transform of $f(x)$.
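As a quick sanity check on the definition, the expectation $\mathbb{E}e^{itX}$ can be approximated by a sample average. A minimal Python sketch (standard library only; the sample size, the choice of a standard normal $X$, and the tolerance are arbitrary illustrative choices):

```python
import cmath
import random

def empirical_cf(samples, t):
    # Approximate E e^{itX} by the sample average of e^{itx}
    return sum(cmath.exp(1j * t * x) for x in samples) / len(samples)

random.seed(0)
# Standard normal samples; the exact characteristic function is exp(-t^2/2)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
t = 1.0
approx = empirical_cf(xs, t)
exact = cmath.exp(-t**2 / 2)
assert abs(approx - exact) < 0.05
```

The Monte Carlo error here is of order $1/\sqrt{n}$, so the agreement is only approximate, but it makes the definition concrete.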
Examples
Before diving into the theory, let's consider some examples of characteristic functions.
Coin flip
If $P(X = 1) = P(X = -1) = 1/2$, then
$$\mathbb{E}e^{itX} = \frac{e^{it} + e^{-it}}{2} = \cos t.$$
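This identity is easy to verify numerically. A short Python sketch (the helper `coin_cf` is a name introduced here for illustration):

```python
import cmath
import math

def coin_cf(t):
    # E e^{itX} for P(X = 1) = P(X = -1) = 1/2
    return 0.5 * (cmath.exp(1j * t) + cmath.exp(-1j * t))

# The two exponentials combine to cos(t), so phi is real-valued here
for t in (0.0, 0.5, 1.0, math.pi):
    assert abs(coin_cf(t) - math.cos(t)) < 1e-12
```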
Poisson distribution
If $P(X = k) = \frac{e^{-\lambda}\lambda^k}{k!}$ for $k = 0, 1, 2, \ldots$, then
$$\mathbb{E}e^{itX} = \sum_{k=0}^{\infty} \frac{e^{-\lambda}\lambda^k}{k!}\, e^{ikt} = \exp\left(\lambda\left(e^{it} - 1\right)\right).$$
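The series and the closed form can be compared numerically. A Python sketch (truncating the series at 100 terms, which makes the tail negligible for small $\lambda$; the function names are introduced here for illustration):

```python
import cmath
import math

def poisson_cf_series(lam, t, terms=100):
    # Truncated series: sum_k e^{-lam} * lam^k / k! * e^{ikt}
    return sum(math.exp(-lam) * lam**k / math.factorial(k) * cmath.exp(1j * k * t)
               for k in range(terms))

def poisson_cf_closed(lam, t):
    # Closed form: exp(lam * (e^{it} - 1))
    return cmath.exp(lam * (cmath.exp(1j * t) - 1))

lam, t = 3.0, 0.7
assert abs(poisson_cf_series(lam, t) - poisson_cf_closed(lam, t)) < 1e-12
```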
Normal distribution
If $X \sim \mathcal{N}(\mu, \sigma^2)$ with probability density function
$$\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),$$
then the characteristic function is
$$\exp\left(i\mu t - \frac{\sigma^2 t^2}{2}\right).$$
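The closed form can be checked against a direct numerical integration of $e^{itx} f(x)$. A Python sketch (the integration range of ten standard deviations and the grid size are arbitrary choices that make the truncation and discretization errors negligible):

```python
import cmath
import math

def normal_cf_numeric(mu, sigma, t, z_max=10.0, n=20_000):
    # Midpoint-rule approximation of E e^{itX} = int e^{itx} f(x) dx,
    # integrating over mu +/- z_max standard deviations
    h = 2 * z_max / n
    total = 0j
    for k in range(n):
        z = -z_max + (k + 0.5) * h
        x = mu + sigma * z
        dens = math.exp(-z**2 / 2) / (math.sqrt(2 * math.pi) * sigma)
        total += cmath.exp(1j * t * x) * dens * sigma * h
    return total

mu, sigma, t = 1.0, 2.0, 0.8
exact = cmath.exp(1j * mu * t - sigma**2 * t**2 / 2)
assert abs(normal_cf_numeric(mu, sigma, t) - exact) < 1e-6
```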
Properties of the characteristic function
Let X be a random variable with distribution function F=F(x) and φ(t)=EeitX its characteristic function. Then φ has the following properties:
A. $\varphi(0) = 1$
This property is obvious.
B. $\varphi(-t) = \overline{\varphi(t)}$

Indeed,
$$\varphi(-t) = \mathbb{E}[\cos(-tX) + i\sin(-tX)] = \mathbb{E}[\cos(tX) - i\sin(tX)] = \overline{\varphi(t)}.$$
C. $|\varphi(t)| \le 1$

By Jensen's inequality and the fact that $|\cdot|$ is convex,
$$|\varphi(t)| = \left|\mathbb{E}e^{itX}\right| \le \mathbb{E}\left|e^{itX}\right| = 1,$$
since $|e^{itX}| = 1$.
D. $\varphi(t)$ is uniformly continuous on $\mathbb{R}$

Again, by Jensen's inequality and the convexity of $|\cdot|$,
$$|\varphi(t+h) - \varphi(t)| = \left|\mathbb{E}\left(e^{i(t+h)X} - e^{itX}\right)\right| \le \mathbb{E}\left|e^{i(t+h)X} - e^{itX}\right| = \mathbb{E}\left|e^{itX}\right|\left|e^{ihX} - 1\right| = \mathbb{E}\left|e^{ihX} - 1\right|,$$
and this bound does not depend on $t$. Since $|e^{ihX} - 1| \le 2$ a.e. and $e^{ihX} - 1 \to 0$ pointwise as $h \to 0$, the dominated convergence theorem gives $\mathbb{E}|e^{ihX} - 1| \to 0$, so $\varphi$ is uniformly continuous.
E. $\mathbb{E}e^{it(aX+b)} = e^{itb}\varphi(at)$ for all $a, b \in \mathbb{R}$

Indeed, $\mathbb{E}e^{it(aX+b)} = e^{itb}\,\mathbb{E}e^{i(at)X} = e^{itb}\varphi(at)$.
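Properties A, B, C, and E can all be checked numerically on a concrete characteristic function, e.g. the Poisson one derived above. A Python sketch ($\lambda$, the test points, and the tolerances are arbitrary choices):

```python
import cmath
import math

LAM = 2.0

def cf(t, lam=LAM):
    # Characteristic function of Poisson(lam): exp(lam * (e^{it} - 1))
    return cmath.exp(lam * (cmath.exp(1j * t) - 1))

# A: phi(0) = 1
assert abs(cf(0.0) - 1) < 1e-12

# B: phi(-t) is the complex conjugate of phi(t)
assert abs(cf(-1.3) - cf(1.3).conjugate()) < 1e-12

# C: |phi(t)| <= 1 everywhere
assert all(abs(cf(t)) <= 1 + 1e-12 for t in (-5.0, 0.3, 2.0, 7.1))

# E: E e^{it(aX+b)} = e^{itb} * phi(a t); compare against the direct series
a, b, t = 2.0, 1.5, 0.4
lhs = cmath.exp(1j * t * b) * cf(a * t)
rhs = sum(math.exp(-LAM) * LAM**k / math.factorial(k) * cmath.exp(1j * t * (a * k + b))
          for k in range(100))
assert abs(lhs - rhs) < 1e-10
```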
Uniqueness
The following theorem shows that the distribution function is uniquely determined by its characteristic function.
Theorem (Uniqueness). Let $F$ and $G$ be distribution functions with the same characteristic function, i.e.
$$\int_{-\infty}^{\infty} e^{itx}\, dF(x) = \int_{-\infty}^{\infty} e^{itx}\, dG(x)$$
for all $t \in \mathbb{R}$. Then $F(x) \equiv G(x)$.
Proof. Choose $a, b \in \mathbb{R}$ with $a < b$ and $\varepsilon > 0$, and consider the piecewise-linear function $f^\varepsilon = f^\varepsilon(x)$ shown in Figure 33: it vanishes for $x \le a$ and for $x \ge b + \varepsilon$, equals $1$ on $[a + \varepsilon, b]$, and is linear in between. We show that
$$\int_{-\infty}^{\infty} f^\varepsilon(x)\, dF(x) = \int_{-\infty}^{\infty} f^\varepsilon(x)\, dG(x). \qquad (*)$$
Let $n$ be large enough that $[a - \varepsilon, b + \varepsilon] \subseteq [-n, n]$, and let $\{\delta_n\}$ be a sequence with $1 \ge \delta_n \downarrow 0$ as $n \to \infty$. Like every continuous function on $[-n, n]$ that has equal values at the endpoints, $f^\varepsilon$ can be uniformly approximated by trigonometric polynomials (Weierstrass's theorem), i.e. there is a finite sum
$$f_n^\varepsilon(x) = \sum_k a_k \exp\left(\frac{i\pi k x}{n}\right)$$
such that
$$\sup_{-n \le x \le n} \left|f^\varepsilon(x) - f_n^\varepsilon(x)\right| \le \delta_n.$$
Let us extend the periodic function $f_n^\varepsilon$ to all of $\mathbb{R}$ and observe that
$$\sup_x \left|f_n^\varepsilon(x)\right| \le 2.$$
Since $F$ and $G$ have the same characteristic function, each exponential $e^{i\pi k x/n}$ integrates equally against $dF$ and $dG$, and hence
$$\int_{-\infty}^{\infty} f_n^\varepsilon(x)\, dF(x) = \int_{-\infty}^{\infty} f_n^\varepsilon(x)\, dG(x).$$
Therefore
$$\left|\int_{-\infty}^{\infty} f^\varepsilon\, dF - \int_{-\infty}^{\infty} f^\varepsilon\, dG\right| = \left|\int_{-n}^{n} f^\varepsilon\, dF - \int_{-n}^{n} f^\varepsilon\, dG\right| \le \left|\int_{-n}^{n} f_n^\varepsilon\, dF - \int_{-n}^{n} f_n^\varepsilon\, dG\right| + 2\delta_n \le \left|\int_{-\infty}^{\infty} f_n^\varepsilon\, dF - \int_{-\infty}^{\infty} f_n^\varepsilon\, dG\right| + 2\delta_n + 2F\left(\overline{[-n,n]}\right) + 2G\left(\overline{[-n,n]}\right) = 2\delta_n + 2F\left(\overline{[-n,n]}\right) + 2G\left(\overline{[-n,n]}\right),$$
where $\overline{A}$ denotes the complement of $A$ and $F(A) = \int_A dF(x)$, $G(A) = \int_A dG(x)$. As $n \to \infty$, the right-hand side tends to zero, and this establishes $(*)$.

As $\varepsilon \to 0$, we have $f^\varepsilon(x) \to I_{(a,b]}(x)$ pointwise, so by the dominated convergence theorem $(*)$ yields
$$\int_{-\infty}^{\infty} I_{(a,b]}(x)\, dF(x) = \int_{-\infty}^{\infty} I_{(a,b]}(x)\, dG(x),$$
i.e. $F(b) - F(a) = G(b) - G(a)$. Letting $a \to -\infty$, we obtain $F(b) = G(b)$, and since $b$ is arbitrary, $F(x) = G(x)$ for all $x \in \mathbb{R}$. This completes the proof of the theorem. $\square$
The inversion formula for the characteristic function
The next theorem gives an explicit representation of the distribution function $F$ in terms of its characteristic function
$$\varphi(t) = \int_{-\infty}^{\infty} e^{itx}\, dF(x).$$
General case
For every pair of points $a$ and $b$ ($a < b$) at which $F = F(x)$ is continuous,
$$F(b) - F(a) = \lim_{c \to \infty} \frac{1}{2\pi} \int_{-c}^{c} \frac{e^{-ita} - e^{-itb}}{it}\, \varphi(t)\, dt.$$
Proof. Consider the integral
$$I_T = \frac{1}{2\pi}\int_{-T}^{T} \frac{e^{-ita} - e^{-itb}}{it}\,\varphi(t)\, dt = \frac{1}{2\pi}\int_{-T}^{T}\int_{\mathbb{R}} \frac{e^{-ita} - e^{-itb}}{it}\, e^{itx}\, dF(x)\, dt.$$
The integrand may look bad near $t = 0$, but if we observe that
$$\left|\frac{e^{-ita} - e^{-itb}}{it}\right| = \left|\int_a^b e^{-itx}\, dx\right| \le b - a,$$
then
$$\int_{-T}^{T}\int_{-\infty}^{\infty} (b - a)\, dF(x)\, dt \le 2T(b - a) < \infty.$$
Now we can apply Fubini's theorem. Since $\cos$ is even, $\frac{\cos(tc)}{t}$ is odd in $t$ and integrates to zero over the symmetric interval $[-T, T]$, so only the sine terms survive:
$$I_T = \frac{1}{2\pi}\int_{\mathbb{R}}\int_{-T}^{T} \frac{e^{it(x-a)} - e^{it(x-b)}}{it}\, dt\, dF(x) = \frac{1}{2\pi}\int_{\mathbb{R}}\left\{\int_{-T}^{T} \frac{\sin(t(x-a))}{t}\, dt - \int_{-T}^{T} \frac{\sin(t(x-b))}{t}\, dt\right\} dF(x) = \frac{1}{2\pi}\int_{\mathbb{R}}\left\{\int_{-T(x-a)}^{T(x-a)} \frac{\sin u}{u}\, du - \int_{-T(x-b)}^{T(x-b)} \frac{\sin v}{v}\, dv\right\} dF(x),$$
where we substituted $u = t(x-a)$ and $v = t(x-b)$. Define the integrand as
$$\Psi_T(x) = \frac{1}{2\pi}\left[\int_{-T(x-a)}^{T(x-a)} \frac{\sin u}{u}\, du - \int_{-T(x-b)}^{T(x-b)} \frac{\sin v}{v}\, dv\right].$$
The function
$$g(s, t) = \int_s^t \frac{\sin v}{v}\, dv$$
is uniformly continuous in $s$ and $t$, and $g(s, t) \to \pi$ as $s \to -\infty$ and $t \to \infty$. Hence there is a constant $C$ such that $|\Psi_T(x)| \le C < \infty$ for all $T$ and $x$. Moreover,
$$\Psi_T(x) \to \Psi(x), \quad T \to \infty,$$
where
$$\Psi(x) = \begin{cases} 0, & x < a \text{ or } x > b, \\ \tfrac{1}{2}, & x = a \text{ or } x = b, \\ 1, & a < x < b. \end{cases}$$
Let $\mu$ be the measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ such that $\mu(a, b] = F(b) - F(a)$. Then, applying the dominated convergence theorem, we find that, as $T \to \infty$,
$$I_T = \int_{-\infty}^{\infty} \Psi_T(x)\, dF(x) \to \int_{-\infty}^{\infty} \Psi(x)\, dF(x) = \mu(a, b) + \tfrac{1}{2}\mu\{a\} + \tfrac{1}{2}\mu\{b\} = F(b-) - F(a) + \tfrac{1}{2}\left[F(a) - F(a-) + F(b) - F(b-)\right] = \frac{F(b) + F(b-)}{2} - \frac{F(a) + F(a-)}{2} = F(b) - F(a),$$
where the last equality holds whenever $a$ and $b$ are points of continuity of $F(x)$. $\square$
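The inversion formula can be illustrated numerically for the coin flip, whose distribution has atoms at $\pm 1$: taking the continuity points $a = 0$ and $b = 2$ should recover $P(X = 1) = 1/2$. A Python sketch (the cutoff $c$, the grid size, and the tolerance are arbitrary choices; convergence in $c$ is slow, so only rough agreement is expected):

```python
import cmath
import math

def cf_coin(t):
    # Characteristic function of the coin flip P(X = 1) = P(X = -1) = 1/2
    return 0.5 * (cmath.exp(1j * t) + cmath.exp(-1j * t))

def inversion(a, b, c=100.0, n=200_000):
    # Midpoint rule for (1/2pi) int_{-c}^{c} (e^{-ita} - e^{-itb}) / (it) phi(t) dt;
    # the grid is offset so that t = 0 is never evaluated
    h = 2 * c / n
    total = 0j
    for k in range(n):
        t = -c + (k + 0.5) * h
        total += (cmath.exp(-1j * t * a) - cmath.exp(-1j * t * b)) / (1j * t) * cf_coin(t) * h
    return (total / (2 * math.pi)).real

# F(2) - F(0) = P(0 < X <= 2) = P(X = 1) = 1/2
val = inversion(0.0, 2.0)
assert abs(val - 0.5) < 0.05
```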
When $\varphi(t)$ is integrable
If $\int_{-\infty}^{\infty} |\varphi(t)|\, dt < \infty$, then the distribution function $F(x)$ has a density $f(x)$ given by
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-itx}\varphi(t)\, dt.$$
Proof. Let $\int_{-\infty}^{\infty} |\varphi(t)|\, dt < \infty$ and write
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-itx}\varphi(t)\, dt.$$
It follows from the dominated convergence theorem that this is a continuous function of $x$ and therefore integrable on $[a, b]$. Consequently, applying Fubini's theorem, we find that
$$\int_a^b f(x)\, dx = \frac{1}{2\pi}\int_a^b \left(\int_{-\infty}^{\infty} e^{-itx}\varphi(t)\, dt\right) dx = \frac{1}{2\pi}\int_{-\infty}^{\infty} \varphi(t)\left[\int_a^b e^{-itx}\, dx\right] dt = \lim_{c \to \infty} \frac{1}{2\pi}\int_{-c}^{c} \varphi(t)\left[\int_a^b e^{-itx}\, dx\right] dt = \lim_{c \to \infty} \frac{1}{2\pi}\int_{-c}^{c} \frac{e^{-ita} - e^{-itb}}{it}\,\varphi(t)\, dt = F(b) - F(a)$$
for all points $a$ and $b$ of continuity of $F(x)$. Hence it follows that
$$F(x) = \int_{-\infty}^{x} f(y)\, dy, \quad x \in \mathbb{R},$$
and since $f(x)$ is continuous and $F(x)$ is nondecreasing, $f(x)$ is the density of $F(x)$. This completes the proof of the theorem. $\square$
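Since the standard normal characteristic function $e^{-t^2/2}$ is integrable, the theorem applies, and the density can be recovered numerically from the formula above. A Python sketch (the cutoff and grid size are arbitrary choices; the Gaussian decay makes truncation and discretization errors negligible):

```python
import cmath
import math

def density_from_cf(x, c=40.0, n=80_000):
    # f(x) = (1/2pi) int e^{-itx} phi(t) dt with phi(t) = exp(-t^2/2),
    # the standard normal characteristic function, truncated to [-c, c]
    h = 2 * c / n
    total = 0j
    for k in range(n):
        t = -c + (k + 0.5) * h
        total += cmath.exp(-1j * t * x) * math.exp(-t**2 / 2) * h
    return (total / (2 * math.pi)).real

# Compare with the standard normal density at a few points
for x in (-1.0, 0.0, 2.0):
    exact = math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
    assert abs(density_from_cf(x) - exact) < 1e-6
```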