Dominated Convergence Theorem

From Optimal Transport Wiki
In measure theory, the dominated convergence theorem is a cornerstone of Lebesgue integration. It can be viewed as a culmination of the theory's basic convergence results, and is a general statement about the interplay between limits and integrals.


==Theorem Statement==
Consider the measure space <math> (X,\mathcal{M},\lambda) </math>. Suppose <math>\{f_n\}</math> is a sequence in <math>L^1(\lambda)</math> such that
# <math>f_n \to f</math> a.e.
# there exists <math>g \in L^1(\lambda)</math> such that <math>|f_n| \leq g</math> a.e. for all <math>n \in \mathbb{N}</math>.
Then <math>f \in L^1(\lambda)</math> and <math>\int f = \lim_{n \to \infty} \int f_n</math>. <ref name="Folland">Gerald B. Folland, ''Real Analysis: Modern Techniques and Their Applications, second edition'', §2.3 </ref>

==Proof of Theorem==
<math>f</math> is a measurable function in the sense that it is a.e. equal to a measurable function, since it is the limit of <math>f_n</math> except on a null set. Also <math>|f| \leq g</math> a.e., so <math>f \in L^1(\lambda)</math>.

Now we have <math>g + f_n \geq 0</math> a.e. and <math>g - f_n \geq 0</math> a.e., to which we may apply Fatou's lemma to obtain
<math display="block"> \int g + \int f = \int \liminf_{n \to \infty} (g + f_n) \leq \liminf_{n \to \infty} \int (g + f_n) = \int g + \liminf_{n \to \infty} \int f_n ,</math>
where the equalities follow from linearity of the integral and the inequality follows from Fatou's lemma. We similarly obtain
<math display="block"> \int g - \int f = \int \liminf_{n \to \infty} (g - f_n) \leq \liminf_{n \to \infty} \int (g - f_n) = \int g - \limsup_{n \to \infty} \int f_n .</math>
Since <math>\int g < \infty</math>, these imply
<math display="block"> \limsup_{n \to \infty} \int f_n \leq \int f \leq \liminf_{n \to \infty} \int f_n ,</math>
from which the result follows. <ref name="Folland"/> <ref>Craig, Katy. ''MATH 201A Lecture 15''. UC Santa Barbara, Fall 2020.</ref>

==Applications of Theorem==
# Suppose we want to compute <math>\lim_{n \to \infty} \int_{[0, 1]} \frac{1 + nx^2}{(1 + x^2)^n}\ dx </math>. <ref name="Folland2">Gerald B. Folland, ''Real Analysis: Modern Techniques and Their Applications, second edition'', §2.3.28 </ref> Denote the integrand by <math>f_n</math> and observe that <math>|f_n| \leq 1</math> for all <math>n \in \mathbb{N}</math>, since <math>(1 + x^2)^n \geq 1 + nx^2</math> by Bernoulli's inequality, and that the constant function <math>1_{[0, 1]} \in L^1(\lambda)</math>. The dominated convergence theorem therefore allows us to move the limit inside the integral and compute it as usual; the limit is evaluated explicitly after this list.
# Using the theorem, we know there does not exist an integrable dominating function for the sequence <math>f_n</math> defined by <math>f_n(x) = n1_{[0, \frac{1}{n}]}</math>, because <math>f_n \to 0</math> pointwise a.e. (everywhere except at <math>x = 0</math>) while <math>\lim_{n \to \infty} \int f_n = 1 \neq 0 = \int \lim_{n \to \infty} f_n </math>. <ref>Craig, Katy. ''MATH 201A Lecture 15''. UC Santa Barbara, Fall 2020.</ref>
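For the first application above, the limit can be evaluated explicitly (a supplementary verification, not quoted from the cited reference): for each fixed <math>x \in (0, 1]</math> the denominator <math>(1+x^2)^n</math> grows geometrically in <math>n</math> while the numerator <math>1 + nx^2</math> grows only linearly, so the integrand tends to <math>0</math>; at <math>x = 0</math> the integrand equals <math>1</math> for every <math>n</math>. The integrand therefore tends to <math>0</math> a.e. on <math>[0, 1]</math>, and the dominated convergence theorem gives
<math display="block"> \lim_{n \to \infty} \int_{[0, 1]} \frac{1 + nx^2}{(1 + x^2)^n}\ dx = \int_{[0, 1]} \lim_{n \to \infty} \frac{1 + nx^2}{(1 + x^2)^n}\ dx = 0. </math>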
 
==Another Application: Stirling's Formula==
 
Stirling's formula states that <math display="block"> n! \sim \sqrt{2\pi n} n^ne^{-n} </math> as <math> n\rightarrow \infty </math>. We offer a proof here which relies on the Dominated Convergence Theorem.
 
''Proof:'' Repeated integration by parts yields the formula
<math display="block"> n!= \int_0^\infty t^ne^{-t}\ dt</math>
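For completeness, the repeated integration by parts referred to here is the standard recursion (spelled out here, though it is not written out in the original):
<math display="block"> \int_0^\infty t^n e^{-t}\ dt = \Big[ -t^n e^{-t} \Big]_0^\infty + n \int_0^\infty t^{n-1} e^{-t}\ dt = n \int_0^\infty t^{n-1} e^{-t}\ dt, </math>
which, applied <math> n </math> times starting from <math> \int_0^\infty e^{-t}\ dt = 1 </math>, yields the formula for <math> n! </math>.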
We shall estimate the integral above. Making the variable change <math> t=n+s </math> yields
<math display="block"> \int_{-n}^\infty (n+s)^ne^{-n-s}\ ds </math> Simplifying, this becomes
<math display="block"> n^ne^{-n}\int_{-n}^\infty \left(1+\frac{s}{n}\right)^ne^{-s}\ ds</math>
Combining the integrand into a single exponential,
<math display="block"> n^ne^{-n}\int_{-n}^\infty \exp\left(n\log\left(1+\frac{s}{n}\right) -s \right)\ ds</math>
We want to show that this integral is asymptotic to the Gaussian. To this end, make the scaling substitution <math> s= \sqrt{n}x </math> to obtain
<math display="block"> \sqrt{n}n^ne^{-n}\int_{-\sqrt{n}}^\infty \exp\left(n\log\left(1+\frac{x}{\sqrt{n}}\right) -\sqrt{n}x \right)\ dx</math>
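In other words, writing <math> I_n := \int_{-\sqrt{n}}^\infty \exp\left(n\log\left(1+\frac{x}{\sqrt{n}}\right) -\sqrt{n}x \right)\ dx </math> (a shorthand introduced here for readability), we have <math> n! = \sqrt{n}\, n^n e^{-n} I_n </math>, so Stirling's formula amounts to showing that <math> I_n \to \sqrt{2\pi} </math> as <math> n \to \infty </math>.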
Since the function <math> n\log\left(1+\frac{x}{\sqrt{n}}\right) </math> equals zero and has derivative <math> \sqrt{n} </math> at the origin, and has second derivative <math> \frac{-1}{(1+x/\sqrt{n})^2}</math>, applying the fundamental theorem of calculus twice yields
<math display="block"> n\log\left(1+\frac{x}{\sqrt{n}}\right)-\sqrt{n}x = -\int_0^x \frac{x-y}{(1+y/\sqrt{n})^2}\ dy </math> As a consequence we have the upper bounds
<math display="block"> n\log\left(1+\frac{x}{\sqrt{n}}\right)-\sqrt{n}x \leq -cx^2 </math> for some <math> c>0 </math> when <math> |x| \leq \sqrt{n} </math> and
<math display="block"> n\log\left(1+\frac{x}{\sqrt{n}}\right)-\sqrt{n}x \leq -c|x|\sqrt{n} </math>
when <math> |x| > \sqrt{n} </math>. Extending the integrand by zero for <math> x \leq -\sqrt{n} </math>, these bounds keep the exponential in the integrand <math> \exp\left(n\log\left(1+\frac{x}{\sqrt{n}}\right) -\sqrt{n}x \right) </math> bounded by the fixed <math> L^1(\mathbb{R}) </math> function <math> e^{-cx^2} + e^{-c|x|} </math> (in the second regime we use <math> \sqrt{n} \geq 1 </math>). By the Dominated Convergence Theorem,
<math display="block"> \lim_{n\to\infty}\int_{-\sqrt{n}}^\infty \exp\left(n\log\left(1+\frac{x}{\sqrt{n}}\right) -\sqrt{n}x \right)\ dx = \int_{-\infty}^\infty \exp\left(-\frac{x^2}{2}\right)\ dx </math> where the pointwise convergence
<math display="block"> \exp\left(n\log\left(1+\frac{x}{\sqrt{n}}\right) -\sqrt{n}x \right)\rightarrow \exp\left(-\frac{x^2}{2}\right) </math> holds for every fixed <math> x </math>, since expanding the Taylor series of the logarithm gives <math> n\log\left(1+\frac{x}{\sqrt{n}}\right)-\sqrt{n}x = -\frac{x^2}{2} + O\!\left(\frac{|x|^3}{\sqrt{n}}\right) \to -\frac{x^2}{2} </math>. The final integral is the classic Gaussian integral, which equals <math> \sqrt{2\pi} </math>. Hence <math> I_n \to \sqrt{2\pi} </math>, and multiplying by <math> \sqrt{n}\, n^ne^{-n} </math> gives <math> n! \sim \sqrt{2\pi n}\, n^ne^{-n} </math>, proving Stirling's formula. See <ref> Tao, Terence. [https://terrytao.wordpress.com/2010/01/02/254a-notes-0a-stirlings-formula/#point ''254A, Notes 0a: Stirling's Formula'']. What's New, 2 January 2010.</ref> for a more motivated account of this proof.
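As a quick numerical illustration of the asymptotic (a minimal sanity check, assuming only a standard Python 3 interpreter and its standard library; it is not part of the proof), one can compare <math> n! </math> with <math> \sqrt{2\pi n}\, n^ne^{-n} </math> directly:
<syntaxhighlight lang="python">
import math

# Compare n! with the Stirling approximation sqrt(2*pi*n) * n^n * e^(-n).
# The ratio should tend to 1 as n grows.
for n in (1, 5, 10, 50, 100):
    stirling = math.sqrt(2 * math.pi * n) * n ** n * math.exp(-n)
    print(n, math.factorial(n) / stirling)
</syntaxhighlight>
The printed ratios approach <math> 1 </math> from above as <math> n </math> increases, consistent with the asymptotic.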


==References==

# Gerald B. Folland, ''Real Analysis: Modern Techniques and Their Applications, second edition'', §2.3.
# Craig, Katy. ''MATH 201A Lecture 15''. UC Santa Barbara, Fall 2020.
# Gerald B. Folland, ''Real Analysis: Modern Techniques and Their Applications, second edition'', §2.3.28.
# Craig, Katy. ''MATH 201A Lecture 15''. UC Santa Barbara, Fall 2020.
# Tao, Terence. [https://terrytao.wordpress.com/2010/01/02/254a-notes-0a-stirlings-formula/#point ''254A, Notes 0a: Stirling's Formula'']. What's New, 2 January 2010.