Borel-Cantelli Lemma
Many interesting theorems in probability rely on measure theory, which often provides a one-line argument that allows the proof to go through. Among them is the law of large numbers, which informally states that the average of iid samples approaches the mean of the distribution as the sample size grows large.
We would like to be more precise about what the word "approaches" means. Clearly we cannot replace it with "converges to": one could be unlucky and consistently draw below or above the mean, so that the sample average never comes close to the true mean. Despite this, we do know that consistently drawing below or above the mean, while not impossible, becomes increasingly unlikely as the sample grows large. This naturally leads to the idea of convergence in probability.
Convergence in probability
A sequence of random variables <math>X_n</math> converges in probability to <math>X</math> if for any <math>\epsilon > 0</math>, <math>\lim_{n \to \infty} P(|X_n - X| > \epsilon) = 0.</math>
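To make the definition concrete, here is a minimal Monte Carlo sketch; the sample sizes, the tolerance <math>\epsilon = 0.05</math>, and the coin-flip distribution are illustrative assumptions, not part of the original text. It estimates <math>P(|\overline X_n| > \epsilon)</math> for averages of centered fair coin flips and shows the estimate shrinking as <math>n</math> grows:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05        # hypothetical tolerance; any fixed epsilon > 0 works
trials = 2_000    # Monte Carlo repetitions per sample size

for n in [10, 100, 1_000, 10_000]:
    # each row: n centered fair coin flips (values +1/-1, mean 0)
    flips = rng.choice([-1.0, 1.0], size=(trials, n))
    sample_means = flips.mean(axis=1)
    # estimated P(|sample mean| > eps); should decrease as n grows
    print(n, np.mean(np.abs(sample_means) > eps))
</syntaxhighlight>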
Notice that this definition is simply convergence in measure on a probability space. Equipped with it, we can now state the weak law of large numbers: the sample average converges in probability to the mean. Bernoulli first proved this theorem for Bernoulli random variables in 1713, before tools like Chebyshev's inequality had been discovered. Later on, mathematicians proved more general versions that do not require the assumptions of finite variance or iid random variables.
Proof. Without loss of generality assume that the random variables are centered, and let <math>\overline X_n</math> denote the sample average. By Chebyshev's inequality, <math>P(|\overline X_n| > \epsilon) \leq \frac{\sigma^2}{n\epsilon^2} \to 0</math> as <math>n</math> approaches infinity. Note that for Chebyshev's inequality to hold we need finite variance.
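For completeness, here is the variance computation behind that bound, which the proof leaves implicit. Since the <math>X_i</math> are iid with variance <math>\sigma^2</math>,

<math>\operatorname{Var}(\overline X_n) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}(X_i) = \frac{\sigma^2}{n}, \qquad P(|\overline X_n| > \epsilon) \leq \frac{\operatorname{Var}(\overline X_n)}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2}.</math>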
It is natural to wonder about the possibility of obtaining a stronger notion of convergence, perhaps under stronger conditions than finite variance. To do so we need to formally define the stronger type of convergence that we need, which in the study of probability is referred to as "almost sure convergence".
Almost sure convergence
A sequence of random variables <math>X_n</math> converges almost surely to <math>X</math> if <math>P\left(\lim_{n \to \infty} X_n = X\right) = 1.</math>
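To see that almost sure convergence is strictly stronger, consider a standard example (not from the original revision): independent <math>X_n</math> with <math>P(X_n = 1) = 1/n</math> and <math>X_n = 0</math> otherwise. Then <math>X_n \to 0</math> in probability, since <math>P(|X_n| > \epsilon) = 1/n \to 0</math>, but because <math>\sum 1/n = \infty</math>, the second statement of the lemma below implies that <math>X_n = 1</math> infinitely often almost surely, so <math>X_n</math> does not converge to <math>0</math> almost surely. A minimal simulation sketch of this behavior, with hypothetical path counts and horizon:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
paths, horizon = 500, 20_000   # hypothetical simulation sizes

# independent X_n with P(X_n = 1) = 1/n, one row per sample path
n = np.arange(1, horizon + 1)
X = rng.random((paths, horizon)) < 1.0 / n

# 1-based index of the last 1 on each path within the horizon; by the
# second Borel-Cantelli statement there is almost surely no final 1,
# so this keeps growing as the horizon increases
last_one = horizon - np.argmax(X[:, ::-1], axis=1)
print("fraction of paths with a 1 after index 2000:",
      np.mean(last_one > 2_000))
</syntaxhighlight>

With these sizes, roughly 90% of the simulated paths still contain a 1 after index 2000, consistent with <math>X_n = 1</math> occurring infinitely often almost surely.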
The strong law of large numbers carries this stronger notion of convergence and states: the sample average converges almost surely to the mean. Before the proof, we first introduce a lemma.
Borel-Cantelli Lemma
Let <math>(\Omega, \mathcal F, P)</math> be a probability space, and <math>\{A_n\}</math> any sequence of events in <math>\mathcal F</math>. If <math>\sum_{n=1}^\infty P(A_n) < \infty</math>, then <math>P(\limsup_{n \to \infty} A_n) = 0</math>, where <math>\limsup_{n \to \infty} A_n = \bigcap_{N=1}^\infty \bigcup_{n=N}^\infty A_n</math> is the event that infinitely many of the <math>A_n</math> occur.
On the other hand, if the <math>A_n</math> are independent and <math>\sum_{n=1}^\infty P(A_n) = \infty</math>, then <math>P(\limsup_{n \to \infty} A_n) = 1</math>.
Proof.
Fix <math>\epsilon > 0</math>; since the series converges, there exists sufficiently large <math>N</math> such that <math>\sum_{n=N}^\infty P(A_n) < \epsilon</math>. By the union bound we obtain <math>P\left(\bigcup_{n=N}^\infty A_n\right) \leq \sum_{n=N}^\infty P(A_n) < \epsilon</math>. So <math>P\left(\bigcup_{n=N}^\infty A_n\right) \to 0</math> as <math>N \to \infty</math>. By continuity of measure from above (the sets <math>\bigcup_{n=N}^\infty A_n</math> decrease in <math>N</math>), we have <math>P\left(\limsup_{n \to \infty} A_n\right) = \lim_{N \to \infty} P\left(\bigcup_{n=N}^\infty A_n\right) = 0</math>.
The other statement follows from a similar computation, together with the independence assumption; see the sketch below.
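A sketch of that computation, under the independence assumption: for <math>M \geq N</math>, independence and the inequality <math>1 - x \leq e^{-x}</math> give

<math>P\left(\bigcap_{n=N}^{M} A_n^c\right) = \prod_{n=N}^{M}\left(1 - P(A_n)\right) \leq \exp\left(-\sum_{n=N}^{M} P(A_n)\right) \to 0 \quad \text{as } M \to \infty,</math>

so <math>P\left(\bigcup_{n=N}^\infty A_n\right) = 1</math> for every <math>N</math>, and taking the intersection over <math>N</math> yields <math>P(\limsup_{n \to \infty} A_n) = 1</math>.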