However, the additive property of integrals is yet to be proved. Below, we will list three key modes of convergence based on taking limits: convergence in probability, almost sure (a.s.) convergence, and convergence in distribution (in law). We do not assume the $X_n$'s are independent of each other.

A special case of Hölder's inequality is the Cauchy-Schwarz inequality: $\|fg\|_1 \le \|f\|_2 \|g\|_2.$
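The Cauchy-Schwarz inequality above can be sanity-checked numerically. Below is a minimal sketch on a finite measure space (counting measure on $\{0,\dots,4\}$), where the norms reduce to the familiar vector norms; the particular values of $f$ and $g$ are illustrative, not from the post.

```python
# Numerical check of Cauchy-Schwarz, ||fg||_1 <= ||f||_2 ||g||_2, on a finite
# measure space with counting measure. The values of f and g are arbitrary.
import math

f = [1.5, -2.0, 0.3, 4.1, -0.7]
g = [0.2, 1.1, -3.0, 0.5, 2.2]

norm_fg_1 = sum(abs(a * b) for a, b in zip(f, g))   # ||fg||_1
norm_f_2 = math.sqrt(sum(a * a for a in f))         # ||f||_2
norm_g_2 = math.sqrt(sum(b * b for b in g))         # ||g||_2

holds = norm_fg_1 <= norm_f_2 * norm_g_2
```

On counting measure this is exactly the classical Cauchy-Schwarz inequality for vectors, so `holds` is always `True` regardless of the chosen values.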
This post series is based on the textbook Probability: Theory and Examples, 5th edition (Durrett, 2019) and the lecture at Seoul National University, Republic of Korea (instructor: Prof. Johan Lim). Since our major interest throughout the textbook is the convergence of random variables and its rate, we need a toolbox for it.

In general, convergence will be to some limiting random variable, and we do not make any assumptions about independence. To say that $X_n$ is "close" to $X$, mathematicians either provide an upper bound on the distance between the two, or take a limit. A sequence $\{X_n\}_{n \in \mathbb{N}}$ is then said to converge in probability to $X$, denoted $X_n \to_p X$. Convergence in probability implies convergence in distribution.

In many cases, Fatou's lemma is used in the form $\int X \, dP \le \liminf\limits_n \int X_n \, dP$, where $X_n \to X \text{ a.s.}$ For Hölder's inequality, take $p, q \in (1,\infty)$ such that $\frac{1}{p} + \frac{1}{q} = 1$; for the special case, $p \ge 1$, $\int |f|^p \, d\mu < \infty$, and $\int |g|^p \, d\mu < \infty$. Throughout, $\{f_n\}: \Omega \to \mathbb{R}$ denotes a sequence of measurable functions.
This post is also based on the textbook Real and Complex Analysis, 3rd edition (Rudin, 1986) and the lecture at SNU (instructor: Prof. Insuk Seo).

Special cases of the theorem are Markov's inequality and Chebyshev's inequality. The WLLN states that if $X_1, X_2, X_3, \dots$ are i.i.d. random variables with mean $EX_i = \mu < \infty$, then the sample mean $\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}$ converges in probability to $\mu$. For the monotone convergence theorem, $\{f_n\}: \Omega \to [0,\infty]$ is a sequence of measurable functions.
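The WLLN can be illustrated exactly, without simulation, for Bernoulli variables: the deviation probability $P(|\bar{X}_n - \mu| > \varepsilon)$ is a binomial tail sum. A minimal sketch, with illustrative choices of $\varepsilon$ and $n$:

```python
# Exact illustration of the WLLN for i.i.d. Bernoulli(1/2) variables:
# P(|X_bar_n - 1/2| > eps) is computed exactly from the binomial pmf
# and shrinks as n grows. eps and the values of n are illustrative.
from math import comb

def deviation_prob(n, eps=0.1, p=0.5):
    """P(|S_n/n - p| > eps) for S_n ~ Binomial(n, p), computed exactly."""
    return sum(
        comb(n, k) * p**k * (1 - p)**(n - k)
        for k in range(n + 1)
        if abs(k / n - p) > eps
    )

probs = [deviation_prob(n) for n in (10, 100, 1000)]
```

For $n = 10$ the deviation probability is $352/1024 \approx 0.344$; by $n = 1000$ it is below $10^{-6}$, matching the convergence-in-probability statement.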
In statistics, a consistent (or asymptotically consistent) estimator is an estimator — a rule for computing estimates of a parameter $\theta_0$ — with the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to $\theta_0$.

Along with the convergence theorems, these integral inequalities will be used intensively throughout probability theory. It is worth noting that Fatou's lemma does not require convergence. It follows that convergence with probability 1, convergence in probability, and convergence in mean all imply convergence in distribution, so the latter mode of convergence is indeed the weakest. Convergence a.s. makes an assertion about the distribution of the entire random sequence of $X_t$'s. A useful special case is a constant limit ($Y = c \in \mathbb{R}$).
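The consistency of the sample mean can be sketched via Chebyshev's inequality: $P(|\bar{X}_n - \mu| \ge \varepsilon) \le \operatorname{Var}(\bar{X}_n)/\varepsilon^2 = \sigma^2/(n\varepsilon^2) \to 0$. The numbers below ($\sigma^2$, $\varepsilon$, the grid of $n$) are illustrative assumptions, not values from the post.

```python
# Chebyshev bound on the deviation of the sample mean:
# P(|X_bar_n - mu| >= eps) <= sigma^2 / (n * eps^2), which tends to 0 as n grows.
sigma2 = 4.0   # Var(X_i), assumed finite (illustrative)
eps = 0.5      # illustrative tolerance

chebyshev_bounds = [sigma2 / (n * eps**2) for n in (10, 100, 1000, 10000)]
```

The bounds shrink like $1/n$, which is the classical rate behind consistency of the sample mean.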
Fatou's lemma, another important convergence theorem, can be derived directly from the MCT. The latter fact will be used in the proof of the MCT, where $s: \Omega \to [0,\infty]$ denotes a simple function. In the following theorems, assume $f, g$ are measurable, and let $X$ be a non-negative random variable, that is, $P(X \ge 0) = 1$.

Markov's inequality: $X \ge 0$ a.s., $a > 0$ $\implies P(X \ge a) \le EX/a.$ Chebyshev's inequality: $a > 0$ $\implies P(|X| \ge a) \le EX^2/a^2.$ Since the expectation $EX$ is defined as a mere integral, all of the above theorems can be applied to it.

The most famous example of convergence in probability is the weak law of large numbers (WLLN). In general, convergence will be to some limiting random variable. However, this random variable might be a constant, so it also makes sense to talk about convergence to a real number. Note that convergence in probability does not imply almost sure convergence.
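Markov's inequality can be verified exactly on a small discrete distribution. A minimal sketch; the support points, probabilities, and threshold $a$ are illustrative choices.

```python
# Exact check of Markov's inequality P(X >= a) <= E[X]/a for a non-negative
# discrete random variable (values and probabilities are illustrative).
values = [0, 1, 2, 5, 10]
probs = [0.3, 0.3, 0.2, 0.15, 0.05]

ex = sum(v * p for v, p in zip(values, probs))            # E[X] = 1.95
a = 5.0
tail = sum(p for v, p in zip(values, probs) if v >= a)    # P(X >= 5) = 0.2

markov_holds = tail <= ex / a
```

Here the bound $EX/a = 0.39$ comfortably dominates the true tail probability $0.2$; Markov's inequality is typically loose, which is exactly why it holds so generally.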
Lebesgue's dominated convergence theorem (DCT) provides a tool not only for monotonically convergent sequences, but for general convergent sequences of functions that are uniformly dominated by some integrable function. The usefulness of the DCT is that it not only shows convergence of the integral, but also integrability of the limiting function and $L^1$ convergence to it. In fact, there are other convergence theorems — the bounded and the dominated one — in addition to the MCT.

Recall the Kolmogorov axioms: a probability measure $P$ is a real-valued function, defined on a collection $\mathcal{A}$ of events of $\Omega$, satisfying: (1) $P(A) \ge 0$ for every event $A$; (2) $P(\Omega) = 1$; (3) if $A_m \cap A_n = \emptyset$ for all $n \ne m$, then $P(\bigcup_n A_n) = \sum_n P(A_n)$.

The most basic tool in proving convergence in probability is Chebyshev's inequality: if $X$ is a random variable with $EX = \mu$ and $\mathrm{Var}(X) = \sigma^2$, then $P(|X - \mu| \ge k) \le \sigma^2/k^2$ for any $k > 0$. We now seek to prove that a.s. convergence implies convergence in probability. Convergence in probability is stronger than convergence in distribution: the implication is one-way. Almost sure convergence is sometimes called convergence with probability 1. Suppose that we have a sequence of random variables that converges in probability to a certain number $a$, and another sequence that converges in probability to some other number $b$; then their sum converges in probability to $a + b$. Comment: in the above example $X_n \to X$ in probability, so the latter does not imply convergence in the mean square either.
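The Kolmogorov axioms above can be checked mechanically on a finite sample space. A minimal sketch using a fair die; the space and probabilities are illustrative.

```python
# A finite probability space illustrating the Kolmogorov axioms, with countable
# additivity reduced to the finite case. A fair six-sided die (illustrative).
omega = {1, 2, 3, 4, 5, 6}
pmf = {w: 1 / 6 for w in omega}

def P(event):
    """Probability measure on subsets of omega."""
    return sum(pmf[w] for w in event)

# Axioms: non-negativity, total mass 1, additivity over disjoint events.
A, B = {1, 2}, {5, 6}                               # disjoint events
axiom1 = all(P({w}) >= 0 for w in omega)
axiom2 = abs(P(omega) - 1.0) < 1e-12
additive = abs(P(A | B) - (P(A) + P(B))) < 1e-12

# Subadditivity holds for arbitrary (possibly overlapping) events.
C, D = {1, 2, 3}, {3, 4}
subadditive = P(C | D) <= P(C) + P(D) + 1e-12       # 4/6 <= 5/6
```

Note that additivity requires disjointness, while subadditivity does not — for the overlapping events $C$ and $D$ above, $P(C \cup D) = 4/6$ is strictly less than $P(C) + P(D) = 5/6$.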
We begin with convergence in probability. The MCT allows us to prove the yet-to-be-shown property. Under technical conditions for the limit of the maximum to be the maximum of the limit, the estimator $\hat{\theta}(X_n)$ should converge in probability to $\theta_0$. However, our next theorem gives an important converse to part (c), when the limiting variable is a constant.
In the previous section, we defined the Lebesgue integral and the expectation of random variables and showed basic properties. In the proof of the MCT, writing $\alpha$ for the limit of the integrals $\int f_n \, d\mu$, we need to show that $\alpha = \int f \, d\mu$.

For the generalized tail bound, take $\varphi: \mathbb{R} \to \mathbb{R}$ with $\varphi \ge 0$; for Jensen's inequality, take $\varphi: \mathbb{R} \to \mathbb{R}$ convex, so that $\varphi(EX) \le E\varphi(X)$.
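Jensen's inequality for a convex $\varphi$ can be checked exactly on a discrete distribution. A minimal sketch with $\varphi(x) = x^2$; the support and weights are illustrative.

```python
# Exact check of Jensen's inequality phi(E[X]) <= E[phi(X)] for the convex
# function phi(x) = x^2 on a small discrete distribution (values illustrative).
xs = [-1.0, 0.0, 2.0, 3.0]
ps = [0.25, 0.25, 0.25, 0.25]

phi = lambda x: x * x                                # convex
ex = sum(x * p for x, p in zip(xs, ps))              # E[X] = 1.0
e_phi = sum(phi(x) * p for x, p in zip(xs, ps))      # E[phi(X)] = 3.5

jensen_holds = phi(ex) <= e_phi
```

The gap $E\varphi(X) - \varphi(EX) = 2.5$ here is exactly $\mathrm{Var}(X)$, since for $\varphi(x) = x^2$ Jensen's gap is the variance.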
The monotone convergence theorem (MCT) is for a sequence of non-negative measurable functions that increases monotonically to the limiting function. While the MCT is very useful, it can only be applied to sequences of functions that converge monotonically. Fatou's lemma, a corollary of the MCT, is another useful tool for our journey through probability theory, and we will use it to prove the next theorem. The bounded convergence theorem, in turn, applies to any sequence of measurable functions that is uniformly bounded almost surely. We state and prove the theorems only in the finite measure case, since our interest is in probability spaces. In the following, $f, g: \Omega \to [0,\infty]$ are measurable.

There are several different modes of convergence, that is, ways in which a sequence of random variables defined on a sample space may converge. Formally, $X_n \to_p X$ if for every $\varepsilon > 0$, $\lim_{n\to\infty} P(|X_n - X| > \varepsilon) = 0$. If in addition $\sum_n P(|X_n - X| > \varepsilon) < \infty$ for every $\varepsilon > 0$, then by the first lemma of Borel-Cantelli, $P(|X_n - X| > \varepsilon \text{ i.o.}) = 0$, so $X_n \to X$ almost surely. For any countable collection of events, $P(\bigcup_j A_j) \le \sum_j P(A_j)$; this property is known as subadditivity. The first two Kolmogorov axioms essentially bound how events are weighed.

We have also motivated a definition of weak convergence in terms of convergence of the underlying probability measures. The most famous example of convergence in probability remains the WLLN, and since the expectation $EX$ is a mere integral, all of the convergence theorems above can be applied to it.
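The claim that convergence in probability does not imply a.s. convergence is witnessed by the standard "typewriter" sequence: indicators of intervals that sweep $[0,1)$ with shrinking width. Below is a minimal sketch of this standard construction (the sample point $\omega_0 = 0.3$ is illustrative).

```python
# The "typewriter" sequence: X_n = 1_{I_n} on Omega = [0,1) with Lebesgue
# measure, where I_n sweeps [0,1) with width 2^{-k} for 2^k <= n < 2^{k+1}.
# P(X_n = 1) -> 0, so X_n -> 0 in probability, yet every omega lies in
# infinitely many I_n, so X_n(omega) does not converge for any omega.
def interval(n):
    """I_n = [j/2^k, (j+1)/2^k) where k = floor(log2 n) and j = n - 2^k."""
    k = n.bit_length() - 1
    j = n - 2**k
    return j / 2**k, (j + 1) / 2**k

def X(n, omega):
    lo, hi = interval(n)
    return 1 if lo <= omega < hi else 0

# P(X_n = 1) is the interval width, which tends to 0:
widths = [interval(n)[1] - interval(n)[0] for n in (1, 2, 4, 8, 1024)]

# ...but at any fixed omega, X_n(omega) = 1 infinitely often
# (once per dyadic level; 10 levels are fully swept by n <= 1023):
omega0 = 0.3
hits = sum(X(n, omega0) for n in range(1, 1025))
```

The widths shrink geometrically (down to $2^{-10}$ here), while the fixed point $\omega_0$ keeps getting hit once per level — the sequence of values at $\omega_0$ oscillates between 0 and 1 forever, so there is no almost sure limit.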