Mean dependence
In probability theory, a random variable Y is said to be mean independent of a random variable X if and only if its conditional mean E(Y | X = x) equals its (unconditional) mean E(Y) for all x such that the probability density/mass of X at x, f_X(x), is not zero. Otherwise, Y is said to be mean dependent on X.
Stochastic independence implies mean independence, but the converse is not true; moreover, mean independence implies uncorrelatedness, while again the converse is not true. Unlike stochastic independence and uncorrelatedness, mean independence is not symmetric: it is possible for Y to be mean independent of X even though X is mean dependent on Y.
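Both the one-way implications and the asymmetry can be checked on a small discrete example (a hypothetical illustration, not from the article): let X be uniform on {-1, 0, 1} and Y = X². Then X and Y are uncorrelated, Y is mean dependent on X (since E(Y | X = x) = x² varies with x), yet X is mean independent of Y (E(X | Y = y) = 0 = E(X) for every attainable y). A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Joint pmf of (X, Y) with X uniform on {-1, 0, 1} and Y = X**2
# (an illustrative example, not taken from the article).
pmf = {(-1, 1): Fraction(1, 3), (0, 0): Fraction(1, 3), (1, 1): Fraction(1, 3)}

X = lambda xy: xy[0]
Y = lambda xy: xy[1]

def mean(var):
    # E(var) under the joint pmf
    return sum(p * var(xy) for xy, p in pmf.items())

def cond_mean(target, given, value):
    # E(target | given = value), assuming P(given = value) > 0
    mass = sum(p for xy, p in pmf.items() if given(xy) == value)
    return sum(p * target(xy) for xy, p in pmf.items() if given(xy) == value) / mass

# Uncorrelated: Cov(X, Y) = E(XY) - E(X) E(Y) = 0
cov = mean(lambda xy: X(xy) * Y(xy)) - mean(X) * mean(Y)
print(cov)                                   # 0

# Y is mean dependent on X: E(Y | X = -1) = 1, but E(Y) = 2/3
print(cond_mean(Y, X, -1), mean(Y))          # 1 2/3

# X is mean independent of Y: E(X | Y = y) = E(X) = 0 for y in {0, 1}
print(cond_mean(X, Y, 0), cond_mean(X, Y, 1), mean(X))   # 0 0 0
```

This single joint distribution thus exhibits all three distinctions at once: uncorrelatedness without mean independence (of Y from X), and mean independence (of X from Y) without stochastic independence.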
The concept of mean independence is often used in econometrics[citation needed] as a middle ground between the strong assumption of independent random variables (X₁ ⊥ X₂) and the weak assumption of uncorrelated random variables (Cov(X₁, X₂) = 0).