In mathematics, in particular in measure theory, there are different notions of distribution function and it is important to understand the context in which they are used (properties of functions, or properties of measures).

Distribution functions (in the sense of measure theory) are a generalization of distribution functions (in the sense of probability theory).

Definitions

The first definition presented here is typically used in analysis (harmonic analysis, Fourier analysis, and integration theory in general) to analyze properties of functions.

Definition 1. Let $f$ be a real-valued measurable function on a measure space $(X, \mathcal{B}, \mu)$. The distribution function associated with $f$ is the function $d_f : [0, \infty) \to [0, \infty]$ given by
$$d_f(s) = \mu\big(\{x \in X : |f(x)| > s\}\big).$$

The function $d_f$ provides information about the size of a measurable function $f$.
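As an illustrative sketch (not part of the article), the distribution function $d_f(s) = \mu(\{x : |f(x)| > s\})$ can be approximated numerically for a function on an interval with Lebesgue measure by sampling $f$ on a uniform grid; `d_f` here is a hypothetical helper name, and the grid average is only a crude stand-in for the measure.

```python
import numpy as np

# Sketch: approximate d_f(s) = mu({x : |f(x)| > s}) for f on [a, b],
# with mu the Lebesgue measure, via a uniform grid (assumption: f is
# well behaved enough for this sampling to converge).
def d_f(f, s, a=0.0, b=1.0, n=100_000):
    x = np.linspace(a, b, n)
    # fraction of grid points where |f| exceeds s, scaled by interval length
    return (b - a) * np.mean(np.abs(f(x)) > s)

# For f(x) = x on [0, 1], the exact value is d_f(s) = 1 - s for 0 <= s <= 1.
print(d_f(lambda x: x, 0.25))  # approximately 0.75
```

Note that $d_f$ is nonincreasing in $s$ and vanishes once $s$ exceeds the essential supremum of $|f|$, matching the properties listed in the comments below.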

The next definitions of distribution function are direct generalizations of the notion of a distribution function (in the sense of probability theory).

Definition 2. Given a finite measure $\mu$ on the Borel sets of $\mathbb{R}$, the distribution function of $\mu$ is the function $F_\mu : \mathbb{R} \to \mathbb{R}$ given by $F_\mu(t) = \mu\big((-\infty, t]\big)$.

It is a well-known result in measure theory that if $F : \mathbb{R} \to \mathbb{R}$ is a nondecreasing right-continuous function, then the function $\mu$ defined on the collection of finite intervals of the form $(a, b]$ by $\mu\big((a, b]\big) = F(b) - F(a)$ extends uniquely to a measure $\mu_F$ on a $\sigma$-algebra $\mathcal{M}$ that includes the Borel sets. Furthermore, if two such functions $F$ and $G$ induce the same measure, i.e. $\mu_F = \mu_G$, then $F - G$ is constant. Conversely, if $\mu$ is a measure on the Borel subsets of the real line that is finite on compact sets, then the function $F_\mu : \mathbb{R} \to \mathbb{R}$ defined by
$$F_\mu(t) = \begin{cases} \mu\big((0, t]\big) & \text{if } t \geq 0, \\ -\mu\big((t, 0]\big) & \text{if } t < 0 \end{cases}$$
is a nondecreasing right-continuous function with $F_\mu(0) = 0$ such that $\mu_{F_\mu} = \mu$.
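The correspondence between a nondecreasing right-continuous $F$ and the measure it induces on half-open intervals can be sketched numerically. This is a minimal illustration under those assumptions; `mu_F` and `F_mu` are hypothetical helper names. Taking $F = \lfloor \cdot \rfloor$ (nondecreasing and right-continuous), the induced measure counts the integers in $(a, b]$:

```python
import math

def mu_F(F, a, b):
    """Measure of the half-open interval (a, b] induced by F."""
    return F(b) - F(a)

def F_mu(mu, t):
    """Recover a distribution function with F(0) = 0 from the measure."""
    return mu(0.0, t) if t >= 0 else -mu(t, 0.0)

# F = floor is nondecreasing and right-continuous; mu_F counts integers.
F = math.floor
mu = lambda a, b: mu_F(F, a, b)

# Additivity over adjacent intervals: (0, 2] = (0, 1] union (1, 2]
assert mu(0, 2) == mu(0, 1) + mu(1, 2)
# The recovered distribution function agrees with F, since F(0) = 0 here.
assert F_mu(mu, 2.5) == F(2.5)
assert F_mu(mu, -1.5) == F(-1.5)
```

The final two assertions reflect the statement above that $F_\mu$ and $F$ inducing the same measure can differ only by a constant, and that constant vanishes when $F(0) = 0$.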

This particular distribution function is well defined whether $\mu$ is finite or infinite; for this reason, a few authors also refer to $F_\mu$ as a distribution function of the measure $\mu$. That is:

Definition 3. A distribution function of a (not necessarily finite) Borel measure $\mu$ on $\mathbb{R}$ that is finite on compact sets is a nondecreasing, right-continuous function $F$ with $F(0) = 0$ that induces $\mu$, i.e. $\mu_F = \mu$.

Example

As the measure, choose the Lebesgue measure $\lambda$. Then, by the definition of $\lambda$,
$$\lambda\big((0, t]\big) = t - 0 = t \quad\text{and}\quad -\lambda\big((t, 0]\big) = -(0 - t) = t.$$
Therefore, the distribution function of the Lebesgue measure is $F_\lambda(t) = t$ for all $t \in \mathbb{R}$.
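The two branches of the computation above can be checked directly. A small sketch, with `lebesgue` a hypothetical helper returning $\lambda((a, b]) = b - a$:

```python
# Lebesgue measure of the half-open interval (a, b].
def lebesgue(a, b):
    return b - a

# The piecewise formula for the distribution function of a measure.
def F_lambda(t):
    return lebesgue(0.0, t) if t >= 0 else -lebesgue(t, 0.0)

# Both branches collapse to F_lambda(t) = t.
for t in (-3.5, 0.0, 2.25):
    assert F_lambda(t) == t
```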

Comments

  • The distribution function $d_f$ of a real-valued measurable function $f$ on a measure space $(X, \mathcal{B}, \mu)$ is a monotone nonincreasing function taking values in $[0, \mu(X)]$. If $d_f(s_0) < \infty$ for some $s_0 \geq 0$, then $\lim_{s \to \infty} d_f(s) = 0$.
  • When the underlying measure $\mu$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ is finite, the distribution function $F$ in Definition 3 differs slightly from the standard definition of the distribution function $F_\mu$ (in the sense of probability theory) as given by Definition 2, in that for the former $F(0) = 0$, while for the latter $\lim_{t \to -\infty} F_\mu(t) = 0$ and $\lim_{t \to \infty} F_\mu(t) = \mu(\mathbb{R})$.
  • When the objects of interest are measures on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, Definition 3 is more useful for infinite measures: for a measure such as the Lebesgue measure, $\mu\big((-\infty, t]\big) = \infty$ for all $t \in \mathbb{R}$, which renders the notion in Definition 2 useless.
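The contrast drawn in the comments above can be illustrated for a finite measure, where both conventions make sense and differ by the constant $\mu\big((-\infty, 0]\big)$. A sketch using a sum of unit point masses at $-1$, $0.5$, and $2$ (the atoms and helper names are chosen here for illustration):

```python
# A finite atomic measure: unit point masses at -1, 0.5, and 2.
atoms = [-1.0, 0.5, 2.0]

def mu(a, b):
    """Mass assigned to the half-open interval (a, b]."""
    return sum(1 for x in atoms if a < x <= b)

def F_def2(t):
    """Probability-style convention: F(t) = mu((-inf, t])."""
    return sum(1 for x in atoms if x <= t)

def F_def3(t):
    """Convention normalized so that F(0) = 0."""
    return mu(0.0, t) if t >= 0 else -mu(t, 0.0)

# The two conventions differ exactly by the constant mu((-inf, 0]) = 1 here,
# consistent with the fact that functions inducing the same measure differ
# by a constant.
for t in (-2.0, 0.0, 1.0, 3.0):
    assert F_def2(t) - F_def3(t) == 1
```

For an infinite measure such as Lebesgue measure, `F_def2` would be identically infinite, which is exactly why Definition 3 is preferred in that case.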