In statistics, the multivariate Behrens–Fisher problem is the problem of testing for the equality of means from two multivariate normal distributions when the covariance matrices are unknown and possibly not equal. Since this is a generalization of the univariate Behrens–Fisher problem, it inherits all of the difficulties that arise in the univariate problem.

Notation and problem formulation

Let $X_{ij} \sim \mathcal{N}_p(\mu_i, \Sigma_i)$ $(j = 1, \dots, n_i;\ i = 1, 2)$ be independent random samples from two $p$-variate normal distributions with unknown mean vectors $\mu_i$ and unknown dispersion matrices $\Sigma_i$. The index $i$ refers to the first or second population, and the $j$th observation from the $i$th population is $X_{ij}$.

The multivariate Behrens–Fisher problem is to test the null hypothesis $H_0$ that the means are equal versus the alternative $H_1$ of non-equality:

$$
H_0 : \mu_1 = \mu_2 \quad \text{vs} \quad H_1 : \mu_1 \neq \mu_2.
$$

Define the following statistics, which are used in the various proposed solutions to the multivariate Behrens–Fisher problem:

$$
\begin{aligned}
\bar{X}_i &= \frac{1}{n_i} \sum_{j=1}^{n_i} X_{ij}, \\
A_i &= \sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)(X_{ij} - \bar{X}_i)', \\
S_i &= \frac{1}{n_i - 1} A_i, \\
\tilde{S}_i &= \frac{1}{n_i} S_i, \\
\tilde{S} &= \tilde{S}_1 + \tilde{S}_2, \quad \text{and} \\
T^2 &= (\bar{X}_1 - \bar{X}_2)' \tilde{S}^{-1} (\bar{X}_1 - \bar{X}_2).
\end{aligned}
$$

The sample means $\bar{X}_i$ and sum-of-squares matrices $A_i$ are sufficient for the multivariate normal parameters $\mu_i, \Sigma_i$ $(i = 1, 2)$, so it suffices to base inference on just these statistics. For each population, $\bar{X}_i$ and $A_i$ are independent, and their distributions are, respectively, multivariate normal and Wishart:

$$
\begin{aligned}
\bar{X}_i &\sim \mathcal{N}_p(\mu_i, \Sigma_i / n_i), \\
A_i &\sim W_p(\Sigma_i, n_i - 1).
\end{aligned}
$$
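The statistics above translate directly into code. The following is a minimal NumPy sketch (the function name and return convention are illustrative, not from any standard library):

```python
import numpy as np

def behrens_fisher_stats(X1, X2):
    """Compute the statistics defined above for two (n_i, p) samples.

    Returns (Xbar_d, S1_tilde, S2_tilde, S_tilde, T2), where
    Xbar_d = X1bar - X2bar and T2 is the test statistic.
    """
    n1, n2 = len(X1), len(X2)
    # Sample covariance matrices S_i = A_i / (n_i - 1)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    S1t, S2t = S1 / n1, S2 / n2   # S_i~ = S_i / n_i
    St = S1t + S2t                # S~ = S1~ + S2~
    d = X1.mean(axis=0) - X2.mean(axis=0)
    # T2 = X_d' S~^{-1} X_d, solved without forming the inverse explicitly
    T2 = float(d @ np.linalg.solve(St, d))
    return d, S1t, S2t, St, T2
```

This assumes each $n_i - 1 \geq p$ so that $\tilde{S}$ is invertible.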

Background

In the case where the dispersion matrices are equal, the distribution of the $T^2$ statistic is known: it is proportional to an F-distribution under the null hypothesis and to a noncentral F-distribution under the alternative.

The main problem is that when the true values of the dispersion matrices are unknown, then under the null hypothesis the probability of rejecting $H_0$ via a $T^2$ test depends on the unknown dispersion matrices. In practice, this dependency harms inference when the dispersion matrices differ substantially or when the sample sizes are not large enough to estimate them accurately.

The mean vectors are independently and normally distributed,

$$
\bar{X}_i \sim \mathcal{N}_p(\mu_i, \Sigma_i / n_i),
$$

but when the dispersion matrices differ, the sum $A_1 + A_2$ does not follow a Wishart distribution, which makes inference more difficult.

Proposed solutions

Proposed solutions are based on a few main strategies, described below.

Approaches using $T^2$ with approximate degrees of freedom

Below, $\mathrm{tr}$ denotes the trace operator.

Yao (1965)


$$
T^2 \sim \frac{\nu p}{\nu - p + 1} F_{p,\, \nu - p + 1},
$$

where

$$
\begin{aligned}
\nu &= \left[ \frac{1}{n_1} \left( \frac{\bar{X}_d' \tilde{S}^{-1} \tilde{S}_1 \tilde{S}^{-1} \bar{X}_d}{\bar{X}_d' \tilde{S}^{-1} \bar{X}_d} \right)^2 + \frac{1}{n_2} \left( \frac{\bar{X}_d' \tilde{S}^{-1} \tilde{S}_2 \tilde{S}^{-1} \bar{X}_d}{\bar{X}_d' \tilde{S}^{-1} \bar{X}_d} \right)^2 \right]^{-1}, \\
\bar{X}_d &= \bar{X}_1 - \bar{X}_2.
\end{aligned}
$$
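Yao's approximation can be sketched in NumPy/SciPy as follows (the function name is illustrative; the p-value uses SciPy's F survival function):

```python
import numpy as np
from scipy import stats

def yao_test(X1, X2):
    """Yao's approximate-df test; returns (T2, nu, p_value).

    Assumes each n_i - 1 >= p so that S~ is invertible.
    """
    n1, n2 = len(X1), len(X2)
    p = X1.shape[1]
    d = X1.mean(axis=0) - X2.mean(axis=0)       # X_d = X1bar - X2bar
    S1t = np.cov(X1, rowvar=False) / n1
    S2t = np.cov(X2, rowvar=False) / n2
    St_inv = np.linalg.inv(S1t + S2t)
    w = St_inv @ d                              # S~^{-1} X_d
    T2 = float(d @ w)                           # X_d' S~^{-1} X_d
    # nu^{-1} = sum_i (1/n_i) * (X_d' S~^{-1} S_i~ S~^{-1} X_d / T2)^2
    inv_nu = (float(w @ S1t @ w) / T2) ** 2 / n1 \
           + (float(w @ S2t @ w) / T2) ** 2 / n2
    nu = 1.0 / inv_nu
    # T2 ~ (nu * p / (nu - p + 1)) * F(p, nu - p + 1)
    F = T2 * (nu - p + 1) / (nu * p)
    pval = float(stats.f.sf(F, p, nu - p + 1))
    return T2, nu, pval
```

Since the two squared ratios sum over terms in $(0,1)$ that add to 1, the resulting $\nu$ is always at least $\min(n_1, n_2)$.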

Johansen (1980)


$$
T^2 \sim q\, F_{p,\, \nu},
$$

where

$$
\begin{aligned}
q &= p + 2D - \frac{6D}{p(p-1) + 2}, \\
\nu &= \frac{p(p+2)}{3D},
\end{aligned}
$$

and

$$
D = \frac{1}{2} \sum_{i=1}^{2} \frac{1}{n_i} \left\{ \mathrm{tr}\!\left[ \left( I - (\tilde{S}_1^{-1} + \tilde{S}_2^{-1})^{-1} \tilde{S}_i^{-1} \right)^2 \right] + \left[ \mathrm{tr}\!\left( I - (\tilde{S}_1^{-1} + \tilde{S}_2^{-1})^{-1} \tilde{S}_i^{-1} \right) \right]^2 \right\}.
$$
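A direct translation of Johansen's $q$, $\nu$, and $D$ into NumPy/SciPy might look like this (function name illustrative):

```python
import numpy as np
from scipy import stats

def johansen_test(X1, X2):
    """Johansen's approximation T2 ~ q * F(p, nu); returns (T2, q, nu, p_value)."""
    n1, n2 = len(X1), len(X2)
    p = X1.shape[1]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    S1t = np.cov(X1, rowvar=False) / n1
    S2t = np.cov(X2, rowvar=False) / n2
    T2 = float(d @ np.linalg.solve(S1t + S2t, d))
    S1i, S2i = np.linalg.inv(S1t), np.linalg.inv(S2t)
    M = np.linalg.inv(S1i + S2i)               # (S1~^{-1} + S2~^{-1})^{-1}
    D = 0.0
    for ni, Si_inv in ((n1, S1i), (n2, S2i)):
        B = np.eye(p) - M @ Si_inv             # I - M S_i~^{-1}
        D += (np.trace(B @ B) + np.trace(B) ** 2) / ni
    D *= 0.5
    q = p + 2 * D - 6 * D / (p * (p - 1) + 2)
    nu = p * (p + 2) / (3 * D)
    pval = float(stats.f.sf(T2 / q, p, nu))
    return T2, q, nu, pval
```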

Nel and Van der Merwe (1986)


$$
T^2 \sim \frac{\nu p}{\nu - p + 1} F_{p,\, \nu - p + 1},
$$

where

$$
\nu = \frac{\mathrm{tr}(\tilde{S}^2) + [\mathrm{tr}(\tilde{S})]^2}{\frac{1}{n_1} \left\{ \mathrm{tr}(\tilde{S}_1^2) + [\mathrm{tr}(\tilde{S}_1)]^2 \right\} + \frac{1}{n_2} \left\{ \mathrm{tr}(\tilde{S}_2^2) + [\mathrm{tr}(\tilde{S}_2)]^2 \right\}}.
$$
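Nel and Van der Merwe's degrees of freedom involve only traces of the $\tilde{S}$ matrices, so a sketch is short (function name illustrative):

```python
import numpy as np
from scipy import stats

def nel_vdm_test(X1, X2):
    """Nel and Van der Merwe's approximate-df test; returns (T2, nu, p_value)."""
    n1, n2 = len(X1), len(X2)
    p = X1.shape[1]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    S1t = np.cov(X1, rowvar=False) / n1
    S2t = np.cov(X2, rowvar=False) / n2
    St = S1t + S2t
    T2 = float(d @ np.linalg.solve(St, d))

    def h(A):
        # tr(A^2) + [tr(A)]^2
        return np.trace(A @ A) + np.trace(A) ** 2

    nu = h(St) / (h(S1t) / n1 + h(S2t) / n2)
    # T2 ~ (nu * p / (nu - p + 1)) * F(p, nu - p + 1)
    F = T2 * (nu - p + 1) / (nu * p)
    pval = float(stats.f.sf(F, p, nu - p + 1))
    return T2, nu, pval
```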

Comments on performance

Kim (1992) proposed a solution based on a variant of $T^2$. Although its power is high, the fact that it is not invariant makes it less attractive. Simulation studies by Subramaniam and Subramaniam (1973) show that the size of Yao's test is closer to the nominal level than that of James's. Christensen and Rencher (1997) performed numerical studies comparing several of these testing procedures and concluded that Kim's test and Nel and Van der Merwe's test had the highest power. However, neither of these two procedures is invariant.

Krishnamoorthy and Yu (2004)

Krishnamoorthy and Yu (2004) proposed a procedure that adjusts Nel and Van der Merwe's (1986) approximate degrees of freedom for the denominator of $T^2$ under the null distribution so as to make it invariant. They show that the approximate degrees of freedom lies in the interval $\left[\min\{n_1 - 1, n_2 - 1\},\ n_1 + n_2 - 2\right]$, which ensures that the degrees of freedom is never negative. They report numerical studies indicating that their procedure is as powerful as Nel and Van der Merwe's test for smaller dimensions and more powerful for larger dimensions, and claim that it outperforms the invariant procedures of Yao (1965) and Johansen (1980). On this basis, Krishnamoorthy and Yu's (2004) procedure had the best known size and power as of 2004.

The test statistic $T^2$ in Krishnamoorthy and Yu's procedure follows the distribution

$$
T^2 \sim \frac{\nu p}{\nu - p + 1} F_{p,\, \nu - p + 1},
$$

where

$$
\nu = \frac{p + p^2}{\frac{1}{n_1 - 1} \left\{ \mathrm{tr}[(\tilde{S}_1 \tilde{S}^{-1})^2] + [\mathrm{tr}(\tilde{S}_1 \tilde{S}^{-1})]^2 \right\} + \frac{1}{n_2 - 1} \left\{ \mathrm{tr}[(\tilde{S}_2 \tilde{S}^{-1})^2] + [\mathrm{tr}(\tilde{S}_2 \tilde{S}^{-1})]^2 \right\}}.
$$
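Krishnamoorthy and Yu's invariant degrees of freedom can be sketched the same way (function name illustrative); note the $n_i - 1$ weights, which distinguish it from Nel and Van der Merwe's original formula:

```python
import numpy as np
from scipy import stats

def ky_test(X1, X2):
    """Krishnamoorthy and Yu's invariant modification; returns (T2, nu, p_value)."""
    n1, n2 = len(X1), len(X2)
    p = X1.shape[1]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    S1t = np.cov(X1, rowvar=False) / n1
    S2t = np.cov(X2, rowvar=False) / n2
    St_inv = np.linalg.inv(S1t + S2t)
    T2 = float(d @ St_inv @ d)

    def g(A):
        # tr(A^2) + [tr(A)]^2 for A = S_i~ S~^{-1}
        return np.trace(A @ A) + np.trace(A) ** 2

    denom = g(S1t @ St_inv) / (n1 - 1) + g(S2t @ St_inv) / (n2 - 1)
    nu = (p + p ** 2) / denom
    # T2 ~ (nu * p / (nu - p + 1)) * F(p, nu - p + 1)
    F = T2 * (nu - p + 1) / (nu * p)
    pval = float(stats.f.sf(F, p, nu - p + 1))
    return T2, nu, pval
```

The computed $\nu$ should stay within the interval $[\min\{n_1 - 1, n_2 - 1\},\ n_1 + n_2 - 2]$ stated above.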
