The distribution arises in multivariate statistics when testing for differences between the (multivariate) means of different populations, where tests for univariate problems would make use of a t-test.
The distribution is named for Harold Hotelling, who developed it as a generalization of Student's t-distribution.[1]
If the vector $d$ is Gaussian multivariate-distributed with zero mean and unit covariance matrix, $d \sim \mathcal{N}_p(\mathbf{0}, \mathbf{I}_p)$, and $M$ is a $p \times p$ random matrix with a Wishart distribution with unit scale matrix and $m$ degrees of freedom, $M \sim W(\mathbf{I}_p, m)$, then the quadratic form $X$ below has a Hotelling distribution (with parameters $p$ and $m$):[3]

$$X = m\, d^{T} M^{-1} d \sim T^2(p, m).$$
Furthermore, if a random variable X has Hotelling's T-squared distribution, $X \sim T^2_{p,m}$, then:[1]

$$\frac{m - p + 1}{pm} X \sim F_{p,\, m-p+1},$$

where $F_{p,\, m-p+1}$ is the F-distribution with parameters $p$ and $m - p + 1$.
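As a quick numerical illustration of this construction and the F relation (a sketch, not part of the source; the parameters and sample counts below are arbitrary), one can draw the quadratic form with NumPy/SciPy and compare its rescaled empirical distribution against the stated F-distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p, m, N = 3, 10, 100_000

# d ~ N_p(0, I_p) and M ~ Wishart(I_p, m), drawn independently
d = rng.standard_normal((N, p))
M = stats.wishart(df=m, scale=np.eye(p)).rvs(size=N, random_state=rng)

# X = m * d' M^{-1} d, which should follow Hotelling's T^2(p, m)
X = m * np.einsum('ni,ni->n', d, np.linalg.solve(M, d[..., None])[..., 0])

# Rescaled, X should follow F(p, m - p + 1): compare CDFs on a small grid
grid = np.array([0.5, 1.0, 2.0, 4.0])
print(stats.f(p, m - p + 1).cdf(grid))                              # theoretical
print(np.mean((m - p + 1) / (p * m) * X[:, None] <= grid, axis=0))  # empirical
```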
Let $\mathbf{x}_1, \dots, \mathbf{x}_n$ be $n$ independent draws from a $p$-variate normal distribution with mean $\boldsymbol{\mu}$, and let $\overline{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{x}_i$ denote their sample mean. Let $\hat{\mathbf{\Sigma}}$ be the sample covariance:

$$\hat{\mathbf{\Sigma}} = \frac{1}{n-1} \sum_{i=1}^{n} (\mathbf{x}_i - \overline{\mathbf{x}})(\mathbf{x}_i - \overline{\mathbf{x}})',$$

where we denote transpose by an apostrophe. It can be shown that $\hat{\mathbf{\Sigma}}$ is a positive (semi) definite matrix and that $(n-1)\hat{\mathbf{\Sigma}}$ follows a $p$-variate Wishart distribution with $n - 1$ degrees of freedom.[4]
The sample covariance matrix of the mean reads $\hat{\mathbf{\Sigma}}_{\overline{\mathbf{x}}} = \hat{\mathbf{\Sigma}}/n$.
The Hotelling's t-squared statistic is then defined as:[5]

$$t^2 = (\overline{\mathbf{x}} - \boldsymbol{\mu})'\, \hat{\mathbf{\Sigma}}_{\overline{\mathbf{x}}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu}) = n\, (\overline{\mathbf{x}} - \boldsymbol{\mu})'\, \hat{\mathbf{\Sigma}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu}),$$

which is proportional to the distance between the sample mean and $\boldsymbol{\mu}$. Because of this, one should expect the statistic to assume low values if $\overline{\mathbf{x}} \approx \boldsymbol{\mu}$, and high values if they are different.
From the distribution,

$$t^2 \sim T^2_{p,\, n-1}, \qquad \text{equivalently} \qquad \frac{n-p}{p(n-1)}\, t^2 \sim F_{p,\, n-p},$$

where $F_{p,\, n-p}$ is the F-distribution with parameters $p$ and $n - p$.
In order to calculate a p-value (unrelated to the variable $p$ here), note that the distribution of $t^2$ equivalently implies that

$$\frac{n-p}{p(n-1)}\, t^2 \sim F_{p,\, n-p}.$$

Then, use the quantity on the left-hand side to evaluate the p-value corresponding to the sample, which comes from the F-distribution. A confidence region may also be determined using similar logic.
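The recipe above translates directly into a short routine. The following is a minimal sketch (not from the source; the function name and test data are illustrative) using NumPy and SciPy:

```python
import numpy as np
from scipy import stats

def hotelling_one_sample(X, mu_0):
    """One-sample Hotelling's t-squared test of H0: mean(X) == mu_0.

    X is an (n, p) data matrix; returns (t2, F, p_value).
    """
    n, p = X.shape
    x_bar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)            # sample covariance, ddof = 1
    diff = x_bar - mu_0
    t2 = n * diff @ np.linalg.solve(S, diff)
    F = (n - p) / (p * (n - 1)) * t2       # F ~ F(p, n - p) under H0
    p_value = stats.f(p, n - p).sf(F)
    return t2, F, p_value

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=50)
print(hotelling_one_sample(X, np.zeros(2)))
```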
Motivation
Let $\mathcal{N}_p(\boldsymbol{\mu}, \mathbf{\Sigma})$ denote a $p$-variate normal distribution with location $\boldsymbol{\mu}$ and known covariance $\mathbf{\Sigma}$. Let

$$\mathbf{x}_1, \dots, \mathbf{x}_n \sim \mathcal{N}_p(\boldsymbol{\mu}, \mathbf{\Sigma})$$

be $n$ independent identically distributed (iid) random variables, which may be represented as $p \times 1$ column vectors of real numbers. Define

$$\overline{\mathbf{x}} = \frac{\mathbf{x}_1 + \cdots + \mathbf{x}_n}{n}$$

to be the sample mean with covariance $\mathbf{\Sigma}_{\overline{\mathbf{x}}} = \mathbf{\Sigma}/n$. It can be shown that

$$(\overline{\mathbf{x}} - \boldsymbol{\mu})'\, \mathbf{\Sigma}_{\overline{\mathbf{x}}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu}) \sim \chi^2_p,$$

where $\chi^2_p$ is the chi-squared distribution with $p$ degrees of freedom.[6]
Proof
To show this use the fact that $\overline{\mathbf{x}} \sim \mathcal{N}_p(\boldsymbol{\mu}, \mathbf{\Sigma}/n)$ and derive the characteristic function of the random variable $\mathbf{y} = (\overline{\mathbf{x}} - \boldsymbol{\mu})'\, \mathbf{\Sigma}_{\overline{\mathbf{x}}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu})$. As usual, let $|\cdot|$ denote the determinant of the argument, as in $|\mathbf{\Sigma}|$.
By definition of characteristic function, we have:[7]

$$\varphi_{\mathbf{y}}(\theta) = \operatorname{E} e^{i\theta \mathbf{y}} = \int e^{i\theta (\overline{\mathbf{x}} - \boldsymbol{\mu})' \mathbf{\Sigma}_{\overline{\mathbf{x}}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu})} \, (2\pi)^{-p/2} |\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{-1/2} \, e^{-\frac{1}{2}(\overline{\mathbf{x}} - \boldsymbol{\mu})' \mathbf{\Sigma}_{\overline{\mathbf{x}}}^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu})} \, d\overline{\mathbf{x}}.$$

There are two exponentials inside the integral, so by multiplying the exponentials we add the exponents together, obtaining:

$$\varphi_{\mathbf{y}}(\theta) = \int (2\pi)^{-p/2} |\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{-1/2} \, e^{-\frac{1}{2}(\overline{\mathbf{x}} - \boldsymbol{\mu})' \mathbf{\Sigma}_{\overline{\mathbf{x}}}^{-1} (1 - 2i\theta) (\overline{\mathbf{x}} - \boldsymbol{\mu})} \, d\overline{\mathbf{x}}.$$

Now take the term $|\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{-1/2}$ off the integral, and multiply everything by an identity $I = |(1 - 2i\theta)^{-1}\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{1/2} \cdot |(1 - 2i\theta)^{-1}\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{-1/2}$, bringing one of them inside the integral:

$$\varphi_{\mathbf{y}}(\theta) = |\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{-1/2} |(1 - 2i\theta)^{-1}\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{1/2} \int (2\pi)^{-p/2} |(1 - 2i\theta)^{-1}\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{-1/2} \, e^{-\frac{1}{2}(\overline{\mathbf{x}} - \boldsymbol{\mu})' \left((1 - 2i\theta)^{-1}\mathbf{\Sigma}_{\overline{\mathbf{x}}}\right)^{-1} (\overline{\mathbf{x}} - \boldsymbol{\mu})} \, d\overline{\mathbf{x}}.$$

But the term inside the integral is precisely the probability density function of a multivariate normal distribution with covariance matrix $(1 - 2i\theta)^{-1}\mathbf{\Sigma}_{\overline{\mathbf{x}}}$ and mean $\boldsymbol{\mu}$, so when integrating over all $\overline{\mathbf{x}}$ it must yield $1$ per the probability axioms. We thus end up with:

$$\varphi_{\mathbf{y}}(\theta) = |\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{-1/2} |(1 - 2i\theta)^{-1}\mathbf{\Sigma}_{\overline{\mathbf{x}}}|^{1/2} = \left|(\mathbf{I}_p - 2i\theta\,\mathbf{I}_p)^{-1}\right|^{1/2},$$

where $\mathbf{I}_p$ is an identity matrix of dimension $p$. Finally, calculating the determinant, we obtain:

$$\varphi_{\mathbf{y}}(\theta) = (1 - 2i\theta)^{-p/2},$$

which is the characteristic function for a chi-square distribution with $p$ degrees of freedom. $\blacksquare$
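As a quick numerical sanity check of this result (a sketch, not from the source; the parameters below are arbitrary), one can sample the quadratic form and compare it against the chi-squared CDF:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
p, n, N = 3, 20, 100_000
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# The sample mean of n iid N_p(mu, Sigma) draws is N_p(mu, Sigma / n);
# sample it directly, N times over
x_bar = rng.multivariate_normal(mu, Sigma / n, size=N)

# Quadratic form n (x_bar - mu)' Sigma^{-1} (x_bar - mu)
diff = x_bar - mu
y = n * np.einsum('ni,ni->n', diff, np.linalg.solve(Sigma, diff.T).T)

grid = np.array([1.0, 2.0, 4.0, 8.0])
print(stats.chi2(p).cdf(grid))               # theoretical chi^2_p CDF
print(np.mean(y[:, None] <= grid, axis=0))   # empirical CDF
```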
Let $\mathbf{x}_1, \dots, \mathbf{x}_{n_x} \sim \mathcal{N}_p(\boldsymbol{\mu}, \mathbf{\Sigma})$ and $\mathbf{y}_1, \dots, \mathbf{y}_{n_y} \sim \mathcal{N}_p(\boldsymbol{\mu}, \mathbf{\Sigma})$, with the samples independently drawn from two independent multivariate normal distributions with the same mean and covariance, and define

$$\overline{\mathbf{x}} = \frac{1}{n_x}\sum_{i=1}^{n_x} \mathbf{x}_i, \qquad \overline{\mathbf{y}} = \frac{1}{n_y}\sum_{i=1}^{n_y} \mathbf{y}_i$$

as the sample means, and

$$\hat{\mathbf{\Sigma}}_{\mathbf{x}} = \frac{1}{n_x - 1}\sum_{i=1}^{n_x} (\mathbf{x}_i - \overline{\mathbf{x}})(\mathbf{x}_i - \overline{\mathbf{x}})', \qquad \hat{\mathbf{\Sigma}}_{\mathbf{y}} = \frac{1}{n_y - 1}\sum_{i=1}^{n_y} (\mathbf{y}_i - \overline{\mathbf{y}})(\mathbf{y}_i - \overline{\mathbf{y}})'$$

as the respective sample covariance matrices. Then

$$\hat{\mathbf{\Sigma}} = \frac{(n_x - 1)\hat{\mathbf{\Sigma}}_{\mathbf{x}} + (n_y - 1)\hat{\mathbf{\Sigma}}_{\mathbf{y}}}{n_x + n_y - 2}$$

is the unbiased pooled covariance matrix estimate (an extension of pooled variance).
Finally, the Hotelling's two-sample t-squared statistic is

$$t^2 = \frac{n_x n_y}{n_x + n_y} (\overline{\mathbf{x}} - \overline{\mathbf{y}})'\, \hat{\mathbf{\Sigma}}^{-1} (\overline{\mathbf{x}} - \overline{\mathbf{y}}) \sim T^2(p,\, n_x + n_y - 2).$$

It can be related to the F-distribution by[4]

$$\frac{n_x + n_y - p - 1}{(n_x + n_y - 2)\, p}\, t^2 \sim F(p,\, n_x + n_y - 1 - p).$$
The non-null distribution of this statistic is the noncentral F-distribution (the ratio of a non-central chi-squared random variable and an independent central chi-squared random variable):

$$\frac{n_x + n_y - p - 1}{(n_x + n_y - 2)\, p}\, t^2 \sim F(p,\, n_x + n_y - 1 - p;\, \delta)$$

with

$$\delta = \frac{n_x n_y}{n_x + n_y}\, \mathbf{d}'\, \mathbf{\Sigma}^{-1}\, \mathbf{d},$$

where $\mathbf{d} = \boldsymbol{\mu}_x - \boldsymbol{\mu}_y$ is the difference vector between the population means.
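A minimal sketch of the two-sample test under these definitions (not from the source; NumPy/SciPy assumed, and the function name and data are illustrative):

```python
import numpy as np
from scipy import stats

def hotelling_two_sample(X, Y):
    """Two-sample Hotelling's t-squared test of H0: equal population means.

    X is (n_x, p), Y is (n_y, p); assumes a common covariance matrix.
    Returns (t2, F, p_value).
    """
    n_x, p = X.shape
    n_y, _ = Y.shape
    # Unbiased pooled covariance estimate
    S = ((n_x - 1) * np.cov(X, rowvar=False)
         + (n_y - 1) * np.cov(Y, rowvar=False)) / (n_x + n_y - 2)
    diff = X.mean(axis=0) - Y.mean(axis=0)
    t2 = (n_x * n_y) / (n_x + n_y) * diff @ np.linalg.solve(S, diff)
    F = (n_x + n_y - p - 1) / ((n_x + n_y - 2) * p) * t2
    p_value = stats.f(p, n_x + n_y - 1 - p).sf(F)
    return t2, F, p_value

rng = np.random.default_rng(3)
Sigma = [[1.0, 0.4], [0.4, 1.0]]
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=40)
Y = rng.multivariate_normal([0.5, 0.0], Sigma, size=60)
print(hotelling_two_sample(X, Y))
```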
In the two-variable case, the formula simplifies nicely, allowing appreciation of how the correlation, $\rho$, between the variables affects $t^2$. If we define

$$d_1 = \overline{x}_1 - \overline{y}_1, \qquad d_2 = \overline{x}_2 - \overline{y}_2$$

and

$$s_1 = \sqrt{\hat{\Sigma}_{11}}, \qquad s_2 = \sqrt{\hat{\Sigma}_{22}}, \qquad \rho = \frac{\hat{\Sigma}_{12}}{s_1 s_2} = \frac{\hat{\Sigma}_{21}}{s_1 s_2},$$

then

$$t^2 = \frac{n_x n_y}{(n_x + n_y)(1 - \rho^2)} \left[ \left(\frac{d_1}{s_1}\right)^2 + \left(\frac{d_2}{s_2}\right)^2 - 2\rho \left(\frac{d_1}{s_1}\right)\left(\frac{d_2}{s_2}\right) \right].$$

Thus, if the differences $d_1$ and $d_2$ in the two rows of the vector $\overline{\mathbf{x}} - \overline{\mathbf{y}}$ are of the same sign, in general, $t^2$ becomes smaller as $\rho$ becomes more positive. If the differences are of opposite sign, $t^2$ becomes larger as $\rho$ becomes more positive.
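To see this behaviour numerically, the short sketch below (names and values are illustrative, not from the source) evaluates the two-variable formula for same-sign and opposite-sign standardized differences over a range of $\rho$:

```python
import numpy as np

def t2_bivariate(d1, d2, s1, s2, rho, n_x, n_y):
    """Two-variable Hotelling's t-squared from the simplified formula."""
    z1, z2 = d1 / s1, d2 / s2
    return (n_x * n_y) / ((n_x + n_y) * (1 - rho**2)) * (
        z1**2 + z2**2 - 2 * rho * z1 * z2)

for rho in (-0.5, 0.0, 0.5, 0.9):
    same = t2_bivariate(0.5, 0.5, 1.0, 1.0, rho, 30, 30)        # same-sign d1, d2
    opposite = t2_bivariate(0.5, -0.5, 1.0, 1.0, rho, 30, 30)   # opposite-sign
    print(f"rho={rho:+.1f}  same-sign t2={same:6.2f}  opposite-sign t2={opposite:6.2f}")
```

With these inputs the formula reduces to $7.5/(1+\rho)$ for same-sign differences and $7.5/(1-\rho)$ for opposite-sign ones, so the printed values fall and rise with $\rho$, respectively, as the paragraph above describes.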
A univariate special case can be found in Welch's t-test.
More robust and powerful tests than Hotelling's two-sample test have been proposed in the literature; see, for example, the interpoint distance based tests, which can be applied also when the number of variables is comparable with, or even larger than, the number of subjects.[8][9]
Johnson, R.A.; Wichern, D.W. (2002). Applied Multivariate Statistical Analysis. Vol. 5. Prentice Hall.
Mardia, K. V.; Kent, J. T.; Bibby, J. M. (1979). Multivariate Analysis. Academic Press. ISBN 978-0-12-471250-8.
Billingsley, P. (1995). "26. Characteristic Functions". Probability and Measure (3rd ed.). Wiley. ISBN 978-0-471-00710-4.
Marozzi, M. (2016). "Multivariate tests based on interpoint distances with application to magnetic resonance imaging". Statistical Methods in Medical Research. 25 (6): 2593–2610. doi:10.1177/0962280214529104. PMID 24740998.
Marozzi, M. (2015). "Multivariate multidistance tests for high-dimensional low sample size case-control studies". Statistics in Medicine. 34 (9): 1511–1526. doi:10.1002/sim.6418. PMID 25630579.