!.
OPl'IMAL TESTS FOR SEPARABLE FAMILIES
OF HYPOTHESES
by
Robert Richard Starbuck
Institute of Statistics
MImeograph Series No. 978
Raleigh - January 1975
•
,
iv
TABLE OF CONTENTS
Page
1.
2.
3.
.......
.....
REVIEW OF LITERATURE • . . ..
. . . . . . ., • • . . .
GENERAL RESULTS
......
....
INTRODUCTION • • • • •
3.1 Introduction • • •
• •
• ••••
3.2 Some Properties of Conditional Probability Integral
Transformations • • • • • • • • • • • • • • • •
3.3 Most Powerful Similar and Most Powerful Invariant
Tests for Separable Families • • • • • • • •
3.4 Optimal Classification Rules
•••••••••
4.
APPLICATIONS •
4.1
4.2
Introduction
•••••••••••••••••
The U.M.P.S.-a Te~t for the Exponential (e,A,) Versus
the Normal (\1 ,,,, ) Distribution
••••
4.2.1
4.2.2
4.2.3
4.3
4.3.3
4.3.4
4.3.5
4.3.6
.
Exponential class as the null class of
distributions • • • •
• • • • • • •
Normal class as the null class of
distributions • • • • • • • • • • • • • •
Discrimination be~ween the fut ponential (e ,A,)
and Normal 4L,'" ) distributions • • • • •
The U.M.~.S.-~ Test For the Exponential (O,A,) Versus
the Uniform (O,e) Distribution • • • • • • •
4.3.1
4.4
..
........
Exponential class as the null class of
distributions
Uniform class as the null class of
distributions
are
Distributions of i«h •whe~ Xl:"
e,u
n
LLd.• Uniform (O~e) random variables ••
Distribution of T( 0) when Xl' ••• , X are
e,u
n
i.i.d. Exponential«(O,A.) random variables.
£ritical values of T 0) and power for
e,u
various sample sizes • • • • • • • • • • •
Discrimination between the Exponential (O,A.)
and Uniform (O,e) distributions ••
.....
:.::x·
Tests For the Exponential (O,A.) Versus the Lognor~l
(0,,,,2) Distribution • • • • • • • • • • • • • • ••
1
3
6
6
9
12
17
19
19
20
21
24
26
27
~
--=..-~~
27
28
28
29
31
31
32
v
!ABLE OF CONTENTS (Continued)
Page
known • • • •
Critical values
n = 1O, 20 •
4.4.3 0 unknown • • •
4.4.4 Critical values
n = 10, 20 •
4.4.1
4.4.2
4.5
The
·
·
·
Test for the Uniform (e ,9 ) Versus
l 2
the Normal (fl. ,0) Dis tributi.0n • • • • • • • • ••
4.5.2
4.5.3
4.5.4
The
34
35
3E!
37
Uniform class as the null class of
distributions • • • • • • • • • • •
Normal class as the null class of
distributions • • • • • • • • • • •
Critical values of T
and power for
u,n
various sample sizes • • • • • • •
Discrimination between the Uniform (9
38
• ••
38
• ••
41
• ••
,9 2 )
4l
1
and Normal (fl.,02) di.stributions • • • ••
42
U.M.P.S.~
Test for the Uniform (e ,9 2 ) Versus
l
the Exponential (9,A) Distributions • • • • • • •
4.6.1
4.6.2
4.6.5
4.6.6
4.7
· ·
• • •
•
• • • • • • • •
of T J. (a) and power for
e,
• • • • • • • •
• • • •
• • • • • • • • • • • •
•
and power for
of T*
6,J.
• • • • • • • • • • • • • •
U.M.P.S.~
4.5.1
4.6
0
Uniform class as the null class of
distributions • • • • • • • • • • • • • •
Exponential class as the null class of
distributions
•••••••••••••
Di.stribution of T
when Xl' ••• , X are
e,u
n
i.i.d. uniform (e ,9 2 ) random variables
l
Distribution of T
when Xl' ••• , X are
e,u
n
i.i.d. Exponential (e,A) random
variables • • • • • • • • • • • • • • • •
Critical. values of Te,u and power for
various sample sizes • • • • • • • • • •
Discrimination between the Uniform (e ,e 2 )
l
and Exponential (9,t..) distributions • • •
The U.M.P.S."'Q' Test for the Uniform (0,9) Versus
the Right Triangular (0,9) Distribution • • • • •
4.7.1
Right triangular class as the null class
of distributio"!1.s • • • • • • • • • • • •
43
43
45
46
47
47
47
49
49
-..=.
~<"--.,
vi
TABLE OF CONTENTS (Continued)
Page
4.7.2
4.7.3
4.7.5
4.7.6
4.8
5.
6.
Uniform class as the null class of
distributions •• • • • • • • • • • • • • •
Distribution of T
when Xl' ••• , X are
n
u,r
i.i.d. right triangular random variables
Distribution of T
when Xl' ••• , X are
u,r
n
i.i.d. Uniform (0,8) random variables • • •
Critical values of T
and power for the
u,r
various sample sizes • • • • • • • • •
Discrimination between the Right Triangular
and Uniform distributions • • • • • • • • •
The U.M.P.S.-a Test for the Pareto Versus the Lognormal Distribution • • •
• • • • •
. . ...'.. . . . . . . .
LIST OF REFERENCES . . . . . .
SUMMARY
. . ... . . . . .
... ....
50
50
51
52
52
54
55
57
-
-.
1.
INTRODUCTION
The problem of deciding which family of distributions describes
the behavior of a random sample of observations has been examined
rather extensively in the goodness-of-fit and hypothesis testing
literature.
Given a null class of distributions, if particular
alternative families of distributions are considered, then a
question arises as to the largest possible power a statistical test
may achieve.
When a particular alternative distribution or
alternative family of distributions i.s considered, the testing
problem is essentially that of testing separable hypotheses.
For a composite null hypothesis class, tests can be constructed
as a function of conditional probability integral transformations
(C.P.I.T.'s), which have been considered in O'Reilly and
Quesenberry [161.
Under general conditions these transformations
map a random sample from an unspecified member of a continuous
parametric class of distributions to a smaller set of independent
random variables with uniform distributions on the unit interval.
Then any
stati~tic
which measures the divergence of the transformed
values from a unifrom pattern may reasonably be considered as a
goodness-of-fit test statistic for the original composite null
hypothesis class.
Many such tests may be constructed and, indeed,
many exist in the extensive goodness-of-fit literature.
In this work, it is shown that a most powerful similar test
may be obtained as a function of C.P.I.T.'s for a composite goodnessof-fit null hypothesis, and sufficient conditions are given to
2
assure that such a test is uniformly most powerful against a
composite alternative class.
It is further shown under general
conditions that this test identifies with certain uniformly most
powerful invari.ant tests.
3
REVIEW OF LITERATURE
2.
The problem of testing
Po
and
~\
P e Po
H:
versus
K:
P e PI ' where
are separable families of distributions, separable in the
sense that an arbitrary member of one family can not be obtained as
the limit of the members of the other family, was first considered in
detail by
D.
R. Cox
[5J.
COX was the first writer to clearly identify
and point out the importance of tests of separable families and give
specific tests.
The term "separable" is due to hi.m.
Cox developed a
general method for this testing situation based on the logarithm of
the Neyman-Pearson maximum likelihood ratio (M.L.R.).
The statistic
he examined was
i-
tn M.L.R. - En (tn M.L.R.)\
Q-
o
~,
Of=ot
where
M.L.R.
and
Po
f
and
and
P
l
g
= sup
Of
f(x1, ••• ,x ;Of)/SUp g(xl, ••• ,x ;S)
n
S
n
denote the probability density functions arising from
,respectively.
Cox examined the asymptotic properties of
this test, and, in particular, the asymptotic variance, in order to
construct a statistic whose asymptotic distribution is standard normal.
In [6J, he develops the test for
H:
Poisson
versus
K: Geometric.
Jackson [13J investigated the adequacy of Cox's results for
normal
versus
K:
Exponential
H: Log-
and derives the power of the test.
Jackson also compares the test with other tests and derives the test
for
H:
Lognormal versus
K:
Gamma.
Using the same principle,
4
Atkinson [2J developed a test for a mixed model including a set of
hypothesized families of distributions in order to determine which
family adequately describes the data.
The M.L.R. statistics proposed by Cox lack the property of invariance, in general.
Lehmann [14J gave the general theory of
uniformly most powerful invariant (U .M.P. 1.) tests and gave integral
expressions for the location-scale parameter case.
reduction is required for partic.ular cases.
A great deal more
In two papers, Uthoff
[20, 21J derived the U.M.P.I. test for testing two-parameter Normal
versus Uniform, N.ormal versus Exponential, Uniform versus Exponential,
and Normal versus Double Exponential.
for
H: Normal
versus
K:
Uthoff also shows that the test
Double Exponential
is asymptotically
equivalent to the M.L.R. test and to Geary's test [111.
In a series
of papers, Dyer [8, 9, 101 investigated various test statistics,
including the U.M.P.I. and M.L.R. test statistics, and compared their
relative efficiencies from a discrimination point of- view for several
alternative families of distributions.
Antle, Dumonceaux, and Haas
[11 examined the M.L.R. test for several location-scale parameter
families and compared its power with the power of the U.M.P.I. test.
Their recommendation for using the M.L.R. test instead of the U.M.P.I.
test, when the two differ, is based on the ease of computation and
relatively good performance of the M.L.R. test with respect to the
U.M.P.I. test.
Dumonceaux and Antle
[71
followed with the M.L.R.
procedure for discriminating between the lognormal and Weibull ·distributions.
_ ..--.-r-
5
It should also be pointed out that many goodness-of-fit tests
are also tests for separable hypotheses, and that even though it would
intuitively be expected that tests utilizing the knowledge of the
alternative class would have better power than those tests that do
not, there is, in fact, no proof that this will occur.
Note, for
example, the remarkably strange behavior of some goodness-of-fit tests
reported by Dyer [8, 9, 10J.
6
3.
Introduction
3.1
Let
of
GENERAL RESULTS
X denote a Borel set of real numbers,
X , and
X
=
(Xl' ••• , X )
n
a
the Borel subsets
denote a vector of independent and
identically distributed (i.i.d.) random variables, each distributed
according to an absolutely continuous distribution
space
(X,a) ; and, further, suppose that
metric class of distributions
= (Pa ;
p
a
P
The set
assumed to be a K-dimensional Borel set with elements
a ) •
K
T
=
. n
(X ,an)
space
nn
= (pn .,
pn
measures on
(T , ••• , T )
I
K
=
(X
x ... x
= P x. •. x P ,
(Xn ,an)
for
X, a
P
€
p
(or
x ... x
a) •
p •
corresponding to
is also written as
Let
g:
X-+X
is a subclass of product
The class of product
pn = (pn a e
-a'
o}
n
g (Xl' ••• , x )
n
= (g(x l ),
g:
there exists a function
n
Pga(X e g 'A)
g
•
be a one-to-one transformation, and let
n
X
the corresponding one-to-one transformation of
ation
= Pa(X
••• , g(x
0 -+ 0
e A)
n
» •
(or
.for every
onto
For"a given
n
Let
formation subgroup on
n
G
n
X •
g
n
g
n
be
defined
, suppose
- e 0
ga
Let the transform-
A ea.
be such that the set of transformations
respecth~ely.
n
X
p n -+ P n ) such that
G
ponding set of transfor~ations" G are transformation
(1 ,
a = (aI' ••• ,
Also, put
n}
' ~., p n
,1:•
pn
and
is
0 ) : defined on the sample
Q-
measures
by
0
It i.s also assumed that there exists a K-dimensional sufficient
statistic
Q-
on the Borel
is a member of a para-
o} .
€
P
and the corresgroups on X and
denote the corresponding product trans-
7
Definition
A
transformation group on a space is said to be transitive if
the maximal invariant of the group is constant on the space (cf.
Lehmann [14], p. 216).
Denote by
8 ; by
hI
0
as,
a
the sub a-algebra of
h 2 the composition of a function
by
I
A
induced by a statistic
hI
with a function
the indicator function of
~.~.,
a set
A.
With the usual abuse of notation the same symbol,
g
g-l
will be used to denote a point function and the corres-
or
ponding set function.
Lerrnna 3.1
For
g:
X
~
X , one-to-one, and
S
any statistic defined on
=
Pge ( g n'A Ign
B)
S
(3.1.1)
Proof
~-~.
The function
and
a80S"-n
J
on
gn
establishes a one-to-one correspondence between
In •
In fact,
Thus
n
=.
IdP-(y). P- (g 'A
ge
. gn B gnA ge
n g nB)
8
e if
By the Radon-Nikodym theorem, a.s.
P
. an ,
A e
P (gnAIgn~s? = he (S (g.n(y)))
ge
= p(Als)
Lemma 3.2
If
T
is a sufficient statistic for
product transformation group on
Xn
0
p), and
(or
n
G
is a
that induces a transitive group
G on 0 , then the distribution function of the conditional distribution on
Xn for fixed T is invariant under
n
G , 1.~.,
(3.1.2)
-=.
-'... -.,
a.s.
P
n
if g e G(g
n
n
e G) •
Proof
Let
exists a
Let
x
e e 0
be fixed and
n
gn e G
••• , x n ) •
be an element of
such that for the corresponding
J x = (Yl' ••• , Yn) ;
= (xl'
e'
Then
y.
1.
~
x.
1.
i = 1, ••• ,
O.
-
g ,
n}
Then there
e' = ge •
and
9
a.s.
= P-ge (gn J x I~'r og-n)
e '
a.s. P
by Lemma 3.1,
by sufficiency
of
By the sufficiency of
e
T, the subscript
T,
can be omitted on
F,
leaving
= F(gx l ,
••• , gx
where the exceptional set may depend on
is almost invariant under
Gn •
g
n
n
laT."g...n>
,and thus
However, since
family on a Euclidean space, and both
n
X
a.s.
and
pn
is a dominated
n are Euclidean sets,
it follows by Lehmann [14J, Theorem 4 and discussion on p. 226, that
the exceptional set does not depend on
3.2
g
n
Some Properties of Conditional Probability Integral
Transformations
Conditional probability integral transformations are introduced
in OIReilly and Quesenberry [16], and extended to a larger collection
of classes of distributions in Quesenberry
[171.
Some basic properties
of the transformations given in [16J are developed here.
results can be obtained for the transformations in
[171
The same
by the same,
or very similar, arguments.
-.'-
10
Put
(3.2.J.)
•
••
... , xn-K-l )
and
u(x , ••• ,
l
X
_ )
n K
=
(u ' ••• , u _ ) •
l
n K
the conditional distribution of
continuous, then
In [1 61 it is shown that if
Xl' ••• , X _
gi.ven
n K
(U l ' ••• , Un _K)
=
is absolutely
••• ,un_K(X n _K»
(~(XI)'
independently and identically di.stributed
T
U(O,l)
are
random variables.
From this and a result of Basu [31, the next theorem is immediate.
Theorem 3.1
If
n ,
T
= (T l ,
••• , T )
K
is a complete and sufficient statistic for
and if the conditional distribution of
is absolutely continuous, then
(Xl' ••• , X _ )
n K
(T , ••• , T )
l
K
and
given
T
(U ' ••• , U _ )
l
n K
are independent vectors.
This theorem has important applications for constructing inference.
procedures that may be alternatives to nonparametric or robust
procedures.
The sufficient statistic
for maki.ng inferences within the family
T
contains all the information
p
(or
n), whereas the
statistic
U = (U ' ••• , U _ )
l
n K
P.
U may be used to make inferences about the class
Thus
contains information about the family
as a goodness-of-fit test for the class
p, and
T
p , such
to make a
11
parametric test within p , and the independence exploited to assess
overall error rates.
Inferences based on
U are considered in the
following sections.
Theorem 3.2
If
G is a group of transformations of
-
induced group
G on 0
continuity rank
then
u
such that the
'1,
p
is transitive, and if
has absolute
n - K and suffici.ent: statistic
T
=
(T , ••• , T ) ,
1
K
of (3.2.1) is equivalent to an i.nvariant statistic, Le.,
••• , gx ) a.s. p
n
V g e G •
Proof
By Lemma 2.1 of [16 J,
u.
J
= E(E(I[X j~Xj JIT)lx1,
By Lemma 3.2 above,
E(I[x.~x.JIT)
J
••• , X'_l)' j
J
= 1,
••• , n-K a.s. P •
is invariant under
G.
The
J
result follows.
The following lemma is a consequence of the fact that the transforming functions of (3.2.1) are (conditional) distribution functions.
Lemma 3.1
In the condi.tiona1 space for fixed
to-one correspondence between
i·£.· ,
statistics in this space.
T
=
t , there is a.s. a one-
(u ' ••• , u _ )
1
n K
(Xl' ••• , Xn )
and
(Xl' •• ., xd'
are equivalent
12
3.3
Most Powerful Similar and Meat Powerful Invariant Tests for
Separable Famlli.ea
Using the values
P e
H:
(Xl' ••• ' X ) , consider testing the hypothesis
n
Po = rPe; e eo} ,
(3.3.1)
against the compesite alternative
(3.3.2)
It will also sometimes be useful to consider a simple alternative
P = P
K' :
f
Let
and
e
respectively, for
(3.3.3)
l
denote the density and distribution functions,
Fe
F
P
e
Pl
(3.3.1) is specified by a class of densities
has a particular functional form, then
goodness-of-fit null hypothesis.
(f
denote the density
l
of
e; e
K' •
e O}
If
where
Po
f
of
e
H is a classical composite
In work in classical composite
goodness-of-fit testi.ng, it is usually (tacitly) assumed that the null
hypothesis class contains all distri.butions for which
density,
i.~.,
that
0
fa
is a
in (3.3.1) is a natural parameter space.
Definitiop
A test
vee n •
~
is
similar~
for
H of (3.3.1) if
Ep
e
(~)
=~
13
Tests for composite goodness-of-fit hypotheses are traditionally
required to be
similar~.
This restriction on tests has obvious
appeal i.n that, for example, if it is desired to test for normality,
then all normal distributions are equally normal, so the probability
of rejecti.on should not vary on the null class,
Under
H, let
U ' ••• , U _
denote the n-K
1
n K
random variables obtained by (3.2.1). Also, let f
parent density of the sample
... ,
Xl'
... , Xn
under
i.i.d.
K'
U
is zero a.s. except in the unit hypercube.
denote the
l
the corresponding density of U ,
l
n- K)
From the remark preceding Lemma 3.1, it follows that
hI (u l ,
U(O,l)
of (3.3.3), and
... , Un-K •
hI (u I ,
... , un_K)
The next lemma is a direct
application of the Neyman-Pearson Lemma.
Lemma 3.4
The most powerful
••• , U
n- K)
level~
test of
H versus
K'
based on
is
1, i f hI
CUI' ••• , U _K) > c ,
n
(3.3.4)
0, otherwise,
where
c
Ul' ••• , U _
L Ld.
n K
Let
~
= *oU.
U(O, 1)
H versus
=a
K' •
~
, for
random variables.
The following theorem shows that if
boundedly complete, then
test for
P (hI (U ' ••• , U _ ) > c}
n K
l
is determined by
is a most powerful similar-a
T
is
(M.P.S.~)
14
Theorem 3.3
If
T
(3.3.1), then the test
test for
~
is a boundedly complete sufficient statistic for
H versus
~
= woU
above is a most powerful
of
similar~
K' •
Proof
By Lehmann [14J, Theorem 2, p. 134,
if, and only i f
Thus to find a most powerful test in the class of
it is sufficient to find a most powerful conditional
the conditi.ona1 space of
Xl' ••• , X
n
most powerful Neyman-structure test.
••• , X )
n
Lemma 3.3.
and
given
T,
But for
T
simi1ar~
size~
1.£..,
=t
tests
test on
to find the
fixed,
are equivalent statistics by
Thus, the test
~
is a most powerful
similar~
test.
It will sometimes be the case that
P e PI
(U.M.P.)
of (3.3.2).
simi1ar~
Then, of course,
test for
H ve:sus
does not depend on
~
~
P
is a uniformly most powerful
K.
Conditions under which
such tests exist are considered in the following theorem.
Theorem 3.4
If the conditions for both Theorem 3.2 and Theorem 3.3 are
satisfied by
for
PO' then a U.M.P. invariant
1eve1~
test exists for
15
testing
H versus
G
K, provided
is also transitive on P
MOreover, this test is equivalent to the
U.M.P.S.~
1
test of Theorem
Proof
If
is invariant 1eve1~, then sin.ce
cp
G is transitive
it
follows from Lehmann [14J, Theorem 3, p. 220, that
v
1.~.,
cp
is a
simi1ar~
will be M.P. invariant
P e PO'
test.
Thus if a test is M.P.
1evel~,
simi1ar~,
provided it is invariant.
it
But by
Theorem 3.3, a M.P. similar-a test can a.s. be written as a function
of
ul '
•.. , u n-K
only, and is
eu-measurable and invariant a.s. by
Theorem 3.2.
Thus under rather general conditions
U.M.P •. I. -a tests
iden~ify.
U.M.P.S.~ °and-·
The two approaches of finding the
U~M.J>.S.~ test vary from example to example in the amount of effort
required to construct the test.
If
... , x
rather complicated functiomof
n
obtaining the marginal density of
3.4 is a difficult one.
U , ••• ,
l
Un~K
of (3.2.1) are
,then the task of
U , ••• , U _
required in Lemma
l
n K
When this occurs, the invariance approach
might be superior in terms of effort required, although this . is not
in general true.'
The next definition,
an~
particularly, Theorem 3.5 are motivated
by a result of Dyer [8, 9, 10J.
He demonstrates empirically that a
number of important goodness-af-fit tests have the property that the
16
power i.s less when the value of a paramet.er is assumed known than for
the case when the same parameter is assumed unknown, under the null
hypothesis.
Thi.s behavior is not particularly remarkable in that none
of the tests consi.dered have any known optimal power properties.
In
the next definition and Theorem 3.5, natural conditions are given which
assure that the power of the U.M.• P.S.""Q' test for a smaller null
hypothe.sis family i.s never less than that of the U.M.P.S .-et test for
a larger family.
hfinitio,a
Two separable families of di.stribution.s on the same space ('X.,u)
are said to be conformabl..!:, with respect to a group
formations if the corresponding group
G
of trans-
-
G on the parameter space is
transitive for each family.
Consider two testing problems
HI:
P IH
verSus
K :
I
P IK '
(3.3.5)
H2 :
P ZH
versus
K :
2
P ZK '
(3.3.6)
and
where
PlH
C
P 2H ,
P IK
C
P 2K ' and Pill
separable famili.es of distributions,
Theorem
and
P iK are conformable
i = 1, 2 •
3.1
If
is U.M.P.S.""Q' for (3.3.5) and
is U.M.P.S.""Q' for
(3.3.6), then
(3.3.7)
17
Proof
The class of testa that are similar-Q' for (3.3.6) is a subclass
of the class of teats that are si.milar-Q' for (.3.3.5).
3.4
Optimal Classification Rules
In practice, one may confront the problem of deciding which class
among a set of classes of distributions the data
has arisen from.
X = (Xl' ••• , X )
n
The general classification model is to base the
decision as to which class the data arise from on a statistic
S
= S(X 1 ,
••• , X ) •
n
When the set of classes of distributions consists
of only two members, say
Po
and
PI ' a classification rule is
&I1l
Outcome
Decision
I
S > c
X arises from PI
0
S
c
X arises from Po
It is assumed that
S
~
.
(3.4.1)
has a continuous distribution.
It has not been mentioned, but is implied by the conditions of
H versus
K is constant on
PI '
i.~.,
is not a function of
With this in mind, it seems reasonable to use
the discriminant function in (3.4.1).
hI
P e PI •
of Lemma 3.4 as
If equal error probabilities of
misclassification are required, then the next theorem shows that the
decision function based on
hI
is optimal among the class of decision
functions having equal error probabilities of mi.sc1ass ification.
18
Theorem 3.6
Consider the classification rule (3.4.1).
Assuming that
satisfies the conditions of Theorem 3.3, let
d(n)
classification function based on sample size
n
P
denote the
and statistic
h
l
of
Lemma 3.4 with error probabilities of misclassification equal to
Q'(n).
Let
size
n'
d' (n')
denote a classification function based on sample
and a statistic
S
with error probabilities of mis-
classification equal to
If
Q"(n')
S
Let
no
=
mi.n (n; n
= 1,
2,
... } .
Q'~O
Q'o ' then
Proof
Assume that
a'(n') s Q'o
for some
=OIP l )
P(d(n) = 1lpo)
=a(n)
P(d(n)
= P(d'(n')
Q"(n')
~
= Olp1)
=S'(n').
a(n) , since the
definition of
~ a(n
=
O - 1) > Q'O.
contradiction,
n'
and
U.M.P.S.~
nO •
=P(d'(n')
Therefore,
=
l\PO)
n = n' , then
test is based on
Clearly Q"(n') > Q'o if
~
a'(n')
By Theorem 3.5, i f
a(nO - 1) > Q'O.
nO'
By definition,
hl •
Q"(nO - 1)
n' < nO.
By
By the
19
4.
4.1
APPLICATIONS
Introduction
In this section the results of Chapter 3 are applied to obtain
M.P.S. and
U.M.P.S.~
tests, and to show that certain tests studied
previously by other writers are M.P.S. or U.M.P.S.""Q' tests.
When the
conditions of Theorem 3.4 are satisfied, the minimal sample size
of Theorem 3.6 is obtained for
nO
a O = .10, .05, and .01 •
In most of the testing situations considered here,
Po
and
PI
are location-scale parameter families, and the testing problem has
already been considered from the invariance and R.M.L. approaches.
When the conditions of Theorem 3.4 are satisfied, the invariance and
similar approaches yield equivalent test statistics.
The invariance
approach is easier to use in practice in many cases.
The R.M.L. is
also equivalent to the best invariant test statistic in many examples,
although this does not happen in general
(.sf.
Dyer [8, 9, 10J).
Tables were generated for various test statistics by empirical
methods whenever the distribution of the test statistic was
mathematically intractable.
For the generation of normal and
lognormal variates, the algorithm developed by Chen [4] was used.
Variates from other distributions considered were generated
using
-,
I.BLM.'s RANDU program together with the inverse of the distribution
function.
For testing
H:
P e Po
versus
K:
P e PI ' tables of criti.cal
values and power are given for samples of size
(with one exception) for critical values
n = 10, 20, and 30
a = .10, .05, and .01.
The
20
primary purpose of these tables is to enable the reader to compare
various goodness-of-fit statistics in a variety of testing situations.
These tables are not i.ntended to provide a comprehensive study of a
particular testi.ng problem.
Antle, Dumonceaux, and Hass [lJ give
additional tables for the Normal versus Cauchy, Normal versus
Exponenti.al, and Normal versus Double Exponential testing problems.
For determining
nO
of Theorem 3.6 in a particular classification
- problem, tables are presented giving
equal error probabilities for
nO'
c, and the attainable
a O = .10, .05, and .01 •
It is interesting to note that in the examples considered the
test statistic for testing two location-scale parameter families are
ratios of functions of the complete and sufficient statistics for the
respective families.
This is particularly appealing from the
computational viewpoint, in that after the respective complete and
sufficient statistics have been computed, the test statistic can be
computed directly from them, aV9iding the additional and often
cumbersome task of calculating the U-statistics.
4.2
The U.M.P.S.-a Test for the Exponential Ce"A) Versus the Normal
<u. .O'~ Distributi.on
The problem of testing exponentiality versus normality arises in
the life testing area of reliability.
It is desired to determine
whether failure rates follow an exponential or normal distribution
before inference is undertaken.
This problem may be solved by using
the U.M.P.S.-a test developed below.
21
4.2.1
Exponential class as the null class of distributions.
Let
the null class of densities be
A. eXPl -A. (x-e)}.
11
(a,
(x)
.. 00 <
.,
e < +:0 ,.
A. > 0 •
00 )
Uthoff [20] derives the U.M.P. location and scale invariant test for
2
(jJ. '" )
testing this family against a Normal
alternative family.
The
test statistic obtained is
(4.2.1)
The equivalent R.M.L. statistic is presented in Antle, Dumonceaux, and
Hass [lJ.
An alternative method of deriving the U.M.P.S.-a test
statistic is the C.P.I.T. approach based on Theorem
transformations for a random sample,
Xl' ••• , X
n
3.~.
The C.P.I.T.
from this class of
densities are (from Corollary 2.1 of [16J):
u _2
r
= {1
r-1
- (z r- 1 - z n
)! (L:
.
~=l
(z.~ - z n »}
r- 2
. ,r
= 3,
••• , n ,
(4.2.2)
where
zn = x(l) , the smallest sample member, and the other
defined as follows.
j-l, and
Let
z
n
u
n
Suppose
zi = x i + l
ul '
= z
n
i = j+1,
... , u n-2
Then
z
n
Then
= x .•
J
z
... , n-1
i
= xi
i
Z1 = u n + u n- l '
e (0,1) ,
i
= 1,
... , n-2
;
u
n-1
= zl
un- 1 > 0
z n = u n ,and
l!j
u.
J
,
are
i = 1, ••• ,
be defined as in (4.2.2) and let
u
zls
i = 2, ••.• , n-1 •
,
-
and
22
The Jacobian of the transformation is
0
0
o
1
1
u
n-1
0
o
*
1
-u
n-1
2u u 372
1 2
o
*
1
"Ie
1
o
I
---Z
u
1
'Ie
•
•
•
= det
n-3
( n-2)( IT
j=l
n-2
_~In-l
- - - n-2
(n-2)![ IT u~/jln-l
j=l J -
the assumption that
normal distribution,
2
,(;1)
l/j) (n-l)/(n-2)
u "
u n =2
J
o
o
The joint distribution of
N~
_
~l},:"1..
o
=
___-u
'Ie
"Ie
u ' ••• , u _2
l
n
Xl' ••• , X
n
2
(\-1,0-)
random variables.
must be obtained under
constitute a random sample from a
unknown.
Let
Xl' ••• , X
n
Then letting the
zIS
be Lied.
be as described
above,
n-1
• IT
11 (z i)
i=l (z ,00)
n
Applying the u-transformation to the
is
ZIS
,
the joint density of
23
n-2
n
Un:': 1
= (n-2)1(a~)n (n~2 u~/j}n-1
'-1
J-
J
exp(-[(u -e)2 + (u +u _ -e)2
n
n n 1
n-2
II
•
'if
(u,) •
j=l (0,1) J
where
a,
J
Integrati.ng
=1
1 J'1
-_ ( (j-l) I II
uj _1
h(u , ••• , un~
1
joi.nt density of
a
u '
1
=1+
11K)
... ,
with respec.t to
U
n-
2
n.
n-2
IT
11
(n_2)!(a\f!Ii)n-1( IT
j=1
CO
0
J
/j
u: )n-1
------n---~2--·- -
(u i )
,
the
a,
J
exp(-n(x-(Su _ - ne)/n)
n 1
i=1 (0,1)
11
n
J
•
1=1 (0,1)
U
u~/j)n-l
CO
n-2
IT·
and
J0co y n-2 exp(-y 2 (a-S 2 In)/ 2a 2
Jo
•
u _
n 1
is, with
S = 1 + L:
j=2
-----~n---2-~--
=
••• , n-l •
n-l
n-l 2
L: a
j
j=2
(n-2) 1(arfn)n( II
'-1
J-
j = 2,
K=1 uK
J
y
2
12a 2}dxdy
(u i )
n-2
2
2
2
exp(-y (a-S /n)2c }dy
24
11
(u.) •
(0,1) 1.
(4.2.3)
Expressing (4.2.3) in terms of
zl' ••• , zn
(ignoring the constant
coefficient), (4.2.3) reduces to
(4.2.4)
and, finally, in terms of the original
vari.ables,
X
(4.2.5)
By Theorem 3.4, the hypothesis of exponentiality is rejected if
expression (4.2.5) exceeds a critical value
T
where
P(T
> cl p o)
(4.2.6)
c ,
e,n
e,n
c, or, equivalently, i.f
=~ •
As mentioned at the end of Section 4.1,
T
e,n
is a ratio of
functions of the respective complete and sufficient statistics, and,
in particular, is a ratio of estimators of dispersion for the
respective families.
Also, the statistic
T
e,n
is independent of
the complete and suffi.cient statistics of the respective families.
4.2.2
Normal class as the null
~ass
of distributions.
(x)
a > 0 , - co < jJ. <co •
Let the
null class of densities be
(\{2TT
a)
-1·
2
2
exp( - (x-jJ.) /2c }
11.
(-00,-:0)
25
The C.P.l.T. transformat.ions for a random sample
Xl' ••• , X
n
from this class of densities are (from [16J):
U1.'
where
1/2.
.
2
= G.1.- 2((1-2)
(X.-X
.
1. i )/[(1.-1)Si +
- 2 1/2
(X.-X.)
1. 1
1.
1.
i
is the Student.-t di.stribution function with
G. 2
1.-
= 3,
••• , n ,
n - 2
degrees of freedom,
-=
X1..
Since t.he
of the
i
J
j=l
U's
and
X./i
~
are invariant
.. 2
.:>.
1.
wi.t~
i
=
- 2
'" (X. - X.)
Ii
J
1.
j=1
~
respect to linear transformations
X's, the U.M.P.S.-a and V.M.P.I.-a
by Theorem 3.4.
tests are equivalent
The test statisti.c is simply the reciprocal of
(4.2.6), with the resulting test being to reject normality i f
\Vr~
i=1
(x._X)2 /n
-....=...;;;.----1.
or, equivalently, if
~
C
(4.2.7)
,
Te,n < c ,when
peT 6,n < cl r o)
= Q'
•
This
test is equivalent to the R.M.L. test presented in Antle,
--.;;;;;"
Dumonceaux, and Hass [11.
Also, the remarks in the paragraph followi.ng
(4.2.6) apply here.
Tables of critical values and power for the test statistic
for samples of size
n = 10(5)30 are presented in [lJ.
n.
e,n
The table
values were obtained by simulation using 5000 samples of size
each
T
n
for
-'0- ~.
.
26
4.2.3
Discrimination between the Exponential (e ,1) and Normal
<u. ,0" distributions.
If it is desired to use the statistic
T
e,n
to discriminate between exponentiality and normality, then it maybe
reasonable to require
to be equal.
to obtain a
a
= P(Type
I Error)
and
Table 4.2.1 gives the minimal sample size
=S
~
a
O
for
a
O
= .10,
.05, and .01.
were obtained by simulation using 5000 samples.
accepted if
S = P(Type
T
e,n.
exceeds the critical value
II Error)
nO
required
Table values
Normality is
c ; otherwise
exponentia1ity is accepted.
Entries that appear in this and other tables which are derived
by simulation are presented with two or three digi.ts to the right of
the decimal.
By investigating samples of 5,000, 10,000 and 30,000, it
was generally observed that entries based on 5,000 and 10,000 samples
differed only in the second digit, and that entries based on 30,000
samples differed only in the third digit with those based on 10,000
samples.
Table 4.2.1
ao
a
nO
c
.10
.098
14
1.356
.05
.046
20
1.386
.01
.009
33
1.419
27
4.3
U.M.P.S.~
The
Test For the
Expo~ential
(O,A) Versus the Uniform
(0,8) Distribution
4.3.1 Exponential class as the null class of distributions.
Let
the null class of densities be
A-lexp(-Ax)
~
(x)
(0,
CO )
A> 0 •
A maximal invariant under the group
... ,
x n ) = ( ex 1 ' ••• , cxn ) ,
G of transformations
c > 0, is
since both classes have the positive real line as a region of
support.
The U.M.P.I. test rejects the hypothesis of exponentiality
whenever , [14],
where
(~A)
f
is the Uniform (0,8)
l
density.
8
-n
density and
f
O
is the Exponential
Evaluating the numerator of (4.3.1),
JCO
0
v
n-l
1..
-n
(v) dv = Xc )/n
]O,e/X(n)[
n
(4.3.2)
Evaluating the denominator of (4.3.1),
-n
A
J0
CO
v
n-l
nAn
-n
exp(-vE X./ )dv = fii'( L: Xi)
•
i=l
. i=l 1.
T;~~ -
The test then reduces to rejecting exponentiality i f
> c , and accepti.ng
otherwise, where
P(T(D) > clp)
e, u
(4.3.3)
(J
= ex
X/X(n)
• ·This test
is readily shown to be equivalent to the R.M.l. test.
As mentioned at the end of Secticn 4.1,
T(O)
e,u
is a ratio of
functions of the respective complete and sufficient statistics, and,
in particular, is a ratio of estimators of di.spersion for the
28
respective famili.es.
Also, the statistic
teO)
is independent of the.
e,u
complete and sufficient st.ati.stics of the r€8pective families •
.id.l.._!:.nif('rm
n~.lll
class as the
class of distributions.
Let t.he
null class of densities be
8 -1
11
e>
(x)
(0,8)
As in Section 4.2.2, the
T(O) <
e, u
c
0 •
U.M.P.S.~
and accept otherwi.;:;e,
test is to reject uniformity if
P(Te(Ou) < cl p ) = ex.
where
o
,
test is equivalent to the R.M.L. test.
This
Also, the remarks in the
paragraph preceding this secti.on apply here.
(0)
4.3.3 Distribution of Te,u when Xl'
Uniform (0,8) ,random variables.
member, and the other
= x .
ii'
Then
Z
n-l.
Then
zls
Let
z
n
... ,
X
are Lied.
n
= x (n)
,the largest sample
be defined as follows.
i = 1., ••• , j-l, and
zi
= x i +1
Suppose
;
zn
=x j
•
i = j+l, ••• ,
n-l
=
X/X(n)
(1
The joint density of
+
L
i=1
(4.3.4)
z,/z )/n •
1.
n
Zl' ••• , z n
is
n-l
g(zl'"" ,z ) = nS -n ~ (2) II
11 (z.) •
n
( 0 , 8) n i= 1 (0 ~ z ) 1.
(4.3.5)
n
Let
t
n
= z
X/X(n)
n
and
= (1
The joint densi.t.y of
t.
1.
= z./z
1.
n
n-l
+ !: ti)/n
i=l
t , ••• , t
1
n
i
.
is
= 1,
••• , n-l.
Then
(4.3.6).
29
n-1
~(t ) n
11 (t ) •
(0,8) n i=l (0,1) i
Observing that
t l ' ••• , t n
are i.Ld. Uniform (0,1)
are independent and that
random variables,
X/X(n)
2
•
T(O)
e,u
Using the Central Limit Theorem,
approximat.ely normally distributed with mean
(n-l)/12n
has the same
(1 + sum of (n-1) L Ld. Uniform (0,1) random
distributi.on as
variables} In
(4.3.7)
Therefore, for large enough
(n+1)/2n
is
and variance
n, the critical value
c
i.n 4.3.2 is approKimate1y
(4.3.8)
where
Hz) "" 1"'0'
01
and
is t.he st.andard ['.ermal distribution.
H.)
function.
The distributional properties of a Sum of i.i.d. Uniform (0,1)
random variables have received considerable attention in the literature
of st.atistics.
I.t is well known that the convergence. to normality of
such a Sum is very rapid, the approximation being good for many purposes
for samples as small as
n = 5.
For the .purpose of examining the
tails of the distribution, however, larger sample sizes may be
requi.red.
For
Simulation is
power
0.0093..
n
= 10
0.351.
and
01
=
.01 , the critical value obtained by
Using (4.3.8),
c.~
The comparisons i.mprove for
Distri.bution of T(O) when Xl'
e.u
in Section 4.3.3 and consider again X/X (D.)
z1' ••• , zn
is
01:;
...,
let
density of
0.349
with empirical
.05 and .10 •
X are i. i.d •
n
••• , z
n
i.n (4.3.1).
be defined as
The joi.nt
30
(4.3.9)
Let
t
z
n = n
and
to
1.
= z 1..Iz n
i
= 1,
... , n-l •
Then
n-l
X/X(n) = (l + E
to)/n •
1.
i=l
The joint density of
t , ••• , t
l
n
h(t l , ... , t n ) = nt..
(4.3.10)
is
n-l
-n n-l
t
exPt-t (1+ E to)/t..}
n
- n
i=l 1.
• 11
n-l
11
(t n ) n
(t 0) •
(0,00)
i=I(O,l) 1.
The marginal distribution of
... ,
t
n-l
(4.3.11)
is
n-l
n-l
-n
11
n
II (to) •
h*(tl,···,t -1) = nl(l+ E to)
n
i=l 1.
i=l (0,1) 1.
n-l
The distribution of Y = E t
is
i
i=l
(4.3.12)
(4.3.13)
P(y)
where
nn (.)
is the density function of the Sum of
Uniform (0,1) random variables.
n
Finally, the density of
k(r) = nln(nr)-~~_l(nr-l) •
i.i.d.
R
= T(O)
e,u
(4.3.14)
The asymptotic properties of this distribution are not known.
This density is quite complicated for even small
fore, only recommended for use for very small
is
n.
n, and is, there-
31
4.3.5
sizes.
Critical values of T(O) and power for various sample
e,u
Table 4.3.1
T(O)
e,u
gives criti.cal values of
K:
for the test
H:
Exponential (O,A)
H:
Uniform (O,S)
30
the normal approximation was used to obtain critical values and
versus
versus
K:
Uniform (O,S)
and for the test
Exponential (O,A) •
power under the assumption of uniformity.
obtained by simulation, using
30,000
For
n· 20
and
All other entries were
samples.
Table 4.3.1
Critical Values And Power of The U.M.P.
Test For Discriminating Between
Exponential and Uniform Distributions
Similar~
H:
Exponential (O,A)
K: Uniform (O,S)
Reject H i f r(O) > c
e,u
n
a=.Ol
c
Power
1.0
20
30
.594
.471
.415
H:
Uniform (O,S)
.30
.80
.98
a= .10
Ci=.05
c
Power
c
Power
.524
.419
.371
.61
.95
1.00
.487
.389
.343
.76
.98
LOO
~.
(O,S)
a=.Ol
Power
n
c
10
.351
.379
.396
20
30
4.3.6
--,..-0,.
Expon.entia1 (O,A)
Reject H i f T(O) < c
e,u
.47
.88
.98
K:
Ci=.05
c
Power
.407
.422
.431
D.i3c~nation between
distributi~~~.
.69
.96
.99
a= .10
c
Power
.438
.444
.450
.79
.98
1..00
the R.!SE.2.-71.entia1
(O.c..>
and Uniform
If it is desired to use the statistic
T(O)
e,u
discriminate between exponentia1ity and uniform:tty, then it may be
to
32
reasonable to require the error probabilities of misclassification to
be equal.
obtain
Table 4.3.2 gives the minimal sample size
01
=S
~ 01
o for
01 0
= .10, .05, and .01 •
nO
required to
Table values were
obtained by simulation using 5000 samples, and by using the normal
approximation developed in 4.3.3.
exceeds the critical value
Uniformity is accepted if
T(O)
e,u
c; otherwise exponentiality is accepted.
Table 4.3.2
=
01
4.4
0
01
nO
c
.10
.100
14
.440
.05
.048
20
.420
.01
.010
34
.401
Tests For the Exponential (0,\) Versus the Lognormal
(O,~
2
)
Distribution
The problem of deciding whether data is exponentially or lognormally distributed arises i.n the study of survival times of microorganisms which have been exposed to a disinfectant or poison (cf.,
Irwin [12J).
COX
[5,6J developed tests for the testing problem and
presented the asymptotic distribution of the test statistic.
testing
H:
Lognormal
2
(O,~)
versus
K:
For
Exponential (O,A) , the
test statistic gi.ven by Cox is
(4.4.1)
where
33
"
01
n
1
13&
=
i,n X./n ,
~
1.
i=l
= exp ("011 + '2I,,}
01 2
Lognormality is rejected if
•
Ti,
>
zOI
'
where
~(z )
01
= l~
and
~(.)
is the standard normal distribution functi.on.
For testing
Exponential (0,8)
H:
versus
K:
2
Lognormal (O,a ) ,
the test statistic given by Cox is
(4.4.2)
where
~(1)
~ '(1)
and
0,1'
= Euler's constant,
= TT 2 /6
&2'
rejected if
Te >
S
and
zOl
,
'
are as described above.
where
normal distribution function.
~(z )
01
= l~
and
Exponentiality is
~(.)
is the standard
34
It is clear that interchanging
Hand
K will not, in general,
have the effect of inverting the test statistic if Cox's method is
used, nor will the test
stati~tics
developed by this method be in-
variant, in general.
Srinivasan [19J has also considered the testing problem
Exponential !O,A)
versus
K:
Lognormal
H:
(O,a 2 ) • The statistic
proposed by Srinivasan is
D
n.
= SuplS (t) - 'r(t;A)
t
n
I'
(4.4.3)
where
F(t;A) ::: (1 - (1 - tin
)b n - 1} 11
_(t)
(O,nX)
the M.V.U.E. of the distribution function under
n
_11
[nX,
(t)
CO )
H, and
S (t)
n
is
Exponentia1ity is rejected if
the empiri.cal distribution function.
D
+
exceeds a specified cri.tica1 value.
For corrections to and conunents about Sri.nivasan's results, see
Schafer, Finkelstein,and Collins [18J and Moore [15J.
4 .4 •1 a known.
A-1 exp(-x/A)
Let the null class of densities be
11.
(x)
(0, co )
A>
°,
and the alternative class of densities be
(XC! \M::
v..TT) -1 exp( - (.tn x) 2 12(1 2 }
11
(0,
The .U ..M...P.!.-of
1
and
U.M.P~S."'Q'
a
>
°.
test is given by (4.3.1), where
is the lognormal density function
. density functi.on.
(x)
CO )
I?nd
f
O
is the exponential
Evaluating the numerator of (4.3.1),
35
n
(IT xi)
i-1
-1 co -1
v exp(-
n
Jo
n
- (IT
xi)
L (£n x. + £n v)
i-1
1.
-l co
_ co exp(-
J
i=l
2
2
12a )dv
2
n
2
(t+£,n xi) /2a ')dt , letting t - £,n v ,
L
i=l
n
= (2rr0' In) n( IT
x .)
i=l
-1
1.
n
2
n
2
2
exp( -[ L £,n x. - ( L £,n x.)
In]/2a ) •
1.
i=l
1.
i=l
(4.4.4)
The denomi.nator of (4.3.1) is given in (4.3.3).
The test reduces to
rejecting eKpone.D.ti.ality i f
n
n
n
(L X.) (IT Xi)
i=l 1.
i=1
-1
2
n
n
2
2
exp(-[ L: £n X. - (L £,nX.) /n]2a } > c(O') ,
i=l
1.
i=l
1.
or, equivalently, if
n
~(O')
e,~
T
E'
n
£,n( L: X.) - (L £,n X.)/n
i=l 1.
i=l
1.
n
2
n
2
2
£n X. - (L: £n Xi) InJ/ 2 ra > c(O') ,
i=l . 1.
i=l
- [L:
where
P(Te,£(O') > c(O')lro) = a.
(4.4.5)
It is readily shown that this test
is equivalent to the R.M.L. test.
4.4.2
Critical values of Te.g(O') and power'for n = 10, 20.
4.4.1 gives critical values of
of 0'
0'
for testing
known.
H:
T
~(O')
e,~
Exponential
(o,~)
Table
and power for various values
versus
K:
2
Lognormal (0,0' ),
All entries were obtained by simulation using 5000 samples.
y
36
Table 4.4.1
=
.=:
Critical Values and Power of T
(0') Test For
e,J-
Discriminating Between Exponential and Lognormal
Distributions
2
H: Exponential (0, t.J
K: Lognormal (0,0' ) ,
0' known
. Reject H i f Te, J- (0') > c(O')
~=.01
4.4.3
meter,
0'
~=.05
~=.10
n
0'
c(O' )
Power
c(O')
Power
c (0')
Power
10
0.4
0.6
0.8
1.0
1.4
2.0
2.4
1.63
2.07
2.24
2.38
2.75
3.07
3.20
.93
.38
.08
.08
.24
.64
.81
1.16
1.91
2.18
2.32
2.63
2.91
3.02
1.00
.80
.36
.21
.40
.78
.90
.84
1.80
2.14
2.30
2.58
2.84
2.93
1.00
.92
.53
.35
.53
.85
.94
1.63
2.52
2.84
3.02
3.36
3.65
3.75
1.00
.93
.43
' .23
.48
.91
.98
1.16
2.34
2.78
2.99
3.28
3.54
3.63
LOO
20
0.4
0.6
0.8
1.0
1.4
'2.0
2.4
.83
2.23
2.73
2.96
3.25
3.49
3.58
1.00
1.00
.89
.63
.77
.98
l.00
0' unknown.
Since
occurs in (4.4.4).
0'
1.00
.76
.45
.68
.96
.99
is not a location or scale para-
Consequently, the' left-hand si.de of
expression (4.4.4) is not a stati.stic unless
0'
is known.
is unknown, test statistics can be formed by replacing
(4.4.4) with
a, a
sample estimator of 0'.
0'
When 0'
i.n
In order to obtain an
invariant test, the expression
(4.4.6)
37
must be i.nvariant with respect to the group
..., cx ),
c > 0 •
n
G of transformations
The estimator
n
2
n
2
=[L; .tnX. - (L; .tnX.) Inll(n-1)
~
. 1
~
1,=
i=l
,,2
(J
(4.4.7)
satisfies the invariance requi.rement and, furthermore, is an unbiased
estimator of CJ
plac.ing CJ
versus
2
i f the under1yi.ng distributi.on is lognormal.
,,2
2
by CJ
in (4.4.4), a test for
2
K:
Lognormal (O,CJ)
H:
Re-
Exponential (O,A,)
is given by rejecting exponentiality if
n·
n
2
n
2
(
)/?
x.)n( IT x.)-l[ ~ .tn X. - (L; .tn Xi) In]- n-1 - > c ,
i=l 1.
i=l ~
i=l
1.
i=l
n
(L;
or, equivalently, if
n
T)'(
e,.t
=.tn( i=l
~
n
(~
X.) -
i=l
1.
n
- (n-l).tn[
~
2
.tn X. -
i=l
P(T~,.t > clro) = a .
where
..en X,)/n
1.
1.
n
(~
2
i=l
.tn X.) InJ/2n > c ,
(4.4.8)
1.
The R.M.L. test rejects exponentiality if
n
( L;
X. }
-l
1.
i=l
~
c ,
(4.4.9)
and does not have the property of invariance.
4.4.4
Critical values· of
4.4.2 gives critical values of
versus
K:
Lognormal.
5000 samples.
T{(
~2.t
and power for n = 10, 20.
Table
for the test H: Exponential
T*
e,.t
All entries. were obtained by simulation usi.ng
-~~
38
Table 4.4.2
&4bd
Ttc
Critical Values And Power of
e,t
For D15-
cri.mina t ing Between Exponential And Lognormal
Distributions
H:
Exponential
K:
Lognormal
Reject H i f T*
> c
e,1-
= .. 05
c = 1.90
Cl'
c
Cl'
c
=
.10
= 1.84
= .• 01
= 2.17
= .05
c = 2.10
Cl'
= .10
c = 2.07
Cl'
4.5
w. ,a)
Cl'= .10
Power
Power
0.4
0.6
0.8
l.0
1.4
2.0
2.4
.92
•.34
.08
.04
.17
.55
.75
.99
.69
.26
.13
.29
.67
.83
1.00
.83
.43
.26
.39
.75
.87
0.4
0.6
0.8
1.0
1.4
2.0
2.4
1.00
.85
.-27
.12
.40
.86
.97
LOO
l.00
.99
.78
.52
.66
.93
.98
= .01
c = 2.02
Cl'
20
Cl'=.05
a
Cl'
10
Cl'=.01
Power
c
n
.98
.63
.35
.55
.91
.98
The U.M.P.S.-a Test for the Uniform (9 ,9 ) Versus the Normal
1 2
Distribution
4.5.JL-JLniform class as the null class of distributions.
Let the
null class of densities be
Uthoff [20J derives the U.M.P. location and scale invariant test for
testing this family against a Normal
2
~,,,)
alternative family.
The
test statistic obtained is
(4.5.1)
39
An alternative method of derivi.ng the U.M.P.S.-ex test statistic is
the C.P.I.T. approac.h based on Theorem 3.4.
The C.P.I.T. trans·
formations for a random sampl.e
from this class of
Xl' ••• , Xn
densities are (from Coro11ory 2.1 of [161):
u
where
i = 2, ••• , n-1 ,
= 2,
i = k,
Suppose
j
Let
and
j ~ k •
zn = x k '
(4.5.2)
are defined as
ZI S
Then
= x.1.- 1 '
Z.
1.
and
i = j+1, ••• , k-1
;
To find the U.M.P. simi1ar·a test the joint
uZ' ••• , u • 1
n
Xl' ••• , X
n
distribution.
= xj
zl.
... ,
... , n-1
distribution of
that
Zn = x(n) , and the other
zl = x(l) ,
follows.
i
=
i
must be obtained under the assumption
constitute a random sample from a
Without
zl' ••• , zn
los~
of
generalit~,
let
'2
<iJ"a)
Normal
=
(~,a
2
)
(0,1) •
be as described in the preceding paragraph.
••• , z n
The joint density of
is (i.gnoring constant coefficients)
(4.5.3)
Let
u , ••• , u
2
n-1
(4.5.2) •
The join.t density of
••• , u
n
be as described in
is (ignoring constant
coefficients)
8(u , ••• ,u )
1
n
n.2
ex
\l
n
2
n-1
2
n-1 2
exp(-[nu +2u u (1+ L: u.)+u (1+ L: u.)J/2}
1
1 n
i.=2 1.
n
i=2 1.
• (-00,00)
1 (u 1) (0,00)
11 (Un)
u... 1
IT
(u.) •
i=:2 (0,1) 1.
11
40
Integrating
density of
g(u , ... , un)
1
••• , u
with respec.t to
u
and
1
u
n
,the joint
is (ignoring c.onstant coefficients)
n- 1
2
J:O
exp(-(nv +2vt(1
. co
n-l
11
• n
n-1
+
u.»/2}dvdt
L;
~
i=2
(u.)
i=2 (0, 1) ~
.
0:
co n-2
2
0 t
exp( - t
J
L;
. 2
2
u .)
1.
~=
- (1
+
n-l
2
co
J0
n-1
u.) /n)/2}dt • . n
L;
i=2
0:
«1 +
n-1
r (n-I) /2-1 exp .( -r«l +
n-l
- (1 +
0:(1+
n-l
L;
i=2
u.2)
1.
11
2
u.)
L;
~
i=2
n-l
(u )
i
2
n-1
u.) /n}dr. n
(u.)
i=2 1.
i=2 (0,1) 1.
L;
n-l
• n
11
~=2 (0,1)
1.
11
+
n-I
L;
i=2
u.)2/ n }-(n-l)/2
-.;;;;;;
1.
(u.) •
i=2 (0,1)
Expressing (4.5.4) in terms of
(1
(4.5.4)
1.
zl' ••• , zn ' (4.5.4) reduces to
(zn - zl ) 2/ (L;n (z.- -z)2)(n-l)/2 ,
i=l
~
(4.5.5)
and, fi.nally, in terms of the original X variables,
(4.5.6)
.-
41
By Theorem 3.4 the hypothesis of uniformity is rejected if
if
T
X(n) - X(l)
u,n
n
P(T
u,n
(4.5.7)
~ (X ._X)2
i=l
where
> c ,
> clpo)
~
=a
•
As mentioned at the end of Section 4.1,
T
u,n
is a ratio of
functions of the respective complete and sufficient statistics, and in
particular is a ratio of estimators of dispersion for the respective
families.
Also, the statistic
T
u,n
is independent of the complete
and sufficient stati.stics of the respective families.
4.5.2
Normal class as the null class of distributions.
Let the
null class of densities be
\~ a) -1 exp1,.-(xiJ.)
r
2 /20 2}
(v~rr
11
a>O,
(x)
(-00,00)
-00<1.l.<+00.
-
~.
As in Section 4.2.2, the U.M.P.S.-a test is to reject uniformity if
T
< c
u,n
and to accept otherwise, where
P(T
u,n
< clp 0 )
= a.
The
remarks in the paragraph following (4.5.7) also apply here.
4.5.3
sizes.
H:
Table 4.5.1 gives critical values of
Uniform
versus
Critical values of T
and power for various sample
u,n
K:
versus
Uniform.
30,000 samples.
K:
Exponential
T
u,n
for the test
and for the test
H:
Normal
All entries were obtained by simulation using
42
Table 4.5.1
Critical Values and Power of T
Test for Disu,n
criminating Between Uniform and Normal Distributions
2
H: Uniform <81'8 2 )
K: Normal 4J"C! )
Reject H if T
u,n > c
0'=.01
c
10
30
1.215
.903
.728
H:
Normal
20
0'=.05
Power
n
.07
.33
.65
c
1.138
.840
.687
4.5.4
.22
.59
.83
Power
1.096
.35
.810
.72
.667
.90
K: Uniform (81'8 2 )
Reject Hif Tu,n < c
.837
.687
.607
0'=. ~O
0'=.05
Power
c
10
20
30
c
4J.,O' 2)
0'=.01
n
0'=.10
Power
Power
c
.890
.728
.644
.07
.27
.53
Power
c
.92,1
.755
.666
.22
.54
.80
.34
.69
.90
Discrimination between the Uniform <8 ,8 2 ) and Normal
1
-~...,
If it is desired to use the statistic
T
u,n
to discriminate between uniformity and normality, then i t may be
reasonable to require the error probabHi.ties of misc1assifi.cati,on to
be equal.
obtain a
Table 4.5.2 gives the mi.nimal sample si.ze
=S
~
0'0
for
0'0
= .10,
.05, and .01
obtai.ned by si.mulation using 5000 samples.
T
u,n
exceeds th.e criti,cal value
accepted.
nO
required to
Table values were
Normality is accepted if
c; otherwise uniformity is
43
Table 4.5.2
4.6
The
aO
a
nO
c
.10
.100
31
.657
.05
.048
41
.587
.01
.009
68
~467
U.M.P.S.~
Test for the Uniform (8 ,8 2) Versus the
1
Exponential (eJl.l. Distribtlti~
4.6.1
Uniform class as the null class of distributions.
Let the
nuil class of densities be
Uthoff [20J deri.ves the U.M.P. location and scale invariant test for
testing this family agai.nst an Exponential (e ,i..)
alternative family.
The test statistic obtained is
(4.6.1)
which is easily seen to be equivalent to the R.M.L. test statistic.
An alternative method of deriving the U.M.P.S.-a test statistic is the
C.P.I.T. approach based on Theorem 3.4.
for a random sample
given in (4.5.2).
distribution of
that
(e,A)
Xl' ••• , X
n
Xl' ••• , Xn
from this class of densities are
To find the U.M.P.S.-a
u 2 ' ••• , u _
n l
The C.,P.!.T. transformations
test the joint
must be obtained under the assumption
constitute a ran.dom sample from an Exponential
distribution.
I
I
-I
Without loss of generality, let
(e,A) = (0,1)
44
Let
of
z1' ••• , zn
is
Zl' ••• ~ zn
be defined as in (4.5.2).
(i~noring
constant coefficients)
n
0:
exp(-
The joint density
2': zi)
i=1
11
(z1)
(0,00)
n-1
(zn) IT
(zi).
(z1'00)
i=2 (zl,zn)
11
11
(4.6.2)
Let
u1
= z1'
= zn
un
i = 2, ••• , n-l.
- z1 ' and
ui. = (zi - zl)/(zn - z1) ;
The joint density of
u ' ••• , un
1
is (ignoring
constant coefficients)
U. )}
~
Integrating
density of
and
g(u , ••• , un} with respect. to
1
u 2 ' '."
h(u , ••• ,u _ 1)
2
n
0:
•
I
I
u _l
n
U
n
,
the joint
is (ignoring constant coefficients)
n-l
oo n-2
0 t
exp(-t(1 + 2': u.)1
i=2 ~oo
0 e -nvd vd t ·
n-1
11
(u )
.
IT
i=2 (O'~ 1)
~
n-1
oo t n-2 exp{ -t(1 + n-l
2': u.)1dt. IT
(u )
i=2 ~
i=2 (0,1) i
0:
I0
0:
(1
11
n-1
+ 2':
u. )
i=2 ~
-(n-l) n-1
Expressing (4.6.3) in terms of zl'
IT
11
i=2 (0, 1)
... , zn
(u.).
~
(4.6.3)
, (4.6.3) reduces to
(4.6.4)
45
and, finally,
in terms of the original
X variables,
(4.6.5)
By Theorem 3.4, the hypothesis of uniformity is rejected if
if
T
_ X - xC!)
e,u = X(n) -X.(l) <
(406.6)
c ,
where
P(T
e,u <
cl p o) = O!
•
As mentioned at the end of Section ·4.1,
T
e,u
is a ratio .of
functions of the respective complete and sufficient statistics, and
in particular is a ratio of estimators of dispersion for the.
respective families.
Also, the statistic
T
e,u
is independent of the
complete and sufficient statistics for the respective families.
4.6.2
Exponential class as the null class of distributions.
Let the null class of densities be
A-1 exp(-A(x-e)1
11
(x)
(e, (X)
-00<
e <+00,
>.. > 0 •
)
As in 4.2.2, the UoM.P.S.-O! test is to reject exponentiality if
Te,u >c
and to accept otherwise, where
P(Te,u >c!P ) a O!.
O
remarks in the paragraph following (4.6.6) also apply here.
The
46
. 4.6.3
Distribudon of Te,u when Xl' ••• ,
(61,6 2 ) random variables.
(4.5.2).
Un - l
LetU 2 , ••. , Un- l
Xn
are LLd. uniform
be as described in
As mentioned at the beginning of Section 3.2.
are i.i.d. Uniform (0,1) random variables.
n-1
1:2 Ui = n(X-X(1»/(X(n)-X(1»
U2 •••.•
Observing that
(4.6.7)
- l} ,
and applying the.Cantral Limit Theorem, for large
n
the distribution
11-1
of
E Qi is approximately normal with mean
(n-2)/2
and variance
1....2
(n-2)/12.
Therefore, for large
Te : u
= (X - X(l»/(X(n)
1/2
and variance
- X(l»
(n-2)/12n 2
n, the distribution of
is approximately normal with mean
, and the critical value
c
in (4.6.6)
is approximately
c = 12
-
(4.6.8)
z(X • V(n-2)/3 /2n
where ~ (~) ... l-QI
and ~ (.) is the standard normal distribution
function.
The distributional properties of a sum of i.i.d. Uniform (0,1)
random variables have received considerable attention in the literature
of statistics.
It is well known that the convergence to normality of
such a sum 1s very rapid, the approximation being good for many
purposes for samples as small as
n'" 5.
For the purpose of
examining the tails of the distribution, however, larger samples may
be required.
For n • 10
by s:lmu1ation is
power
0.0091.
0.315.
The
and
(X
....
01 , the critical value obtained
Using (4.6.8),
compar~sons
c· 0.310
improve for
(X
•
•
05
with empirical
and .10 •
.
.....,.
.
47
4.6.4· Distribution of Te,u when Xl""'Xu are i.i.d. Exponential
(e ,6.) random variables.
Y=
The density of
Let
n-l
~
i=2
fey)
~
where
n
(.)
R
= Te,u
g(r)
=
be as described in (4.5.2).
is, using (4.6.3),
= 2-1(n_l)!(l+y)-(n-l)~
n-
2(Y)
(4.6.9)
is the density function of the sum of
Uniform (0,1)
of
Ui
U , ••• , Un - 1
2
random variables.
n
i.i.d.
Using relation (4.6.7), the density
is
(n/2)(n-l)!(nr)
-(n-l)
~n_2(nr+l)
•
(4.6.10)
The asymptotic properties of this distribution are not known.
This density is quite complicated for even small
fore, only recommended for use. for very small
4.6.5
K:
K:
n.
Critical values of Te •u and power for various sample sizes.
Table 4.6.1 gives critical values of
versus
n, and is, there-
T
Exponential and for the test
Uniform.
For
n
~
for the test
e,1.1
H:
Exponential
H:
Uniform
versus
20 , the normal approximation was used for
critical values and power under the assumption of uniformity.
Other
entries were obtained by simulation, using 30,000 samples of size
for each
4.6.6
~,l)
n
n.
Discrimination between the Uni£orm
distributions.
(e l ,e 2 )
and Exponential
If it is desired to use the statistic
T
e,u
to
discriminate between uniformity and exponentiality, then it may be
reasonable to require the erro·r probabilities of misclassification to
be equal.
Table 4.6.2 gives the minimal sample size
nO
required to
48
Table 4.6.1
ww:
=
Critical values and power of T
test for dise,u
crimlnating between uniform and exponential distributions
K:
Reject H i f T
n
10
20
30
H:
0'=.01
c
Power
.315
.357
.382
Exponential (e,A)
e,u < c
a=.05
c
Power
.366
.• 398
.416
.42
.85
.98
Exponential (e,A)
0'= .10
c
Power
.64
.94
.99
K:
•.395
.420
.435
.75
.97
1.00
Uniform (e ,e )
1
2
Reject H if T
> c
e,u
0'=.01
Power
n
c
lD
.558
.455
.406
20
30
c
0'=.-5
Power
.489
.404
.361
.24
.77
.97
c
.55
.94
1.00
.454
.376
.338
Table 4.6.2
c
.10
.05
.01
.095
.049
.0lD
15
21
36
0'= .10
Power
.409
.401
.391
.71
.98
1.00
49
obtain ex
= l3
ex 0 for
~
Ci 0
=
.10 , .05 , and .01.
Table values were
obtained by simulation, using 5000 samples, and by using the normal
approximation developed in Section 4.6.3.
accepted if
Te,u
Exponentiality is
exceeds the critical value
c; otherwise uniformity
is accepted.
It is interesting to note that the sample sizes in Table 4.6.2 are
larger than the corresponding sample sizes in Table 4.3.2, and the
power values in Table 4.6.1 are smaller than the corresponding values
in Table 4.3.1.
This is a direct consequence of the result stated in
Theorem 3.5.
4.7
TheU.M.P.S.~
Test for the Uniform (O,e) Versus the Right Tri-
angular (0,8) Distribution
4.7.1
Right triangular class as the null class of distributions,
Let the null class of densities be
26 -2x
11
(x)
6 >
(O,e)
a .
Since both classes are scale parameter classes, the U.M. P.1.-ex_
and
U.M.P.S.~
test is given by (4.3.1), where
triangular density function and
fa
f
O is the right
is the uniform density function.
Evaluating the· denominator of (4.3.1),
(26 -2 ) n
nn
1=1
fo:J
a v2n-1 11
Xi
(v) dv
(O,e/x(n»
=2
n-1 n
The numerator of (4.3.1) is given in (4.3.2).
reject right triangularity if
n
II
i"l
1f
(X( )/X )
n
2n
II x .Inx( )
1=1 1
n
i
(4.7.1)
The test is then to
> c , or equivalently,
50
n
T
where
1: f,n X./n
~
1=1
u,r
P(T u,r > clPo)
= 01
> c ,
(4.7.2)
This test is equivalent to the R.M.L.
c
test.
As mentioned at the end of Section 4.1, the U.M.P.
similar~
test
is based on the ratio of functions of the respective complete and
sufficient stat1stics.
Also,
Tu,r' the logarithm of this ratio, is
independent of the complete and sufficient statistics of the respective
families.
4.7.2
Uniform class as the null class of distributions.
Let the
null class of densities be
9
-1
e>
11 (x)
(0,9)
.
0
As in Section 4.2.2, the U.M.P.S.-Q' test is to reject uniformity if
T
<c
u,r
and accept otherwise, where
P(Tu,r < c IPo) =
01 •
The
remarks in the paragraph following (4.7,2) also apply here.
-=.
4.7.3 Distribution of Tu,r when Xl' c•• , Xn are i.i.d. right
triangular random variablesc
member, and the other
z's
Let
zn'" x(n) , the largest sample
be defined as follows.
i ... 1, .•• , j-l, and
Then
The joint density of
zl' ••• ,
... n(29 -2) n
n
II z.
z~
Suppose
i
=
j+l,
zn'" xj .
•
0
•
,
is
n-l
11
(z ) II
11 (z.) •
.i=l ~ (O,e) n i.=1 (O,z ) ~
(4.7.3)
n
Let
.density of
i
Yl'
= 1,
c,., Yn is
•
0
n-l •
e ,
n-l, and
The joint
51
(4.7.4)
Y1' •• 0' Yn-1
The marginal density of
h(Y t ""'Yn _l) = n(2a
:=
2
-2 n n-l
)
IT Yi
... nT
u,r
t
•
i
=-
i
~
i
.en Yj
1
dv
11
(y.).
(0,1) 1.
i=1
Let
2
J~ v n-
l1(y i )
(0,1)
i=1
n-l n-l
IT Yi
is
...
1,
•
0
II;
(4.7.5)
n-1
,
Note that
0
j=l
The j oio r. density of
t
l
,
e _ • ,
t
1)
=
11
2n-l exp{-2t -1) "f1II (tn-I) IT
n-
n
The marginal density of
t
n- I
(0, !XJ)
n-l
is
n-1
n-2
k(t1,···,t
t
(ti) • (4.7.6)
i=1 (0, ti+l)
is
n 1 n 1
pet) - (lrt)-12 - t - exp(_2t)
11
(4.7.7)
(t) ,
(0, co)
so
4nTu, r
freedom.
is a chi-square random variable with
The distribution function of
F(t) ... peT
4.7.4
u,r
t) ... p(x
degrees of
is
2 ~ 4nt) •
2n
(4.7.8)
4.7.4  Distribution of T_{u,r} when X_1, ..., X_n are i.i.d. Uniform
(0,θ) random variables.  Let z_1, ..., z_n be defined as in Section
4.7.3.  The joint density of z_1, ..., z_n is

    n \theta^{-n} \, 1_{(0,\theta)}(z_n) \prod_{i=1}^{n-1} 1_{(0,z_n)}(z_i) .                      (4.7.9)

Let y_i = z_i / z_n, i = 1, ..., n-1, and y_n = z_n.  The joint
density of y_1, ..., y_n is

    g(y_1, ..., y_n) = n \theta^{-n} y_n^{n-1} \, 1_{(0,\theta)}(y_n) \prod_{i=1}^{n-1} 1_{(0,1)}(y_i) ,                      (4.7.10)

so y_1, ..., y_{n-1} are i.i.d. Uniform (0,1) random variables.  Let
t_i = -\sum_{j=1}^{i} \ln y_j, i = 1, ..., n-1, and note that
t_{n-1} = n T_{u,r}.  The joint density of t_1, ..., t_{n-1} is

    h(t_1, ..., t_{n-1}) = \exp(-t_{n-1}) \, 1_{(0,\infty)}(t_{n-1}) \prod_{i=1}^{n-2} 1_{(0, t_{i+1})}(t_i) .                      (4.7.11)

The marginal density of t_{n-1} is

    p(t) = [\Gamma(n)]^{-1} t^{n-1} \exp(-t) \, 1_{(0,\infty)}(t) ,                      (4.7.12)

so 2 n T_{u,r} is a chi-square random variable with 2n degrees of
freedom.  The distribution function of T_{u,r} is

    G(t) = P(T_{u,r} \le t) = P(\chi^2_{2n} \le 2 n t) .                      (4.7.13)
4.7.5  Critical values of T_{u,r} and power for various sample sizes.
Table 4.7.1 gives critical values of T_{u,r} for the test H: Uniform
versus K: Right Triangular and for the test H: Right Triangular
versus K: Uniform.  Formulae (4.7.8) and (4.7.13) were used to
obtain the entries in the table.
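To illustrate the mechanics, entries like those in Table 4.7.1 can be generated from the chi-square forms stated in (4.7.8) and (4.7.13).  The sketch below uses SciPy's chi-square quantile and distribution functions and is not the author's code:

from scipy.stats import chi2

def uniform_null_entry(n, alpha):
    # H: Uniform, reject if T_{u,r} < c.  The critical value comes from the
    # null distribution function (4.7.13); power is evaluated under the
    # right triangular alternative via (4.7.8).
    c = chi2.ppf(alpha, 2 * n) / (2 * n)          # P(T_{u,r} < c | uniform) = alpha
    power = chi2.cdf(4 * n * c, 2 * n)            # P(T_{u,r} < c | right triangular)
    return c, power

def triangular_null_entry(n, alpha):
    # H: Right Triangular, reject if T_{u,r} > c, with the roles of (4.7.8)
    # and (4.7.13) interchanged.
    c = chi2.ppf(1.0 - alpha, 2 * n) / (4 * n)    # P(T_{u,r} > c | triangular) = alpha
    power = chi2.sf(2 * n * c, 2 * n)             # P(T_{u,r} > c | uniform)
    return c, power

for n in (10, 20, 30, 40, 50):
    print(n, uniform_null_entry(n, 0.01), triangular_null_entry(n, 0.01))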
4.7.6  Discrimination between the Right Triangular and Uniform
distributions.  If it is desired to use the statistic T_{u,r} to
discriminate between right triangularity and uniformity, then it may
be reasonable to require the error probabilities of misclassification
to be equal.  Table 4.7.2 gives the minimal sample size n_0 required
to obtain α = β ≤ α_0 for α_0 = .10, .05, and .01.
Table 4.7.1

Critical Values and Power of the T_{u,r} Test for Discriminating Between
Uniform and Right Triangular Distributions

H: Uniform     K: Right Triangular     Reject H if T_{u,r} < c

                 α = .01            α = .05            α = .10
     n          c      Power       c      Power       c      Power
    10        .4130    .316      .5425    .643      .6221    .794
    20        .5541    .706      .6627    .918      .7263    .968
    30        .6248    .908      .7198    .986      .7743    .996
    40        .6693    .977      .7549    .998      .8035   1.000
    50        .7007    .995      .7793   1.000      .8236   1.000
     ∞        1        1         1        1         1        1

H: Right Triangular     K: Uniform     Reject H if T_{u,r} > c

                 α = .01            α = .05            α = .10
     n          c      Power       c      Power       c      Power
    10        .9392    .536      .7853    .735      .7103    .820
    20        .7961    .818      .6970    .926      .6476    .959
    30        .7365    .937      .6590    .981      .6200    .991
    40        .7021    .980      .6367    .995      .6036    .998
    50        .6790    .994      .6217    .999      .5925   1.000
     ∞         .5      1          .5      1          .5      1
Table 4.7.2

     α_0         α          n_0         c
     .10       .09250        15       .67786
     .05       .04981        23       .68316
     .01       .00975        46       .68815
Table values were obtained by using formulae (4.7.8) and (4.7.13).
Uniformity is accepted if T_{u,r} exceeds the critical value c; otherwise
right triangularity is accepted.
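The construction behind Table 4.7.2 amounts to choosing, for each n, the cut-off c at which the two misclassification probabilities coincide, and then finding the smallest n for which that common value is at most α_0.  A sketch of that search, again assuming the chi-square forms in (4.7.8) and (4.7.13), and not taken from the thesis:

from scipy.stats import chi2
from scipy.optimize import brentq

def equal_error(n):
    # Find c with P(T_{u,r} < c | uniform) = P(T_{u,r} > c | right triangular),
    # using (4.7.13) for the first probability and (4.7.8) for the second.
    f = lambda c: chi2.cdf(2 * n * c, 2 * n) - chi2.sf(4 * n * c, 2 * n)
    c = brentq(f, 1e-6, 10.0)
    return c, chi2.cdf(2 * n * c, 2 * n)

def minimal_sample_size(alpha0):
    # Smallest n whose common error probability does not exceed alpha0.
    n = 2
    while equal_error(n)[1] > alpha0:
        n += 1
    c, alpha = equal_error(n)
    return n, c, alpha

for alpha0 in (0.10, 0.05, 0.01):
    print(alpha0, minimal_sample_size(alpha0))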
4.8  The U.M.P.S.-α Test for the Pareto Versus the Lognormal Distribution

Let X_1, ..., X_n be i.i.d. according to one of the two following
classes of densities:

    \theta_1^{-1} \theta_2^{1/\theta_1} x^{-(1 + 1/\theta_1)} \, 1_{(\theta_2, \infty)}(x)    (Pareto density),

    (\sqrt{2\pi}\, \sigma x)^{-1} \exp\{ -(\ln x - \mu)^2 / 2\sigma^2 \} \, 1_{(0,\infty)}(x)    (Lognormal density).

Let Y_i = \ln X_i, i = 1, ..., n.  Then Y_1, ..., Y_n are i.i.d.
according to either the Exponential (\ln \theta_2, \theta_1) or the Normal
(\mu, \sigma^2) distribution.  The U.M.P.S.-α test for this testing problem
is given in Section 4.2.  It is readily shown that this test is
equivalent to the R.M.L. test.
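The reduction here is simply a change of variables.  The short sketch below is mine, not the author's: it generates Pareto data in the parametrization implied by the Exponential (ln θ_2, θ_1) statement above — an assumption, since the original density display did not survive reproduction cleanly — and checks that the log-transformed data are shifted-exponential, so that the Exponential-versus-Normal test of Section 4.2 applies to ln X_i.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Pareto parametrization implied by the Exponential(ln theta2, theta1)
# statement: survival function S(x) = (x / theta2) ** (-1 / theta1), x > theta2.
theta1, theta2, n = 0.5, 2.0, 5000
u = rng.uniform(size=n)
x = theta2 * u ** (-theta1)            # inverse-CDF sampling from the Pareto

# Y = ln X should be exponential with location ln(theta2) and scale theta1,
# so (ln X - ln theta2) / theta1 should be standard exponential.
z = (np.log(x) - np.log(theta2)) / theta1
print(stats.kstest(z, "expon"))        # a large p-value is expected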
5.  SUMMARY
Under rather general conditions it has been shown that a most
powerful similar test may be obtained as a function of C.P.I.T.'s for
a composite goodness-of-fit null hypothesis.
Sufficient conditions
are given to assure that such a test is uniformly most powerful
against a composite alternative class.
It is further shown under
general conditions that this test identifies with certain uniformly
most powerful invariant tests.
Also, it is shown that if certain
conditions are satisfied then the addition of information about the
parameters of either the null or alternative classes results in a
revised test with power at least as great as the test based on the
original information.
The usefulness of such optimal tests is limited in practice by
the following considerations.
The separable hypotheses testing
problem consists essentially of two parametric classes of distributions,
namely,
the null hypothesis class and the alternative class.
When either of these classes is changed, a different test statistic
with different null distribution will obtain, in general.
Thus,
each new testing problem requires a new set of significance points.
Since for many cases the only practical way to obtain significance
points is by Monte Carlo simulation, the computational effort and
resultant tables would be large if tests are to be available for
many cases of interest.
In practice, one often uses the same test statistic for a
particular null hypothesis class against all alternative classes.
Also, the C.P.I.T. approach mentioned earlier can be used to
establish a specified size test for many different testing problems
and only one set of significance points is required.
Either of
these types of tests will, in general, be suboptimal for a particular
problem.
When such a suboptimal test is used, the power of the
foregoing optimal test will provide a least upper bound for the power
of such tests.
Finally, it was shown that under general conditions the test
statistic used to construct the most powerful similar test for a
particular testing problem can be used to construct an optimal
classification rule.
6.  LIST OF REFERENCES
1.  Antle, C., R. Dumonceaux, and G. Haas. 1973. Likelihood ratio test for discrimination between two models with unknown location and scale parameters. Technometrics 15:19-28.

2.  Atkinson, A. C. 1970. A method of discriminating between models. J. Royal Stat. Soc., Series B, 32:323-345.

3.  Basu, D. 1955. On statistics independent of a complete sufficient statistic. Sankhya 15:377-380 and 20:223-226.

4.  Chen, E. H. 1971. Random normal number generator for 32-bit word computers. J. Amer. Stat. Assoc. 66:400-403.

5.  Cox, D. R. 1961. Tests of separate families of hypotheses. Proceedings of the Fourth Berkeley Symposium, Vol. 1, University of California Press, Berkeley, Calif., 105-123.

6.  Cox, D. R. 1962. Further results on tests of separate families of hypotheses. J. Royal Stat. Soc., Ser. B, 24:406-424.

7.  Dumonceaux, R. and C. Antle. 1973. Discrimination between the Log-Normal and the Weibull Distributions. Technometrics 15:923-926.

8.  Dyer, A. R. 1971. A comparison of classification and hypothesis testing procedures for choosing between competing families of distributions, including a survey of the goodness of fit tests. Technical Memorandum No. 18, Aberdeen Research and Development Center, Aberdeen Proving Ground, Maryland.

9.  Dyer, A. R. 1973. Discrimination procedures for separate families of hypotheses. J. Amer. Stat. Assoc. 68:970-974.

10. Dyer, A. R. 1974. Hypothesis testing procedures for separate families of hypotheses. J. Amer. Stat. Assoc. 69:140-145.

11. Geary, R. C. and E. S. Pearson. 1935. The ratio of the mean deviation to the standard deviation as a test of normality. Biometrika 27:310-335.

12. Irwin, J. O. 1942. The distribution of the logarithm of survival times when the true law is exponential. J. of Hygiene 42:328-333.

13. Jackson, O. A. Y. 1968. Some results on tests of separate families of hypotheses. Biometrika 55:355-363.

14. Lehmann, E. L. 1959. Testing Statistical Hypotheses. John Wiley and Sons, Inc., New York City, New York.

15. Moore, D. 1973. A note on Srinivasan's goodness of fit test. Biometrika 60:209-211.

16. O'Reilly, F. J. and C. P. Quesenberry. 1973. The conditional probability integral transformation and applications to obtain composite chi-square goodness-of-fit tests. Annals of Statistics 1:74-83.

17. Quesenberry, C. P. 1973. On conditional probability integral transformations and unbiased distribution functions. Unpublished manuscript. Department of Statistics, North Carolina State University, Raleigh, N.C.

18. Schafer, R., J. Finkelstein, and J. Collins. 1972. On a goodness of fit test for the exponential distribution with mean unknown. Biometrika 59:222-224.

19. Srinivasan, R. 1970. An approach to testing the goodness of fit of incompletely specified distributions. Biometrika 57:605-611.

20. Uthoff, V. A. 1970. An optimum test property of two well-known statistics. J. Amer. Stat. Assoc. 65:1597-1600.

21. Uthoff, V. A. 1973. The most powerful scale and location invariant test of the normal versus the double exponential. Annals of Statistics 1:170-174.