5.4 Some Desirable Properties of Point Estimators

Ulrich Hoensch
Friday, March 19, 2010
Unbiased Estimators
Definition 5.4.1
A point estimator θ̂ is called an unbiased estimator for the
parameter θ if E(θ̂) = θ for all θ ∈ Θ. Otherwise θ̂ is said to be
biased. The bias of θ̂ is

B = E(θ̂) − θ = E(θ̂ − θ).
Example
If X1, . . . , Xn is a random sample and Xi ∼ B(1, p), then
p̂ = (Σ Xi)/n is an unbiased estimator for p:

E(p̂) = E((Σ Xi)/n) = (Σ E(Xi))/n = nE(X1)/n = E(X1) = p.
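As a quick numerical check (a sketch in Python with numpy; the values p = 0.3 and n = 25 are chosen only for illustration), the average of p̂ over many simulated samples should settle near p:

import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 25, 100_000

# Each row is one random sample X1, ..., Xn with Xi ~ B(1, p).
samples = rng.binomial(1, p, size=(reps, n))

# p_hat = (sum of the Xi)/n for each sample.
p_hat = samples.mean(axis=1)

# Averaging p_hat over many samples approximates E(p_hat);
# unbiasedness predicts a value close to p = 0.3.
print(p_hat.mean())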
Theorem 5.4.1
• The sample mean X̄ of a random sample is an unbiased
estimator for the population mean µ.
• Suppose X1, X2, . . . , Xn is a sample of size n, chosen at
random and without replacement, from a population of size
N. Then the sample mean X̄ is an unbiased estimator for the
population mean µ.
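For the first claim, the standard one-line check uses linearity of expectation:

E(X̄) = (1/n) Σ E(Xi) = (1/n) · nµ = µ.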
Theorem 5.4.2
• The sample variance S² = (1/(n − 1)) Σ(Xi − X̄)² of a
random sample is an unbiased estimator for the population
variance σ².
• Suppose X1, X2, . . . , Xn is a sample of size n, chosen at
random and without replacement, from a population of size N.
Then ((N − 1)/N)S² is an unbiased estimator for the
population variance σ².
Proof of part 2 of Theorem 5.4.2
Since S² = (Σ Xi² − n X̄²)/(n − 1),

E(S²) = (Σ E(Xi²) − n E(X̄²))/(n − 1)
      = (n(σ² + µ²) − n Var(X̄) − nµ²)/(n − 1)
      = (nσ² − n Var(X̄))/(n − 1).
By Theorem 4.1.2, Var(X̄) = ((N − n)/(N − 1))(σ²/n), so

E(S²) = (nσ² − ((N − n)/(N − 1))σ²)/(n − 1)
      = ((n − (N − n)/(N − 1))/(n − 1)) σ²
      = (((N − 1)n − (N − n))/((N − 1)(n − 1))) σ²
      = ((N(n − 1))/((N − 1)(n − 1))) σ²
      = (N/(N − 1)) σ².

Hence E(((N − 1)/N)S²) = σ², as claimed.
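The finite-population correction can also be checked numerically. A minimal sketch (Python with numpy; the population of size N = 10 is an arbitrary choice for illustration):

import numpy as np

rng = np.random.default_rng(1)
population = np.array([2.0, 3.0, 5.0, 7.0, 11.0, 13.0, 17.0, 19.0, 23.0, 29.0])
N, n, reps = len(population), 4, 200_000

sigma2 = population.var()   # population variance sigma^2 (divisor N)

# Draw many samples of size n without replacement; compute S^2
# with divisor n - 1 (ddof=1) for each sample.
s2 = np.array([rng.choice(population, size=n, replace=False).var(ddof=1)
               for _ in range(reps)])

print(s2.mean())                 # approx. (N/(N - 1)) * sigma2
print((N - 1) / N * s2.mean())   # approx. sigma2, as the theorem predicts
print(sigma2)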
Uniform Distribution
We saw that if X1, . . . , Xn is a random sample taken from a
U(a, b) distribution, then â = Xmin = min(X1, . . . , Xn) and
b̂ = Xmax = max(X1, . . . , Xn) are the MLEs. From Section 3.4 we
know that the PDFs of the minimum and the maximum of iid random
variables are

fXmin(x) = n(1 − F(x))^(n−1) f(x),
fXmax(x) = n(F(x))^(n−1) f(x).
If X ∼ U(a, b), we have f(x) = 1/(b − a) and
F(x) = (x − a)/(b − a) for a ≤ x ≤ b. Thus,

E(Xmin) = ∫_a^b x · n(1 − (x − a)/(b − a))^(n−1) · (1/(b − a)) dx
        = (1/(b − a))^n ∫_a^b n x (b − x)^(n−1) dx.
Using integration by parts, the last expression becomes
(na + b)/(n + 1). We have

E(â) = (na + b)/(n + 1) = (n/(n + 1)) a + (1/(n + 1)) b.

Similarly, it can be shown that

E(b̂) = (nb + a)/(n + 1) = (n/(n + 1)) b + (1/(n + 1)) a.

This means that neither MLE is unbiased: the bias of â is
B = (b − a)/(n + 1), and the bias of b̂ is B = −(b − a)/(n + 1).
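Both biases are easy to observe in simulation. A minimal sketch (Python with numpy; a = 2, b = 5, n = 10 are arbitrary illustrative values):

import numpy as np

rng = np.random.default_rng(2)
a, b, n, reps = 2.0, 5.0, 10, 100_000

x = rng.uniform(a, b, size=(reps, n))
a_hat = x.min(axis=1)   # MLE of a
b_hat = x.max(axis=1)   # MLE of b

# Theory: E(a_hat) - a = (b - a)/(n + 1), E(b_hat) - b = -(b - a)/(n + 1).
print(a_hat.mean() - a, (b - a) / (n + 1))
print(b_hat.mean() - b, -(b - a) / (n + 1))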
Example
Let θ̂1 and θ̂2 be two unbiased estimators of θ.
• Show that every convex combination θ̂ = aθ̂1 + (1 − a)θ̂2,
0 ≤ a ≤ 1, of θ̂1 and θ̂2 is also an unbiased estimator of θ.
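One standard way to see this, using only linearity of expectation:

E(θ̂) = aE(θ̂1) + (1 − a)E(θ̂2) = aθ + (1 − a)θ = θ.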
Example
Let θ̂1 ,θ̂2 be two unbiased estimators of θ.
I
Supposing that θ̂1 and θ̂2 are uncorrelated, find the constant a
so that θ̂ = aθ̂1 + (1 − a)θ̂2 has minimal variance.
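One standard approach: since θ̂1 and θ̂2 are uncorrelated,
Var(θ̂) = a²Var(θ̂1) + (1 − a)²Var(θ̂2); setting the derivative with
respect to a equal to zero gives a = Var(θ̂2)/(Var(θ̂1) + Var(θ̂2)).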
Mean Square Error
Definition 5.4.2
The mean square error of the estimator θ̂ is

MSE(θ̂) = E((θ̂ − θ)²).

Remarks.
• We have that MSE(θ̂) = Var(θ̂) + (E(θ̂) − θ)² = Var(θ̂) + B².
• The MSE combines both the bias and the variability of the
estimator.
• If θ̂ is an unbiased estimator, then MSE(θ̂) = Var(θ̂).
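To see the decomposition in action, here is a small sketch (Python with numpy; normally distributed data and the divisor-n variance estimator are chosen only as an illustration) comparing an unbiased and a biased estimator of σ²:

import numpy as np

rng = np.random.default_rng(3)
mu, sigma2, n, reps = 0.0, 4.0, 8, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
s2_unbiased = x.var(axis=1, ddof=1)   # S^2, divisor n - 1
s2_biased = x.var(axis=1, ddof=0)     # divisor n, biased

for est in (s2_unbiased, s2_biased):
    bias = est.mean() - sigma2
    mse = ((est - sigma2) ** 2).mean()
    # Check MSE = Var + B^2: the last two numbers should agree.
    print(bias, est.var(), est.var() + bias ** 2, mse)

In this simulation the biased divisor-n estimator typically shows the smaller MSE, illustrating that unbiasedness alone does not minimize the mean square error.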
Bias and Variability
[Figure: illustration of the bias and variability of estimators.]
Minimum Variance Unbiased Estimator
Definition 5.4.3
An unbiased estimator θ̂ that minimizes the mean square error is
called the minimum variance unbiased estimator (MVUE) of θ.
Example.
Suppose X1, . . . , Xn is a random sample taken from a population
with mean µ and variance σ². Let θ̂ = a1X1 + . . . + anXn, with
ai ≥ 0 and Σ ai = 1, be any convex combination of the Xi. It can
be shown that

E(θ̂) = µ, Var(X̄) ≤ Var(θ̂), and
σ²/n = Var(X̄) = Var(θ̂) only if a1 = . . . = an = 1/n.
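The inequality follows from a standard argument: by independence,
Var(θ̂) = σ² Σ ai², and by the Cauchy–Schwarz inequality
Σ ai² ≥ (Σ ai)²/n = 1/n, with equality exactly when all ai are
equal, that is, when ai = 1/n.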
Homework Problems for Section 5.4 (Points)
pp. 262–263: 5.4.1 (2), 5.4.7 (3), 5.4.8 (2).
Homework problems are due at the beginning of the class on
Monday, March 29.