•
." ·"'u
FOURIER IvlETHODS IN THE STUDY OF VARIANCE FLUCTUATIONS
IN TD'lE SERIES ANALYSIS
by
Walid Nuri
Institute of Statistics
Mimeograph Series No. 438
July, 1965
•
iv
TABLE OF CONTENTS
Page
LIST OF TABLES
vi
LIST OF ILLUSTRATIONS
vii
'.
1.
INTRODUCTION. • •
2•
REVIEW OF LITERATURE
3.
PERIODOGRAN OF SQUARES OF GAUSSIAN DATA
3·1.
3·2.
3·3.
3.4.
1
5
.'
Introduction. • • • • • • •
Harmonic Analysis . • • . .
Covariance between Two Sets of Variates
Using Corollary 3.2 in Finding
cov [ I y(N) (p)
N ' I y(N) (E')]
N
•
8
10
12
( P .J.
r P " ,p,p =1,2, .•• ,m )
17
COV[I~N)(~), I~N)(~')].
3.5.
Asymptotic Properties of
3,6.
Special Cases for the Fourier Form of CT~
26
30
3.7 .... Artificial Examples of Time Series.
3·7·1.
3.7·2.
4.
Description of Series
Calculating Procedure
4.1.
4.2.
Introduction. . .
The Function C
4.3.
Modification on C
36
39
43
43
44
s
52
s
4.3.2.
4.3.3.
5.
36
CIRCULAR STATIONARY PROCESS
4.3.1.
•
8
The Function C
s
Real and Imaginary Parts of
The Relation between C and
s
52
c . . . .
~(N) (~)
x
2
55
58
N
PERIODOGRAN OF SQUARES OF A MOVING AVERAGE. •
5.1.
5.2.
Introduction . . . . . • • . • . • •
The Origin of a Moving Average Model.
5.3.
Periodogram of
5.4.
Artificial Series
~
60
60
61
.
63
74
}
Description of the Series
Computation Procedure • •
74
83
v
TABLE OF CONTENTS (continued)
Page
6.
ESTIMATION PROCEDURES FOR l1'p l
MOVING AVERAGE MODE:L.
6.1.
2
(p = 1,2, ... ,m)
IN A
84
84
Introduction ••
2
6.2.
6.3.
7.
11' 1
+o
6.2.1.
Derivation of the Estimator
6.2.2.
Variance of Rn(~)' p = 1,2, ... ,m
A TESTING PROCEDURE FOR THE VARIANCE <T~
99
•
7·3·
7.4.
9.
85
88
92
Introduction• . • •
The Test Statistic.
7.2.1.
7.2.2.
8.
85
Numerical Applications. • . • •
7.1.
7.2.
•
A Consistent Estimator of
Statistics Based on the Periodogram . • •
The Distribution of B£(P) (p = 1,2, •.. ,k)
under the Null HYPothesis . •
7.2.3. The Development of the Test ••
7.2.4. The Mean and Variance of B£(p).
An Argument for the Power of the Test
Numerical Examples.
SUMMARY AND CONCLUSIONS
8.1.
8.2.
8.3.
8.4.
The Problem • •
The Random Process.
Circular Complex Stationary Process
The Moving Average Model. • • • • •
8.5.
8.6.
A Consistent Estimator for l1'pl2/co
Testing Procedure
LIST OF REFERENCES •
99
100
100
103
107
109
112
116
121
121
121
122
123
124
124
126
•
vi
LIST OF TABLES
Page
3.1.
Values of the periodogram of Series I and the true values
2
-1
of I r p 1 and N CO,N at p=O,1,2, ••• ,100 • • • • •
3·2.
Values of the periodogram of Series II and the true values
2
-1
of 1r p 1 and N CO,N for p=O,1,2, ••• ,100 • • • • • • • • •
40
Values of" the periodogram of Series III and the true values
2
-1
76
Values of the periodogram of Series Vj the true value of
2
1
S(p)j values of ! r p l and N- CO,N at p = 0,1,2, ... ,100.
79
G
5·1.
•
..
•
..
6.1.
6.2.
Ir 1
Values of A(p), the estimator of ~ for Series V • • • • •
6.3.
2
Ir 1
Values of A(p), the estimator of ~ for Series VI ••
81
93
°2
7.1.
7.2.
7.3.
•
..
Values of the periodogram of Series VIj the true value of
2
1
S(p)j values of Ir 1 and N- CO N at p = 0,1,2, ••• ,100.
p ,
"
2
Ir 1
Values of A(p), the estimator of cP for Series IV • • • • •
5·3.
41
of 1r p 1 and N CO,N for p=O,1,2, ••• ,100.
Values of the periodogram of Series IVj the true value of
1
S(p)j values of 1rpl2 and N- CO,N at p = 0,1,2, ••• ,100 •••
°
°
95
Periodogram values of Series VII and the corresponding
values of D.e(.e=1,2, .. . ,5) • • •
• ••••••
117
Periodogram values of Series VIII and the corresponding
values of D/.e=1,2, . .• ,5) • •
• •••••
. . 118
Periodogram values of Series IX and the corresponding
values of D/.e=1,2, .. . ,5) • • • • • • • • • • • • • • • • • 119
•
vii
LIST OF ILLUSTRATIONS
Page
5 .1.
5.2.
Periodograrn values and the true values of' S( p) f'or'
Series TV . . . . . • • . . . . . . • . . . • .
....
77
Periodogram values and the true values of' S(p) of'
Series V. . .
....
80
0
5.3.
•
'.
•
0
•
•
•
•
•
•
•
•
0
•
•
•
•
•
Periodograrn values and the true values of' S(p) f'or
Series VI • . • • • • •
82
6.1.
Values of' A(p) f'or Series TV.
96
6.2.
Values of' A(p) f'or Series V •
96
6.3. Values of' A(p) f'or Series VI.
97
I.
INTRODUCTION
Almost all the literature on time series analysis is devoted to
stati.onary processes; that is j in the wide sense definition, the class
of stochastic processes where the variance is finite and the covariance
between any two observations depends only on the difference in their
times of observation,
One way to violate this stationarity property is
to assume that a process EXt] possesses a probabHity d1stribut:i.on ~with
a certain mean and variance depending on t.
There are many practical
situations where the assumpti.on of the constant variability of a process
may not have justifiable ground:
the crop yield of a certain area is a
function of the amount of daily rainfall which may have different vari=
ability from one day to the other; a random noise x
t
i.n an electronic
device may possess a probability distribution with zero mean and vari=
ance (J"~.
In climatological studies Bliss [1958J~ who studies data on
the monthly mean temperature, showed that the variability of tempera=
tures is more in winter than in summer.
Perhaps in mentioning an
analogous situation in the field of design of
experiments~
where the
variance of the error term of the linear model is not assumed constant,
we would give a clear picture of the variety of situations in which the
present problem may be encountered.
This thesis is concerned with time series models where the
observations are normally distributed with means zero and unequal but
finite variances which are expressed as a Fourier series.
of the thesis is:
The purpose
(1) to obtain consistent estimators for the coeffi=
cients in the Fourier expansion of the variance, and (2) to test the
2
hypot",hes:ls of the constancy of the variance in a linear model.
The
study of these problems involves the study of two separate models.
first is the random process model presented by Herbst [1963a].
The
The
model for this process X is such that
t
xt :::
a
T}
t t
where T)t are uncorrelatecl normally distributed random variables with
mean z,ero and variance 1., NID (0,1), and the O't are finite real constants, not necessarily equal,
Fourier methods are employed in this
thesis to give an approximate expression for the set of O't.
X (t
t
=:
If
1, .•. , N) is a realization of the above process, then the
Fourier expansion associated with at is given by
(t
= 1,2, •.• , N),
and
Zf.... -, e
2rdf....
... r-:, i ::: V-I,
m::: [~] •
Thus
(1131 .- 0,1,2,." •.~m-l, N even) or
(i s\
=.
0,1,2,
o ••
,m,
N odd) .
We assume that few of the Fourier coefficients are non-zero.
Following
Schusterlg periodogram analysis of the scheme of hidden periodicities,
the periodogram of ~, N~2
t
!
It=l
N
~ x2 ztf....
t
2
, which differs from the one
proposed by Herbst (1963a) by the factor N~2, is used to detect the
2
dominant frequencies in the Fourier expansion of at.
It will be shown
that the above periodogram is, in a sense which will be clarified in the
3
sequel, a consistent estimate of
11p 12 ,
where r p = rp, N.
Irp l 2
provides
a measure of coincidence of rr~ with cos2~tR and Sin2~t;.
From the above model Herbst [1965] has generated a circular complex
stationary process [J ].
a
It will be shown that the sampling properties
of the covariance functions associated with this process are somewhat
similar to those of the covariance functions in the real process.
The second model is the "moving average model", studied by Herbst
[1963a], for a process such that
where the aj's are real constants, h is a positive integer, rr and
t
are as described for the first model.
~t
The "modified" first order auto~
regressive model, for example, is a special case of this model where
the constants, a,'s, are powers of the parameter.
J
X;
periodogram of
is, in a
a consistent estimate of
clarified in the sequel,
Chapter 6 of this thesis
is devoted to finding a consistent estimator of
lE,2
h
factor
II, ~
2
ajz
N
It is shown that the
Irp l2
separate from the
•
j=O
In Chapter 7, we construct a
large~sample
test that rr
t
is constant
in the moving average model, the alternative hypothesis being that rr
not constant.
The test statistic is based on the periodogram of
X;,
is
t
and
~ 2
~ a~z N ,is
h
its asymptotic distribution, which is independent of
j=O J
found to be that of the
Kolmogorov~Smirnov statistic.
Intuitively, it
4
seems to us that the test statistic based on the periodogram of ~
provides a better test than that proposed by Herbst [1963b] based on
the period.ogram of Xt .
The theory proved in this thesis is supported by artificial
examples in which the variance rr~ takes different Fourier expansions.
For the estimation problem, uncorrelated and correlated series are
constructed where only one or two Fourier coefficients are
the variance expansion.
non~zero
For the testing problem, series are constructed
in such a way that the variance will not be consta.nt; the Fourier
sions Will include a few
in
non~zero
coefficients.
expan~
5
2•
REVIEW OF LITERATURE
As it was clear from the context of Chapter l} except for the work
done by Herbst, there is no known study done on the specific models
studied in this thesis.
(t
:=
In connection with the model X :=
t
~t~t
O,±l,.!2;1 •.. ) and for a sample of size N, Herbst [1963a] proposed
the use of the periodogram of
X;:
I(~)(A) := N-IIJ 2(A)\2 = N- l
x
x
II
!2
"'-' L>.,LJ
y::."
" 't';.1\, ' ,
t=l
I
t,
to detect the periodicities associated with the Fourier expansion of
2
The argument given is as follows.
~t.
222
then Yt := ~t + Zt' where Zt = ~t(~t-l).
variables with zero means and variances
,Y
t
Thus, Zt are independent
2~~
(1
:5
t
~
N).
Therefore,
resembles the classical scheme of hidden periodicities and, by show-
ing that Jy(n)
(p:= 1,2, ..• ,m-l) is a linear function of lp' he con-
cludes that the above periodogram would be a suitable estimate of the
periodicities in ~~ and its high peaks would correspond to "active"
frequencies ~ with lp
1 o.
In the same paper Herbst [1963a] extends the above model to the
"moving average" modeL
He proved the following lemma which corresponds
to a lemma of Hannan [1960] in the stationary processes.
D
t
=
h
L: a,X _ ,
j
j=O J t
where X is as defined above.
t
Then
h"
L: a.Z J /\' •
, 0 J
J=
1
1
22 J (E)
D N
,
\
=
where, provided that ~~ <
00
1
22 Jx(~) + N~2
(t:= 0,~1,i2, •.. )
Ele
(n)!
2
<00.
e
(EN)'
Let
6
00
Let Xt
Hannan [1960] proved the following lemma,
=
.E Q;j€t-j'
j=O
where the €t are independent random variables with means zero, variance
2
cr
and fourth cumulant K ,
Then for a sample of size N
4
IN(~'X)
1
, IN(~j€) + 0(N- 2 ),
= f(~)
-y::i),
(i ==
1
The term 0(N-2) indicates that the neglected quantities will have mean
-1
CD
1
square which is of order N , if .E IQ; .\ j2 <
o
Earlier Bartlett [1954]
CD,
J
proved this lenuna, but in the following form:
1
IN(~'X) == f(A)IN(A, E) [1 + 0(N-2 )] ,
Herbst [1964] proved that for the moving average model
X
t
=
p
.E a.cr .'Tl . and using the weight function
j=O J t -J t -J
if ~ -
-1.. <
~
il
< ~ + .l..
~
otherw-ise ,
m
the function fN(~)
of YO
I
P
..E
a.Z J'~12 ,where YO ==
J=O J
N-li
~
.
= N-1 j~ODN(~'~)
Ix,N(j/N) is a consistent estimator
N
lim
N-->
00
2
(j) ==
N-l.E cr and I
x,N N
t=l t
X zjt/Nl2 ,
t=l t
Herbst [1965] generated the following circular complex stationary
process
1
J
=:
Q;
N
/
N-2 E X Z-ta: N
t=l t
(X
t
= CT't'Tl ) ,
t
7
*+s = r s
and showed that E Jela
and
The main purpose of that
paper was to construct test statistics for the variance ~~ and in this
connection he proved that
E [N- l
~ J?: cos(~)
J = Re(r s )
N
t=l t
and
E[N- lt=l~ ~ Sin(2lf;S)]
= Im("i<'
!s ) "
Herbst [1963b] proposed large sample test statistics, depending on
the periodogram of X and distributed asymptotically as the Kolmogorovt
Smirnov statistic., for testing the homogeneity of variance in a moving
average modelo
He introduced two test statistics and showed that, using
an argument based on the extreme conditions, the power of the tests
depends on the correlogram of the periodogram ordinates
0
Priestley [1962a, 1962b] studied normal stationary process X
t
given by X = Y + Zt' where Y is a stationary process with an absot
t
t
lutely continuous spectral density and Zt is a stationary process with
a discrete spectrum
0
Although the structure of this problem is entirely
different from the one we are studying at present, the testing problem
involved has some similarity to what we are dealing witho
One of the
testing procedure approaches Priestley proposed in his papers is the
grouping of the periodogram, a procedure which gives better results
than others depending on the periodogramo
8
3.
PERIODOGRAM OF
3,1.
SQUA..~ES
OF GAUSSIAN DATA
Introduction
The periodogram, in the classical time series analysis, is used to
detect the active frequencies in the mean of an observed serieso
Herbst
[1963a] studied the use of the periodogram of squares of Gaussian data
with zero means in the detection of the frequencies ~ with which the
non-zero Fourier coefficients.,
variances are associated.
p
in a Fourier series ex-pansion of the
In this chapter we will study the properties
and use of the periodogram, which is defined in a slightly different
way from the usual definition, of squares of this kind of data.
In
this case harmonic analysis will have different i.nterpretations from
the classical one, as we deal with non-stationary stochastic processes.
Consider the set EXt:
t
= 1,2,o.v,N]
of N random variables, which
is a finite length record of a time series, such that
~t
where
are NID
vary with t
0
(0, 1) and
~t
are non-negative real constants which
Let
Every real valued function, which takes a finite number of
can be approximated by a Fourier series [see
1958] •
for example Franklin,
Let us assume the Fourier series expansion for
2
~t
=
-ts
m
L;
N
I'sZ
s==m
where
A.
2rciA.
,
Z - e
i
= v-:i
~alues,
and m
2
~t
of the form:
9
Our
definition of the periodogram of the random variables Ylj Y , •.
2
o,
Y is given by:
N
where
I
y
(:>-J is a continuous random filllction. of A. and is usually evaluated
at a finite set of frequencies ~, where ~ =~, (p
= 0Jl, •.• ,m).
This
provides us with a relative measure of coincidence of ~~ with cos(2n~)
and sin(2n1r)'
The main concern of the present chapter is in finding an exact
expansion for cov
[I~N) (~»
I;N)
(~i )],
1
~
PJ pi
~
m, and studying
the asymptotic properties of this result together with some examples,
These examples are constructed by using random normal deviates multiplied by variances which follow certain Fourier expansion with specific
frequencies.
It is found that these series give clear support to the
use of this type of periodogram in detecting the frequencies in the
Fourier form of the variances.
The above covariance can be found directly by applying the definition of the periodogram given in
(3.4), but the use of the results of,
section 3 is a more direct way of arriving at this covariance.
10
3,2,
Harmonic Analysis
The periodogram given above may be written as
Therefore, taking expectations of both sides, we get:
since
E(~~) = 3, E(~2) = 10
From the expression of rr~ we notice that
where the asterisks indicate complex conjugateso
Therefore, summing up both sides., from 1 to N,we get:
and using the fact that
,
otherwise
we see that
11
The second term in (:3,5) may be written as
or
Therefore
Let us assume that
m
l:
ex=-m
2
Ii'ex l
tends to some limit as N ->
00,
Then we can conclude from (3,8) that
l
,
EI (N)
Y
(2) ~,
N
I i' 1
P
2
1
->
0 as N
->
00 ,
The form ofli'pl 2 is of a spectral density type which measures the
relative coincidence of CT~ with COS(21tt~) and Si.n(21tt~), For a large
value of N,
I;N) (~)
is an unbiased estirnate of this function,
we would expect that a large value of
2
will reveal a high value of Ii' p I
this frequency in the variances,
,
I~N) (~)
at a certain frequency
R
thereby indicating the existence of
Therefore y dominant frequencies in
the variances will show up in the high peaks of the graph of
against p, at those frequencies,
Thus,
I~N) (~),
12
:3 ,3 •
Covariance between Two Sets
of'
Variates
In this section we will prove a theorem and corollaries concerning
the covariance between two sets of variates which constitute four random
variables.
The results in this section are motivated by the need for a
formula, applicable to the random variables described in
le~na
3.1,
which would have the characteristic of the well-known lemma of Isser1is:
COV(:K\J J 11W) ~ E(xw) , E(yu) + E(xu) . E(yw),
where s, y, u, and ware normally distributed random variables; cov is
used for covariance,
The interesting point in the follmdng results is that the random
variables considered are not necessarily normally
distribu~ed.
Lemma :3 ,1
Let the random variables X, Y, U, Wbe identically distributed
random variables with mean
~J
such that each pair of these random
variables consists either of statistically independent random variables
or random variables which are identical with probability 1
(~,~o,
X=Y with probability 1 or X and Yare statistically independent),
either
Con-
sider x, y, u J and w as being these random variables when measured
around their means,
Then
E(X
cov(xy,uw)
4 ),
if x, YJ u, and w' are the same
:=
{
~(xu)
, E(yw) + E(xw)
0
E(yu.)
otherwise ,
Proof
The proof is obvious when the above random variables are the same,
To prove the second part we notice that we have only the following
cases to be considered:
13
(1)
All of the four random varlables are independent (! o!:. 0'
all different),
(2)
Then cov(xy,uw) :::: 0,
Two of the variables are independent and the other two are
the same, but independent of the first two,
(~)
:=
6 cases under this category,
There are
Then it is clear that
cov(xy,uw) = E(x~JW) - E(xy) , E(uw)
-
°
0
~
0
for each C1?se,
(3)
The four random variables form t'w'o sets of identical
variables which are independent,
cases
(a)
~~der
4 :: 2 :::: 3
There are (2)
this condition, which are listed as follows:
x, y and u,w are the two sets,
Then
cov(xy,uw) :::; E(xyuw) .", E(xy) , E(uw)
2·
2
2
?
"" E(x ) , E( u ) - E(x ) , E( u-)
:::: 0
(b)
0
x,u and y,w are the two setso
Then
cov(xy,uw) :::: E(xyuw) - E(xy) , E(uw)
"" E(xu) , E(yw) - 0
"" E(xu)
(c)
0
E(yw)
x,w and y,u are the two sets,
Then
cov(xy,uw) - E(xyuw) - E(xy) , E(uw)
= E(xw) • E(yu) ,
(4) Three of the random variables are the same and the fourth
one is different,
Then it is obvious that cov(xy,uw) :: 0,
14
Now, if x, y, u, and w satisfy ;a, then
E(xu) , E(yw)
so that cov(xy,uw)
= O.
E(xw) , E(YU) = 0,
=
If they satisfy 3b, then
E(YU) = O.
E(XW) •
Similarly, if they satisfy 3c, then
E(xu) • E(yw) = O.
Also both the resul.ts of 3b and 3c are zero whenever any of the
other cases are satisfied.
Therefore,
cov(xy,uw)
~
E(xu)
0
E(yw) + E(xw) •
is satisfied by all of the above cases.
E(yu)
Hence, the lerrnna.
The following corollary is an obvious result of the lerrnna.
Corollary 3 .1 ~
2
cov(x , uw) :; 2E(xu) • E(xw).
Theorem 3.1
If X, Y, U, and Vi satisfy the conditions af Lemma. 3.1, then
Var(r) if all the variables are the same
E(xa) . E(yw) + E(rw)
cav(XY,UW) '"
+
E(xuw) +
+
E(yu)
0
E(yu) +
E(yuw)] + ~(2-~)
+ E(yw)] , otherwise.
~[E(xyu)
+
[E(xu) + E(xw)
E(XYW)
15
Proof
The first part of the theorem is trivial.
To prove the other part J we consider;
cov(xy,uw)
= cov[(X-~)(y~~),(U-~)(W-~)]
= coV[XY-~X-~y~2,UW-~W-~u~2]
or
cov(xy,uw)
= cov(XY,OW) -
~cov(XYjW)
-
~cov(XY,U)
-
~cov(X,UW)
+ ~2cov(X,W) + ~2cov(x,u) - ~cov(Y,UW) + ~2cov(Y,W)
+
~
~-cov(Y, U).
Now, to find cov(XY,U) and other similar terms, consider
cov(xy,u)
= cov[(X-l)(Y-l),,(U-l)]
= cov(XY,U) - cov(X,U) - cov(Y,U).
But
cov(X,U)
= cov(x,u) = E(xu)
cov(Y,U)
= E(yu).
cov(XY,U)
= cov(xy,u) + E(xu) + E(yu).
cov(xy,u)
= E(xyu)
and
Therefore,
Also
- E(XY) - E(u)
= E(xyu).
Hence
cov(XY,U) = E(XYU) + E(xu) + E(YU),
and a similar result for the other terms.
16
Therefore,
cov(XY,uw) =
~[E(xyu)
E(xu) + E(yu)]
+
+ ~[E(xyu) + E(XW) + E(yw)]
+ ~[E(xuw) + E(xu) + E(xw)]
+
~[E(yuw)
=
~2[E(XU) + E(xw) + E(YU) + E(YW)]
+ E(yu) + E(yvd]
Substituting for cov(xy,uw) from lem.m.a. 3.1 and arranging
terms.~
we
get
= E(xu)
cov(XW,UW)
0
E(yw)
+
E(xw)
0
E(yu)
+ ~[E(xyu) + E(xyw) + E(xuw) + E(yuw)]
+ ~(2~~)[E(~~) +
Note:
E(xw) + E(yu)
+
E(yw)] .
The above holds true for normally distributed random variables.
The following corollaries are obvious results of theorem 3.1.
Corollary 3.2
If
~
= 1 in theorem 3.1, then
! var(-f) if all variables are the same
__
cov(XY,UW)
~ E(xu)
. E(YW) + E(xw) . E(YU) +E(xyu) +E(XYW)
+ E(xuw) + E(yuw) +E(:xu) +E(xw) +E(yu) +E(yw)
l otherwise
(3.13 )
0
Corollary,3·3
Var(-f) if all variables are the same
cov(-f, UW)
=
i
2
E(X;) . E(XW~ +E(xu) +E(xw)+ E(xuV/')
+
E(x u) + E(x w), otherwise.
(3.14)
17
3040
Using Corollary 302 in Finding
(p
:f
From the definition of
COV[ I;N)
(~), I~N)
pi, PJP'
I~N) (~)
(fi')]
:=
COV[I~N) (~),
= 1,2, 00 o,m)
we write
COV[Jy(~) • Jy(~)' Jy(~')
•
Jy(~')]
(3015 )
where, as before" the asterlsk indicates complex conjugateso
by substituting for J
y
J
Or
or
= N-4V,
where
N
V
= cov[
L:
N
f
t=l t =1
:2( t- t
ytyt,zN
I )
N
,L:
:2( a-a I )
L: YaYa,ZN
].
N
(3016)
a=l a'=l
First we shall find V, which can be written as
Applying the above corollary, setting
(3018)
and
~ = (1)2 - 1)
18
we get:
(3.19)
+
where the subscripts t, t', a, a i cannot all be equal under the
summation signs.
From the definition of
~j
we see that
::: [0 for
i
2 for i
rj
(3.20 )
=
j
and
8 for i ::: j
=
[
0
otherwise.
=k
19
Also from the definition of
~
we see that
4
84
Var(~t)
:::; E(~t) - (E(~t)]2
=:
,~
105
9
=:
96
0
Therefore
v :::; 4
N
+ 2
N
~ , ~ttlt'O'+ 2
f ~tt'ot'
t,t' ,0 =1
t,t ,0=1
N
+ 96
~ ~tttt
t=l
,
where, as before, the subscripts t, t',
(3,22 )
0, 0'
cannot all be equal under
the summation signs,
Now, allowing for the subscripts t, tV,
0, 0'
to be equal in the
summation, it is easily seen that V will reduce to the form:
20
N
N
V
=
4 .E
t,t':::;l
A.tt'tt i
+ 4
A.ttta:
+8
.E
t,a i =1
i
.E
t,a=l
tt't't
A.ttta:
N
N
A.at'oa
+ 8
A. 'tal
t,t' ,a'=l tt
+ 2
+ 8
.E
a,t'=l
.E
a,t=l
A.t..ar:n
N
N
+2
A.
N
N
+ 8
.E
t,t'=l
.E
N
.E
A.,
t , t' ,a=l tt at
N
A.tt't'a' + 2
f A.tt'at'
t,t' ,a'=l
t,t ,a=l
+ 2.E
N
+ 48 .E A.
tttt
t=l
0
Substituting from A.'S from (3,18) in (3023) we get:
+8
N
.E
.
1
t,a=
6 2
!E,' a'
_pi
(J".(j,Z
t
a
N -
N
21
N
L:
+8
t,a:=l
+~+~
N
N
N 8
+ 48 L: (J't •
t=l
(:3.24)
From C3.24) we notice that the first term can be written as:
N
L.
jCp+p1 )
4( L: (J'tZ
t=l
N
4
icp+p' ) *
)( L: (J'tZ
)
t=l
which is
and a similar result for the second term.
Also, we notice that the 4th term is a complex conjugate of the
3rd; the 5th term is a complex conjugate of the 6th; the lOth term is
a complex conjugate of the 7th; and finally the 9th term is a complex
conjugate of the 8th.
22
For a.p.;y complex number Z we have Z + Z* ::; 2Re(Z), where Re 1ndi~
cates the real part of Z.
Thus~
we see that V reduces to the following
fo:nn:
]
+ 16 Re
!J;
N ]
+ 16 Re
+ 4 Re
+ 4 Re
+
48
N
L:
8
(3.25 )
(J't •
t=l
Now we need to express V in terms of the Fourier coefficients ('1 )
s
of the expansion of
(J'~
given in (303)0
We have
(J'~, (J'~
and
(J'~
for which
we need to find the Four:ler coefficients in terms of r . We proceed as
s
follows:
(1)
Suppose
4
(J't
where
=
m
L: 0 Z
s
s==m
=ts
N
,
23
Then
:::; N-1
N !E
Z N - N for 'P :::: O,±N,
t=l
But
2::
0
00;
0 otherwise
0
Therefore
N
2::
( 0: i
z
-o:+s )'!
N = N when o:'-o:+s
= 0, or when
0:' :::;
o:-so
t::::1
Therefore
-ts
m
=
2::
where
f\. Z N
s
s=-m
f\.
s
Then by the same argument as in (1) we see that
f\.
s
Therefore
m
f\.
:::;
S
m
2:: ( 2:: 1 r~ 0)
j=-m i=~m i ~-J
Let
8
ert
m
r*j_S
or
f\.
s
:::;
ts
=
m
2:: 6,
s=-m
s
N
.z
0
~
where
-1
s = N
6,
By the same procedure as above we see that
r r* ,r*
2::
'j
~,
=-m i i-J j-s
N
2::
ts
er8 Z N
t=l
t
,
0
24
or
t1
=
s
m
L:
'V ",,*
'V*-v
i,j,k=-m'i'i~k' J"I J- k+S '
0
the sums so that the Z factors are under summations for t, t' and a,
we
V =
get:
4
m
L:
m
L:
ri1~
1s=-m i=-m
l-S
2
N !(p+p')- ts 1
(L: ZN
N)
1
t=l
ts
N -
ts
~
~
1f - N
+ 4Re[
m
L:
I"
S,S ,8
+ 4Re[
m
L:
0
rir~ r
=-m l=-m
~-s
s
I
N
L:
r ,,(
S
.
I
~,t
t
,
ts
-(p+p)- ZN
,a=l
_ tis'
N
N
=
N )]
_ tis'
-N-)]
t'
~
N
t'
rYr",'
SD
- - - -;;:::£, -
N
N
cis"
-
N)]
25
All the su.ms over t., t
i
and
0:
in (,3029) can, be fact.orized int.o a
sum over t, a S'um over t' and a sum over a.
Therefore, using the
identity (306) for each sum and remembering that
r B '" r N-s
~
and r -8 "" '7*s )
we see that V reduces to:
Therefore, referring to (3016), we get the exact expression
required:
2
cov[r(N)(;£) r(N)(l?')] _ 4
~ 'y 'V*
1
y
N' Y N
- ~!i~_mfifi-(p+pi)
i
26
From this we immediately
obtain~
The two expressions in (3031) and (3032a) are important results
for the description of behavior of the periodograms
I(N)(~)o
y
These
results) due to their complex appearance, may not give us a clear idea
of how they can be useful in understandj.ng the sampling properties and
application of the periodogramo
, tions the asymptotic behavior of
We wi.ll present in the next two
I(N\i\)
y
examples of the Fourier expansion for rr~o
for large N and some special
The purpose of the first is
to give simpler form to the above covariance by considerlng only
prominent terms in the above expressions,
Consider )'i (i
Therefore,
= 0)1"
0
sec~
"m),where by definition
27
Assume that
where '1 ,is a consta.nt"
0
chapter that
lim
N
->
00
Also, we a.ssumed at the beginning of this
)ll
~
I/'6 I
2
=:
Then in considering the
CO'
s=~m
asymptotic behavior of our covariance, we proceed as follows,
First,
consider the first term in (3,32), which is
It is clear that
Therefore,
A similar result holds for the second term in (3,32),
1'!',
Second, consider the third term in (3.32)
16 Re[,
*P
if
6/
m
~ rir *
°
or * ,] -< 1
if .'Y p*- ° m~ ,r1° *
r i - Jor *
j -p I
.
l~J j -p
i,J=~m
1,j=-m
I
,
28
Since
we get that
But
where
2
2
= max (O"t)
CY
M
0
t
Thus
Therefore
16 Re [*
7
if
P
m
~
r 17i*
l,J~~m
o
'_
.Y *
j ~p i
'~J
]
-<
If Irp* I
16
i
CY2
M
Co
N
j
,
and a similar result for the fourth term of (3.32), !o~o,
Again, by the same argument as above, it is easily seen that for
the fifth term of (3.32):
29
and for the sixth term:
(3.40)
Finally, for the last term of (3.32 L we see that
Therefore, combining reslilts in (3.35) through (3.41) and using
the fact that
COY
Ir 6 1=Ir;!, we
see that~
(N)(E) r(N)(Ev)] <
[ry
N' Y N ~
iL
N2- (cO,N )2
This can be written as
where,
C4
-18 2 C·
- ~ CTM O,N
Now, from this result, it is clear that
+ 162 <I
rf aM r p '!I +
Irp,l)
.
c
O,N
30
cov[ I
provided
2
CT
t
(N)
Y
(N).
(:2) I . (~. )] -> 0 as N ,->
N' Y N
i
is bounded uniformly in t.
00
(3.44 )
'
Also, 'we see immediately that
Var( r;N)(n)] -> 0 as N -> ex:> •
From (3.10) and the above equation,
N ->
ooJ
I;N)(~)
and in this sense J
EII~N)(n) ~ IYp 12 \2
--> 0 as
is a consistent estimator of
IYp i 2 .
Neglecting all terms but O(N- l ) in (3,32) we may conclude that for
N, COV[I~N)(~), I~N)(~i)] can be approximated by the following:
large
COy [ I
( N)(n)
~
Y
N'
I('N)(n
£
Y
N
i
)]
-
4 Re [*
~ ~*
m
~
.
m
-
N
~
~
*
' 1/ pi i ~= m' i' i - (p+p i )
4 Re [*
+ -N
YPYP I L:
0_
]
* (
royo
I )] '
~ ~= p=p
~-=m
From this it immediately follows that
(3.46)
3.6, Special Cases for the Fourier Form of CT~
To understand the sigrdficance and applications of the above results
we are presenting in this section some special cases of the Fourier
expansion for
m
L:
-m
2
CT
t
,
It is known that the Fourier form of CT~,
-ts
Ys Z
N
, can be written in the form:
31
where
The special cases follow.
(I)
Let
~~ be such that 7s~O for all erf, if I ~
only one frequency fiN.
Then
c{
ffi,
!.~.
has an expansion given by:
Considering the exact expressions for the covariance and variance
in (3.32) and (3.32a) and since Y =0 for all p~f, we see that:
l'
(a)
m
~ yiyr-(p+p') = 2Y OYf
i,--m
only two terms, YOY
f
if p=f~ 1"=0.
For there are
' which are non-zero, and lOY~f=lOYf' For
and 'tfl O
any other values of l' and pi the above sum is zero, since Y =0 for srf.
s
Obviously we will get the same result if we replace p by 1".
Therefore
otherwise.
(b)
This is obvious
since the only possible values for i which yield non-zero terms in the
above sum are -f, 0 and f.
this sum are YOY:
f
If p::::f and 1":::.:0, the only nOD-zero terms for
and lfY;' which are the same, corresponding to i=O
32
a.nd l' respectively.
!5'J the same argument as above we see that by setting
p=O, p'=f, the sums .will be 2rO/~'
a non-zero sum,
No other values of p, pi will yield
'l'hus
if
m
*
1'1'
.,.
o
'V
'-'
~=-m
~
~-
P=-I>~
if p""'f, pi=O
(
p-p i)
if p=O, pi::::f
otherwise .
m
~ I I* I*
= 3102
Pi,j=-m i i-J' J'-P
~.*
(c)
0
I If 12
+ 1 04
for p=O.
For the
non-zero terms are obtained from four possible combinations for i and J;
i=j=O; i=f; j=O; i=-f, j=O, and
yield the terms
16 and
These values of i and j will
i=j~fo
three of the term
r~lrf!2
respectively,
If p=f,
then the above four combinations of i and j will
, yield the terms
2- 12
'YOl/ f '
2
2
1"71'1 4, and two of the term 'Yo!'Yfl
respectively, Therefore
31;
:=
l
.
I'Y 1'1
0
31; • 1/1'1
o
2
2
+ 'Y
6
+ 1/1'1
for p=O
4
for p=f
otherwise,
The same result will be obtained by replacing p with pi •
(d)
For the sum
~
L...
'V,'V*
I
I·
i,j,k::;-m
to the case i=j=k=O,
or 1'0
1
1-
te rm
kl'Y*-v
•
k' +he
~
J I J_
0
The subscripts in this
s~~
,4
Yo c orr es P0 nd s
can take the values 0
All the combinations of these values, except in the case·i=j=k=O,
2
2
will yield the same term" r 0 ' I'Y fl ' Therefore.~ there are (23~1)=7
2
2
- 12
terms I~
l'Y f I • AlSO, there are seven terms of the form 'YO
• !'Y f
'
33
if f is replaced by -fo
Thus
Now applying t.he results (a)-(d.) to the covariance and
variance expressions in (30,2) and (30)2a)j we get~
or
and
for all values of p, pi
~
l,2,00o,m, not equal to fo
The variances will then follow in thE same way as
[(N)(f)]
var I y N
I
and
:=
4N
-2(:2
r0 + 2
Ir f I,2)2
above~
+ l6N-2( 2 ( 3r20"1r f!,2 + iI r f 14)]
34
,
Here it is to be noted that !'Y
f
12 Ai:2
:=
+ B2f and that all of the
above expressions do not depend on the value of
then
if
cov[
l'Y
s
l2
lyN>CfN')
0
I (N )(f+s )]
Y
--> 0 as s --~
does not decrease as i'Y s I
N
00,
2
p~
so that if p
isstill gi.ven by (3,48),
= f+s,
Therefore,
this covariance will not change, !,~., it
decreases with the increase of lag s,
On the other hand, considering expressions (3048) through (3.51),
one easily notices that, as N -:>
00,
the covariances and variances
approach zero rapidly,
Let ~~ be such that 'Y s ::: 0 for all s f f ,If I ~ m, ~ := 1,2,
f
~
~
2
.
'f 1
2
2
Le. ~t contains only two frequencl.es If and N
Then, ~t may be
II,
0
written in the form
Then, in a similar way of proof, obtained in the first example,
we arrive at the following
results~
2'Y or f
(a)
[
o
if p=fjjpi =0 or pI =f j' p=O,j=1,2
j
otherwise,
(b)
if p=O,pi:=f j or p:::f ,pi:::O(j=l,2)
j
o
otherwise
35
2,('Y
I
12
12 )+r 4
3YO
f . +IY f
0
1
2
224
3Y oIY
fj
l
+ IYfjl
for P=f j ,(j=1,2)
o
The covariances will then be as
+
for p=O
otherwise 0
follows~
48N-3[y~ + l4Y~(IYf
2
2
1
+ IY f 1 )]
2
(3053 )
2 +
Iy
1
and
cov[r(N)(l?) 0 r(N)(l?i)] = 48N- 3 ['V 4 + l4r2 (ly
Y
N'
Y
N'
'0
0
f
1
1
. f2
2 )]
1
(3054)
36
And the variances will be:
1
+ 4N- 1'Y f
j
12('Y~+2I'Yf
1)2 +
j
48N-3['Y6+14r~(I'Yf
2
1
1
+I'Y f
2
1
)]
2
(3.55 )
and
Similar remarks as those for example (I) are applied to this
example.
3.7. Artificial Examples of Time Series
3.7.1. Description of Series
In this section we present three artificial series, using 200
random normal deviates:
~t'
t
= 1,2
j
••
o,200.
Each series is con-
structed by multiplying these random deviates by the square root of
the variance ~~, which follows a specific Fourier expansion.
Therefore,
they are uncorrelated observations which may be written as Yt
= ~t~t'
t=1,2, ... ,200.
The Fourier expansions of ~~ are chosen in such a way that one or
a few of the coefficients 'Y
are zero.
s
is not zero while all the remaining ris
The sum of the absolute values of the real and imaginary
parts of these coefficients must not exceed
rO
~
0
37
It is found by
that some
peaks of
"
unplanned
I~N)(~).
ex~erience
gathered from constructing these series
f1
frequenc~es
oj
will sho'W' u:p in the relatively high
But after adjusting the coefficients,
'Y~s~
it is
found that the high values of the periodogram at those unplanned
frequencies will disappear to a certain extent,
The difficulty of
adjusting these coefficients is bigger whenever two or more frequ,enc:1es
'2
for the set of O"t
are desired,
Series I:
The generated series are listed below,
The variances fo1101l1' the expansion
t
2 : : : 20,6 - 7·4
' cos
( 100
7n:t) + 10, 4SJ,n.
' (7:lt
O"t
'100 ) ' t :::::
This means that 'YO : : : 20,6, 2Re(Y7) "" -704, 2ITIi('Y )
7
remaining coefficients are zero.
= 10.4,
and the
The periodogram of squares of the
series together with the true values of
Irp l2
+ N~lCo,N are listed at
different values of p, p : : : 0,1,2,'00,100, in Table (301).
A high peak
for this periodogram is recorded at p : : : 7, which indicates the
frequency 260 associated. with the variances.
One notices the irregular
behavior in the irregular fluctuations of the periodogram around the
2
true value of'I'Yp I + N-ICO,N as one moves farther away from p : : : 7.
The highest value of the
tr~e
periodogram is at p : : : 7 and close to the
value of the estimator at this place.
As N becomes larger, the sampling
fluctuation will decrease in magnitude but not in irregularity for
large values of p,
Series II:
There are two frequencies.. 2~0 and 260' associated with the
2
Fourier e:xpansion of O"t which is given by
38
Table:; 01
Values of the periodogram of Ser~; ea I and the
,
true values of IYpl 2 and N-1 C
at
p=Oy1 J 2 J
OJN
0
ooo J
p
a(p)
1
10072
0211
60621
10858
1·707
0326
40.864
2·954
.396
80162
7.141
2,641
50807
.419
20223
.815
,806
5,,126
7.377
1.136
2·978
8,524
9·541
.010
10292
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
p
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
a(p)
p
a(p)
p
a(p)
60651
6,,812
13,148
30136
30530
2,015
1.771
,614
50408
8.485
15.039
5·824
6.219
1·325
,704
1.392
8..392
1·935
0938
4.777
2.362
2.309
1.893
9·598
1.808
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
8,163
0893
0071
130226
7·570
10758
4.623
5.658
40632
20839
1·995
7.582
60109
20257
0191
1.437
.297
2,512
1.980
8.305
.041
1.582
200941
,537
.428
76
77
78
79
80
81
40378
).304
90158
30}39
5·9°6
20718
.170
1.454
5,742
70366
10656
·945
1.360
5·581
·764
6.881
2·393
.679
.168
.207
10.102
1.961
0027
2.868
2·771
G(O)
350.197
40,,730
0.000
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
100
39
coefficients are zero,
1J"IJ 2
the true values of
Series III:
The periodcgram of squares of this series and
+ I'(lCO,N are listed in Ta.ble (3,2),
:2
The Fourier e:x:pansion of G't in this series contains the
2
5
"
two frequencies 206 and 200
O"'2t :::'20
where 70
:=
p
l2
-
1 0 "9 cos (2nt
100 )
20,4, 2Re('/'2) :::: ~lO,9. 2Re(7 )
all the remaining
of 17
4'
0
5
7~S
are zero,
Q 8
vo
cos (5n:t
100 ) J
0'
:=
8 J 8, Im(72 ) ::::
Im(7~:::: 0 and
Values of pericdogram and true values
+ N=lCO,N are listed in Table ()o)o
High peaks at p :::: 2 and
P :::: 5 for the periodogram estimate are seen with other relative high
peaks at other values of p due to its irregular fluctuations around the
2
-1
true values of I 7,'1 + N Co N'
p
,~
),702,
Calculating
Procedur~
The above series wi.th the corresponding periodograms were con=
structed using an IBM 14100
A FORTRAN language program was written
following two stages of calculations;
(1)
The 200 random normal deviates (Tl ) 8...1J.d the necessary
t
2
coefficients for O"'t are fed into the computer,
from Xt :::: O"'tTlt (t
:=
1,2'000,200),
X is calculated
t
40
Values of the periodogram of Series II and the true
- 2
=1
values of i'Y
i and N CO,N for p=O,1,2, '00,100
Table 3020
P'
P
G(p)
1
2
0125
0682
240876
'::557
20862
,210
24,798
2,075
,467
9,194
3,786
4,542
60020
0-432
60158
,520
10583
30662
100714
0275
:3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
1o~.46
11,064
70958
0663
10682
G(O)
G(p)
P
I
Y3 1
26
27
28
29
30
4,341
3,735
90182
10852
20859
2,959
L880
,162
30261
2,313
11,963
40009
4,877
3,150
,677
10456
50573
10180
10759
40640
10933
10564
10542
60588
0585
21
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
2
2
IY 1
7
G(PJ
P
60912
L55lt
')'
,L
52
53
54
5.5
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
. ,2
! Yp ! ,
P
76
77
78
79
80
81
0474
13,,979
50198
5)011
4,588
50276
4,563
6,185
3,638
2,636
7,604
40000
,788
2,705
,194
20783
0006
8,268
10747
0463
220477
0170
10318
p~3,7
8:2
83
84
85
86
87
88
89
90
91.
92
93
94
95
96
97
98
99
100
2
YO
G(p)
5,778
3,537
7,026
20585
4,804
L389
,315
,856
2,783
4,899
1,773
- ,851
40226
4,670
2,801
4,384
1,785
10334
0164
0844
70935
0815
)+53
30175
10802
-1
N CO,N
p=l, 2, 00", 100
314,943
120960 370210
0,,000
4080400
20542
41
Table 3.3, Values of the periodogram of Series III and the true
values of I I'p 12 and N·,1 COJN for p=O,l J 2, ... ,100
p
G(p)
P
G(p)
1
2
3
4
5
2.537
20.542
3.794
2.216
24.692
1.518
40927
1.044
26
27
28
20
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
30418
10,805
10.514
·587
5.749
3.524
2.497
0579
.362
6.529
140250
50990
8,706
.240
0194
.677
4.249
2.185
.074
20195
2.994
0278
3.693
40843
,4.212
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
.1~87
4.473
5.702
0703
30574
.977
1.132
1.002
·979
4.464
30288
5.199
0321
14.771
4.104
1.667
4.865
G(O)
338.754
11'2 12
115 12
29.703 19.360
IY p l
G(p)
P
G(p)
5.086
2.784
2.057
160345
110058
0470
16,435
3,327
11.379
7.231
0434
3.211
2.940
30109
0497
,152
10118
4.150
0442
2,993
0070
4.060
24.391
3.469
2.244
76
77
78
79
80
81
82
83
84
85
86
87
88
89
3.180
6.664
7.409
9.902
4.153
3.844
4.551
.516
12.084
7.032
0640
.633
1. ,494
100431
3.394
8.216
20679
1.248
0651
0395
11.350
.150
,759
1.312
2.431
P
51
52
53
54
55
56
57
58
59
60
61
52
63
64
65
66
67
68
69
70
71
72
73
74
75
2
pr'2,5
p=1,2" .. ,100
J
00000
90
91
92
93
94
95
96
97
98
99
100
2
1'0
416.159
-1
N
CO,N
20570
42
(2)
Taki,ng the squa.re of X , the periodogram is computed by
t
applying the following expansion:
p
= 0,1,2,'00,100.
43
4.
CIRCULAR SrrATIONARY PROCESS
4.1.
Introductfon
The seri.es Yt which is obtained by squaring the series Xt
~
0"t "t
discussed in Chapter 3 was shown not to be a second order stationary
process.
The definition of a second order stat:1.onary process [Hannan,
1960] is that cov(Ytj Yt +s ) = Ts which depends onl.y on sand
val' ( Y ) '" TO <
t
00.
F::oom the series X it is poss:ible to obtain a
t
circular stationary process [Herbst,
1965] [JalJ (a
~ Oj~l,±2, ..• )
which is defined as follows:
where J
a is a complex",yalued random variable w:1 th zero mean and
variance:
(4.2 )
The fact that Ja
=:
Ja +N and
E(Ja~+) = "s" depending orD.y on Sj
(4.3)
implied that [J ] is a circular second order stationary process.
a
Herbst
[1965] presented test procedures, using the estimates of
the real and imaginary parts of r
s
y
for the variance 0"2 .
t
In this chap~
tel' we will study t,he sampling properti.es of the covariance functions
obtained for this process.
It is found tr~t the process [J ] has
a
properties simi.lar to the real second order stationary process.
approximate expressions for the covariance functions have a form
similar to that in the classical situation.
The
44
The covariances of the function
N-s
C
S
1
...lI= ---N
~ J u~+
"'S ex=l ex u. s
(8 ~ 0)
and that of Cs' wh:i.ch has a form similar to that of Cs except for the
replacement of N-s for N, are derived in Sections 2 and 3,
4,
Cs
real and imaginary parts of
In Section
are considered, which are the esti-
mates of real and imaginary parts of y .
s
4,20
The Function C
s
In this section we will derive the exact expression for the
covariance of Cs and C8
i
(s.
S
i
J
>0), and from this 'we will obtain an
-
approximate expression for cov(e , C i)'
s
s
Cs " defined by (4,4), is a complex-valued f'unction
j
consi,dered as
a bivariate function, whose components are the real and i.maginary parts
of C,
s
C i,8 an unbiased estimate of
s
parts of e
rs '
s
"l ,
s
i e, the real and imaginary
0
--
are unb,iased estimates of the real and imaginary parts of
It will be shown in this section that C has the same properties
s
as the classical sample covariance R (Bartlett, 1956]
s
0
The results of
this section will be used in the next section 'when we consider the
modifi,ed covariance function C
s
0
We start by recalling the generalized definition of covariance
applicable to complex random variables,
For any complex-valued func-
tion Z( t) the covari,ance is given by:
cov[z(t), Z(t+s)]
:=
E[Z(t) , Z*(t+s)] - E[Z(t)l , E[Z*(t+s)]
Thus, considering the covariance function C , we have
s
0
(405)
45
= (N-S)(~-S=tj
or, using
cov(JaJ~+s' JaiJ~i+s+t)'
(4.5),
cov( Cs Cs+t)
'
j
But from
N-s-t
a:1 aF.:l
N~s
= ~N
1
)( N
t)
~·s
~s-
.
N-s
1:
a=l,
(4,1) and (4,3) one can see that
or
Therefore, using
(4,6) in (4,5) we find that
N-s
cov(CS,CS+t) =
N-s·~t
(N~s)(i=s-t1 a:1 ai~l E(JaJ~+sJ~uJa'+s+t)~rsr~+t
(4.8)
we have:
46
It is easily seen that
i:.:j=k=£
i:jJ k""£J jrk
i=.e,
K=j, irk
i=k; J:.e,~
o
i:/j
otherwise,
0222222
Or, if we allow for equality of subscripts 1.n O"jO"k' O"iO"k and O"iO"j
in
(4.9) we get
2 2
i=j, k=.e
O"iO"k
2 2
1=.e
~~~~
i=k, j=.e
0
otherwise.
O"oO"k
l
E(XiXjXkX£)
Substituting in
::::
(4.9) we get
+
l
j
k=j
(4.10)
47
or,
E(J0:J*
J* J
)
0:+6 0:' 0:' +s+t
2
= N- (
1N
81
2 N
~ ~,Z
i=l
N?
L ~-kZ
)(
l
(,)k
s+t N *
)
k=l
(4.11)
Remembering that
N
and 'Y
P
.:!212
= N-1 ~ ~2Z N
t=l t
'
(4.12)
we see immediately that
Substituting this into
(4.8),
we get:
1
N-s N-s-t(
*
*)
cov(Cs,Cs +t ) = (N-s)(N-s-t) 0:~1 0:'~1 'Ya!~+sYai~+s+t+Y(o:'-a)'Y(o:'-a)+t'
(4.14)
Not, consider
Expandi.ng this term, we get ~
48
"I (N-s-t~
)+s"l *(N-s-t~ )+s+t +
+
The tern..s in (4,15) can be represented by an
P whose elements, A
o
"
:lJ
*
"I (2N-2.s-2t+l )+6 r (2N-2s-2t+l )+s+t
(N~s)X(N-s-t)
matrix
are the variable parts of the subsr;ripts in a
one-one correspondence with the above terms, ~,~, ~ ~--~ r~+sr;+s+t.
Recalling that
"I~ ;
following matrix:
r_N~
= r+N~'
we can replace the matrix P by the
49
--(N-2),-(N'·3),
.. ",-(s+t-1)
-(N-3),-(N-4),
.. o,-(8+t-2)
-(8-1)
o
Q
=
-(s+t-1),-(s+t-2), .•. ,(N-2s-2t)
-(s+t-2),-(s+t-3), ... ,(N-2s-2t+1)
-s
-(8-1)
,-(8-1), ... ,(N-2s-t-1)
,-(s-2)
, ° 00' (N-2s-t)
From the above representation of (4.15) we notice that
(N-2s-t)
occurs once,
(N-2s-t'-1)
occurs tWice,
o
-(s-l)
occurs (N-2s-t+l) times,
occurs (N-s-t) times.
(4.16)
50
All terms from -s through -(s+t-l) occur (N-s-t) times each
0
Also,
terms from -(s+t) through -(N-2) occur (N-s-t-l) times, (N-s-t-2) times,
and one time respectivelyo
Therefore, we can write:
N-s N-s-t
N-2s-t
a=l ai~l ra'~+sr~'~+s+t = V=:(N_2)[(N-s-t)-~(v)]rv+sr;+s+t
where
v~, -(s-l)~v~
~(v) " J ~S-l
-(s+t-l)~v~ -(8-1)
-(N-2)~v:5 -(s+t)
l-(v+S+t)+l
Now, let v+s=uo
-1
0
We see that
(4.18)
where
u>l
I-t<u< 1
The second term of (4014) can easily be proved to have the following form:
N-s N-s-t
N-s-t-l
a=l a'=l
u=-(N-s)+l
~
~ r(ai-a)+sr*(a'-a)+s+t =
~
[(N-s-t)
~l(u)hur~+t
(4.19 )
51
where
i:
T)l(u):=:
u>o
-t<u<o
-( u+t)
(4.20 )
Therefore, (4.14) becomes
COY (
C
1
r,'
s' 1.., +t) ~ ~s·
S
\H-~)
+
N-s-t
Tl~U)*
L:
[1 <~ TN~:-s _ t hU·"'U'''-t'
u=-(N-s)+2
\.
T
N-s-t~l
<
*
~T)'
Z
[1 -" 1 U
u=-(N-s)+l
,N,",s-t
]
(4.21)
YuYu+t
For large values of N this covariance takes the following approximate form:
(4.22 )
and from this we get
These are in exactly the same form as that for the covariance and
variance functions of a real stationary process.
if
"y
s
-> 0 as s ->
00,
One will notice that,
then the above expressions in (4.22) will not
be changed, since they do not depend on
"y.
s
This suggests that the
sampling fluctuations in the covariance function C depend on the
s
unknown "Y s even if "Y s decreases With the increase of s,
~.~.
the true
52
function y s damps down with the increase of
Thus Cs provides little
So
information, and hence is an unreliable estimate of y
s
as s becomes
larger.
Modification on C
s
The Function C
s
The modified function of Cs' Cs ' is defined by:
C =.l
s
N
L:JJ*
N a=l a a+s
where C is again a complex-valued function,
s
Since clearly
E(C ) = Ys' ~ is an unbiased estimate of Ys and the real and imaginary
s
s
parts of C are the unbiased estimates of the real and imaginary parts
s
of y.
s
In this instance
Cs has simple real and imaginary parts [Herbst,
1965) which are given by
Re(
C) =
s
2:n: ts )
cos ( -Nand 1m( -C ) ::
s
t=l t
-1 N _.2
N
L:
r
In this section we shall employ some of the results of the last
section in finding a simple expression for the covariance of
Cs ,
(s~s').
It will be found that
Cs
Cs
and
behaves in the same manner as
Cs (discussed in the last section) for large N and also for the case
in which y
Using
s
-:> 0 as s
-> co •
(4.23) and the definition of covariance we get:
(4.24 )
53
In Section 3 of this chapter it 'was shown that
(4.25 )
Therefore, substituting (4.25) into (4.24), we get
(4.26 )
N
f
'Yo:l+a+S)'~I+a+S+t:; T(s,t), say,
=1
and the matrix Q presented in the last section. Since 0:, o:i vary from
Let us consider the term
0:,0:
1 to N, the first row of Q, for example, will be -(N-2), -(N-3),
and the second row will be -(N-3), -(N-4), .•• , 1,2.
easily see that Q will take the form
~
- (N-2), - (N-3),
-(N-3 ), -(N-4),
Q
l
=
0
,
1
,
1
,
2
,
0,1
Thus, one can
such that
I)
0
II ,
0
o
Q
0
,
1
C
0
()
,
"
0
0
,
"
0
0
,
o
I)
I)
,
0
,
o a
,
1.
2
N-2,, N-l
N-l,
N
(4.27 )
Thus, it is easily seen that T(s,t) will take the form:
(4.28 )
where
T](u)
:=:
{
u-l
-( u-l)
if u > 0
if u < 0
54
Or T(s,t) can be written as:
Also, the second term of (4026) can be shown to take the form:
N-
N-l
2
[N-Ivl
I:
v=-(N-l)
h
(4.30)
I*+t
v v
Therefore, the final fOl~ for (4024) is
- +t )
cov( -C ,C
s s
= N-1
N,-l
i .. 1
[l.!l
*
1- N] LI v+s+ll'v+s+t+l
I:
v=-(N-l)
* ]
+ I VI v+t
0
(4031)
Now, for large values of N we may approximate (4031) by the
following form:
cov(Cs,Cs +t ) - Nand if I'
s
1
00
I:
Y=-oo
--> 0 whenever s -->
00,
[/V+S+l/~S+l
+
I'V/~]
the above expression for
cov(C s , Cs +t) will tend to have the form:
From the above one notices that the covariance for the new definiti~n
of Cs has a
sil~lar
form to that for the classical covariance
function for a stationary process [Bartlett, 1956], and behaves in the
same manner for large N and also for the asymptotic case in which
I'
•
s
->
0 as s
->
00
for a fixed value of No
little information about I'
s
Thus C provides us with
s
for large values of so
55
4.3.2.
Real and Imaginary Parts of
Cs
In this section we will study the sampling fluctuations of the
real and imaginary parts of C by deriving the exact and approximate
s
covariance and variance functions.
Let
Cs
:=;
A + iB
s
s '
(4.34)
'Where As, B are the real and imaginary, parts of
s
Gs
Then, since
st
-1 N
*
N
~ JaJa +
a=l
=
s
-1 N _2 Z N
N
~ ~t'
t=l
(4.35)
we see that
A
s
= N- l
N
~ Y?: cos 2nst
t=l t
N
and
(4.36 )
N
r
B = N- 1 ~
sin ~
s
t:=;l t
N
Herbst [1965] showed that E(A ) =
s
Re(r s ) and E(B s ) = Im(r).
Now,
s
to find cov(A , A I) and cov(B , B ,), using (4.36) we proceed as
s
s
s
s
follows:
ifcov(A ,A ,) - E[
s
s
-
N
~ _2_2 cos 2n st cos 2ns~t~ ]
~xtxt'
N
t,t =1
N
- E(
=3
~~
t=l
cos
2nst)
-rr-
(N _2
E
~
t=l
xt
2ns' t)
cos ---N---
N
2 2
N 4
2nst +
2nst
2ns' t'
cos---f
~t~t'COS -N- cos ~
N
t=l t
N
t,t =1
tIt'
~ ~
N 2
2nst)
N 2
2ns't)
- ( ~ ~t cos Jr" (~~t cos -N--t=l
t=l
56
In the last expression by taking one sum of the first term and
adding it to the second term, we get a qu.antity which would be cancelled
out by the negative termo
2
~cov(A
, Ai)
S
S
N 2
= 2 t=l
~ a
t
2nst
cos ~
2ns V t
~ ---N--- •
cOQ
it
Let at4
=
m
~
BIZN , and noting that cos2:rrx ::: is'l( Zx + Z-x) , we get:
i~m
L
U
-it (S+6 -)t
(S i-8
)t
(,)t
- 6 -6 = - ~ 0. ~ Z N [Z N + Z
N+ Z
N
2 i=-m ~t=l
1
m
N
which reduced to
r:fcov(As ,As' ) -- Q[o
2 S+Si + ° S i -8 + 0*'
S ~S + °S*'+S],
or
cov(A8 , AS I)
l
-N Re[o S 1+S +
:=
°i
6-6
] ,
(4.38 )
But, as' was proved in Chapter 3,
°k:=
m
•
~
*
y.y.
k '
~ 1.-
(4.39 )
~=-m
Therefore, using
(4.39) in (4.38), we get:
(4.40)
57
1
x
Remembering that sin2rrx =-IZ'
21 ~Z
~:X:
)Q
in exactly the same procedure
J
as above we can easily see that:
~cov(Bs ,Bs i)
or
cov(B ,B i)
s s
Therefore, coV(Bs,B ')
s
= !N Re[o,
s -s
=
0 i+ ), which can be written as
s
s
(4,41)
Now, for large values of N we can approximate
(4,40) and (4,41)
by the following:
(4.42)
and
Again, these formulae for covariances are of the same type of
covariance of a sample covariance function for a stationary process that
is well known in time series literature J and behave in the same fashion
for large N as well as in the case in which r
s
--> 0 as s -->
the latter case it is clear that r'!'"
( '+S ) -> 0 as s ->
1- s
if s'-s= t, formulae
(4.42) reduce to the following form:
00
00,
In
and that,
,58
(4.43 )
Thus, if whenever s -> ro
j
r s -> 0
cov(A,
A,+t)
B
S'
j
we get
= cov(B S ,BS +t)
(4.44)
and
(4.45)
proves that A and B are consistent estimators of Re(r )
s
s
s
and rm(r ) respectively, since var(A~) = var(B ) --> 0 as N --> 00.
s
N°S
N
-1
2
(21ttS)
(
)
-1
. (21tts)
Thus, s~nce Rers = N ~ rrt cos ~ and rm r s = N ~ rr2
t s~n ~ ,
t=l
t=l
o
( )
A and B provide consistent measures of coincidence of rr2 with
t
s
s
.
21ttS) and s~n(21ttS) respect~vely.
cos ( ---N0
4.3.3·
-rr-
The Relation between
As defined in Chapter 3,
Cs
and r(:)(N)
x
I(N\~.)
2
x
N
= N- 2
N
ts 2
~ y?: z N I . Hence
t=l t
I
!
Therefore, using the previous definiti.on of c~s' 'we see that
59
From a proof in Chapter 3 for large values of N, we have:
This means that for s'=s+t,
and that
2
varEl s l ] -
c
~ ReE(y:)2 .~
Yi y
r=2S + ~
1.=-00
2
IY 8 1
~
IY i
I
2
],
1=-00
(4.50)
It appears from (4,49) and (4,50) that if Y -> 0 whenever
s
S
->
00
and with N fixed, then:
and
-2
This means that if we ignore the terms of order N
in the
expres,~
sions for the covariance and variance in (3,32) and (3,32a) of Chapter
3, then the sampling fluctuations in
Ies 12
will damp down as s in-
creases, if y -> 0 as s increa.ses, On the other hand, if we include
s
terms of 0(N- 2 ), this is not the case,
60
50
PERIODOGRAM OF SQUARES OF' A MOVING AVERAGE
5010
Introduction
A more applicable model in practice than the random model
duced in the third chapter is the li.near precess model,
intro~
This model
is a linear combination of irldeI-endent random variables with zero means
h
and variances which are not necessarily equal)
tlrne se:ri.es tE.nrd.nol.ogy this .L2
innovati.ons."
th<'l
~,~o
Xt .-
o~bjYt_j
In
0
l.-v
modeJ... cf "a. mov:i.ng average of
This type of model maybe applied to a physical phenomenon
where an observation at a certain -eime depends on several other
observa~
tions on another factor made at a sequence of time previous to to
would also arrive at this form of model by considering a k=order
We
auto~
regressive scheme, KOARS) 'where k > 1 and whose characteristic equation
has roots Ill' 1l 2 ,
"0'
Ilk 'With
Ill j ! < 1
(j
:=
l,2,00.,k)o
Herbst [1963a] studied the following moving average model
h
L: ajcrt o11 i . ,
j=O
~J.'J
where aj's are real constants, crt and Tl t are as defined in Chapter 3,
In this chapter we will present tl::.e periodogram. of squares of X and
t
study its sampling properties with some artificial examples at the end
of the chapter
0
In Chapter 3 it was :proved that the periodogram of
squares of a random process is a consistent estimator of the function
2
lr
, pi
0
In the model considered at the present one would expect that
the corresponding perlodogram is an estimate of two factors., that which
involves the a.vs and the other a function of
J
2
17p loIn
the third
section of this chapter a relation between the periodogram of ~ and
61
that of' ~ :: cr~T)~ wi.11 ce proved? having the same character as that for
the stationary moving average model [Hannan, 1960] and where the variances are aS8umed to be constants.
5.2.
The Ori,g:i.n of a Moving Average Model
Many physical and natural phenomena can be described by the model
Slutsky- [1927] presented an original
discussed in the last section.
discussion of
1~i
t,ting moving average models, with constant variances,
to economical and other data
0
A ntunerous number of examples can be
found among these phenomena where the constancY:J vith respect to time,
of variances cannot be assumed,
Our moving average model may arise,
for example, in the crop yields dur:i.ng one season in which the amount
of rainfall daily is the "random cause", Leo the crop yield X at time
t
t is some constant times the rainfall at time t-l plus another constant
times the rainfall at time t-2,
00'
j
etc
0,
where the amount of rainfall
is assumed to be a random variable with zero mean and variance cr~.
A
moving average model may be considered as a limiting form of an autoregressive scheme.
Consider the
k~,order
autoregressive model
where crt are finite constants, T)t are uncor:related random variables
wi th zero means and variance 1, and k 1,8 an integer k
2:
L
To solve
this equation (in terms of crt,T)t) we notice that it can be written as
t
t-l
t-2
t-k)
( E -blE
-b 2 E ~ooo-bkE
JXt
= crtT)t
that Ej(Xt ) ::: Xt~'j' J ::: O,1,2 J o•• ,ku
J
where the operator E is such
The characteristic equation is
62
t.hen
-
which will have tr.e roots. real and
j
0
0
\l
b Et~k = 0 ,
k
....
complex~ fl·1. ,.fl?J""
"fl "
_.
k
The solu-
"tion of (502) can be written as
where the ~IS and aJ's are constants and X"t
=::
~u at the initial time uo
co
Now, if u ->
-00
j
X will take the ,form
t
00
2 2
j::::O J
-J
00
=
(h=oo) and we further assume
which is our model described in
L aOrrt 0<
X
t
(all t)"
As an e.xamp:Le, consider tb,e modified model of the first order
autoregressive scheme:
(t - 0, .±l,.:t2,,,,,,,) •
Then, solving (5,,4) by iteration, we obtain
where Xt "" Xu at initial time uo
If
!pi
<
1, then as u
->
-00
will give the solution
which is the moving average model presented here with h=oo and
a
j
;::; p
j
(j:; 0,1,,,,,
0) and we assume
00 28
L p
E~O
2
rr
t~s
<
00
(all t)o
this
63
5030 Periodogram of ~
The periodogram of ~ is defined by'
1
Using
J 2(R)
x N
B1
~
p ~
m, where
(501) in (505), we get;
= N-
l
N
tp
h
L: (
L: CLCt i O"t" .o't_ 01 i'lt_ i Tl _ ' I )Z N
t J
t=l j,ji=O J j
J
J
u
"or
1
= Bl
+ N-2B2 ' say, where
= N- 1
2 2
2
N
L: Ct'O"t .il·.Z
,
j=O t=l J -J t-J
h
!E
N
"
<
6
and
Consider B , and suppose first that h
l
as:
JE
N
I h 2
B1 = N- L: Ct. Z
j=O J
(t-j)p
N 2
2
L: O"t_ji'lt_ .Z
t=l
J
N
<
No
'I'hen, B
1
can be written
64
or
+
which is obtained by adding and subtracting terms to get a sum over t
I
which is equal to
where
For a reason which will be seen later, we
rr~y
write B in the following
l
form:
~(N+j-k)
o
ZN
)
Second, suppose that h > N; then.\) B in (5.8) would take the form
1
A such that:
A :::;: N-
1
N
2
.Jl:
N-Jo
N
L: ex Z
j=O j
L:
2
(J'
iT)
t'=l-j t
2
~
!J.:
N
vZ
+ N=
1
h
L:
.J.E
2 N
ex oZ
j=N+l J
N .
~'J
i
t =l-J
Working similarly to the above procedure, we see that
The last term of (5.13) can be written as:
or as:
or as:
Therefore, A has the form:
2
2
~
N
L: •ert i 'It i Z
h
A
,,lE
'.
= E clz N J(Y,~)
j=O J
Hence, we can write (5015) in the ferm:
A::
h
L:
2
ex
j=O J
4¥
Z
Thus, A has a form exactly sinrl,lar to that of B (5,1l), where we
l
assumed that h
S N, and in the meantime we arrived at a form similar to
that of a lemma proved for a stationary process [Hannan, 1960] and valid
for any value of ho
where
Going back to (506), we see that for any value of h
where A.c. is defined in (5017) and B in (507)0
2
67
Now, apply'iug the Minkowsk.i ineq,uality to the last two terms, we
have:
From (5017) we see that:
by applying the same inequality,
[ E IA
2
But
1 2]~ -< N
. 2 2
EI~ 1
,_.1
2
Or
hi?'
L: a-' I
j=O j I
4
= E~ = 3,
Therefore,
N-l
J-
min(, 1)
+
L:
o
1
',2~N-k 12]~
( 5020),
Now,
Hence, (5020) becomes
(5022 )
68
Now, if h
~
N, then
P
= ~ iOJ 12
No'1
j.~1
h
1 +
[E
j=O
(0)
min ( j=.l).~
h
J~l
j-.l ~
2
E
1]:::; L 100 1 [ E 1 + E 1]
0
j=-D J
0
0
max j-N
If h > N, then
But in the second term of
(5.24) we have
N
< Jo
Therefore,
or
P
< l/2
h
E
j=O
2
1
la I
j"2.
j
Hence, for any value of h we have that:
Then
El ~ I2
i.6
bounded if
h
E
ICXj I2.1.
j2
is convergent and
j=O
uniformly in t.
convergent.
If
/
CX
j
h
-= o( j'~ 3 2), then .E iCX
J=O
j
1
2
G'
t
is bounded
2 j"21 is certainly
We now consider the term B_2, which is given by the sum of the cross-product terms in (5.6). This can be written in the form

    B_2 = N^{-1/2} Σ_{j≠j'=0}^{h} α_j α_{j'} Σ_{t=1}^{N} σ_{t-j} σ_{t-j'} η_{t-j} η_{t-j'} z^{tp/N} ,

so that E(B_2) = 0. The variance of B_2 is then given by

    var(B_2) = N^{-1} Σ_{j≠j', k≠k'=0}^{h} α_j α_{j'} α_k α_{k'} E[ u_{jj'} u_{kk'} ] ,

where u_{jj'} denotes the corresponding sum over t. Now E[u_{jj'} u_{kk'}] is non-zero only if the time indices pair off, i.e., if t' = t + k − j and t' = t + k' − j', or if t' = t + k' − j and t' = t + k − j', and it is equal to zero otherwise. Therefore, it is easily seen that the variance of B_2 involves only sums of the form

    N^{-1} Σ_{t=1}^{N} σ_{t-j}² σ_{t-j'}² .                                                 (5.28)

Let us assume that σ_t² < constant = σ_M², uniformly in t, and that

    N^{-1} Σ_{t=1}^{N} σ_t² = γ_{0,N} → γ_0    as  N → ∞,  uniformly in j.

Allowing for the subscripts, one can see that the sum over the α's will be bounded; therefore, from (5.28) we get that

    var(B_2) ≤ 2 σ_M² γ_0 ( Σ_{j=0}^{h} |α_j| )⁴ E η⁴ .                                     (5.29)

Due to the fact that E(B_2) = 0, we see that

    E|B_2|² ≤ 2 σ_M² γ_0 ( Σ_{j=0}^{h} |α_j| )⁴ < ∞ .                                       (5.30)

In exactly the same way as above we prove that the corresponding fourth moment is bounded. Therefore E|B_2|² is bounded if Σ_{j=0}^{h} |α_j| is convergent. (It is certainly convergent for α_j = o(j^{-3/2}).)
Combining (5.26) and (5.30) into (5.19), we get that E|B_1| is bounded whenever

    Σ_{j=0}^{h} |α_j|² j^{1/2}     and     Σ_{j=0}^{h} |α_j|

are convergent. By a similar procedure, we can show that the corresponding fourth moment is bounded under the same conditions. Therefore, we can write (5.10) in the form

    I'_p = | T + N^{-1/2}( √N A_2 + B_2 ) |² ,                                              (5.32)

where T = Σ_{j=0}^{h} α_j² z^{jp/N} Z_N(Y, p/N), E|T|² < +∞ and E|T|⁴ < +∞. Therefore, from (5.32) we get (5.33), whose leading term is

    | Σ_{j=0}^{h} α_j² z^{jp/N} |² I_Y^{(N)}(p/N) .

For the second term in (5.33) we have E|√N A_2 + B_2|⁴ bounded. Applying the Schwarz inequality to the cross term, we get

    E | T* Σ_{j=0}^{h} α_j² z^{jp/N} Z_N(Y, p/N) | ≤ ( Σ_{j=0}^{h} α_j² ) E^{1/2}|T(p/N)|⁴ · E^{1/2}( I_Y^{(N)}(p/N) )² .     (5.34)

Now, from previous results we find that E|I_Y^{(N)}(p/N)|² and E|T(p/N)|⁴ are bounded, so this term is bounded, and a similar result holds for the third term. Also, it is obvious that the last term of (5.33) has a bounded second absolute moment.

Therefore we establish the following theorem:
Theorem (5.1)

Let X_t be defined as in (5.1), such that Σ_{j=0}^{h} |α_j| < ∞ and Σ_{j=0}^{h} |α_j|² j^{1/2} < ∞. Then

    I'_p = g(p/N) I_p + O(N^{-1/2}) ,                                                     (5.36)

where I'_p = I_{X²}^{(N)}(p/N), g(p/N) = | Σ_{j=0}^{h} α_j² z^{jp/N} |², I_p = I_Y^{(N)}(p/N), and the O term indicates terms which have absolute second moments of order N^{-1}.

Ignoring the term O(N^{-1/2}) and taking expectations of both sides of (5.36),

    E I'_p ≈ g(p/N) [ |γ_p|² + N^{-1} C_{0,N} ] = S(p) ,   say,                           (5.37)

where, as before, C_{0,N} = Σ_{j=-m}^{m} |γ_j|².
Now, applying the ordinary definition of covariance, we get

    cov( I'_p , I'_{p'} ) ≈ g(p/N) g(p'/N) cov( I_p , I_{p'} ) .

This indicates that, assuming boundedness of g(λ), I'_p behaves in a somewhat similar manner as the periodogram of squares of independent random processes discussed in Chapter 3.

Theorem (5.2)

I'_p is a consistent estimator of g(p/N)|γ_p|² in the sense of equation (5.39).

Proof:

From (5.37) it is obvious that

    var( I'_p ) ≈ ( g(p/N) )² var( I_p ) ,

which tends to zero as N → ∞, and from (5.36) we have (5.39). Therefore, I'_p is a consistent estimate of g(p/N) · |γ_p|², the product of two factors, the first being a relative measure of coincidence of the α_j² with cos(2πjp/N) and sin(2πjp/N), and the other being the function defined in Chapter 3 which measures the relative coincidence of σ_t² with cos(2πtp/N) and sin(2πtp/N).
For future use, we shall prove the following lemma.

Lemma (5.1)

All absolute moments of I'_p of finite order are bounded.

Proof:

From (5.5), and ignoring the term O(N^{-1/2}), we have that

    I'_p = | Σ_{j=0}^{h} α_j² z^{jp/N} |² I_p .                                            (5.40)

Therefore, by the Minkowski inequality, we get from (5.40)

    E|I'_p|² ≤ ( Σ_{j=0}^{h} α_j² )⁴ E|I_p|² .                                             (5.41)

Thus, E|I'_p|² is bounded, since lim_{N→∞} γ_{0,N} = γ_0 < ∞. Similarly, we can prove the same thing for higher moments. Hence the lemma.
5.4. Artificial Series

5.4.1. Description of the Series

In this section we will present three examples of artificial series generated by using a number of random normal deviates (η_t). The normal deviates are multiplied by the different values of the square root of the variance, which follows a certain Fourier expansion containing two or more terms. Different series are constructed by choosing suitable constants, α_j's, to form a moving average model with two or more terms. Periodograms of the squares of these series are computed together with the true values of S(p). The series are given as follows.
The frequency associated wl.th the Fouri.er e:xpansion of
O"~ is 2/200 such that
2
O"t
= 22·5
- 6 .0 cos (2nt)
100 + 15.4 sin (2nt
100)'
(t
== l,2~
0
oo,N
= 200),
are zero.
The model for the series is Xt
:=
O"tT)t + 0.5 O"t-lT)t-l' where h=l,
a =1, 0, a =005 and 201 values of the random normal deviates T)t are us ed.
O
l
Periodogram values and the true values of S(p) at p=0,1,2,
are listed in Table 501 and plotted in Figure 5.10
~100,
o •••
A peak at p=2 is
observed for the periodogram and the true value of S(p), but there are
irregular fluctuations of the periodogram as we move farther away from
p=2.
Again, this type of behavior is due to the fact that the periodo-
gram does not damp as r
s
-> 0 with the increase of lag So
For large
values of N these fluctuations will relatively decrease in magnitude
and the periodogram will then be an estimate of
"j=O J
Series Vo
I
2 j IN' 2
~ ex~Z
p
Ir I2
",. h
P
This series is a three-term mO'ving average given by
.
•
e
e
Table 5.1. Values of the periodogram of Series IV; the true value of S(p); values of |γ_p|² and N^{-1}C_{0,N} at p = 0, 1, 2, ..., 100.
[Tabulated values of G(p) and S(p) for p = 0, 1, ..., 100. Key entries: G(0) = 740.948; G(2) = 114.908 and S(2) = 111.655 at the peak p = 2; |γ_2|² = 68.29 and |γ_p|² = 0.000 for p ≠ 2, p = 1, 2, ..., 100; γ_0² = 506.250; N^{-1}C_{0,N} = 3.214.]
Figure 5.1. Periodogram values and the true values of S(p) for Series IV.
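To make the definition of S(p) concrete, the short Python sketch below (an illustration added here, not part of the original computations) evaluates S(p) = g(p/N)(|γ_p|² + N^{-1}C_{0,N}) for Series IV using the coefficients stated above, and can be checked against the key entries quoted from Table 5.1.

```python
import numpy as np

# Sketch (illustration): the "true value" S(p) = g(p/N) * (|gamma_p|^2 + C_{0,N}/N)
# for Series IV, with gamma_0 = 22.5, gamma_2 = (-6.0 + 15.4j)/2, alpha = (1.0, 0.5),
# N = 200 taken from the Series IV description above.
N = 200
alpha = np.array([1.0, 0.5])
gamma0 = 22.5
gamma2 = (-6.0 + 15.4j) / 2                     # |gamma_2|^2 = 68.29, as in Table 5.1

C0N = gamma0**2 + 2 * abs(gamma2) ** 2          # C_{0,N} = sum over j of |gamma_j|^2

def g(p):
    # g(p/N) = | sum_j alpha_j^2 * exp(2*pi*i*j*p/N) |^2
    jv = np.arange(alpha.size)
    return abs(np.sum(alpha**2 * np.exp(2j * np.pi * jv * p / N))) ** 2

def S(p):
    gp2 = abs(gamma2) ** 2 if p == 2 else 0.0   # the only non-zero coefficient is gamma_2
    return g(p) * (gp2 + C0N / N)

print(round(C0N / N, 3))   # 3.214, the N^{-1} C_{0,N} entry of Table 5.1
print(round(S(2), 2))      # about 111.7, close to S(2) = 111.655 listed in Table 5.1
print(round(S(1), 3))      # about 5.02, matching the off-peak values of S(p)
```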
Series V. This series is a three-term moving average given by X_t = σ_t η_t + 0.65 σ_{t-1} η_{t-1} − 0.50 σ_{t-2} η_{t-2}, where

    σ_t² = 20.6 − 7.4 cos(3πt/100) + 10.4 sin(3πt/100) ;

the Fourier expansion for σ_t² contains only one frequency, 3/200. Here we have h = 2, α_0 = 1.0, α_1 = 0.65 and α_2 = −0.50. The coefficients in the expansion are γ_0 = 20.6, 2Re(γ_3) = −7.4, 2Im(γ_3) = 10.4, and all the remaining coefficients are zero. For this series 202 random normal deviates (η_t) were used, as required by the model. Periodogram values and the true values of S(p) are listed in Table 5.2 and plotted in Figure 5.2, where a high peak for the periodogram and the true value of S(p) is noticed at p = 3. This peak corresponds to the frequency 3/200 associated with the variance σ_t².
Series VI. The Fourier expansion for the variance in this series contains the two frequencies 2/200 and 5/200, such that

    σ_t² = 20.4 − 10.9 cos(2πt/100) − 8.8 cos(5πt/100) ,

so that γ_0 = 20.4, 2Re(γ_2) = −10.9, 2Re(γ_5) = −8.8, Im(γ_2) = Im(γ_5) = 0, and all the other coefficients are zero. The model of the moving average is X_t = σ_t η_t + 0.65 σ_{t-1} η_{t-1}, which means that h = 1, α_0 = 1.0 and α_1 = 0.65. For this model, 201 random normal deviates were used to furnish the 200 values of the series. Periodogram values and the true values of S(p) (p = 0, 1, 2, ..., 100) are listed in Table 5.3 and plotted in Figure 5.3. Two peaks for the periodogram and the true values of S(p), at p = 2 and p = 5, are observed, which correspond to the two frequencies 2/200 and 5/200 in the variance model respectively. There are other peaks for the periodogram, but relatively smaller in magnitude than the first ones, due to the periodogram's irregular fluctuations.
Table 5.2. Values of the periodogram of Series V; the true value of S(p); values of |γ_p|² and N^{-1}C_{0,N} at p = 0, 1, 2, ..., 100.
[Tabulated values of G(p) and S(p) for p = 0, 1, ..., 100. Key entries: G(0) = 842.573; G(3) = 103.115 and S(3) = 120.421 at the peak p = 3; |γ_3|² = 40.73 and |γ_p|² = 0.000 for p ≠ 3, p = 1, 2, ..., 100; γ_0² = 424.360; N^{-1}C_{0,N} = 2.529.]
Figure 5.2. Periodogram values and the true values of S(p) for Series V.
Table 5.3. Values of the periodogram of Series VI; the true value of S(p); values of |γ_p|² and N^{-1}C_{0,N} at p = 0, 1, 2, ..., 100.
[Tabulated values of G(p) and S(p) for p = 0, 1, ..., 100. Key entries: G(0) = 813.490; peaks at p = 2 and p = 5; |γ_2|² = 29.703, |γ_5|² = 19.360 and |γ_p|² = 0.000 otherwise; γ_0² = 416.159; N^{-1}C_{0,N} = 2.57.]
Figure 5.3. Periodogram values and the true values of S(p) for Series VI.
5.4.2. Computation Procedure

FORTRAN language programs were written for an IBM 1410 to generate and compute the above three series. The computing procedure is as follows.

(1) 201 or more random normal deviates, specific values for the coefficients of the σ_t² expansion, and suitable values of the α_j's are fed into the computer. The 200 values of σ_t are computed, applying the corresponding model for each series. Then the 200 values of

    X_t = Σ_{j=0}^{h} α_j σ_{t-j} η_{t-j}

are computed for each specific series.

(2) The periodogram of X_t² is computed by using the same expansion as that given in 3.7.2 in Chapter 3.
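As an illustration of steps (1) and (2), the following minimal Python sketch (a stand-in for the original FORTRAN 1410 program, which is not reproduced here) generates Series IV from its stated σ_t² expansion and moving-average coefficients and computes the periodogram of X_t²; the normalization N^{-1}|Σ_t X_t² exp(2πitp/N)|² is assumed to be the expansion referred to in Chapter 3.

```python
import numpy as np

# Sketch of the computing procedure, using Series IV as the example.
# sigma_t^2 = 22.5 - 6.0*cos(2*pi*t/100) + 15.4*sin(2*pi*t/100), model
# X_t = sigma_t*eta_t + 0.5*sigma_{t-1}*eta_{t-1}; 201 normal deviates are used.
N = 200
t_all = np.arange(0, N + 1)                      # t = 0, 1, ..., 200
sigma2 = 22.5 - 6.0 * np.cos(2 * np.pi * t_all / 100) + 15.4 * np.sin(2 * np.pi * t_all / 100)

rng = np.random.default_rng(1965)
eta = rng.standard_normal(N + 1)                 # the 201 random normal deviates
y = np.sqrt(sigma2) * eta                        # sigma_t * eta_t, t = 0, ..., 200

alpha = np.array([1.0, 0.5])                     # alpha_0, alpha_1 (h = 1)
x = alpha[0] * y[1:] + alpha[1] * y[:-1]         # X_t for t = 1, ..., 200

# Periodogram of X_t^2 at p = 0, 1, ..., 100 (assumed normalization N^{-1}|.|^2)
t = np.arange(1, N + 1)
p = np.arange(0, N // 2 + 1)
z = np.exp(2j * np.pi * np.outer(t, p) / N)
G = np.abs(x**2 @ z) ** 2 / N                    # G(p), to be compared with S(p)
```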
6. ESTIMATION PROCEDURES FOR |γ_p|² (p = 1, 2, ..., m) IN A MOVING AVERAGE MODEL
6.1. Introduction

In Chapter 5 the periodogram of squares of the moving average scheme X_t (t = 0, ±1, ±2, ...) proved to be a consistent estimator of the product of two functions, g(p/N) and |γ_p|². The interest of the present work is concentrated on the study of the existence of estimators, and in particular consistent estimators, of a function of |γ_p|², where the function |γ_p|² is our tool in discovering the frequencies associated with the variances. It may happen that the α_j's have an oscillatory behavior, so that the periodogram of squares of X_t will not only estimate the dominant periodicities due to the variances, but also those periodicities of the coefficients α_j. Therefore, an attempt to derive an estimator of |γ_p|² separately is necessary. Unfortunately, there is no direct way to find such an estimator, for there is no way to separate |γ_p|² from g(p/N); they are two "confounded" factors. The only way to approach this problem is to find an estimate of a constant times g(p/N), a constant in the sense that it has no periodicities, or, if there are any, they could be neglected. Then this will lead to an estimation procedure where an estimator of |γ_p|²/constant is a ratio of two other estimators. But due to the difficulty in obtaining the exact distributions of these estimators, we shall attempt to find approximate methods for solution.
In this chapter we shall present this type of estimation procedure, by deriving an estimator which is a ratio of two other estimators. A consistent estimator was derived on the assumption that

    g(λ) = | Σ_{j=0}^{h} α_j² z^{jλ} |²

is differentiable in the interval 0 ≤ λ ≤ 1/2 and g'(λ) < a. A similar assumption was made by Herbst [1964] in deriving a consistent estimator of γ_0 | Σ_{j=0}^{h} α_j z^{jλ} |², where

    γ_0 = lim_{N→∞} N^{-1} Σ_{t=1}^{N} σ_t² .

Numerical computations, applied to Series IV–VI, are given in support of the theoretical discoveries.
6.2. A Consistent Estimator of |γ_p|²/C_0

6.2.1. Derivation of the Estimator

In this section we will derive an estimator of |γ_p|²/constant, on the assumption of regular behavior of g(λ) in the interval 0 ≤ λ ≤ 1/2.

It is interesting to observe that the present problem of estimation encountered in this chapter has some similarity with the problem of estimation of a noise spectrum studied by physicists and engineers. Consider, for instance, the following example given by Grenander and Rosenblatt [1957]. A resistance R is connected in series with an inductance L and a capacitance C. There is a random current y_t produced by the random voltage x_t due to the random motion (noise) of the electrons. The current y_t is considered as the result of a filter operating on x_t with frequency response Φ(λ), where Φ(λ) is a function of R, L, C and the frequency λ corresponding to x_t, of the noise. If the spectral densities of x_t and y_t are f_x(λ) and f_y(λ) respectively, then

    f_y(λ) = |Φ(λ)|² f_x(λ) .

Due to the physical properties of this experiment, the filter allows only frequencies in the neighborhood of the resonant frequency λ_0. Thus, for the purpose of spectrum estimation, f_x(λ) can be regarded as constant for all these frequencies near λ_0.
We shall prove the following theorem.

Theorem (6.1)

Consider the function

    R_n(p/N) = Σ_{j=-n}^{n} I'_{p+j} ,

where n is a sequence of integers such that n → ∞ as N → ∞. Then, on the assumption that n/√N → 0 as N → ∞,

    R_n(p/N) − Σ_{j=-n}^{n} g((p+j)/N) I_{p+j}  →  0

in probability (O(N^{-1/2}) indicates terms which have absolute second moment of order N^{-1}).

Proof:

Consider the quantity

    | Σ_{j=-n}^{n} I'_{p+j} − Σ_{j=-n}^{n} g((p+j)/N) I_{p+j} | = | Σ_{j=-n}^{n} O(N^{-1/2}) | .

Applying the Minkowski inequality we get

    [ E | Σ_{j=-n}^{n} O(N^{-1/2}) |² ]^{1/2} ≤ (2n+1) O_1(N^{-1/2}) ,

where O_1(N^{-1/2}) indicates terms which are of order N^{-1/2}. Applying the Markov inequality, for ε > 0, we get that the probability that this difference exceeds ε is at most of order (2n+1)² N^{-1}/ε², which tends to zero since n/√N → 0 as N → ∞. Hence the theorem.
Thus, we can replace Σ_{j=-n}^{n} I'_{p+j} by Σ_{j=-n}^{n} g((p+j)/N) I_{p+j}, since from Theorem (6.1) the other terms will tend to zero in probability. Therefore, (6.1) becomes

    R_n(p/N) ≈ Σ_{j=-n}^{n} g((p+j)/N) I_{p+j} .                                           (6.3)

Letting g((p+j)/N) = g(p/N) + (j/N) g'(θ), (p/N < θ < (p+j)/N), and remembering that E I_s = |γ_s|² + N^{-1} Σ_{j=-m}^{m} |γ_j|², we see that

    E R_n(p/N) ≈ Σ_{j=-n}^{n} [ g(p/N) + (j/N) g'(θ) ] [ |γ_{p+j}|² + N^{-1} Σ_{j'=-m}^{m} |γ_{j'}|² ] .      (6.3a)

Now consider the terms resulting from the product of quantities under the sum in (6.3a). But lim_{N→∞} Σ_{j} |γ_j|² = C_0 < ∞; therefore, as N → ∞,

    Σ_{j=-n}^{n} (j/N) g'(θ) |γ_{p+j}|²  →  0 .

It is clear that, except for Σ_{j=-n}^{n} g(p/N) |γ_{p+j}|², all the other terms vanish with the increase of N. Therefore, for large N,

    E R_n(p/N) ≈ g(p/N) Σ_{j=-n}^{n} |γ_{p+j}|² .
By definition we have

    var R_n(p/N) = Σ_{j=-n}^{n} var[ I'_{p+j} ] + Σ_{j≠j'=-n}^{n} cov[ I'_{p+j} , I'_{p+j'} ] ,        (6.5)

where the variances and covariances are those obtained in Chapter 5. Since n/√N → 0 as N → ∞, the sums in (6.5) over terms of order N^{-1} will tend to zero as N → ∞. Thus, considering large values of N, we can write (6.5) in terms of the leading terms only. The vanishing of the other terms (as N → ∞) could easily be seen by remembering that

    g(p/N) ≤ ( Σ_{j=0}^{h} |α_j| )⁴ = G < ∞      (p = 1, 2, ..., m),

since we assumed in Chapter 5 that Σ_{j=0}^{h} |α_j| is convergent. Thus, we can write

    var R_n(p/N) ≤ G² ( I_1 + I_2 ) ,

where

    I_1 = Σ_{j=-n}^{n} Γ(p+j)

and I_2 is the corresponding double sum of covariance terms. Let us consider I_1.
90
Now 3 for any complex function Z.~ Re( Z)
But from the relation
7
~
s
-1 N
N
2
~rrt
t:;:;;}
Z
:5 Iz I,
tS/N
Therefore,
we see that
we see that
,!.
.
~
70
1. "-"= m
whic:h t.ends to Co <
For the second term of
00 as N
r~
I
1 1-S,
->
I0 B
I -<
c::
.
Therefore,
co •
(6.10) we see that
Therefore
<. 4N-1
n
~
27
j=-n
2
m I 12
N ~
7'
::::
0'
i==m 1
~2n+1)
N '-,
2
'0 N
,
->
0
as
~
I
L.
1'
i==ID
N
->
00
1
12
.
91
Now, we consider the function I_2, which by (6.11) can be split into two parts, A_1 and A_2. Considering A_1, we see that it is bounded by a quantity of order N^{-1}(2n+1)²; a similar inequality could easily be found for A_2, namely

    A_2 ≤ 8 N^{-1} (2n+1)² ( Σ_{j=-m}^{m} |γ_j|² )² .

Since n/√N → 0 as N → ∞, both bounds tend to zero. From (6.9) we have that

    var R_n(p/N) ≤ G² ( I_1 + I_2 ) ,

where now I_1 and I_2 were proved to approach zero as N → ∞. Thus,

    var R_n(p/N) → 0    as  N → ∞ .
92
This means that Rn(~) is a consistent estimator or Cog(n) in the
sense that EIRn(~) - Cog(~)!2 --> 0 (N -> (0)0
At least heuristically
the following theorem would seem to be true:
is a consistent estimator of
,~
Co
'
p = 1,2,ooo,m,
(N->':Co)o
in the sense that
6.3. Numerical Applications

In this section numerical applications of the theory derived in the last section are given, applied to the Series IV–VI introduced in Chapter 5. The computations were carried out using an IBM 1410, with FORTRAN language for the programs. For each series R_n(p/N) was found for n = 10; then the estimator A(p) (p = 1, 2, ..., 100) was calculated. The values of A(p) (each multiplied by 20) are listed in Tables 6.1 through 6.3 and plotted in Figures 6.1 through 6.3. As we move farther away from the values of p where high peaks are expected, there are still irregular fluctuations in the values of A(p). There are two reasons for this. The first is possibly due to the influence of terms of O(N^{-1}) in the variance of I_{X²}^{(N)}(p/N), since such terms will not decrease as s → ∞. The second explanation could be attributed to the fact that the estimator A(p) is a ratio of two estimators, each of which is a function of the periodogram and consequently will occasionally yield high values for this ratio at values of p where the corresponding Fourier coefficients are zero. This irregular behavior will be clear when we compare Figures 6.1 through 6.3 with the corresponding ones in Chapter 5.
Table 6.1. Values of A(p), the estimator of |γ_p|²/C_0, for Series IV.
[Values of A(p) (each multiplied by 20) at p = 1, 2, ..., 100; the largest value occurs at p = 2.]
Table 6.2. Values of A(p), the estimator of |γ_p|²/C_0, for Series V.
[Values of A(p) (each multiplied by 20) at p = 1, 2, ..., 100; the largest value occurs at p = 3.]
Table 6.3. Values of A(p), the estimator of |γ_p|²/C_0, for Series VI.
[Values of A(p) (each multiplied by 20) at p = 1, 2, ..., 100.]
Figure 6.1. Values of A(p) for Series IV.

Figure 6.2. Values of A(p) for Series V.
We notice that for the present estimator there are relatively higher peaks than for the corresponding ones in the periodogram at the values of p which correspond to zero Fourier coefficients. On the other hand, except for Series VI, the highest peaks for A(p) correspond to dominant frequencies in the expansion for σ_t².
Figure 6.3. Values of A(p) for Series VI.
7. A TESTING PROCEDURE FOR THE VARIANCE σ_t²

7.1. Introduction

The last chapters were concerned with developing a theory for using models with variances depending on the time of observation. The question which arises, before using such models to describe the real world, is how to make the decision about the non-constancy of the variance within the observations. Thus, an important problem which needs to be investigated is that of testing the homogeneity of the variance for the model proposed.
In this chapter we shall study the following testing hypothesis problem:

    H_0:  σ_t² = constant      vs.      H_1:  |γ_p| ≠ 0 for some p,

using the moving average model X_t = Σ_{j=0}^{h} α_j σ_{t-j} η_{t-j}. Herbst [1963b] studied the same problem and developed two test statistics depending on the periodogram of X_t. In this chapter we shall present a large sample test statistic which depends on the periodogram of X_t². It seems that the testing procedure presented in this chapter is better, as far as detecting the departure from the null hypothesis, than the ones proposed by Herbst.

Priestley [1962a, 1962b] studied a problem similar in character, but quite different in structure, to the present one, as it deals with stationary processes which contain harmonic components superimposed on a non-uniform continuous spectral density function. The idea of grouping the periodogram ordinates which is used in this chapter is found in one approach Priestley used in his papers to test for the existence of harmonic components.

An argument for the power of the test shows that it is an increasing function, though it may not be monotonic, of the departure from the null hypothesis. Examples are presented in support of this test criterion at the end of the chapter.
7.2. The Test Statistic

7.2.1. Statistics Based on the Periodogram

Let us subdivide the set of the m periodogram estimates into (m/k) subsets as follows:

    I'_1, ..., I'_k ;   I'_{k+1}, ..., I'_{2k} ;   ... ;   I'_{m-k+1}, ..., I'_m ,

where k is some integer such that k = O(N^{1/3}); thus k/√N → 0 as N → ∞. Then we form the following statistics:

    B_ℓ(p) = Σ_{j=(ℓ-1)k+1}^{(ℓ-1)k+p} I'_j  /  Σ_{j=(ℓ-1)k+1}^{ℓk} I'_j ,                  (7.1)

(p = 1, 2, ..., k), where ℓ = 1, 2, ..., m/k; i.e., we get m/k sets of statistics B_ℓ(p), each of which takes k values. Letting c = (ℓ-1)k, it is easy to see that B_ℓ(p) takes the form

    B_ℓ(p) = Σ_{j=1}^{p} I'_{j+c}  /  Σ_{j=1}^{k} I'_{j+c} .
Recalling that g'(λ) < a, 0 ≤ λ ≤ 1/2, and using the result of Theorem (5.1), we see that

    I'_j = g(j/N) I_j + O(N^{-1/2}) .                                                      (7.3)

By Lemma (5.1) all moments of I'_j are bounded, and since j < k = O(N^{1/3}), we can easily see that the approximation (7.3) may be applied to each ordinate appearing in B_ℓ(p), where the O term indicates terms which have absolute second moment of order N^{-1}.

Now, we shall prove a theorem which asserts that the distribution of the statistic B_ℓ(p) is asymptotically the same as the one which involves the set of the I_j. First we shall prove the following preliminary theorem.
Theorem (7.1)

The random variable |K_{j+c} − I_{j+c}| converges to zero in probability as N → ∞, where K_{j+c} = I'_{j+c} / g(c/N).

Proof:

Consider the quantity [K_{j+c} − I_{j+c}]². It is clear from (7.3) that it has mean less than C_1 N^{-1} and variance C_2 N^{-2}, where C_1 and C_2 are some constants, not depending on j or N. Then by the Markov inequality we get that

    P[ (K_{j+c} − I_{j+c})² > ε_1 ] ≤ E(K_{j+c} − I_{j+c})² / ε_1 ,

where ε_1 is some positive number. Letting ε_1 = N^{-1/2}, we get

    P[ (K_{j+c} − I_{j+c})² > N^{-1/2} ] ≤ C_1 N^{-1/2} .

Thus, the above probability tends to zero as N → ∞. Hence the theorem.
Theorem (7.2)

The random variable |B_ℓ(p) − B̃_ℓ(p)| converges to zero in probability as N → ∞, where

    B̃_ℓ(p) = Σ_{j=1}^{p} I_{j+c}  /  Σ_{j=1}^{k} I_{j+c} ,        [c = (ℓ-1)k] .

Proof:

It is clear that we can write B_ℓ(p) as follows:

    B_ℓ(p) = Σ_{j=1}^{p} K_{j+c}  /  Σ_{j=1}^{k} K_{j+c} .

Consider the following identity (similar to the one suggested by Hannan [1960]):

    Σ_{j=1}^{p} K_{j+c} / Σ_{j=1}^{k} K_{j+c}  −  Σ_{j=1}^{p} I_{j+c} / Σ_{j=1}^{k} I_{j+c}
      =  Σ_{j=1}^{p} (K_{j+c} − I_{j+c}) / Σ_{j=1}^{k} K_{j+c}
         +  Σ_{j=1}^{p} I_{j+c} · Σ_{j=1}^{k} (I_{j+c} − K_{j+c}) / [ Σ_{j=1}^{k} K_{j+c} · Σ_{j=1}^{k} I_{j+c} ] .      (7.4)

Now

    E Σ_{j=1}^{k} I_{j+c} = Σ_{j=1}^{k} ( |γ_{j+c}|² + N^{-1} Σ_{i=-m}^{m} |γ_i|² ) ≤ 2 (1 + N^{-1}) Σ_{j=-m}^{m} |γ_j|² .

Thus, E Σ_{j=1}^{k} I_{j+c} remains bounded as N → ∞, and Σ_{j=1}^{k} I_{j+c} converges in probability to a non-zero constant. By Theorem (7.1), |K_{j+c} − I_{j+c}| → 0 in probability. Therefore the left-hand side of (7.4) minus the right-hand side of (7.4) converges in probability to zero, i.e., |B_ℓ(p) − B̃_ℓ(p)| → 0 in probability.

7.2.2. The Distribution of B_ℓ(p) (p = 1, 2, ..., k) under the Null Hypothesis

In this section we shall derive an asymptotic distribution for B_ℓ(p) under the null hypothesis.
Theorem (7.3)

Under the null hypothesis that σ_t² = constant, B_ℓ(p) is asymptotically distributed as β_{p,k} (a Beta distribution with parameters p and k).

Proof:

Let σ_1² = σ_2² = ... = σ_N² = σ². Then, by definition,

    I_p = N^{-1} | Σ_{t=1}^{N} σ² η_t² z^{tp/N} |²
        = σ⁴ N^{-1} { [ Σ_{t=1}^{N} η_t² cos(2πtp/N) ]² + [ Σ_{t=1}^{N} η_t² sin(2πtp/N) ]² } .        (7.5)

(7.5) can be written as

    I_p = σ⁴ N^{-1} [ U_p² + V_p² ] ,

where U_t(p) = η_t² cos(2πtp/N), V_t(p) = η_t² sin(2πtp/N), U_p = Σ_{t=1}^{N} U_t(p) and V_p = Σ_{t=1}^{N} V_t(p).

To find the asymptotic distribution of I_Y^{(N)}(p/N) under the null hypothesis we consider first U_p and V_p (p = 1, 2, ..., m−1). U_t(p) (t = 1, 2, ..., N) is a set of independent random variables with means E U_t(p) = cos(2πtp/N) and variances equal to

    E[U_t(p)]² − [cos(2πtp/N)]² = 3 cos²(2πtp/N) − cos²(2πtp/N) = 2 cos²(2πtp/N) .

Therefore, since p ≠ 0,

    E U_p = Σ_{t=1}^{N} cos(2πtp/N) = 0 ,

and

    var U_p = 2 Σ_{t=1}^{N} cos²(2πtp/N) = Σ_{t=1}^{N} [ 1 + cos(4πtp/N) ] = N + 0 = N .

Similarly, we prove that E V_p = 0 and var V_p = N.

Let us find the absolute third moment, ρ_t³, of U_t(p). Using the well-known inequality

    |a + b|^r ≤ 2^{r-1} ( |a|^r + |b|^r )      (r ≥ 1),

we get, after taking expectations with r = 3,

    E | U_t(p) − cos(2πtp/N) |³ ≤ 4 [ E|U_t(p)|³ + |cos(2πtp/N)|³ ] ,

and, since E|η_t²|³ = E η_t⁶ = 15,

    E|U_t(p)|³ = E| η_t² cos(2πtp/N) |³ = 15 |cos(2πtp/N)|³ .

Hence

    ρ_1³ + ρ_2³ + ... + ρ_N³ ≤ 4 [ 15 Σ_{t=1}^{N} |cos(2πtp/N)|³ + Σ_{t=1}^{N} |cos(2πtp/N)|³ ] .

Also, s² = Σ_{t=1}^{N} var U_t(p) = N, as proved earlier, and

    lim_{N→∞} ( Σ_{t} ρ_t³ )^{1/3} / s = 0 .

This means that U_p is asymptotically normally distributed with zero mean and variance N, since the conditions for Liapounoff's theorem are satisfied. Therefore, U_p²/N is asymptotically distributed as χ² with one degree of freedom. In a similar way to the above we can prove that V_p²/N is distributed as χ² with one degree of freedom.

Now

    cov[ U_p , V_p ] = E[ Σ_{t=1}^{N} U_t(p) · Σ_{t=1}^{N} V_t(p) ]
                     = Σ_{t=1}^{N} E[ U_t(p) V_t(p) ] + Σ_{t≠t'} E[ U_t(p) V_{t'}(p) ] .      (7.6)

The second term of (7.6) is obviously zero, and the first term is given by

    Σ_{t=1}^{N} E η_t⁴ sin(2πtp/N) cos(2πtp/N) = 3 Σ_{t=1}^{N} sin(2πtp/N) cos(2πtp/N)
        = (3/2) Σ_{t=1}^{N} sin(4πtp/N) = 0 .

Thus, U_p and V_p are uncorrelated, each one of which is asymptotically normally distributed with zero mean and variance N. Thus, they are asymptotically independent. Therefore, (U_p² + V_p²)/N is distributed as χ² with two degrees of freedom.

We have proved in Chapter 3 that the periodogram ordinates I_p (p = 1, 2, ..., m) are asymptotically uncorrelated. Therefore, the periodogram ordinates under H_0 form a set of independently distributed random variables, each of which is distributed as χ² with two degrees of freedom. Therefore, the statistic

    B̃_ℓ(p) = Σ_{j=1}^{p} I_{j+c}  /  Σ_{j=1}^{k} I_{j+c}

is distributed as β_{p,k}, and by Theorem (7.2) the same asymptotic distribution holds for B_ℓ(p). Hence the theorem.
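The Beta form of B̃_ℓ(p) under the null hypothesis is easy to check by simulation. The following Python sketch (an illustration added here; the group size k = 20 and N = 200 of the later numerical examples are assumed) draws constant-variance data, forms B̃_1(p) for the first group, and compares its Monte Carlo mean with the order-statistic expectation p/k used in Section 7.2.3.

```python
import numpy as np

# Monte Carlo check of the null behavior of B_1(p) (constant variance).
rng = np.random.default_rng(7)
N, k, reps = 200, 20, 500
t = np.arange(1, N + 1)
p_idx = np.arange(1, k + 1)
z = np.exp(2j * np.pi * np.outer(t, p_idx) / N)

B_sum = np.zeros(k)
for _ in range(reps):
    eta2 = rng.standard_normal(N) ** 2          # eta_t^2 with sigma_t^2 = 1
    I = np.abs(eta2 @ z) ** 2 / N               # periodogram ordinates I_1, ..., I_k
    B_sum += np.cumsum(I) / I.sum()             # B_1(p) = sum_{j<=p} I_j / sum_{j<=k} I_j

B_mean = B_sum / reps
# Under H_0, E[B_1(p)] is close to p/k, so the differences below should be small.
print(np.round(B_mean - p_idx / k, 3))
```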
7.2.3. The Development of the Test

Having grouped the periodogram ordinates into m/k subsets, the null hypothesis may then be presented in the following form:

    H_0 = ∩_{ℓ=1}^{m/k} H_0^{(ℓ)} ,

where H_0^{(ℓ)} is the null hypothesis corresponding to the subset I'_{(ℓ-1)k+1}, ..., I'_{ℓk} of the periodogram, given as follows:

    H_0^{(ℓ)}:  |γ_p| = 0  for all  p = (ℓ-1)k+1, ..., ℓk      (ℓ = 1, 2, ..., m/k).

Thus, we accept H_0 if we accept all the subhypotheses H_0^{(ℓ)} (ℓ = 1, 2, ..., m/k). Now, we need to construct a test statistic for the subhypotheses, and then develop a criterion for the general hypothesis H_0.

It is clear that B̃_ℓ(1) ≤ B̃_ℓ(2) ≤ ... ≤ B̃_ℓ(k). Bartlett [1956] has given the joint distribution of the B̃(p), and it is easy to prove this assertion. But this is the joint distribution of the order statistics of k independent rectangularly distributed random variables on the interval (0,1); B̃_ℓ(p) here is the p-th order statistic of such variables, since B̃_ℓ(1) ≤ B̃_ℓ(2) ≤ ... ≤ B̃_ℓ(k). From the distribution theory of order statistics, E[X_(r)] = r/(n+1), where X_(r) is the r-th order statistic and n is the number of the uniformly distributed variables over (0,1). Thus, E[B̃_ℓ(p)] ≈ p/k and consequently E[B_ℓ(p)] ≈ p/k.
Therefore, following Bartlett, B_ℓ(p) may be tested against its expectation by using the Kolmogorov–Smirnov statistic. Considering the statistic B_ℓ(p) in place of B̃_ℓ(p), we would consider the statistic

    D_ℓ = √k  max_p | B_ℓ(p) − p/k |

as a test statistic for the null hypothesis H_0^{(ℓ)}, where

    lim_{N→∞} P[ D_ℓ < x_0 ] = Σ_{j=-∞}^{∞} (−1)^j e^{−2 x_0² j²} .

If the hypothesis is not true, then the probability that D_ℓ exceeds x_0 is high (for a suitable value of x_0), i.e., the situation will be in favor of the alternative hypothesis.

If α is the significance level of the general hypothesis, then the appropriate significance level, α_1, of a test based on D_ℓ will be given by

    α = 1 − [ P[ D_ℓ ≤ d_{α_1} | H_0 ] ]^{m/k} = 1 − (1 − α_1)^{m/k} .

Hence, α_1 ≈ (k/m) α. If m/k = 5, then α = .05 and .01 for α_1 = .01 and .002 respectively. From Smirnov [1948] the corresponding critical values, d_{α_1}, are 1.63 and 1.93 respectively.

Now, we form the following testing procedure: compute D_ℓ for ℓ = 1, 2, ..., m/k and let D_L = max_ℓ D_ℓ; reject H_0 if D_L > d_{α_1}, and accept it otherwise.
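A compact sketch of this procedure, continuing the earlier Python illustrations, is given below. It uses the form D_ℓ = √k · max_p |B_ℓ(p) − p/k| written above (itself a reconstruction) and the Smirnov critical value 1.63 quoted for α_1 = .01; both should be treated as assumptions of this illustration, and `G` is again the array of periodogram ordinates I'_p.

```python
import numpy as np

def variance_homogeneity_test(G, k=20, d_crit=1.63):
    """Sketch of the grouped Kolmogorov-Smirnov procedure of Section 7.2.
    G[1], ..., G[m] are the periodogram ordinates of X_t^2; k is the group size."""
    m = len(G) - 1
    D = []
    for ell in range(m // k):                        # ell = 0, ..., m/k - 1
        block = G[ell * k + 1 : ell * k + k + 1]     # I'_{(ell)k+1}, ..., I'_{(ell+1)k}
        B = np.cumsum(block) / block.sum()           # B_{ell+1}(p), p = 1, ..., k
        p = np.arange(1, k + 1)
        D.append(np.sqrt(k) * np.max(np.abs(B - p / k)))
    D_L = max(D)
    return D, D_L, D_L > d_crit                      # reject H_0 if D_L > d_crit

# Example: D, D_L, reject = variance_homogeneity_test(G, k=20)
```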
7.2.4. The Mean and Variance of B_ℓ(p)

To the first order of approximation, we may write

    E B_ℓ(p) ≈ Σ_{j=1}^{p} E(I_{j+c})  /  Σ_{j=1}^{k} E(I_{j+c}) ,        [c = (ℓ-1)k] .

It is easy to see that

    Σ_{j=1}^{k} I_{j+c} = Σ_{j=-n}^{n} I_{j+a} ,    where  a = ℓk − (k−1)/2   and   n = (k−1)/2 ,

and therefore

    E Σ_{j=1}^{k} I_{j+c} = Σ_{j=-n}^{n} ( |γ_{j+a}|² + N^{-1} C_{0,N} ) .

Thus, for large n,

    Σ_{j=-n}^{n} |γ_{j+a}|² ≈ Σ_{j=-∞}^{∞} |γ_j|² = C_0 ,

and we can write

    E B_ℓ(p) ≈ Δ_p / C_0 ,    where   Δ_p = Σ_{j=1}^{p} |γ_{j+c}|² .

To find the variance of B_ℓ(p), we note that the approximate expression for the variance of a ratio of two functions, say u_1/u_2, is given by

    var(u_1/u_2) ≈ (μ_1/μ_2)² [ var(u_1)/μ_1² − 2 cov(u_1, u_2)/(μ_1 μ_2) + var(u_2)/μ_2² ] ,        (7.9)

where μ_1 = E u_1 and μ_2 = E u_2.
Here u_1 = Σ_{j=1}^{p} I_{j+c} and u_2 = Σ_{j=1}^{k} I_{j+c}, so that μ_1 = E u_1, μ_2 = E u_2,

    var(u_1) = Σ_{j=1}^{p} Γ(j+c) + Σ_{j≠j'=1}^{p} Γ(j+c, j'+c) ,
    var(u_2) = Σ_{j=1}^{k} Γ(j+c) + Σ_{j≠j'=1}^{k} Γ(j+c, j'+c) ,

and

    cov(u_1, u_2) = Σ_{j=1}^{p} Σ_{j'=1}^{k} Γ(j+c, j'+c)      (with Γ(s, s) = Γ(s)),

where Γ(s) is defined in (6.6) and Γ(s, s') in (6.7). Therefore, substituting in (7.9), we get

    var B_ℓ(p) ≈ (Δ_p/C_0)² [ var(u_1)/Δ_p² − 2 cov(u_1, u_2)/(Δ_p C_0) + var(u_2)/C_0² ] .
To understand the behavior of this variance we may consider ℓ = 1, i.e., c = 0, and let us assume that

    σ_t² = Σ_{j=-1}^{1} γ_j z^{-jt/N} ,    i.e.,   γ_p = 0  for all  p ≠ 0, 1.

Then it is clear that

    Σ_{i=-m}^{m} γ_i γ*_{i-(s+s')} = 0      for all values of s and s'   (s, s' ≠ 0),

and the covariance Γ(s, s') reduces to Γ(s) if s = s' and to zero otherwise.        (7.10)

This means that, for B_1(p),

    Σ_{j=1}^{p} Σ_{j'=1}^{k} Γ(j, j') = Σ_{j=1}^{p} Γ(j) ,

since all the terms inside the sum will be zero for j ≠ j'. Also, it is clear that Δ_p = |γ_1|² and C_0 = γ_0² + 2|γ_1|². Hence, (7.10) becomes a correspondingly simplified expression and, approximating further, we obtain a simple closed form for the variance of B_1(p).
7.3. An Argument for the Power of the Test

It is extremely difficult to find an explicit expression for the power function of the test statistic discussed in this chapter, but we shall present the following heuristic argument in this connection. Massey [1950] proved the following theorem. Let D_n = sup_x |F(x) − S_n(x)| be the Kolmogorov–Smirnov statistic. Suppose that we are testing the null hypothesis H_0: F(x) = F_0(x) vs. H_1: F(x) = F_1(x), (F_0(x) ≠ F_1(x)). Then the power P is given by

    P ≥ 1 − (1/√(2π)) ∫_{λ_1}^{λ_2} e^{-u²/2} du ,                                        (7.13)

where λ_1 = (−d_α + Δ√n), λ_2 = (d_α + Δ√n), Δ measures the departure of F_1 from F_0, and d_α is the critical value of the test. (The sign before Δ depends on whether F_1(x) < or > F_0(x).) Then it is clear that

    I = (1/√(2π)) ∫_{λ_1}^{λ_2} e^{-u²/2} du

is a monotone decreasing function of Δ, since d_α is fixed. In exactly the same way we see that this is true if a minus sign is in place of the plus sign before Δ. Therefore, the right-hand side of (7.13) is a monotone increasing function of Δ, and the power is always greater than this monotone function.

Let us examine the value of Δ by considering, for simplicity, the test statistic B_1(p).
Then for the present testing hypothesis problem we have, under the assumption that σ_t² = γ_0 = constant, F_0(x) given by the null distribution of B_1(p) (for ℓ = 1). Let the alternative be that

    σ_t² = Σ_{j=-1}^{1} γ_j z^{-jt/N} .

Then the alternative F_1(x) will be the distribution of B̄_1(p), where the bar indicates the periodogram estimates under the assumption of non-constant variance. Since

    E Ī_1 = |γ_1|² + N^{-1} C_{0,N}      and      E Ī_j = N^{-1} C_{0,N}      for all  j ≠ 1,

we may consider the Ī_j's (j ≠ 1) as not affected by the alternative hypothesis assumption, and hence

    B̄_1(p) = ( Ī_1 + I_2 + ... + I_p ) / ( Ī_1 + I_2 + ... + I_k ) .
Therefore

    Δ = sup_p | B̄_1(p) − B_1(p) |
      = ( I_{p+1} + ... + I_k ) | Ī_1 − I_1 |  /  [ ( Ī_1 + I_2 + ... + I_k )( I_1 + I_2 + ... + I_k ) ] .

Now E Σ_{j=2}^{k} I_j = (k−1) γ_0² / N, since σ_t² = constant = γ_0, and

    E [ Ī_1 + Σ_{j=2}^{k} I_j ] = |γ_1|² + N^{-1}( γ_0² + 2|γ_1|² ) + (k−1) γ_0² / N .

Let us look at these periodogram estimates in terms of their expectations. We then see that, for fixed values of N, the resulting expression for Δ is a monotone increasing function of |γ_1|²; it varies from 0 to 1 as |γ_1|² increases from 0 to ∞. Thus, we can conclude that Δ on the average is a monotone increasing function of the departure from the null hypothesis, and hence that

    Power > ( a monotone increasing function of the departure from the null hypothesis ).

This means that the present test will have an increasing probability of detecting the departure from the null hypothesis H_0: σ_t² = constant. The above argument does not prove the monotonicity of the power of the test, but at least it shows that the test is sensitive to the departure from the null hypothesis.
7.4. Numerical Examples

In this section we present three artificial series constructed by using a number of random deviates. The method of constructing these series is similar to those described in the previous chapters. FORTRAN language was used for the program to obtain the periodogram values and the values of the Kolmogorov–Smirnov statistic. There are 100 periodogram values for each series, subdivided into 5 sets; thus each set contains 20 periodogram ordinates. Values of the periodograms with the corresponding values of the statistic D_ℓ (ℓ = 1, 2, ..., 5) are listed in Tables 7.1 through 7.3. The series are listed as follows.

(1) Series VII. The expression for σ_t² is given by

    σ_t² = [ 1 + Σ_{s=1}^{5} (0.36)^s cos(2πst/200) + Σ_{s=1}^{5} (0.81)^s sin(2πst/200) ]² ,

and the model is a moving average of the form (5.1). From Table 7.1 it can be seen that the value of D_L = max(D_1, ..., D_5) = D_1 = 2.107, which is highly significant at the .001 significance level. Thus, the test is in agreement with the structure of the series.

(2) Series VIII. The Fourier expression for σ_t² is of the same form, with (0.48)^s as the coefficients of cos(2πst/200), s = 1, ..., 5. The model is again a moving average of the form (5.1).
Table 7.1. Periodogram values of Series VII and the corresponding values of D_ℓ (ℓ = 1, 2, ..., 5).
[Periodogram values G(p) for p = 1, ..., 100 in five groups of 20; the group statistics are D_1 = 2.107, D_2 = 1.360, D_3 = 0.766, D_4 = 1.121, D_5 = 1.161.]
Table 7.2. Periodogram values of Series VIII and the corresponding values of D_ℓ (ℓ = 1, 2, ..., 5).
[Periodogram values G(p) for p = 1, ..., 100 in five groups of 20; the group statistics are D_1 = 2.293, D_2 = 1.130, D_3 = 0.899, D_4 = 0.879, D_5 = 0.723.]
Table 7.3. Periodogram values of Series IX and the corresponding values of D_ℓ (ℓ = 1, 2, ..., 5).
[Periodogram values G(p) for p = 1, ..., 100 in five groups of 20; the group statistics include D_1 = 2.143 (the maximum) and D_2 = 1.112.]
From Table 7.2 it is clear that D_1 = 2.293 is the greatest value of the statistics D_ℓ. This is highly significant at the .01 significance level. Thus, we reject the hypothesis of σ_t² being constant.

(3) Series IX. For this series we get the following expression for the variance σ_t²:

    σ_t² = [ 1 + Σ_{s=1}^{3} (0.48)^s cos(2πst/200) + Σ_{s=1}^{3} (0.68)^s sin(2πst/200) ]² .

The model is again as in Series VIII. The maximum value of D_ℓ is D_1 = 2.143, and this is again significant at the .01 significance level.
8. SUMMARY AND CONCLUSIONS

8.1. The Problem
Two problems are considered in this thesis:
(1)
To find consistent estimators for the coefficients in the
Fourier expansion of the variance of data with zero means and unequal
variances.
Two models were studied in this connection.
The first one
is the random Gaussian data model with zero means and variances, which
are not necessarily equal.
The sampling properties of a stationary
process generated from this model were studied.
The second model is the
moving average model which is a linear function of the process in the
first model.
(2)
To propose a testing procedure for the null hypothesis that
the variances in the moving average model are the same, against the
general alternative that the variances are not equal.
The variances
were expressed in a short Fourier series expansion.
8.2. The Random Process

Let X_t (t = 1, 2, ..., N) be a realization of a random process such that X_t = σ_t η_t, where the η_t are NID with zero means and variance one and σ_t² possesses a Fourier expansion

    σ_t² = Σ_{s=-m}^{m} γ_s z^{-st/N} .

Then it was shown that the periodogram of X_t²,

    I_{X²}^{(N)}(p/N) = N^{-1} | Σ_{t=1}^{N} X_t² z^{tp/N} |² ,

is a consistent estimator of |γ_p|² (p = 1, 2, ..., m).
An exact expression for the covariance associated with this periodogram, using a lemma proved in this thesis, was derived. From this result an approximate expression, ignoring terms of order N^{-2}, was shown to take the form

    cov[ I_p , I_{p'} ] ≈ 4 N^{-1} Re[ γ*_p γ*_{p'} Σ_{i=-m}^{m} γ_i γ*_{i-(p+p')}  +  γ*_p γ_{p'} Σ_{i=-m}^{m} γ_i γ*_{i-(p-p')} ] .
Some special examples were given to clarify the meaning and applications of the above expressions. Three artificial series were constructed in support of the use of the periodogram of X_t². Irregular fluctuations were noticeable in all of these examples.
8.3. Circular Complex Stationary Process

The complex-valued circular stationary process

    J_a = N^{-1/2} Σ_{t=1}^{N} X_t z^{-ta/N}

was shown to behave in the same way as the real stationary process. The covariance of the function C_s = (N−s)^{-1} Σ_{a=1}^{N-s} J_a J*_{a+s} was shown to take the approximate form

    cov( C_s , C_{s'} ) ≈ (N−s)^{-1} Σ_{u=-∞}^{∞} γ_u γ*_{u+t} ,

where t = s' − s. When the function C̄_s = N^{-1} Σ_{a=1}^{N} J_a J*_{a+s} was considered, the covariance function took the form

    cov( C̄_s , C̄_{s'} ) ≈ N^{-1} Σ_{v=-∞}^{∞} γ_v γ*_{v+t} .

Thus, it was indicated that C_s and C̄_s behave in the same manner as the classical covariance function R_s, in that the sampling fluctuations tend to zero with the increase of N, but they never do so whenever γ_s does not tend to zero as s → ∞. The covariances for the real and imaginary parts of C_s, A_s and B_s respectively, were proved to take analogous forms, and A_s and B_s were shown to have the same properties as those of C_s and C̄_s.
8.4. The Moving Average Model

Let X_t = Σ_{j=0}^{h} α_j σ_{t-j} η_{t-j}, where the α_j's are real constants and σ_t and η_t are as defined above in Section 8.2. Then it was shown that

    I'_p = I_{X²}^{(N)}(p/N) = g(p/N) I_Y^{(N)}(p/N) + O(N^{-1/2}) ,

where Y_t = σ_t² η_t², g(p/N) = | Σ_{j=0}^{h} α_j² z^{jp/N} |², and the O(N^{-1/2}) indicates that the neglected terms have mean square of order N^{-1}. Thus, it was shown that I'_p is a consistent estimator of g(p/N) |γ_p|².

Three artificial series were constructed using a number of random normal deviates. The different series were generated by choosing specific Fourier expansions for σ_t² and suitable coefficients α_j. The graph of the periodogram vs. p showed high peaks corresponding to the dominant frequencies in the expression for σ_t². This periodogram possesses properties similar to those shown in the first problem, in that the sampling fluctuations decrease as N → ∞, but they do not do so whenever |γ_s| does not tend to zero as s → ∞.
8.5. A Consistent Estimator for |γ_p|²/C_0

Using the moving average model of 8.4, the function

    R_n(p/N) = Σ_{j=-n}^{n} I'_{p+j}

was shown to be a consistent estimator of C_0 g(p/N). Thus, using this function and the periodogram of X_t², it was shown that

    A(p) = I'_p / R_n(p/N)

is a consistent estimator of |γ_p|²/C_0.

Numerical applications using the series generated in 8.4 were considered. They revealed that high peaks of A(p) correspond to the dominant frequencies in σ_t². Graphs were drawn for these estimates. The present estimation procedure, being an approximate one and in a ratio form, was another cause (in addition to the one explained in the last section for a different estimator) for the irregular behavior of the estimate A(p) at values of p which do not correspond to the dominant frequencies of σ_t².
8.6. Testing Procedure

The null hypothesis H_0: σ_t² = constant vs. the alternative H_1: σ_t² ≠ constant was considered. A large sample testing procedure was proposed as follows. Subdivide the set of the periodogram ordinates as I'_1, ..., I'_k, I'_{k+1}, ..., I'_{2k}, ..., where k is an integer such that k → ∞ and k/√N → 0 as N → ∞. Then the set of Kolmogorov–Smirnov statistics D_ℓ was formed from

    B_ℓ(p) = Σ_{j=(ℓ-1)k+1}^{(ℓ-1)k+p} I'_j  /  Σ_{j=(ℓ-1)k+1}^{ℓk} I'_j        (ℓ = 1, 2, ..., m/k).

The test statistic was then proposed as D_L, where D_L = max_ℓ D_ℓ, and the null hypothesis would be rejected if D_L > d_α, otherwise it would be accepted; here d_α is such that P[D_ℓ > d_α | H_0] = α, and α is the level of significance for each D_ℓ.

An argument was given to show that this test has a power which is increasing with the departure from the null hypothesis. Artificial examples were given in support of this test. These series were constructed in a way so that they yield non-constant variances (following different Fourier expansions). All the tests were in agreement with the construction of the examples, i.e., the tests were in favor of the alternative hypothesis.
9. LIST OF REFERENCES
Bartlett, M. S. 1946. On the theoretical specification and sampling properties of autocorrelated time series. J. Roy. Statist. Soc. Suppl. 8:27-41.

Bartlett, M. S. 1950. The periodogram analysis of continuous spectra. Biometrika, 37:1-16.

Bartlett, M. S. 1954. Problemes de l'analyse spectrale des series temporelles stationnaires. Publ. Inst. Statist. (Univ. de Paris), 3 (Fasc. 3):119-134.

Bartlett, M. S. 1956. An Introduction to Stochastic Processes. Cambridge Univ. Press, Cambridge.

Bliss, C. I. 1958. Periodogram Regression in Biology and Climatology. Agricultural Experimental Station, New Haven.

Cramer, H. 1946. Mathematical Methods of Statistics. Princeton Univ. Press, Princeton.

Franklin, P. 1958. Fourier Methods. Dover Publ., Inc., New York.

Grenander, U. and Rosenblatt, M. 1957. Statistical Analysis of Stationary Time Series. John Wiley and Sons, New York.

Hannan, E. J. 1960. Time Series Analysis. Methuen, Ltd., London.

Herbst, L. J. 1963a. Periodogram analysis and variance fluctuations. J. Roy. Stat. Soc. B., 25(2):442-450.

Herbst, L. J. 1963b. A test for variance heterogeneity in the residuals of a Gaussian moving average. J. Roy. Stat. Soc. B., 25(2):451-454.

Herbst, L. J. 1964. Spectral analysis in the presence of variance fluctuations. J. Roy. Stat. Soc. B., 26(2):354-360.

Herbst, L. J. 1965. The statistical Fourier analysis of variances. J. Roy. Stat. Soc. B. (To appear in June, 1965, issue.)

Jenkins, G. M. 1961. General considerations in the analysis of spectra. Technometrics, 3:133-166.

Kendall, M. G. 1943. The Advanced Theory of Statistics. Vol. 1. Chas. Griffin and Co., London.

Massey, F. J. 1950. A note on the power of a non-parametric test. Ann. Math. Stat., 21:440-443.

Priestley, M. B. 1962a. The analysis of stationary processes with mixed spectra. I. J. Roy. Stat. Soc. B., 24(1):215-233.

Priestley, M. B. 1962b. Analysis of stationary processes with mixed spectra. II. J. Roy. Stat. Soc. B., 24(2):511-529.

Slutsky, E. 1927. The summation of random causes as the source of cyclic processes. Problems of Economic Conditions, ed. by the Conjuncture Institute, Moscow, 3(1). (Reprinted in Econometrica, 5:105.)

Smirnov, N. V. 1948. Tables for estimating the goodness of fit of empirical distribution. Ann. Math. Stat., 19:279-281.

Tukey, J. 1961. Discussion, emphasizing the connection between analysis of variance and spectral analysis. Technometrics, 3:191-219.