In: Economics
If a researcher is considering whether to use distributed lags or proceed with a dynamic model, discuss the considerations for using one or the other.
As stated before, the main aim of this research is to show that if advertising costs have long-term benefits, they must be shown as an intangible asset in the financial statements and amortized over their useful lives. But if they do not provide benefits for more than one period, they must be shown as expenses in the financial statements. The selection of either policy can have meaningful effects on reported profits. This uncertainty has led many researchers to work in this field, as will be explained in chapter three. In this chapter we try to explain the research design thoroughly.
DEFINING THE RESEARCH PROBLEM:
As we know, the research problem undertaken for study must be
carefully
selected. Help may be taken from a research guide in this
connection. A problem
must spring from the researcher’s mind like a plant springing from
its own seed.
AUTOREGRESSIVE AND DISTRIBUTED LAG MODELS
In regression analysis involving time series data, if the
regression model
includes not only the current but also the lagged (past) values of
the explanatory
variables (the X’s), it is called a distributed-lag model. If the
model includes one or
more lagged values of the dependent variable among its explanatory
variables, it is
called an autoregressive model. Thus:
(2-8-1) $Y_t = \alpha + \beta_0 X_t + \beta_1 X_{t-1} + \beta_2 X_{t-2} + u_t$
represents a distributed-lag model, whereas
(2-8-2) $Y_t = \alpha + \beta X_t + \gamma Y_{t-1} + u_t$
is an example of an autoregressive model. The latter is also known as a dynamic model, since it portrays the time path of the dependent variable in relation to its past value(s).
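As a concrete sketch, both specifications can be estimated by ordinary least squares; the Python snippet below does so on synthetic data (the data-generating process, coefficient values, and variable names are illustrative assumptions, not part of this study):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"x": rng.normal(size=n)})
df["x_lag1"] = df["x"].shift(1)
df["x_lag2"] = df["x"].shift(2)
df = df.dropna().reset_index(drop=True)

# Generate y from the distributed-lag form (2-8-1); coefficients are illustrative
df["y"] = (1.0 + 0.6 * df["x"] + 0.3 * df["x_lag1"] + 0.1 * df["x_lag2"]
           + rng.normal(scale=0.5, size=len(df)))
df["y_lag1"] = df["y"].shift(1)

# Distributed-lag model (2-8-1): current and lagged X's as regressors
dl = sm.OLS(df["y"], sm.add_constant(df[["x", "x_lag1", "x_lag2"]])).fit()

# Autoregressive (dynamic) model (2-8-2): lagged Y among the regressors
ar_df = df.dropna()
ar = sm.OLS(ar_df["y"], sm.add_constant(ar_df[["x", "y_lag1"]])).fit()

print(dl.params)  # estimates of alpha, beta_0, beta_1, beta_2
print(ar.params)  # estimates of alpha, beta, gamma
```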
Autoregressive and distributed-lag models are used extensively in econometric analysis, and in this chapter we take a close look at such models with a view to finding out the following:
1. What is the role of lags in economics?
2. What are the reasons for the lags?
3. Is there any theoretical justification for the commonly used
lagged models in
empirical econometrics?
4. What is the relationship, if any, between autoregressive and
distributed-lag
models? Can one be derived from the other?
5. What are some of the statistical problems involved in estimating
such models?
6. Does a lead-lag relationship between variables imply causality?
If so, how does
one measure it?
THE ROLE OF “TIME,” OR “LAG” IN ECONOMICS
In economics the dependence of a variable Y (the dependent
variable) on another
variable(s) X (the explanatory variable) is rarely instantaneous.
Very often, Y
responds to X with a lapse of time. Such a lapse of time is called
a lag. To illustrate
the nature of the lag, consider the following example.
Link between money and prices. According to the
monetarists,
inflation is essentially a monetary phenomenon in the sense that a
continuous
increase in the general price level is due to the rate of expansion
in money supply far
in excess of the amount of money actually demanded by the economic
units. Of
course, this link between inflation and changes in money supply is
not instantaneous.
Studies have shown that the lag between the two is anywhere from 3
to about 20
quarters. In one such study, the effect of a 1 percent change in the M1B money supply (= currency + checkable deposits at financial institutions) is felt over a period of 20 quarters. The long-run impact of a 1 percent change in the money supply on inflation is about 1 $(= \sum_i m_i)$, which is statistically significant, whereas the short-run impact is about 0.04, which is not significant, although the intermediate multipliers seem to be generally significant.
Incidentally, note that since P and M are both in percent form, the $m_i$ (the $\beta_i$ in our usual notation) give the elasticity of P with respect to M, that is, the percent response of prices to a 1 percent increase in the money supply. Thus, $m_0 = 0.041$ means that for a 1 percent increase in the money supply the short-run elasticity of prices is about 0.04 percent. The long-term elasticity is 1.03 percent, implying that in the long run a 1 percent increase in the money supply is reflected by just about the same percentage increase in prices. In short, a 1 percent increase in the money supply is accompanied in the long run by about a 1 percent increase in the inflation rate.
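In symbols, the figures above amount to the following (a restatement using the $m_i$ notation, not a new result):

```latex
\text{short-run elasticity} = m_0 \approx 0.04,
\qquad
\text{long-run elasticity} = \sum_{i} m_i \approx 1.03
```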
DETECTING AUTOCORRELATION IN AUTOREGRESSIVE MODELS: THE DURBIN h TEST
As we have seen, the likely serial correlation in the errors vt makes the estimation
problem in the autoregressive model rather complex : In the stock
adjustment model
the error term vt did not have (first-order) serial correlation if
the error term ut in the
original model was serially uncorrelated, whereas in the Koyck and
adaptive
expectation models vt was serially correlated even if ut was
serially independent. The
question, then, is: How does one know if there is serial
correlation in the error term
appearing in the autoregressive models?
As noted, the Durbin-Watson d statistic may not be used to detect (first-order) serial correlation in autoregressive models, because the computed d value in such models generally tends toward 2, which is the value of d expected in a truly random sequence. In other words, if we routinely compute the d statistic for such models, there is a built-in bias against discovering (first-order) serial correlation. Despite this, many researchers compute the d value for want of anything better.
Recently, however, Durbin himself has proposed a large-sample test
of first-order
serial correlation in autoregressive models. This test, called
the
h statistic, is as follows:
(2-12-1) $h = \hat{\rho}\sqrt{\dfrac{n}{1 - n[\operatorname{var}(\hat{\alpha}_2)]}}$

where n = sample size, $\operatorname{var}(\hat{\alpha}_2)$ = the variance of the coefficient of the lagged $Y_{t-1}$, and $\hat{\rho}$ = the estimate of the first-order serial correlation, which is given by Eq. (2-11-5).
For large sample sizes, Durbin has shown that if $\rho = 0$ the h statistic follows the standardized normal distribution, that is, the normal distribution with zero mean and unit variance. Hence, the statistical significance of an observed h can easily be determined from the standardized normal distribution table.
In practice there is no need to compute $\hat{\rho}$ because it can be approximated from the estimated d as follows:

(2-12-2) $\hat{\rho} = 1 - \dfrac{d}{2}$
where d is the usual Durbin-Watson statistic. Therefore (2-12-1) can be written as

(2-12-3) $h = \left(1 - \dfrac{d}{2}\right)\sqrt{\dfrac{n}{1 - n[\operatorname{var}(\hat{\alpha}_2)]}}$
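As a minimal sketch of (2-12-2) and (2-12-3) in Python (the function and argument names are mine; it assumes n, d, and $\operatorname{var}(\hat{\alpha}_2)$ are already available from an OLS fit):

```python
from math import sqrt

def durbin_h(n: int, d: float, var_a2: float) -> float:
    """Durbin h from sample size n, the Durbin-Watson d, and the
    estimated variance of the coefficient on the lagged Y (var_a2)."""
    rho_hat = 1.0 - d / 2.0          # approximation (2-12-2)
    denom = 1.0 - n * var_a2
    if denom <= 0:
        # h is undefined when n*var_a2 >= 1 (negative number under the
        # square root); in that case the test is not applicable.
        raise ValueError("Durbin h test not applicable: n*var_a2 >= 1")
    return rho_hat * sqrt(n / denom)  # equation (2-12-3)
```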
The steps involved in the application of the h statistic are as follows:
1. Estimate $Y_t = \alpha_0 + \alpha_1 X_t + \alpha_2 Y_{t-1} + v_t$ by OLS (don't worry about any estimation problems at this stage).
2. Note $\operatorname{var}(\hat{\alpha}_2)$, the variance of the coefficient of the lagged $Y_{t-1}$.
3. Compute $\hat{\rho}$ as indicated in (2-12-2).
4. Now compute h from (2-12-1) or (2-12-3).
5. Assuming n is large, we just saw that

(2-12-4) $h \sim \mathrm{AN}(0, 1)$

that is, h is asymptotically normally (AN) distributed with zero mean and unit variance. Now from the normal distribution we know that
(2-12-5) $\Pr(-1.96 \le h \le 1.96) = 0.95$
that is, the probability that h (i.e., any standardized normal variable) lies between −1.96 and +1.96 is about 95 percent. Therefore, the decision rule now is:
(a) If h > 1.96, reject the null hypothesis that there is no positive first-order autocorrelation;
(b) If h < −1.96, reject the null hypothesis that there is no negative first-order autocorrelation; but
(c) If h lies between −1.96 and 1.96, do not reject the null hypothesis that there is no first-order (positive or negative) autocorrelation.
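The steps above can be put together in Python with statsmodels (a sketch on synthetic data; the data-generating process and variable names are illustrative assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):  # illustrative autoregressive data-generating process
    y[t] = 1.0 + 0.5 * x[t] + 0.4 * y[t - 1] + rng.normal(scale=0.5)

df = pd.DataFrame({"y": y, "x": x})
df["y_lag1"] = df["y"].shift(1)
df = df.dropna()

# Step 1: estimate Y_t = a0 + a1*X_t + a2*Y_{t-1} + v_t by OLS
res = sm.OLS(df["y"], sm.add_constant(df[["x", "y_lag1"]])).fit()

# Step 2: variance of the coefficient on the lagged Y
var_a2 = res.bse["y_lag1"] ** 2

# Steps 3-4: rho_hat from d via (2-12-2), then h via (2-12-3)
d = durbin_watson(res.resid)
n = len(df)
h = (1.0 - d / 2.0) * np.sqrt(n / (1.0 - n * var_a2))

# Step 5: compare |h| with the 5 percent normal critical value 1.96
print(f"d = {d:.3f}, h = {h:.3f}, reject H0 of no autocorrelation: {abs(h) > 1.96}")
```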
As an illustration, suppose in an application involving 100 observations it was found that d = 1.9 and $\operatorname{var}(\hat{\alpha}_2) = 0.005$. Therefore, from (2-12-3):

$h = \left(1 - \dfrac{1.9}{2}\right)\sqrt{\dfrac{100}{1 - 100(0.005)}} = 0.7071$

Since the computed h value lies within the bounds of (2-12-5), we cannot reject the hypothesis, at the 5 percent level, that there is no positive first-order autocorrelation.
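This arithmetic can be checked directly (a quick standalone check in Python):

```python
from math import sqrt

n, d, var_a2 = 100, 1.9, 0.005
h = (1 - d / 2) * sqrt(n / (1 - n * var_a2))
print(round(h, 4))  # 0.7071, inside (-1.96, 1.96): do not reject H0
```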
Note these features of the h statistic:
1. It does not matter how many X variables or how many lagged values of Y are included in the regression model. To compute h, we need to consider only the variance of the coefficient of the lagged $Y_{t-1}$.
2. The test is not applicable if $n[\operatorname{var}(\hat{\alpha}_2)]$ exceeds 1. (Why?) In practice, though, this does not usually happen.
3. Since the test is a large-sample test, its application in small samples is not strictly justified.
LIMITATIONS OF THE STUDY
The study had the following limitations:
1. The period of the study was 1998 to 2004; this period was limited for generalization of the findings.
2. The 1512 food companies were divided into 9 groups for testing.
3. Since detailed data for each firm were not available, the firms were tested as a group.
4. Since the observations for each group were few (1999 to 2004), all of them were tested as one group (9 groups × 6 periods = 54 periods). Only 6 periods were used because the previous year's sales were necessary for the equation. Since the data for 1997 were not available, the starting year was taken to be 1999, with 1998 as its previous year.