Financial Analysis

Task 1

(i)

One of the assumptions underlying ordinary least squares (OLS) estimation is that the errors are uncorrelated. Of course, this assumption can easily be violated for time series data, since it is quite reasonable to think that a prediction that is (say) too high in June could also be too high in May and July. That kind of cyclical effect is indicative of positive autocorrelation, and it is quite common in time series data. But say we ignore this fact; why is it a problem to use OLS if the errors are autocorrelated? Consider a simple regression problem, and let ρ be the first-order autocorrelation of the errors (i.e., ρ = corr(e_i, e_{i+1})) and φ be the first-order autocorrelation of the predicting variable x. It is likely that x, too, would exhibit autocorrelation; this is not a violation of any assumptions, but it can affect the properties of the OLS estimators if there is autocorrelation of the errors (Gregory, 2003, 213). Further, assume the particular autocorrelation structure known as a first-order autoregressive, or AR(1), model (we'll talk more about this a little later).
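Since the AR(1) structure is central to everything that follows, a small simulation can make it concrete. The sketch below (in Python; the parameter values and sample size are illustrative choices, not figures from the text) generates AR(1) errors and checks that their sample first-order autocorrelation is close to ρ:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(m, a, rng):
    """Generate an AR(1) series z[i] = a * z[i-1] + u[i], with u ~ N(0, 1)."""
    z = np.empty(m)
    z[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - a**2))  # stationary starting value
    for i in range(1, m):
        z[i] = a * z[i - 1] + rng.normal()
    return z

rho = 0.9
e = ar1(5000, rho, rng)

# The sample first-order autocorrelation corr(e_i, e_{i+1}) should be
# close to the rho used to generate the series.
print("target rho: ", rho)
print("sample corr:", np.corrcoef(e[:-1], e[1:])[0, 1])
```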

The first problem with using OLS estimates in the context of autocorrelated errors is that they are inefficient; that is, they have higher variability than they should. The efficiency of the OLS estimator of β₁ relative to the best possible estimator (the efficiency is simply the ratio of their variances) depends on both ρ and φ, as the simulation sketch below illustrates.
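As a rough check on this claim, here is a minimal Monte Carlo sketch comparing the sampling variance of the OLS slope with that of GLS (a Prais-Winsten fit treating ρ as known, which is the best linear unbiased estimator in this setup). The sample size, parameter values, and replication count are illustrative assumptions, and this simulation is not the source of the figures quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, phi, reps = 100, 0.9, 0.9, 2000

def ar1(m, a, rng):
    """AR(1) series z[i] = a * z[i-1] + u[i], u ~ N(0, 1), stationary start."""
    z = np.empty(m)
    z[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - a**2))
    for i in range(1, m):
        z[i] = a * z[i - 1] + rng.normal()
    return z

def slope_ols(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def slope_gls(x, y, rho):
    # Prais-Winsten: quasi-difference so the transformed errors are iid.
    X = np.column_stack([np.ones_like(x), x])
    s = np.sqrt(1.0 - rho**2)
    Xs = np.vstack([s * X[0], X[1:] - rho * X[:-1]])
    ys = np.concatenate([[s * y[0]], y[1:] - rho * y[:-1]])
    return np.linalg.lstsq(Xs, ys, rcond=None)[0][1]

b_ols, b_gls = [], []
for _ in range(reps):
    x = ar1(n, phi, rng)   # AR(1) predictor
    e = ar1(n, rho, rng)   # AR(1) errors
    y = 1.0 + 2.0 * x + e
    b_ols.append(slope_ols(x, y))
    b_gls.append(slope_gls(x, y, rho))

# With rho = phi = .9 this ratio should come out far above 1, in the
# neighborhood of the roughly tenfold inefficiency discussed below.
print("Var(OLS slope) / Var(GLS slope):", np.var(b_ols) / np.var(b_gls))
```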

It is apparent that if the errors are autocorrelated, the OLS estimator can be seriously inefficient. For example, if ρ = φ = .9, the variance of the OLS estimator is 10 times that of the best estimator. For positively autocorrelated errors, the inefficiency is fairly insensitive to the autocorrelation of the predictor, but for negatively autocorrelated errors, a positively autocorrelated predictor can actually help (it is fairly unlikely, however, that the signs of the autocorrelations of the predictor and the errors would be different). Note, by the way, that results for negatively autocorrelated predictors mimic these, except that the role of the sign of the autocorrelation of the errors is reversed (a negatively autocorrelated predictor is more trouble for negatively autocorrelated errors). This is not good, but an even bigger problem also exists: the standard error of β̂₁ is badly misestimated if the usual OLS computer output is used, as the following sketch shows.
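That misestimation can be seen directly by simulation: fit OLS repeatedly, record the slope variance that the standard output reports, and compare it with the slope's actual sampling variance. This is a minimal sketch with illustrative settings, not a reproduction of the original table:

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho, phi, reps = 100, 0.9, 0.9, 2000

def ar1(m, a, rng):
    """AR(1) series z[i] = a * z[i-1] + u[i], u ~ N(0, 1), stationary start."""
    z = np.empty(m)
    z[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - a**2))
    for i in range(1, m):
        z[i] = a * z[i - 1] + rng.normal()
    return z

reported, slopes = [], []
for _ in range(reps):
    x = ar1(n, phi, rng)
    e = ar1(n, rho, rng)
    y = 1.0 + 2.0 * x + e
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)        # usual estimate of the error variance
    cov = s2 * np.linalg.inv(X.T @ X)   # covariance matrix the OLS output reports
    reported.append(cov[1, 1])
    slopes.append(beta[1])

print("average reported Var(slope):", np.mean(reported))
print("actual sampling Var(slope): ", np.var(slopes))
# With rho = phi = .9 the reported variance comes out at only a small
# fraction of the actual one, in line with the "about 10%" figure below.
```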

It is apparent that using the usual measures of fit can lead to very misleading inferences. For example, if ρ = φ = .9, the estimated variance of β̂₁ is about 10% of its true value. This implies that the t-statistic for β₁ is about 3.1 times too large (a similar inflation of F and R² values also occurs). Thus, if left uncorrected, an insignificant relationship (say t = 1.5) can be mistakenly viewed as highly significant (apparent t = 4.65). It often happens that a regression on time series data with R² = .8 has the R² drop down to ...
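One standard remedy worth sketching here (not necessarily the correction this discussion goes on to develop) is to keep the OLS point estimates but compute Newey-West (HAC) standard errors, which account for autocorrelation in the errors. The example below uses statsmodels; the sample size, true slope, and maxlags value are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, rho, phi = 200, 0.9, 0.9

def ar1(m, a, rng):
    """AR(1) series z[i] = a * z[i-1] + u[i], u ~ N(0, 1), stationary start."""
    z = np.empty(m)
    z[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - a**2))
    for i in range(1, m):
        z[i] = a * z[i - 1] + rng.normal()
    return z

x = ar1(n, phi, rng)
e = ar1(n, rho, rng)
y = 1.0 + 0.1 * x + e   # deliberately weak true relationship

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()  # usual output: anti-conservative here
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 10})

print("naive t for slope:", naive.tvalues[1])
print("HAC   t for slope:", hac.tvalues[1])  # typically much smaller
```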