Confidence Intervals for the Parameter of a Gaussian First-Order Autoregressive Model with Additive Outliers: A Simulation Study
Abstract
This paper is concerned with interval estimation of the parameter of a Gaussian first-order autoregressive model, AR(1), when the time series contains additive outliers. We compared the confidence intervals based on the weighted symmetric estimator (φ̂_W), the recursive mean adjusted weighted symmetric estimator (φ̂_RW), the recursive median adjusted weighted symmetric estimator (φ̂_RDW), and the improved recursive median adjusted weighted symmetric estimator (φ̂_IRDW) by using Monte Carlo simulation. Simulation results have shown that the confidence interval based on the estimator φ̂_IRDW is better than the other confidence intervals with respect to the coverage probability and length criteria.
Key Words: AR(1) model; Additive outliers; Confidence interval; Coverage probability; Length
Introduction
In time series analysis, outliers or atypical observations can
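The comparison summarized in the abstract can be sketched with a small Monte Carlo routine: generate a Gaussian AR(1) series, contaminate it with additive outliers, form a confidence interval for φ, and record whether the interval covers the true value and how long it is. The weighted symmetric estimators φ̂_W, φ̂_RW, φ̂_RDW, and φ̂_IRDW are not defined in this excerpt, so the R sketch below substitutes the ordinary conditional least-squares estimator with its asymptotic normal interval as a stand-in; the sample size, true φ, outlier magnitude (omega), contamination rate (p_out), and replication count are illustrative assumptions, not values taken from the paper.

set.seed(1)

# Simulate a Gaussian AR(1) series; each observation is then contaminated,
# independently with probability p_out, by an additive outlier of size +/- omega.
simulate_ar1_ao <- function(n, phi, omega = 5, p_out = 0.05) {
  x <- as.numeric(arima.sim(model = list(ar = phi), n = n))
  idx <- which(runif(n) < p_out)
  x[idx] <- x[idx] + omega * sample(c(-1, 1), length(idx), replace = TRUE)
  x
}

# Asymptotic normal confidence interval for phi based on the conditional
# least-squares estimator (a placeholder for the weighted symmetric estimators).
ci_ar1 <- function(x, level = 0.95) {
  n <- length(x)
  x <- x - mean(x)
  phi_hat <- sum(x[-1] * x[-n]) / sum(x[-n]^2)
  se <- sqrt(max(1 - phi_hat^2, 0) / n)
  z <- qnorm(1 - (1 - level) / 2)
  c(lower = phi_hat - z * se, upper = phi_hat + z * se)
}

# Estimate coverage probability and average interval length by simulation.
coverage_and_length <- function(phi, n = 100, reps = 2000) {
  hit <- len <- numeric(reps)
  for (r in seq_len(reps)) {
    ci <- ci_ar1(simulate_ar1_ao(n, phi))
    hit[r] <- ci["lower"] <= phi && phi <= ci["upper"]
    len[r] <- ci["upper"] - ci["lower"]
  }
  c(coverage = mean(hit), avg_length = mean(len))
}

coverage_and_length(phi = 0.5)

Repeating coverage_and_length over a grid of φ values and contamination settings, and swapping in each candidate estimator in place of the stand-in, would reproduce the kind of coverage probability and average length comparison the abstract describes.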