> [!tldr]
> An autoregressive process $(X_{t})$ of order $p$, abbreviated $\mathrm{AR}(p)$, is a weighted sum of the $p$ previous terms from the same process, plus a [[Purely Random Processes|purely random process]] $Z_{t}$.
> A process with mean $\mu_{X}=0$ has the form $X_{t}:= \alpha_{1}X_{t-1}+ \cdots + \alpha_{p}X_{t-p}+Z_{t}$ where $\alpha_{1},\dots, \alpha_{p}$ are constant weights. More generally, $(X_{t}-\mu_{X}):= \alpha_{1}(X_{t-1}-\mu_{X})+ \cdots+\alpha_{p}(X_{t-p}-\mu_{X})+Z_{t}$ for processes with non-zero mean.
### First-Order AR Processes
An $\mathrm{AR}(1)$ process is also called a Markov process and has the form $X_{t}=\alpha X_{t-1}+Z_{t}$. Repeated substitution gives an infinite-order MA process: $X_{t}=Z_{t}+\alpha Z_{t-1} + \alpha^{2}Z_{t-2}+\cdots$ so, provided $|\alpha|<1$ so that the sums converge, $\mathbb{E}[X_{t}]=\mu_{Z}\sum_{i}\alpha^{i}$, $\mathrm{Var}(X_{t})=\sigma^{2}_{Z}\sum_{i} \alpha^{2i}=\sigma^{2}_{Z}/(1-\alpha^{2})$, and $\gamma(k)=\alpha^{k}\sigma^{2}_{X}$ for $k\geq 0$.
Written in terms of backward shifts, $\begin{align*}
(1-\alpha B)&X_{t}= Z_{t} \\
&\Longrightarrow X_{t}=\frac{Z_{t}}{1-\alpha B}=(1+\alpha B+\alpha^{2}B^{2}+\cdots)Z_{t}
\end{align*}$
```R fold
set.seed(1)  # reproducible simulation
n <- 500
data <- numeric(n)
data[1] <- rnorm(1, mean=0, sd=1)
# Simulate an AR(1) process with alpha = 0.8
for (i in 2:n){
  data[i] <- 0.8 * data[i-1] + rnorm(1, 0, 1)
}
par(mfrow=c(2,1), mar=c(3,4,3,4), bg=NA)
plot(data, type="l",
     ylab="Series Value",
     main="AR(1) Process")
acf(data, xlab="Lag",
    ylab="Autocorr.", main="")
```
![[AR1ACF.png#invert]]
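As a quick numerical check (a minimal sketch reusing `data` and $\alpha=0.8$ from the block above), the sample ACF should decay roughly like $\alpha^{k}$:
```R fold
# Compare sample autocorrelations with the theoretical rho(k) = 0.8^k
sample_acf <- acf(data, lag.max=10, plot=FALSE)$acf[, 1, 1]
round(cbind(lag=0:10, sample=sample_acf, theory=0.8^(0:10)), 3)
```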
### General-Order AR Processes
Expressing the AR process in terms of backshift operators gives $\underbrace{(1-\alpha_{1}B-\cdots-\alpha_{p}B^{p})}_{1 / f(B)}X_{t}=Z_{t}$ and the equivalent MA process $X_{t}=f(B)Z_{t}=(1+\beta_{1}B+\beta_{2}B^{2}+\cdots)Z_{t}$ with $\beta_{0}=1$. A finite variance requires $\sum_{i} \beta_{i}^{2}<\infty$, and stationarity requires a finite acv.f.: $\gamma(k)=\sigma^{2}_{Z}\sum_{i=0}^{\infty}\beta_{i}\beta_{i+k}$
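The $\beta_{i}$ weights can be computed with `stats::ARMAtoMA`. A sketch, assuming an illustrative stationary AR(2) with $\alpha_{1}=0.5$, $\alpha_{2}=-0.3$, and $\sigma^{2}_{Z}=1$ (the names `alpha`, `beta`, and `gamma_k` are just for this example), checking the truncated acv.f. sum against the theoretical ACF from `ARMAacf`:
```R fold
# MA(infinity) weights beta_1, beta_2, ... for an illustrative AR(2)
alpha <- c(0.5, -0.3)
beta <- c(1, ARMAtoMA(ar=alpha, lag.max=200))  # prepend beta_0 = 1

# gamma(k) = sigma_Z^2 * sum_i beta_i * beta_{i+k}, here with sigma_Z^2 = 1
gamma_k <- function(k) sum(beta[1:(length(beta)-k)] * beta[(1+k):length(beta)])
rho <- sapply(0:5, gamma_k) / gamma_k(0)

# Agrees with the theoretical ACF computed directly by R
round(rbind(truncated_sum=rho, ARMAacf=ARMAacf(ar=alpha, lag.max=5)), 4)
```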
### Yule-Walker Equations
The above process for solving the acv.f. is inefficient and algebraically difficult. A simplification follows by assuming stationarity: multiply through by $X_{t-k}$, take expectations (noting that $\mathbb{E}[X_{t-k}Z_{t}]=0$ for $k>0$), and divide by $\gamma(0)$: $\begin{align*}
X_{t-k}X_{t}&= \alpha_{1}X_{t-k}X_{t-1}+ \cdots + \alpha_{p}X_{t-k}X_{t-p}+X_{t-k}Z_{t}\\[0.4em]
\gamma(k)&= \alpha_{1}\gamma(k-1)+ \cdots + \alpha_{p}\gamma(k-p)\\[0.4em]
\rho(k)&= \alpha_{1}\rho(k-1) + \cdots + \alpha_{p}\rho(k-p)
\end{align*}$and the collection of equations for each $k=1,\dots$ is the **Yule-Walker equations**. As a linear system, the first $p$ equations are: $\begin{bmatrix}
\rho(1) \\ \vdots \\ \rho(p)
\end{bmatrix} = \begin{bmatrix}
\rho(0) & \cdots & \rho(p-1) \\
\vdots & \ddots & \vdots \\
\rho(p-1) & \cdots & \rho(0)
\end{bmatrix}\begin{bmatrix}
\alpha_{1} \\ \vdots \\ \alpha_{p}
\end{bmatrix}$ The equations can be solved either as a linear system for a finite number of terms, or as a difference equation for a general solution.
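As a sketch of the linear-system route (reusing the simulated AR(1) `data` from above; the order `p = 1` is an assumption matching that simulation), the system can be solved directly from the sample ACF and compared with R's built-in Yule-Walker fit:
```R fold
# Estimate AR(p) coefficients from the Yule-Walker equations, assuming p = 1
p <- 1
r <- acf(data, lag.max=p, plot=FALSE)$acf[, 1, 1]  # rho(0), ..., rho(p)
R <- toeplitz(r[1:p])               # p x p matrix with entries rho(|i-j|)
alpha_hat <- solve(R, r[2:(p+1)])   # solve R %*% alpha = (rho(1),...,rho(p))
alpha_hat

# Should closely match stats::ar with the Yule-Walker method
ar(data, order.max=p, aic=FALSE, method="yule-walker")$ar
```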