> Proof of minimality: define the truncated hitting probability $q_{i}^{(N)}=\mathbb{P}(\exists n\le N: X_{n}\in A\,|\, X_{0}=i)$, so that $q_{i}^{(N)}=\sum_{j}p_{ij}q_{j}^{(N-1)}$ for $i\notin A$, and induction on $N$ gives $\forall N,\ q_{i}^{(N)}\le x_{i}$.
> Hence $\begin{align*} x_{i}\ge \lim_{ N \to \infty }q_{i}^{(N)}&=\mathbb{P}\Big(\bigcup_{N}\{\exists n\le N: X_{n}\in A\}\,\Big|\, X_{0}=i\Big) \\&=\mathbb{P}(\exists n: X_{n}\in A\,|\, X_{0}=i)\\ &=h_{i}^{A} \end{align*}$

### Recurrence and Transience

*Big idea: recurrence and transience describe whether a Markov chain will indefinitely return to a particular state.*

* For convenience, we will denote the **return probability** as $r_{i}=\mathbb{P}(\exists n\ge 1: X_{n}=i\,|\, X_{0}=i)$, *but this notation is not standard.*
* A state $i\in I$ is **transient** if $r_{i}<1$, which means that $\mathbb{P}(\text{the chain hits } i \text{ infinitely many times}\,|\, X_0=i)=\lim_{ k \to \infty }r_{i}^k =0$
* A state $i\in I$ is **recurrent** if $r_{i}=1$, which means that $\mathbb{P}(\text{the chain hits } i \text{ infinitely many times}\,|\, X_0=i)=\lim_{ k \to \infty }r_{i}^k =1$
* A state $i$ is recurrent if and only if $\sum_{n\ge 1} p_{ii}^{(n)}=\infty$.
> Proof: the total number of hits has expectation $\sum_{n} \mathbb{E}[\mathbf{1}(X_{n}=i)\,|\, X_{0}=i]=\sum_{n}\mathbb{P}(X_{n}=i\,|\, X_{0}=i)=\sum_{n}p_{ii}^{(n)}$. Then (the state is recurrent) $\iff$ (the expected number of hits is infinite) $\iff$ (the sum is infinite).
* <u>Recurrence/transience is a class property</u>: states in the same communicating class are either all recurrent or all transient.
* Recurrent classes are all closed; all finite, closed classes are recurrent.
> The two propositions are proved in problem sheet 3.

### Hitting and Return Times

* The (first) **hitting time** of a set $A \subseteq I$ is the random variable $H^A=\inf\{n\ge 0: X_{n}\in A\}$
* The **mean hitting time** of $A$ from $i$ is $k_{i}^{A}=\mathbb{E}[H^{A}\,|\, X_{0}=i]$
* The mean hitting times $(k_{i}^{A})_{i\in I}$ of the Markov chain are the minimal non-negative solution to the equations $k_{i}^A=\begin{cases} 0 &\text{ if }i\in A \\ 1+\sum_{j}p_{ij}k_{j}^{A} & \text{ if } i\notin A \end{cases}$
> The proof uses the same strategy as the proof for hitting probabilities.
* The **mean return time** of a state $i$ is $m_{i}=\mathbb{E}[\,\inf\{n\ge 1: X_{n}=i\}\,|\, X_{0}=i\,]$ *That is, the expected time to return to the state after starting from it.*
* A recurrent state with a finite mean return time is called **positive recurrent**; otherwise it is **null recurrent**.
* <u>A transient state always has a mean return time of</u> $\infty$, since it has a positive probability of never returning.

### Gambler's Ruin as a Markov Chain

* Model
> Let the gambler's wealth at time $t$ be $X_{t}$, and the winning probability be $p$. They start with $X_{0}=\textdollar n$.
> The transition probabilities are $p_{ij}=\begin{cases} p &\text{ if } j=i+1,\ i\ne 0 \\ q=1-p &\text{ if } j=i-1,\ i\ne 0 \\ 1 &\text{ if } i=j=0 \\ 0 &\text{ otherwise} \end{cases}$ *(ruin at state $0$ is absorbing).*
* Hitting probabilities
> The probability of hitting zero starting from state $i$, $(h_{i})$, comes from the equations $\begin{align*} h_{0}&=1 \\ h_{i\ne 0} &= \sum_{j}p_{ij}h_{j}=ph_{i+1}+qh_{i-1} \end{align*}$ which for $p\ne q$ has the general solution $h_{i}=A+B\big( \frac{q}{p}\big)^i$, where $h_{0}=1$ requires $A+B=1$.
> Then non-negativity and minimality require $(A,B)=\begin{cases} (1,0) &\text{ if }p< q \\ (0,1) &\text{ if }p>q \\ \end{cases}$ So the hitting probability is $1$ in the first case, and $(q / p)^{i}$ in the second case. *When $p=q$, the general solution is instead $A+Bi$, and the same argument forces $B=0$; so $h_{i}=1$ whenever $p\le q$.*
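As a sanity check on the hitting-probability formula, here is a minimal Monte Carlo sketch (not from the lecture notes; the function name `ruin_prob_mc` and its parameters are illustrative choices). It estimates $h_{i}$ by simulating the chain and compares the estimate with $(q / p)^{i}$ in the $p>q$ case.

```python
import random

def ruin_prob_mc(i, p, n_trials=20_000, max_steps=500):
    """Estimate h_i = P(hit 0 | X_0 = i) for gambler's ruin by simulation.

    Truncating each path at max_steps slightly underestimates h_i; with
    p > q the walk drifts upward, so late ruins are vanishingly rare.
    """
    hits = 0
    for _ in range(n_trials):
        x = i
        for _ in range(max_steps):
            if x == 0:
                hits += 1
                break
            # Win a bet with probability p, lose with probability q = 1 - p.
            x += 1 if random.random() < p else -1
    return hits / n_trials

# With p = 0.6 > q = 0.4 and i = 3, theory gives h_3 = (q/p)^3 = 8/27 ≈ 0.296.
print(ruin_prob_mc(3, 0.6), (0.4 / 0.6) ** 3)
```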
* Transience/Recurrence
> The state $0$ is obviously recurrent, since it is an absorbing state.
>
> The remaining states all communicate, so it is sufficient to find the transience/recurrence of the state $1$. Its return probability satisfies $r_{1}\le p<1$, since a return to $1$ is impossible after losing the first bet ($0$ is absorbing); so $1$ is transient. Hence the entire class $\{1,2,\dots \}$ is transient.
>
> *The lecture notes study the modified chain where $p_{01}>0$, which makes the chain irreducible, and we can find the return probability and mean return time to $0$.*
* Mean hitting time of $0$
> If $p > q$, the hitting probability is $(q / p)^i<1$, so the mean hitting time is infinite.
>
> If $p\le q$, the mean hitting times $(k_{i})$ solve the equations $\begin{align*} k_{0}&=0 \\ k_{i\ne 0} &= 1+\sum_{j}p_{ij}k_{j}=1+pk_{i+1}+qk_{i-1} \end{align*}$ *The lecture notes involve some Olympic-level hand-waving to claim $k_{i}=\beta i$, and the answer follows. To see why, note that $\begin{align*} k_{i}&= \mathbb{E}[\text{time to hit }(i-1) \text{ from }i]+k_{i-1}\\ &= k_{1} +k_{i-1} \end{align*}$ by the strong Markov property and the translation invariance of the walk away from $0$. Then $\beta=k_{1}$.*
>
> When $p<q$, substituting $k_{i}=\beta i$ gives $\beta i=1+p\beta(i+1)+q\beta(i-1)=1+\beta i+\beta(p-q)$, so $\beta=1 / (q-p)$ and $k_{i}=i / (q-p)$.
>
> When $p=q$, the same substitution gives $0=1$: any finite $\beta=k_{1}$ fails, hence the mean hitting times $(k_{i})$ must all be infinite.

---

## Equilibrium and Convergence of Markov Chains

* Recall that the distribution of the Markov chain variable $X_{n}$ can be expressed as a row vector $\lambda(n)$, and <u>right-applying the transition matrix $P$ moves the distribution forward in time</u>: for $n\ge m$, $\lambda(n)=\lambda(m)P^{n-m}$. In particular, if $\lambda_{0}$ is the initial distribution, then $X_{n} \sim\lambda_{0} P^n$.
- A distribution $\pi$ is a **stationary distribution** of a Markov chain with transition matrix $P$ if it is a left-eigenvector of $P$ with eigenvalue $1$: $\pi=\pi P$. Note that it is a <u>left-eigenvector</u>, not the usual right-eigenvector.
* *That is, the transition matrix does not change the stationary distribution over time.*

### Convergence Theorems

* <u>Existence and uniqueness of the stationary distribution</u>: if $P$ is irreducible, then $P \text{ positive-recurrent} \iff\exists!\pi \text{ stationary, where }\,\pi_{i}=\frac{1}{m_{i}}$ where $m_{i}$ is the mean return time to state $i$.
* <u>Convergence to the stationary distribution</u>: if $P$ is irreducible and aperiodic with stationary distribution $\pi$, then from any initial distribution, $\forall i,j:$ $\begin{align*} \mathbb{P}(X_{n}=j)&\to \pi_{j}\\ p_{ij}^{(n)}&\to \pi_{j} \end{align*}$
- *Aperiodicity is important, as a periodic chain "remembers" its initial distribution.*
* <u>Ergodic Theorem</u>: for irreducible chains, the proportion of time spent in state $i$ converges to $1 / m_{i}$ almost surely: $\begin{align*} \frac{\sum_{t=0}^{n-1}\mathbb{1}_{(X_{t}=i)}}{n}\to \frac{1}{m_{i}} \text{ a.s.} \end{align*}$ where for null-recurrent and transient states ($m_{i}=\infty$), we interpret $1 / m_{i}$ to be $0$.
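To make these statements concrete, here is a short Python sketch (not part of the notes; the $3\times 3$ matrix `P` is a made-up irreducible, aperiodic example). It computes $\pi$ as the left-eigenvector of $P$ with eigenvalue $1$, then checks the ergodic theorem by comparing $\pi$ with the fraction of time a simulated trajectory spends in each state.

```python
import numpy as np

# Hypothetical irreducible, aperiodic 3-state chain (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Stationary distribution: the left-eigenvector of P with eigenvalue 1,
# i.e. an eigenvector of P.T, normalised to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print("pi =", pi)  # (5/21, 3/7, 1/3) ≈ [0.238, 0.429, 0.333]

# Ergodic theorem: the long-run fraction of time in state i -> pi_i = 1/m_i.
rng = np.random.default_rng(0)
n, x = 200_000, 0
counts = np.zeros(3)
for _ in range(n):
    counts[x] += 1
    x = rng.choice(3, p=P[x])
print("empirical =", counts / n)
```

---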