This note concerns both [[Discrete Time Markov Chains|discrete time]] and [[Continuous Time Markov Chains|continuous time Markov chains]].
> [!definition|*] Time-Reversals
> If $(Y_{n})$ is a DTMC and $(X_{t})$ a CTMC, then fixing $n, t$, the **time-reversals** of the two chains are: $\begin{align*}
\hat{Y}_{m}&:= Y_{n-m},\\
\hat{X}_{s}&:= X_{(t-s)-}:= \lim_{r \nearrow t-s} \,X_{r},
\end{align*}$defined for $0 \le m \le n$ and $0 \le s \le t$. The limit is from below so that the reversed process $\hat{X}_{s}$ is right-continuous.
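In discrete time the reversal is simply the path read backwards. A minimal sketch (assuming `numpy`; the two-state matrix is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary two-state transition matrix, purely for illustration.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def simulate_dtmc(P, x0, n):
    """Simulate n steps of a DTMC started at x0."""
    path = [x0]
    for _ in range(n):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return np.array(path)

n = 10
Y = simulate_dtmc(P, x0=0, n=n)
Y_hat = Y[::-1]  # time-reversal: Y_hat[m] = Y[n - m]
print(Y)
print(Y_hat)
```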
*For Markov chains in equilibrium, their time-reversals are also Markov chains*:
> [!theorem|*] Time Reversals in Equilibrium
> If $(Y_{n}) \sim \mathrm{Markov}(P, \lambda)$ where $\lambda$ is a stationary distribution of $P$, then $(\hat{Y}_{m})_{m=0}^{n} \sim \mathrm{Markov}(\hat{P}, \lambda),$where $\hat{p}_{ij}=\lambda_{j}p_{ji} / \lambda_{i}$, and $\lambda$ is also invariant under $\hat{P}$.
>
> Similarly, if $(X_{t}) \sim \mathrm{Markov}(Q, \mu)$ is a CTMC with generator matrix $Q$ and stationary distribution $\mu$, then $(\hat{X}_{s})_{0 \le s \le t} \sim \mathrm{Markov} (\hat{Q}, \mu),$where $\hat{q}_{ij}=\mu_{j}q_{ji} / \mu_{i}$. Its transition matrices are correspondingly $\hat{P}(s)=(\mu_{j}p_{ji}(s) / \mu_{i})_{i,j}$.
>
> > [!proof]- Proof of distribution of the time reversal
> > First we show that $(\hat{X}_{s})$ has transition matrices $\hat{P}(s)$ as given: for $0 \le s_{1} < s_{2} < \dots < s_{k} \le t$, $\begin{align*}
> \mathbb{P}&[\hat{X}_{s_{1}}=i_{1},\dots,\hat{X}_{s_{k}}=i_{k}]\\
> &= \mathbb{P}[X_{(t-s_{1})-}=i_{1},\dots,X_{(t-s_{k})-}=i_{k}]\\
> &= \mu_{i_{k}}p_{i_{k},i_{k-1}}(s_{k}-s_{k-1}) \cdots p_{i_{2}i_{1}}(s_{2}-s_{1})\\
> &= \cancel{\mu_{i_{k}}}\left( \hat{p}_{i_{k-1},i_{k}}(s_{k}-s_{k-1})\cdot \frac{\cancel{\mu_{i_{k-1}}}}{\cancel{\mu_{i_{k}}}} \right) \dots \left( \hat{p}_{i_{1}i_{2}}(s_{2}-s_{1})\cdot \frac{\mu_{i_{1}}}{\cancel{\mu_{i_{2}}}} \right)\\
> &= \mu_{i_{1}}\hat{p}_{i_{1}i_{2}}(s_{2}-s_{1})\dots \hat{p}_{i_{k-1}i_{k}}(s_{k}-s_{k-1}).
> \end{align*}$This matches the transition probabilities $\hat{p}_{ij}(s)$, with initial distribution $\mu$. One can then verify that $\hat{P}$ solves the backward equation $\hat{P}'=\hat{Q}\hat{P}$, so the reversed chain has distribution $\mathrm{Markov}(\hat{Q},\mu)$.
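As a quick numerical sanity check of the formula $\hat{p}_{ij}=\lambda_{j}p_{ji} / \lambda_{i}$ (a sketch, assuming `numpy`; the three-state matrix is an arbitrary choice):

```python
import numpy as np

# An arbitrary irreducible transition matrix, purely for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: normalised left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
lam = np.real(V[:, np.argmin(np.abs(w - 1))])
lam = lam / lam.sum()

# Reversed-chain transition matrix: P_hat[i, j] = lam[j] * P[j, i] / lam[i].
P_hat = (P.T * lam[None, :]) / lam[:, None]

assert np.allclose(P_hat.sum(axis=1), 1)  # P_hat is a stochastic matrix
assert np.allclose(lam @ P_hat, lam)      # lam is also invariant under P_hat
```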
### Reversibility and Detailed Balance
A Markov chain in equilibrium is **reversible** if its time-reversal has the same distribution as the original chain:
- For a discrete chain, this means $P = \hat{P}$,
- And for a continuous chain, this means $Q=\hat{Q}$.
This is equivalent to the following:
> [!theorem|*] Detailed Balance
> A DTMC (CTMC) is reversible if and only if its transition matrix $P$ (generator matrix $Q$) and stationary distribution $\lambda$ satisfy the **detailed balance equations**: $p_{ij}\lambda_{i}=p_{ji}\lambda_{j} \quad (\text{resp. } q_{ij}\lambda_{i}=q_{ji}\lambda_{j}),$i.e. the two-way flow cancels out between any two states $i, j$. In this case, $P$ (or $Q$) and $\lambda$ are said to be in **detailed balance**.
^9fc980
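Reversibility can thus be tested by checking the detailed balance equations elementwise, i.e. whether the flow matrix $(\lambda_{i}p_{ij})_{i,j}$ is symmetric. A sketch (assuming `numpy`; the tridiagonal example is an arbitrary birth-death chain, which always satisfies detailed balance):

```python
import numpy as np

def is_reversible(P, lam, tol=1e-12):
    """Check the detailed balance equations lam_i * p_ij == lam_j * p_ji."""
    F = lam[:, None] * P  # flow matrix: F[i, j] = lam_i * p_ij
    return np.allclose(F, F.T, atol=tol)

# An arbitrary birth-death chain on {0, 1, 2}; such chains are reversible.
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
w, V = np.linalg.eig(P.T)
lam = np.real(V[:, np.argmin(np.abs(w - 1))])
lam = lam / lam.sum()

print(is_reversible(P, lam))  # True
```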
Moreover, *any distribution $\lambda$ satisfying detailed balance with $P$ (or $Q$) is stationary*:
> [!theorem|*] Detailed Balance Implies Stationarity
> If $P$ and $\lambda$ are in detailed balance, then $\lambda$ is a stationary distribution under $P$, i.e. $\lambda P=\lambda.$
> > [!proof]-
> > Sum over $j$ in the detailed balance equations $(i,j)$: $\begin{align*}
\sum_{j}p_{ij}\lambda_{i}&= \sum_{j}p_{ji}\lambda_{j}\\
\lambda_{i}&= (\lambda P)_{i},
\end{align*}$where the second line follows from $\sum_{j}p_{ij}=1$ on the $\mathrm{LHS}$, and the definition of the vector-matrix product $\lambda P$ on the $\mathrm{RHS}$.
^e47297
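For instance, any symmetric stochastic matrix is in detailed balance with the uniform distribution, so by the theorem the uniform distribution is stationary for it. A sketch (assuming `numpy`; the matrix is an arbitrary symmetric example):

```python
import numpy as np

# An arbitrary symmetric stochastic matrix: p_ij * (1/n) = p_ji * (1/n),
# so it is in detailed balance with the uniform distribution.
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
lam = np.full(3, 1 / 3)

assert np.allclose(P, P.T)        # symmetry => detailed balance with uniform lam
assert np.allclose(lam @ P, lam)  # hence lam P = lam, i.e. lam is stationary
```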
This gives a way of computing a stationary distribution, but it is not guaranteed to work, as not all stationary distributions satisfy the detailed balance equations.
> [!examples] Solving for stationary distribution with detailed balance
> Consider a random walk on $\{ 0,\dots,N \}$ with $p=1 / 2$, mirrored (reflected) at the boundaries: from $0$ it moves to $1$, and from $N$ to $N-1$, with probability $1$. This chain is irreducible on a finite state space, hence positive recurrent with a unique stationary distribution. Detailed balance gives: $\begin{align*}
p_{i,i+1}\lambda_{i}&= p_{i+1,i}\lambda_{i+1} \Rightarrow \begin{cases}
\lambda_{i}=\lambda_{i+1} &\text{if } 1 \le i \le N-2 \\
\lambda_{0} = \lambda_{1} / 2 \\
\lambda_{N} = \lambda_{N-1} / 2
\end{cases}
\end{align*}$So the stationary distribution is $\lambda\propto (1,2,2,\dots,2,2,1)$.
>
> ---
>
> As a counterexample, a chain might be obviously non-reversible: consider a frog jumping up a ladder. It moves up by one rung on every jump, but with probability $q=1-p$ it slips and falls to the bottom, independently at each step. Its height is then a Markov chain, but it is clearly not reversible: for every $k \ge 2$, $p_{k,0}=q>0,~ ~ p_{0,k}=0,$so detailed balance would force $\lambda_{k}=0$. Therefore, the detailed balance equations have no probability solution, even though the chain (being irreducible and positive recurrent) has a unique stationary distribution.
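The random-walk computation above can be replicated numerically by solving the detailed balance recursion along the chain. A sketch (assuming `numpy`; the ladder size `N` is arbitrary):

```python
import numpy as np

N = 6  # arbitrary size, for illustration

# Reflected random walk on {0, ..., N} with p = 1/2 in the interior.
P = np.zeros((N + 1, N + 1))
P[0, 1] = P[N, N - 1] = 1.0
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

# Solve detailed balance along the path: lam_{i+1} = lam_i * p_{i,i+1} / p_{i+1,i}.
lam = np.ones(N + 1)
for i in range(N):
    lam[i + 1] = lam[i] * P[i, i + 1] / P[i + 1, i]
lam = lam / lam.sum()

assert np.allclose(lam @ P, lam)                             # lam is stationary
assert np.allclose(lam * 2 * N, np.r_[1, [2] * (N - 1), 1])  # proportional to (1,2,...,2,1)
```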
Important cases of reversibility include:
- M/M/s [[Queueing Processes|queueing processes]];
- [[Markov Chain Monte Carlo|MCMC]] algorithms, which construct and simulate a Markov chain that is in detailed balance with a target distribution (see the sketch below).
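To illustrate the MCMC point, here is a minimal Metropolis sketch (assuming `numpy`; the finite state space, uniform symmetric proposal and target `pi` are arbitrary choices): the acceptance rule is exactly what puts the simulated chain in detailed balance with `pi`, so `pi` is its stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.1, 0.2, 0.3, 0.4])  # arbitrary target distribution
n_states = len(pi)

def metropolis_step(x):
    """One Metropolis step with a uniform (symmetric) proposal."""
    y = rng.integers(n_states)
    # Accept with probability min(1, pi_y / pi_x); for a symmetric proposal
    # this makes pi_x * p_xy == pi_y * p_yx, i.e. detailed balance with pi.
    return y if rng.random() < min(1.0, pi[y] / pi[x]) else x

# Long-run occupation frequencies should approach the target distribution.
x, counts = 0, np.zeros(n_states)
for _ in range(100_000):
    x = metropolis_step(x)
    counts[x] += 1
print(counts / counts.sum())  # close to pi
```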