> [!notation]
> $\Pi_{n},n \in \mathbb{N}$ is the vector space of polynomials of degree at most $n$.

Given a function $f$, we might try to approximate it with another (simpler) function $g$ that agrees with it at a number of points:
$g:g(x_{i})=f(x_{i})\,\,\text{for }i=0,\dots,n;$
then $g$ is the **interpolation function** of $f$ at the (distinct) points $x_{0},\dots,x_{n}$.

If $f$ is difficult to work with, **interpolating** it with a polynomial $g \in \Pi_{n}$ gives a function that is easier to manipulate -- in doing so, we hope that $g$ and $f$ behave similarly.

### Lagrange Interpolation

> [!definition|*] Lagrange Interpolation
> **Lagrange interpolation** of degree $n$ of the function $f$ at the distinct points $x_{0},\dots,x_{n} \in [a,b]$ gives a polynomial $p$ where
> $p \in \Pi_{n}:p(x_{i})=f(x_{i}),\,\,\text{for }i=0,\dots,n;$
> note that the number of data points is $(n+1)$, one more than the degree of the polynomial.

- The Lagrange polynomial is unique: any two interpolating polynomials $p, \tilde{p} \in \Pi_{n}$ must agree, as $p-\tilde{p} \in \Pi_{n}$ has the $(n+1)$ roots $x_{0},\dots,x_{n}$, hence must be identically zero.

Moreover, *the Lagrange polynomial always exists*:

> [!exposition] Construction of the Lagrange polynomial
> Define the **cardinal polynomial**
> $L_{k}(x)=\prod_{i \ne k} \left( \frac{x-x_{i}}{x_{k}-x_{i}} \right) \in \Pi_{n}$
> where the product is over $i=0,\dots,n$, excluding $k$.
> - The cardinal polynomials "indicate" whether we are at the $k$th data point: $L_{k}(x_{i})=\begin{cases} 0 & \text{if }i \ne k \\ 1 & \text{if } i = k \end{cases}$
>
> Then scaling and linearly combining them gives the **Lagrange interpolating polynomial**:
> $p_{n}(x)\equiv\sum_{k=0}^{n}f(x_{k})L_{k}(x) \in \Pi_{n}$

### Hermite Interpolation

The Hermite interpolation of a function $f$ is a polynomial $p_{2n+1} \in \Pi_{2n+1}$ which agrees with $f$ at $x_{0},\dots,x_{n}$ and whose derivative agrees with $f'$ there:
$p_{2n+1} \in \Pi_{2n+1}:\begin{cases} p_{2n+1}(x_{i})= f(x_{i}) \\ p_{2n+1}'(x_{i})= f'(x_{i}) \end{cases}\,\,\text{for }i=0,\dots,n.$

As with the Lagrange polynomial, the Hermite polynomial always exists and can be constructed explicitly:

> [!exposition] Construction of the Hermite Polynomial
> Using the cardinal polynomials $L_{k}(x)$ defined for Lagrange interpolation, define
> $\begin{array}{l} H_{k}(x)= L_{k}(x)^{2}\cdot(1-2L_{k}'(x_{k})\cdot(x-x_{k}))\\ K_{k}(x)= L_{k}(x)^{2}\cdot(x-x_{k}) \end{array}\in \Pi_{2n+1}.$
> - $H$ and $K$ function similarly to the cardinal polynomials: $\begin{align*} H_{k}(x_{i})&= \delta_{i,k},&&H_{k}'(x_{i})= 0\\ K_{k}(x_{i})&= 0, &&K_{k}'(x_{i})= \delta_{i,k} \end{align*}$
>
> Hence the **Hermite interpolation polynomial** of $f$ at $x_{0},\dots,x_{n}$ is
> $p_{2n+1}(x)=\sum_{k=0}^{n}[H_{k}(x)f(x_{k})+K_{k}(x)f'(x_{k})] \in \Pi_{2n+1}.$

### Interpolation Error

> [!definition|*] Interpolation Error
> Given a function $f$ and its interpolating function $g$, the **interpolation error** is $e(x)\equiv f(x)-g(x)$.
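The constructions above are easy to check numerically. Below is a minimal pure-Python sketch (all function names are illustrative, not from the source) that evaluates the cardinal polynomials, the Lagrange interpolant, and the error $e(x)$:

```python
# A minimal sketch of Lagrange interpolation via the cardinal polynomials
# L_k(x) = prod_{i != k} (x - x_i) / (x_k - x_i). Names are illustrative.

def cardinal(k, nodes, x):
    """Evaluate the k-th cardinal polynomial L_k at x."""
    out = 1.0
    for i, xi in enumerate(nodes):
        if i != k:
            out *= (x - xi) / (nodes[k] - xi)
    return out

def lagrange(f, nodes, x):
    """Evaluate the interpolant p_n(x) = sum_k f(x_k) L_k(x)."""
    return sum(f(xk) * cardinal(k, nodes, x) for k, xk in enumerate(nodes))

def error(f, nodes, x):
    """Interpolation error e(x) = f(x) - p_n(x)."""
    return f(x) - lagrange(f, nodes, x)

# The interpolant agrees with f at the nodes, so the error vanishes there:
nodes = [0.0, 0.5, 1.0, 1.5]
f = lambda x: x ** 3
assert all(abs(error(f, nodes, xk)) < 1e-12 for xk in nodes)
# A cubic is interpolated exactly by p_3 (uniqueness), so e == 0 everywhere:
assert abs(error(f, nodes, 0.75)) < 1e-12
```

Direct evaluation of the cardinal polynomials costs $O(n^{2})$ per point; it is fine for illustration, though barycentric forms are preferred in practice for speed and stability.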
> [!theorem|*] Error of Lagrange Interpolation
>
> The Lagrange interpolation error (of degree-$n$ interpolation) satisfies the identity
> $\text{Lagrange error}=f(x)-p_{n}(x)=\frac{f^{(n+1)}(\theta)}{(n+1)!}\underbrace{\prod_{i=0}^{n}(x-x_{i})}_{\pi(x)}$
> for some $\theta \in [a,b]$ dependent on $x$, and is hence bounded by $|e(x)| \le \frac{\sup_{[a,b]}|f^{(n+1)}|}{(n+1)!}|\pi(x)|.$
>
> > [!proof]-
> > Trivial if $x=x_{i}$ for some $i=0,\dots,n$, so assume otherwise.
> >
> > Define the auxiliary function $\phi(t)=e(t)-\frac{e(x)}{\pi(x)}\pi(t)$. Then $\phi=0$ at the $(n+2)$ points $x,x_{0},\dots,x_{n}$.
> >
> > Rolle's theorem gives $\phi'=0$ at $(n+1)$ points, $\phi''=0$ at $n$ points, etc., and finally $\phi^{(n+1)}=0$ at some point $\theta \in [a,b]$.
> >
> > Then, noting $p_{n}^{(n+1)}\equiv0$ and $\pi^{(n+1)}\equiv(n+1)!$, $\begin{align*} 0=\phi^{(n+1)}(\theta)&= e^{(n+1)}(\theta)-\frac{e(x)}{\pi(x)}\pi^{(n+1)}(\theta) \\ &= f^{(n+1)}(\theta) - \frac{e(x)}{\pi(x)}(n+1)!, \end{align*}$ hence $e(x)=\frac{f^{(n+1)}(\theta)}{(n+1)!}\pi(x)$ for some $\theta \in [a,b]$.

For Hermite interpolation, the error satisfies
$\text{Hermite error}=\frac{f^{(2n+2)}(\theta)}{(2n+2)!}\pi(x)^{2}$
for some $\theta \in [a,b]$ dependent on $x$, where $\pi(x) = \prod_{i}(x-x_{i})$ as before.

### Runge's Phenomenon

*For certain functions and bad choices of data points, the error of Lagrange interpolation grows as the degree increases*; i.e., more data make the approximation worse.
- In those cases $|\pi(x)|\sup|f^{(n+1)}|$ increases faster than the factorial in the denominator, giving terrible estimates; this is **Runge's phenomenon**:

![[RungePhenomenon.svg#invert|center]]

- In particular, equally spaced data points are a poor choice that often leads to this phenomenon.
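The phenomenon is easy to reproduce on Runge's classic example $f(x)=1/(1+25x^{2})$ over $[-1,1]$. The sketch below (illustrative names; grid resolution chosen arbitrarily) compares the maximum error of equispaced interpolation at increasing degrees, and contrasts it with Chebyshev nodes:

```python
# A quick numerical illustration of Runge's phenomenon for
# f(x) = 1/(1 + 25 x^2) on [-1, 1]. Names and grid size are illustrative.
import math

def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolant through (nodes, values) at x."""
    total = 0.0
    for k, xk in enumerate(nodes):
        lk = 1.0
        for i, xi in enumerate(nodes):
            if i != k:
                lk *= (x - xi) / (xk - xi)
        total += values[k] * lk
    return total

def max_error(nodes, f, samples=401):
    """Max |f(x) - p_n(x)| over a uniform grid on [-1, 1]."""
    values = [f(xi) for xi in nodes]
    grid = [-1 + 2 * j / (samples - 1) for j in range(samples)]
    return max(abs(f(x) - lagrange_eval(nodes, values, x)) for x in grid)

f = lambda x: 1 / (1 + 25 * x * x)
equi = lambda n: [-1 + 2 * i / n for i in range(n + 1)]
cheb = lambda n: [math.cos((2 * i + 1) * math.pi / (2 * n + 2)) for i in range(n + 1)]

# More equally spaced points make the approximation worse...
assert max_error(equi(16), f) > max_error(equi(8), f) > max_error(equi(4), f)
# ...while Chebyshev nodes tame the error at the same degree:
assert max_error(cheb(16), f) < max_error(equi(16), f)
```

The blow-up is concentrated near the interval endpoints, matching the plot above: $|\pi(x)|$ is largest there for equispaced nodes.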
To quantify the severity of this effect, we can use **Lebesgue's constant** (of a choice of nodes $x_{0},\dots,x_{n}$):

> [!definition|*] Lebesgue's constant
> Given nodes $\mathbf{x}:=(x_{0},\dots,x_{n})$ for degree-$n$ interpolation over some interval $[a,b]$ containing those nodes, **Lebesgue's constant** measures the worst-case blow-up of interpolation:
>
> Denoting by $\mathcal{I}$ the polynomial interpolation map $C[a,b] \to \Pi_{n}$,
> $\Lambda_{n}(\mathbf{x};a,b):= \max_{f \in C[a,b],\,f \ne 0} \frac{\| \mathcal{I}f\|_{\infty}}{\| f \|_{\infty} },$
> i.e. the operator norm of $\mathcal{I}$ in the sup norm.

Unfortunately $\Lambda_{n} \to \infty$ as $n \to \infty$ for any choice of $\mathcal{I}$ (i.e. any choice of nodes $\mathbf{x}$ and interval $[a,b]$), but some choices are worse than others:
- For equally spaced points, growth is almost exponential: $\Lambda_{n} \sim 2^{n+1} / (e\,n \log n)$.
- In contrast, choosing Chebyshev nodes gives logarithmic growth: $\Lambda_{n} \sim \frac{2}{\pi}\log(n+1) +1$.
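Lebesgue's constant can be estimated numerically via the standard identity $\Lambda_{n} = \max_{x \in [a,b]} \sum_{k}|L_{k}(x)|$ (the maximum of the so-called Lebesgue function; this identity is a known fact equivalent to the operator-norm definition above, not derived here). A rough sketch, with illustrative node choices and grid resolution:

```python
# Estimating Lebesgue's constant as the grid maximum of the Lebesgue
# function sum_k |L_k(x)|. Node sets and grid size are illustrative.
import math

def lebesgue_constant(nodes, a=-1.0, b=1.0, samples=2001):
    """Grid estimate of max over [a, b] of sum_k |L_k(x)|."""
    best = 0.0
    for j in range(samples):
        x = a + (b - a) * j / (samples - 1)
        s = 0.0
        for k, xk in enumerate(nodes):
            lk = 1.0
            for i, xi in enumerate(nodes):
                if i != k:
                    lk *= (x - xi) / (xk - xi)
            s += abs(lk)
        best = max(best, s)
    return best

n = 12
equi = [-1 + 2 * i / n for i in range(n + 1)]
cheb = [math.cos((2 * i + 1) * math.pi / (2 * n + 2)) for i in range(n + 1)]

# Equispaced nodes have already blown up by n = 12, while Chebyshev nodes
# stay near the logarithmic bound (2/pi) log(n+1) + 1, about 2.6 here:
assert lebesgue_constant(cheb) < 4 < lebesgue_constant(equi)
```

Since a grid maximum only undershoots the true maximum, this estimate is a lower bound on $\Lambda_{n}$; that is enough to see the gap between the two node families.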