Numerical integration, aka **quadrature**, computes the integral $\int _{a}^{b}w(x)f(x) \, dx$ for some integrable function $f$, when a closed formula is not easily available.
All the techniques in the course are (indirectly) derived from the integral of some approximant: $f \xrightarrow{\text{approximate}}p \xrightarrow{\text{integrate}}\int _{a}^{b}w(x)p(x)\, dx \overset{\text{pray}}{\approx} \int _{a}^{b} w(x)f(x) \, dx,$ where $p$ is some approximant that is easy to integrate exactly.
Equivalently, their formulae will all be weighted sums of data points: $\int _{a}^{b}w(x)f(x) \, dx \approx \sum_{i=0}^{k}w_{i}f(x_{i}),$ where $w$ is the weight function, while the constants $w_{i}$ are called the **quadrature weights**.
The keys to this kind of quadrature are (1) the family of approximants used, and (2) the choice of data points that determine the approximant. For the few methods mentioned in the course (a quick numerical comparison of the first two follows the list):
* The **trapezium rule** integrates a piecewise linear function connecting the data points $(x_{i},f(x_{i}))$.
* **Simpson's rule** integrates piecewise quadratics fitted in intervals containing three data points.
* **Newton-Cotes quadrature** integrates the [[Polynomial Interpolation#Lagrange Interpolation|Lagrange interpolation]] through equally spaced data points. [[Polynomial Interpolation#Interpolation Error and Runge's Phenomenon|Runge's phenomenon]] makes the estimate terrible for certain functions.
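As a rough comparison of the first two rules, here is a minimal sketch (assuming NumPy; the test integral $\int_{0}^{1}e^{x}\,dx = e-1$ and the function names are just illustrative):

```python
import numpy as np

def trapezium(f, a, b, m):
    """Composite trapezium rule with m equal subintervals."""
    x = np.linspace(a, b, m + 1)
    y = f(x)
    h = (b - a) / m
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, m):
    """Composite Simpson's rule; m must be even."""
    x = np.linspace(a, b, m + 1)
    y = f(x)
    h = (b - a) / m
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum() + y[-1])

exact = np.e - 1                                    # ∫_0^1 e^x dx
for m in (4, 8, 16):
    print(m, abs(trapezium(np.exp, 0, 1, m) - exact),
             abs(simpson(np.exp, 0, 1, m) - exact))
```

Halving the step should cut the trapezium error by roughly $4\times$ and the Simpson error by roughly $16\times$, reflecting their $O(h^{2})$ and $O(h^{4})$ behaviour.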
### Gauss Quadrature
Newton-Cotes shows that integrating the Lagrange interpolant through equally spaced data points can fail badly. *Maybe better chosen points will improve the result.*
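As a quick motivating experiment (a sketch assuming NumPy; it peeks ahead to the Gauss-Legendre nodes defined below), compare integrating the interpolant through equally spaced points against a rule with the same number of better-chosen points, on Runge's function:

```python
import numpy as np
from numpy.polynomial import Polynomial, legendre

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)        # Runge's function
exact = 2.0 / 5.0 * np.arctan(5.0)             # ∫_{-1}^{1} f(x) dx

for n in (8, 16, 24):
    # Newton-Cotes flavour: integrate the degree-n interpolant through n+1 equally spaced points.
    xe = np.linspace(-1.0, 1.0, n + 1)
    P = Polynomial.fit(xe, f(xe), n).convert().integ()
    newton_cotes = P(1.0) - P(-1.0)

    # Same number of points, placed at the roots of the orthogonal (Legendre) polynomial.
    xg, wg = legendre.leggauss(n + 1)
    gauss = np.dot(wg, f(xg))

    print(n + 1, abs(newton_cotes - exact), abs(gauss - exact))
```

For this integrand the equally spaced rule does not converge as the number of points grows (high-order Newton-Cotes is known to diverge here), while the Gauss version keeps improving.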
Consider $\phi_{n+1}$, the degree-$(n+1)$ orthogonal polynomial over $[a,b]$ (under the weight $w$). Let $x_{0},\dots,x_{n}$ be its roots.
> [!definition|*] Gauss Quadrature as Integral
> As an integral of the approximant, *Gauss quadrature integrates the Lagrange interpolation $p_{n}$ of $f$ fitted through $x_{0},\dots,x_{n}$*: $\int_{a}^{b} w(x)f(x) \, dx \approx\int_{a}^{b} w(x) p_{n}(x) \, dx.$
* The orthogonal polynomial depends on the weight function $w$, hence so does the Gauss quadrature.
> [!definition|*] Gauss Quadrature as Weighted Sum
> The **Gauss quadrature formula** is $\int_{a}^{b} w(x)f(x) \, dx \approx \sum^{n}_{k=0}\left< L_{k},1 \right> f(x_{k}),$
> where $L_{k}(x)=\left(\prod_{i \ne k}(x-x_{i})\right) \Big/ \left({\prod_{i \ne k}(x_{k}-x_{i})}\right)$ are the cardinal polynomials of $p_{n}$ defined above.
* The **Gauss quadrature weights** are then $w_{k}=\left< L_{k},1 \right> = \int _{a}^{b} w(x)L_{k}(x) \, dx,$ which again depend on the weight function $w$.
> [!proof] Proof that the two interpretations agree:
> Note that the Lagrange polynomial is $p_{n}=\sum_{k}f(x_{k})L_{k}$.
>
> Then using linearity, $\int_{a}^{b} w(x) p_{n}(x) \, dx=\left< p_{n},1 \right> = \left< \sum_{k}f(x_{k})L_{k},1 \right> = \sum_{k} f(x_{k}) \left< L_{k},1 \right> $
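To make the weights concrete, here is a minimal sketch for the Legendre case ($w \equiv 1$ on $[-1,1]$), assuming NumPy: it computes $w_{k}=\left< L_{k},1 \right>$ by integrating each cardinal polynomial and checks the result against the library's Gauss-Legendre rule.

```python
import numpy as np
from numpy.polynomial import Polynomial, legendre

n = 4
# leggauss returns the roots of the degree-(n+1) Legendre polynomial (w ≡ 1 on [-1, 1])
# together with the reference weights we want to reproduce.
nodes, ref_weights = legendre.leggauss(n + 1)

weights = np.empty(n + 1)
for k, xk in enumerate(nodes):
    others = np.delete(nodes, k)
    # cardinal polynomial L_k(x) = prod_{i != k} (x - x_i) / (x_k - x_i)
    Lk = Polynomial.fromroots(others) * (1.0 / np.prod(xk - others))
    antiderivative = Lk.integ()
    weights[k] = antiderivative(1.0) - antiderivative(-1.0)   # w_k = <L_k, 1>

print(np.allclose(weights, ref_weights))   # True: both definitions give the same weights
```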
### Details of Gauss Quadrature
The definition above relies on the fact that, for a positive weight $w$, the roots of the $(n+1)$th orthogonal polynomial $\phi_{n+1}$ are all real and distinct.
- If there were fewer than $n+1$ distinct points, we obviously could not fit a unique degree-$n$ Lagrange polynomial through them.
> [!proof]
> Assume otherwise, and let $x_{0},\dots,x_{r}$ with $r<n$ be the points in $(a,b)$ at which $\phi_{n+1}$ changes sign. There is at least one, since otherwise $\left< \phi_{0},\phi_{n+1} \right> \ne 0$.
> Now consider $p:= \prod_{k=0}^{r}(x-x_{k})$. It changes sign exactly where $\phi_{n+1}$ does, so the integrand below never changes sign and $ \left< p,\phi_{n+1} \right> = \int w\,p\,\phi_{n+1} \, dx \ne 0.$ This is only possible if $p \not\in \Pi_{n}$, contradicting $\deg p = r+1 \le n$.
If $f \in \Pi_{2n+1}$, its $(n+1)$-point Gauss quadrature is exact: $\sum^{n}_{k=0}\left< L_{k},1 \right> f(x_{k})=\int _{a}^{b}w(x)f(x) \, dx,$ whereas Newton-Cotes through equally spaced points $x_{0},\dots,x_{n}$ is only guaranteed exact for $f \in \Pi_{n}$ (one degree higher when $n$ is even).
> [!proof]-
> Let $\phi_{n+1}$ be the $(n+1)$th orthogonal polynomial. The division algorithm gives $f=q\phi_{n+1}+r$, where $q,r \in \Pi_{n}$. Then the Gauss quadrature is $\begin{align*}
> \sum^{n}_{k=0}\left< L_{k},1 \right>\Big(\cancel{q(x_{k})\underset{=0}{\phi_{n+1}(x_{k})}}+r(x_{k})\Big)
> &= \sum^{n}_{k=0}\left< L_{k},1 \right> r(x_{k})\\
> &= \left< \sum_{k=0}^{n}r(x_{k})L_{k}, 1 \right>\\
> &= \left< r,1 \right>
> \end{align*}$ where in the last step $\sum_{k=0}^{n}r(x_{k})L_{k}=r$, since the left-hand side is the degree-$n$ Lagrange interpolant of $r \in \Pi_{n}$, which is $r$ itself.
> But then the actual integral equals the same quantity: $\int wf \, dx=\left< f,1 \right> = \left< q\phi_{n+1},1 \right> + \left< r,1 \right> = \cancel{\left< q,\phi_{n+1} \right>} + \left< r, 1 \right> = \left< r,1 \right>,$ since $q \in \Pi_{n}$ and $\phi_{n+1}$ is orthogonal to every element of that set.
> So the Gauss quadrature is exact for $f$.
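A quick numerical check of the exactness claim (a sketch assuming NumPy, again for the Legendre weight): monomials up to degree $2n+1$ are integrated to machine precision, and the first genuine error appears at degree $2n+2$.

```python
import numpy as np
from numpy.polynomial import legendre

n = 3                                   # four-point rule, exact on Pi_{2n+1} = Pi_7
x, w = legendre.leggauss(n + 1)

for d in range(2 * n + 3):              # degrees 0, ..., 2n+2
    exact = 2.0 / (d + 1) if d % 2 == 0 else 0.0     # ∫_{-1}^{1} t^d dt
    print(d, abs(np.dot(w, x**d) - exact))           # ~1e-16 up to d = 2n+1, visible error at 2n+2
```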
### Gauss Quadrature and Hermite Interpolation
> [!bigidea] Gauss quadrature is equivalent to integrating the Hermite interpolation, which follows the function better and gives a better estimate of its integral.

> [!theorem|*] Gauss Quadrature Integrates a Hermite Interpolant
> If $l,h$ are the Lagrange and Hermite interpolants of $f$ through the roots of $\phi_{n+1}$, then $ \text{Gauss quadrature} \equiv\underset{\text{Integral of Lagrange}}{\int _{a}^{b}w(x)l(x) \, dx}=\underset{\text{Integral of Hermite}}{\int _{a}^{b}w(x)h(x) \, dx}$
>
> > [!proof]-
> > The first equality is just the definition of Gauss quadrature.
> > For the second equality, note that $l$ is also the Lagrange interpolant of $h \in \Pi_{2n+1}$ (they agree at every node, since both equal $f$ there), so by the previous result its $(n+1)$-point Gauss quadrature is exact: $\int _{a}^{b}w(x)l(x) \, dx =\text{Gauss quadrature of }h(x)\overset{\text{exact}}{=}\int _{a}^{b} w(x)h(x) \,dx $
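A numerical illustration of the theorem (a sketch assuming NumPy and SciPy; `KroghInterpolator` with repeated nodes is just a convenient way to build the Hermite interpolant):

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.interpolate import KroghInterpolator

n = 2
x, w = legendre.leggauss(n + 1)         # Gauss-Legendre nodes and weights (w ≡ 1 on [-1, 1])
f, df = np.sin, np.cos                  # test function and its derivative

# Hermite interpolant h in Pi_{2n+1}: matches f and f' at every node.
xi = np.repeat(x, 2)                    # each node listed twice -> value, then derivative
yi = np.empty_like(xi)
yi[0::2], yi[1::2] = f(x), df(x)
h = KroghInterpolator(xi, yi)

# Integrate h with a much finer Gauss rule (exact, since deg h <= 2n+1 = 5).
X, W = legendre.leggauss(10)
print(abs(np.dot(W, h(X)) - np.dot(w, f(x))))       # agreement to machine precision
```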
> [!exposition] Error of Gauss Quadrature
> The error of Gauss quadrature is $\mathrm{error}=\int _{a}^{b}w(x)f(x) \, dx - \text{Gauss quadrature},$ so replacing the quadrature with the integral of the Hermite interpolant, $\begin{align*}
> \mathrm{error}&= \int _{a}^{b}w(x)\big[f(x)-h(x)\big] \, dx\\
> &= \int_{a}^{b} w(x) \cdot(\text{Hermite interpolation error})\,dx.
> \end{align*}$
> Using the formula for the Hermite interpolation error, $
> \text{error} = \frac{f^{(2n+2)}(\eta)}{(2n+2)!}\int _{a}^{b}w(x)[\pi(x)]^{2} \, dx
> $ for some fixed $\eta \in [a,b]$, where $\pi(x)=\prod^{n}_{k=0}(x-x_{k})$. Hence it is bounded by $|\mathrm{error}| \le\frac{\sup_{[a,b]}|f^{(2n+2)}|}{(2n+2)!}\int _{a}^{b}w(x)[\pi(x)]^{2} \, dx .$
>
>
> > [!info]- The Hermite error formula used above
> > The error formula comes from $\begin{align*}
> > \text{error}
> > &= \int _{a}^{b}w(x)\cdot(\text{Hermite error}) \, dx \\
> > &= \int _{a}^{b} w(x) \left( \frac{f^{(2n+2)}(\theta(x))}{(2n+2)!}[\pi(x)]^{2} \right) \, dx\\
> > &= \frac{f^{(2n+2)}(\eta)}{(2n+2)!}\int _{a}^{b} w(x) [\pi(x)]^{2} \, dx
> > \end{align*} $ where the last step uses the MVT for integrals (valid here since $w(x)[\pi(x)]^{2}\ge 0$), which moves $f^{(2n+2)}$ outside the integral and turns $\theta(x)$ into some constant $\eta \in[a,b]$.
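As a sanity check on the bound (a sketch assuming NumPy; the three-point Legendre rule applied to $f=\exp$, for which $\sup_{[-1,1]}|f^{(2n+2)}| = e$):

```python
import math
import numpy as np
from numpy.polynomial import Polynomial, legendre

n = 2                                        # three-point Gauss-Legendre rule
x, w = legendre.leggauss(n + 1)

exact = np.e - np.exp(-1)                    # ∫_{-1}^{1} e^t dt
error = abs(exact - np.dot(w, np.exp(x)))

pi = Polynomial.fromroots(x)                 # pi(x) = prod_k (x - x_k)
pi_sq = (pi * pi).integ()
bound = np.e / math.factorial(2 * n + 2) * (pi_sq(1.0) - pi_sq(-1.0))

print(error, bound)                          # the actual error sits below the bound
```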