Robustness can come from two sources -- the quantity being estimated, and the method of estimating it.
Some quantities are inherently more robust than others:
- The median is robust against extreme values, while the mean is not.
This kind of robustness is captured by the **influence function**: if $T(F)$ is a statistical functional of a distribution $F$, its influence function measures the sensitivity of $T$ to perturbing $F$ by $\delta_{x}$ (unit mass at $x$): $\mathrm{IF}_{T}(x):= \lim_{\epsilon \to 0^{+}} \frac{T((1-\epsilon)F + \epsilon \delta_{x}) - T(F)}{\epsilon}.$
For example, the $\mathrm{IF}$ of the population mean $\mu$ is $\mathrm{IF}(x)=x-\mu$: the farther $x$ lies from $\mu$, the larger its influence. Since this influence is unbounded, the population mean is not robust.
- Alternatives like the **trimmed mean** and the **winsorized mean** bound the (effective) values of $x$, hence bound the influence (see the sketch below).
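A quick numerical illustration (a sketch, not part of the original note): the empirical sensitivity curve $n\,\big(T(x_{1},\dots,x_{n},x) - T(x_{1},\dots,x_{n})\big)$ is a finite-sample analogue of the influence function. The snippet below (sample size, trim fraction, and winsorization limits are arbitrary choices for illustration) shows it growing without bound in $x$ for the mean but staying bounded for the median, trimmed mean, and winsorized mean.

```python
# Sketch: empirical sensitivity of several location statistics to one added point x.
# n * (T(sample with x appended) - T(sample)) approximates IF_T(x).
import numpy as np
from scipy.stats import trim_mean
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=1000)  # arbitrary base sample

def sensitivity(stat, sample, x):
    """Finite-sample analogue of the influence function at x."""
    n = len(sample)
    return n * (stat(np.append(sample, x)) - stat(sample))

stats = {
    "mean": np.mean,
    "median": np.median,
    "10% trimmed mean": lambda a: trim_mean(a, 0.1),
    "10% winsorized mean": lambda a: float(np.mean(np.asarray(winsorize(a, limits=(0.1, 0.1))))),
}

for x in [1.0, 10.0, 1000.0]:
    print(f"x = {x}:")
    for name, stat in stats.items():
        # The mean's sensitivity tracks x - mu; the others stay bounded.
        print(f"  {name:>20s}: sensitivity ~ {sensitivity(stat, sample, x):.2f}")
```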
Furthermore, for an i.i.d. sample $x_{1},\dots,x_{n}$ from $F$, the plug-in estimator $\hat{\theta}=T(\hat{F}_{n})$ of $\theta=T(F)$ admits the asymptotic expansion $\hat{\theta} \approx \theta + \frac{1}{n}\sum_{i=1}^{n}\mathrm{IF}(x_{i}),$ so a bounded influence function also bounds how much any single observation can move the estimate.
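A minimal check of this expansion (a sketch under stated assumptions, not from the note): take $F$ to be the standard normal and $T$ the median, so $\theta=0$ and $\mathrm{IF}(x)=\operatorname{sign}(x-\theta)/(2f(\theta))$ with $f$ the standard normal density, a standard result used here without derivation.

```python
# Sketch: compare the sample median to its influence-function linearization
# theta + (1/n) * sum IF(x_i) for x_i ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
theta = 0.0                            # population median of N(0, 1)
f_theta = 1.0 / np.sqrt(2.0 * np.pi)   # standard normal density at the median

for n in [100, 1000, 10000]:
    x = rng.normal(size=n)
    theta_hat = np.median(x)                                          # actual estimate
    linearized = theta + np.mean(np.sign(x - theta)) / (2 * f_theta)  # IF expansion
    print(f"n = {n:6d}: sample median = {theta_hat:+.4f}, linearization = {linearized:+.4f}")
```

The two columns should agree increasingly well as $n$ grows, which is exactly what the expansion claims.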
For robustness coming from the estimation method, in particular the choice of loss function, see [[Loss Functions and Robustness]].
### Personal Investigations
For a measure of how the value of a single sample point influences a sample statistic, consult [[Statistic's Susceptibility of Influence from One Sample|this note]].