```dataview
table without id
  File as "Topics",
  join(
    sort(
      map(
        filter(file.tags, (tag) => any(map(this.domain_tags, (domtag) => contains(tag, domtag + "/")))),
        (x) => replace(regexreplace(x, "#(" + join(this.domain_tags, "|") + ")/", ""), "_", " ")
      )
    ),
    ", "
  ) as "",
  dateformat(file.mtime, "yyyy-MM-dd") as "Last Modified"
from ""
flatten "[[" + file.path + "|" + truncate(file.name, 30) + "]]" as File
flatten domain as domain
where
  (
    (domain and contains(domain, this.file.link) and file.name != this.file.name)
    or any(map(file.tags, (x) => econtains(this.domain_tags, substring(x, 1))))
    or any(map(file.tags, (x) => any(map(this.domain_tags, (domtag) => contains(x, domtag + "/")))))
  )
  and !contains(file.path, "2 - Snippets")
  and !contains(file.tags, "subdomain")
sort file.mtime desc
```
```dataview
table without id
  File as "Snippets",
  join(
    sort(
      map(
        filter(file.tags, (tag) => any(map(this.domain_tags, (domtag) => contains(tag, domtag + "/")))),
        (x) => replace(regexreplace(x, "#(" + join(this.domain_tags, "|") + ")/", ""), "_", " ")
      )
    ),
    ", "
  ) as "",
  dateformat(file.mtime, "yyyy-MM-dd") as "Last Modified"
from "2 - Snippets"
flatten "[[" + file.path + "|" + truncate(file.name, 30) + "]]" as File
flatten domain as domain
where
  (
    (domain and contains(domain, this.file.link) and file.name != this.file.name)
    or any(map(file.tags, (x) => econtains(this.domain_tags, substring(x, 1))))
    or any(map(file.tags, (x) => any(map(this.domain_tags, (domtag) => contains(x, domtag + "/")))))
  )
sort file.mtime desc
```
[[Bayesian Inference#Hierarchical Bayesian Models|Hierarchical Bayes]] with a random hyperparameter $\phi \sim P$ makes things hard to compute, so **empirical Bayes** uses the sample to estimate components like $\hat{\phi}$ and/or $\hat{\pi}(\theta \,|\, \mathbf{x})$ directly.
> [!definition|*] Empirical Bayes Method for Hyperparameters
>
> In empirical Bayes, the hierarchical model is simplified to:
> - A point estimate $\hat{\phi}$ of the hyperparameter.
> - A prior $\theta \sim \pi(\theta;\hat{\phi})$.
> - The posterior $\hat{\pi}(\theta \,|\, \mathbf{x}) \propto L(\theta \,|\, \mathbf{x}) \cdot \pi(\theta; \hat{\phi})$.
>
> The point estimate $\hat{\phi}$ can be obtained with frequentist methods like the [[Maximum Likelihood Estimator|MLE]] and the [[Point Estimators#Method of Moments|method of moments]].
- The empirical Bayes posterior $\hat{\pi}(\theta\,|\,\mathbf{x})$ can then be used to form Bayes estimators like $\hat{\theta}_{\mathrm{EB}}=\mathbb{E}[\theta \,|\,\mathbf{x}]$ for quadratic loss.
- The point estimate $\hat{\phi}$ can be obtained by applying MLE or the method of moments twice (i.e. once to get $\hat{\theta}$ or $\hat{\theta}_{1 \sim n}$, then again to compute the MLE or MME of $\hat{\phi}$ from it); a worked Poisson-Gamma sketch follows this list.
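A minimal sketch of the parametric recipe, under an assumed Poisson-Gamma model $x_i \sim \mathrm{Poisson}(\theta_i)$, $\theta_i \sim \mathrm{Gamma}(\alpha, \beta)$: the hyperparameter $\hat{\phi} = (\hat{\alpha}, \hat{\beta})$ comes from the method of moments applied to the marginal of $\mathbf{x}$ (mean $\alpha/\beta$, variance $\alpha/\beta + \alpha/\beta^2$), and the posterior mean is the conjugate $(\hat{\alpha} + x_i)/(\hat{\beta} + 1)$. The function name and simulation are illustrative, not from the source.

```python
import numpy as np

def eb_poisson_gamma(x):
    """Parametric empirical Bayes under an assumed Poisson-Gamma model:
    x_i ~ Poisson(theta_i), theta_i ~ Gamma(a, b) with rate b.

    (a, b) is estimated by method of moments on the marginal of x
    (mean a/b, variance a/b + a/b^2); the returned estimates are the
    conjugate posterior means E[theta_i | x_i] = (a + x_i) / (b + 1).
    """
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var(ddof=1)
    if v <= m:
        raise ValueError("marginal variance must exceed the mean (overdispersion)")
    b_hat = m / (v - m)                   # solve v = m + m/b for b
    a_hat = m * b_hat                     # solve m = a/b for a
    theta_eb = (a_hat + x) / (b_hat + 1)  # Bayes estimator under quadratic loss
    return a_hat, b_hat, theta_eb

# Illustration: simulate from a true Gamma(3, rate 0.5) prior.
rng = np.random.default_rng(0)
x = rng.poisson(rng.gamma(shape=3.0, scale=2.0, size=1000))
a_hat, b_hat, theta_eb = eb_poisson_gamma(x)
```

Each $\hat{\theta}_{\mathrm{EB},i}$ shrinks the raw count $x_i$ toward the pooled mean $\hat{\alpha}/\hat{\beta}$, with more shrinkage when $\hat{\beta}$ is large, i.e. when the estimated prior is tight.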
Alternatively, empirical Bayes can directly estimate quantities like the posterior mean without assuming a prior; this gives **nonparametric empirical Bayes**, e.g. Robbins' formula (sketched below); cf. [[Computer Age Statistical Inference|CASI]] p. 77.
- It uses the sample to estimate the **marginal distribution** $f(x)$.
- The crucial point, and the surprise, is that large data sets of parallel situations carry within them their own Bayesian information.
- One issue is that the completely nonparametric model is unstable in sparse regions; parametrized models perform better there.
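For Poisson counts, the posterior mean has the prior-free form $\mathbb{E}[\theta \,|\, x] = (x+1)\,f(x+1)/f(x)$, so Robbins' formula simply plugs the empirical marginal $\hat{f}$ into it. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def robbins(x):
    """Nonparametric empirical Bayes (Robbins' formula) for Poisson counts.

    E[theta | x = k] = (k + 1) * f(k + 1) / f(k), with the unknown marginal f
    replaced by the empirical counts hat{f} (normalization cancels in the ratio).
    """
    x = np.asarray(x, dtype=int)
    freq = np.bincount(x, minlength=x.max() + 2)  # hat{f}(k) up to max(x) + 1
    k = np.arange(x.max() + 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        post_mean = (k + 1) * freq[k + 1] / freq[k]  # NaN/inf where hat{f}(k) = 0
    return post_mean  # post_mean[k] estimates E[theta | x = k]
```

The frequency ratio $\hat{f}(k+1)/\hat{f}(k)$ is exactly what degrades in the sparse tail, which is the instability noted in the last bullet above.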