**primal problem** $\begin{align*}
\min_{x} f_{0}(x),~ \text{ subject to } &\forall i=1,\dots,m: f_{i}(x) \le 0,\\
&\forall j=1,\dots,r: h_{j}(x) =0.
\end{align*}$
This is equivalent to the problem with an infinite-valued penalty: $\min_{x}f_{0}(x)+\sum_{i}I_{-}(f_{i}(x))+\sum_{j}I_{0}(h_{j}(x)),$where $I_{-}(f)=\begin{cases}0 & \text{if }f \le 0 \\ \infty &\text{otherwise}\end{cases}~~,~~I_{0}(h)=\begin{cases}0 & \text{if }h = 0 \\ \infty &\text{otherwise}\end{cases}$
Note that $I_{-}(f)= \sup_{\lambda \ge 0}\lambda f$, and $I_{0}(h)=\sup_{\nu \in \mathbb{R}} \nu h$, so this is further equivalent to the unconstrained **minimax** problem $\min_{x} \sup_{\lambda \succeq 0, \nu}\left\{ f_{0}(x)+\sum_{i}\lambda_{i}f_{i}(x) + \sum_{j}\nu_{j}h_{j}(x) \right\}=\min_{x}\sup_{\lambda \succeq 0,\nu}L(x,\lambda,\nu),$where $L$ is the **Lagrangian**.
The **dual problem** is $\sup_{\lambda \succeq 0,\nu} \min_{x}L(x,\lambda,\nu),$whose value is in general no greater than the primal objective $\min_{x}\sup_{\lambda,\nu}L$. The **dual function** is $g(\lambda, \nu):= \min_{x}L(x,\lambda,\nu)$, so the dual problem is $\sup_{\lambda \succeq 0,\nu}g(\lambda,\nu)$.
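As a worked instance of the dual function (the linear program below is an added textbook example, not part of the notes above): for $\min_{x} c^{T}x$ subject to $Ax=b$, $x \succeq 0$, the Lagrangian is $L(x,\lambda,\nu)=c^{T}x-\lambda^{T}x+\nu^{T}(Ax-b)=-b^{T}\nu+(c-\lambda+A^{T}\nu)^{T}x,$which is linear in $x$, so $g(\lambda,\nu)=\min_{x}L(x,\lambda,\nu)=\begin{cases}-b^{T}\nu & \text{if }c-\lambda+A^{T}\nu=0 \\ -\infty &\text{otherwise.}\end{cases}$Eliminating $\lambda=c+A^{T}\nu \succeq 0$, the dual problem $\sup_{\lambda \succeq 0,\nu}g(\lambda,\nu)$ becomes $\max_{\nu}-b^{T}\nu$ subject to $A^{T}\nu+c \succeq 0$.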
In general, the dual is no greater than the primal (**weak duality**), i.e. $\underset{\text{dual}}{\sup_{\lambda,\nu} \inf_{x}L(x,\lambda,\nu)} \le \underset{\text{primal}}{\inf_{x}\sup_{\lambda,\nu} L(x,\lambda,\nu)};$
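A minimal numerical sketch (the quadratic toy problem here is my own illustrative choice, not from the notes): for $\min_{x} x^{2}$ subject to $1-x \le 0$, the dual function works out to $g(\lambda)=\lambda-\lambda^{2}/4$, and every dual value lower-bounds the primal optimum $p^{\ast}=1$:

```python
import numpy as np

# Toy problem (illustrative choice): minimize f0(x) = x^2 subject to
# f1(x) = 1 - x <= 0.  The primal optimum is x* = 1 with p* = 1.

def lagrangian(x, lam):
    return x**2 + lam * (1 - x)

def dual(lam):
    # g(lam) = min_x L(x, lam); stationarity dL/dx = 2x - lam = 0 gives
    # x = lam/2, hence g(lam) = lam - lam**2 / 4.
    return lam - lam**2 / 4

# Sanity check the closed form against a brute-force grid minimisation.
xs = np.linspace(-5, 5, 100001)
for lam in [0.0, 1.0, 2.0, 3.0]:
    assert abs(lagrangian(xs, lam).min() - dual(lam)) < 1e-6

p_star = 1.0
lams = np.linspace(0, 10, 1001)

# Weak duality: g(lam) <= p* for every lam >= 0.
assert np.all(dual(lams) <= p_star + 1e-12)

# The dual optimum g(2) = 1 attains p*: strong duality holds here, as the
# problem is convex and x = 2 is strictly feasible (Slater's condition).
print(dual(2.0))  # → 1.0
```

The grid check is only a sanity test of the closed-form minimisation; the interesting lines are the weak-duality bound and the fact that the dual optimum closes the gap.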
when the two are equal, it's called **strong duality**; one sufficient condition is a convex problem ($f_{0},f_{1},\dots,f_{m}$ convex, $h_{1},\dots,h_{r}$ affine) plus Slater's condition: there exists a strictly feasible $x$, i.e. $f_{0}(x) < \infty$, $\forall j: h_{j}(x)=0$, and $\forall i=1,\dots,m: f_{i}(x) < 0$.
Under strong duality, any primal optimiser $x ^{\ast}$ and dual optimiser $(\lambda^{\ast},\nu^{\ast})$ satisfy **complementary slackness**: $\forall i=1,\dots,m:~\lambda_{i}^{\ast}f_{i}(x^{\ast})=0.$
That is, unless the Lagrange multiplier sits at the boundary of its feasible range, $\lambda_{i}^{\ast}=0$ (recall we require $\lambda \succeq 0$), the $i$-th inequality constraint must be active at the optimiser: $f_{i}(x^{\ast})= 0$.
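A hedged sketch of complementary slackness (the two-constraint toy problem is my own illustrative choice): with one constraint active and one slack at the optimum, each product $\lambda_{i}^{\ast}f_{i}(x^{\ast})$ vanishes, but for a different reason in each case:

```python
# Toy problem (illustrative): minimize x^2 subject to
#   f1(x) = 1 - x  <= 0   (active at the optimum)
#   f2(x) = x - 10 <= 0   (slack at the optimum)
# The optimum is x* = 1; stationarity of the Lagrangian,
# 2x - lam1 + lam2 = 0, together with lam2 = 0 gives lam1* = 2.

x_star = 1.0
lam1, lam2 = 2.0, 0.0

f1 = 1 - x_star   # = 0:  active constraint, so lam1 > 0 is allowed
f2 = x_star - 10  # = -9: strictly slack, which forces lam2 = 0

# Complementary slackness: both products are zero, but the first because
# f1(x*) = 0 and the second because lam2 = 0.
assert lam1 * f1 == 0.0
assert lam2 * f2 == 0.0

# Stationarity: d/dx [x^2 + lam1*(1 - x) + lam2*(x - 10)] = 0 at x*.
assert 2 * x_star - lam1 + lam2 == 0.0
```

The asymmetry is the point: a strictly slack constraint ($f_{2}(x^{\ast})<0$) kills its multiplier, while an active constraint may carry a positive one.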