Wolfe duality
In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. By the weak duality principle, solving the Wolfe dual yields a lower bound on the optimal value of a minimization problem.
Mathematical formulation
For a minimization problem with inequality constraints,
$$\begin{aligned} &\underset{x}{\operatorname{minimize}} && f(x)\\ &\text{subject to} && g_i(x)\leq 0,\quad i=1,\dots,m \end{aligned}$$
the Lagrangian dual problem is
$$\begin{aligned} &\underset{u}{\operatorname{maximize}} && \inf_{x}\left(f(x)+\sum_{j=1}^{m}u_j g_j(x)\right)\\ &\text{subject to} && u_i\geq 0,\quad i=1,\dots,m \end{aligned}$$
where the objective function is the Lagrange dual function. Provided that the functions $f$ and $g_1,\ldots,g_m$ are convex and continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem
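The claim that the infimum of the Lagrangian is attained where its gradient vanishes can be checked numerically on a small convex example. The problem below (minimize $x^2$ subject to $1-x\leq 0$) is an illustrative example not taken from the article; for it, setting $\partial L/\partial x = 2x - u = 0$ gives $x = u/2$ and the closed-form dual function $q(u) = u - u^2/4$.

```python
from scipy.optimize import minimize_scalar

# Assumed toy problem (not from the article):
#   minimize f(x) = x^2   subject to   g(x) = 1 - x <= 0   (i.e. x >= 1)
# Primal optimum: x* = 1 with f(x*) = 1.

def lagrangian(x, u):
    """L(x, u) = f(x) + u * g(x)."""
    return x**2 + u * (1.0 - x)

def dual_function(u):
    """q(u) = inf_x L(x, u), computed numerically.

    L is convex and differentiable in x, so the infimum sits at the
    stationary point dL/dx = 2x - u = 0, i.e. x = u/2.
    """
    res = minimize_scalar(lambda x: lagrangian(x, u))
    return res.fun, res.x

# Verify: numeric minimizer matches the gradient-zero point u/2,
# and the dual value matches the closed form q(u) = u - u^2/4.
for u in [0.0, 1.0, 2.0, 3.0]:
    q, x_min = dual_function(u)
    assert abs(x_min - u / 2.0) < 1e-5
    assert abs(q - (u - u**2 / 4.0)) < 1e-5
```

Maximizing $q(u) = u - u^2/4$ over $u \geq 0$ gives $u = 2$ and dual value $1$, matching the primal optimum, as expected for this convex problem.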
$$\begin{aligned} &\underset{x,u}{\operatorname{maximize}} && f(x)+\sum_{j=1}^{m}u_j g_j(x)\\ &\text{subject to} && \nabla f(x)+\sum_{j=1}^{m}u_j\nabla g_j(x)=0\\ &&& u_i\geq 0,\quad i=1,\dots,m \end{aligned}$$
is called the Wolfe dual problem. It uses the Karush–Kuhn–Tucker (KKT) stationarity condition as a constraint. Moreover, the equality constraint $\nabla f(x)+\sum_{j=1}^{m}u_j\nabla g_j(x)=0$ is nonlinear in general, so the Wolfe dual problem may be a nonconvex optimization problem. In any case, weak duality holds.
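The Wolfe dual and weak duality can be illustrated on the same assumed toy problem (minimize $x^2$ subject to $1-x\leq 0$; not from the article). Its Wolfe dual maximizes $x^2 + u(1-x)$ subject to the stationarity constraint $2x - u = 0$ and $u \geq 0$. The sketch below solves it with a general-purpose constrained solver and checks that the dual value does not exceed the primal optimum.

```python
from scipy.optimize import minimize

# Assumed toy problem (not from the article):
#   primal:  minimize x^2  subject to  1 - x <= 0,  optimum f(x*) = 1 at x* = 1
# Wolfe dual:
#   maximize  x^2 + u * (1 - x)
#   subject to  2x - u = 0   (stationarity of the Lagrangian in x)
#               u >= 0

def neg_wolfe_objective(z):
    x, u = z
    # Negated because scipy.optimize.minimize minimizes.
    return -(x**2 + u * (1.0 - x))

constraints = [{"type": "eq", "fun": lambda z: 2.0 * z[0] - z[1]}]
bounds = [(None, None), (0.0, None)]  # x free, u >= 0

res = minimize(neg_wolfe_objective, x0=[0.5, 0.5],
               bounds=bounds, constraints=constraints)
dual_value = -res.fun
primal_value = 1.0  # f(x*) = 1 at x* = 1

# Weak duality: the Wolfe dual value never exceeds the primal optimum.
assert dual_value <= primal_value + 1e-6
```

Here strong duality actually holds (the dual optimum is attained at $x=1$, $u=2$ with value $1$) because the problem is convex; for nonconvex problems only the weak-duality bound is guaranteed.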