Suppose we have a problem of variation in which one function is maximized or minimized subject to a constraint imposed by another function:

$$f(x_1, \dots, x_n) \quad \text{(function to optimize)}$$
$$g(x_1, \dots, x_n) = 0 \quad \text{(constraint)}$$
Without the constraint, we would simply set the first variation of $f$ to zero,

$$\delta f = \frac{\partial f}{\partial x_1}\,\delta x_1 + \dots + \frac{\partial f}{\partial x_n}\,\delta x_n = 0,$$

which, because the variations $\delta x_i$ are independent, requires

$$\frac{\partial f}{\partial x_i} = 0, \qquad i = 1, \dots, n,$$
and then apply any of our multivariate optimization strategies to solve. This can be hard. Luckily, at any stationary point of the function that also satisfies the constraint, the gradient of the function can be expressed as a linear combination of the gradients of the constraints (here, of the single constraint $g$):
$$\frac{\partial f}{\partial x_i} - \lambda \frac{\partial g}{\partial x_i} = 0, \qquad i = 1, \dots, n$$
With the constraint $g = 0$ in force, requiring $\delta f = 0$ is equivalent to requiring stationarity of the combined functional $f - \lambda g$, where $\lambda$ is the Lagrange multiplier.
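As a small illustration of the conditions above, here is a sketch using SymPy on a hypothetical example (not from the text): optimizing $f = xy$ subject to $g = x + y - 1 = 0$ by solving the stationarity equations together with the constraint.

```python
import sympy as sp

# Hypothetical example: f = x*y subject to g = x + y - 1 = 0.
x, y, lam = sp.symbols("x y lam", real=True)
f = x * y
g = x + y - 1

# One Lagrange condition df/dxi - lam*dg/dxi = 0 per variable,
# plus the constraint g = 0 itself.
eqs = [sp.diff(f, v) - lam * sp.diff(g, v) for v in (x, y)] + [g]
sol = sp.solve(eqs, [x, y, lam], dict=True)
print(sol)
```

Solving the three equations ($y - \lambda = 0$, $x - \lambda = 0$, $x + y - 1 = 0$) gives the stationary point $x = y = \lambda = \tfrac{1}{2}$, which the script confirms symbolically.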