# Lagrange Multipliers
Suppose we have a problem of variation in which one function is maximized or minimized subject to a constraint imposed by another function:
\[f(x_1, \ldots, x_n) \rightarrow \text{ function }\] \[g(x_1, \ldots, x_n) = 0 \rightarrow \text{ constraint }\]Without the constraint we would have the problem
\[\delta f = \pdv{f}{x_1} \delta x_1 + \ldots + \pdv{f}{x_n} \delta x_n\] \[\pdv{f}{x_i} = 0 \qquad i = 1, \ldots, n\]and could apply any of our multivariate optimization strategies to solve it. With the constraint, however, the \( \delta x_i \) are no longer independent, so we cannot simply set each partial derivative to zero; handling this directly can be hard. Luckily, at any stationary point of the function that also satisfies the constraint, the gradient of the function can be expressed as a linear combination of the gradients of the constraints. With a single constraint this means
\[\pdv{f}{x_i} - \lambda \pdv{g}{x_i} = 0 \qquad i = 1, \ldots, n\]To see why, note that the constraint \( g = 0 \) ties the variables together: in principle we can solve it for one variable, say \( x_n \), and substitute, so that \( f \) becomes a function \( h \) of only \( n - 1 \) independent variables.
\[f, g \rightarrow h(x_1, \ldots, x_{n-1}) = f\big(x_1, \ldots, x_{n-1}, x_n(x_1, \ldots, x_{n-1})\big)\] \[\pdv{h}{x_i} \qquad i = 1, \ldots, n-1\]At the stationary points we're looking for, all of these partial derivatives vanish, and the \( \delta x_i \) are now genuinely independent.
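One way to see where \( \lambda \) comes from, sketched here for a single constraint and assuming \( \pdv{g}{x_n} \neq 0 \) so the elimination is legitimate: differentiating \( h \) and the identity \( g = 0 \) with the chain rule gives
\[\pdv{h}{x_i} = \pdv{f}{x_i} + \pdv{f}{x_n} \pdv{x_n}{x_i} = 0 \qquad \pdv{g}{x_i} + \pdv{g}{x_n} \pdv{x_n}{x_i} = 0\]Eliminating \( \pdv{x_n}{x_i} = -\pdv{g}{x_i} \big/ \pdv{g}{x_n} \) from the first equation,
\[\pdv{f}{x_i} - \lambda \pdv{g}{x_i} = 0 \qquad \lambda \equiv \pdv{f}{x_n} \Big/ \pdv{g}{x_n}\]which also holds for \( i = n \) by the definition of \( \lambda \). Together with \( g = 0 \) itself, this gives \( n + 1 \) equations for the \( n + 1 \) unknowns \( x_1, \ldots, x_n, \lambda \).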
Simplest example: Find the maximum area of a rectangle with perimeter \( 4a \)
\[f = A = x_1 x_2\] \[g = 0 = 2x_1 + 2x_2 - 4a\] \[\delta f = x_2 \delta x_1 + x_1 \delta x_2\] \[\lambda \delta g = 2 \lambda \delta x_1 + 2 \lambda \delta x_2\] \[x_2 - 2\lambda = 0 \qquad x_1 - 2\lambda = 0\] \[x_2 = 2 \lambda \qquad x_1 = 2 \lambda\] \[\rightarrow 4 \lambda + 4 \lambda - 4 a = 0 \rightarrow \lambda = \frac{a}{2}\]so \( x_1 = x_2 = 2\lambda = a \): the optimal rectangle is a square with side \( a \) and maximum area \( A = a^2 \).
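As a sanity check, here is a minimal sympy sketch of the same stationarity system (the names `x1`, `x2`, `lam`, `a` are just illustrative, not from the notes):

```python
import sympy as sp

# rectangle sides x1, x2, multiplier lam, perimeter parameter a
x1, x2, lam, a = sp.symbols('x1 x2 lam a', positive=True)

f = x1 * x2                  # area to maximize
g = 2*x1 + 2*x2 - 4*a        # perimeter constraint, g = 0

# Lagrange conditions df/dxi - lam * dg/dxi = 0, plus the constraint itself
eqs = [sp.diff(f, x1) - lam * sp.diff(g, x1),
       sp.diff(f, x2) - lam * sp.diff(g, x2),
       g]
print(sp.solve(eqs, [x1, x2, lam], dict=True))
# [{x1: a, x2: a, lam: a/2}]
```

Solving the full system at once, rather than eliminating a variable by hand, mirrors how the multiplier method is actually used: the constraint equation is just one more equation alongside the stationarity conditions.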