# How do Lagrange multipliers work to find the lowest value of a function subject to a constraint?

I have been using Lagrange multipliers in constrained optimization problems, but I don’t see how they actually work to simultaneously satisfy the constraint and find the lowest possible value of an objective function.

This type of problem is generally referred to as constrained optimization. A general technique for solving many such problems is the method of Lagrange multipliers. Below is an example worked with Lagrange multipliers, followed by a short justification of why the technique works.

Consider the paraboloid given by $f(x,y) = x^2 + y^2$. The global minimum of this surface lies at the origin ($x=0$, $y=0$). If we impose the constraint, a requirement on the relationship between $x$ and $y$, that $3x+y=6$, then the origin can no longer be our solution (since $3\cdot 0 + 1 \cdot 0 \neq 6$). Yet there is still a lowest point on this surface among the points satisfying the constraint.

What we have so far:

Objective function: $f(x,y) = x^2 + y^2$,
subject to: $3x+y=6$.
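Before invoking any machinery, it can be reassuring to confirm numerically that such a constrained minimum exists. The sketch below (plain Python; the grid resolution is an arbitrary choice) substitutes $y = 6 - 3x$ into the objective and scans along the constraint line:

```python
# Brute-force check: parameterize the constraint line 3x + y = 6 by x,
# substitute y = 6 - 3x, and scan a grid of x values for the lowest f.
def f(x, y):
    return x**2 + y**2

# Pairs (f value, x) over x in [-5, 5] with step 0.001; min picks the lowest f.
best = min((f(x, 6 - 3*x), x) for x in [i / 1000 for i in range(-5000, 5001)])
print(best)  # lowest value found, and the x at which it occurs
```

The scan already hints at a unique lowest point on the line, which the method below will locate exactly.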

From here we can derive the Lagrange formulation of our constrained minimization problem. This will be a function $L$ of $x$, $y$, and a single Lagrange multiplier $\lambda$ (one multiplier per constraint, and we have only one). It is this new function that we will minimize.

$L(x,y,\lambda) = x^2 + y^2 + \lambda(3x+y-6)$

The Lagrange formulation incorporates our original function along with our constraint(s). On the way toward minimizing $L$, we will have to minimize the objective function $x^2 + y^2$, as well as minimize the contribution from the constraint, which is now weighted by a factor of $\lambda$. If the constraint is met, then the expression $3x+y-6$ is necessarily zero and contributes nothing to the value of $L$. This is the trick of the technique.
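That vanishing constraint term can be checked directly. In this quick sketch (plain Python), we evaluate $L$ at a point satisfying $3x + y = 6$ and observe that the value of $\lambda$ is irrelevant there:

```python
def f(x, y):
    return x**2 + y**2

def L(x, y, lam):
    # Objective plus the lambda-weighted constraint term.
    return f(x, y) + lam * (3*x + y - 6)

# The point (1, 3) satisfies 3*1 + 3 = 6, so the constraint term is zero
# and L collapses to f there, no matter what lambda is.
print(L(1.0, 3.0, 0.0))    # 10.0
print(L(1.0, 3.0, -7.5))   # 10.0
```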

Minimizing the Lagrange formulation:

To minimize $L$ we simply find the $x$, $y$, and $\lambda$ values that make its gradient zero. (This is exactly analogous to setting the first derivative to zero in single-variable calculus.)

$\nabla L = 0:$

$\frac{\partial L}{\partial x} = 2x + 3 \lambda = 0$

$\frac{\partial L}{\partial y} = 2y + \lambda = 0$

$\frac{\partial L}{\partial \lambda} = 3x + y - 6 = 0$.

In our example we have arrived at a system of three simultaneous linear equations, which can (and should) be solved with matrix algebra. The solution is a vector holding values for $x$, $y$, and $\lambda$: here $x = 9/5$, $y = 3/5$, $\lambda = -6/5$. The lowest value of the objective function, subject to the given constraint, sits at $(x, y, f(x,y)) = (9/5,\, 3/5,\, 18/5)$. The Lagrange multiplier has no immediate physical interpretation in this example, though in certain contexts the multipliers do carry meaning: they measure how sensitive the optimal value is to changes in the constraint.
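As a concrete sketch of that matrix-algebra step, the three equations above can be stacked into a $3\times 3$ linear system and handed to NumPy's standard solver:

```python
import numpy as np

# Rows are dL/dx = 0, dL/dy = 0, dL/dlambda = 0, written as A @ [x, y, lambda] = b:
#   2x      + 3*lambda = 0
#        2y +   lambda = 0
#   3x +  y            = 6
A = np.array([[2.0, 0.0, 3.0],
              [0.0, 2.0, 1.0],
              [3.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 6.0])

x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)      # approximately 1.8, 0.6, -1.2
print(x**2 + y**2)    # approximately 3.6, the constrained minimum 18/5
```

Note that the constraint row $3x + y = 6$ sits in the same system as the stationarity rows, so the solver satisfies the constraint and minimizes the objective simultaneously.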