$\newenvironment {prompt}{}{} \newcommand {\ungraded }{} \newcommand {\todo }{} \newcommand {\oiint }{{\large \bigcirc }\kern -1.56em\iint } \newcommand {\mooculus }{\textsf {\textbf {MOOC}\textnormal {\textsf {ULUS}}}} \newcommand {\npnoround }{\nprounddigits {-1}} \newcommand {\npnoroundexp }{\nproundexpdigits {-1}} \newcommand {\npunitcommand }{\ensuremath {\mathrm {#1}}} \newcommand {\RR }{\mathbb R} \newcommand {\R }{\mathbb R} \newcommand {\N }{\mathbb N} \newcommand {\Z }{\mathbb Z} \newcommand {\sagemath }{\textsf {SageMath}} \newcommand {\d }{\mathop {}\!d} \newcommand {\l }{\ell } \newcommand {\ddx }{\frac {d}{\d x}} \newcommand {\zeroOverZero }{\ensuremath {\boldsymbol {\tfrac {0}{0}}}} \newcommand {\inftyOverInfty }{\ensuremath {\boldsymbol {\tfrac {\infty }{\infty }}}} \newcommand {\zeroOverInfty }{\ensuremath {\boldsymbol {\tfrac {0}{\infty }}}} \newcommand {\zeroTimesInfty }{\ensuremath {\small \boldsymbol {0\cdot \infty }}} \newcommand {\inftyMinusInfty }{\ensuremath {\small \boldsymbol {\infty -\infty }}} \newcommand {\oneToInfty }{\ensuremath {\boldsymbol {1^\infty }}} \newcommand {\zeroToZero }{\ensuremath {\boldsymbol {0^0}}} \newcommand {\inftyToZero }{\ensuremath {\boldsymbol {\infty ^0}}} \newcommand {\numOverZero }{\ensuremath {\boldsymbol {\tfrac {\#}{0}}}} \newcommand {\dfn }{\textbf } \newcommand {\unit }{\mathop {}\!\mathrm } \newcommand {\eval }{\bigg [ #1 \bigg ]} \newcommand {\seq }{\left ( #1 \right )} \newcommand {\epsilon }{\varepsilon } \newcommand {\phi }{\varphi } \newcommand {\iff }{\Leftrightarrow } \DeclareMathOperator {\arccot }{arccot} \DeclareMathOperator {\arcsec }{arcsec} \DeclareMathOperator {\arccsc }{arccsc} \DeclareMathOperator {\si }{Si} \DeclareMathOperator {\scal }{scal} \DeclareMathOperator {\sign }{sign} \newcommand {\arrowvec }{{\overset {\rightharpoonup }{#1}}} \newcommand {\vec }{{\overset {\boldsymbol {\rightharpoonup }}{\mathbf {#1}}}\hspace {0in}} \newcommand {\point }{\left (#1\right )} \newcommand {\pt 
}{\mathbf {#1}} \newcommand {\Lim }{\lim _{\point {#1} \to \point {#2}}} \DeclareMathOperator {\proj }{\mathbf {proj}} \newcommand {\veci }{{\boldsymbol {\hat {\imath }}}} \newcommand {\vecj }{{\boldsymbol {\hat {\jmath }}}} \newcommand {\veck }{{\boldsymbol {\hat {k}}}} \newcommand {\vecl }{\vec {\boldsymbol {\l }}} \newcommand {\uvec }{\mathbf {\hat {#1}}} \newcommand {\utan }{\mathbf {\hat {t}}} \newcommand {\unormal }{\mathbf {\hat {n}}} \newcommand {\ubinormal }{\mathbf {\hat {b}}} \newcommand {\dotp }{\bullet } \newcommand {\cross }{\boldsymbol \times } \newcommand {\grad }{\boldsymbol \nabla } \newcommand {\divergence }{\grad \dotp } \newcommand {\curl }{\grad \cross } \newcommand {\lto }{\mathop {\longrightarrow \,}\limits } \newcommand {\bar }{\overline } \newcommand {\surfaceColor }{violet} \newcommand {\surfaceColorTwo }{redyellow} \newcommand {\sliceColor }{greenyellow} \newcommand {\vector }{\left \langle #1\right \rangle } \newcommand {\sectionOutcomes }{} \newcommand {\HyperFirstAtBeginDocument }{\AtBeginDocument }$

We give a new method of finding extrema.

Throughout this course, we hope it has become apparent that when given a problem:

There is more than one way to solve it.

The method of Lagrange multipliers tells us that to maximize a function constrained to a curve, we need to find where the gradient of the function is perpendicular to the curve.

Previously, when we were finding extrema of functions $F:\R ^n\to \R$ when constrained to some curve, we had to find an explicit formula for the curve. Consider this example from the previous section:

The first step in solving this problem was to find an explicit formula that drew the curve $C$. In the case above, we chose: However, finding a function that draws the constraining set can be very difficult, or even impossible! If our constraining set had been our previous method would not work, as we (at least this author!) cannot find an explicit formula describing the set above. Nevertheless, there is another way: the method of Lagrange multipliers, named after the mathematician Joseph-Louis Lagrange. This method relies on the geometric properties of the gradient vector. Recall, there are three things you must know about the gradient vector:

• $\grad F = \vector {\pp [F]{x_1},\pp [F]{x_2},\dots ,\pp [F]{x_n}}$.
• $\grad F(\vec {x})$ points in the direction one must leave $\vec {x}$ in order to initially see the greatest increase in $F$.
• $\grad F(\vec {x})$ is perpendicular to the level set of $F$ passing through $\vec {x}$.
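To make the last fact concrete, here is a small symbolic check using Python's SymPy library; the particular function $F$ and the parametrization below are illustrative choices of ours, not taken from the text. Along a parametrized level curve of $F$, the dot product of $\grad F$ with the curve's tangent vector is identically zero.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# Illustrative choice: F(x, y) = x**2 + 4*y**2, whose level curves
# are the ellipses x**2 + 4*y**2 = c.
F = x**2 + 4*y**2
grad_F = sp.Matrix([sp.diff(F, x), sp.diff(F, y)])

# Parametrize the level curve F = 4 by x = 2*cos(t), y = sin(t).
curve = {x: 2*sp.cos(t), y: sp.sin(t)}
tangent = sp.Matrix([sp.diff(2*sp.cos(t), t), sp.diff(sp.sin(t), t)])

# The dot product of grad F (evaluated on the curve) with the tangent
# vector simplifies to 0 for every t: the gradient is perpendicular
# to the level curve at every point.
dot = grad_F.subs(curve).dot(tangent)
print(sp.simplify(dot))  # 0
```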

It is the last two facts that we will think about now. Below we see level curves for some function $F:\R ^2\to \R$ along with a constraining curve that we will call $S$:

Let’s add vectors to our graph that point in the direction of $\grad F(x,y)$. Since we know that the gradient vector is perpendicular to level curves, we can do this without computation.

If for some point $(x,y)$ on $S$ the gradient $\grad F(x,y)$ points in the “general” direction of the tangent vectors of $S$, then $(x,y)$ cannot give an extremal value of $F$, as moving along $S$ will either increase or decrease the value of $F$. Here’s the upshot:

The only candidates for local extrema occur when the gradient of $F$ is perpendicular to $S$.

How do we find these points? To do this, we imagine that $S$ is a level curve of some other function $G:\R ^2\to \R$, and define $S$ as the set of points $(x,y)$ satisfying \[ G(x,y) = c. \] Now the candidates for extrema of $F$ when constrained to the curve $S$ are found by finding the points $(x,y)$ where \[ \grad F(x,y) = \lambda \grad G(x,y) \] for some scalar $\lambda$, since the $(x,y)$ that satisfy this equation are those where the gradient vectors of $F$ are perpendicular to the level curve $G(x,y)= c$. This is the essence of the method of Lagrange multipliers.
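As a sketch of how such a system can be solved in practice, the following SymPy snippet sets up the Lagrange condition $\grad F = \lambda\,\grad G$ together with the constraint $G(x,y)=c$ for a concrete problem; the particular $F$, $G$, and $c$ here are our own illustrative choices, not from the text.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

# Illustrative problem: find the extrema of F(x, y) = x*y
# subject to the constraint G(x, y) = x**2 + y**2 = 2.
F = x*y
G = x**2 + y**2
c = 2

# The Lagrange condition grad F = lambda * grad G gives two scalar
# equations; the constraint G = c gives a third.
eqs = [sp.diff(F, x) - lam*sp.diff(G, x),
       sp.diff(F, y) - lam*sp.diff(G, y),
       G - c]

# Solving the system yields the candidate points for extrema.
candidates = sp.solve(eqs, [x, y, lam], dict=True)
for s in candidates:
    print(s[x], s[y], F.subs(s))
```

Here the candidates are $(\pm 1, \pm 1)$: the points $(1,1)$ and $(-1,-1)$ give the constrained maximum $F = 1$, while $(1,-1)$ and $(-1,1)$ give the constrained minimum $F = -1$.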

The first step for solving a constrained optimization problem using the method of Lagrange multipliers is to write down the equations needed to solve the problem.

### Working with geometry

Lagrange multipliers tell us that to maximize a function $F:\R ^2\to \R$ along a curve defined by $G(x,y) = c$, we need to find where $\grad F$ is perpendicular to that curve; equivalently, where $\grad F$ is parallel to $\grad G$. In essence, we are detecting geometric behavior using the tools of calculus.
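This geometric condition can itself be checked symbolically. In the plane, two vectors are parallel exactly when their two-dimensional "cross product" $a_1 b_2 - a_2 b_1$ vanishes; the snippet below (using illustrative functions of our own choosing, not from the text) verifies that at a constrained extremum the gradients of $F$ and $G$ are parallel.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Illustrative example: F(x, y) = x*y constrained to the circle
# G(x, y) = x**2 + y**2 = 2.
F = x*y
G = x**2 + y**2

grad_F = sp.Matrix([sp.diff(F, x), sp.diff(F, y)])
grad_G = sp.Matrix([sp.diff(G, x), sp.diff(G, y)])

# 2D cross product: zero exactly when grad F is parallel to grad G.
cross = grad_F[0]*grad_G[1] - grad_F[1]*grad_G[0]

# The point (1, 1) lies on the circle and maximizes F there; the cross
# product vanishes, so grad F is perpendicular to the constraint curve.
print(cross.subs({x: 1, y: 1}))  # 0
```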