We learn to optimize functions within regions and along the curves that bound them.

When optimizing functions of one variable, we have the Extreme Value Theorem: a function that is continuous on a closed interval [a, b] attains both an absolute maximum and an absolute minimum on that interval.

A similar theorem applies to functions of several variables.

When finding extrema of functions in your first calculus course, you had to check inside an interval and at the endpoints. Now we need to check the interior of a region and the boundary curve. Once we have compared the values at the interior critical points and the candidates on the boundary, the Extreme Value Theorem tells us that we have found the extrema; no other tests are necessary.
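As a quick single-variable refresher, here is a sketch of the procedure on a hypothetical example (not one of the text's problems): to find the absolute extrema of f(x) = x³ − 3x on [0, 3], we compare the value of f at the interior critical points and at the endpoints.

```python
# Absolute extrema of f(x) = x^3 - 3x on the closed interval [0, 3].
# f'(x) = 3x^2 - 3 = 0 gives x = 1 and x = -1; only x = 1 lies in [0, 3].
def f(x):
    return x**3 - 3*x

candidates = [0.0, 1.0, 3.0]  # the two endpoints and the interior critical point
values = {x: f(x) for x in candidates}
print(values)  # {0.0: 0.0, 1.0: -2.0, 3.0: 18.0}
```

Since f is continuous on the closed interval, the Extreme Value Theorem guarantees these three candidates include the absolute minimum (−2 at x = 1) and the absolute maximum (18 at x = 3).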

To check the interior of a region, one finds where the gradient vector is zero or undefined, as we did in the last section. Hence in this section, we will focus on how one finds extrema on the boundary. In this case, the Extreme Value Theorem still applies, and so we can be sure that we have found the extrema on the boundary. Let’s see some examples; we start with a classic real-world problem.
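To make the two-step procedure concrete, here is a small numerical sketch on a hypothetical problem (not one of the text's examples): find the absolute extrema of f(x, y) = x² + 2y² − x on the closed unit disk. Solving ∇f = ⟨2x − 1, 4y⟩ = ⟨0, 0⟩ by hand gives the interior critical point (1/2, 0); the boundary circle is then sampled through its parametrization.

```python
import math

def f(x, y):
    return x**2 + 2*y**2 - x

# Step 1 (interior): grad f = (2x - 1, 4y) = (0, 0)  ->  (1/2, 0).
interior_candidates = [(0.5, 0.0)]

# Step 2 (boundary): parametrize the unit circle as (cos t, sin t)
# and sample it densely.
n = 100_000
boundary_candidates = [
    (math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
    for k in range(n)
]

candidates = interior_candidates + boundary_candidates
values = [f(x, y) for (x, y) in candidates]
print(min(values))  # approx -0.25, attained at the interior point (1/2, 0)
print(max(values))  # approx  2.25, attained on the boundary at (-1/2, ±sqrt(3)/2)
```

The minimum −1/4 comes from the interior critical point, while the maximum 9/4 occurs on the boundary, illustrating why both checks are needed.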

This portion of the text is entitled “Constrained optimization” because we want to find extrema of a function subject to a constraint, meaning there are limitations to what values the function can attain. In our first example the constraint was set by the U.S. Post Office. Constrained optimization problems are an important topic in applied mathematics. The techniques developed here are the basis for solving larger problems, where the constraints are more complex or more than two variables are involved.

When working constrained optimization problems, there is often more than one way to proceed. Let’s do the previous problem again, but this time we will use vector-valued functions and the chain rule.
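To sketch how the chain rule enters (with a hypothetical example, not the one worked in the text): suppose the constraint curve is the unit circle, traced by the vector-valued function r(t) = ⟨cos t, sin t⟩, and we optimize f(x, y) = xy along it. The chain rule turns the two-variable constrained problem into a one-variable problem in t:

```latex
\frac{d}{dt}\, f\bigl(\vec r(t)\bigr)
  = \nabla f\bigl(\vec r(t)\bigr) \cdot \vec r\,'(t)
  = \langle y,\, x\rangle \cdot \langle -\sin t,\, \cos t\rangle
  = \cos^2 t - \sin^2 t
  = \cos 2t .
```

Setting cos 2t = 0 gives t = π/4 + kπ/2, where f = (1/2) sin 2t = ±1/2: the maximum 1/2 occurs where the coordinates share a sign, and the minimum −1/2 where they differ.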

So far, our constraints have all been lines. This doesn’t have to be the case.

It is hard to overemphasize the importance of optimization. As humans, we routinely seek to make something better. By expressing the something as a mathematical function, “making something better” means “optimize some function.”