We use the gradient to approximate values for functions of several variables.

We’ve studied differentials in our previous courses: if $y = f(x)$ is differentiable, then
$$dy = f'(x)\,dx.$$
Here $dx$ and $dy$ are two new variables that have been “cooked-up” to ensure that
$$\frac{dy}{dx} = f'(x).$$
It is worthwhile to compare and contrast $dx$ and $dy$ with $\Delta x$ and $\Delta y$. The values $\Delta x$ and $\Delta y$ are the change in $x$ and the change in $y$ when $x$ and $y$ are related by $y = f(x)$. On the other hand, if we set $dx = \Delta x$, then $dy$ is a function of $x$ and $dx$, but $dy$ is not necessarily equal to $\Delta y$. Instead, $dy$ is the value that satisfies the equation $dy = f'(x)\,dx$. When $dx$ is small, $dy \approx \Delta y$, the change in $y$ resulting from the change in $x$. The key idea is that as $dx$ gets small, the difference between $\Delta y$ and $dy$ goes to $0$. Another way of stating this is: as $dx$ goes to $0$, the error in approximating $\Delta y$ with $dy$ goes to $0$.
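For instance, take $f(x) = x^2$ (a function of our own choosing, just to make the comparison concrete) at $x = 3$ with $dx = \Delta x = 0.1$. Then
$$\Delta y = f(3.1) - f(3) = 9.61 - 9 = 0.61, \qquad dy = f'(3)\,dx = 6(0.1) = 0.6,$$
so the error $\Delta y - dy = 0.01$ is already much smaller than $dx$, and it shrinks to $0$ faster than $dx$ does.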

Let’s extend this idea to functions of two variables. Consider $z = f(x,y)$, and let $dx$ and $dy$ represent changes in $x$ and $y$, respectively. Now
$$\Delta z = f(x + dx, y + dy) - f(x,y)$$
is the change in $z$ over the change in $x$ and $y$. Recalling that $f_x$ and $f_y$ give the instantaneous rates of change of $z$ in the $x$- and $y$-directions respectively, we can approximate $\Delta z$ as
$$dz = f_x\,dx + f_y\,dy.$$
In words, this says:

The total change in $z$ is approximately the change caused by changing $x$ plus the change caused by changing $y$.

Setting $d\vec{r} = \langle dx, dy \rangle$ and recalling that $\nabla f = \langle f_x, f_y \rangle$, we can rewrite this in terms of the dot product:
$$dz = f_x\,dx + f_y\,dy = \nabla f \cdot d\vec{r}.$$
This leads us to our next definition.

Let $z = f(x,y)$ be a differentiable function. Find $dz$.
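The formula used in this example is not shown above, so as a stand-in take the hypothetical function $z = x^2 y + \sin y$. Then
$$dz = f_x\,dx + f_y\,dy = 2xy\,dx + \left(x^2 + \cos y\right)dy.$$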

We can approximate $\Delta z$ with $dz$, but, as with all approximations, there is error involved. Approximating $\Delta z$ with $dz$ relies on the fact that if a function is differentiable, then we can “zoom in” on the surface near a point until the surface looks like a plane. As we’ve learned, this property is called differentiability, meaning quite literally that we can use differentials to describe the surface.
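To see this shrinking error numerically, here is a minimal sketch; the function $f(x,y) = x^2 y$ and the base point $(2,1)$ are our own choices for illustration, not taken from the text.

```python
# Compare the actual change (Delta z) with the total differential (dz)
# for the illustrative function f(x, y) = x^2 * y at the point (2, 1).
def f(x, y):
    return x**2 * y

def fx(x, y):  # partial derivative of f with respect to x
    return 2 * x * y

def fy(x, y):  # partial derivative of f with respect to y
    return x**2

x0, y0 = 2.0, 1.0
for h in (0.1, 0.01, 0.001):
    dx = dy = h
    dz = fx(x0, y0) * dx + fy(x0, y0) * dy       # differential approximation
    delta_z = f(x0 + dx, y0 + dy) - f(x0, y0)    # actual change in z
    print(h, delta_z - dz)  # the error shrinks much faster than h does
```

As $h$ shrinks by a factor of $10$, the error $\Delta z - dz$ shrinks by roughly a factor of $100$; this is the numerical face of the surface “looking like a plane” up close.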

If we believe that discrete data has been gathered from a function that is differentiable, it makes sense to estimate values of the function using differentials.

Approximating with the total differential

Suppose you want to approximate $f(x,y) = \sqrt{x}\,\sin y$ at the point $(4.1, 0.8)$. Without knowledge of calculus, your approximation might go like this:

We try to find numbers near $4.1$ and $0.8$ where $f$ is easy to evaluate. For example, we know that $\sqrt{4} = 2$, so instead of looking at $x = 4.1$, we’ll use $x = 4$. Also, we know $\sin(\pi/4) = \frac{\sqrt{2}}{2}$; since $\pi/4 \approx 0.785 \approx 0.8$, we’ll use it in our approximation. Hence
$$f(4.1, 0.8) \approx f(4, \pi/4) = \sqrt{4}\,\sin(\pi/4) = 2\cdot\frac{\sqrt{2}}{2} = \sqrt{2} \approx 1.414.$$

Without calculus (or some other insight), this is the best approximation we could reasonably come up with. The total differential gives us a way of adjusting this initial approximation to hopefully get a more accurate answer.
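Carrying the adjustment out (using the function and point as reconstructed above, so the specific numbers depend on that reading of the example): with $f(x,y) = \sqrt{x}\,\sin y$ we have $f_x = \frac{\sin y}{2\sqrt{x}}$ and $f_y = \sqrt{x}\,\cos y$. Taking $dx = 4.1 - 4 = 0.1$ and $dy = 0.8 - \pi/4 \approx 0.0146$,
$$dz = f_x(4, \pi/4)\,dx + f_y(4, \pi/4)\,dy \approx 0.0177 + 0.0207 \approx 0.038,$$
so $f(4.1, 0.8) \approx \sqrt{2} + 0.038 \approx 1.452$, which compares well with the true value $f(4.1, 0.8) \approx 1.4525$.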

The point of the previous example was not to develop an approximation method for known functions. After all, we can very easily compute $f(4.1, 0.8)$ using readily available technology. Rather, it serves to illustrate how well this method of approximation works, and to reinforce the following concept:

New position = old position + amount of change,
New position $\approx$ old position + approximate amount of change.

In the previous example, we could easily compute $f(4, \pi/4)$ and could approximate the change in $z$ when computing $f(4.1, 0.8)$, letting us approximate the new value for $z$.

It may be surprising to learn that it is not uncommon to know the values of $f_x$ and $f_y$ at a particular point without actually knowing a formula for $f$. The total differential gives a good method of approximating $f$ by looking at nearby points.
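For instance (with made-up measurements, purely for illustration), suppose all we know is that $f(2, 3) = 6$, $f_x(2,3) = 1.5$, and $f_y(2,3) = -0.5$. Then
$$f(2.1, 2.95) \approx f(2,3) + f_x(2,3)(0.1) + f_y(2,3)(-0.05) = 6 + 0.15 + 0.025 = 6.175.$$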

Error analysis

The total differential gives an approximation of the change in $z$ given small changes in $x$ and $y$. We can use this to approximate error propagation; that is, if the input is a little off from what it should be, how far from correct will the output be? We demonstrate this in an example.
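The example referred to here is not reproduced above; as an illustration with dimensions of our own choosing, consider a cylindrical tank of radius $r = 4$ ft and height $h = 10$ ft, so $V = \pi r^2 h$. If the measurements of $r$ and $h$ are each off by at most $0.1$ ft, then
$$dV = 2\pi r h\,dr + \pi r^2\,dh = 80\pi\,dr + 16\pi\,dh,$$
so the computed volume could be off by as much as $80\pi(0.1) + 16\pi(0.1) = 9.6\pi \approx 30.2$ ft$^3$. Note that the error contributed by the radius measurement is five times that contributed by the height measurement.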

The previous example showed that the volume of a particular tank was more sensitive to changes in radius than in height. Keep in mind that this analysis only applies to a tank of the dimensions given in the problem. A tank whose radius is more than twice its height, for instance, would be more sensitive to changes in height than in radius.
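To see where that cutoff comes from (assuming, as above, a cylindrical tank): since $V = \pi r^2 h$,
$$dV = \underbrace{2\pi r h}_{\text{sensitivity to } r}\,dr + \underbrace{\pi r^2}_{\text{sensitivity to } h}\,dh,$$
and the height term dominates exactly when $\pi r^2 > 2\pi r h$, that is, when $r > 2h$.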

One could make a chart of small changes in radius and height and find exact changes in volume given specific changes. While this provides exact numbers, it does not give as much insight as the error analysis using the total differential.