$\newenvironment {prompt}{}{} \newcommand {\ungraded }{} \newcommand {\todo }{} \newcommand {\oiint }{{\large \bigcirc }\kern -1.56em\iint } \newcommand {\mooculus }{\textsf {\textbf {MOOC}\textnormal {\textsf {ULUS}}}} \newcommand {\npnoround }{\nprounddigits {-1}} \newcommand {\npnoroundexp }{\nproundexpdigits {-1}} \newcommand {\npunitcommand }{\ensuremath {\mathrm {#1}}} \newcommand {\RR }{\mathbb R} \newcommand {\R }{\mathbb R} \newcommand {\N }{\mathbb N} \newcommand {\Z }{\mathbb Z} \newcommand {\sagemath }{\textsf {SageMath}} \newcommand {\d }{\mathop {}\!d} \newcommand {\l }{\ell } \newcommand {\ddx }{\frac {d}{\d x}} \newcommand {\zeroOverZero }{\ensuremath {\boldsymbol {\tfrac {0}{0}}}} \newcommand {\inftyOverInfty }{\ensuremath {\boldsymbol {\tfrac {\infty }{\infty }}}} \newcommand {\zeroOverInfty }{\ensuremath {\boldsymbol {\tfrac {0}{\infty }}}} \newcommand {\zeroTimesInfty }{\ensuremath {\small \boldsymbol {0\cdot \infty }}} \newcommand {\inftyMinusInfty }{\ensuremath {\small \boldsymbol {\infty -\infty }}} \newcommand {\oneToInfty }{\ensuremath {\boldsymbol {1^\infty }}} \newcommand {\zeroToZero }{\ensuremath {\boldsymbol {0^0}}} \newcommand {\inftyToZero }{\ensuremath {\boldsymbol {\infty ^0}}} \newcommand {\numOverZero }{\ensuremath {\boldsymbol {\tfrac {\#}{0}}}} \newcommand {\dfn }{\textbf } \newcommand {\unit }{\mathop {}\!\mathrm } \newcommand {\eval }{\bigg [ #1 \bigg ]} \newcommand {\seq }{\left ( #1 \right )} \newcommand {\epsilon }{\varepsilon } \newcommand {\phi }{\varphi } \newcommand {\iff }{\Leftrightarrow } \DeclareMathOperator {\arccot }{arccot} \DeclareMathOperator {\arcsec }{arcsec} \DeclareMathOperator {\arccsc }{arccsc} \DeclareMathOperator {\si }{Si} \DeclareMathOperator {\scal }{scal} \DeclareMathOperator {\sign }{sign} \newcommand {\arrowvec }{{\overset {\rightharpoonup }{#1}}} \newcommand {\vec }{{\overset {\boldsymbol {\rightharpoonup }}{\mathbf {#1}}}\hspace {0in}} \newcommand {\point }{\left (#1\right )} \newcommand {\pt 
}{\mathbf {#1}} \newcommand {\Lim }{\lim _{\point {#1} \to \point {#2}}} \DeclareMathOperator {\proj }{\mathbf {proj}} \newcommand {\veci }{{\boldsymbol {\hat {\imath }}}} \newcommand {\vecj }{{\boldsymbol {\hat {\jmath }}}} \newcommand {\veck }{{\boldsymbol {\hat {k}}}} \newcommand {\vecl }{\vec {\boldsymbol {\l }}} \newcommand {\uvec }{\mathbf {\hat {#1}}} \newcommand {\utan }{\mathbf {\hat {t}}} \newcommand {\unormal }{\mathbf {\hat {n}}} \newcommand {\ubinormal }{\mathbf {\hat {b}}} \newcommand {\dotp }{\bullet } \newcommand {\cross }{\boldsymbol \times } \newcommand {\grad }{\boldsymbol \nabla } \newcommand {\divergence }{\grad \dotp } \newcommand {\curl }{\grad \cross } \newcommand {\lto }{\mathop {\longrightarrow \,}\limits } \newcommand {\bar }{\overline } \newcommand {\surfaceColor }{violet} \newcommand {\surfaceColorTwo }{redyellow} \newcommand {\sliceColor }{greenyellow} \newcommand {\vector }{\left \langle #1\right \rangle } \newcommand {\sectionOutcomes }{} \newcommand {\HyperFirstAtBeginDocument }{\AtBeginDocument }$

The dot product is an important operation between vectors that captures geometric information.

### The dot product

We have already seen how to add vectors and how to multiply vectors by scalars. As it turns out, there is no single canonical way to define “multiplication” of vectors; instead, there are several types of products of two vectors, each with its own intrinsic meaning. One such example is the dot product, which we motivate using the example below.

### Two definitions of the dot product

The above scenario illustrates a quantity that is fundamentally important in physics, but it is useful in other instances as well. We can extract the mathematical essence of the above example as follows.

Given two vectors $\vec {u}$ and $\vec {v}$, the quantity $|\vec {u}||\vec {v}|\cos (\theta )$, where $\theta$ is the angle between the vectors, is important.

Since this quantity is important, we dignify it with a definition.

Given the magnitudes of two vectors in $\R ^2$ and the angle between them, this quantity is straightforward to compute. However, we want to work with vectors in higher dimensions as well, so we would like a quick way to compute this quantity directly from the components of $\vec {u}$ and $\vec {v}$. Thankfully, there’s a good way to do this.

While this may seem intimidating at first, we usually have in mind that $n=2$ or $3$, and we can unpack the formula in these cases.

• In $\R ^2$, we have $\vec {u} \dotp \vec {v} = u_1v_1+u_2v_2$.
• In $\R ^3$, we have $\vec {u} \dotp \vec {v} = u_1v_1+u_2v_2+u_3v_3$.
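As an illustration (a sketch of our own, not part of the text), the component formula can be computed directly; the helper name `dot` is ours.

```python
def dot(u, v):
    """Componentwise dot product: u1*v1 + u2*v2 + ... + un*vn.

    Requires that u and v have the same dimension.
    """
    if len(u) != len(v):
        raise ValueError("dot product requires vectors of the same dimension")
    return sum(ui * vi for ui, vi in zip(u, v))

print(dot((1, 2), (3, 4)))         # 1*3 + 2*4 = 11
print(dot((1, 2, 3), (4, -1, 2)))  # 4 - 2 + 6 = 8
```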

Some texts start with the above theorem as the definition of the dot product, and show that our definition can be derived from it. What is really important is that we have two equivalent ways to express the dot product. Both can be useful, as we will see in many examples to follow.

It might (and likely should) be entirely unclear at this point why the above definition and theorem are consistent with each other. The appendix to this section establishes this in more detail, but here’s an example that demonstrates their equivalence in the context of a specific pair of vectors.
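As a numeric sanity check (vectors chosen by us for convenience), take $\vec {u} = \vector {3,0}$ and $\vec {v} = \vector {1,1}$, which meet at an angle of $\pi /4$: the geometric quantity $|\vec {u}||\vec {v}|\cos (\theta )$ and the component sum agree.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mag(u):
    return math.sqrt(dot(u, u))

u, v = (3.0, 0.0), (1.0, 1.0)
theta = math.pi / 4                            # angle between u and v

geometric = mag(u) * mag(v) * math.cos(theta)  # |u||v|cos(theta) = 3*sqrt(2)*(sqrt(2)/2)
componentwise = dot(u, v)                      # 3*1 + 0*1

print(geometric, componentwise)                # both equal 3 (up to rounding)
```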

### Observations and applications of the dot product

We have two different ways to find dot products. We can now make several observations about this type of product between vectors and explore some applications.

#### Nature of the dot product

While it may be easy to miss, notice that in both definitions, the dot product is defined between two vectors of the same dimension (this is most readily visible in the component description of the dot product, which requires that we pair each component in $\vec {u}$ with one in $\vec {v}$). In both definitions, when we compute the dot product, the result is a scalar.

The dot product is defined only for vectors of the same dimension. When it is defined, the result is a scalar.

Let $\vec {u},\vec {v},\vec {w}$ be nonzero vectors in $\R ^3$. Determine whether the following expressions are vectors, scalars, or undefined.
• $(\vec {w} \dotp \vec {u} ) \vec {u}$: vector, scalar, or undefined?
• $5(\vec {u} +\vec {w}) \dotp {\vec {u}}$: vector, scalar, or undefined?
• $\vec {w} / \vec {u}$: vector, scalar, or undefined?
• $\vector {2,3} \dotp \vector {4,2} + 7$: vector, scalar, or undefined?
• $\vector {1,3} \dotp \vector {-1,2,5}$: vector, scalar, or undefined?
• $\vec {w} / ( \vec {u} \dotp \vec {u})$: vector, scalar, or undefined?
• $\vec {u}\dotp \vec {v}+\vec {w}$: vector, scalar, or undefined?

#### Relation to Magnitude

The dot product allows us to write some complicated formulas more simply.

Compute the magnitude of the vector $\vec {v} = \vector {-1,0,2,5}$ using the above formula.
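The formula in question is $|\vec {v}| = \sqrt {\vec {v} \dotp \vec {v}}$, which a short sketch (our own) can verify for this vector.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

v = (-1, 0, 2, 5)
magnitude = math.sqrt(dot(v, v))   # sqrt(1 + 0 + 4 + 25) = sqrt(30)
print(magnitude)
```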

#### Angle between vectors

Note that defining an angle in full generality in $\R ^3$ (or higher dimensions) is problematic, but the angle between two specific vectors does make sense. For instance, we can imagine laying a protractor along two vectors in $\R ^3$, aligning its baseline with one of the vectors, and measuring the angle formed between them.

While actually figuring out how to compute this might seem daunting, the two definitions of the dot product allow us to find this angle without too much work.

One remark is in order: by convention, we take the angle between two vectors to be between $0$ and $\pi$, inclusive. Since the range of $\arccos$ is $[0,\pi ]$, the angle found above must be the correct angle.

Note that the logic in the above example can be generalized to produce a formula for the angle, which is listed below. Although there is a formula, the reasoning required to obtain it is contained in the previous example, so you should be able to reproduce that reasoning, not just memorize the formula.
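The reasoning amounts to solving $|\vec {u}||\vec {v}|\cos (\theta ) = \vec {u} \dotp \vec {v}$ for $\theta$, which a small sketch (our own helper names) makes concrete.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mag(u):
    return math.sqrt(dot(u, u))

def angle_between(u, v):
    """Angle in [0, pi] between nonzero vectors: arccos(u.v / (|u||v|))."""
    return math.acos(dot(u, v) / (mag(u) * mag(v)))

theta = angle_between((1, 0, 0), (1, 1, 0))
print(theta)   # approximately pi/4 for these vectors
```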

#### Orthogonality

Given two nonzero vectors in $\R ^2$ or $\R ^3$, we say that the vectors are perpendicular if the angle between them is a right angle. Since we can use the dot product to capture information about the angle between vectors, it should not be surprising that it can be used to find perpendicular vectors.

This allows us to define and generalize our notion of “perpendicularity” when it is more difficult to visualize, and we introduce a special buzz-word to do so.

A subtle point that could easily be overlooked here is that, given our definition, the zero vector in $\R ^n$ is orthogonal to every vector in $\R ^n$. While our notion of “perpendicularity” required us to think about the angle between vectors, that angle is not well defined when one of the vectors is the zero vector. However, both formulations of the dot product allow us to handle the zero vector.
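A tiny check (our own sketch) captures both points: orthogonality is just a zero dot product, and the zero vector passes the test against any vector.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def orthogonal(u, v):
    """Vectors are orthogonal exactly when their dot product is zero."""
    return dot(u, v) == 0

print(orthogonal((1, 2, 3), (3, 0, -1)))   # 3 + 0 - 3 = 0, so True
print(orthogonal((0, 0, 0), (5, -2, 7)))   # zero vector: orthogonal to everything
```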

### Algebraic properties of the dot product

We summarize the arithmetic and algebraic properties of the dot product below.

The results above can all be established from the component formulation of the dot product. For instance, to show the symmetry property, note that if $\vec {v} = \vector {v_1,v_2}$ and $\vec {w} = \vector {w_1,w_2}$, we have $$\vec {v} \dotp \vec {w} = v_1w_1+v_2w_2 = w_1v_1+w_2v_2 = \vec {w} \dotp \vec {v}.$$

Note that the item “Notion of Orthogonality” might sound formal here, but if we are working in $\R ^3$, it amounts to a concrete statement about the standard unit vectors.

The condition $\uvec {e}_i \dotp \uvec {e}_j = \left \{ \begin {array}{ll} 0, & i \neq j \\ 1, & i=j \end {array} \right .$ for $i=1,2,3$ and $j=1,2,3$ tells us the following.

Note that this precisely agrees with our intuition that the unit vectors that are parallel to the $x$, $y$, and $z$ axes are orthogonal to each other.
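This intuition can be verified mechanically (a sketch of our own) by checking every pair of standard basis vectors in $\R ^3$.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Standard basis vectors of R^3, parallel to the x, y, and z axes.
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

for i in range(3):
    for j in range(3):
        expected = 1 if i == j else 0
        assert dot(e[i], e[j]) == expected

print("e_i . e_j equals 1 when i = j and 0 otherwise")
```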

As an interesting remark, instead of defining the dot product by a formula, we could have defined it by the properties above, and we could actually derive the formula from these! While this is common practice in mathematics, the process is a bit abstract and is better left as the subject of a more advanced course.