“A” Average Math2551

Spring 2025 Exam 1

These are my notes covering everything known to be on the first exam of multivariable calculus in spring 2025. I’ve published them in hopes that they’ll be useful to somebody else.

Disclaimer: this is not an official study resource and I am not a math teacher. If something I say here disagrees with something the professor said, I am wrong. These notes are intended to be useful as a review tool, not as a replacement for studying on your own.

This document roughly follows the Thomas calculus textbook across the sections known to be on the test (12.1 -> 14.2).

The exam is on February 4th during the normal lecture time for your section. Don’t be late!

3d Vectors

These are fairly simple, so I’m going to keep this section “short”. Omitted are the details of basic vector arithmetic; I recommend going through Khan Academy's vectors unit if you don’t already know how to add and subtract vectors and multiply them by scalars.

Oftentimes, 3d vectors are composed of multiples of î, ĵ, and k̂. î is the vector <1, 0, 0>, ĵ is the vector <0, 1, 0>, and k̂ is the vector <0, 0, 1>, so by adding multiples of them, you can represent any point in 3d cartesian space. It’s very easy to convert from i, j, k form to vector notation: for instance, 4i + 3j + 2k would necessarily be <4, 3, 2>, because it’s equivalent to <1, 0, 0> * 4 + <0, 1, 0> * 3 + <0, 0, 1> * 2 = <4, 0, 0> + <0, 3, 0> + <0, 0, 2>. This means you can just read off the values in most cases. There are some situations where you’ll get a vector component form like 6k + 2j, missing a term, in which case the missing term(s) are simply set to 0 (so 6k + 2j = <0, 2, 6>).

There are two basic operations on vectors that don’t exist on scalars: the dot product and the cross product. Dot products are fairly simple to compute; you just sum the products of the corresponding components to get a scalar. <1, 2, 3> . <3, 2, 1> = 1 * 3 + 2 * 2 + 3 * 1 = 10. Because the magnitude of a vector is sqrt(x * x + y * y + z * z) (by the Pythagorean theorem), the dot product of a vector with itself is the square of the magnitude of the vector. There’s a shorthand for taking the magnitude of a vector: magnitude of v = ||v||. ||v|| is just a compact way to write sqrt(v . v).
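The dot product and magnitude are mechanical enough to sanity-check in code. A quick sketch in plain Python (the helper names dot and magnitude are my own, not standard):

```python
import math

def dot(a, b):
    # Sum of the products of corresponding components.
    return sum(x * y for x, y in zip(a, b))

def magnitude(v):
    # ||v|| = sqrt(v . v)
    return math.sqrt(dot(v, v))

# The example from these notes: <1, 2, 3> . <3, 2, 1> = 10
print(dot([1, 2, 3], [3, 2, 1]))  # 10
# The dot product of a vector with itself is the squared magnitude.
print(magnitude([3, 4, 0]))       # 5.0
```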

Dot products have several useful properties:

- u . v = v . u (the dot product is commutative)
- u . (v + w) = u . v + u . w
- (c * u) . v = c * (u . v)
- v . v = ||v||^2
- u . v = ||u|| * ||v|| * cos(θ), where θ is the angle between u and v – so θ = arccos(u . v / (||u|| * ||v||))
- u . v = 0 exactly when u and v are orthogonal

Cross products are a bit trickier. The general idea is to find a vector that is orthogonal to two other vectors and whose magnitude is the area of the parallelogram the two describe. The magnitude of the cross product a x b is equal to ||a|| * ||b|| * sin(θ), and the direction is at a 90 degree angle to both vectors. The simplest way to calculate the cross product is with a bit of linear algebra: take the determinant of a matrix where the first row is [î, ĵ, k̂] and the next two rows are the vectors you’re taking the cross product of, and that’s your cross product! It’s a bit strange.

For example, to take the cross product of <2, 3, 1> and <5, 6, 7>, we’d first set up a matrix like:

| î  ĵ  k̂ |
| 2  3  1 |
| 5  6  7 |

then take the determinant of that matrix, which is î * (3 * 7 - 1 * 6) - ĵ * (2 * 7 - 1 * 5) + k̂ * (2 * 6 - 3 * 5) = <15, -9, -3>. <15, -9, -3> . <2, 3, 1> and <15, -9, -3> . <5, 6, 7> are indeed both 0, so <15, -9, -3> is orthogonal to both!
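The componentwise formula that the determinant expands to is easy to code up if you want to double-check a cross product by hand. A sketch in plain Python (helper names mine):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # Determinant expansion along the top [i, j, k] row.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

c = cross([2, 3, 1], [5, 6, 7])
print(c)  # [15, -9, -3]
# The result is orthogonal to both inputs:
print(dot(c, [2, 3, 1]), dot(c, [5, 6, 7]))  # 0 0
```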

Cross products have a set of useful properties too:

- u x v = -(v x u) (the cross product is anticommutative, not commutative!)
- u x (v + w) = u x v + u x w
- (c * u) x v = c * (u x v)
- u x u = <0, 0, 0> – parallel vectors have a zero cross product
- î x ĵ = k̂, ĵ x k̂ = î, k̂ x î = ĵ

Lines

In 3-dimensional space, line equations are a bit more complicated than in 2d. While in 2 dimensions you might have a line with the equation x + 2y = 3, x + 2y = 3 in 3d space is a plane – an infinite flat surface. As it turns out, you can’t produce a line with just a single equation in x, y, and z: the system of equations x + y = 0; z = 0 is a line, but there’s no way to convert both of those constraints into a single equation. Dealing with systems of equations is clunky, so the preferred way to think of lines in 3d space is as a point plus t times a vector, where t ranges over all real numbers. The representation of a 3d line is thus like so: r(t) = <0, 1, 0> + t * <2, 3, 0>, which you can plot in Desmos 3D to see the line it traces.

The vector equation form has many benefits over other methods of writing line equations, not least that you can very quickly determine a vector parallel to a line (it’s just the vector multiple of t!) and a plane perpendicular to it (we’ll get to that in a bit).

This sort of representation also means that it’s very easy to solve problems of the form “write an equation for a line passing through point p parallel to vector v”: the answer is just r(t) = p + tv!

You can also write the line as a system of equations in terms of t. This is as simple as reading off the vector values. For instance, the system of parametric equations of <6, 4, 1> + <2, 3, 5>t is x = 6 + 2t, y = 4 + 3t, z = 1 + 5t. With a bit of algebra, you can reduce this to two equations in x, y, and z (solve each for t to get the symmetric form (x - 6)/2 = (y - 4)/3 = (z - 1)/5).

Another common problem is finding a line that passes through two points. In this case, the operation is once again quite simple: given points p1, p2, the line that passes through both is r(t) = p1 + (p2 - p1)t.

Finding the nearest point v on a line in the form r(t) = p + ts to the point u is as simple as translating to the origin and projecting: v = p + ((u - p) . s / (s . s)) * s. Finding the distance can be done easily from there, and there’s even a simplified equation: distance = ||(u - p) x s|| / ||s|| (shamelessly copied from the Thomas calculus textbook).
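The projection recipe can be sketched in plain Python (helper names are mine; the line and point below are a made-up example, not from the textbook):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def nearest_point_on_line(p, s, u):
    # v = p + ((u - p) . s / (s . s)) * s  -- translate u by -p, project
    # onto the direction vector s, then translate back by p.
    d = [ui - pi for ui, pi in zip(u, p)]
    c = dot(d, s) / dot(s, s)
    return [pi + c * si for pi, si in zip(p, s)]

# Line through <0, 1, 0> along <1, 0, 0>; nearest point on it to <5, 3, 0>:
print(nearest_point_on_line([0, 1, 0], [1, 0, 0], [5, 3, 0]))
# [5.0, 1.0, 0.0]
```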

In this vector form, it’s actually slightly harder to find the point at which two lines intersect. As far as I can tell, the textbook doesn’t actually include this, but it’s a problem that’s come up in class and in the homework. My solution is to reduce both lines to a system of equations in s and t and solve it: set r1(t) = r2(s) and equate the components, giving three equations in the two unknowns. For instance, one such system might have the augmented matrix

| 1  1 |  2 |
| 1  1 |  2 |
| 1  2 |  0 |

which row-reduces to

| 1  0 |  4 |
| 0  0 |  0 |
| 0  1 | -2 |

i.e. a solution of 4 for one variable and -2 for the other (one row was redundant). Plugging the solved t back into its line equation gives the point of intersection.

Note that there are exceptional cases. Oftentimes, lines will “miss” each other, and the equations will not have a solution. There is another case where the direction vectors are scalar multiples of each other, meaning the lines themselves are parallel (there’s also no solution here, so it’s important to check the direction vectors to distinguish this from the miss case!); finally, it’s possible for the equations to have infinitely many solutions, in which case the lines are equal.
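The solve-and-verify approach, including the miss and parallel cases, can be sketched in plain Python – this is my own implementation of the idea, not anything from the textbook:

```python
def intersect_lines(p1, v1, p2, v2, eps=1e-9):
    # Solve p1 + t*v1 = p2 + s*v2 componentwise. Pick a pair of components
    # with a nonzero 2x2 determinant, solve for t and s by Cramer's rule,
    # then verify the remaining component equation.
    for i, j in ((0, 1), (0, 2), (1, 2)):
        det = v1[i] * (-v2[j]) - (-v2[i]) * v1[j]
        if abs(det) > eps:
            bi, bj = p2[i] - p1[i], p2[j] - p1[j]
            t = (bi * (-v2[j]) - (-v2[i]) * bj) / det
            s = (v1[i] * bj - bi * v1[j]) / det
            a = [p1[k] + t * v1[k] for k in range(3)]
            b = [p2[k] + s * v2[k] for k in range(3)]
            if all(abs(x - y) < eps for x, y in zip(a, b)):
                return a
            return None  # the lines "miss" each other (skew lines)
    return None  # direction vectors are parallel (or zero)

print(intersect_lines([0, 0, 0], [1, 0, 0], [1, -1, 0], [0, 1, 0]))
# [1.0, 0.0, 0.0]
```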


Planes

Like 3d lines, plane equations can get messy fast. Fortunately, there is a vector way! Given a point p that the plane passes through and a perpendicular (normal) vector v describing its tilt, the equation for a plane is simply v . <x, y, z> = v . p. For instance, a plane orthogonal to <2, 3, 4> that passes through the point <0, 0, 4> is 2x + 3y + 4z = 16. Desmos 3d is a good way to graphically check that this is indeed the correct plane equation.

This also means that any plane in the form Ax + By + Cz = D is orthogonal to <A, B, C>. The vector equation for a line perpendicular to said plane would be r(t) = p + <A, B, C>t – and if you have a line r(t) = p + vt, a plane perpendicular to it can be immediately found to be v . <x, y, z> = 0 (that particular plane passes through the origin; use v . <x, y, z> = v . q for a perpendicular plane through a point q)!

Finding a plane that passes through several points is also fairly simple. To find, say, the plane through p1 = <2, 3, 1>, p2 = <0, 1, 5>, and p3 = <3, 3, 3>, we take two vectors along the plane, p2 - p1 = <-2, -2, 4> and p3 - p1 = <1, 0, 2>, and cross them to get a normal vector: <-2, -2, 4> x <1, 0, 2> = <-4, 8, 2>. The plane is then <-4, 8, 2> . <x, y, z> = <-4, 8, 2> . <2, 3, 1>, i.e. -4x + 8y + 2z = 18 (or, simplified, -2x + 4y + z = 9).
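The three-point recipe is easy to automate. A plain-Python sketch (helper names mine), using the same three points:

```python
def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def plane_through(p1, p2, p3):
    # Normal n = (p2 - p1) x (p3 - p1); the plane is n . <x, y, z> = n . p1.
    n = cross(sub(p2, p1), sub(p3, p1))
    return n, dot(n, p1)

n, d = plane_through([2, 3, 1], [0, 1, 5], [3, 3, 3])
print(n, d)  # [-4, 8, 2] 18
```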

Plane intersections are fairly easy to solve as systems of equations using some linear algebra, similar to the technique I described in the section on lines, but there’s a much less awful vector way to do it! We simply have to find a vector v parallel to the line of intersection of two planes and a point p common to them, and the line is r(t) = p + vt. For example, with the planes 3x - 6y - 2z = 15 and 2x + y - 2z = 5 (shamelessly copied from the Thomas textbook), the orthogonal vectors are <3, -6, -2> and <2, 1, -2>. The cross product of them gives us <14, 2, 15>, which is orthogonal to both normals and therefore parallel to both planes – and the only direction parallel to both planes is the direction of their line of intersection.

Now we solve the intersection with the plane z = 0, the result of which is a single point described by the system of equations 3x - 6y = 15, 2x + y = 5. This yields the point <3, -1, 0>, so the line of intersection will be r(t) = <3, -1, 0> + <14, 2, 15>t. Not too difficult! Note that the third plane you intersect with doesn’t matter: z = 0 is a simple, good choice, but you can intersect with any plane that isn’t parallel to the line. The goal of intersecting with a third plane is just to isolate a single point on the line of intersection without actually solving for the line itself. You can use good ol’ algebra (or even row-reduction) to solve the problem, probably even faster, but then you have a gross system of equations instead of a nice vector-equation line – ultimately, doing things the vector way pays off in pain mitigation.
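The whole cross-product-plus-z=0 trick fits in a few lines of plain Python (my own sketch; it assumes the z = 0 shortcut works, i.e. the line of intersection isn’t parallel to the z = 0 plane). Using the Thomas example planes:

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def plane_intersection(n1, d1, n2, d2):
    # Direction of the line = n1 x n2. For a point, intersect with z = 0
    # and solve the remaining 2x2 system by Cramer's rule.
    v = cross(n1, n2)
    det = n1[0] * n2[1] - n1[1] * n2[0]
    x = (d1 * n2[1] - n1[1] * d2) / det
    y = (n1[0] * d2 - d1 * n2[0]) / det
    return [x, y, 0.0], v

p, v = plane_intersection([3, -6, -2], 15, [2, 1, -2], 5)
print(p, v)  # [3.0, -1.0, 0.0] [14, 2, 15]
```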

The distance from a point u to a plane n . <x, y, z> = d can be found by following this process:

1. Pick any point q on the plane (any solution of the plane equation works).
2. Form the vector u - q from the plane to the point.
3. Project u - q onto the normal n; the distance is the magnitude of that projection, |n . (u - q)| / ||n|| (which simplifies to |n . u - d| / ||n||).

To instead find the nearest point on the plane, project u - q onto the plane instead (subtract the normal component from u - q) and add the reference point q back to the projection.
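Both the distance and the nearest point fall out of the same normal-direction calculation. A plain-Python sketch (helper names mine):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def point_to_plane(n, d, u):
    # For the plane n . <x, y, z> = d: (n . u - d) / (n . n) is the signed
    # multiple of n separating u from the plane. Subtracting that multiple
    # of n lands on the nearest point; the distance is |n . u - d| / ||n||.
    k = (dot(n, u) - d) / dot(n, n)
    nearest = [ui - k * ni for ui, ni in zip(u, n)]
    dist = abs(dot(n, u) - d) / math.sqrt(dot(n, n))
    return dist, nearest

# Plane z = 0, point <1, 2, 5>: distance 5, nearest point <1, 2, 0>.
print(point_to_plane([0, 0, 1], 0, [1, 2, 5]))  # (5.0, [1.0, 2.0, 0.0])
```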

The angle between two planes is simply the angle between their normal vectors. See the section on 3d vectors for how to find that using the dot product and inverse trigonometry.

Vector Functions

Vector functions are an extension of the idea of the vector equation of a line. Instead of strictly conforming to the r(t) = p + vt format, they are any function that outputs a vector. Generally this means each component will be a function of t. For instance, r(t) = <sin(t), cos(t), t^2> is a vector-valued function.

A useful property of these is that, because component notation is interchangeable with vector notation, these can also be seen as functions comprised of the sum of multiples of î, ĵ, and k̂, which are constants (see the section on vectors): r(t) = sin(t) * i + cos(t) * j + t^2 * k is identical to the previous r(t). This makes some operations very straightforward. For instance, taking the antiderivative of <1/t, t, 1> is the same as taking the antiderivative of 1/t * i + t * j + 1 * k – ln(t) * i + t^2/2 * j + t * k. This can then be reassembled into <ln(t), t^2/2, t>. In fact, it’s not even necessary to expand the vector form: the integral of <f(t), g(t), h(t)> is always <the integral of f(t), the integral of g(t), the integral of h(t)>, so you can just integrate the components. The same goes for taking the derivative: the derivative of <f(t), g(t), h(t)> is <f’(t), g’(t), h’(t)>. Limits of vector-valued functions are also taken the same way.
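Componentwise differentiation is easy to sanity-check numerically. A plain-Python sketch (function names mine) comparing the componentwise derivative of <sin(t), cos(t), t^2> against a central finite difference:

```python
import math

def r(t):
    return [math.sin(t), math.cos(t), t ** 2]

def r_prime(t):
    # Differentiate each component: <cos(t), -sin(t), 2t>.
    return [math.cos(t), -math.sin(t), 2 * t]

# A central finite difference at t = 1.0 should agree closely.
h = 1e-6
approx = [(a - b) / (2 * h) for a, b in zip(r(1.0 + h), r(1.0 - h))]
print(all(abs(x - y) < 1e-6 for x, y in zip(approx, r_prime(1.0))))  # True
```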

If the derivative of a scalar function is the instantaneous slope, the derivative of a vector function is the instantaneous direction. This is a surprisingly useful concept. For instance, the tangent line to a vector-valued function f at t = s is r(t) = f(s) + f’(s)t – a point plus t times a vector, just like the lines from earlier.

Arc Length

The general formula for the length of the path described by a vector-valued function r(t) between a and b is the integral from a to b of ||r’(t)|| dt – the integral of the magnitude of the derivative of r(t). If r(t) is the sum of all of the infinitesimal instantaneous vectors in the curve preceding t (the integral of the derivative), then the length of r at t is the sum of all of the infinitesimal magnitudes preceding t. Rather than summing the instantaneous vectors of r(t), we’re summing the magnitudes of the instantaneous vectors of r(t).

For instance, the arc length of a simple line r(t) = p + vt between 0 and 10 can be found like so: ||r’(t)|| = ||v|| is constant, so the length is the integral from 0 to 10 of ||v|| dt = 10 * ||v||.

This can be quite complex to solve for some vector-valued functions. Fortunately, however, trigonometry makes some such problems very easy: for instance, the arc length of r(t) = <2cos(t), 2sin(t), 0> along [0, 2π]: r’(t) = <-2sin(t), 2cos(t), 0>, so ||r’(t)|| = sqrt(4sin²(t) + 4cos²(t)) = 2, and the length is the integral from 0 to 2π of 2 dt = 4π.

Simple!
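When the integral isn’t that friendly, you can always approximate it numerically. A plain-Python sketch of my own (a midpoint rule, not anything from the textbook), checking the 4π arc length of the circle r(t) = <2cos(t), 2sin(t), 0>:

```python
import math

def speed(t):
    # ||r'(t)|| for r(t) = <2cos(t), 2sin(t), 0>; here it's the constant 2.
    return math.hypot(-2 * math.sin(t), 2 * math.cos(t))

# Midpoint-rule approximation of the arc-length integral over [0, 2*pi].
n = 10000
a, b = 0.0, 2 * math.pi
length = sum(speed(a + (i + 0.5) * (b - a) / n) for i in range(n)) * (b - a) / n
print(abs(length - 4 * math.pi) < 1e-6)  # True
```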

Another use case of arc length is finding a parameterization for a curve given s rather than t – converting a vector-valued function over time to a vector-valued function that reports the position given how far it’s traveled: a function in terms of arc length. It’s a strange concept.

To do this for a function r(t), work out the arc length function s(t) like above, then invert that function to get a function for t(s), and finally substitute that function in for every occurrence of t. For example, for r(t) = <2cos(t), 2sin(t), 0> from above, s(t) = 2t, so t(s) = s/2 and the arc-length parameterization is r(s) = <2cos(s/2), 2sin(s/2), 0>.

This is a very simple operation, but also quite useful; it allows us to represent the function in terms of something concrete, the arc length, rather than some magical variable t.

Curvature

The curvature of a curve is defined as the rate of change of the normalized tangent vector of a curve – the sharpness with which it is turning. In general, the formula for finding the curvature κ of a function r(t) is

κ(t) = ||T’(t)|| / ||r’(t)||, where T(t) = r’(t) / ||r’(t)|| is the unit tangent vector.

Essentially, this means that κ(t) is a scalar value representing the magnitude of the instantaneous twisting over the curve. For example, for the function r(t) = <2cos(t), 2sin(t), 0> from the previous section, T(t) = <-sin(t), cos(t), 0>, so ||T’(t)|| = ||<-cos(t), -sin(t), 0>|| = 1 and ||r’(t)|| = 2, giving κ(t) = 1/2,

Meaning that the curvature for any given point on the curve is just ½. This is a very convenient fact about circles: for a circle with radius ρ, the curvature is just 1/ρ. This method works fine for finding the curvature of fairly simple functions, but there are better ways for more complex things (it seems to be a pattern in this course that there’s always a vector-operation way to do things better). The cross-product formula for curvature, for instance:

κ(t) = ||r’(t) x r’’(t)|| / ||r’(t)||³

(That’s a cube, not a square, in the denominator). This verifiably works on the previous function: r’(t) = <-2sin(t), 2cos(t), 0> and r’’(t) = <-2cos(t), -2sin(t), 0>, so r’(t) x r’’(t) = <0, 0, 4sin²(t) + 4cos²(t)> = <0, 0, 4>, and κ(t) = 4 / 2³ = 1/2.

That’s probably too much work for a very simple trig case, but for a more complicated function (one mixing, say, polynomials and trig), the canonical method will very quickly become unwieldy (I’ll leave showing this as an exercise to the reader). In this case, the cross product formulation is much simpler.

Somewhat disgusting and complicated, but so much easier than the conventional formulation. Taking the derivative of T(t) (to get T’(t)) is no joke when T(t) is already that complicated.

If you’re forced to use the conventional form, you can make it moderately simpler by arc-length parameterizing it first. It’s easier to calculate the curvature of an arc-length parameterized function because you can assume that ||r’(s)|| is always equal to 1, which means T(s) is just r’(s) and κ(s) is just ||r’’(s)||. It’s still more complicated than the cross-product formula, but not as bad.
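The cross-product formula is also the easiest one to turn into code. A plain-Python sketch (helper names mine), checked against the radius-2 circle, whose curvature should be 1/2:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def mag(v):
    return math.sqrt(sum(x * x for x in v))

def curvature(rp, rpp):
    # kappa = ||r' x r''|| / ||r'||^3, evaluated from r'(t) and r''(t).
    return mag(cross(rp, rpp)) / mag(rp) ** 3

# Circle of radius 2: r'(t) = <-2sin(t), 2cos(t), 0>,
#                     r''(t) = <-2cos(t), -2sin(t), 0>.
t = 0.7
rp = [-2 * math.sin(t), 2 * math.cos(t), 0.0]
rpp = [-2 * math.cos(t), -2 * math.sin(t), 0.0]
print(abs(curvature(rp, rpp) - 0.5) < 1e-12)  # True
```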

Aside: Normal Unit Vector

Related to curvature is the idea of a normal unit vector. In terms of the unit tangent T, it’s just N(t) = T’(t) / ||T’(t)|| – because the formula is identical in form to the formula for T in terms of the curve, you can actually think of N as the tangent function to the tangent function. It is always perpendicular to the curve and points in the direction of curvature. Unfortunately, I haven’t found a non-awful way to compute the normal vector.

These formulas are fairly simple and unsurprising, so I won’t devote much time to them. Essentially, they’re the same pieces as the standard formulation for the curvature in terms of T(t): whereas you find the curvature with ||T’(t)|| / ||r’(t)||, you find the unit normal with T’(t) / ||T’(t)||. It’s gross.

Multivariable Functions

Finally some multivariable math! It’s not calculus yet, but baby steps!

Multivariable functions will be very familiar to anyone with an understanding of computer science. Essentially, where a normal function is written in the form f(x), multivariable functions are written in the form f(x, y) (or f(x, y, z), and so on). It’s that simple! Multivariable functions can have any number of arguments and can be graphed rather like conventional functions. You’ve probably already used them for graphs in 3d space – z = x + y, for instance, is a fairly simple plane in space, and is shorthand for a multivariable function f(x, y) = x + y.

The domain of a multivariable function is quite simply the set of inputs for which it outputs a real number. For instance, f(x, y) = sqrt(x + y) is defined over x + y >= 0. The range is similarly the set of outputs that the function can return with real inputs: f(x, y) = x² + y² has a range of [0, ∞), because it can never output a negative value given real inputs.

In general, the graphs of multivariable functions in ℝ³ are analogous to single-variable functions in ℝ²: z = x² + y² + 2 is an upwards-opening rounded cone intersecting with the z-axis at z = 2 – a rotated parabola (a paraboloid).

We can also take the limits of multivariable functions. This is not as simple as taking the limit of a single-variable function, because we have to consider every direction of approach, rather than just right and left: a function f(x, y) has a limit at the origin from the positive x direction (y = 0, decreasing x), and from the negative y direction (x = 0, increasing y), etc, but also has limits from any given ratio of x to y. For the overall limit to exist, every direction of approach has to agree.
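To see direction-dependence concretely, here’s a plain-Python sketch using a classic example of my own choosing (not from these notes), f(x, y) = xy / (x² + y²), which approaches different values along different lines through the origin – so its limit at (0, 0) doesn’t exist:

```python
def f(x, y):
    return x * y / (x ** 2 + y ** 2)

# Approach the origin along y = x versus along the x-axis (y = 0).
along_diag = [f(t, t) for t in (0.1, 0.01, 0.001)]
along_axis = [f(t, 0.0) for t in (0.1, 0.01, 0.001)]
print(along_diag)  # [0.5, 0.5, 0.5]
print(along_axis)  # [0.0, 0.0, 0.0]
```

Since the diagonal approach heads toward ½ while the axis approach heads toward 0, no single value works as the limit at the origin.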