Deadly Boring Math
The Mostest Bestestest FOSS Math Journal on the Internet[citation needed]

Multivariable Exam 3 Review: Thomas 16.3

By Tyler Clarke in Calculus on 2025-4-4

Hello once again! Today we're covering a very exciting topic: path independence. The basic idea is that a line integral in a vector field is path independent if every path between the same two endpoints gives the same result. Not a complex idea, but quite difficult to express in math! A vector field whose line integrals are all path independent is said to be conservative. Conservative fields have a useful consequence: we don't have to consider the path at all! Given a conservative field `F`, all we need is a potential function `f` where `grad f = F`, and the integral from point A to point B is `f(B) - f(A)` regardless of the path taken.
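To see path independence in action numerically, here's a quick sketch (my own, not from the Thomas text): we integrate `F = [y, x]` along two different paths from `(0, 0)` to `(1, 2)` with a simple Riemann sum, and both results should match `f(B) - f(A)` for the potential `f(x, y) = xy`. The specific paths are my arbitrary choices.

```python
def line_integral(Fx, Fy, path, n=20000):
    """Approximate the line integral of F = [Fx, Fy] along a parametrized
    path r(t), t in [0, 1], using a midpoint Riemann sum."""
    total = 0.0
    dt = 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = path(t)
        # numerical tangent vector dr/dt via central differences
        x2, y2 = path(t + 1e-7)
        x1, y1 = path(t - 1e-7)
        dx, dy = (x2 - x1) / 2e-7, (y2 - y1) / 2e-7
        total += (Fx(x, y) * dx + Fy(x, y) * dy) * dt
    return total

Fx = lambda x, y: y   # F = grad f = [y, x]
Fy = lambda x, y: x
f = lambda x, y: x * y

straight = lambda t: (t, 2 * t)            # straight line from (0,0) to (1,2)
curved   = lambda t: (t ** 2, 2 * t ** 3)  # a different path, same endpoints

I1 = line_integral(Fx, Fy, straight)
I2 = line_integral(Fx, Fy, curved)
print(I1, I2, f(1, 2) - f(0, 0))  # all three should be close to 2
```

Both integrals agree with each other and with the potential difference, which is exactly what path independence promises.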

Finding the potential function is a pretty complex operation. Let's start by doing it in 2D. Suppose we have a gradient `grad f(x, y) = [y, x]` (a convenient easy one for demonstrating the technique, as the answer is quite obviously `f(x, y) = xy`); we're looking for a function whose partial derivatives are `frac {df} {dx} = y` and `frac {df} {dy} = x`. There's already a known way to reverse a derivative: the antiderivative. The tricky part is that, when integrating a partial derivative like this, our constant `C` is actually a function of the left-out variable. For instance, the antiderivative `int y dx` is in this case `xy + C(y)`.

Now comes a bit of trickery. Because `xy + C(y)` for some unknown function C must be equal to `f(x, y)`, its derivative with respect to *y* must be equal to `frac {df}{dy}`. The derivative `frac d {dy} (xy + C(y))` is obviously `x + C'(y)`. Let's set up an equality: `x + C'(y) = frac {df} {dy} = x`. Ah-ha! Subtracting `x` from both sides gives `C'(y) = 0`, so `C(y)` is some constant. Potential functions are only unique up to an added constant anyway, so we may as well pick `C(y) = 0`: given the previous equation `f(x, y) = xy + C(y)`, we get `f(x, y) = xy`.
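The procedure above can be sketched symbolically. This is my own translation of the steps into SymPy (a tool choice of mine, not something the article uses): integrate the x-component with respect to x, then subtract the y-derivative of that result from the given y-component to recover `C'(y)`.

```python
# Symbolic sketch of the 2D potential-recovery procedure using SymPy.
import sympy as sp

x, y = sp.symbols('x y')
P, Q = y, x                        # the given gradient components [P, Q] = [y, x]

f_partial = sp.integrate(P, x)     # int y dx = x*y, still missing an unknown C(y)
Cprime = sp.simplify(Q - sp.diff(f_partial, y))  # C'(y) = Q - d/dy(x*y) = 0
f_final = f_partial + sp.integrate(Cprime, y)    # choose the constant to be 0
print(f_final)  # x*y
```

Swapping in other gradient components `P` and `Q` runs the same procedure on a different field, as long as that field actually is a gradient.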

This is a confusing, crazy, and magical way to reverse the gradient function, and it's a big part of why I'm enjoying this chapter so much.

Let's do it in three dimensions. Say we have a gradient `grad f = [y + z, x, x - 2z]` (derived from the function `f = xy - z^2 + xz`). Step one is to find `int (y + z) dx`, which works out to `f(x, y, z) = xy + xz + H(y, z)`. The partial derivative of this with respect to `y` is `frac {df} {dy} = x + H_y(y, z)`, and we already know `frac {df} {dy} = x`, so we have `x + H_y(y, z) = x`, meaning `H_y(y, z) = 0`.

The process is much the same for z. `frac {df} {dz} = x + H_z(y, z) = x - 2z`, meaning `H_z(y, z) = -2z`. We still don't have a value for H itself, though, just its two partial derivatives.

This is where the process recurses.

We know from these results that `grad H(y, z) = [0, -2z]`. We can now use exactly the same process as in the 2D version to get `H(y, z)`, except that in this case we have the shortcut that the answer is very obviously `H(y, z) = -z^2`. These are much faster to do if you can recognize some simple forms that produce a given gradient and swap them in immediately; this saves a whole bunch of effort. We've known for a while now that `f(x, y, z) = xy + xz + H(y, z)`, and thus `f(x, y, z) = xy + xz - z^2`. This is correct!
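As a sanity check on the 3D result (my own numerical sketch, with a sample point I picked arbitrarily), a central-difference gradient of the recovered potential `f(x, y, z) = xy + xz - z^2` should reproduce `[y + z, x, x - 2z]`:

```python
def gradient(f, p, h=1e-6):
    """Central-difference approximation of the gradient of f at p = (x, y, z)."""
    g = []
    for i in range(3):
        step = [h if j == i else 0.0 for j in range(3)]
        plus  = [p[j] + step[j] for j in range(3)]
        minus = [p[j] - step[j] for j in range(3)]
        g.append((f(*plus) - f(*minus)) / (2 * h))
    return g

f = lambda x, y, z: x * y + x * z - z ** 2
x, y, z = 1.0, 2.0, 3.0
g = gradient(f, (x, y, z))
print(g)  # should be close to [y + z, x, x - 2z] = [5, 1, -5]
```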

An interesting property I won't get into proving (there's a proof in the Thomas book, very much worth reading!) is that gradient fields are conservative. Hence, if it's possible to construct `f(x, y, z)` from the vector field function, the field is conservative! That means that, as long as you have your `f(x, y, z)`, you don't have to worry about the actual path you're dealing with, just the value of the potential function at the endpoints.
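Putting it all together with the 3D example (again a numerical sketch of mine, with an arbitrary path I chose): integrating `F = [y + z, x, x - 2z]` along any path from `(0, 0, 0)` to `(1, 1, 1)` should give `f(1, 1, 1) - f(0, 0, 0)` for the potential `f(x, y, z) = xy + xz - z^2`.

```python
def line_integral_3d(F, path, n=20000):
    """Midpoint Riemann sum for the line integral of F along r(t), t in [0, 1]."""
    total, dt = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt
        p = path(t)
        # numerical tangent dr/dt via central differences
        p2, p1 = path(t + 1e-7), path(t - 1e-7)
        dr = [(a - b) / 2e-7 for a, b in zip(p2, p1)]
        Fp = F(*p)
        total += sum(c * d for c, d in zip(Fp, dr)) * dt
    return total

F = lambda x, y, z: (y + z, x, x - 2 * z)
f = lambda x, y, z: x * y + x * z - z ** 2

twisty = lambda t: (t, t ** 2, t ** 3)   # an arbitrary path from (0,0,0) to (1,1,1)
I = line_integral_3d(F, twisty)
print(I, f(1, 1, 1) - f(0, 0, 0))        # both should be close to 1
```

No matter how wild a path you substitute for `twisty`, as long as the endpoints stay the same the integral should come out to the same potential difference.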