Deadly Boring Math
By Tyler Clarke in Calculus on 2025-7-27
It's finally here! The final exam! The big one! This exam is going to be cumulative, sadly, and so we have a lot to review. I'm going to do the normal thing, going through one or two questions from each worksheet; I highly recommend reading my previous materials for quizzes and tests as well, since the questions will probably be similar.
As there are 36 individual worksheets and I do need to preserve my sanity, I'm largely going to skip the "less-interesting" ones.
Lastly, this being the grand finale, it's your last chance this semester to wear a silly hat to a calculus exam! Anyone who wears a balloon hat gets a handful of rubber ducks! We might not have a chance at being the best class Prof. Mamillapalle has ever taught, but we can definitely be the most surreal.
Nothing before this point is really worth including. This problem is simple enough: we're given `ty' + 2y = sin(t)` and asked to solve for `y`. This is obviously a first-order linear equation: dividing the whole thing by `t` gives us `y' + frac 2 t y = frac {sin(t)} t`, so `p = frac 2 t` and `q = frac {sin(t)} t`. Remember how to solve this using the integrating factor method: let `mu = e^{int p(t) dt}`, so that `frac {d} {dt} (mu y) = mu q`. Solving for `y`, then: `y = frac { int mu q dt } { mu }`. In this case, `mu = e^{int frac 2 t dt} = e^{2ln(t)} = t^2` (the constant of integration just scales `mu`, so we can drop it), and so `y = frac {int t sin(t) dt} {t^2}`. Integrating by parts yields the not-entirely-horrible `y = frac {sin(t) - t cos(t) + C} {t^2}`.
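If you want to sanity-check answers like this while studying, sympy's `dsolve` will grind through the integrating factor for you. A quick sketch (not part of the worksheet):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')
ode = sp.Eq(t*y(t).diff(t) + 2*y(t), sp.sin(t))
# dsolve should return y(t) = (C1 + sin(t) - t*cos(t))/t**2,
# matching the hand solution up to the name of the constant.
print(sp.dsolve(ode, y(t)))
```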
Substitutions! We're given `frac {dy} {dx} = frac {x + 3y} {3x + y}`, and asked to solve. The best way to do this is with a substitution. If we let `y = vx`, and thus `frac {dy} {dx} = v + frac {dv} {dx} x`, we can rewrite as `v + frac {dv} {dx} x = frac {x + 3vx} {3x + vx}`. Factoring out the `x`s in the right hand side and doing some rearranging yields `frac {3 + v} {(1 + v)(1 - v)} dv = frac 1 x dx`. Decomposing to partial fractions yields `frac 1 {1 + v} + frac 2 {1 - v} dv = frac 1 x dx`; integrating gives us `ln|1 + v| - 2 ln|1 - v| = ln|x| + C`. Finally raise `e` to both sides to get `frac {1 + v} {(1 - v)^2} = Cx`.
Resubstituting `v = frac y x` and doing some algebra, we end up with `frac {x + y} {(x - y)^2} = C`. This can't really be reduced to an explicit solution, but implicit is Good Enough ™!
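We can check the implicit solution without ever making it explicit: differentiate `frac {x + y} {(x - y)^2}` along a solution curve (where it's constant), solve for `y'`, and compare against the original right-hand side. A sketch with sympy:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
F = (x + y)/(x - y)**2              # solution curves satisfy F = C
dFdx = sp.diff(F, x)                # total derivative along a curve is 0
yprime = sp.solve(sp.Eq(dFdx, 0), y.diff(x))[0]
print(sp.simplify(yprime - (x + 3*y)/(3*x + y)))   # -> 0
```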
We're given `frac {dy} {dt} - frac y t = - frac {y^2} {t^2}`. This is clearly an `n=2` Bernoulli differential equation, so we use the substitution `u = y^{1 - n} = frac 1 y`. `frac {du} {dt} = - frac 1 {y^2} frac {dy} {dt}`. We rearrange the original equation by dividing by `- y^2`, to get `- frac 1 {y^2} frac {dy} {dt} + frac 1 { t y } = frac 1 {t^2}`. Perhaps not entirely surprisingly, we substitute in `u` to get a much nicer equation: `frac {du} {dt} + frac 1 t u = frac 1 { t^2 }`. This is a first-order linear equation! We solve with the integrating factor method to get `u = frac {ln(t)} {t} + frac c t`, and resubstitute to get `y = frac t {c + ln(t)}`.
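sympy's `checkodesol` is handy for verifying a candidate solution against the original Bernoulli equation:

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t) - y(t)/t, -y(t)**2/t**2)
sol = sp.Eq(y(t), t/(c + sp.log(t)))
print(sp.checkodesol(ode, sol))   # -> (True, 0)
```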
Given `x' = [[1, 1], [4, -2]] x`, solve for `x`. This one isn't too hard: we just use the old eigenvalue/eigenvector method! Our eigenvalues are `lambda_1 = -3, lambda_2 = 2`, so our eigenvectors are `v_1 = [1, -4], v_2 = [1, 1]`. Thus we just plug in: `x = C_1 e^{-3t} [1, -4] + C_2 e^{2t} [1, 1]`.
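The eigenvalue grinding is mechanical enough that sympy can confirm it (eigenvectors come back scaled however sympy likes, so expect multiples of ours):

```python
import sympy as sp

A = sp.Matrix([[1, 1], [4, -2]])
# Each entry is (eigenvalue, algebraic multiplicity, basis vectors).
for lam, mult, vecs in A.eigenvects():
    print(lam, mult, [list(v) for v in vecs])
# lambda = -3 gives a multiple of [1, -4]; lambda = 2 gives [1, 1].
```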
In the case of `x' = [[1, 2], [-5, 1]]x`, we end up with complex eigenvalues `1 +- sqrt(10) i` and corresponding eigenvectors `[+- sqrt(10) i, -5]`. Taking `e^{(1 + sqrt(10) i)t} [sqrt(10) i, -5]` and splitting it into real and imaginary parts gives two real solutions, so the general solution is `x = e^t (C_1 [-sqrt(10) sin(sqrt(10)t), -5 cos(sqrt(10)t)] + C_2 [sqrt(10) cos(sqrt(10)t), -5 sin(sqrt(10)t)])`.
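Since it's easy to drop a sign when splitting complex solutions into real ones, here's a quick sympy check that both pieces actually satisfy `x' = Ax`:

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[1, 2], [-5, 1]])
w = sp.sqrt(10)
x1 = sp.exp(t)*sp.Matrix([-w*sp.sin(w*t), -5*sp.cos(w*t)])  # real part
x2 = sp.exp(t)*sp.Matrix([ w*sp.cos(w*t), -5*sp.sin(w*t)])  # imaginary part
for x in (x1, x2):
    print(sp.simplify(x.diff(t) - A*x))   # -> Matrix([[0], [0]]) twice
```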
`x' = [[3, -4], [1, -1]] x`. This is a repeated eigenvalue case with `lambda = 1, v = [2, 1]`. The naive general form would be `x = C_1 e^t [2, 1] + C_2 t e^t [2, 1]`, but that's not quite complete - we also need a generalized eigenvector `w`, found by solving `[[3 - lambda, -4], [1, -1 - lambda]] w = [2, 1]`. One option is `w = [1, 0]`. We insert this into the second part of the solution: `x = C_1 e^t [2, 1] + C_2 e^t ([2t, t] + [1, 0])`.
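Finding the generalized eigenvector is just solving a (singular but consistent) linear system, which sympy's `gauss_jordan_solve` handles, free parameter and all:

```python
import sympy as sp

A = sp.Matrix([[3, -4], [1, -1]])
v = sp.Matrix([2, 1])
# Solve (A - lambda*I) w = v with lambda = 1 for the generalized eigenvector w.
w, params = (A - 1*sp.eye(2)).gauss_jordan_solve(v)
print(w)   # -> [2*tau0 + 1, tau0]; picking tau0 = 0 gives w = [1, 0]
```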
Solve the second order linear differential equation (scramble around for some acronym to make that name shorter) `2y'' - 5y' - 3y = 0`. I can't remember if there was a "right" way to do this back at 4.3, but the associated polynomial method works nicely: `2r^2 - 5r - 3 = 0`. We get real and distinct roots `3, - frac 1 2`, so the simplest general form works: `y = C_1 e^{3t} + C_2 e^{- frac 1 2 t}`.
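A one-liner to double-check the roots, since botching the quadratic formula here costs the whole problem:

```python
import sympy as sp

r = sp.symbols('r')
print(sp.solve(2*r**2 - 5*r - 3, r))   # -> [-1/2, 3]
```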
We're given `y'' - 2y' - 3y = 3e^{2t}` and asked to solve for `y`. This is clearly an undetermined coefficients (UC) problem. We start by finding the complementary solution: `y_c'' - 2y_c' - 3y_c = 0`, so via the associated polynomial method, `y_c = C_1 e^{-t} + C_2 e^{3t}`. Because the RHS is `3e^{2t}`, we can guess `y_p = Ae^{2t}`, which substitutes to give us `4Ae^{2t} - 4Ae^{2t} - 3Ae^{2t} = 3e^{2t}`, so `A = -1`. Hence: our final solution is `y = y_c + y_p = C_1 e^{-t} + C_2 e^{3t} - e^{2t}`.
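Again, `checkodesol` makes sure the UC guess was substituted correctly:

```python
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) - 2*y(t).diff(t) - 3*y(t), 3*sp.exp(2*t))
sol = sp.Eq(y(t), C1*sp.exp(-t) + C2*sp.exp(3*t) - sp.exp(2*t))
print(sp.checkodesol(ode, sol))   # -> (True, 0)
```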
We're given `x' = [[-1, -1], [0, 1]] x + [18, 3t]`. This is variation of parameters: recall that `x = A u` where `A u' = g`, `g` is the nonhomogeneous term, and `A` is the fundamental solution matrix. We need to find the fundamental solution matrix. This is actually not very hard: we just use the eigenvalue method, getting the complementary solution `x_c = C_1 e^t [1, -2] + C_2 e^{-t} [1, 0]`, so `A = [[e^t, e^{-t}], [-2 e^t, 0]]`. To find `u'`, we solve with Gaussian elimination: the augmented matrix is `[[e^t, e^{-t} | 18], [-2e^t, 0 | 3t]]`, so `u' = [-frac 3 2 t e^{-t}, frac 3 2 t e^t + 18 e^t]`. Integrating (and assuming 0 constants) gives us `u = [frac 3 2 e^{-t} (t + 1), frac 3 2 e^t (t - 1) + 18 e^t]`. Finally: because `x = A u`, we matrix multiply to get the particular solution `[3t + 18, -3t - 3]`. Tack the complementary solution back on for the full answer: `x = C_1 e^t [1, -2] + C_2 e^{-t} [1, 0] + [3t + 18, -3t - 3]`. Nice!
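The whole variation-of-parameters pipeline is mechanical, so it's a nice one to script while studying. A sketch that reproduces the computation above:

```python
import sympy as sp

t = sp.symbols('t', real=True)
# Fundamental solution matrix, read off from the complementary solution.
Psi = sp.Matrix([[sp.exp(t), sp.exp(-t)],
                 [-2*sp.exp(t), 0]])
g = sp.Matrix([18, 3*t])
u_prime = Psi.solve(g)                                # solve Psi * u' = g
u = u_prime.applyfunc(lambda e: sp.integrate(e, t))   # zero constants
print(sp.simplify(Psi*u))   # -> [3*t + 18, -3*t - 3]
```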
I skipped all of Laplace. There will be a Laplace formula sheet on the exam.
This problem asks us to turn `t(t - 1)y'''' + e^t y'' + 4t^2 y = 0` into a first-order system. This actually isn't too hard. The trick is to pick a reasonable `x`: `x = [y, y', y'', y''']`. We also need to find `y''''`: we do this by rearranging, to get `y'''' = - frac {e^t} {t(t - 1)} y'' - frac {4t^2} {t(t - 1)} y`. Finally a matrix can be filled, giving us an equation in the form `x' = A x` (in this case, `x' = [y', y'', y''', y'''']`): `x' = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [ - frac {4t^2} {t(t - 1)}, 0, - frac {e^t} {t(t - 1)}, 0]] x`. This is actually the whole thing!
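This conversion is exactly what numeric solvers want. As a sketch, here's the same system fed to scipy's `solve_ivp`, with made-up initial conditions on an interval that dodges the singularities at `t = 0` and `t = 1`:

```python
import numpy as np
from scipy.integrate import solve_ivp

# x = [y, y', y'', y'''], so x' = A(t) x with the matrix above.
def rhs(t, x):
    A = np.array([
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
        [-4*t**2/(t*(t - 1)), 0, -np.exp(t)/(t*(t - 1)), 0],
    ])
    return A @ x

# Hypothetical initial conditions y(2) = 1, y'(2) = y''(2) = y'''(2) = 0.
sol = solve_ivp(rhs, (2, 3), [1.0, 0.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])   # numeric estimate of y(3)
```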
We're given the system `X' = [[1, -2, 2], [-2, 1, -2], [2, -2, 1]] X` and asked to solve for the general solution. This being a first-order linear homogeneous system, we just use the eigenvalue/eigenvector method: we end up with `lambda_1 = 5, v_1 = [1, -1, 1]`, `lambda_2 = -1, v_2 = [-1, 0, 1]`, and `lambda_3 = -1, v_3 = [1, 1, 0]`. Note that `lambda = -1` is repeated, but it comes with two independent eigenvectors, so no generalized eigenvectors are needed. The solution, thus, is `X = C_1 e^{5t} [1, -1, 1] + C_2 e^{-t} [-1, 0, 1] + C_3 e^{-t} [1, 1, 0]`. This is also pretty easy.
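One sympy check, mostly to show that `lambda = -1` really does come back with multiplicity 2 and two independent eigenvectors:

```python
import sympy as sp

A = sp.Matrix([[1, -2, 2], [-2, 1, -2], [2, -2, 1]])
for lam, mult, vecs in A.eigenvects():
    print(lam, mult, [list(v) for v in vecs])
# lambda = -1 has two basis vectors, so no generalized eigenvectors needed.
```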
Find the critical points of `x' = 1 + 5y, y' = 1 - 6x^2`. This is relatively easy: `5y = -1, 6x^2 = 1` trivially gives us `y = - frac 1 5, x = +- sqrt(frac 1 6)`. These are in fact the only two possible critical points.
Find the critical points of `x' = x - y^2, y' = x - 2y + x^2`. Setting both to zero gives us `x = y^2` and `2y = x(1 + x)`. There's obviously a critical point at `(0, 0)`. For the rest: if we take `y = frac {x(1 + x)} 2` and substitute into `x = y^2`, we get `x = frac {x^2(1 + x)^2} 4`; dividing out the `x = 0` root leaves `x^3 + 2x^2 + x - 4 = 0`, which factors as `(x - 1)(x^2 + 3x + 4) = 0`. The quadratic has no real roots, so the only other real solution is `x = 1`, for which `y = 1`: thus, we have two critical points, `(0, 0)` and `(1, 1)`. Let's also find the linear system at those points. Recall that the linear system is written in terms of the partial derivatives at the point: if `x' = F(x, y)` and `y' = G(x, y)`, at some point `a`, `x' = [[F_x(a), F_y(a)], [G_x(a), G_y(a)]] x`. The Jacobian matrix in this case is `[[1, -2y], [1 + 2x, -2]]`, and so at `(0, 0)` we have `x' = [[1, 0], [1, -2]]x` and at `(1, 1)` we have `x' = [[1, -2], [3, -2]]x`.
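sympy can do the whole critical-point-plus-Jacobian routine in a few lines, which is a decent way to check your partial derivatives:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x - y**2
G = x - 2*y + x**2
J = sp.Matrix([F, G]).jacobian([x, y])
for px, py in sp.solve([F, G], [x, y]):
    if px.is_real and py.is_real:   # drop the complex pair
        print((px, py), J.subs({x: px, y: py}))
# -> (0, 0) with [[1, 0], [1, -2]] and (1, 1) with [[1, -2], [3, -2]]
```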
Hooray. Improved Euler.
I'm only going to do the first step of this because, frankly, I really hate using Euler. As we've talked about before, the Improved Euler Formula gives us `y_{n + 1} = y_n + h frac { f(t_n, y_n) + f(t_{n + 1}, y_n + h f(t_n, y_n))} 2`. Our step size is `h = 0.05`, and `y' = f(t, y) = 2y - 3t, y(0) = 1`.
We already know that `y_1 = 1, t_1 = 0`: plugging those in gives us `f(t_1, y_1) = 2`, so we can substitute known values into the equation to get `y_2 = 1 + 0.05 frac { 2 + f(t_2, 1.1)} 2`. With `t_2 = 0.05`, `f(t_2, 1.1) = 2(1.1) - 3(0.05) = 2.05`, so `y_2 = 1 + 0.05 frac { 4.05 } 2 = 1.10125`. Tedious, but not difficult.
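For checking tedious iterations like this (or doing more than one step without losing your mind), a tiny Improved Euler loop helps. A sketch, using the same `f`, `h`, and initial condition:

```python
# y' = f(t, y) = 2y - 3t with y(0) = 1, step size h = 0.05.
def f(t, y):
    return 2*y - 3*t

def improved_euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)                  # slope at the left endpoint
        k2 = f(t + h, y + h*k1)       # slope at the predicted endpoint
        y += h*(k1 + k2)/2            # average the two slopes
        t += h
    return y

print(improved_euler(f, 0.0, 1.0, 0.05, 1))   # -> 1.10125
```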
Well, folks, this is the end! There will be no more Deadly Boring Math posts on differential equations. It's been quite a wild ride.
Stay tuned over break for more Math Gun and some fancy calculations regarding the speed of a robot my (non-GT-affiliated) team's been building.
If anyone really liked Deadly Boring Math, consider becoming an author! As I'm leaving physics for computer engineering, I won't be writing as much; shoot me an email at plupy44@gmail.com if you wanna help keep DBMUS alive.
Godspeed, and good luck!