Chapter 1: Introduction
A differential equation is an equation involving an (unknown) function y and some of its derivatives. The basic goal is to solve the equation, i.e., to determine which function or functions satisfy the equation. Differential equations come in several types, and our techniques for solving them will differ depending on the type.
Ordinary vs. partial: If y is a function of only one variable t, then our differential equation will involve only derivatives w.r.t. t, and we will call the equation an ordinary differential equation. If y is a function of more than one variable, then our differential equation will involve partial derivatives, and we will call it a partial differential equation. We will deal almost exclusively with ordinary differential equations in this class.
Systems: Sometimes the rates of change of several functions are inter-related, as with the populations of a predator y(t) and its prey x(t), where x′ = ax − αxy and y′ = γxy − cy. We call this a system of differential equations, and its solution would involve finding both x(t) and y(t).
Order: Techniques for solving differential equations differ depending upon how many derivatives of our unknown function are involved. The order of a differential equation is the order of the highest derivative appearing in the equation. The Implicit Function Theorem tells us that we can rewrite our equation so that it equates the highest order derivative with an expression involving lower order terms:

y^(n) = F(t, y, y′, …, y^(n−1)).
Linear vs. non-linear: A differential equation is linear if it can be written as

y^(n) + p_(n−1)(t)·y^(n−1) + … + p_1(t)·y′ + p_0(t)·y = g(t)
(i.e., when we solve for y^(n), the resulting function F is linear in the variables y, y′, …, y^(n−1), although it need not be linear in t). A differential equation is non-linear if it isn't linear! E.g.,

y′ = t·y²

is non-linear, while

t·y′ − y = t² + 1

is linear.
Solving a differential equation means determining which function or functions satisfy the equation. Our solutions come in two flavors: explicit solutions y = y(t), which provide a function of t which satisfies the equation, and implicit solutions, which provide an equation g(y,t) = 0 which any explicit solution would have to satisfy. The idea is that we can treat g(y,t) = 0 as implicitly defining y as a function of t; given a specific value t = c for t, we solve (numerically, perhaps) g(y,c) = 0 for y to determine the value of the solution to the differential equation at c.
In general, a differential equation y′ = f(t,y) will have many solutions; but typically one particular solution can be specified by requiring that one additional condition be met: that y take a specific value y0 at a specific point t0. If we think of the time t0 as the time at which we "start" our solution, then we call the pair of equations

y′ = f(t,y) ,  y(t0) = y0

an initial value problem (or IVP). There is a general result which gives conditions guaranteeing that an IVP has a solution:
If y′ = f(t,y) is a differential equation with both f and ∂f/∂y continuous for a < t < b and α < y < β, and t0 ∈ (a,b) and y0 ∈ (α,β), then for some h > 0, the initial value problem

y′ = f(t,y) ,  y(t0) = y0

has a unique solution for t ∈ (t0−h, t0+h).
In general, however, the size of the interval where we can guarantee existence (and uniqueness) can be very small, and often depends on the choice of initial value! For example, for the equation

y′ = y²

the righthand side is continuous everywhere (as is the partial derivative ∂f/∂y = 2y), but the interval we can choose for the solutions y = −1/(t+c) depends on c, which will depend on the initial condition! And it can never be chosen to be the entire real line.
Failure to satisfy the hypotheses of the result can easily kill both existence and uniqueness. For example, the equation

y′ = y^(1/3)

has many solutions with the initial condition y(0) = 0, such as y = 0 and y = (2t/3)^(3/2).
In many cases, especially for first order differential equations, we can "see" what a solution should look like without actually finding the solution. For first order equations, y′ = f(t,y), a solution y(t) will satisfy y′(t) = f(t,y(t)), and so we can think of f(t,y) as giving the slope of the tangent line to the graph of y(t) at the point (t,y(t)). But since the function f is already known, we can draw small line segments at "every" point of the t-y plane with slope f(t,y) at the point (t,y); this is called the direction field for our differential equation. A solution to our differential equation is simply a function whose graph is tangent to each of these line segments at every point along the graph. Thinking of the direction field as a velocity vector field (always pointing to the right), our solution is then the path of a particle being pushed along by the velocity vector field. From this point of view it is not hard to believe that every (first order ordinary) differential equation has a solution, in fact many solutions; you just drop a particle in and watch where it goes. Where you drop it is important (it changes where it goes), which really is what gives rise to the notion of an initial value problem; we seek to find the specific solution with the additional initial value y(t0) = y0.
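Direction fields are easy to draw by machine. Here is a minimal Python sketch using numpy and matplotlib; the particular right-hand side f(t,y) = t − y is an arbitrary choice for illustration, not an equation from these notes.

import numpy as np
import matplotlib.pyplot as plt

# Direction field for y' = f(t, y); f(t, y) = t - y is an illustrative choice.
def f(t, y):
    return t - y

tt = np.linspace(-3, 3, 21)
yy = np.linspace(-3, 3, 21)
T, Y = np.meshgrid(tt, yy)
S = f(T, Y)                          # slope of the segment at each grid point

# Draw short segments of slope S: normalize the vector (1, S) so every
# segment has the same length, and suppress the arrowheads.
L = np.sqrt(1 + S**2)
plt.quiver(T, Y, 1/L, S/L, angles='xy',
           headwidth=1, headlength=0, headaxislength=0)
plt.xlabel('t')
plt.ylabel('y')
plt.title("Direction field for y' = t - y")
plt.show()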
Most first order equations cannot be solved by the methods we will present here; the function f(t,y) is too complicated. For such equations, the best we can often do is to approximate the solutions, using numerical techniques. One method is the tangent line method, also known as Euler's method. The idea is that our differential equation y′ = f(t,y) tells us the slope of the tangent line at every point of our solution, and the tangent line can be used to approximate the graph of a function, at least close to the point of tangency. In other words, for a solution to our differential equation,

y(t) ≈ y(t0) + f(t0, y0)·(t − t0)

for t − t0 small. If we wish to approximate y(t) for a value of t far away from our initial value t0, we use the above idea in several steps. We cut up the interval into n pieces of length h (called the stepsize), and then set

t_{k+1} = t_k + h ,  y_{k+1} = y_k + h·f(t_k, y_k) ,  k = 0, 1, …, n−1,

and continue until we reach y_n, which will be our approximation to y(t) = y(t_n). Each step can be thought of as a mid-course correction, using information about the direction field at each stage to determine which way the solution is tending.
Calculus teaches us that at each stage the error introduced is approximately proportional to the square of h. So with a stepsize half as large, we will require twice as many steps, but each introduces an error only about one-fourth as large, so overall we get an error only half as large. This leads us to conclude that as the stepsize goes to 0, the error between our approximate solution yn and y(tn) goes to 0.
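A minimal Python sketch of Euler's method, tried on the equation y′ = t·y², y(1) = 2, whose exact solution y = 2/(2 − t²) is found in Chapter 2; as the discussion above predicts, doubling n (halving h) roughly halves the error.

# A minimal sketch of Euler's method for y' = f(t, y), y(t0) = y0.
def euler(f, t0, y0, t_end, n):
    """Approximate y(t_end) using n steps of size h = (t_end - t0) / n."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # follow the tangent line for one step
        t = t + h
    return y

# Example: y' = t*y**2 with y(1) = 2, whose exact solution
# (found in Chapter 2) is y = 2 / (2 - t**2).
f = lambda t, y: t * y**2
exact = 2 / (2 - 1.2**2)
for n in (10, 20, 40):
    approx = euler(f, 1.0, 2.0, 1.2, n)
    print(n, approx, abs(approx - exact))   # error roughly halves as n doubles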
Chapter 2: First Order Differential Equations
There is a class of first order equations for which we can readily find solutions by integration; these are the separable equations. A differential equation is separable if it can be written as

f(y)·(dy/dt) = g(t).

This allows us to "separate the variables" and integrate with respect to dy and dt to get a solution:

∫ f(y) dy = ∫ g(t) dt.
In the end, our solution looks like F(y) = G(t) + c, so it defines y implicitly as a function of t , rather than explicitly. In some cases we can invert F to get an explicit solution, but often we cannot.
For example, the separable equation y′ = t·y², y(1) = 2 has solution given by

∫ y^(−2) dy = ∫ t dt ,

so solving the integrals we get −1/y = t²/2 + c, or y = −2/(t² + 2c); setting y = 2 when t = 1 gives c = −1.
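Computer algebra can check this kind of computation. A minimal sketch using sympy (the dsolve call is standard, though the exact printed form of the answer may differ between versions):

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Solve y' = t*y**2 with the initial condition y(1) = 2.
sol = sp.dsolve(sp.Eq(y(t).diff(t), t * y(t)**2), y(t), ics={y(1): 2})
print(sol)   # expect something equivalent to y(t) = -2/(t**2 - 2)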
Perhaps the most straightforward sort of differential equation to solve is the first order linear ordinary differential equation

a(t)·y′ + b(t)·y = c(t).

We will typically (following tradition) write such equations in standard form as

y′ + p(t)·y = g(t).     (**)
For example, near the earth and in the presence of air resistance, the velocity v of a falling object obeys the differential equation v′ = g − kv, where g and k are (positive) constants.
There is a general technique for solving such equations, by trying to think of the left-hand side of the equation as the derivative of a single function. In form it looks like the derivative of a product, and by introducing an integrating factor μ(t), we can actually arrange this. Writing

μ(t) = exp( ∫ p(t) dt )

we find that (where exp(blah) means e raised to the power "blah")

(μ(t)·y)′ = μ(t)·y′ + μ(t)·p(t)·y = μ(t)·g(t),

and so

μ(t)·y = ∫ μ(t)·g(t) dt + c,

which we can then solve for y. Putting this all together, we find that the solutions to (**) are given by

y = (1/μ(t)) · [ ∫ μ(t)·g(t) dt + c ] ,  where μ(t) = exp( ∫ p(t) dt ).
For example, the differential equation t·y′ − y = t² + 1, after being rewritten in standard form as y′ − (1/t)·y = t + (1/t), has integrating factor

μ(t) = exp( ∫ −(1/t) dt ) = exp(−ln t) = 1/t ,

so we have

(y/t)′ = 1 + 1/t² ,  i.e.,  y/t = t − (1/t) + c ,

and so our solutions are y = t² − 1 + ct, where c is a constant.
But what is c? Our solution is actually a family of solutions; a particular solution (i.e., a particular value for c) can be found from an initial value y(t0) = y0. For example, if we wished to solve the initial value problem

t·y′ − y = t² + 1 ,  y(2) = 5 ,

we can plug t = 2 and y = 5 into our general solution y = t² − 1 + ct to obtain 3 + 2c = 5, i.e., c = 1.
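As a check, a computer algebra system reproduces this; a minimal sympy sketch (the printed form of the answer may differ):

import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# Solve t*y' - y = t**2 + 1 with the initial condition y(2) = 5.
ode = sp.Eq(t * y(t).diff(t) - y(t), t**2 + 1)
sol = sp.dsolve(ode, y(t), ics={y(2): 5})
print(sol)   # expect something equivalent to y(t) = t**2 + t - 1, i.e. c = 1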
Chapter 3: Mathematical Models
In many instances, the rate of change of a quantity can best be analysed by treating the factors that make the quantity go up separately from those that make it go down; each can often be easily understood in isolation. We can then build a differential equation modeling the behavior of the quantity y = y(t) as

y′ = (rate in) − (rate out).
As a basic example, we have mixing problems. The basic setup has a solution of a known concentration mixing at a known rate with a solution in a vat, while the mixed solution is poured off at a known rate. The problem is to find the function which gives the concentration in the vat at time t. It turns out that it is much easier to find a differential equation which describes the amount of solute (e.g., salt) in the solution (e.g., water), rather than the concentration.
If the concentration pouring in is A, at a rate of N, while the solution is pouring out at rate M with concentration A(t) = x(t)/V(t), then if the initial volume is V0, we can compute V(t) = V0 + (N − M)·t. The change in the amount x(t) of solute can be computed as (rate flowing in) − (rate flowing out), which is

x′(t) = N·A − M·( x(t)/V(t) ).
This is a linear equation, and so we can solve it using our techniques above.
We can also deal with a succession of mixing problems, the output of one becoming the input of the next, by treating them one at a time; the only change in the setup above is that the incoming concentration for the next vat (to solve for x_{i+1}(t)) would be the concentration x_i(t)/V_i(t) found by solving the equation for the previous vat.
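A sketch of a single vat in sympy, using the equation above; all of the numbers (A = 2, N = M = 3, V0 = 100) are made up for illustration.

import sympy as sp

t = sp.symbols('t', nonnegative=True)
x = sp.Function('x')

# Made-up data: inflow concentration A = 2 (lb/gal) at N = 3 (gal/min),
# outflow at M = 3 (gal/min), initial volume V0 = 100 (gal) of pure water.
A, N, M, V0 = 2, 3, 3, 100
V = V0 + (N - M) * t                 # V(t) is constant here, since N = M

ode = sp.Eq(x(t).diff(t), N * A - M * x(t) / V)
sol = sp.dsolve(ode, x(t), ics={x(0): 0})
print(sol)   # expect something equivalent to x(t) = 200*(1 - exp(-3*t/100))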
Another situation where this kind of analysis proves successful is in modeling population growth. The idea is that if y is the population at time t, then

y′ = (birth rate) − (death rate).
Typically, the birth rate is proportional to the population, i.e., is ry, while the death rate is either modeled as being proportional to the population (Malthusian model) or is a sum (logistic model): one part is proportional to the population (death by "natural causes"), the other is proportional to the square of the population (this typically represents contact between individuals, arising from competition for food, overcrowding, etc.), i.e., is ky². Put together, and combining the two terms proportional to population, we obtain

y′ = ry  (Malthusian model)   or   y′ = ry − ky² = y(r − ky)  (logistic model).
Both equations are autonomous and separable, and so we can use phase lines to understand their long-term behavior, as well as finding explicit solutions (using partial fractions, for the logistic equation).
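For the logistic equation, the partial fractions computation runs as follows (a sketch; y0 denotes y(0)). Separating variables in y′ = y(r − ky) gives

∫ dy / [ y(r − ky) ] = ∫ dt ,

and the partial fractions decomposition 1/[y(r − ky)] = (1/r)·(1/y) + (k/r)·1/(r − ky) turns the left side into

(1/r)·[ ln|y| − ln|r − ky| ] = t + c .

Exponentiating and solving for y yields

y(t) = r·y0 / [ k·y0 + (r − k·y0)·e^(−rt) ] ,

so y(t) → r/k as t → ∞ (for y0 > 0), matching the phase line analysis.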
Newton's Law of Cooling: This states that the rate of change of the temperature T(t) of an object is proportional to the difference between its temperature and the ambient temperature A of the air around it. The constant of proportionality depends upon the particular object (and the medium, e.g., air or water, it is in). In other words,

T′ = −k·(T − A).
Since a cold object will warm up, and a warm object will cool down, this means that the constant k should be positive. Writing the equation as

T′ + k·T = k·A ,

we find the solution (after solving the IVP with T(0) = T0)

T(t) = A + (T0 − A)·e^(−kt).
Typically, k is not given, but can be determined by knowing the temperature at some other time t1, by plugging into the equation above and solving for k.
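A minimal numerical sketch of this; the temperatures and times are made up for illustration.

import math

# Made-up data: ambient temperature A = 20 C, initial temperature T0 = 90 C,
# and a second reading T(t1) = 60 C at t1 = 5 minutes.
A, T0 = 20.0, 90.0
t1, T1 = 5.0, 60.0

# From T(t) = A + (T0 - A)*exp(-k*t), solve T(t1) = T1 for k:
k = -math.log((T1 - A) / (T0 - A)) / t1
print(k)                 # ln(70/40)/5, about 0.112 per minute

# With k known, predict the temperature at any later time:
T = lambda t: A + (T0 - A) * math.exp(-k * t)
print(T(10.0))           # about 42.9 C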
If we wish to model the motion of an object, whose position at time t is given by x(t), then (setting v(t) = x′(t)) Newton's Second Law of Motion tells us that

m·v′(t) = F = the sum of the forces acting on the object.
When we can understand these forces, in terms of t and v, we can build a first order differential equation, which we can then bring our techniques to bear to solve. Typical forces include:
gravity: Fg = mg or Fg = −mg, depending upon whether we think of the positive direction as down (giving +) or up (giving −). g = 9.8 m/sec² = 32 ft/sec² (approximately)
air resistance: this is typically modeled either as Fa = −kv (for smallish velocities) or Fa = −kv² (for large velocities). It always acts to push our velocity towards 0, hence the − sign.
external force: Fe = g(t); this represents a force that "follows along" the object and tries to push it in a direction that is "pre-programmed" in time.
With these sorts of forces, we get a general equation

m·v′ = ±mg − kv + g(t) ,

which we can solve by the methods we have developed. For example, ignoring external forces and assuming the positive direction is "down", we have the initial value problem

m·v′ = mg − kv ,  v(0) = v0 ,

with solution

v(t) = (mg/k) + (v0 − mg/k)·e^(−kt/m).
As t → ∞, v(t) → mg/k, the terminal velocity.
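A quick numerical illustration of the approach to terminal velocity; the mass and drag constant are made-up values.

import math

# Made-up values: m = 75 kg, k = 15 kg/sec, dropped from rest (v0 = 0),
# with the positive direction taken to be "down".
m, g, k, v0 = 75.0, 9.8, 15.0, 0.0
v_term = m * g / k                    # terminal velocity mg/k = 49 m/sec

v = lambda t: v_term + (v0 - v_term) * math.exp(-k * t / m)
for t in (0.0, 5.0, 10.0, 20.0):
    print(t, v(t))                    # v(t) climbs toward 49 m/sec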
Chapter 4: Linear Second Order Equations
Basic object of study: second order linear differential equations. Standard form:

y′′ + p(t)·y′ + q(t)·y = g(t).     (*)

Initial value problem: we need two initial conditions

y(t0) = y0 ,  y′(t0) = y0′.
Basic existence and uniqueness: if p(t), q(t), and g(t) are continuous on an interval around t0, then any initial value problem has a unique solution on that interval. Our basic goal: find the solution!
(*) is called homogeneous if g(t) = 0 ; otherwise it is inhomogeneous. (*) is an equation with constant coefficients if p(t) and q(t) are constants.
Our main new technique for exploring these equations will be operator notation. We write L[y] = y′′ + p(t)·y′ + q(t)·y (this is called a linear operator); then a solution to (*) is a function y with L[y] = g(t). Some familiar linear operators: D^n[y] = y^(n) (the n-th derivative operator). The operator is called linear because

L[c1·y1 + c2·y2] = c1·L[y1] + c2·L[y2] for all constants c1, c2 and functions y1, y2.
For a linear differential equation, L[c1y1+c2y2] = c1L[y1]+c2L[y2], and so if y1 and y2 are both solutions to L[y] = 0 then so is c1y1+c2y2 . c1y1+c2y2 is called a linear combination of y1 and y2. This fact is called the Principle of Superposition: more generally, for a linear operator, if L[y1] = g1(t) and L[y2] = g2(t), then L[y1+y2] = g1(t)+g2(t) .
Basic idea: with (the right) two solutions y1, y2 to a homogeneous linear equation

y′′ + p(t)·y′ + q(t)·y = 0 ,     (***)

we can solve any initial value problem, by choosing the right linear combination: we need to solve

c1·y1(t0) + c2·y2(t0) = y0 ,  c1·y1′(t0) + c2·y2′(t0) = y0′

for the constants c1 and c2; then y = c1·y1 + c2·y2 is our solution.
for the constants c1 and c2; then y = c1y1+c2y2 is our solution. This we can do directly, as a pair of linear equations, by solving one equation for one of the constants, and plugging into the other equation, or we can use the formulas
c1 = [ y0·y2′(t0) − y0′·y2(t0) ] / W ,  c2 = [ y0′·y1(t0) − y0·y1′(t0) ] / W ,

where

W = y1(t0)·y2′(t0) − y1′(t0)·y2(t0) .
W is called the Wronskian (determinant) of y1 and y2 at t0. The Wronskian is closely related to the concept of linear independence of a collection y1, …, yn of functions; such a collection is linearly independent if the only linear combination c1·y1 + … + cn·yn which is equal to the 0 function is the one with c1 = … = cn = 0.
Two functions y1 and y2 are linearly independent if their Wronskian is non-zero at some point; for a pair of solutions to (***), it turns out that the Wronskian is always equal to a constant multiple of

exp( −∫ p(t) dt ) ,

and so is either always 0 or never 0. We call a pair of linearly independent solutions to (***) a pair of fundamental solutions. By our above discussion, we can solve any initial value problem for (***) as a linear combination of fundamental solutions y1 and y2. By our existence and uniqueness result, this gives us:
and so is either always 0 or never 0. We call a pair of linearly independent solutions to (***) a pair of fundamental solutions. By our above discussion, we can solve any initial value problem for (***) as a linear combination of fundamental solutions y1 and y2. By our existence and uniqueness result, this gives us:
If y1 and y2 are a fundamental set of solutions to the differential equation (***), then any solution to (***) can be expressed as a linear combination c1y1+c2y2 of y1 and y2.
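A numerical sketch of the recipe, for the constant-coefficient equation y′′ − y = 0 (an illustrative choice) with fundamental solutions y1 = e^t, y2 = e^(−t), using the Wronskian formulas above:

import math

# Solve the IVP y'' - y = 0, y(0) = 2, y'(0) = 0 using the fundamental
# solutions y1 = e^t, y2 = e^(-t) and the Wronskian formulas above.
t0, y0, yp0 = 0.0, 2.0, 0.0

y1,  y2  = math.exp(t0),  math.exp(-t0)    # y1(t0), y2(t0)
y1p, y2p = math.exp(t0), -math.exp(-t0)    # y1'(t0), y2'(t0)

W  = y1 * y2p - y1p * y2                   # Wronskian at t0 (here W = -2)
c1 = (y0 * y2p - yp0 * y2) / W
c2 = (yp0 * y1 - y0 * y1p) / W
print(c1, c2)    # 1.0 1.0, i.e. y = e^t + e^(-t) = 2*cosh(t)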
A differential equation is called autonomous if the function f(t,y) is really a function f(y) only of the variable y. We will learn how to solve such equations below; but we can learn a lot about the solutions to such an equation simply by understanding the graph of f(y).
One feature of the solutions is that we can translate in time and get another solution; if y(t) is a solution to y′ = f(y), then so is z(t) = y(t+c) for any constant c, as can be verified by plugging z into the differential equation. This can also be verified geometrically, using the direction field approach. For an autonomous equation, the slope of the direction field is always the same along horizontal lines (since it depends on y, not t), and so if we pick up a solution curve, tangent to the direction field, and translate it in the horizontal direction, it will still be everywhere tangent to the direction field, and so is also a solution.
The key to understanding solutions to such equations y′ = f(y) is to find equilibrium solutions, that is, constant solutions y = c. Such solutions have derivative 0, and so for such solutions we must have f(c) = 0. The basic idea is that these equilibrium solutions tell us a great deal about the behavior of every solution to the differential equation.
If the function f(y) is continuous, then between the zeroes of f (i.e., the equilibrium solutions of the differential equation) f has the same sign, and so for the solutions, y′ has the same sign, so y(t) is either always increasing or always decreasing. It cannot cross the equilibrium solutions, since this would violate the uniqueness of solutions to our differential equation. (Here we assume that the derivative of f is also continuous.) If a solution curve becomes asymptotic to a horizontal line, that line must be an equilibrium solution: the tangent lines along our solution must become horizontal, i.e., f(y(t)) approaches 0, and so by continuity f applied to the limit of y(t) equals 0.
Therefore, the structure of the solutions is very simple; between consecutive equilibrium solutions, the solutions increase or decrease monotonically from one equilibrium to the other. This allows us to classify equilibrium solutions as one of three kinds: stable equilibria, where nearby solutions all converge back to the equilibrium, unstable equilibria, where nearby solutions all diverge away from the equilibrium, and semistable equilibria or nodes, where on one side the solutions converge back, and on the other they diverge away.
The easiest way to assemble this data is to plot the roots of f on a number line (called the phase line), and then determine the sign of f in the intervals in between. Where it is positive, solutions move to the right (i.e., up), while where it is negative they move left. Marking these as arrows, a stable equilibrium has arrows on both sides pointing towards it, an unstable equilibrium has both arrows pointing away, and a semistable equilibrium has one arrow pointing towards it and one pointing away.
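A minimal Python sketch of this classification, applied to the logistic equation with r = k = 1 (an illustrative choice):

# Phase line analysis for f(y) = y*(1 - y): the equilibria are the roots
# of f, and the sign of f just to either side of each root gives the arrows.
f = lambda y: y * (1 - y)
equilibria = [0.0, 1.0]

eps = 1e-6
for r in equilibria:
    left  = f(r - eps) > 0    # arrow points right (up) on the left side?
    right = f(r + eps) > 0    # arrow points right (up) on the right side?
    if left and not right:
        kind = 'stable'       # both arrows point toward the equilibrium
    elif right and not left:
        kind = 'unstable'     # both arrows point away
    else:
        kind = 'semistable'   # arrows point the same way on both sides
    print(r, kind)            # 0.0 unstable, 1.0 stable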