# Calculus 2, Lab 5: Systems of differential equations#

Today’s activity takes us a bit beyond what is covered in the textbook, but not too much beyond!

As side benefits, we get an early introduction to parametric curves, and some reminders about the utility of linear algebra.

Our focus will be **coupled systems of autonomous differential equations**. The context is the following: we have two variables \(x\) and \(y\) that both depend on an independent variable \(t\) (our parameter, which in many contexts can be viewed as time).

In a coupled system, the two quantities \(x\) and \(y\) change in time, but they also influence each other. An example would be sales numbers at two competing stores: the more that store A sells, the less store B sells, and vice-versa.

We will begin with **linear** systems, of the form
\[
\begin{aligned}\frac{dx}{dt} &= ax+by\\ \frac{dy}{dt}&=cx+dy\end{aligned}
\]

for constants \(a,b,c,d\). (In a more general linear system, \(a,b,c,d\) could be functions of \(t\), but we will stick to the time-independent case to keep things simple.)

We will end with a classic example: a **predator-prey** model, where we have two animal populations: a prey species, that if left untouched would experience exponential population growth, and a predator species, whose population growth (or decline) depends on the size of the prey population. This is a **non-linear** system, and we will not be able to understand all the details at this point, but we can still explore it both graphically and numerically.

## A preview of parametric curves#

Later in the course, we will study **parametric curves**: sets of points in the plane of the form \((x,y)=(f(t),g(t))\), where \(f\) and \(g\) are functions of a parameter \(t\) defined in some domain \([a,b]\). As \(t\) increases from \(a\) to \(b\), we can imagine a point that moves in “time”, tracing out a curve.

For example, we can generate the unit circle \(x^2+y^2=1\) as the set of points \((\cos(t),\sin(t))\), \(t\in [0,2\pi]\).

One of the facts that we will learn later in the course is that the slope of the tangent line to a parametric curve at a point \((x_0,y_0)=(f(t_0),g(t_0))\) is given by

\[
m = \frac{dy}{dx} = \frac{y'(t_0)}{x'(t_0)}.
\]
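As a quick sanity check (my own example, not part of the original lab), we can compare this formula with the known slope of the unit circle:

```python
import numpy as np

# Tangent slope to the parametric unit circle (cos t, sin t):
# dy/dx = y'(t)/x'(t) = cos(t)/(-sin(t)).
t0 = np.pi/4
slope = np.cos(t0)/(-np.sin(t0))
print(slope)  # the tangent at (sqrt(2)/2, sqrt(2)/2) has slope -1
```

This agrees with what we know from geometry: at that point, the tangent line to the circle is perpendicular to the radius, which has slope \(1\).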

## Direction fields for linear systems#

We can modify the direction field code from our last tutorial to work for a system of two differential equations. For the `dfield` program from last week, we were given a differential equation \(y'=f(x,y)\), and we defined the slope using the function \(f(x,y)\). Given a system
\[
\begin{aligned}\frac{dx}{dt}&=f(x,y)\\ \frac{dy}{dt}&=g(x,y)\end{aligned}
\]

we will redefine the slope as \(g(x,y)/f(x,y)\), based on the result given above.

But you might have noticed last week that our direction fields didn’t always look great. So before getting into the differential equations, we are going to explore the code a bit more carefully.

We will test a few versions below to see how they perform. First, our code from last week. The only change I’ve made is to calculate the slope as \(g(x,y)/f(x,y)\) instead of just \(f(x,y)\).

```
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
plt.rcParams['figure.figsize'] = [10, 10]
import numpy as np
```

```
def dfield1(f,g):
    for j in x:
        for k in y:
            slope = g(j,k)/f(j,k)
            domain = np.linspace(j-0.05,j+0.05,2)
            def fun(x1,y1):
                z = slope*(domain-x1)+y1
                return z
            plt.plot(domain,fun(j,k),solid_capstyle='projecting',solid_joinstyle='bevel')
    plt.title("Slope field for y'=f(x,y)")
    plt.grid(True)
    plt.axis([-2,2,-2,2])
    plt.show()
```

Let’s try this for the system

\[
\begin{aligned}\frac{dx}{dt}&=-x+3y\\ \frac{dy}{dt}&=-5x-2y\end{aligned}
\]

To set things up, we need to define the functions \(f(x,y)=-x+3y\) and \(g(x,y)=-5x-2y\), as well as values to use for both \(x\) and \(y\).

Notice that I’ve set the range for both `x` and `y` to match the values in the `plt.axis` command, which sets what range of values gets displayed for each variable.

```
def f(x,y):
    return -x+3*y
def g(x,y):
    return -5*x-2*y
x=np.linspace(-2,2,30)
y=np.linspace(-2,2,30)
```

Now, let’s create our direction field!

```
dfield1(f,g)
```

You should probably notice two things:

1. It looks terrible!
2. There’s a warning about division by zero.

The latter comes from the fact that \(f(x,y)=0\) whenever \(x=3y\). Let’s fix that, and see if it helps.

The easiest way (that I can think of) to fix this is to use Python’s `if` … `else` logic.
If \(f(x,y)\neq 0\), we proceed as before. If \(f(x,y)=0\), we need to do something else. We’ll deal with the exceptional case first.

If \(f(x,y)=0\), what sort of line should we be plotting? (Think about it for a moment.)

Did you answer “vertical”? If so, good job!

Let’s look more carefully at the code we are using:

`domain = np.linspace(j-0.05,j+0.05,2)` creates an array with 2 points: `[j-0.05,j+0.05]`. These are the \(x\) coordinates for the beginning and end of the line segment.

```
def fun(x1,y1):
    z = slope*(domain-x1)+y1
    return z
```

This calculates the \(y\) coordinate along a line through the point \((x_1,y_1)\) with slope equal to `slope`.

Now look at `plt.plot(domain,fun(j,k),solid_capstyle='projecting',solid_joinstyle='bevel')`. The additional options are style options. The important part is `plt.plot(domain,fun(j,k))`. Remember that `domain` is really an array with two numbers. Since `fun(j,k)` depends on `domain`, it is also an array.

So this is a command to plot a line between two points. The first two inputs are arrays: an array of \(x\) values, and an array of \(y\) values. So `plt.plot([a,b],[c,d])` is a line between two points. The \(x\) values are \(a\) and \(b\), and the \(y\) values are \(c\) and \(d\), so the line goes from \((a,c)\) to \((b,d)\).

In our code above, the initial point is \((j-0.05,\ k-0.05m)\), where \(m\) is the value of `slope`, and the final point is \((j+0.05,\ k+0.05m)\).

For a vertical line through the point \((j,k)\), we can use `plt.plot([j,j],[k-0.05,k+0.05])`, since we want the same \(x\) coordinate at the beginning and at the end. Our modified code with no division by zero looks like this:

```
def dfield2(f,g):
    for j in x:
        for k in y:
            if f(j,k)==0:
                plt.plot([j,j],[k-0.05,k+0.05],solid_capstyle='projecting',solid_joinstyle='bevel')
            else:
                slope = g(j,k)/f(j,k)
                domain = np.linspace(j-0.05,j+0.05,2)
                def fun(x1,y1):
                    z = slope*(domain-x1)+y1
                    return z
                plt.plot(domain,fun(j,k),solid_capstyle='projecting',solid_joinstyle='bevel')
    plt.title("Slope field for y'=f(x,y)")
    plt.grid(True)
    plt.axis([-2,2,-2,2])
    plt.show()
```

Let’s see if this version does any better:

```
dfield2(f,g)
```

No division by zero!

But it still looks terrible. Notice how the lines with the steepest slopes are really long. Why do you think that is? (Think about this now; you will be asked to answer it later.)

Here is one more version. This time, instead of using the equation of a tangent line to plot our line segments, we will use a trick from the parametric curve toolkit: if we want to draw a line through the point \((a,b)\) with slope \(m\), we can let \(\theta=\arctan(m)\) (remember that \(\tan\theta\) gives slope), and define our line parametrically as

\[
x = a + \cos(\theta)t, \qquad y = b + \sin(\theta)t,
\]

where \(t\) is allowed to vary over some interval. If \(t\in [t_0, t_0+\Delta t]\), then:

\[
\Delta x = \cos(\theta)\Delta t, \qquad \Delta y = \sin(\theta)\Delta t,
\]

so the length of the line segment is

\[
\sqrt{(\Delta x)^2+(\Delta y)^2} = \sqrt{\cos^2(\theta)+\sin^2(\theta)}\,\Delta t = \Delta t.
\]

This time, all of our segments have the same length, regardless of slope. Here is the code:

```
def dfield(f,g):
    for j in x:
        for k in y:
            if f(j,k)==0:
                theta=np.pi/2
            else:
                slope = g(j,k)/f(j,k)
                theta = np.arctan(slope)
            plt.plot([j,j + 0.05*np.cos(theta)],
                     [k,k+0.05*np.sin(theta)],
                     solid_capstyle='projecting',solid_joinstyle='bevel')
    plt.title("Direction field for x'=f(x,y),y'=g(x,y)")
    plt.grid(True)
    plt.axis([x[0],x[len(x)-1],y[0],y[len(y)-1]])
    plt.show()
```

Let’s see if this version does any better:

```
dfield(f,g)
```

I think we have a winner!!

There is one more thing we could do: in addition to the slope \(\dfrac{dy}{dx} = \dfrac{y'(t)}{x'(t)}\) of a parametric curve \((x(t),y(t))\), you will learn in Calculus III that you can consider the **tangent vector** \(\vec{r}\,'(t) = \langle x'(t),y'(t)\rangle\). This vector aligns with the slope, but it also gives us a *direction of travel*. This can help us better understand how our solutions behave.

To plot arrows, the `matplotlib` library has the `quiver` command. The syntax looks like `quiver(x,y,u,v)`, where `x` and `y` are arrays giving the coordinates where each arrow should be located, while `u` and `v` give the direction of the vector. The `meshgrid` command below takes our two one-dimensional arrays of numbers and turns them into a two-dimensional array (i.e. a matrix). Here is what our direction field looks like with arrows. Note that the code is a lot simpler: the `meshgrid` command lets us skip the two `for` loops we needed before.

```
x,y=np.meshgrid(np.linspace(-2,2,30),np.linspace(-2,2,30))
u = f(x,y)/np.sqrt(f(x,y)**2+g(x,y)**2)
v = g(x,y)/np.sqrt(f(x,y)**2+g(x,y)**2)
plt.quiver(x,y,u,v)
# If you want to add colour, add a 5th argument to quiver. Try functions of x,y,u,v.
# For example, plt.quiver(x,y,u,v,np.arctan2(v,u)) assigns colour based on the angle of the vector
plt.show()
```

### Problem 1#

Finally, an exercise! Try to explain the following items. (You can double-click on this cell to edit it, if you would like to type in your answers.)

(a) In the original direction field plot, why are the lines with nearly vertical slopes so much longer than the lines with nearly horizontal slopes?

(b) Why do the parametric equations \(x=a+\cos(\theta)t, y=b+\sin(\theta)t\) define a line through the point \((a,b)\) with slope \(m=\tan(\theta)\)?

(*Hint:* the parametric equations for a line are the same thing as a vector equation for a line from Math 1410.)

## Euler’s Method for systems#

We can adapt Euler’s method to systems of differential equations. For the system

\[
\begin{aligned}\frac{dx}{dt} &= ax+by\\ \frac{dy}{dt}&=cx+dy,\end{aligned}
\]

there are three types of solutions that we can look for, given initial values \(x(0)=x_0\) and \(y(0)=y_0\):

- The solution \(x(t)\), giving \(x\) as a function of \(t\)
- The solution \(y(t)\), giving \(y\) as a function of \(t\)
- The **solution curve**, which is in the \(xy\) plane. (This implicitly defines \(y\) as a function of \(x\).)

If we have analytic (exact) solutions for \(x(t)\) and \(y(t)\), we can often *eliminate the parameter* to get an equation relating \(x\) and \(y\), which will define the solution curve. But if we are solving numerically, this isn’t an option. Fortunately, it also is not necessary. In our algorithm, we will begin with an array defining the \(t\) values we use as inputs. Euler’s method will return arrays for both \(x\) and \(y\). We can then choose which two of these three arrays to plot together: we can plot the points \((t,x)\), giving \(x(t)\), or the points \((t,y)\), giving \(y(t)\), or the points \((x,y)\), giving the solution curve.

Here again is our code for Euler’s method, updated for a system of two equations.

```
def Euler2(f,g,x0,y0):
    x = np.zeros(len(t)) #defines array of zeros with same length as t
    y = np.zeros(len(t)) #these zeros will be rewritten below
    x[0] = x0 #initial value for x
    y[0] = y0 #initial value for y
    for n in range(0,len(t)-1):
        x[n+1] = x[n] + f(x[n],y[n])*(t[n+1] - t[n]) #defines next step for x
        y[n+1] = y[n] + g(x[n],y[n])*(t[n+1] - t[n]) #notice f and g do not depend on t
    return (x,y)
```

We will now define two plotting functions. The first will plot the solution functions \(x(t)\) and \(y(t)\). The second will plot the solution curve against the direction field, since both of these inhabit the \(xy\) plane.

```
def xysol(f,g,x0,y0):
    (x,y) = Euler2(f,g,x0,y0)
    plt.plot(t,x,'b-',t,y,'r-')
    plt.legend(['x(t)','y(t)'])
    #plt.axis([0,10,0,10]) #uncomment to adjust viewing window
    plt.grid(True)
    plt.show()
```

```
p = np.linspace(-2,2,20) # adjust these if necessary to change the grid for the direction field
q = np.linspace(-2,2,20)
def dfsol(f,g,x0,y0):
    for j in p:
        for k in q:
            if f(j,k)==0:
                theta=np.pi/2
            else:
                slope = g(j,k)/f(j,k)
                theta = np.arctan(slope)
            plt.plot([j,j + 0.05*np.cos(theta)],
                     [k,k+0.05*np.sin(theta)],
                     solid_capstyle='projecting',solid_joinstyle='bevel')
    (x,y) = Euler2(f,g,x0,y0)
    plt.plot(x,y)
    plt.title("Direction field and solution curve for x'=f(x,y),y'=g(x,y)")
    plt.grid(True)
    plt.axis([p[0],p[len(p)-1],q[0],q[len(q)-1]])
    plt.show()
```

Here are the results for the system we’ve been studying so far, with initial conditions \(x_0=1, y_0=-1\).

```
def f(x,y):
    return -x+3*y
def g(x,y):
    return -5*x-2*y
t = np.linspace(0,2*np.pi,100)
```

```
xysol(f,g,1,-1)
```

```
dfsol(f,g,1,-1)
```

### Problem 2#

For the following systems of differential equations of the form

\[
\begin{aligned}\frac{dx}{dt} &= ax+by\\ \frac{dy}{dt}&=cx+dy,\end{aligned}
\]

use the `xysol` function above to plot approximate solutions \(x(t)\) and \(y(t)\). Then use the `dfsol` function to plot the approximate solution curve against the direction field. In each case you will need to define the functions `f(x,y)` and `g(x,y)`, as well as the array `t`.

#### Part (a)#

\(a = 0.1, b=0.05, c=0, d=-0.2, x_0 = 0.5, y_0=1\)

#### Part (b)#

\(a = 0, b=2, c=-2, d=0, x_0=0.5, y_0=-0.5\)

**Notes:**

- For each system, include a line of the form `t = np.linspace(0,b,n)`, where `b` is the endpoint of the interval you want to use, and `n` is the number of steps used.
- Values you could try: `b=2*np.pi` (or some other multiple of \(\pi\)) if things look like they’re oscillating, or `b=2` (or some other single-digit integer) if they don’t, and `n=100`, or `n=1000`, etc.
- The larger you set the `n` value, the more accurate your curves will be.
- In the `dfsol` function, the values `p=np.linspace(-2,2,20)` and `q=np.linspace(-2,2,20)` set the size of the viewing window, as well as the number of lines drawn for the direction field. If you don’t like how things look, you could try redefining these.

## Solving linear systems#

What if we want to compare our approximate solutions to the exact ones? We can’t cover everything in one tutorial, but there are some very cool connections between systems of linear differential equations and linear algebra. Given the system

\[
\begin{aligned}\frac{dx}{dt} &= ax+by\\ \frac{dy}{dt}&=cx+dy,\end{aligned}
\]

consider the matrix

\[
A = \begin{bmatrix} a & b\\ c & d\end{bmatrix}.
\]

If we can find a number \(\lambda\) and a non-zero vector \(\vec{v}\) such that \(A\vec{v}=\lambda \vec{v}\), we call \(\lambda\) an **eigenvalue** of \(A\), and \(\vec{v}\) an **eigenvector** of \(A\).

(You’ve seen this already if you’ve done Math 1410. If not, don’t worry – you won’t need to know how to find eigenvalues and eigenvectors, or even how to multiply a vector by a matrix! We’ll let the computer do this.)

Next, we learn how to use Python to compute eigenvalues and eigenvectors. We’ll use these to create our solution.

**Note:** many linear algebra calculations can be done quite easily using SymPy, and we get nicely formatted results, as well. (Matrices look like matrices, for example… If you are interested, see my textbook for Math 3410) But we are going to want our eigenvalues to be numbers that NumPy understands, so that we can use them to plot solutions.

A matrix can be entered in NumPy using the following syntax: `np.array([[a,b],[c,d]])`, where `[a,b]` is the first row, and `[c,d]` is the second. (This of course works for larger matrices, too.)

Let’s enter the matrix for the example I’ve been using.

```
A=np.array([[-1,3],[-5,-2]])
print(A) # for NumPy objects, print(A) produces better results than A
```

We can obtain the eigenvalues and eigenvectors of \(A\) using the linear algebra library from SciPy.

```
import scipy.linalg as la
x=la.eig(A)
print(x)
```

We notice two things. First, the output contains both the eigenvalues **and** the eigenvectors! The eigenvalues come first, in an array of length 2. The eigenvectors come second, as the columns of a \(2\times 2\) array. Second, all the numbers are complex! To extract the eigenvalues and eigenvectors, we can do the following:

```
(evals,evects)=la.eig(A)
print(evals)
```

Try asking for either `evals[0]` or `evals[1]` to confirm that this behaves as an array:
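For example (repeating the setup from the cells above, so that this cell runs on its own):

```python
import numpy as np
import scipy.linalg as la

A = np.array([[-1,3],[-5,-2]])  # the matrix from our running example
(evals,evects) = la.eig(A)
print(evals[0])  # first eigenvalue
print(evals[1])  # second eigenvalue
```

Each entry is a single complex number, just as indexing an ordinary array would give a single real number.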

We can also extract real and imaginary parts:

```
print(np.real(evals[0]))
print(np.imag(evals[1]))
```

It turns out that the eigenvalues of the matrix \(A\) characterize the types of solutions we can expect to get:

- distinct real eigenvalues \(\lambda_1, \lambda_2\): \(x(t)=c_1e^{\lambda_1 t}+c_2e^{\lambda_2 t}\).
- repeated real eigenvalue \(\lambda\): \(x(t)=c_1e^{\lambda t}+c_2 te^{\lambda t}\).
- purely imaginary eigenvalues \(\lambda = \pm i \omega\): \(x(t)=c_1\cos(\omega t)+c_2\sin(\omega t)\).
- complex eigenvalues \(\lambda = k\pm i\omega\): \(x(t) = c_1 e^{kt}\cos(\omega t)+c_2 e^{kt}\sin(\omega t)\).

In each case \(c_1\) and \(c_2\) are constants, and the result for \(y(t)\) is similar. The constants are determined by initial conditions, but there are relationships between the constants for \(x(t)\) and the constants for \(y(t)\) determined by the eigenvectors of \(A\). (For further details, see the end of this notebook.)
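If you would like the computer to tell you which case applies, here is a small helper. (This is my own addition, not part of the lab; the name `classify` and the tolerance are arbitrary choices.)

```python
import numpy as np
import scipy.linalg as la

def classify(A):
    # Sort a 2x2 matrix's eigenvalues into the four cases listed above.
    evals, _ = la.eig(A)
    l1, l2 = evals
    tol = 1e-12
    if abs(l1.imag) < tol and abs(l2.imag) < tol:
        if abs(l1.real - l2.real) < tol:
            return "repeated real eigenvalue"
        return "distinct real eigenvalues"
    if abs(l1.real) < tol:
        return "purely imaginary eigenvalues"
    return "complex eigenvalues"

print(classify(np.array([[-1,3],[-5,-2]])))  # complex eigenvalues
print(classify(np.array([[0,2],[-2,0]])))    # purely imaginary eigenvalues
```

The first example is the system we have been studying; its eigenvalues have negative real part, which is why the solution curves spiral inward.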

### Problem 3#

For both of the systems you used in Problem 2:

- determine the eigenvalues of the coefficient matrix \(A\).
- decide which of the four cases above your eigenvalues correspond to.
- comment on whether or not the type of function for that case agrees with the graphs you found in Problem 2.

## Non-linear systems#

The numerical methods we’ve introduced in this notebook will work for non-linear systems as well, although an analytical approach is far more complicated. (The general strategy is to find the critical points for a non-linear system, and then approximate near each critical point by a linear system.)

In particular, we will look at Volterra’s **predator-prey** model. In this model, \(x(t)\) is the population of a prey species (e.g. rabbits) and \(y(t)\) is the population of a predator species (e.g. foxes).
For constants \(a,b,c,d\), the predator-prey model is given by:

\[
\begin{aligned}\frac{dx}{dt} &= ax-bxy\\ \frac{dy}{dt}&=-cy+dxy.\end{aligned}
\]

### Problem 4#

(No code required)

(a) If \(y=0\) (no predators), what can you say about the population \(x\)?

(b) If \(x=0\) (no prey), what happens to the population \(y\)?

(c) What is the significance of the \(xy\) term in each equation?

Now, let’s define our predator-prey system:

```
a=2
b=1.2
c=1
d=0.9
def f(x,y): # define dx/dt=f(x,y)
    return a*x-b*x*y
def g(x,y): # define dy/dt=g(x,y)
    return -c*y+d*x*y
p = np.linspace(0.1,5,20)
q = np.linspace(0.1,5,20)
```
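As an aside, connecting back to the linearization remark above: this system's critical points can be found by hand, by setting both rates of change to zero. (The following check is my own addition, using the constants just defined.)

```python
import numpy as np

# Setting ax - bxy = 0 and -cy + dxy = 0 gives two critical points:
# (0, 0), and (c/d, a/b).
a, b, c, d = 2, 1.2, 1, 0.9
x_star, y_star = c/d, a/b
print((x_star, y_star))

def f(x,y): return a*x - b*x*y
def g(x,y): return -c*y + d*x*y
print(f(x_star, y_star), g(x_star, y_star))  # both rates vanish at the equilibrium
```

The direction field below circulates around this equilibrium point.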

We can then run the `xysol` and `dfsol` programs to generate plots for this system: first, to show how the two populations change over time, and second, to show the direction field and solution curve.

```
t = np.linspace(0,4*np.pi,100)
xysol(f,g,1.5,0.5) #choosing initial values x0=1.5 and y0=0.5
```

```
dfsol(f,g,1.5,0.5)
```

It turns out that the solution curve above should really be a closed curve. It doesn’t appear this way due to the error involved in using Euler’s method. To improve things, try changing the definition of `t` to use a smaller step size (perhaps change 100 to 1000), and then **re-run the cells.**

What this means in terms of our population dynamic is that this particular predator-prey ecosystem is **stable**. The two populations go up and down over time, but they do so in a **cycle**.
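One way to quantify the Euler error mentioned above: along exact solutions of this system, the quantity \(V(x,y) = dx - c\ln(x) + by - a\ln(y)\) stays constant. (This conserved quantity is a standard fact about the Lotka-Volterra model, not something derived in this lab; the helper names below are my own.)

```python
import numpy as np

# Along exact solutions, V(x,y) = d*x - c*ln(x) + b*y - a*ln(y) is constant,
# so any drift in V along an Euler approximation measures the numerical error.
a, b, c, d = 2, 1.2, 1, 0.9

def V(x, y):
    return d*x - c*np.log(x) + b*y - a*np.log(y)

def euler_drift(n):
    t = np.linspace(0, 4*np.pi, n)
    x, y = 1.5, 0.5            # same initial values as above
    v0 = V(x, y)
    for k in range(n - 1):
        h = t[k+1] - t[k]
        x, y = x + (a*x - b*x*y)*h, y + (-c*y + d*x*y)*h
    return abs(V(x, y) - v0)

print(euler_drift(100), euler_drift(1000))  # the drift shrinks as n grows
```

The smaller drift with `n=1000` is another way of seeing why the curve closes up when you shrink the step size.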

### Problem 5#

Choose two different sets of values for \(a,b,c,d\), and then run both the `xysol` and `dfsol` programs to create plots for each. In each case, comment on the results.

(Pick your own values, and experiment a bit. If your outcome isn’t interesting, or seems unreasonable, try again with different numbers. Note that in some cases you could cause exponential growth so large that you overflow the values the system can handle, so choose carefully!)

**Note:** it should be sufficient to redefine `a,b,c,d` and then run the two programs. You don’t have to enter new definitions for `f(x,y)` and `g(x,y)`.

## Analytic methods for linear systems#

(For information purposes only.)

Note that a system of differential equations could be written in vector form as

\[
\vec{x}\,'(t) = A\vec{x}(t),
\]

where \(A=\begin{bmatrix}a&b\\c&d\end{bmatrix}\) is the coefficient matrix for the system, and \(\vec{x}(t)=\begin{bmatrix}x(t)\\y(t)\end{bmatrix}\).

It turns out that if \(\lambda\) is an eigenvalue of \(A\), and \(\vec{v}=\begin{bmatrix}v_1\\v_2\end{bmatrix}\) is a corresponding eigenvector, then a solution to our linear system is given by

\[
x(t) = e^{\lambda t}v_1, \qquad y(t) = e^{\lambda t}v_2.
\]

In vector form, we can write

\[
\vec{x}(t) = e^{\lambda t}\vec{v}.
\]

This works because, on the one hand, \(\vec{x}\,'(t) = \lambda e^{\lambda t}\vec{v}\), and on the other hand, \(A\vec{x}(t) = A(e^{\lambda t}\vec{v})=e^{\lambda t}A\vec{v}=e^{\lambda t}\lambda \vec{v}\).
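We can confirm this computation numerically for our running example (a sketch of my own, using the same matrix as before):

```python
import numpy as np
import scipy.linalg as la

# For an eigenpair (lam, v) of A, the derivative of x(t) = e^(lam*t) v
# is lam * e^(lam*t) v, and this should equal A @ x(t).
A = np.array([[-1,3],[-5,-2]])
evals, evects = la.eig(A)
lam, v = evals[0], evects[:,0]

t0 = 0.3                          # an arbitrary time
xt = np.exp(lam*t0)*v             # x(t0)
deriv = lam*np.exp(lam*t0)*v      # x'(t0)
print(np.allclose(deriv, A @ xt)) # True
```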

In most cases we will get two different eigenvalues, with corresponding eigenvectors, and the general solution will be a linear combination (or superposition) of these two solutions. That is, if \(\vec{x}_1(t) = e^{\lambda_1 t}\vec{v}_1\) and \(\vec{x}_2(t)=e^{\lambda_2 t}\vec{v}_2\), then the general solution is

\[
\vec{x}(t) = c_1 e^{\lambda_1 t}\vec{v}_1 + c_2 e^{\lambda_2 t}\vec{v}_2.
\]

If \(\lambda = u+iv\) is complex, we can use Euler’s formula (yes, the same Euler) to write

\[
e^{(u+iv)t} = e^{ut}\bigl(\cos(vt)+i\sin(vt)\bigr).
\]

It turns out that if \(\lambda=u+iv\) is an eigenvalue, the other one will be \(\bar{\lambda}=u-iv\) (thanks to the quadratic formula), and it is possible to combine the two solutions to give an answer entirely in terms of real numbers. The general solution for a system of two linear differential equations whose coefficient matrix has eigenvalues \(\lambda_\pm = u\pm iv\) looks like

\[
\vec{x}(t) = e^{ut}\bigl(\cos(vt)\,\vec{w}_1 + \sin(vt)\,\vec{w}_2\bigr)
\]

for some vectors \(\vec{w}_1,\vec{w}_2\). (These are not the original eigenvectors, but they are related.)

When we have a solution like \(\vec{x}(t)=c_1\vec{v}_1 e^{\lambda_1 t}+c_2\vec{v}_2 e^{\lambda_2 t}\), and initial conditions \(x(0)=x_0\), \(y(0)=y_0\), we can determine the coefficients \(c_1, c_2\) as follows: setting \(t=0\) gives

\[
c_1\vec{v}_1 + c_2\vec{v}_2 = \begin{bmatrix}x_0\\y_0\end{bmatrix}.
\]

This is a system of two equations with two unknowns. The matrix whose columns are given by the two eigenvectors will be invertible, allowing us to solve for \(c_1\) and \(c_2\).
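In code, this is a one-line linear solve. (A sketch of my own, using the running example and the initial conditions \(x_0=1, y_0=-1\) from earlier; recall that `la.eig` returns the eigenvectors as the columns of its second output.)

```python
import numpy as np
import scipy.linalg as la

# Solve  evects @ [c1, c2] = [x0, y0]  for the constants c1, c2.
A = np.array([[-1,3],[-5,-2]])
evals, evects = la.eig(A)

x0, y0 = 1, -1
cs = np.linalg.solve(evects, np.array([x0, y0], dtype=complex))
print(cs)

# sanity check: this combination of eigenvectors reproduces (x0, y0)
print(np.allclose(evects @ cs, [x0, y0]))  # True
```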

There are a lot of other details we’ve glossed over here, which you will learn about in a course on differential equations.