
Worksheet 4.5: Linear dynamical systems

Suppose we have a sequence \((x_n)\) defined by a linear recurrence of length \(k\text{,}\) as in Worksheet 2.5:
\begin{equation*} x_{n+k} = a_0x_n + a_1x_{n+1}+\cdots + a_{k-1}x_{n+k-1}\text{.} \end{equation*}
We would like to replace the recurrence formula with a matrix equation of the form \(\vv_{k+1}=A\vv_k\text{,}\) and then use eigenvalues to analyze the resulting system. A sequence of vectors generated in this way is called a linear dynamical system. It is a good model for systems with discrete time evolution (where changes occur in steps, rather than continuously).
To determine the long-term evolution of the system, we would like to be able to compute
\begin{equation*} \vv_n = A^n \vv_0 \end{equation*}
without first finding all the intermediate states, so this is a situation where we would like to be able to efficiently compute powers of a matrix. Fortunately, we know how to do this when \(A\) is diagonalizable: \(A^n = PD^nP^{-1}\text{,}\) where \(D\) is a diagonal matrix whose entries are the eigenvalues of \(A\text{,}\) and \(P\) is the matrix of corresponding eigenvectors of \(A\text{.}\)
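
For example, here is a minimal sketch (assuming the sympy library is available; the \(2\times 2\) matrix below is a made-up example, not one from the problems that follow) of computing a power of a matrix via diagonalization:

```python
# A small made-up example: write A = P*D*P^(-1), so that A**n = P*D**n*P^(-1),
# where D**n just raises each diagonal entry to the nth power.
from sympy import Matrix

A = Matrix([[0, 1],
            [6, 1]])             # hypothetical matrix with eigenvalues 3 and -2

P, D = A.diagonalize()           # columns of P are eigenvectors; D is diagonal

n = 10
A_n = P * D**n * P.inv()

print(A_n)
print(A_n == A**n)               # sanity check against direct repeated multiplication
```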

1.

Consider a recurrence of length 2, of the form
\begin{equation*} x_{k+2} = ax_k+bx_{k+1}\text{.} \end{equation*}

(a)

According to Worksheet 2.5, what is the polynomial associated to this recurrence?

(b)

Let \(\vv_k = \begin{bmatrix}x_k\\x_{k+1}\end{bmatrix}\text{,}\) for each \(k\geq 0\text{,}\) and let \(A=\begin{bmatrix}0\amp 1\\a \amp b\end{bmatrix}\text{.}\) Show that
\begin{equation*} \vv_{k+1}=A\vv_k, \text{ for each } k\geq 0\text{.} \end{equation*}

(c)

Compute the characteristic polynomial of \(A\text{.}\) What do you observe?
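If you would like to check your answer with the computer, one possible approach (assuming sympy) is to leave \(a\) and \(b\) as symbols:

```python
# Characteristic polynomial of the matrix from part (b), with a and b symbolic.
from sympy import Matrix, symbols

a, b, x = symbols('a b x')
A = Matrix([[0, 1],
            [a, b]])

print(A.charpoly(x).as_expr())   # compare this with the polynomial from part (a)
```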

2.

For a recurrence of length 3, given by
\begin{equation*} x_{k+3} = ax_k+bx_{k+1}+cx_{k+2}\text{:} \end{equation*}

(a)

Determine a matrix \(A\) such that \(\vv_{k+1}=A\vv_k\text{,}\) where \(\vv_k = \begin{bmatrix}x_k\\x_{k+1}\\x_{k+2}\end{bmatrix}\text{.}\)

(b)

Compute the characteristic polynomial of \(A\text{,}\) and compare it to the associated polynomial of the recurrence.

(c)

Show that if \(\lambda\) is an eigenvalue of \(A\text{,}\) then \(\xx = \begin{bmatrix}1\\ \lambda\\ \lambda^2\end{bmatrix}\) is an associated eigenvector.

3.

Consider the Fibonacci sequence, defined by \(x_0=1, x_1=1\text{,}\) and \(x_{k+2}=x_k+x_{k+1}\text{.}\) Let \(A\) be the matrix associated to this sequence.

(a)

State the matrix \(A\text{,}\) and show that \(A\) has eigenvalues \(\lambda_\pm = \frac{1}{2}(1\pm \sqrt{5})\text{,}\) with associated eigenvectors \(\xx_\pm = \begin{bmatrix}1\\ \lambda_\pm\end{bmatrix}\text{.}\)

(b)

Let \(P = \begin{bmatrix}1\amp 1\\ \lambda_+ \amp \lambda_-\end{bmatrix}\text{,}\) let \(D = \begin{bmatrix}\lambda_+ \amp 0\\ 0 \amp \lambda_-\end{bmatrix}\text{,}\) and let \(\mathbf{a_0}=\begin{bmatrix}a_0\\a_1\end{bmatrix}=P^{-1}\vv_0\text{,}\) where \(\vv_0=\begin{bmatrix}1\\1\end{bmatrix}\) gives the initial values of the sequence.
Show that
\begin{align*} \vv_n \amp = PD^nP^{-1}\vv_0\\ \amp = a_0\lambda_+^n\xx_+ + a_1\lambda_-^n\xx_-\text{.} \end{align*}
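The following is not a proof, but a sketch (assuming sympy) that checks the claimed decomposition against the recurrence itself for the first several terms:

```python
# Check that a_0*lam_+^n*x_+ + a_1*lam_-^n*x_- reproduces the Fibonacci numbers
# generated directly by the recurrence x_{k+2} = x_k + x_{k+1}.
from sympy import Matrix, sqrt, simplify

lam_p = (1 + sqrt(5)) / 2
lam_m = (1 - sqrt(5)) / 2
x_p = Matrix([1, lam_p])
x_m = Matrix([1, lam_m])

P = Matrix([[1, 1],
            [lam_p, lam_m]])
v0 = Matrix([1, 1])
a0, a1 = P.inv() * v0            # the coefficients a_0, a_1

fib = [1, 1]
for _ in range(8):
    fib.append(fib[-1] + fib[-2])

for n in range(8):
    closed = a0 * lam_p**n * x_p + a1 * lam_m**n * x_m
    print(n, simplify(closed[0]), fib[n])
```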

(c)

Note that Part 4.5.3.b tells us that although the Fibonacci sequence is not a geometric sequence, it is the sum of two geometric sequences!
By considering the numerical values of the eigenvalues \(\lambda_+\) and \(\lambda_-\text{,}\) explain why we can nonetheless treat the Fibonacci sequence as approximately geometric when \(n\) is large.
(This is true more generally: if a matrix \(A\) has one eigenvalue that is larger in absolute value than all the others, this eigenvalue is called the dominant eigenvalue. If \(A\) is the matrix of some linear recurrence, and \(A\) is diagonalizable, then we can consider the sequence as a sum of geometric sequences that will become approximately geometric in the long run.)
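A quick numerical illustration of this remark (plain Python): the ratio of consecutive Fibonacci numbers approaches the dominant eigenvalue \(\lambda_+\text{,}\) which is the sense in which the sequence behaves geometrically for large \(n\text{:}\)

```python
# Ratios of consecutive Fibonacci numbers approach lambda_+ = (1 + sqrt(5))/2.
fib = [1, 1]
for _ in range(25):
    fib.append(fib[-1] + fib[-2])

for n in (5, 10, 20):
    print(n, fib[n + 1] / fib[n])

print("lambda_+ =", (1 + 5**0.5) / 2)
```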

4.

As a more practical example, consider the following (over-simplified) predator-prey system. It is based on an example in Interactive Linear Algebra [1], by Margalit, Rabinoff, and Williams, but adapted to the wildlife here in Lethbridge. An ecosystem contains both coyotes and deer. Initially, there is a population of \(20\) coyotes, and \(500\) deer.
We assume the following:
  • the share of the deer population eaten by a typical coyote in a year is \(10\) deer
  • in the absence of the coyotes, the deer population would increase by \(50\)% per year
  • \(20\)% of the coyote population dies each year of natural causes
  • the growth rate of the coyote population depends on the number of deer: for each 100 deer, \(10\) coyote pups will survive to adulthood.
If we let \(d_t\) denote the number of deer after \(t\) years, and \(c_t\) the number of coyotes, then we have
\begin{align*} d_{t+1} \amp = 1.5d_t - 10c_t\\ c_{t+1} \amp = 0.1d_t + 0.8c_t\text{,} \end{align*}
or, in matrix form,
\begin{equation*} \mathbf{p}_{t+1} = A\mathbf{p}_t\text{,} \end{equation*}
where \(\mathbf{p}_t=\bbm d_t\\c_t\ebm\) and \(A = \bbm 1.5 \amp -10\\0.1 \amp 0.8\ebm\text{.}\)
After \(t\) years, the two populations will be given by \(\mathbf{p}_t=A^t\mathbf{p}_0\text{,}\) where \(\mathbf{p}_0=\bbm 500\\20\ebm\) gives the initial populations of the two species. If possible, we would like to be able to find a closed-form formula for \(\mathbf{p}_t\text{,}\) which would allow us to analyze the long-term predictions of our model.
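Before looking for a closed form, it can help to simply iterate the model. Here is a minimal sketch (assuming numpy) that steps the populations forward a few years by repeatedly applying \(A\text{:}\)

```python
# Step the model p_{t+1} = A p_t forward from the initial populations.
import numpy as np

A = np.array([[1.5, -10.0],
              [0.1,   0.8]])
p = np.array([500.0, 20.0])      # 500 deer, 20 coyotes at t = 0

for t in range(1, 11):
    p = A @ p
    print(f"year {t}: deer = {p[0]:.1f}, coyotes = {p[1]:.1f}")
```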

(a)

Analyze the eigenvalues of this matrix, and diagonalize it. The sympy library won't be up to the task. Instead, you will need some combination of numpy and scipy, as described by Patrick Walls on his website [2].
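As a starting point, something along these lines (assuming numpy; scipy.linalg.eig behaves similarly) will produce the eigenvalues and eigenvectors:

```python
# numpy returns the (possibly complex) eigenvalues and a matrix whose columns
# are the corresponding eigenvectors, so A = eigvecs @ diag(eigvals) @ inv(eigvecs).
import numpy as np

A = np.array([[1.5, -10.0],
              [0.1,   0.8]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)
print(eigvecs)
```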

(b)

The eigenvalues turn out to be complex! What does that tell you about the nature of the system? What is the long-term behaviour of this system?

(c)

What if you adjust the parameters? Can you come up with a system where both species flourish? Or one where they both disappear? Or one where the populations oscillate regularly?

(d)

You may have read this while wondering, “Does Sean actually know anything about ecology and population dynamics? Did he just make up those numbers?”
The answers are, respectively, no, and yes. Can you come up with numbers that are based on a realistic example? What does our model predict in that case? Is it accurate?

5.

A special type of linear dynamical system occurs when the matrix \(A\) is stochastic. A stochastic matrix is one where each entry of the matrix is between \(0\) and \(1\text{,}\) and all of the columns of the matrix sum to \(1\text{.}\)
The reason for these conditions is that the entries of a stochastic matrix represent probabilities; in particular, they are transition probabilities. That is, each number represents the probability of one state changing to another.
If a system can be in one of \(n\) possible states, we represent the system by an \(n\times 1\) vector \(\vv_t\text{,}\) whose entries indicate the probability that the system is in a given state at time \(t\text{.}\) If we know that the system starts out in a particular state, then \(\vv_0\) will have a \(1\) in one of its entries, and \(0\) everywhere else.
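For instance, here is a generic sketch (assuming numpy) of evolving such a state vector; the stochastic matrix below is a made-up two-state example, not the one from the mosquito problem that follows:

```python
# Evolve a state vector v_{t+1} = M v_t for a made-up 2-state system.
import numpy as np

M = np.array([[0.9, 0.2],        # column j holds the probabilities of moving
              [0.1, 0.8]])       # from state j to each state; columns sum to 1
v = np.array([1.0, 0.0])         # the system starts in state 1 with certainty

for t in range(50):
    v = M @ v

print(v)                         # the entries approach the long-run probabilities
```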
A Markov chain is given by such an initial vector, and a stochastic matrix. As an example, we will consider the following scenario, described in the book Shape, by Jordan Ellenberg:
A mosquito is born in a swamp, which we will call Swamp A. There is another nearby swamp, called Swamp B. Observational data suggests that when a mosquito is at Swamp A, there is a 40% chance that it will remain there, and a 60% chance that it will move to Swamp B. When the mosquito is at Swamp B, there is a 70% chance that it will remain, and a 30% chance that it will return to Swamp A.

(a)

Give a stochastic matrix \(M\) and a vector \(\vv_0\) that represent the transition probabilities and initial state given above.

(b)

By diagonalizing the matrix \(M\text{,}\) determine the long-term probability that the mosquito will be found in either swamp.

(c)

You should have found that one of the eigenvalues of \(M\) was \(\lambda=1\text{.}\) The corresponding eigenvector \(\vv\) satisfies \(M\vv=\vv\text{.}\) This is known as a steady-state vector: if our system begins with state \(\vv\text{,}\) it will remain there forever.
Confirm that if the eigenvector \(\vv\) is rescaled so that its entries sum to 1, the resulting values agree with the long-term probabilities found in the previous part.
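In general, a sketch like the following (assuming numpy, and reusing the made-up matrix from the earlier generic sketch rather than the mosquito matrix) extracts a steady-state vector by rescaling the \(\lambda=1\) eigenvector:

```python
# Find the eigenvector for the eigenvalue closest to 1, then rescale it so its
# entries sum to 1; the result gives the long-run probabilities of each state.
import numpy as np

M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

eigvals, eigvecs = np.linalg.eig(M)
i = np.argmin(np.abs(eigvals - 1))      # index of the eigenvalue closest to 1
v = np.real(eigvecs[:, i])
steady = v / v.sum()

print(steady)                           # compare with the result of iterating M
```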

6.

A stochastic matrix \(M\) is called regular if some power \(M^k\) has all positive entries. It is a theorem that every regular stochastic matrix has a steady-state vector.
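For a concrete illustration (assuming numpy): the made-up matrix below is stochastic and has a zero entry, but its square has all positive entries, so it is regular.

```python
# M is stochastic (entries in [0,1], columns sum to 1) with one zero entry,
# yet M @ M has strictly positive entries, so M is regular.
import numpy as np

M = np.array([[0.5, 1.0],
              [0.5, 0.0]])

print(M @ M)
```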

(a)

Prove that if \(M\) is a \(2\times 2\) stochastic matrix with no entry equal to zero, then \(1\) is an eigenvalue of \(M\text{.}\)

(b)

Prove that the product of two \(2\times 2\) stochastic matrices is stochastic. Conclude that if \(M\) is stochastic, so is \(M^k\) for each \(k=1,2,3,\ldots\text{.}\)

(c)

Also prove that if \(M^k\) has positive entries for some \(k\text{,}\) then \(1\) is an eigenvalue of \(M\text{.}\)

7.

By searching online or in other textbooks, find and state a more interesting/complicated example of a Markov chain problem, and show how to solve it.
[1] personal.math.ubc.ca/~tbjw/ila/dds.html
[2] patrickwalls.github.io/mathematicalpython/linear-algebra/eigenvalues-eigenvectors/