
Section 4.6 Matrix Factorizations and Eigenvalues

This section is a rather rapid tour of some cool ideas that get a lot of use in applied linear algebra. We are rather light on details here. The interested reader can consult sections 8.3–8.6 in the Nicholson textbook.

Subsection 4.6.1 Matrix Factorizations

Subsubsection 4.6.1.1 Positive Operators

Let \(T\) be a linear operator defined by a matrix \(A\text{.}\) If \(A\) is symmetric (for the case of \(\R^n\)) or hermitian (for the case of \(\C^n\)), we say that the operator \(T\) is self-adjoint.
Definition 4.6.1.
A self-adjoint operator \(T\) is positive if \(\xx^H T\xx\geq 0\) for all vectors \(\xx\neq \zer\text{.}\) It is positive-definite if \(\xx^H T\xx\gt 0\) for all nonzero \(\xx\text{.}\) If \(T=T_A\) for some matrix \(A\text{,}\) we also refer to \(A\) as a positive(-definite) matrix.
A self-adjoint matrix is positive if and only if all of its eigenvalues are non-negative. Every positive matrix \(A\) has a unique positive square root: a matrix \(R\) such that \(R^2=A\text{.}\) Since \(A\) is symmetric/hermitian, it can be diagonalized. Writing \(A = PDP^{-1}\) where \(P\) is orthogonal/unitary and
\begin{equation*} D = \bbm \lambda_1 \amp 0\amp \cdots \amp 0\\0\amp \lambda_2 \amp \cdots \amp 0\\ \vdots \amp \vdots \amp \ddots \amp \vdots\\ 0\amp 0\amp \cdots \amp \lambda_n\ebm\text{,} \end{equation*}
we have \(R=PEP^{-1}\text{,}\) where
\begin{equation*} E = \bbm \sqrt{\lambda_1} \amp 0\amp \cdots \amp 0\\0\amp \sqrt{\lambda_2} \amp \cdots \amp 0\\ \vdots \amp \vdots \amp \ddots \amp \vdots\\ 0\amp 0\amp \cdots \amp \sqrt{\lambda_n}\ebm\text{.} \end{equation*}
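For example, here is a minimal SymPy sketch of this construction. The matrix \(A\) below is a hypothetical choice (with eigenvalues \(9\) and \(1\)), not an example from the text.

          from sympy import Matrix, sqrt, simplify

          A = Matrix([[5, 4], [4, 5]])          # hypothetical positive matrix: eigenvalues 9 and 1
          P, D = A.diagonalize(normalize=True)  # A = P*D*P**(-1), with unit-length eigenvector columns
          E = D.applyfunc(sqrt)                 # take the square root of each eigenvalue
          R = P * E * P.inv()                   # candidate square root of A
          simplify(R * R)                       # should return A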
The following theorem gives us a simple way of generating positive matrices: if \(U\) is any \(n\times n\) matrix, then \(A=U^TU\) is positive. Indeed, for any \(\xx\neq \zer\) in \(\R^n\text{,}\)
\begin{equation*} \xx^T A\xx = \xx^TU^T U\xx = (U\xx)^T(U\xx) = \len{U\xx}^2\geq 0\text{.} \end{equation*}
What is interesting is that the converse to the above statement is also true. The Cholesky factorization of a positive-definite matrix \(A\) is given by \(A=U^TU\text{,}\) where \(U\) is upper-triangular, with positive diagonal entries.
Even better is that there is a very simple algorithm for obtaining the factorization: Carry the matrix \(A\) to triangular form, using only row operations of the type \(R_i+kR_j\to R_i\text{.}\) Then divide each row by the square root of the diagonal entry to obtain the matrix \(U\text{.}\)
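Here is a minimal SymPy sketch of this algorithm. The function name and the \(2\times 2\) test matrix are our own choices for illustration; there is no pivoting or error checking, so it assumes \(A\) really is positive-definite.

          from sympy import Matrix, sqrt

          def cholesky_by_row_reduction(A):
              U = Matrix(A)                                # work on a copy
              n = U.rows
              for j in range(n):
                  for i in range(j + 1, n):
                      k = -U[i, j] / U[j, j]
                      U[i, :] = U[i, :] + k * U[j, :]      # row operation R_i + k*R_j -> R_i
              for i in range(n):
                  U[i, :] = U[i, :] / sqrt(U[i, i])        # divide each row by the square root of its diagonal entry
              return U

          A = Matrix([[4, 2], [2, 2]])                     # positive-definite: it equals B^T*B for B = [[2, 1], [0, 1]]
          U = cholesky_by_row_reduction(A)
          U, U.T * U                                       # U is upper triangular, and U^T*U returns A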
The SymPy library contains the cholesky() algorithm. Note however that it produces a lower triangular matrix, rather than upper triangular. (That is, the output gives \(L=U^T\) rather than \(U\text{,}\) so you will have \(A=LL^T\text{.}\)) Let’s give it a try. First, let’s create a positive-definite matrix.
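A minimal sketch (the matrix \(B\) below is our own choice; any invertible matrix \(B\) will make \(A=B^TB\) positive-definite):

          from sympy import Matrix

          B = Matrix([[2, 1, 0], [0, 1, 3], [1, 0, 1]])   # hypothetical invertible matrix
          A = B.T * B                                     # A = B^T*B is positive-definite
          A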
Next, find the Cholesky factorization:
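Continuing with the matrix \(A\) defined above:

          L = A.cholesky()
          L, L * L.T          # L is lower triangular, and L*L^T returns A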
Note that \(L\) is not the same as the matrix \(B\text{!}\)

Subsubsection 4.6.1.2 Singular Value Decomposition

For any \(n\times n\) matrix \(A\text{,}\) the matrices \(A^TA\) and \(AA^T\) are both positive. (Exercise!) This means that we can define \(\sqrt{A^TA}\text{,}\) even if \(A\) itself is not symmetric or positive.
  • Since \(A^TA\) is symmetric, we know that it can be diagonalized.
  • Since \(A^TA\) is positive, we know its eigenvalues are non-negative.
  • This means we can define the singular values \(\sigma_i = \sqrt{\lambda_i}\) for each \(i=1,\ldots, n\text{.}\)
  • Note: it’s possible to do this even if \(A\) is not a square matrix!
The SymPy library has a function for computing the singular values of a matrix. Given a matrix A, the command A.singular_values() will return its singular values. Try this for a few different matrices below:
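For example (the matrix here is just a hypothetical choice):

          from sympy import Matrix

          A = Matrix([[2, 1], [1, 2]])
          A.singular_values()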
In fact, SymPy can even return singular values for a matrix with variable entries! Try the following example from the SymPy documentation 1 .
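The documentation's example involves a matrix with a symbolic entry; a sketch along those lines:

          from sympy import Matrix, Symbol

          x = Symbol('x', real=True)
          A = Matrix([[0, 1, 0], [0, x, 0], [-1, 0, 0]])
          A.singular_values()          # the singular values are expressions in x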
For an \(n\times n\) matrix \(A\text{,}\) we might not be able to diagonalize \(A\) (with a single orthonormal basis). However, it turns out that it’s always possible to find a pair of orthonormal bases \(\{e_1,\ldots, e_n\}, \{f_1,\ldots, f_n\}\) such that
\begin{equation*} Ax = \sigma_1(x\cdot f_1)e_1+\cdots + \sigma_n(x\cdot f_n)e_n\text{.} \end{equation*}
In matrix form, \(A = P\Sigma_A Q^T\) for orthogonal matrices \(P,Q\text{.}\)
In fact, this can be done even if \(A\) is not square, which is arguably the more interesting case! Let \(A\) be an \(m\times n\) matrix. We will find an \(m\times m\) orthogonal matrix \(P\) and \(n\times n\) orthogonal matrix \(Q\text{,}\) such that \(A=P\Sigma_A Q^T\text{,}\) where \(\Sigma_A\) is also \(m\times n\text{.}\)
The basis \(\{f_1,\ldots, f_n\}\) is an orthonormal basis of \(\R^n\) consisting of eigenvectors of \(A^TA\text{,}\) and the matrix \(Q\) is the matrix whose columns are the vectors \(f_i\text{.}\) As a result, \(Q\) is orthogonal.
The matrix \(\Sigma_A\) is the same size as \(A\text{.}\) First, we list the positive singular values of \(A\) in decreasing order:
\begin{equation*} \sigma_1\geq \sigma_2\geq \cdots \geq \sigma_r\gt 0\text{.} \end{equation*}
Then, we let \(D_A = \operatorname{diag}(\sigma_1,\ldots, \sigma_r)\text{,}\) and set
\begin{equation*} \Sigma_A = \begin{bmatrix}D_A\amp 0\\0\amp 0\end{bmatrix}\text{.} \end{equation*}
That is, we put \(D_A\) in the upper-left, and then fill in zeros as needed, until \(\Sigma_A\) is the same size as \(A\text{.}\)
Next, we compute the vectors \(e_i = \frac{1}{\len{Af_i}}Af_i\text{,}\) for \(i=1,\ldots, r\text{.}\) As shown in Nicholson, \(\{e_1,\ldots, e_r\}\) will be an orthonormal basis for the column space of \(A\text{.}\) The matrix \(P\) is constructed by extending this to an orthonormal basis of \(\R^m\text{.}\)
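For concreteness, here is a minimal SymPy sketch of this construction in condensed form (see the remark below): it keeps only the \(r\) positive singular values, so no extension to a full orthonormal basis of \(\R^m\) is needed. The function name is our own, and it assumes the eigenvalues of \(A^TA\) are explicit numbers that can be sorted.

          from sympy import Matrix, GramSchmidt, sqrt

          def condensed_svd(A):
              # orthonormal eigenvectors of A^T*A, paired with their eigenvalues
              pairs = []
              for val, mult, vects in (A.T * A).eigenvects():
                  for f in GramSchmidt(vects, orthonormal=True):
                      pairs.append((val, f))
              pairs.sort(key=lambda p: p[0], reverse=True)            # decreasing eigenvalues
              pos = [(val, f) for val, f in pairs if val > 0]         # keep the positive ones
              Q = Matrix.hstack(*[f for _, f in pos])                           # columns f_1, ..., f_r
              D = Matrix.diag(*[sqrt(val) for val, _ in pos])                   # the matrix D_A
              P = Matrix.hstack(*[(A * f) / (A * f).norm() for _, f in pos])    # columns e_1, ..., e_r
              return P, D, Q

          A = Matrix([[1, 1, 1], [1, 0, -1]])       # hypothetical test matrix
          P, D, Q = condensed_svd(A)
          P * D * Q.T                               # should return A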
All of this is a lot of work to do by hand, but it turns out that it can be done numerically, and more importantly, efficiently, by a computer. The SymPy library has an SVD algorithm, but it will not be efficient for larger matrices. In practice, most Python users will use the SVD algorithm provided by NumPy; we will stick with SymPy for simplicity and consistency.
Remark 4.6.3.
The version of the SVD given above is not used in computations, since it tends to be more resource intensive. In particular, it requires us to store more information than necessary: the last \(n-r\) columns of \(Q\text{,}\) and the last \(m-r\) columns of \(P\text{,}\) only ever get multiplied by the rows and columns of zeros in \(\Sigma_A\text{,}\) so we don’t really need to keep track of them.
Instead, most algorithms that you find will give the \(r\times r\) diagonal matrix \(D_A\text{,}\) consisting of the nonzero singular values; \(P\) gets replaced by the \(m\times r\) matrix consisting of its first \(r\) columns, and \(Q\) gets replaced by the \(n\times r\) matrix consisting of its first \(r\) columns. The resulting product is still equal to the original matrix.
In some cases, even the matrix \(D_A\) is too large, and a decision is made to truncate to some smaller subset of singular values. In this case, the resulting product is no longer equal to the original matrix, but it does provide an approximation. A discussion can be found on Wikipedia 2 .
Example 4.6.4.
Find the singular value decomposition of the matrix \(A = \begin{bmatrix}1\amp 1\amp 1\\1\amp 0\amp -1\end{bmatrix}\text{.}\)
Solution.
Using SymPy, we get the condensed SVD 3 . First, let’s check the singular values.
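In SymPy:

          from sympy import Matrix

          A = Matrix([[1, 1, 1], [1, 0, -1]])
          A.singular_values()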
Note that the values are not listed in decreasing order. Now, let’s ask for the singular value decomposition. The output consists of three matrices; the first line below assigns those matrices to the names P,S,Q.
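A sketch of that cell, assuming a recent version of SymPy that provides the singular_value_decomposition() method:

          P, S, Q = A.singular_value_decomposition()
          P, S, Q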
Note that the output is the “condensed” version, which doesn’t match the exposition above. It also doesn’t follow the same ordering convention: we’ll need to swap columns in each of the matrices. But it does give us a decomposition of the matrix \(A\text{:}\)
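Multiplying the factors back together (assuming the factors combine as \(PSQ^T\) for a real matrix; for a complex matrix the last factor would be conjugate-transposed):

          P * S * Q.T        # should return A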
To match our earlier presentation, we first set \(\Sigma_A = \bbm \sqrt{3}\amp 0\amp 0\\0 \amp \sqrt{2}\amp 0\ebm\text{.}\) Next, we need to extend the \(3\times 2\) matrix in the output above to a \(3\times 3\) matrix. We can do this by choosing any vector orthogonal to the two existing columns, and normalizing. Let’s use entries \(1/\sqrt{6},-2/\sqrt{6},1/\sqrt{6}\text{.}\) Noting that we also need to swap the first two columns (to match the fact that we swapped columns in \(\Sigma_A\)), we get the matrix
\begin{equation*} Q = \bbm \frac{\sqrt{3}}{3}\amp \frac{\sqrt{2}}{2}\amp \frac{\sqrt{6}}{6}\\ \frac{\sqrt{3}}{3}\amp 0\amp -\frac{\sqrt{6}}{3}\\ \frac{\sqrt{3}}{3}\amp -\frac{\sqrt{2}}{2} \amp \frac{\sqrt{6}}{6}\ebm\text{.} \end{equation*}
Let’s check that it is indeed orthogonal.
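One way to check is to enter the matrix and compute \(Q^TQ\text{:}\)

          from sympy import Matrix, sqrt

          Q = Matrix([[sqrt(3)/3,  sqrt(2)/2,  sqrt(6)/6],
                      [sqrt(3)/3,  0,         -sqrt(6)/3],
                      [sqrt(3)/3, -sqrt(2)/2,  sqrt(6)/6]])
          Q.T * Q            # should be the 3x3 identity matrix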
Finally, we take \(P=\bbm 1\amp 0\\0\amp 1\ebm\) (again swapping columns), which is just the identity matrix. We therefore should expect that
\begin{equation*} P\Sigma_A Q^T = \Sigma_A Q^T = A\text{.} \end{equation*}
Let’s check.
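Continuing with the matrix \(Q\) entered above:

          from sympy import Matrix, sqrt

          SigmaA = Matrix([[sqrt(3), 0, 0], [0, sqrt(2), 0]])
          SigmaA * Q.T       # should return the original matrix A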
It worked!
The Singular Value Decomposition has a lot of useful applications, some of which are described in Nicholson’s book. On a very fundamental level, the SVD provides us with information on some of the most essential properties of the matrix \(A\text{,}\) and of any system of equations with \(A\) as its coefficient matrix.
Recall the following definitions for an \(m\times n\) matrix \(A\text{:}\)
  1. The rank of \(A\) is the number of leading ones in the RREF of \(A\text{,}\) which is also equal to the dimension of the column space of \(A\) (or if you prefer, the dimension of \(\im (T_A)\)).
  2. The column space of \(A\text{,}\) denoted \(\csp(A)\text{,}\) is the subspace of \(\R^m\) spanned by the columns of \(A\text{.}\) (This is the image of the matrix transformation \(T_A\text{;}\) it is also the space of all vectors \(\mathbf{b}\) for which the system \(A\xx=\mathbf{b}\) is consistent.)
  3. The row space of \(A\text{,}\) denoted \(\operatorname{row}(A)\text{,}\) is the span of the rows of \(A\text{,}\) viewed as column vectors in \(\R^n\text{.}\)
  4. The null space of \(A\text{,}\) denoted \(\nll(A)\text{,}\) is the space of solutions to the homogeneous system \(A\xx=\zer\text{.}\) This is, of course, equal to the kernel of the associated transformation \(T_A\text{.}\)
There are some interesting relationships among these spaces, which are left as an exercise.
Exercise 4.6.5.
Let \(A\) be an \(m\times n\) matrix. Prove the following statements.
(a)
\((\operatorname{row}(A))^\bot = \nll(A)\)
Hint.
Note that \(\vv\in \nll(A)\) if and only if \(A\vv=\zer\text{,}\) and \(\vv\in(\operatorname{row}(A))^\bot\) if and only if \(\vv\cdot \mathbf{r}_i=0\) for each row \(\mathbf{r}_i\) of \(A\text{.}\)
Note also that \((A\vv)^T=\vv^T A^T\) is the (dot) product of \(\vv^T\) with each column of \(A^T\text{,}\) and each column of \(A^T\) is a row of \(A\text{.}\)
(b)
\((\csp(A))^\bot = \nll(A^T)\)
Hint.
Notice that \(\vv\in \nll(A^T)\) if and only if \(A^T\vv=\zer\text{,}\) and that \((A^T\vv)^T=\vv^T A\text{.}\) Your reasoning should be similar to that of the previous part.
Here’s the cool thing about the SVD. Let \(\sigma_1\geq \sigma_2\geq \cdots \geq \sigma_r\gt 0\) be the positive singular values of \(A\text{.}\) Let \(\vecq_1,\ldots, \vecq_r,\ldots, \vecq_n\) be the orthonormal basis of eigenvectors for \(A^TA\text{,}\) and let \(\vecp_1,\ldots, \vecp_r,\ldots, \vecp_m\) be the orthonormal basis of \(\R^m\) constructed in the SVD algorithm. Then:
  1. \(\displaystyle \rank(A)=r\)
  2. \(\vecq_1,\ldots, \vecq_r\) form a basis for \(\operatorname{row}(A)\text{.}\)
  3. \(\vecp_1,\ldots, \vecp_r\) form a basis for \(\csp(A)\) (and thus, the “row rank” and “column rank” of \(A\) are the same).
  4. \(\vecq_{r+1},\ldots, \vecq_n\) form a basis for \(\nll(A)\text{.}\) (And these are therefore the basis solutions of \(A\xx=\zer\text{!}\))
  5. \(\vecp_{r+1},\ldots, \vecp_m\) form a basis for \(\nll(A^T)\text{.}\)
If you want to explore this further, have a look at the excellent notebook by Dr. Juan H Klopper 4 . The ipynb file can be found on his GitHub page 5 . In it, he takes you through various approaches to finding the singular value decomposition, using the method above, as well as using NumPy and SciPy (which, for industrial applications, are superior to SymPy).

Subsubsection 4.6.1.3 QR Factorization

Suppose \(A\) is an \(m\times n\) matrix with independent columns. (Question: for this to happen, which is true — \(m\geq n\text{,}\) or \(n\geq m\text{?}\))
A \(QR\)-factorization of \(A\) is a factorization of the form \(A=QR\text{,}\) where \(Q\) is \(m\times n\text{,}\) with orthonormal columns, and \(R\) is an invertible upper-triangular (\(n\times n\)) matrix with positive diagonal entries. If \(A\) is a square matrix, \(Q\) will be orthogonal.
A lot of the methods we’re looking at here involve more sophisticated numerical techniques than SymPy is designed to handle. If we wanted to spend time on these topics, we’d have to learn a bit about the NumPy and SciPy packages, which have built-in tools for computing things like the polar decomposition and the singular value decomposition. However, SymPy does know how to do \(QR\) factorization. After defining a matrix A, we can use the command

          Q, R = A.QRdecomposition()
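For example (the matrix \(A\) below is a hypothetical choice with linearly independent columns):

          from sympy import Matrix

          A = Matrix([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
          Q, R = A.QRdecomposition()
          Q, R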
Let’s check that the matrix \(Q\) really is orthogonal:
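Continuing with the matrix \(Q\) computed above:

          Q.T * Q            # should be the identity matrix, since the columns of Q are orthonormal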
Details of how to perform the QR factorization can be found in Nicholson’s textbook. It’s essentially a consequence of performing the Gram-Schmidt algorithm on the columns of \(A\text{,}\) and keeping track of our work.
The calculation above is a symbolic computation, which is nice for understanding what’s going on. The reason why the \(QR\) factorization is useful in practice is that there are efficient numerical methods for doing it (with good control over rounding errors). Our next topic looks at a useful application of the \(QR\) factorization.

Subsection 4.6.2 Computing Eigenvalues

Our first method focuses on the dominant eigenvalue of a matrix. An eigenvalue is dominant if it is larger in absolute value than all other eigenvalues. For example, if \(A\) has eigenvalues \(1,3,-2,-5\text{,}\) then \(-5\) is the dominant eigenvalue.
If \(A\) has eigenvalues \(1,3,0,-4,4\) then there is no dominant eigenvalue. Any eigenvector corresponding to a dominant eigenvalue is called a dominant eigenvector.

Subsubsection 4.6.2.1 The Power Method

If a matrix \(A\) has a dominant eigenvalue, there is a method for finding it (approximately) that does not involve finding and factoring the characteristic polynomial of \(A\text{.}\)
We start with some initial guess \(x_0\) for a dominant eigenvector. We then set \(x_{k+1} = Ax_k\) for each \(k\geq 0\text{,}\) giving a sequence
\begin{equation*} x_0, Ax_0, A^2x_0, A^3x_0,\ldots\text{.} \end{equation*}
We expect (for reasons we’ll explain) that \(x_k\) will line up with a dominant eigenvector \(x\) as \(k\to\infty\text{:}\) the direction of \(x_k\) approaches that of \(x\text{,}\) even though the lengths \(\lVert x_k\rVert\) may grow or shrink. Let’s try an example.
The dominant eigenvalue is \(\lambda = 7\text{.}\) Let’s try an initial guess of \(x_0=\begin{bmatrix}1\\0\end{bmatrix}\) and see what happens.
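As a sketch, take the following hypothetical matrix (it has eigenvalues \(7\) and \(-3\text{,}\) with dominant eigenvector \(\begin{bmatrix}2\\3\end{bmatrix}\)), and iterate ten times:

          from sympy import Matrix

          A = Matrix([[1, 4], [6, 3]])     # hypothetical stand-in with dominant eigenvalue 7
          x = Matrix([1, 0])               # initial guess x_0
          for k in range(10):
              x = A * x                    # x_{k+1} = A*x_k
          x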
We might want to confirm whether that rather large fraction is close to \(\frac23\text{.}\) To do so, we can get the computer to divide the numerator by the denominator.
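Continuing with the final iterate x from above:

          x[0] / x[1]        # ratio of the two entries of x_10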
The above might show you the fraction rather than its decimal approximation. (This may depend on whether you’re on Sage or Jupyter.) To get the decimal, try wrapping the above in float() (or N()), or appending .evalf().
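For example:

          float(x[0] / x[1])     # for the hypothetical matrix above, this is close to 2/3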
For the eigenvalue, we note that if \(Ax=\lambda x\text{,}\) then
\begin{equation*} \frac{x\cdot Ax}{\lVert x\rVert^2} = \frac{x\cdot (\lambda x)}{\lVert x\rVert^2} = \lambda\text{.} \end{equation*}
This leads us to consider the Rayleigh quotients
\begin{equation*} r_k = \frac{x_k\cdot x_{k+1}}{\lVert x_k\rVert^2}\text{.} \end{equation*}
We can convert a rational number r to a float using either N(r) or r.evalf(). (The latter seems to be the better bet when working with a list.)
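A sketch, re-using the hypothetical matrix from above and converting each Rayleigh quotient with evalf():

          from sympy import Matrix

          A = Matrix([[1, 4], [6, 3]])     # hypothetical stand-in with dominant eigenvalue 7
          x = Matrix([1, 0])
          quotients = []
          for k in range(10):
              x_next = A * x
              quotients.append((x.dot(x_next) / x.dot(x)).evalf())   # r_k = (x_k . x_{k+1}) / ||x_k||^2
              x = x_next
          quotients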

Subsubsection 4.6.2.2 The QR Algorithm

Given an \(n\times n\) matrix \(A\text{,}\) we know we can write \(A=QR\text{,}\) with \(Q\) orthogonal and \(R\) upper-triangular. The \(QR\)-algorithm exploits this fact. We set \(A_1=A\text{,}\) and write \(A_1=Q_1R_1\text{.}\)
Then we set \(A_2 = R_1Q_1\text{,}\) and factor: \(A_2=Q_2R_2\text{.}\) Notice \(A_2 = R_1Q_1 = Q_1^TA_1Q_1\text{.}\) Since \(A_2\) is similar to \(A_1\text{,}\) \(A_2\) has the same eigenvalues as \(A_1=A\text{.}\)
Next, set \(A_3 = R_2Q_2\text{,}\) and factor as \(A_3 = Q_3R_3\text{.}\) Since \(A_3 = Q_2^TA_2Q_2\text{,}\) \(A_3\) has the same eigenvalues as \(A_2\text{.}\) In fact, \(A_3 = Q_2^T(Q_1^TAQ_1)Q_2 = (Q_1Q_2)^TA(Q_1Q_2)\text{.}\)
After \(k\) steps we have \(A_{k+1} = (Q_1\cdots Q_k)^TA(Q_1\cdots Q_k)\text{,}\) which still has the same eigenvalues as \(A\text{.}\) By some sort of dark magic, this sequence of matrices converges to an upper triangular matrix with eigenvalues on the diagonal!
Consider the matrix \(A = \begin{bmatrix}5&-2&3\\0&4&0\\0&-1&3\end{bmatrix}\)
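Here is a sketch of the first step. We convert to floating-point entries with evalf() so that repeated factorizations stay readable; that choice is ours, and exact entries would also work for a single step.

          from sympy import Matrix

          A = Matrix([[5, -2, 3], [0, 4, 0], [0, -1, 3]])
          A1 = A.evalf()                    # floating-point copy of A
          Q1, R1 = A1.QRdecomposition()
          A2 = R1 * Q1
          A2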
Now we repeat the process:
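Continuing with A2 from above:

          Q2, R2 = A2.QRdecomposition()
          A3 = R2 * Q2
          A3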
Do this a few more times, and see what results! (If someone can come up with a way to code this as a loop, let me know!) The diagonal entries should get closer to \(5,4,3\text{,}\) respectively, and the \((3,2)\) entry should get closer to \(0\text{.}\)
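One possible loop (a sketch: it just repeats the factor-and-remultiply step a fixed number of times):

          Ak = A.evalf()
          for k in range(50):
              Qk, Rk = Ak.QRdecomposition()
              Ak = Rk * Qk
          Ak          # approximately upper triangular, with the eigenvalues 5, 4, 3 on the diagonal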

Exercises 4.6.3 Exercises

1.

Find the singular values \(\sigma_1 \ge \sigma_2\) of
\begin{equation*} A = {\left[\begin{array}{cc} -8 \amp 0\cr 0 \amp 4 \end{array}\right]}. \end{equation*}

2.

Find the singular values \(\sigma_1 \ge \sigma_2 \ge \sigma_3\) of
\begin{equation*} A = {\left[\begin{array}{ccc} 5 \amp 0 \amp -3\cr 3 \amp 0 \amp 5 \end{array}\right]}. \end{equation*}

3.

Find the \(QR\) factorization of the matrix \({\left[\begin{array}{cc} 4 \amp 4\cr 6 \amp 13\cr 12 \amp 33 \end{array}\right]}.\)

4.

Find the \(QR\) factorization of the matrix \({\left[\begin{array}{ccc} 2 \amp 5 \amp 7\cr 4 \amp -8 \amp 11\cr 4 \amp 1 \amp -1 \end{array}\right]}.\)
1. docs.sympy.org/latest/modules/matrices/matrices.html#sympy.matrices.matrices.MatrixEigen.singular_values
2. en.wikipedia.org/wiki/Singular_value_decomposition
3. docs.sympy.org/latest/modules/matrices/
4. www.juanklopper.com/wp-content/uploads/2015/03/III_05_Singular_value_decomposition.html
5. github.com/juanklopper/MIT_OCW_Linear_Algebra_18_06