
Section B.3 SymPy for linear algebra

SymPy is a Python library for symbolic algebra. On its own, it’s not as powerful as programs like Maple, but it handles a lot of basic manipulations in a fairly simple fashion, and when we need more power, it can interface with other Python libraries.
Another advantage of SymPy is sophisticated “pretty-printing”. In fact, we can enable MathJax within SymPy, so that output is rendered in the same way as when LaTeX is entered in a markdown cell.

Subsection B.3.1 SymPy basics

Running the following Sage cell will load the SymPy library and turn on MathJax.
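A minimal version of that cell looks something like this:
    from sympy import *
    init_printing()    # enable SymPy's nicest available output, including MathJax in a notebook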
Note: the command from sympy import * given above is not best practice. It can be convenient when you want to do a quick calculation (for example, on a test), but can have unintended consequences. It is better to only load those parts of a library you are going to use; for example, from sympy import Matrix, init_printing.
If you are going to be working with multiple libraries, and more than one of them defines a certain command, you can use prefixes to indicate what library you want to use. For example, if you enter import sympy as sy, each SymPy command will need to be prefixed with sy; for example, you might write sy.Matrix instead of simply Matrix. Let’s use SymPy to create a \(2\times 3\) matrix.
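For example (the entries here are just for illustration):
    A = Matrix(2, 3, [1, 2, 3, 4, 5, 6])
    A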
The A on the second line asks Python to print the matrix using SymPy’s printing support. If we use Python’s print command, we get something different; note that the next Sage cell remembers the values from the previous one, if you are using the HTML version of the book.
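For instance:
    print(A)    # plain text output: Matrix([[1, 2, 3], [4, 5, 6]])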
We’ll have more on matrices in Subsection B.3.2. For now, let’s look at some more basic constructions. One basic thing to be mindful of is the type of numbers we’re working with. For example, if we enter 2/7 in a code cell, Python will carry out the division and return a floating point number.
(If you are using Sage cells in HTML rather than Jupyter, this will automatically be interpreted as a fraction.)
But we often do linear algebra over the rational numbers, and so SymPy will let you specify this. First, you’ll need to load the Rational function.
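For example:
    from sympy import Rational
    Rational(2, 7)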
You might not think to add the comma above, because you’re used to writing fractions like \(2/7\text{.}\) Fortunately, the SymPy authors thought of that:
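You can try feeding the quotient directly to Rational:
    Rational(2/7)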
Hmm... You might have got the output you expected in the cell above, but maybe not. If you got a much worse looking fraction, read on.
Another cool command is the sympify command, which can be called with the shortcut S. The input 2 is interpreted as an int by Python, but S(2) is a “SymPy Integer”:
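For example:
    from sympy import S
    S(2)/7    # an exact SymPy rational, displayed as 2/7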
Of course, sometimes you do want to use floating point, and you can specify this, too:
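Something like the following will do:
    from sympy import Float
    Float(2)/7    # a floating point approximation to 2/7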
One note of caution: Float is part of SymPy, and not the same as the core Python float command. You can also put decimals into the Rational command and get the corresponding fraction:
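For instance:
    Rational(0.5)    # gives 1/2, since 0.5 has an exact binary representation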
The only thing to beware of is that computers convert from decimal to binary and then back again, and sometimes weird things can happen:
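For example:
    Rational(0.2)    # a giant fraction such as 3602879701896397/18014398509481984, not 1/5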
Of course, there are workarounds. One way is to enter \(0.2\) as a string:
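Like this:
    Rational('0.2')    # the string is parsed exactly, giving 1/5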
Another is to limit the size of the denominator:
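For instance:
    Rational(0.2).limit_denominator(100)    # also recovers 1/5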
Try some other examples above. Some inputs to try are 1.23 and 23e-10.
We can also deal with repeating decimals. These are entered as strings, with square brackets around the repeating part. Then we can “sympify”:
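For example, something along these lines:
    from sympy import S
    S('0.[1]')    # the repeating decimal 0.111..., returned as 1/9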
Finally, SymPy knows about mathematical constants like \(e, \pi, i\text{,}\) which you’ll need from time to time in linear algebra. If you ever need \(\infty\text{,}\) this is entered as oo.
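For example:
    from sympy import E, pi, I, oo, exp
    exp(I*pi)    # SymPy knows Euler's identity: this evaluates to -1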
Finally, from time to time you may need to include parameters (variables) in an expression. Typical code for this is of the form a, b, c = symbols('a b c', real = True, constant = True). Here, we introduce the symbols a,b,c with the specification that they represent real-valued constants.
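A quick sketch (the matrix here is just an illustration):
    from sympy import symbols, Matrix
    a, b, c = symbols('a b c', real=True, constant=True)
    B = Matrix([[a, b], [0, c]])
    B.det()    # a*c, computed symbolically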

Subsection B.3.2 Matrices in SymPy

Here we collect some of the SymPy commands used throughout this text, for ease of reference. For further details, please consult the online documentation at docs.sympy.org/latest/modules/matrices/matrices.html.
To create a \(2\times 3\) matrix, we can write either A=Matrix(2,3,[1,2,3,4,5,6]) or A=Matrix([[1,2,3],[4,5,6]]), where of course the size and entries can be changed to whatever you want. The former method is a bit faster, but once your matrices get a bit bigger, the latter method is less prone to typos.
Also of note: a column vector \(\bbm 1\\2\\3\ebm\) can be entered using Matrix([1,2,3]). There are also certain built-in special matrices. To get an \(n\times n\) identity matrix, use eye(n). To get an \(m\times n\) zero matrix, use zeros(m,n), or zeros(n) for a square matrix. There is also syntax for diagonal matrices, such as diag(1,2,3). What’s cool is that you can even use this for block diagonal matrices:
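For example (the blocks here are just for illustration):
    from sympy import Matrix, diag
    B = Matrix([[2, 3], [4, 5]])
    diag(1, B, 6)    # a 4x4 block diagonal matrix with blocks 1, B, and 6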
To get the reduced row-echelon form of the matrix \(A\text{,}\) simply use A.rref(). Addition, subtraction, and multiplication use the obvious syntax: A+B, A*B, etc. The determinant of a square matrix is given by A.det(). Inverses can be computed using A.inv() or A**-1. The latter is rather natural, since powers in general are entered as A**n for \(A^n\text{.}\)
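For instance, with a small sample matrix:
    from sympy import Matrix
    A = Matrix([[1, 2], [3, 4]])
    A.rref()    # returns the reduced matrix together with a tuple of pivot columns
    A.det()     # -2
    A.inv()     # the same result as A**-1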
In most cases where you want to reduce a matrix, you’re going to want to simply use the rref function. But there are times where this can be overzealous; for example, if you have a matrix with one or more symbols. One option is to replace A.rref() with A.echelon_form(). The echelon_form function creates zeros below each pivot, but does not rescale the pivots to create leading ones.
For example, let’s take the matrix \(A = \bbm a \amp 2\amp b\\2\amp 1\amp a\\2a\amp b\amp 3\ebm\text{.}\) Note the difference in output between rref and echelon_form.
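Something like the following will show the difference (only the last line of a cell is displayed, so you may want to run the two commands separately):
    from sympy import Matrix, symbols
    a, b = symbols('a b')
    A = Matrix([[a, 2, b], [2, 1, a], [2*a, b, 3]])
    A.rref()            # divides through by the pivots to produce leading ones
    A.echelon_form()    # clears below the pivots but leaves them unscaled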
It is possible to manually perform row operations when you need additional control. This is achieved using the function A.elementary_row_op(<arguments>), with arguments op,row,k,row1,row2.
We have the following general syntax (a short example follows the list):
  • To swap two rows:
    • op='n<->m'
    • row1=i, where i is the index of the first row being swapped (remembering that rows are indexed starting with \(0\) for the first row).
    • row2=j, where j is the index of the second row being swapped.
  • To rescale a row:
    • op='n->kn'
    • row=i, where i is the index of the row being rescaled.
    • k=c, where c is the value of the scalar you want to multiply by.
  • To add a multiple of one row to another:
    • op='n->n+km'
    • row=i, where i is the index of the row you want to change.
    • k=c, where c is the multiple of the other row.
    • row2=j, where j is the index of the other row.
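Here is a brief example of each operation (the matrix is just for illustration; note that each call returns a new matrix rather than changing A):
    from sympy import Matrix
    A = Matrix([[1, 2], [3, 4]])
    A.elementary_row_op(op='n<->m', row1=0, row2=1)         # swap rows 0 and 1
    A.elementary_row_op(op='n->kn', row=0, k=5)             # multiply row 0 by 5
    A.elementary_row_op(op='n->n+km', row=1, k=-3, row2=0)  # add -3 times row 0 to row 1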
When studying matrix transformations, we are often interested in the null space and column space, since these correspond to the kernel and image of a linear transformation. This is achieved, simply enough, using A.nullspace() and A.columnspace(). The output will be a basis of column vectors for these spaces, and these are exactly the ones you’d find doing Gaussian elimination by hand.
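For example:
    from sympy import Matrix
    A = Matrix([[1, 2, 3], [2, 4, 6]])
    A.nullspace()      # a list of basis vectors for the null space
    A.columnspace()    # a list of basis vectors for the column space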
Once you get to orthogonality, you’ll want to be able to compute dot products and transposes. These are simple enough. The dot product of vectors X,Y is simply X.dot(Y). The transpose of a matrix A is A.T. As we should expect, \(X\dotp Y = X^TY\text{.}\)
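For instance:
    from sympy import Matrix
    X = Matrix([1, 2, 3])
    Y = Matrix([4, 5, 6])
    X.dot(Y)    # 32
    (X.T)*Y     # the same value, as a 1x1 matrix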
Of course, nobody wants to do things like the Gram-Schmidt algorithm by hand. Fortunately, there’s a function for that. If we have vectors X,Y,Z, we can make a list L=[X,Y,Z], and perform Gram-Schmidt with GramSchmidt(L). If you want your output to be an orthonormal basis (and not merely orthogonal), then you can use GramSchmidt(L, True).
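For example, with three (linearly independent) sample vectors:
    from sympy import Matrix, GramSchmidt
    X = Matrix([1, 0, 1])
    Y = Matrix([1, 1, 0])
    Z = Matrix([0, 1, 1])
    L = [X, Y, Z]
    GramSchmidt(L, True)    # True requests an orthonormal, not merely orthogonal, basis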
It’s useful to note that the output from functions like nullspace() is automatically a Python list. So one can use simple code like the following:
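For instance, something like this feeds a null space basis straight into Gram-Schmidt:
    from sympy import Matrix, GramSchmidt
    A = Matrix([[1, 1, 1]])
    B = A.nullspace()       # an ordinary Python list of column vectors
    GramSchmidt(B, True)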
If for some reason you need to reference particular vectors in a list, this can be done by specifying the index. If L=[X,Y,Z], then L[0]==X, L[1]==Y, and L[2]==Z.
Next up is eigenvalues and eigenvectors. Given an \(n\times n\) matrix \(A\text{,}\) we have the following:
  • For the characteristic polynomial, use A.charpoly(). However, the result will give you something SymPy calls a “PurePoly”, and the factor command will have no effect. Instead, use A.charpoly().as_expr().
  • If we know that \(3\) is an eigenvalue of a \(4\times 4\) matrix \(A\text{,}\) one way to get a basis for the eigenspace \(E_3(A)\) is to do:
    B=A-3*eye(4)
    B.nullspace()
    
    If you just want all the eigenvalues and eigenvectors without going through the steps, then you can simply execute A.eigenvects(). The result is a list of triples: each triple consists of an eigenvalue, its multiplicity, and a basis for the corresponding eigenspace.
    For diagonalization, one can do A.diagonalize(). But this will not necessarily produce an orthogonal diagonalization for a symmetric matrix. A short sketch illustrating these eigenvalue commands follows this list.
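For instance, for a \(2\times 2\) symmetric matrix (chosen only for illustration):
    from sympy import Matrix
    A = Matrix([[2, 1], [1, 2]])
    A.charpoly().as_expr()    # lambda**2 - 4*lambda + 3
    A.eigenvects()            # [(1, 1, [Matrix([[-1], [1]])]), (3, 1, [Matrix([[1], [1]])])]
    P, D = A.diagonalize()    # A is equal to P*D*P**-1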
For complex vectors and matrices, the main additional operation we need is the hermitian conjugate. The hermitian conjugate of a matrix A is called using A.H, which is simple enough. Unfortunately, there is no built-in complex inner product, perhaps because mathematicians and physicists cannot agree on which of the two vectors in the inner product should have the complex conjugate applied to it. Since we define the complex inner product by \(\langle \zz,\ww\rangle = \zz\dotp\bar{\ww}\text{,}\) we can execute the inner product in SymPy using Z.dot(W.H), or (W.H)*Z, although the latter gives the output as a \(1\times 1\) matrix rather than a number.
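For example, with a pair of sample complex vectors:
    from sympy import Matrix, I
    Z = Matrix([1, I])
    W = Matrix([1, 1 + I])
    Z.dot(W.H)    # 2 + I, as a number
    (W.H)*Z       # the same value, as a 1x1 matrix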
Don’t forget that when entering complex matrices, the complex unit is entered as I. Also, complex expressions are not simplified by default, so you will often need to wrap your output line in simplify(). The Sage cell below contains code for the unitary diagonalization of a \(2\times 2\) hermitian matrix with distinct eigenvalues. When doing a problem like this in a Sage cell, it’s a good idea to execute each line of code (and display output) before moving on to the next. In this case, printing the output for the list L given by A.eigenvects() helps explain the complicated-looking definitions of the vectors v,w. Of course, if we had a matrix with repeated eigenvalues, we’d need to add steps involving Gram-Schmidt.
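A sketch of such a cell, for one sample hermitian matrix (the matrix and the variable names are just one possibility), might look like this:
    from sympy import Matrix, I, simplify
    A = Matrix([[4, 3 - I], [3 + I, 1]])
    L = A.eigenvects()       # [(eigenvalue, multiplicity, [eigenvector]), ...]
    v = L[0][2][0]           # an eigenvector for the first eigenvalue
    w = L[1][2][0]           # an eigenvector for the second eigenvalue
    u = v/v.norm()           # normalize, so the columns below are orthonormal
    t = w/w.norm()
    U = u.row_join(t)        # unitary, since the eigenvalues are distinct
    simplify(U.H*A*U)        # a diagonal matrix with the eigenvalues of A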
There are a few other commands that might come in handy as you work through this material; a brief sketch follows the list:
  • Two matrices can be glued together. If matrices A,B have the same number of rows, the command A.row_join(B) will glue the matrices together, left-to-right. If they have the same number of columns, A.col_join(B) will glue them together top-to-bottom.
  • To insert a column C into a matrix M (of appropriate size) as the \(j\)th column, you can do M.col_insert(j,C). Just remember that columns are indexed starting at zero, so you might want j-1 instead of j. This can be useful for things like solving a system \(A\xx=B\text{,}\) where you want to append the column \(B\) to the matrix \(A\text{.}\)
  • A \(QR\)-factorization can be performed using Q,R=A.QRdecomposition().
  • The Jordan canonical form \(M\) of a matrix \(A\) can be obtained (along with the matrix \(P\) whose columns are a Jordan basis) using P,M=A.jordan_form().
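A brief sketch of these commands, with a small sample matrix:
    from sympy import Matrix
    A = Matrix([[1, 2], [3, 4]])
    B = Matrix([5, 6])
    A.row_join(B)                 # the augmented matrix of the system Ax = B
    Q, R = A.QRdecomposition()    # A equals Q*R
    P, M = A.jordan_form()        # A equals P*M*P**-1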