
Section 2.3 Isomorphisms, composition, and inverses

We ended the last section with an important result. Exercise 2.2.17 showed that existence of an injective linear map \(T:V\to W\) is equivalent to \(\dim V\leq \dim W\text{,}\) and that existence of a surjective linear map is equivalent to \(\dim V\geq \dim W\text{.}\) It’s probably not surprising, then, that existence of a bijective linear map \(T:V\to W\) is equivalent to \(\dim V = \dim W\text{.}\)
In a certain sense that we will now try to make precise, vector spaces of the same dimension are equivalent: they may look very different, but in fact, they contain exactly the same information, presented in different ways.

Subsection 2.3.1 Isomorphisms

Definition 2.3.1.

A bijective linear transformation \(T:V\to W\) is called an isomorphism. If such a map exists, we say that \(V\) and \(W\) are isomorphic, and write \(V\cong W\text{.}\)
Theorem 2.3.2.

If \(V\) and \(W\) are finite-dimensional vector spaces, then \(V\cong W\) if and only if \(\dim V = \dim W\text{.}\)
We again need to prove both directions of an “if and only if”. If an isomorphism exists, can you see how to use Exercise 2.2.17 to show the dimensions are equal?
If the dimensions are equal, you need to construct an isomorphism. Since \(V\) and \(W\) are finite-dimensional, you can choose a basis for each space. What can you say about the sizes of these bases? How can you use them to define a linear transformation? (You might want to remind yourself what Theorem 2.1.8 says.)
If \(T:V\to W\) is a bijection, then it is both injective and surjective. Since \(T\) is injective, \(\dim V\leq \dim W\text{,}\) by Exercise 2.2.17. By this same exercise, since \(T\) is surjective, we must have \(\dim V\geq \dim W\text{.}\) It follows that \(\dim V=\dim W\text{.}\)
Suppose now that \(\dim V =\dim W\text{.}\) Then we can choose bases \(\{\vv_1,\ldots, \vv_n\}\) of \(V\text{,}\) and \(\{\ww_1,\ldots, \ww_n\}\) of \(W\text{.}\) Theorem 2.1.8 then guarantees the existence of a linear map \(T:V\to W\) such that \(T(\vv_i)=\ww_i\) for each \(i=1,2,\ldots, n\text{.}\) Repeating the arguments of Exercise 2.2.17 shows that \(T\) is a bijection.
Buried in the theorem above is the following useful fact: an isomorphism \(T:V\to W\) takes any basis of \(V\) to a basis of \(W\). Another remarkable result of the above theorem is that any two vector spaces of the same dimension are isomorphic! In particular, we have the following theorem.

Theorem 2.3.3.

If \(\dim V = n\text{,}\) then \(V\cong \R^n\text{.}\)

Exercise 2.3.4.

Theorem 2.3.3 is a direct consequence of Theorem 2.3.2. But it’s useful to understand how it works in practice. Note that in the definition below, we use the term ordered basis. This just means that we fix the order in which the vectors in our basis are written.

Definition 2.3.5.

Let \(V\) be a finite-dimensional vector space, and let \(B=\{\mathbf{e}_1,\ldots, \mathbf{e}_n\}\) be an ordered basis for \(V\text{.}\) The coefficient isomorphism associated to \(B\) is the map \(C_B:V\to \R^n\) defined by
\begin{equation*} C_B(c_1\mathbf{e}_1+c_2\mathbf{e}_2+\cdots +c_n\mathbf{e}_n)=\bbm c_1\\c_2\\\vdots \\c_n\ebm\text{.} \end{equation*}
Note that this is a well-defined map since every vector in \(V\) can be written uniquely in terms of the basis \(B\text{.}\) But also note that the ordering of the vectors in \(B\) is important: changing the order changes the position of the coefficients in \(C_B(\vv)\text{.}\)
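To see the coefficient isomorphism in action numerically, note that for a subspace of \(\R^n\) computing \(C_B(\vv)\) amounts to solving a linear system whose columns are the basis vectors. Here is a minimal sketch using numpy; the particular basis of \(\R^2\) is an illustrative choice, not one from the text.

```python
import numpy as np

# An (illustrative) ordered basis B = {e1, e2} for R^2, stored as the
# columns of a matrix. C_B(v) solves c1*e1 + c2*e2 = v for (c1, c2).
B = np.column_stack([[1.0, 1.0], [1.0, -1.0]])

def coeff_iso(v):
    """Coefficient isomorphism C_B: coordinates of v relative to B."""
    return np.linalg.solve(B, v)

v = np.array([3.0, 1.0])
c = coeff_iso(v)               # coefficients of v in the basis B
assert np.allclose(B @ c, v)   # reconstructing v from its coefficients
```

Uniqueness of the coefficients is reflected in the fact that `np.linalg.solve` requires the matrix `B` to be invertible, i.e. that its columns really do form a basis.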
The coefficient isomorphism is especially useful when we want to analyze a linear map computationally. Suppose we’re given \(T:V\to W\) where \(V, W\) are finite-dimensional. Let us choose bases \(B=\{\vv_1,\ldots, \vv_n\}\) of \(V\) and \(B' = \{\ww_1,\ldots, \ww_m\}\) of \(W\text{.}\) The choice of these two bases determines scalars \(a_{ij}, 1\leq i\leq m, 1\leq j\leq n\text{,}\) such that
\begin{equation*} T(\vv_j) = a_{1j}\ww_1+a_{2j}\ww_2+\cdots + a_{mj}\ww_m, \end{equation*}
for each \(j=1,2,\ldots, n\text{.}\) The resulting matrix \(A=[a_{ij}]\) defines a matrix transformation \(T_A:\R^n\to \R^m\) such that
\begin{equation*} T_A\circ C_B = C_{B'}\circ T\text{.} \end{equation*}
The relationship among the four maps used here is best captured by the “commutative diagram” in Figure 2.3.6.
Figure 2.3.6. Defining the matrix of a linear map with respect to choices of basis.
The matrix of a linear transformation is studied in more detail in Section 5.1.
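The identity \(T_A\circ C_B = C_{B'}\circ T\) can be checked numerically on a small example. Below is a sketch for the derivative map on polynomials of degree at most 2 (an illustrative choice of \(T\) and of monomial bases, not one made in the text), with \(B=\{1,x,x^2\}\) and \(B'=\{1,x\}\text{.}\)

```python
import numpy as np

# Column j of A holds C_{B'}(T(x^j)) for T(p) = p',
# with bases B = {1, x, x^2} and B' = {1, x}.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

def C_B(p):
    """Coefficient isomorphism on V: p = (c0, c1, c2) means c0 + c1*x + c2*x^2."""
    return np.array(p, dtype=float)

def T(p):
    """The derivative: c0 + c1*x + c2*x^2  ->  c1 + 2*c2*x."""
    c0, c1, c2 = p
    return (c1, 2.0 * c2)

p = (5.0, -3.0, 4.0)             # the polynomial 5 - 3x + 4x^2
left = A @ C_B(p)                # T_A(C_B(p))
right = np.array(T(p))           # C_{B'}(T(p)): coordinates of p' in B'
assert np.allclose(left, right)  # the diagram commutes on this input
```

Going around the square either way from \(p\) lands on the same coefficient vector, which is exactly what Figure 2.3.6 expresses.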

Subsection 2.3.2 Composition and inverses

Recall that for any function \(f:A\to B\text{,}\) if \(f\) is a bijection, then it has an inverse: a function \(f^{-1}:B\to A\) that “undoes” the action of \(f\text{.}\) That is, if \(f(a)=b\text{,}\) then \(f^{-1}(b)=a\text{,}\) or in other words, \(f^{-1}(f(a))=a\) — the composition \(f^{-1}\circ f\) is equal to the identity function on \(A\text{.}\)
The same is true for composition in the other order: \(f\circ f^{-1}\) is the identity function on \(B\text{.}\) One way of interpreting this is to observe that just as \(f^{-1}\) is the inverse of \(f\text{,}\) so is \(f\) the inverse of \(f^{-1}\text{;}\) that is, \((f^{-1})^{-1}=f\text{.}\)
Since linear transformations are a special type of function, the above is true for a linear transformation as well. But if we want to keep everything under the umbrella of linear algebra, there are two things we should check: that the composition of two linear transformations is another linear transformation, and that the inverse of a linear transformation is a linear transformation.

Exercise 2.3.7.

Show that the composition of two linear maps is again a linear map.

Exercise 2.3.8.

Given transformations \(S:V\to W\) and \(T:U\to V\text{,}\) show that:
  1. \(\displaystyle \ker T\subseteq \ker ST\)
  2. \(\displaystyle \im ST\subseteq \im S\)
Hint.
This is simpler than it looks! It’s mostly a matter of chasing the definitions: see Remark 2.2.3.

Exercise 2.3.9.

Let \(T:V\to W\) be a bijective linear transformation. Show that \(T^{-1}:W\to V\) is a linear transformation.
Hint.
Since \(T\) is a bijection, every \(\ww\in W\) can be associated with some \(\vv\in V\text{.}\)

Remark 2.3.10.

With this connection between linear maps (in general) and matrices, it can be worthwhile to pause and consider invertibility in the context of matrices. Recall that an \(n\times n\) matrix \(A\) is invertible if there exists a matrix \(A^{-1}\) such that \(AA^{-1}=I_n\) and \(A^{-1}A=I_n\text{.}\)
The same definition can be made for linear maps. We’ve defined what it means for a map \(T:V\to W\) to be invertible as a function. In particular, we relied on the fact that any bijection has an inverse.
Let \(A\) be an \(m\times n\) matrix, and let \(B\) be an \(n\times k\) matrix. Then we have linear maps
\begin{equation*} \R^k \xrightarrow{T_B} \R^n\xrightarrow{T_A} \R^m\text{,} \end{equation*}
and the composition \(T_A\circ T_B:\R^k\to \R^m\) satisfies
\begin{equation*} T_A\circ T_B(\xx) = T_A(T_B(\xx)) = T_A(B\xx)=A(B\xx)=(AB)\xx=T_{AB}(\xx)\text{.} \end{equation*}
Note that the rules given in elementary linear algebra, for the relative sizes of matrices that can be multiplied, are simply a manifestation of the fact that to compose functions, the range of the first must be contained in the domain of the second.
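The identity \(T_A\circ T_B = T_{AB}\) is easy to confirm numerically; the matrix sizes below are an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # T_A : R^4 -> R^3
B = rng.standard_normal((4, 2))   # T_B : R^2 -> R^4
x = rng.standard_normal(2)

# Composing the maps agrees with multiplying the matrices: A(Bx) = (AB)x.
assert np.allclose(A @ (B @ x), (A @ B) @ x)
```

Note that the product `A @ B` is only defined because the inner dimensions (4 and 4) match, mirroring the domain/range condition for composition.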

Exercise 2.3.11.

Show that if \(ST=1_V\text{,}\) then \(S\) is surjective and \(T\) is injective. Conclude that if \(ST=1_V\) and \(TS=1_W\text{,}\) then \(S\) and \(T\) are both bijections.
Hint.
This is true even if the functions aren’t linear. In fact, you’ve probably seen the proof in an earlier course!
Theorem 2.3.2 also tells us why we only consider invertibility for square matrices: an invertible linear map can exist only between spaces of equal dimension. In analogy with matrices, some texts will define a linear map \(T:V\to W\) to be invertible if there exists a linear map \(S:W\to V\) such that
\begin{equation*} ST = 1_V \quad \text{ and } \quad TS = 1_W\text{.} \end{equation*}
By Exercise 2.3.11, this implies that \(S\) and \(T\) are bijections, and therefore \(S\) and \(T\) are invertible, with \(S=T^{-1}\text{.}\)
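In matrix terms: if \(SA = I\) and \(AS = I\text{,}\) then \(S = A^{-1}\text{.}\) A quick numerical check, using an illustrative \(2\times 2\) matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])        # an invertible matrix (det A = 1)
S = np.array([[3.0, -1.0],
              [-5.0, 2.0]])       # candidate two-sided inverse

# Both composites are the identity, so S must be the inverse of A.
assert np.allclose(S @ A, np.eye(2))
assert np.allclose(A @ S, np.eye(2))
assert np.allclose(S, np.linalg.inv(A))
```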
We end this section with a discussion of inverses and composition. If we have isomorphisms \(S:V\to W\) and \(T:U\to V\text{,}\) what can we say about the composition \(ST\text{?}\)

Exercise 2.3.12.

    True or false: the inverse of the composition \(ST\) is \(S^{-1}T^{-1}\text{.}\)
Hint.
The composition of \(ST\) and its inverse should be the identity. Is that the case here? (Remember that order of composition matters!)
We know that the composition of two linear transformations is a linear transformation, and that the composition of two bijections is a bijection. It follows that the composition of two isomorphisms is an isomorphism!
With this observation, one can show that the relation of isomorphism is an equivalence relation. Two finite-dimensional vector spaces belong to the same equivalence class if and only if they have the same dimension. Here, we see again the importance of dimension in linear algebra.

Remark 2.3.13.

If you got that last exercise incorrect, consider the following: given \(S:V\to W\) and \(T:U\to V\text{,}\) we have \(ST:U\to W\text{.}\) Since \(ST\) is an isomorphism, it has an inverse, which goes from \(W\) to \(U\text{.}\) This inverse can be expressed in terms of the inverses of \(S\) and \(T\text{,}\) but we’re going backwards, so we have to apply them in the opposite order!
\begin{align*} U \amp \xrightarrow{T} V\xrightarrow{S} W \amp \text{ defines } \amp \quad ST:U\to W\\ U \amp \xleftarrow{T^{-1}} V \xleftarrow{S^{-1}} W \amp \text{ defines } \amp \quad (ST)^{-1}:W\to U \end{align*}
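For matrices, this is the familiar rule \((AB)^{-1}=B^{-1}A^{-1}\text{,}\) which we can check numerically; the random \(3\times 3\) matrices below are illustrative (and invertible with probability 1).

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

# The inverse of a composition reverses the order: (ST)^{-1} = T^{-1} S^{-1}.
assert np.allclose(np.linalg.inv(S @ T),
                   np.linalg.inv(T) @ np.linalg.inv(S))
```

Composing in the original order and then in the reversed inverse order really does return every vector to where it started, just as the diagram above indicates.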

Exercises

1.
Let \(T:P_3 \rightarrow P_3\) be defined by
\begin{equation*} T(ax^2+bx+c)=(4 a + b)x^2 + (-4 a - 4 b + c) x - a . \end{equation*}
Find the inverse of \(T\text{.}\)
2.
  1. The linear transformation \(T_1: \R^2 \rightarrow \R^2\) is given by
    \begin{equation*} T_1(x, y) = (2 x + 9 y, 4 x + 19 y). \end{equation*}
    Find \(T_1^{-1}(x, y)\text{.}\)
  2. The linear transformation \(T_2: \R^3 \rightarrow \R^3\) is given by
    \begin{equation*} T_2(x, y, z) = (x + 2 z, x + y, y + z). \end{equation*}
    Find \(T_2^{-1}(x, y, z)\text{.}\)
  3. Using \(T_1\) from part a, suppose \(T_1(x, y) = (2, -4)\text{.}\) Find \(x\) and \(y\text{.}\)
  4. Using \(T_2\) from part b, suppose \(T_2(x, y, z) = (6, -3, 1)\text{.}\) Find \(x\text{,}\) \(y\text{,}\) and \(z\text{.}\)