Worksheet

1.

Consider the matrix \(A=\left[\begin{array}{rrr} 1 & 0 & 2 \\ 0 & 1 & -1 \\ -3 & 0 & -7 \end{array}\right]\text{.}\) Note that performing Gauss-Jordan elimination on \(A\) requires performing, in order, the row operations
\begin{align*} 3R_1+R_3 & \rightarrow R_3 \\ -R_3 & \rightarrow R_3 \\ R_3+R_2 & \rightarrow R_2 \\ -2R_3+R_1 & \rightarrow R_1 \text{.} \end{align*}
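Outside a Sage cell, the effect of this sequence can be checked with a short plain-Python sketch (a side check only, not one of the worksheet tasks): apply each listed row operation to \(A\) and confirm the result is \(I\text{.}\)

```python
from fractions import Fraction

F = Fraction

# The matrix A from the worksheet, with exact rational entries.
A = [[F(1), F(0), F(2)],
     [F(0), F(1), F(-1)],
     [F(-3), F(0), F(-7)]]

def add_multiple(M, c, src, dst):
    """Row operation c*R_src + R_dst -> R_dst (rows are 0-indexed)."""
    M[dst] = [c * a + b for a, b in zip(M[src], M[dst])]

def scale_row(M, c, r):
    """Row operation c*R_r -> R_r."""
    M[r] = [c * a for a in M[r]]

# The four operations from the worksheet, in order.
add_multiple(A, F(3), 0, 2)    # 3R1 + R3 -> R3
scale_row(A, F(-1), 2)         # -R3 -> R3
add_multiple(A, F(1), 2, 1)    # R3 + R2 -> R2
add_multiple(A, F(-2), 2, 0)   # -2R3 + R1 -> R1

I = [[F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]]
print(A == I)  # True: the sequence row reduces A to the identity
```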
(a)
Calculate the elementary matrices \(E_1\text{,}\) \(E_2\text{,}\) \(E_3\text{,}\) and \(E_4\) corresponding to these row operations without using technology. What is the inverse of each matrix you found?
(b)
Modify the Sage cell below to use what you found for \(E_1\text{,}\) and verify that \(E_1A\) performs \(3R_1+R_3 \rightarrow R_3\) on \(A\text{.}\)
(c)
Now similarly modify the Sage cell below with what you found for \(E_2\text{,}\) \(E_3\text{,}\) and \(E_4\text{,}\) and verify that \(E_4E_3E_2E_1A=I\text{.}\)
(d)
What do the calculations in the cell below tell us about \(A^{-1}\) and elementary matrices?
(e)
Use Sage to verify your calculations of the inverses in the first part, and to verify that
\begin{equation*} A=E_1^{-1}E_2^{-1}E_3^{-1}E_4^{-1}\text{.} \end{equation*}
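The order reversal here is the general pattern \((E_4E_3E_2E_1)^{-1}=E_1^{-1}E_2^{-1}E_3^{-1}E_4^{-1}\text{:}\) inverses of a product multiply in reverse order. A plain-Python sketch with two sample \(2\times 2\) elementary matrices (deliberately not the worksheet's \(E_1,\ldots,E_4\)) illustrates it:

```python
from fractions import Fraction

F = Fraction

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Sample elementary matrices: E adds 2R1 to R2, S scales R2 by 3;
# each inverse undoes the corresponding row operation.
E, Einv = [[F(1), F(0)], [F(2), F(1)]], [[F(1), F(0)], [F(-2), F(1)]]
S, Sinv = [[F(1), F(0)], [F(0), F(3)]], [[F(1), F(0)], [F(0), F(1, 3)]]

I = [[F(1), F(0)], [F(0), F(1)]]
# (S*E) times (Einv*Sinv) is the identity: reverse-order inverses work.
print(matmul(matmul(S, E), matmul(Einv, Sinv)) == I)  # True
```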

2.

The theorem collecting the various properties equivalent to invertibility is sometimes called “The Invertible Matrix Theorem”. But the star of the show is really the property that the reduced row echelon form of \(A\) is \(I\text{.}\) Let’s explore how some of these properties are related.
(a)
Suppose you know that \(A\) is an \(n\times n\) matrix whose reduced row echelon form is \(I\text{.}\) Set up the augmented matrix \(\big[A | I \big]\text{.}\) What happens after you row reduce this matrix? What does this mean for the invertibility of \(A\text{?}\)
(b)
Suppose you know that \(A\) is an \(n\times n\) matrix whose reduced row echelon form is \(I\text{.}\) Now we know that performing a row operation is the same as multiplying by an elementary matrix, so performing a row operation on \(\big[A | I \big]\) yields \(\big[E_1A | E_1I \big]\text{.}\) The reduced row echelon form of \(A\) being \(I\) means that there is a sequence of elementary matrices \(E_1, E_2, \ldots, E_m\) such that \(E_m\cdots E_2 E_1A=I\text{.}\) What does this mean about \(E_m\cdots E_2 E_1\text{?}\)
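To make the setup concrete, here is a plain-Python sketch with a sample \(2\times 2\) matrix (deliberately not the worksheet's \(A\)): multiply together the elementary matrices for the row operations that reduce it to \(I\text{,}\) and check what that product does to the original matrix.

```python
from fractions import Fraction

F = Fraction

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

B = [[F(1), F(2)], [F(3), F(4)]]        # a sample invertible matrix

# Elementary matrices for the row operations that reduce B to I:
E1 = [[F(1), F(0)], [F(-3), F(1)]]      # -3R1 + R2 -> R2
E2 = [[F(1), F(0)], [F(0), F(-1, 2)]]   # (-1/2)R2 -> R2
E3 = [[F(1), F(-2)], [F(0), F(1)]]      # -2R2 + R1 -> R1

P = matmul(E3, matmul(E2, E1))          # the product E3*E2*E1
I = [[F(1), F(0)], [F(0), F(1)]]
print(matmul(P, B) == I)  # True: the product undoes B
```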
(c)
Suppose you know that \(A\) is an invertible \(n\times n\) matrix and you have a linear system with \(A\) as its coefficient matrix. Then you can write \(AX=B\) where \(X\) is a column matrix of variables and \(B\) is your column matrix of constants. How many solutions are there for \(X\text{?}\) How do you know?
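For experimenting with such a system, here is a plain-Python sketch with a sample invertible matrix and its inverse (deliberately not the worksheet's \(A\)): one candidate for \(X\) is \(M^{-1}B\text{,}\) which you can check satisfies the system.

```python
from fractions import Fraction

F = Fraction

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M    = [[F(1), F(2)], [F(3), F(4)]]          # a sample invertible matrix
Minv = [[F(-2), F(1)], [F(3, 2), F(-1, 2)]]  # its inverse
B    = [[F(1)], [F(1)]]                      # column matrix of constants

X = matmul(Minv, B)        # candidate solution M^{-1} * B
print(matmul(M, X) == B)   # True: this X solves MX = B
```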
(d)
Suppose you know that \(A\) is an invertible \(n\times n\) matrix and that \(X\) is a matrix with \(n\) rows such that \(AX=\mathbf{0}\text{.}\) What does \(X\) have to equal? How do you know?
(e)
Suppose that \(A\) is an \(n\times n\) matrix and \(X\) is a column matrix with \(n\) rows. Now suppose that the only solution to \(AX=\mathbf{0}\) is that \(X=\mathbf{0}\text{.}\) (Why isn’t it possible that there are no solutions?) Set up \(\big[A | \mathbf{0} \big]\text{.}\) After row reducing to solve the system, how many free variables can there be? What does the reduced row echelon form have to look like?