Matrix Inverses
Find the matrix that undoes what another matrix does
When you multiply a number by 5, you can undo it by multiplying by $\frac{1}{5}$. The number $\frac{1}{5}$ is called the multiplicative inverse of 5 because $5 \times \frac{1}{5} = 1$. Matrices have a similar concept: for certain matrices, there exists another matrix that “undoes” the original, returning you to where you started.
This idea is surprisingly powerful. If a matrix represents a transformation, like rotating an image or encrypting a message, then its inverse represents the opposite transformation: unrotating the image or decrypting the message. If a matrix encodes a system of equations, its inverse can solve that system instantly, no matter what values appear on the right-hand side. The inverse is like having an “undo button” for matrix operations.
Not every matrix has an inverse, just as the number 0 has no multiplicative inverse (there is no number you can multiply 0 by to get 1). Understanding which matrices are invertible and how to find their inverses is one of the central skills in linear algebra.
Core Concepts
What Is a Matrix Inverse?
For a square matrix $A$, the inverse of $A$ (if it exists) is a matrix denoted $A^{-1}$ (read as “$A$ inverse”) such that:
$$AA^{-1} = A^{-1}A = I$$
where $I$ is the identity matrix. Both products must equal the identity matrix. This is analogous to how $5 \times \frac{1}{5} = \frac{1}{5} \times 5 = 1$ for numbers.
Here is a concrete example. Let:
$$A = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix} \quad \text{and} \quad A^{-1} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}$$
You can verify that these are inverses by multiplying them:
$$AA^{-1} = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}\begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix} = \begin{pmatrix} 2(3) + 1(-5) & 2(-1) + 1(2) \\ 5(3) + 3(-5) & 5(-1) + 3(2) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$$
The matrix $A^{-1}$ completely undoes what $A$ does. If $A$ transforms a vector $\vec{v}$ into $\vec{w} = A\vec{v}$, then $A^{-1}$ transforms $\vec{w}$ back into $\vec{v}$:
$$A^{-1}(A\vec{v}) = (A^{-1}A)\vec{v} = I\vec{v} = \vec{v}$$
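This round trip can be checked numerically. Here is a minimal Python sketch using the $2 \times 2$ example from this section (the `matvec` helper is illustrative, not a library function):

```python
# Verify that A^(-1) undoes A on a sample vector, using the
# 2x2 example matrices from the text.

def matvec(M, v):
    """Multiply a 2x2 matrix (list of rows) by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

A     = [[2, 1], [5, 3]]
A_inv = [[3, -1], [-5, 2]]

v = [4, 7]            # any starting vector
w = matvec(A, v)      # transform: w = A v
u = matvec(A_inv, w)  # undo:      u = A^(-1) w

print(u)  # recovers the original vector: [4, 7]
```

Whatever starting vector you pick, `u` comes back equal to `v`.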
Invertible vs. Singular Matrices
A matrix that has an inverse is called invertible (or nonsingular). A matrix that does not have an inverse is called singular (or noninvertible).
Only square matrices can have inverses. A $2 \times 3$ matrix cannot have a (two-sided) inverse: the products $AA^{-1}$ and $A^{-1}A$ would be identity matrices of two different sizes, so the defining equation can never hold. But even among square matrices, not all are invertible.
Here is a simple example of a singular matrix:
$$B = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$$
Why is this matrix singular? Notice that the second row is exactly twice the first row. When you try to find an inverse, you will run into a contradiction because $B$ “collapses” some information: it sends different vectors to the same place. You cannot undo that collapse.
Geometrically, $B$ squashes all of two-dimensional space onto a line. There is no way to reverse this squashing: once a vector lands on the line, you cannot tell which of the many possible inputs it came from.
The Determinant Test for Invertibility
There is a quick way to tell if a $2 \times 2$ matrix is invertible: compute its determinant. For the matrix:
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
the determinant is:
$$\det(A) = ad - bc$$
The rule is simple:
- If $\det(A) \neq 0$, then $A$ is invertible.
- If $\det(A) = 0$, then $A$ is singular (no inverse exists).
Let us check our examples:
For $A = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}$: $\det(A) = 2(3) - 1(5) = 6 - 5 = 1 \neq 0$. So $A$ is invertible.
For $B = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$: $\det(B) = 1(4) - 2(2) = 4 - 4 = 0$. So $B$ is singular.
The determinant measures how much a matrix “scales” area. When the determinant is zero, the matrix squashes everything onto a lower-dimensional space (a line or a point), destroying information that cannot be recovered.
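The determinant test is a one-line computation. A minimal Python sketch (the `det2` helper name is illustrative):

```python
# The 2x2 determinant test for invertibility.

def det2(M):
    """Determinant ad - bc of a 2x2 matrix given as a list of rows."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

A = [[2, 1], [5, 3]]
B = [[1, 2], [2, 4]]

print(det2(A), det2(A) != 0)  # 1 True  -> A is invertible
print(det2(B), det2(B) != 0)  # 0 False -> B is singular
```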
The Formula for 2x2 Inverses
For a $2 \times 2$ matrix with nonzero determinant, there is an explicit formula for the inverse:
$$\text{If } A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \text{ and } \det(A) = ad - bc \neq 0, \text{ then}$$
$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
The recipe is:
- Swap the diagonal entries ($a$ and $d$ switch places)
- Change the signs of the off-diagonal entries ($b$ and $c$ become $-b$ and $-c$)
- Divide everything by the determinant
Let us apply this to $A = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}$:
Step 1: Compute the determinant: $\det(A) = 2(3) - 1(5) = 1$
Step 2: Apply the formula: $$A^{-1} = \frac{1}{1}\begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}$$
This matches what we had earlier.
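The swap/negate/divide recipe translates directly into code. A short Python sketch (`inverse2` is an illustrative name, not a standard function):

```python
# The swap/negate/divide recipe for 2x2 inverses.

def inverse2(M):
    """Return the inverse of a 2x2 matrix, or raise if it is singular."""
    a, b = M[0]
    c, d = M[1]
    det = a*d - b*c
    if det == 0:
        raise ValueError("matrix is singular: no inverse exists")
    return [[ d/det, -b/det],   # swap a and d, ...
            [-c/det,  a/det]]   # ... negate b and c, divide by det

print(inverse2([[2, 1], [5, 3]]))  # [[3.0, -1.0], [-5.0, 2.0]]
```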
Finding Inverses Using Row Reduction
The $2 \times 2$ formula is convenient, but it does not extend to larger matrices. For $3 \times 3$ matrices and beyond, we use a different approach based on row reduction.
The method is elegant: create an augmented matrix with $A$ on the left and the identity matrix on the right. Then apply row operations to reduce $A$ to the identity. The same operations will transform the identity matrix on the right into $A^{-1}$.
$$[A \mid I] \xrightarrow{\text{row operations}} [I \mid A^{-1}]$$
Why does this work? Each row operation can be represented by multiplying on the left by an elementary matrix. If a sequence of operations transforms $A$ into $I$, then the product of all those elementary matrices equals $A^{-1}$. Applying those same operations to $I$ gives us exactly $A^{-1}$.
Here is the procedure step by step:
- Write the augmented matrix $[A \mid I]$
- Use Gaussian elimination (row operations) to reduce the left side to row echelon form (REF), then to reduced row echelon form (RREF)
- If the left side becomes $I$, the right side is $A^{-1}$
- If the left side has a row of zeros, $A$ is singular and has no inverse
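The procedure above can be sketched in Python for a general $n \times n$ matrix. This is a teaching sketch, not production code: it uses naive partial pivoting with a small tolerance to detect singularity, and all names are illustrative.

```python
def invert(A):
    """Invert a square matrix via Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Choose the largest available pivot and swap it into place.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular: no inverse exists")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Clear the pivot column in every other row.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The left half is now I; the right half is A^(-1).
    return [row[n:] for row in M]

A_inv = invert([[1, 2], [3, 7]])
print([[round(x, 9) for x in row] for row in A_inv])  # [[7.0, -2.0], [-3.0, 1.0]]
```

The rounding in the final `print` only tidies up floating-point noise from the eliminations.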
Properties of Matrix Inverses
Matrix inverses satisfy several important properties:
Inverse of an inverse: $(A^{-1})^{-1} = A$
Just as $\left(\frac{1}{5}\right)^{-1} = 5$, taking the inverse twice gets you back to the original.
Inverse of a product: $(AB)^{-1} = B^{-1}A^{-1}$
Notice the reversal of order. This makes sense if you think about it: to undo “do $B$ then do $A$,” you must “undo $A$ first, then undo $B$.” It is like putting on socks and shoes. To take them off, you remove the shoes first, then the socks.
Inverse of a transpose: $(A^T)^{-1} = (A^{-1})^T$
The inverse of the transpose equals the transpose of the inverse. You can do these operations in either order.
Inverse of a scalar multiple: $(cA)^{-1} = \frac{1}{c}A^{-1}$ (for $c \neq 0$)
Uniqueness: If $A$ has an inverse, it has exactly one inverse.
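A quick numerical spot check of the reversal rule $(AB)^{-1} = B^{-1}A^{-1}$, sketched in Python with illustrative $2 \times 2$ helpers:

```python
# Check (AB)^(-1) = B^(-1) A^(-1) on two 2x2 matrices.

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):
    a, b = M[0]
    c, d = M[1]
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

A = [[2, 1], [5, 3]]
B = [[1, 2], [3, 7]]

lhs = inv(matmul(A, B))       # (AB)^(-1)
rhs = matmul(inv(B), inv(A))  # B^(-1) A^(-1)  -- note the reversed order
print(lhs == rhs)             # True
```

Both matrices here have determinant 1, so the arithmetic is exact and the comparison holds with `==` rather than a tolerance.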
Solving Systems with Inverses
One of the most powerful applications of matrix inverses is solving systems of linear equations. Consider the matrix equation:
$$A\vec{x} = \vec{b}$$
If $A$ is invertible, you can solve for $\vec{x}$ by multiplying both sides on the left by $A^{-1}$:
$$A^{-1}(A\vec{x}) = A^{-1}\vec{b}$$ $$(A^{-1}A)\vec{x} = A^{-1}\vec{b}$$ $$I\vec{x} = A^{-1}\vec{b}$$ $$\vec{x} = A^{-1}\vec{b}$$
The solution is simply $\vec{x} = A^{-1}\vec{b}$.
This approach is particularly valuable when you need to solve multiple systems with the same coefficient matrix but different right-hand sides. Once you have computed $A^{-1}$, finding the solution for any $\vec{b}$ is just a matrix-vector multiplication.
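A small Python sketch of this reuse pattern, with illustrative helpers: the inverse is computed once, then each new right-hand side costs only one matrix-vector product.

```python
def inv2(M):
    """Inverse of a 2x2 matrix by the swap/negate/divide formula."""
    a, b = M[0]
    c, d = M[1]
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

A_inv = inv2([[2, 1], [5, 3]])   # computed once, up front

# Each new right-hand side is now just one matrix-vector product.
for b in ([4, 11], [1, 0], [0, 1]):
    print(matvec(A_inv, b))      # [1.0, 2.0], then [3.0, -5.0], then [-1.0, 2.0]
```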
When Does a Matrix Have an Inverse?
A square matrix $A$ is invertible if and only if any of the following equivalent conditions hold:
- The determinant of $A$ is nonzero: $\det(A) \neq 0$
- The reduced row echelon form of $A$ is the identity matrix
- $A$ has $n$ pivots (where $A$ is $n \times n$)
- The equation $A\vec{x} = \vec{0}$ has only the trivial solution $\vec{x} = \vec{0}$
- The columns of $A$ are linearly independent
- The rows of $A$ are linearly independent
- For every $\vec{b}$, the equation $A\vec{x} = \vec{b}$ has exactly one solution
All of these conditions are equivalent. If one fails, they all fail, and the matrix is singular.
The most practical tests are:
- For $2 \times 2$ matrices: compute the determinant
- For larger matrices: use row reduction and check if you can reach the identity matrix
Notation and Terminology
| Term | Meaning | Example |
|---|---|---|
| $A^{-1}$ | Inverse of $A$ | $AA^{-1} = I$ |
| Invertible | Has an inverse | Non-zero determinant |
| Nonsingular | Has an inverse (synonym for invertible) | |
| Singular | Does not have an inverse | Determinant is zero |
| $\det(A)$ or $\lvert A \rvert$ | Determinant of $A$ | $ad - bc$ for a $2 \times 2$ matrix |
| $[A \mid I]$ | Augmented matrix for finding inverse | Left half is $A$, right half is $I$ |
Examples
Find the inverse of $A = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}$.
Solution:
We will use the formula: $A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$
Step 1: Identify the entries.
For $A = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}$, we have $a = 2$, $b = 1$, $c = 5$, $d = 3$.
Step 2: Compute the determinant.
$$\det(A) = ad - bc = (2)(3) - (1)(5) = 6 - 5 = 1$$
Since $\det(A) = 1 \neq 0$, the matrix is invertible.
Step 3: Apply the formula.
$$A^{-1} = \frac{1}{1}\begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}$$
Answer:
$$A^{-1} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}$$
Verify that $A \cdot A^{-1} = I$ for $A = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}$ and $A^{-1} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}$.
Solution:
We need to compute the product $AA^{-1}$ and verify it equals the identity matrix.
Step 1: Set up the multiplication.
$$AA^{-1} = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}\begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}$$
Step 2: Compute entry $(1,1)$ using row 1 of $A$ and column 1 of $A^{-1}$.
$$(AA^{-1})_{11} = 2(3) + 1(-5) = 6 - 5 = 1$$
Step 3: Compute entry $(1,2)$ using row 1 of $A$ and column 2 of $A^{-1}$.
$$(AA^{-1})_{12} = 2(-1) + 1(2) = -2 + 2 = 0$$
Step 4: Compute entry $(2,1)$ using row 2 of $A$ and column 1 of $A^{-1}$.
$$(AA^{-1})_{21} = 5(3) + 3(-5) = 15 - 15 = 0$$
Step 5: Compute entry $(2,2)$ using row 2 of $A$ and column 2 of $A^{-1}$.
$$(AA^{-1})_{22} = 5(-1) + 3(2) = -5 + 6 = 1$$
Step 6: Assemble the result.
$$AA^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$$
Verification complete: The product equals the identity matrix, confirming that $A^{-1}$ is indeed the inverse of $A$.
You could similarly verify that $A^{-1}A = I$, and you would get the same result.
Use row reduction to find the inverse of $A = \begin{pmatrix} 1 & 2 \\ 3 & 7 \end{pmatrix}$.
Solution:
We will set up the augmented matrix $[A \mid I]$ and row reduce until the left side becomes $I$.
Step 1: Create the augmented matrix.
$$[A \mid I] = \left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 7 & 0 & 1 \end{array}\right]$$
Step 2: Eliminate the entry below the first pivot.
We need to eliminate the 3 in position $(2,1)$. Apply $R_2 \to R_2 - 3R_1$:
$$\left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 7 & 0 & 1 \end{array}\right] \xrightarrow{R_2 \to R_2 - 3R_1} \left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & 1 & -3 & 1 \end{array}\right]$$
Step 3: Eliminate the entry above the second pivot.
We need to eliminate the 2 in position $(1,2)$. Apply $R_1 \to R_1 - 2R_2$:
$$\left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & 1 & -3 & 1 \end{array}\right] \xrightarrow{R_1 \to R_1 - 2R_2} \left[\begin{array}{cc|cc} 1 & 0 & 7 & -2 \\ 0 & 1 & -3 & 1 \end{array}\right]$$
Step 4: Read off the inverse.
The left side is now the identity matrix, and the right side is $A^{-1}$:
$$A^{-1} = \begin{pmatrix} 7 & -2 \\ -3 & 1 \end{pmatrix}$$
Verification: Let us quickly check using the determinant formula.
$\det(A) = 1(7) - 2(3) = 7 - 6 = 1$
By the formula: $A^{-1} = \frac{1}{1}\begin{pmatrix} 7 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} 7 & -2 \\ -3 & 1 \end{pmatrix}$
Both methods give the same answer.
Answer:
$$A^{-1} = \begin{pmatrix} 7 & -2 \\ -3 & 1 \end{pmatrix}$$
Solve the system $\begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 4 \\ 11 \end{pmatrix}$ using the inverse from Example 1.
Solution:
We have the matrix equation $A\vec{x} = \vec{b}$ where:
$$A = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}, \quad \vec{x} = \begin{pmatrix} x \\ y \end{pmatrix}, \quad \vec{b} = \begin{pmatrix} 4 \\ 11 \end{pmatrix}$$
From Example 1, we know $A^{-1} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}$.
Step 1: Apply the solution formula.
$$\vec{x} = A^{-1}\vec{b}$$
Step 2: Compute the matrix-vector product.
$$\vec{x} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}\begin{pmatrix} 4 \\ 11 \end{pmatrix}$$
$$= \begin{pmatrix} 3(4) + (-1)(11) \\ (-5)(4) + 2(11) \end{pmatrix}$$
$$= \begin{pmatrix} 12 - 11 \\ -20 + 22 \end{pmatrix}$$
$$= \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$
Step 3: Write the solution.
$$x = 1, \quad y = 2$$
Verification: Substitute back into the original equations:
- First equation: $2(1) + 1(2) = 2 + 2 = 4$ (True)
- Second equation: $5(1) + 3(2) = 5 + 6 = 11$ (True)
Answer: $(x, y) = (1, 2)$
Show that $B = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ has no inverse.
Solution:
We will show this matrix is singular using multiple approaches.
Method 1: Determinant Test
Compute the determinant:
$$\det(B) = 1(4) - 2(2) = 4 - 4 = 0$$
Since the determinant is zero, $B$ is singular and has no inverse.
Method 2: Row Reduction
Let us try the row reduction method and see what happens:
$$[B \mid I] = \left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 2 & 4 & 0 & 1 \end{array}\right]$$
Apply $R_2 \to R_2 - 2R_1$:
$$\left[\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & 0 & -2 & 1 \end{array}\right]$$
The left side has a row of zeros. We cannot transform it into the identity matrix. This confirms that $B$ is singular.
Method 3: Geometric Interpretation
Notice that the second row $(2, 4)$ is exactly $2$ times the first row $(1, 2)$. The rows are proportional, which means they point in the same direction.
Geometrically, the two columns $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\begin{pmatrix} 2 \\ 4 \end{pmatrix}$ are also proportional: the second column is twice the first.
When you multiply $B$ by any vector $\begin{pmatrix} x \\ y \end{pmatrix}$:
$$B\begin{pmatrix} x \\ y \end{pmatrix} = x\begin{pmatrix} 1 \\ 2 \end{pmatrix} + y\begin{pmatrix} 2 \\ 4 \end{pmatrix} = (x + 2y)\begin{pmatrix} 1 \\ 2 \end{pmatrix}$$
Every output is a multiple of the vector $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$. The matrix $B$ squashes all of two-dimensional space onto a single line. Many different input vectors map to the same output, so there is no way to “undo” this transformation.
Why the inverse cannot exist:
For an inverse to exist, we would need a matrix $B^{-1}$ such that $BB^{-1} = I$. But the columns of $B^{-1}$ would need to be vectors that $B$ sends to $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$. Since every output of $B$ lies on the line through $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$, and $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ is not on that line, no such vector exists.
Conclusion: The matrix $B = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ is singular and has no inverse.
Key Properties and Rules
Fundamental Properties of Inverses
For invertible matrices $A$ and $B$ of the same size, and a nonzero scalar $c$:
| Property | Formula | Why It Works |
|---|---|---|
| Inverse of inverse | $(A^{-1})^{-1} = A$ | Undoing an undo returns you to the start |
| Inverse of product | $(AB)^{-1} = B^{-1}A^{-1}$ | Reverse order to undo operations |
| Inverse of transpose | $(A^T)^{-1} = (A^{-1})^T$ | Transpose and inverse commute |
| Inverse of scalar multiple | $(cA)^{-1} = \frac{1}{c}A^{-1}$ | Scaling inverses by the reciprocal |
| Inverse of power | $(A^n)^{-1} = (A^{-1})^n$ | Undo $n$ applications with $n$ inverse applications |
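A spot check of the transpose rule on this section's running example, sketched in Python (helper names are illustrative):

```python
# Check (A^T)^(-1) = (A^(-1))^T on a 2x2 example.

def transpose(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

def inv(M):
    a, b = M[0]
    c, d = M[1]
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

A = [[2, 1], [5, 3]]
print(inv(transpose(A)) == transpose(inv(A)))  # True
```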
Conditions for Invertibility (Equivalent Statements)
For an $n \times n$ matrix $A$, the following are all equivalent:
- $A$ is invertible
- $\det(A) \neq 0$
- The RREF of $A$ is $I_n$
- $A$ has $n$ pivots
- $A\vec{x} = \vec{0}$ implies $\vec{x} = \vec{0}$
- The columns of $A$ are linearly independent
- The rows of $A$ are linearly independent
- For every $\vec{b}$, $A\vec{x} = \vec{b}$ has a unique solution
If any one of these fails, they all fail.
The 2x2 Inverse Formula
For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with $ad - bc \neq 0$:
$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
Remember: swap the diagonal, negate the off-diagonal, divide by the determinant.
Real-World Applications
Decrypting Coded Messages (Hill Cipher)
The Hill cipher is an encryption method that uses matrix multiplication. To encrypt a message, you convert letters to numbers (A=0, B=1, …, Z=25), arrange them into a vector, and multiply by an invertible matrix $A$ (the encryption key).
To decrypt, you multiply by $A^{-1}$. The security of the cipher depends on keeping the matrix $A$ secret. Anyone who knows $A$ can compute $A^{-1}$ and decode any message.
For example, if the encryption matrix is $A = \begin{pmatrix} 3 & 3 \\ 2 & 5 \end{pmatrix}$ and you receive the encoded vector $\begin{pmatrix} 15 \\ 14 \end{pmatrix}$, you would compute $A^{-1}\begin{pmatrix} 15 \\ 14 \end{pmatrix}$ to recover the original message. (In practice the Hill cipher does all of this arithmetic modulo 26, so the inverse is computed modulo 26 as well.)
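A hedged Python sketch of a full Hill-cipher round trip, working modulo 26 as real Hill ciphers do. The key matrix is the one from the text; the sample plaintext and all helper names are illustrative.

```python
def mod_inv(a, m=26):
    """Multiplicative inverse of a mod m (exists when gcd(a, m) = 1)."""
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    raise ValueError("no modular inverse")

def hill_inverse(K, m=26):
    """2x2 inverse formula, with division replaced by a modular inverse."""
    a, b = K[0]
    c, d = K[1]
    det_inv = mod_inv((a*d - b*c) % m, m)
    return [[( d*det_inv) % m, (-b*det_inv) % m],
            [(-c*det_inv) % m, ( a*det_inv) % m]]

def apply_key(K, v, m=26):
    return [(K[0][0]*v[0] + K[0][1]*v[1]) % m,
            (K[1][0]*v[0] + K[1][1]*v[1]) % m]

K = [[3, 3], [2, 5]]                       # encryption key from the text
plain  = [7, 4]                            # "HE" as numbers (A=0, ..., Z=25)
cipher = apply_key(K, plain)               # encrypt
print(apply_key(hill_inverse(K), cipher))  # decrypt: [7, 4]
```

The brute-force `mod_inv` loop is fine for $m = 26$; a real implementation would use the extended Euclidean algorithm.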
Undoing Transformations in Graphics
In computer graphics and animation, transformations (rotation, scaling, shearing) are represented by matrices. When you need to reverse a transformation, such as rotating an object back to its original position, you apply the inverse matrix.
If a rotation matrix $R$ rotates by 30 degrees counterclockwise, then $R^{-1}$ rotates by 30 degrees clockwise. This is used constantly in video games and 3D modeling software when moving cameras, undoing operations, or computing relative positions between objects.
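This claim is easy to verify numerically. A Python sketch (helper names illustrative) builds the 30-degree rotation and its opposite, then checks that their product is the identity up to floating-point error:

```python
import math

def rotation(theta):
    """2x2 counterclockwise rotation matrix for an angle in radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R  = rotation(math.radians(30))    # 30 degrees counterclockwise
Ri = rotation(math.radians(-30))   # 30 degrees clockwise = R^(-1)

P = matmul(R, Ri)                  # should be (numerically) the identity
print(all(abs(P[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2)))
```

For rotation matrices, $R^{-1}$ also equals $R^T$, which is why graphics code can often avoid a general matrix inversion entirely.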
Solving Systems Efficiently When the Right-Hand Side Changes
In many engineering applications, you need to solve the same system $A\vec{x} = \vec{b}$ for many different values of $\vec{b}$. For example, in circuit analysis, the matrix $A$ represents the circuit topology, while $\vec{b}$ represents different input voltages.
Computing $A^{-1}$ once allows you to find solutions for any $\vec{b}$ with just a single matrix-vector multiplication: $\vec{x} = A^{-1}\vec{b}$. This is much faster than solving the system from scratch each time, especially when $A$ is large.
Control Theory and Feedback Systems
In control engineering, the behavior of systems (like autopilots, robotic arms, or temperature regulators) is often described by matrix equations. The ability to invert certain matrices determines whether a system can be controlled, meaning whether you can steer it from any state to any other state.
The invertibility of specific matrices (called controllability and observability matrices) tells engineers whether their control system can actually achieve the desired behavior. A singular matrix indicates a fundamental limitation in what the system can do.
Self-Test Problems
Problem 1: Find the inverse of $A = \begin{pmatrix} 4 & 3 \\ 3 & 2 \end{pmatrix}$ using the formula.
Show Answer
Step 1: Compute the determinant: $$\det(A) = 4(2) - 3(3) = 8 - 9 = -1$$
Since $\det(A) \neq 0$, the inverse exists.
Step 2: Apply the formula: $$A^{-1} = \frac{1}{-1}\begin{pmatrix} 2 & -3 \\ -3 & 4 \end{pmatrix} = \begin{pmatrix} -2 & 3 \\ 3 & -4 \end{pmatrix}$$
Problem 2: Is the matrix $\begin{pmatrix} 6 & 9 \\ 2 & 3 \end{pmatrix}$ invertible? Explain.
Show Answer
Compute the determinant: $$\det = 6(3) - 9(2) = 18 - 18 = 0$$
Since the determinant is zero, the matrix is singular (not invertible).
You can also see this because the first row $(6, 9)$ is exactly 3 times the second row $(2, 3)$. The rows are proportional, which always means the determinant is zero.
Problem 3: Verify that if $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$, then $(A^{-1})^{-1} = A$.
Show Answer
Step 1: Find $A^{-1}$.
$\det(A) = 1(4) - 2(3) = -2$
$$A^{-1} = \frac{1}{-2}\begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{pmatrix}$$
Step 2: Find $(A^{-1})^{-1}$.
$\det(A^{-1}) = (-2)(-\frac{1}{2}) - (1)(\frac{3}{2}) = 1 - \frac{3}{2} = -\frac{1}{2}$
$$(A^{-1})^{-1} = \frac{1}{-\frac{1}{2}}\begin{pmatrix} -\frac{1}{2} & -1 \\ -\frac{3}{2} & -2 \end{pmatrix} = -2\begin{pmatrix} -\frac{1}{2} & -1 \\ -\frac{3}{2} & -2 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = A$$
The property $(A^{-1})^{-1} = A$ is verified.
Problem 4: Solve $\begin{pmatrix} 4 & 3 \\ 3 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 4 \end{pmatrix}$ using the inverse from Problem 1.
Show Answer
From Problem 1, $A^{-1} = \begin{pmatrix} -2 & 3 \\ 3 & -4 \end{pmatrix}$.
$$\begin{pmatrix} x \\ y \end{pmatrix} = A^{-1}\begin{pmatrix} 5 \\ 4 \end{pmatrix} = \begin{pmatrix} -2 & 3 \\ 3 & -4 \end{pmatrix}\begin{pmatrix} 5 \\ 4 \end{pmatrix}$$
$$= \begin{pmatrix} (-2)(5) + 3(4) \\ 3(5) + (-4)(4) \end{pmatrix} = \begin{pmatrix} -10 + 12 \\ 15 - 16 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$$
Solution: $(x, y) = (2, -1)$
Verification: $4(2) + 3(-1) = 8 - 3 = 5$ and $3(2) + 2(-1) = 6 - 2 = 4$. Both equations are satisfied.
Problem 5: Use row reduction to find the inverse of $\begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}$.
Show Answer
Step 1: Set up the augmented matrix: $$\left[\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 4 & 3 & 0 & 1 \end{array}\right]$$
Step 2: $R_1 \to \frac{1}{2}R_1$: $$\left[\begin{array}{cc|cc} 1 & \frac{1}{2} & \frac{1}{2} & 0 \\ 4 & 3 & 0 & 1 \end{array}\right]$$
Step 3: $R_2 \to R_2 - 4R_1$: $$\left[\begin{array}{cc|cc} 1 & \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 1 & -2 & 1 \end{array}\right]$$
Step 4: $R_1 \to R_1 - \frac{1}{2}R_2$: $$\left[\begin{array}{cc|cc} 1 & 0 & \frac{3}{2} & -\frac{1}{2} \\ 0 & 1 & -2 & 1 \end{array}\right]$$
Answer: $A^{-1} = \begin{pmatrix} \frac{3}{2} & -\frac{1}{2} \\ -2 & 1 \end{pmatrix}$
Problem 6: If $A$ and $B$ are invertible $3 \times 3$ matrices, simplify $(AB)^{-1}(A^{-1})^{-1}$.
Show Answer
Step 1: Use the property $(AB)^{-1} = B^{-1}A^{-1}$: $$(AB)^{-1}(A^{-1})^{-1} = B^{-1}A^{-1}(A^{-1})^{-1}$$
Step 2: Use the property $(A^{-1})^{-1} = A$: $$= B^{-1}A^{-1}A$$
Step 3: Use $A^{-1}A = I$: $$= B^{-1}I = B^{-1}$$
Answer: $(AB)^{-1}(A^{-1})^{-1} = B^{-1}$
Problem 7: For what value(s) of $k$ is the matrix $\begin{pmatrix} k & 4 \\ 2 & k \end{pmatrix}$ singular?
Show Answer
A matrix is singular when its determinant equals zero.
$$\det\begin{pmatrix} k & 4 \\ 2 & k \end{pmatrix} = k \cdot k - 4 \cdot 2 = k^2 - 8$$
Set this equal to zero: $$k^2 - 8 = 0$$ $$k^2 = 8$$ $$k = \pm\sqrt{8} = \pm 2\sqrt{2}$$
Answer: The matrix is singular when $k = 2\sqrt{2}$ or $k = -2\sqrt{2}$.
For all other values of $k$, the matrix is invertible.
Summary
- The inverse of a square matrix $A$ is the matrix $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$. The inverse “undoes” what $A$ does.
- A matrix that has an inverse is called invertible or nonsingular. A matrix without an inverse is called singular.
- For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$:
  - The determinant is $\det(A) = ad - bc$
  - If $\det(A) \neq 0$, then $A^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$
  - If $\det(A) = 0$, the matrix has no inverse
- For larger matrices, find the inverse using row reduction: transform $[A \mid I]$ into $[I \mid A^{-1}]$.
- Key properties:
  - $(A^{-1})^{-1} = A$
  - $(AB)^{-1} = B^{-1}A^{-1}$ (order reverses)
  - $(A^T)^{-1} = (A^{-1})^T$
- Solving systems: If $A$ is invertible, the unique solution to $A\vec{x} = \vec{b}$ is $\vec{x} = A^{-1}\vec{b}$.
- A matrix is invertible if and only if: its determinant is nonzero, its RREF is the identity, it has full rank, and several other equivalent conditions.
- Applications include cryptography (Hill cipher), computer graphics (undoing transformations), engineering (solving systems efficiently), and control theory.