Linear Independence and Basis
Identify the essential building blocks of a vector space
When you describe a location in your city, you might say “three blocks east and two blocks north.” You are using two directions, east and north, as your reference. These directions are your basis: the fundamental building blocks from which you construct all other directions. You would never say “three blocks east, two blocks north, and five blocks northeast” because northeast is redundant. It can already be described using east and north. The northeast direction is not independent of the others.
This intuition lies at the heart of two of the most important concepts in linear algebra: linear independence and basis. Linear independence asks: which vectors are truly essential, and which are redundant? A basis answers: what is the minimal set of vectors that can build everything in the space? Together, these concepts reveal the fundamental structure of any vector space.
In this lesson, you will learn how to determine whether a set of vectors is linearly independent, what it means for a set of vectors to form a basis, and how the dimension of a space connects to these ideas. These concepts are not just theoretical. They are practical tools that appear whenever you need to find the essential components of a system, whether you are compressing data, analyzing mechanical constraints, or selecting features in a machine learning model.
Core Concepts
What Does “Independent” Mean for Vectors?
Think about what it means for information to be independent. If I tell you “the temperature is 20 degrees Celsius” and “the temperature is 68 degrees Fahrenheit,” I have given you two pieces of information, but they are not independent. One can be computed from the other. You only need one of them.
Vectors work the same way. A set of vectors is linearly independent if no vector in the set can be written as a linear combination of the others. Each vector contributes something genuinely new that the others cannot provide.
A set of vectors is linearly dependent if at least one vector can be written as a linear combination of the others. In this case, one or more vectors are redundant. Removing them would not change the span of the set.
The Formal Definition
Here is the precise mathematical formulation. A set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly independent if the only solution to the equation
$$c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_k\vec{v}_k = \vec{0}$$
is the trivial solution: $c_1 = c_2 = \cdots = c_k = 0$.
If there exists a solution where at least one $c_i \neq 0$, then the vectors are linearly dependent.
Why does this definition work? If there is a nontrivial solution, say with $c_1 \neq 0$, then you can solve for $\vec{v}_1$:
$$\vec{v}_1 = -\frac{c_2}{c_1}\vec{v}_2 - \frac{c_3}{c_1}\vec{v}_3 - \cdots - \frac{c_k}{c_1}\vec{v}_k$$
This shows that $\vec{v}_1$ is a linear combination of the other vectors. It is not independent. Conversely, if one vector can be written as a combination of the others, you can rearrange that equation to get a nontrivial solution to $c_1\vec{v}_1 + \cdots + c_k\vec{v}_k = \vec{0}$.
Testing for Linear Independence
To test whether vectors are linearly independent:
- Set up the equation $c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_k\vec{v}_k = \vec{0}$
- Write this as a homogeneous system $A\vec{c} = \vec{0}$, where the columns of $A$ are the vectors
- Solve the system using row reduction
- If the only solution is $\vec{c} = \vec{0}$ (all free variables are eliminated), the vectors are linearly independent
- If there are free variables (nontrivial solutions exist), the vectors are linearly dependent
The key insight: linear independence of a set of vectors is equivalent to the corresponding homogeneous system having only the trivial solution.
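This procedure is easy to automate. Below is a minimal sketch of the test using SymPy's `rref` (using SymPy is an assumption; any tool that row reduces exactly would do, and the helper name `is_linearly_independent` is ours). The vectors are independent exactly when every column of the matrix built from them is a pivot column.

```python
# Sketch of the row-reduction test for linear independence (assuming SymPy).
from sympy import Matrix

def is_linearly_independent(vectors):
    """Return True if the given vectors (as lists) are linearly independent."""
    A = Matrix.hstack(*[Matrix(v) for v in vectors])  # vectors as columns
    _, pivot_cols = A.rref()                          # reduced row echelon form
    return len(pivot_cols) == A.cols                  # every column has a pivot

print(is_linearly_independent([[1, 2], [3, 6]]))   # False: (3,6) = 3*(1,2)
print(is_linearly_independent([[1, 0], [0, 1]]))   # True: standard basis
```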
Geometric Interpretation
In $\mathbb{R}^2$ and $\mathbb{R}^3$, linear independence has a beautiful geometric meaning:
In $\mathbb{R}^2$:
- Two vectors are linearly independent if they point in different directions (not parallel)
- Two vectors are linearly dependent if they are parallel (one is a scalar multiple of the other)
In $\mathbb{R}^3$:
- Two vectors are linearly independent if they are not parallel (they span a plane)
- Three vectors are linearly independent if they do not all lie in the same plane (they span all of $\mathbb{R}^3$)
- Three vectors are linearly dependent if they all lie in the same plane
The geometric pattern: linearly independent vectors do not lie in a lower-dimensional space. They truly span out into the full number of dimensions you would expect.
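This picture can be checked numerically: the rank of the matrix whose columns are the vectors (a tool used later in this lesson) equals the dimension of their span. A small sketch, assuming NumPy:

```python
# Numeric illustration (assuming NumPy): three vectors lying in a common
# plane of R^3 span only a 2-dimensional space, which the rank reports.
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.array([2.0, 3.0, 0.0])    # w = 2u + 3v lies in the same plane

A = np.column_stack([u, v, w])   # vectors as columns of a 3x3 matrix
print(np.linalg.matrix_rank(A))  # 2 -- the three vectors are dependent
```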
What Is a Basis?
A basis for a vector space $V$ is a set of vectors that is:
- Linearly independent: No vector in the set is redundant
- Spanning: Every vector in $V$ can be written as a linear combination of the basis vectors
A basis is the “just right” set of vectors. Too few vectors, and you cannot build everything in the space. Too many, and some vectors are redundant. A basis has exactly the right number, with no waste and no gaps.
Think of a basis like a coordinate system. In $\mathbb{R}^2$, the vectors $\vec{e}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\vec{e}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ form a basis. Any point in the plane can be written uniquely as $x\vec{e}_1 + y\vec{e}_2$. That is what makes them a coordinate system.
The Standard Basis
The standard basis for $\mathbb{R}^n$ consists of the vectors:
$$\vec{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad \vec{e}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad \vec{e}_n = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$
Each $\vec{e}_i$ has a 1 in position $i$ and 0s everywhere else.
The standard basis is linearly independent (no vector is a combination of the others) and spans $\mathbb{R}^n$ (any vector $\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$ equals $x_1\vec{e}_1 + x_2\vec{e}_2 + \cdots + x_n\vec{e}_n$).
But the standard basis is not the only basis. There are infinitely many bases for $\mathbb{R}^n$. Any set of $n$ linearly independent vectors forms a basis.
Dimension: The Size of a Basis
Here is a remarkable fact: every basis for a vector space $V$ has the same number of vectors. This number is called the dimension of $V$, written $\dim(V)$.
- $\dim(\mathbb{R}^n) = n$ (the standard basis has $n$ vectors)
- $\dim(P_2) = 3$ (a basis is $\{1, x, x^2\}$, three polynomials)
- A line through the origin in $\mathbb{R}^3$ has dimension 1
- A plane through the origin in $\mathbb{R}^3$ has dimension 2
Dimension counts the “degrees of freedom” in a space. In $\mathbb{R}^3$, you need three numbers to specify a point, so the dimension is 3. On a plane in $\mathbb{R}^3$, you only need two numbers (once you establish coordinates on the plane), so the dimension is 2.
Finding a Basis for a Span
Given a set of vectors, you might want to find a basis for their span. The span might have redundant vectors, and a basis strips away that redundancy.
Method: Put the vectors as columns of a matrix and row reduce. The original vectors corresponding to the pivot columns form a basis for the span.
Alternatively, place the vectors as rows of a matrix and row reduce: the nonzero rows of the row echelon form give a basis for the row space, which is exactly the span of the original vectors (though these basis vectors need not be among the originals).
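Here is a short sketch of the pivot-column method in SymPy (an assumption; the helper name `basis_for_span` is ours):

```python
# Sketch of the pivot-column method (assuming SymPy): row reduce, then keep
# the ORIGINAL vectors sitting in the pivot positions.
from sympy import Matrix

def basis_for_span(vectors):
    """Return a basis (a subset of the input vectors) for their span."""
    A = Matrix.hstack(*[Matrix(v) for v in vectors])  # vectors as columns
    _, pivot_cols = A.rref()
    return [vectors[j] for j in pivot_cols]

print(basis_for_span([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))
# [[1, 2, 3], [1, 0, 1]] -- the middle vector is 2*(1,2,3) and is dropped
```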
Coordinates with Respect to a Basis
Once you have a basis $B = \{\vec{b}_1, \vec{b}_2, \ldots, \vec{b}_n\}$ for a vector space $V$, every vector $\vec{v}$ in $V$ can be written uniquely as:
$$\vec{v} = c_1\vec{b}_1 + c_2\vec{b}_2 + \cdots + c_n\vec{b}_n$$
The coefficients $c_1, c_2, \ldots, c_n$ are called the coordinates of $\vec{v}$ with respect to the basis $B$. We write:
$$[\vec{v}]_B = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}$$
This notation emphasizes that coordinates depend on the choice of basis. The same vector $\vec{v}$ will have different coordinates in different bases.
To find $[\vec{v}]_B$, solve the system $c_1\vec{b}_1 + c_2\vec{b}_2 + \cdots + c_n\vec{b}_n = \vec{v}$ for the coefficients $c_i$.
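For a basis of $\mathbb{R}^n$, this is a square invertible system, so a numerical solver handles it directly. A minimal sketch assuming NumPy, using the basis and vector that reappear in Problem 5 below:

```python
# Sketch (assuming NumPy) of computing coordinates [v]_B: solve B c = v,
# where the basis vectors are the columns of B.
import numpy as np

b1, b2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])  # basis from Problem 5
B = np.column_stack([b1, b2])                          # basis vectors as columns
v = np.array([3.0, 1.0])

coords = np.linalg.solve(B, v)
print(coords)   # [2. 1.]  i.e. v = 2*b1 + 1*b2
```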
Notation and Terminology
| Term | Meaning | Example |
|---|---|---|
| Linearly independent | No vector is a linear combination of the others | $\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ |
| Linearly dependent | At least one vector is a combination of the others | $\begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \end{pmatrix}$ |
| Basis | Linearly independent spanning set | Standard basis of $\mathbb{R}^2$: $\vec{e}_1, \vec{e}_2$ |
| Dimension | Number of vectors in a basis | $\dim(\mathbb{R}^3) = 3$ |
| $[\vec{v}]_B$ | Coordinates of $\vec{v}$ in basis $B$ | If $\vec{v} = 2\vec{b}_1 + 3\vec{b}_2$, then $[\vec{v}]_B = \begin{pmatrix} 2 \\ 3 \end{pmatrix}$ |
| Standard basis | The basis $\{\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n\}$ for $\mathbb{R}^n$ | $\vec{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$ in $\mathbb{R}^3$ |
| Trivial solution | The solution $c_1 = c_2 = \cdots = c_k = 0$ | The all-zeros solution |
Examples
Determine whether $\vec{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\vec{v}_2 = \begin{pmatrix} 3 \\ 6 \end{pmatrix}$ are linearly independent.
Solution:
We need to determine if $c_1\vec{v}_1 + c_2\vec{v}_2 = \vec{0}$ has only the trivial solution.
Step 1: Write out the equation.
$$c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 3 \\ 6 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
Step 2: This gives the system:
$$\begin{aligned} c_1 + 3c_2 &= 0 \\ 2c_1 + 6c_2 &= 0 \end{aligned}$$
Step 3: Notice that the second equation is exactly 2 times the first equation. Row reduce:
$$\left[\begin{array}{cc|c} 1 & 3 & 0 \\ 2 & 6 & 0 \end{array}\right] \xrightarrow{R_2 \to R_2 - 2R_1} \left[\begin{array}{cc|c} 1 & 3 & 0 \\ 0 & 0 & 0 \end{array}\right]$$
Step 4: Analyze the result.
We have one pivot and one free variable ($c_2$). Let $c_2 = t$. Then $c_1 = -3t$.
For any value of $t$, we get a solution. For example, $t = 1$ gives $c_1 = -3$, $c_2 = 1$.
Verification: $(-3)\begin{pmatrix} 1 \\ 2 \end{pmatrix} + (1)\begin{pmatrix} 3 \\ 6 \end{pmatrix} = \begin{pmatrix} -3 + 3 \\ -6 + 6 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
Conclusion: The vectors are linearly dependent.
Geometric insight: Notice that $\vec{v}_2 = 3\vec{v}_1$. The vectors are parallel, pointing in the same direction. One is just a stretched version of the other.
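As a quick numeric cross-check (assuming NumPy), the rank of the matrix with these vectors as columns is 1, confirming there is only one independent direction:

```python
# Quick numeric cross-check (assuming NumPy).
import numpy as np

A = np.column_stack([[1.0, 2.0], [3.0, 6.0]])  # v1 and v2 as columns
print(np.linalg.matrix_rank(A))                # 1 -- the vectors are dependent
```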
What is the dimension of $\mathbb{R}^4$?
Solution:
The dimension of a vector space is the number of vectors in any basis.
The standard basis for $\mathbb{R}^4$ is:
$$\vec{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad \vec{e}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad \vec{e}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \quad \vec{e}_4 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}$$
This basis contains 4 vectors.
Answer: $\dim(\mathbb{R}^4) = 4$
Interpretation: You need 4 numbers to specify a point in $\mathbb{R}^4$. The space has 4 degrees of freedom.
Determine if $\vec{v}_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$, $\vec{v}_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$, $\vec{v}_3 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$ are linearly independent.
Solution:
We need to determine if $c_1\vec{v}_1 + c_2\vec{v}_2 + c_3\vec{v}_3 = \vec{0}$ has only the trivial solution.
Step 1: Set up the matrix equation.
Form the matrix with these vectors as columns and solve $A\vec{c} = \vec{0}$:
$$A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$
Step 2: Row reduce the augmented matrix.
$$\left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \end{array}\right]$$
$R_3 \to R_3 - R_1$:
$$\left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & -1 & 0 \end{array}\right]$$
$R_3 \to R_3 - R_2$:
$$\left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & -2 & 0 \end{array}\right]$$
Step 3: Continue to reduced row echelon form.
$R_3 \to -\frac{1}{2}R_3$:
$$\left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right]$$
$R_1 \to R_1 - R_3$ and $R_2 \to R_2 - R_3$:
$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right]$$
Step 4: Read off the solution.
The RREF shows $c_1 = 0$, $c_2 = 0$, $c_3 = 0$. There are no free variables.
Conclusion: The only solution is the trivial solution, so the vectors are linearly independent.
Note: Since we have 3 linearly independent vectors in $\mathbb{R}^3$, these vectors form a basis for $\mathbb{R}^3$.
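A quick cross-check of this computation (assuming SymPy):

```python
# Cross-check of the row reduction (assuming SymPy): every column is a
# pivot column, so the three vectors are independent and form a basis of R^3.
from sympy import Matrix

A = Matrix([[1, 0, 1], [0, 1, 1], [1, 1, 0]])  # v1, v2, v3 as columns
_, pivot_cols = A.rref()
print(pivot_cols)   # (0, 1, 2) -- three pivots, no free variables
```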
Find a basis for the span of $\vec{v}_1 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$, $\vec{v}_2 = \begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix}$, $\vec{v}_3 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$.
Solution:
We need to find a linearly independent subset that spans the same space as all three vectors.
Step 1: Form a matrix with these vectors as columns.
$$A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 0 \\ 3 & 6 & 1 \end{pmatrix}$$
Step 2: Row reduce to find the pivot columns.
$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 0 \\ 3 & 6 & 1 \end{pmatrix}$$
$R_2 \to R_2 - 2R_1$ and $R_3 \to R_3 - 3R_1$:
$$\begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & -2 \\ 0 & 0 & -2 \end{pmatrix}$$
$R_3 \to R_3 - R_2$:
$$\begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & -2 \\ 0 & 0 & 0 \end{pmatrix}$$
Step 3: Identify the pivot columns.
The pivots are in columns 1 and 3. Column 2 does not have a pivot.
Step 4: Select the original vectors corresponding to pivot columns.
The pivot columns are columns 1 and 3, so a basis for the span is:
$$\left\{ \vec{v}_1, \vec{v}_3 \right\} = \left\{ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\}$$
Verification: Notice that $\vec{v}_2 = 2\vec{v}_1$. The second vector was redundant because it is just twice the first vector.
Conclusion: A basis for $\text{span}\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$ is $\left\{ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\}$.
The span is a 2-dimensional subspace (a plane through the origin) in $\mathbb{R}^3$.
Find the dimension of the column space of $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 0 & 1 \end{pmatrix}$.
Solution:
The dimension of the column space equals the number of pivot columns (also called the rank of the matrix).
Step 1: Row reduce the matrix.
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 0 & 1 \end{pmatrix}$$
$R_2 \to R_2 - 2R_1$ and $R_3 \to R_3 - R_1$:
$$\begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & -2 & -2 \end{pmatrix}$$
Swap $R_2$ and $R_3$:
$$\begin{pmatrix} 1 & 2 & 3 \\ 0 & -2 & -2 \\ 0 & 0 & 0 \end{pmatrix}$$
$R_2 \to -\frac{1}{2}R_2$:
$$\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$
Step 2: Count the pivots.
There are 2 pivots (in columns 1 and 2).
Step 3: Identify a basis for the column space.
The pivot columns in the original matrix are columns 1 and 2:
$$\text{Basis for Col}(A) = \left\{ \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \\ 0 \end{pmatrix} \right\}$$
Conclusion: The dimension of the column space is 2.
Observations:
- The matrix has 3 columns, but only 2 of them are linearly independent.
- Column 3 is a linear combination of columns 1 and 2. Indeed, $\begin{pmatrix} 3 \\ 6 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} + \begin{pmatrix} 2 \\ 4 \\ 0 \end{pmatrix}$.
- The column space is a 2-dimensional subspace (a plane) in $\mathbb{R}^3$.
- Notice also that row 2 of the original matrix is exactly 2 times row 1, which explains why we got a row of zeros during reduction.
Key Properties and Rules
Properties of Linear Independence
- A single nonzero vector is always linearly independent
- The zero vector is linearly dependent with any other vectors (since $1 \cdot \vec{0} + 0 \cdot \vec{v} = \vec{0}$ is a nontrivial combination)
- If a set is linearly dependent, any larger set containing it is also linearly dependent
- If a set is linearly independent, any subset of it is also linearly independent
- In $\mathbb{R}^n$, any set of more than $n$ vectors must be linearly dependent
Properties of Bases
- Every basis for a vector space has the same number of vectors (the dimension)
- In an $n$-dimensional space, any set of $n$ linearly independent vectors is a basis
- In an $n$-dimensional space, any set of $n$ vectors that spans the space is a basis
- Every vector in the space can be written uniquely as a linear combination of basis vectors
Dimension Facts
- $\dim(\mathbb{R}^n) = n$
- $\dim(P_n) = n + 1$ (polynomials of degree at most $n$)
- $\dim(M_{m \times n}) = mn$ (all $m \times n$ matrices)
- The dimension of a subspace is at most the dimension of the containing space
- $\dim(\text{Col}(A)) = \dim(\text{Row}(A)) = \text{rank}(A) = $ number of pivots
The Rank-Nullity Theorem
For an $m \times n$ matrix $A$:
$$\dim(\text{Col}(A)) + \dim(\text{Null}(A)) = n$$
In words: the rank (dimension of column space) plus the nullity (dimension of null space) equals the number of columns.
This powerful result connects the “size” of the solutions to $A\vec{x} = \vec{0}$ with the “size” of the range of the transformation $\vec{x} \mapsto A\vec{x}$.
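A small sketch verifying the theorem on a concrete matrix (assuming SymPy; the matrix is the one that reappears in Problem 6 below):

```python
# Sketch (assuming SymPy) checking rank + nullity = number of columns.
from sympy import Matrix

A = Matrix([[1, 2, 1], [2, 4, 2]])
rank = A.rank()                       # dim Col(A)
nullity = len(A.nullspace())          # dim Null(A)
print(rank, nullity, rank + nullity)  # 1 2 3 -- and A has 3 columns
```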
Real-World Applications
Data Compression
When you have a large dataset with many features, not all features may be independent. Some features might be redundant, expressible as combinations of others. Principal Component Analysis (PCA) finds a basis for the subspace that captures most of the variation in the data. By projecting onto this lower-dimensional subspace, you can compress data while retaining the essential information. The dimension of this subspace tells you how many “true” degrees of freedom your data has.
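The following toy sketch (assuming NumPy; not a full PCA implementation) shows the core idea: the singular value decomposition of centered data reveals that a redundant feature collapses the data onto a lower-dimensional subspace.

```python
# Toy sketch (assuming NumPy) of how the SVD exposes redundancy: a feature
# that is a combination of others produces a near-zero singular value.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # two independent features
X = np.column_stack([X, X[:, 0] + X[:, 1]])  # third feature is redundant

Xc = X - X.mean(axis=0)                      # center the data
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
print(np.round(s, 8))                        # third singular value is ~0:
                                             # the data lives in a plane
```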
Degrees of Freedom in Mechanical Systems
In engineering, the dimension of a vector space represents the degrees of freedom of a system. A robot arm with multiple joints has a certain number of independent movements it can make. If the joints are constrained (some movements depend on others), the effective dimension of the movement space is reduced. Understanding linear independence helps engineers determine how many independent controls are needed for a mechanical system.
Basis Functions in Fourier Analysis
In signal processing, any periodic signal can be written as a sum of sines and cosines of different frequencies. These trigonometric functions form a basis for the space of periodic signals. The Fourier transform finds the coordinates of a signal with respect to this basis. Different bases (like wavelets) are used in image compression algorithms like JPEG 2000.
Feature Selection in Machine Learning
When building a machine learning model, you often have many input features. If some features are linearly dependent on others, including all of them adds no information but increases computational cost and can lead to numerical instability. Feature selection algorithms identify a linearly independent subset of features, essentially finding a basis for the feature space that captures all the relevant information.
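One common way to do this numerically is rank-revealing QR with column pivoting. A hedged sketch assuming SciPy (the synthetic data and the `1e-10` threshold are our choices):

```python
# Sketch (assuming SciPy) of selecting an independent subset of features:
# pivoted QR orders the columns so the first r form a well-conditioned basis.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
X = np.column_stack([X, X @ np.array([1.0, -2.0, 0.5])])  # redundant 4th feature

_, R, perm = qr(X, mode='economic', pivoting=True)
rank = int(np.sum(np.abs(np.diag(R)) > 1e-10))
print(rank, perm[:rank])   # 3 -- only three features carry new directions
```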
Self-Test Problems
Problem 1: Are the vectors $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ linearly independent?
Show Answer
Yes, they are linearly independent.
Set up the equation: $c_1\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
This gives: $c_1 = 0$ and $c_2 = 0$.
The only solution is the trivial solution, so the vectors are linearly independent.
These are the standard basis vectors for $\mathbb{R}^2$. Geometrically, they point in different directions (in fact, perpendicular ones), so neither is a scalar multiple of the other.
Problem 2: Are the vectors $\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$ and $\begin{pmatrix} -2 \\ -4 \\ -6 \end{pmatrix}$ linearly independent?
Show Answer
No, they are linearly dependent.
Notice that $\begin{pmatrix} -2 \\ -4 \\ -6 \end{pmatrix} = -2\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$.
Therefore: $2\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + 1\begin{pmatrix} -2 \\ -4 \\ -6 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$
This is a nontrivial solution with $c_1 = 2$ and $c_2 = 1$, so the vectors are linearly dependent.
Geometrically, these vectors are parallel (they point in opposite directions along the same line).
Problem 3: What is the dimension of the subspace $W = \{(x, y, z) : x + y + z = 0\}$ in $\mathbb{R}^3$?
Show Answer
$\dim(W) = 2$
The constraint $x + y + z = 0$ means $z = -x - y$. The general vector in $W$ is:
$$\begin{pmatrix} x \\ y \\ -x-y \end{pmatrix} = x\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} + y\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$$
A basis for $W$ is $\left\{ \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \right\}$.
This basis has 2 vectors, so $\dim(W) = 2$.
Geometrically, $W$ is a plane through the origin in $\mathbb{R}^3$.
Problem 4: Do the vectors $\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$, $\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$, $\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$, $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ form a linearly independent set?
Show Answer
No, they are linearly dependent.
In $\mathbb{R}^3$, any set of more than 3 vectors must be linearly dependent (since the dimension is 3).
Here we have 4 vectors in $\mathbb{R}^3$, so they must be linearly dependent.
To find a specific dependence relation, notice:
$$\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} - 2\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix} - \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
So $\vec{v}_1 + \vec{v}_2 + \vec{v}_3 - 2\vec{v}_4 = \vec{0}$, confirming linear dependence.
Problem 5: If $B = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\}$ is a basis for $\mathbb{R}^2$, find the coordinates $[\vec{v}]_B$ for $\vec{v} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$.
Show Answer
We need to find $c_1$ and $c_2$ such that:
$$c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$$
This gives the system:
$$\begin{aligned} c_1 + c_2 &= 3 \\ c_1 - c_2 &= 1 \end{aligned}$$
Adding the equations: $2c_1 = 4$, so $c_1 = 2$.
Substituting back: $2 + c_2 = 3$, so $c_2 = 1$.
Answer: $[\vec{v}]_B = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$
Verification: $2\begin{pmatrix} 1 \\ 1 \end{pmatrix} + 1\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$ (checks)
Problem 6: Find a basis for the null space of $A = \begin{pmatrix} 1 & 2 & 1 \ 2 & 4 & 2 \end{pmatrix}$.
Show Answer
Solve $A\vec{x} = \vec{0}$:
Row reduce: $$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \end{pmatrix} \xrightarrow{R_2 \to R_2 - 2R_1} \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$
One equation: $x_1 + 2x_2 + x_3 = 0$, so $x_1 = -2x_2 - x_3$.
Free variables: $x_2 = s$, $x_3 = t$.
$$\vec{x} = \begin{pmatrix} -2s - t \\ s \\ t \end{pmatrix} = s\begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix} + t\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$$
Basis for Null$(A)$: $\left\{ \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \right\}$
The null space is 2-dimensional.
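A quick cross-check (assuming SymPy), whose `nullspace` method returns exactly this basis:

```python
# Cross-check of the null-space basis (assuming SymPy).
from sympy import Matrix

A = Matrix([[1, 2, 1], [2, 4, 2]])
for b in A.nullspace():    # basis vectors for Null(A)
    print(list(b))         # [-2, 1, 0] then [-1, 0, 1]
```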
Problem 7: Can three vectors in $\mathbb{R}^2$ be linearly independent? Explain.
Show Answer
No.
The dimension of $\mathbb{R}^2$ is 2. In an $n$-dimensional space, any set of more than $n$ vectors must be linearly dependent.
Since $\dim(\mathbb{R}^2) = 2$, any set of 3 or more vectors in $\mathbb{R}^2$ is linearly dependent.
Geometrically: in a plane, you can have at most 2 non-parallel directions. Any third vector must lie somewhere in the plane, meaning it can be expressed as a combination of two vectors that span the plane.
Summary
- Linear independence means no vector in a set is a linear combination of the others. Formally, vectors $\vec{v}_1, \ldots, \vec{v}_k$ are linearly independent if $c_1\vec{v}_1 + \cdots + c_k\vec{v}_k = \vec{0}$ has only the trivial solution $c_1 = \cdots = c_k = 0$.
- Linear dependence means at least one vector is redundant. If there is a nontrivial solution to $c_1\vec{v}_1 + \cdots + c_k\vec{v}_k = \vec{0}$, you can express one vector as a combination of the others.
- Geometrically, linearly independent vectors do not lie in a lower-dimensional subspace. Two independent vectors in $\mathbb{R}^2$ are not parallel. Three independent vectors in $\mathbb{R}^3$ do not lie in a common plane.
- A basis is a linearly independent set that spans the entire vector space. It is the minimal set of building blocks from which all vectors can be constructed.
- The standard basis for $\mathbb{R}^n$ consists of $\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n$, where $\vec{e}_i$ has a 1 in position $i$ and 0s elsewhere.
- Dimension is the number of vectors in any basis. Every basis for a given space has the same number of vectors. $\dim(\mathbb{R}^n) = n$.
- Coordinates with respect to a basis, $[\vec{v}]_B$, are the coefficients when $\vec{v}$ is written as a linear combination of the basis vectors.
- To test linear independence, form a matrix with the vectors as columns and row reduce. If there are no free variables (all columns are pivot columns), the vectors are independent. If there are free variables, they are dependent.
- To find a basis for a span, put the vectors as columns of a matrix, row reduce, and select the original vectors corresponding to pivot columns.
- The Rank-Nullity Theorem states that for an $m \times n$ matrix, the rank (dimension of the column space) plus the nullity (dimension of the null space) equals $n$.
- Applications include data compression, understanding mechanical degrees of freedom, Fourier analysis, and feature selection in machine learning. Linear independence identifies the essential, non-redundant components of any system.