Vector Spaces and Subspaces

Abstract the idea of 'vectors' beyond arrows to any objects that add and scale

So far, you have thought of vectors as arrows in space or as lists of numbers that you can add together and scale. But here is a surprising fact: the same rules that govern $\mathbb{R}^2$ and $\mathbb{R}^3$ also apply to polynomials, matrices, and even functions. You can add two polynomials and get another polynomial. You can multiply a matrix by a number and get another matrix. These operations follow the same fundamental patterns as vector addition and scalar multiplication.

This observation leads to one of the most powerful ideas in mathematics: the concept of a vector space. Instead of thinking of vectors as specifically being arrows or columns of numbers, we abstract the idea to include any mathematical objects that behave like vectors. This abstraction is not just elegant. It is practical. Techniques you develop for solving problems in $\mathbb{R}^3$ will automatically work for polynomials, functions, and countless other structures. Learn one set of tools, apply them everywhere.

In this lesson, you will learn what makes a vector space, how to recognize when a subset forms a subspace, and how these abstract ideas connect to concrete objects like null spaces and column spaces that appear throughout linear algebra.

Core Concepts

Why Generalize Beyond Arrows?

Before diving into definitions, let us see why this abstraction matters. Consider these three seemingly different mathematical objects:

Example 1: Vectors in $\mathbb{R}^2$

Take $\vec{u} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\vec{v} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$. You can add them:

$$\vec{u} + \vec{v} = \begin{pmatrix} 4 \\ 3 \end{pmatrix}$$

And you can scale them:

$$2\vec{u} = \begin{pmatrix} 2 \\ 4 \end{pmatrix}$$

Example 2: Polynomials

Take $p(x) = 1 + 2x$ and $q(x) = 3 + x$. You can add them:

$$p(x) + q(x) = 4 + 3x$$

And you can scale them:

$$2p(x) = 2 + 4x$$

Example 3: Matrices

Take $A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 3 & 1 \\ 1 & 0 \end{pmatrix}$. You can add them:

$$A + B = \begin{pmatrix} 4 & 3 \\ 1 & 1 \end{pmatrix}$$

And you can scale them:

$$2A = \begin{pmatrix} 2 & 4 \\ 0 & 2 \end{pmatrix}$$

Notice anything? The operations look remarkably similar. In each case, addition combines two objects of the same type to produce another object of that type. Scalar multiplication takes a number and an object to produce another object of that type. The algebraic rules (like commutativity of addition) work the same way in all three examples.

This is not a coincidence. Vectors, polynomials, and matrices are all examples of vector spaces. By identifying the common structure, we can develop theorems and techniques that apply to all of them simultaneously.
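
To make the parallel concrete, here is a minimal NumPy sketch (assuming NumPy is available; storing a polynomial as a coefficient array is our illustrative choice, not a standard): once each object is encoded as an array of numbers, all three examples above run through exactly the same code.

```python
import numpy as np

# Example 1: vectors in R^2, added and scaled componentwise
u, v = np.array([1, 2]), np.array([3, 1])
print(u + v)   # [4 3]
print(2 * u)   # [2 4]

# Example 2: polynomials 1 + 2x and 3 + x, stored as coefficient
# arrays [constant, linear]; the arithmetic is identical
p, q = np.array([1, 2]), np.array([3, 1])
print(p + q)   # [4 3]  ->  4 + 3x
print(2 * p)   # [2 4]  ->  2 + 4x

# Example 3: 2x2 matrices, added and scaled entrywise
A = np.array([[1, 2], [0, 1]])
B = np.array([[3, 1], [1, 0]])
print(A + B)   # [[4 3], [1 1]]
print(2 * A)   # [[2 4], [0 2]]
```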

Definition of a Vector Space

A vector space $V$ over the real numbers is a set of objects (called “vectors”) together with two operations:

  1. Vector addition: A way to add two vectors to get another vector
  2. Scalar multiplication: A way to multiply a vector by a real number to get another vector

These operations must satisfy the following axioms (rules that must hold for all vectors $\vec{u}$, $\vec{v}$, $\vec{w}$ in $V$ and all scalars $c$, $d$ in $\mathbb{R}$):

Addition Axioms:

  1. Closure under addition: $\vec{u} + \vec{v}$ is in $V$
  2. Commutativity: $\vec{u} + \vec{v} = \vec{v} + \vec{u}$
  3. Associativity: $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$
  4. Zero vector: There exists a vector $\vec{0}$ in $V$ such that $\vec{u} + \vec{0} = \vec{u}$ for all $\vec{u}$
  5. Additive inverses: For each $\vec{u}$, there exists $-\vec{u}$ in $V$ such that $\vec{u} + (-\vec{u}) = \vec{0}$

Scalar Multiplication Axioms:

  1. Closure under scalar multiplication: $c\vec{u}$ is in $V$
  2. Associativity: $c(d\vec{u}) = (cd)\vec{u}$
  3. Distributivity over vector addition: $c(\vec{u} + \vec{v}) = c\vec{u} + c\vec{v}$
  4. Distributivity over scalar addition: $(c + d)\vec{u} = c\vec{u} + d\vec{u}$
  5. Identity: $1\vec{u} = \vec{u}$

This list might look intimidating, but these are all properties you already know and use intuitively. They are simply the formal statement that addition and scaling work the way you expect them to.

The key insight is that we do not specify what the “vectors” actually are. They could be arrows, polynomials, matrices, functions, or anything else. What matters is that the operations behave according to these rules.
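
The axioms are also easy to spot-check numerically. The sketch below (assuming NumPy) tests several of them on random vectors in $\mathbb{R}^4$; passing such checks is evidence, not a proof, since a proof must cover all vectors and scalars.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))   # three random vectors in R^4
c, d = 2.5, -1.5

assert np.allclose(u + v, v + u)                # commutativity
assert np.allclose((u + v) + w, u + (v + w))    # associativity of addition
assert np.allclose(u + np.zeros(4), u)          # zero vector
assert np.allclose(c * (d * u), (c * d) * u)    # associativity of scaling
assert np.allclose(c * (u + v), c * u + c * v)  # distributivity over vector addition
assert np.allclose((c + d) * u, c * u + d * u)  # distributivity over scalar addition
assert np.allclose(1 * u, u)                    # identity
print("all axiom spot-checks passed")
```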

Important Examples of Vector Spaces

Let us examine some fundamental vector spaces:

$\mathbb{R}^n$: The Space of n-Tuples

The most familiar vector space is $\mathbb{R}^n$, the set of all ordered lists of $n$ real numbers:

$$\mathbb{R}^n = \left\{ \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} : x_1, x_2, \ldots, x_n \in \mathbb{R} \right\}$$

With component-wise addition and scalar multiplication, $\mathbb{R}^n$ satisfies all ten axioms. This is the vector space you have been working with most often.

$P_n$: The Space of Polynomials of Degree at Most $n$

The set of all polynomials with degree at most $n$ forms a vector space:

$$P_n = \{ a_0 + a_1x + a_2x^2 + \cdots + a_nx^n : a_0, a_1, \ldots, a_n \in \mathbb{R} \}$$

For example, $P_2$ contains all polynomials like $3 + 2x - x^2$, $5$, and $x^2$. Addition is the usual polynomial addition, and scalar multiplication is multiplication by a constant.

Why is this a vector space? Adding two polynomials of degree at most $n$ gives another polynomial of degree at most $n$ (closure). The zero polynomial $0$ serves as the zero vector. Every polynomial $p(x)$ has an additive inverse $-p(x)$. All the other axioms follow from the properties of real number arithmetic.
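
One way to see this computationally (a sketch, with illustrative encodings): store an element of $P_2$ as its coefficient vector $(a_0, a_1, a_2)$, so that polynomial addition and scaling become the familiar operations on $\mathbb{R}^3$.

```python
import numpy as np

# 3 + 2x - x^2  <->  coefficient vector [3, 2, -1]
p = np.array([3, 2, -1])
q = np.array([5, 0, 0])     # the constant polynomial 5

print(p + q)        # [ 8  2 -1]  ->  8 + 2x - x^2
print(-1 * p)       # [-3 -2  1]  ->  the additive inverse -p(x)
print(np.zeros(3))  # [0. 0. 0.]  ->  the zero polynomial, the zero vector of P_2
```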

$M_{m \times n}$: The Space of Matrices

The set of all $m \times n$ matrices with real entries forms a vector space:

$$M_{m \times n} = \{ A : A \text{ is an } m \times n \text{ matrix with real entries} \}$$

Addition is matrix addition (add corresponding entries), and scalar multiplication multiplies every entry by the scalar. The zero matrix (all entries 0) is the zero vector.

Function Spaces

The set of all continuous functions from $\mathbb{R}$ to $\mathbb{R}$, often denoted $C(\mathbb{R})$, forms a vector space. So does the set of all differentiable functions, the set of all infinitely differentiable functions, and many other function spaces.

Addition is pointwise: $(f + g)(x) = f(x) + g(x)$. Scalar multiplication is also pointwise: $(cf)(x) = c \cdot f(x)$. The zero function $f(x) = 0$ (zero everywhere) is the zero vector.

These function spaces are infinite-dimensional (a concept we will explore later), making them more complex than $\mathbb{R}^n$, but the same axioms apply.
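
Pointwise addition and scaling are easy to express in code. Here is a small sketch (the helper names `add`, `scale`, and `zero` are illustrative) that treats Python functions as the "vectors":

```python
import math

def add(f, g):
    # (f + g)(x) = f(x) + g(x), the pointwise sum
    return lambda x: f(x) + g(x)

def scale(c, f):
    # (c f)(x) = c * f(x), pointwise scalar multiplication
    return lambda x: c * f(x)

zero = lambda x: 0.0   # the zero function, the zero vector

h = add(math.sin, scale(2.0, math.cos))   # h(x) = sin(x) + 2 cos(x)
print(h(0.0))                             # 2.0
print(add(h, zero)(1.0) == h(1.0))        # True: f + 0 = f pointwise
```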

What Is a Subspace?

A subspace of a vector space $V$ is a subset $W$ of $V$ that is itself a vector space under the same operations. In other words, a subspace is a vector space living inside another vector space.

For example, consider $\mathbb{R}^2$. The set of all vectors of the form $\begin{pmatrix} x \\ 0 \end{pmatrix}$ (the $x$-axis) is a subspace. It is a line through the origin, and any sum of vectors on this line stays on the line. Any scalar multiple of a vector on this line stays on the line.

Visually, subspaces of $\mathbb{R}^2$ are:

  • The zero vector alone (the trivial subspace)
  • Lines through the origin
  • All of $\mathbb{R}^2$ itself

Subspaces of $\mathbb{R}^3$ are:

  • The zero vector alone
  • Lines through the origin
  • Planes through the origin
  • All of $\mathbb{R}^3$ itself

Notice a pattern? Subspaces always contain the origin. This is not a coincidence. It is a requirement.

The Subspace Test

You might worry that to verify a subspace, you need to check all ten vector space axioms. Fortunately, there is a shortcut. Since $W$ inherits its operations from $V$, and $V$ already satisfies the axioms, most properties come for free.

Subspace Test: A non-empty subset $W$ of a vector space $V$ is a subspace if and only if:

  1. Closure under addition: For all $\vec{u}, \vec{v} \in W$, we have $\vec{u} + \vec{v} \in W$
  2. Closure under scalar multiplication: For all $\vec{u} \in W$ and all scalars $c$, we have $c\vec{u} \in W$

That is it. Just two conditions to check. If both hold, $W$ is a subspace. If either fails, $W$ is not a subspace.

Why does this work? If $W$ is closed under scalar multiplication and $\vec{u} \in W$, then $0 \cdot \vec{u} = \vec{0} \in W$, so the zero vector is automatically in $W$. Similarly, $(-1)\vec{u} = -\vec{u} \in W$, so additive inverses are automatic. The other axioms (commutativity, associativity, etc.) hold because they hold in the larger space $V$.

Practical tip: An equivalent single condition is that $W$ is closed under linear combinations. If $\vec{u}, \vec{v} \in W$ and $c, d$ are scalars, then $c\vec{u} + d\vec{v} \in W$. This combines both conditions into one.
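
As a numerical illustration (a spot-check, not a proof), the sketch below tests the linear-combination condition on the $x$-axis in $\mathbb{R}^2$, then shows how the line $y = x + 1$ fails it:

```python
import numpy as np

rng = np.random.default_rng(1)

# W = the x-axis in R^2: second component zero
def in_W(p):
    return np.isclose(p[1], 0.0)

for _ in range(1000):
    u = np.array([rng.standard_normal(), 0.0])   # u in W
    v = np.array([rng.standard_normal(), 0.0])   # v in W
    c, d = rng.standard_normal(2)
    assert in_W(c * u + d * v)                   # c*u + d*v stays in W
print("x-axis passed 1000 random closure checks")

# The line y = x + 1 behaves differently: u = (0, 1) is on it,
# but 2u = (0, 2) is not, since 2 != 0 + 1.
u = np.array([0.0, 1.0])
print((2 * u)[1] == (2 * u)[0] + 1)              # False
```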

Non-Examples: Sets That Are Not Subspaces

Understanding what fails helps solidify the concept. Here are common examples of subsets that are not subspaces:

Example: Lines not through the origin

In $\mathbb{R}^2$, the line $y = x + 1$ is not a subspace. It does not contain the origin: the point $(0, 0)$ does not satisfy $0 = 0 + 1$.

More concretely, if $\vec{u} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ is on this line, then $2\vec{u} = \begin{pmatrix} 0 \\ 2 \end{pmatrix}$ is not on the line (since $2 \neq 0 + 1$). Closure under scalar multiplication fails.

Example: The first quadrant

In $\mathbb{R}^2$, the first quadrant $\{(x, y) : x \geq 0, y \geq 0\}$ is not a subspace. Take $\vec{u} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Then $-1 \cdot \vec{u} = \begin{pmatrix} -1 \\ -1 \end{pmatrix}$ is in the third quadrant, not the first. Closure under scalar multiplication fails.

Example: A circle

The unit circle $\{(x, y) : x^2 + y^2 = 1\}$ is not a subspace. It does not contain the origin. Also, adding two points on the circle typically gives a point not on the circle.

Important Subspaces: The Null Space

One of the most important subspaces in linear algebra is the null space (also called the kernel) of a matrix.

Definition: The null space of an $m \times n$ matrix $A$ is the set of all solutions to $A\vec{x} = \vec{0}$:

$$\text{Null}(A) = \{ \vec{x} \in \mathbb{R}^n : A\vec{x} = \vec{0} \}$$

This is always a subspace of $\mathbb{R}^n$ (the domain of the matrix). Let us verify:

Zero vector: $A\vec{0} = \vec{0}$, so $\vec{0} \in \text{Null}(A)$.

Closure under addition: If $A\vec{u} = \vec{0}$ and $A\vec{v} = \vec{0}$, then $A(\vec{u} + \vec{v}) = A\vec{u} + A\vec{v} = \vec{0} + \vec{0} = \vec{0}$, so $\vec{u} + \vec{v} \in \text{Null}(A)$.

Closure under scalar multiplication: If $A\vec{u} = \vec{0}$, then $A(c\vec{u}) = cA\vec{u} = c\vec{0} = \vec{0}$, so $c\vec{u} \in \text{Null}(A)$.

The null space tells you about the solutions to the homogeneous system $A\vec{x} = \vec{0}$. If the null space contains only the zero vector, the system has a unique solution. If the null space contains non-zero vectors, there are infinitely many solutions.
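
In practice a null space can be computed symbolically. A minimal SymPy sketch (SymPy's `Matrix.nullspace()` returns a list of basis vectors for $\text{Null}(A)$; the matrix here is illustrative):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])
basis = A.nullspace()
print(basis)   # [Matrix([[-2], [1], [0]]), Matrix([[-3], [0], [1]])]

# Every basis vector really solves A x = 0:
for x in basis:
    assert A * x == sp.zeros(2, 1)
```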

Important Subspaces: The Column Space

The column space (also called the range or image) of a matrix $A$ is the set of all possible outputs when you multiply $A$ by vectors.

Definition: The column space of an $m \times n$ matrix $A$ is:

$$\text{Col}(A) = \{ A\vec{x} : \vec{x} \in \mathbb{R}^n \}$$

Equivalently, the column space is the span of the columns of $A$. If $A = \begin{pmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n \end{pmatrix}$, then:

$$\text{Col}(A) = \text{span}\{\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n\}$$

The column space is a subspace of $\mathbb{R}^m$ (the codomain of the matrix). It tells you which vectors $\vec{b}$ make the system $A\vec{x} = \vec{b}$ consistent. The equation $A\vec{x} = \vec{b}$ has a solution if and only if $\vec{b}$ is in the column space of $A$.
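
This consistency criterion is easy to test numerically: $\vec{b}$ lies in $\text{Col}(A)$ exactly when appending $\vec{b}$ as an extra column does not raise the rank. A NumPy sketch (the helper name `in_column_space` is ours):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])   # Col(A) = span{(1, 3)}

def in_column_space(A, b):
    # rank([A | b]) == rank(A)  iff  A x = b is consistent
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

print(in_column_space(A, np.array([2.0, 6.0])))   # True: (2, 6) = 2 * (1, 3)
print(in_column_space(A, np.array([1.0, 0.0])))   # False: not a multiple of (1, 3)
```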

Important Subspaces: The Row Space

The row space of a matrix $A$ is the span of the rows of $A$, viewed as vectors.

Definition: If $A$ is an $m \times n$ matrix with rows $\vec{r}_1, \vec{r}_2, \ldots, \vec{r}_m$ (each viewed as a vector in $\mathbb{R}^n$), then:

$$\text{Row}(A) = \text{span}\{\vec{r}_1, \vec{r}_2, \ldots, \vec{r}_m\}$$

The row space is a subspace of $\mathbb{R}^n$. Interestingly, the row space equals the column space of $A^T$ (the transpose of $A$).

Row operations do not change the row space. This means you can find the row space by reducing $A$ to row echelon form and taking the span of the non-zero rows.
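
A SymPy sketch of that recipe (`Matrix.rref()` returns the reduced row echelon form together with the pivot column indices; the matrix is illustrative):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])
R, pivots = A.rref()
print(R)   # Matrix([[1, 0, -1], [0, 1, 2], [0, 0, 0]])

# The nonzero rows of R span Row(A)
basis = [R.row(i) for i in range(len(pivots))]
print(basis)   # [Matrix([[1, 0, -1]]), Matrix([[0, 1, 2]])]
```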

The Span of a Set of Vectors

Given vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$ in a vector space $V$, their span is the set of all linear combinations:

$$\text{span}\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\} = \{ c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_k\vec{v}_k : c_1, c_2, \ldots, c_k \in \mathbb{R} \}$$

The span is always a subspace. It is the smallest subspace containing all the given vectors.

Geometric interpretation in $\mathbb{R}^3$:

  • The span of a single non-zero vector is a line through the origin
  • The span of two non-parallel vectors is a plane through the origin
  • The span of three “independent” vectors (not all in the same plane) is all of $\mathbb{R}^3$

If some vectors in your set are redundant (expressible as combinations of others), the span does not change when you remove them. This leads to the concept of linear independence, which you will study soon.
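
Checking whether a vector lies in a span reduces to the same rank test used for the column space, since $\text{span}\{\vec{v}_1, \ldots, \vec{v}_k\} = \text{Col}(V)$ when the $\vec{v}_i$ are the columns of $V$. A sketch with illustrative vectors:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
V = np.column_stack([v1, v2])   # span{v1, v2} = Col(V)

def in_span(V, w):
    return np.linalg.matrix_rank(V) == np.linalg.matrix_rank(np.column_stack([V, w]))

print(in_span(V, 2 * v1 + 3 * v2))              # True by construction
print(in_span(V, np.array([1.0, 1.0, 0.0])))    # False: third component would have to be 1 + 1
```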

Notation and Terminology

| Term | Meaning | Example |
| --- | --- | --- |
| Vector space | Set with addition and scalar multiplication satisfying axioms | $\mathbb{R}^3$, $P_2$ (polynomials of degree $\leq 2$) |
| Subspace | Subset that is itself a vector space | Lines through origin in $\mathbb{R}^2$ |
| $\text{span}\{\vec{v}_1, \ldots, \vec{v}_k\}$ | All linear combinations of the vectors | $\text{span}\left\{\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right\} = \mathbb{R}^2$ |
| Null space | Solutions to $A\vec{x} = \vec{0}$ | $\text{Null}(A) = \{\vec{x} : A\vec{x} = \vec{0}\}$ |
| Column space | Span of columns of $A$; all possible $A\vec{x}$ | $\text{Col}(A)$ |
| Row space | Span of rows of $A$ | $\text{Row}(A)$ |
| Closure | Property that an operation keeps you inside the set | Adding two vectors in $W$ gives a vector in $W$ |
| Zero vector | The additive identity; $\vec{v} + \vec{0} = \vec{v}$ | $\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$ in $\mathbb{R}^3$ |

Examples

Example 1: Verifying a Few Vector Space Axioms for $\mathbb{R}^2$

Show that $\mathbb{R}^2$ satisfies the commutativity axiom and the zero vector axiom.

Solution:

We need to verify two axioms for $\mathbb{R}^2$ with standard vector addition.

Commutativity axiom: $\vec{u} + \vec{v} = \vec{v} + \vec{u}$

Let $\vec{u} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$ and $\vec{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$ be any vectors in $\mathbb{R}^2$.

$$\vec{u} + \vec{v} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} + \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \end{pmatrix}$$

$$\vec{v} + \vec{u} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} v_1 + u_1 \\ v_2 + u_2 \end{pmatrix}$$

Since addition of real numbers is commutative ($u_1 + v_1 = v_1 + u_1$ and $u_2 + v_2 = v_2 + u_2$), we have $\vec{u} + \vec{v} = \vec{v} + \vec{u}$.

Zero vector axiom: There exists $\vec{0}$ such that $\vec{u} + \vec{0} = \vec{u}$ for all $\vec{u}$.

The zero vector in $\mathbb{R}^2$ is $\vec{0} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$.

For any $\vec{u} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$:

$$\vec{u} + \vec{0} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} u_1 + 0 \\ u_2 + 0 \end{pmatrix} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \vec{u}$$

Both axioms are verified. The remaining axioms can be verified similarly, confirming that $\mathbb{R}^2$ is a vector space.

Example 2: Is the Set of Vectors $\begin{pmatrix} x \\ 0 \end{pmatrix}$ a Subspace of $\mathbb{R}^2$?

Let $W = \left\{ \begin{pmatrix} x \\ 0 \end{pmatrix} : x \in \mathbb{R} \right\}$. Determine if $W$ is a subspace of $\mathbb{R}^2$.

Solution:

We use the subspace test: check closure under addition and closure under scalar multiplication.

Step 1: Is $W$ non-empty?

Yes. For example, $\begin{pmatrix} 0 \\ 0 \end{pmatrix} \in W$ (take $x = 0$).

Step 2: Check closure under addition.

Take two arbitrary vectors in $W$: $$\vec{u} = \begin{pmatrix} a \\ 0 \end{pmatrix}, \quad \vec{v} = \begin{pmatrix} b \\ 0 \end{pmatrix}$$

Their sum is: $$\vec{u} + \vec{v} = \begin{pmatrix} a + b \\ 0 + 0 \end{pmatrix} = \begin{pmatrix} a + b \\ 0 \end{pmatrix}$$

This has the form $\begin{pmatrix} x \\ 0 \end{pmatrix}$ where $x = a + b$, so $\vec{u} + \vec{v} \in W$.

Step 3: Check closure under scalar multiplication.

Take any vector $\vec{u} = \begin{pmatrix} a \\ 0 \end{pmatrix} \in W$ and any scalar $c \in \mathbb{R}$.

$$c\vec{u} = c\begin{pmatrix} a \\ 0 \end{pmatrix} = \begin{pmatrix} ca \\ 0 \end{pmatrix}$$

This has the form $\begin{pmatrix} x \\ 0 \end{pmatrix}$ where $x = ca$, so $c\vec{u} \in W$.

Conclusion: $W$ passes both closure tests, so $W$ is a subspace of $\mathbb{R}^2$.

Geometric interpretation: $W$ is the $x$-axis, a line through the origin. Lines through the origin are always subspaces.

Example 3: Is the Set of Vectors $\begin{pmatrix} x \\ 1 \end{pmatrix}$ a Subspace of $\mathbb{R}^2$?

Let $W = \left\{ \begin{pmatrix} x \\ 1 \end{pmatrix} : x \in \mathbb{R} \right\}$. Determine if $W$ is a subspace of $\mathbb{R}^2$.

Solution:

We check the subspace conditions. We only need to find one failure.

Quick check: Is the zero vector in $W$?

The zero vector is $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$. Every vector in $W$ has the form $\begin{pmatrix} x \\ 1 \end{pmatrix}$, where the second component is always 1.

Since the zero vector has second component 0, not 1, we have $\vec{0} \notin W$.

Conclusion: $W$ is not a subspace of $\mathbb{R}^2$.

Alternative verification using closure:

Even without checking for the zero vector, we can see closure fails. Take $\vec{u} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \in W$. Consider $2\vec{u}$:

$$2\vec{u} = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$$

This does not have the form $\begin{pmatrix} x \\ 1 \end{pmatrix}$ (the second component is 2, not 1), so $2\vec{u} \notin W$. Closure under scalar multiplication fails.

Geometric interpretation: $W$ is the horizontal line $y = 1$, which does not pass through the origin. Lines that miss the origin are never subspaces.

Example 4: Finding the Span of Two Vectors

Find $\text{span}\left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \end{pmatrix} \right\}$ and describe it geometrically.

Solution:

The span consists of all linear combinations:

$$\text{span}\left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \end{pmatrix} \right\} = \left\{ c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 2 \\ 4 \end{pmatrix} : c_1, c_2 \in \mathbb{R} \right\}$$

Key observation: The second vector is a multiple of the first:

$$\begin{pmatrix} 2 \\ 4 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

This means any linear combination can be rewritten:

$$c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 2 \\ 4 \end{pmatrix} = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \cdot 2\begin{pmatrix} 1 \\ 2 \end{pmatrix} = (c_1 + 2c_2)\begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

Since $c_1 + 2c_2$ can be any real number (by choosing appropriate $c_1$ and $c_2$), the span equals:

$$\text{span}\left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \end{pmatrix} \right\} = \text{span}\left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix} \right\} = \left\{ t\begin{pmatrix} 1 \\ 2 \end{pmatrix} : t \in \mathbb{R} \right\}$$

Geometric description: The span is a line through the origin in the direction of $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$. In slope-intercept form, this line is $y = 2x$.

Conclusion: $\text{span}\left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \end{pmatrix} \right\}$ is the line $y = 2x$, which can be written as:

$$\left\{ \begin{pmatrix} x \\ 2x \end{pmatrix} : x \in \mathbb{R} \right\}$$

Important lesson: Adding a redundant vector (one that is already a multiple of vectors you have) does not expand the span. The two vectors are linearly dependent, a concept we will formalize soon.

Example 5: Finding the Null Space of a Matrix

Find the null space of $A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$.

Solution:

The null space is the set of all solutions to $A\vec{x} = \vec{0}$.

Step 1: Set up the homogeneous system.

$$\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

This gives the system: $$x_1 + 2x_2 = 0$$ $$2x_1 + 4x_2 = 0$$

Step 2: Reduce to row echelon form.

$$\left[\begin{array}{cc|c} 1 & 2 & 0 \\ 2 & 4 & 0 \end{array}\right] \xrightarrow{R_2 \to R_2 - 2R_1} \left[\begin{array}{cc|c} 1 & 2 & 0 \\ 0 & 0 & 0 \end{array}\right]$$

Step 3: Identify basic and free variables.

The pivot is in column 1, so $x_1$ is a basic variable. Column 2 has no pivot, so $x_2$ is a free variable.

Step 4: Express the basic variable in terms of the free variable.

From row 1: $x_1 + 2x_2 = 0$, so $x_1 = -2x_2$.

Step 5: Write the general solution.

Let $x_2 = t$ (where $t$ is any real number). Then:

$$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -2t \\ t \end{pmatrix} = t\begin{pmatrix} -2 \\ 1 \end{pmatrix}$$

Step 6: Describe the null space.

$$\text{Null}(A) = \left\{ t\begin{pmatrix} -2 \\ 1 \end{pmatrix} : t \in \mathbb{R} \right\} = \text{span}\left\{ \begin{pmatrix} -2 \\ 1 \end{pmatrix} \right\}$$

Verification: Check that $\begin{pmatrix} -2 \\ 1 \end{pmatrix}$ is in the null space:

$$A\begin{pmatrix} -2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}\begin{pmatrix} -2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1(-2) + 2(1) \\ 2(-2) + 4(1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \checkmark$$

Geometric interpretation: The null space is a line through the origin in $\mathbb{R}^2$, specifically the line in the direction of $\begin{pmatrix} -2 \\ 1 \end{pmatrix}$.

Note: The two rows of $A$ are multiples of each other (row 2 is twice row 1), which is why row reduction produced a row of zeros. That redundancy leaves a free variable, and the free variable is exactly what gives the null space its nonzero vectors.
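
The hand computation can be confirmed symbolically (a quick SymPy check):

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [2, 4]])
print(A.nullspace())   # [Matrix([[-2], [1]])]: matches the basis found by hand
print(A.rank())        # 1: the dependent rows leave one free variable
```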

Key Properties and Rules

Properties of Subspaces

  • Every subspace contains the zero vector $\vec{0}$
  • The zero vector alone $\{\vec{0}\}$ is a subspace (the trivial subspace)
  • The entire space $V$ is a subspace of itself
  • The intersection of two subspaces is always a subspace
  • The union of two subspaces is generally NOT a subspace (unless one contains the other)

Properties of Span

  • $\text{span}\{\vec{v}_1, \ldots, \vec{v}_k\}$ is always a subspace
  • $\text{span}\{\vec{0}\} = \{\vec{0}\}$
  • $\text{span}\{\}$ (the span of the empty set) is defined to be $\{\vec{0}\}$
  • Adding redundant vectors does not change the span
  • The span is the smallest subspace containing the given vectors

Properties of Null Space

  • $\text{Null}(A)$ is a subspace of $\mathbb{R}^n$ (where $A$ is $m \times n$)
  • $\text{Null}(A) = \{\vec{0}\}$ if and only if $A\vec{x} = \vec{0}$ has only the trivial solution
  • The dimension of $\text{Null}(A)$ equals the number of free variables

Properties of Column Space

  • $\text{Col}(A)$ is a subspace of $\mathbb{R}^m$ (where $A$ is $m \times n$)
  • $\vec{b}$ is in $\text{Col}(A)$ if and only if $A\vec{x} = \vec{b}$ is consistent
  • $\text{Col}(A)$ equals the span of the columns of $A$

The Subspace Test Summary

A non-empty subset $W$ of a vector space $V$ is a subspace if and only if for all $\vec{u}, \vec{v} \in W$ and all scalars $c, d$:

$$c\vec{u} + d\vec{v} \in W$$

(This single condition combines closure under addition and scalar multiplication.)

Real-World Applications

Solution Sets of Differential Equations

In physics and engineering, many phenomena are described by differential equations. For example, the equation

$$\frac{d^2y}{dt^2} + y = 0$$

describes simple harmonic motion (like a spring oscillating). The solutions to this equation form a vector space. You can verify: if $y_1(t)$ and $y_2(t)$ are both solutions, then $c_1 y_1(t) + c_2 y_2(t)$ is also a solution. The general solution is $y = c_1 \cos(t) + c_2 \sin(t)$, a two-dimensional vector space spanned by $\cos(t)$ and $\sin(t)$.

This is not just an abstract observation. It means that once you find two independent solutions, you have found all solutions. The structure of vector spaces guarantees this.
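
A one-line symbolic check of the superposition claim (assuming SymPy): substitute $y = c_1\cos t + c_2\sin t$ into $y'' + y$ and simplify.

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
y = c1 * sp.cos(t) + c2 * sp.sin(t)
print(sp.simplify(sp.diff(y, t, 2) + y))   # 0: every such combination is a solution
```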

Polynomial Curve Fitting

When fitting a polynomial of degree at most $n$ to data points, you are working in the vector space $P_n$. The least-squares fitting problem becomes a problem about projecting a target onto a subspace. The mathematical machinery of vector spaces and subspaces provides the theoretical foundation for regression analysis and curve fitting throughout statistics and machine learning.

Signal Processing

In digital signal processing, a signal can be viewed as a vector. A finite signal with $n$ samples is a vector in $\mathbb{R}^n$. Frequency components of signals live in specific subspaces. Filters are designed to project signals onto certain subspaces (keeping some frequencies) while eliminating components in other subspaces (removing noise or unwanted frequencies).

The Fourier transform, one of the most important tools in signal processing, is fundamentally about changing bases in a vector space of functions. Different representations (time domain vs. frequency domain) are just different ways of expressing the same vector.

Quantum Mechanics

In quantum mechanics, the state of a physical system is represented by a vector in a complex vector space called a Hilbert space. Physically observable quantities correspond to measurements, and possible measurement outcomes are related to subspaces. The famous principle that you cannot simultaneously measure position and momentum with arbitrary precision is a statement about the geometry of these subspaces.

The entire mathematical framework of quantum mechanics is built on linear algebra. Vector spaces are not just useful in quantum mechanics; they are the language in which the theory is written.

Self-Test Problems

Problem 1: Is the set $\{(x, y, z) \in \mathbb{R}^3 : x + y + z = 0\}$ a subspace of $\mathbb{R}^3$?

Answer:

Yes, it is a subspace.

Check 1: Is the zero vector in the set? $0 + 0 + 0 = 0$, so yes.

Check 2: Closure under addition. If $x_1 + y_1 + z_1 = 0$ and $x_2 + y_2 + z_2 = 0$, then:

$(x_1 + x_2) + (y_1 + y_2) + (z_1 + z_2) = (x_1 + y_1 + z_1) + (x_2 + y_2 + z_2) = 0 + 0 = 0$

Check 3: Closure under scalar multiplication. If $x + y + z = 0$, then:

$cx + cy + cz = c(x + y + z) = c \cdot 0 = 0$

Both closure conditions pass, so this is a subspace. Geometrically, it is a plane through the origin.

Problem 2: Is the set $\{(x, y, z) \in \mathbb{R}^3 : x + y + z = 1\}$ a subspace of $\mathbb{R}^3$?

Answer:

No, it is not a subspace.

The zero vector $(0, 0, 0)$ is not in the set because $0 + 0 + 0 = 0 \neq 1$.

Alternatively: Take $(1, 0, 0)$ which satisfies $1 + 0 + 0 = 1$. Then $2(1, 0, 0) = (2, 0, 0)$, but $2 + 0 + 0 = 2 \neq 1$, so closure under scalar multiplication fails.

Geometrically, this is a plane that does not pass through the origin.

Problem 3: Is the set of all $2 \times 2$ symmetric matrices (where $A^T = A$) a subspace of $M_{2 \times 2}$?

Answer:

Yes, it is a subspace.

A symmetric $2 \times 2$ matrix has the form $\begin{pmatrix} a & b \\ b & c \end{pmatrix}$.

Check 1: The zero matrix $\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ is symmetric.

Check 2: If $A$ and $B$ are symmetric, then $(A + B)^T = A^T + B^T = A + B$, so $A + B$ is symmetric.

Check 3: If $A$ is symmetric and $c$ is a scalar, then $(cA)^T = cA^T = cA$, so $cA$ is symmetric.

Problem 4: Find the null space of $A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \end{pmatrix}$.

Answer:

Solve $A\vec{x} = \vec{0}$:

$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 1 & 2 & 3 & 0 \end{array}\right] \xrightarrow{R_2 \to R_2 - R_1} \left[\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & 1 & 2 & 0 \end{array}\right]$$

Continue to RREF:

$$\xrightarrow{R_1 \to R_1 - R_2} \left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & 0 \end{array}\right]$$

Basic variables: $x_1$, $x_2$. Free variable: $x_3 = t$.

From the RREF: $x_1 = t$ and $x_2 = -2t$.

$$\text{Null}(A) = \left\{ t\begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix} : t \in \mathbb{R} \right\} = \text{span}\left\{ \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix} \right\}$$

Problem 5: Describe the column space of $A = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}$.

Answer:

The column space is the span of the columns:

$$\text{Col}(A) = \text{span}\left\{ \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \begin{pmatrix} 2 \\ 6 \end{pmatrix} \right\}$$

Notice that $\begin{pmatrix} 2 \\ 6 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 3 \end{pmatrix}$, so the columns are parallel.

$$\text{Col}(A) = \text{span}\left\{ \begin{pmatrix} 1 \\ 3 \end{pmatrix} \right\} = \left\{ t\begin{pmatrix} 1 \\ 3 \end{pmatrix} : t \in \mathbb{R} \right\}$$

The column space is a line through the origin in $\mathbb{R}^2$.

Problem 6: Let $W_1 = \text{span}\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\}$ and $W_2 = \text{span}\left\{ \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}$. Is $W_1 \cup W_2$ a subspace?

Answer:

No, $W_1 \cup W_2$ is not a subspace.

$W_1$ is the $x$-axis and $W_2$ is the $y$-axis.

Take $\vec{u} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \in W_1 \subseteq W_1 \cup W_2$ and $\vec{v} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \in W_2 \subseteq W_1 \cup W_2$.

Then $\vec{u} + \vec{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

This vector is not on the $x$-axis (not in $W_1$) and not on the $y$-axis (not in $W_2$), so $\vec{u} + \vec{v} \notin W_1 \cup W_2$.

Closure under addition fails.

Problem 7: Is the zero polynomial the zero vector in $P_3$? Explain.

Answer:

Yes. The zero polynomial is $p(x) = 0$ (the polynomial that equals 0 for all values of $x$).

In $P_3$, this can be written as $0 + 0x + 0x^2 + 0x^3$.

It satisfies the zero vector property: for any polynomial $q(x) \in P_3$,

$$q(x) + 0 = q(x)$$

Every vector space has exactly one zero vector, and in $P_n$, that zero vector is the zero polynomial.

Summary

  • A vector space is a set of objects with addition and scalar multiplication satisfying ten axioms. The abstraction allows us to apply the same techniques to arrows, polynomials, matrices, functions, and more.

  • Important examples of vector spaces include $\mathbb{R}^n$ (n-tuples), $P_n$ (polynomials of degree at most $n$), $M_{m \times n}$ (matrices), and function spaces.

  • A subspace is a subset of a vector space that is itself a vector space. To verify a subspace, check closure under addition and closure under scalar multiplication. Every subspace contains the zero vector.

  • The null space of a matrix $A$ is the set of solutions to $A\vec{x} = \vec{0}$. It is always a subspace of the domain.

  • The column space of a matrix $A$ is the span of its columns, which equals the set of all possible outputs $A\vec{x}$. It tells you which systems $A\vec{x} = \vec{b}$ are consistent.

  • The row space of a matrix $A$ is the span of its rows. Row operations preserve the row space.

  • The span of a set of vectors is the set of all linear combinations. It is always a subspace and represents the smallest subspace containing those vectors.

  • Sets that do not contain the origin (like the line $y = x + 1$ or the plane $x + y + z = 1$) are never subspaces.

  • Vector spaces appear throughout mathematics, physics, and engineering, from differential equations to signal processing to quantum mechanics. The abstract framework provides powerful tools that work across all these applications.