Coordinates and matrices

What becomes possible once a basis is fixed?

Which properties survive a change of basis?

A basis lets us reach every vector in a space through one coherent family of generators. That handle now opens the way to a numerical description. We knew that every vector could be assembled from the chosen basis, yet we had not written down the coefficients that perform that assembly.

That is the next structural move.

Once a basis is fixed, every vector can be recorded by a list of numbers.

If B = (e1, e2, ..., en) is a basis of V, then each vector v in V can be written in one and only one way as

v = a1 e1 + a2 e2 + ... + an en

The coefficients a1, a2, ..., an are the coordinates of v relative to the chosen basis B.

This is the first point that has to remain sharp. A vector comes to us through a coordinate list relative to a basis. The list depends on the basis. Change the basis, and the coefficients usually change, even while the vector itself stays fixed.

Coordinates arise from a chosen basis. They belong to the description of the vector space rather than to the vector space by itself.
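A small numerical sketch makes the basis-dependence concrete. The vector and the two bases below are chosen purely for illustration: finding the coordinates of v in a basis B amounts to solving the linear system B @ coords = v.

```python
import numpy as np

# The same vector v in R^2, described in two different bases.
v = np.array([3.0, 1.0])

# B1 is the standard basis; B2 is another pair of independent
# vectors (both chosen here purely for illustration).
B1 = np.column_stack([[1.0, 0.0], [0.0, 1.0]])
B2 = np.column_stack([[1.0, 1.0], [1.0, -1.0]])

# Coordinates of v relative to a basis: solve  B @ coords = v.
coords_B1 = np.linalg.solve(B1, v)
coords_B2 = np.linalg.solve(B2, v)

print(coords_B1)  # [3. 1.]
print(coords_B2)  # [2. 1.]
```

The coordinate lists differ, yet reassembling either one (B1 @ coords_B1 or B2 @ coords_B2) recovers the same vector v.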

Still, the gain is substantial. A vector space that was previously accessed through abstract generators can now be described by coordinate columns:

        | a1 |
        | a2 |
[v]_B = | .. |
        | an |

The notation reminds us that the column presents v in the basis B and gives us access to the space through that representation.

Once this translation is available for vectors, the same idea applies to linear maps.

Suppose f: V -> W is linear, and suppose we have fixed a basis B = (e1, ..., en) for V and a basis C = (w1, ..., wm) for W.

Because f is linear, it is enough to know what it does to the basis vectors of V. Each image f(ej) lands in W, so it can be expanded in the basis C. The coefficients of that expansion become the jth column of a matrix.

That is where matrices come from structurally. A matrix is the coordinate record of a linear map after bases have been chosen. Its role is to organize that representation in a form we can compute with.
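The column-by-column recipe can be carried out directly. The map f below is a made-up example; using the standard bases on both sides, the jth column of the matrix is simply f applied to the jth basis vector.

```python
import numpy as np

# A linear map f: R^3 -> R^2 (an illustrative example):
# f(x, y, z) = (x + 2z, y - z)
def f(v):
    x, y, z = v
    return np.array([x + 2 * z, y - z])

# With standard bases in both spaces, the j-th column of the
# matrix is f applied to the j-th basis vector of the domain.
basis_V = np.eye(3)
A = np.column_stack([f(e) for e in basis_V])

print(A)
# [[ 1.  0.  2.]
#  [ 0.  1. -1.]]
```

Applying A to a coordinate column now reproduces f: A @ np.array([1, 2, 3]) agrees with f((1, 2, 3)).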

V ──f──▶ W
│        │
│basis   │basis
▼        ▼
R^n ──A──▶ R^m
(matrix represents f)

Read the diagram from top to bottom. The linear map belongs to the structural world above. The matrix belongs to the numerical world below. Choosing bases is what lets the first be represented by the second.

And once that representation exists, linear maps become computable. If v has coordinate column [v]_B, then applying f and then taking coordinates in C gives

[f(v)]_C = A [v]_B

The matrix A packages exactly the information needed to compute the coordinate description of the output from the coordinate description of the input.
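The identity [f(v)]_C = A [v]_B can be checked numerically even with non-standard bases. All the particular matrices and vectors below are illustrative choices; the map is given by its standard-basis matrix M, and expanding each image f(e_j) in C amounts to solving against C.

```python
import numpy as np

# Non-standard bases for the domain and codomain (illustrative).
B = np.column_stack([[1.0, 1.0], [0.0, 1.0]])   # basis of the domain
C = np.column_stack([[2.0, 0.0], [0.0, 1.0]])   # basis of the codomain

# A linear map given by its standard-basis matrix M.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Matrix of the map relative to B and C: expand each f(e_j) in C,
# i.e. solve  C @ A = M @ B  for A.
A = np.linalg.solve(C, M @ B)

# Check [f(v)]_C = A @ [v]_B for a sample vector.
v = np.array([5.0, 6.0])
v_B = np.linalg.solve(B, v)          # coordinates of v in B
fv_C = np.linalg.solve(C, M @ v)     # coordinates of f(v) in C
assert np.allclose(A @ v_B, fv_C)
```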

This creates a predictable temptation. Because the computation becomes explicit, it is easy to let the representation stand in for the map. The structural distinction we have been building toward keeps those two levels clear.

The same linear map can be represented by different matrices when different bases are chosen. If the basis in the domain changes, the input coordinates change. If the basis in the codomain changes, the output coordinates change. The underlying map remains the same while its numerical description shifts.

For a linear map from a space to itself, this change is especially visible. If A is the matrix in one basis and P records a change of basis, then the same map in the new basis is represented by

A' = P^(-1) A P

The entries change. The transformation remains the same.
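This is easy to witness numerically. The matrices below are illustrative: P records a change of basis, the entries of A' = P^(-1) A P differ from those of A, yet basis-independent quantities such as the trace and determinant agree.

```python
import numpy as np

# The same map on R^2 seen in two bases (illustrative numbers).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # matrix in the old basis
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # columns: new basis in old coordinates

A_prime = np.linalg.inv(P) @ A @ P    # same map, new basis

print(A_prime)
# [[2. 0.]
#  [0. 3.]]

# The entries change, but invariants of the map do not.
assert np.isclose(np.trace(A), np.trace(A_prime))
assert np.isclose(np.linalg.det(A), np.linalg.det(A_prime))
```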

That is why matrices belong to representation: they express linear structure through chosen bases.

The same discipline explains matrix multiplication. If f: V -> W is represented by A and g: W -> X is represented by B, then the composite g ∘ f is represented by BA.

Matrix multiplication is the coordinate shadow of composition of linear maps.
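A quick check with illustrative matrices: applying f and then g to a coordinate column gives the same result as applying the single product matrix BA.

```python
import numpy as np

# f: R^2 -> R^3 represented by A, g: R^3 -> R^2 represented by B
# (standard bases throughout; numbers chosen for illustration).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

v = np.array([1.0, 2.0])

# g(f(v)) computed in two stages vs. the composite matrix BA.
two_stages = B @ (A @ v)
composite = (B @ A) @ v
assert np.allclose(two_stages, composite)
```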

So several things now separate cleanly. Coordinates depend on a basis. Matrices depend on bases. Linear maps and vector spaces remain at the structural level those descriptions express.

Some quantities survive every allowable change of basis and therefore belong to the structure more deeply. Dimension survives. So do rank and nullity. Whether a map is invertible survives. And the fact that composition is coherent survives as well, because changing the basis does not change the underlying pattern of composition.
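As a sketch of that invariance (with illustrative matrices): representing the same map in new bases means multiplying by invertible change-of-basis matrices on either side, and that cannot change the rank.

```python
import numpy as np

# A rank-1 matrix, i.e. a map whose image is a line.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # invertible change of basis (domain)
Q = np.array([[2.0, 0.0],
              [1.0, 1.0]])            # invertible change of basis (codomain)

# The same map, represented in the new bases.
A_new = np.linalg.inv(Q) @ A @ P

# The entries differ, but the rank survives the change of basis.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A_new) == 1
```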

This is the real achievement of coordinates and matrices. They make linear structure operable while preserving the distinction between representation and structure.

We can now calculate with vectors and maps, solve systems, compare transformations, and express composition numerically. The layer we actually write down and compute with is the representational one, while the more fundamental layer is the underlying structure it expresses.

A matrix is a bookkeeping device for a chosen description of a linear map. A coordinate column is a bookkeeping device for a chosen description of a vector. The structure lives above the bookkeeping.

At this point linear algebra has become fully representable. We can pass from abstract structure to numerical form whenever a basis is chosen, and we can move between descriptions while preserving the object described.

Linear structure now supports full numerical description. A further layer of structure is what brings spatial meaning into view by distinguishing angle from scaling and distance from arbitrary linear distortion.

What extra structure appears when a linear object is interpreted as space?

And which transformations preserve spatial meaning together with linear structure?
