
3.2 Transpose and special matrices

Use transpose, symmetry, non-commuting products, and block structure to read matrix shape and algebraic behavior.


Once matrix multiplication is available, structure starts to matter. Some matrices are easier to understand because their entries line up in a special pattern. The transpose is one of the main tools for detecting that structure.

Transpose swaps rows and columns

Definition

Transpose

If A = [a_{ij}] is an m × n matrix, then its transpose A^T is the n × m matrix whose (j, i) entry is a_{ij}.

Equivalently,

(A^T)_{ji} = a_{ij}.

So every row of A becomes a column of A^T, and every column of A becomes a row.

Worked example

Compute a transpose

Let

A = \begin{bmatrix} 1 & 4 & -2 \\ 0 & 3 & 5 \end{bmatrix}.

Then

A^T = \begin{bmatrix} 1 & 0 \\ 4 & 3 \\ -2 & 5 \end{bmatrix}.

The 2 × 3 matrix becomes a 3 × 2 matrix because rows and columns swap roles.
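The definition translates directly into code. Below is a minimal pure-Python sketch using nested lists; the helper name `transpose` is our own choice, not anything the notes prescribe.

```python
def transpose(A):
    """Return A^T: entry (j, i) of the result is entry (i, j) of A."""
    m, n = len(A), len(A[0])          # A is m x n
    return [[A[i][j] for i in range(m)] for j in range(n)]

A = [[1, 4, -2],
     [0, 3, 5]]                        # 2 x 3, as in the worked example

print(transpose(A))                    # [[1, 0], [4, 3], [-2, 5]], a 3 x 2 matrix
```

Transposing twice returns the original matrix, which is the identity (A^T)^T = A stated below.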

Two basic identities are worth learning early:

(A^T)^T = A, \qquad (A + B)^T = A^T + B^T.

When products are defined, transpose also reverses the order:

(AB)^T = B^T A^T.

The reversal is not cosmetic. It reflects the fact that the row-column pairing has been turned around.
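The reversal rule can be checked numerically on any pair of compatible matrices. A minimal pure-Python sketch (the `transpose` and `matmul` helpers are our own, with sizes chosen so the product AB is defined):

```python
def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def matmul(A, B):
    # (AB)[i][j] = sum over k of A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 4, -2],
     [0, 3, 5]]      # 2 x 3
B = [[2, 0],
     [1, -1],
     [3, 2]]         # 3 x 2

lhs = transpose(matmul(A, B))             # (AB)^T, size 2 x 2
rhs = matmul(transpose(B), transpose(A))  # B^T A^T, size 2 x 2
print(lhs == rhs)                         # True
```

Note that A^T B^T would be 3 × 3 here, so it could not even equal the 2 × 2 matrix (AB)^T; the reversed order is the only one whose shapes line up in general.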

Symmetric and skew-symmetric matrices

Some square matrices agree with their transpose; others differ from it in a very controlled way.

Definition

Symmetric and skew-symmetric matrices

Let A be a square matrix.

  • A is symmetric if A^T = A.
  • A is skew-symmetric if A^T = -A.

For a symmetric matrix, reflecting across the main diagonal changes nothing. For a skew-symmetric matrix, reflection changes every entry by a minus sign.

Worked example

Classify two matrices

The matrix

\begin{bmatrix} 2 & -1 \\ -1 & 3 \end{bmatrix}

is symmetric because transposing it leaves the entries unchanged.

The matrix

\begin{bmatrix} 0 & 4 \\ -4 & 0 \end{bmatrix}

is skew-symmetric because

A^T = \begin{bmatrix} 0 & -4 \\ 4 & 0 \end{bmatrix} = -A.

Notice that every diagonal entry of a skew-symmetric real matrix must be 0, because the diagonal entry must equal its own negative.
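Both definitions, and the forced diagonal zeros, are easy to test mechanically. A sketch with helper names of our own:

```python
def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def is_symmetric(A):
    return transpose(A) == A

def is_skew_symmetric(A):
    negA = [[-x for x in row] for row in A]   # entrywise negation, -A
    return transpose(A) == negA

S = [[2, -1],
     [-1, 3]]
K = [[0, 4],
     [-4, 0]]

print(is_symmetric(S), is_skew_symmetric(K))     # True True
# The diagonal of a skew-symmetric matrix must be all zeros:
print(all(K[i][i] == 0 for i in range(len(K))))  # True
```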

Order matters: commuting versus non-commuting matrices

The course notes emphasize that matrix multiplication is not commutative in general. Still, some pairs of matrices do commute.

Definition

Commuting matrices

Two square matrices A and B of the same size commute if

AB = BA.

Commuting matrices are special. They are not the default.

Theorem

Matrix multiplication is not commutative in general

There exist square matrices A and B such that AB \ne BA.

Worked example

A concrete non-commuting pair

Let

A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}.

Then

AB = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}, \qquad BA = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.

So AB \ne BA.

This is why you must preserve order in every matrix identity. Reversing factors changes the statement.
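The non-commuting pair from the worked example can be verified directly; a minimal sketch (the `matmul` helper is ours):

```python
def matmul(A, B):
    # (AB)[i][j] = sum over k of A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

print(matmul(A, B))                    # [[2, 1], [1, 1]]
print(matmul(B, A))                    # [[1, 1], [1, 2]]
print(matmul(A, B) == matmul(B, A))    # False
```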

Special matrices are easier to read because of their shape

Several standard matrix families are defined by the locations of zero entries. These shapes matter because they simplify later computations.

  • A diagonal matrix can have nonzero entries only on the main diagonal.
  • An upper-triangular matrix has only zeros below the main diagonal.
  • A lower-triangular matrix has only zeros above the main diagonal.

For such matrices, multiplication and invertibility often become easier to analyze because the zero pattern survives useful operations.
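One concrete instance of a zero pattern surviving an operation: the product of two upper-triangular matrices is again upper triangular. A sketch with our own helper names:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def is_upper_triangular(A):
    # True when every entry strictly below the main diagonal is 0
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(i))

U1 = [[1, 2, 3],
      [0, 4, 5],
      [0, 0, 6]]
U2 = [[2, 0, 1],
      [0, 3, 0],
      [0, 0, 5]]

P = matmul(U1, U2)
print(is_upper_triangular(P))   # True: the zero pattern survives multiplication
```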

Block matrices group information on purpose

Sometimes a large matrix is best read as smaller submatrices pasted together. That is called block notation.

For example,

\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}

is a block matrix made from four smaller blocks. Block notation is not a new kind of matrix. It is a disciplined way to see structure inside a large one.

When the sizes match correctly, block addition and block multiplication follow the same formal patterns as ordinary matrix operations. The advantage is that you can reason about large matrices one chunk at a time.
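Block multiplication can be checked against the ordinary product. The sketch below splits two 4 × 4 matrices into 2 × 2 blocks (the names A11 through B22 and the helpers are ours, chosen to match the block notation above) and confirms that the block formula reproduces the full product:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def blocks(X):
    # Split a 4 x 4 matrix into four 2 x 2 blocks:
    # top-left, top-right, bottom-left, bottom-right.
    return ([row[:2] for row in X[:2]], [row[2:] for row in X[:2]],
            [row[:2] for row in X[2:]], [row[2:] for row in X[2:]])

M = [[1, 2, 0, 1],
     [3, 4, 1, 0],
     [0, 1, 2, 2],
     [1, 0, 3, 1]]
N = [[2, 0, 1, 1],
     [1, 1, 0, 2],
     [0, 2, 1, 0],
     [1, 0, 0, 3]]

A11, A12, A21, A22 = blocks(M)
B11, B12, B21, B22 = blocks(N)

# Blockwise, the product follows the usual 2 x 2 pattern with blocks as entries:
C11 = matadd(matmul(A11, B11), matmul(A12, B21))
C12 = matadd(matmul(A11, B12), matmul(A12, B22))
C21 = matadd(matmul(A21, B11), matmul(A22, B21))
C22 = matadd(matmul(A21, B12), matmul(A22, B22))

reassembled = ([C11[i] + C12[i] for i in range(2)] +
               [C21[i] + C22[i] for i in range(2)])
print(reassembled == matmul(M, N))   # True
```

Notice that each block product keeps its factors in the original left-to-right order, exactly as the non-commutativity discussion requires.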

Common mistakes

Common mistake

Do not transpose a product in the same order

The correct identity is (AB)^T = B^T A^T, not A^T B^T. The order reverses.

Common mistake

Symmetric does not mean every pair of matrices commutes

Symmetry is a property of one matrix. Commuting is a property of a pair of matrices. They answer different questions.

Quick checks

Quick check

If A is 3 × 2, what is the size of A^T?

Swap rows and columns.

Solution

A^T is 2 × 3: transposing an m × n matrix always gives an n × m matrix.

Quick check

What can you say about the diagonal entries of a real skew-symmetric matrix?

Use the equation A^T = -A on the diagonal.

Solution

Every diagonal entry is 0. The (i, i) entry of A^T = -A reads a_{ii} = -a_{ii}, which forces a_{ii} = 0.

Quick check

Why is every diagonal matrix equal to its transpose?

Answer from the location of its possible nonzero entries.

Solution

A diagonal matrix can be nonzero only at its (i, i) entries. Transposing swaps the (i, j) and (j, i) entries, and every off-diagonal entry is 0 on both sides, so the matrix is unchanged.

This note builds on 3.1 Matrix multiplication and identity matrices. For solution-set structure, continue to 4.1 Homogeneous systems and null space.

Key terms in this unit