Interview Preparation
Linear Algebra & Decomposition
Brief notes prepared for technical interviews

These notes cover the linear-algebra machinery behind ML — vector-space structure, curvature, similarity, and the matrix decompositions used in optimization, compression, and dimensionality reduction.

Basics

Linearity
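
For reference (standard statement, not from the handwritten notes): a map \(T\) is linear when it preserves addition and scaling,

\[T(\alpha x + \beta y) = \alpha\, T(x) + \beta\, T(y)\]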

Basis
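
A minimal NumPy sketch (made-up basis, illustrative only): the coordinates c of a vector v in a basis B solve B c = v.

import numpy as np

# Columns of B form a basis of R^2 (arbitrary example).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 5.0])

c = np.linalg.solve(B, v)   # coordinates of v in basis B
print(c)                    # [-2.  5.]
print(B @ c)                # recovers v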

Rank
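
A one-line check on a toy matrix (illustrative, not from the notes):

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])             # second row = 2 x first row

print(np.linalg.matrix_rank(A))        # 1: only one linearly independent row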

Hessian
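
A numeric sketch (mine, not from the notes): a central-difference Hessian, checked against a quadratic form x^T A x whose exact Hessian is 2A.

import numpy as np

def hessian(f, x, eps=1e-5):
    # Central-difference estimate of the Hessian of scalar f at x.
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return H

A = np.array([[2.0, 1.0], [1.0, 3.0]])
f = lambda x: x @ A @ x            # quadratic form; exact Hessian is 2A
print(hessian(f, np.zeros(2)))     # approximately [[4. 2.], [2. 6.]]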

Pseudo-Inverse
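
A least-squares sketch with made-up data: the Moore-Penrose pseudo-inverse generalizes inversion to non-square or rank-deficient matrices, and for a full-column-rank X it reduces to (X^T X)^{-1} X^T.

import numpy as np

# Overdetermined system: more equations than unknowns (illustrative data).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 2.0, 2.0])

w = np.linalg.pinv(X) @ y      # least-squares solution via the pseudo-inverse
print(w)
print(np.allclose(w, np.linalg.lstsq(X, y, rcond=None)[0]))  # True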

Determinant
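
A quick NumPy check (arbitrary matrix): the determinant equals the product of the eigenvalues and measures how the map scales volume.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

print(np.linalg.det(A))                 # 5.0
print(np.prod(np.linalg.eigvals(A)))    # 5.0 as well, up to floating point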

Taylor Expansion
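
For reference, the standard second-order form that ties the gradient and the Hessian together (added here, not transcribed from the notes):

\[f(x + \delta) \approx f(x) + \nabla f(x)^\top \delta + \tfrac{1}{2}\, \delta^\top H(x)\, \delta\]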

Norms & Similarity

L2 Norm

\[\|w\|_2^2 = \sum_i w_i^2\]

L1 Norm

\[\|w\|_1 = \sum_i \lvert w_i \rvert\]
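
Both norms computed directly and via np.linalg.norm (illustrative vector; the squared L2 norm is the ridge penalty, the L1 norm the lasso penalty):

import numpy as np

w = np.array([3.0, -4.0])

print(np.linalg.norm(w, 2))    # 5.0
print(np.sum(w ** 2))          # 25.0 = ||w||_2^2
print(np.linalg.norm(w, 1))    # 7.0 = ||w||_1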

Cosine Similarity

\[\cos(x, y) = \frac{x^\top y}{\|x\| \|y\|}\]
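
Implementing the formula directly (illustrative vectors):

import numpy as np

def cosine_similarity(x, y):
    # cos(x, y) = x.y / (||x|| ||y||); 1 = same direction, 0 = orthogonal.
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(cosine_similarity(x, y))   # 1.0: rescaling does not change the angle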

Manifold

Kernel
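
A hedged example (the RBF kernel is one standard choice; the bandwidth here is arbitrary): a kernel computes an inner product in an implicit feature space without forming the features.

import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel: similarity decays with squared distance.
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

x = np.array([0.0, 0.0])
y = np.array([1.0, 1.0])
print(rbf_kernel(x, y))   # exp(-1) ~ 0.368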

Matrix Decomposition

Diagonalization

\[A = P D P^{-1}\]
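
A small NumPy check (arbitrary symmetric example, not from the notes); for symmetric matrices P can be chosen orthogonal, so P^{-1} = P^T:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, hence orthogonally diagonalizable

eigvals, P = np.linalg.eigh(A)    # columns of P: orthonormal eigenvectors
D = np.diag(eigvals)

print(np.allclose(A, P @ D @ P.T))   # A = P D P^{-1}, with P^{-1} = P^T here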

Eigen Decomposition

\[A = Q \Lambda Q^{-1}, \qquad \det(A - \lambda I) = 0, \qquad (A - \lambda I) v = 0\]
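
A quick NumPy check of all three conditions above (the example matrix is arbitrary):

import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

lam, Q = np.linalg.eig(A)          # columns of Q are the eigenvectors
Lam = np.diag(lam)

print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))       # A = Q Lambda Q^{-1}

for l, v in zip(lam, Q.T):
    print(np.allclose((A - l * np.eye(2)) @ v, 0))      # (A - lambda I) v = 0
    print(round(np.linalg.det(A - l * np.eye(2)), 10))  # det(A - lambda I) = 0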

Singular Value Decomposition (SVD)
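
A sketch of the full and truncated SVD in NumPy (random matrix for illustration): truncating to the top r singular values gives the best rank-r approximation in the Frobenius norm (Eckart-Young).

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(s) @ Vt))   # True: A = U S V^T

r = 2                                         # truncated SVD: keep top-r terms
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
print(np.linalg.matrix_rank(A_r))             # 2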

Low-Rank Adaptation (LoRA)

\[W = W_0 + BA, \quad B \in \mathbb{R}^{d \times r}, \; A \in \mathbb{R}^{r \times k}, \quad r \ll \min(d, k)\]
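
A shape-level sketch of the update (sizes are arbitrary; initializing B to zero follows the LoRA paper, so training starts from W0 exactly):

import numpy as np

d, k, r = 64, 64, 4                # r << min(d, k): the low-rank bottleneck
rng = np.random.default_rng(0)

W0 = rng.standard_normal((d, k))   # frozen pretrained weight
B = np.zeros((d, r))               # B starts at zero...
A = rng.standard_normal((r, k))    # ...so W = W0 + BA = W0 at initialization

W = W0 + B @ A
print(np.allclose(W, W0))          # True before any training
print(B.size + A.size, "trainable params vs", W0.size)   # 512 vs 4096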

Principal Component Analysis (PCA)

Goals

Data preprocessing

Covariance Matrix

With the features mean-centered (the preprocessing step above), the covariance matrix is

\[\Sigma = \frac{1}{n} X^\top X\]

Optimization

\[\max_u u^\top \Sigma u \quad \text{s.t.} \quad \|u\| = 1\]
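
The maximizer is the top eigenvector of \(\Sigma\); each later component is the next eigenvector, orthogonal to the earlier ones. A compact NumPy sketch (synthetic data, illustrative only):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) * np.array([3.0, 1.0, 0.1])  # synthetic data

X = X - X.mean(axis=0)        # center so that Sigma = X^T X / n is the covariance
Sigma = X.T @ X / len(X)

# For symmetric Sigma, eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(Sigma)
u = eigvecs[:, -1]            # top eigenvector: maximizes u^T Sigma u, ||u|| = 1

Z = X @ u                     # 1-D projection onto the first principal component
print(u)                      # close to +/-[1, 0, 0] for this data
print(eigvals[-1])            # variance captured by the first component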

Limitations