Interview Preparation
Probability & Bayesian Inference
Brief notes prepared for technical interviews

These notes cover the probabilistic foundations of ML — how to model uncertainty, update beliefs from data, and choose between maximum-likelihood and Bayesian objectives.

Bayes Theorem

Bayes' theorem converts a prior belief over the parameters into a posterior after observing data, weighting the prior by the likelihood and normalizing by the evidence:

\[p(\theta \mid x) = \frac{p(x \mid \theta) \, p(\theta)}{p(x)}\]
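A minimal numeric sketch of the update, using a classic diagnostic-test setup (the numbers below are illustrative, not from the notes):

```python
# Hypothetical numbers: a test with 99% sensitivity, 95% specificity,
# and a 1% base rate for the condition (the prior).
prior = 0.01                  # p(disease)
p_pos_given_disease = 0.99    # p(+ | disease), the likelihood of a positive
p_pos_given_healthy = 0.05    # p(+ | healthy) = 1 - specificity

# Evidence p(+) via the law of total probability.
p_pos = p_pos_given_disease * prior + p_pos_given_healthy * (1 - prior)

# Bayes' theorem: p(disease | +) = p(+ | disease) * p(disease) / p(+)
posterior = p_pos_given_disease * prior / p_pos
print(round(posterior, 3))  # a positive test lifts the 1% prior to ~16.7%
```

The punchline interviewers often probe: even a fairly accurate test yields a modest posterior when the prior is small.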

Likelihood

The likelihood p(x | θ) treats the observed data x as fixed and scores each parameter value θ by how probable it makes that data. It is a function of θ, not a probability distribution over θ.

Maximum Likelihood Estimation (MLE)

MLE picks the parameter value that makes the observed data most probable, using no prior:

\[\hat{\theta}_{\text{MLE}} = \arg\max_\theta \, p(x \mid \theta)\]

Maximum A Posteriori (MAP)

MAP additionally weights the likelihood by a prior p(θ), maximizing the (unnormalized) posterior. With a flat prior it reduces to MLE:

\[\hat{\theta}_{\text{MAP}} = \arg\max_\theta \, p(x \mid \theta) \, p(\theta)\]
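For a Bernoulli likelihood with a Beta prior, both estimators have closed forms, which makes the effect of the prior easy to see. A sketch with made-up data (7 heads in 10 flips) and an assumed Beta(2, 2) prior:

```python
# Hypothetical data: 7 heads in 10 flips of a coin with unknown bias theta.
heads, n = 7, 10

# MLE for a Bernoulli/Binomial likelihood: the empirical frequency.
theta_mle = heads / n

# MAP with a Beta(a, b) prior has the closed form
#   (heads + a - 1) / (n + a + b - 2).
# Beta(2, 2) gently pulls the estimate toward 0.5.
a, b = 2, 2
theta_map = (heads + a - 1) / (n + a + b - 2)

print(theta_mle, theta_map)  # 0.7 vs 0.666...
```

As n grows, the prior's pull fades and MAP converges to MLE.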

Probability Distributions

The distributions below come up constantly in interviews; for each, know the probability function, the mean and variance, and when to reach for it.

Bernoulli

A single trial that succeeds with probability p. Mean p, variance p(1 − p):

\[P(X = x) = p^x (1 - p)^{1 - x}, \quad x \in \{0, 1\}\]

Binomial

The number of successes k in n independent Bernoulli(p) trials. Mean np, variance np(1 − p):

\[P(X = k) = \binom{n}{k} p^k (1 - p)^{n - k}\]
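The pmf is easy to implement from the formula with `math.comb`; a quick sanity check confirms the probabilities sum to 1 and the mean is np:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p), straight from the formula."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Sanity checks for n = 10, p = 0.3: the pmf sums to 1 over k = 0..n,
# and the mean sum(k * P(k)) equals n*p = 3.
n, p = 10, 0.3
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]
print(round(sum(pmf), 6))                               # 1.0
print(round(sum(k * q for k, q in enumerate(pmf)), 6))  # 3.0
```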

Multinomial

Counts over K categories after n independent trials with category probabilities p_1, ..., p_K; generalizes the binomial (K = 2):

\[P(\mathbf{X} = \mathbf{x}) = \frac{n!}{x_1! \cdots x_K!} \prod_{i=1}^{K} p_i^{x_i}\]

Normal (Gaussian)

The bell curve with mean μ and variance σ²; the limiting distribution of sums of independent variables (see CLT below):

\[\mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\sigma} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)\]

Exponential

The waiting time between events in a Poisson process with rate λ; memoryless, with mean 1/λ and variance 1/λ²:

\[p(x) = \lambda e^{-\lambda x}, \quad x \geq 0\]

Poisson

The number of events in a fixed interval when events arrive independently at rate λ; mean and variance are both λ:

\[P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}\]
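The pmf follows directly from the formula; truncating the infinite support at a large k is enough to check that it sums to 1 and that the mean equals λ:

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for X ~ Poisson(lam)."""
    return lam**k * exp(-lam) / factorial(k)

# For lam = 4, the tail beyond k = 59 is negligible, so the truncated
# pmf sums to ~1 and its mean sum(k * P(k)) is ~lam.
lam = 4.0
pmf = [poisson_pmf(k, lam) for k in range(60)]
print(round(sum(pmf), 6))                               # 1.0
print(round(sum(k * q for k, q in enumerate(pmf)), 6))  # 4.0
```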

Gamma

Shape k, rate λ; for integer k it is the waiting time until the k-th event of a Poisson process, i.e. the sum of k independent Exponential(λ) variables:

\[p(x) = \frac{\lambda^k}{\Gamma(k)} x^{k-1} e^{-\lambda x}, \quad x \geq 0\]

Central Limit Theorem (CLT)

For i.i.d. X_1, ..., X_n with mean μ and finite variance σ², the standardized sample mean converges in distribution to a Gaussian, regardless of the distribution of the X_i:

\[\sqrt{n}\left(\bar{X}_n - \mu\right) \xrightarrow{d} \mathcal{N}(0, \sigma^2), \qquad \bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i\]

Informally, for large n the sample mean is approximately \(\mathcal{N}(\mu, \sigma^2 / n)\).
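A quick simulation makes the theorem concrete: means of n = 50 draws from a heavily skewed Exponential(1) distribution (μ = 1, σ = 1) cluster like a Gaussian with mean 1 and standard deviation 1/√50 ≈ 0.14. The sample sizes here are illustrative choices, not from the notes:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

# Draw 5000 sample means, each over n = 50 Exponential(1) variables.
# The CLT says these means are approximately N(1, 1/50), even though
# the underlying exponential distribution is far from normal.
n, trials = 50, 5000
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

print(round(statistics.fmean(means), 2))  # close to mu = 1
print(round(statistics.stdev(means), 2))  # close to sigma/sqrt(n) ~ 0.14
```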