0. Introduction | Slides | Notebook
Course content, a deliverable, and spam classification in PyTorch.
1. Optimization and PyTorch Basics in 1D
Optimization setup, minimizers and stationarity, 1D gradient descent, diagnostics, step-size tuning, and PyTorch autodiff basics.
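A minimal sketch of the 1D mechanics, assuming an illustrative objective $f(x) = (x-3)^2$ rather than one from the lecture: autograd supplies the derivative and a fixed step size drives the descent.

```python
import torch

def f(x):
    # Illustrative 1D objective with unique minimizer x* = 3.
    return (x - 3.0) ** 2

x = torch.tensor(0.0, requires_grad=True)
step_size = 0.1

for _ in range(50):
    loss = f(x)
    loss.backward()                # autograd computes df/dx and stores it in x.grad
    with torch.no_grad():
        x -= step_size * x.grad    # gradient descent: x <- x - eta * f'(x)
    x.grad.zero_()                 # clear the accumulated gradient before the next step

print(x.item())                    # approaches 3.0, the stationary point
```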
2. Stochastic Optimization Basics in 1D
Empirical risk, SGD updates, step-size schedules, noise floors, unbiasedness and variance, minibatches, and validation diagnostics.
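A sketch of SGD on synthetic 1D least-squares data (the data and constants here are invented for illustration): minibatch gradients are unbiased estimates of the full gradient, and with a constant step size the iterate settles into a noise floor rather than converging exactly.

```python
import torch

torch.manual_seed(0)

# Synthetic 1D data: y = 2x + noise; the empirical risk is mean squared error.
x_data = torch.randn(1000)
y_data = 2.0 * x_data + 0.1 * torch.randn(1000)

w = torch.tensor(0.0)
step_size, batch_size = 0.1, 32

for _ in range(500):
    idx = torch.randint(0, 1000, (batch_size,))   # sample a minibatch
    xb, yb = x_data[idx], y_data[idx]
    grad = 2.0 * ((w * xb - yb) * xb).mean()      # unbiased estimate of the full gradient
    w = w - step_size * grad                      # SGD update

print(w.item())  # hovers near 2.0, within a noise floor set by step size and batch size
```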
3. Optimization and PyTorch basics in higher dimensions | Live demo
Lift optimization to $\mathbb{R}^d$, derive gradient descent from the local model, and tour PyTorch tensors, efficiency, dtypes, and devices.
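A small sketch of the higher-dimensional toolkit, with the least-squares objective $f(x) = \|Ax - b\|^2$ and all constants chosen for illustration: tensors carry a shape, a dtype, and a device, and the same autograd loop from 1D carries over.

```python
import torch

# Tensors carry a shape, a dtype, and a device.
A = torch.randn(4, 3, dtype=torch.float32)
print(A.shape, A.dtype, A.device)

# Move work to a GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
A = A.to(device)
b = torch.randn(4, device=device)

# Gradient descent in R^d on the least-squares objective f(x) = ||Ax - b||^2.
x = torch.zeros(3, device=device, requires_grad=True)
for _ in range(300):
    loss = ((A @ x - b) ** 2).sum()
    loss.backward()
    with torch.no_grad():
        x -= 0.01 * x.grad         # x <- x - eta * grad f(x)
    x.grad.zero_()
print(loss.item())                 # settles at the least-squares residual
```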
4. Loss functions and models for regression and classification problems | Live demo
Formulate ML objectives, choose losses for regression/classification, and build/train linear and convolutional models in PyTorch.
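One possible minimal setup, not the lecture's exact models: a linear regressor paired with mean squared error, and a small convolutional classifier paired with cross-entropy, which expects raw logits and integer labels.

```python
import torch
import torch.nn as nn

# Regression: a linear model trained with mean squared error.
reg_model = nn.Linear(10, 1)
mse = nn.MSELoss()
reg_loss = mse(reg_model(torch.randn(16, 10)), torch.randn(16, 1))

# Classification: a small convolutional model trained with cross-entropy,
# which consumes raw logits (no softmax inside the model).
clf_model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
)
xent = nn.CrossEntropyLoss()

x = torch.randn(16, 1, 28, 28)     # a batch of 16 fake 28x28 images
y = torch.randint(0, 10, (16,))    # integer class labels
loss = xent(clf_model(x), y)
loss.backward()
```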
5. A step-by-step introduction to transformer models
Building transformers from scratch: embeddings, attention, residual connections, and next-token prediction on Shakespeare.
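A compact sketch of the core mechanism, with shapes chosen for illustration: scaled dot-product attention plus the causal mask that makes next-token prediction possible.

```python
import torch
import torch.nn.functional as F

def causal_attention(Q, K, V):
    # Scaled dot-product attention: softmax(QK^T / sqrt(d)) V,
    # with a causal mask so each token attends only to its past.
    d = Q.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d ** 0.5
    n = scores.shape[-1]
    mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ V

# A batch of 5 token embeddings of dimension 8 attending to itself.
x = torch.randn(1, 5, 8)
print(causal_attention(x, x, x).shape)   # torch.Size([1, 5, 8])
```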
6. A step-by-step introduction to diffusion models
Diffusion models from first principles: forward process, reverse process, noise prediction, U-Net, sampling, DDIM, conditional generation, and FID.
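A sketch of the forward (noising) process alone, assuming the common linear $\beta_t$ schedule with illustrative constants: $q(x_t \mid x_0) = \mathcal{N}(\sqrt{\bar\alpha_t}\,x_0,\,(1-\bar\alpha_t)I)$, where the sampled noise $\epsilon$ is the target the network learns to predict.

```python
import torch

# Forward process: x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps,
# with alpha_bar_t the cumulative product of (1 - beta_t).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)       # a common linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def add_noise(x0, t):
    eps = torch.randn_like(x0)               # the noise the network learns to predict
    xt = alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps
    return xt, eps

x0 = torch.randn(4, 3, 32, 32)               # a fake batch of images
xt, eps = add_noise(x0, t=500)
```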
The remaining sections are carried over from the 2025 offering and have not yet been removed.
7. Stochastic gradient descent: insights from the Noisy Quadratic Model
When should we use exponential moving averages, momentum, and preconditioning?
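A toy sketch of the three ingredients on a diagonal noisy quadratic (curvatures, noise level, and constants are all invented): momentum accumulates past gradients, diagonal preconditioning rescales each coordinate by its curvature, and an exponential moving average of the iterates damps the noise floor.

```python
import torch

torch.manual_seed(0)
h = torch.tensor([10.0, 0.1])          # curvatures of a diagonal quadratic f(x) = sum h_i x_i^2 / 2

def noisy_grad(x):
    return h * x + torch.randn(2)      # exact gradient plus unit noise

x = torch.tensor([1.0, 1.0])
x_ema = x.clone()
v = torch.zeros(2)                     # momentum buffer
eta, beta, decay = 0.05, 0.9, 0.99

for _ in range(2000):
    g = noisy_grad(x)
    v = beta * v + g                           # heavy-ball momentum accumulates past gradients
    x = x - eta * (v / h)                      # ideal diagonal preconditioning rescales each coordinate
    x_ema = decay * x_ema + (1 - decay) * x    # iterate averaging damps the noise floor
```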
8. Stochastic Gradient Descent: The general problem and implementation details | Notebook
Stochastic optimization problems, SGD and its common tweaks, and their implementation in PyTorch.
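A minimal sketch of the standard PyTorch training step using `torch.optim.SGD` (model, data, and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.MSELoss()

x = torch.randn(64, 10)                # a fake minibatch
y = torch.randn(64, 1)

optimizer.zero_grad()                  # clear gradients from the previous step
loss_fn(model(x), y).backward()        # backprop fills .grad on each parameter
optimizer.step()                       # apply the SGD-with-momentum update
```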
9. Adaptive Optimization Methods | Notebook | Cheatsheet
Intro to adaptive optimization methods: Adagrad, Adam, and AdamW.
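The Adam update written out by hand on a toy problem (objective and constants are illustrative); in practice one calls `torch.optim.AdamW`, which additionally decouples weight decay from the adaptive step.

```python
import torch

# Adam on the toy objective ||w||^2 / 2, whose stochastic gradient is w plus noise.
torch.manual_seed(0)
w = torch.randn(5)
m = torch.zeros(5)                     # first moment: EMA of gradients
v = torch.zeros(5)                     # second moment: EMA of squared gradients
lr, beta1, beta2, eps = 1e-3, 0.9, 0.999, 1e-8

for t in range(1, 101):
    g = w + 0.1 * torch.randn(5)       # stochastic gradient
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)       # bias corrections for the zero-initialized EMAs
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (v_hat.sqrt() + eps)
```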
10. Benchmarking Optimizers: Challenges and Some Empirical Results | Cheatsheet
How do we compare optimizers for deep learning?
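A toy illustration of one benchmarking pitfall, not a real benchmark: single runs are noisy, so comparisons should average over seeds, and each optimizer deserves its own tuned learning rate.

```python
import torch
import torch.nn as nn

def final_loss(opt_name, seed):
    torch.manual_seed(seed)
    model = nn.Linear(10, 1)
    x, y = torch.randn(256, 10), torch.randn(256, 1)
    opt = {"sgd": torch.optim.SGD(model.parameters(), lr=0.1),
           "adamw": torch.optim.AdamW(model.parameters(), lr=1e-2)}[opt_name]
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

for name in ["sgd", "adamw"]:
    losses = [final_loss(name, seed) for seed in range(5)]
    print(name, sum(losses) / len(losses))   # average over seeds, not one run
```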
11. A Playbook for Tuning Deep Learning Models | Cheatsheet
A systematic process for tuning deep learning models.
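A sketch of one step of such a process (the setup is entirely illustrative): hold everything else fixed, randomly search the learning rate on a log scale, and select by validation loss.

```python
import random
import torch
import torch.nn as nn

random.seed(0)

def validation_loss(lr):
    torch.manual_seed(0)               # hold everything fixed except the knob under study
    model = nn.Linear(10, 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = torch.randn(256, 10), torch.randn(256, 1)
    xv, yv = torch.randn(64, 10), torch.randn(64, 1)
    for _ in range(100):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return nn.functional.mse_loss(model(xv), yv).item()

# Random search over the learning rate on a log scale, selected by validation loss.
lrs = [10 ** random.uniform(-4, -0.5) for _ in range(10)]
best = min((validation_loss(lr), lr) for lr in lrs)
print("best validation loss %.4f at lr %.2e" % best)
```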
12. Scaling Transformers: Parallelism Strategies from the Ultrascale Playbook | Cheatsheet
How do we scale training of transformers to 100s of billions of parameters?
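A data-parallelism sketch in miniature, simulated on one machine (real training would use `torch.distributed` and `DistributedDataParallel`; model, data, and constants are placeholders): each replica computes gradients on its shard of the batch, gradients are averaged as an all-reduce would do, and every replica applies the identical update.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)
replicas = [model, copy.deepcopy(model)]     # two synchronized model replicas ("workers")
x, y = torch.randn(32, 10), torch.randn(32, 1)
shards = zip(x.chunk(2), y.chunk(2))         # split the batch across workers

# Each worker backprops on its own shard.
for replica, (xs, ys) in zip(replicas, shards):
    nn.functional.mse_loss(replica(xs), ys).backward()

with torch.no_grad():
    for params in zip(*(r.parameters() for r in replicas)):
        grad = sum(p.grad for p in params) / len(params)   # the all-reduce: average gradients
        for p in params:
            p -= 0.1 * grad                                # identical SGD step keeps replicas in sync
```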
13. Course recap
A recap of the course.