0. Introduction | Slides | Notebook
Course content, the course deliverable, and spam classification in PyTorch.
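
As a taste of what this looks like, here is a minimal sketch of a logistic-regression spam classifier in PyTorch; the synthetic bag-of-words data below stands in for a real email dataset and is purely an assumption, not the course's actual deliverable.

```python
# Minimal sketch (not the course's actual deliverable): logistic regression
# on synthetic bag-of-words features standing in for real email data.
import torch

torch.manual_seed(0)
n, d = 200, 50                          # 200 "emails", 50 vocabulary words
X = torch.rand(n, d)                    # synthetic word-frequency features
y = (X @ torch.randn(d) > 0).float()    # synthetic spam/ham labels

model = torch.nn.Linear(d, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.BCEWithLogitsLoss()

for _ in range(500):
    opt.zero_grad()
    loss_fn(model(X).squeeze(1), y).backward()
    opt.step()

acc = ((model(X).squeeze(1) > 0).float() == y).float().mean()
print(f"training accuracy: {acc:.2f}")
```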
1. Optimization and PyTorch Basics in 1D
Optimization setup, minimizers and stationarity, 1D gradient descent, diagnostics, step-size tuning, and PyTorch autodiff basics.
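
To make the 1D picture concrete, here is a minimal sketch of gradient descent with PyTorch autodiff; the toy objective f(x) = (x - 3)^2 and the step size 0.1 are assumptions for illustration.

```python
import torch

def f(x):
    return (x - 3.0) ** 2                # toy objective; minimizer x* = 3

x = torch.tensor(0.0, requires_grad=True)
lr = 0.1                                 # step size: too large diverges, too small crawls

for _ in range(50):
    loss = f(x)
    loss.backward()                      # autodiff fills x.grad with f'(x)
    with torch.no_grad():
        x -= lr * x.grad                 # gradient descent step
    x.grad.zero_()

print(x.item())                          # near 3, where f'(x*) = 0 (stationarity)
```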
2. Stochastic Optimization Basics in 1D
Empirical risk, SGD updates, step-size schedules, noise floors, unbiasedness and variance, minibatches, and validation diagnostics.
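
A sketch of the step-size trade-off on a 1D empirical risk (the Gaussian data and all constants below are assumptions): a constant step size stalls at a noise floor, while a decaying schedule keeps averaging the noise away.

```python
import torch

torch.manual_seed(0)
data = torch.randn(10_000) + 2.0          # empirical risk: mean((x - data)^2), min at ~2

def run(schedule):
    x = torch.tensor(0.0)
    for t in range(1, 2001):
        batch = data[torch.randint(len(data), (32,))]   # minibatch of size 32
        grad = 2 * (x - batch).mean()                   # unbiased gradient estimate
        x = x - schedule(t) * grad
    return x.item()

print(run(lambda t: 0.5))       # constant step: stuck at the noise floor around 2
print(run(lambda t: 0.5 / t))   # 1/t schedule: noise averages out, much closer to 2
```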
3. Linear Regression: Gradient Descent | Slides | Notebook
Linear regression via gradient descent.
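
For instance, a minimal full-batch gradient descent loop for linear regression might look like the sketch below; the synthetic data and learning rate are assumptions.

```python
import torch

torch.manual_seed(0)
n, d = 100, 3
X = torch.randn(n, d)
w_true = torch.tensor([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * torch.randn(n)    # noisy linear data

w = torch.zeros(d, requires_grad=True)
lr = 0.1

for _ in range(200):
    loss = ((X @ w - y) ** 2).mean()     # mean-squared-error objective
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad                 # full-batch gradient step
    w.grad.zero_()

print(w.detach())                        # approximately w_true
```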
4. How to compute gradients in PyTorch | Slides | Notebook
Introduction to PyTorch’s automatic differentiation system.
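
A small sketch of the two common entry points, `backward()` and `torch.autograd.grad`, on a scalar function:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 2 * x                       # dy/dx = 3x^2 + 2 = 14 at x = 2

y.backward()                             # accumulates the gradient into x.grad
print(x.grad)                            # tensor(14.)

x = torch.tensor(2.0, requires_grad=True)
(g,) = torch.autograd.grad(x ** 3 + 2 * x, x)   # functional form, no .grad side effect
print(g)                                 # tensor(14.)
```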
5. How to think about derivatives through best linear approximation
Viewing the derivative as the slope of the best linear approximation to a function at a point.
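
A quick numeric check of this view (the function sin and the base point x0 = 1 are arbitrary choices): the linear approximation error shrinks like h^2, much faster than h itself.

```python
import torch

x0 = torch.tensor(1.0, requires_grad=True)
fx0 = torch.sin(x0)
fx0.backward()
slope = x0.grad                          # f'(x0) = cos(1), the best linear slope

for h in [0.1, 0.01, 0.001]:
    exact = torch.sin(torch.tensor(1.0 + h))
    linear = fx0.detach() + slope * h    # f(x0) + f'(x0) * h
    print(h, abs(exact - linear).item()) # error shrinks like h^2, faster than h
```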
6. Stochastic gradient descent: A first look
A first look at stochastic gradient descent through the mean estimation problem.
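
A sketch of this connection (the data stream is an assumption): running SGD on (1/2)(x - z)^2 with step size 1/t reproduces the running sample average exactly.

```python
import torch

torch.manual_seed(0)
stream = torch.randn(1000) + 5.0         # samples z_t with true mean 5

x = torch.tensor(0.0)
for t, z in enumerate(stream, start=1):
    grad = x - z                         # stochastic gradient of (1/2)(x - z)^2
    x = x - grad / t                     # step size 1/t
print(x.item(), stream.mean().item())    # matches the sample mean (up to rounding)
```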
7. Stochastic gradient descent: insights from the Noisy Quadratic Model
When should we use exponential moving averages, momentum, and preconditioning?
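
A minimal simulation in the spirit of the noisy quadratic model (all constants are assumptions): averaging the SGD iterates with an exponential moving average typically lands far closer to the optimum than the last iterate.

```python
import torch

torch.manual_seed(0)
h, sigma = 1.0, 1.0                      # curvature and gradient-noise scale
x, ema = torch.tensor(5.0), torch.tensor(5.0)
lr, beta = 0.2, 0.99

for _ in range(2000):
    grad = h * x + sigma * torch.randn(())   # noisy gradient of (h/2) x^2
    x = x - lr * grad
    ema = beta * ema + (1 - beta) * x        # EMA of the iterates

print(abs(x).item(), abs(ema).item())        # EMA is typically much closer to 0
```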
8. Stochastic Gradient Descent: The general problem and implementation details | Notebook
Stochastic optimization problems, SGD and its common tweaks, and implementation in PyTorch.
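
The standard PyTorch training-loop skeleton, sketched on synthetic data (model, data, and hyperparameters are placeholder assumptions):

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
X = torch.randn(512, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(512, 1)

loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)

for epoch in range(10):
    for xb, yb in loader:
        opt.zero_grad()                  # clear gradients from the previous step
        loss = F.mse_loss(model(xb), yb)
        loss.backward()                  # backprop through the minibatch
        opt.step()                       # one SGD-with-momentum update
```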
9. Adaptive Optimization Methods | Notebook | Cheatsheet
Introduction to adaptive optimization methods: Adagrad, Adam, and AdamW.
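
To fix ideas, here is the Adam update written out by hand with the usual default hyperparameters; in practice one would call `torch.optim.Adam`, or `torch.optim.AdamW`, which decouples weight decay from the gradient.

```python
import torch

def adam_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameter p with gradient g at step t >= 1."""
    m = b1 * m + (1 - b1) * g            # EMA of gradients (first moment)
    v = b2 * v + (1 - b2) * g ** 2       # EMA of squared gradients (second moment)
    m_hat = m / (1 - b1 ** t)            # bias corrections for the zero init
    v_hat = v / (1 - b2 ** t)
    return p - lr * m_hat / (v_hat.sqrt() + eps), m, v

p = torch.tensor([1.0, -2.0])
m = v = torch.zeros_like(p)
for t in range(1, 4):
    g = 2 * p                            # gradient of the toy quadratic ||p||^2
    p, m, v = adam_step(p, g, m, v, t)
print(p)
```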
10. Benchmarking Optimizers: Challenges and Some Empirical Results | Cheatsheet
How do we compare optimizers for deep learning?
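
One minimal harness for such a comparison (the synthetic task and hyperparameters are placeholder assumptions): fix the seed per trial so initialization and data match across optimizers, and average over several seeds.

```python
import torch
import torch.nn.functional as F

def final_loss(opt_name, seed):
    torch.manual_seed(seed)                         # same init and data per seed
    X, y = torch.randn(256, 20), torch.randn(256, 1)
    model = torch.nn.Linear(20, 1)
    make = {"sgd": lambda p: torch.optim.SGD(p, lr=0.05),
            "adamw": lambda p: torch.optim.AdamW(p, lr=1e-2)}
    opt = make[opt_name](model.parameters())
    for _ in range(200):
        opt.zero_grad()
        loss = F.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

for name in ["sgd", "adamw"]:
    losses = [final_loss(name, s) for s in range(5)]   # 5 seeds per optimizer
    print(name, sum(losses) / len(losses))
```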
11. A Playbook for Tuning Deep Learning Models | Cheatsheet
A systematic process for tuning deep learning models.
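
One step such a process typically includes, sketched under placeholder assumptions: random search over the learning rate on a log scale, scored on a held-out validation set.

```python
import random
import torch
import torch.nn.functional as F

def val_loss(lr):
    torch.manual_seed(0)                             # identical setup for every trial
    w = torch.randn(20, 1)                           # shared "true" weights
    X = torch.randn(256, 20)                         # training split
    y = X @ w + 0.5 * torch.randn(256, 1)
    Xv = torch.randn(64, 20)                         # held-out validation split
    yv = Xv @ w + 0.5 * torch.randn(64, 1)
    model = torch.nn.Linear(20, 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(100):
        opt.zero_grad()
        F.mse_loss(model(X), y).backward()
        opt.step()
    return F.mse_loss(model(Xv), yv).item()

random.seed(0)
lrs = [10 ** random.uniform(-4, -1) for _ in range(10)]  # log-uniform in [1e-4, 1e-1]
best = min(lrs, key=val_loss)
print(f"best lr: {best:.1e}")
```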
12. Scaling Transformers: Parallelism Strategies from the Ultrascale Playbook | Cheatsheet
How do we scale transformer training to hundreds of billions of parameters?
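
One ingredient at that scale, sketched here under placeholder assumptions, is gradient accumulation: summing gradients over several micro-batches before a single optimizer step, so the effective batch size exceeds what fits in memory at once.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(128, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
accum_steps = 4                                       # micro-batches per optimizer step

opt.zero_grad()
for _ in range(accum_steps):
    xb, yb = torch.randn(8, 128), torch.randn(8, 1)   # one micro-batch of 8
    loss = F.mse_loss(model(xb), yb)
    (loss / accum_steps).backward()                   # scale so summed grads average
opt.step()                                            # one update, effective batch 32
```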
13. Conclusion
A recap of the course.