Public paper breakdowns
Each breakdown walks through a paper's core ideas, equations, and intuition. It's designed for self-study: readable end-to-end, with the original prose, the math, and step-by-step explanations on the same page.
We're adding new public breakdowns over time, each covering a foundational paper in the same format.
Devlin et al., 2018 (BERT)
Masked language modeling, bidirectional context, and the pre-train / fine-tune recipe that reset NLP.
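As a taste of the first idea, here's a minimal sketch of the masking step in plain Python, with made-up token ids and a simplified rule (the paper masks ~15% of tokens with an 80/10/10 replacement scheme, which the breakdown walks through in full):

```python
import random

MASK_ID = 103   # hypothetical [MASK] token id, for illustration only
IGNORE = -100   # label at unmasked positions; the loss skips these

def mask_tokens(token_ids, mask_prob=0.15, seed=0):
    """Randomly mask ~15% of tokens; labels keep the original id
    only at masked positions, so the model learns to recover them."""
    rng = random.Random(seed)
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            inputs[i] = MASK_ID
            labels.append(tok)     # predict the original token here
        else:
            labels.append(IGNORE)  # no loss at visible positions
    return inputs, labels

ids = [7, 42, 311, 9, 58, 1200, 4]
print(mask_tokens(ids))
```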
Hu et al., 2021 (LoRA)
Why low-rank adapters work, what gets frozen vs. trained, and the math behind parameter-efficient fine-tuning.
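To make the shapes concrete, a short numpy sketch of the update Wx + (α/r)·BAx with illustrative dimensions: the pretrained weight W stays frozen, only the small factors A and B would train, and the trainable parameter count drops from d·k to r·(d+k).

```python
import numpy as np

d, k, r, alpha = 768, 768, 8, 16   # illustrative dims; r << d, k
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))            # pretrained weight, frozen
A = rng.normal(size=(r, k)) * 0.01     # trainable, small random init
B = np.zeros((d, r))                   # trainable, zero init

def lora_forward(x):
    # base projection plus the low-rank correction, scaled by alpha / r
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(k,))
print(np.allclose(lora_forward(x), W @ x))   # True: B=0 means no change at init
print(f"full: {d * k:,} params vs LoRA: {r * (d + k):,}")
```

Zero-initializing B means the adapted model starts out identical to the pretrained one, which is part of why the recipe trains stably.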
Radford et al., 2021 (CLIP)
Contrastive image–text training, the joint embedding space, and how zero-shot transfer falls out of it.
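A toy numpy sketch of the symmetric contrastive loss, assuming the two encoders have already produced a batch of embeddings; matched image–text pairs sit on the diagonal of the similarity matrix, and the loss pulls those diagonal entries up in both directions.

```python
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric cross-entropy over a similarity matrix:
    row i's positive is column i (its matched caption)."""
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (batch, batch)

    def xent_diag(l):
        # cross-entropy with targets on the diagonal
        l = l - l.max(axis=1, keepdims=True)  # stabilize the softmax
        logprob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logprob))

    # average the image-to-text and text-to-image directions
    return (xent_diag(logits) + xent_diag(logits.T)) / 2

rng = np.random.default_rng(0)
print(clip_loss(rng.normal(size=(4, 32)), rng.normal(size=(4, 32))))
```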
Ho et al., 2020 (DDPM)
Forward noise, reverse denoising, and the variational objective that powers modern image synthesis.
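A numpy sketch of the closed-form forward process using the paper's linear β schedule: x_t = √(ᾱ_t)·x_0 + √(1−ᾱ_t)·ε, the noised sample the reverse model learns to undo. The toy data here is stand-in random vectors.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule from the paper
alpha_bar = np.cumprod(1.0 - betas)    # alpha_bar_t = prod of (1 - beta_s)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in a single step:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * noise, noise

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8,))
x_mid, eps = q_sample(x0, t=500, rng=rng)
# As t approaches T, x_t becomes nearly pure Gaussian noise; the reverse
# model is trained to predict eps from (x_t, t), which the breakdown derives.
print(x_mid)
```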
Have a paper that isn't covered yet? Upload it to Deconstructed and get the same section-by-section explanations, notation help, and equation breakdowns.