Public paper breakdowns

Foundational papers, explained step by step

Each breakdown walks through a paper's core ideas, equations, and intuition. They're designed for self-study — readable end-to-end, with the original prose, math, and step-by-step explanations on the same page.

Coming soon

We’re adding new public breakdowns over time. Each one walks through a foundational paper with the same step-by-step style.

  • BERT (Devlin et al., 2018)

    Masked language modeling, bidirectional context, and the pre-train / fine-tune recipe that reset NLP.

  • LoRA (Hu et al., 2021)

    Why low-rank adapters work, what gets frozen vs. trained, and the math behind parameter-efficient fine-tuning.

  • CLIP (Radford et al., 2021)

    Contrastive image–text training, the joint embedding space, and how zero-shot transfer falls out of it.

  • Diffusion Models (Ho et al., 2020)

    Forward noise, reverse denoising, and the variational objective that powers modern image synthesis.
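As a taste of the breakdown style, the LoRA entry above can be sketched in a few lines: the pretrained weight W stays frozen, and only a low-rank pair (A, B) is trained, so the effective weight is W + (alpha / r) · BA. This is a minimal, hypothetical NumPy sketch — the names, toy sizes, and `lora_forward` helper are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # model dim and adapter rank (toy sizes)
alpha = 4.0                      # scaling hyperparameter from the paper

W = rng.normal(size=(d, d))      # pretrained weight: frozen, never updated
A = rng.normal(size=(r, d))      # trainable down-projection (random init)
B = np.zeros((d, r))             # trainable up-projection (zero init)

def lora_forward(x):
    # base path plus the scaled low-rank path (alpha / r) * B @ A
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# Because B starts at zero, the adapter is a no-op at initialization,
# so fine-tuning begins exactly at the pretrained model's behavior.
assert np.allclose(lora_forward(x), x @ W.T)
```

The zero initialization of B is the detail that makes the recipe safe: training perturbs the frozen model smoothly instead of starting from a random offset.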
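The "forward noise" half of the diffusion entry also fits in a few lines, thanks to the closed form q(x_t | x_0) = sqrt(ᾱ_t) x_0 + sqrt(1 − ᾱ_t) ε from Ho et al., 2020. This is a hedged toy sketch — the linear schedule endpoints and the `noise` helper are illustrative choices, not a full DDPM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (toy choice)
abar = np.cumprod(1.0 - betas)       # cumulative product: alpha-bar_t

def noise(x0, t):
    # sample x_t directly from x_0 using the closed form,
    # instead of applying t small noising steps one by one
    eps = rng.normal(size=x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

x0 = rng.normal(size=(4,))
xT = noise(x0, T - 1)
# By the last step almost no signal remains: alpha-bar_T is tiny,
# so x_T is approximately pure Gaussian noise.
assert abar[-1] < 1e-2
```

Training then amounts to asking a network to predict ε from x_t and t; the reverse (denoising) process and the variational objective are what the full breakdown walks through.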

Have a paper you’re stuck on?

Upload it to Deconstructed and get this same style of section-by-section explanation, notation help, and equation breakdown.