109 Episodes

  1. Managing frontier model training organizations (or teams) (Published: 3/19/2025)
  2. Gemma 3, OLMo 2 32B, and the growing potential of open-source AI (Published: 3/13/2025)
  3. Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL (Published: 3/12/2025)
  4. Elicitation, the simplest way to understand post-training (Published: 3/10/2025)
  5. Where inference-time scaling pushes the market for AI companies (Published: 3/5/2025)
  6. GPT-4.5: "Not a frontier model"? (Published: 2/28/2025)
  7. Character training: Understanding and crafting a language model's personality (Published: 2/26/2025)
  8. Claude 3.7 thonks and what's next for inference-time scaling (Published: 2/24/2025)
  9. Grok 3 and an accelerating AI roadmap (Published: 2/18/2025)
  10. An unexpected RL Renaissance (Published: 2/13/2025)
  11. Deep Research, information vs. insight, and the nature of science (Published: 2/12/2025)
  12. Making the U.S. the home for open-source AI (Published: 2/5/2025)
  13. Why reasoning models will generalize (Published: 1/28/2025)
  14. Interviewing OLMo 2 leads: Open secrets of training language models (Published: 1/22/2025)
  15. DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs (Published: 1/21/2025)
  16. Let me use my local LMs on Meta Ray-Bans (Published: 1/15/2025)
  17. (Voiceover) DeepSeek V3 and the actual cost of training frontier AI models (Published: 1/9/2025)
  18. The state of post-training in 2025 (Published: 1/8/2025)
  19. Quick recap on the state of reasoning (Published: 1/2/2025)
  20. (Voiceover) 2024 Interconnects year in review (Published: 12/31/2024)
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai