Summary & Insights
What if AI could move from talking about science to actually doing it? That’s the ambitious goal driving Liam Fedus (co-creator of ChatGPT at OpenAI) and Ekin Dogus Cubuk (former head of materials science and chemistry research at Google DeepMind) at Periodic Labs. Their conversation reveals a fundamental shift in AI development: moving beyond digital reward functions based on text and code to a “physically grounded reward function” provided by real-world experiments. They argue that for AI to genuinely accelerate science and physical R&D, from discovering new superconductors to designing advanced materials, it must have “experiment in the loop,” with nature itself as the final judge of an AI agent’s proposals.
The core of Periodic’s approach is building a frontier AI research lab tightly coupled with physical automation. This means creating “AI physicists” that don’t just reason about quantum mechanics via simulation but also design and run real experiments, such as powder synthesis in a lab, using robots. The iterative cycle of simulation, theory, and physical verification creates a robust training environment. This addresses key shortcomings in current AI: scientific data is often noisy, negative results are rarely published, and true discovery requires the ability to act and iterate in the real world, not just analyze existing datasets.
Their north star is a lofty scientific breakthrough, like discovering a 200 Kelvin superconductor, which would fundamentally update our understanding of quantum mechanics. However, the path to that goal involves creating practical, near-term value. They envision building “co-pilot tools for engineers and researchers in advanced industries” like semiconductors, space, and advanced manufacturing. By solving specific, high-value R&D bottlenecks for these sectors, they aim to create a virtuous cycle where commercial success fuels faster scientific progress. The team, a unique blend of world-class machine learning researchers and experimental scientists, embodies this integrated philosophy, constantly teaching each other to bridge the gap between digital models and physical reality.
Surprising Insights
- The critical value of negative results: Scientific progress is hampered because negative results are rarely published, yet they provide a crucial learning signal. Periodic’s lab is designed to generate and utilize these valid negative results to train their AI systems more effectively.
- Scaling laws have a “slope” problem: While AI capabilities scale predictably with data and compute, the slope of improvement for far “out-of-domain” tasks (like specific physics problems) can be so shallow that reaching competence through general internet pre-training alone could take impractically long. A “beeline” via targeted training on relevant data is necessary.
- Current models are terrible at science because they weren’t trained for the process: Even the smartest LLMs today won’t make discoveries because they haven’t been trained in the iterative method of scientific inquiry—hypothesizing, simulating, experimenting, and learning from failure.
- Noise in published data is a major blocker: Key material properties in scientific literature can have reported values spanning orders of magnitude. Training on this noisy data alone means the best an AI can do is replicate the confusion, not find a clearer physical truth.
- The best physicist on the team has as much to learn as a newcomer: The founders note that the breadth of knowledge required (e.g., for superconductivity) spans so many sub-fields that even an expert is closer to a novice in the grand scheme than they are to omniscience, highlighting the need for collaborative, cross-disciplinary AI.
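The “slope” point above can be made concrete with a toy power-law extrapolation. Everything in this sketch is an illustrative assumption, not a figure from the episode: if loss follows L(C) = a · C^(-alpha), then a much shallower out-of-domain slope (smaller alpha) multiplies the compute needed to reach the same target loss by an astronomical factor.

```python
# Toy power-law extrapolation (all constants are illustrative assumptions,
# not figures from the episode). If loss follows L(C) = a * C**(-alpha),
# the compute needed to hit a target loss is C = (a / L_target)**(1/alpha),
# so a shallow out-of-domain slope (small alpha) blows up required compute.

def compute_to_reach(target_loss: float, a: float, alpha: float) -> float:
    """Compute budget implied by the power-law fit L(C) = a * C**(-alpha)."""
    return (a / target_loss) ** (1.0 / alpha)

# Same target loss and prefactor; only the slope differs.
c_in = compute_to_reach(target_loss=0.5, a=10.0, alpha=0.10)   # in-domain
c_out = compute_to_reach(target_loss=0.5, a=10.0, alpha=0.02)  # far out-of-domain

print(f"in-domain compute:     {c_in:.2e}")
print(f"out-of-domain compute: {c_out:.2e}")
print(f"ratio:                 {c_out / c_in:.2e}")
```

With these made-up numbers the out-of-domain task needs tens of orders of magnitude more compute, which is the sense in which a “beeline” via targeted training data beats waiting on general pre-training.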
Practical Takeaways
- Ground AI optimization in physical reality: For R&D in any physical domain, consider how to create a concrete, automated reward function based on real-world measurement (e.g., material strength, conductivity) instead of relying solely on digital proxies or human preference scores.
- Build integrated teams of domain experts and ML practitioners: Accelerating progress in fields like material science requires ML researchers and bench scientists to work shoulder-to-shoulder, with structured learning sessions to bridge terminology and conceptual gaps.
- Prioritize generating clean, actionable data: When applying AI to physical problems, invest in high-throughput, high-quality data generation. Be mindful of the noise in existing datasets and design processes to capture valid negative results, not just successes.
- Start with a well-scoped, high-impact sub-problem: Rather than aiming for general “AI for science,” follow Periodic’s model: choose a specific, measurable goal (like improving a particular material property) that requires solving many foundational pieces of the pipeline, from simulation to automated experiment.
- Move beyond retrieval to knowledge distillation in models: For enterprise applications in technical fields, consider moving past simple retrieval-augmented generation (RAG). Explore “mid-training” strategies to deeply embed proprietary technical knowledge (simulation data, experimental results) into model weights for richer understanding and reasoning.
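The “experiment in the loop” pattern behind these takeaways can be sketched as a minimal propose-measure-reward loop. Every name here (the candidate generator, the measurement stub, the 200 K target) is a hypothetical placeholder standing in for a real model and a real instrument, not Periodic’s actual pipeline:

```python
import random

def propose_candidate(rng: random.Random) -> dict:
    """Stand-in for an AI model proposing a material recipe."""
    return {"composition": rng.choice(["A2B", "AB3", "A3B2"]),
            "anneal_temp_K": rng.uniform(500, 1500)}

def run_experiment(candidate: dict, rng: random.Random) -> float:
    """Stand-in for robotic synthesis plus measurement; returns a noisy
    measured property (here, a fictional critical temperature in Kelvin)."""
    base = {"A2B": 40.0, "AB3": 15.0, "A3B2": 25.0}[candidate["composition"]]
    return base + rng.gauss(0.0, 5.0)

def reward(measured: float, target: float = 200.0) -> float:
    """Physically grounded reward: closeness of the measured property
    to the target, so nature (the measurement) is the judge."""
    return -abs(target - measured)

rng = random.Random(0)
log = []  # keep every result, including the negative ones
for _ in range(10):
    cand = propose_candidate(rng)
    measured = run_experiment(cand, rng)
    log.append((cand, measured, reward(measured)))

best = max(log, key=lambda row: row[2])
print("best candidate so far:", best[0], f"measured={best[1]:.1f} K")
```

Two takeaways from the list show up directly in the sketch: the reward comes from a measurement rather than a digital proxy, and every trial, successful or not, is appended to the log so negative results remain available as training signal.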
Scaling laws took us from GPT-1 to GPT-5 Pro. But in order to crack physics, we’ll need a different approach.
In this episode, a16z General Partner Anjney Midha talks to Liam Fedus, former VP of post-training research and co-creator of ChatGPT at OpenAI, and Ekin Dogus Cubuk, former head of materials science and chemistry research at Google DeepMind, about their new startup Periodic Labs and their plan to automate discovery in the hard sciences.
Follow Liam on X: https://x.com/LiamFedus
Follow Dogus on X: https://x.com/ekindogus
Learn more about Periodic: https://periodic.com/
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Podcast on Spotify
Listen to the a16z Podcast on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
