AI transcript
0:00:06 Welcome to the Next Wave Podcast. I’m Matt Wolfe, and I could not be more excited to
0:00:11 share today’s episode with you. So we’ve gone from AI that can chat with you to AI that
0:00:18 can work for you. And the difference? Well, this new AI can actually think through problems,
0:00:24 catch its own mistakes, and complete complex tasks from start to finish, just like a human
0:00:30 employee would. This is what everyone in the AI world calls AI agents. You’ve probably heard the
0:00:36 term. But here’s why this breakthrough changes everything for regular people. If you’re someone
0:00:41 who codes, there’s no more debugging AI hallucinations. It can actually check its own
0:00:48 work. If you run a business, these AI agents can actually plan out and finish complex tasks,
0:00:53 just like one of your employees might. And the implications for humanity? Well,
0:00:59 with these new tools, drug discoveries, testing, and real-world trials can now take weeks instead
0:01:05 of decades. In fact, Isomorphic Labs is already gearing up for human trials of AI-discovered drugs
0:01:11 right now. And we’re also already getting stories about how AI has successfully diagnosed human
0:01:19 illnesses when human doctors couldn’t. These things are accelerating insanely fast. But this isn’t just
0:01:26 about better chatbots. We’re talking about AI that understands the physical world, plans weeks ahead,
0:01:31 and even works while you’re asleep. And the company that’s leading the charge in all of this
0:01:38 is Google DeepMind. They’ve already used this thinking AI to predict protein structures that used to take
0:01:45 years. Now it just takes seconds. It’s called AlphaFold. They’ve also invented AI that can invent new
0:01:53 algorithms, including AI algorithms. It’s called AlphaEvolve. It’s insane stuff. Two million researchers
0:02:00 worldwide are using their tools right now. But with this power comes some pretty big questions. Can we
0:02:07 trust it? What happens to privacy? What about our jobs? Can we trust Google with our data?
0:02:13 So I sat down with Google DeepMind CEO Demis Hassabis to get answers straight from the source
0:02:22 about how we got from autocomplete to actual thinking and what comes next. He’s a Nobel laureate,
0:02:28 a knight, and one of the most influential pioneers in the world of AI. And somehow,
0:02:34 I managed to get him to sit down and chat with me about all of this. What he told me will change
0:02:42 how you think about AI forever. So without further ado, here’s my conversation with Sir Demis Hassabis.
0:02:52 Cutting your sales cycle in half sounds pretty impossible, but that’s exactly what Sandler
0:02:57 Training did with HubSpot. They used Breeze, HubSpot’s AI tools, to tailor every customer interaction
0:03:01 without losing their personal touch. And the results were pretty incredible. Click-through
0:03:08 rates jumped 25%, qualified leads quadrupled, and people spent three times longer on their
0:03:13 landing pages. Go to HubSpot.com to see how Breeze can help your business grow.
0:03:21 Hey Demis, great to see you again. So my first question for you is, can you sort of describe
0:03:26 what’s happening under the hood with an LLM? Like what’s kind of going on? Can we sort of demystify
0:03:33 it for people a little bit? I can try. So I mean, at the basic level, what these LLM systems are trying
0:03:38 to do is very simple in a way. They’re just trying to predict the next word. And they do that by obviously
0:03:46 looking at a vast training set of language. But the trick is not just to regurgitate what it’s already seen, but actually
0:03:52 generalize to something novel that you are now asking it. And it seems like, you know, what we've managed
0:03:55 with the modern-day systems is to get that generalization to work.
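Demis's "just trying to predict the next word" point can be made concrete with a toy sketch. Here a hypothetical bigram counter stands in for the neural network — real LLMs score candidates with learned weights over tokens rather than raw word counts, but the objective has the same shape: score every candidate continuation, then pick (or sample) one.

```python
# Toy illustration of next-word prediction via bigram counts.
# This is NOT how Gemini works internally; it only mirrors the
# objective: given the text so far, rank candidate next words.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ran on the grass".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" — seen twice after "the" in training
```

The limitation Demis points at is visible here too: a pure lookup can only regurgitate what it has seen, whereas the trick in modern systems is generalizing to prompts that never appeared in the training corpus.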
0:04:01 Gotcha. So at I/O, you announced the new Deep Think, right, which is so much more powerful, and it’s topping all of the
0:04:06 benchmarks for things like coding and math and all that. What happened under the hood that caused that new leap?
0:04:12 Well, new techniques have been brought into the foundational model space. There’s what’s called
0:04:16 pre-training, where you sort of train the initial base model based on, you know, all the training
0:04:21 corpus. Then you try and fine tune it with a bit of reinforcement learning feedback. And now there’s this
0:04:27 third part of the training, which is we sometimes call inference time training or thinking, where you’ve
0:04:34 got the model, and you give it many cycles to sort of go over itself and go over its answer, maybe use
0:04:40 some tools. For example, it could fact check with search, something like that, before it outputs the
0:04:45 answer to the user. So it gets a chance to sort of correct itself and adjust what it’s going to
0:04:50 output. And of course, if you do that, you get a much better answer. And then what Deep Think is about
0:04:56 is actually taking that to the maximum and giving it loads more time to think and actually even doing
0:05:01 parallel thoughts and then choosing the best one. And it turns out it works really well. And, you know,
0:05:07 we pioneered that kind of work in the past, actually nearly a decade ago now with AlphaGo and our games
0:05:12 playing programs, because in order to be good at games, you need to do that kind of planning and
0:05:14 thinking. And now we’re trying to do it in a more general way here.
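The "parallel thoughts, then choosing the best one" idea Demis describes is often called best-of-N sampling. A minimal sketch, with `generate()` and `score()` as hypothetical stand-ins — in a real system they would be model sampling and a verifier or tool check (e.g. running the code, or fact-checking with search):

```python
# Minimal best-of-N sketch: sample several candidate answers in
# parallel, score each with a checker, and keep the highest-scoring one.
import random

def generate(question, seed):
    """Hypothetical stand-in for sampling one candidate answer."""
    rng = random.Random(seed)
    return {"answer": f"candidate-{seed}", "quality": rng.random()}

def score(candidate):
    """Hypothetical stand-in for a verifier (fact-check, test run, etc.)."""
    return candidate["quality"]

def best_of_n(question, n=8):
    # Think n "parallel thoughts", then choose the best one.
    candidates = [generate(question, seed) for seed in range(n)]
    return max(candidates, key=score)

best = best_of_n("What is 17 * 24?")
print(best["answer"])
```

The design choice here is spending more inference-time compute (more candidates, more verification) to buy answer quality, rather than making the base model bigger — which is the "third part of the training" Demis contrasts with pre-training and fine-tuning.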
0:05:20 Right, right. So it almost kind of thinks of a whole bunch of potential responses, then goes
0:05:23 through and reviews all the potential responses, and then figures out the best response from those
0:05:28 potential responses. Exactly. And it can go over and correct some parts of it and use tools to check
0:05:33 some aspects of it. So, you know, especially in certain areas like maths and coding, it really
0:05:39 improves the answers. Amazing. Very cool. So you’ve mentioned that the long term goal is to sort of
0:05:45 let these AIs have like a world model. Right. So can you sort of explain what you mean by a world
0:05:51 model and what does that open up to us? Well, so we’re all familiar with large language models now,
0:05:57 but of course, we have five senses and we operate in the real world. And language is only one aspect,
0:06:04 very important aspect of our world and human civilization, but only one aspect. And so I think for a model,
0:06:09 what we mean by a world model is a model, sometimes we call it a multimodal model that can understand
0:06:16 not just language, but also audio, images, video, all sorts of input, any input, and then potentially
0:06:22 also output any kind of token as well. And the reason that’s important is if you want a system to be a good
0:06:27 assistant, it needs to understand the physical context around you. Or if you want robotics to work
0:06:32 in the real world, the robot needs to understand the physical environment. Right. So in order to do that,
0:06:35 you have to have what we sometimes like to call a world model.
0:06:40 Cool. So what sort of new things do you think that’ll open up to people once they have that ability?
0:06:45 I think robotics is one of the major areas. I think that’s what’s holding back robotics today. It’s
0:06:49 not so much the hardware, it’s actually the software intelligence. You know, the robots need to understand
0:06:54 the physical environment. But I think that that’s also what will make today’s sort of nascent
0:06:58 assistant technology and things like you saw with Project Astra that we show in Gemini Live.
0:07:04 For that to work really robustly, you want as accurate a world model as you can get. And then the
0:07:10 other thing is, if you want to do planning in the real world, you need to sort of plan multiple
0:07:15 steps with your world model. So in order for that to be good for long range planning, your world model
0:07:20 has to be very accurate as well, which is pretty hard when you’re talking about real world situations.
0:07:29 My First Million hosted by Sam Parr and Sean Puri is brought to you by the HubSpot Podcast Network,
0:07:34 the audio destination for business professionals. My First Million features famous guests like Alex
0:07:40 Hormozi, Sophia Amoruso, and Hasan Minhaj sharing their secrets for how they made their first million and
0:07:45 how to apply their learnings to capitalize on today’s business trends and opportunities.
0:07:52 They recently had a fascinating episode about how you can scale a profitable agency with zero employees
0:07:57 using AI agents. Listen to My First Million wherever you get your podcasts.
0:08:05 So you’ve mentioned things like AI will be able to, most likely in the future, solve things like
0:08:10 room temperature superconductors and, you know, more energy efficiency and curing diseases.
0:08:15 Out of the sort of things that are out there that it could potentially solve, what do you think the
0:08:20 sort of closest on the horizon is? Well, as you say, we’re very interested and we actually work on
0:08:25 many of those topics, right? Whether they’re mathematics or things like material science, like
0:08:30 superconductors, you know, we work on fusion, renewable energy, climate modeling. But I think
0:08:35 the closest, if you think about it, and probably most near term, is building on our AlphaFold work.
0:08:41 And we spun out a company called Isomorphic Labs to do drug discovery. We think that we can sort of rethink
0:08:46 the whole drug discovery process from first principles with AI. And normally, you know,
0:08:52 the rule of thumb is it takes around a decade for a drug to go from sort of identifying why a disease
0:08:57 is being caused to actually coming up with a cure for it. And then finally being available to patients.
0:09:02 It’s a very laborious, very hard, painstaking and expensive process. And I would love to be able
0:09:09 to speed that up to a matter of months, maybe even weeks one day and cure hundreds of diseases like that.
0:09:14 And I think that’s potentially in reach. It sounds maybe a bit science fiction like today,
0:09:19 but that’s what protein structure prediction was like, you know, five or six years ago before we
0:09:24 came up with AlphaFold. It used to take years to painstakingly find, with experimental techniques,
0:09:29 the structure of one protein. And now we can do it in a matter of seconds with these computational
0:09:34 methods. So I think that sort of potential is there. And it’s really exciting to try and make that happen.
0:09:40 Amazing. So you guys just announced AlphaEvolve recently, which looks amazing, right? It’s an AI
0:09:46 that essentially can help you come up with new algorithms, right? So how close are we to AIs that
0:09:51 are sort of designing new AIs to improve the AIs and then we start entering the cycle?
0:09:56 Yes. I mean, it’s a baby step in that direction. I think it’s a really cool breakthrough piece of work
0:10:02 where we’re combining kind of, in this case, evolutionary methods with LLMs to try and get
0:10:08 them to sort of invent something new. And I think there’s going to be a lot of promising work actually
0:10:13 combining different methods in computer science together with these foundation models like Gemini
0:10:18 that we have today. So I think it’s a great, very promising path to explore. Just to reassure everyone,
0:10:22 it still has humans in the loop, scientists in the loop. It’s not directly improving
0:10:28 Gemini. It’s using these techniques to improve the AI ecosystem around it: slightly better algorithms,
0:10:32 better chips that the system’s trained on, versus the algorithms it’s using itself.
0:10:38 Right, right. So AI agents, they’ve been sort of a big talk in the AI community recently.
0:10:43 And this week at I/O, we saw Project Mariner, which can go and open up 10 different browsers and
0:10:48 go and do a whole bunch of things on your behalf. How far off do you think we are to
0:10:52 being able to give an agent like a week’s worth of work, and then it goes and executes that for us?
0:10:57 Yeah, I mean, I think that’s the dream to kind of offload some of our mundane admin work and
0:11:01 also to make things much more enjoyable for us. You know, maybe you have a trip to
0:11:06 Europe or Italy or something, and you want the most amazing itinerary sort of built out for you and
0:11:11 then booked. I’d love our assistants to be able to do that. You know, I hope we’re maybe a year away or
0:11:16 something from that. I think we still need a bit more reliability in the tool use. And again,
0:11:21 the planning and the reasoning of these systems, but they’re rapidly improving. So as you saw with
0:11:26 the latest Project Mariner, and so it’d be great for that to come together with some of the other
0:11:29 advances we’re making with Gemini Live and the Astra technology.
0:11:34 Yeah. What do you think the biggest bottleneck is right now to sort of getting that long-term agent?
0:11:38 I think it’s just the reliability of the reasoning processes and the tool use.
0:11:45 And making sure, because each one, if it has a slight chance of an error, if you’re doing like
0:11:50 a hundred steps, even a 1% error doesn’t sound like very much, but it can compound to something
0:11:55 pretty significant over a hundred, you know, 50 or a hundred steps. And a lot of the really interesting
0:12:00 tasks you might want these systems to help you with will probably need multi-step planning and action.
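The compounding-error point is easy to check with arithmetic: if each step succeeds 99% of the time and errors are independent, the chance of a flawless multi-step run falls off exponentially with the number of steps.

```python
# A 1% per-step error rate sounds small, but over a long agent task it
# compounds: the success probability of the whole run is p**n.
def task_success_rate(per_step_success, steps):
    """Probability every step succeeds, assuming independent errors."""
    return per_step_success ** steps

for steps in (10, 50, 100):
    rate = task_success_rate(0.99, steps)
    print(f"{steps:3d} steps: {rate:.0%} chance of a flawless run")
```

At 100 steps the flawless-run probability drops to roughly 37%, which is why Demis calls per-step reliability the main bottleneck for long-horizon agents.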
0:12:05 Gotcha. So I want to shift gears a little bit here and talk a little bit about some of the
0:12:09 sort of fears and concerns that have come up in like my YouTube comments and things like that.
0:12:14 You know, people are worried about things like privacy and losing their jobs to AI and all of
0:12:20 that kind of stuff. And so I’m curious, how does a company like DeepMind build the trust of the general
0:12:23 public that you can trust them with this kind of technology?
0:12:26 Yeah. Well, look, I think we’ve tried to be, and I think we try to be responsible role
0:12:31 models actually with these frontier technologies. Partly that’s showing what AI can be used for,
0:12:36 for good, you know, like medicine and biology. I mean, what better use could there be for AI than
0:12:42 to cure, you know, terrible diseases? That’s always been my number one thought there. But there’s
0:12:45 other things, you know, where it can help with the climate, energy and so on that we’ve discussed.
0:12:50 But I think, you know, it’s incumbent on companies to behave thoughtfully
0:12:55 and responsibly with this powerful technology. We take privacy extremely seriously at Google,
0:12:59 always have done. And I think, you know, most of the things we’ve been discussing with the
0:13:03 assistants, they would be opt-in. They’ll make the universal assistant much more useful for you,
0:13:08 but you would be, you know, intentionally opting into that very clearly with all the transparency
0:13:13 around that. And what I want us to get to is a place where the assistant feels like it’s working
0:13:19 for you. It’s your AI, right? Your personal AI, and it’s working on your behalf. And I think that’s
0:13:22 that’s the mode, you know, that’s the, at least the vision that we have and that we want to deliver
0:13:25 and that we think users and consumers will want.
0:13:31 So all of that is incumbent on companies. And actually, I would say to your viewers as well, you have a lot of
0:13:36 say in this in the sense of like, you should exercise your consumer choices and buy services
0:13:41 and products from companies that you feel are acting responsibly and the leadership is acting
0:13:45 responsibly and you like the type of work that they’re doing. Because now we’re entering this sort
0:13:51 of commercialization, productization era of AI now. Right. And you know, I think your viewers
0:13:55 and everyone has a big say in that. Right, right. So one of the things that you guys also demoed at
0:14:01 IO that I got a chance to actually test out a little bit earlier was the Android XR glasses. And those were
0:14:06 absolutely mind blowing when I tried them the first time. And so I guess the flip side of this sort of
0:14:12 privacy thing is if everybody’s sort of walking around wearing glasses that have microphones and cameras on
0:14:18 them, how do we ensure that the sort of privacy of the other people around us is secure?
0:14:21 I think it’s a great question. I mean, the first thing is to make it very obvious whether
0:14:26 it’s on or off in these types of things, you know, in terms of the user interfaces and the form factors.
0:14:30 I think that’s number one. But I also think this is the sort of thing where we’ll need
0:14:36 sort of a societal agreement and norms about what we all want if we have these devices and
0:14:41 they’re popular and they’re useful. What are the guardrails around that? And I think that’s why we’re
0:14:45 only in trusted tester at the moment: partly the technology is still developing, but also we need
0:14:50 to think about the societal impacts like that ahead of time, you know, not just with the technology,
0:14:55 but also society in general and civil society kind of inputting into what might be the right way to
0:14:59 handle that type of world. Right. So I’ve got one last question here. It’s kind of a two-parter
0:15:05 question. So what excites you most about what you can do with AI today? And what excites you most about
0:15:10 what we’ll be able to do in the very near future? Cool. Well, today, I think it’s the AI-for-science work; it’s,
0:15:15 you know, always been my passion. And I’m really proud of what AlphaFold and things like
0:15:18 that have empowered. They’ve become a standard tool now in biology and medical research, you know,
0:15:23 over two million researchers around the world use it in their incredible work and vital work. So that’s
0:15:30 fantastic to me. In the future, you know, I’d love a system to basically enrich your life and work for
0:15:36 you on your behalf to protect your mind space and your own thinking space from all of the digital world
0:15:41 that’s bombarding you the whole time. And I think actually one of the answers to what we’re
0:15:45 all feeling in the modern world with social media and all these things is maybe a digital assistant
0:15:50 working on your behalf that surfaces the information only at the times that you want, rather than
0:15:54 interrupting you at all times of the day. Amazing. Well, thank you so much, Demis. This has been
0:16:10 absolutely fascinating. I really, really appreciate the time that you spent with me today. So thank you.
0:16:18 We’ve got a major announcement. HubSpot is the first CRM to launch a deep research connector with
0:16:24 ChatGPT. Customers can now bring their customer context into the HubSpot deep research connector
0:16:30 and take action on those insights. Now you can do truly remarkable things for your business. Customer
0:16:36 success teams can quickly surface inactive companies, identify expansion opportunities and receive targeted
0:16:43 plays to reengage pipelines, then take those actions in the customer success workspace in HubSpot to drive
0:16:49 retention. Support teams can analyze seasonal patterns and ticket volume by category to forecast staffing needs
0:16:56 for the upcoming quarter, and activate Breeze customer agents to handle spikes in support tickets. This truly
0:17:04 is a game changer. For the first time ever, get the power of ChatGPT fueled by your CRM data with no complex setup.
0:17:10 The HubSpot deep research connector will automatically be available to all HubSpot accounts across all
0:17:17 tiers that have a ChatGPT Team, Enterprise, or Edu subscription. Turn on the HubSpot deep research connector
0:17:26 in ChatGPT to get powerful PhD level insights from your customer data. Now let’s get back to the show.

Want the ultimate guide to Google’s Gemini? Get it here: https://clickhubspot.com/evt

Episode 68: How is Google DeepMind pushing the boundaries of AI to tackle drug discovery, robotics, and even autonomous AI agents? Matt Wolfe (https://x.com/mreflow) sits down with DeepMind CEO Sir Demis Hassabis (https://x.com/demishassabis), a neuroscientist, AI pioneer, Nobel laureate, and knight, to peel back the curtain on Google’s latest advances—and the ethical challenges that come with them.

In this episode, Matt and Demis go deep on what’s powering the newest generation of AI agents, how models like AlphaFold and AlphaEvolve are accelerating scientific breakthroughs, and why world models are so important for the future of robotics. Demis shares why he believes AI is poised to reshape society—for better and for worse—and what Google is doing to build public trust in its systems.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:

  • (00:00) AI Revolutionizing Drug Discovery

  • (03:35) Advanced Model Training Methods

  • (07:06) Accelerating Drug Discovery with AI

  • (11:12) AI’s Responsible Role in Society

  • (13:56) AI Revolutionizing Science & Life

Mentions:

Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt’s Stuff:

• Future Tools – https://futuretools.beehiiv.com/

• Blog – https://www.mattwolfe.com/

• YouTube- https://www.youtube.com/@mreflow

Check Out Nathan’s Stuff:

The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
