AI transcript
0:00:15 the CTO of Microsoft. Kevin has been at the center of some of the most important technology
0:00:21 shifts of the past decade, leading Microsoft’s work on AI, cloud computing, and the partnerships
0:00:25 that brought tools like OpenAI’s models to millions of people around the world.
0:00:31 In this conversation, we get into the big questions everyone’s asking about AI. Are we already at a
0:00:36 point where these systems are too complex for humans to fully understand? How do we make sure
0:00:41 that as they get smarter, they stay aligned with our goals? We’ll talk about the massive challenge
0:00:46 of energy efficiency in AI, what Microsoft is doing to make these systems more sustainable and
0:00:52 accessible, and even when we might see autonomous AI agents that can go off and complete entire
0:00:57 projects for us while we sleep. Kevin also shares where he thinks AI is headed in the next year or
0:01:03 two, what kind of jobs and opportunities it might create, and why he believes the tools we’re building
0:01:09 today could unlock an explosion of creativity and entrepreneurship. It’s a fascinating, wide-ranging
0:01:16 discussion with one of the most influential people in tech right now. So stick around because you’re not
0:01:19 going to want to miss this one.
0:01:27 My first question, do you think AI will get to a point where AI is so smart that humans truly don’t
0:01:27 understand what’s going on under the hood?
0:01:35 Well, look, I think in certain ways we are, although I don’t think that that’s any different
0:01:42 from some other like really complicated systems that we built or like honestly a bunch of complicated
0:01:50 phenomena that we don’t quite understand. So in my mind, technology is always about like, is there a path to
0:01:56 being able to debug it when it doesn’t behave the way that you want it to behave? And I think, you know, there is
0:02:03 an increasingly good set of tools that we’re developing to try to, you know, be able to characterize the
0:02:09 performance of really complicated AI systems and to debug them when they, you know, aren’t doing what you
0:02:11 intended them to do.
0:02:11 Right.
0:02:17 You know, we’ve been talking for years and years and years about full stack developers or developers who can
0:02:24 understand systems from top to bottom. And I think you still need that a lot. You will be more successful as a
0:02:34 developer in the age of agentic AI, like in this era where you’re using AI to do significant parts of your software
0:02:39 development job, if you understand that full system so that when it misbehaves, you’re like, okay, like, I’m going to
0:02:46 punch down a level of abstraction and like go, you know, try to investigate what’s going on here. And then like, you know, if
0:02:50 that doesn’t do the trick, like another one and another one, and like eventually getting all the way down to bare metal.
0:02:55 Right, right. My follow up question to that would be like, if AI gets to a point where it’s like, you know,
0:02:59 super intelligent, right, we get to that super intelligence phase where maybe it’s writing code that, you know,
0:03:04 humans haven’t even figured out how to write this kind of code yet. How do we make sure that it sort of continues to
0:03:22 stay aligned with our goals? Yeah, I mean, there is a tremendous amount of active research and development and engineering on alignment, both at Microsoft and like a whole bunch of the companies that we partner with and, you know, a whole bunch of the companies that we’re big fans of. And like, honestly, like a whole bunch of companies that we compete with. Like, it’s one of the things that I think all of us building this technology really want to make sure functions well the same way that all of us have a
0:03:29 Right. You know, universally high level of focus on security in classical software. You know, no one wishes for anybody’s software to be less secure or AI systems to be, you know, less safe or less responsible or less aligned than what we need them to be.
0:03:58 So yeah, I do think that there’s just a ton of activity there. And I’ve even seen it over the past
0:04:27 You know, three plus years as we’ve made really powerful AI available in APIs and as we’ve plumbed it through to products. Like, I run Microsoft’s Deployment Safety Board, which is this thing that really makes sure that we rigorously test and investigate everything that we’re doing to make sure that the things that we’re launching adhere to Microsoft’s published responsible AI standards. And like the sophistication of the tools that we use
0:04:33 to do the job of the Deployment Safety Board has just increased exponentially over the past couple of years.
0:04:42 Right, right. Yeah, I love hearing that a lot of the sort of big tech companies are all sort of aligned on the same goal and they’re all kind of working together, even though they might be competitors.
0:05:00 I wanted to ask about the whole energy efficiency kind of thing. I know there’s some concern that with, you know, data centers and AI and all of that using up so much energy. What sort of things is Microsoft doing to sort of, A, bring the energy usage down, but also, B, kind of democratize AI so it can be more accessible to more people?
0:05:20 Yeah. Well, look, I think the core thing that all of us are doing, and like it’s been a particular focus at Microsoft over the past handful of years as AI demand has surged, is like you’ve got a bunch of goals that are aligned. So we just spend an extraordinary amount of energy trying to optimize the energy consuming parts of the system because energy costs a lot.
0:05:44 Right. And like we’ve made very specific, hard, sustainable energy commitments to the entire world that we intend to meet. And so inside of those constraints, like you have to make sure that as the AI demand is growing, things are getting cheaper. Like that’s the way to make them more accessible so that like you’re just lowering the barrier of entry to people doing increasingly complicated things with the system.
0:05:56 And like you have to make them efficient so that you can run more AI computations on a fixed power budget or a fixed hardware budget or a fixed floor plan budget that you have inside of a data center.
0:06:09 You know, and then I think outside of that, we’re doing a couple of other things that are interesting. So one is like we’ve got this constant background thread running, trying to figure out whether or not there are these sort of disruptive efficiency breakthroughs.
0:06:26 So like the transformer for instance, which is the foundational technology that all of these modern large language model AIs are built on now. One way to think about it is like it was a disruptive increase in the capability of the systems, but it was also a disruptive increase in the efficiency of systems.
0:06:35 And so we’re doing a bunch of basic computer science AI research on trying to find, you know, what those next disruptive things might be.
0:06:35 Right.
0:06:45 It’s research so you can’t predict when the disruption might happen, but like we feel good about the amount of energy we’re investing in that. And again, it’s one of those things where incentives are all aligned.
0:06:58 like we just desperately for all sorts of reasons like need those disruptions to happen, whether, you know, they’re our research or, you know, some research that emerges, you know, elsewhere that’s publicly available.
0:07:25 And then the other thing too that we’re doing is like we’re trying to help the electric power industry find new, sustainable, scalable sources of production so that you don’t have to live in a world where you act as if there’s energy scarcity because like AI aside, like you actually do want a world where there’s energy abundance and then sustainable energy abundance.
0:07:40 Right, right. Lots of good things. Like just think about like some of these problems that we have in the world right now. So, you know, there’s major social unrest happening in parts of the world right now because we have water scarcity.
0:07:41 Right.
0:07:52 And there are all sorts of ways that you could solve water scarcity problems if, for instance, desalinization were cheaper to do, but it is an energy intensive process right now.
0:08:07 And so it’s too expensive and like too unsustainable to do. If all of a sudden you had an energy breakthrough where, you know, energy became an order of magnitude cheaper and, you know, orders of magnitude more abundant, then like you could go solve problems like that.
0:08:19 Right, right. Yeah, I got an opportunity to go see the applied science lab yesterday and talk to some of them. And one of the things they were saying is that the new NPU units are actually quite a bit less expensive to develop than GPU units.
0:08:20 Yep.
0:08:26 Making it so that, you know, maybe you don’t have the best graphic cards to play Cyberpunk 2077, but you’ll be able to have AI, right? Yeah.
0:08:46 AI agents are transforming marketing. They’re changing it as we know it. And the old ways of marketing, they’re gone. The new ways are really agent first ways. And Kieran just wrote this amazing blog post about how to think about marketing and do marketing in an agent first world.
0:09:14 In a world where agents might be buying from agents or agents are facilitating ways people can buy. And there are these three very specific changes that he outlined. And if you’re in marketing today and you are not clear what these changes are, you’re missing the boat. You’re going to get left behind. And we don’t want that. We want to help you stay ahead of the pack. And so you want to read Kieran’s post right now. You can click the link in the description below. That is going to give you the blueprint you need to do marketing in an AI first world.
0:09:43 It’s an important thing to understand that over the past five or six years, as we’ve been building these transformer based AI systems, you know, every couple of years or so, you get a new hardware generation that’s given you about a 2x price performance improvement, which is enormous. Like it’s just really, really extraordinary. But on top of that, like you’re getting a set of even greater efficiency improvements that are happening in the world.
0:09:50 You know, you’re getting a lot of improvements that are happening every single year in the software layer, that’s improving, you know, energy efficiency and capital efficiency in these systems. And it looks, you know, plus or minus like 10x a year.
0:10:18 Right. And so, yeah, I was, you know, at this event last night where I said, you know, like when I was a young developer, like because I’m an old fart now, like all I had was this crappy old Moore’s law, which, you know, again, was like this extraordinary exponential, but like it’s not the same exponential that we’re on right now. And so like you just got to remember, like the hardware is getting better and the software is getting better at an incredible clip.
0:10:26 So like you just got to remember, like the AI is going to get cheaper, faster, like more capable and more energy efficient at the same time.
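As a back-of-the-envelope illustration of how those two curves compound: the roughly-2x-per-hardware-generation and roughly-10x-per-year software figures are the rough numbers Kevin cites in the conversation above, and the code below is just arithmetic on them, not an official Microsoft model.

```python
# Compound the two efficiency curves described above: roughly 2x
# price-performance per hardware generation (a generation every couple
# of years) times roughly 10x per year from software improvements.
HW_GAIN_PER_GEN = 2.0
GEN_YEARS = 2.0
SW_GAIN_PER_YEAR = 10.0

def combined_gain(years: float) -> float:
    """Total price-performance multiple after `years` of both curves."""
    hardware = HW_GAIN_PER_GEN ** (years / GEN_YEARS)
    software = SW_GAIN_PER_YEAR ** years
    return hardware * software

# Two years out: one hardware generation (2x) times 100x software = 200x.
print(combined_gain(2.0))
```

Even if the real-world numbers are softer, the point stands: multiplying two exponentials gives a curve far steeper than Moore’s law alone.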
0:10:44 Right. I want to shift over to agents because it seems like agents was a big piece of, you know, what you’ve been talking about. How far off do you think we are to being able to sort of tell an agent before I go to bed, redesign my website, code it up and have it live for me? And then I wake up the next morning and I’ve got everything completed. How far off is that?
0:10:53 Yeah, I don’t think that that is actually as far off as it might sound. It kind of sounds like a science fictional scenario, especially like if you rewind to last year’s build.
0:11:05 But like I think it’s closer than you might imagine. Like I’ve got a dad at my kid’s school, like we’re friends, and at every school event he comes up to me and he’s like, oh, my God, you can’t believe the crap that I’m doing.
0:11:15 He runs his own business and like he’s, you know, one man shop, you know, with contractors that he’s hired over the years to go help him build apps.
0:11:25 And like he still has, you know, these contractors, but like now he has this other thing called, you know, software development agents that are helping him do all of this stuff.
0:11:33 And he’s using everything like just a super sophisticated user. He uses our stuff. He uses OpenAI stuff. He uses, you know, Anthropic stuff. He uses Google stuff.
0:11:39 And the rate at which he is able to do his work is really incredible.
0:11:48 I mean, like really, you know, so you just described a thing that’s going to sort of asynchronously and autonomously go off and like do a bunch of work while I’m sleeping.
0:11:50 He’s almost there right now.
0:11:56 I had my kids end of year review of some social entrepreneurship work that they were doing.
0:12:03 And like my daughter was in this group with two other girls, like building this app that was trying to get kids to take calcium supplements.
0:12:10 And completely unbeknownst to me, like she was off using these tools to go build an app.
0:12:19 And like the app, like if I remember the first mobile apps that I wrote in 2008, it’s probably better than the first one that I wrote by a mile.
0:12:22 And like my daughter has never taken a programming course.
0:12:29 You know, she’s had a little bit of programming exposure in like a multidisciplinary design class that she took.
0:12:31 She’s in 10th grade, 16 years old.
0:12:34 And yeah, she did all of this.
0:12:40 It also didn’t ever occur to her to like tell me what she was doing or ask for dad’s help, which is rich.
0:12:46 So, yeah, look, I think we’re probably closer than most people think to that scenario you just described.
0:12:46 Right, right.
0:12:53 One of the things that you mentioned both last night and again today on stage was memory sort of being a bit of a bottleneck.
0:12:54 Why is it a bottleneck?
0:12:58 And what needs to be accomplished to sort of get past the memory issues?
0:13:00 First, let me describe why it’s a problem.
0:13:03 So, you know, like you and I are interacting right now.
0:13:09 And, you know, even though both you and I are busy, like we will have a recollection of this interaction that we’ve had.
0:13:21 So that if we ever are interacting again, like we can recall, you know, this conversation and what we were talking about and like we’ve just sort of got a foundation, you know, between the two of us for, you know, future interactions.
0:13:36 If you’re thinking about agents as things that you can delegate things to and that you can like think of as collaborators, that memory is just going to be a really foundationally important part of the user experience of these things.
0:13:41 And it’s also pretty important from, again, an efficiency point of view.
0:13:50 So right now, a lot of the times when you’re using an agent, because memory isn’t as good as it needs to be, like it just hasn’t remembered anything about your previous transaction.
0:13:59 And it doesn’t even remember much about what it has done itself in taking a sequence of actions to go solve a problem, which is kind of crazy.
0:14:08 And so you spend a bunch of time, like rebuilding state inside of these agents that you shouldn’t have to if memory was functioning well.
0:14:12 And so part of that is just, you know, it’s kind of a constrained thing.
0:14:16 So the way that these systems work is you have context windows.
0:14:28 So like a prompt, you can only put so many tokens or words of instruction or information or context, like into this window that you then feed to the inference system to get a response back.
0:14:33 And the response also consumes space inside of the context window.
0:14:58 And, you know, if you’re iterating over the course of a session, like that consumes space in a context window. And these context windows are sort of bound because like there’s some ways that you can implement context and inference where, you know, the inference is quadratic in the length of the context window, which means like it doesn’t just get expensive with one little increment per additional token.
0:15:02 Like it gets a lot more expensive per additional token of processing that you’re doing.
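The quadratic blow-up he’s describing is easy to see with a toy count: naive self-attention compares every token in the context against every other token, so doubling the window roughly quadruples the work. This is a sketch of the scaling argument only, not any real model’s cost function.

```python
def naive_attention_ops(context_len: int) -> int:
    """Pairwise token comparisons in naive self-attention.

    Every token attends to every token, so the cost grows with the
    square of the context length rather than one increment per token.
    """
    return context_len * context_len

# 1,000 tokens -> 1,000,000 comparisons; 2,000 tokens -> 4,000,000.
print(naive_attention_ops(1000), naive_attention_ops(2000))
```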
0:15:22 So one of the things that we’ve had to invent over the past handful of years are more efficient ways to use that context, more efficient attention algorithms so that you like don’t necessarily for each part of the inference calculation, you have to look at the entire context of information that’s there.
0:15:39 And like building things that are, you know, honestly, a little bit more efficient than, you know, retrieval augmented generation, which was sort of the way that we had before to take, you know, contextually or semantically relevant things to the task at hand and like pulling them into those context windows for processing.
0:15:56 So it’s just been like one of those things that was too expensive and like we needed to go do some efficiency work to make it better. And like also, you know, like really, if you think about how your memory and my memory work, we have pretty good precision and recall, but it’s not one-shot precision and recall.
0:16:10 So if you ask me to go remember something that happened five years ago, I am not going to be able to just like that, tell you exactly what that recollection is, but like I have a way to go get to it.
0:16:19 It’s like, okay, like I kind of remember that, like I can put it in context, that context helps me like go out to my sources of information, just kind of search around.
0:16:29 And like I have a way to like have really broad recall and a way to take imprecise recollections and make them precise.
0:16:32 And so like that’s also what we need to do inside of these memory systems.
0:16:44 So like memory in an agent isn’t a database lookup, it’s like an iterative process that the agent, you know, may need to do to get the right piece of information very precisely.
0:16:56 Right, right. I’m curious, if we look ahead, let’s say like a year or two from now, what sort of things do you think an agent will be able to do that like we all do on like a daily basis right now?
0:17:00 Is there anything you think agents are just going to take off our plate and people don’t even realize agents are going to take that off our plate?
0:17:08 Yeah, look, I think there are things that I hope for that probably are hard, not for technical reasons, but for other reasons.
0:17:16 And then there are, you know, certainly a whole bunch of knowledge worker toil that I hope, you know, gets resolved.
0:17:19 And then there are things where I think they’re just sort of hard.
0:17:23 So like the hard things are like the embodied AI things.
0:17:35 So like there’s just a bunch of stuff that all of us have to do in the physical world, like, you know, do your laundry and, you know, like I can’t get my kids to like put the dishes, you know, take them from the sink and put them in the dishwasher.
0:17:38 Like it just really irritates me, I wish I had a robot to do that.
0:17:51 Unfortunately, I think those are like really hard technological problems where we’re probably not on as fast a path to getting those embodied AI problems solved as we are with some of these cognitive problems.
0:17:57 You know, I think a whole bunch of stuff like the scenario you described is somewhat likely to happen in the next year.
0:18:26 Like I have a whole bunch of things that I wish an agent could go do for me while I slept where I could wake up in the morning and, you know, it’s already started writing responses to, you know, emails that I need to like get right to first thing in the morning where, you know, maybe someone else’s agent has like talked to my agent, you know, try to get quick responses to things that, you know, the other person needs where, you know, they no longer have to wait for me.
0:18:27 Yeah.
0:18:27 Right.
0:18:28 Right.
0:18:29 Because like I’m blocking them.
0:18:34 So like, I think a bunch of this asynchronous stuff is actually going to start happening pretty quickly.
0:19:01 Like the reason it hasn’t happened so far is because you just need agents when they’re taking action, like they have to be really precise and like so far we’ve had kind of imprecise memory and, you know, action taking actually has gotten dramatically better over the past year with these reasoning models, but like we need, you know, all of this MCP plumbing that I’ve been talking about to happen so that, you know, agents can do this communication to other agents and systems that needs to be done.
0:19:08 And then there’s like stuff that I, you know, hope for and they’re just choices that we make as society, whether or not we get them.
0:19:09 Right.
0:19:10 They’re not technical barriers.
0:19:28 So I wish that medical diagnostics were more available to people everywhere over the next year. Like folks like my mom, who lives in rural central Virginia, may not have access to the same diagnostic medicine that I have access to living in Silicon Valley.
0:19:29 Right.
0:19:41 Like I wish these AI systems which are already like right now plenty good enough to help give a real boost to people living in rural central Virginia where my mom lives.
0:19:45 Like I wish like we would get to more adoption of that.
0:19:47 But again, like that’s not a technology problem.
0:19:48 That’s a set of choices that we’re making.
0:19:49 Right, right.
0:19:51 So I have two last questions for you.
0:19:54 So with sort of every tech leap, right?
0:19:55 New jobs are created.
0:19:56 Some jobs disappear.
0:20:00 I’m curious, what sort of new jobs do you think will be created as a result of AI?
0:20:05 Well, I have what I think unfortunately is a contrarian opinion about software development.
0:20:16 I think people sometimes think that all of this leverage that programmers are getting with their software development tools powered by AI means that there’s going to be fewer programming jobs.
0:20:19 I think there are actually going to be more programming jobs.
0:20:26 I look at that app that my daughter did like, you know, the AI tools basically turned her into a programmer.
0:20:27 Right.
0:20:33 And like she doesn’t want to be a computer scientist and she doesn’t have time to like make herself a good programmer.
0:20:35 She wants to be a biologist.
0:20:36 Right.
0:20:49 And so like, I think we’re going to have like a ton of people who are effectively doing software development in places where software development couldn’t happen before at all because there just aren’t enough developers in the world.
0:21:01 And I think that, you know, we’re going to so lower the barrier to entry to application creation that you’re just going to have a lot more folks wanting to create a lot more apps.
0:21:06 And this is, by the way, what has happened every time in the history of software.
0:21:07 Right.
0:21:16 That we have made a big leap forward in leverage like, you know, building tools for programmers that make them more productive has always resulted in us needing more programmers.
0:21:19 So like that’s certainly a thing that I think we’re going to need more of.
0:21:24 And then, you know, it’s sort of hard to predict what the other jobs are going to be.
0:21:48 I do think that the shape of the jobs that we’re going to need, the really, really blindingly important thing is going to be people who are really sensitive to the needs of their fellow human beings who are sort of thinking about like, OK, what are long term needs versus short term opportunities?
0:22:10 Yeah, like how do I do things that are like legitimately and seriously in service to, you know, what society needs and like what the people around me need and like the problems they’re struggling with and the solutions that, you know, they ought to be looking for that they might not be because they don’t understand what the art of the possible is with, you know, this new disruptive technology.
0:22:21 And so I think we’re going to just need all sorts of new product makers, you know, who have problems and they don’t even realize yet what capability they have available to them to go solve those problems.
0:22:29 And like that’s just going to hopefully unlock a ton of entrepreneurship and creativity that is going to benefit a ton of people, I think.
0:22:40 Yeah, yeah. Amazing. So my last question, it’s kind of a two parter. What excites you most about what you can do with AI today? And what excites you most about what we’ll be able to do with AI in the near future?
0:23:09 Yeah, I’m just really personally, like this is me, Kevin, not Microsoft. Like I am such a curious person by nature. Like I’ve got just a near schizophrenic array of hobbies that I’m working on. Like right now, like literally I was sitting on stage waiting to go on for my Build keynote and I was texting back and forth with a bunch of friends about a, you know,
0:23:39 a hikidashi-guro Japanese ceramics kiln that we’re designing the second version of. And, you know, one of the interesting things about that process is like we’re using AI to help with it to do a whole bunch of things. So like, a hikidashi-guro is like a particular type of firing process that was very common in Japan in the 16th century and earlier. And like there just isn’t a lot of documentation
0:23:58 about like how you design one of these things and like, you know, what all the other considerations are and like firing ceramic objects in them. And so, you know, these deep research agents can like look at documents that aren’t in your language. They can like help fill in gaps that are there in the published literature.
0:24:27 And then like you can use them for doing sort of weird scientific things. Like we’re trying to figure out like where to place the burner port on this kiln where what you want is like a vortex, a stably forming vortex of fire. And it’s rectangular. And so, like, you got to figure out like where the, you know, eddy currents are going to be with this hot gas inside of the kiln. And like, we’re asking the system to help us figure out how to do a simulation of this combustion process.
0:24:32 And like, there’s just no way I would have time to go into this depth on this thing, given I’ve got a day job, without these deep research assistants. So like, that’s just wild. It’s wild that in 2025, we’re already here. Right. And it’s just sort of a great and awesome thing for a curious person to have these tools.
0:25:15 So over the next year, like the thing that I’m really hoping for is that we will have systems that are like reliably taking action on our behalf that like we sort of have transitioned from this mode of, you know, you are having to sit down with these tools synchronously and like have a session with them to like, you really do say, go sort this out for me. And like, you can have a week to go do it.
0:25:45 And like, I’m not asking you to burn a week worth of GPU time, but like you can make requests into systems and like wait for the response to come back. You can, you know, sort of interact with other people and other agents on my behalf and like wait for those responses and then come back to me asynchronously. Like I get a signal when like you got something to show me and like, you know, we can collaborate and iterate a little bit like that. Like, I think just getting into that mode where that’s what the UX
0:26:08 of these things look like I think is just going to be super interesting and fantastic and we’re going to discover a bunch of problems that we’ll then have to go solve and like that’s what my talk will probably be like next year at Build is like it will be immediately obvious like we’ve got a half a dozen like complicated things about what I just described that we’re going to have to go sort out and like complicated problems to go sort out is my favorite thing.
0:26:14 Amazing. Well, thank you so much for spending the time with me today. I really, really appreciate it. Thank you. It was a pleasure chatting with you.
0:26:42 Well, that’s it for today’s episode of The Next Wave podcast. I really hope you enjoyed this conversation with Kevin Scott as much as I did. A couple of things really stood out to me. First, how much progress is being made around alignment and safety and how seriously Microsoft is treating those challenges. And second, Kevin’s point that AI isn’t just about efficiency. It’s also opening the door for more people to build, create and solve problems.
0:26:59 Even if they’ve never written a line of code before. The big takeaway here is that AI isn’t just a technical shift. It’s a societal one and that the people and companies who lean into curiosity, efficiency and responsibility are the ones who will thrive in this next wave.
0:27:26 So a huge thank you again to Kevin from Microsoft for joining me and sharing his perspective. If you enjoyed this conversation, make sure to subscribe to The Next Wave over on YouTube, Spotify, Apple Podcasts, or wherever you listen to podcasts. And if you found this episode valuable, please share it with a friend. Maybe someone who’s trying to wrap their head around the future of AI. Thanks so much for listening. And hopefully I’ll see you in the next one. Bye bye.
Want the guide to create AI Agents? get it here: https://clickhubspot.com/fhc
Episode 77: Are we nearing a future where AI agents can autonomously tackle our biggest challenges—while remaining efficient, safe, and truly aligned with human goals? Matt Wolfe (https://x.com/mreflow) sits down with Microsoft CTO Kevin Scott (https://x.com/kevin_scott), a leader at the forefront of AI, cloud computing, and the revolutionary partnerships powering today’s tech landscape.
In this episode, Matt digs deep with Kevin into the real obstacles and opportunities facing AI agents: from the complexities of AI systems that even humans struggle to fully understand, to breakneck advances in energy efficiency, memory, and software-hardware evolution. Kevin shares insider stories about making AI sustainable and accessible on a global scale, why big tech is united on AI safety, and how democratized tools are opening the floodgates of creativity and entrepreneurship. Whether you’re curious about the future of autonomous agents, the jobs AI will create, or how your life will change in the next 1-2 years—this is a conversation you can’t miss.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) AI Alignment and Safety Focus
- (05:00) Optimizing AI Amid Energy Constraints
- (09:15) AI Advancements: Exponential Efficiency Gains
- (14:12) Improving AI Context Efficiency
- (17:32) Challenges in Embodied AI Progress
- (21:01) Future Demand: Programmers and Empathy
- (24:28) Future of AI: Asynchronous Collaboration
- (26:15) AI: Societal Shift and Opportunity
—
Mentions:
- Kevin Scott: https://www.linkedin.com/in/jkevinscott
- Microsoft: https://www.microsoft.com/en-us/
- Nvidia: https://www.nvidia.com/en-us/
- OpenAI: https://openai.com/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano