#224 Bret Taylor – A Vision for AI’s Next Frontier

AI transcript
0:00:03 Technology companies aren’t entitled to their future success.
0:00:06 AI, I think, will change the landscape of software.
0:00:10 And I think it will help some companies and it will really hurt others.
0:00:15 And so when I think about what it means to build a company that’s enduring,
0:00:19 that is a really, really tall task in my mind right now.
0:00:24 Because it means not only making something that’s financially enduring over the next 10 years,
0:00:31 but setting up a culture where a company can actually evolve to meet the changing demands
0:00:36 of society and technology when it’s changing at a pace that is like unprecedented in history.
0:00:40 So I think it’s one of the most fun business challenges of all time.
0:00:45 I just get so much energy because it’s incredibly hard and it’s harder now than it’s ever been
0:00:49 to do something that lasts beyond you. But that, I think, is the ultimate measure of a company.
0:01:02 Welcome to the Knowledge Project Podcast. I’m your host, Shane Parrish.
0:01:06 In a world where knowledge is power, this podcast is your toolkit for mastering the best
0:01:08 of what other people have already figured out.
0:01:13 If you want to take your learning to the next level,
0:01:17 consider joining our membership program at fs.blog/membership.
0:01:24 As a member, you’ll get my personal reflections at the end of every episode, early access to episodes,
0:01:29 no ads, including this one, exclusive content, hand-edited transcripts, and so much more.
0:01:32 Check out the link in the show notes for more.
0:01:36 Six months after Bret Taylor realized AI was about to change everything,
0:01:41 he walked away from his co-CEO job at Salesforce to start from scratch.
0:01:46 That’s how massive this shift really is. The mastermind behind Google Maps and the
0:01:51 former chief technology officer at Facebook, Bret reveals the brutal truths about leadership,
0:01:57 AI, and what it really takes to build something that endures long after you’ve reached the top.
0:02:03 Bret’s led some of the most influential companies in tech and seen exactly what makes businesses scale,
0:02:08 what kills them from within, and why most founders don’t survive their own success.
0:02:13 In this conversation, you’ll discover why so many companies are already on life support without
0:02:19 realizing it, how first principles thinking separates the next wave of winners from everyone else,
0:02:25 and the hidden reason most acquisitions fail. We’ll explore why AI is bigger than anyone suspects,
0:02:31 plus the mindset shift that turns great engineers into exceptional CEOs. Whether you’re a founder,
0:02:37 an operator, or simply someone who wants to think sharper, this episode will change how you see your
0:02:42 business, technology, and the future. It’s time to listen and learn.
0:02:53 What was your first real aha moment with AI where you realized, holy shit, this is going to be huge?
0:02:59 I had two separate aha moments. One that I don’t think I really appreciated how huge it would be,
0:03:07 but it kind of reset my expectation, which was the launch of DALL-E in the summer of ’22. Is that right?
0:03:15 I might be off by a year. I think summer of ’22. And the avocado chair that they generated. And I had
0:03:22 been, well, my background is in computer science and pretty technically deep. I hadn’t been paying
0:03:31 attention to large language models. I just didn’t follow the progress after the Transformers paper. And
0:03:38 I saw that and my reaction was, I had no idea computers could do that. And that particular launch,
0:03:43 you know, seeing a generated image of an avocado chair, I don’t think I extrapolated to what,
0:03:51 you know, where we are now. But it shook me, and I realized I need to pay more attention to this
0:03:57 space, and OpenAI specifically, than I had been. That was the moment where I realized
0:04:01 I clearly had not been paying attention to something significant. And then it was,
0:04:08 you know, six months later, coincidentally, like the month after I left Salesforce, ChatGPT came out,
0:04:14 and before it became a phenomenon, although it did so quickly, I was already
0:04:20 plugged into it. And from then on, I could not stop thinking about it.
0:04:26 But that avocado chair, I don’t know why, there was a bit of an emotional
0:04:33 moment where you saw a computer doing something that wasn’t just rule-based, but creative.
0:04:39 The idea of a computer creating something from scratch, which doesn’t seem so novel
0:04:43 a few years later, just blew my mind at the time.
0:04:49 One of the unique things about you is that you’ve started companies. You’ve been acquired by Facebook
0:04:56 and Salesforce. Inside those companies, you rose up to be the CTO at Facebook, the co-CEO at Salesforce.
0:05:03 Talk to me about founders working for founders and founders working within a company.
0:05:10 Yeah. It’s, uh, it’s a very, um, challenging transition for a lot of founders to make. I think
0:05:17 there’s lots of examples of acquisitions that have been really transformative from a business standpoint.
0:05:23 Uh, I think YouTube, Instagram being two of the more prominent that have clearly changed the shape of,
0:05:30 of the acquiring company. But even in those cases, you know, the founders didn’t stay around that long.
0:05:33 And those guys, that’s maybe a little unfair, did stick around for a little bit.
0:05:39 I think the interesting thing about being a founder is it’s not just building a business,
0:05:43 but it’s very much your identity. And I think it’s very hard for people who aren’t founders to
0:05:48 experience it. You take everything very personally, from the product to the
0:05:55 customers, to the press, to your competitors, both the inner and outer measures
0:06:02 of success. And I think when you go to being acquired, there’s a business aspect to it. And,
0:06:06 you know, can you operate within a larger company, but that’s intertwined with a sense of identity.
0:06:12 You go from being the founder of a company and the CEO or CTO of a company,
0:06:16 whatever your title happens to be as one of the co-founders, to being a part of a larger
0:06:22 organization. And to fully embrace that, you actually need to change your identity. Um, you need to go from
0:06:29 being, you know, the head of Instagram or, in my case, the head of Quip, to being an employee of Salesforce, or
0:06:36 going from being the CEO of FriendFeed to being an employee of Facebook. And what I’ve observed is
0:06:42 that identity shift is a prerequisite for most of the other things. It’s not simply your ability to
0:06:48 handle the politics and bureaucracy of a bigger company or to navigate a new structure. I actually
0:06:54 think most founders don’t make that leap where they actually identify, uh, with that new thing.
0:06:57 It’s even harder for some of the employees too, because most of the time in an acquisition,
0:07:02 an employee of an acquired company didn’t choose that path. And in fact, they chose to work for
0:07:07 a different company, and the acquisition determined a different outcome.
0:07:11 And that’s why integrating acquisitions is so nuanced. And I would say that, uh,
0:07:18 having the experience of having been acquired, uh, you know, before and having acquired some
0:07:23 companies before when I got to Salesforce, I really tried to be self-aware about that and really tried
0:07:29 to be a part of Salesforce, and tried to shift my identity and not be
0:07:35 a single-issue voter around Quip. I really tried to embrace it. And I
0:07:38 think it’s really hard for some founders to do it. Some founders don’t want to, honestly.
0:07:43 They maybe cash the check, and it’s more of a transactional relationship.
0:07:50 I really am so grateful for the experience of having been at Facebook and Salesforce.
0:07:53 I learned so much, but it really took a lot of effort on my part to just
0:07:59 transform my perception of myself and who I am to get that value out of the company that acquired us.
0:08:04 How did you, how did it change how you did acquisitions at Salesforce? You guys did a
0:08:09 lot of acquisitions while you were there and you’re acquiring founders and sort of startups. And I think
0:08:15 Slack was while you were there too. How did that change how you went about integrating that company
0:08:20 into the Salesforce culture? I’ll talk abstractly, and about some specific acquisitions too, but
0:08:27 first, I think I tried to approach it with more empathy, um, and more realism. You know, uh,
0:08:35 one of the nuanced parts about acquisitions is there’s the period of, um, doing the acquisition.
0:08:39 There’s the period, after you’ve decided to do it, of doing due diligence. And then there’s
0:08:44 a period when it’s done and you’re integrating the company, and the period after. One of the
0:08:51 things that I have observed is that with companies doing acquisitions, often the part of deciding to do
0:08:58 it is a bit of a mutual sales process. You’re trying to find a fair value for the company
0:09:04 and, and there’s some back and forth there, but at the end of the day, there’s usually some objective
0:09:09 measure of that, um, influenced by a lot of factors, but, but there’s some fair value of that.
0:09:15 But what you’re trying to figure out is, in corporate speak it would be synergies, but, like,
0:09:19 why do this? Why is one plus one greater than two? That’s why you do an acquisition
0:09:26 just from first principles. It’s often an exercise in storytelling. You know, you
0:09:31 bring this product together with our product and customers will, you know, find the whole greater
0:09:38 than the sum of its parts. This team applied to our sales channel, or if you’re a Google acquisition,
0:09:44 you know, imagine the traffic we can drive to, to this product experience. Uh, you know, in the case of
0:09:49 something like an Instagram, imagine our ad sales team attached to your, you know, amazing product
0:09:55 and how quickly we can help you realize that value, whatever it might be. I find that people,
0:10:01 because there’s a craft of storytelling in getting both sides to come to the same conclusion
0:10:08 that they should do this acquisition, sometimes either simplify or sugarcoat
0:10:14 some of the realities of it. Little things like, you know, how much control
0:10:21 will the founding team of the acquired company have over those decisions? Um, uh, will it be operated
0:10:27 as a standalone business unit or will your team be sort of broken up into functional groups within the
0:10:33 larger company? And it’s sort of those little, they’re not little, but those I’ll say boring,
0:10:38 but important things that often people don’t talk enough about. And you don’t need to figure
0:10:42 out every part of an acquisition to make it successful, but often you can end up running
0:10:47 into like true third rails that you didn’t find because you were having the storytelling discussions
0:10:51 rather than getting down to brass tacks about how things will work and what’s important.
0:10:55 The other thing that I think is really important is being really clear what success looks like.
0:11:02 Um, and you know, I think, uh, sometimes it’s a business outcome, sometimes it’s a product goal,
0:11:11 but I found that, um, if you went to most of the like larger acquisitions, uh, in the valley and you,
0:11:18 two weeks after it was closed, interviewed the management team of the acquiring company and the
0:11:21 acquired company and you asked them like, what does success look like two years from now?
0:11:28 My guess is, like 80% of the time, you’d get different answers. And I think it goes back to this
0:11:31 storytelling thing, where you’re talking about the benefits of the acquisition rather than
0:11:35 about what success looks like. So I really tried to approach it differently. I tried to
0:11:40 pull forward some harder conversations when I’m doing acquisitions, or
0:11:45 even when I’m being acquired, since it’s happened to me now twice, so that when you’re approaching
0:11:50 it, you’d not only get the, hey, why is one plus one greater than two? Everything’s gonna be
0:11:57 awesome. But, no, for real: what does success look like here? And then,
0:12:02 as a founder, your job at an acquired company is to tell your team that and align
0:12:07 your team to that. And I think founders don’t take on enough accountability towards making these
0:12:11 acquisitions successful as I think they should. And it goes back, again, to a certain
0:12:17 naivete. You’re not your company anymore.
0:12:22 You’re a part of something larger. And I think, you know, successful ones work when everyone embraces,
0:12:27 embraces that. At what point in the acquisition process is that conversation? Is that after we’ve
0:12:33 signed our binding commitment, or should we have that conversation
0:12:39 before, so I know what I’m walking into? My personal take is
0:12:44 you have to get to the point where the two parties want to merge, you know, and that’s a,
0:12:49 obviously a financial decision, particularly if it’s like a public company, there’s a board and
0:12:56 shareholders. Most acquisitions in the Valley are a larger firm acquiring a private firm. That’s not
0:13:00 all of them, but I would say that’s the vast majority. And in those cases, there’s often a
0:13:04 qualitative threshold where someone’s like, yeah, let’s do this. We kind of have the high-level
0:13:09 terms, sometimes formally a term sheet. I think it’s right after that,
0:13:16 so where people have really committed to the, the key things, how much value, why are we doing this,
0:13:23 the big stuff. And there’s usually lots of lawyers being paid lots of money to
0:13:28 turn those term sheets into a more complete set of documents, usually more complete
0:13:33 due diligence, stuff like that. There’s an awkward waiting period there. And
0:13:38 that’s a time I think where like the strategic decision makers in those moments can get together
0:13:44 and say, let’s talk through what this really means. And the nice part, for all
0:13:48 parties, is you’ve kind of made the commitment to each other. So I think you have more
0:13:54 social permission to have real conversations at that point. But you also haven’t consummated the
0:14:01 relationship, you know? So the power imbalance isn’t totally there, and you can
0:14:06 really talk through it. And it also, I think engenders trust just because by having harder
0:14:11 conversations in those moments, you’re learning how to have real conversations and learning how each
0:14:16 other works. So that’s my personal opinion on when to have it. So you mentioned the board. You’ve been
0:14:22 on the board of Shopify, you’re on the board of OpenAI, you’re a founder. What’s the role of a board
0:14:26 and how is it different when you’re on the board of a founder led company?
0:14:34 I really like being involved in a board. And I’ve been involved in multiple boards.
0:14:40 I am an operator through and through. I probably self-identify as an engineer first,
0:14:48 more than anything else, and I love to build. Learning how to be an advisor is a very different
0:14:54 vantage point. You see how other companies operate, and you also learn how to
0:15:00 have an impact and add value without doing it yourself. And I’ve really,
0:15:05 I think, become a better leader having learned to do that. I have really only joined
0:15:12 boards that were led by founders. You can ask them, but I think
0:15:17 they sought me out because I’m a founder, and I like working with founder-led companies.
0:15:26 I’m sure there’s lots of studies on this, but I think founders drive better
0:15:34 outcomes for companies. I think founders tend to have permission to make bolder,
0:15:39 more disruptive decisions about their business than a professional manager. There’s exceptions
0:15:44 like Satya, who I think is one of the greatest, if not the greatest, CEOs of
0:15:50 our generation as a professional manager. But you look at everyone from
0:15:57 Tobi Lütke to Marc Benioff to Mark Zuckerberg to Sam at OpenAI. And I think when you have founded
0:16:04 a company, all your stakeholders, employees in particular, give you the benefit of the
0:16:09 doubt. You know, you created this thing. And if you say, Hey, we need to, um, do a major shift in
0:16:16 our strategy, even hard things like, uh, layoffs, founders tend to get a lot of latitude and are
0:16:21 judged, I think differently. And, and I think rightfully so in some ways, because of the interconnection
0:16:26 of their identity to the thing that they’ve created. And so I actually really believe in founder led
0:16:32 companies. Um, one of the real interesting challenges is going from a founder led company to
0:16:35 not, and you know, Amazon has gone through that transition. Microsoft has gone through that
0:16:42 transition, for that reason. But I love working with founders. And I love working
0:16:49 with people like Tobi and Sam because they’re so different from me. I can see how
0:16:54 they operate their businesses, and I am inspired by it. I learn from it, and obviously working for
0:16:58 Marc at Salesforce, you go, wow, that’s really interesting. Almost like an
0:17:03 anthropologist: why did you do that? I want to learn more. And so I love working
0:17:07 with founders that inspire me because I just learned so much from them. It’s such an interesting front
0:17:11 row seat into what’s happening. Do you think founders go astray when they start listening to
0:17:16 too many outside voices? And this goes back to, I’m sure you’re aware of it, the Brian Chesky
0:17:20 founder mode idea. Talk to me about that.
0:17:27 I have such a nuanced point of view on this because it is decidedly not simple. Uh, so
0:17:34 broadly speaking, I really like the spirit of founder mode, which is just having
0:17:42 deep founder led accountability for every decision at your company. Um, I think that that’s how great
0:17:49 companies operate. Uh, and when you, you know, proverbially make decisions by committee or you’re
0:17:55 more focused on process than outcomes, um, that produces all the experiences we hate as employees,
0:17:59 as customers, you know, that’s the proverbial DMV, right? You know, it’s like process over outcomes.
0:18:06 Um, and then similarly, uh, you look at the disruption in all industries right now because
0:18:12 of AI, you know, the companies that will recognize where things are clearly going to change. Like
0:18:18 everyone can see it. It’s like a slow-motion car wreck. Everyone knows how it ends. You need that
0:18:24 kind of decisiveness to break through boundaries, layers of management, to actually make change as fast
0:18:30 as required in business right now. The issue I have, not with Brian’s statements, Brian’s amazing,
0:18:37 um, is how people can sort of interpret that and sort of execute it as a caricature of what I think
0:18:44 it means. You know, I remember after Steve Jobs passed away, and, I don’t
0:18:50 know, I’ve met Steve a couple of times, I never worked with him in any meaningful way,
0:18:54 but he was, if you believe the stories, kind of pretty hard on his
0:18:59 employees and very exacting. And I think a lot of founders were mimicking that,
0:19:04 down to wearing a black turtleneck and yelling at their employees. I’m like, not sure that was the
0:19:10 cause. I think Steve Jobs’s taste and judgment, executed through that
0:19:15 packaging, were somehow the cause of their success. And then similarly, I think founder
0:19:21 mode can be weaponized as an excuse for just like overt micromanagement. And that probably won’t
0:19:27 lead to great outcomes either. And most great companies are filled with extremely great individual
0:19:35 contributors who make good decisions and work really hard. And companies that are solely
0:19:40 executing through the judgment of one individual probably aren’t going to be able to scale to be truly great
0:19:46 companies. So I have a very nuanced point because I actually believe in founders. I believe in actually
0:19:52 that accountability that comes from the top. I believe in cultures where founders have
0:19:58 license to go in, all the way down to a small decision, and fix it: the infamous question-mark emails from
0:20:02 Jeff Bezos, that type of thing. That’s the right way to run a company, but that doesn’t
0:20:08 mean that you don’t have a culture where individuals are accountable and empowered. And you don’t
0:20:12 want people trying to make business decisions based on what will please our
0:20:17 dear leader, which is the caricature of this. And so, after that came out,
0:20:20 I could sort of see it all happening, which is like, some people will take that and be like, you know what,
0:20:24 you’re right. I need to go down and be in the details. And some people will do it and probably make
0:20:28 everyone who works for them miserable and probably both will happen as a consequence. So.
0:20:34 Thank you for the detail and nuance. I love that too. Do you think engineers make
0:20:39 good leaders? I do think engineers make good leaders, but one thing I’ve seen is that
0:20:50 I really believe that great CEOs and great founders usually start with one specialty,
0:20:57 but become more broadly specialists in all parts of their business. You know,
0:21:04 businesses are multifaceted, and rarely is a business’s success due to one
0:21:08 thing like engineering or product, which is where a lot of founders come from.
0:21:14 Often your go to market model is important, uh, for consumer companies, how you engage with the
0:21:22 world and public policy becomes extremely important. And I think as you see founders
0:21:27 grow from doing one thing to being a real, meaningful company like Airbnb or Meta or
0:21:32 something, you can see those founders really transform from being one thing to many things.
0:21:38 So I do think engineers make great leaders. I think the first-principles thinking, the system
0:21:46 design thinking, really benefits things like organization design and strategy. But I also
0:21:53 think, when we were speaking earlier about identity, that one of the main
0:21:58 transitions founders need to make, especially engineers, is that you’re no longer the product
0:22:06 manager for the company; you’re the CEO. And on any given day, do you spend time recruiting an executive
0:22:12 because you have a need? Do you spend time, uh, on sales because that will have the biggest impact?
0:22:19 Um, do you spend time on public policy or regulation? Because if you don’t, uh, it will happen to you and,
0:22:25 and could really impact your business in a negative way. And I think engineers who are
0:22:31 unwilling to elevate their identity from what it was to what it needs to be in the moment
0:22:36 often lead to plateaus in a company’s growth. So, a hundred percent, I think engineers
0:22:43 make great leaders, and it’s not a coincidence, I think, that most of the great Silicon
0:22:49 Valley CEOs came from engineering backgrounds. But I also don’t think that’s sufficient either
0:22:54 as your company scales. And I think that making that transition as all the great ones have is incredibly
0:23:00 important. To what extent are all business problems engineering problems? That’s a deeper philosophical
0:23:06 question than I think I have the capacity to answer. What is engineering? What I like about
0:23:14 approaching problems, uh, as an engineer is, uh, first principles thinking and understanding,
0:23:21 uh, the root causes of issues rather than simply addressing the symptoms of the problem. And I do think
0:23:27 that coming from a background in engineering benefits everything, even process: how engineers do a
0:23:32 root cause analysis of an outage on a server is a really great way to analyze why you lost a sales
0:23:39 deal. I love the systematic approach of engineering. One thing, though, going back to
0:23:45 good ideas that can become caricatures of themselves: one thing I’ve seen with engineers who
0:23:53 go into other disciplines is, sometimes you can overanalyze decisions in some domains. Let’s just take
0:24:00 modern communications, which is driven by social media and very fast-paced. Having a
0:24:07 systematic first principles discussion about every, you know, tweet you do is probably not a great comms
0:24:15 strategy. And then similarly, there are some aspects of, say, enterprise
0:24:22 software sales that aren’t rational, but they’re human, like forming personal
0:24:27 relationships, and the importance of those to building trust with a partner. It’s not
0:24:33 all just product and technology. So I would say a lot of things can really benefit from coming
0:24:40 at them with an engineer’s mindset, but I do think that taking that to its logical
0:24:47 extreme can lead to analysis paralysis, can lead to over-intellectualizing some things that are
0:24:52 fundamentally human problems. And so, yeah, I think a lot can benefit from engineering, but I wouldn’t say
0:24:57 everything’s an engineering problem, in my experience. You brought up first principles a couple of times. You’re
0:25:03 running your third startup now, Sierra. It’s going really well. How do you use first principles
0:25:12 at work? Yeah, it’s particularly important right now because the market of AI is
0:25:22 changing so rapidly. If you rewind two years, most people hadn’t used ChatGPT yet.
0:25:31 Most companies hadn’t heard the phrase large language models or generative AI yet. And in two years, you have ChatGPT
0:25:38 becoming one of the most popular consumer services in history, faster than any service in history.
0:25:46 And you have across so many domains in the enterprise, uh, really rapid transformation.
0:25:52 Law is being transformed. Marketing is being transformed. Customer service, which is where my
0:26:00 company Sierra works is being transformed. Software engineering is being transformed. And the amount of
0:26:06 change in such a short period of time is, I think, unprecedented. Perhaps I lack the
0:26:12 historical context, but it feels faster than anything I’ve experienced in my career. And so as a consequence,
0:26:19 I think, uh, if you’re responding to the facts in front of you and not thinking from first principles about
0:26:25 why we’re at this point and where it will probably be 12 months from now, the likelihood that you’ll make the
0:26:33 right strategic decision is almost zero. So, as an example, it’s really interesting to me that
0:26:39 with modern large language models, one of the careers that is being most transformed is software engineering.
0:26:47 And one of the things I think a lot about is how many software engineers will we have
0:26:54 at our company three years from now? What will the role of a software engineer be as we go from being authors of code to
0:26:57 operators of code-generating machines?
0:27:01 What does that mean for the type of people we should recruit?
0:27:06 And if I look at the actual craft of software engineering that we’re doing right now, um,
0:27:11 I think it’s literally a fact that it’ll be completely different two years from now.
0:27:16 Yet I think a lot of people building companies hire for the problem in front of them rather than doing
0:27:22 that. But two years is not that long. Those people that you hire now will just be getting really
0:27:28 productive a couple of years from now. So we try to think about most of our long-term business from
0:27:34 first principles. I’ll say a couple of examples in our business. Our pricing model is really
0:27:40 unique and comes from first-principles thinking. Rather than having our customers pay a license for the
0:27:45 privilege of using our platform, we only charge our customers for the outcomes. Meaning, if the
0:27:50 AI agent they’ve built for their customers solves the problem, there’s usually a pre-negotiated
0:27:55 rate for that. And that comes from the principle that in the age of AI, software isn’t just helping
0:28:02 you be more productive, but actually completing a task. What is the right and logical business
0:28:06 model for something that completes a task? Well, charging for a job well done rather than charging for the privilege of using the software.
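To make that pricing idea concrete, here is a minimal sketch of outcome-based billing in Python. Everything in it, the rate, the field names, the function, is invented for illustration; it is not Sierra’s actual implementation.

```python
PER_RESOLUTION_RATE = 2.50  # hypothetical pre-negotiated rate per resolved case, in dollars

def monthly_invoice(conversations: list[dict]) -> float:
    """Charge for the job done: bill only conversations the agent resolved,
    never seats or licenses."""
    resolved = sum(1 for c in conversations if c["resolved"])
    return resolved * PER_RESOLUTION_RATE

print(monthly_invoice([
    {"id": 1, "resolved": True},
    {"id": 2, "resolved": False},  # escalated to a human: not billed
    {"id": 3, "resolved": True},
]))  # -> 5.0
```

The point of the design is that revenue only accrues when the customer’s problem actually gets solved, aligning the vendor’s incentives with the outcome rather than with usage.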
0:28:12 Similarly, with a lot of our customers,
0:28:18 we help deliver them a fully working AI agent. We don’t hand them a bunch of software
0:28:25 and say, good luck, configure it yourself. And the logic there is, in a world where
0:28:32 making software is easier than it ever has been before, and you’re delivering outcomes for your customer,
0:28:37 the delivery model of software probably should change as well. And we’ve really tried to
0:28:41 reimagine what the software company of the future should look like, and to
0:28:46 model that in everything that we do. That’s brilliant. How do you think software
0:28:50 engineering will change? Is it you’re going to have fewer people or the people are going to be
0:28:57 organized differently, or how do you see that? How geeky can I get? As geeky as you want.
0:29:04 I actually wrote a blog post, uh, right before Christmas about this. I think this is an area
0:29:09 that deserves a lot more research. Uh, I’ll describe where I think we are today and smart people may
0:29:18 disagree, but a lot of the modern large language models, both the traditional large language models
0:29:24 and sort of the new reasoning models are trained on a lot of source code. And it’s an important input
0:29:29 to all of the knowledge that they’re trained on. Um, as a consequence, even the early models were
0:29:37 very good at generating code. So every single engineer at Sierra uses Cursor,
0:29:43 which is a great product that basically integrates with the IDE Visual Studio Code to help you
0:29:50 generate code more quickly. It feels like a local maximum, in a really obvious way to me,
0:29:57 which is you have a bunch of code written by people, um, written in programming languages that
0:30:04 were designed to make it easy for people to tell a computer what to do. Probably the funniest example of
0:30:10 this is Python. It almost looks like natural language, but it’s notoriously not robust.
0:30:16 Most Python bugs are found by running the program because there’s no static type checking.
0:30:25 While you could run fancy static analysis, most bugs show up
0:30:32 simply at runtime, because the language is designed to be ergonomic to write.
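A tiny illustration of that point, with invented example code: the bug below is invisible until the program actually runs, whereas the same code, thanks to its annotations, would be flagged ahead of time by a static checker such as mypy.

```python
def total_cost(prices: list[float]) -> float:
    # A static checker like mypy would reject the second call below
    # because of the annotation; plain Python happily starts running it.
    return sum(prices)

print(total_cost([9.99, 4.50]))      # fine: 14.49
print(total_cost(["9.99", "4.50"]))  # TypeError, but only at runtime
```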
0:30:39 Yet we’re using AI to generate that. We’ve designed most of our computer
0:30:47 programming systems to make it easy for the author of code to type it quickly. And we’re in a world
0:30:53 where the marginal cost of generating code is going to zero,
0:30:58 but we’re still generating code in programming languages that were designed for human authors.
0:31:07 And similarly, if you’ve ever looked at someone else’s code, which a lot of people
0:31:13 do professionally, it’s called a code review. It’s actually quite hard to do a code review.
0:31:18 You end up interpreting; you’re basically trying to put the system in your head and simulate
0:31:25 it as you’re reading the code to find errors in it. So the irony now is that we’ve taken
0:31:30 programming languages that were designed for authors, and we’re having humans do the job of
0:31:36 essentially code reviewing code written by an AI. And yet all of the AI is in the code
0:31:42 generation part of it. I’m not sure that’s great; we’re generating a lot of
0:31:47 code with similar flaws to what we’ve been generating before, from security holes to functional bugs,
0:31:54 and in greater volumes. What I would like to see is, if you start with the
0:32:01 premise that generating code is free, or going towards free, what would be the programming
0:32:06 systems that we would design? So, for example, Rust is a programming
0:32:13 language that was designed for safety, not for programming convenience. My understanding is
0:32:18 that in the Mozilla project, there were so many security holes in
0:32:24 Firefox that they said, let’s make a programming language that’s very fast,
0:32:28 but where everything can be checked statically, including memory safety. That’s a really interesting
0:32:34 direction, where you weren’t optimizing for authorship convenience but optimizing for correctness.
0:32:40 Are there programming language designs that are designed so a human looking at it can very quickly
0:32:46 evaluate, does this do what I intended it to do? There’s an area of computer science I studied in
0:32:50 college called formal verification, which at the time was turning a lot of computer programs into
0:32:55 math proofs and finding inconsistencies. And it sort of worked well, not as well as you’d hope.
0:33:02 But, you know, in a world where AI is generating a lot of code, you know, should we be investing in
0:33:08 more in formal verification, so that the operator of that code-generating machine can more easily verify
0:33:14 that it does in fact do what they intended it to do? And could a combination of a programming
0:33:19 language that is more structurally correct and structurally safe and exposes more primitives for
0:33:26 verification, plus a tool to verify: could you make an operator of a code-generating machine 20 times
0:33:30 more productive, but more importantly, make the robustness of their output 20 times greater?
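Short of full formal verification, a lightweight cousin of the idea already exists: property-based testing, where you state a property once and a tool searches for counterexamples instead of proving the property for all inputs. A minimal sketch using the Python hypothesis library; the reverse function is invented for illustration.

```python
from hypothesis import given, strategies as st

def reverse(xs: list) -> list:
    return xs[::-1]

# State the property once; hypothesis generates many random inputs trying
# to falsify it, and shrinks any failure to a minimal counterexample.
@given(st.lists(st.integers()))
def test_reverse_twice_is_identity(xs):
    assert reverse(reverse(xs)) == xs
```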
0:33:36 And then similarly, there’s themes that go in and out of fashion, like test-driven
0:33:40 development, where you write your unit test first or your integration test first and then
0:33:44 write code until it fulfills the test. Most programmers I know who are really good
0:33:49 don’t despise it, but it sounds better than it is in practice. But
0:33:54 again, writing code is free now, so writing tests is free. How can you create a
0:34:00 programming system with the combination of great programming language design, formal verification,
0:34:05 and robust tests, because you didn’t have to do the tedious part of writing them all? Could you make
0:34:12 something that made it possible to write increasingly complex systems that were increasingly robust?
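For anyone unfamiliar with the practice, a minimal test-first sketch in the pytest style; slugify is a hypothetical function invented for illustration. The tests are written before the implementation and state the intended behavior; the implementation is then grown until they pass, which is exactly the tedious part that becomes cheap once writing code, and tests, is nearly free.

```python
# Written first: the tests state the intended behavior.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  Hello   World ") == "hello-world"

# Written second: the implementation, grown until the tests pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
```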
0:34:15 And then similarly, the elephant in the room for me is that the anchor tenant of most of
0:34:19 these code-generating systems is an IDE right now, and
0:34:26 that obviously doesn’t seem as important in this world. And even with coding agents,
0:34:30 which is sort of where the world is going, it doesn’t change the question of
0:34:33 who’s accountable for the quality of it, who’s fixing it. And I think
0:34:39 there is a world where we can make reasonable software by just automating what we as software
0:34:46 engineers do every day. But I have a strong suspicion that if we designed these systems with the role of a
0:34:53 software engineer in mind, being an operator of a machine rather than the author of the code,
0:34:58 we could make the process much more robust and much more productive. And it feels like a research
0:35:03 problem to me. I think a lot of people, and for good reason, including me,
0:35:08 are just excited about the efficiency of software development going up. But I want to see the new
0:35:11 thing. I’m constructively dissatisfied with where we are.
0:35:16 It’s so interesting: if AI is good enough to write the code, shouldn’t it be good enough to
0:35:17 check the code?
0:35:23 That’s a great question. But actually, it’s still funny to me that we’d be
0:35:28 generating Python, because for anyone listening right now who has ever operated a web
0:35:34 service running Python, it’s CPU-intensive, really inefficient. Should we be taking most of
0:35:41 the unsafe C code that we’ve written and converting it to a safer system like Rust? If authoring
0:35:46 these things and checking it are relatively free, shouldn’t all of our programs be incredibly
0:35:52 efficient? Should they all be formally verified? Should they all be analyzed by a great agent? I do
0:35:58 think it can be turtles all the way down. You can use AI to solve most problems in AI. The thing that I’m
0:36:04 trying to figure out is, what is the system that a human operator is using to orchestrate all those
0:36:10 tasks? And, you know, I go back to the history of software development, and most of the really
0:36:15 interesting metaphors in software development came from breakthroughs in computing. So the C
0:36:20 programming language came from Unix, when time-sharing systems meant things went from
0:36:27 punch cards to something a lot more agile. Smalltalk came out of the development of the
0:36:34 graphical user interface at Xerox PARC. And there was a sort of confluence of message
0:36:39 passing as a metaphor and the graphical user interface. And then there was a lot of really
0:36:46 interesting principles that came out of networking, you know, and sort of distributed systems, distributed
0:36:52 locking, sequencing. I think we should recognize that we’re in this brand new era as significant
0:36:57 as the GUI. You know, it’s like a completely new era of software development. And if you were just to say,
0:37:03 I’m going to design a programming system for this new world from first principles, what would it be?
0:37:06 And I think when we develop it, I think it will be really exciting because rather than
0:37:13 automating and turning up the speed of just generating code and with the same processes we
0:37:21 have today, I think we’ll feel native to this system and give a lot more control to the people who are
0:37:24 orchestrating the system in a way that I think will really benefit software overall.
0:37:29 Let’s dive into AI a little bit. How would you define AGI to the layman?
0:37:39 I think a reasonable definition of AGI might be that any task that a person can do at a computer,
0:37:49 that system can do on par or better. I’m not sure it’s a precise definition, but I’ll tell you where that
0:37:54 comes from and its flaws. There’s not a perfect definition of AGI, in my opinion,
0:37:59 or there’s not a precise definition of AGI, though I’m sure there’s good answers.
0:38:06 One of the things about the G in AGI is generalization. So can you have a system that is
0:38:14 intelligent in domains that it wasn’t explicitly trained to be intelligent on? And so I think that’s
0:38:22 one of the most important things: given a net new domain, can this system become more competent
0:38:31 and more intelligent than a person trained in that domain? And I think at or better
0:38:35 than a person is certainly a good standard there. And that’s sort of the
0:38:40 definition of superintelligence. The reason I mentioned at a computer is I do think that
0:38:51 it is a bar that means, if there’s a digital interface to that system, it affords the ability
0:38:59 for AI to interact with it, which is why that’s a reasonable bar to hit. I say that because
0:39:08 one of the interesting questions around AGI is how quickly it does generalize. And there are domains in
0:39:19 the world where progress isn’t necessarily limited by intelligence, but by other
0:39:24 social artifacts. So as an example, and I’m not an expert in this area, but if you think about
0:39:33 the pharmaceutical industry, my understanding is that one of the main bottlenecks
0:39:42 is clinical trials. So no matter how intelligent a system would be in discovering new therapies,
0:39:50 it may not materially change that. And so you may have something that’s discovering new insights in math,
0:39:55 and that would be delightful and amazing. But the existence of that
0:40:01 system that’s super intelligent in one domain may not translate to all domains equally.
0:40:06 I just heard at least a snippet of a talk by Tyler Cowen, the economist. And
0:40:12 it was really interesting to hear his framing on this about which parts of the economy
0:40:18 could sort of absorb intelligence more quickly than others. And so I choose that definition of AGI,
0:40:24 recognizing that there’s not a perfect definition, because it captures the ability of
0:40:31 this intelligence to generalize, while also recognizing that the domains of society might
0:40:36 not apply with equal velocity, even once we reach that point of a system being able to have that level
0:40:37 of intelligence.
0:40:44 When I think about what artificial intelligence is limited by, or the bottlenecks, if you will,
0:40:50 I keep coming back to a couple of things. There’s regulation, there’s compute, there’s energy,
0:40:54 there’s data, and there’s LLMs. Am I missing anything?
0:40:57 Uh, so you’re saying the ingredients to AGI?
0:41:03 Yeah, like, there’s limitations on each aspect of those things. And those seem to be the main
0:41:09 contributors to what’s limiting us from accelerating even further at this point. How do you
0:41:10 think about that?
0:41:14 Yeah, what you said is roughly how I think about it. I’ll put it into my own words, though.
0:41:25 I think the three primary inputs are data, compute, and algorithms. And data is probably obvious,
0:41:30 but, you know, one of the things after the Transformer model was introduced is it afforded
0:41:37 an architecture with just much greater parallelism, which meant models could be much bigger and train
0:41:44 more quickly on much more data, which led to a lot of the breakthroughs with LLMs;
0:41:50 that’s why they’re large. And the scaling laws a couple of years ago indicated that the
0:41:58 larger you make the model, the more intelligent it would be, at a degree of efficiency that was tolerable.
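For the curious, the scaling laws referenced here are often written, per the Chinchilla paper (Hoffmann et al., 2022), roughly as

$$ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

where $L$ is the model’s training loss, $N$ the number of parameters, $D$ the number of training tokens, and $E$, $A$, $B$, $\alpha$, $\beta$ fitted constants: loss falls predictably, though with diminishing returns, as models and data grow.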
0:42:06 And there’s lots of stuff written about this, but
0:42:11 in terms of just textual content to train on, the availability of new content is
0:42:16 certainly waning, and some people would say there’s a data wall. I’m not an expert in
0:42:21 that domain, but it’s been talked about a lot and you can read a lot about it. There’s a lot of interesting
0:42:27 opportunities though to generate data too. Um, so, uh, there’s a lot of people working on simulation.
0:42:31 If you think about a domain, like self-driving cars, simulation is a really interesting way to
0:42:36 generate. Is that synthetic data? Is that what that is? Yeah, I would say that’s synthetic data,
0:42:42 though simulation and synthetic data are a little different. So
0:42:49 you can generate synthetic data, like generating a novel. Simulation, I would put, at least in my
0:42:56 head, and I’m sure academics might critique what I’m saying, but the way I’ve used it, simulation is
0:43:00 based on a set of principles like the laws of physics. So if you were to build a real-world
0:43:06 simulation for training a self-driving car, you’re not just generating arbitrary data;
0:43:10 the roads don’t turn into loop-de-loops, because that’s not possible with physics.
0:43:18 So by constraining a simulation with a set of real-world constraints, the data has more efficacy,
0:43:24 and it constrains the different permutations
0:43:29 of data you can generate from it. So it’s, I think, a little bit higher quality.
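A toy sketch of that distinction, with everything invented for illustration: unconstrained synthetic generation can emit physically impossible samples, while a simulation bakes the constraint in, so every sample it produces is at least plausible.

```python
import random

MAX_CURVATURE = 0.2  # assumed physical limit on how sharply a road can bend

def synthetic_road(n: int = 10) -> list[float]:
    # Unconstrained synthetic data: any curvature, including impossible ones.
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def simulated_road(n: int = 10) -> list[float]:
    # Simulated data: the physics constraint shrinks the space of
    # permutations to plausible samples, which is the efficacy argument.
    return [random.uniform(-MAX_CURVATURE, MAX_CURVATURE) for _ in range(n)]
```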
0:43:37 But along those lines, a lot of people wonder, if you generate synthetic data, how much value can
0:43:43 that add to a training process? Is it just regurgitating information it already
0:43:49 had? What’s really interesting about reasoning and reasoning models is,
0:43:54 I feel really optimistic these models are generating net new ideas. And so it really affords the opportunity
0:44:00 to break through the data wall as well. So data is one thing. And I think both
0:44:06 synthetic data and simulation are really interesting opportunities to, to grow there. Then you have compute.
0:44:15 And this is why there’s so many data center investments.
0:44:21 It’s why Nvidia as a company has grown so much. Probably the more interesting kind
0:44:27 of breakthroughs there are these reasoning models where, uh, there’s not quite such a formal separation
0:44:33 between the training process and the inference process where you can spend more compute at the
0:44:38 time of inference to generate more intelligence, um, which has really been a breakthrough in a variety
0:44:42 of ways, I think is really interesting, but it shows you how you can run up against walls and,
0:44:47 and find new opportunities to use it. And then finally, algorithms. The biggest breakthrough
0:44:52 is obviously the Transformer model, the Attention Is All You Need paper from Google, that sort of led to
0:44:56 where we are now. But there’s been a number of really important papers since then, from
0:45:02 the idea of chain-of-thought reasoning to what we did at OpenAI with the o1 model,
0:45:08 which is to do some reinforcement learning on those chains of thought to really reach new
0:45:14 levels of intelligence. I mention those anecdotes about breakthroughs
0:45:20 there because my view is that each one of them has its own problems. Compute:
0:45:27 it’s very capital-intensive, and with a lot of these models, the half-life of their value is pretty
0:45:32 short because new ones come out so frequently. And so you wonder,
0:45:37 what’s the business case for investing this capex? And then you have a
0:45:44 breakthrough like o1 and you’re like, gosh, with a distilled model and
0:45:48 moving more to inference time, it changes the economics of it. You have data. You say, gosh,
0:45:53 we’re running out of textual data to train on. Well, now we can generate reasoning. We can do
0:45:56 simulations. Oh, that’s an interesting breakthrough. And then on the algorithm side, as I mentioned,
0:46:02 just the idea of these reasoning models is really novel itself. And each of these at any given point,
0:46:06 if you talk to an expert and in one of them, and I’m an expert in none of them,
0:46:12 they will tell you the current plateau that they can see on the horizon. And there usually is
0:46:16 one. I mean, you’ll talk to different people about how long the scaling laws for something
0:46:19 will continue and you’ll get slightly different opinions, but no one thinks it’s going to last
0:46:26 forever. And at each one of those, because you have so many smart people working on them,
0:46:32 you often have people discovering a breakthrough in each of them. And so, as a consequence,
0:46:39 I really do feel optimistic about the progress towards AGI, because while one of those plateaus might
0:46:44 extend a while if we just don’t have the key idea we need to break through, the idea that we will
0:46:50 be stuck on all three of those domains feels very unlikely to me. And in fact, what we’ve seen
0:46:55 because of the potential economic benefits of AGI is we’re in fact seeing breakthroughs in all three of
0:47:02 them. And, um, as a consequence, um, you know, you’re just seeing just the blistering pace of progress
0:47:09 that we’ve seen over the past couple of years. At what point does AI start making AI better than
0:47:15 we can make it or making it better while we’re sleeping or we can’t be too far from that?
0:47:19 Well, it might reflect back to our software engineering discussion, but
0:47:25 broadly, this is the area of AGI around self-improvement, which is meaningful from an
0:47:31 improvement standpoint, but also obviously from a safety standpoint as well. So,
0:47:36 I don’t know when that will happen, but I do think, you know, by some definition,
0:47:42 you could argue that it’s happening already in the sense that every engineer in Silicon Valley is already
0:47:49 using coding agents and, um, platforms like Cursor to help them code. So it’s contributing already.
0:47:54 And I imagine, as coding assistants go to coding agents in the future,
0:47:58 most engineers in Silicon Valley will show up in the morning and...
0:48:04 But this is sort of the difference between assisted driving in a Tesla versus, like,
0:48:10 self-driving, right? At what point do we leap from, I’m a co-pilot in this, to,
0:48:12 I don’t have to do anything?
0:48:17 I mean, there’s so much nuance to the answer. I’m not sure how to answer,
0:48:21 because I’m not sure you’d necessarily want that. I think for some software applications
0:48:25 that’s important. But as we brought up when we were talking about the act of software
0:48:32 development, people have to be accountable for the software that they produce. And that
0:48:38 means if you’re doing something simple, like a software as a service application, that it’s secure,
0:48:38 that it’s reliable, that the functionality works as intended. For something as meaningful as
0:48:45 an agent that is somewhat autonomous: does it have the appropriate guard
0:48:51 rails? Does it actually do what the operators intended? Are there appropriate safety measures?
0:49:02 So I’m not sure there’s really any system where you’d want to turn a switch and go get your
0:49:08 coffee. But I do think, to the point on these broader safety things, that when
0:49:15 you think about more advanced models, we need to be developing not only more and more advanced
0:49:22 safety measures and safety harnesses, but also using AI to supervise AI, and things like that.
0:49:28 My colleague on the board, Zico Kolter, is probably a better person to talk
0:49:32 through some of the technical things, but there’s a lot of prerequisites to get to that point. And I’m not
0:49:37 sure it’s simply the availability of the technology, because at
0:49:41 the end of the day, we are accountable for the safety of the systems we produce, not just OpenAI
0:49:47 but every engineer. And that’s a principle that should not change.
0:49:53 What does that mean? When we say safety in AI, that seems so vague in general that everybody
0:49:58 interprets it quite differently. Like, how do you think about that? And how do you think about that
0:50:05 in the world where, uh, let’s say we regulate safety in the United States and another country
0:50:11 doesn’t regulate safety? How does that affect the dynamic of it? I’ll answer broadly and then
0:50:18 go into the regulatory question. So I really like OpenAI’s mission, which is to ensure that
0:50:24 AGI benefits all of humanity. That isn’t only about safety, and, I believe, intentionally
0:50:29 so, though obviously the mission was created prior to my arrival. It’s both
0:50:34 about safety, kind of the Hippocratic oath’s first do no harm, and I don’t think one could credibly achieve
0:50:39 that mission if we created something unsafe. So I would say that’s the most important part of the
0:50:45 mission, but there’s also a lot of other aspects of benefiting humanity. Um, is it universally accessible?
0:50:52 Is there a digital divide where some people have access to AGI and some don’t?
0:50:58 Similarly, you could ask: are we maximizing the benefits and minimizing the downsides?
0:51:04 Clearly, AI will disrupt some jobs, but it also could democratize access to healthcare,
0:51:11 education, expertise. Um, so as I think about the mission, it starts with safety, but I actually like
0:51:16 thinking about it more broadly because I think at the end of the day, benefiting humanity is the mission
0:51:21 and safety is a prerequisite. But it’s almost like going to my analogy of the Hippocratic
0:51:28 oath: a doctor’s job is to first do no harm, but then to cure you, and a doctor
0:51:32 that did no harm but didn’t cure you wouldn’t be great either. So I really like to think about it
0:51:40 holistically. And again, Zico or Sam might have a more complete answer here, but broadly,
0:51:48 I think about: does the system that represents AGI align with the intentions of the people who
0:51:53 created it and the intentions of the people operating it, so that it does what we want and it’s
0:52:00 a tool that benefits humanity, a tool that we’re actively using to effect the
0:52:05 outcomes that we’re looking for? That’s kind of the way I think about safety. And
0:52:10 it can be meaningful things like misalignment, or more subtle things like unintended consequences.
0:52:16 And I think that latter part is probably the area that’s really interesting from an
0:52:23 intellectual and ethical standpoint as well. If I look at, what was the bridge in Canada
0:52:28 that fell down, the one that motivated the ring that a lot of engineers wear? Oh yeah, I forget the name of it.
0:52:34 But whether it’s the Tacoma Narrows Bridge in Washington or Three Mile Island,
0:52:43 these intersections are where we’ve engineered what at the time people
0:52:49 hoped would positively impact humanity, but something went horribly wrong. Sometimes it’s
0:52:54 engineering, sometimes it’s bureaucracy, sometimes it’s a lot of things. And so when I think
0:53:00 about safety, I don’t just look at the technical measures of it, but at how this technology
0:53:05 manifests in society and how we make decisions around it. Put another way,
0:53:10 technology is rarely innately good or bad; it’s what we do with it. And I think those
0:53:15 social constructs matter a lot as well. So I think it’s a little early to tell, because we
0:53:21 don’t have this kind of super intelligence right now. Um, and I think it won’t just be a technology
0:53:28 company defining how it manifests in society. And you could imagine, uh, taking a very well aligned
0:53:35 AI system and a human operator directing it towards something, um, that would, uh, objectively hurt
0:53:40 society. And, and there’s a question of like, who gets to decide who’s accountable? And it’s a
0:53:47 perennial question. I mean, it's whether you're deciding, uh, you know, should you use
0:53:51 your smartphone in school? You know, who should decide that? And there's parents
0:53:55 who will tell you, Hey, it’s my decision. It’s my kid. And then there’s principals who will tell
0:54:00 you it’s not benefiting the school. And I’m not sure that’s going to be my place or our place,
0:54:04 but there’ll be a number of those conversations that are much deeper than that question that I
0:54:12 think we’ll need to answer. Um, as it relates to regulation, uh, there’s two, uh, not conflicting
0:54:16 forces, but two forces that exist somewhat independently, but relate to each other. One
0:54:22 is the pace of progress in AI and ensuring that, you know, uh, the, the folks working on frontier
0:54:29 models are ensuring those models do benefit humanity. And, uh, and then there’s the, uh,
0:54:36 sort of geopolitical landscape, which is, you know, do you want, uh, AGI to be created by the free world,
0:54:43 uh, sort of, uh, the West, um, by democracies, um, or do you want it to be created by more
0:54:50 totalitarian governments? And so I think the inherent tension for regulators will be, um,
0:54:57 a sense of obligation to ensure that, you know, the technology organizations creating AGI
0:55:04 are in fact focusing enough on benefiting humanity and all the other stakeholders
0:55:11 whose interests they're accountable for, and ensuring that the West remains
0:55:16 competitive. Um, and I think that's a really nuanced thing. And I think, you know,
0:55:23 my view is it's very important that the West leads in AI, and I'm very proud of the fact that,
0:55:28 um, you know, OpenAI is based here in the United States and we're investing a lot in the United
0:55:32 States. And I think that's very important. And also, you know, having sort of seen the inside of it,
0:55:37 I think we’re really focused on benefiting humanity. So I, I tend to think that, you know,
0:55:41 it needs to be a multi-stakeholder dialogue, but I think there’s a really big risk that some
0:55:47 regulations could have the unintended consequence of, of, uh, slowing down this larger conversation.
0:55:51 But I don't say that to be dismissive of it either. It's actually just an impossibly hard,
0:55:55 uh, problem. And I think you’re seeing it play out as you said, and in really different ways in
0:56:00 Canada, United States, Europe, China, elsewhere. I want to come back to compute and the dollars
0:56:07 involved. So, I mean, on one hand, I could start an AI company today by,
0:56:12 you know, putting my credit card down and using AWS and leveraging their infrastructure,
0:56:17 which they've built, they've spent hundreds of billions of dollars on, and I get to use
0:56:24 it on a time-based model. On the other hand, you have people like OpenAI, uh, and Microsoft investing
0:56:32 tons of money into it that may be more proprietary. Um, how do you think about the different models
0:56:38 competing? And then the one that really throws me for a bit of a loop is Facebook. So Facebook has
0:56:47 spent, Meta, I'm like aging myself here. So Meta comes along and, you know, possibly for the good
0:56:53 of humanity, but like I tend to think Zuck is like incredibly smart. So I don't think
0:56:59 he's spending, you know, a hundred billion dollars to develop a free model and give it away to society.
0:57:05 How do you think about that in terms of return on capital and return on investment?
0:57:11 It’s a really complicated business to be in just given the capex required to build
0:57:15 a frontier model. But let me just start with a couple definitions of terms that I think are useful.
0:57:22 Um, I think most large language models I would call foundation models. And I like the word foundation
0:57:28 because I think it will be foundational to most intelligent systems going forward. And
0:57:37 most people building modern models, uh, particularly if they involve language image or, or audio shouldn’t
0:57:42 start from building a model from scratch. They should pick a foundation model, either use it off the shelf
0:57:47 or fine tune it. Um, and so it’s truly foundational in many ways. Uh, in the same way,
0:57:52 most people don’t build their own servers anymore. They lease them from one of the cloud infrastructure
0:57:58 providers. I think foundation models will be something trained by companies that have a lot
0:58:06 of capex and leased by a broad range of customers who have a broad range of use cases. Um, and I think
0:58:12 that leads, in the same way that for data center builders, having a lot of data centers enabled you to have
0:58:17 the capital scale to build more data centers, I think the same will largely be true of, uh, you know, building
0:58:23 the huge clusters to do training and things like that. Foundation models, I think, are somewhat distinct
0:58:30 from frontier models. And frontier models, I think it's a term credited to Reid Hoffman, but, uh, I may be mistaken
0:58:35 on that. That's where I heard it from. And these are the models that are usually like the one or two that are
0:58:43 clearly the leading edge. o3, as an example from OpenAI. And these frontier models are being built by labs
0:58:52 who are trying to build AGI that benefits humanity. And I think if you're deciding whether you're building a
0:59:00 foundation model and, uh, what your business model is around it, it's a very different business than "I'm going to go pursue AGI."
0:59:07 Uh, because if you’re pursuing AGI, really, there’s only one answer, which is to build and train and move to
0:59:13 the next front, you know, horizon, because if you can truly build something that is AGI, the economic value
0:59:20 is so great. Uh, I think there’s a really clear business case there. If you’re pre-training a foundation
0:59:28 model, that’s the fourth best, uh, that’s going to cost you a lot of money. And the return on that
0:59:36 investment is probably fairly questionable, because why use the fourth best large language model versus a frontier
0:59:43 model or an open source one, uh, from Meta? And as a consequence of that, I think we probably have too many people
0:59:49 building models right now. There’s already been some consolidation actually of companies being folded into Amazon and
0:59:56 Microsoft and others. But I do think it will play out a bit like the cloud infrastructure business where a very small number of
1:00:04 companies with very large CapEx budgets, uh, are responsible for both building and operating these data centers.
1:00:10 And then developers and consumers will use things like ChatGPT as a consumer, or as a developer,
1:00:16 you’ll license, uh, and rent, you know, one of these models, uh, in the cloud. Um,
1:00:21 how it will play out is a really great question. You know, I think the, uh, I heard one investor
1:00:28 talk about these as like the fastest depreciating assets of all time. Um, uh, on the other hand,
1:00:34 you know, I, uh, if you look at the revenue scale of something like an OpenAI and, and what I've read
1:00:41 about places like Anthropic, let alone Microsoft and Amazon, it’s pretty incredible as well. And so you
1:00:46 can’t really, if you’re one of those firms, you can’t afford to sit on the sidelines as, as the
1:00:51 world transforms. But I, I would have a hard time personally like funding, uh, a startup that says
1:00:56 I’m going to do pre-training. You know, it’s, it’s, uh, it’s, I don’t really know like what’s your,
1:01:00 what’s your, um, differentiation in this marketplace. And I think a lot of those companies,
1:01:05 you’re already seeing them consolidate because they have the cost structure of a pharmaceutical
1:01:09 company, but not the business model. But this is just it though, right? Like OpenAI has a revenue
1:01:16 model around AI. Microsoft has a revenue model around their AI investments. They
1:01:24 just updated the price of Teams with Copilot. You know, uh, Amazon has a revenue model around
1:01:29 AI in a sense to getting other people to pay for it through AWS. And then they’re getting the advantages
1:01:35 of it, uh, at Amazon too, from a consumer point of view and all the millions of projects. Bezos
1:01:40 was doing an interview last week. He said every project at Amazon basically has an AI component
1:01:46 to it now. Facebook, on the other hand, has spent all of this money already, and with, you know, an endless
1:01:54 amount presumably in sight, or like, not in sight, an endless amount to go, but they don't have a revenue
1:01:59 model specifically around AI, where it would have been cheaper, obviously, for them to use a different
1:02:05 model, but that would have required presumably giving data away or like, you know, I’m just trying to work
1:02:12 through it from Zuck's point of view. You know, I actually will take Mark at his word, and you know,
1:02:18 that post he wrote about open source, I think, was very well written, and I encourage people to read it.
1:02:23 I think that's a strategy. And you know, if you look at Facebook, um, now you've got me saying
1:02:28 Facebook too. So that was, that was what it was called. You know, the company has always really
1:02:34 embraced open source. And if I look at really popular things, from React to, uh, you know, now
1:02:40 the Llama models, it's always been a big part of their strategy to court developers around sort of their
1:02:46 ecosystem. And Mark articulated some of the strategy there, and I'm sure there's elements of commoditizing
1:02:52 your complement. But I also think that, you know, if you can attract developers towards models, there's
1:02:58 a strength. Um, I, you know, I’m not really on the inside there, so I don’t really have a perspective
1:03:04 on it other than I actually think it’s really great that there’s different players with different
1:03:10 incentives, all investing so much. And I think it is really furthering the cause of like bringing these
1:03:18 amazing tools to society. And, um, but a lot changes. I mean, if you look at the price of GPT-4o
1:03:24 mini, you know, it is, uh, so much higher quality than, like, the highest quality model two years
1:03:30 ago, and much cheaper. Um, I haven't done the math on it, but it's probably cheaper to use that than
1:03:37 to self-host any of the open source models. So even the existence of the open source models,
1:03:43 it's not free. I mean, inference costs money. And so there's a lot of complexity here. And
1:03:48 actually, even being relatively close to this stuff, like, I have no idea where things
1:03:53 are going. But, you know, um, you could talk to a smart engineer and they'll tell you,
1:03:59 oh yeah, if you built your own servers, you'll spend less than renting them from, say, Amazon Web
1:04:04 Services or Azure. That's sort of true in absolute terms, but misses the fact, like, do you want someone
1:04:09 on your team building servers? Oh, and in fact, if you change the way your service works and you
1:04:14 need a different SKU, like you all of a sudden are doing training and you need Nvidia, uh, you know,
1:04:21 H100s, now all of a sudden your built servers are this, you know, asset that's worthless. So
1:04:26 I think with a lot of these models, you know, the presence of open source is incredibly important,
1:04:32 and, uh, I really appreciate it. I also think, like, the economics of AI are pretty complex,
1:04:39 because the hardware is very unique. The cost to serve is much higher. Um, techniques like
1:04:45 distillation have really changed the economics of models, whether it's open source or
1:04:52 hosted and leased.
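To make that build-versus-rent intuition concrete, here is a minimal back-of-envelope sketch in Python. Every number in it, the token price, the GPU rate, and the workload, is a hypothetical placeholder rather than a quote from any provider; the only point is that hosted APIs scale with usage while a self-hosted GPU is a fixed cost whether or not it is busy.

```python
# Back-of-envelope: hosted API vs. self-hosted open-source inference.
# All prices and the workload size below are hypothetical placeholders.

def hosted_cost(tokens_per_month: float, price_per_million_tokens: float) -> float:
    """Pay-as-you-go API: you pay only for the tokens you actually use."""
    return tokens_per_month / 1_000_000 * price_per_million_tokens

def self_hosted_cost(gpu_hourly_rate: float, hours_per_month: float = 730) -> float:
    """Self-hosting: the GPU costs money around the clock, busy or idle."""
    return gpu_hourly_rate * hours_per_month

api = hosted_cost(50_000_000, price_per_million_tokens=0.60)  # placeholder price
gpu = self_hosted_cost(gpu_hourly_rate=2.50)                  # placeholder rate

print(f"hosted API:  ${api:,.0f}/month")
print(f"self-hosted: ${gpu:,.0f}/month, fixed regardless of utilization")
```

At low or bursty utilization the hosted API tends to win; at sustained high utilization self-hosting can win. The crossover depends entirely on the placeholder numbers above, plus the ops burden the sketch ignores.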
1:04:57 So I think, broadly speaking, for developers it's kind of an amazing time right now, because you have, like, a menu of options that's incredibly
1:05:02 wide. And I actually think of it as, you know, just like in cloud computing, you'll end up with
1:05:06 a price-performance-quality trade-off. And for any given engineering team, they'll have a different
1:05:13 answer, and that's appropriate. And some people use, uh, open source Kafka. Some people work with
1:05:19 Confluent. Um, great. You know, like that's just the way these things work, you know, and, um,
1:05:22 so you don’t think AGI is going to be like a winner take all. You think there’s going to be multiple
1:05:30 options that have, by definition, whatever the definition is of AGI. Well, first I think OpenAI,
1:05:36 I believe, will play a huge part in it, because, uh, there's both the technology, which I think OpenAI
1:05:44 continues to lead on. Um, but also ChatGPT, which has become synonymous with AI for most consumers.
1:05:50 But more than that, um, it is the way most people access AI, um, today. And so one of the interesting
1:05:57 things like what is AGI, we talked about, you know, opinions on what the definition might be. But the
1:06:02 other question is like, how do you use that? Like, what do you, what is, uh, what is the packaging? Um,
1:06:09 and some of, uh, intelligence will be simply the outcomes of it, like a discovery of a new drug,
1:06:15 which would be, you know, remarkable and hopefully we can cure some illnesses. Uh, but others will be
1:06:20 just how you as an individual access it. And, you know, most of the people I know, like if they're
1:06:26 signing an apartment lease, they'll put it into ChatGPT to get a legal opinion. Uh, if you get, you know,
1:06:34 lab results from your doctor, you can get a second opinion on ChatGPT. Um, Clay and I use, uh,
1:06:42 o1 pro mode for, like, criticizing our strategy at Sierra all the time. And so for me, what's so
1:06:48 remarkable about ChatGPT, which was this, you know, quirkily named research preview that has come to be
1:06:56 synonymous with AI, is I do think that it will be the delivery mechanism for AGI when it's produced, and, uh,
1:07:02 not just because of the many researchers at OpenAI, but because of the amazing, like, utility it's become
1:07:06 for individuals. And I think that's really neat, because I don't know if it would have been obvious
1:07:12 if we were having this conversation three years ago, um, you know, and you were talking about
1:07:17 artificial general intelligence. I'm not sure either of us would have envisioned something so simple as
1:07:24 a form factor to absorb it, that you just talk to it. Um, so I think it's great. And especially as I
1:07:30 think about the mission of OpenAI, which is to ensure that AGI benefits humanity, what a simple,
1:07:37 accessible form factor, there's free tiers of it. Like, what a kick-ass way to benefit humanity. So I
1:07:42 really think that will be central to what we as a society come to define as AGI.
1:07:50 You mentioned using it at Sierra to critique your, your business strategy. What do you know about prompting
1:07:54 that other people miss? I mean, you must have the best prompts.
1:07:56 People think that, you know, cause I’m affiliated with it.
1:07:59 You’re not going like, here’s my strategy. What do you think? What are you putting in there?
1:08:07 Um, often with the, uh, reasoning models, which are slower, I'll use a faster model
1:08:17 first, GPT-4o, to refine my prompts. Um, so, uh, over the holidays, um, partly because I was thinking
1:08:21 about the future of software engineering, I've written a lot of compilers in my time. I've
1:08:27 written enough that, you know, it's easy for me. So I decided to see
1:08:36 if I could have o1 pro mode, um, generate, end to end, a compiler front end: parsing the grammar,
1:08:42 um, checking for semantic correctness, generating an intermediate representation, and then using,
1:08:49 uh, LLVM, which is sort of a compiler collection, um, that's very popular, to actually, you know,
1:08:56 run it all. And I would spend a lot of time iterating on 4o to sort of, like,
1:09:02 refine and make more complete and specific what I was looking for. And then I would put it
1:09:07 into o1 pro mode, go get my coffee, and, you know, come back and get it. I'm not sure if that's
1:09:11 a viable technique, but it's really interesting, because I do think, in the spirit of AI being the
1:09:20 solution to more problems than AI creates, um, having a lower latency, simpler model help refine,
1:09:24 essentially. I like to think of it as, like, you're like a product manager and you're asking,
1:09:29 you know, an engineer what to do: is your product requirements document complete and specific
1:09:35 enough? And, uh, waiting for it is sometimes slow, and so I like doing it in stages like
1:09:38 that. So that's my tip. At some point, there's probably someone from OpenAI listening who's gonna,
1:09:43 like, roll their eyes, but that's just, uh, who can I talk to at OpenAI? That's, like, the prompt.
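For readers who want to try the staged approach described here, this is a minimal sketch using the OpenAI Python SDK. The system prompt, the compiler task, and the model names are illustrative assumptions, not the actual prompts used:

```python
# Staged prompting: a fast model refines a rough prompt into a complete spec,
# then a slower reasoning model does the heavy lifting on the refined spec.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

rough_prompt = "Write a compiler front end for a small language, targeting LLVM IR."

# Stage 1: the "product manager" pass on a low-latency model.
refined = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Rewrite the user's request as a complete, specific product "
                    "requirements document. State every assumption explicitly."},
        {"role": "user", "content": rough_prompt},
    ],
).choices[0].message.content

# Stage 2: hand the refined spec to a reasoning model and go get coffee.
answer = client.chat.completions.create(
    model="o1",  # illustrative; any slower reasoning model
    messages=[{"role": "user", "content": refined}],
)
print(answer.choices[0].message.content)
```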
1:09:50 Yeah. I'm, like, so curious about this, because I've actually taken recently to, uh, getting
1:09:56 OpenAI, or ChatGPT, I guess, if you want to call it that, I've been getting ChatGPT to write the prompt
1:10:03 for me. So I'll prompt it with: I'm prompting an AI. Yeah. Here are the key things, similar to my
1:10:08 technique, I want to accomplish. What would an excellent prompt look like? And then I'll copy
1:10:15 paste the prompt that it gives me back into the system. Uh, but I'm like, I wonder what I'm missing
1:10:18 here. Right. It's a good technique. I mean, there's lots of prompt engineering techniques like that. Like,
1:10:25 self-reflection is a technique where you have a model observe and critique, you know, a decision
1:10:30 like a chain of thought. Uh, so in general, you know, that mechanism of self-reflection is I think
1:10:39 a really effective technique. You know, at Sierra, we help companies build customer facing AI agents. So, um,
1:10:44 if you’re setting up a Sonos speaker, you’ll now chat with an AI. If you’re a SiriusXM subscriber,
1:10:50 you can chat with Harmony, who's their AI, to manage your account. Um, we use all these tricks,
1:10:56 you know, self-reflection to detect things like hallucination or decision-making, generating
1:11:02 chains of thought for more complex tasks, to ensure that, you know, you're putting as much, uh,
1:11:10 compute and cognitive load into the important tasks. So, uh, you know, there's a whole
1:11:15 industry around sort of figuring out how you extract, like, robustness and, um,
1:11:18 precision out of these models. So it's really fun, but changing rapidly.
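As one concrete illustration of the self-reflection pattern mentioned above, here is a rough sketch, again with the OpenAI Python SDK. The prompts, the model choice, and the escalation fallback are invented for the example and are not Sierra's actual implementation:

```python
# Self-reflection: a second model pass critiques a draft answer for
# unsupported claims before anything reaches the user.
from openai import OpenAI

client = OpenAI()

def answer_with_reflection(question: str, context: str) -> str:
    # Pass 1: draft an answer grounded only in the provided context.
    draft = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using ONLY the given context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    ).choices[0].message.content

    # Pass 2: ask the model to observe and critique the draft.
    critique = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "List any claim in the answer that the context does not "
                        "support. Reply with exactly 'OK' if there are none."},
            {"role": "user", "content": f"Context:\n{context}\n\nAnswer:\n{draft}"},
        ],
    ).choices[0].message.content

    # Escalate rather than risk a hallucinated answer.
    if critique.strip() != "OK":
        return "I'm not confident about that; let me connect you with a person."
    return draft
```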
1:11:25 Hypothetical question. You, you’ve been hired to lead or advise a country, uh, that wants to become
1:11:33 an AI superpower. What sort of, um, steps would you take? What sort of policies would you think
1:11:37 would help create that? How would you bring investment from all over the world into that country?
1:11:41 And researchers, right? Like, so now all of a sudden you’re competing. It’s not the United States.
1:11:46 Like, how do you, how do you sort of set up a country like from first principles all the way
1:11:48 back to like, what does that look like? What are the key variables?
1:11:54 Well, I mean, especially, uh, this is definitely outside of my domain of expertise, but I would say
1:12:04 one of the key ingredients to modern AI is compute, um, which is a noun that wasn't a noun until recently,
1:12:11 but now compute is a noun. And, uh, you know, I do think that's one area where policymakers can help,
1:12:19 um, because it involves a lot of things that touch federal and local governments, like power and
1:12:28 land. Um, and then similarly attracting the capital, which is immense, to finance the real estate,
1:12:35 to purchase the, you know, compute itself, um, and then to sort of operate the data center. And again,
1:12:41 there's really immense power requirements for these data centers as well. Um, and then, you know,
1:12:47 it’s attracting sort of the right researchers and research labs to, you know, leverage that. But in
1:12:52 general, where there is compute, the research labs will find you, you know, and so I think that’s it.
1:12:56 And then there’s a lot of national security implications too, just because, you know,
1:13:01 these models are very sensitive, at least the frontier models are. And so, um, you know,
1:13:08 how you, your place in the geopolitical landscape is quite important. Like will research labs and,
1:13:13 uh, will the U.S. government be comfortable with training happening there, and export restrictions
1:13:20 and things like that. But I think a lot of it comes down to infrastructure, uh, as it relates to policy,
1:13:28 is my intuition. Uh, you know, I think right now so much of AI is constrained on, on infrastructure that,
1:13:35 that is the input to a lot of, uh, of this stuff. Um, uh, and then there’s a lot around,
1:13:39 you know, attracting talent and all that. But as I said, you know, you look at the research labs,
1:13:44 it’s not that many people, actually, it’s a lot, but the compute is a limited resource right now.
1:13:49 That’s a really good way to think about it. I think about this from the lens of Canada,
1:13:56 right? Which is like, we don't have enough going on in AI. We tend to lose most of our great
1:14:02 people to the States, uh, who then go set up infrastructure there for whatever reason and don't
1:14:09 bring it back to Canada. And I wonder how Canada can compete better. So this is sort of the lens
1:14:15 I like look at these questions through, how do you see that the next generation of education?
1:14:20 Like if you were setting up a school today from scratch and again, hypothetical, not your domain
1:14:27 of expertise, but like using your lens on AI, how do you think about this? So like what skills will kids
1:14:32 need in the future and what skills do we probably don’t need to teach them anymore that we have been
1:14:39 teaching them? Well, I’ll start with, uh, the benefits that I think are probably obvious, but I’m incredibly
1:14:48 excited about. I think education can become much more personalized. Oh, totally. Have you seen Synthesis
1:14:53 Tutor, by the way? No, I have not. Oh, so Synthesis, this AI company, developed this
1:15:00 tutor, which actually teaches kids. And it's so good that El Salvador, the country, just recently adopted
1:15:06 it and replaced their teachers. And, uh, like, it'll teach you, but it teaches you specific to what you're missing.
1:15:11 So it's not like every lesson's the same. It's like, well, you're not understanding this foundational concept.
1:15:17 It's like K through five or six right now. That's amazing. And the results are, like, off the charts.
1:15:22 Well, it doesn’t surprise me. And I, I don’t actually view it as like necessarily replacing a teacher,
1:15:26 but my view is if you have a teacher with 28 kids in his or her class,
1:15:33 the likelihood that they all learn the same way or learn at the same pace is very low.
1:15:39 And, you know, I can really imagine, say, an English teacher or history teacher orchestrating
1:15:45 their learning journeys through a topic, say AP European history in the United States,
1:15:50 there’s a curriculum, they need to learn it. Um, how someone will remember something or understand
1:15:57 the significance of Martin Luther, you know, is very different. And, um, you can, you know,
1:16:03 generate an audio podcast for someone who might be an auditory learner. Um,
1:16:08 you can create cue cards for someone who needs that kind of repetition. Um,
1:16:14 you can visualize, uh, key moments in history, um, for people who just maybe want to more viscerally
1:16:19 appreciate why this was a meaningful event rather than this dry piece of history. And all of that,
1:16:23 as you said, can be personalized to the way you learn and how you learn. And I think it’s just
1:16:30 incredibly powerful. And so one of the things I think is neat about AI is it’s, uh, democratizing
1:16:35 access to a lot of things that used to be fairly exclusive. A lot of wealthy people, if their child
1:16:40 was having trouble in school, would pay for a tutor, a math tutor, science tutor. Um, and, you know,
1:16:46 if you look at, uh, kids who are trying to get into, you know, big-name colleges, you
1:16:51 know, if you have the means, you’ll have someone prep you for the SATs or help you with your college
1:16:58 essays, all of that should be democratized if we're doing our jobs well. And it means that we're not
1:17:05 limiting people's opportunity by their means. And I think that's just the most, uh,
1:17:09 American thing ever. Uh, Canadian as well. Um, it's the most incredible thing.
1:17:14 It's the most incredible thing, humanity. And so I just think education will change for
1:17:20 the positive in so many ways. Um, because, uh, actually, with my kids, walking around, when they
1:17:24 ask, uh, you know, if you have little kids, they ask why, why, why? And, you know, at some point
1:17:28 a parent just starts making up the answer or being dismissive. And, like, we have ChatGPT out, and it's
1:17:34 like the best when you're traveling: put on advanced voice mode and be like, ask. A hundred percent.
1:17:39 And I'm listening, you know; it's like you live through your children's curiosity.
1:17:44 And, um, you know, my daughter went to high school and came home with Shakespeare for the first time.
1:17:49 And she asked me a question, and I felt this, like, total inadequacy. I was like,
1:17:54 I was very bad at this the first time. And then we put it into ChatGPT, and it was the most thoughtful
1:17:59 answer. And she could ask follow-up questions. And I actually was, you know, with her because I was
1:18:02 like, Oh, I forgot about that. You know, didn’t even think about that. So I, I just think it’s
1:18:09 incredible. And I would like to see, uh, in public school systems, I think
1:18:16 it would be really great when public school systems formally adopt these things, so that they lean
1:18:25 into, uh, tools like ChatGPT as mechanisms to, like, raise, uh, the performance level of their
1:18:30 classroom. And hopefully you'll see it in things like test scores and other things, because,
1:18:35 uh, kids can get the extra time, even if the school system can't afford it for everyone.
1:18:40 Uh, and then most importantly, kids are getting explanations according to their style of learning,
1:18:44 which I think will be, um, quite, uh, important as well. As it relates to skills,
1:18:50 it’s really hard to predict right now. And I, I would say that I do think learning how to learn and
1:18:54 learning how to think will continue to be important. So I think most of, you know,
1:19:00 primary and secondary education shouldn’t and is not vocational necessarily. Um, some of it is,
1:19:06 uh, you know, I took auto shop and all of that and I’m glad I did, but I couldn’t fix my electric
1:19:10 car today with that knowledge, you know, things change and I don’t think it needs to be purely,
1:19:16 you know, um, non-vocational. But you know, the basics of learning how to think, uh, learning
1:19:25 writing, reading, math, physics, chemistry, biology, not because you need to memorize
1:19:32 it, but to understand the mechanisms that create the world that we live in, is quite
1:19:42 important. Um, I do think that the, there’s a risk of people sort of, uh, becoming ossified in the tools
1:19:49 that they use. Um, so, you know, uh, let’s go back to our discussion of software engineering for a
1:19:54 second, but I'll give other examples. You know, if you define your role as a software engineer as how
1:20:01 quickly you type into your IDE, the next few years might leave you behind, you know, because that, um,
1:20:06 is no longer a differentiated, you know, part of the software engineering experience, or will not
1:20:13 be. But your judgment as a software engineer will continue to be, uh, incredibly important, and your
1:20:21 agency in making decisions about what to build, how to build it, um, how to architect it, uh, maybe
1:20:27 using AI models as a creative foil. And so I think that, uh, just in the same way, if you’re an accountant,
1:20:32 you know, using Excel doesn’t make you less of an accountant, uh, and, um, and just because you
1:20:38 didn’t, you know, handcraft that math equation, it doesn’t make the results any less valuable to your
1:20:44 clients. Um, and so I think we’re going to go through this transformation where I think the, um,
1:20:50 the tools that we use to create value in the world will change dramatically. And I think some people who
1:20:57 define their jobs by their ability to use the last generation’s tools really, really effectively,
1:21:06 um, will be disrupted. But I think, if we can empower people to reskill, um, and also
1:21:10 broaden the aperture by which they define the value they’re providing to the world, I think a lot of
1:21:16 people can make the transition. The thing that is sort of uncomfortable, not really in education,
1:21:23 it's just earlier in most people's lives, is just, I think, that the pace of change, uh, exceeds
1:21:29 that of most technology transitions. And I think it's unreasonable, um, to expect most people to
1:21:34 change the way they work that quickly. And so I think the, the next five years, I think will be,
1:21:40 you know, for some jobs really disruptive and tumultuous. But if you take the longer view and you
1:21:46 fast forward 25 or 50 years, I’m incredibly optimistic. I think it’s the, the change will
1:21:53 require, um, from society, from companies and from individuals, like an open-mindedness about
1:21:58 reskilling and, and re-imagining their job to the lens of this like dramatically different new technology.
1:22:04 At what point do we get to, I mean, we’re probably on the cusp of it now and it’s happening in pockets,
1:22:10 but what point do we start solving problems that humans haven’t been able to solve or eliminating
1:22:15 paths that we're on, maybe with medical research, where it's like, no, this whole
1:22:22 thing you've spent $30 billion on, you know, is based on this 1972 study that was fabricated. But that one
1:22:27 study had all these derivative studies, and, like, I'm telling you it's false, you know, because I can look
1:22:32 at it through an objective lens, and get rid of these $30 billion. You're smiling.
1:22:40 Oh no, I just, I hope soon. I mean, I hope, uh,
1:22:44 one of the models, I can't remember which one, introduced a very long context window. And there's a lot of
1:22:50 people on X over the weekend putting in their, uh, you know, grad school theses in
1:22:55 there. And it was actually critiquing them with, like, surprising levels of fidelity. Uh,
1:23:02 and, uh, I think we're sort of there, perhaps, with the right, um, right tools, but certainly over
1:23:06 the next few years. You know, we talked about what it means to generalize AI.
1:23:15 Certainly in the areas of, um, science that are, you know, largely represented through text and
1:23:20 digital technology, like math being probably the most applicable, there's not really
1:23:24 anything keeping AI from getting really good at math. There’s not really an interface to the real
1:23:30 world. You don’t need to do a clinical trial to verify something’s correct. So I feel a ton of optimism
1:23:36 there. Um, it’ll be really interesting in like, you know, areas of like theoretical physics. Um,
1:23:40 uh, you’ll tend to, you’ll continue to have the divide between the applied and the theoretical
1:23:45 people. But I think there could be like really interesting new ideas there and perhaps some,
1:23:50 uh, finding logical inconsistencies with some of the, you know, uh, fashionable theories,
1:23:56 which has happened many times over the past few decades. Um, I think, I think we’ll get there soon.
1:24:01 And actually, um, what's really neat about it is, most of the scientists I know, people
1:24:05 who are actually, like, doing science, they're the most excited about these technologies, and
1:24:09 they're using them already. And I think that's really neat. And, you know,
1:24:15 I really hope we see more breakthroughs in science. One of the things I am not an expert in, but I've
1:24:25 read a lot about as an amateur, is just the slowdown in scientific breakthroughs over
1:24:29 the past, you know, a few decades and, and some theories that it’s because of the degree of
1:24:34 specialization that we demand of grad students and things like that. And I hope with, you know,
1:24:43 in general with AI, um, democratizing access to expertise, um, I, I have a completely personal
1:24:50 theory that it will benefit deep generalists in a lot of ways too, because your ability to understand
1:24:57 a fair amount and a lot of domains and leveraging AI, um, knowing where to prompt the AI to,
1:25:04 to go explore and, and, um, bringing together those domains, it will start to shift sort of the
1:25:10 intellectual power from people who are extremely deep to people who actually can, uh, orchestrate,
1:25:15 uh, intelligence between lots of different domains for breakthroughs. I think that’ll be really good
1:25:20 for society, because most scientific breakthroughs aren't isolated; they tend to be, you know, cross-pollinations of
1:25:24 very important ideas from a lot of different domains, which I think will be really exciting.
1:25:26 How important is the context window?
1:25:33 I think it could be quite important. Um, especially it certainly simplifies working with an AI. If
1:25:39 you can just give it everything and instruct it to do something. Um, and so, assuming it
1:25:46 works. You know, you can extend a context window, but the attention can be spread
1:25:52 fairly thin and the robustness of the answer can be questionable. But assuming,
1:25:58 let's just say for argument's sake, you know, perfect robustness, um, I think it can really simplify the
1:26:05 interface, uh, to AI. Though not all uses; we were talking about open source models and
1:26:13 APIs. Um, I also think that, you know, what I'm excited about in the software industry is
1:26:19 not necessarily a large language model with a prompt and a response being the product of AI,
1:26:25 but actually end-to-end closed-loop systems that use large language models as pieces of infrastructure.
1:26:30 And I actually think that a lot of the value in software will be that. And for many of those
1:26:35 applications, the context window size can matter, but often because you have contextual awareness of
1:26:40 the process that you’re executing, um, yeah, context window is a little bit less important.
1:26:45 So I think it matters a lot to intelligence. Um, you know, I can't remember who,
1:26:49 but some researcher said, you know, you put all of human knowledge in the context window
1:26:54 and you ask it to invent the next thing. It's obviously a reductive thought,
1:26:59 but interesting. Um, but I'm actually equally excited about sort of the industrial
1:27:02 applications of large language models, sort of like my company, Sierra. And if you're,
1:27:08 um, returning a pair of shoes at a retailer, it's a process that's fairly complicated:
1:27:15 uh, you know, is it within the return window? Do you want to return it in store?
1:27:19 Do you want to send it? Do you want to print a QR code? Blah, blah, blah. Um,
1:27:23 the orchestration of that is as significant as the models themselves. And I actually think,
1:27:28 just like, uh, computers, you know, there's going to be a lot of things where computers are a
1:27:32 part of the experience, but it's not, like, manifesting itself as a computer. So
1:27:36 I'm actually equally excited about those. And I think the context window is slightly less important
1:27:41 for those applications.
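A minimal sketch of that closed-loop idea in Python: the deterministic return-window rule lives in ordinary code, and the language model is consulted only to map the customer's free-form text onto a fixed menu of options (stubbed out here with a keyword match so the example runs). The names and rules are invented for illustration, not Sierra's actual design.

```python
# "LLM as infrastructure": process logic in code, language handling delegated.
from dataclasses import dataclass
from datetime import date, timedelta

RETURN_WINDOW = timedelta(days=30)

@dataclass
class Order:
    order_id: str
    purchase_date: date

def classify_intent(message: str) -> str:
    # Stand-in for a constrained LLM call that must pick one of three options.
    text = message.lower()
    if "store" in text:
        return "in_store"
    if "mail" in text or "ship" in text or "send" in text:
        return "mail"
    return "qr_code"

def handle_return(order: Order, user_message: str) -> str:
    # Deterministic business rule first: is the return even allowed?
    if date.today() - order.purchase_date > RETURN_WINDOW:
        return "Sorry, this order is outside the 30-day return window."
    choice = classify_intent(user_message)
    if choice == "in_store":
        return "Bring the shoes to any store; no label needed."
    if choice == "mail":
        return "We've emailed you a prepaid shipping label."
    return "Here's a QR code to print for your drop-off point."
```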
1:27:46 Do you think that the output from AI should be copyrightable or patentable? Let me just take an example. If I go to the U.S. Patent Office,
1:27:53 I download a patent for, let's say, the AeroPress, and I upload it to, uh, o1 pro. Well,
1:27:58 I can't upload it there yet, because you don't let me do PDFs, so I upload it to 4o. And, uh,
1:28:05 I say, hey, what's the next logical leap that I could patent? It would give me back diagrams
1:28:10 and an output. And presumably, if I look at that and I'm like, yeah, that's legit, I want to file
1:28:14 that patent. Can I? I don't know the answer to that question. I'm not an expert in intellectual
1:28:21 property, but I, uh, I think there will be an interesting question of: was that your idea,
1:28:27 because you used a tool to do it? I think the answer is probably yes, that you used the tool
1:28:37 to do it. But I also think, in general, the sort of marginal cost of intelligence
1:28:43 will go down a lot. So, you know, I think we'll be in this
1:28:50 renaissance of new ideas and intelligence being produced. And so, uh, I think that's broadly a good
1:28:55 thing. And I think, you know, the marginal value of that insight that you had might be lower than it
1:29:00 was, you know, years ago. What I was hoping you would say is that, you know, that's going to become
1:29:05 less and less important because I feel like all the patent trolls and all of this stuff that slows down
1:29:10 innovation in some ways, uh, obviously like there’s legitimate patents that people infringe on and
1:29:15 there should be legal recourse. But if I could just go and patent like a hundred things a day,
1:29:19 it seems like that should not be allowed. This is what I'm saying though.
1:29:24 Well, in general, I think that, you know, I think patents make sense
1:29:29 if it's protecting something that's in active use, that you, you know, invented and you're trying to
1:29:35 protect, uh, you know, like the standard legal rationale for patents. Just generating a
1:29:40 bunch of ideas and patenting them seems destructive to the value of it. So here's the idea I had last
1:29:45 night to counter this because I was like, I don’t want somebody doing this. Uh, and I was thinking
1:29:50 like, what if prior art eliminates patents? Yeah. So I was like, what if I just set off like an
1:29:55 instance and just publish it on a website? Nobody has to read that website. Here’s a billion ideas.
1:30:00 Exactly. But it’s like basically patenting like anything, not patenting,
1:30:05 but it’s creating prior art for everything. So like, you can’t compete on that anymore.
1:30:10 I don’t know. I was like thinking about that. I thought it was fun. Um, tell me about the
1:30:16 Google maps story. This is like now legend and I want to hear it from you. Uh, this is my weekend
1:30:22 coding. Is that what you want to hear about? Yeah. Um, yeah. So, uh, I’ll start with just
1:30:28 like the story of Google Maps, the abbreviated version. Uh, we had launched a product at Google
1:30:33 called Google Local, which was sort of a yellow pages, uh, search engine. Uh, probably
1:30:37 most listeners don't even know what yellow pages are, but it was a thing back then.
1:30:44 And, um, we had licensed maps from MapQuest, which was the dominant sort of mapping provider at the
1:30:49 time. And it was sort of an eyesore on the experience, and it also always felt like it could
1:30:54 be a more meaningful part of the kind of local search and navigation experience on Google. So
1:30:58 Larry Page in particular was really pushing us to invest more in maps.
1:31:05 Um, we found this, uh, small company with, like, four people in it, if I'm remembering correctly,
1:31:10 started by Lars and Jens Rasmussen, called Where 2 Technologies, where, um, they had made a Windows
1:31:16 application called Expedition. Um, that was just a beautiful, uh, mapping product. Um,
1:31:20 it was running on Windows long after it was sort of out of fashion to make Windows apps, but that
1:31:25 was sort of the technology they were comfortable with. And really, um,
1:31:32 their maps, modeled on the A to Z maps in the UK, were just beautiful. And they just had a lot
1:31:39 of passion for mapping. So we did a little acqui-hire of them and put together the Google Local team and
1:31:44 Lars and Jens's team, and said, okay, like, let's take the good ideas from this Windows app and
1:31:49 the good ideas from Google Local, and let's bring them together to make something completely
1:31:54 new. And that's what became Google Maps. But there were a couple of idiosyncrasies
1:32:00 in the integration because it was a, um, Windows app. It really helped and hurt us in a number of
1:32:05 ways. Like, one of the ways it helped us: the reason why, in Google Maps, you were able to drag the map,
1:32:11 and it, like, uh, was so much more interactive than any web application that preceded it, was that
1:32:17 the standard that we needed to hit for interactivity was set by a native Windows app, not set by the
1:32:25 legacy, uh, you know, websites that we had used at the time. And I think that by having the goalposts
1:32:30 so far down the field, because they had just started with this Windows app, which was sort of a quirk of
1:32:36 Lars and Jens, just like technical choices, we made much bolder technical bets than we would
1:32:41 have otherwise. I think we would have ended up much less interactive had we, uh, not started with
1:32:47 that quirky technical sort of a decision. But the other thing was, this was a Windows app. There's a lot of,
1:32:52 like, it's hard to describe, like, the early two thousands if you didn't live it, but, like, XML was
1:32:58 like really in fashion. So, like, most things in Windows and other places used XML and XSLT,
1:33:03 which was a way of transforming XML into different XML, and it was the basis of everything. It was like all
1:33:09 of enterprise software was like XML this, XML that. So similarly, when we were taking some of these ideas
1:33:14 and putting them in a web browser, we kind of like went into autopilot and used like a ton of XML
1:33:23 and it made everything just like really, really tedious. And so Google Maps launched with some
1:33:27 really great ideas like the draggable maps. And, and we did a bunch of stuff for the local search
1:33:32 technology. So you could, you know, overlay restaurant listings. It was really great. It was a really
1:33:38 successful launch. Uh, we were like the hot shots within Google afterwards. But it really
1:33:41 started to show its cracks. And we got to this point where we decided we wanted to support the Safari
1:33:45 web browser, which was relatively new at the time. This is before, you know, mobile phones.
1:33:52 And, uh, there was much less XML support in Safari than there was in Internet Explorer and Firefox. And
1:34:00 so one of the engineers implemented like a full XSLT transform engine in JavaScript to get it to work.
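For anyone who never touched the XML era, here is a toy example of what an XSLT transform does, sketched in Python with the lxml library. The stylesheet and data are invented for illustration, and the actual Maps engine in question was written in JavaScript.

```python
# A toy XSLT transform, the early-2000s staple: XML in, different markup out.
from lxml import etree

xml = etree.fromstring("<listing><name>Cafe</name><rating>4</rating></listing>")

stylesheet = etree.fromstring("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" omit-xml-declaration="yes"/>
  <xsl:template match="listing">
    <div><xsl:value-of select="name"/> (<xsl:value-of select="rating"/> stars)</div>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
print(transform(xml))  # prints: <div>Cafe (4 stars)</div>
```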
1:34:07 And it was just like shit on top of shit on top of shit. And so what was a really elegant,
1:34:12 fast web application had sort of quickly become something, you know, there were a lot of
1:34:17 dial up modems at the time and other things. So like you’d show up to maps and it just was slow.
1:34:21 And like, it just bothered me as like someone who takes a lot of pride in their craft. And so
1:34:29 I got really, uh, energized and like over a weekend and a lot of coffee, like rewrote it.
1:34:34 Um, but, you rewrote the whole thing? Yeah, more or less the whole thing. And it took
1:34:38 probably another week of like, you know, working through the bugs, but yeah, I sent it out to the,
1:34:43 you know, the team after that weekend. And it was, it was nice. The reason I was able to do it,
1:34:50 yeah, I'm, like, a decent programmer, but you know, I'd also, like, lived with every bad decision up to
1:34:56 that point too. So I knew exactly the output I was going for. Like, I had simulated in my head,
1:35:01 like if I could do it over again, this is the way I do it. So by the time I like put my hands on the
1:35:07 keyboard on like, you know, Friday night, I, it wasn’t like I was designing a product. Like I knew
1:35:11 I had been in every detail of that product since the beginning, including, you know, the bad decisions
1:35:16 too, and they're not all my bad decisions. And so it was just very clear. I knew what I wanted to
1:35:20 accomplish. And for, you know, any engineer who's worked on a big system, you have the whole system
1:35:27 mapped out in your head. So I knew everything. And I also knew that, you know,
1:35:33 there’s a lot of pride of authorship with engineering and code. So I sort of knew I really wanted to
1:35:38 finish it over the weekend so that people could use it and see how fast it was and kind of overcome
1:35:44 anyone who was like, you know, you know, protective of the code they had written a few months ago.
1:35:50 And so I really wanted the prototype to go out. And so I did it. And then,
1:35:53 it's funny, I never talked about it again. But I think Paul Buchheit, who was a co-creator of
1:36:00 Gmail, and who worked with me and started FriendFeed with me, he was on an interview and mentioned this
1:36:03 story. So now all of a sudden, it's like, everyone's talking about it. And I was like,
1:36:07 well, thank you, Paul. I'm a little embarrassed that people know about it, but it was,
1:36:10 it's a true story. And XML is just the worst.
1:36:17 Did you get a lot of flack from the people who had built the system you effectively replaced? Like,
1:36:21 you were part of that team, but everybody else had so much invested in it, even though it was like,
1:36:23 shit on top of shit, on top of shit.
1:36:28 you know, um, I wrote a lot of it too. So yeah, I’m sure there was some around it, but actually,
1:36:33 I think good teams want to do great work. And so, uh, I think there was a lot of people constructively
1:36:41 dissatisfied with the state of things too. And, um, you know, I think, uh, the engineer who
1:36:46 had written that XSLT transform was, like, you know, a little bit... it's a lot of work, so you have
1:36:52 to throw out a lot of work, which feels bad. But particularly, you know, um, Lars and Jens and I,
1:36:56 like, we want to make great products. And so I don’t think there was a, you know, at the end of the day,
1:37:01 everyone’s like, wow, that’s great. You know, we went from a bundle size of 200 K to a bundle size
1:37:06 of 20 K and it was a lot faster and better. So, you know, broadly speaking, I think good engineering
1:37:12 cultures. You don’t want a culture of, um, you know, ready, fire, aim, but I also think you just
1:37:20 need to be really outcomes oriented. And I think, if people start to treat
1:37:26 their code as too precious, it can really, uh, impede forward progress. Um, and yeah,
1:37:33 like, my understanding is a lot of the early self-driving car software was a lot of hand-coded
1:37:38 heuristics and rules. And, you know, a lot of smart people think that eventually it’ll probably
1:37:43 be a more monolithic model that, uh, encodes many of the same rules. You have to throw out a lot of
1:37:47 code in that transition, but it doesn’t mean it’s not the right thing to do. And so I think in general,
1:37:51 um, yeah, there might’ve been some feathers ruffled, but at the end of the day, everyone’s like,
1:37:55 that’s faster and better. Like, let’s, let’s do it, you know, which is, I think the right decision.
1:37:59 That’s awesome. I’m going to give you another hypothetical. I want you to share your
1:38:05 inner monologue with me as you think through it. So if I, uh, told you, you have to put 100% of
1:38:11 your net worth into a public company today and you couldn’t, you couldn’t touch it for at least 20
1:38:15 years, what company would you invest in? And like, walk me through your thinking.
1:38:20 I literally don’t know how to answer that question. Um, how would you think about it without giving me
1:38:24 an answer? Like, what? Yeah, that’s a good question. I, first of all, I’ll give you how
1:38:29 I think about it. But, uh, having not been a public company CEO for a couple of years, I
1:38:34 blissfully don't pay attention as much, um, to the public markets. And in particular right now,
1:38:40 obviously valuations have gone up a lot. But because it's a long-term question,
1:38:45 maybe that doesn’t matter. I think what I’d be thinking about right now is, um,
1:38:53 over the next 20 years, like what are the parts of the economy that will most benefit from this
1:38:57 current wave of AI? That’s not the only way to invest over a 20 year period, but certainly it’s
1:39:03 a domain that I understand. And in particular, you know, I mentioned that talk I heard a snippet of
1:39:09 from Tyler Cowen, which is, like, AI will probably benefit different parts of the economy
1:39:13 disproportionately. There will be some parts of the economy that can essentially,
1:39:20 um, where intelligence is a limiting factor to its growth and where you can absorb
1:39:26 almost arbitrary levels of intelligence and generate almost arbitrary levels of growth.
1:39:30 Obviously there’s limits to all of this just because you change one part of the economy,
1:39:36 it impacts other parts of the economy. And that was Tyler's point, uh, in his talk.
1:39:39 But I would probably think about that because I think that
1:39:44 over a 20 year period, there are certain parts of society that won’t be able to change extremely
1:39:49 rapidly. Um, but there will be some parts that probably will, and it’ll probably be
1:39:56 domains where intelligence is, is the scarce resource, uh, right now. And then I would probably
1:40:00 try to find companies that will disproportionately benefit from it. And I assume this is why like
1:40:06 Nvidia stock is so high right now, because if you want to sort of get downstream, you know,
1:40:11 Nvidia will probably benefit from all of the investments in AI. Um, I’m not sure I would
1:40:15 do that over a 20 year period, just assuming that the infrastructure will shift. So I don’t have
1:40:19 an intelligent answer, but that’s the way I would think about it if I were, if we’re doing that exercise.
1:40:26 I love that. Where do you think, like, what's your intuition say about what areas of the economy are
1:40:33 limited by intelligence? And not just the economy. I mean, perhaps politicians, uh, might be limited by
1:40:40 this, and aided by and benefit from it, in which case countries could benefit enormously from AI and unlock
1:40:45 growth and potential in their economy. But maybe just to scope the question: what areas of
1:40:51 the economy do you think are limited by intelligence, or workers, like smart workers, in which case, like,
1:40:59 that’s another limit of intelligence. Yeah. I mean, uh, two that are, I think probably going to benefit a
1:41:07 lot are technology and finance. Um, you know, where you’re, you know, if you can make better financial
1:41:12 decisions than competitors, you’ll generate outsized returns. And that’s why over the past, you know,
1:41:18 30 years, you know, of machine learning, um, you know, hedge funds and financial services
1:41:24 institutions, everything from fraud prevention to true investment strategies, it's already been a
1:41:30 domain of investment. Um, software, similar, as we talked about: I think that, uh,
1:41:36 at some point we will no longer be supply constrained in software, but we're not anywhere
1:41:42 close to it right now. And you’re taking something that has always been the scarce resource, which is
1:41:47 software engineers and you’re making it not scarce. And I think as a consequence,
1:41:53 you just think of like, how much can that industry grow? We don’t know. Um, but we’ve been so constrained
1:41:59 on software engineering as a resource, uh, who knows over the next 20 years, but we’ll find out, uh,
1:42:04 where, where the limits are. But to me, intellectually, there’s just a ton of growth there. Um,
1:42:09 and then broadly, I think areas of like processing information are areas that will
1:42:16 really benefit, um, quite a bit here. And so that, and I think the, the thing that I would think about
1:42:19 over a 20 year period is like second and third order effects, which is why I don’t have an intelligent
1:42:22 answer. And if you’re asking me to put all my money in something, I would think about it for a
1:42:28 while, um, probably use o1 pro a little bit to help me. Um, but, uh, you know, because you can end
1:42:33 up, uh, generating a bunch of growth in the short term, but then, you know, if everyone does it,
1:42:39 it commoditizes the whole industry, you know, type of thing. So, you know,
1:42:44 before the introduction of the freezer, ice was, like, a really expensive thing, and now it's free,
1:42:49 you know? And so I think it is really important to actually think through those, if you're talking
1:42:52 in a timeframe of, like, 20 years. And that's why, having not thought about this question ahead of
1:42:57 time, I, um, might be quite simplistic here, but I would say software and finance
1:43:02 are areas that I think stand to reason should benefit quite a bit. I love that response. How do
1:43:12 you balance, uh, having a young family with also running a startup again? I work a lot. Um,
1:43:20 I really care about and love working. Um, so one thing is that, um,
1:43:27 well, there's always trade-offs in life. Um, if I didn't love working, I wouldn't do it as much
1:43:34 as I do, but I just love to create things and love to have an impact. And so I, like,
1:43:39 jump out of bed in the morning and, um, work out, go to work, and then spend time with my family,
1:43:45 broadly, probably, you know, being honest, in that order. I'm not perfect.
1:43:48 I don't have a ton of hobbies. You know, I basically work and spend time with my family.
1:43:54 Um, the first time we talked, you saw a couple of guitars in my background. Uh, I haven’t picked
1:44:00 one of those up in a while. Um, I mean, I literally pick it up occasionally, but I, you know,
1:44:05 do not devote any time to it. And I don't regret that either. Like, I am so passionate about what
1:44:10 we're building at Sierra. I'm so passionate about OpenAI. I love my family so much. I don't
1:44:15 really have any regrets about it, but I basically just like life is all about where do you spend your
1:44:20 time and mine is at work and with family. And so that’s how I do it. I don’t know if I’m particularly
1:44:26 balanced, but I don’t strive to be either. I really take a lot of pride and I love, I love to work.
1:44:32 Having sold the companies you started twice, how does that influence what you think of Sierra? Like,
1:44:37 are you thinking, like, oh, I'm building this in order to sell it? Or do you think differently? Like,
1:44:41 this is my life's work, I'm building this with the view that that's not going to happen?
1:44:47 I absolutely, uh, intend Sierra to be an enduring company and an independent company,
1:44:54 but to be honest, every entrepreneur, every company starts that way. And so, um, you know,
1:45:00 uh, I’m really grateful for both Facebook and Salesforce for having acquired my previous companies
1:45:04 and hopefully I had an impact about those companies, but you don’t start off. Well,
1:45:11 at least I never started off saying, Hey, I want to make a company to sell it. Um, uh, and, uh,
1:45:15 But I actually think with Sierra, we have just a ton of traction in the marketplace.
1:45:19 I really do think Sierra is the leader in helping consumer brands build customer-facing
1:45:25 AI agents, and I'm really proud of that. So I really see a path to that. And I joke with Clay,
1:45:30 I want to be an old man sitting on his porch, complaining how
1:45:33 the next generation of leaders at Sierra don't listen to us anymore.
1:45:37 I want this to be something that not only is enduring, but outlives me.
1:45:43 And actually, I don't think we've ever talked about this, but it was a really interesting
1:45:47 moment for me when Google went from its one building in Mountain View to its first corporate
1:45:53 campus. We moved into the Silicon Graphics campus, which was right over near Shoreline
1:46:00 Boulevard in Mountain View. SGI had been a successful enough company to build a
1:46:04 campus. And it was actually quite awkward: we moved into half the campus,
1:46:08 and they were still in the other half. We were this up-and-coming company; they were declining.
1:46:14 And then with Facebook: after we moved out of the second building we were in in Palo Alto,
1:46:19 a slightly larger building I think we leased from HP, when we finally got a campus,
1:46:24 it was from Sun Microsystems, who had gone through an Oracle acquisition and had been on the
1:46:32 decline. And it was interesting to me because both SGI and Sun had been started and grown to
1:46:36 prominence in my lifetime. I was maybe a little young,
1:46:41 but in my lifetime they grew enough to build a whole corporate campus and then declined fast enough
1:46:48 to sell that corporate campus to a new software company. And for me, it was just so interesting
1:46:54 to have done that twice, to move into a used campus from its previous
1:47:00 owners. It was a very stark reminder that technology companies aren't entitled to their
1:47:07 future success. And I think we'll see this now with AI. AI, I think, will change the landscape
1:47:14 of software from tools of productivity to agents that actually accomplish tasks. And I think it
1:47:20 will help some companies, for whom it amplifies their existing value proposition,
1:47:25 and it will really hurt others, where the seat-based model of
1:47:32 legacy software will wane very quickly and really harm them. And so when I think about
1:47:40 what it means to build a company that's enduring, that is a really, really tall task in my mind
1:47:44 right now, because it means not only making something that's financially enduring over the
1:47:52 next 10 years, but setting up a culture where a company can actually evolve to meet the changing
1:47:58 demands of society and technology when it's changing at a pace that is unprecedented
1:48:03 in history. So I think it's one of the most fun business challenges of all time. And I think it has
1:48:10 as much to do with culture as it has to do with technology, because every line of code in Sierra
1:48:15 today will probably be completely different five years from now, let alone 30 years
1:48:19 from now. And I think that's really exciting. So when I think about it, I just get
1:48:26 so much energy, because it's incredibly hard, and it's harder now than it's ever been to do
1:48:30 something that lasts beyond you. But that, I think, is the ultimate measure of a company.
1:48:34 You mentioned AI agents. How would you define that? What’s an agent?
1:48:39 I'll define it more broadly, and then I'll tell you how we think about it at Sierra, which is a
1:48:44 more narrow view of it. The word agent comes from agency. And I think it means affording
1:48:53 software the opportunity to reason and make decisions autonomously. And I think that's
1:48:59 really all it means to me. And I think there's lots of different applications of it. The three
1:49:04 categories that I think are meaningful, and I'll end with the Sierra one just so I can talk about it
1:49:10 a little more. One is personal agents. So I do think that most people will have
1:49:18 probably one, but maybe a couple, AI agents that they use on a daily basis that are essentially
1:49:25 amplifying themselves as an individual. They can do the rote things, from helping you triage your email to
1:49:32 helping you schedule a vacation (you're flying back to Edmonton, and it helps arrange
1:49:38 your travel), to more complex things like, I'm going to go ask my boss for a promotion,
1:49:44 help me role-play. Or, I'm setting up my resume for this job, help me do that
1:49:49 too. I'm applying for a new job, help me find companies I haven't thought of that I should be applying to.
1:49:55 And I think these agents will be really powerful. I think it might be a really hard
1:50:00 product to build, because when you think about all the different services and people you interact with
1:50:06 every day, it's kind of everything. So it has to generalize a lot to be useful to you. And
1:50:12 because of personal privacy and things like that, it has to work really well for you to trust it.
1:50:15 So I think it's going to take a while. I think there'll be a lot of demos. I think it'll take
1:50:22 a while to be robust. The second category of agent is, I would say, really filling a persona
1:50:32 within a company. So a coding agent, a paralegal agent, an analyst agent. I think these
1:50:37 already exist. I mentioned Cursor. There's a company called Harvey that makes a legal agent. I'm sure
1:50:44 there's a bunch in the analyst space. These do a job, and they're more narrow. But
1:50:49 they're really commercially valuable, because most companies hire people or consultants that do those
1:50:55 things already, like analyzing the contracts of the supply chain, right? That's a rote
1:51:01 kind of legal work, but it's really important, and AI can do it really well. So I think that's why
1:51:06 this is the area of the economy that I think is really exciting. And
1:51:11 I'm really excited about all the startups in this space, because you're essentially taking what
1:51:16 used to be a combination of people and software and really making something that solves a problem.
1:51:24 And by narrowing the domain of autonomy, you can have more robust guardrails, and even with
1:51:29 current models actually achieve something that's effective enough to be commercially viable today.
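
To make "narrowing the domain of autonomy" concrete, here is a minimal Python sketch of an agent loop whose action space is a small whitelist of tools, with a guardrail check before anything executes. The llm() function, the tool names, and the escalation messages are hypothetical placeholders for this illustration, not any vendor's actual API.

    # Minimal sketch of a narrow-domain agent loop. The llm() call and
    # tool names are invented stand-ins, scripted so the sketch runs.
    ALLOWED_TOOLS = {
        "lookup_contract": lambda supplier: f"(contract text for {supplier})",
        "summarize_risk": lambda text: f"(risk summary of {text[:40]}...)",
    }

    def llm(prompt: str) -> dict:
        # Stand-in for a model call; scripted here so the loop terminates.
        if "lookup_contract ->" not in prompt:
            return {"tool": "lookup_contract", "arg": "Acme Corp"}
        if "summarize_risk ->" not in prompt:
            return {"tool": "summarize_risk", "arg": prompt}
        return {"answer": "No unusual liability clauses found."}

    def guardrail_ok(action: dict) -> bool:
        # Narrow autonomy: refuse anything outside the whitelisted tools.
        return action.get("tool") in ALLOWED_TOOLS

    def run_agent(task: str, max_steps: int = 5) -> str:
        history = [f"Task: {task}"]
        for _ in range(max_steps):
            action = llm("\n".join(history))
            if "answer" in action:        # the model decided it is done
                return action["answer"]
            if not guardrail_ok(action):  # out-of-scope request: stop, don't improvise
                return "Escalating to a human: requested action is out of scope."
            result = ALLOWED_TOOLS[action["tool"]](action["arg"])
            history.append(f"{action['tool']} -> {result}")
        return "Step budget exhausted; escalating to a human."

    print(run_agent("Analyze the supply chain contracts for Acme Corp."))

The point of the sketch is the shape, not the details: the smaller ALLOWED_TOOLS is, the easier the agent is to verify, which is why narrow persona agents can be commercially viable on current models.
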
1:51:35 And by the way, it changes the total addressable market of these models too.
1:51:39 I don't know what the total addressable market of legal software was three years ago, but it
1:51:43 couldn't have been that big. I couldn't name a legal software company. I probably should be able to,
1:51:48 I just can't think of one. But if you think about the money we spend on lawyers, that's a lot. And so
1:51:56 you end up broadening the addressable market quite a lot. The domain we're
1:52:03 in, I think, is somewhat special, which is a company's branded, customer-facing agent. And
1:52:08 one could argue we're sort of helping with customer service,
1:52:14 which is a persona, a role, but I do think it's broader than that. Because if you think about
1:52:20 a website, like your insurance company's website, try to list all the things
1:52:24 you can do on it. You can look up the stock quote, you can look up the management team,
1:52:31 you can compare the insurance company to all its competitors. You can file a claim.
1:52:39 You can bundle your home and auto. You can add a member of your
1:52:44 family to your premium. There's a million things you can do on it. Essentially, over the past 30 years,
1:52:50 a company's website, singular, has come to be the universe of everything that you can do with
1:52:54 that company. I like to think of it as the digital instantiation of the company.
1:53:01 And that's what we're helping our customers do at Sierra: help them build a conversational AI that does
1:53:05 all of that. So most of our customers start with customer service, and it's a great
1:53:10 application, because no one likes to wait on hold, and having something that has perfect access
1:53:16 to information, is multilingual, and is empathetic is just amazing. But when you put a conversational
1:53:22 AI as your digital front door, people will say anything they want to it. And
1:53:28 we're now doing product discovery, considered purchases. Going back to the insurance example:
1:53:34 hey, I've got a 15-year-old daughter. I'm really concerned about the cost of her premium
1:53:39 until she grows up. Tell me which plan I should be on. Tell me why you'll be better than
1:53:43 your competitors. That's a really complex interaction, right? Can you make
1:53:47 a webpage that does that? No. But that's a great conversation.
1:53:54 And so we really aspire that when you encounter a branded agent in the wild, Sierra is
1:54:00 the platform that powers it. And it's super important, because there was a case, at least in Canada,
1:54:06 where an AI agent for Air Canada hallucinated a bereavement policy, right? And they were found liable,
1:54:12 held to what the agent said. Yeah. I mean, it turns out it was an AI
1:54:17 agent; there was no human involved in the whole thing. Well, look, it's one thing if ChatGPT
1:54:22 hallucinates something about your brand. It's another if your AI agent hallucinates something about your
1:54:29 brand. So the bar just gets higher. The robustness of these agents, the guardrails, everything is more
1:54:35 important when it's yours and it has your brand on it. And so it's harder, but I'm just so
1:54:41 excited for it. This is a little overly intellectual, but I really like the framing.
1:54:49 If you think about a modern website or mobile app, essentially you've created a directory of
1:54:56 functionality from which you can choose. But the main person with agency in that is the creator of
1:55:02 the website: what is the universe of options you can choose from? When you have an AI agent represent
1:55:08 your brand, the agency goes to the customer. They can express their problem any way they want, in a
1:55:12 multifaceted way. And so it means that your customer experience goes from the
1:55:18 enumerated set of functionality you've decided to put on your website to whatever your customers ask.
1:55:23 And then you can decide how to fulfill those requests, or whether you want to.
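
As a toy illustration of that shift in agency (my sketch, not anything from Sierra): a website is a fixed menu of routes, while an agent classifies an open-ended request into an intent the company has chosen to fulfill and politely declines the rest. The intents and the keyword classifier below are invented for the example; a real system would use a model for the classification step.

    # A website enumerates what customers can do; an agent interprets
    # whatever customers ask. All names here are illustrative.
    WEBSITE_MENU = ["file_claim", "bundle_home_auto", "add_family_member"]  # fixed pages

    FULFILLABLE_INTENTS = {
        "file_claim": "Starting a new claim for you...",
        "bundle_home_auto": "Here is a bundled home and auto quote...",
        "add_family_member": "Adding a family member to your policy...",
    }

    def classify_intent(request: str) -> str:
        # Stand-in for an LLM that maps free-form language to an intent.
        text = request.lower()
        if "claim" in text:
            return "file_claim"
        if "bundle" in text:
            return "bundle_home_auto"
        if "daughter" in text or "add" in text:
            return "add_family_member"
        return "unknown"

    def agent_front_door(request: str) -> str:
        # The customer can ask anything; the company still decides which
        # intents it fulfills and how it declines the rest.
        intent = classify_intent(request)
        return FULFILLABLE_INTENTS.get(
            intent, "I can't do that directly, but let me find someone who can."
        )

    print(agent_front_door("My 15-year-old daughter needs coverage, what should I do?"))

WEBSITE_MENU bounds the old experience up front; the agent's surface area is whatever arrives in request, which is exactly the inversion of agency described above.
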
1:55:29 But I think it will really change the dynamic to be really empowering to consumers. As you said,
1:55:35 that Air Canada case is the reason we exist. Companies,
1:55:42 if they try to build this themselves, there are a lot of ways you can shoot yourself in the foot.
1:55:48 But in particular, your customer experience should not be wedded to one model, let alone even
1:55:54 this current generation of models. So with Sierra, you can define your customer experience once, in a
1:56:00 way that's abstracted from all of the technology. And it can be a chat. It can call you on
1:56:05 the phone. It can be all of those things. And as new models and new technology come out,
1:56:10 our platform just gets better. But you're not re-implementing your customer experience. And I
1:56:14 think that's really important, because we were talking about what's happened over the past
1:56:19 two years. Can you imagine if you're a consumer brand like ADT Home Security, thinking about
1:56:23 how you can maintain your AI agent in the face of all of that, right? It's just not
1:56:28 tenable. I mean, it's not what you do as ADT. So they've worked with us to build their AI agent.
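
One way to picture that separation (a hypothetical sketch, not Sierra's real platform or API) is a declarative experience definition that stays fixed while models and channels are swapped underneath it. Everything here, including the home-security-flavored example values, is invented for illustration.

    # Hypothetical sketch: define the customer experience once, abstracted
    # from any particular model or channel.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Experience:
        brand_voice: str
        allowed_topics: list[str]
        escalation_rule: str

    # The experience is defined once, declaratively.
    experience = Experience(
        brand_voice="calm, reassuring, safety-focused",
        allowed_topics=["billing", "alarm setup", "service appointments"],
        escalation_rule="hand off to a human on any emergency keyword",
    )

    # Models and channels are swappable backends behind the same definition.
    def respond(exp: Experience, model: Callable[[str], str],
                channel: str, user_message: str) -> str:
        prompt = (
            f"Voice: {exp.brand_voice}. "
            f"Only discuss: {', '.join(exp.allowed_topics)}. "
            f"Escalation: {exp.escalation_rule}. "
            f"Channel: {channel}. Customer says: {user_message}"
        )
        return model(prompt)

    # Swapping in a newer model upgrades chat and phone alike, without
    # re-implementing the experience definition.
    stub_model = lambda p: f"(model answer to: ...{p[-40:]})"
    print(respond(experience, stub_model, "voice", "When is my technician coming?"))
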
1:56:35 How do you fend off complacency? A lot of these companies, and maybe not in tech
1:56:44 specifically, get big, get dominant, and then take their foot off the gas. And that
1:56:50 opens the door to competitors. There's almost a natural entropy to bureaucracy in some of
1:56:57 these companies, and the bureaucracy sows the seeds of failure and competition. How do
1:56:58 you fend that off constantly?
1:57:05 It is a really challenging thing to do at a company. There are two things that
1:57:15 I've observed that I think manifest as corporate complacency. One is bureaucracy. And I think the
1:57:22 root of bureaucracy is often that when something goes wrong, companies introduce a process to fix it.
1:57:32 And over the course of 30 years, the layered sum of all of those processes, all created for
1:57:42 good reason, with good intentions, ends up being a bureaucratic sort of machine, where the reasons for
1:57:48 many of the rules and processes are rarely even remembered by the organization. But it creates this
1:57:55 sort of natural inertia. Sometimes that inertia can be good. There have
1:58:01 definitely been stories of executives coming in with ready-fire-aim new strategies that
1:58:08 backfire massively. But often it can mean that, in the face of a technology shift or a new competitor,
1:58:13 you just can't move fast enough to address it. The second thing, which I think is more subtle, is that as
1:58:20 a company grows in size, often its internal narrative can be stronger than the truth from customers.
1:58:28 I remember one time, around the peak of the smartphone wars, I ended up visiting a friend
1:58:37 on Microsoft's campus. I got off the plane at Seattle-Tacoma Airport, drove into Redmond,
1:58:43 went onto the campus. And all of a sudden, everyone I saw was using Windows Phones.
1:58:50 I assume it must have been a requirement, formal or social; you were definitely uncool if you were
1:58:56 using anything else. And from my perspective at the time, the war had already been lost.
1:58:57 Yeah.
1:59:04 It was definitely a two-horse race between Apple and Google, on iOS and Android.
1:59:10 And I remember sitting in the lobby, waiting for my friend to get me from the security check-in.
1:59:15 And I made a comment, it wasn't a confrontation, but I made a comment to someone at Microsoft,
1:59:22 something along the lines of, are you required to use Windows Phones?
1:59:27 I was just sort of curious. And I got a really bold answer, which was,
1:59:31 yeah, we're going to win. We're taking over the smartphone market. And
1:59:36 I didn't say anything, because it would have been a little socially awkward, but I was thinking, no, you're not.
1:59:38 You lost like four years ago.
1:59:44 But there's something happening that's preventing you from seeing reality.
1:59:48 Well, that's the thing. If you've ever worked for a large
1:59:55 company, you know. When you work at a small company, you care about your customers and your
2:00:02 competitors, and you feel every bump in the road. When you're a junior vice president of
2:00:10 whatever, and you're eight levels below your CEO, and you have a set of
2:00:15 objectives and results, you might be focused on, I want to go from junior vice president
2:00:20 to senior vice president; that's what success looks like for me. And you end up with this sort
2:00:28 of myopic focus on this internal world, in the same way your kids will focus on the social
2:00:33 dynamics of their high school, not the world outside of it. And it's probably rational, by the way, because
2:00:38 their social life probably is more determined by those 1,000 kids in their
2:00:42 high school than it is by all the things outside. But that's the life of a
2:00:48 person inside of these big places. And so you end up where, if you have a very senior
2:00:53 head of product who says, this competitor says they're faster, but in this next version we're
2:00:58 so much better, then all of a sudden that's what everyone says: the Windows Phone is going to
2:01:04 win. And you truly believe it, because everyone you meet says the
2:01:10 same thing, and you end up filtering customer anecdotes through that lens, and you end up
2:01:16 with this sort of reality distortion field, manifested from the sum of this sort of myopic
2:01:22 storytelling that exists within companies. What's interesting about that
2:01:27 is that the ability of a culture to believe in something is actually a great strength of
2:01:33 a culture, but it can lead to this as well. And so the combination of bureaucracy and inaccurate
2:01:39 storytelling, I think, is the reason why companies sort of die. And it's really
2:01:45 remarkable to look at the BlackBerrys of the world, or the TiVos:
2:01:52 you can really, as the plane is crashing, tell the story that you're not. And
2:01:58 similarly, as I said, culturally, you can still have the person
2:02:03 in the back of that crashing plane asking, when am I going to get promoted to SVP?
2:02:08 I mean, I've seen it a hundred times. And so
2:02:14 I think it really comes down to leadership. And I think that one of the things that most
2:02:19 great companies have is that they are obsessed with their customers. And I think the free market
2:02:24 doesn't lie. So I think one of the most important things for any enduring
2:02:29 culture, particularly in an industry that changes as rapidly as software, is how close are your
2:02:35 employees to customers, and how much can the direct voice of your customers be a part of
2:02:42 your decision-making. And that is something that I think you need to constantly work at, because
2:02:52 how employee number 30,462 actually
2:02:55 directly hears from customers is not actually a simple question to answer.
2:02:58 Is it direct? Is it filtered? How many filters are there?
2:03:04 That's exactly right. And then, I think, the other part on leadership is,
2:03:09 we talked about bureaucracy: process is there to serve the needs of the business.
2:03:18 And often, mid-level managers don't get credit for removing process;
2:03:24 they are often held accountable for things going wrong. And I think it really takes top-down
2:03:31 leadership to remove bureaucracy. And it is not always comfortable.
2:03:40 When companies remove spans of control, all the people impacted will react; it's like antibodies.
2:03:44 And for good reason, I mean, it makes sense, their lives are negatively impacted or whatever it is.
2:03:51 But it almost has to come from the top, because you need to give air cover. Almost certainly
2:03:56 something will go wrong, by the way. I mean, processes usually exist for a reason,
2:04:01 but when they accumulate without end, you end up with bureaucracy. So those are the two things
2:04:06 that I always look for. And you can smell it when you go into a really bureaucratic company:
2:04:13 the inaccurate storytelling, the process over outcomes. It just sort of sucks the
2:04:19 energy out of you when you feel it. That's a great answer. We always end these interviews with the exact
2:04:24 same question, which is: what is success for you? Success for me, we talked about how I spend my
2:04:31 time, with my family and at work, is having a happy, healthy family and being able to work with
2:04:35 my co-founder Clay for the rest of my life, making Sierra into an enduring company. That would be success for me.
2:04:47 Thanks for listening and learning with us. The Farnam Street blog is where you can learn more
2:04:54 about my new book, Clear Thinking: Turning Ordinary Moments into Extraordinary Results. It's a transformative
2:05:00 guide that hands you the tools to master your fate, sharpen your decision-making, and set yourself up for
2:05:14 unparalleled success. Learn more at fs.blog/clear. Until next time.

What happens when one of the most legendary minds in tech delves deep into the real workings of modern AI? A two-hour masterclass that you don't want to miss.


Bret Taylor unpacks why AI is transforming software engineering forever, how founders can survive acquisition (he's done it twice), and why the true bottlenecks in AI aren't what most think. Drawing on his experiences, he explains why the next phase of AI won't just be about better models, but about entirely new ways we'll work with them. Bret exposes the reality gap between what AI insiders understand and what everyone else believes.

Listen now to recalibrate your thinking before your competitors do. 

(00:02:46) Aha Moments with AI

(00:04:43) Founders Working for Founders

(00:07:59) Acquisition Process

(00:14:14) The Role of a Board

(00:17:05) Founder Mode

(00:20:29) Engineers as Leaders

(00:24:54) Applying First Principles in Business

(00:28:43) The Future of Software Engineering

(00:35:11) Efficiency and Verification of AI-Generated Code

(00:36:46) The Future of Software Development

(00:37:24) Defining AGI

(00:47:03) AI Self-Improvement?

(00:47:58) Safety Measures and Supervision in AI

(00:49:47) Benefiting Humanity and AI Safety

(00:54:06) Regulation and Geopolitical Landscape in AI

(00:55:58) Foundation Models and Frontier Models

(01:01:06) Economics and Open Source Models

(01:05:18) AI and AGI Accessibility

(01:07:42) Optimizing AI Prompts

(01:11:18) Creating an AI Superpower

(01:14:12) Future of Education and AI

(01:19:34) The Impact of AI on Job Roles

(01:21:58) AI in Problem-Solving and Research

(01:25:24) Importance of AI Context Window

(01:27:37) AI Output and Intellectual Property

(01:30:09) Google Maps Launch and Challenges

(01:37:57) Long-Term Investment in AI

(01:43:02) Balancing Work and Family Life

(01:44:25) Building Sierra as an Enduring Company

(01:45:38) Lessons from Tech Company Lifecycles

(01:48:31) Definition and Applications of AI Agents

(01:53:56) Challenges and Importance of Branded AI Agents

(01:56:28) Fending Off Complacency in Companies

(02:01:21) Customer Obsession and Leadership in Companies

Bret Taylor is currently the Chairman of OpenAI and CEO of Sierra. Previously, he was the CTO of Facebook, Chairman of the board for X, and the Co-CEO of Salesforce. 

Newsletter – The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it’s completely free. Learn more and sign up at fs.blog/newsletter

Upgrade – If you want to hear my thoughts and reflections at the end of the episode, join our membership: fs.blog/membership and get your own private feed.

Watch on YouTube: @tkppodcast

Learn more about your ad choices. Visit megaphone.fm/adchoices
