AI transcript
0:00:22 You know, sometimes I sit there and I run into a bug, whether it’s a Google product or an Apple product or, you know, Facebook or whatever.
0:00:32 I’m like, this is an obvious bug. And I know that there are teams out there, there are people getting paid millions of dollars a year to make some of the worst software.
0:00:36 And it will never get fixed because people don’t care. No one’s paying attention.
0:00:43 That’s just one symptom out of a great many that is, you know, the result of basically treating people like, you know, hoarded resources.
0:00:46 The world is full of problems. Let’s go solve those things.
0:00:58 Welcome to The Knowledge Project. I’m your host, Shane Parrish.
0:01:05 In a world where knowledge is power, this podcast is your toolkit for mastering the best of what other people have already figured out.
0:01:14 If you want to take your learning to the next level, consider joining our membership program at fs.blog.
0:01:26 As a member, you’ll get my personal reflections at the end of every episode, early access to episodes, no ads, including this one, exclusive content, hand-edited transcripts, and so much more.
0:01:28 Check out the link in the show notes for more.
0:01:35 Today, we’re pulling back the curtain on one of the most powerful forces in the tech and venture capital world, Y Combinator.
0:01:46 With less than a 1% acceptance rate and a track record that includes 60% of the last decade’s unicorn startups, YC has shaped the startup world as we know it.
0:01:57 Garry Tan, president of Y Combinator, joins us to break down what separates transformative founders from the rest and why so many ambitious entrepreneurs still get it wrong.
0:02:05 We’ll explore the traits that matter the most, the numbers behind billion-dollar companies, and why earnestness often beats raw ambition.
0:02:12 But there’s a seismic shift happening in venture capital, and AI is at the center of it.
0:02:21 We’ll dig into how artificial intelligence is reshaping startups from idea generation to regulation and what it means for the next wave of innovation.
0:02:32 If you’re curious about Silicon Valley’s secrets, the present and the future of AI, or how true innovation gets funded, this conversation is for you.
0:02:35 It’s time to listen and learn.
0:02:42 I want to start with what makes Y Combinator so successful.
0:02:48 I guess I can’t talk about YC without talking about Paul Graham and Jessica Livingston.
0:02:52 I mean, it started because they’re remarkable people.
0:03:06 And, you know, Paul, when he started his company, I don’t think he ever had the idea that he would ever become someone who created a thing like YC.
0:03:13 He was just trying to help people and sort of follow his own interests, I think.
0:03:21 He just said, I know how to make products and make software and make them in a way that people can use them.
0:03:32 And then he actually sold that company, ViaWeb. It was one of the first. You know, today we have Shopify; ViaWeb was sort of like the very first version of it.
0:03:39 He actually basically created the first web browser-based program.
0:03:47 So he was one of the first people to hook up a web request to an actual program in Unix.
0:03:51 You know, today we call it CGI-bin or, you know, all these different things.
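To make that mechanism concrete, here is a minimal sketch of the classic CGI hookup being described, in Python purely for illustration; a hypothetical example, not ViaWeb's actual code. The web server sets environment variables describing the request, runs the program, and relays whatever it prints back to the browser.

```python
#!/usr/bin/env python3
# Minimal CGI-style program. The web server sets environment variables
# describing the incoming request, executes this script, and sends
# whatever it writes to stdout back to the browser.
import os
from urllib.parse import parse_qs

# e.g. a request for /cgi-bin/hello?name=Ada arrives as QUERY_STRING="name=Ada"
query = parse_qs(os.environ.get("QUERY_STRING", ""))
name = query.get("name", ["world"])[0]

# A CGI response is just headers, a blank line, then the body.
print("Content-Type: text/html")
print()
print(f"<html><body><h1>Hello, {name}!</h1></body></html>")
```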
0:04:06 But, you know, he was so early on the web that, you know, it was a new idea to make software for the web that didn’t require like some desktop thing that you had to use to configure the website.
0:04:16 And so I think he’s just always been an autodidact, a really great engineer, and then just a polymath.
0:04:19 So I think that that’s what really made YC.
0:04:20 I mean, he wrote essays.
0:04:26 He sort of attracted all the people in the world who wanted to do the thing that he wanted to do.
0:04:36 And so I think Paul Graham and his essays became a Schelling point for people who believed this new thing could really happen in the world.
0:04:39 And, you know, that started very early.
0:04:43 I mean, I think it started literally with the web itself.
0:04:53 And, you know, that’s why in 2005 he was able to get hundreds to thousands of really amazing applications from people who wanted to do what he did.
0:04:56 And then the magic is it’s only a 10-week program.
0:05:02 I think he had only a dozen people in that very first program in 2005.
0:05:07 And then out of that very first program, Sam Altman went through it.
0:05:10 And Sam, I guess it’s interesting.
0:05:20 I mean, if you have a draw that is very profound, it will draw out of the world the people it speaks to.
0:05:27 And so you end up needing in society these, like, sort of Schelling points for certain ideas.
0:05:39 And then that, you know, the idea that someone could sit down in front of a computer and create a piece of software that a billion people could use turned out to be very contrarian and very right.
0:05:50 And so, you know, today I think of YC as really, it’s actually, you know, software, events, and media.
0:05:55 And, you know, I think you’ve had Naval Ravikant on before.
0:06:04 And, you know, I think I remember distinctly Naval talking about like those are the few forms of extreme leverage you have in the world.
0:06:08 And so, you know, I think Y Combinator is this crazy thing.
0:06:17 It’s like when people realize they could start a startup, they went on Google and they searched and they found Paul’s essays.
0:06:21 And then through his essays, they found Y Combinator.
0:06:30 And then YC started funding people like, you know, Steve Huffman, who ended up creating Reddit in that very first batch and selling that to Condé Nast.
0:06:40 And, you know, Dropbox, then Airbnb, then, you know, today, you know, Coinbase, you know, DoorDash.
0:06:44 There are just so many companies that, you know, are incredible.
0:06:53 I mean, Airbnb is this insane marketplace that houses way more people on any given night than, you know, the biggest hotel chains in the world.
0:06:55 And it’s like, on the one hand, unimaginable.
0:06:59 On the other hand, like, that’s the kind of thing that you can do.
0:07:02 Like, you can just, you know, do things, which is wild.
0:07:05 And so, I think that that’s why it works.
0:07:11 It’s, we attract people who want to create those things and then we give them money.
0:07:16 And then more importantly, I think the know-how is we give it away for free.
0:07:18 Go deeper on that.
0:07:36 Earlier, just now, we were chatting about this podcast setup, but we spend a lot of time writing essays and putting out content on our YouTube channels and just trying to teach people, you know, how do you actually do this stuff?
0:07:42 There’s like a lot of mechanical knowledge about how do you incorporate or how do you raise money for the first time?
0:07:45 And all of that is out there for free.
0:07:53 And, you know, on the other hand, I think of doing YC, being in the program, it’s a 10-week program.
0:07:55 We make everyone come to San Francisco now.
0:08:12 At the end of it, it culminates in people raising, you know, sort of the median raise is about a million to a million and a half bucks, for, you know, teams that are sometimes two or three people with just an idea at the beginning of it.
0:08:14 So, that’s the demo day, is that the, yeah.
0:08:22 And yeah, we have, you know, I think we have about a billion dollars a year in, you know, funding that comes into YC companies.
0:08:27 And that’s because the acceptance rate to get into YC is only 1%.
0:08:29 So, let me get this straight.
0:08:32 You have, I think I read somewhere, 40,000 applications a year.
0:08:35 Yeah, I think it’s close to 70,000, 80,000 at this point.
0:08:37 How do you filter those?
0:08:50 Well, we ourselves use software, but we also have 13 general partners who actually read applications and we watch the one-minute video you post.
0:08:56 And, you know, the most important thing to me is that I want us to try the products, right?
0:09:02 You know, sure, we can use the resume and, you know, people’s careers and where they went to school.
0:09:04 You know, we’re not going to throw that out.
0:09:06 Like, it’s a factor in everything.
0:09:11 But the most important thing to me is not necessarily the biography.
0:09:14 It’s actually, you know, what have you built?
0:09:15 What can you build?
0:09:16 Go deeper on the software thing.
0:09:20 I don’t think I’ve heard that before that you guys, obviously, you have to use software.
0:09:22 But what does the software do?
0:09:23 How does it filter?
0:09:29 Yeah, I mean, ultimately, the best thing that we can do is actually brute force read.
0:09:38 And on average, I think a group partner will read something like 1,000 to 1,500 applications for that cycle that they’re working.
0:09:47 So the best thing we can do is basically, like, humans trying to make decisions, you know,
0:09:51 which is maybe a little antithetical to, you know, the broader thing right now.
0:09:54 And now it’s, you know, let’s just use AI for everything.
0:09:58 But I think that the human element is still very important.
0:10:01 Most mornings, I start my day with a smoothie.
0:10:04 It’s a secret recipe the kids and I call the Tom Brady.
0:10:09 I actually shared the full recipe in episode 191 with Dr. Rhonda Patrick.
0:10:12 One thing that hasn’t changed since then, protein is a must.
0:10:20 These days, I build my foundation around what Momentous calls the Momentous 3: protein, creatine, and omega-3s.
0:10:27 I take them daily because they support everything – focus, energy, recovery, and long-term health.
0:10:31 Most people don’t get enough of any of these things through diet alone.
0:10:34 What makes Momentous different is their quality.
0:10:36 Their whey protein isolate is grass-fed.
0:10:40 Their creatine uses Creapure, the purest form available.
0:10:44 And their omega-3s are sourced for maximum bioavailability.
0:10:46 So your body actually uses what you take.
0:10:52 No fillers, no artificial ingredients, just what your body needs, backed by science.
0:10:58 Head to livemomentous.com and use code KNOWLEDGEPROJECT for 35% off your first subscription.
0:11:04 That’s code KNOWLEDGEPROJECT at livemomentous.com for 35% off your first subscription.
0:11:40 And then at the end, I guess the last filter is, like, this 10-minute interview.
0:11:46 So what do you ask in 10 minutes to determine if somebody’s going to be part of Y Combinator?
0:11:54 I guess the surprising thing that has worked over and over again, ultimately, is in those 10 minutes,
0:11:59 either you learn a lot about both the founders and the market, or you don’t.
0:12:03 So we’re looking for incredibly crisp communication.
0:12:06 So I want to know, you know, what is it?
0:12:13 And, you know, often the first thing I ask is not just what is it, but why are you working on it?
0:12:16 Like, I want to sort of understand where did this come from?
0:12:17 Did you just read about it on the internet?
0:12:23 Or a much better answer is, you know, well, I spent a year working on this,
0:12:27 and I got all the way to the edge of, you know, what people know about this thing.
0:12:35 And, you know, what’s cool about, you know, the biographical is that then it invites more questions, right?
0:12:38 That’s what the best interviews in 10 minutes are like.
0:12:40 Like, you learn about an entire market.
0:12:46 You learn about a set of people that, you know, normally you might not ever hear of.
0:12:48 It’s like you’re traveling.
0:12:54 It’s like you’re traveling the idea maze with the people you’re talking to.
0:12:55 This is all over Zoom.
0:13:00 And, you know, at the end of those 10 minutes, like, sometimes the 10 minutes becomes 15.
0:13:06 Like, you want to talk to people longer because that’s what a great interview feels like to me.
0:13:11 It feels like I’m a cat and I see a little yarn and I’m just pulling on the yarn.
0:13:15 I’m just pulling on the thread because it’s like this, you know, there’s something here.
0:13:21 This person understands something about the world that, you know, actually makes sense to me.
0:13:30 And I think what we’re looking for is actual signal that there’s a real problem to be solved.
0:13:34 There are people on that end who are willing to pay.
0:13:53 And then, you know, working backwards, what a great startup ultimately is, is something real that people are willing to pay for, that probably has durable moats, which means that that company could actually become much bigger than you.
0:13:58 You don’t want to start a restaurant, for instance, because there’s infinite competition for restaurants.
0:14:04 But you do want to start, you know, something like Airbnb that has network effects or…
0:14:06 That can really scale.
0:14:06 Exactly.
0:14:13 Or, you know, in AI today, one of the more important things is, you know, are people willing to pay?
0:14:21 And today, because people are not selling software, they’re increasingly actually selling intelligence.
0:14:27 They’re like, you know, like it or not, like these are things that you could not buy before.
0:14:36 Like, you know, probably the most vulnerable things in the world today are things that you could, you know, farm out to an overseas call center.
0:14:38 That’s sort of like the low-hanging fruit today.
0:14:46 And, you know, basically, how do you find things that people want and how do you actually provide it for them?
0:14:51 And the remarkable thing is that, you know, that’s why it only has to be 10 minutes.
0:15:05 You know, one of the things I feel like I learned from Paul Graham, interviewing alongside him for so many years, was that sometimes I’d go through and this person would come in, they had an incredible resume, you know, they, like, had a PhD or they studied under this famous…
0:15:21 Or, you know, they worked at Google or Facebook or all these really famous places, they had an impressive resume or they had the credentials of someone who I felt like, you know, should be able to do it.
0:15:22 But then they had a mess of an interview.
0:15:26 Like, we didn’t get any signal from it.
0:15:27 We didn’t understand.
0:15:28 Or, like, it just seemed garbled.
0:15:33 Or, you know, at the end of it, sometimes they’re asking, like, oh, we just, you know, 10 minutes is too short.
0:15:34 We need more time.
0:15:50 And one of the things I feel like I learned from Paul was that if in 10 minutes you cannot actually understand what’s going on, it means the person on the other end doesn’t actually understand what’s going on and there isn’t anything to understand, which is surprising.
0:15:52 That’s a really good point.
0:15:53 I bet you that holds true.
0:16:02 Do you look at people that you accepted that don’t work out, and then people that you’ve filtered out that do become successful, and try to learn from that?
0:16:02 Oh, definitely.
0:16:03 All the time.
0:16:06 I mean, I think that’s the trickiest thing.
0:16:15 You know, I think the system itself will always produce, you know, both false positives and false negatives because it is only 10 minutes.
0:16:17 But you have the highest batting average.
0:16:24 Like, Y Combinator, my understanding is it’s like 5% of the companies become billion-dollar companies.
0:16:28 Yeah, about 2.5% end up becoming decacorns sooner or later.
0:16:34 But that would be the highest batting average of any VC firm, maybe with Sequoia being the exception.
0:16:42 What’s interesting to me is most of the people that I know in that space are doing hundreds of hours of work per company.
0:16:51 And you guys can’t do that because you have 80,000 people applying and you’re still the most or at least top tier in terms of success.
0:17:00 Yeah, I mean, what’s great is I, you know, I don’t want to compete with Sequoia or Benchmark or Andreessen Horowitz or, you know, they’re our friends, honestly.
0:17:00 Yeah.
0:17:01 Done right.
0:17:09 Like, we’re much earlier than everyone else because we want to actually give them half a million dollars when they have just an idea.
0:17:11 Or maybe they don’t even know their co-founder yet.
0:17:13 That’s what makes it more incredible.
0:17:18 It’s because the batting average should be way lower based on where you’re at in the stack in terms of funding.
0:17:19 Yeah.
0:17:20 You know what it is, though?
0:17:26 I spent five years, seven years, actually, away from YC before coming back a couple years ago.
0:17:34 So I ended up, I think, in the top 10 of the Forbes Midas list as my final year before coming back to YC.
0:17:41 And why haven’t other people, you know, we ask this all the time, why haven’t other people come for us?
0:17:45 You know, I think there are lots of people who are doing various things that might work.
0:17:55 And I guess so far, people sort of lose interest or, you know, float off and go do higher status things.
0:17:55 Right.
0:18:13 Working with founders when they’re just right at the beginning and just an idea is actually, you know, relatively low status work because, you know, it’s very high status to work with a company that is, you know, worth 50 or 100 billion dollars now.
0:18:18 But guess what, like, that’s 10 years from now or sometimes 15 or 20 years from now.
0:18:24 You know, it all starts out very low status and all the way in the weeds.
0:18:31 Like, you’re answering sort of relatively simple questions and you’re giving relatively small amounts of money.
0:18:33 Well, you were giving 20 at the start, right?
0:18:34 Now you give 500?
0:18:35 Is that the…
0:18:37 Half a million dollars today, yeah.
0:18:40 Has that changed the ratio of success?
0:18:44 I think some of it is, well, we find out in 10 years.
0:18:48 If anything, I think that the unicorn rate has gone up over time.
0:18:53 You know, 10, 15 years ago, I think it was closer to maybe 3.5% to 4%.
0:18:56 And now we’re around 5.5%.
0:19:04 Some batches from maybe 2017, 2018 are, you know, pushing 8% to 10%.
0:19:13 Some of those companies in that area, in that vintage, about 50% of companies end up raising what looks like a Series A.
0:19:20 And then the wild thing about it is it actually takes a long time for people to get there.
0:19:27 So, you know, I think that YC has actually flipped a lot of the, I guess, myths of venture.
0:19:38 You know, one of the myths of venture maybe 10, 15 years ago was that, you know, within nine months of funding a company, you will know whether or not that company was good or bad.
0:19:47 And, you know, going back to that stat, you know, about half of companies that go through YC will end up raising a Series A.
0:19:52 That’s, you know, much higher than any other pre-seed or seed sort of situation that I know of.
0:20:00 But about a quarter of those who raise the Series A, they do it in year five or later.
0:20:07 And that’s a function of, like, we’re funding 22-year-olds, you know, 19-year-olds, 24-year-olds.
0:20:13 I mean, we’re funding people who are so young that sometimes they’ve never shipped software before.
0:20:31 Sometimes, you know, they’re fresh off of an internship, you know, it takes three to five years to mature, to learn how to iterate on software, how to deliver really high-quality software, how to manage people, how to manage people effectively, give feedback.
0:20:37 And so the wild thing is, I mean, sometimes it takes five years for those things to come together.
0:20:46 In my head, and correct me if I’m wrong here, there’s a bit of, like, misfit, geek, people have told me this won’t work or won’t be successful.
0:20:51 And then when I get to Y Combinator, I’m around a whole bunch of other people who are exactly like me.
0:20:52 Oh, yeah.
0:20:53 For the first time in my life.
0:20:53 Yeah.
0:20:54 And they’re super ambitious.
0:21:01 To what extent do you think that that environment just creates better success or better outcomes?
0:21:03 Oh, that was definitely true for me.
0:21:10 I mean, without that, I feel like… I mean, I had a really great community at the end of the day.
0:21:14 Like, it was, you know, my fellow Stanford grads.
0:21:24 But I guess the weird thing to say is that, like, being around people who are really earnestly trying to build helps, you know, 10x more.
0:21:30 The default startup scene out there is not about signal.
0:21:31 It’s about the noise.
0:21:33 Like, you’re playing for these other things.
0:21:35 Like, how much money can I raise?
0:21:38 And from what, you know, high-status investor?
0:21:42 Like, you know, some people sort of float off and they become scenesters.
0:21:45 They’re like, oh, let me try to get a lot of followers on Twitter.
0:21:47 That’s the most important thing.
0:21:59 And then what we really try to do at YC during the batch and then afterwards and, you know, in our office hours working with companies is, like, when we spot that kind of stuff, it’s like, oh, no, no.
0:22:00 Like, maybe don’t do that.
0:22:11 Like, you know, let’s go back to product, market, actually building and then iterating on that, getting customers, you know, long-term retention.
0:22:27 All of those things are the fundamentals, and everything else is, like, the trappings of success. And what’s funny is, in other communities, all of those things will always feel more present to hand, and they’re easier.
0:22:29 Like, you can just get it.
0:22:35 Like, you’re, you know, on stage keynoting or, you know, even doing the podcast game, I feel, like, guilty, you know?
0:22:36 Like, it’s kind of funny.
0:22:41 We see that in people and then sometimes, you know, often that will kill their startup.
0:22:43 Like, they take their eye off the ball.
0:22:44 You know, angel investing.
0:22:54 If you’re a startup founder and suddenly, you know, people have heard of you and people try to add you as a scout.
0:22:58 Like, people kill their startups all the time by that, just by taking their eye off the ball.
0:23:05 Go deeper on that a little bit in terms of focus and how people sort of lose their way unintentionally.
0:23:12 And then do they catch it before it starts to go off the rail or does it – it sort of just crashes and then there’s no coming back from it?
0:23:19 I mean, it crashes and then, you know, sometimes you have to go and do your next startup or, you know, or I don’t know.
0:23:23 Sometimes people just go off and become VCs after that and that’s okay too.
0:23:31 Is that the difference between somebody who, like, wants to run a company and start a company versus somebody who wants to be seen as running a company and starting a company?
0:23:37 I think that that’s probably the biggest danger to people who want to be founders.
0:23:41 I mean, I think I’ve seen Peter Thiel talk about this.
0:23:44 Like, he doesn’t really want people who want to start startups.
0:23:54 From my perspective, it’s certainly much better to find people who have a problem in the world that they feel like they can solve and they can use technology to solve.
0:23:57 And that’s, like, sort of a more earnest way to look at it.
0:24:05 And if you look at the histories of some of the things that are the biggest in the world, they actually start like that.
0:24:14 You know, there are lots of interviews with Steve Jobs and Steve Wozniak saying, you know, I never meant to start a company or ever wanted to make money.
0:24:18 All I wanted to do was make a computer for me and my friends.
0:24:23 And so, you know, many, many more people kept coming to me saying, can you build me a computer?
0:24:27 And they just, you know, like a cat, were pulling on this thread.
0:24:31 It’s like the company was a reluctant side effect of this.
0:24:45 In history, it seems like a lot of innovation comes from great concentrations of people, whether it’s a city or the Industrial Revolution; all of these things tend to be localized and then spread over the world, if I understand it correctly.
0:24:47 Why Silicon Valley?
0:24:48 Why San Francisco?
0:24:54 And why haven’t other countries been able to replicate that success?
0:24:58 Well, at YC, what we hope is that people actually come to San Francisco.
0:25:04 And, you know, we do strongly advocate that they stay, but it’s no requirement.
0:25:16 And then what we hope is that if they do leave, they end up bringing the networks and know-how and culture and, you know, frankly, vibes.
0:25:20 And they bring it back to all the other startup hubs in the world.
0:25:24 And I think that that’s some of the stuff that has actually come about.
0:25:30 I mean, Monzo was started by now my partner, Tom Blomfield.
0:25:37 He’s a partner at YC now, but he started, you know, multiple startups and a few of them, you know, multiple unicorns, actually.
0:25:40 And both of them are some of the biggest companies in London, for instance.
0:25:52 So what we hope is that San Francisco becomes sort of really Athens or Rome in antiquity, you know, send us your best and the brightest, you know, ideally you stay here.
0:26:03 One thing we spotted is that the teams that come to San Francisco and then stay in San Francisco or the Bay Area, they actually double their chance of becoming a unicorn.
0:26:04 Oh, wow.
0:26:14 So if it’s one thing that you could do, it’s be around people and be in the place where making something brand new is in the water.
0:26:24 So if, hypothetically, you created a new country tomorrow and you wanted to spur on innovation, what sort of policy… you’ve got to compete with San Francisco.
0:26:27 What sort of policies would you think about?
0:26:35 Like, how would you think about setting that up to attract capital, to attract the right mindset of people, to attract and retain these people?
0:26:41 I think what I want for San Francisco, for instance, is I think the rent should be lower.
0:26:49 And so rather than subsidizing demand, we actually need to increase supply like fairly radically, actually.
0:26:50 And that just hasn’t happened.
0:27:08 I was looking at it for the entire last calendar year. I think, you know, Scott Wiener had just posted this on X: literally, there were no new housing starts in all of, you know, San Francisco proper for the last year.
0:27:15 So how are we supposed to actually bring down the rents and make this place, you know, actually livable?
0:27:32 You know, if San Francisco is the microcosm where, you know, people build the future, and it is sort of the siren song for, you know, 150-IQ people who are very, very ambitious and have our, you know, techno-optimistic ideology,
0:27:48 and it’s also where they are most likely to succeed, then society, and certainly, you know, America, is not serving society the right way if we’re getting in the way of these smart people trying to solve these problems, trying to build the future.
0:27:51 But just continuing on the Y Combinator theme for a second.
0:27:57 Are there ideas that you’ve said no to, but you think they’re going to be successful, they just scare you?
0:27:59 And you’re like, no, that’s too scary.
0:28:06 I mean, if it’s scary, but might or probably will be good, I think we want to fund them.
0:28:12 And certainly there are things that would be bad for society, but are likely to make money.
0:28:17 And, you know, the history is, our partners, everyone’s independent.
0:28:27 You know, we have a process that is very predicated on, you know, if you’re a general partner at YC, you know, you pretty much can fund what you want.
0:28:31 You know, we run it by each other to make sure, you know, sort of double check, like, the thinking.
0:28:34 But I think we’re pretty aligned there.
0:28:43 Like, there are lots of examples of, you know, maybe five or six years ago, there was a rash of telehealth companies that are focused on, for instance, ADHD meds.
0:28:54 And I distinctly remember one of our partners, Gustaf Alströmer, he met that team and he said, you know what, we’re not going to fund these guys.
0:29:04 You know, it’s going to make money, but I don’t want to live in a world where it is that easy to get, you know, people on these drugs.
0:29:11 Like, they’re ultimately methamphetamines and, you know, these are controlled substances and this is the wrong vibe.
0:29:14 Like, we did not like the vibe that we got from the founders of that company.
0:29:20 So, you know, I hope that YC continues that way and I think it will.
0:29:29 Ultimately, we want people who are, I mean, ultimately trying to be benevolent, at least, you know.
0:29:37 How would you think about, like, just spitballing an idea, if I were to come to you and be like, I’m starting a cyber weapons company?
0:29:40 I guess some of it is like, are you only going to sell to Five Eyes?
0:29:44 Because, you know, I really liked what MIT put out recently.
0:29:48 They were very clear.
0:29:54 They said, you know, MIT is an institution and that institution is an American institution.
0:30:01 And so, being very clear about that, I thought, was totally the right move for MIT.
0:30:08 And, you know, I think that YC needs to be a similar, you know, an institution of similar character.
0:30:09 I like that.
0:30:12 What do you wish founders knew about sales coming in?
0:30:14 Oh, how hard it is.
0:30:22 And, I mean, you know, like it or not, you know, the ideal founder is someone who has lived, like, 20 lifetimes and has the skills of 20 people.
0:30:26 And the thing is, you know, you can’t get that.
0:30:35 And so, probably the first conference that we have, the first mini conference we have when we welcome the batch in is the sales mini conference.
0:30:40 And, essentially, it is: don’t run away from the no.
0:31:01 Spencer Skates of Amplitude has this great analogy that he told, you know, some companies when he came by to speak recently, that I’ve been thinking a lot about, which is: sales is about, you know, having 100 boxes in front of you, and maybe five or six of those boxes have a gold nugget in them.
0:31:14 And if you haven’t done sales before, you think, I really, I’m going to gingerly, in a very gingerly way, open that first box and hope, hope, hope that, you know, I have a gold nugget.
0:31:19 And then, you know, I don’t, I almost don’t want to know that there isn’t a gold nugget in there.
0:31:21 Like, I’m so afraid of rejection.
0:31:32 It’s sort of remarkable how often high school and family and, you know, the 10,000 hours of human training people get from their childhoods comes up in Paul Graham’s essays.
0:31:41 I always think about that because I think that most people’s backgrounds just don’t prepare them for sales.
0:31:43 It’s a very unnatural thing to do sales.
0:31:48 But then the sooner that you acquire those skills, like, the more free you become.
0:31:59 And what Spencer says about those 100 boxes is, instead of, like, being incredibly afraid of, you know, getting an F, you know, nothing’s going to happen to you.
0:32:01 Just, like, flip open all 100 boxes immediately.
0:32:05 And then, you know, you should aggressively try to get to a no.
0:32:12 And, you know, you’d rather get a no so you can spend less time on that lead and you can get on to the next one.
0:32:22 I mean, I think that that’s, like, a very interesting example of the mindset shift that you can read about, but you sort of need, it takes a village.
0:32:28 Like, you sort of need to be around lots and lots of people for whom that is true, that has been true.
0:32:36 And I think that, you know, maybe that’s actually one of the reasons why YC startups are much more successful.
0:32:45 Like, other people give as much money, or, you know, as you said, venture capital firms tend to give, you know, a lot more money.
0:32:59 I mean, there are clones of YC right now that give, like, twice as much money, for instance, but I don’t think that they’re going to see this level of success, because they’re not going to have people who are as earnest and become as formidable around them.
0:33:01 Like, it’s actually a process.
0:33:07 It’s so interesting to me because as you’re saying that, there’s something that strikes me about the simplicity of what you’re doing.
0:33:13 And then also, like, Berkshire Hathaway, you know, everybody’s tried to replicate Berkshire Hathaway, but they can’t.
0:33:14 Yeah.
0:33:23 And because they can’t maintain the simplicity, they can’t maintain the focus, they can’t do the secret sauce, which obviously has a lot to do with Charlie Munger and Warren Buffett.
0:33:28 And with you guys, it has a lot to do with the founders that you attract and you can bring together.
0:33:31 But you have billions of dollars effectively trying to replicate it.
0:33:32 Nobody’s able to do that.
0:33:34 I think that that’s really interesting.
0:33:37 And it’s not like you’re doing something that’s super complicated.
0:33:37 Yeah.
0:33:40 It doesn’t sound like it unless I’m missing something.
0:33:44 Like, it’s a very simple sort of process to bring the people together.
0:33:48 And obviously, there’s filtering and you guys are really good at doing that.
0:34:00 I mean, what my hope is, I feel like when Paul and Jessica created YC, I went through the program myself in 2008 and I came out transformed.
0:34:07 And then that’s very explicitly what I want to happen for people who go through the batch today.
0:34:17 It’s, you know, it isn’t just like show up to a bunch of dinners and network with some people who happen to be, you know, it’s much deeper than that.
0:34:28 Like, I want people to come in maybe with like, you know, the default worldview and then I want them to come out with a very radically different worldview.
0:34:36 I want someone who is much more earnest, someone who is not necessarily trying to sort of like hack the hack.
0:34:53 They’re trying to, you know… and I think this mirrors what you were saying about what, you know, Charlie Munger, rest in peace, talked about and what Warren Buffett talks about: all of these things are, in the short term, popularity contests.
0:34:57 But in the end, all that matters is the weighing machine.
0:35:07 So you can raise your Series A, you can throw amazing parties, TechCrunch can write about you, all these Twitter anons can fête you as, like, the next greatest thing.
0:35:12 And you could get, you know, hundreds of thousands of followers on X or whatever.
0:35:19 But, you know, at the end of the day, you look down and did you create something of great value?
0:35:34 Like, did you, with your hands, you know, assemble people and capital and create something that, when all is said and done, solved some real problem, put people together?
0:35:37 You know, is there real enterprise value?
0:35:38 And that’s the weighing machine.
0:35:49 And, you know, the way that YC makes money, the way that, you know, the founders make money, it’s all aligned at that point.
0:35:51 Like, yeah, there’s, like, a way to hack the hack.
0:35:56 And I don’t really know what the end game is on the other stuff.
0:35:57 It’s just very short term.
0:36:09 Whereas, you know, on a 5, 10, 15-year basis, like, if you are nose to the grindstone, earnestly working on the thing, you know, you will succeed.
0:36:13 Like, I think that that’s what Paul Graham’s essay about being a cockroach actually is.
0:36:22 And, you know, that’s why 25% of the people who reach some form of product market fit at YC do it in year five or later.
0:36:23 It’s like they don’t quit year one.
0:36:24 They don’t quit year two.
0:36:27 Like, you know, they are learning and growing.
0:36:32 I have one other really crazy stat that, like, I’m thinking about all the time right now.
0:36:36 There’s a founder, or there’s a VC, actually.
0:36:38 His name is Ali Tamasab.
0:36:39 He works at Data Collective.
0:36:41 He wrote a book called Super Founders.
0:36:44 And I get this email from him out of the blue.
0:36:54 He says, did you know that about 40% of the unicorns from the last 10 years in the world were started by multi-time serial founders?
0:36:55 And I was like, okay, that’s a cool stat.
0:36:56 Like, makes sense.
0:37:00 Like, multi-time founders are, you know, they know a lot more.
0:37:01 They have networks.
0:37:03 They have access to capital.
0:37:04 Like, that’s not a surprising stat.
0:37:08 You know, if anything, it’s a little surprising that it’s only 40%.
0:37:09 Like, you would have guessed maybe that was 80.
0:37:13 But the thing he said after that really shocked me.
0:37:25 He said, did you know that of that 40%, the repeat founders who created unicorns in the last 10 years, 60% of those people are YC alumni?
0:37:26 Oh, wow.
0:37:28 So, I’m like, that’s crazy.
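Taking the two stats at face value, and assuming the 60% applies within that 40% slice, as he frames it, the arithmetic composes simply:

$$0.40 \times 0.60 = 0.24$$

so roughly a quarter of the last decade's unicorns would trace back to repeat founders who are YC alumni.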
0:37:36 Like, I’m really glad that YC exists now because, you know, even if, you know, YC today is basically a thing that is for first-timers.
0:37:39 You know, we do have second-timers apply.
0:37:40 We have, we do accept them.
0:37:46 But, you know, we primarily think of the half a million dollars, you know, it really is for people who are starting out.
0:37:48 And it’s kind of hilarious.
0:37:54 Like, I have no product right now for people who are, you know, for my YC alums.
0:37:56 And maybe that’s okay.
0:38:05 You know, it’s, you know, that’s our gift to the rest of Sand Hill Road because, you know, they’re the ones who are going to be the fund returners for all of the rest of Sand Hill Road.
0:38:16 Would you say, like, in terms of personal characteristics, it sounded like determination was definitely one of the most important outside of the company or venture?
0:38:29 What are the other personal sort of skills or behaviors or characteristics that people have that you say you would think correlate to the, not only the successful first time, but second, third, fourth?
0:38:29 Yeah.
0:38:41 I mean, the number one thing that comes to mind for me is, I mean, maybe it’s even surprising, because it’s not a word that you might associate with Silicon Valley founders.
0:38:43 I think of the word earnest.
0:38:46 What does earnest mean?
0:38:48 Like, incredibly sincere, I think.
0:38:51 Basically, what you see is what you get.
0:38:53 Like, you’re not trying to be something else.
0:38:57 It’s, like, authentic, but, like, you know, even humble in that respect, right?
0:39:00 Like, I’m trying to do this thing.
0:39:12 And it’s surprising because, you know, I don’t know if people associate that with Silicon Valley startups, but I see it in the founders that are the most successful and most durable.
0:39:22 I see it in Brian Armstrong at Coinbase, like, which is fascinating because that’s definitely not the trait that you would apply to most crypto founders.
0:39:27 And, you know, I would use Sam Bankman-Fried as sort of the opposite of that.
0:39:38 Like, you know, Brian Armstrong is an incredibly earnest founder who literally read the Satoshi Nakamoto white paper and said, this is going to be the future.
0:39:49 And then he worked backwards from that. You know, when you talk to him, like, the reason why he wanted these things comes directly out of his own experience.
0:40:05 I mean, at Airbnb, they were dealing with the financial systems of, you know, myriad countries, and, like, just sending money internationally from one country to another was totally fraught and totally not, you know, something that was accessible to normal people.
0:40:08 Like, remittance is this crazy scam.
0:40:15 It’s insane, like, how many fees that people have to pay just to, like, send money home or do cross-border commerce, right?
0:40:20 So this is something that was incredibly earnest of Brian Armstrong to do.
0:40:24 He said, here’s the thing that is broken in the world that, you know, he saw personally.
0:40:40 I think he spent time in, you know, Buenos Aires in Argentina and he saw hyperinflation and he said, you know, this is a technology that solves real problems that I have seen hurt people and I know that this technology can solve it.
0:40:47 And then after that, he’s just, like, nose to the grindstone working backwards from that thing that he wants to create in the world.
0:40:50 And, you know, it’s no surprise to me.
0:41:00 I mean, there were many years in there where I think our whole community was looking at someone like Sam Bankman-Fried and just wondering, like, what’s going on over there?
0:41:08 He speed ran this sort of money, power, fame game to an extreme degree, so much so that he stole customer funds to do it.
0:41:09 And, like, that was the answer.
0:41:12 Like, that’s anti-earnest.
0:41:15 Like, that is the definition of he was a crook.
0:41:16 He’s in jail now.
0:41:30 And, you know, my hope is that people who look, you know, if you just look at Brian Armstrong versus SBF, I’m hoping that, you know, young people listening to this right now take that to heart.
0:41:32 It’s like the things that actually win.
0:41:39 You know, I mean, and going back to Buffett, you know, I went to their, you know, sort of conclave in Omaha.
0:41:41 Oh, you went to the Woodstock for capital.
0:41:42 Yeah.
0:41:42 Yeah.
0:41:44 I mean, amazing.
0:41:50 And I think those guys are, by definition, extremely earnest.
0:41:52 You know, I don’t think it’s an affectation.
0:41:56 I think it’s, like, it’s, like, legit and serious.
0:41:58 Like, those guys did everything, you know.
0:41:59 What is it?
0:42:00 It’s their thing, right?
0:42:05 It’s, you know, work on high-class problems with high-class people.
0:42:07 Like, I mean, it’s, that’s very, very simple.
0:42:09 You know, just do it the right way, right?
0:42:10 Yeah.
0:42:13 And so, that’s what I want.
0:42:27 I think that if YC is the Schelling point for earnest, friendly, ambitious nerds, to steal something from, you know, a friend I have on Twitter who goes by Visa, Visakan Veerasamy.
0:42:31 And, you know, he has a whole book on it.
0:42:34 I think it’s called Friendly Ambitious Nerd, if you look it up.
0:42:40 I mean, I think that that’s what YC, by definition, should be attracting.
0:42:47 And, you know, Brian Armstrong is, like, the best, one of the best founders I’ve ever met and gotten the chance to work with and fund.
0:43:02 And, you know, I think the world desperately needs more people like that, where, you know, in the background, just, like, consistent, doing the right thing, trying to attract the right people, like, you know, chop wood, carry water, that’s it.
0:43:10 He also took a big stand before it became popular, that the workplace is, like, a performance place.
0:43:15 It’s not, you don’t bring all of your politics and all that stuff in.
0:43:18 But he did that at a time when it was courageous.
0:43:21 Like, it was really, he was one of the first people out of the gate.
0:43:23 And he took so much flack for that.
0:43:24 Yeah.
0:43:25 And I remember-
0:43:26 But he’s vindicated now.
0:43:32 I know, but I remember reading, like, his thing, and I was like, oh, this is great, but, like, why are we even pointing this out, you know?
0:43:37 Like, and then he got, like, I read the stuff online, and I was like, this is crazy.
0:43:39 That’s the media environment, right?
0:43:42 I thought it was interesting, anyway, that he came out and did that.
0:43:53 And I think where it relates to the earnestness is only somebody who’s really comfortable with themselves and, like, trying to do good in the world could really come out and take that stand at that point in time.
0:43:55 Yeah, that’s true leadership.
0:43:55 Yeah.
0:44:02 What’s the biggest unexpected change you’ve seen in building companies in the AI world?
0:44:15 I think the biggest thing that is increasingly true, and we’re seeing a lot of examples of it in the last year, is that blitzscaling for AI might not be a thing.
0:44:17 What’s blitzscaling?
0:44:19 So, I think Reid Hoffman wrote a whole book about it.
0:44:22 It was definitely true in the time of Uber.
0:44:42 So, you know, that was sort of a moment when interest rates were descending, and then these increasingly international, offline-to-online marketplaces came up, like Uber in cars, or delivery, you could say Instacart, DoorDash, you could throw in, you know, Lyft.
0:44:49 There was sort of this whole wave of, you know, sort of the top startups were marketplace startups.
0:45:00 But in software too, there was this idea that, you know, scale could be used as a bludgeon, that, you know, the network effects grow, you know, sort of exponentially.
0:45:07 And then, because you could have access to more and more capital, whoever raised more money would win.
0:45:12 And I feel like that was extremely true in that era, sort of the 2010s.
0:45:16 And then in the 2020s, especially, you know, now that we’re in the mid-2020s,
0:45:24 I think that we are seeing incredible revenue growth with way fewer people, and that’s very remarkable.
0:45:30 We have companies basically, you know, going from zero to six million dollars in revenue in six months.
0:45:36 We have companies going from zero to 12 million dollars a year in revenue in 12 months, right?
0:45:42 And with under a dozen people, like usually five or six people.
0:45:44 And so that’s brand new.
0:45:51 Like, this is the result of large language models and intelligence on tap.
0:45:54 And so that’s a big change.
0:46:01 Like, you know, I think we are seeing companies that in the next year or two will get 250, 100 million dollars a year in revenue.
0:46:08 Really with under, you know, maybe 10 people, maybe 15 people tops.
0:46:12 And so that was relatively rare.
0:46:16 And my prediction would be this becomes quite common.
0:46:20 And my hope is that’s actually a really good thing.
0:46:28 Like, this is sort of the silver lining to, you know, what has been really a decade of big tech, right?
0:46:30 Like, it’s more and more centralized power.
0:46:46 You know, what might happen here, and what we’re actively trying to do at YC, is we hope that there are thousands of companies that each can make hundreds of millions to billions of dollars and give consumers an incredible amount of choice.
0:46:55 And we hope that that will be very different, because sort of the opposite, I think, was increasingly true.
0:47:05 Like, we have fewer and fewer choices in operating systems, in, you know, web browsers and, you know, across the board, like, just more and more concentration of power in tech.
0:47:06 Like, two thoughts here.
0:47:10 One, like, how much do you think that cloud computing plays into that?
0:47:16 Because now I don’t have to buy $6 billion in infrastructure to be that, you know, five-person company.
0:47:19 I can rent it based on demand.
0:47:22 So that’s enabled me not to compete on a capital basis.
0:47:24 Yeah, that was true.
0:47:27 That was even why Y Combinator in 2005 could exist.
0:47:36 You know, I remember working at a startup in 1999, 2000, or at, like, internet consulting firms.
0:47:43 And these were, like, million-dollar projects because you had to actually pay $100,000 or hundreds of thousands of dollars to Oracle.
0:47:48 You had to pay hundreds of thousands of dollars to your colo to, like, rack real servers.
0:47:51 So the cost of even starting a company was just huge.
0:48:03 Yeah, I mean, I remember Jeff Bezos actually launched AWS at a YC startup school at Stanford campus in 2008, right, when I was starting my first company.
0:48:07 So I think, you know, cloud really opened it up.
0:48:12 And, you know, that’s part of the reason why startups could be successful.
0:48:17 You know, you didn’t need to raise $5, $10 million just to rack your server.
0:48:21 And, you know, that’s the other big shift.
0:48:29 Like, I think in the past it was very, very common to have, you know, Stanford MBAs or Harvard MBAs be the CEO.
0:48:32 And then you would have to go get your hacker in a cage.
0:48:34 You had to, you know, get your CTO.
0:48:38 And, you know, there was sort of that split.
0:48:47 And then now what we’re seeing is, you know what, like, the CEO of the majority of YC companies, they are technical.
0:48:53 Is this the first revolution, like, technological revolution where the incumbents have a huge advantage?
0:49:10 You know, I think they have an advantage, but it’s not clear to me that they are conscious and aware and, like, at the wheel enough to take real advantage of it because they have too many people.
0:49:16 And then it’s all, I mean, I think this is what founder mode is actually about.
0:49:20 So, last year we had a conference with Brian Chesky.
0:49:23 We invited our top YC alums there.
0:49:26 We brought Paul and Jessica back from England.
0:49:31 And we had this one talk that wasn’t even on the agenda.
0:49:46 But I managed to text Brian Chesky of Airbnb and I got him to come and speak very openly and honestly in front of, you know, a crowd of about 200 of our absolute top alumni founders.
0:49:58 And he spoke very eloquently and in a raw way about how your company ends up not quite being your own unless you are very explicit.
0:50:02 Like, you know, I, this is actually my company.
0:50:09 I am actually going to have a hand and a role to play in all the different parts of this company.
0:50:21 I’m not going to, you know, basically the classic advice for management is hire the best people you possibly can and then give them as much rope as you possibly can.
0:50:25 And then somehow that’s going to result in, you know, good outcomes.
0:50:34 And then I think in practice that doesn’t quite work, and this founder mode idea is sort of the reaction, and it’s turning out to create a lot of value across our community, certainly.
0:50:38 But I think the memes are out there and it’s actually changing the way people are running businesses.
0:50:42 It’s sort of a shade of what you were saying earlier with Brian Armstrong.
0:50:47 Like, you know, you can sit back and allow your executives to sort of run amok.
0:50:57 And, you know, if the founder and the CEO does not exercise agency, you know, then it’s actually a political game.
0:51:02 And then you have sort of fiefdoms that are fighting it out with one another.
0:51:03 And the leader is not there.
0:51:13 Then you enter the situation where neither the leader nor the executives have power or control or agency.
0:51:16 And then you have everyone disempowered.
0:51:18 Everyone is making the wrong choice.
0:51:21 You know, retention is down.
0:51:22 You’re wasting money.
0:51:28 You have lots and lots of people who are sort of working either against each other or not working at all.
0:51:38 And that’s, you know, I think a pretty crazy dysfunction that took hold across arguably every Silicon Valley company, period.
0:51:45 And it’s still, you know, still in place at, you know, quite a few of those companies, actually.
0:51:50 Though I think people are aware now that that’s not the way to run your company.
0:51:53 Are the bigger companies sort of like shaping up or no?
0:52:03 The way that I think about this analogy is sort of like, if I’m the young skinny kid and I’m competing against the fat bloated company, I want to run upstairs.
0:52:07 It’s going to suck for me, but it’s going to suck way more for them.
0:52:15 I think this is maybe a function of, you know, blitzscaling and using capital as a bludgeon, like, gone wrong.
0:52:24 You know, you can look at, you know, almost any of these companies, they probably hired way too many people.
0:52:38 And at some point they were viewing smart people as, you know, maybe a hoarded resource that, you know, if you were playing some sort of adversarial, you know, StarCraft and you didn’t want…
0:52:46 You know, the ironic thing is like they themselves were not using the resources properly either, right?
0:52:47 They just didn’t want somebody else to have them.
0:52:48 Exactly.
0:52:57 I guess it felt like a little bit of a prisoner’s dilemma because I think the result is that, you know, tech progress itself decelerated.
0:53:06 You have, like, the smartest people of a generation basically retired in place, working at these places, when, you know, the world is actually full of problems.
0:53:22 Like, why are people sort of retired in place, pulling down salaries that are, you know, by average American standards absolutely insane, to build software that, you know, doesn’t change, doesn’t get better?
0:53:31 Or, you know, I mean, sometimes I sit there and I run into a bug in, you know, whether it’s a Google product or an Apple product or, you know, Facebook or whatever.
0:53:34 I’m like, this is an obvious bug.
0:53:41 And I know that there are teams out there, there are people getting paid millions of dollars a year to make some of the worst software.
0:53:46 And it will never get fixed because, like, you know, people don’t care.
0:53:48 No one’s paying attention.
0:54:01 Yeah, that’s just one symptom out of a great many that is, you know, the result of, yeah, I don’t know, basically treating people like, you know, hoarded resources, instead of saying, you know, the world is full of problems.
0:54:02 Let’s go solve those things.
0:54:09 When it comes to AI, the raw inputs, I guess, if you think about it that way, are sort of the LLM.
0:54:14 Then you have power, you sort of have compute, you have data.
0:54:20 Where do you think incumbents have an advantage and where do you think startups can successfully compete?
0:54:21 Yeah.
0:54:28 I mean, we had a little bit of a scare, I think, last year with AI regulation that was potentially premature.
0:54:38 So, you know, there was sort of a moment maybe a year or two ago, and you sort of see shades of it; it did make it into, say, Biden’s EO.
0:54:50 These sort of rules said, you know, past a certain amount of, you know, mathematical operations, it’s not banned exactly, but, you know, we require all of this extra regulation.
0:55:02 You have to report to the state; like, you’d better get a license. You know, that felt like the early versions of potential regulatory capture, where, you know, they wanted to restrict open source.
0:55:10 They wanted to restrict, you know, the number of different players. And, you know, sitting here a year after a lot of those attempts,
0:55:29 I feel pretty good because it feels like there are five, maybe six labs, all of whom are competing in a fair market trying to deliver models that, you know, honestly, any startup, anyone, you know, any of us could just, you know, pick and choose.
0:55:33 And, you know, there’s no monopoly danger.
0:55:40 There’s no, you know, crazy pricing power that one person, one entity wields over the whole market.
0:55:43 And so, I think that that’s actually really, really good.
0:55:46 I think it’s a much fairer playing field today.
0:55:52 And then, I think it’s interesting because it’s an interesting moment.
0:56:04 I think that, you know, basically, there’s a new Google-style sort of oligopoly that’s emerging around, like, who provides the AI models.
0:56:15 But because it probably won’t be a monopoly, that’s probably the best thing for the consumer and for actually every citizen of the world.
0:56:18 Because, you know, you’re going to have choice.
0:56:22 Let’s go deeper on the regulation and then come back to sort of competition.
0:56:28 How would you regulate AI or how do you think it should be regulated or do you think it should be regulated?
0:56:29 It’s a great question.
0:56:33 I guess there are a bunch of different models that I could see happening.
0:56:50 You know, I think what’s emerging for me is that the first wave of people who were really worried about AI safety, not to be flippant, but, like, my concern is that they basically watched Terminator 2.
0:57:06 You know, and I’m like, I like that movie too, but, you know, there’s sort of that moment in the movie where they say suddenly the AI becomes self-aware and, you know, it takes agency, right?
0:57:20 I think the funny thing is, at least as of today, you know, these systems are just matrix math and there is no agency yet.
0:57:28 Like, they’re basically equivalent to incredibly smart toasters, and some people are actually kind of disappointed in that.
0:57:40 And personally, I’m very relieved and I hope it stays that way because that means that there’s still going to be a, you know, clear role for humans in the coming decades.
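As a loose illustration of the "just matrix math" point, here is a toy sketch in Python with NumPy. Every dimension and weight in it is made up, and it is nothing like a real model's scale or architecture, but it shows the shape of the computation: numbers in, numbers out, and no goal-seeking loop anywhere.

```python
# Toy "smart toaster": a language-model forward pass is, at its core,
# repeated matrix multiplication. Nothing in here wants anything.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 100, 16                  # tiny made-up vocabulary and width
E = rng.normal(size=(vocab, dim))     # token embedding matrix
W = rng.normal(size=(dim, dim))       # one "layer" of weights
U = rng.normal(size=(dim, vocab))     # projection back to the vocabulary

def next_token_logits(token_ids):
    x = E[token_ids].mean(axis=0)     # embed the prompt tokens and pool them
    h = np.tanh(x @ W)                # transform: just matrix math
    return h @ U                      # scores over the whole vocabulary

logits = next_token_logits([3, 14, 15])
print(int(np.argmax(logits)))         # deterministically picks a token
```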
0:57:45 And, you know, I think it takes the form of two very important things.
0:57:48 One is agency.
0:57:51 I mean, people often ask, like, what should we be teaching our kids?
0:57:58 And, you know, the ironic thing is we send them to a school system that is not designed for agency.
0:58:02 It is literally designed to take agency away from our children.
0:58:05 And maybe that’s a bad thing, right?
0:58:10 Like, we should be trying to find ways to give our children as much agency as possible.
0:58:23 That’s why I’m actually personally pretty pro screens and pro Minecraft and Roblox and, you know, giving children like this sort of playground where they can exercise their own agency.
0:58:25 Have you tried Synthesis Tutor?
0:58:26 Oh, yeah.
0:58:27 Yeah, yeah.
0:58:28 I’m a small personal investor in them.
0:58:34 And, you know, I think that we’re just scratching the surface on how education will actually change.
0:58:35 But that’s a great example.
0:58:47 Like, Synthesis is designed around helping children actively be in these games that increase, instead of decrease, agency.
0:58:48 And it’s crazy.
0:58:50 So, it teaches the kids math.
0:58:58 And my understanding just from reading a little bit is El Salvador just replaced, like, the K-5 math with Synthesis Tutor.
0:59:00 And the results are, like, astounding.
0:59:02 Yeah, it’s way better.
0:59:05 I mean, the kids get involved and they’re obviously invested in it.
0:59:12 The regulation question is really interesting, too, because it raises the question: it’s a worldwide industry.
0:59:13 Yeah.
0:59:20 And so, regulating something in one country, be it the United States or another country, doesn’t change what people can do in other countries.
0:59:23 And yet, you’re competing on this global level.
0:59:24 Yeah.
0:59:31 I think the biggest question around it is, of course, I mean, the existential fear is, like, where are all the jobs going to go?
0:59:36 And then, my hope is that it’s actually two things.
0:59:56 One is, I think that robotics will play a big key role here. If we can actually provide robots that do real work for people, that will change people’s standard of living in fairly real ways.
1:00:00 So, I think universal basic robot is relatively important.
1:00:07 You know, I think some of the studies coming back about UBI, universal basic income, where you just give money to people,
1:00:11 it’s just not really resulting in a different…
1:00:13 I think they’ve never read a psychology textbook.
1:00:19 I mean, just going away from the economics of it, people need to feel like they’re part of something larger than themselves.
1:00:19 Yeah.
1:00:31 And if they don’t feel like they’re part of something larger, like they’re contributing to something, they’re part of a team, something bigger than what they are as a person, then it leads to all these problems.
1:00:33 Yeah, exactly.
1:00:46 And then, you know, I think that we really need to actually give everyone, you know, on the planet some real reason why this stuff is actually good for them, right?
1:01:02 Like, I think if there is only sort of a realignment without a material increase in people’s day-to-day livelihoods and, you know, their quality of life, like, maybe we’re doing something wrong, actually.
1:01:05 And left to its own devices, that’s possible, actually.
1:01:30 So, I don’t know what the specific things are, but I think that’s what it would look like: if regulation were to come into play, or there were some sort of realignment in reaction to the nature of work changing, the outcome should be that the majority of people, if not all people, see the benefit in some direct way.
1:01:33 And if we don’t do that, then there will be unrest.
1:01:36 I think that that’s one of the criteria.
1:01:40 You know, I don’t have the answer, but I think that that’s sort of one of the things I’d be on the lookout for.
1:01:46 At what point do you think the models start replacing the humans in terms of developing the models?
1:01:50 So, like, at what point are the models doing the work of the humans in OpenAI right now?
1:01:54 And they’re actually better than the humans at improving the model?
1:01:56 Yeah, we’re not there yet.
1:02:05 So, there’s some evidence that synthetic data is working, and so some people believe that synthetic data is, you know, where the models are, like, sort of self-bootstrapping.
1:02:12 So, just to explain to people, synthetic data is when the model creates data that it trains itself on?
1:02:12 That’s right.
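(To make that concrete, here’s a minimal sketch of a synthetic-data loop in Python. The llm, is_valid, and fine_tune names are hypothetical stand-ins, not any lab’s actual pipeline.)

```python
# Minimal sketch of self-bootstrapping on synthetic data: the model
# generates candidate training examples, a filter keeps the good ones,
# and the survivors go back into training.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to some language model."""
    return "Q: What is 2+2? A: 4"  # placeholder output

def is_valid(example: str) -> bool:
    # In practice: check answers, run a verifier, score with a judge model.
    return "A:" in example

def generate_synthetic_dataset(n: int) -> list[str]:
    dataset = []
    while len(dataset) < n:
        candidate = llm("Write one new math word problem with its answer.")
        if is_valid(candidate):  # keep only verifiable examples
            dataset.append(candidate)
    return dataset

print(generate_synthetic_dataset(3))
# The bootstrapping step would then be something like:
# fine_tune(model, generate_synthetic_dataset(100_000))
```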
1:02:17 And so, I guess the other really big shift is actually test time compute.
1:02:28 Like, literally, O1 Pro is this thing that you can pay $200 a month for, and it actually just spends more time at the query level.
1:02:31 It might come back five minutes, ten minutes later.
1:02:41 But it will be much more correct than the predict-next-token version that you might get out of standard ChatGPT.
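(One simple, public form of test-time compute is self-consistency voting: sample the model many times and take the modal answer. A minimal sketch, with sample_llm as a hypothetical stand-in for one sampled reasoning run; this isn’t how O1 works internally, just the same spend-more-at-query-time idea.)

```python
from collections import Counter
import random

def sample_llm(question: str) -> str:
    """Stand-in for one sampled chain-of-thought run; returns a final answer."""
    return random.choice(["42", "42", "41"])  # placeholder: mostly right

def answer_with_more_compute(question: str, n_samples: int = 32) -> str:
    # More samples = more compute spent at query time = more reliable answer.
    answers = [sample_llm(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(answer_with_more_compute("What is 6 * 7?"))
```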
1:02:49 Yeah, from what I can tell, that’s where a lot of the wilder things might come out.
1:02:54 You know, Level 4 AGI, as defined by OpenAI, is Innovators.
1:03:02 So, we have, you know, lots of startups, both YC and not YC, that are trying to test that out right now.
1:03:11 They’re trying to apply the latest reasoning models from OpenAI that are about to come out, you know, like O3 and O3 Mini.
1:03:17 And they’re trying to apply them to actually, you know, scientific and engineering use cases.
1:03:27 So, you know, there’s a cancer vaccine biotech company called Helix that did YC a great many years ago.
1:03:34 But what they’ve figured out is they can actually hook up some of these models to actual wet lab tests.
1:03:41 And, you know, that’s something that I’d be keeping track of, like, over the next couple years.
1:04:04 Like, if only by applying dollars to energy that then goes into these models, will there be real breakthroughs in the biological sciences, like being able to do new processes, or come to a deeper understanding of cancer or cancer treatment or anything in biotech?
1:04:15 The first experiments of that sort are happening in the next year, even in computer-aided design and manufacturing.
1:04:20 I mean, there’s a YC company called Camfer that is trying to apply this.
1:04:27 They actually were one of the winners of the recent YC O1 hackathon we hosted with OpenAI.
1:04:34 And their winning entry was literally hooking up O1 to airfoil design.
1:04:49 So, being able to increase the lift ratio just by having O1 spend more time thinking about it, and it’s able to create a better and better airfoil given a certain number of constraints.
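(The shape of that loop is easy to sketch: a model proposes design parameters, a simulator scores them under constraints, and the score gets fed back. Everything here, llm_propose, simulate, the parameter names, is invented for illustration; it is not Camfer’s actual system.)

```python
import random

def llm_propose(feedback: str) -> dict:
    """Stand-in for asking a reasoning model for new airfoil parameters."""
    return {"camber": random.uniform(0.0, 0.1),
            "thickness": random.uniform(0.08, 0.15)}

def simulate(params: dict) -> float:
    """Stand-in for a CFD run; returns a lift-to-drag style score."""
    return params["camber"] * 100 - abs(params["thickness"] - 0.12) * 50

best_params, best_score = None, float("-inf")
feedback = "start"
for step in range(20):  # more iterations = more "thinking time"
    params = llm_propose(feedback)
    if not 0.08 <= params["thickness"] <= 0.15:
        continue  # enforce the design constraints
    score = simulate(params)
    if score > best_score:
        best_params, best_score = params, score
    # Feed the result back so the model can refine its next proposal.
    feedback = f"last score {score:.2f}, best so far {best_score:.2f}"

print(best_params, best_score)
```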
1:05:09 So, you know, obviously, these are like relatively early and toy examples, but I think it’s a real sort of optimistic point around how do we increase the standard of living and push out like sort of the light cone of all human knowledge, right?
1:05:19 Like, you know, that is like a fundamental good for AI, you know, between that and the inroads it might make in education.
1:05:25 These are like some real, you know, white pill things that I think are going to happen over the next 10 years.
1:05:37 And these are the ways that AI becomes not, you know, sort of Terminator 2, but instead like, you know, sort of the age of intelligence as, you know, Sam pointed out in a recent essay.
1:05:48 Like, I think that if we can create abundance, if we can increase the amount of knowledge and know-how and science and technology in the world that solves real problems.
1:05:52 And, you know, I don’t think it’s going to happen on its own.
1:06:12 Like, each of these examples has, frankly, a YC startup right there on the edge, trying to take these models and apply them to new domains. It’s kind of like, you know, Google probably could have done what Airbnb did, but it didn’t, because Google’s Google, right?
1:06:26 And so in the same way, I think that whether it’s open AI or Anthropic or Meta’s lab or DeepSeek or some other lab that wins, like I think that we’re going to have a bunch of different labs and they’re going to serve a certain role, like pushing forward human knowledge that way.
1:06:48 And then, you know, my white pill version of what the world I want to live in is one where, you know, our kids or really any kid with agency can get access to a world-class education, can get all the way to the edge of, you know, what humans know about and are able to do or able to like sort of affect.
1:07:01 And then, you know, sort of empowered by these agents, empowered by ChatGPT or Perplexity or whatever agent, you know, it’s going to look like Her, from the movie, right?
1:07:07 Like we’re going to have these, you know, basically super intelligent entities that we talk to.
1:07:17 I’m hoping that they don’t have that much agency, you know, I’m hoping that actually they are just like sort of these inert entities that are your helpers.
1:07:23 And if that’s true, like that’s actually a great scenario to be in, you know, that’s the future I want to be in.
1:07:32 Like, I don’t think anyone wants to be, to borrow a term from Venkatesh Rao,
1:07:38 under the API line of these AIs, right?
1:07:41 Like, and I think that really passes through agency.
1:07:47 The minute a robot can do laundry, I’m in, I’ll be the first customer.
1:07:52 Yeah, there are YC companies and many startups out there that are actively trying to build that right now.
1:08:08 My intuition is that immediate progress could come from just ingesting all of the academic papers on a certain topic and either disproving ones that people think are still correct,
1:08:20 and thus cutting off research built on top of something that’s not likely to lead to anything, or making connections, because nobody can read all these papers, make the connections, and maybe make the next leap, right?
1:08:24 Like, not the quantum leap, but the next logical step. Who’s doing that?
1:08:25 I mean, that’s inevitable.
1:08:29 And then someone listening here might want to do it.
1:08:31 And then in which case they should apply to YC.
1:08:36 And maybe you should, we should do a joint request for startup for this next YC batch.
1:08:36 I like it.
1:08:37 I want equity there.
1:08:37 All right.
1:08:49 But it’s also interesting because then you think about that and you’re like, if I’m a government and I’m funding research, that research should all be public because I want people to be able to take it, ingest it, and make connections that we haven’t made yet.
1:08:54 And it seems like a lot of that research these days is under lock and key.
1:09:03 So you get this data advantage in the LLMs where some LLMs buy access or steal access or whatever, have access to it, and then some don’t.
1:09:07 How do you think about that from a data access LLM quality point of view?
1:09:08 Hmm.
1:09:09 It’s a good question.
1:09:12 I mean, yeah, it’s a bit of a gray area these days.
1:09:14 I mean, I’m not all the way in.
1:09:19 I don’t actually run an AI lab, and I’ve never actually been at one.
1:09:20 You run the meta AI lab.
1:09:21 Yeah, that’s right.
1:09:24 Not the meta AI lab.
1:09:27 Not meta the company, but like meta as in all of them.
1:09:28 Yeah.
1:09:30 That’s a good question.
1:09:42 I guess my main response to all of that, around the provenance of the data itself, is that at some point it feels like it actually is fair use, though.
1:09:44 I mean, that’s going all the way into case law.
1:09:45 Yeah.
1:09:48 Well, here’s another interesting twist on this then.
1:09:52 Like, so the airfoil, they designed this new airfoil.
1:09:53 Is that patentable?
1:10:00 I mean, at least in terms of like generated images, my understanding is generated images are not copyrightable.
1:10:09 But what if AI generates not only images but the science behind things? Maybe we’re at a point where, in the next couple of years, AI is doing more science than we are.
1:10:17 Like, is that going to be copyrightable or patentable, or withheld, or is that public-access, public knowledge?
1:10:24 Well, my intuition would say people are just going to take the outputs of, you know, these AI systems.
1:10:34 And as far as I know, you can submit a patent and there’s not a checkbox yet that says, did you use AI as a part of this?
1:10:38 So why wouldn’t, here’s another startup idea for anybody listening that we both want in on.
1:10:46 Why wouldn’t somebody just read all the patent filings in the US and say, make the next logical step for me, and patent that? Like, attempt to just patent it.
1:10:54 A one-person company could literally ingest the US patent database and say, okay, here’s the innovation in this.
1:10:58 What’s the next quantum leap, or even the next step, that’s patentable?
1:11:02 Okay, automatically file and…
1:11:02 You’re funded.
1:11:04 I’m in.
1:11:05 I got two ideas.
1:11:06 I love those.
1:11:07 I don’t know.
1:11:10 I think these are all totally open and fair game.
1:11:16 And then I guess maybe going back to regulation, that’s one of the stranger things that is happening right now.
1:11:26 You know, one of the pieces of discourse out there during the AI safety debates in the last year, for instance, is about bioterror.
1:11:34 And, you know, the wild thing is, you know, basically possessing instruments of creating bioweapons is already illegal.
1:11:43 So do you really need special laws for scenarios that are already covered by laws that exist?
1:11:51 I mean, that’s just like my sort of rhetorical question back when people are really, really worried about bioterror.
1:12:05 You know, I think there’s this funny example where AI safety think tanks were in Congress, and they were going to ChatGPT and typing in a doomsday example.
1:12:10 And it spits out this, you know, kind of like an instruction manual on like, well, you need to do this.
1:12:11 You’d have to acquire this.
1:12:13 You know, here’s this thing you would do in the lab.
1:12:16 And, you know, of course, like those steps are illegal.
1:12:29 And then I think cooler heads prevailed, in that the rebuttal was: someone next went to Google, entered the same thing, and got exactly the same response.
1:12:33 So, you know, yes, like I’ve seen Terminator 2 as well.
1:12:35 You know, am I worried about it?
1:12:37 You know, my p(doom) is 1%.
1:12:41 Like, I’m not totally, you know, unworried, right?
1:12:47 It would be a mistake to completely dismiss all worries.
1:13:11 It would also potentially be worse to prematurely optimize and basically make a bunch of worthless laws that slow down the rate of progress and prevent things like better cancer vaccines, or better airfoils, or, frankly, nuclear fusion, or clean energy, or better solar panels, or engineering and manufacturing
1:13:14 methods that are better than what we have today.
1:13:17 I mean, there’s so many things that technology could do.
1:13:24 Like, why are we going to stand in the way of it until we have a very clear sense, like, that is actually what we need to do?
1:13:26 What does scare you about AI?
1:13:28 I mean, it’s brand new, right?
1:13:30 So, the risk is always there.
1:13:32 You know, it’s so funny, though.
1:13:35 I mean, I’m not unafraid.
1:13:43 On the other hand, like, you know, this principle of you can just do things still applies to computers, right?
1:13:52 Like, if the system becomes so onerous, maybe you would go and say, let’s shut down the power systems.
1:13:55 Let’s shut down the data centers themselves.
1:13:57 Like, why wouldn’t people try to do that, right?
1:13:58 And they might do that.
1:14:02 And, you know, I think that people try to do that every day now.
1:14:02 Right.
1:14:03 Before AI.
1:14:03 Right.
1:14:11 If it became that bad, like, you know, I’m sure there would be some sort of human solution to try to fix this.
1:14:22 But, you know, just because I read about the Butlerian Jihad in the Dune series doesn’t mean that I need to live like that’s what’s going to happen.
1:14:28 So, you don’t believe there’s going to be one winner that dominates, like OpenAI or Anthropic or…
1:14:30 It might still happen, right?
1:14:34 You know, I think that there are lots of reasons why it won’t happen right now.
1:14:35 But, you know, who’s to say?
1:14:37 Everything is moving so quickly.
1:14:40 Like, I think that, you know, these questions are the right questions to ask.
1:14:42 I just don’t have the answers to them.
1:14:44 I know, but you’re the person to ask.
1:14:57 It’s like asking, like, I guess, will Windows or Mac win or, you know, we’re just literally living through that time where very, very smart people are, you know, fighting over the marbles right now.
1:14:57 Totally.
1:15:12 And then to me, though, like, working backwards, the best scenario is actually one where we have lots of marble vendors and you get choice and nobody has sort of too much control or, you know, cornering of all the resources.
1:15:23 What’s your read on Facebook almost doing a public good here and spending, you know, I think it’s over 50 billion at this point and just releasing everything open source?
1:15:30 Yeah, I think that, you know, what Zuck and Ahmad and the team over there are doing is, frankly, God’s work.
1:15:35 I think it’s great that they’re doing what they’re doing and I hope they continue.
1:15:37 What would you guess is the strategy behind that?
1:15:47 It’s kind of funny, because my critique of Meta would be that they very openly put it in everyone’s faces, right?
1:15:53 Like, you can’t use Facebook or Instagram, or even WhatsApp, without seeing, hey, Meta has AI now.
1:16:01 But the funniest thing is, like, I’m very surprised that they don’t think about sort of like the basic product part of it.
1:16:08 Like, I went to the Facebook Blue app recently, and I was going to Vietnam, and I just wanted to say, okay, Meta AI, you’re so smart.
1:16:12 Tell me which of my friends are in Vietnam. And it didn’t know anything about me.
1:16:14 I’m like, this is some basic RAG stuff.
1:16:15 Like, I get it.
1:16:19 Like, you’re already spending billions of dollars on training these things.
1:16:31 How about you spend a little bit of money on the most basic type of retrieval-augmented generation for me and my data? They’re just sort of sprinkling it in, and it’s a little bit of a checkbox.
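(The basic RAG he’s describing is cheap to sketch: embed the user’s own data, retrieve what’s relevant to the question, and put it in the prompt. The embed function and friend data here are hypothetical stand-ins; real systems call an embedding API and a vector store.)

```python
def embed(text: str) -> list[float]:
    """Toy stand-in for an embedding model; real systems call an API here."""
    return [float(ord(c)) for c in text[:8]]

def similarity(a: list[float], b: list[float]) -> float:
    # Negative squared distance: closer vectors score higher.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

friends = ["An in Hanoi, Vietnam", "Bob in Austin, Texas",
           "Chi in Da Nang, Vietnam"]
index = [(f, embed(f)) for f in friends]  # a tiny "vector store"

def build_prompt(question: str) -> str:
    q = embed(question)
    # Retrieve the top matches and stuff them into the prompt as context.
    top = sorted(index, key=lambda item: similarity(q, item[1]), reverse=True)[:2]
    context = "; ".join(f for f, _ in top)
    return f"Using this data about the user's friends: {context}\nAnswer: {question}"

# In a real system this prompt would then go to the model.
print(build_prompt("Which of my friends are in Vietnam?"))
```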
1:16:34 So, you know, I’m a little bit mystified, right?
1:16:38 Like, if they were very unified about it, I would really get it, right?
1:16:43 Like, clearly, the way that we’re going to interface with computers is totally going to change.
1:16:53 Take what Anthropic is doing with computer use: what I’ve heard is that basically every major lab is probably going to need to release something like that,
1:17:04 whether it’s an API the way Anthropic has, or literally built into the runtime that you run on your computer.
1:17:07 Like, there’s going to be a layer of intelligence.
1:17:12 Like, you can sort of see the shade of the very, very dumb version of it from Apple and Apple Intelligence.
1:17:17 It’s like sort of sprinkling in intelligence into notifications and things like that.
1:17:24 But I think it’s virtually guaranteed that the way we interface with computers will totally change in the next few years.
1:17:38 You know, given the rate of improvement in the models, as of today, for all the smartest things that you might want to do, there are still things that you have to go to the cloud for.
1:17:41 And then that opens a whole can of worms.
1:17:53 But there’s some evidence that, you know, in the frontier research of, you know, the best AI labs, it’s pretty clear that there’s sort of parent models and child models.
1:18:05 And so there’s distillation happening from the frontier, very largest models with the most data and the most intelligence down into smarter and smarter tiny models.
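(Distillation in one formula: train the small "child" model to match the big "parent" model’s output distribution, not just hard labels. A minimal sketch with toy logits; real setups use temperature-scaled soft targets over huge corpora.)

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(parent_logits, child_logits, temperature=2.0):
    # KL(parent || child) on temperature-softened distributions.
    p = softmax([x / temperature for x in parent_logits])
    q = softmax([x / temperature for x in child_logits])
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

parent = [4.0, 1.0, 0.5]  # the frontier model's scores for the next token
child = [2.0, 1.5, 1.0]   # the small on-device model's scores
print(distill_loss(parent, child))  # gradient descent pushes this toward 0
```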
1:18:14 There’s a claim this morning that a 1.5 billion parameter model, I think, got 84% on the AIME math test.
1:18:14 Oh, wow.
1:18:20 And 1.5 billion parameters is so small that it could fit on anyone’s phone.
1:18:20 Yeah.
1:18:25 So, you know, and that was DeepSeek R1, which just got released this morning.
1:18:29 So it hasn’t been verified yet, but I think it’s super interesting.
1:18:39 Like we are literally day to day, week to week, learning more that, you know, these intelligent models are going to be on our desktops, in our phones.
1:18:42 And, you know, we’re right at that moment.
1:18:45 So is the model better?
1:18:46 Is the LLM better?
1:18:50 Like what makes that model so successful with so few parameters?
1:18:50 Oh, I don’t know.
1:18:51 I haven’t tried it yet.
1:18:59 But, you know, I mean, some of it is you can be very specific about what parts of the domain you keep.
1:19:00 Okay.
1:19:08 And then, I guess math might be one of those things that just doesn’t require 1.5 trillion parameters.
1:19:14 It takes 1.5 billion to do an 84% job of it, which is pretty wild.
1:19:18 I mean, that’s another weird thing of AI regulation.
1:19:23 You know, I think Biden, for instance, his last EO was sort of this export ban.
1:19:29 And DeepSeek is a Chinese company releasing these models open source.
1:19:34 And I believe that they only have access to last generation NVIDIA chips.
1:19:41 And so, you know, some of it is like, why are we doing these, like, measures that, like, may not actually even matter?
1:19:47 It’s interesting, right, because you think of constraint being one of the key contributors to innovation.
1:19:48 Yeah.
1:19:56 By limiting them, you also maybe enable them to be better, because now they have to work around these constraints, or presumably have to work around them.
1:19:57 I doubt they’re actually sort of working around them.
1:19:58 That sounds right.
1:20:09 I mean, I think the awkward thing about AI regulation is there’s something like $4 billion of money sloshing around think tanks and AI safety organizations.
1:20:21 And, you know, someone was telling me recently: if you looked on LinkedIn at some of the people in this giant NGO morass of think tanks,
1:20:40 sorry if people are a part of that and getting mad at me right now hearing this, but there are a lot of people who went from being bioterror safety experts to, one entry right above that in the last six or nine months, becoming AI bioterror safety experts.
1:20:44 And I’m not saying that’s a bad thing, but it’s just, you know, very telling, right?
1:20:54 Like, anytime you have billions of dollars going into, you know, a thing, maybe prematurely, you know, people have to justify what they’re doing day to day.
1:20:55 And I get it.
1:20:56 So many rent seekers.
1:21:09 I want to foster an environment of more competition within sort of like general safety constraints, but I don’t think we’re pushing up against those safety constraints to the point where it would be concerning.
1:21:15 But we also operate in a worldwide environment where other people might not think the same way about safety that we do.
1:21:21 And then it’s almost irrelevant what we think in a world where other people aren’t thinking that way and it can be used against us.
1:21:32 I think we’re going into a very interesting moment right now, with the AI czar being Sriram Krishnan, who used to be a general partner at Andreessen Horowitz.
1:21:35 And I think that that’s a very, very good thing.
1:21:43 Like, we want people who have networks into the people who have built things, or who have built things themselves, as close to that as possible.
1:21:56 And, you know, I think that it is actually a real concern that the space is moving so quickly that, you know, if it takes legislation two years to make it through, that might be too slow.
1:22:11 And so it’s sort of even more important that the people who are close to the president and the people who are in the executive branch, at least in the United States, like they should be able to respond quickly, whether it’s through an EO or other means.
1:22:20 I don’t know what it’s like in the States, but in Canada, I was looking at the Senate the other day, just trying to see, is there anybody under, like, 60 in the Senate?
1:22:26 Like, does anybody understand technology, or did they all grow up in a world where Google became a thing after they were already adults?
1:22:42 And it strikes me that there’s a difference between the pace of technology improvement and the pace of law or regulation, but also the people enacting those laws have a different pace as well, right?
1:22:44 Like, our kids are in a different world.
1:22:49 Like my kids don’t know what a world without AI looks like, neither do yours.
1:22:52 But we do, you know, cause we’re, we’re similar age.
1:22:58 And then, you know, our parents have this other thing where it’s like, well, we used to have landline phones, and all of these other things.
1:23:04 And it strikes me that maybe those people shouldn’t be regulating AI.
1:23:05 That sounds right.
1:23:08 I mean, I think it’s more profound now than ever before.
1:23:18 I mean, the other thing that’s really wild to think about is, what comes to mind is that meme on the internet where there’s the guy at the dance.
1:23:24 Everyone else is dancing, and he’s in the corner, and it’s like, they don’t know.
1:23:32 It’s like, if you go almost anywhere in the world, people maybe have heard of ChatGPT.
1:23:35 They definitely haven’t heard of Anthropic or Claude.
1:23:35 Yeah.
1:23:38 Um, you know, it just hasn’t touched their lives yet.
1:23:47 And then meanwhile, like the first thing they do is they look at their smartphone and, you know, they’re using Google and, you know, they’re addicted to TikTok and things like that.
1:23:58 So do you think we get to a point where, and this is very Ender’s Game, if I remember the movie correctly, you pull up an article on a major news site,
1:24:10 I pull up an article on the same news site, and at the base it’s the same article, but now it’s catered to you and catered to me based on our political leanings, or what we’ve clicked on, or what we watched before?
1:24:17 Well, my, my hope is that there’s such a flowering of choice that, you know, it’s going to be your choice, actually.
1:24:25 I mean, the difficulty is like, well, then you have a filter bubble, but you know, that exists today with social media today.
1:24:25 Yeah.
1:24:28 Um, okay.
1:24:33 So here’s a white pill that I don’t know if it’s going to happen, but I hope it happens.
1:24:50 You know, one of the reasons why it’s so opaque today is literally that X, or Twitter before it was called X, had thousands of people working at that place.
1:24:54 And, um, you know, you needed thousands of people maybe, right.
1:24:59 Or I guess the tricky thing is, Elon came in and quickly axed like 80 or 90% of the people.
1:25:02 And it turns out you didn’t need 80 or 90% of the people.
1:25:06 So that’s like another, you know, form of founder mode taking hold.
1:25:17 But, like it or not, I can’t go into Twitter today and tool around with my For You. Like, my For You is written for me, right?
1:25:21 It’s in some server, some place, and there’s a whole infrastructure thing.
1:25:21 Yeah.
1:25:22 You don’t control it.
1:25:34 But it’s conceivable. Today, with codegen, engineers are basically writing code about five or ten times faster than they would before.
1:25:38 Um, and that sort of capability is only getting faster and better.
1:25:48 Like, it’s sort of conceivable that you should be able to just write your own algorithm, and maybe you’ll be able to run it on your own, and you’ll want choice.
1:25:55 And so, you know, the kind of, um, regulation that I would hope for is actually open systems, right.
1:25:58 Like I would want to actually write my own version of that.
1:26:04 Like, the best version of that is actually, I want to see an explanation, you know, of
1:26:09 my For You, laid out very plainly.
1:26:16 And then I want to be able to see if I can convert that into the one that I want, or I can choose from, you know, 20 different ones.
1:26:22 Two ideas here, you know, as you’re mentioning that one, like your list could be your default.
1:26:28 Like I want this list to be, but the other one is like, maybe there’s just 20 parameters and you get to control those parameters.
1:26:33 And it could be, uh, you know, you could consider it political as one parameter from left to right.
1:26:34 Right.
1:26:38 Um, you can, you could be like happy, sad, like you could sort of filter in that way.
1:26:40 I know that’d be super interesting.
1:26:50 So, I mean, if, if regulation is coming, like give me open systems and open choice, and that’s, you know, sort of the path towards liberty and, you know, sort of human flourishing.
1:26:54 Um, and then the opposite is clearly what’s been happening, right?
1:27:00 Like, uh, Apple, you know, closing off the iMessage protocol so that, you know, it’s literally a moat.
1:27:04 Like, oh no, like that person has, uh, an Android.
1:27:07 So, they’re going to turn our really cool blue chat into a green chat.
1:27:08 We don’t talk to those people.
1:27:09 Yeah, right.
1:27:10 I know, right.
1:27:19 I mean, that’s just a pure example of it. Even today, Apple is only opening it up a little bit more with RCS.
1:27:26 And those moves are actually in reaction to the work of Jonathan Kanter and the DOJ.
1:27:27 Yeah.
1:27:43 So, there are efforts out there that are very, very much worth our attention, around reining in big tech, and reining in the ways in which these sort of subtle product decisions only make money for big tech.
1:27:46 And they reduce choice and, you know, ultimately reduce liberty.
1:28:01 It’d be super interesting if, when you’re big tech and you come up with something like this, that advantage eroded automatically over time, in the sense that you might have a 12-month lead.
1:28:09 But what you’re really trying to do, if you’re a government trying to regulate it, is foster continuous innovation. It’s like, I don’t want to give you a golden ticket.
1:28:12 I want you to have to earn it and you can’t be complacent.
1:28:13 So, you have to earn it every day.
1:28:20 And so, yeah, maybe you have, like, a two-year window on the blue bubbles, and then you have to open it up.
1:28:22 But now you’ve got to come up with the next thing.
1:28:24 You’ve got to, you push forward instead of just coasting.
1:28:28 Like, Apple really hasn’t come up with a ton lately.
1:28:28 Yeah.
1:28:40 And then I think the reason why it’s so broken is actually that government ultimately is very manipulable by money.
1:28:41 Yeah.
1:28:43 And, you know, that’s sort of the world we live in.
1:28:45 Do you think that’ll be different under Trump?
1:28:50 I don’t tend to get into politics here, but so many people in the administration are already incredibly wealthy.
1:28:51 Oh, yeah.
1:28:52 That’s the hope.
1:28:56 I mean, uh, we’re friends with a great many people who are in the administration.
1:29:02 We’re very hopeful and we’re, you know, wishing them, we’re hoping that really great things come back.
1:29:08 And, you know, uh, in full transparency, like, I think I was too naive and didn’t understand how anything worked in 2016.
1:29:10 That’s not what I was saying in 2016.
1:29:15 I was fully, you know, an NPC in the system.
1:29:18 Um, but, you know, also that being said, I’m a San Francisco Democrat.
1:29:27 So I really have very little special knowledge about how the new administration is going to run,
1:29:31 except that I really am rooting for them.
1:29:37 I’m hoping that they are able to be successful and to, you know, make America truly great.
1:29:45 Like I am a hundred percent, you know, even though I didn’t vote for Trump, uh, I am 110%, you know, down for making America truly awesome.
1:29:50 What do you believe about AI that few people would agree with you on?
1:29:52 It might be that point that I just gave you.
1:30:01 Like, I think that a lot of people are hoping that the AI becomes self-aware or, you know, has agency.
1:30:18 And from here, the kind of world we live in will be very different if AI entities are literally given agency. Maybe the line is actually: will we have an AI CEO?
1:30:29 Like, will we have a company that just literally gives in to whatever the central entity says: that’s what we’re going to do,
1:30:35 for every problem? You know, it’s sort of the exact extreme opposite of founder mode.
1:30:36 It’s like AI mode.
1:30:49 Like, will we live in a world in the future where corporations decide, you know what, a human is messy and kind of dumb and doesn’t have a trillion-token context window and won’t be able to do what we want it to do,
1:30:55 so we would trust an AI, an LLM-based consciousness, more than a human being?
1:30:57 Like, I’d be worried about that.
1:31:00 I was thinking about this last night, watching the football game, actually.
1:31:03 And I was like, why are humans still calling plays?
1:31:15 Like, yes, for coaching, but for calling plays in the game, an AI, I feel like at this point, with, like, O1 Pro or something, would be ahead of where we are as humans.
1:31:17 I’m wondering if teams should try that.
1:31:18 That’d be super interesting.
1:31:20 Oh, that’s going to be the next level of money ball then.
1:31:22 We’ll just try it in preseason, right?
1:31:25 Like, or try it in a regular season game.
1:31:30 I don’t know, but it strikes me that like, they would know who’s on the field, who’s moving slower than normal.
1:31:36 Like, all these, a million more variables than we can even comprehend or compute, and historical data.
1:31:45 You know, the last 16 weeks this team has played, you know, when you run to the right after they just subbed, or something. Like, they can see these correlations that we would never pick up on.
1:31:47 Not causation, but correlation.
1:31:48 It’d be super fascinating.
1:32:00 Yeah, I mean, what’s funny about it is, I think in those sorts of scenarios, you might just see a crazy speed-up, because you take out the human effects.
1:32:11 I mean, when you look at organizations and how they make decisions, so many of them, you know, there’s sort of like a Straussian reading of them.
1:32:15 There’s sort of like at the surface level, you’re like, I want to do X.
1:32:20 But like right below that is actually something that is not about X.
1:32:25 You know, for a corporation, it has to be like, we have a fiduciary duty to our shareholders.
1:32:27 And we need to maximize profit, for instance.
1:32:37 And then right below that, corporations, or entities of any set of people, do all sorts of things not for reason X on the top.
1:32:47 It’s actually, like, oh, actually the people who are really in power don’t like that person, or they rub them the wrong way, or…
1:32:48 Or human.
1:32:49 Yeah, exactly.
1:32:49 Right.
1:32:53 It’s like, these are like extremely influenceable systems.
1:32:56 Your idea might be best, but I’m going to disagree because it’s your idea, not my idea.
1:32:57 Right.
1:33:07 And then I think that’s why, in general, we really hate politics inside companies because, you know, it sort of works against the collective.
1:33:14 Do you think we’d ever see a city, like a mayor, then first before even a CEO as like an AI mayor?
1:33:20 You know, I guess like now that we’re sitting here thinking about it, it’s like sort of conceivable.
1:33:26 But, you know, in sort of all of these cases, I would much rather there be a real human being.
1:33:27 Kind of like a plane, right?
1:33:31 Like we want a physical pilot, even though the plane is probably better off by itself.
1:33:32 Yeah, that’s right.
1:33:34 And that might be what ends up happening.
1:33:41 Like even if 90% of the time you’re using the autopilot, like you always need a human in the loop.
1:33:46 And, you know, I’d be curious if that turns out to be one of the things that society learns.
1:34:01 One of the crazier ideas I’ve been talking to people about, that I feel would be a fun sci-fi book, is just speculation playing out on how this interacts with nation-states.
1:34:07 Like, you know, China obviously is run by a central committee and arguably Xi Jinping.
1:34:16 You know, seemingly, if you had ASI, you would only want, you know, sort of the central committee to have it.
1:34:27 And so that might turn into a very specific form of it: China might end up having one ASI that is totally centrally controlled.
1:34:31 And then everything else about it, you know, sort of comes out of that.
1:34:40 And then, I mean, controversially, I think often they’re trying to be benevolent, right?
1:34:42 Like if you spend time in China, it’s incredibly clean.
1:34:46 It’s, you know, I’m sure there’s all sorts of crazy stuff that happens that is quite unjust.
1:34:49 Just, you know, I have no idea.
1:34:55 It’s not really even my place to like argue one way or another what it’s like to be in China.
1:34:58 But that’s an interesting idea.
1:35:09 It’s like, you know, in that society, unless there are other changes there, you can sort of count on a single artificial superintelligence
1:35:13 sort of setting how everything works over there.
1:35:27 I mean, probably internal to the Politburo itself, they’re going to have to have all these discussions about what do we do with this ASI, and where does the ultimate agency of that nation come from?
1:35:37 Going back to something you said earlier, I think the ultimate combination, at least for right now, is human and machine intelligence working in concert where the machine intelligence might be the default and then the human opts out.
1:35:38 Right.
1:35:40 And that’s exercising judgment.
1:35:41 It’s like, no, we’re not.
1:35:54 And when you look at chess, that tends to be the case where the best players are using computers, but they know when, oh, there’s something the computer can’t see here, or there’s an opportunity that it just doesn’t recognize.
1:36:00 And I think it was Tyler Cowen who said that, like he had a word for it, mixing the technologies.
1:36:01 Fascinating.
1:36:01 Yeah.
1:36:05 And then, yeah, the question is like, well, what, how does America approach it?
1:36:07 Like potentially it’s much more laissez-faire.
1:36:21 And then in that case, my argument would be that the most American version of it is that you and I have our own ASI, and each citizen should be issued an ASI and be taught how to get the most out of it.
1:36:23 And, you know, maybe it needs to be embodied with a robot.
1:36:28 Like we should all, you know, we should all be Superman in that, in that sense.
1:36:38 And that would be the most empowering version of a society of free people created equal, right?
1:36:44 And then there might be other versions. I mean, I’d be curious, what’s the European version of it?
1:36:53 Maybe that version has all the checkboxes, like, every decision has to be marked: was this AI-assisted or not?
1:36:58 And like, let’s check the provenance on like, you know, how that AI was like trained.
1:37:06 And I mean, I don’t know, there are a billion different ways all of these different governments are going to approach this technology.
1:37:12 What are the smartest people at the leading edge of AI talking about right now?
1:37:16 I mean, you know, the hard part is like, I spend most of my time not with those people.
1:37:21 I spend most of my time with people who are commercializing it.
1:37:31 So, so the very, very smartest people are clearly the people who are in the AI labs actually actively doing, you know, sort of creating these models.
1:37:40 But from the people I know who are in those rooms, it sounds like test-time compute is really it.
1:37:47 You know, the reasoning models are sort of the thing that will really come to bear this year.
1:37:50 Like, we’re sort of coming to understand that right now.
1:37:59 You know, for now, it sounds like pre-training might have hit some sort of scaling limit, you know, the nature of which I don’t understand yet.
1:38:02 You know, there’s a lot of debate about it.
1:38:08 You know, will there be new 4o-style models that have more data or more compute?
1:38:18 And seemingly, there are just rumors of training runs gone awry, that basically the scaling laws may have petered out, but I don’t know.
1:38:24 So, we have sort of the LLM, and we have the reasoning; the LLM and the reasoning model are different, correct?
1:38:30 The way OpenAI talks about O1, they’re sort of connected, but like different steps.
1:38:30 Okay.
1:38:32 And so, we have progress there.
1:38:33 Yeah.
1:38:34 Then we have progress with the data.
1:38:37 And then we have progress with inference.
1:38:38 Yep.
1:38:41 Well, we just don’t have enough GPUs, really.
1:38:48 Like, you know, I think what’s funny is, I’m still pretty bullish on NVIDIA, in that they more or less have…
1:38:48 Oh, talk to me about this.
1:38:54 …like the monopoly on, you know, sort of the best price-performance and…
1:38:56 So, you think this is going to continue?
1:39:01 Like these trillions of dollars of investments in AI.
1:39:05 Basically, you know, I think you can live in two different worlds.
1:39:08 One world says like, all of this is hype.
1:39:10 We’ve seen AI hype before.
1:39:12 Like, it’s not going to pan out.
1:39:18 And then I think the world that we’re spending a lot of time in, like the world really wants intelligence.
1:39:25 And then the scary version of this is like, yes, some of it actually is labor displacement, right?
1:39:29 Like in the past, what tech would do is we’d be selling you hardware.
1:39:32 We’d be selling you a computer on every desk.
1:39:33 Like everyone needs a smartphone.
1:39:36 You know, we’re selling you Microsoft Office.
1:39:37 We’re selling you package software.
1:39:41 We’re selling you Oracle, SQL Server.
1:39:47 Like, you know, we’re selling, you know, SaaS apps like Salesforce.
1:39:52 Like, you know, it’s $10,000, you know, per seat per year, that kind of thing.
1:40:04 Or we’re selling, you know, classically Palantir was selling, you know, million dollar or $10 million ACV, you know, very specific vertical apps, right?
1:40:10 And so all of those things are selling software or hardware, and that’s like selling technology.
1:40:23 And so increasingly what we’re starting to see is like, you know, especially the bleeding edge is probably customer support and all of the things that you would use for a call center.
1:40:37 Like those are sort of the things that are already so well defined and specified, and there’s a whole training process for people in, you know, usually overseas to do these jobs.
1:40:49 And AI is now just coming in, and, you know, the speech-to-text and text-to-speech, those things are indistinguishable from human beings now.
1:41:06 And you can train these things, the evals are good, the prompting is good. Going back to what we were saying earlier, what we’re seeing, like it or not, is actually replacing labor.
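(The shape of that call-center loop is speech-to-text, an LLM turn with the business context, then text-to-speech back to the caller. Every function in this sketch is a hypothetical stand-in for a real STT/LLM/TTS service, not any particular company’s stack.)

```python
def speech_to_text(audio: bytes) -> str:
    """Stand-in for an STT service."""
    return "I'd like two bottles of the 2019 Riesling."

def llm_agent(transcript: str, order_state: dict) -> str:
    # Real systems prompt the model with the catalog, policies, and order state.
    order_state["items"].append(transcript)
    return "Great choice. Anything else, or shall I confirm the order?"

def text_to_speech(reply: str) -> bytes:
    """Stand-in for a TTS service."""
    return reply.encode()

def handle_turn(audio: bytes, order_state: dict) -> bytes:
    transcript = speech_to_text(audio)           # caller audio -> text
    reply = llm_agent(transcript, order_state)   # text -> agent decision
    return text_to_speech(reply)                 # text -> audio back to caller

state = {"items": []}
print(handle_turn(b"...", state), state)
```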
1:41:12 Has anybody created an AI call center from scratch and now is ingesting customers?
1:41:20 Yes, I mean, I funded a company in this very current batch that, you know, it’s called Leaping AI.
1:41:26 They are working with some of the biggest wine merchants in Germany, which is fascinating.
1:41:30 So, I mean, that’s another fascinating thing.
1:41:36 Like these things speak all human, you know, they certainly speak all the top languages very, very well and are indistinguishable.
1:41:46 And, you know, I think 80% of the ordering volume for some of their customers is entirely no human in the loop.
1:41:49 I would love to see government call centers go to this.
1:41:50 Yeah, exactly.
1:41:52 It would scale so much better.
1:41:58 I was on hold for, like, three hours the other day for, like, a 15-minute question that I needed answered.
1:42:10 And it’s like, well, this could be done so much quicker by something that’s not a human, and probably more securely and reliably, and more consistently, regardless of who’s on the other end or how they’re talking.
1:42:12 How would you define AGI?
1:42:20 I guess the funniest thing is Microsoft, I think, is defining it when it gets its hundred billion dollars back.
1:42:33 But I am sort of skeptical of that, because by that standard, I think basically only Elon Musk would qualify as a human general intelligence.
1:42:40 Like AGI, the thing is like in a lot of domains, it feels like it’s here, actually.
1:42:57 I mean, can it have a conversation with someone and give incredibly good wine pairing recommendations, and have a perfectly fine interaction that’s indistinguishable from a real human, or even better than human?
1:43:02 And also take orders for very expensive wine, and have that just work?
1:43:03 Yeah.
1:43:04 Yes, like that’s happening right now.
1:43:05 Yeah.
1:43:17 So, I think in a lot of domains, and this is sort of the year where, maybe there’s like 5% or 10% of things where it’s hitting the Turing test and really satisfying that.
1:43:24 But, you know, I think maybe this is a year where it goes from, like, 10% to 30% and the year after that it doubles again.
1:43:28 And, you know, the next few years are, like, actually the golden age of building AI.
1:43:29 Totally.
1:43:41 I think, like, I’m super optimistic, at least for the next, like, 5 years, about the things we’ll discover, the progress we’ll make, the impact we’ll have on humanity and a lot of the things that plague us.
1:43:45 What do you, I want to get into how you use AI a little bit.
1:43:48 What do you know about prompting that most people miss?
1:43:51 I mean, I’m mainly a user.
1:43:55 You know, I spend a lot of time with people who spend a lot of time in prompts.
1:44:00 Probably the person I would most point people to is Jake Heller.
1:44:02 So, he’s the founder of Casetext.
1:44:05 He was one of the first people to get access to GPT-4.
1:44:17 And we think of him at YC as the first man on the moon, in that he was the first to successfully commercialize GPT-4 in the legal space.
1:44:30 What he said was that, you know, they had access to GPT-3.5 and it basically hallucinated too much to be used for actual, like, legal work.
1:44:34 Like, lawyers would see one wrong thing and say, like, oh, I can’t trust this.
1:44:46 GPT-4, he found, was different: with good evals, they could program the system in a way that it would actually work.
1:44:58 And what he says, he figured out was if GPT-4 started hallucinating for them, they realized that they were doing too much work in one prompt.
1:45:06 They needed to take that thing that they asked GPT-4 to do and then break it up into smaller steps.
1:45:17 And then they found that they could get deterministic, human-quality output from GPT-4 if they broke it down into steps.
1:45:18 Oh, interesting.
1:45:26 And what he needed to do, it’s sort of equivalent to Taylor’s time-and-motion studies in factories.
1:45:30 It feels like that’s what he did for what a lawyer does.
1:45:44 Let’s say you have to put together a chronology of what happened in a case. He’s a real-life lawyer, which is sort of unusually perfect for figuring out this prompting step.
1:45:56 Like, he realized that he needed to look at what a real lawyer would do and literally replicate that, Taylor time-and-motion style, in the process and prompts and workflow.
1:46:05 So, for instance, doing this type of summarization, he would have to go through and read all the materials.
1:46:14 And then this is why apparently lawyers have, you know, sort of their many, many different colored little flags and highlighters and things like that.
1:46:26 They just get very good at, you know, doing a read through paragraph by paragraph, sentence by sentence, and pulling out the things that are relevant and then sort of synthesizing it.
1:46:31 And so, you know, early versions of Casetext, and a lot of it today, I think, is still just doing that.
1:46:34 It’s like, what is a specific thing that a human does?
1:46:39 Break it down into the very specific steps that a real human would do.
1:46:44 And basically, if it breaks, you’re just asking that step to do too many things.
1:46:46 So, like, break it down into even smaller steps.
1:46:48 And somehow that worked.
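(A minimal sketch of that pattern, prompt chaining: instead of one giant prompt, break the job into small, checkable steps. The llm helper, the prompts, and the output format are hypothetical stand-ins for illustration, not Casetext’s actual prompts.)

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call."""
    return "1999-03-01: contract signed"  # placeholder model output

def build_chronology(documents: list[str]) -> list[str]:
    events = []
    for doc in documents:
        for paragraph in doc.split("\n\n"):
            # Step 1: one tiny, checkable question per paragraph.
            relevant = llm("Does this paragraph mention a dated event? "
                           f"Answer yes or no.\n\n{paragraph}")
            if "yes" in relevant.lower() or ":" in relevant:
                # Step 2: extract just the event, nothing else.
                events.append(llm("Extract the date and event, formatted "
                                  f"as 'YYYY-MM-DD: event'.\n\n{paragraph}"))
    # Step 3: an equally narrow final step: sort and dedupe.
    return sorted(set(events))

print(build_chronology(
    ["The parties met.\n\nOn March 1, 1999 the contract was signed."]))
```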
1:46:57 And basically, this is the blueprint that I think a lot of YC companies and AI vertical SaaS startups are following across the whole industry right now.
1:47:09 They literally model out what a human would do in knowledge work, break it down into steps, and then have evaluations for each of those prompts.
1:47:20 And then as the models get better, because you have, you know, what we call the golden evals, basically, you just run the golden evals against, you know, the newest model.
1:47:25 Like, you know, 4o comes out, Claude 3.5 comes out, DeepSeek comes out.
1:47:32 You know, you have evals, which is basically a test set of prompt, context window, data, and output.
1:47:36 And you can actually, you know, what’s funny is, like, it’s even fuzzy that way.
1:47:43 Like, you can even use LLMs in the evals themselves to, you know, score them and figure out, you know, does it make sense?
1:47:45 Can you give us an example of an eval?
1:47:47 Like, make it tangible for people?
1:47:47 Oh, yeah.
1:47:48 It’s really straightforward.
1:47:49 It’s just a test case, right?
1:48:03 So, given this prompt and this data, evaluate the output, and it usually maps directly to something that is true/false, yes/no, something that is pretty clear.
1:48:09 Like, you know, let’s say there’s a deposition and, you know, someone makes a certain statement, right?
1:48:19 You might have a prompt that is, like, you know, is this, you know, is what this person said in conflict with, you know, any of the other witnesses?
1:48:22 Or, I don’t know, I’m totally making this example up.
1:48:22 Yeah, yeah.
1:48:27 Like, this is the kind of thing that you can do, you know, at a very granular level.
1:48:29 You might have thousands of these.
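(A golden eval, as described, is just a frozen test case: prompt plus data in, expected answer out; when a new model ships, you rerun the suite. The cases and the llm call below are invented for illustration.)

```python
def llm(model: str, prompt: str) -> str:
    """Stand-in for calling the named model."""
    return "yes"

golden_evals = [
    {"prompt": "Does the witness statement conflict with the deposition?",
     "data": "Witness: 'I was home.' Deposition: 'I saw him downtown.'",
     "expected": "yes"},
    {"prompt": "Is a date mentioned?",
     "data": "The contract is undated.",
     "expected": "no"},
]

def run_suite(model: str) -> float:
    passed = 0
    for case in golden_evals:
        output = llm(model, f"{case['prompt']}\n\n{case['data']}")
        if output.strip().lower() == case["expected"]:  # clear yes/no grading
            passed += 1
    return passed / len(golden_evals)

# New model drops? Just rerun the same frozen suite against it:
print(run_suite("some-new-model"))
```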
1:48:40 And then that’s how, you know, Jake Heller figured out he could create something that would, you know, basically do the work of hundreds of, you know, lawyers and paralegals.
1:48:46 And it would take, you know, a day or an afternoon instead of, you know, three months of discovery.
1:48:47 That’s fascinating.
1:48:50 How do you use AI with your kids?
1:48:54 Oh, I love making stories with them.
1:48:58 So, you know, what I find is O1 Pro is actually extra good now.
1:49:03 So, yeah, actually, there’s, like, an interesting thing that’s happening right now.
1:49:14 And I saw it up close and personal this morning, looking at some blog posts about DeepSeek R1, which is DeepSeek’s reasoning model.
1:49:22 I was reading Simon Willison’s blog post about getting DeepSeek R1 running.
1:49:28 It’s one of the first open-source versions of the reasoning models.
1:49:42 And so, what we just described, with how Jake Heller broke it down into chains of thought to make Casetext work, it turns out that that maps to basically how the reasoning stuff works.
1:49:58 And so, the difference between what Jake did with GPT-4 when it first came out, and what O1 and O1 Pro are maybe doing, and what DeepSeek R1 is clearly doing, because it’s open source and you can see it,
1:50:08 is that those steps, breaking the problem down into steps, and the sort of metacognition of whether it makes sense at each of those micro-steps,
1:50:13 that’s where, in theory, this reasoning is actually happening.
1:50:17 It’s actually happening in the background for O1 and O3.
1:50:23 And if you use ChatGPT, you’ll see the steps, but it’s like a summary of it.
1:50:24 Right.
1:50:27 And so, it’s, you know, I just only saw it this morning.
1:50:29 I mean, this is such new stuff.
1:50:35 Like, I was hoping that someone would do an open-source reasoning model just so we could see it.
1:50:37 And that’s what it was.
1:50:41 I think Simon’s blog post this morning showed, here’s a prompt.
1:50:48 And then he could actually see, I think he said, pages and pages of the model talking to itself.
1:50:51 Literally, you know, does this make sense?
1:50:53 Like, can I break it down into steps?
1:51:02 So, what we just described as a totally manual action that a really good prompt engineer CEO like Jake Heller did,
1:51:07 and he sold his company, Casetext, for almost half a billion dollars to Thomson Reuters.
1:51:15 That is actually very similar to what the model is capable of doing on its own in a reasoning model.
1:51:20 And that’s what it’s doing when it’s doing, like, test time compute.
1:51:24 It’s actually just spending more time, you know, thinking.
1:51:27 Before it spits out the final answer.
1:51:35 So, how do you create a competitive advantage in a world like that where perhaps that company had an advantage for a year or two,
1:51:39 and now all of a sudden it’s, like, built into the model for free?
1:51:44 Yeah, I mean, I think, you know, ultimately the model itself is not the moat.
1:51:47 Like, I think that the evals themselves are the moat.
1:51:51 I don’t have the answer yet.
1:51:56 Basically, for now, maybe it’s a toss-up.
1:52:01 If you’re a very, very good prompt engineer, you will have far better golden evals,
1:52:08 and the outcomes will be much better than what O3 or, you know, DeepSeek R1 can do,
1:52:12 because it’s specific to your data, and it’s much more in the details.
1:52:16 I think that that remains to be seen.
1:52:21 Like, the classic thing that Sam Altman has told YC companies and, you know, told most startups, period,
1:52:25 is you should count on the models getting better.
1:52:30 So, if that’s true, then, you know, that might be a durable moat for this year,
1:52:34 but it might not last past that. I mean, O3 we haven’t even seen yet.
1:52:37 The results seem, like, fairly magical.
1:52:43 So, it’s possible that advantage goes away even as soon as this year.
1:52:46 But all the other advantages still apply.
1:52:51 Like, you know, one thing that a lot of our founders who are getting the $5 to $10 million a year in revenue
1:52:58 with five people in a single year are saying is, you know, yes, there’s prompting, there’s evals,
1:53:01 like, there’s a lot of magic that, like, is sort of mind-blowing.
1:53:06 But what doesn’t go away is building a good user experience,
1:53:13 building something that a human being who does that for a job sees that, knows that’s for me,
1:53:18 understands how to start, knows what to click on, how to get the data in.
1:53:28 And so, you know, one of the funnier quips is that, you know, the second best software in the world for everything
1:53:36 is ChatGPT, because you can basically copy and paste, you know, almost any workflow or any data.
1:53:41 And it’s like the general purpose thing that, you know, you can just drop data into it.
1:53:49 And it’s the second best because the first best will be a really great UI made by a really good product designer
1:53:57 who’s a great engineer, who’s a prompt engineer, who actually creates software that doesn’t require copy-paste.
1:53:59 It’s just like link this, link that.
1:54:00 Okay, now this thing is now working.
1:54:07 And so I think that, at the end of the day, the moats are not actually different.
1:54:12 It’s still, you still have to build good software, you still have to be able to sell,
1:54:19 you have to retain customers. But you just don’t need, like, a thousand people for it anymore.
1:54:20 You might only need six people.
1:54:22 Okay, I want to play a game.
1:54:27 You have 100% of your net worth, and you have to invest it in three companies.
1:54:28 Oh, God. Okay.
1:54:32 And so the first company, you have to invest half, and then 30%, and then 20%.
1:54:34 So altogether, 100%.
1:54:40 Which companies out of the big tech companies, how would you allocate that between,
1:54:45 here’s my biggest bet, my second biggest bet, my third, from today going forward?
1:54:54 Okay, I guess, you know, is it cheating to say I’d put even more money into the YC funds that I already run?
1:54:55 But that’s a cop out.
1:54:56 That’s a cop out.
1:54:57 That goes without saying.
1:55:06 I think that it’s very unusual, just because, you know, we end up being the commercialization arm of every AI lab, is what I realize.
1:55:11 But short of that, I mean, maybe NVIDIA, Microsoft, Meta.
1:55:13 In that order?
1:55:14 Probably.
1:55:15 Why?
1:55:17 I mean, NVIDIA just, you know, has an out-and-out lead.
1:55:20 Like, for now, they’re just so far ahead of everyone else.
1:55:33 I mean, it can’t last forever, but I think that, you know, the demand for building the infrastructure for intelligence in society is going to be absolutely massive.
1:55:39 And maybe on the order of the Manhattan Project, and we just haven’t really thought about it enough, right?
1:55:56 Like, it’s entirely conceivable, like, if, say, like, level four innovators turns out to work, like, you know, it’s sort of the meta project, because then it’s like the Manhattan Project of instantiating more Manhattan Projects.
1:56:16 Like, actually, you know, you could imagine, if with more test time compute you could do the work of, you know, 10,000 200-IQ Einsteins working on bringing us, you know, basically unlimited clean energy.
1:56:16 Yeah.
1:56:23 Like, that alone will, I mean, if anything, like, that’s probably the bigger problem right now.
1:56:25 Like, we know that the models will continue to get better.
1:56:31 We know that, you know, the demand for intelligence will be unending.
1:56:52 And then, you know, even going back to the robotics question, it’s like, if we end up making, you know, universal basic robotics, the limit will still actually be, you know, sort of the climate crisis and the energy available to human beings, right?
1:56:58 And, you know, maybe solar can do it, but maybe there are lots of other sorts of solves.
1:57:05 But, you know, I think energy and access to energy is sort of the defining question at that point.
1:57:22 Like, everything else you could solve, or, you know, if it’s in the realm of science and engineering, then in theory, between robots and, you know, more and more intelligence, we could sort of figure these things out.
1:57:25 But not if we run out of energy.
1:57:28 Okay, why Microsoft and why Meta next?
1:57:33 I mean, I think Microsoft has just really, really deep access to OpenAI.
1:57:37 And I think OpenAI is probably, you said public companies, right?
1:57:37 Yeah, yeah.
1:57:49 And so, you know, I think there’s a non-zero, pretty large percentage of, like, the market cap of Microsoft that I think is pretty predicated on Sam Altman and the team at OpenAI continuing to be successful.
1:57:50 Totally.
1:57:53 And then why Meta?
1:58:00 I mean, I think Meta is sort of the dark horse because, like, they are amassing talent and then they have crazy distribution.
1:58:06 And I think, you know, I just would never count Zuck out.
1:58:12 I think that, you know, crazy as it sounds, it’s super smart that he is on that.
1:58:16 You know, he’s always thinking about what is the next version of computing.
1:58:23 Like, so much so that he probably put more money than he should have into AR and that was maybe premature.
1:58:25 He might still end up being right there.
1:58:37 But, you know, AI for a fraction of what he’s put into AR is likely to push forward all of humanity and, you know, and accelerate technological progress in a really profound way.
1:58:40 I want to switch subjects a little bit.
1:58:43 A few years ago, you met with MrBeast.
1:58:44 Oh, yeah.
1:58:45 And talked about YouTube.
1:58:46 What did you learn?
1:58:48 Because your channel changed.
1:58:49 Oh, yeah.
1:58:49 He’s great.
1:58:52 I mean, he was very brusque with me.
1:58:56 He said, you know, look, man, your titles suck and your thumbnails are even worse.
1:59:07 And, you know, I think that he spent so much time trying to understand the YouTube algorithm and what people want that he just loaded it completely into his brain.
1:59:09 And what makes a good title?
1:59:11 I think it’s clickbait.
1:59:16 Unfortunately, you know, unfortunately, and this is the thing.
1:59:24 Like, when you’re trying to make smart content, it’s actually kind of tricky because you don’t want necessarily more clicks.
1:59:27 You want more clicks from people who are smart.
1:59:33 So we title our episodes differently on YouTube usually than on the actual audio feed.
1:59:38 Because if you want YouTube to pay attention, you have to almost be more provocative intentionally.
1:59:39 That sounds right.
1:59:40 Yeah.
1:59:43 Like we could call this, you know, AI ends the world or something.
1:59:44 Yeah, that’s right.
1:59:46 You know, get people to watch.
1:59:48 But that’s not actually what we’re talking about at all.
1:59:49 What makes a good thumbnail?
1:59:50 What did you learn about thumbnails?
1:59:57 Oh, um, usually like a person looking into the camera seems to help a lot.
1:59:57 Okay.
2:00:01 Um, and then you want it to be relatively recognizable.
2:00:16 Like, you know, you want some sort of style that when someone sees it, you know, I mean, basically what I was doing at the time was just taking whatever frame was, you know, sort of kind of representative and throwing it in there.
2:00:24 Um, but when you train someone to look at YouTube, you know, back to back to back every time it shows up, like you sort of want to be highly recognizable.
2:00:30 So you want to have a distinct thumbnail like yours with the overlay, sort of like the red.
2:00:31 Yeah.
2:00:38 But, you know, once I stopped posting so regularly, you know, then it sort of didn’t matter as much anymore.
2:00:42 But if you’re going to post very regularly, that’s pretty important, actually.
2:00:44 So, yeah, unfortunately, it’s clickbait.
2:00:54 And then there is an interesting interaction like, um, you know, yes, you can optimize for better thumbnails and better titles for the click through.
2:01:01 But if it has absolutely nothing to do with the actual body, as you mentioned, you will not get watch time.
2:01:04 And then YouTube will be like, oh, people aren’t watching this.
2:01:05 We’re not going to promote it.
2:01:08 Because the big thing about YouTube is discovery.
2:01:08 Yeah.
2:01:16 And, like, we notice this all the time where it’s sort of like you just get this audience, but you don’t get to keep the audience as a creator, which is really interesting.
2:01:17 Well, you do if you are regular.
2:01:23 And then the other hack is be very shameless about asking for subs.
2:01:27 And then the funniest thing is, like, subs do very little, actually.
2:01:34 There’s no guarantee that you show up in people’s feeds if someone subs.
2:01:35 It, like, helps a little bit.
2:01:37 Liking helps more.
2:01:39 Watch time helps the most.
2:01:53 And then the extreme, like, you know, over-the-top hack that, you know, probably you should do here is you should ask for the like, subscribe, and hit the bell icon.
2:02:02 Because if you hit the bell icon and they have notifications on, that’s the only thing that is almost as good as having their email address and emailing them.
2:02:04 You heard it here, people.
2:02:05 Garry just told you.
2:02:11 You got to click like, subscribe, and hit the bell icon because you want knowledge.
2:02:14 You want to be smart, and this is the place to get it.
2:02:15 Oh, I love that.
2:02:16 Thank you.
2:02:17 Good advertising here.
2:02:21 I want to ask just a couple of random questions before we wrap up here.
2:02:28 What are some of the lessons that you learned from Paul Graham that you sort of apply or try to keep in mind all the time?
2:02:35 I think the number one thing, which is very hard but is so important, I mean, you can see it and read it in his essays,
2:02:47 is to be plain spoken and to sort of be hyper aware of artifice, of kind of like bullshit, basically.
2:02:52 Like, don’t let bullshit, you know, I think like it creeps in here and there.
2:03:06 I’m like, oh, yeah, you know, I sometimes am in danger of like caring too much about like the number of followers I have and things like that, you know, whereas like actually I shouldn’t be worried about that.
2:03:14 Like what I should be worried about is, and you know, I spend a lot of time with our YouTube team and our media team at YC talking about this.
2:03:23 It’s like, if we get too focused on just view count, we’re liable to just, yeah, like optimize for the wrong audience.
2:03:34 If we’re not being authentic to ourselves or, you know, if we’re just trying to like follow trends or, you know, do things that get clicks, it’s like that’s not helpful to them either.
2:03:36 Like, then we’re just on this treadmill, right?
2:03:43 Yeah, basically like trying to be very, very high signal to noise ratio.
2:03:48 You know, the thing that I probably struggle with most and, you know, I don’t know, maybe some of the listeners here might feel this.
2:03:55 It’s like sometimes I think out loud and then, you know, really, really great ideas are not like thinking out loud.
2:04:03 They’re actually figuring out a very complex concept and then trying to say it in like as few words as possible.
2:04:09 And, you know, the amount of time that Paul spends on his essays is fascinating.
2:04:18 It’s, you know, sometimes days, like sometimes weeks, like he’ll just, you know, iterate and iterate and send it out to people for comment.
2:04:34 And, you know, the amount of time he spends whittling down the words and trying to like combine concepts and say the most with the least number of words, it would shock you.
2:04:38 And then also that is actually thinking, like writing is thinking.
2:04:47 Like one of the more surprising things that we do a lot of at YC is we help people spend time thinking about their two-sentence pitch.
2:04:57 So, you know, you would think that that’s, oh, yeah, that’s like something, you know, startup 101, like you’re helping people with their pitch.
2:04:58 That sounds so basic.
2:04:59 Like, yeah, I guess that makes sense.
2:05:01 Like that’s what an incubator would do.
2:05:07 But the reason why it’s very important is that it’s actually almost like a mantra.
2:05:09 It’s like a special incantation.
2:05:17 Like you believe something that nobody else believes and you need to be able to breathe that belief into other people.
2:05:21 And you need to do it in as few words as possible.
2:05:25 Like, the joke is, oh, yeah, what’s your elevator pitch?
2:05:34 But like you might run into someone who could be your CTO, who could introduce you to your lead investor, who could be your very best customer.
2:05:36 And you will literally only have that time.
2:05:40 You know, you will only have time to get two sentences in.
2:05:43 And even then, I mean, I guess it’s kind of fractal.
2:05:45 Like that’s what I love about a really great interview.
2:05:49 Like, you know, someone comes in and I’m like, oh, yes, I get it.
2:05:53 Like, I know what it is and I know why that’s important.
2:05:55 I know why I should spend more time with you.
2:05:57 That’s what a great two sentence pitch is.
2:06:00 And, you know, knowing what it is, is very hard.
2:06:07 Like that’s all of Paul Graham’s, you know, sort of editing down and whittling down in a nutshell.
2:06:08 It’s like people do really complex things.
2:06:12 How do you say what you do in one sentence?
2:06:14 That’s very hard, actually.
2:06:18 And then, you know, the second sentence is like, why is it important?
2:06:19 Why is it interesting?
2:06:24 Why should I, you know, and then that may well change with like the person that you’re talking to.
2:06:34 So, yeah, to the degree that clear communication is clear thinking, you know, one of the things I did when I first joined YC,
2:06:41 I had no intention of ever becoming an investor, ever being a partner, let alone running the place.
2:06:43 Like I was just a designer in residence.
2:06:53 And what I did was 30-minute, 45-minute office hours with companies in the YC Winter ’11 batch, sitting in back then as an interaction designer.
2:06:55 I used OmniGraffle a lot.
2:06:58 And so we just sat there and designed their homepage.
2:07:01 And it’s like this is what the call to action is to say.
2:07:03 Here’s, you know, put the logo here.
2:07:04 Here’s the tagline.
2:07:10 Here’s the, you know, maybe you have a video here or, you know, right below you have a how it works.
2:07:19 And then, you know, what’s funny about it is like some people, you know, would take the designs we did in those like 30, 45-minute things and like that would be their whole startup.
2:07:20 Yeah.
2:07:25 And like sell those companies for hundreds of millions of dollars years later, which is just like fascinating to think about.
2:07:31 It’s like clear communication, great design, you know, creating experiences for other people.
2:07:34 All of those are sort of exercising the same skill.
2:07:36 And so that’s what a founder really is.
2:07:42 It’s like, you know, a founder to me is a little bit less what you might expect.
2:07:48 It’s like, oh, this is someone with a firm handshake who looks a certain way and, like, bends the will of the people.
2:07:52 Like, you might think of an SBF. That’s all artifice.
2:07:53 Like think about that guy.
2:07:57 Like that guy was like full of shakes and like the guy was like on meth, right?
2:08:01 Like the guy was, you know, everything about it was an affectation, right?
2:08:06 Like he was a caricature of like an autist, right?
2:08:10 Like we see very autistic, incredibly smart engineers all the time.
2:08:13 But, you know, for him, it was like that was part of the act.
2:08:14 Yeah.
2:08:24 Like, I remember he did a YouTube video with Nas Daily, and I love, you know, Nuseir’s great and I love Nas Daily, but I couldn’t believe the video that SBF went on.
2:08:27 It was just like full of basically bullshit, right?
2:08:30 And the exact opposite of Brian Armstrong.
2:08:34 And yeah, we’re always on the lookout for that.
2:08:36 He wasn’t trying to fool you.
2:08:36 Was he?
2:08:37 Oh yeah, I guess so.
2:08:39 I mean, he was fooling the world.
2:08:40 Because you know, right?
2:08:46 Like you know, it’s hard to fool somebody who knows versus somebody who doesn’t know.
2:08:47 And he wasn’t trying to appeal to you.
2:08:50 He was trying to appeal to other people who didn’t know.
2:08:54 It’s the same as going back to Buffett, just tying a few of these conversations together, right?
2:09:00 Like everybody repeats what Buffett says, but the people who actually invest for a living or know Warren or
2:09:07 Charlie or spend time with them can recognize the frauds because they can’t go a level deeper into it.
2:09:09 They can’t actually go into the weeds.
2:09:14 Whereas those guys can go from like the one inch level to the 30,000 foot level and everything in between.
2:09:17 And they don’t get frustrated if you don’t understand.
2:09:23 Whereas a lot of the fraudsters, one of the tells is they can’t go, they can’t traverse the levels.
2:09:31 And then they do tend to get defensive or sort of angry with you for not understanding what they’re saying, which is really interesting.
2:09:33 And then I just want to tie the writing back to what you said.
2:09:38 You said, if you can’t get it clear in like two sentences, you might miss an opportunity.
2:09:41 That goes to the 10-minute interview, right?
2:09:47 Where you’re looking for, maybe it’s not the perfect pitch, but you want that level of clarity with people.
2:09:54 And it’s really the work of producing that that helps you hone in on your own ideas and discover new ideas.
2:09:54 Yeah.
2:09:57 I mean, I feel like we’re in like the idea fire hose.
2:10:01 So we’re just like hearing about all kinds of things that are very promising.
2:10:16 And then I think the most unusual thing that I’m still getting used to is, I mean, in full transparency, I mean, probably the median YC startup still fails, right?
2:10:29 Like, YC might be one of the most successful institutions of its sort that has ever existed, inclusive of venture capital firms on the one hand.
2:10:33 On the other hand, like the failure rate is absolutely insane, right?
2:10:43 Like, you know, it is still only a very small percentage of the teams that actually go on and, you know, create these companies worth $50 or $100 billion.
2:10:49 But the remarkable thing is not that, you know, the rate is that low.
2:10:52 The remarkable thing is that it happens at all.
2:10:56 Like, it’s just unbelievable that…
2:10:59 I think you have the coolest job in the world, or at least close to it.
2:10:59 Oh, I agree.
2:11:02 If I had to pick like the top 10, like you’d be up there.
2:11:03 I agree.
2:11:13 I mean, it’s especially to have, you know, I pinch myself every day on the regular, like in the morning, I wake up and it’s like, oh, this AI thing is happening.
2:11:24 And then somehow I’m filling the shoes of the person who, like, I mean, Sam Altman probably brought forward the future by, you know, five years, 10 years, at least 10 years.
2:11:36 Like, all of the things that, you know, him and Greg Brockman and all the researchers he brought on, like, were working on, that happened, that was going to happen, right?
2:11:45 Like, I think there’s a lot of the Sam Altman haters or the OpenAI haters out there love to point out, like, oh, you know what?
2:11:47 Like, the Transformer was made by all these teams.
2:11:51 I mean, some of it’s like, these teams absolutely did incredible things.
2:11:52 Like, you can’t take away from that, right?
2:11:55 The researchers did, you know, Demis did incredible things.
2:12:03 But at the same time, it’s like they believed a thing that nobody else believed and they brought the resources to bear.
2:12:04 Totally.
2:12:10 And so recently, you know, Sam Altman came back to speak at our AI conference this past weekend.
2:12:21 And we, you know, I couldn’t think of another way to start that conference than have Sam Altman and, you know, a bunch of his, you know, old…
2:12:22 We had Bob McGrew there.
2:12:26 We had Evan Morikawa, who was the eng manager who released ChatGPT.
2:12:32 Bob McGrew actually worked with me at Palantir back in the day, but he’s, you know, the outgoing chief research officer.
2:12:34 Jason Kwan was there.
2:12:40 He actually worked at YC Legal before leaving to, you know, run a lot of things at OpenAI.
2:12:42 And so I had them all stand up.
2:12:54 And we had a room full of, you know, 290 founders, all of whom were working on things that happened essentially because OpenAI existed.
2:12:57 And there was like a standing ovation.
2:12:58 Oh, that’s awesome.
2:13:02 So, and, you know, Sam, to his credit, was like, you know, not just us.
2:13:05 You know, these researchers did so many things as well.
2:13:09 But all that being said, it’s like, we’re in the middle of the revolution.
2:13:10 Oh, totally.
2:13:12 This is just like, I mean, it’s not even the middle.
2:13:23 I think it’s like, like, just after the first pitch of the first inning of, like, what is about to be, like, a great, great time for humanity, for technology.
2:13:24 I’m with you.
2:13:26 I’m, like, so excited to be alive right now.
2:13:29 So lucky, so blessed to, like, be a witness to this.
2:13:32 And I think we’re going to make so much progress on so many things.
2:13:34 Go back to the haters.
2:13:35 Like, there’s always people pulling you down.
2:13:38 But there are never people that are in the trenches doing anything.
2:13:47 I’ve rarely seen, you know, people who are working on the same problem attacking their competition like that or undermining them.
2:13:49 Or, you know, it’s just ignore them.
2:13:53 You know, on our end, we’re just hoping to lift up the people who want to build.
2:13:54 Yeah.
2:13:55 This is the golden age of building.
2:13:56 Amazing.
2:14:02 I want to just end with the same question we always ask, which is, what is success for you?
2:14:13 I think looking back, I mean, growing up, I always just looked up to the people who made the things that I loved.
2:14:18 And, you know, Steve Jobs, Bill Gates, like, the people who really created something from nothing.
2:14:27 And I just think of Steve saying, you know, we want to put a dent in the universe.
2:14:30 And ultimately, that’s what I want.
2:14:34 Like, that’s, you know, success to me is, how do we bring forward the future?
2:14:40 You know, actually, this is actually when Paul Graham came to recruit me to come back to YC.
2:14:46 I had actually left and started my own VC firm, you know, got to $3 billion under management.
2:14:48 Yeah, you guys did Coinbase.
2:14:49 Yeah, totally.
2:14:54 I mean, returned $650 million on that investment alone.
2:15:03 You know, I was sort of right at the pinnacle of my investing career, you know, running my own VC firm.
2:15:08 And Paul and Jessica came to me and said, Garry, we need you to come back and run YC.
2:15:14 And it was really, really hard to walk away from that.
2:15:16 Luckily, I had very great partners.
2:15:21 Brett Gibson, my partner, my multi-time co-founder, went through YC with me.
2:15:25 He actually built a bunch of the software with me at YC, you know, before we left.
2:15:27 He runs it now.
2:15:29 They’re off to the races and still doing great work.
2:15:38 And, you know, I sat down with Paul, and right after we shook hands, he’s like, Garry, do you understand what this means?
2:15:53 It means that, you know, if we do this right, we, you know, kind of like I think what Sam did with OpenAI with, you know, pulling forward large language models and AI and bringing about AGI sooner.
2:16:00 Like, YC is sort of one of the defining institutions that is going to pull forward the future.
2:16:15 And it’s not more complicated than how do we get in front of optimistic, smart people who, you know, have benevolent, you know, sort of goals for themselves and the people around them.
2:16:32 How do we give them, you know, a small amount of money and a whole lot of know-how and a whole lot of access to networks and, you know, a 10-week program that hopefully reprograms them to be more formidable while simultaneously being more earnest.
2:16:35 And then the rest sort of takes care of itself.
2:16:40 Like, you know, this thing has never existed before like this and it deserves to grow.
2:16:53 Like, it deserves to, you know, if we could find more people and fund them and have them be successful at even, you know, the same rate, we would do that all day.
2:16:57 I mean, and I think what are the alternatives, right?
2:17:10 Like, I think of all the people who, you know, they’re locked away in companies, they’re locked away in academia, you know, or heck, like, you know, these days, the wild thing about intelligence is like intelligence is on tap now, right?
2:17:21 Like, all of the impediments to fully realizing what you want to do in the world are starting to fall away.
2:17:27 Like, you know, there’s always going to be something that stands in the way of any given person.
2:17:37 And I’m not saying like those things are equal, but they, you know, through technology and through access to technology, those things are coming down.
2:17:44 Like, if there’s the will, if there’s the agency, if there’s the taste, like, that’s what I want for society.
2:17:46 I want them to achieve that.
2:17:53 In a lot of ways, we have more equality of opportunity now than we’ve ever had in the history of the world, but not equality of outcome.
2:17:54 That’s right.
2:17:54 Yeah.
2:17:56 And that, you know, that’s sort of the quandary, right?
2:17:59 Like, you have to choose.
2:18:05 Do you want the outcomes to be equal or do you want a rising tide to raise all boats?
2:18:09 I’m a huge fan of equal opportunity, but not equal outcome.
2:18:10 I’m with you.
2:18:14 Thank you for listening and learning with me.
2:18:18 If you’ve enjoyed this episode, consider leaving a five-star rating or review.
2:18:22 It’s a small action on your part that helps us reach more curious minds.
2:18:32 You can stay connected with Farnam Street on social media and explore more insights at fs.blog, where you’ll find past episodes, our mental models, and thought-provoking articles.
2:18:35 While you’re there, check out my book, Clear Thinking.
2:18:42 Through engaging stories and actionable mental models, it helps you bridge the gap between intention and action.
2:18:45 So your best decisions become your default decisions.
2:18:47 Until next time.
Most accelerators fund ideas. Y Combinator funds founders—and transforms them. With a 1% acceptance rate and alumni behind 60% of the past decade’s unicorns, YC knows what separates the founders who break through from those who burn out. It’s not the flashiest résumé or the boldest pitch but something President Garry Tan says is far rarer: earnestness. In this conversation, Garry reveals why this is the key to success, and how it can make or break a startup. We also dive into how AI is reshaping the whole landscape of venture capital and what the future might look like when everyone has intelligence on tap.
If you care about innovation, agency, or the future of work, don’t miss this episode.
Approximate timestamps: Subject to variation due to dynamically inserted ads.
(00:02:39) The Success of Y Combinator
(00:04:25) The Y Combinator Program
(00:08:25) The Application Process
(00:09:58) The Interview Process
(00:16:16) The Challenge of Early Stage Investment
(00:22:53) The Role of San Francisco in Innovation
(00:28:32) The Ideal Founder
(00:36:27) The Importance of Earnestness
(00:42:17) The Changing Landscape of AI Companies
(00:45:26) The Impact of Cloud Computing
(00:50:11) Dysfunction with Silicon Valley
(00:52:24) Forecast for the Tech Market
(00:54:40) The Regulation of AI
(00:55:56) The Need for Agency in Education
(01:01:40) AI in Biotech and Manufacturing
(01:07:24) The Issue of Data Access and The Legal Aspects of AI Outputs
(01:13:34) The Role of Meta in AI Development
(01:28:07) The Potential of AI in Decision Making
(01:40:33) Defining AGI
(01:42:03) The Use of AI and Prompting
(01:47:09) AI Model Reasoning
(01:49:48) The Competitive Advantage in AI
(01:52:42) Investing in Big Tech Companies
(01:55:47) The Role of Microsoft and Meta in AI
(01:57:00) Learning from MrBeast: YouTube Channel Optimization
(02:05:58) The Perception of Founders
(02:08:23) The Reality of Startup Success Rates
(02:09:34) The Impact of OpenAI
(02:11:46) The Golden Age of Building
Newsletter – The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it’s completely free. Learn more and sign up at fs.blog/newsletter
Upgrade — If you want to hear my thoughts and reflections at the end of the episode, join our membership: fs.blog/membership and get your own private feed.
Watch on YouTube: @tkppodcast