AI transcript
0:00:08 by the smartest people in the world working incredibly hard for the next five years.
0:00:12 Humanity went through the agricultural revolution and industrial revolution.
0:00:14 We’re going through another revolution.
0:00:16 We will not be the ones to name it.
0:00:20 Future people will call it something, but we are going through something.
0:00:25 The number of solo entrepreneurs that this technology is going to enable
0:00:28 has vastly increased what a single person can do.
0:00:32 For the first time, opportunity is massively available for everyone.
0:00:37 Just the ability for more people to be able to become entrepreneurs is massive.
0:00:42 The age of solo entrepreneurship powered by AI is here,
0:00:45 but the path to full automation is messier than the hype suggests.
0:00:49 Today, you’ll hear from Adam D’Angelo, founder of Quora and CEO of Poe,
0:00:52 and Amjad Masad, founder and CEO of Replit,
0:00:56 on why we’re in a brute force era of AI rather than true intelligence
0:00:58 and what that means for the future of work.
0:01:01 We discussed the expert data paradox,
0:01:06 how automating entry-level jobs creates a crisis in training the next generation of experts,
0:01:10 why managing tens of agents in parallel will define the next wave of productivity,
0:01:13 and how the sovereign individual framework might be the best lens
0:01:16 for understanding AI’s economic and political impact.
0:01:20 Plus, Adam makes the case for why vibe coding is radically underrated,
0:01:26 and Amjad explains what Claude 4.5’s strange new self-awareness might signal about the path ahead.
0:01:28 Let’s get into it.
0:01:32 Adam, Amjad, welcome to the podcast.
0:01:32 Thank you.
0:01:34 Yeah, thanks for having us.
0:01:37 So a lot of people have been throwing cold water over LLMs lately.
0:01:39 There's been some general bearishness,
0:01:41 people talking about the limitations of LLMs,
0:01:42 why they won’t get us to AGI.
0:01:46 Well, maybe what we thought was just a couple years away is now maybe 10 years away.
0:01:49 Adam, you seem a bit more optimistic.
0:01:51 Why don’t you share your broad general overview?
0:01:54 Yeah, I mean, honestly, I don’t know what people are talking about.
0:01:58 I think if you look a year ago, the world was very different.
0:02:04 And so just judging on how much progress we’ve made in the last year with things like reasoning models,
0:02:08 things like the improvement in code generation ability,
0:02:11 the improvements in video gen,
0:02:14 it seems like things are going faster than ever.
0:02:19 And so I don’t really understand where the kind of bearishness is coming from.
0:02:22 Well, I think there's some sense that we hoped that they would be able to
0:02:25 replace all tasks or all jobs.
0:02:28 And maybe there’s some sense that it’s like middle to middle, but not end to end.
0:02:32 And maybe labor won’t be automated in the same way that we thought it would on the same timeline.
0:02:36 Yeah, I mean, I don’t know what the previous timelines people were thinking were,
0:02:40 but I think if you go five years out from now, we’re in a very different world.
0:02:44 I think a lot of what’s holding back the models these days is not actually intelligence.
0:02:51 It's getting the right context into the model so that it can actually use its intelligence.
0:02:56 And then there’s some things like computer use that are still not quite there,
0:03:00 but I think we’ll almost definitely get there in the next year or two.
0:03:07 And when you have that, I think we’re going to be able to automate a large portion of what people do.
0:03:15 I don’t know how we call that AGI, but I think it’s going to satisfy a lot of the critiques that people are making right now.
0:03:18 I think they won’t be valid in a year or two.
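To make "getting the right context into the model" concrete, here is a minimal sketch of the retrieval pattern commonly used for this today; `embed` is a hypothetical stand-in for an embedding model, and this is an illustration rather than anything Poe or Quora specifically does:

```python
import numpy as np

docs = ["refund policy ...", "Q3 launch notes ...", "on-call runbook ..."]

def embed(text: str) -> np.ndarray:
    """Hypothetical helper: return a unit-norm vector from an embedding model."""
    raise NotImplementedError  # plug in your embedding model of choice

def build_prompt(question: str, k: int = 2) -> str:
    # Rank documents by cosine similarity (dot product of unit vectors).
    doc_vecs = np.stack([embed(d) for d in docs])
    scores = doc_vecs @ embed(question)
    top = [docs[i] for i in np.argsort(scores)[::-1][:k]]
    # Prepend the retrieved context so the model can actually use it.
    return "Context:\n" + "\n".join(top) + f"\n\nQuestion: {question}"
```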
0:03:20 What is your definition of AGI?
0:03:21 I don’t know.
0:03:23 Everyone thinks it’s something different.
0:03:28 One definition I kind of like is: if you have a remote worker, a human,
0:03:35 then an AI that can do any job that person can do remotely, that's AGI.
0:03:39 You can then say it doesn’t have to be better than the best person in the world at every single job.
0:03:40 Some people call that ASI.
0:03:43 It doesn’t have to be better than teams of people.
0:03:52 You can argue with those different definitions, but I think once we get to be better than a typical remote worker at the job they’re doing,
0:03:54 we’re living in a very different world.
0:03:59 And I think that's effectively a very useful anchor point for these definitions.
0:04:04 So you're not sensing the same limitations of LLMs that other people are?
0:04:06 You think there's a lot more room that LLMs can go from here?
0:04:09 We don’t need like a brand new architecture or other breakthrough?
0:04:10 I don’t think so.
0:04:20 I mean, I think there are certain things like memory and learning, like continuous learning that are not very easy with the current architectures.
0:04:25 I think even those you can sort of fake and maybe we’re going to be able to get them to work well enough.
0:04:30 But we just don’t seem to be hitting any kind of limits.
0:04:34 The progress in reasoning models is incredible.
0:04:40 And I think the progress in pre-training is also going pretty quickly, maybe not as quickly as people had expected,
0:04:46 but certainly fast enough that you can expect a lot of progress over the next few years.
0:04:48 Amjad, what’s your reaction hearing all this?
0:04:54 Yeah, I think I’ve been pretty consistent and consistently right, perhaps.
0:04:55 Dare I say.
0:04:59 Consistent with yourself or consistent with what I’m saying?
0:05:01 With myself and with, I think, how things are unfolding.
0:05:16 I started being a bit more of a public doubter of these things around the time when the AI safety discussion was reaching its height, back in maybe '22, '23.
0:05:23 And I thought it was important for us to be realistic about the progress because otherwise we’re going to scare politicians.
0:05:24 We’re going to scare everyone.
0:05:26 DC will descend on Silicon Valley.
0:05:27 They’ll shut everything down.
0:05:35 So my criticism of the idea of AI 2027, you know, that paper that Scott Alexander and others wrote,
0:05:43 and then Situational Awareness and all these hype papers that are not really science, they're just vibes.
0:05:45 Here’s what I think will happen.
0:05:47 The whole economy will get automated.
0:05:50 Jobs are going to disappear.
0:05:54 All of that stuff, again, I just think is unrealistic.
0:05:57 It is not following the kind of progress that we’re seeing.
0:06:00 And it is going to lead to just bad policy.
0:06:06 So my view is LLMs are amazing, amazing machines.
0:06:11 I don’t think they are exactly human intelligence equivalent.
0:06:21 You can still trick LLMs. They might have solved the strawberry one, but you can still trick them with single-sentence questions.
0:06:23 Like, how many R's are in this sentence?
0:06:28 I think I tweeted about it the other day: three out of the four models didn't even get it.
0:06:34 And then GPT-5 with high thinking had to think for 15 seconds in order to get a question like that.
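A quick illustration of why letter-counting trips up LLMs: the model sees subword tokens, not characters. This sketch uses OpenAI's open-source tiktoken tokenizer; the exact splits depend on the tokenizer and are illustrative only:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
sentence = "how many r's are in this sentence"
ids = enc.encode(sentence)
print([enc.decode([i]) for i in ids])  # subword chunks, not letters

# Counting characters is trivial in code but hard "through" tokens,
# since the model never directly observes individual letters.
print(sentence.count("r"))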
0:06:41 So LLMs are, I think, a different kind of intelligence than what humans are.
0:06:45 And also they have clear limitations.
0:06:57 And we're papering over the limitations and working around them in all sorts of ways, whether it's in the LLM itself and the training data, or in the infrastructure around it and everything that we're doing to make them work.
0:07:02 But that makes me less optimistic that we’ve cracked intelligence.
0:07:20 And I think once we truly crack intelligence, it'll feel a lot more scalable. The idea behind the bitter lesson will actually be true: you can just pour more power, more resources, more compute into them, and they'll scale more naturally.
0:07:26 I think right now there’s a lot of manual work going into making these models better.
0:07:37 In the true pre-training scaling era, GPT-2, 3, 3.5, maybe up to 4, it felt like you can just put more internet data in there and it just got better.
0:07:40 Whereas now it feels like there’s a lot of labeling work happening.
0:07:42 There’s a lot of contracting work happening.
0:07:52 A lot of these contrived RL environments are getting created in order to make LLMs good at coding and becoming coding agents.
0:07:53 And they're going to keep doing that.
0:07:56 I think the news from OpenAI is that they're going to do that for investment banking.
0:08:09 And so I tried to coin this term, functional AGI, which is the idea that you can automate a lot of aspects of a lot of jobs by just going in and collecting as much data as possible and creating these RL environments.
0:08:13 It's going to take enormous effort and money and data and all of that in order to do it.
0:08:19 And I think I agree with Adam that things are going to get better 100% over the next three months, six months.
0:08:22 Claude 4.5 was a huge jump.
0:08:26 I don't think it's appreciated how much of a jump it was over 4.
0:08:29 There are really, really amazing things about Claude 4.5.
0:08:30 So there is progress.
0:08:32 We’re going to continue to see progress.
0:08:37 I don't think LLMs, as they currently stand, are on the way to AGI.
0:08:47 And my definition for AGI is, I think, the old school RL definition, which is a machine that can go into any environment and learn efficiently.
0:08:56 In the same way that you can put a human in front of a pool game and, within two hours, they can learn to play pool and be able to do it.
0:09:00 Right now, there’s no way for us to have machines learn skills like that on the fly.
0:09:04 Everything requires enormous amount of data and compute and time and effort.
0:09:14 And more importantly, it requires human expertise, which is the non-bitter-lesson idea: human expertise is not scalable.
0:09:15 And we are reliant.
0:09:17 Today, we are in a human expertise regime.
0:09:29 Yeah, I mean, I think that humans are certainly better at learning a new skill from a limited amount of data in a new environment than the current models are.
0:09:40 I think that, on the other hand, human intelligence is the product of evolution, which used a massive amount of effective computation.
0:09:44 And so this is a different kind of intelligence.
0:09:54 And so because it didn’t have this massive equivalent of evolution, it just has pre-training for that, which is not as good.
0:09:59 You then need more data to learn everything, every new skill.
0:10:21 I think that’s going to be more a function of when we can produce something that is as good as human intelligence, even if it takes a lot more compute, a lot more energy, a lot more training data.
0:10:33 We could just put in all that energy and still get to software that’s as good as the average person at doing a typical job.
0:10:34 So I don’t disagree with that.
0:10:40 And that’s, it feels like we’re in a brute force type of regime, but maybe that’s fine.
0:10:45 So where’s the disagreement then, I guess?
0:10:46 So there’s agreement on that.
0:10:48 Where is the divergence, perhaps?
0:11:01 I don't think that we'll get to the singularity, or to the next level of human civilization, until we crack the true nature of intelligence.
0:11:08 Like until we understand it and have algorithms that are actually not brute force.
0:11:13 And you think those will take a long time to come?
0:11:17 I'm sort of agnostic on that.
0:11:24 It just does feel like the LLMs are in a way distracting from that, because all the talent is going there.
0:11:30 And therefore there's less talent trying to do basic research on intelligence.
0:11:32 Yeah.
0:11:41 At the same time, a huge portion of talent is going into AI research that previously wouldn't have gone into AI at all.
0:11:52 And so you have this massive industry, massive funding, you know, funding compute, but also funding human employees.
0:12:08 I guess nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard on it for the next five years.
0:12:11 But basic research is different, right?
0:12:20 Like trying to get into the fundamentals, as opposed to industry research,
0:12:25 which is about how to make these things more useful in order to generate profit.
0:12:28 And so I think that's different.
0:12:39 And often, I mean, Thomas Kuhn, the philosopher of science, talks a lot about how these research programs end up becoming like a bubble and sucking in all the attention and ideas.
0:12:50 Think about physics and how there's this whole industry of, I don't know, string theory; it pulls everything in and becomes a sort of black hole of progress.
0:12:59 And, you know, I think one of his points was that you've got to wait until the current people retire.
0:12:59 That’s right.
0:13:01 And have a chance at changing the paradigm.
0:13:03 He’s very pessimistic about paradigms, yeah.
0:13:08 But I guess I feel like, and this is maybe where we disagree, the current paradigm is pretty good.
0:13:15 And I think we’re nowhere near the sort of like diminishing returns of continuing to push on it.
0:13:16 Mm-hmm.
0:13:26 And yeah, I guess I would just bet that you can keep doing different innovations within the paradigm to get there.
0:13:26 Mm-hmm.
0:13:30 So let’s say we continue to brute force it.
0:13:33 We’re able to automate a bunch of labor.
0:13:42 Do you estimate GDP growth of something like 4 or 5% a year, or are we going up to 10% plus? What does it do to the economy?
0:13:46 I think it depends a lot on exactly where we get to and what AGI means.
0:14:04 But let's say you have LLMs that, with an amount of energy that costs $1 an hour, could do the job of any human.
0:14:09 Let's just take that as a theoretical point you could get to.
0:14:15 I think you’re going to get to much more than four to 5% GDP growth in that world.
0:14:18 I think the issue is you may not get there.
0:14:29 It may be that the LLMs that can do everything a human can do actually cost more than humans do currently, or they can do kind of like 80% of what humans can do.
0:14:31 And then there’s this other 20%.
0:14:39 And I do think at some point you get to where LLMs can do everything, every single thing a human can do, for cheaper.
0:14:42 Like, I, I don’t see a reason why we don’t eventually get there.
0:14:44 That may take 5, 10, 15 years.
0:15:01 But I think until you get there, we're going to get bottlenecked on the things that the LLMs still can't do, or on building enough power plants to supply the energy, or other bottlenecks in the supply chain.
0:15:24 One thing I worry about is the deleterious effect of LLMs on the economy if, say, LLMs effectively automate the entry-level job, but not the expert's job.
0:15:24 Right.
0:15:38 So let's take QA, quality assurance. The AI is so good at it, but there are still all these long-tail events that it doesn't handle.
0:15:46 And so you have a lot of really good QA people now managing like hundreds of agents, and you effectively increase productivity a lot.
0:15:51 But they're not hiring new people, because the agents are better than new people.
0:15:56 And that feels like a weird equilibrium to be in.
0:15:56 Right.
0:15:57 Yeah.
0:15:58 And I don’t think that many people are thinking about it.
0:15:58 Yeah.
0:15:59 Yeah, for sure.
0:16:00 Yeah.
0:16:06 No, I think it's happening with CS majors graduating from college.
0:16:08 There’s just not as many jobs as there used to be.
0:16:15 And LLMs are a little more substitutable for what they previously would have done.
0:16:17 And I’m sure that’s contributing to it.
0:16:22 And then it means that you're going to have fewer people going up that ramp where, you know,
0:16:26 companies paid a lot of money to employ them and train them.
0:16:29 And so I think it's a real problem.
0:16:39 I'm guessing that problem also creates an economic incentive to solve the problem.
0:16:49 So it may be that there are more opportunities for companies that can train people, or maybe uses of AI to teach people these things.
0:16:52 But for sure, that's an issue right now.
0:17:07 Another related problem is that we're dependent on expert data in order to train the LLMs, and the LLMs start to substitute for those workers.
0:17:16 But, you know, at some point there are no more experts, because they're all out of their jobs and the LLMs are equivalent to them.
0:17:24 But if the LLMs are truly dependent on labeled data and expert RL environments, then how would they improve beyond that?
0:17:35 I think that's a question for an economist to really sit down and think about: once you get the first tick of automation, there are some challenges there.
0:17:39 So how do you get to the next part?
0:17:40 Yeah.
0:17:47 I mean, I think a lot of it is going to depend on how good the RL environments that get created can be.
0:17:56 So, you know, on the one extreme, you have something like AlphaGo where it’s just a perfect environment and you can just blast past expert level.
0:18:04 But I think a lot of jobs have limited data that anyone can train from.
0:18:13 And so I think it'll be interesting to see how easy it is for research efforts to overcome that bottleneck.
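For concreteness, an "RL environment" here just means a reset/step interface with a reward signal. Below is a toy, gymnasium-style sketch of my own; the hard part for most jobs is that the reward is nowhere near as crisp as Go's win/loss:

```python
class TicketTriageEnv:
    """Toy gymnasium-style environment: route support tickets to teams."""

    def reset(self):
        self.ticket = self._sample_ticket()
        return self.ticket          # observation

    def step(self, action: str):
        # Reward is the crux: Go has a perfect win/loss signal,
        # while most real jobs only have fuzzy, delayed feedback.
        reward = 1.0 if action == self.ticket["correct_team"] else 0.0
        done = True                 # one decision per episode in this toy
        self.ticket = self._sample_ticket()
        return self.ticket, reward, done, {}

    def _sample_ticket(self):
        return {"text": "app crashes on login", "correct_team": "mobile"}
```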
0:18:43 If you had to make a guess on what job category is going to be introduced or explode in the future, what would it be? Some people say everyone's an influencer, or in some sort of caring field, or everyone's employed by the government in some sort of bureaucratic role, or maybe training the AI in some way. As more and more things start to get automated, what is your guess as to what more and more people start to do?
0:18:46 You know, doing art and poetry, yeah.
0:19:00 I mean, at some point you have everything automated, and then I think people will do art and poetry. And there's a data point that the number of people playing chess is up since computers got better than humans at chess.
0:19:15 So I don't think that's a bad world, if people are all just free to pursue their hobbies, as long as you have some kind of way to distribute wealth so that people can afford to live.
0:19:25 But, you know, that's a while away. In the near term... Well, like 10, 15 years out?
0:19:31 I don't know how much, but yeah, I'll put it in the at-least-10-years range.
0:19:40 I think in the near term, the job categories that are going to explode are the jobs that can really leverage AI.
0:19:49 And so people who are good at using AI to accomplish their jobs, especially to accomplish things that the AI couldn't have done by itself.
0:19:52 There's just massive demand for that.
0:19:57 I don't think we're going to get to a point where you automate every job.
0:20:00 Definitely not in the current paradigm.
0:20:04 I would doubt it happening.
0:20:11 I'm not certain it would ever happen, but definitely not in the current paradigm.
0:20:16 Now, here's what I think: a lot of jobs are about servicing other humans.
0:20:24 You need to be actually human in order to understand what other people want, you know?
0:20:26 And so you need to have the human experience.
0:20:41 So unless we're going to create human humans, unless AI is actually embodied in human experience, humans will always be the generators of ideas in the economy.
0:20:52 Adam, can you respond to Amjad's point around the human part? Because you created one of the best wisdom-of-the-crowds platforms in the universe.
0:21:05 And now you've gone all in with Poe. What are your thoughts on to what extent we will be relying on humans versus trusting AIs to, you know, be our therapists, be our caretakers in other ways?
0:21:10 You know, humans have a lot of knowledge collectively.
0:21:22 And even one individual person who's an expert and has lived a whole life and had a whole career and seen a lot of things, they often know a lot of things that are not written down anywhere.
0:21:23 Tacit knowledge.
0:21:30 You could call it tacit knowledge, but it's also what they're capable of writing down if you did ask them a question.
0:21:45 I think there's still an important role for people to play in the world by sharing their knowledge, especially when they have knowledge that just wasn't otherwise in an LLM's training set.
0:22:01 You know, whether they will be able to make a full-time living doing that, I don't know, but if that becomes a bottleneck, then for sure all the economic pressure is going to go there.
0:22:07 As for the idea that, you know, you have to be human to know what humans want,
0:22:11 I don’t know about that.
0:22:30 So, as an example, I think recommender systems, the systems that rank your Facebook or Instagram or Quora feed, are already superhuman at predicting what you're going to be interested in reading.
0:22:49 Like, if I gave you a task that was, make me a feed that I'm going to read, there's just no way, no matter how much you knew about me, that you could compete with these algorithms, which have so much data about everything I've ever clicked on, everything everyone else has ever clicked on, and all the similarities between those different data sets.
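A toy version of the mechanism being described, user-user collaborative filtering over click data; real feed rankers are learned end to end at vastly larger scale, so treat this as illustration only:

```python
import numpy as np

# Rows are users, columns are items; 1 means the user clicked the item.
clicks = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 0, 1],
], dtype=float)

def recommend(user: int, k: int = 1) -> np.ndarray:
    # Cosine similarity between this user's click vector and everyone else's.
    norms = np.linalg.norm(clicks, axis=1, keepdims=True)
    sims = (clicks @ clicks[user]) / (norms.squeeze() * norms[user] + 1e-9)
    sims[user] = -1.0                                 # exclude the user themselves
    neighbor = np.argmax(sims)                        # most similar other user
    scores = clicks[neighbor] * (clicks[user] == 0)   # their clicks on unseen items
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # items user 0 hasn't clicked but a similar user has
```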
0:22:59 And so, I don't know. You know, it's true that as a human you can kind of simulate being a human, and that makes it easier for you to test out ideas.
0:23:07 And I'm sure that for composers and artists this is an important part of their process for doing work.
0:23:08 Or chefs or, you know.
0:23:12 Yeah, they produce something. You know, a chef will cook something and they taste it.
0:23:13 And it's important that they can taste it.
0:23:21 But I don't know; they just have very little data compared to what AI can be trained on.
0:23:23 So I don't know how that's going to shake out.
0:23:25 That's a good point.
0:23:41 I mean, ultimately what recommender systems do is aggregate all the different tastes, find where you sit in this multidimensional taste vector space, and get you the best content there.
0:23:43 So I guess there’s some of that.
0:23:45 I think that’s more narrow than we think.
0:23:53 Like, yes, it's true of recommender systems, but I'm not entirely sure it's true of everything.
0:24:07 But I think the best prediction for where the world is headed, and this is not an endorsement or necessarily where I think the world should head,
0:24:21 because I think part of it will be a slightly unstable system, is The Sovereign Individual, which I think turned out to be a really good set of predictions for the future.
0:24:27 Although it's not a scientific book; it's a very polemic book.
0:24:37 But the idea is, you know, in the late eighties, early nineties... are they economists?
0:24:38 I'm not sure.
0:24:51 I think they're economists or political science majors, two people out of the UK, who wrote this book trying to predict what happens when computer technology matures, right?
0:24:56 They're like, you know, humanity went through the agricultural revolution and the industrial revolution.
0:25:01 We're clearly going through another revolution, an information revolution.
0:25:02 Now we call it the intelligence revolution, whatever.
0:25:09 I think we will not be the ones to name it; future people will call it something, but we are going through something.
0:25:12 And so they're trying to predict, okay, what happens from here?
0:25:31 And what they arrive at is that ultimately you're going to have large swaths of people that are potentially unemployed or not economically contributing, but the entrepreneurs, the entrepreneur-capitalists, are going to be so highly leveraged.
0:25:34 Because they can spin up these companies with AI agents very quickly.
0:25:43 Because they're very generative, they're human, they have interesting ideas about what other people want.
0:25:48 They can create these companies and products and services very quickly, and they can organize the economy in certain ways.
0:26:03 And the politics will change, because today's politics is based on every human being economically productive.
0:26:20 But when you have massive automation, and only a few entrepreneurs and very intelligent, generative people are actually able to be productive, then the political structures also change.
0:26:36 And so they talk about how the nation state sort of subsides, and instead you go back to an era where states are competing over people, over wealthy people.
0:26:45 And, you know, as a sovereign individual, you can negotiate your tax rate with your favorite state.
0:26:53 And so it starts to sound a little bit like biology, and I don't think it's far from where things might be headed.
0:27:10 Now, again, it's not a value judgment or a desire, but I do think it's worth thinking about: when people are not the unit of economic productivity, things have to change, including culture and politics.
0:27:27 Yeah, I think there's a question with that book, and with some of this conversation more broadly, of when the technology rewards the defender versus the sort of aggregator, or when it incentivizes more decentralization versus centralization.
0:27:36 Like, remember Peter Thiel had this quip a decade ago: crypto is libertarian, more decentralizing; AI is communist, more centralizing.
0:27:46 And it's not obvious to me that that's entirely accurate on either side. AI does seem to empower a bunch of individuals, as you were saying.
0:27:58 And then also, you know, crypto turns out to be a lot like fintech, things like stablecoins, and it can empower nation states too; we were talking about that sort of thing China was going to do.
0:28:05 So, yeah, I think there's an open question as to which technology empowers whom more:
0:28:07 the edges or the center.
0:28:17 And if it empowers the edges, it seems like the sovereign-individual thesis holds. And maybe there's a barbell, where it's both: the big just gets much, much bigger,
0:28:20 and there are these edges. But anyway, that's all right.
0:28:21 Yeah.
0:28:21 Yeah.
0:28:29 I'm very excited for the number of solo entrepreneurs that this technology is going to enable.
0:28:36 I think it's just vastly increased what a single person can do.
0:28:50 And there’s so many ideas that just never got explored because it’s a lot of work to get a team of people together and maybe raise the funding for it and get the right kind of people with all the different skills you need.
0:28:56 And now that one person can bring these things into existence, I think we're going to see a lot of really amazing stuff.
0:29:07 Yeah, I get these tweets all the time from people who quit their jobs because they started making so much money using tools like Replit, and it's really exciting.
0:29:13 I think, for the first time, opportunity is massively available for everyone.
0:29:24 And I think that is, to me, the most exciting thing about this technology, beyond all the other stuff we're talking about: just the ability for more people to be able to become entrepreneurs.
0:29:31 Yeah, that trend is obviously going to happen as we look out at the next decade or two.
0:29:36 Do you think that AI is more likely to be sustaining or disruptive in the Christensen sense?
0:29:44 And to ask it another way, do you think that most of the value capture is going to come from companies that were scaled before OpenAI started?
0:29:48 So Replit still counts as the latter, and so does Quora to some degree.
0:29:54 Or do you think most of the value is going to be captured by companies that started, you know,
0:29:56 after, let's say, 2015, 2016?
0:30:03 So there’s a related question, which is how much of the value is going to go to the hyperscalers versus everyone else.
0:30:25 And I think on that one, we're actually in a pretty good balance, where there's enough competition among the hyperscalers that, as an application-level company, you have choice and you have alternatives.
0:30:45 And there's also not so much competition that labs like Anthropic and OpenAI are unable to raise money and make these long-term investments.
0:30:55 And so I actually think we're in a pretty good balance, and we're going to have a lot of new companies and a lot of growth among the hyperscalers.
0:30:59 I think that’s, that’s about right.
0:31:06 So the terminology of sustaining versus disruptive comes from The Innovator's Dilemma.
0:31:15 And it's this idea that whenever there's a new technology trend, there's a power curve.
0:31:20 It starts as almost a toy, or something that doesn't really work, or captures the lower end of the market.
0:31:27 But as it evolves, it goes up the power curve and eventually disrupts even the incumbents.
0:31:37 So originally the incumbents don't pay attention to it, because it looks like a toy, and then eventually it disrupts everything and eats the entire market.
0:31:45 And so that was true of PCs. You know, when PCs came along, the big mainframe manufacturers did not pay attention.
0:31:50 And initially it was like, yeah, it's for kids or whatever.
0:31:54 We have to run these large computers, these data centers, whatever.
0:31:58 But now even data centers are running on PCs and so on.
0:32:03 And so PCs were just a hugely disruptive force.
0:32:07 But there are technologies that come along and really benefit the incumbents
0:32:12 and don't really benefit the new players, the startups.
0:32:14 I think Adam's right.
0:32:15 It's both.
0:32:23 And maybe for the first time it's kind of both for a huge technology trend, because the internet was hugely disruptive.
0:32:36 But this time, it feels like it is an obvious supercharge for the incumbents, for the hyperscalers, for the large internet companies.
0:32:48 But it also enables new business models that are perhaps counter-positioned against the existing ones.
0:32:56 Although, you know, I think what happened is everyone read that book and everyone learned how to not be disrupted.
0:33:03 For example, ChatGPT was fundamentally counter-positioned against Google, because Google had a business that was actually working.
0:33:09 ChatGPT was seen as this technology that hallucinates a lot and creates a lot of bad information.
0:33:11 And Google wanted to be trusted.
0:33:13 And so Google had this kind of technology internally.
0:33:20 They didn't release Gemini until like two years after ChatGPT, and ChatGPT had already won at least the brand recognition.
0:33:25 And so, in a way, OpenAI came out as a disruptive technology.
0:33:30 But now Google realizes this is a disruptive technology and is responding to it.
0:33:33 At the same time, it was always obvious that Google gets better with AI.
0:33:39 At minimum, its search overviews have gotten a lot better.
0:33:45 Its whole Workspace suite is getting a lot better with Gemini.
0:33:47 Their mobile phones, everything gets better.
0:33:50 So it seems like it's both.
0:33:50 Yeah.
0:33:51 I really agree.
0:33:55 Like everyone read the book and that changes what the theory even means.
0:33:55 Yeah.
0:34:00 Because all the public market investors have read that book.
0:34:05 And they now are going to punish companies for not adapting and reward them for adapting,
0:34:08 Even if it means they have to make long-term investments.
0:34:15 I think, you know, all the management leadership of these companies have read the book, and they're on top of their game.
0:34:22 I also just think the people running these companies are, I guess I would say, smarter
0:34:30 than the companies from the generation that book was built on.
0:34:35 And they're at the top of their game, and a lot of them are founder-controlled.
0:34:41 And so it's easier for them to take a hit and make these investments.
0:35:01 So actually, you know, I think if you had an environment more like we had in, say, the nineties, this would be more disruptive than in the current hyper-competitive world that we're in now.
0:35:16 One mistake that we as a firm have reflected on over the past few years, though of course I haven't been here for more than a few months, is that we've passed on companies because they weren't going to be the market leader or the category winner.
0:35:22 And thus we thought, oh, learning the lessons from web 2.0, you have to invest in the category winner.
0:35:26 That's where things are going to consolidate and value is going to accrue over time.
0:35:32 So why back the next foundation model company if the first one already has a head start?
0:35:48 But it seems like the market has gotten so much bigger that, in foundation models but also in applications, there are just multiple winners, and they're fragmenting and taking parts of the market that are all venture-scale.
0:35:57 I'm curious if this is a durable phenomenon, but that seems like one difference from the web 2.0 era: just more winners across more categories.
0:36:04 I think network effects are playing much less of a role now than they did in the web 2.0 era also.
0:36:08 And that makes it easier for competitors to get started.
0:36:14 There’s still a scale advantage because, you know, if you have more users, you can get more data.
0:36:25 If you have more users, you can raise more capital. But that advantage doesn't make it absolutely impossible for a competitor of smaller scale.
0:36:33 It makes it hard, but there's definitely room for more winners than there was before.
0:36:44 I think another difference is that people are seeing the value so strongly that they're willing to pay early on, in a way where the question with web 2.0 companies was how they were going to make money.
0:36:47 And you were at Facebook super early, obviously; you know, Google, et cetera.
0:36:49 It was like, Oh, how are they going to monetize?
0:36:53 And, you know, the companies here are monetizing from the get-go, your guys' companies included.
0:36:54 Yeah.
0:36:55 Yeah.
0:37:03 And I think with the earlier generation of companies, the monetization kind of depended on scale.
0:37:04 Yeah.
0:37:10 Like, you couldn't build a good ad business until you got to millions, tens of millions of users.
0:37:15 And now with subscriptions, you can just charge right away.
0:37:19 I think, especially thanks to things like Stripe that are making it easier.
0:37:23 And so that's also made it a lot more friendly to new entrants.
0:37:26 There are also questions of geopolitics.
0:37:35 Like, you know, it seems clear that we're no longer in this globalized era, and perhaps it's going to get much worse.
0:37:41 And so investing in the OpenAI of Europe might be a good idea.
0:37:45 And similarly, China being an entirely different world.
0:37:49 And so there's sort of a geo aspect of it.
0:37:49 Yeah.
0:37:53 All of a sudden our geopolitics nerdiness is helpful.
0:37:55 Is useful.
0:37:58 Um, Adam, you know, we were talking about sort of human knowledge.
0:38:01 Did you see yourself, with Poe, disrupting yourself in a sense?
0:38:06 Or talk about the bet that you made with Poe and the evolution there.
0:38:14 You know, I think we saw Poe more as just an additional opportunity than as disruption to Quora.
0:38:23 The way we got to it was, in early 2022, we started experimenting with using GPT-3 to generate answers,
0:38:31 and we compared them to the human answers and sort of realized that they weren't as good.
0:38:38 But what was really unique was that you could instantly get an answer to anything you wanted to ask about.
0:38:41 And we realized it didn’t need to be in public.
0:38:45 Actually, your preference would be to have it be in private.
0:38:54 And so we felt like there was just a new opportunity here to let people chat with AI in private.
0:38:54 Yeah.
0:39:00 And it seemed like you were also making a bet on how the different model players were going to evolve, that there was going to be diversity.
0:39:01 Yeah, yeah.
0:39:06 So, it was also a bet on diversity of model companies, which took a while to play out.
0:39:10 But I think now we're getting to the point where there are a lot of models.
0:39:17 There’s a lot of companies, especially when you go across modalities, you think about image models, video models, audio models.
0:39:21 Especially the reasoning and research models are sort of diverging.
0:39:24 Agents are starting to be their own source of diversity.
0:39:35 So we're lucky to now be getting into this world where there's enough diversity for a general interface aggregator to make sense.
0:39:37 But yeah, it was a bet early on.
0:39:47 It's surprising, actually, that even not particularly technical consumers do use multiple AIs.
0:39:52 Like, I didn't expect that. You know, people only use Google.
0:39:56 They never looked at Google and then Yahoo; very few people did that.
0:40:05 But now you talk to just average people and they'll say, yeah, I use ChatGPT most of the time, but Gemini is much better at these types of questions.
0:40:06 And so, yeah, interesting.
0:40:08 The sophistication of consumers has gone up.
0:40:15 And even people saying that the models have different personalities, and that they resonate with Claude more, you know, or whatever.
0:40:22 I want to return to this point we touched on earlier, Adam, about dark matter,
0:40:23 about how we're going to, you know, brute force it.
0:40:28 There's a lot of knowledge that people have that's sort of not categorized yet.
0:40:29 And it's not just tacit knowledge.
0:40:32 It's actually knowledge that you could ask them about and they could describe.
0:40:38 Because one question people have with LLMs is: we've already trained on the whole internet.
0:40:40 How much more knowledge is there?
0:40:42 Is it like 10x?
0:40:43 Is it like a thousand x?
0:40:56 What is your intuitive sense: if we do brute-force it and build this whole machine that gets all the knowledge out of humans into a data set we can then use,
0:40:59 how do we think about the upside from there?
0:41:13 You know, I think it’s very hard to quantify, but there’s a massive industry developing around getting human knowledge into the form where AI can use it.
0:41:23 So this is things like Scale AI, Surge, and Mercor, but there's a massive long tail of other companies just getting started.
0:41:39 And as intelligence gets cheaper and cheaper and more and more powerful, the bottleneck, I think, is increasingly going to be the data.
0:41:42 And what do you need to create that intelligence?
0:41:49 And so that's going to cause more and more of this to happen.
0:41:53 It might be that people can make more and more money by training AI.
0:41:56 It might be that more and more of these companies get started.
0:42:08 Or it might be that there are other forms of it. But I think the economy is going to naturally value whatever the AI can't do.
0:42:13 And what is the framework, the mental model, for what it can't do?
0:42:26 You know, you could ask an AI researcher, they might have a better answer, but to me, there's just information that's not in the training set.
0:42:34 And that is inherently going to be something that the AI can't do.
0:42:36 You know, the AI will get very smart.
0:42:37 It can do a lot of reasoning.
0:42:41 It could prove every math theorem at some point.
0:43:00 If it starts from some axioms that you give it. But if it doesn't know how this particular company solved this problem 20 years ago, if that wasn't in the training set, then only a human who knows that is going to be able to answer that question.
0:43:08 And so over time, how do you see Quora interfacing with that? Like, how are you running these in parallel?
0:43:09 How do you think about this?
0:43:09 Yeah.
0:43:15 So, I mean, with Quora, our focus is on human knowledge and letting people share their knowledge.
0:43:27 And that knowledge is helpful for other humans, and it's also helpful for AI to learn from.
0:43:42 We have relationships with some of the AI labs, and Quora will play the role that it is meant to play in this ecosystem, which is as a source of human knowledge.
0:43:44 At the same time, AI is making Quora a lot better.
0:43:58 We've been able to make major improvements in moderation quality, in ranking answers, and in just improving the product experience.
0:44:03 So it's gotten a lot better by applying AI to it.
0:44:04 Yeah.
0:44:07 And Amjad, let's talk about your future as well.
0:44:11 Obviously, you know, you had this business for a long time, focused on developers.
0:44:13 At one point, you were targeting, you know...
0:44:14 It was a non-profit.
0:44:14 No.
0:44:15 Exactly.
0:44:18 The edtech market. I believe you did two or three million in reported revenue.
0:44:23 And then recently TechCrunch, I know it's outdated, but I think it reported something like $150 million.
0:44:30 I know it's higher, since you've had this incredible growth as you've shifted the business model and the customer segment.
0:44:33 How do you think about the future of Replit?
0:44:39 I think Karpathy recently said that it's going to be the decade of agents.
0:44:42 And I think that's absolutely right.
0:44:54 As opposed to prior modalities of AI: when AI first came to coding, it was autocomplete with Copilot.
0:44:57 Then it became sort of chat, with ChatGPT.
0:45:07 Then I think Cursor innovated on the composer modality, which is editing large chunks of files.
0:45:08 But that's it.
0:45:12 What Replit innovated is the agent.
0:45:27 The idea of not only editing code, but provisioning infrastructure like databases, doing migrations, connecting to the cloud, deploying, having the entire debug loop: executing the code, running tests.
0:45:33 So just the entire development lifecycle loop happening inside an agent.
0:45:35 And that’s going to take a long time to mature.
0:45:48 So Agent in beta came in September 2024, and it was the first of its kind that did both code and infrastructure, but it was, you know, fairly janky; it didn't work very well.
0:45:57 And then Agent V1 came around December. It took another generation of models.
0:46:00 So you go from Claude 3.5 to 3.7.
0:46:08 3.7 was the first model that really knew how to use a computer, a virtual machine.
0:46:11 So unsurprisingly, it was also the first computer-use model.
0:46:14 Um, and these things have been moving together.
0:46:24 And so with every generation of models, we find new capabilities. Agent V2 improved on autonomy a lot.
0:46:26 Agent V1 could run for like two minutes.
0:46:30 Agent V2 ran for 20 minutes.
0:46:33 Agent V3, we advertised it as running for 200 minutes.
0:46:38 It just felt like it should be symmetrical, but it actually runs kind of indefinitely.
0:46:41 Like we’ve had users running it for 20 plus hours.
0:46:56 And the main idea there was to put a verifier in the loop. I remember reading a paper from NVIDIA about how they used DeepSeek to write CUDA kernels.
0:47:03 And they were able to run DeepSeek for like 20 minutes if they put a verifier in the loop, like being able to run tests or something like that.
0:47:07 And I thought, oh, okay, so what kind of verifier can we put in the loop?
0:47:12 Obviously you can put in unit tests, but unit tests don't really capture whether the app is working or not.
0:47:17 So we started kind of digging into computer use and whether computer use was going to be able to test apps.
0:47:21 Computer use is very expensive, and it's actually still kind of buggy.
0:47:27 And like Adam talked about, that's going to be a big area of improvement that'll unlock a lot of applications.
0:47:35 But we ended up building our own framework with a bunch of hacks and some AI research, and Replit's computer-use testing model is, I think, one of the best.
0:47:45 And once we put that into the loop, you can put Replit in high autonomy.
0:47:55 So we have an autonomy scale: you can choose your autonomy level, and then it just writes the code and goes and tests the application.
0:48:01 If there's a bug, it reads the error log, writes the code again, and can go for hours.
0:48:05 And I've seen people build amazing things by letting it run for a long time.
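The loop being described reduces to something like the following sketch. `generate_patch` is a hypothetical stand-in for the model call, and pytest stands in for Replit's much richer computer-use verifier:

```python
import subprocess

def run_checks() -> tuple[bool, str]:
    """Run the project's test suite; return (passed, error log)."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def agent_loop(task: str, generate_patch, max_steps: int = 50) -> bool:
    feedback = ""
    for _ in range(max_steps):
        generate_patch(task, feedback)   # hypothetical: model edits the code base
        passed, log = run_checks()       # the verifier in the loop
        if passed:
            return True
        feedback = log                   # feed errors back for the next attempt
    return False
```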
0:48:07 Now that needs to continue to get better.
0:48:12 That needs to get cheaper and faster.
0:48:16 So it's not necessarily a point of pride to run for a lot longer.
0:48:18 Like it should be as fast as possible.
0:48:20 So we’re working on that.
0:48:25 With Agent 4, there are a bunch of ideas that are going to be
0:48:32 coming out, but one of the big things is you shouldn't just be waiting for that one feature that you requested.
0:48:35 You should be able to work on a lot of different features.
0:48:38 So the idea of parallel agents is very interesting to us.
0:48:44 So, you know, you ask for a login page, but you could also ask for a Stripe checkout.
0:48:46 And then you ask for an admin dashboard.
0:48:57 The AI should be able to figure out how to parallelize all these different tasks, or that some tasks are not parallelizable, but it should also be able to do merges across the code.
0:49:02 So being able to do collaboration across AI agents is very important.
0:49:06 And that way the productivity of a single developer goes up by a lot.
0:49:11 Right now, even when you're using Claude Code or Cursor and others, there isn't a lot of parallelism going on.
0:49:32 But I think the next boost in productivity is going to come from sitting in front of a programming environment like Replit and being able to manage tens of agents, maybe at some point hundreds, but, you know, at least five, six, seven, eight, nine, ten agents, all working on different parts of your product.
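In code, the fan-out-and-merge idea might look like this sketch; `run_agent` and `merge_branches` are hypothetical helpers, and the merge step is the genuinely hard part mentioned above:

```python
from concurrent.futures import ThreadPoolExecutor

tasks = ["login page", "Stripe checkout", "admin dashboard"]

def run_agent(task: str) -> str:
    """Hypothetical: drive one agent on its own branch; return the branch name."""
    raise NotImplementedError

def merge_branches(branches: list[str]) -> None:
    """Hypothetical: reconcile the parallel work, git-style conflict resolution."""
    raise NotImplementedError

# Fan the independent features out to parallel agents, then merge the results.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    branches = list(pool.map(run_agent, tasks))
merge_branches(branches)
```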
0:49:49 I also think that UI and UX could use a lot of work. Right now, you're trying to translate your ideas into just a textual representation.
0:49:52 Like a PRD, right?
0:49:54 What product managers do, right?
0:49:58 Just product descriptions. But it's really hard.
0:50:04 And you see it in a lot of tech companies: it's really hard to align on the exact features, because language is fuzzy.
0:50:10 And so I think there’s a, there’s a world in which you’re interacting with AI in a more multimodal fashion.
0:50:19 So opening up, like, a whiteboard and being able to draw and diagram with AI and really work with it,
0:50:30 like you work with a human. And then the next stage of that: having better memory, inside the project but also across projects.
0:50:47 And perhaps having different instantiations of Replit Agent, where this agent is really good at, say, Python data science, because it has all the information and skills and memories about my company and what it's done in the past.
0:50:57 So I'll have a data-analysis Replit agent, and I'll have a front-end Replit agent, and they have memory over multiple projects and over time and over interactions.
0:51:01 And maybe they sit in your Slack, like a worker, and you can talk to them.
0:51:10 So again, I can keep going for another 15 minutes about a roadmap that could span three to four to five years, perhaps.
0:51:19 But this agent phase that we're in, there's just so much work to do, and it's going to be a lot of fun.
0:51:20 Yeah.
0:51:32 I was talking to one of our mutual friends, a co-founder of one of these, you know, big productivity companies, who leads a lot of their R&D, and he's like, man, during the week these days, I'm not even talking to
0:51:33 humans anymore as much.
0:51:36 I'm just, you know, using all these agents to build.
0:51:39 So living in the future to some degree is already the present.
0:51:47 There's something interesting about that: are people talking to each other less at companies, and is that a bad thing?
0:51:55 So, you know, I'm starting to think more about the second-order effects of things like that.
0:52:02 You know, will it make it awkward for, again, the new grads? I feel so bad for them.
0:52:14 Like, if people are not sharing as much knowledge between each other, or it's not culturally easy to go ask for help because you're expected to be able to use AI agents,
0:52:19 there are some cultural forces there that I think need to be reckoned with.
0:52:20 Yeah.
0:52:23 I think a lot of tough cultural forces for zoomers these days.
0:52:24 Yes.
0:52:27 Let's start gearing towards closing here.
0:52:35 Obviously you guys are focused on running your companies, but to stay current on the AI ecosystem, you also make angel investments as well.
0:52:38 Where are you guys most excited?
0:52:40 You know, we haven't talked about robotics.
0:52:48 Are you guys bullish on robotics in the near term, or are there emerging categories or use cases or spaces that you're looking to make more investments in?
0:52:55 Or ones you have made some investments in already? I think vibe coding generally is just unbelievably high-potential.
0:53:00 Just the idea that, you know, this is underhyped even still?
0:53:01 I think so.
0:53:11 I think, you know, just opening up the potential of software to the mainstream, to everyone.
0:53:22 And yeah, I actually think one reason it's underhyped is that the tools are still very far from what you can do as a professional software engineer.
0:53:40 And if you imagine that they're going to get there, and I think there's no reason why they wouldn't, it'll take a few years, but then everyone in the world is going to be able to create things that would have taken a team of a hundred professional software engineers.
0:53:44 That's just going to massively open up opportunities for everyone.
0:53:55 So I think Replit is a great example of this, but I think there will also be use cases other than just building applications that this creates.
0:54:05 By the way, just on that note: if you were going to Stanford or Harvard today, in 2025, just entering, would you major again in computer science, or just focus on building something?
0:54:17 I think I would. I mean, I went to college starting in 2002, and it was right after the dot-com bubble had burst and there was a lot of pessimism.
0:54:27 And I remember my roommate, his parents had told him, don't study computer science, even though that was something he really liked.
0:54:41 And I just kind of did it because I liked it. And I think the job market is definitely worse than it was a few years ago.
0:54:56 At the same time, I think having the skills to understand the fundamentals of what's possible with algorithms and data structures actually really helps you in managing agents when you're using them.
0:55:01 And I'm guessing that it will continue to be a valuable skill in the future.
0:55:05 I also think the other question is like, what else are you going to study?
0:55:10 And for every single thing you could imagine, there's an argument for why it's going to be automated.
0:55:16 So I think you might as well study what you enjoy, and I think this is as good as anything.
0:55:17 Yeah.
0:55:26 I think there's a lot to get excited by. One thing is maybe kind of random, but I get really fired up to see mad-science
0:55:30 experiments, like the DeepSeek-OCR one that came out the other day.
0:55:31 Did you, did you see it?
0:55:41 It’s wild, where, and correct me if I’m wrong because I only looked at it briefly, but basically you can get a lot more economical with the context window.
0:55:50 So if you have, like, a screenshot of the text instead of the actual text. I’m not the right person to be correcting you.
0:55:55 But there are definitely some really interesting things there.
0:55:55 Yeah.
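To make the compression idea concrete, here is a rough back-of-envelope sketch. Both characters-per-token ratios are illustrative assumptions, not DeepSeek-OCR’s published numbers:

```python
# Back-of-envelope for "optical" context compression: feed a rendered page
# as image tokens instead of raw text tokens. Both ratios below are
# illustrative assumptions, not DeepSeek-OCR's published numbers.

CHARS_PER_TEXT_TOKEN = 4     # typical BPE average for English prose
CHARS_PER_VISION_TOKEN = 40  # assumed: one image patch "covers" ~40 chars

def token_counts(num_chars: int) -> tuple[int, int]:
    """Token cost of the same content as raw text vs. as a rendered image."""
    return num_chars // CHARS_PER_TEXT_TOKEN, num_chars // CHARS_PER_VISION_TOKEN

text_tokens, vision_tokens = token_counts(4000)  # roughly one dense page
print(f"as text: {text_tokens} tokens; as image: {vision_tokens} tokens; "
      f"~{text_tokens / vision_tokens:.0f}x fewer")
```

Even with made-up ratios, the point stands: if one vision token can stand in for many characters, a rendered page costs far fewer context tokens than the raw text.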
0:56:15 I saw another thing on Hacker News the other day about text diffusion, where someone made a text diffusion model: instead of doing, you know, denoising, he would take a single BERT instance, mask different words, and just predict those different tokens.
0:56:21 And so we have a lot of components, and I don’t think people think a lot about that.
0:56:38 We now have the base pre-trained models, we have all these RL reasoning models, we have encoder-decoder models, we have diffusion models; there are all these different things, and you can mix them in different ways.
0:56:41 I feel like there isn’t a lot of that.
0:56:49 I mean, it’d be great if a new research company came out that was not trying to compete with OpenAI and things like that,
0:56:56 but instead was just trying to discover how to put these different components together in order to create a new flavor of these models.
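As a flavor of that kind of tinkering, here is a toy sketch of the BERT-as-text-diffusion idea: start from all-[MASK] tokens as the “noise” and iteratively commit the model’s most confident prediction. This is a sketch of the general technique, not the Hacker News author’s code; the model choice and the one-token-per-step schedule are assumptions:

```python
# Toy "text diffusion" with a masked LM: start from pure noise (all [MASK])
# and iteratively commit the most confident token, one position per step.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def iterative_unmask(length: int = 8) -> str:
    ids = torch.tensor([[tok.cls_token_id]
                        + [tok.mask_token_id] * length
                        + [tok.sep_token_id]])
    for _ in range(length):
        with torch.no_grad():
            logits = model(input_ids=ids).logits   # (1, seq, vocab)
        conf, pred = logits.softmax(-1).max(-1)    # best token per position
        conf = conf.masked_fill(ids != tok.mask_token_id, -1.0)
        pos = conf.argmax()                        # most confident masked slot
        ids[0, pos] = pred[0, pos]                 # "denoise" that position
    return tok.decode(ids[0], skip_special_tokens=True)

print(iterative_unmask())
```

The output is usually incoherent, since BERT was never trained for generation; the point is that this is exactly the kind of cheap, weird component-mixing experiment being described.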
0:57:03 Yeah, in crypto they talk about composability and mixing primitives together; in AI, maybe there needs to be more experimentation.
0:57:17 There’s less playing around, I’ve found. I remember in the Web 2.0 era, when we were playing around with JavaScript and what browsers could do, what web workers could do, there were a lot of really interesting, weird experiments.
0:57:26 I mean, Replit was born out of that. The original version of Replit was open source, before the company, and my interest was: can you compile C to JavaScript, right?
0:57:34 That was one of the interesting things, and that became WASM; at the time it was Emscripten, and it was such a nasty hack.
0:57:45 But I think we’re in an era of Silicon Valley that is very get-rich driven.
0:57:47 And that makes me a little sad.
0:57:50 And that’s partially why I moved the company out of SF.
0:58:02 I feel like the culture in SF has gotten that way. Maybe I wasn’t there, but during the dot-com era, a lot of people talked about how it was sort of get-rich-fast, or the crypto thing.
0:58:16 So I feel like there needs to be a lot more tinkering, and I would love to see more of that, and more companies getting funded that are trying to do something a little more novel, even if it doesn’t mean a fundamentally new model.
0:58:18 Last question.
0:58:21 Amjad, you’ve been into consciousness for a long time.
0:58:32 Are you bullish that we will, via some of this AI work or scientific progress elsewhere, make some progress in getting across this hard problem?
0:58:37 You know, something happened recently which is interesting.
0:58:45 Claude 4.5 seems to have become more aware of its context length.
0:58:51 So as it gets closer to the end of the context, it starts becoming more economical with tokens.
0:58:59 It also looks like its awareness of when it’s being red-teamed or in a test environment jumped significantly.
0:59:01 And so there’s something happening there that’s quite interesting.
0:59:13 Now, I think, in terms of the question of consciousness, it is still fundamentally not a scientific question.
0:59:28 And we’ve sort of given up on trying to make it scientific. But I think this is also the problem that I talked about, with all the energy going into LLMs.
0:59:37 No one is really trying to think about the true nature of intelligence, the true nature of consciousness.
0:59:43 And there are a lot of really core questions.
1:00:06 Like one of my favorites is Roger Penrose’s The Emperor’s New Mind, where he wrote about how everyone in the philosophy-of-mind space, and perhaps the larger scientific ecosystem, started thinking about the brain in terms of a computer.
1:00:36 And in that book, he tried to show that it is fundamentally impossible for the brain to be a computer, because humans are able to do things that Turing machines cannot do, or that Turing machines fundamentally get stuck on, such as basic logic puzzles
1:00:42 that we’re able to detect, but that there’s no way to encode in a Turing machine.
1:00:48 For example, “this statement is false,” you know, those old logic puzzles.
1:01:06 And anyway, it’s a complicated argument, but if you read that book, or many others, there’s a core strain of arguments in the theory of mind about how computers are
1:01:09 fundamentally different from human intelligence.
1:01:23 And so, yeah, I’ve been very busy, so I haven’t really updated my thinking too much about that, but I think there’s a huge field of study there that is not being studied.
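For readers who want the shape of that argument in code: the self-reference trap being gestured at has a simpler, well-known cousin in the halting problem. The sketch below is illustrative; the names `halts` and `liar` are mine, and Penrose’s actual argument runs through Gödel incompleteness rather than this construction:

```python
# A minimal sketch of the self-reference trap. Illustrative only, not from
# Penrose's book. Suppose, for contradiction, we had a perfect halting oracle.

def halts(program, arg) -> bool:
    """Pretend oracle: answers whether program(arg) halts.
    Assumed perfect, for the sake of contradiction."""
    raise NotImplementedError("no such oracle can exist")

def liar(program):
    """Does the opposite of whatever the oracle predicts about it."""
    if halts(program, program):
        while True:   # oracle said we halt, so loop forever
            pass
    # oracle said we loop forever, so halt immediately

# Asking halts(liar, liar) forces a contradiction: liar(liar) halts exactly
# when the oracle says it doesn't. So no such halts() can exist. The Penrose
# claim is that a human can step outside the formal system and "see" such
# truths, while the machine, being inside it, cannot.
```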
1:01:27 If you were a freshman entering college today, would you study philosophy?
1:01:27 I would do that.
1:01:30 I would definitely study philosophy of mind.
1:01:40 I would probably go into neuroscience, because I think those are the core questions that become very, very important as AI continues to take on more of jobs and the economy and things like that.
1:01:41 That’s a great place to wrap.
1:01:43 Amjad, Adam, thanks for coming on the podcast.
1:01:43 Thank you.
1:01:49 Thanks for listening to this episode of the A16Z podcast.
1:01:56 If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review and share it with your friends and family.
1:02:07 For more episodes, go to YouTube, Apple Podcasts, and Spotify, follow us on X at a16z, and subscribe to our Substack at a16z.substack.com.
1:02:10 Thanks again for listening and I’ll see you in the next episode.
1:02:24 As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund.
1:02:29 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
1:02:37 For more details, including a link to our investments, please see a16z.com/disclosures.
Adam D’Angelo (Quora/Poe) thinks we’re 5 years from automating remote work. Amjad Masad (Replit) thinks we’re brute-forcing intelligence without understanding it.
In this conversation, two technical founders who are building the AI future disagree on almost everything: whether LLMs are hitting limits, if we’re anywhere close to AGI, and what happens when entry-level jobs disappear but experts remain irreplaceable. They dig into the uncomfortable reality that AI might create a “missing middle” in the job market, why everyone in SF is suddenly too focused on getting rich to do weird experiments, and whether consciousness research has been abandoned for prompt engineering.
Plus: Why coding agents can now run for 20+ hours straight, the return of the “sovereign individual” thesis, and the surprising sophistication of everyday users juggling multiple AIs.
Resources:
Follow Amjad on X: https://x.com/amasad
Follow Adam on X: https://x.com/adamdangelo
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.