AI transcript
0:00:06 It means every culture has their own AGI.
0:00:12 And eventually, every culture has their own social network and cryptocurrency and AI.
0:00:16 You know, the AI is sort of like their oracle that’s at the center of society.
0:00:21 And they’ve got their deterministic law with cryptocurrency and their probabilistic guidance with AI.
0:00:22 And the social network that binds the whole thing together.
0:00:29 So those three technologies are like social technologies that are almost like the reactor core of the network state of a modern internet-first society.
0:00:34 The way we talk about AI often reveals more about us than the technology itself.
0:00:42 In this episode, I’m joined by technologist and founder Balaji Srinivasan, alongside A16Z general partner Martin Casado,
0:00:49 to unpack how our language, whether we frame AI as a god, a swarm, or a tool, shapes our hopes and fears.
0:00:54 Balaji, known for his work in crypto and network states, also has deep roots in machine learning.
0:00:58 Today, we explore where AI discourse has gone off course,
0:01:00 what today’s systems can and can’t do,
0:01:03 and how different cultures might build very different AIs,
0:01:05 each reflecting their own values and constraints.
0:01:09 It’s a conversation about belief, control, and the systems we build.
0:01:11 Let’s get into it.
0:01:16 As a reminder, the content here is for informational purposes only.
0:01:20 Should not be taken as legal, business, tax, or investment advice,
0:01:22 or be used to evaluate any investment or security,
0:01:27 and is not directed at any investors or potential investors in any A16Z fund.
0:01:32 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:01:35 For more details, including a link to our investments,
0:01:39 please see a16z.com/disclosures.
0:01:47 Martin and I were talking offline about how amazing your thread was on AI.
0:01:52 And, you know, often you’re a crypto guy, you’re a network state guy, but you’re a technologist.
0:01:52 Sure.
0:01:54 And you’ve been thinking about AI for quite some time.
0:01:57 So why don’t you trace us through your evolution a bit?
0:01:58 Totally.
0:02:02 So, you know, actually, Martin and I are roughly contemporaries at Stanford.
0:02:05 I got my PhD in 05, 06.
0:02:07 I got my PhD in 07.
0:02:08 Yeah.
0:02:09 So we’re roughly…
0:02:11 We overlapped almost completely.
0:02:11 Yeah.
0:02:12 Yes, that’s right.
0:02:16 I’m a little gray down over here, and Martin’s a little gray up over here, and we’re complementary grays.
0:02:20 We both have that ambiguous kind of Middle Eastern look.
0:02:21 Yes, exactly.
0:02:21 That’s exactly…
0:02:22 Although neither of us are Middle Eastern.
0:02:23 That’s correct.
0:02:24 That’s exactly right.
0:02:25 That’s right.
0:02:28 So we’re both men of a certain age, I think is fair, as the phrase goes.
0:02:40 And the funny thing is, you know, I taught machine learning and computational statistics and so on in the context of genomics at Stanford for, you know, the mid-2000s and founded a DNA sequencing company.
0:02:48 And really, I would say, like, my original career expertise for 10 years, it’s all I was doing was, in a sense, ML full time.
0:03:03 But then I got into crypto right in the early-to-mid 2010s, just as the deep learning revolution was getting underway with ImageNet and that whole series of papers.
0:03:10 So, just to sketch my thought process: I have a foundation in probability and stats and multivariable calculus and blah, blah, blah.
0:03:12 And so I'm conversant with the space.
0:03:17 The thing I will say, and I'm sure Martin has thoughts on this, and then I'll get to the specific tweet.
0:03:25 In the 2010s, you know, we were all tracking diffusion models and language models and so forth and it was improving.
0:03:31 But, you know, like style transfer, for example, the mid-2010s was working by that time, right?
0:03:35 And, you know, GPT-2 and so on was interesting.
0:03:38 It could kind of blurt out like a sentence.
0:03:47 But I admit that I never really thought it was going to get past, like, Markov chain-y, like, stuff, you know?
0:03:54 I was surprised at how much better GPT got. Like, DALL-E was a hint in early 2022.
0:03:58 But I was surprised at how coherent ChatGPT was.
0:03:59 I think everybody was.
0:04:04 But it was like a huge jump up from what it was before in terms of sort of being Markov chain.
0:04:13 And so I’ve been kind of observing over the last, you know, two years, being originally very deep in machine learning, then very deep in crypto.
0:04:16 But, you know, you can’t be deep in everything.
0:04:18 You can’t be deep in, or maybe Elon can.
0:04:18 Okay.
0:04:26 Aside from Elon, it’s pretty hard to be at the cutting edge of so many different fields at the same time because they’re deep fields which have a lot going on.
0:04:34 So with respect to AI, there’s, like, several realizations I’ve had over the last few years in terms of the unarticulated limitations of the space, right?
0:04:41 Some of these I think I sort of, I think I came up with relatively early in the history of modern AI.
0:04:43 And others I think I’ve come to more recently.
0:04:45 But let me kind of enumerate them in no particular order.
0:04:46 Martin, jump in any time.
0:05:04 So the first is that something that motivated, I think, both Eliezer and Altman, a bunch of folks who built OpenAI, was almost like the same kind of sentiment that motivated people to build, like, the Sistine Chapel, which was sort of like an implicit Abrahamic monotheism, which was like summoning God, right?
0:05:05 Like, you know, communing with God.
0:05:14 But in the Abrahamic God sense, right, of also the vengeful God who would turn you into paperclips, like turning you into pillars of salt, right?
0:05:16 That was an implicit thing that was behind, right?
0:05:21 Because when they talk about AGI, they talk about AGI as implicitly a unitary thing.
0:05:30 We will get to AGI and then it will go to infinity and will be, like, you know, raptured, singularity kind of thing, even though that’s implicit.
0:05:39 And because I’ve been thinking so much about crypto and other kinds of things, I was like, well, there’s a different sort of implicit school of thought, which is polytheistic AGI, right?
0:05:43 Do we have, rather than the vengeful God, do we have war of the gods?
0:05:54 Do we have many superhuman intelligences, all from different cultural backgrounds that have the mores and the values imprinted on them?
0:05:59 And very early on, I had a tweet that said, at a minimum, there’s going to be American AI and Chinese AI.
0:06:05 And if we’re lucky, there’ll be decentralized, open source, you know, crypto-style AI.
0:06:09 Because at the time, the American AI was woke, highly woke, because it’s 2022.
0:06:14 And I knew that China was going to stop at nothing to copy it, so I knew we were going to get at least two.
0:06:18 And I thought we might get N if we were lucky enough to get decentralized AI.
0:06:25 And that wasn’t obvious at that time, because the cost of training models was so high, and OpenAI was so far ahead.
0:06:26 It took a while before other people caught up.
0:06:33 But now it’s very clear that we’re going to have lots and lots of high-quality, open-source, decentralized models.
0:06:38 Like a new one comes out almost every week, you know, and China’s going to work hard on this.
0:06:40 It’s a big thing, like all the DeepSeek models.
0:06:50 So Polytheistic AGI, I think, is one very useful macro frame, which takes away some of the sort of AI apocalypse tones, I think.
0:06:59 Because I don’t think the image generators or text chatbots are going to cause the, you know, destruction that people had.
0:07:03 People thought they were going to, like, bust out and, you know, do systems programming.
0:07:04 And Martin can speak about that.
0:07:07 Martin and I kind of were talking about that.
0:07:07 There are certain limitations.
0:07:11 It’s now more clear, actually, that they’re worse at systems programming than they are at visuals.
0:07:12 We’ll come back to that.
0:07:14 So A, Polytheistic AGI.
0:07:14 And so what does that mean?
0:07:17 That means every culture has their own AGI.
0:07:27 And eventually, every culture has their own social network and cryptocurrency and AI, which is, you know, the AI is sort of like their oracle that’s at the center of society.
0:07:32 And they’ve got their deterministic law with cryptocurrency and their probabilistic guidance with AI.
0:07:34 And their social network that binds the whole thing together.
0:07:41 So those three technologies are like social technologies that are almost like the reactor core of the network state of a modern internet-first society, right?
0:07:43 And they’ll be customized for each different kind of group.
0:07:45 And certain things will be disallowed and allowed.
0:07:49 Like image generation might not be allowed in subcultures or NSFW or whatever.
0:07:50 All that kind of stuff would be tweaked.
0:07:52 So, okay, that’s one concept.
0:08:01 The second concept, you know, Eliezer Yudkowsky, who, by the way, you know, even if I disagreed with a lot of his stuff, did do a lot to promote AI and people getting into it and so forth.
0:08:10 So, even if I disagreed with bombing the data centers and various other kinds of things, I give him significant partial credit for getting people motivated to look into the space.
0:08:12 He was definitely into it in that sense.
0:08:13 So, like, directionally, there was something.
0:08:15 You know, I was trying to see the right side of things.
0:08:30 But a second big concept that I think I really disagree with Eliezer on, that I think is being borne out, is the idea that AI could just cogitate for millions of years and figure things out and it could outmaneuver you all the time.
0:08:39 And we know that’s not true because turbulence, chaos, cryptographic equations are not like that, right?
0:08:47 You can come up with turbulent systems, chaotic systems, where you simply, with finite precision arithmetic, cannot forecast out indefinitely.
0:08:49 In fact, you get fracturing and breaking.
0:08:58 And cryptographic hashes are set up in such a way as well to be hypersensitive to initial conditions, where a small change of one character can get a totally different MD5 sum or something like that.
0:09:13 So, those were situations, and if you wanted to, you’d come up with a thought experiment where you inserted turbulence or chaos into your decision process, sort of like you shake a turbulent clock or something like that before throwing a pitch, and the AI wouldn’t be able to predict your actions, right?
0:09:18 And that’s like just a simple experiment in real life, just, you know, the flow of a fluid is turbulent, you know?
0:09:22 So, that actually put bounds on what AI can predict, right?
0:09:27 Quantitative, physical, and mathematical bounds on what an AI can predict, right?
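[Illustrative sketch of both points, in Python: a one-character change gives an unrelated MD5 digest, and a toy chaotic map shows why finite-precision forecasts can't be extended indefinitely. The inputs and parameters here are made up for illustration.]

```python
import hashlib

# Avalanche effect: change one character, get a completely different digest.
print(hashlib.md5(b"send 1.00 BTC to alice").hexdigest())
print(hashlib.md5(b"send 1.01 BTC to alice").hexdigest())

# Sensitivity to initial conditions: the logistic map x -> 4x(1-x) is chaotic,
# so two starting points differing by 1e-12 decorrelate within a few dozen steps.
x, y = 0.4, 0.4 + 1e-12
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 20 == 0:
        print(f"step {step}: |x - y| = {abs(x - y):.3e}")
```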
0:09:29 Can I just add just a little bit of color here?
0:09:29 Sure.
0:09:32 I think this is great, but I think we need to call out.
0:09:45 So, the way that you describe AI as like gods and monotheistic and polytheistic, it’s great at describing how human beings view AI, right?
0:09:45 Sure.
0:09:51 But in reality, we’re talking about software running on computers that are bound by those limitations, right?
0:09:52 So, I don’t view them as gods personally.
0:09:56 You know, I got my PhD in systems, so I view them as like system software.
0:10:04 And I actually think the original sin in all of this AI, you know, anthropomorphic fallacy started with Bostrom, right?
0:10:10 It was one of these kind of thought experiments where, you know, when Nick Bostrom wrote Superintelligence was what, 2014?
0:10:15 And he was talking about this platonic ideal of AI.
0:10:30 And this platonic ideal of AI just happened to be able to recursively self-improve and just happened to like have these kind of like super physical things that no AI today has.
0:10:39 But it happened that the conversation around AI started then, and then it just coincidentally, we had these LLMs show up four years later.
0:10:46 And so, somehow, our thought process on these two things dovetailed, right?
0:10:55 And so, people would take all of these kind of mental ruminations, these thought experiments, and they would apply it to like actual systems.
0:11:07 And I think the problem with doing that, the problem with taking some platonic ideal, whether it’s Bostrom’s or the Abrahamic view of God or any kind of religious view, is that it is very blinkered to the limitations.
0:11:09 And you’ve pointed out a great limitation.
0:11:11 We’ve been doing computer simulation forever.
0:11:17 We totally know the limits of simulating physical phenomena, particularly chaotic systems.
0:11:19 We have actually very hard bounds on these.
0:11:25 By the way, it’s not just like, you know, the limits on the size of like a computer word or an integer or something like that.
0:11:30 There are actually like very strong, you know, limits on time and the amount of compute necessary, etc., right?
0:11:31 And so, we know these.
0:11:33 And so, you can talk about these in two ways.
0:11:35 I think for this conversation, we should do this.
0:11:41 It’s like, well, you’re so good at talking about like, what is this platonic ideal and how should we have a mental kind of model for this?
0:11:43 For like the non-computer specialists.
0:11:48 And what I would love to do as we go through this conversation is talk about like, actually, these are still bound by computer systems.
0:11:50 We know the limitations of computer systems.
0:11:52 And so, let’s see how those bound it.
0:11:54 And you just beautifully did both of those things in the same one.
0:11:57 I just want to make sure that we tease apart both of those things as we have.
0:11:57 Totally, totally.
0:12:00 I feel I can kind of speak both languages here.
0:12:05 Where I understand where those guys are coming from, because I kind of am like also a tech radical.
0:12:06 You know what I mean?
0:12:06 Right?
0:12:09 But I’m also a tech pragmatist.
0:12:11 So, I think I straddle that boundary.
0:12:15 The danger is, is that we don’t say that this is a platonic ideal.
0:12:17 People will map it to existing systems.
0:12:17 Totally, totally.
0:12:24 That’s exactly what happened in 2020, 2021, which is they took this thought experiment that started with Bostrom.
0:12:25 And it was a thought experiment.
0:12:28 If you go back to the original book, you’re like, listen, this has nothing to do with real systems.
0:12:30 And they applied it to a real system.
0:12:32 And I think that was kind of the original sin.
0:12:32 I know, I know.
0:12:36 Well, but let me poke on that a little bit and then also defend your point.
0:12:38 The Turing test was a thought experiment.
0:12:40 That was a platonic ideal.
0:12:44 No reference to neural networks, no reference to implementation details.
0:12:52 And yet, it served as something that went from a thought experiment to an applied thing with, you know, the Lebschen-Gauspec test.
0:12:54 The CAPTCHA was like V0 of that, you can argue.
0:12:56 It became commercially important.
0:13:00 And now, obviously, AI has blown past the Turing test.
0:13:02 It can, you know, be people in this.
0:13:04 And then, you know, the Chinese room.
0:13:06 John Searle’s thing, right?
0:13:07 About machine translation.
0:13:09 Another thing that was like a platonic ideal.
0:13:16 And there’s other things that are like that, like six degrees of separation, which social networks actually made real, right?
0:13:19 So, I’m not necessarily against platonic ideals.
0:13:21 Thought experiments are very, very important.
0:13:27 I just think that we need to be very clear when we’re having a conversation not to conflate them with, like, the actual systems.
0:13:31 And I think that’s the thing that we’ve kind of fallen into in this conversation.
0:13:32 Not you and I.
0:13:34 I just think the broader discourse outside of us has fallen into it.
0:13:35 That’s right.
0:13:39 So, now, your point, which is a very good one, is these are real systems.
0:13:41 And they actually have real limitations.
0:13:52 I think one of the more interesting things for me over the last few months and years has been defining exactly where those limitations are.
0:13:56 Because I think where they landed was kind of counterintuitive, right?
0:13:57 100%.
0:13:59 Let me give some of them that I think about.
0:14:04 One of them is that you have this decentralized AI rather than AGI.
0:14:10 That alone, I think, kind of nukes a bunch of the concepts of we’ll get to AGI and just win.
0:14:16 Because it’s kind of clear that there’s just like a rapid onrush of new models and it’s like more of a continuous kind of thing.
0:14:19 The fast takeoff scenario didn’t happen, right?
0:14:22 On the other hand, can an AI write a sonnet?
0:14:23 It can, right?
0:14:24 It can do it better than most humans.
0:14:25 Can it write a screenplay?
0:14:27 It can do that, again, better than most humans.
0:14:30 There’s a lot of things that we thought as maybe harder.
0:14:36 We thought maybe like locomotion or something would be easier to solve than what we think of as higher cognitive functions.
0:14:40 But it’s actually the locomotion that’s still harder in some ways, right?
0:14:46 I think that one now in retrospect is easy to explain via economic equilibrium.
0:14:48 We just didn’t have the right model.
0:14:55 Well, so, for example, when you’re competing with a human brain on 3D navigation in space,
0:15:06 you’re competing with whatever the 4 million-year-old mammalian brain and a body that’s been running away from predators and picking berries for 4 million years.
0:15:09 It’s incredibly highly evolved.
0:15:16 When you’re competing with a prefrontal cortex, which is like, you know, the language, learning, and creativity, how old is it?
0:15:17 250,000 years?
0:15:22 And so, if you take this question from an economic equilibrium standpoint, you’re saying,
0:15:27 well, listen, do you want to compete with the most evolved system that’s solving the much more difficult problem,
0:15:32 which is much higher dimensionality, has to deal with like, you know, chaotic nonlinear systems, like you said,
0:15:43 or do you want to deal with the very new evolution that deals with kind of a much denser space that actually does a pretty good job with linear interpolation?
0:15:46 You know, the problems that actually works very well with linear interpolation, etc.
0:15:48 And so, I think you’re totally right.
0:15:49 It was not obvious.
0:15:53 I mean, listen, AI has always been solving what we thought was the easier problem.
0:15:58 But our fallacy is like easier for humans because we’re really good at it, because we’ve been doing it for a really long time.
0:16:04 And it turns out, the harder problems are just harder for us, because we’ve only been doing them more recently.
0:16:10 So, this is a great case where, like, our intuitions about which problems were easy to solve were the wrong ones,
0:16:13 because of our own kind of anthropomorphic fallacy, right?
0:16:15 Our own notions of what problems are easy and hard.
0:16:17 Yeah, I mean, okay, true.
0:16:23 I would say, one of the surprises I had, that’s why it’s interesting when you said economics,
0:16:31 one of the surprises I had with GPT-3 and ChatGPT was how far you could get with language.
0:16:31 Yeah.
0:16:36 Like, before the ChatGPT moment, it wasn’t obvious to me.
0:16:44 Then, afterwards, you’re like, okay, language is sophisticated enough to encode almost any concept about the world, right?
0:16:45 Or rather.
0:16:48 Well, if I put you in a very dark room.
0:16:49 Yeah.
0:16:54 And that’s an arbitrary construction, and I describe how you navigate and pick something up,
0:16:55 I think you’d have a very tough time doing it.
0:16:57 No, no, no, I know.
0:17:01 What I’m saying, though, is basically, if you generalize language to be streams of symbols,
0:17:09 right, you could send telemetry to some, like, basically, I think I was surprised by
0:17:15 how many concepts were encoded in language that could be learned, even to the point of, like,
0:17:20 rough world models of, like, you know, the map of the earth or the proximity of things.
0:17:22 You can back out those kinds of things.
0:17:27 The distinction is, is it could have been the human being, the human mind that looked at the world,
0:17:31 did the reasoning, and created the world model, and that is cached in language.
0:17:33 Yeah, that’s what I’m saying.
0:17:34 Okay, great.
0:17:34 Okay.
0:17:38 But the going from the world to the world model was the human, and then everything else,
0:17:39 and then the language, okay, great.
0:17:40 I agree.
0:17:40 That’s right.
0:17:46 But it was a little surprising to me that you could get that far with language models,
0:17:52 as opposed to, like, spatial reasoning; that predict-next-token would get as far as it did.
0:17:55 That was surprising to me that that was the angle.
0:18:00 The reason is, you and I both did so much stuff on Markov chains and conditional random fields
0:18:03 and all that kind of stuff, and, of course, it transfers to a different architecture,
0:18:08 but I just wouldn’t have believed that that method taken to, I mean,
0:18:12 another thing I think was very counterintuitive for me in the late 2010s was double descent,
0:18:17 because as a classical machine learning guy, like, that’s just very counterintuitive
0:18:20 that you could go past overtraining back into, like, you know, good regime.
0:18:26 So I want to actually talk about one thing you did say, which is self-replication, right?
0:18:29 I don’t think of that as a forever constraint on AI.
0:18:34 I think of that as a today constraint, and the reason I think of that as a today constraint is
0:18:37 they’re not embodied, so they’re not scripting robots.
0:18:41 Because they’re not scripting robots, they can’t build data centers and mines
0:18:44 and replicate themselves and so on and so forth, right?
0:18:51 And, you know, the whole concept of consciousness initially was that consciousness evolved
0:18:56 so that, like, you know, you’re running away from, like, a bore or something like that,
0:18:59 and you have a model of yourself, and then there’s, like, a branch ahead.
0:19:03 And if you have a model of yourself, you know whether you’ll fit under that branch or not.
0:19:07 Whereas if you just had a generic model of, like, the more self-consciousness you have,
0:19:12 you can sort of simulate your run under that branch, whether you’re going to die or survive, right?
0:19:18 And so that’s, like, one theory on why consciousness arose to help your survival replication.
0:19:19 Should that help with goal setting?
0:19:23 Yeah, you stand outside of yourself to be able to see the experiment, yeah.
0:19:24 That’s right.
0:19:26 So right now, AI does not really have goal setting.
0:19:28 It doesn’t have reproduction.
0:19:30 It doesn’t have embodiment.
0:19:32 And it can’t act independently of humans.
0:19:36 This is one of the big things I think that people were really scared about in late 2022,
0:19:40 and they’ve calmed down on this, that you and I both poked on, Martin, I think,
0:19:45 was, is this thing going to jump out of the box and, you know, code itself?
0:19:49 Now we laugh at that, but the reason that I think that hasn’t happened
0:19:52 is AI can’t prompt itself yet.
0:19:58 And prompting, I argue, is actually a much harder thing than people realize
0:20:01 because, did we talk about my analogy of, like, the spaceship?
0:20:02 Talk about it.
0:20:03 There’s something worth going through.
0:20:06 Okay, so let’s say you have a really fast spaceship,
0:20:07 like close to speed of light or something.
0:20:11 You still have to point at the phi and psi coordinate
0:20:14 on the surface of a sphere, you know, in coordinate space
0:20:15 as to where you’re going to point that ship.
0:20:18 And if you’re going to take it on a journey,
0:20:20 then you’ve got waypoints like, here’s this heading,
0:20:22 and here’s that heading, and so on and so forth.
0:20:25 Like a series of phi-psi pairs on the surface of a sphere.
0:20:25 Okay.
0:20:28 And that’s only two floating point variables, you know, right?
0:20:33 By contrast, if you take, I mean, how high dimensional is the vector
0:20:36 that you’re giving as input when you talk about a prompt, right?
0:20:41 If you just take UTF-8 code points, I mean, or just even ASCII, right?
0:20:45 And you have like a few words, that’s much higher dimensional
0:20:48 than just a, like a vector of two floating point variables, right?
0:20:49 Yeah.
0:20:53 So you can get to a very high level of dimensionality
0:20:57 in terms of direction vector you’re pointing this AI spaceship in, right?
0:21:02 And so a prompt is a very high dimensional direction vector,
0:21:06 even if you account for the fact that many potential prompts
0:21:09 of just strings of random characters wouldn’t be interesting.
0:21:11 It’s still a very, very high dimensional vector.
0:21:13 So it’s like you’ve got a fast spaceship,
0:21:15 but you still have to point it in a direction to go somewhere.
0:21:17 I think that’s a good analogy, right?
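[Rough arithmetic behind the analogy: a heading on a sphere is two floats, while even a short prompt lives in an astronomically larger space. The 40-character length and printable-ASCII alphabet are arbitrary choices for illustration.]

```python
import math

heading_dims = 2                   # (phi, psi): two floating point numbers
prompt_len, alphabet = 40, 95      # 40 characters over ~95 printable ASCII symbols
distinct_prompts = alphabet ** prompt_len

print(f"spaceship heading: {heading_dims} numbers")
print(f"40-char prompts: 95^40 ~ 10^{math.log10(distinct_prompts):.0f} possibilities")
```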
0:21:20 Well, I think there’s one more level of complexity you have to add
0:21:25 to your analogy, which talks about how difficult it is to make these things
0:21:29 kind of, let’s call it autonomous, which is closing the control loop.
0:21:29 Yes.
0:21:31 So let me try and build on your analogy.
0:21:31 That’s the verifying part.
0:21:31 Go ahead.
0:21:32 Yes, go ahead.
0:21:36 So it turns out that the directions that you point in,
0:21:38 it has to understand, right?
0:21:40 That has to be in distribution.
0:21:43 So you can’t point into a direction it doesn’t understand
0:21:45 because it does the worst thing if you point in a direction it doesn’t understand.
0:21:47 It just crashes right into that wall or whatever.
0:21:49 It just tries to do it.
0:21:50 No, it does a random direction.
0:21:52 It’s the worst thing ever, right?
0:21:58 And so the problem is if it’s producing a direction
0:22:01 that has to go into itself,
0:22:04 it doesn’t know what it knows and it doesn’t know what it doesn’t know.
0:22:05 Does that make sense?
0:22:06 Yes, that’s right.
0:22:08 In fact, it’s optimized to fake it.
0:22:09 Yes.
0:22:11 And so if you could tell it, if you could say,
0:22:14 hey, listen, produce a bunch of directions by feeding the last direction in,
0:22:17 you have no idea of a direction it spits out
0:22:19 is going to be in distribution when it comes back in.
0:22:22 And that’s what closing the control loop means.
0:22:24 And it’s a very tough problem.
0:22:26 I think this is such an important point for us to go into
0:22:30 because in theory, you can close the control loop on these things.
0:22:34 But as scientists, we want bounds on what that means, right?
0:22:39 Like, for example, clearly you want to gather new information to update your model.
0:22:43 Then we want bounds on how much information you need to gather.
0:22:46 It turns out that model was trained on everything humans have ever gathered.
0:22:49 So it’s like the incremental experiment going to update that?
0:22:50 Maybe.
0:22:52 Information theoretically says probably not.
0:22:54 We don’t even have bounds on these things.
0:22:58 Now, you said previously, well, when it comes to a chaotic system,
0:23:00 we know that computers, you know,
0:23:04 it takes a long time for them to compute a nonlinear system.
0:23:06 Actually, you know, I just want to pause you there.
0:23:07 You just gave me an idea for a great prompt,
0:23:12 which is what areas do you feel your knowledge is the most thin on?
0:23:13 This is the key.
0:23:14 Right?
0:23:15 That’s a great prompt.
0:23:15 Yes.
0:23:18 Does a model know to what extent is in distribution or is out of distribution?
0:23:21 Yeah, I’m going to try that one on.
0:23:24 Actually, by the way, this is, you know.
0:23:25 But this is the key.
0:23:25 This is the key.
0:23:31 Self-reflection is the key because if it produces an output that’s out of distribution,
0:23:36 then, of course, you have error and then you’re not there to kind of nudge it back.
0:23:36 Yeah.
0:23:41 So it’s like real-time events, obscure or niche academic fields,
0:23:45 and specialized subfields behind paywalls, local and regional information,
0:23:50 human emotion, intent or experience, private or proprietary systems.
0:23:54 Actually, pretty interesting, quick off-the-cuff response here, right?
0:23:59 You should ask it if it can always produce a response to which it has a lot of data.
0:24:02 Always produce a response.
0:24:02 What do you mean by that?
0:24:05 So the question is, is if you’re closing the control loop,
0:24:08 it’s going to spit something out that you’re going to feed back in, right?
0:24:08 Yeah.
0:24:09 That’s the whole point.
0:24:14 So the question is, will it always spit stuff out that if you feed it back in,
0:24:14 won’t give you nonsense?
0:24:16 Oh, I see.
0:24:16 Yeah.
0:24:20 I mean, well, people have actually tried the experiment of take the image
0:24:22 and just exactly replay the previous image,
0:24:24 and then it like morphs into something totally different.
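[A schematic of the closed-loop worry being described here. `generate` is a hypothetical stand-in for whatever model you would call, not a real API; the point is that nothing in the loop checks whether each output is still in distribution before it is fed back in.]

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a model call."""
    raise NotImplementedError

def closed_loop(seed_prompt: str, steps: int = 10) -> list[str]:
    """Naively feed each output back in as the next prompt.

    Nothing here guarantees intermediate outputs stay in distribution,
    so small errors compound, much like replaying an image through an
    image model until it morphs into something else.
    """
    history, prompt = [], seed_prompt
    for _ in range(steps):
        output = generate(prompt)   # may already be subtly off
        history.append(output)
        prompt = output             # error feeds forward, unchecked
    return history
```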
0:24:29 Well, actually, I want to do like maybe 10 quick hits off of that initial post
0:24:32 because you’re reminding me of something just now,
0:24:37 which is another kind of angle I have on AI is prompts are tiny programs,
0:24:40 which is a more common framing today, but I think,
0:24:42 when I first articulated it, it went viral a while ago,
0:24:44 but they’re programs in a hidden API.
0:24:49 Because normally you have an API that is fully documented,
0:24:51 but very error intolerant.
0:24:52 Prompting is the opposite.
0:24:55 It’s completely undocumented, but it’s highly error tolerant.
0:24:57 It will usually do what you mean,
0:25:01 but the better your vocabulary, the better you can prompt it.
0:25:03 So now art history is an applied subject, right?
0:25:07 Like knowing a vocabulary of like Cezanne versus Picasso and so on and so forth,
0:25:11 you can actually pull up the style that you want on demand.
0:25:13 So the broader your vocabulary, the broader your subject knowledge,
0:25:14 the more you can get out of it.
0:25:17 We’re in the age of the phrase, which is the prompt,
0:25:23 the 140 character tweet, and the 12 words for your crypto password, right?
0:25:28 These phrases of power in AI, in social media, and in crypto just unlock everything.
0:25:30 So the better your vocabulary, the more you can do, right?
0:25:32 So I think of prompts as tiny programs.
0:25:35 And actually, one of the things that I’ve gotten in the habit of doing
0:25:38 is writing, it’s total opposite of search.
0:25:41 You know, with search, you learn to type things in keywordese,
0:25:45 and you sort of figure out the word that has the most specificity TF-IDF,
0:25:46 you know, on the page or whatever.
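[For reference, "specificity" here is roughly what TF-IDF measures: a term frequent in one document but rare across the rest. A small sketch assuming scikit-learn is installed; the toy documents are made up.]

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "turbulent flow and chaotic systems in fluids",
    "chaotic systems and cryptographic hashes",
    "cryptographic hashes and bitcoin private keys",
]
vec = TfidfVectorizer()
scores = vec.fit_transform(docs)
terms = vec.get_feature_names_out()

# The highest-scoring term per document is the word you'd pick in "keywordese".
for i, doc in enumerate(docs):
    row = scores[i].toarray().ravel()
    print(doc, "->", terms[row.argmax()])
```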
0:25:52 I will write sometimes these long memos to an AI.
0:25:56 And then continuing, maybe you don’t love this analogy, but I think it’s funny.
0:26:00 Continuing the polytheistic analogy, I’ll give them to Brahma, Vishnu, and Shiva.
0:26:01 Okay.
0:26:07 So I’ll give them to ChatGPT and Claude, and now Grok and what have you, right?
0:26:11 I’ll consult all the gods, and then I’ll make my decision on that basis, right?
0:26:13 And then sometimes have them argue with each other, right?
0:26:16 And why do I say kind of half-jokingly gods?
0:26:21 Because like the Hindu kind of frame on that is not like the, you know, fearing god thing.
0:26:23 It’s not the same kind of thing.
0:26:30 But in a sense, it is a superhuman intelligence that knows everything about your culture.
0:26:35 And if you ask it the right question, it can tell you something that you didn’t know.
0:26:39 But in like the Hindu tradition, they’re not infallible, right?
0:26:42 Like that is to say, it’s not the same as the all-knowing, all-seeing.
0:26:45 It’s more like superhumans, more like superheroes.
0:26:47 People will argue with me about that, but I think that’s more.
0:26:48 Yeah, I think that’s good.
0:26:52 It’s like actually the Norse tradition is similar to the Hindu tradition in some ways, right?
0:26:55 Where the gods were not infallible, but they were superhuman.
0:26:56 This is a great and it’s a useful framing.
0:27:01 Just remember that when it comes to computer systems, we can put formal bounds on them.
0:27:02 We can do this information theoretically.
0:27:04 We can do this computationally.
0:27:04 Totally.
0:27:05 And like that’s going to come.
0:27:07 And once that happens, we will understand these systems fully.
0:27:10 And it’ll be very hard to think of them as gods at that point.
0:27:10 Well, that’s right.
0:27:14 I mean, that’s the thing is actually what’s interesting is that the interpretability work
0:27:19 that Anthropic and others have done and the work on like grokking or what have you, right?
0:27:22 That’s actually really good stuff because you can pick apart,
0:27:23 neurons.
0:27:26 You can find the golden gate neuron if you saw that kind of thing, right?
0:27:28 You can dial that up, dial that down.
0:27:32 You can start actually taking apart these AI brains in a way that hasn’t happened before.
0:27:35 A few other kinds of things.
0:27:40 So the thread that you guys mentioned, that thread actually summarized several of my research threads.
0:27:43 I should probably put this into a post so it’s like kind of there for the record.
0:27:46 So I’m just going to do a bunch of these and maybe get your thoughts.
0:27:51 So first concept in no particular order, AI doesn’t do it end to end.
0:27:52 It does it middle to middle.
0:27:57 So the business spend, so basically you have to still prompt it and then you have to verify
0:27:57 it.
0:28:00 And people talk about prompting, but they talk less about verifying.
0:28:07 And Karpathy and I had this good conversation a few weeks ago where basically AI is going
0:28:13 to create massive numbers of jobs in proctoring and verification because it’s so good at faking
0:28:14 things.
0:28:19 So one of my other kind of concepts is AI makes everything fake and crypto makes it
0:28:19 real again.
0:28:23 Because AI is a probabilistic technology and crypto is a deterministic technology.
0:28:27 And so like crypto is in some sense what AI can’t fake.
0:28:29 It’s like the hard cryptographic equations.
0:28:32 It can’t fake a Bitcoin private key.
0:28:34 It can’t fake even an on-chain NFT.
0:28:35 That’s what AI cannot fake.
0:28:40 And so that’s, you know, like the hard barriers, right?
0:28:41 I mean, I generally agree.
0:28:44 I mean, I don’t think crypto solves the grounding problem, right?
0:28:48 I mean, it’s a mechanism you could use, but it’s a mechanism.
0:28:49 Grounding in reality?
0:28:50 Yeah, yeah, yeah.
0:28:51 The actual physical grounding problem.
0:28:53 But the data ingest problem, yeah.
0:28:54 So I disagree with you on that.
0:28:54 And here’s why.
0:28:56 Or let me give you a counter argument at least.
0:28:59 Right now, just give a concrete example.
0:29:04 Let’s say you ask Perplexity to summarize the FTX hack in 2022.
0:29:10 When it would do so, among the citations it would give you would be links to a block explorer,
0:29:11 right?
0:29:14 That would actually have on-chain data that you can cryptographically
0:29:19 verify that this transfer of these funds happened at this time.
0:29:22 And if you want to go even further, you can actually pull out the digital signatures
0:29:25 and the hashes and the timestamps from that block explorer, right?
0:29:26 Sure, yeah.
0:29:27 Now, here’s my argument.
0:29:30 My argument is that works for financial data.
0:29:37 But what’s happening now with Farcaster and other kinds of things is with the increase in block
0:29:42 space, you could put more and more kinds of data on-chain, and we’re going to have to because
0:29:46 you’re going to need crypto instruments and you’re going to need cryptographically hashed
0:29:51 posts and crypto IDs to know that it was posted by a human or to know the data wasn’t tampered
0:29:51 with.
0:29:54 So more and more kinds of data are going to go on-chain.
0:30:00 And then that will eventually mean that an AI’s citations are to on-chain data, which is
0:30:01 both financial data and social data.
0:30:07 And so then at least it will map back in terms of grounding to an on-chain cryptographically
0:30:09 provable assertion of some kind.
0:30:13 And you might say, well, at least that’ll be an assertion at the metadata level.
0:30:18 Like, we can prove that this digital signature made this assertion at this timestamp with
0:30:19 this probability.
0:30:19 Sure.
0:30:21 I’m talking about real-world grounding.
0:30:29 Like, I say something, I am a human being, you know, you have no idea of whether I said
0:30:30 is true or not true.
0:30:34 You know, if there’s a geographic place where there’s a picture of the geographic place taken
0:30:37 from a 1970s photo, was that doctored or not?
0:30:42 I mean, like, the actual physical world grounding just because you can’t encode, you know, digital
0:30:45 data yet at this point for the physical world.
0:30:49 Now, it’s a great mechanism to do that once we can solve the ingest problem.
0:30:54 So let me talk about something which is happening now that I’ve been kind of funding on the
0:30:54 side.
0:31:01 It’s not a full solution, but it’s, I think, a partial solution, which is, so crypto instruments,
0:31:08 the idea would be that when you capture like a frame of data, right, for example, sequencing
0:31:12 machines like DNA sequencing, when the data is coming off the machine, it’s like,
0:31:16 TIFF files that are actually image data that gets processed into A's, C's, G's, and T's.
0:31:20 And many other kinds of instruments basically have a stream of data coming off the machine
0:31:21 as you’re capturing it.
0:31:22 Cameras are like that, right?
0:31:22 Okay.
0:31:29 So you could, and there are things that do this already, take a hash of that and post it
0:31:31 on-chain at that time, right?
0:31:37 And what that would at least say is that that frame of data existed at that time.
0:31:44 And so if you had something that was like a scientific experiment, right, you know, like a pre-registered
0:31:48 double-blind trial or something like that, you could have not just a crypto instrument, you
0:31:53 could also have other people with proof of humans there who have a sort of attestation ceremony.
0:31:59 And now you have a number of different kinds of on-chain data that start to get harder
0:32:00 to fake in a coordinated way.
0:32:01 Not impossible.
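[A minimal sketch of the crypto-instrument idea: hash the raw frame as it comes off the machine and record the digest with a timestamp. `post_on_chain` is a stub, since the actual chain and format are whatever system you would use; note this proves the frame existed at that time, not that it depicts reality.]

```python
import hashlib
import json
import time

def attest_frame(raw_frame: bytes) -> dict:
    """Fingerprint one frame of instrument data (e.g. a sequencer TIFF)."""
    return {
        "sha256": hashlib.sha256(raw_frame).hexdigest(),
        "captured_at": int(time.time()),
    }

def post_on_chain(attestation: dict) -> None:
    """Stub: publish the digest to whatever chain or notary you trust."""
    print("would post:", json.dumps(attestation))

post_on_chain(attest_frame(b"...raw TIFF bytes from the sequencer..."))
```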
0:32:02 I agree with all of it.
0:32:08 As soon as you get it into the system, then crypto is a great mechanism for ensuring kind
0:32:10 of end-to-end guarantees.
0:32:14 It’s just the data ingest problem is a longstanding problem in computer science.
0:32:18 And over time, everything you’re saying is going to be more and more true because over time,
0:32:20 we’re going to be more and more incented to make sure the stuff going into the system
0:32:21 is true.
0:32:22 So I just totally agree.
0:32:24 It’s just, listen, I’m an old-school networking guy.
0:32:28 Like for us, there’s like the internet and there’s the stuff that goes in the internet
0:32:31 and then you just kind of use different mechanisms for both and it’s worth calling out.
0:32:31 That’s all.
0:32:32 That’s right.
0:32:32 Okay, great.
0:32:37 So next kind of concept maybe to discuss, and I think this is a useful division.
0:32:40 This is a relatively recent point that I made to myself that thought was useful.
0:32:45 AI is good for the visual and less good for the verbal.
0:32:46 What do I mean by that?
0:32:53 So when it’s generating images, when it’s generating video, when it’s generating user interfaces like Vercel’s
0:32:59 v0 or Replit’s user interfaces, the great thing about them is you can instantly see them.
0:33:05 And with the GPUs that we have in hardware, you can verify cheaply whether they’re good enough,
0:33:05 right?
0:33:08 Because you can just instantly get the gestalt of it, right?
0:33:15 Whereas when it’s back-end code, when it’s legalese, when it’s like, you know, mathematical equations,
0:33:20 you have to slow down and use system two thinking, not system one, right?
0:33:21 And it’s not just your gestalt impression.
0:33:25 You have to actually go line by line and check whether it’s right.
0:33:32 And that is actually the expensive step, the verifying, right over there.
0:33:38 So I think that’s a non-obvious thing where the more front-end and video and visual it is
0:33:40 what you’re doing, the easier it is to say.
0:33:42 And now the interesting concept is how much of that, go ahead, you’re going to say.
0:33:47 I’ll just add one more thing, which is like, for me, again, like I spend most of my time
0:33:51 in like software and engineering, the big distinction is stateless versus stateful, right?
0:33:55 So if you’re generating code that is going to have semantics that evolve while you’re running it,
0:33:58 it’s just impossible to spot check.
0:34:00 Like some things are computationally irreducible.
0:34:02 You actually have to run the computation to the answers.
0:34:03 Like the image is the perfect example.
0:34:05 It’s visual and it’s basically stateless.
0:34:07 Like all of the state is there.
0:34:08 There is no kind of runtime semantic.
0:34:09 Totally agree.
0:34:10 That’s right.
0:34:16 And whereas even a relatively small snippet of back-end code could have a fairly complex
0:34:20 finite state machine essentially underlying it, or even infinite state machine.
0:34:21 It’s dynamically bound.
0:34:23 It could be very, that’s right.
0:34:26 And so simulating the time dynamics that you have to use something different, maybe formal
0:34:29 verification, if it’s algebra, you know, it’s all, yeah.
0:34:31 Or you’d actually have to run it if it’s computation irreducible.
0:34:33 Like there’s no way to statically do it.
0:34:38 And so it does reduce to almost like the computation verification problem, which is this longstanding
0:34:40 problem in computer science forever.
0:34:40 Right.
0:34:46 Now, formal verification, at least for a subset of programs, has become commercially viable for
0:34:51 smart contracts because they’re so high value and they’re so small that they actually, it’s
0:34:53 worth doing that on, right?
0:34:53 Totally.
0:34:56 And yeah, it’s not going to work for the general case, but you can do a constrained case.
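[A toy example of what "formal verification for a constrained case" can look like: asking an SMT solver for a counterexample to a property of a simple transfer rule. This assumes the z3-solver Python package; production smart-contract verifiers are much more involved.]

```python
from z3 import Ints, Solver, And, Not, unsat

sender, receiver, amount = Ints("sender receiver amount")
post_sender, post_receiver = sender - amount, receiver + amount

# Preconditions the toy transfer enforces.
pre = And(sender >= 0, receiver >= 0, amount >= 0, amount <= sender)
# Property to verify: balances stay non-negative and total supply is conserved.
prop = And(post_sender >= 0, post_receiver >= 0,
           post_sender + post_receiver == sender + receiver)

s = Solver()
s.add(pre, Not(prop))  # search for any input that breaks the property
print("verified for all inputs" if s.check() == unsat else s.model())
```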
0:34:57 But okay.
0:35:00 So that was one major division, visual versus verbal.
0:35:06 Another, when you’re getting to like stateful stuff and so on, is the limits of AI. The things
0:35:09 I’m interested in are where you can draw, like, a fine, crisp distinction of what it can do.
0:35:14 And what I think AI is particularly bad at that people are trying to use it for, that they’re
0:35:18 going to fail on, in my view, is when they try to use it for markets or politics.
0:35:25 And let me explain why I say that for systems that are time invariant, you know, like the
0:35:34 mapping of an image to the label cat or the rules of a game like chess or checkers or even
0:35:41 go, or, you know, something where there’s like a static rule set or static mapping, right?
0:35:45 Then you can do the train test paradigm and train a model and so on and so forth.
0:35:52 However, when you have something which is time varying, especially rule varying and adversarial
0:35:58 like markets are or like politics are, then the same trade will eventually quickly start
0:35:59 resulting in a loss.
0:36:03 And by the way, the other guys are also using an AI on you, right?
0:36:05 And so it’s decentralized AI again, right?
0:36:13 And so that actually argues that the CEO or the creator who is constantly sensing the
0:36:18 market or sensing the political winds and has a thesis on it based on human nature or other
0:36:23 things or what have you, actually is the sensor that then prompts the AI.
0:36:30 And that’s a job that is hard for the AI to do at a really deep level because it’s a time-varying,
0:36:32 rule-varying, adversarial domain.
0:36:36 And it goes back to what you said in the very beginning, which is if you look at these types
0:36:39 of equilibria, they’re complex differential equations, which are nonlinear.
0:36:40 Yeah.
0:36:44 And so in order to predict what’s happening, we’d have to like do this nonlinear extrapolation,
0:36:47 which we know that these things are not very good at.
0:36:52 What’s interesting is I wasn’t even thinking of the stock market as complex, but you’re right.
0:36:58 I mean, Mandelbrot wrote this great book on the fact that these things are super chaotic.
0:37:02 Yeah, you’re absolutely right that actually it would be a useful thing to show just with
0:37:08 a toy example, a chaotic system that is time varying or another one’s adversarial.
0:37:14 Now, the thing about that though, is to argue against my point, they’ve gotten AIs that are
0:37:16 actually pretty good at StarCraft, right?
0:37:20 Which, it starts to stretch the boundaries of what I was saying, because it’s definitely
0:37:21 adversarial, right?
0:37:26 And it’s like more time varying than chess, you know?
0:37:29 You could argue it’s not rule varying, but it’s time varying.
0:37:32 Okay, let me go to another point here, right?
0:37:37 So another concept is, I think, so by the way, the commercial implication of that point on prompting
0:37:44 and verifying is business spend moves towards prompting, proctoring, verifying, basically
0:37:46 checking all the stuff that AI can generate.
0:37:48 That’s going to be a huge, huge, huge thing.
0:37:55 And that maps to KYC, that maps to, like, in a bad way, you know, the glass cases in Walmart,
0:37:56 right?
0:38:01 In a sense, a low-trust society is spending more and more and more on verification and proctoring
0:38:02 and so on, right?
0:38:03 Okay, next.
0:38:09 AI means amplified intelligence, not agentic intelligence, because the smarter you are, the
0:38:10 smarter the AI is; better writers are better prompters.
0:38:11 What are your thoughts on that?
0:38:16 Yeah, I mean, it’s interesting in the coding space that we actually start to have numbers
0:38:17 on this now.
0:38:17 Oh, interesting.
0:38:18 Yeah.
0:38:24 So if you actually look at, like, relative productivity gains, it just turns out if you’re a more senior
0:38:27 developer, you will have better productivity gains.
0:38:29 Oh, I hadn’t seen that graph.
0:38:31 So it is something that makes the smart smarter, basically.
0:38:35 But also on a relative basis, which is really surprising, right?
0:38:37 If you think about it, it’s actually not surprising.
0:38:39 It’s like, you know what the fundamental trade-offs are.
0:38:41 You know what to ask.
0:38:42 You know how to interpret the results.
0:38:44 You know how to throw away bad stuff when it’s bad.
0:38:49 And so clearly, if you kind of know what you’re doing, you can both verify to your point the
0:38:52 output, but you can also be more specific in your asks.
0:38:56 I think it’s really important for all of us to realize that formal languages came out of
0:38:59 natural languages, not the other way, right?
0:39:02 Like, if you could explain all of this stuff in English to each other, we would, but it’s
0:39:03 just really inefficient.
0:39:07 So we came up with more efficient ways to explain trade-offs.
0:39:10 Yeah, basically, constrained languages that reduced ambiguity.
0:39:12 This literally is strictly an efficiency thing, right?
0:39:17 And so, like, someone that knows how to speak these formal languages to the models is going
0:39:20 to articulate what they want better, and is going to be able to interpret the results better
0:39:21 if the response is formal.
0:39:24 And so, I mean, it’s kind of a nice codification of exactly what you’re saying.
0:39:31 Yeah, I mean, the thing about it is AI means everyone’s a CEO, because, like, you speak to a great
0:39:35 AI in some ways like you speak to a great employee, where you give clear written
0:39:38 instructions, and then you can verify the output.
0:39:45 So it actually kind of turns management into a skill that it hyper-deflates the cost of trying
0:39:49 one’s hand as a CEO or as a manager, because you have to give those.
0:39:55 And so the better you are in communicating what it should do, often the more people you can
0:39:56 manage and so on and so forth.
0:40:01 By the way, this gets to the next point, which is AI doesn’t really take your job.
0:40:04 It takes the job of the previous AI, okay?
0:40:10 And what I mean by that is, you now have a slot on your roster at every company for
0:40:16 an AI image editor, an AI text, you know, a chatbot thing, an AI code, you know, IDE thing,
0:40:17 and so on and so forth.
0:40:24 And each new release of Grok or Claude or whatever competes against ChatGPT and Grok and Claude,
0:40:24 right?
0:40:31 And so the AI takes the job of the previous AI, because they’re complementing, you kind of have
0:40:35 a whole raft of AI augmenters that are augmenting your humans.
0:40:40 But those AIs are competing in AI space to a large extent with the previous AI.
0:40:46 Because once you’ve onboarded an image generator into your flow, then it just keeps improving
0:40:48 and you start using it in more places.
0:40:51 It’s an AI taking the job of the previous AI.
0:40:52 Let me know your thoughts.
0:40:56 This is an adjacency to what you’re just saying, but can I actually push on something you said
0:40:56 previously?
0:40:59 Because I actually agree with your polytheistic view of the world.
0:41:00 I totally agree.
0:41:03 But let me just provide the counter argument for us to noodle on in this vein, which is,
0:41:07 have you seen this kind of thing that all the AIs, if you ask to produce a random number,
0:41:08 produce the same number?
0:41:09 Have you seen this?
0:41:11 Yes, it’s like seven or something like that.
0:41:12 Seven or so, right?
0:41:19 So one thing that is, to me, was non-intuitive, but remarkable about these models is how easy
0:41:24 they are to distill, which is as soon as someone creates a leader, everybody uses that leader
0:41:25 and kind of sucks the life out of it.
0:41:30 And then all the models converge on it very quickly, which you could argue that this is
0:41:33 a counter to the polytheistic.
0:41:34 Yeah, kernel intelligence.
0:41:36 It’s basically the, yeah, there’s like a core.
0:41:39 Maybe there are a hundred AIs, but it just turns out they all have the same capability.
0:41:43 So is it just a technicality that they’re actually different and they’ve all learned from
0:41:44 each other?
0:41:45 Well, so it’s an interesting question.
0:41:52 And my view is, I’ve not called this a strong view yet, but my view is that’s almost like
0:41:54 the human body plan and spinal column.
0:41:59 And then you differentiate on top of that core spinal column, maybe, you know?
0:42:06 And it’s like, you’d have some sort of, just like every human to first order can see and
0:42:08 speak and hear and so on and so forth.
0:42:13 But then some people have, you know, much better vision or they have much better speech or something
0:42:14 like that, right?
0:42:16 So there may be some distilled kind of thing.
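[A stripped-down sketch of why distillation pulls models together: the student is trained to match the teacher's softened output distribution, so whoever copies the leader inherits its behavior. Pure NumPy; the logits are made-up numbers for one prompt.]

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

teacher_logits = np.array([2.0, 0.5, -1.0, 0.1])   # teacher's view of a tiny vocabulary
student_logits = np.zeros(4)                       # untrained student
T = 2.0                                            # soft targets expose "dark knowledge"
teacher_p = softmax(teacher_logits, T)

# Distillation step: gradient of KL(teacher || student) w.r.t. student logits.
for _ in range(200):
    student_p = softmax(student_logits, T)
    student_logits -= (student_p - teacher_p) / T

print("teacher:", np.round(teacher_p, 3))
print("student:", np.round(softmax(student_logits, T), 3))
```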
0:42:22 Oh, by the way, another interesting part on what you’re saying, in general, text on the
0:42:25 internet is not emitted equally by every group.
0:42:29 Here’s how I would distill my view on this, which is very much in line with what you’re saying,
0:42:32 which is, I think the universe is very complex.
0:42:35 And I don’t think it gives up its secrets easily at all.
0:42:38 And I think the universe is full of fundamental trade-offs.
0:42:39 Like, you can’t have both.
0:42:41 You have to choose A or B.
0:42:46 And so these models will align with those fundamental trade-offs, right?
0:42:52 And maybe it’s performance, maybe it’s correctness, maybe, you know, whatever it ends up being.
0:42:59 And so as soon as you want a specific solution for a given problem where it hits one of those
0:43:03 trade-offs, you’re just going to need a different model because you just can’t end up having both.
0:43:06 And again, because I know the coding space the best, we see this a lot, right?
0:43:11 Which is, you know, a model that’s very good for certain parts of code is just not going to be
0:43:14 generally good at other things because those are the trade-offs made when training it.
0:43:18 And I think that this is the future plurality-of-models world we’re going to.
0:43:19 Well, is that true?
0:43:21 You know, I thought somebody said something.
0:43:25 I may be wrong about this, but I saw some counterintuitive result that said
0:43:28 that making the AI specialize in one area makes it worse in other areas.
0:43:30 Do you see something like that?
0:43:30 Well, yeah.
0:43:34 So this is a very big debate, but the debate goes as follows.
0:43:40 Like the first wave of AI was pre-training where everything you threw into it, like it just
0:43:41 got smarter.
0:43:46 And so that’s kind of a 10 for 10 technical win, right?
0:43:48 Just because it’ll be as good at writing code as at sonnets.
0:43:54 But as soon as you’re doing RL, where you’re training it in a specific domain with a specific
0:43:59 verifier, you’re likely losing, you know, other areas.
0:44:02 So you make it really good at playing chess, it’s going to be less good at, you know,
0:44:05 something else like that, you know, writing general scores or something.
0:44:10 So I think the current debate now and the current data seems to suggest that RL doesn’t generalize
0:44:11 in the same way.
0:44:16 And so now we are in this case where you would have a plurality of models because you’re always
0:44:19 robbing Peter to pay Paul when you make it good at a certain domain.
0:44:21 That’s right.
0:44:21 Yeah.
0:44:27 I think also, you know, somewhat related to that, in terms of what domains it’s good at and
0:44:32 so on and so forth, at least right now, I think, I’m not sure if you agree with this,
0:44:34 AI doesn’t really take your job.
0:44:35 It allows you to do any job.
0:44:36 Yeah.
0:44:41 Because you can get to like an okay level as like a user interface designer or sound effects
0:44:42 or something like that.
0:44:48 But you need a specialist for polish and that, you know, though, I wonder maybe with enough
0:44:52 RLHF from specialists, maybe that won’t be as necessary.
0:44:53 I don’t know.
0:44:53 Maybe you have some thoughts.
0:44:55 So here’s my current mental model.
0:44:56 There’s two personas.
0:44:58 There’s persona number one is the expert in the space.
0:45:01 And there’s persona number two is the non-expert in the space.
0:45:07 So if the non-expert in the space is using AI, it’s taking the place of the expert, right?
0:45:13 And so maybe you’ll ask it and it’ll give you some, let’s say I want a 3D asset for a video
0:45:14 game and I’m a programmer.
0:45:20 Then I’m going to ask it for a nice 3D asset and then it’ll give me one, right?
0:45:21 So I’m the non-expert.
0:45:26 The expert user, to our previous point, will actually know how to ask it better and it’ll likely
0:45:31 get better results because it’s an actual domain expert and that expert is using it.
0:45:32 So I think we see both.
0:45:36 Actually, if you look in the market, you see both of these uses and I think both of them
0:45:39 will persist, which is if I’m a programmer.
0:45:42 It’s like a doctor talking to their doctor and they can just instantly go to specialist
0:45:43 language and so forth.
0:45:44 So you’re right.
0:45:49 Even if they’re specialist RLHF, you may not be able to, you may not be able to access that.
0:45:50 Yeah, that’s right.
0:45:56 Like why would I want to learn the entire domain and make all of the trade-offs of, you
0:46:01 know, 3D design that somebody else could have done all of that work for me and they could
0:46:06 talk to the model in the specialist way when I can just talk to my model using code, right?
0:46:10 In order to very efficiently use a specialist model, I would have to become a specialist would
0:46:12 be the argument to our previous one.
0:46:19 So I think for casual use, I can use these models for whatever I want, but to really use
0:46:23 them very well, again, to our previous conversation, I’d have to become a specialist and somebody
0:46:25 else will have maybe already invested all of that time.
0:46:30 By the way, if you actually look at products, these products actually have both these distinctions.
0:46:35 Some products are very clearly for the casual user trying to replace the expert. Like, you know, think
0:46:36 about Cursor versus Lovable, right?
0:46:38 So lovable is I’m a casual user.
0:46:40 I’m going to create a website.
0:46:41 I don’t need to know about code.
0:46:43 And that’s great.
0:46:44 You can create amazing things.
0:46:47 Cursor is: I am a professional software developer.
0:46:49 I have an IDE.
0:46:51 Now, over time, maybe these things converge.
0:46:52 That could be the case.
0:46:56 But thus far, these are very different user bases, right?
0:47:00 There’s professional coding versus basically casual coding.
0:47:04 I mean, you know, part of it is, which is interesting and a little counterintuitive.
0:47:08 It’s like, if you think about what a computer can do, it can do the job of an accountant.
0:47:10 It can do the job of a physicist.
0:48:15 But then you clad it in something and it’s adding up numbers in Excel, and you clad it in something
0:48:18 else and it’s doing simulations in MATLAB, right?
0:47:25 And so we already know that when it came to logical system two thinking, that computers
0:47:26 are actually really good at that.
0:47:31 And now you have something similar where these models are actually very versatile, but you clad
0:47:36 it in the power user interface and you clad it in the casual interface.
0:47:41 And, you know, it’s implicit contextual prompting as well, probably as well as a system prompt
0:47:42 that makes it do those things.
0:47:47 I mean, the thing that’s interesting to me about something like chain of thought is that
0:47:51 and I think this is where people were freaking out in late 2022, and maybe they’ll still
0:47:58 be right to freak out, is computers have historically always been good at the logical style, much
0:47:59 better, superhuman at that.
0:48:06 Now they’re also superhuman, in a sense, at the probabilistic style, at least of text generation
0:48:07 and so on and so forth.
0:48:12 And so it’s not inconceivable that someone could figure out a way to merge those two, you know,
0:48:14 like a quantum gravity theory of things, right?
0:48:17 Where you take the probabilistic and the deterministic and pull them together.
0:48:22 Yeah, this is where the very old school, you know, systems part of me thinks that there’s
0:48:24 a fundamental trade-off here, right?
0:48:24 What’s that?
0:48:25 You can trade off.
0:48:28 Well, I feel like you can trade off.
0:48:30 You can build a system for determinism.
0:48:36 And you can, you know, build a system which basically cuts a bunch of corners.
0:48:39 But you can’t build a system that does both.
0:48:41 What about the tool-use stuff?
0:48:45 I mean, AI has gotten pretty good at figuring out when it wants to generate an image, when
0:48:50 it’s supposed to search, when it’s supposed to read a PDF, and, you know, like…
0:48:50 Right.
0:48:55 But now you’re acknowledging exactly what I’m saying, which is some things you want the fuzzy
0:48:57 thing and some things you want a traditional system.
0:48:57 Yes.
0:48:59 But if you have a fuzzy system on top…
0:49:00 Yeah, yeah, for sure.
0:49:03 But then maybe it just becomes a consumption layer, and all that hard work is still being
0:49:04 done by traditional systems.
0:49:08 The full argument is, if you have a model that does everything, you wouldn’t need tools, right?
0:49:13 Because tools are literally, like, the API to the traditional system.
0:49:18 And so that’s almost like a capitulation that, like, some things you want, you know, traditional
0:49:19 software to do.
0:49:20 No, I know.
0:49:23 But what I’m saying is a hybrid system, if you’re just a pragmatist and you don’t care,
0:49:24 right?
0:49:28 Could a hybrid system actually get there?
0:49:29 I can’t say it couldn’t.
0:49:30 Right.
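A minimal sketch of the hybrid pattern being debated here, with made-up tool names and a keyword-matching stub standing in for the model's routing decision: the fuzzy layer only chooses which deterministic tool to call, and the exact work (arithmetic, lookups) stays in traditional software. Nothing below is a real product's API; it is just the shape of the idea.

```python
# Hypothetical sketch of a "fuzzy consumption layer over deterministic tools".
# The router is a stub (keyword matching standing in for an LLM's choice);
# the tools are ordinary, deterministic functions.

def calculator(expression: str) -> str:
    # Deterministic: plain arithmetic, no model involved.
    return str(eval(expression, {"__builtins__": {}}, {}))

def lookup_capital(country: str) -> str:
    table = {"france": "Paris", "japan": "Tokyo"}   # toy data
    return table.get(country.lower(), "unknown")

TOOLS = {"calculator": calculator, "lookup_capital": lookup_capital}

def fuzzy_router(user_request: str):
    """Stand-in for the model: picks a tool and its argument."""
    if any(ch.isdigit() for ch in user_request):
        return "calculator", user_request.split("compute")[-1].strip()
    return "lookup_capital", user_request.split()[-1].strip("?")

def answer(user_request: str) -> str:
    tool_name, arg = fuzzy_router(user_request)   # probabilistic layer (stubbed)
    return TOOLS[tool_name](arg)                  # deterministic layer does the exact work

print(answer("compute 17 * 23"))                  # -> 391
print(answer("What is the capital of France?"))   # -> Paris
```

The point of the toy is that the router can be wrong in fuzzy ways while the tools stay exact, which is the "capitulation" being described: the model becomes the consumption layer, and the hard guarantees still come from traditional software underneath.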
0:49:35 And, you know, maybe it’s something where what we actually need is something like Elon’s,
0:49:37 you know, billion miles of Tesla driving.
0:49:44 If we have enough context, not from LLMs, but from pointer movements and mouse clicks on,
0:49:49 you know, iOS or macOS, an AIOS could…
0:49:49 Yeah, yeah.
0:49:58 So your question is, can I have one trained neural net, one trained LLM, that can do both,
0:50:00 like, the fuzzy stuff and the hyper-deterministic stuff?
0:50:01 Like, is that possible?
0:50:01 Yeah.
0:50:07 My gut, again, this is total intuition, is that the universe is way too heavy-tailed.
0:50:09 It’s way too non-linear.
0:50:14 And so the state space is too high for that to actually encode all of that and basically…
0:50:15 But humans can do it.
0:50:15 No, we don’t.
0:50:16 We use calculators.
0:50:17 And we use software.
0:50:22 Like, the whole reason we build software is because we can’t do it.
0:50:24 But at some level…
0:50:26 Well, maybe we’re just talking about two different things.
0:50:31 So clearly, AI with traditional software can do great stuff.
0:50:36 To me, the question is, can you have one AI that does all of those things, you know, without
0:50:37 the traditional software?
0:50:39 Can you build one LLM?
0:50:41 And I think the answer to me is obviously no.
0:50:43 But I’ve heard arguments that it can.
0:50:43 Yeah.
0:50:48 You know, what’s interesting to me is, so, you know, like, that’s why I’d say I’m a little
0:50:49 bit of a…
0:50:49 I mean, a little bit.
0:50:53 I’m on the borderline of the tech pragmatist and the tech radical, right?
0:50:58 Where I think I always want to try to identify the limitations of the systems today and then
0:51:00 see how you could push beyond it.
0:51:05 So, you know, like, I consider calculators, which were made by humans, to be ultimately
0:51:06 humans doing it.
0:51:07 You know what I mean?
0:51:10 Like, in the sense of, it’s like a tool that we came up with.
0:51:15 I mean, to give you an example, like, earlier, I was like, you know, AI is good at visual,
0:51:16 but not at verbal.
0:51:22 But, for example, you can turn audio into spectrograms, which you can then look at visually, and at
0:52:25 least you can check it for radical deviations and so on.
0:51:27 Maybe not the entire sound, but you can see.
0:51:34 And so, can we do Fourier transform-like things on other outputs of AI where we can quickly
0:51:35 inspect them visually?
0:51:39 You know, is there some grid or visualization kind of thing where we can turn it into a visual
0:51:40 problem?
0:51:41 Yeah, I love this.
0:51:46 So, like, a thought experiment I have is, can I literally create a bunch of audio outputs
0:51:50 so that I can listen to my AI like I listen to a car?
0:51:51 Because, like, you know what?
0:51:52 Yeah, you can tell if it’s wrong.
0:51:56 Like, I don’t know what’s wrong with my car, but I know it’s not normal.
0:51:59 We’re very good at atmospheric inputs, and I think this is a great idea.
0:52:04 Can we start exposing the internals so that we understand when it’s working well and when
0:52:05 it’s not working well, you know?
0:52:09 And I do feel like there’s a lot of stuff we can push on here that, you know, we’re very
0:52:09 early innings of.
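As one purely illustrative version of the spectrogram idea: take some per-token signal from a model, say token-level log-probabilities, treat it as a 1-D trace, and render a spectrogram so a human can eyeball it for radical deviations. The data below is synthetic; a real setup would pull the trace from whatever model is being inspected.

```python
# Illustrative only: render a synthetic "model confidence" trace as a
# spectrogram so a human can scan it visually for anomalies.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic per-token log-probabilities: mostly steady, with a burst of
# low-confidence noise in the middle standing in for a "weird" stretch.
signal = -1.0 + 0.1 * rng.standard_normal(2048)
signal[900:1100] += 2.0 * rng.standard_normal(200)   # the anomaly

plt.figure(figsize=(8, 3))
plt.specgram(signal, NFFT=128, Fs=1, noverlap=64)    # standard matplotlib spectrogram
plt.xlabel("token position")
plt.ylabel("frequency (arbitrary)")
plt.title("Synthetic confidence trace: the anomalous stretch stands out")
plt.tight_layout()
plt.show()
```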
0:52:13 Yeah, like colored text, for example, in terms of its level of confidence.
0:52:14 Yeah, yeah.
0:52:16 You know, stuff like that, like yellow, red, green, right?
0:52:19 There’s a lot of AI UX that one can do.
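And the colored-text idea is cheap to prototype: given token and probability pairs (mocked below, with arbitrary thresholds), print each token green, yellow, or red using ANSI codes. Real per-token probabilities would come from whatever model API is in use; everything here is made up for illustration.

```python
# Toy "confidence coloring" UX: green / yellow / red per token via ANSI codes.
# The (token, probability) pairs are mocked; the thresholds are arbitrary.
GREEN, YELLOW, RED, RESET = "\033[92m", "\033[93m", "\033[91m", "\033[0m"

def color_for(prob: float) -> str:
    if prob >= 0.8:
        return GREEN    # model was confident
    if prob >= 0.4:
        return YELLOW   # shaky
    return RED          # low confidence: worth a human look

tokens = [("The", 0.97), ("capital", 0.91), ("of", 0.99),
          ("Australia", 0.85), ("is", 0.95), ("Sydney", 0.22)]

print("".join(f"{color_for(p)}{t}{RESET} " for t, p in tokens))
# "Sydney" prints red -- exactly the kind of token a reader should double-check.
```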
0:52:21 So, let me make a few other points that I think are interesting.
0:52:26 Killer AI is already here, and it’s called drones, and every country is pursuing it.
0:52:30 So, we don’t have to care really about the image generators and chatbots.
0:52:33 All the worry about super persuaders or whatever is all pretty stupid.
0:52:35 Strong agree.
0:52:36 Strong agree, right?
0:52:40 And the thing is, when I push people on this, what’s interesting is some of the people
0:52:45 who were, oh my God, we need to regulate everything, are now actually on the side of, we need to
0:52:46 build it before China.
0:52:51 But the thing is, in both cases, first it was safety, then it was security, but it all comes down
0:52:52 to control.
0:52:55 And you might argue that the security argument is a better argument.
0:52:57 I think it’s a more realistic argument in some ways.
0:53:03 But the concept that killer AI is already here is interesting, because they put so much stock
0:53:06 in the, like, oh, it’s going to persuade everybody to do things.
0:53:07 It’s a super persuader.
0:53:13 But persuading is statistical, and, you know, drones are deterministic, or at least, you know,
0:53:15 the guns on a drone or whatever are deterministic, right?
0:53:20 I think that the interesting question around AI and, like, attack and defense is, does it
0:53:21 change the equilibrium?
0:53:24 Because the internet did.
0:53:27 So the internet actually introduced the notion of asymmetry, which is the more that you rely
0:53:29 on it, the more vulnerable you are.
0:53:35 So to wit, the United States is more vulnerable than, you know, a random third
0:53:36 world country.
0:53:39 And it’s not clear to me you get the same thing with AI.
0:53:43 Like, it could just be, like, it just enables everybody to have bigger weapons, but the equilibrium
0:53:43 is the same.
0:53:48 Well, I think that it actually has really huge impacts for borders.
0:53:54 And unfortunately, I think China is well positioned here for a very specific reason, which is China,
0:54:00 their justification for the Great Firewall is they’ve justified it as digital borders.
0:54:03 They say, we can interdict physical packets.
0:54:05 Why can’t we interdict digital packets?
0:54:05 Yeah.
0:54:05 Right?
0:54:06 Yeah.
0:54:12 And now, with the whole Ukraine controlling drones in your territory thing, that becomes more
0:54:13 than simply a metaphor.
0:54:14 It’s a real thing.
0:54:17 It’s like controlling cloud space, right?
0:54:17 Yeah.
0:54:24 And, you know, if you can allow somebody to script drones or script humanoids in your jurisdiction,
0:54:28 then they can blow things up, right?
0:54:30 That’s no longer a theoretical thing, you know?
0:54:35 And now, the counter-counter argument is, well, maybe you just have them pre-programmed autonomous
0:54:39 so they don’t even need an internet connection, and they can just do cameras or whatever.
0:54:39 That’s true.
0:54:43 And in Ukraine, you know these things with the drones on cables, you know this crazy thing
0:54:44 they’re doing there, right?
0:54:44 Yeah.
0:54:50 You know those, like, big, unwinding cable things that you sometimes see on, like, ships, right?
0:54:50 Yeah.
0:54:56 So, they have these cables for these one-way drones that are these ridiculously long, like,
0:55:02 multi-kilometer long ethernet cables, like Cat1 cables on a drone, so it can be offline,
0:55:07 and it goes past signal jammers or something like that, and then goes and blows up on its target.
0:55:11 And it goes forward, and the cable gets tangled in the trees, but they don’t care.
0:55:15 Because it’s not going to make a reverse trip where it has to yank the cable back, you know?
0:55:17 So, when it’s a one-way path, it doesn’t matter.
0:55:19 It’s a one-way drone, which is crazy.
0:55:25 So, that’s an argument that at least at short distance, near the border, drones, you know,
0:55:26 would be able to get in, right?
0:55:33 But that concept of digital borders becoming hard borders, I think, is going to become more
0:55:41 of a thing where, basically, you know, I actually gave a talk on this 12 years ago that, you know,
0:55:46 your immigration policy becomes your firewall because with telepresence, you can move robots
0:55:48 around, and that’s starting to become real.
0:55:54 So, that is something where I think that has real implications for the geography of a country
0:55:59 because the alternative to that, of having, quote, defensible borders, is basically an encrypted
0:56:03 state where you don’t even know where it is on the face of the earth.
0:56:08 And what I mean by that is, can you make a map of Bitcoin?
0:56:10 Not really, right?
0:56:15 It’s something where it’s so dispersed, and you don’t know every holder, and they’re moving
0:56:21 around the world, and there’s no single map of every mine, and even if you got a map, you
0:56:24 wouldn’t know if it was complete or erroneous or out of date or something like that.
0:56:29 You actually have, in a sense, security through obscurity, so you couldn’t just go and blow
0:56:34 all those things up, versus something that’s outlined on the map is sort of sessile and vulnerable
0:56:35 in a certain way, right?
0:56:41 So, that’s something I think about a lot in terms of what do future borders look like, and
0:56:44 you might have hard digital borders, and China might preserve its territory, but those that
0:56:50 can’t enforce hard digital borders, for whatever reason, can’t stop these kinds of drone, you know,
0:56:51 incursions.
0:56:52 Let me know your thoughts.
0:56:54 But these drones are not autonomous.
0:56:59 Well, they would still need to be given a control signal to do something, right?
0:57:00 Not unless they’re fully autonomous.
0:57:03 You’d just be like, go blow up this building, here’s a picture.
0:57:06 And then, like, a fully autonomous drone would not need any packets.
0:57:08 That’s true.
0:57:08 That’s true.
0:57:11 And then, also, if you think about it, are you going to block every single telephone call?
0:57:15 It’s really difficult, that needle in a haystack.
0:57:19 I have some super spooky stories of being in China.
0:57:22 And I remember, this is kind of a non sequitur, but I have to tell you, it’s so spooky.
0:57:24 I was in China probably 10 years ago.
0:57:25 Now, I used to work for the government.
0:57:28 I used to work for the intelligence community, like, when I was a kid.
0:57:29 I mean, I was a kid.
0:57:29 I was a kid.
0:57:33 This was, like, in 2001, you know, like, my first job out of college.
0:57:36 But anyway, so, say, 10 years later, I was on a business trip in China.
0:57:41 And I called a friend of mine, and I was telling about my day, and the phone dropped.
0:57:46 So, I called my friend again, and I was telling my friend about my day, and the phone dropped
0:57:47 again.
0:57:48 And I’m like, what’s going on here?
0:57:51 So, what I was recounting is where I had been.
0:57:53 So, I called my friend one more time.
0:57:56 I said, 1, 2, 3, Tiananmen Square, and the phone dropped.
0:58:06 So, I think there was a bug in whatever software they had, and somehow I had been picked up.
0:58:11 But I don’t think it’s too crazy to assume every conversation on every phone call can
0:58:13 be monitored and has been for a while.
0:58:14 Well, so, now they have something.
0:58:18 This is another way where AI does change the balance of power in the following way, right?
0:58:23 Like, in China, they had this saying, which is, the mountains are high and the emperor is
0:58:24 far away, right?
0:58:29 And China always had a different conception of the balance of power between the government
0:58:30 and the people than the West did.
0:58:35 On the one hand, you know, this is a broad generalization over thousands of years of history,
0:58:36 very broadly.
0:58:42 Like, on the one hand, because the state had all the weapons and the army and so on and
0:58:49 so forth, like, that chair could morph into an agent of the emperor or a CCP guy today if
0:58:50 the government so desired.
0:58:53 Because there’s no, like, limits truly, you know, in the sense of it can just do whatever
0:58:54 it wants, right?
0:58:58 On the other hand, whatever law is written down, the people just do what they want, right?
0:59:05 So, they kind of, in a very pragmatic way, say, the limit is really the limit of what people
0:59:07 can enforce, right?
0:59:10 And if the state has lots of power, then any written limit doesn’t really matter.
0:59:13 But if the state can’t find you, since you’re on the other side of the world, then the written
0:59:14 law doesn’t matter either, right?
0:59:20 Which is a different conception than the progressive versus libertarian within the West, where they’ll
0:59:23 always quote law against each other back and forth.
0:59:25 What is written is what is permitted or what have you, right?
0:59:33 But AI does change that balance because now the mountains are never high and the emperor
0:59:34 is never far away.
0:59:36 The long arm is incredibly long.
0:59:37 The long arm is infinite.
0:59:37 Yeah.
0:59:39 They can synthesize.
0:59:42 You know, there was something, maybe you know this thing, Martin, I think it was called
0:59:45 TIA, Total Information Awareness, in Iraq at a certain point.
0:59:48 But the idea was they had satellites covering Iraq.
0:59:52 And so, every time some guy was putting down like an IED or something like that, they would
0:59:57 rewind the satellite to find who the guy was that did that and where he came from and then
0:59:59 put a bomb through his window or what have you, right?
1:00:05 And in a sense, it was like tracking someone for like their whole life, you know, because
1:00:09 you were sewing together the trace of them through all of these cameras, right?
1:00:12 And that’s totally possible for China to do now.
1:00:17 And the difference is that AI makes it possible to, for a long time, that data was ingested,
1:00:22 but it couldn’t really be parsed or queried because it was too difficult to look through
1:00:25 5,000 hours of video on one person or whatever.
1:00:28 That’s increasingly becoming queryable, right?
1:00:32 And ingestible and summarizable in a way that it never was.
1:00:37 And so, I think that the real check on something like that is going to have to be cryptography,
1:00:41 exit, you know, and so on and so forth, which is ultimately like get out of the jurisdiction,
1:00:45 you know, have property that they cannot actually seize.
1:00:48 Like, you go back against the limits of power and what have you, right?
1:00:49 Anyway, let me pause there.
1:00:51 Just some thoughts on balance of power since you talked about that.
1:00:52 Oh, that was great.
1:00:53 Oh, that was great.
1:00:55 Okay, last one.
1:01:02 I think that there’s going to be, there already is an anti-AI backlash that’s like the anti-crypto
1:01:08 backlash and will be part of the anti-tech backlash because a lot of people are not using
1:01:09 AI for what we’re using it for.
1:01:12 They’re using it for like therapy or they’re using it like,
1:01:15 as a companion or something like that, you know?
1:01:16 It’s the top of the pyramid of needs.
1:01:21 It’s funny if you actually look, it’s like self-actualization, spirituality, therapy.
1:01:25 It’s like finally computers are addressing like further on up.
1:01:26 That’s right.
1:01:30 And there’s another aspect to it that I think hasn’t gotten as much press, but that’s interesting
1:01:35 to understand is that just like, you know, the tariffs are meant to kind of ward off Chinese
1:01:36 competition.
1:01:37 I’m not sure if they’ll work.
1:01:38 In fact, I’m skeptical.
1:01:43 There’s a similar, much less publicized thing that’s happening at many media corporations
1:01:46 where they’re unionizing to try to ward off AI competition.
1:01:53 Like they have union contracts that say the editors and owners cannot use AI, right?
1:01:59 So they’re making their organization very brittle where they think they own the market, but
1:02:02 they’re not allowing themselves to use AI, right?
1:02:07 And then eventually they’re going to be beaten by AI-enabled competitors that pull all of their
1:02:11 followers and views and so on away from them because they’re just more efficient.
1:02:15 So I think that’s going to result in an anti-AI backlash.
1:02:20 And I think that it’s already here where, you know, on some like artist forums or whatever,
1:02:21 they’ll say, are you an AI supporter?
1:02:22 Have you heard that?
1:02:28 You know, like AI artists spend as much time on building things as traditional artists.
1:02:30 It’s just a different tool set.
1:02:31 This is my view.
1:02:32 Yes, yes, yes.
1:02:32 That’s true.
1:02:40 But basically they feel that it’s similar to the reaction by master craftsmen in the 1800s,
1:02:41 right?
1:02:44 When mass production started taking over what they were doing in the physical world, this
1:02:45 is now happening in the digital world, right?
1:02:46 Totally.
1:02:46 Yeah.
1:02:49 And the other aspect of this is I don’t think people have thought about the international
1:02:55 aspect where if you’ve got, let’s say, a lawyer who’s making 200K a year in the US or a doctor
1:03:01 in the West, and then you’ve got somebody from the Philippines or India or anywhere in the
1:03:09 world who’s making currently $2,000 a year, maybe the converged wage with AI plus their IQ
1:03:14 or what have you, AI plus human convergence is like $20K a year, which is a 10X for the
1:03:17 person abroad, but a one-tenth for the person in the West.
1:03:20 It radically increases consumer surplus and so forth.
1:03:20 Yeah.
1:03:24 But I do think that that’s going to be something that’s going to be a big deal in the years
1:03:24 to come.
1:03:26 And so we’ll have to figure out how to mitigate that.
1:03:28 To your previous point, I just think this is so important.
1:03:30 Like I agree there’s going to be a huge backlash.
1:03:34 And I think some of it’s going to be rooted in the experience of individual people.
1:03:38 You know, my job is shifting and that I am very sympathetic that I think we should address.
1:03:43 But I think there’s something more pernicious going on, which is, and maybe this is my cynicism,
1:03:47 but more and more I kind of view politics as like you’ve got, you know, pretty sophisticated
1:03:49 people and they have clientele classes.
1:03:50 Yes.
1:03:51 Patron, patron, client.
1:03:51 Yes.
1:03:52 Yeah.
1:03:55 What they say is basically what will move the clientele class the most.
1:03:57 They just look for soundbites that will move the clientele class.
1:04:02 Actually, you know, the patrons are sophisticated people that can actually hold nuance in their
1:04:02 head.
1:04:03 They know, they’re confident.
1:04:04 Right, but they dumb it down on purpose.
1:04:05 But they dumb it down on purpose, right?
1:04:08 And what better talking point than AI?
1:04:10 This goes back to the Promethean legend.
1:04:12 I mean, we’re terrified of technology.
1:04:14 You can anthropomorphize it.
1:04:16 You can talk about it as gods.
1:04:18 I mean, it is the perfect tool to mobilize.
1:04:20 And we’re seeing this on both the right and the left, right?
1:04:23 So this is not in any way beholden to one party.
1:04:29 So I think this is like the ultimate, you know, political, you know, tool for any purpose.
1:04:30 And we’re seeing it for that.
1:04:34 And I think for that, even more than crypto, by the way, I think the AI strikes to the heart
1:04:37 of people’s insecurities more than crypto ever could.
1:04:38 And so I think that this is the big battle.
1:04:39 And I think it’s going to be bigger.
1:04:40 It’s interesting.
1:04:42 It’s all of the above, right?
1:04:45 Because AI is disrupting media.
1:04:46 Crypto is taking power over money.
1:04:49 Robots are taking power over manufacturing.
1:04:50 And drones are taking power over the military.
1:04:55 So all of these, and by the way, there’s a crypto angle to at least three of them.
1:04:57 Because obviously there’s a crypto angle to money.
1:04:59 There’s a crypto angle to AI in terms of constraints.
1:05:03 There’s a crypto angle to the drones because you’re going to want the control plane for the drones
1:05:06 to be on chain, since that’s the part that can’t get hacked, whereas the Pentagon can get hacked.
1:05:10 So I do think this is something where it’s going after quite a few power centers.
1:05:13 I just don’t think, 300 years ago, if you said, well, listen, you know, we’re going to do this
1:05:17 new crypto thing, people would be like, well, whatever. But if you said, listen, we’re going to create AI,
1:05:21 these artificial, you know, intelligences that have unbounded power,
1:05:25 I think you’re really getting at core human insecurities that we’ve seen in myths and legends
1:05:27 for 3,000 years and probably longer.
1:05:28 That’s true.
1:05:29 That’s true.
1:05:32 Well, we’ll see what happens with currencies and so on, but I think you’re right.
1:05:32 It’s both.
1:05:34 It’s both for sure.
1:05:39 Thanks for listening to the A16Z podcast.
1:05:45 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash A16Z.
1:05:47 We’ve got more great conversations coming your way.
1:05:48 See you next time.
a16z General Partners Erik Torenberg and Martin Casado sit down with technologist and investor Balaji Srinivasan to explore how the metaphors we use to describe AI—whether as god, swarm, tool, or oracle—reveal as much about us as they do about the technology itself.
Balaji, best known for his work in crypto and network states, also brings a deep background in machine learning. Together, the trio unpacks the evolution of AI discourse, from monotheistic visions of a singular AGI to polytheistic interpretations shaped by culture and context. They debate the practical and philosophical: the current limits of AI, why prompts function like high-dimensional programs, and what it really takes to “close the loop” in AI reasoning.
This is a systems-level conversation on belief, control, infrastructure, and the architectures that might govern future societies.
Timecodes:
0:00 Introduction: The Polytheistic AGI Framework
1:46 Personal Journeys in AI and Crypto
3:18 Monotheistic vs. Polytheistic AGI: Competing Paradigms
8:20 The Limits of AI: Chaos, Turbulence, and Predictability
9:29 Platonic Ideals and Real-World Systems
14:10 Decentralized AI and the End of Fast Takeoff
14:34 Surprises in AI Progress: Language, Locomotion, and Double Descent
25:45 Prompting, Verification, and the Age of the Phrase
29:44 AI, Crypto, and the Grounding Problem
34:26 Visual vs. Verbal: Where AI Excels and Struggles
37:19 The Challenge of Markets, Politics, and Adversarial Systems
40:11 Amplified Intelligence: AI as a Force Multiplier
43:37 The Polytheistic Counterargument: Convergence and Specialization
48:17 AI’s Impact on Jobs: Specialists, Generalists, and the Future of Work
57:36 Security, Drones, and Digital Borders
1:03:41 AI, Power, and the Balance of Control
1:06:33 The Coming Anti-AI Backlash
1:09:10 Global Implications: Labor, Politics, and the Future
Resources:
Find Balaji on X: https://x.com/balajis
Find Martin on X: https://x.com/martin_casado
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.