#217 Josh Wolfe: Human Advantage in the World of AI

AI transcript
0:00:02 Look at AI right now.
0:00:03 Billions of dollars being spent.
0:00:07 The flurry of all of this benefits us as consumers,
0:00:08 always and everywhere.
0:00:10 Just wait and they’ll compete and compete
0:00:12 and it will accrue to us as users.
0:00:15 How do we create an unfair advantage in a world of AI?
0:00:18 I think the limits to human intelligence
0:00:20 are rooted in our biology.
0:00:23 AI’s over time will understand us in many ways better
0:00:25 than we understand ourselves.
0:00:27 We can still beat computers at chess, done.
0:00:29 We can still beat them at Go, done.
0:00:30 We can beat them at video games, done.
0:00:32 Okay, but we still have creativity, done.
0:00:34 All these things have been trained
0:00:38 on the sum total of all human creation.
0:00:40 And now they’re being trained on the sum total
0:00:42 of human creation plus artificial creation.
0:00:46 I’m absolutely convinced that we are going to have machines
0:00:48 doing science 24/7.
0:00:50 I just think it’s gonna be part of the total overture
0:00:55 of creation and I think it’s a beautiful thing.
0:00:57 (upbeat music)
0:01:00 (upbeat music)
0:01:11 Welcome to The Knowledge Project.
0:01:13 I’m your host, Shane Parrish.
0:01:15 In a world where knowledge is power,
0:01:18 this podcast is your toolkit for mastering the best
0:01:20 of what other people have already figured out.
0:01:22 If you want to take your learning to the next level,
0:01:24 consider joining our membership program
0:01:27 at fs.blog/membership.
0:01:29 As a member, you’ll get my personal reflections
0:01:32 at the end of every episode.
0:01:35 early access to episodes, no ads (including this one),
0:01:38 exclusive content, hand-edited transcripts,
0:01:39 and so much more.
0:01:42 Check out the link in the show notes for more.
0:01:44 While others ask what’s trending,
0:01:48 Josh Wolfe asks, what seems impossible today?
0:01:51 He’s built a career betting on scientific breakthroughs
0:01:53 that most people don’t believe can happen.
0:01:55 As co-founder of Lux Capital,
0:01:57 he’s backed companies cleaning up nuclear waste
0:02:00 and building brain computer interfaces.
0:02:02 But here’s the contradiction.
0:02:03 Despite investing in technology
0:02:05 that could make humans obsolete,
0:02:09 Josh is profoundly optimistic about human potential.
0:02:12 His thinking challenges conventional wisdom.
0:02:15 While most see AI as automation and threats to humanity,
0:02:19 Josh sees it as a catalyst for human achievement.
0:02:22 In this conversation, we explore this paradox,
0:02:25 diving deep into how technological evolution
0:02:28 can amplify rather than diminish what makes us human.
0:02:30 From geopolitical power shifts
0:02:32 to the future of human creativity,
0:02:35 Josh reveals the exact frameworks he uses
0:02:36 for seeing what others miss
0:02:39 and betting on the seemingly impossible future.
0:02:42 It’s time to listen and learn.
0:02:48 – Start with what you’re obsessed with today.
0:02:49 What’s on your mind?
0:02:52 – Well, first and foremost, kids and family.
0:02:54 Trying to be a good dad, good husband.
0:02:57 Technologically obsessed with so many different things.
0:02:59 We were just in a partnership meeting.
0:03:01 And probably the most interesting thing at the moment
0:03:05 is thinking about the speed of certain technologies,
0:03:06 like the actual physical technologies
0:03:08 and where the bottlenecks are.
0:03:12 So in biology, you’ve got all kinds of reactions,
0:03:14 nature figured out, evolution figured out,
0:03:15 enzymes and catalysts and things
0:03:16 that can speed up reactions.
0:03:20 But you can’t move faster than the speed of biology.
0:03:22 Now you think about AI, total different field,
0:03:26 but the same sort of underlying philosophical principle.
0:03:31 If you’ve tried ChatGPT operator,
0:03:35 it can only move at the speed of the web.
0:03:37 And even at that, with latency, it’s a little bit slow.
0:03:40 So we’re thinking about what are the technologies
0:03:41 that can accelerate these things
0:03:43 that have these natural, almost physics limits,
0:03:46 whether those limits are biological or digital.
0:03:48 So that is something that at the moment I’m obsessed with
0:03:50 in part because I have ignorance about it.
0:03:51 And when I have ignorance about it,
0:03:53 my competitive spirit says,
0:03:54 how do I get smart about this?
0:03:56 How do I get some differentiated insight?
0:03:57 How do I know what everybody else thinks
0:04:00 and then find the sort of white space answer?
0:04:03 So that’s one big thing, obsessed with geopolitics.
0:04:08 It is the great game, everything from US-China competition
0:04:14 to Iran and Israel, the axis of religious conflict,
0:04:17 the Sahel and Maghreb in Africa,
0:04:19 which is not an area that a lot of people
0:04:21 talk about or think about that I believe
0:04:26 with low probability, super high magnitude import
0:04:28 is going to become the next Afghanistan,
0:04:30 that that region in the Sahel,
0:04:34 all these coups and cascades of these coups of failed states
0:04:36 where you have Russian mercenaries,
0:04:39 violent extremists, Chinese CCP coming in
0:04:42 with infrastructure and influence,
0:04:45 European colonial powers being kicked out
0:04:48 against this backdrop of brilliant scientists
0:04:51 and engineers and technologists that went to HBS or Stanford
0:04:53 and worked at Meta and Google and are going back
0:04:57 and particularly in like Ghana and Kenya and Nigeria
0:04:58 and building businesses.
0:05:03 So that continent is going to be a truly contested space
0:05:07 for future progress and for utter chaos and terrorism.
0:05:09 So yeah, it’s a widespread of stuff
0:05:11 that I’m probably currently obsessed with.
0:05:12 – Let’s talk about all those things.
0:05:14 Let’s start with sort of diving in.
0:05:16 You mentioned ChatGPT Operator.
0:05:16 – Yes.
0:05:18 – And the limitations sort of being,
0:05:19 we’re moving at the speed of the web.
0:05:21 At what point do you think that we’re,
0:05:24 those systems are all designed for humans?
0:05:26 At what point do they become designed for AI first
0:05:27 and then humans are using them
0:05:30 or do we just have two simultaneous interfaces?
0:05:31 – I think it’s going to be both.
0:05:34 I think that there’s always this like ignorance arbitrage
0:05:37 where somebody figures out that there’s an opportunity
0:05:39 to take advantage and improve a system
0:05:41 while people don’t understand it.
0:05:42 And so I think that there are people
0:05:46 that are probably going to launch redesign businesses
0:05:48 where they say we will optimize your web pages
0:05:50 just like people did for search engine optimization
0:05:53 when Google rose that you had to be more relevant
0:05:54 because Google was so important.
0:05:56 Google was influencing whether or not people would see you
0:05:57 or discover you.
0:06:00 And so if there are certain tasks like OpenTable
0:06:02 or restaurants or shopping,
0:06:05 they are now going to start to shift their user interface
0:06:08 not just for the clicks of a human,
0:06:09 but for the clicks of an AI
0:06:11 that’s doing research on the best product.
0:06:14 And the way that those are going to negotiate
0:06:17 and in some cases influence or trick the judgment
0:06:20 and the reasoning of the AI is really interesting.
0:06:23 So I think that that is probably the next domain
0:06:25 where those things are going to get better, faster,
0:06:27 they’re going to have more compute,
0:06:29 but then people are going to be redesigning
0:06:31 some of these experiences, not for us
0:06:33 and the user interface of humans, but machine to machine.
0:06:35 And you’re already starting to see this
0:06:36 where there was one example of, like,
0:06:39 an instantiation of R1 communicating with another
0:06:41 R1, but the language that they were communicating in
0:06:43 was not English or Chinese or even traditional code.
0:06:45 It was like this weird almost alien language.
0:06:48 And so I think you can see a bunch of that.
0:06:49 There’s another adjacent theme
0:06:51 which I think is also really interesting,
0:06:54 which is AI portends the death of APIs.
0:06:58 So APIs allow Meta with their Meta glasses or Orion
0:07:03 to be able to communicate with Uber and Spotify
0:07:04 through the backend, through software.
0:07:07 But increasingly those things are complicated,
0:07:09 they’re hard to negotiate, there’s a lot of legal,
0:07:11 there’s API calls, there’s restrictions,
0:07:13 there’s privacy, there’s controls.
0:07:14 But if you’re one of these companies that are like,
0:07:16 I don’t want to go through all of that.
0:07:19 Can I just use AI to pretend that I’m a user?
0:07:20 And in fact, I had this experience
0:07:22 where I was using Operator
0:07:25 and I had this moral dilemma for a split second.
0:07:28 Do I click to confirm that I’m not a bot
0:07:29 because I had to take control over it
0:07:30 because the bot actually was trying
0:07:32 to do my research on behalf of me.
0:07:36 And so you see a world where APIs
0:07:39 that have been the plumbing of everything in SaaS
0:07:41 and software and negotiating behind the scenes
0:07:44 may start to lose influence and power to AIs
0:07:47 that are able to basically just negotiate on the front end
0:07:48 as though they were a user.
0:07:50 So I think that that whole domain
0:07:54 is going to very rapidly evolve in like a quarter or two.
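A minimal sketch of what “negotiating on the front end as though they were a user” could look like: instead of a sanctioned API call, an agent drives the same web page a human sees with a browser-automation library. The URL, selectors, and booking flow below are hypothetical placeholders, not any real site’s.

```python
# Hypothetical sketch: an AI agent acting as a "user" on a site's front end
# instead of going through an official API. URL and selectors are invented.
from playwright.sync_api import sync_playwright

def book_table(date: str, party_size: int) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example-restaurant.test/reserve")  # placeholder site
        page.fill("#date", date)                      # fill the form like a human
        page.select_option("#party-size", str(party_size))
        page.click("button[type=submit]")             # no API key, no legal negotiation
        result = page.text_content(".confirmation")   # read the answer off the page
        browser.close()
        return result or ""
```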
0:07:57 – You mentioned sort of the limitations
0:07:59 like biology moves at a certain speed.
0:08:01 There’s a couple of subset of questions here,
0:08:04 but one is where’s the limitation in AI growth right now?
0:08:07 It seems like we have energy as a key input.
0:08:10 We have compute as a key input.
0:08:13 And then we have data slash algorithms
0:08:15 as like the next key input.
0:08:16 What am I missing?
0:08:18 What’s the limitation on those?
0:08:20 – Well, start with conventional wisdom
0:08:21 which has heretofore been correct,
0:08:23 which is you need more compute,
0:08:26 you need more capital, scaling laws for AI,
0:08:29 just throw more GPUs and processors and money
0:08:31 and ultimately energy to support that
0:08:33 and you will get better and better breakthroughs.
0:08:35 The counter to that, which you’re seeing in some cases
0:08:37 with open source or people that have now trained
0:08:38 on some of the large models
0:08:41 is that there’s gonna be a shift towards model efficiency.
0:08:43 And so that’s number one,
0:08:44 that people are gonna figure out
0:08:46 how do we do these more efficiently with less compute.
0:08:49 Number two, which is a big sort of contrary thesis
0:08:53 that I have, is about a significant portion of inference.
0:08:55 So, you know, if you break it down, for training
0:08:57 you still need large 100,000-ish clusters
0:08:59 of H100 top-performing chips.
0:09:00 It’s expensive.
0:09:03 Only the hyperscalers, heretofore have been able to do that.
0:09:04 You can do some training
0:09:06 if you’re using things like together compute
0:09:08 in some of our companies without having to do that yourself.
0:09:12 Sort of like going back to on-prem versus colo
0:09:15 versus cloud transition 15 years ago.
0:09:17 But I think that you’re gonna end up doing
0:09:20 a lot of inference on device.
0:09:22 Meaning instead of going to the cloud
0:09:25 and typing a query, like 30 to 50% of your inference
0:09:27 may be on an Apple or an Android device.
0:09:29 And if I had to bet today,
0:09:32 it’s Android because of the architecture over Apple.
0:09:33 But if Apple can do some smart things,
0:09:35 maybe they can catch up to this,
0:09:37 but some of the design choices and the closed aspects
0:09:40 which have been great for privacy as a feature
0:09:42 may hurt them in this wave.
0:09:44 And you could already see, like, Perplexity
0:09:46 can actually be an assistant on my Android device.
0:09:49 I carry both so I can understand both operating systems.
0:09:51 You can’t do that yet on iOS and Apple.
0:09:52 But here’s the insight.
0:09:56 If 30 to 50% of my inference is my cached emails
0:09:59 and my chats and my WhatsApp
0:10:01 and all the stuff that I keep on my device,
0:10:04 my photos, my health data, my calendar,
0:10:08 then the memory players may play a much bigger role.
0:10:10 So now you have Samsung, SK Hynix, Micron
0:10:12 that are important here.
0:10:15 If I had to come up with a somewhat pejorative analogy,
0:10:18 I think Samsung is going to be more like Intel.
0:10:20 I think that they’re just a little bit sclerotic
0:10:22 and bureaucratic and slow moving
0:10:24 and it’s gonna sort of ebb and decline.
0:10:27 I think Micron is a US company
0:10:30 which is going to be more constrained
0:10:33 by restrictions that are put on export control.
0:10:35 And I think SK being a Korean company
0:10:37 is gonna be able to skirt those
0:10:40 in the same way that Nvidia has with distribution
0:10:42 to China through Singapore and Indonesia and Malaysia,
0:10:43 which is now being investigated.
0:10:47 But the memory players are likely to be ascendant.
0:10:49 And so what heretofore was a bottleneck on compute
0:10:52 could shift attention, talent, money
0:10:55 into new architectures where memory plays a key role
0:10:58 on small models on device for inference.
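As a toy illustration of that on-device split, here is the routing logic in miniature; `run_local_model` and `run_cloud_model` are made-up stand-ins, and the 30 to 50% figure is his estimate, not something the code derives.

```python
# Toy sketch of hybrid inference: queries that only touch data already cached
# on the device (emails, chats, photos, health, calendar) stay on a small
# local model; everything else goes to the cloud. Both model functions are
# invented placeholders, not a real API.
ON_DEVICE_SOURCES = {"email", "whatsapp", "photos", "health", "calendar"}

def run_local_model(query: str) -> str:   # placeholder: small memory-bound model
    return f"[on-device] {query}"

def run_cloud_model(query: str) -> str:   # placeholder: large hyperscaler model
    return f"[cloud] {query}"

def route(query: str, sources_needed: set) -> str:
    if sources_needed and sources_needed <= ON_DEVICE_SOURCES:
        return run_local_model(query)     # no network hop; data never leaves device
    return run_cloud_model(query)

print(route("summarize my week", {"email", "calendar"}))   # stays on device
print(route("latest chip export rules", {"web"}))          # goes to the cloud
```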
0:11:00 – I’ve always thought of memory as like a commodity.
0:11:02 It doesn’t matter if I have a Micron chip
0:11:03 or a SanDisk or a Samsung.
0:11:05 – And that’s what people thought about CPUs back in the day
0:11:08 and then GPUs for just traditional video graphics.
0:11:11 And then the AI researchers came and said,
0:11:12 “Well, wait a second.
0:11:14 We can run these convolutional neural nets on these GPUs.”
0:11:17 And they reached into somebody else’s domain
0:11:19 of PlayStation and Xbox and pulled them in
0:11:22 and suddenly it lit up this phenomenon
0:11:25 that turned Nvidia from 15 billion
0:11:27 to two and a half or three trillion.
0:11:31 Do you think Nvidia has got a long runway?
0:11:34 – I think that their ability with that market capitalization
0:11:36 and that cash and the margin that they have,
0:11:38 they can reinvent and buy a lot of things.
0:11:41 So I think Jensen is a thoughtful capital allocator.
0:11:44 I think he’s benefited and caught lightning in a bottle
0:11:45 over the past 10 years.
0:11:47 And now just particularly the past six,
0:11:49 I would not count Nvidia out.
0:11:52 Now, what’s the upside from two and a half or three trillion
0:11:53 to five trillion or 10 trillion?
0:11:56 Or do they go back down to, I have no idea.
0:11:58 So that’s more of a fundamental valuation
0:11:59 based on the speculation.
0:12:01 But if you have 90, 95% margins
0:12:04 and your chips are $30,000,
0:12:07 could you shrink them down and sell for $3,000
0:12:09 and take small margins but get more volume?
0:12:12 And this was some of the debate that I think happened
0:12:14 with the release of DeepSeek, where you had Satya
0:12:16 basically talking about Jevons paradox,
0:12:19 that any one thing might get more efficient
0:12:21 but the result is not that you have less demand.
0:12:24 In aggregate, you have much more demand.
0:12:26 The classic example of this is like refrigerators.
0:12:29 Single refrigerator back in the day was an energy hog.
0:12:32 All of a sudden you make these things more efficient
0:12:33 and what happens, it becomes much cheaper.
0:12:37 So if something’s cheaper, you’re gonna buy more of it
0:12:39 and they shrunk refrigerators down.
0:12:41 And so now everybody had one in their garage
0:12:44 and in their office and in their basement.
0:12:46 And so the aggregate demand for electricity
0:12:49 and then for all the components and coils and refrigerant
0:12:50 went up, not down.
0:12:52 Same thing with bandwidth.
0:12:56 If you have a 56K baud modem, which was like my first modem,
0:12:58 you know, dial up internet and all that kind of stuff
0:13:01 on CompuServe and, you know, then you go to a T3
0:13:03 and fiber optic, you know, at the speed of light,
0:13:06 it is way more efficient but your usage now is just huge.
0:13:09 You’re streaming 4K videos and watching Netflix
0:13:11 and trading on Bloomberg.
0:13:14 Whereas if you actually want to decrease use,
0:13:17 the non-obvious thing that you would wanna do
0:13:18 is actually slow down speed.
0:13:20 I mean, put me on a 56K baud modem today
0:13:22 and I’d just, like, pull out my hair and never use it.
0:13:23 You couldn’t even use Gmail on that.
0:13:24 – Right, exactly. – Exactly.
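The arithmetic behind Jevons paradox is easy to make concrete; the numbers here are invented purely for illustration.

```python
# Jevons paradox with invented numbers: a 10x efficiency gain cuts the unit
# cost, demand grows even faster once the thing is cheap, and aggregate
# consumption goes up, not down.
cost_per_unit = 1.00        # old cost per unit of compute/energy/bandwidth
units = 100                 # demand at the old price

efficiency_gain = 10        # the resource gets 10x cheaper per unit
new_cost = cost_per_unit / efficiency_gain
new_units = units * 30      # elastic demand: usage explodes past the gain

print(cost_per_unit * units)   # old total spend: 100.0
print(new_cost * new_units)    # new total spend: 300.0 -- aggregate demand UP
```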
0:13:26 – Where do you think intelligence
0:13:28 is the limiting factor of progress?
0:13:30 – Human intelligence?
0:13:34 I think that the great thing about us is that we are,
0:13:36 I don’t know, 60 or 70% predictable
0:13:41 in that so many of the foibles and virtues and vices
0:13:45 of man and woman are Shakespearean.
0:13:46 You know, there are hundreds of, I mean,
0:13:48 tens of thousands of years old from modern evolution,
0:13:52 but with that we are still irrational.
0:13:53 You know, Danny Kahneman was a friend.
0:13:56 Danny passed last year, just an amazing guy,
0:13:58 but he could document all the heuristics
0:14:01 and all the biases which you study and write about.
0:14:04 And he’s like, I’m still a victim of them.
0:14:06 Even knowing them doesn’t really insulate me
0:14:08 from falling to them.
0:14:09 It’s just like an optical illusion.
0:14:11 You can know it’s an optical illusion,
0:14:13 but you still see it and you still fall for it.
0:14:17 So I think the limits to human intelligence
0:14:19 are rooted in our biology.
0:14:22 And we have all of this embodied intelligence
0:14:26 that have sort of been externalized in calculators
0:14:29 and in computers that help us to overcome that.
0:14:32 And I certainly feel today that a significant portion
0:14:35 of my day that might be spent with a colleague
0:14:36 riffing on something.
0:14:39 And sometimes that’s like great for a muse
0:14:41 or tapping into information or intelligence
0:14:43 or some tidbit or a piece of gossip
0:14:45 or an experience that they had.
0:14:47 And that’s why like the diversity of cognitive diversity
0:14:49 is really important.
0:14:51 I’m complementing that with all-day chats
0:14:54 with Perplexity and Claude and OpenAI
0:14:58 and any number of LLMs that might hallucinate
0:15:01 just like a friend and might be wrong about something,
0:15:03 but might give me some profound insight.
0:15:05 – I think what I’m interested in is
0:15:08 at what point will machines like if intelligence
0:15:10 is the limiting factor on progress
0:15:12 in certain domains or areas.
0:15:15 It strikes me that in the near future,
0:15:19 the machines will be able to surpass human intelligence.
0:15:20 – For sure.
0:15:21 – And if that’s the case,
0:15:25 then those areas are ripe for either disruption
0:15:26 or rapid progress.
0:15:30 – Well, I think, you know what Peter Drucker never imagined
0:15:32 when he was talking about knowledge workers
0:15:33 in the shift from like blue collar workers
0:15:35 to white collar workers was that machines
0:15:38 would actually most threaten those professions.
0:15:42 And so you take some of the most elite professions, doctors.
0:15:46 The ability to do a differential diagnosis today.
0:15:47 I take my medical records as soon as I get them
0:15:50 from top doctors and I still put them into LLMs
0:15:52 to see what did they miss?
0:15:55 And sometimes it unearths some interesting correlations.
0:15:58 Is there a scientific paper from the past 10 years
0:16:00 that might have bearing on this particular finding
0:16:01 or something in a blood test?
0:16:03 So that is really interesting.
0:16:07 Lawyers, languages, code, multi-billion dollar lawsuits
0:16:09 sometimes come down to the single placement
0:16:11 and interpretation by a human of a word.
0:16:13 One of the things that Danny Kahneman recognized
0:16:16 and he published this in his last book Noise
0:16:19 was that the same case, the same information,
0:16:21 the same facts presented to the same judge
0:16:23 at different times of day
0:16:25 or to presented to different judges.
0:16:26 They are not objective.
0:16:29 And he actually thought that for justice and fairness
0:16:31 that you would want the human intelligence
0:16:34 applied to these situations with their biases
0:16:36 to be either complemented or replaced by AIs
0:16:38 that had a consistency and a fidelity
0:16:40 in how they made decisions.
0:16:42 So those are all high paying jobs
0:16:44 with lots of training, lots of time
0:16:47 to gain the experience, the reasoning, the intelligence.
0:16:50 And many of those things are at risk.
0:16:54 Government itself, the ability to legislate, make decisions.
0:16:55 You wanna be able to capture
0:16:56 and express the will of the people.
0:16:59 But increasingly we have social media
0:17:00 that’s able to do that, it could be corrupted
0:17:02 but there’s mechanisms to figure out
0:17:05 how do you really surface what does the populace care about?
0:17:07 And some in Europe have tried to do these things.
0:17:11 The key thing is always like what’s the incentive
0:17:13 and what’s the vector where somebody can come in
0:17:15 and corrupt these things?
0:17:18 But interestingly, I actually think that the people
0:17:21 whose jobs are like most protected in this new domain
0:17:23 are blue collar workers.
0:17:26 A robot today can’t really fully serve a meal
0:17:28 and they cannot effectively even though
0:17:32 every humanoid robot tries to do folding laundry
0:17:36 and there’s still basic jobs that are not low paying
0:17:38 but they’re arguably safer than ever.
0:17:40 And this ties into immigration and technology
0:17:44 and human replacement, but people that are doing maintenance,
0:17:46 people that are plumbers, many of these things
0:17:49 are standardized systems, but it’s like the old joke
0:17:52 about the plumber that comes in and he comes in
0:17:55 and he taps a few things and suddenly the pipes are fixed
0:17:57 and he says how much is it, it’s 1,000 bucks.
0:18:00 1,000 bucks for that? He’s like, it was a $5 part.
0:18:03 He’s like, yeah, the part was $5, but the 995 was knowing
0:18:05 where to tap and where to put it.
0:18:07 And so I think that there’s still a lot
0:18:10 of this like tacit knowledge and craft and maintenance
0:18:13 that is gonna be protected against the rise of the machines
0:18:15 that are gonna replace most of the white collar
0:18:16 intelligence and knowledge workers.
0:18:19 – Do you believe, I think it was Zuck who came out
0:18:22 and said by mid this year, 2025,
0:18:26 that AI would be as good as a mid-level engineer.
0:18:27 – In coding.
0:18:28 – In coding.
0:18:29 – Yeah, for sure.
0:18:30 – What are the implications of that?
0:18:34 Walk me through like the next 18 months if that’s true.
0:18:36 – Well, again, if you take this in the frame
0:18:38 of Jevons Paradox, then a lot more people are going
0:18:41 to be able to code in ways that they never have.
0:18:44 And in fact, I think it was like Andrej Karpathy
0:18:46 who was on Twitter a day or two ago
0:18:49 talking about how he himself, as a coder,
0:18:51 was basically just talking in natural language
0:18:52 and having the AI, and I forget which one
0:18:55 he was particularly using, generate code.
0:18:58 And then if he wanted to tweak something and change
0:19:01 a design and make a particular sidebar
0:19:02 a little bit thicker or thinner,
0:19:04 he would just say, make the sidebar thicker, and it was able
0:19:05 to do that.
0:19:08 So I think the accessibility for people who never coded,
0:19:11 never programmed to be able to come up with an idea
0:19:12 and say, “Oh, I wish there was an application
0:19:14 “that could do X, Y and Z,” and to quickly do that,
0:19:15 is great.
0:19:17 For the big companies who employ many coders
0:19:20 and are competing at an ever-faster speed,
0:19:21 you know, you have somebody like Marc Benioff
0:19:23 who’s saying that they’re not hiring any more coders
0:19:25 at the same time that he’s still talking about
0:19:27 the primacy of SaaS, which is this weird contradiction.
0:19:30 But I would suspect that maybe you lose 10 to 30%
0:19:33 of the people that you normally would have hired,
0:19:35 but the people that are there are still like
0:19:38 these 10 X coders and now they have a machine
0:19:40 that’s helping them be like 20 to 100 X.
0:19:42 – Do you think margins go up then
0:19:43 for a lot of these companies?
0:19:44 – I don’t know.
0:19:46 I always feel like margins are always fleeting in the sense
0:19:48 because it’s like a fallacy of composition.
0:19:50 One company stands up a little bit higher
0:19:52 and then everybody else is on their tippy toes.
0:19:54 So I think it just changes the game,
0:19:57 but I don’t think that you have some like permanent margin.
0:19:59 The only time you get really large margins
0:20:02 is when you truly have, like, a monopoly: NVIDIA today.
0:20:03 Until there’s an alternative,
0:20:06 whether in architecture or algorithms or in something else,
0:20:07 you know, they’ve got dominant margins
0:20:10 because they can charge super high prices
0:20:12 because there is no alternative.
0:20:15 So when you have that, there is no alternative,
0:20:19 but in many domains, given enough time,
0:20:21 there’s an alternative and then margins just resettle
0:20:24 and look at cars, you know, the average margin on cars.
0:20:27 Cars today are 10,000 times better
0:20:29 by every measure of fuel efficiency,
0:20:33 comfort, air conditioning, satellite radio,
0:20:35 but those margins never persisted
0:20:37 as being like permanently high.
0:20:38 I always come back to sort of Buffett
0:20:40 in the ’60s with the loom, right?
0:20:42 I always relate everything to the loom
0:20:43 ’cause everybody was coming to him
0:20:44 when he first took control and they’re like,
0:20:45 “Oh, we got this new technology.”
0:20:47 And he’s like, “Yeah, but all the benefits
0:20:49 “are gonna go to the customer, it’s not gonna go to me.”
0:20:52 I mean, look at AI right now.
0:20:54 Billions of dollars being spent.
0:20:56 All the foundation models, all the competition,
0:20:57 the second that DeepSeek comes out,
0:21:00 it suddenly accelerated the internal strategic decisions
0:21:02 from open AI of when are we gonna release models?
0:21:06 And so the flurry of all of this benefits us as consumers,
0:21:07 always and everywhere.
0:21:11 And so we were looking at internally
0:21:13 installing some new AI system
0:21:16 to surface all of our disparate documents.
0:21:17 And there’s a bunch of these.
0:21:22 And our Gmail, our Slack, our Google Docs,
0:21:23 our PDFs, our legal agreements
0:21:25 and just have a repository with permissions
0:21:28 and all this, and it’s expensive
0:21:29 in part because going back to that
0:21:31 ignorance arbitrage, somebody could charge us
0:21:33 a lot of money to do that implementation.
0:21:35 And my default was just wait.
0:21:36 Why don’t we wait six months?
0:21:37 Because this is gonna be available
0:21:40 from all of the major LLM providers today
0:21:42 that want to get the enterprise accounts.
0:21:44 And let’s just wait and they’ll compete and compete
0:21:46 and it will accrue to us as users.
0:21:47 – Talk to me about all these models.
0:21:50 People are spending hundreds of billions,
0:21:52 if not trillions of dollars around the globe,
0:21:54 competing on a model.
0:21:56 Do you think that’s the basis of competition
0:21:57 or how does that play out?
0:22:00 And then you have Zuck, who’s trying to open source it,
0:22:03 and he’s spent, I don’t know, what, 60 to a hundred billion
0:22:07 probably by the end of June this year, open sourcing it.
0:22:08 So he’s basically like,
0:22:09 I wouldn’t say he’s doing it for free,
0:22:11 but what’s the strategy there?
0:22:14 – First you have just straight head-to-head competition,
0:22:15 you know, Anthropic and Claude
0:22:18 and ChatGPT and OpenAI and others.
0:22:20 Then you have sovereign models.
0:22:21 So there are countries that are saying
0:22:24 we don’t want to be beholden to the US or to China.
0:22:27 We funded a company in Japan called Sakana.
0:22:27 This is one of the lead authors
0:22:30 from the Google transformer paper, and a guy, David Ha,
0:22:32 and just incredible team.
0:22:34 And they are actually trying to do
0:22:37 these super-efficient novel architectures.
0:22:37 So they’re not trying to train
0:22:39 in these multi-hundred-thousand clusters.
0:22:40 Their latest model,
0:22:42 which was based on this evolutionary technique
0:22:45 was like eight GPUs, which was wild.
0:22:46 So that’s one trend.
0:22:48 But on the strategic question for Zuck,
0:22:50 I actually think that he’s probably playing
0:22:52 at the smartest of everybody, which is,
0:22:53 and he’s been open about this.
0:22:56 We’re going to open source the models with Llama
0:22:58 and we’re going to let people develop on them.
0:23:01 Why? Because the real value is going to be
0:23:04 in the repository of data.
0:23:06 Longitudinal data, deep data.
0:23:07 If you go back 10, 15 years,
0:23:09 like the number one thing in tech was big data,
0:23:10 big data, big data, okay?
0:23:12 Well, now if you actually have big data,
0:23:14 you want to use whatever models are out there
0:23:16 to run on your proprietary silo of data.
0:23:18 So the people that I think are going to be advantaged,
0:23:19 Meta, why?
0:23:21 They’ve got all my WhatsApp messages.
0:23:23 Apple doesn’t, Meta does.
0:23:25 They’ve got all my Instagram likes and preferences
0:23:27 and every detail of how long I spend
0:23:29 and linger on something and what I post
0:23:30 and all of that content.
0:23:32 My Facebook, which I don’t really use anymore
0:23:34 other than when Instagram, you know, cross posts to it.
0:23:36 But that is super valuable.
0:23:39 And they care about that in part
0:23:43 because Zuck needs to route around both Apple and Google.
0:23:44 He does not have a device.
0:23:47 I mean, you’ve got Oculus and MetaQuest and whatnot,
0:23:48 but that’s not the one.
0:23:51 This Orion with the neural band
0:23:53 from the company CTRL-labs that we funded,
0:23:55 which was for the non-invasive brain machine interface
0:23:57 to be able to use free gestures,
0:23:59 which is an absolute directional arrow of progress, right?
0:24:02 Disintermediating the control surfaces you have,
0:24:03 remote controls and all that kind of stuff
0:24:05 and just being able to gesture, map a device
0:24:08 to your human body is absolutely the trend.
0:24:10 But he’s thinking about how do I route around these devices
0:24:12 and how do I have a long repository
0:24:13 of everybody’s information
0:24:15 and use the best model that’s out there.
0:24:16 And the great thing about open source
0:24:18 is it’ll continue to improve over time.
0:24:20 So I think that that’s a winning strategy.
0:24:22 I think the people that are continuing
0:24:23 to develop ever better models
0:24:25 unless they have proprietary data
0:24:27 are gonna be sort of screwed.
0:24:28 Bloomberg should do really well.
0:24:31 I mean, the huge amount of proprietary information,
0:24:33 all the acquisitions that they’ve done over time,
0:24:36 being able to normalize merger data
0:24:37 and historic information
0:24:40 and the longitudinal price information
0:24:42 and correlations between different asset classes,
0:24:46 being able to run AI on top of that is like a quant’s dream.
0:24:50 So I think that people that have hospital systems
0:24:52 arguably some governments have used efficiently,
0:24:55 but anybody that has a proprietary source of information,
0:24:57 clinical trials, failed experiments
0:24:59 inside of pharma companies,
0:25:02 being able to do that is the real gold.
0:25:05 And the large language models are effectively,
0:25:07 over time, I think, going to trend towards zero,
0:25:09 a commodity excavator of that data.
0:25:11 – So the moat is really in the data.
0:25:12 – I think so.
0:25:14 Because everything will be sort of comparable
0:25:15 running on top of that.
0:25:17 The data sitting by itself is like an oil well
0:25:20 that isn’t mined, you know, or a natural gas find
0:25:21 that isn’t tapped.
0:25:23 So it needs to be extracted.
0:25:25 And I think that most likely open source,
0:25:27 but in some cases enterprise partnerships
0:25:29 between Anthropic or OpenAI
0:25:32 with some of these siloed data sets
0:25:34 will unleash a lot of value.
0:25:35 – So aside from meta,
0:25:38 what counterintuitive sort of public companies
0:25:42 would you say have like really interesting data sources?
0:25:43 – Ah, that’s a good question.
0:25:45 I haven’t really spent a lot of time on that
0:25:49 to figure out who’s got crazy amounts of proprietary data.
0:25:50 Pharma would be a good one
0:25:52 because obviously they’re, you know,
0:25:54 tracking both their successful
0:25:56 but their unsuccessful clinical trials.
0:25:59 There’s a lot of information in the unsuccessful data,
0:26:01 like the things that fail that you can learn from.
0:26:02 You could argue that Tesla, of course,
0:26:04 who I’m very publicly critical of,
0:26:05 like if they truly are collecting
0:26:09 a ton of road user data from,
0:26:11 you know, every Tesla that’s being driven,
0:26:12 that would be valuable.
0:26:14 Anything where there’s a collection,
0:26:17 a set of sensors, a repository of information
0:26:19 that is owned by them.
0:26:21 Anything that we’ve signed off on that, you know,
0:26:25 your data is free for us to use, like Meta.
0:26:26 – You think of Tesla,
0:26:28 they should have the best mapping software in the world.
0:26:32 They literally like drive millions of miles every day.
0:26:33 They can update everything.
0:26:34 They can locate police.
0:26:36 They can locate speed cameras.
0:26:37 They can get real time traffic.
0:26:38 – Weather patterns, yeah, totally.
0:26:39 – Yeah.
0:26:41 – But okay, the flip side of that though, right?
0:26:45 Taking sort of like the opposite view for a moment, Netflix.
0:26:46 Netflix has all of our viewing data.
0:26:47 They know what we like.
0:26:48 They know what you like.
0:26:50 They can make a perfect set of channels for you.
0:26:53 And the recommendations are reasonably good approximations
0:26:55 of adjacencies to things that you liked,
0:26:57 but they haven’t been successful,
0:27:01 nor has the human algorithm at say HBO in the past,
0:27:04 of like perfectly creating the next show
0:27:06 that you really want to see.
0:27:09 And what’s interesting about that is,
0:27:11 they’ve put a lot of money into this,
0:27:14 but it hasn’t yielded the recipe maker
0:27:16 for like the next perfect show.
0:27:18 And oftentimes the thing that you want to see
0:27:20 is almost something that’s orthogonal
0:27:22 from what you’ve been watching.
0:27:27 Like I heard Anthony Mackie, who’s in the latest Marvel movie
0:27:29 talking about an expectation that he’ll be,
0:27:30 somebody was like, how long do you think
0:27:32 you’ll be doing Marvel movies?
0:27:34 And he’s like, oh, I think probably like the next 10 years.
0:27:37 And I’m like, probably two or three,
0:27:38 because people are just bored of this stuff after a while.
0:27:41 Like nobody wants to see another,
0:27:42 I don’t want to see another Marvel movie.
0:27:45 I like the adjacency of like The Boys,
0:27:47 which was like the dark superhero kind of movie.
0:27:50 And I think trying to find groups
0:27:52 that have proprietary data
0:27:56 that have some predictive value,
0:27:58 the most value probably for society,
0:28:00 I don’t know if it’ll be entirely captured by companies,
0:28:03 is just all the scientific information that we have.
0:28:04 Because I’m absolutely convinced
0:28:09 that we are going to have machines doing science 24/7.
0:28:10 – Well, so talk to me a little bit about,
0:28:12 I want to come back to Tesla in a sec,
0:28:13 but let’s go down the science that,
0:28:16 why has nobody sort of taken every study published
0:28:19 in a domain say Alzheimer’s research,
0:28:22 popped it into GPT and be like, where are we wrong?
0:28:25 What studies have been fabricated or proven not true
0:28:27 that we’re investing research in, right?
0:28:29 Like, ’cause studies get built on studies and studies.
0:28:32 And so if something from the 80s came out
0:28:33 and it’s like completely false,
0:28:37 we’ve probably spent $20 billion down this rabbit hole.
0:28:39 And what’s the next most likely thing to work?
0:28:41 Is anybody doing that?
0:28:43 – I have to imagine they are because deep research
0:28:46 came out today or in the past 24 hours from OpenAI,
0:28:48 which is sort of their model with a better engine,
0:28:50 so to speak, than Google’s deep research,
0:28:51 which itself was impressive,
0:28:53 both because of its ability to search many sources
0:28:55 and then the ability to sort of,
0:28:57 I think it was either there or through NotebookLM
0:28:58 to conjure the podcast,
0:29:01 which at first was a static presentation
0:29:03 of near human quality voice,
0:29:04 but now you can interrupt it like a radio call,
0:29:05 which is super cool.
0:29:10 But you can say go through the past 15 years of PNAS papers,
0:29:14 or science and nature papers around this particular topic
0:29:16 and find correlations between papers
0:29:18 that do not cite each other
0:29:20 or tell me any spurious correlations.
0:29:21 And the beauty of all of that
0:29:24 on sort of the information or informatics side
0:29:27 is eventually you will have a materials
0:29:29 and methods output of that,
0:29:32 that you can feed into something like Benchling
0:29:35 or some of the automated lab players
0:29:37 to actually say like run the experiment.
0:29:40 So I’m absolutely convinced, like high-certitude,
0:29:42 I don’t know exactly which company will do it.
0:29:44 We’ve invested in some, they haven’t worked,
0:29:46 we’ll invest in more, hopefully they will.
0:29:49 But this directional arrow of progress, of the idea
0:29:54 by analogy of machines doing science 24/7 automated
0:29:55 is going to happen.
0:29:56 I’ll give you one or two analogies.
0:30:00 If you were a musician back in the day,
0:30:01 you know, if you and I were starting a band,
0:30:04 we would have to go and get studio time here in New York City
0:30:05 or Electric Ladyland or whatever,
0:30:07 you bring your instruments, okay,
0:30:09 maybe you could rent the instruments there
0:30:12 and then GarageBand and Logic and Pro Tools pops up.
0:30:15 And now we don’t have to be in the same physical space.
0:30:17 My instrument is virtualized.
0:30:20 I can create a temporal sequence of notes.
0:30:22 I can layer them in, you could play drums,
0:30:23 you could do vocals, blah, blah, blah, okay.
0:30:25 Science is the same thing.
0:30:26 I can be on the beach in the Bahamas
0:30:28 and conjure a hypothesis
0:30:31 and use one of the AIs to test the hypothesis
0:30:34 and look at past literature searches,
0:30:36 see freedom to operate, see if there’s white space
0:30:39 and then tell one of these cloud labs
0:30:41 that is literally like sending something to AWS
0:30:44 back in the day and say, run this experiment.
0:30:46 And the beauty of this is the robot will do the,
0:30:48 well, here’s the beauty, the virtue of the vice.
0:30:49 The robot will do the experiment
0:30:51 and it should do it perfectly because it’s digital
0:30:54 and it’s high fidelity.
0:30:56 The vice is so much scientific breakthrough
0:30:59 has often happened because of serendipitous screw ups.
0:31:02 And so you want almost to engineer
0:31:04 like a temperature on an AI model,
0:31:06 a little bit of stochastic randomness
0:31:08 so that the machine can sort of screw it up
0:31:09 to see what might happen because, you know,
0:31:13 penicillin and Viagra and rubber and Vulcan,
0:31:15 all these things happen by like random processes
0:31:17 and then post-fact, we’re like, huh, that’s funny.
0:31:19 And then, you know, you run with it.
0:31:22 But then the machine will say, here’s the results.
0:31:23 And it will then reverse prompt you
0:31:25 and say, do you want to run the experiment again
0:31:27 but changing the titration of this
0:31:28 to 10 milliliters instead of five?
0:31:30 And you just click a button from the beach
0:31:31 and you’re like, yes, and the robots run it.
0:31:35 Whoever ends up creating and building that,
0:31:36 I think is going to make a fortune.
0:31:37 – Well, you don’t even have to decide.
0:31:39 The robots could decide, yes, right?
0:31:40 – Totally.
0:31:41 – And you’re sort of out of the loop
0:31:42 and it just outputs science.
0:31:43 – Totally.
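A rough sketch of that closed loop, with every function a hypothetical stand-in: an AI proposes an experiment, a cloud-lab robot runs it, the result prompts the next variation, and a temperature knob injects the engineered serendipity he describes.

```python
# Hypothetical closed-loop "science 24/7": propose, run, revise, repeat.
# The temperature parameter deliberately perturbs the design so the machine
# can make serendipitous "mistakes." All functions are illustrative stubs.
import random

def propose_experiment(hypothesis):          # stand-in for AI-drafted methods
    return {"hypothesis": hypothesis, "titration_ml": 5}

def perturb(design):                         # e.g., titrate 10 ml instead of 5
    return {**design, "titration_ml": design["titration_ml"] * random.choice([0.5, 2])}

def cloud_lab_run(design):                   # stand-in for the robot executing it
    return random.random()                   # pretend result score

def revise(design, result):                  # the "reverse prompt": keep or vary
    return design if result > 0.5 else perturb(design)

def autonomous_loop(hypothesis, rounds=10, temperature=0.1):
    design = propose_experiment(hypothesis)
    for _ in range(rounds):
        if random.random() < temperature:    # a little stochastic randomness
            design = perturb(design)
        design = revise(design, cloud_lab_run(design))
    return design

print(autonomous_loop("compound X inhibits enzyme Y"))
```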
0:31:45 – That would be so interesting.
0:31:46 Before we get to that point,
0:31:47 we’ll probably get to the point
0:31:49 where models make themselves better.
0:31:51 Is that the point where it really starts
0:31:54 to go like parabolic almost?
0:31:55 – I don’t know.
0:31:58 I definitely see that models can improve
0:32:01 ’cause you can even argue like deep seeks R1
0:32:03 is a model that was improving upon outputs
0:32:06 from chat GPT and so on.
0:32:08 So I definitely think that there will be
0:32:10 this recursive improvement,
0:32:14 but you’re still going back to being rate limited by time
0:32:17 and biological or chemical reactions.
0:32:20 You still need to instantiate this
0:32:21 into a physical experiment.
0:32:23 And so you can model and simulate all you want,
0:32:26 but then you actually have to like do the thing
0:32:27 and make the compound.
0:32:31 And so those still take steps and organic chemistry
0:32:33 and there’s like 20 reaction steps
0:32:36 and people optimize to like reduce them down to six.
0:32:38 And you still need the physical reagents
0:32:41 and the right temperature and the experimental design.
0:32:43 So I still think that that’s gonna be the bottleneck,
0:32:46 but for sure like the ideation and experimental design
0:32:49 is gonna, that’s just gonna absolutely explode.
0:32:52 And then you’ll have these automated labs
0:32:53 with lots of different instruments
0:32:55 where robots will be able to take out
0:32:57 a sample from a centrifuge and put it into the next thing.
0:33:00 And like you don’t need humans to do that
0:33:02 any more than you need humans to assemble
0:33:04 sophisticated iPhones.
0:33:05 I mean, we still have very cheap labor
0:33:06 and Foxconn factories in China
0:33:08 and Vietnam and elsewhere now doing that,
0:33:11 but there’s no reason for that over time.
0:33:12 – Isn’t that low-hanging fruit though?
0:33:15 Just like look at all the work that we’ve done so far
0:33:17 and tell us where we’re on the right track
0:33:19 and where we’re sort of like we’re going astray.
0:33:21 And like there’s nothing preventing that
0:33:22 from happening today.
0:33:24 I mean, it’s very like David Doi chains.
0:33:25 Like if it obeys the laws of physics,
0:33:27 that should be possible, there’s nothing about this
0:33:29 that is like totally speculative and fantastical
0:33:31 that it doesn’t obey the laws of physics.
0:33:33 – The other project that I wanted to,
0:33:35 I was thinking about sort of doing
0:33:38 is just, calling it like a priorart.org or something,
0:33:41 and having AI read through all the patents
0:33:44 and make the next adjacent sort of patent-like
0:33:46 improvement and then just publish it
0:33:47 ’cause then there’s prior art.
0:33:48 – Right.
0:33:50 Well, yeah, there will for sure be AI patent trolls
0:33:52 if you put it negatively.
0:33:53 – This would dissuade that.
0:33:55 I mean, in a sense, it would sort of be making
0:33:59 prior art for as much as you can 24/7.
0:34:01 – There are companies that do this
0:34:02 where they have creative patent filers
0:34:04 for continuations in part so that they can keep
0:34:06 sort of the life of this going.
0:34:10 But the rise of agentic AI,
0:34:13 you can have some crazy idea, some brain fart at,
0:34:15 you know, 9:30, 10 o’clock at night.
0:34:17 And you just had, you know, a cocktail with friends.
0:34:20 You’re like, oh, like imagine, you know, if that exists.
0:34:22 Well, do the research, does this exist?
0:34:25 And if it doesn’t exist, can you, you know, write a patent
0:34:27 and sketch a diagram for me and file it
0:34:29 and start to incorporate a company?
0:34:31 Now, all those things have to go at the speed
0:34:33 of like certain processes,
0:34:35 but all of that could be done overnight
0:34:36 where you literally wake up
0:34:39 and there are multiple agents working on your behalf
0:34:42 that have filed a patent, created a design,
0:34:45 incorporated a company, possibly even put it out
0:34:48 to some group and raised money for it overnight
0:34:49 that have opted into it.
0:34:52 And I don’t know, if it was like successful enough
0:34:53 and it hit a bunch of criteria,
0:34:57 I might allocate some capital into an account
0:35:01 to allow a robot AI to actually receive pitches,
0:35:03 respond to it and create a small portfolio
0:35:04 to allocate as an experiment.
0:35:07 So you can see this whole thing is just like a human idea
0:35:10 or maybe one inspired by interactions with an AI
0:35:13 that by the time you wake up, you have a company started
0:35:17 and the basis for people to actually do work.
0:35:20 Like that in 10 or 20 years, people look back
0:35:23 and be like, how did we not see that coming?
0:35:24 And look at all the jobs that are being created
0:35:27 because every single person is now creating
0:35:29 and has like six virtual companies.
0:35:31 – The future is gonna be wild.
0:35:32 – Yeah.
0:35:34 – Talk to me, let’s go back to Tesla for a second.
0:35:36 Why the hate on Tesla?
0:35:40 – Let me say, I think Elon is amazing at certain things.
0:35:45 Elon is arguably the greatest storyteller, fundraiser,
0:35:50 inspiration for anybody in the past, maybe of all time.
0:35:53 Truly, I think his relationship with the truth
0:35:55 has been questionable.
0:35:58 And so in Tesla particularly, I think there was a time
0:36:01 where the short sellers started to identify things
0:36:03 not because they just hated the company
0:36:04 or hated the future or any of this.
0:36:06 And he was able to very shrewdly weaponize
0:36:08 the us versus them.
0:36:10 They’re trying to kill us, right?
0:36:11 Most short sellers that I know
0:36:13 happen to be very disaffected people
0:36:15 and they have a chip on their shoulder.
0:36:19 And to me, the motivating force
0:36:21 and the incentive for them is not that
0:36:25 they just want to make money, but they want to be right.
0:36:27 And they want to be right because they’ve identified
0:36:30 somebody that they think is intentionally doing wrong.
0:36:32 It’s the same thing as an investigative journalist.
0:36:36 It’s the same thing as a opposition research
0:36:37 for a politician.
0:36:38 It’s the same thing for somebody
0:36:43 that is trying to debunk a Sunday preacher charlatan
0:36:46 that basically is almost intellectually competitive
0:36:49 to say you are trying to pull the wool
0:36:52 over these people’s eyes and I know what you’re doing
0:36:54 and I’m going to call you out on it.
0:36:58 And so for me with Tesla, I think that they got away
0:37:00 with accounting fraud on warranty reserves
0:37:02 and a whole bunch of other things.
0:37:04 I think there was a lot of prestidigitation
0:37:07 and magic of look over here while we’re doing this.
0:37:10 And today it doesn’t matter because they got away with it.
0:37:14 But I think that there was not the same kind of honesty
0:37:18 that I would ascribe to Jeff Bezos in how he built Amazon
0:37:21 and raised a few hundred million dollars of equity.
0:37:22 You could look at stock-based compensation.
0:37:24 You could look at debt as capitalization,
0:37:29 but created this monster that is profitable cashflow positive
0:37:31 and never raised another dollar of equity.
0:37:33 And Elon raised north of $50 billion,
0:37:38 took out $50 billion, treated it like an ATM,
0:37:40 said I’m never selling a share and sold lots of shares.
0:37:43 And then whether he had to do it to buy Twitter or whatever.
0:37:46 I just, I don’t feel it was done as honestly
0:37:48 as other entrepreneurs that I greatly admire.
0:37:51 Now that said, SpaceX, I have no issue with.
0:37:53 I think SpaceX is an extraordinary company.
0:37:57 I think it’s an incredibly important American company.
0:37:59 I think without it, we would be at a massive disadvantage.
0:38:01 I think it is truly a national treasure,
0:38:03 run by Gwynne Shotwell, incredible engineers.
0:38:05 We’ve backed a bunch of these engineers
0:38:07 that have come out from Tom Mueller
0:38:10 to that I just, I think the world of.
0:38:13 I’ve just, yeah, I’ve been much more critical
0:38:15 about Elon’s relationship with the truth
0:38:17 as it came to Tesla and in many ways,
0:38:19 I felt like the whole thing was unnecessary.
0:38:21 – Do you think those are one-time things
0:38:23 or they’re systemic and they crop up
0:38:24 every few months or something?
0:38:25 – In his personality?
0:38:26 – Yeah.
0:38:27 – He’s past the stratosphere now.
0:38:32 Like it’s, you know, he’s proximate to power in ways
0:38:34 that people can’t compete with.
0:38:39 If you’re an investor in or you’re Sam Altman in open AI,
0:38:40 you’re not only worried about competition,
0:38:42 you’re worrying about a personal grudge
0:38:44 from somebody who has the ear of the president
0:38:48 that can weaponize all kinds of systems of power
0:38:51 from the DOJ to the FTC to the FBI.
0:38:54 And I would be very nervous
0:38:56 being an adversary with that kind of power.
0:38:57 So.
0:38:58 – Altman came out and said he didn’t think
0:39:01 Elon would use that power against him.
0:39:03 – Which is a nice and smart thing
0:39:06 and a necessary thing to say publicly.
0:39:07 And I think that Elon has even said like,
0:39:09 I won’t use that, but.
0:39:11 – What’s the saying, power corrupts?
0:39:12 – Yeah, and absolute power corrupts absolutely.
0:39:15 But I don’t know, like you’re in a position
0:39:20 of power at DOJ and OMB and the Office of Personnel Management,
0:39:23 and you have influence
0:39:26 and you can shut some of these things down.
0:39:27 Does Elon love the SEC?
0:39:31 You know, he’s been pretty vocal about that institution.
0:39:35 Does he love the National Highway Traffic Safety Administration?
0:39:37 So if these things are gutted, you know,
0:39:39 I think you’ve got more free reign
0:39:41 to shut down criticism.
0:39:43 You know, you’ve got similarly the best entrepreneurs
0:39:47 when short sellers are like, you know, saying something,
0:39:48 they don’t want to ban short selling.
0:39:50 They don’t want short sellers to be arrested.
0:39:52 They just prove them wrong.
0:39:53 And so.
0:39:55 – My favorite story about that was Brad Jacobs.
0:39:59 A short report came out and the stock dropped
0:40:04 precipitously. He borrowed $2 billion, bought back shares.
0:40:06 – Yeah, this is big skin in the game, right?
0:40:07 – Right, totally.
0:40:09 – We’re gonna double down and go forward.
0:40:12 I think he turned that two into 10 or eight or something.
0:40:13 Like it was just crazy.
0:40:15 – So for me, I don’t know, 13 or so,
0:40:19 like after I was bar mitzvahed, I became an atheist.
0:40:22 And I just wouldn’t, I would see like these preachers
0:40:24 exploiting people.
0:40:26 It just like irked me and it irked me in this.
0:40:28 It wasn’t in some, I reflected on this over the years.
0:40:32 It wasn’t in some virtuous, holier-than-thou kind of thing.
0:40:33 It was intellectually competitive.
0:40:35 It’s like, I see what you’re doing.
0:40:38 You’re running a con and I wanna call it out.
0:40:41 And it wasn’t rooted in like self virtue
0:40:43 of like pursuit of truth.
0:40:45 The real thing when I like thought about it is,
0:40:48 no, I wanna show that you’re cheating people.
0:40:50 – Short sellers are necessary to a well-functioning market,
0:40:51 right? – I think so.
0:40:52 – We need to hear both sides of the story
0:40:54 and make our own judgments and decisions.
0:40:56 At what point do you think that computers
0:40:58 are gonna really make most of the investing decisions?
0:41:02 – Well, you could argue today they are,
0:41:04 not because they’re doing reasoning and analysis
0:41:06 and fundamental work, but because the structure
0:41:11 of the market is so dominated by passive indexation.
0:41:13 And that is effectively an algorithm.
0:41:18 And that algorithm says $1 in buy, $1 out sell.
0:41:21 And in both cases, indiscriminately.
0:41:24 And so you just have a flood of money
0:41:27 that goes into the market and these indices buy everything.
0:41:29 And it becomes this massive market cap weighted,
0:41:32 accelerant and then people say sell
0:41:33 and then the money just comes out.
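That algorithm is literal enough to write down; the tickers and market caps below are made up, but the mechanics are the point: flows buy and sell everything pro rata to market cap, with no judgment about fundamentals.

```python
# The passive-indexation "algorithm": a dollar in buys every constituent,
# weighted by market cap; a dollar out sells the same way, indiscriminately.
# Tickers and caps are invented for illustration.
market_caps = {"AAA": 3000.0, "BBB": 1000.0, "CCC": 500.0}  # billions

def index_flow(dollars: float) -> dict:
    total = sum(market_caps.values())
    # the biggest names absorb the most flow, which is the accelerant effect
    return {t: round(dollars * cap / total, 4) for t, cap in market_caps.items()}

print(index_flow(1.0))    # $1 in: buy everything pro rata
print(index_flow(-1.0))   # $1 out: sell everything pro rata
```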
0:41:36 So the past, I don’t know, 10 plus years
0:41:37 where this has really become the case
0:41:40 with Fidelity and BlackRock and State Tree
0:41:42 and others that the ETFs, which were well-intentioned,
0:41:44 you know, you go and listen to Buffett back in the day.
0:41:46 It’s like, just put it in the market, right?
0:41:49 It’s hard for active managers to out-compete.
0:41:52 Definitely the case for the past 10 or 15 years.
0:41:55 But I do think that we will see a return
0:41:59 to active managers that are able to discriminate
0:42:01 true fundamentals in part because I think
0:42:03 that the cost of capital is just going to rise
0:42:06 and all the funny money of the past 10 years
0:42:07 is going to wash out.
0:42:09 – Two questions, two rabbit holes.
0:42:09 I want to go down here.
0:42:11 One, at what point do you think active managers
0:42:14 and analysts are replaced by AI in the same way
0:42:17 that Zuck is saying an engineer at Meta
0:42:19 is going to be replaced by AI?
0:42:23 – Already, there are AIs that can not only go through Qs
0:42:27 and Ks, 10-Qs and 10-Ks, and can listen
0:42:30 to quarterly earnings reports and CEOs
0:42:33 that are talking at conferences or on podcasts
0:42:35 and can get an emotional sentiment,
0:42:37 can see where they’re varying their language
0:42:40 in ways that only the subtlest of analysts in the past
0:42:42 or portfolio manager could do.
0:42:44 And I think the most valuable thing that AI is going to do
0:42:47 when you ask it questions and it comes up with the answers
0:42:49 and assuming those answers are accurate
0:42:51 and cross-correlated and double-checked,
0:42:55 is they actually say, here’s the five questions you didn’t ask.
0:42:58 And so that is going to unleash real insight.
0:43:00 Now, there is still this human aspect
0:43:03 of being able to look at somebody and decide,
0:43:05 do I trust them or not?
0:43:10 And I think that the best analysts are able to say,
0:43:12 very Buffett-like or Joel Greenblatt-like,
0:43:14 is this a good business?
0:43:15 And there’s ways to measure that,
0:43:17 like a fundamentally good business,
0:43:18 even if an idiot was running it,
0:43:21 and then do I think it’s had a good price
0:43:23 and is therefore my expected return going to be high
0:43:25 and do I trust the people that are running it
0:43:27 because ultimately I am allocating capital to them
0:43:29 in the same way that somebody allocates capital to us.
0:43:31 I like the virtue of our private markets
0:43:34 because I am less, we’re still beholden,
0:43:37 but far less beholden to a day-to-day market,
0:43:42 Mr. Market, fluctuation of manic depressive positivity
0:43:44 or pessimism.
0:43:46 We have 10-year locked funds.
0:43:48 We’re able to make long-term bets.
0:43:50 It’s arguably this great source of time arbitrage
0:43:52 when everybody else is looking and discounting back a year
0:43:54 or 18 months or two years.
0:43:57 But you think about the three main sources of edge
0:43:58 and we’ve talked about this in the past,
0:44:01 but informational, analytical, and behavioral.
0:44:04 Informational advantage used to exist a long time ago,
0:44:08 regulations like Reg FD and avoidance of insider trading
0:44:12 and information that tried to equalize the playing field
0:44:15 in addition to a huge influx of really brilliant people
0:44:19 that are able to use cutting edge data tools.
0:44:21 Having an information advantage is really hard.
0:44:23 Having an analytic advantage where AI can play a role
0:44:26 in that, let’s assume we all have the same information
0:44:28 and I don’t have any better intel or information.
0:44:30 Like for example, just an aside on the information advantage,
0:44:33 there was a hedge fund, which I won’t name,
0:44:36 which was very cleverly going and actually buying stuff
0:44:38 online from Adobe.
0:44:41 Every time they did, they got a piece of legal
0:44:45 inside information, which is that Adobe’s web URL,
0:44:49 when you made a purchase for Creative Cloud,
0:44:52 would show you an order number, 4,723 or whatever.
0:44:54 And then like six hours later, they would go again,
0:44:56 and it was like 4,000,000, and so they could infer
0:45:00 and extrapolate what the sales were,
0:45:02 because when they bought it again
0:45:04 six hours later or whatever, they saw
0:45:07 where they were in the queue and you can sort of extrapolate.
0:45:10 That was legal to do, but once that signal leaks out
0:45:11 and somebody’s like, oh, that’s a clever way
0:45:15 to figure that out, then it rapidly erodes.
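[Editor's note: the trick described here is a classic order-statistics inference: if a checkout page exposes a sequential order ID, two purchases spaced in time reveal order velocity. A minimal sketch in Python; the IDs, dates, and average order value are purely illustrative assumptions, not Adobe's actual figures.]

```python
from datetime import datetime

def estimate_order_rate(id_1: int, t_1: datetime,
                        id_2: int, t_2: datetime) -> float:
    """Orders per hour implied by two sequential order IDs."""
    elapsed_hours = (t_2 - t_1).total_seconds() / 3600
    return (id_2 - id_1) / elapsed_hours

# Two hypothetical test purchases, six hours apart:
rate = estimate_order_rate(4_723, datetime(2015, 3, 2, 9, 0),
                           5_891, datetime(2015, 3, 2, 15, 0))
avg_order_value = 50.0  # assumed revenue per order
quarterly = rate * 24 * 90 * avg_order_value
print(f"~{rate:.0f} orders/hour, ~${quarterly:,.0f} implied quarterly sales")
```

Once others notice the same sequential IDs, the signal gets arbitraged away, which is exactly the erosion described next.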
0:45:17 So informational advantage: hard. Analytical advantage,
0:45:20 brilliant people combined with brilliant technology,
0:45:22 really hard, and then it goes into this last one,
0:45:25 which is the behavioral, and that, to me,
0:45:27 is the persistent thing.
0:45:30 Now, AIs over time will understand us in many ways better
0:45:32 than we understand ourselves.
0:45:35 Google already arguably knows more about you
0:45:37 than the closest people in your life based on the things
0:45:40 that you search for, search in private for,
0:45:41 and AI will as well.
0:45:44 And already there’s an eerie moment that I appreciate
0:45:47 because I’ve given myself over to the information gods.
0:45:51 There’s a required energy to try to maintain privacy,
0:45:53 and I just feel like it’s not worth it.
0:45:55 We can talk about privacy ’cause most people basically
0:45:57 just wanna keep private their sex life,
0:46:00 their bathroom time, and how much money they have,
0:46:03 unless you are super rich or super poor.
0:46:05 Because if you’re super rich, you broadcast.
0:46:08 You’re on the Forbes 100 or the 400.
0:46:09 You’re showing the house you just bought,
0:46:11 the art you just bought, you’re signaling your wealth.
0:46:14 And if you’re broke, if you’re really poor, you broadcast too.
0:46:15 There’s people that are on Twitter like,
0:46:17 I’m dead-ass broke, I have no money.
0:46:19 It’s literally the people that are in the middle,
0:46:21 that are middle class but want people to think
0:46:23 that they’re upper class, who keep it private.
0:46:26 And so everything else for privacy,
0:46:27 I think is like out the window.
0:46:30 But the reason I was saying this is,
0:46:32 I’ve given myself over to the information gods,
0:46:35 and when I go to ChatGPT, because it has memory,
0:46:36 and it’s constantly updating it,
0:46:38 there are times where I remember this thing,
0:46:39 and I’m like, how did you know that?
0:46:41 And I forgot that it was like from a search three months ago
0:46:43 where I mentioned something about my kids,
0:46:44 and a place that I like to vacation.
0:46:46 And part of me actually appreciates
0:46:48 that that repository is compounding,
0:46:51 but it does sort of scare you
0:46:54 because it remembers the things that we talked about.
0:46:56 And then if you were like, oh, like I heard you went to that,
0:46:57 I’d be like, well, who’d you hear that from?
0:47:00 But now if I asked the AI, who’d you hear that from?
0:47:01 It would be like, you, you told me that.
0:47:04 – You told me, wait ’til it gets on your device.
0:47:05 – Exactly.
0:47:07 – And then, okay, well, let’s go back up this rabbit hole
0:47:11 a little bit here, back to passive indexation.
0:47:13 So the rise of this is really post 2010, right?
0:47:16 The mass rise of passive indexing.
0:47:20 We’ve never seen, well, we did during COVID,
0:47:22 but there was so much money thrown into the system.
0:47:25 What do you think the second, third order effects of this
0:47:28 will be, especially in terms of volatility
0:47:30 or something unforeseen?
0:47:34 – Well, you saw this a little bit with just how quickly
0:47:38 the market reacted with the single largest one-day loss,
0:47:41 five, six, 700 billion dollars, in Nvidia,
0:47:43 just because of the fear over DeepSeek.
0:47:47 The fear was a cascading, traditional information cascade.
0:47:49 Oh my gosh, what does this mean for the expectations
0:47:53 we have about demand for compute and CAPEX
0:47:54 and the expenditures that people are gonna have?
0:47:57 Do we need to rethink this mental model?
0:48:00 And so I think that there’ll be things like that,
0:48:03 that whipsaw and shock people.
0:48:05 – Does it become reflexive at some point?
0:48:08 Like maybe at 500 million or 500 billion
0:48:12 it didn’t, but had that hit 700 or 800 or a trillion,
0:48:16 does it then start the auto-selling and the–
0:48:17 – That’s a good question.
0:48:18 I don’t know.
0:48:21 Obviously systems where there are significant leverage
0:48:23 in the system are ones that are most prone to that,
0:48:25 these sort of Minsky moments where, you know,
0:48:26 things are going fine, going fine,
0:48:28 and then suddenly they just collapse.
0:48:30 And usually that’s where you have a lot of leverage
0:48:31 in the system.
0:48:32 And sometimes it’s hidden leverage.
0:48:34 I don’t know, other than some of the 2x or
0:48:39 3x levered ETFs, that that’s the case in traditional markets.
0:48:42 But where does this go from here?
0:48:45 – I’m not sure on the passive active piece,
0:48:50 what will break this other than if you were to have
0:48:53 widespread news reports of a handful of active managers
0:48:56 that are suddenly beating the market, you know, decisively,
0:48:59 and they’re pointing to the structure of the market.
0:49:02 And so there’s a rebalancing where people start
0:49:04 to shift out of these things.
0:49:05 On the volatility piece,
0:49:08 what’s interesting is over the past five, six, seven years,
0:49:10 maybe five especially,
0:49:14 a lot of LPs, allocators have gone into private credit,
0:49:16 have gone into private equity,
0:49:20 in part because they are mechanisms of muting volatility
0:49:22 and the vicissitudes of the market
0:49:24 because you don’t have daily market to market.
0:49:28 And so there’s been a little bit of this perverse incentive,
0:49:30 but as Buffett says, you know,
0:49:32 what the wise do in the beginning, the fool does in the end,
0:49:33 and then these things get overdone.
0:49:37 And so I’m actually worried about some of those asset classes
0:49:39 where private credit in particular,
0:49:43 I think was wise to do a few years ago and now is overdone.
0:49:45 You have another phenomenon,
0:49:47 which is every major sophisticated,
0:49:51 large private equity firm, Apollo, KKR, Carlyle, et cetera,
0:49:53 are all starting to think about,
0:49:56 or are actively thinking about both permanent capital
0:49:58 in the form of insurance vehicles like Apollo
0:50:01 and accessing retail in a huge way.
0:50:04 Many people see retail being the next wave of this.
0:50:05 – And when you say retail,
0:50:07 you actually just mean normal day-to-day consumers.
0:50:10 – Individual investors that might’ve been on Robinhood
0:50:13 and could never before access Apollo or Carlyle,
0:50:16 but in aggregate, you’re talking about
0:50:18 trillions of dollars of investor money.
0:50:21 And so I think that they’re gonna be tapped,
0:50:22 they’re gonna be into these vehicles.
0:50:27 That will present new interesting financial vehicles
0:50:29 because you’re gonna have to find ways
0:50:31 to give people liquidity for these things.
0:50:34 And so I’ve heard about some interesting things,
0:50:36 actually from a friend, Mike Green,
0:50:39 who I think is a really smart practitioner
0:50:42 and student of markets.
0:50:45 He was one of the earliest to this passive active piece.
0:50:46 He was one of the earliest to understanding
0:50:49 the mechanisms behind the scenes for the SPAC movement.
0:50:53 He’s early now to this idea of Uniswap,
0:50:58 which has a certain mechanism that provides liquidity
0:51:01 by having, let’s say like 80 or 90% in treasuries
0:51:03 and 10% in some underlying.
0:51:07 And you’re able to swap out some illiquid thing
0:51:09 for effectively some liquid pool, up to some point
0:51:11 where there’s some repricing.
0:51:14 And he’s been thinking about
0:51:17 something that like Apollo is doing with State Street
0:51:19 and that this portends a movement
0:51:22 into these almost artificial,
0:51:23 like if I would have talked about ETFs 20 years ago,
0:51:25 people would be like, “What is it?
0:51:26 Don’t we have mutual funds already?”
0:51:28 And they’re like, “No, but they’re gonna go super low fee
0:51:31 and you’ll be able to trade them on a daily basis.”
0:51:34 There’s something here to watch about the flood
0:51:38 of retail money that will go into illiquid alts,
0:51:39 private equity in particular,
0:51:42 and new vehicles that are formed
0:51:45 to be able to provide liquidity because of that.
0:51:48 And I think that that’s both gonna be really interesting
0:51:49 and potentially to your point,
0:51:53 creates something that sets up some massive blow up.
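[Editor's note: one way to read the liquidity mechanism sketched above, a large liquid sleeve of treasuries against a small illiquid position, paying redemptions at NAV until the sleeve runs thin and then repricing, is the toy model below. Every name and number is a hypothetical illustration, not a description of any actual Apollo/State Street or Uniswap product.]

```python
class LiquiditySleeve:
    """Toy vehicle: a ~90% treasuries buffer against a ~10% illiquid holding."""

    def __init__(self, treasuries: float, illiquid: float, min_sleeve: float):
        self.treasuries = treasuries  # liquid sleeve, pays redemptions at par
        self.illiquid = illiquid      # illiquid underlying
        self.min_sleeve = min_sleeve  # floor below which redemptions reprice

    def redeem(self, amount: float) -> float:
        """Full liquidity while the sleeve is deep; a haircut once it thins."""
        if self.treasuries - amount >= self.min_sleeve:
            self.treasuries -= amount
            return amount
        haircut = 0.95  # assumed repricing discount
        paid = amount * haircut
        self.treasuries = max(self.treasuries - paid, 0.0)
        return paid

fund = LiquiditySleeve(treasuries=90.0, illiquid=10.0, min_sleeve=20.0)
print(fund.redeem(30.0))  # 30.0: paid in full from the sleeve
print(fund.redeem(50.0))  # 47.5: sleeve too thin, redemption repriced
```

The design point is the one raised here: the liquidity is real only up to the sleeve's depth, which is where the blow-up risk hides.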
0:51:55 – The other thing that I wanna come back to
0:51:58 in the rabbit hole here is you mentioned
0:52:01 persistent advantage is behavioral.
0:52:02 – Yes.
0:52:04 – Talk about that in the context of humans
0:52:06 and how do we create an unfair advantage
0:52:08 in a world of AI for humans?
0:52:10 Like what are the ways that we can,
0:52:12 like your kids, like how are you teaching them
0:52:15 to navigate this in a way that gives them–
0:52:16 – In advantage.
0:52:18 – An advantage, a behavioral advantage.
0:52:21 – So you go back some years and it was like,
0:52:26 okay, we can still be computers in chess, done.
0:52:27 We can still beat them and go, done.
0:52:28 We can beat them in video games, done.
0:52:31 Okay, but we still have creativity, done.
0:52:34 Now, of course, human creativity is not dead,
0:52:37 but every day I am doing something creative with AIs
0:52:37 that I can’t do myself.
0:52:40 I cannot paint, I cannot draw, I cannot conjure.
0:52:43 I like taking photographs and I like the composition of that.
0:52:46 But I can engineer prompts,
0:52:48 which itself is an act of creativity
0:52:52 and get the most inspiring muses and results.
0:52:54 And I can take works of art that I like
0:52:56 and put them in and ask it to describe it
0:52:59 and do it six times, particularly in like Midjourney,
0:53:04 and recreate from the prompt some alternative of it.
0:53:05 There was even an artwork that I loved,
0:53:09 which was this mixed mash of superheroes
0:53:11 that looked like it was put through a blender.
0:53:12 And I was gonna buy the artist’s work
0:53:15 and then I couldn’t describe it to Lauren, my wife.
0:53:18 And so I put it into Midjourney and described it.
0:53:19 And then I just clicked a button
0:53:21 and it made four versions of it.
0:53:23 I guess 16 versions ’cause each one was four.
0:53:25 And it was insane.
0:53:26 And I was like, why am I gonna buy this?
0:53:30 ‘Cause I just recreated it and I felt morally bad,
0:53:31 even though I wasn’t copying.
0:53:33 But I had a perfect description of the style.
0:53:36 And so I thought that that was pretty wild.
0:53:40 So these are tools that I think kids should be using.
0:53:40 They should be learning.
0:53:42 It’s just like a language.
0:53:45 And I think they need to be versed with it,
0:53:49 in part to understand the domains to avoid,
0:53:54 because they’re gonna be, not void of emotional
0:53:58 or aesthetic or moral value, but ones where you’re not gonna make money.
0:54:01 And so my wife and I debate this all the time about dance.
0:54:03 Dance is amazing.
0:54:09 We started dating and she took me to this dance thing.
0:54:12 I was really not into dance at all.
0:54:15 It was this guy, David Parsons, at the Joyce Theater.
0:54:18 And he has this performance called “Caught.”
0:54:20 And me being into science and technology,
0:54:21 I’m watching a bunch of people dance.
0:54:22 I’m not really into it.
0:54:27 All of a sudden, it’s a weird ambient sound,
0:54:29 very electronic, very tron-like.
0:54:33 And there’s a guy in white pants and that’s it.
0:54:35 And then a strobe light goes on.
0:54:37 Anyway, and the strobe starts flashing.
0:54:42 And all you see as a viewer is this person floating in the air.
0:54:45 But behind the scenes, what they’re really doing is
0:54:47 jumping, jumping, jumping, jumping perfectly timed
0:54:49 to the choreography of the strobe.
0:54:51 So you see them caught,
0:54:54 but they’re doing this crazy kinetic athletic thing.
0:54:56 And I was just like, that’s super cool, right?
0:54:58 Everybody should see this, right?
0:54:59 And it’s just like inspiring.
0:55:02 Now that said, we had this debate afterwards
0:55:04 because she’s like, you know, these dancers make no money.
0:55:06 And I’m like, well, market forces would say
0:55:09 that there’s too many dancers or there’s not enough demand.
0:55:12 And they collect unemployment insurance for half the year
0:55:14 because they have to work other jobs
0:55:15 and they don’t have this and it’s just wild.
0:55:18 And so we got into a debate about,
0:55:20 what is the societal value of this?
0:55:22 Now, I found it valuable and I would go
0:55:24 and I would pay money and be a patron,
0:55:26 but I wouldn’t want my kids to go and pursue that
0:55:28 unless it was that they were solving for
0:55:32 just the aesthetic passion that they had.
0:55:35 But I think that there’s very few crevices
0:55:38 where AI will not creep in
0:55:41 and either be able to do the thing nearly as good,
0:55:45 including, you think about the compression
0:55:47 that we all enjoy of a Game of Thrones episode,
0:55:51 the compression algorithm of all of that talent,
0:55:55 set design, special effects, screenwriting,
0:55:57 a lifetime of acting and performing,
0:55:59 you know, just swish down.
0:56:01 One of our companies, Runway ML.
0:56:04 You can conjure with it, and there are others.
0:56:06 Today it’s 10 seconds, but tomorrow it’ll be two minutes
0:56:09 and full feature films with no key grips, no lighting,
0:56:12 no costume design, no set design, no actors.
0:56:15 And voice is entirely generated by AI.
0:56:20 And so does that strip this art of its soul kind of thing?
0:56:23 I don’t think so.
0:56:26 I think it just creates a new form of art, just like Pixar.
0:56:27 You know, for all the people that were doing
0:56:29 Disney Mickey animations by hand,
0:56:31 and then suddenly we get this 3D rendered graphics
0:56:34 and incredible storytelling, that will be timeless,
0:56:35 but the tools that we will use
0:56:38 will very rapidly replace these things.
0:56:41 So I think our kids should be embracing
0:56:43 and using all of these tools.
0:56:45 The only tool that I restrict them from is TikTok,
0:56:48 and that’s for a variety of reasons, bad influence
0:56:50 and the Chinese Communist Party.
0:56:54 But otherwise I want them learning how to use every tool
0:56:57 as they would every appliance in a kitchen.
0:56:58 – Outside of those tools,
0:57:01 where do you think the advantage comes from?
0:57:03 Does, like, a networking advantage
0:57:05 in who you know become more pronounced?
0:57:07 – I think it’s this.
0:57:08 I think it’s human to human.
0:57:10 I think that if you always can frame things
0:57:12 as like what’s abundant and what’s scarce,
0:57:16 in a world where there’s gonna be an abundance of access
0:57:18 to information and abundance of access
0:57:22 to creative construction of things,
0:57:26 art and literature and movies and just by the way,
0:57:30 as an aside, I also will take boring PTA messages
0:57:33 from our school and I will put them into large language
0:57:35 models and send them to my wife where I’m like,
0:57:36 do this in the style of Matt Levine,
0:57:38 the Bloomberg daily writer,
0:57:39 or do this in the style of Shane Parrish,
0:57:43 or do this in the style of Al Swearingen from Deadwood.
0:57:45 And it’s just, it’s absolutely entertaining and brilliant.
0:57:48 It takes something that’s so boring
0:57:52 and cliched and hackneyed and like just brings it to life,
0:57:52 right?
0:57:53 – I do the same thing.
0:57:54 – So it’s a lot of fun, right?
0:57:55 It actually makes this stuff interesting,
0:58:00 but the advantage is gonna come in,
0:58:01 what do I do with that when I output it?
0:58:04 I enjoy it for a second, but then I share it.
0:58:05 And so all of these things, all the value,
0:58:07 all the market capitalization of all social media
0:58:09 is about sharing.
0:58:10 And I still think that we’re gonna produce,
0:58:13 we’re gonna share and what becomes scarce is this,
0:58:17 is like human connection, because we are still human
0:58:18 and we still want that.
0:58:20 We wanna be hugged and we want intimacy
0:58:22 and we wanna laugh with each other.
0:58:25 I mean, I’ll share just a quick aside from this,
0:58:27 like two or three months before Danny Kahneman
0:58:29 died last year.
0:58:31 Lauren and I went over with a filmmaker
0:58:35 and another woman and we had a dinner with Danny
0:58:39 and his partner, Barbara Tversky, who was Amos’s wife.
0:58:44 And we were talking about aging and getting older
0:58:46 and memories, and Danny had this great point,
0:58:49 that the pleasure of pleasurable things
0:58:53 got less pleasurable,
0:58:57 but less so than the pain of painful things got less painful.
0:59:01 So for him, the loss of a friend,
0:59:03 you became a little bit anesthetized to it.
0:59:05 The first time you lose a friend, it’s tragic,
0:59:07 but when all your friends are dropping dead,
0:59:10 it’s like, ah, this is just happening.
0:59:12 The first person you know gets divorced
0:59:14 and all these things over time,
0:59:17 the half-life of the pain just decreases.
0:59:19 You’re still losing the pleasurable things.
0:59:21 So food didn’t taste as good and wine didn’t taste as good
0:59:24 and music didn’t sound as good and sleep wasn’t as good
0:59:26 and sex wasn’t as good and all these pleasurable things.
0:59:29 But he thought that the pain getting less painful
0:59:32 outweighed the pleasure getting less pleasurable.
0:59:33 And so I thought that was an interesting insight.
0:59:35 Barbara had a different view
0:59:36 and she’s still alive
0:59:37 and I’m taking some license to share her view.
0:59:42 But she said, no, it’s still painful.
0:59:45 And the main reason she said that pain is painful
0:59:48 is that these memories I have of Amos
0:59:52 or I shared that moment with this person.
0:59:55 And the great human feeling is commiserating
0:59:57 about the thing that we experienced together
1:00:00 or the memory and laughing hysterically
1:00:02 which I still do with my childhood friends
1:00:04 of this shared moment.
1:00:06 And AI will never get that.
1:00:08 Another person won’t get the inside joke.
1:00:11 And so to your question about the advantage,
1:00:13 the advantage is in that human connection
1:00:16 because we are still human and we want that, we pine for it.
1:00:18 And so I thought that was a really profound counter
1:00:22 to Danny’s view, which is that when you lose these people
1:00:27 you lose the partner to amplify that emotion,
1:00:28 a good one or a bad one.
1:00:31 And I think that being able to have like uniquely human
1:00:34 experiences, understand each other, support each other,
1:00:36 that that is still gonna be an advantage.
1:00:38 – A lot of the things you said,
1:00:41 the sort of common thread with them in my mind
1:00:45 is we feel part of something larger than just ourselves.
1:00:46 – Yes.
1:00:48 – Like, relate this to working remotely
1:00:52 and how this interacts with other things, right?
1:00:55 Where we might not feel part of something larger
1:00:57 than ourselves and remote work is a great example
1:01:00 where you sort of, the ability is there
1:01:02 whether you do part-time or full-time
1:01:05 to shut off your laptop and the world looks like you.
1:01:05 – Right.
1:01:06 – You’re not forced to interact with people
1:01:08 with different political views,
1:01:10 different socioeconomic status.
1:01:13 You surround yourself so you don’t feel a part
1:01:15 of something bigger than, and that changes how you vote.
1:01:18 It changes a whole bunch of things in your life,
1:01:18 I would imagine.
1:01:21 I’m speculating here, but I’m sort of like thinking out loud.
1:01:22 – No way.
1:01:25 I think a lot of your values, a lot of our values, come from that.
1:01:28 And again, I’m gonna invoke Danny here, in a conversation
1:01:32 before he passed away. It was that you may think
1:01:36 you think the things you think because you analyzed
1:01:37 and you reasoned or whatever, but no.
1:01:38 The reason you think the thing you think
1:01:41 is because of the five or six most important people
1:01:44 around you and they sort of believe something
1:01:45 and you believe them.
1:01:47 And so again, an information contagion
1:01:48 and you will tell yourself that you believe it
1:01:50 because you really thought deeply about it
1:01:52 and you reasoned through this.
1:01:53 But no, the reality is you believe things
1:01:55 just because there’s the social phenomenon.
1:02:00 So working from home versus working in person,
1:02:02 there are so many, everybody is here Monday
1:02:04 through Friday now.
1:02:05 You know, at first it was Monday through Thursday,
1:02:07 take Friday, now I’m like, look, if you need to leave,
1:02:10 you go to family first, it’s a principle here,
1:02:13 never miss a concert or recital, a science fair,
1:02:15 but we need to be together.
1:02:16 Why?
1:02:17 Because there’s so many interstitial moments.
1:02:20 There’s a chance serendipity moment
1:02:21 because I come out of a meeting
1:02:24 and I’m able to introduce you to Grace
1:02:26 or Brandon is meeting with somebody and every day,
1:02:27 hey, do you have a minute?
1:02:28 You know, knock, knock,
1:02:29 and then somebody’s making an introduction
1:02:30 and you just never know what it unlocks.
1:02:33 That never happens on Zoom or on calls, it just doesn’t.
1:02:35 The structure of that doesn’t allow for that serendipity
1:02:37 and those sort of human connections.
1:02:41 The ability to really feel when somebody swallows
1:02:43 when you ask them a question and they’re feeling nervous
1:02:46 or like, hey, like something going on, you know,
1:02:47 and they’re, because everybody’s fighting
1:02:50 some epic battle, you know, they’ve got relationship issues,
1:02:53 they got parents issues, they got a sick person, like,
1:02:55 and we are just, we’re still human.
1:03:00 So I feel deeply that we should all be connected in person.
1:03:01 And I do think that that’s an advantage
1:03:05 in a world of abundant AI and sort of cold sterile,
1:03:08 even if it has the simulacrum of,
1:03:10 and again, a lot of people will use AI
1:03:13 as a beneficial way to share things
1:03:14 that they might not even be comfortable sharing
1:03:18 with a person, and have a consultant or therapist to talk to.
1:03:20 But I’ll give another example,
1:03:24 which is another colleague here moved from one city
1:03:27 to another, and he happens to be religiously observant,
1:03:29 but as an atheist, okay?
1:03:32 But he moved to this new city and he found a tribe
1:03:35 that he’s like, I immediately plugged in.
1:03:37 Like we didn’t know anybody, we didn’t have any friends,
1:03:40 but by being part of this religion,
1:03:42 we instantly had friends and peers.
1:03:45 And I was like, not that it’s cynical or selfish
1:03:49 or, you know, we could put a valence of meaning behind it,
1:03:51 but really at the end of it is like,
1:03:52 will you help me?
1:03:53 Is there reciprocity?
1:03:56 And that’s this like ancient sense
1:04:00 of, whether it’s transactional or not, overt or not,
1:04:02 you’re signaling the depth of the sacrifices
1:04:03 that you would make for the group.
1:04:06 This idea that something is bigger than yourself,
1:04:10 belonging is arguably like the most pleasurable thing,
1:04:11 having friendships, you know.
1:04:13 Look at all these studies of people that age
1:04:15 and what was meaningful to them in leading a good life,
1:04:18 and being ostracized or feeling rejected
1:04:20 or left out is like the most painful thing.
1:04:23 So I think that that’s a timeless human truth.
1:04:26 And I certainly feel like,
1:04:28 and I encourage this with my kids, by the way,
1:04:31 I don’t want them just to have a group of friends in school,
1:04:34 because just like the diversity that you need in a portfolio,
1:04:35 you need to have hedges and all these other things
1:04:37 because maybe something is not going right
1:04:37 in that friend group.
1:04:41 And then you are more at risk of catastrophizing
1:04:43 that, oh my God, like nobody likes me.
1:04:47 And you know, and so if you’re in a soccer team
1:04:51 and you go to a religious group or you’re in Hebrew school
1:04:52 or you’re on a dance team
1:04:54 and you have a neighborhood group of friends
1:04:56 and my kids are in like six different things
1:04:58 outside of school each
1:05:00 and they then can bring these people together,
1:05:01 which itself adds this feeling of like,
1:05:04 oh, I connected people and I’m this node
1:05:08 and it sort of cements the network in a way
1:05:12 that is profound and meaningful and comforting.
1:05:13 – I like that.
1:05:15 I want to come to aging a little bit here.
1:05:17 I don’t want to do it.
1:05:18 I don’t want to age.
1:05:19 Talk to me about this
1:05:23 because I feel like based on the research happening now,
1:05:25 the amount of money going into this,
1:05:27 the progress that we seem to be making
1:05:31 at least on biomarkers that we understand,
1:05:33 we seem to be able, at best, to dramatically slow
1:05:36 our aging process.
1:05:37 You think we’re going to make a quantum leap
1:05:40 in terms of average lifespan for humans,
1:05:44 maybe adding 10 or 15 or 20 good years in the next 10 years?
1:05:48 – I think that’s possible, 10 or 15 or 20, not doubling.
1:05:51 I mean, we did that, you know, in a few generations,
1:05:53 you know, people would die at 40 years old or 50 years old
1:05:56 and people now regularly live until their 70s or 80s.
1:05:57 You know, this is an interesting thing
1:06:01 because we don’t fund longevity work here
1:06:05 and I’m personally not invested in any of these things.
1:06:10 I will go to my doctor and I will get my blood tests
1:06:13 and he will suggest that I take some supplements
1:06:14 because I’m low in iron or this or that
1:06:16 and that’s entirely reasonable
1:06:19 just to maintain sort of homeostatic function.
1:06:22 But I don’t go absolutely crazy and intense,
1:06:24 but I appreciate the people, Brian Johnson and others
1:06:27 that are self-hacking and doing this in pursuit
1:06:29 and Ray Kurzweil was doing it back in the day.
1:06:32 Nobody has really seen Ray out that much, you know?
1:06:34 He was taking like, I don’t know, 100 supplements a day
1:06:35 or something like that.
1:06:38 And last I saw, I think he had a toupee
1:06:40 and like, it was like a messy situation,
1:06:43 but I’m glad that these people are doing it
1:06:46 both in pursuit of staving off their own mortality
1:06:49 and a public service either interpreted as
1:06:50 I can’t believe you go to these extremes
1:06:52 and I’m not gonna do that ’cause it’s super stressful
1:06:54 or maybe you’re going to unearth something
1:06:57 and we’re all gonna be on metformin and all these,
1:07:00 but there is this timeless pursuit of avoiding death.
1:07:03 – It’s very human.
1:07:07 – It is, and it goes back. The first form
1:07:09 of avoiding death was, I’m not gonna die.
1:07:12 So the search for the fountain of youth and Ponce de Leon,
1:07:15 today modern pharmaceuticals and drugs and supplements
1:07:16 and lifestyle changes.
1:07:20 The second form was fine, I’m gonna die,
1:07:22 but I’m gonna come back.
1:07:25 And so reincarnation and maybe that was
1:07:26 the spiritual or religious sense.
1:07:29 And then today’s version of that would be Alcor
1:07:32 or any of these cryo, I’m gonna freeze my brain
1:07:35 so that when they figure this out, they’ll bring me back.
1:07:37 The third was, okay, fine, I’m gonna die,
1:07:39 but I am more than my physical self.
1:07:41 There’s an ethereal soul.
1:07:42 The modern technological version
1:07:44 would be the ghost in the machine,
1:07:46 endless sci-fi movies about people uploading themselves
1:07:49 and their likeness, which by the way,
1:07:51 are sort of a really interesting phenomenon
1:07:54 and how we deal with loss and the ability,
1:07:57 if there is an AI that is totally trained on my voice
1:07:59 and my likeness and everything I’ve ever said,
1:08:03 which I have done, that my kids would actually have a dad AI
1:08:05 and maybe they can consult it for questions
1:08:07 or is that a good thing or a bad thing?
1:08:10 I don’t know, but it is going to be a thing.
1:08:12 Then you have, okay, I’m gonna die,
1:08:13 but I’m gonna live on through my progeny,
1:08:15 through my children, through my genes,
1:08:17 which is the evolutionary impetus.
1:08:20 And I’m gonna live on through my works
1:08:23 and you won’t be there to experience either of those things
1:08:25 unlike the first three where you’re not gonna die
1:08:26 or you’re gonna come back.
1:08:28 And so I think about the people that,
1:08:30 whether it’s where I grew up in Coney Island, Brooklyn,
1:08:31 they put graffiti on the wall
1:08:33 and they put themselves up until it gets washed away
1:10:36 or if you’re Dave Rubenstein or Steve Schwarzman,
1:08:37 you put your graffiti etched in stone
1:08:39 on the New York public library
1:10:40 but it’s really no different,
1:10:42 just $100 million instead of free
1:10:44 and without the potential for being jailed.
1:08:47 And then it’s through your children.
1:08:49 And I think there it’s very Buffett-like,
1:08:54 the moral sort of mandate would be like he described
1:08:57 Don Keough of Coke that when you do die,
1:08:59 you want them to say what they said about him,
1:09:01 which was everybody loved him.
1:09:03 I don’t have that, like not everybody loves me,
1:09:05 but my kids is most important.
1:09:07 – I think the theory was the people
1:09:09 that you want to love you, love you.
1:09:10 – Which is interesting because the people
1:09:13 that we celebrate the most, if you think about Steve Jobs
1:09:16 and even Elon, like are the people that are closest to him.
1:09:19 I don’t mean millions of fans that don’t actually know him,
1:09:20 but I used to have this debate
1:09:22 with one of my best friends about Steve Jobs,
1:09:26 like the world loves Steve Jobs, but like–
1:09:29 – People in his orbit didn’t always love Steve Jobs.
1:09:32 – He’s like terrible, you know, like he’s so mean or like,
1:09:33 so I think that’s really interesting.
1:09:36 But going back, the common thing amongst all the people
1:09:39 that have tried to defeat death,
1:09:40 from the people that weren’t gonna die
1:09:42 through fountain of youth or modern biotech,
1:09:43 the people that were gonna upload themselves,
1:09:45 the people that were gonna come back,
1:09:48 the people that leave it to their kids or to their works,
1:09:51 the common thing amongst all them is that they’re dead.
1:09:53 You know, nobody has beaten death.
1:09:57 And so the mental model that I like on this is,
1:09:58 okay, take a piece of paper
1:10:00 and put the day you were born on the front
1:10:01 and then the day you’re gonna die,
1:10:03 roughly plus 80 years on the back.
1:10:06 And the only thing that you may have control over in part
1:10:09 is the story that you write between these two pages.
1:10:11 And my brother-in-law passed when he was early 40s,
1:10:14 stomach cancer, you know, he lived a tragically short term,
1:10:16 you know, relative to others.
1:10:18 But maybe you get to live this epic tome,
1:10:19 and it’s like, how do you write it?
1:10:20 And who do you spend your time with?
1:10:22 And nobody’s gonna look back and say, you know,
1:10:24 I wish I would have taken that extra meeting
1:10:26 or done that extra business trip or something.
1:10:28 It’s like, I’m really glad that I was there
1:10:30 for my kids or my spouse.
1:10:32 – And yet you work really hard.
1:10:35 – Yeah, and I always prioritize my kids.
1:10:38 Like, I think all the time about their judgment.
1:10:40 You know, some people are like, they’ll meet their maker,
1:10:41 but because I don’t believe,
1:10:43 I care deeply that
1:10:46 they’ll say, my dad was there for everything.
1:10:47 And in part for me,
1:10:49 because my father was not present in my life,
1:10:51 my parents split when I was super young.
1:10:53 It’s my little guy. I have two daughters and a son,
1:10:55 but my son, who’s nine,
1:10:58 always wants me to have a play date with my dad, you know?
1:11:01 And we’re, we’re civil and we speak a few times a year.
1:11:03 But I’m like, it’s just not that relationship.
1:11:04 I’m not gonna have that, you know?
1:11:06 And he’s like, yeah, but I really want you guys to,
1:11:09 and I’m like, no, I get to be the dad that I am to you
1:11:11 because I didn’t really have that.
1:11:12 And I’m making up for it now.
1:11:14 And this is what it is.
1:11:18 And then I worry that if they see such a present father,
1:11:19 do they take that for granted?
1:11:20 – Oh, dude.
1:11:21 – And then do they screw it up in the next generation?
1:11:22 You know, and so.
1:11:23 – I have the same thoughts.
1:11:25 – I actually talked to a therapist about this
1:11:27 because I was like, am I too present in their lives?
1:11:29 Like they need some space, you know?
1:11:30 I’m home and they get home from after school.
1:11:34 And my wife and I talk about this like my parents split.
1:11:37 My father was married four times.
1:11:40 All I wanted was a stable nuclear family.
1:11:42 But if my kids grow up in a stable nuclear family,
1:11:42 do they take it for granted?
1:11:44 And does like one of them become a cheater
1:11:46 and commit infidelity and all this kind of stuff?
1:11:48 And I have no idea.
1:11:50 But I know for me what is meaningful
1:11:51 and what makes me feel good.
1:11:55 And it’s a totally selfish thing that is about solving
1:11:55 for what I want.
1:11:58 But selfish or not, I think it ends up being virtuous.
1:12:01 – So what would your top, like, three or four priorities be then
1:12:03 if you were to outline them?
1:12:06 – My kids and my wife, call it family, number one.
1:12:07 I mean, you know, you think about like the people
1:12:09 that have lost everything physically and materialistically
1:12:12 in the fires recently in LA.
1:12:14 Like family is like, you know,
1:12:15 and I don’t care over time as they grow up
1:12:17 and where they are and they’re, you know,
1:12:19 but that to me is like the most important thing.
1:12:22 And number two is purpose and meaning.
1:12:23 I think this is a universal thing,
1:12:26 but I feel lucky that I enjoy what I do.
1:12:28 It’s an intellectual puzzle.
1:12:32 There’s times of like great fierce competition.
1:12:34 We’re losing to, you know, I don’t like to lose.
1:12:38 We’re losing out to another firm that’s got an entrepreneur.
1:12:40 I like the intellectual gratification
1:12:41 of being right when other people are wrong.
1:12:43 I’m very intellectually competitive that way
1:12:46 and discovering something that people haven’t discovered.
1:12:49 I always talk about Linus Pauling, the double Nobel laureate
1:12:51 who won it for both chemistry and peace.
1:12:54 And he has this quote about science,
1:12:56 which I just absolutely love.
1:13:00 And this is like, I hope until I’m 90 or 100 or 110
1:13:02 or whatever modern science lets me live till
1:13:04 that I continue to have this
1:13:05 ’cause it’s an addictive feeling,
1:13:08 which is that I know something that nobody else knows
1:13:11 and they won’t know until I tell them.
1:13:14 And I love that, discovering a legal secret
1:13:17 and knowing that there’s a scientific breakthrough coming
1:13:18 that is going to be announced,
1:13:19 and nobody else knows about it yet.
1:13:21 So that to me is like meaning and purpose.
1:13:24 It’s intellectually competitive
1:13:27 and I understand the intellectual competitiveness.
1:13:27 I wanna be right.
1:13:29 I wanna make money.
1:13:31 I want the credit for it,
1:13:32 but it really is about like the status
1:13:33 ’cause otherwise you would just do these things
1:13:34 in private and totally in quiet.
1:13:37 But I like that feeling of that,
1:13:42 even if it’s vainglorious and ego and all vanity.
1:13:43 So family, that sense of purpose
1:13:46 driven by this intellectual competitiveness.
1:13:49 And then I think I didn’t appreciate this as much,
1:13:51 but it’s sort of adjacent to the first.
1:13:54 There’s a handful of people who I imagine myself
1:13:56 like retiring with or like guy friends
1:13:58 and people that I enjoy spending time with
1:14:00 that have the same sort of values
1:14:02 and they’re very family driven.
1:14:05 And so my cousin in particular,
1:14:07 this amazing guy, Jason Redless,
1:14:09 one of my wife’s best friends,
1:14:10 this woman, Molly Carmel.
1:14:12 They’re both, we call it family.
1:14:13 They’re like friends and family.
1:14:15 But yeah, it’s a powerful thing
1:14:16 that I don’t wanna ever lose.
1:14:18 – That’s awesome.
1:14:20 You process a ton of information.
1:14:24 What’s that workflow like for information to get to you?
1:14:27 How are you using technology to filter?
1:14:29 How are you filtering information?
1:14:31 – So I typically go to bed between 12 and 12:30,
1:14:36 wake up around seven, see the kids before they leave for school,
1:14:38 about 40 minutes with them.
1:14:39 I do a lot of physical activity,
1:14:42 usually three days a week between working out,
1:14:44 trainer, jujitsu, all kinds of interesting stuff.
1:14:47 But probably about an hour to an hour and a half
1:14:50 in the mornings of reading through
1:14:53 something like 40 different papers now.
1:14:54 It used to be like seven,
1:14:56 but then when I start to travel internationally,
1:14:57 if I went to the mid-east,
1:15:00 I would find an English version of some of the key papers,
1:15:01 same thing in Japan and elsewhere.
1:15:03 And so now I read a lot of international papers.
1:15:07 And when I say read, I use an app called PressReader.
1:15:09 It has the digital replica of the specific version,
1:15:10 which I really value.
1:15:11 And I know we’ve talked about this in the past,
1:15:14 but I like to know what the editor put on C22.
1:15:17 That’s not as visible on the website
1:15:20 because there’s meta information that the editor is saying,
1:15:21 this is not important to be on the front page,
1:15:24 but if I disagree and I think that there’s a magnitude
1:15:25 of informational importance,
1:15:28 that to me is like some sort of edge.
1:15:30 Then I will take screenshots of those.
1:15:34 And so I will sort of call it scout and scour
1:15:35 through all these papers,
1:15:37 take screenshots in some cases,
1:15:41 I may even take those screenshots and put them into an AI.
1:15:43 And basically say, give me a summary of this article
1:15:46 or give me the three key quotes that really matter.
1:15:50 And so I’ll go down all kinds of like rabbit holes with that.
1:15:50 So that’s the first,
1:15:54 which is just, like, call it 24 hours’ worth of information
1:15:56 that has basically been put through an editorial decree.
1:15:59 You can usually get the FT in New York
1:16:01 at like four, five, six PM Eastern time.
1:16:03 And so you have a little bit of information edge
1:16:05 because most people don’t get the FT
1:16:06 for another 12 or 14 hours,
1:16:08 and they don’t know that they could get it online,
1:16:10 but that is valuable.
1:16:11 And I care about all those things,
1:16:14 including not the sophisticated newspapers,
1:16:16 but like the less sophisticated ones like USA Today,
1:16:18 and I wanna know what is the average person going to read
1:16:19 when they wake up in a Marriott
1:16:20 and get the paper delivered under their door
1:16:22 and that kind of stuff.
1:16:25 Then Twitter, I have all kinds of lists that I follow.
1:16:25 And at any given time,
1:16:28 it might be something that is geopolitical
1:16:29 and war-related where I’m going down a deep hole,
1:16:31 or sometimes it’s AI and technology,
1:16:32 sometimes it’s my team
1:16:34 and what they’re posting and reading about,
1:16:35 but a lot on Twitter,
1:16:37 which I find truly invaluable.
1:16:38 I mean, I know a lot of people say
1:16:40 it’s like this dark cesspool of whatever,
1:16:42 but you can just filter through and cut out the people.
1:16:44 I’m muting and blocking people all the time.
1:16:46 And I’m discovering all kinds of
1:16:47 just absolutely incredible people.
1:16:49 It has been, as you know,
1:16:50 from one of our first conversations,
1:16:52 like this idea of randomness and optionality,
1:16:54 it is this huge randomness generator,
1:16:56 this huge optionality generator and the accessibility.
1:16:58 And I just, I absolutely love it.
1:17:01 So that’s another thing where I’m really rooting for Elon
1:17:03 in the continued success of X
1:17:05 because I’ll continue to pay a lot of money for it.
1:17:07 And I pay more money for it,
1:17:10 but I find it super valuable and it’s real time pulse.
1:17:12 And I’m excited for GROC to continue
1:17:15 ’cause I think that GROC and X just continue to sort of,
1:17:16 I mean, that’s a repository.
1:17:18 We talked about that before, repository of information.
1:17:21 – One of the first things Elon did was cut off access
1:17:23 to Google from the data. – Totally.
1:17:25 And I think that’s the right move, right?
1:17:28 This is our platform, same way as Meta has.
1:17:31 So I think that that repository of everybody’s tweets
1:17:33 and retweets and likes and the comments that they’ve made,
1:17:34 you can already go on and do this
1:17:37 in sort of a relatively superficial way
1:17:39 where you can say roast me.
1:17:40 And it will basically, off your past,
1:17:43 I don’t know, like 25 or 30 tweets, sort of roast you
1:17:45 based on what you’ve tweeted about.
1:17:47 But the longitudinal access to people
1:17:50 that have 10,000, 100,000 tweets
1:17:53 is an amazing pastiche. And you know what,
1:17:55 there’s an interesting thing here
1:17:58 which we were just riffing on internally,
1:17:59 which I’ll come back to.
1:18:00 Just remind me on sort of wrapping yourself
1:18:04 in this information mosaic and breaking free from it.
1:18:07 But papers in the morning, Twitter, internal Slack,
1:18:11 emails, texts, you know, just like processing
1:18:15 all this information, I use rewind on my Mac,
1:18:18 which is effectively doing nonstop screen capture.
1:18:20 And there will be other tools like this in part
1:18:21 because I do not remember the source
1:18:22 when I saw the information.
1:18:24 It’s sort of the same thing of like,
1:18:26 if you see a show and you were to ask me today,
1:18:28 like where did you, I don’t know, it was on Apple TV,
1:18:29 like was it on Paramount?
1:18:30 Was it on CBS?
1:18:31 Was it on Netflix?
1:18:33 Like I have no idea, right?
1:18:35 And in fact, usually when I do the Apple search
1:18:37 and it doesn’t show up, it means it’s on Netflix
1:18:39 ’cause it can search Netflix, right?
1:18:42 Yeah, just huge information omnivore, everything.
1:18:46 And then there are some random writers that I follow,
1:18:48 like Kathryn Schulz at The New Yorker,
1:18:51 Adam Gopnik and people whose style of writing
1:18:53 and the selection of their subjects.
1:18:54 I find really interesting
1:18:56 and then I’ll go deep into some of their themes.
1:18:58 – So you use Rewind, PressReader.
1:19:00 What are the other like technological tools
1:19:01 that you’re finding super valuable?
1:19:05 – Every AI, I might take an essay, read it,
1:19:08 ask it to summarize the key points,
1:19:10 ask it to put it in different voices,
1:19:12 take two different essays and say,
1:19:13 where do these things agree or disagree?
1:19:16 And so yeah, like just nonstop.
1:19:21 I’m on AI easily more than Google now,
1:19:23 but I don’t know, two, three hours a day.
1:19:24 – What have you learned about prompting
1:19:27 that would help everybody get better results?
1:19:31 – Usually very specific, like I give it a priming thing.
1:19:34 Say it’s a neuroscience paper:
1:19:36 you are the world’s greatest expert in neuroscience.
1:19:39 You have read every paper that has been published.
1:19:43 You have both a skeptical eye to new claims,
1:19:47 but you are also open-minded to interesting correlations
1:19:48 that might not have been considered.
1:19:53 Read this paper and give me the three most provocative,
1:19:56 non-obvious points and give me the three cliches.
1:19:57 And so just, and by the way,
1:20:01 I will put them into three different models at the same time.
1:20:03 So I will open three different browsers,
1:20:05 arrange them, and put it into ChatGPT,
1:20:06 put it into Claude, put it into one of the Perplexity models
1:20:08 that’s not running on those two.
1:20:11 And sometimes I’ll mix and match them.
1:20:12 – I love that.
1:20:15 – It’s sort of like a palette for mixing.
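[Editor's note: a minimal sketch of the prompting pattern described here, a persona-priming preamble plus a structured ask, fanned out to several models. The `ask_chatgpt`, `ask_claude`, and `ask_perplexity` callables are hypothetical stand-ins for whichever SDKs or browser windows you actually use.]

```python
PRIMING_TEMPLATE = """You are the world's greatest expert in {field}.
You have read every paper that has been published.
You have a skeptical eye toward new claims, but you are also
open-minded to interesting correlations that might not have
been considered.

Read this paper and give me the three most provocative,
non-obvious points, and give me the three cliches.

PAPER:
{paper_text}
"""

def fan_out(field: str, paper_text: str, models: dict) -> dict:
    """Send one primed prompt to every model; collect answers by name."""
    prompt = PRIMING_TEMPLATE.format(field=field, paper_text=paper_text)
    return {name: ask(prompt) for name, ask in models.items()}

# Usage with the hypothetical stand-ins:
# answers = fan_out("neuroscience", open("paper.txt").read(),
#                   {"chatgpt": ask_chatgpt, "claude": ask_claude,
#                    "perplexity": ask_perplexity})
# Then "mix and match": feed one model's answer into another for critique.
```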
1:20:18 We have not yet done this as a partnership,
1:20:21 but we’ve talked about it, having an AI partner.
1:20:25 There’s still a behavioral discomfort
1:20:26 about recording conversations.
1:20:28 You and I are recording our conversation now.
1:20:30 But for every partnership discussion we have,
1:20:34 we’d want to be confident that it was protected and encrypted
1:20:37 because we might say things that could be harmful.
1:20:38 – You don’t want them coming out.
1:20:40 – We could insult somebody or like,
1:20:42 or we have a piece of intel that we don’t want out.
1:20:45 But if we were comfortable that it was perfectly private,
1:20:48 which is a hard thing to promise, but if it was,
1:20:50 you would have a repository of every conversation
1:20:53 we’ve ever had over the past X number of years,
1:20:55 the decisions that we wrestled with,
1:20:57 you would be able to have somebody to advise us,
1:20:59 an AI to advise us,
1:21:02 where are we showing biases, inconsistency
1:21:04 between a decision we made three months ago and this one?
1:21:06 What is different this time?
1:21:08 Which voices are not speaking up?
1:21:09 And you can already get this in some cases
1:21:11 with like certain Zoom calls or other recording things
1:21:14 where it’ll tell you who spoke for how long.
1:21:18 And then you could run like a Bayesian analysis of,
1:21:20 okay, given that we’re looking at these two companies,
1:21:21 give me the outside view,
1:21:24 the base rate of success historically,
1:21:25 which in venture honestly doesn’t matter,
1:21:27 and then give me a Kelly criterion
1:21:28 of how you might size this
1:21:30 based on the projected internal confidence.
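[Editor's note: for the Kelly criterion mentioned here, the textbook formula for a bet paying b-to-1 with win probability p is f* = p - (1 - p)/b. A worked example with purely illustrative inputs; as noted above, venture outcomes don't really fit these tidy binary-bet assumptions.]

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a p-probability bet paying b-to-1."""
    return p - (1 - p) / b

# Hypothetical: 30% internal confidence on a deal believed to pay 10-to-1.
p, b = 0.30, 10.0
f = kelly_fraction(p, b)
print(f"Kelly fraction: {f:.0%}")          # 23% of the bankroll
print(f"Half-Kelly, common in practice: {f/2:.1%}")
```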
1:21:31 And so there’s all kinds of things
1:21:34 that we could internally do to use these tools,
1:21:37 which I think over time we’ll probably experiment with.
1:21:40 But the biggest thing is basically having like
1:21:41 a capture of everything, you know,
1:21:43 everything that you see, everything that you hear,
1:21:46 everything that my, I’ve already given over again
1:21:48 to the privacy gods, everything that my screen sees.
1:21:51 And so I trust that that’s siloed on my device,
1:21:52 it’s not going to the cloud,
1:21:53 but it’s super helpful when I’m trying to search
1:21:55 for something I’m like, was that a Gmail?
1:21:56 Was that a text?
1:21:57 Was that a thing on Twitter?
1:21:58 Was that a PDF I read?
1:21:59 Where did I see that?
1:22:03 And the ability to DVR my life is super valuable.
1:22:04 If I could do that with my conversations,
1:22:06 like who said that the other day?
1:22:08 In fact, Lauren and I just had somebody over,
1:22:10 you know, we host people at our house
1:22:12 and we couldn’t remember who told us this thing.
1:22:14 And we were like, I had to go through my calendar
1:22:15 to see who was over on Thursday or Friday.
1:22:17 Oh yeah, okay, it was, you know,
1:22:20 but being able to search your life instantly,
1:22:23 I think it’s going to be a generational change
1:22:25 in the same way that people were not comfortable,
1:22:26 you know, posting on Facebook.
1:22:28 And then they were comfortable.
1:22:30 And then like now people are like posting themselves
1:22:34 in swimsuits and bikinis and it just doesn’t matter.
1:22:36 That to me is going to be a big step change.
1:22:38 – I want to come back to the info mosaic,
1:22:40 but one thing we never talked about: YouTube
1:22:43 being like such a huge data source.
1:22:43 – Incredible.
1:22:45 – Closed and slightly open, I guess.
1:22:46 – Yeah.
1:22:47 In some ways.
1:22:49 Yeah, like, well, I love that moment
1:22:52 when, I think it was Mira from OpenAI, was asked,
1:22:54 like, you know, so did you train on it?
1:22:56 And she just didn’t want to answer.
1:22:57 Crickets.
1:22:58 Yeah.
1:23:02 – Okay, the info mosaic and breaking free from it.
1:23:04 One thing I do love about X is that
1:23:07 it shows you views that are contrary to your own,
1:23:09 like the algorithm’s gotten pretty good at that.
1:23:10 Yes.
1:23:12 And there is, what is it, Ground News,
1:23:13 where you can sort of do this,
1:23:16 where it will actually give you a bias on, you know,
1:23:18 certain things and it’ll give you both sides of the view.
1:23:19 So if you truly are objective
1:23:21 and like truly knowledge seeking,
1:23:23 then you would want to experience that.
1:23:24 And I feel like that will be an option
1:23:26 where you just click and enable a feature
1:23:28 and, you know, it’s able to identify some of the biases
1:23:29 and whatnot.
1:23:30 This idea of the information mosaic
1:23:33 was a recent conversation I was having with my colleague,
1:23:34 Danny Crichton, who runs like our risk gaming stuff
1:23:36 where we’re coming up with all kinds of crazy scenarios
1:23:39 and imagining these low probability, high magnitude events.
1:23:42 And the idea was that over time,
1:23:46 this perfect simulacrum of Shane or of Josh
1:23:47 is going to exist.
1:23:49 Everything that I’ve ever said on every podcast,
1:23:50 everything I’ve ever written publicly,
1:23:51 forgetting all my private thoughts,
1:23:53 but just everything that I’m out there publicly,
1:23:54 my voice, my tone, okay.
1:23:58 And so I almost imagined it like this matrix like mosaic,
1:24:01 like a Spider-Man costume that’s like form fitting.
1:24:04 It’s me or a close approximation of me.
1:24:06 But what if you want to break free from that?
1:24:08 In a sense, if I said,
1:24:10 give me something in the style of Shane Parrish,
1:24:11 it might conjure something in the style of Shane Parrish.
1:24:13 Or in the style of Josh Wolfe
1:24:17 or in the style of David Milch or Christopher Hitchens,
1:24:20 you know, I actually love invoking dead voices,
1:24:22 you know, to sort of bring them back from the dead, right?
1:24:24 And have them opine on the topic.
1:24:25 What would Christopher Hitchens say
1:24:27 about this article, blah, blah, blah.
1:24:30 But what if I wanted to break free stylistically?
1:24:35 If I said, give me a image of a horse in Tribeca
1:24:37 in the style of Wes Anderson.
1:24:40 You know, I can imagine the pastel palettes
1:24:42 that it would conjure and you could imagine that too
1:24:45 with the, you know, rectilinear framing and whatever.
1:24:48 But what if Wes Anderson suddenly had like a new
1:24:51 stylistic change in his oeuvre and wanted to just shift?
1:24:53 Like he’d be constrained, you know,
1:24:56 in the same way that people hate when, you know,
1:24:57 I don’t know, maybe when Dylan went electric
1:25:00 or like, you know, somebody else changes their style
1:25:01 or their genre.
1:25:05 And so there’s this aspect where AI constrains you.
1:25:10 And they’re just sort of playing with this idea of,
1:25:12 you know, how do you break free in the same way
1:25:15 that there might be like the right to be forgotten
1:25:16 that maybe you want to change your style.
1:25:19 The great virtue of college for most people
1:25:22 is this quartet of years where you can break free
1:25:25 from who you were for the past four years.
1:25:27 And nobody knows who you were and what you cared about.
1:25:31 And maybe you were into heavy metal,
1:25:34 but you were in like the band, you know,
1:25:35 and you couldn’t break free or maybe you were gay
1:25:38 and nobody knew or all these things that you can just
1:25:41 suddenly like be yourself and explore new things.
1:25:45 And there’s this element where the great virtue of college
1:25:48 is self-exploration against the constraints of high school,
1:25:51 but could AI be this constraining force?
1:25:53 Because the more content that you put into it,
1:25:55 the more it knows you,
1:25:59 the more you may have trouble varying from it.
1:26:01 And so there’s something interesting there.
1:26:02 – I like that a lot.
1:26:05 Let’s talk military and technology,
1:26:08 and you guys are big investors in Anduril.
1:26:10 Where’s that going in the future?
1:26:13 – Well, there’s gonna be a lot more brilliant minds,
1:26:16 I think that feel comfortable, motivated,
1:26:20 not only by a sense of purpose, patriotism,
1:26:23 but also principle and capital making that they see,
1:26:25 the things that they doubted early on,
1:26:26 like why is this time different
1:26:29 in another defense company of which there weren’t very many,
1:26:32 but seeing Anduril’s ascendancy and valuation
1:26:34 and success and program wins,
1:26:35 I think has inspired a lot of people
1:26:37 like wait, there’s something going on here.
1:26:39 We went from 50 primes down to five,
1:26:42 you’re seeing the rise of these neoprimes.
1:26:44 I deeply believe that Anduril in the next few years
1:26:48 will be a $30 to $50 billion publicly traded business,
1:26:52 doing single digit mid billions of revenue
1:26:53 with software like margins
1:26:55 that are not like these cost plus margins.
1:26:57 So that is gonna usher in a big wave
1:26:59 and they’re buying companies,
1:27:00 they’re acquiring smaller businesses,
1:27:03 but you’ll continue to see that sort of evolution
1:27:05 in a world that people realize
1:27:07 is not kumbaya peace and safety.
1:27:12 There are bad actors that, when we take a step back
1:27:15 or are on our back foot or a little bit permissive,
1:27:18 arm up. It happened with Iran.
1:27:21 And I think the prior administrations from Obama and Biden
1:27:24 were well-intentioned in trying to bring them
1:27:28 into the Western world, but it was a sort of ruse
1:27:29 from Iran’s standpoint.
1:27:33 Same thing with Gaza and Israel and Russia
1:27:35 and who thought that we were gonna see a land war
1:27:38 in the 21st century where Russia would invade Ukraine
1:27:43 and China and Taiwan and North Korea
1:27:45 and the African continent,
1:27:47 as we’ve talked about, in the Sahel and Maghreb,
1:27:49 infiltration of a lot of these groups into South America.
1:27:52 I mean, there’s just lots of conflict waiting
1:27:57 and the best way to avoid conflict is to have deterrence.
1:28:00 And if Ukraine had nuclear weapons,
1:28:01 Putin wouldn’t have invaded.
1:28:03 Most of the West and NATO really said,
1:28:04 “Don’t worry, we got your back.”
1:28:06 even though Ukraine is not part of NATO
1:28:08 and never nuclearized,
1:28:13 I think the world, timelessly through all of human history,
1:28:18 is gonna face enormous conflict, resource wars,
1:28:20 water may be next.
1:28:24 I think there’s something like 1,900 active conflicts
1:28:26 around the world over water rights.
1:28:28 You look at China and Pakistan, control of the water.
1:28:32 I mean, there’s just like a lot of resources.
1:28:36 You look at disrupting undersea cables,
1:28:38 sabotage efforts.
1:28:40 You look at deep sea mining.
1:28:42 You look at space as another frontier.
1:28:46 There’s just a lot of opportunities for zero-sum conflict.
1:28:49 And when you can’t reconcile those conflicts
1:28:52 through diplomacy or negotiations or agreement,
1:28:54 it goes to violence.
1:28:58 And the people that can bring or effect or export violence
1:28:59 typically have the upper hand.
1:29:03 And part of what has made this country great
1:29:05 and made it powerful and made it the economic juggernaut,
1:29:10 what allows for the low-entropy system,
1:29:14 even though the country at times seems chaotic,
1:29:16 that enables the high-entropy production
1:29:20 of entrepreneurial ideas and free market capitalism
1:29:23 and booms and busts, is that we have
1:29:25 the most powerful military on the planet.
1:29:28 You could argue that that didn’t just benefit the United States.
1:29:31 It benefited Canada, Europe, it benefited a lot of–
1:29:32 – Mexico, our allies, for sure.
1:29:34 You can watch many fictional movies
1:29:36 that have run these counterfactuals
1:29:38 of what would have happened if Nazi Germany had won,
1:29:43 or the Russians had landed on the moon
1:29:45 before we did, like in For All Mankind.
1:29:48 But we’re getting away from an era
1:29:53 of like, here’s a trillion dollar boat effectively.
1:29:56 – It depends who you talk to, though, doesn't it?
1:29:58 Like, I mean, if that boat can be taken out
1:30:01 by a $3,000 drone, how effective is it?
1:30:03 – Yes, for sure, the asymmetry of the threat
1:30:08 to an aircraft carrier from a large fleet of drones.
1:30:10 It is very much, if you talk to Sam Paparo,
1:30:11 who's the head of INDOPACOM,
1:30:13 he will say it is all about mass on target.
1:30:18 There’s certain things that automation cannot do.
1:30:19 And he wants what he calls,
1:30:20 which I guess is a technical term,
1:30:24 a hellscape in that region, the Taiwan Straits
1:30:29 and South China Sea, so that you make it really impossible
1:30:32 for them to have any military dominance.
1:30:36 But it is an era where it’s about,
1:30:38 you saw this again with Iran and Israel and Gaza
1:30:42 and Syria’s missiles and counter missiles
1:30:44 and rockets and intercontinental ballistic missiles
1:30:47 and hypersonics and space weapons.
1:30:50 It is just about going back to almost like
1:30:53 Planet of the Apes, one ape throwing at another
1:30:56 a rock or a twig or a stone.
1:30:58 The weapons get more powerful,
1:30:59 but the behavior doesn’t change.
1:31:01 We’re back to throwing projectiles at each other,
1:31:03 and it’s just they’re automated,
1:31:07 they’re at speeds or at levels of
1:31:10 a treatable, overwhelming defensive forces
1:31:12 that that is the battlefield.
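A rough back-of-the-envelope sketch, in Python, of the cost asymmetry described above. The $3,000 drone figure is quoted in the conversation; the carrier cost is an assumed round number, used here only for illustration.

    # Cost-exchange arithmetic: cheap, expendable drones vs. one capital ship.
    carrier_cost = 13e9  # assumed ballpark cost of a modern carrier (illustrative)
    drone_cost = 3_000   # cheap drone, as quoted above

    drones_per_carrier = carrier_cost / drone_cost
    print(f"{drones_per_carrier:,.0f} drones, dollar for dollar, per carrier")
    # ~4.3 million drones: even at a 99.9% intercept rate, thousands get through,
    # which is why the economics favor the cheap, expendable side.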
1:31:15 Do you think values become a disadvantage in some ways then?
1:31:18 Like, for example, if the United States says,
1:31:21 we need a human operator to pull the trigger,
1:31:22 and another country says, no,
1:31:24 it can be completely automated.
1:31:24 – For sure.
1:31:26 – And therefore, in a dogfight,
1:31:28 they're more likely to win.
1:31:31 – Look, this is already happening in the information space
1:31:35 and in the autonomous space.
1:31:40 I was in the Pacific region with SOCOM,
1:31:43 and there’s a drone operator who’s flying the drone.
1:31:44 There’s another drone operator
1:31:48 who’s piloting the weapon system,
1:31:49 and there’s two lawyers.
1:31:51 So they’re helping the commander
1:31:54 who is effectively given, like, a God shot of
1:31:57 how many combatants and civilians can be killed
1:32:00 and at what ratio, and sometimes it's like five to one
1:32:03 or ten to one, but there are lawyers that can authorize it,
1:32:06 because we have certain rules of engagement
1:32:09 that frankly give these military personnel
1:32:13 the ethical comfort that this is a superior system.
1:32:15 But for sure, if there are people that don’t have
1:32:17 that same moral code, in some cases,
1:32:20 they can be at least temporarily advantaged.
1:32:22 – Well, you can think of that through AI too,
1:32:23 not just military, right?
1:32:26 If we put restrictions on a technology
1:32:28 and another country doesn't,
1:32:31 sometimes that can give the other country an advantage.
1:32:34 – China has the 50 Cent Army.
1:32:37 These people are getting 50 cents for every tweet
1:32:38 and piece of information they put out.
1:32:40 The State Department, when they want to tweet something out
1:32:43 through groups, there's literally a disclaimer
1:32:45 that says this was sponsored by the State Department,
1:32:48 and it's, like, one woman in Tampa that's doing it.
1:32:50 So we have these ethical restrictions,
1:32:54 which definitely tie our hands behind our back
1:32:55 in some cases.
1:32:57 And our enemies will always try to weaponize this.
1:32:59 So, I mean, you can look at many vectors today
1:33:01 that don’t seem like they’re threat vectors,
1:33:03 but they have been weaponized.
1:33:05 Social media information, we know.
1:33:09 And the best fix for that is identifying the bad actors
1:33:11 and also inoculating people with a heightened degree
1:33:13 of skepticism, but the vast majority
1:33:15 of the American population will not be inoculated.
1:33:17 They will see the things that they want to see.
1:33:19 They will follow the accounts that they want to follow.
1:33:22 And then occasionally those accounts will start to pepper
1:33:24 in other information that they want people to believe.
1:33:27 And that’s how information cascades can go.
1:33:29 We have open systems, immigration.
1:33:31 You’re seeing a lot of the rise
1:33:34 of the populist anti-immigrant movement in part
1:33:37 because in some cases it’s a result of good intentions
1:33:40 of providing sanctuary cities and wanting to help people
1:33:43 and provide amnesty and help immigrants come here
1:33:44 because that’s what our country was built on.
1:33:46 And then you want those people to assimilate.
1:33:48 And then, when they're not assimilating,
1:33:50 you also have bad actors like Putin,
1:33:52 who has weaponized immigration and put migrants
1:33:54 on people’s borders to create pressures
1:33:56 so that you can get a political movement
1:33:58 from inside the country that will be sympathetic
1:33:59 to the nationalist sensibilities
1:34:02 and he’s orchestrated that very well.
1:34:05 And so infiltration into our university systems,
1:34:08 which accept foreign capital, and you see Qatar,
1:34:12 which has very massively influenced domestic US universities.
1:34:13 China doesn't allow that.
1:34:16 The US is not able to come in and sponsor Chinese universities.
1:34:18 TikTok, of course, a huge one, right?
1:34:20 We banned it years ago,
1:34:23 right when Musical.ly had become TikTok,
1:34:24 in part because at the time
1:34:26 I seemed like a conspiratorial nut saying,
1:34:28 I don't trust this, with the Chinese Communist Party
1:34:29 having control over it.
1:34:32 And then, behind the scenes, myself and many others
1:34:35 have played a role in helping to orchestrate what I hope
1:34:38 will not be thwarted by Trump: seeing this divested.
1:34:41 I have no problem with people using TikTok,
1:34:44 but it should not be in the hands of the algorithm
1:34:46 and control of the Chinese Communist Party.
1:34:48 And it’s on a lot of government phones.
1:34:49 It’s insane.
1:34:50 It’s really interesting.
1:34:52 Why haven’t we seen more isolated attacks
1:34:54 that are cheap and using technology?
1:34:56 And what I mean by that is,
1:34:59 a nefarious actor could probably, for two or three grand,
1:35:03 effectively plot an assassination.
1:35:04 I guess the question would be to what end?
1:35:07 And so you still have to realize that many people,
1:35:09 even if they’re like evil geniuses,
1:35:12 have an objective in mind.
1:35:14 And do they just want to sow chaos
1:35:15 and create distrust in the system
1:35:17 and have people scrambling?
1:35:19 Or wait until there's an opportune time to strike?
1:35:24 Think about the Israeli operation with the beepers.
1:35:25 This was 10 years in the making.
1:35:27 Now they could have done it at any point in time
1:35:30 five years ago, but they waited until a precise moment.
1:35:32 And so being able to do the thing
1:35:34 and deciding when you do the thing
1:35:36 are two different decisions.
1:35:39 But I think that we’ve been warned for a very long time
1:35:41 about hacking and infiltration
1:35:44 into our physical infrastructure, for sure.
1:35:46 Somebody could shut down air traffic control.
1:35:47 And what we saw just recently
1:35:49 between the Black Hawk helicopter
1:35:52 and this regional plane from, was it Kansas City?
1:35:54 you know, the crash in Washington, DC.
1:35:58 You could see the FAA shut down and have a glitch.
1:36:01 You can have infiltration into our banking system
1:36:03 and just like the Sony hack, right?
1:36:05 The big thing with the Sony hack
1:36:07 was not that the systems were disabled.
1:36:09 It’s that information was revealed.
1:36:10 You want to create civil war in this country.
1:36:13 Just reveal everybody’s emails for the past year.
1:36:15 The things that we’ve said about each other, you know.
1:36:17 I mean, that would, like, reveal truth in a sense, right?
1:36:18 That was the great irony.
1:36:21 So the obfuscation of these things in private
1:36:23 helped to create a civil society.
1:36:26 Our water systems, our infrastructure, our traffic lights,
1:36:27 you know, I mean, all the things that you’ve seen
1:36:31 in sci-fi movies when like things just start breaking,
1:36:36 I’m actually amazed that our infrastructure globally,
1:36:39 but you know, even in New York City, even in this office,
1:36:42 there’s a million skews in this office.
1:36:44 You know, above our heads right now,
1:36:45 there’s an HVAC system.
1:36:46 Like the fact that we trust this system
1:36:49 and it’s not gonna fall and explode or blow up like,
1:36:52 and then we’re shocked when these things do,
1:36:54 but I’m constantly amazed that the entropy,
1:36:57 the forces of entropy are constrained
1:36:59 by either really good engineering
1:37:02 or inspection of systems or whatever it might be,
1:37:03 the maintenance of systems,
1:37:05 which is another really interesting thing.
1:37:07 This idea of maintenance.
1:37:11 Like the past 10 years have been all about growth,
1:37:12 growth, growth, growth.
1:37:15 You go to a financial statement, and on CapEx,
1:37:16 you've got growth and you've got maintenance.
1:37:19 And I think we're in a world where the cost of capital
1:37:21 keeps going up for a variety of reasons.
1:37:22 I think there's a ton of dry powder
1:37:24 in venture capital and private equity,
1:37:25 but a lot of it I call wet powder,
1:37:27 because this money is basically reserved
1:37:29 for existing companies, and people don't realize that.
1:37:32 Reshoring, all of these things, you know,
1:37:33 tariffs, they're gonna be inflationary,
1:37:35 they're gonna mean a rising cost of capital.
1:37:37 If you have a rising cost of capital,
1:37:40 if you are a CFO or you’re on a board
1:37:41 and you’re thinking about good governance
1:37:43 and capital allocation,
1:37:44 we’re not buying the next new hot thing
1:37:46 unless we really have to like AI today.
1:37:47 – Yeah.
1:37:49 – We’re thinking about how do we maintain
1:37:50 the existing assets we have?
1:37:52 And those assets could be satellites up in space,
1:37:54 they could be military installations,
1:37:56 they could be our telecom infrastructure,
1:37:59 our bridges, our waterways, sanitation, processing,
1:38:01 our HVAC systems, our industrial systems,
1:38:03 all those things that need to be maintained.
1:38:06 So I’m increasingly interested in new technologies.
1:38:08 This could be software, services, sensors,
1:38:12 all kinds of things that can be applied to old systems
1:38:12 to maintain them for longer,
1:38:15 depreciate them for longer, let them last for longer.
1:38:16 I think there’s gonna be increasing demand
1:38:18 for maintenance of systems.
1:38:20 But I’m amazed that everything around us
1:38:21 is just not constantly breaking.
1:38:24 It truly is like miraculous.
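A minimal sketch, in Python, of the growth-versus-maintenance CapEx point above: maintenance that extends an asset's useful life lowers the annual straight-line depreciation charge. All numbers here are hypothetical.

    def annual_depreciation(cost, salvage, life_years):
        # Straight-line method: spread (cost - salvage) evenly over the useful life.
        return (cost - salvage) / life_years

    cost, salvage = 100e6, 5e6  # hypothetical infrastructure asset
    print(annual_depreciation(cost, salvage, 30))  # ~3.17M per year over 30 years
    print(annual_depreciation(cost, salvage, 40))  # ~2.38M per year with 10 extra years of life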
1:38:25 – It is when you think about it, right?
1:38:26 Yeah.
1:38:28 What do you think of DOGE?
1:38:30 – The currency or the initiative?
1:38:32 – The initiative, not the crypto.
1:38:36 – I think it's a virtuous thing, because
1:38:39 it’s shining a spotlight on a lot of things
1:38:40 that were just done because they were done
1:38:42 and you get this bloat.
1:38:45 Or in some cases there was like overt obfuscation.
1:38:48 So I think sunlight heals all, and putting a spotlight
1:38:52 on ridiculous spending or ridiculously inefficient things.
1:38:56 I will say I grew up sort of a center left Democrat
1:38:57 my entire life.
1:38:59 The first time you go to the DMV,
1:39:00 you become a Republican.
1:39:04 Like it’s just like you want systems that have competition
1:39:06 because competition makes things better
1:39:08 because if you have a monopoly on something
1:39:10 you don’t have to improve.
1:39:12 If there’s one regional carrier for an airline,
1:39:14 if there’s one restaurant,
1:39:17 if there’s one place you have to go to for your passport,
1:39:19 you don’t want that sort of centralized control
1:39:20 because the service is going to suck
1:39:22 because they don’t have to do any better.
1:39:26 So I think if you can put a spotlight on excess
1:39:29 and waste and bureaucracy and at least begin
1:39:32 the conversations at a bare minimum of wait,
1:39:34 we’re spending how much money on what?
1:39:36 I think that that’s a virtuous thing.
1:39:39 Whether or not these things will be effective
1:39:42 at really reducing costs, TBD,
1:39:44 but it actually seems quite positive
1:39:46 that they may hit some of their targets
1:39:49 of trying to reach, what is it, a billion a day or more.
1:39:52 And if that could end up reducing the deficit by 10%
1:39:54 or 20%, let alone 50%, going from two trillion
1:39:56 to a trillion, that would be incredible.
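The rough arithmetic behind those figures, in Python; the billion-a-day target and the two-trillion-dollar deficit are the numbers quoted in the conversation, not verified data.

    daily_savings = 1e9  # "a billion a day," as quoted
    deficit = 2e12       # roughly $2 trillion annual deficit, as quoted

    annual = daily_savings * 365
    print(f"${annual / 1e9:.0f}B saved per year")              # ~$365B
    print(f"{annual / deficit:.0%} of the deficit")            # ~18%, in the 10-20% range
    print(f"${deficit / 2 / 365 / 1e9:.2f}B/day to halve it")  # ~$2.74B per day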
1:39:59 So whatever the motivation,
1:40:00 I don’t believe it’s patriotism.
1:40:02 It could be intellectually competitive.
1:40:03 It could be power.
1:40:04 Whatever the motivation,
1:40:07 and whatever the means to the end,
1:40:09 I think the end is a virtuous pursuit.
1:40:12 – If you were to take over a country, effectively,
1:40:15 and you were in charge of policies and regulations,
1:40:17 what would you do to attract capital
1:40:21 and become competitive over the next 20 or 30 years?
1:40:22 What sort of things would you implement?
1:40:25 What would you get away from and not do?
1:40:27 – Well, I have an adjacent answer,
1:40:29 if I were Secretary of State or Secretary of Defense
1:40:31 for a day, which I'll give you first,
1:40:34 which is I would really put priority on Africa
1:40:36 as a continent, and particularly the Sahel and Maghreb,
1:40:38 because I do believe between violent extremists,
1:40:41 Russian mercenaries, and Chinese infrastructure,
1:40:43 you are one terror event, projected into Europe,
1:40:45 away from creating the next Afghanistan,
1:40:48 and suddenly NATO and the US are in there dealing with ISIS.
1:40:50 You’re already seeing the first authorized strike
1:40:53 by Trump on ISIS in Somalia.
1:40:58 And so you’ve got Sudan, Chad, Mali, Niger,
1:41:00 like it is just a hotbed of people
1:41:02 that were coming from Syria and Afghanistan,
1:41:05 Islamic extremists, it is a bad situation.
1:41:07 And I think that we should be proactive there
1:41:08 before we have to be reactive
1:41:10 and it’s a lot more costly in lives and money
1:41:12 and blood and treasure.
1:41:12 The second thing I would do is a
1:41:15 hemispheric hegemony declaration.
1:41:19 I just went to Nicaragua, for a variety of reasons.
1:41:20 We went instead of Costa Rica,
1:41:22 but I'd feel much safer in Costa Rica.
1:41:23 And I was worried that I was not gonna be able
1:41:24 to leave Nicaragua, and we went with a friend
1:41:26 who happens to be a prominent journalist.
1:41:28 He was not allowed entry into the country.
1:41:31 So it really threw a wrench into our family vacation,
1:41:35 his family of five, my family of five,
1:41:39 because the government is trying to take over
1:41:42 the banking system and they don't want it to be covered
1:41:43 by financial journalists and these kinds of things.
1:41:46 And you look at who is in there
1:41:48 and you literally have presence from Hezbollah,
1:41:51 from China, CCP, from Iran.
1:41:52 It’s a bad situation.
1:41:58 The places in most of Central America,
1:42:01 the Caribbean, and South America that we think of
1:42:04 as vacation spots, where we get our coffee and go on a nice vacation:
1:42:06 massive infiltration from adversaries.
1:42:09 And so I think we are losing the game
1:42:11 and I would declare almost like a new Monroe Doctrine
1:42:15 kind of thing where we say the entire Western hemisphere,
1:42:17 you’ve got a billion people,
1:42:21 both ability to project into the Pacific and the Atlantic.
1:42:24 You’ve got mostly English and Spanish speaking people
1:42:25 safer Brazil with Portuguese.
1:42:27 You’ve got a ton of resources,
1:42:30 a ton of brilliant educated doctors and whatnot.
1:42:31 And I would just shore up this hemisphere,
1:42:33 particularly against CRINK,
1:42:34 you know, China, Russia, Iran, North Korea,
1:42:36 and their influence.
1:42:38 If we were worried about, like, Cuba
1:42:39 and the Cuban Missile Crisis
1:42:43 and things proximate, 90 miles or whatever from Florida,
1:42:47 I think that China is doing very smart strategic things.
1:42:48 So us going back in and saying,
1:42:52 we’re gonna reclaim the Panama Canal and our influence on it,
1:42:55 setting aside provocations of Mexico,
1:42:58 like the Gulf of America versus Gulf of Mexico.
1:43:00 But I think that having influence in that region
1:43:01 is really important.
1:43:03 So those would be the two things, as SecDef or SecState,
1:43:06 that I would do: declare a hemispheric hegemony,
1:43:07 make sure that we shore up our allies in the region
1:43:10 and get our adversaries and their presence out, in part
1:43:11 because there’s so much commerce
1:43:13 and money and infrastructure that’s going in
1:43:15 and then focus on the Sahel and Maghreb in Africa.
1:43:18 For country competition, brain drain,
1:43:19 you want the best and the brightest.
1:43:21 You need to fund basic research and basic science.
1:43:22 It should be undirected.
1:43:24 That’s the serendipity and the randomness
1:43:26 and the optionality that leads to great breakthroughs.
1:43:29 You want capital markets to be these low entropy carriers
1:43:32 for high entropy entrepreneurial surprise.
1:43:35 So predictable rules and regulations.
1:43:37 I would lower taxes. I don't know why we don't have a flat tax.
1:43:38 I mean, I know why we don’t,
1:43:42 but I would just have a flat tax, make it super simple.
1:43:43 Rich people are gonna scout around
1:43:45 and figure out how to get around the tax system anyway
1:43:47 and poor people are burdened by it.
1:43:49 I get progressive versus regressive,
1:43:53 but I would just simplify our tax scheme massively.
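A minimal sketch, in Python, of the flat-versus-progressive contrast just described; the rates and the single bracket are invented for illustration, not a policy proposal.

    def flat_tax(income, rate=0.17):
        # One rate on every dollar: trivially simple to compute and audit.
        return income * rate

    def progressive_tax(income):
        # Hypothetical two-bracket schedule: 10% up to $50k, 30% above.
        return min(income, 50_000) * 0.10 + max(income - 50_000, 0) * 0.30

    for income in (30_000, 100_000, 1_000_000):
        print(income, flat_tax(income), progressive_tax(income))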
1:43:55 Anybody that is coming here
1:43:56 and getting an education in this country,
1:43:58 I would staple a visa to their diploma, as long as they stay here
1:44:01 and work for an American company for at least five years,
1:44:03 and let them become citizens. We want the best and the brightest here.
1:44:06 We are, as an example again,
1:44:07 and I don’t mean to make this like all China,
1:44:09 but they are our most dominant adversary,
1:44:13 50% of all AI undergrads in the world today
1:44:15 are being graduated by China.
1:44:17 In our own country in the United States,
1:44:22 38% of researchers in AI are from China.
1:44:24 So we’re outnumbered even domestically, it’s a big deal.
1:44:27 And we used to attract something like 23%
1:44:31 of all foreign graduates here, that’s down to 15%.
1:44:32 People are either going to other countries
1:44:33 or staying in their own country.
1:44:37 And so we need that, that’s what won us World War II.
1:44:40 You know, if Einstein had stayed in Germany or…
1:44:43 – What causes that to happen? Is it the tax rate?
1:44:45 Is it the opportunities available?
1:44:46 Is it housing costs?
1:44:49 Like what are the factors that go into people leaving?
1:44:51 – Well, start with the attracting part.
1:44:53 You know, as Walter Wriston said,
1:44:54 people go where they’re welcome
1:44:56 and stay where they’re well treated.
1:44:58 So we should be welcoming.
1:44:59 Now there’s a debate about immigration
1:45:03 and we should distinguish between the bad people
1:45:05 and like brilliant people and we should want them here.
1:45:08 – That just comes down to, like, basic vetting.
1:45:10 – Right, but you know, some of that is exploited.
1:45:13 You know, a lot of these consultants with Wipro
1:45:16 and some of the Indian business process companies,
1:45:20 BPO, business process outsourcing.
1:45:23 Housing is a difficult one, but people can always figure it out;
1:45:24 New York City is expensive,
1:45:26 but you can live in Long Island City
1:45:27 and Brooklyn and Queens.
1:45:33 But I think, yeah, housing availability,
1:45:35 we have, I mean, our cities are so rich
1:45:36 and filled with culture and people.
1:45:37 And particularly if you’re a young person,
1:45:39 you want to be around the density of that
1:45:41 because you’re trying to find peers and a mate
1:45:44 and all of that, even if you’re from another country,
1:45:45 you go to New York, you can find your enclave
1:45:48 of Korean or Chinese or Russian or Ukrainian
1:45:52 or Israeli and Caribbean, it’s all here.
1:45:54 So, yeah, I think just having a culture
1:45:57 that embraces this and encourages it,
1:45:59 you already have a robust venture capital system
1:46:00 of risk taking, many other countries don’t have that.
1:46:02 So that’s another thing.
1:46:05 But if you were to design the system from scratch,
1:46:08 you want openness with security.
1:46:10 So some means of vetting.
1:46:12 You want a great education system that can attract people
1:46:15 that view it as a status to have graduated
1:46:17 from that particular school.
1:46:18 And people want to be around people,
1:46:20 whether this is in a company or in a country
1:46:21 that are like them, that are competitive
1:46:24 and highly intellectual, that they respect or admire,
1:46:25 want to compete with.
1:46:26 So that’s number two.
1:46:29 Then there's something that we did in the 1980s,
1:46:31 I think it was 1980, the Bayh-Dole Act,
1:46:33 where government funding for research
1:46:36 would allow the university and the principal investigator
1:46:38 to actually own the intellectual property
1:46:39 that became an asset.
1:46:42 That asset could be licensed to a company.
1:46:45 It absolutely opened the floodgates for venture capital
1:46:46 to be able to commercialize that.
1:46:49 That happened to coincide with ERISA changes
1:46:52 allowing retirement plans to go into venture capital.
1:46:54 So now you have a pool of risk capital,
1:46:58 which you need for taking risk on unseasoned people
1:46:59 and unseasoned companies.
1:47:02 And then you need a robust capital market system
1:47:04 to be able to continue to allocate money.
1:47:05 But again, capital goes where it’s welcome,
1:47:06 stays where it’s well treated,
1:47:09 true of human capital, true of financial capital.
1:47:12 And so a rules-based system, a strong military,
1:47:13 if you were starting a country from scratch,
1:47:14 you’re not gonna have that.
1:47:17 But then you need great allies; think about Singapore.
1:47:19 Yeah, I think that’s a phenomenal model.
1:47:20 Singapore is a great model.
1:47:21 – That’s awesome.
1:47:23 I hope some of the government people we have
1:47:25 are listening to this.
1:47:27 I wanna come to IP for a second and copyright
1:47:29 and then wrap it up here.
1:47:33 So do you think that AI should be able to create IP
1:47:34 or copyrighted material?
1:47:36 Like if I tell AI to write a book,
1:47:38 should that be copyrightable?
1:47:41 And who owns the copyright, the AI or me for the prompt?
1:47:44 – It’s super complicated because the first debate
1:47:45 about this, which is the great irony, right?
1:47:48 Because OpenAI investors and stakeholders were up in arms
1:47:50 that R1 stole from OpenAI.
1:47:51 But you can make the argument that OpenAI
1:47:53 is trained on the repository of, like, the public internet
1:47:56 and every art that's ever been produced and whatnot.
1:47:59 Now, if you were an art student and you went to the Louvre
1:48:03 or to the Met or to MoMA and you sat there and studied it
1:48:04 or took a picture of it,
1:48:07 and then we learned through copying.
1:48:09 We learned through symmetry and imitation
1:48:11 and we remixed these things.
1:48:14 And there’s this great, what is his name?
1:48:17 Kirby who did everything as a remix.
1:48:18 I just sent it to a friend,
1:48:20 but it was like updated last year.
1:48:24 And it’s so brilliant in its compilation
1:48:29 of every facet of culture that you love from books
1:48:33 to your Tarantino movie, to the Beatles, to art,
1:48:34 to scenes and movies.
1:48:37 Like it was all copied from something, you know?
1:48:40 And you’re like, wait a second.
1:48:42 That riff came from this 1940s song
1:48:44 from this African American blues guitarist
1:48:47 that John Lennon or Paul McCartney stole.
1:48:50 And you’re like, and so everything was sort of stolen
1:48:52 from somebody.
1:48:54 It was imitated, tweaked slightly.
1:48:56 And by the way, that’s what we are, right?
1:48:58 I mean, you get two people who exist.
1:49:00 And then there’s this genetic recombination
1:49:02 of their source material.
1:49:03 And every one of my kids is different,
1:49:04 but they came from the same two people,
1:49:09 and so remixing is, like, how everything happens.
1:49:12 And it’s like Matt Ridley said, ideas having sex.
1:49:15 And so to your core question, yes,
1:49:18 I think that if I do a calculation
1:49:20 and I’m using a calculator instead of like
1:49:22 doing math by pencil,
1:49:25 that calculation is still an input into my output.
1:49:29 If I’m using AI to generate art
1:49:33 and it’s my prompt instead of the gesture of my brush
1:49:35 and the strokes of my hand,
1:49:37 then I think it should still be mine.
1:49:41 Even if it was trained like a great art student was
1:49:44 by staring and learning and studying and then emulating.
1:49:45 And then these things evolve.
1:49:47 You look at Picasso through all the different phases
1:49:49 of his style of art,
1:49:54 from like realism and portraiture to cubism and abstract.
1:49:57 These things are just evolved until you find the white space
1:49:59 that defines you.
1:50:01 And that goes back to, like,
1:50:03 if I train an AI on everything I've ever written,
1:50:06 then, like, my voice is,
1:50:08 a new voice is rare and hard to create.
1:50:13 So I actually think we should probably worry more about
1:50:16 how you break free from the constraints of these things
1:50:20 than about whether they should be copyrightable.
1:50:22 – Well, that goes back to our earlier conversation.
1:50:24 Do we just end up in this lane that we can’t get out of?
1:50:26 Or we don’t even recognize we’re in a lane, I guess.
1:50:27 – Right.
1:50:29 – In some ways, even more devastating.
1:50:31 – And the brilliance of all of this, again,
1:50:33 like I’m a big believer that we make our fictions
1:50:35 and our fictions make us.
1:50:37 And if you’ve watched Westworld,
1:50:38 I don’t know if you’ve-
1:50:40 – Watched an episode or two, yeah.
1:50:41 – The first episode, you know,
1:50:44 you have a guest who comes to the park
1:50:46 and he’s sort of squinty-eyed looking at the host
1:50:48 who’s actual robot that you learn later on,
1:50:50 but he doesn’t know at the time.
1:50:52 And he’s like looking at her and she goes,
1:50:53 you want to ask, so ask.
1:50:55 And she knew what he was going to ask her.
1:50:56 And he goes, are you real?
1:50:57 – Yeah.
1:51:00 – And she goes, well, if you have to ask, does it matter?
1:51:03 And, you know, I’m going to sort of spoiler alert
1:51:04 on Westworld.
1:51:08 It’s all about these hosts interacting with the guests
1:51:10 and they’re there to serve the guests.
1:51:12 But in fact, it’s the opposite
1:51:14 because every host is watching and learning
1:51:17 every small nuance, every gesture,
1:51:18 every inflection of your voice,
1:51:21 every cadence of your speech.
1:51:24 And it’s learning you so that it can basically create
1:51:25 a perfect simulacrum of you
1:51:28 and 3D print biologically, a version of you.
1:51:32 And so it’s a really profound philosophical question
1:51:34 about how we’re interacting with these things.
1:51:37 But all these things have been trained
1:51:41 on the sum total of all human creation.
1:51:42 And now they’re being trained
1:51:43 on the sum total of human creation
1:51:45 plus artificial creation.
1:51:47 And some of that is done with human prompts
1:51:49 and some of it is going to be done automatically.
1:51:50 But I just think it’s going to be part
1:51:54 of the total overture of creation.
1:51:56 And I think it’s a beautiful thing.
1:51:59 So does anything about this scare you?
1:52:02 About AI, like the direction we’re heading?
1:52:06 I think in the near term, the thing that scares me
1:52:10 is, again, scarcity and abundance:
1:52:13 what becomes abundant is people's ability
1:52:14 to use AI to produce content.
1:52:17 And I don't know, if I'm getting an email from somebody,
1:52:20 did an AI write it or optimize it,
1:52:23 or was it really a thoughtful note from John?
1:52:26 This young college student who’s persistent,
1:52:28 was it really them?
1:52:31 And can I infer something about their persistence
1:52:32 and their style of writing?
1:52:34 Or did they put it into an AI and know
1:52:37 from the repository of what influences me
1:52:39 and what I’ve talked about and what I care about
1:52:40 that they, you know, so many people are like,
1:52:41 oh, I heard you on this podcast
1:52:43 and I felt compelled to write you
1:52:46 because I too care deeply about family
1:52:46 and you know, blah, blah, blah, right?
1:52:48 I mean, those are surface level stuff
1:52:51 but somebody that’s more nuanced about it.
1:52:53 Am I being manipulated by them or by the AI?
1:52:55 And if it’s them, there’s a cleverness to it
1:52:56 that I might admire.
1:52:59 If it’s just the AI, I feel suddenly more vulnerable.
1:53:02 So what becomes abundant is the sort of,
1:53:04 not just information, misinformation or whatever,
1:53:05 but the production of it;
1:53:08 what becomes scarce is veracity and truth.
1:53:13 And that to me is less scary,
1:53:16 but more that you need to be inoculated
1:53:18 and immunized, vaccinated
1:53:19 and you’re almost going to become
1:53:21 a little bit more distrusting
1:53:24 but like your reactions right now,
1:53:25 I might say something and you might say,
1:53:28 oh, and maybe you actually thought it was profound
1:53:30 or maybe you’re like, this is not interesting at all
1:53:32 but there’s something authentic, right?
1:53:35 About this and we are reading each other
1:53:36 and reacting to each other.
1:53:38 That to me is going to become ever more valuable.
1:53:43 So our humanity, the interactions are the scarce thing.
1:53:48 Even if, and as, through other mediums, it's hard to tell.
1:53:50 I love that.
1:53:51 I always love talking to you too.
1:53:55 So I get so much energy and so many ideas out of our conversations
1:53:56 and I’ll be chewing on this for weeks.
1:53:59 I know we always end with, what is success for you?
1:54:00 You’ve answered this before,
1:54:02 I’m curious to see how it changes.
1:54:05 It really is the eyes of my kids.
1:54:07 It is for me, them saying my dad did that
1:54:10 or my dad made that or my dad was present for me.
1:54:13 And I think it’s the story I tell myself
1:54:16 about my own life and my relationship with my father
1:54:17 and wanting to invert that.
1:54:22 And so for me, success is like them saying I’m proud.
1:54:24 He was my dad, he was a great father
1:54:26 and I’m proud that he does all these things.
1:54:30 And when we find a company or like some of these secrets
1:54:33 that I talked about, I share them with my kids.
1:54:35 And so I was taking my middle daughter,
1:54:37 my oldest plays tennis, my middle does soccer,
1:54:40 My little guy plays basketball like 10 days a week.
1:54:42 He’s better at nine years old than I was at 19
1:54:44 and I was reasonably good.
1:54:47 And I like sharing these stories.
1:54:49 So I’m like, you know, next week there’s a story
1:54:52 that’s gonna come out about this particular thing.
1:54:55 And nobody knows about it except the company and now you.
1:54:57 And they’re like, oh my God, really?
1:54:58 And I’m like, yeah.
1:54:59 And like, you can’t tell anybody, you know?
1:55:01 And I just, I love that feeling.
1:55:02 – That’s awesome.
1:55:03 – And I do it in part.
1:55:05 Not because I want them to learn about it,
1:55:06 but I want them to be proud of me
1:55:09 as selfish and vainglorious as that is.
1:55:11 And to be like, oh, my dad’s cool, you know?
1:55:12 So that’s a success.
1:55:13 – I think you’re cool.
1:55:15 You’re not my dad, but man.
1:55:17 – I’ll tell you, my 15-year-old daughter
1:55:19 definitively does not think I’m cool.
1:55:21 She says, you are so cringe.
1:55:24 – And I think everybody’s kids say that, right?
1:55:25 It’s the same with my kids.
1:55:26 Like, instead of telling them something,
1:55:28 and sometimes they might be listening to this,
1:55:31 I'll get my friends to tell them.
1:55:32 And then all of a sudden it holds weight.
1:55:35 But if I tell them the same thing, like, yeah, whatever.
1:55:37 – Same thing with our spouses and that, yeah.
1:55:38 – Thank you very much.
1:55:40 – She ain’t always great to do with you.
1:55:43 I admire what you’ve built and the repository
1:55:46 and compendium of the ideas and the minds
1:55:46 that you’ve assembled.
1:55:48 It’s like a great thing for the world.
1:55:49 Thank you.
1:55:50 – Thank you.
1:55:52 Thank you for listening and learning with me.
1:55:54 If you’ve enjoyed this episode,
1:55:56 consider leaving a five-star rating or review.
1:55:58 It’s a small action on your part
1:56:00 that helps us reach more curious minds.
1:56:03 You can stay connected with Farnam Street on social media
1:56:06 and explore more insights at fs.blog,
1:56:08 where you’ll find past episodes,
1:56:11 our mental models, and thought-provoking articles.
1:56:13 While you’re there, check out my book, “Clear Thinking.”
1:56:16 Through engaging stories and actionable mental models,
1:56:20 it helps you bridge the gap between intention and action.
1:56:24 So your best decisions become your default decisions.
1:56:25 Until next time.

While Silicon Valley chases unicorns, Josh Wolfe hunts for something far more elusive: scientific breakthroughs that could change civilization. As co-founder and managing partner of Lux Capital, he’s looking for the kind of science that turns impossible into inevitable. Josh doesn’t just invest in the future—he sees it coming before almost anyone else. 

 

In this conversation, we explore: 

  • The rapid evolution of AI and potential bottlenecks slowing its growth 
  • The geopolitical battle for technological dominance and rise of sovereign AI models 
  • How advances in automation, robotics, and defence are shifting global power dynamics 
  • Josh’s unfiltered thoughts on Tesla and Elon Musk 
  • AI’s revolution of medical research 
  • Parenting in a tech-dominated world 
  • How AI is forcing us to rethink creativity, intellectual property, and human intelligence itself 
  • Why the greatest risk isn’t AI itself—but our ability to separate truth from noise 

 

Despite the challenges ahead, Josh remains profoundly optimistic about human potential. He believes technology isn’t replacing what makes us human—it’s amplifying it. This episode will challenge how you think about innovation, risk, and the forces shaping our future. If you want to stay ahead of the curve, you can’t afford to miss it. 

 

Josh Wolfe co-founded Lux Capital to support scientists and entrepreneurs who pursue counter-conventional solutions to the most vexing puzzles of our time. He previously worked in investment banking at Salomon Smith Barney and in capital markets at Merrill Lynch. Josh is a columnist with Forbes and Editor for the Forbes/Wolfe Emerging Tech Report. 

(00:00:00) Introduction

(00:01:42) Interview with Josh Wolfe

(00:02:46) Current Obsessions

(00:05:11) AI and its Limitations

(00:10:58) Memory Players in AI

(00:13:27) Human Intelligence as a Limiting Factor

(00:15:38) Disruption in Elite Professions

(00:17:15) AI and Blue-Collar Jobs

(00:18:29) Implications of AI in Coding

(00:19:40) AI and Company Margins

(00:25:48) AI in Pharma

(00:26:44) AI in Entertainment

(00:28:04) AI in Scientific Research

(00:30:24) AI in Scientific Research

(00:33:31) AI in Patent Creation

(00:34:49) AI in Company Creation

(00:35:33) Discussion on Tesla and Elon Musk

(00:40:54) AI in Investment Decisions

(00:42:20) AI in Analyzing Business Fundamentals

(00:45:27) AI, Privacy, and Information Gods

(00:53:04) AI and Art

(00:56:43) AI and Human Connection

(00:58:22) AI, Aging, and Memory

(01:00:46) The Impact of Remote Work on Social Dynamics

(01:03:18) The Role of Community and Belonging

(01:05:44) The Pursuit of Longevity

(01:11:58) The Importance of Family and Purpose

(01:14:18) Information Processing and Workflow

(01:23:00) AI and Personal Style

(01:26:03) Investment in Military Technology

(01:28:09) Global Conflict and Military Deterrence

(01:31:28) Information Warfare

(01:32:32) Infiltration and Weaponization of Systems

(01:37:06) Infrastructure Maintenance and Growth

(01:38:27) DOGE Initiative

(01:40:09) Attracting Capital and Global Competitiveness

(01:43:16) Attracting Talent and Immigration

(01:45:42) Designing a System from Scratch

(01:47:30) AI and Intellectual Property

(01:51:56) The Fear of AI

(01:53:57) Defining Success

(01:55:38) Closing Remarks

 

Newsletter – The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it’s completely free. Learn more and sign up at fs.blog/newsletter

Upgrade — If you want to hear my thoughts and reflections at the end of the episode, join our membership: ⁠⁠⁠⁠⁠⁠⁠fs.blog/membership⁠⁠ and get your own private feed.

Watch on YouTube: @tkppodcast

Learn more about your ad choices. Visit megaphone.fm/adchoices
