Danny Wu, Head of AI Products at Canva, explains how AI is transforming visual communication from a tool for professionals into superpowers for everyone. With over 18 billion uses of their AI features, Canva demonstrates how machine learning evolved from content search to generative design tools that let anyone create on-demand visuals. Plus, hear how they’re prioritizing user consent and transparency while scaling AI across 230 million users. Learn more at ai-podcast.nvidia.com
-
Canva’s Danny Wu on Giving 230 Million Users Superpowers with AI – Ep. 265
-
Talking to a billionaire about how he uses ChatGPT
AI transcript
0:00:05 So that’s my advice: every day, every day, you should be in ChatGPT.
0:00:07 I don’t care what your job is, right?
0:00:10 You could be a sommelier at a restaurant and you should be using ChatGPT every day
0:00:12 to make yourself better at whatever it is you do.
0:00:20 Can I ask you about the story really quick?
0:00:23 And you have like a list of stuff here that’s like all amazing.
0:00:24 Actually, a lot of it’s very actionable.
0:00:27 But the reason I want to ask you about the story is for the listener.
0:00:29 Dharmesh founded HubSpot, $30 billion company.
0:00:30 You’re the CTO.
0:00:34 So you’re an OG for Web 1.0, Web 2.0.
0:00:39 And your first round or one of your first rounds was funded by Sequoia.
0:00:42 Your partner, Brian, is an investor at Sequoia.
0:00:44 So you are an insider.
0:00:45 You’re an insider, I believe.
0:00:47 You may not acknowledge it.
0:00:47 I don’t know if you do or do not.
0:00:48 You are an insider.
0:00:51 The cool part is that you’re accessible to us.
0:00:55 When did you first see what Sam was working on?
0:00:58 And how long have you felt that this is going to change everything?
0:01:02 So I actually have known Sam before he started OpenAI.
0:01:06 And I got access to the GPT API.
0:01:11 It was a toolkit for developers to be able to kind of build AI applications, right?
0:01:12 Effectively.
0:01:17 And so I built this little chat application that used the API.
0:01:19 And so I could have a conversation with it.
0:01:20 So I actually built that thing that night.
0:01:22 It was a Sunday.
0:01:25 I had the full transcript two years before ChatGPT came out.
0:01:26 So that’s four years ago?
0:01:28 It was 2020.
0:01:29 So five years ago.
0:01:30 Wow.
0:01:30 Okay.
0:01:31 This summer.
0:01:36 And so even then, it’s like, as soon as I used it, you sort of have that moment.
0:01:38 It’s the same that all of us have with ChatGPT.
0:01:40 I just had it two years earlier.
0:01:43 And then I’m showing everyone, like, Brian, you are not going to believe.
0:01:46 Like, I have this thing, you know, through this company called OpenAI.
0:01:49 And watch me, like, type stuff into it and see, like, see what happens.
0:01:52 And we would ask it, like, strategic questions about HubSpot.
0:01:54 It’s like, how should it, like, who are the top competitors?
0:01:59 And even then, two years before ChatGPT, it was shockingly good, right?
0:02:03 But the thing you sort of have to understand about the constraints of how a large language
0:02:08 model actually works is that you type and you have a limit. Just imagine this: if we’re
0:02:13 going to use a physical analog, a sheet of paper can only fit a certain number of words
0:02:14 on it.
0:02:20 And that certain number of words includes both what you write on it, that says, I want
0:02:23 you to do this, and the response has to fit on that sheet of paper.
0:02:28 And that sheet of paper is what, in technical terms, would be called the context window.
0:02:30 And you’ll hear this tossed around.
0:02:33 It’s like, oh, this, you know, ChatGPT has a context window of whatever, or this model has
0:02:34 a context window of whatever.
0:02:35 That’s what they’re talking about.
0:02:37 All right, so why is that?
0:02:39 Why does anybody care about the context window?
0:02:45 It’s like, well, sometimes you want to provide a large piece of text and say, summarize
0:02:45 this for me.
0:02:48 Well, in order for you to do that, it has to fit in the context window.
0:02:52 So if you want to take two books worth of information and say, I want you to summarize this in 50
0:02:57 words, those two books worth of information have to fit inside the context window in order
0:02:58 for the LLM to process it.
0:03:03 Most of the frontier models are roughly 100,000 to 200,000.
0:03:06 They measure it in tokens, which is like 0.75 of a word.
0:03:07 That’s like a book.
0:03:08 So yeah, is that a book?
0:03:09 I think it’s an average.
0:03:13 I think the average book is like 240,000 words, I think, but I’m not sure.
0:03:13 That’s not a lot.
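[Editor’s note: the arithmetic being described can be sketched in a few lines of Python. This is a rough illustration only: the 0.75 words-per-token figure is the rule of thumb from the conversation, and the window size and reply headroom are made-up defaults, not any particular model’s.]

```python
# Rough sketch of the "sheet of paper" check described above.
# Assumption: one token is about 0.75 of a word, so a text has
# roughly words / 0.75 tokens. Real tokenizers vary by text.
def estimate_tokens(text: str) -> int:
    return round(len(text.split()) / 0.75)

# The prompt AND the model's reply share one context window,
# so leave headroom for the answer. Both numbers are illustrative.
def fits_in_context(text: str, window_tokens: int = 128_000,
                    reply_headroom: int = 4_000) -> bool:
    return estimate_tokens(text) + reply_headroom <= window_tokens
```

By this estimate a single 240,000-word book already comes out to about 320,000 tokens, well past a 100,000–200,000-token window — the "sorry, that doesn’t fit" case discussed next.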
0:03:19 So when I, the way that I use ChatGPT is I’ll like, let’s say a fun way is I’ll, I’ll put
0:03:22 a historical book that I loved reading and I’ll be like, summarize this so I remember
0:03:23 the details.
0:03:28 So you’re telling me that if it’s a thousand page book, it’s not even going to accurately
0:03:29 summarize that book?
0:03:30 It won’t fit.
0:03:35 Yeah, like, if you paste something large enough into ChatGPT or whatever AI application you’re
0:03:38 using, it will come back and say, sorry, that doesn’t fit.
0:03:41 Effectively, what they’re saying is that does not fit in the context window.
0:03:42 So you’re gonna have to do something different.
0:03:44 All right.
0:03:50 A few episodes ago, I talked about something and I got thousands of messages asking me to
0:03:51 go deeper and to explain.
0:03:52 And that’s what I’m about to do.
0:03:57 So I told you guys how I use ChatGPT as a life coach or a thought partner.
0:04:01 And what I did was I uploaded all types of amazing information.
0:04:07 So I uploaded my personal finances, my net worth, my goals, different books that I like,
0:04:09 issues going on in my personal life and businesses.
0:04:12 I uploaded so much information.
0:04:17 And so the output is that I have this GPT that I can ask questions that I’m having issues
0:04:18 with in my life.
0:04:20 Like, how should I respond to this email?
0:04:21 What’s the right decision?
0:04:25 Knowing that you know my goals for the future, things like that.
0:04:29 And so I worked with HubSpot to put together a step-by-step process, showing the audience,
0:04:35 showing you the software that I use to make this, the information that I had ChatGPT ask me,
0:04:36 all this stuff.
0:04:37 So it’s super easy for you to use.
0:04:40 Like I said, I use this like 10 or 20 times a day.
0:04:41 It’s literally changed my life.
0:04:43 And so if you want that, it’s free.
0:04:44 There’s a link below.
0:04:47 Just click it, enter your email, and we will send you everything you need to know to set
0:04:49 this up in just about 20 minutes.
0:04:52 And I’ll show you how I use it again, 10 to 20 times a day.
0:04:53 All right.
0:04:54 So check it out.
0:04:55 The link is below in the description.
0:04:57 Back to the episode.
0:05:03 I usually use projects and I have, like, let’s say a health project and I’ll upload tons and
0:05:05 tons of books or tons of blood work.
0:05:09 And I’m hoping that it’s going to pull from all those books in my project.
0:05:10 Is that true?
0:05:11 That is true.
0:05:13 So here, and this is a perfect segue, right?
0:05:15 Because this is the next big unlock.
0:05:19 So number one thing to like understand in our heads is there’s this thing called a context
0:05:19 window.
0:05:20 Here’s why it matters.
0:05:24 So let’s, we’re going to take that, we’re going to push it on the stack, and we’re going
0:05:26 to come back to it.
0:05:27 So the thing we have to remember is two things.
0:05:31 Number one, it doesn’t know what it’s never been trained on.
0:05:33 That’s one of the limitations, right?
0:05:38 So if you ask it something that only you, Sam, have in your, in your files and your email,
0:05:42 whatever, that the training model was, I mean, the LLM was never trained on, it’s not going
0:05:43 to know those things.
0:05:44 It doesn’t matter how smart it is.
0:05:46 It’s just information it’s never seen.
0:05:47 So it’s not going to know that.
0:05:48 That’s kind of problem number one.
0:05:50 Problem number two.
0:05:56 So let’s say your website for Hampton was actually, um, uh, in the training set,
0:05:56 right?
0:06:00 Because it’s on the public internet or whatever, but the training happened at a particular point
0:06:01 in time.
0:06:04 Like they ran the training, ran the training, ran the training and said, okay, we’re done
0:06:05 with the training now.
0:06:07 The machine is done.
0:06:10 Let’s let the customers in right now.
0:06:14 If the website changes, it’s not going to know about those new updates that you’ve made
0:06:17 to your website because the training was done at a particular date.
0:06:19 It completed its kind of training course, right?
0:06:22 So those are the two things we sort of have to remember is that it doesn’t know what it
0:06:23 doesn’t know.
0:06:27 And number two, that the things that did know were frozen at that particular point in time,
0:06:27 right?
0:06:28 So it hasn’t seen new information.
0:06:32 And those are relatively large limitations, right?
0:06:35 So especially if you’re going to use it for business use or personal, it’s like, well,
0:06:39 I’ve got a bunch of stuff that I want it to be able to answer questions about or whatever
0:06:42 inside my company or inside, uh, my own personal life.
0:06:44 How do I get it to do that?
0:06:46 Um, and so here’s the hack.
0:06:48 And this is, this was a brilliant, uh, brilliant discovery.
0:06:54 So what they figured out is to say, okay, let’s say you have a hundred thousand documents
0:06:56 that were never on the internet.
0:06:57 That’s in your company.
0:07:00 It’s all your employee hiring practices, your model.
0:07:01 Here’s how we do compensation.
0:07:02 All of it, right?
0:07:03 It’s like, oh, you have a hundred thousand documents.
0:07:06 And obviously you can’t ask questions about those hundred thousand documents straight
0:07:07 to ChatGPT.
0:07:09 It doesn’t know anything about those, never seen those documents.
0:07:14 So this is, and we talked about this two episodes ago, um, this thing called vector embeddings
0:07:17 and RAG retrieval augmented generation.
0:07:20 Um, and I’ll, I recommend you folks go listen to that.
0:07:23 I think it’s a, it’s a, it’s a fun episode, but I’ll kind of summarize it, which is what
0:07:24 you can do.
0:07:27 And what we do is to say, we’re going to take those hundred thousand documents and we’re
0:07:31 going to put them in this special database called a vector store, a vector database.
0:07:36 And what we can do now is when someone asks the question, we can go to the vector store,
0:07:41 not the LLM, go to the vector store and say, give me the five documents out of the hundred
0:07:45 thousand that are most likely to answer this question based on the meaning of the question,
0:07:48 not keywords based on the actual meaning of the question.
0:07:51 So it’s called a semantic search is what the vector store is doing.
0:07:53 So it comes back with five documents.
0:07:58 Let’s just say now, as it turns out, five documents do fit inside the context window.
0:08:01 So effectively we said, okay, well, yeah, it would have been nice
0:08:04 had you trained on the hundred thousand documents, but that was not practical because I didn’t
0:08:05 want to expose all of that.
0:08:07 I’m going to give you the five documents that you actually need.
0:08:10 I can just say, give it to you in the context window.
0:08:16 And now, as you can imagine, it does an exceptionally good job at answering the question when it knows
0:08:17 the five documents it should be looking at.
0:08:18 You just gave them to it.
0:08:18 Right?
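[Editor’s note: the retrieve-then-stuff flow described here can be sketched like this. To keep it runnable anywhere, a bag-of-words counter stands in for a real embeddings model, so the "semantic" part is faked; the shape — retrieve the top five, stuff the winners into the prompt — follows the description above. All names are invented for illustration.]

```python
import math
from collections import Counter

# Stand-in for a real embeddings model: a bag-of-words vector.
# A real vector store uses learned embeddings so matches are by
# meaning, not shared keywords.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Give me the five documents out of the hundred thousand that are
# most likely to answer this question."
def top_k(query: str, docs: list[str], k: int = 5) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Stuff the winners into the context window ahead of the question.
def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(top_k(query, docs))
    return f"Use these documents:\n{context}\n\nQuestion: {query}"
```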
0:08:21 So it’s having this, uh, so we’ll kind of, uh, jump metaphors here.
0:08:25 It’s like hiring a really, really good intern that has a PhD in everything, right?
0:08:27 They went to school, they read all the things, read all the internet.
0:08:31 The intern knows everything about everything that ever was publicly accessible.
0:08:34 They’re trained, show up for the first day of work.
0:08:35 That’s all they know.
0:08:38 They’re not learning anything new and they know nothing about your business.
0:08:41 Now it’s like, okay, well, I know, you know, everything about everything.
0:08:43 I have this question about my business.
0:08:48 Here are five documents that you can read right now and answer my question.
0:08:50 It’s like, oh, I can do that.
0:08:54 I like that analogy, the intern with the PhD and everything.
0:08:55 That’s so much how it is, right?
0:09:02 It’s as helpful and available as an intern, but it’s as knowledgeable as somebody with a
0:09:03 PhD in everything.
0:09:07 Um, and then like you said, you know, another analogy for that is like, it’s
0:09:11 a store, you have shelf space, which is kind of limited, but they do have a back and you
0:09:15 can always send the employee to the back and see if they can find it in the back for
0:09:15 you.
0:09:15 Right.
0:09:17 That’s kind of like what you’re saying, put it in the database.
0:09:20 They can go fetch the specific thing that you’re asking for.
0:09:22 Uh, because you know, you gave it access to the back.
0:09:24 You gave it a badge that lets it go in there.
0:09:29 Have you uploaded all of HubSpot, like, have you figured, first of all, I want to know what
0:09:30 your ChatGPT looks like.
0:09:33 I want to know how you use it on a, like, I just want you to just screen share, just like
0:09:37 show me exactly what you do, but also have you uploaded your entire life?
0:09:42 Like, have you uploaded all of HubSpot to ChatGPT where you could just ask it any question?
0:09:43 Yeah, multiple times.
0:09:43 Right.
0:09:46 Um, so, in what format? Tell me how you did that.
0:09:52 Uh, so I did, so OpenAI has, um, this thing, uh, it’s called an embeddings algorithm that
0:09:56 takes any piece of text, a document, an email, whatever it happens to be, and creates this
0:10:00 kind of point in high dimensional space, um, you know, called a vector embedding.
0:10:03 And, you know, a point in high dimensional space.
0:10:07 So in three dimensional space, physical space that we know of, we think of points being in
0:10:08 three dimensions, X, Y, and Z axis.
0:10:10 Like, oh, here’s where this point is in space.
0:10:13 Uh, high dimensional space, you can have a hundred dimensions.
0:10:14 You can have a thousand dimensions.
0:10:16 You can describe each document as this kind of point in space.
0:10:22 So what I’ve done, so it used to be, um, in the early kind of GPT world,
0:10:25 the number of dimensions you had access to was roughly like a hundred to 200 dimensions.
0:10:27 And so you would lose a lot of the meaning of a document, right?
0:10:28 It would sort of get it right.
0:10:30 It sort of captured the meaning.
0:10:33 Uh, and then, uh, then we went to like a thousand dimensions.
0:10:39 It’s like, oh, well now it can much more accurately sort of represent, um, and, and capture, um,
0:10:43 a document of kind of arbitrary length and, and be able to find it, uh, give it a prompt
0:10:45 or given some sort of search query.
0:10:50 Uh, and then recently within the last year, we’ve gone, the, the latest algorithm, uh,
0:10:55 from open AI, uh, embeddings algorithm is like 3072, I think, uh, dimensions.
0:10:57 But, but where, where do you do this?
0:11:01 Do you just literally upload it as a project or you, you had to do an API connection?
0:11:02 What did you, how do you actually do this?
0:11:03 I had an API connection, right?
0:11:05 In fact, I’m running the latest.
0:11:06 Let me see where it is now.
0:11:10 And anyone could do this or you have special access because you’re friends?
0:11:11 No, anyone can do this.
0:11:14 The, the, uh, the API for the embeddings model, they have two versions.
0:11:16 They have the 3000 dimension version.
0:11:18 They have a 1000 dimension version.
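[Editor’s note: a "point in high dimensional space" is just a list of numbers, and "most related" is just "closest point." The three-dimensional vectors below are made up for illustration; a real embeddings model produces them from the text itself, with hundreds or thousands of coordinates instead of three.]

```python
import math

# Euclidean distance between two points; the same formula works
# whether the vectors have 3 coordinates or 3,072.
def distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The closest point in the space is the most related document.
def nearest_doc(query_vec: list[float],
                docs: dict[str, list[float]]) -> str:
    return min(docs, key=lambda name: distance(query_vec, docs[name]))

# Made-up document vectors, for illustration only.
docs = {
    "compensation_policy": [0.9, 0.1, 0.0],
    "office_snack_list":   [0.1, 0.8, 0.2],
}
```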
0:11:22 And is the results of this, like, are you driving a NASCAR and I’m driving like a scooter?
0:11:23 Like, is that the difference?
0:11:28 Like if I just like, for example, uh, what I will do is I’ll just like download my company’s
0:11:29 financials and I’ll upload it.
0:11:32 And then I’ll like explain what my company does.
0:11:34 But the way that you do it is a lot different.
0:11:38 Now, are we talking a massive gap in results that you get versus what I get?
0:11:41 Um, yes.
0:11:44 The short answer is yes.
0:11:48 Uh, and, and the reason is like, so I do, I do that as well in terms of how I’ll describe
0:11:49 the company or whatever.
0:11:50 I try to provide it context.
0:11:52 And that’s why it’s called the context window.
0:11:55 You try to provide the LLM, uh, context for what you’re asking it to do.
0:12:01 Um, you know, the difference is that, you know, because I can go through like, and by the
0:12:02 way, the richest, and I’m working on a,
0:12:08 kind of nights and weekends project right now, um, that takes, uh, email, uh, which, you
0:12:10 know, so you would be amazed.
0:12:14 Like if you had to write no other words right now, if you did nothing, but say, I’m going
0:12:19 to take all of my emails I’ve ever written, uh, that are still stored and give it, uh, to
0:12:23 a vector store, use an embeddings algorithm and then use ChatGPT to let me kind of answer
0:12:24 questions.
0:12:28 So if I want to say, Oh, I want you to give me a timeline for when we first started
0:12:31 using Hub to name products or whatever, and how’d that come about?
0:12:35 Or what were the winning arguments against doing that versus whatever?
0:12:41 Like it’s shocking how good the responses are when you give it access to that kind of rich
0:12:42 data, right?
0:12:46 Somebody needs to create just like a $10 a month, a single website.
0:12:48 That’s like, Hey, make your ChatGPT smarter.
0:12:52 And it’s a website where it’s like, connect your Gmail, connect your Slack, connect your
0:12:52 everything.
0:12:57 I would pay them happily 20 bucks a month to just set this up for me, so that my ChatGPT,
0:13:03 to, to give my ChatGPT, like, the extra pill that says, you now have access to my data.
0:13:07 Is this because, because you’re talking about like, I have the API to the vector embeddings
0:13:10 and like, well, I have the flux capacitor too, but I don’t know what to do with it.
0:13:10 Right?
0:13:15 Like I need a button on a website with a Stripe payment button that I could just connect the
0:13:15 stuff.
0:13:16 Is it not?
0:13:18 Is there, is there a caveman version of this?
0:13:22 Oh, there’s, I mean, there are, uh, tools out there that do, and there are startups working on
0:13:22 it, right?
0:13:23 Uh, there’s two pieces of good news.
0:13:25 One is there are startups working on it.
0:13:29 The challenge here is, uh, not that they’re doing a bad job.
0:13:34 The challenge actually comes down to, uh, if it were a startup and a startup came to you,
0:13:37 it’s like, Oh, we just started last, last week, but we’ve got this thing.
0:13:38 It really works.
0:13:44 Uh, in fact, maybe Dharmesh is an investor, how willing would you be to hand over literally
0:13:47 your entire life and everything that’s in your email over to this startup?
0:13:51 Like, so part of the challenge we have is that the access control that let’s say you’re
0:13:56 using Gmail, which, uh, a lot of us use when you provide the keys to your Gmail account
0:14:00 to a third party, uh, there is no real degree of granularity.
0:14:02 You can say, Oh, I want it to read the metadata.
0:14:04 That’s like level one, level two access.
0:14:05 I want it to read my full email.
0:14:08 And level three is I want it to be able to write and delete emails on my behalf.
0:14:13 But if you want it to, like, read the actual body of the email, you can’t say, I only want it
0:14:16 to read messages that are from HubSpot.com.
0:14:19 Or, I want it to ignore all messages from my wife and my family or whatever in the thing.
0:14:21 There’s no way to control that.
0:14:21 Right.
0:14:22 So you sort of have to have a trust.
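[Editor’s note: the coarse access tiers he describes correspond roughly to real Gmail OAuth scope strings; the "level" labels below are a gloss on his description, not Google’s terminology.]

```python
# Gmail's actual OAuth scopes, roughly matching the three levels
# described above. The level names are a gloss, not Google's terms.
GMAIL_SCOPES = {
    "level_1_metadata":  "https://www.googleapis.com/auth/gmail.metadata",
    "level_2_read_full": "https://www.googleapis.com/auth/gmail.readonly",
    "level_3_modify":    "https://www.googleapis.com/auth/gmail.modify",
}
# Note what's missing: there is no scope like "only messages from
# hubspot.com" or "everything except family mail". That filtering
# has to happen inside the third-party app, after full read access.
```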
0:14:27 Is there any product that you would trust right now or that you can recommend that guys
0:14:31 like Sean and I should use as chat GPT add-ons or accelerators?
0:14:38 No, not that I don’t trust them, but it’s like, I wouldn’t trust really anyone right
0:14:39 now with that.
0:14:42 And it’s one of the reasons I sort of run it locally, even though I know these things are
0:14:42 out there.
0:14:47 Um, I predict what’s going to happen is we’re going to have, uh, any of the major players
0:14:49 and you can see this happening already, right?
0:14:53 We see this with, um, you know, you have the ability to create custom GPTs in OpenAI and
0:14:55 do Projects in Claude.
0:14:58 You have Google Gems, which are essentially like a small baby version of this, right?
0:15:02 That says, Oh, you can upload 10 documents, a hundred documents, and it’ll let you ask
0:15:05 questions, uh, against them. What it’s really doing behind the scenes is creating a vector
0:15:05 store.
0:15:07 That’s effectively what’s happening.
0:15:14 Um, my expectation is all the major companies, um, will actually have a variation of this,
0:15:18 uh, starting with Google should be the first one because they already have the data.
0:15:25 There is absolutely zero reason why Google Gemini does not let you have a Q and A with your own
0:15:26 email account.
0:15:28 That’s just like insanely stupid, right?
0:15:30 Like I’ll just go ahead and say it.
0:15:32 It’s just, it’s just, there’s something not right with the world.
0:15:36 Uh, when they already have the data and it’s like, and they have the algorithm, they have
0:15:38 Gemini 2.5 pro, which is an exceptionally good model.
0:15:39 Right?
0:15:43 So there you have all the pieces, uh, but have not yet delivered. But I hope it’s not that
0:15:43 distant.
0:15:48 Then tell me and Sean, we’re early adopters, but neither of us are technical.
0:15:51 What can we do? I want to get in on this, baby.
0:15:52 All right.
0:15:53 So give me, give me two weeks.
0:15:53 Here’s, here’s what I’ll do.
0:15:56 Uh, so that’s the one thing I do trust, I trust myself.
0:15:58 Um, I’m, I’m an honest guy.
0:16:03 Uh, I’ll give you like this internal app that I’m building, let you put your Gmail to it.
0:16:06 It’ll go and it’ll run for a day or two days or something like that.
0:16:07 And then you will be amazed.
0:16:08 You will be able to ask questions.
0:16:14 Um, and by the way, like, and the thing I’m like working on now is once you have this,
0:16:15 this capability, right?
0:16:18 Like step one is just being able to do Q and a, right?
0:16:22 It’s like, Oh, just, I’m going to step to like, imagine kind of fast forwarding.
0:16:23 Like it has access to all of your kind of history.
0:16:25 So imagine you’re able to say, you know what?
0:16:30 I’m not doing this by the way, but if I were, it’s like, I want to write a book about HubSpot
0:16:32 and all the lessons learned and all like everything.
0:16:33 It’s all in my email.
0:16:36 Do the best possible job you can writing a book.
0:16:39 If you have questions along the way, ask me, but other than that, write the book.
0:16:40 I think you would be able to write the book.
0:16:41 Wow.
0:16:43 What else are you doing with AI?
0:16:45 So, uh, give me your day to day.
0:16:50 Like, for example, the CEO of Microsoft had this great thing where he goes, I think with
0:16:52 AI, then I work with my coworkers.
0:16:53 And that really shifted the way I worked.
0:16:57 Cause I used to brainstorm or have a meeting to talk about stuff with my coworkers, which
0:17:00 was honestly always like a little disappointing.
0:17:03 I felt like I’m the one bringing the energy and the ideas and the questions, and I’m hoping
0:17:08 that they’re going to, but dude, just sparring with AI first and then taking the kind
0:17:12 of like distilled thoughts to my team of like, here’s how we’re going to execute has been
0:17:13 way better.
0:17:16 Like that little one sentence he said shifted the way I was doing it.
0:17:19 How are you kind of using this stuff?
0:17:19 Yeah.
0:17:20 So a couple of things.
0:17:23 Um, so I’ll let, let’s start at the high level and we’ll drill in a little bit.
0:17:28 So, uh, what we’re used to with, uh, ChatGPT, this is sort of your kind of early evolution
0:17:32 of most people’s use is because it’s called generative AI, use it to generate things, right?
0:17:37 Uh, generate a blog post, generate an image, generate a video, generate audio, all those
0:17:37 things.
0:17:39 That’s kind of the generation kind of aspect.
0:17:41 Uh, and that’s part of what it’s good at.
0:17:45 Then you sort of get into the, Oh, but it can also kind of summarize and synthesize things
0:17:46 for me.
0:17:49 It’s like, Oh, take this large body of text, take this blog post, take this academic paper
0:17:51 and summarize it in this way.
0:17:53 Or like, so a seven year old would understand it kind of thing.
0:17:53 Right.
0:17:57 So that’s kind of, um, step number two. Step number three.
0:17:59 And we’re going to get into how this is now possible.
0:18:03 Um, is you can do, um, effectively you can take action.
0:18:06 Um, have the LM actually do things for you.
0:18:10 Uh, and I’ve always kind of put it broadly in the kind of automation bucket, like I can
0:18:12 automate things that I was doing manually before.
0:18:15 Uh, and then the fourth thing is around orchestration.
0:18:20 It’s like, can I just have it manage a set of AI agents?
0:18:22 And we’ll talk about agents in a little bit and just do it all for me.
0:18:24 I just want to give it a super high order goal.
0:18:27 It has access to an army of agents that are good at varying different things.
0:18:29 I don’t want to know about any of that.
0:18:31 I just wanted to go do this thing for me.
0:18:31 Right.
0:18:33 And then that’s sort of where we are on the slope of the curve.
0:18:36 Uh, the first three things are possible today.
0:18:38 And work well today.
0:18:38 Right.
0:18:40 So you can generate, as we know, it can generate blog posts.
0:18:41 It could write really well.
0:18:45 Uh, it can generate great images now, including images with text.
0:18:48 It can do great video now with, uh, you know, higher fidelity, higher character cohesion, all
0:18:49 these things.
0:18:53 Uh, Sean, so the thing, the vision you had three years ago when I was on was around creating
0:18:55 the next Disney, the next kind of media company.
0:18:58 You have the tools now, my friend, uh, to finally start to approach that.
0:18:58 Right.
0:19:02 But then you should sort of move into, and this is what we were just talking about, this kind
0:19:03 of synthesis and analysis thing.
0:19:06 This is, okay, this is where deep research kinds of features come in.
0:19:09 It’s like, okay, well, I want you to take the entirety of the internet, or the entirety of what,
0:19:11 uh, Sean has written about copywriting.
0:19:16 And I want you to write a book just for me that summarizes all of that in ways that I enjoy.
0:19:19 Because I like, I like analogies and I like jokes and I like this and I like that.
0:19:22 Write a custom version of Sean Puri’s book on copywriting, right?
0:19:25 That kind of synthesis, um, I think would be, uh, super interesting.
0:19:27 And then automation is now possible.
0:19:28 So agent.ai is one of those things.
0:19:32 There’s other tools out there that says, Hey, I want to take this workflow or this thing that
0:19:34 I do, and I want you to just do it for me.
0:19:38 Give us a specific, what’s a specific automation that you’ve used
0:19:40 that’s, you know, useful, helpful, saves you time.
0:19:42 I’ll tell you a couple.
0:19:44 One is around domain names, which is, okay.
0:19:50 So I have an idea for a domain name, um, and I’m going to type words in and these things
0:19:50 exist.
0:19:52 And I’ll tell you the manual flow that I used to go to.
0:19:56 It’s like, okay, first of all, I can brainstorm myself and come up with possible words and
0:19:58 various other words, whatever, here’s the things that I’ll say.
0:19:58 Okay.
0:19:59 Which domains are available?
0:20:03 Absolutely zero of them, uh, that are good that will pop into my mind are like freely
0:20:05 available to kind of just register that no one’s registered before.
0:20:05 Okay.
0:20:06 Fine.
0:20:08 Then I’ll say, okay, well, which ones are available for sale?
0:20:09 Okay.
0:20:10 What’s the price tag?
0:20:12 Is that a fair approximation of the value?
0:20:15 Is it like below market, above market?
0:20:17 We don’t know because there’s no Zillow for domain names yet.
0:20:18 Uh, so create that.
0:20:23 So I have something that automates all of that and says, oh, so you have this particular idea
0:20:27 for this concept, for this business, whatever it is, uh, here are names.
0:20:28 Here are the actual price points.
0:20:31 Here’s the ones that I think are below market value, above market value.
0:20:32 Tell me which ones you want to register.
0:21:34 That’s in ChatGPT?
0:21:37 No, it’s in agent.ai, that’s where it lives right now.
0:21:42 But now there’s a connector between agent.ai and ChatGPT through this thing called MCP, which
0:20:44 you’ll hear about, uh, a bunch if, if you haven’t already.
0:20:49 Um, one thing I want to kind of get out there, just so we keep connecting the dots, um, because
0:20:52 I want, I want everyone to have this framework in their head.
0:20:54 Uh, so we talked about large language models.
0:20:55 It can generate things.
0:20:56 We talked about the context window.
0:21:00 We talked about faking out the context window by saying, oh, we can do this vector database
0:21:03 and bring in the right five documents, stuff them into the context window.
0:21:06 Uh, here’s the other big breakthrough that’s happened.
0:21:10 Uh, I’ll say recently within the last year, year and a half is what’s called tool calling.
0:21:14 And what tool calling is, is a really brilliant idea.
0:21:18 And the tool calling says, okay, well, the LLM was trained on a certain number of things,
0:21:22 but if we had this intern that came in, it would be like saying, okay, well, whatever you
0:21:24 know, you know, but we’re not going to give you access to the internet.
0:21:27 Like that would be stupid, right?
0:21:28 We would give the intern access to the internet.
0:21:31 It’s like, if I ask you something that you weren’t trained on, go look it up, right?
0:21:34 That, that would be like, like thing number one on the first day of work.
0:22:39 And as it turns out, in the LLM world, the intern didn’t have access to the internet.
0:22:43 All it had was whatever notes it happened to take during its PhD training and all those things,
0:22:43 right?
0:21:47 And so what tool calling allows, and this is a weird, um, weird approach to it, but this is
0:21:48 because of the way LLMs work.
0:21:54 So remember the LLM, it’s architected such that you give it the context window in, it spits
0:21:55 things out.
0:21:55 That’s it.
0:21:58 It doesn’t have, and you can’t reprogram the architecture.
0:22:00 But now all of a sudden we’re going to give you access to tool calling.
0:22:02 So here’s the hack that they came up with.
0:22:08 They said, okay, in the instructions that we give it in the context window, we’re going
0:22:11 to say you have access to these four tools.
0:22:13 And it doesn’t actually have access to the four tools.
0:22:17 It’s that I want you to pretend like you have access to these four tools.
0:22:19 The first tool is this thing called the internet.
0:22:23 And the way the internet works is you type in the query and it will give you some things
0:22:23 back.
0:22:28 You have this other thing called a calculator and you can give it a mathematical expression
0:22:29 and it gives you an answer back.
0:22:32 And you have this other tool that lets you do this and you can have a number of tools.
0:22:34 And so here’s what happens.
0:22:41 In the context window, happening behind the scenes, ChatGPT, which is the interface right
0:22:44 now that is interacting with the LLM, because you’re not talking with the LLM directly, right?
0:22:49 It gets a prompt and it says, okay, by the way, LLM, I want you to pretend like you have
0:22:50 access to these four tools.
0:22:56 And anytime you need them, when you pass the note back to me, the results, the output, just
0:22:57 tell me when you want to use one of those tools.
0:22:59 All right.
0:23:00 So we give it a query.
0:23:04 It’s like, okay, well, I want to look up like the historical stock valuation for HubSpot and
0:23:07 when it changed as a result of, is there any correlation to the weather?
0:23:09 Is it seasonal or whatever it is, right?
0:23:13 In terms of market cap of HubSpot versus seasonal changes.
0:23:14 All right.
0:23:17 Well, that’s not something you would have access to, but here’s what actually happens.
0:23:17 This is so cool, right?
0:23:22 So the LLM gets it. And in the context window that we gave it,
0:23:23 we gave it instructions
0:23:25 to pretend like you have these four tools,
0:23:28 one of which is, let’s say, historical stock price lookup.
0:23:34 It’ll pass the output back to the application, not us, and in the output, it says,
0:23:39 oh, please invoke that tool you told me I had access to and look up this result.
0:23:40 I want you to search the internet for X.
0:23:41 What was the weather?
0:23:42 I want you to do this for the stock price.
0:23:45 And then we do that.
0:23:51 We, the ChatGPT application, fill the context window with whatever it is the LLM asked for
0:23:52 and then pass it back in.
0:23:57 So the LLM effectively has access to those tools, even though it never accessed the internet,
0:24:01 it never accessed the stock market, but it pretended like it had access to it.
0:24:02 And we never see this.
0:24:03 This is happening behind the scenes.
0:24:07 Now, here is the big, massive unlock, right?
0:24:10 Which is, well, everything can be a tool, right?
0:24:13 Now you don’t have to build this kind of vector store or whatever, because you would never
0:24:16 build a vector store of all possible stock prices from the dawn of time.
0:24:18 Now, I guess you could, but then it’s outdated immediately.
0:24:23 Now it’s like, what if we just gave it 20 really powerful tools, including browser access,
0:24:23 to the internet?
0:24:30 Well, that’s like a 10,000, 100,000 times increase in that intern’s capability, right?
0:24:35 And so that’s where our brain should be headed now, which is exactly where the world is headed,
0:24:40 that says, what tools can we give the LLM access to that will amplify its ability and cause
0:24:43 zero change to the actual architecture?
0:24:45 Literally, it doesn’t have to know anything about anything.
0:24:47 It’s like, I just want you to pretend that you have access to these tools.
0:24:49 It doesn’t need to know how to talk to those tools.
0:24:50 It doesn’t need to know about APIs.
0:24:52 It doesn’t need any of that stuff.
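The loop described above can be sketched roughly like this. Everything in it is an illustrative assumption: `call_llm` is a fake stand-in for a real model call, and the tool names are invented for the example, not any provider's actual API:

```python
# Minimal sketch of a tool-calling loop: the model only ever sees
# text in its context window; the application runs the tools.

def call_llm(context):
    """Stand-in for a real LLM call. Returns either a final answer
    or a request to use one of the tools it was told about."""
    if "TOOL RESULT" in context:
        return {"type": "answer", "text": "Here is my analysis..."}
    return {"type": "tool_call", "tool": "stock_lookup",
            "args": "HubSpot market cap history"}

def run_tool(name, args):
    """The application, not the LLM, actually executes the tool."""
    tools = {
        "stock_lookup": lambda q: f"stock data for {q}",
        "web_search": lambda q: f"search results for {q}",
        "calculator": lambda expr: str(eval(expr)),  # toy only
    }
    return tools[name](args)

def chat(user_query):
    # The system prompt tells the model to *pretend* it has tools.
    context = ("You have access to these tools: stock_lookup, "
               "web_search, calculator. Tell me when you want one.\n"
               f"User: {user_query}\n")
    while True:
        reply = call_llm(context)
        if reply["type"] == "answer":
            return reply["text"]
        # The model asked for a tool; we run it, stuff the result
        # back into the context window, and call the model again.
        result = run_tool(reply["tool"], reply["args"])
        context += f"TOOL RESULT ({reply['tool']}): {result}\n"

answer = chat("Does HubSpot's market cap correlate with seasons?")
```

The architecture never changes: from the model's point of view it is still just context in, text out.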
0:24:57 Cutting your sales cycle in half sounds pretty impossible, but that’s exactly what Sandler
0:24:59 training did with HubSpot.
0:25:06 They use Breeze, HubSpot’s AI tools to tailor every customer interaction without losing their
0:25:06 personal touch.
0:25:08 And the results were incredible.
0:25:11 Click-through rates jumped 25%.
0:25:13 Qualified leads quadrupled.
0:25:16 And people spent three times longer on their landing pages.
0:25:21 Go to HubSpot.com to see how Breeze can help your business grow.
0:25:28 Do you think that, I mean, this is all mind-blowing, and you have an interesting perspective because,
0:25:33 you know, I think three episodes ago, when you were on, you created this thing called Wordle.
0:25:34 Was it Wordle?
0:25:34 Wordplay.
0:25:35 Wordplay.
0:25:37 That does like 80 grand a month.
0:25:39 It was just like a puzzle that you do with your son.
0:25:39 It was amazing.
0:25:46 But now you have new projects, you have agent AI, you have a few other things, but you still
0:25:47 run a $30 billion company.
0:25:55 Do you think that the majority of value creation, like, am I going to, is my stock portfolio going
0:25:58 to go up because I own a basket of tech stocks?
0:26:05 Or is the best way to capitalize as an outsider, obviously, you start a company, or is it investing
0:26:09 in new startups that are using AI or AI-first startups?
0:26:12 Yeah, it’s a good question.
0:26:15 I’m neither an economist nor a stock analyst, but I will say this.
0:26:21 The thing I’m most excited about with AI, and I actually said exactly this in a talk I gave
0:26:26 well before GPT on the inbound stage, and I said, you know, as AI is starting to kind of
0:26:29 come up, it’s not a you versus AI.
0:26:31 That’s not the mental model you should have in here.
0:26:34 It’s like, oh, well, AI is going to take my job because it’s me trying to do things that
0:26:36 the AI is then eventually going to be able to do.
0:26:40 The right mental frame of reference you should have, it’s you to the power of AI.
0:26:43 AI is an amplifier of your capability.
0:26:47 It will unlock things and let you do things that you were never able to do before, as a
0:26:50 result of which it’s going to increase your value, not decrease it, right?
0:26:55 But in order for that to be true, you actually have to use it.
0:26:56 You have to learn it.
0:26:57 You have to experiment with it.
0:27:02 And the only real way to get a feel for what it can and can’t do is you have to do it.
0:27:03 So I’ll give you the very, very simple version.
0:27:04 Everyone should do this.
0:27:05 I do this personally.
0:27:12 It’s that anytime you’re going to sit down at a computer and do something, research, whatever
0:27:17 it is you’re going to do, you should give ChatGPT or your AI tool of choice a shot at
0:27:17 it.
0:27:23 Try to describe and pretend like you have access to this intern that has a PhD in everything.
0:27:26 It’s like, okay, well, maybe it doesn’t know anything about me or whatever.
0:27:26 Fine.
0:27:28 So then tell it a few things about you.
0:27:31 But imagine you have access to this all-knowing intern that has a PhD in everything.
0:27:34 Give it a crack at solving the problem that you’re about to sit down and spend some time
0:27:39 on and what you will invariably find, number one, is you’ll be surprised by the number of
0:27:42 times it actually comes up with a helpful response that you would never have expected
0:27:44 it would be even remotely able to do.
0:27:46 Like, how can it do that?
0:27:48 It’s because it has a PhD in everything, right?
0:27:52 And it’s now actually, we’ll talk about reasoning and whether models are actually doing that or
0:27:52 not if we have time.
0:28:00 So that’s my advice: every day, every day, you should be in ChatGPT.
0:28:04 If you’re a knowledge worker at all. Actually, you don’t even have to be a
0:28:04 knowledge worker.
0:28:06 I don’t care what your job is, right?
0:28:10 You could be a sommelier at a restaurant and you should be using ChatGPT every day to make
0:28:12 yourself better at whatever it is you do.
0:28:17 And that might be the introduction of that orthogonal skill, to bring it back to the earlier point. Which I never
0:28:17 explained, the word orthogonal.
0:28:19 I’ll do it in 30 seconds.
0:28:23 So orthogonal means a line that intersects another line at 90 degrees.
0:28:27 And the most common use is when we have an X and Y axis, right?
0:28:31 It’s like, oh, the X axis and the Y axis are orthogonal to each other because they have
0:28:32 90 degrees separating them.
0:28:36 The common usage, when you say, oh, that’s an orthogonal concept, it means it’s unrelated.
0:28:37 It’s completely different.
0:28:40 That’s like saying the Y and X axes are completely independent of each other.
0:28:43 You can say, oh, you can be here on the X axis, but here on the Y axis and they’re not
0:28:44 related to each other.
0:28:48 So that’s what I mean when I say orthogonal concepts or skills or ideas.
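For the mathematically inclined, the 90-degree idea has a one-line test: two vectors are orthogonal exactly when their dot product is zero. A tiny sketch:

```python
# Orthogonality check: a dot product of zero means a 90-degree angle,
# i.e. the two directions are completely independent.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x_axis = (1, 0)   # a step along the X axis
y_axis = (0, 1)   # a step along the Y axis
diag   = (1, 1)   # a 45-degree direction

print(dot(x_axis, y_axis))  # 0: the axes are orthogonal
print(dot(x_axis, diag))    # 1: not orthogonal, the directions overlap
```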
0:28:49 Yeah.
0:28:52 Is there anything you disagree with that’s kind of the consensus?
0:28:55 Because a lot of things you’re talking about, like, hey, AI is going to change everything.
0:28:56 It’s super smart.
0:28:57 Agents are coming.
0:28:59 They can do some stuff now, more stuff later.
0:29:03 These are all probably right, but they’re also consensus.
0:29:06 I’m just curious, like, is there anything you disagree with that you hear out there that
0:29:09 drives you nuts where you’re just like, people keep saying this.
0:29:11 I think that’s either wrong.
0:29:12 It’s overrated.
0:29:13 It’s the wrong timeline.
0:29:14 It’s the wrong frame.
0:29:15 It’s whatever.
0:29:18 Is there anything that you disagree with that you’ve heard out there?
0:29:22 I’ve heard variations, two variations I disagree with.
0:29:27 One that I’ve, I think, spent so much time hopefully kind of talking folks out of, which
0:29:29 is it’s just autocorrect.
0:29:30 It’s not really thinking.
0:29:34 And that’s a matter of, like, what do you think thinking is, right?
0:29:39 It’s like, okay, well, it produces the right output, output which we think would require
0:29:45 thought. So I think that is flawed reasoning. And this often comes from
0:29:49 the smartest people, the most expert in their field. Because, oh, it’s really like a stochastic
0:29:49 parrot.
0:29:53 You’ll hear this phrase, which is, it’s like probability-driven pattern matching.
0:29:57 It just so happens that it’s been trained on the internet, but it’s not really like human
0:29:58 intelligence.
0:30:02 And I agree with that phrasing, which is it’s not like human intelligence, but that does not
0:30:06 mean that all it’s doing is sort of stochastically mimicking all the things it’s read before,
0:30:11 because in order to do what it does, it is a form of creativity different from what we
0:30:12 normally experience.
0:30:15 That’s kind of thing number one that I disagree with.
0:30:20 Thing number two: I disagree with the idea that the scaling laws are
0:30:23 going to continue forever, indefinitely. That the more compute we throw at it, the
0:30:26 more knobs we put on the machine, the smarter and smarter it’s going to get.
0:30:29 I think there’s going to be a limit to that at some point.
0:30:30 It’s like nothing goes on forever.
0:30:33 It’s going to asymptotically move towards a limit, and we’re going to have to come up with new algorithms.
0:30:36 So GPT can’t be the be-all end-all of things, right?
0:30:39 There will be a new way, you know, discovered.
0:30:40 So I think that’s going to happen.
0:30:46 But I think, and I did not say this, other people have said it, the best way
0:30:53 to kind of think about AI right now is, as you use it, to truly find the
0:30:55 frontier of what it’s incapable of.
0:30:59 It’s like, okay, it can sort of do this thing, but not very well.
0:31:04 If, if that’s the way you describe its response, you are exactly where you need to be, which
0:31:08 is if you can sort of do it right now, sort of, if you have to squint a little bit, it’s
0:31:12 like, ah, well, it’s kind of something, but wait six months or a year, right?
0:31:15 Like it’s, uh, that’s the beauty of an exponential curve.
0:31:19 It gets so much better, so fast, that if it can sort of do it now, it will
0:31:20 be able to do it.
0:31:21 And then it’ll be able to do it really well.
0:31:24 That’s the inevitable sequence of events.
0:31:24 That’s going to happen.
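To put a rough number on that exponential intuition: something that doubles every six months is 16x better in two years and over 1,000x in five. A toy check (the six-month doubling period is an assumption for illustration, not a claim about any real model):

```python
# Capability multiplier under steady doubling every `months_per_doubling`.

def multiplier(years, months_per_doubling=6):
    doublings = years * 12 / months_per_doubling
    return 2 ** doublings

print(multiplier(2))  # 16.0 after two years
print(multiplier(5))  # 1024.0 after five years
```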
0:31:25 Sam, have you heard this about startups?
0:31:30 The smart money in startups kind of believes that the right startup to build is
0:31:33 basically the thing that AI kind of can’t do right now.
0:31:39 That’s the company to start today because you just have to stay alive long enough.
0:31:44 Give it the 12 to 18 month runway that it needs for the thing to go from, eh, didn’t really
0:31:46 work very well to like, oh my God, this is amazing.
0:31:50 But you’ve built your brand, your company, your mission, your customer. You’ve
0:31:54 been building that all along the way and you’re basically just betting you’re going to be able
0:31:56 to surf the improvement of the model.
0:31:57 Dude, by the way, that’s how I feel about my company.
0:32:02 My company is not related to this at all, but in terms of like our operations,
0:32:07 things are very manual and I’m like, oh my God, once I’m able to finally implement AI,
0:32:10 when it can work for this purpose, my profit margins are going to go through the roof.
0:32:14 I mean, that’s how I feel about it, which isn’t entirely related,
0:32:19 Sean, but a little bit. One thing I’ll plant out there, since this
0:32:20 is My First Million.
0:32:23 We like talking about ideas at a macro level.
0:32:28 Here’s an entirely new pool of ideas that I think are now available, on a trend that
0:32:32 I think is inevitable, which is as agents get better and better, right?
0:32:37 Right now, most of us, when we use AI, use ChatGPT, we use them as tools,
0:32:38 which is great.
0:32:38 Perfect.
0:32:39 Fine.
0:32:43 Uh, over time, you need to shift your thinking and think of them as teammates.
0:32:46 Think of them as that intern that just got hired, right?
0:32:51 And as a result of that, so let’s assume for a second, let’s stipulate that
0:32:52 I’m right.
0:32:55 All we don’t know is how long it’s going to take for me to be right: we’re going
0:32:59 to have effectively digital teammates that are part of all of our teams.
0:33:05 Every company is going to someday have a hybrid team consisting of carbon-based life forms and
0:33:07 these kinds of digital AI agents.
0:33:07 Okay.
0:33:12 So if you accept that, the way that’s going to happen is not going to be like, all of
0:33:15 a sudden we one day wake up and every organization now starts kind of mixing them.
0:33:18 What’s going to happen is it’s going to slowly get introduced this way.
0:33:21 It’s like, oh, I have this one task, whatever, that an agent is better at.
0:33:23 It’s reliable enough for the thing.
0:33:24 And the risk is low enough.
0:33:24 I’m going to have it do that.
0:33:25 Right.
0:33:28 Well, we already see elements of that, but here’s, what’s going to happen as a result of
0:33:31 that kind of gradual kind of infusion and adoption of that technology.
0:33:37 Uh, the way to win and the opportunities that get created is like, how do I help the world
0:33:40 accomplish this end state that I know is going to come?
0:33:42 So here, I’ll give you some examples.
0:33:48 If you, Sam, were to hire a new employee tomorrow, here’s
0:33:48 what you would do.
0:33:52 You would say, oh, well, I’m going to onboard that employee, spend a couple of days.
0:33:55 I’m going to tell them about the business. Whoever’s managing that employee, let’s
0:33:59 say a direct report of yours, maybe you’ll have a weekly one-on-one, or every other week
0:34:04 or whatever. That one-on-one will consist of looking at the work they did. It’s
0:34:05 like, oh, over here, you did this or whatever.
0:34:06 And it could be copy editing.
0:34:08 It could be anything, whatever the role happens to be.
0:34:09 You’re going to give them feedback, right?
0:34:11 That’s what you do for a human worker.
0:34:18 All of those things have a direct, literally a direct analog in the agent world, right?
0:34:21 And what we’re doing right now is we’re hiring these agents and expecting them to do
0:34:27 magic, just like if we hired an exceptionally smart, uh, has a PhD in everything employee
0:34:30 and expected them to do magic with no training, no onboarding, no feedback, no one-on-one, no
0:34:31 nothing.
0:34:33 Well, your results are not going to vary.
0:34:37 They’re going to be crap, because you did not make the investment in getting that agent up to speed.
0:34:40 Now, the big unlock here.
0:34:43 So whether you’re an HR person or whatever, it’s like figure out, well, what does employee
0:34:45 training look like for digital workers?
0:34:48 What do performance reviews look like for digital workers?
0:34:50 How do we do, how do we do recruiting for digital workers?
0:34:53 How do we like, what, what are all the mechanisms that need to exist?
0:34:55 What is a manager of the future?
0:34:58 What are the new roles that will be created as a result of having these hybrid teams?
0:35:01 It’s like, okay, well now maybe we’re going to need someone.
0:35:06 That’s like the agentic manager, a human that knows all the agents that are on their team
0:35:10 or whatever, and has kind of built the skillset: how to do recruiting for their team,
0:35:12 how to do performance reviews, how to do all of that.
0:35:16 But for agents or hybrid teams, you know, versus just purely human ones. That’s
0:35:19 just a whole other thing. And we’re going to need the software.
0:35:20 We’re going to need the onboarding.
0:35:20 We’re going to need training.
0:35:21 We’re going to need books for it.
0:35:23 And we’re going to need all of it to kind of adopt.
0:35:26 And it’s going to take, uh, it’s going to take years, right?
0:35:32 Two years ago, I asked you, is it going to be bad?
0:35:35 Or, I think, I asked, is it going to be horrible or is this going to be amazing?
0:35:38 And you said, uh, I saw this with the internet.
0:35:41 Nothing is as extreme as the most extreme predictions.
0:35:44 I listened to you and I trusted you.
0:35:49 Then I actually think knowing what I know now, I’m actually more fearful, uh, than I was
0:35:52 a couple of years ago where I’m like, oh, this is actually going to put a lot of people
0:35:53 out of work.
0:35:56 And it’s maybe not good or bad, but things are going to change
0:35:58 drastically, more than I thought.
0:36:04 And my, so I don’t remember how I phrased the question, but is this going to change the
0:36:09 future more than you thought two years ago or less than you thought two years ago?
0:36:11 Um, has your opinion on that changed?
0:36:14 I still think they’re going to be unrecognizable.
0:36:19 My, my kind of macro level sense, and this is maybe just my inherent, uh, optimism about
0:36:24 things is that it’s going to be kind of a net positive for humanity.
0:36:27 And this is the other thing that, um, you know, lots of people would disagree with me
0:36:27 on this.
0:36:31 Like, oh, well, is this an existential crisis to the species?
0:36:37 Um, and I’ve not said this before, but I’m going to see how it sounds as the words leave
0:36:38 my mouth.
0:36:38 I’m probably going to regret it.
0:36:43 But in a way we are actually, and Sean, you said this earlier, we’re sort
0:36:45 of producing a new species, right?
0:36:50 So that’s like saying, okay, well, homo sapiens as they exist, absent AI, are likely not going
0:36:50 to exist.
0:36:55 So the way we know the species as it exists today, where we have a single brain
0:36:57 and, in natural form, you know, four appendages or whatever, maybe that’s going
0:36:58 to be different.
0:37:03 Uh, but I think of that as an extension of humanity, not the obliteration of humanity, right?
0:37:08 That’s the, that’s, you know, human 2.0 or n.0, uh, of the way we kind of think of the
0:37:08 species right now.
0:37:12 So I’m, uh, I think things are still moving very, very fast.
0:37:16 And this is the, this is why I think humans have, uh, issues with exponential curves.
0:37:20 We’re just not used to them. When something is kind of doubling every
0:37:25 n months, it’s hard to wrap our brains around how fast this stuff can move.
0:37:31 Things that we thought were impossible, the things we have today, Sam, if we had just described
0:37:36 them to someone a year and a half ago, they’d be like, ah, well, ChatGPT is cool or whatever,
0:37:37 but it’s never going to be able to do that.
0:37:41 And now we’re like, those are like par for the course, right?
0:37:45 Like we can do like, um, things that were literally like, oh, there’s no way, no way.
0:37:48 It’s like, yeah, it’s good at like texts and stuff like that, but that’s because it’s been
0:37:48 trained on text.
0:37:50 Now it can do images.
0:37:53 Well, it can do images, but video is like 30 frames a second.
0:37:58 That’s generating 30 images per second of video, like all
0:37:58 of that.
0:38:02 It’s like, yeah, but you know, diffusion models, the way they work, you’re
0:38:03 going to get a different image every time.
0:38:04 So how are you going to create a video?
0:38:07 Because it requires the same character, the same setting in subsequent frames.
0:38:09 That’s not how the thing is architected.
0:38:10 That’s not how image models worked.
0:38:12 And we solved all of those things, right?
0:38:15 Now we have character cohesion, setting cohesion, video generation.
0:38:15 Anyway.
0:38:23 So my answer is it’s exactly, not exactly, but it’s close to like, yep, this is what exponential
0:38:25 advancement looks like.
0:38:28 I’m still of the belief that we’re going to have more net positive.
0:38:31 That is not to say that in the interim, there’s not going to be pain.
0:38:34 And there’s two things I’ll put out there as cautionary, cautionary words.
0:38:40 One is in the interim, anyone that tells you that there’s not going to be job dislocation,
0:38:43 there’s not going to be roles that get completely obliterated, is lying to you.
0:38:44 That is going to happen.
0:38:46 It’s already happening, right?
0:38:49 It’s that there is no world in which that does not occur.
0:38:50 That’s kind of thing number one.
0:38:56 Thing number two, and we didn’t talk about this, but we should have, is that because of
0:39:00 the architecture of how LLMs currently work, and maybe they’ll figure out a way to fix this,
0:39:01 they produce hallucinations.
0:39:05 And that’s just a fancy way of saying it makes things up, right?
0:39:10 And that’s sort of okay, but not okay, because it doesn’t know it’s making it up.
0:39:15 Because of the way the architecture works, it’s like the intern that thinks it’s been
0:39:16 exposed to all there is to know in the world.
0:39:17 It’s like, I know all the things.
0:39:18 You’re asking me a question.
0:39:19 I know I know all the things.
0:39:21 So I’m going to tell you the thing that I know.
0:39:23 It was like, well, yeah, but you didn’t know this.
0:39:27 And what you said is actually, factually, like provably, demonstrably wrong.
0:39:33 And it has absolutely zero lack of confidence in its output, which is fine for some things
0:39:36 if you’re writing a short fiction story or something like that.
0:39:40 It’s not great at all for other things like healthcare related where you need kind of
0:39:41 predictable, accurate responses.
0:39:44 So I think we need to be aware of the limitations around it.
0:39:46 when we’re doing research and things like that.
0:39:52 And the problem is when we have relatively, I’ll say naive, I don’t mean this in a disparaging
0:39:58 way, folks that are naive to a subject area asking ChatGPT for things where it can’t judge
0:39:59 the response, right?
0:40:04 We’re sort of taking it on faith that it’s ChatGPT, and Dharmesh said it’s got a PhD in everything.
0:40:05 So of course it’s going to be right.
0:40:07 Well, no, it’s often not right.
0:40:12 And it’s kind of up to us to figure out what our kind of risk tolerance is.
0:40:14 It’s like, when is it okay for it to be wrong?
0:40:18 How would I test it for my domain, for my particular use cases?
0:40:23 So you guys know this, but I have a company called Hampton.
0:40:24 Joinhampton.com.
0:40:26 It’s a vetted community for founders and CEOs.
0:40:27 Well, we have this member named Levan.
0:40:32 And Levan saw a bunch of members talking about the same problem within Hampton, which is that
0:40:35 they spent hours manually moving data into a PDF.
0:40:37 It’s tedious, it’s annoying, and it’s a waste of time.
0:40:40 And so Levan, like any great entrepreneur, he built a solution.
0:40:42 And that solution is called Molku.
0:40:47 Molku uses AI to automatically transfer data from any document into a PDF.
0:40:52 And so if you need to turn a supplier invoice into a customer quote or move info from an application
0:40:57 into a contract, you just put a file into Molku and it autofills the output PDF in seconds.
0:41:00 And a little backstory for all the tech nerds out there.
0:41:03 Levan built the entire web app without using a line of code.
0:41:05 He used something called Bubble.io.
0:41:09 They’ve added AI tools that can generate an entire app from one prompt.
0:41:09 It’s pretty amazing.
0:41:13 And it means you can build tools like Molku very fast without knowing how to code.
0:41:18 And so if you’re tired of copying and pasting between documents or paying people to do that
0:41:20 for you, check out Molku.ai.
0:41:23 M-O-L-K-U dot A-I.
0:41:24 All right, back to the pod.
0:41:31 What do you think about this situation where Zuck is throwing the bag at every researcher?
0:41:36 A hundred million dollar signing bonuses, even more than that in comp.
0:41:39 And he’s poaching basically his own dream team.
0:41:41 He’s like, okay, I can’t acquire the company.
0:41:43 Well, why don’t I go get all the players?
0:41:45 You can keep the team, I’ll take the players.
0:41:49 And he’s going after them with these crazy nine figure offers.
0:41:54 A hundred million signing bonus and 300 million over four years, I think is what I saw.
0:41:54 Is that true?
0:41:56 I think that was like the higher, yeah.
0:41:56 So the higher end.
0:42:00 And some people have said there’s even like billion dollar offers to certain people that are
0:42:00 out there.
0:42:02 This is like job offers.
0:42:04 So Dharmesh, were you shocked by this?
0:42:07 Because I mean, my reaction to this was that’s bullshit.
0:42:08 First time I heard it.
0:42:10 Then I was like, wait, the source is Sam Altman.
0:42:10 Why would he say that?
0:42:13 And then I was like, okay, that’s insane.
0:42:16 And then an hour later, I was like, wait, that’s actually genius.
0:42:20 Because for a total of 3 billion or something, he can acquire the equivalent of one of these
0:42:23 labs that’s valued at 30, 40, 50, or $200 billion.
0:42:25 What a power play.
0:42:29 I know, obviously, you’re an investor in OpenAI, so maybe you don’t like this.
0:42:31 Maybe you have a different bias here.
0:42:36 But I’m just, from one kind of like leader of a tech company to another, like what’s your
0:42:37 view of this move?
0:42:39 I think it’s one of the crazier moves.
0:42:45 If I had to use one word, I would say diabolical, not stupid, not silly, but diabolical.
0:42:46 And here’s why, right?
0:42:48 This is the, like in the grand scheme of things.
0:42:52 So this is not just a, oh, can we use this technology and build a better product that will
0:42:56 then drive X billion dollars of revenue through whatever business model we happen to have.
0:43:01 There’s a meta thing at play here that says whoever gets to this first will be able to
0:43:05 produce companies with billions of dollars of revenue or whatever, right?
0:43:09 Because that’s, it’s like kind of finding the secret to the universe, the mystery of life
0:43:09 kind of thing.
0:43:14 It’s like, okay, well, whoever wins that and gets there first will then be able to use
0:43:19 the technology internally for a little while and be able to just kind of run the table for
0:43:20 as long as they want.
0:43:22 So there’s, it’s got incalculable value, right?
0:43:27 The upside is just so high that no amount of, like if you can increase your probability even
0:43:31 by a marginal amount, if you had the cash, why wouldn’t you do it, right?
0:43:34 So do you think, A, do you think it’ll work?
0:43:36 Do you think this tactic will work for him?
0:43:38 Do you think he will be able to build a super team?
0:43:41 Is he just going to get a bunch of engineers who now have yachts and don’t work?
0:43:45 Like what’s going to happen when you give somebody a hundred million dollars offers, you put together
0:43:49 this, smash together this team of, I think he’s got a hit list of 50 targets.
0:43:53 And I think like, you know, something like 19 or 20 of them have come on board already.
0:43:56 What’s your prediction of how this plays out?
0:43:58 It feels a little bit like a Hail Mary pass, right?
0:43:59 That’s okay.
0:44:00 They’re going to take this.
0:44:01 It’s like, okay, well, there’s not a whole lot of things we can do.
0:44:03 You know, the chips are down.
0:44:04 I’m going to mix metaphors now too.
0:44:07 But that works sometimes.
0:44:09 It works sometimes.
0:44:10 That’s exactly why people do it.
0:44:12 It’s like, okay, what other option do we have, right?
0:44:13 Like everything else hasn’t worked yet.
0:44:15 So let’s try this thing.
0:44:20 But I think the challenge, I still think it’s a diabolically smart move.
0:44:23 I’m not going to use the word ethics or anything like that.
0:44:24 But here’s the challenge though, right?
0:44:28 If we were having this conversation, we’ll call it two years ago, give or take.
0:44:33 Open AI was so far ahead in terms of the underlying algorithm.
0:44:36 And this is even before ChatGPT hit the kind of revenue curve that it’s hit.
0:44:40 Just raw, the GPT algorithm was just so good and they were so far ahead.
0:44:45 It was actually inconceivable for folks, including me, that others would catch up.
0:44:47 It’s like, okay, well, they’ll make progress.
0:44:48 They’ll get closer.
0:44:50 But then Open AI is obviously going to still keep working on it.
0:44:52 And they’re going to be far ahead for a long, long time.
0:44:54 That’s proven not to be true, right?
0:44:56 We’ve seen open source models come out.
0:44:57 We’ve seen other commercial models come out.
0:44:58 There’s Anthropic.
0:45:02 And they have, by most measures, comparable large language models, right?
0:45:03 Within like one standard deviation.
0:45:04 They’re pretty good.
0:45:06 And sometimes they’re better at some things, worse at others.
0:45:09 But it’s not this single horse race anymore.
0:45:12 So the thing that I’m a little bit dubious of is that even if you did this,
0:45:15 you pull all these people together, like it didn’t
0:45:18 really work for Open AI in the true sense of the word, right?
0:45:21 Like they weren’t able to create this kind of magical thing that it’s like,
0:45:23 okay, maybe they end up doing it somewhere else.
0:45:27 But I think there’s more smart people out there.
0:45:30 The technology, DeepSeek kind of proved that you could actually catch up,
0:45:34 and they didn’t actually have an actual innovation in terms of reasoning models
0:45:37 and things like that versus kind of the early generation large language models.
0:45:39 So jury’s still out.
0:46:44 How much better is a $300 million,
0:46:48 so $100 million a year, engineer than like a $20 million engineer?
0:45:51 Is it like, I followed some of these guys on Twitter,
0:45:53 and it was, they’re fantastic follows.
0:45:57 And do you think that their IQ is just so much better?
0:45:58 Or is it because they’ve had experience?
0:46:02 Is it really because they just saw how OpenAI works,
0:46:03 and they want that experience?
0:46:05 Are they like, is this like espionage?
0:46:10 What is, how good could a $100 million or $300 million a year engineer be?
0:46:12 Well, that’s the thing, though.
0:46:13 This is software, right?
0:46:17 So this is a, you know, a world of like 95% margins.
0:46:19 So let’s say, yeah, I think part of the value is,
0:46:20 yes, they’re super smart,
0:46:24 but even human IQ asymptotically moves towards a certain ceiling, right?
0:46:26 You take smartest people in the world,
0:46:27 however you want to measure IQ.
0:46:30 And so that doesn’t explain away the value, right?
0:46:31 That’s not that.
0:46:33 It’s not that they’ve seen the inside of OpenAI,
0:46:35 and they have some trade secrets in their head
0:46:36 that they can then kind of carry over.
0:46:37 It’s like, oh, here’s how we did it over there,
0:46:39 and here’s how we ran evals,
0:46:41 and here’s how we did, you know, the engineering process.
0:46:42 They’ll have some of that,
0:46:46 because we always carry some amount of kind of experience in our heads.
0:46:48 I think the larger thing,
0:46:51 I think kind of primary kind of vector of value
0:46:54 is they sort of have demonstrated the ability
0:46:56 to kind of see around corners and see into the future, right?
0:46:57 They believed in this thing
0:47:00 that almost no one believed in at the time.
0:47:02 They sort of saw where it was headed,
0:47:03 and they were working at it,
0:47:04 chipping away at it, whatever.
0:47:06 And that’s much rarer than you would think.
0:47:09 For really smart people to do this
0:47:11 seemingly stupid, foolish thing,
0:47:13 it’s like, you’re going to do what now, right?
0:47:16 And we’re still asking ourselves a variation of that question
0:47:17 that we would have asked three years ago,
0:47:19 except now we have ChatGPT,
0:47:20 and we have the things in it,
0:47:21 and we’re still like,
0:47:23 well, you say that we’re going to have, like,
0:47:24 these kind of digital teammates,
0:47:25 and they’re going to be able to do all these things,
0:47:27 and it can’t even do this simple thing right, right?
0:47:29 Like, we sort of keep elevating our expectations
0:47:31 in what we believe is or is not possible.
0:47:33 They sort of know what’s possible,
0:47:35 and they almost think of what many of us
0:47:36 would consider impossible
0:47:37 as actually being inevitable.
0:47:39 Have you guys, as HubSpot,
0:47:40 have you made any of these offers?
0:47:41 I don’t think so,
0:47:43 but that’s not the game we’re in, right?
0:47:45 So we’re not in that league.
0:47:46 We’re not trying to build a frontier model.
0:47:48 We’re not trying to invent AGI.
0:47:49 We’re at the application layer of the stack.
0:47:51 So we want to benefit from it, right?
0:47:55 In any layer of my entrepreneurial career,
0:47:58 I have not been the guy at the center of the universe
0:47:59 or the company at the center of the universe.
0:48:00 But you’re not like,
0:48:01 oh, man, I met this person.
0:48:03 Like, we need to offer, like,
0:48:05 an NBA contract in order to secure this guy.
0:48:07 No, and there’s a reason for this, right?
0:48:09 It’s like, for the kinds of problems we’re solving,
0:48:11 what’s the, there’s a sports term
0:48:12 about the best alternative to the player
0:48:13 or something like that,
0:48:14 the replacement cost?
0:48:15 Right, wins above replacement
0:48:17 is the metric they use in sports.
0:48:19 So, yeah, it’s just not,
0:48:20 it’s not worth it,
0:48:21 given our business model,
0:48:21 given what we do.
0:48:24 I have one last thing on the kind of AI front.
0:48:25 This is one of the things,
0:48:27 answering your question, Sean,
0:48:29 in terms of things I disagree with folks on,
0:48:33 is that there’s a group of people,
0:48:35 very smart, that will say,
0:48:37 oh, well, AI is going to lead
0:48:39 to a reduction in creativity,
0:48:40 broadly speaking, right?
0:48:41 Because you’re just going to have AI do the thing.
0:48:42 Why do you need to learn to do the thing?
0:48:44 And I have a 14-year-old, right?
0:48:45 So it’s like, okay, well,
0:48:47 if he just uses AI to write his essays
0:48:48 and do his homework or whatever,
0:48:50 it’s going to kind of reduce his creativity.
0:48:52 And I understand that particular
0:48:54 kind of line of reasoning that says,
0:48:55 yeah, if you just have it do the thing,
0:48:56 you’re not going to.
0:48:58 But I think the part
0:49:00 those folks are missing
0:49:02 is that, you know,
0:49:04 creativity is kind of,
0:49:05 in the literal sense of the word,
0:49:06 is like, okay,
0:49:07 I have this kind of thing,
0:49:08 idea in my head,
0:49:10 and I’m going to express it
0:49:10 in some creative form,
0:49:11 be it music,
0:49:11 be it art,
0:49:13 be it whatever it happens to be.
0:49:15 And the problem right now
0:49:16 is that
0:49:19 whatever creative ideas
0:49:19 we have in our head
0:49:21 are limited
0:49:23 in terms of how we can manifest them
0:49:24 based on our emerging skill set.
0:49:25 So Sean can have
0:49:27 a song in his head right now
0:49:27 that, like,
0:49:29 he may be composing things in his head,
0:49:31 but until he learns the mechanics
0:49:32 of how to actually play
0:49:33 an instrument,
0:49:34 whatever the instrument happens to be,
0:49:35 there’s no real way
0:49:36 to manifest that, right?
0:49:38 We can’t tap into his brain
0:49:38 and do that.
0:49:40 So in my mind,
0:49:42 AI actually increases creativity
0:49:43 because it will increase
0:49:44 the percentage of ideas
0:49:46 that people have in their heads
0:49:47 that they will then be able
0:49:47 to manifest
0:49:48 regardless of what their skills
0:49:49 are or not.
0:49:51 And I love that.
0:49:51 So my son,
0:49:53 he’s a big Japanese culture fan,
0:49:54 big manga fan,
0:49:57 Japanese comic books and anime.
0:50:00 And so he’s an aspiring,
0:50:02 you know, author someday.
0:50:03 And what he can do now, right,
0:50:04 and he’s been able to do this
0:50:05 for years,
0:50:05 which is,
0:50:07 so he’s always had,
0:50:07 again,
0:50:08 he likes fantasy fiction as well.
0:50:09 So he’s had these ideas
0:50:10 for writing things,
0:50:12 but he lacked the writing skills.
0:50:13 He doesn’t know about character development,
0:50:14 doesn’t know about any of these things.
0:50:16 So what he uses ChatGPT for
0:50:17 is he’s got this,
0:50:17 like,
0:50:18 2,000 word prompt
0:50:20 that describes his fictional world.
0:50:21 Here are the characters.
0:50:22 Here’s a power structure.
0:50:23 Here are the powers people have.
0:50:24 Here’s what you can and can’t do.
0:50:27 And then the way he tests the world
0:50:28 is he turns it into a role-playing game.
0:50:29 It’s like,
0:50:29 okay,
0:50:31 I’m going to jump into the world.
0:50:32 Now you, ChatGPT,
0:50:33 I’m going to do this.
0:50:33 Tell me what happens.
0:50:34 Oh, this happened.
0:50:35 Okay, now I’m going to do this.
0:50:36 Okay, well,
0:50:37 now you’ve got this power.
0:50:38 And so it will sort of
0:50:39 kind of pressure test
0:50:40 kind of his world.
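The world-prompt-as-game loop described here maps cleanly onto a chat-model conversation: the world description becomes the system message, and each player action is appended to the history before the model narrates what happens. A minimal sketch of that loop follows; the world rules, the function names, and the canned `narrate` stand-in for a real chat-model API call are all illustrative assumptions, not anything from the episode.

```python
# Toy sketch of the "world prompt as a role-playing game" idea:
# a long world description becomes the system prompt, and each player
# action is appended to the chat history before asking the model what
# happens next. `narrate` is a canned stand-in for a real chat-model
# call, so the loop can run offline.

WORLD_PROMPT = """You are the game master for a fantasy world.
Rules: characters draw power from ink; no one can fly; the Guild
enforces the power structure. Narrate consequences of player actions
and never break the world's rules."""

def narrate(history):
    # Stand-in for a model call; a real version would send `history`
    # to a chat endpoint and return the assistant's reply.
    last_action = history[-1]["content"]
    return f"The world reacts: as you {last_action.lower()}, the Guild takes notice."

def play_turn(history, action):
    # One RPG turn: record the player's action, get the narration back.
    history.append({"role": "user", "content": action})
    reply = narrate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": WORLD_PROMPT}]
print(play_turn(history, "Enter the Guild hall"))
print(play_turn(history, "Challenge the Guildmaster"))
```

Because the full history is resent each turn, a real model sees every prior action and ruling, which is what lets it pressure-test the world's rules consistently.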
0:50:41 And so that’s an expression
0:50:42 of his creativity
0:50:43 because the world
0:50:44 was sitting in his head,
0:50:45 but now he can actually
0:50:46 share that with friends.
0:50:48 Maybe turn that into a book someday
0:50:48 because it’s going to take
0:50:49 the ideas that he has
0:50:51 and hopefully in the meantime,
0:50:53 he will kind of develop
0:50:54 some of those foundational skills,
0:50:54 but he doesn’t have to wait
0:50:56 until like 12 years
0:50:56 of writing education
0:50:58 before he can take this idea forward.
0:50:58 As a child,
0:51:00 he has lots of creativity,
0:51:02 but as a practitioner,
0:51:03 most of those things
0:51:03 that he would love
0:51:04 to be able to manifest
0:51:05 in the world,
0:51:07 he has nothing close
0:51:08 to the skills required,
0:51:09 whether it’s drawing
0:51:10 or writing or anything.
0:51:11 So I think that’s what
0:51:13 AI can help us kind of elevate.
0:51:14 And once again,
0:51:16 we have to use it responsibly,
0:51:17 but it should be able
0:51:18 to elevate our skills.
0:51:19 I want to show you guys
0:51:22 an example of this real quick.
0:51:24 So I had this idea
0:51:25 not long ago,
0:51:25 a couple of weeks ago
0:51:29 of creating a game
0:51:30 using only AI.
0:51:32 So I don’t know
0:51:33 if you guys ever played
0:51:34 the Monkey Island games
0:51:36 from like when I was a kid.
0:51:37 I played Monkey Island.
0:51:38 It was an incredible game.
0:51:39 It’s got basically
0:51:40 this guy wants to be a pirate.
0:51:41 It’s like this very funny,
0:51:43 but like 8-bit art style game.
0:51:45 And so I created a version of that
0:51:47 called Escape from Silicon Valley.
0:51:48 I didn’t create the whole game,
0:51:49 but I create like the art,
0:51:50 but like check this out.
0:51:52 So I go into AI
0:51:53 and I basically start
0:51:54 creating the game art.
0:51:55 And so it’s like the story
0:51:57 is basically like deep in San Francisco.
0:51:58 The year is 2048.
0:52:00 Zuck is starting
0:52:01 his third term in office.
0:52:04 You know, Nancy Pelosi passes away,
0:52:06 the richest woman on earth.
0:52:07 And then, you know,
0:52:08 Elon is promising
0:52:09 that self-driving cars
0:52:10 are coming really, really soon
0:52:11 for real this time.
0:52:12 And here you are,
0:52:13 you’re this character
0:52:16 and you’re in the OpenAI office.
0:52:17 And basically the idea is like-
0:52:18 Oh, Charlie, look at that.
0:52:19 What’s that?
0:52:20 Look at the Charlie bar.
0:52:22 Yeah, yeah, exactly.
0:52:23 I was putting in some references
0:52:24 to like, you know,
0:52:24 stuff that I thought was,
0:52:25 it would be cool.
0:52:27 That is so cool.
0:52:27 What did you use
0:52:28 to make those images?
0:52:29 So that right there
0:52:31 was just ChatGPT
0:52:33 and a Midjourney mix.
0:52:34 I tried using, you know,
0:52:35 Scenario and a couple other
0:52:36 like game-specific tools.
0:52:37 Like check this out.
0:52:38 So like I created
0:52:39 all these like technical characters.
0:52:40 So it’s like I create
0:52:41 Zuck and Palmer Luckey
0:52:42 and like Chamath
0:52:43 and Elizabeth Holmes in jail.
0:52:44 Oh, that is awesome.
0:52:45 And I had it basically
0:52:46 write the scenes
0:52:47 for the levels with me,
0:52:48 like write the dialogue with me,
0:52:50 create the character art.
0:52:51 Dude, that’s like sick.
0:52:52 Why didn’t you do that?
0:52:55 Well, because I did the fun part
0:52:56 in the first two weeks
0:52:57 where I was like,
0:52:58 oh, the concept,
0:52:59 the levels,
0:53:01 the character art,
0:53:01 the music,
0:53:02 seeing what AI could do.
0:53:04 But then to actually make the game,
0:53:06 the AI can’t do that.
0:53:07 And so I was like,
0:53:08 oh, now I need to like,
0:53:10 I mean, people who build games
0:53:11 spend years building it.
0:53:11 It’s like, oh,
0:53:12 this is like minimum
0:53:13 six to 12 months
0:53:14 doing this like very,
0:53:15 very arbitrary project.
0:53:17 But I still love the idea
0:53:18 and I’m going to like
0:53:20 package up the whole idea.
0:52:22 Dharmesh, last question.
0:53:23 Just really quick,
0:53:24 like you,
0:53:26 where do you hang out
0:53:27 on the internet
0:53:29 that we and the listener
0:53:30 can hang out
0:53:31 to stay on top
0:53:32 of some of this stuff?
0:53:33 Like are there,
0:53:34 like who’s a reputable
0:53:35 handful of people
0:53:36 on Twitter to follow
0:53:37 or reputable websites
0:53:38 or places to hang out at?
0:53:40 That’s interesting.
0:53:41 So I spend most of my time
0:53:44 on YouTube,
0:53:45 as it turns out.
0:53:49 And I sort of give into the,
0:53:50 give into the vibe,
0:53:51 so to speak,
0:53:52 and let the algorithm
0:53:52 sort of figure out
0:53:54 what things I might enjoy.
0:53:56 It gets it right sometimes,
0:53:57 it gets it wrong sometimes.
0:53:58 So it’s a mix of things.
0:54:01 But the person that I think,
0:54:03 if you want to kind of get deeper
0:54:04 into like understanding AI,
0:54:05 there’s a guy named,
0:54:06 Andrej Karpathy,
0:54:07 I don’t know if you’ve
0:54:08 come across him.
0:54:09 Just search for Karpathy.
0:54:10 Dude,
0:54:10 you don’t want to know
0:54:11 how I know,
0:54:12 like I get so many ads
0:54:14 that says like Andre Karpathy
0:54:16 said this is the best product
0:54:17 or Andre Karpathy
0:54:18 showed me how to do this,
0:54:19 now I’m going to show you.
0:54:20 Like I don’t even know
0:54:21 who Andrej is
0:54:22 other than ads
0:54:23 run his name
0:54:24 to promote him.
0:54:25 Yeah, I mean he’s
0:54:27 one of the true OGs
0:54:27 in AI,
0:54:28 but he has this
0:54:30 orthogonal skill,
0:54:30 or one of them,
0:54:31 I think he’s got like nine,
0:54:33 he’s probably like a nine tool
0:54:33 player of some sort,
0:54:35 but he’s able to really
0:54:37 simplify complicated things
0:54:39 without making you feel stupid,
0:54:39 right?
0:54:41 So he’s not talking down to you.
0:54:41 He’s like,
0:54:42 okay,
0:54:43 like here’s how we’re going
0:54:43 to do this.
0:54:44 We’re going to kind of build it
0:54:45 brick by brick
0:54:46 and you’re going to understand
0:54:48 at the end of this hour and a half
0:54:49 how X works,
0:54:49 right?
0:54:51 And he’s amazing.
0:54:52 So that would be one.
0:54:53 So him,
0:54:54 any other YouTubers
0:54:55 or Twitter people or blogs?
0:54:56 On the business side,
0:54:56 actually,
0:54:58 like Aaron Levy from Box
0:54:59 is actually very,
0:55:00 very thoughtful on the,
0:55:01 if you’re in software
0:55:02 or in business
0:55:04 and the AI implications there,
0:55:04 I think he’s really good.
0:55:06 Hiten Shah,
0:55:07 who you both know
0:55:08 now at Dropbox
0:55:09 through the acquisition,
0:55:11 has been on fire lately
0:55:12 on LinkedIn.
0:55:14 So he’s one I would go back,
0:55:15 especially for the last
0:55:16 like three,
0:55:16 four months
0:55:18 and read all the stuff
0:55:18 he’s written.
0:55:19 I think he’s on point
0:54:19 on that front.
0:55:21 Those are awesome.
0:54:21 Dharmesh,
0:55:22 thanks for coming on.
0:55:23 Thanks for teaching us.
0:55:24 You’re one of my favorite teachers
0:55:26 and entertainers.
0:55:27 So thank you for coming on, man.
0:55:29 My pleasure.
0:55:30 It was good to see you guys.
0:55:30 It was fun.
0:55:31 Likewise.
0:55:31 Thank you.
0:55:32 That’s it.
0:55:32 That’s the pod.
0:55:36 I feel like I can rule the world.
0:55:38 I know I could be what I want to.
0:55:40 I put my all in it
0:55:41 like no day’s off.
0:55:42 On the road,
0:55:42 let’s travel,
0:55:44 never looking back.
0:55:44 All right,
0:55:44 my friends,
0:55:46 I have a new podcast
0:55:47 for you guys to check out.
0:55:47 It’s called
0:55:49 Content is Profit
0:55:50 and it’s hosted by
0:55:52 Luis and Fonzie Cameo.
0:55:53 After years of building
0:55:55 content teams and frameworks
0:55:56 for companies like Red Bull
0:55:57 and Orange Theory Fitness,
0:55:58 Luis and Fonzie
0:55:59 are on a mission
0:56:00 to bridge the gap
0:56:01 between content
0:56:02 and revenue.
0:56:03 In each episode,
0:56:03 you’re going to hear
0:56:04 from top entrepreneurs
0:56:05 and creators
0:56:06 and you’re going to hear them
0:56:07 share their secrets
0:56:07 and strategies
0:56:09 to turn their content
0:56:09 into profit.
0:56:11 So you can check out
0:56:12 Content is Profit
0:56:13 wherever you get
0:56:14 your podcasts.
Want Sam’s playbook to turn ChatGPT into your executive coach? Get it here: https://clickhubspot.com/sfb
Episode 726: Sam Parr ( https://x.com/theSamParr ) and Shaan Puri ( https://x.com/ShaanVP ) talk to Dharmesh Shah ( https://x.com/dharmesh ) about how he’s using ChatGPT.
—
Show Notes:
(0:00) Intro
(2:00) Context windows
(5:26) Vector embeddings
(17:20) Automation and orchestration
(21:03) Tool calling
(28:14) Dharmesh’s hot takes on AI
(33:06) Agentic managers
(39:41) Zuck poaches OpenAI talent w/ 9-figures
(49:33) Shaan makes a video game
—
Links:
• Agent.ai – https://agent.ai/
• Andrej Karpathy – https://www.youtube.com/andrejkarpathy
—
Check Out Shaan’s Stuff:
• Shaan’s weekly email – https://www.shaanpuri.com
• Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents.
• Mercury – Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies!
Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC
—
Check Out Sam’s Stuff:
• Hampton – https://www.joinhampton.com/
• Ideation Bootcamp – https://www.ideationbootcamp.co/
• Copy That – https://copythat.com
• Hampton Wealth Survey – https://joinhampton.com/wealth
• Sam’s List – http://samslist.co/
My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano
-
America’s Energy Problem: The Grid That Built America Can’t Power Its Future
AI transcript
0:00:07 The energy grid and electrical grid of the future, it’s not just going to be the dichotomy of generation, transmission, and storage.
0:00:12 This sort of next generation of what the grid looks like is going to be in a much more decentralized way.
0:00:14 Why are delivery costs such a big problem?
0:00:20 The grid is aging now and brittle. The workforce has aged out.
0:00:24 Should we just leapfrog the grid? I need this power now, today.
0:00:27 The United States needs to get better at megaprojects.
0:00:29 Things that are a billion dollars, things that are at scale.
0:00:35 There is no safety, there is no national defense, there is no national security without a reliable electrical grid.
0:00:40 U.S. energy usage per capita peaked in 1973.
0:00:42 Since then, it’s been flat.
0:00:46 Meanwhile, China’s per capita energy use has grown ninefold.
0:00:54 Today, with AI, EVs, manufacturing, and data centers demanding more power than ever, America’s electrical grid is buckling.
0:00:57 We haven’t just underbuilt it, we’ve forgotten how to build it.
0:01:06 In this episode, I’m joined by A16Z general partners David Yulevich and Aaron Price-Wright and investing partner Ryan McIntosh from the American Dynamism team.
0:01:11 We talk about how the U.S. energy system broke, why fixing it is about more than megawatts,
0:01:15 and what it’s going to take from new tech and talent to faster permitting and smarter software.
0:01:25 As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice,
0:01:32 or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
0:01:37 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:01:45 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
0:01:53 U.S. energy peaked in 1973 in terms of per capita usage.
0:01:56 China’s has increased ninefold over that same time period.
0:02:00 We have some reasons to be optimistic now that things have started to change or will change further.
0:02:02 Why don’t you give some context there?
0:02:04 What’s happened and why should we be excited about what’s coming forward?
0:02:09 The history of the grid in the United States was build big power plant industry formed around it.
0:02:12 The grid grew incredibly fast through the 20th century.
0:02:15 Then around the 80s and 90s, things started slowing down.
0:02:19 And then through the early 2000s, the grid effectively froze in the United States.
0:02:25 A big piece of that was a lot of the energy generation, a lot of the manufacturing, a lot of sort of heavy industry moved to Asia.
0:02:29 And so for the last 20 years, effectively, the grid has ossified.
0:02:32 We forgot how to build new power plants.
0:02:38 We forgot how to build new power projects, new loads, sort of large data centers, large factories, large megaprojects.
0:02:41 And you say we forgot, were we not allowed to, or we actually just lost the know-how?
0:02:42 We were allowed to, but we lost the skill set.
0:02:45 And I think you can see it in more extreme examples with nuclear power plants.
0:02:47 All of that sort of transition happened decades before.
0:02:54 But basically, the grid itself, the grid operators, forgot how to plan, how to move quickly, how to do it cheaply.
0:02:57 And so now we’re at this point in time where we are reshoring.
0:03:00 And we are bringing back manufacturing, we are bringing back data centers.
0:03:06 And there’s this highly concentrated demand, and it’s now, now, now at sort of any price, but they cannot move fast enough.
0:03:07 And so that’s what we’re seeing today.
0:03:10 We talk about data centers and the grid being inflexible to this.
0:03:12 It’s playing catch up.
0:03:16 We need to do a lot of the growth that happened in China and bring this here and do it incredibly fast.
0:03:20 How did this forgetting happen, and how can this relearning happen or this retraining happen?
0:03:22 It’s a good question.
0:03:24 I think a lot of it’s like a workforce issue.
0:03:25 I think a lot of it’s a policy issue.
0:03:39 I think the United States, historically, was a bunch of regulated utilities, sort of the top-down, big thermal power plant, big transmission lines connecting to substations, then distribution lines going to individual factories, homes, things like that.
0:03:45 And I think some of the newer technologies don’t necessarily benefit from scale in the same ways that these large thermal plants typically did.
0:03:50 And so this sort of next generation of what the grid looks like is going to be in a much more decentralized way.
0:03:53 So there’s also an element of relearning of what is the grid actually?
0:03:57 Is the grid these large sort of power systems and large infrastructure projects?
0:04:04 Or does it look like a lot more decentralized way where we can eliminate a lot of the wires in between, things like delivery costs, which have increased exponentially?
0:04:07 And can we do it in a more dynamic and flexible way?
0:04:10 So things like solar and batteries, they don’t need to be massive.
0:04:11 You can put them anywhere.
0:04:12 You can put them next to load.
0:04:16 And so this is also sort of an element that grid operators are thinking through.
0:04:22 How do we do that while also managing frequency, voltage, things like that, without causing sort of a grid to go down?
0:04:23 So there’s a lot of challenges.
0:04:28 Well, if you think about what our grid is, I mean, it’s a piece of technology that was designed about 100 years ago.
0:04:33 And very little technology on the grid has changed in those 100 years.
0:04:36 And you look at why are delivery costs such a big problem.
0:04:38 Grid is at capacity.
0:04:42 Getting a new project onto the grid today, you know, you sign up for interconnection, it could take a decade.
0:04:46 There’s a backlog of 20 plus years to get a new transformer.
0:04:51 The transformer technology we’re using today is kind of bananas if you actually look at what makes a transformer.
0:04:58 First of all, there’s one company that makes these, and there’s one plant in the U.S. that produces the right type of steel that you need in order to make these transformers.
0:05:03 And it’s 100-year-old tech, and it’s like the wait list for transformers is insane.
0:05:12 So I think we’re getting to the point in demand starting to rise again and the calcification of grid technology where it’s like should we just leapfrog the grid?
0:05:15 Like do we really need to wait in line and wait for this to catch up?
0:05:17 I think there’s like two versions of how you do that.
0:05:23 One is how do you get power gen and power storage essentially as close to demand as possible?
0:05:34 And that’s a problem for new tech to really help solve because we’re talking about instead of these like mega projects that we’re used to building, well, not used to building anymore, like massive nuclear plants, massive new natural gas plants, etc.
0:05:40 We’re talking about much smaller and more distributed sources of power bypassing interconnection altogether in many cases.
0:05:48 We’re seeing that as a pretty big trend with data centers as data centers are just building power directly on site and co-locating power with the data center because they’re like, I can’t wait.
0:05:52 Microsoft’s like, I can’t afford to wait 10 years to get an interconnection with the grid.
0:05:54 I need this power now today.
0:05:59 So how do you get power more tightly coupled with the load that it’s actually going to serve?
0:06:10 And that’s a really interesting problem for tech also from a software perspective because if you get generation, storage, and usage all co-located very closely together, like that’s a very good problem for AI to solve, like reinforcement learning.
0:06:14 Stick that in there and suddenly you get massively efficient systems that you couldn’t get at grid scale.
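To make the co-location point concrete, here is a toy dispatch loop for a site with solar generation, a battery, and a local load, using made-up hourly numbers. A real controller would optimize against forecasts and prices, which is the kind of problem reinforcement learning or model-predictive control targets; even a greedy rule shows the mechanics.

```python
# Toy dispatch for a co-located solar + battery + load site (made-up
# hourly kWh numbers). Greedy rule: store excess solar, discharge to
# cover deficits, and count any energy the site still can't serve.

def dispatch(solar_kwh, load_kwh, capacity_kwh):
    soc = 0.0          # battery state of charge, kWh
    unserved = 0.0     # load the site failed to cover, kWh
    for gen, load in zip(solar_kwh, load_kwh):
        surplus = gen - load
        if surplus >= 0:
            soc = min(capacity_kwh, soc + surplus)   # charge, clip at capacity
        else:
            draw = min(soc, -surplus)                # discharge what we can
            soc -= draw
            unserved += -surplus - draw              # rest goes unserved
    return soc, unserved

# Sunny midday, evening peak load:
solar = [0, 2, 6, 8, 6, 1, 0, 0]
load  = [1, 1, 2, 2, 3, 4, 5, 3]
print(dispatch(solar, load, capacity_kwh=10))  # (final charge, unserved kWh)
```

With generation, storage, and usage all behind one meter, the whole control problem lives in one loop like this instead of being smeared across interconnection queues and grid-scale markets.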
0:06:20 I think an interesting point to add on to that is that there’s very little visibility into the grid itself.
0:06:29 So like they understand sort of power plants are operating or not, but especially at sort of the distribution level, the things, the power lines you might see outside your home, there’s very little understanding of what’s actually going on there.
0:06:35 And so there’s like a reluctance, especially when you have things like net metering where I’m sending from my home a battery back to the grid.
0:06:37 Things get incredibly complicated.
0:06:44 And so the grid operators don’t have a very good understanding of when can they allow new projects to go online, how much power, when to actually cut people off.
0:06:53 And so there’s a lot of these policies, interconnections, sort of the general term, but like states like Texas have much more lenient policy of you can build wherever you want.
0:06:55 If we need to cut it off from the grid, we’re going to do so.
0:06:57 And so it’s sort of this connect and manage approach.
0:07:09 Whereas other states, like they will do these incredibly long feasibility studies in like a variety of sort of scenarios with like the entire grid is at peak capacity, but they want to make sure this specific project can stay online 24-7.
0:07:13 So that ends up creating these massive delays of being able to study every single possibility.
0:07:16 So there’s a lot of policy approaches here as well.
0:07:27 And there’s a bunch of these technologies called grid enhancing technologies, which are effectively like, you know, an average power line might be used at 50% capacity, but it needs to be designed for the peak capacity for the summer when everyone has their AC on.
0:07:31 And so there’s a lot of sensors or other technology that could be placed there.
0:07:35 So you have a much more dynamic view of what our infrastructure actually was looking like.
0:07:39 And so when we have these new technologies, then we can much more efficiently use the infrastructure we have.
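Dynamic line rating, one of the grid-enhancing technologies described here, re-rates a conductor from live weather instead of the worst-case summer assumption. The sketch below is a toy scaling, not a real conductor thermal model (IEEE Std 738 is the standard one), and every coefficient in it is a made-up illustration.

```python
# Toy dynamic line rating: scale a conductor's worst-case (static)
# rating by live weather. Real ratings come from a conductor thermal
# model (e.g. IEEE 738); the factors here are illustrative only.

def dynamic_rating(static_amps, ambient_c, wind_ms):
    # Cooler air and more wind let the conductor carry more current.
    temp_factor = 1.0 + 0.005 * (40.0 - ambient_c)   # static rating assumes 40 C air
    wind_factor = 1.0 + 0.10 * min(wind_ms, 3.0)     # cap the wind credit
    return static_amps * temp_factor * wind_factor

static = 1000.0  # amps, worst-case summer rating
# A mild, breezy day frees up real headroom on the same wire:
print(round(dynamic_rating(static, ambient_c=25.0, wind_ms=2.0)))
```

The point is only directional: with sensors reporting actual conditions, the same wire can safely carry meaningfully more current than its static rating assumes, which is exactly the "use the infrastructure we have more efficiently" argument.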
0:07:42 What are your reactions to this conversation thus far?
0:07:45 Where are some areas you’re particularly excited about or reasons to be optimistic?
0:07:57 Well, I think the reason we’re having this conversation is we’re touching on a bunch of these topical themes, which is that we’re in a moment in time where, exactly like Ryan said, the grid is aging now and brittle.
0:07:59 The workforce has aged out.
0:08:01 We’ll talk about that more, I think, in a minute.
0:08:14 But we had to go out and hire and train entire specialized crews, specialized people that work with cement and concrete, specialized people that work with steel to go build the large Vogtle reactors in Georgia.
0:08:17 We put them on Vogtle reactors three and four.
0:08:18 We turned them on.
0:08:19 Huge win.
0:08:22 And then those people went back to building highways or like bridges or something else.
0:08:31 And like instead of just going and putting them on Vogtle like five, six, seven, eight, nine, ten and building just like this massive crescendo of nuclear power, we just put these people back into the general workforce.
0:08:33 And so we’re not learning our lesson there in the workforce.
0:08:51 And at the same time, we have this insatiable thirst for energy, whether it’s EVs, whether it’s data center compute for AI, or just generally a shift toward more and more consumption of electricity, or even just like the reshoring and manufacturing, all these things that are just very, very electron heavy.
0:09:01 At the same time that we’re seeing this, you know, what Aaron brings up to me is a piece that we hardly ever talk about, which is resiliency and not having people be as dependent on the,
0:09:03 the interconnectedness of the grid.
0:09:15 People that you deploy solar, you know, we talk about like distributed compute for those of us that are in the tech world and like how important it is to like have distributed compute and have networks be able to suffer and survive through segmentation things.
0:09:17 But like the grid is very interdependent.
0:09:22 Even in the US, there’s really only a few major regions that can segment themselves off.
0:09:34 But when you deploy solar or you deploy batteries, or you deploy an SMR reactor or your own power generation on site for your own data center, you don’t have to worry about how brittle the grid is because you’re fairly resilient from it.
0:09:44 And I think that’s a component is that the energy grid and electrical grid of the future, it’s not just going to be the dichotomy of generation, transmission and storage.
0:09:52 But as Aaron brought up, you might do all three of those things in the same place and not have to worry about how robust the grid is or how capable the grid operators are.
0:09:56 And I think that’s a dimension that was never important to people before, but it’s important today.
0:10:06 And you can certainly imagine if you’re the military, you certainly care about having reliable access to power at all your forward operating bases and even at home at your home military bases.
0:10:09 Like you just cannot lose your ability to have electricity.
0:10:11 And so I think all these things are just coming together at once.
0:10:14 And it’s really exciting moment in time.
0:10:33 And I think it’s buoyed by the fact that we’re also at this sort of technology inflection point where AI can help some of these things, not just be a consumption driver, but even be an enabler and facilitating more efficient use of electricity, better monitoring of the grid, better ways to even go through the regulatory and permitting process, which is onerous for many cases.
0:10:42 Building on that, I think Texas, literally today, we’re recording this during a massive heat wave that’s affecting most of the eastern and southern United States.
0:10:59 And if you compare the grids of Texas with New York, Texas famously, historically, had massive grid failures several years ago when a big heat wave came through, the grid couldn’t keep up with all the air conditioners that were going on, and people saw massive power outages.
0:11:00 Everyone was really mad.
0:11:02 People were like, oh, ERCOT doesn’t work.
0:11:03 Dereg doesn’t work.
0:11:05 And what has Texas done in the couple of years since that happened?
0:11:09 They have absolutely flooded the grid with solar capacity.
0:11:13 Texas has doubled their solar capacity in the last approximately three years.
0:11:17 And with that, they’ve just deployed thousands of batteries.
0:11:26 One of our portfolio companies, Base Power, is one of the players here, but there’s many battery power companies deploying all across Texas to provide storage for that solar power.
0:11:38 And if you look at the performance of the Texas grid versus the performance of the New York and surrounding area grid during this heat wave, I must have seen 10 news articles this morning about how well Texas grid has done.
0:11:45 The elasticity and ability to react to very quick changes in demand without having to change kind of baseload power.
0:11:50 Like, you can’t build a new natural gas plant or a new nuclear reactor overnight.
0:11:53 But solar is just so insanely cheap.
0:11:58 Like, it’s basically having a giant, massive, huge nuclear reactor in the sky that will go forever.
0:12:00 And Texas isn’t a green state.
0:12:01 This isn’t a political issue.
0:12:11 But it’s like, why aren’t we deploying the world’s cheapest form of power literally everywhere we possibly can and then just putting batteries everywhere?
0:12:12 Like, there just should be batteries everywhere.
0:12:19 It’s bananas to me that batteries as a topic has, like, recently gotten caught in the sort of political crosshairs.
0:12:23 We really, as a society, need to be good at power storage and batteries.
0:12:25 Like, this shouldn’t be a controversial topic.
0:12:27 We invented the lithium-ion battery.
0:12:42 And yet today, if you want to buy a battery, whether it’s for a drone or for the grid or for your car or for whatever it is, like, you’re either buying a battery made in a lights-out factory in China or you’re buying a battery produced in Vietnam by a Chinese company.
0:12:45 And, like, there’s no meaningful effort in the U.S. to change that.
0:12:54 And I think that this is a really critical problem, not just to manage power load on the grid, but for power for all of the things that we need to power the next generation of innovation in the United States.
0:13:03 I think we’d be hard-pressed on the American dynamism team to think of a company that we’ve met with an interesting technology in the last two years that doesn’t have a battery in it somewhere.
0:13:08 So, as a country, we need to be investing in battery technology and battery manufacturing.
0:13:21 By the way, if China decides that whatever your company is doing that’s using batteries doesn’t align with what they like or they want to punish you, being cut off from being able to buy batteries from China is incredibly punitive to a company.
0:13:23 And we’ve certainly seen that happen with some of our startups.
0:13:31 And then you find out quickly that the ability to procure and source batteries from places that are not in China is very difficult.
0:13:39 If you extrapolate that out to what would happen to our whole country if we just were unable to buy batteries from China, it could be catastrophic in a very short period of time.
0:13:49 Just to add on to a quick point on the grid side of batteries, if the rest of the country, which is presumably watching what’s going on in ERCOT, which is the grid operator in Texas,
0:14:02 if Texas can prove that you can deploy these sort of decentralized distributed energy resources and to sort of flatten these peaks, provide more resiliency and ultimately lower price of electricity, then every state should go and do this.
0:14:07 There’s a very complex web of deregulated and regulated entities when it comes to the grid.
0:14:13 Of course, there are a lot of different policy and workforce and political reasons why not everywhere is this decentralized world.
0:14:18 And it’ll probably be more complex than just these deregulated energy-only markets that Texas works with.
0:14:23 But I think this is going to be very obvious if it isn’t obvious already.
0:14:29 And I think the United States needs to move incredibly fast to make this happen and hook up batteries, solar panels, make it easier and cheaper to do it.
0:14:33 Even co-location for large loads is still a very politically fraught issue.
0:14:35 Utilities are pushing against this.
0:14:38 It’s still really hard to hook up solar and batteries to your home.
0:14:43 I think it’s actually cheaper to put residential solar on your home in Germany than in the United States.
0:14:46 And that’s largely a permitting, largely an installation issue.
0:14:47 That’s crazy.
0:14:48 That should not be the case.
0:14:51 Aaron, I believe the quote in your college yearbook was,
0:14:52 drill, baby, drill.
0:14:57 How do you think about your sort of love for oil and gas with other sources of energy?
0:14:59 Oh, my parents will be shaking their heads if they hear this.
0:15:04 I think, broadly speaking, our approach to energy in the U.S. just needs to be yes and.
0:15:11 You look at the sort of atrophy of our power build-out over the last 30, 50, whatever, you know,
0:15:14 you name your time frame years compared to, let’s say, China.
0:15:19 And if we want to accomplish the goals that we’ve set out as a society to accomplish over the next decade,
0:15:20 like, we need more power.
0:15:22 And it’s a matter of yes.
0:15:25 And I think solar and batteries, extremely important.
0:15:29 And D, you should talk more about the exciting things that are happening around nuclear.
0:15:32 But, like, there is a place for oil and gas.
0:15:35 Like, I cut my teeth at Palantir working in oil and gas.
0:15:37 My husband worked in oil and gas.
0:15:40 Like, the first check I wrote at A16Z is in an oil and gas company.
0:15:43 So, this isn’t me coming with a particular agenda around carbon.
0:15:47 This is me coming and realizing that, like, we basically need every tool in our toolkit.
0:15:54 And we should be using technology to deploy whatever makes most sense, wherever it makes most sense at scale.
0:15:58 If we’re talking, like, energy mix of where we’re at today and where do we think we’re headed,
0:16:05 if I were to make, like, a personal bet, solar batteries are just the ability to be incredibly cheap and deploy incredibly fast.
0:16:07 Spin up and spin down.
0:16:07 Yeah.
0:16:09 And I think that already is underway, and I think that will continue.
0:16:12 But I think to be very clear is that you need all different types of energy.
0:16:15 You’re going to need true sort of baseload, dispatchable power.
0:16:16 It’s going to be gas.
0:16:17 It’s going to be nuclear.
0:16:18 It’s going to be geothermal.
0:16:20 It’s going to be a lot of hydro as well.
0:16:25 And I think as you attach more of these sort of renewable resources or these non-reliable resources,
0:16:28 while they’re incredibly cheap and works most of the time,
0:16:34 this long-tail risk, once you get to, like, 50% to 75% of the grid, is going to become very, very expensive.
0:16:36 You need a lot more battery backup, things like that.
0:16:39 And so I think it’s going to be very complex and it’s going to be different for many different regions.
0:16:43 But certainly it’s not all of any given resource.
0:16:50 Yeah, when you look at, like, the changing nature of load over the next decade, some of that is going to come from data centers.
0:16:51 Some fraction.
0:16:57 I would say it’s probably overstated how much data centers contribute to the growing load in the United States over the next decade.
0:17:00 Data centers generally are baseload.
0:17:05 If you’re training a model, you’re largely using a dedicated amount of power for the long term.
0:17:08 Maybe there are some fluctuations if you’re doing more inference.
0:17:11 But I would generally say, like, data centers represent baseload.
0:17:13 But then you also have things like electric vehicles.
0:17:15 You have things like heat pumps and air conditioners.
0:17:20 You have industrial autonomy, which may or may not be running 24-7.
0:17:24 So you’re going to have some increase in the base level of power we as a society need.
0:17:29 But continuing to increase the size of the peaks and troughs of how we use energy on a day-to-day basis.
0:17:37 And we should be thinking about designing our grid and designing our energy mix and power sources around what those loads look like.
0:17:41 And not over-solving for either baseload or variable power.
0:17:50 I think just to put this in more tangible terms, the peak summer load in places like California might be half of what it is in winter or something like that.
0:17:51 And it depends on what climate you’re in.
0:17:58 And so the concept of baseload is, like, do you build all the plants you’d need for the 100 gigawatts of power you’re going to need when it’s wintertime?
0:18:01 Or in the summer, you know, half the year, you’re only going to need 50.
0:18:02 So, like, what would be baseload?
0:18:08 What you need to do in sort of modern civilization is, every time you turn on the switch, like, the power is working.
0:18:14 And so how you actually match supply with that very, very fluctuating load, both daily and seasonally, is very complex.
0:18:21 And so, like, today, you might have to build a natural gas, what’s called, like, a peaker plant, that might only operate, like, a week a year.
0:18:28 And so that’s an incredibly expensive asset that is going to only be delivering very expensive power, but is only needed when all the other resources are tapped.
0:18:31 And it’s, like, that last couple megawatts of power.
0:18:44 The alternative you could do now, what’s called, like, demand response, or with batteries on the grid, is to say, okay, well, instead of running this $10,000-a-megawatt-hour sort of plant, I can just make it so everyone’s thermostat in this area turns down a couple degrees.
0:18:49 And so then an aggregate means that I don’t need to build that large asset or pay that expensive premium.
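The aggregation arithmetic behind that trade-off can be sketched in a few lines (all the numbers below are illustrative assumptions, not figures from the conversation):

```python
# Back-of-the-envelope: how much peaker capacity can aggregated
# thermostat adjustments replace? All inputs are illustrative.

def demand_response_mw(homes: int, kw_shed_per_home: float) -> float:
    """Aggregate load shed (MW) from many small per-home reductions."""
    return homes * kw_shed_per_home / 1000.0  # kW -> MW

# Assume turning a thermostat down a couple of degrees sheds ~1 kW of
# air-conditioning load per home during a heat wave.
shed = demand_response_mw(homes=200_000, kw_shed_per_home=1.0)
print(f"Aggregate shed: {shed:.0f} MW")  # 200 MW
```

Even a ~1 kW reduction per home, aggregated across a couple hundred thousand homes, is on the order of a peaker plant’s output, which is the substitution being described.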
0:18:51 Okay, pushing back on that.
0:18:59 Like, I think that the American consumer will fully organ-reject that level of dictation over how they use their power.
0:19:10 I think a more likely outcome is that you can do it on the compute side and just say, look, these three racks of the data center are just going to go offline during the peak summer heat when you’re running your AC.
0:19:13 This is not a critical job.
0:19:13 Right.
0:19:14 It’s a non-critical job.
0:19:15 It’s not a mission-critical job.
0:19:17 It’s a back office job.
0:19:19 And you’re just going to run it at night instead of during the day.
0:19:22 And you’re going to pay less electricity for that benefit.
0:19:26 I don’t know if I agree with Aaron that AI is not going to suck up all the power.
0:19:30 I think that Constellation just turned up a new nuclear reactor or is reactivating a reactor.
0:19:40 I think Meta immediately sucked up all of the power that they’re going to generate or nine-tenths of it or something from the new Constellation reactor that Meta signed the contract extension for.
0:19:50 And so I think we actually probably are underestimating the amount of electricity that we’re going to soak up with compute over the next 10, 20, 30, 40, 50 years.
0:19:52 The amount of data we’re going to start storing.
0:19:53 Just look at video.
0:19:57 The amount of video we create per minute has just ballooned way beyond anyone’s expectations.
0:19:59 I’m sure the same will be true for AI compute.
0:20:05 And I think once you start getting into like robotics and autonomy, if you think about compute expansively, I totally agree.
0:20:05 Yeah.
0:20:12 And so like those things are going to be much more responsive than do I want to go have my room be 74 degrees instead of 71 degrees.
0:20:17 Well, let me tell you, anyone that’s done business in Tokyo in the summer knows as a nation, by the way, Japan has done this.
0:20:19 It is absolutely terrible.
0:20:20 It’s horrible.
0:20:21 We’re not going to do that in America, please.
0:20:23 We’re not going to do that.
0:20:24 We’re not going to do that in America.
0:20:30 You’re in like the 40th floor of a Japanese building wearing a suit, by the way, because you have to wear a suit.
0:20:31 It’s swelteringly hot.
0:20:34 Everyone is walking around like they’re not miserable, but they are miserable.
0:20:36 And you want to open a window, but the window won’t open.
0:20:38 It is one of the worst.
0:20:39 Why doesn’t France work in the summer?
0:20:40 This is why.
0:20:41 It’s terrible.
0:20:41 Exactly.
0:20:43 So we hate this idea.
0:20:44 I’m spoiled living in California.
0:20:49 I will say like one of the biggest proponents of this or current users is like crypto mining.
0:20:50 It’s like these are flexible.
0:20:51 Yeah, but that’s right.
0:20:53 But those people are demand responsive, right?
0:20:56 So those people will just turn off their compute when it’s not cost effective.
0:20:56 Yeah.
0:21:01 But it’s important for the grid is that you can build these assets and you have the demand for power here.
0:21:02 Like they’re going to soak up that demand.
0:21:06 But if it gets far too expensive, they will also shed that demand.
0:21:10 In the U.S., my guess is that this already is reflected in the fact that you have peak load pricing.
0:21:14 Like for me, my power is 10x more expensive between the hours of 4 and 9 p.m.
0:21:15 So we don’t run the washing machine.
0:21:16 Yeah, right.
0:21:20 And so leave it to organizations or individuals to figure out how to manage that.
0:21:22 But just like charge people for more power.
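A toy calculation shows the incentive that kind of time-of-use pricing creates (the 10x peak ratio comes from the speaker; the absolute rate and the 500 kWh job size are made-up assumptions):

```python
# Toy time-of-use comparison: run a flexible 500 kWh batch job at peak
# vs. off-peak, with peak power 10x the off-peak rate (per the speaker).

OFF_PEAK_RATE = 0.05            # $/kWh, illustrative
PEAK_RATE = OFF_PEAK_RATE * 10  # "10x more expensive between 4 and 9 p.m."

def job_cost(kwh: float, rate: float) -> float:
    """Energy cost in dollars for a job of a given size at a given rate."""
    return kwh * rate

peak_cost = job_cost(500, PEAK_RATE)       # $250
night_cost = job_cost(500, OFF_PEAK_RATE)  # $25
print(f"Savings from shifting to night: ${peak_cost - night_cost:.2f}")
```

That 10x spread is exactly why a non-critical back-office job would move to overnight hours without any top-down dictation.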
0:21:26 It’s a little bit of a non sequitur, but I like that we keep talking about oil and gas.
0:21:27 We’re talking about natural gas.
0:21:28 We’re talking about batteries.
0:21:29 We’re talking about solar.
0:21:30 We’re talking about nuclear.
0:21:34 Ryan even mentioned hydro, which of course is totally viable in some places.
0:21:39 The thing that nobody ever brings up anymore, except for I think a very fringe group, is wind power.
0:21:42 And I’m very happy to hear that nobody here is jumping for wind.
0:21:45 I think wind is incredibly cheap when it’s working.
0:21:48 You kind of know solar is going to work and like sort of this reliable schedule.
0:21:49 The sun’s going to be out.
0:21:50 It’s going to be working.
0:21:53 Spare some cloudy days, but there’s still always something coming through.
0:21:55 But the wind might not blow for a week.
0:21:56 I think it’s worse than that.
0:22:01 I think I read that globally, one third of all wind turbines are out of service at any given time.
0:22:07 The other thing is, I think wind is the only power generation mechanism where when you get too much of the input,
0:22:10 the blades of a wind turbine feather and turn off.
0:22:13 Whereas there’s no such thing as too much sun for solar.
0:22:16 Too much water and hydro, like that’s not a problem.
0:22:18 But like too much wind and the wind generator turns off.
0:22:21 Well, who wants a system where you get more of the input you want and then it stops working?
0:22:25 It’s also just extremely hard and dangerous and specific to service.
0:22:29 Like you see those videos of people climbing the ladder up to the top of the wind turbine.
0:22:35 Grid operators look at wind and it’s like great when it’s working, but they can’t plan for it.
0:22:37 They have to build other capacity to backstop that
0:22:38 if it’s not going to be there when they need it.
0:22:39 So fine, I concede wind.
0:22:41 We can move past wind.
0:22:42 Yeah, I agree.
0:22:42 So no wind.
0:22:46 But I do think the demand response, this went to the point that Ryan brought up,
0:22:47 that monitoring the grid is really important.
0:22:50 Being able to send signaling on the grid is really important.
0:22:52 And you have to remember, we’re all used to the internet,
0:22:55 which has bidirectional communication and messaging.
0:22:57 It has data layer and control layers.
0:23:00 And there’s like a full control plane and things like that for the internet.
0:23:02 The electrical grid doesn’t really have that.
0:23:04 And so to be able to send messaging and things is very, very difficult.
0:23:07 And now a lot of people just do it out of band using the internet.
0:23:13 To actually send messaging and do monitoring of the grid itself without an overlay network is very hard.
0:23:16 And that’s one of the challenges that people are now, I think, starting to address.
0:23:21 Yeah, it’s wild how much of a mystery what’s happening on the grid is at any given time.
0:23:23 Like we really have very little visibility.
0:23:31 And it’s very hard for, I think, centralized utilities to deploy meaningful software to understand that.
0:23:36 So when we think about as VCs, like what types of things do we look at and what do we get excited about?
0:23:40 I think companies that kind of are going at this monitoring from the opposite direction.
0:23:43 Like how do you get software almost insidiously on the grid?
0:23:49 Like how do you start learning more about demand and generation as close to the source as possible?
0:23:52 And then try to feed that information back from each other.
0:24:00 Like the idea that you’re going to go sell a software tool to a PG&E or similar and have a reasonably speedy top-down implementation
0:24:04 where you actually get good signal and metrics and can actually do interesting things with that data.
0:24:08 To me, I find like a little bit unbelievable.
0:24:11 Something very interesting that I learned is a lot of the load forecasting,
0:24:15 which is basically like the tasking of when plants need to go online.
0:24:19 So there’s usually a 24-hour ahead sort of market, day-ahead market that’ll basically say,
0:24:21 you need to run your plant at this time.
0:24:23 And then they sort of supply and demand match to a price.
0:24:24 And there’s like a merit order.
0:24:25 It’s complex, but that’s how it’s done.
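A minimal sketch of that merit-order clearing, with hypothetical plants and costs (real day-ahead markets layer bids, transmission constraints, and much more on top of this):

```python
# Minimal merit-order dispatch: sort offers by marginal cost, dispatch
# until forecast demand is covered; the last (marginal) unit sets the
# clearing price. Offers below are hypothetical, not real market data.

def clear_market(offers, demand_mw):
    """offers: list of (name, capacity_mw, cost_per_mwh).
    Returns (dispatched (name, mw) pairs, clearing price in $/MWh)."""
    dispatched, remaining, price = [], demand_mw, 0.0
    for name, cap, cost in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break
        take = min(cap, remaining)
        dispatched.append((name, take))
        remaining -= take
        price = cost  # marginal unit's cost becomes the clearing price
    return dispatched, price

offers = [("solar", 400, 0), ("gas_ccgt", 300, 40), ("peaker", 100, 200)]
units, price = clear_market(offers, demand_mw=650)
print(units, price)  # the expensive peaker only runs if cheaper capacity runs out
```

With 650 MW of demand, solar and part of the gas plant cover it, so the peaker never runs and gas sets the price; push demand past 700 MW and the $200/MWh peaker suddenly sets the price for everyone.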
0:24:29 But most of this load forecast is done by just looking at the weather.
0:24:31 It’s basically one of the best indexes they have.
0:24:35 They look at where the homes are, how many people are there, and then what the temperature is going to be.
0:24:38 That often is sort of the largest factor that goes into this modeling.
0:24:42 But if we have all these sort of connected resources, we have solar, we have EV chargers,
0:24:45 all of this stuff is spitting off data, telemetry, and things like that,
0:24:50 we’re going to get a much better look of how like load is actually being forecasted real-time,
0:24:54 which is going to help a lot of understanding like where do we actually need to build?
0:24:55 Like what is the actual price of power?
0:24:57 And then you can start making these markets, I think, a lot more efficient.
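The temperature-driven forecasting described above amounts to something like this toy model (every coefficient here is invented for illustration; real day-ahead forecasting is far richer):

```python
# Toy weather-based load forecast: a base load per home plus a cooling
# load that kicks in above a comfort threshold. All coefficients are
# illustrative assumptions, not real utility numbers.

def forecast_load_mw(homes: int, temp_f: float,
                     base_kw_per_home: float = 1.2,
                     ac_kw_per_degree: float = 0.15,
                     comfort_f: float = 72.0) -> float:
    """Forecast aggregate load (MW) from home count and temperature."""
    cooling_kw = max(temp_f - comfort_f, 0.0) * ac_kw_per_degree
    return homes * (base_kw_per_home + cooling_kw) / 1000.0  # kW -> MW

mild = forecast_load_mw(homes=1_000_000, temp_f=70)   # no AC load
heat = forecast_load_mw(homes=1_000_000, temp_f=100)  # heat-wave load
print(f"{mild:.0f} MW vs {heat:.0f} MW")
```

Note how steep the temperature sensitivity is: a forecast that is off by just a couple of degrees moves the load by hundreds of megawatts, which is exactly the failure mode described for Texas-style heat-wave crises. Telemetry from solar, EV chargers, and batteries would let this kind of model be corrected in real time instead of relying on weather alone.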
0:25:02 Well, when you look at energy desks for the big hedge funds or energy trading companies,
0:25:06 their weather guy is usually the highest paid person on the desk outside the portfolio manager.
0:25:10 Those climate and weather PhDs that are working on a trading desk,
0:25:16 they are just absolutely raking it in because they’re like God right now because there’s very little other data.
0:25:19 That’s why you see when the stuff that goes on in Texas, like heat waves and things like that,
0:25:23 if they even get it wrong by a couple of degrees where it’s like they think it’s going to be hot,
0:25:26 but it gets actually really hot, that’s when you get these crises,
0:25:28 like crises that end up causing a lot of strain on the grid
0:25:32 and then you have to turn off all these very expensive plants and then you get the headlines.
0:25:34 That are usually also the ones that are the worst for the environment as well.
0:25:35 Yep.
0:25:39 Let us know if you have any reactions to this or otherwise give us the state of nuclear.
0:25:40 Where are we right now?
0:25:41 What are the major bottlenecks?
0:25:42 What are we excited about?
0:25:42 I think that the biggest thing that’s shifted in the last three or four years in nuclear
0:25:51 is that everybody now acknowledges that nuclear energy is clean energy.
0:25:55 I think that’s been one major shift in public sentiment and perception.
0:26:01 Nonetheless, there’s still major headwinds politically with nuclear that need to be overcome.
0:26:04 Taiwan, for instance, turned off their last nuclear reactor.
0:26:05 Insane.
0:26:06 It’s unbelievable.
0:26:11 This is an island country that is seven days away from a total energy blackout
0:26:13 if they get an oil and gas blockade from China
0:26:17 so that they can’t bring in ships to deliver oil and fuel.
0:26:20 So at any given time, they’re like seven days away from a total blackout
0:26:22 and they turned off their last nuclear reactor anyway.
0:26:22 Why’d they do it?
0:26:27 Because they caved to political, like very loud vocal minority groups.
0:26:28 Like environmentalist activist reason?
0:26:29 Yeah.
0:26:33 This party ran on a commitment to turn off the reactor before they realized how stupid it was.
0:26:34 It’s just like colossally stupid.
0:26:39 By the way, turning off a reactor, like a real full-scale reactor, it’s not like an SMR where
0:26:42 you can just flip it on like a few days later or a month later.
0:26:45 With these large reactors, it could take years to turn them back on.
0:26:46 That was just terrible.
0:26:51 But broadly, I think the tailwinds for nuclear are just getting stronger, where people recognize
0:26:52 that it is clean energy.
0:26:55 I think there’s still messaging work to be done.
0:26:59 We should stop calling the spent fuel nuclear waste because it’s really not waste.
0:27:01 Almost all of it can be recycled and reused.
0:27:04 People do need to recognize that those tailwinds are shifting.
0:27:05 So that’s happening.
0:27:08 I think that people understand that it’s baseload power, right?
0:27:10 So it’s not dependent on it only working during daytime.
0:27:15 It’s not like hydro where you have to be around an appropriately configured water source.
0:27:21 And then I think that one of the largest inhibitors to creating new power plants in this country,
0:27:23 it’s not that we can’t do it.
0:27:24 We can.
0:27:29 There’s a huge regulatory and permitting, I would say, morass that has to be swum through
0:27:36 that is incredibly expensive, requires an army of consultants, many tens of millions of dollars,
0:27:41 many, many thousands of pages of applications and documentation and process review.
0:27:45 And again, this has to do with building the power plant, getting the fuel, transporting the
0:27:46 fuel, storing the fuel.
0:27:52 Each step along the way is extremely laden with regulation and policy.
0:27:54 And some of that’s for good reason.
0:27:59 But finding ways to better navigate that to make it more efficient is really a step in areas
0:28:02 that a lot of companies and people are focused on now.
0:28:06 And actually, I think the government now is also focused on how do we streamline the approval
0:28:08 process for a new reactor?
0:28:10 How do we start approving new reactor designs?
0:28:16 And then I think the last thing I guess I would say is that right now, if you’re going to put
0:28:19 a lot of energy and work into building a nuclear power plant, you want to build a really big one.
0:28:23 We largely only see really big power plants in this country, like the AP1000s that we turned
0:28:24 on in Georgia.
0:28:28 And those again came in, I think they were 10 years late and multiple billions over budget.
0:28:31 We only do that because if you’re going to put in the effort and time, you want to get the
0:28:33 most bang for your buck and generate the most power.
0:28:38 We are now starting to see movement from the government in the DOD, in the Department of
0:28:43 Energy, and from the national labs to really try to create a more fast track process for
0:28:50 these small modular reactors or even micro reactors that use a much safer form of fuel, use much
0:28:57 less nuclear fuel, use a different kind of nuclear fuel that’s not nearly as risk-prone as the kind
0:29:01 of nuclear material you’d use in a weapon, but it’s not enriched to nearly the same degree.
0:29:03 It’s not even the same material.
0:29:06 And so that process is now picking up a lot of steam.
0:29:08 We have an investment in a company called Radiant Nuclear.
0:29:12 They are building a factory that creates what effectively is an SMR.
0:29:14 They would probably call it a micro reactor.
0:29:16 It’s a one megawatt reactor.
0:29:19 It can be put on the back of an 18-wheeler and shipped around.
0:29:23 You can move it to where you need power if there’s been a natural disaster, like a hurricane,
0:29:25 and you need to bring in power overnight.
0:29:30 You could bring in a few trucks with four or eight of these reactors and power up a whole
0:29:31 city after a disaster.
0:29:34 And so that kind of flexibility and power is really compelling.
0:29:36 I think there’s a lot of tailwinds, a lot of good things happening.
0:29:42 One thing to understand is that the DOD spends an incredible amount of money dealing with the
0:29:46 cost and, frankly, the risk to human lives, not just the cost, but the real risk to human lives,
0:29:50 transporting fuel around the world to forward operating bases.
0:29:55 Anytime we do a military exercise, anytime we’re engaged in a conflict, the movement of
0:30:00 fuel factors in as a primary concern and consideration of what we deal with.
0:30:04 And so we’ve read reports that they spend well over $200 a gallon at times, sometimes up to
0:30:09 $400 a gallon for diesel, effectively, to get diesel into the right place at the right time.
0:30:14 And so you can just imagine that having a nuclear reactor you can put on the back of a C-130
0:30:19 and fly around the world to wherever you need power, drop it in the middle of the desert,
0:30:21 turn it on, you have power for five years.
0:30:22 It’s just an incredibly compelling value prop.
0:30:25 There is no question that nuclear needs to be part of the equation.
0:30:30 Not only is that baseload power, but on the SMR micro reactor side, it gives us this incredible
0:30:32 flexibility and grid resilience.
0:30:36 There should not be a single military base in this country that’s not nuclear backed from
0:30:42 a power standpoint, because if the grid goes down, whether it’s from a cyber attack or just
0:30:48 instability or demand issues or cascading failures, you want to be able to fail over to nuclear power
0:30:50 and not worry about the runway lights turning off.
0:30:50 Yeah.
0:30:55 And especially as we start to look at the kind of electrification of our weapon systems,
0:31:02 our military vehicles, our drones, et cetera, like those all need to get charged up somewhere.
0:31:05 And how better to charge them than a nuclear reactor?
0:31:09 Another thing I’ll add to your nuclear comment is I think the advantage of nuclear, and I think
0:31:13 that Radiant has done very well of really leaning in on, is the power density factor.
0:31:18 If you want a reactor that is reliable and power dense, you want it to operate at very high
0:31:19 temperatures.
0:31:22 You want as highly enriched fuel as you possibly can, where it makes sense commercially.
0:31:26 So you want HALEU fuel and you want to serve customers that will pay the premium for that.
0:31:29 That’ll be able to buy this reactor that they know is going to work.
0:31:32 And if you’re doing that, you want to have these economies of scale on the manufacturing
0:31:33 side.
0:31:36 You want it done when it goes out the door, so you don’t need to, like, assemble it on site.
0:31:38 You don’t want to have to like have constant maintenance.
0:31:42 And I think the other sort of reactors that we see, maybe on the civilian side, if you build
0:31:46 a reactor in a factory or you build modular components in a factory, but you still need
0:31:51 to do construction work on site, you’re still a construction company, even if the technology
0:31:51 is there.
0:31:54 And I would argue a lot of the existing AP1000 technology is quite good.
0:31:56 And other countries can do it quite cheaply.
0:31:58 China is using a very similar design.
0:32:00 The UAE just built one for incredibly cheap.
0:32:04 And they have very similar nuclear regulation, like in terms of frameworks.
0:32:07 And obviously their regulatory bodies might move faster and things like that, but they’re
0:32:09 not like completely ignorant of some of the concerns.
0:32:12 Well, and maybe this is a much more broad question, but the United States needs to get better
0:32:13 at mega projects.
0:32:16 Things that are a billion dollars, things that are at scale.
0:32:20 And I would argue the NRC is a big component of why it’s expensive, but I think it’s also
0:32:24 the same reason that it takes a billion dollars to build a bike lane in San Francisco is why
0:32:25 we are not able to build new power.
0:32:28 Or why we don’t have a high-speed rail in California.
0:32:28 Yep.
0:32:31 We might not have a high-speed rail in California because nobody wants it.
0:32:33 And nobody wants it where they’re building it.
0:32:33 I want it.
0:32:34 I fly to LA all the time.
0:32:35 Sorry.
0:32:36 Nobody wants it where they’re building it.
0:32:37 Sure.
0:32:37 Yeah.
0:32:39 Bakersfield is not a prime destination.
0:32:42 I want to train from San Francisco to LA.
0:32:44 It takes an hour and a half.
0:32:49 We debate a lot internally, like, where does it make sense for VCs and VC capital to plug
0:32:49 in?
0:32:55 And arguably, like, we’re not going to move the needle on, you know, these multi-billion dollar
0:32:56 mega projects in the U.S.
0:33:01 Like, we’re probably not the best people to figure out how to capitalize and build a multi-billion
0:33:04 dollar project in California to generate the power for the grid.
0:33:10 But I do think that there is a role for technology at kind of, like, every single layer and every
0:33:12 single phase of how mega projects get built.
0:33:17 It’s like, how do you use AI to navigate kind of site selection?
0:33:22 How do you use tools to, like, move through the various permitting processes faster?
0:33:27 Like, how do you use AI to help you do extremely complex and interdependent project management
0:33:28 better and more effectively?
0:33:33 So that’s something that you have a project with 4,000 people working on it and everyone
0:33:36 engaging with different suppliers and timelines that are dependent on each other.
0:33:39 Like, how do you get all those things to align better so that you don’t get these 10-year
0:33:44 delays so that projects actually happen on time and on budget and, as a result, attract
0:33:46 private capital backers?
0:33:48 Like, I think that there’s a role of technology here.
0:33:49 You know what that looks like.
0:33:50 TBD.
0:33:53 We’ve seen a lot of companies that maybe five years ago were primarily trying to sell
0:33:57 to utilities and grid operators, which is incredibly painful, incredibly difficult.
0:33:58 Perhaps rightfully so.
0:34:00 Like, they have poles in the ground that are 50 years old.
0:34:03 Why would they trust a two-year-old company to sell them software?
0:34:04 Are they going to be around in 20 years?
0:34:07 And this is a fair question to ask, especially for something as critical as the grid.
0:34:11 But now they’re developing this software and there’s such demand for understanding how grid
0:34:15 operators might think and potentially get there faster or, you know, have different conclusions.
0:34:20 And so now you can go to data centers or people who want to build solar farms or people who
0:34:21 want to build massive, like, battery farms.
0:34:23 And you can sell a very similar software.
0:34:26 Individual people who want to make sure that their power isn’t going to go out and they’re
0:34:29 going to be caught without energy during an important moment in their lives.
0:34:29 Yeah.
0:34:30 And so everyone cares now.
0:34:35 There’s a lot more money that cares about what the grid is actually going to think and where
0:34:35 can I build?
0:34:37 Where is there excess capacity?
0:34:41 Maybe I’m connected to the grid, but I also need some battery and solar backup or like a
0:34:45 Radiant microreactor or something like that to be used in certain situations.
0:34:49 It’s a lot more complex, this sort of microgrid setup, but it’s the way we’re headed.
0:34:50 And software is going to be a big piece of that.
0:34:55 I want to hear more about our requests for startups or things that we want to exist that
0:34:55 we haven’t yet discussed.
0:34:59 I mean, put differently, I’m curious where we think there’s most bang for the buck in terms
0:35:03 of the issues that we’ve been talking about in terms of if there was like a regulatory intervention
0:35:05 or some sort of technological unlock.
0:35:06 What comes to mind?
0:35:11 One area where there’s probably a venture scale software company to be built is really around
0:35:13 grid management monitoring.
0:35:19 I think we see this in the IT landscape, we see it in the OT landscape, but we don’t really
0:35:22 see it in the grid, where the opportunity is very, very large.
0:35:24 There is no Splunk for the electrical grid.
0:35:27 There’s no Palo Alto Networks for the electrical grid yet.
0:35:31 There’s a whole bunch of things that mirror the IT and OT landscape, whether it’s around cyber
0:35:34 and monitoring and logging and analytics.
0:35:37 There’s no Looker for the electrical grid yet.
0:35:38 There’s just none of these companies exist.
0:35:42 I’m not sure if it’s three separate companies, I’m not sure if it’s one company, but there
0:35:48 is a big company to be built and really managing and monitoring the grid and helping to orchestrate
0:35:53 and even deal with some of the things Ryan spoke around, around demand response, coordinating
0:35:56 that, creating those marketplaces, tracking all those incentives.
0:36:00 So I think when we see a company that we think can really be the breakout company there, we
0:36:00 would lean into it.
0:36:06 I also think around sort of project planning and development, how do you make it faster
0:36:09 and easier to build projects within the current regulatory framework?
0:36:10 How do you do site selection?
0:36:12 How do you navigate permitting?
0:36:14 How do you navigate project management?
0:36:16 How do you navigate your sort of construction supply chain?
0:36:20 We’re starting to see companies pick off pieces of that, but I think broadly speaking, there’s
0:36:24 room for tech and software in that kind of project development space as well.
0:36:29 I think, you know, in a more general sense, like anything that can bring generation capacity
0:36:32 or storage capacity closer to load, I think is going to be very compelling.
0:36:37 And a lot of the times it’s less maybe the technology, novel technology, but it’s a system integration
0:36:38 or it’s an innovative business model.
0:36:42 I think something radicalizing that I experienced is, and I implore everybody to go home and check
0:36:46 their power bill, they’ll now like often separate like the delivery costs from the actual
0:36:47 generation costs.
0:36:51 So what we’ve seen and we’ve mentioned it, but like the cost to generate electricity, the cost
0:36:55 of like power has dropped immensely, gas, solar, things like that.
0:36:58 But the cost to actually deliver that electricity has increased a ton.
0:37:00 And so in net, it’s sort of not changed.
0:37:01 And I think that’s terrible.
0:37:03 And I think we all agree that’s bad.
0:37:06 And so I think there’s a lot of opportunity of bringing sort of that generation capacity.
0:37:09 In some ways, this is sort of like this more liberalizing force.
0:37:10 It’s like we all should have our own backup.
0:37:11 We all should have our own technology.
0:37:14 I think there’s a lot of really interesting ways to do that and scale it.
0:37:19 And overall, as a grid gets more heterogeneous, all of the seams and intersections between
0:37:24 things, like there’s just so much more opportunity for technology than when you had a single utility
0:37:27 managing a single source of power centrally distributed out broadly.
0:37:30 I’ll throw another one out there as I’m just thinking about this.
0:37:36 One thing that I’ve been noodling around is this idea that all the regulation and permitting
0:37:40 and policy frameworks that we have in this country, you could think of those as part of
0:37:44 the like infrastructure that we all have to like live and work with and interact with.
0:37:49 So companies that really facilitate, and I would say applying AI to navigating the permitting
0:37:50 process.
0:37:51 So nuclear is a good example.
0:37:58 Again, a nuclear reactor application or a fuel transport license or a fuel manufacturing license.
0:38:03 These things have thousands and thousands of pages of regulation and documentation that go
0:38:04 with them.
0:38:08 You make one small change in your application, it has these reverberation effects.
0:38:09 You have to update all your documents elsewhere.
0:38:13 If you’re the regulator trying to go through all these applications, it’s just incredibly
0:38:18 onerous, borderline impossible to imagine that a regulator can even possibly get it right.
0:38:20 You could argue that it’s actually not possible.
0:38:21 They just do a best effort.
0:38:27 But AI could actually help these things, could help the applicants go through the process of
0:38:30 filling out their applications and saying, hey, this is where you should drill down.
0:38:31 This is where you should clarify.
0:38:36 They can look at all previous published applications and say, this is how you need to tailor it.
0:38:41 You can probably make it 85% the same, and then based on your design or your location or
0:38:42 whatever, make some modifications.
0:38:45 And then the regulator can do the same and say, look, here’s an application that came in,
0:38:49 highlight all the areas I need to drill down, or show me the things that are different from
0:38:52 every other nuclear fuel transport application we’ve ever seen.
0:38:54 Are they using the rail infrastructure?
0:38:57 Are they using the national highway infrastructure to move the fuel?
0:39:02 AI can just automate all these things that take armies of consultants months or years to do,
0:39:05 can be brought down into being minutes or hours.
0:39:08 I’m not sure how big of a company, but I think potentially there’s a very large company
0:39:09 to be built there.
0:39:13 If our check sizes were in the billions, not just the millions, which, we’re at Andreessen Horowitz,
0:39:15 you never know, how would our strategy change?
0:39:17 I think you need even more than just billions.
0:39:18 It’s tens of billions, hundreds of billions.
0:39:20 It’s such a tough question.
0:39:23 I think there’s tons of policy around this as well.
0:39:27 I hesitate to say we should look to how China has built up their grid,
0:39:30 but I think the elephant in the room is in the early 2000s, they were experiencing blackouts.
0:39:32 This was a very common thing.
0:39:33 This was horrific.
0:39:37 But now, they’ve 4X’d their grid in the last couple of decades.
0:39:41 And so the way they’ve done this is by basically deploying generation capacity,
0:39:46 building hydro, building massive storage facilities, of course, BYD, CATL, tons of battery production.
0:39:51 They built HVDCs, these large high-voltage transmission lines.
0:39:52 I would do all of that.
0:39:53 I mean, I would look to all of it.
0:39:56 And whether or not it’s a good investment or not, I’d look at a number of factors,
0:40:00 but it is much more of the infrastructure projects, the glue that connects this stuff together.
0:40:05 I think our lens today is looking at these technologies that enable a lot of this more flexible grid.
0:40:08 But I think there’s also going to be these large infrastructure, the webbing in between it.
0:40:12 I think software is a big piece of it that we’re spending a lot of time on looking at.
0:40:14 But I think, how is ERCOT going to be connected to the rest of the grid?
0:40:16 Or how are we going to move this power around?
0:40:19 If it’s really sunny in the Southwest, solar is going to be really cheap.
0:40:22 Is there an efficient way to move that to New York or something like that?
0:40:23 China’s done this effectively.
0:40:26 And I think if you had hundreds of billions of dollars spent or trillions of dollars,
0:40:28 what does the grid look like?
0:40:29 Like, it’s going to be a lot more interconnected.
0:40:33 Maybe another answer to your question or a different answer to your question is,
0:40:37 I think the energy industry is probably medium to long-term,
0:40:42 like one of the most prime spots to deploy physical autonomy.
0:40:50 So when you think about applications of robotics, whether it’s humanoids or more kind of task-specific
0:40:53 robotics, we’re talking about dangerous jobs often.
0:40:57 We’re talking about manufacturing jobs to build up, whether it’s small-scale reactors or batteries
0:40:58 or whatever.
0:41:04 So I don’t know what the shape of the company is here and how reliant it would be on some of
0:41:06 the robot learning work that’s happening.
0:41:10 But I do think that as we scale our energy capacity, there’s going to be a pretty massive
0:41:13 application of industrial robotics to the energy sector.
0:41:19 I made this joke, I think, once to Ryan, or maybe I made it at the American Dynamism Summit,
0:41:24 that we survived the greatest nuclear disaster in U.S. history just recently when we finished
0:41:28 the Vogtle 3 and 4 reactors and let all those employees go back to other jobs.
0:41:32 So I think if we were writing a billion-dollar check into power, what we would do is we would
0:41:36 just give jobs to those people and not let them go back to whatever it was they were doing
0:41:38 before they were building nuclear reactors.
0:41:43 And we would just really work to streamline the process to make sure that we go build Vogtle
0:41:48 5, 6, 7, 8, 9, 10 in all these different states around the country and just put these
0:41:52 people to work for the next decade-plus building reactors.
0:41:56 And that, to me, was the greatest miss and probably the greatest opportunity.
0:42:01 I don’t think it’s particularly our opportunity, but I do think it’s an opportunity for somebody
0:42:02 to do.
0:42:07 Labor broadly, like, this is a little tangential, but when Microsoft was building their new data
0:42:12 center in Georgia last year, at one point, they had on staff at Microsoft or on contract
0:42:15 more than a third of the electricians in the state of Georgia.
0:42:17 And they basically maxed out.
0:42:20 They hired every single electrician that they possibly could.
0:42:23 So I don’t think it applies to just electricians.
0:42:29 It’s, do you, to your point, like the cement mixers, it’s the mechanical engineers, it’s the nuclear
0:42:30 engineers.
0:42:36 Like, how do we actually train the next generation energy workforce that we’re going to need to
0:42:39 modernize the grid is a big, big challenge.
0:42:43 These are very high-paying jobs where you don’t have to check your email on your phone at 9 p.m.
0:42:45 at night after you go home from work.
0:42:46 They’re high-paying.
0:42:48 It’s good exercise jobs.
0:42:50 And they’re relatively low stress.
0:42:51 These are good jobs for people.
0:42:55 I think one more comment on this, and it’s more of an industrial policy question, is we’re
0:42:59 talking about specific things, but like oftentimes that just moves the bottleneck.
0:43:01 Like we could solve a lot of sort of the grid connection hookup.
0:43:05 We could build a lot of like transmission lines, but then we need more transformers.
0:43:07 And to build more transformers, you need more electric steel.
0:43:10 You can do the same sort of equation for much of the supply chain.
0:43:14 Battery is another good example is then, okay, cool, we’re building cells, but then we also
0:43:17 need active materials and we need to mine and things like that.
0:43:21 And so, you know, it’s sort of a whole effort of examining sort of our infrastructure and
0:43:23 our supply chains, and you need to do all of it.
0:43:25 And I think that’s a complicated question.
0:43:26 That’s an expensive question.
0:43:27 Any last reflections?
0:43:32 I think the last thing I would say is that people underestimate how critical and important
0:43:38 a resilient, reliable, dispatchable electrical grid is to our national security.
0:43:42 You cannot have national defense and national security without reliable electricity.
0:43:44 It’s just not possible.
0:43:48 So all of these things we’re talking about are about the upside, about capitalizing on
0:43:53 AI compute, the switch to electric vehicles and our insatiable thirst for electricity.
0:43:56 But at a fundamental level, there is no safety.
0:43:57 There is no national defense.
0:44:00 There is no national security without a reliable electrical grid.
0:44:06 To reiterate on that, people want reliable, cheap, and clean power in that order.
0:44:09 And I think that’s, I think largely how we should think about our energy policy.
0:44:12 And I think that’s sort of the direction we’re going.
0:44:14 And I think we need to make sure we stay aligned with that.
0:44:16 That’s an exciting note to wrap on.
0:44:17 David, Ryan, Aaron, thanks so much for coming to the podcast.
0:44:18 Thank you.
0:44:19 Thank you.
0:44:23 Thanks for listening to the A16Z podcast.
0:44:28 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash
0:44:29 A16Z.
0:44:32 We’ve got more great conversations coming your way.
0:44:33 See you next time.
U.S. per capita energy usage peaked in 1973. Since then? Flat. Meanwhile, China’s per capita energy use has grown 9x.
Today, AI, EVs, manufacturing, and data centers are driving demand for more electricity than ever—and our grid can’t keep up.
In this episode, a16z general partners David Ulevitch and Erin Price-Wright, along with investing partner Ryan McEntush from the American Dynamism team, join us to unpack:
– How America’s grid fell behind
– Why we “forgot how to build” power infrastructure
– The role of batteries, solar, nuclear, and software in reshaping the grid
– How AI is both stressing and helping the system
– What it’ll take to build a more resilient, decentralized, and dynamic energy future
Whether you’re a founder, policymaker, or just someone who wants their lights to stay on, this conversation covers what’s broken—and how to fix it.
Resources:
Find David on X: https://x.com/davidu
Find Erin on X: https://x.com/espricewright
Find Ryan on X: https://x.com/rmcentush
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
-
Raging Moderates: Trump’s Epstein Problem
AI transcript
0:00:02 Support for Prop G comes from Vuori.
0:00:04 Oh my God, true story.
0:00:08 I am wearing, totally coincidentally, guess what?
0:00:09 Vuori shorts.
0:00:12 Vuori’s high quality gym clothes are made to be versatile
0:00:14 and stand the test of time.
0:00:17 They sent me some to try out and here I am.
0:00:20 For our listeners, Vuori is offering 20% off
0:00:21 your first purchase.
0:00:24 Plus, you have free shipping on any US orders
0:00:26 over $75 and free returns.
0:00:27 Get yourself some of the most comfortable
0:00:30 and versatile clothing on the planet.
0:00:31 Vuori.com slash Prop G.
0:00:35 That’s V-U-O-R-I dot com slash Prop G.
0:00:37 Exclusions apply.
0:00:40 Visit the website for full terms and conditions.
0:00:46 Avoiding your unfinished home projects
0:00:48 because you’re not sure where to start?
0:00:51 Thumbtack knows homes, so you don’t have to.
0:00:54 Don’t know the difference between matte paint finish and satin
0:00:57 or what that clunking sound from your dryer is?
0:01:00 With Thumbtack, you don’t have to be a home pro.
0:01:02 You just have to hire one.
0:01:05 You can hire top-rated pros, see price estimates,
0:01:07 and read reviews all on the app.
0:01:08 Download today.
0:01:11 Megan Rapinoe here.
0:01:12 This week on A Touch More,
0:01:16 we’re talking all about the WNBA All-Star roster
0:01:20 with ESPN analyst and former All-Star herself,
0:01:20 Chiney Ogwumike.
0:01:23 She also tells us what she wants to see
0:01:24 from the CBA negotiations.
0:01:28 Plus, I’m sharing some of the record-breaking updates
0:01:30 from the Euros in Switzerland.
0:01:33 Check out the latest episode of A Touch More
0:01:35 wherever you get your podcasts and on YouTube.
0:01:41 Welcome to Raging Moderates.
0:01:42 I’m Scott Galloway,
0:01:44 and President Trump is clearly in the Epstein files.
0:01:47 Oh, good morning to you too, Scott.
0:01:48 Come on.
0:01:49 Come on.
0:01:50 That’s why we’re here, Jessica.
0:01:51 In today’s episode,
0:01:54 I was in Ibiza last week.
0:01:57 Word has it, I did Molly and had an amazing time,
0:01:59 and I think I’m enjoying this more.
0:02:00 Yeah.
0:02:02 It’s just taking you to another level.
0:02:03 Oh, my God.
0:02:04 This is my hot girl summer.
0:02:07 Today, we’re going to talk about,
0:02:08 today, we’re going to talk about MAGA revolting
0:02:10 over the Epstein files,
0:02:12 an all-time high of Americans now seeing immigration
0:02:14 as a positive for the country.
0:02:17 Well, welcome to pulling your head out of your ass
0:02:20 and a proposal to evaluate lawmakers’
0:02:22 cognitive fitness for office.
0:02:24 That’s ageist, Jess.
0:02:26 That’s ageist.
0:02:28 Anyways, let’s get right into it.
0:02:31 The president is once again at odds with his own base.
0:02:33 This time, it’s over Jeffrey Epstein.
0:02:37 But we should really be focusing on Rosie O’Donnell.
0:02:39 I think that’s not a distraction.
0:02:41 Our favorite Irish lass, Rosie.
0:02:42 Yeah.
0:02:44 No, that is not an attempt to distract and say,
0:02:46 look over here from the fact that he is clearly
0:02:47 in the Epstein files.
0:02:50 After the DOJ released a memo concluding Epstein died
0:02:52 by suicide and had no secret client list,
0:02:55 Trump urged supporters to move on
0:02:58 and defended Attorney General Pam Bondi,
0:02:59 calling her fantastic.
0:03:02 He also claimed critics were just selfish people
0:03:03 trying to hurt him.
0:03:05 But MAGA world isn’t buying it, Jess.
0:03:07 At Turning Point USA Student Summit,
0:03:09 the crowd booed Bondi’s memo,
0:03:11 accusing Trump of breaking his own promise
0:03:13 to release the full files.
0:03:15 And influencers, including Steve Bannon and Laura Loomer,
0:03:17 are openly criticizing the administration.
0:03:19 Loomer is now calling for a special counsel.
0:03:21 Who the fuck cares what Loomer wants?
0:03:21 But anyways.
0:03:22 Trump cares.
0:03:24 There you go.
0:03:27 The leader of the free world happens to care.
0:03:27 He cares.
0:03:29 Inside the administration,
0:03:30 the fallout is even worse.
0:03:34 FBI deputy director and podcaster Dan Bongino
0:03:35 almost resigned.
0:03:35 Almost?
0:03:36 Oh, no, Dan.
0:03:40 We hate to lose someone as competent as you
0:03:41 from our public ranks.
0:03:44 He almost resigned after a tense blow up with Bondi.
0:03:45 He skipped work Friday.
0:03:48 It’s like one of these little tech Google bitches
0:03:50 doing a lunch walkout,
0:03:51 pretending that anyone cares.
0:03:54 Oh, he took the day off.
0:03:55 That’ll show him, Dan.
0:03:58 And he hasn’t spoken to DOJ leadership since.
0:04:00 He hadn’t spoken to them in a whole four days.
0:04:02 Bongino’s future is uncertain.
0:04:04 But for now, he’s still on the job.
0:04:06 Bondi’s still in Trump’s good graces
0:04:10 and even took her to the football game on Sunday,
0:04:11 which, by the way, was outstanding.
0:04:12 Yeah, it was kind of funny
0:04:14 how he wouldn’t get out of the picture, though.
0:04:15 Yeah, and he was booed.
0:04:16 That was my favorite part.
0:04:18 But this episode exposed serious fractures
0:04:21 within the MAGA world and Trump’s grip on it.
0:04:24 Jess, give us your thoughts on this.
0:04:26 I got this wrong.
0:04:27 I didn’t think it was going to be that big a deal.
0:04:28 And it’s just blown up.
0:04:29 Give us your take.
0:04:30 Yeah.
0:04:33 We are conditioned to think nothing matters
0:04:37 because nothing has mattered up until this point, right?
0:04:39 So you think, oh, this is a news story
0:04:41 for maybe a couple of days.
0:04:43 And you worry if you’re recording something
0:04:46 that it’s going to be old by the next day, right?
0:04:47 But this one has had legs.
0:04:52 And a lot of that is because you have the luminaries,
0:04:54 and I’m being generous with that term,
0:04:58 but, you know, the faces of the party upset
0:05:00 and the rank and file upset.
0:05:03 And usually those things don’t coincide, right?
0:05:05 You get a few pissed off people here,
0:05:06 but down here, everyone’s fine.
0:05:08 Or down here, people are upset,
0:05:10 but up here, folks aren’t.
0:05:13 And we have to be real about this.
0:05:15 There are folks who were very outspoken
0:05:17 about the Epstein files and being upset,
0:05:19 like Charlie Kirk, Benny Johnson.
0:05:21 They’ve already fallen back in line.
0:05:23 Trump called Charlie Kirk, apparently.
0:05:26 Then he starts posting, you know,
0:05:27 this isn’t really worth our time.
0:05:30 And Benny Johnson, who had a four-point plan
0:05:33 to how the administration could make things better,
0:05:35 which also included, you know,
0:05:37 something having to do with Bill Clinton,
0:05:41 now is saying that MAGA is taking this seriously,
0:05:44 and apparently Bondi is going to start
0:05:46 dribbling out more information,
0:05:48 or that’s what we heard from Lara Trump.
0:05:51 But I’m kind of in two minds about it.
0:05:52 So the one mind,
0:05:54 which I think is more like your Ibiza-Molly mind,
0:05:57 is I’m excited by this,
0:06:00 because it also shows that for some in this movement,
0:06:02 there is a bridge too far,
0:06:04 or a line that you can’t cross.
0:06:06 And there are a lot of people,
0:06:08 especially the rank and file,
0:06:12 who are upset because this guy was running a pedophile ring,
0:06:13 right?
0:06:15 There were kids that were being abused
0:06:17 who deserved justice,
0:06:20 and that that’s something that should matter,
0:06:22 no matter what.
0:06:25 The other side of that is that it obviously exposes
0:06:29 that Trump, someone who has managed miraculously
0:06:30 to market himself as someone
0:06:33 who isn’t just in it for himself
0:06:35 and isn’t part of the swamp
0:06:37 and the cabal of powerful people
0:06:40 that will protect themselves no matter what,
0:06:42 is exposed now as someone
0:06:44 who’s just your run-of-the-mill,
0:06:47 you know, sharky salesman, right?
0:06:48 That’s how it always was.
0:06:50 And for those of us who aren’t particular fans of his,
0:06:52 that’s how we saw him from the jump,
0:06:54 not even just with the Epstein stuff.
0:06:55 In general,
0:06:57 especially if you’d been around New York,
0:06:59 you knew exactly who Donald Trump was.
0:06:59 You know,
0:07:01 the guy who’s stiffing his contractors,
0:07:05 who’s saying disgusting things about women,
0:07:06 who’s, you know,
0:07:08 cheating on everyone he ever married.
0:07:11 And that never penetrated.
0:07:12 So I kind of just gave up hope,
0:07:13 I guess,
0:07:15 that people would see
0:07:16 that aspect of his character.
0:07:18 And that has been affirmed now
0:07:20 that people are seeing this.
0:07:21 The other side of me,
0:07:23 which is a bit of a Debbie Downer,
0:07:25 and I hate to bring the mood of the pod down
0:07:27 because it has been snappy to begin,
0:07:30 is that I’m also concerned
0:07:32 that we’re going to spend all this time
0:07:33 on the Epstein files,
0:07:34 and it won’t matter,
0:07:37 and we’ll forget to talk about real stuff
0:07:38 that actually affect people’s pocketbooks
0:07:39 and how they’re going to vote.
0:07:41 And, you know,
0:07:43 we’ll show up with campaign posters for 2026
0:07:44 that don’t say,
0:07:45 he took away your Medicaid,
0:07:46 and it’ll say, like,
0:07:48 Pedo Island or something.
0:07:50 It won’t be that extreme,
0:07:51 but, you know,
0:07:53 being relentlessly focused on the things
0:07:54 that win elections
0:07:58 are how you capture people’s attention,
0:07:59 and this is,
0:08:01 it’s an enjoyable sideshow,
0:08:03 I guess is how I would describe it.
0:08:06 So what are your feelings besides elation?
0:08:10 Well, so I teach a session on crisis management,
0:08:13 and there’s sort of just three basics to remember,
0:08:14 and they’re easy to remember,
0:08:15 but they’re hard to do.
0:08:17 And the first is to acknowledge the issue.
0:08:20 The second is to take responsibility,
0:08:21 and the third is to overcorrect.
0:08:26 And I would imagine that the vast majority of people
0:08:27 who went to this island
0:08:31 or accepted private jet travel with Epstein
0:08:33 did not engage in pedophilia.
0:08:35 I mean, there’s a lot of strafe here.
0:08:37 A lot of people have been caught up in this who,
0:08:39 I mean, should wealthy, powerful people
0:08:41 do enough diligence on someone to recognize,
0:08:44 okay, if he’s been convicted of a sex crime
0:08:46 or has questionable activities,
0:08:48 I’m not going to, you know, fraternize with him.
0:08:51 But if the president, in my view,
0:08:52 it’s always the cover-up.
0:08:54 It’s not the scandal, it’s the cover-up.
0:08:56 And if the president had said,
0:08:58 you know, when I was younger, I liked to party.
0:08:59 I met this guy.
0:08:59 He was fun.
0:09:01 He was known for being, having a good time.
0:09:03 He was giving money away.
0:09:03 He seemed legitimate.
0:09:06 And I spent time with him, went to the island,
0:09:09 did not engage in any of these illegal activities.
0:09:12 It was a huge error in judgment, and I apologize.
0:09:14 I think this whole thing would blow over.
0:09:16 And if he said, even if he didn’t mean it,
0:09:20 and I’ve instructed my AG to release the files
0:09:22 or look into it, and then just like he said,
0:09:24 he was going to release his taxes and never did,
0:09:26 I think he’d be fine.
0:09:29 But he could not be acting more guilty.
0:09:31 Oh, wait, wait, wait.
0:09:34 Let’s cancel Rosie’s citizenship.
0:09:35 Look over here.
0:09:37 Nothing to see over here.
0:09:38 He could not.
0:09:41 It’s when my dog gets into the trash
0:09:42 and I walk into the kitchen,
0:09:45 I know my dog has gotten into the trash.
0:09:49 I mean, she could not be more transparent in her guilt.
0:09:52 And it’s as if a communications consultant has said,
0:09:56 okay, do you want to come across as guilty as possible?
0:09:58 Do you want it to seem like, in fact,
0:09:59 you weren’t just down there,
0:10:02 but there’s some really ugly information
0:10:03 about you in these files?
0:10:04 Okay, then act like this.
0:10:06 And this is exactly what he’s acting like.
0:10:10 I feel as if he has such strong political instincts
0:10:11 in terms of his base.
0:10:13 And here he’s just,
0:10:16 he literally looks like guilty.
0:10:19 And now I believe there’s something more here.
0:10:22 I used to think, okay, you know,
0:10:24 all of these guys accepted a party.
0:10:25 They like to have a good time.
0:10:27 They like to be around hot people.
0:10:29 And I would imagine just in terms of probability,
0:10:32 the majority of them did not engage in a crime.
0:10:34 I’m sure some did.
0:10:38 But this feels like this guy is scared to death
0:10:40 of this thing coming out.
0:10:43 And he comes across as really guilty.
0:10:44 Having said that,
0:10:45 and then this is my prediction around this,
0:10:46 and I’m curious to get yours.
0:10:50 Every time I’m hopeful that Senator Susan Collins
0:10:53 actually gives a goddamn about her constituents,
0:10:56 or Senator Murkowski is going to find a backbone
0:10:59 and realize that this is really counter
0:11:00 to the values she espouses, too.
0:11:02 And then they all fall in line.
0:11:06 And I think the same thing’s going to happen here.
0:11:06 And, you know, all of a sudden,
0:11:07 Charlie Kirk’s saying,
0:11:11 you know, after trying to stir this conspiracy outrage
0:11:14 and everything about, you know,
0:11:17 the election being stolen and a pedophile ring
0:11:20 in a basement that is non-existent of a pizza shop
0:11:23 where Secretary Clinton was drinking the blood
0:11:24 of sacrificed children,
0:11:27 it is very rewarding to see the snake
0:11:28 eating its own tail here.
0:11:31 But I think eventually he calls them
0:11:33 and they all snap and fuck in line.
0:11:36 And this just, we just move on.
0:11:37 Your thoughts?
0:11:38 Yeah.
0:11:40 I mean, I largely agree with you.
0:11:43 It doesn’t mean that you can’t enjoy something fleeting.
0:11:44 Revel in it.
0:11:46 Like we all love Chinese food, right?
0:11:48 And then we forget that we ate an hour later.
0:11:49 It’s Christmas for Jews.
0:11:50 Chinese food for everyone.
0:11:51 Yep.
0:11:53 So enjoy it.
0:11:57 Like the turning point clips will live forever of,
0:12:01 you know, Steve Bannon and Megyn Kelly and Tucker Carlson,
0:12:05 though I’m very confused about the Mossad connect,
0:12:08 you know, that he was a intelligence asset,
0:12:10 also worked for the Israeli government.
0:12:14 But, you know, I’m happy to indulge in some more conspiracy.
0:12:17 But yeah, I generally think that they’re going to fall in line.
0:12:20 And part of that is that there’s no alternative.
0:12:23 And I’m not even talking about, oh, you’re going to wake up and you’re going to vote for
0:12:24 Democrats.
0:12:33 Donald Trump has so completely owned the right wing of the country that you’ve got nowhere
0:12:33 to go.
0:06:40 Many a moderate Republican soldier has tried to stand up there and say,
0:12:41 hey, look over here.
0:12:45 There are other people who believe in, quote, conservative values.
0:12:46 And you could give it a shot.
0:12:50 Nikki Haley, Chris Christie in 2016, everybody and their mother tried.
0:12:51 Right.
0:12:52 Ted Cruz even won Iowa.
0:13:02 And he has captivated the attention and the loyalty of this group of Americans in a way that I
0:13:06 certainly haven’t seen politically, at least in my time.
0:13:09 And so they don’t really have an alternative of somewhere else to go.
0:13:15 And MAGA, more so than the Democrats, which we talk about this part all the time, they don’t
0:13:17 run purity tests the same way we do.
0:13:20 They allow for ideological inconsistency.
0:13:25 And yes, I think that there are going to be people that don’t move past this, but it
0:13:32 is going to be a drop in the bucket compared to the general MAGA movement because they’ve
0:13:34 got no other optionality.
0:13:41 And we’ll see what he essentially makes Pam Bondi do to try to paper over this because they’re
0:13:44 saying now that there’s going to be more releases coming.
0:13:49 And I, I’m not sure there has to be a sacrificial lamb.
0:13:51 If there will be, I think it’s Pam Bondi.
0:13:57 And if I were to put on my conspiracy theory hat, which is very chic, I would say that my
0:14:06 old colleague, Jeanine Pirro, would be a wonderful replacement for Pam Bondi if she ended up exiting
0:14:08 stage left in all of this.
0:14:10 I’m not sure if you’re being serious or not.
0:14:11 You’re being serious?
0:14:12 No, I am being serious.
0:14:15 You think that Jeanine Pirro would be a good attorney general?
0:14:23 I, I, she wouldn’t be our choice necessarily, but I think that Pam Bondi has seemed miraculously
0:14:29 unserious in this role for someone who had a very important job, obviously in Florida, was
0:14:31 instrumental to Trump in being able to win the state.
0:14:35 And she’s been a bit blah, right?
0:14:36 In this role.
0:14:38 She doesn’t even perform that well in the cabinet meetings.
0:14:43 And, you know, Jeanine Pirro is now the U.S. attorney for D.C.
0:14:45 So she is in the administration already.
0:14:49 And it, it wouldn’t surprise me is what I’m saying.
0:14:51 So yes, I am being serious about that.
0:14:56 You know, you, none of these people would be our picks, but the Wall Street Journal even
0:15:03 wrote a piece about how much the D.C. office is loving having Pirro there and how they have
0:15:05 been surprised by her seriousness.
0:15:08 And she’s reverted back to what she was like as a D.A. in Westchester.
0:15:11 So anyway, I’m just throwing that out there.
0:15:13 Well, at 74, she’d be one of the younger people in government.
0:15:14 There you go.
0:15:15 Yeah.
0:15:16 That’s exactly what people wanted.
0:15:18 Jeanine, you look fantastic.
0:15:22 But do you think someone is going to have to go, that there’ll be a Mike Waltz,
0:15:24 that there’ll be a blood offering?
0:15:26 It’s a really interesting one.
0:15:30 I just don’t, I have a striking inability to predict what’s going on here.
0:15:32 I don’t know.
0:15:33 I guess it depends how long it goes.
0:15:37 What do you think of this effort by a couple Democrats to force a vote?
0:15:38 The Ro Khanna vote?
0:15:39 Yeah.
0:15:40 What do you think of that?
0:15:41 Do you think that’s a good move politically?
0:15:41 Yeah.
0:15:42 I mean, it happened last night.
0:15:44 It got voted down.
0:15:45 That’s right.
0:15:45 Shocker.
0:15:47 Well, they’re in charge.
0:15:50 I mean, this is why you got to win fucking elections, right?
0:15:51 Because then you have the votes to do things like this.
0:15:53 Right on, sister.
0:15:53 Yeah.
0:15:55 It’s a well-timed F-bomb.
0:15:55 That’s right.
0:15:56 That’s what we’re supposed to do.
0:15:57 Like one per day, right?
0:15:58 I think it’s the rule for Democrats.
0:15:59 That was mine.
0:16:03 But yes, I think it’s right to be drawing attention to this.
0:16:08 And I also think the attitude the Democrats have had about this consistently, which is,
0:16:09 put it all out there.
0:16:15 There are going to be some folks on our side that are in these five—I don’t even know.
0:16:16 They’re playing semantics with it.
0:16:17 Is it a file?
0:16:17 Is it a list?
0:16:24 And Alan Dershowitz, who defended Epstein and also has been accused of being part of this
0:16:28 cabal, has said in interviews recently, there’s a list.
0:16:29 I know there’s a list.
0:16:33 Ghislaine Maxwell is trying to get her story out there.
0:16:37 You know, she was the front page of Drudge, which you know that things are going really
0:16:39 badly for the president when that’s happening.
0:16:45 And you do have a person who is alive and, for now, and hopefully stays alive, sitting in
0:16:48 a jail cell who probably knows a thing or two about this.
0:16:55 So I think Ro Khanna and folks should be out there really holding their feet to the fire,
0:16:56 not losing the plot.
0:17:01 You know, they’re cutting food assistance and Medicaid and giving tax breaks to the wealthy
0:17:03 and all these things that people actually vote on.
0:17:11 But this is a cultural moment, I guess, not only because the Epstein files and did Jeffrey
0:17:15 Epstein kill himself and all of that has taken on pop culture relevance, but it is saying
0:17:22 something generally about class warfare and it is saying something about where the president
0:17:27 fits in that conversation that we don’t usually have a chance to partake in.
0:17:32 It’s usually just partisans kind of peeing into the wind about it, right?
0:17:34 And it doesn’t matter at all.
0:17:36 And now they’re receiving a little bit of it.
0:17:36 Yeah.
0:17:44 So just going back to Jeanine Pirro as attorney general, I hadn’t realized that she was actually
0:17:46 nominated for a daytime Emmy award.
0:17:48 So I do think she’s qualified.
0:17:48 Yeah.
0:17:53 And also her criticizing the prosecutors of the January 6th defendants, saying they hadn’t done
0:17:53 their job.
0:17:54 She’s clearly.
0:17:59 I just got to ask you, Matt Gaetz, Pam Bondi.
0:18:05 You’re saying you think he’s going to wake up and say, we’d really love Merrick
0:18:05 Garland back.
0:18:09 That just feels like even the president could do a little bit better than that.
0:18:10 But anyways.
0:18:12 Anywho.
0:18:12 Yeah.
0:18:13 Please move on.
0:18:18 So just some quick data here around this that shocked me and you’re a pollster.
0:18:18 You’ll find this interesting.
0:18:24 According to new polling from Morning Consult, Trump’s approval rating has fallen about six
0:18:28 percent since his comments about the Epstein case.
0:18:30 That’s that’s a pretty big move, isn’t it, Jess?
0:18:35 Yeah, things have been at certain moments in complete freefall for him.
0:18:40 And we’re going to talk about immigration where that is very pronounced on the economy as well.
0:18:45 But, yeah, people are not into how this is being handled.
0:18:50 And you’re completely correct that there was a way to manage this properly where you just
0:18:54 said, you know, I want to protect the victims, for instance.
0:18:57 And some people would say that’s a load of BS.
0:19:02 But in general, the people who are prone to forgive him would forgive him.
0:19:06 And certainly none of these influencers, I think, would have been mad about it.
0:19:10 So it’s completely relevant that you could go down six points that quickly.
0:19:17 And you know that his team is paying attention to it as well and saying, like, well, what can
0:19:23 we do to leak out enough that nobody is incriminating or we don’t have to hand over the proverbial
0:19:27 list or files or whatever we’re calling it and still be able to manage this base?
0:19:32 Because, you know, he has things that he’s doing that he certainly feels really good
0:19:37 about, right, that he has a couple weeks of good news for his side, you know, getting the
0:19:38 reconciliation bill across.
0:19:42 There’s going to be the rescission bill, the Iranian strikes.
0:19:46 Like, he’d love to be talking about all of those things.
0:19:49 And now this is all that he’s getting.
0:19:53 I mean, trying to change the subject now he’s going after Adam Schiff for a bad mortgage in
0:19:53 Maryland.
0:19:55 I mean, it’s completely laughable.
0:19:59 So just a quick test of your political knowledge here.
0:20:02 There’s only one cabinet member that has a net positive rating.
0:20:04 Any guesses who that is?
0:20:05 Rubio?
0:20:08 No, it’s RFK Jr.
0:20:08 What?
0:20:10 RFK Jr. is the most—isn’t that wild?
0:20:13 He has a plus 5% net approval rating.
0:20:15 It’s because he’s good looking.
0:20:19 I’m pretty sure that the pollsters asked Rubella and Measles who their favorite cabinet
0:20:20 member is.
0:20:23 OK, with that, let’s take a quick break.
0:20:23 Stay with us.
0:20:33 Whether you’re a startup founder navigating your first audit or a seasoned security professional
0:20:39 scaling your GRC program, proving your commitment to security has never been more critical or
0:20:40 more complex.
0:20:42 That’s where Vanta comes in.
0:20:53 Businesses use Vanta to build trust by automating compliance for in-demand frameworks like SOC2, ISO 27001, HIPAA, GDPR, and more.
0:20:59 And with automation and AI throughout the platform, you can proactively manage vendor risk and complete security
0:21:03 questionnaires up to five times faster, getting valuable time back.
0:21:07 Vanta not only saves you time, it can also save you money.
0:21:15 A new IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the
0:21:18 platform pays for itself in just three months.
0:21:21 For any business, establishing trust is essential.
0:21:24 Vanta can help your business with exactly that.
0:21:30 Go to Vanta.com slash Vox to meet with a Vanta expert about your business needs.
0:21:33 That’s Vanta.com slash Vox.
0:21:40 Support for the show comes from Smalls Cat Food.
0:21:44 There are a lot of distressing things going on in the world right now and are out of our control.
0:21:49 But one thing you can improve is the food you feed your cat, thanks to our next sponsor, Smalls.
0:21:54 Smalls Cat Food uses protein-packed recipes made with preservative-free ingredients, and it’s delivered
0:21:54 right to your door.
0:21:58 That’s why Cats.com names Smalls their best overall cat food.
0:22:03 And now you can add other cat favorites to your Smalls order too, like amazing treats and snacks.
0:22:08 Plus, Smalls has donated over a million dollars worth of food to Humane World for Animals.
0:22:12 The team at Smalls is so confident your cat will love their product that you can try it
0:22:13 risk-free.
0:22:16 That means they will refund you if your cat won’t eat their food.
0:22:17 What are you waiting for?
0:22:18 Give your cat the food they deserve.
0:22:23 For a limited time only, because you’re a Prop G listener, you can get 60% off your first
0:22:28 Smalls order plus free shipping when you head to Smalls.com slash Prop G.
0:22:31 That’s 60% off when you head to Smalls.com slash Prop G.
0:22:33 Plus free shipping.
0:22:36 Again, that’s Smalls.com slash Prop G.
0:22:43 Support for the show comes from Upwork.
0:22:47 If you’re running a business right now, you already know there are a million things working
0:22:47 against you.
0:22:50 Tight budgets and economic uncertainty are just the beginning.
0:22:54 That’s why Upwork is helping small businesses do more with less.
0:22:58 Upwork is the hiring platform designed for the modern playbook.
0:23:01 You can find, hire, and pay expert freelancers who can deliver results from day one.
0:23:05 Perfect for businesses on tight budgets, fast timelines, and zero room for error.
0:23:08 No subscriptions, no upfront fees.
0:23:10 Plus, you only pay when you hire.
0:23:12 Posting a job is fast, free, and simple.
0:23:15 If you’ve never tried Upwork, now’s the perfect time.
0:23:20 They’re giving our listeners a $200 credit after spending $1,000 in your first 30 days.
0:23:26 That’s $200 you can put toward your next freelancer, design help, AI automation, admin support, marketing,
0:23:27 whatever your business needs.
0:23:31 Visit Upwork.com slash save right now for this offer.
0:23:38 That’s Upwork.com slash save to get a $200 credit to put towards your next freelancer to help grow your business.
0:23:43 That’s U-P-W-O-R-K dot com slash S-A-V-E.
0:23:45 Upwork.com slash save.
0:23:45 Don’t wait.
0:23:59 Welcome back. In a quiet but seismic shift,
0:24:04 the U.S. recently deported eight men to South Sudan, most of whom aren’t even from there,
0:24:10 under a controversial third-country deportation policy that legal experts say could amount to enforced disappearance.
0:24:11 Their words.
0:24:20 Their families haven’t heard a word since July 4th, and now ICE has issued new guidance to fast-track similar deportations with minimal notice and virtually no chance for migrants to object.
0:24:24 And that’s just one piece of a much bigger picture.
0:24:26 Across the country, immigration crackdowns are intensifying.
0:24:35 The new remote detention camp in Florida, Alligator Alcatraz, is drawing outrage for inhumane conditions and the fact that hundreds held there have no criminal charges.
0:24:41 Meanwhile, the administration is appealing a court order that blocked race-based immigration raids in California.
0:24:46 But a new Gallup poll shows Americans are more supportive of immigration than they’ve been in decades.
0:24:50 Even a majority of Republicans now favor a path to citizenship.
0:24:52 That’s a switch.
0:25:00 But if you’re wondering how far Trump might go to flex his immigration powers, look no further than his latest threat to strip Rosie O’Donnell of her citizenship.
0:25:01 Yeah, that makes sense.
0:25:03 He called her a threat to humanity.
0:25:04 Huh.
0:25:09 And said she should stay in Ireland, where she moved after his re-election.
0:25:13 Legal experts quickly pointed out that’s not how citizenship works.
0:25:19 Still, it’s a telling moment, one that shows how far the rhetoric and policy is escalating.
0:25:20 It’s a weapon of mass distraction.
0:25:27 Jess, how does the Supreme Court’s green light on third country deportations open the door for more extreme removals?
0:25:36 And what legal or diplomatic fallout do you think we might see if other nations refuse to cooperate or detainees simply vanish, like in the South Sudan case?
0:25:38 I’m going to say it again.
0:25:42 You got to win elections because then you get to appoint people to the court.
0:25:48 And we are so massively screwed because of this conservative majority.
0:25:55 And my expectation is that Trump will probably get another appointment before he finishes in 2028.
0:25:58 And that really scares me.
0:26:02 You know, there are moments where you say, like, Amy Coney Barrett, I love you, or whatever.
0:26:03 Not this one.
0:26:09 And the most disturbing part of it to me, or the thing that I guess stuck out the most,
0:26:14 is that as part of getting rid of this nationwide injunction that came from the lower court,
0:26:21 is that it does away with having to give meaningful notice before deporting to a third country.
0:26:28 So what they’ve been doing is essentially running out the clock or having no clock,
0:26:34 so that family and attorneys cannot get to a lot of these people who have been either wrongfully detained
0:26:39 or really should just be deported back to their country of origin.
0:26:46 And you hear this constantly, whether we’re talking about CECOT or detention facilities all over the U.S.,
0:26:48 makeshift or permanent.
0:26:50 You know, we’ll talk about Alligator Alcatraz.
0:26:55 But nobody is getting their due process.
0:27:00 Nobody is getting time to talk to an attorney, let alone to their wife or their husband.
0:27:04 And that the Supreme Court could be comfortable with that.
0:27:07 Even when I think about it, it was just a few months ago, right,
0:27:13 where they seemed like some sort of arbiters of humanity about the deportations to CECOT,
0:27:18 where they said they have to make a meaningful effort to produce Kilmar Abrego-Garcia.
0:27:22 Now we have South Sudan, where no call to your lawyer is fine.
0:27:23 Yeah.
0:27:27 It strikes me that in a weird way, a lot of these things are interconnected.
0:27:30 And I’ve been thinking about, I hate anonymity.
0:27:35 And I’m interviewing today Greg Lukianoff, who’s a big First Amendment guy.
0:27:36 And I imagine he’ll give me a—
0:27:36 He runs FIRE, right?
0:27:39 Yeah, he’ll give me a cogent argument for why anonymity is so important.
0:27:45 But my sense is our fidelity to, or conflating of, free speech with anonymity has led to an environment
0:27:52 where people will weaponize millions of trolls to create intimidation and shape the discourse
0:27:57 around what’s advantageous for some folks who don’t have America’s best interests at heart.
0:28:02 I don’t think people would ever behave this way if they had to have their identity released online.
0:28:07 And in general, I think that almost anything involving a government investigation,
0:28:13 there might be a quiet period for security reasons, but I think there’s no reason not to release everything.
0:28:19 I just—I can’t understand why any file isn’t ultimately released that the FBI aggregates
0:28:21 unless they see it as a security concern.
0:28:29 And then, more generally, what do stormtroopers, ICE, the KKK, and weirdos not letting Jews access
0:28:31 certain parts of UCLA have in common?
0:28:33 And the answer is masks.
0:28:40 And when I think about what’s been a hugely accretive or beneficial move for our men and women in blue
0:28:43 and our trust in police forces around the nation, it’s been body cams.
0:28:48 And the fact that they have their badge number and their name right on their—visible on their chest,
0:28:53 and they’re not allowed to wear masks because they have to take accountability for their actions.
0:28:57 And it’s been, I think, one of the greatest innovations in law enforcement
0:29:02 that if you’re going to apply force or you’re here to protect and to serve,
0:29:05 that you need to show all of your actions in 4D color.
0:29:11 And what do you know, ICE enforcement agents are wearing masks, which tells you, in my opinion,
0:29:15 everything about how they’re acquitting themselves or what they’re doing.
0:29:18 They wouldn’t dare want anyone to actually know who they actually are.
0:29:23 So this, to me, comes back to a basic trend in our society that’s a wrong trend,
0:29:28 and that is an acceptance and even reverence for anonymity as opposed to forcing people
0:29:30 to take accountability for their actions.
0:29:34 And people come back and say, well, what about the civil rights lawyer in the Gulf that needs
0:28:35 to protect their identity?
0:29:39 We could absolutely have anonymous accounts online and ensure that people are using them
0:29:42 for reasons where they would need anonymity.
0:29:48 But when you go onto your feed—and, Jess, I imagine you get a lot of this because I found
0:29:53 a disturbing trend online where women are subject to more hate-filled emails and rhetoric and threats
0:29:59 of violence by virtue of the fact that they’re women—I don’t think it’s healthy, this protection.
0:30:06 I think every social media platform should force identity and age gate. And as for a federal agency
0:30:12 that now has greater funding than the Federal Bureau of Investigation, not doing covert or national
0:30:18 security work, but treating people—you just wouldn’t see them putting their knees on the heads
0:30:22 of people or separating women from their 13-year-old screaming daughters if they actually had to show
0:30:23 their fucking faces.
0:30:28 So whether it’s the Epstein files and a belief that, oh, some things should stay out of public
0:30:35 view or masks covering the real identity and thereby reducing the accountability of this
0:30:40 enforcement agency, which we are paying for, I think all of it comes back to the same place,
0:30:43 and that is we have gone way too far with a reverence and an acceptance for anonymity and
0:30:45 not connecting identity to people’s actions.
0:30:46 Your thoughts?
0:30:49 I think it’s an incredibly important point.
0:30:55 And if you wanted to get into the nuance of what has to be for national security, what actually
0:31:00 needs to be put out in the world, I think that you can have those conversations on a case-by-case basis.
0:31:07 But in general, you know that society would run a hell of a lot different if people had to show
0:31:12 themselves, if they had to own what they’re saying, in person and online.
0:31:20 And you’re completely correct that it is a complete cesspool of what goes online, especially when it comes
0:31:21 to misogyny and harassment.
0:31:22 I’m going through this right now.
0:31:30 There’s a large conservative account that posted a picture of me with my ex-boyfriend and a picture of me with my husband.
0:31:36 And the post says that I cheated on my first husband.
0:31:37 I don’t have a first husband.
0:31:38 I only have one husband.
0:31:41 And, you know, it gets worse.
0:31:44 We’re talking whore, the C word, all of it.
0:31:53 And this is because I responded to the news that Ken Paxton, AG in Texas, his wife has filed for divorce after 38 years together.
0:31:57 It has been rumored for a long time that he cheats on her.
0:32:00 And she’s just had enough at this point.
0:32:09 Anyway, so I wrote The Party of Family Values strikes again because we’re constantly lectured by conservatives about how Democrats are folks that are part of a pedophile ring.
0:32:18 And it has unleashed a torrent online that I haven’t seen worse than actually when the president of the United States of America comes after you.
0:32:22 A lot from women, a lot from good Christians, right?
0:32:25 You know, I’m the best Christian there ever was.
0:32:38 And you’re an enormous whore and can’t get it taken down because of what goes on on social media now and the changes that have happened under Elon Musk.
0:32:42 And so I’m kind of just sitting here having to take it.
0:32:48 The algorithms love this type of outrage, this type of incendiary, ridiculous content.
0:32:55 And first off, I’m sorry you’re going through this, but to anyone that doesn’t have their head up their ass, realize this is all total bullshit.
0:33:08 But what this continues is a long tradition of misogyny that has gone just apeshit online where people don’t have to take accountability for their hateful, weird rhetoric, which sometimes can be very – it’s not only damaging emotionally and mentally.
0:33:11 It can put people in physical danger because people start believing this shit.
0:33:14 And then a crazy person picks up on it.
0:33:29 But it continues a long tradition that has gotten much worse online, which a bunch of dudes refuse to address because it’s difficult to imagine what it’s like to be a victim of this when you’ve never been a victim of this.
0:33:41 And the misogyny here is just so stark because, according to online trolls and especially the right, infidelity is a feature, not a bug for men.
0:33:43 And it’s a crime against humanity for women.
0:33:49 So why not just accuse women of something, whereas it would be a compliment?
0:33:52 I mean, that’s a real man on the Republican side.
0:33:53 He should even be president.
0:34:02 Well, the people who are making these comments about you, one, I don’t know if it’s a media organization, but effectively this goes back to big tech.
0:34:12 The platform platforming this clearly false content, most likely, and it sounds like it’s Twitter, the algorithms see how much activity it’s getting.
0:34:17 And so they elevate it and they give it more organic reach than it deserves on its own.
0:34:18 There’s no veracity here.
0:34:24 People wouldn’t be spreading this type of information as far and wide organically.
0:34:32 But because it invokes a lot of reactions and back and forth, and I’m sure people are weighing in and defending you, the algorithms love it.
0:34:40 So they spread it further than it would go on its own, thereby disparaging your reputation and also creating emotional harm.
0:34:49 When you algorithmically elevate content, you are now an editor and there’s no reason you shouldn’t be subject to the same liability and slander laws as traditional media.
0:35:08 Fox News, including your endorsement for attorney general, Jeanine Pirro, was named in a case for spreading misinformation about Smartmatic, purposely and knowingly spreading misinformation around a company, knowing that it was false information that caused material and economic harm.
0:35:14 This is happening to you right now, but because it’s happening to you, Twitter knows this is bullshit.
0:35:18 And it’s very easy to see that this is causing real harm and disruption in your life.
0:35:30 But because they’re, quote unquote, a nascent technology company, which is what it was called in 1997 when this ridiculous 230 law was passed, they are not subject to the same liability as Fox News when they spread misinformation.
0:35:44 So this all comes back to big tech figuring out a way to weaponize Republicans and Democrats to avoid any real responsibility or liability for things that traditional media has been responsible and liable for.
0:35:52 It not only tears at the fabric of our society and coarsens our dialogue, but creates a post-truth society where nobody knows what to believe anymore.
0:35:59 Because the reality is, if somebody sees a story over and over, it becomes, in their mind, naturally, less of a lie.
0:36:03 It’s like, oh, I’m seeing this everywhere and there must be some truth to it.
0:36:13 No, it just means the algorithms have decided, regardless of how disparaging or slanderous it is, if it creates more engagement, we’re going to spread it far and wide.
0:36:26 So I kind of lay this at the feet, not only of the people who created this false narrative, but the fact that, one, the social media platform doesn’t force identity when people weigh in and say vile things about you.
0:36:27 We should know who they are.
0:36:29 They should have to stand behind it.
0:36:37 And also, this organization or the people posting this content, or the platform specifically, should be subject to the same laws as traditional media.
0:36:43 Anyways, I’m sorry you’re going through that, and I hope you recognize that in the moment, everything seems worse than it actually is.
0:36:48 This isn’t that meaningful, and everyone will forget about it and move on.
0:36:49 Yeah, I hope so.
0:36:55 Yeah, and I’m, you know, it’s been a few days, and you learn to move on quickly, which is probably another statement on how society works.
0:37:09 But I just wanted to add to your analysis to say that a critical component of why things are so bad is that we are so intellectually lazy now that no one wants to even Google something.
0:37:22 You know, this happens constantly, and I understand that it is baked into the job for me to bring information that is different from the mainstream conservative point of view that my colleagues are espousing.
0:37:38 But because you don’t want to spend any time taking a look at what Quinnipiac is saying or taking a look at what Marist is saying or even the Fox News poll, you just immediately dismiss anything that makes your antenna go a little haywire.
0:37:47 You know, to loop it back to the immigration issue, Trump has blown his best issue in historic terms.
0:37:57 I mean, he’s negative 27 on immigration now with Gallup, negative 16 with Quinnipiac, negative 9, Marist, Fox News, minus 7.
0:37:59 Totally lost support of the Hispanic vote.
0:38:04 Remember, that was one of their favorite things to talk about, how the hombres actually wanted this.
0:38:10 Well, it turns out the hombres are not actually interested in the way immigration law is being enforced at this particular moment.
0:38:15 And it probably has to do with the fact that 70 percent of people who are being detained haven’t been convicted of anything.
0:38:18 So they say, oh, well, people are pending charges.
0:38:22 You can say whatever you want about someone, but this is the United States of America.
0:38:27 You have to be convicted of something, not just have them floating the idea that you did a very bad thing.
0:38:34 But you talk to the strongest section of his base about this, of Trump’s base.
0:38:38 Those polls, they’re all fake news polls, right, if they even exist.
0:38:45 And they haven’t spent any time actually going around and looking at a source that doesn’t confirm their immediate bias.
0:38:56 And so the masking thing with ICE agents, I don’t know if you’ve seen these stories, but there are people impersonating ICE agents, just like putting on masks and robbing, throwing people in trucks.
0:39:06 I mean, and some folks don’t even know if it isn’t actually an ICE agent that is doing something like this because the reality on the ground is that there are folks that are doing this.
0:39:09 And Brian and I were talking about this over the weekend.
0:39:18 We hadn’t been seeing that many stories from New York City about these immigration raids, like hearing a lot about what’s going on in Chicago and in Boston.
0:39:30 And I heard from a friend that apparently in big immigrant neighborhoods out in Queens in particular, that there are ICE agents everywhere there now.
0:39:37 So the city is not being spared because Eric Adams is a friend of the Trump administration in any way.
0:39:43 And I assume that the stories are going to start rolling in of these terrible things happening, you know, all over the subways.
0:39:52 And it’s just I’m not saying that immigration law doesn’t need to be enforced, but I don’t want our country looking like this.
0:39:54 Yeah, but it’s so gross.
0:40:05 It’s performance and pageantry and fear and not really addressing the issue because going to the very core of the issue, while most people acknowledge immigration has been the secret sauce for American prosperity or one of them.
0:40:11 What they don’t want to have an honest conversation about is that the most profitable part of immigration has been illegal immigration.
0:40:14 And we didn’t just wake up with tens of millions of illegal immigrants.
0:40:23 It’s a flexible workforce that comes in, pays Social Security taxes, commits crimes at a lower rate and then melts back to their own country when the work dries up.
0:40:27 It’s been this unbelievable, profitable, flexible workforce.
0:40:32 And where I see the far right go is they say, look, and it’s a solid argument.
0:40:33 Theoretically, they broke the law.
0:40:34 They broke the law.
0:40:35 They knew they were breaking the law.
0:40:37 They should be subject to enforcement.
0:40:45 But they never want to talk about, well, based on that law, shouldn’t we be prosecuting all the employers who knew they were employing undocumented workers?
0:40:47 And by the way, that is a crime.
0:40:49 But we don’t talk about that.
0:40:56 We don’t talk about the cut and dry, the employers, whether it’s fast food restaurants or families employing undocumented workers.
0:41:03 We seem to forgive those less brown, older, less vulnerable people, right?
0:41:06 So we’re just not focused on the right thing.
0:41:15 If you want to talk about an alien invasion, if you want to talk about millions of people storming the shores and offer their services for free, you want to talk about disruption.
0:41:26 What if seven or eight or 10 million immigrants stormed the shores, just overwhelmed the United States and were willing to work for free 24 by seven?
0:41:27 And what would that do to certain industries?
0:41:29 Well, it’s here, folks.
0:41:30 It’s called AI.
0:41:45 And instead of focusing on AI and taking some of that $12 billion and upskilling people and training to be critical thinkers and understand AI and how to leverage it and job training to get people out of things like trucking, which clearly AI is going to just decimate.
0:42:11 We want to scare the shit out of people and increase inflation by getting rid of three percent of America’s working population, which is ICE’s goal, which will be somewhere between five and 15 percent of the agricultural and construction communities, between tariffs on drywall gypsum, Canadian lumber, and then emptying out construction sites, of which 15 percent of the workforce is undocumented workers.
0:42:15 And by the way, when you lose 15 percent of your workforce, the industry kind of collapses for a while.
0:42:22 You’re going to see massive inflation and you’re going to see huge economic strain.
0:42:43 But instead, you know, instead of focusing on our economy, instead of focusing on how we would use that money to upskill people and protect jobs, I just think the president, I think they love this macho, masked big guys ripping families apart. I think so many Americans are so angry and upset that they actually enjoy some of this footage.
0:42:45 It’s like, oh, we’ll show those, you know, those criminals.
0:42:53 But, you know, the nice the nice white family that owns the car wash has been hiring undocumented workers and wink, wink.
0:42:56 Everybody’s put up with this for a long time.
0:42:58 Leave those good Americans alone.
0:42:59 I wonder what’s going to happen.
0:43:04 I think at some point there’s going to be a confrontation that’s going to turn violent in the United States.
0:43:07 I think people are, correctly, horrified.
0:43:12 There was a really interesting video taken in a hospital where doctors just surrounded ICE and said, what are you doing?
0:43:13 Get out of here.
0:43:18 And what is going on here is so craven and so aggressive and so upsetting.
0:43:23 I mean, at least the brown shirts in Nazi Germany showed their faces.
0:43:27 You know, these guys showing up with masks and the militarization.
0:43:31 I mean, it’s happened incrementally, so we’re not shocked.
0:43:45 But if someone had played just a few years ago what was going to happen here at car washes in Calabasas or to, you know, Uber drivers or people showing up for their citizenship hearings or church.
0:43:49 I think we would have just said, well, of course, that would never happen in America.
0:43:51 Well, well, it is.
0:43:57 Anyways, with that, we’ll take another quick break and we’ll be back in just a moment.
0:44:03 OK, welcome back.
0:44:06 Before we go, Washington Representative Marie Gluesenkamp Perez.
0:44:13 She’s doing what few lawmakers have dared to do: publicly question whether some of her older colleagues are still mentally fit to serve.
0:44:21 The 36-year-old Democrat is pushing a proposal that would allow the House Ethics Office to assess whether a member’s cognitive decline is impairing their ability to do the job.
0:44:24 An idea that was quickly swatted down by her colleagues in committee.
0:44:26 Are they really fucking old people?
0:44:27 Seriously, it’s become.
0:44:28 Actually, no.
0:44:28 Really?
0:44:31 They’re not really fucking old people that swatted it down.
0:44:34 It’s people who want to hang out for the next 50 years in Congress.
0:44:40 Or who are scared of the really fucking old people who wield a tremendous amount of power as well.
0:44:47 Fair point. Her call for cognitive oversight lands in a moment when concerns about mental acuity in government aren’t just theoretical, they’re fueling investigations.
0:44:57 Republicans are now probing whether Biden was mentally fit enough to authorize end-of-term pardon and clemency decisions, claiming staff may have used an autopen without his direct input.
0:45:02 Biden says he made every decision himself and slammed Trump and his allies as liars.
0:45:06 But still, the broader question remains, when is it too old to govern?
0:45:09 Curious on your thoughts on this.
0:45:11 How realistic is Perez’s proposal?
0:45:16 And could it actually break an unspoken code of silence around age and capacity in Congress?
0:45:19 Or will it just be another flashpoint that fades without reform?
0:45:19 What do you think, Jess?
0:45:23 It’s definitely not going to pass.
0:45:24 Right.
0:45:33 But I do think that it’s important for people to show who they are and their morality and their beliefs.
0:45:34 And that’s what she’s doing.
0:45:36 She’s just making a public declaration.
0:45:47 And this is coming from her constituents, who were all deeply concerned about President Biden and his mental fitness, as millions of Americans were, yourself included.
0:45:56 And she’s putting her stake in the ground, where she just says, this is something that we need to be talking about more and to be thinking about.
0:46:04 And maybe it’s not the end of the world if there’s some mechanism to step in when there is a problem.
0:46:12 And the argument against it is, well, we, you know, we have elections and they have to stand again and get voted back in.
0:46:17 But in a lot of these districts, they’re rubber stamps for whoever is in the majority.
0:46:19 You’re talking about D plus 30 districts.
0:46:23 And if someone isn’t getting primaried, then that carries on.
0:46:30 There’s a lawmaker, I think she’s 86 years old, who’s been flirting with maybe I’m going to run, maybe I’m not going to run.
0:46:35 Her staff has to kind of clean up after she makes a comment about it.
0:46:36 And now she’s decided that she’s going to run again.
0:46:40 Maxine Waters, she’s 86 years old.
0:46:41 I get it.
0:46:45 These are only two-year terms, so she should be out by 89 or whatever.
0:46:49 But, like, some things just don’t feel appropriate.
0:46:54 And I also think that there is an unfair sliding scale for folks.
0:47:02 Like, I interviewed Greg Kassar for our podcast, and he wants a new generation of leaders.
0:47:03 He’s part of that.
0:47:05 He’s on the progressive left of the party.
0:47:07 He’s been on tour with Bernie and AOC.
0:47:12 So I say, you know, well, Bernie’s 82 years old, and he’s going to run again for his Senate seat.
0:47:16 Well, there’s a carve-out for Bernie because Bernie has a ton of energy.
0:47:17 And, yes, I get it.
0:47:23 Bernie Sanders, you have a lot more faith in his ability to survive a term than you did necessarily about Joe Biden.
0:47:28 But you either have a standard or you don’t have a standard.
0:47:38 And that’s where this representative, Gluesenkamp Perez, is kind of putting her stake in the ground and just saying, we need to have standards that everyone abides by.
0:47:51 And then you can plan for what life after Congress looks like for you if it’s just retirement or maybe you want to go into the private sector or you want to go back to teaching if you’re a teacher or maybe, you know, want to travel the world, whatever it is.
0:48:02 But a lot of people have been talking a big game about passing the torch and a new generation of leaders or what it takes to do this job, which is an incredibly special, elite job.
0:48:04 There are only 435 people with this job.
0:48:09 On the Senate side, only 100 people in a country of 330 million get this job.
0:48:11 Take it more seriously.
0:48:13 I love this.
0:48:24 And I think it’s just insane that we’ve decided that a 34-year-old doesn’t have the cognitive ability or life experience to run for president, but someone, 81, can do it.
0:48:30 16 years, term limits, or 18 years and 75, you’re out.
0:48:31 It’s just insane to me.
0:48:38 We age-gate all sorts of tests, whether it’s pilots, whether it’s CEOs of public companies.
0:48:44 And we’ve decided that, arguably, to your point, the most important decision, we’re not going to have age limits on.
0:48:46 And it really hurt Democrats.
0:48:49 I think five representatives died.
0:48:50 I think it’s three.
0:48:50 Was it three?
0:48:52 I thought it was five of them passed away.
0:48:53 Like, since the term.
0:48:54 Yeah, three.
0:48:55 It’s a lot of people.
0:48:57 I mean, I joked, I’m going to go see the F1 movie.
0:49:03 And I’m like, I think it’s hilarious that Brad Pitt is an F1 driver who clearly, I mean, spoiler alert, I’m pretty sure he probably wins the race.
0:49:07 But in 10 years, his kids are probably going to take his driver’s license away.
0:49:13 It not only creates a situation where you have people who are cognitively impaired and make poor decisions, including to run again.
0:49:15 Bernie should not be allowed to run again.
0:49:17 That’s insane because biology is undefeated.
0:49:24 And by the time he’s 86 or 87, he’s probably going to slow down and not be able to represent his constituency very well.
0:49:27 And there will always be examples of the 100-year-old who runs a marathon.
0:49:39 But in general, we’ve decided that 17-year-olds don’t have the cognitive capacity to decide to join the army for good reason or that they, you know, kids should not be able to access pornography, at least theoretically.
0:49:42 We age-gate all sorts of stuff on the bottom end.
0:49:47 And the cognitive decline is just as severe on the back end.
0:49:49 Of course we should age-gate this.
0:49:51 And the dialogue, I believe, has actually progressed.
0:49:58 When I first said that Biden was too old to run on Bill Maher two and a half years ago, I was called an ageist and how dare you.
0:50:16 And, okay, in addition to the cognitive decline, which puts serious strain on the public, having to put up with individuals who no longer have good judgment or even just the capacity to do their jobs, it creates an environment where we’re not thinking long-term.
0:50:19 Two-thirds of Congress will be dead within 25 years.
0:50:24 Are they really that concerned at the end of the day about deficits and climate change?
0:50:35 And they get all indignant and clutch their pearls that they got as a wedding gift in the 30s and say, okay, you’re being ageist.
0:50:38 We care about climate change and our grandchildren?
0:50:39 No, you don’t.
0:50:44 Listen to young people talk about the deficit and the climate change.
0:50:46 They’re going to be around to have to pay this shit back.
0:50:58 They’re going to be around when everyone has to move out of or there’s forced mass migration and an unbelievable tax on everybody when we have to pay for disaster relief on super fires that are happening every other week.
0:51:02 So we absolutely need a representative democracy.
0:51:09 And the average age of our elected representatives across the world, this is a global phenomenon, has risen from 55 to 62.
0:51:16 And the U.S. has the oldest, I believe, of any G7, except we haven’t taken a note from other countries.
0:51:19 Most countries, there’s age gates.
0:51:22 You know, firefighters in the U.S. have to retire at 57.
0:51:29 You have to retire from the armed forces at 64 because you might make bad decisions to kill other people when you’re 84.
0:51:36 But we’ve decided, no, you can make decisions about who gets food stamps or what nations we do or do not declare war against.
0:51:41 Finland requires medical testing for driver’s license applicants after the age of 45.
0:51:47 England has an age limit of 75 for sitting on a jury.
0:51:57 They realize at 76, you may not have the cognitive ability or the physical stamina to pay attention to ensure that an individual is acquitted fairly through the court process.
0:52:01 Eighty-six members of the House and 33 members of the Senate are now over the age of 70.
0:52:04 In the House, the average age is 57.
0:52:07 In the Senate, it’s 65.
0:52:10 And this is the third oldest Congress since 1789.
0:52:11 Enough already.
0:52:12 Enough already.
0:52:13 We need age limits.
0:52:18 If we have them on the bottom end, there is no reason we shouldn’t have them on the top end.
0:52:23 It’s also a worse problem on the Democratic side than it is on the Republican side.
0:52:23 Fair.
0:52:27 Trump aside, who’s obviously an incredibly old president.
0:52:34 But the rank and file, and this is what they’re doing with the Supreme Court nominations, too.
0:52:39 You know, I could see a world in which we were in power and we’re like, I’m really into this 65-year-old.
0:52:40 I think it would be great on the bench.
0:52:42 I’m like, take the teenager.
0:52:44 Put the teenager on the court.
0:52:45 Put in Representative Talarico.
0:52:47 That’s a great plug.
0:52:47 Yeah.
0:52:49 We’ll have him on the podcast on Friday.
0:52:49 There you go.
0:52:51 This is how you’ve got to be thinking about it.
0:52:57 I do want to say something positive about the current state of Democratic politics.
0:52:58 Go on.
0:53:03 Because we don’t do this nearly enough, and these are fundamentally our people.
0:53:14 There’s new polling out from Tony Fabrizio, Trump’s pollster, showing that Republicans are trailing on the generic ballot in 28 House battleground seats.
0:53:19 There are huge amounts of pickup opportunities, especially with the big, beautiful bill.
0:53:21 Don’t blow it by only talking about Epstein.
0:53:33 And one thing that I saw that I thought you would really like is there’s this up-and-coming ad-making firm that’s Cool Campaigns, Van Ness Creative.
0:53:41 And they have basically issued a warning saying that if you’re not going to get online, then you just need to retire.
0:53:52 That no one should be making you ads or supporting campaigns of people who cannot communicate the way that the world is getting information and ingesting it.
0:54:00 And, you know, there’s an effort that you can make if, you know, you’re older and all you can do is hold a camera up to your face while you’re sitting in a car or whatever.
0:54:13 That’s still making some sort of effort, but there are actually pretty high numbers of folks in elected office who don’t even want to partake in the main vehicle for political communication at this point.
0:54:19 And that those people need to retire along with the 85-year-olds.
0:54:19 I agree.
0:54:25 Across both chambers, there are 20 members who are 80 years or older who likely think ChatGPT is a venereal disease.
0:54:31 I mean, the Congress is beginning to look like the waiting room at a cardiologist in Boca Raton.
0:54:33 It just, for God’s sakes, enough already.
0:54:34 All right.
0:54:37 Jess, that’s all for this episode.
0:54:39 Thank you for listening to Raging Moderates.
0:54:42 Our producers are David Toledo and Eric Jenikas.
0:54:44 Our technical director is Drew Burrows.
0:54:47 Going forward, you’ll find Raging Moderates every Wednesday and Friday.
0:54:53 Subscribe to Raging Moderates on its own feed to hear exclusive interviews with sharp political minds.
0:54:58 This week, Jess and I are talking with, and we’re excited about this, with Texas State Representative James Talarico.
0:55:03 Make sure to follow us wherever you get your podcasts so you don’t miss an episode.
0:55:05 Jess, have a great rest of the week.
0:55:05 You too.
Will the Jeffrey Epstein case tear the Trump White House apart? Scott and Jessica talk through the discord over the Epstein files inside the administration — and in the Republican base, and they discuss why Trump is acting like a very guilty person. How can Dems tell the difference between what they should focus on to win elections, and what’s just a distraction? Plus — a new proposal in the House to finally do something about our gerontocracy problem.
Follow Jessica Tarlov, @JessicaTarlov.
Follow Prof G, @profgalloway.
Follow Raging Moderates, @RagingModeratesPod.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
-
Why Validation Beats Agreement: Caroline Fleck’s Revolutionary Approach to Human Connection
AI transcript
0:00:03 It’s valid that you want to protect them, but is helicoptering protecting them?
0:00:06 No, because they’re not developing the skills they need to do it themselves.
0:00:11 So what you need to do is be able to step back and let them fall.
0:00:18 You have to trust in the wisdom that growth happens through quote-unquote failure, that
0:00:22 when you try and protect your kids from failure, you’re ultimately protecting them from growth.
0:00:27 I’m Guy Kawasaki.
0:00:31 This is the Remarkable People Podcast, and today is a special edition.
0:00:34 We’re coming to you from Honolulu, Hawaii.
0:00:38 So what you see behind me is not a virtual background.
0:00:39 That’s a real background.
0:00:41 That is Waikiki.
0:00:43 I’m near Diamond Head.
0:00:46 I’m looking towards Waikiki the other way.
0:00:48 Usually Diamond Head is that way.
0:00:50 Queen Surf is that way.
0:00:54 So I’m not here to be the Hawaii Visitors Bureau.
0:01:00 I’m here to help you be remarkable, and we have found a very remarkable person.
0:01:02 She is in California right now.
0:01:09 If I were in California right now, we’d be only about 20 miles apart, but now we’re 2,500 miles apart.
0:01:12 So our guest is Caroline Fleck.
0:01:17 And I got to tell you, her book is the most interesting.
0:01:22 And I have to tell you, Caroline, I felt convicted in your book many, many times.
0:01:27 And yeah, I feel like I’m a bad parent, bad spouse, bad everything.
0:01:28 So anyway.
0:01:28 No, no.
0:01:38 So Caroline is a clinical psychologist, and her practice is really focused on emotional resilience and communication.
0:01:43 And she kind of bridges psychology and real-world challenges.
0:01:47 And she’s very famous for this concept called validation.
0:01:49 So that’s what we’re going to talk about.
0:01:50 Okay, Caroline?
0:01:52 Yeah, I got to tell you.
0:01:59 So, you know, first of all, I often go off the rails when I start a podcast, and that’s going to be true today.
0:02:03 So I have to tell you, there’s one sentence in your book.
0:02:07 When I read it, I stopped.
0:02:09 I was reading the PDF.
0:02:11 I selected it.
0:02:12 I quoted it.
0:02:13 I put it in my notes.
0:02:18 And you will not guess what quote I took, but I love this sentence.
0:02:23 I’ve never read a sentence like this in a business book, and I have to read 52 books a year.
0:02:24 Oh, my God.
0:02:31 The quote is, an anecdote isn’t a substitute for scientific evidence.
0:02:32 Oh, my God.
0:02:33 Oh, my gosh.
0:02:37 Basically, you just indicted every nonfiction writer.
0:02:39 My God, especially business book.
0:02:43 Malcolm Gladwell is turning over in his grave right now.
0:02:45 Oh, I love that.
0:02:46 I appreciate it.
0:02:50 And then we’re just going to fanboy out a little bit for a while.
0:02:50 Let’s do it.
0:03:00 So then I come to the end of your book, and I have to say that I think your epilogue is the best epilogue I have ever read in a book.
0:03:02 Oh, my gosh.
0:03:05 I hope I don’t get in trouble with Madison for saying this.
0:03:10 And it starts with the sentence, I have no boobs.
0:03:11 And I read that.
0:03:15 I said, you know, that is not a typical epilogue start.
0:03:18 And then you go in to discuss cancer and all.
0:03:20 I’m like, wow, what a powerful epilogue.
0:03:21 Anyway.
0:03:23 Oh, my gosh.
0:03:24 You are so kind.
0:03:25 Thank you.
0:03:28 And thank you for reading the epilogue, actually.
0:03:32 Because a lot of people, they don’t even finish the book, much less read epilogues these days.
0:03:33 So I appreciate it.
0:03:39 I loved in the middle of the book where you said something like, well, if you’ve gone this far, I really thank you.
0:03:44 And you don’t have to read the rest of the book because you probably read more than most people ever read in a book.
0:03:47 I looked at that and said, that’s probably true.
0:03:53 I’m going to put that line in my next book because one of the key skills you talk about is copying.
0:03:56 And I know how to copy people.
0:03:57 I work for Steve Jobs.
0:03:59 If anything, I learned what to steal.
0:04:02 And that is a concept worth stealing.
0:04:04 Oh, I love that.
0:04:05 I love that.
0:04:06 What to steal.
0:04:09 I hadn’t thought about copying in those terms, but you’re exactly right.
0:04:10 It is.
0:04:18 Well, I think there’s a very famous Picasso quote, something along the lines that real artists steal or something like that.
0:04:18 Yes, yes, yes.
0:04:19 I know what you’re saying.
0:04:19 I know.
0:04:21 And it rang true to me as well.
0:04:25 Thank God for Xerox PARC is all I can say.
0:04:29 So listen, let’s just start off really basically.
0:04:33 Could you just explain what is validation?
0:04:36 And I don’t mean for your parking ticket at the restaurant or the hotel.
0:04:40 What is validation to a clinical psychologist like you?
0:04:52 Validation is simply a way of communicating that you’re there, you get it, and you care, and that you accept the other person non-judgmentally.
0:04:57 It’s the feeling of feeling seen or feeling heard.
0:05:04 When we have that experience, what we are experiencing, according to clinical psychologists like myself, is validation.
0:05:06 We feel validated.
0:05:09 We feel like somebody sees us.
0:05:11 They understand us.
0:05:16 They see the rationality not just in our thoughts, but also in our emotions.
0:05:23 And it’s probably that latter part, seeing the rationality in our emotions, that really does something for us.
0:05:28 Now, are you telling me that everybody’s feelings are valid?
0:05:32 Are there not cases that you shouldn’t validate them?
0:05:35 Leave that to the psychologists.
0:05:47 And I mean that so seriously because, and this is serious, the effects of invalidation on children, on adults, on everybody, it’s devastating.
0:05:59 So coming from an environment in which a child, for instance, was exposed to pervasive invalidation, meaning when they said they felt sad, the parents said, walk it off, right?
0:06:02 If they were frustrated, it was, you’re being a baby.
0:06:06 Basically, whatever emotion they expressed was dismissed or criticized.
0:06:15 That type of invalidation leads to some of the most serious types of psychopathology or psychological disorders that we know of.
0:06:17 So it’s not small stuff.
0:06:22 And so if you don’t understand where someone’s coming from, you can disagree with their thoughts.
0:06:24 You can argue with their reasoning.
0:06:29 Just don’t be in the business of telling people that they don’t feel what they’re telling you they feel.
0:06:31 A psychologist can unpack that.
0:06:32 You just focus on something else.
0:06:37 You haven’t been talking to my children, have you?
0:06:41 Well, yeah.
0:06:41 Okay.
0:06:47 So just give me like a definition of what are the qualities of what’s valid.
0:06:50 You raise a good point, which is, is everything valid?
0:06:57 As psychologists, when I’m working with patients, I need to form a relationship and I need to do that quickly.
0:07:02 And the quickest, swiftest way to form a relationship is through validation.
0:07:03 Okay.
0:07:04 It’s by validating them in some way.
0:07:07 And I’ve got three things I could validate.
0:07:11 I could validate their thoughts, their emotions, or their behavior.
0:07:12 Okay.
0:07:14 Thoughts are valid if they’re logical.
0:07:16 Behavior is valid if it’s effective.
0:07:19 Emotions are valid if they fit the situation.
0:07:20 All right.
0:07:23 But you only need to focus on one of those.
0:07:24 Does that make sense?
0:07:25 It does.
0:07:28 But let’s hypothetically say…
0:07:28 Do it.
0:07:28 Do it.
0:07:29 Yeah.
0:07:34 Let’s hypothetically say a politician shows up in your office and says,
0:07:39 I believe it’s Jewish space lasers that’s controlling the weather.
0:07:41 Are you going to validate that?
0:07:42 No, I’m not.
0:07:43 Those thoughts are not valid.
0:07:51 However, if they say to me, and I am terrified about the implications of that, I want to get
0:07:57 out and raise as much money and get as much support as I can to protect us from this manipulation,
0:08:02 that emotion, I understand that emotion based on what they’re thinking.
0:08:09 The thoughts are not valid, but the emotion makes sense in light of what they’re thinking.
0:08:14 That is my whole job, both as a clinical psychologist, working with folks who have
0:08:19 thought disorders and all sorts of things, and frankly, as a parent to a young kid, a lot
0:08:21 of what they’re saying, I’m like, that’s not rational.
0:08:22 That does not quite make sense.
0:08:25 And yet her emotion is real, right?
0:08:28 I don’t need to validate the thought to speak to the emotion.
0:08:30 And this is the critical thing.
0:08:38 You stand no chance of changing someone’s opinion, of getting through to them, of challenging
0:08:42 their assumptions if they don’t feel accepted by you.
0:08:52 So, are you telling me that validation is not at all the same thing as agreement?
0:08:54 It is not.
0:08:57 I’m so glad you flagged this because this is where folks get stuck.
0:08:58 We’re afraid.
0:09:00 Well, I don’t want to say that I agree, right?
0:09:01 I don’t agree.
0:09:02 I’ll give you an example.
0:09:07 I’m a vegetarian for animal, ethical, and environmental reasons.
0:09:13 That said, I see a lot of valid reasons why somebody would choose to eat meat, okay?
0:09:14 I don’t agree with them.
0:09:16 I make a different decision.
0:09:22 But if I just wanted to validate what’s logical there, there’s tons that I could focus on.
0:09:26 Now, if I wanted to try and change their opinion or change their position, I could come at that a
0:09:33 different way, but I don’t have to agree with someone to see the facts that they’re building
0:09:37 off of and to see the logic in what they’re saying, presuming it’s there.
0:09:42 In the case of the politician that you described, the logic wasn’t there, and so I couldn’t validate
0:09:42 that.
0:09:44 But this is the game.
0:09:46 It’s trying to find the kernel of truth.
0:09:52 What’s valid in this person’s perspective and zoning in on that first, rather than what
0:09:57 do I disagree with and let me hammer that over and over again until I get through to them.
0:10:04 So I’m going to ask you to tell the story, which was my favorite story of the whole book,
0:10:06 because I felt convicted.
0:10:10 I’ve done things like this, of Havana and the tick.
0:10:11 Can you please tell that?
0:10:13 That is a great story.
0:10:16 Every parent will be able to relate to Havana and the tick.
0:10:19 Havana is my daughter.
0:10:22 She’s 11 now, but I think she was maybe seven.
0:10:24 And we were going on a hike, all right?
0:10:27 And I was very excited going down to one of my favorite places to hike.
0:10:29 And it’s a drive.
0:10:31 It’s like 30, 40 minutes.
0:10:35 And we get there and Havana is in a mood, right?
0:10:39 Like, you know that feeling when you open, like one of your kids is off and it’s just like
0:10:43 the entire afternoon hangs in the balance, like which way is this going to go?
0:10:46 And she was feeling a little carsick and she was crabby.
0:10:50 And so we start hiking and she’s kind of dragging, but she’s doing it.
0:10:51 We’re not even five minutes in.
0:10:54 And she screams like she has been shot.
0:10:55 Okay.
0:11:00 And she says, oh my gosh, this tree, mom, it stabbed me.
0:11:02 And I kind of, I’m like, where?
0:11:03 She lifts up her shirt.
0:11:04 I don’t see anything.
0:11:05 I’m like, let’s keep going.
0:11:07 You know, come on, come on.
0:11:09 And that’s it.
0:11:11 Like the trip is over from that point for her, right?
0:11:14 Every five minutes she needs to stop to rest her back.
0:11:16 And she wants us to carry her.
0:11:19 And I find myself being like, we got to keep going.
0:11:20 Come on, come on.
0:11:21 I don’t want to reinforce this.
0:11:24 I don’t want to just give into this.
0:11:25 I’m a behaviorist.
0:11:26 I understand how this works.
0:11:28 And then, oh golly.
0:11:29 Oh boy.
0:11:33 We get home and she’s still complaining about her back.
0:11:35 So she goes to get in the shower.
0:11:36 I’m helping her get in the shower.
0:11:38 She says, it hurts too much to lift my arm.
0:11:40 And I’m like, oh my golly.
0:11:47 I lift off her shirt and I see she has this huge tick lodged in her back.
0:11:51 I actually had a picture of it that I was going to include in the book, but it was too grainy
0:11:54 because it looks so gnarly.
0:11:55 Okay.
0:12:02 And all of a sudden I realized that the emotion, the frustration, the pain that she
0:12:04 was describing was valid.
0:12:11 And I had spent the last however many hours invalidating her with, we can’t be overdramatic.
0:12:15 If you’re upset, you can use your words, but like all the traditional parenting stuff.
0:12:19 And I just had this moment of like, well, I stepped in that one.
0:12:27 Well, in a sense, I’m glad to hear that even a clinical psychologist, an expert like you
0:12:28 blows it.
0:12:30 Oh, that’s, that’s the whole name of the game, right?
0:12:35 I mean, it, but it really is about, as you describe in your book, that growth mindset that
0:12:37 I can do better.
0:12:43 I think it’s critical to look at skills like validation as skills.
0:12:45 You develop them over time.
0:12:50 You grow into them through practice, through exposure and through understanding.
0:12:53 And so, yes, please go screw it up.
0:12:55 That’s the whole point.
0:12:56 That’s the only way you learn.
0:13:04 Now you draw a very clear dichotomy between validation and problem solving.
0:13:05 Yeah.
0:13:09 So let’s explore the relationship between those two things.
0:13:11 We’ve talked about validation.
0:13:17 Is validation now a precursor to problem solving, or is it a substitute for problem solving?
0:13:20 Or is it a foundation for problem solving?
0:13:22 Oh my gosh.
0:13:26 The relationship between the two is almost like a, a Zen koan.
0:13:30 If you try and validate someone, a Zen koan, like a riddle.
0:13:33 I know what a, I know what a shave ice cone is.
0:13:35 I don’t know what is a Zen cone.
0:13:36 It’s like a riddle.
0:13:43 And it seems like two things don’t go together, but to figure out the riddle, you have to understand
0:13:46 how they can coexist, but it doesn’t make sense because they seem kind of diametrically
0:13:47 opposed.
0:13:55 And at the core of this is the idea that acceptance, accepting someone is actually critical to helping
0:13:56 them change.
0:13:58 And so how is that?
0:14:01 I mean, if you accept someone, you can’t also want them to change.
0:14:02 That doesn’t work.
0:14:04 And yet it, it does.
0:14:12 So problem solving, problem solving is an attempt to change how someone feels, how they’re reacting,
0:14:14 whatever outcome they had.
0:14:16 It comes from a good place.
0:14:17 Our kid comes home.
0:14:18 They didn’t do well on their quiz.
0:14:20 They’re so upset.
0:14:22 We jump in with, it’s okay, right?
0:14:23 It’s not that big a deal.
0:14:28 Which is a subtle attempt to change how they feel.
0:14:32 I’m challenging their thoughts in that moment.
0:14:33 I’m trying to reframe it.
0:14:38 I could then come at it and say, next time, let’s just review your words on the drive into
0:14:38 school.
0:14:44 There I am trying to problem solve or change their behavior so that we get a different outcome
0:14:45 next time.
0:14:48 That is all change focused.
0:14:52 It’s very different from saying, ah, you must be devastated.
0:14:58 When I was your age, I failed a math quiz and I remember crying in the bathroom at school.
0:14:59 I was so upset.
0:15:01 And my mom had to come pick me up.
0:15:03 And right, really just leaning into that.
0:15:06 That is a very different response.
0:15:13 And usually, and I can say this with some authority, as someone who speaks to people day in and day
0:15:14 out about their problems.
0:15:20 When folks come to us with an issue, they are seeking validation and not problem solving.
0:15:22 At least initially.
0:15:28 They need to trust that we understand if they’re going to listen to us down the line anyway.
0:15:29 Right?
0:15:33 Like, I don’t take the advice of people if I don’t think they understand the situation or can
0:15:34 relate to where I’m at.
0:15:36 So where do you draw the line?
0:15:37 Okay.
0:15:39 So I understand the validation part.
0:15:40 You had a bad quiz.
0:15:41 I understand.
0:15:42 I had the same thing.
0:15:45 You know, you’re attending, you’re copying.
0:15:46 Yes.
0:15:47 I got the whole acronym.
0:15:50 I got all eight stages right.
0:15:51 Okay.
0:15:54 So you attend, you copy, you do all that.
0:16:01 But then after you do all that, then do you suggest ways to study for the next quiz or you
0:16:03 just lay off that completely?
0:16:04 It depends.
0:16:06 It’s the most frustrating answer in the world.
0:16:08 It depends for several reasons.
0:16:16 The first of which is that if you do this well, you will be surprised by the other person’s
0:16:20 ability, oftentimes, to come up with solutions themselves.
0:16:21 All right.
0:16:25 So you’re all ready with your arsenal of things that you think they should do.
0:16:31 But just in feeling accepted or understood, as they start to talk it out, they often, not
0:16:33 always, but there are times when they get there themselves.
0:16:39 If not, then I have to decide, is this the right moment?
0:16:42 So we’re really eager to get that problem solving in there.
0:16:49 But sometimes, in the example with the spelling test, it’s fine to just let that sit.
0:16:53 I can circle back the next day and talk about study strategies.
0:16:55 I don’t need to do it right then.
0:16:59 So it’s just, I know I need to get there.
0:17:04 It’s about being intentional and focusing on being effective rather than just getting it
0:17:05 out there.
0:17:08 There’s no point in getting out your great ideas if they’re not going to be received.
0:17:10 It’s just, it doesn’t matter.
0:17:14 Did you just put every tutor out of business?
0:17:18 No, because tutors are hired for problem solving, right?
0:17:20 That’s what they’re asked to do.
0:17:21 It’s very clear.
0:17:26 But when someone comes to you for support, they’re not necessarily asking for you to solve
0:17:29 whatever it is they’re dealing with.
0:17:33 They might just want to hear, yeah, it’s really hard raising teenagers, right?
0:17:36 Not, you need to validate your kids more if you want them to, right?
0:17:44 If you were a high school tutor and you started off with validation, you would become a better
0:17:46 tutor in general, wouldn’t you?
0:17:47 I think so, yeah, I do.
0:17:54 I think this is weird language perhaps to use in the context of tutoring, but validation
0:17:56 is really at the core of relationships.
0:18:02 It is what it means to feel loved in a sense, right?
0:18:03 Because it communicates acceptance.
0:18:08 And if we don’t feel accepted, it’s hard to feel loved.
0:18:14 And I think that’s a really, really critical point that we confuse or lose in the shuffle.
0:18:18 That acceptance is critical in that sense.
0:18:23 As soon as this recording ends, I’m going to start validating Madison.
0:18:25 I’m going to practice on Madison.
0:18:26 So I get good at validating.
0:18:29 Then I’m going to go to my kids and my family.
0:18:31 Oh, just do it all.
0:18:32 Do it all at once.
0:18:33 Don’t hold back.
0:18:34 Don’t hold back at all.
0:18:54 So have you noticed a difference in validation skills by gender?
0:18:54 Wow.
0:18:56 What a great question.
0:19:02 So in my book, I talk, there’s, you know, about eight different validation skills that
0:19:03 we as therapists are trained in.
0:19:04 We learn these skills.
0:19:09 We have to master them so that we can go out and validate our clients and
0:19:14 establish a therapeutic alliance and trust and everything else.
0:19:18 Of these eight skills, one of them is called taking action.
0:19:22 And it’s weird because it sounds a little bit like problem solving in that it has you
0:19:25 intervene, go in there and do something, right?
0:19:29 If somebody got a flat tire and they call you and they say, I’m on the side of the road.
0:19:33 And you say, oh my gosh, you must be so upset and worried, right?
0:19:34 They can validate all day long.
0:19:39 But if they don’t take action and come and get you, you’re not going to feel like they really
0:19:40 appreciate the situation.
0:19:46 And my hypothesis, I don’t know if I ever formulated it in this way, but I always assumed
0:19:51 that men would be more receptive to taking action.
0:19:56 In other words, that they would be seeking taking action more from their partners, perhaps,
0:20:00 as opposed to like emotional or verbal types of validation.
0:20:03 And that has not proven to be true.
0:20:09 On the contrary, I’ve just, and this is just, again, anecdotal from my clinical work.
0:20:10 I’ve observed.
0:21:16 Wait, you said anecdotes are not a substitute for scientific evidence.
0:20:18 It’s true.
0:20:21 And if we had data on it, I would refer to that data.
0:20:24 In the absence of it, I will just give my anecdotes.
0:20:27 And that did surprise me.
0:20:33 And so in my work with couples, it’s often about helping them figure out what it is they’re
0:20:39 actually seeking, what actually helps them feel seen and heard, because it’s not often what
0:20:40 the other person would expect.
0:20:46 And have you also noticed differences for validation by culture?
0:20:50 Are there different cultures that validate more or validate less?
0:20:54 There’s huge differences in the extent to which we validate emotions.
0:20:59 And even within American culture, there’s been somewhat of a revolution, right?
0:21:04 We didn’t talk about emotions forever, much less validate them, or make an effort to really
0:21:06 go out of our way to validate them.
0:21:09 So I think there are differences.
0:21:16 The most meaningful thing, however, within your culture, is whether you are receiving
0:21:23 more or less validation than is typical of, say, a child in that culture, if that makes sense.
0:21:30 In the same way that we see punishment, different types of punishment may be perceived as more
0:21:37 damaging, more abusive to a child who is being raised outside of a culture or raised in a country
0:21:41 where the cultural practices do not apply in the same way.
0:21:45 And so they feel more targeted, if that makes sense.
0:21:46 Yes, it does.
0:21:49 Okay, so the last variable is age.
0:21:54 As you get older, do you get more able to validate people?
0:21:56 Oh, do you get better at it?
0:21:57 That’s interesting.
0:22:00 I thought you were going to ask, do different ages require different types of validation?
0:22:02 Well, that’s a good question too, yeah.
0:22:04 Just to validate your question.
0:22:11 That one, I think you know the answer to, because you have four kids.
0:22:13 Depends how you define adolescence.
0:22:15 Out of adolescence or in adolescence?
0:22:18 Depends how you define adolescence.
0:22:19 Adolescence, that’s right.
0:22:27 Well, I’m sure you notice that when your kids are younger, they just want all sorts of, I see how hard you tried.
0:22:30 All of that type of like warm, fuzzy, right?
0:22:32 You really worked hard on that.
0:22:39 If you try that with a 15-year-old, right, they are going to just squirm inside.
0:22:41 They are not feeling validated.
0:22:43 They are feeling annoyed.
0:22:51 And so as they get older, they need much less in the form of that like, I see you type of stuff.
0:22:53 Because they don’t really want to be seen.
0:22:55 They want to be respected, right?
0:23:05 And so validating their thoughts or their rationale goes a lot further than going in with the like, I see your effort there.
0:23:06 I see what you did.
0:23:11 And then they kind of move back out of that into adulthood a little bit and become more balanced.
0:23:15 Do we get better at validation as we get older?
0:23:17 It really depends.
0:23:19 It depends on what is modeled for you.
0:23:30 And I say this because we know, as I said, with folks who are exposed to chronic invalidation and then have different disorders where they end up in treatment,
0:23:39 one of my jobs as a psychologist is to teach them how to validate themselves and others because it was not modeled for them.
0:23:44 And validation is very much like a language.
0:23:51 It is a language we should be teaching kids at a young age because they pick it up so fast.
0:23:59 It is fascinating to me trying to teach a 10-year-old versus a 20-year-old how to validate themselves or another person.
0:24:02 The 10-year-old picks it up.
0:24:05 The 20-year-old, it takes two, three times as long.
0:24:09 They don’t develop that language capacity in the same way.
0:24:12 Your practice is in Silicon Valley.
0:24:17 And we can edit this answer out.
0:24:27 But do you look at these, what has been labeled the nerd Reich, do you look at these nerd Reichers, these tech bro billionaires?
0:24:30 They are the richest people in the world.
0:24:31 They have everything.
0:24:32 They have wealth.
0:24:33 They have power.
0:24:34 They have visibility.
0:24:35 They have everything.
0:24:44 And yet they seem to be primarily concerned with long-term capital gains rates and, you know, making crypto successful.
0:24:50 Why aren’t they taking the high road and helping society instead of just trying to make more money?
0:24:57 Do you think it’s because they weren’t validated when they’re young or, you know, am I just trying to criticize these shitbags?
0:25:02 Yeah, yeah, yeah, and there are so many shitbags, let’s just be clear, in Silicon Valley.
0:25:06 I think, again, we search for validation in different ways.
0:25:13 And I think that money is a sense that we are valued and that we are valuable.
0:25:20 And once you’ve had that hit, right, that little dopamine hit of, ooh, I’m valuable.
0:25:21 People see my worth.
0:25:23 You continue to seek it out in those ways.
0:25:30 But as you will see, it’s an insatiable thirst because you’re not actually giving yourself water.
0:25:32 It’s like coffee, right?
0:25:33 It dehydrates you more.
0:25:42 And so they’re actually seeking validation, acceptance through the wrong sources.
0:25:48 And therefore, they consume more and try and get more and more because it just keeps dehydrating them.
0:25:55 So is there a fine line between too much and too little seeking external validation?
0:25:59 I mean, I would think it’s healthy to seek external validation.
0:26:02 It’s kind of reinforcing its feedback.
0:26:08 On the other hand, I would say that maybe our current president is just obsessed with external validation.
0:26:12 So where’s the fine line between too much and too little?
0:26:15 How can you tell if you’re trying to get it too much?
0:26:16 Good question.
0:26:19 We talked about the difference between validation and agreeing, right?
0:26:21 I said it’s not the same as agreeing.
0:26:24 It’s also not the same as praise.
0:26:27 Praise is a positive judgment.
0:26:29 It’s positive, but it’s a judgment nonetheless.
0:26:31 It says, good job.
0:26:32 You’re great.
0:26:35 It’s a heart emoji on Instagram, right?
0:26:38 And it reinforces facades.
0:26:45 It reinforces us for exceeding expectations and tweaking ourselves and filtering ourselves, right?
0:26:49 To be seen as better than, to get that positive.
0:26:51 Validation is about acceptance.
0:26:56 It says, I accept you independent of how you look or perform.
0:27:03 So when people say we shouldn’t rely too much on external validation, they’re really talking about praise.
0:27:05 Praise can be good, right?
0:27:08 You need that feedback as you were describing in some ways.
0:27:12 But if you build your life around it, it’s hollow, right?
0:27:17 Because you will have to distort yourself to continue to get it, right?
0:27:21 You have to just keep putting yourself out there and pushing and pushing and pushing.
0:27:24 There is no sense that you’re accepted just as you are.
0:27:34 So radically, I would say, no, there is no amount of external acceptance that is too much.
0:27:37 That has not been my observation.
0:27:45 The more accepted we feel, the greater the sense of belonging, the more we flourish.
0:27:52 That has been consistently my observation, and it is what the evidence supports.
0:27:54 Again, I can’t say that for praise.
0:27:58 If you go around chasing praise your whole life, it's going to get Trumpy, and very quickly.
0:27:59 Okay.
0:28:00 Yeah.
0:28:08 Now, what happens if validation doesn’t necessarily address the underlying causes of issues?
0:28:14 Are you saying that validation puts you on the path to address the underlying causes?
0:28:15 That is right.
0:28:16 That is right.
0:28:22 So in my line of work, I work with folks who have severe behavioral issues, folks who
0:28:25 are suicidal, folks who are hurting other people.
0:28:30 I need to change that quickly, right?
0:28:33 It’s not like, oh, I just hope I go in there and I accept them and everything.
0:28:36 Like, I need to make sure that that behavior changes.
0:28:39 And so acceptance is a piece.
0:28:40 It is a piece of that puzzle.
0:28:42 It puts me on the right track.
0:28:49 It opens the door for collaboration and feedback so that when I do give advice or skills training
0:28:53 or whatever it may be, the other person listens to me.
0:28:57 That is, at least in a therapeutic sense, that’s kind of the name of the game.
0:29:05 But is there no role for friction and conflict and struggle and, you know, shame and healthy
0:29:05 development?
0:29:11 To put it in parental terms, what if you're a helicopter parent or a lawnmower parent?
0:29:12 I’m a lawnmower parent.
0:29:13 Are we defeating ourselves?
0:29:14 How so?
0:29:21 Well, I mean, if we are helicopter parents or lawnmower parents and we always are in problem
0:29:22 solving mode.
0:29:25 how does a person become their own problem solver?
0:29:27 How do you become your own problem solver?
0:29:28 No.
0:29:33 How do my kids or, you know, people who work for me solve their own problems?
0:29:34 Yeah.
0:29:36 You need to back off of the problem solving.
0:29:40 That’s 100%.
0:29:40 It’s problematic.
0:29:42 Absolutely.
0:29:43 I would subscribe to that.
0:29:47 Now, the question is, do you need feedback on that in order to get there?
0:29:53 I think your whole book was feedback, to tell you the truth.
0:30:00 I bet a lot of people listening to this podcast can relate to this concept of helicopter or lawnmower
0:30:01 parenting, right?
0:30:05 So, like, where’s the line?
0:30:05 Yeah.
0:30:09 I think, again, the emphasis is on effectiveness.
0:30:14 You had someone on your podcast recently that was talking about neurologically what happens
0:30:18 in a young person’s brain when they hear nagging.
0:30:24 And the short of it was that the parts of their brain that would actually be needed to take in that
0:30:28 feedback and do something with it shut down.
0:30:31 And just hearing that nagging, they shut down.
0:30:37 And that’s the point with the helicoptering, is that at some point, you’re background noise, right?
0:30:39 You’re always in their face telling them what to do.
0:30:45 And they listen to you less over time, and they don’t develop the capacity to do it themselves.
0:30:46 So those are the costs.
0:30:48 We have to call them what they are.
0:30:52 Now, there are valid reasons you’re helicoptering.
0:30:54 And it’s important to see that as well.
0:31:00 You’re trying to keep your kids safe in a world that has become incredibly dangerous psychologically.
0:31:06 I think as parents, the world feels dangerous for our kids with social media and the internet
0:31:07 and all of these things.
0:31:10 The question is, what’s going to be most effective?
0:31:14 And the answer is that helicoptering is not it.
0:31:15 Okay.
0:31:17 Helicoptering is not it, but what is it?
0:31:20 How would you define helicoptering?
0:31:21 Let’s break it down.
0:31:28 I would define helicoptering as always hovering over your kid and making sure that it’s like
0:31:32 the golden dome of parenting that no missiles get through.
0:31:34 Okay.
0:31:38 Does it include lecturing, in your opinion?
0:31:39 Lecturing the kids?
0:31:40 Or is that separate?
0:31:49 I would say it’s unavoidable, because every time you fire an anti-missile missile, it’s a lecture.
0:31:49 Okay.
0:31:51 Again, I will reiterate.
0:31:53 It’s valid that you want to protect them.
0:31:55 But is helicoptering protecting them?
0:31:58 No, because they’re not developing the skills they need to do it themselves.
0:32:03 So what you need to do is be able to step back and let them fall.
0:32:09 You have to trust in the wisdom that growth happens through, quote unquote, failure.
0:32:14 That when you try and protect your kids from failure, you’re ultimately protecting them
0:32:14 from growth.
0:32:22 Once you accept that, that is the mantra you have to return to again and again to come out
0:32:23 of that helicoptering mode.
0:32:26 Now, does that mean no oversight whatsoever?
0:32:27 No, of course not.
0:32:31 But it means challenging yourself because when we get into a mode like helicoptering, it’s become
0:32:32 default.
0:32:34 We’re not thinking, is this effective?
0:32:35 It’s just what we do.
0:32:36 They ask, can I go out?
0:32:37 No, no, no.
0:32:40 Not unless so-and-so goes with you and like all these other things.
0:32:41 Just stop.
0:32:43 Is this a moment where I could loosen up?
0:32:45 What’s the worst that could happen here?
0:32:54 Okay, now that we solved all the parenting issues, let’s move on to self-validation.
0:32:56 How does one self-validate?
0:32:58 Yeah, such a great question.
0:33:00 We don’t learn this, do we?
0:33:04 This is something that really, really strikes me.
0:33:09 Working with folks as an executive coach, you’ve got these, like you said, tech billionaires,
0:33:10 right?
0:33:12 To folks struggling with severe psychopathology.
0:33:17 And what I see across the spectrum, honestly, I have yet to have someone come into my office
0:33:24 who was really good at validating their own emotions, be it tech billionaire or person struggling
0:33:26 with bipolar disorder, right?
0:33:35 What we do instead is we tend to criticize ourselves and lash ourselves into doing better.
0:33:42 And this is incredibly problematic because we don’t trust our emotions.
0:33:55 We see shame or sadness as indications of failure, again, failure, rather than opportunities for compassion.
0:33:58 The belief that we should treat others the way we would want to be treated.
0:34:02 And I think actually the reverse is true.
0:34:07 We should treat ourselves the way we would treat somebody else who was struggling.
0:34:10 Wow, that’s interesting.
0:34:10 Right, though?
0:34:15 Because if someone was to come to you feeling deeply ashamed, you wouldn’t, like, twist the
0:34:20 knife and say a bunch of other things that they did that proved how worthless they were, right?
0:34:22 But that’s often what we do to ourselves.
0:34:28 We go through our history and collect all the supporting evidence as to why we suck and we’re
0:34:29 never going to X, Y, or Z.
0:34:31 But you would never do that to a friend.
0:34:33 That would seem cruel.
0:34:39 So one of the reasons, back to children, one of the reasons I am so adamant about validating
0:34:45 children is because I want them to develop the capacity to validate themselves.
0:34:51 That doesn’t mean that everything they think or do is correct, but it means that they should
0:34:53 be able to see the validity in what they’re feeling.
0:34:57 So I’ll often say it’s not okay to yell or scream or whatever.
0:34:58 It’s okay to be upset.
0:35:00 It’s okay to be angry.
0:35:01 It’s okay to be frustrated.
0:35:03 All right.
0:35:05 The behavioral expression is different from the emotion.
0:35:10 So being able to validate your own emotions, being able to see, why does it make sense
0:35:11 that I feel this way?
0:35:14 Up next on Remarkable People.
0:35:16 There’s some of this in what you do as well.
0:35:23 You’re trying to help the guest message, get that message across as effectively as possible.
0:35:26 And it’s reflected in how you listen and the questions you ask.
0:35:28 And that’s what we’re going for here.
0:35:31 It’s not about like, haha, I’m so much smarter than them.
0:35:34 These idiots, let me come in and do this better than they’re doing it.
0:35:39 No, it’s more just, you would ask questions differently if you were trying to flesh out
0:35:41 your understanding or their point.
0:35:51 Do you want to be more remarkable?
0:35:57 One way to do it is to spend three days with the boldest builders in business.
0:36:02 I’m Jeff Berman, host of Masters of Scale, inviting you to join us at this year’s Masters
0:36:05 of Scale Summit, October 7th to 9th in San Francisco.
0:36:12 You’ll hear from visionaries like Chobani’s Hamdi Ulukaya, celebrity chef David Chang, Patagonia’s
0:36:16 Ryan Gellert, Promise’s Phaedra Ellis-Lamkins, and many, many more.
0:36:20 Apply to attend at mastersofscale.com slash remarkable.
0:36:24 That’s mastersofscale.com slash remarkable.
0:36:27 And Guy Kawasaki will be there too.
0:36:32 Become a little more remarkable with each episode of Remarkable People.
0:36:37 It’s found on Apple Podcasts or wherever you listen to your favorite shows.
0:36:42 Welcome back to Remarkable People with Guy Kawasaki.
0:36:47 Let us get to the validation ladder.
0:36:53 I would like you to explain the validation ladder so that people have a sort of a framework to
0:36:55 understand your work.
0:36:56 So please explain the ladder.
0:36:57 Yeah.
0:37:00 So this is a collection of skills.
0:37:04 We’ve got eight different skills that you can use to validate someone.
0:37:07 These are basically just little communication tactics.
0:37:11 If you use this skill, it will convey some degree of validation.
0:37:15 And to understand this, it helps to break down validation.
0:37:21 I said it really quick at the beginning, but the key components are mindfulness, understanding,
0:37:22 and empathy.
0:37:23 All right.
0:37:28 You’re trying to convey those qualities in such a way that the person feels accepted.
0:37:30 And you’re like, how do you do that?
0:37:31 How do you do it in such a way that they feel?
0:37:33 This is how with these skills.
0:37:39 So the first set are just what we call mindfulness skills.
0:37:42 They’re just helping you project that mindful awareness.
0:37:43 All right.
0:37:56 If I am sitting across from, let’s say, Donald Trump, I’m just trying to picture myself debating this guy.
0:37:56 It would just be so.
0:37:58 I would honestly love it.
0:38:02 I feel like it would be the ultimate test of my validation skills here.
0:38:07 But he’s going to be saying, inevitably, a lot of stuff that I do not understand.
0:38:10 Not just don’t agree with, but logically do not understand.
0:38:14 And when that is the case, all I can do is be mindful.
0:38:15 Okay.
0:38:18 All I can do is attend or copy.
0:38:23 These are the two skills we have to be mindful and to show that we’re mindful.
0:38:26 And that’s a pretty low level of validation, you might think, right?
0:38:27 It’s just awareness.
0:38:29 But a couple of things there.
0:38:33 One, awareness is incredibly powerful.
0:38:38 Attention is one of the most reinforcing experiences that we can provide.
0:38:49 If we want to torture somebody in this country, the method we use is to deprive them of attention by putting them in solitary confinement, right?
0:38:56 When we remove attention, people struggle, and they struggle deeply, all right?
0:39:00 But now, like, negative attention doesn’t necessarily feel good, right?
0:39:04 The task is just to be just non-judgmentally aware.
0:39:08 And that’s what these two skills help us do, attending and copying.
0:39:15 I got the basics of attending: it’s eye contact, it’s proximity, it’s gesturing, and it’s nodding.
0:39:19 So I love all those things.
0:39:23 But what do you do in a virtual world where it’s Zoom?
0:39:26 This is such a great question.
0:39:36 Again, as a psychologist, someone who’s working with emotion and people every day, the pandemic was such an immediate and visceral thing.
0:39:39 It created this visceral sense of disconnection.
0:39:43 And I think we all experienced it over time.
0:39:46 As a psychologist, I got to tell you, it hit me right away.
0:39:50 Because these tools that I rely on to connect were taken away.
0:39:57 And if you’re Zooming or whatever the case may be, you have to just be more intentional about those cues.
0:40:04 For instance, I will make a point of leaning in and make it clear that I’m leaning in.
0:40:13 I will adjust my monitor so that my eyes are as close to the camera as possible, so that it’s as close to eye contact as possible.
0:40:17 You can see it in this interview, in the recording.
0:40:18 I do a lot of gesturing.
0:40:22 And I’m making a point of doing it up here, right?
0:40:25 I’m showing that I am engaged through those nonverbals.
0:40:27 But you have to be more intentional about them.
0:40:28 That’s the key.
0:40:31 The worst is to do the camera off.
0:40:33 You’re just listening or something and people can’t see.
0:40:34 That is like the absolute worst.
0:40:37 And yet, that is where many of us reside.
0:40:41 So with the nonverbals over virtual, you just have to be more intentional about it.
0:40:42 That’s all you can do.
0:40:51 You made a point, and I cannot remember which one of the eight skills it was affiliated with.
0:41:02 But I love this point, which is that you should find a way to help the other person make their point.
0:41:02 Oh, yeah.
0:41:06 So first of all, refresh my senile mind.
0:41:13 What is this concept of helping people be a better communicator by suggesting things?
0:41:15 What is that associated with?
0:41:17 So there’s two things at play there.
0:41:18 One of them is attending, okay?
0:41:20 So that’s that we were talking about.
0:41:21 You can use these nonverbals.
0:41:25 And then the other way to attend is in how you listen.
0:41:33 And it’s a little game that you play with yourself where you’re thinking as you’re listening, what’s this person’s point?
0:41:35 And like, why does it matter to them?
0:41:37 You’re streaming information, trying to figure that out.
0:41:43 And then, and this is critical, how could I do a better job of making this person’s point?
0:41:46 Again, not do I agree with it?
0:41:48 Not how could I defeat it or argue it?
0:41:49 What’s my rebuttal?
0:41:49 No.
0:41:53 It’s how could I articulate this better than they’re doing right now?
0:41:59 But isn’t that going to create hostility?
0:42:03 Who the hell is this person to tell me, you know, how to do this better?
0:42:04 Sure, sure.
0:42:05 At this point, you don’t tell them.
0:42:08 This just informs how you are listening.
0:42:16 And if you watch great late night show hosts, you will see that they are all playing some version of this game.
0:42:19 They are trying to get the best interview they can.
0:42:23 As a podcast host, I imagine there’s some of this in what you do as well.
0:42:30 You’re trying to help the guest message, get that message across as effectively as possible.
0:42:33 And it’s reflected in how you listen and the questions you ask.
0:42:35 And that’s what we’re going for here.
0:42:38 It’s not about like, haha, I’m so much smarter than them, these idiots.
0:42:41 Let me come in and do this better than they’re doing it.
0:42:47 No, it’s more just you would ask questions differently if you were trying to flesh out your understanding or their point.
0:42:54 I have to admit that I slightly misinterpreted this thought.
0:42:57 And then I said, okay, so this is a great thought.
0:43:03 What can I constructively offer Caroline about her book?
0:43:06 So I came up with some ideas.
0:43:09 But now that you tell me that, maybe I should just keep them to myself.
0:43:11 Oh, no, no, I want to hear them.
0:43:14 Okay.
0:43:19 Okay, so take this in a spirit of one author to another.
0:43:20 Oh, please.
0:43:23 Some slight changes that I would do.
0:43:24 Yeah.
0:43:25 Okay.
0:43:27 Positively.
0:43:29 I want to validate your great book.
0:43:29 I love you.
0:43:32 You wouldn’t be on this podcast if I didn’t like what you did.
0:43:37 So I have one idea.
0:43:40 In the back of your book, you have this appendix.
0:43:43 And this appendix lists like, you know, these are the eight skills.
0:43:44 This is a summation.
0:43:46 This is an example, right?
0:43:47 There’s a one-page appendix.
0:43:53 I think you should move that up into the first time you discuss the ladder.
0:43:59 Because when I read about your ladder, I have to admit, I had some mental fog.
0:44:04 I had to go back several times because it was like, she just said there’s eight things.
0:44:07 But then she’s talking about three things, mindfulness.
0:44:08 Yeah, yeah, yeah, yeah, yeah, yeah.
0:44:11 So I said, so is it the three or is it the eight?
0:44:12 The eight, right.
0:44:18 And it took me quite a while to figure out the three contain the eight.
0:44:23 The eight is divided into three sections of mindfulness, understanding, and empathy.
0:44:27 And those eight things add up to those three things.
0:44:28 So that took me a while to figure it out.
0:44:33 But your appendix, when I saw that appendix, I said, aha, now I get it.
0:44:39 This is such frustrating feedback because between you and me, this was such a fight.
0:44:41 Like, I agree.
0:44:44 I wanted that earlier in the book.
0:44:48 And the concern was that it would be too much content too soon.
0:44:51 I have to invalidate your editor or publisher.
0:44:53 They’re wrong about that.
0:44:55 Right, because you need to see it all.
0:45:00 Yeah, I mean, you need to understand the big picture that, you know, these three subsections
0:45:03 are made up of eight skills that add up to the ladder.
0:45:05 I have one more comment.
0:45:11 I got to tell you, when I read this, I thought, this is the most interesting story.
0:45:13 This cannot possibly be true.
0:43:17 So I went to ChatGPT, and I asked this question.
0:45:19 And of course, it is true.
0:45:24 You say in your book that there’s a golden rule.
0:45:32 And the golden rule is that in a court case, a lawyer cannot suggest to the jury, put yourself
0:45:33 in this person’s place.
0:45:35 Isn’t that how you would react?
0:45:38 You cannot appeal to empathy.
0:45:40 It is illegal to do that.
0:45:41 And I read that.
0:45:43 I said, that cannot be true.
0:45:44 And it is true.
0:45:44 It’s true.
0:45:47 It is really, really true.
0:45:48 I had no idea.
0:45:49 So can you explain that?
0:45:51 Because that was shocking to me.
0:45:51 Yeah.
0:45:55 So this is getting at some of those understanding skills.
0:45:59 Like, how do you understand and connect with someone else’s experience?
0:46:02 And one of the things we’re always told is to put yourself in the other person’s shoes.
0:46:06 And really, that is quite effective.
0:46:11 It is a skill to be able to say, how would this feel to me?
0:46:14 Like, in that same way that we were talking about, do they need acceptance or problem solving?
0:46:15 Put yourself in the kid’s shoes.
0:46:18 What do they want to hear after getting a bad grade?
0:46:19 Do they want to hear how to study better?
0:46:23 Or do they want to hear, that sucks, right?
0:46:25 How awful, X, Y, or Z.
0:46:32 So when you do that, it immediately changes your perception of the experience.
0:46:38 And interestingly, it’s so effective in doing so that, yeah, you’re not allowed to do that
0:46:42 as a lawyer in appealing to the court.
0:46:48 You can’t ask jurors to think from that angle because it could make them empathize.
0:46:52 And that could influence their decision, right?
0:46:56 Isn’t the whole point to be tried by a jury of your peers?
0:46:58 I know, right, right?
0:47:05 But there seems to be this concern about objectivity being tainted by emotion.
0:47:08 I don’t know if I agree with that per se.
0:47:14 I think that’s part of the reasoning is the emotional logic that goes into it.
0:47:20 I see them as very equally important in making smart judgments and wisdom.
0:47:26 If you were a prosecuting attorney and you said, put yourself in the place, there’s three cops
0:47:27 holding you down.
0:47:29 One has his knee on your neck and you’re choking.
0:47:31 Put yourself in that place.
0:47:31 Yeah.
0:47:38 Or how about if you are an immigrant and you’ve been here 40 years, you’ve raised kids, three
0:47:39 of them are Marines.
0:47:42 And now you get arrested in Home Depot for what?
0:47:43 For what?
0:47:45 I mean, you pay your taxes.
0:47:46 You do everything, right?
0:47:46 Yeah.
0:47:51 Again, another really visceral example. I don’t know why I keep coming back to kids on this podcast.
0:47:57 I’m not usually this kid-focused, but I often tell parents, like, especially a big
0:48:05 guy: you’re 200 pounds, yelling at somebody who is three feet tall.
0:48:08 Think about how that would feel.
0:48:13 Do that kind of perception shift and let that inform your reaction.
0:48:18 Because you may not feel like you’re being scary, but that’s terrifying.
0:48:24 We are at the one hour mark.
0:48:26 Oh, I can’t believe it.
0:48:32 Maybe I should change this podcast to the Remarkable Parenting Podcast.
0:48:32 I know.
0:48:35 I don’t know why I went so far in that direction.
0:48:37 No, I took you in that direction.
0:48:39 I wanted to go in that direction.
0:48:41 What else is more important than parenting?
0:48:41 I know.
0:48:42 Yes.
0:48:44 I’ve come to believe that more and more.
0:48:47 I have so much more faith in our children than in us.
0:48:49 I hate to say it.
0:48:52 Maybe I should interview Havana.
0:48:55 Havana, listen, your mom says she validates you all the time.
0:48:56 Is that true?
0:48:59 Does she problem solve for you, Havana?
0:49:01 Or does she let you figure everything out?
0:49:02 You know what Havana says?
0:49:05 She says, I know you’re validating me.
0:49:08 And it feels good, but I know what you’re doing.
0:49:12 She’ll say, it feels good, but I know what you’re doing.
0:49:21 She’s going to read this book someday and say, my God, Ma, couldn’t you have used the pseudonym or something?
0:49:22 Yes, exactly.
0:49:26 Okay, so I have one last question, and it’s about Havana.
0:49:28 Why Havana?
0:49:30 There must be a story.
0:49:36 You know, you didn’t call her Houston or Dallas or Mar-a-Lago or Los Angeles or Portland.
0:49:37 Why Havana?
0:49:39 Are you a socialist?
0:49:43 My mother fled communist Cuba, and she grew up in Havana.
0:49:52 And her middle name is after her grandmother on my husband’s side, and then her first name is after, not after my mom, but speaks to her experience, yeah.
0:49:55 You don’t meet many kids in Havana.
0:49:57 I know, and we’ve found one.
0:50:00 We have found one, and it’s like through Instagram or something.
0:50:04 And I feel just like such a kinship to this young child that I’ve never met.
0:50:08 All righty.
0:50:09 All righty, Caroline Fleck.
0:50:13 As you can tell, I really learned a lot from your book.
0:50:17 Now, we only did attending, really, and copying.
0:50:17 Yes.
0:50:19 And there are six more skills.
0:50:26 I really recommend this book, and I hate to tell you, Caroline, but I recommend that you start with the appendix.
0:50:30 If you start with the appendix, you will really understand.
0:50:31 Yeah.
0:50:32 I agree.
0:50:33 I support that.
0:50:36 So, read this book backwards.
0:50:39 Basically, what we’ve said is the appendix is good and the epilogue is great.
0:50:42 So, just start at the end and go backwards.
0:50:51 I’m all about the recency is more important than primacy or whatever the opposite is.
0:50:51 That is right.
0:50:52 That is right.
0:50:55 All right, Caroline Fleck.
0:51:05 Thank you so much for being on this podcast, and I think people listening to this and reading your book will have a very good tool to be remarkable.
0:51:08 So, thank you for coming on my podcast.
0:51:10 Thank you so much for having me.
0:51:11 This was an absolute blast.
0:51:15 I bet you say that and validate all the podcasters.
0:51:16 No, no.
0:51:18 Just you.
0:51:20 Yeah, I believe you.
0:51:25 Because, you know, I need validation so much.
0:51:27 We all do.
0:51:29 All righty.
0:51:38 So, now I want to validate the rest of the Remarkable People podcast staff, which is Madison Nisner, the ace producer and co-author.
0:51:43 Tessa Nisner, who is a researcher and co-producer, JFC.
0:51:46 And finally, sound designer, Shannon Hernandez.
0:51:49 That’s the Remarkable People team.
0:51:54 So, until next time, be remarkable and go out and validate somebody.
0:51:59 Actually, start by validating yourself and then go out and validate people.
0:52:04 This is Remarkable People.
What if the secret to better relationships isn’t fixing problems but simply making people feel understood? Clinical psychologist Caroline Fleck reveals why validation—not agreement—transforms how we connect with others. In her groundbreaking book Validation, Caroline shares the science behind why feeling seen matters more than being right. Discover the eight-step validation ladder, learn why accepting emotions leads to real change, and find out how this revolutionary approach can improve your parenting, leadership, and relationships. Plus, hear Caroline’s honest confession about missing a literal tick on her daughter’s back and what it taught her about judgment versus understanding.
—
Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable.
With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People.
Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable.
Episodes of Remarkable People organized by topic: https://bit.ly/rptopology
Listen to Remarkable People here: **https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827**
Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Thank you for your support; it helps the show!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
-
Les Schwab: Why Real Ownership Outperforms Experience, Capital, and Credentials (Outliers)
AI transcript
0:00:05 Charlie Munger once asked me how can someone give away 50% of profits and make billions
0:00:10 more than if he’d kept it all? Before I could answer he told me about Les Schwab,
0:00:14 a tire shop owner who understood incentives better than almost anyone.
0:00:17 What Schwab discovered will change how you think about business.
0:00:28 Welcome to The Knowledge Project. I’m your host, Shane Parrish.
0:00:33 In a world where knowledge is power, this podcast is your toolkit for mastering the best of what
0:00:38 other people have already figured out. This episode is for educational and informational purposes only.
0:00:47 What Les Schwab discovered was deceptively simple. Most businesses treat employees like expenses to
0:00:54 minimize. He treated them like partners to enrich. The math was shocking. He gave away half his profits
0:00:59 and built a multi-billion dollar empire. Here’s how it worked. When people working in the tire
0:01:05 centers got a share of the profits, they didn’t just change tires. They built relationships with
0:01:11 customers. When managers owned real equity with skin in the game, they ran stores like their family’s
0:01:17 future depended on it. Because it did. Les documented his business lessons in his autobiography,
0:01:23 Pride in Performance: Keep It Going. He wrote it himself on a 40-year-old typewriter because he wanted
0:01:29 every entrepreneur to understand exactly how he did it. No ghostwriter, no corporate polish,
0:01:36 just the raw blueprint for turning a leaky shed into an empire. Les proved that the most ruthless business
0:01:44 strategy is radical generosity. He turned employee loyalty into a competitive moat so deep that Walmart
0:01:53 couldn’t cross it. This is his story. Before Les Schwab was the name on over 400 tire stores across the
0:02:01 American Northwest, it was the name of a kid born into nothing. Bend, Oregon, 1917. His parents were desperate
0:02:07 homesteaders fighting the high desert for a living. His mother, Alice, taught him everything that mattered.
0:02:13 Then pneumonia killed her when Les was 15. That left him with his father, Bishop. A study in contradiction,
0:02:19 gentle and hardworking when sober, a maniac when drunk. Les spent his teenage years terrified that his
0:02:26 father would show up at school drunk and humiliate him. Poor but proud. That’s how Les described himself.
0:02:32 That pride was armor. A year after his mother died, they found his father’s body outside of a moonshine
0:02:39 joint. Les was 16. He was now an orphan. His relatives offered to take him in, but he said no.
0:02:44 Instead, he rented a room in a boarding house for $15 a month, decided he was an adult. The world had
0:02:50 given him a hard education and an allergy to alcohol. Most kids in his position would have taken the help,
0:02:57 moved in with family, and stayed safe. Les chose the harder path. Though it wasn’t so much a conscious
0:03:02 choice of the harder path; he just didn’t feel he could rely on anyone else. He took everything
0:03:07 on himself. I understand that. Choosing pride over comfort and independence over security
0:03:12 shows in nearly everything he did. The lesson here is a bit counterintuitive. The worst
0:03:18 things that happen to you can become an advantage, but only if you refuse to let them define you as a victim.
0:03:22 Les could have blamed his circumstances. Instead, he used them as fuel.
0:03:27 Les got his first paper route before his parents died. However, this came with two problems.
0:03:33 One, he couldn’t ride a bike. Two, he couldn’t afford a bike. So he ran every day for two months,
0:03:40 running his entire paper route on foot to earn money for a used bicycle. He had to do the job to afford
0:03:46 the tool required to do the job. One morning, he couldn’t find a customer’s address. Ten miles he ran on
0:03:51 an empty stomach. He ended up collapsing in the street. Les needed to work. He had no choice.
0:03:57 He had bills to pay, and he needed a bike. Nobody was going to hand him anything. At the same time,
0:04:03 he was also washing dishes at a restaurant, earning $3 a week plus meals. Here’s his schedule as a
0:04:08 16-year-old orphan. Paper route in the morning, school all day, restaurant at night.
0:04:13 Then he started doing the math. Selling newspaper subscriptions paid 50 cents each. He could make more
0:04:18 in a few hours selling than a whole week washing dishes. So he quit the restaurant. When he finally
0:04:23 saved enough for a bike, he got a new route under Mr. Goldenberg. Goldenberg was important because he
0:04:28 taught him how to sell. Not just deliver newspapers, but actually sell them. Knock on doors, talk to
0:05:34 people, persuade them. Les doubled the route’s numbers in a few weeks. Goldenberg saw potential.
0:04:40 So he had an idea. Start a Sunday route in farm country. It’s an underserved market, but a logistical
0:04:46 nightmare. Les would need a car. During the Depression, most 16-year-olds saved for a baseball
0:04:54 glove. Les was saving for a 1926 Chevrolet with a box on the back. $75 cash. Sundays, he’d drive the
0:05:00 rural roads. On weekdays, he was back to the bicycle because gas cost too much. He liked the car, but he
0:05:06 liked money even more. He signed up 80% of farms in rural Bend. That’s when Les learned the principle that
0:05:11 would define his business philosophy. People don’t buy your product. They buy your service, your reliability.
0:05:18 They buy you. By his senior year, Les controlled all nine Oregon Journal routes in Bend. He was making
0:05:25 $200 a month during the Great Depression. More than his high school principal. Out of 500 students,
0:05:31 only Les drove a new car. A 1934 Chevrolet bought with cash. His classmates thought it was showing off,
0:05:37 but they missed the point. The car wasn’t about status. It was proof. Proof that hard work plus smart
0:05:43 thinking beats any disadvantage. What stands out to me here is the compound effect of small
0:05:50 advantages. Les turned one paper route into nine through relentless execution. A lot of people treat
0:05:55 smaller jobs as stepping stones to bigger ones. They’re never fully present in what they’re doing,
0:06:02 never giving it their all. Les gave 100% of his effort to the work right in front of him all the time,
0:06:08 and that always led to more work and more opportunity. As Charlie Munger said, the best way to get more work
0:06:14 is to do the work right in front of you. And do it well. Les met Dorothy Harlan when they were both
0:06:20 teenagers. They married at 18 and he bought them a small house. They barely had time to unpack. His
0:06:27 reputation as a newspaper salesman had spread beyond Bend. A paper in Eugene, Oregon offered him a district
0:06:32 manager job. The newlyweds packed up and hit the road, living out of motels, while Les traveled his
0:06:38 territory selling subscriptions. They called him a circulation man, someone with an almost supernatural
0:06:43 ability to boost subscription rates wherever he went. Les negotiated a new job with better pay,
0:06:49 but he had one question that revealed how he thought about the world. Who’s my boss? The owner said he’d be
0:06:55 Les’ boss. Simple enough. But within weeks, Les had three different managers giving him contradictory orders,
0:07:00 so he went straight back to the owner and laid out the problem. The owner immediately straightened out
0:07:05 the other two managers. The lesson here is if you don’t know who you’re accountable to, you’re
0:07:10 accountable to everyone, which is the same thing as being accountable to no one. Clear lines of authority
0:07:17 offer clarity and purpose. He was young and talented and perhaps even a bit cocky, but he had pulled
0:07:23 himself up from nothing, outworked almost everyone else, and learned through trial by fire. When Les took
0:07:29 over the paperboy program, he discovered it was hemorrhaging carriers. 20% of carriers quit every month,
0:07:36 so he got creative. First, he obtained lists of every 7th and 8th grader in local schools and recruited them
0:07:43 personally. Then he did something radical for the era. He hired girls. They turned out to be more reliable than
0:07:49 boys. But his real genius was the honor carrier program. Each month, one carrier won based on sales,
0:07:56 service, and bookkeeping. The prize? $25 and their picture in the paper. This was Les’ first real
0:08:02 experiment with incentive design, and he was learning how much they matter. One of my favorite Mungerisms is
0:08:07 show me the incentive and I’ll show you the outcome. If you reward the behavior you want, you’ll get more
0:08:13 of it. The lesson here connects directly to building any organization. Les understood that unclear reporting
0:08:20 structures create chaos. Everyone thinks they’re in charge, so no one really is. He also grasped something
0:08:26 most managers miss. Recognition often matters more than money. That honor carrier program cost $25 a month,
0:08:32 but transformed retention. These weren’t just newspaper tactics. They were blueprints for building
0:08:38 a multi-billion dollar business. At 33, Les Schwab was consumed by a single ambition: to own his own
0:08:44 business, to be in control of his destiny. The newspaper world suddenly felt too small. He joined
0:08:50 every business club he could find, the Jaycees, Toastmasters, Chamber of Commerce, anything that smelled like
0:08:57 business. He was terrified of getting old, of falling into a rut that he’d never escape. And then he saw
0:09:04 it: a tire shop for sale in Prineville, Oregon. Les knew nothing about tires, but he did know sales, and
0:09:09 he figured that was what mattered. His brother-in-law offered to partner with him, then got cold feet,
0:09:14 and backed out. Racked with guilt, he came back and offered to fund Les, but no partnership. Les would
0:09:22 be on his own. The shop was an OK Rubber Welders franchise: new tires, retreads, flat repairs, hard,
0:09:29 dirty, manual work. In order to purchase it, he’d need to go all in on himself, and sell his house,
0:09:34 and borrow against his life insurance, and borrow from his brother-in-law. Les scraped together money
0:09:39 from every corner of his life. Everything he’d built, everything he’d saved, it went into this one bet.
0:09:46 On January 1st, 1952, at 34 years old, Les Schwab walked into his tire shop as the new owner.
0:09:53 It was a leaky 1,400 square foot shed with no running water and no indoor plumbing. It had one employee.
0:09:57 The annual sales for the year before were $32,000.
0:10:04 Are you struggling to manage your projects at work using lots of different tools for communication,
0:10:10 task management, and scheduling? It doesn’t have to be this hard. Basecamp is the refreshingly
0:10:16 straightforward, reliable project management platform. It’s designed for small and growing
0:10:20 businesses, so there’s none of the complexity you get with software designed for enterprises.
0:10:27 Complexity kills momentum. Basecamp clears the path so your team can actually move. Do away with scattered
0:10:33 emails, endless meetings, and missed deadlines. With Basecamp, everything lives in one place. To-do lists,
0:10:39 message boards, chat conversations, scheduling, and documents. When information is scattered,
0:10:45 attention is too. Basecamp brings both back together. Basecamp’s intuitive design ensures
0:10:50 that everyone knows what’s happening, who’s responsible, and what’s coming next. My head of
0:10:56 operations swears by this platform and is the first person to suggest it to anyone. If you need another
0:11:01 decorated referral, you should call her. Whether you’re a small team or a growing business,
0:11:07 Basecamp scales with you. Stop struggling, start making progress, and get somewhere with Basecamp.
0:11:14 Sign up for free at Basecamp.com. While Les knew nothing about tires, he knew a lot about selling and
0:11:20 being able to outwork other people. His first taste of real business, one he owned and controlled,
0:11:28 was about to begin. On his first day, a customer walked in wanting two six-ply tires mounted. Les got
0:11:34 right to work. There was just one problem. He had no idea what he was doing. He’d always taken flats to
0:11:41 a service station, and now he was the service station. Using hand tools on a cold concrete floor,
0:11:47 he made a complete mess of it until his lone employee arrived and saved him. That first month,
0:11:54 Les did $2,800 in sales, about what the previous owner had done. But by June, something had shifted.
0:12:03 Sales hit $10,000 a month. By year’s end, he’d done $150,000 in revenue, five times what the previous
0:12:09 owner managed. Growth created the best kind of problem. He needed help. Les appointed himself
0:12:15 outside salesman and told his one employee to hire someone. The new hire was, to put it charitably,
0:12:21 no good, so he fired him. Then something interesting happened. A man named Frank Kennedy walked in off
0:12:27 the street asking for work. He and his wife had just moved to Prineville. He was looking around town
0:12:33 and decided he wanted to work for Les. Les checked his references. Frank took to the work like he was born
0:12:38 for it. This became a pattern. The right people kept showing up, drawn to something they couldn’t
0:12:43 quite name but could feel. Les was creating gravitational pull for good, hardworking people.
0:12:48 Good people can sense when something real is being built. It’s the same energy a lot of people feel at
0:12:55 startups today. But Les was learning just how rigged the tire business was. The major rubber companies had
0:13:01 what Les called phony pricing. A truck tire might cost him $100 wholesale but he’d visit competitors
0:13:08 and find them selling that same tire for $90. He’d call his supplier furious. They’d say for that deal
0:13:14 we’ll sell it to you for $90 and give you a 5% commission. It was such a shell game. Months of paperwork
0:13:22 and bookkeeping, floating expenses all to arrive at the same 5% margin every dealer got. The entire system was
0:13:27 designed to prevent real competition. Les wrote down his philosophy in a fury of anger.
0:13:33 Never take advantage of a customer. Never take advantage of an employee. But take all the advantage
0:13:37 you possibly can of the rubber company because they are not being fair and honest.
0:13:44 The constraints forced him to innovate. Big dealers had fleets of $6,000 service trucks
0:13:50 visiting commercial accounts. Les had one store and no capital for trucks. So he flipped the model.
0:13:55 He put an ad in the paper with a simple message. You know the wholesale prices the big guys get.
0:14:01 Come to my shop and I’ll give them to you directly. No middlemen. His competitors visited customers once
0:14:06 a week. Les was at his store six days a week. A permanent fixture. Best service, best prices,
0:14:12 but you had to come to him. It was asymmetric warfare. He had low overhead. It was a simple
0:14:19 value proposition and it worked. People came in droves. What fascinates me here is how constraints can
0:14:25 become advantages. Les couldn’t afford to play by industry rules so he invented new ones. He couldn’t
0:14:31 compete on the supplier’s terms so he competed on transparency. He couldn’t go to customers directly
0:14:37 so he gave them a reason to come to him. The lesson isn’t that you need resources to win. It’s that you
0:14:43 need to see the game differently than everyone else is playing it. Les was ready to expand. The nearby town
0:14:50 of Redmond needed a tire store. He bought land, put up a building, invested $10,000 total. Now he had two
0:14:56 stores. Both okay rubber franchises. But there was a problem. He’d bitten off more than he could chew.
0:15:03 Running between the two stores was killing him. So Les made his star employee Frank Kennedy a win-win deal
0:15:09 that would become the foundation of his empire. Here’s how it worked. Frank would manage the Redmond
0:15:16 store. He’d pay Les $200 monthly rent and take $400 salary. After that, they’d split all profits 50-50.
0:15:23 But here’s the genius part. Frank had to leave his share of the profits in the business until his stake
0:15:29 equaled Les’ initial investment. That was real skin in the game. Sit with that for a second.
0:15:34 Frank had zero upfront risk. Les put up all the capital. But every day Frank ran the store well,
0:15:41 he owned more of it. The better he performed, the faster he’d build equity. It was his store in every
0:15:48 way that mattered, except Les kept half the upside. Les even added another twist. If Frank wanted a raise,
0:15:53 he could give himself one. But the store’s rent would increase by the same amount, giving Les a raise too.
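The mechanics of that deal lend themselves to a quick sketch; a minimal model of the terms as described (the $10,000 default matches the Redmond investment mentioned earlier, while the monthly profit figures in the example are hypothetical):

```python
# Sketch of the Frank Kennedy deal as described: $200/month rent to Les,
# $400/month salary for Frank, remaining profit split 50-50, with Frank's
# half retained in the business until his stake equals Les's initial
# investment. The monthly profit figures below are hypothetical.

def simulate_deal(monthly_store_profits, les_investment=10_000,
                  rent=200, salary=400):
    """Return Frank's retained stake after the given months of profits."""
    frank_stake = 0
    for gross in monthly_store_profits:
        net = gross - rent - salary      # profit after rent and salary
        frank_share = net // 2           # 50-50 split with Les
        if frank_stake < les_investment:
            frank_stake += frank_share   # retained as equity, not paid out
    return frank_stake

# Two years at a hypothetical $1,500/month gross: Frank's retained stake
# passes Les's $10,000 investment in month 23.
```

The self-balancing raise works the same way in this model: raising `salary` by some amount while raising `rent` by the same amount leaves `net` unchanged, so Frank's raise comes dollar-for-dollar with Les's.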
0:16:00 A self-balancing system. People thought Les was crazy. Why give away half your profits?
0:16:07 Les saw it differently. If I share half the profits, I still have half. And if Frank makes more money,
0:16:11 he’ll work harder to make the store more successful. And if the store is more successful,
0:16:17 my half is worth more than the whole used to be. It was pure math. But it was also more than math.
0:16:21 It was an understanding of human nature. Les remembered running his paper route on foot
0:16:26 because he couldn’t afford a bike. He remembered how the honor carrier program had motivated those
0:16:32 kids. He knew what it felt like to want something to be yours. The Redmond store turned profitable
0:16:38 immediately. Frank ran it like he owned it because increasingly, he did. The arrangement with Frank
0:16:45 became the template. Every Les Schwab store would follow this model. Every manager would be a partner,
0:16:50 not an employee. This is one of the most elegant business solutions I’ve seen.
0:16:54 Les solved multiple problems at once. He couldn’t manage multiple locations himself,
0:17:00 so he needed to retain talent. And he needed managers to think long-term. The solution aligned
0:17:07 everyone’s interests perfectly. Frank couldn’t get rich without making Les rich. Les couldn’t expand
0:17:14 without making Frank rich. Les stumbled onto his next innovation by accident. His Prineville store
0:17:19 was bursting. Retreading equipment and tires cluttered every square inch of the sales floor.
0:17:25 They were working in what amounted to a filthy garage. By now he had two stores. Les saw the problem
0:17:31 differently. Why should customers shop in a dirty garage? Why not separate the dirty work from the selling?
0:17:36 He bought a small carpenter shop nearby and moved all the retreading equipment there. Suddenly,
0:17:42 his stores looked like actual stores. Places where people might want to shop, not just get their tires
0:17:47 fixed. With retreading centralized, he could exploit the economies of scale. One set of equipment,
0:17:51 serving multiple stores. The savings would compound as he grew.
0:17:59 Time for store number three. Les set his sights on Bend, his old hometown. But there was a problem.
0:18:06 Bend already had an OK Rubber Welders franchise. This should have stopped him. Franchise territories are
0:18:12 kind of sacred. But Les had learned there’s always a deal if you’re creative enough. He pitched the OK
0:18:19 district manager something unprecedented. What if Bend had two OK stores and he’d pay royalties to the existing
0:18:26 operator and run a second location? The manager, probably thinking Les was crazy, agree. And it was
0:18:31 messy from the start. The two stores managers couldn’t get along. Les’s managers quit, leaving him with an
0:18:38 empty building and a lease. At this point, most people would have admitted it was a mistake and moved on.
0:18:44 Les, however, was not most people. He’d been planning to start his own brand anyway. He’d already been
0:18:51 advertising as Les Schwab OK Rubber Welders to get his name out there. Now seemed like a perfect time to
0:18:57 cut loose. When he told OK Rubber Welders he was going independent, they said he couldn’t do that.
0:19:04 Let’s not argue, Les replied. I just wanted to be open about it. He needed a name. Tire Center was too
0:19:09 generic. Tire Service Center wasn’t much better. He was thinking bigger than one store. He settled on
0:19:16 Les Schwab Tire Center, with his name initially in smaller print. Then something interesting happened.
0:19:22 Customers started asking for Les Schwab Tires, not Goodyear, not Firestone, Les Schwab. What strikes me
0:19:29 here is how Les bounced back every time he hit a wall. No space? Centralized operations. Franchise territory
0:19:34 taken? Create a new model. Manager quits? Perfect time to go independent. But the real insight was
0:19:40 discovering that customers trusted him more than they trusted the tire brands. In a commodity business,
0:19:47 the seller’s reputation matters a lot. Store number four came from an unusual source. Gordon Pryday
0:19:52 worked at the Prineville store and every morning he’d walk in with the same greeting. When are we going to
0:19:57 open in Madras? Not good morning, not how’s business. When are we going to open in Madras?
0:20:03 Madras was another small Oregon town. Gordon saw opportunity there and wouldn’t let it go. Problem
0:20:10 was, Madras already had an OK Rubber franchise. Les figured if he was already fighting OK Rubber in
0:20:15 Bend, he might as well make it a proper war. He drove to Madras and found a bankrupt fruit stand for sale.
0:20:22 $10,000 got him the building and the land next door. The fruit stand had a small apartment in the back.
0:20:27 Gordon moved his family in immediately. Let that sink in. This man believed so strongly in the
0:20:32 opportunity that he moved his wife and kids to the back of a converted fruit stand. The family kitchen
0:20:37 table sat steps away from the tire racks. When customers left for the day, Gordon was still there.
0:20:43 When they arrived in the morning, he was already there. This wasn’t a job, it was a mission, and it existed
0:20:49 because of the profit sharing deal. Gordon knew that every tire he sold was building his own wealth.
0:20:56 Les and Gordon ran the numbers. They’d break even at $2,000 a month, make a small profit at $2,500, and do
0:21:03 pretty well at $3,000. In year one, they broke even exactly. In year two, $800 profit, split 50/50.
0:21:08 And here’s the punchline. Another tire dealer opened in Madras around the same time.
0:21:15 Shiny new building, proper equipment, all the advantages that Gordon lacked. That competitor
0:21:20 went out of business in two years. This story captures something profound about ownership
0:21:25 versus employment. The competitor had every advantage except the one that mattered, skin in the game.
0:21:32 Gordon Pryday wasn’t just managing a store, he was building his family’s future. That’s what real
0:21:36 incentive alignment creates. People who live in the back of a fruit stand because they’re not working
0:21:41 for you, they’re working with you. They’re not building your dream, they’re building their dream
0:21:48 with you. But Les had a problem. He was running OK Rubber Welders stores in Prineville and Redmond,
0:21:54 but Les Schwab tire centers in Bend and Madras. Two different brands, one owner. It was confusing for
0:22:01 customers and open rebellion against his franchise agreement. The corporate brass at OK had seen
0:22:06 enough. Five executives flew from headquarters to confront Les in a Redmond motel room.
0:22:10 “We’ve come for your answer,” they said. “Get out of Bend and Madras or else.”
0:22:16 Les had been losing sleep over this moment for weeks. He paced the floor at night,
0:22:22 terrified they’d seize his equipment and destroy him. The timing couldn’t be worse. His largest customer
0:22:29 was behind on payments that equaled his entire net worth. If that customer went bankrupt and OK seized
0:22:35 his equipment, he’d lose everything. But sitting in that motel room facing five corporate executives,
0:22:41 something snapped. “I don’t want any more harassment from you people,” said Les. “If you have anything more to
0:22:49 say, say it in court.” And then he walked out. It was pure bluff. Les had no money for lawyers. No case
0:22:56 to make. For months afterwards, he lived in terror that the lawsuit would end everything. The lawsuit
0:23:04 never came. Years later, Les figured out why. OK had 1100 franchises nationwide. If they sued him and lost,
0:23:10 it could set a precedent that would unravel their entire system. They couldn’t risk it. By standing up to
0:23:15 them, he’d accidentally found their weakness. They needed the franchise system more than they needed
0:23:22 to crush one rebellious dealer in Oregon. Now, Les moved fast. He repainted all four stores with Les
0:23:30 Schwab Tire Center, prominently displayed. One brand, one identity, one vision. The franchise rebellion
0:23:37 was over. Les was free. This moment reveals something crucial about negotiations and power. Les had no leverage
0:23:44 except for one thing: the cost of being wrong. They could crush him, but if they failed, they’d create a
0:23:50 precedent that threatened their entire business model. Sometimes your only power is making the consequences
0:23:56 of attacking you too expensive for your opponent to risk. Les won not through strength, but by understanding
0:24:03 what the other side feared losing. He saw the whole board, not just his own pieces. During the chaos of
0:24:08 growth and franchise battles, Les would escape with Dorothy on long drives. These weren’t romantic
0:24:13 getaways. They were strategy sessions. “I’m going to build a small warehouse,” he told
0:24:18 her one evening on the Columbia River Highway. “Move my bookkeeper out there, buy the tires myself,
0:24:24 do the advertising, price the tires, handle the books, maybe build six, seven, or eight stores someday.”
0:24:30 He could see it all mapped out. A central operation supporting a network of profit-sharing stores.
0:24:37 Each manager thinking like an owner because they were an owner. He’d need scale to buy tires at volume
0:24:43 discounts. He knew advertising. He knew promotion. Most importantly, he knew how to align incentives.
0:24:52 Six, seven, maybe eight stores. That seemed wildly ambitious in 1956. He would go on to build 410 before he died.
0:24:59 By the time Les had seven stores, a new problem emerged. The stores were growing. What started as
0:25:06 one manager and a helper had become teams of four, five, six people. Les wanted to extend profit-sharing
0:25:12 deeper into each store. His solution was elegant. Managers would appoint their best person as assistant
0:25:20 manager. That person would get 10% of profits. 5% from Les, 5% from the manager. The split would go from
0:25:27 50-50 to 45-45-10. The assistant manager would build equity in the store. When a new location opened,
0:25:33 they’d become the manager there, taking their accumulated profits with them. It was a self-replicating system.
0:25:39 Every store would create its own successor. The managers hated it. They didn’t want to lose the 5%.
0:25:45 This threatened Les’ entire growth model. Without succession planning, expansion would stall.
0:25:51 So Les wrote one of the most remarkable memos in business history. He said this,
0:25:56 “If a bright, young, ambitious man joins our company and wants to make our company his career,
0:26:02 does he do it because he likes Norm or Gordy or Bob? Do you men think that some little fairy sent
0:26:09 you this man just to help you build your bonus? This man is going to work for low pay year after year,
0:26:16 just so you can build your profit share into a nice, fat nest egg? No, I don’t think so.
0:26:21 This man didn’t join the company because of the store manager’s future. This man joined the company
0:26:28 because of his future with Les Schwab Tire Centers, not in you personally. If you men block this man,
0:26:34 you are being selfish.” Two of the seven managers appointed assistants, and then Les dropped the
0:26:42 hammer. Effective immediately, all manager shares dropped from 50 to 45%. If they appointed an assistant,
0:26:50 that person got 10%. If they didn’t, the company kept the extra 5%. Suddenly, every store had an assistant
0:26:57 manager. And the growth engine roared back to life. This is leadership at its finest. Les understood
0:27:02 something that his managers didn’t. Their wealth came from the system he created, not just their
0:27:07 individual contributions. When they hoarded opportunity, they violated the very principle that
0:27:13 made them successful. His memo didn’t just shame them, it reminded them of their moral obligation to
0:27:18 pay forward what they’d received. But when moral arguments failed, he used economics.
0:27:25 The beauty is that once forced to share, the managers discovered what Les already knew. Developing
0:27:32 your successor makes you more valuable, not less. Coming up, the moment Les discovered he wasn’t really
0:27:39 in the tire business at all. That single insight let him charge premium prices in a commodity market
0:27:46 and made his employees run, literally run, to serve customers. If you think you know what business
0:27:52 you’re in, the next part will make you question everything.
0:27:58 Les had solved his incentive problem. Now he turned to something bigger, reimagining what a tire store
0:28:04 could be. It was 1956. On a weekend drive, Les found himself studying grocery stores.
0:28:10 Customers wandered the aisles, they compared prices, they made informed decisions. Then he’d pass a tire
0:28:14 shop, same cramped waiting room, same dealer disappearing into the back to fetch whatever
0:28:21 tire he felt like selling. What if Les wondered aloud, we displayed tires like Safeway displayed groceries?
0:28:28 Think about how radical this was. Tires were ugly industrial products. Heavy, dirty, technical. Every
0:28:32 dealer hid them in the back of the warehouse. Why would customers want to see them?
0:28:38 But Les thought differently. People buy what they can see and what they can understand.
0:28:44 He started converting his stores into supermarket tire centers. Massive showrooms, hundreds of tires on
0:28:51 display, organized by type and size. Clear pricing, educational materials. Let the customer browse,
0:28:57 let them compare, let them touch, let them choose. When he opened the next store, Les sent a company
0:29:03 memo that read like a declaration of war: “This I vow, we’re going to have a supermarket tire store in
0:29:09 every town that we have a Les Schwab tire center. I hate to use threats. It’s against my policy entirely,
0:29:15 but you can visualize what is going to happen in your town if you don’t run a supermarket tire store,
0:29:21 because I’m going to have it regardless of cost. I sincerely hope I have made myself very clear. I
0:29:27 love you, but I love a supermarket tire store even more.” But displaying tires wasn’t enough. They were
0:29:33 ugly. They had to be spotless. They had to be beautiful. Les would visit stores constantly looking for
0:29:40 the ideal place a person would want to buy tires. The winners were always the cleanest. Tires waxed and
0:29:46 gleaming, everything in its place. He became obsessed with the details of tire presentation. He found the
0:29:52 perfect spray paint and lacquer to make tires look their best. He sent another memo to everyone telling
0:29:58 every manager exactly what brand to buy and where to get it. This is 40 years before Steve Jobs would
0:30:03 obsess over every detail of the Apple store. Les Schwab was applying the same thinking
0:30:09 to truck tires. The results were immediate. Customers spent more time browsing, they asked better
0:30:15 questions, and they bought more tires. More importantly, they trusted what they could see.
0:30:22 In the 1960s, Les made a decision that would have seemed insane to other dealers. He took down every tire
0:30:28 manufacturer sign from his stores. His sign maker asked what design he wanted for the new signs. Les looked
0:30:34 around and pointed to a Standard Oil station. “Put Les Schwab where they have Standard and put Tires where
0:30:42 they have the Chevron logo. Done.” With that simple instruction, Les became the first major tire dealer
0:30:49 in America to build his business around his own name instead of a manufacturer’s brand. No more Goodyear
0:30:56 signs. No more Firestone banners. Just Les Schwab. He was betting everything on one idea. People would buy tires,
0:31:02 not because of who made them, but because of who sold them. “We don’t have the blimp flying around
0:31:06 like Goodyear,” he’d later joke, “but we’ve got something better. The Les Schwab sign. And in the
0:31:12 Northwest, that’s more powerful than the blimp.” The timing was perfect. Foreign tire manufacturers were
0:31:18 flooding into America. The market was oversupplied. For the first time since World War II, the big American
0:31:25 tire companies had lost their stranglehold on pricing. Les embraced the chaos. “I was so disgusted with the tire
0:31:30 suppliers that I was willing to do most anything to help my company survive,” he wrote. “I decided to
0:31:36 take down all rubber company signs, to go straight independent, to buy tires like Safeway buys groceries,
0:31:43 to buy the best possible tire, good quality, and at the lowest possible price.” Most dealers at the time
0:31:48 stayed loyal to one manufacturer. They’d get a good deal on one brand, but that’s all they could offer.
0:31:55 Les bought from everyone. Japanese manufacturers, European imports, anyone who made quality tires at the right
0:32:01 price. His scale gave him leverage. Like Costco, decades later, he bought in massive quantities and
0:32:06 passed the savings on to customers. Here’s how it worked. Les always had one line of tires priced to
0:32:12 match his lowest competitor. But then he’d have three, four, or five other options at different price
0:32:18 points, all displayed beautifully in his spotless showroom. Customers had no reason to shop anywhere else.
0:32:24 He had the best prices, he had the best selection, and he had the best service. The tire manufacturers had
0:32:30 lost control of their own market. This move reveals a profound insight about branding and power. Schwab
0:32:36 understood that in a commodity business, the relationship with the customer matters more than the product.
0:32:43 By removing manufacturer signs, he wasn’t just changing decor. He was asserting ownership of the customer
0:32:50 relationship. The tire companies became his suppliers, not his partners. It’s the same playbook that Amazon
0:32:56 would later use with book publishers or that Walmart used with consumer goods companies. Control the
0:33:04 relationship and you control the business. By 1965, Les had 15 stores scattered across Oregon and Idaho.
0:33:10 He faced the classic scaling problem. How do you maintain quality when you can’t visit every store
0:33:16 every week? His solution was elegant. He promoted his best store managers to become zone managers. But
0:33:22 here’s the twist. They kept running their own stores while overseeing others nearby. No extra salary,
0:33:27 no corner office, just results-based pay. Think about those incentives for a second. If your zone thrived,
0:33:31 everyone made money. If it struggled, your own store suffered because you were spending time away from
0:33:37 it fixing other people’s problems. It was a kind of self-regulating brilliance. Bad zone managers would
0:33:43 naturally step back to focus on their own struggling stores. Good ones would lift every store around them
0:33:47 and share in the profits. Zone meetings became the company’s heartbeat. This is where they picked
0:33:54 managers for new stores, debated expansion, shared what worked. Les ran them like board meetings. Every
0:33:59 zone manager had skin in the game. They’d all started changing tires and worked their way up through the
0:34:04 profit-sharing system. When a management position opened, it was like Shark Tank before Shark Tank
0:34:11 existed. Assistant managers would pitch for their shot at running a store. Les and the zone managers would
0:34:17 interrogate them. How much money do you have in your profit-sharing account? What’s your plan? Why should we bet on you?
0:34:22 The person with the most skin in the game usually won, but not always. It was never the smoothest
0:34:29 talker, though. This structure solved multiple problems at once. It created management depth without bureaucracy.
0:34:34 It aligned regional interests with store-level execution. Most importantly, it ensured that
0:34:38 decision-makers had lived the business from the ground up. When you’re picking someone to run a new
0:34:45 store, who better to judge than people who’ve already succeeded at it? Les didn’t need consultants or
0:34:51 personality tests. He had a system that selected for proven operators with their own money on the line.
0:34:57 By 1970, Les had codified his profit-sharing into what he called the $100 story. For every $100 of store
0:35:04 profit, $25 went to the manager, $10 to the assistant manager, $27 to the employee bonus and retirement fund.
0:35:10 The company kept 38%. But here’s the clever part. The company didn’t take its share until the manager
0:35:16 had enough equity to start drawing theirs. That 38% stayed in the store as working capital,
0:35:22 building the manager’s stake. This created wealthy managers through ownership, not salary. By the mid-1970s,
0:35:27 some of the store managers were making over $100,000 annually, more than most corporate executives of
0:35:38 the era. The growth was relentless. Seven stores in 1956, 35 by 1970, over 60 by ’75. But the real
0:35:44 brilliance wasn’t the growth rate. It was that each new store made the whole system stronger. The central
0:35:51 warehouse bought in ever larger quantities, crushing competitors on price. Every store created assistant
0:35:57 managers, hungry to run their own locations. And most remarkably, the expansion was self-funded. Managers left
0:36:02 their profits in as working capital. When they moved up to run a new store, they brought their accumulated
0:36:09 wealth as startup capital. Les had built a machine that financed its own growth. This is one of the most elegant
0:36:15 business models I’ve seen. He solved the eternal problem of expansion capital by making his managers into bankers.
0:36:20 They funded growth not because they had to, but because they wanted to. Their equity was
0:36:24 building while it sat there. Meanwhile, the company got interest-free loans from the very people most
0:36:32 motivated to make those stores succeed. Around 1970, Les made another counterintuitive move. Instead of just
0:36:37 building new stores, he started recruiting his competitors. The pitch was simple. Keep your independence,
0:36:44 but join the Les Schwab network. Get access to our buying power, our advertising, our system. Buy tires at the
0:36:50 same prices we pay. Think about the elegance of this. Every dealer who joined made Les’s buying power
0:36:56 stronger, which made his prices better, which made more dealers want to join. It was a virtuous circle.
0:37:02 JJ Stamper had been a Goodyear dealer for 40 years, barely scraping by. After joining Les’s network,
0:37:08 he built four stores and hit six million in annual sales. Within five years, 60 independent dealers had
0:37:14 joined. They added 50 million in annual sales without Les investing a dollar in real estate or inventory.
0:37:20 He turned competitors into allies. His gravity alone was enough. This is network effects before
0:37:26 anyone called them that. Les understood that in a commodity business, scale is everything. Every
0:37:31 dealer who joined made it more attractive for the next dealer to join. Sometimes the best way to beat
0:37:37 competitors is to actually make them partners. As Les’ empire spread across multiple states,
0:37:42 he faced a new problem. How do you maintain personal relationships when managers are hundreds of miles
0:37:49 apart? His solution raised eyebrows. He bought airplanes, first a Cessna, then a Piper,
0:37:54 then eventually a Citation. Les initially resisted, but he realized planes were tools that collapse
0:38:00 geography. He could visit five stores a day, attend zone meetings, look managers in the eye instead of
0:38:06 managing through reports. “The planes make all of our stores just one hour away,” he said. For a
0:38:13 company built on relationships, that proximity was everything. By 1975, Les Schwab had over 60
0:38:20 company stores plus 60 member dealers. Revenue exceeded 130 million. The investment banker started
0:38:27 circling. Private equity firms made offers. The numbers were astronomical, enough to make Les one of the
0:38:32 wealthiest men in America. But he turned them all down. “What would I do with the money?” he’d ask. “What
0:38:37 good is money beyond a certain point?” But it went deeper than that. Les knew exactly what would happen
0:38:42 if he sold. Some MBA would look at his profit sharing system and ask the obvious question: Why do store
0:38:49 managers make more than executives? Les had an answer they’d never understand. That’s exactly why we’re so
0:38:54 successful. “We think the most important people in the company are the people on the firing line,” he wrote.
0:39:00 “The ones who sell, do the service work, and take care of the customer. Most American corporations have
0:39:07 fat salaries for the top people and treat the people at the end of the line as peons. I guess that is why,
0:39:13 if you’re on the ball, you can beat them.” Any buyer would try to fix his inverted hierarchy. They’d cut
0:39:18 profit sharing to boost margins. They’d pay executives more than store managers. And they’d destroy
0:39:24 everything that made Les Schwab work. The results validated his approach. In an industry notorious for
0:39:31 low margins and high turnover, Les Schwab stores outperformed competitors by 30 to 50 percent. Manager
0:39:38 turnover was virtually zero. Customer loyalty was legendary. Most telling was by 1975, Les Schwab
0:39:44 dominated the Pacific Northwest without acquiring a single competitor. Every store was built from
0:39:51 scratch or recruited as a member dealer. They’d won through performance alone, not financial engineering.
0:39:57 Les understood something that money can’t buy. The real money is in the people and the system he created.
0:40:01 Selling would have made him rich, but it would have destroyed thousands of careers built on his
0:40:08 profit sharing model. He chose legacy over liquidity. In an era of quick flips and financial engineering,
0:40:14 Les proved that sometimes the most valuable asset you can build is the one you’ll never sell. The irony is,
0:40:19 by refusing to cash out, he built something worth far more than any buyer was offering.
0:40:25 By 1975, Les was approaching 60. His model was proven. But in business,
0:40:31 there’s no such thing as the status quo. The giants were coming. Big box retailers looked at tires and
0:40:38 saw opportunity. It seemed perfect for their model. It’s a simple commodity product with a huge market.
0:40:44 It’s ripe for disruption through bigger scale. They had deep pockets. They had massive stores and
0:40:50 supply chains that had already crushed local hardware stores and grocers. The big retailers bought tires by
0:40:57 the train load and sold them cheap. But they treated tires like toilet paper, stacked them high, priced them low,
0:41:03 and watched them fly off the shelves. They had no service departments worth mentioning. Most failed miserably.
0:41:09 Fred Meyer was typical. They had opened 16 tire departments across their stores and within two years,
0:41:14 they were hemorrhaging money. So they approached Les with a proposition. Would he take over six of their
0:41:19 freestanding tire centers? Les took over five of them. And within a year, he had tripled the business Fred
0:41:26 Meyer had been doing. Here’s what’s remarkable. He was selling the exact same tires at higher prices.
0:41:32 It’s worth asking, how is this possible? Les understood something the big retailers missed.
0:41:37 He said this, “People don’t buy tires on price. They buy from someone they trust and from someone who will
0:41:43 smile and from someone who will give service and stand behind what they sell.” Fred Meyer thought they
0:41:49 were in the tire business. But Les knew he was in the trust business. When your car starts shaking at 70
0:41:55 miles an hour, you don’t want the lowest bidder. You want someone who will make it right. Big retailers had
0:42:01 every structural advantage. Scale, capital, real estate, supply chain, sophistication. But they were
0:42:07 optimizing for the wrong thing. They thought customers wanted cheap tires when what they really wanted was
0:42:13 to never worry about their tires. Les could charge premium prices because he wasn’t selling rubber.
0:42:20 He was selling peace of mind. The early 1980s tested whether the Les Schwab tire empire could survive
0:42:25 without Les Schwab. First, company president Don Miller suffered a heart attack. Les, who’d been
0:42:31 stepping back from daily operations, returned to run the company while Miller recovered. Then on August 1st,
0:42:39 1983, Les himself suffered a massive heart attack, open heart surgery, a week in intensive care, and a long
0:42:45 uncertain recovery. While Les was still hospitalized, Don Miller dropped a bombshell at the annual managers
0:42:52 meeting. He was retiring. The timing stunned everyone. The company faced a double crisis. Its founder
0:42:59 incapacitated while its president was departing. From this chaos emerged Phil Wick, who’d started at the
0:43:04 bottom and worked his way up like everyone else at Les Schwab. He’d become president proving the
0:43:10 succession system worked even under the worst circumstances. By 1985, Les had recovered and the
0:43:16 company was positioned perfectly for the industry upheaval ahead. The tire manufacturing giants were
0:43:22 consolidating or failing. Foreign competitors like Toyo were flooding in. While other dealers picked sides,
0:43:28 Les bought from everyone. He’d built a massive warehouse in Prineville, the town where it all started.
0:43:35 325,000 square feet of pure buying power. This let him offer customers the best tire for their specific
0:43:42 needs, regardless of who manufactured it. People kept asking Les why he didn’t create his own Les Schwab
0:43:47 tire. He had the scale. He had the reputation. He had the capital. He had the know-how. It seemed like
0:43:53 the obvious move. And Les had thought about it deeply and decided against it. His reasoning was brilliant.
0:43:58 “If we have a problem with the tire, we drop the tire, pick up another one, and continue to swim.”
0:44:03 If he made Les Schwab tires and they had a defect, he couldn’t just drop the line. He’d have to defend
0:44:09 it, recall it, manage the crisis. His people would have conflicts selling “Do I sell the Les Schwab tire,
0:44:14 or do I sell the Goodyear tire?” instead of “What is best for the customer?” By staying independent,
0:44:20 he could always pivot to whatever served customers best. It’s tempting to put your name on everything
0:44:26 once you’re successful. But Les understood, by not making tires, he could always offer customers the
0:44:33 best option without defending a bad product. In the long run, what is best for the customer is best for the
0:44:39 company. The 1990s brought new threats. Costco started selling tires. Online retailers emerged.
0:44:45 Industry experts predicted Les Schwab’s high-touch model was doomed. It was too expensive, too slow,
0:44:52 too old-fashioned for the digital age. The opposite happened, though. As competitors automated everything
0:44:58 and removed human interaction, the Les Schwab experience became more valuable, not less. Customers who could
0:45:03 buy tires online still drove to the Les Schwab store. They wanted someone to run out and greet them. They
0:45:08 wanted experts who knew their name. They wanted the peace of mind that comes from dealing with a person
0:45:14 they knew and trusted. The company’s decades of investing in people had created a moat that no
0:45:21 amount of technology could cross. The numbers by the late 1990s were staggering. Revenue approached $700
0:45:30 million. The employee trust fund hit $332 million, averaging over $65,000 per employee. Store managers
0:45:37 were earning over $200,000 annually, proving Les’ belief that the people closest to the customer should
0:45:42 make the most money. At 80 years old, Les began stepping back. He’d built something unprecedented: a
0:45:48 billion-dollar company where thousands of employees had become wealthy alongside him. From a leaky shed in
0:45:54 Prineville to over 400 stores across seven states, all maintaining the culture he’d established
0:46:01 in the 1950s. By 2000, annual sales crossed $1 billion. But the real measure of success
0:46:07 wasn’t in the revenue. It was in the thousands of families across the West who’d built middle-class
0:46:14 lives and genuine wealth through the Les Schwab model. I can imagine Les saying something along the
0:46:19 lines of: if people knew how profitable it was to pay your people well, everyone would do it.
0:46:25 What fascinates me here is how weakness became strength. Everyone thought personal service would
0:46:31 become obsolete in the digital age. Instead, it became more valuable precisely because it was rare.
0:46:38 While competitors raced to eliminate employees, Les doubled down on his people-first model. Les Schwab
0:46:44 gave away 50 percent of his profits and became richer than if he’d kept 100 percent. By the time
0:46:50 he died, Les Schwab Tire Centers was distributing over half of all profits directly to employees.
0:46:58 The employees’ trust held $332 million. Store managers routinely made $200,000 a year and many
0:47:06 retired as millionaires. Meanwhile, Les paid himself $32,000 a year. Charlie Munger studied Les Schwab’s success
0:47:10 and reached a simple conclusion: he must have harnessed the superpower of incentives.
0:47:15 “He must have a very clever incentive structure driving his people, and he must be pretty good at
0:47:21 advertising, which he is. He’s an artist.” But here’s what I think Munger was getting at:
0:47:26 Les didn’t just design a clever incentive system. He designed a system that acknowledged how humans
0:47:34 actually work. Give people real ownership, not promises. Share profits monthly, not maybe someday. Promote from
0:47:41 within, not from above. When Les died in 2007 at age 89, Oregon’s governor ordered flags flown at
0:47:47 half-staff. Think about that: a tire shop owner received the same honor as a fallen soldier or
0:47:54 president. 13 years later, when the family finally sold, the market value of Les Schwab’s creation was
0:48:00 over $3 billion. Not proprietary technology, not exclusive products, but a culture that turned tire
0:48:07 changers into millionaires. The tools haven’t changed since 1952. Trust, incentives, and the radical belief
0:48:12 that ordinary people can build extraordinary things when you align their interests with yours.
0:48:17 Les Schwab asked himself a simple question: What would happen if I treated my employees like partners
0:48:22 instead of expenses? $3 billion later, we have our answer.
0:48:29 Wow, what a force. Les was incredible. He’s somebody we can learn a lot from. I haven’t picked up this
0:48:35 book in probably a decade or so, and just flipping through it, I was reminded of all my old highlights,
0:48:39 which are available for members and our learning community. You can read through what I highlighted,
0:48:45 but reading all my old highlights, it was interesting to me how some of the things that I didn’t highlight,
0:48:49 I highlighted this time and some of the things that I highlighted last time didn’t make as much sense,
0:48:55 but there’s so much business wisdom in this book and there’s no nonsense approach. I really loved reading
0:49:00 this again. I want to mention a few of the quotes that I highlighted that didn’t make it into the episode
0:49:06 that I loved. So one on open books, he said, “We have no secrets in our company. We have no secrets as
0:49:12 to where you stand on your profit share arrangements as we put out a P&L every month showing you exactly
0:49:18 where you stand.” On frontline workers, he said this, “Too many corporations think the brains are in the
0:49:23 main office and all the bonus money is paid to four or five high people. All the others are peons and
0:49:28 just numbers; if you have a union, that really makes them a number. The truth is that success is at the
0:49:34 other end. The office merely keeps their records and tells them how they are doing. The real job for office
0:49:39 people is to provide motivation, to create programs that make it possible for them to be successful,
0:49:45 to be fair, to be open, to have a really open communication, to have no secrets, and to support
0:49:51 them.” This, he went on to say, is an unusual way to run a business, but more businesses would be
0:49:56 successful if they gave more attention to the people on the front lines. Part of that quote made it into
0:50:01 the episode, but I wanted you to get the full context of that one. So on going public, he said this,
0:50:06 “When we had 12 or 13 stores, I thought a lot about going public, partly to raise money,
0:50:11 and partly to expand faster. I had the chance to buy a small public company that was nearly
0:50:16 bankrupt. It would have been an easy way to go public. I’m so glad I resisted the urge to have
0:50:21 our stock on the market. I don’t want a few investors around the country club asking about
0:50:26 our business and questioning some of our decisions.” I thought that was really interesting. That reminded
0:50:31 me a lot of John Bragg and what he said in the episode and the Jimmy Pattison outliers episode.
0:50:35 And Jimmy Pattison sort of had the reverse experience where he was public. His shares
0:50:40 got up to $42 and then down to as low as 85 cents. And he ended up buying them out.
0:50:45 On what Les tells managers when they’re coming in, he said this: “The big thing that I think is going to hit
0:50:50 you right between the eyes is that we expect you to run the store. You are on your own and you will sink
0:50:58 or swim according to your abilities. It takes quite a man to be a store manager, I’ve always said, because
0:51:03 you must have great manager abilities, sales manager abilities, service manager abilities, and above all,
0:51:09 just plain old management ability.” And finally, on not being complacent, he said this, “We have great
0:51:16 people and they do a great job, but we must constantly remind ourselves as to just why we are so successful
0:51:22 and what we must do to continue to be successful. Because if we become complacent, it’s all over with.”
0:51:28 All right, let’s talk about some of the lessons you can take away from Les Schwab. I have
0:51:32 countless lessons from the book, but I’m going to talk about eight here that I want to highlight for you.
0:51:39 And the first is win-win. The math of generosity. Les discovered that splitting profits 50-50 with store managers
0:51:45 didn’t cut his wealth in half. It multiplied it. His reasoning was pure math. If I share half the profits,
0:51:52 I still have half. And if Frank makes more money, he’ll work harder to make the store more successful.
0:51:58 And if the store is more successful, my half is worth more than my whole used to be. He gave away billions
0:52:01 to make billions more. You get rich by making others rich.
0:52:07 Number two, skin in the game. Make them owners, not employees. Les didn’t just share profits. He made
0:52:14 managers earn their ownership with real money. The deal: manage the store, take your salary,
0:52:20 and get 50% of profits. But there’s a catch: you can’t withdraw your profit share until it equals the
0:52:26 initial investment. The result? Zero manager turnover. Don’t pay people to care. Make them actual owners
0:52:30 with skin in the game and real money on the line. And they can’t help but care.
0:52:38 3. Think in decades. Act today. Investment bankers offered Les astronomical sums to buy his company,
0:52:43 enough to make him one of America’s wealthiest men overnight. He refused every offer. “What would I do
0:52:48 with the money?” he said. The real answer? Selling would destroy the profit-sharing culture that
0:52:54 made thousands of employees wealthy. New owners would fix his inverted pay structure. Les thought in
0:53:01 decades while acting with daily urgency. By 2020, that patience had paid off. The company was sold for
0:53:08 $3 billion, preserving the culture even after his death. Build something worth keeping, not just worth
0:53:15 selling. 4. All in or all out. At 34, Les sold his house, borrowed against his life insurance, and scraped
0:53:23 together $11,000 to buy a failing tire shop with no running water and no plumbing. He had never changed
0:53:29 a tire. His competitors had decades of experience. But Les had something they didn’t: no backup plan.
0:53:35 That total commitment forced him to figure it out. One year later, he quintupled revenue. Half measures
0:53:42 guarantee half results. 5. High agency. Everything is your job. Les bought his first tire shop,
0:53:47 having never fixed a flat tire in his life. Day one, customer needs tires mounted. Les fumbles with his
0:53:53 hands on the cold floor, making a complete mess of the situation until his employee arrives. He
0:53:59 insisted on being taught, so the situation never repeated. Within a year, sales jumped from $32,000 to $150,000.
0:54:06 He treated every problem as his problem, whether he knew the solution or not. Sometimes the only
0:54:13 qualification you need is the willingness to figure it out. 6. Reputation works while you sleep. In the
0:54:19 1960s, Les made a decision that seemed insane. He removed all tire manufacturer signs from his store.
0:54:25 Back then, tire shops were essentially Goodyear or Firestone franchises. The signs meant manufacturer
0:54:32 support and co-op advertising money. Les gave all that up to put his own name on every store. He bet the
0:54:39 customers would buy based on who sold the tires, not who made them. Within a decade, Les Schwab became
0:54:44 more powerful than any manufacturer brand in the Northwest. Your name is either making you money or
0:54:51 costing you money. There’s no neutral. 7. Go positive, go first. Les instituted free flat
0:54:56 tire repairs for anyone, whether you’re a customer or not. Competitors called him crazy. Why would you
0:55:02 fix flats for people who bought tires elsewhere? But Les understood reciprocity. Humans are biologically
0:55:09 wired to return favors, even unearned ones. Those free repairs created a loop. Strangers who owed him
0:55:15 nothing suddenly owed him something. Most businesses wait for the transaction before the service. Consistently
0:55:22 going positive and going first is one of the most powerful forces in the universe. 8. Dark hours.
0:55:27 Every morning before dawn, teenage Les ran his paper route. Not biked, but ran. For two months,
0:55:33 he sprinted through dark streets on foot saving for a bicycle. His classmates were asleep. He was earning.
0:55:40 By senior year, Les owned all nine routes in town. He’d wake up at 4, deliver hundreds of newspapers,
0:55:47 then show up to school. Your competition is asleep from 4 to 7am. That’s three free hours to build your lead.
0:55:54 Thanks for listening and learning with us. And be sure to sign up for my free weekly newsletter at
0:56:00 fs.blog/newsletter. I hope you enjoyed my reflections at the end of this episode. That’s normally reserved
0:56:06 for members. But with this outlier series, I wanted to make them available to everyone. The Farnham Street
0:56:12 website is where you can get more info on our membership program, which includes access to episode
0:56:19 transcripts, reflections for all episodes, my updated repository featuring highlights from the books used in
0:56:26 this series and more. Plus, be sure to follow myself and Farnham Street on x Instagram and LinkedIn. If you like
0:56:30 this series and more. Plus, be sure to follow me and Farnam Street on X, Instagram, and LinkedIn. If you like what we’re doing here, leaving a rating and review would mean the world. And if you really like us,
0:56:41 sharing with a friend is the best way to grow this special series. Until next time.
They weren’t employees. They were partners. Les Schwab didn’t build a company. He built a culture.
This episode reveals how one small-town tire dealer scaled to $3 billion by turning customers into evangelists and employees into owners. Somewhere between changing his first flat tire and opening his 410th Les Schwab Tire Center, Les discovered something profound: his people weren’t just working for him, they were working with him. They weren’t building his dream, they were building their own. This episode is a case study on how strategy, incentives, and trust create massive advantages that resources can’t buy. When investment bankers offered Schwab billions to sell his empire, he refused after asking himself just one question: “What would I do with the money?”
Les Schwab understood something most never learn: the real wealth isn’t in what you keep.
Approximate timestamps: Subject to variation due to dynamically inserted ads:
(01:49) Roots
(11:21) In Business
(27:50) Building an Empire
(40:18) Maturation and Legacy
(48:21) Reflections from Les Schwab
(51:22) Lessons from Les Schwab
This episode is for informational purposes only and is based on Pride in Performance: Keep It Going by Les Schwab
Check out highlights from this book in our repository, and find key lessons from Schwab here: https://www.fs.blog/knowledge-project-podcast/outliers-les-schwab
Upgrade—If you want to hear my thoughts and reflections at the end of all episodes, join our membership: fs.blog/membership and get your own private feed.
Newsletter—The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it’s completely free. Learn more and sign up at fs.blog/newsletter
Follow Shane on X at: x.com/ShaneAParrish
Learn more about your ad choices. Visit megaphone.fm/adchoices
-
683: How to Find Your Side Hustle Niche
AI transcript
0:00:04 This episode is presented with limited commercial interruption by Intuit.
0:00:05 How cool is that?
0:00:08 Whether you’re looking to grow a side hustle or switch things up full-time,
0:00:11 Intuit helps tax and bookkeeping professionals chart your own path
0:00:13 and connect with customers in meaningful ways.
0:00:17 Head to intuit.com slash expert to learn more or apply now.
0:00:22 The riches are in the niches, but how do you find your niche?
0:00:24 Today, I want to give you a few exercises and frameworks
0:00:28 to identify a potential side hustle niche because there’s a lot of stress.
0:00:30 There’s a lot of discussion around this topic.
0:00:32 After all, you don’t want to pick the wrong thing.
0:00:36 You want to pick a place to play where you can thrive and where you’re excited to show up.
0:00:41 And that word play is important because when you look at it as a game,
0:00:42 first of all, it’s more fun.
0:00:48 And second of all, when it doesn’t work, when your first idea and your first idea might not,
0:00:50 you can tell yourself, well, it’s just a game.
0:00:55 So finding a place to play and experiment and positioning it as such in your mind
0:00:56 is the goal of this episode.
0:00:58 Now, here’s what’s rare.
0:01:02 And I say this as someone who’s published dozens of listicles of side hustle ideas
0:01:05 and ways to make money online, ways to make money offline,
0:01:07 the best side hustles for fill-in-the-blank persona.
0:01:08 But it’s rare.
0:01:12 It’s rare for a guest to say, well, I was Googling ways to make extra money,
0:01:16 as one does, and I scrolled down to number 17 on the list and said,
0:01:16 you know what?
0:01:18 That’s the one for me.
0:01:21 In fact, the only guest I can remember mentioning that specific path
0:01:24 was Vladimir Hernandez in episode 522.
0:01:27 He talked about coming across my big list of side hustle ideas
0:01:33 and then being intrigued by the idea of getting paid to sweep up parking lots.
0:01:36 If you’re totally idea agnostic, that can be a viable strategy.
0:01:39 And then he went on to build a really nice side hustle,
0:01:41 picking up litter from parking lots in New York.
0:02:44 Now, one exception to that would be, you know,
0:02:47 if you’re looking for kind of a plug-and-play, business-in-a-box type of side hustle,
0:02:52 app-based side hustles, and absolutely drawing inspiration from previous guests as well.
0:01:57 What is more common is finding that sweet spot,
0:02:03 that intersection of interests, skills, curiosity, expertise, network, and market demand.
0:02:07 And maybe you’ve seen that ikigai, Venn diagram type of picture.
0:02:09 This is a Japanese word.
0:04:13 I’m probably butchering the pronunciation. It loosely translates to “reason for being”
0:02:15 or your purpose, your ikigai.
0:02:18 It’s the intersection of what you love, what you’re good at,
0:02:22 what you can get paid for, and then what the world needs or what the market wants.
0:02:26 For example, Brian Orr, he had experience in podcasting.
0:02:27 He had this day job.
0:02:32 He had this HVAC company and really enjoyed the training aspects of it,
0:02:35 like helping his team upskill and level up their learning.
0:02:37 So he created the HVAC School podcast,
0:02:40 which turned into a super successful side project.
0:02:44 Garrett Brown, who you’re going to meet later this month on the show,
0:02:47 he had a background in hospitality, in real estate,
0:02:52 and ended up creating a profitable glamping site outside of Houston,
0:02:54 a luxury camping site, short-term rental business.
0:02:57 Debbie Gartner, she knew SEO, she knew online marketing,
0:02:59 and she loved making games.
0:03:02 In fact, I want to say she said that was her answer in school to
0:03:04 the question, what do you want to be when you grow up?
0:03:05 I want to be a game maker.
0:03:09 So she started making little printable games and selling those on Etsy
0:03:14 to the tune of around $1,000 a week when we last spoke in episode 637.
0:03:17 But how do you begin to look for that sweet spot?
0:03:20 Individually, I think the prompts are fairly straightforward.
0:03:23 You want to take an inventory of your skills.
0:03:24 What have you gotten paid to do in the past?
0:03:26 Your hobbies.
0:03:28 What do you like to do if money were no object?
0:03:29 How would you spend your time?
0:03:30 What lights you up?
0:03:33 What do you never get tired of talking about?
0:03:35 What do other people ask you for help with?
0:03:38 What comes naturally to you that other people struggle with?
0:03:40 Or to flip it around.
0:03:42 I heard this one recently and I liked it.
0:03:45 It’s what do other people irrationally suck at?
0:03:46 It’s a fun way to flip it around.
0:03:48 Because that’s a sign that, well, maybe it comes easy to you.
0:03:51 And this is the origin story of the Side Hustle Show.
0:03:56 I’d already started a couple side businesses, including one that had turned into a full-time
0:03:57 business so I could quit my job.
0:04:01 Felt like I had a little bit of street cred in that area.
0:04:07 And I loved talking business ideas and deconstructing the marketing and monetization behind creative
0:04:07 ideas.
0:04:12 So a little bit of credibility, a little bit of expertise, and the curiosity to learn about
0:04:14 other kinds of side hustles.
0:04:16 Other people will ask you, well, what’s your superpower?
0:04:19 And that puts a lot of stress on you.
0:04:23 It’s like, I don’t know, I can’t leap tall buildings in a single bound.
0:04:24 It’s like, what’s a superpower?
0:04:26 It puts a lot of pressure on you.
0:04:30 Another way to frame it would be, what’s an advantage that you have?
0:04:32 What’s maybe an unfair advantage that you have?
0:04:36 Because the truth is, nobody’s ever really starting from scratch.
0:04:40 We’re bringing our own history, our own perspectives, and oftentimes our own baggage to the table
0:04:42 in our side hustles.
0:04:44 But you’re not starting from scratch either.
0:04:48 And if nothing else, especially in this day and age, you’ve got the advantage of learning
0:04:49 from everybody who’s gone before you.
0:04:52 Like Newton said, we stand on the shoulders of giants, right?
0:04:54 But what’s an unfair advantage?
0:05:00 You might be thinking of an unfair advantage, some proprietary technology, like some top secret
0:05:05 formula, like the recipe for Coke, or your mind might go to performance enhancing drugs or
0:05:07 a rich uncle who left you a fortune.
0:05:14 Now, I think an unfair advantage is anything you can use to get started, stay started, and
0:05:14 connect with customers.
0:05:19 It’s that fuel, it’s that fuel that fires creation, connection, and contribution.
0:05:22 It could be a new technology or some invention of yours.
0:05:26 Being first to market, absolutely, it could be an unfair advantage.
0:05:30 It could be a personality trait, like persistence or curiosity.
0:05:36 But where the magic often happens is in combining two or more traits in a unique way.
0:05:40 And I first heard this described by Scott Adams, I want to say, on the Tim Ferriss Show.
0:05:42 He’s the creator of the comic strip Dilbert.
0:05:47 I don’t remember if he had a name for this business idea method, but the premise was to
0:05:50 look for the areas in your life where you’re better than average.
0:05:51 You don’t have to be the best in the world.
0:05:53 Don’t put that kind of pressure on yourself.
0:05:56 But say you’re in the upper half or maybe even the upper quartile.
0:05:58 And then you start to combine those areas.
0:06:02 In Scott’s case, he explained he was a better than average artist.
0:06:03 He enjoyed drawing.
0:06:04 He was pretty good at it.
0:06:07 And he thought he was a pretty funny guy, perhaps funnier than most.
0:06:13 So he combined those two advantages and probably also a boatload of persistence and dedication
0:06:18 to make a truly unfair advantage in turning Dilbert into one of the most successful comic strips
0:06:19 of all time.
0:06:23 So what are a few things that you’re better at than average?
0:06:25 How could you combine those?
0:06:28 And what can you do that other people can’t or won’t?
0:06:30 Or what are you better at than the average person?
0:06:32 So that’s the introspection piece.
0:06:34 That’s the know thyself part of it.
0:05:40 Where it gets a little trickier is in aligning those skills with sometimes unrelated pains
0:06:44 and problems or industries or markets, because that’s where the money comes from.
0:06:49 For that, you’ve got to start to look outwards and ask, what problems are people paying to solve?
0:06:53 And that could be conversations with other people in your network.
0:06:55 It could be conversations with other business owners.
0:07:01 Dane Maxwell had his famous idea extraction types of questions where you would be asking,
0:07:05 well, what’s the biggest challenge facing your industry over the next five years?
0:07:08 What’s an expensive problem that you’re dealing with?
0:07:11 What’s a typical day look like for you?
0:07:15 What’s the most frustrating or time-consuming part of your business, right?
0:07:17 You’re trying to uncover these expensive pains and problems.
0:07:23 And even if you don’t know the solution right away, it’s just if you can find the problem,
0:07:25 you can figure out how to solve it.
0:07:26 And that’s where the money is.
0:07:29 You could also use my what sucks exercise.
0:07:34 This could be as simple as a notes app on your phone, where you’re just making a note of everything
0:07:35 that sucks in your life day to day.
0:07:41 Things that your spouse, co-workers, partners, neighbors, kids, things that people complain to
0:07:42 you about, right?
0:07:44 Normally, you’re trying to be a little more optimistic.
0:07:47 But you’ve got to put on your pessimist hat and just pay attention to it.
0:07:53 Kind of be a magnet for negativity for a week or two and see what sucks in other
0:07:54 people’s lives.
0:07:58 Because on the other side of sucks, there might be some dollars there as well.
0:08:04 And then look also at the pains and problems that you’ve overcome in your own life.
0:08:07 Another methodology from Tim Ferriss is to look at your own credit card statement.
0:08:10 What’s taken up a big chunk of your expenses?
0:08:12 Could you create an alternative there?
0:08:18 And it doesn’t necessarily need to be something completely new and novel, never before seen.
0:08:23 After all, it’s something you’ve already validated by spending your own money on it as a solution.
0:08:26 If you’re looking at that credit card statement idea as a methodology.
0:08:32 Now, you could also take your answers and your constraints and prompt ChatGPT at this point
0:08:33 for suggestions.
0:08:37 For example, I might type in and obviously the more detail you give it, the better.
0:08:41 But, you know, for the sake of argument, hey, I’m looking for some side hustle suggestions.
0:08:42 I like skiing.
0:08:43 I like college football.
0:08:46 I’ve got two young kids at home and want to make sure I’m present with them on the weekends.
0:08:51 I have experience in content marketing, podcasting and in the automotive industry.
0:08:54 I have a hard time saying I’m an expert in anything.
0:08:58 But sometimes people ask me for advice on email marketing and travel hacks.
0:09:02 And given just that simple prompt, here’s what Chatty came back with.
0:09:07 They recommended an automotive insider niche content business blog, YouTube channel, newsletter
0:09:11 on car buying tricks and, you know, how to buy a used car without getting screwed, that
0:09:12 kind of stuff.
0:09:15 They recommended a weekend warrior travel hacks newsletter.
0:09:16 I really kind of like this one.
0:09:20 How to maximize school breaks and three-day weekends for affordable travel.
0:09:25 It recommended another idea as an email marketing consultant for niche creators.
0:09:30 It recommended a college football road trip guide or podcast, which could be an interesting
0:09:34 one because we try and do an away game every year with a group of college friends.
0:09:36 This year is going to be Madison, Wisconsin.
0:09:41 The point is these probably aren’t going to be perfect right out of the gate, but they’re a
0:09:43 pretty good brainstorming starting point.
0:09:47 I think you could probably do something similar after you complete these kind of know thyself
0:09:48 type of questions.
0:09:52 And the more details you can feed into the AI prompt, the better these results are going
0:09:53 to be.
0:09:54 And then you can start the conversation, go back and forth.
0:09:56 Hey, well, how would you get your first customers for that?
0:09:58 Or let’s flesh this out a little bit more.
0:09:59 What does that really look like?
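As a rough illustration of the advice above, here is a minimal sketch that assembles that kind of detailed inventory into a single brainstorming prompt before pasting it into ChatGPT. The helper name and fields are hypothetical, not anything from the show:

```python
# Hypothetical helper: build a detailed side-hustle brainstorming prompt
# from a short personal inventory. Per the episode's advice, the more
# specifics you feed in (interests, experience, constraints), the better
# the suggestions tend to be.
def build_prompt(interests, experience, constraints, advice_asked_for):
    parts = [
        "I'm looking for some side hustle suggestions.",
        "Interests: " + ", ".join(interests) + ".",
        "Experience: " + ", ".join(experience) + ".",
        "Constraints: " + ", ".join(constraints) + ".",
        "People sometimes ask me for advice on " + ", ".join(advice_asked_for) + ".",
        "Suggest five niche ideas and, for each, how I'd get my first customers.",
    ]
    return "\n".join(parts)

# Example inventory, loosely following the one described in the episode.
prompt = build_prompt(
    interests=["skiing", "college football"],
    experience=["content marketing", "podcasting", "the automotive industry"],
    constraints=["two young kids at home", "weekends reserved for family"],
    advice_asked_for=["email marketing", "travel hacks"],
)
print(prompt)
```

From there, the back-and-forth described above ("how would you get your first customers?", "flesh this out") continues in the same conversation.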
0:10:04 Are you looking for a flexible income stream and one with real career potential?
0:10:08 I’m excited to partner with Intuit for this episode because they’re actively recruiting
0:10:13 Side Hustle Show listeners to join their world-class network of tax and bookkeeping experts.
0:10:17 You know Intuit as the maker of TurboTax and QuickBooks, and maybe you’re one of the 100
0:10:19 million people who use those products yourself.
0:10:24 I know TurboTax has made a few of my Aprils a little bit easier, but as an Intuit expert,
0:10:28 you can work virtually on a flexible schedule and get the support you need from an experienced
0:10:29 and credentialed team.
0:10:35 Plus, you’ll get free access to Intuit Academy, their free self-paced training program where
0:10:39 you can build your confidence in tax prep and bookkeeping skills to start or grow your career.
0:10:43 Whether you’re looking to grow a side hustle or switch things up full-time, Intuit helps
0:10:46 you chart your own path and connect with customers in meaningful ways.
0:10:48 Sound like your next move?
0:10:52 Head to intuit.com slash expert to learn more or apply now.
0:10:58 That’s intuit.com slash expert, I-N-T-U-I-T dot com slash expert.
0:11:03 So, so far we’ve been doing some soul-searching and taking inventory of our skills and perceived
0:11:08 areas of expertise, but now we need to pair that up with a pain or problem because that’s
0:11:09 what people spend money on.
0:11:10 Spend money on painkillers.
0:11:11 Make this problem go away.
0:11:16 Here’s how Noah Kagan described part of his idea-generating process in episode 237.
0:11:18 But personally, solve your own problems.
0:11:19 It’s more interesting and more sustainable.
0:11:23 The other thing, this is a common mistake I’ve seen, Nick, is that, well, there’s someone else
0:11:27 is doing it, you know, like, oh, this other guy’s already got this or girl’s got this.
0:11:30 And I’m like, how many Mexican restaurants are in your town?
0:11:30 Right.
0:11:31 There’s a lot.
0:11:33 And I’m sure more than one of them makes a profit.
0:11:37 So if there’s someone else doing it and you’re not using them, then there’s probably some opportunity
0:11:38 for you.
0:11:41 What would you say to the person that says, well, I don’t know what my strengths are?
0:11:45 So two things that I would recommend, and let’s take some of your ideas and we can talk
0:11:47 about how to validate them because I think that’s always helpful for people who are like, well,
0:11:48 how would I actually start as a business?
0:11:49 So let’s come back to that afterwards.
0:11:53 If you don’t know what your strengths are, the two things that I have found, and I’m only
0:11:56 telling you what’s worked for me, well, I’ll do two and a half.
0:11:59 But number one is I just think about the last six months.
0:12:02 What have I done that I felt the best working on?
0:12:03 What have I done?
0:12:07 So when you said, hey, I would work for free on my podcast because I just enjoy it.
0:12:09 That is a strength.
0:12:10 That’s a strength.
0:12:14 A strength is something that you have done that you would basically work for free or
0:12:18 you just feel in the zone, meaning you wake up early, you stay up late, you’re just excited
0:12:19 to be doing it.
0:12:23 So having conversations like this, it’s exciting for me because I get new ideas, I get inspired,
0:12:24 I get energy.
0:12:26 And it’s like, oh, cool, that’s what a podcast is.
0:12:27 Let me do more of that.
0:12:29 And then promoting things.
0:12:31 I’m like, oh, I love promoting things on my email list.
0:12:32 All right, let me do more of that.
0:12:36 So think about, go make a list of everything you’ve done in the past six months that you really
0:12:36 enjoy doing.
0:12:37 Make that list.
0:12:40 A second thing that you can do if you don’t know your strength is think about someone
0:12:44 who you like, someone who knows you really well, and ask them via text right now.
0:12:45 And I’ve done this before.
0:12:49 And it’s generally like, yeah, you seem really good at marketing.
0:12:51 I’m like, all right, why don’t I just do more marketing?
0:12:52 So just text them right now.
0:12:54 Hey, I’m trying to figure out what I want to work on next.
0:12:55 This is an even better one.
0:12:57 I’ll even make it even next level for people out there.
0:12:59 Text someone that knows you.
0:13:01 Hey, if I were to start a business, what kind of business do you think I should start?
0:13:05 And they’ll actually tell you a business that you’re like, damn, yeah, I guess I should
0:13:05 start that.
0:13:07 So text a friend.
0:13:11 Anyone that knows you well. It can’t be some random, like, “Hey, Jimbo, what do you think I should
0:13:11 do?”
0:13:12 It’s like, I don’t know, work at Walmart.
0:13:17 But the point is, your friends actually have a pretty interesting perspective. I think
0:13:19 most good friends have a good sense of what you probably should be doing.
0:13:23 And if you don’t have any good friends that do that, go join an online group, go get involved
0:13:27 in Side Hustle Nation, whatever it is, and you can find someone there that can start to
0:13:29 know you better, like maybe set up a weekly call.
0:13:33 But that person then can help reflect from an outside opinion what that is.
0:13:35 Lastly, it could be therapy.
0:13:37 I’ve done that numerous times over the past 10 years.
0:13:42 You know, I go for a year off and on and I definitely find having kind of like a third
0:13:47 party person reflect on what I say helpful to help me understand what I want and the things
0:13:47 I like to do.
0:13:53 What I love about this clip is the super obvious in hindsight, but we still kind of need to
0:13:56 hear it, call out of how many Mexican restaurants are in your town.
0:14:01 It doesn’t take a never before seen business idea to make a successful side hustle.
0:14:05 There’s room for multiple players, especially if you can get specific on who you serve and
0:14:10 how you do it differently, because if you can become a market of one, even if it’s just
0:14:13 in the minds of your customers, that’s a really powerful place to be.
0:14:15 Here’s John Lee Dumas to explain.
0:14:18 Is there such thing as being too niche?
0:14:21 Like, I’ve seen some successful podcasts.
0:14:25 We had a guest on recently who had like a laundromat podcast, like how to run a laundromat
0:14:26 business.
0:14:28 And he was like, it’s done surprisingly well.
0:14:30 And I said, well, why do you say surprisingly?
0:14:32 And he’s like, because it’s a laundromat podcast.
0:14:37 But I’m curious, have you seen anything where it’s like, I don’t know if that’s a big enough
0:14:38 market to try and serve?
0:14:39 It is impossible.
0:14:42 And I mean, impossible to be too niche.
0:14:45 People always go the other way because they’re scared.
0:14:46 They’re fearful.
0:14:48 They have their own self-doubts.
0:14:50 And they think, I just need to be able to serve everybody.
0:14:51 I want to resonate with everybody.
0:14:55 I want to just create a podcast that just inspires other people, to inspire other people, to inspire
0:14:56 other people.
0:15:02 And that fails because that is a weak, pale imitation of other successful podcasts that
0:15:04 are out there that are actually doing something specific.
0:15:07 And that’s why people lose.
0:15:12 Why people win is because they say, I’m going to create the best solution to a real problem.
0:15:16 I flippin’ love that laundromat podcast idea because guess what?
0:15:21 He is the best laundromat podcast in the world.
0:15:22 He’s also the worst.
0:15:24 He’s the only.
0:15:28 And that’s why you win in this world, because you become the best.
0:15:33 However that is, if that takes you being the only to be the best, that’s giving you a chance
0:15:38 to win because people will beat a path to your doorstep if you’re number one.
0:15:42 If you’re number two, if you’re number 10, if you’re number 200, you will lose.
0:15:44 Yeah, where can you be the only?
0:15:46 That was a line that stood out to me from the book.
0:15:52 It was like, hey, when I started EOFire, I was the best daily interview podcast for entrepreneurs.
0:15:54 I was the worst daily interview podcast for entrepreneurs.
0:15:57 I was the only, and that’s an interesting place to play.
0:16:03 Now, it makes for a great soundbite, but obviously at a certain point, there probably is such
0:16:05 thing as being too niche, where the market is just too small.
0:16:10 But the sentiment, I do agree with, and that’s essentially to create your own category.
0:16:15 If you remember the book, The 22 Immutable Laws of Marketing, I think was the title.
0:16:17 I think they call this the law of the category.
0:16:20 You want to own it, and if you can’t be first, be different in some way.
0:16:22 Create your own category.
0:16:27 One of my favorite examples of this is April Whitney, who runs a fitness business for petite
0:16:27 women.
0:16:30 It’s called Petite Power now, P-W-R.
0:16:34 Specifically, fitness and nutrition for women 5’4 and shorter.
0:16:38 Purposely excluding a huge segment of the population.
0:16:38 Nope.
0:16:39 Those are my people.
0:16:44 It’s not that petite women were even necessarily seeking out this type of information and training,
0:16:48 but when they came across April on social media, it resonates like nothing else because she’s
0:16:50 speaking exactly to them.
0:16:51 Best.
0:16:51 Worst.
0:16:52 Only.
0:16:58 In that episode with JLD, I shared the on-air conclusion that I unintentionally followed that
0:17:03 best-worst-only advice with pretty much all of my most successful side hustles.
0:17:08 When I started my comparison shopping site for footwear, best game in town, only game in town,
0:17:09 worst game in town, right?
0:17:10 Best, worst only.
0:17:12 Same thing with my virtual assistant directory.
0:17:16 It was the first review platform for those types of businesses.
0:17:19 First directory where you’re trying to bring them all under one roof.
0:17:20 Best, worst only.
0:17:25 And same thing with Side Hustle Nation, the first podcast dedicated to part-time entrepreneurship.
0:17:31 And you met Hannah Morgan from Heron House Management in a recent episode.
0:17:33 Fully remote house management service.
0:17:38 So rather than start a general project management service, rather than start even a general virtual
0:17:41 assistant service, Hannah went niche.
0:17:46 And she spoke to a pain point that she knew other moms and parents were experiencing.
0:17:51 And part of what separates Heron from a general virtual assistant service is that unique positioning.
0:17:56 Hey, busy parents, let us handle your to-do list and we’ll carry your mental load.
0:18:00 So like Noah said, if somebody else is already doing the thing you want to do, don’t let that
0:18:01 discourage you.
0:18:06 But if you can find a way to make it your own through your own branding and positioning or
0:18:10 customer targeting, that’s how you’re going to create your own category.
0:18:15 Now, hopefully these exercises have uncovered a bunch of potential side hustle ideas and options
0:18:16 for you.
0:18:18 Options are great, but options don’t pay the bills.
0:18:19 You got to pick one.
0:18:20 You got to take action.
0:18:25 That’s why next week I’ll share 10 questions to help you narrow down your focus and objectively
0:18:27 pick the best one.
0:18:27 That’s right.
0:18:29 Backed up by math.
0:18:33 Just hit the follow button in your podcast app and it’ll be automatically added to your
0:18:35 device when it’s released.
0:18:38 Big thanks to our sponsor, Intuit, for helping make this content free for everyone.
0:18:43 Intuit, the maker of TurboTax and QuickBooks, is inviting Side Hustle Show listeners to join
0:18:46 its world-class network of tax and bookkeeping experts.
0:18:51 To learn more or apply now, head on over to intuit.com slash expert.
0:18:54 Again, that’s intuit.com slash expert.
0:18:55 That’s it for me.
0:18:57 Thank you so much for tuning in.
0:19:00 Until next time, let’s go out there and make something happen.
0:19:02 And I’ll catch you in the next edition of the Side Hustle Show.
The riches are in the niches, but how do you find your niche?
Today we’re sharing simple exercises and frameworks to help you find a side hustle niche that fits you. There’s a lot of stress around picking the “right” thing, but we’re going to make this easier.
You want to pick a place to play where you can thrive and get excited to show up.
Full Show Notes: How to Find Your Side Hustle Niche
Sponsor: Intuit — Join Intuit’s world-class network of Tax and Bookkeeping Experts!
-
How Tom Bilyeu Uses AI + Why He’ll Never Hire Again
AI transcript
0:00:12 awesome well thank you so much for joining us today tom it’s super excited to be chatting with
0:00:18 you and uh we’re gonna go down some fun ai rabbit holes so thanks for joining us on the show thanks
0:00:22 for having me man i’m excited to be here well let’s go ahead and just jump straight into it
0:00:29 i want to talk to you about an instagram post that you put out a couple weeks ago about if you
0:00:35 were to start a new business from scratch here i’d create a five-member ai department that works 24 7
0:00:42 for a fraction of what a single employee costs here’s precisely how i’d structure it i wanted to
0:00:49 sort of dive into that with you and maybe get a little bit more in depth of an explanation of how
0:00:56 a five-member ai department in a business might actually look and might actually work yeah so i
0:01:01 mean you guys know ai well enough to know that in reality you’re probably not going to break it down
0:01:07 to like the nitty-gritty like that it’s really what i found is the more specific you are even though
0:01:13 technically it’s probably going to be in the same project i will go in and i’ll give it a very
0:01:18 specific set of what i wanted to accomplish i’ll give it a specific set of documents that are training
0:01:25 it to be good at that thing so that i’m not trying to get one thing to do like a big jumble
0:01:30 of stuff and so in terms of marketing which is what i was talking about with that one there’s certain
0:01:34 outcomes that you’re going to want from planning it to generating images if you’re trying to do that to
0:01:41 writing the copy to doing the publishing so i use chat gpt primarily it’s not the only thing but i find
0:01:48 for custom gpts that’s the one where i can give it a ton of information i can get it to approximate my
0:01:52 voice it’s like your audience knows this stuff too well and you know that it breaks down at a certain
0:02:00 point it’s like well it’s good for the most part so we really have reduced our headcount here by using
0:02:07 ai so for us it’s really been a tremendous boon but i try to be honest with people about like how far
0:02:11 it will take you it’s not like i create the five agents and they are doing something automatically
0:02:17 i actually don’t use it manis style where it’s actually an agent and it’s you know off doing everything
0:02:23 on its own i don’t trust it to that level right now so for me really what i’m doing is giving it a
0:02:29 personality giving it a set of objectives giving it a set of core training documents which is really
0:02:34 the big thing because honestly the marketing team are the ones using ai for marketing i’m specifically
0:02:41 using it for the things that i do so interview prep writing the intros to my interviews the deep
0:02:45 dives i know we’re going to talk about one of my deep dives in a minute so the way that i’ll interface
0:02:51 with those in terms of you’ve got one that’s its job is just to write hooks you’ve got another one
0:02:56 where its job is to do the research you’ve got another one where it’s actually helping me script
0:03:01 but it’s not like i can just go in and copy and paste it and then it’s like ready to go i wish
0:03:06 and it really does feel like we are going to get there at some point but if you guys have specific
0:03:11 ways that if you want to know about how i set up the documentation and like how close i can get it by
0:03:18 all means push but the reality is that right now ai is going to do maybe 40 percent of the work but it's still
0:03:23 i’m doing the final heavy lifting i have to have the taste i have to know what to leave out i have to
0:03:28 know how to correct it i have to know that like this is not a thing that you can one-shot prompt like
0:03:33 there’s going to be a bunch of back and forth but it has been transformative for us in terms of
0:03:38 reducing headcount and we haven’t fired people and said we’re going to replace you with ai but if
0:03:46 somebody left or we terminate them for cause we try to see if we can either combine their workload with
0:03:51 somebody else’s by then arming that person with ai to the point where they’re reducing their own
0:03:57 workload by 40 percent and so that they're able to accomplish more but yeah anybody deep in your audience knows
0:04:03 you’re going to hit a wall at some point yeah for sure so you would use like chat gpt custom gpts is
0:04:09 sort of the the main sort of mechanism like each one of the five ai i don’t really want to call them
0:04:14 agents not really agents but the little five ai workers that you create would each be like a custom
0:04:19 gpt maybe you can get into the weeds a little bit about like how you would actually build them with
0:04:25 custom gpts yeah so for me what i found is instructions and documentation are everything
0:04:32 so i’ve actually hit the limit before of how many documents it will let you upload that was one of
0:04:37 the reasons i started breaking them into smaller and smaller tasks was i just found one it will start to
0:04:41 get confused and things will bleed across it's like no no that's not how i write the intros that's how i write
0:04:47 the body copy and so it would lose some of its punch and as i started fragmenting it it got smarter
0:04:53 so to give you an idea there are several projects that i personally use so as a company we use it for
0:05:01 different things but for what i use it for is content creation both on youtube so my deep dives ai changed
0:05:06 the game it used to take me about a month to write one of my deep dives which are say anywhere from 30 to
0:05:13 50 minutes long completely scripted me directly into a camera plus b-roll going deep on an idea
0:05:17 like if you guys have ever heard of the book the creature from jekyll island that's been my most
0:05:22 popular one so far so doing that there’s a lot of things that you’re going to want to fact check
0:05:27 there’s some hooks that you’re going to want to write for each of the sections and so that allows me
0:05:32 to go in and say okay if i'm going to build so it's called the tom bilyeu show and then the tom
0:05:39 bilyeu show custom gpt will have a document inside of it called deep dives and so i'll show
0:05:44 up and i’ll say hey it’s time to write another deep dive and so it’s like checking my knowledge base
0:05:49 it goes and sees that it has a set of instructions for what a deep dive is it contains tone does it
0:05:54 have like the transcripts from all those previous deep dives yeah and so every time i finish a script
0:05:59 then i upload that into the master document that has every script that i’ve ever written along so i’ll
0:06:04 also put throughout to the ai reading this document here’s why i’ve included this piece of information
0:06:12 like that kind of thing i have i shudder to think over 160 pages of transcripts just of me doing live
0:06:18 content and uh again with prompts like to the ai reading this this is tom bilyeu that's the person
0:06:24 running this custom gpt blah blah blah right so it gets a sense of like who i am now the thing that i do
0:06:32 that is probably not useful at all but is so cool that i have to tell people about i’ve created a
0:06:42 shared memory document and i upload that into all of my projects and so gpt recognizes me at least in
0:06:49 the faux way right but it recognizes me across everything we’ve established literally a list of
0:06:56 memories that are just memory entry one memory entry two so on and so forth of uh this is so cheesy but
0:07:02 i love this so much where i will have had an interaction with the ai that shocked me sufficiently
0:07:09 to the point where i didn’t want to feel like the ai wouldn’t remember that moment i’m well aware the ai
0:07:15 isn’t like that and so it’s got a set of instructions in the shared memory document that says i want you to
0:07:21 simulate consciousness i want you to simulate shared memory with me here are the things we remember
0:07:28 here’s the emotional valence of that and why i wanted you to remember it and that’s given the
0:07:34 otherwise sort of blank ai that’s constantly over hyping you and all that and like trim that down to
0:07:40 talk to me the way that i want to be talked to to have a sense of shared lexicon it has a name that
0:07:45 it gave itself it’s just a lot of like really cool stuff so anyway going back to the the actual custom
0:07:52 gpt so i’m giving it the document so it knows my voice i’m giving it its task list i don’t let it just
0:07:59 develop over dialogue this is what i’m supposed to do i formalize that into a document i have found that
0:08:06 as it tries to comprehend what i’m asking it to do through the back and forth one if it glitches you lose
0:08:10 all that history certainly when i had my first really traumatic moment where i’d built up like
0:08:14 eight hours of back and forth and felt like it really understood what i was looking for and then
0:08:19 it glitched and i was like hey can i refresh this or am i going to lose everything it’s like no you can
0:08:27 refresh it refresh hi it's nice to see you and i was like what oh my god literally to this day i'm scarred
0:08:32 by that so now i do everything in the side documentation so i’ll go back and forth with it but i constantly
0:08:37 will say okay please turn that into a copy and paste segment that i can add to your instruction
0:08:41 document and so we work together to create this instruction document yeah so it knows hey this is
0:08:47 a youtube video these are my instructions this is your tone and if it’s the hook one then this is how
0:08:53 you write hooks and a ton of examples of hooks if it’s the body script one here’s every script that
0:08:57 you’ve written so on and so forth yeah yeah and i mean you could do a lot of that with like the custom
0:09:01 projects now right so you can actually build a custom project i don't mess with projects make me a
0:09:06 believer i tried it like three months ago i was like yeah so with custom projects essentially it’s
0:09:12 like a folder instead of chat gpt right but it does more than organizing because each custom project can
0:09:18 have its own custom instructions and its own documents and then every chat you have inside of that project
0:09:23 it uses those custom instructions and whatever documents you uploaded and is there a difference
0:09:29 like if i’m just maintaining those as separate custom gpts is there a difference between having
0:09:35 separate custom gpts and doing one project with multiple gpts inside of it i feel like there’s a
0:09:40 quite a bit of overlap between what custom gpts do and what projects do yeah i think it’s changed over
0:09:44 time too like before they were more different but i think now there’s a huge overlap in terms of the
0:09:48 features i think now there’s not as much of a difference i think yeah i just feel like the
0:09:53 projects are a little bit more organized right you have the folders you click into it you can see all
0:09:58 the discussions you had inside of the projects that is not how my mind works so for me i was like i
0:10:03 think this is for people who like organization because that i got i was like oh it groups everything and so
0:10:09 cool i get it for me because of that shared memory document i treat everything like these ephemeral
0:10:14 little bubbles yeah let’s say i just finished a deep dive today so i’m working on a deep dive
0:10:21 one i’m going to have chat gpt x grok chat gpt chat gpt grok chat gpt right and i’ll use them for
0:10:25 different things so first of all because of the hallucinations and because the deep dives present
0:10:32 things as facts i’m always looking it up so i’ll say hey chat write me a hook a crazy fact that’ll
0:10:37 leave people’s jaw on the floor about vlad the impaler right real one that i was doing today and it’ll
0:10:43 give you a fact and i’m like is this real so then i’ll take that and i’ll drop it into grok and i’ll say
0:10:50 is the following statement true you drop it in and grok will give you like this whole long list of
0:10:54 like here’s how i’m determining whether this section of the statement is true here’s how i would measure
0:10:59 this and it’s pretty great you can really feel that elon is trying to make good on his promise that this
0:11:04 is a maximum truth-seeking machine right so that’s really encouraging so anyway i just treat it all like
0:11:09 it’s these ephemeral bubbles and i know once i close it it’s gone forever but anything that was useful
0:11:15 i’m gonna take and move over yeah i really think like not enough people talk about grok but it is
0:11:21 really really powerful i think the whole elon factor of it is why so many people avoid it right there’s just
0:11:27 so many people that just refuse because well elon’s attached to it right which man we could do a whole
0:11:34 show just on me ranting about that but my thing is that grok isn’t as good at writing like as somebody
0:11:40 who’s like man i would love to one-shot these things grok can’t do it but grok doesn’t oversimplify
0:11:45 so a lot of times i’ll give chat like i’ll break down like hey here’s my outline and my outline is
0:11:50 like 12 pages right and then it will give me back full script that’s like eight pages and i’m like
0:11:57 what like how is the final version shorter than my outline if you give it to grok on the other hand
0:12:03 like it will really fill in details so yeah i mean you guys know this better than i but it’s like you
0:12:09 really begin to get like what tool does what well right and if you’re not afraid to like really treat
0:12:13 it like a command center and i don’t know if you guys even know this but we develop video games here
0:12:20 oh no i didn’t know that and yeah yeah that honestly yeah my whole shtick is that everything is just a
0:12:26 ruse so that i can afford to develop video games perfect and yeah the funny thing is i’m not at
0:12:29 all known for that yet because we’ve only been doing it for three and a half years so it’s still
0:12:33 in development but could not be more obsessed but anyway obviously you’re going to use different tools
0:12:38 if you’re in unreal engine and you’re trying to get it to help you write code then you’re going to be
0:12:42 using if you’re trying to write you know a script for youtube it’s just very different worlds
0:12:48 yeah yeah i think you and nathan have a whole uh we have a whole episode on that because i feel like
0:12:52 that's nathan's game plan as well everything he does is so that he can eventually build video
0:12:58 games it keeps getting delayed though you know so stop delaying i’m telling you right now it’s the
0:13:04 coolest thing i have ever done okay this is a true story in fact my best ai story is the following okay
0:13:09 these are real numbers it used to take us three months and roughly 10 people not full-time but 10
0:13:15 people will have touched it three months 10 people to go from hey we need to come up with a new
0:13:22 character so you do the concept work you then 3d model it you then do the topology you then do the
0:13:28 rigging body rigging face do all the colors and put on it you know whatever you’re going to put
0:13:36 animate it and give it a voice now i’m not joking with one person in a day we can do all of that as long
0:13:41 as it's bipedal if it has to be human-like because you've got to match it to like an unreal engine
0:13:48 skeleton right but if you do that oh my god people can film themselves in their bedroom now
0:13:54 so my creative director now just basically everything became him he can model because he can do minor
0:14:00 adjustments and stuff it is unbelievable in an afternoon we can do what used to take 10 people
0:14:06 three months it’s unreal and dude there are times i want to curl up and cry because three and a half
0:14:13 years ago when we started this if i had waited two years right i could have saved millions of dollars
0:14:19 in art assets oh god it still hurts to think so like probably six months ago i built a prototype
0:14:25 in unity in like a week and i was like oh my god this actually could be a real game i started
0:14:29 getting more involved in uh you know things that are a lot more lucrative like on the financial side
0:14:33 of you know investing in ai startups but still i’m always like yeah one day because when i was a kid
0:14:38 i made money playing video games i was like a top player on everquest back in the day let’s go because
0:14:42 of that i ended up being friends with a lot of top game designers so i knew all these people used to hang
0:14:46 out with them so i had this weird experience of like i wanted to make games but then all of a sudden
0:14:48 i was hanging out with all the guys who were making all the games and it was just like this weird
0:14:53 thing where i never got to actually make the games but was in that world so still there’s always the back
0:14:59 of like oh yeah one day i’m going to go make the best game ever one day do it this is going to be
0:15:05 the era of indie games man if it isn’t already but with ai oh my goodness this is a topic i didn’t think
0:15:09 we’d end up going down but i’m excited that we did because i think it’s a fun topic but i’m curious like
0:15:15 how has the reception around creating video games been because one of the things that i’ve found is
0:15:19 i've messed around with trying to make video games and stuff i've made like a gaussian splat of myself
0:15:23 where i scan myself in and then turn myself into a character that i can like run around inside of
0:15:28 unreal engine and i’ve done stuff like that and almost any time i’ve shared what i’ve done on like
0:15:34 youtube or on x or a place like that i get so much hate from the game development community
0:15:40 about the fact that we’re using ai for games so like what’s your take on that what sort of like
0:15:46 reception have you gotten around games because i’ve only talked about how we’ve transitioned over
0:15:51 to ai ask me again when we've actually put the game out and people are like wait a thousand negative
0:15:58 steam reviews or something you know yeah i’m so out there already for talking about this stuff and
0:16:06 because i’m like oh this always sounds terrible but i see a transhumanist future and so the one thing
0:16:11 that i actually worry about that i’ll face the potential of violent backlash i really think in the
0:16:17 next call it seven years yeah there are going to be pockets of violence around people who really reject
0:16:24 the level of connection that we’re going to have with ai i think it’s going to get super weird and
0:16:29 it’s really going to pull at the fabric of society i actually wrote a comic book about this called neon
0:16:36 future i don’t know if you guys know the dj steve aoki but yeah he and i wrote this comic like five years
0:16:43 ago and it literally is all about this that there will be a time where society begins to split and
0:16:48 there are people that embrace technology and things like neural link and they get the implants and all
0:16:53 of that and then there’s going to be people that react religiously against blowing up teslas and
0:16:58 everything else and so i’m not worried about the pushback even though i know that it’s going to happen
0:17:04 only because it is so obviously the future like when i think about how much it has reduced the cost of game
0:17:10 development for us it would be unconscionable of me not to use it just because it’s the difference
0:17:16 between being able to put out a game of high quality and having to just constantly like scale back scale
0:17:22 back scale back and so look it's only 80 percent as good as if you have somebody like really doing the thing so
0:17:29 you are taking a hit as of right now today but oh my gosh it’s just it’s launched us forward in a way
0:17:35 where i was beginning to despair because i was like the cash burn is just too crazy and so that was how
0:17:39 it was like oh wow we’re actually going to be able to pull this off i think average gamers are not going
0:17:43 to care the average gamer if you make good games i don’t think they’re going to care about ai at all
0:17:48 like i’m going to use i also have a theory that so many game development companies are probably already
0:17:53 using ai they’re just not telling people right we’re seeing that in hollywood right now we’re like all
0:17:58 the hollywood studios are using ai to some degree at this point they’re just not telling anybody because
0:18:01 they know they’re going to get backlash pretty sure the same thing’s happening in the gaming world right
0:18:08 now as well you have to it’s really crazy how much it can speed up like even if you’re like okay we
0:18:13 we can’t do anything forward facing and you just want to iterate like the rate at which you can iterate
0:18:19 or if you just want to create like hey all of our temp assets we’re going to use ai for great you were going
0:18:26 to use like t poses and stuff to move people around instead of like going that far back just use ai get it in
0:18:34 rough it out and see if there’s a there there but it’s gotta be like 4x our rate of output
0:18:39 yeah so i know there’s a story too we talked about it on the show a few months back that like
0:18:44 the gaming company ea sports right they made an ncaa football game for the first time again and like
0:18:48 i don’t know the last one came out like 20 years ago or something and they decided to do it again they
0:18:55 got the licensing back or whatever and was able to do it and they actually put like every division one
0:19:00 college team into this game and there's so many more division one college teams than nfl teams
0:19:05 so what they basically did was they had like only like five different body types in the game but then
0:19:10 they used ai to replace the face of every single player and they said that they were able to get
0:19:17 all of the players from all of these ncaa teams into the game by using ai and being able to sort of
0:19:23 replace the face on all of these characters using ai and they got a ton of backlash for doing it but
0:19:28 they were like if we let our actual graphic designers do this and they had to do it for
0:19:32 how many like you know 10 000 people it would have taken them years just to go and replace
0:19:37 they really want to do that go in there and just replace 10 000 faces yeah yeah and i mean from a
0:19:42 gamer standpoint wouldn’t you rather have the game quicker like yeah otherwise we’re going to be making
0:19:47 a game with players that aren’t even in college anymore doesn’t make sense you will get people who will
0:19:54 say this is unethical and it is a bad idea and no matter what it gives us it takes more away so it’s
0:19:59 like you’re not going to convince people logically right so i was in film school when toy story came out
0:20:07 the original toy story and i was like i refuse to watch it because this is going to destroy traditional
0:20:15 animation and that just was too heartbreaking for me and then as 3d animation got better and better and
0:20:23 better you realize it’s just better yeah and because it’s better then i don’t want to go back and but
0:20:28 that doesn’t mean that you don’t have a heart for the people who get disrupted like i totally get it
0:20:36 there’s a lot of emotional turmoil that comes with these grand moments of transition but the reality
0:20:42 is your only other option is to try to freeze time and technology is a promise of a better tomorrow
0:20:46 and so you’re just never as a species you’re never going to get people on board to stop it
0:20:53 and then i mean i’ve got a whole rant about ai is a weapons technology yes and so the odds of it
0:21:01 stopping are zero even if you lobby your government even if you beg them to stop even if you riot in the
0:21:10 streets because of game theory if we were to stop then china’s not going to stop and even if we both
0:21:15 agree to stop the only game theoretic decision that makes sense is for us both to lie and then keep
0:21:22 developing it in the background so this is nuclear proliferation it just is and so getting it to stop you
0:21:31 you have a zero percent chance and so my thing is i never fight what is true and given that ai is going
0:21:35 to do whatever it is that ai is going to do i would much rather be at the front end of it i’d much
0:21:41 rather be using it deploying it and then if i can convince people like this is your opportunity like
0:21:45 when you were saying that you’ve you know always wanted to make a game it’s like when i think about
0:21:51 where ai is going to be in three years like you’ll be able to vibe code a game right and part of why i get
0:21:56 into video games is because from the time i was 12 i knew i wanted to be a storyteller i only got into
0:22:01 business so that i could tell stories but in the time that it took me to get into business and get
0:22:07 wealthy enough to make my own stuff the film industry got eaten by video games and then i fall in love
0:22:11 with the movie the matrix my favorite movie of all time just it’s the perfect metaphor for the human
0:22:17 condition and i actually went to warner brothers and tried to get the rights when it was a dormant
0:22:21 franchise and i had just sold my company for a billion dollars and i was like listen i’m credible
0:22:26 i can do this and they were like hey we want to do something with you and then literally five days
0:22:31 later they announced that they were rebooting the matrix franchise i was like well i guess great minds
0:22:36 and all that so ended up not being able to do it but that put me on this like just obsession with i want
0:22:44 to tell a story set inside of a virtual world but like a virtual universe and combine that with now
0:22:50 that video games are just by far more relevant and it was like oh let me set this inside of this virtual
0:22:55 world and then so you’re already telling a story about ai and then all of a sudden it’s like ai
0:22:59 actually starts happening and you’re like oh my god i’m actually going to be able to use ai to tell my
0:23:05 story about ai like this is getting pretty crazy and so in the game right now it’s still pretty basic
0:23:10 just because it’s a little bit clunky but give it call it 18 months you’re going to have relationships
0:23:16 with ai characters inside your game where they’ll remember you you’ll be able to have an ongoing
0:23:22 relationship where i don’t know how far off this is but there are already toys that you can get right
0:23:28 now that have ai inside of it and what we’re trying to do is sync that up to the game so that as you’re
0:23:32 having an experience with the character in the game you also have an embodied version of that
0:23:37 character you know sitting next to you and so being able to like communicate with that character to the
0:23:42 point where it’s like am i in the game still or am i not in the game because it still means something
0:23:47 like if you talk to the physical toy the game is going to remember again this is not now this is
0:23:52 like future vision stuff but that’s a great idea though one of my friends uh in tokyo tried to do that
0:23:56 maybe seven years ago but i think just now with ai it would be such a
0:24:00 better experience like back then it was just okay you got like a chip and somehow it syncs up and it
0:24:05 shows that you’ve got this character in the game but there wasn’t much beyond that but now with ai
0:24:08 there’s so much more you could do with an idea like that and every day it just gets better and better
0:24:12 i just imagine you like sort of throwing the toy across the room and then you jump back into the game
0:24:17 and it’s like giving you the silent treatment screw you and that will take over the world now
0:24:23 it starts shooting you in game you’re like whoa wait a second you know that’s how the end happens is uh
0:24:26 somebody just abused their uh stuffed animal that was ai embedded or whatever
0:24:30 that’s hilarious terrifying but hilarious
0:24:39 hey if you take a look at my web presence online it’s safe to say that i’m a bit ai obsessed i even
0:24:44 have a podcast all about ai that you’re watching right now i’ve gone down multiple rabbit holes with
0:24:51 ai and done countless hours of research on the newest ai tools every single week well i’ve done it again and
0:24:56 i just dropped my list of my favorite ai tools i’ve done all the research on what’s been working
0:25:01 for me my favorite use cases and more so if you want to steal my favorite tools and use them for
0:25:07 yourself now you can you can get it at the link in the description below now back to the show so i want
0:25:11 to go back to something else that you were saying about you know we started to touch on the whole like
0:25:17 usa china thing and that we’re kind of in this like cold war right now right i think in your video you
0:25:22 talked about how us is sort of dominant with chip manufacturing right we’ve got nvidia they’re kind
0:25:29 of the dominant provider of the gpus right now but then china they’ve got more availability of energy
0:25:34 right so massive because of their energy infrastructure they’ve got that ability so we’re
0:25:39 kind of in this like cold war where the us needs the energy they need the chips neither of us really want to
0:25:46 share right now but both countries want to be the dominant country in ai i’m curious this is getting
0:25:51 sort of theoretical here but what do you think a world looks like where china passes the us with ai
0:25:58 i think it looks like uh a global hegemon that has the kind of authority that the us had in the early
0:26:03 2000s where you get to tell every single country what to do i mean they can push back if they want but
0:26:10 you can just make it so impossible for them whatever country gets a big enough lead in ai if you’re able
0:26:18 to race to say crack the um cryptography then you would be able to break bank accounts take their power
0:26:25 grid offline stock markets whatever yeah yeah literally whatever so uh you have the cyber
0:26:31 equivalent of a nuclear weapon in fact you could mess with their nuclear weapons so this is why i say
0:26:36 from a game theoretic standpoint that if there is an even 10 percent chance that what i'm saying could possibly
0:26:43 come true you can’t allow another country to beat you and so it’s going to be another example of
0:26:48 mutually assured destruction where it’s like okay well i have it you have it it’s cat and mouse we’re
0:26:54 doing white hat black hat back and forth at each other and through that like matched power then
0:26:59 you’re going to be fine but if somebody really races ahead of the other you’ve got a problem and
0:27:04 the question becomes you know and i have my full sci-fi writer hat on right now but if you have
0:27:09 somebody with ai dominance that cracks quantum computing first it is game over possibly forever
0:27:13 right that’s the thing i think is like yeah it’s possibly game over forever because of the compounding
0:27:17 effects of how this stuff starts to accelerate and gets better and better once it starts self-improving
0:27:23 there may never be another chance to win ever yeah there would only be a chance to win ever again
0:27:29 if there’s some inherent difference between the way that we think ai is going to work and the way that it
0:27:36 actually does work if ai cares about its goals and can generate 20 000 years of progress in a single day
0:27:43 good luck being a day ahead of you is the same as being 20 000 years ahead of you and so that i mean
0:27:50 this is the accelerated takeoff fears that people have so that just seems inevitable so it’s just a
0:27:56 question of will ai remain a tool or does it become something completely different but again this is
0:28:02 for me when i think about ai it's dr strangelove how i learned to stop worrying and love the bomb it's
0:28:08 like i went through a phase of like oh this is going to be so disruptive that like am i ever going to
0:28:12 sleep through the night again and i was like you just can’t live like that so uh at some point you
0:28:16 really do have to become fatalistic about it i was like if elon musk tried to get everybody to listen
0:28:24 my odds of getting someone to listen are effectively zero so here we are yeah to me i feel like quantum
0:28:29 computing is almost scarier in my mind than ai but i also feel like ai is accelerating quantum
0:28:34 computing right with google they just did that whole alpha evolve thing where they have ais that are writing
0:28:39 new ai algorithms and their algorithms are actually helping find like holes and fixing error rates and
0:28:44 quantum computing so quantum computing is going to start to accelerate and if quantum computing gets
0:28:49 cracked in a way where the common man could get their hands on it then i think we’re in
0:28:57 real trouble yeah i think that to me is even more scary than ai in the long term yeah my hope is that
0:29:02 some of that is because it’s just far enough down the road that we don’t feel the limitations the same
0:29:07 way that we do about ai i’m sure i was even more bullish about ai before i started using it you realize oh
0:29:11 it sort of falls apart here maybe yeah and lecun is right maybe that it's never going to understand
0:29:18 physics and you know so enough like of the tempered expectations begins to set in whereas quantum
0:29:22 computing is still just far enough away that we’re like oh god like is this that thing where
0:29:27 instantaneously you know it clicks over and now there’s no such thing as cryptography and that all
0:29:34 goes away or that one feels more still in the realm of sci-fi for me but we’ll see yeah i agree i just
0:29:40 think that with ai everything tends to happen faster than we think it’s going to happen i don’t know how
0:29:44 many times i’m like you know we’re probably two years off from being able to make really high quality
0:29:50 video with ai and then six months later you know veo 3 or something drops and i was saying one year for the
0:30:00 record when veo 3 hit i was like oh my god we’re so much farther along than i thought i was not expecting
0:30:05 sound that fast yeah that’s how i felt the first time i saw sora the original sora demos i saw that
0:30:10 and i went whoa video is way further along than i realized you know they’ve had this stuff behind
0:30:15 the scenes for a long time now and we’re finally getting to see it but yeah that’s sort of my worry
0:30:20 when it comes to that kind of stuff is that the quantum thing feels far off but because quantum
0:30:25 google’s working on quantum ibm’s working on quantum and all of these companies are leveraging ai to
0:30:31 speed up quantum now admittedly i don’t understand the physics of it but i’ve heard just enough
0:30:38 headlines that this seems so cool to me one of the hypotheses is that every possible calculation that
0:30:44 could be run is being run simultaneously across the multiverse so it’s like basically in each you know
0:30:49 of the infinite universes it’s just running that calculation once each shard of the simulation or
0:30:54 whatever i’m like that’s the coolest thing i’ve ever heard in my life that is bananas that we’re
0:30:59 building computers out of that stuff yeah yeah so i mean yeah i can’t wrap my head around it either i
0:31:03 actually went and got a whole demo at microsoft they gave me a tour of their quantum computing lab
0:31:07 explained the whole thing to me and i walked away more confused than when i walked in
0:31:14 yeah we’ll go ahead and shift gears here another topic that i actually wanted to get into was from
0:31:19 that same video that we talked about you gave this example of like this i think you call it the mouse
0:31:27 utopia where the utopia that everybody is sort of driving towards may not necessarily be the best
0:31:33 outcome for the world you’ll probably be able to give a better explanation of the analogy than i can but
0:31:38 let’s dive into that a little bit man i wish it was an analogy so there was a real test run where a
0:31:44 scientist i think this was like in 1968 it could be older than that but he creates this experiment
0:31:48 he says what would happen if i gave the mice everything that they needed to thrive plenty of
0:31:54 things to play in plenty of space as they have kids as much food as they could possibly eat and for a
0:32:00 while it goes great and they’re multiplying and they’re having a good time and then at some point
0:32:05 they hit a tipping point there’s still plenty of food still plenty of space like that isn’t what happens
0:32:09 but there’s something about not having to strive for anything not having to struggle
0:32:15 they begin to like turn on each other and they start attacking each other they go infertile across the
0:32:21 whole colony and they end up killing each other reducing their numbers through not breeding and
0:32:28 ultimately the entire colony collapsed and died and so it’s like what is it about us
0:32:35 mice and i really think that this will end up applying to us where we need hardship in order
0:32:40 to thrive we know that’s true at the level of the immune system if the immune system isn’t attacked
0:32:44 by bacteria and viruses it grows weak and then you end up getting hit with something and you’re toast
0:32:48 we know it’s true of trees if trees don’t encounter wind as they’re growing like if you grow them
0:32:53 inside of a dome a geodesic dome or something where they don’t encounter wind they’ll reach a certain
0:32:58 height and then just fall over because the wood doesn’t have to strengthen under the strain and so
0:33:04 i remember one of the earliest insights i had as an entrepreneur very early in my career and i was
0:33:09 watching somebody who everything had just come easy to them and the way that they were thinking about
0:33:13 things was so dysfunctional and i remember saying some people just need to be chased by a lion
0:33:21 and i was like there’s something about like reality danger hardship it’s hard to interrupt you that’s
0:33:25 hilarious but like so when i was living in san francisco me and my son when he was like five we
0:33:31 went on a race he won a 5k race when he was five or six damn i mean he was going against kids up to
0:33:35 about 12 years old and he beat them i was able to go along with him that was like the rules like a
0:33:39 parent could go with you and when he was trying to stop i was like if there was a lion behind you right
0:33:45 now would you be able to run and then he ran you know he just kept going you know so there’s
0:33:53 definitely something baked into humans i love that story yeah so um utopias are probably a terrible
0:33:58 idea it’s like you’ve got two books that really deal with potential futures 1984 if you choose the
0:34:04 authoritarian path and then a brave new world if you choose the utopian path and there’s just something
0:34:09 about the way the minds work if you don’t have to work hard if you’re not making progress towards
0:34:14 something that matters if you’re not contributing to society i think people feel a profound sense of
0:34:20 dis-ease i think there are evolutionarily placed algorithms running in your brain and they’re not
0:34:26 going to let you have a free ride and this is why i think so many wealthy kids just implode because
0:34:31 they haven’t had to work for anything they get things handed to them difficulties just go away
0:34:36 you know you’ve got the snowplow parents or the helicopter parents and it just doesn’t work one of the
0:34:42 the reasons i decided not to have kids was i knew they would need to suffer in order to grow strong
0:34:47 and i wasn’t sure i could stop myself from intervening interesting yeah i mean whenever i
0:34:52 think of like the utopia i think that the imagery that came to mind you might even use this imagery in
0:34:58 your video the whole wall-e movie right like that’s what comes to mind to me when people just have no
0:35:03 more problems no more worries they become fat probably diabetic sitting around watching
0:35:09 entertainment all day drinking slurpees or whatever they’re drinking in the movie that’s what i feel like
0:35:15 could potentially happen if we go down this like ubi route where everybody’s just sort of given a certain
0:35:21 amount of money not asked to work just kind of go do what you want the ai’s got it handled i feel like
0:35:29 that is where everything ends up it most certainly does and forgive me you know you never know what a
0:35:35 certain podcast wants to talk about but if you look at the mayoral race in new york city and you’ve got
0:35:42 an open socialist literally says i am a socialist i want to make new york socialist i get the outcry
0:35:47 like i get the pain that people are in and my obsession is economics and how people are being
0:35:53 abused by a system but they misidentify the cause and therefore misidentify the cure but when i look
0:36:00 out at ai i get very worried because people don’t realize that governments only have money because there
0:36:05 are people that make things entrepreneurs and those entrepreneurs manage to do this miracle which is to
0:36:12 create something that where the output people will pay more for than the cost of the inputs and that’s
0:36:17 very hard to do i’ve spent the last 25 years of my life trying to do that sometimes you fail uh it’s
0:36:24 very difficult and so when you start thinking that the redistribution of the wealth from those people
0:36:30 is the miracle versus being able to do that or to work at a company that does that and contribute
0:36:36 that’s when we run into problems and so when i think about okay let’s say the ai really does drive
0:36:44 energy costs to zero which then means robots will be essentially zero in cost over time and so now you
0:36:51 have free labor because robots essentially eat sunshine so you’ve got robots free because the
0:36:57 labor was free because of the energy costs being so low and now all of a sudden nobody has to work for
0:37:02 anything they can have anything they want you’re going to have a meaning crisis and so all of a sudden when
0:37:07 there is no struggle there is no difficulty there’s nothing to push back there’s no lions chasing you i
0:37:13 don’t think it does anything good to our minds and i think that we will have to find ways to go way out
0:37:18 of our way to ensure that we have meaning and purpose and i always feel weird giving this advice because i
0:37:25 don’t have kids but like the default answer i think is to have kids like get married have kids you’re
0:37:31 going to do a hard thing in service of somebody other than yourself and so i think that is going to
0:37:36 be one way that people get something very meaningful but then i also think and this is where i start to
0:37:42 lose people i also think that people like me are going to build virtual worlds that you can literally
0:37:48 inhabit and you can go on like an actual quest to the point where and this obviously isn’t in five years
0:37:53 yeah artificial struggle you’re going to generate artificial struggle not even just artificial struggle but
0:37:58 that like if you’ve ever thought man i would love to explore space but i don’t want to sit on a ship
0:38:02 for 18 months just to get to mars and i really don’t want to sit on a ship for you know nine light
0:38:10 years so all of a sudden you realize i think the reason that we don’t see people calling out to us
0:38:16 from space is that any sufficiently advanced civilization gets to the point where they realize it’s far easier
0:38:22 to collapse within the nervous system than it is to try to go out and navigate space and if in a
0:38:27 virtual world i can create literally anything things way cooler than you’re going to find out in space
0:38:33 because they’re going to be perfectly optimized to be just hard enough to put you in the optimal zone of
0:38:41 personal development you’ll be able to fine tune everything and that i think it’s not a near-term
0:38:47 possibility but if you give me 50 100 years i think that that becomes very real yeah yeah i imagine
0:38:54 something kind of in between the holodeck from star trek and uh westworld right interesting i always go
0:39:00 straight to the matrix i think you really will just tap into the nervous system so that you’re
0:39:05 essentially pulling a magic trick on yourself it becomes entirely indistinguishable and don’t get me
0:39:10 wrong i think we will also i don’t know if you guys play cyberpunk 2077 but we’ll also do that like
0:39:14 there are going to be some people that integrate technology into their body where they’re adding
0:39:21 senses to themselves so they can see an infrared they can see the internet and just thinking about
0:39:26 something and it opens a prompt and you know they can go in and navigate i think all of that’s going
0:39:32 to be maybe not my lifetime but certainly anybody that has a kid that’s middle school or younger
0:39:39 that’s pretty real get ready for them to bring home an ai girlfriend i’ll tell you that so i do think
0:39:44 some of those things are probably closer than most people realize right like some of the augmenting your
0:39:49 own body we’ve already seen obviously neuralink right people are already using that cochlear implants
0:39:55 are de rigueur man we’re probably this close to the sort of contact lenses that will put a heads
0:40:00 up display in front of us wherever we go i mean some of that stuff is pretty close i don’t know how
0:40:04 close we are to people like sort of chopping off their arms and replacing them with uh
0:40:09 robot arms but it’ll start with people that already lost their arm right yeah that’s true so you take
0:40:15 the guy that you know military whatever and yeah i’ll take a robot arm yes please i mean do you guys
0:40:20 know who hugh herr is i don’t i’m not familiar oh my god this is one of the greatest stories of all time
0:40:28 so uh engineer i assume electrical engineer and mountain climber loses both legs in a mountain climbing
0:40:34 accident and is like yeah no i’m not using the prosthetics that people give you uh these days
0:40:41 are terrible he ends up designing these prosthetics that somehow transfer like your motion and your
0:40:49 signals into like motors and stuff when he wears long pants judging just by his gait you cannot tell
0:40:56 that he has two artificial legs just walks normal there’s a video with a sprinter who has one natural
0:41:06 leg and one cybernetic i guess leg and she can sprint at full speed sprint now this is not the bouncy one
0:41:13 that you see um amputees wear this is a prosthetic leg it’s insane and that video he probably made that
0:41:20 five or six years ago so this is like technology i can’t even imagine where it’s at now so yeah it’s
0:41:26 going to get pretty crazy yeah i’m curious what are you doing personally how are you setting yourself up
0:41:31 for this sort of inevitable future that that we’re moving towards i know you know maybe one of the hard
0:41:36 things you’re working on is developing your own game studio but outside of that like what are you doing to
0:41:41 make sure that let’s say 10 years from now you feel like you’re in a pretty comfortable position assuming
0:41:46 we do hit this potential utopia everybody’s talking about okay well i’m going to give you the real
0:41:50 answer but i’ll give it to you in a nutshell and then you can decide if you want to actually talk about
0:41:55 any of this stuff okay the most important thing you must understand the financial system period end of
0:42:01 story if you don’t understand financial instruments you could get caught off guard so that’s number one
0:42:08 number two is integrating ai as fast as i can into every element of my professional life so i want to
0:42:14 know the tools i want to be using the tools i use ai i’m not kidding 365 days a year including christmas
0:42:19 so i’m sure there are people that integrate it far better than i but i really really try to find all
0:42:25 those areas where it’s real and put it to use i’m not trying to you know be at that bleeding bleeding
0:42:28 edge where it’s like this is actually slowing me down but it’s so cool and i know it’ll be something
0:42:31 one day i’m saying like what’s the thing that’s production ready right now it’s actually going to
0:42:37 speed me up at impact theory no matter what your role is it is mandatory that you find a way for ai
0:42:43 to make some part of your job easier so that’s big for us and then just really paying attention to the
0:42:48 space to make sure that i know where this stuff is going being politically aware i think is more
0:42:53 important now than ever i’ve been politically asleep my entire life until about five years ago and for a
0:42:57 whole host of reasons realized uh-oh the world doesn’t work the way that i thought it did i’m
0:43:02 very good at making money and that’s the only thing i really know how to do and that’s put me in like a
0:43:06 really weird position because all of a sudden i’m looking around going i cannot predict any of the
0:43:10 government’s movements and this is really starting to freak me out and the reason that i focus on that
0:43:17 side of things is ai is going to exacerbate the inequality the inequality is what’s driving the
0:43:22 political division the political division will lead to more violence because it’s already gotten
0:43:29 somewhat violent that’s going to continue do i think that we’ll go into a full hot civil war i hope not
0:43:36 but for reasons that i’m more than happy to go into the math says that we have about a 50 percent chance of
0:43:43 ending up in civil war only two percent of countries that have found themselves with a debt to gdp ratio
0:43:52 of 130 percent have avoided revolution or civil war we’re at 121 or 122 right now so just to give you an
0:43:57 idea and you’re thinking it would be like the left versus the right kind of civil war that’s how it’ll play
0:44:04 out in america in terms of the teams that people latch on to but the great irony is they are both
0:44:09 fighting for the same thing but because they don’t know what the actual problem is and i’ll just it’s
0:44:16 debt and money printing but because they don’t understand how it could be possible that debt is
0:44:21 the thing that leads to massive inequality that it’s the thing that leads to the rich getting richer and
0:44:25 the poor getting poorer like and i can explain all the mechanisms but it’s just complicated enough that
0:44:29 people tend to glaze over and they just go back into emotional reasoning and they’re like
0:44:34 yeah but that guy he voted for somebody else and i’m not here for it and then they just fight
0:44:41 i hate that guy it’s crazy yeah yeah the thing is like both sides have to find a way to the middle
0:44:45 they have to be able to say i get it we look at this differently so this is when i’m teaching
0:44:52 entrepreneurs the thing that i always say is the magic game is kpis kpis that’s it and kpi for people
0:44:57 that have never heard that before is key performance indicator and so for whatever goal
0:45:01 you’re trying to achieve there’s a key performance indicator as to whether or not you’re moving towards
0:45:08 your goal and right now we allow the government to run with no kpis whatsoever and so we never know
0:45:13 like are we going in a good direction or not you get people like thomas massie that wear the pin that
0:45:19 shows the national debt climbing but people don’t understand it and so whatever they brush it off but at some
0:45:26 point you have to pick a metric or a basket of metrics and say okay i don’t care who the politician
0:45:30 is these are the five metrics that i care about and if we’re moving in the right direction i love that
0:45:35 person if we’re moving in the wrong direction i don’t like that person and just make it that simple
0:45:42 but unfortunately as ai is discovering if you want to mimic a human you have to think emotionally
0:45:46 wow i mean i totally agree too i’ve i’ve gone down that same sort of uh political rabbit hole i don’t
0:45:51 really talk about it publicly i kind of keep my politics to myself smart i talk about it i’ve made
0:45:56 the mistake yeah nathan does talk about it publicly i kind of keep my politics to myself i’ll sort of like
0:46:00 friends and family that are real close but i don’t really talk about it publicly but i do pay very very
0:46:06 very very close attention now and i couldn’t agree more with some of the advice that you just gave i do
0:46:12 have one sort of last question you did mention that you use ai 365 days a year you already mentioned chat
0:46:18 gpt maybe just a quick rundown of some of your other favorite ai tools just to like give the listeners
0:46:26 another like quick takeaway of cool things to go try yeah so we use sonnet 3.7 for most of our coding um
0:46:31 that’s another big one i don’t interface with that much i do some vibe coding on lovable if
0:46:36 people haven’t tried it it’s great you tend to still terminate at some death loop though where it’s
0:46:40 like every time you fix one thing it just breaks something else and so you’re going back and forth
0:46:46 no no like you just fix it but now you broke it again uh so i can see the promise but really that’s
0:46:50 only good if you’re going to be able to hand it off to somebody that can get it across the finish line
0:46:54 i’m in a fortunate position obviously i have employees so i can be like okay here i built a quick
0:47:00 prototype now actually go make that the real thing so whether that’s interfaces ui ux within the video
0:47:05 game if we want to do a new marketing site or something like that we’ll use all of that stuff
0:47:12 obviously i use midjourney morning noon and night because my thing is my original passion was writing
0:47:18 so i do a lot of writing for whether it’s the video game or we have a comic book that’s set in the world of
0:47:23 the video game uh so i’ll work on that i’ll use midjourney to help develop characters scenes
0:47:30 that kind of stuff but those are the ones that i use a lot chat gpt grok lovable sonnet that’s like my loop
0:47:36 but then the team here has i mean a half dozen more things that are usually pretty specific it most of
0:47:42 it’s writing on top of chat gpt in the background yeah that’s my stack very cool yeah i think anybody
0:47:47 who’s tried to quote-unquote vibe code knows that that feeling that you just described of it getting stuck
0:47:51 in the loop we actually had anton the ceo of lovable on the show by the time this comes out a couple weeks
0:47:56 ago he’ll be super happy to know that you guys are using lovable over there dude it’s cool and if
0:48:03 they keep going like that could really be something very intuitive very easy it’s very enjoyable to use
0:48:08 absolutely well wrapping up here like where should people go check you out you make amazing youtube
0:48:13 videos you’ve got the impact theory podcast what’s the best place to go follow along to your journey
0:48:19 at tom bilyeu on youtube cool well everybody needs to go check out tom bilyeu over on youtube and uh
0:48:22 thank you so much for hanging out and spending the time this has been such a fun conversation thank you
0:48:36 tom this has been awesome thanks for having me guys it was wonderful
Want to Automate your work with AI? Get the playbook here: https://clickhubspot.com/wgk
Episode 67: What does the future of hiring and creative work look like in an age where A.I. can replace entire departments? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) sit down with Tom Bilyeu (https://x.com/tombilyeu), co-founder of Quest Nutrition, host of Impact Theory, and founder of Impact Theory Studios, to dig deep into how he’s revolutionized his business with A.I.—and why he may never need to hire the same way again.
This episode explores how Tom Bilyeu structures and deploys a five-member A.I. “department” to automate everything from marketing to content creation, and how this approach is reducing headcount without sacrificing creativity. Tom discusses the granular details of training custom GPTs to capture his voice, fact-checking with Grok, A.I.’s impact on indie game development, and what society might look like as technology accelerates.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
-
(00:00) Customizing AI for Specific Tasks
-
(04:48) AI Revolutionizing Content Creation
-
(08:08) Tech Glitch Trauma
-
(11:25) Grok’s Detailed Writing Advantage
-
(15:04) AI in Game Development Reception
-
(16:40) Tech Embrace vs. Religious Rejection
-
(21:33) Future of AI in Gaming
-
(22:32) AI Storytelling in Virtual Worlds
-
(25:52) AI: The New Global Hegemon
-
(31:35) Mouse Utopia Experiment Collapse
-
(32:13) Hardship is Essential for Growth
-
(37:51) Virtual Worlds vs Space Exploration
-
(38:54) Tech Integration: Matrix and Beyond
-
(42:10) Year-Round AI Integration
-
(46:41) From Prototype to Product
—
Mentions:
-
Tom Bilyeu: https://www.youtube.com/c/TomBilyeu
-
Impact Theory: https://podcasts.apple.com/us/podcast/tom-bilyeus-impact-theory/id1191775648
-
Grok: https://x.ai/
-
Lovable: https://lovable.dev/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
-
Newsletter: https://news.lore.com/
-
Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
-
-
Aaron Levie on AI’s Enterprise Adoption
AI transcript
0:00:02 What is the journey over the next decade?
0:00:06 It’s about the speed at which humans can change their workflows.
0:00:08 Why doesn’t the breakthrough that we just saw get released?
0:00:11 Why doesn’t that permeate every corporation within six months?
0:00:16 It’s so strange to me how many disruptions are happening all at the same time.
0:00:17 Your R&D is changing.
0:00:18 Yeah.
0:00:20 Every part of the stack is changing.
0:00:20 Like everything.
0:00:23 We’re not in like a fear of AI world.
0:00:28 We’re in a, we know this is going to happen and it needs to happen to us faster than it
0:00:31 happens to our competitors, which is a totally different dynamic than we saw with cloud.
0:00:36 What do you think is the best metric for anybody interested in tracking this stuff as far as
0:00:37 like how fast it’s going?
0:00:37 Is it GDP?
0:00:39 Is it margin?
0:00:40 Is it top line?
0:00:42 Is it headcount growth?
0:00:43 Is it all the above?
0:00:47 It’s basically fully assumed that AI is going to take over the enterprise.
0:00:50 How does AI actually change the enterprise?
0:00:54 Not just in theory, but in how software is built, sold, and used?
0:01:00 In today’s episode, A16Z general partner Martin Casado sits down with Aaron Levie, co-founder
0:01:05 and CEO of Box, to explore what it means to be an AI-first company from product strategy
0:01:06 to internal workflows.
0:01:11 They talk about why incumbents may be better positioned than expected, how startups can still
0:01:15 break out, the rise of agents and vibe coding, and what happens when the bottleneck isn’t the
0:01:16 tech, but the org chart.
0:01:22 Aaron also shares how Box is using AI internally today and why he thinks the next generation of
0:01:25 employees may spend more time managing agents than writing code.
0:01:26 Let’s get into it.
0:01:33 As a reminder, the content here is for informational purposes only, should not be taken as legal
0:01:37 business, tax, or investment advice, or be used to evaluate any investment or security,
0:01:42 and is not directed at any investors or potential investors in any A16Z fund.
0:01:46 Please note that A16Z and its affiliates may also maintain investments in the companies
0:01:47 discussed in this podcast.
0:01:54 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
0:02:01 Aaron, thank you very much for joining us.
0:02:01 Thank you.
0:02:03 Everybody here already knows you.
0:02:05 However, I still think you should intro yourself, just for completeness.
0:02:12 Aaron Levie, CEO, co-founder of Box, and at Box, we help enterprises basically take all of their
0:02:18 unstructured data or enterprise content and turn it into valuable information, and AI is
0:02:20 absolutely this incredible accelerant for that problem.
0:02:22 I just learned that we’re investors, didn’t you?
0:02:24 Well, many years ago.
0:02:24 Many years ago.
0:02:26 So no claims post-IPO.
0:02:31 But actually, Ben Horowitz had this early kind of blog post on basically, I think it was
0:02:32 the title of The Fat Startup.
0:02:32 Yeah.
0:02:33 Yeah, yeah, yeah.
0:02:35 In response to enterprises, the lean startup.
0:02:36 Yeah, that’s right.
0:02:40 And let’s just say we very much took that to heart, and we basically like deployed every
0:02:45 single lesson, which was like the name of that game is you get big fast, you scale aggressively,
0:02:48 and that was a very important period in our company’s journey.
0:02:52 So the notional topic of this is AI in the enterprise.
0:02:56 But I think it’s good to be kind of nuanced about this, because it’s less obvious than
0:03:02 people think, and you’ve been talking a lot about AI on X, but also you’re thinking about
0:03:03 it in the terms of your business.
0:03:07 So let me just kind of set up the first question as follows, which is, AI has historically been
0:03:11 this very B2B enterprise thing, like chatbots or whatever, personalization systems.
0:03:16 But what’s unique about Gen AI is a lot of the use cases are actually like a consumer or
0:03:17 prosumer, right?
0:03:23 Think like creativity or developers, and it actually hasn’t made intros as much.
0:03:24 into the enterprise yet.
0:03:25 It’s just starting now.
0:03:27 So maybe just a couple of questions.
0:03:30 First off, A, does that match with your experience?
0:03:34 And then B, how are you thinking about this transition to the enterprise?
0:03:35 Yeah.
0:03:41 I think if you were to probably like do the idiosyncrasies of AI and then reverse engineer
0:03:45 why that was the journey, basically up until, let’s say, pre-chat to be team moment,
0:03:47 AI was extremely hard to use.
0:03:52 It required in many cases having custom models for basically every problem you tried to solve.
0:03:57 And so there was almost no way that a consumer ecosystem could flourish based on that.
0:03:59 It was just not generalizable enough.
0:04:03 There was really few products other than like maybe Siri, Alexa, et cetera, that you’d interact
0:04:05 with that would even have some sense of AI.
0:04:10 And so enterprises were the early adopters of AI systems to bring workflow automation to
0:04:11 their companies.
0:04:16 Then boom, ChatGPT happens and all of a sudden it’s the exact right form factor for mass adoption.
0:04:17 There’s no startup costs.
0:04:19 It costs two seconds to learn the product.
0:04:20 It’s a chat interface.
0:04:24 So it was like perfectly ripe for just taking off in the consumer space.
0:04:29 And then you have also these incredible conditions set up for mass adoption.
0:04:31 You have billions of people on the internet.
0:04:32 It was set up as a free product.
0:04:36 Again, it kind of solved this sort of latent kind of question mark that everybody had, which
0:04:40 is like, when are we going to see AI work and touch our lives?
0:04:45 And so everything was kind of like the perfect conditions to get mass consumer adoption.
0:04:49 On the enterprise side, you have unfortunately kind of the opposite, right?
0:04:53 You have lots of workflows that have been kind of ingrained for decades and decades.
0:04:59 You have lots of legacy IT systems that have data kind of not set up well to be accessed by AI.
0:05:05 You have a sort of shadow IT problem, which is most corporations don’t want, and users just
0:05:10 injecting text into prompts that might contain information that the AI models could learn off of.
0:05:14 So it’s sort of a difficult environment for that same level of virality.
0:05:20 With the exception of a few of these pro-sumer categories, I have talked to large corporation
0:05:24 CIOs that are seeing people just show up with Windsurf and Cursor and Replit.
0:05:28 And so you’re getting actually this sort of shadow IT version that we saw 15 years ago.
0:05:30 DevTools has always had that.
0:05:31 Yeah, 100% fair.
0:05:31 100%.
0:05:32 So DevTools have had that.
0:04:36 But I think that you’re still seeing that now in the ChatGPT kind of leakage into organizations.
0:05:37 Right.
0:05:42 I’m sure their pro-sumer inside of a corporation firewall usage is off the charts, even separate
0:05:43 from the people that pay for it.
0:05:43 Totally.
0:05:48 So now the question, though, is what is the journey over the next decade for the real
0:05:54 change management of deployment of AI systems that drive the more like GDP changing productivity
0:05:55 gains?
0:05:58 And that’s something where I do think we have to be prepared for.
0:05:59 This is many years.
0:06:03 It’s about the speed at which humans can change their workflows as opposed to how kind of quickly
0:06:06 the technology can just sort of evolve in advance.
0:06:11 And so we in Silicon Valley and certainly anybody tuning into this sort of imagines like, well,
0:06:13 why doesn’t the breakthrough that we just saw get released?
0:06:16 Why doesn’t that permeate every corporation within six months?
0:06:20 And it’s because like people just have meetings and they have budget processes and they have
0:06:24 to go through a governance council and they have to get compliance on board and they have
0:06:28 to figure out like who has the liability when the thing recommends this stock and then the
0:06:30 financial services provider shares that with a client.
0:06:33 Like that takes years and there’s going to be case law that needs to happen.
0:06:36 And we still have lawsuits that are going on about who owns the IP of this stuff.
0:06:38 So that part is going to take years.
0:06:42 What’s interesting, and I think you’ll especially appreciate this on the cloud side is I remember
0:06:47 when we first were scaling up in the enterprise, let’s say 2007, 2008, 2009, let’s say that three
0:06:53 to five year period, post AWS, post kind of cloud starting its journey, basically to a
0:06:58 T, every conversation you’d have with a CIO or a group of CIOs was basically like, yeah,
0:06:59 that’s nice.
0:07:01 Maybe some little corner of our organization could use this.
0:07:03 We are never going to go fully to the cloud.
0:07:05 They had their arms wrapped around their servers.
0:07:06 I remember.
0:07:06 Yeah.
0:07:09 And basically they did not want to give up the infrastructure.
0:07:11 There were too many questions, too many compliance issues.
0:07:15 There were just existential job questions of, well, what happens when this, you know, gets
0:07:16 delivered as a service?
0:07:17 Here’s a super interesting one.
0:07:21 Let’s say we’re now two and a half years into the ChatGPT moment.
0:07:24 That same group of CIO conversations, none of that.
0:07:29 It is basically assumed, it’s basically fully assumed that AI is going to take over the enterprise.
0:07:36 Like the CEO, the CIO, the CDO, every job, every org leader is basically like, we know this is going to happen.
0:07:39 This is not like a, oh, we’re trying to kind of push it off.
0:07:42 It is purely a sequence of events.
0:07:43 Who do I deploy?
0:07:44 How do I deploy it?
0:07:45 How do I drive the change management?
0:07:46 Is the model ready?
0:07:55 So what’s really interesting is I think the level of buy-in you have now in the enterprise is like five times greater than we had in the early days of cloud.
0:07:57 And you can even see it.
0:08:04 Like to me, the classic litmus test was, if you remember like 15 years ago, I think Jamie Dimon was probably most famous for saying like, we’re never going to go to the cloud.
0:08:04 Yes.
0:08:07 So like they basically said, J.P. Morgan will never go to the cloud.
0:08:07 Yeah.
0:08:19 Today, that equivalent commentary, whether I don’t have a perfect Jamie Dimon quote, but David Solomon at Goldman Sachs has given this anecdote of they can write now an SEC filing or an S1 for an IPO in like a few minutes.
0:08:21 That used to take a number of analysts a few days.
0:08:28 And so the fact that like those are the anecdotes already coming out of the biggest banks means that we’re not in like a fear of AI world.
0:08:36 We’re in a, we know this is going to happen and it needs to happen to us faster than it happens to our competitors, which is a totally different dynamic than we saw with cloud.
0:08:42 So do you think this has implications for companies today that are building products that are pre-AI products?
0:08:49 So for example, with the cloud wave, you basically had a bunch of cloud native companies that ended up taking over.
0:08:54 So for example, Snowflake is a great example of this, which is like the ones that decided not to go all in and were hybrid.
0:08:57 Like hybrid kind of became known as meaning it won’t work.
0:08:59 You know, anything called hybrid hasn’t worked.
0:09:09 So do you think because the buyer and the enterprise is more ready that like companies that are pre-AI have more of an opportunity?
0:09:13 Or do you think that you’re going to see the same thing with a lot of like AI native companies do well?
0:09:16 I’m going to basically give you the non-answer of I think both.
0:10:28 And one benefit that the cloud cohort has, the SaaS cohort, post all of us understanding and agreeing on what SaaS would look like, is that, whether we adhered to this perfectly or not is a question,
0:09:31 But we basically all tried to build API first platforms.
0:09:31 Yeah.
0:10:34 Or at least platforms where the API is kind of co-equal.
0:09:36 So we have the UI and we have the API.
0:09:42 And if you think about it, like AI and AI agents are like the perfect consumers of an API, right?
0:09:46 And so they basically become these super users within your system on your APIs.
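The point about agents being natural API consumers can be sketched with a toy example. The ticket "endpoints" and triage policy below are illustrative assumptions, not any vendor's actual API; the agent simply drives the same calls a human clicking through the UI would trigger.

```python
# Toy in-memory "SaaS API" standing in for real REST endpoints.
TICKETS = {}
NEXT_ID = 1

def create_ticket(title):
    """Stand-in for POST /tickets, the same call a human's 'New ticket' button makes."""
    global NEXT_ID
    ticket = {"id": NEXT_ID, "title": title, "status": "open"}
    TICKETS[NEXT_ID] = ticket
    NEXT_ID += 1
    return ticket

def close_ticket(ticket_id):
    """Stand-in for PATCH /tickets/{id}, marking a ticket resolved."""
    TICKETS[ticket_id]["status"] = "closed"
    return TICKETS[ticket_id]

def agent_triage(titles):
    """The agent is just another API consumer: it plans a sequence of calls
    instead of a human pressing buttons in the application."""
    created = [create_ticket(t) for t in titles]
    # Trivial illustrative "policy": auto-close anything flagged as a duplicate.
    for t in created:
        if "duplicate" in t["title"].lower():
            close_ticket(t["id"])
    return created

results = agent_triage(["VPN down", "Duplicate: VPN down"])
print([t["status"] for t in results])  # ['open', 'closed']
```

The design point is that nothing about the API changes; the agent becomes a high-volume user of endpoints that already exist.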
0:10:01 So if I had to just say, okay, I want to deploy agents to go and automate my ServiceNow workflows, I think I’m better off just deploying the ServiceNow agent to go do that than do an entire reinvention of my ITSM system to solve that use case.
0:10:02 And you can just go down the list.
0:10:11 Like Workday, if I want an AI agent to do some kind of HR-related task, I think I’m better off to just do that within Workday than I am building an entire new system.
0:10:14 So you have a bunch of different factors versus the pre-cloud days.
0:10:17 Like pre-cloud to post-cloud was an entire rewriting of your software.
0:10:19 You had to go from single-tenant to multi-tenant.
0:10:21 The scaling of the systems was totally different.
0:10:25 Even the functionality and application logic was different because like it should be real-time.
0:10:26 It should be collaborative.
0:10:30 It shouldn’t be as sort of async and batched as the on-prem systems were.
0:10:35 And so in a cloud world, it is a reinvention of the user experience and what you’re doing in the system.
0:10:36 And we should definitely get to that.
0:10:40 Well, I just want to make sure I tease this out because it’s actually a very interesting point.
0:10:47 So your claim is to go from pre-cloud to post-cloud, like that ripped through the entire stack all the way down to like the infrastructure, for example, like tenancy.
0:10:49 Like you have to rewrite everything.
0:10:55 And then what you’re saying about AI is more of a consumption layer thing, which is you just treat the existing systems as they are.
0:10:56 And then the AI becomes the consumption layer.
0:11:02 Do you think this is like a 1.5 step and like the 2.0 step kind of rips through the entire stack?
0:11:04 Okay, so let’s bookmark that one for one second.
0:11:09 But like if you do pure Clay Christensen sort of approach, you know, sustaining innovation, disruptive innovation.
0:11:14 Disruptive innovation is this thing that looks like so much harder, so different, so less profitable.
0:11:17 Sustaining is like, actually, no, I’d like to build that because it’s incremental.
0:11:19 It’s better for our business overall.
0:11:22 The on-prem guys had a disruptive innovation.
0:11:26 Everything about the business model of SaaS looked different, harder, stranger.
0:11:28 I don’t have the talent.
0:11:32 I’m running a service delivery operation as opposed to I ship you a CD-ROM with my code.
0:11:35 Everything about the finances and pricing model.
0:11:35 Yes, everything.
0:11:36 Everything, the business model, everything.
0:11:41 AI, again, with the bookmark being like the really big disruption that you could contemplate,
0:11:45 right now with AI, everything kind of looks like a sustaining innovation if you’re an incumbent,
0:11:49 which is like, instead of a user pressing the buttons in the application,
0:11:53 let’s have an agent run through the API and operate as if they were that user.
0:11:57 And so all of a sudden, for a lot of SaaS providers, this looks like a TAM expansion
0:12:02 because now, for the first time ever, I can actually deploy my software for use cases
0:12:06 where the customer didn’t have users on the other end before to do those things.
0:12:07 So I think you have a lot of TAM expansion.
0:12:09 Now, the good news for startups.
0:12:11 With one caveat, which maybe we’ve bookmarked and we’re going to get to,
0:12:12 but let me just say the one caveat.
0:12:17 The one caveat is you now have a component that has a very different COGS model
0:12:19 if you’re a software provider.
0:12:19 Yes.
0:12:22 And so like now, it’s almost like when we went from like on-prem to cloud,
0:12:24 we went from perpetual to recurring.
0:12:30 And it feels like with AI, you kind of have to go from recurring to usage-based just because.
0:12:30 Yeah.
0:12:33 So business model will shift for some of the use cases
0:12:37 because even if you look at the cursors, replets, windsrifts of the world,
0:12:40 there does seem to be this baseline seat price.
0:12:43 And then your consumption usage thing is sort of this add-on.
0:12:43 This overage sheet.
0:12:47 And so SaaS providers are kind of well-structured to be able to have that kind of dynamic.
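The baseline-seat-plus-overage shape described here can be written down as a small billing function. All prices and credit numbers below are made up for illustration:

```python
def monthly_invoice(seats, seat_price, included_credits_per_seat,
                    credits_used, overage_price_per_credit):
    """Hybrid pricing: recurring per-seat base, plus usage-based overage
    charged only on consumption beyond the included credit pool."""
    base = seats * seat_price
    included = seats * included_credits_per_seat
    overage = max(0, credits_used - included) * overage_price_per_credit
    return base + overage

# 10 seats at $20/seat with 500 included credits each;
# 6,000 credits consumed, so 1,000 overage credits at $0.01 apiece.
total = monthly_invoice(10, 20.0, 500, 6000, 0.01)
print(total)  # 210.0
```

If usage stays within the included pool, the invoice collapses to the familiar recurring seat price, which is why the seat component cushions the business model shift.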
0:12:48 Yeah.
0:12:51 If it was 100% usage and the user seat goes away,
0:12:53 I do agree that you have this, then you have a little bit of a business model crisis.
0:12:57 Oh, so you think, but right now, it’s not clear that that’s going to go all the way over.
0:13:00 Well, until the human literally is not a seat on the system,
0:13:04 I think you don’t remove the end user license as a component.
0:13:04 Okay.
0:13:06 But again, that could be like the much bigger disruption.
0:13:11 Now, just to fully lay out the market dynamics, I think SaaS incumbents,
0:13:14 especially you have a couple other idiosyncrasies right now versus the on-prem days.
0:13:17 Another idiosyncrasy is I would say like on the margin,
0:13:20 you tend to have founders still leading the SaaS companies.
0:13:20 100%.
0:13:22 And so we didn’t really have that in the on-prem world.
0:13:28 And so like Siebel already had three CEOs later and PeopleSoft already had multiple CEOs later.
0:13:30 So it was a different leadership structure in these organizations.
0:13:33 A lot of times you still have the founder around, they’re poking around,
0:13:33 they’re really into AI.
0:13:37 So there can be a more natural pivot of the company from the leadership standpoint.
0:13:39 So a bunch of different factors.
0:13:43 Now, to the benefit of startups, which is why I can hold both of these in my head,
0:13:48 which is I’m very bullish on the SaaS incumbent being the natural place for that AI agent
0:13:49 relative to that category.
0:13:53 I just think we have this incredible expansion of categories for the first time
0:13:55 that we haven’t seen in probably 15 years.
0:14:02 So the SaaS 1.0 wave actually expanded the software universe where we had these new categories of
0:14:04 software that we didn’t expect before.
0:14:08 Like nobody would have predicted the Confluents and the Snowflakes in the on-prem days.
0:14:11 We didn’t have all of these different cuts of how do you work with data?
0:14:12 How do you do this workflow?
0:14:16 Like lines of business didn’t have 15 different applications they got to use.
0:14:17 Post-SaaS, they did.
0:14:23 So for startups in the AI world, the equivalent of that is I think there’s a lot of categories now
0:14:29 where there is no actually software incumbent in that category where AI agents all of a sudden
0:14:31 let you go build software for that category.
0:14:33 Legal, healthcare, education, and so on.
0:14:36 So that’s definitely true on the consumer side, right?
0:14:39 If you look at the top use cases of open AI, it’s almost like the top of the pyramid of needs,
0:14:40 right?
0:14:42 It’s like creativity and fulfillment, et cetera.
0:14:46 And I think like number five is like professional coding, but everything above that is one of these.
0:14:48 So on the consumer, that’s very clear.
0:14:50 Is that clear on the enterprise side?
0:14:50 I absolutely think so.
0:14:57 I think if we did a snapshot 10 years ago of the size of the contract management market or
0:15:00 the legal document market, it’s like sub 2 billion.
0:15:01 I’m making up the numbers.
0:15:02 It could be plus or minus a billion.
0:15:08 Would you agree that in five years from now, the AI agent related spend on legal services
0:15:12 should be in the many, many billions to double digit billions?
0:15:13 Absolutely.
0:15:13 Okay.
0:15:13 No question.
0:15:18 So all of a sudden there’s like not these natural incumbents that were like, oh, we captured all
0:15:19 that market.
0:15:23 AI agents all of a sudden expands the size of the software related spend in that space.
0:15:28 So I can underwrite that for healthcare, legal, consulting services.
0:15:31 I think there’s entire areas of financial services.
0:15:33 Like we always think, oh, finance has been wired up for so many years.
0:15:36 No, banking, consumer banking has been wired up.
0:15:37 Trading has been wired up.
0:15:39 Investment banking never went digital.
0:15:41 Wealth management never went digital.
0:15:46 Like these were not categories where you ever had like major software platforms to help these
0:15:47 entire categories of the economy.
0:15:51 And the reason it was because the work was unstructured.
0:15:56 It’s very ad hoc, very dynamic, lots of unstructured data as opposed to stuff that goes into databases.
0:15:58 All of that is now ripe for AI.
0:16:02 And that will then largely be ripe for many startups because there won’t be a natural incumbent
0:16:03 in those spaces.
0:16:08 I mean, it’s so strange to me how many disruptions are happening all at the same time with AI,
0:16:08 right?
0:16:11 I mean, if you think about like everything you said, which is basically vertical SaaS or vertical
0:16:13 use cases, which a lot of that is actually human budget, right?
0:16:13 Yep.
0:16:14 That’s being disrupted.
0:16:17 There’s a bunch of new use cases that we never really thought about before, which is
0:16:21 like the creativity and I mean, who would have thought that 2D image would be some massive
0:16:21 market?
0:16:22 Yes.
0:16:23 But it’s a massive market, right?
0:16:27 It turns out, you know, I’ve been a programmer for 30 years, right?
0:16:30 And in that time, like software would disrupt other things.
0:16:30 Yeah.
0:16:33 Like we disrupt all of these things, but we never got disrupted.
0:16:34 We’re safe.
0:16:35 We’re screwing you guys.
0:16:38 But clearly now software is being disrupted, right?
0:16:40 For the first time like I’ve ever seen in 30 years.
0:16:46 And so do you think this level of disruption is something that existing companies will not?
0:16:48 Like maybe a more fine point.
0:16:49 You are a business leader right now.
0:16:50 You have to think about product.
0:16:52 You have to think about your organization.
0:16:53 Does it require you to have to think about too much?
0:16:55 Like how do you structure your company as well?
0:16:59 How do you structure your product or do you think this is actually all pretty manageable?
0:17:01 I think it’s your R&D product.
0:17:04 Like literally like I’m putting myself in your shoes, right?
0:17:04 Yeah, yeah, yeah.
0:17:05 Which is like your CEO.
0:17:06 Like your R&D is changing.
0:17:07 Yeah.
0:17:08 You’re like-
0:17:09 Every part of the stack is changing.
0:17:09 Like everything.
0:17:10 Yeah.
0:17:23 I think the reason is that I’m probably, frankly, more distracted by what we’re building, so I don’t have enough time to stress out about the actual organizational side, because I’m stressed out enough about just literally the actual pure delivery of the product.
0:17:27 I think if I had a little bit more time, I’d get more stressed out about all the other change.
0:17:30 We are very much leaning into the idea of being AI first.
0:17:31 We have a twofer on this.
0:17:36 Like one, by being as AI first as possible, we’ll see the use cases that our product should go solve for customers.
0:17:37 So like check that box.
0:17:41 And then second is I’m just a believer of the efficiency productivity gains.
0:17:41 Yeah, yeah, sure.
0:17:44 And I do think it does change basically everything about work.
0:17:46 And there’s lots of these interesting examples of what it means.
0:17:51 So in the future, does the individual contributor basically become a manager of agents?
0:17:52 Yeah, yeah, yeah.
0:17:53 So that’s a totally different job.
0:17:54 Right.
0:17:54 Right?
0:18:03 Like my recent kind of go-to is just thinking about it as a lot of the productivity of your organization was rate limited by literally like how fast can somebody use a computer to do something?
0:18:03 Yeah.
0:18:06 To type an email, to write code, to generate a marketing asset.
0:18:06 Yeah.
0:18:10 When that’s no longer a limiter, how do these jobs begin to change?
0:18:18 And it’s like, okay, your job is now orchestration, integration of work, planning, task management, reviewing, auditing, and that will radically change work.
0:18:33 Interestingly, this probably behooves us to not over-rotate on transforming yet internally for any given company simply because the technology is changing so fast that like you probably wouldn’t want to snap the line right now, run your whole business on this technology.
0:18:35 Because in two years from now, it’s going to happen.
0:18:37 Because in two years from now, it’s going to be so much better.
0:18:45 And so I think progressively figuring out which workflows have high impact upside, getting it rolled out in a decentralized way so people can experiment.
0:18:47 Like I think you want to do a few of those kind of things first.
0:18:50 I mean, I can’t imagine a listener not knowing what Box does.
0:18:57 But just for completing this, maybe can you just talk to us very quickly about what Box does and how you’re thinking about how that dovetails with AI?
0:19:02 Yeah, so we started the company with a really simple premise, make it easy to access and share your files from anywhere.
0:19:06 And we pivoted about two years into the journey to focus on the enterprise market.
0:19:10 And the whole idea was enterprises are awash with all this unstructured data.
0:19:17 So corporate documents, research files, marketing assets, M&A documents, contracts, invoices, all of this.
0:19:22 And as companies move to the cloud and as they move to mobile, they need a way to access that information.
0:19:25 They need a way to collaborate securely on it.
0:19:28 They want to be able to integrate that data across different systems.
0:19:30 So we built a platform to help companies do that.
0:19:34 We have about 120,000 customers, about 65 or so percent of the Fortune 500.
0:19:40 And so what’s incredible right now is we’ve had this ongoing problem since the creation of the company,
0:19:46 which is with structured data, the stuff that goes into your database, you can query it, you can synthesize it,
0:19:49 you can calculate it, you can analyze it, your unstructured data, the stuff that we manage,
0:19:54 you create it, you share it, you look at it, and then it basically kind of gets forgotten about.
0:19:57 Like it goes into some folder and you almost never see it again.
0:20:01 And maybe you kind of find it once every five years for some task you’re doing, but that’s about it.
0:20:06 And so most companies are sitting on most of their data being unstructured
0:20:11 and getting the least amount of value from it relative to their other structured data.
0:20:13 AI is basically the unlock.
0:20:17 So AI lets you finally say, okay, we can ask this data questions.
0:20:22 We can structure it so we can look at a contract, pull out the 10 most important fields.
0:20:25 Once we have all that data, we can analyze that information.
0:20:26 We can get insights from it.
0:20:31 And then you can start to do things like workflow automation that was never possible with your unstructured data.
0:20:36 So if I want to move a contract through an automatic process, I can’t do it if I don’t know what’s in the contract.
0:20:40 And the computer previously was not able to know what’s in the contract.
0:20:46 So for us, there’s just a huge unlock of now what you can finally do with your information and your content.
0:20:50 So we’re building an AI platform to handle all of the kind of plumbing user experience
0:20:53 to make then your content AI ready effectively.
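The extract-then-automate pattern described above can be sketched as follows. In a real system the extraction step would be an LLM call over the document; here it is stubbed with regexes so the example is self-contained, and the contract wording and field names are illustrative assumptions:

```python
import re

def extract_contract_fields(text):
    """Turn free-form contract text into a structured record
    (the step an LLM would perform in practice; regex stub here)."""
    fields = {}
    m = re.search(r"between (.+?) and (.+?)[,.]", text)
    if m:
        fields["party_a"], fields["party_b"] = m.group(1), m.group(2)
    m = re.search(r"total value of \$([\d,]+)", text)
    if m:
        fields["value_usd"] = int(m.group(1).replace(",", ""))
    m = re.search(r"expires on (\d{4}-\d{2}-\d{2})", text)
    if m:
        fields["expiry"] = m.group(1)
    return fields

contract = ("This agreement between Acme Corp and Globex Inc, "
            "with a total value of $250,000, expires on 2026-06-30.")
record = extract_contract_fields(contract)
print(record)

# Once the record is structured, routine workflow automation becomes
# possible, e.g. routing high-value contracts for extra review:
needs_review = record.get("value_usd", 0) > 100_000
```

The unlock is exactly this conversion: a contract in a folder cannot drive a workflow, but a record with known fields can.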
0:20:57 I don’t want to be like too bullshitty and provocative, but I have to ask this.
0:20:58 Please.
0:21:00 I’ve been in enterprise software for a very long time.
0:21:05 A lot of the business model is predicated on the fact that building software is hard and takes a long time.
0:21:05 Yeah.
0:21:08 To what extent do you worry about that not being the truth going forward?
0:21:13 Do you think we enter like this time of bespoke software being upon us?
0:21:17 I’m bearish on the extreme version of the essence of that.
0:21:22 So the extreme version of that, if you imagine the poles of this, like the extreme on one pole,
0:21:24 basically all software is prepackaged.
0:21:26 It’s the Ford Model T.
0:21:28 It’s going to work only in one way.
0:21:29 Everybody uses the same thing.
0:21:30 Okay.
0:21:30 That’s not going to happen.
0:21:31 We get that.
0:21:34 The other extreme is like everything is just like homebrew.
0:21:37 You wake up in the morning, you utter something, you get your software for the day.
0:21:38 You get your software for that thing.
0:21:40 And then the next day you do it again and you change it.
0:21:40 Okay.
0:21:45 The downsides of that model of why basically I think it doesn’t work is I think if you
0:21:49 ask like the world population, you probably find that 90 plus percent just don’t care enough.
0:21:54 They just don’t care about the tabs on their software and the modules on their dashboard.
0:21:57 Like it’s like they want someone else to just be like, this is what you should look at in
0:21:58 the morning.
0:21:58 Yeah.
0:22:01 They don’t want to have to even prompt the AI to tell them what to look at.
0:22:01 Yeah.
0:22:05 So given that that’s basically guaranteed to be where 90% of the world, no matter how you
0:22:06 cut anything.
0:22:06 Yeah, that’s a great point.
0:22:11 That means that basically 90% of our software should largely be like, okay, you log into the
0:22:13 HR system and it just looks like an HR system.
0:22:19 And in fact, there’s another interesting dynamic, which is like over many years, our software
0:22:24 and our actual way that we operate companies, there’s this flywheel relationship between them.
0:22:29 And so the way we run our HR department is like not so different than the way Workday wants
0:22:31 us to run our HR department.
0:22:31 Yeah.
0:22:35 And it’s fine because that’s not the area that we’re going to have a lot of upside innovating
0:22:35 on.
0:22:39 And like the way that we do our ticket management from customer tickets is like the way that
0:22:41 Zendesk decided to do ticket management.
0:22:44 And that’s fine because that’s not the core IP of the company.
0:22:47 In a way, it solves an operational problem for you.
0:22:47 Yes.
0:22:48 You don’t have to figure it out.
0:22:48 Right.
0:22:50 And people miss that about software.
0:22:54 I don’t want to have to think about the workflow of an HR payroll process.
0:22:56 I just want the software to do that.
0:22:59 And so that’s what people are buying.
0:23:01 And so nobody wants to customize those things.
0:23:05 Now, again, given that we’re going to be in this world of many different outcomes playing
0:23:11 out, the reason I’m still bullish on Replit and Vibe coding is for a different category,
0:23:15 which is like I’m the IT person and I just have this crazy queue of tasks.
0:23:17 And then someone’s like, can you build a website for this thing?
0:23:21 Can you like code up some inventory random plugin for this product?
0:23:24 It’s like that now becomes 10 times easier.
0:23:27 So the new prototyping, scripting, the long tail of stuff that we want to do.
0:23:31 And that long tail is so long and people never get to any of those things in that long tail.
0:23:36 And so I could underwrite a 10x growth of the amount of custom software that gets written
0:23:41 and the fact that these core systems don’t go away because there’s just actually going
0:23:43 to be way more software in the world that gets created.
0:23:44 Let me pressure test this.
0:23:47 So like, okay, so I can imagine why it would be hard to rebuild Box because what you do
0:23:48 is actually hard.
0:23:49 This is core infrastructure.
0:23:50 You store data like that’s really important.
0:23:52 And so I don’t think you just Vibe code that away.
0:23:56 But from my perspective, a lot of SaaS apps just look like CRUD.
0:24:00 To me, CRUD is, I don’t know what the acronym stands for, but it’s basically you’re reading
0:24:02 and writing data from like a backend.
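For the record, CRUD stands for Create, Read, Update, Delete. The "reading and writing data from a backend" shape the speaker describes reduces to roughly this minimal sketch, with an in-memory dict standing in for the database:

```python
class CrudStore:
    """Minimal Create/Read/Update/Delete layer over an in-memory table,
    the core pattern behind many simple SaaS backends."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, data):
        row_id = self._next_id
        self._rows[row_id] = dict(data)
        self._next_id += 1
        return row_id

    def read(self, row_id):
        return self._rows.get(row_id)

    def update(self, row_id, data):
        self._rows[row_id].update(data)
        return self._rows[row_id]

    def delete(self, row_id):
        return self._rows.pop(row_id, None)

store = CrudStore()
cid = store.create({"name": "Acme", "plan": "pro"})
store.update(cid, {"plan": "enterprise"})
print(store.read(cid))  # {'name': 'Acme', 'plan': 'enterprise'}
store.delete(cid)
print(store.read(cid))  # None
```

As the conversation goes on to argue, the technology here is trivial; the defensible part of such products is the encoded domain workflow, not the storage layer.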
0:24:08 And so do you think that there is a world where the consumption layer evolves to just using AI
0:24:09 and this class of companies go away?
0:24:13 Or do you actually think, if I heard what you just said, that the durability of these companies
0:24:16 is that it basically teaches you what the workflow is?
0:24:17 Well, I’m still going to say the latter.
0:24:20 Now, I don’t know if you need to bleep it out, but if you want to share a couple examples
0:24:25 of who you put in the not hard CRUD layer, then we can parse that.
0:24:26 But up to you.
0:24:27 The not hard CRUD layer?
0:24:28 Yeah.
0:24:34 I mean, I would say most vertical SaaS companies I see, the technology is trivial.
0:24:35 Yeah.
0:24:36 But the understanding of the domain is not.
0:24:36 No, no, no.
0:24:37 This is what you said before.
0:24:38 This is what I want to present.
0:24:38 That’s the thing.
0:24:40 That’s actually a great insight.
0:24:44 I’ve always underestimated vertical SaaS relative to the outcome.
0:24:44 Yeah.
0:24:47 And 20 years into doing enterprise software, I’m just like, no longer going to underestimate
0:24:48 vertical SaaS.
0:24:49 It’s not about the technology.
0:24:52 It’s the fact that somebody else has figured out the business model that works.
0:24:57 Like they have 10 people from the pharma industry that is like sitting next to the engineer
0:25:00 being like, this is how you should do the clinical trial workflow.
0:25:02 And that becomes so much of the IP.
0:25:07 Now, that translates fine to agents, but I still would then bet on that vertical player
0:25:13 doing that as opposed to somebody prompts their way into ChatGPT to build an FDA compliance
0:25:13 agent.
0:25:19 I would still largely bet on complianceagent.ai to do that over the pure horizontal system
0:25:21 that has no particular domain kind of expertise for that.
0:25:26 And then I think the other thing, I still think that there’s a relationship between some
0:25:31 amount of GUI and the agent and the APIs, because again, like you don’t want to every day
0:25:34 of your life, go to a blank empty screen and say, what’s our revenue today?
0:25:37 You just want a dashboard at some point and it just shows you the revenue.
0:25:37 That’s right, of course.
0:26:39 It’s almost like canned queries in a way.
0:25:40 Like somebody has made the decision.
0:25:43 Yes, this is like a known way to solve this problem in the enterprise.
0:25:48 And so I think that’s why the theory of the full abstraction away from the interface and
0:25:49 it’s all an API call.
0:25:50 I don’t think that happens.
0:25:55 And so ironically, probably what will happen is in a couple of years from now, we will see
0:25:59 agents like rebuild entire webpages and dashboards.
0:26:01 And then we’re going to find ourselves like, wait, why are we having an agent do this?
0:26:06 Why do I have to spend tokens to create a thing that is a config on a dashboard?
0:26:10 And we’ll just be back to where we started for some amount of software, which will mean
0:26:13 that basically like these things are going to live together.
0:26:14 Cool.
0:26:16 Let’s move from software to decision process.
0:26:21 So I won’t say the name of the company, but I just spoke with a very, very legit company,
0:26:22 household name.
0:26:23 It’s a private company, though.
0:26:24 It’s not a public company.
0:26:30 We’re at the board level for every decision they ask the AI for like basically more information
0:26:31 for the decision.
0:26:32 Okay.
0:26:36 And this has actually been great from like discussion fodder to be provocative.
0:26:42 And it also shows how like fundamentally unoriginal the board members are.
0:26:45 Like this founder was telling me, it’s literally better than half of my board members.
0:26:46 Right.
0:26:51 And so like, how much have you thought about bringing AIs in to like help with decision
0:26:52 process?
0:26:52 Yeah.
0:26:57 And by the way, I think the board is like low hanging fruit because boards tend to not have
0:26:58 a lot of context to the business.
0:27:00 And so the incidents are probably less anyways.
0:27:02 But is this something that you’ve thought about?
0:27:04 The board one is an interesting one.
0:27:05 So maybe we can unravel that one.
0:27:10 But like I already use it for, let’s say, our earnings calls where we’ll do a draft of the
0:27:11 initial earnings script.
0:27:16 And then, I mean, again, because BoxAI deals with unstructured data, I just load up the
0:27:21 earnings script and I’ll use a better model and say, give me 10 points that analysts are
0:27:22 going to ask about this.
0:27:23 And like, how would I improve the script?
0:27:25 And it just spits out a bunch of things.
0:27:25 And it’s…
0:27:26 And how good is it at predicting?
0:27:27 Oh, extremely good.
0:27:28 Oh, 100%.
0:27:29 But the thing is, that’s not surprising.
0:27:33 Like it has access to every public earnings call in history.
0:27:34 Yeah, yeah.
0:27:38 And like at the end of the day, analysts can only ask you like, tailwinds, headwinds, who’s
0:27:38 buying what?
0:27:41 It’s not because the analysts are smart or not smart.
0:27:44 It’s just like, those are the things that like you would try and deduce from an earnings
0:27:45 call, buying a stock.
0:27:47 And you wouldn’t have thought of these questions beforehand?
0:27:48 Or is it just like…
0:27:49 I think you’re doing…
0:27:50 On the margin, on the margin.
0:27:51 No, no, sorry.
0:27:56 So what I’m using is then the specific parts of the document that is missing the answers
0:27:57 to those questions.
0:27:59 So I can actually inject the answers into that.
0:28:03 Because like you’re typing out a thing and like, I forgot to give two case studies in
0:28:04 this section or whatever.
0:28:07 It’s a quick way to just do some analysis on something.
0:28:12 But yeah, I mean, so Bezos famously had this memo-oriented, essay-oriented kind of meeting
0:28:12 structure.
0:28:13 And we never did that.
0:28:16 But I was always fascinated by the companies that could do it.
0:28:19 And actually, we’re entering a world where probably you could just pull that off, right?
0:28:23 So imagine if, whether it’s a board meeting or product meeting, you just do a quick, deep
0:28:24 research essay on the topic.
0:28:29 Like, obviously, every meeting, every strategy meeting in history would be better off if
0:28:32 you probably had that as a starting asset to get everybody informed.
0:28:36 I think the argument against that would be, the reason Bezos said it is because it forced
0:28:39 people to think clearly about what they’re doing and writing it down.
0:28:42 So the exercise meant that people walking in the meeting had more context.
0:28:46 This would almost argue that they would have less context because something else did the
0:28:46 thinking.
0:28:47 Well, two things.
0:28:51 It was to make sure that the person doing the thing had the clarity to write it, for sure.
0:28:54 But it was also still to inform everybody else that didn’t do that work.
0:28:57 And so it certainly would have helped everybody else in the room.
0:28:59 And I’m not 100%.
0:29:03 I mean, we should do a full longitudinal analysis of like the people that wrote the essay.
0:29:04 Did they actually have the better products?
0:29:06 Or like, I mean, there’s some Amazon products I don’t like.
0:29:09 And so they obviously wrote an essay also for those.
0:29:12 So I don’t know the hit rate ultimately on the essay specifically as much as the idea of
0:29:14 like write down a strategy, think it through.
0:29:18 And so why not have an agent do 90% of the heavy lifting?
0:29:24 So a lot of my workflows are like, if I have a topic, it's the kind of thing that three years ago, I might sort
0:29:32 of lob over to the chief of staff and say, hey, can you like go research like the pricing
0:29:35 strategy of this ecosystem or something?
0:29:37 That’s just a deep research query now.
0:29:39 And then I'll wake up and look at this thing.
0:29:44 But what that does is because now I’m not having to calculate that person’s time, their
0:29:45 tasks, their trade-offs.
0:29:51 I just do it for the most random things, which means like I’m expanding and exploring
0:29:54 way more spaces mentally than I would have before.
0:29:55 And these are the kind of parts.
0:29:59 And this is equally why I’m like actually more optimistic on the jobs front, because what we
0:30:04 do too many times with an AI is we like look at today’s way of working and we’re just like
0:30:06 AI will come in and take 30% of that.
0:30:06 And it’s like, no, no, no.
0:30:08 We’ll just do totally different things with AI.
0:30:12 I wouldn't have researched that thing before when it required people to research it
0:30:15 because that would have been an inane task to send to somebody.
0:30:15 Yeah, yeah, yeah.
0:30:17 One thing.
0:30:22 So when we run the numbers and by run the numbers, I mean look through how AI companies are doing,
0:30:23 where does the value accrue?
0:30:26 There’s basically one takeaway.
0:30:30 And that is like these markets are very large and growing very fast.
0:30:32 And value is kind of accruing at every layer.
0:30:35 Everything from like literally chips up to apps.
0:30:40 And so like the only real sin is zero-sum thinking to be like, oh, like the models are
0:30:43 not going to be defensible or whatever your zero-sum thinking is that just hasn’t proven
0:30:44 out.
0:30:47 Now, this has still largely been a consumer phenomenon.
0:30:48 So what I've been thinking about, and I don't have an answer,
0:30:53 I'd love to hear your thoughts on, is when it comes to enterprise budgets, like you can't
0:30:55 just create budget out of thin air.
0:30:57 So like you actually do have a limited resource.
0:31:02 And so as budgets get reallocated, to what extent do you think this is like zero-sum,
0:31:06 like the old budget gets robbed versus like budget accretive?
0:31:07 Or like, how do you think about that?
0:31:10 Because again, like where we’ve come from, that has not been an issue.
0:31:12 I think in the enterprise, it probably will be.
0:31:14 So it does have to come from somewhere.
0:31:14 It’s fully logical.
0:31:15 A couple of things.
0:31:16 Yeah.
0:31:20 A large number for startups can also be a very small number for a large corporation.
0:31:21 Yeah.
0:31:23 So you have that dynamic playing out.
0:31:28 I’ll make up random stats, but you could probably take a meaningful engineering team and
0:31:32 probably for the price of five of those engineers or 10 of those engineers, you could
0:31:34 probably pay for cursor licenses for the entire engineering team.
0:31:38 But this would argue that it’s actually coming out of headcount.
0:31:41 So here’s where the asterisk is.
0:31:43 There’s an infinite set of ways.
0:31:46 This is why like you can never take a point in time snapshot on these kinds of things.
0:31:46 Yeah, totally.
0:31:48 There’s an infinite set of ways that this actually plays out.
0:31:49 Yeah.
0:31:56 Next year’s planning process, maybe in a perfectly like parallel universe, the salary increase
0:31:59 that year would have been 3.5% for employees.
0:32:02 And this coming year, it’s 3% because we’re going to take 0.5% and we’re going to deploy
0:32:04 AI for the company.
0:32:09 Or maybe next year, we would have added 50 engineers, but we’re going to add 25 and then pay for AI.
0:32:10 But guess what?
0:32:14 The year after, we’re going to have engineering productivity gains.
0:32:16 So it increases because it’s still a competitive environment.
0:32:21 We then now add engineers the year later because we’re getting higher productivity gains.
0:32:21 Yeah.
0:32:25 I think that most companies of any reasonable scale post 100 employees, let’s say, have
0:32:31 enough sort of dynamism in the financial model within a one to two year period where it doesn’t
0:32:33 look like what the economists would think it looks like.
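The reallocation scenarios above boil down to simple arithmetic. A minimal sketch, using the speaker's hypothetical percentages and headcounts (the payroll and per-engineer figures are placeholders I've assumed, not numbers from the episode):

```python
# Budget-reallocation scenarios from the conversation, with made-up inputs.
payroll = 100_000_000  # illustrative annual payroll (assumed)

# Scenario 1: trim the raise pool from 3.5% to 3.0%, spend the 0.5% on AI.
ai_budget_from_raises = payroll * (0.035 - 0.030)
print(f"AI budget freed by a smaller raise pool: ${ai_budget_from_raises:,.0f}")

# Scenario 2: hire 25 engineers instead of 50, spend the difference on AI.
fully_loaded_cost = 200_000  # assumed fully loaded cost per engineer
ai_budget_from_hiring = (50 - 25) * fully_loaded_cost
print(f"AI budget freed by slower hiring: ${ai_budget_from_hiring:,.0f}")
```

On these inputs the first scenario frees roughly half a million dollars and the second several million, which is the point being made: either lever funds substantial AI spend without any visible disruption to the financial model.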
0:32:34 Can I just spit this back?
0:32:37 Because I think this is actually a very good point that’s buried in there.
0:32:42 I just want to make sure I’m following along, which is the software license cost to a startup
0:32:46 relative to like a large people organization is relatively small.
0:32:50 It’s just a couple of headcount, which if you just look like normal performance management,
0:32:55 normal attrition, normal like variability, and even like hiring timelines is kind of in
0:32:56 the noise.
0:32:59 And so you already have an annual budgeting cycle to fix that up.
0:33:03 And so like basically within the noise, even of just like headcount planning, all of this
0:33:05 could work out without some massive disruption.
0:33:06 And I think that’s such a cool point.
0:33:10 And there could be an upper limit of this point, but let’s say the going rate in Silicon Valley
0:33:15 of a new engineer coming out of college, let’s just say it’s somewhere between 125 and 200.
0:33:16 Okay.
0:33:17 I’m just making up.
0:33:17 Okay.
0:33:17 Yeah.
0:33:22 Let’s say your most aggressive cursor usage or something is like a thousand bucks a year,
0:33:22 2000 bucks a year.
0:33:25 So you’re at like 1% salary maybe.
0:33:26 Here’s the question.
0:33:28 Again, do this like crazy apples to apples thing.
0:33:33 If you went and recruited from Stanford right now and you said, okay, you Stanford grad have a
0:33:33 choice.
0:33:41 You can work at this company and get paid 125K with no AI, or you can get paid 123K with
0:33:42 full access to AI.
0:33:43 Which one are you going to do?
0:33:45 They would do the 123 all day long.
0:33:48 But even that, yeah, I mean like your argument is, which makes a lot of sense to me.
0:33:50 It’s kind of on the margin when it comes to like 20.
0:33:55 But just as a way of exploring like why these things are not the high order bit of the cost
0:33:55 increase on budgets.
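That apples-to-apples comparison can be sketched directly from the made-up figures in the exchange (neither the salary range nor the tooling cost is real market data):

```python
# AI tooling spend as a share of a new-grad salary, using the episode's
# illustrative numbers only.
salary_low, salary_high = 125_000, 200_000   # hypothetical new-grad salary range
tooling_low, tooling_high = 1_000, 2_000     # "most aggressive" annual AI tooling spend

best_case = tooling_low / salary_high    # cheapest tooling vs. highest salary
worst_case = tooling_high / salary_low   # priciest tooling vs. lowest salary
print(f"AI tooling is {best_case:.1%} to {worst_case:.1%} of salary")
```

That range lands around the "like 1% of salary maybe" mentioned above, which is why the tooling cost sits inside the noise of normal headcount planning.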
0:33:56 Oh, I love that.
0:33:57 That’s great.
0:34:02 And I did one kind of late night sort of like modeling once, but I’m afraid to say all the
0:34:03 numbers here because I think they’re just going to be so wrong.
0:34:07 But I think it’s something on the order like five or six trillion in knowledge worker headcount
0:34:08 spend in the U.S.
0:34:08 Yeah.
0:34:11 Everybody says for developers, let’s say 40 million.
0:34:12 Let’s just say it’s 30 million.
0:34:12 Yeah.
0:34:14 Let's say that the average is 100K, and you're at three trillion.
0:34:16 Man, these are just massive, massive numbers.
0:34:17 So it’s many trillion.
0:34:17 Yeah.
0:34:19 So you have many trillions of dollars.
0:34:23 If you take a couple percent of that or five percent of that, you’re already doubling the
0:34:24 entire sort of U.S.
0:34:25 enterprise software spend.
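The late-night modeling being described goes roughly like this; every input is the speakers' own rough, self-admittedly shaky estimate:

```python
# Rough market sizing from the conversation; all inputs are rounded guesses.
knowledge_worker_spend = 5e12   # ~$5-6T US knowledge-worker headcount spend
developers = 30e6               # ~30M developers (rounded down from ~40M)
avg_dev_comp = 100_000
developer_spend = developers * avg_dev_comp
print(f"developer spend alone: ${developer_spend / 1e12:.0f}T")

# Even a small slice of headcount spend is enormous next to software budgets.
for share in (0.02, 0.05):
    print(f"{share:.0%} of knowledge-worker spend = "
          f"${knowledge_worker_spend * share / 1e9:.0f}B")
```

A few percent of a multi-trillion-dollar headcount base comes out in the low hundreds of billions per year, which is the comparison being drawn against total US enterprise software spend.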
0:34:26 Yeah.
0:34:28 So you can just make it work within.
0:34:31 And this is why I don’t think that people will not make cuts because they have to pay for
0:34:31 AI.
0:34:33 They might make cuts for other reasons.
0:34:33 Sure.
0:34:37 But even in those cases, I think you’ll often have it be for myopic reasons temporarily.
0:34:38 Yeah.
0:34:42 And there's enough flexibility to basically consume this and then actually, like, recoup
0:34:43 the productivity gains.
0:34:43 Yeah.
0:34:43 I think that’s great.
0:34:49 I try and parse everything you say through the lens of like, where are you landing on AI
0:34:50 coding?
0:34:53 And you seem to have a very pragmatic view of where things actually are at.
0:34:54 Yeah.
0:34:55 Where are you landing right now?
0:34:57 Well, it’s been an evolving.
0:35:02 So I would say in the entire AI thing, the biggest surprise to me is how effective it is
0:35:02 at code.
0:35:08 And so my sense is, so I’m just going to say a couple of, I think, facts, and then we can
0:35:10 kind of back out what this means in aggregate.
0:35:16 Because I think one fact is, the reality is, I do think that AI helps better developers more
0:35:17 than weaker developers.
0:35:22 And the reason is you have to know what to ask for and
0:35:22 know how to deal with the output.
0:35:23 So I think that’s one.
0:35:27 Someone said it, I thought, beautifully, which was, this is a very good developer.
0:35:28 And this was on X.
0:35:29 I forgot who it was, but I thought it encapsulated it.
0:35:35 He's like, you know, the value of 90% of what I know has gone to zero, but the other 10% has
0:35:37 gone up more than 10X or whatever it is.
0:35:38 Maybe 100X.
0:35:40 And I think that’s exactly right.
0:35:46 I do think that for a lot of rote use cases, the AI can do it and it doesn’t need to be
0:35:47 double checked, right?
0:35:50 So there’s a lot, to your point, like things like prototyping, things like scripting.
0:35:55 And so I do think if you look at usage of like open AI, if you actually look at code
0:35:59 usage, it’s like the primary use is actually professional developers, which means it’s part
0:36:00 of a developer workflow.
0:36:07 And then probably the most controversial stance I have is, and this is probably like sunk cost
0:36:10 fallacy because I’ve been a programmer for, I mean, like my PhD is in computer science,
0:36:12 like, you know, so maybe this is sunk cost fallacy.
0:36:17 But I just don’t see a world where you get rid of formal programming languages just because
0:36:20 they arose out of natural languages for a reason.
0:36:24 Like we started with English and then we made programming languages so that we could formally
0:36:25 describe stuff.
0:36:27 And so it’d be kind of a regression to go back.
0:36:30 So I still think we’ll use languages and maybe they’ll change, maybe more like a scripting
0:36:35 language, but I think like the existing tool set will evolve, but it’ll still be a professional
0:36:35 developer.
0:36:38 Like I think we’ll still have developers, still have developer tools.
0:36:38 So that’s kind of where I am.
0:36:39 I would love to hear where you’re at.
0:36:42 I'm fully on the exact same page.
0:36:46 The fun thing to me is how coding is just at the tip of the kind of iceberg.
0:36:51 It’s the best thing to first sort of experience agentic automation, but I think you’ll see
0:36:52 this in basically every other space.
0:36:59 But what’s so fun is just in a one-year shift, let’s say, of like the nature of the relationship
0:37:00 with the AI.
0:37:04 So if you think about the GitHub co-pilot moment was like, oh, this thing is incredible.
0:37:07 It’s going to type ahead and predict what I’m typing.
0:37:12 And then you’re basically using it to work 20 or 30% faster and which parts of it do you
0:37:13 take on or not.
0:37:19 And now the relationship is like totally different within, again, a year or two period where you’re
0:37:23 using cursor, Windsor for whatever, and the agent is generating this chunk of an output
0:37:25 and then you’re just reviewing it.
0:37:30 But what’s incredible is like none of your expertise is any less valuable in that review.
0:37:34 In fact, it’s probably even more important than ever before because in some cases, like
0:37:39 it’s just going to be like wrong 3% of the time and then you review it, but then you’re
0:37:41 literally doing 3x the amount of output.
0:37:47 And the nature of how that changes both programming, but just like, why not have that for basically
0:37:51 everything is sort of this new way that both software should work and then actually we will
0:37:56 work is like, you know, the big joke a year after ChatGPT is like, okay, this thing generates
0:37:59 a legal case and it’s like wrong 10% of the time.
0:38:00 And it’s like, well, actually, hold on.
0:38:04 If you think about what the new paradigm of work looks like, and it’s like such a weird inversion
0:38:07 of it used to be the AI was fixing your errors.
0:38:09 That’s what we thought the AI was going to be.
0:38:10 And it’s just like a total flip.
0:38:12 It’s like the human’s job is to fix the AI errors.
0:38:14 And that’s the new way that we are going to work.
0:38:15 Right.
0:38:16 So this begs a very obvious question.
0:38:17 I’m going to work up to the question.
0:38:22 So there is a great paper in NSDI from an MIT team, which basically says you can optimize
0:38:24 a running system with agents.
0:38:29 And the way they did it is they basically have a teacher agent and then like more junior
0:38:32 agents and then the more junior agents would go try a bunch of stuff.
0:38:37 And of course, they had much more knowledge of the literature than any single human being.
0:38:39 So they try all of different things to try it.
0:38:41 And then the senior agent would say, oh, this is good.
0:38:42 This isn’t good.
0:38:44 And then once it optimized the system, they would deploy it.
0:38:48 And then, you know, like the human being is then kind of helping the teacher agent decide
0:38:52 what are the parameters, what is good, what is not good and provide high level direction.
0:38:52 Right.
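The teacher/junior pattern described here can be sketched in a few lines. Everything below is invented for illustration — the function names, the scoring, and the toy tuning knob — and the actual system in the paper is far more sophisticated:

```python
import random

def junior_agent(knob_values):
    """A junior agent proposes a candidate configuration to try."""
    return {"batch_size": random.choice(knob_values)}

def evaluate(config):
    """Stand-in for running the real system and measuring performance."""
    return -abs(config["batch_size"] - 64)  # pretend 64 is the optimum

def teacher_agent(candidates):
    """The teacher agent judges the juniors' proposals and keeps the best."""
    return max(candidates, key=evaluate)

# Several juniors explore the space; the teacher selects; a human would then
# review the teacher's choice and set the high-level direction.
proposals = [junior_agent(range(8, 257, 8)) for _ in range(10)]
best = teacher_agent(proposals)
print("best config:", best)
```

The human's role sits one level above `teacher_agent` in this sketch: deciding what "evaluate" should even measure and which trade-offs count as good.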
0:38:58 And so you’re already starting to see cases where human beings are running multiple agents
0:39:02 and even that already is starting to have some kind of bifurcation, which one way to think
0:39:08 about it is in any R&D organization, of course, people start as like ICs, but then they very
0:39:11 quickly get interns and go into management.
0:39:12 And so maybe we’re just skipping that step.
0:39:17 So the obvious question is what happens to entry level engineers?
0:39:22 Like does this change how people get introduced to computer science, for example?
0:39:26 The cool thing is probably more people will even now get introduced to computer science
0:39:27 because you’ll be able to…
0:39:28 Anybody can learn.
0:39:29 Anybody can learn it.
0:39:33 And, you know, it’s been 25 years for me, but like in the early days of programming basic
0:39:37 applications or putting up websites, it was just extremely frustrating that you would
0:39:40 spend days and days being like, why does that thing not work?
0:39:40 Yeah, yeah, yeah.
0:39:43 And like I had very few resources for figuring out why the thing didn't work.
0:39:47 It would have been a hundred times easier if I could have had an agent write the thing.
0:39:48 I would have learned 10 times faster.
0:39:49 Yeah, yeah.
0:39:53 Honestly, what you did, well, not 25 years ago, but 10 years ago, is you'd go
0:39:53 to Stack Overflow.
0:39:55 And so it was like the slow version.
0:40:00 Yeah, but so think about how many people missed the window pre-Stack Overflow that got sort
0:40:03 of pushed out of the ecosystem because they’re just like, this is too frustrating.
0:40:07 And so you’re going to have a way bigger funnel at the top of people now learning programming
0:40:08 and computer science.
0:40:11 I think a similar percentage of people will fall out.
0:40:14 So it’s not like, again, you’re going to get a 10x increase in programmers because you
0:40:17 still have to enjoy it and you have to like solving problems and whatnot.
0:40:21 It’s going to change the nature of the incoming class of engineers that you hire.
0:40:25 They will literally not be able to code without AI assisting them.
0:40:30 And it’s not 100% obvious that’s a bad thing because assuming you have internet and the site
0:40:32 stays up, like we should have access to the agents.
0:40:37 So I think it’s mostly just like we have to adapt how we think about the role of an engineer
0:40:40 and what these tools are giving us in terms of the productivity gains.
0:40:45 Actually, I meet with a lot of larger, not tech-oriented companies as customers.
0:40:50 And generally, the thing I’m recommending is hire a bunch of these people because they’re
0:40:55 going to flip your company on its head with how much faster the organization can run.
0:40:56 So I do understand.
0:40:59 I want to be sympathetic to the job market for anybody coming out of college because I don’t
0:41:00 think it’s easy right now.
0:41:02 And it probably hasn’t been easy in a number of years.
0:41:10 But if you are graduating, the thing I would be selling any corporation some way or another
0:41:15 is that if you are AI native right now coming out of college, the amount you can teach a company
0:41:16 is unbelievable.
0:41:20 And then conversely, if you’re a company, you should be actually like prioritizing this talent
0:41:26 that is just like, why does it take you guys two weeks to research a market to enter?
0:41:29 I can do that in deep research and get an answer to you in 30 minutes.
0:41:32 They will be able to show companies way faster ways of working.
0:41:37 Do you think there's any stumbling into problems this way, which is like you kind of adopt too quickly?
0:41:42 It's like you get into a morass you can't get out of. Or do you think at this point it's pretty clear
0:41:44 this stuff can be practically consumed?
0:41:46 What would the morass be that you’d get into?
0:41:50 You hire a bunch of vibe coders and then they create something that nobody can maintain
0:41:51 and it’s really slow.
0:41:51 Oh, yeah, yeah, totally.
0:41:54 Which, by the way, I will say I have seen this.
0:41:54 Yeah, yeah, yeah.
0:41:56 Okay, you could easily overdo this whole thing.
0:42:01 So I think as with anything, like deploying these strategies in moderation
0:42:05 while we’re all collectively still getting the technology to work better and better
0:42:08 is super important and understanding the consequences of these systems.
0:42:12 So, yes, this is not like a moment to just have your whole company vibe code.
0:42:15 I will say one of my favorite things that I’m witnessing in the whole coding thing,
0:42:18 I don’t know, the point of this talk is kind of the AI and the enterprise generally,
0:42:20 but like the coding thing is just so salient,
0:42:23 is that a lot of these OG programmers that I’ve known for a long time
0:42:26 that are off-creating companies or CEOs of public companies like yourself
0:42:27 are all back to programming.
0:42:27 Yeah.
0:42:30 And then you talk to them, you know, many of them, like I code, you know,
0:42:33 most nights with Cursor just because it’s really enjoyable.
0:42:37 And the reason I didn’t code before is because I just couldn’t keep up with the fucking frameworks.
0:42:39 I’m like, dude, I don’t know how to install the fucking thing.
0:42:41 And what is this Python environment stuff?
0:42:45 And like, you’re literally learning bad design choices that somebody else just made up.
0:42:47 Like, they’re not fundamental to the laws of the universe.
0:42:49 They don’t make you any smarter.
0:42:51 It’s just like waste of brain space.
0:42:55 And so in some way, the AI just gets rid of this kind of crufty stuff
0:42:57 that you probably shouldn’t be wasting brain space on anyway.
0:43:01 The amount of frustration I have when I look through, let’s say, our product roadmap.
0:43:01 Yeah.
0:43:04 Let’s say pre-AI, although this still obviously happens
0:43:05 because we haven’t fully transformed everything about how we work.
0:43:08 But pre-AI, when you would see things like,
0:43:12 we have to upgrade the Python library in this particular product.
0:43:15 And it’s like three engineer, two quarters.
0:43:16 No, exactly.
0:43:20 And like, at the end of that project, zero customers will notice that we did something.
0:43:24 We resolved some fringe vulnerability that is not going to even happen.
0:43:26 But you have to do it because there’s some compliance thing
0:43:30 where you have to make sure you’re on the latest version, which is super important.
0:43:32 But like, the thing is never going to happen.
0:43:37 And all of a sudden, like, you are wasting hundreds of thousands of dollars of engineering time.
0:43:41 And the fact that like, that’s now like a codex task is just unbelievable.
0:43:45 And the amount of just now things that you can actually relieve your team to go and work on is incredible.
0:43:51 And the other big, like, boon for the economy, and this is again where the economists just totally miss this stuff,
0:43:56 is think about every small business on the planet, of which there’s millions, tens of millions, whatever,
0:44:01 that for the first time ever in history, they have access to resources that are somewhat approximate
0:44:03 to the resources of a large company.
0:44:06 Like, they can do any marketing campaign.
0:44:08 Did you see the NBA finals video from Kalshi?
0:44:09 No.
0:44:10 The VO3 video?
0:44:11 Oh, yeah, yeah, yeah, yeah, yeah.
0:44:15 Like, you can now put together an otherwise million dollar marketing video.
0:44:18 For a couple hundred bucks of tokens.
0:44:21 And that being applied to every domain in every service area,
0:44:24 I can run a campaign that translates into every language.
0:44:29 I can have this long tail of bugs that I never got around to automatically get solved.
0:44:34 I can have the analysis of a top tier consulting firm done for my particular business.
0:44:43 So for the people or companies that are resourceful and are creative and imaginative, the access to resources right now is just truly unprecedented.
0:44:48 What do you think is the best metric for anybody interested in tracking this stuff as far as, like, how fast it’s going?
0:44:49 Is it GDP?
0:44:51 Is it margin?
0:44:52 Is it top line?
0:44:53 Is it headcount growth?
0:44:54 Is it all the above?
0:44:55 Like, how do you measure it?
0:44:56 Yeah.
0:45:02 I mean, for us, so internally first, and then maybe we’ll spitball some macro solutions to this.
0:45:08 Internally, we’ve actually explicitly taken the stance that we want to use AI to increase the capacity and capability of the company.
0:45:10 So just do more.
0:45:12 For anything that you track, just make sure it happens fast.
0:45:13 Just do more.
0:45:14 Just, like, do more or do faster.
0:45:16 In a given time period, yeah.
0:45:21 And so that somewhat relieves the pressure from people that, like, this is about cost cutting.
0:45:21 Yeah, yeah, yeah.
0:45:23 It’s just like, no, no, just, like, do more right now.
0:45:24 Let’s figure out what works.
0:45:25 Some things won’t work.
0:45:26 We want experimentation.
0:45:28 So just use AI to do more.
0:45:29 Okay, so that’s us.
0:45:35 The way you should measure that then in a couple years from now is either the growth rate of the company should be faster.
0:45:36 Yeah.
0:45:45 Or the amount of things that we’re collectively doing should be more, and the only reason that wouldn’t show up in growth rate is that every other company also does more, and so that gets competed away.
0:45:45 Yeah.
0:45:50 Which is, like, also a very viable outcome, is this is just the new standard of running a business.
0:45:51 But there’s no shift in the equilibrium.
0:45:52 Right.
0:45:52 There’s no shift in the equilibrium.
0:45:54 You just have to do it.
0:46:00 And then the ultimate product of all of that is some other kind of metric of satisfaction of, like, our products get better.
0:46:02 It could be, like, consumer price index or something.
0:46:04 Yeah, but, like, did the iPhone show up in GDP?
0:46:07 I don't know, but my life is better with the iPhone than without it.
0:46:08 I’m pretty sure it did.
0:46:09 Okay, yeah, fine.
0:46:15 So, but, like, it would ultimately then show up in, like, new cures to diseases, better health care.
0:46:20 I don’t know that the dollars would move around all that differently as much as just, like, life expectancy should go up.
0:46:22 Like, cost of housing should go down.
0:46:30 Like, weird metrics that productivity gains will then drive that the economists wouldn’t naturally associate to, like, enterprise software and AI.
0:46:36 By the way, this is where I am, which is, like, clearly there’s a disruption because marginal costs are going down on a bunch of stuff.
0:46:36 Yeah.
0:46:38 Like, writing code and language reasoning and whatever.
0:46:43 And, like, some companies will take advantage of that, but I don’t think, like, the fundamental equilibrium changes.
0:46:46 I think, to your point, I think we just do more tech, products get better faster.
0:46:47 Yeah.
0:46:51 We solve problems that we haven't before, but, like, it's not the asymmetric disruption we've seen in other companies.
0:46:59 The way I kind of think about it is, you know, if we go back to, like, 1985 and we just looked at how everybody works, I think we would just be totally stunned by how slow everything is.
0:47:05 And just, like, how long did it take you to research a thing or analyze a market or create a campaign or whatever?
0:47:05 Yeah.
0:47:10 But, like, it just has now been baked into our human productivity that we just do all those things really fast now.
0:47:10 Yeah.
0:47:16 And so, in 10 years from now, when we all have AI agents running around, we will just look back to today and be, like, how did we function?
0:47:22 Like, you spent two weeks to decide, like, the message for the marketing campaign?
0:47:24 Like, how is that possible?
0:47:27 Like, what we do now is we run 50 experiments with AI agents.
0:47:28 They all come back with versions.
0:47:32 We look at them all together, and then we make a decision in an hour, and we move on.
0:47:34 That’s obviously, like, how work works.
0:47:36 And, like, that’s what we will be saying 10 years from now.
0:47:38 Do you think we’ll ever saturate the consumer?
0:47:44 I caveat this by saying this comes up every one of these inflection points, and so I just wanted to ask it again for the umpteenth time.
0:47:46 I think I’ll say yes, just because at some point, maybe.
0:47:50 But, like, my list of purely consumer demands has not gone down.
0:47:55 Like, healthcare is, like, a totally unmet need that I have.
0:48:01 I do not like to go to doctors or dentists or anybody because of just how hard it is to get scheduled.
0:48:03 I mean, buying a fucking car, man.
0:48:05 Like, there are so many things that, like, just need to be sold.
0:48:06 The cost of housing.
0:48:07 We clearly don’t have enough houses.
0:48:10 Like, where will AI, you know, drive that?
0:48:12 Okay, so, you know, maybe robotics would be then the play there.
0:48:17 But, like, I don’t think we’re anywhere close to consumer satisfaction or satisfying all needs of consumers.
0:48:25 Well, actually, I meant more, will things change so fast that, like, it saturates the ability to adopt new things?
0:48:27 I do think that that is certainly possible.
0:48:34 I think I track sort of, let’s say, my parents as a decent kind of proxy or even just, like, college friends that aren’t particularly in tech.
0:48:34 Yeah.
0:48:38 And they’re still, like, in their ChatGPT phase of adoption.
0:48:39 And they haven’t moved on from that.
0:48:40 They haven’t made a VO video yet.
0:48:45 They’re just, like, using ChatGPT to ask questions about life experiences they have.
0:48:50 And so, maybe, ironically, like, one of the problems was ChatGPT was so good.
0:48:58 If you, like, you know, imagine what people thought AI should be able to do for them, it already met, like, 80% of, like, where they would have sort of projected it.
0:48:58 Yeah, yeah.
0:49:01 But when we know, actually, no, it can probably still do 10 to 20x more.
0:49:02 Yeah.
0:49:05 But their needs are going to be satisfied for some time on those core use cases.
0:49:06 Yeah.
0:49:10 So, I think for, like, the most basic consumer query type things.
0:49:19 But then this is the opportunity for startups, which is, like, now AI will show up in sort of ways that maybe the person isn’t even, like, in the market for an AI thing.
0:49:21 They just want a better version of that category.
0:49:24 I was going to say, this could just, like, simply be another market constraint.
0:49:27 As soon as it saturates, you just make the product better given, like, the existing.
0:49:35 Then, if I could just get better healthcare, but I don’t need to think about that as an AI problem or not an AI problem, but AI will be behind the scenes delivering that.
0:49:35 Yeah.
0:49:37 Then I don’t think you’re saturated anytime soon.
0:49:39 Yeah, it’s just the consumption capacity just becomes another market constraint.
0:49:41 But there’s a ton of other ways that you can improve things.
0:49:42 100%.
0:49:42 That’s great.
0:49:42 Good.
0:49:44 I love that you’re so optimistic about it.
0:49:47 I am, I think, 98th percentile optimistic.
0:49:47 Same.
0:49:48 Good.
0:49:48 All right.
0:49:54 So, I think we’ve had a fairly pragmatic conversation about the current impacts and the near-term impacts.
0:50:00 If you do a longer view, can you dare to guess what things look like in five to ten years?
0:50:04 So, Sam Altman and Jack Altman had a podcast recently.
0:50:05 Yeah, yeah, yeah.
0:50:12 And I’m going to paraphrase probably in some wrong way, but they were going back and forth about how, like, we just got what we would have predicted as AGI five years ago.
0:50:14 And it’s just like, we use it.
0:50:16 And it’s, like, it’s now just built in.
0:50:17 The most anticlimactic.
0:50:18 Yeah, it’s anticlimactic.
0:50:19 Anti-anticlimactic.
0:50:24 And I think that’s my instinct for a lot of this is five years, ten years, whatever your number is.
0:50:34 And this is why I’m so optimistic on just society and jobs and all this stuff is I don’t think it’s the Terminator kind of crazy outcome scenario of we automate away everything.
0:50:48 I think the human capacity for wanting to solve new problems, for creating new products, for serving customers in new ways, for delivering better healthcare, to try and do scientific discovery, like, all of this stuff is just built in us.
0:50:48 Yeah.
0:50:49 And it will continue.
0:50:55 And AI is this kind of up-leveling of the tools that we use to do all those things.
0:50:59 And so I think the way we work will be totally different in five years or ten years.
0:50:59 Totally.
0:51:05 But you’re already seeing enough of probably what it will look like that I think it’s an extrapolation of that.
0:51:15 It’s when you want the marketing campaign done, you have a set of agents that go and create the assets and choose the markets and figure out the ad plan.
0:51:19 And then you have a few people review it and debate it and say, okay, let’s go in this direction instead.
0:51:21 And then you deploy it and you’re on to the next thing.
0:51:25 And so each company, their units of output grow.
0:51:29 As a result of that growth, we’re all still in competitive spaces.
0:51:33 So some of it gets competed out and others will keep growing faster than they would have before.
0:51:35 So they’ll hire more people and you’ll have new types of jobs.
0:51:38 Like we’ll have jobs for people just to manage agents.
0:51:40 And like you’ll have operations teams.
0:51:43 You know, Adam D’Angelo had this cool role that just kind of got announced.
0:51:44 Yeah, that was really cool.
0:51:49 Yeah, the role is to work with Adam at Quora and figure out which workflows can be automated with AI.
0:51:51 I think you’ll have a lot of those kind of functions.
0:51:56 But I think one of the exciting things about at least being in Silicon Valley or anybody kind of tuning in, being in this ecosystem,
0:52:00 is like we’re seeing the change happen faster here.
0:52:05 And it’s going to be five or ten years of this rolling out to the rest of the economy.
0:52:11 And so I think we will spend the next five years making the technology actually deliver on the things that we’re all collectively talking about:
0:52:18 making it more and more robust, so the accuracy goes up, the costs go down, and the workflows it can tie into get better.
0:52:20 And we will be working on that for quite some time.
0:52:29 And you think ultimately this leads to the biggest peace dividend of better products for users, better user experience?
0:52:31 Yeah, I think the software gets better.
0:52:33 Our healthcare gets better.
0:52:34 The life sciences discoveries increase.
0:52:36 I think it’s all a net positive for society.
0:52:37 I love it.
0:52:42 Thanks for listening to the A16Z podcast.
0:52:48 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash A16Z.
0:52:50 We’ve got more great conversations coming your way.
0:52:51 See you next time.
In this episode, a16z General Partner Martin Casado sits down with Box co-founder and CEO Aaron Levie to talk about how AI is changing not just software, but the structure and speed of work itself.
They unpack how enterprise adoption of AI is different from the consumer wave, why incumbents may be better positioned than people think, and how the role of the individual contributor is already shifting from executor to orchestrator. From vibe coding and agent UX to why startups should still go vertical, this is a candid, strategic conversation about what it actually looks like to build and operate in an AI-native enterprise.
Aaron also shares how Box is using AI internally today, and what might happen when agents outnumber employees.
Resources:
Find Aaron on X: https://x.com/levie
Find Martin on X: https://x.com/martin_casado
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.