AI transcript
0:00:05 So that’s my advice is every day, every day, you should be in ChatGPT.
0:00:07 I don’t care what your job is, right?
0:00:10 You could be a sommelier at a restaurant and you should be using ChatGPT every day
0:00:12 to make yourself better at whatever it is you do.
0:00:20 Can I ask you about the story really quick?
0:00:23 And you have like a list of stuff here that’s like all amazing.
0:00:24 It’s actually a lot of it’s very actionable.
0:00:27 But the reason I want to ask you about the story is for the listener.
0:00:29 Dharmesh founded HubSpot, $30 billion company.
0:00:30 You’re the CTO.
0:00:34 So you, you’re an OG for Web 1.0, Web 2.0.
0:00:39 And your first round or one of your first rounds was funded by Sequoia.
0:00:42 Your partner, Brian, is an investor at Sequoia.
0:00:44 So you are an insider.
0:00:45 You’re an insider, I believe.
0:00:47 You may not acknowledge it.
0:00:47 I don’t know if you do or do not.
0:00:48 You are an insider.
0:00:51 The cool part is that you’re accessible to us.
0:00:55 When did you first see what Sam was working on?
0:00:58 And how long have you felt that this is going to change everything?
0:01:02 So I actually have known Sam before he started OpenAI.
0:01:06 And I got access to the GPT API.
0:01:11 It was a toolkit for developers to be able to kind of build AI applications, right?
0:01:12 Effectively.
0:01:17 And so I built this little chat application that used the API.
0:01:19 And so I could have a conversation with it.
0:01:20 So I actually built that thing that night.
0:01:22 It was a Sunday.
0:01:25 I had the full transcript two years before ChatGPT came out.
0:01:26 So that’s four years ago?
0:01:28 It was 2020.
0:01:29 So five years ago.
0:01:30 Wow.
0:01:30 Okay.
0:01:31 This summer.
0:01:36 And so even then, it’s like, and as soon as I, like, you sort of have that moment.
0:01:38 It’s the same that all of us have with ChatGPT.
0:01:40 I just had it two years earlier.
0:01:43 And then I’m showing everyone, like, Brian, you are not going to believe.
0:01:46 Like, I have this thing, you know, through this company called OpenAI.
0:01:49 And watch me, like, type stuff into it and see, like, see what happens.
0:01:52 And we would ask it, like, strategic questions about HubSpot.
0:01:54 It’s like, how should it, like, who are the top competitors?
0:01:59 And even then, two years before ChatGPT, it was shockingly good, right?
0:02:03 But the thing you sort of have to understand about the constraints of how a large language
0:02:08 model actually works is that you type and you have a limit. Just imagine this, if we’re
0:02:13 going to just use the physical analog: a sheet of paper can only fit a certain number of words
0:02:14 on it.
0:02:20 And that certain number of words includes both what you write on it, that says, I want
0:02:23 you to do this, and the response has to fit on that sheet of paper.
0:02:28 And that sheet of paper is what, in technical terms, would be called the context window.
0:02:30 And you’ll hear this tossed around.
0:02:33 It’s like, oh, this, you know, ChatGPT has a context window of whatever, or this model has
0:02:34 a context window of whatever.
0:02:35 That’s what they’re talking about.
0:02:37 All right, so why is that?
0:02:39 Why does anybody care about the context window?
0:02:45 It’s like, well, sometimes you want to provide a large piece of text and say, summarize this
0:02:45 for me.
0:02:48 Well, in order for you to do that, it has to fit in the context window.
0:02:52 So if you want to take two books worth of information and say, I want you to summarize this in 50
0:02:57 words, those two books worth of information have to fit inside the context window in order
0:02:58 for the LLM to process it.
0:03:03 Most of the frontier models are roughly 100,000 to 200,000.
0:03:06 They measure it in tokens, which is like 0.75 of a word.
0:03:07 That’s like a book.
0:03:08 So yeah, is that a book?
0:03:09 I think it’s an average.
0:03:13 I think the average book is like 240,000 words, I think, but I’m not sure.
0:03:13 That’s not a lot.
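To make the "sheet of paper" constraint concrete, here is a minimal sketch of checking whether a prompt plus a document will fit before you send it. It assumes the tiktoken tokenizer library; the 128,000-token limit, the reserved answer budget, and the book.txt file are illustrative placeholders rather than anything stated in the conversation.

```python
# Rough token math for the "sheet of paper" (context window) idea.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a tokenizer used by many recent OpenAI models

def fits_in_context(prompt: str, document: str,
                    context_limit: int = 128_000,
                    reserve_for_answer: int = 2_000) -> bool:
    """Return True if prompt + document still leave room for the model's reply."""
    used = len(enc.encode(prompt)) + len(enc.encode(document))
    return used + reserve_for_answer <= context_limit

book_text = open("book.txt").read()  # hypothetical file standing in for "two books worth"
print(fits_in_context("Summarize this in 50 words:", book_text))
```

A thousand-page book encoded this way usually runs to several hundred thousand tokens, which is why it will not fit into a 100,000 to 200,000 token window in one pass.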
0:03:19 So when I, the way that I use ChatGPT is I’ll like, let’s say a fun way is I’ll, I’ll put
0:03:22 a historical book that I loved reading and I’ll be like, summarize this so I remember
0:03:23 the details.
0:03:28 So you’re telling me that if it’s a thousand page book, it’s not even going to accurately
0:03:29 summarize that book?
0:03:30 It won’t fit.
0:03:35 You’re like, if you paste something large enough into ChatGPT or whatever AI application you’re
0:03:38 using, it will come back and say, sorry, that doesn’t fit.
0:03:41 Effectively, what they’re saying is that does not fit in the context window.
0:03:42 So you’re gonna have to do something different.
0:03:44 All right.
0:03:50 A few episodes ago, I talked about something and I got thousands of messages asking me to
0:03:51 go deeper and to explain.
0:03:52 And that’s what I’m about to do.
0:03:57 So I told you guys how I use ChatGPT as a life coach or a thought partner.
0:04:01 And what I did was I uploaded all types of amazing information.
0:04:07 So I uploaded my personal finances, my net worth, my goals, different books that I like,
0:04:09 issues going on in my personal life and businesses.
0:04:12 I uploaded so much information.
0:04:17 And so the output is that I have this GPT that I can ask about questions and issues
0:04:18 that I’m having in my life.
0:04:20 Like, how should I respond to this email?
0:04:21 What’s the right decision?
0:04:25 Knowing that you know my goals for the future, things like that.
0:04:29 And so I worked with HubSpot to put together a step-by-step process, showing the audience,
0:04:35 showing you the software that I use to make this, the information that I had ChatGPT ask me,
0:04:36 all this stuff.
0:04:37 So it’s super easy for you to use.
0:04:40 Like I said, I use this like 10 or 20 times a day.
0:04:41 It’s literally changed my life.
0:04:43 And so if you want that, it’s free.
0:04:44 There’s a link below.
0:04:47 Just click it, enter your email, and we will send you everything you need to know to set
0:04:49 this up in just about 20 minutes.
0:04:52 And I’ll show you how I use it again, 10 to 20 times a day.
0:04:53 All right.
0:04:54 So check it out.
0:04:55 The link is below in the description.
0:04:57 Back to the episode.
0:05:03 I usually use projects and I have like, let’s say a health project and I’ll upload tons and
0:05:05 tons of books or tons of blood work.
0:05:09 And I’m hoping that it’s going to pull from all those books in my project.
0:05:10 Is that true?
0:05:11 That is true.
0:05:13 So here, and this is a perfect segue, right?
0:05:15 Because this is the next big unlock.
0:05:19 So number one thing to like understand in our heads is there’s this thing called a context
0:05:19 window.
0:05:20 Here’s why it matters.
0:05:24 So let’s, we’re going to take that, we’re going to push it on the stack and we’re going
0:05:26 to come back to it.
0:05:27 So the thing we have to remember is two things.
0:05:31 Number one, it doesn’t know what it’s never been trained on.
0:05:33 That’s one of the limitations, right?
0:05:38 So if you ask it something that only you, Sam, have in your, in your files and your email,
0:05:42 whatever, that the training model was, I mean, the LLM was never trained on, it’s not going
0:05:43 to know those things.
0:05:44 It doesn’t matter how smart it is.
0:05:46 It’s just information it’s never seen.
0:05:47 So it’s not going to know that.
0:05:48 That’s kind of problem number one.
0:05:50 Problem number two.
0:05:56 So let’s say your website for Hampton was actually on, um, uh, in the training set,
0:05:56 right?
0:06:00 Because it’s on the public internet or whatever, but the training happened at a particular point
0:06:01 in time.
0:06:04 Like they ran the training, ran the training, ran the training and said, okay, we’re done
0:06:05 with the training now.
0:06:07 The machine is done.
0:06:10 Let’s let the customers in right now.
0:06:14 If the website changes, it’s not going to know about those new updates that you’ve made
0:06:17 to your website because the training was done at a particular date.
0:06:19 It completed its kind of training course, right?
0:06:22 So those are the two things we sort of have to remember is that it doesn’t know what it
0:06:23 doesn’t know.
0:06:27 And number two, that the things that did know were frozen at that particular point in time,
0:06:27 right?
0:06:28 So it hasn’t seen new information.
0:06:32 And those are relatively large limitations, right?
0:06:35 So especially if you’re going to use it for business use or personal, it’s like, well,
0:06:39 I’ve got a bunch of stuff that I want it to be able to answer questions about or whatever
0:06:42 inside my company or inside, uh, my own personal life.
0:06:44 How do I get it to do that?
0:06:46 Um, and so here’s the hack.
0:06:48 And this is, this was a brilliant, uh, brilliant discovery.
0:06:54 So what they figured out is to say, okay, let’s say you have a hundred thousand documents
0:06:56 that were never on the internet.
0:06:57 That’s in your company.
0:07:00 It’s all your employee hiring practices, your model.
0:07:01 Here’s how we do compensation.
0:07:02 All of it, right?
0:07:03 It’s like, oh, you have a hundred thousand documents.
0:07:06 And obviously you can’t ask questions about those hundred thousand documents straight
0:07:07 to ChatGPT.
0:07:09 It doesn’t know anything about those, never seen those documents.
0:07:14 So this is, and we talked about this two episodes ago, um, this thing called vector embeddings
0:07:17 and RAG, retrieval-augmented generation.
0:07:20 Um, and I’ll, I recommend you folks go listen to that.
0:07:23 I think it’s a, it’s a, it’s a fun episode, but I’ll kind of summarize it, which is what
0:07:24 you can do.
0:07:27 And what we do is to say, we’re going to take those hundred thousand documents and we’re
0:07:31 going to put them in this special database called a vector store, a vector database.
0:07:36 And what we can do now is when someone asks the question, we can go to the vector store,
0:07:41 not the LLM, go to the vector store and say, give me the five documents out of the hundred
0:07:45 thousand that are most likely to answer this question based on the meaning of the question,
0:07:48 not keywords based on the actual meaning of the question.
0:07:51 So it’s called semantic search; that’s what the vector store is doing.
0:07:53 So it comes back with five documents.
0:07:58 Let’s just say now, as it turns out, five documents do fit inside the context window.
0:08:01 So effectively we said, okay, well, yeah, it would have been nice
0:08:04 had you been trained on the hundred thousand documents, but that was not practical because I didn’t
0:08:05 want to expose all of that.
0:08:07 I’m going to give you the five documents that you actually need.
0:08:10 I can just give them to you in the context window.
0:08:16 And now, as you can imagine, it does an exceptionally good job at answering the question when it has
0:08:17 the five documents it should be looking at.
0:08:18 You just gave them to it.
0:08:18 Right?
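For anyone who wants to see the retrieval step spelled out, here is a minimal sketch of the "give me the five most relevant documents and stuff them into the context window" flow, assuming the openai Python client and a small in-memory list standing in for a real vector database; the model names and doc_texts contents are just examples.

```python
# Toy RAG: embed the question, find the closest documents by meaning, answer from those.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_texts = ["...hiring practices...", "...compensation policy...", "...product history..."]
doc_vecs = embed(doc_texts)  # in reality these live in a vector store, not a Python list

def answer(question: str, k: int = 5) -> str:
    q = embed([question])[0]
    # cosine similarity = closeness in meaning, not keyword overlap
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    top = [doc_texts[i] for i in np.argsort(sims)[::-1][:k]]
    prompt = ("Answer using only these documents:\n\n" + "\n---\n".join(top)
              + f"\n\nQuestion: {question}")
    chat = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}])
    return chat.choices[0].message.content
```

The LLM never sees the other 99,995 documents; only the handful that survive the semantic search make it onto the "sheet of paper."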
0:08:21 So it’s having this, uh, so we’ll kind of, uh, jump metaphors here.
0:08:25 It’s like hiring a really, really good intern that has a PhD in everything, right?
0:08:27 They went to school, they read all the things, read all the internet.
0:08:31 The intern knows everything about everything that ever was publicly accessible.
0:08:34 They’re trained, show up for the first day of work.
0:08:35 That’s all they know.
0:08:38 They’re not learning anything new and they know nothing about your business.
0:08:41 Now it’s like, okay, well, I know you know everything about everything.
0:08:43 I have this question about my business.
0:08:48 Here are five documents that you can read right now and answer my question.
0:08:50 It’s like, oh, I can do that.
0:08:54 I like that analogy, the intern with the PhD and everything.
0:08:55 That’s so much how it is, right?
0:09:02 It’s as helpful and available as an intern, but it’s as knowledgeable as somebody with a
0:09:03 PhD in everything.
0:09:07 Um, and then like you said, you know, my, the, another analogy for that is like, it’s
0:09:11 a store, you have shelf space, which is kind of limited, but they do have a back and you
0:09:15 can always get, send the employee to the back and see if they can find it in the back for
0:09:15 you.
0:09:15 Right.
0:09:17 That’s kind of like what you’re saying, put it in the database.
0:09:20 They can go fetch the specific thing that you’re asking for.
0:09:22 Uh, because you know, you gave it access to the back.
0:09:24 You gave it a badge that lets it go in there.
0:09:29 Have you uploaded all of HubSpot, like, have you figured, first of all, I want to know what
0:09:30 your ChatGPT looks like.
0:09:33 I want to know how you use it on a, like, I just want you to just screen share, just like
0:09:37 show me exactly what you do, but also have you uploaded your entire life?
0:09:42 Like, have you uploaded all of HubSpot to ChatGPT where you could just ask it any question?
0:09:43 Yeah, multiple times.
0:09:43 Right.
0:09:46 Um, so, and what format, tell me how you did that.
0:09:52 Uh, so I did, so OpenAI has, um, this thing, uh, it’s called an embeddings algorithm that
0:09:56 takes any piece of text, a document, an email, whatever it happens to be, and creates this
0:10:00 kind of point in the high dimensional space, um, called, you know, called a vector embedding.
0:10:03 And, you know, a point in high dimensional space.
0:10:07 So in three dimensional space, physical space that we know of, we think of points being in
0:10:08 three dimensions, X, Y, and Z axis.
0:10:10 Like, oh, here’s where this point is in space.
0:10:13 Uh, high dimensional space, you can have a hundred dimensions.
0:10:14 You can have a thousand dimensions.
0:10:16 You can describe each document as this kind of point in space.
0:10:22 So what I’ve done, so it used to be, um, in the early kind of GPT world,
0:10:25 the number of dimensions you had access to was roughly like a hundred to 200 dimensions.
0:10:27 And so you would lose a lot of the meaning of a document, right?
0:10:28 It would sort of get it right.
0:10:30 It sort of captured the meaning.
0:10:33 Uh, and then, uh, then we went to like a thousand dimensions.
0:10:39 It’s like, oh, well now it can much more accurately sort of represent, um, and, and capture, um,
0:10:43 a document of kind of arbitrary length and, and be able to find it, uh, give it a prompt
0:10:45 or given some sort of search query.
0:10:50 Uh, and then recently within the last year, we’ve gone, the, the latest algorithm, uh,
0:10:55 from OpenAI, uh, embeddings algorithm is like 3072, I think, uh, dimensions.
0:10:57 But, but where, where do you do this?
0:11:01 Do you just literally upload it as a project or you, you had to do an API connection?
0:11:02 What did you, how do you actually do this?
0:11:03 I had an API connection, right?
0:11:05 In fact, I’m running the latest.
0:11:06 Let me see where it is now.
0:11:10 And anyone could do this or you have special access because you’re friends?
0:11:11 No, anyone can do this.
0:11:14 The, the, uh, the API for the embeddings model, they have two versions.
0:11:16 They have the 3000 dimension version.
0:11:18 They have a 1000 dimension version.
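As a concrete illustration of those two versions, here is a sketch of requesting an embedding at a chosen dimensionality with the openai Python client; text-embedding-3-large supports up to 3072 dimensions and accepts a dimensions parameter for smaller vectors, and the input text here is just a placeholder.

```python
# One document in, one point in high-dimensional space out.
from openai import OpenAI

client = OpenAI()

resp = client.embeddings.create(
    model="text-embedding-3-large",
    input="All of our hiring and compensation policies ...",  # any document or email
    dimensions=1024,                                           # or omit to get the full 3072
)
vector = resp.data[0].embedding
print(len(vector))  # 1024 numbers describing where this text sits in meaning-space
```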
0:11:22 And is the results of this, like, are you driving a NASCAR and I’m driving like a scooter?
0:11:23 Like, is that the difference?
0:11:28 Like if I just like, for example, uh, what I will do is I’ll just like download my company’s
0:11:29 financials and I’ll upload it.
0:11:32 And then I’ll like explain what my company does.
0:11:34 But the way that you do it is a lot different.
0:11:38 Now, are we talking a massive gap in results that you get versus what I get?
0:11:41 Um, yes.
0:11:44 The short answer is yes.
0:11:48 Uh, and, and the reason is like, so I do, I do that as well in terms of how I’ll describe
0:11:49 the company or whatever.
0:11:50 I try to provide it context.
0:11:52 And that’s why it’s called the context window.
0:11:55 You try to provide the LLM, uh, context for what you’re asking it to do.
0:12:01 Um, you know, the difference is that, you know, because I can go through like, and by the
0:12:02 way, the richest source is email, and I’m working on a
0:12:08 kind of nights-and-weekends project right now, um, that takes, uh, email, uh, which, you
0:12:10 know, so you would be amazed.
0:12:14 Like if you did nothing else right now, if you did nothing but say, I’m going
0:12:19 to take all of my emails I’ve ever written, uh, that are still stored and give it, uh, to
0:12:23 a vector store, use an embeddings algorithm and then use ChatGPT to let me kind of answer
0:12:24 questions.
0:12:28 So if I want to say, oh, I want you to give me a timeline for when we first started
0:12:31 using Hub to name products or whatever, and how’d that come about?
0:12:35 Or what were the winning arguments against doing that versus whatever?
0:12:41 Like it’s shocking how good the responses are when you give it access to that kind of rich
0:12:42 data, right?
0:12:46 Somebody needs to create just like a $10 a month, a single website.
0:12:48 That’s like, hey, make your ChatGPT smarter.
0:12:52 And it’s a website where it’s like, connect your Gmail, connect your Slack, connect your
0:12:52 everything.
0:12:57 I would happily pay them 20 bucks a month to just set this up for me so that my ChatGPT,
0:13:03 to, to give my ChatGPT, like, the extra pill that says, you now have access to my data.
0:13:07 Is this because, because you’re talking about like, I have the API to the vector embeddings
0:13:10 and like, well, I have the flux capacitor too, but I don’t know what to do with it.
0:13:10 Right?
0:13:15 Like I need a button on a website with a Stripe payment button that I could just connect the
0:13:15 stuff.
0:13:16 Is it not?
0:13:18 Is there, is there a caveman version of this?
0:13:22 Oh, there’s, I mean, there are, uh, tools out there that do this, and there are startups working on
0:13:22 it, right?
0:13:23 Uh, there’s two pieces of good news.
0:13:25 One is there are startups working on it.
0:13:29 The challenge here is, uh, not that they’re doing a bad job.
0:13:34 The challenge actually comes down to, uh, if it were a startup and a startup came to you,
0:13:37 it’s like, Oh, we just started last, last week, but we’ve got this thing.
0:13:38 It really works.
0:13:44 Uh, in fact, Dharmesh may be, uh, an investor. How willing would you be to hand over literally
0:13:47 your entire life and everything that’s in your email over to this startup?
0:13:51 Like, so part of the challenge we have is the access control. Let’s say you’re
0:13:56 using Gmail, which, uh, a lot of us use. When you provide the keys to your Gmail account
0:14:00 to a third party, uh, there is no real degree of granularity.
0:14:02 You can say, oh, I want it to read the metadata.
0:14:04 That’s like level one. Level two access is
0:14:05 I want it to read my full email.
0:14:08 And level three is I want it to be able to write and delete emails on my behalf.
0:14:13 But if you wanted it to, like, read the actual body of the email, you can’t say, I only want it
0:14:16 to read messages that are from HubSpot.com.
0:14:19 Or, I want it to ignore all messages from my wife and my family or whatever in the thing.
0:14:21 There’s no way to control that.
0:14:21 Right.
0:14:22 So you sort of have to have a trust.
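To ground the point about granularity, these are Gmail API OAuth scope strings that roughly correspond to the three levels described above; note that they gate the kind of access, not which messages it applies to.

```python
# Gmail API OAuth scopes, roughly matching "level one / two / three" above.
GMAIL_SCOPES = {
    "metadata only":          "https://www.googleapis.com/auth/gmail.metadata",
    "read full messages":     "https://www.googleapis.com/auth/gmail.readonly",
    "full read/write/delete": "https://mail.google.com/",
}
# There is no scope like "readonly, but only messages from hubspot.com" --
# any sender-level filtering has to happen in the third party's own code
# after the broad grant has already been made.
```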
0:14:27 Is there any product that you would trust right now or that you can recommend that guys
0:14:31 like Sean and I should use as ChatGPT add-ons or accelerators?
0:14:38 No, not that I don’t trust them, but it’s like, I wouldn’t trust really anyone right
0:14:39 now with that.
0:14:42 And it’s one of the reasons I sort of run it locally, even though I know these things are
0:14:42 out there.
0:14:47 Um, I predict what’s going to happen is we’re going to have, uh, any of the major players
0:14:49 and you can see this happening already, right?
0:14:53 We see this with, um, you know, you have the ability to create custom GPTs in OpenAI and
0:14:55 do Projects in Claude.
0:14:58 You have Google Gems, which are essentially like a small baby version of this, right?
0:15:02 That says, Oh, you can upload 10 documents, a hundred documents that it’ll let you ask
0:15:05 questions, uh, against. What it’s really doing behind the scenes is creating a vector
0:15:05 store.
0:15:07 That’s effectively what’s happening.
0:15:14 Um, my expectation is all the major companies, um, will actually have a variation of this,
0:15:18 uh, starting with Google should be the first one because they already have the data.
0:15:25 There is absolutely zero reason why Google Gemini does not let you have a Q&A with your own
0:15:26 email account.
0:15:28 That’s just like insanely stupid, right?
0:15:30 Like I’ll just go ahead and say it.
0:15:32 It’s just, it’s just, there’s something not right with the world.
0:15:36 Uh, when they already have the data and it’s like, and they have the algorithm, they have
0:15:38 Gemini 2.5 pro, which is an exceptionally good model.
0:15:39 Right?
0:15:43 So there you have all the pieces, uh, but they have not yet delivered. But I hope it’s not too
0:15:43 distant.
0:15:48 Then tell me and Sean, we’re early adopters, but neither of us are technical.
0:15:51 What can we do to, I want to get it on this baby.
0:15:52 All right.
0:15:53 So give me, give me two weeks.
0:15:53 Here’s, here’s what I’ll do.
0:15:56 Uh, so that’s the one thing I do trust: I trust myself.
0:15:58 Um, I’m, I’m an honest guy.
0:16:03 Uh, I’ll give you like this internal app that I’m building, let you put your Gmail to it.
0:16:06 It’ll go and it’ll run for a day or two days or something like that.
0:16:07 And then you will be amazed.
0:16:08 You will be able to ask questions.
0:16:14 Um, and by the way, like, and the thing I’m like working on now is once you have this,
0:16:15 this capability, right?
0:16:18 Like step one is just being able to do Q&A, right?
0:16:22 It’s like, oh, just Q&A. Step two is, like, imagine kind of fast forwarding.
0:16:23 Like it has access to all of your kind of history.
0:16:25 So imagine you’re able to say, you know what?
0:16:30 I’m not doing this by the way, but if I were, it’s like, I want to write a book about HubSpot
0:16:32 and all the lessons learned and all like everything.
0:16:33 It’s all in my email.
0:16:36 Do the best possible job you can writing a book.
0:16:39 If you have questions along the way, ask me, but other than that, write the book.
0:16:40 I think you would be able to write the book.
0:16:41 Wow.
0:16:43 What else are you doing with AI?
0:16:45 So, uh, give me your day to day.
0:16:50 I like, for example, the CEO of Microsoft had this great thing where he goes, I think with
0:16:52 AI, then I work with my coworkers.
0:16:53 And that really shifted the way I worked.
0:16:57 Cause I used to brainstorm or have a meeting to talk about stuff with my coworkers, which
0:17:00 was honestly always like a little disappointing.
0:17:03 I felt like I’m the one bringing the energy and the ideas and the questions, and I’m hoping
0:17:08 that they’re going to, but dude, just sparring with AI first and then taking the kind
0:17:12 of like distilled thoughts to my team of like, here’s how we’re going to execute has been
0:17:13 way better.
0:17:16 Like that little one sentence he said shifted the way I was doing it.
0:17:19 How are you kind of using this stuff?
0:17:19 Yeah.
0:17:20 So a couple of things.
0:17:23 Um, so I’ll let, let’s start at the high level and we’ll drill in a little bit.
0:17:28 So, uh, what we’re used to with, uh, ChatGPT, this is sort of the kind of early evolution
0:17:32 of most people’s use: because it’s called generative AI, use it to generate things, right?
0:17:37 Uh, generate a blog post, generate an image, generate a video, generate audio, all those
0:17:37 things.
0:17:39 That’s kind of the generation kind of aspect.
0:17:41 Uh, and that’s part of what it’s good at.
0:17:45 Then you sort of get into the, Oh, but it can also kind of summarize and synthesize things
0:17:46 for me.
0:17:49 It’s like, Oh, take this large body of text, take this blog post, take this academic paper
0:17:51 and summarize it in this way.
0:17:53 Or like, so a seven year old would understand it kind of thing.
0:17:53 Right.
0:17:57 So that’s the kind of step number, um, step number two, step number three.
0:17:59 And we’re going to get into how this is now possible.
0:18:03 Um, is you can do, um, effectively you can take action.
0:18:06 Um, have the LLM actually do things for you.
0:18:10 Uh, and I’ve always kind of put it broadly in the kind of automation bucket, like I can
0:18:12 automate things that I was doing manually before.
0:18:15 Uh, and then the fourth thing is around orchestration.
0:18:20 It’s like, can I just have it manage a set of AI agents?
0:18:22 And we’ll talk about agents in a little bit and just do it all for me.
0:18:24 I just want to give it a super high order goal.
0:18:27 It has access to an army of agents that are good at varying different things.
0:18:29 I don’t want to know about any of that.
0:18:31 I just wanted to go do this thing for me.
0:18:31 Right.
0:18:33 And then that’s sort of where we are on the slope of the curve.
0:18:36 Uh, the first three things are possible today.
0:18:38 And work well today.
0:18:38 Right.
0:18:40 So you can generate, as we know, it can generate blog posts.
0:18:41 It could write really well.
0:18:45 Uh, it can generate great images now, including images with text.
0:18:48 It can do great video now with, uh, you know, higher fidelity, higher character cohesion, all
0:18:49 these things.
0:18:53 Uh, Sean, so the thing, the vision you had three years ago when I was on was around creating
0:18:55 the next Disney, the next kind of media company.
0:18:58 You have the tools now, my friend, uh, to finally start to approach that.
0:18:58 Right.
0:19:02 But then you should sort of move into, and this is what we were just talking about, this kind
0:19:03 of synthesis and analysis thing.
0:19:06 This is, okay, this is where deep research kinds of features come in.
0:19:09 It’s like, okay, well, I want you to take the entirety of the internet, or the entirety of what,
0:19:11 uh, Sean has written about copywriting.
0:19:16 And I want you to write a book just for me that summarizes all of that in ways that I enjoy.
0:19:19 Because I like, I like analogies and I like jokes and I like this and I like that.
0:19:22 Write a custom version of Sean Puri’s book on copywriting, right?
0:19:25 That kind of synthesis, um, I think would be, uh, super interesting.
0:19:27 And then automation is now possible.
0:19:28 So agent.ai is one of those things.
0:19:32 There’s other tools out there that says, Hey, I want to take this workflow or this thing that
0:19:34 I do, and I want you to just do it for me.
0:19:38 Give us a specific, what’s a specific, specific automation that you’ve used.
0:19:40 That’s like, you know, useful, helpful, saves you time.
0:19:42 I’ll tell you a couple.
0:19:44 One is around domain names, which is, okay.
0:19:50 So I have an idea for a domain name, um, and I’m going to type words in and these things
0:19:50 exist.
0:19:52 And I’ll tell you the manual flow that I used to go to.
0:19:56 It’s like, okay, first of all, I can brainstorm myself and come up with possible words and
0:19:58 various other words, whatever, here’s the things that I’ll say.
0:19:58 Okay.
0:19:59 Which domains are available?
0:20:03 Absolutely zero of them, uh, that are good that will pop into my mind are like freely
0:20:05 available to kind of just register that no one’s registered before.
0:20:05 Okay.
0:20:06 Fine.
0:20:08 Then I’ll say, okay, well, which ones are available for sale?
0:20:09 Okay.
0:20:10 What’s the price tag?
0:20:12 Is that a fair approximation of the value?
0:20:15 Is it like below market, above market?
0:20:17 We don’t know because there’s no Zillow for domain names yet.
0:20:18 Uh, so create that.
0:20:23 So I have something that automates all of that and says, oh, so you have this particular idea
0:20:27 for this concept, for this business, whatever it is, uh, here are names.
0:20:28 Here are the actual price points.
0:20:31 Here’s the ones that I think are below market value, above market value.
0:20:32 Tell me which ones you want to register.
0:20:34 That’s in ChatGPT?
0:20:37 No, agent.ai is where it lives right now.
0:20:42 But now there’s a connector between agent.ai and ChatGPT through this thing called MCP, which
0:20:44 you’ll hear about, uh, a bunch if, if you haven’t already.
0:20:49 Um, one thing I want to kind of get out there, just so we keep connecting the dots, um, because
0:20:52 I want, I want everyone to have this framework in their head.
0:20:54 Uh, so we talked about large language models.
0:20:55 It can generate things.
0:20:56 We talked about the context window.
0:21:00 We talked about faking out the context window by saying, oh, we can do this vector database
0:21:03 and bring in the right five documents, stuff them into the context window.
0:21:06 Uh, here’s the other big breakthrough that’s happened.
0:21:10 Uh, I’ll say recently within the last year, year and a half is what’s called tool calling.
0:21:14 And what tool calling is, is a really brilliant idea.
0:21:18 And the tool calling says, okay, well, the LLM was trained on a certain number of things,
0:21:22 but if we had this intern that came in, it would be like saying, okay, well, whatever you
0:21:24 know, you know, but we’re not going to give you access to the internet.
0:21:27 Like that would be stupid, right?
0:21:28 We would give the intern access to the internet.
0:21:31 It’s like, if I ask you something that you weren’t trained on, go look it up, right?
0:21:34 That, that would be like, like thing number one on the first day of work.
0:22:39 And as it turns out, in the LLM world, the intern didn’t have access to the internet.
0:21:43 All it had was whatever notes that happened to take during its PhD training and all things,
0:21:43 right?
0:21:47 And so what tool calling allows, and this is a weird, um, weird approach to it, but this is
0:21:48 because of the way LLMs work.
0:21:54 So remember the LLM, it’s architected such that the context window goes in, it spits
0:21:55 things out.
0:21:55 That’s it.
0:21:58 It doesn’t have, and you can’t reprogram the architecture.
0:22:00 But now all of a sudden we’re going to give you access to tool calling.
0:22:02 So here’s the hack that they came up with.
0:22:08 They said, okay, in the instructions that we give it in the context window, we’re going
0:22:11 to say you have access to these four tools.
0:22:13 And it doesn’t actually have access to the four tools.
0:22:17 It’s that I want you to pretend like you have access to these four tools.
0:22:19 The first tool is this thing called the internet.
0:22:23 And the way the internet works is you type in the query and it will give you some things
0:22:23 back.
0:22:28 You have this other thing called a calculator and you can give it a mathematical expression
0:22:29 and it gives you an answer back.
0:22:32 And you have this other tool that lets you do this and you can have a number of tools.
0:22:34 And so here’s what happens.
0:22:41 In the context window happening behind the scenes, ChatGPT, which is the interface right
0:22:44 now that is interacting with the LLM, you’re not talking with the LLM directly, right?
0:22:49 It gets a prompt and it says, okay, by the way, LLM, I want you to pretend like you have
0:22:50 access to these four tools.
0:22:56 And anytime you need them, when you pass the note back to me, the results, the output, just
0:22:57 tell me when you want to use one of those tools.
0:22:59 All right.
0:23:00 So we give it a query.
0:23:04 It’s like, okay, well, I want to look up like the historical stock valuation for HubSpot and
0:23:07 when it changed as a result of, is there any correlation to the weather?
0:23:09 Is it seasonal or whatever it is, right?
0:23:13 In terms of market cap of HubSpot versus seasonal changes.
0:23:14 All right.
0:23:17 Well, that’s not something you would have access to, but here’s what actually happens.
0:23:17 This is so cool, right?
0:23:22 So the LLM gets it, and in the context window that we gave it,
0:23:23 we gave it instructions.
0:23:25 It’s a pretend like you have these four tools.
0:23:28 One of which is stock price lookup, let’s say, historical stock price lookup.
0:23:34 It’ll pass the output back to the application, not us, and say, and in the output, it says,
0:23:39 oh, please invoke that tool you told me I had access to and look up this result.
0:23:40 I want you to search the internet for X.
0:23:41 What was the weather?
0:23:42 I want you to do this for the stock price.
0:23:45 And then we do that.
0:23:51 We, the ChatGPT application, fill the context window with whatever it is the LLM asked for
0:23:52 and then pass it back in.
0:23:57 So the LLM effectively has access to those tools, even though it never accessed the internet,
0:24:01 it never accessed the stock market, but it pretended like it had access to it.
0:24:02 And we never see this.
0:24:03 This is happening behind the scenes.
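Here is a condensed sketch of that note-passing loop, assuming the openai Python client's tool-calling interface; get_stock_history is a hypothetical local function standing in for whatever data source the application actually wires up, and the model name is just an example.

```python
# Tool calling: the model only *asks* for the tool; the application runs it.
import json
from openai import OpenAI

client = OpenAI()

def get_stock_history(ticker: str) -> str:
    return "2020: ..., 2021: ..., 2022: ..."  # placeholder data source

tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_history",
        "description": "Look up historical stock prices for a ticker",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}]

messages = [{"role": "user", "content": "How has HubSpot's stock moved over the years?"}]
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
msg = resp.choices[0].message

if msg.tool_calls:  # the LLM passed a note back saying "please invoke this tool for me"
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_stock_history(**args)  # the application, not the model, does the lookup
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```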
0:24:07 Now, here is the big, massive unlock, right?
0:24:10 Which is, well, everything can be a tool, right?
0:24:13 Now you don’t have to build this kind of vector store or whatever, because you would never
0:24:16 build a vector store of all possible stock prices from the dawn of time.
0:24:18 Now, I guess you could, but then it’s outdated immediately.
0:24:23 Now it’s like, what if we just gave it 20 really powerful tools, including browser access
0:24:23 to the internet?
0:24:30 Well, that’s like a 10,000, 100,000 times increase in that intern’s capability, right?
0:24:35 And so that’s where our brain should be headed now, which is exactly where the world is headed,
0:24:40 that says, what tools can we give the LLM access to that will amplify its ability and cause
0:24:43 zero change to the actual architecture?
0:24:45 Literally, it doesn’t have to know anything about anything.
0:24:47 It’s like, I just want you to pretend that you have access to these tools.
0:24:49 It doesn’t need to know how to talk to those tools.
0:24:50 It doesn’t need to know about APIs.
0:24:52 It doesn’t need any of that stuff.
0:24:57 Cutting your sales cycle in half sounds pretty impossible, but that’s exactly what Sandler
0:24:59 training did with HubSpot.
0:25:06 They use Breeze, HubSpot’s AI tools to tailor every customer interaction without losing their
0:25:06 personal touch.
0:25:08 And the results were incredible.
0:25:11 Click-through rates jumped 25%.
0:25:13 Qualified leads quadrupled.
0:25:16 And people spent three times longer on their landing pages.
0:25:21 Go to HubSpot.com to see how Breeze can help your business grow.
0:25:28 Do you think that, I mean, this is all mind-blowing, and you have an interesting perspective because,
0:25:33 you know, I think three episodes ago that you’re on, you created this thing called Wordle.
0:25:34 Was it Wordle?
0:25:34 Wordplay.
0:25:35 Wordplay.
0:25:37 That does like 80 grand a month.
0:25:39 It was just like a puzzle that you do with your son.
0:25:39 It was amazing.
0:25:46 But now you have new projects, you have agent AI, you have a few other things, but you still
0:25:47 run a $30 billion company.
0:25:55 Do you think that the majority of value creation, like, am I going to, is my stock portfolio going
0:25:58 to go up because I own a basket of tech stocks?
0:26:05 Or is the best way to capitalize as an outsider, obviously, you start a company, or is it investing
0:26:09 in new startups that are using AI or AI-first startups?
0:26:12 Yeah, it’s a good question.
0:26:15 I’m neither an economist nor a stock analyst, but I will say this.
0:26:21 The thing I’m most excited about with AI, and I actually said exactly this in a talk I gave
0:26:26 well before GPT on the inbound stage, and I said, you know, as AI is starting to kind of
0:26:29 come up, it’s not a you versus AI.
0:26:31 That’s not the mental model you should have in here.
0:26:34 It’s like, oh, well, AI is going to take my job because it’s me trying to do things that
0:26:36 the AI is then eventually going to be able to do.
0:26:40 The right mental frame of reference you should have, it’s you to the power of AI.
0:26:43 AI is an amplifier of your capability.
0:26:47 It will unlock things and let you do things that you were never able to do before, as a
0:26:50 result of which it’s going to increase your value, not decrease it, right?
0:26:55 But in order for that to be true, you actually have to use it.
0:26:56 You have to learn it.
0:26:57 You have to experiment with it.
0:27:02 And the only real way to get a feel for what it can and can’t do is you have to do it.
0:27:03 So I’ll give you the very, very simple thing.
0:27:04 Everyone should do this.
0:27:05 I do this personally.
0:27:12 It’s that anytime you’re going to sit down at a computer and do something, research, whatever
0:27:17 it is you’re going to do, you should give ChatGPT or your AI tool of choice a shot at
0:27:17 it.
0:27:23 Try to describe and pretend like you have access to this intern that has a PhD in everything.
0:27:26 It’s like, okay, well, maybe it doesn’t know anything about me or whatever.
0:27:26 Fine.
0:27:28 So then tell it a few things about you.
0:27:31 But imagine you have access to this all-knowing intern that has a PhD in everything.
0:27:34 Give it a crack at solving the problem that you’re about to sit down and spend some time
0:27:39 on and what you will invariably find, number one, is you’ll be surprised by the number of
0:27:42 times it actually comes up with a helpful response that you would never have expected
0:27:44 it would be even remotely able to do.
0:27:46 Like, how can it do that?
0:27:48 It’s because it has a PhD in everything, right?
0:27:52 And it’s now actually, we’ll talk about reasoning and whether models are actually doing that or
0:27:52 not if we have time.
0:28:00 But so that’s my advice is every day, every day, you should be in ChatGPT.
0:28:04 If you’re a knowledge worker at all, it doesn’t actually, you don’t even have to be a
0:28:04 knowledge worker.
0:28:06 And I don’t care what your job is, right?
0:28:10 You could be a sommelier at a restaurant and you should be using ChatGPT every day to make
0:28:12 yourself better at whatever it is you do.
0:28:17 And that might be the introduction of that orthogonal skill to bring it back to the, which I never
0:28:17 explained the word orthogonal.
0:28:19 I’ll do it in 30 seconds.
0:28:23 So orthogonal means a line that’s 90 degree intersection to another line.
0:28:27 And the most common use is when we have an X and Y axis, right?
0:28:31 It’s like, oh, the X axis and the Y axis are orthogonal to each other because they have
0:28:32 90 degrees separating them.
0:28:36 The common usage, when you say, oh, that’s an orthogonal concept, it means it’s unrelated.
0:28:37 It’s completely different.
0:28:40 That’s like the Y and X axis are completely independent of each other.
0:28:43 You can say, oh, you can be here on the X axis, but here on the Y axis and they’re not
0:28:44 related to each other.
0:28:48 So that’s what I mean when I say orthogonal concepts or skills or ideas.
0:28:49 Yeah.
0:28:52 Is there anything you disagree with that’s kind of the consensus?
0:28:55 Because a lot of things you’re talking about, like, hey, AI is going to change everything.
0:28:56 It’s super smart.
0:28:57 Agents are coming.
0:28:59 They can do some stuff now, more stuff later.
0:29:03 These are all probably right, but they’re also consensus.
0:29:06 I’m just curious, like, is there anything you disagree with that you hear out there that
0:29:09 drives you nuts where you’re just like, people keep saying this.
0:29:11 I think that’s either wrong.
0:29:12 It’s overrated.
0:29:13 It’s the wrong timeline.
0:29:14 It’s the wrong frame.
0:29:15 It’s whatever.
0:29:18 Is there anything that you disagree with that you’ve heard out there?
0:29:22 I’ve heard variations, two variations I disagree with.
0:29:27 One that I’ve, I think, spent so much time hopefully kind of talking folks out of, which
0:29:29 is it’s just autocorrect.
0:29:30 It’s not really thinking.
0:29:34 And that’s a matter of, like, what do you think thinking is, right?
0:29:39 It’s like, okay, well, it produces the right output, output which we think would require
0:29:45 thought. So I think that is flawed reasoning to say, oh, well, and this often comes from
0:29:49 the smartest people, the most expert in their field, because, oh, it’s really like a stochastic
0:29:49 parrot.
0:29:53 You’ll hear this phrase, which is, it’s like a probability-driven pattern matcher.
0:29:57 It just so happens that it’s been trained on the internet, but it’s not really like human
0:29:58 intelligence.
0:30:02 And I agree with that phrasing, which is it’s not like human intelligence, but that does not
0:30:06 mean that all it’s doing is sort of mimicking stochastically all the things I’ve read before,
0:30:11 because in order to do what it does, it is a form of creativity different from what we
0:30:12 normally experience.
0:30:15 That’s kind of thing number one that I disagree with.
0:30:20 Thing number two is, people are thinking, and I disagree with this, oh, the scaling laws are
0:30:23 going to continue forever indefinitely, that the more and more compute we throw at it, the
0:30:26 more knobs we put on the machine, the smarter and smarter it’s going to get.
0:30:29 I think there’s going to be a limit to that at some point.
0:30:30 It’s like nothing goes on forever.
0:30:33 It’s going to asymptotically move towards, we’re going to have to come up with new algorithms.
0:30:36 So GPT can’t be the be-all and end-all of things, right?
0:30:39 There will be a new way, you know, discovered.
0:30:40 So I think that’s going to happen.
0:30:46 But I think the smarter way, and I did not say this, other people have said it, the best way
0:30:53 to kind of think about AI right now is, as you use it, to truly find, find the
0:30:55 frontier of what it’s incapable of.
0:30:59 It’s like, okay, it can sort of do this thing, but not very well.
0:31:04 If, if that’s the way you describe its response, you are exactly where you need to be, which
0:31:08 is if you can sort of do it right now, sort of, if you have to squint a little bit, it’s
0:31:12 like, ah, well, it’s kind of something, but wait six months or a year, right?
0:31:15 Like it’s, uh, that’s the beauty of an exponential curve.
0:31:19 It gets so much better, so fast, uh, that if it can sort of do it now, it will
0:31:20 be able to do it.
0:31:21 And then it’ll be able to do it really well.
0:31:24 That’s the inevitable sequence of events.
0:31:24 That’s going to happen.
0:31:25 Sam, have you heard this about startups?
0:31:30 There’s like a kind of the smart money in startups believes that the right startup to build is
0:31:33 basically the thing that AI kind of can’t do right now.
0:31:39 That’s the company to start today because you just have to stay alive long enough.
0:31:44 Give it the 12 to 18 month runway that it needs for the thing to go from, eh, didn’t really
0:31:46 work very well to like, oh my God, this is amazing.
0:31:50 But you’ve built your brand, your company, your mission, you’ve, your customer, but you’ve
0:31:54 been building that all along the way and you’re basically just betting you’re going to be able
0:31:56 to surf the improvement of the model.
0:31:57 Dude, by the way, that’s, that’s how I feel about my company.
0:32:02 My company is not related to this, but at all, but in terms of like our operations, we’re like
0:32:07 things are very manual and I’m like, oh my God, once I’m able to finally implement AI
0:32:10 when it can work for this purpose, my profit margins are going to go through the roof.
0:32:14 I mean, that’s how I, that’s how I feel about it, which isn’t entirely related
0:32:19 to that, Sean, but a little bit. One, one, one thing I’ll plant out there since, uh, this
0:32:20 is My First Million.
0:32:23 We, we like talking about ideas at a macro level.
0:32:28 Here’s the entirely new pool of ideas that I think are now available, um, on a trend that
0:32:32 I think is inevitable, which is ad agents get better and better, right?
0:32:37 Um, right now, most of us, when we use, uh, use AI, use chat TPD, uh, we use them as tools,
0:32:38 which is great.
0:32:38 Perfect.
0:32:39 Uh, fine.
0:32:43 Uh, over time, you need to shift your thinking and think of them as teammates.
0:32:46 Think of them as that intern that just got hired, right?
0:32:51 Uh, and, and as a result of that, so let’s, let’s assume for a second, let’s stipulate that
0:32:52 I I’m right.
0:32:55 All we don’t know is how long is it going to take for me to be right is that we’re going
0:32:59 to have effectively digital teammates that are part of all of our teams.
0:33:05 Every company is going to someday have a hybrid team consisting of carbon-based life forms and
0:33:07 these kinds of digital AI, um, AI agents.
0:33:07 Okay.
0:33:12 So if you accept that, the way that’s going to happen is not going to be like, all of
0:33:15 a sudden we one day wake up and every organization now starts kind of mixing them.
0:33:18 What’s going to happen is it’s going to slowly introduce this way.
0:33:21 It’s like, oh, I have this one task, whatever that an agent is better at is real.
0:33:23 It’s reliable enough for the thing.
0:33:24 And the risk is low enough.
0:33:24 I’m going to have it do that.
0:33:25 Right.
0:33:28 Well, we already see elements of that, but here’s, what’s going to happen as a result of
0:33:31 that kind of gradual kind of infusion and adoption of that technology.
0:33:37 Uh, the way to win and the opportunities that get created is like, how do I help the world
0:33:40 accomplish this end state that I know is going to come?
0:33:42 So here, I’ll give you some examples.
0:33:48 Uh, if we were to hire, if you, uh, Sam were to hire a new employee, uh, tomorrow, here’s
0:33:48 what you would do.
0:33:52 You would say, oh, well, I’m going to onboard that employee, spend a couple of days.
0:33:55 I’m going to tell them about the business, uh, whoever’s managing that employee, let’s
0:33:59 say with a direct report of yours, maybe you’ll have a weekly one-on-one or every other week
0:34:04 or whatever that one-on-one will consist of, uh, looking at the work they did, whatever’s
0:34:05 like, oh, over here, you did this or whatever.
0:34:06 And it could be copy editing.
0:34:08 It could be anything, whatever the role happens to be.
0:34:09 You’re going to give them feedback, right?
0:34:11 That’s what you do for a human worker.
0:34:18 All of those things have a direct, literally a direct analog in the agent world, right?
0:34:21 And what we’re doing right now is we’re hiring these agents and expecting them to do
0:34:27 magic, just like if we hired an exceptionally smart, uh, has a PhD in everything employee
0:34:30 and expected them to do magic with no training, no onboarding, no feedback, no one-on-one, no
0:34:31 nothing.
0:34:33 Well, your results are not going to vary.
0:34:37 They’re going to be crap, uh, because you did not make the investment in getting that agent.
0:34:40 Now the, the, the big unlock here.
0:34:43 So whether you’re an HR person or whatever, it’s like figure out, well, what does employee
0:34:45 training look like for digital workers?
0:34:48 What do performance reviews look like for digital workers?
0:34:50 How do we do, how do we do recruiting for digital workers?
0:34:53 How do we like, what, what are all the mechanisms that need to exist?
0:34:55 What is a manager of the future?
0:34:58 What are the new roles that will be created as a result of having these hybrid teams?
0:35:01 It’s like, okay, well now maybe we’re going to need someone.
0:35:06 That’s like the agentic manager, human that knows all the agents that are on their team
0:35:10 or whatever, and has kind of built the skillset, uh, how to do recruiting for their team, uh,
0:35:12 how to do performance reviews, how to do all of that.
0:35:16 But for agents or hybrid teams, um, you know, versus just purely human ones, uh, that, that’s
0:35:19 just a whole other, and we’re going to need the software.
0:35:20 We’re going to need the onboarding.
0:35:20 We’re going to need training.
0:35:21 We’re going to need books for it.
0:35:23 And we’re going to need all of it to kind of adopt.
0:35:26 And it’s going to take, uh, it’s going to take years, right?
0:35:32 It’s not, uh, two years ago, I asked you, is it going to be as bad?
0:35:35 Or I think you said, I asked, is it going to be horrible or is this going to be amazing?
0:35:38 And you said, uh, I saw this with the internet.
0:35:41 Nothing is as extreme as the most extreme predictions.
0:35:44 I listened to you and I trusted you.
0:35:49 Then I actually think knowing what I know now, I’m actually more fearful, uh, than I was
0:35:52 a couple of years ago where I’m like, oh, this is actually going to put a lot of people
0:35:53 out of work.
0:35:56 And, um, it’s maybe not good or bad, but things are going to change.
0:35:58 drastically more than I thought.
0:36:04 And my, so I don’t remember how I phrased the question, but is this going to change the
0:36:09 future more than you thought two years ago or less than you thought two years ago?
0:36:11 Um, has your opinion on that changed?
0:36:14 I still think things are going to be unrecognizable.
0:36:19 My, my kind of macro level sense, and this is maybe just my inherent, uh, optimism about
0:36:24 things is that it’s going to be kind of a net positive for humanity.
0:36:27 And this is the other thing that, um, you know, lots of people would disagree with me
0:36:27 on this.
0:36:31 Like, oh, well, is this an existential crisis to the species?
0:36:37 Um, and I’ve not said this before, but I’m going to see how it sounds as the words leave
0:36:38 my mouth.
0:36:38 I’m probably going to regret it.
0:36:43 But in a way we are actually, and Sam, um, Sean, you said this earlier, we’re sort
0:36:45 of producing a new species, right?
0:36:50 So that’s like saying, okay, well, Homo sapiens as they exist, absent AI, is likely not going
0:36:50 to exist.
0:36:55 So the way we know the species as it exists today, where we have a single brain and,
0:36:57 and, in natural form, you know, four appendages or whatever, maybe that’s going
0:36:58 to be different.
0:37:03 Uh, but I think of that as an extension of humanity, not the obliteration of humanity, right?
0:37:08 That’s the, that’s, you know, human 2.0 or n.0, uh, of the way we kind of think of the
0:37:08 species right now.
0:37:12 So I’m, uh, I think things are still moving very, very fast.
0:37:16 And this is the, this is why I think humans have, uh, issues with exponential curves.
0:37:20 We’re just not used to them when something is kind of doubling or, uh, you know, um, every
0:37:25 end months, it’s hard to wrap our brains around how fast this stuff, uh, you know, can move
0:37:31 things that we thought were like the things we have today, Sam, um, if we had just described
0:37:36 them to someone a year and a half ago, they’d be like, ah, well, ChatGPT is cool or whatever,
0:37:37 but it’s never going to be able to do that.
0:37:41 And now we’re like, those are like par for the course, right?
0:37:45 Like we can do like, um, things that were literally like, oh, there’s no way, no way.
0:37:48 It’s like, yeah, it’s good at like texts and stuff like that, but that’s because it’s been
0:37:48 trained on text.
0:37:50 Now it can do images.
0:37:53 Well, it can do images, but, like, video is like 30 frames a second.
0:37:58 That’s like generating 30 images per second of video, like all
0:37:58 of that.
0:38:02 It’s like, yeah, but you know, diffusion models, the way they work is because you’re
0:38:03 not going to get, you get a different image every time.
0:38:04 So how are you going to create a video?
0:38:07 Because it requires the same character, the same setting in subsequent frames.
0:38:09 That’s not how the thing is architected.
0:38:10 That’s not how image models worked.
0:38:12 And we solved all of those things, right?
0:38:15 Now we have character cohesion, setting cohesion, video generation.
0:38:15 Anyway.
0:38:23 So my answer is it’s exactly, not exactly, but it’s close to like, yep, this is what exponential
0:38:25 advancement looks like.
0:38:28 I’m still of the belief that we’re going to have more net positive.
0:38:31 That is not to say that in the interim, there’s not going to be pain.
0:38:34 And there’s two things I’ll put out there as cautionary, cautionary words.
0:38:40 One is in the interim, anyone that tells you that there’s not going to be job dislocation,
0:38:43 there’s not going to be roles that get completely obliterated, is lying to you.
0:38:44 That is going to happen.
0:38:46 It’s already happening, right?
0:38:49 It’s that there is no world in which that does not occur.
0:38:50 That’s kind of thing number one.
0:38:56 Thing number two, and we didn’t talk about this, but we should have, is that because of
0:39:00 the architecture of how LMs currently work, maybe they’ll figure out a way to do that,
0:39:01 they produce hallucinations.
0:39:05 And that’s just a fancy way of saying it makes things up, right?
0:39:10 And that’s sort of okay, but not okay, because it doesn’t know it’s making it up.
0:39:15 Because of the way the architecture works, it’s like the intern that thinks it’s been
0:39:16 exposed to all there is to know in the world.
0:39:17 It’s like, I know all the things.
0:39:18 You’re asking me a question.
0:39:19 I know I know all the things.
0:39:21 So I’m going to tell you the thing that I know.
0:39:23 It was like, well, yeah, but you didn’t know this.
0:39:27 And what you said is actually, factually, like provably, demonstrably wrong.
0:39:33 And it has absolutely zero lack of confidence in its output, which is fine for some things
0:39:36 if you’re writing a short fiction story or something like that.
0:39:40 It’s not great at all for other things like healthcare related where you need kind of
0:39:41 predictable, accurate responses.
0:39:44 So I think we need to be aware of the limitations around it.
0:39:46 when we’re doing research and things like that.
0:39:52 And the problem is when we have relatively, I’ll say naive, I don’t mean this in a disparaging
0:39:58 way, folks that are naive to a subject area asking ChatGPT for things where they can’t judge
0:39:59 the response, right?
0:40:04 We’re sort of taking it on faith that it’s ChatGPT and Dharmesh said it’s got a PhD in everything.
0:40:05 So of course it’s going to be right.
0:40:07 Well, no, it’s often not right.
0:40:12 And it’s kind of up to us to figure out what our kind of risk tolerance is.
0:40:14 It’s like, what is it okay for it to be wrong?
0:40:18 How would I test it for my domain, for my particular use cases?
0:40:23 So you guys know this, but I have a company called Hampton.
0:40:24 Joinhampton.com.
0:40:26 It’s a vetted community for founders and CEOs.
0:40:27 Well, we have this member named Levan.
0:40:32 And Levan saw a bunch of members talking about the same problem within Hampton, which is that
0:40:35 they spent hours manually moving data into a PDF.
0:40:37 It’s tedious, it’s annoying, and it’s a waste of time.
0:40:40 And so Levan, like any great entrepreneur, he built a solution.
0:40:42 And that solution is called Molku.
0:40:47 Molku uses AI to automatically transfer data from any document into a PDF.
0:40:52 And so if you need to turn a supplier invoice into a customer quote or move info from an application
0:40:57 into a contract, you just put a file into Molku and it autofills the output PDF in seconds.
0:41:00 And a little backstory for all the tech nerds out there.
0:41:03 Levan built the entire web app without using a line of code.
0:41:05 He used something called Bubble.io.
0:41:09 They’ve added AI tools that can generate an entire app from one prompt.
0:41:09 It’s pretty amazing.
0:41:13 And it means you can build tools like Molku very fast without knowing how to code.
0:41:18 And so if you’re tired of copying and pasting between documents or paying people to do that
0:41:20 for you, check out Molku.ai.
0:41:23 M-O-L-K-U dot A-I.
0:41:24 All right, back to the pod.
0:41:31 What do you think about this situation where Zuck is throwing the bag at every researcher?
0:41:36 A hundred million dollar signing bonuses, even more than that in comp.
0:41:39 And he’s poaching basically his own dream team.
0:41:41 He’s like, okay, I can’t acquire the company.
0:41:43 Well, why don’t I go get all the players?
0:41:45 You can keep the team, I’ll take the players.
0:41:49 And he’s going after them with these crazy nine figure offers.
0:41:54 A hundred million signing bonus and 300 million over four years, I think is what I saw.
0:41:54 Is that true?
0:41:56 I think that was like the higher, yeah.
0:41:56 So the higher end.
0:42:00 And some people have said there’s even like billion dollar offers to certain people that are
0:42:00 out there.
0:42:02 This is like job offers.
0:42:04 So Dharmesh, like, were you shocked by this?
0:42:07 Because I mean, my reaction to this was that’s bullshit.
0:42:08 First time I heard it.
0:42:10 Then I was like, wait, the source is Sam Altman.
0:42:10 Why would he say that?
0:42:13 And then I was like, okay, that’s insane.
0:42:16 And then an hour later, I was like, wait, that’s actually genius.
0:42:20 Because for a total of 3 billion or something, he can acquire the equivalent of one of these
0:42:23 labs that’s valued at 30, 40, 50, or $200 billion.
0:42:25 What a power play.
0:42:29 I know, obviously, you’re an investor in OpenAI, so maybe you don’t like this.
0:42:31 Maybe you have a different bias here.
0:42:36 But I’m just, from one kind of like leader of a tech company to another, like what’s your
0:42:37 view of this move?
0:42:39 I think it’s one of the crazier moves.
0:42:45 If I had to use one word, I would say diabolical, not stupid, not silly, but diabolical.
0:42:46 And here’s why, right?
0:42:48 This is the, like in the grand scheme of things.
0:42:52 So this is not just a, oh, can we use this technology and build a better product that will
0:42:56 then drive X billion dollars of revenue through whatever business model we happen to have.
0:43:01 There’s a meta thing at play here that says whoever gets to this first will be able to
0:43:05 produce companies with billions of dollars of revenue or whatever, right?
0:43:09 Because that’s, it’s like kind of finding the secret to the universe, the mystery of life
0:43:09 kind of thing.
0:43:14 It’s like, okay, well, whoever wins that and gets there first will then be able to use
0:43:19 the technology internally for a little while and be able to just kind of run the table for
0:43:20 as long as they want.
0:43:22 So there’s, it’s got incalculable value, right?
0:43:27 The upside is just so high that no amount of, like if you can increase your probability even
0:43:31 by a marginal amount, if you had the cash, why wouldn’t you do it, right?
0:43:34 So do you think, A, do you think it’ll work?
0:43:36 Do you think this tactic will work for him?
0:43:38 Do you think he will be able to build a super team?
0:43:41 Is he just going to get a bunch of engineers who now have yachts and don’t work?
0:43:45 Like what’s going to happen when you give somebody hundred-million-dollar offers and you
0:43:49 smash together this team? I think he’s got a hit list of 50 targets.
0:43:53 And I think like, you know, something like 19 or 20 of them have come on board already.
0:43:56 What’s your prediction of how this plays out?
0:43:58 It feels a little bit like a Hail Mary pass, right?
0:43:59 That’s okay.
0:44:00 They’re going to take this.
0:44:01 It’s like, okay, well, there’s not a whole lot of things we can do.
0:44:03 You know, the chips are down.
0:44:04 I’m going to mix metaphors now too.
0:44:07 But that works sometimes.
0:44:09 It works sometimes.
0:44:10 That’s exactly why people do it.
0:44:12 It’s like, okay, what other option do we have, right?
0:44:13 Like everything else hasn’t worked yet.
0:44:15 So let’s try this thing.
0:44:20 But I think the challenge, I still think it’s a diabolically smart move.
0:44:23 I’m not going to use the word ethics or anything like that.
0:44:24 But here’s the challenge though, right?
0:44:28 If we were having this conversation, we’ll call it two years ago, give or take.
0:44:33 OpenAI was so far ahead in terms of the underlying algorithm.
0:44:36 And this is even before ChatGPT hit the kind of revenue curve that it’s hit.
0:44:40 Just raw, the GPT algorithm was just so good and they were so far ahead.
0:44:45 It was actually inconceivable for folks, including me, that others would catch up.
0:44:47 It’s like, okay, well, they’ll make progress.
0:44:48 They’ll get closer.
0:44:50 But then OpenAI is obviously going to still keep working on it.
0:44:52 And they’re going to be far ahead for a long, long time.
0:44:54 That’s proven not to be true, right?
0:44:56 We’ve seen open source models come out.
0:44:57 We’ve seen other commercial models come out.
0:44:58 There’s Anthropic.
0:45:02 And they have, by most measures, comparable large language models, right?
0:45:03 Within like one standard deviation.
0:45:04 They’re pretty good.
0:45:06 And sometimes they’re better at some things, worse at others.
0:45:09 But it’s not this single horse race anymore.
0:45:12 So the thing that I’m a little bit dubious of is that even if you did this,
0:45:15 you pull all these people together, like it didn’t
0:45:18 really work for OpenAI in the true sense of the word, right?
0:45:21 Like they weren’t able to create this kind of magical thing that it’s like,
0:45:23 okay, maybe they end up doing it somewhere else.
0:45:27 But I think there are more smart people out there.
0:45:30 And the technology, DeepSeek kind of proved that you could actually catch up,
0:45:34 and they didn’t even have some exclusive innovation in terms of reasoning models
0:45:37 and things like that versus the early generation of large language models.
0:45:39 So jury’s still out.
0:45:44 How much better is a $300 million,
0:45:48 so $100 million a year, engineer over like a $20 million engineer?
0:45:51 Is it like, I followed some of these guys on Twitter,
0:45:53 and they’re fantastic follows.
0:45:57 And do you think that their IQ is just so much better?
0:45:58 Or is it because they’ve had experience?
0:46:02 Is it really because they just saw how OpenAI works,
0:46:03 and they want that experience?
0:46:05 Are they like, is this like espionage?
0:46:10 What is, how good could a $100 million or $300 million a year engineer be?
0:46:12 Well, that’s the thing, though.
0:46:13 This is software, right?
0:46:17 So this is a, you know, a world of like 95% margins.
0:46:19 So let’s say, yeah, I think part of the value is,
0:46:20 yes, they’re super smart,
0:46:24 but even human IQ asymptotically moves towards a certain ceiling, right?
0:46:26 You take smartest people in the world,
0:46:27 however you want to measure IQ.
0:46:30 And so that doesn’t explain away the value, right?
0:46:31 That’s not that.
0:46:33 It’s not that they’ve seen the inside of OpenAI,
0:46:35 and they have some trade secrets in their head
0:46:36 that they can then kind of carry over.
0:46:37 It’s like, oh, here’s how we did it over there,
0:46:39 and here’s how we ran evals,
0:46:41 and here’s how we did, you know, the engineering process.
0:46:42 They’ll have some of that,
0:46:46 because we always carry some amount of kind of experience in our heads.
0:46:48 I think the larger thing,
0:46:51 I think the primary vector of value
0:46:54 is they sort of have demonstrated the ability
0:46:56 to kind of see around corners and see into the future, right?
0:46:57 They believed in this thing
0:47:00 that almost no one believed in at the time.
0:47:02 They sort of saw where it was headed,
0:47:03 and they were working at it,
0:47:04 chipping away at it, whatever.
0:47:06 And that’s much rarer than you would think.
0:47:09 For really smart people to do this
0:47:11 seemingly stupid, foolish thing,
0:47:13 it’s like, you’re going to do what now, right?
0:47:16 And we’re still asking ourselves a variation of that question
0:47:17 that we would have asked three years ago,
0:47:19 except now we have ChatGPT,
0:47:20 and we have the things in it,
0:47:21 and we’re still like,
0:47:23 well, you say that we’re going to have, like,
0:47:24 these kind of digital teammates,
0:47:25 and they’re going to be able to do all these things,
0:47:27 and it can’t even do this simple thing right, right?
0:47:29 Like, we sort of keep elevating our expectations
0:47:31 of what we believe is or is not possible.
0:47:33 They sort of know what’s possible,
0:47:35 and they almost think of what many of us
0:47:36 would consider impossible
0:47:37 as actually being inevitable.
0:47:39 Have you guys, as HubSpot,
0:47:40 have you made any of these offers?
0:47:41 I don’t think so,
0:47:43 but that’s not the game we’re in, right?
0:47:45 So we’re not in that league.
0:47:46 We’re not trying to build a frontier model.
0:47:48 We’re not trying to invent AGI.
0:47:49 We’re at the application layer of the stack.
0:47:51 So we want to benefit from it, right?
0:47:55 In any layer of my entrepreneurial career,
0:47:58 I have not been the guy at the center of the universe
0:47:59 or the company at the center of the universe.
0:48:00 But you’re not like,
0:48:01 oh, man, I met this person.
0:48:03 Like, we need to offer, like,
0:48:05 an NBA contract in order to secure this guy.
0:48:07 No, and there’s a reason for this, right?
0:48:09 It’s like, for the kinds of problems we’re solving,
0:48:11 what’s the, there’s a sports term
0:48:12 about the best alternative to the player
0:48:13 or something like that,
0:48:14 the replacement cost?
0:48:15 WAR, wins above replacement,
0:48:17 is the metric they use in sports.
0:48:19 So, yeah, it’s just not,
0:48:20 it’s not worth it,
0:48:21 given our business model,
0:48:21 given what we do.
0:48:24 I have one last thing on the kind of AI front.
0:48:25 This is one of the things,
0:48:27 answering your question, Shaan,
0:48:29 in terms of things I disagree with folks on,
0:48:33 is that there’s a group of people,
0:48:35 very smart, that will say,
0:48:37 oh, well, AI is going to lead
0:48:39 to a reduction in creativity,
0:48:40 broadly speaking, right?
0:48:41 Because you’re just going to have AI do the thing.
0:48:42 Why do you need to learn to do the thing?
0:48:44 And I have a 14-year-old, right?
0:48:45 So it’s like, okay, well,
0:48:47 if he just uses AI to write his essays
0:48:48 and do his homework or whatever,
0:48:50 it’s going to kind of reduce his creativity.
0:48:52 And I understand that particular
0:48:54 kind of line of reasoning that says,
0:48:55 yeah, if you just have it do the thing,
0:48:56 you’re not going to.
0:48:58 But I think the part
0:49:00 those folks are missing
0:49:02 is that, you know,
0:49:04 creativity is kind of,
0:49:05 in the literal sense of the word,
0:49:06 is like, okay,
0:49:07 I have this kind of thing,
0:49:08 idea in my head,
0:49:10 and I’m going to express it
0:49:10 in some creative form,
0:49:11 be it music,
0:49:11 be it art,
0:49:13 be it whatever it happens to be.
0:49:15 And the problem right now
0:49:16 is that
0:49:19 whatever creative ideas
0:49:19 we have in our head
0:49:21 are limited
0:49:23 in terms of how we can manifest them
0:49:24 based on our emerging skill set.
0:49:25 So Shaan can have
0:49:27 a song in his head right now
0:49:27 that, like,
0:49:29 he may be composing things in his head,
0:49:31 but until he learns the mechanics
0:49:32 of how to actually play
0:49:33 an instrument,
0:49:34 whatever the instrument happens to be,
0:49:35 there’s no real way
0:49:36 to manifest that, right?
0:49:38 We can’t tap into his brain
0:49:38 and do that.
0:49:40 So in my mind,
0:49:42 AI actually increases creativity
0:49:43 because it will increase
0:49:44 the percentage of ideas
0:49:46 that people have in their heads
0:49:47 that they will then be able
0:49:47 to manifest
0:49:48 regardless of what their skills
0:49:49 are or not.
0:49:51 And I love that.
0:49:51 So my son,
0:49:53 he’s a big Japanese culture fan,
0:49:54 big manga fan,
0:49:57 Japanese comic books and anime.
0:50:00 And so he’s an aspiring,
0:50:02 you know, author someday.
0:50:03 And what he can do now, right,
0:50:04 and he’s been able to do this
0:50:05 for years,
0:50:05 which is,
0:50:07 so he’s always had,
0:50:07 again,
0:50:08 he likes fantasy fiction as well.
0:50:09 So he’s had these ideas
0:50:10 for writing things,
0:50:12 but he lacked the writing skills.
0:50:13 He doesn’t know about character development,
0:50:14 doesn’t know about any of these things.
0:50:16 So what he uses ChatGPT for
0:50:17 is he’s got this,
0:50:17 like,
0:50:18 2,000 word prompt
0:50:20 that describes his fictional world.
0:50:21 Here are the characters.
0:50:22 Here’s a power structure.
0:50:23 Here are the powers people have.
0:50:24 Here’s what you can and can’t do.
0:50:27 And then the way he tests the world
0:50:28 is he turns it into a role-playing game.
0:50:29 It’s like,
0:50:29 okay,
0:50:31 I’m going to jump into the world.
0:50:32 Now you, ChatGPT,
0:50:33 I’m going to do this.
0:50:33 Tell me what happens.
0:50:34 Oh, this happened.
0:50:35 Okay, now I’m going to do this.
0:50:36 Okay, well,
0:50:37 now you’ve got this power.
0:50:38 And so it will sort of
0:50:39 pressure test
0:50:40 his world.
0:50:41 And so that’s an expression
0:50:42 of his creativity
0:50:43 because the world
0:50:44 was sitting in his head,
0:50:45 but now he can actually
0:50:46 share that with friends.
0:50:48 Maybe turn that into a book someday
0:50:48 because it’s going to take
0:50:49 the ideas that he has
0:50:51 and hopefully in the meantime,
0:50:53 he will kind of develop
0:50:54 some of those foundational skills,
0:50:54 but he doesn’t have to wait
0:50:56 until, like, 12 years
0:50:56 of writing education
0:50:58 before he can do something with this idea
0:50:58 he has as a child.
0:51:00 He has lots of creativity,
0:51:02 but as a practitioner,
0:51:03 most of those things
0:51:03 that he would love
0:51:04 to be able to manifest
0:51:05 in the world,
0:51:07 he has nothing close
0:51:08 to the skills required,
0:51:09 whether it’s drawing
0:51:10 or writing or anything.
0:51:11 So I think that’s what
0:51:13 AI can help us kind of elevate.
0:51:14 And once again,
0:51:16 we have to use it responsibly,
0:51:17 but it should be able
0:51:18 to elevate our skills.
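For the curious, here is a rough sketch of that worldbuilding-as-a-role-playing-game pattern in code, assuming the official openai Python package; the WORLD_BIBLE text, the fictional world name, and the model name are all placeholders.

from openai import OpenAI  # assumes the official openai Python SDK (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder for the ~2,000-word world description: the setting, the
# characters, the power structure, what is and is not possible.
WORLD_BIBLE = """The world of Aethoria (placeholder): ..."""

# The world description rides along as the system prompt; the chat history
# is the role-playing session itself.
history = [{
    "role": "system",
    "content": "You are the game master. Stay consistent with this world:\n" + WORLD_BIBLE,
}]

while True:
    action = input("Your move (or 'quit'): ")
    if action.strip().lower() == "quit":
        break
    history.append({"role": "user", "content": action})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is illustrative
        messages=history,     # full history keeps the role-play consistent
    )
    reply = resp.choices[0].message.content or ""
    history.append({"role": "assistant", "content": reply})
    print(reply)

Sending the full history on every turn is what lets the model stay consistent with earlier moves, subject to the context window limits discussed earlier in the episode.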
0:51:19 I want to show you guys
0:51:22 an example of this real quick.
0:51:24 So I had this idea
0:51:25 not long ago,
0:51:25 a couple of weeks ago
0:51:29 of creating a game
0:51:30 using only AI.
0:51:32 So I don’t know
0:51:33 if you guys ever played
0:51:34 the Monkey Island games
0:51:36 from like when I was a kid.
0:51:37 I played Monkey Island.
0:51:38 It was an incredible game.
0:51:39 It’s got basically
0:51:40 this guy wants to be a pirate.
0:51:41 It’s like this very funny,
0:51:43 but like 8-bit art style game.
0:51:45 And so I created a version of that
0:51:47 called Escape from Silicon Valley.
0:51:48 I didn’t create the whole game,
0:51:49 but I created, like, the art,
0:51:50 but, like, check this out.
0:51:52 So I go into AI
0:51:53 and I basically start
0:51:54 creating the game art.
0:51:55 And so it’s like the story
0:51:57 is basically like deep in San Francisco.
0:51:58 The year is 2048.
0:52:00 The block is starting
0:52:01 his third term in office.
0:52:04 You know, Nancy Pelosi passes away,
0:52:06 the richest woman on earth.
0:52:07 And then, you know,
0:52:08 Elon is promising
0:52:09 that self-driving cars
0:52:10 are coming really, really soon
0:52:11 for real this time.
0:52:12 And here you are,
0:52:13 you’re this character
0:52:16 and you’re in the OpenAI office.
0:52:17 And basically the idea is like-
0:52:18 Oh, Charlie, look at that.
0:52:19 What’s that?
0:52:20 Look at the Charlie bar.
0:52:22 Yeah, yeah, exactly.
0:52:23 I was putting in some references
0:52:24 to like, you know,
0:52:24 stuff that I thought was,
0:52:25 it would be cool.
0:52:27 That is so cool.
0:52:27 What did you use
0:52:28 to make those images?
0:52:29 So that right there
0:52:31 was just ChatGPT
0:52:33 and Midjourney mixed in.
0:52:34 I tried using, you know,
0:52:35 Scenario and a couple other
0:52:36 like game-specific tools.
0:52:37 Like check this out.
0:52:38 So like I created
0:52:39 all these like technical characters.
0:52:40 So it’s like I create
0:52:41 Zuck and Palmer Luckey
0:52:42 and like Chamath
0:52:43 and Elizabeth Holmes in jail.
0:52:44 Oh, that is awesome.
0:52:45 And I had it basically
0:52:46 write the scenes
0:52:47 for the levels with me,
0:52:48 like write the dialogue with me,
0:52:50 create the character art.
0:52:51 Dude, that’s like sick.
0:52:52 Why didn’t you do that?
0:52:55 Well, because I did the fun part
0:52:56 in the first two weeks
0:52:57 where I was like,
0:52:58 oh, the concept,
0:52:59 the levels,
0:53:01 the character art,
0:53:01 the music,
0:53:02 seeing what AI could do.
0:53:04 But then to actually make the game,
0:53:06 the AI can’t do that.
0:53:07 And so I was like,
0:53:08 oh, now I need to like,
0:53:10 I mean, people who build games
0:53:11 spend years building it.
0:53:11 It’s like, oh,
0:53:12 this is like minimum
0:53:13 six to 12 months
0:53:14 doing this like very,
0:53:15 very arbitrary project.
0:53:17 But I still love the idea
0:53:18 and I’m going to like
0:53:20 package up the whole idea.
0:53:22 Dharmesh, last question.
0:53:23 Just really quick,
0:53:24 like you,
0:53:26 where do you hang out
0:53:27 on the internet
0:53:29 that we and the listener
0:53:30 can hang out
0:53:31 to stay on top
0:53:32 of some of this stuff?
0:53:33 Like are there,
0:53:34 like who’s a reputable
0:53:35 handful of people
0:53:36 on Twitter to follow
0:53:37 or reputable websites
0:53:38 or places to hang out at?
0:53:40 That’s interesting.
0:53:41 So I spend most of my time
0:53:44 on YouTube,
0:53:45 as it turns out.
0:53:49 And I sort of give in to
0:53:50 the vibe,
0:53:51 so to speak,
0:53:52 and let the algorithm
0:53:52 sort of figure out
0:53:54 what things I might enjoy.
0:53:56 It gets it right sometimes,
0:53:57 it gets it wrong sometimes.
0:53:58 So it’s a mix of things.
0:54:01 But the person that I think,
0:54:03 if you want to kind of get deeper
0:54:04 into like understanding AI,
0:54:05 there’s a guy named
0:54:06 Andrej Karpathy,
0:54:07 I don’t know if you’ve
0:54:08 come across him.
0:54:09 Just search for Karpathy.
0:54:10 Dude,
0:54:10 you don’t want to know
0:54:11 how I know,
0:54:12 like I get so many ads
0:54:14 that say, like, Andrej Karpathy
0:54:16 said this is the best product
0:54:17 or Andrej Karpathy
0:54:18 showed me how to do this,
0:54:19 now I’m going to show you.
0:54:20 Like I don’t even know
0:54:21 who Andrej is
0:54:22 other than ads
0:54:23 that use his name
0:54:24 to promote stuff.
0:54:25 Yeah, I mean he’s
0:54:27 one of the true OGs
0:54:27 in AI,
0:54:28 but he has this orthogonal skill,
0:54:30 or one of them,
0:54:31 I think he’s got like nine,
0:54:33 he’s probably like a nine-tool player of some sort,
0:54:35 but he’s able to really
0:54:37 simplify complicated things
0:54:39 without making you feel stupid,
0:54:39 right?
0:54:41 So he’s not talking down to you.
0:54:41 He’s like,
0:54:42 okay,
0:54:43 like here’s how we’re going
0:54:43 to do this.
0:54:44 We’re going to kind of build it
0:54:45 brick by brick
0:54:46 and you’re going to understand
0:54:48 at the end of this hour and a half
0:54:49 how X works,
0:54:49 right?
0:54:51 And he’s amazing.
0:54:52 So that would be one.
0:54:53 So him,
0:54:54 any other YouTubers
0:54:55 or Twitter people or blogs?
0:54:56 On the business side,
0:54:56 actually,
0:54:58 like Aaron Levie from Box
0:54:59 is actually very,
0:55:00 very thoughtful on the,
0:55:01 if you’re in software
0:55:02 or in business
0:55:04 and the AI implications there,
0:55:04 I think he’s really good.
0:55:06 Hiten Shah,
0:55:07 who you both know
0:55:08 now at Dropbox
0:55:09 through the acquisition,
0:55:11 has been on fire lately
0:55:12 on LinkedIn.
0:55:14 So he’s one I would go back,
0:55:15 especially for the last
0:55:16 like three,
0:55:16 four months
0:55:18 and read all the stuff
0:55:18 he’s written.
0:55:19 I think he’s on point
0:55:19 on the app.
0:55:21 Those are awesome.
0:55:21 Dharmesh,
0:55:22 thanks for coming on.
0:55:23 Thanks for teaching us.
0:55:24 You’re one of my favorite teachers
0:55:26 and entertainers.
0:55:27 So thank you for coming on, man.
0:55:29 My pleasure.
0:55:30 It was good to see you guys.
0:55:30 It was fun.
0:55:31 Likewise.
0:55:31 Thank you.
0:55:32 That’s it.
0:55:32 That’s the pod.
0:55:36 I feel like I can rule the world.
0:55:38 I know I could be what I want to.
0:55:40 I put my all in it
0:55:41 like no day’s off.
0:55:42 On the road,
0:55:42 let’s travel,
0:55:44 never looking back.
0:55:44 All right,
0:55:44 my friends,
0:55:46 I have a new podcast
0:55:47 for you guys to check out.
0:55:47 It’s called
0:55:49 Content is Profit
0:55:50 and it’s hosted by
0:55:52 Luis and Fonzie Cameo.
0:55:53 After years of building
0:55:55 content teams and frameworks
0:55:56 for companies like Red Bull
0:55:57 and Orange Theory Fitness,
0:55:58 Luis and Fonzie
0:55:59 are on a mission
0:56:00 to bridge the gap
0:56:01 between content
0:56:02 and revenue.
0:56:03 In each episode,
0:56:03 you’re going to hear
0:56:04 from top entrepreneurs
0:56:05 and creators
0:56:06 and you’re going to hear them
0:56:07 share their secrets
0:56:07 and strategies
0:56:09 to turn their content
0:56:09 into profit.
0:56:11 So you can check out
0:56:12 Content is Profit
0:56:13 wherever you get
0:56:14 your podcasts.

Want Sam’s playbook to turn ChatGPT into your executive coach? Get it here: https://clickhubspot.com/sfb

Episode 726: Sam Parr ( https://x.com/theSamParr ) and Shaan Puri ( https://x.com/ShaanVP ) talk to Dharmesh Shah ( https://x.com/dharmesh ) about how he’s using ChatGPT.

Show Notes:

(0:00) Intro

(2:00) Context windows

(5:26) Vector embeddings

(17:20) Automation and orchestration

(21:03) Tool calling

(28:14) Dharmesh’s hot takes on AI

(33:06) Agentic managers

(39:41) Zuck poaches OpenAI talent w/ 9-figures

(49:33) Shaan makes a video game

Links:

• Agent.ai – https://agent.ai/

• Andrej Karpathy – https://www.youtube.com/andrejkarpathy

Check Out Shaan’s Stuff:

• Shaan’s weekly email – https://www.shaanpuri.com

• Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents.

• Mercury – Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies!

Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC

Check Out Sam’s Stuff:

• Hampton – https://www.joinhampton.com/

• Ideation Bootcamp – https://www.ideationbootcamp.co/

• Copy That – https://copythat.com

• Hampton Wealth Survey – https://joinhampton.com/wealth

• Sam’s List – http://samslist.co/

My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano
