Author: The Next Wave – AI and The Future of Technology

  • The Cheat Code to AI Content with Roberto Nickson

    AI transcript
    0:00:04 It’s just so interesting to me how far we’ve come with music generation.
    0:00:06 This is kind of like a cheat code for any creator out there.
    0:00:10 Two years ago, none of the stuff existed, and now it’s like central to my workflow.
    0:00:12 And that’s why I think AI is so magic.
    0:00:16 The reason I mentioned it and wanted to bring it up with you is because you had the Kanye thing.
    0:00:19 Was there backlash, good feedback, bad feedback?
    0:00:21 Mostly negativity for sure, man.
    0:00:23 I was getting lambasted.
    0:00:25 I think that was like one of the most viral things in AI that I saw.
    0:00:26 Oh, yeah.
    0:00:28 I knew it was going to go viral.
    0:00:32 I studied this stuff really deeply, like there was so many things in that script
    0:00:34 where I was like, this is going to go nuclear.
    0:00:38 Hey, welcome back to the Next Wave podcast.
    0:00:39 My name is Matt Wolf.
    0:00:41 I’m here with my co-host, Nathan Lanz.
    0:00:46 And with this show, we bring you all of the latest news and information in the AI world
    0:00:51 and have fascinating conversations with the people that are building this AI world.
    0:00:54 And today we’ve got another really awesome conversation.
    0:00:57 Today we’ve got Roberto Nickson on the show.
    0:01:02 Roberto is a serial entrepreneur, and he’s one of the top creators on Instagram
    0:01:04 and TikTok in the AI world.
    0:01:08 You may even know him from his viral Kanye West clone video
    0:01:11 that was all over the internet about a year ago.
    0:01:15 He also recently interviewed Mark Zuckerberg on Instagram
    0:01:17 all about what they’re doing in the world of AI.
    0:01:21 And we have some amazing conversations with him in this episode.
    0:01:25 We talk about the crossover between content and AI.
    0:01:29 We talk about the responsibility we have as content creators
    0:01:33 to keep people the best informed possible in the AI world.
    0:01:39 And we also talk to him about how the heck did he manage to get Mark Zuckerberg on his show?
    0:01:41 So many cool rabbit holes we’re going down in this one.
    0:01:43 So excited to share it with you.
    0:01:45 And let’s dive right in with Roberto Nickson.
    0:01:49 Thank you so much, Roberto, for joining us.
    0:01:50 How are you doing today?
    0:01:51 Dude, I’m pumped to be here, man.
    0:01:54 Next Wave, first of all, congrats on the podcast.
    0:01:55 Been listening to everything.
    0:02:00 I’ve been a like religious follower of you guys for some time.
    0:02:04 And actually, Matt, I was introduced to you.
    0:02:06 I forget how, it must have been about a year ago now.
    0:02:08 And I think I’ve watched every single one of your videos.
    0:02:10 It’s like part of the reason how I keep updated on the space.
    0:02:12 Everybody always asks me, how do I keep updated?
    0:02:15 Like, you’re one of the names that I always put out there
    0:02:18 as well as Lore and Nathan’s newsletter and everything.
    0:02:19 So like, I’m pumped to be here, man.
    0:02:21 I’m excited to talk with you guys.
    0:02:23 Lot to, lot to talk about.
    0:02:24 Yeah, there’s a lot to cover.
    0:02:27 You know, I think I first came across you.
    0:02:29 You had both Instagram Reels and I think TikTok
    0:02:32 were sort of the domain you’ve been playing in, right?
    0:02:34 And I came across your TikTok
    0:02:38 about how you recreated like a Kanye sounding song.
    0:02:40 And that video just went super viral
    0:02:43 and just I followed along to your journey ever since as well.
    0:02:45 So it’s, you know, it’s really cool.
    0:02:48 And I think that’s what we want to get into a little bit today
    0:02:50 is all three of us here are creators.
    0:02:52 We all consider ourselves creators.
    0:02:57 And, you know, that overlap between creator and AI,
    0:02:59 you know, it’s just that Venn diagram,
    0:03:01 that center of that Venn diagram
    0:03:03 is getting bigger and bigger every single day.
    0:03:05 So let’s start with your story a little bit too.
    0:03:07 How did you get into AI in the first place?
    0:03:09 Like, what was, what was the catalyst for that for you?
    0:03:11 Dude, it’s a long story, but I’m going to make it short.
    0:03:15 So basically the last decade I was actually in product.
    0:03:17 So my obsession was UI, user interfaces.
    0:03:21 I’ve been a UI UX designer for some time,
    0:03:23 first started off doing it for agencies
    0:03:27 and then consulting for companies small and large.
    0:03:30 And then I actually got into building iOS apps.
    0:03:32 So for like the last eight years,
    0:03:36 or I would say like 2013 to 2021, that’s all I was doing.
    0:03:37 I was just like obsessed with building
    0:03:39 creativity software for iOS.
    0:03:43 We had like some major years in 2015, 2016.
    0:03:46 We had an app called HitGlab that was a top photo editor,
    0:03:48 the most downloaded photo editor in the United States
    0:03:51 for that year. We had something like 75 million downloads
    0:03:53 across our suite of apps.
    0:03:57 And then sold some of them 2017, 2018,
    0:04:00 and then exited the rest of them in 2021.
    0:04:03 And then I was like, alright, what do I want to do next?
    0:04:03 Right?
    0:04:04 Like I kind of had no idea.
    0:04:07 I found actually web three and it was super interesting.
    0:04:08 So I kind of got into that.
    0:04:10 But then I said, you know what?
    0:04:12 I want to start making content personally.
    0:04:15 ‘Cause for so long I’ve been creating content,
    0:04:17 like every one of my app brands and other brands
    0:04:18 had big Instagram pages.
    0:04:21 That’s been like my domain for like the last 10 years.
    0:04:23 So like really intimately understand
    0:04:24 how to communicate with an audience on Instagram
    0:04:26 and how to grow audiences there and whatnot.
    0:04:28 But it’s always been faceless.
    0:04:30 It’s always been for my apps or faceless media brands,
    0:04:32 some of which I still run today.
    0:04:33 And I decided, you know what?
    0:04:35 I want to do this individually personally.
    0:04:37 And I chose AI because at the time
    0:04:40 we were building this SaaS platform.
    0:04:42 It was called Illuna, for media generation.
    0:04:45 So we’re now pivoting and we’re building this other platform
    0:04:47 called Pulse coming out soon that I’m really excited about.
    0:04:50 And initially when I started making content,
    0:04:51 I was like, I’m going to make content
    0:04:54 ’cause I’m looking for customers for my SaaS.
    0:04:55 That’s it, nothing else.
    0:04:58 I’m going to ride this AI trend just because, you know,
    0:04:59 that’s how we’re going to do it.
    0:05:00 But then a few months into it,
    0:05:02 I fell in love with the creative process, man.
    0:05:04 And like now I’m like doing stuff.
    0:05:06 Some of the stuff doesn’t even have
    0:05:08 like an economic incentive behind it.
    0:05:10 It’s just like me exploring my creative curiosities.
    0:05:13 And I’ve really just fallen in love with the process.
    0:05:15 And now it’s like, it’s just,
    0:05:17 I wake up every day excited to just create
    0:05:21 whether it be business focused or just creatively focused.
    0:05:23 And that’s sort of where I’m at today, man.
    0:05:25 And that’s kind of like the long story cut short.
    0:05:28 (upbeat music)
    0:05:30 – When all your marketing team does is put out fires,
    0:05:32 they burn out fast.
    0:05:35 Sifting through leads, creating content for infinite channels,
    0:05:38 endlessly searching for disparate performance KPIs.
    0:05:39 It all takes a toll.
    0:05:43 But with HubSpot, you can stop team burnout in its tracks.
    0:05:45 Plus your team can achieve their best results
    0:05:47 without breaking a sweat.
    0:05:49 With HubSpot’s collection of AI tools,
    0:05:52 Breeze, you can pinpoint the best leads possible.
    0:05:55 Capture prospects attention with click-worthy content
    0:05:58 and access all your company’s data in one place.
    0:06:01 No sifting through tabs necessary.
    0:06:03 It’s all waiting for your team in HubSpot.
    0:06:04 Keep your marketers cool
    0:06:07 and make your campaign results hotter than ever.
    0:06:10 Visit hubspot.com/marketers to learn more.
    0:06:13 (upbeat music)
    0:06:16 – I feel the same way about video production
    0:06:17 as I feel about AI, right?
    0:06:18 Like for me, like that’s where the,
    0:06:21 like coming back to this sort of Venn diagram metaphor, right?
    0:06:23 That’s where it sort of overlapped for me
    0:06:25 was I loved production, I love video,
    0:06:27 I love all the toys.
    0:06:28 And then I saw AI and I’m like,
    0:06:30 this is just another of that.
    0:06:33 It’s more like toys, but more, more digital,
    0:06:35 more online, more SaaS.
    0:06:37 Dude, I play with so many of them.
    0:06:39 Like every time, I mean, not as much as you,
    0:06:41 like Future Tools, this must be like,
    0:06:43 you must be tired of actually seeing AI tools.
    0:06:47 But yeah, I mean, bro, there’s probably like 15 tools
    0:06:50 that are part of my workflow, I would say right now.
    0:06:52 I’m like using so many.
    0:06:54 And the thing about the space that you guys know is
    0:06:55 you use it for like two weeks
    0:06:57 and then something better comes along.
    0:06:59 And it’s like, things get outdated
    0:07:00 really, really quickly in the AI space.
    0:07:03 So that’s something that, it can get exhausting.
    0:07:06 I really don’t know how you do it curating Future Tools.
    0:07:07 It’s like, it’s just too much.
    0:07:10 – I get sick of seeing the same damn tool
    0:07:12 over and over and over again.
    0:07:13 That’s the frustrating part for me, right?
    0:07:17 Is the people that submit the tools now are mostly like,
    0:07:19 okay, we’ve already seen 15 other tools
    0:07:21 that do this exact same thing.
    0:07:24 Why do you need to go create another version of it
    0:07:26 and try to charge people for it?
    0:07:27 It just doesn’t make sense.
    0:07:29 That’s the part I get frustrated with.
    0:07:31 But when I see something new that I haven’t seen before,
    0:07:35 like, you know, I’m still just as giddy and excited
    0:07:37 about it as, you know, any other thing
    0:07:39 that I’ve seen for the first time.
    0:07:40 – Well, a bit on a tangent.
    0:07:42 Like, ’cause I was kind of guilty of it.
    0:07:44 Like, candidly with Illuna, we had this great idea.
    0:07:45 Stable Diffusion is unbelievable,
    0:07:47 but there’s no good UI for it.
    0:07:49 So we’re like, we’re gonna be the first to build this out.
    0:07:52 But you know, it was like 5,000 other entrepreneurs
    0:07:53 who kind of built the same thing.
    0:07:56 That product went, we hit 25K MRR pretty quickly,
    0:07:59 but it was, we understood very soon, all right?
    0:08:01 We don’t feel like competing against mid-journey
    0:08:04 and Google and Adobe.
    0:08:05 So now we’re building something
    0:08:07 that hopefully people haven’t seen before.
    0:08:09 I’ll let you know when that comes out.
    0:08:10 And you can be honest with me
    0:08:13 if it’s not, you know, good enough for Future Tools.
    0:08:14 Not a problem.
    0:08:15 (laughing)
    0:08:15 – I’m sure it will be
    0:08:17 because you’re probably approaching it from the same way.
    0:08:19 You know, someone like us would approach it as like,
    0:08:21 all right, let’s actually do something
    0:08:24 that you haven’t seen other people doing yet.
    0:08:27 Or, you know, at least put a twist on it, right?
    0:08:29 Like put some sort of twist on it
    0:08:31 where it’s like, okay, maybe it’s art generation,
    0:08:33 but it’s niche to this specific industry
    0:08:35 or something, right?
    0:08:37 – Well, let me ask you guys this,
    0:08:39 ’cause I actually think this is a really interesting question.
    0:08:40 I’d love to get your guys’ take.
    0:08:43 It’s like, are the incumbents just gonna take all?
    0:08:45 Or is there, I mean, like right now,
    0:08:46 people have had head starts.
    0:08:47 Like mid-journey came out
    0:08:49 and to me they’re still best in class,
    0:08:52 but I don’t know if you guys saw Imagen 3 from Google.
    0:08:53 It’s like right there, you know?
    0:08:55 And some of these other tools now,
    0:08:56 even some of these stable diffusion models
    0:08:57 that we see on Civiti,
    0:09:00 they’re like very close to mid-journey’s output.
    0:09:03 And some of these head starts have been kind of erased.
    0:09:06 And I just feel like there is this possibility
    0:09:10 that the incumbents, you know, Google, Meta, Microsoft,
    0:09:12 they’re just, they’re gonna take the whole pie.
    0:09:14 – I don’t know, my feeling is like, you know,
    0:09:16 like when’s the last Google, like new Google product
    0:09:18 that like people actually use?
    0:09:19 I can’t name one.
    0:09:21 I mean, they own YouTube, that’s great.
    0:09:22 You know, they bought that,
    0:09:25 but what have they built that people actually use
    0:09:26 that’s new?
    0:09:27 – Nothing, basically.
    0:09:30 So mid-journey has a lot of users who love the company.
    0:09:32 And so even if like Google releases something
    0:09:33 that’s like similar in quality,
    0:09:35 I don’t think that means everyone’s just gonna jump ship
    0:09:37 to the new Google product.
    0:09:38 I don’t see it.
    0:09:40 – You know, this is actually a conversation we had
    0:09:41 with a mutual friend.
    0:09:43 We had Greg Eisenberg on the show
    0:09:46 and we actually had this conversation with Greg as well.
    0:09:48 And Greg’s point was, you know,
    0:09:51 the tools that managed to build a community around them,
    0:09:54 the tools that managed to build like some sort of brand,
    0:09:56 maybe even in a smaller niche,
    0:09:58 will probably still get some traction
    0:10:00 over the big incumbents.
    0:10:04 I think the general population will probably migrate
    0:10:06 towards the big incumbents, right?
    0:10:08 The Microsofts, the Googles, you know,
    0:10:10 I would say open AI, but, you know,
    0:10:12 Microsoft is just building everything open AI does
    0:10:14 into it anyway.
    0:10:16 But I think, you know, the majority of the population
    0:10:18 will probably move to the big incumbents,
    0:10:20 but the ones that managed to build community around them
    0:10:22 will still manage to get traction
    0:10:25 and build pretty solid businesses around them.
    0:10:28 Will they turn into multi-billion dollar unicorns?
    0:10:29 Probably not.
    0:10:32 And the ones that I think do things really unique
    0:10:35 that do a good job will probably just get scooped up
    0:10:36 and acquired by Google.
    0:10:38 You know, Google has a,
    0:10:41 they don’t really create a lot of new stuff
    0:10:42 that gets people excited,
    0:10:45 but they’re really good at acquiring products
    0:10:47 that once people are excited about it,
    0:10:49 then Google comes in and scoops it up, right?
    0:10:52 You know, the last big products that Google made
    0:10:55 that are really, really still popular to this day
    0:10:59 are pretty much Gmail and Android, right?
    0:11:01 Everything else that’s really popular
    0:11:04 was something that Google came in and acquired.
    0:11:05 – Yeah, I mean, it’s really tough.
    0:11:07 Even if like, okay, even if your model
    0:11:09 is a little bit better, if, you know,
    0:11:11 I’m used to Google search,
    0:11:12 although that’s even been disrupted.
    0:11:13 I’m using Perplexity a lot.
    0:11:16 I’m using Meta AI, I’m using ChatGPT now on my Mac,
    0:11:17 which has been awesome.
    0:11:20 But I think for most people it’s like this behavior
    0:11:23 that they’ve, it’s like muscle memory
    0:11:24 over the last 20 years.
    0:11:26 If like, they’re probably still gonna use Google search,
    0:11:28 even if Proplexity or even, you know,
    0:11:30 if these models like benchmark a little bit higher,
    0:11:32 the average person couldn’t care less, right?
    0:11:35 And so, yeah, man, I mean, it’s interesting.
    0:11:37 I like, I’m not huge on predictions,
    0:11:40 but I love just being on the play-by-play like you guys.
    0:11:43 And so it’ll be fascinating to watch it all play out.
    0:11:45 – I think it’s fun to make predictions
    0:11:46 because the ones that I’m right all, you know,
    0:11:47 six months from now,
    0:11:49 it’s been a round of glory before I was right.
    0:11:49 – You re-surfaced them.
    0:11:50 – Exactly.
    0:11:51 – And then the ones where I was wrong,
    0:11:53 I just never mention them again.
    0:11:55 – Dude, this is so like in finance,
    0:11:56 like all these stock market guys
    0:11:58 where they just every single day
    0:11:59 they predict the market’s gonna collapse.
    0:12:01 And like the one day that it does,
    0:12:02 they ride that wave forever
    0:12:03 and they just build a career off it.
    0:12:04 Totally get it.
    0:12:06 – I actually want to go back to something you mentioned.
    0:12:09 You mentioned that you were using like 15 different tools
    0:12:11 in your creative workflow.
    0:12:13 Let’s dive into that a little bit.
    0:12:15 I’d like to compare notes a little bit
    0:12:17 because I use a lot of AI tools,
    0:12:19 a lot of non AI tools in my creative workflow.
    0:12:21 So yeah, I’m curious,
    0:12:23 what are the tools that you find yourself using
    0:12:25 to actually put the content out?
    0:12:27 – Well, there was a video I made the other day, dude,
    0:12:28 it was such a process.
    0:12:31 It was actually the longest I’ve ever spent editing a video,
    0:12:33 probably took like 12 hours over the course of a week.
    0:12:35 And I was like, what did I get myself into?
    0:12:37 But the process behind that video
    0:12:39 was first creating a lot of images on mid-journey.
    0:12:41 So I’d be like step one.
    0:12:43 Mid journeys upscaler is decent, it’s not the great.
    0:12:45 So then I’d use Magnific.
    0:12:47 Shout out to Javi, I know he’s a mutual friend.
    0:12:50 I think that’s probably the best in class upscaler.
    0:12:53 So that’s mid-journey and then upscaled in Magnific,
    0:12:55 then there was a lot of work in Photoshop to be done.
    0:12:57 If you saw the video, you’d understand what I’m getting at,
    0:13:00 but basically it was like these buildings
    0:13:02 with the windows changing like different colors
    0:13:03 and different things happening in the building
    0:13:04 as it was like zooming in and out.
    0:13:06 So then I’d bring it into Photoshop
    0:13:08 and there was a lot of masking and like manual work
    0:13:10 in Photoshop, but I also found myself using
    0:13:11 a lot of generative fill.
    0:13:16 Like generative fill is a huge part of my process.
    0:13:18 So mid-journey, Magnific, generative fill.
    0:13:20 And then I needed some elements animated,
    0:13:22 so then I’d bring it into runway
    0:13:24 and then I’d like mask out the exact elements
    0:13:28 that I needed animated and I spent some time working there.
    0:13:30 There’s by the way, runway, man,
    0:13:32 if they can like improve that product just a little bit,
    0:13:35 like it’s my favorite UI and like my favorite product,
    0:13:38 but I feel like it’s lagging behind a little bit.
    0:13:40 And so that process alone, it’s like,
    0:13:41 hopefully there’ll be a tool that’ll be able
    0:13:42 to automate all of this.
    0:13:44 And I actually think this is why
    0:13:46 going back to the incumbent conversation,
    0:13:48 I actually think Adobe Photoshop and Premiere
    0:13:50 will be able to do all this stuff.
    0:13:51 And that’s why I was asking,
    0:13:53 maybe I won’t need to use mid-journey, Magnific, et cetera,
    0:13:56 but for now I’m using like those four tools.
    0:13:57 And then I put them together in Premiere
    0:13:59 and then I found myself editing my audio
    0:14:02 in the enhanced audio, which is new to Premiere.
    0:14:05 And so that is an example of five tools
    0:14:08 being used right there to like output this one video.
    0:14:10 And depending on what I’m doing,
    0:14:11 I’m finding myself.
    0:14:14 And then before that, even just research and ideating,
    0:14:17 just conversationally brainstorming with an LLM.
    0:14:18 I mean, like, and it’s crazy, man.
    0:14:20 Like two years ago, none of the stuff existed.
    0:14:22 And now it’s like central to my workflow,
    0:14:24 allowing me to like tell stories
    0:14:25 that I previously wouldn’t be able to tell.
    0:14:28 And that’s why I think AI is so magic.
    0:14:30 (upbeat music)
    0:14:31 – We’ll be right back.
    0:14:33 But first I want to tell you about another great podcast
    0:14:34 you’re going to want to listen to.
    0:14:38 It’s called Science of Scaling, hosted by Mark Roberge.
    0:14:41 And it’s brought to you by the HubSpot Podcast Network,
    0:14:44 the audio destination for business professionals.
    0:14:46 Each week hosts Mark Roberge,
    0:14:49 founding chief revenue officer at HubSpot,
    0:14:51 senior lecturer at Harvard Business School,
    0:14:53 and co-founder of Stage Two Capital,
    0:14:56 sits down with the most successful sales leaders in tech
    0:14:59 to learn the secrets, strategies, and tactics
    0:15:01 to scaling your company’s growth.
    0:15:03 He recently did a great episode called,
    0:15:06 “How Do You Solve for Siloed Marketing and Sales?”
    0:15:08 And I personally learned a lot from it.
    0:15:10 You’re going to want to check out the podcast,
    0:15:12 listen to Science of Scaling,
    0:15:14 wherever you get your podcasts.
    0:15:17 (upbeat music)
    0:15:18 – Yeah, well, Generative Fill is one
    0:15:21 that I feel like doesn’t get enough talk,
    0:15:22 but I use it almost daily.
    0:15:25 I love Generative Fill inside of Photoshop.
    0:15:26 – Dude, it’s magic.
    0:15:28 And it matches the color and like contrast.
    0:15:31 It’s actually, it’s the closest thing to magic
    0:15:33 that I’ve seen in like the AI space.
    0:15:34 It’s like you said,
    0:15:36 it’s underappreciated and under talked about.
    0:15:39 Yeah, one tool that I find myself using a lot more lately,
    0:15:42 and this kind of actually gets into a topic
    0:15:43 that we want to discuss too,
    0:15:46 is I’ve been using Suno a lot because in videos,
    0:15:49 it makes like the perfect music for videos.
    0:15:51 Like one thing I’ve started to experiment with
    0:15:53 is if I’m showing off a long process
    0:15:55 and there’s like a montage going off,
    0:15:58 I almost make like a South Park like,
    0:16:01 or Team America like montage song, right?
    0:16:03 Like here’s my montage of me coding.
    0:16:06 It’ll just be like lyrics that I typed into Suno
    0:16:08 about what I’m doing on the screen.
    0:16:10 And now there’s music playing during the montage,
    0:16:12 explaining what I’m doing, right?
    0:16:15 I’ve been doing that a lot more in videos as well.
    0:16:17 And Suno is really, really impressive.
    0:16:19 But the reason I mentioned it
    0:16:20 and wanted to bring it up with you
    0:16:23 is because you had the Kanye thing.
    0:16:25 And it’s just so interesting to me
    0:16:28 how far we’ve come with music generation.
    0:16:30 ‘Cause I’m assuming back when you did the Kanye thing,
    0:16:33 you had to use like the Soviets, SVC,
    0:16:35 and you probably had to like run it
    0:16:37 through like your terminal on your computer.
    0:16:39 And it was a pretty complicated process.
    0:16:42 I’m imagining, I don’t remember the exact workflow,
    0:16:46 but music generation has just come so damn far since then.
    0:16:49 – I mean, that’s how I first learned about Roberto is like,
    0:16:51 I think that was like one of the most viral things in AI
    0:16:52 that I saw was like you doing this thing
    0:16:56 where you’re singing like basically like training,
    0:16:59 creating a new Kanye song where you sing the song
    0:17:02 and then it goes and it sings it back to you
    0:17:03 with Kanye’s voice.
    0:17:04 And that was just such a magical thing.
    0:17:06 – Well, it helped that Roberto actually
    0:17:07 can kind of rap too.
    0:17:09 Like, if I was to try to do the same thing,
    0:17:11 it would not have come out like that.
    0:17:12 – Kind of is generous.
    0:17:14 Dude, a couple thoughts there.
    0:17:18 One is, yeah, it was a pretty complicated process back then.
    0:17:20 And this is kind of like a cheat code for any creator
    0:17:23 out there is like, and part of the process
    0:17:25 that we’re trying to streamline with Pulse is like,
    0:17:29 I have 30 subreddits that in a folder
    0:17:32 and I have them sorted by rising and new, right?
    0:17:35 And so a lot of the magic is in the rising and new
    0:17:37 that people that never picks up and never makes it
    0:17:39 to like the hot or never makes it to the top of the feet.
    0:17:41 And I saw this thing like some kid did this thing
    0:17:44 with Kanye, he like developed this model
    0:17:46 and he put it on Google collab and it was like this discord
    0:17:48 but I was searching through Twitter,
    0:17:51 nobody was talking about it except for like this small
    0:17:54 subset of people in this like random ass discord.
    0:17:56 And I went in there and I tried it out
    0:17:58 and I ran the Google collab and dude,
    0:18:00 I was like mind blown.
    0:18:02 I said, this is gonna change everything.
    0:18:04 And so I made the video, I’ll put it out there.
    0:18:05 And it was really, I think the first like mainstream
    0:18:07 introduction of this technology to the world.
    0:18:09 And it went mega viral to the point
    0:18:12 where I always tell people I’ve had in the last 18 months,
    0:18:16 like 70 videos do a million plus views on like short form.
    0:18:19 You know, I’ve had five, 10, 20 million view videos.
    0:18:21 But like that’s the only video that I consider viral.
    0:18:25 And the reason why, because that was like every YouTuber
    0:18:26 covered it, like even the big ones,
    0:18:29 Unbox Therapy, penguinz0, Moist Critical,
    0:18:31 like every journalist was reaching out.
    0:18:33 I talked to a lot of like label heads actually,
    0:18:35 like really prominent label heads,
    0:18:38 everybody wanted to understand the technology more,
    0:18:40 like TV interviews, everybody was like reaching out.
    0:18:42 And I said, whoa, this is crazy.
    0:18:47 And then like weeks later it even when there was a kid
    0:18:50 named Ghostwriter, you guys might remember that.
    0:18:53 And he was making, he had a Drake track with the weekend
    0:18:55 where I was like, if this was a real song,
    0:18:57 this would be like in Drake’s top 10.
    0:18:58 Like it was just absurd to me.
    0:19:00 And so yeah, that was a fun time,
    0:19:03 probably definitely the video that put me
    0:19:04 in a lot of people’s radars.
    0:19:06 And I’ve been searching for like that viral,
    0:19:08 viral crazy moment ever since.
    0:19:10 So that’s why I have my ears and eyes always
    0:19:12 to like the AI emerging tech world.
    0:19:14 – It sucks that, you know, Drake started going
    0:19:15 after everyone after that too.
    0:19:17 Like that was one of my first big Twitter threads.
    0:19:19 So I learned about you and I’m like, oh, this is amazing.
    0:19:22 And then the Drake thing came out
    0:19:24 and then the Grimes AI song as well.
    0:19:26 And I did like two big Twitter threads
    0:19:28 where I like, I wrote my thoughts on all like,
    0:19:30 what this means for the future of music.
    0:19:30 – Yeah, I remember that.
    0:19:32 – And those, you know, they went pretty big.
    0:19:35 And then like everyone who was making the Drake threads
    0:19:36 or sharing the song,
    0:19:38 they started getting like takedown notices on Twitter.
    0:19:40 – But yeah, one golden nugget for that real quick
    0:19:43 is like definitely a lot of the crazy stuff happening
    0:19:45 like the tinkerers in the community,
    0:19:46 they’re on Reddit, they’re on Discord.
    0:19:48 They’re not so much on like Twitter, Instagram, YouTube,
    0:19:51 like some of the more mainstream platforms.
    0:19:53 And so keeping your ears and eyes
    0:19:56 to these little subsets and weird little subcultures online
    0:19:59 is how you can find a lot of the stuff
    0:20:01 that’s like starting to bubble.
    0:20:03 – Yeah, there’s been so many videos I’ve made
    0:20:06 that just kind of spun up from like a cool like subreddit
    0:20:08 that I found or from like a random tweet
    0:20:09 that nobody else noticed.
    0:20:11 And I’m like, why isn’t anybody else talking about this?
    0:20:13 – 100%.
    0:20:15 – But I’m curious, was there like backlash,
    0:20:17 good feedback, bad feedback?
    0:20:21 Like what was the general feeling from that video?
    0:20:24 Did you get more sort of negativity as a result of it?
    0:20:27 More positivity, like how did that land?
    0:20:29 – Mostly negativity for sure, man.
    0:20:30 I was getting lambasted.
    0:20:33 I would say, I knew it was gonna go viral, all right?
    0:20:35 Because I’m like, I’m kind of a practitioner.
    0:20:38 I’m like, I study this stuff really deeply,
    0:20:39 like virality on the internet,
    0:20:41 how to create content, how to best.
    0:20:44 And there was so many things in that script
    0:20:46 and some where I was like, this is gonna go nuclear.
    0:20:48 I already knew, I even, I think I tweeted before,
    0:20:51 I’m about to drop a video that’s gonna go super viral.
    0:20:53 Part of the reason why, I didn’t expect, okay,
    0:20:57 so like the woke mob came after me for digital blackface,
    0:21:00 which was like CNN had published an article like right after.
    0:21:03 And so everybody was like, oh, this is racist,
    0:21:07 like a black dude’s voice, white dude singing it.
    0:21:10 Okay, but then the other part was,
    0:21:11 that part was not deliberate.
    0:21:12 I didn’t quite expect that.
    0:21:13 The part that was deliberate,
    0:21:16 like the way that I engineered the lyrics,
    0:21:19 one part where I kind of like took a shot at Kanye,
    0:21:21 or remember he did like the whole anti-Semitism thing.
    0:21:22 And I took a shot at him,
    0:21:26 like for talking down an entire culture,
    0:21:28 and I said it was ignorant, this and that.
    0:21:31 And then another one was I included a lyric,
    0:21:33 and I didn’t mean it for it to be disrespectful,
    0:21:36 but I included a lyric about Donda, like his late mother.
    0:21:37 That was supposed to be like endearing,
    0:21:40 but somebody was like, oh, not only is he digital blackface,
    0:21:42 he’s also talking about his mom as him.
    0:21:44 This is, I think moist critical called it like diabolical.
    0:21:46 So man, I was getting like heat left and right,
    0:21:48 which I kind of expected.
    0:21:49 So it wasn’t too big of a deal.
    0:21:52 But yeah, and then from the artist community,
    0:21:55 it was all, "here they come to steal the virtue
    0:21:56 of human artists," and all that.
    0:21:59 So it was just like negativity straight throughout.
    0:22:01 But I think a lot of the technologists and stuff
    0:22:02 appreciated it.
    0:22:04 But yeah, certainly an interesting time, man.
    0:22:05 That was an experience for sure.
    0:22:06 – Did that linger on?
    0:22:07 Did that continue or?
    0:22:09 – Nah, it just, and that’s another thing with the internet.
    0:22:11 Like there’s so much like blatant,
    0:22:13 not to go on the super tangent,
    0:22:15 but there’s like so much blatant corruption in the world.
    0:22:16 And they’re just like,
    0:22:18 people don’t care anymore because what happens online, man,
    0:22:20 you for 24 hours,
    0:22:22 you get a bunch of people angry on Twitter
    0:22:23 and like writing mean,
    0:22:26 degrading comments on this and then everybody forgets
    0:22:27 and then they’re on to the next thing.
    0:22:30 And so that’s, you know,
    0:22:32 people are more fickle now than ever.
    0:22:34 – How do you think, like, the sentiment around AI music
    0:22:36 has changed because now, you know,
    0:22:40 you’ve got like Udio and Suno and stuff like that.
    0:22:45 And Udio actually has like Common and who else?
    0:22:47 There’s like a few musicians actually attached
    0:22:49 to that product now.
    0:22:51 So it seems like more musicians are getting on board.
    0:22:53 I was actually at the Google IO event
    0:22:56 and Lupe Fiasco was actually there at the event,
    0:22:59 wandering around and I got to talk to him for a minute,
    0:23:00 but, you know,
    0:23:03 he’s actually working with Google on TextFX,
    0:23:06 but now they have a new one called MusicFX
    0:23:09 that they were showing off at Google IO as well.
    0:23:11 And that’s where I actually bumped into Lupe:
    0:23:13 he was actually playing around
    0:23:15 with the MusicFX tools in real life,
    0:23:19 like mixing beats and generating AI music.
    0:23:22 And he was like super blown away, super impressed by it.
    0:23:23 So like, I mean,
    0:23:27 it feels like the sentiment among musicians
    0:23:29 is sort of starting to come around,
    0:23:31 but I don’t know, what are your thoughts?
    0:23:34 – In general, like I actually think about this a lot.
    0:23:36 I think anything that’s purely AI generated
    0:23:38 is actually really boring.
    0:23:40 And I don’t think there’ll ever be a market for it.
    0:23:42 I just don’t, aside from the initial novelty,
    0:23:44 like when we first started seeing Midjourney images,
    0:23:46 you guys probably remember, it was mind-blowing,
    0:23:48 but now I see an image from Midjourney
    0:23:51 and maybe I don’t know it’s AI at first.
    0:23:52 Maybe it looks like an artist,
    0:23:54 but once I learned that it’s purely AI generated,
    0:23:55 I just don’t care.
    0:23:57 Like, and I think as humans,
    0:23:58 we all desire that human element.
    0:24:01 And case in point, like Sam Altman made this point,
    0:24:05 it’s like, hey, we love chess as a game,
    0:24:07 the strategy behind it, all this.
    0:24:10 Robots can outperform humans in chess,
    0:24:12 but we don’t want to watch a robot playing a robot.
    0:24:14 So there has to be that human element,
    0:24:17 and I think it’s the very same in art and in music.
    0:24:19 So if something’s purely AI generated,
    0:24:20 I actually don’t think it’ll ever hit.
    0:24:22 I don’t think there’ll be a market for it.
    0:24:23 Now, there is a caveat.
    0:24:25 There may be a time where literally we don’t know,
    0:24:28 but I do think there just has to be that human element,
    0:24:30 like the story behind the art.
    0:24:34 So I look at AI as a tool, specific to music.
    0:24:37 I look at, you know, like there’s producers
    0:24:40 who sample old songs and recreate them and make beats.
    0:24:43 So I think people will use AI to like create samples,
    0:24:47 to then sort of remix, maybe create drums, drum kits,
    0:24:48 that kind of stuff.
    0:24:50 But I do think there needs to be that human element.
    0:24:52 Otherwise, I just don’t think there’s a lasting market
    0:24:55 for purely AI generated media.
    0:24:57 – Yeah, I actually feel kind of the same way.
    0:25:01 I feel like I’ll hear stuff, be really impressed by it,
    0:25:04 but same thing, when you realize it was made by AI
    0:25:07 and it wasn’t like a human doing something awesome,
    0:25:10 you kind of, it kind of leaves something a little bit there.
    0:25:14 I feel like AI is the best sort of helpful tool out there,
    0:25:17 both for art and music and, you know,
    0:25:20 all of the creative forms, when it comes to writing,
    0:25:22 almost all of us now can spot ChatGPT, right?
    0:25:24 You can read an article and almost immediately go,
    0:25:27 okay, I feel like ChatGPT wrote this article now, right?
    0:25:31 But if you have ChatGPT write you an article,
    0:25:34 and then you go back and sort of see that as a rough draft
    0:25:35 and clean it up and add your own voice
    0:25:37 and add your own comments and opinions,
    0:25:39 now you have something that people actually want to read.
    0:25:41 Same with like music, right?
    0:25:43 Like the stuff that Lupe Fiasco is doing,
    0:25:45 for instance, with text effects,
    0:25:48 he’s helping, they built an AI to help you come up
    0:25:52 with lyrics and alliterations and rhyming words
    0:25:54 and synonyms and all that kind of stuff
    0:25:56 to help with the creative process,
    0:25:57 but then the musician still gets involved
    0:25:59 and creates the music.
    0:26:01 You know, the stuff that Google was showing off
    0:26:02 with their MusicFX,
    0:26:04 it wasn’t actually doing any lyrics
    0:26:05 that sounded like a musician,
    0:26:07 but it was creating really good beats
    0:26:11 by taking styles from this musician and this musician,
    0:26:13 blending them together, creating something novel,
    0:26:16 and then a rapper or a singer can go
    0:26:19 and then put the lyrics over the top of it.
    0:26:21 That’s where I feel like AI really shines,
    0:26:23 is like that sort of co-pilot,
    0:26:25 that tool to help you do the creative thing
    0:26:27 you’re trying to do.
    0:26:28 – I don’t know, I think we’ll see,
    0:26:30 like if it keeps improving, right?
    0:26:33 Like I do see it becoming maybe its own genre of music
    0:26:33 in the future, right?
    0:26:36 ‘Cause like, yeah, we’re talking about like AI music now,
    0:26:38 but like, okay, three years from now, five years from now,
    0:26:40 maybe it’s producing shit that just like blows our minds.
    0:26:44 It’s beyond any human’s ability to create, right?
    0:26:46 And so at that point, I don’t know, we’ll see.
    0:26:48 – Well, here’s an interesting question
    0:26:51 that I’d be curious both of your answers on.
    0:26:53 If, let’s say Drake, for instance,
    0:26:56 really got on board the AI train and let’s,
    0:26:58 I don’t know if you guys are Drake fans or not,
    0:27:00 but let’s assume you guys are Drake fans.
    0:27:02 Let’s say he trained his voice into the algorithm
    0:27:04 and he has like oversight,
    0:27:08 but he’s making AI Drake songs and it’s Drake’s oversight.
    0:27:10 He’s sort of, you know, deciding the beats
    0:27:12 that go underneath it.
    0:27:15 He’s helping steer the lyrics,
    0:27:17 but the song itself is fully AI generated.
    0:27:20 Would that change how you feel about this song?
    0:27:21 – So that’s the tricky part.
    0:27:23 Like if we knew, so a lot of people,
    0:27:26 once they find out Drake has an army of ghost writers
    0:27:28 and this and that, like their affinity for him
    0:27:29 just goes down a little bit as an artist ’cause it’s like,
    0:27:31 oh, he didn’t even write the lyrics, right?
    0:27:35 But that’s the part, that’s the big caveat.
    0:27:38 If we know, so do you guys know Varun Mayya?
    0:27:41 He’s a creator, I think he’s mostly on Instagram,
    0:27:45 but all of his Instagram content,
    0:27:47 all the shorts are AI generated.
    0:27:49 Meaning it’s like his model and then he just goes
    0:27:52 and writes a script and it’s AI voiced, AI generated.
    0:27:54 The script may even be AI-written, but people don’t know that.
    0:27:57 But because he’s a real human that had like this affinity
    0:27:58 in this audience and stuff,
    0:28:00 people don’t really mind, I don’t think.
    0:28:03 Now, if that was a pure AI-generated creation,
    0:28:05 like if that human actually didn’t exist,
    0:28:06 I don’t think it would work.
    0:28:08 But because there was a human and a story
    0:28:11 and a body of work behind it, it sort of does.
    0:28:13 And I think it could be the same for music,
    0:28:16 where it’s like, if Drake, if he has like seven shows
    0:28:17 coming up in the next 10 days,
    0:28:20 but he’s gotta get a song out for whatever reason,
    0:28:22 there may be a chance, like just get my model to do it.
    0:28:25 Nobody will know, if it’s not now,
    0:28:27 it’ll be indistinguishable very soon.
    0:28:30 And that’s kind of the uncanny valley behind it,
    0:28:33 where it’s like, if we know, we’re not gonna like it.
    0:28:37 But we probably will never know now from here on out,
    0:28:40 if something is rapped by an artist
    0:28:41 or just generated by their model.
    0:28:45 And that to me is the part where it’s a little eerie.
    0:28:46 – I think in the future too,
    0:28:48 when like some of these great artists start to pass away,
    0:28:50 there’s gonna be more demand there too, right?
    0:28:53 Technology gets very good, Drake is no longer around.
    0:28:55 There’s still Drake fans.
    0:28:57 And now, like maybe he’s sold his rights
    0:29:01 before he passed away and you can still have him in songs.
    0:29:04 – Yeah, I mean, I wouldn’t be surprised if musicians
    0:29:06 actually start writing that kind of stuff into their will,
    0:29:11 like what happens to my voice IP once I’m gone.
    0:29:14 – But like, to me, it’s still like,
    0:29:16 we can make great Tupac songs right now with his,
    0:29:19 I will just, and he’s my favorite hip hop artist
    0:29:21 of all time, I will still never love them,
    0:29:23 like I love the, ’cause it’s just not,
    0:29:24 there’s something lacking there.
    0:29:25 There’s like, that’s-
    0:29:26 – You could attach a human to it though, right?
    0:29:28 You could do like a collaboration song, right?
    0:29:29 Where it’s like, I’m the new Tupac,
    0:29:32 and now I brought in an AI Tupac into my song.
    0:29:33 – Well, Drake just did that, right?
    0:29:35 And got sued.
    0:29:38 – But even then, I think like the novelty’s cool,
    0:29:40 but once that wears off, there’s no market for it.
    0:29:45 And so my thing is will and will there be regulation
    0:29:46 or what will happen here?
    0:29:49 But like, if I were a label and I wanted to maximize revenue,
    0:29:51 like say I’m representing Drake and he died,
    0:29:56 like, do you then just say these were unreleased vocals
    0:29:57 from when he was alive?
    0:29:58 Because that will hit a lot.
    0:30:01 And so then it becomes like an ethical and moral question,
    0:30:02 maybe legal, maybe regulatory.
    0:30:05 So yeah, man, there’s so many,
    0:30:07 it’s so fun to be at the forefront of all this stuff,
    0:30:08 ’cause there’s so many questions
    0:30:11 and it’s all being sort of written in real time, but-
    0:30:12 – Oh yeah, we found a new Beatles album.
    0:30:15 It just was hidden in a store somewhere.
    0:30:17 – Right, and that would be more impactful
    0:30:20 than we just generated John Lennon’s voice.
    0:30:23 Nobody’s, you know, so that’s where I feel.
    0:30:24 It’s like, we need the human element.
    0:30:27 There won’t be a market for purely AI-generated stuff,
    0:30:30 but a big question, will we even know?
    0:30:31 – Yeah, yeah.
    0:30:32 Well, I mean, a lot of the vocals from these musicians too
    0:30:34 are from their own personal experiences,
    0:30:35 their life experiences, right?
    0:30:37 You know, talking about Tupac, right?
    0:30:40 He lived a pretty crazy life.
    0:30:43 So his songs are all about the life that he lived.
    0:30:45 So songs that came out now
    0:30:48 wouldn’t represent his thoughts, his feelings,
    0:30:50 his life story at all.
    0:30:52 And I think there’s that element to it
    0:30:55 that people just know that this isn’t really him.
    0:30:58 And that just diminishes it by quite a bit, I think.
    0:30:59 – Yeah, I mean, in art theory,
    0:31:01 it’s a concept called the aura.
    0:31:04 So it’s like, hey, there’s artists today
    0:31:07 that can replicate the Mona Lisa to exact perfection,
    0:31:09 but we never, it’ll never hit the same
    0:31:10 because of that aura.
    0:31:13 Like this was a piece of cardboard
    0:31:17 or whatever it was that Leonardo da Vinci himself painted.
    0:31:19 It was in different fires and a war, it almost got destroyed.
    0:31:22 It passed the hands of different monarchs through history
    0:31:24 and like that’s what makes it compelling.
    0:31:26 It’s not like the actual art or the design
    0:31:28 or the way that the colors are strung together.
    0:31:30 It’s the story behind it.
    0:31:31 And I think it’s with all media, all art,
    0:31:35 it’s like we’re drawn to the story behind something,
    0:31:37 piece of art, a piece of media,
    0:31:40 not so much like the actual construction
    0:31:42 of the whatever it is.
    0:31:44 And so I think we’ll see that play out with AI
    0:31:47 where it’s like, yeah, again,
    0:31:48 through our earlier example,
    0:31:50 a purely AI-generated Midjourney image,
    0:31:51 it’s just not interesting.
    0:31:53 I don’t care how cool it is or how impressive.
    0:31:54 It’s like, it’s not interesting.
    0:31:58 I need the story behind the art, the human element.
    0:31:59 – I’m gonna shift gears here for a minute too.
    0:32:01 I wanna talk about a little bit
    0:32:03 about like the ethics of content creation.
    0:32:05 This is something that,
    0:32:07 I mean, I battle with it a little bit.
    0:32:08 It’s not too big of a battle
    0:32:10 because I know who I’m making the content for,
    0:32:12 but there’s this sort of battle
    0:32:15 between creating the content for the audience
    0:32:17 and making them aware of what’s out there
    0:32:18 and what’s not out there.
    0:32:21 But then you also have the creators,
    0:32:23 the creators of the products, right?
    0:32:25 The software companies.
    0:32:27 In my case lately, it’s been the Googles
    0:32:28 and the Microsofts of the world
    0:32:31 who will actually like pay me to fly out to their events
    0:32:33 so that I’ll talk about them in videos.
    0:32:35 And then I have this sort of dilemma.
    0:32:39 Do I, if I’m not impressed with what they’re showing me,
    0:32:41 but they paid me to be there
    0:32:43 to talk about what they’re showing me,
    0:32:45 am I tailoring to the audience?
    0:32:47 Am I tailoring to Google?
    0:32:49 – Did you do that with Google?
    0:32:51 – If you’ve watched my Google videos, you know I don’t,
    0:32:56 but this is something that I wanted to talk to you about
    0:32:59 because it’s something that I’ve struggled
    0:33:00 with a little more recently
    0:33:03 because I’ve had companies come to me
    0:33:06 and offer me like equity to be an advisor in the company
    0:33:09 or who may have reached out
    0:33:12 because they want me to be an investor in their company
    0:33:13 or something like that.
    0:33:16 And for the most part, I’ve said no to almost all of them
    0:33:20 because I worry about if I’ve got skin in the game
    0:33:21 for some of these companies,
    0:33:23 is it going to, you know,
    0:33:27 taint how the audience sees me talking about that product.
    0:33:30 And so it’s something that’s always kind of on the top
    0:33:34 of my mind of, you know, how do I strike this balance
    0:33:36 between the thing that’s gonna make me money
    0:33:40 as an entrepreneur, but also being true to the audience
    0:33:43 and making the best piece of content for my audience.
    0:33:44 – I think it’s super important.
    0:33:46 And then I definitely want to hear your guys thoughts
    0:33:47 because it’s something I struggle with a lot.
    0:33:49 I actually have two recent examples.
    0:33:52 So one was I was doing some work for Rabbit.
    0:33:55 Obviously, you know, disclose paid promo
    0:33:57 before the product actually came out.
    0:34:00 So I did two, I think sponsored posts for them.
    0:34:03 And I actually liked what they were building.
    0:34:05 I thought it was like whimsical, it was charming,
    0:34:06 it was $200, no subscription.
    0:34:08 I was like, there’s a market for this.
    0:34:12 And then I had a paid post sponsorship deal lined up.
    0:34:15 I got the product, I was gonna review it.
    0:34:17 And, you know, make a glowing little,
    0:34:20 not so much a review, but like almost like a commercial
    0:34:21 online, like a short form video,
    0:34:23 TikTok, Instagram, Twitter, et cetera.
    0:34:25 And it was just not something that I was comfortable
    0:34:27 recommending to my audience.
    0:34:28 Not that it’s a bad product.
    0:34:30 I think there could be a future for it.
    0:34:32 I think there could be a market for it, et cetera.
    0:34:36 Like, look, so I didn’t want to like knock Rabbit
    0:34:38 or knock the team, but I canceled the deal.
    0:34:40 I was like, this is not something comfortable.
    0:34:41 And by the way, it was like,
    0:34:42 I’m not like the super rich dude
    0:34:44 where I can just cancel deals, not a big deal.
    0:34:47 Like right now, my creator business sponsorships
    0:34:49 and ad and brand deals and stuff
    0:34:51 probably represents 50 to 60% of my revenue.
    0:34:53 And so, and it was like a pretty well paid thing.
    0:34:56 So it wasn’t like easy to do,
    0:34:57 but I do feel like as creators,
    0:34:59 we got to take those short-term hits
    0:35:02 if we’re playing in decades for that long-term trust.
    0:35:05 Because I do think that, especially in the age of AI,
    0:35:08 trust is gonna be the commodity.
    0:35:10 Like everything else can be replicated.
    0:35:11 Another example was Google.
    0:35:13 Like I was invited to the dinner.
    0:34:14 You went too, Matt.
    0:34:16 I couldn’t make it ’cause I had something to do in LA.
    0:35:17 But I love Google.
    0:35:19 I love the relationship I have with the DeepMind team,
    0:35:21 et cetera, but I didn’t really love the presentation.
    0:35:23 I thought it was lackluster,
    0:35:25 not the technology that they were presenting,
    0:35:26 but the way that they presented it.
    0:35:29 And so I went on threads and I wrote,
    0:35:30 this was really boring.
    0:35:31 Google has some work to do on productizing
    0:35:36 and on showmanship and presentation,
    0:35:37 all that kind of stuff.
    0:35:41 Like I worry that will probably hurt me.
    0:35:44 Like they’ll probably pull back some potential future deals
    0:35:44 because it’s like, well,
    0:35:46 this guy’s talking negatively about us online.
    0:35:48 But it’s like, those are my honest and true thoughts.
    0:35:50 I don’t want to hold them back
    0:35:53 just because I may get paid by them in the future.
    0:35:54 And it’s really tricky, man.
    0:35:57 I mean, like MKBHD obviously has been like super big news
    0:36:00 over the last month on this topic itself.
    0:36:02 And so my answer would be,
    0:36:06 protect the long-term trust with your audience at all costs,
    0:36:08 even if there’s some short-term money
    0:36:09 that you got to leave on the table.
    Yeah, in the early days of my newsletter, Lore,
    0:36:15 I had this sponsor, there was this Korean startup,
    0:36:18 was doing like AI pictures kind of stuff early on.
    0:36:21 And they sponsored the newsletter for like a month,
    0:36:24 paid very well, and then they just like disappeared
    0:36:25 and then didn’t say anything.
    0:36:27 Like didn’t notify me.
    0:34:29 Like I think I maybe even had an ad up,
    0:34:31 like, right when their website went down.
    0:36:34 And then some users like emailed me like,
    0:36:38 hey, I like paid for like a monthly subscription or whatever.
    0:36:40 And like now the website is just like gone.
    0:36:42 And so that was like a first experience for me like,
    0:36:45 oh, like you gotta be careful like which sponsors you take.
    0:36:46 You know, it wasn’t huge money at the time,
    0:36:47 but it was still, it was like, God,
    0:36:48 that’s a horrible experience.
    0:36:51 And then for, and obviously you lose a lot of trust
    0:36:52 by doing that, right?
    0:36:53 And actually, for a while,
    0:36:55 I stopped taking sponsors for that reason.
    0:36:56 – Yeah, no, I’m on the same page.
    0:36:58 I think in what we’re doing,
    0:37:01 the credibility and the trust is everything.
    0:37:02 That is our biggest currency.
    0:37:04 That’s what matters more than anything.
    0:37:07 And you know, when it comes to companies like Google,
    0:37:08 right?
    0:37:10 I’ve made a lot of videos that were critical of Google,
    0:37:13 you know, back when they put out their Gemini promo video,
    0:37:16 I was very, very critical of them kind of hiding the fact
    0:37:18 that this wasn’t real time.
    0:37:19 And I put out a whole video
    0:37:21 about how disappointed I was in Google.
    0:37:23 And I’ve worked with Google in the past
    0:37:25 and I’m working with Google again.
    0:37:28 So the fact that I’m working with Google again now
    0:37:30 shows that they get over it, right?
    0:37:31 At the end of the day,
    0:37:34 if you build a brand that has an audience
    0:37:37 and people are paying attention to what you say,
    0:37:39 these companies are gonna get over the fact
    0:37:41 that you were negative about them once
    0:37:43 because they know that you have the audience.
    0:37:48 You’re the trusted voice that people are checking in on.
    0:37:52 And so, you know, I would say for every one sponsor
    0:37:54 that I have on my channel and in my newsletter,
    0:37:56 there was probably 10 that I said no to.
    0:37:59 I say no, like I leave so much money on the table
    0:38:01 because I look at their product and go,
    0:38:02 I don’t wanna promote this.
    0:38:05 Like I can already do this inside of ChatGPT.
    0:38:07 You just like put a wrapper around it
    0:38:10 and are charging the same amount as ChatGPT.
    0:38:13 Why would I pay 20 bucks a month to do this thing
    0:38:14 when I can do it in ChatGPT,
    0:38:16 plus everything else ChatGPT does?
    0:38:18 Like it just doesn’t make sense to me.
    0:38:19 I’m not gonna talk about it.
    0:38:21 It feels like I’d be pointing my audience
    0:38:24 to something that I don’t think is valuable to them.
    0:38:27 So I just had that philosophy of like the trust
    0:38:30 and the credibility is 100% what we need to live by
    0:38:32 as content creators.
    0:38:33 And as soon as you lose that
    0:38:35 and they start to think of you as like a sellout,
    0:38:38 then it’s all downhill from there.
    0:38:39 And again, these companies,
    0:38:41 if you’ve got the audience, they’ll forgive you.
    0:38:42 They’ll come back.
    0:38:44 – First of all, respect to you.
    0:38:46 And I think that’s why you’re so respected in the space,
    0:38:47 and respect to Google.
    0:38:48 Now I think it’s important for companies
    0:38:50 to take that criticism.
    0:38:53 You know, they gotta take the bad with the good
    0:38:55 ’cause some companies do weaponize it.
    0:38:57 I don’t know if you guys follow like Dr. Disrespect.
    0:38:58 He’s one of my favorite creators.
    0:38:59 I think he’s like one of the greatest entertainers
    0:39:02 in the world, but like very famously
    0:39:03 Call of Duty has blacklisted him.
    0:39:05 He’s not invited to any events.
    0:39:05 He’s not, you know,
    0:39:08 every other creator gets a deal from Call of Duty.
    0:39:09 He doesn’t because he like–
    0:39:12 – He got blacklisted by Twitch also.
    0:39:14 – And he like insults the game.
    0:39:16 And he’s like, these developers are lazy and this and that.
    0:39:17 And he’s like, he’s given his honest thoughts.
    0:39:20 And, but I think longterm that works out even better
    0:39:22 for Dr. Disrespect ’cause he’s way more trusted
    0:39:25 than these other creators who may just be looked at
    0:39:29 as like a NASCAR with 40 logos on them.
    0:39:29 They don’t like,
    0:39:31 you don’t really care what they have to say
    0:39:34 just because you know they’re just up for sale
    0:39:35 to the highest bidder.
    0:39:37 – I just think it’s so important,
    0:39:40 especially as being in an industry
    0:39:42 that’s so sort of uncertain feeling.
    0:39:44 Like I don’t know if you guys feel this way at all,
    0:39:47 but when I started making content around AI,
    0:39:50 this was, you know, somewhere mid 2021 was
    0:39:52 when I really started to make content about AI.
    0:39:54 And then I really ramped it up in 2022.
    0:39:58 But when I was making content about AI in the early days,
    0:40:01 I had zero, zero inclination that this was
    0:40:03 a controversial topic.
    0:40:04 Like this was just me going,
    0:40:06 this is a really fun tool.
    0:40:07 Everybody should know about this.
    0:40:09 Why don’t more people go play with this Midjourney thing?
    0:40:11 Why is nobody talking about this?
    0:40:13 Or like this GPT-3 thing.
    0:40:16 Like you can go play with it on OpenAI’s playground.
    0:40:17 This is prior to ChatGPT.
    0:40:19 You can go play with this on OpenAI’s playground
    0:40:21 and have it write copy for you
    0:40:24 and have conversations with this thing.
    0:40:25 Why aren’t more people talking about this?
    0:40:27 This is so much fun.
    0:40:29 That was like my approach when I started creating content
    0:40:33 around AI was just like, this is just fun stuff.
    0:40:36 I had no clue that this was going to be so controversial.
    0:40:38 And as, you know, more and more,
    0:40:39 I don’t really want to say hate,
    0:40:42 but more and more like anti AI,
    0:40:45 doomerism, that kind of stuff started to bubble up.
    0:40:47 And I started to get more of that kind of stuff
    0:40:49 on my comments and my feed.
    0:40:51 It actually made me go, okay,
    0:40:54 I want to understand why people are so scared of this.
    0:40:57 Why, like, what is the big deal with this?
    0:40:58 What are the fears?
    0:41:00 And so I’ve always tried to take this
    0:41:03 like very empathetic approach of like,
    0:41:06 I really, really think this stuff is cool and fun.
    0:41:09 And I see a lot of use cases for it.
    0:41:10 But I also understand that a lot of people
    0:41:11 are scared of this now.
    0:41:13 And I need to like sort of lean into that.
    0:41:15 And I need to talk about that narrative as well.
    0:41:18 And I need to, you know, help people that have those fears,
    0:41:20 kind of get over those fears.
    0:41:23 And even to some degree talk a little bit
    0:41:25 about the fears that are, in my opinion,
    0:41:28 like actual worthy fears to think about, you know,
    0:41:32 things like the deep fake scams of like the voice cloning
    0:41:35 where people are calling other people using cloned voices
    0:41:36 and scamming them out of money.
    0:41:39 Or, you know, we talked about a story on the show
    0:41:42 about somebody in, I think it was Hong Kong
    0:41:44 that got scammed out of $25 million
    0:41:46 over a deep fake Zoom call, right?
    0:41:49 And so there are like these genuinely scary things.
    0:41:53 And I think it’s important to be very holistic
    0:41:56 with what we talk about because there is that fear
    0:42:00 and not building that trust with people in this time
    0:42:02 where everybody feels so uncertain
    0:42:05 just seems like a very, very short-sighted approach
    0:42:07 to content creation.
    0:42:10 – Yeah, I mean, look, some of the fears are warranted, right?
    0:42:12 Like there are voice actors on Fiverr, for example,
    0:42:14 they’re probably gonna get wiped out.
    0:42:15 So I understand it.
    0:42:17 But another thing that I often say is like,
    0:42:19 the more that we talk about this stuff,
    0:42:21 the more people understand what’s going on.
    0:42:23 And I do think that knowledge is power.
    0:42:27 And, you know, the more we talk about it
    0:42:28 and the more other creators and other people talk about it,
    0:42:31 the less chance it has to be used maliciously
    0:42:32 because now we’re, oh, wait a second,
    0:42:35 I just got called from my aunt demanding money.
    0:42:36 I saw this on Twitter.
    0:42:38 I saw Matt talking about this on YouTube.
    0:42:41 Like, this could be a scam, you’re a little bit more aware.
    0:42:44 And so the impact, the negative impact that it could have
    0:42:45 is gonna diminish.
    0:42:48 So I just think,
    0:42:49 first of all, forget the comments, man.
    0:42:51 Like this is one thing I’ve learned about
    0:42:53 being a creator on the internet is like,
    0:42:55 once you have like a really intimate
    0:42:57 understanding of human psychology,
    0:43:00 you begin to understand why different comments are happening.
    0:43:03 Okay, this comes from fear.
    0:43:06 This is a comment that’s rooted in insecurity.
    0:43:06 Oh, I see this guy.
    0:43:09 And so like all of a sudden you have like this shield
    0:43:11 and invincibility surrounding you once you understand
    0:43:13 like kind of the psychology behind it.
    0:43:16 Because I think talking about this stuff is not a negative.
    0:43:20 I think it’s definitely a net positive because of that.
    0:43:21 Like people have to understand what’s going on.
    0:43:24 This is technology that’s gonna impact everybody’s life.
    0:43:27 And so, you know, kudos to you guys for staying on top of it
    0:43:31 and like covering, like Matt, like your YouTube channel is,
    0:43:32 dude, it’s like an encyclopedia.
    0:43:36 It’s like you can go, it’s like a historical time capsule.
    0:43:36 It’s amazing.
    0:43:37 I really appreciate that.
    0:43:39 And you are as well too.
    0:43:40 I love following you on Instagram.
    0:43:43 Your videos you put out on Instagram are amazing.
    0:43:44 I believe you’re on TikTok as well,
    0:43:46 but I mostly see you on Instagram.
    0:43:47 I’m not really on TikTok.
    0:43:51 I don’t even know if TikTok’s gonna exist in nine months or not.
    0:43:53 But that’s a whole different rabbit hole.
    0:43:56 Real quick though, prediction, Nathan,
    0:43:57 what is TikTok’s fate?
    0:43:59 Like, what prediction?
    0:44:00 Oh, what’s, okay.
    0:44:02 So I’m, you know, I don’t know if you know that.
    0:44:04 So I studied Mandarin in Taiwan.
    0:44:06 So I’m probably biased against China.
    0:44:11 So, so I’m definitely very, you know,
    0:44:12 I’m concerned about China.
    0:44:15 That’s actually one of the big reasons I’m like, you know,
    0:44:17 I don’t know if I’d call myself a member of e/acc,
    0:44:20 but you know, I do want America to win at AI.
    0:44:21 I think it’s very important.
    0:44:24 And so I don’t think China should win at AI.
    0:44:26 I think that’d be very bad for the world and for freedom.
    0:44:30 And so I’m, I think that TikTok will get divested.
    0:44:31 I think that they, like, yeah, sure,
    0:44:33 they’re saying that they won’t do that.
    0:44:34 And the reason they’re saying they won’t
    0:44:36 is because they definitely are,
    0:44:37 they definitely do indirectly have connections
    0:44:39 to the Chinese government.
    0:44:41 Like every, like, like China,
    0:44:42 like most people don’t realize this,
    0:44:44 but like, I think it was like maybe four years ago,
    0:44:45 or something like this.
    0:44:47 You know, I had a, I have a friend in the Chinese government.
    0:44:48 I actually used to spend time out there.
    0:44:49 They tried to get me to move out there
    0:44:52 and set up an office for me maybe 10 years ago.
    0:44:53 And so I got a lot behind the scenes.
    0:44:57 And, you know, probably five years ago,
    0:45:00 the Chinese government like put a member of the government
    0:45:03 on every single board of every major tech company.
    0:45:05 And almost nobody reported on it,
    0:45:08 but like everybody who like knows people in China,
    0:45:09 they all know this.
    0:45:10 It’s like, holy crap, that happened.
    0:45:13 And it, it like happened overnight where all of a sudden
    0:45:16 there’s a board member on every single major tech company
    0:45:18 in China and, and that board member,
    0:45:19 even if they’re just one board member,
    0:45:21 they basically have control of the companies now.
    0:45:24 So, yeah, I don’t think we should have a company
    0:45:25 that has any connection to the Chinese government
    0:45:28 having influence on young people in America.
    0:45:32 And, and, and so I, so my prediction is it will be divested.
    0:45:33 And, and I think that’s a good thing.
    0:45:37 – I do think that probably some company in the US will buy them,
    0:45:40 but they’re really hesitant about giving another company
    0:45:43 access to their algorithm, right?
    0:45:45 So I feel like if that does happen,
    0:45:47 it’ll end up being a watered down version of TikTok.
    0:45:50 And yeah, I don’t really know
    0:45:54 because I’m not somebody that uses TikTok much.
    0:45:56 I’ve posted like three videos ever
    0:45:57 that have never gotten any traction
    0:46:01 and I’ve maybe scrolled the TikTok feed twice my entire life.
    0:46:04 So I don’t know a whole lot about TikTok,
    0:46:06 just other than what I hear in the news.
    0:46:09 But I do think they’ll end up working
    0:46:12 with some American company to, you know, get it sold.
    0:46:14 But then there’ll be some sort of weird thing
    0:46:16 where they don’t actually get access to the algorithm.
    0:46:18 And then it’ll be a watered down version of TikTok.
    0:46:21 And then I think Reels will end up just taking over
    0:46:23 and replacing TikTok anyway.
    0:46:27 But again, that is of, you know, it’s not a very hot take
    0:46:30 ’cause I don’t really know what I’m talking about in that realm.
    0:46:31 – You made an interesting point about Reels.
    0:46:32 Like I’m so bullish on Reels.
    0:46:36 Actually, the Reels business already generates more revenue
    0:46:38 than TikTok’s entire business.
    0:46:41 And I think we’re going to continue to see that as creators,
    0:46:42 you know, sort of start spreading their eggs
    0:46:44 across different baskets.
    0:46:46 ‘Cause I do think it’s going to get,
    0:46:47 I think it’s going to get banned.
    0:46:50 Like maybe just China’s sort of using that as leverage,
    0:46:53 saying that divestiture is not an option, this and that.
    0:46:55 But it is funny, man, it’s like the one thing
    0:46:57 that legislators in the United States
    0:46:59 from the left and the right can agree on.
    0:47:02 And so it’s like, they’re very determined to take TikTok out.
    0:47:04 I do think it eventually happens.
    0:47:07 And, you know, just six months ago, I would have told you no.
    0:47:09 That’s another reason why I’m very bullish on Meta.
    0:47:12 I think they’re the biggest beneficiaries of,
    0:47:14 and YouTube and Google, of course.
    0:47:16 But I think they’re the biggest beneficiaries
    0:47:18 of TikTok’s potential demise.
    0:47:20 – Yeah, it’s funny you mentioned that.
    0:47:22 As soon as Meta started open sourcing,
    0:47:24 like their llama models and stuff like that,
    0:47:26 I started buying up stock in Meta because I was like,
    0:47:28 all right, Meta’s actually,
    0:47:30 Meta’s redemption arc just started.
    0:47:33 And I feel– – Not financial advice.
    0:47:34 – Not financial advice.
    0:47:39 I’m not telling anybody to go buy Meta stock.
    0:47:40 But I went and bought Meta stock
    0:47:42 as soon as they started open sourcing
    0:47:44 because I went, all right, Meta,
    0:47:46 I think Meta’s doing the right thing here.
    0:47:51 I think their redemption story is turning the corner, right?
    0:47:54 I feel like Mark Zuckerberg has gotten some,
    0:47:56 you know, bad press in recent years,
    0:47:57 but I think what they’re,
    0:47:59 the direction they’re taking Meta in
    0:48:02 is the right direction with the open source
    0:48:04 and really putting the focus on AI.
    0:48:08 And, you know, I’m personally still bullish on like VR
    0:48:09 and the Meta quest and stuff.
    0:48:10 I have a Meta quest.
    0:48:11 It is really, really fun
    0:48:13 when you get in there and play with groups.
    0:48:15 So yeah, I’m a fan of Meta.
    0:48:17 – I think the Zuckerberg arc has been hilarious.
    0:48:20 Like, you know, just to see his transition.
    0:48:22 So I got to say hi to him one time,
    0:48:23 but he has no idea who I am.
    0:48:24 He was like at a party,
    0:48:26 at a game industry party way back in the day.
    0:48:28 And, but just seeing how he’s changed,
    0:48:30 he looks dramatically different now.
    0:48:31 And there was a while, you know,
    0:48:33 there was a period where he was like learning Mandarin
    0:48:35 and he was trying to get Facebook into China.
    0:48:37 – I mean, Facebook being banned in China,
    0:48:38 that’s another reason.
    0:48:40 It’s like, hey, the reciprocity here
    0:48:43 is another reason why I’m not super
    0:48:44 against TikTok being banned.
    0:48:46 But yeah, come in and dude,
    0:48:48 I’ve been buying a lot of Meta stock
    0:48:50 and definitely not financial advice.
    0:48:52 Cause if you follow my trade, you’re going to go bankrupt.
    0:48:54 I’m just letting you know.
    0:48:55 But I think if I had a,
    0:48:57 I do think Facebook’s on the way
    0:48:58 to a $2 trillion market cap.
    0:48:59 I think it’ll happen in the next three years.
    0:49:01 And I think Mark Zuckerberg is going to be
    0:49:03 the richest person in the world.
    0:49:03 I just, I mean,
    0:49:05 4 billion people using their products.
    0:49:07 I think again, like,
    0:49:10 I think people still highly undervalue
    0:49:14 the power of Instagram, WhatsApp, Facebook.
    0:49:15 The reality labs and all that stuff
    0:49:17 is still yet to be seen.
    0:49:21 But I think the biggest reason that I’m bullish on Meta,
    0:49:23 and I just interviewed Mark Zuckerberg.
    0:49:25 So obviously like there’s a bias there,
    0:49:26 but I think it’s him.
    0:49:28 Like he’s a founder.
    0:49:30 He’s been there for 20 plus years now.
    0:49:32 He’s the, maybe the most, like he’s a killer.
    0:49:35 Like he’s the most competitive person.
    0:49:36 I think in entrepreneurship.
    0:49:38 Like he wants it all.
    0:49:39 He’s coming for it all.
    0:49:40 He actually just wore a shirt
    0:49:43 for his 40th birthday post that says,
    0:49:45 I think death to Carthage,
    0:49:46 which is basically,
    0:49:50 it’s a modern day sort of rallying war cry.
    0:49:52 And to me represented,
    0:49:53 I don’t have any insider information,
    0:49:55 but to me it represented like,
    0:49:57 he wants to win.
    0:49:58 He wants all the smoke.
    0:49:59 He’s coming.
    0:50:01 Like the guy is kind of like Sam Altman,
    0:50:04 just absurdly competitive.
    0:50:06 And that’s one guy that I,
    0:50:07 much like Tom Brady,
    0:50:09 much like Kobe back in the day.
    0:50:12 It’s like one guy that I could just never bet against.
    0:50:13 – Yeah, yeah.
    0:50:14 Did you watch the roast, by the way?
    0:50:16 – I love the roast.
    0:50:16 And by the way, I have,
    0:50:17 dude, I have an idea.
    0:50:18 Like everybody,
    0:50:21 okay, like Zuckerberg’s arc has been incredible.
    0:50:23 Like he’s been,
    0:50:24 and by the way,
    0:50:27 working with his PR and communications teams,
    0:50:28 I get it.
    0:50:30 Like these people are sharp.
    0:50:32 Like they are so tuned into the culture.
    0:50:33 They’re brilliant.
    0:50:34 I love the whole experience.
    0:50:38 But I think him or Elon or Sam Altman or Sundar,
    0:50:40 one of these guys should have a roast.
    0:50:42 Because if you want to like humanize
    0:50:44 and endear people to a founder,
    0:50:47 like there’s no better way to be humanized
    0:50:49 and endeared than self-deprecation
    0:50:52 and getting shit on for two hours by famous comedians.
    0:50:55 So if anybody’s out there from Zuck, Elon,
    0:50:56 whoever’s team,
    0:50:58 like get one of these guys on a roast, man.
    0:51:00 I think it’s a good move.
    0:51:01 – That’s such a great idea.
    0:51:03 But speaking of Zuck,
    0:51:04 is there any like,
    0:51:06 what’s the story behind how you got them?
    0:51:07 I don’t know if that’s something
    0:51:08 you’re open to sharing or not,
    0:51:09 but is there a story there?
    0:51:12 Like how did you actually land Zuck for an interview?
    0:51:13 – Yeah, man.
    0:51:15 I mean, the first thing I’ll say is a lot of luck involved.
    0:51:18 But then I also say that kind of just a PSA
    0:51:19 to anybody listening,
    0:51:20 you can create your own luck.
    0:51:23 So basically what I think happened,
    0:51:26 I don’t know what haven’t asked his team, why me,
    0:51:30 but I’ve been creating content now about emerging tech
    0:51:33 on Instagram for over two years now,
    0:51:35 not necessarily on my page,
    0:51:36 but on my media pages.
    0:51:37 One is called metaverse,
    0:51:38 second is a three,
    0:51:41 the other one aluna.ai.
    0:51:44 And I’ve put a lot of work into that, man.
    0:51:45 Like a lot, a lot of work,
    0:51:46 especially early in the day,
    0:51:48 I’d have like these long,
    0:51:49 you know, 10-post carousels,
    0:51:50 so much information,
    0:51:52 data rich, really well designed.
    0:51:53 And it obviously caught the eye
    0:51:56 of a lot of people at Meta.
    0:51:58 And I’ve befriended,
    0:52:01 dude, this is another reason why I bought so much stock.
    0:52:03 Like I have so many friends at Meta now
    0:52:05 and the teams there are just so talented, man,
    0:52:08 across like all of their endeavors.
    0:52:11 And I’m doing a lot of work with Meta over this year,
    0:52:12 like we’re going to Cannes in a month
    0:52:14 and like got a lot of stuff going on,
    0:52:16 but I’ve befriended a lot of people at Meta
    0:52:17 and I’ve just like developed
    0:52:19 this really great relationship with them.
    0:52:21 And they always hit me up and it’s like,
    0:52:22 Hey, we got some news drop
    0:52:23 and do you mind like covering it,
    0:52:25 maybe making a video putting on metaverse or whatever.
    0:52:27 And I’m like always happy to do it
    0:52:29 just because I love this stuff.
    0:52:32 And I’ve never asked for anything in return.
    0:52:35 So like my intuition says that
    0:52:37 because I’m like an Instagram first creator
    0:52:39 and I’ve been like doing so much for them
    0:52:41 for two years now without ever asking
    0:52:42 for anything in return,
    0:52:44 I think it’s almost like this karma
    0:52:46 that was just put out there
    0:52:47 and they sort of honored me
    0:52:50 and rewarded me with it for that.
    0:52:51 That’s my guess.
    0:52:54 I don’t know for sure,
    0:52:56 but I just tell people, yeah, luck,
    0:52:57 but you can create your own luck.
    0:52:59 I think the last two, two and a half years
    0:53:02 creating on Instagram has kind of proven that.
    0:53:03 – You know, that actually leads me to something
    0:53:05 that I wasn’t planning on asking you,
    0:53:07 but now I’m curious ’cause you brought it up.
    0:53:11 Like how does like the monetization on Instagram work,
    0:53:14 is it mostly like sponsors sponsoring your content
    0:53:16 or does meta actually pay you similar
    0:53:18 to like how YouTube and Twitter does now?
    0:53:20 – No, definitely not off platform.
    0:53:22 And like that’s another fascinating conversation
    0:53:24 that we can get into.
    0:53:26 The economics of why it’s,
    0:53:28 even though TikTok with the creativity programs doing it,
    0:53:29 but I think they’re subsidizing a lot.
    0:53:31 I don’t think that’s a longterm thing,
    0:53:34 but you know, YouTube, the magic of YouTube is like,
    0:53:36 hey, you have pre-roll, you have interstitial,
    0:53:38 you have like all these ad placements.
    0:53:40 And because it’s an eight plus minute video,
    0:53:42 20 minute, in your case, sometimes 30, 40 minute video,
    0:53:46 it’s like they can attribute that direct ad revenue
    0:53:48 to the creator bringing in these people.
    0:53:50 On Instagram, it doesn’t, Instagram and TikTok,
    0:53:52 like when I say a lot,
    0:53:54 people are more fans of the platform
    0:53:55 than they are the creator.
    0:53:57 And on YouTube, people are more fans of the creator
    0:53:59 than they are necessarily the platform.
    0:54:02 So like TikTok, Instagram’s doing the heavy lifting,
    0:54:04 getting the discovery, getting people to use,
    0:54:06 and making their app sticky, getting people to use it.
    0:54:07 And like you’re a beneficiary of that,
    0:54:10 but it’s hard to pinpoint and attribute,
    0:54:12 you know, where the revenue should be directed.
    0:54:14 So like I understand why they don’t pay directors,
    0:54:16 or sorry, creators directly.
    0:54:19 But yeah, all of my revenue on Instagram,
    0:54:20 I just use that as discoverability.
    0:54:22 And then the revenue happens third party,
    0:54:24 whether it be sponsors, ads, paid communities,
    0:54:26 whatever it might be.
    0:54:29 But yeah, they don’t pay creators, well they do,
    0:54:30 but it’s not a lot.
    0:54:33 Like I think one month I had like 30 million views
    0:54:35 and I got paid like $32.
    0:54:37 (laughing)
    0:54:39 – Twitter actually pays better than that.
    0:54:40 (laughing)
    0:54:43 – Dude, dude, everything pays better than that.
    0:54:44 But yeah, that’s what I say.
    0:54:46 Like Instagram’s a discovery platform
    0:54:49 with relationship components built in.
    0:54:51 But if you’re looking to get paid direct from platform,
    0:54:54 I mean YouTube’s a place to be 100%.
    0:54:56 – Well, I think, you know,
    0:54:58 I think we could probably have
    0:55:00 like another hour long conversation,
    0:55:02 but I think we’ll have to have you back on, you know,
    0:55:04 as one of our more regular guests,
    0:55:06 ’cause I feel like we can probably nerd out frequently
    0:55:07 about whatever is going on.
    0:55:10 But you know, I do wanna be respectful of your time.
    0:55:12 – Dude, I could go on for hours and hours.
    0:55:14 So anytime you guys need like a third,
    0:55:16 even like co-host or whatever you just wanna riff,
    0:55:18 I’m always available.
    0:55:21 But I do wanna leave with a question.
    0:55:22 What do you guys think?
    0:55:25 WWDC, I think this episode will go out before then.
    0:55:28 So all eyes on Apple.
    0:55:29 Just we don’t have to get into it super,
    0:55:33 but from one to 10, like how big of an impact
    0:55:36 or how impactful is that event gonna be,
    0:55:38 you guys think from one to 10?
    0:55:40 – Three.
    0:55:42 So they just now inked the deal with OpenAI.
    0:55:43 So that tells me two things.
    0:55:45 Like if they’re partnering with OpenAI,
    0:55:45 that means I mean,
    0:55:48 I think they’re gonna be quite reliant on OpenAI.
    0:55:51 And so I doubt that we’re gonna see a major Siri update
    0:55:53 ’cause they’re probably relying on OpenAI for that.
    0:55:55 So I think a month is too soon
    0:55:58 for them to have something amazing and new with Siri by then,
    0:55:59 but who knows?
    0:56:01 Maybe this has been in the works and yeah.
    0:56:04 – Unless we saw a sneak peek of what Siri is gonna be
    0:56:08 on Monday with the OpenAI GPT-4o demo.
    0:56:09 – Yeah.
    0:56:11 – That could be what Siri is.
    0:56:13 – Yeah, that could be.
    0:56:16 That’s gonna, yeah, that’d be like a huge alliance
    0:56:18 between OpenAI, Microsoft, and Apple.
    0:56:20 And then I guess you’ll have, you know,
    0:56:23 Amazon will be, you know, partnering with Anthropic.
    0:56:24 – I don’t know if Microsoft’s really gonna be a piece
    0:56:25 of that deal. (laughs)
    0:56:28 I don’t think Microsoft and Apple still have much love
    0:56:31 for each other, but I don’t know, it’ll be interesting
    0:56:34 because it feels like Microsoft and OpenAI are starting
    0:56:35 to make moves to sort of separate
    0:56:37 from each other a little bit, right?
    0:56:39 – Yeah, but certainly that’s very hard.
    0:56:41 Like unless OpenAI comes out with AGI.
    0:56:42 – Which you know.
    0:56:44 – And they determine what AGI is.
    0:56:45 So that’s a whole another.
    0:56:47 – Like I said, we’ll do another episode for all this,
    0:56:49 but yeah, Matt, one to 10 out of curiosity.
    0:56:54 You, like where are your expectation levels for WWDC?
    0:56:55 – I, not as great.
    0:56:57 The expectation was a lot bigger last year
    0:57:00 ’cause we knew Vision Pro was coming this year.
    0:57:02 I feel like the big thing they’re gonna be talking about
    0:57:06 is, you know, whatever updates they make to Siri.
    0:57:08 And I think it’s gonna be stuff
    0:57:10 that we’ve already seen before, right?
    0:57:13 It’ll be new to Apple, but those of us that are in the AI
    0:57:15 world paying attention, you know,
    0:57:17 that have seen a few things already.
    0:57:18 – Yeah.
    0:57:20 – Gonna feel like nothing new to us.
    0:57:23 That’s kind of what I’m expecting.
    0:57:26 – So my prediction is like you just said,
    0:57:30 it’s what we saw with OpenAI’s GPT-4o,
    0:57:32 and with Google’s Project Astra.
    0:57:34 I think that was a sneak peek,
    0:57:36 sort of a preview of what we’re gonna see with iOS,
    0:57:38 where it’s like Siri’s gonna be revamped.
    0:57:40 It’s gonna be powered by, you know,
    0:57:42 some sort of omnimodal LLM that takes,
    0:57:45 you know, that uses a lot of the hardware and the device.
    0:57:47 And I think it’s basically going to power
    0:57:50 the entire ecosystem of Apple devices.
    0:57:53 But the difference is they’re gonna present it
    0:57:55 in such a spectacular way.
    0:57:56 Like I made this point where Google,
    0:57:58 it’s like they’re presenting this incredible technology,
    0:58:00 but they don’t make us excited about it.
    0:58:04 Where Apple sometimes presents like a feature
    0:58:06 that nobody ever uses or nobody ever will use,
    0:58:07 but we’re like, oh my God,
    0:58:09 it’s the coolest thing I’ve ever seen.
    0:58:11 And so even if they just announced sort of what we
    0:58:14 already expect and what we already have seen,
    0:58:16 the way that they will announce it
    0:58:17 is gonna get people psyched.
    0:58:19 So I’m like, I’m more interested in the presentation,
    0:58:21 the cinematography, the visuals.
    0:58:22 So I’m really excited, man.
    0:58:24 I have a lot of, I have a high expectation,
    0:58:27 so I’m just setting myself up to be let down,
    0:58:28 but we’ll see.
    0:58:29 – I think behind the scenes
    0:58:30 some really big things are probably happening,
    0:58:32 ’cause like open AI partnering with Apple,
    0:58:33 and then Apple’s been saying that they’re building
    0:58:35 all these new chips that are getting better and better
    0:58:37 for AI, you know, and you combine that with Sam Altman
    0:58:40 saying that the big limitation on AI right now
    0:58:42 is chips and having more chips
    0:58:44 and not just relying on Nvidia, you know,
    0:58:45 I wouldn’t be surprised if there’s some huge alliance
    0:58:48 going on there where like open AI and Apple
    0:58:50 are gonna collaborate on new AI chips
    0:58:51 or something in the future.
    0:58:54 – Well, yeah, the M4 may be a puzzle piece there,
    0:58:56 but all really interesting, man.
    0:58:58 I’m really excited for WWDC.
    0:59:00 And for your video covering it, Matt,
    0:59:02 I’m gonna be the first one that I watch.
    0:59:05 – Yeah, I won’t actually be at WWDC this year,
    0:59:07 but yeah, no, I’m excited to watch the stream
    0:59:08 and talk about it.
    0:59:09 – It’s better when you’re not there,
    0:59:10 ’cause then you’re in the lab like ready
    0:59:11 to crank something out.
    0:59:13 – Yeah, yeah, yeah.
    0:59:15 I mean, going to these events is really cool
    0:59:17 because you actually get to meet the people in the company.
    0:59:22 And I made a point in my video about the Google I/O event
    0:59:28 is that it’s like really easy to sort of think
    0:59:30 of these Googles and Apples and Microsoft
    0:59:32 as these big faceless companies
    0:59:33 that they just want your data
    0:59:35 and they don’t really care about the individual.
    0:59:37 But then you go to these events
    0:59:38 and you talk to the engineer
    0:59:40 that built this one feature
    0:59:42 that they’re gonna be announcing on stage today.
    0:59:44 And that one engineer is like nervous
    0:59:47 and anxious and excited that they’re one thing
    0:59:51 that they made is being presented today
    0:59:53 and they can’t wait to see how it’s being perceived.
    0:59:56 And they’re all giddy and nerdy and excited
    0:59:58 about this piece that they worked on.
    1:00:02 And you start to realize the humanity at these companies.
    1:00:03 You start to realize that Google
    1:00:05 isn’t a giant faceless company.
    1:00:07 It’s made up of people that are just as excited
    1:00:10 and nerdy about all of this stuff as we are.
    1:00:13 But on the news and all of that,
    1:00:16 you just see it as this giant faceless corporation.
    1:00:18 And I think putting faces to these companies
    1:00:21 and having that sense of humanity behind
    1:00:24 like the people actually engineering these things
    1:00:26 is really, really powerful.
    1:00:29 And I’ll make YouTube videos talking about
    1:00:30 like the latest thing Google just dropped
    1:00:32 and like half the comments,
    1:00:33 I know we need to ignore the comments
    1:00:35 but half the comments will be things like,
    1:00:36 oh, Google’s evil.
    1:00:38 Why are you even talking about them?
    1:00:40 Oh, they’re all just trying to harvest our data.
    1:00:41 They don’t care about you.
    1:00:42 But going to these events,
    1:00:46 you realize that’s not really fully the truth.
    1:00:47 And so–
    1:00:50 – That is so well said, man.
    1:00:51 I’ll end it with this,
    1:00:52 like Mark Zuckerberg was very similar.
    1:00:55 Like I have, I’m living in Charlotte right now.
    1:00:58 I have like a lot of family who are kind of like
    1:01:00 suspicious of Mark Zuckerberg.
    1:01:02 – Like he’s a lizard person or whatever.
    1:01:03 – Yeah, you see these comments.
    1:01:04 Isn’t he like Illuminati?
    1:01:06 Like trying to just like ruin the world.
    1:01:08 And I’m talking to the guy and he’s just like,
    1:01:10 the craziest part about interviewing him,
    1:01:12 he’s like, it was like talking to you guys.
    1:01:13 He’s just a bro.
    1:01:14 Like he’s just one of us.
    1:01:17 He loves, I mean with $150 billion in his bank account,
    1:01:20 but he really loves this stuff, man.
    1:01:22 He’s like a tinkerer at heart.
    1:01:24 Like he was so personable.
    1:01:27 I was like, he’s going to probably be in a rush
    1:01:30 or maybe like, you know, like here’s another funny thing.
    1:01:32 Like I looked it up last year, you know,
    1:01:35 his net worth ballooned, and it amounted
    1:01:38 to nine and a half million dollars per hour that he made.
    1:01:40 So I was like, okay, so my 40 minutes with him
    1:01:42 is worth like $7 million for the guy.
    1:01:44 He’s going to be like in a rush, like hurry up.
    1:01:46 But we got to chat like five minutes before,
    1:01:49 five minutes after, like he even wanted to stay longer,
    1:01:50 just like shooting the shit about the UFC.
    1:01:51 But his team was like, all right,
    1:01:52 we got to go next meeting or whatever.
    1:01:56 But he was just so weirdly normal.
    1:01:57 – Yeah, yeah.
    1:01:59 – And I was kind of off put by that.
    1:02:01 I was expecting somebody just kind of like annoyed
    1:02:03 that he had to talk to me.
    1:02:05 But yeah, man, it was a really cool experience.
    1:02:07 So to your point, like these people working
    1:02:09 at these companies are actual human beings
    1:02:10 who love what they’re doing.
    1:02:13 They’re not trying to destroy humanity.
    1:02:15 – Well, thanks again for hanging out with us today
    1:02:18 and nerding out with us. We’ll have to do it again.
    1:02:20 – That was a lot of fun, guys.
    1:02:21 I love this.
    1:02:21 That was awesome.
    1:02:24 (upbeat music)
    1:02:27 (upbeat music)
    1:02:29 (upbeat music)
    1:02:32 (upbeat music)
    1:02:34 (upbeat music)

    Episode 9: How do you maintain trust and authenticity while exploring the world of AI content creation? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) are joined by AI enthusiast Roberto Nickson (https://x.com/rpnickson), a product designer and iOS app developer turned AI content creator.

    In this episode, they delve into the importance of humanizing the faces behind big tech companies, explore the future of AI-generated music, and discuss the balance creators must maintain between financial opportunities and credibility. The discussion also touches on the potential of reels overtaking TikTok and the exciting prospects of Meta’s advancements in AI.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Social media, music, and AI influence creators.
    • (03:41) Roberto working on PicLab app top photo editor.
    • (08:51) Small niche tools gain traction over big incumbents.
    • (10:16) Difficulty adapting to new search models expected.
    • (13:59) Using Suno for creating perfect video music.
    • (16:03) Kid develops groundbreaking technology, shares on Discord.
    • (20:55) AI music sentiment positive as musicians engage.
    • (23:18) AI creates, but human touch adds value.
    • (27:58) New technology raises ethical, regulatory questions.
    • (30:21) Content creation ethics: Audience or sponsor focus?
    • (32:55) Canceled deal due to discomfort.
    • (37:59) Matt not realizing AI was controversial.
    • (38:56) Navigating controversy surrounding AI and addressing fears.
    • (43:57) Potential TikTok acquisition in the US uncertain.
    • (45:41) Positive outlook on Meta’s open sourcing strategy.
    • (50:31) Building relationship with Meta through hard work.
    • (52:18) Social media platforms prioritize creator engagement over revenue.
    • (55:46) Revamped Siri to power Apple devices ecosystem.
    • (58:25) People excited about tech, humanize big companies.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • Why Google Search Isn’t Going Anywhere Anytime Soon | Bilawal Sidhu

    AI transcript
    0:00:01 (upbeat music)
    0:00:03 – I’m still very bullish on Google
    0:00:05 because I think it’s like the tip of the iceberg
    0:00:07 of what you see in tech companies
    0:00:09 and the submerged part is just amazing.
    0:00:10 They’ve got the most complete models
    0:00:12 of the physical and digital world.
    0:00:15 They’ve got ubiquitous distribution
    0:00:17 and an existing ecosystem of ads and sales
    0:00:19 to plug monetization in.
    0:00:21 So I think it’s still a magical combination.
    0:00:27 – Hey, welcome to the Next Wave podcast.
    0:00:28 I’m Matt Wolf.
    0:00:30 I’m here with my co-host, Nathan Lands.
    0:00:33 And with this podcast, it is our goal
    0:00:36 to keep you looped in with all of the latest AI news,
    0:00:38 the latest AI tools,
    0:00:41 and just help you keep your finger on the pulse
    0:00:43 so that you are prepared for the next wave of AI
    0:00:44 that’s coming.
    0:00:47 And today we have an amazing guest on the show.
    0:00:50 We have Bilawal Sidhu on the show.
    0:00:52 He is the host of the TED AI podcast.
    0:00:54 He’s an ex-Googler.
    0:00:56 We’re gonna talk to him about what it was like
    0:00:59 working on AI and visual effects over at Google.
    0:01:02 We’re gonna talk about the difference
    0:01:04 between whether or not we should be accelerating AI
    0:01:06 or slowing down AI.
    0:01:08 We’re also gonna learn about how some
    0:01:11 of these AI visual effects tools work
    0:01:13 because this is the field that Bilawal worked in
    0:01:14 for so long.
    0:01:15 It’s an amazing episode.
    0:01:17 You’re gonna learn a ton
    0:01:18 and can’t wait to share it with you.
    0:01:20 So let’s jump on in with Bilawal Sidhu.
    0:01:24 Thanks so much for being on today, Bilawal.
    0:01:25 – Thanks for having me, gentlemen.
    0:01:26 Pleasure to be here.
    0:01:29 – Yeah, so I wanna just kind of dive right into it.
    0:01:32 Your background is Google, right?
    0:01:35 So I think when you and I first connected
    0:01:38 and we first started having some chats over Twitter DM,
    0:01:41 you were still actually working over at Google at the time
    0:01:43 and you also were kind of doing a creator business
    0:01:45 on the side with your YouTube channel
    0:01:46 and everything you had going on.
    0:01:48 But Bilawal Sidhu, what were you doing over at Google?
    0:01:49 What was your role there?
    0:01:52 You know, what was your experience like over there?
    0:01:53 – Gosh, yeah, it was awesome.
    0:01:55 So, you know, I’ve spent a decade in tech,
    0:01:57 six years at Google.
    0:01:58 I’ve been able to work on projects
    0:02:00 that blend the physical and digital world.
    0:02:03 And I started off in the AR/VR team really
    0:02:06 when spatial computing, as it’s now called,
    0:02:07 was first popping off.
    0:02:10 This is like right after the DK2 came out,
    0:02:11 Google Glass was a thing.
    0:02:15 And everyone was talking about what is the next iteration
    0:02:16 of computing platforms?
    0:02:19 Where are we gonna go from like this mobile revolution?
    0:02:22 And so I had a chance to work on a bunch of cool stuff there,
    0:02:26 YouTube VR content, livestreaming Coachella,
0:02:28 Teen Choice Awards, Elton John,
    0:02:30 the camera systems that we used
    0:02:33 to do stereoscopic 3D capture, AR SDKs,
    0:02:35 when that became popular, augmented reality,
    0:02:37 sort of hit the scene.
    0:02:40 And then after that, I spent four years at Google Maps,
    0:02:43 basically creating a ground up 3D model of the world,
    0:02:45 sort of remapping the world, if you will,
    0:02:48 and then turning the world into an AR canvas
    0:02:51 with the AR Core Geospatial API.
    0:02:53 It’s been a lot of fun and yeah,
    0:02:55 it’s been awesome to work with some really talented folks
    0:02:59 to work on these projects that have been blurring this line
    0:03:01 between the world of bits and atoms.
    0:03:03 – So I’m curious, working on all this stuff,
    0:03:05 in my mind, I can’t even imagine
    0:03:07 what a day-to-day looks like at Google.
    0:03:09 I’ve been on the Google campus
    0:03:13 and it looks like a giant playground for tech nerds.
    0:03:14 So I’m just kind of fascinated
    0:03:17 by what it’s like to work at Google.
    0:03:19 Like what does a day-to-day look like over there?
    0:03:20 – You know, so I was a product manager
    0:03:23 and so a day-to-day for me is gonna be very different
    0:03:26 than if you go talk to like an engineer or designer.
    0:03:27 For me, really it was a lot of meetings.
    0:03:30 Let me be perfectly honest, just like a ton of time.
    0:03:31 But there’s some very cool things.
    0:03:34 I think like Google and big tech companies generally
    0:03:37 are sort of this like interesting microcosm
    0:03:39 where it’s like, you know, I’ll send out an email
    0:03:43 and the guy that wrote the book on computer vision,
    0:03:46 like the computer vision book that like everyone reads,
0:03:48 like, responds to it, and I get a bunch of pings
0:03:51 and you’re like, oh, so-and-so responded to your thing.
0:03:53 And it’s like all these Pokemon that, like, you know,
0:03:55 these companies have caught that are available
0:03:58 at your beck and call to share ideas with,
    0:04:01 pull into your own projects and really just like,
    0:04:05 you know, there’s such a, it’s like the tip of the iceberg
    0:04:07 of what you see in tech companies
    0:04:09 and the submerged part is just amazing.
    0:04:11 So it’s like, when I moved over to the maps team,
    0:04:14 I was thinking of working on glasses at the time.
    0:04:16 And the reason I went to maps is like,
0:04:18 I met this engineer who was like,
    0:04:21 oh yeah, we write CLs to move satellites around in the sky.
    0:04:23 And I was like, wait, what, huh?
    0:04:25 You move satellites around the sky.
    0:04:27 And it’s like, yeah, like literally orchestrating a fleet.
    0:04:29 Like, you know, like most people don’t know this,
    0:04:31 like Google owns their own fleet of like,
    0:04:33 not just street view cars, but airplanes.
    0:04:34 Oh, wow.
    0:04:36 And so like the ability to like task those.
    0:04:38 So like, hey, we need to, you know,
0:04:41 we’ve got this, like, Sundar I/O thing coming up
    0:04:44 and we’re gonna be presenting immersive view.
    0:04:47 We got to go capture this high resolution model of London.
0:04:48 And, like, you know,
    0:04:51 and suddenly things in the world of atoms
    0:04:52 are moving to make that happen.
    0:04:54 I think it’s like absolutely amazing.
0:04:56 I think people deeply underestimate the data moat
    0:04:58 that Google has.
    0:05:00 Obviously the most complete digital twin of the world
    0:05:02 we’re talking about, but like search, right?
    0:05:05 Like YouTube, oh my goodness.
    0:05:07 All this stuff is available.
    0:05:09 And so like, there’s cool things and products
    0:05:11 you can build around it.
    0:05:13 But along with it comes, you know,
    0:05:15 which may not be a surprise to people in tech,
    0:05:18 but like a ton of responsibility and sort of guardrails
    0:05:20 for how you actually use this data.
    0:05:21 So it’s like the size of the prize
    0:05:24 and the data sets you get to play with are amazing.
    0:05:25 But to be able to do stuff with it,
    0:05:28 you really, really have to be exceedingly thoughtful.
    0:05:30 And there’s a lot of process involved
    0:05:32 in unlocking that innovation.
    0:05:33 So yeah, that’s how I would describe it.
0:05:35 I think it’s just like,
0:05:38 it’s like a Disneyland for nerds, to be honest.
    0:05:43 When all your marketing team does is put out fires,
    0:05:45 they burn out fast.
    0:05:46 Sifting through leads,
    0:05:48 creating content for infinite channels,
    0:05:51 endlessly searching for disparate performance KPIs.
    0:05:52 It all takes a toll.
    0:05:56 But with HubSpot, you can stop team burnout in its tracks.
    0:05:58 Plus your team can achieve their best results
    0:06:00 without breaking a sweat.
    0:06:02 With HubSpot’s collection of AI tools,
    0:06:05 Breeze, you can pinpoint the best leads possible.
    0:06:08 Capture prospects attention with clickworthy content
    0:06:11 and access all your company’s data in one place.
    0:06:14 No sifting through tabs necessary.
    0:06:16 It’s all waiting for your team in HubSpot.
    0:06:17 Keep your marketers cool
    0:06:20 and make your campaign results hotter than ever.
    0:06:23 Visit hubspot.com/marketers to learn more.
    0:06:28 – So it sounds like you’re still really bullish on Google.
    0:06:30 Cause I know we were having like some playful banter
    0:06:32 last year about like, you know, I was like,
    0:06:34 maybe Google’s going to die.
    0:06:36 And you were like, what the hell are you talking about?
    0:06:38 – Yeah, yeah, we’ve actually had that conversation
    0:06:40 on this podcast before where I’m like, you know what?
    0:06:42 I think I give more credit to Google.
    0:06:43 I think Nathan’s a little more,
    0:06:46 I don’t know if they’re going to be like the top dogs in AI.
    0:06:48 Like where do you stand on that?
    0:06:49 Do you count Google out?
0:06:52 Do you think Google will surpass the Microsoft AI,
0:06:54 you know, Avengers mega-team?
    0:06:56 – It’s hard to say anything with certainty,
    0:06:59 but what I will say is like, you know, after I left Google,
    0:07:00 I think I was like one of the few people,
    0:07:03 I felt like I was in isolation saying good things
    0:07:04 about Google.
    0:07:06 Everyone was just like, oh yeah, they’re just too slow.
0:07:09 They fumbled; they came up with the transformer,
0:07:11 and all the transformer folks have left.
0:07:15 I think it’s a situation where there was no real disruption
0:07:16 in sight for the search business.
0:07:18 Yes, there were talks about, like, hey, kids
0:07:21 are searching on TikTok and, like, YouTube now,
0:07:24 but YouTube is owned by Google, and TikTok
0:07:26 is kind of short-form; are people really going to be like,
0:07:27 is that going to be a resilient thing?
    0:07:30 And one might argue now social networks are places
    0:07:31 where people do a lot of searching,
    0:07:34 but traditional search sort of as like, you know,
    0:07:37 a maps guide is to just give the maps analogy.
    0:07:40 It’s like, you know, like maps is how you discover stuff
    0:07:41 in the real world.
    0:07:43 And Google is how you discover stuff in the digital world.
    0:07:46 It’s literally your window to the worldwide web, right?
    0:07:48 I don’t think anything had like sort of questioned
    0:07:51 the strong position Google was in in that regard
0:07:55 until ChatGPT came out, when suddenly people
0:07:57 could start connecting the dots and see, like,
0:08:00 if you connect large language models with, like, you know,
0:08:03 a knowledge graph and search index, kind of like Perplexity.
    0:08:06 And you know, Microsoft co-pilots
    0:08:09 and whatever the heck else OpenAI is going to announce
    0:08:13 in order to kneecap any Google limelight next week.
    0:08:14 Like I think people start saying,
0:08:17 hey, there’s a disruption in sight.
    0:08:19 And I think combined with the fact that like,
    0:08:21 the search ads business model
    0:08:24 was just such a money printing machine and still is.
    0:08:26 And the fact that, you know, the cost per query
    0:08:29 of these generative AI models is obviously going to be higher.
0:08:32 And how do you do advertising and attribution
0:08:34 and all this stuff? It, like,
0:08:37 kind of represents a, you know, contraction
0:08:40 in the money-printing machine, and the pie
0:08:41 and the business that Google created
0:08:44 had all the signs of, sort of, the innovator’s dilemma.
    0:08:46 And I think like Google has sort of adopted
0:08:48 the playbook of the innovator’s solution.
    0:08:51 And, you know, initially they had some of these reorgs
    0:08:54 that felt more like exec reorgs.
    0:08:56 And now they’re actually bringing together
    0:08:58 like the brain and the deep mind teams
    0:09:01 and they’re actually shipping at a really good cadence.
    0:09:04 And I think they still have some of the most unique data sets
    0:09:06 that other folks are talking about, you know
    0:09:07 that may or may not have been scraped, you know
0:09:11 case in point, the CTO of OpenAI
0:09:13 being asked by Joanna Stern about
0:09:14 what it exactly means
0:09:16 that you train on publicly available data.
    0:09:19 So all this to say, I’m still very bullish on Google
    0:09:21 because I think they’ve got the most complete models
    0:09:23 of the physical and digital world.
    0:09:26 They’ve got ubiquitous distribution
    0:09:28 and they’ve got the right infrastructure chops
    0:09:31 to basically like bring that cost per query down
    0:09:34 and an existing ecosystem of ads and sales
    0:09:35 to plug monetization in.
    0:09:39 So I think that there’s a like monetizable
    0:09:41 sort of like answer engine model.
    0:09:43 I think Google is one of the few companies
    0:09:44 that could crack it.
    0:09:46 That isn’t to say that I think like open AI
    0:09:48 and Microsoft can’t take meaningful market share
    0:09:50 but let’s be honest, how many of us actually use Bing?
    0:09:51 Like I don’t, right?
    0:09:53 Like I used it for a little bit
0:09:55 and I probably use Perplexity more now.
0:09:58 Yeah, I mean, I am using ChatGPT and Perplexity
    0:09:59 instead of Google a lot these days.
    0:10:01 I agree, me too, to be honest, yeah.
    0:10:03 And then, you know, I’ve been following,
    0:10:04 you know, a long time ago I used to do SEO
    0:10:06 like a long, long time ago
    0:10:07 and I’ve been kind of following that space
    0:10:09 and like in the last two months
    0:10:11 and I kind of predicted this a year ago,
    0:10:14 they’re like having major changes to the algorithm
    0:10:16 where they’re really focusing on authority.
    0:10:18 Yeah, domain authority, yeah.
    0:10:20 Yeah, the reason they’re having to do that
    0:10:23 is because of all the flood of AI content, right?
    0:10:26 Like they just can’t deal with the flood of AI content.
    0:10:28 So it’s like, okay, how do you deal with that?
    0:10:31 Let’s go back to like really valuing the big brands
    0:10:34 and the big names or the famous people too.
    0:10:36 That’s the other thing is maybe like they’re taking
    0:10:37 social signals, like you have a lot of followers
    0:10:38 on social media.
    0:10:40 Now that’s a signal that you’re a, you know,
    0:10:42 an author they should listen to.
    0:10:44 Yeah, I mean, why do you think we signed on with HubSpot?
    0:10:47 We want that backlink domain authority.
    0:10:48 That’s the only reason.
    0:10:48 Yeah, yeah, yeah.
    0:10:50 (laughing)
    0:10:54 I mean, this is like, this is gonna be a meta problem though
    0:10:55 for the industry, right?
    0:10:57 The just the explosion of synthetic content.
    0:10:59 I mean, and some social networks
    0:11:01 are almost like incentivizing it.
0:11:03 Like, on LinkedIn,
0:11:06 it uses GPT-4 to suggest comments.
    0:11:08 And now you have just the most cringe like,
    0:11:11 like regurgitations and summarizations
    0:11:12 of like the original posts from like,
    0:11:15 clearly a normal human being would never write this.
    0:11:18 But like, if there was a meme about how to respond
    0:11:20 to somebody on LinkedIn, I mean,
    0:11:22 that’s like encapsulates the style I see.
    0:11:22 And so.
    0:11:23 But that’s happening on Quora too.
    0:11:24 So like right now,
    0:11:27 Quora is starting to rank at the top of Google results
    0:11:29 and Quora is being dominated
0:11:31 with ChatGPT responses now.
    0:11:33 100%, I mean, like books on Amazon too.
0:11:34 And I think they came out with a restriction
0:11:37 as well: you can only upload X number of books per day.
    0:11:40 Like, I don’t know if that’s the solution.
    0:11:42 But like, this is like the deep fake,
    0:11:43 shallow fake problem too.
    0:11:45 It’s like, everyone’s talking about detecting deep fakes.
    0:11:46 And like, how do we figure this out?
0:11:50 It’s like, well, the things that cause maximum harm today
0:11:51 aren’t actually deep fakes.
    0:11:53 They’re like super shallow fakes
    0:11:55 where you take a photo from a different time or a context
    0:11:58 and like, you know, kind of put it against another context.
0:11:59 This is exactly the type of stuff
0:12:01 you’ll see, like, community-noted on Twitter.
    0:12:04 And that stuff’s like relatively easier to detect
    0:12:06 ’cause you can actually find the source imagery
    0:12:08 if you do like reverse image search.
    0:12:10 And so like you add in the generative problem
    0:12:12 on top of that, it’s like even crazier,
    0:12:14 but like most platforms haven’t really even solved
    0:12:16 the shallow fake problem, right?
    0:12:18 It’s really like when it reaches a certain threshold
    0:12:21 of distribution, there’s sort of this retroactive,
    0:12:23 let’s go throttle this thing versus like,
    0:12:25 how do you get ahead of this?
    0:12:26 Anyway, I could talk about that forever
    0:12:27 because some of the ways to avoid that is like,
    0:12:30 ubiquitous surveillance, which is also not, you know.
    0:12:32 Oh yeah, sounds great.
    0:12:36 It’s like somehow the solutions to things that sound 1984
0:10:38 are, like, 1984 technology.
    0:12:40 It’s like, it’s kind of weird how that works.
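The reverse-image-search detection described above usually rests on perceptual hashing: hash the suspect image and compare it against hashes of known source imagery. Here's a minimal pure-Python sketch of one such scheme (difference hashing); the tiny pixel arrays and the exact hash are illustrative assumptions, not any platform's actual pipeline.

```python
# Illustration of perceptual hashing, the idea behind reverse image search:
# a "shallow fake" reuses an existing photo, so its hash stays close to the
# source image's hash even after small edits like brightness changes.
# Toy 2D grayscale lists stand in for real downscaled photos.

def dhash(pixels):
    """Difference hash: one bit per horizontal neighbor comparison."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30, 40],
            [40, 30, 20, 10],
            [15, 25, 35, 45]]

# The same photo slightly brightened, as in a typical recontextualized repost.
repost = [[12, 22, 32, 42],
          [42, 32, 22, 12],
          [17, 27, 37, 47]]

# An unrelated image with different gradients.
other = [[50, 10, 60, 5],
         [5, 60, 10, 50],
         [60, 5, 50, 10]]

print(hamming(dhash(original), dhash(repost)))  # 0: brightness shift keeps gradients
print(hamming(dhash(original), dhash(other)))   # 6: different content
```

Because the hash encodes gradients rather than raw pixel values, uniform edits (brightness, mild compression) barely move it, which is why recontextualized reposts are relatively easy to trace back to their source.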
    0:12:42 But it’s funny, somebody shared an article with me
    0:12:45 just like the other day that was an article about,
    0:12:47 well, not just me, but it was like an article
    0:12:51 about like these seven AI influencers are, you know,
    0:12:54 changing how we see AI or something like that, right?
    0:12:56 And then like one of the seven like was my name
    0:12:58 and I read the blurb about myself.
    0:13:00 And the blurb about myself was like,
    0:13:03 I grew up in Louisiana doing real estate and-
    0:13:05 – You’re my neighbor.
    0:13:08 – And like transitioned into computer programming
    0:13:09 and then started teaching AI.
    0:13:11 And I’m like, other than the fact that they’re like,
    0:13:15 Matt makes content about AI, everything else about that
    0:13:17 was just completely wrong.
    0:13:18 – I think that’s where also like these models
    0:13:21 need to be anchored in some sort of real knowledge graph,
    0:13:24 you know, and like, that’s not to say that like, you know,
    0:13:27 an approach like search is only gonna give you the truth,
    0:13:28 right?
    0:13:29 Like there’s like, what is even the truth?
    0:13:31 And there’s like differing opinions on it.
    0:13:33 But I think these models to be able to like,
    0:13:34 just kind of fact check themselves,
    0:13:36 at least with known information and come up with,
    0:13:39 oh, at least the resources that are reputable are saying,
    0:13:43 this is Matt’s bio would be better than, I don’t know.
0:13:46 So I’m like, this is the equivalent of, like, SEO drivel
    0:13:50 that sort of started bleeding into Google around like 2019.
    0:13:51 I think like, you know, it’s,
    0:13:53 this is gonna be a huge problem.
    0:13:55 And not to mention the implications of like,
    0:13:57 if we’ve run out of content on the internet
    0:14:01 and we are actively disincentivizing human generated content,
    0:14:03 like how are we gonna train these models?
    0:14:04 Like what’s gonna happen there?
    0:14:05 – Yeah.
    0:14:08 We’ll be right back.
    0:14:11 But first I wanna tell you about another great podcast
    0:14:12 you’re gonna wanna listen to.
    0:14:15 It’s called “Science of Scaling” hosted by Mark Roberge.
    0:14:18 And it’s brought to you by the HubSpot Podcast Network,
    0:14:21 the audio destination for business professionals.
    0:14:23 Each week host Mark Roberge,
    0:14:26 founding chief revenue officer at HubSpot,
    0:14:28 senior lecturer at Harvard Business School
    0:14:30 and co-founder of Stage Two Capital,
    0:14:33 sits down with the most successful sales leaders in tech
    0:14:36 to learn the secrets, strategies and tactics
    0:14:38 to scaling your company’s growth.
    0:14:40 He recently did a great episode called
    0:14:44 “How do you solve for a siloed marketing and sales?”
    0:14:46 And I personally learned a lot from it.
    0:14:47 You’re gonna wanna check out the podcast,
    0:14:51 listen to “Science of Scaling” wherever you get your podcasts.
    0:14:57 – I wanna talk real quick about search
    0:14:59 because just to peek behind the curtain
    0:15:02 to anybody who might be listening to this episode,
    0:15:05 we’re actually recording it right before Google I/O, right?
    0:15:07 Like Google I/O is next week
    0:15:09 from the time we’re recording this.
0:15:12 Bilival and I are actually gonna be out at Google I/O
    0:15:14 attending in person.
    0:15:16 But one of the things that you mentioned
    0:15:20 is that OpenAI and Microsoft have a tendency
    0:15:23 that whenever Google announces something,
    0:15:26 they need to jump in and like sort of one up them.
    0:15:28 So by the time this episode comes out,
    0:15:32 we’ll probably already know what OpenAI did with search.
    0:15:35 But the rumor right now is that OpenAI
    0:15:38 is creating some sort of their own search engine
    0:15:41 with maybe with Microsoft involved, maybe not.
    0:15:44 There’s still a lot of rumors and speculation flying around.
    0:15:45 But knowing what you know about Google,
0:15:49 do you think OpenAI and ChatGPT can come in and compete?
    0:15:52 – I think no, like I think the search index Google has
    0:15:53 is a very strong mode.
    0:15:56 The fact that they can sort of almost like map the internet
    0:16:00 in like almost real time is like just a hard technical
    0:16:01 and infrastructure problem.
    0:16:04 And they’re really well set up for it.
    0:16:05 I’m curious what it is.
0:16:08 Like obviously this is complete pontification, rumor-mill stuff.
    0:16:11 Like what it is OpenAI is going to roll out.
    0:16:14 What I think it’s gonna be is like something at parity
0:16:18 with Perplexity, with maybe a better, like, search index involved.
    0:16:20 Like, and if it’s even if it’s something like that
    0:16:23 where you get sort of this like multimodal summary
    0:16:25 where it looks at a bunch of links, you get, you know,
    0:16:27 some images, maybe some embedded videos
    0:16:29 and like a summary of whatever it is you asked for
    0:16:32 with citation so you can go validate sort of the quality
    0:16:35 of those like the links that were summarized.
    0:16:37 I think that would be a huge step up, right?
    0:16:40 Like just getting, just being able to invoke like,
0:16:43 like, search inside of ChatGPT right now is clunky.
0:16:44 You have to be like, hey, will you look this up,
0:16:48 research this, and, like, explicitly prompt it to do that,
    0:16:49 and being able to do that in a fashion
    0:16:54 that is like really about leaning on like sort of a real time
    0:16:58 and like sort of, you know, content that has real provenance
    0:17:02 along with, you know, like sort of the distilled wisdom,
    0:17:03 you know, the wisdom is debatable.
    0:17:05 The distilled wisdom that is in,
    0:17:07 in these large language models to summarize that,
    0:17:09 I think it’s still a magical combination.
    0:17:11 And I don’t know if y’all feel like this,
    0:17:12 but I think the vibes have been shifting
    0:17:14 with regards to just the conversation
    0:17:18 and sort of Matt, I know you had this like post about like,
    0:17:19 we think things are slowing down,
    0:17:22 but here’s a bunch of announcement events coming up,
    0:17:25 but doesn’t it feel like just like those leaps
    0:17:28 that we hoped for haven’t quite come?
    0:17:31 I guess Sora was kind of a leap.
    0:17:34 I think the trend from like a million to 10 million
    0:17:37 to maybe infinite context is like an interesting leap too,
    0:17:40 but maybe we’re just getting too used to it, you know,
    0:17:43 like versus the technologies,
    0:17:45 like the pace of advancements slowing down.
    0:17:46 Curious, which y’all think?
    0:17:47 Yeah, I don’t know.
    0:17:49 I feel like it’s kind of like it’s on social media,
    0:17:52 the perception is, you know, the vibe has shifted,
    0:17:53 but I feel like a lot of people
    0:17:56 who actually are closer to Sam Altman,
    0:17:57 they don’t feel that way.
    0:18:00 And so I still believe that, you know,
    0:18:01 they have something amazing coming.
    0:18:02 – I follow Gary Marcus.
    0:18:04 I know I think you actually have Gary Marcus
    0:18:06 coming on the TED AI podcast,
    0:18:08 so we’ll probably get some more insights from him
    0:18:10 in the near future, but, you know,
    0:18:12 the sort of Gary Marcus thing
    0:18:15 that he’s been sort of all over Twitter about
0:18:20 is that we’re not seeing that same leap from GPT-4 to GPT-5,
0:18:25 or the same leap that we saw from GPT-2 to GPT-3.
    0:18:27 He’s sort of arguing that that exponential curve
    0:18:28 that everybody’s talking about
    0:18:31 that we’re on with AI is not true, right?
    0:18:32 We’re not on this exponential curve.
0:18:36 Otherwise, like, why didn’t we get from GPT-4 to GPT-5
0:18:40 in half the time we got from GPT-2 to GPT-3, right?
    0:18:42 Like, why is it not showing that?
    0:18:45 But I also, like, I think my counter argument to that
    0:18:46 is just ’cause we’re not seeing it,
    0:18:47 doesn’t mean it’s not there, right?
    0:18:49 – Yeah, totally.
    0:18:51 – I think there’s a lot of stuff happening,
    0:18:53 like Nathan sort of alluding to
    0:18:57 behind the scenes over at OpenAI that we’re not seeing.
    0:19:00 – I think it’s probably more related to, like,
    0:19:02 compute requirements to actually use
    0:19:05 some of these newer, more advanced models.
    0:19:06 If they were to release it right now
    0:19:08 with, like, the compute that’s available,
    0:19:10 it would be, like, really expensive.
    0:19:12 And, you know, the 20 bucks a month
0:19:14 that people are paying to use ChatGPT
    0:19:16 is probably not gonna cover the cost of inference to run,
    0:19:19 you know, these newer models, same thing with Sora.
    0:19:21 – I mean, that’s kind of the leading theory.
    0:19:24 I know that I believe with the GPT-2,
    0:19:25 like, the mysterious model that came out
    0:19:27 is, like, what the hell is that?
    0:19:29 Like, maybe that’s what it is, is, like,
    0:19:30 they’ve actually, maybe this is something
    0:19:32 they actually developed, like, a year or two ago.
    0:19:35 And it’s like a, it’s a more efficient architecture
    0:19:36 or something like this.
    0:19:38 And possibly that’s what GPT-5’s built upon,
    0:19:40 then they, you know, they have less issues
    0:19:42 with the cost issues in theory, but, yeah.
0:19:45 – I think it’d be hilarious if the GPT-2 stuff
0:19:46 is actually OpenAI.
    0:19:49 I mean, like, what an interesting way to sort of
    0:19:51 test a model in the wild versus, I don’t know,
    0:19:54 running some sort of A/B where, like,
    0:19:57 there’s some sort of experiments on the chat GPT website
    0:19:59 where a subset of users get a certain model
    0:20:01 versus another set of users.
    0:20:03 Maybe, like, making it explicit that there’s
    0:20:04 these two versions of the models
    0:20:07 and having people respond to it separately is interesting.
    0:20:09 Maybe there’s an intention to create a bit of a PR,
    0:20:11 like, sort of, like, you know, kind of,
    0:20:14 a seed the conversation and kind of, like,
    0:20:16 grease the wheels before the reel,
    0:20:17 like, sort of, I don’t know,
    0:20:19 race car jump moment happens or whatever.
    0:20:22 I don’t know, but it’s like, yeah, like,
0:20:24 I mean, I entirely agree, like, there’s the compute stuff.
    0:20:26 Certainly, Sora hasn’t been rolled out widely
    0:20:29 because, like, it’s just so compute-intensive, right?
    0:20:31 Like, you need to come up with a completely different,
    0:20:34 like, pricing model way beyond the $20 one.
    0:20:37 Maybe, hence, them talking to studios and stuff like that.
    0:20:39 But all these models will get optimized, too.
    0:20:40 To the Gary Marcus point, it’s interesting,
    0:20:43 I had a conversation with him last week and it was like,
    0:20:46 it’s like, he’s been very consistent about, like,
    0:20:48 this not being the right paradigm.
0:20:51 And, you know, I think people,
0:20:53 let’s just put it bluntly:
    0:20:55 Like, people like to shit on Gary a lot.
    0:20:57 But, you know, if there’s one thing Gary’s been,
    0:20:59 it’s like, it’s been exceedingly consistent.
    0:21:01 And so, I don’t know, like, I would like to see
    0:21:06 this sort of agentic co-pilot that feels more like an employee.
    0:21:08 We certainly haven’t seen it yet, right?
    0:21:11 The converse of everything else is the expectations
    0:21:13 around AI, like, a year ago,
    0:21:15 if we go back to when GPT-4 came out,
    0:21:19 we’re just so fricking high that people just thought,
    0:21:20 like, whether you were a knowledge worker
    0:21:23 or a visual creator, you would look at the narrative
    0:21:24 and you’d be like, holy crap,
    0:21:26 like, this thing’s gonna take my job.
    0:21:27 And then I don’t know if y’all saw the tweet,
    0:21:29 I was like, and then you go use the tech
    0:21:32 and it feels less like this, like, fricking kaiju Godzilla
    0:21:34 that’s gonna stomp you and more like this, like,
    0:21:36 chaotic golden retriever that you can kind of coax
    0:21:39 to do cool stuff with you.
    0:21:41 And I think that delta between expectations
    0:21:44 and reality is so stark and, you know,
    0:21:46 like, there was all of these,
0:21:48 I think even in a recent Sam Altman interview, he was
0:21:50 asked about one of his biggest regrets, which is, like,
0:21:53 GPT-4 didn’t have that, like, economic impact
0:21:55 everyone thought it would.
    0:21:56 And he’s worried that the pendulum
    0:21:58 will swing sort of the other way now,
    0:22:00 where if expectations were so high,
    0:22:02 people were like, oh yeah, whatever.
    0:22:03 And so, I don’t know,
    0:22:04 I think the answer is always gonna be in the middle,
    0:22:07 but I can’t help but feel like we’ve gone past
    0:22:09 the, like, peak of inflated expectations
    0:22:12 and we’re going into the trough of disillusionment.
    0:22:12 – Well, we’ll see.
    0:22:14 I mean, like, Sam Altman also said that, like,
    0:22:17 he was surprised that GPT-4 was so successful
    0:22:19 and that, you know, it kind of sucks.
    0:22:21 – I don’t know, my feeling on that is that Sam Altman
    0:22:23 is one of the, like, greatest marketers
    0:22:24 of our time right now.
    0:22:27 And he’s, you know, he’s really, really, really good
    0:22:29 at getting that hype wheel spinning.
    0:22:30 – Oh, baby.
    0:22:32 – Yeah, I swear, I think Sam Altman
    0:22:34 is just really, really smart.
    0:22:36 And I think, you know, when you see him speak, right,
    0:22:39 when he does interviews, he’s very calculated, right?
0:22:41 You’ll ask him a question and he’ll sit there
    0:22:43 and he’ll usually pause for a good few seconds
    0:22:45 before he responds.
    0:22:48 And I think that he’s got that marketer brain.
0:22:49 Like, what can I say
0:22:52 that’s going to sort of fan the flames
0:22:53 and hype this up a little bit more?
    0:22:56 And I think that’s kind of how his brain operates.
    0:22:58 So I think, you know, him saying like,
    0:23:01 this is going to be the dumbest model
    0:23:02 you’ve ever used by a lot.
0:23:04 And we’re going to, you know, we’re going to look back
0:23:06 at this and be embarrassed by what we put out.
    0:23:08 I think that’s all marketing.
    0:23:09 – Well, I’m certainly excited.
    0:23:11 But, you know, if I could have one request right now,
    0:23:15 it’s like just give me GPT-4 from April, last year.
    0:23:17 Give me that vintage of GPT-4.
    0:23:18 It was better.
    0:23:19 It was better, damn it.
0:23:21 – Can you still go into, like, the OpenAI playground
    0:23:23 and like select the older models?
    0:23:24 – I think you can, yeah.
    0:23:25 And I feel like just all of these models,
    0:23:27 especially in the consumer interface,
    0:23:29 follow this trajectory of like when they launched,
    0:23:30 they’re really good.
    0:23:32 And then over time as like, you know,
    0:23:35 various efforts to make sure the output is, you know,
    0:23:38 on rails and not harmful kick in,
    0:23:40 you see this deterioration happen.
    0:23:43 But hey, that’s why we also have open source, right?
    0:23:43 – Right.
    0:23:44 – Yeah.
    0:23:46 – Well, let’s talk about visual effects too,
    0:23:48 because that’s like, that’s really your background
    0:23:49 over at Google.
    0:23:53 I want to go back to like the sort of, you know,
    0:23:57 3D imaging 101 for a second here.
    0:23:59 Can you sort of like break down the difference
    0:24:01 between things like photogrammetry,
    0:24:05 LiDAR, NERFs and, you know, Gaussian splats?
    0:24:06 – I would love to.
    0:24:07 In fact, the thing I was talking about is like,
    0:24:10 everyone talks about like generative AI a lot,
    0:24:12 but I think like the part that’s getting
    0:24:16 not that much attention is this like visual spatial AI space.
    0:24:19 And so think about spatial intelligence as like,
    0:24:21 really just like a reality capture.
    0:24:23 Like the world is in fricking 3D, right?
    0:24:25 And so, you know, Matt, you nailed it.
    0:24:28 It’s like basically photogrammetry is the art and science
    0:24:31 of taking 2D images and other sensor data like LiDAR
    0:24:34 and turning it into these 3D representations
    0:24:35 of the real world.
    0:24:37 So photogrammetry has been around
    0:24:39 since like before computers were invented even.
    0:24:43 Like this is a way of basically using like math and images
    0:24:45 and observations of the world really
    0:24:48 to like extract 3D structure from it.
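The “math and images” being described here is, at its core, triangulation: two cameras at known positions each see a bearing toward the same point, and intersecting those viewing rays recovers where that point sits. A toy 2D sketch with made-up numbers, not any production pipeline:

```python
# Toy 2D triangulation, the core idea behind photogrammetry: two cameras
# at known positions each observe a ray direction toward the same point;
# intersecting the two rays recovers the point's position.

def intersect_rays(p1, d1, p2, d2):
    """Intersect 2D rays p1 + t*d1 and p2 + s*d2; return the meeting point."""
    # Solve the 2x2 linear system t*d1 - s*d2 = p2 - p1 by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Cameras at (0, 0) and (4, 0), both sighting the same point:
point = intersect_rays((0.0, 0.0), (2.0, 3.0), (4.0, 0.0), (-2.0, 3.0))
print(point)  # → (2.0, 3.0)
```

Real photogrammetry does this in 3D for millions of matched features while also solving for the camera poses themselves (bundle adjustment).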
    0:24:50 But you should also think of spatial intelligence
    0:24:53 as the ability for machines to sort of interpret
    0:24:55 the spatial data like maps, 3D models,
    0:24:57 like the world as we see it, right?
    0:25:00 And so like, to me like photogrammetry
    0:25:03 or like reality capture, all these other techniques
    0:25:05 are all about recreating reality.
    0:25:07 And so photogrammetry isn’t new as I alluded to, right?
    0:25:09 But I think what’s gotten a huge boost
    0:25:10 in why you hear about all these things
    0:25:12 is like, thanks to machine learning,
    0:25:15 basically like these learned approaches
    0:25:17 to modeling the complexity of reality, right?
    0:25:20 Like basically like, how do I take a bunch
    0:25:23 of 2D images of the world like this
    0:25:25 and essentially have a model do this
    0:25:26 inverse rendering problem?
    0:25:28 Or it’s like, oh, here’s where these hundred photos
    0:25:29 are located in 3D space.
    0:25:34 Based on this, I’m literally going to, like, ray trace
    0:25:37 and create a 3D representation that makes sense.
    0:25:41 And since you know exactly what the model looks like
    0:25:42 at the photos that you’ve taken,
    0:25:44 the representation that you get eventually
    0:25:46 is like good enough from all viewpoints.
    0:25:49 And so like, basically the first NeRF paper
    0:25:52 dropped in 2020, called Neural Radiance Fields.
    0:25:54 And then there’s just been insane progress.
    0:25:56 Like we talked about from like data centers
    0:25:59 to like the GPU and your fricking like, you know
    0:26:03 NVIDIA workstation to like the iPhone in your pocket.
    0:26:04 But even this wasn’t new.
    0:26:06 There were like spiritual successors
    0:26:09 to these like ML based learned representations
    0:26:12 to sort of encapsulate the complexity of reality.
    0:26:13 Enter radiance fields, right?
    0:26:17 Like the way like, think about radiance fields generally
    0:26:22 sort of like imagine like a voxel grid, a cube of cubes
    0:26:25 where every single cube has like a color value
    0:26:28 and an alpha like transparency value.
    0:26:30 And like that’s kind of what you end up getting with a NeRF.
    0:26:32 And then when you do volume rendering,
    0:26:35 you can basically like, you know
    0:26:37 end up getting these photo realistic renditions
    0:26:38 of the world.
    0:26:41 And so like the cool part about neural radiance field is
    0:26:43 instead of photogrammetry where you get this
    0:26:46 like 3D mesh model, this like with surfaces
    0:26:48 with textures plastered on it.
    0:26:51 Think of it like crappy GTA looking models.
    0:26:54 What you get with NeRFs is like a radiance field
    0:26:57 this voxel grid of like all these voxels
    0:26:59 and their various values they’re in
    0:27:02 that change based on how the camera is looking at it.
    0:27:05 And because of that, you get all these things
    0:27:06 that photogrammetry couldn’t do
    0:27:10 which is like modeling transparency, translucency
    0:27:14 like fricking like glass, like shiny objects
    0:27:17 all this stuff can be done fricking fire,
    0:27:19 volumetric effects, all the stuff that photogrammetry
    0:27:21 can’t do cause imagine needing to come up with like
    0:27:24 a cardboard paper mache model of that thing.
    0:27:25 It’s gonna look like crap.
    0:27:28 How do you model hair, fire, fog, all these things.
    0:27:30 And you can do all of that
    0:27:32 with these implicit representations.
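The rendering model behind this, marching a ray through the field and accumulating color weighted by how much light survives, can be sketched as a simple emission-absorption loop. This is the textbook compositing rule NeRF-style renderers build on, not any particular implementation:

```python
import math

# Emission-absorption volume rendering along one ray: each sample has an
# RGB color and a density; transmittance (the fraction of light that still
# reaches the eye) decays as the ray passes through denser samples.

def composite_ray(colors, densities, step=0.1):
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for color, density in zip(colors, densities):
        # Opacity contributed by this sample over the step length.
        alpha = 1.0 - math.exp(-density * step)
        weight = transmittance * alpha
        out = [o + weight * c for o, c in zip(out, color)]
        transmittance *= 1.0 - alpha  # less light survives past this sample
    return out, transmittance

# A ray through empty space, then into a dense red region: the result is
# nearly pure red, with almost no light making it out the far side.
rgb, t = composite_ray([(1.0, 0.0, 0.0)] * 5, [0.0, 0.0, 10.0, 10.0, 10.0])
```

In an actual NeRF the colors and densities come from querying the network at sample points along the ray; here they are hard-coded just to show the compositing.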
    0:27:35 Now the problem with NeRFs was the rendering speed
    0:27:37 because you’ve got this voxel grid
    0:27:38 and you’re doing this volume rendering
    0:27:40 where you’re like first doing like
    0:27:41 the training process takes forever
    0:27:43 but then when you want to render an image
    0:27:44 you got to do volume rendering
    0:27:47 and like trace like these rays through that voxel grid
    0:27:51 and add up these like values like that takes a lot of time.
    0:27:54 And basically it’s like think of it like one frame per second
    0:27:57 to render out some of these videos, right?
    0:27:58 Along comes Gaussian splatting
    0:28:01 which is like, hey, do we even need the neural part
    0:28:02 of radiance fields?
    0:28:04 Like do we need like ML?
    0:28:08 Can we just do this with like old school statistical techniques?
    0:28:10 And like, which is kind of wild, right?
    0:28:13 And so instead of having this implicit black box representation
    0:28:13 where like reality’s modeled in the weights of this like MLP
    0:28:19 this like multi-layer perceptron,
    0:28:22 you’ve got this explicit representation
    0:28:25 of these like ellipsoidal splat looking things
    0:28:26 called Gaussians.
    0:28:28 Just think of them like super stretchy like fricking spheres
    0:28:33 like turns out you can get like a huge jump in quality
    0:28:37 while also being able to render way, way faster.
    0:28:40 And so like it’s like from one FPS
    0:28:42 you’re getting like a hundred frames per second.
    0:28:44 And since it’s an explicit representation
    0:28:46 it’s in this common format,
    0:28:48 like all these apps that I’m showing on the screen
    0:28:52 use this format called PLY, the Stanford PLY file,
    0:28:53 you can basically bring it
    0:28:56 into any industry standard game engine.
    0:28:57 Like you can bring it into Blender
    0:29:00 you can bring it into Unreal into Unity.
    0:29:03 And since it’s not this like black box
    0:29:05 like this neural network that you have to deal with
    0:29:07 and it’s explicit you can go and delete
    0:29:09 and edit things far more easily.
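For anyone who wants to poke at these files: a splat PLY is a normal Stanford PLY whose “vertex” element carries extra per-splat properties. A minimal header-parsing sketch follows; the property names mirror the common 3DGS export convention, but treat the exact layout as an assumption, since exporters vary:

```python
# A tiny, hypothetical splat .ply header: one "vertex" element per splat,
# with properties beyond x/y/z (opacity, scales, rotations, SH terms).
header = b"""ply
format binary_little_endian 1.0
element vertex 2
property float x
property float y
property float z
property float opacity
property float scale_0
end_header
"""

def parse_ply_header(data):
    """Return (vertex count, list of property names) from a PLY header."""
    n_vertices, props = 0, []
    for line in data.decode("ascii").splitlines():
        parts = line.split()
        if parts[:2] == ["element", "vertex"]:
            n_vertices = int(parts[2])
        elif parts[:1] == ["property"]:
            props.append(parts[2])  # e.g. "x", "opacity", "scale_0"
    return n_vertices, props

n, props = parse_ply_header(header)
```

Because the layout is declared right in the header like this, any tool (Blender, Unreal, Unity importers) can read the splats without a neural network in the loop.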
    0:29:13 And so it’s like super crazy to see like what’s happened there
    0:29:16 but basically between NeRFs and Gaussian splatting
    0:29:19 think of Gaussian splatting basically as radiance fields
    0:29:21 without that neural rendering part.
    0:29:23 And the paper uses terms like training or whatever
    0:29:27 but there’s no neural networks involved at all in 3DGS.
    0:29:29 So yeah, like how crazy is that?
    0:29:30 We went from like cool.
    0:29:32 Yeah, you could do like cool fly through videos
    0:29:34 if you remember that was what the early days
    0:29:36 of like the Luma app was: do the scan,
    0:29:38 and then you could reanimate the camera,
    0:29:40 and you left this thing to render for 20 minutes
    0:29:41 and you got back something.
    0:29:43 Now you can literally take your scans
    0:29:45 and drop them into these real time environments.
    0:29:47 And it’s like fricking amazing.
    0:29:50 Like I think on the left I’m getting like 400 FPS
    0:29:52 on an NVIDIA GPU and on the right
    0:29:55 I’ve got this thing in Unreal Engine.
    0:29:59 And what the cool part of it is like unlike photogrammetry
    0:30:02 like and very similar to, you know, neural radiance fields
    0:30:05 they still model these light transport effects.
    0:30:07 So like again, like imagine if this was like
    0:30:08 a cardboard cut out model
    0:30:10 you wouldn’t have had all these light transport effects
    0:30:13 of the light going through the tree, et cetera.
    0:30:17 And so the way Gaussian splatting does this
    0:30:20 is like by using this OG physics concept
    0:30:23 called spherical harmonics to model it.
    0:30:25 And so like if you’re trying to optimize stuff
    0:30:27 you can get rid of some of these view dependent effects
    0:30:29 as they’re called view dependent meaning
    0:30:30 as you change your view
    0:30:32 that like materials look slightly different
    0:30:35 but you basically get it all with Gaussian splatting.
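That spherical-harmonics trick is worth a sketch: each splat stores SH coefficients per color channel, and the renderer evaluates them against the current view direction, which is how color shifts as the camera moves. A minimal degree-1 evaluation; the constants are the standard real-SH factors, but the coefficient layout here is illustrative:

```python
# View-dependent color from low-degree real spherical harmonics.
# The degree-0 term is the view-independent base color; the three degree-1
# terms tilt the color with viewing direction (how splats fake shininess).
SH_C0 = 0.28209479177  # 1 / (2*sqrt(pi))
SH_C1 = 0.48860251190  # sqrt(3) / (2*sqrt(pi))

def sh_color(coeffs, direction):
    """coeffs: four RGB triples (one degree-0 + three degree-1 terms);
    direction: unit view vector (x, y, z)."""
    x, y, z = direction
    basis = [SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x]
    return [sum(b * c[ch] for b, c in zip(basis, coeffs)) for ch in range(3)]

# With only the degree-0 term set, the color is the same from every angle:
flat = [(1.0, 0.5, 0.2), (0, 0, 0), (0, 0, 0), (0, 0, 0)]
front = sh_color(flat, (0.0, 0.0, 1.0))
side = sh_color(flat, (1.0, 0.0, 0.0))
```

Dropping everything but the degree-0 term is exactly the optimization he mentions: you lose the view-dependent shift but keep the base color.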
    0:30:37 So I think it’s super exciting.
    0:30:40 And yeah, like you could do this stuff in the cloud
    0:30:43 you can do this stuff on your fricking desktop now.
    0:30:45 Like I think Postshot is a tool
    0:30:46 that not many people have used
    0:30:48 but like if you’re working on a commercial thing
    0:30:50 and you don’t wanna upload your data
    0:30:52 with Luma’s terms of service
    0:30:53 or Polycam’s terms of service
    0:30:58 like you can train this all locally on your desktop
    0:31:00 with Postshot, with Nerfstudio,
    0:31:03 though some of the models in Nerfstudio aren’t commercial friendly,
    0:31:04 and then even in the phone in your pocket, right?
    0:31:06 Like so if you’ve got an iPhone like a modern iPhone
    0:31:09 like and you just wanna know what 3D
    0:31:12 like what radiance fields and reality captures all about
    0:31:15 just download the Scaniverse app and like have at it.
    0:31:17 So this is maybe a dumb question
    0:31:19 but like so with like NeRFs and all this new tech
    0:31:22 are you able to make like a really realistic 3D model
    0:31:23 of like a city like San Francisco?
    0:31:25 I mean, is that what you’re showing me earlier?
    0:31:27 Or like, or is it only like a certain scene?
    0:31:28 Like how hard is that?
    0:31:30 Yeah, I mean, there’s a bunch of new papers out, right,
    0:31:31 beyond the initial radiance fields.
    0:31:33 Like so there’s a paper called Block-NeRF
    0:31:35 that tries to scale NeRFs up to city scale
    0:31:37 using Waymo data sets.
    0:31:39 And similarly you’re seeing in the Gaussian splatting world
    0:31:42 different papers about basically having like
    0:31:46 in a kind of like nested hierarchies of splats
    0:31:48 that have really good transitions to model
    0:31:51 an entire like city and eventually the globe.
    0:31:54 So I think like that’s the path that academia
    0:31:55 and industry is on.
    0:31:58 And I think like already you’re seeing city scale data sets
    0:32:00 that are very plausible in research.
    0:32:02 And I think it’s only a matter of time
    0:32:05 before that stuff gets into production.
    0:32:06 You think that’s like the future of Google Maps?
    0:32:08 I think it’s the future of maps for sure.
    0:32:11 Like, you know, in immersive view
    0:32:13 there are certain indoor locations
    0:32:15 where you get a pre-rendered neural radiance field
    0:32:18 that you can kind of like walk around and see.
    0:32:20 This is just the evolution of that.
    0:32:21 I think those data sets exist
    0:32:23 and there’s like a handful of companies
    0:32:25 in the world that have it.
    0:32:27 So I think like that is the future of geospatial
    0:32:29 and like maps in general.
    0:32:31 But on the other hand, I think what’s interesting is like,
    0:32:33 you know, this technology like building a map
    0:32:35 of the world is easy.
    0:32:36 Updating it is way harder, right?
    0:32:39 Like when people talk about this like
    0:32:42 one to one digital twin of reality, it’s like,
    0:32:45 oh yeah, well like, by the way, new stuff
    0:32:47 gets built all the fricking time.
    0:32:48 Things change all the time.
    0:32:51 Seasonality is a fricking thing, right?
    0:32:54 Like so, I think with this technology we’ve got
    0:32:56 and since we’ve commoditized capture
    0:32:59 because sensors are cheaper, compute is cheaper.
    0:33:02 And now we’ve got access to the same sort of algorithms
    0:33:05 and approaches to model reality.
    0:33:07 I think like updating this model of the world
    0:33:08 is gonna get a lot, lot easier.
    0:33:10 So I think like it’s gonna be very exciting
    0:33:12 or in the near future we’re walking around
    0:33:14 you know, driving around our cars
    0:33:17 and walking around with our like glasses or whatever.
    0:33:19 And we’re sort of updating this real time map of the world.
    0:33:22 I think we’re very much on that trajectory
    0:33:24 and we’re closer now than we’ve ever been.
    0:33:26 – What do you think are like the business applications
    0:33:27 to this tech?
    0:33:28 – I mean, it’s like all the applications
    0:33:30 that value stuff in the real world,
    0:33:32 there’s utility and there’s delight, right?
    0:33:35 I think like being able to not just like,
    0:33:37 I mean, if you look at what NVIDIA is doing with Earth too,
    0:33:39 right, we’re talking about the physical structure
    0:33:39 of the world.
    0:33:42 You can think of like the Earth having various facets, right?
    0:33:44 Like there’s like the sort of the terrain,
    0:33:48 like the natural like physical features of the Earth.
    0:33:49 Then it’s all the like, you know,
    0:33:51 human built things on top of that,
    0:33:52 the structure that we built.
    0:33:55 And then you can layer in like human activity
    0:33:56 on top of that, right?
    0:33:58 Like us moving around in the world,
    0:34:00 our sensors, our cars, et cetera.
    0:34:03 And then there’s other phenomena like weather, right?
    0:34:04 Like tides and things like that
    0:34:06 that need to be incorporated.
    0:34:08 So Earth too is this really interesting initiative
    0:34:11 by NVIDIA to focus on the weather like systems
    0:34:13 that govern like, you know,
    0:34:16 basically a day-to-day weather in the real world.
    0:34:18 And so if you’ve got that understanding of like,
    0:34:20 like the structure and geometry of a place
    0:34:21 where the sun is going to be,
    0:34:23 you can predict already things like,
    0:34:25 hey, can I install solar panels here?
    0:34:27 Like actually how much like sunlight would I get
    0:34:30 if I installed this configuration of panels?
    0:34:32 Then when you layer weather on top of that,
    0:34:33 things get even more interesting.
    0:34:35 So to answer your question,
    0:34:37 I think there’s a bunch of applications
    0:34:39 across utility and delight.
    0:34:42 Like media and entertainment’s obviously in gaming is like,
    0:34:45 I think the next GTA is absolutely going to be built
    0:34:46 in like a twin of the real world.
    0:34:49 Maybe this is the last GTA
    0:34:51 that will be built by humans manually
    0:34:52 to emulate the real world.
    0:34:55 I think that’s certainly exciting.
    0:34:57 That said, a bunch of games have already used
    0:34:58 reality capture, right?
    0:35:00 Like from Call of Duty to Battlefront, et cetera.
    0:35:03 But I think the utilitarian aspects
    0:35:05 are far, far more interesting.
    0:35:07 Whether you’re like anything you’re trying to do
    0:35:10 in the world of bits, like from like building stuff
    0:35:12 to like disaster planning,
    0:35:16 like the range of applications is just immense.
    0:35:17 – Well, even just, you know,
    0:35:21 one of the things that Jensen showed off at GTC this year
    0:35:25 was to create these sort of virtual worlds
    0:35:28 and then actually put virtual versions of like
    0:35:31 humanoid robots in these worlds
    0:35:34 and to sort of train them on this virtual twin
    0:35:35 of the real world.
    0:35:37 So they know how to navigate the real world
    0:35:39 and then once they get that training data
    0:35:41 then they can sort of inject that training data
    0:35:43 into the real robots.
    0:35:46 So like this concept of creating this digital twin
    0:35:50 of the earth will allow us to train a lot of these robots
    0:35:54 and machinery to operate within that digital twin
    0:35:56 before actually deploying it in the real world.
    0:35:59 To me, there’s a lot of huge implications there.
    0:36:02 – 110%, I mean like these,
    0:36:05 it’s like a way of creating all the training data
    0:36:07 that these machines and perception models need
    0:36:08 to be able to navigate the world, right?
    0:36:10 And what better way it’s like,
    0:36:12 you can create that like, you know,
    0:36:14 you can 3D scan a city block
    0:36:17 and then create all these different scenarios
    0:36:19 of human activity on top of that
    0:36:21 and feed that to, you know, like,
    0:36:23 and train like self-driving cars,
    0:36:25 like, you know, self-driving AI off of it.
    0:36:28 I think like the fact that we’ve got a place
    0:36:29 where we can basically like,
    0:36:32 we can teleport reality into the digital world
    0:36:34 and then also manifest the digital in the real world.
    0:36:37 It’s like that bridge I think is just very powerful
    0:36:38 for a bunch of different applications.
    0:36:41 – Well, I want, let’s talk super quickly about TED.
    0:36:45 So I’m, first of all, congrats on even having a TED talk.
    0:36:47 Like that’s such an amazing accomplishment.
    0:36:50 Like, you know, some people say they’ve had a TED talk.
    0:36:51 They’re really talking about a TEDx talk.
    0:36:54 And come on, come on, come on.
    0:36:57 You’ve actually given a real TED talk, a legit TED talk.
    0:36:58 And not only that,
    0:37:01 but they asked you to host the TED AI podcast.
    0:37:03 So tell us a little bit about that
    0:37:04 and what’s going on there.
    0:37:06 I mean, maybe share a little bit of your experience
    0:37:09 with TED and then tell us about the TED AI podcast.
    0:37:10 – Yeah, sure.
    0:37:14 I mean, certainly the TED talk was a fun experience last year.
    0:37:16 And I would say this year was even more fun.
    0:37:18 I had the opportunity
    0:37:19 to co-host a session too,
    0:37:22 which was all about AI, with Chris Anderson.
    0:37:24 And we had some amazing speakers,
    0:37:28 like Vinod Khosla, Fei-Fei Li, you know,
    0:37:31 the CEO of GitHub, Helen Toner,
    0:37:33 ex-board member of OpenAI.
    0:37:36 And even like, I don’t know if you’ve,
    0:37:38 if you’ve checked her work out, but Niceaunties,
    0:37:42 like an absolute trip, basically like,
    0:37:46 to me, what intergalactic social media looks like.
    0:37:48 That was a super, super fun experience.
    0:37:49 Yeah, I mean, like, look,
    0:37:51 the T and TED is all about technology.
    0:37:54 And I think right now what’s exciting is to put the,
    0:37:58 like over time, TED grew to encompass, you know,
    0:38:00 not just technology, entertainment and design,
    0:38:03 but a plurality of topics, right?
    0:38:05 And I think with AI sort of in technology,
    0:38:06 being sort of this horizontal,
    0:38:08 like tech is a horizontal,
    0:38:11 but it’s impacting so many different verticals
    0:38:13 in our daily lives, right?
    0:38:15 Like we can talk about all the applications,
    0:38:16 whether you’re a creator,
    0:38:18 whether you’re a knowledge worker, you know,
    0:38:20 whether you’re a musician, you know,
    0:38:22 whether you’re thinking about like national security
    0:38:25 and defense, whether you’re thinking about relationships.
    0:38:30 And often in all of these sort of topics, you know,
    0:38:34 there’s like a dichotomy that we as like builders
    0:38:36 and consumers have to contend with.
    0:38:40 And so the idea of the TED AI show really is to outline
    0:38:42 those dichotomies and, you know,
    0:38:45 not necessarily take an opinion one way or the other,
    0:38:49 but sort of elaborate on the entire gamut of like,
    0:38:51 like the good, bad and the ugly
    0:38:53 and sort of let people decide for themselves
    0:38:56 and do that by talking to people from all walks of life.
    0:38:58 Like people whose titles haven’t even been invented yet,
    0:39:01 but obviously technologists, journalists, researchers,
    0:39:03 artists, you know, the list goes on
    0:39:05 and, you know, I’m just super grateful
    0:39:08 for the opportunity to be able to, you know,
    0:39:10 just bring my excitement into the space.
    0:39:12 Like obviously like I want to build,
    0:39:14 bring the lens of a creative,
    0:39:16 like that’s like built a following
    0:39:18 of over a million folks using these tools,
    0:39:20 but also as a product builder who shipped a bunch
    0:39:22 of this stuff and then just like,
    0:39:26 I would say like cautiously optimistic AI enthusiasts.
    0:39:28 So I’m going into a bunch of these topics
    0:39:30 with those three lenses in mind
    0:39:32 and it’s just been a lot of fun.
    0:39:35 We’ve got some really cool episodes lined up for y’all
    0:39:38 and I can’t wait for y’all to check it out.
    0:39:41 – You have any idea of like, you know, launch schedule?
    0:39:44 Like is that, are there dates planned out for it yet?
    0:39:45 – Yeah, yeah, totally.
    0:39:47 So May 21st, first episode drops
    0:39:48 and then it’s going to be weekly.
    0:39:50 There’s going to be a little bit of a summer break there,
    0:39:52 but yeah, 25 episodes in the season.
    0:39:55 And let me tell you,
    0:39:57 I think there’s something for everyone.
    0:39:59 – Yeah, I do wonder like what was the general vibe at Ted?
    0:40:03 Like are people optimistic or are they like really fearful
    0:40:04 of AI?
    0:40:05 – And if you look at Ted also, you know,
    0:40:08 you had people like, like the laval.
    0:40:10 I know there were some other speakers there.
    0:40:11 I think maybe Mustafa Suleyman was there.
    0:40:13 Maybe that was a more recent one.
    0:40:15 But then you also had guys like Gary Marcus
    0:40:20 and I’m going to totally butcher his name, Yudkowsky.
    0:40:22 – Oh yeah, it is Yudkowsky.
    0:40:24 – Which are both more on the like,
    0:40:26 hey, let’s chill out on the AI side.
    0:40:29 So it seems like it’s from a speaker’s front.
    0:40:31 It seemed like they had speakers on both sides
    0:40:33 of the arguments.
    0:40:33 – Definitely.
    0:40:35 I mean, the theme for this year was like the brave
    0:40:36 and the brilliant and like covering
    0:40:38 that gamut of opinions.
    0:40:40 I would say overall the vibe is positive.
    0:40:42 So I like, I’ll give you a sample size.
    0:40:44 Like I taught this discovery session,
    0:40:48 which was about the dichotomies of AI, to about 50 people.
    0:40:49 And sort of what we did is we looked at a bunch
    0:40:52 of these verticals in the AI space.
    0:40:54 And like, you know, essentially came up with like,
    0:40:56 what happens if this goes really well
    0:40:59 and what happens if this goes really poorly?
    0:41:01 And like, let’s use ChatGPT actually
    0:41:02 to come up with like a headline,
    0:41:06 like a pithy visual depiction of that desirable
    0:41:08 and undesirable future.
    0:41:10 And honestly, like most folks in the room
    0:41:12 are optimistic about it, right?
    0:41:14 Like, but they’re not blind to the downsides.
    0:41:17 I think like the problem with anything is extremes, right?
    0:41:19 And so like, you know, where like Nathan,
    0:41:22 I hear your concern of like, if we’re like, you know,
    0:41:23 oh, like we can’t have like,
    0:41:27 doomerism and like, you know, it’s like, it’s infectious.
    0:41:30 I think the same thing applies to the opposite narrative too,
    0:41:32 which is like, well, we just obviously,
    0:41:34 we got to keep like accelerating.
    0:41:35 We got to keep shipping.
    0:41:36 I think it depends, right?
    0:41:39 Like, it’s sort of the boring answer to these things.
    0:41:42 And I think you can’t understand the nuances
    0:41:45 unless you go dissect like the full gamut
    0:41:47 of like considerations.
    0:41:50 And so the goal is like, really like the editorial perspective
    0:41:51 I’m trying to bring.
    0:41:55 And of course, Ted has a huge say in this too is like,
    0:41:58 look, I’m a 60% optimistic, 70% optimistic,
    0:41:59 which is not too dissimilar.
    0:42:01 I think Matt, well, you and I have talked about
    0:42:03 on most things, but I’m not going to be blind
    0:42:06 to all like the downsides of this stuff too, right?
    0:42:09 Like, and I think that’s okay to say.
    0:42:12 And what I believe is just like the speaker selection,
    0:42:15 you know, we’re trying really hard
    0:42:17 to have a balanced perspective on the guests too.
    0:42:20 So that like, with what I think is going to happen
    0:42:22 in the real world as well as like,
    0:42:24 you’ll be able to hear both sides of that argument.
    0:42:27 Like somebody who’s like super stoked and thrilled
    0:42:30 on like AI art and just thinks it’s the bee’s knees.
    0:42:33 And it’s totally cool to train on copyrighted material.
    0:42:35 Somebody who thinks like that is the death of creativity
    0:42:36 as we know it too.
    0:42:37 And you have to-
    0:42:38 – I think it’s good to hear both sides.
    0:42:40 I mean, I agree, but I’m concerned.
    0:42:41 Like I used to live in San Francisco
    0:42:43 and like they’re like pushing for regulation now.
    0:42:44 We’re like to-
    0:42:45 – Who’s they?
    0:42:46 You mean, you mean, Sam Altman?
    0:42:47 – The government, yeah.
    0:42:48 Well, no, the government.
    0:42:49 They’re like, they’re like pushing a bill through right now.
    0:42:51 They’re trying to like, they’re trying to fast track it.
    0:42:53 I forgot who’s doing the bill,
    0:42:55 but we’re basically,
    0:42:57 if you launch a new large language model-
    0:42:58 – Need approval.
    0:43:00 – Yeah, you like need approval,
    0:43:02 but also like you have to basically sign something
    0:43:03 and like it’s perjury if you’re lying
    0:43:05 that this model can do no harm.
    0:43:06 And it’s like-
    0:43:07 – Yeah, like who’s gonna do that?
    0:43:08 Yeah.
    0:43:09 – Yeah.
    0:43:12 And so I agree like nuance is important and like,
    0:43:15 I do consider myself kind of part of the e/acc movement,
    0:43:16 but more generally just like a techno optimist.
    0:43:18 – Do you have it in your bio still?
    0:43:19 I don’t know.
    0:43:20 – I don’t, I don’t.
    0:43:21 But I, you know, I like Beff.
    0:43:23 I like all the people who are part of that.
    0:43:24 – That’s cool.
    0:43:25 – I think in general,
    0:43:27 I think in general it’s right, you know,
    0:43:29 but nuance is important, I agree.
    0:43:30 – Let me put it this way.
    0:43:31 I think the way I see it is like,
    0:43:34 we’ve got enough talented humans out there
    0:43:37 that are like pushing for like acceleration
    0:43:39 and there’s enough talented people out there
    0:43:41 that are pushing for, you know,
    0:43:43 I would say pumping the brakes
    0:43:46 for the lack of a better way to put it in certain areas.
    0:43:48 And I think like in totality,
    0:43:52 like we’ll reach like some optimum solution
    0:43:53 because of those influences.
    0:43:54 And I think it’s always been like that, right?
    0:43:58 Like, I mean, just like the early days of music,
    0:44:00 everyone’s like, oh yeah, Napster and like peer-to-peer
    0:44:01 and let’s just go crazy.
    0:44:03 And then things settled down
    0:44:05 and we found a business model that worked.
    0:44:07 Maybe it’s not perfect, right?
    0:44:09 Like people have a lot of gripes with the Apple
    0:44:10 and the Spotify business model,
    0:44:13 but I think we found this like globally optimum solution.
    0:44:15 – I mean, it doesn’t always work out, right?
    0:44:16 Though, like, look at nuclear plants, right?
    0:44:18 Like in the past, the US was gonna build
    0:44:21 all these nuclear plants to solve energy problems
    0:44:23 and we didn’t do it because of regulations
    0:44:24 and because of fear.
    0:44:25 And now we’re like trying to solve
    0:44:26 all these global warming problems.
    0:44:29 Other thing is like, well, we always kind of had nuclear there
    0:44:31 that we could have been using and it worked.
    0:44:32 So it doesn’t always work out.
    0:44:34 Like it often does, but it doesn’t always.
    0:44:37 – Totally, I mean, one of the funnest topics
    0:44:42 is getting into how do politicians and regulators
    0:44:46 even regulate this like sort of nebulous set of technologies.
    0:44:48 It’s not just large language models, right?
    0:44:50 Like there’s all the perception AI stuff
    0:44:52 and like the implications there.
    0:44:55 But like a group of these technologies
    0:44:57 that like sort of permeate everything. And, my God,
    0:45:00 like some of the stuff that I’m like doing research
    0:45:04 right now on just the intersection of neuroscience and AI
    0:45:07 and what we’re gonna be able to do with just like passive
    0:45:10 neural like interfaces with earbuds and things like that.
    0:45:14 I mean, there’s gonna be some real big ethical quandaries
    0:45:15 that pop up.
    0:45:18 And so yeah, yeah, trying really hard to bring like
    0:45:19 still be techno-optimist,
    0:45:21 but bring that balanced perspective.
    0:45:23 And I think folks are gonna like it.
    0:45:24 – I think at the end of the day,
    0:45:26 the important thing is empathy, right?
    0:45:30 Like the perspective I come from is I tend to be
    0:45:31 a very empathetic person, right?
    0:45:34 I want to hear both sides of the story.
    0:45:35 I want to hear both perspectives
    0:45:38 and I want to be empathetic to both sides too.
    0:45:40 Like if somebody is genuinely worried
    0:45:43 that this technology is taking their job,
    0:45:44 I want to understand why.
    0:45:48 I want to understand what we can do to sort of mitigate
    0:45:50 the damages that could be done from this.
    0:45:53 Like I’m always gonna come from that place of empathy,
    0:45:55 which is why I’ve never sort of identified
    0:45:58 with the e/acc movement, is like,
    0:46:00 I don’t necessarily think we should always be pushing
    0:46:03 everything forward as fast as possible.
    0:46:05 I think we should be listening to the fears.
    0:46:06 We should be listening to the concerns.
    0:46:09 We should be figuring out sort of middle grounds.
    0:46:13 Like you mentioned, there’s always people on both sides
    0:46:15 which kind of creates a decent checks and balance
    0:46:17 to make sure one side doesn’t go too far
    0:46:20 and AI nukes the world,
    0:46:22 but also the other side doesn’t go too far
    0:46:25 and technology stops advancing completely.
    0:46:29 Those checks and balances I think are a net positive overall.
    0:46:30 I think those need to be there.
    0:46:33 – I agree overall, but I mean, I think the big thing
    0:46:36 that was like, the argument e/acc would be making is
    0:46:39 that compounding is one of the most important powers
    0:46:40 out there, right?
    0:46:41 Like an idea that if we build faster
    0:46:43 then the technology in the future will be better
    0:46:44 and better and better and we’ll start solving
    0:46:46 real world problems like cancer
    0:46:48 and all these other things that maybe we could have been doing
    0:46:52 if we weren’t so like quick to just regulate everything.
    0:46:54 And so I think with AI, it’s the same thing.
    0:46:56 Like sure, like some regulation in the future
    0:46:58 might make sense, but if we just start throwing it
    0:47:00 out there right now, we’re gonna slow down the compounding
    0:47:03 and like the exponential, we’ll stop the exponential
    0:47:05 from happening with our regulations.
    0:47:08 And we, you know, yeah, maybe some jobs would be lost
    0:47:10 in the short term, but in the long term,
    0:47:13 we could have cured cancer, we could have solved, you know
    0:47:15 global warming issues, all kinds of other problems
    0:47:18 that we could have solved if we would just wait
    0:47:19 and see what happens with the technology.
    0:47:20 And yeah, now there’s a big problem.
    0:47:23 Okay, maybe make a regulation, but just don’t do it
    0:47:23 like right at the beginning,
    0:47:24 like they’re trying to do right now.
    0:47:27 – I mean, India totally flipped their decision, right?
    0:47:29 Initially they were like, oh, you have to get every model
    0:47:31 approved and they’re like, actually we’re gonna retract
    0:47:33 this part, which was really interesting.
    0:47:36 And I mean, regulation also,
    0:47:38 like, it could end up in a place where it just only benefits
    0:47:41 the incumbents, like the largest AI labs too, right?
    0:47:44 Like the regulatory capture like point of view,
    0:47:47 it’s like it could end up in a place where like any new
    0:47:50 innovation from a startup can’t actually happen.
    0:47:52 And they’re the ones that get these like onerous compliance
    0:47:55 requirements and they can’t afford a team of lawyers
    0:47:57 unless they’re like super VC backed.
    0:47:59 And then like, what an inefficient use of VC capital
    0:48:02 instead of innovating, you’re sort of like navigating
    0:48:05 like the legal landscape of like a heavily regulated
    0:48:05 industry.
    0:48:07 So I think like your point about nuclear is well taken too.
    0:48:10 It’s like, I mean, like a lot of folks, I mean,
    0:48:12 including Gary brings up the example of like, you know,
    0:48:16 like basically air travel and airplanes are still well
    0:48:19 regulated, but yet we’ve got this whole Boeing fiasco
    0:48:20 happening, right?
    0:48:23 It’s like where you’ve got one really big incumbent
    0:48:25 and there’s probably a revolving door between regulatory
    0:48:29 agencies and Boeing and, you know, so it’s-
    0:48:30 – And when the Wright brothers got started, I mean,
    0:48:31 they weren’t being heavily regulated as they were
    0:48:33 like inventing the plane.
    0:48:34 – Totally.
    0:48:35 – They were out there just like out in Ohio,
    0:48:37 just, you know, trying shit out.
    0:48:37 So I mean-
    0:48:39 – I think one thing a lot of people talk about
    0:48:42 is that, just like technology sunsets, regulations
    0:48:45 need to sunset rather than us adding more and more
    0:48:46 regulations.
    0:48:48 So I think this is where things get geopolitical too.
    0:48:51 It’s like, I think China is so much savvier about AI
    0:48:53 regulation than the US is right now.
    0:48:57 And I feel for the politicians, I think they’re like asking
    0:48:59 for this type of engagement.
    0:49:01 And I think it’d be good if we engage with them on this
    0:49:05 and like bring those perspectives to bear rather than,
    0:49:07 I don’t know, just being like a regulation, bad,
    0:49:10 innovation, good, let’s keep innovating.
    0:49:13 And so it’s like, it’s nuanced, but then again, like,
    0:49:18 look, I’m a, I’ve always been a Libra and like, you know,
    0:49:21 kind of trying to build bridges between two worlds.
    0:49:25 – Yeah, no, I mean, I think this being a super nuanced
    0:49:28 conversation is like the understatement of the episode here.
    0:49:31 I think there’s just so many different like rabbit holes
    0:49:33 that we could potentially go down when it comes to the
    0:49:34 regulation thing.
    0:49:37 I think you’re going to have to be one of our sort of
    0:49:38 recurring guests.
    0:49:41 Maybe every few months jump on and nerd out about this stuff.
    0:49:44 But, you know, I do want to give you the opportunity to
    0:49:46 tell us what else you’re working on.
    0:49:48 If there’s a place you think people should go check you
    0:49:51 out, your Twitter, your YouTube, obviously the TED AI
    0:49:53 podcast coming out later in May.
    0:49:56 – Yeah, so just please follow me on Twitter @bilawalsidhu.
    0:50:01 You can also follow me on YouTube and TikTok @billyfx.
    0:50:03 If you’re interested in some more long form expositions
    0:50:05 that I do, check out the Creative Tech Digest.
    0:50:08 It’s both a newsletter as well as a YouTube channel.
    0:50:10 And yeah, of course, check out the TED AI show.
    0:50:14 Maybe the one last thing I’ll say is like if you’re a
    0:50:16 founder and a builder in this space building with any of
    0:50:19 the technologies that we talked about and you’re looking
    0:50:21 for early stage investment, I’m also a scout for
    0:50:23 A16Z Games.
    0:50:26 So just hit me up on Twitter or you can email me.
    0:50:28 We’ll put the email in the show notes as well.
    0:50:30 And I really appreciate you guys having me on.
    0:50:32 I’m wishing you all the success for your podcast.
    0:50:36 And Matt, I will see you at I/O.
    0:50:39 And Nathan, I hope to see you in 3D sometime soon.
    0:50:42 – Oh, come out the Kyoto, come on.
    0:50:43 – I gotta make it happen.
    0:50:44 – Awesome, Bilawal.
    0:50:45 Well, it’s been a blast.
    0:50:47 This has been one of my favorite conversations we’ve had
    0:50:47 so far, so fun.
    0:50:50 Excited to see you in person next week.
    0:50:51 – Cool, cheers.
    0:50:53 (upbeat music)

    Episode 8: Is Google’s dominance in search engines at risk with the rise of generative AI models? Hosts Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive into this topic with guest Bilawal Sidhu (https://x.com/bilawalsidhu), a host of The TED AI Show and a former Google employee experienced in AR/VR projects and creating 3D maps for Google Maps.

    In this episode, Bilawal explores the potential challenges facing Google’s search engine supremacy due to advancements in generative AI models and discusses the implications for the future of search engines and advertising. He dives into the impact of AI-generated content on search results and the need for a nuanced approach in navigating the evolving landscape of digital information.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Decade in tech, six years at Google.
    • (03:17) Time at Google as a product manager.
    • (06:27) Discussion about impact of TikTok and YouTube.
    • (12:28) Models need to be anchored in real knowledge.
    • (18:23) Testing GPT-2 in the wild with users.
    • (19:37) AI expectations not met yet.
    • (23:29) Spatial intelligence for machines and photogrammetry summary.
    • (26:02) Implicit representations enable flexible, accurate rendering techniques.
    • (32:08) Nvidia’s Earth 2 initiative predicts real-world weather.
    • (34:41) Training AI by creating virtual reality scenarios.
    • (36:33) Exciting Ted show explores impact of AI technology.
    • (40:15) Editorial perspective in Ted show seeks balanced, nuanced viewpoints.
    • (45:12) Delay regulations to accelerate technological advancement impactfully.
    • (47:18) Regulations should sunset, engage with China’s perspective.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • OpenAI GPT-4o, Google Gemini Update, and Microsoft Copilot: Everything You Need to Know

    AI transcript
    0:00:02 (upbeat music)
    0:00:06 – Hey, welcome to the Next Wave podcast.
    0:00:07 I’m Matt Wolf.
    0:00:09 I’m here with my co-host, Nathan Lanz.
    0:00:12 And in this show, it is our goal to keep you looped in
    0:00:15 on all the latest AI news, all the coolest AI tools,
    0:00:18 and just generally what’s going on in the AI world.
    0:00:20 And the last couple of weeks
    0:00:23 have been absolutely insane in the AI world.
    0:00:24 There have been so many announcements.
    0:00:28 We had Google I/O, we had OpenAI’s announcements.
    0:00:30 We had two different keynotes from Microsoft,
    0:00:32 and all of these announcements
    0:00:34 had so much going on around AI.
    0:00:38 So we thought we’d do a fun little bonus episode,
    0:00:39 maybe a little bit shorter than normal,
    0:00:41 and just kind of riff and talk about
    0:00:43 some of these announcements and, you know,
    0:00:45 get you looped in on what’s going on
    0:00:46 in the world of AI right now.
    0:00:50 So the first event that came up last week
    0:00:52 was the OpenAI event, which is kind of funny,
    0:00:55 because we knew Google I/O was happening
    0:00:56 last week on Tuesday.
    0:00:58 So what does OpenAI do?
    0:01:00 They go and make their announcement on Monday,
    0:01:03 the day before the big Google I/O event.
    0:01:06 That was sort of the kickoff to this announcement fest
    0:01:09 of all of these AI announcements from these big companies.
    0:01:14 And of course, the big OpenAI announcement was GPT-4o.
    0:01:15 And during this announcement,
    0:01:18 they showed off some demos of GPT-4o,
    0:01:22 being able to sort of have conversations with you,
    0:01:24 the inflections of the voices change,
    0:01:28 the voices sound eerily similar to Scarlett Johansson,
    0:01:31 which is the voice of the AI in the movie Her.
    0:01:34 So there’s some drama going around about that.
    0:01:36 But I figured let’s kick off this conversation
    0:01:38 talking about GPT-4o.
    0:01:44 When all your marketing team does is put out fires,
    0:01:45 they burn out.
    0:01:46 But with HubSpot,
    0:01:49 they can achieve their best results without the stress.
    0:01:52 Tapping into HubSpot’s collection of AI tools
    0:01:55 lets them pinpoint leads, capture attention,
    0:01:58 and access all your data in one place.
    0:01:59 Keep your marketers cool
    0:02:01 and your campaign results hotter than ever.
    0:02:05 Visit hubspot.com/marketers to learn more.
    0:02:11 – So, I mean, Nathan, like what are your initial thoughts?
    0:02:13 When you watch that keynote, like,
    0:02:15 were you blown away by it?
    0:02:17 Were you underwhelmed by it?
    0:02:19 Like what were your thoughts when you first watched it?
    0:02:20 – I was honestly blown away by it.
    0:02:21 Like just like, you know,
    0:02:22 in one of our previous videos,
    0:02:25 we talked about how I think in a year from now,
    0:02:27 voice will be the main way that you interact with AI.
    0:02:29 And it’s like, okay, well, now here it is.
    0:02:32 Like they’re showing you can actually just talk to the AI.
    0:02:33 You can actually even interrupt it.
    0:02:35 It can be talking and you can just like interrupt it
    0:02:36 and start talking again.
    0:02:37 It’s not like, it’s not like the old one
    0:02:41 where you had to like record and then like processes
    0:02:43 and then it says something and you had to wait till it’s done.
    0:02:45 No, you can just like talk back and forth with it.
    0:02:47 And it wasn’t perfect,
    0:02:50 but damn it was pretty good for like a first version
    0:02:52 of like this kind of voice interact with AI.
    0:02:54 But yeah, I was pretty blown away by it personally.
    0:02:55 – You know, one of the big announcements
    0:02:57 that came out with the GPT-40
    0:03:00 is that pretty much the latest state of the art models
    0:03:02 that, you know, the general consumers
    0:03:04 are going to get access to,
    0:03:05 they’re going to give them access for free, right?
    0:03:09 So like GPT-4o, people are going to get,
    0:03:10 they’re going to get access to that
    0:03:12 just in the free ChatGPT now. Before, you know,
    0:03:15 if you had the free ChatGPT, you were using, you know,
    0:03:17 GPT-3.5, an older model.
    0:03:20 If you were on the ChatGPT Plus subscription,
    0:03:23 you got GPT-4 Turbo.
    0:03:25 The naming conventions are all, you know, weird
    0:03:30 ’cause you got GPT-4, GPT-4 Turbo, GPT-4o, GPT-3.5.
    0:03:33 Like it’s all confusing, but you know,
    0:03:35 Sam Altman actually made a comment
    0:03:36 in one of the recent interviews
    0:03:38 that he’s probably going to stop, you know,
    0:03:41 naming them GPT-4, GPT-5, GPT-6, right?
    0:03:43 They’re going to just sort of land on a name
    0:03:46 and just keep on improving on that version
    0:03:47 over and over again.
    0:03:49 But I thought it was really cool
    0:03:51 that they’re basically saying our most state-of-the-art
    0:03:53 model, the best models that we’re putting out,
    0:03:55 you’re going to be able to use just for free now
    0:03:57 in the free version of ChatGPT.
    0:03:59 I thought, I thought that was really, really cool.
    0:04:01 And then if you’re a plus subscriber,
    0:04:02 then you just get more use, right?
    0:04:04 You get, you’re going to get some of the features
    0:04:06 a little bit earlier than everybody else,
    0:04:08 but you’re going to get like five times more
    0:04:10 input/output with the model.
    0:04:11 – I think it’s a big deal
    0:04:13 because you know, most people when they think of AI,
    0:04:15 they’re thinking of ChatGPT
    0:04:17 and they’re typically thinking of the free model, right?
    0:04:20 Which has been 3.5, which you know,
    0:04:22 compared to 4 is really, really bad.
    0:04:24 And so the fact now that someone gets,
    0:04:26 you know, I think 4o in some ways
    0:04:28 is not as good as 4 in some ways,
    0:04:31 but it’s pretty darn close and it’s way, way faster.
    0:04:34 So for most people, that’s going to be such a, you know,
    0:04:35 step forward in terms of what they believe
    0:04:36 is possible with AI.
    0:04:37 Like, and so I think that’s going to lead
    0:04:39 to like a lot more awakening around like,
    0:04:42 oh, AI is actually here, it’s really powerful.
    0:04:44 But, you know, I think one thing worth thinking about though
    0:04:46 is like, apparently they were training this model
    0:04:49 with a separate team ever since 2022.
    0:04:50 So my understanding is like, yeah,
    0:04:53 this is kind of their state-of-the-art model,
    0:04:55 but I don’t think it’s their like flagship model.
    0:04:57 I think it’s just like another model
    0:04:58 that they were developing.
    0:05:00 And so they put it out there for free.
    0:05:02 My theory is it’s probably built on like a better architecture
    0:05:03 where it’s more cost efficient.
    0:05:05 That’s why they’re able to give away for free.
    0:05:07 That’s why it’s faster.
    0:05:09 And then five is probably going to be built
    0:05:10 on top of that as well.
    0:05:12 So I think we’re still going to see GPT-5 quite soon.
    0:05:14 And it may not be multimodal.
    0:05:15 That’s my big question.
    0:05:17 Like, okay, so if it’s a different model,
    0:05:19 are we going to get something that’s dramatically smarter
    0:05:21 with GPT-5, but it’s not multimodal yet.
    0:05:22 It’s just text.
    0:05:25 – Yeah, but this new model, GPT-4o, was, you know,
    0:05:27 pretty multimodal, right?
    0:05:29 They were, they were inputting video,
    0:05:31 although I think the video they were showing,
    0:05:33 it was just sort of looking at screenshots from the video.
    0:05:35 It wasn’t actually watching the video
    0:05:37 because there was one moment in their demo
    0:05:39 where he picked up the phone and like smiled
    0:05:40 into the camera and it was like,
    0:05:42 oh, I’m looking at something wood.
    0:05:43 And he was like, no, no, that was before.
    0:05:45 Now look at my face, right?
    0:05:47 So it obviously sort of like took a snapshot
    0:05:49 of the table or whatever.
    0:05:51 And then when he looked at his face,
    0:05:53 it still was like thinking about the picture
    0:05:54 of the table or something.
    0:05:56 So I don’t think it’s actually using video,
    0:05:59 but it does seem to be ingesting audio.
    0:06:01 You can put images into it.
    0:06:04 So I mean, it’s pretty multimodal already.
    0:06:05 I think the real question is,
    0:06:08 are we going to get like actual video input with GPT-5?
    0:06:11 – Yeah, I guess we’ll see.
    0:06:12 – So there were some other announcements
    0:06:13 during the OpenAI event.
    0:06:15 So I’ll just kind of rattle off some of them.
    0:06:17 I’ve got my notes here in front of me.
    0:06:19 They talked about a new desktop app,
    0:06:22 which it’s pretty cool.
    0:06:24 It’s only available on Mac right now.
    0:06:27 They didn’t make it available on PC right away.
    0:06:30 And I think I’m at the Microsoft event right now.
    0:06:31 If you’re watching on video,
    0:06:33 you can see I’ve got a different background than normal.
    0:06:37 I think we found out why they didn’t release a Windows version
    0:06:38 while I’m at the Microsoft event,
    0:06:39 but we’ll get into that
    0:06:42 when we start talking about the Microsoft event here.
    0:06:44 But the desktop app, it’s pretty cool.
    0:06:45 I tested it out on my Mac.
    0:06:47 It doesn’t have the voice features in it yet.
    0:06:49 It doesn’t have the feature
    0:06:51 where you can view your desktop yet.
    0:06:54 You can take a screenshot, drag the image in,
    0:06:57 and chat with the screenshot essentially,
    0:06:58 but it doesn’t just look at your desktop
    0:07:00 like they showed in the demo yet.
    0:07:02 It’s pretty much the same thing
    0:07:04 you get inside of the ChatGPT web app,
    0:07:05 just on your desktop.
    0:07:10 Now we also learned that that gpt2-chatbot
    0:07:12 that was all over LMSYS on the Chatbot Arena,
    0:07:15 you know, apparently that was GPT-4o.
    0:07:17 That was them kind of testing that out
    0:07:20 to see how people would work with it.
    0:07:23 And you know, the other big thing was like
    0:07:24 the voices in it, right?
    0:07:28 Like the GPT-4O, it feels like a marginal improvement
    0:07:31 with like the intelligence of it, in my opinion, right?
    0:07:33 It doesn’t seem like huge leaps
    0:07:34 above what we were getting before.
    0:07:36 Other than in coding, it seems to work really well
    0:07:39 with coding, but the like the voice stuff
    0:07:41 was I think the thing that most people
    0:07:43 were going nuts about from that keynote, right?
    0:07:46 When you speak to it, you can tell it
    0:07:49 to give you more emotion, get more excited about it,
    0:07:52 talk like a robot, do all that sort of stuff.
    0:07:53 And then when they were demoing it,
    0:07:55 that’s also where we heard the voice,
    0:07:58 which sounded very similar to Scarlett Johansson.
    0:08:00 But I think that was the thing
    0:08:03 that people really went nuts over was that voice thing.
    0:08:05 And like you mentioned earlier in this episode,
    0:08:07 we did a whole episode about how like
    0:08:09 voice is that next thing, right?
    0:08:12 Voice is what’s going to be the sort of input
    0:08:15 of all of these large language models.
    0:08:17 And when chat GPT showed off that voice
    0:08:19 and there was really like no latency,
    0:08:21 they would ask a question, it would respond
    0:08:22 almost in the same amount of time,
    0:08:23 a real human would respond.
    0:08:26 You know, that was the thing that was probably
    0:08:29 the biggest sort of differentiator about,
    0:08:31 you know, that from what came before it.
    0:08:32 – Yeah, I mean, I tweeted about the whole
    0:08:34 Scarlett Johansson thing yesterday
    0:08:35 and it got a lot of attention,
    0:08:37 especially on LinkedIn for some reason.
    0:08:38 People, you know, seem pretty upset about it.
    0:08:40 You know, I have mixed feelings.
    0:08:41 I mean, I guess to give people context,
    0:08:44 like Scarlett Johansson put out a statement
    0:08:45 or I’m not sure if it was officially from her,
    0:08:48 but her publicist, you know, said that it really happened
    0:08:50 where OpenAI contacted Scarlett Johansson,
    0:08:52 asked like, can we use your voice?
    0:08:53 She’s like, no, you can’t.
    0:08:57 And apparently they kept trying and she just refused.
    0:09:00 And they kind of picked a voice that seemed similar to hers.
    0:09:01 And then she’s saying, not only that,
    0:09:03 but then like they threw it in my face
    0:09:04 ’cause like Sam Altman like tweeted to somebody
    0:09:06 like the word “her,” you know,
    0:09:09 and like referencing the movie where she’s like the,
    0:09:11 you know, the main voice for the AI system.
    0:09:12 I have mixed feelings about it.
    0:09:14 ‘Cause like when I, when you actually,
    0:09:16 somebody else shared a, you know,
    0:09:18 side-by-side comparison of the voices, you know,
    0:09:20 like here’s one, here’s the Sky voice,
    0:09:23 the OpenAI voice, and then here’s Scarlett Johansson’s voice.
    0:09:24 You know, they’re quite different.
    0:09:26 I mean, like they don’t sound like,
    0:09:28 sure it sounds like an attractive woman talking
    0:09:29 or something, but it doesn’t sound
    0:09:31 really like Scarlett Johansson.
    0:09:33 And then like, so can she really like,
    0:09:34 what does she own?
    0:09:37 Like AI, like feminine AI voices forever, right?
    0:09:39 Like that’s kind of, you know,
    0:09:40 that’s kind of like a wild statement to make
    0:09:42 that you own that, but they, but who knows?
    0:09:45 They’ll probably still end up having to settle with her
    0:09:48 for the fact that like he said her, if I had to guess.
    0:09:49 – Yeah, yeah, that’s true.
    0:09:51 I mean, OpenAI did put out their own statement
    0:09:52 around what was going on.
    0:09:54 They said that they hired a different voice actor,
    0:09:55 not Scarlett Johansson.
    0:09:57 They trained on hours and hours
    0:10:01 of this other voice actor’s work to get this in.
    0:10:03 But from what I understand,
    0:10:05 they actually are not planning on launching
    0:10:07 with that Sky voice now, you know,
    0:10:08 just to avoid all the controversy
    0:10:10 and all the issues that might come up.
    0:10:13 I think that, you know, right now they’re just saying,
    0:10:14 we’re just not going to launch with that voice.
    0:10:16 You know, we don’t want to deal with all that.
    0:10:18 So we’ll see how that plays out.
    0:10:20 So HubSpot just put out this really cool checklist.
    0:10:22 It’s called the AI adoption checklist.
    0:10:25 And if you’re trying to implement AI into your business
    0:10:29 in any way whatsoever, this checklist has you covered
    0:10:33 from security to privacy to planning to training.
    0:10:34 This has you covered.
    0:10:36 Click the link in the description below
    0:10:38 and grab the checklist today.
    0:10:40 – Yeah, so then the very next day
    0:10:41 after this open AI keynote
    0:10:43 where they made those announcements
    0:10:45 was the Google I/O event
    0:10:48 where that’s Google’s developer conference
    0:10:50 where they typically make all of their announcements.
    0:10:53 And this year it was once again, all about AI.
    0:10:55 In fact, at the end of the conference,
    0:10:58 Sundar, the CEO of Google actually counted up
    0:11:00 how many times they said AI.
    0:11:02 And it was like 121 times
    0:11:04 or something throughout the presentation.
    0:11:06 One thing that I find interesting
    0:11:09 between the two keynote presentations
    0:11:12 was that I feel like ChatGPT and OpenAI,
    0:11:16 they really sort of like honed in on like the one thing,
    0:11:17 right?
    0:11:19 They really sort of honed in on like the voice feature
    0:11:22 and the chatting and the conversational element of it
    0:11:25 and the tone and the intonations of the voice.
    0:11:28 It was very, very focused on that thing.
    0:11:31 The Google event in contrast to it was like,
    0:11:33 here’s an announcement, here’s an announcement,
    0:11:35 here’s an announcement, here’s an announcement.
    0:11:36 It was just two hours of an announcement
    0:11:39 after announcement, after announcement.
    0:11:40 I was at the event
    0:11:43 and I’m actually kind of struggling right now
    0:11:46 to remember what were all the announcements Google made, right?
    0:11:51 So like ChatGPT or OpenAI had like one big announcement
    0:11:54 that everybody remembers, everybody talked about.
    0:11:57 Google just like bombarded us with announcements
    0:11:59 but they were all kind of so marginal
    0:12:01 that I almost forget them.
    0:12:03 Like I know I’ve got my notes in front of me,
    0:12:06 but like there’s nothing that stands out in my mind
    0:12:10 of like Google talked about this and that is game changing.
    0:12:12 But I’m curious, Nathan, like from you,
    0:12:14 was there anything from the Google keynote
    0:12:18 that specifically stood out that was like,
    0:12:19 okay, wow, that’s something that Google did
    0:12:21 that I thought was impressive?
    0:12:23 – No, I mean, I think it was like information overload.
    0:12:25 Like they put out so many things
    0:12:29 and if you look at the open AI event in comparison,
    0:12:32 it was way more like Steve Jobs like, right?
    0:12:34 Where there’s like, here’s this one thing that we have
    0:12:37 and they just demonstrate in a beautiful way.
    0:12:39 And I think open AI did a really good job too
    0:12:40 where they even humanize it more
    0:12:42 by having all these different team members
    0:12:45 show the same thing in different ways.
    0:12:49 Here’s the AI voice, our CTO and co-founders showing it to you,
    0:12:51 presenting it to you, and then here’s people from the team
    0:12:53 and here’s how they’re using it.
    0:12:55 And they showed like these six, seven different use cases
    0:12:58 all with the same product, the same AI voice.
    0:12:59 And I thought that was such a great way
    0:13:02 to like reiterate the one thing over and over and over.
    0:13:04 So by the end of it, it was like really ingrained
    0:13:06 in your head like, oh, they’ve done an amazing job
    0:13:08 with AI voice, whereas with, you know, with Google,
    0:13:11 it’s, they release so many things and it’s like,
    0:13:13 it’s not clear when a lot of it’s actually gonna come out.
    0:13:15 That’s really hard to like pinpoint like,
    0:13:16 oh yeah, what’s the one thing?
    0:13:17 And I was like, it sounds like, oh yeah,
    0:13:18 they made a lot of progress.
    0:13:20 There’s a lot of cool things they did.
    0:13:21 You know, like I put out Twitter threads
    0:13:23 about both companies’ announcements.
    0:13:26 And, you know, my Twitter thread about OpenAI
    0:13:28 got like 5.9 million views.
    0:13:30 And my tweet thread about Google,
    0:12:33 I think it got maybe like 50,000 views, just in comparison.
    0:13:36 It kind of showed you the general interest level
    0:13:39 in OpenAI’s announcement versus Google’s announcement.
    0:13:40 – Yeah, one thing that I found like really interesting
    0:13:43 about the OpenAI event was that they had so much more
    0:13:45 that they could have talked about as well, right?
    0:13:48 Like they put out a blog post about all of the stuff
    0:13:51 that they were rolling out with their new rollout.
    0:13:54 This GPT-4o is its own image generator.
    0:13:55 It’s not using DALL-E.
    0:13:58 It’s like using its own built-in image generator.
    0:14:00 It can do text to like 3D object.
    0:14:03 It’s got like a ton of features on their blog post.
    0:14:06 But in that keynote, they focused in on like the one thing
    0:14:09 that I thought maybe they felt was gonna have
    0:14:12 the most impact on people from watching that event.
    0:14:13 So I thought that was really interesting
    0:14:16 that they just, they could have themselves
    0:14:18 done a two-hour keynote where they bombarded us
    0:14:21 with OpenAI announcements, but they chose to focus
    0:14:23 on just like one or two key things.
    0:14:25 So I felt that was really, really interesting as well.
    0:14:28 – Yeah, and I thought it was fascinating too
    0:14:31 that, you know, Sam Altman didn’t appear.
    0:14:33 And so for me, that’s like, okay, he didn’t appear
    0:14:36 because he’s going to appear when GPT-5 comes out
    0:14:37 in like three months.
    0:14:39 – I’m at the Microsoft event now.
    0:14:41 He was here. He was part of that keynote.
    0:14:43 But we’ll get to Microsoft in a minute.
    0:14:43 He’s like…
    0:14:47 We’ll be right back.
    0:14:49 But first, I want to tell you about another great podcast
    0:14:50 you’re going to want to listen to.
    0:14:54 It’s called Science of Scaling, hosted by Mark Roberge.
    0:14:56 And it’s brought to you by the HubSpot Podcast Network,
    0:15:00 the audio destination for business professionals.
    0:15:02 Each week, host Mark Roberge,
    0:15:04 founding chief revenue officer at HubSpot,
    0:15:07 senior lecturer at Harvard Business School
    0:15:09 and co-founder of Stage 2 Capital,
    0:15:12 sits down with the most successful sales leaders in tech
    0:15:15 to learn the secrets, strategies, and tactics
    0:15:17 to scaling your company’s growth.
    0:15:20 He recently did a great episode called How Do You Solve
    0:15:22 for a Siloed Marketing and Sales?
    0:15:24 And I personally learned a lot from it.
    0:15:26 You’re going to want to check out the podcast,
    0:15:27 listen to Science of Scaling
    0:15:30 wherever you get your podcasts.
    0:15:34 – I do want to talk about a few more
    0:15:35 of the Google things that they announced
    0:15:37 ’cause there was something I thought was cool
    0:15:38 while I was there.
    0:15:41 But they’re also like, if I’m being honest,
    0:15:43 they’re kind of like, they’re sort of gimmicky
    0:15:46 and they’re kind of like almost forgettable,
    0:15:47 but they’re still kind of cool, right?
    0:15:50 Like they showed off this feature called Ask Photos
    0:15:54 where you can basically use like an AI search
    0:15:55 to search through your photos.
    0:15:56 And some of the examples they gave was like,
    0:15:58 when did my daughter learn how to swim?
    0:16:00 And it would look through all of your photos,
    0:16:02 all of the timestamps, try to find photos
    0:16:04 where your daughter is swimming
    0:16:07 and then tell you your daughter learned to swim
    0:16:09 based on all of your photos, right?
    0:16:13 So it uses the photos as context to answer your questions.
    0:16:15 That I thought was pretty cool.
    0:16:16 Like I can see that being really useful
    0:16:19 if you have a giant database of photos on your computer
    0:16:21 with like thousands of photos
    0:16:24 and you’re trying to find like one specific thing
    0:16:26 or maybe you’re making a video online
    0:16:29 and you need like some B-roll for your video
    0:16:31 and you want the B-roll to just be like shots
    0:16:33 of the ocean or something.
    0:16:35 You could go in there and let the AI find
    0:16:37 like just the shots that you’re looking for.
    0:16:40 So I thought Ask Photos was a pretty cool feature.
    0:16:42 – When I saw that demo, I was like, oh, that was amazing.
    0:16:44 They probably should have made that better
0:16:46 and really focused on something like that
    0:16:48 ’cause it’s like, they announced like 10 things.
    0:16:50 And most of it seemed like they were just kind of like
    0:16:52 tacking on AI onto existing products, right?
    0:16:53 It’s like a whole joke about like–
    0:16:54 – I think a lot of it was.
    0:16:56 – If you see like the little three stars,
    0:16:58 that means you’re just like adding AI on
    0:17:00 for like the sake of it, right?
    0:17:03 And it felt like a lot of their stuff was that.
    0:17:04 And there was this engineer who made a statement
    0:17:06 about that the other day.
    0:17:08 Apparently he worked at Google for like 20 something years.
    0:17:10 He was a pretty, you know, top engineer at Google.
    0:17:12 And he said, they’re doing the same (beep)
    0:17:14 they did with Google Plus when like Facebook came out
0:17:16 where they’re like panicking.
0:17:17 They had to create something else and they’re like,
0:17:20 okay, we have to have a social network.
    0:17:22 Well, let’s just put all Google’s existing stuff
    0:17:24 into a social network and like it just doesn’t work.
    0:17:28 And you know, they’re also changing search too,
    0:17:29 which I mean, is a big deal.
    0:17:31 You know, where they’re like using AI
    0:17:33 to like analyze the results of a lot of search terms.
    0:17:36 I guess it’s only like 1% or so of searches
    0:17:37 are currently using that feature,
    0:17:40 but it rolled out like two days ago and you know,
    0:17:42 it’s apparently some people in SEO land
    0:17:45 are already seeing like 20% drop in traffic from that.
    0:17:46 Like in the last week.
    0:17:50 And so it’s, you know, they’re in a tough situation
    0:17:51 ’cause like all the stuff they released,
    0:17:54 they’re gonna like cause issues
    0:17:56 if they just start dramatically changing Google search,
    0:17:57 you know.
0:17:58 – Well, I mean, their revenue is all
0:18:00 from ad revenue from search, right?
    0:18:02 Like that’s the biggest portion of the revenue
    0:18:03 for the business.
    0:18:06 So if people stop clicking on those search ads,
    0:18:08 well, what happens to Google’s business?
    0:18:11 I’ve actually noticed that pop up a lot more often.
    0:18:14 Like almost every search I’ve done in the last like a week,
    0:18:15 I don’t search Google a lot anymore,
    0:18:18 but like I would say almost all of the searches
    0:18:19 I’ve done in the last week,
    0:18:23 I’ve seen that AI sort of summary thing come up at the top.
    0:18:24 I still end up clicking on the links though.
    0:18:27 Like usually if I’m Googling something,
    0:18:28 I’m trying to find a website.
    0:18:30 I’m not trying to find an answer to a question
    0:18:32 because if I’m trying to find an answer to a question,
0:18:35 I’m just going straight to ChatGPT these days, you know,
    0:18:36 or Claude or something like that.
    0:18:38 – You know, people are making a good point too.
    0:18:39 It’s like almost like Google is like violating
    0:18:41 this social contract that always existed
    0:18:43 with the open web, right?
    0:18:46 Where it’s like, hey, we’re like searching all your stuff.
    0:18:49 We are taking your data and we’re making money off of it,
    0:18:51 but we’re going to also send people to you.
    0:18:53 And so that’s always been this kind of like social
    0:18:54 contract that existed.
    0:18:57 So now if they are taking in people’s data,
    0:18:59 but then they’re just giving the answer,
0:19:00 then how are they any different? And then,
0:19:02 how could they possibly criticize ChatGPT?
    0:19:04 (laughing)
0:19:08 How could they criticize ChatGPT if OpenAI actually did
0:19:10 crawl all of YouTube’s data, like, you know,
0:19:12 the rumor is that they did.
    0:19:13 I think they have a hard argument there.
0:19:14 That’s probably why they haven’t really
0:19:17 gone after it yet, because they’re doing something similar.
    0:19:20 – Google’s a web crawler scraping company, right?
    0:19:23 Like it scrapes the entire web’s data also.
    0:19:26 The only difference is like if when Google scrapes the web,
    0:19:28 it’s actually beneficial to the creators
0:19:30 and when ChatGPT scrapes the web,
    0:19:32 it sort of disincentivizes creators, right?
    0:19:35 There’s definitely a big differentiator there.
    0:19:39 – For now, for now, but it’s changing almost every month
    0:19:39 right now.
    0:19:41 So it’s like every month it’s like, you know,
    0:19:43 people are going less to websites,
    0:19:44 they’re just getting the answer from Google.
    0:19:48 So yeah, I think Google’s in a very tough situation there.
    0:19:49 And I don’t know how they’re going to get out of it.
    0:19:51 – Yeah, well, the other thing is, you know,
    0:19:53 when it comes to like context windows,
    0:19:55 Google is currently the king, right?
0:19:59 Like Gemini now has a one million token context window
    0:20:01 and they said that’s jumping to two million tokens
    0:20:02 within the year.
    0:20:04 And just for, you know, context,
    0:20:09 one million tokens is about 750,000 words input and output.
    0:20:11 So the amount of words that you can put into the prompt
    0:20:13 and get out is 750,000.
0:20:15 So if they go to two million,
0:20:18 that’s 1.5 million words of in-and-out context.
    0:20:22 That’s enough to put in the entire Harry Potter book series
    0:20:23 and ask questions about it.
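The arithmetic here rests on the common rule of thumb that a token is roughly three-quarters of an English word. As a quick sanity check of the numbers in the conversation:

```python
# Rule-of-thumb conversion: ~0.75 English words per token.
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Rough word count for a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

print(approx_words(1_000_000))  # 750000 words for today's 1M-token window
print(approx_words(2_000_000))  # 1500000 words for the promised 2M window
```

The exact ratio varies with the tokenizer and the text, but it's close enough for back-of-the-envelope comparisons like the one above.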
    0:20:24 – Yeah, talk about Steve Jobs again.
    0:20:25 Like, you know, it’s like Steve Jobs
    0:20:26 wouldn’t talk about all the specs, you know,
    0:20:28 which now Apple’s doing that crap too.
    0:20:29 Like Steve Jobs would talk about
    0:20:31 how this is an amazing new thing.
    0:20:32 He wouldn’t talk much about the specs.
    0:20:34 You know, Google’s doing that same thing where like,
    0:20:36 here’s the specs, it’s all these, you know,
    0:20:38 here’s the context window, how big it is.
0:20:41 Versus OpenAI, who’s not even talking about that stuff.
    0:20:43 And I think it’s a trap because, you know,
0:20:45 one of my friends was at the OpenAI event
    0:20:48 and the rumor, you know, around town in Silicon Valley
0:20:52 is that with GPT-5, you don’t really need to worry about the context window.
    0:20:53 There’s like some major breakthrough
    0:20:55 where I’m not sure if the context window
    0:20:57 is gonna be dramatically larger,
    0:20:58 but apparently something has changed
    0:21:01 and they’re like that’s not gonna be much of an issue in GPT-5.
    0:21:04 And so I think with like Google trying to play that game
    0:21:06 of like we have the largest context window.
    0:21:08 I mean, maybe GPT-5 is gonna have like 10 million
    0:21:10 or 100 million or something ridiculous.
    0:21:11 Maybe they’ve made a breakthrough there.
0:21:13 And so Google’s gonna look pretty silly.
    0:21:15 Like we’ve been playing this like stats game
    0:21:17 where we have the best, you know, whatever
0:21:20 and the best specs, and
0:21:21 they’re gonna lose in that game as well, I think.
    0:21:24 – Yeah, I honestly think we’re gonna hit a point pretty soon
    0:21:26 where people just don’t even talk about context windows.
    0:21:28 This is gonna be like a non-issue.
    0:21:29 They’re gonna be so big that the context window
    0:21:33 is essentially more than you need, like always.
    0:21:35 The other thing that they talked about
    0:21:38 that was like pretty big news was their project Astra,
    0:21:39 which I don’t remember.
    0:21:40 I don’t know if you remember the demo,
    0:21:43 but they walked around with like an iPhone out
    0:21:45 and the iPhone was seeing things
    0:21:48 and they were having a conversation with the phone
    0:21:49 with what they were seeing, right?
    0:21:51 The example they showed in their demo,
0:21:54 was the video footage panned to a speaker
    0:21:57 and it was like, let me know when you see something
    0:21:58 that makes sound, right?
    0:22:00 And it was like, oh, I see a speaker on the table.
    0:22:02 And then they drew a little arrow on the camera
    0:22:03 and said, what’s this part of the speaker called?
    0:22:05 And it was like, oh, that’s called the tweeter, right?
    0:22:07 And then they walked around the room
    0:22:10 a little bit more with the camera
    0:22:12 and then the girl in the demo was like,
    0:22:14 do you remember where my glasses were?
    0:22:16 And it actually remembered from earlier video footage,
0:22:19 yes, your glasses were on that other desk over by the apple.
    0:22:22 So she went over to where her glasses were,
    0:22:24 put on the glasses and it was like this little Easter egg
    0:22:27 moment where she put on glasses,
    0:22:29 which looked like a hint at maybe a new version
    0:22:31 of Google Glass or something, right?
0:22:34 ‘Cause she put on these glasses, set her phone down
    0:22:38 and then continued to have a conversation with the AI bot,
    0:22:40 but now it was seeing what the glasses saw
    0:22:43 and it also looked like it had a little tiny heads up display.
    0:22:47 So whatever she was asking and whatever it was responding with,
    0:22:50 it was almost like captioning it in front of her face
    0:22:52 on the heads up display.
    0:22:54 And Project Astra was actually one of the things
    0:22:56 that they had demos available.
    0:22:59 So I was actually able to demo Astra
    0:23:00 while I was at Google IO.
0:23:03 And I mean, it worked pretty much the way
0:23:04 they showed it in the demo.
    0:23:08 It wasn’t like previous Google demos
    0:23:10 where they made it look like real time,
    0:23:11 but it wasn’t really real time.
    0:23:14 It worked just like the demo.
    0:23:16 And I thought that was actually pretty impressive.
    0:23:17 I can see that being useful,
    0:23:19 especially when it gets into like glasses and stuff.
0:23:21 – So since you actually used it, I’m curious:
0:23:23 my perception was
0:23:27 that the OpenAI voice was more responsive than the Google one,
0:23:29 but the Google one seemed to be taking more video input
0:23:31 ’cause it seemed to actually understand the surroundings
0:23:34 a bit better than the OpenAI demo.
0:23:35 So is that kind of like–
    0:23:36 – There’s definitely more of a delay, right?
    0:23:38 So when you were using the Google one,
    0:23:39 you would ask it a question
    0:23:41 and then there would be like a, you know,
    0:23:45 one, two, three, four, it would respond.
0:23:48 The new version of ChatGPT was almost instant.
    0:23:51 But if you pay attention,
    0:23:55 it uses like filler words to make it feel more instant, right?
    0:23:57 So what it would kind of do is you would ask it a question
    0:24:00 and it would almost like repeat the question back to you.
    0:24:03 And while it’s repeating the question back to you,
    0:24:04 that’s where that sort of delay,
    0:24:06 that latency would normally be.
    0:24:08 So you would say like, “Hey, what am I looking at?”
    0:24:11 And it would say, “Okay, let me take a look.”
    0:24:12 It looks like you’re looking at
    0:24:14 and then it would go into, you know, its response.
    0:24:15 – Which is brilliant.
    0:24:16 – So it was using that like filler.
    0:24:17 – Yeah, it’s brilliant.
    0:24:19 – But it makes it feel more natural
    0:24:20 ’cause we talk like that too.
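The latency-masking trick described above, speaking a filler acknowledgment while the real answer is still being generated, can be sketched as two concurrent tasks. Here `speak` and `slow_model_answer` are made-up placeholders for a text-to-speech call and the actual model round-trip:

```python
import asyncio

async def speak(text: str) -> None:
    # Placeholder for a text-to-speech call.
    print(text)

async def slow_model_answer(question: str) -> str:
    # Placeholder for the real model round-trip, which takes a while.
    await asyncio.sleep(1.0)
    return "It looks like you're looking at a speaker."

async def answer(question: str) -> None:
    # Kick off the expensive model call immediately...
    task = asyncio.create_task(slow_model_answer(question))
    # ...and cover its latency with a filler acknowledgment,
    # so the user hears something right away.
    await speak("Okay, let me take a look.")
    # By the time the filler has been spoken, much of the
    # model latency has already elapsed.
    await speak(await task)

asyncio.run(answer("What am I looking at?"))
```

The perceived delay shrinks because the filler phrase and the model call overlap, which is exactly the "repeat the question back to you" behavior Matt noticed.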
    0:24:22 – Yeah, and you know, engineers,
    0:24:23 there have been a few engineers on Twitter
    0:24:25 that have been speculating that the reason
    0:24:28 that the OpenAI model is so much more responsive
    0:24:31 is because it actually is a breakthrough
    0:24:32 in terms of being multimodal,
    0:24:34 whereas Google is basically kind of stitching
    0:24:36 these different models together behind the scenes.
0:24:37 You know, they got like a video model,
0:24:40 a text one, you know, all this, right?
0:24:40 And audio.
    0:24:42 And then OpenAI actually announced it.
    0:24:43 They’re like, maybe people don’t realize it
    0:24:45 ’cause we don’t really talk about the technical side
    0:24:47 much on the stage, but yeah, that’s the big thing here.
    0:24:49 It’s like, this is a model where literally
    0:24:52 it’s doing all three at one time.
0:24:53 It’s not doing text or doing image
0:24:55 and then kind of
0:24:57 sending it to another model.
    0:24:59 No, this is all happening within the same model.
    0:25:00 That’s why it’s so fast.
    0:25:01 And so I think that was the thing
    0:25:03 that maybe a lot of people didn’t realize.
    0:25:04 – Yeah, yeah, I mean, you mentioned a video model.
    0:25:05 That was another thing that like,
    0:25:07 we’re not gonna cover everything
    0:25:08 that they covered at the Google event.
    0:25:10 There was just way too much, right?
    0:25:11 It was like two hours of announcements,
    0:25:14 but another one that was kind of interesting, I guess,
    0:25:16 was Veo or VEO, right?
    0:25:17 Like they were basically acting like
    0:25:19 this was their Sora competitor,
    0:25:21 but if you put them side by side,
    0:25:23 Sora’s still quite a bit better,
    0:25:24 but Veo, I think they said that
    0:25:27 can generate over one minute long videos
    0:25:29 in 1080p resolution.
    0:25:31 So, you know, it’s got the length
    0:25:33 and the kind of quality going for it,
    0:25:36 but it’s not like as realistic as what we saw out of Sora.
    0:25:39 It was something where, you know, people saw it, right?
    0:25:42 But everybody’s like first thing
    0:25:44 is they’re gonna look at it and compare it to Sora, right?
    0:25:45 Sora’s the best we’ve seen.
    0:25:48 Like that set the bar for AI video.
    0:25:50 And now you look at something like Veo
    0:25:51 and you’re like, oh, that’s pretty cool.
    0:25:53 But because we’ve already seen Sora,
    0:25:55 it’s hard to be impressed by it.
0:25:57 – Yeah, and I wasn’t impressed.
0:25:59 And, you know, Sora set the bar so high
0:26:01 that it’s
0:26:03 actually hurt all the other AI video companies, right?
    0:26:04 Like Runway and all the other ones, Pika.
    0:26:07 ‘Cause like I used to share all these AI video threads
    0:26:09 and they would get millions of views.
    0:26:10 And then now people are just not interested.
    0:26:12 Like you share it and like people are like,
    0:26:13 this is like garbage.
    0:26:15 – So hard to impress people now.
    0:26:16 – So hard to impress people.
    0:26:17 – Yeah, yeah.
    0:26:19 Well, one last thing that I will say about Google
    0:26:21 and I did sort of like a recap video
    0:26:23 of my experience of Google I/O.
    0:26:24 And I said this in that video,
    0:26:28 is that going to events like the Google I/O event,
0:26:31 you know, it is really easy to look at Google
0:26:34 as this big, huge corporate, you know,
0:26:37 faceless company that just wants to harvest
    0:26:39 everybody’s data and get as much money
    0:26:40 out of everybody as possible, right?
    0:26:43 Like I think that’s the perception of the Googles
    0:26:46 and the Microsofts and the metas of the world, right?
    0:26:48 But when I went to events like this,
    0:26:51 I got to meet so many of the people at Google, right?
    0:26:53 I got to meet like the various engineers
    0:26:54 and I got to meet, you know,
    0:26:57 project leaders on different projects
    0:27:00 that were working on a lot of this software.
    0:27:03 And the thing that kind of really struck me
    0:27:04 at this event was like,
    0:27:06 these people are really, really excited
    0:27:08 and really, really passionate about the tools
    0:27:09 that they’re building.
    0:27:12 And so even though Google itself is like this big,
    0:27:15 faceless, massive mega corporation,
    0:27:17 the individuals working on these projects
    0:27:18 are really, really excited
    0:27:20 and really passionate about what they do.
    0:27:25 And it really sort of added to the humanity of Google for me
    0:27:27 like actually getting to meet these people
    0:27:28 and talk with them.
    0:27:31 So, you know, that’s kind of the last little wrap up thing
    0:27:34 that I wanted to say about Google is the people at Google
    0:27:36 do care about the products that they’re developing.
    0:27:39 They do care about what people think of these projects.
    0:27:42 They do want to make them the best they can make them.
    0:27:45 And that was the overwhelming sense
    0:27:46 that I got at that event.
    0:27:48 Now, fast forward to this week,
    0:27:52 I’m actually at Microsoft Build out in Seattle right now.
    0:27:53 Same kind of thing.
    0:27:57 I’ve got to talk to the CTO of Microsoft.
    0:27:59 I’ve gotten to talk to a whole bunch of project leaders
    0:28:02 on different projects at Microsoft
    0:28:04 and actually getting to talk to the humans
    0:28:05 underneath these projects
    0:28:07 that are actually building these projects
    0:28:11 has been really interesting to me.
    0:28:15 It really, really does make me feel more connected
    0:28:16 to the company as a whole
    0:28:19 just because I see the excitement of the people
    0:28:21 that are building these things.
0:27:24 And so at this Microsoft event,
0:27:26 there were really two keynotes that happened.
    0:28:28 There was the Monday keynote
0:27:31 which was like a Copilot slash Surface keynote,
0:27:33 like the consumer-facing products.
    0:28:35 And then on Tuesday was Microsoft Build.
    0:28:38 And Microsoft Build was the developer conference
    0:28:40 which was really geared towards software engineers
    0:28:42 and people building the software.
    0:28:45 Not a whole lot of huge, exciting, crazy announcements
    0:28:47 that came out of Microsoft Build
    0:28:49 but the event that happened yesterday
    0:28:50 was really, really interesting.
    0:28:53 They announced a new feature called Recall
    0:28:56 which is really interesting.
    0:28:59 Basically what Recall does is it watches everything
    0:29:02 that’s happening on your computer and it saves it all.
0:29:06 So it’s like the history inside of your Chrome browser
    0:29:07 but for your entire computer.
    0:29:09 So anything you built in Photoshop,
    0:29:13 anything that you made with DaVinci Resolve,
    0:29:16 any tool that you used on your computer,
    0:29:19 it actually saved a history of all of that
    0:29:21 and you can go back and find all of that history.
    0:29:24 All of your browsing, all of your internet browsing,
    0:29:26 it saves all of that as well.
    0:29:28 It saves it all and you can go back through your history,
    0:29:31 scrub through it and find very, very specific points
    0:29:33 throughout your day.
    0:29:34 And I thought that feature was pretty cool
    0:29:37 but it obviously brings up some like security,
    0:29:41 some privacy issues like oh, if it’s recording everything
    0:29:45 I do on my computer, is Microsoft getting that data?
    0:29:46 Where does that go?
    0:29:49 What if there’s websites that I don’t want to be saved
    0:29:52 into my storage and remembered that I visited?
    0:29:55 Well, the cool thing about the Microsoft event
    0:29:57 and one of the other announcements that they made
    0:29:59 was that they’re actually developing new computers
    0:30:01 to be AI first.
    0:30:04 So a computer typically has a CPU and a GPU in it.
    0:30:08 These new Microsoft computers have a CPU, a GPU
0:30:11 and now what they call an NPU, a neural processing unit.
    0:30:15 So it’s like a separate card just to process the AI.
    0:30:18 So all of the AI that it’s using to record your computer
    0:30:21 and help you find what you did throughout the day,
    0:30:24 that is all happening locally on your computer.
    0:30:26 It’s not actually going out to the cloud
    0:30:29 and Microsoft isn’t getting any of this data from you.
    0:30:31 You can be not connected to the internet
    0:30:33 and still save all this history.
    0:30:36 Plus, they have the ability to go back through
    0:30:38 and delete history that you don’t want in there.
0:30:40 So let’s say you visited a website
0:30:41 that you don’t want other people to know about.
    0:30:43 Maybe you’re gift shopping or something.
    0:30:45 Let’s just use that as an example.
    0:30:47 I’m sure there’s other examples we can go with
    0:30:49 but let’s use the example of gift shopping.
    0:30:50 You don’t want somebody to come back,
    0:30:52 look at the history of your computer
    0:30:55 and see what gifts you were looking at for them.
    0:30:57 So you can go and delete that from your history.
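A toy version of the Recall idea described above, timestamped activity snapshots stored locally, searchable later, and deletable on demand, might look like the following. The schema and the `log`/`find`/`forget` helpers are invented for illustration; Recall's actual implementation is not public.

```python
import sqlite3
import time

# Everything stays in a local database, mirroring the claim
# that the history never leaves your machine.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE history (ts REAL, app TEXT, detail TEXT)")

def log(app: str, detail: str) -> None:
    """Record a timestamped snapshot of what just happened."""
    db.execute("INSERT INTO history VALUES (?, ?, ?)", (time.time(), app, detail))

def find(term: str) -> list[tuple]:
    """Simple substring search; the real feature layers AI search on top."""
    return db.execute(
        "SELECT app, detail FROM history WHERE detail LIKE ? ORDER BY ts",
        (f"%{term}%",),
    ).fetchall()

def forget(term: str) -> None:
    """The delete path: drop anything matching, e.g. gift-shopping pages."""
    db.execute("DELETE FROM history WHERE detail LIKE ?", (f"%{term}%",))

log("browser", "gift ideas for mom")
log("photoshop", "edited birthday-card.psd")
forget("gift")
print(find("birthday"))  # only the Photoshop entry survives
```

The interesting engineering in the real feature is doing this continuously across every app and making it searchable with a local model on the NPU; the storage-plus-delete shape, though, is roughly as sketched.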
    0:31:01 So it’s pretty secure, pretty private.
    0:31:02 They say they never are gonna train
    0:31:04 on any of your data that it’s saving.
    0:31:09 And supposedly it all stays local right on your computer.
    0:31:12 So that was one of the big announcements that they made.
    0:31:13 Well, two of the big announcements that they made.
    0:31:17 One was that they’re making their PCs AI first.
    0:31:19 And the second one is that you’re gonna have
    0:31:23 this recall feature built directly into Windows.
    0:31:24 So everything you do on the computer,
    0:31:26 there’s like a history of it that you can go back
    0:31:31 and find really, really easily using your AI search.
    0:31:33 The other big announcement they made at this event
0:31:37 was that Copilot is kind of getting an overhaul.
0:31:38 If you use Windows right now,
0:31:41 you might notice Copilot is like a little pop-up thing
0:31:44 that shows up on the right sidebar of your desktop.
    0:31:45 Well, now it’s an actual app
0:31:48 that can actually be snapped alongside windows
0:31:52 or float in front of other windows.
    0:31:54 And they showed off this really cool demo
    0:31:56 where you can actually chat with the bot
    0:31:59 and it will actually know what’s going on
    0:32:01 with your computer and actually speak back to you.
    0:32:03 So going all the way back to what we were talking about
    0:32:05 at the beginning of this podcast,
    0:32:10 OpenAI released a Mac app for ChatGPT
    0:32:12 that’s going to have the voice in it.
    0:32:15 And I think the reason that they didn’t release
    0:32:16 the Windows app along with it
    0:32:19 is because they knew Microsoft was going to release
0:32:21 this new version of Copilot,
    0:32:23 which kind of does all the same stuff.
    0:32:26 That was the reason I think that OpenAI
    0:32:27 didn’t announce a Windows app
0:32:29 because Copilot does that stuff.
    0:32:31 It can actually see what’s going on on your computer.
    0:32:33 You can talk to it with your voice.
    0:32:35 It will respond with a voice.
    0:32:37 It does that stuff as well.
    0:32:40 One of the really cool examples they showed off
0:32:42 was they had Copilot open
    0:32:44 and then somebody was playing Minecraft.
    0:32:45 And supposedly they were playing Minecraft
    0:32:46 for the first time.
    0:32:47 They didn’t know how to play.
    0:32:50 And they were having a conversation with the bot
    0:32:54 and the bot was explaining to them how to play Minecraft
    0:32:55 based on what it saw on their screen.
    0:32:58 So it was like, they opened up their inventory
    0:33:01 and asked the bot, “What do I build with this inventory?”
    0:33:03 And it’s like, “Oh, you need to build an axe.”
    0:33:04 And he’s like, “Okay, I’ll build an axe.”
0:33:07 And then he closes the inventory and a zombie pops up.
    0:33:08 And he’s like, “What’s that? What do I do?”
    0:33:10 And the AI was like, “Run, run, it’s a zombie.
    0:33:11 Get out of here.”
    0:33:14 So the AI was watching in real time
    0:33:15 as he was playing this game
    0:33:17 and actually giving feedback
    0:33:20 and telling him what to do next in the game.
    0:33:22 And that was also a really cool feature
    0:33:25 that they showed off at the Microsoft event.
    0:33:30 But it has been a whirlwind of AI announcements
    0:33:31 over the last couple of weeks.
    0:33:33 We’ve just been absolutely bombarded
0:33:36 between OpenAI, Google, and Microsoft.
    0:33:41 And then in June, we also have a Cisco event coming up
    0:33:42 where they’re gonna be talking
    0:33:44 about how they’re leveraging AI for cybersecurity.
0:33:46 We have a Qualcomm event coming up;
0:33:50 Qualcomm is starting to make a lot of the AI chips
0:33:51 that a lot of these products are using.
    0:33:53 In fact, at the Microsoft event,
    0:33:54 a lot of the Surface Pros are using
0:33:57 Snapdragon chips from Qualcomm.
    0:33:59 So Qualcomm has a bunch of AI announcements coming up.
    0:34:03 And then also in June, we have WWDC from Apple,
    0:34:06 which is the Worldwide Developer Conference.
    0:34:07 This is their big event
    0:34:10 where they announce all of their big new features
    0:34:11 that are coming out.
0:34:13 Last year, we got the announcement
    0:34:14 of the Apple Vision Pro.
    0:34:17 This year, we’re expecting a whole bunch of AI announcements,
0:34:19 possibly a new AI Siri.
    0:34:22 But it has just been a crazy few weeks
    0:34:25 and we have a few more crazy weeks coming up
    0:34:28 with these additional events that are happening.
    0:34:31 And hopefully, Nathan and I are gonna be able
    0:34:33 to keep you looped in and chatting
    0:34:35 about all of these upcoming events
    0:34:38 and filling you in on all of the big announcements
    0:34:39 that are happening.
    0:34:41 So if you’re not subscribed to this podcast,
    0:34:43 make sure you’re subscribed.
    0:34:45 We put out new episodes every Tuesday
    0:34:46 and every once in a while,
    0:34:48 when we have really, really exciting events
    0:34:49 happening in the AI world,
    0:34:51 we’re gonna drop these bonus episodes
    0:34:53 in between our normal episodes
    0:34:55 to make sure that you’re looped in
    0:34:59 and keeping your finger on the pulse of the latest news.
    0:35:02 So that’s the goal, a lot of exciting things happening
    0:35:05 and hopefully you’re here for it ’cause we’re here for it.
    0:35:08 And thank you so much for tuning in to this episode today.
    0:35:11 (upbeat music)
    0:35:14 (upbeat music)
    0:35:16 (upbeat music)

    Bonus Episode: How are the latest AI breakthroughs set to change our everyday lives? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) are here to break down the monumental updates from recent AI events and explore their potential impact.

In this bonus episode, Matt and Nathan discuss the recent announcements from OpenAI’s GPT-4o, particularly its groundbreaking voice interaction features, and compare it with Google’s latest AI advancements. From the intriguing controversy around the GPT-4o voice to the shift in search dynamics, this episode dives deep into the heart of the AI revolution. Don’t miss their insights on the importance of Google’s Project Astra, Microsoft’s new AI hardware, and the potential of upcoming AI tools and events.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • 00:00 AI announcements discussed, GPT-4 demo highlighted.
    • 03:41 AI advancement sparks awakening and debate.
    • 08:02 Twitter buzz about disputed Scarlett Johansson voice.
    • 10:21 AI dominates conference, Google focuses on announcements.
    • 14:08 Microsoft event, Google announcements, AI photo search.
    • 16:37 Google’s revenue relies on ad revenue. Ads matter.
    • 20:14 Astra project uses iPhone to recognize objects.
    • 24:57 Google employees are passionate and excited.
    • 28:20 New Microsoft computers incorporate AI for local processing.
    • 30:25 OpenAI and Microsoft apps for computer interaction.
    • 32:41 Apple WWDC announces new features and products.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • How Yohei Nakajima Created an Autonomous Startup Founder: Baby AGI

    AI transcript
0:00:07 Autonomous agents will have a huge impact, and I think the question is just a matter of when. So I started building
0:00:12 tools that were based on code, and it was so fast that I started prototyping just one or two things a week.
0:00:16 Sometimes it was something we used at our VC firm and sometimes it was just a pure experiment
    0:00:19 People started asking this seems like it’s more than an autonomous startup founder
0:00:24 So I was like, make the world a better place, and it just started thinking of ways to make the world a better place
0:00:26 Of course, it went viral, it got to a million views quickly
    0:00:31 And I was like alright, so this is more than an autonomous startup founder and one of my friends had commented, bro
    0:00:33 Did you just build a baby AGI?
    0:00:39 Hey, you welcome to the next wave podcast I’m Matt Wolf
    0:00:44 I’m here with my co-host Nathan Lanz and today we have an amazing guest for you today
    0:00:49 We are talking to Yohei Nakajima and we’re gonna be talking all about AI agents
    0:00:53 So thank you so much for joining us today. We’re really excited to have you on the show and
    0:01:00 Excited to dive in with you. Thank you for having me guys. Nathan. Good to see you again. Yeah. Good to see you again as well
    0:01:04 Yeah, I think it’d be kind of interesting to tell people I mean in an odd way, you know
    0:01:09 Talking to you about baby AGI about a year ago now, which seems crazy
0:01:15 It’s been a year, but a lot of the audience probably doesn’t know what baby AGI is, or how did you get that
0:01:19 crazy idea, and how did a VC who can’t code create this thing that was super viral?
    0:01:26 Well, yeah, so the background is that I’m a VC and I like to build as partially a hobby and as a way to learn
    0:01:31 I was more of a no-coder before I’ve always coded but not that good at it
    0:01:35 So I’ve always been more of a no-coder, but when AI came out I could suddenly just have AI write the code
    0:01:41 So I started building tools that were based on code and it was so fast that I started prototyping just one or two things a week
    0:01:42 sometimes it was
    0:01:45 something we use at our VC firm and sometimes it was just a pure experiment and
    0:01:49 I think it was project number probably around 70 ish
0:01:55 I was looking at HustleGPT, which was when people were using ChatGPT as a co-founder, saying I have a thousand bucks
    0:02:02 How do I grow it and then just doing whatever chat GPT told them to do and I thought that was a fascinating experiment and wanted to be
    0:02:05 Involved but it was too busy and I thought can I automate the human part?
    0:02:11 And so that led to a challenge over the weekend to prototype a autonomous startup founder
    0:02:15 And so of course I went to chat GPT and said I want to prototype an autonomous founder
    0:02:19 Here’s how I think it should work. Can you give me the code then after some iterations?
    0:02:22 It started working so I posted a demo online and it went viral
    0:02:27 I do got to a million views quickly then people started asking this seems like it’s more than an autonomous startup founder
    0:02:32 So I was like, make the world a better place, and it just started thinking of ways to make the world a better place
    0:02:34 And of course, at this point it’s just reasoning, so it’s just text
    0:02:38 But I said, make as many paperclips as possible, and the first thing it said was, this is actually unsafe
    0:02:40 So let’s first come up with safety protocols
    0:02:45 And I was like, alright, so this is more than an autonomous startup founder, and one of my friends had commented, bro
    0:02:47 did you just build a baby AGI?
    0:02:53 So that’s where the name came from. I was like, this is popular, like academics are loving it
    0:02:57 I need a paper, so I went to ChatGPT, dropped in the code, and said, give me an academic paper
    0:03:02 And I posted it on my blog as my paper, and then I had a tweet thread about the paper
    0:03:05 and then I open-sourced the code, and each time it just went viral
    0:03:09 So it was just a week of non-stop notifications, you know
    0:03:14 I learned about e/acc that week and about doomers, because they were both in my DMs. Some people loved it
    0:03:17 Some people hated it, but it was an absolutely wild week
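The loop Yohei is describing — an LLM executes a task, new tasks are created from the result, and the queue gets reprioritized — can be sketched roughly as below. This is a minimal illustration, not BabyAGI’s actual code: the three stub functions stand in for what would be ChatGPT calls, and all the names here are made up for the sketch.

```python
from collections import deque

# Minimal sketch of a BabyAGI-style loop: execute a task, create follow-up
# tasks from the result, reprioritize, repeat. The three stubs below stand
# in for LLM calls; the names are illustrative, not BabyAGI's actual API.

def execute_task(objective, task):
    return f"result of {task!r} toward {objective!r}"  # stub for an LLM call

def create_tasks(objective, last_task, last_result, pending):
    return [f"follow-up to {last_task}"]  # stub: the LLM proposes new tasks

def prioritize(objective, tasks):
    return sorted(tasks)  # stub: the LLM reorders the queue

def run(objective, first_task, max_steps=5):
    tasks = deque([first_task])
    results = []
    while tasks and len(results) < max_steps:  # hard cap so it can't run forever
        task = tasks.popleft()
        result = execute_task(objective, task)
        results.append((task, result))
        new = create_tasks(objective, task, result, list(tasks))
        tasks = deque(prioritize(objective, list(tasks) + new))
    return results

results = run("make the world a better place", "brainstorm ways to help")
```

Because the task-creation stub always proposes a follow-up, the `max_steps` cap is what stops the loop — the same reason the real system needs guardrails.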
    0:03:24 When all your marketing team does is put out fires they burn out but with HubSpot
    0:03:30 They can achieve their best results without the stress tap into HubSpot’s collection of AI tools
    0:03:36 Breeze, to pinpoint leads, capture attention, and access all your data in one place
    0:03:43 Keep your marketers cool and your campaign results hotter than ever. Visit hubspot.com/marketers to learn more
    0:03:53 I love that the paperclip maximizer scenario was one of your first tests, that you wanted to see how
    0:03:59 it would go. Well, what’s funny is somebody tagged Yudkowsky, and he actually retweeted or commented on it
    0:04:02 which I think got the doomers noticing me
    0:04:07 So what’s the current state of BabyAGI? Where does it stand today?
    0:04:11 First, after I released it, some people jumped in and started contributing. I’m not a developer
    0:04:15 so I don’t know how to manage it, so I had some community members who were willing to help support it
    0:04:17 So they were managing the PRs for me
    0:04:21 But there just wasn’t much support behind it, and then I had some more ideas around the framework
    0:04:24 And I realized I wasn’t actually good at reading other people’s code
    0:04:27 So I just went back to my original and started modding it
    0:04:30 And I went to my group and I was like, what should I do with it?
    0:04:33 They’re like, why don’t you just create a classics folder and just stick it in there
    0:04:40 So then there was BabyBeeAGI, BabyCatAGI, where I basically used BabyAGI as a way to introduce
    0:04:45 design patterns that I thought were interesting and useful for building an autonomous agent
    0:04:48 So I got all the way up to BabyFoxAGI. It was in alphabetical order
    0:04:56 And each time I introduced self-improvement, memory, new UX, and then at some point, I think around BabyFox
    0:05:00 I was like, the code’s getting a little clunky. I feel like I’m stuck a little bit
    0:05:04 I need to solve something else. And I kind of took a break, and then as of a month ago
    0:05:11 I just got back into rebuilding BabyAGI from scratch with a new architecture that’s fully graph-based
    0:05:16 So I just started tweeting about that, because it’s looking pretty good. But that’s the current state
    0:05:22 Are there any specific models that you really prefer to use? Because it seems like back when
    0:05:29 BabyAGI first came out, I don’t remember if GPT-4 was out yet or if it was still GPT-3.5
    0:05:35 But then there’s just been an explosion of LLMs, right? You’ve got all the Claude models. You’ve got Google’s models now
    0:05:43 You’ve got the Llama models. Is there a model that you think would work best as an agent, or are they all kind of similar?
    0:05:45 What are your thoughts there?
    0:05:48 I think picking the right model is extremely helpful
    0:05:56 in getting your agent to be more reliable, faster, cheaper. And then of course there’s fine-tuning, and there are a lot of techniques you can do
    0:06:04 For me personally, I’m much more interested in ideating around new design patterns
    0:06:08 And for that it’s easiest to just use the most powerful model
    0:06:12 even if it’s slow, just to get it working
    0:06:18 And then if that new design pattern works, then in theory I could go back and optimize the model and prompt
    0:06:22 But oftentimes instead of doing that I just open source it and move on to the next design pattern
    0:06:27 So the answer is, I have not really thought too much about the model. That being said, in my newest
    0:06:29 BabyAGI
    0:06:33 I’m actually using a library called LiteLLM that lets me switch between models easily
    0:06:36 So I have started using Claude
    0:06:42 Opus and Haiku. I mean, I remember when we talked before, I think one of the use cases we talked about was research
    0:06:47 It was like a straightforward first use case, that hey, these agents
    0:06:49 could go off and talk to each other and kind of figure out, like, hey
    0:06:54 let’s go do some research, whether you’re starting a new project or researching a new market or whatever
    0:06:59 Is that still one of the use cases? Yeah, that was one that was working pretty early on
    0:07:03 I mean, within a few months of BabyAGI launching, there were plenty of tools that, you know, I was using
    0:07:09 BabyAGI UI was one of the tools I was using all the time to prep for meetings. A common one: right
    0:07:13 after BabyAGI, I had a whole bunch of big-company execs reaching out wanting to pick my brain
    0:07:18 I figured I might as well just network, and one of the regular queries
    0:07:21 I had was: research this company
    0:07:26 their revenue drivers, their cost drivers, and their business units, as separate web searches
    0:07:32 Create a summary of that company based on that research, and then give me three strategies
    0:07:38 that leverage large language models for each business unit that would be impactful to their bottom line
    0:07:41 And then I would just let it run. It would take like five to ten minutes
    0:07:44 but then it would give me a report which had somewhere between, you know
    0:07:47 10 to 20 something
    0:07:51 ideas that I could then bring to the meeting, and I could just skim through that in 10 seconds
    0:07:55 and then cherry-pick three or four that I thought were good, and sound really smart in every meeting
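As a rough sketch, the meeting-prep query described above maps onto a handcrafted pipeline like this. `web_search` and `llm` are stand-ins for a real search API and model call, and `meeting_prep` is a hypothetical name, not an actual tool:

```python
# Sketch of the meeting-prep query as a handcrafted pipeline.
# web_search and llm are stand-ins for a real search API and an LLM call.

def web_search(query):
    return f"notes on {query}"  # stand-in for a real web search

def llm(prompt):
    return f"LLM answer to: {prompt[:60]}..."  # stand-in for a model call

def meeting_prep(company, units=("unit A", "unit B", "unit C")):
    # Separate web searches for revenue drivers, cost drivers, business units.
    research = {topic: web_search(f"{company} {topic}")
                for topic in ("revenue drivers", "cost drivers", "business units")}
    # Summarize the company based on that research.
    summary = llm(f"Summarize {company} given: {research}")
    # Three LLM-leverage strategies per business unit.
    strategies = {unit: [llm(f"Strategy {i} using LLMs for {company}'s {unit}: {summary}")
                         for i in range(1, 4)]
                  for unit in units}
    return summary, strategies

summary, strategies = meeting_prep("Acme Corp")
```

Every prompt and every step is fixed in advance — which is exactly what distinguishes this "handcrafted" style from an agent that invents its own task list.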
    0:07:59 So it made you sound smart. Was it actually like useful advice?
    0:08:05 It was good advice. What I found was helpful was it skipped that initial ideation
    0:08:10 Right, oftentimes when you’re presenting an idea, there’s the initial
    0:08:13 divergent thinking, where every idea is a good idea
    0:08:17 Let’s just collect all the ideas. And then you switch to a convergent thinking mode where you’re like, right, now
    0:08:20 let’s pare it down to the best idea so that we can present it
    0:08:24 Right, I found that these agents, or AI in general, but especially these agents
    0:08:28 can just automate that divergent thinking part. You’re presented with all the ideas that have taken everything into account
    0:08:32 And then I think people are still probably better at that convergent thinking part
    0:08:37 which is like, let’s really apply it to the current nuance and pick the handful that are most relevant
    0:08:42 Yes, it’s super powerful. I just wonder, it feels like it hasn’t really taken off in terms of people using it, though
    0:08:45 Right, so I wonder what the roadblock is there in terms of usage
    0:08:51 So I think there’s a learning curve for sure in using autonomous agents like the one I just described, right?
    0:08:54 I was typing in, hey, can you give me a report
    0:08:58 like, can you research company X and then research their cost drivers and revenue drivers?
    0:09:02 I think for most people, they would just want to say, hey, I want strategy
    0:09:07 I want LLM strategies for company X, and they want the LLM to think through what the searches should be
    0:09:10 to get that done, right?
    0:09:13 And so, um, in a talk I did last week
    0:09:19 I talked about the difference between what I call handcrafted agents, where you’re handwriting each prompt and API call
    0:09:24 and you’re designing the full flow, versus what I would call a
    0:09:28 specialized autonomous agent that’s dynamically generating its own task list
    0:09:31 I think when we think autonomous agent, we’re thinking of the latter one, right?
    0:09:32 If it’s handcrafted
    0:09:38 it’s almost just like a Zapier workflow, and it doesn’t feel autonomous if you’re writing it yourself. In building these
    0:09:41 more dynamic agents that generate their own task list
    0:09:48 one of the things I found was having good task flows to feed it as examples helps it get better
    0:09:50 today
    0:09:55 the companies that are generating revenue are creating what I think are handcrafted agents, where they’ve
    0:10:00 plugged all those in, and, you know, the executive can just ask a
    0:10:04 rough question, and the team has thought through how the AI should break it down
    0:10:07 And I think where we’re going with it
    0:10:12 is that eventually these kinds of skills will become more dynamic, and we’ll get the AI to be better at dynamically
    0:10:16 creating these task lists. But we’re just not quite there yet, where it’s good at it all the time
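One way to read "good task flows to feed it as examples" is a few-shot prompt: show the model a couple of known-good objective-to-task-list decompositions, then ask it to decompose a new objective in the same style. The example flows and prompt format below are made-up illustrations, not anything from BabyAGI itself:

```python
# Sketch: seeding a dynamic agent with example task flows as few-shot prompts.
# The flows and the prompt format are illustrative assumptions.

EXAMPLE_FLOWS = {
    "research a company": [
        "search revenue drivers", "search cost drivers",
        "summarize findings", "draft strategies"],
    "write a blog post": [
        "outline sections", "draft each section", "edit for tone"],
}

def build_prompt(objective):
    # Render each known-good flow as a few-shot example, then pose the new objective.
    examples = "\n".join(f"Objective: {o}\nTasks: {t}" for o, t in EXAMPLE_FLOWS.items())
    return (f"{examples}\n\nObjective: {objective}\n"
            "Tasks: (produce a task list in the same style)")

prompt = build_prompt("evaluate a new market")
```

The database Nathan imagines would essentially be a large, task-specific version of `EXAMPLE_FLOWS`, curated by experts.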
    0:10:17 interesting
    0:10:21 It seems like there’s some opportunity there for some new kind of SaaS or something, where
    0:10:27 the company just subscribes to this agent service, and there’s like, I want an agent to do this
    0:10:32 so like, well, we have that custom-tailored for your needs kind of thing, and they just sign up for the service
    0:10:34 And yeah, that’s interesting. You know what would be awesome is
    0:10:38 Like there’s so many kind of like brainstorming strategies and whatnot, right?
    0:10:42 There’s like coaches and there’s like different strategies on like how to brainstorm
    0:10:46 It’d be great to have like a database of like something like that
    0:10:50 But like tailored to specific tasks then like given like a rough task it like breaks down
    0:10:52 Like a really good smart way that an expert would do it
    0:10:57 And if there was a massive database that I could just subscribe to and just pull task lists from
    0:11:01 and then say, take this task list and adapt it to our own, that would be pretty valuable
    0:11:04 Well, after this episode goes out, I mean, one might pop up
    0:11:13 Or we’ll go build it together. Yeah, exactly. Exactly. So I was recently watching this TED talk with
    0:11:17 Mustafa Suleyman who’s now the CEO of Microsoft AI
    0:11:23 He made a comment at the end during the q&a portion about how when it comes to AI one thing
    0:11:26 He doesn’t think we should do is give AI autonomy
    0:11:30 And so I just wanted to hear what your thoughts are on like that comment
    0:11:35 I don’t get it. Like, why are we not going to? Yeah
    0:11:41 like clearly someone’s going to, so we should figure out collectively how we should do it, and how to do it
    0:11:46 well and safely. Yeah, he seemed to be playing this odd game of trying to be on the e/acc side as well as the
    0:11:47 doomer side. It was like this odd thing
    0:11:52 where he was basically like, we should accelerate, but it shouldn’t be, you know, improving itself
    0:11:54 It shouldn’t be going off and doing its own thing
    0:12:00 Yeah, I feel like most of the people I engaged with saw that and were like, that’s pretty much my task list right now
    0:12:01 Like trying to get yeah
    0:12:05 And well, I mean, most of the stuff that’s been out so far, it’ll actually ask you
    0:12:07 Do you want me to take the next step right?
    0:12:12 It’ll it’ll essentially suggest the next step and then say do you want me to take this next step?
    0:12:13 Yes or no
    0:12:16 Right, so even most of what people have played with so far
    0:12:20 has had that extra layer of, are you sure you want to do this step?
    0:12:22 Are you sure you want to do this step? And
    0:12:26 I mean, I don’t know if that’s really the future of agents
    0:12:32 I feel like people want to be able to give it a prompt, walk away, go to dinner, come back, have their tasks completed
    0:12:36 But so far it seems like a lot of those safety measures have been built in
    0:12:41 Yeah, I mean, you know, I think we’re still trying to figure out how to get all this AI to work
    0:12:45 how to work well, what are the correct design patterns, UX patterns
    0:12:48 I think we’re all still trying to discover some of those questions
    0:12:51 It’s also to reduce cost as well
    0:12:56 right? I think when we first talked, because if it just keeps running and it’s not waiting
    0:12:57 it’s not asking you any questions
    0:13:01 it could go off and just run up your API bill for OpenAI or whatever to
    0:13:07 thousands and thousands of dollars. Well, there was also a little bit of a tendency from time to time where it would get stuck in a loop
    0:13:10 Right, it would sort of do a search, not find what it was looking for, do a search
    0:13:16 and it would just kind of continuously loop, and if you walked away for hours, you’d just rack up those API costs
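The two failure modes mentioned here — runaway API bills and getting stuck re-running the same search — are commonly guarded against with a spend cap and simple repetition detection. A minimal sketch; the class, costs, and thresholds are all made-up numbers for illustration:

```python
# Sketch of the two guardrails discussed above: a spend cap and simple
# repeated-query loop detection. Costs and thresholds are illustrative.

class BudgetGuard:
    def __init__(self, max_cost=5.0, max_repeats=3):
        self.cost, self.max_cost = 0.0, max_cost
        self.seen, self.max_repeats = {}, max_repeats

    def check(self, query, est_cost):
        # Call this before each LLM/search step; it raises to halt the agent.
        self.cost += est_cost
        if self.cost > self.max_cost:
            raise RuntimeError("API budget exceeded; stopping agent")
        self.seen[query] = self.seen.get(query, 0) + 1
        if self.seen[query] > self.max_repeats:
            raise RuntimeError(f"stuck in a loop re-running {query!r}")

guard = BudgetGuard(max_cost=1.0)
guard.check("search: acme revenue", 0.02)  # first call is fine
try:
    for _ in range(10):
        guard.check("search: acme revenue", 0.02)  # same query over and over
except RuntimeError as err:
    stopped = str(err)  # the loop detector fires before the budget cap here
```

A step cap like BabyAGI's iteration limit plays the same role; the point is that the agent, not the user, hits the brake.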
    0:13:20 We’ll be right back
    0:13:25 But first I want to tell you about another great podcast you’re going to want to listen to. It’s called Science of Scaling
    0:13:33 hosted by Mark Roberge, and it’s brought to you by the HubSpot Podcast Network, the audio destination for business professionals
    0:13:38 Each week, host Mark Roberge, founding Chief Revenue Officer at HubSpot
    0:13:42 senior lecturer at Harvard Business School, and co-founder of Stage 2 Capital
    0:13:50 sits down with the most successful sales leaders in tech to learn the secrets, strategies, and tactics to scaling your company’s growth
    0:13:55 He recently did a great episode called How Do You Solve for a Siloed Marketing and Sales?
    0:13:59 and I personally learned a lot from it. You’re going to want to check out the podcast
    0:14:03 Listen to Science of Scaling wherever you get your podcasts
    0:14:09 Yeah, I think when we talk about autonomous agents, one thing is
    0:14:12 there are qualitative decisions and quantitative decisions
    0:14:16 and there are types of decisions that we can trust a computer to do, and some that we can’t. And I think it’s
    0:14:21 worth pointing out that I think like half the stock trades are autonomously done by essentially what’s an autonomous agent
    0:14:26 that’s only using quantifiable information, and as we move to more, you know
    0:14:29 qualitative stuff, it gets a little bit trickier
    0:14:34 But I think it’s ultimately just, what are the tasks that we can automate today?
    0:14:36 And if we can start relying on it, we can slowly
    0:14:40 figure out, you know, more and more things that can be automated
    0:14:43 Yeah, so one thing I do want to bring up as well is that
    0:14:48 You know, we’ve probably all seen Devin, right, from I think Cognition Labs, right?
    0:14:54 And this is an AI agent that can help with coding. You give it a task that you want it to do, it writes the code
    0:14:59 I think it sort of double-checks its own code, iterates, optimizes, and sort of
    0:15:04 does this loop until it figures out the right code and completes the task
    0:15:09 Well, they were just valued at two billion dollars, and the company’s only six months old
    0:15:15 And to me, that’s kind of crazy that a company that is that new
    0:15:20 got that kind of valuation. Do we see this as the sort of next
    0:15:29 venture capital push? Is this where a lot of the money is going to start to flow, to some of these companies that are building these more autonomous agents?
    0:15:33 I mean, I think everybody agrees that
    0:15:39 autonomous agents, when they work, will have huge impact, and I think the question is just a matter of when
    0:15:46 And so it’s no surprise that some VCs, right, if there are enough VCs, some are going to think that it’s very soon
    0:15:48 And if it is very soon, it’s worth making
    0:15:54 a big bet. That being said, I’d say there are all types of VCs, and the type of VC who makes
    0:15:59 two-billion-valuation bets into a company six months old is pretty different from a VC
    0:16:04 who’s, you know, an emerging manager micro VC, which is myself
    0:16:08 And I’d say that an emerging manager micro VC probably has more in common with an AI builder, at least from my perspective
    0:16:12 than somebody making bets like that, so I can’t really speak to that specific valuation
    0:16:15 I think it’s just a different type of company building. Totally makes sense
    0:16:20 Yeah, I mean, Devin seems amazing, like the concept, but there’s also been a lot of controversy, at least on Twitter
    0:16:24 people saying, is it even real? Here’s my guess, right?
    0:16:27 And I mentioned handcrafted agents versus autonomous agents
    0:16:33 If you think about developers, some of the Devin skills they showed were like, let’s run the code
    0:16:38 If there’s an error, let’s look at the code and decide where to add a print statement
    0:16:43 Let’s run the code again, read that print statement, and then based on the code and the new print statement
    0:16:46 guess where we need to fix the code
    0:16:48 Fix the code and then run it again
    0:16:55 If there’s an error, loop. This, to me, feels like a handcrafted agent. Like, every developer has done that themselves
    0:17:00 So it’s not like you need an AI to make up that workflow
    0:17:06 And there are probably a couple other ones that are common enough that a team like Devin should probably build that tool in a
    0:17:14 modular way, where each step is a modular skill, so that the AI can in theory dynamically pick and choose and combine them together
    0:17:19 But at least based on their demo, they’ve probably taught it these handcrafted agent skills
    0:17:24 Like, if you’re doing this chain of tasks, do it this way. If you teach it that, then it’ll do it that way
    0:17:26 But as people start using it
    0:17:30 there are going to be edge cases where there’s a new flow that the team hadn’t thought of or that Devin hadn’t run into
    0:17:32 and then it might not be as good and stable
    0:17:39 So getting those stable is going to come through people using it, figuring out where it doesn’t work, and then maybe even
    0:17:44 describing it to Devin so it does it right and stores it in its memory
    0:17:46 so that the next time it’s asked, it can do it that way better
    0:17:48 So that’s my kind of gut guess on where they’re at
    0:17:54 which would explain why their demos work, why people who are using it are getting good results for the type of tasks that
    0:17:58 they’ve gotten good at, but they’re probably still working on some of the edge cases, would be my guess
    0:18:05 That’s maybe the reason they haven’t gone public yet. Are there any major technical hurdles that you’re seeing to
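The run / read-the-error / patch / re-run loop described here can be sketched like this. Devin’s internals aren’t public, so this only illustrates the workflow Yohei is guessing at; `suggest_fix` and `debug_loop` are stand-in names, and the stub "fix" only handles the toy bug used in the example:

```python
import os
import subprocess
import sys
import tempfile

# Sketch of a run / read-error / patch / re-run debug loop.
# suggest_fix is a stub standing in for an LLM call that would see the
# code plus the traceback (possibly after adding print statements).

def suggest_fix(source, error):
    return source.replace("1 / 0", "1 / 1")  # toy "fix" for the toy bug below

def debug_loop(source, max_iters=3):
    for _ in range(max_iters):
        # Write the candidate script to a temp file and run it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
            path = f.name
        proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
        os.unlink(path)
        if proc.returncode == 0:
            return source  # the script runs cleanly; stop iterating
        source = suggest_fix(source, proc.stderr)  # patch based on the error
    raise RuntimeError("could not fix within budget")

fixed = debug_loop("print(1 / 0)")
```

The loop itself is entirely handwritten; only the patching step would be left to the model, which is what makes it a handcrafted skill in the sense above.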
    0:18:08 To getting to the point where I can just say hey
    0:18:10 Go build me this website
    0:18:15 And then I walk away go to dinner come back and then the website’s just built for me everything’s done
    0:18:21 I mean, we have really powerful large language models. Most of the large language models now can work with tools
    0:18:23 They can interface with APIs, things like that
    0:18:28 What do you think the holdup is from where we are now to getting to that?
    0:18:33 So I think part of it is people’s expectations. When you said, go build me a website
    0:18:37 right, if I wanted to build a tool that can build you a
    0:18:42 general personal website, maybe with a blog, I could probably build that website and build that tool
    0:18:46 right, if I dedicate myself to making it work really well. The challenge is, if I put it out into the market
    0:18:50 someone’s going to go say, build me a SaaS website, or build me a game website
    0:18:53 and suddenly this tool doesn’t work because it didn’t create that website
    0:18:57 And so how do you present that, right? Do you go really, really niche, in which case
    0:19:00 no one really cares? Or do you present the big picture that you want to build, in which case
    0:19:06 there are going to be edge cases that don’t work? And I think that’s one of the big blockers, kind of figuring out
    0:19:08 There are probably agents that are good at certain things
    0:19:10 but maybe they don’t go viral because it’s such a niche group
    0:19:14 And then the ones that promise too much are going to hit the market, and they’re going to run into edge cases
    0:19:17 and people are going to say, this doesn’t work. And I think that’s one of the holdups
    0:19:23 But I think within that, what you’re finding is that some people who discover the right tool for themselves are getting a lot out of it
    0:19:28 or the people who can see past the mistakes and are willing to just treat it like a new employee
    0:19:32 that’s just much cheaper. Those people, I think, are getting value out of these
    0:19:35 agent-type tools and workflows today
    0:19:37 Yeah, that makes sense
    0:19:42 It’s the little nuances of exactly what you’re looking for, and everybody that uses the tool
    0:19:46 is going to have slightly different nuances to what they need out of it
    0:19:50 I’m wondering, like, whenever we get to the point where we can not only have it make a website
    0:19:55 but come up with the concepts. Like, I imagine a future product lab where, you know
    0:20:01 you’re like, I want to run experiments in this sector, and the agent actually goes off and comes up with the plan, and
    0:20:06 yeah, you know, give me five landing pages for different angles on this that we could attack, and then
    0:20:13 I wonder how many years off we are from that, and also just how much that changes business when you can just ramp out
    0:20:19 new experiments like that. Again, going back to, if it’s a use case that a lot of people want, I don’t think we’re that far away from it
    0:20:27 Right. I use an AI diligence tool that will spit out a 30-to-50-page report on any given company or industry within 30 minutes
    0:20:31 It’s a very specific, you know, what I call handcrafted workflow
    0:20:35 I get a ton of value out of it, and it makes sense to build that tool because it’s a workflow that
    0:20:38 every investor, every consultant goes through
    0:20:41 which is going and researching the company, the news, the data, and all that kind of stuff
    0:20:45 One of my favorite things that came out after BabyAGI was AutoRPG, which was
    0:20:49 using BabyAGI’s framework to auto-design
    0:20:56 a full game level, the storyline for that game level, the characters, their script
    0:20:59 to the point where they could just upload it and make that
    0:21:04 level playable in the game engine. It was just a rough demo, and I think they’re still cleaning up the actual back end of it
    0:21:06 But I mean that’s pretty cool stuff
    0:21:12 Yeah, one of the things I wanted to ask you about was something you mentioned in your TED talk that I found really fascinating. You made a
    0:21:13 comment about how
    0:21:18 with AI, we essentially have access to the sum of human consciousness
    0:21:25 or the ability to chat with the sum of human consciousness, and I’m paraphrasing, you probably said it way more eloquently than I did
    0:21:29 But I want to kind of talk about that statement and what you meant by that
    0:21:35 Yeah, I mean if you think about these massive models, they are trained on
    0:21:37 massive corpus of
    0:21:39 text
    0:21:45 written by millions of people. It’s their words, and again, it’s a filtered list of words because it’s only stuff they published
    0:21:49 But when you ask a model a question, it’s answering based on
    0:21:56 the writing of millions of people. And so I think what I said was that it’s probably the closest thing we have to chatting with our collective conscious
    0:22:03 which I think is just a really fun way to think of the tool, and play with the tool, and ask questions against the tool
    0:22:09 Actually, one of the first experiments I did early on, this was in the DaVinci days, was I actually had it write a 42-page book
    0:22:14 on what it means to be human, and it talks about family and whatnot
    0:22:17 I was just like, I want to know what the average
    0:22:22 like, two pages about what family means to a human is. And how did that book come out?
    0:22:27 It was great. It’s like a Google Doc. You know, I just made it for myself as an experiment
    0:22:32 But it goes through family, death, life, you know, all those basic human concepts
    0:22:36 There’s a book I loved called Tuesdays with Morrie, which was actually the inspiration for it
    0:22:38 It goes through some of the key aspects of life
    0:22:42 but I basically just did that with an AI instead of with a wise man named Morrie
    0:22:46 Now, I don’t know if this is a rabbit hole. We want to go down or not, but I’ll bring it up anyway
    0:22:52 So, you know, continuing along those lines, when a lot of people are out there talking about how these
    0:22:56 these large language models are biased in one way or the other
    0:22:59 does that sort of mean that
    0:23:02 the biases are sort of what the majority of humans
    0:23:07 you know, believe? I feel like that’s a dangerous jump. I feel like
    0:23:13 it’s fair to say that the bias of human society is probably in there somewhere
    0:23:16 But then there’s also the bias layer of who has access to the internet
    0:23:21 which has ties to wealth, and a bias layer of who has access to
    0:23:26 technical knowledge, right, because people who had access to blogging earlier, there’s more content out there
    0:23:31 So there’s a layer of bias beyond the biases of the world ingrained into the model
    0:23:35 So I would say that as long as you’re making that clear
    0:23:39 I think it’s fair to say that that bias is reflected in the model to some extent
    0:23:42 Yeah, that’s very cool. And I just love that
    0:23:45 I don’t know if you call it an analogy, or that way of thinking, right?
    0:23:52 I love that idea of, we’re chatting with the sort of sum of human consciousness when we’re talking with these AI bots, because I mean
    0:23:54 they’re trained on, you know
    0:24:02 trillions of bits of data from, you know, all of humans putting this information on the internet, and
    0:24:04 just looking at it from that way
    0:24:10 When I was watching your TED talk, that one statement just flipped a switch in my brain of, oh man, that is so true
    0:24:14 I remember when I was a kid, and I don’t think this is even true
    0:24:18 but somebody told me that, like, anything with mass has gravity
    0:24:24 right, so that if you wave your hand, in a very, very minuscule way, a star
    0:24:29 is moving slightly because of your hand movement. Now, I looked it up
    0:24:35 I think it’s actually false because there’s some diminishing effect, but theoretically speaking, that idea of
    0:24:40 having a minuscule effect on something extremely large and far away has always fascinated me
    0:24:45 And I think it’s fascinating to think about all the people who published on the internet, right?
    0:24:50 Some of their ideas and words, in a very indirect way
    0:24:56 we’re now consuming, and there’s an immortality to that that just fascinates me
    0:24:59 Well, yeah, and even hundreds of years old, right?
    0:25:04 I mean, it’s trained on data from the Bible and the Quran and all of it, right?
    0:25:06 So hundreds, thousands of years old as well
    0:25:10 And it’ll be trained on this podcast that we’re doing right now, which is kind of wild, right?
    0:25:13 We become part of this infinite collection of all of humanity, right?
    0:25:16 Like all of the stories of humanity. Yeah, this is a whole rabbit hole
    0:25:21 I do think I’ve been talking more about autonomous agents recently, just because of the talk and a couple panels
    0:25:24 But the reactions from enterprises are very interesting, right?
    0:25:26 You know, there’s the ethics, job displacement
    0:25:31 Those are all discussions that are happening around autonomous agents, like how and when do you move
    0:25:37 tasks from humans doing them to being done autonomously, you know, what is the long-term impact of it?
    0:25:43 And then one of the crazier ideas that I got some reaction to was the idea of an autonomous CEO at some point, right?
    0:25:51 It’s available 24/7, can talk to all employees in parallel, has access to all company data, and has an at least consistent and transparent bias
    0:25:52 Yeah
    0:25:56 I’m curious how you respond to people when the sort of job displacement question comes up
    0:25:59 You know, being somebody that creates a lot of content around AI
    0:26:04 in my comments and things like that, you know, I get love, but I also get a lot of hate from people
    0:26:07 They’re like, oh, you’re part of the problem. You’re out there sort of
    0:26:11 spreading the message of the thing that’s going to take the jobs from people
    0:26:18 What sort of response do you find yourself giving people when they ask, is what you’re talking about going to take our jobs?
    0:26:24 Looking forward, I can see the argument as to why autonomous agents will
    0:26:26 displace jobs, and I can see why
    0:26:31 there’s a reason to fear that, potentially I do, but looking back on it
    0:26:37 I also know that many smart people had that exact same fear about almost every single technology that came
    0:26:41 And all those technologies ended up being net job creators
    0:26:45 So I’m humble enough to assume that I’m not necessarily right,
    0:26:48 you know, I’m just as likely to be wrong as all the people in the past
    0:26:54 And if I’m going to go with, you know, just what I’ve seen in the past, and if I’m looking at it from a third-party view,
    0:26:59 I would assume that the next technology will be a net job creation engine again
    0:27:03 Yeah, I mean when I just look at you know, things like um, you know AI art for instance, right?
    0:27:09 Everybody’s worried about the fact that AI can generate art, but it feels like the same arguments that happened when Photoshop came out
    0:27:13 And most likely the same arguments that came out when cameras were invented
    0:27:20 You know, and you can sort of go back: computers, cars, the printing press, radio, TV, right? Every single time
    0:27:24 You know, and yes, there are pros and cons of every single technology
    0:27:28 And each one of those created new careers that didn’t exist prior to them
    0:27:30 exactly
    0:27:32 Yeah, it feels like people will get better jobs as well, that’s my opinion
    0:27:36 Like a lot of the, like, tedious parts of work, the work that people don’t actually like doing,
    0:27:40 that’s like soul-draining kind of work, a lot of that will get replaced
    0:27:44 And so I think with this new technology people will have more free time and they can actually explore things
    0:27:49 they actually enjoy doing for work, versus like, oh yeah, I need to enter this data into this form
    0:27:55 Yeah, I feel like AI has already sort of replaced data entry, and I know for me it has; like, I don’t manually enter stuff
    0:28:00 into spreadsheets anymore because AI can do it for me, just as well as I can. Yeah, but it didn’t take anybody’s jobs
    0:28:04 It just made me more efficient. Yeah, I think jobs will change right?
    0:28:08 I mean, if you look at a super small scale, like our venture fund, right, as I had mentioned,
    0:28:11 I have this diligence tool that does all this, you know, research for me
    0:28:16 But it doesn’t mean that I’m never gonna hire people into my fund. It just means that when I hire
    0:28:21 I’m not going to be looking for that skill set. I don’t need somebody who wants to spend time,
    0:28:24 you know, searching the web and copy-pasting stuff into a document
    0:28:29 I’m okay with someone who sucks at that, because what I want is different skill sets
    0:28:33 And so I think the jobs do change around what AI can automate
    0:28:36 Uh, but that’s always been true of any technology, right?
    0:28:42 If there are certain tasks that technology will replace, then just figure out what other tasks will be needed to support that
    0:28:47 I want to talk a little bit about the venture capital world for a minute too,
    0:28:52 specifically as it relates to AI, right? Because there’s this narrative. We had Greg Eisenberg on the show
    0:28:55 We talked to him about this exact concept
    0:29:02 but there’s this narrative that SaaS companies don’t really have a moat because almost all of these platforms are either built on
    0:29:06 existing APIs or open source platforms, and
    0:29:14 anybody could go and use AI to build a SaaS that connects to these APIs or these open source platforms
    0:29:20 Even if you don’t know code, I mean, you know, what what you’ve managed to do is is kind of proof of that concept
    0:29:27 But so I look at some of these companies; like when AI was really bubbling up at the end of 2022, beginning of 2023,
    0:29:31 that seems to be when the sort of big boom happened, where
    0:29:35 the world’s consciousness started to realize that AI is getting really popular, right?
    0:29:39 and we saw platforms pop up, like tools that were like, hey, we can, um,
    0:29:45 summarize PDFs for you, or chat with your PDF, or tools like that
    0:29:48 Some of those companies raised capital early on when they first popped up
    0:29:54 But then, you know, fast forward a year and now you can just do that directly inside of ChatGPT or Claude
    0:29:57 And who’s going to go buy a chat-with-PDF software?
    0:30:04 So when you’re looking at like potential companies to invest in are there things that you’re looking at
    0:30:10 That might future-proof the companies like what what sort of criteria are you looking at?
    0:30:15 I think there’s two parts. I mean, being an early stage investor, right, investing at pre-seed,
    0:30:20 I’m comfortable with the idea that my companies might pivot. So at pre-seed, I mean, team comes first, right?
    0:30:24 If I’m talking to a team that’s closely following what’s happening and is able to adapt quickly,
    0:30:26 like, that can be enough
    0:30:30 But kind of thematically, which I think is more what the question is about, for us at least,
    0:30:32 we do try to get deep into
    0:30:40 both the building and testing of tools and talking to customers, to build conviction around the direction that we think
    0:30:45 AI is going to go, right, and looking back on how the models have advanced in the past to get a sense of where we think
    0:30:47 the models are going to go
    0:30:48 And it’s through that
    0:30:53 we have conviction in certain types of plays. We feel like this is going to be a good play, versus
    0:30:59 conviction that certain plays won’t work, and then a lot of plays are just like, we’re not sure what the right play is here
    0:31:04 And then as an investor we tend to invest in the areas where I have strong conviction in the play
    0:31:07 And there are plenty of other really interesting ideas where I just
    0:31:13 don’t really have conviction in that market direction, and those I might pause on and watch a little bit. Makes sense
    0:31:16 I mean, you know,
    0:31:18 it’s slightly connected, but I’ve been wondering about VC,
    0:31:21 like how will it look in five or ten years, because it’ll probably be really different
    0:31:27 I mean, I’m interested to see what happens in terms of like lowering the cost of company starting and what those teams look like
    0:31:32 I mean, that’s been a consistent trend right over over decades where the cost of starting companies continue to get lower
    0:31:36 So I hope to see that. What we have seen is the VC landscape change as well,
    0:31:40 right, we saw VCs becoming smaller and smaller, more and more micro
    0:31:44 At the same time we have seen some shifts around
    0:31:49 crowdfunding, around, you know, syndicates, like, you know, rolling funds
    0:31:53 So we are starting to see some innovation over the last, I mean, less than half a decade,
    0:31:56 I’d say, is where we’ve started to see a lot more innovation around VC
    0:31:58 So it’ll be interesting to see where it goes
    0:32:05 Are there any, like, tools or research or anything that, you know, has come onto your radar that’s really exciting you right now?
    0:32:10 Yeah, that’s a good question. I feel like that’s, uh, happening constantly. Yes. Um,
    0:32:16 I’m curious to learn more about what Sakana is doing by combining models, that kind of modular approach of
    0:32:22 mixing and matching models and being able to use them as different models. I think that’s a fascinating area
    0:32:29 Um right now that the overarching trend seems to be like let’s throw more money and more power to get better performance
    0:32:34 But I think if we can start playing around with more modular models and start combining them
    0:32:38 We can start seeing different economics and plays there. So I’m really fascinated by that
    0:32:40 Yeah, something like that seems great for agents, right?
    0:32:44 It’s like depending on what use case you can change out the model like okay right now
    0:32:47 I don’t need the smartest one and I don’t need to spend a fortune on it
    0:32:53 Another area is web3 and AI. I feel like I’ve seen a lot more that’s been really bubbling up with web3 coming back, everything from,
    0:32:56 you know, uh, data,
    0:33:03 you know, data marketplaces on web3, IP plays, leveraging kind of AI-first IP plays on web3,
    0:33:08 as well as agentic web3, right, helping agents pay, helping agents have an identity
    0:33:10 and this might be
    0:33:14 uh, more of a rabbit hole than we want to go down today, but what’s your definition of web3?
    0:33:20 I mean, I think the simplest one is technologies that leverage the blockchain. I may be in
    0:33:26 a smaller group that kind of sees web3 as almost a philosophy and ethos
    0:33:28 But I actually think most people would disagree with me there
    0:33:34 Yeah, I mean, I’d probably agree with you like when it when it comes to the whole like blockchain sort of crypto discussion
    0:33:40 I think the technology the sort of distributed ledger concept. I think is really really powerful
    0:33:47 I think a lot of grifters and bad actors and people trying to really push the make-money narrative have
    0:33:52 really screwed up the sort of web3 blockchain concept
    0:33:55 But I do think the underlying tech is really powerful and I do think it’s the future
    0:33:59 Yeah, and I always thought it was kind of wild, like, you know, it’s great marketing,
    0:34:05 I’m sure, “web3,” but it’s like, when web 2.0 happened, which was like all the big social companies, that term kind of came out,
    0:34:08 you know, I think Sarah Lacy and a few others started spreading that term
    0:34:11 Uh, and that was like after the big social companies were already working
    0:34:16 It never like really dramatically changed the internet; like, people were using Digg,
    0:34:19 they were using Facebook, they were using Myspace and all this stuff, right?
    0:34:23 And then that term kind of came out, versus web3, which was like, here’s this cool concept
    0:34:27 It’s got it’s got branding issues for sure. Yeah. Yeah
    0:34:34 So, you know, as we kind of wrap up here, a large portion of the audience that listens to this show is,
    0:34:38 you know, small and medium businesses; this is a HubSpot podcast
    0:34:42 What advice would you give to business owners that are seeing AI?
    0:34:46 They’re seeing all of this stuff bubble up. Maybe they haven’t experimented with AI yet
    0:34:52 What would you tell them to do as sort of a first step to get their toes wet with AI for their business?
    0:34:58 I mean, I think anybody who’s played with it will tell you that the first couple of times they played with something like ChatGPT,
    0:35:01 they were pretty blown away by its capabilities
    0:35:06 I think it’s pretty low-hanging fruit to start there if you haven’t; like, pull up ChatGPT
    0:35:09 I think there’s a free version some people tell you oh the free version is not good enough
    0:35:11 But realistically just start playing with it, right?
    0:35:15 And I think for most people, when you ask it a couple of questions, when you’re blown away by it,
    0:35:19 you’ll start thinking of, hey, how can I maybe use this at my work?
    0:35:22 And ask it a couple of questions that are relevant to your work
    0:35:26 And again, my guess is that a majority of people will be blown away by its capabilities there
    0:35:32 And hopefully just by doing that a little bit you’ll start thinking of ideas on like how you can start using it
    0:35:39 And initially I just nudge people to, like, you know, use ChatGPT, or pick any AI tool and just start playing around with it
    0:35:45 so that you kind of get into the habit. For people who are more creative, pick an image generation tool like Midjourney or
    0:35:48 Stable Diffusion, or if you’re a music person, then pick Suno or Udio
    0:35:52 Then just pick an AI tool and just start playing with it and just get into that habit
    0:35:57 And then once you do, I think the ideas will start coming, and just know that, like, you know,
    0:35:58 some of the stuff you’re going to try won’t work
    0:36:02 But if you keep trying stuff, you’ll keep discovering new cool ways to use it
    0:36:05 Well, very cool. Thank you so much for joining us on the show today
    0:36:09 Where should people go learn more about you go check you out follow you on x
    0:36:12 You know, what’s what’s the best place to learn more about you and what you’re up to?
    0:36:16 Probably the, uh, the place I post most is on X, slash Twitter
    0:36:20 I’m constantly posting about AI discoveries, our VC fund
    0:36:25 Um, for a lot of the stuff I build, I do have a build-in-public log at yohey.me where you can see
    0:36:30 most of my experiments; some of them have tutorials, some of them are just pure experiments
    0:36:36 But uh, those two are the probably two most interesting places. Amazing. Awesome. Well, thank you so much for joining us today
    0:36:38 This has been such a fun and fascinating conversation
    0:36:42 I’m so glad that you were able to make the time to do this with us today
    0:36:45 And I couldn’t appreciate you enough for doing it. So thanks again. Thank you for having me
    0:36:55 [Music]

    Episode 7: Why and how Yohei Nakajima built BabyAGI. Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) sit down with Yohei Nakajima (https://x.com/yoheinakajima), a venture capitalist and serial AI tool builder.

    In this episode, Yohei dives deep into the transformative potential of AI in automating tedious tasks, revolutionizing venture capital, and redefining job markets. He shares insights on the practical applications of AI for small and medium businesses, discusses the branding of “web3,” and explores the development and impact of autonomous agents like his BabyAGI project. Whether you’re interested in tech innovation, business adaptability, or AI ethics, this conversation covers it all.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) VC turned coder using AI for prototyping.
    • (03:38) Community support led to modding and improvements to Baby AGI.
    • (08:16) Learning curve in using autonomous agents.
    • (11:31) AI agents offer guided next step suggestions.
    • (15:04) Developers emphasize handcrafted agents over autonomous agents.
    • (17:10) Balancing niche and broad market expectations in tech.
    • (20:12) Massive models trained on millions of people.
    • (22:52) Childhood fascination leads to pondering internet immortality.
    • (27:24) Venture capital, AI, SaaS, future-proofing investment criteria.
    • (30:03) Interest in lowering company startup costs and VC innovation.
    • (33:34) Try using the free version of ChatGPT first.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • How OpenAI’s SORA Changes the Future of Filmmaking

    AI transcript
    0:00:04 – The question of will our economic systems change forever?
    0:00:05 I think the answer is yes.
    0:00:06 – Yeah.
    0:00:07 – But how it’s going to change?
    0:00:08 I don’t think we know yet.
    0:00:10 A lot of people underestimate humans,
    0:00:12 especially when we’re talking about AI.
    0:00:15 I feel like humans will figure out new ways
    0:00:16 to add value to the world.
    0:00:17 – And I’m pretty encouraged there.
    0:00:19 Like I think people who adopt AI,
    0:00:21 they’re going to become enhanced in a way
    0:00:24 and they’re going to have an outsized impact on the world.
    0:00:26 (upbeat music)
    0:00:28 – Hey, welcome back to the Next Wave podcast.
    0:00:31 I’m Matt Wolf and I’m here with my co-host, Nathan Lanz.
    0:00:34 And we are your chief AI officers.
    0:00:36 It is our goal with this show
    0:00:38 to keep you informed on the latest AI news,
    0:00:42 the latest AI tools and the talk of the AI world
    0:00:45 so that you can use it in your life and your business
    0:00:48 and leverage these new technologies
    0:00:50 that are there for you to use.
    0:00:52 Today’s episode is a little bit different
    0:00:53 from what we’ve put out so far.
    0:00:54 This time we’re doing a little bit
    0:00:57 of an ask us anything episode.
    0:01:00 We went to X and asked you to send us your questions
    0:01:03 about the AI world and what you want to know
    0:01:05 and hear our opinions on.
    0:01:06 And we got some amazing questions.
    0:01:10 So in this episode, Nathan and I are going to deep dive
    0:01:13 and answer and give our thoughts on some of those questions.
    0:01:17 We talk about things like where is AI art going
    0:01:20 and is there room for it to get even better?
    0:01:23 We also talk about how the government and corporations
    0:01:26 and how people need to change to adapt
    0:01:28 to this new AI world that we’re heading into.
    0:01:31 And we also talk about the limitations
    0:01:32 of large language models.
    0:01:34 How big can these large language models get?
    0:01:36 How powerful can they get?
    0:01:38 And we’re going to deep dive and give you our thoughts
    0:01:40 on all of this in today’s episode.
    0:01:42 So hang out with us and let’s dive in.
    0:01:47 So the first question here comes from Vicky Jay over on X.
    0:01:50 She says, where do you see AI art and video going
    0:01:52 by the end of this year?
    0:01:54 I mean, AI art, the thing is like a year ago,
    0:01:57 AI art was horrible.
    0:01:59 Or like a year and a half ago, really horrible.
    0:02:01 And then now it’s getting to the point where it’s like,
    0:02:02 okay, how is it going to get better?
    0:02:05 Like the newest version of Midjourney is amazing.
    0:02:09 ‘Cause maybe it’s still, with Midjourney,
    0:02:11 I think the one challenge is it still always looks
    0:02:12 like Midjourney to me.
    0:02:14 Like it looks very polished and it’s harder
    0:02:16 to get it to have a very different style.
    0:02:21 So maybe having more controls over how,
    0:02:24 like little details in the image versus now
    0:02:25 where it kind of just, oh yeah,
    0:02:27 produces something super beautiful,
    0:02:29 but it’s not really easy to edit.
    0:02:31 I think by the end of the year,
    0:02:32 you’ll be able to produce images
    0:02:34 that look even more amazing than now,
    0:02:35 but you’ll have way more control
    0:02:37 over how you actually edit those images,
    0:02:39 which for commercial uses will be amazing, right?
    0:02:41 ‘Cause if you’re producing like graphics
    0:02:43 for advertisements or anything like that,
    0:02:44 that stuff’s going to get way better,
    0:02:46 I think by the end of the year.
    0:02:50 When all your marketing team does is put out fires,
    0:02:52 they burn out fast.
    0:02:53 Sifting through leads,
    0:02:55 creating content for infinite channels,
    0:02:58 endlessly searching for disparate performance KPIs,
    0:02:59 it all takes a toll.
    0:03:00 But with HubSpot,
    0:03:03 you can stop team burnout in its tracks.
    0:03:05 Plus your team can achieve their best results
    0:03:07 without breaking a sweat.
    0:03:09 With HubSpot’s collection of AI tools,
    0:03:12 Breeze, you can pinpoint the best leads possible,
    0:03:15 capture prospects attention with click-worthy content
    0:03:18 and access all your company’s data in one place.
    0:03:21 No sifting through tabs necessary.
    0:03:23 It’s all waiting for your team in HubSpot.
    0:03:24 Keep your marketers cool
    0:03:27 and make your campaign results hotter than ever.
    0:03:30 Visit hubspot.com/marketers to learn more.
    0:03:33 (upbeat music)
    0:03:34 – Yeah, it is interesting
    0:03:36 ’cause I feel the same way about Dolly 3, right?
    0:03:37 Like I’ve gotten your point now
    0:03:38 where I can look at an image and go,
    0:03:40 “Okay, I can tell that was a Dolly 3.
    0:03:42 I can tell that was mid-journey.”
    0:03:44 I mean, I don’t know if the normal,
    0:03:46 if normal people who aren’t like as immersed in AI
    0:03:48 can see that or not, it’s hard for me to say, right?
    0:03:50 Like I’m such in that my own bubble
    0:03:52 that I don’t know if other people notice that too,
    0:03:55 but DALL-E creates very similar images.
    0:03:56 Like the color palettes on them
    0:03:58 always kind of look the same to me.
    0:04:01 Same with Midjourney, which has got like this stylistic thing
    0:04:02 where I look at it and go,
    0:04:04 “Okay, I can tell that was Midjourney.”
    0:04:05 But I think like you said,
    0:04:08 I think the updates throughout this year
    0:04:11 are gonna be a little bit more nuanced and marginal updates,
    0:04:12 right?
    0:04:13 They’re gonna be smaller updates
    0:04:15 because we’ve already got these AI models
    0:04:17 for images specifically
    0:04:20 that can create ultra-realistic images, right?
    0:04:21 Where else do you go?
    0:04:23 I think the focus now is less on realism
    0:04:25 and more on prompt adherence, right?
    0:04:28 Midjourney is really, really great at realism.
    0:04:30 DALL-E still sucks at realism.
    0:04:33 DALL-E is really, really good at prompt adherence.
    0:04:35 You can throw a whole bunch of elements into the prompt.
    0:04:38 You know, I want a dragon wearing a fedora,
    0:04:40 eating nachos, watching MTV
    0:04:43 with the words “party time” on the screen,
    0:04:46 and it will get all of those elements into a single image.
    0:04:48 Midjourney can’t do that, right?
    0:04:49 So I think what we’re probably gonna see
    0:04:52 is like all of these models
    0:04:54 kind of combining to a point where like,
    0:04:56 nobody can tell whether it was mid-journey
    0:04:59 or Dolly or stable diffusion
    0:05:03 because mid-journey focused on prompt adherence
    0:05:04 and improves that.
    0:05:08 Dolly focuses on realism
    0:05:11 and more dynamic color palettes
    0:05:13 and optimizes for that.
    0:05:14 And we’re gonna get to a point
    0:05:16 where we don’t know what model generated what
    0:05:18 because everything is just good.
    0:05:21 As far as video, I mean, we saw Sora.
    0:05:25 OpenAI has made it sound like we might see Sora
    0:05:26 sometime by the end of the year.
    0:05:28 We might actually get our hands on it.
    0:05:29 You know, we were just talking a minute ago
    0:05:33 about Adobe rolling in Sora to Adobe Premiere.
    0:05:35 So, you know, I think with Adobe Premiere
    0:05:37 we might start to get some access to Sora
    0:05:40 and some of those capabilities in there.
    0:05:41 So that’s gonna improve,
    0:05:43 which means more realism in video,
    0:05:45 longer prompt generation with video.
    0:05:48 Yeah, I think that’s really what we’re gonna see.
    0:05:49 I think the other area
    0:05:52 that we’re gonna see continued massive leaps
    0:05:55 is in text to 3D art, right?
    0:05:59 For, you know, for creating like little animated characters
    0:06:01 or for creating game assets or things like that, right?
    0:06:04 You’ve got Spline 3D
    0:06:08 and you’ve got CSM, Common Sense Machines,
    0:06:11 and you’ve got Meshy.
    0:06:12 There’s all of these tools that are like
    0:06:16 getting really, really good at like text to 3D object.
    0:06:18 And I feel like a lot of companies
    0:06:20 are kind of focusing in there right now.
    0:06:21 Like even Midjourney said
    0:06:23 that they think their next sort of frontier
    0:06:26 is gonna be 3D generation.
    0:06:27 We’ll get back to the show in just a moment,
    0:06:29 but first here’s a word from HubSpot.
    0:06:32 Curious about what the future of productivity looks like,
    0:06:33 HubSpot’s AI tools make quick work
    0:06:35 of expediting content creation,
    0:06:39 optimizing workflows and elevating data analysis.
    0:06:40 Personally, I’m really impressed
    0:06:42 by the AI website generator.
    0:06:45 You can create a webpage just from a few simple prompts.
    0:06:47 That’s mind-blowing.
    0:06:48 And the facts speak for themselves.
    0:06:51 Over 85% of marketers and sales professionals agree
    0:06:55 that AI enhances content quality and prospecting efforts.
    0:06:56 So what are you waiting for?
    0:06:58 Sign up for free by clicking the link
    0:06:59 in the description below.
    0:07:00 Now back to the show.
    0:07:02 – Going back to video for a second.
    0:07:04 I think that, you know,
    0:07:06 I’m sure we’re gonna see Sora this year.
    0:07:08 Maybe when we see Sora and then like even like
    0:07:09 a small upgrade to it,
    0:07:11 like something better than the demo,
    0:07:13 like in the public’s hands, which is gonna be amazing.
    0:07:16 And I think you, you know, by the end of the year,
    0:07:18 I think there’s a chance that we’ll actually see
    0:07:20 like a viral short film,
    0:07:23 like something like five minutes, 10 minutes long,
    0:07:25 this entirely made with something like Sora.
    0:07:26 – Yeah.
    0:07:27 – I think that’s gonna be like the turning point of like,
    0:07:30 oh, like wow, because like,
    0:07:31 ’cause it’s almost there now.
    0:07:33 Like if something like Sora was out there,
    0:07:35 I’m pretty sure somebody could make something
    0:07:36 that went really viral.
    0:07:38 And I think you might even see, you know,
    0:07:39 ’cause we’ve been talking about like,
    0:07:42 there’s all these great new AI video tools coming out,
    0:07:43 but like what’s the use case?
    0:07:45 I do feel like there’ll be something, you know,
    0:07:46 almost kind of like how Instagram took off
    0:07:47 because of filters.
    0:07:50 I do feel like there’s some kind of new app,
    0:07:52 whether it comes this year or next,
    0:07:54 or I feel like there’s something on the horizon
    0:07:56 that we don’t even know what it is yet,
    0:07:57 but some kind of social app
    0:07:59 that will integrate these AI video tools.
    0:08:01 And we’re like, oh yeah, of course,
    0:08:02 why didn’t I think of that?
    0:08:04 Like I think something like that bigger than TikTok
    0:08:07 is coming using AI, probably AI video.
    0:08:08 – Yeah, interesting.
    0:08:10 I wonder what a social media app would look like
    0:08:13 in that realm because it’s like,
    0:08:15 is it gonna be just a bunch of AI videos
    0:08:18 and people are scrolling a feed of AI videos?
    0:08:19 – I guess the challenge right now is like,
    0:08:21 the filters were free.
    0:08:22 A lot of this stuff is quite expensive
    0:08:24 to actually do right now.
    0:08:25 So like you would have, you know,
    0:08:27 had to pay for the generation of the video,
    0:08:29 I guess is the one challenge you probably would have
    0:08:30 right now.
    0:08:32 But so that might make it where like,
    0:08:33 it’s more likely to happen next year
    0:08:34 when costs come down or something.
    0:08:36 But it does feel to me like, okay,
    0:08:38 we haven’t had big social app in a while.
    0:08:39 We had that kind of like,
    0:08:40 what was that one called, Gas or whatever,
    0:08:43 which was like this guy kind of like generated
    0:08:45 this out of nowhere and then sold it to Reddit,
    0:08:46 I think really fast.
    0:08:47 – I don’t remember that one.
    0:08:49 – Anyway, it was like half a joke.
    0:08:50 – Yeah.
    0:08:51 – And then he sold it for like a ton of money.
    0:08:53 Like, I don’t know, like over 50 million or something.
    0:08:55 And apparently like,
    0:08:57 I think he told Shaan Puri from My First Million
    0:08:58 and a few others like that he was gonna do it
    0:09:00 before he did it and he just did it.
    0:09:01 But there hasn’t been any other big social apps,
    0:09:03 you know, since then that have taken off.
    0:09:04 And it feels like, okay,
    0:09:06 so AI video probably is going to enable
    0:09:07 something really cool there.
    0:09:08 – Yeah.
    0:09:10 I mean, right now like air chat is sort of a social app.
    0:09:11 I mean, it came out like a year ago,
    0:09:12 but they rebuilt it.
    0:09:15 Now it’s sort of re-emerging.
    0:09:16 I think that’s mostly emerging
    0:09:18 just because it’s got Naval’s name on it
    0:09:20 and people flocked to Naval.
    0:09:22 – Yeah, Naval is saying it’s because of AI partially.
    0:09:24 He is saying because he believes,
    0:09:25 which is funny.
    0:10:26 He used to say that text was like the best.
    0:10:28 And now he’s switching to saying, like,
    0:09:30 yeah, it’s all gonna be audio and all,
    0:09:31 you know, it’s all gonna be audio.
    0:09:33 And that’s why I’m doing air chat.
    0:09:35 And apparently he is saying that one of the reasons
    0:09:38 is he’s now kind of reevaluating things
    0:09:40 because of AI and realizing, yeah,
    0:09:41 with all these new AI tools,
    0:09:43 you’re gonna be talking to your AI
    0:09:44 and then you’re gonna be talking to people
    0:09:45 that you wanna, you know, real people
    0:09:46 you wanna talk to as well.
    0:09:49 And so like voice is gonna be the main medium there.
    0:09:51 – Yeah, yeah, I know we’re sort of
    0:09:52 on a little tangent right now,
    0:09:55 but when it comes to air chat,
    0:09:58 the thing that I like about it is that you can’t type.
    0:10:00 You can’t open it up and type a response to somebody.
    0:10:01 You have to speak.
    0:10:03 And the thing that makes that cool
    0:10:05 is it sort of solves the problems
    0:10:08 that X has right now with a lot of bots, right?
    0:10:10 Right now you can’t really bot it
    0:10:11 because you’ve got to talk to it.
    0:10:15 Unless you’re using a really, really sophisticated
    0:10:17 sort of like 11 labs combo
    0:10:20 where it’s spitting out real voices and like,
    0:10:23 I don’t think anybody’s that advanced with it yet.
    0:10:25 But you also don’t get like all the trolling
    0:10:28 and all the negativity because when somebody
    0:10:30 actually speaks to you face to face,
    0:10:32 they’re not gonna be as negative
    0:10:34 and as much of a dick basically
    0:10:36 as they might be on something like X.
    0:10:38 – Did you ever use Clubhouse?
    0:10:40 Like when back in COVID? – A little bit, yeah.
    0:10:41 – Yeah, yeah.
    0:10:42 So I mean, Clubhouse was like blowing up,
    0:10:43 especially in Silicon Valley.
    0:10:47 Like everybody was on it and it started off great
    0:10:49 and then it went downhill.
    0:10:52 And you know, it’s like,
    0:10:53 ’cause it’s really hard to moderate these things.
    0:10:54 Like yeah, at first everybody’s nice,
    0:10:57 but then you got people talking about political things
    0:11:00 or whatever and like people from the opposite side join
    0:11:03 and like it gets really hateful and it just, you know.
    0:11:04 And then those kind of channels became
    0:11:06 the most popular channels
    0:11:08 because there’s more engagement there.
    0:11:10 I do wonder what Air Chat’s doing
    0:11:13 to kind of make sure they don’t have those same problems.
    0:11:14 – Yeah, it’ll be interesting.
    0:11:16 I’m new, I’ve only been on it for, you know,
    0:11:18 48 hours as of this recording.
    0:11:22 So, but anyway, we can move off that tangent for right now.
    0:11:25 So let’s, the next question,
    0:11:27 this one comes from craze in the dark
    0:11:29 who’s actually somebody that works for me.
    0:11:31 This is his name’s John, he’s my creative director,
    0:11:32 but he asked this question.
    0:11:35 He said, “How can governments and corporations
    0:11:39 “adjust to a potential massive AI job replacement?
    0:11:41 “Will our economic systems change forever?”
    0:11:44 You know, real easy softball question.
    0:11:45 – Okay, next question.
    0:11:48 – I do have like really mixed feelings here
    0:11:49 ’cause like, you know, in terms of,
    0:11:52 will it change our economic systems forever?
    0:11:55 I think that’s so hard to predict.
    0:11:57 Like anybody who says that they have predicted it,
    0:11:58 like that they know what’s gonna happen there,
    0:12:01 is lying or like, you know,
    0:12:03 deluding themselves, because AI introduces
    0:12:06 so many new variables into the world.
    0:12:09 It’s so hard to know, like, okay, GPT-5 is gonna come out.
    0:12:12 Is it 10% better, 50%?
    0:12:13 What does that mean?
    0:12:14 What does that mean for the economy?
    0:12:17 What does that mean for how people work?
    0:12:20 Will these systems stop improving as fast as they are now?
    0:12:21 Like in three years,
    0:12:23 ’cause we hit some kind of limitations
    0:12:27 or will they just start, you know, growing exponentially?
    0:12:30 And so it’s really hard to know, but, you know,
    0:12:33 and in the future, if AI can do most work,
    0:12:35 that definitely opens huge questions of like,
    0:12:37 what’s the purpose of the government?
    0:12:38 – Yeah.
    0:12:41 – Like, what happens when AI gets incredibly smart
    0:12:43 and starts telling us that our government’s messed up?
    0:12:45 What do we do then?
    0:12:48 – We elect an AI to run our government, obviously.
    0:12:49 – Yeah, yeah, do we ask the AI
    0:12:51 how we should be structuring our government based on,
    0:12:55 you know, and how do politicians respond to that?
    0:12:57 Do they decide, okay, we have to ban AI now?
    0:12:59 That’s probably what would happen.
    0:13:01 So I think no one knows, like, you know,
    0:13:03 ’cause it’s like, right now, you know,
    0:13:05 capitalism is, in my opinion, the best system we have.
    0:13:08 It’s definitely not perfect, like everyone knows that.
    0:13:10 And so in the future, when AI does the work,
    0:13:12 people are gonna have to find meaning in other ways
    0:13:15 and also how does money work then?
    0:13:17 I think Mark Andreessen talked about this recently
    0:13:20 where he believes that AI will dramatically bring down
    0:13:22 the cost of many things, right?
    0:13:24 And then there’ll be certain niche products
    0:13:26 or niche or maybe like real life experiences
    0:13:29 that still cost a lot of money, right?
    0:13:31 Like going somewhere and having some amazing experience
    0:12:33 or having a human create something, that could still
    0:13:34 be expensive.
    0:13:36 – I think that’s giving companies too much credit.
    0:13:38 I think it’s giving companies too much credit
    0:13:40 ’cause what we’ve seen so far, right, like,
    0:13:42 there’s so many companies that are built on the back
    0:13:47 of GPT-3, GPT-4, and over the last several months,
    0:13:49 so many of these APIs have lowered their costs,
    0:13:51 lowered their costs, lowered their costs.
    0:13:54 Have the companies that we’re paying our monthly fees to
    0:13:57 lower their costs or just pocketed extra profits?
    0:14:00 – Yeah, well, I mean, so it depends on how good AI gets.
    0:14:02 So like, that’s when there’s like a handful of competitors,
    0:14:05 but if you have like a thousand competitors
    0:14:06 and they’re all competing on price,
    0:14:09 because AI has made things so much easier to build,
    0:14:12 you know, a lot of prices should go down over time.
    0:14:13 But yeah, you’re right.
    0:14:15 I mean, iPhone’s still incredibly expensive.
    0:14:18 Other things that we buy are still incredibly expensive.
    0:14:19 So you’re correct there.
    0:14:22 I’m not sure, you know, but also at the same time,
    0:14:25 I don’t really, you know, like universal basic income,
    0:14:27 which I believe Sam Altman originally was one of the people
    0:14:28 who kind of like did an experiment with that,
    0:14:29 where they gave people money.
    0:14:31 I believe they gave people money in Oakland.
    0:14:32 They ran that experiment.
    0:14:33 And I can’t remember the results,
    0:14:35 but I think it was kind of mixed where it’s like,
    0:14:37 yeah, some people just end up kind of just like
    0:14:39 playing video games, you know?
    0:14:40 It’s like, okay, is that what you want in your society?
    0:14:41 Like, I don’t know.
    0:14:45 Like maybe there’s, you know, maybe some,
    0:14:46 you know, portion of people, that is what they do.
    0:14:48 And then they go off and do some productive things,
    0:14:49 hopefully.
    0:14:51 – I think everybody worries about the WALL-E scenario,
    0:14:53 right, where everybody’s just kind of
    0:14:54 sitting in their chairs, floating around,
    0:14:56 getting fat, drinking slurpees,
    0:14:58 and watching movies or playing video games all day.
    0:15:00 – Yeah, but all, you know, recently there’s been this big,
    0:15:03 you know, uptick in like people are getting healthy,
    0:15:05 at least in like Silicon Valley circles and stuff.
    0:15:07 And even me, I lost a bunch of weight.
    0:15:10 I’ve been using AI even to give me advice on like,
    0:15:12 how to go to the gym and like how to work out
    0:15:13 and gain muscle and all this stuff.
    0:15:15 And it’s like people were like, yeah,
    0:15:16 we thought it was gonna be like WALL-E,
    0:15:17 everyone’s super fat.
    0:15:19 Instead it’s gonna be like WALL-E, everyone’s super jacked.
    0:15:20 – Yeah.
    0:15:21 (laughing)
    0:15:23 – Like, oh, with this free time, AI’s helping them,
    0:15:24 teaching them how to be healthy.
    0:15:26 We have all these new drugs that make you healthier and stuff.
    0:15:30 And it might be like that, like WALL-E, but we’re all jacked.
    0:15:33 So yeah, I don’t know, but my gut feeling is
    0:15:37 that there will always be like the capitalism will just
    0:15:41 evolve, like it’s not gonna disappear anytime soon.
    0:15:43 In a thousand years or, you know, whatever who knows,
    0:15:47 maybe we’re in Star Trek land, but no time that soon.
    0:15:48 – Yeah.
    0:15:50 (upbeat music)
    0:15:51 – We’ll be right back.
    0:15:53 But first, I wanna tell you about another great podcast
    0:15:54 you’re gonna wanna listen to.
    0:15:58 It’s called Science of Scaling, hosted by Mark Roberge,
    0:16:01 and it’s brought to you by the HubSpot Podcast Network,
    0:16:04 the audio destination for business professionals.
    0:16:06 Each week, host Mark Roberge,
    0:16:09 founding chief revenue officer at HubSpot,
    0:16:11 senior lecturer at Harvard Business School,
    0:16:13 and co-founder of Stage Two Capital,
    0:16:16 sits down with the most successful sales leaders in tech
    0:16:19 to learn the secrets, strategies, and tactics
    0:16:21 to scaling your company’s growth.
    0:16:23 He recently did a great episode called,
    0:16:26 How Do You Solve for a Siloed Marketing in Sales?
    0:16:28 And I personally learned a lot from it.
    0:16:30 You’re gonna wanna check out the podcast,
    0:16:34 listen to Science of Scaling wherever you get your podcasts.
    0:16:37 (upbeat music)
    0:16:39 – I don’t know, I feel like a lot of people
    0:16:43 underestimate humans, you know,
    0:16:44 especially when we’re talking about AI.
    0:16:47 I feel like humans will figure out new ways
    0:16:49 to add value to the world.
    0:16:53 Right now, one of the ways humans add value to the world
    0:16:56 is doing the actual construction, building the stuff,
    0:17:00 doing the actual work that needs to get done.
    0:17:03 But I do think in a sort of post-AI world,
    0:17:07 maybe call it a post-AGI world where robots and AI
    0:17:09 can handle a lot of this stuff for us,
    0:17:12 I still think humans are going to figure out
    0:17:14 new ways to add value,
    0:17:16 we just don’t know what that looks like yet.
    0:17:18 Also, questions like this about how can governments
    0:17:19 and corporations adjust.
    0:17:23 My thoughts on that are not necessarily
    0:17:25 that corporations and governments need to adjust,
    0:17:27 it’s that we need to adjust.
    0:17:29 The people, the general population,
    0:17:32 the public should be adjusting.
    0:17:35 You know, what we’re seeing right now,
    0:17:37 I think is one of the greatest opportunities
    0:17:41 in anybody’s lifetime to quickly build and iterate
    0:17:44 and optimize and create something yourself
    0:17:45 that adds value to the world.
    0:17:48 So maybe you’re going to lose your job at Walmart
    0:17:53 as a checker because, you know, you can just walk out now,
    0:17:56 but you also just have the opportunity to use AI
    0:17:58 to help you create a business that can generate value
    0:18:00 and income for you.
    0:18:04 So, you know, we’re in this very empowering point in time
    0:18:07 where I don’t think we should be going,
    0:18:10 what can governments and corporations do to protect us
    0:18:13 from AI and we should be thinking,
    0:18:16 what can I do with AI so that I’m not that reliant
    0:18:17 on corporations and government?
    0:18:21 I think we need to have a sort of paradigm shift.
    0:18:24 We need to think differently, right?
    0:18:26 You know, not trying to quote Apple here,
    0:18:28 but we do need to think differently
    0:18:31 in that we need to not look at corporations
    0:18:34 and government to solve these problems for us.
    0:18:36 We need to look at how can we be empowered
    0:18:38 by this new technology so that we aren’t reliant
    0:18:40 on government and corporations.
    0:18:40 – Yeah, I agree.
    0:18:44 Everyone has a free assistant, you know, in their pocket now,
    0:18:46 you know, they used to have a computer,
    0:18:47 now they have an actual assistant
    0:18:50 and that assistant’s gonna get more and more intelligent.
    0:18:53 So, yeah, that’s, and that’s probably how you’ll actually
    0:18:55 see changes in government and corporations
    0:18:57 is like from the bottom up, you know,
    0:18:58 like of like people learning this stuff
    0:19:01 and then changing the government.
    0:19:03 And I’m pretty encouraged there.
    0:19:04 Like I think AI, like the people who adopt AI,
    0:19:07 they’re gonna become enhanced in a way
    0:19:10 and they’re gonna have an outsized impact on the world,
    0:19:12 which should give them a better chance
    0:19:13 of actually influencing the government
    0:19:15 or becoming part of the government,
    0:19:16 helping restructure things.
    0:19:18 – I mean, the question of will our economic systems
    0:19:20 change forever, I think the answer is yes.
    0:19:21 – Yeah.
    0:19:23 – How it’s gonna change, I don’t think we know yet.
    0:19:25 I don’t think anybody is gonna be really good
    0:19:28 at predicting that in this moment in time right now.
    0:19:30 But definitely, I think the economic system
    0:19:32 is going to change.
    0:19:34 I mean, we’re already seeing it change, right?
    0:19:37 You know, people are trying to figure out ways
    0:19:39 to bypass the existing financial systems
    0:19:42 and banking systems through crypto and things like that.
    0:19:44 It’s going to continue to evolve that way,
    0:19:47 no matter how much people sort of fight that stuff.
    0:19:49 I do think it’s going to kind of like evolve
    0:19:54 towards that more decentralized crypto-centric finance
    0:19:56 in the future.
    0:19:57 But yeah, definitely the economic system
    0:19:58 is gonna change forever.
    0:20:02 I just, I can’t make a prediction on how yet.
    0:20:03 – Yeah.
    0:20:07 Next question here is from Universal AI Podcast.
    0:20:09 Their question is, do you think open source community
    0:20:11 can band together and create an AGI
    0:20:13 that can compete with the big corporations?
    0:23:16 So really, I think the main question here is,
    0:20:20 can open source build AI models that can compete
    0:20:22 with an open AI, with a Microsoft, with a Google,
    0:20:25 with an Anthropic, with companies that have millions,
    0:20:27 potentially billions of dollars backing them
    0:20:29 with compute power?
    0:20:32 – Yeah, I think we’ve talked about this a little bit before,
    0:20:33 not sure if we put it out there,
    0:20:37 but I personally really hope so.
    0:20:40 Like really, really hope so.
    0:20:41 But I am skeptical.
    0:20:46 I do feel that OpenAI is further ahead than people realize.
    0:20:49 I think OpenAI is probably about one to two years ahead.
    0:20:52 And when GPT-5 comes out,
    0:20:53 I think that’s going to become apparent.
    0:20:55 Maybe I’m wrong, delusional,
    0:20:56 but that’s my current belief.
    0:20:58 And it feels like, yeah,
    0:21:01 open source is catching up to GPT-4,
    0:21:03 but if GPT-5 is two years ahead of that,
    0:21:05 you know, it’s like, well,
    0:21:06 then maybe they’ll always be behind.
    0:21:10 And then at some point when AI starts self-improving,
    0:21:11 let’s say by GPT-6,
    0:21:13 that it’s actually kind of improving itself
    0:21:15 a little bit too by itself,
    0:21:19 you could get in this exponential growth of the AI,
    0:21:21 where then whoever gets there first,
    0:21:24 or the first two or three companies that get there first,
    0:21:26 it gets better so fast that, yeah, sure,
    0:21:30 there’ll be cool open source models for individuals,
    0:21:33 but maybe the most powerful stuff will always be
    0:21:35 in like two or three companies’ hands, unfortunately.
    0:21:36 And then there’s the question of like, yeah,
    0:21:38 who gets to use it?
    0:21:40 Do the government say like, holy crap,
    0:21:42 this is so powerful, it changes the entire world?
    0:21:44 And yeah, we have to like really, you know,
    0:21:46 treat it like nuclear bombs or something like that.
    0:21:48 And I really hope that’s not what happens.
    0:21:50 – Yeah, so when it comes to open source,
    0:21:53 one of, obviously the biggest bottleneck is compute, right?
    0:21:56 So like, you look at companies like Google,
    0:21:59 like Microsoft, like Anthropic, like Mistral,
    0:22:00 some of these companies that are building
    0:22:01 these large language models,
    0:22:03 they have millions and in some of these scenarios,
    0:22:05 billions of dollars of investment,
    0:22:09 which they could use on GPUs to train these models.
    0:22:11 Even the open source community,
    0:22:13 even all of the large language models
    0:22:16 that are out there in the open source world right now,
    0:22:20 probably cost millions of dollars to train, right?
    0:22:22 So you still need one of these big companies,
    0:22:24 like a Meta, like a Mistral,
    0:22:27 that has all of this financial availability
    0:22:29 to train these models,
    0:22:32 even for the open source to be successful, right?
    0:22:34 So I don’t know if open source,
    0:22:36 at least not in the near term,
    0:22:38 I don’t know if it can be successful
    0:22:41 without the big corporations deciding to get involved
    0:22:42 and help train these models,
    0:22:45 because the open source world of LLMs right now,
    0:22:46 probably wouldn’t even exist
    0:22:48 if it wasn’t for Meta dropping Llama 2,
    0:22:51 which everybody’s sort of kind of fine-tuned
    0:22:53 and built off the back of that, right?
    0:22:57 So it’s like, I don’t know, at the end of the day,
    0:23:00 open source is amazing and we want it to grow.
    0:23:02 We want more people tackling problems
    0:23:04 and more people trying to figure out
    0:23:09 how to make these models better,
    0:23:12 rather than fewer people working on these models.
    0:23:13 But at the same time,
    0:23:16 you do need millions of dollars
    0:23:17 to train one of these models right now,
    0:23:19 even if it is open source.
    0:23:20 – Yeah, in the future,
    0:23:24 like when we’re talking about GPT-5 or GPT-6 level models,
    0:23:25 you’re probably talking about way more than that.
    0:23:26 I mean, I think the other day,
    0:23:29 Google came out and said that over the next,
    0:23:31 I don’t know if they said next five to 10 years,
    0:23:33 that they’ll probably spend like,
    0:23:37 end up spending like $100 billion on AI development.
    0:23:39 – And that news came right after Microsoft
    0:23:41 said that them and OpenAI
    0:23:44 are partnering on a $100 billion data center, right?
    0:23:45 – Yeah.
    0:23:47 – So basically, OpenAI and Microsoft said,
    0:23:50 we’re gonna partner on a $100 billion data center
    0:23:53 filled with GPUs so we can train better and better models.
    0:23:55 Recently, the CEO of DeepMind,
    0:23:58 Demis Hassabis, came out and said,
    0:24:01 in response to Microsoft building
    0:24:02 this $100 billion data center,
    0:24:04 that over the next five to 10 years,
    0:24:07 Google’s gonna spend at least that on their data centers.
    0:24:08 So that was like him sort of responding
    0:24:10 to the fact that Microsoft and OpenAI
    0:24:13 were building this $100 billion data center.
    0:24:17 But Meta said, do you remember the exact number Meta said?
    0:24:21 I think they said they have something like 600,000 H100s,
    0:24:22 something insane like that.
    0:24:23 – Yeah, something, I think a chart came out,
    0:24:25 showing like they have the most or something.
    0:24:30 – Yeah, so Meta has 350,000 H100s right now.
    0:24:31 – Oh my God.
    0:24:32 (laughing)
    0:24:36 – Yeah, so, but Mark Zuckerberg did make a comment
    0:24:39 recently saying they have the equivalent
    0:24:43 of 600,000 H100s, whatever that means.
    0:24:44 – Yeah.
    0:24:48 – So, I don’t really feel like an underground dude
    0:24:50 in his basement is going to train a model
    0:24:53 that’s gonna compete with a GPT-4,
    0:24:57 they just don’t have the compute availability to do that.
    0:24:59 At least not in this point in time.
    0:25:00 – Yeah.
    0:25:03 – This question comes from aileaksandnews.
    0:25:06 What is one thing you’re confident we won’t see
    0:25:08 in the next 12 months?
    0:25:09 – I don’t know if I could be confident
    0:25:11 about anything in the next 12 months.
    0:25:12 I mean, that’s a crazy thing.
    0:25:15 It’s like, it’s really hard to know where the,
    0:25:18 you know, where OpenAI is private,
    0:25:19 like where their model is at.
    0:25:24 I don’t expect to see AGI in the next 12 months
    0:25:26 or, you know, artificial super-intelligence
    0:25:28 or anything like that.
    0:25:29 And partially because I think we’re probably,
    0:25:32 we need, you know, we’re probably limited
    0:25:37 by the current compute, but that’s about it.
    0:25:39 – Yeah, I mean, I have the same thoughts.
    0:25:41 Like, almost every time I’ve tried to predict
    0:25:44 something that won’t happen, I’ve been wrong, right?
    0:25:46 – Yeah, look at AI video, right?
    0:25:47 – Yeah, I’ve made predictions
    0:25:50 about how far AI video is gonna come, how fast.
    0:25:51 I was totally wrong on my prediction,
    0:25:53 and what I thought wasn’t coming for years
    0:25:56 came months later, right?
    0:25:58 – I was very optimistic about AI video.
    0:26:01 Like, I was putting out AI video threads for a long time,
    0:26:02 talking about how it was, you know,
    0:26:05 gonna really change Hollywood and whatnot.
    0:26:06 But yeah, it’s hard to know.
    0:26:10 Like, yeah, AI video, it could have taken years,
    0:26:12 but now it looks like it’s actually going to be amazing
    0:26:13 in the next six months.
    0:26:14 – Yeah, yeah, yeah.
    0:26:17 – So there’s not much that I would be willing
    0:26:19 to like confidently predict in 12 months.
    0:26:20 – No, I’m with you.
    0:26:22 I think AGI and ASI are probably the two things
    0:26:25 I’m fairly confident we’re not gonna see within 12 months.
    0:26:29 Definitely not ASI, but most likely not AGI either.
    0:26:31 – Yeah, I’m fairly confident that AI
    0:26:33 will not replace all jobs in 12 months.
    0:26:35 They will not be that good.
    0:26:38 – We won’t have Skynet taking over the world
    0:26:40 within 12 months.
    0:26:41 – Yeah, probably not.
    0:26:43 – Probably, I like how you said probably not.
    0:26:45 I’m not definitely not.
    0:26:47 – Yeah, probably not, yeah, probably not.
    0:26:49 I think I saw something the other day where
    0:26:52 one of the government AI systems in China
    0:26:54 is actually called Skynet.
    0:26:57 – Yeah, a few people have named their company Skynet.
    0:26:59 I’ve seen a couple different Skynet companies
    0:27:04 and I’m like, you do realize Skynet was the bad guy, right?
    0:27:07 – Yeah, I think they thought it was humorous or something,
    0:27:09 but yeah, it’s literally China’s AI system
    0:27:11 that monitors the public.
    0:27:15 – Yeah, all right, so this one comes from Akshay Lazarus,
    0:27:18 and I’m sorry if I butchered that name.
    0:27:20 But he says, “I’d love to hear you discuss
    0:27:21 “the future of tech.
    0:27:24 “For example, will UI switching from GUIs,
    0:27:27 “graphical users or interfaces, to voice?
    0:27:29 “Do we see there being a consolidation
    0:27:32 “of all application tech by cloud-based hyperscalers?
    0:27:34 “What is the role of startups as data resides
    0:27:36 “with these hyperscalers?”
    0:27:38 Well, let’s start with the first part of the question.
    0:27:40 Like, what do you think the sort of future
    0:27:42 of these user interfaces are?
    0:27:44 Like less graphical user interface, more voice.
    0:27:46 What are your thoughts there?
    0:27:48 – I think both.
    0:27:49 I mean, I think you’ll have both.
    0:27:53 I mean, I think you’ll have a minority report kind of thing
    0:27:55 where you’ve got like these visual systems
    0:27:58 that you can interact with and they get really intelligent.
    0:28:00 And yeah, of course, you’ll be able to also use voice
    0:28:02 to interact with it as well, kind of like her
    0:28:04 and other movies, right?
    0:28:05 – Well, I mean, we’re already kind of doing that, right?
    0:28:08 Like that was sort of the point of Siri, right?
    0:28:11 That was sort of the point of Amazon Alexa.
    0:28:12 So we already kind of have that.
    0:28:14 They’re just kind of dumb AIs right now.
    0:28:17 They’re not very great AIs, but we’re already kind of,
    0:28:19 we’ve already had that for a while now.
    0:28:21 So I’m going to speak for myself here.
    0:28:23 When it comes to like speaking out loud
    0:28:25 to some of these apps,
    0:28:27 I feel very awkward doing that in public, right?
    0:28:29 Like I’ve got these Meta Ray-Ban sunglasses
    0:28:32 that have Llama 2 built into them.
    0:28:34 I feel awkward as hell in public
    0:28:37 asking questions to my sunglasses, right?
    0:28:39 ‘Cause you go, “Hey, Meta,” and then you ask,
    0:28:41 and that’s like saying, “Hey, Siri,” right?
    0:28:42 That’s what starts the prompt
    0:28:44 so I can start talking to these sunglasses.
    0:28:46 I feel ridiculous in public saying,
    0:28:49 “Hey, Meta,” and then talking to sunglasses.
    0:28:51 I feel like I’d feel the same way with an AI pin, right?
    0:28:55 The humane pin pressing a button and then speaking out loud.
    0:28:57 Like if I’m sitting in a coffee shop or a library
    0:29:00 or somewhere that tends to be a more quiet place,
    0:29:04 I’m definitely not using the voice interface, right?
    0:29:05 But even just being out in public,
    0:29:07 like if I’m walking through my supermarket
    0:29:09 and there’s other people around,
    0:29:13 I feel like a crazy person talking to my gadgets, you know?
    0:29:15 – Yeah, I mean, that might be an age thing too, right?
    0:29:16 – Could be.
    0:29:18 – I think that young people may, you know,
    0:29:19 AI gets better and better,
    0:29:20 they may just be totally used to it.
    0:29:22 Like of course I’m talking to it.
    0:29:25 Why would I, you know, it’s like my son with like Siri and stuff.
    0:29:28 He doesn’t understand the world without that, right?
    0:29:29 – That’s true, yeah.
    0:29:31 – And he also just thinks Siri’s so dumb
    0:29:32 as soon as it gets better.
    0:29:34 I think he’s gonna love interacting with it
    0:29:37 however he can, including being outside.
    0:29:39 – Yeah, but yeah, I also would feel very weird
    0:29:42 like talking to, you know, AI outside,
    0:29:43 especially here in Japan.
    0:29:47 People would be like, he’s a psycho, what is this guy doing?
    0:29:50 – I mean, if really the question is essentially like,
    0:29:52 yeah, we’ve got huge incumbents
    0:29:53 that are sort of running everything right now,
    0:29:54 how does anybody compete?
    0:29:56 – I personally think it’s a big open question.
    0:30:00 Like, yeah, yeah, will, you know,
    0:30:05 when OpenAI has AGI, you know, what do startups do?
    0:30:07 But I think they’ll always be like,
    0:30:10 I don’t think, you know,
    0:30:13 OpenAI is gonna want to build every single use case
    0:30:14 and every single kind of product.
    0:30:16 They’re wanting to build the technology
    0:30:18 and have other people build on top of it.
    0:30:20 And I think if they tried to do everything,
    0:30:21 they’re gonna run into like really major
    0:30:24 like regulation problems, if I had to guess.
    0:30:27 So I think, you know, I don’t like regulation,
    0:30:28 but that might be one area where like,
    0:30:30 regulation actually comes to save the day
    0:30:32 because I do think the big tech companies
    0:30:35 are not going to try to do everything.
    0:30:37 They’re going to try to build the best AI technology.
    0:30:38 And then on top of that,
    0:30:40 there’ll be so many things for startups to build.
    0:30:42 – Yeah, and really if the question is like,
    0:30:43 how do they compete?
    0:30:46 Honestly, our Greg episode, our Greg Eisenberg episode
    0:30:49 is literally that entire topic of like,
    0:30:52 when do you have these big incumbent companies,
    0:30:54 the Googles, Microsoft’s, Amazon’s,
    0:30:55 all these companies out there,
    0:30:57 how do these little companies compete?
    0:30:59 Well, that’s exactly what we talked about with Greg.
    0:31:01 And Greg was talking about, well,
    0:31:03 you got to create brand, you’ve got to create community.
    0:31:05 You’ve got to, you know, what are you going to do
    0:31:07 to stand out, right?
    0:31:10 I just listened back to that episode when it went live.
    0:31:13 And Greg was saying things like, you know,
    0:31:15 people choose sides, right?
    0:31:17 I’m an Apple guy, I’m a PC guy.
    0:31:21 I’m a Nike guy, Adidas guy, Puma guy, whatever, right?
    0:31:25 Like people sort of gravitate towards brands
    0:31:27 and attach their identity to brands.
    0:31:29 And a lot of times they’re the brands
    0:31:32 that have good community connected to them, right?
    0:31:35 So that is in my opinion, and Greg’s opinion,
    0:31:39 and it’s my opinion because I heard Greg give me this opinion,
    0:31:43 but you know, that’s how you stand out,
    0:31:44 is through brand, through community,
    0:31:47 through those things that differentiate you, right?
    0:31:51 People don’t necessarily want to work with Google
    0:31:53 where they’re just like a number
    0:31:56 and a giant database of a million people.
    0:32:00 But I use, for instance, Beehive for my AI newsletter, right?
    0:32:03 And with Beehive, I’ve talked to the founders.
    0:32:04 They actually respond to people on X.
    0:32:07 They’re still a smaller, nimble company
    0:32:08 with community around them,
    0:32:12 where everybody that uses Beehive sort of loves talking
    0:32:15 about it and recommends other people to go use Beehive.
    0:32:18 And they’ve got that community element, that brand element.
    0:32:20 And there’s a narrative that sort of formed
    0:32:24 of this like ConvertKit versus Beehive narrative, right?
    0:32:27 And so you’ve got that kind of thing going on
    0:32:29 where people might like pick their sides.
    0:32:31 And the reason they’ll pick a side
    0:32:34 is they connect with the community.
    0:32:37 They connect with the brand that that company built.
    0:32:39 And so I think, you know, as Greg said,
    0:32:40 that’s how they stand out.
    0:32:43 And, you know, I totally agree with that opinion.
    0:32:44 – Yeah, yeah.
    0:32:46 And I think, you know, AI will only like make that
    0:32:47 even more so, right?
    0:32:49 Like where people want more human connection there, right?
    0:32:51 Like, yeah, I know the founders are like,
    0:32:53 “Oh, the people in the community, I love them.”
    0:32:55 And I feel like they actually like listen to what I say.
    0:32:58 And this is like one reason to do like Q&A like this, right?
    0:33:00 Like is to actually like, yeah, we’re doing the show,
    0:33:02 but also, you know, it’s four other people
    0:33:04 and we’re, you know, interacting with those people
    0:33:06 versus just, oh, it’s just what me and Matt say.
    0:33:08 Like, startups in general, beyond community,
    0:33:13 I mean, I think, you know, AI is going to create
    0:33:15 so many opportunities for startups.
    0:33:16 And I just, I don’t believe that, like,
    0:33:18 three companies are going to control everything.
    0:33:21 And there’s so many, there’s so many problems
    0:33:23 to be solved in the world.
    0:33:25 And it’s not going to be two or three companies
    0:33:26 that solve all of them.
    0:33:28 Like there will always be new opportunities.
    0:33:30 – I think if it does become three companies
    0:33:32 trying to do it all, governments will intervene
    0:33:35 and be like, “No, you can’t have this much power.”
    0:33:37 I just, I think that’s where it’ll go.
    0:33:38 – Yeah, yeah.
    0:33:40 Okay, this one comes from my Twitter feed,
    0:33:42 comes from Jason Vanish.
    0:33:44 Hope I’m saying that correctly.
    0:33:46 Jason’s always been really awesome on my Twitter
    0:33:50 and doing a lot of good comments on my Twitter.
    0:33:53 How much better can LLMs get?
    0:33:55 At some point, there aren’t larger
    0:33:57 and larger training datasets, are there?
    0:34:01 At that point, then, do we hit an AI plateau?
    0:34:04 Are there different methods that could leapfrog LLMs?
    0:34:06 Like the computer industry had computing breakthroughs
    0:34:08 from the 70s to the 90s?
    0:34:09 I mean, I think with this,
    0:34:11 we’re definitely like speculating
    0:34:13 ’cause like neither one of us are like,
    0:34:15 the smartest AI engineers in the world
    0:34:16 or something. Like, I code.
    0:34:17 – Yeah, I mean, you can just end that sentence
    0:34:19 at “neither of us are really the smartest.”
    0:34:22 (laughing)
    0:34:23 – Well, yeah.
    0:34:24 There was an interview though,
    0:34:26 I think with like Sam Altman and Ilya,
    0:34:28 maybe, I don’t know, maybe it was like five months ago,
    0:34:29 six months ago, something like that,
    0:34:34 where they said that with existing data in the world,
    0:34:37 with video and audio and other things
    0:34:40 that we’re just now starting to train on,
    0:34:43 that there’s a pretty clear path to major improvements
    0:34:47 for the next three to five years, right?
    0:34:47 And so if that’s the case,
    0:34:51 We’re probably talking about like GPT-6, GPT-7
    0:34:53 before you need any kind of breakthroughs.
    0:34:54 – Yeah.
    0:34:56 – And if GPT-5 is as good as they’re saying,
    0:35:00 I think by GPT-6, 7, we’re talking about like work
    0:35:02 looking very, very different.
    0:35:03 – Right.
    0:35:05 – So I think the next three to five years,
    0:35:08 there’s already enough data for the world
    0:35:09 to be entirely transformed,
    0:35:12 mostly in positive ways.
    0:35:14 And then beyond that,
    0:35:17 I’m sure we will need more breakthroughs at some point.
    0:35:18 And there’s a big open question too,
    0:35:20 that a lot of engineers are debating
    0:35:22 is like with synthetic data, right?
    0:35:24 Like, is synthetic data actually useful?
    0:35:26 Like data that the AI helps generate.
    0:35:29 You then train on that new data.
    0:35:31 You know, it’s synthetic data.
    0:35:33 And I think it’s not entirely clear yet,
    0:35:35 but I could be wrong on that.
    0:35:36 – Yeah.
    0:35:37 – But if synthetic data works,
    0:35:39 well, then yeah, there’s,
    0:35:42 we’ll probably never run out of data to be training on.
    0:35:43 – Yeah.
    0:35:44 And I mean, everything we’re talking about at this point
    0:35:46 is very sort of theoretical, but-
    0:35:47 – Yeah.
    0:35:48 – You know, at some point,
    0:35:50 I don’t think humans need to continue
    0:35:53 to improve on training the large language models.
    0:35:56 Like once we hit a point of, you know, AGI,
    0:35:59 we get to a point where the models
    0:36:01 figure out what they need to train on next
    0:36:03 and figure out how to self-improve
    0:36:05 and sort of get better and better.
    0:36:08 I think there is going to be sort of a model
    0:36:10 that’s kind of like the last model
    0:36:11 that humans needed to train.
    0:36:13 And then the models beyond that
    0:36:15 become the models where the AI
    0:36:17 is sort of a self-improving model,
    0:36:19 where it just, it gets better and better
    0:36:20 and better on its own, right?
    0:36:22 It does its own,
    0:36:25 it’s reinforcement learning through AI feedback
    0:36:26 instead of reinforcement learning
    0:36:29 through human feedback at some point, right?
    0:36:31 You know, to some degree, we have that right now, right?
    0:36:33 Like that’s kind of how GANs,
    0:36:35 Generative Adversarial Networks work,
    0:36:37 where you have an AI that’s a discriminator
    0:36:39 and then you have the AI that’s the generator
    0:36:41 and the AI that’s the generator
    0:36:43 tries to generate something that fools the AI
    0:36:46 that’s the discriminator and they go back and forth
    0:36:48 until the discriminator is actually fooled
    0:36:50 by what the generator made.
    0:36:52 I think we’re going to see that kind of thing
    0:36:54 get more and more prominent with large language models
    0:36:56 where it gets to a point where
    0:36:57 it’s giving itself its own feedback
    0:36:59 and getting better and better and better.
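    The generator-versus-discriminator loop Matt describes can be caricatured in a few lines of code. This is a deliberately simplified sketch, not a real GAN: there are no neural networks, and the “generator update” is handed the real data’s statistics directly instead of a gradient from the discriminator. All function names and numbers here are invented for illustration.

    ```python
    import random

    def discriminator(sample, real_mean, tolerance=0.1):
        # Stand-in for a learned discriminator: a sample looks "real"
        # if it lands close to the statistics of the real data.
        return abs(sample - real_mean) < tolerance

    def generate_until_fooled(real_mean, lr=0.05, max_steps=1000):
        # The "generator" starts out producing obviously fake samples
        # and is nudged step by step until the discriminator is fooled.
        guess = 0.0
        for _ in range(max_steps):
            sample = guess + random.gauss(0, 0.01)  # generator output + noise
            if discriminator(sample, real_mean):
                return sample  # discriminator fooled; the back-and-forth stops
            guess += lr * (real_mean - guess)  # simplified update step
        return guess

    random.seed(0)
    print(generate_until_fooled(5.0))  # converges to a value near 5.0
    ```

    In a real GAN both networks are trained jointly, and the generator only ever sees the discriminator’s verdict, not the real data itself; the point of the sketch is just the adversarial loop structure.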
    0:37:01 But then I also think there’s a phase
    0:37:03 after large language models where
    0:37:06 we start talking more about the embodied AI, right?
    0:37:09 Like we were looking at the Boston Dynamics robot.
    0:37:11 We’ve talked about the Figure One robot.
    0:37:12 We’ve got Tesla Optimus.
    0:37:15 All of these are robots that, you know,
    0:37:17 they’re gonna have AI injected into them.
    0:37:21 And when we start putting AI into some of these robots,
    0:37:24 some of these machines, well, now they have
    0:37:25 more stuff they need to learn on.
    0:37:27 They need to learn on how to interact
    0:37:28 with the physical world.
    0:37:31 And when my arm moves like this, what’s the result?
    0:37:32 When my arm moves like this?
    0:37:34 And now it’s starting to get trained more
    0:37:38 on the domain knowledge of just how to operate this robot.
    0:37:41 So I think there’s gonna be this shift
    0:37:45 to now we need to train these LLMs
    0:37:48 to work with the specific use case like embodied robots.
    0:37:51 Like, you know, going into drones
    0:37:54 or whatever we use it on next.
    0:37:56 And they need to sort of train on the domain knowledge
    0:38:00 to operate the vehicle that’s now embodying that AI,
    0:38:01 if that makes sense.
    0:38:02 – Yeah, totally.
    0:38:04 I think a big open question too is like,
    0:38:07 can AI start solving real world problems too?
    0:38:09 Like can it help cure cancer?
    0:38:12 Can it help us figure out how to make the robots better?
    0:38:14 Right, ’cause if that’s the case,
    0:38:16 then we’ll probably get to this kind of exponential point
    0:38:18 where things will just keep getting improving.
    0:38:21 And then yeah, something that seemed like such a small
    0:38:24 little breakthrough, you know, with GPT-1
    0:38:27 and with transformers, you know, concept of transformers.
    0:38:29 Yeah, that takes us, it could take us all the way.
    0:38:30 I think that’s an open question.
    0:38:32 We don’t know.
    0:38:35 It could be that like just a ton of data is all we needed.
    0:38:37 – A ton of data and a ton of GPUs to process it.
    0:38:38 – Yeah, yeah.
    0:38:40 Also, I think, you know, LLMs are gonna be just like
    0:38:42 one part of the equation too, like you said, the robots.
    0:38:45 But also my understanding is that, you know,
    0:38:47 the rumors are that OpenAI has created this thing
    0:38:50 called Q*, which is supposed to be some kind of logic engine.
    0:38:52 I don’t think there’ve been any details revealed about that.
    0:38:54 Is that just like some kind of other LLM?
    0:38:56 I don’t know.
    0:38:59 But so in theory, you would have like the LLM
    0:39:01 then having some kind of like a logic engine
    0:39:02 attached on top of it.
    0:39:05 So in the future, these systems could be really complicated
    0:39:06 for like regular people.
    0:39:08 You wouldn’t know when that’s going on.
    0:39:09 – Well, if you ask Yann LeCun, right?
    0:39:12 He’s the lead AI scientist over at Meta.
    0:39:13 Also worked with Geoffrey Hinton,
    0:39:16 one of the Godfathers of AI, as they call them, right?
    0:39:18 He doesn’t believe large language models
    0:39:19 will ever achieve AGI.
    0:39:22 He thinks that the sort of technology underneath
    0:39:24 large language models just will never get
    0:39:26 to that point of AGI.
    0:39:28 And over at Meta, they’re developing something
    0:39:33 they call V-JEPA, which stands for Video Joint
    0:39:35 Embedding Predictive Architecture.
    0:39:39 But basically it’s a way for like the AI models
    0:39:41 to see and understand the world
    0:39:43 and sort of train themselves by actually doing
    0:39:45 and getting a response.
    0:39:48 And he believes this is what will actually lead to AGI
    0:39:50 and not necessarily the large language models
    0:39:52 that everybody’s familiar with today.
    0:39:54 So I don’t know.
    0:39:56 There might be a point where large language models,
    0:39:59 there is like a point of diminishing returns
    0:40:01 where they just don’t get any better.
    0:40:03 But I still think there’s a lot of models
    0:40:05 that can replace large language models
    0:40:09 that continue to improve what the capabilities are.
    0:40:10 – Yeah, I don’t know.
    0:40:12 He like, I know he’s highly respected
    0:40:14 and obviously he knows more about this technology
    0:40:15 than I do, but he’s also made some predictions
    0:40:18 with like towards like GPT-4 and things like that
    0:40:19 that didn’t seem that accurate.
    0:40:22 Like he was really skeptical on how good GPT-4
    0:40:24 was gonna be and stuff like that.
    0:40:27 And now like with the rumors about GPT-5,
    0:40:28 if they turn out to be true,
    0:40:30 well then obviously Open AI is doing something
    0:40:31 that he doesn’t understand.
    0:40:32 – Yeah.
    0:40:34 – Right, and so we don’t know what that is.
    0:40:37 Is this Sam Altman bullshit? I don’t think so.
    0:40:39 Like, you know, and the other day Sam Altman said,
    0:40:41 about, like, you know, people who are ignoring
    0:40:43 where things are gonna be going with GPT-5,
    0:40:44 I think he said something like,
    0:40:46 we’re gonna steamroll you.
    0:40:47 – Yeah.
    0:40:48 – That’s what he said.
    0:40:49 Which is just kind of shocking.
    0:40:51 And he delivered it in this kind of like
    0:40:52 really, you know, peaceful, nice way.
    0:40:54 But it was like, yeah, we’re gonna steamroll you.
    0:40:56 And so I think they have something.
    0:40:59 I think it’s gonna be shocking, but who knows?
    0:41:02 Who knows what’s gonna be the best way
    0:41:03 to do this in the future?
    0:41:06 – Yeah, I actually think that government
    0:41:09 is going to be a huge bottleneck over time, right?
    0:41:11 I think, I don’t wanna get political with this,
    0:41:13 but I do think that the government
    0:41:16 is going to be a bottleneck to progress.
    0:41:18 And the reason I say that is because I was recently
    0:41:22 listening to a conversation with Ray Kurzweil
    0:41:25 and Jeffrey Hinton, right?
    0:41:28 And they were talking about these AI models
    0:41:31 being essentially like the best correlation machines
    0:41:32 on the planet.
    0:41:37 They can find correlations between seemingly unrelated things
    0:41:39 that humans could never spot.
    0:41:41 And that’s the reason why these large language models
    0:41:45 will likely find the cures to various cancers and diseases
    0:41:48 and possibly solve climate change and world hunger
    0:41:49 and all of this stuff, right?
    0:41:50 They’ll find correlations.
    0:41:52 – But they’re also very good at finding corruption.
    0:41:53 – That’s true.
    0:41:56 – Which, I had a viral tweet on this like maybe a year ago
    0:41:57 where I talked about that.
    0:41:59 And I think it’s gonna be a big thing
    0:42:00 where you can actually see how,
    0:42:02 where all the money is moving in the government
    0:42:04 and the humans cannot process this.
    0:42:07 And it’s a very easy way to like be corrupt and hide that
    0:42:08 is like through moving money around
    0:42:10 in like really sneaky ways.
    0:42:13 – Which is another thing blockchain sort of solves as well.
    0:42:13 But you know, we don’t do that.
    0:42:15 – Yeah, but AI will be able to like look at that and go like,
    0:42:17 oh yeah, this person’s doing this.
    0:42:18 This is why they’re doing it.
    0:42:20 This is why they, you know, sign this into law.
    0:42:22 And so yeah, I could definitely see government people
    0:42:25 being very like not wanting that to come out.
    0:42:27 – Yeah. Well, you know, where I was going
    0:42:30 with them, the government being the bottleneck is,
    0:42:32 I think these large language models
    0:42:35 will likely find cures for cancer,
    0:42:39 you know, solutions for climate change, you know,
    0:42:41 ways to end hunger in, you know,
    0:42:43 parts of the world that need it.
    0:42:46 But I think government is gonna get in the way
    0:42:50 of making what the AI finds a reality, right?
    0:42:52 Like when there’s new drugs that come to market,
    0:42:55 how long does it take for the drug to go through
    0:42:57 animal trials and then human trials?
    0:43:00 And then, you know, all of the steps
    0:43:01 before the drug finally gets on the market
    0:43:03 for humans to use, right?
    0:43:07 I think AI will probably find a lot of solutions
    0:43:08 to a lot of problems.
    0:43:11 And then the government’s red tape is going to be
    0:43:13 what slows down these solutions
    0:43:16 actually becoming live to the world.
    0:43:18 – Yeah. I mean, that’s why there’s movements
    0:43:19 like e/acc and things like that, right?
    0:43:22 It’s, they are really afraid of that.
    0:43:24 It’s like, I think people talked about like, you know,
    0:43:26 we could have solved a lot of our energy problems
    0:43:28 with like nuclear power, but then regulation
    0:43:31 like stopped that from happening, right?
    0:43:33 And I think France is one of the big success stories
    0:43:35 where they adopted nuclear power
    0:43:37 and they haven’t had energy problems
    0:43:38 like a lot of other countries have.
    0:43:41 And so there’s a lot of fear around that.
    0:43:43 And, but then the challenge is like, you know,
    0:43:46 e/acc does kind of go so extreme that they might, you know,
    0:43:47 bring on some of the regulation, right?
    0:43:49 ’Cause it’s like, just let’s go as hard as we can,
    0:43:51 as fast as we can.
    0:43:53 And then the government kind of freaks out about that too.
    0:43:56 So it’s really hard to know like how to get the government
    0:43:58 to embrace this technology.
    0:43:59 I’m hoping they will.
    0:44:02 I mean, I think it’s like the U.S. winning at the internet.
    0:44:05 Like the U.S. needs to win at AI too
    0:44:06 in order to kind of set the stage
    0:44:09 for the next 100 years of the world, right?
    0:44:12 And if we don’t, whoever wins that, China or whoever,
    0:44:13 you know, Russia, whoever,
    0:44:16 they then get to set the stage for the next, you know,
    0:44:18 chapter of humanity.
    0:44:20 – It’s the new space race.
    0:44:22 – Yeah. I think it’s bigger than that.
    0:44:22 – Yeah.
    0:44:23 – Yeah, I mean, it is really interesting
    0:44:25 to think about it from that perspective too,
    0:44:28 because like the U.S. probably wants to be the country
    0:44:29 that finds the cure for cancer,
    0:44:32 but is the government going to, you know,
    0:44:34 is the government gonna put up a bunch of red tape
    0:44:35 and slow that down?
    0:44:36 I don’t know yet to be seen.
    0:44:37 – Yeah.
    0:44:41 – I think there’s going to be a sort of necessary overhaul
    0:44:46 coming because with this AI world that we’re entering into,
    0:44:48 I do feel like we need stuff to happen faster
    0:44:52 so that we can let these tools
    0:44:54 actually provide the solutions.
    0:44:55 – I think ideally that’s what we do.
    0:44:57 Like as these systems become incredibly intelligent,
    0:45:00 where they can process more data than humans can,
    0:45:02 we should be asking them like, okay,
    0:45:04 how can we reorganize this part of the government
    0:45:06 to be more efficient and actually get more things done
    0:45:09 and help more people, bring more people out of poverty
    0:45:10 and things like this?
    0:45:11 That’s what I hope for.
    0:45:13 You know, that’s the one reason I wanted to do it,
    0:45:14 you know, do a show with you.
    0:45:15 – Yeah.
    0:45:17 – That’s why I tweet is I’m really hopeful
    0:45:19 for what this technology can do for humanity
    0:45:22 if we don’t all get in the way, right?
    0:45:24 – You know, I think that’s a perfect spot
    0:45:25 to wrap this one up.
    0:45:28 If anybody is watching on YouTube,
    0:45:29 let us know in the comments
    0:45:31 if you like this style of episode,
    0:45:33 if you want us to do more Q and A
    0:45:36 and you like just the sort of us giving thoughts
    0:45:37 on your questions,
    0:45:38 ’cause if you like this,
    0:45:40 we’ll make more episodes like this for you.
    0:45:41 If you haven’t already,
    0:45:44 make sure you like and subscribe
    0:45:46 wherever you’re watching or listening to this.
    0:45:47 It really helps us out
    0:45:50 to get more listeners and viewers on the show.
    0:45:51 So appreciate that.
    0:45:53 And thank you so much for tuning in today.
    0:45:55 We’ll see you in the next episode.
    0:45:57 (upbeat music)
    0:46:10 (gentle music)

    Episode 6: Will AI change our economic systems forever? Join hosts Matt Wolfe (https://twitter.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) as they delve into these pressing questions.

    In this wide ranging episode, Matt and Nathan answer your thought provoking questions and preview new AI video tools coming out, explore the revolutionary impact of AI on societal structures, healthcare advancements, and economic systems. They discuss the potential for AI to streamline government efficiency, uncover cures for diseases, and even tackle global challenges such as climate change and hunger. However, the conversation also navigates through the complexities of government regulations, the technological arms race among big corporations, and the societal implications of widespread AI adoption.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) AI podcast hosts discuss audience questions, insights.
    • (04:38) Adobe Premiere integrating Sora for video improvements.
    • (09:01) Voice chat limits bots and negativity online.
    • (10:55) AI’s impact on economy and work uncertainty.
    • (15:38) Public, not governments or corporations, should adapt.
    • (19:19) Open source AI catching up, corporate control.
    • (20:10) Big companies’ financial support crucial for open source.
    • (25:33) Future tech: UI, voice, cloud, startups role.
    • (29:03) Greg Isenberg on how small companies compete.
    • (31:05) AI will enhance human connection in startups.
    • (34:05) Humans may not need to continue training large language models, as AI could self-improve through reinforcement learning.
    • (37:30) Yann LeCun doubts large language models’ potential.
    • (42:02) e/acc’s extreme approach may bring regulation.
    • (43:44) Encouraging engagement and feedback for future episodes.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • Why Logan Kilpatrick Left OpenAI for Google

    AI transcript
    0:00:03 – It’s crazy to think that a million active users
    0:00:06 as a small startup, the other sort of scary thought
    0:00:08 is the thought that maybe AI one day
    0:00:10 will be able to break encryptions.
    0:00:12 – We’ve got artificial super intelligence right now.
    0:00:14 There’s seven billion agents on earth
    0:00:16 that are focused on this problem
    0:00:18 and like nobody has cracked those encryptions yet.
    0:00:19 I think it’s much more likely
    0:00:22 that there’s like some physical hardware innovation
    0:00:25 that allows super computers to really take off
    0:00:26 and break some of those things.
    0:00:28 (upbeat music)
    0:00:30 – Hey, welcome to the Next Wave podcast.
    0:00:31 My name is Matt Wolfe.
    0:00:34 I’m here with my co-host, Nathan Lands,
    0:00:37 and we are your chief AI officers.
    0:00:39 Our goal with this podcast is to keep you looped in
    0:00:42 on the latest news and everything you need to know
    0:00:43 in the AI world.
    0:00:46 And today we have an amazing guest for you.
    0:00:48 Today we’re talking to Logan Kilpatrick.
    0:00:51 Now, Logan used to be the head of developer relations
    0:00:53 over at OpenAI.
    0:00:56 Now he’s the lead of product over at Google’s AI studio.
    0:00:58 We actually recorded this episode a few weeks ago
    0:01:00 and we happened to catch him in this in between phase
    0:01:02 where he’s not at OpenAI.
    0:01:03 He’s not at Google yet.
    0:01:07 And we got some really, really cool insights from him.
    0:01:08 Now what he says on this,
    0:01:11 isn’t the opinions of Google or OpenAI
    0:01:12 ’cause he’s not with those companies
    0:01:13 at the time of recording.
    0:01:15 But by the time you hear this,
    0:01:18 he’s now working for Google in this new role.
    0:01:21 (upbeat music)
    0:01:23 – When all your marketing team does is put out fires,
    0:01:24 they burn out.
    0:01:27 But with HubSpot, they can achieve their best results
    0:01:29 without the stress.
    0:01:31 Tap into HubSpot’s collection of AI tools,
    0:01:34 breeze, to pinpoint leads, capture attention,
    0:01:37 and access all your data in one place.
    0:01:39 Keep your marketers cool
    0:01:41 and your campaign results hotter than ever.
    0:01:44 Visit hubspot.com/marketers to learn more.
    0:01:47 (upbeat music)
    0:01:48 – And I think a lot of people,
    0:01:50 they only know like Sam Altman.
    0:01:52 They don’t know that Logan on Twitter
    0:01:54 has been like the main voice of like,
    0:01:56 he’s been talking to all the AI influencers,
    0:01:57 all the engineers.
    0:01:59 Like anytime any big thing would come out,
    0:02:01 like he was the human like voice
    0:02:04 that you would hear on social media from OpenAI.
    0:02:06 And now he’s left.
    0:02:08 Right after OpenAI seems to be doing great,
    0:02:09 but there’s been a lot of drama,
    0:02:10 he’s left to join Google.
    0:02:12 So it does seem like a huge win for them
    0:02:15 because if he can start making their products more,
    0:02:17 you know, if he can be like a human like voice for Google
    0:02:19 and explain to people like how you can use it.
    0:02:20 I think it’s such a big win for them.
    0:02:22 – Yeah, there’s a lot of interesting things happening
    0:02:25 right now, both at Google and at OpenAI.
    0:02:29 In fact, Google is about to do their annual Google I/O event.
    0:02:30 And last year that’s where they made
    0:02:33 a whole bunch of huge AI announcements.
    0:02:36 We’re expecting really big AI announcements again this year.
    0:02:38 I’m gonna be there with Logan.
    0:02:40 So I will probably be doing a little bit of reporting
    0:02:43 from this Google I/O event and keeping you looped in.
    0:02:45 We’ll likely even do a follow-up episode
    0:02:47 about what happened at Google I/O
    0:02:48 and all of those announcements.
    0:02:51 There’s also some rumors flying around right now
    0:02:55 that OpenAI is about to launch a AI powered search engine
    0:02:56 to go head to head with Google.
    0:03:00 We don’t really totally know all the details about that,
    0:03:01 but it’s unfolding right now.
    0:03:03 It’s supposed to be happening the week
    0:03:05 that this episode is dropping.
    0:03:07 So if there’s any big news around that,
    0:03:09 we will be telling you all about that
    0:03:10 in an upcoming episode as well.
    0:03:12 But today in this episode,
    0:03:15 we had a really fascinating conversation with Logan.
    0:03:18 He had some really good insights about OpenAI,
    0:03:21 the culture at OpenAI, why he decided to leave,
    0:03:22 why he picked to go to Google.
    0:03:26 And of course, he has some amazing insights for you
    0:03:27 if you have a business
    0:03:29 or you want to use this stuff in your personal life.
    0:03:31 He has some great tips there
    0:03:33 as far as how you can actually integrate what he teaches
    0:03:36 and what we’re talking about on today’s episode.
    0:03:37 It’s an amazing episode.
    0:03:38 You’re really going to enjoy it.
    0:03:41 So let’s go ahead and jump on in with Logan Kilpatrick.
    0:03:44 Thanks so much for joining us on the show today.
    0:03:45 – Yeah, likewise.
    0:03:46 I’m super, I’m super excited to be here.
    0:03:49 – So I’m curious how when it comes to OpenAI,
    0:03:51 I promise the whole conversation won’t be about OpenAI,
    0:03:54 but I’m curious as head of developer relations
    0:03:55 in the early days, you were probably working
    0:03:57 with Jasper, CopyAI, those kinds of companies.
    0:04:00 – Yeah, and I think the most interesting thing
    0:04:01 to me about being in OpenAI
    0:04:05 was just the breadth of the work in the early days
    0:04:08 because of how quickly everything was moving
    0:04:10 and because of how everything was always on fire
    0:04:13 all the time, you could really just jump in.
    0:04:14 And if you were someone who loved fighting fires,
    0:04:18 which I loved, you could jump in and get your hands dirty
    0:04:21 like pretty much anywhere in the company.
    0:04:23 And that was something that I appreciated so much.
    0:04:26 And I think like the natural tendency as a company grows,
    0:04:27 as things become more formalized,
    0:04:30 every additional hundred people who joined the company
    0:04:32 was just like less and less of an opportunity to do that.
    0:04:35 And yeah, it was interesting to see like
    0:04:37 when I joined OpenAI, it really, really felt
    0:04:38 like a small startup.
    0:04:40 And I think when I left OpenAI,
    0:04:44 it didn’t feel like a small startup anymore by any means.
    0:04:46 – Yeah, no, it’s crazy to think that, you know,
    0:04:48 a million active users as a small startup.
    0:04:52 But yeah, I mean, that was kind of the vibe
    0:04:54 in the early days, it seems like that’s really cool.
    0:04:55 – Well, I also think at the time,
    0:04:57 ChatGPT was just like a demo.
    0:04:59 Like I don’t really think like people had,
    0:05:01 were like just starting to,
    0:05:02 but like at that time, the million,
    0:05:05 I don’t think that was like DAUs, if I remember correctly,
    0:05:07 it was just like a million people had tried ChatGPT
    0:05:08 or something like that.
    0:05:11 So I’m guessing like the attrition rate was super,
    0:05:14 super high and most people weren’t actually converted
    0:05:16 like weekly users or something like that.
    0:05:19 – Yeah, so basically the growth of ChatGPT
    0:05:22 when it first came out wasn’t really expected, right?
    0:05:26 OpenAI didn’t anticipate that it would have that,
    0:05:29 you know, quick of an onboarding of so many people.
    0:05:32 – So at the time, the reason that ChatGPT was created,
    0:05:34 and there’s a really great podcast interview
    0:05:37 with my former manager, Frazier,
    0:05:40 who led product for both ChatGPT and the API.
    0:05:42 And he talked about how, and again,
    0:05:44 this was before I joined OpenAI,
    0:05:48 but the GPT-4 actually finished training summer of 2022.
    0:05:50 So the team knew like what was coming
    0:05:53 and really the early explorations
    0:05:55 that Frazier and the team were doing
    0:05:57 and a bunch of other folks was thinking about
    0:05:59 what is the right form for this technology
    0:06:02 to actually be useful to end users.
    0:06:04 So this whole narrative that like the team
    0:06:06 just kind of threw together something random
    0:06:08 and then like published it and then it all went,
    0:06:10 you know, perfectly well is actually not true.
    0:06:12 Like, and you should, folks should go
    0:06:14 and listen to Frazier talk about this,
    0:06:16 but it was a very intentional process
    0:06:19 of like having a whole team of people
    0:06:21 who were like constantly iterating for multiple,
    0:06:23 like on the order of multiple months
    0:06:25 to like ultimately come to the form factor
    0:06:27 that was what ChatGPT was,
    0:06:29 which ended up being what we released to the world.
    0:06:32 So there was more nuance to the story,
    0:06:33 but I think people like to hear the story of like,
    0:06:35 oh, it was just totally random.
    0:06:36 And we threw this together.
    0:06:37 – Yeah, I was gonna ask you like,
    0:06:39 where did that narrative come from?
    0:06:40 That’s why I’d heard as well.
    0:05:41 It’s like, oh, they just put it out there
    0:06:42 and like, oh, it blew up
    0:06:44 and they didn’t expect it was gonna happen.
    0:06:46 I was like, that doesn’t sound right to me, but you know.
    0:06:49 – I don’t think people had the perspective
    0:06:50 on how quickly it would grow,
    0:06:54 but really the intent was to see whether or not
    0:06:56 this is something that would resonate with consumers.
    0:06:58 Like it was intended as a product release
    0:07:00 in a certain sense and like intended to see
    0:07:02 whether or not this like basic chat interface
    0:07:04 would be something that’s useful to people
    0:07:06 that we could ultimately use when GPT-4 came out
    0:07:09 to sort of be the thing to, you know,
    0:07:11 be the catalyst for people using that product.
    0:07:14 I think that’s like, I don’t know whether someone
    0:07:16 at OpenAI started perpetuating that narrative
    0:07:18 or whether it’s just like a media narrative
    0:07:19 that took off, I’m not sure.
    0:07:21 – I don’t know if it was like 10 years ago or so,
    0:07:22 but there was like in Silicon Valley,
    0:07:25 a whole wave of like chatbots that were released,
    0:07:27 which obviously were way more primitive
    0:07:28 than what’s out now.
    0:07:29 But everyone was all convinced,
    0:07:31 oh, this is gonna be the next thing is these chatbots
    0:07:32 and it never worked.
    0:07:34 And so when chat GPT came out,
    0:07:35 I think there was a big question like,
    0:07:36 will people actually use this?
    0:07:38 You know, ’cause people in Silicon Valley
    0:07:40 thought they had already seen that before
    0:07:41 and then chat GPT came out
    0:07:42 and like people were just blown away.
    0:07:44 And I was as well.
    0:07:45 – Yeah, same here.
    0:07:46 I was an early Jasper customer.
    0:07:49 So I had sort of experimented with
    0:07:52 and seen the power of this technology early
    0:07:53 from using Jasper.
    0:07:56 And honestly, like GPT-4 is really what started
    0:07:57 to make it much more useful.
    0:08:00 Like I still think the highest leverage,
    0:08:03 like largest marginal value you can get from chat GPT
    0:08:05 is if you’re an engineer and you use it for coding.
    0:08:07 Like most of the other things are like useful and nice,
    0:08:10 but it’s like from a raw economic output perspective,
    0:08:12 coding is the most useful thing
    0:08:14 to use this technology for today.
    0:08:16 – Yeah, I told Matt previously
    0:08:18 when I finally like started using it in code,
    0:08:20 I’m like, oh my God, this is such a game changer.
    0:08:22 It’s like it helped me like change some code
    0:08:24 from like JavaScript to C++
    0:08:25 and some other things I was playing around with.
    0:08:27 I was like, this is nuts that it can just do that.
    0:08:31 – It’s crazy to me how there’s still engineers
    0:08:33 out there who haven’t made that jump
    0:08:34 and like haven’t had that aha moment.
    0:08:36 I’m like that literally makes no sense.
    0:08:37 Like even for an, you know,
    0:08:40 I’m sure there’s folks at all companies.
    0:08:42 So this is not like a representative data pipeline,
    0:08:43 but like even folks I worked with at OpenAI,
    0:08:46 some of them like weren’t using AI every day
    0:08:47 as a software engineer.
    0:08:48 And it was always crazy to see that,
    0:08:50 especially being so close to the technology.
    0:08:52 – Yeah, well, I mean, the two top like coding,
    0:08:55 you know, influencers like Jonathan Blow and ThePrimeagen,
    0:08:58 they’re both like saying that like AI is like,
    0:08:59 I’m not sure if they’re calling it a fad,
    0:09:01 but they’re like, oh, it produces shit code
    0:09:02 and like all this kind of stuff.
    0:09:03 And it’s like, well, you know,
    0:09:05 – It’s better at coding than I am.
    0:09:05 – Yeah, exactly.
    0:09:07 It’s better at coding than most people are.
    0:09:09 It would bring the average level of code up, you know,
    0:09:12 not everyone’s like the most senior developer
    0:09:14 who’s been doing it for like 30 years.
    0:09:16 And obviously this stuff’s gonna just keep getting better
    0:09:17 and better.
    0:09:19 – So I’m curious, one last OpenAI question,
    0:09:21 and you don’t have to answer it if you don’t want,
    0:09:23 but was there any sort of catalyst
    0:09:25 that led you to leave OpenAI?
    0:09:27 I mean, from the outward perspective on Twitter,
    0:09:29 it looked like everything was cool, amicable.
    0:09:30 You just kind of wanted to move on to something else,
    0:09:33 but I’m curious if there was any story there.
    0:09:36 – I think broadly, like my broad perspective is like,
    0:09:38 the company just changed so dramatically
    0:09:42 from what I joined, like I intentionally joined,
    0:09:43 like I had worked at Apple
    0:09:45 and had intentionally joined a small company
    0:09:49 because I wanted a lot of the things that come with working
    0:09:51 at a small company, like being able to move really quickly,
    0:09:54 being able to have high agency to go and solve problems,
    0:09:57 having the green fields, all those things.
    0:10:01 And I think just like very naturally over time OpenAI
    0:10:03 became like less of those things for me personally.
    0:10:05 And I think it was also like, you know,
    0:10:06 I don’t remember if this was on camera
    0:10:08 before we started chatting on camera,
    0:10:10 but the comment about, you know,
    0:10:14 being a human voice at OpenAI,
    0:10:16 like it was a really challenging position for me to be in.
    0:10:18 And I think like if you look around,
    0:10:19 like there was not a lot of other people
    0:10:21 who were doing that type of work.
    0:10:24 And I think that just had its own whole host of challenges.
    0:10:28 And I think just overall, also incredibly like,
    0:10:30 as I started to have more conversations with people,
    0:10:32 just became incredibly excited about like
    0:10:35 where everybody else in the ecosystem was.
    0:10:38 And yeah, I think there’s so much interesting stuff happening.
    0:10:40 And I think like OpenAI for a long time
    0:10:43 has dominated the narrative of being the benefactor
    0:10:44 of this technology.
    0:10:46 And also the people who are like giving
    0:10:47 the most value to the world.
    0:10:49 And I think there’s gonna be more companies
    0:10:52 that are gonna be able to like successfully do that
    0:10:53 in the next six to 12 months,
    0:10:55 which I find just as a consumer
    0:10:57 and as somebody who like loves the technology,
    0:10:59 I think that’s so incredible.
    0:11:04 And I’m excited to hopefully get to help that
    0:11:05 from a different side of things.
    0:11:09 – I feel like OpenAI is really gonna miss having you there.
    0:11:11 ‘Cause like there is no one else right now who does that.
    0:11:14 Like, I mean, like before on Twitter,
    0:11:15 you were the only one I would look to
    0:11:16 like when things would change.
    0:11:17 So like, I’m curious.
    0:11:18 So like, so you said that you left
    0:11:20 because of it no longer being a startup
    0:11:23 and you wanting there to be other competitors out there
    0:11:24 who can compete with OpenAI.
    0:11:26 But like, why Google?
    0:11:29 Why not like doing an open source AI project
    0:11:30 or something like that?
    0:11:32 – I feel like there’s still such an opportunity
    0:11:33 in the large language model space.
    0:11:36 Like as I was exploring, it was like, you know,
    0:11:38 could go to an application layer company.
    0:11:40 There’s a bunch of incredible companies
    0:11:41 doing interesting stuff at that layer of things,
    0:11:44 but it still feels like there’s a lot of opportunity.
    0:11:48 I also think like, you know, to be candid for Google,
    0:11:50 like the, you know, they’ve had a challenging narrative
    0:11:53 as far as like how developers feel about the platform,
    0:11:54 like what they’ve been doing with AI.
    0:11:58 Like there’s just this incredible moment
    0:12:01 and opportunity at Google for someone who loves
    0:12:03 building products for developers to really come in
    0:12:05 and help support that ecosystem.
    0:12:08 There’s also so many smart people at Google,
    0:12:11 like they have such an incredible roadmap.
    0:12:13 And again, I don’t know all the details
    0:12:16 ’cause I haven’t actually, at the time of this recording,
    0:12:17 haven’t actually started that role.
    0:12:21 But I think they, at least from what I’ve seen externally,
    0:12:22 I think they’re pushing in the right direction.
    0:12:24 Like the one million context window,
    0:12:27 like those models being natively multimodal,
    0:12:29 like all that stuff gives me a lot of confidence
    0:12:32 and hopefully what the roadmap looks like for other things.
    0:12:35 And it’s also just like such a core piece of their business.
    0:12:36 Like there’s a lot of people and like,
    0:12:39 I think I love what the Meta folks are doing
    0:12:41 and I love that they’re putting out the Lama models.
    0:12:44 But in many ways, at least from like my outside perspective,
    0:12:47 it’s not clear that it’s like the core driving force
    0:12:48 for their business.
    0:12:50 And I feel like in the case of Google and others,
    0:12:52 like it is a core driving force for their business
    0:12:53 at least now.
    0:12:54 And I think for Meta, it’ll probably evolve to be that
    0:12:57 over time as they build AI into their products and services.
    0:13:00 But today, it’s like, yeah, like do I need a chat bot
    0:13:02 for Instagram to be a viable product for me?
    0:13:03 Like not really.
    0:13:05 Like they can essentially keep that product
    0:13:06 and keep going without AI.
    0:13:08 And I think Google is in a very different position
    0:13:10 with respect to search and some of their other platforms.
    0:13:13 – Yeah, let’s talk about open source versus closed source
    0:13:14 for a minute because, you know,
    0:13:17 there’s this big sort of storyline unfolding, right?
    0:13:20 You’ve got Elon Musk versus Sam Altman,
    0:13:23 the sort of public battle of open source versus closed source
    0:13:24 going on.
    0:13:27 But I’m curious to hear from your perspective,
    0:13:29 where do you see the value of open source?
    0:13:31 Why, I know you’ve been sort of outspoken,
    0:13:34 a proponent of open source, you know, on your Twitter account.
    0:13:36 Where do you see that value of open source?
    0:13:38 What excites you about open source?
    0:13:41 – One, I think it’s like fundamentally at the end of the day
    0:13:43 and people like to consider it in a different perspective.
    0:13:45 But from my perspective,
    0:13:47 it’s fundamentally like a business decision.
    0:13:51 Like do the pros of open sourcing models
    0:13:53 and all the, you know, potentially infrastructure
    0:13:57 around those models end up outweighing the cost of doing that.
    0:13:58 And there are very real costs.
    0:14:01 Like I don’t think people, like everyone just assumes
    0:14:03 that open source is like, you know,
    0:14:05 a positive in every direction.
    0:14:06 And it’s like certainly not.
    0:14:08 If you’ve ever been an open source maintainer,
    0:14:11 like it is not fun in many cases
    0:14:13 to maintain open source projects.
    0:14:15 Like there’s just people are,
    0:14:17 you’re essentially giving something away for free
    0:14:20 and everyone is asking you to do more things for free.
    0:14:22 And you’re not getting any of the value
    0:14:23 that’s being accrued.
    0:14:27 And I think this is like the really difficult tension
    0:14:30 for companies that are making this decision.
    0:14:32 And I think like I love the folks at Mistral
    0:14:34 and I think they’re doing really important work,
    0:14:36 but it’s a challenging position for them as an example
    0:14:39 where, you know, they open source a model
    0:14:41 and then, you know, you can now do inference on that model
    0:14:44 on any of the many, many different platforms
    0:14:45 that offer inference.
    0:14:47 And, you know, that led them to,
    0:14:48 with their most recent model,
    0:14:50 not making it fully open source yet.
    0:14:54 I think there’s like some nuance about how open that,
    0:14:56 the latest model that they did is.
    0:14:59 And I think like more companies are going to struggle with this
    0:15:00 because I do think at the end of the day,
    0:15:02 especially for the model layer,
    0:15:06 it’s hard to have a business that does this.
    0:15:09 And I actually think this is why companies like Meta,
    0:15:10 it makes a ton of sense for them
    0:15:14 because like they are not a developer platform company.
    0:15:17 Them taking the model and putting it out to the world
    0:15:18 actually doesn’t really matter.
    0:15:20 Like it’s not negative for their business
    0:15:23 because their business is serving ads
    0:15:25 and selling products to the four billion people
    0:15:26 who use their platform.
    0:15:30 And like that’s a very privileged position for them to be in.
    0:15:31 And I think there’s a lot of startups
    0:15:33 who don’t have that distribution
    0:15:35 and are still trying to do open source models.
    0:15:39 And you just run into these like very real realities
    0:15:41 of running a business that make it really hard
    0:15:42 to open source those models.
    0:15:45 I think there’s also like the whole philosophical debate
    0:15:48 about whether the technology should be open source
    0:15:50 because it’s super powerful.
    0:15:53 I think that like that’s a very fair argument.
    0:15:55 I think there’s also a bunch of very fair arguments
    0:15:57 on the safety side around not open sourcing
    0:15:58 some of these models.
    0:16:00 (upbeat music)
    0:16:01 – We’ll be right back.
    0:16:04 But first I wanna tell you about another great podcast
    0:16:05 you’re gonna wanna listen to.
    0:16:07 It’s called Science of Scaling,
    0:16:08 hosted by Mark Roberge.
    0:16:11 And it’s brought to you by the HubSpot Podcast Network,
    0:16:14 the audio destination for business professionals.
    0:16:16 Each week host Mark Roberge,
    0:16:19 founding chief revenue officer at HubSpot,
    0:16:21 senior lecturer at Harvard Business School
    0:16:23 and co-founder of Stage Two Capital,
    0:16:26 sits down with the most successful sales leaders in tech
    0:16:29 to learn the secrets, strategies, and tactics
    0:16:31 to scaling your company’s growth.
    0:18:34 He recently did a great episode called How Do You Solve
    0:18:37 for Siloed Marketing and Sales?
    0:16:39 And I personally learned a lot from it.
    0:16:41 You’re gonna wanna check out the podcast,
    0:16:42 listen to Science of Scaling
    0:16:44 wherever you get your podcasts.
    0:16:47 (upbeat music)
    0:16:49 – Do wonder how regulation is gonna hit open source
    0:16:51 ’cause it does feel like everything is going
    0:16:53 to more and more regulation.
    0:16:56 Europe’s heavily starting to regulate AI.
    0:16:58 In America, there’s more and more people wanting
    0:16:59 to regulate AI.
    0:17:01 I’m kind of more on the e/acc side of things
    0:17:03 of we need to go as fast as possible.
    0:17:05 But I do understand the concerns.
    0:17:06 And so I do wonder.
    0:17:09 It feels like, okay, Elon Musk is open sourcing all his AI,
    0:17:11 but at some point when that gets very powerful,
    0:17:12 I have a feeling the government’s gonna be like,
    0:17:16 yeah, we can’t just have anyone having that.
    0:17:19 So I’m curious how that stuff’s gonna play out long term
    0:17:20 ’cause I think open source is very important
    0:17:23 ’cause I love open AI, I love Google.
    0:17:27 I don’t want open AI and Google being the future of AI,
    0:17:29 which is the future of humanity, basically.
    0:17:31 – Well, it’s funny ’cause I think Logan posted something
    0:17:32 on Twitter the other day about,
    0:17:34 can somebody bring me a hard drive
    0:17:36 with the weights for Grok on it?
    0:17:39 ‘Cause I can’t download 368 gigabytes.
    0:17:41 So I mean, the Grok open source, I think,
    0:17:43 still has some roadblocks for general consumers
    0:17:45 to just start using on their own computer.
    0:17:46 – Yeah, yeah, yeah, yeah.
    0:17:47 – This is part of the nuance
    0:17:49 that I think a lot of people miss is like,
    0:17:53 there’s this massive spectrum of what it means to be open.
    0:17:55 And I think to the open AI folks’ credit,
    0:17:58 and I forgot whether what blog post
    0:18:00 or where they talked about this,
    0:18:04 but I do think having your technology cheaply available
    0:18:07 through an API and making that broadly accessible
    0:18:09 to the world for developers to go
    0:18:10 and build products and platforms,
    0:18:13 to me that is certainly an element of openness.
    0:18:16 And I think putting the weights of a model available
    0:18:18 to the world is also an element of openness.
    0:18:20 But I think just because you make your weights available
    0:18:22 does not mean that you’re actually running
    0:18:23 an open source project.
    0:18:28 And at NumFOCUS, when we evaluate open source projects
    0:18:30 to see whether or not they can be a part of NumFOCUS,
    0:18:32 there’s this huge list of things
    0:18:34 that you have to go through from like,
    0:18:36 who are the people who are making the decisions
    0:18:38 about where the money is spent?
    0:18:41 What are the governance policies, the code of conduct?
    0:18:44 All that stuff, which no one is looking at.
    0:18:46 They’re like, oh, just because the model is open,
    0:18:48 it doesn’t matter that it’s a for-profit corporation
    0:18:50 or someone who’s got a lot of money,
    0:18:52 who’s driving and making all the decisions.
    0:18:54 And I do think it’ll be interesting to see
    0:18:58 how the narrative evolves over time.
    0:18:59 I actually think in many ways,
    0:19:02 open source models are much less open
    0:19:04 than traditional software is.
    0:19:07 If you look at any of the popular open source projects,
    0:19:09 many of them have distributed governance.
    0:19:12 It’s clear how they make decisions, all those things.
    0:19:14 And that’s very much not the case
    0:19:15 for some of these open models,
    0:19:17 which is super interesting.
    0:19:18 – You mentioned Meta earlier.
    0:19:20 And it’s something that I guess
    0:19:21 I haven’t been able to wrap my head around
    0:19:26 is why a company like Meta would open source their models.
    0:19:29 From what I understand, to train one of these models,
    0:19:31 it could cost millions of dollars to train
    0:19:35 with all the compute power required to train the models.
    0:19:37 What do you think the motives are for Meta
    0:19:39 to release models like this publicly?
    0:19:40 Why would they do that?
    0:19:44 – They are not selling a developer product
    0:19:46 and they don’t have a developer platform.
    0:19:48 They have, I think there is some Facebook app platform
    0:19:49 or something like that.
    0:19:52 But I think that’s like a slight aside.
    0:19:54 They don’t, they’re not like a cloud provider.
    0:19:56 They’re not selling the model to end users
    0:19:58 for them to be able to, or developers for them
    0:20:00 to be able to build that in their technology.
    0:20:03 So really it’s like, open sourcing the model.
    0:20:06 The only way that that hurts them potentially
    0:20:08 is somebody takes that model
    0:20:11 and then goes and makes an Instagram competitor
    0:20:13 or a Facebook competitor or a WhatsApp competitor.
    0:20:15 And I think if you look at their business,
    0:20:17 like they have deeply, they’re deeply entrenched
    0:20:19 in the ecosystem, they have distribution,
    0:20:21 they have all the money, they have all the moat.
    0:20:23 So like, they actually don’t really need to worry
    0:20:23 about that as much.
    0:20:25 And it’s more of an existential risk for them
    0:20:30 if they were to not take AI and infuse it into their platforms.
    0:20:31 Cause then all of a sudden somebody makes
    0:20:33 WhatsApp with AI and Instagram with AI
    0:20:36 and Facebook with AI and then, you know, they get disrupted.
    0:20:38 So it’s much easier for them to justify that cost
    0:20:41 of just like essentially business as usual.
    0:20:44 This is the next technology frontier, take AI,
    0:20:47 put it into those platforms and then now, you know,
    0:20:49 potentially even, you know, they can sell services
    0:20:51 to people with AI on Facebook and Instagram
    0:20:52 and WhatsApp and things like that.
    0:20:54 – Yeah, it almost feels like there’s an element
    0:20:57 of this like Meta redemption arc happening, right?
    0:20:59 Where a lot of people were soured by Meta
    0:21:01 with the Cambridge Analytica and the data leaks
    0:21:02 and all that kind of stuff.
    0:21:06 And now Meta is kind of saying, no, we’re good guys here.
    0:21:08 We’re open sourcing our stuff.
    0:21:09 – I’m sure that’s part of it.
    0:21:11 Like, and I think that makes sense for them.
    0:21:14 Like there’s certainly like, they’re definitely winning
    0:21:15 people over by open sourcing models.
    0:21:18 I think it’s like, it’s been a viable strategy.
    0:21:20 – And they’ve been doing that for a while, right?
    0:21:22 Like that’s been their playbook for recruiting talent
    0:21:24 with like GraphQL, React.
    0:21:25 – PyTorch.
    0:21:26 – Yeah, yeah.
    0:21:27 And that all kind of started like when like the image
    0:21:29 of Facebook in Silicon Valley was kind of going down
    0:21:31 at least in like the tech press.
    0:21:32 They were like, everyone was hating on Facebook.
    0:21:34 And then they’re like, oh, we’re open sourcing all this stuff
    0:21:36 and all these developers like, oh, we love Facebook.
    0:21:38 And so it kind of created this like kind of divide there
    0:21:40 where all of a sudden a lot of developers were loving them.
    0:21:43 And I think this is a continuation of that playbook.
    0:21:47 – Let’s talk about the potential risks of open source, right?
    0:21:51 So there’s this kind of scary thought around open source
    0:21:54 that somebody in their basement playing with open source tools
    0:21:58 could cause massive destruction and things like that.
    0:22:01 I guess let’s speak about and talk about some of the risks
    0:22:04 and maybe some of the counteractions that can be taken
    0:22:07 to mitigate some of these risks.
    0:22:08 There was that story, I don’t know, a year ago
    0:22:13 about chaos GPT that tried to take over the world
    0:22:16 by tweeting to like seven people or whatever.
    0:22:19 But I mean, I think that’s a real fear of a lot of people
    0:22:20 is that somebody in their basement
    0:22:22 could create some sort of AI agent
    0:22:24 that creates real chaos, real destruction.
    0:22:27 – Yeah, it’s such a tough position.
    0:22:30 And this is why it’s always been tricky for me to like,
    0:22:34 in many ways, like I align with open AI’s principles
    0:22:36 about what the risks are with open sourcing models.
    0:22:39 I think there’s also a way to potentially do more in the open
    0:22:40 even with some of those risks.
    0:22:43 I think very fundamentally,
    0:22:47 I have yet to see any proof or any like scientific evidence.
    0:22:49 Essentially, there’s all of this stuff that happens
    0:22:51 during the model training process
    0:22:54 to make it so that the models outputs
    0:22:57 are not like a net bad thing for humanity.
    0:22:59 And I think there’s, you know, wide bounds of this
    0:23:03 and people can disagree about how much of that you should do.
    0:23:06 But that is the intent of how many companies
    0:23:08 train these models today.
    0:23:10 The challenging part is that once you take the weights
    0:23:12 of the model and you make them accessible,
    0:23:14 it is easy to essentially fine tune away
    0:23:17 a lot of the safeguards that have been put in.
    0:23:19 And I think today that’s like less of a problem
    0:23:21 ’cause you can just, you know, get the model
    0:23:22 to say a bunch of bad things.
    0:23:25 And I think like all the really, really bad capabilities
    0:23:27 are potentially like less of a risk.
    0:23:30 I think the real challenge is like you take a GPT-5
    0:23:33 or a GPT-6 and if the same principle holds,
    0:23:36 like you could really get a model that’s capable
    0:23:38 of doing like a large amount of damage.
    0:23:40 And because it’s open source,
    0:23:42 anybody can just go and fine tune, you know,
    0:23:45 you could make an open source training set
    0:23:47 with a bunch of bad stuff in it
    0:23:48 and publish it and share it with people
    0:23:50 and then they can go and do this themselves.
    0:23:52 And I think also with like how compute
    0:23:56 and efficient models are going to become over time,
    0:23:58 like it’ll just become easier and easier
    0:23:59 for people to do this.
    0:24:02 I think like the only like one,
    0:24:04 there could be some like scientific breakthrough
    0:24:08 that would like make it so even if you fine tune the model,
    0:24:10 you know, doesn’t wanna do bad things,
    0:24:12 that seems like a stretch like I, you know,
    0:24:13 it’s outside my realm of understanding
    0:24:14 how that would be possible.
    0:24:16 But I suppose that it’s possible.
    0:24:18 And if we could do that, then that would be awesome.
    0:24:22 I think the other option is it’s likely
    0:24:25 that there’ll be some amount of pressure
    0:24:27 that’s put on people who are providing compute
    0:24:31 to do some, you know, moderation
    0:24:33 or some sort of like security checking
    0:24:37 at the compute level, at like the token generation level.
    0:24:38 I don’t know how feasible that is either
    0:24:40 because like you could just spin up a GPU yourself
    0:24:42 and do this on your own computer
    0:24:44 so you don’t need someone else’s compute.
    0:24:46 But like for example, why I have conviction
    0:24:49 that open AI will successfully and safely deploy
    0:24:52 this technology, at least from what I’ve seen is that like,
    0:24:54 there is some level of monitoring taking place
    0:24:56 on the platform so they can see, you know,
    0:24:59 if someone is continually, you know,
    0:25:01 asking how to make bombs or bio weapons
    0:25:02 or something like that.
    0:25:04 Like they can actually monitor what’s taking place
    0:25:07 and go and proactively take action to keep people safe
    0:25:08 and keep the platform safe.
    0:25:11 And I think to me that’s not possible
    0:25:12 in the context of open source.
    0:25:14 And yeah, it’s a big open question,
    0:25:16 especially if you have your own GPUs.
    0:25:18 Like you can’t, it’s essentially impossible
    0:25:20 to stop somebody from doing something like that
    0:25:23 if you don’t have any control over the compute
    0:25:24 from a cloud perspective or something like that.
    0:25:26 So it’s a tricky situation.
    0:25:28 And like, I think this goes back to like,
    0:25:30 it’s just a myriad of like really, really tough trade-offs
    0:25:33 which is why I don’t appreciate much of the narrative online
    0:25:38 which like lacks the nuance of all of these things.
    0:25:41 Like there’s just like a lot of very, very real trade-offs
    0:25:42 to take into account.
    0:25:44 Everyone’s like, oh no, like they glaze over
    0:25:45 a lot of those things.
    0:25:48 And I think it’ll also become more clear
    0:25:50 as the models become capable, like more capable.
    0:25:52 Like today it’s like a little bit easy to be like,
    0:25:53 oh, like what’s the worst that happens?
    0:25:55 You get something that’s gonna write like mean jokes
    0:25:58 or stupid tweets or whatever.
    0:26:00 And then again, Twitter just has to take those tweets down.
    0:26:03 But I think like, you know, you imagine a world
    0:26:05 where the agentic systems really take off
    0:26:08 and there’s great infrastructure to do that.
    0:26:10 And all of a sudden at the push of a button,
    0:26:13 like you have access to go spin up like millions of agents.
    0:26:14 Like you could genuinely cause,
    0:26:18 I could see genuine harm being caused in the world from that.
    0:26:20 And thankfully I feel like we still have a little bit of time
    0:26:22 to like try to figure some of that out.
    0:26:24 – But yeah, there’s real challenges.
    0:26:26 – Yeah, the other sort of scary thought
    0:26:28 and this kind of bubbled up with the whole like
    0:28:30 Q-star rumors were going around, right?
    0:26:32 Is the thought that maybe AI one day
    0:26:34 will be able to break encryptions.
    0:26:36 And if AI can break encryption,
    0:26:38 then everybody’s bank accounts are at risk.
    0:26:40 You know, all cryptocurrencies are at risk.
    0:26:43 Like pretty much the internet as we know it is at risk
    0:26:47 because it’s all sort of built on this cryptography.
    0:26:50 Do you see a world where the AI models
    0:26:52 are able to break the cryptography
    0:26:54 that sort of runs the internet today?
    0:26:56 – I don’t know enough about cryptography
    0:26:57 to really be informed.
    0:26:59 My instinct has always been that it’s like much more likely
    0:27:01 that somebody will use these models
    0:27:04 to like accelerate quantum computing research.
    0:27:06 And then that will be the catalyst for like,
    0:27:08 I feel like it’ll be like a hardware innovation.
    0:27:12 Like just, and my mental model for this is like,
    0:27:14 we’ve already got AGI right now.
    0:27:16 We’ve got artificial super intelligence right now.
    0:27:19 There’s seven billion agents on earth
    0:27:20 that are focused on this problem.
    0:27:23 And like, nobody has cracked those encryptions yet.
    0:27:25 So like, I don’t think it’s going to be an LLM layer
    0:27:26 that ends up doing that.
    0:27:28 I think it’s much more likely that there’s like some
    0:27:32 physical hardware innovation that allows super computers
    0:27:34 to really take off and break some of those things.
    0:27:35 – Like I hear the concern of like, okay,
    0:27:37 there’s going to be like these rogue agents and all that.
    0:27:40 But I don’t know, like I was born in 1984
    0:27:41 and like I read the book many, many times.
    0:27:44 And that’s like, you know, my concern is like, okay,
    0:27:46 sure, all those things could happen,
    0:27:47 but also it seems very likely
    0:27:49 that like when you centralize power
    0:27:51 and you have like one group or two groups
    0:27:53 that have all the power and then humanity starts
    0:27:56 kind of like outsourcing our intelligence to the LLMs
    0:27:57 ’cause they get so intelligent
    0:27:59 that they’re more intelligent than us.
    0:28:01 So we’re like, hey, GPT-7,
    0:28:03 what do you think I should do in my life?
    0:28:05 And then now you’ve got the kind of government sneaky
    0:28:07 getting there or whoever, you know,
    0:28:08 let’s say it’s not the U.S. government,
    0:28:09 but you know, it could be, it could be other governments
    0:28:11 who are, you know, supposedly are more nefarious.
    0:28:13 And they’re all of a sudden kind of telling people
    0:28:14 how they should live their lives.
    0:28:17 And like actually kind of micromanaging people
    0:28:18 on an individual basis.
    0:28:20 Like having the AI kind of tell you like,
    0:28:23 how could I make Nathan do this thing I want him to do, right?
    0:28:25 And all of a sudden the LLMs like give me different results,
    0:28:27 you know, based on what they want me to do.
    0:28:29 I have a lot of concern about that.
    0:28:30 Like more than the rogue stuff.
    0:28:33 Because I think with all like the rogue AI stuff,
    0:28:34 you’re going to have like this battle of like,
    0:28:36 okay, there’s bad people using AI in bad ways,
    0:28:38 but in theory, the good guys also have the AI.
    0:28:40 Maybe they even have like a slightly better model.
    0:28:42 And so I think those things are going to not be as big
    0:28:44 of a problem as people think.
    0:28:45 – I think Bill Gates said something recently
    0:28:48 to the effect of all of the problems
    0:28:49 that people are worried about with AI.
    0:28:51 AI also solves those problems.
    0:28:52 – Yes, yeah.
    0:28:53 – And I feel like that’s kind of like
    0:28:55 where you’re going with that.
    0:28:56 – I think that’s true to a certain extent,
    0:29:01 but I would, my mental model for this has long been that
    0:29:04 like we could have this technology today.
    0:29:08 It doesn’t mean that there’s so many humans on earth
    0:29:12 in so many different like life and geographic
    0:29:14 and financial positions that like it doesn’t even matter
    0:29:16 if we have this technology available
    0:29:19 to like quote unquote everyone,
    0:29:21 like ChatGPT is available to quote unquote everyone,
    0:29:23 but it’s really not like it’s only available to people
    0:29:25 who have access to the internet and know about what AI is.
    0:29:29 And like, I think, yeah, it still feels like one,
    0:29:32 Nathan to your point, I’m 100% in favor.
    0:29:34 I don’t think that the best outcome for this technology
    0:29:36 just from like a technological development perspective
    0:29:38 is that only one company controls this.
    0:29:42 And I think just the trajectory of where the ecosystem
    0:29:44 is headed, it feels like that’s not going to be the case.
    0:29:47 Like it feels like pretty much everyone has realized
    0:29:48 they have a vested interest.
    0:29:52 Like every company with any amount of technical capacity
    0:29:53 is like, let’s go train a model
    0:29:55 and let’s go make this happen.
    0:29:56 I think the question is like,
    0:29:58 how far does that trend continue?
    0:30:00 And, but it does feel like people have woken up
    0:30:03 to this idea and I would imagine like,
    0:30:04 I don’t think the open source layer of this
    0:30:05 is going to go anywhere.
    0:30:07 I feel like this idea that, you know,
    0:30:10 good people on the internet are just like crusading
    0:30:12 into like the bad places on the internet
    0:30:14 and like stopping people from doing bad things,
    0:30:16 at least in my worldview.
    0:30:19 Like I’ve never, you know, seen that happen before.
    0:30:21 It’s usually just like there’s bad people doing bad stuff
    0:30:23 and like they all get together
    0:30:25 and do more bad stuff together.
    0:30:26 So it’ll be interesting to see if like AI
    0:30:28 is potentially a fix for that.
    0:30:30 – So I wanted to ask about the future.
    0:30:32 I wanted to kind of get your perspective
    0:30:34 on where all of this is headed.
    0:30:36 I’ll read a tweet real quick that you put out.
    0:30:37 You said, in the next 10 years,
    0:30:39 we’re going to have superhuman AI,
    0:30:40 full self-driving everywhere in the world,
    0:30:43 humans on Mars, internet everywhere on earth,
    0:30:44 supersonic commercial jets,
    0:30:46 and cures for major diseases.
    0:30:49 So what do you think’s coming first of those?
    0:30:51 And you know, I’m just kind of curious to hear
    0:30:52 your sort of future scenario
    0:30:55 and what do you think is kind of coming
    0:30:56 within the next few years?
    0:30:57 What do you think is still 10 years out?
    0:31:00 Like how does this story unfold?
    0:31:02 – All of this comes back to being
    0:31:04 like a physical hardware challenge,
    0:31:06 which would be really interesting to see how that plays out.
    0:31:07 Like I imagine the main limitation
    0:31:10 on a lot of the progress that takes place is like,
    0:31:13 can we get enough GPUs or I’m guessing in 10 years,
    0:31:15 it’ll be some new iteration, it won’t be GPUs,
    0:31:18 but some new compute to power all of these things.
    0:31:20 ‘Cause you can imagine like, even if you had,
    0:31:23 and this is the challenge with a lot of the narratives
    0:31:25 around like super intelligent AI’s
    0:31:27 that are just like doing stuff for you all the time,
    0:31:31 like you basically have to have like an H100 cluster
    0:31:34 in your bedroom like to power all the types of things
    0:31:37 that like people think will be possible with AI.
    0:31:39 So like it really does come down to like,
    0:31:42 can we produce the compute for there to be an H100,
    0:31:45 or not even one H100,
    0:31:49 but like a rack of H100s for every human on earth.
    0:31:53 And like that is like a deeply physical problem,
    0:31:55 like very abstracted away from all the things
    0:31:56 that AI will be able to do.
    0:31:59 And AI will be able to solve parts of those problems,
    0:32:01 but like AI cannot solve the problem
    0:32:04 of like physically moving sand and rock and all this stuff.
    0:32:06 So like there’s a lot of like very interesting
    0:32:09 like traditional human problems
    0:32:11 that I think need to be solved to get to that point.
    0:32:13 My guess is like Boom Supersonic
    0:32:15 is already gonna do supersonic.
    0:32:16 So that won’t be the thing.
    0:32:19 Like that’s for sure, pretty much a guaranteed in my mind.
    0:32:21 Someone intelligently responded
    0:32:23 about how, with the launch windows for Mars,
    0:32:25 it seemed unlikely that we would get there
    0:32:27 in the next 10 years, which is really sad to me.
    0:32:28 I’m like, dude, another 10 years,
    0:32:30 how can they not get to Mars in 10 years?
    0:32:31 I feel like that should be super soon.
    0:32:34 But apparently there’s only a window
    0:32:36 in like six years or four years or something like that.
    0:32:39 And I’m like, it’s gonna be too long before the next window.
    0:32:41 So we probably won’t get to Mars
    0:32:43 if that person is correct in the next 10 years, which sucks.
    0:32:46 And it’s not clear to me that AI is going to make that happen.
    0:32:51 But I really do think like the abundance of intelligence
    0:32:53 is going to be so exciting,
    0:32:57 but also like have like very real challenges for society.
    0:32:59 I think the thing that gives me the most hope about this
    0:33:03 is like how quickly humans seem to adapt to new changes.
    0:33:05 Like the fact that we have ChatGPT
    0:33:08 and all of a sudden like we’re pissed that it’s, you know,
    0:33:11 telling us to do work and are upset about like
    0:33:13 it not just doing everything for us already
    0:33:15 after only a year and a half or whatever,
    0:33:16 however long it’s been.
    0:33:17 I think it’s like a great example
    0:33:19 of like our expectations continue to go up.
    0:33:22 And I think my hope is that the technology
    0:33:25 like continues on like that linear curve
    0:33:27 and doesn’t end up like on some crazy exponential.
    0:33:29 ‘Cause I think that’s just like where the highest chance
    0:33:33 for things to go wrong and people to be, you know,
    0:33:34 disrupted in a negative way.
    0:33:39 But if we can stay on some not-actually-exponential curve,
    0:33:41 then I’m hopeful that like society and the world
    0:33:42 will be able to ride that.
    0:33:45 But we’re also close to this technology
    0:33:50 that like if you rounded the percentage of people on earth
    0:33:53 who are like actively using AI every day,
    0:33:55 like it probably rounds down to zero.
    0:33:58 Like it’s probably like one or 0% of the world.
    0:34:00 And like that’s just like speaks to the level
    0:34:03 of how far we still have to go with this technology
    0:34:06 to make sure that we get like the rest of the world
    0:34:08 on board using this technology bought into it
    0:34:10 actually contributing to the discourse.
    0:34:12 And like that is a really, really difficult problem.
    0:34:13 And it honestly sounds like it’s a problem
    0:34:15 that’s probably gonna take 10 years.
    0:34:17 – And that’s one reason we started this podcast, you know,
    0:34:19 is like there’s so many people who have no idea
    0:34:21 what’s going on with AI, you know,
    0:34:23 I moved from San Francisco out to Kyoto over a year ago.
    0:34:25 And like, I was surprised.
    0:34:26 Like I’m thinking like in my mind like,
    0:34:28 oh, Japan, super futuristic, which you know,
    0:34:29 I’ve been here a lot.
    0:34:31 So I know it’s not always like how people imagine it,
    0:34:33 but I would go out to like local cafes and stuff
    0:34:34 and hang out with like locals.
    0:35:36 And I would show them like Midjourney
    0:35:38 or show them ChatGPT.
    0:34:39 They were like, what is that?
    0:34:41 They would just be like have no idea
    0:34:42 what the hell was going on.
    0:34:45 I basically just like pulled magic out of my pocket
    0:34:48 and they just were so completely mind blown.
    0:34:49 I was like, oh my God.
    0:34:50 So like the average person around the world
    0:34:53 probably has no idea what’s going on.
    0:34:55 They may have heard like one or two like scary stories
    0:34:59 on the news and that’s probably their entire context for AI.
    0:35:02 – This needs to be like a government level project.
    0:35:06 Like actually a part of when I was exploring
    0:35:07 what’s out there I was looking like,
    0:35:10 is the government spending money to try to like educate
    0:35:11 the masses on this technology?
    0:35:12 And like not just the U.S. government,
    0:35:15 but like global governments because I feel like it’s just,
    0:35:17 like, you know, I think your show is going to do
    0:35:20 incredibly well, but it is likely to reach people
    0:35:22 who are like much more interested in AI.
    0:35:25 So I think it’s like a really, really hard human problem.
    0:35:27 And like I think it’s going to take like,
    0:35:30 I would love to see some like international level
    0:35:31 collaboration to be like,
    0:35:34 we need to educate humans about how to use AI,
    0:35:35 but it just feels like, yeah,
    0:35:37 you can’t get international governments
    0:35:38 to agree on anything.
    0:35:39 So who knows if that will happen,
    0:35:40 but somebody should try.
    0:35:43 Like I feel like it would be super useful.
    0:35:44 – I think a lot of leaders around the world
    0:35:45 barely understand the technology.
    0:35:46 I mean, I think that’s one problem
    0:35:48 of having like really old people in government.
    0:35:50 You know, I hate to get like political,
    0:35:52 like, but on both sides right now of politics,
    0:35:54 like it doesn’t feel like the greatest spot to be in,
    0:35:57 like when the AI revolution is starting to have like leaders
    0:36:00 who just really can’t even fathom what’s going on with the tech.
    0:36:01 – I agree.
    0:36:04 My instinct is that like that will be a major catalyst
    0:36:05 for like the next generation of leaders
    0:36:07 to try and run for office.
    0:36:10 Like I actually know some folks who I worked with
    0:36:13 at like Julia through open source.
    0:36:15 And they were at Schmidt Futures,
    0:36:17 Eric Schmidt’s group.
    0:36:20 And you know, the guy’s, to his credit,
    0:36:22 running for the House of Representatives in Georgia,
    0:36:23 which is incredible.
    0:36:24 You should actually have him on the show.
    0:36:26 I’ll happily connect you.
    0:36:29 But like specifically through this platform of like,
    0:36:31 the world is changing and like the current leaders,
    0:36:33 like who are all great and have a bunch
    0:36:34 of awesome accomplishments,
    0:36:36 like don’t understand how this technology works.
    0:36:39 And like it’s really difficult to adapt to that change
    0:36:41 if you don’t understand how the technology works
    0:36:43 and you don’t have your hands on it every day.
    0:36:46 And I think, yeah, it’s a critically important problem
    0:36:47 for us to get right.
    0:36:48 – This is a HubSpot podcast.
    0:36:51 And, you know, HubSpot talks to entrepreneurs
    0:36:53 and business owners and things like that.
    0:36:55 And, you know, in your role at OpenAI,
    0:36:57 you dealt with a lot of, you know,
    0:36:59 SaaS founders and business owners.
    0:37:02 What advice do you have for business owners
    0:37:05 looking to work with AI as a piece of their business?
    0:37:09 – The biggest challenge with people who aren’t like
    0:37:12 inherently like giddy about AI and technology
    0:37:16 is just getting past like the blank page problem.
    0:37:18 And I think this is like part of the challenge
    0:37:20 with ChatGPT fundamentally as a tool: you show up
    0:37:23 and it’s basically blank.
    0:37:25 And you basically already need to know
    0:37:27 how this technology helps you.
    0:37:31 I think GPTs are like a really great step in that direction
    0:37:35 because they like are a very tangible small use case
    0:37:37 that like I don’t have an AI problem.
    0:37:39 I have like a finance problem
    0:37:42 and I need help with my books or whatever it is.
    0:37:45 I think all of those like looking for those
    0:37:48 like very specific tangible use cases
    0:37:49 and trying to automate stuff around them.
    0:37:54 Like I’ve always been a, I’ll give my really quick shout out
    0:37:55 to the people at Zapier.
    0:37:58 Like I think Zapier is going to be an incredible catalyst
    0:38:02 for people who are just getting used to this technology
    0:38:04 to like truly automate things.
    0:38:08 And there’s so few platforms that are targeting
    0:38:10 that demographic of people.
    0:38:12 And I actually think HubSpot is one of them.
    0:38:14 I’ve been a huge fan of HubSpot and Dharmesh
    0:38:16 and the whole team doing a bunch of incredible work
    0:38:18 around AI and I think like more and more companies
    0:38:21 need to do that and I think more and more people
    0:38:23 need to go and find those use cases.
    0:38:25 And it’s just really hard and I empathize with that a lot.
    0:38:27 Like I feel this challenge for myself.
    0:38:28 I’m like, I’m always busy.
    0:38:30 Like how can I use AI to help me?
    0:38:31 And it’s like not always obvious
    0:38:34 like what the use cases are.
    0:38:37 I think the real tangible advice is like
    0:38:38 go and play with the technology.
    0:38:41 Like you have to like get your hands dirty with it.
    0:38:42 You have to play with it.
    0:38:43 You have to run into the edges.
    0:38:46 You have to like explore the tools a little bit
    0:38:47 and like that kind of sucks.
    0:38:50 And I think like platforms like the one that you all have
    0:38:52 can like hopefully be something to like help
    0:38:54 get some of the right resources to those people.
    0:38:56 But yeah, you just have to play with the stuff
    0:38:59 and find out what adds value and like hopefully find something
    0:39:01 that adds enough incremental value
    0:39:03 that it’s like worth you spending your time
    0:39:04 continuing around it.
    0:39:06 And I think like ChatGPT and a bunch of these other tools
    0:39:08 like really will add enough value
    0:39:10 if you spend the time to play around with them
    0:39:12 and get your hands dirty.
    0:39:15 – Yeah, I actually onboarded my dad onto ChatGPT
    0:39:18 by showing, ’cause he’s actually a business owner himself.
    0:39:19 He does window coverings.
    0:39:21 So nothing tech related,
    0:39:23 but I showed him how to use ChatGPT
    0:39:25 to help him respond to his emails.
    0:39:29 And now he uses ChatGPT to reply to all of his emails.
    0:39:30 And he’s constantly finding like new ways
    0:39:32 to use ChatGPT in his business,
    0:39:34 but that email was that catalyst.
    0:39:36 So just like something really simple like that
    0:39:38 like help me write my emails better
    0:39:41 is a really good way to get into it, I think.
    0:39:42 I need to figure that out with my mom.
    0:39:44 I show it to her and she was like,
    0:39:45 “Oh, that’s so impressive, that’s so amazing.”
    0:39:47 And then she was like, “Okay, now what?”
    0:39:50 – It’s that now what that is like the opportunity.
    0:39:55 I think like no one has really nailed that like now what piece.
    0:39:57 And I think if you can like help people with that now what
    0:39:59 like that is a huge opportunity
    0:40:03 that I think, yeah, there needs to be more thought put into.
    0:40:04 – Very cool.
    0:40:06 Well, this has been an absolutely amazing conversation.
    0:40:07 We’ve had a blast.
    0:40:09 Hopefully we can have you back on in the future.
    0:40:10 Once you’ve settled into that role
    0:40:12 at Google a little bit and maybe talk about
    0:40:13 what’s going on there.
    0:40:14 But I really appreciate the time.
    0:40:17 Is there anywhere you prefer people to follow you?
    0:40:18 Is Twitter the best place?
    0:40:19 Do you have a website?
    0:40:21 Anywhere people should go after listening to this?
    0:40:23 – Yeah, I’m posting on Twitter.
    0:40:25 I’m posting on, I try to post on LinkedIn
    0:40:28 ’cause I feel like there’s an underserved AI market on LinkedIn.
    0:40:30 Like all the cool stuff’s happening on Twitter.
    0:40:32 I’m like, we got to help all the normies on LinkedIn.
    0:40:35 So I spend time on LinkedIn as well.
    0:40:36 And yeah, I have a website
    0:40:39 but there’s nothing as useful as on Twitter and LinkedIn.
    0:40:41 At OfficialLoganK.
    0:40:43 And then on LinkedIn it’s just Logan Kilpatrick.
    0:40:44 – Awesome.
    0:40:45 Well, thank you so much for spending the time with us today.
    0:40:47 This has been such a fun conversation.
    0:40:50 I can’t wait to get it out to the world.
    0:40:52 And yeah, thanks for joining us today.
    0:40:53 – Thank you for having me
    0:40:54 and congrats on the launch of the show.
    0:40:55 This is awesome.
    0:40:57 (upbeat music)

    Episode 5: Why did Logan Kilpatrick leave OpenAI for Google amidst the AI surge? Join hosts Matt Wolfe (https://twitter.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) as they delve into this significant career move with Logan Kilpatrick (https://twitter.com/officialLoganK), the former head of developer relations at OpenAI. Kilpatrick’s new role at Google places him at the forefront of AI product leadership, following substantial contributions to AI tools like ChatGPT.

    This episode unravels the layers behind Logan Kilpatrick’s pivotal shift from OpenAI to Google. As a key figure in the development of groundbreaking AI technology, Kilpatrick reveals his insights on the evolution of AI, addressing the challenges and excitement that encouraged his transition. The discussion offers a peek into the potential future of AI, the ethics of open-source models, and the continuous need for innovation in technology sectors.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Logan Kilpatrick shares insights on OpenAI, Google.
    • (05:09) ChatGPT creation process deliberate and intentional.
    • (09:13) Shift from small to large company challenging.
    • (11:10) Excited about opportunities in language model space.
    • (16:42) Nuance of openness in technology and projects.
    • (18:36) They’re not selling a developer platform.
    • (22:22) Open source GPT models raise safety concerns.
    • (26:30) Concerns about centralized power and AI influence.
    • (39:56) Hardware challenges for AI.
    • (31:42) Exciting challenges of increasing artificial intelligence adoption.
    • (37:29) “Play with technology to find value.”
    • (38:07) Introduced dad to ChatGPT for business.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • The Large Language Model Race with Pete Huang, Founder of The Neuron

    AI transcript
    0:00:02 Google said, we’ve got a new large language model that’s awesome.
    0:00:04 And then open AI went, hold my beer and watch this.
    0:00:05 That was crazy.
    0:00:06 That was absolutely crazy.
    0:00:06 Right.
    0:00:12 And look, I think the whole thing around the one person business that can
    0:00:16 generate 10 million a year, 20 million a year, a billion dollars, whatever sort
    0:00:18 of fancy thing you want to put on it is possible.
    0:00:23 Hey, welcome to the next wave podcast.
    0:00:23 My name is Matt Wolf.
    0:00:25 I’m here with my co-host, Nathan Lanz.
    0:00:28 And today we’ve got an amazing guest for you.
    0:00:33 Someone that loves to nerd out around AI and keeps his finger on the pulse of AI
    0:00:35 just as much as we do today.
    0:00:40 We’ve got the founder of one of the most popular AI newsletters, the neuron on the
    0:00:42 show. Today we’re talking with Pete Huang.
    0:00:44 Thank you so much for joining us, Pete.
    0:00:44 How are you doing?
    0:00:45 How’s it going, everyone?
    0:00:46 Good, it’s good to be here.
    0:00:51 Can you give us the quick rundown of what the neuron is, what type of reader
    0:00:54 you have, what your goal with each issue is?
    0:00:56 So the neuron is a daily AI newsletter.
    0:00:59 It’s filtered through the lens of knowledge workers, non-technical
    0:01:01 audience, small business owners.
    0:01:04 And we give you updates every day, money through Friday.
    0:01:05 It’s a very, very quick read.
    0:01:09 We do two stories and a bunch of links just showing what’s going on.
    0:01:13 And what we try to do is filter through the lens of like really the average
    0:01:16 person. There’s a lot of, you know, talk around the technicals of AI.
    0:01:21 But if we zoom way back and make sense of how does it actually manifest in
    0:01:25 your work, in your business and how you make changes in how you do work?
    0:01:26 What does it actually look like?
    0:01:27 Yeah, that’s amazing.
    0:01:29 I mean, congrats on the success of that newsletter.
    0:01:30 That is huge.
    0:01:34 I have to ask you one question because it’s the question that people ask me
    0:01:37 the most. So I actually want to turn it around on somebody else.
    0:01:41 Is there anything like in the AI world that you’ve kind of like heard of?
    0:01:45 It’s in the pipeline that maybe you’re aware of or you’ve seen already
    0:01:47 that has you really excited right now?
    0:01:49 Oh, it’s the agents for sure.
    0:01:50 Agents for sure.
    0:01:55 Because if anything, because my lens is on what running a business
    0:01:59 looks like in the future, and today, right now, a lot of business owners,
    0:02:02 enterprises, small business, everyone included, they’re a little bit confused
    0:02:06 by AI because they’re like, okay, well, the press and the media,
    0:02:09 they’re promising me these transformative things.
    0:02:12 I should be able to translate, run a completely different business
    0:02:13 because of AI today.
    0:02:18 And that’s not actually happening because the reality of AI, at least in
    0:02:22 the ChatGPT world as it is today, is it’s automating what we call
    0:02:24 like the bits and pieces of work, right?
    0:02:30 It is, I’m a marketing coordinator and I can save 10 to 30 minutes of my
    0:02:34 day every day because of this small little thing that I now created a prompt
    0:02:36 in Claude or ChatGPT to do.
    0:02:39 And now it’s giving me a first draft and I don’t have to spend that time doing
    0:02:43 it. That is not as transformative as it is for that one particular person.
    0:02:48 For the business owner, it’s not like wholesale transformative, right?
    0:02:53 Agents, I think have a much bigger promise along that front because you’re
    0:02:58 now going to unlock end to end automation for an entire process, right?
    0:03:02 So if you have an agent chain that can go from creative concept to a brief,
    0:03:06 to a proposal, to all the iterations, to all the back and forth, like really
    0:03:08 all those things and handle all those pieces of work.
    0:03:12 Now you’re talking about for a large corporation, for example, a set of AI
    0:03:15 agents could take on even 10%, 20%.
    0:03:17 That’s huge, right?
    0:03:18 That is a huge amount of work.
    0:03:20 And those people can now do other things.
    0:03:26 And now you’re looking at a real multiples of creativity, not this 10 to 20%
    0:03:28 thing, you know, on, on a per person basis.
    0:03:32 You’re talking about entire sort of like chunks of work at a time.
    0:03:38 I think the whole thing around the one person business that can generate 10
    0:03:41 million a year, 20 million a year, a billion dollars, whatever sort of fancy
    0:03:44 thing you want to put on it is possible, right?
    0:03:45 But you need agents to get there.
    0:03:49 I don’t think you sitting in front of ChatGPT every day is going to do it.
    0:03:52 That is again, giving you 50% leverage on your time.
    0:03:54 We’re really looking for a hundred X, right?
    0:03:56 And I think agents represent that hundred X.
    0:04:02 When all your marketing team does is put out fires, they burn out fast.
    0:04:06 Sifting through leads, creating content for infinite channels, endlessly
    0:04:09 searching for disparate performance KPIs, it all takes a toll.
    0:04:13 But with HubSpot, you can stop team burnout in its tracks.
    0:04:17 Plus your team can achieve their best results without breaking a sweat.
    0:04:22 With HubSpot’s collection of AI tools, Breeze, you can pinpoint the best leads
    0:04:27 possible, capture prospects attention with clickworthy content and access
    0:04:29 all your company’s data in one place.
    0:04:31 No sifting through tabs necessary.
    0:04:33 It’s all waiting for your team in HubSpot.
    0:04:37 Keep your marketers cool and make your campaign results hotter than ever.
    0:04:40 Visit hubspot.com/marketers to learn more.
    0:04:46 So when you say an agent, let me just kind of clarify.
    0:04:50 A, so I know that we’re talking the same language, but B, the
    0:04:51 listeners know what we’re talking about.
    0:04:55 When you say an agent you’re talking about, you go into like
    0:04:59 a chat window, like ChatGPT, and give it a task that you want it to complete.
    0:05:04 But instead of just giving you a single response back, it will keep on working
    0:05:09 on that task and iterating and iterating and possibly even using external tools
    0:05:13 and APIs until it reaches the end goal that you initially prompted it with.
    0:05:14 That’s exactly right.
    0:05:18 And so I think to me, the, the easiest way to explain it is there are three
    0:05:21 parts, exactly all the things that you just said, Matt, today, right now in
    0:05:26 chat, it’s like very directed, which is like, I want you to write a blog post
    0:05:26 about this.
    0:05:28 And it’s like a very specific thing.
    0:05:29 Like it’s very self-contained.
    0:05:35 There’s no extra research involved, but an agent has the ability to plan.
    0:05:39 It has the ability to reason and has the ability to use other tools.
    0:05:42 And so instead of saying, write a blog post, you might give it something a
    0:05:46 little bit more abstract and say, come up with and
    0:05:50 execute the marketing strategy for my business and it’ll go through everything
    0:05:53 around, okay, like I’m going to research, what does the marketing strategy mean?
    0:05:55 What does it mean to have a good marketing strategy?
    0:05:57 What does it mean to execute it?
    0:05:59 What are the sort of best practices for your line of work?
    0:06:02 What are all the pieces and sort of like compile everything.
    0:06:06 And as it’s sort of reflecting on its own work, it’s reasoning through the problem.
    0:06:09 It’ll then go into execution phase and start to use tools, right?
    0:06:12 Which is, okay, now I know that I need to create an Instagram post.
    0:06:17 So let me go draft the thing and then go call the Instagram API and
    0:06:20 upload the post every week, every two days, whatever it is.
    0:06:25 And so now all these things that were normally done by a human going from chat
    0:06:30 GPT to clicking over to Instagram.com, whatever it is, is now replaced by code,
    0:06:30 right?
    0:06:35 It’s code and these large language models that have just
    0:06:37 like developed enough capability, right?
    0:06:40 And so this is sort of the prospect of GPT-5, Llama 3, et cetera.
    0:06:42 Um, to do all these things.
    0:06:47 And so this is getting very close, honestly, to what it feels like to have an AI employee.
    0:06:47 Right.
    0:06:52 Like in some cases, I know previous jobs that I’ve had where the only value
    0:06:55 ad that I’ve brought to the table was the fact that I could think because I have
    0:06:57 a human brain and I was clicking on software all day.
    0:07:01 Like I was just moving bits and pieces of information from one software to another.
    0:07:03 Like an agent could totally do that.
    0:07:04 You know what I mean?
    0:07:07 Like an agent can go and build a Salesforce dashboard for you.
    0:07:09 It can go run SQL for you.
    0:07:10 It can do all these things.
    0:07:14 So there’s like this whole new wave of, of capabilities and startups and
    0:07:18 products that are all agentic in this way that, that, that’ll come hopefully in
    0:07:19 the next couple of years.
    0:07:22 And again, my lens is on just like the future of business, right?
    0:07:26 Like I think that’s going to like really fundamentally change how businesses are run.
    0:07:26 Yeah.
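    The plan, reason, and use-tools loop Pete describes can be sketched as a toy script. Everything here is hypothetical and for illustration only: `fake_llm` scripts the model's "reasoning" so the loop is runnable without any real API, and the tool names are made up. A real agent would call a chat-model API and a real tool registry in place of these stubs.

    ```python
    from typing import Callable, Dict, List

    def fake_llm(history: str) -> str:
        """Scripted stand-in for a chat-model call: draft, then publish, then stop."""
        if "draft_post ->" not in history:
            return "draft_post: spring sale"
        if "publish ->" not in history:
            return "publish: last draft"
        return "done:"

    # Hypothetical tools the agent can invoke (stand-ins for real APIs).
    TOOLS: Dict[str, Callable[[str], str]] = {
        "draft_post": lambda topic: f"[draft about {topic}]",
        "publish": lambda _: "posted",
    }

    def run_agent(goal: str, llm: Callable[[str], str], max_steps: int = 5) -> List[str]:
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            decision = llm("\n".join(history))    # 1. plan/reason: pick the next tool
            tool, _, arg = decision.partition(":")
            if tool == "done":
                break
            result = TOOLS[tool](arg.strip())     # 2. act: invoke the chosen tool
            history.append(f"{tool} -> {result}") # 3. observe the result, then loop
        return history

    print(run_agent("run my Instagram marketing", fake_llm))
    ```

    The `max_steps` cap matters: it is what keeps a loop like this from "spinning in circles" forever when the model never decides it is done.
    
    
    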
    0:07:31 It feels like probably GPT-5 will be the first time where agents kind of
    0:07:34 start working because like, you know, I wrote this big Twitter thread back, you
    0:07:36 know, like a year ago about BabyAGI, right?
    0:07:37 It went really viral.
    0:07:40 I went on TV and talked about it and it was really promising, but like, you know,
    0:07:43 all of those, you know, you’d boot them up and get them working on something.
    0:07:45 And it would just end up spinning in circles because it wouldn’t be able to
    0:07:46 complete the task.
    0:07:48 It would basically get lost, you know?
    0:07:52 And so I think as soon as you get where they can actually complete tasks and come
    0:07:56 back and show you the results or even go on and do the next logical step, that’s,
    0:07:59 that’s going to be such a huge unlock for businesses and entrepreneurs.
    0:08:04 And, and the rumor, the rumor is that with GPT-5, people have seen demos and it
    0:08:05 can, not that it’s perfect.
    0:08:07 I’m sure it’s going to like take many, many years to get amazing.
    0:08:11 But apparently, yeah, you can tell it to do some tasks and it just goes off and does
    0:08:13 them and comes back and reports to you.
    0:08:14 Yeah.
    0:08:14 Yeah.
    0:08:18 And, and Nathan, that getting lost thing, I think is the unlock, right?
    0:08:26 Cause right now, even if it’s 2% off, 3% off, you take, like, 97% times 97%
    0:08:30 times 97% times however many steps you want it to complete.
    0:08:34 And like at some point there’s like a guarantee that it will not complete it
    0:08:36 correctly if you have enough steps, right?
    0:08:38 It’ll, that number will converge to zero.
    0:08:45 And so the, the real part is like, how do you get that 97% correct to as high as
    0:08:50 possible, like 99.999999999% or even 100% theoretical?
    0:08:51 I have no idea, right?
    0:08:52 But that’s the biggest gap.
    0:08:55 There are a couple of platforms out today, but that’s just how it is, right?
    0:08:57 Like it is just not there yet.
    0:09:01 And so once you get past three, four steps, it just starts to crash every
    0:09:02 single time.
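    The compounding arithmetic Pete describes is easy to check: if each step of an agent chain succeeds independently with probability p, an n-step chain succeeds with probability p^n, which falls toward zero as steps are added. A quick sketch (the 97% figure comes from the conversation; per-step independence is an assumption):

    ```python
    def chain_success(p: float, steps: int) -> float:
        """Probability an n-step chain completes when each step succeeds with probability p."""
        return p ** steps

    for steps in (1, 5, 10, 50, 100):
        print(steps, round(chain_success(0.97, steps), 3))
    # 1   0.97
    # 5   0.859
    # 10  0.737
    # 50  0.218
    # 100 0.048
    ```

    Even a 97%-reliable step leaves a 100-step chain succeeding under 5% of the time, which is why per-step reliability, not raw capability, is the unlock being discussed.
    
    
    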
    0:09:05 I think everyone needs to be thinking about how they’re going to use agents when
    0:09:08 they get good, because it’s going to create so much competition that if you’re
    0:09:10 not using them, they’re going to be so far behind.
    0:09:13 If you think of like the Peter levels of the world, who just like crank out, like,
    0:09:16 Oh, I made a new startup and now it’s making $20,000 a month.
    0:09:18 And like, in the next month, he does another one, right?
    0:09:20 He just keeps doing this somehow.
    0:09:24 Imagine when he has agents where he can literally just like, here’s my idea.
    0:09:25 I have a social following.
    0:09:26 So I can kind of jumpstart it.
    0:09:28 I don’t have to worry about like no one’s going to try it.
    0:09:30 At least a thousand people will try anything I do.
    0:09:34 The agent goes and makes the landing page an hour after I thought about it on
    0:09:38 the beach and now I’m testing the idea.
    0:09:38 It didn’t work.
    0:09:38 Okay.
    0:09:40 Well, tonight we’ll try some more ideas.
    0:09:43 It’s just like, that’s going to be amazing.
    0:09:43 Totally.
    0:09:47 The week that we’re recording this kind of the biggest thing going on in the AI
    0:09:50 world is all of these new large language models dropping.
    0:09:54 We got Gemini 1.5, which just became available publicly.
    0:09:56 We’ve got GPT-4.
    0:10:01 They just released a new April 9th version that they claim is a major update,
    0:10:03 but didn’t really tell anybody what they updated about it.
    0:10:04 Just that it’s a major update.
    0:10:12 We got Mixtral 8x22B that just got released, which I believe is open source.
    0:10:14 The 8x7B was open source.
    0:10:16 So I believe this one is as well.
    0:10:19 And then we also got the announcement from Meta that Llama 3 is most likely
    0:10:20 coming in May.
    0:10:25 So lots of large language model news going on right now.
    0:10:29 What is your take on all this large language model news?
    0:10:34 Do you see like one of these companies like being a clear winner?
    0:10:38 Do you think there’s place for all of these large language models to all coexist?
    0:10:40 Like what’s your take on it?
    0:10:41 It’s a race.
    0:10:44 I mean, it’s all a race to build the best models, right?
    0:10:46 And so, of course, you have your GPT-4.
    0:10:49 There’s that upgrade that happened this week.
    0:10:52 Everyone’s sort of already talked about GPT-5, though, you know what I mean?
    0:10:56 It’s like when both Sam Altman and Satya Nadella are talking
    0:10:58 openly about GPT-5, like it’s coming.
    0:11:01 Like everyone knows it’s in the works and it’s just a matter of time.
    0:11:05 I think rumors, if I have this correct, guys, it’s like summer, I believe.
    0:11:07 That’s GPT-5. Is that right?
    0:11:07 Yeah.
    0:11:09 And so it’s just like all these models are great.
    0:11:13 I mean, like Claude 3, I think, is amazing.
    0:11:17 And I still think Claude today, chatbot-wise, is the best option.
    0:11:19 So that was a big upgrade over GPT-4.
    0:11:25 Yeah, we had Gemini 1.5 public preview availability for all developers.
    0:11:30 A lot of advancements there that they claim, you know, Gemini 1.5 beats GPT-4.
    0:11:34 You have all these, the sort of like second group, the
    0:11:37 Mistrals, et cetera, that are trying to make their mark and find a
    0:11:40 position in the market, and you have Llama 3 with Meta.
    0:11:46 But all of this is sort of looming with the sort of shadow of GPT-5 coming.
    0:11:46 You know what I mean?
    0:11:48 Like everyone’s comparing it against GPT-4.
    0:11:51 But GPT-4 has been out for a year already.
    0:11:53 And that was like the old GPT-4, you know what I mean?
    0:11:58 And so you factor in like this week’s upgrade with some math capabilities.
    0:12:01 And then you just have to ask yourself like, OK, if everyone’s just now
    0:12:05 catching up to GPT-4, are they just going to get blown out of the water by GPT-5?
    0:12:06 You know what I mean?
    0:14:09 Yeah, yeah, I’ve been thinking, too, that GPT-5 is probably
    0:12:11 way ahead of everything else we’re seeing.
    0:12:18 And some evaluations came out for GPT-4, the new upgrade, maybe a few hours ago,
    0:12:22 and it’s showing that actually it’s outperforming Claude again in coding.
    0:12:25 And it’s kind of weird, too, because like they released it and it’s so vague.
    0:12:27 And like you said, yeah, GPT-4 has been out for a year.
    0:12:30 They’re still releasing updates to GPT-4.
    0:12:32 They’re not calling it 4.1 or 4.2.
    0:12:33 It’s kind of confusing.
    0:12:36 And I kind of wonder if they’re doing it on purpose, just to, like, you know,
    0:12:39 show they’re so far ahead. It’s like, whenever
    0:12:43 Claude gets ahead, we’re always just barely
    0:12:47 better than the best model until we release the real thing we’re actually
    0:12:51 focused on, which is GPT-5, not 4.
    0:12:57 When you look at it from a product marketing and PR lens, OpenAI knows
    0:12:58 that they’re ahead, right?
    0:12:59 Like, let’s just put it out there, right?
    0:13:01 Like, they’re pretty much ahead.
    0:13:05 And if you see their behavior over the last year and a half, every single
    0:13:09 time Google has launched something, they’re always ready.
    0:13:12 And in fact, if you remember last spring, every time Google
    0:13:16 made an announcement, OpenAI made an announcement that was
    0:13:17 like 10 times better.
    0:13:20 And it created this atmosphere of just like,
    0:13:21 what is Google doing?
    0:13:22 You know what I mean?
    0:13:25 And Google’s trying, they’re like trying to release new things or prepping
    0:13:29 their launches, their PMs or PMMs are going nuts, trying to craft a
    0:13:30 campaign and a message that works.
    0:13:33 And OpenAI was like, boop, let me just go, like, drop something out.
    0:13:37 Oh, here’s another one from the backlog that just like beats you, right?
    0:13:39 And, you know, so this one is ready.
    0:13:43 And to your point, Nathan, like this thing that dropped on the April 9th
    0:13:47 update for GPT-4 wasn’t actually a real launch, you know what I mean?
    0:13:50 From a product marketing perspective, it was not a real launch. There was no blog post.
    0:13:54 There was no specificity into what actually improved, other than, like, I
    0:13:55 think it’s better at math.
    0:13:56 You know what I mean?
    0:13:57 Yeah, it got a little bit better.
    0:13:57 Yeah.
    0:13:59 Well, there was somebody who said, like, major update or major upgrade.
    0:14:03 And there were others saying just upgrade or update. That was
    0:14:04 different messaging from OpenAI.
    0:14:05 Totally, totally.
    0:14:09 And so it’s just like, they had to have been sitting on this for so long that
    0:14:11 they’re kind of like, okay, we know Google Cloud Next is coming.
    0:14:14 We know they’re going to do something, Google Gemini 1.5.
    0:14:14 Yeah.
    0:14:15 Let’s just be ready.
    0:14:19 And just like, even a half-baked release or an upgrade or whatever, we wouldn’t
    0:14:21 even need to be specific about it.
    0:14:22 We’ll just drop it.
    0:14:25 And then people, you know, then we’ll be talking about it on this podcast, right?
    0:14:27 Versus Gemini 1.5, right?
    0:14:32 So it’s like, it’s just a masterclass, I think, in OpenAI flexing on the
    0:14:35 rest of the market and knowing very clearly what position they’re in.
    0:14:40 Yeah, Paul Graham always said that like Sam Altman really shined in terms of
    0:14:42 strategic thinking and planning.
    0:14:44 And so I think a lot of people are not realizing that.
    0:14:47 Like, he’s one of the masters at being strategic and planning things out.
    0:14:50 And a lot of people think like, oh, they’re just kind of
    0:14:52 floundering right now because GPT-5 is not out.
    0:14:54 And I’m like, no, I think this is part of the plan.
    0:14:58 And then when it comes out, it’s going to be so much better than everything else.
    0:15:00 And I think Claude and other ones are going to be like, oh, wow.
    0:15:02 They’re like way behind.
    0:15:03 That’s my personal belief.
    0:15:06 Even going back to Pete’s point about just sort of OpenAI always sort of
    0:15:10 having something queued up to be ready, wasn’t it the same week that
    0:15:15 Gemini 1.5 was announced that OpenAI dropped the Sora video?
    0:15:18 I’m pretty sure that was like the same week.
    0:15:21 Google said, we’ve got a new large language model that’s awesome.
    0:15:23 And then OpenAI went, hold my beer and watch this.
    0:15:25 And then put out the Sora videos.
    0:15:26 That was crazy.
    0:15:27 That was absolutely crazy, right?
    0:15:30 And look, to Google’s credit, and maybe this is where we start talking
    0:15:35 about Gemini a little bit, but like that release when Google dropped Gemini 1.5,
    0:15:36 that was a pretty big deal.
    0:15:41 Like there’s a lot going on with Gemini 1.5 that matters a ton.
    0:15:44 And it wasn’t just like this, let me catch up to GPT-4 thing.
    0:15:46 It was like a pretty landmark statement.
    0:15:48 But it made for a crazy news week, right?
    0:15:52 Because you had Gemini 1.5 and everyone’s just like, oh my gosh,
    0:15:55 now video is a real thing with Sora.
    0:15:58 And like, before, the video
    0:16:00 generators weren’t actually a real thing.
    0:16:02 And then now it’s like a real sort of like thing to watch out for.
    0:16:05 I remember that it was absolutely wild.
    0:16:11 But I’m curious, Matt, if we step back here and kind of just go back to Google
    0:16:14 Cloud Next and all the sort of announcements there.
    0:16:17 Gemini is definitely, I think, one of the headliners coming out. One,
    0:16:20 Can you just help me level set the Gemini conversation?
    0:16:21 Just like, what is it?
    0:16:24 Like, what is the big deal about it over something like GPT-4?
    0:16:29 And then let’s talk about what’s coming with this model and all the other
    0:16:30 rest over the next few months.
    0:16:34 We’ll be right back.
    0:16:37 But first, I want to tell you about another great podcast you’re going to want to listen to.
    0:16:41 It’s called Science of Scaling, hosted by Mark Roberge.
    0:16:45 And it’s brought to you by the HubSpot Podcast Network, the audio
    0:16:47 destination for business professionals.
    0:16:52 Each week, host Mark Roberge, founding Chief Revenue Officer at HubSpot,
    0:16:56 senior lecturer at Harvard Business School and co-founder of Stage 2 Capital,
    0:17:00 sits down with the most successful sales leaders in tech to learn the secrets,
    0:17:03 strategies, and tactics to scaling your company’s growth.
    0:17:09 He recently did a great episode called How Do You Solve for Siloed Marketing
    0:17:11 and Sales, and I personally learned a lot from it.
    0:17:13 You’re going to want to check out the podcast.
    0:17:17 Listen to Science of Scaling wherever you get your podcasts.
    0:17:24 So one of the biggest things about the Gemini 1.5 model when they announced
    0:17:29 it was that it was going to have up to a 10 million token context window.
    0:17:34 But I think the release version that was going to be made available was going
    0:17:37 to have up to a million, but then the version that we actually have now,
    0:17:39 I think it’s like 248,000 or something like that.
    0:17:44 So the version that we have available to us right now inside of the Gemini
    0:17:49 platform, I think is like 258,000, somewhere in that range.
    0:17:51 I don’t know the exact number off the top of my head, but it’s somewhere in that range.
    0:17:54 So it’s the largest context window we have available.
    0:17:56 You know, supposedly it’s much better at coding.
    0:18:00 I actually haven’t tested it myself for coding yet, but the context window,
    0:18:02 I think was really, really that big leap.
    0:18:07 And in the article that they shared, they did the needle in a haystack test,
    0:18:12 right, where they had a, you know, 700,000 word article.
    0:18:15 They put a single sentence that was unrelated to the rest of the article
    0:18:19 somewhere inside of it and then asked questions about it.
    0:18:24 And the needle in a haystack test found the answer like 99.9% of the time.
    0:18:27 So it did really well on the needle in a haystack test, like the
    0:18:30 retrieval-augmented generation tests.
    0:18:32 And it also has that huge context window.
    0:18:34 I’d say those are probably the biggest factors.
    0:18:38 Well, I saw a tweet saying that you could upload files up to two gigabytes.
    0:18:40 And I haven’t actually confirmed that, but like,
    0:18:41 that sounds incredible.
    0:18:44 Like if you can actually like upload incredibly large documents to this thing
    0:18:47 and have it evaluate them or summarize them or, you know,
    0:18:50 ask questions of that document, that’s huge.
    0:18:53 I’m not sure what ChatGPT’s limit is, but I feel like, yeah, I don’t know
    0:18:57 the upload limit on ChatGPT, but I know Bilawal also shared something
    0:19:01 either yesterday or today around Gemini 1.5 where he uploaded an entire
    0:19:05 interview and had Gemini create the timestamps for it.
    0:19:07 So it actually listened to the whole interview and made timestamps.
    0:19:08 And he said they were super accurate.
    0:19:13 I’ve tried doing that with ChatGPT, and A, you have to sort of export
    0:19:15 the transcript first, right?
    0:19:18 If you’re using Gemini, apparently you can just upload the video
    0:19:20 and it’ll just pull the data from the video.
    0:19:25 With ChatGPT, you have to grab the transcript, copy and paste it into ChatGPT,
    0:19:26 and tell it to write you timestamps.
    0:19:30 It will actually get the sort of notes for the timestamps, right?
    0:19:31 But it’ll have the timing way off.
    0:19:36 Like the actual like time that it wants you to jump to will not correlate at all.
    0:19:41 So that’s another big leap: supposedly it’s getting better at actual
    0:19:46 multi-modality. Multi-modality so far has mostly meant
    0:19:50 we can upload text, we can upload images, we can talk to it.
    0:19:52 But all it’s doing behind the scenes is transcribing it.
    0:19:54 So it’s still just kind of text, right?
    0:19:56 Yeah, there’s video understanding.
    0:19:58 So just to put some numbers out there, right?
    0:20:02 So with Gemini, the commonly cited number, although it sounds like the release
    0:20:06 is a little smaller than this, is the one million token context window.
    0:20:08 And one of the comments that we get a lot is like, OK, tokens,
    0:20:12 like, why did you invent a new unit to count things by?
    0:20:15 So token is like three fourths of a word, basically.
    0:20:16 It’s like three or four characters.
    0:20:20 And if you have a million tokens, then you should assume that’s
    0:20:25 like 700,000 words in one lengthy conversation with Gemini.
    0:20:28 You can feed 700,000 words into Gemini, right?
    0:20:33 That also roughly equates to one hour of video, about 10 hours of audio
    0:20:35 in terms of understanding there.
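The unit conversions quoted here can be written out as a tiny helper. These are the rough rules of thumb from the conversation (1 token is about three-quarters of a word), not exact figures from any tokenizer or from Gemini's API limits, so treat the constants as approximations.

```python
# Rule of thumb from the discussion: 1 token ≈ 3/4 of an English word,
# so 4/3 tokens per word. This is an approximation, not a tokenizer spec.
TOKENS_PER_WORD = 4 / 3

def tokens_to_words(tokens: int) -> int:
    """Approximate how many English words fit in a given token budget."""
    return round(tokens / TOKENS_PER_WORD)

def words_to_tokens(words: int) -> int:
    """Approximate how many tokens a given word count consumes."""
    return round(words * TOKENS_PER_WORD)

# A 1M-token context window comes out to roughly 750,000 words,
# which the conversation rounds down to "like 700,000 words".
print(tokens_to_words(1_000_000))
```

By the same quoted ratios, a million tokens also corresponds to roughly one hour of video or about ten hours of audio in Gemini 1.5's multimodal accounting, though those figures come from Google's announcement rather than this arithmetic.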
    0:20:38 So to your point, to Bilawal’s experiment, right?
    0:20:41 It’s like, man, you can just toss an hour video in there
    0:20:42 and it’ll have full understanding.
    0:20:45 And this is where that needle in the haystack test gets really important.
    0:20:50 So stepping back for a second, the needle in haystack test, as Matt was describing,
    0:20:53 is yeah, you just bury some piece of information just like randomly
    0:20:57 in this huge amount of text, like somewhere in 700,000 words.
    0:21:00 You just drop in that random thing and just see if it can actually detect it.
    0:21:04 The reason this is important is when Anthropic first released
    0:21:06 an upgraded version of Claude.
    0:21:12 They had this thing where like, oh, like we can do like 128,000 tokens,
    0:21:14 which at that point was like pretty significant, right?
    0:21:17 Like compared to ChatGPT, which was in the five digits.
    0:21:20 And then someone ran this needle in a haystack test.
    0:21:24 And then it turns out that like the more information you were giving Claude,
    0:21:28 it would get worse and worse at finding the needle in the haystack.
    0:21:32 And they found something where it
    0:21:35 just wasn’t performing correctly, according to spec.
    0:21:39 And so this is sort of like a test to show it’s like not only
    0:21:44 do we have the ability for you to just like submit one million tokens
    0:21:46 worth of stuff, but it actually works, right?
    0:21:47 It’s like it’s not going to drop it.
    0:21:49 It’s not going to progressively get worse.
    0:21:53 It’s like a sort of uptime guarantee, or a quality
    0:21:56 guarantee, of your token context window.
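The needle-in-a-haystack test described here is simple enough to sketch. The harness below builds the corpus and the scoring check; the actual model call is left as a commented-out placeholder, since `ask_model`, the filler text, and the passphrase are all hypothetical stand-ins, not part of any real benchmark or API.

```python
import random

FILLER = "The quick brown fox jumps over the lazy dog."
NEEDLE = "The secret passphrase is 'tangerine-47'."

def build_haystack(n_sentences: int, seed: int = 0) -> tuple[str, int]:
    """Return a long filler text with the needle buried at a random
    position, plus that position (its sentence index)."""
    rng = random.Random(seed)
    sentences = [FILLER] * n_sentences
    pos = rng.randrange(n_sentences)
    sentences.insert(pos, NEEDLE)
    return " ".join(sentences), pos

def recovered(answer: str) -> bool:
    """Did the model's answer surface the buried fact?"""
    return "tangerine-47" in answer

haystack, pos = build_haystack(10_000)
prompt = f"{haystack}\n\nWhat is the secret passphrase?"
# response = ask_model(prompt)   # hypothetical LLM call
# print(recovered(response))
```

A full evaluation sweeps `n_sentences` (context length) and the needle's depth, then plots recovery rate over both, which is how the Claude degradation mentioned above was spotted.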
    0:21:58 So there’s so much here, right?
    0:22:02 Like, with one hour of video or 10 hours of audio, there is so much
    0:22:05 you can do with that that, you know, we were trying to come up with
    0:22:07 use cases or products that could exist.
    0:22:11 It’ll be things like, OK, now there’s like some speaking coaches, AI powered
    0:22:14 apps that’ll do speaking training and coaching.
    0:22:17 It’s like as you’re on a call or like you can upload a video and then
    0:22:21 it’ll analyze the transcript and then pick out the ums and the uhs and
    0:22:25 all the repeated words and all these things and tell you, like, you know,
    0:22:28 you should be speaking with more authority.
    0:22:31 You’re hedging too much, or you’re using ums and uhs too much.
    0:22:36 Now imagine it’s actually analyzing the video of you talking.
    0:22:39 So in like a public speaking setting and giving you feedback on your stage
    0:22:41 presence, right?
    0:22:43 Because it can actually see the video now, and
    0:22:45 what you’re actually doing in the video.
    0:22:48 So it’s like these types of things that I think are really interesting.
    0:22:51 And I’m just so excited about all the apps that are getting built.
    0:22:54 Now that again, developers have wide access to this, public access to this.
    0:22:57 You can imagine that a lot of builders who are like, oh man, there’s
    0:23:01 this thing that I’ve been wanting to build, and it requires video access,
    0:23:02 now have video with Gemini.
    0:23:05 And so we get kind of get to experiment with that.
    0:23:07 So I’m just super excited about all those ideas that are going to come out.
    0:23:14 Well, there’s also Hume, H-U-M-E, where it actually understands the context.
    0:23:17 I don’t know if it actually reads your face right now, but it listens to your
    0:23:20 voice to try to understand whether you’re sad, happy, angry, right?
    0:23:24 It tries to understand the emotion in your voice and supposedly down the line,
    0:23:29 it’ll actually look at your face and try to, you know, pick up micro cues in
    0:23:31 your face to figure out the emotions as well.
    0:23:33 So that even takes it to another level.
    0:23:36 You were talking about like being on stage and it giving you feedback about
    0:23:37 your stage presence.
    0:23:41 Imagine if it can tell when you’re more nervous, when you’re more excited,
    0:23:44 when you’re more happy and even give you feedback on that stuff on stage.
    0:23:47 Yeah, yeah, that’s coming, I’m sure.
    0:23:48 Yeah.
    0:23:54 Nathan, is there anything you’re excited about with, be it Gemini or Llama 3 or
    0:23:55 any of these models that are coming out?
    0:23:57 Anything that you’re particularly looking forward to?
    0:24:01 On the open source side, you know, Elon Musk is saying that Grok is going to
    0:24:04 be as good as GPT-4 or better very soon.
    0:24:08 So to have that kind of open source version where you can kind of ask it
    0:24:11 whatever and not worry about how it’s filtering things.
    0:24:15 And if you saw some of the stuff with Claude, how they’re filtering
    0:24:18 what you say to it, where they don’t actually tell
    0:24:22 the LLM what you’re actually saying, even.
    0:24:25 They pre-filter what you type.
    0:24:28 And so they don’t even give your actual message to Claude.
    0:24:34 So basically, you write a prompt, it goes and rewrites the prompt
    0:24:36 for you in a way that it thinks is probably going to get you a better
    0:24:40 response, but also in some ways may potentially censor you, right?
    0:24:43 It’s making sure that you’re saying things appropriately or whatever.
    0:24:45 It is doing that.
    0:24:47 And it’s not even showing the LLM what you said. Their
    0:24:49 solution is to literally change your own words.
    0:24:53 I just think it is good to have kind of an open source alternative where,
    0:24:57 like, okay, if OpenAI and all those go way too hard on that side,
    0:25:01 there’s an open source alternative where you can ask it things and not have
    0:25:01 to worry about that.
    0:25:04 We’ve talked about all these large language models from Google, from OpenAI,
    0:25:07 from Anthropic, from all of these places.
    0:25:11 And Grok didn’t come up once until just now when Nathan mentioned it.
    0:25:12 Yeah.
    0:25:16 Grok has an open source model that anybody can use if they want.
    0:25:19 I mean, I don’t know if most people could run it on their computers
    0:25:23 with the size of it, but anybody can access it however they want.
    0:25:30 And supposedly he’s about to drop a model that is as good as GPT-4 that is
    0:25:32 going to be widely available to users of Twitter.
    0:25:37 So it’s just kind of interesting to me how much Grok gets sort of left out
    0:25:40 of the conversation when it comes to large language models.
    0:25:42 Well, it’s just not as capable as far as I understand it, right?
    0:25:42 Yeah.
    0:25:46 Like in open source, you have the Meta stuff with Llama.
    0:25:49 You have Mistral that’s doing really well.
    0:25:52 And then you have the likes of Grok, et cetera.
    0:25:55 But then when you look at the surveys, for example, I think it was
    0:25:59 Andreessen Horowitz that did a survey of enterprise buyers and practitioners
    0:26:04 for generative AI. Basically, what they were saying is that among the three or four
    0:26:10 players any practitioner is considering using, model-wise, there’s OpenAI,
    0:26:15 because they have the leading model, and there’s Meta Llama, because it’s open source.
    0:26:19 And it’s just like the only real option there, basically.
    0:26:22 Like why would you give up anything on the capability front when you
    0:26:25 just need your best, the singular best open source model?
    0:26:31 And then you have Google, and Claude, or Anthropic’s models, similarly
    0:26:34 because of sort of the latest capability reasons, right?
    0:26:39 And so from that perspective, it’s sort of like, why would you consider
    0:26:45 even the Mistral stuff, even the Grok stuff, if it’s not going to hold up to Meta Llama?
    0:26:47 Like, I think that’s frankly the answer.
    0:26:51 I don’t use Grok because I just don’t think it’s as good. Until it’s good,
    0:26:53 I see no reason to actually switch over.
    0:26:57 I know that like the fact that Elon Musk is behind the project
    0:27:00 and actually really pushing it, I think long term he is going to catch up
    0:27:02 and surpass a lot of people. I do believe that.
    0:27:06 One other interesting thing about Grok, right, is that it does have access to Twitter.
    0:27:10 So it’s trying to use like all the conversations on Twitter as part of the context.
    0:27:13 But we also saw last week that that can be an issue, right?
    0:27:17 Because last week there was an issue where Grok reported some news
    0:27:18 that wasn’t even real news.
    0:27:21 It just pulled from, like, memes on Twitter, thinking it was real news,
    0:27:23 and then fed it back to somebody as if it was real.
    0:27:27 So, you know, maybe feeding all of the information off of Twitter
    0:27:29 isn’t that great of an idea after all.
    0:27:32 But that is one of the sort of benefits of Grok as well.
    0:27:34 That Twitter access.
    0:27:36 That’s fascinating. I’ve never even thought about that.
    0:27:39 So you can, like, make up memes, and then Grok thinks it’s real,
    0:27:41 and then it actually gets spread as real news.
    0:27:44 It reminds me of I live in San Francisco.
    0:27:47 And there’s a lot of these like shenanigans that happen with self driving cars.
    0:27:50 So like, people will take a traffic cone
    0:27:53 and put it on the sensor and prevent the car from moving.
    0:27:57 Yeah, a more violent one, unfortunately, was I think like two months ago.
    0:28:01 People literally blew up a self-driving car in Chinatown here.
    0:28:02 Yeah, yeah, in Chinatown.
    0:28:04 And it kind of feels like this, you know what I mean?
    0:28:08 Like, I could totally see, if Grok ends up being very good, widely used,
    0:28:12 and it’s very well publicized that it pulls from Twitter, that there’s constant
    0:28:15 like meme campaigns to throw it off, you know what I mean?
    0:28:17 And just to get it to say whatever.
    0:28:20 So I don’t know, like never underestimate the Internet, you know what I mean?
    0:28:23 Like there are going to be shenanigans abound with this stuff.
    0:28:29 Why do you think there’s such an aversion, a hatred towards the self driving cars?
    0:28:33 Is it like the people that are Uber drivers and Lyft drivers feeling threatened?
    0:28:36 Is it people that are just anti AI?
    0:28:38 Is there like any feeling of why that’s happening?
    0:28:45 My feeling is, if you take the same person and put them in any other city
    0:28:48 that’s not San Francisco and doesn’t have this exposure to tech whatsoever,
    0:28:52 The arrival of self driving cars is like magic, you know what I mean?
    0:28:58 Like, when I first got access to Waymo, for example, in its beta,
    0:29:01 and I hopped in, everyone immediately around me was like, oh my gosh, what is that?
    0:29:03 Like that is so cool.
    0:29:05 They were taking pictures and everything.
    0:29:10 And I think there’s just something about how much experimentation there is
    0:29:14 in San Francisco that, for some reason, having the
    0:29:16 newfangled thing is just baseline.
    0:29:21 And in some respects, you get desensitized to just how cool it can be.
    0:29:26 And as a result, I think you just get tired of just how much change there is
    0:29:31 or like you feel this narrative of like big tech kind of invading you
    0:29:34 because you just have to deal with this change all the time.
    0:29:38 And it happened when there were these delivery robots. I think it was
    0:29:42 DoorDash, I’m not too sure, but these food delivery companies were piloting,
    0:29:45 you know, sending these little rovers around to go pick up your food.
    0:29:47 And then you wouldn’t need a person anymore.
    0:29:48 You can just like have this whole thing.
    0:29:52 And then all of a sudden, like, you know, pedestrian safety became an issue, right?
    0:29:56 It’s like, oh, what happens if someone trips and like falls over one of these things?
    0:29:58 And these are like good concerns.
    0:30:00 I think those are the right questions to ask.
    0:30:03 But just like this overall feeling of just like the first instinct you have
    0:30:07 when you receive something new and innovative, I think you get reminded
    0:30:11 a little bit of all the other times where it’s failed or it’s been annoying
    0:30:15 or been a nuisance. The scooter wars were a very big example: Lime scooters,
    0:30:19 bird scooters, those invaded San Francisco, polluted the sidewalks.
    0:30:21 I think that was a little bit too much by the end of it.
    0:30:25 But you kind of just get reminded of that kind of era a little bit.
    0:30:26 I think that’s where that comes from.
    0:30:30 When you’re there, like, it’s so expensive to live in San Francisco.
    0:30:34 And the people who are in tech are making crazy amounts of money.
    0:30:39 And the people who are not, they’re barely scraping by in San Francisco on average.
    0:30:42 And so when I was there, like when I first moved there, it wasn’t that bad.
    0:30:46 But it seemed like every year there was more and more hate towards people in tech in general.
    0:30:50 And I think a big part of it is, like, that difference of income and things like that.
    0:30:52 You would even see it, like, in tech journalists.
    0:30:54 Like, tech journalists not liking people in tech.
    0:30:57 And it became this really odd dynamic there, where it’s like,
    0:31:00 you’ve got tech entrepreneurs, and the people writing about them don’t like them.
    0:31:02 It’s like, this is very odd.
    0:31:07 I feel like a lot of journalists seem to hate tech, AI specifically.
    0:31:11 If you see anything on, like, a big media platform these days,
    0:31:18 it usually takes a more negative slant towards AI tech than a positive one, right?
    0:31:23 So like people like us who are creating content that’s more on that positive spin
    0:31:27 of where all of this is going, we seem to be having our heyday right now
    0:31:29 where people are paying attention to what we’re talking about
    0:31:31 and they don’t trust the big media anymore.
    0:31:36 So I mean, I think there is a path to be optimistic and still get views
    0:31:38 and still get clicks.
    0:31:43 I just think the media is so used to this old way of doing it, of like, drama sells, you know?
    0:31:49 I do think with some of these conversations, as much optimists as we are,
    0:31:52 I do think it’s important to be sympathetic to some of these points of criticism.
    0:31:58 For example, this week, along with all the LLM stuff that’s been going on,
    0:32:01 music is like on top of everyone’s minds, right?
    0:32:04 So I mean, today specifically, there’s the Udio release.
    0:32:06 There’s like another one coming out of YC called Sonauto.
    0:32:08 I think it was last week.
    0:32:12 You had 200 of the top artists sign a letter saying, hey, look,
    0:32:15 like we generally think AI is good for creative work
    0:32:18 and it can be built in a way that supports creative work.
    0:32:23 However, we like strongly, strongly, strongly encourage that tech companies, investors, et cetera,
    0:32:28 the entire ecosystem working on AI do not build AI that replaces
    0:32:33 or diminishes the humans in the industry, or their compensation, right?
    0:32:38 Now, the exact achievement of that, like how to actually strike that balance
    0:32:40 is like very, very hard in my opinion.
    0:32:43 Like, there’s going to be give and take with all of this, right?
    0:32:50 But if you were an artist on that petition, signing that letter
    0:32:56 and your latest understanding was Suno or one of these other tools,
    0:32:57 frankly, I don’t even think it was Suno.
    0:33:01 Frankly, like a lot of these musicians or a lot of people outside of our bubble
    0:33:03 aren’t even exposed to this.
    0:33:07 And it’s like only truly the early tinkerers on your Discord servers,
    0:33:09 et cetera, that are really playing around with this.
    0:33:14 Like you can’t be happy seeing the Sonato from today or Udio today,
    0:33:18 which are even steps ahead of Suno, right?
    0:33:20 I don’t think that makes them happy.
    0:33:25 Even if, from Silicon Valley’s perspective, it does not seem that way.
    0:33:28 I think everyone here is sort of aligned on this general idea of like, OK,
    0:33:32 well, like the intent here is to create tools that promote creativity,
    0:33:35 that democratize access, that democratize output.
    0:33:36 That all sounds great, right?
    0:33:40 But if you are in the industry and you’re watching all these developments
    0:33:45 happening, I think it’s very fair for that person to assume, like,
    0:33:47 they’re coming for us, you know what I mean?
    0:33:49 Like that’s sort of the mindset.
    0:33:52 And it’s like, if this is happening this quickly and in a matter of a couple
    0:33:56 of months, the models have gone this far, then what’s next?
    0:34:00 Like the only thing I see left remaining is for this to fully replace me
    0:34:02 or outright replace my job.
    0:34:03 That can’t feel good, right?
    0:34:08 And I’m very sympathetic to that, because I feel like to brush that
    0:34:11 conversation, that narrative, under the rug is missing the point.
    0:34:16 I think that we’re going to have a much harder time convincing people to, one,
    0:34:21 use these tools, and two, convincing them that the proliferation of these tools
    0:34:23 is net good for society,
    0:34:26 if we don’t at least acknowledge that sort of thinking, right?
    0:34:28 So I think it’s important to be nuanced in these things.
    0:34:32 It’s just, when I look at the animation communities,
    0:34:36 the gaming communities, the music communities, these are all deeply creative
    0:34:40 people, and there is a lot of anti-AI sentiment, right?
    0:34:43 I think like all three of us have probably been subject to some amount
    0:34:47 of like pseudo cancellation on Twitter or YouTube comments or whatever it is.
    0:34:51 As a result of this. Matt, maybe not, I don’t know,
    0:34:53 your comment section always looks pretty positive.
    0:34:55 You’re not seeing it all then.
    0:35:00 I do think like, you know, like we have to acknowledge it, right?
    0:35:03 And it’s not just a matter of like, oh, these people like they’re just
    0:35:04 going to get left behind.
    0:35:06 It’s like, no, like, I think these are pretty legitimate concerns.
    0:35:11 And we just have to figure out a way economically, legally and technologically
    0:35:15 that we can sort of just have everyone come along.
    0:35:16 It’s not going to be perfect, right?
    0:35:18 But I feel like there’s got to be a solution there.
    0:35:21 I’m an optimist about that, at least that there is sort of a way
    0:35:23 forward that that kind of fits everyone in the future.
    0:35:24 That’s fair.
    0:35:26 Like, I think this will, you know, there will be some jobs that will
    0:35:27 dramatically change.
    0:35:29 Some will disappear, new ones will appear.
    0:35:32 Most jobs will change in dramatic ways.
    0:35:36 And there will be some aspect of like, OK, how do you make sure the artists
    0:35:37 get compensated?
    0:35:38 And and it’s tricky.
    0:35:40 I think there is going to be a big legal battle, right?
    0:35:40 Because like, I’m sure OpenAI has trained on so much data that maybe you
    0:35:46 would say they shouldn’t have been allowed to use.
    0:35:51 And I don’t think they would have done that unless they had talked to lots
    0:35:54 of lawyers and decided that, OK, this is actually a gray area.
    0:35:58 Because when you’re training on this data, since this is such a new thing,
    0:36:01 you can say that this is basically the same way that a musician would
    0:36:03 learn from listening to another musician.
    0:36:05 Did that person steal that person’s work?
    0:36:09 You know, and so I think that’s really the argument
    0:36:12 they plan to make, and they’re just not wanting to publicly state it yet.
    0:36:16 But I do believe that is the argument that legal argument is coming,
    0:36:18 because it sounds like they’ve possibly scraped YouTube’s videos,
    0:36:22 and that’s how they’ve, you know, built up their video model.
    0:36:24 Yeah, I mean, I take that same approach.
    0:36:28 I try to be very empathetic, sympathetic to those concerns.
    0:36:31 I try to take them all into account when I am sharing the news.
    0:36:34 I always try to kind of talk about both sides of the equation.
    0:36:38 You know, I really do feel for a lot of these people.
    0:36:41 Like when it comes to music, I don’t think it should be OK
    0:36:44 to just go and create a song in the style of Drake.
    0:36:47 It sounds like the beats Drake would make, sounds like his style,
    0:36:51 sounds like his voice, and you just took him out of the loop completely.
    0:36:52 I don’t think that’s cool.
    0:36:53 Honestly, I don’t.
    0:36:55 I don’t think people should be allowed to do that.
    0:36:58 But that said, there is a very, very fine line.
    0:37:02 What if somebody has a voice that sounds very similar to Drake?
    0:37:05 Are they not allowed to make music that sounds similar to Drake?
    0:37:09 Like there’s some weird, like nuances, gray areas there
    0:37:12 that are really, really hard to deal with.
    0:37:14 But I think at the end of the day, a lot of it comes down
    0:37:18 to the sort of monetary aspect of it, right?
    0:37:21 If somebody who, like, dedicated their life’s work to becoming
    0:37:25 the best musician that they can possibly be, the best game developer
    0:37:28 they can possibly be, the best artist they can possibly be.
    0:37:32 And now people can create stuff at the level that they were able to create it.
    0:37:36 Well, how do they sort of still make money?
    0:37:38 How do they still have a livelihood?
    0:37:40 Like it all comes back to the monetization.
    0:37:44 And I think something needs to happen in the AI world
    0:37:48 where there is some sort of like Spotify style split.
    0:37:50 It is insanely complicated to figure out.
    0:37:53 And I do not envy the people that have to figure that out.
    0:37:56 I just get to be here on the sidelines, armchair quarterbacking.
    0:38:00 But, you know, I do think there needs to be some sort of way
    0:38:05 to compensate the people whose data it was maybe trained on.
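(Editorial aside: the “Spotify-style split” idea, in its simplest form, is a pro-rata division of a revenue pool by attribution. The sketch below is purely illustrative; the function, artist names, and numbers are all made up and do not reflect any real platform’s payout logic.)

```python
# Hypothetical sketch of a "Spotify-style" pro-rata split: each rights holder
# earns a share of a revenue pool proportional to how often their catalog is
# attributed in generated outputs. All names and numbers are invented.

def pro_rata_split(revenue_pool: float, attribution_counts: dict[str, int]) -> dict[str, float]:
    """Divide revenue_pool proportionally to each artist's attribution count."""
    total = sum(attribution_counts.values())
    return {artist: revenue_pool * count / total
            for artist, count in attribution_counts.items()}

payouts = pro_rata_split(100_000.0, {"artist_a": 600, "artist_b": 300, "artist_c": 100})
print(payouts)  # {'artist_a': 60000.0, 'artist_b': 30000.0, 'artist_c': 10000.0}
```

The hard part, as Matt says, is everything the sketch hides: how attribution is measured at all, and who sets the weights.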
    0:38:09 I think if I was a musician right now, and, you know, you can see my background.
    0:38:12 I’ve got guitars. I’ve played in bands before I’ve recorded.
    0:38:13 I’ve gone on tour.
    0:38:16 I’ve done the whole music thing in the past.
    0:38:19 But if I still played music as often as I used to,
    0:38:22 I would actually be really excited about what’s going on because I can be like,
    0:38:26 man, if my voice is popular, I can do the Grimes thing and license it
    0:38:30 and have a million people going out there and creating music on my behalf.
    0:38:35 And I can somehow earn from those people using my voice and using my style of music.
    0:38:39 To me, that gets me excited, but I also understand the flip side of that coin.
    0:38:42 I think that has to be coming because, I mean, will.i.am is invested in Udio.
    0:38:44 That was announced like two hours ago.
    0:38:49 And so, you know, alongside Andreessen Horowitz and a bunch of other great investors.
    0:38:52 And I don’t think he would be investing if they didn’t have some kind of plan
    0:38:55 for revenue share with artists. I could be wrong, but I think the fact
    0:38:59 that he’s involved shows that they’re probably not just
    0:39:00 going to be saying F you to all the artists.
    0:39:02 I think there’s going to be some kind of plan to compensate people.
    0:39:06 Yeah, well, if you look at Udio and you look at Suno, you can’t go into either
    0:39:11 one of those apps and say, generate a song for me that sounds like Katy Perry.
    0:39:13 It’ll just say, sorry, can’t do that.
    0:39:18 That other one, Sonato, I actually did a live stream and I was playing with it
    0:39:20 and just out of curiosity on the live stream, I was like, generate a song
    0:39:23 that sounds like Blink 182. And it’s like, here you go.
    0:39:28 We used the lyrics, the voice style of Mark Hoppus, and the guitar style of Tom DeLonge
    0:39:30 to make this sound like a Blink 182 song.
    0:39:35 So Sonato actually does let you plug in a band’s name and generate a song
    0:39:37 of like whatever you want.
    0:39:40 I have a feeling that feature will probably get phased out.
    0:39:44 I don’t know. I think they need to nail that because that’s like consumer wise.
    0:39:45 That’s where you’d actually get traction.
    0:39:49 Like if you can’t say an artist’s name, you’re not actually going to get, like,
    0:39:52 major, you know, mass adoption of these tools, I don’t think.
    0:39:54 But consumer wise, that’s where it’s at.
    0:39:57 Legal-wise, and not wanting to get sued
    0:40:01 by every music industry person in the world,
    0:40:03 I don’t know if that’s the approach you want to take. I don’t know.
    0:40:07 Yeah. What is you guys’ take on
    0:40:10 Udio and Suno? How do they sort of compare to each other?
    0:40:12 I’ve been playing around with a couple of them.
    0:40:14 I think Udio is very clearly the best one.
    0:40:17 But what have your experimentations led you to?
    0:40:18 Personally, I’m just having fun with them.
    0:40:21 Like I love them. I think they’re both so much fun.
    0:40:25 I’ve made probably like 20 potential jingles for future tools, right?
    0:40:29 Like write me a jingle about why future tools is the place to go
    0:40:33 to learn about AI and the latest tools and it makes like really fun songs.
    0:40:35 And I work them into my YouTube videos.
    0:40:37 For me, it’s just kind of like a fun toy.
    0:40:41 I don’t really see a whole lot of practical applications for it yet.
    0:40:43 But I do think they’re really, really fun to play with.
    0:40:48 Yeah. Right. I think Udio probably overall is better for the most part.
    0:40:52 But I did give it a prompt like generate some like electronic dance music for me.
    0:40:58 And Suno did that style of music better than Udio did.
    0:41:02 So I think, you know, genre dependent, but I think as a whole, Udio is probably better.
    0:41:06 I think the overall direction that I’m excited about is one where
    0:41:08 you can refine what you just generated, right?
    0:41:13 Instead of having this be like a one-shot thing where it just generates
    0:41:14 and that’s pretty much all you get.
    0:41:17 And you just have to keep on trying over and over again to go in
    0:41:18 and change the little aspects of it.
    0:41:22 Maybe there are individual parts of it where you can tease out
    0:41:24 the individual stems and edit them.
    0:41:30 And I think that is generally the direction that’s helpful,
    0:41:35 to turn this into more of a creative tool, rather than an outright
    0:41:38 sort of prompt-to-full-song type of thing.
    0:41:41 Because Matt, to your point, like I am a little bit questioning the use case
    0:41:45 beyond doing your jingles for YouTube videos.
    0:41:49 I’ve also done this for jingle purposes, but, like, jingles aside,
    0:41:51 Like jingles are not a big market, you know what I mean?
    0:41:54 Like they’re not going to like make all these companies
    0:41:57 enough revenue to compete over the long term.
    0:42:00 However, if you’re to build more tooling around it, which is like,
    0:42:04 I think this is maybe Sonato’s approach here, which is can we allow
    0:42:08 a little bit more directed iterations over time?
    0:42:09 That is interesting, right?
    0:42:14 Like that now is opening up a new canvas for new artists
    0:42:19 to explore and creatives to really get in there and master the individual pieces.
    0:42:22 And that can introduce a new way of creating music.
    0:42:24 So generally excited about it.
    0:42:28 But yeah, I’m very curious to see, once we’ve solved that sort of,
    0:42:33 can it create a 30-second clip of pretty coherent, pretty good
    0:42:36 music end to end, then it’s like, then what, right?
    0:42:38 Like then what level do you go to?
    0:42:41 Which direction do you head in research wise?
    0:42:46 I saw a Twitter post, an X post, earlier today of somebody saying,
    0:42:51 like, I’ll be impressed with AI music when an AI song tops the charts, or something
    0:42:53 like that. Is that like the goal of some of these tools?
    0:42:57 Are they trying to like get you to make songs that’s going to be top of the charts?
    0:43:01 Like, I don’t know, as a human myself, maybe you guys can relate.
    0:43:05 But as a human myself, I enjoy listening to music created by other humans.
    0:43:10 Like there is something about listening to like a Led Zeppelin song
    0:43:14 and knowing that like Jimmy Page played those notes on his guitar.
    0:43:19 And that was a very complex thing that he played that makes it that much
    0:43:23 more pleasing on the ears, just knowing the human element was attached to it.
    0:43:27 I don’t feel that sense of accomplishment when I listen to an AI song
    0:43:31 that I generated, right? I might go share it on Twitter
    0:43:32 because I think it’s funny.
    0:43:37 But I’m not going like this is the next, you know, dance club hit or anything.
    0:43:41 I feel like the AI music right now, what it’s best for like a real
    0:43:44 world use case is actually take the lyrics out of it completely
    0:43:48 and use it as like a sound bed for your videos or for your podcast
    0:43:50 or things like that.
    0:43:53 But once you add the lyrics into it and you’re trying to make your own
    0:43:58 like pop songs with singing, I actually don’t know the real use case for it right now.
    0:44:01 Wasn’t it Musical.ly that started off and then turned into TikTok?
    0:44:02 Or maybe I’m wrong. Yeah.
    0:44:05 And so like, you know, I think you could have some kind of new music app
    0:44:07 where like, yeah, if somebody could just generate any kind of song
    0:44:11 as part of some kind of social app, and then that thing could go viral,
    0:44:14 and, you know, that could become a top-charting thing. Like some songs
    0:44:17 from TikTok have become very popular because they were big on TikTok.
    0:44:20 Right. Once again, I do think it’s sort of genre dependent.
    0:44:23 What I get excited for is do you guys remember that clip of Maggie Rogers
    0:44:27 when she was still in school and Pharrell came and listened?
    0:44:30 It was like a music production class of some sort.
    0:44:31 And it was like she blew up.
    0:44:35 Like that’s exactly how she blew up was off of a video recording
    0:44:39 of Pharrell listening to her music that she produced all by herself.
    0:44:42 The singing, just everything was hers, right?
    0:44:44 And I think Pharrell’s commentary was just like, I have no notes
    0:44:48 because this is so authentically you that like I can’t come in
    0:44:54 and sort of give you any sort of advice or criticism or editing off of this.
    0:45:00 And what I think about is if you took Maggie Rogers then
    0:45:05 and gave her a set of new, like these AI powered tools,
    0:45:09 how much more powerful would that experience have been?
    0:45:11 Right. And it’s not to say like, oh, Maggie,
    0:45:15 you only give us the lyrics and you sing, and AI will do everything
    0:45:18 around you, with little one-click spins into different genres.
    0:45:22 There are probably a lot of smaller technical things
    0:45:26 that she did not do in the first pass, didn’t know how to,
    0:45:30 or just didn’t see creatively, whatever it is, because she was young
    0:45:34 and early on in her music career. Getting that first draft of it
    0:45:36 sparks the creativity, if anything, right?
    0:45:40 And you can have her be what she was in that moment,
    0:45:43 which was the vibe curator of the song, right?
    0:45:46 Like she brought the spirit of it: the message
    0:45:49 that she wanted to send, the emotional state that she wanted to conjure.
    0:45:51 But the rest of the technical pieces,
    0:45:55 maybe there’s help that we can give the next Maggie Rogers, right?
    0:46:00 That would be cool to me. To your point around the pop-factory songs,
    0:46:00 you know what I mean?
    0:46:04 Like these are all the same set of musical artists that are writing songs for each other, right?
    0:46:08 They will literally sit in a room and be like, OK, today we’re writing for Shawn Mendes.
    0:46:10 And like, you know, eight of them will get together
    0:46:12 and just sort of all throw songs together.
    0:46:15 One of them will hit and it will become Shawn Mendes’s next song, right?
    0:46:18 There’s something about that that feels a little bit different, perhaps.
    0:46:22 And, you know, it is a much more mechanical process, I suppose.
    0:46:27 But when I think about the essence, like the spirit of like what we’re trying
    0:46:30 to do with these creative tools, or at least the companies doing them,
    0:46:33 I would hope is to find that next student, right?
    0:46:38 And to give them, you know, that much easier of a time to express the spirit
    0:46:42 and the concept of what they’re doing, that feels very powerful to me.
    0:46:46 I do actually have like a couple like sort of last questions I want to ask.
    0:46:50 One of them is, do you have any advice for business owners
    0:46:55 in like the current landscape of AI with the existing suite of tools that are available?
    0:47:00 What do you tell business owners who maybe are hesitant to get into using AI?
    0:47:02 Where would you send them first?
    0:47:08 If I had one line, one line, it’s to give your entire business
    0:47:13 access to one of the paid plans of ChatGPT or Claude and just let them figure it out.
    0:47:16 Honestly, it’s $20 per user per month.
    0:47:20 For most businesses, this is more than affordable, right?
    0:47:24 Like just one month, 30 days and be serious about it, right?
    0:47:28 Like give everyone access and tell them I want you to figure out where it can be more effective.
    0:47:34 Arm the people with a mandate to go figure it out because people will, again,
    0:47:35 given the nature of the product, right?
    0:47:40 Again, a lot of the benefit comes from the bits and pieces
    0:47:41 of all these different processes.
    0:47:44 Who knows them better than your employees that are dealing
    0:47:46 with these processes every single day, right?
    0:47:50 This is not a top down thing where we’re saying, OK, we are now going to apply
    0:47:52 AI in like our marketing department.
    0:47:56 It’s like, no, there are all these like little things that there are enough
    0:47:59 early adopters at your company that will take the time and experiment
    0:48:03 and go figure out if you let them and give them the tools and the support
    0:48:05 from leadership to do it, right?
    0:48:09 Host the hackathon, encourage sharing, reward the behavior, right?
    0:48:15 So typically a lot of the patterns that I’ve seen is there’s just like one person
    0:48:19 in this one department that spent their off hours, one or two hours every now
    0:48:24 and then playing around with a prompt or like a Zapier integration or something
    0:48:29 like that, and all of a sudden it’s saving the entire department 100 hours
    0:48:32 a week because of how many people they scaled it to across the entire operation.
    0:48:36 Maybe, again, something as simple as: everyone, here,
    0:48:40 I sent the prompt over email; copy and paste the prompt from the email,
    0:48:43 go to ChatGPT or Claude or whatever it is, and then use this when you’re doing
    0:48:44 this thing, right?
    0:48:49 That’s already impactful and it had nothing to do with hiring an outside
    0:48:51 consultant or some other vendor or whatever it is.
    0:48:55 It was all just harnessing the inherent sort of interest
    0:49:00 and attention there is around AI and grasping onto those early adopters
    0:49:03 who would like to figure it out and giving them the environment for them
    0:49:04 to do it for you, right?
    0:49:05 So that’s the main thing.
    0:49:08 You’ll be surprised at how many businesses have not taken this seriously.
    0:49:11 Like I’ve talked to a few enterprises that like from the outside looking in,
    0:49:15 they really should have had some movement on this in the last year or so.
    0:49:16 And they just haven’t touched it at all.
    0:49:19 Not one bit, because they’re just like, okay, this is cool.
    0:49:19 We get it.
    0:49:21 But like, ah, like we’re fine.
    0:49:21 You know what I mean?
    0:49:23 It’s like, no, take it seriously, right?
    0:49:27 Like go through the exercise, get it in people’s hands and it’ll come.
    0:49:28 And it’ll come.
    0:49:30 So again, that’s the one thing.
    0:49:33 There’s a whole bunch of other ways you could do it, but like, this is
    0:49:34 the most dead simple one.
    0:49:37 You already have the people and the resources; just give them the environment.
    0:49:40 Thank you so much, Pete, for hanging out with us today.
    0:49:40 That’s awesome.
    0:49:43 You are nonmayorpete over on X.
    0:49:46 The newsletter is the neuron.
    0:49:51 Where should people go to go learn more from you and to, you know, get
    0:49:52 inside of your ecosystem?
    0:49:52 Yes.
    0:49:54 Find me on Twitter, nonmayorpete.
    0:49:57 Go to theneurondaily.com for our newsletter.
    0:49:59 And then find me, Pete Huang on LinkedIn.
    0:50:00 Thanks again for joining us.
    0:50:01 This has been such a fun conversation.
    0:50:02 Thanks guys for having me.
    0:50:03 This is so much fun.
    0:50:13 [Music]
    0:50:23 [Music]

    Episode 4: How is AI impacting the future of creativity and the workplace? Matt Wolfe (https://twitter.com/mattwolfe) and Nathan Lands (https://twitter.com/NathanLands) tap into the insights of Pete Huang (https://twitter.com/petehuang), founder of The Neuron, a daily newsletter which demystifies AI for the average person. 

    In this episode, we delve into the intersection of AI, creativity, and its transformational potential on industries like music and entrepreneurship. Pete Huang shares his experiences and perspectives spanning various facets of AI, including the balance between innovation and ethical considerations, the rising capabilities of language models, and the profound implications these have on artistic creation and business operations. The conversation also explores the broader societal impacts, such as income disparity and public perceptions of technology.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) AI’s impact on business is misunderstood.
    • (05:03) Exploring marketing strategy and AI’s potential impact.
    • (08:14) Using agents will create fierce competition for startups.
    • (12:04) OpenAI leads in product marketing and PR.
    • (15:43) Gemini 1.5 has 10 million token window.
    • (18:58) Importance of needle in haystack test explained.
    • (20:01) Token submission enables enhanced audio processing capabilities.
    • (24:03) OpenAI and MetaLlama are top open source.
    • (27:02) Self-driving cars bring excitement and desensitization.
    • (30:08) Artists call on tech to not replace humans with AI.
    • (35:16) Compensation for similar music in AI.
    • (36:28) Excitement for music licensing and potential earnings.
    • (42:43) Pharrell’s praise for Maggie Rogers’ authentic music.
    • (45:56) Leverage employees’ expertise to improve processes effectively.
    • (47:07) Leverage early adopters to explore AI potential.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • The Rise and Dangers of Speech A.I.

    AI transcript
    0:00:05 I think the big a-ha moment for a lot of people is gonna be that probably in the next six months to a year that the main way
    0:00:07 People are using AI is voice
    0:00:15 Hey, welcome to the next wave podcast my name is Matt Wolf
    0:00:19 I’m here with my co-host Nathan Lands and we are your chief AI officers
    0:00:24 It is our goal with this podcast to keep you looped in on all the latest AI news the coolest AI tools and
    0:00:31 Set you and your business up to be ready for this next wave of AI that’s coming and today
    0:00:38 We’re going to talk about what we think is actually the next wave of AI: your voice. In this episode
    0:00:42 We’re gonna talk about why we think voice is the next big thing
    0:00:45 We’re gonna talk about the tools that are available so that you can use voice
    0:00:52 We’re gonna talk about the scary risks and why this could be fairly damaging to the world if things go in the wrong direction
    0:00:57 we’re also gonna talk about how AI is taking over the music world and
    0:01:01 What potential options are out there to solve some of these problems?
    0:01:05 so a really really fascinating episode lots of cool stuff we’re gonna talk about and
    0:01:08 Excited to share with you. So let’s go ahead and just dig in
    0:01:14 Yeah, so today when everyone thinks about AI most people are thinking about text using text to talk to AI
    0:01:15 You know like ChatGPT, Claude
    0:01:21 but I think the big a-ha moment for a lot of people is gonna be that probably in the next six months to a year the
    0:01:24 Main way people are gonna interact. I believe is gonna be with voice, you know
    0:01:29 And this company Hume AI just raised 50 million dollars in their series B
    0:01:35 And look, they just launched a product called EVI, which is an empathetic voice interface, and you know
    0:01:38 I tried it yesterday with my son
    0:01:40 And he was blown away, you know, he’s ten years old
    0:01:45 He tried it out and he was like what is that and he saw like the little line on it where it’s showing the emotions
    0:01:50 You know, and he started talking to it and he was just like freaking out because it was
    0:01:54 Detecting the emotions in his voice and then responding accordingly
    0:02:00 When all your marketing team does is put out fires they burn out fast
    0:02:06 Sifting through leads creating content for infinite channels endlessly searching for disparate performance KPIs
    0:02:11 It all takes a toll but with HubSpot you can stop team burnout in its tracks
    0:02:17 Plus your team can achieve their best results without breaking a sweat with HubSpot’s collection of AI tools
    0:02:24 Breeze, you can pinpoint the best leads possible, capture prospects’ attention with click-worthy content and
    0:02:29 Access all your company’s data in one place no sifting through tabs necessary
    0:02:31 It’s all waiting for your team in HubSpot
    0:02:38 Keep your marketers cool and make your campaign results hotter than ever visit hubspot.com slash marketers to learn more
    0:02:43 No, it’s really interesting. I played around with it too
    0:02:48 I actually made a YouTube video where it was like the AI news recap
    0:02:50 But in one piece of it, I played around with Hume for a minute
    0:02:55 And it was kind of funny because I was like hey Hume, how you doing today or something like that?
    0:03:01 And it was like, I detect that you’re angry. Why are you angry? And I’m like, I’m not angry
    0:03:06 I think I was probably like when I make videos and when we record podcasts like this
    0:03:08 I tend to ramp up my energy and you know
    0:03:11 I’m probably a little bit more excited than if I was you know
    0:03:17 Just talking to you in real life face-to-face and I have a feeling Hume sort of picked up on that sort of extra
    0:03:23 Excited energy and was like, what is wrong with you? Yeah, yeah, but anyway, you know
    0:03:29 Back to the bigger point here is that I do feel like we’re entering this world where right now
    0:03:34 Most of the prompting we do is with text prompting you go to chat gpt
    0:03:39 You go to Claude you go to one of these platforms right and you type in what you want it to respond to
    0:03:46 But I feel like over the last decade or so the tech world has kind of been training us that your voice should be the
    0:03:54 Mechanism for interacting with the computer, right? With things like Siri, and what’s Google’s version? Google’s assistant version?
    0:03:59 Is it not Google Home? Yeah, but anyway, the point being like, yeah, we have all of these smart devices
    0:04:02 We’ve got our phones. We’ve got our Alexa’s. We’ve got stuff like that
    0:04:09 That is I think where things are going right you’re going to be actually speaking to these things instead of typing prompts
    0:04:17 and with things like Hume, we’re gonna actually get even more context for it to analyze when it gives us feedback, right?
    0:04:24 The rumor has it that at WWDC this year Apple’s big developer conference that they do every year
    0:04:26 That’s kind of where they make the big announcements
    0:04:30 It was WWDC last year where they announced Apple Vision Pro this year
    0:04:37 The big expectation is going to be a lot of AI right and I think a lot of people are speculating that Siri is going to get AI
    0:04:40 Into it what they’re gonna use for the AI. We don’t really know yet
    0:04:47 You know, we we do know that there’s been rumors going around that Apple and Google might partner and that Apple might use Gemini
    0:04:51 1.5 for Siri behind the scenes, but then also
    0:04:59 The week that we’re recording this, Apple just released a research paper called ReALM
    0:05:08 And it’s basically their large language model that they designed for a mobile phone. It is actually very good at reading the context of what’s on the screen
    0:05:10 it can actually view a mobile phone screen and
    0:05:16 Use that context to sort of inform the large language model before it responds to you
    0:05:20 So a lot of people are thinking well, maybe that’s what’s gonna be in Siri, right?
    0:05:26 So maybe you’re on an app, you’re using the app, and you might be able to go, Siri, hey, this app isn’t working
    0:05:31 Can you help troubleshoot it for me? Siri will actually be able to see your iPhone screen see what’s going on and
    0:05:39 Possibly help you through whatever issue you’re running into if they use this realm language model that Apple just put out the research for
    0:05:45 But where they actually go with it. I don’t think anybody’s gonna actually know until WWDC, which is I believe in June of this year
    0:05:52 I think it’s crazy that like, you know, Alexa and Siri both came out like 10 or 11 years ago right around the time when the movie
    0:05:57 Her came out, you know, everyone thought this technology was gonna, you know, improve every year. It was gonna be amazing
    0:06:01 And so a lot of people I think they think oh this stuff is not gonna get better because look look it took 10 years
    0:06:03 It’s still almost the same product
    0:06:09 But they don’t realize, like, when GPT-5 comes out, if you built something like Alexa now with, like, GPT-5 under the hood
    0:06:16 How dramatically different of an experience that’s gonna be. And I’m sure Alexa will be upgrading soon too, because, you know
    0:06:20 Amazon just invested, what was it, like four billion dollars or something like that? Yeah
    0:06:26 Well, they initially invested like 1.75 billion or something and then when yeah all this data came out that
    0:06:31 Opus is actually beating the latest version of GPT-4 in all of the benchmarks
    0:06:35 Yeah, Amazon came back to Anthropic and said hey, maybe we’ll put even more money into you guys
    0:06:38 And then I think they invested like another two and a half billion or something like that, right?
    0:06:42 So we’re gonna have Jarvis in our house pretty soon if you want to yeah
    0:06:48 And if you think of that from, like, a business context. So, you know, as a CEO or a manager, like, how amazing is that gonna be that instead of
    0:06:50 Having to type everything to your AI, you can be, like, really quickly, like, hey
    0:06:54 You know, this is going on, or hey, can you check this report for me, or can you check
    0:07:00 You know, do I have an email from so-and-so, and can you respond to that? Doing all of that by voice, like, how much more
    0:07:03 Efficient you’ll be as an executive using these kinds of tools. Yeah
    0:07:06 While we’re on the topic of voice, too, OpenAI just put out some new research as well, right?
    0:07:13 OpenAI just put out an article on their blog saying, we’ve got this really, really good voice model where with, I don’t remember what it
    0:07:19 Was exactly, but, like, 15 seconds of training data, we can replicate your voice, and then you can type in whatever you want
    0:07:25 It’ll say that thing. Yeah, but then OpenAI in that same article went, but it’s too powerful, guys. It’s too strong
    0:07:31 It’s too scary. We can’t let you use that yet. Which to me is just wild, because we’ve already got it, right?
    0:07:37 We’ve got ElevenLabs. ElevenLabs already does exactly that, right? We’ve had, like, Uberduck
    0:07:45 There’s been a whole bunch of models out there already that do exactly that. And OpenAI kind of coming out and saying, like
    0:07:49 We do it too, but we’re not giving you our version because it’s too good, is kind of weird
    0:07:54 So when I saw EVI from Hume, I was like, okay, that’s probably the best I had seen so far, like, on the emotional side
    0:08:00 It’s amazing at picking up on emotions, but you could still kind of tell it’s not a human. And with ElevenLabs, pretty good
    0:08:05 But you can still kind of tell it’s not a human. I know that the demo I saw I believe it’s from open AI
    0:08:12 That was the most realistic one I had heard. I was like, okay, that’s like 99.5 percent there
    0:08:16 Yeah, like if I didn’t know this technology existed, I would not know that was AI
    0:08:19 I would think that was a real person talking. Yeah. Yeah, the other ones are like 95 percent there
    0:08:24 But when you get to, like, 99, like, people just don’t know that it’s an AI, and that’s where, you know
    0:08:28 I can get that. Yeah, that could be kind of scary, that you could just be pretending to be anyone
    0:08:30 Yeah, no one would know well totally
    0:08:34 But I also think that, you know, people like you and I who are immersed in it on a daily basis are probably a little more
    0:08:40 Adept at actually spotting AI, right? Like, I feel like I’m really good when it comes to AI images at going, I can tell, like
    0:08:44 Instantly that it’s AI now because I’ve seen so many AI images
    0:08:48 I feel like I’m getting that way with yeah voice now to where I can pretty quickly grasp
    0:08:50 Okay, that was done with AI
    0:08:56 But I mean how many people have actually been fooled already by AI voice using things like 11 labs
    0:09:02 I mean, that kind of brings us into, I guess, the next thing that I want to talk about, which is the
    0:09:08 Scary part about all of this AI voice technology that’s coming out right now. You know
    0:09:10 there’s been stories of a
    0:09:14 Scammer calling up somebody’s parents and saying hey
    0:09:18 We kidnapped your daughter and then they would use a voice sample of that person
    0:09:24 to convince the person on the other side of the phone like oh they really have my daughter and
    0:09:31 Collect ransom money right. There’s also that what was it the 25 million dollar scam that happened over zoom
    0:09:34 Like remember when we talked about that for a little bit
    0:09:38 Yeah, yeah, like so I mean that yeah stuff’s happening right now
    0:09:42 Yeah, and I think that was here in Japan, I believe. And so, yeah, somebody
    0:09:48 impersonated an executive, or I think they impersonated the CFO, or it was, like, a top executive at a big company, and
    0:09:52 Called people up, and people didn’t know that it wasn’t a real person
    0:09:56 And they ended up wiring, what was it, 25 million or something like that?
    0:09:59 It was a lot of money that they just wired off to some scammer. Yeah
    0:10:03 Yeah, the tech to do the trickery is already out there
    0:10:07 Yeah, but like you said like yeah open AI is the best one to come along so far
    0:10:12 Yeah, but it’s already out there like people can already do this right now with it with this kind of technology
    0:10:17 So there’s already laws that this kind of stuff is illegal. So I guess the challenge is the scale will probably be greater
    0:10:24 Because, like, it’ll be easier to scam people now. Yeah, right. So that is probably an area where you’ll need AI to help, like
    0:10:26 You know find scammers and things like that, right?
    0:10:31 Which you know hopefully doesn’t go into like big brother territory where we’re using AI like monitor everyone
    0:10:35 But you know possibly that’s it is slightly going towards that direction like yeah
    0:10:40 You need AI to like figure out like so-and-so scamming people and then yeah look at the data and then go get the person
    0:10:47 Yeah, I’m not somebody that, like, is anti-regulation. I actually think regulation to some degree is a good thing
    0:10:54 In the sense that I feel like regulation gives companies bumpers to stay within I think that could be a good thing
    0:10:59 Right, I think right now. There’s a lot of AI companies out there. They’re developing and they’re going well
    0:11:01 We don’t know what’s gonna happen with regulation
    0:11:04 So let’s just keep on pushing the limits
    0:11:11 But I do feel like regulation to some degree gives them some bumpers to stay within so they know they’re not overstepping their bounds
    0:11:16 However, when it comes to AI, I don’t feel like regulation works at all
    0:11:21 Because bad actors are gonna be bad actors, right? So we can go out there and say like hey
    0:11:25 We’re gonna regulate, you’re not allowed to clone people’s voices with AI and dupe people
    0:11:33 Well, you already can’t do that, right? Whether it’s law or not, you know, it’s already highly unethical, people already see that as a negative thing
    0:11:36 I’m pretty sure it’s already a law that you can’t do that anyway
    0:11:42 But yeah, yeah, the technology’s out there. It’s out there in open source form. It’s out there in closed source form
    0:11:49 It’s out there people are gonna be able to do it. So how do you like what is regulation going to accomplish in that sense?
    0:11:52 I don’t really understand the regulation argument here
    0:11:56 You know, most of the politicians have very little like real-life business experience
    0:12:01 And so like when they think about regulation a lot of times it’s just like, okay, is the public scared of this technology?
    0:12:07 Well, okay, then I’m gonna regulate it. It’s like it’s actually good for your country. Is it good for like business?
    0:12:14 Is it good, you know, are you just literally responding to some polls? Is that all that is and you know without going too deep down like the sort of
    0:12:21 Political rabbit hole there’s lobbyists, right? And a lot of the government is run by the companies who are paying
    0:12:25 For those people to be where they are in the government, right?
    0:12:32 So I think what the big fear is, and I know this is something, whether you love him or hate him, that Gary Marcus has talked about a
    0:12:37 Little bit in the past. Yeah, is that when we get too much regulation, what it’s gonna end up
    0:12:41 Doing is concentrating the power into a small handful of companies, right?
    0:12:47 What’s gonna end up happening is the companies like OpenAI and Microsoft and maybe Google and some of these big corporations that are
    0:12:53 Pushing for the regulations may end up driving the regulations in ways that really favor their company
    0:13:00 But really do not favor the little guys, you know, that’s kind of like the biggest divide that’s happened in Silicon Valley recently
    0:13:03 Is like you got the divide between like the Sam Altman, you know side
    0:13:09 Which is also kind of aligned with the YC and then the Mark Andreessen VC side where they’re like, yeah
    0:13:13 This is basically regulatory capture here. Like they’re like trying to go out there and like say yeah
    0:13:18 What we’re building is very dangerous. So you should please regulate. Yeah, that’s an odd thing to ask for like why are they doing that?
    0:13:22 Please regulate us and we’ll be on the board to decide what those regulations are
    0:13:27 Yeah, yeah, yeah regulate us in this exact way that we want that no startup can afford
    0:13:32 Yeah, and oh by the way, open source is very dangerous and so that’s kind of been like the undertones of it all too
    0:13:36 It’s like when they’re talking about regulation. It’s like open sourcing of AI is very dangerous
    0:13:39 That’s almost always the undertone. It’s like well, if you don’t have open source AI
    0:13:44 Then, yeah, you will end up with, like, one or two companies that control all the technology
    0:13:49 So that’s why I’m very hesitant. Like, I’m not gonna say there should be no regulation
    0:13:53 Like, maybe there is regulation needed for, you know, deepfakes or, you know, AI voice at some point
    0:13:58 But just speeding into it and trying to regulate everything, kind of like what’s going on in the EU a little bit right now
    0:14:00 I’m like, that’s not the right approach
    0:14:04 Hey, you know, a lot of where my head’s at kind of comes from, like, the crypto space, right?
    0:14:09 Where there’s been looming regulations forever, but a lot of the regulations never end up happening
    0:14:15 So a lot of the companies that are trying to build are like, am I gonna build something that’s gonna end up getting regulated out of existence?
    0:14:21 So a lot of people and companies in that crypto space are like, just tell us the damn regulations
    0:14:24 So we know what bumpers to stick within and we’re not in this limbo
    0:14:29 But when we’re like talking about AI, I feel like it’s a different story because it’s like you do have the open source
    0:14:30 You do have the closed source
    0:14:34 You do have the companies that, you know, have a better foothold within
    0:14:38 Governments than the little guys, you do have all these other nuances
    0:14:46 That I think really muddy the waters. And also, I feel like regulation is just gonna make it easier for the bad actors to be the bad actors
    0:14:53 While the people that are trying to do right lose abilities, essentially. Right, along with all the negatives that we just talked about
    0:14:57 There are still some positives of this this voice AI that’s coming out
    0:15:00 I think there are some use cases that I think could be very valuable
    0:15:07 I know for me as a content creator, using it for, like, dubbing is really, really helpful. Like, if I misspeak in one of my videos
    0:15:10 I don’t have to go back in, record, and, like, overdub something
    0:15:18 I can open up a tool like Descript or ElevenLabs, type in the words that I meant to say, and then just use that and dub it into my
    0:15:23 Video. So I’ve actually used some of these tools to sort of fix a misspeak in some of my videos
    0:15:28 So that’s like that’s one really good use case for some of this AI voice
    0:15:36 We’ll be right back, but first I want to tell you about another great podcast you’re gonna want to listen to. It’s called Science of Scaling
    0:15:42 Hosted by Mark Roberge, and it’s brought to you by the HubSpot Podcast Network, the audio
    0:15:46 Destination for business professionals. Each week, host Mark Roberge
    0:15:53 Founding chief revenue officer at HubSpot, senior lecturer at Harvard Business School, and co-founder of Stage 2 Capital
    0:16:01 Sits down with the most successful sales leaders in tech to learn the secrets, strategies, and tactics to scaling your company’s growth
    0:16:07 He recently did a great episode called How Do You Solve for a Siloed Marketing and Sales
    0:16:10 And I personally learned a lot from it. You’re gonna want to check out the podcast
    0:16:14 Listen to Science of Scaling wherever you get your podcasts
    0:16:22 Yeah, and Hume, they’re kind of pitching that this is gonna be used for therapy and things like that, right? Which is
    0:16:27 Exciting you know, it’s kind of you know gonna be weird to talk to a robot
    0:16:30 And that’s how you’re getting your therapy, but I mean who knows like when it gets good enough
    0:16:34 You know, I guess that’ll be a thing it does kind of make me think of the movie her
    0:16:38 Yeah, right, you know, where you’ve got this guy who, I think you haven’t seen it
    0:16:46 And I feel like you should take my AI fan card from me. Yeah. You’ve seen Ex Machina? I have seen Ex Machina. Yes
    0:16:49 Okay, okay, okay. You’re okay. You’re, like, halfway there, you know
    0:16:55 So with Her, you’ve got, you know, Joaquin Phoenix playing this guy who’s, like, got divorced. He’s very sad
    0:16:59 He’s writing these authentic love letters, you know. It’s like a service
    0:17:05 So he’s basically writing fake love letters. They’re not really authentic, he’s writing them for people. A very lonely guy
    0:17:09 And then he installs this AI operating system and and he you know at first he thinks
    0:17:14 Oh, it’s just like that’s kind of cool toy or something and then next thing you know, he’s fallen in love with it
    0:17:19 Right and and at some point, you know not to give spoilers, but you know at some point
    0:17:23 It kind of outgrows him. Yeah, basically and then if you know at some point it’s like, okay
    0:17:25 It’s obviously you need relationships with actual humans
    0:17:29 So as a supplement though, I could see this being great for therapy
    0:17:32 Like, maybe you have a real therapist, and then you have, like, the AI, like, okay
    0:17:37 For when I can’t talk to the therapist, can’t talk to him all the time. Or what if the therapist’s advice to everybody is just, buy more
    0:17:43 GPUs? Jensen, Jensen’s taking over. You’re wearing his shirt. The more you buy, the cheaper it gets, or whatever his slogan is
    0:17:45 The more you buy, the more you save
    0:17:51 Yeah, but but you know, also I think about like translation and stuff
    0:17:56 So, you know, I recently tweeted about like, you know, I got engaged in Japan and I was using AI to kind of
    0:18:02 facilitate the conversation because I can speak a little bit of Japanese but not enough to have like a deep conversation and then
    0:18:05 Yeah, without this technology, I wouldn’t have gotten engaged, and, like, my life’s a lot better now
    0:18:07 And I’m sitting there thinking about, like, God
    0:18:11 You know, how’s that gonna be when you have, like, AI voice with this too? It’s kind of exciting
    0:18:16 It’s kind of weird too. Like I don’t really want the AI voice to be like the main voice she hears from me
    0:18:19 Yeah, I’m actually curious, right? How does that interaction look?
    0:18:22 Like, when you guys have conversations with each other, you and your fiancée
    0:18:25 Yeah, do you like have a phone between you you speak in English and then it says it out
    0:18:30 In Japanese and then she says it in Japanese and it speaks it back in like, what does that look like?
    0:18:36 I’m just curious. Well, yeah, at first it was almost entirely using the phone, and occasionally we’d use Google Translate
    0:18:40 But, like, Google Translate’s results are really bad. It almost always has a mistake
    0:18:44 But the reason you use Google Translate is simplicity, right? It’s faster. You’re not waiting on a result
    0:18:48 It’s just right there, but it makes so many mistakes, and a few times we had, like
    0:18:54 Complete misunderstandings because we were using Google Translate. I’m like, let me put this in ChatGPT. Oh, okay
    0:18:56 Like you said something totally different
    0:19:00 You know, I’m not gonna go into details, but there’s once or twice or it was like, oh jeez like
    0:19:03 We’re really misunderstanding on something like you know something something big
    0:19:06 But you know, but now how it’s kind of evolved
    0:19:09 So at first it was, like, entirely using mostly ChatGPT, especially when we weren’t in person
    0:19:13 It was ChatGPT a lot, and then in person it was a lot of Google Translate because it’s faster
    0:19:19 Right, and then how it’s kind of evolved now is, you know, I speak a little bit of Japanese. My son from a previous marriage
    0:19:23 He’s half Japanese, and so I’ve got a little bit of exposure to Japanese. And then she speaks a little bit of
    0:19:28 English. She loves American movies and things like that, right? So she knows some words, right?
    0:19:31 But she just can’t like you know say a whole paragraph or something, right?
    0:19:37 So now it’s kind of evolved where in person we mainly use chat GPT for like something really detailed like a long conversation
    0:19:39 Like a really deep conversation
    0:19:42 We’ll be using AI, but for other little things
    0:19:47 We’ve, like, got little words and things where we know, you know, we can communicate basic things
    0:19:50 I use custom instructions with chat GPT to tell it how to teach me as well
    0:19:55 Oh, nice. Like, don’t just translate it, but actually break down the key words underneath the translation
    0:20:00 And so when it would translate for me, it’d be like, oh, here’s this word, and then here’s the hiragana for it
    0:20:03 Japan has, like, three writing systems, and hiragana is the simplest one
    0:20:07 They teach kids first. And so I’m like, show me hiragana, ’cause I know hiragana
    0:20:11 And so not only was it translating, but it’s been teaching me at the same time
    0:20:15 I think I didn’t properly explain that on Twitter, ’cause people were like, dude, you’re gonna have to learn Japanese
    0:20:20 I’m like, yeah, but it’s been an amazing tool. Like, this relationship would not have happened without that. And, you know
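The "translate and teach" custom instruction Nathan describes could look something like this. The instruction wording here is a hypothetical reconstruction, not his actual prompt; the structure is the standard system/user message list that chat-completion APIs accept.

```python
# A sketch of a "translate and teach" custom instruction. The wording below
# is a hypothetical example; the system/user message-list structure is the
# standard shape accepted by chat-completion APIs.

CUSTOM_INSTRUCTIONS = (
    "When I ask you to translate English to Japanese: "
    "(1) give the Japanese translation, "
    "(2) underneath it, list the key words with their hiragana readings, "
    "so every translation also teaches me the vocabulary."
)

def build_messages(english_sentence):
    """Build a chat message list ready to send to a chat-completion API."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Translate: " + english_sentence},
    ]

messages = build_messages("Where should we eat dinner tonight?")
print(messages)
```

Because the instruction lives in the system message, every translation request automatically comes back with the vocabulary breakdown, which is what turns a translator into a tutor.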
    0:20:23 Not even just in a if you think about like a personal context with relationships
    0:20:29 But like business relationships like what is this gonna do for people where now you can go meet a business person in Japan or
    0:20:33 China or Saudi Arabia or whatever and and you can use their local language
    0:20:36 You’ll probably, like, have a little device you put down on the table or something. Yeah, you talk
    0:20:40 And it’ll probably, like, actually spit out something, whether it’s in their headphone or whatever
    0:20:44 Yeah, and this is, like, probably the next six months. It already exists and the quality is okay
    0:20:48 But, like, you know, probably in the next year it’ll be really, really, really good
    0:20:53 I was I was at CES back in January and there was a company there called Time Kettle and
    0:20:59 Time Kettle has these little earpieces. They just look like like air pods that you’d get from like Apple, right?
    0:21:01 That’s kind of what they look like and there’s two of them
    0:21:03 I put one in my ear you put one in your ear
    0:21:05 I just speak naturally in my language
    0:21:11 But what you hear in your earpiece is translated automatically for you, and vice versa. So, Babel fish
    0:21:16 So we can just sit there with those those little earpieces in and have a conversation right with in two different languages
    0:21:19 Yeah, so I mean that that already exists
    0:21:24 You know, you see it at, like, the UN, right? Like, when you look at UN meetings where somebody’s up in front of the whole UN
    0:21:31 Speaking, everybody in the UN has, like, one of those little headphones in. But I think there it might even actually be a human
    0:21:35 Translating for every single person there. I don’t know for sure, but that’s my understanding
    0:21:40 So I mean eventually I think all of that’s just gonna be like an AI just automatically translating it for whatever language
    0:21:45 He turned the dial to, you know. Yeah, it’s in the book, what, Hitchhiker’s Guide to the Galaxy, right? Where they have the fish in your
    0:21:48 Ear
    0:21:53 Yeah, it automatically translates it for you. Like, yeah, so we’re heading there, like, in the next year
    0:21:57 Which is gonna be amazing. So, yeah, there are products, they’re, like, niche products
    0:22:02 That some people know about, but there’s not, like, a mainstream, right? Like, amazing automatic translation
    0:22:07 You know, device or product. And I would imagine most people are just gonna want to use the thing
    0:22:09 They’ve already got in their pocket, right? Like, yeah
    0:22:15 Most people probably aren’t gonna go and invest in something like that when they can pull out their phone and just sort of hand it back and forth
    0:22:18 I bet there’s, like, five to ten well-funded startups right now
    0:22:24 Working on this. I’d be shocked if there’s not. Like, it’ll be a next wave that you’ll see in the next, like, six months
    0:22:26 To a year. Like what you did there with “the next wave.” Yeah, yeah
    0:22:29 There’ll be, like, five of those all of a sudden, you know. And it’s because, yeah
    0:22:33 It’s a great idea. Of course somebody’s gonna, you know, build that and win in that market
    0:22:38 Well, I think the smartest move a company can make is for Apple to just build it into the AirPods somehow
    0:22:43 Right? Like, yeah, everybody already has these little AirPods sitting around, like, just build it into there
    0:22:48 Where somebody could speak to me and like it’ll use the processing on my phone, right?
    0:22:49 The phone will be in my pocket
    0:22:53 Yeah, but these can already hear and they can already produce sound back into my ear, right?
    0:23:00 So why not listen to what they say send the information to my phone translate it send it back to my air pods
    0:23:05 Like I there’s almost no doubt in my mind that it will eventually just be built into like our earbuds that we use now
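The earbud loop Matt sketches, earpiece hears speech, phone does the recognition, translation, and synthesis, and the result plays back in the ear, is just a three-stage pipeline. In the sketch below all three stages are stand-in stubs; a real implementation would call speech-recognition, translation, and text-to-speech services in their place.

```python
# Stand-in sketch of the earbud translation pipeline: hear -> transcribe ->
# translate -> speak. Each stage is a stub where a real system would call an
# on-device or cloud model.

def speech_to_text(audio, lang):
    # Stub: a real system would run speech recognition here
    return audio["transcript"]

def translate(text, src, dst):
    # Stub: a tiny demo dictionary stands in for a translation model;
    # unknown phrases pass through unchanged
    demo = {("en", "ja"): {"Hello": "こんにちは"}}
    return demo.get((src, dst), {}).get(text, text)

def text_to_speech(text, lang):
    # Stub: a real system would synthesize audio for the earpiece here
    return {"speech_for": text, "lang": lang}

def earbud_pipeline(audio, src="en", dst="ja"):
    """Run one utterance through the three stages."""
    text = speech_to_text(audio, src)
    translated = translate(text, src, dst)
    return text_to_speech(translated, dst)

print(earbud_pipeline({"transcript": "Hello"}))
```

The hard part in practice is the latency budget: all three stages have to run fast enough, end to end, that the conversation still feels live.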
    0:23:10 That’ll help business so much. Like, you’ll be able to go do business in so many more countries with fewer
    0:23:14 Misunderstandings. Like, right now, if you travel around the world, when you go to other countries
    0:23:16 They don’t speak your language. Like, yeah, sure, some places might speak English
    0:23:20 But still, it’s kind of daunting when you go to a country where they don’t speak English
    0:23:23 And now that’ll be gone. Like, you just put it in your ear and off you go
    0:23:26 Yeah, and so that’ll really start to connect the world more
    0:23:31 I believe, make people understand each other better than people currently do. Yeah, absolutely. The other topic
    0:23:36 I want to talk about real quick before we wrap up here is AI music, which has had some
    0:23:39 Really big advancements within the last few months
    0:23:44 I don’t know if you had a chance to play around with, like, Suno version 3 yet. But it can make, like... I haven’t, no
    0:23:46 It can make up to two-minute songs
    0:23:53 It actually writes the lyrics creates all the background music and sings it and like the songs are actually good
    0:23:58 Like I’ve played some on some of my YouTube videos and people are like I’m actually digging this song
    0:24:03 I can listen to that like they’re actually good songs, and it’s just so so so impressive
    0:24:08 What they’re doing with the music now. But also, on the flip side of that coin
    0:24:16 200 artists, just this week that we’re recording this, 200 artists all signed a petition to try to stop the advancement of AI in the music industry
    0:24:18 because it
    0:24:24 Creates an existential threat to their income, their business model, right?
    0:24:29 So, like, on one hand, the AI music tech is getting so
    0:24:34 Mind-blowingly good that we can create whatever song we feel like listening to right now
    0:24:38 And it’ll be a unique, good, interesting song that we like. But on the other end of the spectrum
    0:24:40 all of the traditional
    0:24:47 Musicians and all of the bands that we grew up liking and the current pop artists of today are all fighting tooth and nail against it
    0:24:52 Yeah, I want to try out the new Suno. I haven’t tried it yet. I heard about it, sounds amazing
    0:24:54 I’ll probably try it right after we get off here
    0:25:01 But yeah, I created, like, one of the top Twitter threads back, you know, many, many months back now, on AI Grimes, where she had, like
    0:25:05 She’s allowing people to use her voice to make songs
    0:25:07 Which I you know that’s kind of going on the one side of the spectrum
    0:25:11 Which seems to be pretty rare because everyone else most of the top musicians seem to be like yeah
    0:25:17 Don’t use my voice, especially without my permission. And so, like, the other big one that came out was the AI Drake song
    0:25:22 And that song, you know, I’m kind of a Drake fan. Like, I like, like, five of his songs
    0:25:26 I’m not, like, a hardcore fan, but I heard that and was like, oh yeah, this is now one of my top five
    0:25:31 Yeah, yeah, yeah. I put it on my phone. I was listening to it when I went to the gym and everything
    0:25:34 I was like, this is crazy that this is an AI Drake song
    0:25:36 That’s not from him and then all of a sudden, you know
    0:25:42 People started having their Twitter threads taken down, the YouTube videos taken down. Anything that had AI Drake in it was, like, gone
    0:25:44 Yeah, and so it’s like, oh wow
    0:25:49 Yeah, he, like, realized that that’s a big threat, like, if there’s a song that’s almost as good as his songs coming out
    0:25:55 If there was a smart way for musicians to monetize their voice being trained into the AI systems
    0:25:58 All the musicians would be on board but right now
    0:25:58 Yeah
    0:26:03 There’s no real smart way for them to make money if their voice is being used right but like yeah
    0:26:10 It got to a point where I can go and create my own variation of a Drake song with lyrics that I created. It sounds like Drake
    0:26:12 It’s good, people like it, and
    0:26:16 Whenever that music gets played, or when this song is generated
    0:26:18 I don’t know how the monetization would work
    0:26:23 But if there’s a way that, yeah, Drake made some money every single time that song got played
    0:26:26 He would be all for it, because now, yeah
    0:26:30 A million Drake songs can be made and he’s making money off of all of them
    0:26:35 But the problem is, right, these artists have no way of actually making money off of their voice being trained into these systems
    0:26:40 And I think a lot of these companies want to figure that out because if they can crack that code
    0:26:43 Yeah, how do we actually incentivize musicians to be a part of this?
    0:26:47 Musicians will probably be a lot more likely to be involved in it
    0:26:52 Yeah, I kind of feel like we’re gonna probably need like some future AI to help us figure out how to do that
    0:26:55 Yeah, you know, you probably don’t even know this, but, like, my last startup, Binded
    0:26:58 That’s, like, kind of what we were going after. So we were doing
    0:27:03 Attribution on the blockchain and trying to automate royalties and things like that. So we experimented with music
    0:27:06 We ended up doing images because, like, it was the easiest way to get started
    0:27:09 I went out to Washington, D.C. and met with people in the Copyright Office
    0:27:13 I spoke on a panel in Washington, D.C. with the Copyright Office, and
    0:27:20 Man, it is, like, hard. It’s very hard to, like, track these things and make sure it’s actually authentic, and then handle the payments
    0:27:22 You’ve got Spotify, right? And Spotify
    0:27:29 They’ve got some sort of model where every time a song gets played, the musician gets, like, a fraction of a penny or something
    0:27:32 But if you’re a popular musician getting millions and millions and millions of streams a month
    0:27:35 It adds up, they can make a living off of it
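The "fraction of a penny" point can be made concrete with rough arithmetic. The per-stream rate below (about $0.004) is a commonly cited ballpark, not an official Spotify figure, and real payouts vary by market and deal.

```python
# Rough per-stream payout arithmetic. PER_STREAM_USD is a ballpark estimate,
# not an official rate.

PER_STREAM_USD = 0.004  # a commonly cited fraction-of-a-penny estimate

def monthly_payout(streams):
    """Gross payout before labels, publishers, etc. take their cuts."""
    return streams * PER_STREAM_USD

# A hobbyist with a thousand streams earns a few dollars; an artist with
# ten million monthly streams clears tens of thousands.
print(monthly_payout(1_000))
print(monthly_payout(10_000_000))
```

That scale difference is exactly why per-play micropayments only work as a living for artists with very large audiences.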
    0:27:41 And I feel like a lot of people are sort of looking at Spotify, like, we’ve got to do something like that
    0:27:48 But I feel like it’s so much more complicated than what Spotify is trying to do, because if you train all these voices
    0:27:54 Into an AI and then somebody goes and creates a song, you know, how do you know exactly which voices?
    0:27:56 It’s pulling from to create this song
    0:28:01 Maybe the voice is a blend between Drake’s voice and Eminem’s voice and it’s like a hybrid now
    0:28:06 Do they get paid for that? Like there’s so much more like intricacies involved
    0:28:11 And do they get paid and does the publisher and all these other people get paid like who you know who all gets paid?
    0:28:19 And then, like, existing contracts they already have that may, like, prohibit those kinds of things. Honestly, I think it’s just gonna require new contracts
    0:28:28 Like a rethinking of that industry, essentially, right? Like, the music industry already had to reinvent itself when streaming came along, right?
    0:28:36 Everybody bought CDs, everybody bought albums. Now, do bands even make albums anymore, or do they just drop songs, right?
    0:28:39 Because, like, the music industry has changed so much
    0:28:44 I just feel like they’ve got to figure out what what is this next evolution because the change is going to happen
    0:28:49 It’s not like them putting out like 200 of them signing a letter is gonna stop anything
    0:28:54 It’s just sort of making their feelings known, but it’s not gonna stop anything. Yeah, man
    0:28:57 It’s kind of like with, like, Napster. Like, yeah, you know, they tried to stop Napster
    0:29:02 But that kind of technology, and torrents and everything else, kept evolving. And, like, you know, it’s hard to stop technology, right?
    0:29:09 And so, yeah, when there’s 10,000 Drake songs out there, AI Drake songs, like, what are they gonna do? Like, you know
    0:29:12 A million AI Drake songs. Yeah, are they really gonna be able to stop all of that?
    0:29:17 Am I gonna go pay to listen to a Drake song if I can use a tool and just generate a new Drake song like that?
    0:29:20 I’ve never heard before like right and I do wonder if it’s gonna lead to a world, too
    0:29:24 Where, like, you know, you almost like freeze culture in place, which I hope does not happen
    0:29:27 I hope you know culture keeps advancing and there’s new creative works
    0:29:32 I hope we don’t like freeze culture in place where it’s like, okay, you’ve got the Beatles you got Michael Jackson
    0:29:38 You got whoever and like you’re like replicating their voice to make new songs and like that’s like the famous songs for all of time
    0:29:40 No, I think it’ll keep evolving
    0:29:43 But I do think AI is just gonna be another tool in the mix, right?
    0:29:46 I think I think people are gonna figure out creative ways to use AI
    0:29:49 I think AI can be a great tool for like
    0:29:54 Musicians to collaborate with other musicians without the other musician needing to be involved. Like maybe
    0:30:02 Eminem goes in like produces a song and he wants, you know Drake to cameo on the song and
    0:30:05 Drake just doesn’t have the time well Drake could license his weights to Eminem
    0:30:09 Eminem can generate the clips that he needs from Drake work them into his song
    0:30:14 Yeah, Drake gets paid, Eminem just got Drake on his song, and they never had to meet up in person, right?
    0:30:18 I think there’s something there if they could just figure out all the logistics of it
    0:30:23 Yeah, and like bringing it back to AI voice and you could be like mixing all of this with your voice, too
    0:30:28 Right, you’d be like, yeah, change this part of the song, or add something here, add Drake in this part, and like doing a lot of that
    0:30:31 With voice that’s gonna be you know, that’s gonna be a new creative experience
    0:30:34 Which probably would be great because like people can stay more in the flow, right?
    0:30:38 It would be more creative like without having to like go manually touch all these tools
    0:30:41 Like you just kind of making the music and like, you know going with the flow
    0:30:47 So now that we’ve talked about AI voice and AI music and where all of this is headed you probably want to know
    0:30:48 All right, what are the takeaways?
    0:30:54 What can I do with this information? And there’s a few things when it comes to like the risks and dangers of AI
    0:31:02 One of the things I actually told my parents is that if they ever get a call from me or my wife or one of my kids
    0:31:07 We have a code word ask for that code word to make sure it’s really us
    0:31:09 I mean if I’m just called to say happy birthday
    0:31:10 You don’t have to ask for the code word
    0:31:15 But if I’m asking for money, if I’m, you know, saying there’s a problem where I was in a car accident and
    0:31:21 I need money or somebody was kidnapped and I need money or something that sounds really really out of the ordinary
    0:31:27 Ask me for the code word to verify that it’s really me, because that is
    0:31:32 Something that I think people should start doing because AI voice is only gonna get better and better
    0:31:35 So I think that’s like one of the things that you should really
    0:31:41 Really take away and another thing is I think you should go and use a lot of these tools
    0:31:44 I think you should try ElevenLabs. You should listen to the OpenAI voice stuff
    0:31:50 You should listen to the Suno music. I think the more you get immersed, like we’ve talked about
    0:31:57 The deeper we go down these AI rabbit holes the better we get at detecting whether this is AI or real
    0:32:05 It’s almost like you know back when Photoshop came out people had a hard time telling whether something was photoshopped or not
    0:32:10 But over time you see enough of it and now people can go okay. That looks like it was probably photoshopped
    0:32:16 I feel like the same kind of thing can happen with AI audio over time. You’ll probably get better and better at
    0:32:20 Detecting AI. That’s not to say AI is not gonna get better and at some point it will be undetectable
    0:32:24 But short term you should probably be using these tools hearing them
    0:32:30 Understanding how they sound and you will probably get better and better at seeing these little nuances or hearing these little nuances in the audio
    0:32:33 That give away that it’s made with AI
    0:32:38 Well, and I think the other key takeaway too, to circle all the way back around and bookend it to how it started,
    0:32:50 Is that all of this is going to turn to voice as opposed to typing. One thing people need to realize is, you know
    0:32:53 This technology is not just like a sci-fi movie like Her
    0:33:01 It’s here now, and you’ll probably see in the next six months to 12 months that the main way people are using AI is
    0:33:05 Voice, and so, you know, as a business leader, executive
    0:33:12 Employee, you should be thinking about how are you gonna be able to use these tools in a year from now with voice that you can’t currently with
    0:33:17 Text, and how is your life gonna look different when you can just talk to the AI and have it help do work for you?
    0:33:20 You know, you’ll even be able to do things like you know
    0:33:25 Create AI agents where you’ll be able to send off the agents to do a little task for you and command them by voice
    0:33:28 Right, so like that’s coming very very soon
    0:33:31 So, you know, be planning for that and you’ll be in a way better position
    0:33:37 Than people who have no idea this technology exists and if you have a business that has content online
    0:33:43 It’s really easy these days to make an audio version of that written content as well
    0:33:48 So I think a lot more people are going to also consume content via audio, right?
    0:33:51 I think the prompting is gonna be more audio-based, where we talk to Siri
    0:33:57 We talk to Alexa, we talk to these tools, and it sort of goes and does the prompt based on what we say to it
    0:34:03 But I also think the reverse is true where over time more and more people might consume their content that way as well
    0:34:05 So if you have a blog with written content
    0:34:11 Throw that content into ElevenLabs and have a podcast audio version that people can listen to as well
    0:34:18 Because now you just have another format that makes it more likely that somebody’s gonna consume the content that you just created
    0:34:23 So I think that’s another like take away for businesses listening to this is lean into this use this technology
    0:34:26 It’s actually a really cool way to make audio versions of your content
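    This blog-to-audio takeaway can be sketched as a small script: split a written post into chunks and feed each chunk to a text-to-speech API. A minimal sketch in Python follows; the ElevenLabs endpoint URL, request shape, `xi-api-key` header, voice ID, and the 2,500-character per-request limit are all assumptions here, so check the current API documentation before relying on them.

    ```python
    # Minimal sketch: turn a written blog post into an audio file via a
    # text-to-speech API. The chunking helper is generic; the ElevenLabs
    # endpoint, voice ID, request shape, and 2,500-character limit are
    # assumptions -- verify against the current API docs.
    import json
    import os
    import urllib.request


    def chunk_text(text: str, limit: int = 2500) -> list:
        """Split text on paragraph breaks so each chunk fits one TTS request."""
        chunks, current = [], ""
        for para in text.split("\n\n"):
            para = para.strip()
            if not para:
                continue
            candidate = (current + "\n\n" + para).strip()
            if len(candidate) <= limit:
                current = candidate  # paragraph still fits in this chunk
            else:
                if current:
                    chunks.append(current)
                current = para  # start a new chunk with this paragraph
        if current:
            chunks.append(current)
        return chunks


    def synthesize(chunk: str, voice_id: str, api_key: str) -> bytes:
        """Send one chunk to the (assumed) TTS endpoint; returns audio bytes."""
        req = urllib.request.Request(
            "https://api.elevenlabs.io/v1/text-to-speech/" + voice_id,
            data=json.dumps({"text": chunk}).encode("utf-8"),
            headers={"xi-api-key": api_key, "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            return resp.read()


    if __name__ == "__main__" and os.environ.get("ELEVENLABS_API_KEY"):
        # Hypothetical usage: reads blog_post.txt, writes one concatenated file.
        post = open("blog_post.txt", encoding="utf-8").read()
        key = os.environ["ELEVENLABS_API_KEY"]
        with open("blog_post_audio.mp3", "wb") as out:
            for piece in chunk_text(post):
                out.write(synthesize(piece, voice_id="YOUR_VOICE_ID", api_key=key))
    ```

    Chunking on paragraph boundaries keeps each request under a typical per-call character limit and preserves natural pauses in the generated audio.
    
    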
    0:34:32 Yeah, and I think, you know, people should be planning too for how they can use this internationally, right?
    0:34:37 Like this is gonna open up so many opportunities that your business can’t currently take advantage of
    0:34:40 You know think about like okay people who speak different languages
    0:34:46 You’ll now be able to have business meetings with them, or think about if you’re making videos or written content that you didn’t turn into audio
    0:34:48 You’ll be able to turn that into like a hundred different languages
    0:34:51 Right like what does that mean for your business?
    0:34:55 So everyone should be thinking about that right now, and hopefully we’ll do the same with the podcast
    0:34:59 Hopefully we’ll have this in Japanese in the next six months or something. That brings up another question: when you proposed
    0:35:03 Did you actually propose with the cell phone like a translator?
    0:35:08 I tried to not use it, and then it was necessary within 30 seconds
    0:35:12 But I tried my best. And the other thing I didn’t mention is, you know, my son
    0:35:17 He’s bilingual. He speaks English and Japanese, both perfectly, or at least for a 10-year-old
    0:35:20 And so occasionally, you know
    0:35:24 He helps translate, which I try to make sure he doesn’t do too often. It’s like kind of, you know
    0:35:30 Annoying for him, but he was there as well, like in another room. So I was like, well, worst-case scenario
    0:35:32 I’ll be like, you know, no
    0:35:38 Very cool. Well, AI is changing lives and building relationships. Yeah
    0:35:41 Exciting times we’re in, bringing loved ones together
    0:35:47 And yeah, okay, go watch the movie Her. I’ll go watch the movie Her. And on that note
    0:35:51 I think we can wrap this one up. Awesome. Well, thank you so much to everybody for tuning in
    0:35:58 Please like this video and subscribe to our channel if you haven’t already it really really helps get our podcast in front
    0:36:03 Of more people if there’s somebody that you know that this episode can be helpful for send them the link
    0:36:08 Let them tune into this episode. If you’re on a podcast player like Spotify or Apple
    0:36:12 Give us a subscribe and maybe even leave us a review. We really really appreciate it
    0:36:17 It helps spread the word of this podcast and shares this information with more people. So thanks again for tuning in

    Episode 3: Are you ready for speech A.I.? Because it’s here, not in 6 months or a year, but now. AI-generated voice technologies will have a major impact on many things in our daily lives. This technology will affect everything, spanning applications in language learning, music creation, emotional AI interfaces, and the evolving landscape of personal digital interactions.

    These tools are out there now, tools you can communicate with directly by voice, and there are things you need to be ready for. Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) dive deep into both the incredible potential and the inherent risks of this groundbreaking tech and how you can take advantage of it.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://link.chtbl.com/4FZET15d

    Show Notes:

    • (00:00) AI shift towards voice interaction, Hume’s empathetic EVI impressive.
    • (03:41) AI speculation for Siri; potential Apple-Google partnership.
    • (07:41) AI technology making it easier to deceive.
    • (09:52) Regulation provides bumpers for company behavior.
    • (13:12) Cryptocurrency regulations are uncertain, causing concerns for builders. AI regulations add complexities and potential drawbacks. However, voice AI offers valuable content creation tools.
    • (18:30) Learning language through translation enhances relationships and communication.
    • (21:52) AI music making advances, creating good songs.
    • (22:22) AI music tech advances creating music industry division.
    • (25:43) Spotify’s payment model raises complexities and concerns.
    • (30:10) Immersing in AI detection, improving AI audio discernment.
    • (31:57) Content also available in audio format.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • Greg Isenberg’s Step-By-Step Blueprint To Building A Successful AI Business

    AI transcript
    0:00:06 I’ll give you and folks listening the playbook of if I was starting an AI company, here’s
    0:00:13 how I would do it.
    0:00:20 Hey, welcome to the Next Wave Podcast.
    0:00:21 My name is Matt Wolfe.
    0:00:26 I’m here with my co-host, Nathan Lands, and we’ve got an excellent show for you today.
    0:00:31 So in the AI world, you’ve probably heard a lot that AI businesses have no moat.
    0:00:35 It’s really, really difficult to create a new business in the world of AI when everybody
    0:00:37 can create similar businesses.
    0:00:43 Well, on today’s show, we’ve got Greg Isenberg, and Greg has a completely different idea about
    0:00:44 how this is all going to play out.
    0:00:51 In fact, he gave us a step-by-step roadmap, a playbook on how you can build a successful
    0:00:53 business in this current world of AI.
    0:01:01 So let’s go ahead and just dive in with Greg now.
    0:01:02 This is going to be fun.
    0:01:07 I mean, right now we’re in this era where AI is sort of taking over all of the news cycles
    0:01:10 and everything has AI baked into it.
    0:01:15 Even all the products that don’t seem like they need AI have AI baked into them, and you’re
    0:01:16 doing some amazing stuff with AI.
    0:01:21 So it just, it seems like a good fit for you to be one of our first early guests on this
    0:01:23 show and talk about where all of this is headed.
    0:01:28 The interesting thing about AI is everyone from your barber to Jeff Bezos is interested
    0:01:29 in AI.
    0:01:30 Yeah.
    0:01:36 I recently got a haircut, and my barber just kept asking me about different AI tools
    0:01:39 that he was using; he wanted my feedback on them.
    0:01:44 And what Nathan and I actually have in common is, I think I tweeted something about AI,
    0:01:48 and he tweeted something about AI, and then within 24 hours, Jeff Bezos followed us
    0:01:49 both on Twitter.
    0:01:51 And then Matt also got followed off of that.
    0:01:52 Yeah.
    0:01:54 We’re all in an exclusive Jeff Bezos follows us club now.
    0:01:55 Yeah.
    0:01:56 Yeah.
    0:01:58 So we got to finally message him and actually get him to come on or do something with all
    0:01:59 of us.
    0:02:01 So that’s right.
    0:02:05 When all your marketing team does is put out fires,
    0:02:06 they burn out.
    0:02:11 But with HubSpot, they can achieve their best results without the stress, tap into HubSpot’s
    0:02:17 collection of AI tools, Breeze, to pinpoint leads, capture attention, and access all your
    0:02:19 data in one place.
    0:02:23 Keep your marketers cool, and your campaign results hotter than ever.
    0:02:30 Visit HubSpot.com/marketers to learn more.
    0:02:33 So Greg, how do you see the holding company concept?
    0:02:39 How do you decide what companies to either invest in or build with this whole narrative
    0:02:44 that these AI companies don’t have a moat, knowing that almost any company out there
    0:02:50 can use some of these existing resources to build the same software you’re building?
    0:02:54 How is the approach to what companies to invest in or build?
    0:03:00 So there’s this narrative that if you’re building on top of open AI, chat GPT, you’re
    0:03:05 this rapper that you have no moat, and therefore you haven’t built any value.
    0:03:06 And I think it’s wrong.
    0:03:07 I think it’s wrong.
    0:03:11 Because if you think of chat GPT and open AI, what is it?
    0:03:18 It’s basically an amalgamation of the craziest amount of worldwide data in one place.
    0:03:19 It’s incredible.
    0:03:27 It’s like a trillion times the Great Library of Alexandria, maybe even more.
    0:03:32 And I think that what people don’t get is, you know what’s another little startup that
    0:03:36 is a thin wrapper around the world’s information?
    0:03:37 It’s Google.
    0:03:40 Google doesn’t create any of the content.
    0:03:47 They are a wrapper around the internet, but they point people to pages within the internet.
    0:03:51 And it’s the greatest business model of all time.
    0:03:53 Even today, it’s a trillion dollar company.
    0:03:57 If you look at their financial statements, a huge chunk of their business still comes
    0:03:59 from Google search.
    0:04:05 I also think that there’s this great unbundling of chat GPT that is happening.
    0:04:08 Now, what unbundling is, there’s a great quote.
    0:04:11 I think Jim Barksdale said it, “There’s two ways to make money.
    0:04:14 You’re either unbundling or you’re bundling.”
    0:04:18 And what that means is, what does unbundling mean?
    0:04:24 Unbundling means, if you look at Craigslist, Craigslist was a marketplace for everything.
    0:04:26 You had the ability to post a job.
    0:04:29 That became Indeed.com.
    0:04:33 You had the ability to find a mate.
    0:04:37 That became Tinder and what Match is doing.
    0:04:42 Basically on the internet, you can’t really be everything to everyone.
    0:04:47 So there’s going to be a huge opportunity to look at what are the different use cases
    0:04:54 on the horizontal platforms, like chat GPT and what OpenAI is doing, and basically think
    0:05:02 about how can I apply a community-first approach, create what you might call a thin wrapper.
    0:05:04 But what I call is, this is dope.
    0:05:11 I don’t have to rebuild everything from scratch, and just focus on a particular use case.
    0:05:14 Now I want to give one quick example to that.
    0:05:17 I forget the name of the founder, PDF.ai.
    0:05:25 And when chat GPT introduced this idea around, you can now talk with a PDF, basically.
    0:05:29 Everyone basically assumed that this guy’s quote unquote “thin wrapper” was going to
    0:05:30 go to zero.
    0:05:34 And what he noticed is that revenue actually didn’t really go down.
    0:05:39 A lot more people now knew about chat GPT, there was news about it, they knew that you
    0:05:44 can use PDFs, and he had a purpose-built business focused on it.
    0:05:50 So I think you’re going to see more of the PDF.ai stories than you are going to see other
    0:05:51 stories.
    0:05:55 Yeah, I’ve always thought the critique on the thin wrapper thing was kind of bullshit.
    0:05:57 It doesn’t make a lot of sense to me.
    0:06:00 Almost every company out there is reliant on something.
    0:06:04 On some level, everybody’s reliant on the internet.
    0:06:08 Where I think the community approach that you’re doing makes a lot of sense, it’s probably
    0:06:14 a really unique advantage, is that with the new AI tools, and especially, let’s say GPT-5
    0:06:18 comes out and it’s even way more impressive than GPT-4, we’re going to get to a point
    0:06:24 where you’re going to be able to look at a SaaS website and say, “Hey, GPT-5, go build
    0:06:25 that for me.
    0:06:26 Make my own version for me.”
    0:06:27 That might be like a year away.
    0:06:32 And if you have a brand and a community, people who trust you, like you, I think they’ll keep
    0:06:36 doing business with you, whereas if you’re just like a soulless company that doesn’t
    0:06:40 have that, I think you’re at a great disadvantage in this next wave.
    0:06:44 Yeah, I want to expand on that, because I don’t think a lot of people are talking about
    0:06:48 the marketing community angle to these products.
    0:06:54 For the longest time, people were saying that energy drinks were actually a saturated market.
    0:06:58 And then out of nowhere, what’s like the best performing stock of the last 12 months, or
    0:07:04 one of the best, is Celsius Holdings, and it’s like a $15 billion company.
    0:07:11 It’s an energy drink that originally targeted women, because in their research and insight,
    0:07:17 they noticed that if you look at Monster Energy and all these energy drinks, it was very male-dominated.
    0:07:26 Now, what’s the difference from a formula perspective of a Monster versus a Bang or whatever?
    0:07:27 Probably not that different.
    0:07:29 Same with beer.
    0:07:33 What’s the difference between a Miller Lite and a Coors Light?
    0:07:39 Now, a lot of people listening are going to be like, “No, I’m a Miller girl, or a Miller
    0:07:45 guy, or I’m a Coors Light guy,” but that’s identity playing into it.
    0:07:48 In your identity, you’re like, “I’m that type of person,” and I think you’re going
    0:07:53 to see that identity-based consumption model happen with AI startups too.
    0:07:55 I’m a Gemini person.
    0:07:57 I’m not a chat GPT person.
    0:07:59 Was it PDF.ai?
    0:08:04 What would be some advice you would give to that company as far as building a community
    0:08:05 around it?
    0:08:09 How do you build a community around a tool that essentially just helps you understand
    0:08:10 PDFs better?
    0:08:11 Yeah.
    0:08:19 I think the issue with PDF.ai is it’s niche, not what I call super-niche.
    0:08:21 Step one is you’re horizontal.
    0:08:23 You have no niche; you’re chat GPT.
    0:08:25 Step two is you’re niche.
    0:08:31 You’ve picked PDF as your category, and then step three is micro-niche, which is you’re PDF
    0:08:35 for lawyers, you’re PDF for accountants.
    0:08:39 What I would be doing is be focusing on the community piece if I was him.
    0:08:45 What are the different communities that need PDFs the most, and try to retrofit it for
    0:08:47 those people?
    0:08:52 Those people, like accountants or lawyers, for example, will have a set of pain points
    0:08:56 that are going to be different than me or you, just because we’re different.
    0:09:03 Also, they just might just trust you more if it’s called legal PDF as well, which is
    0:09:04 the beer example.
    0:09:08 They might just trust you because they like the name, which also is a thing that not many
    0:09:17 people talk about in startups, which is names actually play a—and you know this, Nathan,
    0:09:24 Mr. Lore, but names actually play a bigger role in how people actually connect with products
    0:09:26 than people like to think.
    0:09:27 Yeah, totally.
    0:09:28 I think you’re right.
    0:09:30 Especially in the age of AI, that’s going to matter more and more.
    0:09:34 Having a name and a brand that people trust that they feel connected to, so they keep
    0:09:39 going back and doing business with that company versus the random company that just sprouted
    0:09:41 up out of nowhere and just copied you with AI.
    0:09:47 You bring up a good point around, if everyone could basically press the duplicate button,
    0:09:50 how do we think about building companies in that world?
    0:09:51 It’s a really good point.
    0:09:56 I’ve thought about it a lot and I think that the good side of it, by the way, is that it’s
    0:09:59 going to be really easy to spin things up.
    0:10:03 You could see a viral tweet and then be like, “Wow, this is really cool and you can be living
    0:10:07 in Sri Lanka and you can be 14 years old and you can press the duplicate button, and then
    0:10:14 you can pay for X blue and you show off and you reply and you get a million downloads.”
    0:10:16 Your life could change overnight.
    0:10:22 The downside of that coin is that competition is going to 100X.
    0:10:27 The amount of software that’s going to be created is going to be 10 to 100X minimum.
    0:10:34 You saw that in content land with tools like TikTok. I was an advisor to TikTok for a few
    0:10:40 years and it was crazy seeing the amount of content.
    0:10:41 You see it.
    0:10:42 I’m sure you’ve seen it, played with TikTok.
    0:10:48 There’s the amount of content on all these platforms, TikTok, YouTube, Instagram.
    0:10:50 You give people easier tools to create things.
    0:10:51 They’re going to create things.
    0:10:52 That’s what’s going to happen.
    0:10:53 I’d love to talk about it with you guys.
    0:10:59 How would you navigate this world where everyone’s given the duplicate button?
    0:11:00 Greg, that’s a big reason.
    0:11:04 I thought you’d be a great guest on the show, honestly, because I think your approach is
    0:11:05 right on.
    0:11:06 You’re building a personal brand.
    0:11:09 I think in the age of AI, that’s really important.
    0:11:10 You’re an actual person.
    0:11:14 People like you and you can leverage that across various companies that you build.
    0:11:16 I think that approach is right on.
    0:11:21 The community aspect, having people involved in the actual product so they like you and
    0:11:25 they feel engaged and actually you adjust the product based on feedback and talking
    0:11:27 with them, I think is huge.
    0:11:30 Also, you’ve got the innovation agency as well.
    0:11:31 That’s another one.
    0:11:35 We’re like, yeah, in this new age, companies have to be thinking about, “Oh, I don’t just
    0:11:39 have this one product and people are going to use this same product for 100 years.”
    0:11:43 Somebody may come and copy it and I think that I have a better brand, but maybe all
    0:11:47 of a sudden they’re cool and everybody switches over.
    0:11:49 You have to continue innovating and trying new things.
    0:11:53 I think the companies that are really going to be the big companies in the next 100 years
    0:11:58 are the ones that build innovation into their company and they’re constantly innovating and
    0:12:00 spinning out new projects.
    0:12:05 I run this site called Future Tools, which curates all of the latest AI tools that come
    0:12:06 out.
    0:12:10 There’s a submission form on the site where people submit their tools to me.
    0:12:15 I review them with a team of a couple extra people and the tools that I think are cool
    0:12:16 make it onto the site.
    0:12:18 I’m truly trying to curate now.
    0:12:24 The problem that I tend to have is that every single day, 13 of the exact same tool pops
    0:12:25 onto the site.
    0:12:31 Almost every single day, I see a tool that’s like, “Here is your AI copywriter.
    0:12:34 Use this to write your sales copy for your business.”
    0:12:36 Here’s an AI blog writer.
    0:12:40 Every single day, there’s 11 of those submitted to the point where I can’t even tell the
    0:12:44 difference and a lot of times it’s like a completely different company, but I swear
    0:12:50 I’ve seen that site, that UI, that page before.
    0:12:54 From the perspective of an admin, where I’m seeing all these tools being submitted, to me it
    0:13:00 feels like this AI world has gotten to a place where everybody is already cloning everything
    0:13:01 else.
    0:13:02 “Oh, that tool worked.
    0:13:03 Let’s clone it.
    0:13:04 Somebody else clones it.
    0:13:05 Somebody else clones it.”
    0:13:11 It gets to a point where the first movers, the ones that put out the product first, are
    0:13:15 still the ones that people talk about, but then there’s just a whole bunch of junk that
    0:13:17 followed behind it.
    0:13:25 I do think that being an early mover is important, but then I also think about the big incumbents.
    0:13:31 You’ve got the Googles, the Adobe’s, the Microsoft’s, companies like that.
    0:13:35 I feel like we might be moving into this world where whenever something works really, really
    0:13:39 well as a small SaaS company, one of those big companies is either just going to acquire
    0:13:45 it or build it in and then wipe out pretty much everybody else that was doing that already.
    0:13:51 I mean, we were talking even before we hit record about the new open AI video model.
    0:13:55 It’s like, when that gets released to the public in a single day, that’s already better
    0:14:00 than Runway, than Pika, than AnimateDiff, all of that stuff.
    0:14:06 There’s already a better option out there, so now what do we do with those now?
    0:14:11 We’ll be right back, but first, I want to tell you about another great podcast you’re
    0:14:12 going to want to listen to.
    0:14:16 It’s called Science of Scaling, hosted by Mark Roberge, and it’s brought to you by
    0:14:22 the HubSpot Podcast Network, the audio destination for business professionals.
    0:14:27 Each week, host Mark Roberge, founding Chief Revenue Officer at HubSpot, senior lecturer
    0:14:32 at Harvard Business School, and co-founder of Stage 2 Capital, sits down with the most
    0:14:37 successful sales leaders in tech to learn the secrets, strategies, and tactics to scaling
    0:14:38 your company’s growth.
    0:14:44 He recently did a great episode called How Do You Solve for a Siloed Marketing and Sales,
    0:14:46 and I personally learned a lot from it.
    0:14:48 You’re going to want to check out the podcast.
    0:15:12 Tune into Science of Scaling, wherever you get your podcasts.
    0:15:25 The business is called BoringMarketing.com.
    0:15:28 We started off as a Twitter account, literally.
    0:15:30 Anyone could sign up for a Twitter account.
    0:15:32 It’s free, more than us.
    0:15:40 We called it @BoringMarketer, and we created a character behind this idea around people
    0:15:45 talk about boring businesses, but not that many people talk about boring marketing.
    0:15:47 Step one is you build a character.
    0:15:51 You start with a character that people are going to connect with, or maybe you are that
    0:15:53 character.
    0:15:58 For me, I like being that character, or Nathan likes being that character.
    0:16:02 We just started talking about boring ways to grow your internet business.
    0:16:07 People kept asking us about SEO, SEO, SEO.
    0:16:14 Then we looked at the market and we realized that there was an opportunity to create AI-assisted
    0:16:16 tools to do SEO.
    0:16:20 We started using those AI tools on our own products.
    0:16:21 That’s another benefit of having a holding company.
    0:16:26 You have this portfolio of companies that you could dogfood your product through.
    0:16:30 The good news of it is we knew that people wanted it because they were telling us that
    0:16:31 they wanted it.
    0:16:37 We built this community of about 10,000 people, and then we took that, those AI tools.
    0:16:45 We wrapped it around a service agency to help people implement and create content.
    0:16:46 People started seeing results.
    0:16:49 It started driving word of mouth.
    0:16:53 Now, all of a sudden, fast forward, it hit a seven-figure run rate within four or five
    0:16:57 months, and now it’s a really profitable engine.
    0:17:03 Now we’re moving towards, by the time this is out, boringads.com will be out.
    0:17:05 No brand is attached to a lot of this.
    0:17:08 That’s a quick run rate.
    0:17:13 Is it because you had a springboard with your own personal brand, or do you think it’s
    0:17:15 due to the merits of the products themselves?
    0:17:21 I’m just going on Boring Marketer right now, the Twitter account, 100, 200, 300 likes per
    0:17:22 tweet.
    0:17:26 This is boring SEO stuff.
    0:17:32 The cool thing about having one personal brand with a few thousand people, you can give
    0:17:40 seed capital, social capital to another account, and be like, “Hey, I just started at Boring
    0:17:41 Marketer.
    0:17:42 Check it out.”
    0:17:43 Yeah.
    0:17:47 That gets the first 2,000 or 3,000 followers, and then from that, you try to get it to a
    0:17:52 point where it starts off as a baby, and then gets to a toddler, then a child, and then
    0:17:53 now it’s an adult.
    0:18:02 I think, how helpful was it for me to be behind it initially? Helpful, but do I think it…
    0:18:08 My take on Twitter specifically is anyone can get to 10,000 followers if you just set
    0:18:12 up tweet notifications and write thoughtful replies.
    0:18:16 You could have started at Boring Marketer, it probably would have taken you four months
    0:18:17 to get there.
    0:18:22 I was able to do it in four days, but I don’t think that should be a stopping point for people
    0:18:24 who want to do something like this.
    0:18:26 I wanted to talk about the agency business model.
    0:18:31 I know that was one of the notes that we had, Nathan, that we wanted to dive into.
    0:18:35 Do agencies still exist several years down the road?
    0:18:44 I mean, if ChatGPT becomes our consultant, and we can prompt any website into existence
    0:18:46 in a year or two from now.
    0:18:56 I think low- to mid-level agencies get commoditized to basically nothing or very little.
    0:18:57 You’re right.
    0:19:02 In a world where you just prompt something to create a website or a landing page or a
    0:19:06 marketing video, of course, that’s tough.
    0:19:09 I think there’s millions of people who work in that space that are…
    0:19:10 That’s tough.
    0:19:12 It is a tough place to be.
    0:19:21 Do I think that McKinsey is going anywhere, or Bain, or IDEO? We have an innovation agency
    0:19:27 where we work with Fortune 500s and some fast-growing startups like Jasper AI and people like that.
    0:19:30 Do I think we’re going anywhere on the innovation side?
    0:19:32 Absolutely not.
    0:19:35 People forget that you need taste.
    0:19:43 This is going to be the most sought-after skill set for the next at least 10 years.
    0:19:44 This is somewhat controversial.
    0:19:49 I think engineering was the most sought-after skill set of the last 10 years.
    0:19:52 I could say that I studied computer science.
    0:19:58 I’m a trained engineer, but I think that if you have good taste and you know how to prompt
    0:20:03 and you know how to come up with great ideas, I’ll give you an example.
    0:20:09 We came up with a really big idea for a Fortune 100 company to shift their entire business.
    0:20:13 If you’re the CEO of that company, you’re not just going to prompt an idea generator
    0:20:14 to do that.
    0:20:18 You’re going to want to outsource that thinking to someone that you can trust, and that’s
    0:20:20 where taste comes in, right?
    0:20:21 So I think that…
    0:20:28 Another sort of example along the same lines is: I’ve got a YouTube channel.
    0:20:31 All of my thumbnails for my YouTube channel are created with AI.
    0:20:35 I’ve got like an AI model with my face trained into it.
    0:20:37 I use Midjourney for backgrounds.
    0:20:41 I actually still have a thumbnail designer on my team.
    0:20:46 He uses AI for me because he knows what thumbnails look better than I do, right?
    0:20:52 So he can actually use these tools, DALL·E, Stable Diffusion, Midjourney, and I mean,
    0:20:54 I can create amazing images with those.
    0:20:58 He creates really amazing thumbnails, pulls them all together, and still has a much better
    0:21:01 design eye than I have.
    0:21:05 So I still hire somebody to use AI for me to make the thumbnails.
    0:21:06 Yeah.
    0:21:07 Yeah.
    0:21:10 I think one thing there, though, is a lot of grunt work, I think, will be kind of replaced
    0:21:11 with this, right?
    0:21:15 It’s like the guy who’s making the thumbnails, he has great taste, but if it’s somebody who
    0:21:20 is doing something just very repetitive without a lot of thought put into it or requiring
    0:21:23 taste, a lot of that is going to get automated away.
    0:21:24 Yeah.
    0:21:27 And I think in a lot of ways, too, what you were saying, it sort of levels the playing
    0:21:28 field as well.
    0:21:33 So yes, there’s going to be like non-skilled people that might struggle a little bit, but
    0:21:38 those people in that sort of middle class realm, they also have these tools at their
    0:21:41 disposal to sort of take them to the next level.
    0:21:45 If you want to learn how to code, it’s easier than ever right now to learn how to code.
    0:21:49 If you want to learn how to be really good at graphic design, it’s easier than ever to
    0:21:51 learn how to be really good at graphic design.
    0:21:55 So I feel like the people that are sort of in that middle lower end, I know this is like
    0:22:00 a buzzword, but it sort of democratizes the information to get you there, right?
    0:22:06 Now it’s a lot easier to go from zero to one with the tools that are available out there.
    0:22:08 I think those people are at a crossroads.
    0:22:13 They can either see the knowledge and see the tools and be like, I’ll get to you later
    0:22:16 or they can get their hands dirty.
    0:22:20 I’m kind of convinced this stuff is going to get 20 to 50% better every year for the
    0:22:21 next five years.
    0:22:26 So that’s coming soon, that this is going to be able to help you plan out your life
    0:22:29 and like an action plan of things you could be doing to make your life better.
    0:22:33 So I think that’s going to be such an unlock for so many people.
    0:22:35 There’s this sort of narrative going on.
    0:22:40 We were talking about the AI video generators, like OpenAI just came out with theirs.
    0:22:44 And there’s this narrative that I just absolutely hate that I see all over Twitter that’s like,
    0:22:46 “Oh, it’s the end for directors.
    0:22:47 Oh, it’s the end for filmmakers.
    0:22:49 Look what you can do now.”
    0:22:50 And I can’t stand that narrative.
    0:22:54 I don’t like the narrative of any time somebody goes on Twitter and makes a blanket statement
    0:22:57 of this AI is killing this industry.
    0:23:03 What I feel right now is that it levels the playing field, but it also sort of raises
    0:23:04 the playing field.
    0:23:11 It makes everybody that is sort of making videos, making art sort of uplevel their game
    0:23:16 because now what used to be something people thought was really good, now anybody can do
    0:23:17 it.
    0:23:22 But people who were good at that now need to up their game and get even better at that thing because
    0:23:25 pretty much everybody can do the original thing.
    0:23:31 And so I feel like that is kind of what we’re seeing right now, especially with like AI video
    0:23:37 and AI image creation is just this leveling the playing field for what used to be considered
    0:23:41 good, but also raising the bar for what is really amazing now.
    0:23:47 I feel like it gives creatives, like the video creators, the art creators, superpowers, right?
    0:23:52 Like if you’re really, really good at art and now you have AI at your disposal to make
    0:23:56 art even better, you’re still going to be ahead of the game of the person who just learned
    0:23:57 AI art.
    0:24:00 So yeah, let’s talk about that.
    0:24:05 So Sora, which is the model I think you’re talking about, is really interesting because
    0:24:11 you can essentially prompt it to create, let’s just say I wanted to create an animation film
    0:24:16 and right now I think it’s only up to one minute, one minute of film, but I’m sure in
    0:24:20 the future you’ll get access to 90 minutes.
    0:24:26 And I wanted to create a film that essentially was like a Disney, a Disney film, but with
    0:24:30 my own script, like that is in the realm of possibility in the future.
    0:24:37 So anyone could be a Walt Disney, basically. And you can make that assumption if you have
    0:24:38 good ideas, right?
    0:24:41 If you’re going back to the taste thing, you know you have good taste and you can write
    0:24:45 well and you have the skills to do that, then what is mispriced?
    0:24:48 The mispriced piece is the distribution.
    0:24:53 The only place that you can’t compete with Disney is in distribution.
    0:24:57 They’re in tens of thousands of movie theaters, hundreds of thousands of screens, across
    0:24:58 the world.
    0:25:01 They’re on Netflix, like they’ve done these deals.
    0:25:03 They’re taking out Super Bowl ads, stuff like that, right?
    0:25:10 So that’s where I think that people are mispricing creators and they’re mispricing distribution
    0:25:12 in general.
    0:25:18 So if I’m listening to this, and if I’m you, like, the thing to
    0:25:20 do is two things.
    0:25:23 One is you build a distribution lane.
    0:25:31 So you do everything you can to build as much distribution, credibility and trust as possible.
    0:25:37 And then the other lane is playing, that’s the best way to describe it.
    0:25:42 Like you play with the tools and you play to learn, but you have to pick.
    0:25:46 You can’t just be like, I’m going to be the video person and the audio person and then
    0:25:47 the writer person.
    0:25:49 And then you have to, like, pick a lane also, right?
    0:25:55 Like you have to decide, like, oh, my dream is to be the next Walt Disney,
    0:25:56 for example.
    0:25:57 I spent a little bit of time in Hollywood.
    0:26:00 So for like about a year and a half, I was partnered with Barrie Osborne.
    0:26:04 I think Greg, maybe I told you a little bit about this, but we, you know, Lore.com originally,
    0:26:08 the reason I bought the fancy domain is because I was partnered with the producer of Lord of
    0:26:09 the Rings.
    0:26:11 I was like, we’re going to make this new kind of movie studio together.
    0:26:13 And it was this crazy dream of mine.
    0:26:19 And so I got to spend time on the set of Mulan out in New Zealand, you know, Disney set and
    0:26:22 got to meet all these crazy people out in Hollywood and New Zealand.
    0:26:26 And one thing you realize is it’s so hard to break into that industry, right?
    0:26:28 And like, you really have to know people and things like that.
    0:26:31 And it does feel like with this new technology, like so many more people are going to get
    0:26:35 discovered because they can show their concepts to Netflix or
    0:26:36 whoever, right?
    0:26:40 Like right now, the people who pitch Netflix, they’re coming in with storyboards and things
    0:26:43 like that and a team.
    0:26:47 And a big part of what sells it is the storyboard and the concept
    0:26:48 and the research around it.
    0:26:51 And before, you had to have a whole team to do that.
    0:26:56 And now like somebody who has good taste, they can have those ideas, produce the storyboards
    0:27:01 or a short video concept. Maybe that’s not going to be the final film, but
    0:27:06 they can produce some one-minute or 90-minute video and say, hey, here’s not the final film,
    0:27:08 but you get the gist of it.
    0:27:11 Here’s the gist of this idea I have helped me make it.
    0:27:15 And I think a lot of people like Netflix will write checks, like large checks to come in
    0:27:17 and produce those films.
    0:27:19 That’s going to be exciting for so many people.
    0:27:23 So you think that, like, AI video is the new storyboard? Instead of making a storyboard,
    0:27:25 here’s just a mock of the video.
    0:27:26 I think there’s so many opportunities around that.
    0:27:30 I think you’re going to see, like, talent agencies too, where they’ll, you
    0:27:33 know, pitch the studios with their talent, like, just put our talent’s
    0:27:35 face into the concept.
    0:27:38 Like, here’s how this guy or girl that we have would look in this
    0:27:42 role, and then you just show it. You know, there’s like so many areas in entertainment
    0:27:44 that are going to get changed by this.
    0:27:48 I also think that the leverage that you’re going to have with the Netflixes of the world
    0:27:58 is going to be, oh, I’ve got 50,000 or 100,000 followers and I’ve already posted this IG
    0:28:03 clip or whatever and I got 20,000 shares and they’re loving it and they’re banging
    0:28:05 on the door and they want more of it.
    0:28:09 So I think that’s why I always go back to the distribution piece, because I think
    0:28:14 that if everyone has the ability to create essentially these Disney-style
    0:28:19 levels of films and storyboards, you have to have some point of
    0:28:20 leverage.
    0:28:21 Yeah.
    0:28:25 I mean, when you look at like the music industry and even like the book industry, right?
    0:28:30 Among authors, when you go to get like a publishing deal for books or if you go to get like a,
    0:28:35 you know, a music production deal from these, these music studios, that’s kind of the thing
    0:28:38 they’re looking at these days is how big of a following do you have, right?
    0:28:42 It’s a heck of a lot easier to get a book publishing deal if you have a million subscribers
    0:28:48 on YouTube and, you know, 1.5 million followers on X, you know, it sounds like that might
    0:28:51 be the future of video as well.
    0:28:56 If the sort of creation of video is completely democratized and anybody can make them, that’s
    0:28:59 sort of the, the next level is what sort of distribution do you bring to the table on
    0:29:01 your existing platforms?
    0:29:04 I think that’s a really interesting way to look at it, if I’m interpreting what
    0:29:05 you’re saying, right?
    0:29:10 I mean, you saw that a little bit with music, you know, 10, 15, 20 years ago, even when
    0:29:15 YouTube came out, like Justin Bieber was a YouTube creator, you know, we didn’t call
    0:29:18 him that, but he posted videos on YouTube.
    0:29:24 He was found by Scooter Braun on YouTube and then he became Justin Bieber, one of the
    0:29:28 biggest musical acts in the world, from YouTube.
    0:29:34 We don’t have that really with movies, for example, like the way you become big in movies
    0:29:41 is you go on a more traditional path, like you go to USC and study film, and then you
    0:29:46 get an internship at, you know, X, Y, Z. You know, I don’t even know, because I don’t know
    0:29:50 the film business, but all I know from what I hear is that it’s a very
    0:29:55 linear path where you have to check the boxes.
    0:30:00 What this does is leave only two boxes to check: you know the tools and you have distribution,
    0:30:02 is what I’m saying.
    0:30:05 And that you can use the tools to get distribution too is a beautiful thing, right?
    0:30:10 Like you could be somebody in Quebec or whatever, like in a small town who starts producing films
    0:30:15 or AI art or whatever on Twitter and get an audience and then go out and try to take the
    0:30:19 next level and actually, you know, partner with some big company to actually make it.
    0:30:20 Absolutely.
    0:30:25 I also think this is kind of a little bit of a different topic, but I think there’s a
    0:30:33 huge opportunity to create content for different geographies and languages and cultures.
    0:30:34 Yes.
    0:30:39 I’m interested to see how AI plays a role in that so that like, you’re not just lip dubbing
    0:30:44 different videos, for example, like what if in France you’re wearing like a beret or something
    0:30:49 and you know, and then the video in the US, maybe you’re not wearing a beret, right?
    0:30:52 You’re wearing a baseball cap or something like that.
    0:30:58 It’s like, what are these little nuanced things that you can add to the content so that it
    0:31:01 feels more, oh, this is for me.
    0:31:03 This is for people like me.
    0:31:04 And I think that’s really interesting.
    0:31:10 And I think that the greater trend to that is all media will be personalized.
    0:31:14 I think that’s the really cool part about where we are in the cycle and in this
    0:31:15 AI world.
    0:31:18 It’s like, it’s not just an AI world.
    0:31:26 It’s like, there’s a lot of massive trends happening right now like AI, virtual reality.
    0:31:27 Yeah.
    0:31:28 Robotics is just starting as well.
    0:31:29 Robotics is just starting.
    0:31:34 I saw YC came out with their like, here’s the things that we’re really interested in.
    0:31:35 I don’t know if you saw that.
    0:31:36 Yeah.
    0:31:37 I saw it yesterday.
    0:31:38 Yeah.
    0:31:41 It’s like, you know, applying machine learning to robotics, using machine learning to simulate
    0:31:46 the physical world, new defense technology, bring manufacturing back to America, new space
    0:31:51 companies, climate tech, commercial open source companies, spatial computing, new enterprise
    0:31:57 resource planning software, developer tools inspired by existing internal tools, explainable
    0:32:03 AI, LLMs for manual back office processes in legacy enterprises, AI to build enterprise
    0:32:10 software, stablecoin finance, a way to end cancer, foundation models for biological systems,
    0:32:15 the managed service organization model for healthcare, eliminating middlemen in healthcare,
    0:32:20 better enterprise glue, and finally, small, fine-tuned models as an alternative to giant
    0:32:21 generic ones.
    0:32:23 There’s a lot of stuff going on.
    0:32:27 And people were saying that part of that whole list was, oh, you know, it’s the end of the
    0:32:30 trend with SaaS or something like this.
    0:32:35 I think YC is like seeing the writing on the wall that like, dramatic changes are coming,
    0:32:36 like dramatic changes.
    0:32:40 And so yeah, we should be looking for more moonshots, big things, and now the technology
    0:32:41 is making it possible.
    0:32:47 But also, you know, probably from their perspective, from an investment standpoint,
    0:32:51 like the companies that they would have invested in before, a lot of those now can probably be
    0:32:55 built with like one or two people, and they’re not going to need as much capital, you know.
    0:32:57 And Greg, you talk a lot about this kind of stuff.
    0:33:01 But so I think they’re looking like, okay, yeah, investors and VCs are probably going
    0:33:05 to focus more and more on the companies that are really going to need a lot of capital, right?
    0:33:08 People that are trying to solve cancer, people trying to build robotics and things like
    0:33:12 this, and, you know, simulating the world with AI, like very ambitious
    0:33:17 things that are going to require lots of capital. Because the other ones are just like, God, like
    0:33:21 in a year or two, and especially, you know, I’ve been hearing from friends in San Francisco
    0:33:25 and a lot of them are like connected to the whole YC kind of network, really positive
    0:33:27 things about GPT-5.
    0:33:31 I assumed, you know, since, you know, Sam Altman’s connection to YC, a lot of those
    0:33:34 people know the ideas of what’s coming.
    0:33:38 And so they’re probably realizing like, yeah, in a year or two from now, a lot of those
    0:33:42 simple SaaS tools, if you don’t have a great distribution or a brand, they’re not going
    0:33:43 to make a whole lot of sense.
    0:33:48 And so I think that’s part of the list, too, is that like personally, I’m happy that people
    0:33:53 are working on those hard problems, but I personally prefer working on easy problems.
    0:33:56 The one to n or whatever, versus zero to one.
    0:33:57 Yeah.
    0:34:03 Yeah, I think it’s like, I’m a thin wrapper guy, you know, and I did the whole Silicon
    0:34:04 Valley thing.
    0:34:08 Like nine years there, I’ve been a part of companies that have raised billions of dollars.
    0:34:16 Like, I could say I did it, but man, am I glad to be on the other side and creating
    0:34:22 thin wrappers with deep communities. Like, that should be the title of this, something
    0:34:23 like that.
    0:34:27 It’s like thin wrappers and deep communities, like that’s the strategy if you want to
    0:34:29 win in AI, like that’s literally the strategy.
    0:34:33 But before we do wrap up, I do want to ask about You Probably Need a Robot.
    0:34:35 Could you quickly explain that to me?
    0:34:40 Because I was asking Nathan about it and, you know, I figured the best person to ask
    0:34:43 about it and get the explanation from is Greg himself.
    0:34:45 So could you tell me a little bit about what that is?
    0:34:47 I start all businesses with this framework in mind.
    0:34:51 We call it the ACP funnel, audience community product.
    0:34:55 So like I was talking about with boring marketing, we started with the Twitter account.
    0:34:57 We did the same thing with You Probably Need a Robot.
    0:34:59 This is like over a year ago.
    0:35:00 Yeah.
    0:35:01 Over a year ago.
    0:35:06 AI obviously wasn’t as big as what it is today.
    0:35:11 And we created a Twitter account where we just share productivity tips with, like we
    0:35:13 use this tool and here’s what we learned.
    0:35:19 And then we opened up a discord and then we got like 20,000 people to join the discord,
    0:35:22 maybe more, 25,000.
    0:35:24 And we saw that there was demand.
    0:35:28 And then we, of those people, we turned it into a newsletter.
    0:35:31 And then every week we would give people like deep dives around.
    0:35:33 Here’s how we’re using AI this week.
    0:35:40 So not like AI news or anything like that, but just like here’s things that we’re learning
    0:35:44 to make you more productive in your business career.
    0:35:47 And it’s evolved to other things too.
    0:35:55 We created like a deal pass, which is very popular, where you pay like 48 bucks.
    0:36:01 And like you get discounts on like all the major AI tools or a lot of them like My Mind
    0:36:07 and Drippy AI, Framer and barely.ai, Apollo.
    0:36:09 So a lot of things like that.
    0:36:13 And so we’ve been able to monetize really well through this audience community product.
    0:36:18 And then we also, you know, because we run a lot of agencies, we also help companies
    0:36:21 transform their businesses to be AI-first.
    0:36:24 The reason I wanted to, yeah, I just wanted to share that structure because I think that
    0:36:27 that same structure, ACP could be helpful for other people.
    0:36:31 Nathan, anything else you want to ask or add before we wrap this up?
    0:36:34 Yeah, I really appreciate you coming here and it’s been awesome.
    0:36:39 Thank you for your time and I’m excited to see where this goes and you’re doing great.
    0:36:39 Appreciate it.
    0:36:49 [Music]

    Episode 2: Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) are joined by serial entrepreneur Greg Isenberg (https://twitter.com/gregisenberg), founder of Late Check Out and AI powered startups like @boringmarketer and @youneedarobot.

    This episode goes step by step through Greg’s process of creating successful AI businesses, and how you can do the same from scratch TODAY. Greg covers his ACP framework: Audience/Community/Product, leveraging communities and personal brands to grow your business, and where AI fits into your strategy.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://link.chtbl.com/4FZET15d

    Show Notes:

    (00:00) Using OpenAI GPT for value creation is misunderstood.

    (07:49) PDF AI’s challenge is niche focus and community targeting.

    (12:43) Crucial to be early movers, because big companies dominate the market.

    (16:04) Building social capital through your personal brand can be very effective and valuable.

    (18:21) Taste trumps engineering in sought-after skills.

    (21:23) AI tools will not destroy industries, they level up the playing field. If you’re good, you will be even better with AI (e.g. video generators)

    (24:44) Nathan’s experience working with Hollywood execs. AI video generators like Sora will give access to many filmmakers who otherwise would have difficulty breaking through.

    (27:09) Industries seek those with large followings.

    (31:22) Anticipating dramatic changes in technology and investment.

    (34:03) ACP Framework: Greg started a Discord that grew into a newsletter that led to offers and deals.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • How Perplexity.ai Is Disrupting The Future Of Search

    AI transcript
    0:00:08 This is a completely new experience like, okay, large language models can help you build new search experiences that was never possible before.
    0:00:14 Our mission is to really transition the world from links to answers and build the ultimate knowledge app.
    0:00:18 Hey, welcome to the Next Wave podcast.
    0:00:19 My name is Matt Wolfe.
    0:00:23 I’m here with my co-host Nathan Lanz, and we are your chief AI officer.
    0:00:29 It is our goal with this podcast to keep you in the loop with all the latest AI news, the coolest AI tools.
    0:00:36 And set you up for success in this next wave that we’re entering into with the world of AI and technology.
    0:00:38 Today, we’ve got an excellent show for you.
    0:00:44 We’ve got the founder and CEO of Perplexity on the show, Aravind Srinivas.
    0:00:53 We had a fascinating conversation with Arvind, and he told us his story, how he went from growing up in Chennai to moving out to California and going to university at Berkeley.
    0:00:55 He’s worked at Google DeepMind.
    0:00:56 He’s worked at OpenAI.
    0:01:09 He’s got a pretty impressive resume, and now Perplexity is one of those companies that all the venture capitalists in Silicon Valley are chasing after and just throwing a ton of money at because they love the idea.
    0:01:12 And we want to dissect that and break that down in this episode.
    0:01:14 It’s probably the best way to do research with AI.
    0:01:16 Actually, I’ve started using it to do research for the episodes.
    0:01:25 Yeah, when you look at the current AI landscape, right, you’ve got these large language models like GPT-4 and Anthropic’s Claude and Gemini.
    0:01:32 And you’ve got all of these large language models that are sort of put in the form of a chatbot, like we see with ChatGPT and Claude, right?
    0:01:41 Perplexity took a little bit different of an angle on it, and they wanted to make sure that, A, they were searching the web, and B, they were citing all of their sources.
    0:01:51 And they were really sort of the first AI chat query platform that started to cite the sources and share where they actually found the information.
    0:02:01 They have a really, really great Chrome extension where you install the Chrome extension and it will essentially search anything on the site or domain that you’re on.
    0:02:03 I found that to be super, super helpful.
    0:02:05 So I love this approach Perplexity took.
    0:02:15 And Nathan, you and I were talking right before we hit record about how they’re totally agnostic to the actual underlying large language model.
    0:02:21 Yeah, which means they’re also like a huge supporter of open source, which obviously I’m a big fan of because I don’t like the idea of just having one
    0:02:24 company or two companies that rule the future of AI.
    0:02:30 So it was great to hear from Aravind, like his thoughts on open source and how Perplexity is kind of supporting open source by being agnostic.
    0:02:32 Yeah, I mean, it’s a cool place to be, right?
    0:02:39 Because you can go use Perplexity and you don’t have to worry about, all right, what is the best model out there right now?
    0:02:44 I mean, people like Nathan and I were constantly going, all right, Claude is marginally better than ChatGPT.
    0:02:46 So let’s use that now instead, right?
    0:02:48 We’re keeping our finger on the pulse of that kind of stuff.
    0:02:56 But if you use something like Perplexity, well, it’s always just going to use the most beneficial model for what you’re trying to achieve.
    0:02:56 Yeah.
    0:03:03 When all your marketing team does is put out fires, they burn out.
    0:03:07 But with HubSpot, they can achieve their best results without the stress.
    0:03:16 Tap into HubSpot’s collection of AI tools, Breeze, to pinpoint leads, capture attention and access all your data in one place.
    0:03:19 Keep your marketers cool and your campaign results hotter than ever.
    0:03:23 Visit HubSpot.com/marketers to learn more.
    0:03:33 So on this episode, Aravind is going to break down his entire story about what Perplexity was before it became what it is now.
    0:03:36 And when he started, it was completely different.
    0:03:42 So he’s going to break down that whole story arc for you of how it started and how it got to where it is today.
    0:03:44 We talk about the current state of AI.
    0:03:48 We talk about all of these devices that AI is getting rolled out into.
    0:03:56 You’re going to learn about the past, present and future of AI and how Perplexity is firmly placing themselves in the center of all of it.
    0:03:59 So let’s go ahead and jump in with Aravind Srinivas.
    0:04:03 Aravind Srinivas, thank you for joining us today on the next wave.
    0:04:05 Thank you for having me, Nathan, Matt.
    0:04:07 You know, I’ve been a big fan of Perplexity since the beginning.
    0:04:11 I think when I first saw you tweeting about it and tried it out and was blown away.
    0:04:20 So I think it’d be useful to know how did you get Perplexity started from starting out in India to now having one of the hottest startups in Silicon Valley?
    0:04:21 Like, what was that journey?
    0:04:29 I mean, by the way, I think it’s better to tell the true story than something that’s retrofitted to make it look like a much better story for PR.
    0:04:31 Yeah, OK. Yeah, yeah, yeah. The true story is great.
    0:04:40 Look, I never intended to start any company, but then there was this movie I watched, really deeply impacted me, called The Pirates of Silicon Valley.
    0:04:42 I don’t know if you guys have seen that movie.
    0:04:43 Yeah, I have, yeah.
    0:04:49 It’s one of the most authentic portrayals of Steve Jobs and Bill Gates and Microsoft and Apple.
    0:04:50 Yeah.
    0:04:53 And I was like, OK, I really need to be at Silicon Valley.
    0:04:54 It’s fantastic.
    0:04:59 And then did not have enough money to go do a master’s myself.
    0:05:01 So I thought, OK, someone else has to pay for you.
    0:05:03 So why don’t I try for a PhD?
    0:05:10 And the best way to do a PhD is to get started in some kind of research and establish like a track record.
    0:05:14 So I went to a professor at IIT and said, hey, like, can you help me do some research?
    0:05:19 And he was like, yeah, you know, there’s this paper on Atari games AI.
    0:05:23 Like, there’s this company called DeepMind that trained an AI to play Atari games.
    0:05:26 Why don’t you try to reimplement that whole paper?
    0:05:31 So like, he got me excited about all these ideas like transfer learning and hierarchical learning and things like that.
    0:05:38 And like, I wrote a few papers with him, and that got me admission to UC Berkeley for an AI PhD.
    0:05:45 And there I did a little more work, and like, OpenAI noticed my work, particularly this guy called John Schulman.
    0:05:49 He’s basically, like, the research inventor of ChatGPT.
    0:05:54 At that time, he was doing research more in RL, and he invited me to do an internship.
    0:05:57 And until that point, I was kind of like on a high.
    0:06:01 I was thinking I was really doing well writing papers, like coming all the way from India here.
    0:06:06 And when I entered OpenAI, I was like, damn — it was like a smack in the face.
    0:06:11 It’s still humbling. The people here are like stalwarts, like superstars.
    0:06:14 All of them are really amazing, like talented people.
    0:06:17 But it was not a very stable organization at the time.
    0:06:20 It’s probably never been stable, for what it’s worth.
    0:06:25 So then I got to work on all these unsupervised and generative models,
    0:06:27 got an internship at DeepMind.
    0:06:32 And that’s where I think I got the entrepreneurial ambitions because I always wanted to start a company like that.
    0:06:38 where I knew I would not be successful starting the next, like, Instagram or TikTok,
    0:06:42 even if like luck was on my side because I don’t have the skill set of like
    0:06:44 hacking the dopamine of people.
    0:06:48 So my skill set is more, OK, thinking about like some problem more deeply
    0:06:51 and trying to see what we can do with some research, but quickly ship it to product.
    0:06:54 That was the sweet spot I was trying to get at.
    0:06:56 And Google is a great example of that.
    0:06:58 So that was very motivational.
    0:07:00 That doesn’t mean I wanted to start a search startup.
    0:07:03 It was just like motivational to try to start a company in that fold.
    0:07:06 And, you know, one thing led to another.
    0:07:10 I tried, you know — this TV show Silicon Valley, right?
    0:07:11 You wouldn’t believe it.
    0:07:15 I actually thought it was meant for comedy.
    0:07:19 But people told me it’s too real.
    0:07:21 I lived in Silicon Valley for 13 years.
    0:07:23 It’s definitely real.
    0:07:25 I was in Berkeley, right?
    0:07:28 So I was not very well connected to Silicon Valley.
    0:07:32 So I thought this show was just meant for laughs.
    0:07:34 But people told me, dude, don’t laugh at this.
    0:07:39 I cry watching it because it’s too real and reminds me of my own life.
    0:07:42 And then I was like, OK — ideas like compression, generative models.
    0:07:44 All of that was like amazing.
    0:07:47 I tried to convince people to work with me.
    0:07:49 Nobody wanted to do any company.
    0:07:53 Something that you realize as a founder is like every time you go to your friends
    0:07:57 and say, let’s start a company either over drinks or coffee, doesn’t matter.
    0:07:59 All of them would say, hell, yeah, let’s do it.
    0:08:01 And then you just forget about it.
    0:08:03 Yeah, people don’t really tell you how hard doing a startup is, I think.
    0:08:06 It’s real when you just say, yeah, I’ve started it.
    0:08:09 This is the company.
    0:08:10 Are you willing to join?
    0:08:13 Whether you join or not join, I’m going to do it.
    0:08:15 And that’s when people are like, wait, is this real?
    0:08:16 Is this serious?
    0:08:19 And then they’re like spooked and interested, right?
    0:08:19 So the reason you’re not finding co-founders is people don’t think you’re serious enough.
    0:08:27 Anyway, so all that — one thing led to another, and I pitched the stupidest idea
    0:08:31 to one of my first investors, saying, hey, like, we need to disrupt search.
    0:08:36 So it’s hard to disrupt Google through the text form factor.
    0:08:42 So how about we disrupt Google through the vision form factor, through the vision pixels?
    0:08:46 So imagine we all wore glasses and we all saw this, and then we could just ask
    0:08:48 questions about whatever we see.
    0:08:50 And he was like, OK, all this sounds cool.
    0:08:53 But look, you’re like literally one person right now.
    0:08:57 And like, you’re not going to be able to execute on this yourself.
    0:09:02 Start focusing on more narrow things, get a team and then try to build up towards this.
    0:09:06 So that was very good advice given to me by this great investor named Elad Gil.
    0:09:07 Oh, Elad.
    0:09:10 And then, like, Nat Friedman and Elad decided to fund me.
    0:09:12 They were like, OK, look, you’re from open AI.
    0:09:16 You have all this, like you’ve done work in deep mind research.
    0:09:17 You understand these things.
    0:09:19 But again, like, you don’t have any idea.
    0:09:21 You don’t have any product.
    0:09:25 So we’re going to give you like one or two million to play around and tinker.
    0:09:26 We’ll see what happens.
    0:09:30 And then I take that money and like, we start focusing more on like searching over databases,
    0:09:36 like searching over your own spreadsheets, searching over CSVs, asking questions about like data sets.
    0:09:37 And that was fun.
    0:09:38 Like as a data nerd, I really loved it.
    0:09:43 And my co-founders Denis and Johnny joined to try it out.
    0:09:45 They were all excited to experiment too.
    0:09:49 And then like, we went to enterprises and said, hey, dude, it’s like we have this thing.
    0:09:51 We used to show demos.
    0:09:55 What if you gave us your data and we powered search over that?
    0:09:58 You just upgrade the functionality for your users.
    0:10:03 We went to websites like PitchBook and Crunchbase, and all of them would listen to us,
    0:10:08 watch our demos and be like, our engineering teams can do this, man.
    0:10:09 So thank you.
    0:10:14 And it felt really depressing every week, where we would keep doing demos and nobody wants it.
    0:10:20 And then one day I just realized nobody cares about like a three person startup.
    0:10:22 They think they can do it themselves.
    0:10:24 They don’t value you and it’s fair.
    0:10:24 It’s fair.
    0:10:27 Like you’ve not earned their value yet.
    0:10:32 I’m sure this will be useful for bigger companies, but they are never going to talk to us.
    0:10:37 If these smaller companies don’t talk to us, like let’s earn the attention of the bigger guys
    0:10:41 by doing search over public data sets that are really big.
    0:10:45 Only then they’ll get convinced that we can handle large databases.
    0:10:50 And so we started scraping Twitter, because, I mean, obviously we all like Twitter.
    0:10:50 We were all using it.
    0:10:52 Yeah, right.
    0:10:53 Yeah, X as it’s called today.
    0:10:57 And Jack Dorsey had Twitter API.
    0:11:03 Elon also had it, but Elon basically charges so high that it’s impossible to use it now.
    0:11:05 But Jack Dorsey had the Twitter API.
    0:11:10 And if you had academic access accounts, you could just scrape a lot of tweets every day.
    0:11:14 So we would just create these academic access accounts.
    0:11:16 And non-commercial use.
    0:11:19 And we keep scraping social graphs and tweets.
    0:11:20 And then we will power search over that.
    0:11:26 We would power search over, like, oh, how many followers does Nathan have that Matt is also following?
    0:11:32 Or like, what are the tweets of Matt’s that Nathan has liked in the last 10 days?
    0:11:38 Or like, which of Nathan’s tweets has Elon Musk replied to?
    0:11:39 There is stuff like that, right?
    0:11:40 Right.
    0:11:53 And it’s fun, these sorts of social searches — like tweets about AI, or tweets about 3D diffusion models that Matt has tweeted about.
    0:11:57 You can do a lot of these searches that current Twitter just like really sucks at, right?
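    [Editor’s note: the kind of social-graph queries described here can be sketched with plain set operations over a scraped follower graph. The names and data below are hypothetical stand-ins, not the actual scraped records or Perplexity’s code.]

    ```python
    # Minimal sketch of social-graph search over scraped Twitter-style data.
    # All users, edges, and tweet IDs below are invented for illustration.

    follows = {  # user -> set of accounts they follow
        "matt": {"nathan", "elon", "alice"},
        "nathan": {"matt", "elon"},
        "alice": {"nathan", "matt"},
        "bob": {"nathan"},
    }

    likes = {  # user -> set of (author, tweet_id) pairs they liked
        "nathan": {("matt", 101), ("matt", 103), ("elon", 7)},
    }

    def followers(user):
        """Everyone who follows `user`."""
        return {u for u, out in follows.items() if user in out}

    def mutual_followers(user, also_followed_by):
        """Followers of `user` that `also_followed_by` also follows."""
        return followers(user) & follows[also_followed_by]

    def liked_tweets_of(author, by_user):
        """Tweet IDs by `author` that `by_user` has liked."""
        return sorted(tid for a, tid in likes[by_user] if a == author)

    print(mutual_followers("nathan", "matt"))  # followers of Nathan whom Matt follows
    print(liked_tweets_of("matt", "nathan"))   # Matt's tweets that Nathan liked
    ```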
    0:12:04 And once we built this demo, we showed it to a few people, like Elad and Jeff, and they were all like blown away by that.
    0:12:07 Damn, like this is a completely new experience.
    0:12:14 Like, okay, large language models can help you build new search experiences that was never possible before.
    0:12:16 And they invested in us.
    0:12:31 And then we used their investment as credibility to attract some engineers — at least two engineers joined us after that, saying, hey, OK, look, you guys may not be well known, but looks like you got funding from some top people.
    0:12:38 So you won’t be, like, randos, you know — at least I can trust you enough to work with you guys, and the demos are actually really impressive.
    0:12:39 So let’s work together.
    0:12:45 And then we went to these bigger companies and said, hey, like, look, these things are working.
    0:12:46 Do you want to work with us?
    0:12:51 Now they would at least like take our meetings more seriously and say, okay, you know what, these are all our problems.
    0:12:52 Like, what do you want to do?
    0:12:54 So that was the stage we were in.
    0:13:00 And then one fine day we were like, hey, like this part of talking to companies and like trying to sell solutions to them is not even fun.
    0:13:14 So why don’t we just like search over the whole web, like make the LLM just look at the links, take the relevant parts of the links and then let the LLM do all the reasoning in terms of whether it has to return a table or a paragraph or citations or whatever.
    0:13:16 And then we built a little more general solution.
    0:13:23 Like, I think Paul Graham talks a lot about this, like how often when you realize a simpler way to do something, like it becomes a big unlock for you.
    0:13:29 And then one weekend, like we prototype this idea of just taking the links and summarizing them with citations.
    0:13:32 And then it was working reasonably well.
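    [Editor’s note: the prototype described here — take the links, pull the relevant parts, and summarize with citations — can be sketched roughly as below. The retrieval and the LLM are stubbed out, and all function names and URLs are illustrative, not Perplexity’s actual implementation.]

    ```python
    # Rough sketch of a links-to-cited-answer pipeline. Retrieval and the LLM
    # are stubs; a real system would hit a search index and prompt a model.

    def search_links(query):
        """Stub retrieval: a real version would fetch and parse web pages."""
        return [
            {"url": "https://example.com/a", "text": "LLMs can summarize documents."},
            {"url": "https://example.com/b", "text": "Citations ground answers in sources."},
        ]

    def extract_relevant(doc, query):
        """Stub relevance filter: keep passages sharing words with the query."""
        words = set(query.lower().split())
        return doc["text"] if words & set(doc["text"].lower().split()) else ""

    def llm_answer(query, snippets):
        """Stub LLM: a real version would synthesize prose with citation markers."""
        cited = [f"{s['text']} [{i + 1}]" for i, s in enumerate(snippets)]
        return " ".join(cited)

    def answer(query):
        docs = search_links(query)
        snippets = [d for d in docs if extract_relevant(d, query)]
        body = llm_answer(query, snippets)
        sources = "\n".join(f"[{i + 1}] {d['url']}" for i, d in enumerate(snippets))
        return f"{body}\n\nSources:\n{sources}"

    print(answer("how do llms summarize with citations"))
    ```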
    0:13:38 And three days before ChatGPT got released, OpenAI put out this text-davinci-003 model.
    0:13:39 Yeah.
    0:13:46 And that model just made these summarizations so much better that we were like, damn, this is truly a big deal.
    0:13:47 It’s an inflection point in AI.
    0:13:51 Everyone’s like, this is a technology we never even knew we wanted.
    0:13:54 But now that we have it, we just like don’t want to go back.
    0:14:01 But there was one thing: they had a knowledge cutoff, really, and they don’t have citations.
    0:14:03 They don’t have like grounding in facts.
    0:14:09 So there was a space for somebody else to come and put a fact grounded citation powered answer bot.
    0:14:11 And we already had it.
    0:14:13 It’s not even like we had to build it in like three days.
    0:14:14 We already had it.
    0:14:17 We just had to put together a web front end, and then we got it ready quickly.
    0:14:19 We sent it to a few investors.
    0:14:22 I remember my first feedback was from this guy
    0:14:24 I really respect, Daniel Gross.
    0:14:27 And he said, Aravind, this is cool.
    0:14:30 But you should not have it as, like, a hit button for your query.
    0:14:35 It’s a submit button, because it’s that slow — takes like 10 seconds to get an answer.
    0:14:37 It’s almost like I’m submitting a job.
    0:14:43 So you should call it a submit button and have a queue of queries or something.
    0:14:47 From there onwards to now being asked, how is the service so fast?
    0:14:48 That is the progress, right?
    0:14:52 We have made progress not just because of our own engineering team, which is amazing,
    0:14:59 but also the fact that chips are getting better, faster, cheaper, and models are getting better, faster, cheaper.
    0:15:06 And we made a bet when we launched. There were similar other services in our space that were
    0:15:08 also launching a mix of search and LLMs.
    0:15:13 But we were the only ones who had the conviction that it should just be answers.
    0:15:15 The links are only in sources.
    0:15:18 Others are like, I still want to have the 10 blue links.
    0:15:21 I want to have a sidebar with a chatbot.
    0:15:24 I want to have a summary panel at the top.
    0:15:26 I don’t want to like change it too dramatically.
    0:15:31 And we were like, dude, if you don’t change it dramatically, no one’s going to really realize
    0:15:32 you’re different from Google.
    0:15:35 They’re just going to think you’re Google with some add ons.
    0:15:36 Like, that’s not exciting.
    0:15:41 You have to be truly differentiated, so that, okay, even if you’re worse than Google, even
    0:15:45 if you’re slower, even if you suck at navigational queries and people go back
    0:15:50 to Google, they’ll at least register in their minds that you are better than Google
    0:15:54 on certain things, which is like actually asking a question, deeper research.
    0:15:56 And they’ll come to you for answer engines.
    0:15:57 They’re not going to come to you for search engines.
    0:15:59 They’re not going to come to you for product comparisons.
    0:16:03 They’re not going to come to you for, like, ordering San Pellegrino, right?
    0:16:06 They’re going to come to you for asking whether to get San Pellegrino or LaCroix.
    0:16:07 What should I get?
    0:16:10 It’s going to register in their mind why you’re different and better.
    0:16:12 So that is the position we took.
    0:16:16 We had conviction that like, even if we got answers wrong, even if people made fun
    0:16:21 of us for, you know, like hallucinations, over time all these problems will get much better.
    0:16:26 And that ended up being one of the best decisions we made to be called as an answer
    0:16:29 engine instead of like search engine with LLMs on top.
    0:16:32 And so we have been proven right.
    0:16:37 Our thesis was correct that this is the right format to interact with information on the web.
    0:16:40 And that ended up being true for Perplexity.
    0:16:43 Our traffic has been growing exponentially since we started.
    0:16:50 So then we said, OK, look, we were initially on this treasure hunt, trying to figure
    0:16:52 out some product that would resonate with users.
    0:16:55 This is the product that it’s growing in terms of traction.
    0:16:58 Also, let’s commit ourselves to building a company.
    0:17:01 Let’s not be a seed round $2 million project.
    0:17:04 Let’s try to build a company around it and a business around it.
    0:17:09 And so we went and raised venture funding rounds and use that money to like keep growing
    0:17:12 even more. And that’s our current plan.
    0:17:17 Our mission is to like really transition the world from links to answers and build
    0:17:18 the ultimate knowledge app.
    0:17:21 Like if people go to Perplexity, they should just feel smarter every day.
    0:17:23 That’s the vibes we want people to feel.
    0:17:29 We don’t want the vibes of dancing girls on TikTok or like celebrities posting stuff on
    0:17:32 Instagram. We just want the vibes of feeling smarter.
    0:17:38 And I think asking questions is a great way to feel smarter, discovering new threads,
    0:17:41 your friends sharing interesting queries with each other.
    0:17:44 These are all utility values we’re trying to add to people’s lives.
    0:17:48 We’ll be right back.
    0:17:51 But first, I want to tell you about another great podcast you’re going to want to listen to.
    0:17:55 It’s called Science of Scaling, hosted by Mark Roberge.
    0:18:00 And it’s brought to you by the HubSpot Podcast Network, the audio destination for
    0:18:01 business professionals.
    0:18:06 Each week hosts Mark Roberge, founding chief revenue officer at HubSpot, senior
    0:18:11 lecturer at Harvard Business School and co-founder of Stage 2 Capital, sits down
    0:18:15 with the most successful sales leaders in tech to learn the secrets, strategies and
    0:18:18 tactics to scaling your company’s growth.
    0:18:23 He recently did a great episode called How Do You Solve for a Siloed Marketing and
    0:18:25 Sales, and I personally learned a lot from it.
    0:18:27 You’re going to want to check out the podcast.
    0:18:31 Listen to Science of Scaling wherever you get your podcasts.
    0:18:36 I’m curious on your thoughts on this.
    0:18:40 So obviously there’s a lot of people that are worried about like content creation, right?
    0:18:45 If I’m creating, if I’m writing blog posts or creating podcasts or making YouTube videos
    0:18:49 and, you know, doing my best to like SEO them or whatever so that people will find them.
    0:18:55 And these chatbots in the future will just sort of answer the question without me
    0:18:57 actually needing to navigate to the site and read the article.
    0:19:02 Does it sort of disincentivize content creators to keep on creating content
    0:19:05 If people aren’t like clicking over to their website anymore.
    0:19:08 I’m just curious your thoughts on that whole argument around it.
    0:19:13 Our model of the citation or attribution is, I would say it’s kind of the right model.
    0:19:18 Now you can ask, what about these future AI models that are just training on,
    0:19:22 as they like to joke, all publicly available data?
    0:19:27 I just don’t — we don’t do that ourselves.
    0:19:30 Like we’re not in the business of creating these large foundation models.
    0:19:35 So we’re not like taking the models and like benefiting from the data you create.
    0:19:39 One thing I think Nat Friedman has said about this — I kind of like this:
    0:19:44 it should be okay to train on someone’s data as long as you’re not literally,
    0:19:46 verbatim, reproducing it.
    0:19:47 It’s kind of similar.
    0:19:53 Like, for example, when I watch any of you guys’ broadcasts or YouTube videos,
    0:19:55 is it fair to say I’m training on it?
    0:19:58 Because I’m kind of consuming your data, right?
    0:20:04 But if I were literally taking that and ripping it off and creating value out
    0:20:09 of it without giving you any kind of attribution — like saying, okay, according
    0:20:13 to Matt or according to Nathan — without saying that, if I’m just literally
    0:20:16 reproducing your thing word by word, that seems problematic.
    0:20:22 And that is basically the core point that The New York Times is making against OpenAI.
    0:20:27 And I think there’s some, you know, merit to it, but I was like, they kind
    0:20:30 of over-engineered the prompts to show those cases.
    0:20:34 But the deeper, deeper point being made is that like, there is a potential to just
    0:20:35 regurgitate content here.
    0:20:38 So what happens?
    0:20:40 Like, should the person be given credit?
    0:20:45 And I think the current paradigm of people fighting for licensing deals and
    0:20:49 trying to make money out of the AI companies also doesn’t seem like
    0:20:49 the right solution.
    0:20:51 It seems like a temporary solution.
    0:20:56 The longer term solution is like, whatever value is created per query, it should
    0:21:01 be shared by the person surfacing the answer and the site and the sources that
    0:21:05 got cited, which is more of the Spotify model, which works.
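    [Editor’s note: that per-query, Spotify-style value sharing can be sketched as a toy calculation. The 50/50 split, citation weights, and revenue figure below are invented for illustration, not anything Perplexity has announced.]

    ```python
    # Toy sketch of per-query revenue sharing between the answer engine and
    # the cited sources. All numbers and the split policy are hypothetical.

    def share_query_revenue(revenue, citations, engine_cut=0.5):
        """Split one query's revenue between the engine and its cited sources,
        weighting each source by how heavily it was cited in the answer."""
        source_pool = revenue * (1 - engine_cut)
        total_weight = sum(citations.values())
        payouts = {
            src: round(source_pool * w / total_weight, 6)
            for src, w in citations.items()
        }
        return {"engine": revenue * engine_cut, "sources": payouts}

    # One query earned $0.02 and cited three sources with different prominence.
    split = share_query_revenue(0.02, {"nytimes.com": 2, "blog.example": 1, "wiki": 1})
    print(split)
    ```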
    0:21:10 So this is the sort of thing I kind of feel all AI companies should subscribe to
    0:21:13 not being overly greedy, because if people don’t continue to create good
    0:21:17 content on the web through their blogs or tweets or like journalists writing their
    0:21:22 good essays, or YouTube creators making good videos, then there’s really no value
    0:21:24 in your bot either.
    0:21:28 Your bot is only as useful because it’s surfacing good content from the web and
    0:21:32 getting into the hands of people who are asking questions relevant to that.
    0:21:34 And if people stop creating good content, your bot is also not going to be
    0:21:35 that useful, right?
    0:21:37 We need a two-way relationship.
    0:21:42 And so instead of trying to be greedy, like, and trying to create a company that’s
    0:21:47 eating all the profits like Google did in the previous era, if you’re like less
    0:21:51 greedy and like more long-term focused like Spotify, I think you can create a
    0:21:55 much better model here. And that’s something we are aspiring to do.
    0:21:58 Yeah, Google just kept getting more and more greedy over time too, right?
    0:22:01 Like adding more and more advertising links at the top.
    0:22:04 Now when you do a Google search, you’re seeing like five or six or seven
    0:22:08 or eight ad results, or they’re like instantly answering the question, or
    0:22:12 they’re sending you to one of their properties as the first result.
    0:22:17 Yeah, a lot of people mistakenly think that Google pays everyone some money
    0:22:21 for being able to use their content in the ten blue links UI.
    0:22:23 The reality is not that.
    0:22:24 They don’t pay anybody anything.
    0:22:27 I’m curious on your thoughts about the, you know, the whole open source,
    0:22:30 closed source debate. Obviously, that’s a very hot topic right now.
    0:22:32 Elon Musk is calling out Sam Altman.
    0:22:34 There’s that whole battle going on.
    0:22:37 Do you think you think the future of like the large language models,
    0:22:41 do you think it’s going to be more open source, closed source, a combo of both?
    0:22:43 Like, what are your thoughts on how this is all going to play out?
    0:22:46 I think it’s combo of both.
    0:22:50 Open source will always lag the best closed source model.
    0:22:54 And that’s probably only one company in the world that has the money
    0:22:57 and the incentives to keep open sourcing models, which is meta.
    0:23:01 Everybody hates Zuckerberg, but that’s the only guy who’s truly committed to open source.
    0:23:07 Rest of the people are all like kind of like proxy open source or like whatever.
    0:23:11 No need to make fun of them, because everyone’s trying to do the best they can, right?
    0:23:17 Like, nobody else has a cash cow like Zuck, to be able to, like, spend so much money
    0:23:21 and open source it all and give away the benefits, because unlike Google,
    0:23:23 he doesn’t even have a cloud business.
    0:23:26 He doesn’t want to have one either. He’s like, I don’t care.
    0:23:29 I just want to like make more ad revenue.
    0:23:33 And so he has the incentive to just give it out and own the ecosystem
    0:23:39 and ensure that he profits from the developers who are like building on top
    0:23:41 so that their engineering can benefit Meta.
    0:23:45 And anybody else is not truly committed.
    0:23:51 And so then we should ask, like, when can Llama beat GPT-4?
    0:23:52 That’s the right question to ask.
    0:23:57 Maybe it’s this year, maybe, you know, I hear they’re trying their best.
    0:24:04 But given that he’s purchased 600,000 H100s, it’s inevitable that he beats them, right?
    0:24:05 Like, it’s just a matter of time.
    0:24:09 Now, then you can say, okay, by the time he beats them, would Sam have a better model?
    0:24:11 Definitely.
    0:24:16 Like, they’ve already had GPT-4 for a year and they’ve been upgrading GPT-4
    0:24:19 through the course of the year, but they’ve also had a year to build an even better model.
    0:24:25 So most likely the best model would be a closed source one, either from OpenAI or Anthropic
    0:24:29 or Gemini, and that would be better than the best Llama at that point.
    0:24:32 But that doesn’t mean open source is getting destroyed.
    0:24:37 Like most people just want to use APIs and you need somebody else to serve these models.
    0:24:41 But you don’t have to overly depend on one provider.
    0:24:42 I think that’s the future we want.
    0:24:44 You don’t want to overly depend on one provider.
    0:24:50 And you want the ability to take these models and customize them for what you want to build yourself.
    0:24:56 And if you have, if there is a lot of friction in being able to train and deploy your own models,
    0:25:00 because literally you have to get a GPU cluster, you have to train things,
    0:25:01 you have to deploy, you have to do evals.
    0:25:06 Like, people think like, oh yeah, I’ll just take this model and I create like fake news bots in
    0:25:08 the world and I’ll destroy the world or something.
    0:25:10 That’s not how the internet works, actually.
    0:25:13 It’s hard to create bots by the way.
    0:25:15 Like there are so many layers of security you need to bypass.
    0:25:20 And like there are so many solutions to like fighting the bots problem and fake information
    0:25:25 problem compared to like say banning the use of open source models.
    0:25:30 Because the more you block people from having access to powerful technology,
    0:25:33 the more motivated they’ll be to get access to it.
    0:25:38 You know the news of how this Chinese engineer was like leaking all the details, right?
    0:25:44 from Google, and, like, having somebody else badge in for him at the office.
    0:25:48 So this is what is going to happen if you go too much on the other extreme.
    0:25:52 You know, a lot of the concerns that people have about the open versus closed
    0:25:55 also has to do with, you know, some of the bias elements, right?
    0:25:59 Like the people are worried that if Microsoft or Google or one of these
    0:26:04 companies is in control, well now it’s a big corporation who controls the narrative
    0:26:07 that is coming out of these bots, right?
    0:26:12 Where open source maybe you could steer it and sort of have your own sort of biases,
    0:26:15 preferences, whatever inside of the model.
    0:26:16 100%.
    0:26:18 I think it’s scary to have like one company that then
    0:26:21 in the future determines what was human history.
    0:26:25 And they’re like telling you the answer and like and it’s not exactly the truth.
    0:26:29 It’s like some modified version of the truth that fits some agenda that they have.
    0:26:31 Closed source is going to continue to be way ahead, like Aravind said,
    0:26:35 but I’m glad the open source is there because we definitely need alternatives.
    0:26:37 So it’s not just one company ruling everything.
    0:26:38 Yeah.
    0:26:38 So what do you think?
    0:26:41 You mentioned Zuckerberg real quick.
    0:26:42 I’m just curious on your thoughts on this.
    0:26:46 You mentioned that he’s incentivized to open source it.
    0:26:49 What is the incentive for Meta to be open sourcing it?
    0:26:53 We don’t have to think about anyone as altruistic or like a good or bad person.
    0:26:55 Just purely capitalistically.
    0:27:00 It’s in his incentive that other engineers build on top of Llama rather than GPTs,
    0:27:05 So that like the engineering people do in the open source ecosystem,
    0:27:08 Meta can learn from that and like use it in their products.
    0:27:13 Like, if you can see how other people take Llama and, like, make it faster,
    0:27:18 learn how to like fine tune it, get it deployed on the edge devices,
    0:27:24 like learn how to personalize these LLMs with like very limited parameter efficient fine tuning.
    0:27:29 All these are like algorithmic benefits that Meta can just look at what people are doing
    0:27:32 in the open and put it in their products.
    0:27:35 Instead of saying, oh, I’ll hire all the best engineers in my company
    0:27:38 and then like only rely on their own like brains to do these things.
    0:27:42 Because you want the whole ecosystem to benefit faster, right?
    0:27:46 And you also benefit from the ecosystem benefiting.
    0:27:49 And you have the cash cow, you have the user base to like,
    0:27:51 you know, go and deploy all this at scale.
    0:27:55 He actually benefits a lot by putting it out and like letting other people build on top.
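    [Editor’s note: the parameter-efficient fine-tuning mentioned above typically means freezing the base weights and learning a small low-rank update — the LoRA idea. A minimal NumPy sketch, with made-up dimensions, is below; this illustrates the technique generally, not Meta’s specific implementation.]

    ```python
    import numpy as np

    # Minimal sketch of a LoRA-style low-rank adapter: the frozen base weight W
    # is augmented with a trainable update B @ A of rank r << min(d_in, d_out).
    rng = np.random.default_rng(0)
    d_in, d_out, r = 64, 64, 4

    W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
    A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
    B = np.zeros((d_out, r))                   # trainable up-projection (starts at 0)

    def adapted_forward(x):
        """Forward pass: base output plus the low-rank correction."""
        return W @ x + B @ (A @ x)

    x = rng.standard_normal(d_in)
    # With B initialized to zero, the adapter starts as an exact no-op:
    assert np.allclose(adapted_forward(x), W @ x)

    # Trainable parameter count shrinks from d_out*d_in to r*(d_in + d_out):
    full, lora = d_out * d_in, r * (d_in + d_out)
    print(f"full: {full} params, adapter: {lora} params")
    ```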
    0:27:59 Now, there’s the other argument, which I believe he’s making
    0:28:04 honestly, though people can be skeptical of his true intentions. He’s saying,
    0:28:08 if you really care about safety, you rather want as many eyeballs on it.
    0:28:12 You can’t be the person who comes and says, we need to make all this safe.
    0:28:14 This could go really wrong and dangerous.
    0:28:18 so you’d better trust these four or five people in the world,
    0:28:20 with all these billions of dollars in funding,
    0:28:25 tightly tied to Microsoft or, you know, Google or Amazon.
    0:28:27 And like, you know, we’ll decide what is good for you.
    0:28:31 But you’d rather have as many people have access to these things, right?
    0:28:35 If it is truly dangerous, you’d rather have as many people be aware
    0:28:39 and educated and having access and like trying to be able to have opinions about it, right?
    0:28:42 Because that way, even if somebody is misusing it,
    0:28:44 you at least know how people can misuse things.
    0:28:45 Right.
    0:28:49 And that way, you’ll be able to build guardrails against it.
    0:28:51 Instead of just saying, trust us and we know what we’re doing.
    0:28:55 Well, this has been an amazing conversation.
    0:28:57 Everybody needs to check out perplexity.ai.
    0:29:00 There is a free version that you can use of it.
    0:29:02 There’s also a premium version.
    0:29:03 I’m on the premium version.
    0:29:05 I also have a Rabbit R1 coming.
    0:29:08 So I’m excited to play around with that with perplexity on board.
    0:29:13 Is there anywhere that you want people to follow you, maybe on Twitter, YouTube,
    0:29:14 something like that?
    0:29:16 Where do you want to send people after listening to this episode?
    0:29:18 Perplexity underscore AI.
    0:29:19 That’s our Twitter handle.
    0:29:25 And mine is AravSrinivas, A-R-A-V-S-R-I-N-I-V-A-S.
    0:29:26 Very cool.
    0:29:29 Well, thank you so much for spending the time with us today
    0:29:32 and answering all of our questions and hanging out with us.
    0:29:34 And yeah, it’s been a great conversation.
    0:29:34 Thank you.
    0:29:35 Thank you, Aravind.
    0:29:38 [MUSIC PLAYING]

    Is our search functionality changing? How will AI change how we find information? Who will usher in the next wave of the online search experience? The Next Wave answers those questions and more as Matt Wolfe (https://twitter.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) talk with Aravind Srinivas (https://twitter.com/AravSrinivas), CEO of Perplexity AI. They discuss Perplexity’s beginnings, Aravind’s journey from OpenAI and Google DeepMind to Perplexity, and how they are changing how people use search, differentiating themselves from Google. Plus, open source vs. closed source, AI’s implications for creators, and more!

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://link.chtbl.com/4FZET15d

    Show Notes:

    (00:00) Aravind’s beginnings from India to Berkeley and Silicon Valley

    (05:28) Entering OpenAI humbled Aravind and motivated him to pursue entrepreneurship.

    (07:34) Pitching a bold idea for disrupting Google search.

    (11:10) New search experiences utilizing large language models.

    (15:23) Deep research brings clients seeking answers, not products.

    (16:50) Transitioning the world’s search results from links to answers

    (19:17) Value sharing model needed for AI companies.

    (22:11) Incentive for profits, expecting technological advancements.

    (25:31) Encouraging open source for mutual benefit.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano