AI transcript
0:00:05 I think the answer is yes.
0:00:06 – Yeah.
0:00:07 – But how it’s going to change?
0:00:08 I don’t think we know yet.
0:00:10 A lot of people underestimate humans,
0:00:12 especially when we’re talking about AI.
0:00:15 I feel like humans will figure out new ways
0:00:16 to add value to the world.
0:00:17 – And I’m pretty encouraged there.
0:00:19 Like I think people who adopt AI,
0:00:21 they’re going to become enhanced in a way
0:00:24 and they’re going to have an outsized impact on the world.
0:00:26 (upbeat music)
0:00:28 – Hey, welcome back to the Next Wave podcast.
0:00:31 I’m Matt Wolf and I’m here with my co-host, Nathan Lanz.
0:00:34 And we are your chief AI officer.
0:00:36 It is our goal with this show
0:00:38 to keep you informed on the latest AI news,
0:00:42 the latest AI tools and the talk of the AI world
0:00:45 so that you can use it in your life and your business
0:00:48 and leverage these new technologies
0:00:50 that are there for you to use.
0:00:52 Today’s episode is a little bit different
0:00:53 from what we’ve put out so far.
0:00:54 This time we’re doing a little bit
0:00:57 of an ask us anything episode.
0:01:00 We went to X and asked you to send us your questions
0:01:03 about the AI world and what you want to know
0:01:05 and hear our opinions on.
0:01:06 And we got some amazing questions.
0:01:10 So in this episode, Nathan and I are going to deep dive
0:01:13 and answer and give our thoughts on some of those questions.
0:01:17 We talk about things like where is AI art going
0:01:20 and is there room for it to get even better?
0:01:23 We also talk about how the government and corporations
0:01:26 and how people need to change to adapt
0:01:28 to this new AI world that we’re headed into.
0:01:31 And we also talk about the limitations
0:01:32 of large language models.
0:01:34 How big can these large language models get?
0:01:36 How powerful can they get?
0:01:38 And we’re going to deep dive and give you our thoughts
0:01:40 on all of this in today’s episode.
0:01:42 So hang out with us and let’s dive in.
0:01:47 So the first question here comes from Vicky Jay over on X.
0:01:50 She says, where do you see AI art and video going
0:01:52 by the end of this year?
0:01:54 I mean, AI art, the thing is like a year ago,
0:01:57 AI art was horrible.
0:01:59 Or like a year and a half ago, really horrible.
0:02:01 And then now it’s getting to the point where it’s like,
0:02:02 okay, how is it going to get better?
0:02:05 Like the newest version of Midjourney is amazing.
0:02:09 ‘Cause maybe it’s still, with Midjourney,
0:02:11 I think the one challenge is it still always looks
0:02:12 like Midjourney to me.
0:02:14 Like it looks very polished and it’s harder
0:02:16 to get it to have a very different style.
0:02:21 So maybe having more control over,
0:02:24 like, little details in the image versus now
0:02:25 where it kind of just, oh yeah,
0:02:27 produces something super beautiful,
0:02:29 but it’s not really easy to edit.
0:02:31 I think by the end of the year,
0:02:32 you’ll be able to produce images
0:02:34 that look even more amazing than now,
0:02:35 but you’ll have way more control
0:02:37 over how you actually edit those images,
0:02:39 which for commercial uses will be amazing, right?
0:02:41 ‘Cause if you’re producing like graphics
0:02:43 for advertisements or anything like that,
0:02:44 that stuff’s going to get way better,
0:02:46 I think by the end of the year.
0:02:50 When all your marketing team does is put out fires,
0:02:52 they burn out fast.
0:02:53 Sifting through leads,
0:02:55 creating content for infinite channels,
0:02:58 endlessly searching for disparate performance KPIs,
0:02:59 it all takes a toll.
0:03:00 But with HubSpot,
0:03:03 you can stop team burnout in its tracks.
0:03:05 Plus your team can achieve their best results
0:03:07 without breaking a sweat.
0:03:09 With HubSpot’s collection of AI tools,
0:03:12 Breeze, you can pinpoint the best leads possible,
0:03:15 capture prospects’ attention with click-worthy content
0:03:18 and access all your company’s data in one place.
0:03:21 No sifting through tabs necessary.
0:03:23 It’s all waiting for your team in HubSpot.
0:03:24 Keep your marketers cool
0:03:27 and make your campaign results hotter than ever.
0:03:30 Visit hubspot.com/marketers to learn more.
0:03:33 (upbeat music)
0:03:34 – Yeah, it is interesting
0:03:36 ’cause I feel the same way about DALL-E 3, right?
0:03:37 Like I’ve gotten to the point now
0:03:38 where I can look at an image and go,
0:03:40 “Okay, I can tell that was DALL-E 3.
0:03:42 I can tell that was Midjourney.”
0:03:44 I mean, I don’t know if the normal,
0:03:46 if normal people who aren’t like as immersed in AI
0:03:48 can see that or not, it’s hard for me to say, right?
0:03:50 Like I’m so much in my own bubble
0:03:52 that I don’t know if other people notice that too,
0:03:55 but DALL-E creates very similar images.
0:03:56 Like the color palettes on them
0:03:58 always kind of looks the same to me.
0:04:01 Same with Midjourney, it’s got like this stylistic thing
0:04:02 where I look at it and go,
0:04:04 “Okay, I can tell that was Midjourney.”
0:04:05 But I think like you said,
0:04:08 I think the updates throughout this year
0:04:11 are gonna be a little bit more nuanced and marginal updates,
0:04:12 right?
0:04:13 They’re gonna be smaller updates
0:04:15 because we’ve already got these AI models
0:04:17 for images specifically
0:04:20 that can create ultra-realistic images, right?
0:04:21 Where else do you go?
0:04:23 I think the focus now is less on realism
0:04:25 and more on prompt adherence, right?
0:04:28 Midjourney is really, really great at realism.
0:04:30 DALL-E still sucks at realism.
0:04:33 DALL-E is really, really good at prompt adherence.
0:04:35 You can throw a whole bunch of elements into the prompt.
0:04:38 You know, I want a dragon wearing a fedora,
0:04:40 eating nachos, watching MTV
0:04:43 and the words party time on the screen
0:04:46 and it will get all of those elements into a single image.
0:04:48 Midjourney can’t do that, right?
0:04:49 So I think what we’re probably gonna see
0:04:52 is like all of these models
0:04:54 kind of combining to a point where like,
0:04:56 nobody can tell whether it was Midjourney
0:04:59 or DALL-E or Stable Diffusion
0:05:03 because Midjourney focuses on prompt adherence
0:05:04 and improves that.
0:05:08 DALL-E focuses on realism
0:05:11 and more dynamic color palettes
0:05:13 and optimizes for that.
0:05:14 And we’re gonna get to a point
0:05:16 where we don’t know what model generated what
0:05:18 because everything is just good.
0:05:21 As far as video, I mean, we saw Sora.
0:05:25 OpenAI has made it sound like we might see Sora
0:05:26 sometime by the end of the year.
0:05:28 We might actually get our hands on it.
0:05:29 You know, we were just talking a minute ago
0:05:33 about Adobe rolling Sora into Adobe Premiere.
0:05:35 So, you know, I think with Adobe Premiere
0:05:37 we might start to get some access to Sora
0:05:40 and some of those capabilities in there.
0:05:41 So that’s gonna improve,
0:05:43 which means more realism in video,
0:05:45 longer prompt generation with video.
0:05:48 Yeah, I think that’s really what we’re gonna see.
0:05:49 I think the other area
0:05:52 that we’re gonna see continued massive leaps
0:05:55 is in text to 3D art, right?
0:05:59 For, you know, for creating like little animated characters
0:06:01 or for creating game assets or things like that, right?
0:06:04 You’ve got Spline 3D
0:06:08 and you’ve got CSM, Common Sense Machines,
0:06:11 and you’ve got Meshy.
0:06:12 There’s all of these tools that are like
0:06:16 getting really, really good at like text to 3D object.
0:06:18 And I feel like a lot of companies
0:06:20 are kind of focusing in there right now.
0:06:21 Like even Midjourney said
0:06:23 that they think their next sort of frontier
0:06:26 is gonna be 3D generation.
0:06:27 We’ll get back to the show in just a moment,
0:06:29 but first here’s a word from HubSpot.
0:06:32 Curious about what the future of productivity looks like?
0:06:33 HubSpot’s AI tools make quick work
0:06:35 of expediting content creation,
0:06:39 optimizing workflows and elevating data analysis.
0:06:40 Personally, I’m really impressed
0:06:42 by the AI website generator.
0:06:45 You can create a webpage just from a few simple prompts.
0:06:47 That’s mind-blowing.
0:06:48 And the facts speak for themselves.
0:06:51 Over 85% of marketers and sales professionals agree
0:06:55 that AI enhances content quality and prospecting efforts.
0:06:56 So what are you waiting for?
0:06:58 Sign up for free by clicking the link
0:06:59 in the description below.
0:07:00 Now back to the show.
0:07:02 – Going back to video for a second.
0:07:04 I think that, you know,
0:07:06 I’m sure we’re gonna see Sora this year.
0:07:08 Maybe when we see Sora and then like even like
0:07:09 a small upgrade to it,
0:07:11 like something better than the demo,
0:07:13 like in the public’s hands, which is gonna be amazing.
0:07:16 And I think you, you know, by the end of the year,
0:07:18 I think there’s a chance that we’ll actually see
0:07:20 like a viral short film,
0:07:23 like something like five minutes, 10 minutes long,
0:07:25 that’s entirely made with something like Sora.
0:07:26 – Yeah.
0:07:27 – I think that’s gonna be like the turning point of like,
0:07:30 oh, like wow, because like,
0:07:31 ’cause it’s almost there now.
0:07:33 Like if something like Sora was out there,
0:07:35 I’m pretty sure somebody could make something
0:07:36 that went really viral.
0:07:38 And I think you might even see, you know,
0:07:39 ’cause we’ve been talking about like,
0:07:42 there’s all these great new AI video tools coming out,
0:07:43 but like what’s the use case?
0:07:45 I do feel like there’ll be something, you know,
0:07:46 almost kind of like how Instagram took off
0:07:47 because of filters.
0:07:50 I do feel like there’s some kind of new app,
0:07:52 whether it comes this year or next,
0:07:54 or I feel like there’s something on the horizon
0:07:56 that we don’t even know what it is yet,
0:07:57 but some kind of social app
0:07:59 that will integrate these AI video tools.
0:08:01 And we’re like, oh yeah, of course,
0:08:02 why didn’t I think of that?
0:08:04 Like I think something like that bigger than TikTok
0:08:07 is coming using AI, probably AI video.
0:08:08 – Yeah, interesting.
0:08:10 I wonder what a social media app would look like
0:08:13 in that realm because it’s like,
0:08:15 is it gonna be just a bunch of AI videos
0:08:18 and people are scrolling a feed of AI videos?
0:08:19 – I guess the challenge right now is like,
0:08:21 the filters were free.
0:08:22 A lot of this stuff is quite expensive
0:08:24 to actually do right now.
0:08:25 So like you would have, you know,
0:08:27 had to pay for the generation of the video,
0:08:29 I guess is the one challenge you probably would have
0:08:30 right now.
0:08:32 But so that might make it where like,
0:08:33 it’s more likely to happen next year
0:08:34 when costs come down or something.
0:08:36 But it does feel to me like, okay,
0:08:38 we haven’t had a big social app in a while.
0:08:39 We had that kind of like,
0:08:40 what was that one called, Gas or whatever,
0:08:43 which was like this guy kind of like generated
0:08:45 this out of nowhere and then sold it to Reddit,
0:08:46 I think really fast.
0:08:47 – I don’t remember that one.
0:08:49 – Anyway, it was like half a joke.
0:08:50 – Yeah.
0:08:51 – And then he sold it for like a ton of money.
0:08:53 Like, I don’t know, like over 50 million or something.
0:08:55 And apparently like,
0:08:57 I think he told Shaan Puri from My First Million
0:08:58 and a few others like that he was gonna do it
0:09:00 before he did it and he just did it.
0:09:01 But there hasn’t been any other big social apps,
0:09:03 you know, since then that have taken off.
0:09:04 And it feels like, okay,
0:09:06 so AI video probably is going to enable
0:09:07 something really cool there.
0:09:08 – Yeah.
0:09:10 I mean, right now, like, Airchat is sort of a social app.
0:09:11 I mean, it came out like a year ago,
0:09:12 but they rebuilt it.
0:09:15 Now it’s sort of re-emerging.
0:09:16 I think that’s mostly emerging
0:09:18 just because it’s got Naval’s name on it
0:09:20 and people flocked to Naval.
0:09:22 – Yeah, Naval is saying it’s because of AI partially.
0:09:24 He is saying because he believes,
0:09:25 which is funny.
0:09:26 He used to say that text was like the best.
0:09:28 And then now he is switching to saying like,
0:09:30 yeah, it’s all gonna be audio and all,
0:09:31 you know, it’s all gonna be audio.
0:09:33 And that’s why I’m doing Airchat.
0:09:35 And apparently he is saying that one of the reasons
0:09:38 is he’s now kind of reevaluating things
0:09:40 because of AI and realizing, yeah,
0:09:41 with all these new AI tools,
0:09:43 you’re gonna be talking to your AI
0:09:44 and then you’re gonna be talking to people
0:09:45 that you wanna, you know, real people
0:09:46 you wanna talk to as well.
0:09:49 And so like voice is gonna be the main medium there.
0:09:51 – Yeah, yeah, I know we’re sort of
0:09:52 on a little tangent right now,
0:09:55 but when it comes to Airchat,
0:09:58 the thing that I like about it is that you can’t type.
0:10:00 You can’t open it up and type a response to somebody.
0:10:01 You have to speak.
0:10:03 And the thing that makes that cool
0:10:05 is it sort of solves the problems
0:10:08 that X has right now with a lot of bots, right?
0:10:10 Right now you can’t really bot it
0:10:11 because you’ve got to talk to it.
0:10:15 Unless you’re using a really, really sophisticated
0:10:17 sort of like ElevenLabs combo
0:10:20 where it’s spitting out real voices and like,
0:10:23 I don’t think anybody’s that advanced with it yet.
0:10:25 But you also don’t get like all the trolling
0:10:28 and all the negativity because when somebody
0:10:30 actually speaks to you face to face,
0:10:32 they’re not gonna be as negative
0:10:34 and as much of a dick basically
0:10:36 as they might be on something like X.
0:10:38 – Did you ever use Clubhouse?
0:12:40 Like back during COVID? – A little bit, yeah.
0:10:41 – Yeah, yeah.
0:10:42 So I mean, Clubhouse was like blowing up,
0:10:43 especially in Silicon Valley.
0:10:47 Like everybody was on it and it started off great
0:10:49 and then it went downhill.
0:10:52 And you know, it’s like,
0:10:53 ’cause it’s really hard to moderate these things.
0:10:54 Like yeah, at first everybody’s nice,
0:10:57 but then you got people talking about political things
0:11:00 or whatever and like people from the opposite side join
0:11:03 and like it gets really hateful and it just, you know.
0:11:04 And then those kind of channels became
0:11:06 the most popular channels
0:11:08 because there’s more engagement there.
0:11:10 I do wonder what Airchat’s doing
0:11:13 to kind of make sure they don’t have those same problems.
0:11:14 – Yeah, it’ll be interesting.
0:11:16 I’m new, I’ve only been on it for, you know,
0:11:18 48 hours as of this recording.
0:11:22 So, but anyway, we can move off that tangent for right now.
0:11:25 So let’s, the next question,
0:11:27 this one comes from craze in the dark
0:11:29 who’s actually somebody that works for me.
0:11:31 His name’s John, he’s my creative director,
0:11:32 but he asked this question.
0:11:35 He said, “How can governments and corporations
0:11:39 “adjust to potential massive AI job replacement?
0:11:41 “Will our economic systems change forever?”
0:11:44 You know, real easy softball question.
0:11:45 – Okay, next question.
0:11:48 – I do have like really mixed feelings here
0:11:49 ’cause like, you know, in terms of,
0:11:52 will it change our economic systems forever?
0:11:55 I think that’s so hard to predict.
0:11:57 Like anybody who says that they have predicted it,
0:11:58 like that they know what’s gonna happen there
0:12:01 is lying or, like, you know,
0:12:03 deluding themselves, because AI introduces
0:12:06 so many new variables into the world.
0:12:09 It’s so hard to know, like, okay, GPT-5 is gonna come out.
0:12:12 Is it 10% better, 50%?
0:12:13 What does that mean?
0:12:14 What does that mean for the economy?
0:12:17 What does that mean for how people work?
0:12:20 Will these systems stop improving as fast as they are now?
0:12:21 Like in three years,
0:12:23 ’cause we hit some kind of limitations
0:12:27 or will they just start, you know, growing exponentially?
0:12:30 And so it’s really hard to know, but, you know,
0:12:33 and in the future, if AI can do most work,
0:12:35 that definitely opens huge questions of like,
0:12:37 what’s the purpose of the government?
0:12:38 – Yeah.
0:12:41 – Like, what happens when AI gets incredibly smart
0:12:43 and starts telling us that our government’s messed up?
0:12:45 What do we do then?
0:12:48 – We elect an AI to run our government, obviously.
0:12:49 – Yeah, yeah, do we ask the AI
0:12:51 how we should be structuring our government based on,
0:12:55 you know, and how do politicians respond to that?
0:12:57 Do they decide, okay, we have to ban AI now?
0:12:59 That’s probably what would happen.
0:13:01 So I think no one knows, like, you know,
0:13:03 ’cause it’s like, right now, you know,
0:13:05 capitalism is, in my opinion, the best system we have.
0:13:08 It’s definitely not perfect, like everyone knows that.
0:13:10 And so in the future, when AI does the work,
0:13:12 people are gonna have to find meaning in other ways
0:13:15 and also how does money work then?
0:13:17 I think Mark Andreessen talked about this recently
0:13:20 where he believes that AI will dramatically bring down
0:13:22 the cost of many things, right?
0:13:24 And then there’ll be certain niche products
0:13:26 or niche or maybe like real life experiences
0:13:29 that still cost a lot of money, right?
0:13:31 Like going somewhere and having some amazing experience
0:13:33 or having a human create something, that could still
0:13:34 be expensive.
0:13:36 – I think that’s giving companies too much credit.
0:13:38 I think it’s giving companies too much credit
0:13:40 ’cause what we’ve seen so far, right, like,
0:13:42 there’s so many companies that are built on the back
0:13:47 of GPT-3, GPT-4, and over the last several months,
0:13:49 so many of these APIs have lowered their costs,
0:13:51 lowered their costs, lowered their costs.
0:13:54 Have the companies that we’re paying our monthly fees to
0:13:57 lower their costs or just pocketed extra profits?
0:14:00 – Yeah, well, I mean, so it depends on how good AI gets.
0:14:02 So like, that’s when there’s like a handful of competitors,
0:14:05 but if you have like a thousand competitors
0:14:06 and they’re all competing on price,
0:14:09 because AI has made things so much easier to build,
0:14:12 you know, a lot of prices should go down over time.
0:14:13 But yeah, you’re right.
0:14:15 I mean, iPhone’s still incredibly expensive.
0:14:18 Other things that we buy are still incredibly expensive.
0:14:19 So you’re correct there.
0:14:22 I’m not sure, you know, but also at the same time,
0:14:25 I don’t really, you know, like universal basic income,
0:14:27 which I believe Sam Altman originally was one of the people
0:14:28 who kind of like didn’t experiment with that,
0:14:29 where they gave people money.
0:14:31 I believe they gave people money in Oakland.
0:14:32 They ran that experiment.
0:14:33 And I can’t remember the results,
0:14:35 but I think it was kind of mixed where it’s like,
0:14:37 yeah, some people just end up kind of just like
0:14:39 playing video games, you know?
0:14:40 It’s like, okay, is that what you want in your society?
0:14:41 Like, I don’t know.
0:14:45 Like maybe there’s, you know, maybe some,
0:14:46 you know, portion of people, that is what they do.
0:14:48 And then they go off and do some productive things,
0:14:49 hopefully.
0:14:51 – I think everybody worries about the WALL-E scenario,
0:14:53 right, where everybody’s just kind of
0:14:54 sitting in their chairs, floating around,
0:14:56 getting fat, drinking slurpees,
0:14:58 and watching movies or playing video games all day.
0:15:00 – Yeah, but all, you know, recently there’s been this big,
0:15:03 you know, uptick in like people are getting healthy,
0:15:05 at least in like Silicon Valley circles and stuff.
0:15:07 And even me, I lost a bunch of weight.
0:15:10 I’ve been using AI even to give me advice on like,
0:15:12 how to go to the gym and like how to work out
0:15:13 and gain muscle and all this stuff.
0:15:15 And it’s, people were like, yeah,
0:15:16 we thought it was gonna be like WALL-E,
0:15:17 everyone’s super fat.
0:15:19 It’s gonna be like WALL-E, everyone’s super jacked.
0:15:20 – Yeah.
0:15:21 (laughing)
0:15:23 – Like, oh, it’s free time and AI’s helping them,
0:15:24 teaching them how to be healthy.
0:15:26 We have all these new drugs that make you healthier and stuff.
0:15:30 And it might be like that, like WALL-E, but we’re all jacked.
0:15:33 So yeah, I don’t know, but my gut feeling is
0:15:37 that there will always be like the capitalism will just
0:15:41 evolve, like it’s not gonna disappear anytime soon.
0:15:43 In a thousand years or, you know, whatever who knows,
0:15:47 maybe we’re in Star Trek land, but no time that soon.
0:15:48 – Yeah.
0:15:50 (upbeat music)
0:15:51 – We’ll be right back.
0:15:53 But first, I wanna tell you about another great podcast
0:15:54 you’re gonna wanna listen to.
0:15:58 It’s called Science of Scaling, hosted by Mark Roberge,
0:16:01 and it’s brought to you by the HubSpot Podcast Network,
0:16:04 the audio destination for business professionals.
0:16:06 Each week, host Mark Roberge,
0:16:09 founding chief revenue officer at HubSpot,
0:16:11 senior lecturer at Harvard Business School,
0:16:13 and co-founder of Stage Two Capital,
0:16:16 sits down with the most successful sales leaders in tech
0:16:19 to learn the secrets, strategies, and tactics
0:16:21 to scaling your company’s growth.
0:16:23 He recently did a great episode called,
0:16:26 How Do You Solve for a Siloed Marketing in Sales?
0:16:28 And I personally learned a lot from it.
0:16:30 You’re gonna wanna check out the podcast,
0:16:34 listen to Science of Scaling wherever you get your podcasts.
0:16:37 (upbeat music)
0:16:39 – I don’t know, I feel like a lot of people
0:16:43 underestimate humans, you know,
0:16:44 especially when we’re talking about AI.
0:16:47 I feel like humans will figure out new ways
0:16:49 to add value to the world.
0:16:53 Right now, one of the ways humans add value to the world
0:16:56 is doing the actual construction, building the stuff,
0:17:00 doing the actual work that needs to get done.
0:17:03 But I do think in a sort of post-AI world,
0:17:07 maybe call it a post-AGI world where robots and AI
0:17:09 can handle a lot of this stuff for us,
0:17:12 I still think humans are going to figure out
0:17:14 new ways to add value,
0:17:16 we just don’t know what that looks like yet.
0:17:18 Also, questions like this about how can governments
0:17:19 and corporations adjust.
0:17:23 My thoughts on that are not necessarily
0:17:25 that corporations and governments need to adjust,
0:17:27 it’s that we need to adjust.
0:17:29 The people, the general population,
0:17:32 the public should be adjusting.
0:17:35 You know, what we’re seeing right now,
0:17:37 I think is one of the greatest opportunities
0:17:41 in anybody’s lifetime to quickly build and iterate
0:17:44 and optimize and create something yourself
0:17:45 that adds value to the world.
0:17:48 So maybe you’re going to lose your job at Walmart
0:17:53 as a checker because, you know, you can just walk out now,
0:17:56 but you also just have the opportunity to use AI
0:17:58 to help you create a business that can generate value
0:18:00 and income for you.
0:18:04 So, you know, we’re in this very empowering point in time
0:18:07 where I don’t think we should be going,
0:18:10 what can governments and corporations do to protect us
0:18:13 from AI and we should be thinking,
0:18:16 what can I do with AI so that I’m not that reliant
0:18:17 on corporations and government?
0:18:21 I think we need to have a sort of paradigm shift.
0:18:24 We need to think differently, right?
0:18:26 You know, not trying to quote Apple here,
0:18:28 but we do need to think differently
0:18:31 in that we need to not look at corporations
0:18:34 and government to solve these problems for us.
0:18:36 We need to look at how can we be empowered
0:18:38 by this new technology so that we aren’t reliant
0:18:40 on government and corporations.
0:18:40 – Yeah, I agree.
0:18:44 Everyone has a free assistant, you know, in their pocket now,
0:18:46 you know, they used to have a computer,
0:18:47 now they have an actual assistant
0:18:50 and that assistant’s gonna get more and more intelligent.
0:18:53 So, yeah, that’s, and that’s probably how you’ll actually
0:18:55 see changes in government and corporations
0:18:57 is like from the bottom up, you know,
0:18:58 like of like people learning this stuff
0:19:01 and then changing the government.
0:19:03 And I’m pretty encouraged there.
0:19:04 Like I think AI, like the people who adopt AI,
0:19:07 they’re gonna become enhanced in a way
0:19:10 and they’re gonna have an outsized impact on the world,
0:19:12 which should give them a better chance
0:19:13 of actually influencing the government
0:19:15 or becoming part of the government,
0:19:16 helping restructure things.
0:19:18 – I mean, the question of will our economic systems
0:19:20 change forever, I think the answer is yes.
0:19:21 – Yeah.
0:19:23 – How it’s gonna change, I don’t think we know yet.
0:19:25 I don’t think anybody is gonna be really good
0:19:28 at predicting that in this moment in time right now.
0:19:30 But definitely, I think the economic system
0:19:32 is going to change.
0:19:34 I mean, we’re already seeing it change, right?
0:19:37 You know, people are trying to figure out ways
0:19:39 to bypass the existing financial systems
0:19:42 and banking systems through crypto and things like that.
0:19:44 It’s going to continue to evolve that way,
0:19:47 no matter how much people sort of fight that stuff.
0:19:49 I do think it’s going to kind of, like, evolve
0:19:54 towards that more decentralized crypto-centric finance
0:19:56 in the future.
0:19:57 But yeah, definitely the economic system
0:19:58 is gonna change forever.
0:20:02 I just, I can’t make a prediction on how yet.
0:20:03 – Yeah.
0:20:07 Next question here is from Universal AI Podcast.
0:20:09 Their question is, do you think the open source community
0:20:11 can band together and create an AGI
0:20:13 that can compete with the big corporations?
0:20:16 So really, I think the main question here is,
0:20:20 can open source build AI models that can compete
0:20:22 with an OpenAI, with a Microsoft, with a Google,
0:20:25 with an Anthropic, with companies that have millions,
0:20:27 potentially billions of dollars backing them
0:20:29 with compute power?
0:20:32 – Yeah, I think we’ve talked about this a little bit before,
0:20:33 not sure if we put it out there,
0:20:37 but I personally really hope so.
0:20:40 Like really, really hope so.
0:20:41 But I am skeptical.
0:20:46 I do feel that OpenAI is further ahead than people realize.
0:20:49 I think OpenAI is probably about one to two years ahead.
0:20:52 And when GPT-5 comes out,
0:20:53 I think that’s going to become apparent.
0:20:55 Maybe I’m wrong, delusional,
0:20:56 but that’s my current belief.
0:20:58 And it feels like, yeah,
0:21:01 open source is catching up to GPT-4,
0:21:03 but if GPT-5 is two years ahead of that,
0:21:05 you know, it’s like, well,
0:21:06 then maybe they’ll always be in the lead.
0:21:10 And then at some point when AI starts self-improving,
0:21:11 let’s say by GPT-6,
0:21:13 that it’s actually kind of improving itself
0:21:15 a little bit too by itself,
0:21:19 you could get in this exponential growth of the AI,
0:21:21 where then whoever gets there first,
0:21:24 or the first two or three companies that get there first,
0:21:26 it gets better so fast that, yeah, sure,
0:21:30 there’ll be cool open source models for individuals,
0:21:33 but maybe the most powerful stuff will always be
0:21:35 in like two or three companies’ hands, unfortunately.
0:21:36 And then there’s the question of like, yeah,
0:21:38 who gets to use it?
0:21:40 Do the government say like, holy crap,
0:21:42 this is so powerful, it changes the entire world?
0:21:44 And yeah, we have to like really, you know,
0:21:46 treat it like nuclear bombs or something like that.
0:21:48 And I really hope that’s not what happens.
0:21:50 – Yeah, so when it comes to open source,
0:21:53 obviously one of the biggest bottlenecks is compute, right?
0:21:56 So like, you look at companies like Google,
0:21:59 like Microsoft, like Anthropic, like Mistral,
0:22:00 some of these companies that are building
0:22:01 these large language models,
0:22:03 they have millions and in some of these scenarios,
0:22:05 billions of dollars of investment,
0:22:09 which they could use on GPUs to train these models.
0:22:11 Even the open source community,
0:22:13 even all of the large language models
0:22:16 that are out there in the open source world right now,
0:22:20 probably cost millions of dollars to train, right?
0:22:22 So you still need one of these big companies,
0:22:24 like a Meta, like a Mistral,
0:22:27 that has all of this financial availability
0:22:29 to train these models,
0:22:32 even for the open source to be successful, right?
0:22:34 So I don’t know if open source,
0:22:36 at least not in the near term,
0:22:38 I don’t know if it can be successful
0:22:41 without the big corporations deciding to get involved
0:22:42 and help train these models,
0:22:45 because the open source world of LLMs right now,
0:22:46 probably wouldn’t even exist
0:22:48 if it wasn’t for Meta dropping Llama 2,
0:22:51 which everybody’s sort of kind of fine-tuned
0:22:53 and built off the back of that, right?
0:22:57 So it’s like, I don’t know, at the end of the day,
0:23:00 open source is amazing and we want it to grow.
0:23:02 We want more people tackling problems
0:23:04 and more people trying to figure out
0:23:09 how to make these models a better thing
0:23:12 than fewer people working on these models.
0:23:13 But at the same time,
0:23:16 you do need millions of dollars
0:23:17 to train one of these models right now,
0:23:19 even if it is open source.
0:23:20 – Yeah, in the future,
0:23:24 like when we’re talking about GPT-5 or GPT-6 level models,
0:23:25 you’re probably talking about way more than that.
0:23:26 I mean, I think the other day,
0:23:29 Google came out and said that over the next,
0:23:31 I don’t know if they said next five to 10 years,
0:23:33 that they’ll probably spend like,
0:23:37 end up spending like $100 billion on AI development.
0:23:39 – And that news came right after Microsoft
0:23:41 said that them and OpenAI
0:23:44 are partnering on a $100 billion data center, right?
0:23:45 – Yeah.
0:23:47 – So basically, OpenAI and Microsoft said,
0:23:50 we’re gonna partner on a $100 billion data center
0:23:53 filled with GPUs so we can train better and better models.
0:23:55 Recently, the CEO of DeepMind,
0:23:58 Demis Hassabis, came out and said,
0:24:01 in response to Microsoft building
0:24:02 this $100 billion data center,
0:24:04 that over the next five to 10 years,
0:24:07 Google’s gonna spend at least that on their data centers.
0:24:08 So that was like him sort of responding
0:24:10 to the fact that Microsoft and OpenAI
0:24:13 were building this $100 billion data center.
0:24:17 But Meta said, do you remember the exact number Meta said?
0:24:21 I think they said they have something like 600,000 H100,
0:24:22 something insane like that.
0:24:23 – Yeah, something, I think a chart came out,
0:24:25 showing like they have the most or something.
0:24:30 – Yeah, so Meta has 350,000 H100s right now.
0:24:31 – Oh my God.
0:24:32 (laughing)
0:24:36 – Yeah, so, but Mark Zuckerberg did make a comment
0:24:39 recently saying they have the equivalent
0:24:43 of 600,000 H100s, whatever that means.
0:24:44 – Yeah.
0:24:48 – So, I don’t really feel like an underground dude
0:24:50 in his basement is going to train a model
0:24:53 that’s gonna compete with a GPT-4,
0:24:57 they just don’t have the compute availability to do that.
0:24:59 At least not at this point in time.
0:25:00 – Yeah.
0:25:03 – This question comes from aileaksandnews.
0:25:06 What is one thing you’re confident we won’t see
0:25:08 in the next 12 months?
0:25:09 – I don’t know if I could be confident
0:25:11 about anything in the next 12 months.
0:25:12 I mean, that’s a crazy thing.
0:25:15 It’s like, it’s really hard to know where,
0:25:18 you know, OpenAI is private,
0:25:19 so who knows where their model is at.
0:25:24 I don’t expect to see AGI in the next 12 months
0:25:26 or, you know, artificial super-intelligence
0:25:28 or anything like that.
0:25:29 And partially because I think we’re probably,
0:25:32 we need, you know, we’re probably limited
0:25:37 by the current compute, but that’s about it.
0:25:39 – Yeah, I mean, I have the same thoughts.
0:25:41 Like, almost every time I’ve tried to predict
0:25:44 something that won’t happen, I’ve been wrong, right?
0:25:46 – Yeah, look at AI video, right?
0:25:47 – Yeah, I’ve made predictions
0:25:50 about how far AI video is gonna come, how fast.
0:25:51 I was totally wrong on my prediction,
0:25:53 and what I thought wasn’t coming for years
0:25:56 came months later, right?
0:25:58 – I was very optimistic about AI video.
0:26:01 Like, I was putting out AI video threads for a long time,
0:26:02 talking about how it was, you know,
0:26:05 gonna really change Hollywood and whatnot.
0:26:06 But yeah, it’s hard to know.
0:26:10 Like, yeah, AI video, it could have taken years,
0:26:12 but now it looks like it’s actually going to be amazing
0:26:13 in the next six months.
0:26:14 – Yeah, yeah, yeah.
0:26:17 – So there’s not much that I would be willing
0:26:19 to like confidently predict in 12 months.
0:26:20 – No, I’m with you.
0:26:22 I think AGI and ASI are probably the two things
0:26:25 I’m fairly confident we’re not gonna see within 12 months.
0:26:29 Definitely not ASI, but most likely not AGI either.
0:26:31 – Yeah, I’m fairly confident that AI
0:26:33 will not replace all jobs in 12 months.
0:26:35 They will not be that good.
0:26:38 – We won’t have Skynet taking over the world
0:26:40 within 12 months.
0:26:41 – Yeah, probably not.
0:26:43 – Probably, I like how you said probably not.
0:26:45 I’m not definitely not.
0:26:47 – Yeah, probably not, yeah, probably not.
0:26:49 I think I saw something the other day where
0:26:52 one of the AI systems in China
0:26:54 used by the government is actually called Skynet.
0:26:57 – Yeah, a few people have named their company Skynet.
0:26:59 I’ve seen a couple different Skynet companies
0:27:04 and I’m like, you do realize Skynet was the bad guy, right?
0:27:07 – Yeah, I think they thought it was humorous or something,
0:27:09 but yeah, it’s literally China’s AI system
0:27:11 that monitors the public.
0:27:15 – Yeah, all right, so this one comes from Akshay Lazarus,
0:27:18 and I’m sorry if I butchered that name.
0:27:20 But he says, “I’d love to hear you discuss
0:27:21 “the future of tech.
0:27:24 “For example, will UI switch from GUIs,
0:27:27 “graphical user interfaces, to voice?
0:27:29 “Do we see there being a consolidation
0:27:32 “of all application tech by cloud-based hyperscalers?
0:27:34 “What is the role of startups as data resides
0:27:36 “with these hyperscalers?”
0:27:38 Well, let’s start with the first part of the question.
0:27:40 Like, what do you think the sort of future
0:27:42 of these user interfaces are?
0:27:44 Like less graphical user interface, more voice.
0:27:46 What are your thoughts there?
0:27:48 – I think both.
0:27:49 I mean, I think you’ll have both.
0:27:53 I mean, I think you’ll have a minority report kind of thing
0:27:55 where you’ve got like these visual systems
0:27:58 that you can interact with and they get really intelligent.
0:28:00 And yeah, of course, you’ll be able to also use voice
0:28:02 to interact with it as well, kind of like her
0:28:04 and other movies, right?
0:28:05 – Well, I mean, we’re already kind of doing that, right?
0:28:08 Like that was sort of the point of Siri, right?
0:28:11 That was sort of the point of Amazon Alexa.
0:28:12 So we already kind of have that.
0:28:14 They’re just kind of dumb AIs right now.
0:28:17 They’re not very great AIs, but we’re already kind of,
0:28:19 we’ve already had that for a while now.
0:28:21 So I’m going to speak for myself here.
0:28:23 When it comes to like speaking out loud
0:28:25 to some of these apps,
0:28:27 I feel very awkward doing that in public, right?
0:28:29 Like I’ve got these Meta Ray-Ban sunglasses
0:28:32 that have Llama 2 built into them.
0:28:34 I feel awkward as hell in public
0:28:37 asking questions to my sunglasses, right?
0:28:39 ‘Cause you go, “Hey, Meta,” and then you add,
0:28:41 and that’s like saying, “Hey, Siri,” right?
0:28:42 That’s what starts the prompt
0:28:44 so I can start talking to these sunglasses.
0:28:46 I feel ridiculous in public saying,
0:28:49 “Hey, Meta,” and then talking to sunglasses.
0:28:51 I feel like I’d feel the same way with an AI pin, right?
0:28:55 The humane pin pressing a button and then speaking out loud.
0:28:57 Like if I’m sitting in a coffee shop or a library
0:29:00 or somewhere that tends to be a more quiet place,
0:29:04 I’m definitely not using the voice interface, right?
0:29:05 But even just being out in public,
0:29:07 like if I’m walking through my supermarket
0:29:09 and there’s other people around,
0:29:13 I feel like a crazy person talking to my gadgets, you know?
0:29:15 – Yeah, I mean, that might be an age thing too, right?
0:29:16 – Could be.
0:29:18 – I think that young people may, you know,
0:29:19 AI gets better and better,
0:29:20 they may just be totally used to it.
0:29:22 Like of course I’m talking to it.
0:29:25 Why would I, you know, it’s like my son with like Siri and stuff.
0:29:28 He doesn’t understand the world without that, right?
0:29:29 – That’s true, yeah.
0:29:31 – And he also just thinks Siri’s so dumb.
0:29:32 As soon as it gets better,
0:29:34 I think he’s gonna love interacting with it
0:29:37 however he can, including being outside.
0:29:39 – Yeah, but yeah, I also would feel very weird
0:29:42 like talking to, you know, AI outside,
0:29:43 especially here in Japan.
0:29:47 People would be like, he’s a psycho, like what is this guy doing?
0:29:50 – I mean, if really the question is essentially like,
0:29:52 yeah, we’ve got huge incumbents
0:29:53 that are sort of running everything right now,
0:29:54 how does anybody compete?
0:29:56 – I personally think it’s a big open question.
0:30:00 Like, yeah, yeah, will, you know,
0:30:05 when OpenAI has AGI, you know, what do startups do?
0:30:07 But I think they’ll always be like,
0:30:10 I don’t think, you know,
0:30:13 OpenAI is gonna want to build every single use case
0:30:14 and every single kind of product.
0:30:16 They’re wanting to build the technology
0:30:18 and have other people build on top of it.
0:30:20 And I think if they tried to do everything,
0:30:21 they’re gonna run into like really major
0:30:24 like regulation problems, if I had to guess.
0:30:27 So I think, you know, I don’t like regulation,
0:30:28 but that might be one area where like,
0:30:30 regulation actually comes to save the day
0:30:32 because I do think the big tech companies
0:30:35 are not going to try to do everything.
0:30:37 They’re going to try to build the best AI technology.
0:30:38 And then on top of that,
0:30:40 there’ll be so many things for startups to build.
0:30:42 – Yeah, and really if the question is like,
0:30:43 how do they compete?
0:30:46 Honestly, our Greg episode, our Greg Isenberg episode
0:30:49 is literally that entire topic of like,
0:30:52 when do you have these big incumbent companies,
0:30:54 the Googles, Microsoft’s, Amazon’s,
0:30:55 all these companies out there,
0:30:57 how do these little companies compete?
0:30:59 Well, that’s exactly what we talked about with Greg.
0:31:01 And Greg was talking about, well,
0:31:03 you got to create brand, you’ve got to create community.
0:31:05 You’ve got to, you know, what are you going to do
0:31:07 to stand out, right?
0:31:10 I just listened back to that episode when it went live.
0:31:13 And Greg was saying things like, you know,
0:31:15 people choose sides, right?
0:31:17 I’m an Apple guy, I’m a PC guy.
0:31:21 I’m a Nike guy, Adidas guy, Puma guy, whatever, right?
0:31:25 Like people sort of gravitate towards brands
0:31:27 and attach their identity to brands.
0:31:29 And a lot of times they’re the brands
0:31:32 that have good community connected to them, right?
0:31:35 So that is in my opinion, and Greg’s opinion,
0:31:39 and it’s my opinion because I heard Greg give me this opinion,
0:31:43 but you know, that’s how you stand out,
0:31:44 is through brand, through community,
0:31:47 through those things that differentiate you, right?
0:31:51 People don’t necessarily want to work with Google
0:31:53 where they’re just like a number
0:31:56 and a giant database of a million people.
0:32:00 But I use, for instance, Beehive for my AI newsletter, right?
0:32:03 And with Beehive, I’ve talked to the founders.
0:32:04 They actually respond to people on X.
0:32:07 They’re still a smaller, nimble company
0:32:08 with community around them,
0:32:12 where everybody that uses Beehive sort of loves talking
0:32:15 about it and recommends other people to go use Beehive.
0:32:18 And they’ve got that community element, that brand element.
0:32:20 And there’s a narrative that sort of formed
0:32:24 of this like ConvertKit versus Beehive narrative, right?
0:32:27 And so you’ve got that kind of thing going on
0:32:29 where people might like pick their sides.
0:32:31 And the reason they’ll pick a side
0:32:34 is they connect with the community.
0:32:37 They connect with the brand that that company built.
0:32:39 And so I think, you know, as Greg said,
0:32:40 that’s how they stand out.
0:32:43 And, you know, I totally agree with that opinion.
0:32:44 – Yeah, yeah.
0:32:46 And I think, you know, AI will only like make that
0:32:47 even more so, right?
0:32:49 Like where people want more human connection there, right?
0:32:51 Like, yeah, I know the founders are like,
0:32:53 “Oh, the people in the community, I love them.”
0:32:55 And I feel like they actually like listen to what I say.
0:32:58 And this is like one reason to do like Q&A like this, right?
0:33:00 Like is to actually like, yeah, we’re doing the show,
0:33:02 but also, you know, it’s four other people
0:33:04 and we’re, you know, interacting with those people
0:33:06 versus just, oh, it’s just what me and Matt say.
0:33:08 But for startups in general, beyond community,
0:33:13 I mean, I think, you know, with AI
0:33:15 there are gonna be so many opportunities for startups.
0:33:16 And I just don’t believe that, like,
0:33:18 three companies are going to control everything.
0:33:21 And there’s so many, there’s so many problems
0:33:23 to be solved in the world.
0:33:25 And it’s not going to be two or three companies
0:33:26 that solve all of them.
0:33:28 Like there will always be new opportunities.
0:33:30 – I think if it does become three companies
0:33:32 trying to do it all, governments will intervene
0:33:35 and be like, “No, you can’t have this much power.”
0:33:37 I just, I think that’s where it’ll go.
0:33:38 – Yeah, yeah.
0:33:40 Okay, this one comes from my Twitter feed,
0:33:42 comes from Jason Vanish.
0:33:44 Hope I’m saying that correctly.
0:33:46 Jason’s always been really awesome on my Twitter
0:33:50 and doing a lot of good comments on my Twitter.
0:33:53 How much better can LLMs get?
0:33:55 At some point, there aren’t larger
0:33:57 and larger training datasets, are there?
0:34:01 At that point then, do we hit an AI plateau?
0:34:04 Are there different methods that could leapfrog LLMs?
0:34:06 Like the computer industry had computing breakthroughs
0:34:08 from the 70s to the 90s?
0:34:09 I mean, I think with this one
0:34:11 we’re definitely like speculating,
0:34:13 ’cause like neither one of us are like
0:34:15 the smartest AI engineers in the world
0:34:16 or something. Like, I code.
0:34:17 – Yeah, I mean, you can just end that sentence
0:34:19 at “neither of us are really the smartest.”
0:34:22 (laughing)
0:34:23 – Well, yeah.
0:34:24 There was an interview though,
0:34:26 I think with like Sam Altman and Ilya,
0:34:28 maybe, I don’t know, maybe it was like five months ago,
0:34:29 six months ago, something like that,
0:34:34 where they said that with existing data in the world,
0:34:37 with video and audio and other things
0:34:40 that we’re just now starting to train on,
0:34:43 that there’s a pretty clear path to major improvements
0:34:47 for the next three to five years, right?
0:34:47 And so if that’s the case,
0:34:51 We’re probably talking about like GPT-6, GPT-7
0:34:53 before you need any kind of breakthroughs.
0:34:54 – Yeah.
0:34:56 – And if GPT-5 is as good as they’re saying,
0:35:00 I think by GPT-6, 7, we’re talking about like work
0:35:02 looking very, very different.
0:35:03 – Right.
0:35:05 – So I think the next three to five years,
0:35:08 there’s already enough data for the world
0:35:09 to be entirely transformed,
0:35:12 mostly in positive ways.
0:35:14 And then beyond that,
0:35:17 I’m sure we will need more breakthroughs at some point.
0:35:18 And there’s a big open question too,
0:35:20 that a lot of engineers are debating
0:35:22 is like with synthetic data, right?
0:35:24 Like, is synthetic data actually useful?
0:35:26 Like data that the AI helps generate.
0:35:29 You didn’t train on that, that new data.
0:35:31 You know, it’s synthetic data.
0:35:33 And I think it’s not entirely clear yet,
0:35:35 but I could be wrong on that.
0:35:36 – Yeah.
0:35:37 – But if synthetic data works,
0:35:39 well, then yeah, there’s,
0:35:42 we’ll probably never run out of data to be training on.
0:35:43 – Yeah.
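The synthetic-data loop being debated here has a simple shape: the model generates candidate examples, a quality filter gates them, and the survivors are fed back into training. A toy Python sketch of just that shape, using a made-up stand-in "model" (no real LLM or API involved):

```python
import random
from dataclasses import dataclass, field

# Toy stand-in for a model: it "generates" by recombining words it has
# seen, and "trains" by memorizing new examples. Purely illustrative;
# real synthetic-data pipelines use an actual LLM plus far stronger
# quality and diversity filters.
@dataclass
class ToyModel:
    corpus: list = field(default_factory=lambda: ["the cat sat", "a dog ran"])

    def generate(self, rng):
        words = " ".join(self.corpus).split()
        return " ".join(rng.choice(words) for _ in range(3))

    def train(self, examples):
        self.corpus.extend(examples)

def synthetic_round(model, rng, n=10, min_unique=3):
    # 1. The model writes candidate examples.
    candidates = [model.generate(rng) for _ in range(n)]
    # 2. A filter gates quality (here: require 3 distinct words).
    kept = [c for c in candidates if len(set(c.split())) >= min_unique]
    # 3. Surviving synthetic data is fed back into training.
    model.train(kept)
    return kept

rng = random.Random(42)
model = ToyModel()
kept = synthetic_round(model, rng)
```

The open question the hosts raise lives almost entirely in step 2: whether the filter can be made good enough that training on the survivors improves the model instead of degrading it.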
0:35:44 And I mean, everything we’re talking about at this point
0:35:46 is very sort of theoretical, but-
0:35:47 – Yeah.
0:35:48 – You know, at some point,
0:35:50 I don’t think humans need to continue
0:35:53 to improve on training the large language models.
0:35:56 Like once we hit a point of, you know, AGI,
0:35:59 we get to a point where the models
0:36:01 figure out what they need to train on next
0:36:03 and figure out how to self-improve
0:36:05 and sort of get better and better.
0:36:08 I think there is going to be sort of a model
0:36:10 that’s kind of like the last model
0:36:11 that humans needed to train.
0:36:13 And then the models beyond that
0:36:15 become the models where the AI
0:36:17 is sort of a self-improving model,
0:36:19 where it just, it gets better and better
0:36:20 and better on its own, right?
0:36:22 It does its own,
0:36:25 it’s reinforcement learning through AI feedback
0:36:26 instead of reinforcement learning
0:36:29 through human feedback at some point, right?
0:36:31 You know, to some degree, we have that right now, right?
0:36:33 Like that’s kind of how GANs,
0:36:35 Generative Adversarial Networks work,
0:36:37 where you have an AI that’s a discriminator
0:36:39 and then you have the AI that’s the generator
0:36:41 and the AI that’s the generator
0:36:43 tries to generate something that fools the AI
0:36:46 that’s the discriminator and they go back and forth
0:36:48 until the discriminator is actually fooled
0:36:50 by what the generator made.
0:36:52 I think we’re going to see that kind of thing
0:36:54 get more and more prominent with large language models
0:36:56 where it gets to a point where
0:36:57 it’s giving itself its own feedback
0:36:59 and getting better and better and better.
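The generator-discriminator back-and-forth Matt describes can be sketched in a few lines of numpy. This is a toy illustration of the adversarial loop only, not a practical GAN: the "discriminator" scores samples by distance to its running estimate of the real data, and the "generator" nudges a single parameter to raise that score for its fakes. All names and numbers here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator is trying to imitate: samples around 5.0.
real_data = rng.normal(loc=5.0, scale=0.1, size=1000)

class Discriminator:
    """Scores how 'real' a sample looks: higher means closer to its
    current estimate of where real data lives."""
    def __init__(self):
        self.estimate = 0.0

    def score(self, x):
        return -(x - self.estimate) ** 2  # negative squared distance

    def train(self, real_batch):
        # Pull the estimate toward the real samples it is shown.
        self.estimate += 0.5 * (real_batch.mean() - self.estimate)

class Generator:
    """Emits samples around one learnable parameter and adjusts it
    to raise the discriminator's score for its fakes."""
    def __init__(self):
        self.param = 0.0

    def sample(self, n):
        return self.param + rng.normal(scale=0.1, size=n)

    def train(self, disc):
        eps = 0.01
        fakes = self.sample(100)
        # Finite-difference estimate of d(score)/d(param).
        grad = (disc.score(fakes + eps).mean()
                - disc.score(fakes - eps).mean()) / (2 * eps)
        self.param += 0.1 * grad

disc, gen = Discriminator(), Generator()
for _ in range(200):
    disc.train(rng.choice(real_data, size=64))  # learn what "real" looks like
    gen.train(disc)                             # learn to fool the critic

# After the back-and-forth, the generator's output is centered near 5.0.
```

In a real GAN both players are neural networks trained by gradient descent on a minimax loss; the toy above only preserves the adversarial structure the hosts are describing.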
0:37:01 But then I also think there’s a phase
0:37:03 after large language models where
0:37:06 we start talking more about the embodied AI, right?
0:37:09 Like we were looking at the Boston Dynamics robot.
0:37:11 We’ve talked about the Figure One robot.
0:37:12 We’ve got Tesla Optimus.
0:37:15 All of these are robots that, you know,
0:37:17 they’re gonna have AI injected into them.
0:37:21 And when we start putting AI into some of these robots,
0:37:24 some of these machines, well, now they have
0:37:25 more stuff they need to learn on.
0:37:27 They need to learn on how to interact
0:37:28 with the physical world.
0:37:31 And when my arm moves like this, what’s the result?
0:37:32 When my arm moves like this?
0:37:34 And now it’s starting to get trained more
0:37:38 on the domain knowledge of just how to operate this robot.
0:37:41 So I think there’s gonna be this shift
0:37:45 to now we need to train these LLMs
0:37:48 to work with the specific use case like embodied robots.
0:37:51 Like, you know, going into drones
0:37:54 or whatever we use it on next.
0:37:56 And they need to sort of train on the domain knowledge
0:38:00 to operate the vehicle that’s now embodying that AI,
0:38:01 if that makes sense.
0:38:02 – Yeah, totally.
0:38:04 I think a big open question too is like,
0:38:07 can AI start solving real world problems too?
0:38:09 Like can it help cure cancer?
0:38:12 Can it help us figure out how to make the robots better?
0:38:14 Right, ’cause if that’s the case,
0:38:16 then we’ll probably get to this kind of exponential point
0:37:18 where things will just keep improving.
0:38:21 And then yeah, something that seemed like such a small
0:38:24 little breakthrough, you know, with GPT-1
0:38:27 and with transformers, you know, concept of transformers.
0:38:29 Yeah, that takes us, it could take us all the way.
0:38:30 I think that’s an open question.
0:38:32 We don’t know.
0:38:35 It could be that like just a ton of data is all we needed.
0:38:37 – A ton of data and a ton of GPUs to process it.
0:38:38 – Yeah, yeah.
0:38:40 Also, I think, you know, LLMs are gonna be just like
0:38:42 one part of the equation too, like you said, the robots.
0:38:45 But also my understanding is that, you know,
0:38:47 the rumors that OpenAI has created this thing
0:38:50 called Q*, which is supposed to be some kind of logic engine.
0:38:52 I don’t think any details have been revealed about that.
0:38:54 Is that just like some kind of other LLM?
0:38:56 I don’t know.
0:38:59 But so in theory, you would have like the LLM
0:39:01 then having some kind of like a logic engine
0:39:02 attached on top of it.
0:39:05 So in the future, these systems could be really complicated
0:39:06 for like regular people.
0:39:08 You wouldn’t know when that’s going on.
0:39:09 – Well, if you ask Yann LeCun, right?
0:39:12 He’s the chief AI scientist over at Meta.
0:39:13 Also worked with Geoffrey Hinton,
0:39:16 one of the Godfathers of AI as they call them, right?
0:39:18 He doesn’t believe large language models
0:39:19 will ever achieve AGI.
0:39:22 He thinks that the sort of technology underneath
0:39:24 large language models just will never get
0:39:26 to that point of AGI.
0:39:28 And over at Meta, they’re developing something
0:39:33 they call V-JEPA, which stands for Video Joint
0:39:35 Embedding Predictive Architecture.
0:39:39 But basically it’s a way for like the AI models
0:39:41 to see and understand the world
0:39:43 and sort of train themselves by actually doing
0:39:45 and getting a response.
0:39:48 And he believes this is what will actually lead to AGI
0:39:50 and not necessarily the large language models
0:39:52 that everybody’s familiar with today.
0:39:54 So I don’t know.
0:39:56 There might be a point where large language models,
0:39:59 there is like a point of diminishing returns
0:40:01 where they just don’t get any better.
0:40:03 But I still think there’s a lot of models
0:40:05 that can replace large language models
0:40:09 that continue to improve what the capabilities are.
0:40:10 – Yeah, I don’t know.
0:40:12 He like, I know he’s highly respected
0:40:14 and obviously he knows more about this technology
0:40:15 than I do, but he’s also made some predictions
0:40:18 about, like, GPT-4 and things like that
0:40:19 that didn’t seem that accurate.
0:40:22 Like he was really skeptical on how good GPT-4
0:40:24 was gonna be and stuff like that.
0:40:27 And now like with the rumors about GPT-5,
0:40:28 if they turn out to be true,
0:40:30 well then obviously OpenAI is doing something
0:40:31 that he doesn’t understand.
0:40:32 – Yeah.
0:40:34 – Right, and so we don’t know what that is.
0:40:37 Like, is this Sam Altman bullshit? I don’t think so.
0:40:39 Like, you know, and the other day Sam Altman said,
0:40:41 they’re like, you know, people who are ignoring like
0:40:43 where things are gonna be going with GPT-5,
0:40:44 I think he said something like,
0:40:46 you’re gonna, we’re gonna steamroll you.
0:40:47 – Yeah.
0:40:48 – It’s what he said.
0:40:49 Which is just kind of shocking.
0:40:51 And he delivered this in a kind of, like,
0:40:52 really, like, you know, peaceful, nice way.
0:40:54 But it was like, yeah, we’re gonna steamroll you.
0:40:56 And so I think they have something.
0:40:59 I think it’s gonna be shocking, but who knows?
0:41:02 Who knows what’s gonna be the best way
0:41:03 to do this in the future?
0:41:06 – Yeah, I actually think that government
0:41:09 is going to be a huge bottleneck over time, right?
0:41:11 I think, I don’t wanna get political with this,
0:41:13 but I do think that the government
0:41:16 is going to be a bottleneck to progress.
0:41:18 And the reason I say that is because I was recently
0:41:22 listening to a conversation with Ray Kurzweil
0:41:25 and Jeffrey Hinton, right?
0:41:28 And they were talking about these AI models
0:41:31 being essentially like the best correlation machines
0:41:32 on the planet.
0:41:37 They can find correlations between seemingly unrelated things
0:41:39 that humans could never spot.
0:41:41 And that’s the reason why these large language models
0:41:45 will likely find the cures to various cancers and diseases
0:41:48 and possibly solve climate change and world hunger
0:41:49 and all of this stuff, right?
0:41:50 They’ll find correlations.
0:41:52 – But they’re also very good at finding corruption.
0:41:53 – That’s true.
0:41:56 – Which, I had a viral tweet on this like maybe a year ago
0:41:57 where I talked about that.
0:41:59 And I think it’s gonna be a big thing
0:42:00 where you can actually see how,
0:42:02 where all the money is moving in the government
0:42:04 and the humans cannot process this.
0:42:07 And it’s a very easy way to like be corrupt and hide that
0:42:08 is like through moving money around
0:42:10 in like really sneaky ways.
0:42:13 – Which is another thing blockchain sort of solves as well.
0:42:13 But you know, we don’t do that.
0:42:15 – Yeah, but AI will be able to like look at that and go like,
0:42:17 oh yeah, this person’s doing this.
0:42:18 This is why they’re doing it.
0:42:20 This is why they, you know, sign this into law.
0:42:22 And so yeah, I could definitely see government people
0:42:25 being very like not wanting that to come out.
0:42:27 – Yeah. Well, you know, where I was going
0:42:30 with them, the government being the bottleneck is,
0:42:32 I think these large language models
0:42:35 will likely find cures for cancer,
0:42:39 you know, solutions for climate change, you know,
0:42:41 ways to end hunger in, you know,
0:42:43 parts of the world that need it.
0:42:46 But I think government is gonna get in the way
0:42:50 of making what the AI finds a reality, right?
0:42:52 Like when there’s new drugs that come to market,
0:42:55 how long does it take for the drug to go through
0:42:57 animal trials and then human trials?
0:43:00 And then, you know, all of the steps
0:43:01 before the drug finally gets on the market
0:43:03 for humans to use, right?
0:43:07 I think AI will probably find a lot of solutions
0:43:08 to a lot of problems.
0:43:11 And then the government’s red tape is going to be
0:43:13 what slows down these solutions
0:43:16 actually becoming live to the world.
0:43:18 – Yeah. I mean, I mean, that’s why there’s movements
0:43:19 like e/acc and things like that, right?
0:43:22 It’s, they are really afraid of that.
0:43:24 It’s like, I think people talked about like, you know,
0:43:26 we could have solved a lot of our energy problems
0:43:28 with like nuclear power, but then regulation
0:43:31 like stopped that from happening, right?
0:43:33 And I think France is one of the big success stories
0:43:35 where they adopted nuclear power
0:43:37 and they haven’t had energy problems
0:43:38 like a lot of other countries have.
0:43:41 And so there’s a lot of fear around that.
0:43:43 And, but then the challenge is like, you know,
0:43:46 e/acc does kind of go so extreme that they might, you know,
0:43:47 bring on some of the regulation, right?
0:43:49 ‘Cause like just let’s go as hard as we can
0:43:51 as bad as we can.
0:43:53 And then the government kind of freaks out about that too.
0:43:56 So it’s really hard to know like how to get the government
0:43:58 to embrace this technology.
0:43:59 I’m hoping they will.
0:44:02 I mean, I think it’s like the U.S. winning at the internet.
0:44:05 Like the U.S. needs to win at AI too
0:44:06 in order to kind of set the stage
0:44:09 for the next 100 years of the world, right?
0:44:12 And if we don’t, whoever wins that, China or whoever,
0:44:13 you know, Russia, whoever,
0:44:16 they then get to set the stage for the next, you know,
0:44:18 chapter of humanity.
0:44:20 – It’s the new space race.
0:44:22 – Yeah. I think it’s bigger than that.
0:44:22 – Yeah.
0:44:23 – Yeah, I mean, it is really interesting
0:44:25 to think about it from that perspective too,
0:44:28 because like the U.S. probably wants to be the country
0:44:29 that finds the cure for cancer,
0:44:32 but is the government going to, you know,
0:44:34 is the government gonna put up a bunch of red tape
0:44:35 and slow that down?
0:44:36 I don’t know yet to be seen.
0:44:37 – Yeah.
0:44:41 – I think there’s going to be a sort of necessary overhaul
0:44:46 coming because with this AI world that we’re entering into,
0:44:48 I do feel like we need stuff to happen faster
0:44:52 so that we can let the solutions that these tools find
0:44:54 actually reach the world.
0:44:55 – I think ideally that’s what we do.
0:44:57 Like as these systems become incredibly intelligent,
0:45:00 more, they can process more data than humans can,
0:45:02 we should be asking them like, okay,
0:45:04 how can we reorganize this part of the government
0:45:06 to be more efficient and actually get more things done
0:45:09 and help more people, bring more people out of poverty
0:45:10 and things like this?
0:45:11 That’s what I hope for.
0:45:13 You know, that’s the one reason I wanted to do it,
0:45:14 you know, do a show with you.
0:45:15 – Yeah.
0:45:17 – That’s why I tweet is I’m really hopeful
0:45:19 for what this technology can do for humanity
0:45:22 if we don’t all get in the way, right?
0:45:24 – You know, I think that’s a perfect spot
0:45:25 to wrap this one up.
0:45:28 If anybody is watching on YouTube,
0:45:29 let us know in the comments
0:45:31 if you like this style of episode,
0:45:33 if you want us to do more Q and A
0:45:36 and you like just the sort of us giving thoughts
0:45:37 on your questions,
0:45:38 ’cause if you like this,
0:45:40 we’ll make more episodes like this for you.
0:45:41 If you haven’t already,
0:45:44 make sure you like and subscribe
0:45:46 wherever you’re watching or listening to this.
0:45:47 It really helps us out
0:45:50 to get more listeners and viewers on the show.
0:45:51 So appreciate that.
0:45:53 And thank you so much for tuning in today.
0:45:55 We’ll see you in the next episode.
0:45:57 (upbeat music)
0:46:10 (gentle music)
Episode 6: Will AI change our economic systems forever? Join hosts Matt Wolfe (https://twitter.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) as they delve into these pressing questions.
In this wide-ranging episode, Matt and Nathan answer your thought-provoking questions and preview new AI video tools coming out, explore the revolutionary impact of AI on societal structures, healthcare advancements, and economic systems. They discuss the potential for AI to streamline government efficiency, uncover cures for diseases, and even tackle global challenges such as climate change and hunger. However, the conversation also navigates through the complexities of government regulations, the technological arms race among big corporations, and the societal implications of widespread AI adoption.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) AI podcast hosts discuss audience questions, insights.
- (04:38) Adobe premiere integrating Sora for video improvements.
- (09:01) Voice chat limits bots and negativity online.
- (10:55) AI’s impact on economy and work uncertainty.
- (15:38) Public, not governments or corporations, should adapt.
- (19:19) Open source AI catching up, corporate control.
- (20:10) Big companies’ financial support crucial for open source.
- (25:33) Future tech: UI, voice, cloud, startups role.
- (29:03) Greg Isenberg on how small companies compete.
- (31:05) AI will enhance human connection in startups.
- (34:05) Humans may not need to continue training large language models, as AI could self-improve through reinforcement learning.
- (37:30) Yann LeCun doubts large language models’ potential.
- (42:02) e/acc’s extreme approach may bring regulation.
- (43:44) Encouraging engagement and feedback for future episodes.
—
Mentions:
- Adobe Premiere: https://www.adobe.com/products/premiere.html
- OpenAI: https://www.openai.com/
- ChatGPT: https://chat.openai.com/
- Sam Altman: https://blog.samaltman.com/
- Yann LeCun: http://yann.lecun.com/
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano