AI transcript
0:00:09 For anybody to think that we’ve hit the end of the road with AI is delusional.
0:00:14 I don’t know the right word, but we’re closer to the one yard line than we are to the 99 yard line.
0:00:17 We’re at the beginning of an exponential curve.
0:00:20 We’re not plateauing. We’re literally right here.
0:00:27 We’ve seen the most insane advancements in technology the world has ever seen.
0:00:31 And we’re going to get to witness AGI and most likely ASI in our lifetimes.
0:00:33 Like that to me is mind blowing.
0:00:39 Hey, welcome to the Next Wave podcast. I’m Matt Wolfe.
0:00:42 I’m here with Nathan Lanz.
0:00:45 And today we’re talking about some really important topics.
0:00:47 The world has recently changed.
0:00:50 We just had an election here in the US and Donald Trump was elected.
0:00:56 And we’re going to spend a big chunk of this episode talking about what that actually means for the world of AI.
0:01:04 We’re also going to talk about how there’s been a major shift in Silicon Valley’s optimism towards tech and AI.
0:01:10 And we’re going to give you some of the predictions that we have of where AI is going in 2025.
0:01:11 You’re not going to want to miss this one.
0:01:12 So let’s just jump straight in.
0:01:19 When all your marketing team does is put out fires, they burn out fast.
0:01:25 Sifting through leads, creating content for infinite channels, endlessly searching for disparate performance KPIs.
0:01:26 It all takes a toll.
0:01:30 But with HubSpot, you can stop team burnout in its tracks.
0:01:34 Plus, your team can achieve their best results without breaking a sweat.
0:01:40 With HubSpot’s collection of AI tools, Breeze, you can pinpoint the best leads possible.
0:01:46 Capture prospects’ attention with click-worthy content and access all your company’s data in one place.
0:01:48 No sifting through tabs necessary.
0:01:50 It’s all waiting for your team in HubSpot.
0:01:54 Keep your marketers cool and make your campaign results hotter than ever.
0:01:57 Visit hubspot.com/marketers to learn more.
0:02:04 Let’s just go ahead and get into it.
0:02:08 I know, Nathan, you’ve had a lot on your mind about this stuff.
0:02:10 Where do you think’s a good starting point for this?
0:02:13 Yeah, I mean, maybe give a little bit of a disclaimer first, like, you know,
0:02:15 this is not going to become a political podcast.
0:02:20 And I understand this whole topic is quite controversial because some people love Trump.
0:02:21 Some people hate him.
0:02:25 Some people absolutely hate him. But I think it’s going to be kind of hard to avoid this topic because moving forward,
0:02:29 obviously, Trump and Elon Musk are going to be pivotal in a lot of the changes
0:02:31 that are going to happen with AI and technology.
0:02:35 So it’s going to be impossible not to discuss the stuff that they’re doing.
0:02:39 And something I’ve noticed is, you know, I kind of confided to you recently.
0:02:44 You know, I’m kind of an ex-liberal. I lived in San Francisco for 13 years,
0:02:48 fell out of love with a lot of the left-wing policies because of the stuff I saw there.
0:02:52 And I have noticed that, like, even with all of my left-wing friends in Silicon Valley right now,
0:02:54 there’s a major vibe shift happening.
0:02:57 Like, like even my left-wing friends are like admitting like, oh, yeah,
0:02:59 the mood has dramatically changed.
0:03:03 It went from being very pessimistic and scared of the government to thinking like,
0:03:05 holy crap. It was always a topic in Silicon Valley:
0:03:07 Like, why do we not have any smart people in the government?
0:03:08 Why is that?
0:03:12 Like, why don’t we actually get Silicon Valley like some of the smartest people involved in the government?
0:03:14 It’s like, well, that is now happening here, right?
0:03:20 Or are we sending our dumbest people to the government or is it the smartest people don’t want to work for the government?
0:03:21 Yeah, exactly.
0:03:23 But it’s like, now there’s this moment where it’s like, okay, like Elon Musk.
0:03:26 It’s like, well, even people don’t like his politics.
0:03:28 It’s hard to argue that he’s not intelligent.
0:03:31 Like, you can’t build those kind of companies that he has without being incredibly smart.
0:03:35 And then you’ve got Vivek, who’s also a highly intelligent guy.
0:03:38 And you got JD Vance, who used to be a venture capitalist, right?
0:03:42 So you’ve got people who actually understand technology highly involved in the government now.
0:03:46 And the big thing they just announced is, you know, Doge, which I guess you didn’t know about.
0:03:48 Like, a lot of people don’t realize like it’s not just the meme coin.
0:03:53 It’s like, the Department of Government Efficiency is what it stands for.
0:03:54 And it actually started out as a joke on X.
0:03:59 Like somebody tweeted like, oh, you should create the Department of Government Efficiency and call it Doge.
0:04:01 And it literally started from a joke.
0:04:03 And now it’s a real thing.
0:04:04 And this is actually why a lot of people are excited.
0:04:06 Like, you know, because it’s obvious.
0:04:10 Like, you know, he’s interacted with the government, like realizes, you know, it’s almost like when you go to the hospital,
0:04:14 where like they charge you $1,000 for like a straw and stuff like this, right?
0:04:20 Like the government is full of this kind of stuff where people are just like throwing away money because it’s not theirs.
0:04:21 It’s a taxpayer’s money.
0:04:24 And I think a lot of that on both sides is happening.
0:04:29 So people are excited because the idea is they’re going to look through, you know, how the government’s spending money.
0:04:35 And I predicted on X yesterday that Elon Musk would probably be important for this because people on both sides,
0:04:40 including Republicans, are going to hate this because, you know, they don’t want people to know how much money is wasted.
0:04:44 And it’s going to be important for Elon Musk to be involved because he can make that transparent.
0:04:46 And then it’s going to be hard for them to hide from that, right?
0:04:51 Like if you make it transparent on X, showing where the money is being spent, what can they do about that?
0:04:54 And so Elon Musk announced yesterday like, yeah, that is what he’s doing.
0:04:59 He’s going to make a public leaderboard where he’s going to show where the money is being spent.
0:05:01 And then they’re just going to cut it out.
0:05:06 And then Vivek, he shared something earlier because people are saying like, oh, they’re not going to have any power to do any of this.
0:05:11 It’s all just talk. And Vivek like shared all these Supreme Court cases that seem to say otherwise,
0:05:17 seem to say that there’s going to be a legal precedent that the government agencies have already overstepped their bounds.
0:05:21 There’s Supreme Court rulings that seem like you could cut a lot of the agencies down.
0:05:23 And so I think that’s what he’s going to try to do.
0:05:27 He’s going to try to do what he did with Twitter, where, you know, he cut Twitter down by 80%.
0:05:30 Sure, there were some problems along the way. Of course, there’s problems when you change things like that.
0:05:36 But overall, the company is doing way better now with like 80% fewer people.
0:05:39 And so he’s convinced with the government, you’re going to be able to do the same kind of thing.
0:05:43 The current plan, from my understanding, is that he’s going to give people two-year severance pay.
0:05:50 So they’re talking about possibly reducing the size of the government by like anywhere from 30% to 80%
0:05:51 and giving everyone two-year severance pay.
0:05:56 And then, hey, you should go, you know, work in some new job in AI or tech or whatever, find a new job.
0:06:02 Right. And so instead of spending all that money, if you could actually invest that money into the U.S.
0:06:05 and making sure that the U.S. is number one, that could change everything.
0:06:08 I think that people are not realizing that like all that money is being wasted.
0:06:13 If you reinvested it in new things in America, you could really change the country, you know?
0:06:16 And I shared a tweet on X, or post or whatever they’re calling it.
0:06:19 Maybe like a month ago. It got like 47 million views.
0:06:23 Talking about how I went from Japan to America and all the problems I saw in America, right?
0:06:27 Like, yeah, America is the best country in some ways, but there’s a lot of things that don’t work well.
0:06:32 Like compared to Japan, you go to America and things just don’t work, right?
0:06:35 And it feels like a lot of that is because we waste money on all these stupid things
0:06:38 where we could be investing that money into infrastructure and new technology.
0:06:43 And so that’s why I’m personally excited because I think this could possibly be like a golden age for America
0:06:46 where like we actually start investing in the country again.
0:06:49 And they also proposed building 10 new cities in America.
0:06:54 Since Elon Musk is going to be involved in that, 10 new cities that are going to be obviously infused with like AI.
0:06:58 You’re going to have like robot cars, you’re going to have, you know, probably flying cars.
0:07:01 I don’t know if I actually like that idea of flying cars, honestly.
0:07:06 But you’re going to have like 10 new cities that are going to be built from the ground up with AI and technology in mind.
0:07:10 So just imagine instead of wasting two trillion dollars on stuff that probably doesn’t matter.
0:07:14 It’s like people who are just doing paperwork all day, which actually typically slows down companies.
0:07:20 Instead of that, you put that two trillion a year into building 10 new AI powered cities.
0:07:22 Like America would look dramatically different.
0:07:24 And so I think people who understand that, that’s why there’s a vibe shift.
0:07:28 And I’m happy to see that even like moderate left wing people are like, I don’t like Trump,
0:07:31 but the idea of Doge is amazing, and I’m excited for it.
0:07:33 So yeah, no, that’s, that’s really interesting.
0:07:37 I think you dropped like five or six different things there that are all sort of talking points
0:07:39 that we can go down the rabbit hole on.
0:07:43 But yeah, I mean, I keep on seeing this Doge thing on X and every time I saw it,
0:07:48 like I almost sort of skipped past it because I thought it was talking about the crypto thing.
0:07:51 And, you know, I’m like somewhat interested in crypto.
0:07:53 I hold some Bitcoin and stuff.
0:07:57 Which, you know, by the way, is another whole story there with Bitcoin exploding.
0:08:02 But you know, I hold a little bit of crypto, but I don’t do the Doge thing.
0:08:04 Never really been into the meme coin thing.
0:08:07 So every time I see Doge, I just sort of scroll past it thinking, oh,
0:08:09 this is just another thread of somebody talking about crypto.
0:08:12 So it’s, you know, not something that’s really on my radar.
0:08:18 But now that I know that it’s actually referring to the department of government efficiency.
0:08:18 Yeah. Yeah.
0:08:22 Now that I know that I’m going to actually start paying a little bit more attention to it.
0:08:23 Yeah. I think it’s kind of, yeah.
0:08:27 So maybe the name is a bit distracting, but I also think it probably got more attention because of it.
0:08:30 Right. Because like, obviously people in the media are like super upset.
0:08:31 Like this is ridiculous.
0:08:35 He’s like calling it Doge and like there’s like an official Doge department.
0:08:40 You know, we’ve already got Dogecoin, which is apparently completely unrelated.
0:08:41 Maybe they’ll tie together.
0:08:42 Who knows?
0:08:43 Yeah. Who knows?
0:08:45 I mean, you know, Doge has always been a meme, right?
0:08:49 Like I hung out with Jackson Palmer when he moved from Australia to San Francisco back
0:08:52 in the day, when he first started Dogecoin as a joke.
0:08:53 You know, really fun guy.
0:08:57 But yeah, the whole thing was a joke and now it’s become a, you know, a whole thing.
0:08:57 Yeah. Yeah.
0:09:00 I really like the sort of leaderboard idea.
0:09:01 I can’t visualize it.
0:09:06 I have no idea what something like that would look like, but I really like that level of transparency
0:09:10 where like anybody can go and be like, wait, we’re putting this much money towards this thing.
0:09:13 Like, why is this leaderboard so out of skew?
0:09:15 We need to adjust this a bit.
0:09:16 I love the idea.
0:09:21 Like I tweeted, I think like a year ago, like one of my big predictions was that in the next year or two,
0:09:24 we would see AI start to have an impact on government spending.
0:09:29 You know, this might be controversial, but you know, my belief is that Democrats and Republicans both,
0:09:32 there’s a lot of corruption in my opinion.
0:09:35 I think there’s a lot of people overpaying their friends, companies, and then later on,
0:09:37 they get favors and things like this.
0:09:42 And I think that there’s so much complexity that it’s hard for the average person to understand that or see it.
0:09:47 And I’m pretty convinced that once you start applying AI to looking at all the government spending data,
0:09:50 there’s going to be some things that kind of pop out like, oh,
0:09:55 why are we paying a million dollars for this, you know, for, you know, whatever?
0:09:59 Like for that kind of shovel or whatever thing it is, you know, like I think there’s like so many examples like this of like,
0:10:01 just dramatically overpaying for things.
0:10:03 And so I’m excited.
0:10:07 I think in probably one to two years, or maybe even less than that,
0:10:10 AI will be like almost part of the government, where it’s showing us like,
0:10:11 here’s how this money’s being spent.
0:10:12 We could spend it more efficiently.
0:10:18 It’s all about how you can spend more efficiently, more intelligently versus what I think is happening right now.
0:10:20 Like a lot of people just overpaying their friends and things like that.
0:10:25 One thing I do want to dive into a little bit is so you’ve already sort of broken out a whole bunch of potential
0:10:29 implications of like the new administration coming in, right?
0:10:34 There are a few other things I know that like Trump basically said that on day one,
0:10:36 whether this actually happens on day one or not,
0:10:41 I feel like politicians saying I’m going to do this on day one is sort of like a talking point.
0:10:46 It’s sort of like high schoolers running for president saying I’m going to make all the vending machines free.
0:10:48 Like whenever I hear day one, that’s kind of how I feel about it.
0:10:50 Is there just saying stuff people want to hear?
0:10:56 But he did say that on day one, he wants to repeal Biden’s AI executive order,
0:11:03 which essentially, Biden’s executive order like created a new government body to sort of look at AI.
0:11:10 And also there was something in there that said that pretty much any foundation model had to be seen
0:11:14 by the government and approved by the government before it can sort of get released into the wild.
0:11:18 Those were like kind of the two main things of the executive order.
0:11:24 Yeah, government approval first and also like a government body to sort of keep track of AI.
0:11:26 And Trump said, I’m going to repeal that right away.
0:11:30 We want companies to be able to move as fast as possible when it comes to this stuff.
0:11:35 They shouldn’t need to like run it by their parents first, right?
0:11:37 That’s one of the big implications.
0:11:41 Also, we got JD Vance, who you mentioned was a venture capitalist.
0:11:45 But one of the things that he’s been fairly outspoken about is open source.
0:11:52 He actually has made a lot of statements in the past about how he’s worried that regulation for AI
0:11:59 within the government is going to sort of strongly favor the existing incumbents
0:12:02 and make it really, really difficult for new players to get in, right?
0:12:06 Because what ends up happening is you get these big massive companies,
0:12:11 the Googles, the Metas, the Microsoft companies like this that have more money than God
0:12:17 and they can lobby the politicians to get the regulations to sort of go in their favor.
0:12:21 And a lot of times those regulations go in these big incumbents favors,
0:12:25 but the open source, the smaller companies that are just trying to get going,
0:12:29 they severely hinder those companies’ progress.
0:12:33 And that’s something that JD Vance, the vice president elect,
0:12:37 essentially said he’s worried about with AI regulation.
0:12:40 We need to make sure that whatever sort of things we do,
0:12:45 whatever sort of moves the government makes, they help, you know, both sides, right?
0:12:50 It’s not just completely favoring the massive incumbent companies.
0:12:54 So those are a few of like the implications that we’ve heard already.
0:12:58 Another thing is that Trump basically said he wants to make US first in AI, right?
0:13:03 You know, he sees it as a competition with China and I’m sure there’s some other countries in the mix,
0:13:07 but for the most part, when you talk about AI, you’re usually talking about the US and China
0:13:10 are the two that seem to be like racing each other the most.
0:13:15 Yeah, I think that the concerns JD Vance has shared like actually are really similar to mine.
0:13:21 Like in terms of in the future, AI is going to be the main intelligence on planet Earth.
0:13:27 So it’s very dangerous for that to be one company owning that because then that’s one company that owns intelligence.
0:13:31 One company that owns all sources of like what is the truth, right?
0:13:33 Obviously, that’s dangerous for one company to own.
0:13:35 So a lot of his concerns are around that.
0:13:38 And so that’s why he’s a big proponent of open source, which is exactly the same as me.
0:13:41 Like, I think we can’t have just one company owning intelligence.
0:13:44 That’s like, obviously, a very bad idea for humanity.
0:13:49 But at the same time, he does seem to be very practical, like you mentioned, being concerned about China
0:13:52 and realizing we are in a new arms race with China, right?
0:13:55 This is like building the nuclear bomb or building the internet or whatever.
0:14:00 Like with new technologies, America has stayed ahead because we were number one in those areas. With AI and robotics,
0:14:02 now we have to be number one.
0:14:05 And it looks like right now we’re ahead in AI and China is ahead in robotics, right?
0:14:06 Right.
0:14:09 That’s concerning.
0:14:13 It’s good that we’re ahead in AI though, because that should help us get ahead in robotics in the future.
0:14:14 But currently, that’s not how it’s playing out.
0:14:17 Currently, China seems to be ahead in robotics.
0:14:18 And so I’m pretty sure that they’ll be practical.
0:14:21 They’re not going to be like, hey, everything has to be open source.
0:14:24 I think they’re going to be very supportive of OpenAI and all these other different companies.
0:14:27 I don’t think it’s going to be all about xAI or whatever.
0:14:29 I don’t think they’re going to just favor Elon Musk.
0:14:33 And so I think they’ll have a practical approach where it’s like, okay, there’ll be some regulation around AI,
0:14:38 but definitely not highly restrictive because I do not want to slow down American companies in terms of competing with China.
0:14:39 Yeah, yeah.
0:14:40 That’s the impression I get.
0:14:44 We’ll be right back.
0:14:47 But first, I want to tell you about another great podcast you’re going to want to listen to.
0:14:51 It’s called Science of Scaling, hosted by Mark Roberge.
0:14:57 And it’s brought to you by the HubSpot Podcast Network, the audio destination for business professionals.
0:15:04 Each week, host Mark Roberge, founding chief revenue officer at HubSpot, senior lecturer at Harvard Business School,
0:15:09 and co-founder of Stage 2 Capital, sits down with the most successful sales leaders in tech
0:15:14 to learn the secrets, strategies, and tactics to scaling your company’s growth.
0:15:19 He recently did a great episode called How Do You Solve for Siloed Marketing and Sales?
0:15:21 And I personally learned a lot from it.
0:15:23 You’re going to want to check out the podcast.
0:15:26 Listen to Science of Scaling wherever you get your podcasts.
0:15:31 I tend to sort of avoid politics.
0:15:35 I don’t really identify as a Republican or a Democrat. Like, never in my life
0:15:37 have I identified with either party.
0:15:40 I’ve always sort of identified with I’m an entrepreneur.
0:15:47 I take care of myself, no government entity, no one person getting elected is going to dramatically change my life.
0:15:50 It’s up to me to change my life and get to where I want to get.
0:15:54 And so I’ve always kind of had like that sort of perspective on politics.
0:16:02 But saying all of that, the comment that I was about to make is that it does seem like as far as like AI progress goes.
0:16:05 I’m not going to speak to the character of the candidates or anything like that,
0:16:12 but as far as like which candidate is going to help us get ahead in AI faster and get further with it.
0:16:17 I think the outcome of that election is going to get us further in AI, right?
0:16:19 That’s kind of how I feel about that.
0:16:25 There’s, you know, other things that I like and other things that I don’t like about, you know, both candidates that were running,
0:16:26 but we don’t need to get into any of that.
0:16:32 Let’s sort of like shift the topic slightly here because Garry Tan just interviewed Sam Altman.
0:16:39 And during that interview with Sam Altman, one of the questions he asked him is what are you most excited about for 2025?
0:16:42 And Sam Altman’s response was AGI.
0:16:43 I think that’ll be pretty cool, right?
0:16:46 Like, I think that was his words exactly.
0:16:52 So that’s basically Sam Altman implying that AGI is coming in 2025.
0:16:58 What’s interesting about that is there’s also been articles that recently came out from The Information and from Bloomberg
0:17:05 and then pretty much all the other news outlets sort of covered what these two original news sources covered,
0:17:09 which is that they are both claiming that AI has slowed way down.
0:17:16 So to hear Sam Altman go and talk to Garry Tan on the Y Combinator podcast and say we’re gonna have AGI in 2025
0:17:23 and then seeing all these other news outlets saying we’re hitting this AI winter and progress is slowing way down.
0:17:26 Those two things seem to be at odds with each other a little bit.
0:17:30 Yeah, I mean, like from talking to friends in Silicon Valley, like everyone’s very optimistic.
0:17:37 And so, and in general, Silicon Valley does have like insider information like YC is the biggest network in Silicon Valley.
0:17:44 Sam Altman used to run YC, you know, and so they typically know about things going on at OpenAI before others do.
0:17:46 And everyone’s very optimistic.
0:17:51 So that tells me that I would trust what Sam is saying about AI over other people, quite honestly.
0:17:56 We talked about this before, like when we were out in Boston, about how reasoning models like o1 are a big deal.
0:18:02 The fact that you can now, even without more data, you can still scale AI just by throwing more compute at it.
0:18:06 And instead of being based on the data, it’s based on when it’s actually thinking about what you say to it.
0:18:10 And recently, people inside of OpenAI have started sharing comments on this.
0:18:18 Like, hey, yeah, people have not properly updated their thoughts on where AI is heading based on what o1 means.
0:18:21 Because when people try to explain o1, people are like, oh, it’s just like a chain of thought.
0:18:24 And that’s all it is. It’s basically what people were doing before.
0:18:27 People at OpenAI have said like, no, that’s like kind of like what inspired it.
0:18:30 But there’s more obviously going on behind the scenes.
0:18:32 It’s not just chain of thought.
0:18:38 And so what surprised them is it seems to reflect almost like an internal monologue the AI has now.
0:18:42 And they said some of the internal monologue that they see has been kind of shocking.
0:18:49 And I’ve been hearing rumors too, like by the time this episode is out, we might have the full version of o1, right?
0:18:54 Right now we’ve got OpenAI o1-mini and OpenAI o1-preview, right?
0:18:58 So we haven’t even seen the full OpenAI o1 model yet.
0:19:02 What we’ve seen is sort of like a not fully trained version.
0:19:08 It’s almost like a checkpoint version of what the full o1 was going to be.
0:19:16 And there’s been a lot of rumors circulating over on X and various news websites that claim that in November,
0:19:18 we’re going to see the full version of o1.
0:19:22 So within the next couple of weeks, we’re likely to see the next version of o1
0:19:25 and it’s probably going to blow some people’s minds.
0:19:27 You know, we also had Dario Amodei.
0:19:29 I’m not sure if I pronounced his name right.
0:19:31 He’s the CEO of Anthropic.
0:19:33 He went on the Lex Friedman podcast.
0:19:36 Lex asked him the same question.
0:19:38 When do you think we’re going to get AGI?
0:19:44 He basically said, I believe it’s going to be in, you know, 2026 or 2027, but probably not sooner than that.
0:19:48 And then you have Sam Altman saying he believes it’s going to be 2025.
0:19:53 But I think a big thing that’s going on here, and The Information article and then Bloomberg covered it
0:19:55 where they said things are really slowing down.
0:20:01 What we’re seeing happen is that the training side is slowing down,
0:20:06 but what they’re able to do on the inference side is getting better and better and better.
0:20:10 And I think that’s what you were just kind of saying there with the reasoning model, with o1.
0:20:14 So yes, we’re sort of running out of data to train these AI models on, right?
0:20:19 Like once you’ve scraped the whole internet and trained it into an AI model, like what are you scraping beyond that?
0:20:25 It’s either got to be synthetic data or you’re just scraping the same thing over and over again.
0:20:30 You know, I think where people are probably getting confused too is like they saw o1-preview and they’re like,
0:20:34 oh, it’s kind of slow and it’s not better in a lot of ways.
0:20:38 But they’re not realizing like this is a new, you know, kind of overusing the word, but it’s a new paradigm.
0:20:40 But it’s a new way to build AI.
0:20:42 And so of course, the first version is not that great.
0:20:44 But it’s like a GPT-1 type of version, right?
0:20:46 They started the whole naming convention over.
0:20:47 That’s why, right?
0:20:53 And then people at OpenAI have like commented recently that people are not paying attention to how the trajectory is going to change.
0:20:55 Like the trajectory of AI, how fast things improve.
0:20:57 That’s what matters.
0:21:02 And with these new models, you don’t have to worry about training an entirely new model with all this new data
0:21:06 and all these different projects, you know. It was like nine months or a year,
0:21:09 like they said, for some of the models, how long it took to train everything and test it.
0:21:16 If you’re not waiting on that, then instead you’re throwing more compute at this model and improving it every single day.
0:21:18 That’s probably where we’re at, or heading.
0:21:22 And so that’s going to be really different than, oh, yeah, we get an upgrade every nine months.
0:21:26 We’re probably going to be like, oh, we get an upgrade every week and improvements may like speed up.
0:21:27 And so that’s what people are not thinking about.
0:21:32 Like we’re probably moving from a world of updating every nine months to updating every week, and that’s going to change things.
0:21:36 Yeah, and eventually updating every day and then eventually updating in real time, most likely.
0:21:41 Like I think a lot of people have this misconception right now that a lot of these AI models,
0:21:46 when you’re sitting there having a conversation with it, it’s actually learning and like training on the conversation.
0:21:48 But that’s not how it works, right?
0:21:55 Like that’s why when you see a new model of GPT, it says like trained through, you know, June 2023 or, you know,
0:21:59 this model was updated on August of 2024 or whatever.
0:22:03 And like I have conversations with people sort of outside of the AI sphere all the time, right?
0:22:07 Whenever I’m, you know, hanging out with friends or family that don’t know much about AI, but they know what I do for a living,
0:22:10 they always want to ask me about it, right?
0:22:15 And the conception that most people have is that if I go and have a conversation with ChatGPT,
0:22:19 it’s instantly getting smarter and smarter and smarter.
0:22:23 And if I correct it on things, the correction that I gave it is now going into the training.
0:22:24 That’s not how it works, right?
0:22:28 They’re getting updated and there’s like new training runs all the time.
0:22:33 But the conversations you’re having with it aren’t actually updating the model.
0:22:35 Saying that, I think that’s eventually where it’s going to get to.
0:22:38 I think it is going to get to a point where in real time,
0:22:43 the models are sort of getting smarter and smarter and smarter based on the conversations they’re having.
0:22:48 And like, no one person is going to be able to totally screw over the model
0:22:52 by giving it a whole bunch of fake information and then assuming it’s going to get trained into the model,
0:22:58 because I would imagine it’s going to have some sort of system where it’s looking at all of the information in aggregate.
0:23:04 And when certain specific information is fed in multiple times, over and over again,
0:23:06 that’s the information that’s going to get updated and fixed.
0:23:11 But if somebody is going in there saying, here’s how many rocks you should eat on a daily basis,
0:23:13 it’s not necessarily going to update with that information.
0:23:17 It’s sort of going to look at the aggregate of everybody communicating with it.
0:23:19 That’s where I think it’s going to get.
0:23:22 I posted something on Twitter the other day about how like,
0:23:26 it feels like there’s been less exciting things in AI lately.
0:23:28 And somebody put a comment on there saying,
0:23:34 what you’re failing to realize is that the big monumental shift that AI was going to generate
0:23:39 has already happened and there’s no more progress from here.
0:23:41 And my response to that was just false, right?
0:23:45 Because I could literally go on and on and on about all of the stuff that’s coming.
0:23:49 I mean, I’ve even got NDAs with companies that have shown me some stuff
0:23:51 that I think are going to blow people’s minds, right?
0:23:56 But I know what’s coming intuitively and even seen some of it.
0:24:03 And I’m like, for anybody to think that we’ve hit the end of the road with AI is delusional.
0:24:06 I don’t know the right word, but we’re not at the end.
0:24:10 I mean, like we’re just starting to touch what like world models can do.
0:24:13 Like, you know, modeling all of the world and environment
0:24:16 and having that information inside of the AI as well
0:24:19 so that it works better with like embodied AIs and things like that.
0:24:20 Like we’re closer to the beginning.
0:24:25 We’re closer to like the one yard line than we are to the 99 yard line.
0:24:27 Humans have a hard time extrapolating out.
0:24:31 Like, you know, you see progress, and then imagine what’s going to happen after,
0:24:33 you know, that technology built on it after two or three years,
0:24:35 how things are going to change.
0:24:37 People have a hard time like like imagining those things.
0:24:41 And now I remember when GPT one and two came out, like my friends in San Francisco,
0:24:44 like a lot of them were like YC people, right?
0:24:48 And in our private group chats, they were sharing results from GPT one and two.
0:24:50 And like this changes everything.
0:24:52 And I’m like, maybe they’re more intelligent than me.
0:24:54 I feel like they got it like slightly faster than I did.
0:24:57 But like once I got it, I was like, oh yeah, AI actually works now.
0:25:00 Like, even though it’s not great yet, this is the beginning of it.
0:25:03 And so those same people who recognized that, they're more optimistic
0:25:05 than ever before right now.
0:25:08 And so I just trust those people over the people who probably at that time,
0:25:11 they were like, oh, AI is nothing or ChatGPT is not good.
0:25:12 Yeah, they didn’t understand that kind of stuff was coming.
0:25:15 Like the people in the know, they’ve known for a while.
0:25:18 You know, even at my previous startup, Binded, we did computer vision stuff.
0:25:22 And there were breakthroughs happening in AI then before LLMs even, right?
0:25:24 Especially on the computer vision side.
0:25:26 Like there was dramatic changes happening
0:25:29 where you could start recognizing the elements of images and stuff.
0:25:33 And so like AI has been improving for a long time now.
0:25:36 And before that, you know, machine learning applied to like recommendation engines
0:25:37 for Amazon and YouTube.
0:25:41 Like AI has been a long path and a lot of people don't realize
0:25:43 that this hasn't happened overnight.
0:25:44 Yeah, right.
0:25:47 And then yeah, like you said, we’re like at the beginning of an exponential curve.
0:25:48 We’re not plateauing.
0:25:51 We’re literally like right here and about to go up.
0:25:53 So exactly.
0:25:55 Now when it comes to the topic of AGI though,
0:25:59 I think the thing that I struggle with the most around that conversation, right?
0:26:01 You got Sam Altman saying 2025.
0:26:07 You got Dario from Anthropic saying 26 or 27, if not later than that.
0:26:12 The problem I have, how do we know when we’ve hit AGI?
0:26:17 Because I feel like maybe Sam and Dario could have different definitions of it
0:26:20 and Sam might actually think we hit it in 2025,
0:26:22 but he didn’t hit Dario’s definition of it.
0:26:27 And I know Google has their whole like, Google or OpenAI or maybe they both have it,
0:26:31 but they have like these levels of various AGIs, right?
0:26:33 Nobody really knows where the goalpost is right now.
0:26:37 And I think Sam might have this idea of this is what AGI is to me.
0:26:39 And I think we’ll hit it in 2025.
0:26:43 While other people’s definition of AGI might be different than Sam’s.
0:26:45 And when Sam thinks we hit AGI,
0:26:49 other people in the space will be like, yeah, but that’s just Sam’s version of AGI.
0:26:50 It’s not really AGI yet.
0:26:53 Yeah, maybe we should segue into like some 2025 predictions around all this.
0:26:56 But you know, one thing that’s interesting is,
0:27:00 I'm not sure if you remember this, but apparently OpenAI's deal with Microsoft
0:27:03 has all these things saying that when they hit AGI,
0:27:06 it’s a deal off or like the ownership’s like something changes.
0:27:07 The structure changes.
0:27:11 I think they still work together, but the structure dramatically changes somehow.
0:27:13 Yeah, yeah, in OpenAI's favor, right?
0:27:16 Like Microsoft has way less control of the company once that happens.
0:27:19 And so it might be in their benefit to say they’ve hit AGI.
0:27:23 My understanding about AGI is everyone has a different definition for this,
0:27:26 but like as soon as it can do the work of like a typical person,
0:27:28 not like the most genius person in the world,
0:27:31 but like an average, you know, a person who like sits at your desk
0:27:34 and does emails and stuff like that, like an admin or something.
0:27:39 For me, as soon as it does some kind of work like that, that’s like basic AGI.
0:27:44 Yeah, so Dario actually, I think described how he saw it on his episode with Lex.
0:27:47 I don’t have the exact quote in front of me right now,
0:27:54 but essentially he said when sort of every topic AI is able to understand it at like a PhD level, right?
0:27:59 So it almost sounds like maybe your definition and maybe Sam’s definition might be
0:28:01 it could do anything a human can do,
0:28:04 but it sounded like Dario’s definition is like it could do anything
0:28:07 that the smartest human at a specific task can do.
0:28:12 Yeah, yeah, that’s, you know, arguing over the definition.
0:28:14 So that’s going to continue to happen.
0:28:21 That’s where I sort of get like with Sam saying 2025 and Dario saying 26 or 27
0:28:22 and everybody having these different definitions.
0:28:26 It’s like, I feel like you have to have some sort of like,
0:28:28 okay, this is how we know we’ve hit it.
0:28:30 Otherwise, this debate is going to rage on forever.
0:28:32 Yeah, yeah, the robots are ruling the entire world.
0:28:34 They're like, have we hit AGI yet?
0:28:39 While we sit back and the robots serve us and everything, you know, it's going to be like that.
0:28:45 I think I think in 2025, we’re going to get AI agents to actually work.
0:28:47 They actually go off and do work for you.
0:28:50 And the fact that these things also have an internal monologue going on.
0:28:54 I mean, for me, that’s AGI like it passes a Turing test.
0:28:58 If you didn't know, you could chat with it and think you were talking to a human.
0:29:00 It's got an internal monologue.
0:29:01 It can go off and do work for you.
0:29:04 I’m convinced all these things are going to be there 2025.
0:29:05 I mean, two of them already are there.
0:29:06 And so for me, that’s AGI.
0:29:10 Yeah, I think AGI 2025 and then and the interesting thing too,
0:29:15 as Sam Altman said, artificial superintelligence in the next thousand days or so.
0:29:19 You know, so that's possibly artificial superintelligence within three to five years.
0:29:24 Artificial superintelligence is basically AI beyond human understanding,
0:29:28 beyond anything the smartest human in the world could possibly do.
0:29:30 You know, make Einstein look dumb, right?
0:29:30 I don’t know.
0:29:36 I feel like once you hit AGI, ASI is not too far afterwards, right?
0:29:39 Because so if I’m basing it off of like Dario’s definition, right?
0:29:43 And if Dario’s definition is like an AI that knows every topic,
0:29:46 as well as the smartest person on that topic, right?
0:29:52 If we see AGI is like that, well, then wouldn’t that mean that AGI would be smart
0:29:56 enough to figure out how to develop an ASI, right?
0:29:59 Like if it is like the smartest coder in the world,
0:30:04 the smartest engineer in the world, you know, the smartest writer in the world,
0:30:09 all of this like bucket of things that you would need to create ASI.
0:30:12 If it was the smartest at every one of those things in the world,
0:30:17 it doesn’t seem too much of a stretch that AGI, as soon as we hit that,
0:30:20 ASI comes pretty damn quickly after that.
0:30:23 Yeah, I mean, that’s where like three years makes sense to me.
0:30:27 Like if you keep adding 10 to 20 IQ points to the thing every year,
0:30:30 you get smarter than any human very quickly, right?
0:30:31 Yeah, yeah, yeah.
0:30:36 I mean, like right now, is there any AI that can solve math problems
0:30:38 that no human has managed to solve yet?
0:30:42 No, but what is interesting is like I said in like one of our last episodes,
0:30:44 like scientists are already using this stuff now, though,
0:30:45 and saying like it replaced it.
0:30:47 So it’s not replacing the smartest human,
0:30:50 but it’s already replacing like smart graduates.
0:30:53 Yeah, well, I mean, AlphaFold, right? AlphaFold 3,
0:30:56 they just open-sourced AlphaFold 3 so anybody can go use it.
0:31:00 But that’s like, you know, figuring out new ways to fold proteins and stuff
0:31:03 that are novel ways that humans haven’t figured out yet.
0:31:07 So if AI is figuring out these new novel things
0:31:10 that no human has figured out yet, once we get to AGI,
0:31:14 I just don’t see how ASI is not like fairly soon after.
0:31:16 Yeah, I’m beating that dead horse now.
0:31:17 Yeah, but you bring up AlphaFold.
0:31:20 I mean, that’s even another reason why I’m super optimistic
0:31:22 about the Trump administration coming in,
0:31:26 because people don’t realize like we’re in like the craziest time in human history.
0:31:28 It’s like, I don’t believe in the simulation theory,
0:31:31 but if you did, I understand why because we’re alive
0:31:34 in the most interesting possible time in humanity, right?
0:31:37 Like we were at the birth of artificial general intelligence.
0:31:41 We’re almost at the birth of like a new entity or a new kind of being.
0:31:44 And the next four years, all of these things are going to combine, right?
0:31:48 Like, you know, AI, robotics, we’ll be applying it to solving cancer
0:31:49 and all these different things.
0:31:53 And so in that area, you do want to be moving as quickly as possible.
0:31:56 You don’t want too many restrictions because we’re probably going to be like,
0:32:01 we’re totally reshaping what America is like entirely, right?
0:32:05 Yeah, no, it’s fascinating. I mean, you and I, we’re roughly the same age, right?
0:32:11 So like we lived in an era where we knew what it was like before internet and after internet.
0:32:15 We lived in the era where we knew what it was like before everybody had a cell phone
0:32:17 and after everybody has a cell phone.
0:32:22 We lived in an era when we had cell phones and an era where we had smartphones.
0:32:28 Like we’ve lived in this window of time where we’ve seen the most insane
0:32:33 advancements in technology like the world has ever seen in human history.
0:32:37 And we’re going to get to witness AGI and most likely ASI in our lifetimes.
0:32:39 Like that to me is mind blowing.
0:32:41 It is. Yeah. I grew up on IRC, right?
0:32:45 Like I grew up like when the internet was, was being born, right?
0:32:47 Like people didn’t know how to use the internet or what it was.
0:32:48 Or it was just, it was crazy.
0:32:52 It was just all this stuff like tied together barely somehow and it all worked somehow.
0:32:54 And then now everyone takes the internet for granted.
0:32:56 And it’s like a part of daily life.
0:32:58 It’s part of the whole world economy.
0:33:01 And then now we got people talking like, oh, is AI going to be a real thing?
0:33:02 It’s important or not?
0:33:04 It's like, it's going to look the same as when everyone was saying,
0:33:06 Like, is the internet going to be important or not?
0:33:07 Yes, it’s going to be important.
0:33:11 And most likely AI is way more important than the internet even because it's intelligence.
0:33:13 It’s not just networking.
0:33:15 It’s intelligence. Absolutely.
0:33:16 Yeah. Yeah.
0:33:20 I know we’re going to do like probably a whole like 2025 predictions episode.
0:33:24 I imagine we’ll probably do an episode like that, like closer to New Year’s or something
0:33:26 where we just throw out like all of our predictions.
0:33:30 But I know when it comes to AGI, I think I probably live somewhere
0:33:35 between Sam Altman and Dario, like maybe 2026 and maybe not by 2025.
0:33:41 Maybe there’s like some version that’s like really, really close in 2025
0:33:44 and like open AI calls it an AGI.
0:33:45 But I don’t know.
0:33:47 Like I’m always bad at those predictions too.
0:33:51 Like every time I make a prediction like that, it always happens faster than I assume.
0:33:56 So me saying 2026 is probably a good bet for 2025.
0:33:58 So I'll say I'm pretty bad at predictions too.
0:34:00 Or like, I'm great at predicting
0:34:01 the kind of things that will actually happen.
0:34:04 Like I’ve done that so many times in my career that people have been kind of like shocked.
0:34:06 But like I’m always off on the timing.
0:34:09 So I don’t know. Like, yeah, I think 2025, but that’s based on my definition of like,
0:34:11 look, the thing's got an internal monologue.
0:34:14 And they say that's going to keep improving, the thing talking to itself
0:34:16 and like reflecting on its own work.
0:34:18 I mean, that’s crazy.
0:34:22 And it sounds like they're pretty confident that they're going to have the agent stuff work,
0:34:25 which probably means like, okay, you’re going to have like basically an AI assistant
0:34:29 that can handle your emails for you and stuff like that and schedule meetings and other things.
0:34:30 That’s probably all coming in 2025.
0:34:32 So for me, that's like, yeah, that's the bet.
0:34:37 That’s something I can fairly confidently agree with that 2025 is going to be the year of the agents
0:34:41 that you just give it a prompt of what you want it to take care of for you.
0:34:45 And it’s going to go and use all the tools and do all the navigating and all the research
0:34:46 and handle that stuff for you.
0:34:48 We’ve already seen glimpses of it.
0:34:52 The glimpses we’ve seen are buggy, but like we can see where it’s going.
0:34:55 Microsoft just released Magentic-One or something like that.
0:34:57 You got the Claude computer use.
0:35:01 There's been some news articles going around that ChatGPT has that capability.
0:35:05 They just haven’t rolled it out to like the consumer models yet.
0:35:10 So like, I think pretty confidently we’re going to have really solid agents in 2025.
0:35:16 I also think the whole like digital twin like modeling the world stuff is going to be really big in 2025.
0:35:21 I think we’re going to see the video models like Sora where they’re actually doing that whole like world
0:35:25 modeling thing and sort of like understand environments and it’s generating based on
0:35:31 what it knows about environments versus like trying to recreate exactly what it was trained on.
0:35:36 I’m not very good at explaining that, but like that whole digital twin like world modeling thing
0:35:42 I think is going to only become like a bigger component of a lot of these AI models next year as well.
0:35:47 Yeah. Yeah. I think probably 2027, rich people are going to have robots.
0:35:50 You know, 2030, most people have robots.
0:35:52 That’s like kind of generally what I think.
0:35:55 I can see that. I could buy that timeline. Yeah. Yeah. For sure. For sure.
0:35:58 I mean, the biggest thing for consumers is just getting those costs down, right?
0:36:01 Like you’re not going to be getting people going and buying, you know,
0:36:04 $50,000 humanoid robots to do their laundry for them.
0:36:07 Yeah. It’s not going to be great at it either at first.
0:36:10 Right. It'll be like, it can kind of do it, but didn't fold it properly.
0:36:13 You know, my wife’s super picky about folding stuff.
0:36:16 So she’ll be like, no, no, no way.
0:36:17 Yeah. Yeah. Well, cool.
0:36:20 I think we covered quite a bit of ground on this episode.
0:36:23 You know, we talked about what the current election means.
0:36:27 We talked about the vibes in San Francisco shifting.
0:36:32 We talked about when we might see AGI, what AGI means, a lot of ground covered.
0:36:37 I think, I think that’s probably a good place to call it a wrap on this one.
0:36:37 It was fun.
0:36:41 I think that’s a good stopping point if you tune into this episode and you enjoyed it.
0:36:43 We’ve got some amazing episodes coming up.
0:36:46 If you like hearing our discussions about the current state of AI,
0:36:50 you want to hear us have discussions with other leaders in the AI space
0:36:54 and the people that are building the next generation of AI.
0:36:57 Make sure you’re subscribed to the show, subscribe on YouTube,
0:37:00 subscribe on Apple Podcasts, Spotify, wherever you listen to podcasts.
0:37:02 We’re probably there.
0:37:05 We really, really appreciate you subscribing and thank you so much
0:37:07 for tuning into this episode.
0:37:08 We’ll hopefully see you in the next one.
Episode 34: How will the 2024 election impact AI advancements by 2025? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive into the implications of the upcoming Trump term.
In this episode, Matt and Nathan discuss the potential AI developments over the next few years, how different political outcomes could shape AI progress, and the shifting landscape in Silicon Valley. They explore the latest in AI models like OpenAI o1, the debate over AGI timelines, and how regulatory approaches might impact America’s competitive edge in tech.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) This won’t become political; mood is changing.
- (04:55) Reduce government size, invest severance in tech.
- (08:28) AI can reveal corruption in government spending.
- (10:58) AI regulation may favor big companies, hinder startups.
- (14:50) Sam Altman expects AGI by 2025, despite skepticism.
- (17:59) AGI expected around 2025-2027, training slowing.
- (19:56) AI models don’t learn in real-time conversations.
- (22:48) Humans struggle to foresee technological advancements’ impact.
- (27:53) AGI leads to ASI due to intelligence.
- (29:39) Optimistic about AI and future advancements.
- (32:20) Predicts accurately, but often wrong on timing.
—
Mentions:
- Sam Altman: https://blog.samaltman.com/
- OpenAI: https://openai.com/
- Dario Amodei: https://www.linkedin.com/in/dario-amodei-3934934/
- Anthropic: https://www.anthropic.com/
- AlphaFold: https://alphafold.ebi.ac.uk/
- Dogecoin: https://dogecoin.com/
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano