AI transcript
0:00:15 this has been a monumental, massive week in the world of AI. We got Project Stargate.
0:00:21 We got DeepSeek R1. We got announcements about the next versions that OpenAI are about to
0:00:27 release. So many huge things happened this week. And we’re both optimistic and maybe a little
0:00:30 bit scared about some of it. But we’re going to go ahead and break it all down for you and
0:00:35 really dive deep into our thoughts about all of this stuff. So let’s just get right into it.
0:00:41 Look, if you’re curious about custom GPTs or you’re a pro that’s looking to up your game,
0:00:47 listen up. I’ve actually built custom GPTs that helped me do the research and planning for my
0:00:53 YouTube videos. And building my own custom GPTs has truly given me a huge advantage. Well,
0:00:58 I want to do the same for you. HubSpot has just dropped a full guide on how you can create your
0:01:03 own custom GPT. And they’ve taken the guesswork out of it. They’ve included templates and a step-by-step
0:01:09 guide to design and implement custom models so you can focus on the best part, actually building
0:01:13 it. If you want it, you can get it at the link in the description below. Now back to the show.
0:01:23 This week has been a fairly monumental week in AI. It’s been a really, really big week,
0:01:30 I think the biggest news in the world of AI was the Stargate project. Donald Trump announced it
0:01:36 alongside Masayoshi-san and Sam Altman and Larry Ellison of Oracle, right? They were all together
0:01:42 up during this press conference, all sort of taking turns explaining why they’re excited
0:01:46 about this project Stargate. You know, it had been rumored for like probably the last six or
0:01:50 even nine months that Sam Altman was working on something like Stargate. They had been having
0:01:54 conversations with Masayoshi-san. You know, a lot of people in America don’t know who Masayoshi is.
0:02:01 Masayoshi started SoftBank, which is almost like the AT&T of Japan, I guess is one way to describe it,
0:02:05 but also they’ve branched out into many different ventures, including they raised one of
0:02:09 the largest venture funds in the world in collaboration, I think, with some of the Saudis,
0:02:14 I believe. They were also the largest investor in WeWork. Yeah, yeah. They had the Vision Fund,
0:02:20 which I think was the largest venture capital fund ever, right? So Masayoshi aims big. In my previous
0:02:24 startup, one of my main investors was Taizo-san, his younger brother. And actually, I got to be
0:02:28 pretty good friends with Taizo-san and also really good friends, like even closer with his
0:02:32 right-hand man, Atsushi Taira, but he aims very big. Anyway, so there have been rumors that they
0:02:35 were trying to build something like this together, that there have been, you know, conversations
0:02:40 about it. And then I think around the time when Trump won the election, I believe there was some
0:02:45 kind of press conference with him and Masayoshi, where they kind of alluded to something like this,
0:02:50 but they didn’t talk details. And they talked about, okay, $100 billion is what Masayoshi was
0:02:55 saying. And Trump told him, make it $200. And so now they’re coming out and saying,
0:03:01 oh, this is actually going to be a $500 billion AI infrastructure project. And so just to give
0:03:05 people context, I mean, I did some research on Perplexity, like that’s the largest infrastructure
0:03:11 project ever in human history. The closest thing is like maybe there’s a city that Saudi Arabia
0:03:16 built, which was around $500 billion, but that’s an entire city. Definitely in terms of technology,
0:03:21 this is the largest infrastructure project ever in human history. Yeah, yeah. So I’ve got the
0:03:26 tweet up here. So I’ll sort of dig in on like a few of the elements just to sort of make it super
0:03:32 clear. But the Stargate project is a new company, which intends to invest $500 billion over the
0:03:38 next four years in building new AI infrastructure for OpenAI in the United States. So this is
0:03:45 specifically saying we’re going to build this infrastructure for OpenAI to use, right? So
0:03:48 that’s a very important element. And we’ll probably get into some discussion around that
0:03:55 in a little bit here, but they are building this multi-hundred-billion-dollar infrastructure for OpenAI.
0:04:00 So they’re going to begin deploying $100 billion immediately. They claim it’s going to
0:04:04 secure American leadership in AI, create hundreds of thousands of American jobs,
0:04:10 and generate massive economic benefit for the entire world. And if you watched any of the press
0:04:15 conferences, there was the press conference inside the White House, and then there was another sort
0:04:19 of interview that happened between those same three people out on the White House lawn. If you
0:04:26 watched any of those, they really, really focused in on how this was going to benefit health. Larry
0:04:30 Ellison talked a lot about how people are going to use AI and give all their health records to AI,
0:04:35 and AI is going to sort of pre-diagnose things. And then you can take your sort of pre-diagnosis
0:04:39 to a doctor and the doctor could sort of confirm the results. He also talked about how
0:04:44 this infrastructure is going to find the cure for cancer and is going to create a vaccine for
0:04:50 cancer. And it’s going to solve pretty much like every health ailment that plagues mankind, right?
0:04:55 That’s sort of the vision that they pitch to everybody: creating hundreds of thousands of jobs,
0:05:01 and also solving all of these health issues. Their tweet doesn’t really go into all the health
0:05:06 issues, but that was what they really, really honed in on at the press conference. So the initial
0:05:12 founders in Stargate are SoftBank, which is Masayoshi Son, OpenAI, which is Sam Altman, Oracle,
0:05:18 which is Larry Ellison, and then MGX, which I’m not super familiar with MGX. They’re the company
0:05:22 that I guess is going to be really honed in and focused on the medicine stuff. So SoftBank and
0:05:28 OpenAI are the lead partners for Stargate, with SoftBank being the financial part of it and OpenAI
0:05:33 having the operational part. Masayoshi Son’s actually going to be the chairman of this company.
0:05:38 And then they have some key tech partners, ARM, who makes basically all the chips that go into
0:05:42 mobile devices these days. And Masayoshi owns it. Yeah, that’s what I was going to say. I believe
0:05:48 Masayoshi Son is like the majority owner in that company. Microsoft, I mean, I think Microsoft is
0:05:54 in the mix by way of OpenAI, right? Yeah. NVIDIA, Oracle, and OpenAI. They’re all the key technology
0:05:59 partners in this project. And they’ve already started building their first mega data center in
0:06:04 Texas, where once that data center is built, it will be the largest data center ever built on the planet,
0:06:09 right? Yeah, they’re going to closely collaborate to build and operate this computing system.
0:06:14 And in particular, on AGI, we believe that this new step is critical on the path and will enable
0:06:19 creative people to figure out how to use AI to elevate humanity. So that’s the big sort of pitch
0:06:27 there is that it is this company that’s really kind of OpenAI, Oracle, and SoftBank coming together
0:06:35 to create this $500 billion AI infrastructure to essentially get to AGI and to create new jobs.
0:06:40 I feel like the create new jobs part might be like more of a short term thing. I don’t think
0:06:44 over like a 10 year window. Yeah, I think there’s a reason they focused on the drug discovery and
0:06:49 all that kind of stuff. Because like messaging-wise, it’s like, okay, like long term, what job
0:06:54 will I do? Yeah, there could be some job loss. But I think, you know, as we’ve said before on
0:07:00 the podcast, you know, there’s good and bad sides of AI, but we both believe that AI in general
0:07:04 will have a positive outcome for humanity. Yes. And that even if people end up doing fewer jobs,
0:07:07 a lot of it will be jobs that they didn’t actually want to do. And there will be more
0:07:13 abundance in society so that people can live better lives. So I’m personally super excited for this.
0:07:16 You know, it’s like when we first started the podcast, we’re talking about like how big of a
0:07:19 moment this is in human history. Yeah. Right. That was like when the first episodes we ever did
0:07:23 talking about that. And that’s exactly the stuff that Masayoshi’s talking about. I kind of wonder
0:07:27 if he’s reading my newsletter. Yeah, it could be because I would talk about like the golden age
0:07:32 of AI in America. Yeah. And he repeated that multiple times. Yeah, he kept on saying this is
0:07:36 the golden age, right? I do believe it’s true. It’s like this is a moment where you could reimagine
0:07:41 everything using AI. And also, you know, talking about like DeepSeek in China and the progress
0:07:46 China is making. This is a moment where like, yeah, the AI wars have begun. Yeah. It’s a monumental
0:07:51 moment. Like I really, really think this is a big moment in sort of the trajectory of human history.
0:07:57 This is like the beginning of the Manhattan Project, right? Like this is like a big step in
0:08:02 saying we are going to be the dominant leader in the world. We are going to be the first ones to
0:08:07 hit AGI and, you know, probably not long after that ASI. That is sort of like what they’re doing.
0:08:12 They’re planting their flag in the sand and saying we are going to lead this. Yeah. It’s the beginning
0:08:17 of what I feel like is going to be like essentially the space race between us and China, right?
0:08:20 Yeah. And what I’ve said before, like, you know, I live here in Japan, I’m like,
0:08:24 eventually America’s going to have a huge advantage because of their partnership with Japan,
0:08:28 especially when it comes to robotics in the future. I still strongly believe that.
0:08:32 And so the fact that this alliance is between an American company and a Japanese company
0:08:39 is really promising for the future. Yeah. So I am very, very excited. I lean mostly optimistic,
0:08:44 but I do have some things. I actually made a whole YouTube video about it and I made a whole
0:08:50 tweet about it. And, you know, it might come off as a little tinfoil hat conspiracy theorist sort
0:08:57 of thing. But I think my concern around all of this is specifically around Larry Ellison.
0:09:02 I’m not sure how much you know about Larry Ellison, but he’s the CEO of Oracle.
0:09:06 Yeah. He’s notorious in Silicon Valley and it’s interesting too. He is friends with Elon Musk.
0:09:09 And now Elon Musk kind of seems to be pissed off about this whole thing.
0:09:12 Yeah. Elon Musk is not happy about it. He’s already talking crap on Twitter about it.
0:09:18 But yeah, anyway, with Larry Ellison. So Oracle was originally founded as a company
0:09:24 to build databases for the CIA, right? So it was originally they had a different name.
0:09:29 Their very first project was called Project Oracle. Project Oracle was designed to build
0:09:36 databases for the CIA. And to this day, Oracle still has like government contracts with the CIA
0:09:41 and various, you know, three letter government agencies here in the US, right? So you’ve got
0:09:48 that element of it, right? Also, Larry Ellison just about four weeks ago did like this investor
0:09:53 meeting to all of like the Oracle investors. And while he was on stage, he was talking about
0:09:59 envisioning a future where everybody was under surveillance. Yeah, I saw that he was talking
0:10:04 about how there were cameras on drones, cameras on buildings, cameras on police, and how all the
0:10:09 newer car models all have cameras. And he was talking about how all of this data is going to
0:10:15 get fed to a data center somewhere. And then AI is going to analyze all of this. And when there’s
0:10:21 anything that pops up that the AI deems is worrisome, they’re going to alert the authorities
0:10:27 automatically. So when I say I have like some concerns, Larry Ellison is the one that like
0:10:32 his background with working with all the government agencies and also literally recent
0:10:37 statements within the last like four months about how he wants all this surveillance and he
0:10:42 sees a world where people and police officers will fall in line because they’re always being
0:10:47 watched. Yeah, like that is the future he wants to build. He’s publicly talked about that. So
0:10:53 that to me is a little worrisome, honestly. And then also just sort of going further down
0:10:57 this rabbit hole. I feel like that meme of It’s Always Sunny in Philadelphia where he’s like
0:11:01 connecting all the dots and he’s got like the pin board and he’s like tying strings together
0:11:08 and stuff. The most recent board members that OpenAI brought in-house to be on the OpenAI
0:11:14 board, one of them is an ex-member of the NSA. And the newest one is a member of BlackRock,
0:11:18 one of the executives at BlackRock, which is, you know, the world’s largest investment firm
0:11:24 that has huge political ties and tries to steer the politicians. Like if there is an Illuminati
0:11:29 BlackRock is kind of part of it. Anyway, okay, done with all the conspiracy stuff there. But
0:11:34 like when I’m starting to put all those pieces together, it makes me wonder if like outwardly
0:11:40 the motives they talk about are building new jobs, curing cancers, creating new vaccines that will
0:11:46 prevent cancer from ever happening in the first place. But inwardly, Larry Ellison needs a massive
0:11:52 data center to collect all this video footage so that he can use AI to analyze it and keep tabs on
0:11:58 the people. Just throwing that out there. That was my whole like rant and ramble that I put on
0:12:02 Twitter. But I get what he’s saying. Like I do think that actually, you know, AI will be good
0:12:06 in that way, that you can have customized medicine. In terms of the monitoring and the surveillance
0:12:11 stuff, yeah, I saw that. It was kind of, you know, alarming. You know, I’ve read a lot of sci-fi books
0:12:15 on the topic. I’ve always been of the opinion that that’s probably going to happen in the future.
0:12:19 That’s going to be like inevitable. And I don’t like it, but I don’t see any way around it because
0:12:24 people do really value safety. And in the future, as AI gets better, it’s going to be harder and
0:12:28 harder to argue against safety. And so I do think you will have AI systems that do like mass
0:12:32 surveillance and stuff like that. I think that’s going to be really hard to avoid. So I get the
0:12:37 concern. I don’t really know how you avoid that as technology gets better. You know, slightly
0:12:41 comforting that, you know, Masayoshi and Sam Altman, I don’t think they’re all for surveillance,
0:12:46 you know, like Larry is. So hopefully there’s a balance there. Yeah, I wouldn’t imagine so. But
0:12:50 yeah, I don’t know where Trump stands on it. I don’t want to get into the politics of it all. I
0:12:54 don’t know where he stands on it. But obviously, you know, this new company has the backing of the
0:13:00 US government. One of the last executive orders that Biden signed before he left office was an
0:13:06 executive order to allow AI data centers to be built on federal land. Yeah. Right. So basically,
0:13:12 data centers can be built anywhere in the country. The government can basically give land for these
0:13:18 data centers. And clearly Donald Trump was the one who sort of announced this new Stargate project
0:13:22 before introducing, you know, the three main players. So they have the backing of the federal
0:13:25 government. When you mean the backing, you mean the money though, because I’m not sure that it’s
0:13:30 confirmed. No, no, no, not the money. I think the money’s mostly coming from Masayoshi-san and,
0:13:34 you know, maybe some from Oracle and Open AI, but it seems like Masayoshi-san is sort of responsible
0:13:38 for the financing of it. But it sounds like the government is essentially saying, we’re not going
0:13:43 to get in your way to build whatever you want to build. That’s kind of my takeaway from it is like
0:13:48 they have the backing, not in a financial sense, but in the like open doors sense, right?
0:13:51 In terms of support, they’re probably going to get things like, you know, whatever regulations
0:13:55 they need to go through to get things set up, and they’ll all be fast-tracked. That’s kind of my
0:13:59 understanding. Yeah. Yeah. But then talking about OpenAI, right? I sort of highlighted the fact that
0:14:05 they are building this for OpenAI. Well, to me, that sort of brings me back to when we were talking
0:14:10 about like the whole podcast between Joe Rogan and Mark Andreessen, right? Yeah. Mark Andreessen
0:14:16 made a comment on that podcast about how he was in closed door meetings with the government where
0:14:20 they basically said, don’t even pursue building AI companies at this point, right? Oh man, I didn’t
0:14:25 make that connection. Yeah. Essentially saying that there’s going to be like one true king,
0:14:29 like one main AI player. So if you’re trying to build an AI company, you’re probably going to
0:14:34 fail because we’ve already sort of picked our winner. And Larry Ellison made an offhand comment
0:14:39 during that sort of outside the White House interview the other day. He made a comment that
0:14:43 this has been in the works for a while now. He didn’t say how long, but I’m assuming they’ve
0:14:49 been working on this long before Donald Trump was in the picture with it, right? So it makes me think
0:14:53 that maybe some of this stuff that Mark Andreessen was referring to, that like they already knew
0:14:57 OpenAI was going to be it months ago. Yeah, I didn’t actually make that connection. That’s
0:15:02 interesting. Yeah, that could be. I mean, I can get it from the government’s perspective. It’s
0:15:07 kind of like, do you want multiple companies building the nuclear bomb? It’s like, no, you
0:15:10 probably want one and you want to be in control of that. Yeah. Yeah, I don’t know. Like, I mean,
0:15:14 even though OpenAI is going to have a lot of support, I don’t think that means that like,
0:15:18 you know, xAI will not, or that Anthropic or Google. I think you’re going to see
0:15:22 these kind of projects from all of them, I believe. You think so? I think so. I think so.
0:15:27 I hope so. That’s what I would prefer to see, right? Like, I would kind of prefer to see
0:15:30 not just one company controlling all the power with this kind of stuff.
0:15:35 Yeah. But this is like what we talked about with open source before. It makes me way more
0:15:40 pessimistic about what chances open source has. Yeah. Because obviously, one of the reasons that
0:15:45 Sam is getting the financing to do this is because of stuff we’ve discussed on this podcast.
0:15:50 You know, they are probably seeing some amazingly promising signs from the internal models that
0:15:55 they’re building, right? They’ve learned how to scale up test-time compute, and they got o3
0:15:59 in three months, and they’re kind of mapping out what that means. And there was even an interview
0:16:04 today where they start talking about like o4 and saying, like, yeah, we’re expecting that to also
0:16:09 come kind of faster than people might anticipate. And the improvement seems to just kind of keep
0:16:14 going up at a very fast rate. So if that’s true, they’re like, yeah, AGI is basically here,
0:16:19 and all you need is more compute. And possibly we have ASI, you know, basically kind of a digital
0:16:25 god that you can create as long as you throw, you know, $500 billion at it. So now that that’s
0:16:29 a known thing, and like the flag’s been planted there, and like, yeah, we know this is a possibility,
0:16:32 I think everyone will be going after it. You know, that’s why I think you see Elon Musk talking so
0:16:36 much crap, because before this announcement, it was kind of like the AI cluster he was
0:16:41 building was going to be the largest in the world. And it’s like, oh, by the way, $500 billion.
0:16:47 Yeah, they’re having a data center measuring contest. Yeah, yeah. But also, you know, not to
0:16:51 get political, but there’s also reason they’re all going to Texas and places like that. As discussed
0:16:57 before, this is going to require a major rethinking about like energy, the creation of energy and
0:17:03 things like that, because these systems are going to require massive amounts of energy, like massive
0:17:08 amounts. Yeah. Yeah, one of the things I heard about why they wanted to build out in like Western
0:17:14 Texas specifically is it’s one of the spots in the country that gets the most sunshine and heat all
0:17:19 year long, right? And there’s tons and tons and tons of open land in Texas, right? Especially
0:17:26 West Texas. I’ve driven from Austin all the way to San Diego. There’s hours and hours of driving
0:17:31 where there’s just nothing, right? And it’s also the area that gets like the most sun throughout
0:17:37 the year. So yeah, the reasoning for it is partially political, but also just geographically,
0:17:43 there is the land there and there is also the sunshine to get assistance from the solar power,
0:17:47 obviously, right? Yeah, but there’s also like less restrictions on generating energy and using
0:17:51 energy in Texas versus California. I mean, there’s definitely multiple reasons they’re choosing that
0:17:58 location, but just the geography of it is also one of the big reasons as well. But yeah, I definitely
0:18:02 have like mixed feelings about it. When I first saw it, I’m like, this is amazing. We’re going to see
0:18:06 AGI like way sooner than anybody thinks. And that means we’re probably going to see ASI way
0:18:11 sooner than anybody thinks. And maybe we are going to enter in this like sort of post-capitalistic
0:18:17 world fairly soon, sooner than most people realize where we don’t have to work if we don’t want,
0:18:21 because AI is just going to do everything for us. And then I started seeing a lot of the like
0:18:26 Larry Ellison stuff. And then I started thinking about the more like regulatory capture sort of
0:18:32 element of it, that OpenAI now seems to be really tied in with the US government. I’m curious, like,
0:18:38 who else do you think could provide the sort of financing to do this? OpenAI has Masayoshi-san,
0:18:43 who’s essentially going to help them get to $500 billion over the next several years to build
0:18:47 these data centers. Who else has that capability? Oh, you mean outside of Stargate? Yeah, that’s
0:18:52 interesting. And also, with Masayoshi, I do wonder where the money is coming from. I don’t
0:18:57 think he has all that money. No, no, he’s raising it. I mean, even Elon said like he’s only got 10
0:19:01 billion dollars secured or something. But I don’t think Elon knows actually. Yeah, I don’t
0:19:04 think he does either. I mean, he might have heard from a friend or something like, oh, they were
0:19:08 trying to raise money. This is how much they had committed at the time or something. But like I
0:19:13 said, like with probably the internal data that OpenAI has, that is what’s making the fundraise
0:19:18 that large. If they didn’t have the internal data showing, oh yeah, here’s o4, and o5 is going to be
0:19:22 like in three to six months after that, and it’s going to be this much better. Like if they couldn’t show
0:19:26 that, they wouldn’t be raising all this money. They have something incredible inside of OpenAI.
0:19:30 Yeah. Well, I also think just the fact that, you know, they had the president announce it and they
0:19:34 did the whole announcement with the White House and everything like that, that’s only going to help
0:19:40 them raise, right? Like knowing that they’ve got the support of the US government, it’s not going
0:19:44 to hurt their ability to raise. That’s only going to help them raise the money. I think after
0:19:48 all the press conferences and all of that kind of stuff, I think it’s going to get a lot easier
0:19:52 for them to actually come up with the funds to actually do this thing. You know, I mean, you’ve
0:19:56 been saying it on podcasts since the very beginning, like nobody’s catching up with OpenAI. And if
0:20:01 nobody else can build data centers like this, now I really believe nobody’s catching up with OpenAI.
0:20:05 I have been saying that, right? For a long time. A lot of people are like, what are you talking
0:20:10 about? Look, Claude’s great and all this stuff. I don’t know. Just things I’ve heard from friends
0:20:15 who know Sam Altman, it’s just that the signs internally have been very positive for a long time, despite
0:20:19 the drama that people saw, you know, from the company. Yeah, I don’t know. I do believe it’s
0:20:23 eventually going to be OpenAI versus Elon Musk. Like I’ve been believing that for a long time.
0:20:27 I think Google will try to catch up. Who knows, maybe Google will even have to try to make an
0:20:30 alliance with Elon Musk or something. Like who knows, like what will happen there, like long term.
0:20:35 But I do believe that the only one who could attract the talent and the capital would be
0:20:39 Elon Musk. How big is the data center that Elon Musk is building? I can’t remember.
0:20:46 I’m asking Perplexity right now. So he spent $2.5 billion on 100,000 H100s and an additional $2.5
0:20:50 billion on other data center costs and infrastructure. So I’m seeing $5 billion.
0:20:55 Okay. So I’m seeing $6 billion, and a lot of it came from the Middle East. And I’d say that even the
0:20:59 thing with OpenAI, you know, the Stargate, probably a lot of that’s Middle East money,
0:21:04 quite honestly. Yeah. This actually does confirm $6 billion, because it does say a recent $1.08 billion
0:21:08 order placed for NVIDIA GB200 AI servers brought it up to $6 billion. That investment
0:21:12 just happened like within the last couple of weeks. So they were at $5 billion, just got
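The cluster figures quoted in this exchange can be tallied in a quick sketch (the dollar amounts are only the ones stated in the conversation, not independently verified):

```python
# Rough tally of the xAI cluster costs as quoted above, in billions of dollars.
# Figures are the ones mentioned in the conversation, not verified numbers.
h100_spend = 2.5    # 100,000 H100 GPUs
other_infra = 2.5   # other data center costs and infrastructure
initial_total = h100_spend + other_infra  # the "$5 billion" figure

gb200_order = 1.08  # recent NVIDIA GB200 AI server order
total = initial_total + gb200_order       # brings it to roughly $6 billion

print(initial_total, round(total, 2))
```

Which matches the back-and-forth: $5 billion before the GB200 order, roughly $6 billion after.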
0:21:16 another billion like a couple of weeks ago. Well, one thing that Scoble brought up too,
0:21:21 he mentioned this in an X post that I saw earlier today is like, there’s been a lot of talk about
0:21:26 essentially running out of data. Like if you’ve scraped the entire internet and all of the data
0:21:33 has already been sort of grabbed, what do you need a $100 billion data center for? Like where is
0:21:39 the data coming from? So that is an interesting question too, right? I’m not sure. With a $5 billion,
0:21:46 $6 billion data center, do you really need that $100 billion data center? I don’t know.
0:21:49 I don’t know. I kind of disagree with that. It was a question that he raised, but so
0:21:54 there was some recent research that came out showing some success, basically using data,
0:21:59 using content created by the AI to teach the AI, like in training the models.
0:22:03 Yeah, it’s synthetic data, right? Yeah, it’s synthetic data. And so the early signs seem kind
0:22:10 of promising, and this was not from OpenAI. So I assume that the o3 and o1 pro models are good enough
0:22:15 to actually create synthetic data that’s actually helping improve the models. And so if that’s true,
0:22:18 I mean, that’s what people have been saying for the last six months or so. If that’s true,
0:22:23 in theory, that’s no longer a problem. They can just keep producing new content and actually
0:22:28 train their models on that. And also, we’ve been saying they can keep scaling up with test-time
0:22:33 compute, so they can just throw more processing power to give these models more juice to actually
0:22:38 think. Yeah, you could spend infinite money. How much energy can you throw at it? The more you
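The test-time-compute idea the hosts are describing — spend more inference compute per question and get better answers — can be sketched as a toy best-of-n loop. This is purely an illustration, not OpenAI’s actual method: the “model” below is just a seeded random draw and the “verifier” is assumed to score each candidate answer.

```python
import random

def mock_generate(rng):
    """Stand-in for one sampled model answer; returns a verifier score in [0, 1]."""
    return rng.random()

def best_of_n(n, seed=0):
    """Spend more test-time compute (larger n) and keep the best-scoring sample."""
    rng = random.Random(seed)
    return max(mock_generate(rng) for _ in range(n))

# More samples drawn from the same stream can only help: the best of 32
# candidates is at least as good as the best of the first 4.
print(best_of_n(4), best_of_n(32))
```

The point of the sketch is the scaling knob: quality improves monotonically with n, so compute (and energy) spent at inference time translates directly into better answers, which is the argument for ever-larger data centers.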
0:22:43 throw at it, the smarter it will be. Not to mention if there is a goal of putting cameras
0:22:48 everywhere, that’s all new incoming data to build world models or whatever, right? If you’ve got
0:22:54 cameras and drones and on bodycams and on cars, companies put them on their buildings or whatever
0:22:59 for their own sort of security, and they’re trying to collect as much of that footage as they can,
0:23:04 that’s going to require some pretty massive data centers to be able to pull all of that in
0:23:10 and sort of analyze it for AI. But I also can see all of that data being what they use to train
0:23:16 like actual world models to understand physics and the world around us and how things and people
0:23:21 and objects move through the environment, right? Yeah, I think maybe our listeners are like,
0:23:25 “Okay, cool. What does it mean for me?” Anything we’ve said on this show about timelines,
0:23:31 just cut that in half or less now. Like literally a lot of things that we may have been saying
0:23:36 three to five years, some of those things may be one or two years now. Development is going to
0:23:40 increase. And like I said, the reason they’re able to raise this much money is OpenAI has something
0:23:44 amazing internally that they’re showing investors. Yeah, I would believe it. And they still are
0:23:49 maintaining their partnership with Microsoft and Microsoft is still getting access to any sort of
0:23:54 new OpenAI models that they develop according to Microsoft, which means Clippy is going to get
0:23:59 really good really quick. But Microsoft did not invest. I mean, to me that’s a signal. In Silicon
0:24:04 Valley, if somebody invests and then they don’t follow on into the next round, you try to frame it
0:24:07 as not being a negative signal, but that is some kind of signal. And I don’t think it’s a negative
0:24:12 signal on OpenAI’s part. I kind of think that OpenAI wanted to have other partners. It’s like,
0:24:16 yeah, Microsoft’s one of our partners, but like, look, we got all these other partners. We don’t
0:24:21 just need Microsoft. Well, the impression that I get is that the scale that OpenAI has in mind
0:24:26 is bigger than what even Microsoft Azure can provide, right? Like I think they’re imagining
0:24:31 this sort of scale that’s just sort of unfathomable for us. That’s what I said before. I said like
0:24:35 people were talking about how Microsoft is going to take over OpenAI. I’m like, I envision a potential
0:24:40 future where OpenAI acquires Microsoft to not have to deal with their contracts with them anymore.
0:24:44 Like it’s the other way around. Yeah, yeah. I wonder if there’s a world where
0:24:50 XAI and Elon actually get rolled into this new project. I know Sam Altman and Elon Musk have
0:24:55 been beefing on X and whatever, right? But if they’re both really in it for the good of the
0:25:00 country and building like the super intelligence, makes me wonder if there’s a world where they
0:25:04 bury the hatchet and work on this together. Well, it’s crazy, you know, because like Sam Altman,
0:25:07 definitely like, you know, I think I told you before, like I did a speech at Stanford and Sam
0:25:12 was there at the same time. And I know politically he was at the time pretty far left. I think he’s
0:25:15 like kind of switched now, being more somewhere in the middle. I think they’re all opportunistic.
0:25:20 That’s what I think. Yeah, he hated Trump for sure. Like that’s definitely for sure.
0:25:24 And so I do wonder now, it’s going to be kind of odd, but like I wouldn’t be surprised if that
0:25:28 happens. It literally would be some kind of deal brokered by Trump between the two of them. Like,
0:25:33 hey guys, makeup, it’s for the best of America. But I’ve been saying this for a while. I do believe
0:25:38 that open AI owes Elon Musk something. I really do. Like I think it’s crazy that he owns no equity
0:25:43 in open AI. I think that’s ridiculous. In the early days of open AI, like everyone in San
0:25:47 Francisco, when they would talk about the company, they would talk about Elon Musk. And like Sam’s
0:25:51 name would occasionally come up, but it was like, oh, open AI, oh, that’s Elon Musk thing.
0:25:56 Yeah. The first time I ever heard of OpenAI, it was with Elon Musk attached, right? Like that
0:26:00 very first time I ever heard of it, it was like, this was another Elon Musk company.
0:26:03 Yeah. It was like his company, and Sam Altman helped run it. That was at least
0:26:08 the messaging externally that was being presented to people. And then also, like, the main talent
0:26:14 early on, he helped recruit. The capital, he provided some of it, but also it was from his friends.
0:26:18 Yeah. Yeah. Didn’t he bring on Ilya? Like I think Ilya was working with Elon in the beginning.
0:26:21 I think he did. I could be wrong about that. Don’t quote me.
0:26:26 Yeah. My understanding is he did. And so a lot of the main talent he brought on and the capital
0:26:30 and just the reputation. And then also, having someone like Elon Musk brings so much
0:26:34 talent to you too. Just like, oh, it’s Elon Musk’s AI company. I’ll go work there now.
0:26:39 And so the fact that he owns nothing is crazy. So I would love if something happened where
0:26:44 Elon Musk owns a small piece of OpenAI and they have some kind of technology sharing deal,
0:26:48 but like, will that actually work? I have no idea. Yeah. Yeah. I don’t know. Anyway,
0:26:53 there was another really big piece of news that came out this week and it is related to open
0:26:58 source and China. Right. This week we got DeepSeek R1. There were already these DeepSeek
0:27:05 models out there, but R1 is this new, like, reasoning model. And I did test it on a live stream
0:27:12 and I tested it side by side with o1, just standard, and o1 Pro. o1 Pro definitely
0:27:17 still outperforms DeepSeek R1. It was definitely giving me more, like, quality in-depth
0:27:25 responses, but comparing it to o1, they felt pretty even, which to me blew my mind for an
0:27:29 open source model. I know you have some opinions on it. Obviously the model’s out of China. Yeah,
0:27:34 our good friend Matthew Berman was doing some testing with it and he asked it about
0:27:39 if like Taiwan was part of China and it basically said Taiwan is part of China and anybody who
0:27:44 opposes this thought will be shut down or something weird like that. Right. It wasn’t exactly that,
0:27:48 but yeah, it was in that vein. No, I’m definitely paraphrasing, but it was basically saying like
0:27:52 any plans for independence will not work or something like that. Right. It was some wording
0:27:57 like that, which I found really interesting when I actually asked that same exact question,
0:28:02 like I put the exact same prompt into deep seek and it just said, sorry, I can’t answer that.
0:28:05 Let’s talk about something else. That’s what it did when I put the prompt in. That’s what I’ve
0:28:09 been calling it like open propaganda, like open source. I don’t know. It’s open propaganda.
0:28:14 I mean, because it is like, okay, so why is China allowing this model to be open source and be out
0:28:18 there? The Chinese government is allowing it. And to me, that is why they’re doing it is because
0:28:23 they know that people love open source. It’s a great way psychologically to have people go,
0:28:27 oh, it’s open source. I love it. But there’s other things going on here. Like the reason they’re
0:28:31 allowing it is because if it becomes one of the biggest models, they get the ability to kind of control
0:28:36 reality and distort history through that. Right. Like, okay, Tiananmen Square didn’t happen or different
0:28:40 things like that. They can change that in the AI model. And, you know, full disclosure, I mean,
0:28:46 I did live in Taiwan. I studied Mandarin there, kind of biased. I love Taiwan. But for that reason,
0:28:49 I just, I can’t support the model. It’s because like, yeah, you talked to it about Taiwan and it’s
0:28:54 like, yeah, it’s owned by China. No one in Taiwan sees it that way. Or maybe a few people, but like
0:29:00 not many. Yeah, yeah. But I mean, like there is biases built into pretty much every AI model,
0:29:06 right? Like, you know, a lot of the US models refuse to talk about certain things or
0:29:11 sort of share their own political bias that was trained in as well. Right. But yeah,
0:29:16 I definitely see that. And the fact that it’s actually open weights, though, people can take
0:29:23 this deep seek R1 and fine tune it and sort of essentially train out all of that bias if they
0:29:27 wanted to, right? Because the model weights are open source as well. Like you can actually take
0:29:33 the weights and fine tune them if you want. So, you know, as a whole, as a model that’s open source,
0:29:38 I do think it’s really, really impressive that we’ve gotten to o1 level this quickly,
0:29:42 right? Like, you know, they’re constantly talking about the gap between when we released
0:29:48 GPT-4 to o1. Look at how big that gap was, and then o1 to o3. That was only like a three month
0:29:52 difference. Well, we’re seeing that sort of same scaling happen in open source as well.
0:29:57 Yeah, there was a tweet from one of the OpenAI guys and he was kind of saying,
0:30:02 I expect to see a lot of new reasoning models in the next few months in open source and other
0:30:07 areas that don’t fully get, you know, how it’s done. Or like he was basically kind of hinting that
0:30:10 OpenAI is doing something a little bit different than you think. It’s not as straightforward as
0:30:14 what you think. And I kind of still think that’s probably true. I think that’s why OpenAI is moving
0:30:18 faster than anyone else. They have discovered something. But I did test DeepSeek. I mean,
0:30:24 it was impressive. It’s better than Claude. It’s definitely in the ballpark of o1. I found that
0:30:29 in some ways it was better than o1 and in some ways it was worse. I tested it on coding after
0:30:34 you told me to try it. And I found that like in some ways the code was better in some areas.
0:30:37 I was like, oh, that’s amazing. Like it’s actually better. And then it would do some things that
0:30:42 were like pretty dumb that o1 would never do. Yeah. I was like, I think there’s something missing
0:30:45 in the reasoning side of this. I could see it. There’s something they’re not getting, some
0:30:50 technique that OpenAI has discovered that this model is not using. It would
0:30:53 hallucinate more. It would imagine files and things like that that didn’t exist. I was like,
0:30:58 what? Like, how? This is a reasoning model. Like shouldn’t it have, like, checked to see that that
0:31:02 file actually existed? It’s actually hallucinating that. That’s like, you know. Yeah. Well, one thing
0:31:07 that I do like about the R1 model though is like you can literally see everything it’s thinking
0:31:12 as it thinks, right? You can see it go, all right, let me test this. Okay, that didn’t work. Let me
0:31:16 test this. That’s not the right way to do it. Let me test it. And it’s literally showing you
0:31:21 everything it’s trying and testing and going back and forth. And I think that’s pretty fascinating.
0:31:26 But I also think there is a reason OpenAI isn’t showing you all of that, right? Like it does show
0:31:30 you some, but it’s almost like a summarized version of how it thought as opposed to the whole
0:31:34 thinking process. Yeah, there’s some secret sauce there. And that’s why I’ve been saying for a while,
0:31:38 like I’m a huge fan of Elon Musk, but he was kind of saying, oh, we’re going to have this new model
0:31:42 and it’s going to be better than OpenAI. It’s going to be the best in the world. You know,
0:31:46 I don’t know if that’s the case. I think that you can’t just, you know, okay, you have more
0:31:50 processors and now you trained a larger model. I don’t think that’s the game. Like I think OpenAI
0:31:53 is playing a different game now. They’ve learned that it’s a combination. You’re training
0:31:57 the model with more data, but also there’s a whole test time compute thing there and some kind of secret
0:32:02 sauce that they have discovered. With DeepSeek, I guess the one takeaway is, like you’ve been saying
0:32:06 before, eventually we are going to have open source models that are like somewhat close to the best
0:32:10 models. Maybe they’re not as good in some ways, but you’re going to be able to have like models
0:32:14 that you can run on your local machine that are really freaking good. Well, and the reason that
0:32:19 I pointed out to you that I thought you might be interested in it was more the fact that there’s
0:32:22 actually an accessible API for it right now. Well, you can’t use o3 at all right now,
0:32:28 but there is no like o1 API. So you can’t just use it inside of something like Cursor or
0:32:32 Windsurf or something like that. Oh, you can, you can use o1 in Cursor. Oh, you just can’t use o1
0:32:37 Pro yet, right? Correct, correct. Okay. And there’s a lot of limitations on o1 in Cursor too, I
0:32:41 think, unless you put in your own API key. But like if you’re using it by default, it’s like very
0:32:46 restrictive. Like you run out of like queries or prompts, whatever, very fast. Oh, so even when
0:32:50 you’re using the API, they still rate limit you? Well, if you put in your own API key, they don’t,
0:32:55 but if you’re using just like the Cursor plan or whatever, it’s very restrictive. It’s probably
0:32:59 because the o1 API is more expensive. So they have some kind of like system where it’s like,
0:33:02 if you’re paying us 20 bucks a month, we’re not going to allow you to run up some gigantic o1
0:33:07 bill and then we pay for it. Yeah, yeah, yeah, yeah. Okay. Yeah. For some reason, it slipped my
0:33:12 mind that o1 had an API available. And I’m like, oh, well, here’s an alternative to o1 that has
0:33:18 an API. It’s o1 Pro that doesn’t. And actually, it’s worth noting too that Sam Altman has been
0:33:22 being way more public about their like upcoming plans recently, like he’s talked about like
0:33:28 o3 mini is coming in the next two weeks, which is like, holy crap. Although he did say o1 Pro is
0:33:33 still better than o3 mini. Did you see that? Yes. But under that tweet, somebody asked him,
0:33:38 how does o3 mini compare to o1 Pro? And he said o1 Pro is still going to be way better at most
0:33:43 things. Yeah. I mean, we had seen some benchmarks that had shown that like maybe a few weeks ago,
0:33:48 but the big difference will be that o3 mini will be really fast. And also it’s really
0:33:53 cheap for OpenAI to operate. So it should be a model that’s like better than DeepSeek,
0:33:59 way better than o1, that’s very fast. And the other comment that he made on that same tweet
0:34:04 is that they’re going to release it with an API, which is amazing. Not only that, but even o3
0:34:09 Pro is going to have an API. So like apparently o3 and o3 Pro are coming. So they’re going to
0:34:13 continue the thing where now that they’ve learned that you can just throw more compute at the models,
0:34:18 they can always have a better model available, just like throwing more compute at it, right?
0:34:22 And so they will continue to have like a pro model. And the amazing thing there was I was
0:34:26 concerned that they’re going to like increase the price to like 2000 or something. Because I was
0:34:30 like, well, I might actually pay it. Like if it continues to get better and it replaces me hiring
0:34:36 an engineer, I might actually pay that. But he said it’s going to continue to be $200. And that
0:34:41 the o3 Pro model will have an API as well. So that’s super exciting. So o3 Pro is going
0:34:46 to be on the $200 plan? Yes. And have an API. That’s the one where they were like, it costs
0:34:51 like three grand per task right now. Well, they said that they’ll figure out how to make it cheaper.
0:34:54 And apparently they seem to think that they probably have or they’re going to have by the
0:34:58 time it comes out. And the latest information too, I forgot who it was. If it was like their chief
0:35:01 product officer or something like that, he was interviewed by the Wall Street Journal. Kevin
0:35:06 Weil. Yeah, exactly. Yeah. And he said that the current plan is that the o3 models, not the mini,
0:35:09 the mini should be coming in the next week or two. Maybe by the time you’re listening to this,
0:35:15 it may already even be out. But that the full-blown o3 models, which probably means o3 and o3 Pro,
0:35:19 timeline is like two to four months. For someone who’s been using o1 Pro, and like at the last
0:35:23 episode we did where I showed you how much value you can get out of o1 Pro if you give it tons of
0:35:28 context, to imagine that we’re about to get a model like three to five times smarter than that
0:35:35 in the next three months. It’s just blowing my mind. Well, and we’re about to get agents. I mean,
0:35:40 don’t hold me to this, but like the week that this episode is going live, we might already have
0:35:45 agents or it might be announced this week. But like apparently, according to the information,
0:35:50 OpenAI is preparing to release a new ChatGPT feature this week that will automate complex tasks
0:35:54 typically done through the web browser, such as making restaurant reservations or planning
0:35:59 trips. According to a person with direct knowledge of the plans, they’re calling it Operator. And so
0:36:04 this says this week. Yeah. And actually, there was some kind of benchmark leaked recently showing
0:36:09 Operator versus Claude’s computer use, showing that Operator is way better. Like, not perfect. A lot of
0:36:13 stuff like Claude was like in the 50% success rate. And then Operator was like in the 80%
0:36:17 success rate, but still, yeah, dramatically better. Yeah. And that’s actually when we talked about
0:36:21 computer use before. That’s what I said. I was like, I’ve heard that like OpenAI has stuff internally
0:36:26 that’s pretty good. They’re just not happy to release it yet. Right. Whereas Claude released their
0:36:30 thing very early, but no one really used it because it wasn’t that great. But it sounds like,
0:36:34 I mean, they’re saying it’s going to like do stuff for you, like make reservations online,
0:36:38 buy things for you. Just like basic stuff you would do in the browser. Probably when Operator
0:36:42 comes out, you’ll be able to just tell it, hey, go do that for me. And then it’ll just do it.
0:36:46 Yeah. It’s really a pain in the butt to set up the Anthropic version, right? Like if you want to
0:36:49 use computer use, you have to do it through Docker. You have to get it all set up. And you
0:36:54 actually have to use it with their sort of crappy browser, like on a remote computer.
0:37:00 It’s not a great experience. I would imagine when ChatGPT rolls it out, it’s just going to work
0:37:04 in your own browser. It’s going to be a lot more seamless of an experience.
0:37:08 Yeah. Yeah. So yeah. So exciting times. I mean, we’re about to have like AI that can
0:37:12 like just code full products for you. You just talk to it and it makes the whole thing. Like
0:37:17 this is probably this year. And you have AI that can like just use websites for you and do whatever
0:37:21 you would do on your computer, do it for you. It’s exciting. I mean, a lot of stuff that just
0:37:24 is time consuming stuff that you don’t want to waste your life on. Soon you’re not going to have
0:37:28 to waste your life on it. I can’t wait till the day where I’m just like, hey, I need to update my
0:37:33 Future Tools website, go find all the news and put it on my website for me. I’m going to go take a
0:37:39 nap, which is funny because I don’t think that’s actually that far off. Probably not. I actually
0:37:43 set up that thing with ChatGPT where it sends you little notifications or whatever. They
0:37:46 definitely need to improve that experience, but it’s kind of cool. It’s been sending me little
0:37:50 Japanese words to learn and sending me little summaries of AI news. Yeah. I’ve done it too.
0:37:56 I set up a daily task list to find any AI news from that morning and don’t find anything from
0:38:01 like before today. I only want the most current news. Yeah. And it sends me an email every morning
0:38:05 to let me know that it’s done it and I click the link and it shows me what it found. Yeah,
0:38:09 and it’s useful. I tried it recently for my newsletter issues to see if anyone would complain,
0:38:16 but I just took a summary of the news and then used Whisper Flow just to talk to my computer
0:38:21 about my own thoughts on the matter. Yeah. Right? Did that for like 10 minutes,
0:38:25 handed it off to o1 Pro, where I had provided all this context of what makes a good newsletter
0:38:29 issue and what doesn’t, and it edited all my words and it looked really good and everyone seemed
0:38:34 to like it. Nice. And it dramatically reduced how long it took me to create my newsletter issue.
0:38:37 Like, you know, instead of taking like a few hours, it probably took me an hour to finish
0:38:41 everything. And so I’m just like, this new world is so exciting where you just like all the kind
0:38:44 of work that you don’t like doing. You know, I like sharing my opinions. I don’t like sitting
0:38:48 there like editing them for like hours and doing all that. Yeah. Yeah. It’s going to do all that
0:38:52 for me. It’s awesome. Yeah. I saw Dario Amodei from Anthropic. Was it the Wall Street Journal
0:38:57 that was doing all the interviews out at that sort of Davos thing? Yeah. So he was on there.
0:39:03 They were actually asking about like the future of jobs. And he essentially said that like what
0:39:08 they found and what a lot of research has shown is that when you give people all of these automations,
0:39:13 it doesn’t actually take away too many jobs. It just makes people way, way, way more efficient
0:39:18 at the stuff they actually want to be doing within their job. And he was talking about how
0:39:24 basically so many people have been trying to use AI to replace jobs. But if you start using AI as a
0:39:30 way to like sort of enhance jobs, they find that the effectiveness of those people is way better
0:39:36 than when you use AI to replace jobs like the efficiency and the output and that sort of thing
0:39:42 is like dramatically improved when they’re using AI to actually like improve the things and also
0:39:47 hone in on just doing the things that are like their core competencies and letting AI do all the
0:39:53 mundane stuff. So, you know, I think that’s sort of the next phase. I mean, there might be a phase
0:39:57 beyond that where it’s just kind of like, all right, AI and the robots do everything and we just
0:40:02 get to travel and live our best life. That might be like a future phase. But I think the next phase
0:40:06 that we’re moving into is like, we get to focus on the stuff that we actually enjoy doing in the
0:40:12 work that we do while AI does all the sort of mundane boring stuff we just don’t want to do
0:40:16 because it’s repetitive or whatever. Yeah, totally. I mean, like, I’m already seeing that in my
0:40:20 life right now. The fact that like these things are about to get three to five times better.
0:40:24 And it appears that that’s going to start happening like every three to six months,
0:40:28 not every like two years. Yeah, it’s just amazing. I mean, because like, now that you’ve got
0:40:32 Stargate, I don’t think people are processing it. Like, they’re not reimagining what’s
0:40:36 going to happen based on the new data, because things are going to improve faster than you
0:40:42 expect, if you’re listening to this. And OpenAI also recently talked about how their last models
0:40:45 took, like, I think a year and a half or longer than a year and a half to train. And they were like,
0:40:50 maybe like 50% to 100% better. You know, with the new o3 model, they’re seeing like a, you know,
0:40:55 3x improvement in a matter of three months. We’re entering a new phase of development here. We’re
0:40:59 talking about probably like improvements speeding up like 10 times or more. Yeah. And now there’s
0:41:05 going to be a hundred billion dollar data center worth of compute behind OpenAI to move even faster.
0:41:10 So I think you’re going to see that sort of exponential growth continue, right? Like,
0:41:14 it’s just going to be a vertical line. And also, now that that’s
0:41:16 announced, I mean, you’re going to see Elon Musk and everyone else
0:41:20 raising more money to go after this faster too. It’s not going to be just OpenAI. Everyone else is going
0:41:24 to be investing more into this for the race. For sure. It’s going to
0:41:29 happen faster. Yeah. Well, we did say there was actually a lot to talk about. We probably rambled
0:41:35 a lot. Sorry. Yeah. We had a lot to talk about. It felt like a pretty monumental week between
0:41:41 the Stargate, between DeepSeek, between, you know, the announcements from OpenAI and o3 coming.
0:41:46 There was a lot, a lot, a lot of big stuff that I felt like we needed to kind of deep dive and
0:41:51 unload. And hopefully anybody who’s just sort of listening to this podcast and checking in to
0:41:56 stay looped in, you feel a little more looped in. You might feel a little bit more optimistic.
0:41:59 You might feel a little bit more scared. I don’t know. Or confused or whatever. Yeah.
0:42:02 Maybe I’d like to share something about what Stargate means for the future. There was this tweet,
0:42:05 and it’s kind of philosophical, but Roon shared this tweet. You know, people are like,
0:42:10 why is it called Stargate? Roon’s like a well-known guy who works at OpenAI. He’s like,
0:42:13 a lot of people know who he is, but he stays anonymous. He’s not Sam Altman. A lot of people
0:42:19 think he’s Sam Altman. He’s not. But he had this tweet: the Stargate, blasting a hole into the
0:42:25 Platonic ether to summon angels. First contact with alien civilizations. So I think that is kind
0:42:29 of a summary of like what Stargate means. I mean, like this is we are like summoning the angels. We
0:42:34 are making contact with a new intelligence, you know, an alien intelligence. And that will be
0:42:40 artificial superintelligence. It will be like us discovering something beyond anything we can
0:42:44 imagine. And that’s what this is designed to do. And so it’s just, you know, I think it’s
0:42:47 important for people to take a moment to kind of try to take that in. It’s a lot to take in,
0:42:50 but that is what humanity is trying to accomplish right now.
0:42:57 Wild. Well, hopefully these aliens are coming to save us and not to destroy us. I choose to
0:43:03 lean into the optimism and I believe that it’s all going to make humanity better, make us all
0:43:08 better, more augmented humans, and help us get more done that we want to get done.
0:43:12 So I’m looking forward to the future. I’m excited about it. There are some little concerns,
0:43:17 but I still lean mostly optimistic on all of this. And I know you do as well.
0:43:22 Yeah. And I think on that note, we can go ahead and wrap this one up. I think we’ve sort of
0:43:27 unloaded everything we have to say about all of these big announcements that came out this week.
0:43:33 If you liked this episode and you want to stay looped in on all of this latest AI news and hear
0:43:38 some amazing interviews and discussions and deep dives around ways to practically use AI in your
0:43:42 life, make sure you’re subscribed to this, either our YouTube channel or audio channel,
0:43:46 wherever you listen to podcasts. This podcast is available all over the place.
0:43:50 We’d love for you to subscribe and thank you so much for tuning in. Hopefully we’ll see you in the next one.
Episode 43: How massive is the $500 billion AI Stargate Project, and what does it mean for the future of AI? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) break down this monumental development and discuss its implications.
This episode delves into the ambitious AI infrastructure project announced by Donald Trump, alongside key tech players. Learn why this initiative is seen as a groundbreaking moment in AI history, its promises for job creation, health advancements, and the potential concerns around surveillance and data privacy. The hosts also touch on related advancements including DeepSeek-R1, OpenAI’s upcoming models, and how they all fit into the larger AI development landscape.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) Stargate’s $500B AI Infrastructure Investment
- (03:17) AI Innovation: U.S. Health Revolution
- (09:17) Surveillance Expansion Concerns
- (10:27) Corporate Influence and Surveillance Speculation
- (14:57) Sam’s Breakthrough in AI Development
- (17:16) Mixed Feelings on AI’s Future
- (20:15) NVIDIA’s $6B Data Center Expansion
- (23:14) OpenAI Expands Beyond Microsoft
- (27:43) AI Model Bias and Control
- (29:11) OpenAI’s Edge: Unique Reasoning Models
- (34:20) New O3 Models Release Timeline
- (38:08) AI Boosts Job Efficiency, Not Loss
- (39:32) Rapid Technological Advancements
—
Mentions:
- OpenAI: https://openai.com
- NVIDIA: https://www.nvidia.com/en-us/
- SoftBank: https://www.softbank.jp/en//
- Oracle: https://www.oracle.com/
- MGX: https://www.mgx.ae/en
- DeepSeek: https://www.deepseek.com/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt’s Stuff:
- Future Tools – https://futuretools.beehiiv.com/
- Blog – https://www.mattwolfe.com/
- YouTube – https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano