AI transcript
0:00:11 recurring co-host, Maria Garim. And it’s perfect timing too, because this week the AI world
0:00:18 delivered absolute chaos, and I needed Maria’s sharp takes for this one. Because something
0:00:25 really wild just happened. OpenAI officially went into code red mode. Like, not metaphorically,
0:00:31 literally. Internal alarms, priorities shifting, projects paused, and all of it was triggered by
0:00:37 Google’s sudden surges with Gemini 3 and Nano Banana Pro. So in this episode, we break down
0:00:42 what the code red actually means inside of OpenAI, why Sam Altman slammed the panic button,
0:00:48 and whether this is the first real sign that Google’s full stack advantage is becoming an
0:00:54 existential threat to other companies. Then we dive into the model wars, Claude Opus 4.5,
0:01:02 Gemini 3, GPT 5.1, and why some models feel smarter, why others feel dumber, and how the power dynamics
0:01:09 between Google and OpenAI and Anthropic and xAI are all shifting in real time. And that’s not all. We
0:01:17 also get into the explosion of AI video tools that came out over the past week, like Runway’s Gen 4.5,
0:01:24 and Kling’s multiple model drops, and Nano Banana’s growing dominance. And some of these demos are
0:01:29 pretty insane. I mean, some of them still make people look like they’re limping through the uncanny
0:01:35 valley, but others are really impressive. And Maria and I do not hold back in this one. We also touch on
0:01:42 Alibaba’s unexpected entry into AI glasses, why NotebookLM might secretly be the best research tool on the
0:01:48 planet, and what happens when your mom sends you AI videos that she thinks are real. It’s a jam-packed
0:01:53 episode with a ton of big shifts, some spicy opinions, and a little bit of smack talk about
0:01:59 Sam Altman. All in good fun, though. So without further ado, let’s jump on over and dive in on
0:02:01 Everything That’s Been Going On with Maria.
0:02:11 Being a know-it-all used to be considered a bad thing, but in business, it’s everything. Because
0:00:18 right now, most businesses only use 20% of their data, unless you have HubSpot, where data that’s
0:00:24 buried in emails, call logs, and meeting notes becomes insights that help you grow your business.
0:02:29 Because when you know more, you grow more. Visit HubSpot.com to learn more.
0:02:37 Yeah, so we have some major news that’s happening this week. It seems like there’s some shakeups
0:02:38 happening right now.
0:02:44 I wouldn’t say shakeup. Everyone has a red button. Feels like Didi all over again. You know, like
0:02:49 everyone wants to hit that code red. And Sam Altman is one of these people. I think he has like a red
0:02:54 button on his desk and he wants to hit that. Because I saw the news. I was like, what do you
0:02:57 mean code red? What’s happening in the OpenAI house right now?
0:03:04 Yeah, so apparently they have a very similar system to like the U.S. government threat level where they
0:03:11 have like the various color coding. So I guess like a code yellow is everything’s going fine. They were
0:03:16 in code orange and now they’re at code red. That was what I read. Here, let me actually share the
0:03:21 article. But apparently OpenAI decided we need to go into what they call like code red mode
0:03:28 because of Google, right? We saw over the last couple weeks that Google Gemini 3 came out and
0:03:34 really blew a lot of people away. We saw Nano Banana Pro come out. That blew a lot of minds. And it was
0:03:42 taking a lot of the sort of narrative and AI mind share away from OpenAI. And that caused Sam
0:03:48 Altman to go, all right, we need to like rethink things here. So if we want to get into breaking
0:03:54 down the details of the code red, basically what it is is Sam said, we’re going to deprioritize a lot
0:04:00 of things in OpenAI. We were planning on putting ads into chats, which is a whole other story that a lot
0:04:04 of people are going to hate. They were planning on putting ads right into the chat window. They’re
0:04:10 putting a pause on that. They have their OpenAI Pulse, which is like their daily AI news update that you can
0:04:14 get sent to your phone. They’re putting a pause on that. Their agent models they’ve been working on
0:04:20 putting a pause. The browser, all of the stuff that is not making ChatGPT better, they’re essentially
0:04:26 saying, pause that. Let’s just make our models smarter and smarter for right now. And like everything
0:04:31 else is less important at the moment. It felt like Mayday, Mayday. I was reading the article before I was
0:04:35 writing in the newsletter. I was like, there’s no way Sam Altman is hitting code red right now.
0:04:40 Something is definitely off. But like, let’s say we want to compare Gemini 3 to
0:04:47 GPT-5.1, which by the way, what was that? What do you mean 5.1? What does it add to anything right
0:04:52 now? I didn’t feel any difference, but that’s me. You know, like I’m on GPT every single day.
0:04:55 How is it different than any of the other stuff before?
0:05:02 Yeah. I mean, I didn’t really feel a big difference from like 5 to 5.1 other than the fact that I feel
0:05:07 like ChatGPT has gotten dumber over time. Like I honestly feel like ChatGPT has gotten dumber.
0:05:16 Like the 4-series models, like 4o, and when they had 4.5 and even 4.1, which weirdly came out after 4.5.
0:05:24 Yeah. Those models were all like pretty decent. And I’ve been using 5.1 lately and it hallucinates more
0:05:29 than any model I’ve used before it. Like when you ask it questions about things that you know a lot
0:05:33 about, that’s kind of how you can test if it hallucinates, right? Like you, you go and ask it
0:05:38 questions about something you have deep knowledge on, go and read it and see how factual it is based
0:05:45 on what you know. And I’ve noticed that lately it really, really sucks. Like it is giving me a lot
0:05:50 of wrong information and I’m having to like double check constantly on what it’s doing. So I don’t
0:05:56 know what it is about 5.1, but I can’t stand it. Like I, I want ChatGPT to release a new model
0:06:00 because that one sucks. Like if they need to, as soon as possible, I was like asking it one
0:06:04 question. And for some reason, because there’s like, you know, a pile of information happening in
0:06:09 the chat before it kind of was summing up everything that I was saying. And then at the end, it was giving
0:06:15 me what I asked it in the first place. Like, that’s not what I asked you. Like on a Monday, don’t piss me
0:06:21 off. So do we think like Gemini 3 is better? Like, is it actually smarter? Is it like incredibly
0:06:27 well-connected to Google’s data? How do we compare it to whatever just happened with GPT 5.1?
0:06:33 Yeah. I mean, I definitely think Gemini 3 feels a lot smarter, but I think the sort of bigger picture
0:06:39 here, more so than just like who has the smarter models, because I think that’s going to be sort of
0:06:43 like this cat and mouse game that we’re just going to constantly see. Yeah. Google, Anthropic,
0:06:49 OpenAI and, you know, probably xAI. Those four companies, we’re just going to see them just keep
0:06:54 leapfrogging each other and keep growing. One’s going to have the smartest model for two weeks. And then
0:06:58 one of the other ones is going to have the smartest model for a week. And the models are just going to
0:07:03 get smarter and smarter. And we’re going to see these labs just keep passing each other. I actually don’t
0:07:09 necessarily think that’s the sort of bigger picture of this code red here. I think the bigger picture of
0:07:16 this code red here is OpenAI and Google, they’re fundamentally like two completely different
0:07:21 companies that have two completely different abilities, right? So if you look at OpenAI,
0:07:28 OpenAI is still a startup. When it comes to like their cloud compute, they rely on Microsoft’s cloud
0:07:33 servers. They’re now working with Oracle. They’re working with like other cloud providers now as well,
0:07:39 but they rely on other cloud providers. When it comes to like the compute and actually training
0:07:45 new AI models, they’re reliant on NVIDIA’s hardware, right? Yeah. They need that hardware
0:07:51 from NVIDIA. If they want to like integrate into Google Drive, they have to, you know, work directly
0:07:59 with Google. Now compare that to Google who literally owns the full stack. They have a frontier research lab
0:08:03 where they can keep on making better and better models. That’s DeepMind, right? They’ve got their
0:08:09 own hardware. They’re not relying on NVIDIA. They have their own TPU chips that they can train these
0:08:14 models on. They’re not relying on, you know, a Microsoft or an Amazon or something like that for
0:08:21 the actual data centers and the processing. They have Google Cloud. They’ve got their own sort of compute
0:08:27 infrastructure. They’ve got the app layer that people use, right? They’ve got Google Chrome.
0:08:33 They’ve got Android operating system on mobile phones. They’ve got YouTube. They’ve got Google
0:08:40 Drive and Google Sheets. They’ve got Gmail. They’ve got all of these things that users are already using
0:08:48 in their ecosystem right now that they can now bake Gemini into. And they have the AI search business,
0:08:54 which generates hundreds of billions of dollars a year for them. So Google can afford to lose money
0:09:00 every single time somebody enters a prompt because Google makes their money in other ways right now.
0:09:08 So when you compare and contrast OpenAI and Google, OpenAI is burning something like $200 million a
0:09:12 month. Like that’s how much money they’re losing every month. Pocket money.
0:09:18 They have to keep on raising money constantly from investors to make sure that they can keep
0:09:23 this business going. And they’re so reliant on so many other companies to make sure their business
0:09:29 keeps on going. Google’s not reliant on anybody and they have an abundance of money to just do
0:09:35 whatever they want and subsidize the spending on AI. So in my mind, this is where the real code red
0:09:42 comes in for OpenAI. It’s not like, oh no, Gemini has a smarter model. It’s like, oh no, Google has
0:09:46 everything we don’t have. And now they also have a smarter model.
0:09:52 Yeah. Yeah. Do we think that they’re winning the race as we speak? Like, do we think that because
0:09:56 they are a powerhouse, they win the race right now?
0:10:01 You know, I don’t necessarily know if there is a finish line to win the race. You know what I mean?
0:10:05 Like when is the race won? Yeah, that’s actually a nice perspective.
0:10:14 Look, we know AI moves fast with new models dropping every week. So we created your weekly
0:10:20 cheat sheet with what’s happening this week in AI with over 20 prompts for the newest tools that you
0:10:25 can copy and paste. A quick reference table that shows you exactly which model to use when
0:10:41 I mean, so many models are coming out right now. And I feel like people aren’t talking much
0:10:48 about Opus 4.5, and it’s a travesty and a tragedy. And I don’t like it when people do that. So my question for you,
0:10:53 because I think you play around with a lot of these AI tools, more than me, and I write about
0:11:00 them every single day. What did Opus 4.5 like genuinely fix that earlier versions didn’t?
0:11:05 Yeah. So I do a lot of sort of like vibe coding stuff in my free time. For me, that’s like one of
0:11:13 the most fun things. Nerd. Yeah, totally. Admittedly. But like, so Gemini 3 is really,
0:11:18 really good at coding. Opus 4.5 is really, really good at coding. But I’ve noticed they’re
0:11:24 good in different areas. So Gemini 3 I’ve noticed is really good for front end development. It’s great
0:11:30 at design. I don’t know if you’ve ever used like Claude or ChatGPT to code up something. All the
0:11:36 designs look exactly the same. They’ve got this like purple gradient background, the fonts they use,
0:11:40 the text they use, like you can see a page and pretty much instantly go like I know that that was
0:11:46 generated by Claude, right? Like, this was Claude. They have a look to them. Gemini, on the other hand, seems
0:11:51 to actually have more like design taste. So if you’re trying to build like a website or an app,
0:11:56 and you tell it to design it for you and make it colorful and vibrant and interesting looking,
0:12:01 it will actually design something unique that doesn’t look like an AI generated front end.
0:12:07 And I found that Gemini is really good at giving it one prompt saying this is the app I want. Here’s
0:12:12 everything I want it to do. And then it just goes away, builds the thing, and it seems to work pretty
0:12:18 well the first time out of the box. It’s really good at that. Claude, on the other hand, is really,
0:12:23 really good at this sort of back end. Once you’re deeper into a project, sort of understanding the
0:12:29 whole code base a little bit better and going and like fixing little bugs and getting deeper into the
0:12:35 project. So Gemini, great for front end, getting a project started. Claude, great for like going
0:12:42 deeper and fixing bugs and refactoring code and sort of working a little bit deeper into the project.
0:12:47 Those are where I’ve found the differentiation. Yeah. But to answer the question about like what
0:12:54 makes Claude different is the way that it understands the previous information that has been fed to it.
0:13:00 Right. So you’ve got your context window in these AI models where the context window is essentially
0:13:06 how much text the model can understand, you know, both input and output. So something that let’s say
0:13:12 has a million token context window between the amount of text you feed in and the amount of text that it
0:13:20 gives you back is a million tokens. A token is roughly 75% of a word. So about 750,000 words would be a
0:13:26 million token context window. But all of these models that are like thinking models where you see it actually
0:13:31 think through and you’re seeing the internal thought process, that whole internal thought process is also
0:13:38 using tokens. So all of that thinking is using that context window that these models give you.
0:13:46 And what Claude changed is it has a much more efficient way of reviewing what text and conversation you’ve
0:13:51 already had. It’s gotten a lot better at like getting rid of what’s not important and keeping what is
0:13:56 important and adding that back into the context of future conversations.
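The context-window arithmetic described here can be sketched in a few lines. This is a rough illustration only, assuming the rule of thumb above that a token is about 0.75 words (real tokenizers vary by model); the function names and numbers are hypothetical:

```python
# Rough context-window accounting, using the episode's rule of thumb
# that one token is about 0.75 words (real tokenizers vary by model).

WORDS_PER_TOKEN = 0.75

def words_to_tokens(word_count: int) -> int:
    """Estimate tokens consumed by a given number of words."""
    return round(word_count / WORDS_PER_TOKEN)

def remaining_context(window_tokens: int, input_words: int,
                      thinking_tokens: int = 0) -> int:
    """Tokens left in the window after the prompt, counting any hidden
    'thinking' tokens, which eat into the same budget."""
    used = words_to_tokens(input_words) + thinking_tokens
    return max(window_tokens - used, 0)

# ~750,000 words fills a one-million-token window, as described above.
print(words_to_tokens(750_000))                                       # 1000000
print(remaining_context(1_000_000, 600_000, thinking_tokens=50_000))  # 150000
```

So a long visible conversation plus a lot of internal reasoning can exhaust the window faster than the raw word count suggests, which is the efficiency problem the Claude change is aimed at.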
0:14:01 Like think about Claude’s like the depth of it is something that I haven’t seen before. And like,
0:14:08 whenever you need to ask a question, it actually holds your hand and like says what it needs to say to you.
0:14:08 Yeah.
0:14:14 Which, like, when it comes to ChatGPT or other kinds of models, they kind of like catastrophize,
0:14:19 in my opinion, like they kind of compile it in a way that it’s not really what you need it to be.
0:14:23 Like for people that are visual learners like me, I like Claude when it gives me the answers
0:14:27 more than ChatGPT, even though ChatGPT kind of is more advanced. But my question is like,
0:14:34 do we think that Claude is playing the long, quiet kind of game? Because GPT is loud, you know,
0:14:36 like OpenAI are so loud about everything.
0:14:41 They’re like in your face 24-7. So like, do we think that like, it’s like an underdog sort of thing?
0:14:45 I don’t know. I would have a hard time saying that Claude is an underdog because
0:14:50 when it comes to like a lot of the LM Arena sort of model testing, Claude is pretty much
0:14:55 always sitting at the top. So from like a general consumer standpoint, just like everyday people
0:15:00 that aren’t sort of immersed in tech, that aren’t trying to keep up with AI on a daily basis,
0:15:05 just your normal everyday people, they’re familiar with ChatGPT. That’s the biggest brand.
0:15:09 That’s the sort of most in the media, biggest in the pop culture, things like that.
0:15:14 So most people know ChatGPT. But I feel like once you start to narrow down and get into like
0:15:18 techie people that are sort of playing with these models and using different ones,
0:15:26 most of them are probably partial to more Google and Anthropic. Like when I build stuff behind the
0:15:31 scenes and I’m doing like vibe coding type stuff, I’m almost never using OpenAI models. It’s mostly
0:15:37 either Gemini or Anthropic models. But I do think that when it comes to enterprise and when it comes to
0:15:43 coding, Anthropic is kind of the leader right now. When it comes to like backend API calls,
0:15:49 I believe there’s more companies, more products that are calling the Anthropic API than the OpenAI API.
0:15:56 So ChatGPT has sort of become this consumer facing product company where they’ve put more and more
0:16:02 focus into like, how do we get more users in ChatGPT? How do we make it more sticky? How do we make it so
0:16:08 people keep on coming back to ChatGPT constantly? Where Anthropic has been more like, how do we make
0:16:14 this model useful for enterprises and people that are using us through the API? Right. So it seems like
0:16:19 they’re kind of taking slightly different approaches. OpenAI wants to be that consumer front end facing
0:16:25 thing where everybody in the world has a ChatGPT account. Claude doesn’t care if everybody in the world
0:16:30 has a Claude account, right? Like, it knows kind of what it’s best at. So yeah,
0:16:35 it doesn’t need all of these things. It focuses on API for like enterprise users and developers.
0:16:41 And you know, they’ve got the backing of Amazon. So just like OpenAI had, you know, 10 to $14 billion
0:16:46 investment from Microsoft. Anthropic has pretty huge investments from both Google and Amazon.
0:16:53 Right. So Anthropic has all of the same kind of backing that OpenAI has from some of these other
0:16:59 companies as well. And right now they’re raising at almost a $300 billion valuation. If you’re keeping
0:17:05 up with OpenAI, OpenAI is valued at about $500 billion. So they’re not that far behind when it comes
0:17:10 to in terms of like a valuation of a startup. Yeah. So they’re playing different games. They’re
0:17:16 definitely competitors because OpenAI wants you using their API also. But it just kind of feels like
0:17:22 their marketing and their efforts are going more towards ChatGPT and that front end facing product
0:17:28 where Anthropic’s efforts are going a little bit more towards enticing developers and enterprises to use
0:17:34 their APIs, their systems on the back end. I don’t want to like dwell a lot on Claude, but like I really
0:17:39 want to pick your mind on this. Do we think that like they’re being held by like safety in general?
0:17:46 These are guardrails for them or like, no? I’d say Anthropic is a little bit more focused on
0:17:51 safety and guardrails than OpenAI is. I feel like OpenAI kind of tests the limits a little bit and
0:17:57 then pulls back, right? We saw that when they released Sora. They’re like, we’re making it so you have to opt out
0:18:02 if you don’t want your brand in there. Well, a lot of companies got really upset about that and OpenAI
0:18:07 course corrected and said, just kidding, you have to opt in now, right? Like I feel like OpenAI sort of pushes
0:18:12 the limits and, like, tests the waters of what they can get away with and then backs off a little bit where Anthropic
0:18:18 takes the approach of like trying to be the safest first, right? Like they don’t release a model until they feel
0:18:25 pretty comfortable. Saying that, I personally haven’t run into like any issues in that regard with
0:18:29 Anthropic, right? Cause I mostly use it for coding. I feel like most of the people that I know that use
0:18:34 Claude a lot are using it for coding and you’re not really running into like any sort of guardrail
0:18:39 issues for the most part when you’re coding, unless you’re trying to like build something that hacks the
0:18:42 government or something, which I’m not doing as far as anybody knows.
0:18:50 So like, leaderboard, like where do we put everyone else in terms of like the most capable
0:18:56 models? Like I say, okay, coding, we’d put Claude as first and then Gemini and then GPT. Correct. Yeah.
0:19:01 That’s probably how I’d order them right now. Like deep research. Where would you put all of them?
0:19:06 Deep research. I would probably lean towards Gemini. Oh, actually, if we’re talking about like
0:19:10 really good research, if you really want to research something, well, NotebookLM is where
0:19:16 it’s at. Like NotebookLM is the best research tool ever. Really? Yeah. Cause NotebookLM lets you
0:19:21 dump in sources, right? So you can go and find a bunch of articles all over the place, grab those
0:19:25 articles, throw them in there. And now you’re having a conversation with like these 20 articles that you
0:19:28 just dumped in there. Interesting take. People are going to freak out about this. Cause like,
0:19:33 I think they’re going to like revert back to go to that. Like imagine having to research a lot of
0:19:39 things. People would open like 7,000 tabs 24 seven, leave them open, like crash down their laptops and
0:19:43 like cry. And, you know, and like not, they’re not caffeinated enough. So they don’t know what to do.
0:19:48 And now everything has changed. So like, I think deep research and like these LLMs are like
0:19:53 saviors for a lot of people. Well, and NotebookLM is all powered by Gemini. So it’s like,
0:19:58 it’s still Gemini. It’s just a different user interface, a different wrapper around Gemini
0:20:03 essentially. But I’ve been loving NotebookLM for research. In fact, now when I’m making a video that
0:20:07 I’m doing a bunch of research on, I go find all the articles I can about it, put them all into
0:20:12 NotebookLM and then just start having conversations with NotebookLM. And then it’s sort of grounded on
0:20:16 just the information that I fed it. So it’s not going to hallucinate. It’s not going to make up
0:20:21 information. It’s not going to give me anything outside of what I’ve given it to pull from,
0:20:26 you know, I’m going to use that. I feel like maybe we should go back to videos.
0:20:31 TikTok is flooded. Instagram is flooded. Facebook. I’m being sent things from my mother
0:20:37 asking me if it’s real or not, as if I am like a person that knows what’s AI and what’s not.
0:20:40 Sometimes I can’t even identify them, which is weird, but that’s okay.
0:20:45 I feel like with images, I have a really hard time telling anymore. And that’s scary to me because I
0:20:49 used to be pretty good at spotting stuff. Now when it comes to images, I oftentimes can’t tell.
0:20:55 I feel like with video, there’s still some signs. To me, video is still a little bit easier to
0:20:59 determine, but images, definitely it’s gotten to a point where I’m like, I don’t know anymore.
0:21:06 My mother saw a video of like a baby with like a cat next to it. And like, she’s like, oh,
0:21:10 it’s so cute. I was like, mom, that’s AI. And she’s like, no, it’s not. And she was so defensive.
0:21:15 And like, what do you say to your 58-year-old mother? Anyway, let’s talk about like,
0:21:20 what is out there in the wild right now? What are people raving about? Like runway came out with
0:21:24 something. Nano Banana, funny name, by the way. I still think it’s a funny name. I don’t know why
0:21:29 they came out with that. I love the name. Nano Banana is like my favorite name that a company’s
0:21:35 come up with because I’m so sick of like GPT, OSS 3.5 sort of names.
0:21:41 Sora, who is Sora? What is she? Yeah. So like, tell us what is your stance about like creative AI?
0:21:47 As far as like my take on AI video, I definitely have two minds about it. On one case, I’m really,
0:21:53 really kind of over seeing a lot of the AI slop. Like during Halloween, right? There was tons of
0:21:59 videos circulating about like the craziest Halloween costumes. And I couldn’t even tell like if they
0:22:03 were AI or not. My wife was sending me stuff going, look at this costume. Isn’t this the coolest costume?
0:22:07 And then like a day later, she’s like, you know, that costume I sent you yesterday turns out it was AI.
0:22:12 And I’m like, I couldn’t even tell. The disappointment on their faces as if like you betrayed them.
0:22:16 Going on a call with your wife or my mother, it’s like the equivalent of an anime betrayal.
0:22:26 So I’m not really looking forward to the amount of AI slop we’re about to get inundated with or have
0:22:32 already started getting inundated with. Saying that, there’s also a lot of non-AI generated slop.
0:22:39 It’s not like AI created the slop problem. There’s always been a content slop problem. It’s just AI
0:22:44 makes it a lot easier to generate a lot of slop and do it at scale. And that’s something I’m not
0:22:49 looking forward to. But as somebody who makes YouTube videos, like I love the ability to quickly
0:22:54 make a little piece of B-roll, right? Like I’ll have a video where I’m like, oh, I don’t know what
0:22:59 to put for this like little seven second clip. I don’t really want me on screen here. Let me go
0:23:04 generate a video that I can splice in here. That’s quick enough that nobody even realizes it’s AI,
0:23:08 but it sort of like serves my purpose in the video to cover something up.
0:23:12 So like, I like that I can do that. So I mean, like definitely pros and cons.
0:23:17 What is your take on like AI video? Like I’ve definitely mixed emotions about it.
0:23:23 I read a lot, right? Like I think everyone knows now that I read a lot and like fictional stuff and
0:23:28 like fantasy and romantic things. It is so refreshing to see these kinds of stuff that you read
0:23:35 in these kinds of videos. I am raving about it. But at the same time, I’m like, let’s just use it
0:23:39 like in a nice way, like, you know, in the limits, because people are going insane with everything.
0:23:46 Like everything needs to have these limits. And there are people out there that take it way too far. So
0:23:51 let’s just keep it, you know, in a balanced way. But in terms of that, I am raving. Like I am
0:23:55 very happy. I’m getting creative with these kinds of stuff. And I’m still a Midjourney girly,
0:23:58 by the way. Like people don’t know this, but I edit a lot of videos and sometimes I put it in a
0:24:05 really nice video that I animated on Midjourney. I am having so much fun. The Nano Banana stuff is
0:24:12 really good. I’m not going to lie. I’m not using Sora. It sucks. Sam, if you can see this, it sucks.
0:26:18 But Veo is really nice. Like I am liking it so far. But then there’s like the company that you just
0:24:22 mentioned that I haven’t really heard about. Yeah. So we’ve got Runway and we’ve got Kling.
0:24:27 Kling actually gave us new models this week. Runway announced a new model. I have not actually
0:24:34 played with any of these tools yet. So I have not experienced these. So the first one that Kling
0:24:43 released on December 1st here is called Kling Image 01. Here’s what the tweet says. Input anything,
0:24:48 understand everything, generate any vision, superb consistency, precise modification,
0:24:54 powerful stylization, max creativity. Image 01 brings it all. This update revamps the entire process from
0:24:58 generation to editing, empowering maximum productivity with a seamless experience.
0:25:04 Basically Kling’s version of Nano Banana. So this was released on December 1st and then on December 3rd,
0:25:13 so two days later, they introduced Video 2.6. So, and this is their model that has actual sound to it.
0:25:19 So here’s the Kling Video 2.6. The big difference between the previous Kling models and this one is
0:25:25 that now it generates sound. I, to this day, I don’t know what you think. I, to this day, don’t really
0:25:31 think any of the sound in AI videos is very good. Like it always still looks very off to me. And I feel the
0:25:37 same way about Kling. It looks pretty good, but I still feel like there’s something about the blending
0:25:43 of audio and video where it’s like a dead giveaway to me. Like there’s something still so uncanny about
0:25:49 the AI video and audio generators. Like Veo 3 is the same way. Whenever I see Veo 3 generate dialogue,
0:25:54 I always like, it’s pretty good, but I don’t think it’s fooling anybody, you know?
0:25:59 I mean, I think like, it’s very comforting for people that still feel on shaky grounds when it
0:26:04 comes to AI videos. I feel like people would look at it and I was like, yes, we can still identify it.
0:26:07 Oh, thank God we’re not being replaced, but you know? So I…
0:26:13 But I also think that’s only a matter of time. That’s like a short window, right? Like with how fast this is
0:26:17 advancing, I really think we’re not going to be able to tell like this time next year.
0:26:26 I’m loving it so far. Like I am liking how everything like is turning out and I get to
0:26:32 go as creative as I can go when it comes to these kinds of stuff. And I’m so happy. There are so many
0:26:36 things up in my mind. Whenever I’m reading like a scene, like I’m reading like something fantastical
0:26:42 and I get to sort of, you know, put in my creative writing in these kind of prompts and it turns out to
0:26:48 be something absolutely amazing. I’m having fun. Yeah. Thank God for AI. Honestly, whenever it’s
0:26:53 like Friday night and like it’s raining 24/7 in the UK, I don’t know if people know this, but it’s rainy
0:26:57 season. So there’s never anything happening outside. I’m not going for drinks because everyone’s
0:27:03 like cocooned inside their blankets. Like I just open my laptop and like create videos and that’s pretty
0:27:08 cool with a glass of wine. Yeah. That’s one of my favorite things to do with Suno, like making AI
0:27:12 music. I nerd out about that. I’ll just generate song after song after song.
0:27:16 Yeah. We can talk about that actually. What are you doing? Like what kind of genre are you doing?
0:27:20 Are you doing like rock or like, if you’re saying hip hop, I’m going to be very sad.
0:27:27 I tend to either do like pop punk kind of music or like sort of electronic dancey sort of music.
0:27:29 Those are the two genres I tend to play with the most.
0:27:34 I would have thought you’re more into metal songs rather than EDM, but here we are.
0:27:38 I mean, like the more, you know, I have generated some metal songs. I wouldn’t really say that’s
0:27:43 like my preferred genre. I was always more of like a punk person. I like pop punk. Like I like
0:27:46 the blink 182s of the world and bands like that.
0:27:51 That sounds so cool. Like you are 100% a millennial. Thank God for that.
0:27:52 I am 100% a millennial.
0:27:58 Well, there’s one more video model that was announced last week. This one came out the week
0:28:01 we’re recording. So by the time you’re listening, it will have been last week.
0:28:08 Right. But runway also introduced a new model this week. It’s not actually in any of the runway
0:28:14 accounts yet. So you can’t actually use it yet, but they have a new model called runway gen 4.5,
0:28:18 which apparently was code-named Whisper Thunder. Whisper Thunder
0:28:23 was its code name. Like while they were working on it internally, I guess.
0:28:28 I want to be a fly on that wall when that name was introduced. I would have laughed so much.
0:28:35 So they did some tests with Artificial Analysis. I think an Elo score is based on like actual users
0:28:39 voting. Is that a survey? Yeah, I think it’s kind of like a survey-style thing where they’re given
0:28:46 multiple videos and people pick their favorite. And as we can see here, Runway Gen 4.5 kind of blew
0:28:52 everything out of the water when it comes to like people’s preferences. The new Kling 2.6 model isn’t on
0:28:58 here, but I still would imagine by how big of a margin this is that it probably even blew away
0:29:03 2.6. One thing that’s sort of interesting about this chart that makes it a little bit like misleading
0:29:08 is they start the chart at 1200. So they look like this one’s like really blowing everything away,
0:29:15 but it’s really like zoomed in on like from 1200 and up so much. It makes it look like this one blew
0:29:21 everything out of the water, but it’s really not that huge of a leap in numbers. Yeah. So the gap isn’t
0:29:25 gapping like it’s not a gap. It’s just yeah. Okay. I don’t like numbers, Matt. I don’t know if you know
0:29:33 I’m just saying that the screenshot that they shared here makes this out to be much more impressive than
0:29:40 it actually is. But let’s take a look at their demo here and see what they’re showing off. Let’s see if
0:29:47 we think it’s a heck of a lot better than previous models. Oh, that’s pretty good. Yeah. I’m not gonna
0:29:53 lie. That actually looks really, really good. I want a closeup of like a tiger cub without the
0:29:58 photographer being murdered, you know, like that sounds nice. I want to see that. Yeah.
0:30:02 I mean, from what I’m seeing, it looks pretty good. You know, when they put out demo videos like this on
0:30:07 social media, they’re going to cherry pick a little bit, right? Like they’re going to go and find the
0:30:12 best possible generations they’ve made and use those as their video. But I mean, the fact that it’s capable
0:30:16 of what we just saw, I think it’s a pretty damn impressive video model.
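[Editor’s note: the ELO-style preference scoring Matt describes — viewers pick a favorite between two clips, and model ratings update after each vote — can be sketched in a few lines. This is a minimal illustration, not Artificial Analysis’s actual methodology: the 1200 starting rating matches the chart’s axis, but the K-factor of 32 is an assumed parameter.]

```python
# Minimal sketch of ELO-style preference scoring for a model leaderboard.
# Each "match" is one viewer picking their favorite of two video clips.
# START_RATING and K are illustrative assumptions.

START_RATING = 1200
K = 32  # step size per vote (assumed, not the leaderboard's real value)

def expected_score(r_a: float, r_b: float) -> float:
    """Predicted probability that model A is preferred over model B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Apply one pairwise preference vote to the ratings table."""
    ea = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - ea)  # winner gains what it "wasn't expected" to win
    ratings[loser] -= K * (1 - ea)

ratings = {"model_a": START_RATING, "model_b": START_RATING}
update(ratings, winner="model_a", loser="model_b")
print(round(ratings["model_a"]))  # prints 1216: an even matchup moves K/2 points
```

This is also why the y-axis matters: differences of a few dozen ELO points translate to modest win probabilities, so a chart cropped at 1200 can make a small edge look like a blowout.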
0:30:20 When people are on dating apps, they’re not going to pick out the worst pictures of themselves. They’re
0:30:25 going to pick nice pictures of themselves on dating apps. So like, maybe, you know, like this is how
0:30:30 people run. Yeah. Yeah. But I think it’s like notice when, when the people are like generating videos,
0:30:36 like some of when they’re trying to be as realistic as possible, the eyes are like bold, like they’re
0:30:41 out, you know, like they’re coming out of the socket, which is it tell, you know, that this is AI.
0:30:45 Yeah. So yeah, you’d still know because of the eyes, in my opinion, and fingers,
0:30:49 it does not get fingers till now. Like the fingers are not. Yeah.
0:30:54 Yeah. I mean, for me, it’s usually like physics things that are the giveaway for me. Like somebody
0:30:58 will be walking and I’ll be like, they stepped kind of weird there. Or like the people around
0:31:02 them are moving at like a weird pace compared to them or something. Like, it’s always something
0:31:05 related to like the physics of the scene that give it away for me.
0:31:08 Like, why is this guy limping? Like, is he okay?
0:31:15 But this one, I, you know, I don’t know if this was like a response to the fact that
0:31:19 Kling was putting out a new video model this week. I’m not really a fan of announcements
0:31:24 of announcements. Right. And to me, this feels like runway saying, Hey, here’s a new model we’re
0:31:28 announcing, but not yet. Right. Like, it’s almost like, Hey, we’re going to tease this, but not give
0:31:34 it to you yet. And I almost feel like what might’ve happened was they got word that Kling was
0:31:38 going to be releasing a new video model. And they’re like, Hey, let’s announce that this is
0:31:43 coming and get ahead of Kling. You know, I feel like a lot of that is happening in the AI world where
0:31:48 like companies are constantly trying to one up each other or like, Hey, there’s news coming out on
0:31:53 Tuesday. So let’s get our news out on Monday. So it sort of overshadows Tuesday’s news. A lot of
0:31:58 that happens in the AI world. That’s the most action we’re seeing right now in the AI world.
0:32:03 And I feel like everyone’s at each other’s throats, but like in a nice way, you know?
0:32:09 Yeah. I mean, it’s definitely leading towards things scaling a lot faster, right? Like this
0:32:16 competition is causing these companies to, I think, push out products quicker sometimes before they’re
0:32:22 even ready just to sort of stay in the news cycle, be the most top of mind product or company at the
0:32:27 time. But, you know, circling all the way back around to how we started this episode, I really,
0:32:32 really think that Google’s probably going to be the one that comes out on top just because of their huge,
0:32:39 like infrastructure advantage. And the fact that like, everybody’s all so deep into the Google
0:32:43 ecosystem, right? Like everybody has a Gmail. Everybody uses YouTube.
0:32:46 We’re using Chrome right now as we speak, you know?
0:32:53 Yeah. Everybody uses Chrome, right? So I feel like because they have that ecosystem that
0:32:58 so much of the world is already locked into, it’s going to be hard for Google to lose. Like they
0:33:04 really have to screw up bad. I think the only thing I will not own is like a Google phone.
0:33:08 That’s where I draw the line when it comes to dating apps. Like when I find out someone’s
0:33:12 using an Android, basically, I just like, so an Android’s a red flag for you.
0:33:19 It’s like a huge red flag. That’s hilarious. So if somebody texts you and you see the green
0:33:24 bubble, you’re like, no, I’m out. No, no, no. But I don’t know how like I try to answer,
0:33:28 like I asked that question and like, I find out that the guy I’m trying to talk to on the dating app
0:33:33 is on Android or like any sort of Android. Like I find out that he is using it. And I’m like,
0:33:38 you know what? I’m working on myself right now. So we’re not going to do this today.
0:33:42 Because Apple is supremacy to me, like always has been.
0:33:48 I’ve never heard of that as like a dating criteria before.
0:33:53 This is the ick. Yeah. But there’s one thing that I really want to talk about. And like,
0:33:59 you know how like Meta has a weird collaboration with Ray-Bans and they have the Meta glasses and
0:34:03 stuff. And yes, it’s nice. It’s amazing. But then, you know, Meta is not trying to upgrade
0:34:09 to Prada, which is whatever. But Alibaba apparently are coming out with their own version of AI glasses.
0:34:15 And like they’re powered by their own GPT kind of like model, which is called Qwen.
0:34:19 And it’s like a quiet launch. I think a lot of people are like talking about it,
0:34:23 but like where I’m in right now, no one’s really into it. But I’ve been reading about this,
0:34:29 like in the east rather than here in the west. They have like real human sort of language built
0:34:35 in and they have like two versions, which is S1 and G1. And the first one is like on a budget sort of
0:34:40 thing. And the second one is like, I don’t want to sell a kidney sort of thing. And like one is very
0:34:46 fine. Like the other one is very expensive. And they both have tiny like screens inside the lenses
0:34:52 and like built-in cameras and the frame and everything runs on voice through Qwen, which again,
0:34:54 it’s their own GPT sort of thing.
0:34:57 So like they have the little like heads up display inside the eye.
0:35:03 Yeah, yeah, yeah, yeah, yeah. And they have like live translation, like while you’re walking like
0:35:06 AI generated meeting notes. And like, can you imagine like, you know, meeting notes and like
0:35:10 it comes out with that. So rest in peace to whatever software you’re trying to use.
0:35:11 Yeah.
0:35:16 And you can ask questions and like out loud and it gives you the answers. And let’s say,
0:35:20 like you want to buy something and like it takes a picture. You can take a picture with
0:35:26 the glasses and it puts the prices on Taobao, which is, I think, predominantly in the East,
0:35:29 as I said. So yeah, yeah, it’s pretty huge right now.
0:35:33 Yeah. It’s interesting to me just because when I think of Alibaba,
0:35:39 right, like the first thing that pops into my mind is the place where you go online to buy cheap junk.
0:35:40 Yeah, in bulk.
0:35:47 Right. Like that’s my thought on Alibaba. But now Alibaba has actually become this like frontier
0:35:52 AI lab and they’re actually building like interesting stuff in the AI world.
0:35:52 They’re so good.
0:35:57 And my brain is struggling to compute the fact that like this is the company that we go and
0:36:03 import cheap junk from is now like this huge tech company that’s generating glasses.
0:36:09 And so my mind automatically thinks of like Alibaba version of glasses is probably like the cheap
0:36:13 Chinese knockoff of like good glasses, but that’s not really what they are. Like Alibaba is like
0:36:15 a legit tech company also, you know,
0:36:20 It’s a huge, I’m not saying I was surprised. I saw the video and like, they’re not like,
0:36:28 it’s not a small thing. It’s big energy. And I can feel it like coming through the market and kind
0:36:35 of like maybe surpassing Meta. Who knows if Meta doesn’t change these designs, maybe, but it’s really
0:36:37 good, really good stuff. Like I can see it.
0:36:41 Although I have a feeling in the U.S. they’ll probably be banned.
0:36:44 Yeah, they’ll probably use Meta. Yeah, a hundred percent.
0:36:50 Right. Because the U.S. is already trying to ban DJI products like the DJI drones and cameras and
0:36:54 stuff because they’re worried about, you know, China spying through those gadgets and stuff.
0:37:02 So like glasses with cameras on them made by a Chinese company, I can see if you want to get a pair,
0:37:07 get them as soon as you can, because I cannot see that like lasting forever, especially here in the U.S.
0:37:14 Yeah. I mean, wearable AI is the future and I like it so much. I don’t know if I like it on my
0:37:19 face a lot, maybe something on my wrist, maybe the watch is better. Maybe the ring is better.
0:37:25 The ring thing, I really want to try it out. But yeah, it’s I think we’re reverting back from
0:37:31 having it in our pockets and our bags to an actual like wearing it, you know, like something that you
0:37:32 can go without.
0:37:38 Yeah. I mean, I’m also not quite convinced still that glasses are that like final form factor that
0:37:42 everybody really wants. I don’t think people take them off at the end of the day.
0:37:46 Like I have regular Meta Ray-Bans, not the ones with the display, but the ones that have like the AI
0:37:51 and the speakers and stuff built in. Right. And they’re like quite a bit heavier than my normal
0:37:56 glasses. Really? So like most of the yeah. And like if I wear them for a full day, let’s say I’m wearing
0:38:01 them for like eight hours in a day. Like I start to feel it on my nose after a while because it’s so
0:38:06 heavy. And so like most of the time I don’t even wear those glasses unless I’m like, I know I’m going to be
0:38:10 wanting to take pictures or, you know, use the speakers in them or something. Most of the time,
0:38:15 I still just grab like my cheap light glasses because those ones get heavy over time. But I
0:38:20 also think like, I don’t know, I don’t think people are going to be wanting to walk around with like
0:38:27 little cameras strapped to their face all the time. I think it’s going to kind of become somewhat socially
0:38:33 unacceptable to just be like walking around with like cameras on. But, you know, saying that I also don’t
0:38:37 know what form factor I’d prefer. Like, I don’t know if you ever saw the movie Her where he has
0:38:43 like the little earbuds. I could see something like that. But also you lose like the vision
0:38:48 elements of it. So I don’t really know. And we also know that Sam Altman and Johnny Ive are over
0:38:54 working on some sort of hardware that they claim isn’t glasses and isn’t like another pin kind of
0:38:58 thing. And everybody’s just confused. Like, okay, well, then what the hell is it?
0:39:03 I did like a small video about this on LinkedIn and like they only want to talk about the vibe of it.
0:39:07 Yeah. And it’s going to be something of like an essential thing that you want to just have.
0:39:12 And it makes you feel like you’re in a cabin in the woods. What does that mean? What do you mean?
0:39:17 What if I don’t want to be in the woods? What kind of a description is that?
0:39:19 You know, like I have no idea.
0:39:23 What kind of a horror movie is that cabin in the woods and peaceful?
0:39:26 Bro, that’s like every horror movie in America.
0:39:29 I’m pretty sure there is a horror movie called The Cabin in the Woods, isn’t there?
0:39:32 Maybe I got inspired by that. No, Sam. No.
0:39:37 I think there’s an M. Night Shyamalan movie about The Cabin in the Woods.
0:39:44 Oh, my God. That sounds horrifying. Like, give us something so that people aren’t like freaked out about it.
0:39:51 Yeah, yeah. Yeah. So, I mean, a lot going on right now. We didn’t mention this when we were talking about
0:39:56 the AI models, but supposedly OpenAI is working on a new model that’s codenamed Onion.
0:40:01 Really? You know, we’ve had strawberries and we’ve had bananas. Now they’re working on Onion is their
0:40:07 latest model. And rumors have it that it might be coming out within the next couple of weeks to
0:40:12 improve upon 5.1. But it’s still very speculative. All that is, is just sort of like rumors and
0:40:14 unconfirmed leaks and things like that.
0:40:19 I mean, you have your own people and we’re just people, you know, like normal people that write
0:40:23 the newsletter every single day, but you have more people that tell you the secrets.
0:40:29 Well, this isn’t a secret coming from anybody of relevance. This is more like a blog post
0:40:36 speculating and things. So we should have a model that’s coming from ChatGPT in the very near future.
0:40:43 I don’t know if you remember, but last year during December, OpenAI did their 10 days of Christmas or
0:40:48 whatever. And every day for 10 days, they announced and launched something new. So last year in December,
0:40:53 it was actually a really big month for AI announcements because OpenAI kept rolling stuff
0:40:58 out. But that was also the same time that because OpenAI was rolling all this stuff out, that was when
0:41:03 Google really heated up. Like December last year was when Google released. I think it might’ve been
0:41:09 Gemini 2.5. I don’t remember which model it was, but Gemini started releasing models like crazy
0:41:15 to sort of like keep pace with OpenAI. I don’t really feel like that’s going to happen this December
0:41:20 because we got so many new releases over the last like three weeks. But last year, like this window
0:41:27 of time was like an insane window for AI announcements. So I wasn’t keeping up. I think I was like Christmas
0:41:32 shopping and I come from a Lebanese household. So everyone needed a gift. So like we’re talking about 70
0:41:39 people. So I apologize for not keeping up. That’s crazy. But yeah. But yeah. So it’ll be interesting
0:41:43 to see if this December is like a repeat of last year. Can you imagine if they did it again? They’d probably
0:41:47 do it again. I’m kind of almost expecting Sam Altman to say, hey, we’re going to do 10 days of
0:41:51 Christmas again and give us a new announcement every day. An advent calendar. Yeah, yeah. Give us an
0:41:55 advent calendar where every day we open up a new treat from OpenAI. But can you imagine if they create
0:42:01 like a thing that makes chocolate from scratch and they release it in an advent calendar? I’m giving them ideas
0:42:06 now. It’s a million dollar idea. Yeah. Well, last year they had the little like Christmas frogs in
0:42:09 the background of all their videos. And I was really hoping they’d release them because I’m like, I want
0:42:13 to put some Christmas frogs in the background of my videos. I really want to sit down with Sam,
0:42:17 just like pick his mind. He sounds like a really interesting person, honestly. Well, maybe if this
0:42:21 podcast goes in the right direction, we’ll get Sam. Like, I’m sorry I bullied you, Sam. I was being
0:42:26 nice. Like I was just… Yeah. I take back everything we’ve said about you, Sam. Please
0:42:32 come on the show. Please come here. Well, awesome. I think we kind of covered everything that’s
0:42:38 happened over the last, you know, seven to 10 days. And I’d love to hear what people think about these
0:42:42 episodes. So let us know in the comments. I think you can actually comment on Spotify now too. So,
0:42:48 you know, YouTube, Spotify, leave us comments like these episodes, these news breakdowns and recaps
0:42:55 and opinionated takes on things because they’re a lot of fun for us to make. So let us know if you’re
0:43:01 enjoying these. Oh yeah. And hopefully we see you in the next one. Please give this video a thumbs up
0:43:06 and subscribe wherever you listen to podcasts. And we really appreciate you tuning in. Yeah. See you guys.
0:43:14 Bye.
Episode 88: What happens inside OpenAI when Google drops a game-changing AI model? Matt Wolfe (https://x.com/mreflow) and Maria Gharib (https://uk.linkedin.com/in/maria-gharib-091779b9) break it down.
This episode unpacks OpenAI’s unprecedented “Code Red,” the real reason Sam Altman hit the panic button, and how Google’s Gemini 3 and Nano Banana Pro could threaten OpenAI’s dominance. The hosts debate which next-gen AI models are actually smarter (Claude Opus 4.5, Gemini 3, GPT-5.1, and more), why some tools are getting dumber, and how Google’s full-stack advantage is shifting the AI power balance. Plus: a rapid-fire review of explosive new AI tools for video, the rise of creative AI (and AI “slop”), surprising advances in wearable tech, and a bit of fun at Sam Altman’s expense.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
(00:00) AI Wars: Models, Tools, Power
(05:59) AI Innovation Race
(08:43) OpenAI’s Google Challenge
(12:15) Claude’s Context Window Explained
(14:12) Claude vs. ChatGPT: AI Preferences
(17:13) Anthropic vs. OpenAI Philosophy
(21:51) AI and Content Slop Concerns
(24:46) AI Generative Audio’s Uncanny Gap
(28:07) Runway Gen 4.5 Dominates Preferences
(30:41) AI Model Announcement Rivalry
(35:00) Alibaba’s AI Evolution
(37:19) Heavy Glasses and Social Concerns
(40:03) AI Advancements: December Launches
(42:27) “Like, Subscribe, See You Soon”
—
Mentions:
Sam Altman: https://blog.samaltman.com/
Google Gemini 3: https://aistudio.google.com/models/gemini-3
Nano Banana Pro: https://gemini.google/overview/image-generation/
Claude Opus 4.5: https://www.anthropic.com/claude/opus
ChatGPT: https://chatgpt.com/
NotebookLM: https://notebooklm.google/
Runway Gen 4.5: https://runwayml.com/research/introducing-runway-gen-4.5
Kling: https://klingai.com/global/
Midjourney: https://www.midjourney.com/home
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
Newsletter: https://news.lore.com/
Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano