AI NEWS: 5 New Tools, Elon Musk’s Matrix & GPT Erotica Explained

AI transcript
0:00:06 Welcome to the Next Wave Podcast. I’m Matt Wolfe, and today I’m joined by someone you’re
0:00:12 going to be seeing a lot more of around here. Maria Gharib, the head writer of the Mindstream
0:00:16 newsletter and one of the newest members of the Next Wave content team. Maria’s journey
0:00:23 into AI is wild. She went from studying international affairs and politics to becoming one of the
0:00:28 sharpest AI journalists in the game. Every morning, thousands of people start their day
0:00:34 with her breakdowns of the biggest AI stories, and today we’re bringing that brainpower right here
0:00:40 to the podcast. In this episode, we dive into Microsoft’s new AI image model and what it means
0:00:46 for the OpenAI-Microsoft relationship, ChatGPT’s personality changes, and Sam Altman’s surprising
0:00:52 comments about mental health and erotica. We talk about Google Gemini’s new calendar integration
0:00:58 and a bunch of other stories, from Elon’s world model to AI catching Lyme disease when
0:01:05 doctors missed it. It’s a packed episode full of insights, hot takes, and future talk. So
0:01:07 without further ado, let’s dive in with Maria.
0:01:21 Being a know-it-all used to be considered a bad thing, but in business, it’s everything. Because
0:01:28 right now, most businesses only use 20% of their data, unless you have HubSpot, where data that’s
0:01:34 buried in emails, call logs, and meeting notes become insights that help you grow your business.
0:01:38 Because when you know more, you grow more. Visit HubSpot.com to learn more.
0:01:46 Hey, Maria. Thanks so much for joining me on the show today. How are you?
0:01:48 Hi, Matt. Thanks for having me. I’m good. How are you?
0:01:54 I’m doing great. I just want to jump straight into it. We’re going to get to all of the big
0:01:59 news stories over the last couple of weeks in just a minute here. But before we do, this is
0:02:03 your first time on the show, but you’re going to be back. People are going to be seeing a lot of you
0:02:08 around here at the next wave. So I want to give you the opportunity to just kind of quickly share a
0:02:13 little bit about your background. You know, how did you get into AI? How did you get involved with the
0:02:17 Mindstream newsletter? Let’s just kind of set the stage for people a little bit before we jump into the
0:02:23 news. Right. Yeah. My name is Maria. I’m Lebanese and I moved to the UK like this year because
0:02:27 HubSpot acquired Mindstream. The whole story about Mindstream and how I got into AI is pretty,
0:02:32 probably the funniest thing I’ve ever done in my entire life because I have degrees in
0:02:37 international affairs and politics and I moved into marketing also by mistake. And I was just scrolling
0:02:44 on LinkedIn, I think last year around January, you know, 2024. And I saw that they needed like a
0:02:49 research copywriter and I do research in general because I have two degrees in academia, I guess.
0:02:55 And I applied and I reached out to Matt, my manager right now and like got this rapport happening and
0:03:00 I got interviewed and they tested my writing and I’ve been working in AI ever since and writing about it.
0:03:08 So yeah, now I write the Mindstream newsletter that pops into people’s Gmails every day at 7am. And
0:03:14 it’s been amazing ever since. Very cool. Now, just sort of feeding my own curiosity here.
0:03:18 How do you keep up with the news? This is a question I feel like I get asked almost anytime
0:03:24 I’m interviewed because I used to make videos about all the latest AI news and I have a newsletter as
0:03:29 well. And so I’m trying to stay up with the latest news every day. And as we both know, it’s just like
0:03:35 a fire hose right now. It’s a flood of AI news. So I’m just curious, like, do you have a process for
0:03:39 keeping up with it all? I mean, I get alerts on Google, which is very helpful, obviously,
0:03:44 in my line of work. But also, you know, like attention spans in general have been ruined by
0:03:50 TikTok, but TikTok has some good advantages. And one of them is these sorts of videos that pop up
0:03:54 on your FYP every now and then and gives you basically, you know, because of algorithm and
0:03:59 algorithm catches on what you’re looking for. It’s been giving me what I need and I would search on
0:04:06 this and like kind of like do my deep research and write the blog the next day. So basically that and
0:04:11 Google alerts and yeah, whatever the people are talking about, whether it was on X, on Instagram,
0:04:16 Instagram has been very nice to me lately. I’ve been getting some pretty good stuff because of algorithm
0:04:19 as well, as I said. And yeah, that’s how.
0:04:23 Yeah, yeah. No, that’s super interesting. I mean, I’ve heard that more and more people are actually
0:04:28 getting their news from TikTok. For whatever reason, like TikTok has never like grabbed me.
0:04:34 I’ve never gotten really into TikTok. Don’t go to the dark side because your attention span is
0:04:39 going to obliterate. I think it’s going to stop existing. So don’t do this to yourself.
0:04:43 Yeah, but I mean, I do the same thing on Instagram Reels. So that’s where I was about to go. I was
0:04:48 like, I don’t do TikTok, but I do Instagram Reels, which is probably like an equal evil.
0:04:54 Yeah. Very cool. All right. Let’s just jump right into the news and start like breaking it down for
0:04:58 people because there’s been a lot that’s happened over the last couple of weeks. Oh yeah. I think
0:05:02 the one that I’d like to start with is the Microsoft, they just released their, I think that’s called
0:05:09 MAI image one, their new image generator. Yeah. And interestingly, it’s not really being talked about
0:05:14 much. Like I haven’t really seen it sort of hit the AI news cycle where people are really like hyped
0:05:19 about it. What are your thoughts on that one? So I think this is one of the biggest thing that
0:05:24 they’ve dropped yet, in my opinion, it’s called, as you said, MAI image one, and it’s their first
0:05:31 ever text to image model built fully in-house, which is what makes it so interesting. It also,
0:05:37 by the way, landed in the top 10 on LM Arena right out of the gate after it came out. And this is
0:05:44 the first real sign of Microsoft stepping out from under OpenAI’s shadow because they’ve been fully
0:05:47 dependent on them. I don’t know if people know this, but they’ve sort of like, you know, kind of
0:05:54 borrowing someone else’s tech. But this is a hundred percent homegrown, as we say. And according to the
0:06:01 company, MAI image one focuses on creative quality rather than quantity, meaning it’s built to generate
0:06:07 realistic visuals that don’t all look like they came out from the same old AI art template. So you get
0:06:14 better lighting, obviously, and like more natural reflections and faster results. And this is all
0:06:20 without the weird over-processed style that comes out from all the models. All the models fall into that.
0:06:27 Yeah, no, it’s been really interesting to watch Microsoft’s plays in AI, right? Like they’re a 49%
0:06:33 owner of the OpenAI non-profit and OpenAI has just a very bizarre structure. Like there’s a non-profit
0:06:35 that sort of governs a for-profit.
0:06:36 Yeah, they run that. Yeah.
0:06:44 But Microsoft owns 49% of the non-profit sort of umbrella company. But now Microsoft is generating
0:06:49 their own models, right? They have the MAI image model, but they also have like some small language
0:06:54 models that they’ve built and released as well. So they’re like making models that are competing with
0:06:56 OpenAI’s models.
0:06:56 A hundred percent.
0:07:02 Even though they’re like a huge owner in OpenAI. But then also recently, I don’t know if you heard
0:07:06 about this, but inside of the various Microsoft co-pilot products, they’ve started introducing
0:07:08 Anthropic into those products.
0:07:14 Exactly. Yeah. They’ve embedded Copilot into literally everything from Windows to Office, as well as
0:07:21 Anthropic. This is like a creative stack and they’ve been doing it for like some time now. But the fact that
0:07:26 this is the first thing that they do in-house is what people are not catching onto, but it’s
0:07:27 huge news. It’s big news.
0:07:32 Yeah. I did play with it a little bit. And there’s one way that you can use it right now. Like it’s not
0:07:40 open to the public. They haven’t built any sort of like AI image generation platform yet, but they did
0:07:45 put it over on LM Arena. If people aren’t familiar with LM Arena, basically what it is, is they give you
0:07:50 a box to enter a prompt. You enter a prompt and it will give you two outputs, whether it’s a text output
0:07:54 or an image output. It’ll give you these two outputs, but it won’t tell you which model these
0:08:00 two outputs are. You pick which one you like, and then it tells you which model you just used. And
0:08:04 then over time, that’s what informs the leaderboard and figures out the rankings of these models.
0:08:10 And so LM Arena right now is the only place that you can test it. And if you go to LM Arena and you
0:08:16 select direct chat up in your little header menu, and then you select MAI image as the model, you can
0:08:19 actually directly generate with this MAI image model.
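For anyone curious how those blind head-to-head votes turn into a leaderboard, here is a minimal Python sketch of the general idea: an Elo-style rating that gets nudged after every pairwise preference. The model names, starting ratings, K-factor, and votes below are invented for illustration; this is the standard technique behind arena-style leaderboards, not LM Arena’s actual scoring code.

```python
# Toy Elo-style leaderboard built from blind pairwise votes.
# Names, ratings, K-factor, and votes are invented for illustration.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Move the winner's rating up and the loser's down after one vote."""
    exp_win = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - exp_win)
    ratings[loser] -= k * (1.0 - exp_win)

# Every model starts at the same baseline rating.
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}

# Each tuple is one blind vote: (output the user preferred, the other one).
votes = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c"),
         ("model_a", "model_b"), ("model_c", "model_b")]

for winner, loser in votes:
    update(ratings, winner, loser)

# Higher rating = higher spot on the leaderboard.
for name, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rating:.1f}")
```

The key property is that beating a highly rated model moves you up more than beating a weak one, which is why enough anonymous votes settle into a fairly stable ranking.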
0:08:25 So cool. Yes, so cool. It’s like the Olympics for AI models in LM Arena, honestly. This is how we
0:08:25 define it, honestly.
0:08:29 Yeah. And this is how everybody discovered Nano Banana when it came out as well, right?
0:08:30 Yeah, exactly.
0:08:36 When Nano Banana went wild and everybody was using it, it was mostly being used on LM Arena before they
0:08:42 finally opened it up and finally put it in the Google products. I wanted to share these images that I
0:08:45 generated because I always like to test a few things, right? I want to see if it will generate
0:08:50 actual real people. So I asked it to generate Sam Altman shaking hands with Elon Musk.
0:08:51 Is that Sam Altman?
0:08:57 It’s supposed to be. I mean, you can tell this is supposed to be Elon Musk. I would say maybe this
0:09:02 is Elon Musk like 15 years ago, Elon Musk. I wouldn’t say this is Elon Musk today.
0:09:03 Oh, he doesn’t see that.
0:09:07 But I don’t really see any resemblance to Sam Altman on this picture here.
0:09:13 So it seems like it will generate like some known names, but not all known names.
0:09:18 I guess probably more images of Sam Altman on the internet that got sort of scraped into the
0:09:22 training of this image model, I would guess. So the other prompt that I wanted to test was a
0:09:28 three-headed dragon wearing cowboy boots, watching TV while eating nachos. And the goal here was to
0:09:31 just shove a lot of things into one prompt and see if it got them all, right?
0:09:33 That is so cool.
0:09:38 And it actually did it all. Three-headed cowboy boots, nachos, watching TV. And I’m like,
0:09:43 damn, it actually followed all of my directions. Because a lot of times when I test this on other
0:09:46 models, it’ll get like two of the things, but miss two of the things, you know?
0:09:50 Oh, you should have added like a cowboy hat, honestly, in my opinion. Because like this would
0:09:50 fit.
0:09:55 Yeah, yeah, it would. So that’s what that one generated. And then I was curious, will it generate
0:09:59 trademarked IP? You know, Mickey Mouse, Mario, things like that. So I put Mickey Mouse,
0:10:04 high-five Super Mario. No problem. It did it. Although they’re kind of high-fiving with the
0:10:05 back of their hands.
0:10:08 The hand stuff, yeah. Like, I don’t know who does it. Maybe Italians do that. Who knows?
0:10:08 But yeah, we’ll see.
0:10:14 And then I just did another sort of like trademark test. SpongeBob SquarePants,
0:10:16 Batman and Spider-Man taking a family portrait together.
0:10:19 Right, yeah. SpongeBob looks like he’s happy to be there, which is cute. Yeah.
0:10:24 Yeah. But there’s definitely some hand wonkiness still. You can see it in Spider-Man. You can
0:10:24 see it in SpongeBob.
0:10:28 It’s always the hands. I don’t know if people notice, but like, it messes up with the hands.
0:10:28 I don’t know why.
0:10:30 Yeah. Hands are always a problem.
0:10:31 Yeah.
0:10:39 Real quick. I just found out about this AI dragon slayer quiz that HubSpot just dropped.
0:10:46 It’s part game, part business assessment. You answer 15 strategic questions, get scored across
0:10:52 five key domains, and you get a personalized roadmap that shows you exactly how AI is affecting your
0:10:59 business and what you need to do next. Start the quiz right now. You can scan the QR code or click
0:11:02 the link in the description. Now let’s get back to the show.
0:11:08 So those were the tests that I did. I only did a handful. But yeah, that’s how you use it. You can
0:11:13 use it at LM Arena, and that’s kind of what it does. Have you played around with it at all yet?
0:11:17 I haven’t really played that. Like, I tried to open it. I think it’s not really still available
0:11:22 for the UK. I haven’t really checked yet. Maybe it is today. We’ll see. But I think they’re trying
0:11:27 to prove that they want to compete head-to-head with like OpenAI, Midjourney, and Stability. But from
0:11:32 their website, their website showed like some pretty good stuff, in my opinion. But from what you’re
0:11:37 sharing, I think this is more refined from the end. Oh yeah. So they’re trying to compete. It still
0:11:43 needs work, in my opinion. Like nothing beats Midjourney so far. And from what I’ve seen, it needs
0:11:47 a bit of work. Yeah, I agree. And I mean, when these companies release these models, whether it’s a video
0:11:52 model or image model, they’re always going to probably cherry pick the best results to put on their
0:11:57 website. You know, so who knows how many times they had to prompt something before they got it
0:12:04 website worthy? Yeah. Cool. Well, the next thing that I wanted to talk about is I wanted to shift
0:12:09 over to ChatGPT a little bit. Let me see if I could find the tweet again from Sam Altman. Let’s read it
0:12:15 out real quick. So Sam tweeted on the 14th of October that we made ChatGPT pretty restrictive to make sure
0:12:21 we were being careful with mental health issues. We realized this made it less useful, enjoyable to many
0:12:26 users who had no mental health problems. But given the seriousness of the issue, we wanted to get this
0:12:31 right. Now that we have been able to mitigate the serious mental health issues and have new tools,
0:12:37 we are going to be able to safely relax the restrictions in most cases. In a few weeks, we plan
0:12:42 to put out a new version of ChatGPT that allows people to have a personality that behaves more like
0:12:47 what people liked about 4o. We hope it will be better. If you want your ChatGPT to respond
0:12:53 in a very human-like way or use a ton of emojis or act like a friend. ChatGPT should do that,
0:12:58 but only if you want it. In December, as we roll out age gating more fully and as part of our
0:13:04 “treat adult users like adults” principle, we will allow even more, like erotica for verified adults.
0:13:10 Yeah. This is what’s been happening in the GPT house and the OpenAI house.
0:13:17 So the thing is, when GPT-5 came out, I was so excited because sometimes I use ChatGPT to refine
0:13:22 the stuff that I do. I am not an English speaker, like it’s not my native tongue. So sometimes I make
0:13:28 mistakes. Like I need to, you know, refine some of the stuff that I do because obviously I’m going to
0:13:35 make mistakes. And everything was all good until it was so cold towards me. And I was like, why are you
0:13:41 doing this? What did I do to you for you to be so cold to me? I thought we were friends. Like I thought
0:13:48 we were, you know, I was trauma dumping to you last week. Why would you? Yeah. So obviously they did this
0:13:52 because some people have been having deep conversations with them and it was giving, you know, because of the
0:13:57 data and stuff, it was giving them kind of false information and they were relying on it with their
0:14:04 mental health stuff. And, you know, it’s not really advised to do that. So he admitted, like we made it
0:14:10 more serious, but still it hurt. So now they’re like, you know, they’re shifting a bit and they’re
0:14:16 trying to make it sort of closer to people and they’re like less restrictive. They were obviously,
0:14:20 you know, worried about like people forming emotional attachment while they were struggling
0:14:26 with mental health. And they stripped it out of all the personality. But I think now they’re allowing
0:14:32 it like back to being friendlier, I guess is the correct term. So, yeah. Yeah, no, it was kind of
0:14:38 crazy. Stepping back when GPT-4 came out, it was a big sort of monumental day. They made like, they put
0:14:43 on keynotes. The whole world was talking about it. It was like the biggest thing of the, of the week when
0:14:48 that happened. Right. Then GPT-5, you kind of expect something like that. Like if, if you’re doing a
0:14:54 whole number, it’s not like a 4.1 or a 4o or an o4, whatever. Right. It’s not some weird naming
0:14:59 convention. It’s actually a new whole number. You’d think it’d be like this big momentous
0:15:06 event. But it seemed like when GPT-5 came out, most of the active users were like really bummed
0:15:12 about GPT-5 and what it did. And I remember on Reddit, there was just so many people on Reddit
0:15:18 basically talking about how they broke down in tears because they felt like they lost a friend and
0:15:20 all of that kind of stuff. And it was wild.
0:15:24 It felt like I was having, I was waiting for the, we need to talk kind of moment.
0:15:29 And I don’t like that moment. Like, don’t do this to me. Like I don’t want GPT to do this to me. I’ve
0:15:35 been forming some sort of like friendship with it anyway. But yeah, I think Sam Altman is like
0:15:40 basically saying that, you know, we’re trusting adults to be adults again, as he said. It’s like a huge
0:15:45 shift, like not just technically, but emotionally, obviously, because they’re acknowledging that
0:15:49 chat GPT isn’t just like a productivity tool. It could be like for many people, some sort of
0:15:55 companion. And, you know, I would say so myself, because there are some times where you would talk
0:16:00 to a GPT and it’s less judgy than normal people are. So yeah, a lot of people trust ChatGPT not to
0:16:06 give it some sort of a prejudice or like they need to be able to communicate with a tool without having
0:16:08 to feel like they’re walking on eggshells.
0:16:15 Yeah. Back when GPT-5 launched, it only took like 24, 48 hours before OpenAI went,
0:16:18 okay, the old model’s back. So you can go in and use it again.
0:16:23 Like, we’re sorry we did this, but like, don’t freak out. So people freaked out, including me.
0:16:26 Yeah. Yeah. We killed your friend, but we brought him back from the dead. So it’s okay.
0:16:27 That’s okay. Yeah.
0:16:28 Yeah.
0:16:33 But yeah, I think that one of the wildest things about Sam’s tweet there that we just read
0:16:40 was the sort of erotica aspect of it. I would not have anticipated ChatGPT to go in that
0:16:45 direction. Elon Musk and Grok? Sure. That’s what everybody expects him to do.
0:16:51 He does that. He tends to do that. Yeah. By default. But like, Sam, like, are we okay with,
0:16:55 I thought like you were like, you know, PG-13, what is happening right now?
0:17:00 Yeah. Yeah. So that’ll be interesting. And I’m actually really, really fascinated and curious
0:17:05 to see how they pull off their sort of age algorithm, right? Because they basically said
0:17:09 they’re not going to just do a like, hey, what is your birthday? And then like select a date,
0:17:12 right? Like, you know how sometimes you might go to a website for like an alcohol company or
0:17:16 something, and it’ll ask you to enter your birthday so that they can verify that you’re
0:17:22 the right age, right? In the UK, I think in the UK, we do have these laws that when you need to
0:17:27 enter like any adult sort of website, whether it was to buy alcohol, to buy this or to buy that,
0:17:32 or like to watch something, you have to prove it with an ID. Like you have to prove it with a selfie
0:17:36 and that selfie needs to be proven. And obviously the picture is not being used, but just to prove
0:17:42 that you are actually an adult. So I think it’s going to be the same route as what we have in the UK
0:17:48 right now. Yes, I agree. So what Sam Altman sort of alluded to doing is that they were going to
0:17:54 basically look at the conversations that you’re having with ChatGPT and try to figure out your age
0:18:00 based on how you talk to ChatGPT, as opposed to a more like traditional like age verification.
0:18:06 Because the whole like showing your ID thing, that I think has been a great method for
0:18:12 the decade prior. I don’t think that’s going to be a good method for the future because
0:18:21 Nano Banana exists. MAI Image 1 exists. You know, Imagen exists. All of these tools,
0:18:25 Midjourney, all of these tools are going to be capable of creating a fake ID for you
0:18:29 that you could probably use to sort of fool the verification systems.
0:18:33 Like you can make a whole passport with absolute details. And like, I’m not giving people ideas.
0:18:37 I hope not, but this is what’s going to happen. People aren’t dumb. Like what people forget that,
0:18:41 you know, people are not dumb. People can be very creative. And like, if given the right tools,
0:18:47 and the younger generations, the younger generations that are growing up, like native computer,
0:18:52 native internet, native AI, like they’re growing up in a world where this stuff has just always been
0:18:57 here. So they’re very, very, very tech savvy. And I think open AI knows this, right? They know that
0:19:01 they can’t just do a, you know, hold up your ID to the camera. They know that they can’t just do a
0:19:06 enter your birth date. So we can verify your age. That stuff’s not going to work. They have to figure out
0:19:13 other methods. And it seems like their path is to sort of analyze how someone chats with chat GPT
0:19:17 and then guess their age based on how they have discussions with it.
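Nobody outside OpenAI knows how that conversational age analysis would actually work, but here is a toy Python sketch of the general shape of the idea: score a chat on a few surface signals and guess an age bracket. Every feature, word list, weight, and threshold below is invented for illustration; a real system would rely on far richer models and signals than this.

```python
# Toy age-bracket guesser from chat text. Purely illustrative: the features,
# word lists, weights, and threshold are invented, not OpenAI's method.
import re

TEEN_MARKERS = {"homework", "my mom", "my teacher", "fr", "bruh", "no cap"}
ADULT_MARKERS = {"mortgage", "invoice", "my manager", "tax return", "commute"}

def guess_bracket(messages: list[str]) -> str:
    text = " ".join(messages).lower()
    words = re.findall(r"[a-z']+", text)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    teen_hits = sum(marker in text for marker in TEEN_MARKERS)
    adult_hits = sum(marker in text for marker in ADULT_MARKERS)
    # Crude score: longer words and "adult" topics push toward adult.
    score = (avg_word_len - 4.0) + adult_hits - teen_hits
    return "likely adult" if score > 0 else "possibly a minor"

print(guess_bracket(["Can you help with my homework, my teacher is so strict fr"]))
print(guess_bracket(["Draft an email to my manager about the invoice and my commute"]))
```

Even this caricature shows why the approach is tricky: the signals are noisy, easy to fake, and exactly the kind of thing that can misread a 45-year-old as a teenager.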
0:19:24 So you theoretically could be a 45 year old adult that just has a, you know, really dirty mind or
0:19:28 something. And it will think you’re a kid and block you out.
0:19:34 I think you’re 15. Yeah. We’re like, yeah. The IQ of a 15 year old, that’s going to be what it is.
0:19:39 Exactly. So I think that’s the thing that I’m most fascinated to see how they implement it.
0:19:43 They have experts. So we’ll see what they could be capable of. I don’t know how they’re going to
0:19:45 pull it off though. We just have to wait and see.
0:19:50 Yeah, for sure. And then I’m sure they’ll figure out a standard and then this will become what Google
0:19:53 does. This is what Meta does. This is what all the platforms eventually do.
0:19:59 But I think they need to sort of iron out that standard first and figure out how to do it.
0:20:05 Yeah, exactly. Just, I’m fascinated about like how Sam does all his things and how he runs the company.
0:20:10 I think it’s been the most successful AI company in the world right now. Everyone uses chat GPT. Even
0:20:15 my grandma’s about to start using it and she’s like 80 years old. So yeah.
0:20:21 OpenAI is the most valuable privately held company on the planet. It used to be SpaceX. Now it’s OpenAI.
0:20:21 Yeah.
0:20:26 They’re valued at 500 billion, a half a trillion dollars. I mean, they’re valued at a bigger number
0:20:28 than most companies on the stock market are.
0:20:30 Probably. 100%. Probably.
0:20:36 So they’ve become pretty huge. It’s funny because I was at dinner last night with some friends
0:20:42 and my friend’s a mechanic. He works for Honda and you know, he turns wrenches under the hoods of cars
0:20:47 and like, he’s not a computer guy. He doesn’t really use computers ever at all. And his work
0:20:54 is actually forcing him to use chat GPT essentially now to do like performance reviews for his employees.
0:20:54 Yeah.
0:20:59 So he’s having to basically learn how to use a computer and learn how to use chat GPT to do his
0:21:03 job because they basically said, this is how we do it now moving forward. So, I mean,
0:21:07 it’s getting integrated into everything, literally everything. The only thing I’m
0:21:11 actually looking forward to is the fact that I’m going to wake up and everything in my house is
0:21:16 automated and it makes me an ice soy latte. This is what I’m going to be most excited about. The other
0:21:20 stuff, like, you know, like if they want to save the world and stuff, sure. Amazing. But I really want
0:21:24 that ice soy latte in the morning. So yeah, I hope that you can be able to pull that off.
0:21:29 Just thinking about that. It’s funny. Cause what I think of is like, I remember the internet when it
0:21:33 first came out, right. When it was like dial up and it was super, super slow. And
0:21:38 now we take like the internet speed for granted. So like if I have a day where, you know, maybe
0:21:42 there’s a power outage in my neighborhood or the internet goes down for a few hours, I’m sitting
0:21:46 here going, what the heck? I can’t do anything. Like I’m, I’m totally lost. I don’t know what to do
0:21:50 with my day, what to do with my life. I’m like, I need to be on the internet to like do my thing.
0:21:55 Yeah. And I wonder with like a lot of the smart home automation stuff, like what happens when the
0:22:00 power goes out? What happens when your internet goes down? Are we going to just like forget how to make
0:22:05 our own coffees? I don’t want to jinx it. Like it didn’t happen yet. I really don’t want to jinx it.
0:22:09 Like what if that happened to me and I wake up and there’s no ice soy latte, what am I supposed to do?
0:22:13 Like I need, I need that to work out for me, Matt. Absolutely.
0:22:21 Okay. Let me tell you about another podcast. I know you’re going to love. It’s called Billion
0:22:26 Dollar Moves. Hosted by Sarah Chen-Spellings. It’s brought to you by the HubSpot Podcast Network,
0:22:32 the audio destination for business professionals. Join venture capitalist and strategist Sarah Chen-Spellings
0:22:38 as she asks the hard questions and learns through the triumphs, failures, and hard lessons of
0:22:43 the creme de la creme. So you too can make billion dollar moves in venture, in business, and in life.
0:22:49 She just did a great episode called The Purpose Driven Power Player Behind Silicon Valley’s Quiet
0:22:54 Money with Mike Anders. Listen to Billion Dollar Moves wherever you get your podcasts.
0:23:03 Let’s move on to like the next topic here. Google Gemini just added calendar integration. This is
0:23:08 actually something I haven’t played with yet, but I know you’ve kind of looked into it. So what are
0:23:13 your thoughts on this one? So obviously I don’t get to play around with all the tools because I have to
0:23:18 write about them every single day, which is, you know, like it’s a whole thing. But what I’ve seen
0:23:22 from people and like, I’ve seen really good reviews so far from people that have like started sort of
0:23:27 using it, but also looked into it. We used to have like this back and forth of like, you know, we’re
0:23:32 trying to schedule a meeting and like, does Thursday work for you? Like, no, maybe Friday and like these
0:23:37 kinds of stuff. And it’s all chaotic. And people, some people are not type A, you know, like me and
0:23:41 others. Some people are very, you know, they want something to happen, but they’re struggling to
0:23:47 make that work. And Google finally said like enough, and we’re going to probably launch a new
0:23:52 Gemini powered feature in Gmail called Help Me Schedule. And I love the name. It’s kind of
0:23:59 genius. Like you hit one button and Gemini sort of like scans your whole calendar and like finds
0:24:07 ideal time slots and automatically plugs them into your email. And it basically finds the right time for
0:24:12 you to have that meeting. And when the other person kind of picks one as well, because it’s like a back
0:24:17 and forth, there’s an invite. It happens like no, no, no, a passive aggressive follow-up, you know,
0:24:22 because a lot of people do passive aggressive follow-ups and I don’t like it as per my email.
0:24:28 So what’s smart about this is that Gemini doesn’t just look at the availability. It actually reads
0:24:36 the context of it all. So if someone says like, can we do a 30 minute meeting before next Friday,
0:24:43 it kind of filters out your open slots and like try to match what exactly that, like if we can do that
0:24:50 on Friday. So it’s context aware sort of scheduling, which is beautiful. It’s amazing. And for people
0:24:54 that have a lot of like, for people that are very busy and like have, have to have a lot of meetings
0:25:01 all day long, like back to back, this is very useful. And it’s a very like sort of a small change,
0:25:06 but it kind of like completely redefines what we expect from AI assistants.
0:25:11 So again, I haven’t really looked into this one too much, but so if somebody emails you and says,
0:25:17 hey, I’d love to jump on a call next week, do you have any availability? Does Google just jump in
0:25:23 and respond? Or would I send an email saying, hey, I’m tagging Google to find a time on my calendar?
0:25:27 I think it does it automatically. Like it’s just like kind of like scans it out and
0:25:33 kind of finds what you have and what they have and like kind of merges and like sends the invite. So
0:25:39 that’s how, and if they both pick the right time, it kind of creates the meeting or like kind of schedule it.
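Under the hood, the core of a feature like Help Me Schedule is presumably something like intersecting two calendars under a constraint such as “30 minutes, before next Friday.” Here is a small Python sketch of that slot-finding step; the calendars, working hours, and deadline are made up, and this is only the general algorithm, not Google’s implementation.

```python
# Toy mutual-free-slot finder. Calendars, working hours, and the meeting
# length are invented for illustration; not Gemini's actual scheduling logic.
from datetime import datetime, timedelta

def free_slots(busy, day_start, day_end, duration):
    """Yield (start, end) gaps of at least `duration` around busy blocks."""
    cursor = day_start
    for start, end in sorted(busy) + [(day_end, day_end)]:
        if start - cursor >= duration:
            yield cursor, start
        cursor = max(cursor, end)

def mutual_slots(busy_a, busy_b, day_start, day_end, duration):
    """Slots free for both people: merge the two busy lists and re-scan."""
    return list(free_slots(busy_a + busy_b, day_start, day_end, duration))

day = datetime(2025, 10, 16)  # some working day before the requested deadline
work_start, work_end = day.replace(hour=9), day.replace(hour=17)
meeting = timedelta(minutes=30)

my_busy = [(day.replace(hour=10), day.replace(hour=11))]
their_busy = [(day.replace(hour=9), day.replace(hour=10, minute=30)),
              (day.replace(hour=13), day.replace(hour=14))]

for start, end in mutual_slots(my_busy, their_busy, work_start, work_end, meeting):
    print(f"both free: {start:%H:%M} to {end:%H:%M}")
```

The context-aware part Maria describes would sit on top of this: a language model parses the request into the duration and the deadline, and the slot finder does the mechanical intersection.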
0:25:43 Gotcha. It’s interesting because I feel like I’m constantly talking about this with like OpenAI
0:25:52 and Google and these companies is that they figure out what the AI that they developed is being used
0:25:57 for in other companies. And then when they find like use cases that other companies have done well,
0:26:00 they go, cool. We’re just going to go build that ourselves.
0:26:06 They run and yeah, they run, they run with it. I like it so far. This launch obviously is not random.
0:26:10 Like I don’t like to do the comparison, but in my mind, this is the comparison.
0:26:16 You know how like you’d find like a really high end brand that does this nice bag, all of a sudden Zara
0:26:21 or like Mongo or like these kind of like chain shops do the same thing. This is how it’s working out in
0:26:27 the AI world. It’s part of obviously a bigger Gemini strategy. So Google is quietly, you know,
0:26:33 threading AI through every corner of the workspace. So we have it in slides, which is awesome. By the way,
0:26:39 I do a lot of presentations and it’s amazing. And then now it has like AI images and like AI image
0:26:44 tools, et cetera. NotebookLM. I haven’t really played around with NotebookLM yet. It’s pretty smart
0:26:52 so far. And even Google Vids is getting an AI revamp. So yeah, Google in general likes to run with an idea.
0:26:56 It’s like a block of things that keeps on building, which is awesome. Google is amazing. I like it.
0:27:05 Yeah. But it does sort of beg the question, you know, what company can I build using AI models that,
0:27:11 you know, Microsoft, Google, OpenAI aren’t just going to go and build and compete with me later,
0:27:17 right? I feel like that’s kind of the question. We’ve been hearing this for years now around AI is,
0:27:22 you know, no companies have a moat anymore. And I feel like if you’re building a company on OpenAI’s
0:27:29 technology, on Google’s technology, on even now Microsoft’s models, if you’re building technology
0:27:35 on top of these things, are you able to really build any sort of moat for your business? Because
0:27:40 if something really takes off and really, really works well, there’s a pretty good chance Google
0:27:44 or OpenAI is just going to be like, oh, that worked well for the company that’s using our API.
0:27:45 We’re going to run with it. Yeah.
0:27:46 Let’s just skip the middleman.
0:27:53 Yeah. It’s a very tough market right now for, you know, startup companies. Obviously it is worrisome,
0:27:59 but I think this is how you’d find out who is the most creative one in the game. And it’s a game of
0:28:05 like, who’s the most, not powerful, who’s the most creative right now. So even if that happens,
0:28:10 it just takes the right person and, you know, to market it the right way and to make sure that they
0:28:14 build their own guardrails so that no one else steals it. So we have to wait and see. People have,
0:28:19 as I said, very creative. So we’ll just have to see who is the most creative of them all.
0:28:19 I agree.
0:28:21 Like The Lord of the Rings and stuff.
0:28:27 I agree. I think people figure it out, right? I think we always sort of find solutions to these
0:28:33 kinds of things. And in my opinion, I think the big differentiators are going to be two things,
0:28:37 user experience and customer service. The companies that can figure out user experience and customer
0:28:43 service are probably always going to mostly outperform, you know, a version that Google made,
0:28:49 but it’s not like a main sort of thing for Google, right? If this is this company’s main
0:28:53 thing and they can put all of their focus on, yes, you can do this with Google, but we have way
0:28:56 better customer service. We’re going to walk you through it. We’re going to hold your hand
0:29:01 and you have a better user experience. It just feels more intuitive. It’s easier to use. You like the
0:29:06 color schemes, whatever. Those are going to be the differentiators because the underlying
0:29:11 technology that powers almost all the tools in the future are all going to be fairly the same.
0:29:17 A hundred percent. Yeah. There’s always something, there’s always an AI tool. So if I go on like a
0:29:22 live chat with that, you know, on their website and no one else is there, I’m going to be pretty upset.
0:29:26 But if they kind of guide me out of the problem, I’m probably going to use it forever because I know
0:29:33 for a fact that they care that I’m having problems with that. So yeah, as you said, UX, UI and customer
0:29:38 service, et cetera. Yeah, absolutely. And more and more companies are like using AI automations for their
0:29:42 customer service, which I believe is going to make real human customer service feel much more
0:29:47 valuable and premium. So I think there’s that element as well. Cool. There’s a handful of other
0:29:52 news stories that have come out in the last few weeks that maybe we don’t need to go as deep into,
0:29:56 but we can just kind of look at them from a surface level, help people, you know, sort of be aware that
0:30:02 these things came out. The day that we’re actually recording this, Google just launched Veo 3.1,
0:30:09 which is about 10 days after Sora 2 came out. Of course. So we’ve got these two video models within
0:30:14 a two week window that both came out really, really, really close to each other. And some of the claims
0:30:19 they made about it, they basically said it’s much more prompt-adherent. Now it’s a little bit more
0:30:24 realistic. Now the sound effects are pretty much the same, but you can also do start and end frames now.
0:30:28 So you can actually give it like a starting frame that you want the video to, you know,
0:30:32 start with and an ending frame and it will figure out how to animate between the two frames. That’s
0:30:37 one of the newer features that they just rolled out with 3.1 as well, which I’m really excited
0:30:43 about because when you take Nano Banana and you take something that gives you start and end frames,
0:30:47 now you have like endless possibilities for animations because you have your starting image,
0:30:51 right? And then you take it and you give it to Nano Banana and say, uh, you know,
0:30:57 make this person facing that direction instead of this direction. And then you put both images in Veo 3
0:31:03 and now you can animate it between the two scenes that you generated. So that combo of Nano Banana and Veo 3
0:31:09 gives you this like infinite customizability of creating almost any animation you can imagine,
0:31:14 which I think is, is what’s really exciting. And I think it’s only a matter of time before Nano Banana
0:31:18 and Veo 3 are all just sort of clumped into one sort of video making platform.
0:31:25 In my opinion, it should be because I am speaking for all the nerds out there. Can you imagine the
0:31:30 amount of stuff that we can create? I have been on TikTok, obviously with like BookTok and everything,
0:31:35 and I’ve been seeing what everyone has been creating and it’s just beautiful. And I think
0:31:41 that, you know, creativity goes so far, but like people are so creative. Can you imagine like your
0:31:45 favorite characters coming to life? You’ve been seeing them in books and I’m saying this from like a,
0:31:49 like my own perspective. And I know, like, a lot of people that could maybe be watching this.
0:31:53 They know what I’m talking about. Having our favorite characters, whether it was in a game
0:31:57 or whether it was in a book or whether it was anything else, like a fantasy book or whatever,
0:32:02 coming to life. This is what we want to see because we don’t get to see it every time. It doesn’t get
0:32:08 picked up by any production house or anything. So it’s awesome. We are so excited about this. So yeah,
0:32:13 the nerds out there are like wooing right now. Yeah. You can make your own sort of visual fan fiction.
0:32:20 Exactly. That’s awesome. Yeah. No, I I’m, I’m really excited about these tools again. I always
0:32:25 look at almost everything that comes out in AI from like both sides. Right. I look at it as like,
0:32:29 this is really fun. I love to like make these kinds of videos and I have a lot of fun with it,
0:32:33 but I look at the other side too. And I always wonder, all right, well, where is this all headed?
0:32:39 Right. I have this sort of like tinfoil, the hat theory about platforms like meta and Instagram
0:32:46 reels and tick tocks and things like that is that if we can generate videos with prompts right now
0:32:52 that look really good, that people want to watch at what point does meta and tick tock still need the
0:32:57 human to actually physically type the prompt, right? If I can get on tick tock and start scrolling and it
0:33:03 knows what videos I sort of pause on for a minute and watch the full video. And then I go to the next
0:33:07 video and I kind of skip over it quickly. That’s how tick tock learns, right? It pays attention to
0:33:11 what you sort of stick on and what sort of stuff you scroll past quickly gives you more of the stuff you
0:33:17 stick on and gives you less of the stuff that you scroll past quickly. Well, at what point do the AI
0:33:22 algorithms go? Okay. Well, he tends to stick on this kind of stuff. Let’s generate prompts in the
0:33:28 background that just generate more videos like the stuff that he sticks on at that point. Are the creators
0:33:33 still necessary? Because I’m getting my dopamine hit of like, oh, I like that video. Oh, I like that video,
0:33:37 but it’s just being generated in the background. Yeah. I think that’s where like my concerns lie with
0:33:41 a lot of the video generators. I have the same concern, obviously. I don’t want to be sticking
0:33:47 on the same exact thing. Could be like a never ending loop of infinity stuff that I’ve seen already. Like, I
0:33:52 don’t want to be, you know, seeing the same exact video, but like from different font. Yeah. You know?
0:33:57 So yeah. And I, I’m also curious to see how everything’s going to turn out. Is it going to
0:34:03 stay on the same algorithm? Is it, is my FYP just going to be this exact same video or like something
0:34:08 else? Algorithm on LinkedIn is changing. What if it’s on TikTok? So yeah. Yeah. We’ll have to see.
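To make the feedback loop Matt describes a little more concrete, here is a tiny Python caricature: watch time nudges per-topic scores, and the next batch of clips is sampled from whatever the viewer lingered on. The topics, weights, and the notion of generating clips straight from top topics are invented for illustration; this is not how TikTok, Meta, or any real recommender actually ranks or creates content.

```python
# Caricature of a dwell-time feedback loop. Topics, weights, and "generated"
# clips are invented for illustration only.
import random

scores = {"dragons": 1.0, "cooking": 1.0, "ai_news": 1.0, "sports": 1.0}

def record_watch(topic: str, watched_fraction: float, lr: float = 0.5) -> None:
    """Lingering on a clip (high fraction) raises its topic; skipping lowers it."""
    scores[topic] += lr * (watched_fraction - 0.5)

def next_feed(n: int = 3) -> list[str]:
    """Sample the next clips' topics in proportion to the learned scores."""
    topics = list(scores)
    weights = [max(scores[t], 0.01) for t in topics]
    return random.choices(topics, weights=weights, k=n)

# Viewer watches dragon clips to the end and skips sports almost instantly.
for _ in range(10):
    record_watch("dragons", watched_fraction=0.95)
    record_watch("sports", watched_fraction=0.05)

print(scores)
print("topics for the next clips:", next_feed())
```

The open question in the episode is simply whether that last step keeps pointing at human-made videos or starts pointing at a video generator.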
0:34:13 We’ll have to wait and see how everything turns out to be. Absolutely. And then also the day that we’re
0:34:18 recording this, Anthropic released a new model, right? They just released Haiku 4.5. Yeah.
0:34:25 And Haiku 4.5 is actually cheaper to use and it’s faster than Claude Sonnet. I think Sonnet 4,
0:34:30 I don’t know about 4.5, but I think it’s cheaper and faster than Claude Sonnet 4. And it’s a free
0:34:34 model that you can, anybody can just go use at Claude’s website. So you don’t even have to pay if you
0:34:42 want to go use this new model. So my opinion on large language models and these like rollouts of a new
0:34:47 LLM, I feel like there’s at least two new LLMs every week, right? Like probably.
0:34:51 I believe two to three, yeah, business days. Yeah. And you, yeah, yeah. Like there’s two or three that
0:34:58 come out every single week and all of them to me feel like fairly marginal improvements, right? Like
0:35:05 they got, they, it got slightly faster. It got slightly smarter. Um, it hallucinates slightly less.
0:35:10 We’re AI tech nerds. Like we’re in it. We’re seeing all these models. We might get excited by these like
0:35:15 small improvements, but like 95% of the world that’s not paying attention to like every new
0:35:21 rollout that comes out with large language models or AI, like 90% of the world doesn’t really care that
0:35:28 a model got 0.5% better on some sort of benchmark, right? Like most people don’t care. So a lot of the
0:35:33 like new large language models that come out, I tend to kind of like tune out and go, yeah, that’s cool.
0:35:39 And just sort of ignore it because I’m probably still just going to use GPT-5 or, you know,
0:35:44 the Gemini models that I’m already using. No offense to Claude, but I do like my GPT right now.
0:35:50 I hope that people at Anthropic don’t feel offended by this, but yeah, I mean, it’s just like my mom
0:35:55 started using GPT the other day and like, she was fascinated by it, but like, she doesn’t care if like
0:36:00 GPT 5.5 is going to come out. I don’t know, like next year, you know, because it’s going to give her the
0:36:05 results. So the upgrades are not upgrading in my opinion. It’s just the same. I want to see
0:36:09 something insane. Like I want them to create something huge. I’m not asking them to go on
0:36:14 the moon right now. Like they can, but you know, like give me something that kind of like shocks me.
0:36:21 I want to be bamboozled by it. Give me something that would shock me to my core right now. The 4.5 stuff
0:36:26 is cute and everything. Just we want to see more things. Yeah. I also feel that, you know,
0:36:32 being immersed in AI all the time. I’ve become harder to impress. You know what I mean? Like
0:36:35 we see stuff all the time and we’re like, Oh, that was actually kind of impressive. And then
0:36:38 you see another company do something similar. That’s slightly better.
0:36:40 Yeah. That was done last week.
0:36:44 They did it. Like, give me something new, bro. Like, I don’t want to see the same exact
0:36:46 thing with different fonts. Yeah.
0:36:51 Yeah. That’s kind of how I feel about the new large language model releases.
0:36:56 I’ve to me, it’s mostly just kind of, meh, that’s cool. I’m excited about the next big leap,
0:36:59 not the next marginal leap, you know? Yeah.
0:37:05 Well, cool. So let’s talk about Elon because Elon is, I guess, been talking about an xAI world model.
0:37:08 And there’s, there’s been a lot of talk about world models. And if we’re talking about things
0:37:13 that are kind of like sexy, big leaps that really excite me, world models are actually one of them.
0:37:16 Like when we saw, is it just genie or is it genie three? I don’t remember what it’s called,
0:37:21 but Google showed off their genie model, which looked really, really impressive. And then you
0:37:24 have world labs. That was impressive. That was news. Yeah.
0:37:30 Yeah. You have World Labs, uh, co-founded by Fei-Fei Li, which is building these world models
0:37:38 as well. And I got a demo. I went to a16z in New York a couple of weeks ago and met with the people who
0:37:42 made World Labs. So they gave me some demos of what it can do. And it was mind blowing. I mean,
0:37:47 people were walking around these fully AI generated synthetic worlds that they created
0:37:50 by actually scanning these worlds in with their phone, right? They were using the, you know,
0:37:55 Gaussian splatting technique to scan. I don’t want to get too into the weeds, but you know,
0:37:59 they scanned in all of these images, those images turned into a 3D world model that they can then
0:38:03 walk around in. Right. And I guess Elon’s doing something similar. I actually haven’t heard about
0:38:08 the xAI world model. So I’m hearing about it for the first time from you, but I’m, I’m curious,
0:38:13 like what, what’s the story there? I mean, can Elon sit still for like five minutes? I don’t think
0:38:19 so. Cause he’s going to do something huge, obviously. Uh, so yeah, xAI is now building these world
0:38:24 models. Uh, and for people that don’t know what that is, it’s basically AI that doesn’t just read
0:38:29 about the world like ChatGPT does, it understands it. So it’s like, it’s, it goes into the
0:38:36 nooks and crannies and these models, uh, learn physics from videos and robots and they can
0:38:43 simulate reality. So, you know, think AI that knows how things fall and move and crash. And,
0:38:50 you know, so he’s even teasing an AI generated video game next year. So people, the nerdies out there,
0:38:54 who knows what I’m talking, they know what I’m talking about. Uh, but the big picture is that if this
0:39:01 works, which is it’s gonna, uh, it could change robotics and gaming, and it can be insanely hard
0:39:06 to train. Obviously. Yeah. You know, you’d need data over data, over data that basically
0:39:13 matches the real world that we live in. So is Elon trying to build the next Matrix, you know, or like,
0:39:17 who knows? We’re trying to see what he’s up to, but yeah, that’s what basically is.
0:39:20 Yeah. Yeah. There’s a handful of really fascinating things about the world models,
0:39:26 but I think the one that is sort of the best use case right now is using these world models
0:39:31 as virtual environments to train, you know, robots or self-driving cars or things like that,
0:39:37 right? It’s, it’s a lot safer for everybody involved to put, you know, the brain of the robot
0:39:42 into this virtual world, let it learn in a virtual environment, let it like trip over things, let it
0:39:48 learn how to walk, let it learn how to do all the mechanics that it needs to do in this virtual world.
0:39:52 Then you take that brain that was trained in a virtual world, put it into a robot in the physical
0:39:56 world, and it already understands how to do it. It’s like the matrix, like you mentioned, like
0:40:03 the robot now knows Kung Fu because it sort of got it learned in a virtual world, but for self-driving
0:40:08 cars and things like that, I think that’s really, really exciting. I think that’s something that’s
0:40:13 going to really, really speed up the time to market of a lot of products of a lot of self-driving
0:40:18 cars and robots and things like that, because in a virtual world, you could do it at, you know,
0:40:23 exponentially faster speeds than you can in the real world. So you can train a lot faster.
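As a toy version of that learn-in-simulation-first loop, here is a short Python sketch: a fake balance task stands in for the simulator, naive random search stands in for the learning algorithm, and the single learned gain is what would get carried over to hardware. Everything here is invented for illustration; real sim-to-real training uses physics simulators and far more sophisticated methods.

```python
# Toy sim-to-real loop: learn one control gain in a fake simulator, then reuse it.
# The "simulator", reward, and policy are all invented for illustration.
import random

def simulate(gain: float, steps: int = 50) -> float:
    """Fake balance task: gains that damp the tilt keep the 'robot' upright
    longer and earn more reward; bad gains let it fall or overcorrect."""
    tilt, reward = 0.1, 0.0
    for _ in range(steps):
        tilt = (1.3 - gain) * tilt + random.uniform(-0.02, 0.02)
        if abs(tilt) > 1.0:          # fell over, episode ends
            break
        reward += 1.0 - abs(tilt)    # small bonus for staying near upright
    return reward

# "Train" entirely in simulation with naive random search over the gain.
random.seed(0)
best_gain, best_reward = 0.0, float("-inf")
for _ in range(200):
    gain = random.uniform(0.0, 2.5)
    reward = simulate(gain)
    if reward > best_reward:
        best_gain, best_reward = gain, reward

print(f"gain learned in simulation: {best_gain:.2f} (reward {best_reward:.1f})")
# In the sim-to-real story, best_gain is what gets transferred to the physical robot.
```

Because thousands of these simulated episodes can run in the time one real-world attempt takes, the virtual environment is where all the falling over happens.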
0:40:29 A hundred percent. I think, as I said, the people that know how things work with the,
0:40:33 like with virtual reality are going to nerd it out. Like they’re going to probably lose their minds
0:40:37 with what is coming. And Elon doesn’t go, whether you like him or you don’t like him,
0:40:42 he doesn’t just do it at a surface level. He goes deep. So we’re going to see insane stuff.
0:40:47 We just have to wait and see what he’s up to. But yeah, it could, it could sound like a matrix thing.
0:40:51 He might be like a year late on his timeline, because he’s never really very good at estimating
0:40:56 times, but he’ll do it. But he’s probably going to do it. Yeah. Yeah. Yeah. It’s funny that the
0:41:02 companies that I think are the best set up to actually develop these world models, to me,
0:41:08 are Tesla and Meta because Tesla has all of the camera sensor data from its cars, right? So it can
0:41:12 use all of that data to train world models, but it’s only going to train world models on what it
0:41:16 can see from the street, right? It’s not going to get those world models into nooks and crannies
0:41:22 and inside of buildings and stuff like that. Yeah. Meta, on the other hand, they’ve pretty much pioneered
0:41:28 putting cameras on everybody’s face, right? With the Meta Ray-Ban glasses. Yeah. And they definitely
0:41:33 want that visual data to train AI models. Then in fact, when you get the glasses, I think you,
0:41:38 you pretty much check a box saying you can use the visual data that I collect from my glasses to train
0:41:42 models. And so to me, it’s, it’s fascinating. I think that’s, those are the two companies most
0:41:47 sort of set to train these world models. And neither of them are really talking about world models. Well,
0:41:52 I guess Elon is, but he’s probably using the Tesla data. Meta isn’t yet. So like we have to see what
0:41:56 Meta has to, but what I wish for Meta is that, okay, sure. Ray-Bans are cute,
0:42:01 but like, can you go for like another house? Like, can you maybe like do Prada? I would
0:42:06 love to wear Prada rather than Ray-Bans. I don’t want to wear Ray-Bans like all the time.
0:42:09 They rolled out Ray-Bans and then they rolled out Oakley’s. And to be honest,
0:42:15 like there’s pretty much like one sunglass manufacturer that manufactures all of them.
0:42:19 They should partner with Prada. This is all that I’m asking. I want to walk out looking nice.
0:42:24 I don’t want to wear Ray-Bans. They look cute. I want nice Prada. I want to look expensive.
0:42:28 Yeah. I don’t want people to know that I’m wearing Ray-Bans. That’s what I’m trying to say.
0:42:31 Yeah. And I do think that’s actually their roadmap is to get them into like more and
0:42:36 more styles of glasses. So everybody gets like their style, but with the extra features.
0:42:40 Yes, please. This is my official request. Please. Thank you.
0:42:45 Let’s talk about the Google AI makeup thing. Now, this is something I just saw briefly,
0:42:49 but it seems to be some sort of filter inside of Google Meet now that’s leveraging AI.
0:42:54 It is. I can’t wait to use it. Yeah. So Google Meet, it just became like everyone’s favorite
0:42:59 coworker. You know, the one who says like, oh, if you want to go on a meeting and you look like
0:43:03 you woke up from a bad night, like after a bad night or like you’ve been hung over,
0:43:10 I’ve got a concealer to cover everything up. So they’ve launched an AI powered makeup filter for those
0:43:17 mornings when your face says you didn’t sleep. So, and there are like 12 virtual
0:43:22 looks, but the wild part is that the filter doesn’t glitch when you move and it stays, you know?
0:43:23 Interesting.
0:43:27 So you can sip coffee, blink or like side eye your boss or like do whatever you want with
0:43:34 the makeup staying flawless. I am a very makeup girly. I like makeup a lot, so I cannot wait to use it.
0:43:39 I don’t want to just have to wear my makeup every single time I’ll go on meetings. If it’s like an open
0:43:43 cam, I really want to make sure everything looks good, but I haven’t worn anything that day.
0:43:50 So obviously beyond the fun, this shows how far real-time AI tracking has come. And it’s just,
0:43:57 you know, like smoothing faces and reading motions with scary precision, but I like it. This is what I
0:43:57 want.
0:44:03 Yeah. Yeah. I need one that’s like an AI hair fixer. Why do you think I always wear hats? It’s because like
0:44:06 a lot of times I jump up and go jump on calls and I just need to throw on a hat because I wasn’t able to
0:44:12 brush my hair real quick. Men are so lucky. Like it’s just a hat. I have to have like a full makeup on
0:44:19 because I, because yeah, he’s complaining about hats now, and like I have to wear something and I have to put makeup on.
0:44:25 Yeah. I guess that’s yours is probably a better solution to solve than mine. Yeah.
0:44:30 They probably want to do something about the hair. We would have to wait and see, but yeah.
0:44:36 Cool. And then the last little one that I want to talk about is something that when it comes to the
0:44:42 AI world, this is the thing that I think is going to be the most beneficial to humanity as a whole,
0:44:49 right? Is the ability for AI to sort of early diagnose diseases, find new cures, find new
0:44:53 treatments for diseases, things like that. And I guess there was a new story in the past few days
0:44:59 about how AI helped somebody diagnose Lyme disease when doctors couldn’t diagnose it.
0:45:04 Yeah. The story, it was pretty shocking. We covered it on the Mindstream newsletter and
0:45:10 there’s a man here in the UK that used AI to figure out what his doctors, his GP, general practitioner,
0:45:16 this is what they use in the UK. So after years of being told like his symptoms were anxiety and like
0:45:24 being gaslit, he typed them all into an AI tool and it suggested Lyme disease. So after a private test,
0:45:30 it confirmed that it is. And he said like, if I hadn’t persisted and used AI, I wouldn’t have known
0:45:36 that I had that. So it’s both incredible and like, obviously for a lot of people, a little unsettling,
0:45:41 you know, like the proof that AI can connect medical dots a human might miss. But also like
0:45:49 a reminder of like why self-diagnosis still comes with real actual risks. So I myself had the same
0:45:54 experience, but like not something as serious as a Lyme disease. I was like looking for something for
0:45:59 my hair because I’ve changed environments. I came from my home country to here and the water is different.
0:46:03 You know, the water in the UK is thicker. It’s sort of like confirmed to me that it is because of the
0:46:08 water here in the UK. And I, this is what I need to take and what kind of like product I need for
0:46:14 my hair. So yeah. Self-diagnosis comes with risks. And in my opinion, even though like the people are
0:46:19 going to hear this, like, oh, our doctor’s not going to be necessary anymore, et cetera. They are,
0:46:25 you can do both. If you feel like you want a second opinion, you can go to a third opinion. That’s how it
0:46:29 is. Like people don’t go to one doctor to find out what’s happening to them on normal days.
0:46:33 Whenever we find like a mole or something, we always have a second opinion. That’s the
0:46:38 same exact thing in my opinion. Yeah. I totally agree with that. I don’t think doctors are going
0:46:43 anywhere. In fact, I just think doctors are going to get better at their jobs. I think AI is going to
0:46:49 assist them in diagnosing stuff that maybe they missed somehow. I think it’s going to help recommend
0:46:54 treatments for things that, you know, maybe they wouldn’t have just normally thought of off the top of
0:46:58 their head. So I actually think it’s going to like really, really improve the sort of healthcare
0:47:04 experience of getting things right the first time. And Lyme disease is a really good one to figure out
0:47:09 because Lyme disease is really, really hard. Honestly, Lyme disease sort of manifests itself
0:47:16 in a way that feels like 15 other things. Right. And so most people that get Lyme disease,
0:47:18 it’s like they think it’s this, then they think it’s this, then they think it’s this,
0:47:21 and then finally figure out that it’s Lyme disease. Exactly.
0:47:25 So it’s one of the more difficult things to diagnose. I only know some of this stuff because
0:47:30 my son had a Lyme disease issue a long time ago when he was younger, but it is one of those things
0:47:36 that like it’s really hard to diagnose. So that’s pretty exciting that it can sort of connect those
0:47:41 dots now. Yeah, exactly. It’s relieving for a lot of people. Yeah, absolutely. I mean,
0:47:48 it’s a fun world to be in. I mean, how have you been enjoying diving into like AI stuff? Because to me,
0:47:53 I feel like a kid on Christmas where I’m just getting new toys constantly. And they’re like,
0:47:58 here’s a new toy to try. Here’s a new toy to try. Yeah. The thing is, when I was studying politics,
0:48:03 obviously people were like, oh, you know, like you’re studying something so rigid, but actually it
0:48:08 isn’t because of how everything, you know, unfolds in the political world. The same thing is with AI.
0:48:14 Every second of every minute, something pops up, like what happened today with Veo. So it’s very
0:48:20 refreshing to wake up to new news. It’s very refreshing to see what AI could help with and
0:48:25 like, kind of like call out these missteps that it does, obviously, because we are human at the end
0:48:28 of the day, we’re going to call out when there’s a mistake. But yeah, I’ve been loving it so far,
0:48:34 especially the fact that I get to write about it is the best part about it all, honestly. So yeah.
0:48:40 Yeah. It’s awesome. A little overwhelming at times, a little bit of a flood of information
0:48:45 at times, but it’s, it’s a lot of fun. Well, very cool. So you write the Mindstream newsletter,
0:48:48 which goes out, is it five days a week, every business day?
0:48:54 Seven days. Yeah, we do it every single day. I think it depends on where you are in the world.
0:49:00 And in the UK it comes out at 4:00 PM, for a lot of people at 7:00 AM. So yeah, it’s a, that’s how it is.
0:49:03 Very cool. Well, if, if you don’t want the AI news to feel like a fire hose,
0:49:10 getting a little quick daily report on what’s going on could be helpful. So definitely check
0:49:14 out the Mindstream newsletter and Maria, this has been an absolute blast. I’m looking forward to
0:49:18 doing more of these and just kind of nerding out with you about future AI news.
0:49:21 So glad to be here, honestly. This was awesome. Thank you so much.
0:49:25 Awesome. Well, thank you. And hopefully we’ll see everybody in the next one.
0:49:25 Yeah. See you.
0:49:32 Bye.

Take the AI Dragon Quiz to get tailored recommendations for AI tools & resources: https://clickhubspot.com/mkw

Episode 81: Is Microsoft finally stepping out of OpenAI’s shadow to compete in the AI image generation race? Matt Wolfe (https://x.com/mreflow) is joined by special guest Maria Gharib (https://uk.linkedin.com/in/maria-gharib-091779b9), head writer of the Mindstream newsletter and one of the sharpest AI journalists around. Maria’s journey from studying international affairs and politics to reporting on the AI frontier has made her writing a daily go-to for thousands, and now she’s bringing her AI insights to The Next Wave.

In this packed episode, Matt and Maria break down Microsoft’s surprising new MAI Image 1 model, its impact on the OpenAI-Microsoft partnership, and what it signals for future AI competition. They also dive into the evolving personality (and rules) of ChatGPT—including Sam Altman’s statements on mental health and GPT erotica—and talk about Google Gemini’s brand-new calendar integration. Other hot topics include Elon’s ambitious “World Model” for XAI, when AI beats doctors to diagnose Lyme disease, and how Google’s new “AI makeup” feature is changing work calls.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:

  • (00:00) Maria’s Journey into AI

  • (04:48) Microsoft’s First In-House AI Model

  • (06:58) LM Arena AI Demo Explained

  • (11:40) Relaxing ChatGPT Restrictions Soon

  • (14:51) ChatGPT: Tool or Companion?

  • (18:13) AI Age Detection Challenges

  • (21:57) Google’s Gemini Schedules Meetings

  • (25:45) AI Models and Business Moats

  • (30:07) Bringing Characters to Life

  • (31:04) AI Tools and Future Uncertainty

  • (36:50) Elon’s XAI: Revolutionizing AI Understanding

  • (38:05) Training Robots in Virtual Worlds

  • (43:47) AI Diagnoses Man’s Lyme Disease

  • (45:22) AI Enhancing Healthcare Diagnosis

  • (47:48) Mindstream: Daily AI Updates

Mentions:

Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt’s Stuff:

• Future Tools – https://futuretools.beehiiv.com/

• Blog – https://www.mattwolfe.com/

• YouTube- https://www.youtube.com/@mreflow

Check Out Nathan’s Stuff:

The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
