AI transcript
0:00:14 we deep dive in the news. So last week we put out an episode with me and Maria just riffing on all
0:00:19 of the latest AI news. And people really liked that episode. We got a lot of good feedback about
0:00:24 it. A lot of people loved what we were talking about, loved getting that sort of refresher of
0:00:29 the news for the week. So we figured let’s do it again. People like it. Let’s come back to this
0:00:33 well and make another episode and break down all of the news that’s happened in the last week.
0:00:38 So once again, today I’ve got Maria, the head writer of the Mindstream newsletter,
0:00:42 joining me on the show today. And we’re going to deep dive in the news once again.
0:00:53 Being a know-it-all used to be considered a bad thing, but in business, it’s everything. Because
0:01:00 right now, most businesses only use 20% of their data. Unless you have HubSpot, where data that’s
0:01:06 buried in emails, call logs, and meeting notes become insights that help you grow your business.
0:01:11 Because when you know more, you grow more. Visit HubSpot.com to learn more.
0:01:18 So thanks again, Maria, for hanging out with me.
0:01:22 Thanks for having me. I was so happy that people liked it so much. I mean, a lot had happened last
0:01:27 week. So we have a lot to cover today. Yeah, yeah. There’s been a lot of news. And you know,
0:01:32 anybody who does pay attention to like both my main YouTube channel and the Next Wave podcast,
0:01:38 you might have noticed that I actually slowed down on making like the sort of AI news roundup videos
0:01:43 over on my YouTube channel. And I feel like this style of podcast can kind of replace that
0:01:49 format a little bit. So if you’re missing out on getting a lot of that like AI news deep dive in
0:01:53 the sort of bigger picture of everything that happened in the AI news world, episodes like this
0:01:58 are designed for you. 100%. So if you haven’t listened to that episode, definitely check that
0:02:04 one out. You’ll get a little bit deeper of a dive into who Maria is and her role with the Next Wave
0:02:08 and HubSpot. But for this episode, let’s start chatting about some of the cool stuff that happened
0:02:13 this week. I feel like the biggest story of the past week or so was OpenAI’s Atlas.
0:02:18 Oh yeah. It kind of came out of nowhere. Like I didn’t know they were planning a live stream for
0:02:24 Atlas until the day of the live stream. So that was my sort of entry point to it. Didn’t even see it
0:02:29 coming. Yeah. I mean, when it comes to Sam Altman, it’s always the element of surprise. We thought like
0:02:35 Mark Zuckerberg does that, but no, it’s actually Sam as well. So I was really surprised by the
0:02:39 launch. I saw it because I do my research for the newsletter and I saw it like as a breaking news.
0:02:44 I was like, what do you mean? No one told us anything. There’s been rumblings for a while
0:02:49 that OpenAI was working on a browser. Eventually, but not like three days ago. So it was weird.
0:02:53 Yeah. It definitely came quicker than I expected because it was only like, I don’t know,
0:03:00 maybe two months ago that OpenAI was literally trying to buy Chrome from Google. Exactly. Exactly.
0:03:03 My point. Yeah. But I guess underneath the hood, it’s still Chromium. It’s still like
0:03:09 the bones of the Chrome browser, just with like OpenAI features baked into it.
0:03:15 Yeah. I think with ChatGPT, it’s no longer just your chatbot anymore. I think like OpenAI just
0:03:22 decided to turn it entirely into a browser. So the idea is that instead of like switching tabs and like
0:03:27 pasting links and like pretending to multitask, because a lot of people pretend to multitask and not
0:03:31 everyone is as productive as all of us. ChatGPT kind of like follows you around and follows
0:03:37 you everywhere. So you’re scrolling through like a recipe. It can order the ingredients of that specific
0:03:41 recipe. And if you’re like reading a job post, you know, and you don’t want your manager to see,
0:03:46 it can also summarize it. Right. So if you want to plan a trip, it’ll start building like the whole
0:03:53 itinerary, which is really wild. Basically, Atlas wants to be that friend who finishes your sentences,
0:03:59 but like in a browser kind of way. Right. It’s huge. It’s wild. Atlas isn’t just answering questions
0:04:05 anymore. It’s actually doing the things because we were used to ChatGPT answering everything for us. Now
0:04:11 it just, you know, does everything. It’s got agent mode built in, which basically means that ChatGPT can click,
0:04:17 research, plan, and even book stuff while you’re still browsing. And before anyone asks, because people are
0:04:22 going to ask this question, it’s built with optional browser memory so it can remember what you were
0:04:27 working on. But it does have an incognito mode. So if you want to be browsing something that you don’t
0:04:32 want it to remember, you do have that capability as well. If you want to be shady about it, it feels like
0:04:38 OpenAI finally kind of stepped out of the chatbot phase and into the workflow sort of phase. And everyone’s
0:04:46 been saying like AI will live where you work and this is exactly it. So OpenAI is more or less saying
0:04:52 that we’re done being just an app on your phone or like app on your laptop or whatever. We’re becoming
0:04:58 your own desktop. And the timing is very smart with Chrome and everything and like with Claude and Gemini
0:05:05 getting louder. I think Atlas gives them a serious sort of edge, especially for like enterprise users
0:05:12 who want AI to actually do things, not just, you know, talk about them. Yeah. The way I see it, where
0:05:19 I think OpenAI is kind of going with this is I think they want to get it to the point where you don’t even
0:05:25 use your browser anymore. Like your browser is just a tool that goes and does stuff for you. That’s where I
0:05:30 think they’re trying to go with it. So right now you get into the Atlas browser. You can chat with it in
0:05:35 the little right sidebar that pops up. You can turn on the agent mode if you’re on one of their paid plans
0:05:41 and say, hey, like you just mentioned, right? Read this article that has a recipe on it. Go find all
0:05:47 the ingredients for me and order enough ingredients so I can make this recipe for 20 people, right? It’ll do
0:05:52 that math, figure out like the amount of ingredients, go to the store, order them for you, and then stop short
0:05:56 of making the payment, right? It’ll say, hey, I put it all in the cart. Do you want to make the
0:06:01 payment? Do you want to Apple Pay? Like I could just Apple Pay this. Yeah, exactly. And so right now
0:06:06 you’re typing in this little sidebar, right? But I think what they want to do is they want to get it
0:06:11 to a point where once they roll out like a mobile version of it and they get the audio features more
0:06:17 dialed in, you just prompt everything with your voice. So I can be hanging out with a friend and I really
0:06:21 like the shoes they’re wearing and I’m like, oh, those shoes are really cool. Let me order some,
0:06:27 take a picture of the shoes and go, hey, Atlas, order me a pair of these shoes. It looks at the image,
0:06:32 figures out what the shoes are, does the research on where to find them, finds the best price on the
0:06:37 shoes among multiple outlets and then places the order for you. And all you did was give it, you know,
0:06:43 a spoken prompt into your phone. Like I think they want to remove actually using the browser. I think
0:06:48 that’s the long-term picture for them is actually make it. So we’re not even thinking about using a
0:06:53 browser anymore. Just make the internet promptable. I give it a prompt and it can go and take actions
0:06:57 for me in the browser without me needing to open the browser and do the things anymore.
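That "figure out the amount of ingredients for 20 people" step from earlier is just proportional scaling. Here's a hypothetical sketch of the math an agent would run before filling a cart; the recipe, quantities, and function name are all invented for illustration:

```python
# Hypothetical sketch of the ingredient-scaling math an agent would do
# before ordering. Recipe data is made up for illustration.

def scale_recipe(ingredients: dict[str, float], base_servings: int,
                 target_servings: int) -> dict[str, float]:
    """Multiply every ingredient quantity by target/base servings."""
    factor = target_servings / base_servings
    return {name: round(qty * factor, 2) for name, qty in ingredients.items()}

# A recipe written for 4 people, scaled up to 20.
recipe = {"flour_g": 500, "eggs": 4, "milk_ml": 250}
print(scale_recipe(recipe, base_servings=4, target_servings=20))
# {'flour_g': 2500.0, 'eggs': 20.0, 'milk_ml': 1250.0}
```

The actual agent does this implicitly, of course; the point is that the "math" part is trivial, and the hard part is the browsing and checkout automation around it.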
0:07:03 A hundred percent. I cannot wait to steal someone’s look. I have one person in my life that I love
0:07:07 everything that she wears. I cannot wait for that day. I don’t have to ask her every single time.
0:07:10 So I’m going to do this. So this is a win-win for everyone in my opinion.
0:07:16 Yeah. And then, you know, there is a sort of more pessimistic take that I have on things.
0:07:21 I mean, it’s not overly pessimistic. I just know that OpenAI, they need to figure out how to drive
0:07:24 more revenue right now. Right. Yeah. I mean, they’re losing billions of dollars a year. I think
0:07:30 every prompt you enter into something like ChatGPT, they actually lose a little bit of money on every
0:07:35 time you enter a prompt. So they actually really, really, really need to find new monetization methods.
0:07:41 And by having, you know, an app like Sora, where you’re scrolling a feed like TikTok or Instagram,
0:07:47 right? That opens up a new platform where they can put ads into it. When you have a browser that more
0:07:52 and more people are using, that opens up another platform that you can put ads into, potentially
0:07:57 even opens up like recommended products. Like I just gave you an example of me just talking into my phone
0:08:06 going and ordering stuff. I mean, is it too far to believe that at some point, OpenAI might partner
0:08:12 with certain companies and say like, when they order toilet paper, make sure they go buy Charmin,
0:08:16 right? Like, would it be so out of left field to believe that they might make a deal with a company
0:08:21 like that? They’re going to monetize it. Honestly speaking, you did say that they are losing money.
0:08:26 I think the monetization part just is like a walk in the park for them. So I think they’re going to
0:08:30 monetize everything. If everything is going to be accessible to everyone on their phones,
0:08:34 especially when it becomes an app, everything is going to be monetized. Every ad that you see,
0:08:39 it will be monetized. So yeah. Yeah, absolutely. Absolutely. And then, you know, you also mentioned
0:08:44 like the timing of it was interesting, right? Because it’s only been a couple of weeks since we got
0:08:50 Perplexity’s Comet browser. Which is not free. Atlas is free, but Comet isn’t, you know? So like,
0:08:54 that’s also a thing. I thought they just recently made it free for everybody within the last couple
0:09:01 of weeks. I think so. If I remember correctly, they did open it up publicly, like to sort of get ahead
0:09:06 of OpenAI. Why don’t I know this? I may not know everything as a journalist, it seems. Yeah.
0:09:10 Yeah. I mean, there’s a lot, a lot of news flying around. A lot of things are happening.
0:09:17 So it’s definitely easy to let things slip by. Be nice to me. Yeah. But I’m fairly confident that you can use Comet now
0:09:22 for free, but there are some features that are only available for paid members inside of Comet.
0:09:28 Yeah. There’s also Dia, which is the one from The Browser Company, which I do believe is paid. I
0:09:34 don’t know if that one actually has a free version. Maybe freemium. Maybe. Maybe it is free to download,
0:09:39 but some of the features are paid. Honestly, I haven’t actually tried that browser, so I can’t speak.
0:09:42 Me neither. So many browsers out there. You can’t just try everything, honestly.
0:09:46 And then also Microsoft just rolled out a whole bunch of AI features into Edge. Google just rolled
0:09:52 out AI features into Chrome. So pretty much every browser is going in this direction of the AI browser
0:09:58 right now. But you mentioned something that you wanted to show off, an interesting thing that happened
0:10:00 with Atlas that I’m kind of curious if we can take a look at.
0:10:06 So I was playing around with Atlas yesterday and I’m going to London next week. So I was like thinking
0:10:10 to myself, like, I don’t want to look at London and just have like any brunch. It’s like, I want to have a
0:10:17 brunch that feels like I’m a London girl, even though I am not. So I said to Atlas, like, treat me like a
0:10:33 nepo baby that lives in London and plan out a day for me to have in the city with top notch restaurants from 9am to 10pm.
0:10:38 This is how it talks to me. I was so happy when it started talking to me. It basically like gives me like a
0:10:43 whole itinerary of everything. So that’s what people should know. It gives you ChatGPT right in your browser.
0:10:50 So yeah, it’s pretty cool. And if there was anything on that list that it made for you that might need
0:10:54 like a reservation or anything, theoretically, you should be able to turn on agent mode and be like,
0:10:59 hey, go make any reservations for me that need to be made to achieve this day. Yeah.
0:11:04 A hundred percent. I love it. I love it. So I can’t wait to try it. So that’s gonna be awesome.
0:11:10 And then somebody from OpenAI named Adam Fry. Okay, so they’re the product lead on ChatGPT Atlas.
0:11:14 They actually made this post basically saying these are all of the fixes that they’re going
0:11:18 to be making from all the feedback they got. This is the fixes they’re going to be making,
0:11:25 which is also a pretty solid list of like the issues that you would find with Atlas right now.
0:11:29 I’m going to read these fixes out loud just for the people that are listening on the podcast.
0:11:34 So this is just like the rapid fire list of fixes OpenAI claimed they’re going to be rolling out into
0:11:42 Atlas soon. So fixed text entry for Japan and Korea, captive portal for Wi-Fi needs to work,
0:11:49 multi-profile support, tab groups, model picker in the Ask ChatGPT sidebar, multiple tab attachments in chat
0:11:56 composer and improved at-mentions user experience. Use projects from the Ask ChatGPT sidebar. That’ll be a
0:12:02 cool one to be able to access your projects right in the sidebar. An opt-in ad blocker,
0:12:09 add a menu listing all shortcuts, bookmarks overflow menu, add speed bump before deleting all chats in
0:12:14 browsing data dialog, improve personalization of suggestions, keep improving agent time to first
0:12:22 message, improve under-triggering of ChatGPT using agent mode, make agent pause state more reliable,
0:12:27 improve chain of thought animation for different agent actions and improve cloud Excel and Google
0:12:32 Drive use in agent. So that’s the full list of things they said they’re actively working on,
0:12:37 which is also a full list of things that you might’ve run into if you’ve used agent mode and had
0:12:38 issues with it.
0:12:43 A hundred percent. Agent mode, for a lot of people, sounds like magic for people that have started
0:12:46 using it. Like people that aren’t very tech savvy, when they start using agent, they’re like,
0:12:52 this is wild. Imagine having Atlas as an agent. That’s, that’s a hundred percent more wild, you know?
0:12:57 Yeah. So I think that actually the one thing about Atlas, and we can move on from this topic,
0:13:01 because I know we spent quite a bit of time on it. The thing that I found the most valuable when I was
0:13:06 using Atlas is probably the thing that most people are scared of: the ability to go and have it
0:13:09 search my chat history for something I was looking at a few days ago.
0:13:15 Yeah. Because it will import all of your history from Chrome if you want it to. And I actually asked
0:13:20 it something like, I was reading an article on TechCrunch the other day about AI agents. Can you find
0:13:24 that TechCrunch article for me again? And it searched and it was like, well, I found three
0:13:28 articles. Maybe it was one of these. And sure enough, one of the articles that it listed was one of the
0:13:32 articles I was looking for. And I was like, that to me is so helpful because I don’t know how many times
0:13:38 I’ve literally scrolled my history, looking for a website I visited a few days ago. And for whatever
0:13:42 reason, I can’t find it again in my history. And I can’t remember what the domain was. So I’m like,
0:13:46 ah, but now I can just do it with my chat. I can just say, Hey, I was looking at this site three days
0:13:50 ago. What was it again? And it pulls it up. And I found that so helpful.
0:13:55 It is helpful. It is. Honestly, a lot of people don’t have really good memory. Imagine just scrolling
0:14:00 through and like finding the right one at the right place at the right time. That’s like so helpful.
0:14:04 Yeah. Especially if you’re like at a workplace and like your manager told you, like,
0:14:06 remember that article and like you’re scrolling through it and like,
0:14:10 you couldn’t find it. But then Atlas does it for you. Yeah. Promotion.
0:14:15 I guess that can work the other way too. You leave the office, your boss sits down at his computer.
0:14:16 What else was he looking at?
0:14:19 What else would he? Yeah.
0:14:25 But yeah. So speaking of browsers though, Microsoft rolled out a new version of edge
0:14:32 that has a whole bunch of AI features in it. And I can’t help, but think it feels like a response to
0:14:38 OpenAI’s Atlas. It feels like Microsoft going, all right, that Atlas thing got a lot of buzz.
0:14:43 Let’s roll out our browser now. Yeah. And there’s a lot of features in this one also. Like there’s
0:14:48 like 12 new features, including a cute little orb that is called Mico. And apparently it’s
0:14:56 like a whole AI philosophy about optimism in a time of like cynicism and like, yes, but are you okay?
0:15:01 It looks like it’s not okay. It looks like it’s being pushed to say some stuff to be optimistic and
0:15:06 things, even though the world is not optimistic as we speak. So it feels like it’s trying to make you
0:15:10 feel better. But on the inside, it’s like being destroyed, you know, like this is, but it’s a cute orb.
0:15:15 And the name is also cute, Mico. You know, it’s like Clippy, but like the cousin of Clippy,
0:15:19 which is… Yeah, yeah, yeah. And you can actually make it become Clippy if you want.
0:15:21 Yeah. If you want to go back to the ancestor of it. Yeah.
0:15:26 Actually, let me pull it up here. We can actually take a look and show it off.
0:15:26 Look at it.
0:15:33 So inside of the Microsoft Copilot, these are the new features here. This is Mico. There’s a bunch of
0:15:37 other new features too. I’ll talk about some of those in a minute, but let’s play with Mico here.
0:15:43 So cute. What’s the most interesting thing to happen in the world of AI news this week?
0:15:47 Okay, that’s enough. Okay. Yeah, that’s… We’re getting into politics. That’s plenty.
0:15:52 Wait, did you see the demeanor of the blob? Like, it’s like, it’s suffering on the inside. So I tried to
0:15:57 say, okay, so this is what happened today. I know you want that, but like, I’m tired today. So I’m just,
0:16:00 let me give the answers and go back to sleep. This is the vibe that I was giving.
0:16:03 Yeah. Oh, you again. Here we go.
0:16:04 What do you want to know?
0:16:10 I just wanted to show this off real quick. If I just like beat it with my mouse over and over
0:16:14 again, it eventually becomes Clippy. Oh, Clippy. Yeah. Welcome back, bro.
0:16:20 So yeah, that’s the new Mico. You can kind of talk to it. They made it playful and sort of fun,
0:16:26 and you can tell it to change its colors and stuff like that. But I think the more interesting stuff that
0:16:31 Microsoft rolled out is they actually rolled out memory this week. So you know how in ChatGPT,
0:16:35 we’ve had memory for a little while now, where it remembers all of your conversations and can
0:16:40 sort of reference past conversations you’ve had. They actually just rolled that out inside of
0:16:46 Copilot now. Like that hasn’t been a feature yet inside of Microsoft until today. The other thing
0:16:51 that’s interesting is you can do these collaborative chats now. So you can be in there talking to their
0:16:56 chatbot. You know, in my case, right, I might be planning the next YouTube video and having it do some
0:17:01 research for me on my next YouTube video. And then I can pull my producer from my YouTube channel,
0:17:07 Dave, in and say, hey, here’s the ideas AI came up with. What do you think? He could get in there,
0:17:12 start actually having a chat with the same AI conversation I was just having a chat with,
0:17:17 and keep that conversation going. And so now you can have like a three-way conversation that’s me,
0:17:22 another human, and Microsoft’s Copilot all sort of in the conversation.
0:17:26 Yeah. It’s wild that we’ve come to the point where this is actually happening.
0:17:32 Yeah. And the funny thing is like, you know, OpenAI released Atlas earlier this week. And then
0:17:39 today, Microsoft rolled out the ability to share your chats and bring other people in.
0:17:46 Yeah. Well, OpenAI responded this morning as well. So check this out. So this post is from
0:17:51 at the time of this recording, like an hour ago, right? But it says shared projects are expanding
0:17:56 to Free, Plus, and Pro users. Invite others to work together in ChatGPT using shared chats,
0:18:02 files, and instructions all in one place. So Microsoft rolls out Copilot with the ability to
0:18:08 bring other people into the conversation. And OpenAI went, oh yeah, well, we do that now too.
0:18:10 On the exact same day.
0:18:15 Yeah. I think the tech war, they’re kind of speed running the whole thing. And it’s like,
0:18:19 just butting heads right now. But if we can come back to Copilot, I think it’s more like
0:18:25 getting smarter across Windows and Edge. And a lot of people that use Windows are like,
0:18:29 use basically any Microsoft. They’re having the time of their lives, you know, because like,
0:18:33 it can summarize things, it could summarize your tabs, it can book hotels, it can fill out forms,
0:18:38 it can do many things. And if you want to like tell it, if you want to tell like Copilot,
0:18:43 like, can you fix my life? And it’s like, just organize your whole calendar. And that’s honestly,
0:18:48 it’s kind of refreshing to see, you know, like, you know, a company that talks about AI as something
0:18:52 that you should connect people together, not just making us stare at screens the whole time.
0:18:58 Yeah. They’re betting on like warmth and memory and like good vibes, like an AI assistant that remembers
0:19:02 your anniversary and your trauma at the same time. That’s what I want to see. Yeah. I mean,
0:19:08 OpenAI didn’t really lack it, but now that Windows has it also, it’s like, everyone’s just
0:19:11 like who has the better agent right now. Yeah.
0:19:19 Hey, if you take a look at my web presence online, it’s safe to say that I’m a bit AI obsessed.
0:19:24 I even have a podcast all about AI that you’re watching right now. I’ve gone down multiple
0:19:31 rabbit holes with AI and done countless hours of research on the newest AI tools every single week.
0:19:36 Well, I’ve done it again. And I just dropped my list of my favorite AI tools. I’ve done all the
0:19:40 research on what’s been working for me, my favorite use cases and more. So if you want to steal my
0:19:46 favorite tools and use them for yourself, now you can, you can get it at the link in the description
0:19:55 below. Now back to the show. Well, Microsoft has an upper hand too, in that they also run the operating
0:20:00 system that so many people run on as well. So right now we’re getting a lot of this AI in the browser,
0:20:05 but I mean, that’s the sort of next level for Microsoft. Windows 11 is sort of getting rolled
0:20:09 out into like every computer now. I think they’re deprecating Windows 10. So it’s going to be harder
0:20:14 and harder to actually use Windows 10 if you still want to, but Windows 11, they’re baking more and
0:20:19 more AI features into it. So a lot of like what we’re doing in our browser, theoretically, pretty
0:20:24 soon you’ll be able to do directly inside of Windows, like go to Windows and say, hey, my computer
0:20:28 monitor’s too dim. Can you brighten it up for me? And it just goes and adjusts the settings for you.
0:20:34 Or, Hey, I need a new desktop background. I want to use something from the Simpsons. Can
0:20:39 you find me a good desktop background? And it goes and searches the internet, finds a good image and
0:20:44 just replaces your desktop background for you. And everything you do with your computer, your operating
0:20:48 system, you’re just prompting it to do it instead of having to dig around in the settings anymore.
0:20:54 So I think that’s where we’ll see Windows go. Everything becomes promptable on the operating
0:20:54 system.
0:20:59 Can you imagine if that rolled out like 40, 50 years ago, like it would be
0:20:59 Yeah.
0:21:00 It would be witchcraft.
0:21:05 Yeah, I mean, if we had iPhones 40 or 50 years ago, that would be witchcraft.
0:21:12 So another story I want to talk about from the past week is that Claude Code is now inside your
0:21:17 browser. Now, I don’t know how much, you know, vibe coding type stuff you’ve done yourself.
0:21:17 Never.
0:21:22 But I’ve played around with a lot of the various AI coding tools. And for the most part,
0:21:27 I’ve found Claude Code to perform the best out of what’s out there. Claude Code seems to work really
0:21:31 well. The main problem with Claude Code is that you had to run it in a terminal.
0:21:37 So you either had to run it inside of your terminal on Mac or Windows, or you could run it inside one
0:21:43 of the terminals inside of an IDE. So like Visual Studio Code, Cursor, Windsurf, you know, tools like
0:21:48 that have a terminal built in, you’re able to fire up one of those tools, then get Claude Code
0:21:53 running, and then you’re dealing with the whole thing inside of a terminal. That part was a pain in
0:21:58 the butt and definitely scared a lot of people away from using Claude Code. Well, now Claude Code,
0:22:02 they just have like a browser interface where you can mess with Claude Code.
0:22:05 Yeah. Also, for the people that don’t know what an IDE is,
0:22:12 it’s basically like a developer’s all-in-one workspace. So think of it as like a Microsoft Word,
0:22:18 like a Google Doc and like a Finder combined, but for coding. So that’s for people that don’t know.
0:22:25 But yeah, as you said, Anthropic decided developers deserve fewer tabs, you know, and like more sanity.
0:22:30 So they did Claude Code and like it’s now running in the browser, no terminals, like no local setups,
0:22:37 no chaos. So you just describe what you want and it connects your repo and Claude starts coding like the
0:22:46 world’s calmest intern ever that’s just been given 7,000 cups of caffeine and stuff. So now it runs in
0:22:52 the cloud and it does real time debugging and ships a pull request and even tells you what to change.
0:22:56 And it’s basically like it’s GitHub Copilot, but with, you know, like a really good
0:23:03 attention span. And it’s not just like for hardcore devs. The fact that you can do this straight from
0:23:08 chat, it means that anyone could explain a bug in English, you know, in plain English and suddenly
0:23:12 has access to engineering power, you know, so you don’t have to have like a background in that.
0:23:20 You can just do it obviously. And it’s the kind of feature that quietly eats away at IDEs. Once you
0:23:27 can code from your browser, VS Code starts looking all 2019, you know? So for people that want to
0:23:32 kind of experience the world of coding, this is wild stuff. So yeah, everyone’s going to be happy about this.
0:23:38 Yeah, it’s a lot, a lot, a lot more user-friendly than what Claude Code was, right? You basically had
0:23:44 to go to a website and it gave you instructions of like other dependencies you needed to install on your
0:23:49 computer. So if you didn’t have Python installed on your computer, you had to install Python. If you
0:23:55 didn’t have like Node.js installed on your computer, you had to install Node.js, right? There’s all these
0:23:59 little like development elements that you had to install on your computer. So you had to follow these
0:24:05 like step-by-step instructions and do it all in the terminal. And if you miss one step or mistype
0:24:09 one thing, nothing seems to work. And then you want to pull your hair out because you can’t even get
0:24:14 into cloud code in the first place to start writing the code. Well, now they just simplified it, right?
0:24:19 We’ve got this. I have friends that do coding and like they’re engineers and stuff. And I look at them
0:24:25 as if they own the world, like they can do many things. And most of them are women, which is badass,
0:24:29 but that I can’t do that. And so, yeah, it is witchcraft. They should be persecuted,
0:24:34 but this is awesome. This is amazing. I think that it could like unlock a lot of people’s potentials.
0:24:38 People can do many things now and like kind of create their own apps and stuff. So.
0:24:42 And if you do anything with code, you’re probably familiar with GitHub. It’s like a place where,
0:24:47 as you’re writing code, you can sort of push your code there. And it really serves a few purposes.
0:24:51 One, it’s like a backup for your code. So if, you know, you go too far to your code and you’re like,
0:24:55 oh, you know what? I liked the version three versions ago better. Let’s roll it back to that.
0:25:00 You can actually just pull the old version off of GitHub and start from there. But it also is a
0:25:04 platform where if you write code that you want others to be able to access, you can share that code with
0:25:10 anybody. They can download it and use it. And so what’s really cool about Claude Code is it integrates
0:25:14 with GitHub just right out of the box. Like one of the first things it says is connect your GitHub
0:25:20 account. And so anything I’ve developed on GitHub in the past pops up here as something that I can
0:25:26 just pick up where I left off. So I made this AI jump game like two years ago. I can select that game,
0:25:30 give it prompts, and it will just pull the code from GitHub of the game that already exists.
0:25:33 And just let me start coding from that spot.
0:25:38 Yeah. You can basically continue what you’ve done before without having to do it from scratch.
0:25:40 Exactly. So a pretty useful upgrade.
0:25:43 Thank you, Anthropic. We love you, Anthropic. You’re the best.
0:25:48 And Anthropic makes the best coding models too. Claude Sonnet 4.5 and Opus 4.1
0:25:53 are pretty much considered the best AI coding models out there for the most part.
0:25:56 A hundred percent. Do you hear that, OpenAI? Or like,
0:25:59 is OpenAI not in the picture anymore? Like they should do something about it.
0:26:02 Yeah. I feel like they’ve kind of diverged in two separate directions.
0:26:05 For now. Yeah. For now. We’re going to go more in that consumer direction.
0:26:09 We’re going to build social media apps. We’re going to build browsers. We’re going to build that
0:26:15 easy user interface that anybody can chat. They’re going that consumer path where Anthropic,
0:26:20 they realize their models are the best for coders. Let’s just really go heavy down that path. Now,
0:26:25 I’m not saying they’re not thinking about consumers as well, right? They do have, you know,
0:26:32 the regular Claude website where you can use it just like ChatGPT. But I feel like their main focus is
0:26:38 let’s focus on coders. Like as far as the amount of use a large language model gets, you’ve got the
0:26:44 OpenAI models and you’ve got the Anthropic models. There’s graphs out there that show which model gets
0:26:51 the most usage and Anthropic blows OpenAI away. Like people are tapping into the Anthropic models way
0:26:56 more than the OpenAI models, but that’s just because it’s all through API. There’s so many companies
0:27:02 tapping into Claude’s API where OpenAI, most of their use is through ChatGPT through their front end user
0:27:10 interface. So like in terms of like profitability and getting the most use, Anthropic is actually in
0:27:14 quite a bit better position than OpenAI, but that’s just some inside baseball, like really nerdy talk.
0:27:20 But Anthropic, like, it gets so much more use than OpenAI because it’s so good at coding.
0:27:26 Yeah. Yeah. I sincerely, as I said, don’t like coding in any shape or form. It triggers my anxiety.
0:27:31 So I have so many things I want to do. So I’m going to probably play around with it just to see what it
0:27:36 comes up with. Yeah. And I love like the AI coding for just making little simple apps. I mean,
0:27:41 we’ve talked about this in many episodes in the past on this show. Yeah. This is a little bottleneck
0:27:47 I have in my business. I can write a quick little script that fixes this problem for me. And then I
0:27:52 never have that problem again, right? Like I’ve made a little Python script that will take any image file
0:27:57 and convert it to a JPEG. Exactly. Sounds really, really, really simple, but it’s just like a little
0:28:04 box that’s open on my window and I drag any image into it, or bulk images. I can drop like 20 images
0:28:08 in if I want, and it just converts all of the images to a JPEG. So if you get a WebP file,
0:28:14 if you get an AVIF file, if you get any of the random image files you get when you do like a search for an
0:28:18 image and then you download the image and you’re like, dang it, this is a vector file. It’s not a JPEG.
0:28:23 I can’t use it on my website or whatever. Right. I’ve made a little script where no matter what
0:28:27 image I get, drag and drop it in here, converts it to a JPEG that will look good in my videos,
0:28:30 look good on a website, et cetera. And it was just a little script that took me 10 minutes to build.
0:28:36 And I vibe coded it and I would never sell it. It’s really, really basic, but it solves a little
0:28:40 problem that I had in my life and made that little thing a little bit easier.
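A minimal sketch of that kind of "anything to JPEG" helper, assuming the Pillow library (the actual vibe-coded script isn't shown in the episode, so the function name and details here are illustrative):

```python
from pathlib import Path

from PIL import Image  # Pillow; AVIF support may additionally need the pillow-avif-plugin

def convert_to_jpeg(src: str, out_dir: str = ".") -> Path:
    """Re-encode any image Pillow can open (PNG, WebP, ...) as a JPEG."""
    img = Image.open(src)
    if img.mode in ("RGBA", "LA", "P"):
        # JPEG has no alpha channel, so flatten any transparency onto white.
        img = img.convert("RGBA")
        background = Image.new("RGB", img.size, (255, 255, 255))
        background.paste(img, mask=img.split()[-1])
        img = background
    else:
        img = img.convert("RGB")
    dest = Path(out_dir) / (Path(src).stem + ".jpg")
    img.save(dest, "JPEG", quality=90)
    return dest
```

The drag-and-drop box would just be a thin GUI layer on top (for example, a tkinter window with a drop target), and bulk conversion is a loop over the dropped paths.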
0:28:47 It makes it easier. It makes your life 10 times easier. That’s the whole idea of like why AI is
0:28:53 just, you know, wild to a lot of people. And like, I understand that people feel like it’s taking away
0:28:58 some of their jobs and stuff, but it’s, it’s actually very helpful if you know how to use it. And it’s not
0:29:02 just taking away some stuff. It’s probably creating more things to do. So, yeah.
0:29:09 Okay. Let me tell you about another podcast I know you’re going to love. It’s called Billion
0:29:15 Dollar Moves, hosted by Sarah Chen-Spellings. It’s brought to you by the HubSpot Podcast Network,
0:29:20 the audio destination for business professionals. Join venture capitalist and strategist, Sarah Chen-
0:29:26 Spellings, as she asks the hard questions and learns through the triumphs, failures, and hard lessons of
0:29:32 the creme de la creme. So you too can make billion dollar moves in venture, in business, and in life.
0:29:38 She just did a great episode called The Purpose Driven Power Player Behind Silicon Valley’s Quiet
0:29:42 Money with Mike Anders. Listen to Billion Dollar Moves wherever you get your podcasts.
0:29:50 Yeah. Yeah. I know a lot of coders are really worried, but I think right now is like probably
0:29:55 the best time in history to be a coder because like, you know what you’re doing with code,
0:29:59 but you also have AI to assist you, and AI can assist you and make things a lot faster.
0:30:02 And you can create like thousands and thousands of things. Yeah.
0:30:06 But then when AI screws something up and it’s like, okay, I’m running into this bug that I can’t figure
0:30:10 out how to get around. Most vibe coders like me never figure out how to get around it. But if you
0:30:14 actually know how to code, you’re like one of the few people that goes, okay, I ran into this issue.
0:30:19 I’m not going to vibe code the solution because I know how to fix the thing myself.
0:30:22 Right. So I feel like right now is actually a really good time to be a coder,
0:30:28 like that person that can jump between using AI to help code. But then when they run into a roadblock,
0:30:31 jump in and just manually fix the thing that AI messed up.
0:30:37 I’ve seen people debugging before. If it was me in their place, I would have set my laptop on fire.
0:30:41 I would never in my life put myself in that position. There’s no amount of coffee in this
0:30:47 world that would make me kind of sustain my calmness when it comes to that. So yeah, this is helpful.
0:30:50 Absolutely. You mentioned the next topic that we’re going to dive into.
0:30:56 There was this article circulating: Amazon plans to replace more than half a million jobs with robots.
0:31:02 Internal documents show the company that changed how people shop has a far-reaching plan to automate 75%
0:31:10 of its operations. Yeah. Yeah. So this is pretty much the thing about AI that most people are most
0:31:15 scared about, right? Is AI taking all of our jobs, right? Yeah. Understandable. You know,
0:31:20 like we’re not saying that people shouldn’t be anxious about stuff like that. When it comes to
0:31:25 evolution and revolutions in general, this kind of stuff happens all the time. And now because of
0:31:30 technology and how everything is like more convenient, it’s happening faster and faster than we anticipated.
0:31:35 But when it comes to this specifically, it doesn’t mean like mass layoffs. It just means that instead
0:31:41 of hiring another half million people, Amazon is planning to lean on its robot workforce to
0:31:48 keep up with demand, which translates to, like, more orders, fewer coffee breaks. So the company is
0:31:55 already testing new warehouse systems that can pack and like sort and move products faster than ever,
0:32:01 saving about like 30 cents per item, I think, and potentially $12 billion in costs by 2027.
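(As a back-of-envelope sanity check, the two figures quoted here imply roughly 40 billion items handled over the period. This is illustrative arithmetic on the cited numbers, not Amazon's own math:)

```python
# Figures as cited in the discussion; the division is just a sanity check.
savings_per_item = 0.30             # dollars saved per item
projected_savings = 12_000_000_000  # dollars in savings by 2027
items_implied = projected_savings / savings_per_item
print(f"{items_implied:,.0f} items")  # → 40,000,000,000 items
```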
0:32:08 So that’s not obviously surprising. Robots don’t need overtime pay or like motivational posters.
0:32:08 Right.
0:32:10 They just need, you know, charging docks.
0:32:16 Also, to be fair, Amazon says that these numbers come from internal teams rather than like the whole
0:32:22 company. And they’re still hiring like thousands and thousands of people for the holidays. So what’s
0:32:28 interesting here is that there’s a language shift. So they’re using softer terms like advanced technology
0:32:34 and like cobots instead of automation, which, let’s be honest, sounds way cuter than robotic
0:32:41 efficiency initiative. But this really shows how the next wave of automation will work: collaborating instead of
0:32:48 replacing. So robots handle the repetitive stuff, while humans, you know, handle the creative or
0:32:55 complex parts. And together, it’s like a completely different kind of workplace. And if Amazon can pull
0:33:01 that off at scale, it sets the blueprint, obviously, for how AI and robotics kind of like
0:33:06 quietly blend together across industries and, like, everywhere. Not, you know, not a takeover,
0:33:08 but sort of an upgrade. That’s my take.
0:33:14 Yeah. This is a little bit clickbaity to me to say Amazon plans to replace more than half a million jobs
0:33:19 with robots. Because if we read this, it says Amazon’s automation team expects the company can avoid
0:33:25 hiring more than 160,000 people in the United States it would otherwise need by 2027.
0:33:29 And then down here, it said executives told Amazon’s board last year that they hoped robotic automation
0:33:34 would allow the company to continue to avoid adding to its US workforce in the coming years,
0:33:39 even though they expect to sell twice as many products that would translate to more than 600,000
0:33:46 people whom Amazon didn’t need to hire. Yet, if you read this headline, it sounds like Amazon is dumping
0:33:52 500,000 people. But when you actually read the article, it’s like, no, they’re building automation,
0:33:55 so they don’t need to hire as many people as they used to need to hire.
0:34:01 Honestly, it’s just the way that journalism works these days. And not just, you know, to promote us,
0:34:07 but with Mindstream, we never put out these headlines where people should
0:34:11 be scared of what’s going to happen tomorrow, because no one should be fear-mongering anything.
0:34:17 We’re trying to make people like AI and like work with AI rather than make them have a heart attack
0:34:22 because someone woke up and like decided to write something so weird like this, you know?
0:34:23 Yeah.
0:34:27 We need to lessen the triggering and make people feel more at ease when it comes to artificial
0:34:28 intelligence, in my opinion.
0:34:34 Yeah. And it’s just sort of like a truth, too, that so many people out there just read headlines
0:34:39 and don’t read articles, right? They read the headline and infer what the rest of the article
0:34:45 is going to be about based on the headline. And then they’re, you know, out in the world talking to
0:34:50 their friend going, hey, did you hear Amazon’s going to lay off 500,000 people because of AI?
0:34:54 Not everyone’s going to click and like read the article, even if it’s clickbaity. The headline
0:34:59 itself could be enough for them to kind of like form an opinion about the whole thing. So people
0:35:02 should be careful with the words that they’re using these days. Yeah. In my opinion.
0:35:06 Yeah. And I mean, it’s so easy for somebody to just like take that headline, screenshot it,
0:35:11 post it on Twitter, and then it goes viral on Twitter. And all anybody knows is the context
0:35:14 of the headline and nothing else, you know? Exactly. Yeah.
0:35:17 It’s a sort of shady approach, but we know the truth.
0:35:20 We do. We know that, yeah, they’re not replacing people.
0:35:24 Yeah. All right. So there’s a handful of other things. We could kind of do more of like a rapid
0:35:29 fire sort of like just quick thoughts on each one of them. Here’s something that just came out the day
0:35:37 that we’re recording this. It is a new AI video model. There’s this new model that came out by LTX
0:35:44 called LTX-2. And here’s what they say about it in their sort of launch tweet: synchronized audio and
0:35:49 video generation. So it can do what Veo 3 and Sora do, where it actually generates with the
0:35:55 audio. Oh, wow. This is wild. Yeah. Native 4k fidelity up to 50 frames per second. And one thing
0:36:01 is it’s not actually generating in like 720p and then upgrading to 4k like some of the past video models
0:36:06 do. It’s actually generating natively in 4k. Wow. I know that because I’ve talked to the people
0:36:13 over at LTX. API-first design, runs efficiently on consumer GPUs. Companies do use the words consumer
0:36:19 GPUs very freely, liberally, because a lot of times they’ll say it works on consumer GPUs. Okay,
0:36:24 well, which one? Well, if you have an NVIDIA 5090, it’ll work. Oh, okay. So if I spend three
0:36:29 grand on a GPU, I can make it work. Is that really consumer? It’s not exactly consumer-friendly. I guess
0:36:33 technically it’s a consumer GPU. It’s just the highest-end consumer GPU you can buy.
0:36:38 Yeah. But the results are great. Like I’m looking at the screen right now. It’s really, really good.
0:36:44 Like, can you imagine what people could create with this? Yeah. I saw some other demos side by side,
0:36:49 and it seems to be really good at physics, like even better than like Veo was at some physics,
0:36:54 like doing things like flips and things like that. It’s really interesting to me because we’re moving
0:37:01 at this pace now where we get these state of the art models that just blow people’s minds. Veo 3.1,
0:37:07 Sora 2, things like that. And then like two weeks later, an open source equivalent, that’s pretty much
0:37:13 just as good rolls out. So it’s like, even those big state of the art models aren’t safe. There’s like
0:37:18 always an open source version following closely behind. Yeah. I’ve been seeing what people are
0:37:23 coming up with. I know you’re not on TikTok. I am chronically online, but like I’ve been seeing what
0:37:32 these people have been creating with all of these. And it’s just wild. Like someone created like a 1970s
0:37:39 fantasy anime, as if it was drawn by an artist in the seventies. So
0:37:43 it’s wild, it’s wild what people can create with these. It’s so, so cool.
0:37:49 Yeah. It’s getting really crazy what you could do with video. And the one thing that’s a little scary
0:37:55 about stuff like this and having them be open source, open weight models is, you know, with companies like
0:38:00 Google and OpenAI, right? They’re going to have some guardrails, right? They’re going to make it so
0:38:04 you can’t generate certain things. You can’t deep fake stuff. You can’t create,
0:38:09 you know, fake news that makes it look like a war just broke out in a place that it didn’t actually
0:38:14 just break out in. Yeah. They’re going to do what they can to safeguard against that stuff. These open
0:38:19 models. They don’t have guardrails, do they? I mean, even if they do have guardrails because they’re
0:38:23 open, people can figure out how to rip the guardrails off pretty easily. You know, like I think,
0:38:29 I think a lot of these companies do sort of bake guardrails in when they first train the models and
0:38:33 do a little bit of the fine tuning. But as soon as you put open source models into the hands of the
0:38:37 public, the public’s going to figure out how to break them as quick as they possibly can.
0:38:41 Sure. People are pretty smart, aren’t they? Yeah. I mean, it is dangerous at the same time. It’s like,
0:38:46 you know, you wake up to a war that broke out in one country, but it didn’t really break out in that
0:38:50 country. It’s just like been made by some 15 year old somewhere, you know, because they
0:38:54 found out how to use this. Yeah. So yeah, it is dangerous.
0:38:58 A hundred percent. So these companies need to have a bit of responsibility when it comes to
0:39:02 this kind of stuff, because you don’t want it to end up in the wrong hands, honestly. So yeah.
0:39:06 Well, and I mean, once it’s open source too, the genie’s out of the bottle, the toothpaste is out of
0:39:10 the tube or whatever saying you want to run with. Toothpaste out of the tube. We’re going to use that.
0:39:15 Yeah. You can’t put it back in. And like, even if the company put this up on their GitHub and said,
0:39:21 go ahead and download it and work with it and then deleted their GitHub. Well, somebody still got it
0:39:27 somewhere, right? It’s still out there. It is there. So yeah. So yeah. Super, super interesting
0:39:32 to kind of pay attention with what’s going on. I don’t do a lot of stuff with open source, but it’s
0:39:38 always super impressive to me how quickly open source follows behind frontier-lab, state-of-the-art
0:39:44 models. It is wild. Yeah. So the next thing I want to talk about here is this DeepSeek OCR. We don’t
0:39:47 have to spend too much time on it because it’s kind of getting a little more into the weeds,
0:39:53 a little bit more on the technical side, but the company out of China, DeepSeek, they basically
0:40:02 figured out this new like training method where they can essentially ingest a lot more text into
0:40:08 the training model by actually putting all of the text into an image and then having it train on the
0:40:14 images instead of the text itself, which is just like a really wild sort of like outside-the-box idea.
0:40:19 How did they even come up with this concept for training? But supposedly it makes it a lot more
0:40:26 efficient. It is. Yeah. And they figured out like a way to make AI read text as images and do it
0:40:32 like up to 10 times more efficiently than standard text tokens. So instead of like breaking words down
0:40:37 into thousands of little pieces, like other models do, because that’s what they do, DeepSeek literally
0:40:42 like looks at the page and like kind of scans it and compresses it, or in other words, squishes it,
0:40:47 so that visually it somehow keeps almost perfect accuracy. And they’re calling it like a
0:40:54 paradigm inversion, which means that they flip the script. So text used to be more efficient than
0:40:58 vision for models. Now it might be the other way around. Yeah. I mean, it’ll be interesting to see if
0:41:03 some of the other companies pick up this idea and run with it. Will we see, you know, DeepMind,
0:41:10 OpenAI, Anthropic, some of these other companies testing stuff like this? Or are they going to stick with
0:41:17 let’s jam as much, you know, text content in as possible? Yeah. I have no idea whether they will experiment.
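(The efficiency claim is easy to sketch with rough numbers. Everything below is an illustrative assumption: roughly 4 characters per text token, which is a common rule of thumb, with DeepSeek-OCR's reported ~10x compression applied flatly:)

```python
def text_tokens(n_chars: int, chars_per_token: float = 4.0) -> int:
    # Conventional tokenization: ~4 characters per token as a rough estimate.
    return round(n_chars / chars_per_token)

def vision_tokens(n_chars: int, compression: float = 10.0,
                  chars_per_token: float = 4.0) -> int:
    # If rendering text as an image packs ~10x more text per token,
    # the same characters cost ~1/10th the tokens.
    return round(n_chars / (chars_per_token * compression))

doc_chars = 500_000  # a ~100,000-word document, hypothetical
print(text_tokens(doc_chars))    # → 125000 text tokens
print(vision_tokens(doc_chars))  # → 12500 vision tokens
```

So a document that would fill a 125,000-token text context could, under these assumptions, fit in a tenth of that as rendered pages.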
0:41:22 Can you imagine? 10 million tokens worth of data into one model context window. And like,
0:41:26 that feels like having an AI running your company off of the entire Dropbox of your
0:41:31 company. Yeah. I guess the other question I have around something like this is, does it work from
0:41:37 the consumer direction? So let’s say I go to OpenAI and OpenAI has a, like one of the chat models.
0:41:44 And let’s say it’s a model that has 125,000 token context limit, right? So I can, I can only put in,
0:41:52 let’s say up to 90,000 words of text into this model before I run out of context window for it to
0:41:57 actually understand what I’m putting in. What if I took a hundred thousand words of something,
0:42:04 put them all into images instead, and then fed that to OpenAI or Anthropic or Google, or one of these tools,
0:42:11 does this same method work that other direction? Can I actually shove more context into a prompt
0:42:16 by giving it images with text instead? That’s something that I think could be fascinating to
0:42:21 figure out because then what we might actually see is we’re still typing or copying and pasting text
0:42:28 into a prompt box, but then behind the scenes, these model creators might convert that to images for us
0:42:33 in order for us to squeeze more context out of the models. Does that make sense? I don’t know if I’m
0:42:38 getting too in the weeds or not. No, no, no, it does. It does. Can you imagine like now that they’re all
0:42:42 images, it could be the other way around. Yeah. Yeah. So maybe, maybe it’ll allow us to squeeze
0:42:48 more context out from the user side and actually be able to push more text into the model. But yeah,
0:42:53 we’ll have to see how that plays out. This is very, very new sort of technology and research. So we’ll
0:43:00 have to see, I don’t know. I’m curious to see if any of the other frontier labs look at this technology
0:43:05 and decide to play with it, or if they’re going to keep going with the methods they’ve been going with.
0:43:11 I think it’s like a big hint that the future of language models might not be, you know, language
0:43:17 based at all anymore. Yeah. A couple of last quick, sort of like rapid fire topics here over the last
0:43:24 week, YouTube added in AI likeness detection. So on YouTube, we’ve always been able to like claim
0:43:30 copyright on videos. If somebody just like flat out takes your video and post it on their channel or,
0:43:35 you know, takes music that you created and post it on, on their channel, things like that.
0:43:41 But now they’re making it where if somebody actually uses an AI generation of your voice in their video
0:43:46 to make it sound like you, or if they use AI video, like one of these avatar tools, like HeyGen or
0:43:52 something like that to generate a video of you saying something you didn’t say, you can actually now
0:43:58 strike those down as well. And I was telling you before we hit record, I actually saw this feature,
0:44:03 double checked my account, saw a bunch of people actually using my likeness in their videos and
0:44:04 copyright claimed them all.
0:44:11 I mean, I think it would put people’s minds at ease. A lot of creators will feel
0:44:16 better when it comes to this kind of stuff. They’re trying to say, we’ve got your face
0:44:21 covered, you know? So I think it makes people feel, honestly, it’s like an apology, you know,
0:44:26 like for everything they’ve done. So yeah, it’s awesome. I like it. I like that they’re going down
0:44:30 this route, because now you can create stuff, but you don’t have to use someone else’s work in order
0:44:30 for you to do that.
0:44:36 Yeah. Yeah. I like that a lot. It is, it is kind of a funny dichotomy just in my scenario,
0:44:41 right? Where I make videos about here’s how you use all these cool AI tools and how you can do things
0:44:47 like clone avatars and stuff. And then people are using my teachings against me. And then I’ve got to
0:44:50 go and copyright strike them. Very interesting dichotomy to be in.
0:44:57 Yeah. They did warn YouTube creators that it might even flag their own videos by mistake. So can you
0:45:00 imagine that happening? Like, you know, but this is my work. What am I, you know,
0:45:02 people are going to probably lose their minds.
0:45:07 For sure. There’s a couple other news stories that you mentioned that I’m actually not super
0:45:10 familiar with. So I’ll let you, you sort of take lead on these two, but you mentioned something
0:45:14 about like an MIT fashion thing and then Yelp’s new AI.
0:45:20 Yeah. MIT built like an AI tool that is called Refashion, which lets you remix your clothes,
0:45:24 like an actual DJ set, which is so funny. It’s like, you can rebuild everything like Legos.
0:45:29 So you can turn your trousers into a mini dress or like your hoodie into like maternity wear,
0:45:36 like whatever. Basically it’s like a, like a Depop meets like CAD software. So for the fashion
0:45:41 girlies, and not just for the fashion girlies, but like anyone that likes fashion, because who has
0:45:47 the time for like overconsumption when you can literally just Ctrl+Z your whole outfit?
0:45:53 So this is wild. I like MIT. It’s not Zara yet, by the way, but it’s on its way to be that.
0:45:58 So it’s also pushing for sustainability and like making sure all the clothing doesn’t end up in like
0:46:02 landfill. So I’m very happy about this. Yeah. Do you know what it’s called? Refashion. That’s what
0:46:08 it’s called. Refashion. Okay. By MIT. MIT and Adobe, I think. Gotcha. Okay. New software designs,
0:46:14 eco-friendly clothing. That’s the one. Yeah. All right. So to reduce waste, the refashion program
0:46:20 helps users create outlines for adaptable clothing, such as pants that can be reconfigured into a dress.
0:46:25 Each component of these pieces can be replaced, rearranged, or restyled. It’s awesome.
0:46:30 Cool. This is actually one that I haven’t come across yet. It’s wild. It’s so good. It changes
0:46:35 the face of fashion, in my opinion. It’s like a recycling kind of thing. Is it demoable right now
0:46:41 somewhere? No, I don’t think so. It’s just like the MIT article. Yeah. It’s probably just like internal
0:46:47 at MIT for right now. And then once they feel like it’s ready for prime time, put out some demo that we can
0:46:51 use. The last thing that we were going to chat about, I actually came across the news, but I only
0:46:56 read the headline. I’m that guy right now. I only read the headline, didn’t read the article, but there’s
0:47:02 some sort of new like Yelp AI. Yeah. So Yelp just launched like an AI receptionist and like an AI
0:47:08 host at the same time, because apparently no one’s actually answering phones anymore. So for restaurants,
0:47:13 there’s like a Yelp host, which is an AI that answers calls and like takes reservations and manages
0:47:19 tables and gives wait times and sends you the menu. And like, you know, probably knows if your date’s
0:47:25 allergic to gluten. It even texts you like follow-ups, like your table’s ready, bestie,
0:47:29 you know, that kind of stuff. So, but there’s also something called Yelp receptionist, which is
0:47:35 basically the same exact thing, but for businesses, I guess. So it books appointments and collects customer
0:47:42 info and screens leads, 24/7. So all of this for $99 a month, which is
0:47:47 cheaper than a real receptionist. And it doesn’t really need lunch breaks and, you know, that
0:47:52 kind of stuff. So the $99 a month, that’s paid for by like the restaurant, right? Yeah. Or like
0:47:58 whatever company is using it. I’m assuming Yelp is still free to use for the actual consumer side.
0:48:03 I don’t think people use Yelp in the UK, but I think in the US it’s insane. It’s like very
0:48:08 helpful for a lot of people. So also the wild part about this is that Google’s working on AI that
0:48:14 calls businesses for you. So we’re getting close to AIs just calling each
0:48:20 other now, which is going to be very funny for me to see. So. Yeah. Yeah. It’s like you Google a
0:48:25 restaurant and you tell AI to call the restaurant to find out if there’s reservations and then the
0:48:33 restaurant answers, but it’s an AI bot answering. So it’s like my Google’s AI reaching out to this
0:48:39 restaurant’s AI. They’re going to probably like tell each other that they’re doing amazing and
0:48:43 they’re supercharging each other. This is going to be a conversation. It’s going to be insane. I would
0:48:47 love to be a fly on that wall when that happens, like just to see how that works.
0:48:54 I mean, we’re getting there already kind of with things like email and whatnot anyway. Right.
0:48:59 It’s like, I don’t know if you’ve seen those, those memes where it’s like, I wrote this one
0:49:04 sentence and had AI turn it into a giant paragraph for me. And then they send the email. And the person
0:49:09 that received the email is like, this person sent me this giant paragraph of email. I had AI summarize
0:49:13 it down into one sentence for me. Right. So it’s like the original person wrote one sentence.
0:49:17 The person that received it summarized it back down to one sentence. Can you imagine it would
0:49:23 be summarized into one word eventually? Yeah. Like it’s hilarious. Cause like, we’re going to see
0:49:28 more and more of that kind of stuff where it’s just like the AI is communicating with each other on
0:49:34 our behalf. And then eventually the AI is just like a middleman anyway. Yeah. Yeah. It’s going to be
0:49:38 like the meme of like the Spider-Man. I don’t know if you’ve seen that meme of like three Spider-Mans
0:49:43 just pointing at each other. That’s what it’s going to be like. Yeah. Yeah. Yeah. Exactly. That’s going to be
0:49:46 pretty hilarious. I know that’s already happening in like schools, right? Like
0:49:50 students are using ChatGPT to help them write their homework. And then teachers are actually,
0:49:54 there’s a whole South Park episode about this. Yeah. The teachers are actually going and like
0:50:00 using ChatGPT to grade the homework. So like the student is no longer even seeing the homework
0:50:05 cause ChatGPT is doing it. And the teacher’s no longer even seeing like what the students are doing
0:50:10 cause ChatGPT is doing it. And that’s funny. These are wild times. So I just hope that schools
0:50:15 find more ways to make sure that the youth is learning something.
0:50:22 Yeah. Well, I mean, my hope, my hope is that tools like ChatGPT and Notebook LM and all of these
0:50:27 various AI tools that are out there make students more resourceful. They feel like they have more
0:50:33 access to more information, more access to more knowledge, and they use that to actually help them,
0:50:38 you know, and the sort of opposite end of the spectrum, this side that I hope it doesn’t go
0:50:44 down is the path of everything just becomes AI generated brain rot. Yeah. I mean, I need something
0:50:49 to make sure that the youth kind of like has ways of getting more creative rather than just recycled
0:50:54 content. Cause to be fair, like we’re not going to beat around the bush. It is recycled content.
0:50:59 So like we need like more things to be pushed so that the students are learning something. Yeah.
0:51:03 Yeah. This is getting probably off the rails a little too much, but like, I feel like that’s
0:51:09 where things like world models kind of solve a lot of that kind of stuff because if AI sort of
0:51:14 understands the world around it, as opposed to just what was injected into the initial training data,
0:51:19 that gets us to a point where AI is starting to come up with novel solutions because it has an
0:51:24 understanding of the world, not just an understanding of what it was trained on sort of that future
0:51:30 excites me as well. But how far off are we from these world models actually being helpful? I
0:51:36 don’t know. We might still be a ways off. Yeah. I mean, besides that, I just need AI to cure like big,
0:51:41 big problems, like cure cancer and stuff. Also automate my house. That would be also very helpful.
0:51:45 But yeah, it’s the same level: automate my house and, like, cure cancer. Yeah. Yeah.
0:51:49 Cure cancer, solve climate change and do my dishes and laundry for me. Please.
0:51:56 Exactly. Exactly. Well, this has been awesome. I mean, obviously a lot has happened over the last
0:52:01 week since we chatted last time. Yeah. I would love to hear from the audience what people think about
0:52:07 these episodes, these sort of news recap and riff episodes. Again, I don’t really do them as much on my
0:52:13 main YouTube channel. So if you like more of this style and you want to hear more of our musings and
0:52:17 thoughts on what’s going on in the world of AI news, let us know in the comments. Yeah. We’ll
0:52:20 definitely keep doing more of them because they’re such a blast for us to do. I’m having so much fun.
0:52:24 Yeah. Amazing. Well, so make sure you check out the Mindstream newsletter. That’s the newsletter that
0:52:31 Maria writes as the head writer over there, keeping you looped in with all the latest AI news.
0:52:36 So check that out if you haven’t already and make sure you’re subscribed. So whether you’re listening on
0:52:41 Spotify, YouTube, Apple, we’re available everywhere you listen to podcasts. So make sure you’re
0:52:45 subscribed if you’re not already. And thank you so much for tuning in. Yeah. Thanks guys.
Get Matt’s AI Tools Playbook (free): https://clickhubspot.com/dgb
Episode 82: What’s behind the latest wave of AI-powered browser upgrades—and how will it reshape the way we work, shop, and code? Matt Wolfe (https://x.com/mreflow) and Maria Gharib (https://uk.linkedin.com/in/maria-gharib-091779b9), head writer of the Mindstream newsletter and AI news expert, dig into every major browser update you might have missed, from OpenAI Atlas and Microsoft Edge to Claude Code and even the newest open-source video models.
This episode is your rapid-fire refresher on all things AI browsers. Matt and Maria break down the surprise launch of OpenAI’s Atlas browser, its agent mode and workflow automation, compare it with Microsoft Edge’s Mico assistant and new Copilot features, plus unpack how Claude Code and GitHub integration make coding easier than ever for beginners and devs alike. They riff on the future of promptable operating systems, the ethics of browser memory, open source video models, Amazon’s approach to automation, groundbreaking image-based AI training, and more.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) AI News Roundup Replacement
- (05:31) Voice-Powered Shopping Innovation
- (06:53) AI Monetization Strategies
- (10:34) ChatGPT Atlas Updates Announced
- (15:47) Microsoft Copilot Adds Memory
- (19:17) Windows 11 Integrates Advanced AI
- (22:10) Cloud-Based Coding Revolution
- (24:09) GitHub Integration for Code Collaboration
- (27:24) Image Converter Script
- (31:56) Amazon Automation Reduces Hiring Needs
- (35:29) Open Source Rivals AI Advancements
- (38:27) DeepSeek’s Innovative OCR Method
- (40:15) Expanding AI Context with Images
- (45:46) Yelp Launches AI Host & Receptionist
- (47:37) AI Email Expansion and Compression
- (49:47) AI and World Model Potential
—
Mentions:
- Maria Gharib: https://www.mindstream.news/authors
- Mindstream AI newsletter: https://www.mindstream.news/
- OpenAI Atlas: https://chatgpt.com/atlas
- Microsoft Edge: https://www.microsoft.com/en-us/edge/?form=MA13FJ
- Claude: https://claude.ai/
- GitHub: https://github.com/
- Dia Browser: https://www.diabrowser.com/
- Perplexity: https://www.perplexity.ai/
- Google Veo: https://gemini.google/overview/video-generation/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
