Author: The Next Wave – AI and The Future of Technology

  • 6 AI Workflows You Can Steal For Your Business in 2024

    AI transcript
    I think most people don’t realize
    like how much better perplexity is than Google.
    – You can really, really rapidly do a lot of research
    and get a lot of ideas super, super quick.
    And to me, that’s really powerful,
    but also I hope nobody goes and takes this idea
    and runs with it.
    – Tons of people will.
    – Tons of people will.
    (upbeat music)
    – When all your marketing team does is put out fires,
    they burn out.
    But with HubSpot, they can achieve their best results
    without the stress.
    Tap into HubSpot’s collection of AI tools,
Breeze, to pinpoint leads, capture attention,
    and access all your data in one place.
    Keep your marketers cool
    and your campaign results hotter than ever.
    Visit hubspot.com/marketers to learn more.
    (upbeat music)
    – Hey, welcome to the Next Wave Podcast.
    I’m Matt Wolf.
    I’m here with Nathan Lanz.
    And, you know, everybody’s talking about AI.
    AI is everywhere.
    It’s all over the news.
    Every big company is putting AI into everything,
    but we keep hearing the same question.
    What the heck do I actually use this for?
    How is this benefiting me?
    How is this gonna make my company better
    or my daily life better?
    Well, that’s what we’re gonna talk about
    in this episode today.
    Nathan and I, we’re going to share with you
    some of the ways that we’re actually using AI
    in our own businesses.
    So that’s the goal of this episode,
    is to break down the ways
    that maybe you’re not even thinking about
    that AI could really, really benefit your life.
    I think there could be some overlap here
    between what you’re gonna say and what I’m gonna say,
    but there’ll be some slight nuances
    to the way I do it versus the way you do it.
    You wanna kick it off?
    Like what’s like the first actually actionable use case
    for AI that we can talk about here?
– Yes, I saw this tweet from Balaji,
which I thought was really interesting
’cause for a long time I’ve been a huge fan of Perplexity.
    You know, I’ve started using it for a lot of the research
    I do instead of doing a Google search.
I found that often I get better answers
on Perplexity actually.
    And I’ve started using it for the podcast research,
    especially like for interviewing a guest.
    You type in the guest’s name
    and like the information you get from that
    is so much better than just doing a Google search.
    You get videos they’ve been in,
you can ask it follow-up questions,
like, you know, what kind of stuff
did they talk about on recent podcasts or whatever?
    It’s great.
– If people want to learn more about Perplexity,
we actually had Aravind, the CEO
and founder of Perplexity, on the show.
So make sure you check out that episode.
But what Perplexity is,
is it’s essentially a large language model
like ChatGPT or like Claude
    or whatever large language model you’re used to using.
    But it also searches the internet for you
    whenever you ask it any sort of questions
    and then uses whatever it finds
    as part of the context for the response.
So you could ask it questions
just like you would ask ChatGPT.
And when it’s formulating that response,
    it’s always going to do like a search on the internet
    to try to make sure it’s giving you the most
    sort of informed response that it possibly can.
    And once you start using it,
    often it’s kind of hard to go back to regular Google.
    – Right.
And so I saw this tweet from Balaji, and I was like, genius,
like, yeah, you can actually just change it, right?
You can go into Google Chrome or whatever browser you use
and you can change your default to be Perplexity.
    And then it works.
    – So Google made that like an option in the settings
    where you can just…
    It’s like, it’s slightly hidden.
    It’s like, it’s not as easy to find.
– But you don’t need any sort of extra
Chrome extension or anything like that.
    You just go straight into the Chrome settings
    and then do a search for it.
    And then you can make that your primary search engine.
So now, like, whenever you type anything
into the URL bar at the top,
it’s gonna search Perplexity instead of Google, right?
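For listeners who want to try this, the Chrome setting being described looks roughly like the entry below. The query URL is an assumption based on Perplexity’s public search URL, and %s is Chrome’s placeholder for whatever you type in the address bar.

```
Chrome → Settings → Search engine → Manage search engines and site search → Add
  Name:      Perplexity
  Shortcut:  perplexity
  URL:       https://www.perplexity.ai/search?q=%s
```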
    – Yeah, for example,
    if I type in the next wave podcast right here, right?
    And so now it’s like pulling up all of our episodes.
    It’s giving a synopsis of what the podcast is about.
    It’s even like sharing some of the feedback
    from listeners, which is wild, related topics.
And yeah, and then you can even ask follow-up questions, right?
    You could ask about a certain episode,
    what happened on this episode
    or what were the key takeaways and it has all that.
– So I’m curious, what are some of the ways
that you’re using Perplexity that have, I guess,
made life easier, made business easier,
made daily productivity easier?
Like what are you actually using Perplexity for?
    – I mean, mainly like researching companies,
    research for the podcast, but also for meetings.
Like if I’m meeting somebody and I don’t know much about them,
before, I would go look at LinkedIn,
but it’s like kind of, you know, so much information.
And a lot of it’s really kind of…
– Well, and Perplexity is actually gonna pull in stuff
    from LinkedIn a lot of times as well.
    – Right, right.
    And then you can ask, you can ask follow up questions.
    It was just way better ’cause you wanna actually talk to it
    and have like a thing where like, oh, where did they go to,
    you know, what company did they work at?
    What were they actually responsible for?
    I just liked that experience better
    than looking at someone’s LinkedIn.
    And also if they were on videos or whatever,
    you get so much more context about a person for a meeting.
    So I think that’s like the main use case.
    I’m sure there’s tons of others
    for people who are doing other kinds of research,
maybe market research, other things
that Perplexity is great for.
    But like I said, I had a really hard time
    developing the habit.
    I’m just so used to going to Google or LinkedIn.
    And I’m finding that this is definitely helping me
    to develop the habit of like, oh,
    just use perplexity for that stuff.
    – For sure, for sure.
    Well, I’ll share my first one ’cause it’s super related.
It’s related to Perplexity as well.
I was gonna talk about how I actually do guest research
for the podcast using Perplexity.
    So basically I plugged in myself
    as if I was interviewing myself today
    just ’cause I was curious what it would bring up.
    But this is the type of thing I’ll do
    before every single podcast.
    I’ll give it a prompt like, I’m interviewing Matt Wolf,
    the YouTuber and creator of Future Tools
    on my podcast today.
    What should I ask him to ensure
    an engaging educational and entertaining episode?
    So that’s the prompt that I like to give.
    I always like to give extra context
    just in case like somebody else has that same name.
    Like there’s a professional golfer named Matt Wolf.
    And so if I just say like, what should I ask Matt Wolf?
    It’ll be like, how do I drive the ball farther
    or whatever, right?
    So I wanted to make sure I gave that additional context
    of the YouTuber and the creator of Future Tools
    in my prompt.
    So I know it’s gonna pull up the right Matt Wolf.
    But then I also don’t want it to just ask
    like boring questions about the company, right?
    Like let’s say I plugged in somebody
    that works at HubSpot or something like that.
    I don’t want it to give me questions like,
    what are HubSpot’s new initiatives for 2025?
    You know, like I don’t want it to get
    like boring corporate questions.
    I want them to be engaging, educational and entertaining.
    So very, very specific in the way I prompt it.
But then like you can see the steps
that Perplexity went through.
    Research background information about Matt Wolf
    and his YouTube channel.
    Identify key topics and areas of expertise for Matt Wolf
    that would make for an engaging and educational podcast.
    Brainstorm a list of potential interview questions
    that would cover the key topics
    and provide an entertaining and informative episode.
So you can actually see the way Perplexity
    is sort of thinking through this.
    It’s almost borderline agentic, right?
    Where it does this one search and it goes,
    okay, we’ve got this information now.
    Now use this search and it pulls up more information.
    And you can see here up at the top,
Matt Wolf, creator of Future Tools,
current job founder of Future Tools,
YouTube host and podcaster, where I live,
my education. Apparently it pulled all this in from LinkedIn
and has like a little bio of me right there.
    Over on the right sidebar,
    you can see a whole bunch of content
    that I’ve been involved in with the ability to watch it.
    And then it brings up some questions.
    You know, what sparked your interest in AI?
    Can you describe your journey
    from being an entrepreneur to becoming a YouTuber?
    How do you stay organized and manage your time effectively?
    And it broke this all down into sections.
    And I’m telling you, like sometimes,
    maybe we shouldn’t put this out in the world,
but sometimes we’ll pull a guest onto our podcast.
    And we’ve had such a busy week
    that we didn’t have all of the time in the world
    to research the guest.
    Well, this right here just made it infinitely easier.
    So I wanted to share this
    because it’s very related to what you were talking about.
    But also like, if you’re going and interviewing
    for a job somewhere, plug in the company here,
    plug in the person’s name that’s interviewing you here,
    and learn more about that person.
    So you’re going into this interview like dialed in
    and ready to have the conversation
    with the person that you’re talking to, right?
    If you’re about to just jump on a call
    or a pitch meeting or something like that,
    you can really, really rapidly do a lot of research
    and get a lot of ideas super, super quick.
– Yeah, I mean, I think most people don’t realize
like how much better Perplexity is than Google.
And, like, it seems like the quality of Google
is going down every year.
And it’s like, most people just don’t realize it
’cause it’s, like, slowly been happening, right?
    – Yeah, yeah.
– But now, with, you know,
AI content being mass-generated,
like, the quality is going down more and more, it seems.
    And, you know, and Google’s tried to fight that
    by relying more on authorities.
    And now that’s why you see Reddit and Quora at the top,
    but now people are using AI to mass spam Reddit and Quora.
    So it’s just the quality of Google continues to go down.
When you search for something,
unless it’s something very simple,
you often don’t quickly get the answer to, you know,
your question. But with Perplexity,
you get such great quality, you know, answers,
whatever you’re asking, and all the extra information
with the videos, and you can do follow-ups.
    I highly recommend people try that.
    – Yeah, and even like Google is trying
    to do this same kind of thing now, right?
    Where you do a Google search
    and it has the AI response up at the top.
The problem with Google is it still, to this day,
doesn’t know the difference between a meme and reality.
    Right? Like all of the stuff that came out with Google,
    like saying, hey, maybe you should try putting glue
    on your cheese to make sure it sticks to your pizza.
    And how many rocks should you eat per day?
    And, you know, geologists recommend you eat
    at least 17 rocks a day.
All of that stuff is because memes exist.
People said that random-ass stuff as jokes, and Google thought
that it was, you know, actually factual information
that it fed through its AI.
Perplexity doesn’t seem to have those same sorts of issues
    because I think it’s sort of doing a little bit
    more cross-referencing than what Google’s doing.
– Yeah, they started from scratch,
like thinking how to build a search engine
or an answer engine, you know, from first principles.
Whereas I think Google’s relying
on really antiquated technology
they built a long time ago.
    You know, you can see this in some of the recent stuff
    with like people trying to search for Donald Trump
    or the assassination attempt and it’s like,
    “Oh, did you mean Kamala Harris?”
    And it’s like, “What the hell?
    What? What are you talking about?”
    And I don’t think that was like somebody manually doing that.
    That’s probably just based on like all the news sources
    they’re pulling information from
    and they’re putting certain authority
    to those certain news sources.
    That’s probably why that happened.
    But that’s based on like antiquated technology
    and that’s probably why all that’s happening.
So yeah, people should be using Perplexity.
Okay, this next one, I saw it from a tweet
from Allie Miller; she calls it the “Claude walk.”
    And I’ve actually heard this from other people too,
    like Dan Shipper, apparently this is one of his top use cases
    for AI, is that when you’re thinking about something,
    you know, instead of just sitting in your room
    or in your office and just kind of, you know,
    working on it that way, like actually get out
    and get some exercise and do work at the same time, right?
And AI actually makes this feasible now.
So like, so what he does and what Allie’s suggesting
people do is go for a walk and then use something
like Super Whisper or something like that
to transcribe everything that you’re saying
and turn it into notes,
which then you could, you know,
feed into ChatGPT or Claude,
tell it to remember it,
and make it actionable.
    And I think Dan said he’s even using that
    for his newsletter, I believe.
    Like, that’s how he’s writing his newsletter.
    And so I was like, I have to start doing that.
    ‘Cause like, I’ve been on like a big health kick,
    especially, you know, I think in the age of AI
    being healthy is really important.
    And I was like, okay, so if I can get out for a walk
    and whatever I’m thinking about from my newsletter,
    just say it as I’m walking, right?
    That would save so much time.
    But also I just, I find that when I’m walking,
    I’m more relaxed and I’m more able to think, you know,
it’s kind of different than jogging,
where it’s hard to think. When you’re walking,
you can think very clearly.
    Sometimes even better than when I’m sitting down.
    And so I’m trying to get in the habit of doing that.
    Having kind of a hard time getting into it
    because I feel like in Japan it feels slightly awkward to me.
    Like the American guy walking around
    where everything’s really quiet here in Kyoto
    and I’m like walking around talking to myself outside
    and people are staring.
    – I feel like people are worried about that
    less and less and less these days.
    ‘Cause it’s so common now to be, you know,
    just have like air pods in and be like,
    talking to somebody on the phone
    while you’re walking around or something.
    So I don’t know, I’ve always felt self-conscious
    about that as well,
    but I feel like it’s definitely getting more normalized.
    – Yeah, so here’s Super Whisper.
    People can check it out.
See, I’m actually planning on, like,
installing this and trying it today.
    I haven’t tried it yet.
    (upbeat music)
    We’ll be right back,
    but first I wanna tell you about another great podcast
    you’re gonna wanna listen to.
    It’s called Science of Scaling, hosted by Mark Roberge.
    And it’s brought to you by the HubSpot Podcast Network,
    the audio destination for business professionals.
    Each week hosts Mark Roberge,
    founding chief revenue officer at HubSpot,
    senior lecturer at Harvard Business School
    and co-founder of Stage 2 Capital,
    sits down with the most successful sales leaders in tech
    to learn the secrets, strategies, and tactics
    to scaling your company’s growth.
    He recently did a great episode called,
    “How Do You Solve for a Siloed Marketing and Sales?”
    And I personally learned a lot from it.
    You’re gonna wanna check out the podcast,
    listen to Science of Scaling
    wherever you get your podcasts.
    (upbeat music)
    – So what does Super Whisper do?
    I’ve heard of it, but I’m not super familiar with it.
    – So you can download the app.
I think it’s mainly on macOS,
    and you download it and it just helps you transcribe
    whatever you’re saying to it.
    So it’s like you just talk to the app
    and then it transcribes it for you.
    – Okay, similar to Otter.
    I use Otter for the same thing.
    – Okay, yeah.
    – You Otter know.
    – How do you use Otter?
– So Otter’s the same idea.
It’s an app on my iPhone that I open up
and just start talking into,
and it basically transcribes whatever you say in real time.
    One of the ways I actually use it,
    this is actually not something that was in my notes,
    but I’ll share it anyway
    ’cause it’s top of mind,
    is whenever I go to these conferences,
    when I go to Google I/O,
    when I go to Microsoft Build,
    I’m going to MetaConnect next month,
    whenever I go to these events,
    I actually pull out my iPhone, open up Otter,
    and then set it on my lap
    and just let it transcribe the entire presentation
    that somebody’s giving.
    And then when it’s done,
    Otter will give you the summary and bullet points
    of like here’s the main takeaways
    from the transcript that you just created.
    And it will just give me bullets of like,
    here’s the 10 things they just talked about
    in this presentation.
    So I can sit in the presentations
    and just completely zone out if I want and not pay attention.
    And I’ll just have my cliff notes
    of the entire presentation sitting on my phone later.
    It’s awesome.
    – That’s great.
    – But actually, you know what?
    I’m going to share,
    I’m going to jump to another idea
    that I was going to share today
    because it’s related to what you just said right there.
    – Oh, okay, okay.
    – Similar concept,
    but one thing that I like to do
    when trying to create like documents
    like PDFs or any sort of written blog posts
    or things like that is I like to speak them out
    and then have them transcribed
    and then use AI to sort of reword the transcription for me
    into something that sounds more like a written article
    versus just like reading a transcript, right?
    So the most recent one that I did,
    I’m sort of laughing.
    You’ll understand why I’m laughing in a second.
    The most recent one I did was HubSpot reached out to me
    and asked if I can help them create a PDF
    on all of my thoughts on like the AI world right now.
    And they sent me this questionnaire
    with like 14 questions on it.
    And I looked at it and went,
    “Man, I’m going to have to actually like sit down
    and type out responses to this whole thing.
    Oh, this is going to be a pain in the butt.”
    So what I actually did instead
    was I gave this questionnaire to Emily,
    who’s my assistant.
    I gave it to Emily, we jumped on a call
    and I said, “Read these questions to me.
    Just pretend like you’re interviewing me
    for a podcast or something
    and read these 12 questions to me.
    And I’m going to record this
    and I’m going to record my response
    to every single question.”
    So I recorded the entire conversation
    as she read through these 12 different questions.
    And I answered every single question.
    And then I took that recording
    and I pulled it into Descript.
    So Descript, you can upload an audio or a video file.
    It will transcribe the entire thing for you.
    So I used Descript to get my transcription.
    And then I took that entire transcription
    and I pulled it into Claude.
    Claude lets you upload really long text files.
    And then I pulled the whole thing into Claude
    and I went through and I had it actually answer
    the questions again for me one by one,
    but using the context of the document that I uploaded.
    The idea was I uploaded the entire transcript
    as like the context and I said use this transcript
    to answer the questions that I’m about to ask you, right?
    So then I went through the document again,
    the 12 questions and I took question one,
    copied and pasted it into Claude.
    And then Claude responded to the question,
    but based on the answers I gave in my transcript.
    And I just went through every question one at a time
    and had Claude write up a very succinct response
    that had like my sentiment in it.
    It had the ideas and thoughts and things
    that I shared in my response,
    but it did it in like a single paragraph per question.
    And so it’s super helpful to write up documents
    and articles and things like that.
    And so how other people can apply this is,
    let’s say you wanna write a blog post on SEO.
    I don’t know, I don’t know what niche you’re in,
    but let’s say you wanna write up a blog post
    all about like the top 10 SEO tactics or something,
    write out a list of questions
    that you think people would want answered about SEO,
    just write out all of those questions
    and then record yourself
    just answering those questions out loud.
    And then you can pull it
    into one of these transcription tools like Descript.
    Let Descript transcribe the whole thing
    and then take that entire transcript,
    pull it into Claude and then have Claude
    turn that into a written article or a PDF document
    or something like that for you.
    And it saves so much time
    and it writes it probably way better
    than you would have written it yourself, let’s be honest.
    So that’s a really, really good process,
    but it’s very similar to the idea of like going for walks
    and just sort of transcribing your thoughts
    as you’re walking.
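The question-by-question step described above can be sketched in code. This is a minimal illustration, not the actual setup: the transcription step (Descript) and the model call (Claude) are left out, the function and variable names are made up, and the message format just assumes a generic chat-style API.

```python
# Sketch of the "transcript as context" workflow: pair the full interview
# transcript (as a system message) with one question at a time, so the
# model answers each question using only what was actually said.

def build_messages(transcript: str, question: str) -> list[dict]:
    """Build one chat request: transcript as context, one question to answer."""
    system = (
        "Use the following interview transcript to answer each question "
        "in a single, succinct paragraph that keeps the speaker's sentiment:\n\n"
        + transcript
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Illustrative stand-ins for the real transcript and questionnaire.
transcript = "Q: What excites you?\nA: Honestly, the agentic stuff..."
questions = [
    "What excites you most about AI right now?",
    "Where do you think AI is overhyped?",
]

# One request per question; each would be sent to the model in turn.
batches = [build_messages(transcript, q) for q in questions]
```

Sending each batch through a chat API one at a time mirrors the copy-paste-one-question-at-a-time loop described here.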
    – Yeah, that’s awesome.
Anything like that that takes work to a place
where, like, humans are living more of their lives
and, like, being healthier
while actually still getting their work done,
I think is awesome.
    It actually kind of goes into my next one.
    So my next one really is about using AI to like stay healthy.
    So this is not for work,
    but I do think it’s relevant.
    If you’re healthy, you can work better.
    And I think a lot of people have not really
connected the dots that, like, in the age of AI,
people are probably going to live a lot longer.
And AI is most likely going to unlock
some, like, major advancements in healthcare,
such that people might end up living 10 to 30 years longer,
maybe longer.
    So you want to be healthy, right?
Like, you don’t want to be, like, old,
living a super long life,
but very unhealthy.
Like, no, you want to be somewhat healthy.
And so I’ve been on a big health kick
over the last year.
I had gained a lot of weight during COVID
and kind of kept it on
    and like, you know, lost five pounds here and there,
    but still was quite overweight.
    And then about a year ago,
    I started going to the gym a lot like quite often.
    And the first thing I realized was like,
    you need to really track your calories and your protein.
    And I had never done that in my life before.
    I had mostly been like kind of slightly overweight
    most of my life.
    I was super fit when I was maybe like 18 or 19.
    And so I realized, okay,
    you got to track calories and protein.
    And so I’ve been using this thing.
    Actually, it’s a custom GPT that I created,
    which is, it’s very simple.
    Like it doesn’t do all the stuff I want it to do.
    It’s like, you know,
    I would like for it to do a lot more,
but the memory is quite limited on what,
you know, it’ll actually remember.
    And it’s real simple.
    And I just have a thing here where I have like,
    for example, I’ll do like a new day, you know,
    I’ll just, I’ll like type this in.
    I’m typically doing it on the mobile app.
    I’m typically not doing it on my desktop.
    But anyways, you type in like new day
    and you’ll set like what are your calorie goals
    for the day, what are your protein goals?
    And just anything that I eat,
    like if I know the calories and protein,
    I will type that in.
And you can even do just shorthand,
just literally put the two numbers.
You don’t even have to tell it what’s calories and protein.
It’ll figure that out
and know what they are.
    If you want, you can add a description to it.
    So like, oh, I had a latte or I had oatmeal or whatever.
    But also the really cool thing is,
    you just kind of keep this same window open,
    the same tab and like, you can go back to it every day.
    And I’m sure at some point,
    there’s some kind of memory limit there,
    but I’ve been using it for like two months now,
    the same one.
    – Yeah.
    – And then the cool thing is like it,
    in that single context,
it remembers all the stuff you shared before.
    So like, oh, I had this protein drink or whatever,
    it’ll know what the calories and protein are for that.
    So you don’t even have to type in the numbers anymore.
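As a rough illustration of the log this custom GPT is keeping, here is a plain-Python sketch. The DayLog class and the "calories protein description" shorthand format are assumptions for illustration; the real custom GPT handles this conversationally and can also estimate numbers you don’t supply.

```python
# A plain-Python sketch of a daily calorie/protein log with the
# "two numbers plus optional description" shorthand described above.

class DayLog:
    def __init__(self, calorie_goal: int, protein_goal: int):
        # "New day": set the day's calorie and protein goals.
        self.calorie_goal = calorie_goal
        self.protein_goal = protein_goal
        self.entries = []  # list of (calories, protein, description)

    def add(self, shorthand: str):
        """Parse 'calories protein [description...]' shorthand."""
        parts = shorthand.split()
        calories, protein = int(parts[0]), int(parts[1])
        description = " ".join(parts[2:]) or "unlabeled"
        self.entries.append((calories, protein, description))

    def remaining(self):
        """How many calories and grams of protein are left for the day."""
        cals = self.calorie_goal - sum(e[0] for e in self.entries)
        prot = self.protein_goal - sum(e[1] for e in self.entries)
        return cals, prot

day = DayLog(calorie_goal=2200, protein_goal=160)
day.add("250 12 latte")
day.add("400 15 oatmeal")
print(day.remaining())  # prints (1550, 133)
```

Typing "new day" in the chat maps to constructing a fresh DayLog; each food message maps to one add() call.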
    – So you’re basically using it to track the calories
    and protein.
    Is there any other like benefit that you get out of it?
    Is it like giving you motivation?
    Is it giving you like–
    – I would like to do more stuff like that.
    Yeah, yeah, I would like,
    it’s literally just doing the calories and protein.
    Like I wanted to do a lot more.
I was like, okay, I’m gonna make this like a really cool,
you know, custom GPT where, like,
it’ll motivate you,
it’ll, like, track your progress.
    Like, yeah, it’s currently not capable
    of doing any of that.
I mean, I’m sure you could build an application
that used ChatGPT’s API.
I’m sure you could do that.
But in terms of a custom GPT,
    it doesn’t seem to be possible at the moment.
– Well, you could use,
you could use like the RAG method, right?
Retrieval-Augmented Generation,
    where like every time it gives you an update,
    copy and paste it into a text file
    and then upload that text file and then it will–
    – Yeah, I’ve been too lazy to figure that out.
    Yeah, I should at some point.
    But yeah, I mean, for me it just,
    it simplifies tracking calories and protein.
    You know, I just type new day every time
    when it’s a new day and it starts over.
If I want to change anything,
like, okay, right now I’m trying to diet,
okay, I reduced my calories by 300 or something like that.
    If I’m trying to gain more muscle,
    I increase it by 300.
I am trying to get it to where it can actually
coach you more on that.
    Like, you know, I tried to feed it like different documents
    from like people who I really respect,
    like different like health scientists
    and things like that or exercise scientists.
And then it’s pretty useful.
Like, you can talk to it about, like,
okay, I want to bulk right now.
Like, what does that mean?
    Like how many calories should I be taking in?
    Or should I be doing differently with exercise?
    So it will answer that kind of stuff.
    – Yeah, yeah.
    ‘Cause what’ll be cool is like, you know,
    you can go in there and be like,
oh, it seems like I’ve plateaued.
I haven’t, you know, been able to add more weight
to my bench press or whatever in two months.
    What do you think’s wrong?
And based on all of the memory of all of the data
you’ve plugged in, it might be like,
    oh, well, it looks like you’re doing this wrong
    with your diet and maybe, you know,
    you didn’t do this with the weights or whatever.
    And it can actually start giving you custom feedback
    based on everything you input.
Like I feel like that’s probably something
that can be achieved right now with Claude or GPTs.
    I just don’t know the exact formula to do it.
    – Yeah, so somebody should go and copy my thing
    and then actually make it super useful.
    Do your own thing, have at it.
    I would love to use it if you do, let me know.
    – Yeah, yeah, yeah.
    – But I have found that one thing that’s actually,
so when I tried using calorie trackers before,
like, you try to add, you know, something that you ate,
and it’s like, you have to, like, find it in a list,
or they’re usually, like, kind of complicated.
    But with this, I can just type it in.
And you can even do, if you’re not gonna be super precise,
like, okay, I’m not trying to, like, win a competition
or something, obviously, like,
I don’t have to be super precise, estimate for me.
I just ate this, estimate, you know.
    And the estimations seem to be pretty good.
    Like often like within like 20% of the real calories
    and protein, so it makes it an easy way to track it.
    Just like, just type it in, tell it what you ate
    and it’ll come up with a pretty good estimation
    of how many calories and protein you took in.
– Awesome, well, the last one that I was gonna share
is actually how I sort of write scripts for shorts.
    So on my YouTube channel, I’ve actually started doing–
    – Something different, okay, good.
    It’s not–
    – Yeah, yeah, this one’s different.
    I actually go back to Claude for this one.
    You could actually do it with a custom GPT
    and it would work just as well.
So if somebody’s listening to this and they’re like,
but I pay for ChatGPT, I don’t wanna pay for Claude too.
You can do the same thing in ChatGPT.
    It doesn’t really matter which one you’re using.
    So I’ve actually started doing a lot more shorts
    on my channel.
    I’m trying to experiment with doing more shorts
    as opposed to only doing long form videos
    because I wanna try to get in front of different audiences
    and shorts tend to get in front of different audiences
    than the long form videos.
    I’ve also got sponsors coming to me saying,
    hey, I would love to pay for a short on your channel.
    So I’m like, okay, well, maybe I should start
    doing shorts then.
    So I’ve actually started playing around with more shorts.
    And so I created this custom project in Claude.
    And if you’re not familiar with custom projects,
    but you are familiar with like custom GPTs,
    it’s basically Claude’s version of a custom GPT, right?
    So I created this one called shorts writer.
    And what I did with it was,
    you can see I uploaded a whole bunch of transcripts
    from shorts that I thought were really, really good shorts.
    So I came across shorts that had a lot of views
    that were in sort of technical niches
    that talked about AI or talked about like emerging tech
    or things like that.
    And I downloaded each of the videos
    and then I pulled them into Descript
    to get the transcript from the video.
    And then I uploaded all of the scripts
    from all of these videos that I found.
    And then basically what I told this custom Claude prompt
    to do is to read the scripts that I uploaded
    and try to find the sort of consistent formula
    that seems to make all of these work well.
    And for anything I put into the prompt box,
    give me a similar script.
    So that’s essentially the way I did it.
    And so now if there’s like a new piece of news.
    So if I go over to like a news website,
    I know you can’t actually see this
    ’cause I’m just sharing the one tab.
    But if I go to like a news website,
    there’s some news out today about how the Humane Pin
    is actually getting more refunds
    than it has purchases right now.
    Not a great look for Humane,
    but if I was to go and copy the entire article
    and come over to Claude,
    you can see I can paste in the entire article
    and it makes this little like pasted box here.
    So I just pasted in the entire article from The Verge
    about how Humane is underperforming right now.
    I don’t have to put anything into the prompt box
    because it already knows what I’m looking for.
    And if I just hit enter on this,
    it’s going to read this news article
    and then write me a script based on this news article
    that I put in here.
    – That’s crazy.
    You could do that with like a faceless YouTube channel
    too, couldn’t you?
    – You could, yeah.
    So it just gave me,
    it knows that I want my script to be under 60 seconds.
    I’m trying to model,
    Cleo Abram is like one of my favorite YouTubers
    as far as like shorts go.
    She does a really good job with them.
    So it’s kind of trying to model a similar formula
    to what Cleo’s videos are.
    And you can see it wrote like a 60-second script
    about that news article that I just put in.
    Humane just launched their AI pin,
    wearable device meant to replace your smartphone,
    but things aren’t going as planned.
    Imagine you create a revolutionary new gadget.
    You spend years developing it,
    raise over 200 million from big tech names
    and finally release it to the world.
    But then more people return it than keep it, right?
    And it just wrote this whole script for me.
    That’s actually a pretty like compelling,
    interesting script there.
    – It sounds, yeah, it sounds like a good short.
    – That’s crazy.
    – So I take the script,
    I throw it into my teleprompter here.
    I read it, I overlay it with B-roll
    and I can crank out shorts in 45 minutes, you know?
    – Wait a minute, is that what you’re doing?
    Are you changing anything?
    – Yeah, I mean, a lot of times it’s not specifically worded
    the way I would word it, right?
    Sometimes it’ll use like “delve.”
    You know, the common words that like make it obvious
    that it’s AI.
    So I will, you know, tweak some words
    to make it sound a little bit more like me,
    but for the most part,
    the scripts come out pretty good out of the box.
    – That’s wild.
    – Let me, let me see if I can show you my system prompt here.
    So create video scripts that will be one minute
    or less in the style of Cleo Abram.
    Use the transcripts in the project knowledge
    to determine the consistent formula behind the video scripts
    and use the details about the video idea inside of the prompt
    to create a video about the details in the prompt
    in the style of the Cleo Abram videos
    following a very similar formula.
    So it’s, I uploaded the transcripts
    and it’s following a very similar formula
    because I really liked her flow.
    She always starts with like, imagine this
    and then give some more details.
    And then like it’s got a very formulaic flow to it.
    And I was like, I really liked that flow.
    So now I can plug in any news article, any sales page.
    If I need to make a video about like the rabbit R1,
    I can go to the rabbit R1 homepage,
    copy all of the details from that page, right?
    Copy all of the bullets and the selling points
    of the product, paste them in,
    and it will write a short for me
    that will ideally make people interested in the rabbit, right?
    So that little like flow for me
    has made making short form content really, really easy for me.
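    [Editor’s sketch: the “shorts writer” project described above could be approximated programmatically with the Anthropic Messages API. Everything here — the helper name, the model string, the exact prompt wording — is illustrative, not Matt’s actual setup; the real custom project stores the transcripts as project knowledge rather than inlining them per request.]

    ```python
    # Hypothetical sketch of the "shorts writer" workflow: reference
    # transcripts play the role of project knowledge, the pasted news
    # article becomes the user prompt, and the system prompt encodes
    # the "find the formula, write in that style" instruction.

    SYSTEM_PROMPT = (
        "Create video scripts that will be one minute or less in the style "
        "of Cleo Abram. Use the reference transcripts to determine the "
        "consistent formula behind the video scripts, and write a script "
        "about the details in the prompt following a very similar formula."
    )

    def build_shorts_request(article_text: str, transcripts: list) -> dict:
        """Assemble a Messages-API-style payload for one shorts script."""
        knowledge = "\n\n---\n\n".join(transcripts)
        return {
            "model": "claude-3-5-sonnet-latest",  # assumed model name
            "system": SYSTEM_PROMPT + "\n\nReference transcripts:\n" + knowledge,
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": article_text}],
        }

    # Usage (assumes the anthropic SDK and an API key):
    #   payload = build_shorts_request(article, transcripts)
    #   anthropic.Anthropic().messages.create(**payload)
    ```
    
    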
    – That’s crazy.
    I was imagining like, do you combine that with like 11 labs
    and like generating a voice reading all of it out?
    And then you start using some of the new AI video tools
    that are out there.
    I think there was a new open source one released today
    or it’s going to be released soon.
    Yeah, generate some B roll or something like that.
    I mean, like you have like most of the video just like done
    like automation.
    – That’s not really a future I’m looking forward to.
    – Yeah, yeah, I know exactly.
    I’m like, somebody’s going to do that.
    It’s going to be either great or horrible.
    – Yeah, the lower we bring this barrier to entry
    to create content like this,
    the more we’re just going to get flooded with junk.
    So I’m like, I’m always sort of hesitant
    to share this kind of stuff.
    Cause I’m like, this works really well for me.
    But I also know like,
    if something doesn’t come out quality,
    I’m not going to upload it.
    A lot of other people aren’t going to have those filters,
    right?
    A lot of other people are going to go,
    I can make a workflow where I can crank out a video
    every 10 minutes and just see what works.
    I’m not really looking forward to that future,
    but I found a workflow that works for me.
    And I know, you know,
    a lot of others might find it valuable.
    If there’s like a content creator that you’re like,
    oh, they have a decent formula,
    a decent flow that they follow when they make their videos.
    You can actually use a tool like this
    to reverse engineer the flow of the video
    and then use that reverse engineering
    to then make videos for you based on the topics
    that you input.
    And to me, that’s really powerful,
    but also the lowering of the barrier for effort
    like also makes it sort of scary.
    And so I hope nobody goes and takes this idea
    and runs with it.
    (laughing)
    – Tons of people will.
    (laughing)
    Tons of people will.
    – Damn it.
    (laughing)
    – But I think a long-term, yeah,
    people want to see people’s faces
    and actually know who’s the person behind it.
    And like you said, even if you have Claude help you make that,
    you’re still curating,
    you’re still coming up with the idea
    to do the video in the first place.
    – I think, yeah, I think that’s gonna be
    a big differentiator.
    I think some people will go out there
    and try to make these faceless videos
    where they get the formula written for them
    and then they plug it into 11 labs
    and then they plug it into a video tool
    that generates all the B-roll
    and then they just throw it online
    and nobody knows who’s behind it,
    nobody knows why they should care.
    I just don’t think it’s gonna work for most people.
    Some people are gonna crack that code
    and they’re gonna have videos that go viral.
    It’s just sort of inevitable.
    99% of people will never crack that code
    and their videos are gonna get seen by seven people.
    I think in the future, as we move forward,
    being like a personality online,
    being like a name that people can trust
    that they find reputable
    is gonna become so much more important
    than the actual content that you’re putting out there.
    I think the faceless channels are just going to be,
    well, I can’t really trust this.
    I don’t really know the person behind it.
    They could just be trying to sell me something.
    How do I know this isn’t their affiliate link?
    And I think having the face, the personality behind it
    is going to be the differentiator
    that makes some content work for us versus others.
    – Yeah, I bet there will be an opportunity though
    for two or three years to make a lot of money doing that.
    – Oh dude, some company’s gonna roll out an app
    that just does it for you.
    Like, hey, I need a video about the Rabbit R1.
    All right, here it is and it’s just done.
    – Yeah, pay me this much money, just pay it.
    Stripe, 300 bucks or whatever.
    Maybe that’s what Laura should be.
    – Yeah, yeah. (laughs)
    – Yeah, this has been a fun episode
    ’cause I feel like there’s a lot of things I learned
    from you, like how you’re using AI for video
    that I find fascinating.
    I think it’d actually be kind of fun to do
    maybe a whole episode on that at some point.
    – Yeah, yeah, definitely.
    – Also, at the same time, it kind of pushes me to,
    there’s all these great use cases for AI,
    but some of it, there’s a few things I actually use
    and a lot of things I know I should be using
    or should be trying that I haven’t.
    This kind of pushes me to actually go out and try it.
    – Man, I’ve gotten so hooked on Claude and Perplexity.
    Claude does sort of help with the creation process,
    Perplexity to help with the research process.
    Between those two tools, I mean,
    I pretty much have those tabs open all the time now.
    Like I’m hooked on using those to just A,
    stay looped in and B, to turn around and create content
    that I think people are gonna like out of it.
    But you know, I think, like you said,
    I really, really enjoy doing episodes like this.
    I wanna kind of turn it to the audience for a second.
    So if you’re watching this on YouTube
    or listening to the podcast, I’d love your thoughts on this.
    I think on Spotify, you can actually leave comments now.
    If you’re watching it on YouTube,
    leave some comments, let us know.
    Do you like this style video?
    Do you enjoy us showing off use cases?
    Do you prefer interviews?
    We’re still sort of finding our flow and figuring out
    like what is gonna provide the most value
    for the people that tune into The Next Wave.
    So your opinion’s valuable.
    Let us know in the comments
    wherever you’re watching or listening to this.
    It’s super, super appreciated.
    Thanks again for tuning in and we’ll see you in the next one.
    (upbeat music)

    Episode 19: Can AI tools revolutionize your business workflow? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive into six game-changing AI workflows for 2024.

    This episode covers everything from using Claude for content creation to leveraging Perplexity for research, illustrating how these tools can enhance productivity and engagement. Matt and Nathan also explore AI transcription tools like Super Whisper and Otter, and how they use a combination of Descript and Claude for efficient document creation. Plus, they touch on AI’s role in personal health management and the potential for AI-generated video content.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Perplexity is a powerful language model tool.
    • (05:10) Preparing engaging podcast prompt with context and depth.
    • (06:42) Search tool summarizes personal information, aids research.
    • (12:53) Otter app transcribes real-time speech, offers summaries.
    • (14:01) Transcribing and rewording content for easier reading.
    • (18:13) Pursuing health, tracking calories, and losing weight.
    • (21:45) Custom feedback based on data input is desirable.
    • (23:58) Creating custom project for analyzing successful short videos.
    • (28:59) Lowering barriers to content creation may lead to low-quality flood.
    • (30:19) Faceless, formulaic videos won’t work for most.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • The 4-Step Blueprint To Building a Successful AI Startup w/Vijoy Pandey

    AI transcript
    – Humans are amazing tool developers.
    From the moment we created fire to the first spear,
    this is the latest, greatest,
    shiniest tool that we’ve developed.
    – What you’re describing is actually one of the areas
    that I’m A, most excited about,
    but also seems to be the thing
    that most people are scared of.
    – We are actually witnessing a step function change
    when it comes to human creativity and productivity.
    And so because of that,
    there’s going to be a fundamental change
    at how businesses operate and how society functions.
    (upbeat music)
    – When all your marketing team does is put out fires,
    they burn out fast.
    Sifting through leads,
    creating content for infinite channels,
    endlessly searching for disparate performance KPIs,
    it all takes a toll.
    But with HubSpot,
    you can stop team burnout in its tracks.
    Plus your team can achieve their best results
    without breaking a sweat.
    With HubSpot’s collection of AI tools,
    Breeze, you can pinpoint the best leads possible.
    Capture prospects attention with click-worthy content
    and access all your company’s data in one place.
    No sifting through tabs necessary.
    It’s all waiting for your team in HubSpot.
    Keep your marketers cool
    and make your campaign results hotter than ever.
    Visit hubspot.com/marketers to learn more.
    (upbeat music)
    – Hey, welcome to the Next Wave Podcast.
    I’m Matt Wolf.
    I’m here with Nathan Lanz.
    And in this episode,
    we’re gonna break down a four-step process
    to start a business in the world of AI.
    We’re also gonna discuss how new startups in AI
    can compete with the big boys like Google and Microsoft.
    Today’s guest is named Vijoy Pandey
    and he is the senior vice president of Outshift by Cisco.
    And I think you’re really gonna enjoy this conversation.
    So let’s dive right in.
    Hey, Vijoy, thanks so much for joining us.
    It’s great to have another conversation with you.
    How are you doing today?
    – I’m doing good.
    – Awesome.
    Well, let’s just dive right in.
    Let’s talk a bit about how the business landscape changes
    over the next five to 10 years
    with this new AI era that we’re coming into, right?
    There’s a lot of huge technology
    that’s just exploded over the last two years.
    And now pretty much every business seems
    to be integrating AI in some way.
    How do we see this changing the business landscape?
    – Yeah, so that’s actually a great question.
    I think every time I look at that transition
    that’s happened in the past, I would say five years,
    but especially in the last two years,
    we are actually witnessing a step function change
    when it comes to human creativity and productivity,
    especially when we’re looking at AI
    and generative AI in particular.
    And so because of that,
    there’s going to be a fundamental change
    in how businesses operate
    and how society functions.
    And because of these reasons,
    we are in for a really interesting ride.
    And you might think like, okay, so we’ve heard this before.
    So what’s different this time?
    I mean, I think first and foremost, generative AI,
    I mean, AI has been generating content for a while.
    I mean, I think the first quote unquote generative AI system
    that at least I know of was this computer program
    called ELIZA, which was a conversational Q&A program.
    But it was based on expert systems.
    So it was based on if-then-else statements.
    It was pretty hard-coded in the way it approached problems.
    But it did generate answers and it was a chat bot.
    I mean, so things have existed for a while,
    but this time with neural networks,
    with neural networks with context,
    transformers and everything that’s been happening,
    few things are taking shape.
    One, the creation is actually getting super smart.
    So we’re looking at creation
    when it comes to not just text,
    but audio, video and multimodal.
    So you can switch between text, audio and audio to images
    and that switching between all modes of communication
    is a big deal.
    So that’s a big deal.
    The second big deal, which I think is even bigger,
    and this to me is the most exciting bit,
    is these frontier models are actually beginning to reason.
    And so what that means is they are trying
    to build semantic relationships
    between the elements of the grammar.
    So let’s take a simple example.
    In English, they’re trying to build semantic relationships
    like you and I would in terms of what’s a verb,
    what’s a noun, what’s a preposition,
    how do I combine these things
    to form an intelligent statement?
    And so that semantic relationship,
    whether it’s English or Japanese, it doesn’t matter.
    They’re building those semantic relationships.
    But that’s not what’s super exciting.
    What’s super exciting is those same semantic relationships
    are being built across mathematics.
    They’re being built across how proteins come together.
    So there’s a grammar and language for proteins.
    There’s a grammar and language in math.
    There’s a grammar and language in how molecules combine
    to build new materials.
    And so those are the places
    where I think things get really interesting.
    So once you have these semantic relationships,
    you can actually start reasoning about does this make sense?
    Does that make sense?
    Can I take a large ambiguous problem
    and break it down into smaller steps
    that I can then solve?
    So to me, that is the next big step
    that is being enabled through these frontier models.
    And then the third bit is the way we interact
    with these models is also changing.
    I mean, I talked about Eliza and Eliza was text in, text out.
    When we started with ChatGPT, it was text in, text out.
    And so it was great as an assistant.
    You ask a question, you get a response.
    It could be a summary.
    It could be some code snippet.
    But in the end, it’s still a text response.
    Sure, you changed it now to multimodal,
    but it’s a content response.
    What’s happening now is we are moving towards agents.
    And agents are going to be autonomous.
    They’re going to be always on.
    They’ll be always listening to inputs from the environment.
    So you don’t have to push things to it.
    It’s always pulling information.
    And then once it pulls information,
    it actually takes action.
    Instead of giving you some content to absorb
    and then take a decision or action,
    the agent will take action on its own.
    But that’s not the end of it.
    What we actually figured out is,
    and this is something fascinating,
    the work that Andrew Ng and these other folks have been doing,
    is think of these agents as being no different
    from you or I, from humans, right?
    You will not come to Vijoy and ask Vijoy a question around,
    “Hey, Vijoy, help me plan my next trip to Italy.”
    And then two minutes later, you come to Vijoy and say,
    “Guess what, I’m having this chest pain.
    Can you tell me what that could be?”
    And then you turn around and say,
    “I’m looking for stocks to buy. Which stocks should I buy?”
    You will not do that.
    I mean, it’s like you go to subject matter experts
    and you actually figure out what the subject matter experts
    have to say, and you trust those subject matter experts.
    But ChatGPT behaves in this,
    what’s called one-shot or zero-shot approach,
    where you say, “Give me this,”
    and ChatGPT just picks it up.
    And I’m just picking on GPT, but it’s the same with
    Anthropic, Gemini, I mean, you take your pick, right?
    So what we’re looking at now in the agentic workflows is,
    can we build these really thin, small, model-based,
    really accurate subject matter expert agents
    that can come together, collaborate,
    constantly learn, and solve a higher-order problem?
    And so what Andrew Ng and people like that have shown is,
    take a simple thing like developing code.
    Instead of asking GPT or Gemini to spit out code,
    which is one-shot, zero-shot,
    you actually say, “Okay, I have one agent.
    Maybe it’s GPT-based, and it generates code.”
    I have another agent that is sitting on the side,
    which is, again, even small and accurate,
    which is actually going to test for correctness.
    Then I have another agent who’s going to sit and test
    for security, scale, and if you have these four or five agents
    come together and work on a coding problem
    or a software development problem,
    the output that you get sometimes is 10x better
    than what you get from a single or one-shot approach.
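    [Editor’s sketch: the generator-plus-reviewer pattern Vijoy describes can be outlined in a few lines. The agents below are stub functions, purely illustrative — in a real system each would wrap a model call (possibly a small, specialized one), but the control flow is the point: draft, collect critiques, revise until every reviewer signs off.]

    ```python
    from typing import Optional

    def generator(task: str, feedback: list) -> str:
        """Stub generator agent: a real one would call a code model,
        folding the reviewers' feedback into the next draft."""
        return f"solution for {task!r} (rev {len(feedback)})"

    def correctness_reviewer(code: str) -> Optional[str]:
        """Stub reviewer: flags the first draft, approves revisions."""
        return "add tests" if "(rev 0)" in code else None

    def security_reviewer(code: str) -> Optional[str]:
        """Stub reviewer: same shape, different specialty."""
        return "validate inputs" if "(rev 0)" in code else None

    def solve(task: str, max_rounds: int = 3) -> str:
        """Loop drafts past the reviewer panel until everyone signs off."""
        feedback: list = []
        draft = generator(task, feedback)
        for _ in range(max_rounds):
            # Gather critiques from every specialist agent.
            issues = [msg for reviewer in (correctness_reviewer, security_reviewer)
                      if (msg := reviewer(draft)) is not None]
            if not issues:
                return draft  # all reviewers approved
            feedback.extend(issues)
            draft = generator(task, feedback)  # revise with the critiques
        return draft
    ```

    The design choice worth noticing is that each reviewer is narrow and independent, which is exactly why small, accurate subject-matter-expert models can fill those roles.
    
    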
    So these three things of creation, reasoning,
    and agentic ways of interacting with these systems,
    I think these are game changers.
    I think they are going to change everything that we do.
    They’re going to change the way we approach
    not just software work, services work,
    but even physical work.
    And there’s a PWC study that actually says
    that AI, especially based on all of these things,
    is going to add 15 trillion plus of value
    to the economy by 2030.
    That’s trillion with a T.
    It’s like huge amount of value because
    of this wide applicability across agentic workflows
    and embedded forms in robotics as well.
    Because it’s the same thing, just embedded in a robotic form.
    Yeah, I think people hear that about agents
    and it sounds like sci-fi to them.
    They’re like, oh, that’s cool.
    That’s coming in 10 or 20 years.
    I think a lot of business leaders
    don’t realize this is probably like one to three years
    where you have very good agents that actually work
    and can go off and do work for your company.
    And so probably at Cisco, I think it’s great
    that you guys have been doing Outshift
    because I think more companies should be doing that.
    We’re thinking about, OK, even if I’m a big company,
    medium-sized company, how can I be innovative
    and keep trying new things?
    Because like in the age of AI, disruption
    is going to happen so fast.
    So what lessons have you guys learned at Cisco
    and is there anything like maybe our audience could learn from
    about how to be more nimble even as a big company?
    Yeah, I mean, that’s a great question.
    I think the big thing here is the velocity
    and the nimble aspect of doing business
    and being able to experiment and learn from it
    and then iterating on it is actually the key attribute here.
    And that’s why Outshift exists.
    I mean, that’s the reason for us to exist.
    And that’s a great advantage that all these small startups
    out there have and you have in the industry
    because you have the ability and you don’t have the baggage
    to support a customer base that is mired in brownfield pain.
    Now, again, the thing I would note here
    is the one place that startups can come in and disrupt
    is to disrupt that brownfield pain.
    So I’m not saying that you should not go after that.
    You should absolutely go after brownfield pain
    because that’s the place where something like AI could come in
    and disrupt the industry pretty massively.
    There’s a complexity that businesses are trying
    to just grapple with.
    So the thing that we are dealing with
    is this widening gap that we see at Outshift
    between all of these frontier models,
    all of these foundation models, big or small,
    and the capabilities that they’re providing,
    everything that we talked about, creation, reasoning,
    assistance to agents, and all of those frameworks,
    all of that is happening at breakneck speed.
    And our customers, enterprises, including ourselves,
    when we think about us, Cisco as a customer,
    we are struggling to consume these in real-world use cases.
    And so what are the reasons?
    Four big reasons.
    Number one, I may have an idea.
    I mean, all of us use AI for consumer-oriented tasks.
    And we are getting pretty good at that.
    I mean, my kid has been using this for his homework.
    For God knows how long, since the day ChatGPT got released.
    So we’re using it every day.
    But businesses might have some ideas,
    but they don’t know where to start.
    So step number one is, can we build something
    that enables businesses to just experiment?
    So if they have an idea, I mean, we talked to HR,
    we talked to finance, sales, legal,
    like all of these teams within these large enterprises,
    they have so many ideas.
    We, at one point, we tabulated like 150 ideas, use cases,
    that these folks want to come in and experiment with.
    But there is no easy way.
    So is there an easy way that somebody
    can provide for these teams to come together
    and experiment with their ideas very quickly at low cost?
    So that’s number one.
    Number two, now that you figured out, OK, this use case,
    this idea sort of makes sense, then
    you need to customize it.
    Because, again, to our earlier conversation, GPT or Gemini,
    they don’t have context around an enterprise’s data
    sources and enterprises’ knowledge bases, internal websites,
    Snowflake instances.
    There’s a whole bunch of data sources
    that an enterprise does business on that should not
    be accessible to these public models.
    So if you need to customize it for your use case,
    you need to bring in these sensitive data sources
    and knowledge bases and customize these models
    with those data sources.
    It’s not just that.
    You need to figure out what policies make sense.
    Because the last thing you want is,
    if personally, if Nathan, you don’t have access
    to a particular document, but because I
    customize my internal assistant, using that document,
    suddenly, Nathan has access to all of the answers
    that the assistance is providing based on that document.
    That’s a big problem.
    So carrying that source of truth,
    carrying that identity across data access, knowledge base
    access, as well as assistant and AI access
    is the other big one.
    So that’s customization.
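    [Editor’s sketch, my own illustration rather than Cisco’s implementation: the access-control point above — Nathan shouldn’t get answers derived from a document he can’t read — comes down to filtering retrieved documents against the asking user’s permissions before they ever reach the model. The ACL data and names below are hypothetical.]

    ```python
    # Carry the user's identity through retrieval: check each candidate
    # document against its access-control list BEFORE it is handed to the
    # assistant, so a customized model can't leak restricted content.

    ACL = {  # document -> set of users allowed to read it (hypothetical data)
        "q3_finance.pdf": {"vijoy"},
        "employee_handbook.pdf": {"vijoy", "nathan"},
    }

    def permitted_docs(user: str, candidate_docs: list) -> list:
        """Drop any retrieved document the user has no right to see."""
        return [d for d in candidate_docs if user in ACL.get(d, set())]
    ```
    
    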
    We’ll be right back.
    But first, I want to tell you about another great podcast
    you’re going to want to listen to.
    It’s called Science of Scaling, hosted by Mark Roberge.
    And it’s brought to you by the HubSpot Podcast Network,
    the audio destination for business professionals.
    Each week, host Mark Roberge, founding chief revenue
    officer at HubSpot, senior lecturer at Harvard Business
    School, and co-founder of Stage 2 Capital,
    sits down with the most successful sales leaders
    in tech to learn the secrets, strategies, and tactics
    to scaling your company’s growth.
    He recently did a great episode called
    How Do You Solve for Siloed Marketing and Sales?
    And I personally learned a lot from it.
    You’re going to want to check out the podcast.
    Listen to Science of Scaling wherever you get your podcasts.
    The third bit– now you’ve customized it.
    You’ve seen things are working.
    Maybe it makes sense.
    Now let me go forward and make sure
    that I’m getting value out of this use case.
    So ROI analysis is the other big, big problem.
    So if you’re in the observability space,
    if you’re in the what we’re calling prompt routing space,
    like, does this model make sense for these use cases?
    Or should you be looking at something else?
    Because Mistral might be good for something.
    GPT-4o might be better for something.
    Maybe a Llama 3 with 4 or 5 billion parameters
    might be good for something.
    Or maybe something that is distilled
    might be good for something else.
    So how do you pick and choose which models make sense?
    How do you pick and choose which data sources are actually
    being effective in your use case?
    How do you figure out that, hey, I’m
    paying X amount of dollars to all of these foundation model
    providers, but my business process at the end of it
    is really at the same place?
    So have I actually benefited from spending all this money,
    from bringing AI into the equation for these use cases?
    That’s a big question right now for all of these enterprises.
    So is there a before and after when
    it comes to the business workflow, before AI, after AI?
    So that, I think, is pretty critical,
    because we are now entering a phase where
    there’s a justification needed.
    The hype cycle is calming down a little bit,
    and you need to justify–
    Well, let’s see what happens with GPT-5, right?
    [LAUGHTER]
    Let’s see what happens there, yeah, exactly.
    But I think it’s a good thing, because I
    think you’re getting to the point where you’re now
    getting into real-world use cases, especially
    in the B2B context.
    And now, finally, the fourth step is all of this great.
    Now you’ve deployed.
    Now you want to scale it.
    You want to make sure that there’s security behind it.
    You want to make sure that there’s data protection behind it.
    You want to make sure that it’s trusted and safe.
    So it’s not hallucinating.
    It’s bias-free, it’s ethical, and so on and so forth.
    So those are the four steps–
    easy start, customization, ROI analysis,
    and trust, safety, and security.
    These are the places that enterprises are struggling with.
    So if you want to start up, innovate in this space.
    And there is so much to innovate here
    that I, myself, can probably farm out
    like hundreds of companies here to go ahead
    and tackle all these problems.
    But you do have to deal with HR and legal.
    I was hoping you were going to say just replace HR and legal
    with AI, and then you can do this.
    That was my hope.
    Well, we are a long way away from that, Nathan.
    And that actually brings up a great, great point, actually.
    There is a pretty big debate around,
    even if we go through these agentic workflows where agents
    are now coming in and taking autonomous action,
    people get worried, well, is my job at risk?
    And so one of the things that I would say
    is humans are amazing tool developers.
    From the moment we created fire to the first spear,
    whatever, right?
    We’ve been amazing tool developers.
    This is the latest, greatest, shiniest tool
    that we’ve developed.
    And we will have to make it better.
    We will have to make it better so that we can actually
    figure out a way for these tools to do the menial tasks.
    Whereas we elevate ourselves to solve
    the more ambiguous, the harder, the ill-defined problems,
    because we need to go after those higher-order problems.
    So even something like HR, I mean,
    there are parts of that process that, in fact, our HR teams
    come to us about, things like resume summarization,
    things like skill set matching, these are things that nobody
    wants to spend time on.
    So yes, automate those tasks.
    But how do you advertise a role?
    How do you attract a candidate?
    How– I mean, these are human-touch processes.
    I mean, humans are not going to go away
    from these kinds of processes.
    It’s just that you’re making these humans better
    at what they do.
    Yeah, I think what you’re describing
    is actually going back to the sort of agents thing,
    I think that’s one of the areas that I’m A, most excited about.
    But also, that seems to be the thing
    that most people are scared of is like, all right,
    if we get these AI agents running and doing all these jobs,
    like, now what does that leave humans up to?
    And I know a lot of humans are good decision-makers
    and can sort of do the higher-level thinking
    and then get the agents to run it, but not everybody.
    Define a lot.
    So fast-forwarding five, 10 years,
    like, where does that leave humans?
    Because I actually think we might even get to a point
    with a lot of these AI systems and agents
    that they’ll move up that chain
    and do more and more of that higher-level thinking.
    So this is obviously a very philosophical, theoretical question,
    but where does that leave humans five to 10 years from now?
    Yeah, I mean, that’s actually a great question.
    Again, I mean, if I had my crystal ball,
    I would polish it and give you an answer.
    But I’d probably be wrong 99% of the time as well.
    So I’ll do my best and try to answer this question.
    But the way I think about this is,
    so first and foremost, it’s a tool.
    And humans have created the tool.
    Humans will continue to refine the tool.
    Our biases, our insecurities,
    all of the things that we stand for
    will actually seep into the tool as well.
    So first and foremost, as people, as humanity,
    we should be pretty careful and pretty cautious
    about how we build the future around AI
    and just be deliberate in figuring out
    whether there’s bias, figuring out
    whether there’s hallucinations,
    making sure that security problems are taken care of,
    making sure that privacy and data ownership
    is actually taken care of.
    So I think there’s a whole element there
    that we need to pay attention to.
    But the way to think about this is,
    and my favorite analogy here is,
    we should be behaving as if there is a printing press
    being invented two blocks down from my house.
    So two blocks down from my house,
    there is a printing press that is being invented.
    I cannot be sitting here sharpening my quill
    because that is the wrong thing to do, right?
    I know that there is a printing press.
    I should embrace it.
    It’s the shiniest toy.
    It is going to change the world.
    Do not sharpen your quill.
    Go out there, figure out how that printing press operates,
    and then maybe start writing a novel, right?
    And let the printing press do the job of printing it out
    instead of physically writing that novel out.
    So I think that’s the analogy I would throw out there
    where embrace it, learn it, use it, improve it,
    and get your fundamentals right, right?
    So in all of this journey, again,
    like I said, there’s going to be another AI winter.
    I mean, all this hype aside,
    we are going to hit an AI winter
    because there’s some fundamental problems like memory.
    Reasoning is not solved yet.
    There’s a whole issue around,
    can we make these models unlearn?
    So there are these fundamental problems
    that need to be solved.
    Sustainability is a big one.
    I mean, we are burning down the planet at this point
    to make all these models.
    So how can we make it–
    – I would argue, though, that AI is what’s going to solve that.
    I would argue that we can’t save our way out of that.
    So anyways.
    – Agreed.
    I mean, every problem that I just described here, Nathan,
    there is an AI for that solution,
    and then we’ll be using AI to build new solutions as well.
    So there is that duality that exists
    in everything that we do.
    I mean, that boat has sailed.
    So I would say that if you’re sitting here thinking
    that AI is not going to change your world,
    that’s the wrong place to be.
    The printing press is being developed.
    So that is going to happen.
    And we’re going to leverage AI to solve
    sustainability problems, health care problems,
    environmental problems, education problems,
    accessibility problems across the board.
    But as far as individuals are concerned
    and where this thing is headed, I would say be pragmatic.
    Know that there are more problems to be solved,
    and we will be solving them over time.
    In fact, there is a famous quote by Thomas Kuhn.
    I don’t remember the exact quote,
    but paraphrasing it, what it says is,
    there are these step function changes
    that happen in scientific discovery.
    And between those step functions, real work is done.
    Because then you take that step function,
    the output of that step function,
    and you actually make it work in real world scenarios.
    That’s where we are right now.
    Secondly, between those step functions,
    you’re actually doing the work of the next step function.
    So just be pragmatic about it.
    Figure out what needs to be fixed.
    Go and fix it.
    Because that’s where we humans thrive.
    Now, when it comes to one of the negatives of this,
    and I heard this from somebody, and I’m a photographer,
    I love to take pictures.
    And then you’re looking at models like Sora,
    and you’re looking at what AI can do.
    And one of the photographers that I admire,
    he came back and said, you know,
    I really want AI to solve for all the menial work
    that I don’t want to do.
    I don’t want AI to solve for the stuff that I enjoy doing.
    And so there’s a little bit of a pet peeve
    that I personally have as well.
    Whereas they’re going and tackling things
    like video generation and image generation,
    which all of us enjoy.
    But then even there, if you take a step back and think about it,
    it’s like painting used to be a career for a lot of people.
    And then photography came on the scene.
    Yes, the demand for painters reduced quite a bit.
    But the remaining painters that existed and exist to this day,
    what did they do?
    They upskilled.
    And now a painting that you would buy
    is way more expensive than a photograph that you would buy somewhere.
    So that’s again, going back to humans are good at innovating,
    at creating, at doing something new,
    even in a shape or form that has been automated through AI.
    So that I’m a firm believer.
    So we’re talking about like what, five to 10 years?
    So I personally think over the next two to three years,
    work is going to get dramatically easier and more fun.
    Like a lot of tedious things you normally do in work,
    a lot of that’s going to get automated.
    And so work is going to get better.
    Like people are just going to enjoy working more, which is going to be great.
    But I think long term, I think we are possibly heading to like a period
    where like in 10 years where work is probably going to be optional.
    I mean, I really believe that.
    Like once you start combining AI with robotics,
    you know, the cost of a lot of goods should come down.
    And so we’re going to have to rethink a lot of things in society.
    So like, yeah, maybe more people are going to use AI to learn how to play guitar.
    And maybe, yeah, AI could do that better, but you enjoy playing guitar.
    So you do that, right?
    Yeah, Nathan, just, just to add to that, I mean, I think you’re absolutely right.
    So that in my head, there are two axes of innovation.
    So there’s innovation that actually automates things that we do.
    And then there’s innovation that actually abstracts away things that we do.
    So abstracts away the complexity of things that we do.
    And these are two separate axes.
    So the analogy that I have is, let’s take, let’s take the example of building a house.
    You could build a house brick by brick, brick by brick, build a wall.
    Once you’ve built a wall, then you build the house by combining these walls
    and a roof and so on and so forth.
    You can automate that process.
    In fact, there are robots that actually go ahead and lay those bricks out for you.
    They exist.
    So you can still be building brick by brick, wall by wall and automate that process.
    But then you can abstract the complexity and you actually can move the unit of work
    to be a higher order function.
    And you can do that by saying, I’m going to 3D print this entire house.
    When you 3D print that entire house, it’s like, yes, you’ve abstracted out
    the complexity of building bricks, laying out walls, connecting these walls,
    putting a roof together, all of that is gone.
    So now your unit of work is actually very, very different.
    It’s abstracted out to a unit of work that is democratizing, in fact,
    the way you would build houses.
    So abstraction always democratizes work.
    And you’re 100% right that with agentic workflows, starting with software
    development and the tech space, then moving towards services because it’s
    sort of knowledge work.
    And agents are suited for that at least in the next two years, three years.
    And then looking at embedded AI, which is taking these agents,
    putting them in robotics, we are going to move towards a utopia that is just
    great, Nathan, which is: we will abstract work away so that humans
    get all the time they want to do what they do best, which is argue.
    I’ve had three years.
    No, hopefully that’s not it.
    I was hoping you were going to go towards like, yeah, let’s build the new
    Coliseum and let’s think of things like that, which then goes into the argument
    of we should be accelerating more, because yes, this is all going to require
    more energy.
    And so if we try to have less energy, just none of this is going to work
    out for humanity because people are going to demand more energy.
    So we have to be building AI faster and then hoping that AI can help us solve
    those problems versus trying to save our ways out of it.
    Because like, yeah, when people are not working, what are they just going to sit
    around in VR?
    I hope not.
    I hope they’re going to be off and like, I want to go build, I want to build a city
    off on the moon using robots, right?
    Like, I hope that’s what people are doing.
    It’s like amazing stuff like that.
    And I think that’s what we’re looking at in the next 10 to 20 years.
    So yeah, actually, that comment about VR is actually pretty interesting.
    So the way we’re thinking about agentic workflows,
    if I might digress a little bit, is like we just described.
    I mean, there are going to be agentic workflows.
    They’re going to solve software and tech problems first, because guess what?
    Tech folks are building agents and agentic workflows.
    They’re going to disrupt the thing that they’re most comfortable with first.
    Right.
    So that’s going to happen first.
    Then we are going to go after the services industry.
    And you see a lot of data points already.
    There’s the PwC study that I mentioned.
    There’s a Sequoia video that talks about it as well.
    People are talking about the 15 trillion plus services industry that’s going
    to get disrupted with agentic workflows.
    I think before we get to embedded agents and robotics, the third step
    that’s going to happen is actually avatars, social networks, and the metaverse.
    If you think about where Meta is going and where Zuck is going with all of this,
    I mean, he pretty much said that in his last earnings call.
    And when he released Llama 3. But we’ve been theorizing about this for a while:
    why would Meta make Llama 3 free?
    So there was a whole bunch of reasons why you would want to make it free.
    First of all, could disrupt the industry.
    Sure.
    Yeah, flip the table over, and that’s that.
    I mean, I’m just going to set the bar here.
    Let’s see if five other companies disappear.
    I mean, that’s a great reason.
    Great.
    But really, I mean, they’ve got the data to train really good models.
    And one of the things that we are realizing is there’s this whole motion
    behind synthetics and using synthetic data to train models.
    And we are realizing as an industry that there’s going to be model collapse
    if we only train on synthetic data.
    So true human data is actually pretty valuable, surprise of surprises.
    I mean, we knew that, but it’s actually being proven.
    And guess who has a ton of data? The Googles of the world and the Metas of the world, right?
    So he can leverage that.
    He can really build good models.
    But the third bit that he’s moving towards is almost social network number three,
    a version three. Version one was you interacting with friends and family.
    Version two of social networking was all of us interacting
    with influencers like Nathan and Matt here.
    So these are not in our immediate friends and family space.
    These are people who are known that you can interact with outside of our close circle.
    Social network number three, or version three, is going to be all of us interacting with virtual avatars.
    And the way you’re going to train these avatars is using these open core models
    that are going to get trained on how you do things and then behave like a virtual Nathan or a virtual Matt.
    When it comes to, let’s say gaming, when it comes to finance, when it comes to creativity.
    Like a husband playing guitar.
    Sorry, that world is coming whether you like it or not, Nathan.
    I mean, I know we want to send people to Mars, but people also will be sitting in.
    No, no, I’m a gamer.
    I started my career in gaming.
    I was a top player on EverQuest back in the day.
    So I, yeah, I definitely get it.
    I hope that we don’t go there though.
    Because like for me, the addiction was not healthy when I was young, right?
    So I’m like, I kind of, I hope to steer things away from that.
    Like, yeah, sure, hopefully some part of society does the whole VR thing.
    But yeah, let’s not stop there.
    Let’s think beyond VR and like go off and build amazing stuff again.
    Yeah, but there’s a pretty important use case here.
    And we’ve done that within Outshift.
    What we did is we have a designer, his name is Mark Schiavelli.
    He’s an awesome designer.
    Now his skills are very much in demand, right?
    So everybody’s going to the designer because we have a philosophy of design something first,
    then get customer feedback, then try and build something and then iterate.
    So design, learn, build, iterate.
    And so design is the first step in the process as everybody was going to Mark and his team.
    And so what he did was he said, “Let me train a virtual Mark.”
    And so he ended up training a virtual Mark, and virtual Mark is, I would say, 70% as good as real Mark
    right now, and it’ll get better next year.
    But what I’m trying to get to from here is one thing that this does is it actually democratizes skills.
    So think about an expert in finance that today some of us can afford to pay for and get their advice.
    But the larger strata of humanity, because of economic or societal barriers, whatever, is not able to access those skills or those SMEs.
    So I believe that there is the flip side of this, which is you can actually democratize those skills whereby a lot more of humanity can actually benefit from those skills.
    And that is something that we need to tap into.
    So yes, there is the aspect of VR and gaming and all of that.
    But there is a pretty real use case here around democratizing knowledge even more and giving that access to parts of society that have never had access to that.
    And that, I think, will drive up again, creativity and productivity even more.
    Yeah, I think we’re already actually seeing Meta do this to some degree, right?
    They just rolled out this week or last week a new feature where anybody on Instagram and Facebook could go and train their own mini AI version of themselves.
    And so somebody can go and talk to the virtual Matt Wolfe that has all of the data on me and sort of understands how I would respond.
    Like they just rolled that feature out.
    The next step just feels like, all right, now let’s embody it into an avatar, maybe in VR, right?
    So it’s already happening.
    We’re already seeing that play out.
    And the more accurate they get and the more expert they get, going back again to our agent conversation.
    If you train them to be really good experts in, let’s say, security, when it comes to application security, like go really narrow, go really deep and be experts on that so that they’re hallucinating less.
    They’re providing accurate answers.
    And after a couple of these training sessions and fine-tuning sessions, I get comfortable enough where they probably hallucinate less than I do.
    And they make fewer mistakes than I do.
    I’m like, yeah, go for it, right?
    And we are rapidly approaching that world.
    And once we’re in that world, maybe I’m thinking, maybe I should monetize that, right?
    And maybe let others access that avatar and leverage my skill in a much broader, more scalable way.
    So you’re on the beach in Hawaii, checking the Vijoy podcast, like, oh, he’s doing a great job.
    He just sent me a report and asked me for feedback on a few key parts.
    Not in a virtual environment.
    Yeah, exactly.
    That was my point, though.
    Yeah, our next podcast interview with Vijoy is going to be Vijoy’s avatar.
    Because I mean, it’s the same thing anyway.
    Our avatars as well.
    We’ve already been replaced.
    You just don’t know it.
    But, you know, Andrej Karpathy, actually, you know, he left OpenAI, started a new startup.
    And this is kind of the same idea of what he’s doing.
    He’s trying to help educators educate at bigger scales than ever possible, right?
    A single educator can go in, put all of their knowledge into a system,
    and then anybody can go and access that educator now.
    And it doesn’t matter what language you speak, right?
    I could go in and if I only know Japanese, I could learn from that person in Japanese, right?
    If I only know English, I could go and learn from that person in English.
    So I feel like that ability to, you know, take a base of knowledge, put it into a system,
    and then just let anybody access that at scale.
    I mean, think about how that, like, can benefit, you know, countries, lesser developed countries
    that don’t have access to some of this information.
    To me, that’s like, that’s the real power of what we’re talking about here.
    I think about from, like, the Silicon Valley perspective, like, people don’t know how to do startups.
    They don’t understand anything about startups if you’re not in San Francisco,
    or most people don’t, and there’s all this knowledge just, like, you know, within a few blocks, right?
    And, but you could have, like, a Paul Graham bot, right?
    Where it’s like, you talk to the Paul Graham bot, and it’s like he’s, like, interviewing you.
    Instead of wasting your time with someone you’re scared of talking to
    because you don’t know anything about startups, you can just talk to the bot and, like, practice on the bot
    and see if your idea makes any sense at all.
    And to build the bot, you can just feed it all of Paul Graham’s essays.
    He’s had some of the best essays ever, right?
    You just feed it all the essays, then you’re chatting with the bot based on the essays,
    and I just think of what that’s gonna do for so many different industries.
    Like, yeah, people who before couldn’t do startups, now they’ll be able to try startups or whatever.
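(The "feed it the essays, then chat with the bot" idea Nathan describes can be sketched roughly like this: retrieve the essay passages most relevant to a question and pack them into a prompt for a language model. This is a toy, keyword-overlap stand-in, not how a real product would do it — an actual build would use embeddings and a real LLM API, and the essay snippets and function names here are purely illustrative.)

```python
def score(question, passage):
    """Toy relevance: count words shared by the question and a passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words)

def build_prompt(question, passages, top_k=2):
    """Pick the top_k most relevant passages and wrap them in a prompt
    that a language model could answer 'in Paul Graham's voice'."""
    ranked = sorted(passages, key=lambda p: score(question, p), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return (
        "Answer in the style of the essays below.\n\n"
        f"Essays:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Stand-in snippets; in practice you would load the full essay corpus.
essays = [
    "Startups succeed by making something people want.",
    "Do things that don't scale in the early days of a startup.",
    "Write simply; good writing is rewritten writing.",
]

prompt = build_prompt("How do startups succeed?", essays)
# `prompt` would then be sent to a chat model along with the user's pitch.
```

The same retrieve-then-prompt shape underlies the educator and expert-avatar examples discussed later in the episode; only the corpus changes.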
    Yeah, I mean, that actually brings up two points.
    So one is, like, we’ve talked about digital twins, and basically, now digital twins are actually going to be real.
    I mean, again, that’s what we’re talking about, whether it’s in education, whether it’s in healthcare.
    I mean, one of the things that we always talk about is things like drug discovery.
    Yes, we talked about the fact that we’re building these frontier models to do
    protein folding and figure out novel drugs.
    You can try it on a population, which is what a frontier model would have
    at a statistical level, but will it work for me?
    Whereas we can make these agents that are trained on my nervous system, on my
    habits and the way I eat, the way I exercise or don’t exercise for that matter.
    I mean, all of those things can be part of this bot.
    And then you can try these drugs out without actually impacting the human themselves.
    So it’s drug discovery, it’s material discovery, all sorts of digital twins.
    The other thing I would say is, like, you were talking about these bots and how they can train
    people and give them access to, like, Silicon Valley and startups and all that.
    There’s an interesting anecdote that I want to bring up.
    So when we shipped our first assistant within our product,
    this product called Panoptica, it is actually a security product.
    It secures cloud-native, cloud-first applications.
    And so when we shipped this assistant, the whole goal here was, can we improve the day-to-day?
    Can we make people more efficient?
    Can we make SecOps and DevOps and SREs more efficient when they’re using this product?
    Can we help them communicate?
    Because a lot of the friction that exists is SREs don’t want to talk to devs.
    And devs are like, oh, they are friction points and they’re not letting us move fast.
    So there’s always this pain, and we want to just bridge that gap.
    So that was the intent.
    And that we did.
    We did actually solve that intent.
    But guess what?
    When we started looking at the kinds of queries that were coming in,
    the kinds of queries that were coming into that bot or the assistant were
    — help me understand what a CVE means.
    And if you’re a security expert, you would know what a CVE is.
    But maybe you’re a newbie in security.
    You don’t know what a CVE is, and you’ve joined this team,
    and you’re a fresh out-of-school grad, and you don’t want to ask somebody and embarrass yourself.
    You can ask stupid questions to a bot that you would not ask some expert, right?
    And here you have the expertise of 10 people.
    And you can ask really stupid questions.
    I do it all the time already.
    I love that I can ask really stupid questions and not lose face.
    And that’s like a positive use case.
    Yeah, no, totally.
    I mean, I literally do that with AI already, because I’ve got chat GPT on my phone.
    The other day, my daughter was asking me, why is this root beer called root beer if it’s not a beer?
    And I’m like, well, let’s ask.
    It’s like, I use it for things like that right now.
    Questions that I feel too dumb to ask somebody in real life,
    but I have no problem asking a computer.
    I asked AI about 7-Eleven today with my wife.
    She was like, because in Japan, 7-Eleven’s everywhere, right?
    But she didn’t know the history of, like, 7-Eleven, it being 7 days a week.
    And I knew that part, like, oh, it’s 7 days a week, and she goes,
    but what does 11 mean?
    I’m like, I don’t know.
    And it was 7 AM to 11 PM.
    So I learned that today using AI, right?
    But anyways, yeah.
    But again, if you look at the serious aspect of this is, again, democratizing knowledge.
    And so you’re providing knowledge. It’s not just asking the stupid questions,
    but it’s also enabling people, like you were saying earlier, in countries where there is no
    such access, and societal groups that have not had access
    in the past, to just suddenly have access.
    I mean, the way Google democratized the web and knowledge, this is taking it 100x forward.
    Okay. So the last topic I want to touch on, you’ve already kind of touched on it a little bit,
    but I think one of the big fears with AI right now, especially among like smaller startup
    founders, is that you’ve got the Microsofts, the Googles, you know, the Metas out there
    that we’ve already seen this happen, right?
    Somebody would go and develop something with like an open AI API.
    And then two months later, chat GPT just makes that a feature of their product, right?
    Or Microsoft just goes and makes that a feature of their product.
    So how do you see smaller startups actually competing and, you know,
    staying in the game against the bigger incumbents?
    Yeah, that’s a great question.
    I think the way to think about this is, again, go back to history and learn from history.
    So at some point, there was a similar sentiment and statement being made about
    these big cloud providers: why should I build a product
    when that cloud provider is actually just going to consume it into their ecosystem?
    And I know, and I’m not going to name the cloud provider.
    But all of these companies, large companies, they can only go and scale
    in certain areas.
    I mean, there is a thesis that each of these companies is going to put their weight behind.
    So for example, let’s take OpenAI.
    OpenAI is going to go after the biggest, baddest, best foundation model, frontier model.
    That’s why they’re called frontier models.
    It’s like, it’s always going to be the best model out there.
    And everyone else, if you’re in the model game, you will be compared against OpenAI.
    I mean, there’s no question, at least for the next couple of years, right?
    I don’t know how it will change in the future.
    But OpenAI is not going to concentrate on things that don’t fit into that
    sort of mold or into that swim lane.
    Yes, they might build an app ecosystem and they might dabble in a few things.
    Because they’re trying to figure out avenues of revenue.
    They’re trying to figure out how you can use that foundation model to build use cases.
    Because only then people will come and use that model and they can monetize.
    So there’s an ecosystem that they’re going to build up.
    But they’re not going to, they cannot.
    I mean, no company on planet Earth can do everything excellent in an excellent way all the time.
    Yet.
    But that is like, it’s one of those things where what is the niche that you want to go after?
    Go squarely after that niche, figure out use cases, figure out especially brownfield
    pain points that all of these companies tend to avoid.
    But customers are willing to pay for that brownfield pain to disappear.
    So go after that brownfield pain point.
    Take these new tools and disrupt that brownfield pain point.
    That is the way to succeed.
    So go and hunt for these pain points.
    You probably already know them; this audience probably already knows, because you are dealing
    with them day in, day out.
    You need to just take a breather and think through what the top 10 of those look like.
    Pick one, go and solve it using these new tools.
    That’s the way to enter the market because nobody is going to solve all of the pain points
    all the time in a perfect manner.
    I mean, I’m a firm believer of that.
    Maybe by that point, I feel that everybody has made enough money.
    Our audience has made enough money that they can all be on that Hawaiian beach.
    Yeah.
    Or they’ll be in VR.
    Well, this has been an amazing discussion, Vijoy.
    I thank you so much for joining us and talking about this stuff with us.
    I think when we came into this call, we sort of anticipated going in one direction
    and we went in a totally different direction.
    And it was, I think, even more fascinating than where we were going to take it originally.
    But if people want to go and learn more from you and hear more of what you have to say about
    this stuff, is there somewhere online they can go check you out?
    Are you on Twitter, YouTube, any place like that?
    Yeah, so I think so.
    They can check out our website, outshift.com.
    You can follow Outshift on LinkedIn.
    You can also follow me on LinkedIn.
    And then we have a newsletter called The Shift, which we actually send these nuggets of information
    every so often.
    We don’t pitch.
    It’s all about nuggets of information, so you can subscribe to that as well.
    Amazing.
    Well, thank you once again for joining us today.
    This has been such a fun conversation.
    I really appreciate it.
    Thank you.
    It’s been a lot of fun.
    Thanks for your time.
    [Music]

    Episode 18: How can embracing AI solve fundamental problems in areas like sustainability, healthcare, and education? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive deep into this topic with Vijoy Pandey (https://x.com/vijoy), who leads Cisco’s Outshift team.

    In this episode, Vijoy Pandey reveals the 4-step blueprint to building a successful AI startup and emphasizes how AI’s integration in different sectors could revolutionize the way we live and work. Covering everything from the potential of AI to democratize knowledge, to the challenges startups face against tech giants, this conversation is packed with insights on adopting AI for larger societal benefits.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • What Exactly is Extended Reality and How Will It Affect Your Job? ft. Alvin Graylin

    AI transcript
    – When we talk about Metaverse, what is that vision?
    – I’m optimistic long-term about kind of human nature
    to adapt to this new environment and to get us
    from these siloed worlds that we have today
    to a much more interconnected, interoperable
    Metaverse platform long-term.
    – So who controls this world?
    – So here’s the thing.
    (upbeat music)
    – Hey, welcome to the Next Wave Podcast.
    I’m Matt Wolfe.
    I’m here with Nathan Lands.
    And today we’re talking with Alvin Graylin,
    the author of the book, Our Next Reality.
    And we’re talking all about the crossover of AI
    and the Metaverse and virtual reality and augmented reality.
    And in this episode, he basically lays out a game plan
    of how we get to this Metaverse
    that everybody’s been talking about.
    And well, quite honestly,
    a lot of people have forgotten about
    over the last couple of years.
    Well, we talk about how we get there.
    There’s also some interesting discussion
    about what we can learn from China
    and what China can learn from the US
    because Alvin has spent a lot of time
    in both of those countries.
    So there’s some very fascinating discussion
    around the sort of dynamic between China and the US.
    I think you’re gonna love this conversation.
    So let’s go ahead and jump on in with Alvin Graylin.
    (upbeat music)
    Hey, Alvin, thanks so much for joining us today.
    How are you doing?
    – I’m good, yeah.
    Thanks for inviting me, Matt.
    And great to meet you, Nathan.
    I think it’s gonna be a fun conversation.
    – Yeah, so Alvin and I,
    we go way back to about a couple of weeks ago,
    we got to meet each other at the augmented world expo.
    You were out there signing copies of your new book,
    Our Next Reality, and you and I got to talking
    and I had to get you on the podcast
    to talk about the crossover of the AI world
    and the metaverse world.
    And so I wanna talk a bit about
    the sort of crossover of AI and the metaverse, right?
    But before we do,
    I think we kind of need to define the metaverse
    ’cause I think a lot of people have a fuzzy vision
    of what the metaverse is.
    At one point, the metaverse was very tied towards like crypto
    and Facebook went and changed their name to meta
    and there was all of this stuff
    that was sort of muddying the waters.
    So how do you define metaverse, first of all?
    – Okay, so I think the easy way to think about the metaverse,
    first of all, the metaverse does not exist yet, okay?
    So all those people who say,
    “Oh, we have the X and X, Y metaverse,”
    they’re all just bullshitting you, okay?
    The metaverse is actually the internet we’ve been building
    for the last 30, 40 years,
    but the 3D version of the internet.
    And what that means is that it’s an open, interconnected,
    global network where, instead of websites,
    you go to 3D worlds, right?
    That’s really the definition of the metaverse.
    In fact, the guy who invented or coined the word metaverse,
    and I checked with him, I was like,
    “This is my definition, does that make sense to you?”
    And he’s like, “Well, when I wrote it in 1992,
    “there was no such thing, it was kind of,
    “I made this stuff up.”
    But what you’re explaining
    is actually what I was trying to describe in the book, right?
    In terms of something that is a large network of worlds.
    It’s not a single world,
    it’s not something where you go and just sell NFTs,
    or it’s not just a game, right?
    And I think it’s the ability for you
    to be able to access it on any device,
    anywhere in the world with anybody else, right?
    That’s where the value comes in,
    because as we know, Metcalfe’s Law, right?
    The value of a network is the square of its nodes.
    So, in a particular game or a particular NFT world,
    you’re gonna have thousands of people
    or hundreds of thousands of people.
    That value is gonna be significantly less
    than what we have today,
    where we have billions of people
    on a giant interconnected web, right?
    That’s why the internet is so valuable.
    That’s why we go on it every day,
    because there’s so much going on, and so much potential.
    And that’s what will happen in the 3D side,
    because as you know, I mean, we grew up,
    we evolved over the last several million years
    in a 3D space.
    There’s no reason for us to be constricting ourselves
    to that 2D screen.
    We’ve done it because that was the only option available.
    But soon, when that 3D space becomes accessible,
    affordable, and ubiquitous,
    there is no reason for us to go back
    to a limitation of a 2D space.
    And right now, we’re looking at each other
    through multiple 2D screens, and I can see you,
    but I don’t really feel like you’re with me.
    When we get into that metaverse space,
    we’re gonna be around the virtual table.
    We’re gonna see, you know,
    photorealistic potential of avatars of ourselves,
    if we want, we can make it more cartoony.
    But the fact that we will feel together,
    and whatever memories we create,
    we’ll feel the same as we were together.
    I think that’s something that is very powerful
    to bring people together,
    and to allow us to collaborate more,
    to allow us to communicate more,
    to allow us to get rid of the miscommunication
    and differences that people have
    between cultures and languages and age.
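    Metcalfe’s Law, which Alvin cites above, can be stated formally (the back-of-envelope comparison is illustrative, not from the conversation):

    ```latex
    % Metcalfe's Law: the value V of a network with n connected users
    % grows with the square of n.
    V \propto n^2
    % So, all else equal, a web of billions of interconnected users
    % versus an isolated world of hundreds of thousands:
    % \left(\frac{10^9}{10^5}\right)^2 = 10^8 \text{ times the value.}
    ```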
    – Like, how do you imagine the metaverse?
    Like, you know, my background is kind of like in gaming.
    I actually was one of the top players
    on the game EverQuest when I was a kid.
    So I feel like I grew up in this like virtual world, right?
    Like, do you see, you know,
    it’s almost like Ready Player One,
    but gaming is just one aspect,
    and like everyone’s living in this virtual world
    for like a lot of their life,
    and they’re doing business there,
    they’re meeting friends.
    Maybe there’s also gaming.
    Is that kind of how you see it?
    – So today, if you look at the amount of screen time we have,
    the average American is around 10 or 11 hours
    of total screen time, right?
    Between TV, desktop, you know, phone, et cetera.
    – Yeah.
    – And what we’ve seen is that over the last 10 years,
    people have primarily moved from,
    primarily on TVs to, you know,
    a lot of it on desktop,
    but to now mostly on mobile, right?
    Most of our screen time is actually on mobile.
    What we will find is that in the next five years or so,
    we will start to transition from something
    that is in our pocket to something that’s on our face.
    Why?
    Because that screen is already on our heads.
    And half of the people in the world
    wear glasses on a daily basis.
    So it is something that’s already natural.
    Now, if I can turn the glasses they already wear
    into a display, into both an immersive,
    as well as a potentially AR display,
    so you have both options.
    Essentially, with a click,
    you can turn it into something that’s see-through
    versus pass-through versus, you know, completely immersive.
    You know, and when you have that,
    there’s really no reason
    to pick that screen out of your pocket anymore.
    In fact, I think that we will essentially be spending,
    I would say, the majority of our day
    viewing the world through this screen.
    Now, it could be an augmented,
    you could actually most of the time be clear
    and just have a few augmented items.
    Let’s say, instead of having a monitor,
    I can have a giant virtual monitor
    and the rest of my office looks the same, right?
    And for a lot of people, that’s enough.
    But, you know, for us who live in the first world,
    most people say, “Oh, you know, I would never want
    “to spend more time in these virtual spaces.
    “I know you’re different ’cause you’re a gamer.”
    But for a lot of the world,
    their physical world is not as beautiful
    and as welcoming as a potential virtual world.
    So for a lot of people who, let’s say,
    live in a dorm with 10 other people,
    you know, they may actually say,
    “Hi, I want to spend more time in this virtual space
    “because it allows me to have the freedom
    “to do virtual traveling,
    “to talk to people around the world,
    “to learn things in a space that is not distracted
    “by all these other things,” right?
    So, you know, and I think that’s the beauty
    of what this offers is that it allows you
    to pick the environment that you want to be in.
    And, you know, whether you want it to be for gaming,
    whether you want it for education,
    whether you want it to be for work, you know,
    we, especially white-collar people,
    spend so much of their time working with a 2D screen.
    But the difference of that now,
    if you have that ability to do it in a 3D space,
    I think a lot of them would, you know,
    in fact, I know you guys are both users of the Vision Pro,
    you know, having that virtual environment that they have,
    it really allows you to feel more focused
    on whatever topic that you’re working on, right?
    So even if you just had a screen of your computer
    and in that giant, you know, Yosemite Park
    or whatever, you know, the volcano,
    you just feel like, “Wow, you know, I can be concentrated,”
    ’cause I don’t have, you know, dogs walking around
    or people walking outside the window,
    that in itself will allow us to be more productive.
    In fact, there were studies that we’d done
    and we found that kids were six times more focused
    when they were studying inside VR
    than when they were in the classroom, right?
    And they were able to learn twice as fast
    versus having human face-to-face classroom education, right?
    So I think those kind of things,
    it’s a little counterintuitive,
    ’cause everybody’s like, oh, there’s nothing better
    than face-to-face, but the problem is,
    when you’re face-to-face, you have 30 other kids
    or 50 other kids with you,
    you know, the teacher has one minute of time per class
    that they can spend on you,
    whereas when you’re in a virtual environment
    and you have that personalized face-to-face AI tutor
    that can be working with you the entire time
    and they can adjust and you can actually use
    your whole brain for learning.
    You know, as we know, although there’s some,
    you know, visual learners, there’s audio learners,
    there’s tactile learners,
    and when you’re in the XR, all of that is available.
    Whatever medium is the best way for you to consume
    and digest and understand content,
    it allows you to do that.
    Whereas, you know, maybe if you’re a really good audio learner,
    you’ll do fine in the classroom,
    but only about a third of the people are that way.
    So we’re leaving a lot of people behind.
    In fact, we did a study
    with a class that was doing astrophysics,
    and we found that the worst student in the class
    that was supplemented by XR
    was better than the best student in the normal class.
    And they were already pre-sorted beforehand
    to be having equal pre-test scores.
    So it’s not about, you know,
    oh, this class was actually better.
    It’s just the fact that, the reality is,
    there are actually no dumb students.
    Everybody’s a genius.
    They’re just not being taught in a way
    to allow them to utilize the genius that’s in them, you know,
    or they’re not being cared for enough
    so that they’d focus and pay attention to things.
    So really smart kids and, maybe, you know,
    really divergent kids,
    they have the ability to do a lot more
    if you gave them the right environments.
    And I think XR can do that.
    And particularly, if you have customized AI features
    or tutors that can help with that.
    – So I like part of the vision that you laid out,
    but not all of it.
    Like the idea that, like, okay,
    so if you live in a shitty environment
    that we should just keep that environment shitty
    and you should now go into virtual reality instead,
    I don’t like that.
    Like, I personally believe that with AI,
    we should actually make the physical world better.
    Right?
    – I cannot agree with you more, Nathan.
    Here’s the thing is that, you know,
    people have been talking about,
    oh, how can we create greater equality around the world?
    You know, how can we let, you know,
    the rich nations help the poor nations, et cetera?
    And it just doesn’t happen, right?
    And or it hasn’t happened yet.
    And, you know, the key determinant of success today
    is where you’re born, more than anything else, right?
    And now, what happens when you actually start having
    a global metaverse ecosystem,
    is that actually you start to create a parallel economy,
    right?
    So this actually creates a natural flow of income
    and wealth-earning opportunity.
    No matter where you’re born, if you have a headset,
    if you’re connected to this global metaverse,
    then I am actually judged by my capabilities
    to contribute to society, whether as a programmer,
    as a, you know, technical support guy,
    as a consultant, as a tutor to some child,
    that’s a thousand miles away, right?
    If I was born in Bangladesh,
    I could make maybe, you know, $10 a month
    or something like that.
    If I’m, you know, born in the US,
    you know, the poorest people who are, you know,
    probably making $1,000 a month or something,
    you know, in that order.
    So the fact that now you create a global workforce
    and it equalizes them because it’s,
    people will pay you based on your value,
    not based on what they perceive your local cost to be,
    ’cause they can’t tell where you’re from, right?
    That actually, I think, will help to achieve
    what you’re talking about, Nathan,
    in terms of how do we make people’s lives better
    by utilizing technology,
    essentially by creating a layer on top of the physical layer
    so that no longer are you constrained
    by who your parents were or by where you were born, right?
    Which are the two biggest characteristics
    that determine your future success, right?
    – Yeah, that’s something I wonder about though.
    Like, so if we spend more time in virtual reality,
    like when I played EverQuest a lot, right?
    I was not good at interacting with people at all.
    And I felt like a major detachment for other people.
    Almost like they were not a real thing.
    Like my game was the real thing.
    And so I do wonder if we will see more of that
    with humans where people don’t actually value other humans,
    if they spend more time in virtual reality.
    So I kind of imagine that the best scenario is like,
    yes, you have this amazing VR tech
    that we spend some time in.
    And but we’re so much more efficient
    with the time that we spend, whether if we’re working,
    okay, we put on the VR headset for 30 minutes,
    the AI helped us a lot,
    and we got a lot of our work done, right?
    And then I can go off in the physical world
    and spend time with family and friends and things like that.
    – So I think those are two different ways
    to approach it that actually would be different
    than what you probably experienced back
    when you were a child.
    ‘Cause you know, I also played EverQuest.
    You know, you’re in front of a 2D screen,
    you’re talking to fixed characters that have fixed lines
    and you’re trying to get through this maze or this quest.
    Now, the good thing about, you know,
    a future metaverse, which doesn’t exist yet,
    is that the other people in there
    are actually real people.
    They’re not scripted.
    They’re not something where you’re out there
    trying to solve and fix a fixed problem.
    You’re actually interacting with real people.
    And then we did a study on this.
    So we were doing AI-based language learning
    with avatars inside VR
    versus the traditional classroom learning
    or recorded learning.
    And what we actually found was that
    when people were learning inside VR with, you know,
    AI avatars that were teaching them,
    they actually learned the language twice as fast
    as when they were learning inside a classroom.
    And here’s actually the more important thing.
    They were 10 times more willing
    to use it in the real world, right?
    So why is that, right?
    Because when they were inside the VR space,
    they didn’t feel like they were being judged.
    They weren’t shy about using it.
    So they got into the habit of actually using the language.
    I mean, you’re living right now in Japan.
    And I know a lot of Japanese who,
    they can read English great,
    but they hate to speak it ’cause they’re always afraid
    that they’re gonna pronounce something wrong or whatever.
    They’re feeling embarrassed
    that they may say something wrong.
    But when you’ve actually got used to just talking
    and you’re not being judged,
    you actually get more confident.
    And the more you speak, the more confident you get, right?
    And so I think that’s something that is,
    that can be gotten when you’re in immersive space
    that is a little bit more difficult
    when you’re dealing with kind of
    pre-scripted characters in a game, right?
    So I think that’s one thing.
    – We’ll be right back.
    But first, I wanna tell you about another great podcast
    you’re gonna wanna listen to.
    It’s called Science of Scaling hosted by Mark Roberge.
    And it’s brought to you by the HubSpot Podcast Network,
    the audio destination for business professionals.
    Each week hosts Mark Roberge,
    founding chief revenue officer at HubSpot,
    senior lecturer at Harvard Business School,
    and co-founder of Stage Two Capital,
    sits down with the most successful sales leaders in tech
    to learn the secrets, strategies, and tactics
    to scaling your company’s growth.
    He recently did a great episode called
    How Do You Solve for a Siloed Marketing in Sales?
    And I personally learned a lot from it.
    You’re gonna wanna check out the podcast,
    listen to Science of Scaling wherever you get your podcasts.
    – The other thing that you didn’t mention actually,
    I think it’s really important is that
    these worlds will allow us to be so much more efficient.
    You know, like right now I spend
    probably 60% of my time on the road, right?
    And I’m on, in traffic, in airports,
    I’m just spending all this time wasted time.
    If I didn’t have to travel, I put on the headset,
    I’m in a room with you guys, I feel like we’re connected.
    I have a lot more free time to spend with my kids,
    to spend with my best friends,
    to spend with people I want to meet, right?
    And to actually also stay healthy.
    I think that’s one thing that I wish I had more time to go
    and exercise and play golf and tennis and other things.
    Because I’m working so much, I’m less able to do that, right?
    So there’s kind of a combination of things where,
    I think it’s natural to fear that,
    hey, if you spend a lot of time in a virtual world,
    you get disconnected.
    I actually think that in some ways,
    it could be counterintuitive
    that it may be the opposite of that.
    – I don’t know if you guys have seen the clips recently.
    I saw one clip of like MIT and UCSD working together.
    And at MIT, they had an Apple Vision Pro on
    and they were moving their hands around
    and a robot here in San Diego
    was actually taking the same motions.
    And then like the very next day I was on Twitter
    and I saw another video that showed like a remote worker,
    I think in the Philippines,
    stocking shelves using a robot in the U.S.
    They had like some sort of headset on
    and they were actually sitting there going through the motions
    of stocking the shelves in the U.S.
    So now it sort of like shrinks the sort of job market down
    to where if you have access to this technology
    where you are, you can actually work anywhere in the world.
    – Yep, yep, and in fact, I mean,
    you can see that example a couple of years ago
    where I know this is a little bit kind of crypto related
    but the Play to Earn model, right?
    Where a lot of the Play to Earn people,
    the ones who were doing the work were actually in Philippines
    and they were, you know, getting $50 a month.
    But $15 to $50 a month to them is a full-salary job, right?
    Whereas here, people would essentially pay some money
    to have them do some things
    so that they could save the time of, you know,
    grinding through the game, right?
    So that’s kind of an early example.
    Actually, I think that’s probably not the best example,
    ‘cause I think there’s actually little real
    social value being created by what they’re doing.
    – That was like the whole Axie Infinity thing, right?
    – Exactly.
    Once you have a global economy,
    people will gravitate to doing certain jobs
    that allow them to do something to earn more
    than their natural environment gives them, right?
    And I think that will actually create
    that rebalancing that we talked about.
    So I think that the world can be better
    and then we just, you know, it won’t happen all at once, right?
    This is why I was saying, you know, Nathan,
    I know you don’t want to hear that, you know,
    people in the kind of emerging markets
    may wear this to escape from their kind of physical space,
    but it’s gonna take time for them
    to change that physical space.
    So as long as we give them the mechanisms
    to then earn a greater income
    than to create that natural flow of wealth,
    I think we will get to that point
    where we were much more equal longer term.
    – When we talk about the metaverse,
    do you see the metaverse as being like
    a bunch of separate platforms that all tie together?
    Is it like one single thing that everybody agrees
    this is the metaverse?
    Like, is it decentralized?
    Is it centralized?
    Like, what is that vision?
    – Yeah, so there’s actually a lot there, you know,
    just like in kind of the wealth and, you know,
    inequality issue, it’s gonna take time to work out.
    And just like today, we actually have a lot of, you know,
    whether you’re talking about gaming platforms
    or social platforms that all claim
    to be some form of metaverse,
    we will essentially go through
    kind of three or four stages, right?
    So in the book, I actually equate this
    to the metamorphosis of a butterfly.
    You know, so we’re at the larva stage
    or the egg stage, actually.
    So all of these are a single eggs.
    They’re all separate from each other, right?
    What we will get to soon is kind of larva stage.
    We will have a few of these platforms will grow bigger
    and be more self-sufficient.
    But, you know, they’re still independent from each other.
    Soon, we will actually get to the kind of cocoon stage
    where they shut themselves off.
    So we’ll have like a regional cocoon.
    So there’ll be a China cocoon,
    there’ll be a US cocoon, a European cocoon,
    based on all the regulations,
    they will essentially create ways for their own populations
    to be encompassed by their own policies, right?
    And then, but it will probably create standards
    and regulation to allow these different
    virtual worlds to talk to each other, right?
    Whether it’s communication protocols
    or common ID systems or common currency systems
    or things like that, right?
    And then the last stage will be completely open
    where there will be, you know, probably 90% of these worlds
    will be able to talk to everybody.
    And then we’ll have 10% that are these micro cocoons.
    You know, there’ll still be a small China group of worlds
    that the Chinese governments don’t want to go to
    and there’ll be a small group of worlds
    that European governments don’t want you to go to.
    Just like today.
    – So who controls this world?
    (laughs)
    – Never talking about–
    – Who controls the internet today, right?
    I mean, really, there is no central party that controls it.
    There are a few standards bodies,
    like ICANN, which controls
    the DNS system and the URL system.
    – I mean, the US used to control it, right?
    I think that we handed over control,
    like maybe I don’t know if it was like five, six years ago
    or something like that.
    – Well, I mean, even the US, I think,
    helped to define it initially, right?
    But I don’t know if you would say it’s fully controlled
    because right now the network servers
    and the data centers and everything is distributed, right?
    So I think we will get to that type of a model,
    but it’s not completely distributed in the sense
    that every single piece of data will reside on your local server
    or your local PC, but there will still be data centers
    and there’ll be big pipes to those data centers.
    So I think we will continue to use
    the semi-centralized networks that we have
    because it’s efficient.
    I know a lot of people want to have a fully decentralized,
    crypto-focused world, but there’s a lot
    of technical limitations, scalability limitations
    and so forth.
    And I actually think there’s some inequality issues
    that the current crypto system imposes on the world,
    but we can maybe go on that later.
    But I think a lot of the current things
    that make the Internet successful
    will make this future metaverse platform
    or metaverse ecosystem successful, right?
    ‘Cause we’re gonna be based on the same infrastructure.
    We don’t need to rebuild any of that.
    Just like right now, but I think one thing we will have
    is that we will probably have a global universal ID
    that allows us to go to any of these worlds,
    bring our assets with us, bring our characters,
    we can be whatever avatar we want, you know,
    depending on what’s happening
    in these different worlds, just like when we, you know,
    we have a phone number that no matter where you’re roaming,
    what phone device you’re using,
    what network you’re on, they can reach you, right?
    I think we will need that type of a common identity.
    And in fact, I think having that will make
    the world behave better in the sense of we, you know,
    the anonymity that we have today is part of why
    a lot of the kind of harassment and trolling that happens
    is because, you know, there’s no ramifications.
    People don’t know who you are.
    You’re gonna post something negative.
    And, you know, I know Matt’s had some, you know,
    experience with that.
    But, you know, what we find is when people are faced
    to face each other, they actually tend
    to be nicer to each other, right?
    Because one, I think it’s just a social norm,
    but the other thing is you don’t wanna get punched in the face
    if you’re gonna be rude to somebody, you know?
    And we also find that actually in VR worlds,
    when people are in VR worlds, they tend to be nicer
    and more courteous and behave the way they do
    in physical worlds, ’cause they see somebody there
    and it feels to them, even if they’re anonymous,
    it feels to them that they’re talking to somebody
    and there’s an identity tied to that other party
    that you’re interacting with.
    So, I’m optimistic long-term about kind of human nature
    to adapt to this new environment and to get us
    from kind of these siloed worlds
    that we have today to a much more interconnected,
    interoperable metaverse platform long-term.
    – Yeah, hopefully there’s like a metaverse version
    of the mute button where if somebody is trolling you
    in person, you can just like make them disappear.
    – Well, no, no, they actually have that today.
    A lot of the social platforms today will essentially
    create a bubble so people can’t come within,
    let’s say, two or three feet of your space
    because some people, I think there are some
    harassment issues online, although I feel like
    people equate it to physical harassment,
    I think there’s still a gap between that, right?
    But the fact that you can actually mute anybody
    in these spaces where you can block them
    just like you do on digital spaces today.
    – But that brings up a good point though,
    like if everyone’s living in this
    and doing all their business in there,
    whoever controls that system, they can block people, right?
    And then you can basically exit somebody out of the economy,
    which is kind of like, from my perspective,
    it’s, you know, a real possibility.
    – I agree with you, and in fact,
    this was also discussed in the book as well
    in the sense of, you know, we’re gonna have,
    we’re gonna be a lot more dependent
    on this future digital economy than we are
    on our phone today.
    And you know, when you leave your house
    without your phone, you feel very uncomfortable.
    In the future, we, if our entire world’s wrapped
    in this, you know, glasses form factor and immersive space
    and our unique ID, you know, when somebody hijacks your ID
    or you get blocked out of the system,
    it can definitely have a lot of negative ramifications.
    So we need to be careful, how do we manage that?
    How do we govern that system?
    Who gets to control that system?
    These are still kind of unknown questions
    that need to be discussed.
    – Right, and I mean, we’re still having those same debates
    right now with AI, right?
    In the AI world, we’re sitting here going,
    should open AI have as much power as they have?
    You know, should Google have as much power as they have?
    I feel like the same discussions are gonna come up
    if this stuff becomes as, you know, prolific as–
    – Yeah, I mean, I think we’re having these discussions today,
    but even, you know, even open AI,
    we’re talking about 100 to 200 million users, right?
    So compared to 8 billion.
    We’re still a relatively small problem today.
    But when we start having, you know,
    seven or eight billion people on this, you know,
    global network, that’s gonna mean a lot more.
    ‘Cause that potentially, that digital economy
    could be as big as the physical economy
    that we have outside of it.
    And in fact, the population of that
    is gonna be bigger than any one country is today.
    In fact, well, I mean, you have Meta today,
    I think with three billion people
    is already bigger than any country.
    But, you know, they’re not able to govern them
    and control them as a country does to its citizens, right?
    But in this case, whoever’s managing that platform
    could have a level of control similar
    to a national government does, right?
    So it’s actually quite important
    how this will be long-term governed
    because of its social impact, economic impact.
    And in some ways, you know, by taking you out of the system,
    you’re almost in digital jail, right?
    And that can be, you know, if you’re blocked off
    from interacting with everybody that you know,
    then there’s mental impact issues,
    then there’s, you know, kind of just personal
    rights and freedom issues that are associated with that.
    So there’s a lot of things that are uncertain still.
    – You know, speaking of things that are uncertain,
    like what do you see, maybe you have an opinion on this,
    maybe you don’t, but like what is the path
    to this being like a global technology?
    Like how do we get to a point where eight billion people
    have this if in third world countries,
    they don’t have access to the power, the technology,
    the, you know, how does everybody get their hands
    on something like this?
    – So I mean, you know, just like you say 20 years ago,
very few people had a cell phone, right?
    I remember in, well, maybe 30 years ago, like 1994,
    I had my first cell phone and you know, it was, I think,
    you know, I was in China, it was like $10,000 for this phone
    and it was, you know, the status of when you go to dinner
    with your boss friends and you put it on the table
    and it’s like, oh, my phone’s nicer than yours.
    That was a big thing.
    You know, now everybody’s got it, right?
    I mean, it took a couple of decades,
    but the reality is that technology flows down
    and the cost of it, once you get economy of scale,
    we’re gonna get there.
    I mean, you look at, you know, devices like this
    are $1,000.
You know, I know the Vision Pro’s $3,000, it’s kind of,
I think, unreasonably expensive for what it delivers, right?
‘Cause you’ll see soon, I think, Chinese vendors
coming out with versions at half the price and, you know,
Apple will come out with something that’s half the price
in another year.
    So these products will get to a point where it is no more
    expensive than cell phones today.
Right, you can go to a third world country
and get a $50 cell phone that will go to all the websites
that yours does, you know, use all the apps
and, you know, make all the calls, right?
    They may not look as nice.
    They may not have an Apple brand on it.
    But, you know, I think it will be accessible
    for most of the world.
    And the difference is the phone that you have today
    is really just for communication
and going, you know, online to websites.
In the future, if you have glasses that are, you know,
$200, those glasses can be your car,
    it can be your plane, it can be your tutor,
    you know, it can be your assistant.
    It can be all the things that you need it to be
    in your life, right?
    It can be your entertainment system, you know.
    So I think we will essentially
    create the experience level of the wealthy class
for all classes, right?
    Even, you know, the people who can afford,
    you know, $50 phones, that changed kind of the whole
    economy in Africa, right?
    Because all they needed was a cell connection
    on the phone, they essentially now had digital banking
    and they could pay each other, they could see the weather,
they could go make trades, right?
And it saved them hours of walking
to get to a certain place, right?
I think, you know, the transformational capabilities
of an, you know, all-encompassing AI device on your head
that is also immersive, I think that capability
will, you know, take that impact an order of magnitude higher.
And then, you know, we talk about the haves and the have-nots.
It’s really more about haves and have-soons,
right?
And the question is really how soon.
    And for some markets it may take five years
    or some markets it may take 10.
    But, you know, given the speed of a reduction of costs
    on things, I don’t think that will be a lot more
    than 10 years for this to proliferate.
    – I used to be more optimistic about
    US and China relations, quite honestly.
    I even like, when I remember Mark Zuckerberg talking
    about Facebook and how he thought everyone in the world
    would be able to use it.
    And then, you know, there’s like internal memos
    that they’ve basically given up on that idea.
There’s no longer an idea of this being
like a global social network, because of the divide
between the US and China.
    in some ways we’ve gotten closer,
    in some ways we’ve grown apart.
    And so I do wonder, like I mean, you mentioned
    like the whole US cocoon and the China cocoon and all that,
    you know, I wonder how, like in reality,
    how that would actually play out.
    ‘Cause I see it being more likely now,
like based on my current worldview,
and I’m open to changing my view, that there’s
just a totally separate China system
versus the US system and the global system.
    – Well, I think ’cause you look at the internet
    as an example, I actually just came back last week
    from Taiwan and Shanghai.
    So I’ve been both the places that you’re talking about.
    And, you know, you need a VPN to access a few places, right?
    If you wanna access, you know, some of the social networks
    like Twitter or Google, you know, maybe New York Times,
    Washington Post, those things.
But the majority of the web is available,
if you look at the apps, the sites.
    – But you’re not supposed to do that though.
– You’re not supposed to.
    – No, no, no, no, but the reality is that
    there are equivalents locally for those capabilities, right?
    But if you wanted to, essentially for, you know,
    five dollars a year, you can get a VPN service
    for you to access that, right?
    But that’s, you know, less than 5%, you know,
    less than 5% of the internet is not available
    to Chinese people.
    So essentially that’s what I was saying.
    At some point, we’re gonna get to a point
    where there’ll be 5% of these worlds
    that will be not accessible
    because of political or whatever issues,
    but then 95% it will be, right?
And then most of the things that bring you value
are actually gonna be in that other 95%.
And in fact, you know, I was there when Zuckerberg came
and, you know, was trying to do all these things.
    The reality is that he was not actually blocked
    from entering China.
    He was just given the choice that
    if you’re gonna operate in China,
    you have to put your servers here,
    you have to follow the local laws.
    And he said, no, I don’t want to follow the local law, right?
    Just like if you look at TikTok,
    what does TikTok do, right?
    The US government said to TikTok,
you have to follow local laws, and they did that.
    And then they said, well,
    even though you’re following local laws,
    I still wanna ban you,
or I still wanna make you sell.
    So in some ways, I feel like that–
    – Yeah, I mean, it’s complicated, right?
    ‘Cause like there’s more to that.
    ‘Cause like putting your servers in China
    also could mean your proprietary data
    being sold off to someone.
    So it’s not, you know, it’s complicated.
– No, ’cause I operated a social network in China.
So I know nobody’s really going in there
    and taking your proprietary data.
    What they will do, what they will do is every week or so,
    you’ll get a list and say here are the 20 terms
    that are the sensitive terms for this week.
    You know, if people search for these terms,
    if people, you know, talk about it,
    you know, don’t give an answer, right?
    And then, I mean, yeah, honestly,
    whether it’s Google or if it’s Facebook,
    it would have been better for China and for the world
    if they actually followed that.
    Because then for the other, you know, 1.5 million
    other words that people were looking for,
    they would have got a more diverse answer base, right?
    So, you know, if you look at Bing,
    Bing’s available in China, why?
    Because they follow these rules.
    Yahoo’s available in China, why?
    Because they follow these rules.
    So it’s not that, it’s not, you know,
    I don’t want to sound like
    I’m defending the Chinese government.
– Yeah, I was like, let’s not go into that debate
    ’cause I’m definitely on the opposite side of all of that.
    – You know, I also work for a Taiwanese company.
So, I’m not biased against or towards anyone.
    And ’cause I’m a U.S. citizen, I was born in China,
    I’m mixed race between U.S. and China.
    You know, I work for a Taiwanese company
    and I’m part Jewish, okay?
    So, the point is, I guess,
    by having multiple perspectives,
it allows me to understand why people prioritize what they do.
Why is the government here prioritizing this?
Why is the government there prioritizing that?
And what do they mean by it?
    Instead of taking everything as,
oh, that’s an aggressive move to compete with me
    or to hurt me.
    You know, I think each country is doing the things
    that they are doing for the benefits
    that they see for their population, right?
    So, I get that.
    But I think as long as you see that perspective
    and instead of saying, oh, they’re doing that
    to try to take over democracy
    or they’re doing that to try to keep us down
    or whatever the perspective, you know,
    the narrative on the two sides are,
    nothing is that extreme, right?
    So, once you start to understand it and at least, you know,
    and like you said, if you know the language,
    it actually helps a lot.
    ‘Cause, you know, like two weeks ago,
    I was at this US-China kind of governmental dialogue
    and there was, you know, everybody was,
    at least all the US people there were super hawkish.
    They were like, how can we slow down China?
    How can we keep them from progressing in AI?
How can we cut off more chips from them?
    I mean, that was their whole thing, right?
    And I was like, you know, when you do this
    and you’re public about this, you know,
what China hears is, you’re trying to slow me down
and you’re trying to keep me from developing,
so what am I going to do?
    I’m going to create a parallel ecosystem of technology.
    I’m going to try to hoard as much technology as I can
    while I still have access.
And it becomes a self-fulfilling prophecy, right?
    Where we really should be thinking about
    how can we work more together?
Because half of the world’s AI researchers today
actually originated from China.
    A lot of people don’t realize this.
In fact, 38% of top AI researchers in the US
came from China, and only 37% of AI researchers
in the US were born in the US.
So there are actually more top AI researchers
in the US from China than born in the US.
    So, you know, there’s already that,
    that collaboration is already happening.
In fact, there was three times more collaborative AI research
between US and Chinese academic institutions
up to about two years ago.
And then it came down drastically,
but it’s still more than with any other country, right?
The next one was, I think, US with England,
which had one-third as many collaborative projects, right?
So, you know, I think if you talk to the academics,
the smartest people in this area
are in these two countries, right?
    But the politics is making it difficult
    for us to make progress.
    And then because of that, you know,
    everybody sees everybody else as an enemy.
And when you see the other person as an enemy and you say it out loud,
then people behave the way that you expect them to behave.
    So, again, I mean, we don’t want to get too political,
    but I just feel like we need to have a balanced perspective.
    I want to talk a little bit about, like,
    the crossover of AI and XR, right?
    You did kind of give one example of, like,
    you know, maybe AI tutors that look like
    you’re talking to a real person.
    So, you get that, like, one-on-one teacher experience
    with, like, an AI tutor.
    But where are some of the other overlaps of AI and XR?
    Like, why is the sort of convergence of these two tools
    such a big deal?
So, I mean, first of all,
everything that happens in XR
wouldn’t be possible without AI, right?
    Everything from hand tracking, eye tracking,
    you know, voice control, real timing,
    to, you know, world scanning, all of that is AI, right?
    To NPC, you know, being powered by AI models, right?
    So, AI is actually enabling XR to be even possible.
Now, if you flip it the other way,
    what is XR doing for AI?
    So, a few areas.
    One is in terms of a synthetic training data, right?
If you look at what Tesla’s doing,
they’re actually now doing more training
with virtual models in simulated situations
than they are with physical ones.
    And they can learn faster.
    They can get billions of miles
    instead of millions of miles.
    The other thing is you are very soon going to have
giant job displacement.
    We’re looking at, let’s say, 10%, 30%, 40% of people
    being out of work.
    There will not be a physical economy
    for these people to be moved into.
So, you have to create a parallel economy for them.
    And I think that’s where this metaverse ecosystem
    plays a huge part in terms of alleviating
    both the social, the economic,
    and also the mental health aspects
    of not having a work identity, right?
    Many of us identify ourselves with our work.
When you introduce yourself, it’s like, “Oh, what do you do?”
    That’s kind of the first question.
    Where in the future, maybe we’re gonna have
    managing a virtual world as my job,
    or I can create virtual environments.
    I could be a virtual tutor to somebody
    that’s a thousand miles away or whatever, right?
    This now creates that parallel economy
    to give you an added purpose, right?
    And it also allows you to be educated much faster.
As we mentioned earlier,
on the education aspect of it, essentially,
right now, probably the rarest asset out there
is a good teacher, right?
    ‘Cause the most talented people usually go
    to where they’re paid most and they’re not teachers.
Right, I mean, I’m not downplaying it.
I think there’s a lot of great teachers out there,
but there could be a lot more.
And it’d be great if we had these virtual classrooms
where people who really love teaching
and love their students could then create an income
    for themselves that would be able to help
    and support students around the world.
In fact, the epilogue of the book
    talks about kind of the educational scenario
    and how that plays out.
    So I don’t know if I mentioned,
    but the book is actually a debate
    between myself and my co-author.
    So I’m the optimist in the book
    and my co-author is a pessimist.
    And so every chapter essentially,
we go through the two potential aspects of,
    here’s all the good things
    that can happen from this technology.
    Here’s all the bad things.
    And a lot of people are like,
    “Why would you create this that’s confusing?”
    I’m like, “No, actually, the problem is,
    “most people, when they buy books,
    “they buy books that coincide with their existing beliefs.”
    And so when you read it,
    you actually don’t learn anything,
    you just feel better,
    ’cause you’re like, “Oh, see, that book says I was right.”
    But you never really listen to the other arguments.
    And we want people to see both sides of the arguments,
    because once they do that,
    then they actually feel stronger about their beliefs
    that they actually do have.
    And then we give them actionable activities
    that they can do to help bend it
    towards that positive outcome.
    ‘Cause it will never be as positive as the utopia outcome.
    It will never be as bad as the whole dystopian outcome.
    But if we don’t do anything,
    it actually will probably lean towards the dystopian,
    just like we’ve seen with a lot of technologies today.
    So, and in this one, because of how powerful it is,
    we actually do need to take action.
    And particularly people, I think in policy,
    people that are corporate leaders,
    people that are in the ALF,
    they have such an immense ability to impact
    where the world is going.
And to make sure that we have the UBI and the UBS safety nets,
so that when people do lose a job,
they’re not out on the streets causing havoc,
because that’s what happens when people lose their livelihood.
And we go back and we look at the Luddite movement, right?
    What happened there was,
    they actually, if you have a chance,
read the book, “Blood in the Machine.”
I used to think, oh, Luddites,
these guys are anti-technology.
    Actually, they were not anti-technology.
They just went through a 30-year period
where machines were kind of removing their jobs
    and the factory owners didn’t give them options,
    didn’t smooth out the transition,
    and just said, okay, you’re fired,
    and you have no severance, you have nothing.
    And they’re like, I have kids to feed, what can I do?
    And they just, over several decades, this happened,
    and then it led them to be like, okay, I gotta do something.
I gotta stop this, otherwise my kids are gonna starve, right?
    And so if there was a social safety net
    for that group of people who were being replaced,
    we could have prevented that whole thing.
    And because a lot of people died,
    a lot of damage to equipment,
    we probably slowed down progress,
    more than we could have or should have, right?
    So there’s a lot of things that if we learn from history,
    I think, you know, ’cause so many people say,
    oh, you can’t have this social welfare system.
    It’s unfair to the people who work and blah, blah, blah.
The reality is actually,
there’s another book you should read if you have the chance.
It’s called Utopia for Realists.
    And it talks about the whole kind of,
    the justification of the whole UBI model,
and how, over time, there have actually been dozens of studies
conducted around the world
that showed more value was actually created
when you provided these kinds of things
than you put into them.
    So it’s not a value depreciating type of an investment.
    It actually helped to create more stability.
    People learn more, they actually worked harder.
    You know, a lot of people say, oh, you give them stuff
and they’re gonna start being lazy,
    and they’re gonna drink, and they’re gonna smoke.
    And actually less money was put into drinking, smoking,
    more money was put into self-education,
    buying equipment, starting businesses,
    you know, those kind of things.
    So it’s a lot of what we’ve been told
    is actually not supported by the evidence.
    – Again, it’s all complicated.
I’m at the point where like,
    I think socialism is bad, I’m worried,
    ’cause there’s been a lot of examples of socialism
    going very, very wrong.
    And at the same time, like, yeah,
    maybe we do have to have something like UBI in the future,
    ’cause like, I don’t know,
    there won’t be enough jobs for people
    to like have food on the table.
    – You look at the examples of what happened today.
    You look at places like the Netherlands,
    you look at places like Finland,
    you know, Germany, you know,
    there’s a lot of, you know, democratic socialism, right?
    And if you look at the education levels there,
    you look at the employment levels,
    you look at the fulfillment levels,
    they’re all higher than the U.S.
Their income levels are a little lower,
    but they’re actually happier.
    They’re more educated, they’re more peaceful,
    there’s less crime.
    There’s a lot of other things that from a social level,
    you know, would you rather be 20% less wealthy,
    or would you rather live in a place
    where, you know, your kids can walk to school
    and you’re not afraid, and they have, you know,
    they’re feeling good about themselves,
    there’s less loneliness, there’s less, you know,
    mental health issues.
    I mean, I think there’s a trade-off
    that we should think about, right?
    And the reality is that what’s made it difficult
    in the past was that, you know,
    we’ve had a relatively slow increase in productivity.
    What we will see now with AI and robotics coming in,
    is that the productivity problem
    that has kept us from allowing
    these types of social services is going to,
    those obstacles are gonna go away, right?
    – Yeah, Elon Musk recently said that he hopes,
    instead of UBI that we have,
    what was it, like universal abundant income,
    or something like this.
– I think he said universal high income.
    – Oh yeah, yeah, okay, yeah, okay, yeah, exactly.
    So I agree with that, ’cause I think his concern is like,
if you do give people money,
and you combine that with like, okay, yeah,
then also virtual reality gets great,
    and then yeah, sure, hopefully people in virtual reality,
    do a lot of business with each other,
    but look, I’m a gamer.
    Like if you give me, like,
    if you give me Baldur’s Gate 3
    with like 10 times better graphics,
    and I can create worlds forever,
    I could definitely, you know,
    if I didn’t have professional goals
    or things that were going well for me,
    I could definitely see me just like,
    I’m just gonna sink into that.
    I’m just gonna, that’s my new world.
    And so I hope that we do use these tools more and more
    to like, get people things to do, not, yeah.
    – And then we actually, we’ve done studies on this,
in terms of, you know, is VR/AR
more addictive than traditional 2D gaming?
And what we did find was it’s about 2X more addictive
    from various aspects of addiction.
    So it is something that we need to be conscious of, right?
    But if you look at, I mean, I know,
    here’s an example that, you know, you may not like,
    but in China, they actually have this
    kind of child gaming law, right?
    Where children under 18 can only play three hours a week
    of network games.
And it’s tied to your national ID, yeah,
it’s tied to your national ID.
    I mean, I’m glad you agree, ’cause I’m a parent.
    And I wish there was these kind of laws in more places,
    ’cause I would rather have my kids be studying
    and, you know, even watching YouTube to learn things
    than to be playing a game that, you know,
maybe has minimal mental value.
    But, you know, there’s a diminishing level of return
    that you get from most gaming, right?
    So, and then particularly today’s gaming is really,
    it’s designed to be addictive.
    It’s designed to make you, you know,
    go to the next level and keep trying.
And, you know, I think there’s some things that it teaches,
but I think there’s a lot more value
that could be had with their time, you know.
    So, those are the kind of things that we found was that
    to make this addictive medium less addictive,
    you actually have to create some system-mandated policies.
    So, and these policies could be mandated by your parents.
It could be mandated by, let’s say, the government
in some cases, which we actually do have for laws,
you know, things like gun laws
or pornography laws.
    I mean, there’s rules that we already have in the world.
    I think having some rules where we know that, you know,
if these things are going to do harm to society,
    we should protect society in some way.
    – This has been fascinating.
But I do want to, you know, shout out your book again,
“Our Next Reality.”
    A lot of the stuff that we talked about on this episode,
    you go into a lot more detail.
    And you also have a lot of the counterarguments in here too,
    with your co-host or your co-author, Louis,
    sort of taking some of the opposing viewpoints.
    – Yeah, thanks again for inviting me.
    This has been a fun, fun chat between, you know,
    few folks that actually know the space a little bit.
    And if you want to follow me, you can go on Twitter.
    It’s @aygraylin or @alvingraylin on LinkedIn.
    – Well, awesome.
    I have a copy of the book.
    I really appreciate the time.
    This has been a fascinating discussion.
    – Thanks again for inviting me.
    It’s been fun.
    (upbeat music)

Episode 17: How will Extended Reality (XR) transform the global job market and our daily lives? Nathan Lands (https://x.com/NathanLands) and Matt Wolfe (https://x.com/mreflow) are joined by Alvin Graylin (https://x.com/AGraylin), the Global VP of Corporate Development for HTC and bestselling author of “Our Next Reality”.

    In this episode, Alvin Graylin delves into the critical intersections of AI and XR technology, the evolving metaverse ecosystem, and its potential to reshape job opportunities. He explores the role of advanced AR glasses in bridging economic disparities, the impact of a universal basic income on societies, and the future of personalized education in virtual environments. The discussion also touches on Alvin’s book “Our Next Reality” and the implications of AI-driven transformations in global industries.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Value of large network of interconnected worlds.
    • (03:47) Transition to 3D space for collaboration.
    • (06:54) Virtual space offers freedom to focus, connect.
    • (12:51) Future metaverse allows real interaction, faster learning.
    • (15:37) MIT and UCSD collaborate with remote robot workers.
    • (19:50) Internet infrastructure will shape future metaverse success.
    • (21:13) Anonymity impacts behavior in digital and physical worlds.
    • (24:30) Growing digital economy presents governance challenges.
    • (28:02) Digital banking, trade, and immersive AI impact.
    • (33:08) US-China rivalry in AI calls for cooperation.
    • (37:06) Virtual world jobs create new educational opportunities.
    • (40:01) Over decades, catalyst for social change discussed.
    • (43:37) VR/AR gaming more addictive than traditional 2D.
    • (44:35) Encourage progress, use policies to mitigate addiction.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • SEO 2.0: How to Trick Google and Rank AI Content ft. Greg Isenberg

    AI transcript
    – Where do you see SEO going in the world of AI?
    – If Warren Buffett was an internet marketer,
    he would be like all about SEO 2.0 right now.
    It’s all about how do you take a stranger
    and make them a raving fan?
    If you can do those things, you win at business.
    (upbeat music)
    – When all your marketing team does is put out fires,
    they burn out.
    But with HubSpot, they can achieve their best results
    without the stress.
    Tap into HubSpot’s collection of AI tools,
    breeze, to pinpoint leads, capture attention,
    and access all your data in one place.
    Keep your marketers cool
    and your campaign results hotter than ever.
    Visit hubspot.com/marketers to learn more.
    (upbeat music)
    – Hey, welcome to the Next Wave podcast.
    I’m Matt Wolf.
    I’m here with Nathan Lanz.
    And today we’ve got an amazing returning guest
in Greg Isenberg. He’s the host of the Startup Ideas podcast.
He’s got a holding company called Late Checkout,
    which does over $10 million in revenue per year.
    He runs the boring marketing company
    where he helps other companies do SEO.
    And in this episode, he’s gonna map out
    an exact blueprint that you can follow for your SEO
    in this new AI world that we’re going into.
    He breaks down the SEO 2.0 strategy.
    We also talk about all sorts of hot takes
    that he has on Twitter and dive into
    why he believes those hot takes are true.
    And you’re really gonna enjoy this episode.
    So let’s just go ahead and dig right in
with Greg Isenberg.
    Hey, Greg, welcome back to the show.
    It’s great to have you on.
    – I’m ready, I’m ready for it.
    – Awesome, well, let’s just get straight into it.
    So you have a couple tweets out there right now
    that Nathan and I both had some conversations around
    and got us talking and we thought,
    these are some fun things to bring Greg on
    and talk about on the podcast.
    And the first one being the concept of SEO 2.0.
    So maybe the best place to start is just like,
    what’s your definition of SEO 2.0?
    Where do you see SEO going in the world of AI?
    – Yeah, I mean, a lot of people call it programmatic SEO.
    I call it SEO 2.0.
    SEO 1.0 to me is basically, I mean, it’s old school SEO.
    It’s basically you go and create content pages.
    You get humans to do it, top things to do in New York City.
    You go and you know all the different restaurants
    and bars and you review it.
    And then you just sort of hope that, I guess,
    you get backlinks from other websites
    and then that’s gonna generate traffic.
    And that worked pretty well for a long period of time.
    And there’s quite literally dozens of billion dollar companies
    that rode the SEO 1.0 wave.
    Content companies, social networks like Reddit,
    these are companies that have great rank.
    And then about a year and a half, two years ago,
    I started thinking about, hey,
    if AI makes it easier to do two things.
    One is to get insights and data in a really efficient way.
    And two is creating content and creating web pages
    in a really efficient way.
Doesn’t that mean that you’ll be able to just create
like millions of pages and rank for them?
    And that’s really what I mean by SEO 2.0.
    I’ll tell you why I think it’s exciting to me
    and maybe to people listening,
    is the same way that there’s dozens of billion dollar companies
    and thousands of million dollar companies
    that have rode the wave of SEO 1.0,
    I believe the same thing is gonna happen for 2.0.
    The people that are gonna be able
    to efficiently create these web pages, get ranked,
    are gonna do really well.
    And I know that there’s someone listening,
    especially ’cause this is an AI podcast
    that’s thinking themselves, well, okay,
    but it doesn’t matter because perplexity is the new Google.
    And if perplexity is the new Google,
    then you know, you ranking doesn’t really matter
    on this search engine that no one’s gonna go to.
    And the reality of the situation is,
Google is not dead.
I know a lot of people like to think that Google is dead,
but Google is not dead.
    And there’s still huge opportunities to start ranking.
    – You know, the first thing that comes to mind
    when I think of like an SEO 2.0 concept
    and like AI just sort of being able
    to sort of mass make content,
    is that I feel like the internet might just get flooded
    with a lot of junk content.
    I mean, how much of SEO 1.0,
    especially in the early days,
    was people like writing articles, spinning the articles,
    and then it would just like change
    a certain percentage of words
    and they were doing it for the algorithm
    and they weren’t really doing it for the reader.
    That’s the first thing that comes to mind,
    is like, doesn’t it seem like that might just lead
    to a whole bunch of clutter and junk all over the internet
    that nobody actually wants to read?
    – Yeah, so the reality is,
    if you go and create AI content today,
    you’re gonna get penalized by Google.
    So Google has gotten really good
    at basically knowing if something is written by AI.
    So what you actually have to do
    is you kind of have to like deke out Google.
    You have to essentially use humans for 20%
    and AI for 80%.
    And that’s how you do it.
    And if you do it that way,
    there’s no real way for Google to know
    as long as the content is high quality.
    Like that’s what Google cares about.
    Google cares about high quality content.
    And if you can create high quality content
    and add some human vibes to it, you’ll still rank.
    And I’ll just give you a few examples
    of what does it mean to make it more human?
    Well, one is, if you put “this article has been written
    by Nathan Lanz or Matt Wolf,”
    Google will know that there’s a human behind it.
    They’ll go and search:
    What’s the bio of this person?
    Is this a reputable person?
    So what a lot of people are doing in the AI SEO space
    is just creating content
    that looks like ChatGPT created it.
    You know how, and I always notice it
    ’cause it’s always like the first word
    of every sentence is capitalized.
    And there’s like colons for days.
    It’s like no regular person
    would use that many colons.
    It’s funny, I’m actually doing a search right now
    and I searched the phrase “as an AI language model.”
    And like, if you just do a search for that term,
    it’s like insane how many results actually show up
    because so many people are so lazy
    that they literally leave as an AI language model
    in some of their content.
    Totally.
    And the other thing that I’m really into
    from an AI SEO 2.0 perspective
    is creating things like tools and calculators
    and embedding it into the content.
    So those are things that maybe you’re creating
    a piece of content around mortgages, let’s say,
    and you embed a mortgage calculator
    into that piece of content.
    These are things that helped you rank.
    And going back to AI, it’s like,
    well, you can just use Claude 3.5 Sonnet
    to code you up some of these tools and calculators,
    start embedding it into the content
    and then just start really just outranking
    your typical SEO 1.0 people.
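As a rough illustration of the kind of embeddable calculator being described, here is a minimal sketch of the standard fixed-rate mortgage payment formula. The function name and the example loan figures are illustrative assumptions, not something from the episode.

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed-rate amortization: M = P * r * (1 + r)**n / ((1 + r)**n - 1)."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of monthly payments
    if r == 0:
        return principal / n  # zero-interest edge case
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# Illustrative example: a $300,000 loan at 6% over 30 years
payment = round(monthly_payment(300_000, 0.06, 30), 2)
```

Wrapped in a small web form, a tool like this is the sort of interactive element the speakers suggest embedding in a content page.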
    So I think that like why have I been interested in this
    in the last year and a half and two years
    is just because I think that if it becomes
    so much easier to figure out which keywords to rank for
    and what the content needs to look like
    in order for it to be seen by Google,
    when you create these assets,
    these are assets that pay forever.
    It’s not like paid ads where if you spent $100,000
    on your paid ads this month, chances are next month,
    nothing’s gonna come from that.
    But when you invest in SEO, generally,
    it just is a compounding machine
    and it’ll increase 10, 20% a month.
    And I will say the downside of SEO
    is you don’t see results right away.
    So if you’re an instant gratification person,
    like you might not love it, but once it starts compounding,
    it becomes a beautiful thing
    and then you just have an asset that’s paying you.
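To put a number on that compounding claim, here is a quick back-of-the-envelope sketch; the starting traffic and growth rate are made-up figures for illustration, not data from the episode.

```python
def compounded_traffic(start_visits: float, monthly_growth: float, months: int) -> float:
    # Compound growth: each month's traffic builds on the previous month's total
    return start_visits * (1 + monthly_growth) ** months

# Illustrative: 1,000 visits/month growing 15% per month for a year
after_year = compounded_traffic(1000, 0.15, 12)
```

At 15% a month, traffic roughly 5x’s over a year, which is the “compounding machine” effect described above.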
    – So kind of, I guess kind of the idea would be like,
    if you’re just going out there and using AI
    to create content for the purpose of SEO,
    a lot of other companies are gonna kind of do
    the same thing, but you can actually differentiate
    by going and creating tools
    that will give people a reason to click to your site, right?
    I guess if you’re creating a whole bunch of content,
    a lot of people are just gonna use Perplexity
    or the new Google AI to search
    and the response is just gonna be right there
    on the homepage without needing to click in.
    But if they have a tool that they can use,
    they actually have a reason to click into your site
    other than just like a sort of summarized version
    that AI gives you.
    Is that kind of what you’re getting at with it?
    – Yeah, yeah, exactly.
    I think like the new content has evolved.
    It’s no longer just like static words
    and stuff like that, right?
    So you have to think about, okay,
    what is the future of content
    and how do I create the most high quality piece of content?
    And it just so happens that when you create tools
    and calculators and things like that,
    people end up staying on the website longer.
    And that’s a signal to Google that,
    hey, this is something that’s really valuable,
    therefore prioritize it.
    And other things, like video, also keep people on the site longer.
    Therefore, that’s good for Google.
    And then you might outrank.
    And these are all things that AI could help you create.
    I mean, you could have created these things.
    Someone listening to this is like,
    but I could have created this about eight years ago.
    Yes, but it would have been really expensive.
    Like you would have had to hire a dev team
    to create all these tools on a monthly basis.
    You’re spending $10,000 to $20,000 a month.
    You would have had to hire an SEO firm
    just for thinking about what are the SEO keywords
    I wanna go after.
    And then you’d have to hire a bunch of human beings
    to create content that would cost a lot of money.
    So all these things would add up.
    What I’m saying now is it’s now a fraction of the price
    to do it and it allows you to compete with the big boys
    which is pretty cool.
    I do wonder though.
    It feels like now that all that’s so easy to make
    that Google will have to lean more and more into authority.
    Like looking for sites that are like the CNNs
    or the Reddits or whatever, like Quora.
    The recent Google update,
    apparently they changed the algorithm
    where they’re highly focused on Reddit now.
    And people are now trolling Reddit
    and putting content on there.
    ‘Cause Google doesn’t know what to do.
    They’re like, well, trust Reddit, I guess, or Quora.
    There’s tons of real people there.
    And then now people are kind of gaming that.
    And then they probably have another issue too,
    like in terms of programmatic SEO.
    Have you seen like the Perplexity Pages that came out?
    – Yeah.
    – Yeah, so apparently Perplexity’s SEO traffic
    is like booming right now.
    And it’s all from like literally
    the AI is creating all the pages, right?
    And so now they have a situation where like,
    okay, Perplexity’s a new authority,
    but they’re using AI to create all the content.
    Yeah, maybe Google’s not dead,
    but maybe they end up serving up
    most of Perplexity’s content.
    – Yeah, my guess is,
    and this gets into like antitrust territory,
    but my guess is they are going to
    start suppressing Perplexity pages.
    I think that if you’re out there
    and you’re listening to this and you’re like, okay,
    you know, I would love thousands of visits
    coming to my website every single month.
    The thing to think about is,
    how do you create the most high quality content
    in the most efficient way?
    And if you’re able to create high quality content,
    the CNNs of the world and the
    trusted, great backlinks that you want
    are going to come after you just organically
    because you’ve created high quality content.
    And then of course, you have to do some things,
    like for example, not many people know this,
    but if you submit your startup to Product Hunt,
    Product Hunt is actually a highly trustworthy site.
    Like the number one reason you should submit
    to Product Hunt is for SEO.
    And so there’s a bunch of things like that
    that you can do that, you know,
    I’ve learned over the years that are worth doing too.
    (upbeat music)
    – We’ll be right back,
    but first I want to tell you about another great podcast
    you’re going to want to listen to.
    It’s called Science of Scaling, hosted by Mark Roberge,
    and it’s brought to you by the HubSpot Podcast Network,
    the audio destination for business professionals.
    Each week, host Mark Roberge,
    founding chief revenue officer at HubSpot,
    senior lecturer at Harvard Business School,
    and co-founder of Stage 2 Capital,
    sits down with the most successful sales leaders in tech
    to learn the secrets, strategies, and tactics
    to scaling your company’s growth.
    He recently did a great episode called,
    “How Do You Solve for Siloed Marketing and Sales?”
    And I personally learned a lot from it.
    You’re going to want to check out the podcast,
    listen to Science of Scaling wherever you get your podcasts.
    (upbeat music)
    Now, what do you think that like, in the future,
    people are even going to like click into the links anymore,
    though, because I feel like, obviously,
    even Google’s going in the direction now
    where if you ask it questions,
    it just puts the answer on the homepage,
    or people can go to ChatGPT or Claude
    and ask their question and, you know, get an answer.
    And I don’t know if I’m just like, I’m in an AI like tech bubble
    where that’s just sort of become my habit,
    or if the rest of the world is starting to do that as well.
    I don’t, I don’t totally know,
    but I feel like I find myself clicking
    into links less and less
    because the response is just right there for me.
    So like, do you think the incentive to even do SEO
    is going to go away over time?
    – I think that the searches that are going to do really well
    on like the perplexities of the world,
    AI search are going to be very information-based searches.
    So it’s like, what is the weather today in Miami?
    You know, Hurricane Beryl is coming.
    Is it going to land in Houston?
    And I think it’s going to do an incredible job
    at like going through the internet
    and just boom, here’s your answer.
    But there’s a huge segment of searches
    that aren’t as simple as,
    let me just pull in the information
    that almost reminds me a little bit about,
    you know, going old school.
    It’s like, you walk into a library, you know,
    it’s not like there’s just one book in the library.
    There’s like different books in the library
    or, you know, you turn on Netflix.
    It’s not like you go into one channel
    and then it, you know,
    just knows what you’re looking for automatically.
    Sometimes it’s nice to have some sort of interface
    that allows you to jump off to different places.
    So I think that the future of search
    is going to be a lot more like perplexity
    than it is currently Google.
    But I think the idea of I’m going to jump around
    to different places,
    a la library, a la Netflix,
    a la, you know, linear TV isn’t going anywhere.
    – Okay, so is that the game plan that people do now
    is like you throw out tons of AI content
    and then you go back with humans to improve
    the content that’s actually ranking
    and actually make it good?
    Is that the kind of the playbook that people are doing?
    – I mean, that’s a playbook
    that a lot of people who are doing well
    in the space are doing.
    So I think the first thing I always think about is like,
    okay, who’s my ideal customer?
    And then based on that, it’s okay.
    So I want to track these people
    and then what do I want these people to do next?
    And for you, it’s probably join the newsletter, I’m guessing.
    ’Cause that way you have their email,
    and once they’re on the newsletter,
    it’s like then you can send them the podcast, right?
    And then they, it’s all about how do you take a stranger
    and make them a raving fan?
    Like that’s the game of business in a nutshell.
    Everything else doesn’t matter, right?
    If you can do those things, then,
    and you can do it profitably, you win at business.
    Like that’s the most barbarian way
    of saying what business is, is that.
    So I feel like if Warren Buffett was an internet marketer,
    he would be like all about SEO 2.0 right now.
    – Well, I also feel like with AI,
    you have the option to almost take this reverse route
    because you can create so much content at scale with AI, right?
    So you could always go and create
    a ton of different content pieces
    on a bunch of different AI related topics.
    It all would make a good sort of lead into your opt-in,
    figure out which content drives the most leads in
    and then sort of double down on that content.
    I guess that would be another approach as well,
    sort of cast a wide net,
    figure out which piece of the net catches the most fish
    and then hone in on that, you know?
    – I think that’s right.
    – Well, the other thing I wanna talk about with SEO 2.0
    is the fact that large language models in AI
    could make SEO a lot more personalized, right?
    I think you mentioned that in your SEO 2.0 tweet
    is that we’re gonna see SEO and the search results
    get a lot more personalized and tailored to the individuals.
    So how do you see people sort of leaning into that with SEO?
    – So yeah, that’s definitely where the world is going.
    I mean, it has been going there even pre-AI,
    like in, you know, Google started collecting data around,
    okay, we know your location is in Miami.
    Therefore, if you write weather,
    it’s gonna bring up weather Miami.
    It’s not gonna bring up weather Kyoto.
    So what AI has done is it has just made that,
    we just put that on steroids.
    So what the next evolution of that is agents
    basically coming to me with,
    hey, you need to know this piece of data for this reason.
    So the way search works today is it’s very,
    what I would call lean forward.
    You have to lean forward to get data.
    And I think probably where we’re gonna go
    in the next few years,
    it’s gonna be way more lean back.
    So instead of me searching for what’s a great,
    you know, sushi restaurant in Miami Beach,
    it’s going to know that it’s 6 p.m. my time.
    I haven’t made any dinner reservations.
    I live in Miami Beach.
    I, it knows I love sushi
    and I usually have sushi on Tuesday nights
    and it hasn’t done that.
    And I haven’t, you know,
    it just, like, weirdly knows me so well.
    And it’s like, hey, you know, there’s this new
    sushi restaurant that just opened.
    I think you’re gonna like it.
    Say yes if you want me to make a reservation for two.
    Oh, by the way, your friend, Jamie’s in town,
    do you want me to add them to the calendar invite?
    And that’s like literally the direction of where we’re going.
    – Yeah, I could see that.
    But I also think that, you know, that description,
    I think to some of us,
    like the early adopter techie types,
    that sounds really exciting.
    But then to a lot of people,
    that also sounds like more of a dystopian future
    where all these companies have so much data
    and, you know, information on us.
    How do you see these companies sort of balancing
    the data privacy concerns,
    trying to feed people exactly what they want
    when they want it?
    – The reality around data and privacy is
    people are okay giving away their data and privacy
    if it makes their lives way more convenient.
    So if the product ends up working
    and it makes their lives way more convenient,
    then they’re clicking the terms of service
    and they’re not reading and scrolling
    with the terms of services
    and they’re just going and making their lives convenient.
    Why?
    Because life for everyone is very difficult,
    stressful, overwhelming,
    no matter how wealthy you are from the top to the bottom.
    And I just believe, actually,
    that a lot of these products are going in a direction
    where they’re going to make lives way more convenient.
    So of course they’ll be, you know, let’s put it this way,
    Google is way bigger than DuckDuckGo, you know?
    – Right.
    – That doesn’t mean DuckDuckGo isn’t a great business
    and a great product and there’s a subset of people
    who care about privacy first,
    but there’s a greater amount of people
    that are just like, I’ve got a baby,
    I’m holding a baby on this hand,
    my boss is calling me on the other hand,
    I need to Google search this
    and I’m not going to DuckDuckGo to do it.
    – Yeah, I always feel like I care about privacy
    but as soon as you give me some kind of convenience
    that makes my life easier, I’m like, yeah, you know.
    – Yeah.
    – Give it to me.
    – Yeah, and I think, you know,
    where the world is probably going to go
    and AI plays a role into this too is AI is going to scrape
    a lot of data that exists on the internet.
    So it’s going to know probably my address
    that I live in Miami Beach.
    And maybe what happens is it finds out
    that a friend of mine wants to send me an invitation
    via mail to a wedding, let’s say.
    And it’s just going to be like,
    hey, do you live at 123 Main Street Miami Beach?
    Yes or no.
    And so I think the future of a lot of these products
    is what I call contextual onboarding
    versus upfront onboarding.
    Upfront onboarding is when you ask for a million things
    upfront, your phone number, your, you know, where you live,
    all these things to, and then you say yes to, you know,
    each of those things.
    I think that with respect to search and other products too,
    but let’s just talk about search for a second.
    As it gets to know you, it’s convenience.
    So it’s going to be like, hey,
    do you want us to do this thing for you?
    All you have to do is press yes.
    And we have this data because it’s scraped.
    – Yeah, I mean, at the end of the day,
    all of the companies could get access to the data
    if they wanted to.
    – I think that whether you believe in SEO 2.0 or not,
    play with the tools, like go and create AI content,
    human layer it, add a calculator to it, add a tool,
    like just go and like try things out
    and just see how you feel with it.
    And through that, you’re going to learn in your niche,
    oh my God, you know, this type of calculator,
    this kind of tool works.
    And to me, like I invest in things
    when I can see like a crazy amount of upside return.
    And this is something where, to me,
    it’s not going to take you that long to learn
    this sort of stuff and to test it,
    but the return on investment, if it works is so large.
    – So I want to talk about some of your spicy takes.
    You had another tweet where you gave like 25
    different spicy takes and Nathan and I actually talked
    about it and we pretty much agree with all
    of your spicy takes, but I thought there was some fun ones
    that we can dive into because they’re obviously
    more nuanced than, you know, the single sentence
    for each take that you put.
    – By the way, for context, whenever I do
    those spicy takes things, I like drink a massive
    like cold brew or something or like an energy drink
    and just like bang them out in like 10 minutes,
    press tweet and then just like go for a walk
    and then see what happens in like 30 minutes.
    – And see how many people disagree
    or agree with what you just said.
    – Yeah.
    – That’s awesome.
    Well, you had one of the spicy takes
    and I actually commented on your post about it was,
    you mentioned like tools like Otter that are going to be
    like a note-taking assistant are probably going to flop.
    And I’m curious why you think that.
    And the reason I ask is because I actually use Otter,
    I actually agree that it’s going to flop,
    but I think it’s for a different reason
    than you mentioned in your tweet.
    So I’m curious why you think so.
    – So, and our team, by the way, we have like 120 people
    on our team and like we use Otter in all these meetings
    and whenever I’m in there, I’m always like, it’s me,
    it’s like, I like stare down the Otter, you know,
    I’m kind of like, you don’t like me and I don’t like you.
    And I think a lot of people have that feeling
    with respect to an AI note-taking app.
    It gives the convenience to the person
    who wants to write notes, of course.
    So you can be like more present in the meeting.
    Like that’s the, that’s who people who like tools like that,
    that’s what they say to me.
    They’re like, yeah, like I just want to be present
    in the meeting, therefore this is going to help me,
    you know, be present.
    And it’s like, homie, can’t you just do two things at once?
    Like, you know, I remember being in the third grade
    taking notes, you know, in a classroom, you know,
    I think what people don’t like about AI note-taking tools,
    and this, I do believe this is 90 plus percent of people,
    is they can’t be free.
    They can’t, you know, maybe they want to swear
    and they can’t, they don’t want to swear.
    And I think it hurts more productivity
    than it does help productivity.
    That’s my spicy take with it.
    And I also just think that, yeah,
    the only person that enjoys the Otter
    is the person who is supposed to be taking notes.
    And I also think, yeah, I’m just, I don’t,
    I think go take notes, man, just go take some notes.
    – I mean, at the end of the day too,
    if you’re sitting there taking notes,
    that’s sort of turning it into a multimodal
    sort of engagement, right?
    You’re listening and you’re also writing,
    and because you’re doing both,
    you’re going to help lock a lot of those concepts in.
    – My take is there’s going to be a backlash
    on this sort of stuff.
    I feel like people now are kind of just like
    politely smiling and they’re like, that’s cute.
    But I think in a year or two,
    people are going to be like, hey, hey, do you mind just,
    I mean, you’re starting to see it a little bit,
    but you know, I’m starting to see like,
    hey, do you mind just like removing that recorder
    from that meeting?
    I think you’re going to see instances of that, 50X.
    – I’ll tell you, so here’s my sort of killer use
    that I love using Otter for.
    I actually go to like a lot of conferences
    to make content around them for my YouTube channel.
    So, you know, I was at Cisco Live
    and I was at Augmented World Expo
    and Google I/O, Microsoft Build, all these events.
    And a lot of times I’m sitting in these conferences
    while they’re speaking and it’s hard to take
    all the notes on everything they’re talking about.
    So I usually open up Otter on my phone,
    just set it next to me on my chair
    and let Otter actually record the entire, you know,
    keynote or panel or whatever I’m sitting and listening to.
    And then I’ll take like the summary and the transcript
    and I’ll pull it over into Claude or ChatGPT
    and say, hey, I need to make a video about this.
    What are some of the talking points from this presentation
    that I should bring up in this video?
    And it will actually help me sort of like
    outline a rough draft of a presentation
    about the keynote that I just saw.
    And I found that to be really helpful.
    Saying that, I also agree,
    it’s probably not going to be around for a long time
    and it’s going to flop, but I think the reason is more
    because I think it’s just going to be a feature
    instead of an actual app in a lot of these devices.
    I think we’re going to see Google just roll it into Android
    as part of the operating system.
    I think we’re going to see Apple intelligence
    just roll that kind of feature in.
    And I can’t see companies like Otter existing
    when it’s just like a native built-in feature
    of these devices.
    – So, well, where I’ll agree with you is in the,
    what I’ll call like broadcast setting
    if like you’re in a conference
    and there’s tons of people around you
    in groups of larger than 10,
    something like Otter is super important.
    Like I can see it being super valuable, super important.
    But within the context of I’m a company
    and I’ve got all these meetings
    and a lot of the meetings are just
    hanging out or talking about one issue.
    And they’re smaller.
    Like maybe the average meeting is four people.
    That’s where I think Otter is going to really flop.
    – Yeah, yeah.
    I can see that take too.
    I mean, I don’t use it on like Zoom calls and things like that.
    So it’s not actually one of the use cases
    I’ve actually used it for.
    So I can definitely see that take.
    – So you said AI is taking jobs,
    not just making people more productive.
    Companies want profits.
    The idea that AI frees you to focus on your passion
    is pure fantasy for most workers.
    You know, I kind of agree with this.
    I feel it’s a situation where everyone in AI right now
    they can’t say that though.
    There’s an odd thing there, right?
    Where you just, you feel like you can’t say
    the truth about the thing.
    ‘Cause it leads to like, okay, well, you know,
    does this change capitalism?
    Like I’m a capitalist.
    Like does capitalism make sense in the future?
    Does our current government and economy
    make sense in the future where, yeah,
    you can use AI for most things
    and maybe everyone doesn’t have to work the same way.
    Is that kind of like the gist of,
    is that how you see it as well or?
    – Sometimes I’m scrolling Twitter
    and I’m seeing like these tweets where it’s like,
    AI is gonna make, you know, there’s gonna be more jobs.
    You know, it’s making more jobs.
    And I’m like, what are you smoking then?
    Like honestly, I feel like I’m living
    on a completely different planet.
    I’m like, okay, there’s this new paradigm shift
    that has basically made it 100 times easier
    to do basically anything from creative to software,
    to marketing, to all these different things.
    So that’s happening.
    Everything’s become easier.
    At the same time, companies are looking
    to become more profitable and especially larger companies,
    especially VC-backed companies
    who are trying to exit or get acquired
    and publicly traded companies
    whose only job is to increase shareholder value.
    Once they get comfortable to the point
    where people are using these tools
    and they’re becoming way more productive,
    they’re not gonna be like,
    oh, hey, let’s go hire another thousand people.
    So yeah, I just think that people
    are gonna be way more productive.
    And if people are more productive,
    there’s going to be less of those types of jobs.
    – So what do you think, just like very theoretical, right?
    What do you think the job market looks like
    in several years from now?
    – There’s a lot less engineers.
    There’s a lot less designers.
    I think there’s a lot less marketers.
    I think the top 1% ends up getting paid 10 times.
    I think the middle gets stripped out.
    I think that’s what happens.
    I think the middle of these white collar jobs
    becomes stripped out.
    You’re either junior and you’re like very low paid
    or you’re senior and you’re very high paid.
    You know, I wish I wasn’t saying this.
    I agree with you.
    Yeah, I think we both agree.
    – It’s hard to say, right?
    Cause we don’t know the answer.
    Like we don’t know the answer for those people.
    Like what do they do and how do they earn a living?
    And yeah, sure the AI tools will make their life better,
    but they also still need food on the table, right?
    – Right, exactly.
    – So do you think this, I mean, obviously,
    none of us know where it’s all headed,
    but do you think this leads to like a UBI scenario?
    Do you think like, where do your predictions lie
    if you had to make predictions?
    – I think it’s important to just share your opinion
    and it’s okay if your opinion changes over time
    in real time.
    And as you learn about a particular topic,
    and I think that if more people do that,
    like the better society is,
    as long as you can be open-minded about, okay,
    the data has changed.
    Therefore my stance on something has changed.
    As far as like do, you know,
    in a situation where millions of people are out of work,
    is there some sort of UBI?
    I mean, I don’t wanna speak for Sam Altman,
    but like, why is he creating Worldcoin
    if he isn’t thinking about that?
    That UBI is gonna be a more important part of,
    you know, as a mechanism to pay certain people.
    You know, what do I think?
    I stop there.
    Like, I don’t know how, you know, I’m not a politician.
    I’m not an economist.
    I’m a technologist that knows
    where the technology is going.
    Where it goes from there?
    Like, I’m not, it’s above my pay grade.
    Well, yeah, you have to be like a philosopher,
    psychologist, everything, right?
    Like, it’s like, how do humans, you know,
    interact or how humans live if they don’t have work?
    And how does that work?
    And how do you feel motivated?
    And especially if other people are getting 10x, right?
    And they’re living this amazing new life,
    and then you’re kind of like living
    in your virtual reality world or whatever.
    Yeah, I mean, I don’t think there’s gonna be UBI.
    I think people will find different types of work
    and be productive to society.
    What that work will look like,
    I think will be different, very different
    than what it is today.
    I think that the wealth disparity, unfortunately, will grow.
    And that’s why I think it’s important for everyone
    to just stay ahead.
    That’s why this podcast is great, right?
    Like, learn, get your hands dirty,
    and you’ll never have to worry if that’s the case
    because you’ll always be one step ahead of other people.
    People don’t put in the work.
    People just don’t put in the work, right?
    Even when the work gets easier.
    They’d much rather watch episode 124 of Love Island
    than episode 21.
    Is that tonight?
    The next wave.
    I think that’s true.
    I think it, you know, it’s people wanna,
    you know, we talked about this before.
    It’s like, life is hard, right?
    And it’s like, if you wanna numb,
    you wanna numb your brain sometimes,
    but sometimes like,
    sometimes you gotta keep your brain stimulated
    in order for you to stay ahead.
    – Yeah, I mean, along that same lines,
    I’m gonna actually jump ahead
    to one of your other hot takes
    because it’s very relevant to what you just said.
    You said more young millionaires will get minted
    over the next 10 years than at any point in history.
    They’ll watch YouTube, listen to podcasts,
    and learn to take ideas into reality thanks to AI.
    It’ll make many people jealous.
    So, I mean, there, you just kind of mentioned
    there’s gonna be a growing disparity
    between I guess, you know, the haves and have nots.
    The people with a lot of money
    versus people with not a lot of money.
    But also, to sort of counteract that point,
    it also sounds like you’re saying
    there’s no easier time in history to make money.
    – Absolutely.
    I mean, you’re connected to the internet.
    And so you have access to a library
    of the smartest people on the planet, basically,
    in your pocket.
    So, and then when you wanna create something,
    now you’ve got, you know, AI tools and no-code tools
    that you can actually just go and press some buttons,
    not have to have, you know, gone to Stanford
    and studied computer science with, you know,
    all these well-known people.
    And you can just put stuff on the internet
    and then literally billions of people
    have their credit cards on the internet
    and you can sell to anyone.
    It used to be that, you know, my dad had a store
    and he had a store in this, you know, little area.
    And if more people moved to the area,
    there’d be more people and if people left,
    then less people would buy at the store, you know?
    Now, the store is everyone
    and you don’t need to go and spend two years
    building a building.
    You can spend two minutes to build a building.
    So I think there’s gonna be a lot of low-quality
    and medium-quality startups, content, communities
    that are gonna exist.
    And it’s just gonna feel like,
    it’s just like how in marketing, like if you go,
    I saw like a video of like 1919 in New York City
    and there’s like no billboards and there’s like no marketing.
    And then if you go in 2024, New York, like of course,
    there’s like your eyes constantly looking
    at a million different marketing, you know?
    I think that’s gonna, that’s the world we’re headed in
    in terms of the internet.
    It’s gonna look a lot less like 1919,
a lot more like 2024 Times Square.
And if you wanna stand out, you know,
that's where the young millionaires are gonna get minted.
And why are they gonna be young?
It's because the young people are gonna be native
to the tools to create this stuff.
    – Yeah, one thing I feel like, you know,
    even though this technology is gonna create a,
maybe a bigger divide in terms of the haves and have-nots,
    I do feel like it’s gonna equalize things in a way
    where you don’t have to be the smartest person
    to use AI and make something cool.
    So like a lot of times I feel like it’s gonna matter more
    about like how motivated is the person, right?
    Yeah, they didn’t go to Stanford,
    they didn’t have the highest GPA,
    but they can, if they’re motivated,
    they’ll still be able to use AI tools
    and build a company and build stuff, right?
    So I mean, I find that personally inspiring.
    And like, you know, if people really wanna make it,
    AI is gonna help them make it more than ever before.
    – Absolutely, absolutely.
    I think if you’re motivated and you’re ambitious,
    you’re gonna do well.
    You have nothing to worry about.
    There’s one last topic I wanted to talk about
    before we let you go, and that’s the climate issue.
    ‘Cause you mentioned that there’s going to be
    a major climate backlash against AI,
    it’s gonna become the current thing to talk about.
    And Nathan and I, even before this call,
    we’re actually talking about that.
    We both agree with you on that.
    I think that’s gonna be the big narrative.
    But when you made that hot take, like,
    what was your perspective?
    Where were you coming from on it when you wrote it?
    – Every major wave of technology, computer,
    computers, the internet, social, mobile, crypto,
    there’s been a point in time where those technologies
    get so big that there becomes a climate backlash.
If you were to plot out, you know,
the number of articles, let's say,
that mention a climate backlash,
every peak has gotten higher.
    So we saw this, I think, in 2021 with crypto,
    there started to become a huge backlash
    that, and people started saying, you know,
    if we shut down Bitcoin, like we can power,
    like, you know, all of Ukraine
    and all the, you know, different places, right?
    So, and I think that someone is going to
    start telling the story that these technologies
    take compute and compute isn’t free.
    It requires electricity, requires the actual GPUs,
    and that affects the environment.
    So for me, it’s not a matter of if,
    it’s a matter of when.
    – Yeah.
    – And, you know, from an opportunity perspective,
    like, there’s probably opportunities to create more,
    like, climate-friendly, clean, carbon-neutral AI technologies.
    So if, you know, we were talking about DuckDuckGo,
    DuckDuckGo versus Google,
    I think that if, you know, that was the,
    you know, riding the privacy-first trend,
    I think there’s also a climate-first trend.
    So I think that there’s an opportunity to create,
like, what does the green version of Perplexity look like?
    – Yeah, it feels like, unfortunately, this makes,
    you know, AI is going to get political for this reason,
    which I hate, you know, because there’s obviously a divide
    of, like, okay, are we going to stay only on Earth,
    and are we going to, you know, solve climate issues
    by kind of being more efficient and having less people
    and less waste, or are we going to solve climate problems
    by using technology to solve those problems
    and going to other planets, right?
    And so a lot of the techno-optimists, like, you know,
    we believe, like, that obviously AI, you know,
    if it takes more energy, that’s okay.
    Like, AI is gonna help us solve the problems,
    it’s gonna help build the technologies
    we don’t even know exist yet
    that are gonna solve the problems,
    and you can’t get out of it just by having less,
    you know, plastic straws or whatever.
    But obviously, there’s a huge divide there,
    and it gets really political, and so yeah,
    I hate that, unfortunately, AI is probably gonna get
    pretty political in the next year or so,
    and I think it will.
    – Yeah, yeah, I’ve been hearing a lot of this stuff
    about, like, you know, the climate issues
    and how much power all of this uses,
    and all of that kind of stuff,
    but I’ve also been seeing both sides of the narrative.
    I’ve also been seeing a lot of articles and reports
    and stuff that say it’s way overblown, too.
    I’ve seen things saying that in 2020,
    when everybody was in their homes
    and, like, everybody started playing video games more often,
    that used as much compute as what AI is using right now,
    right, and then there’s also the sort of narrative
    of, like, NVIDIA and Qualcomm
    and all these companies that are making the processing,
    their number one goal, anytime they give a keynote,
    is to talk about how they’re trying to make it more efficient
    and bring down the energy usage
and get more and more powerful compute,
but with less energy use.
    So I do think a lot of these problems will be solved,
    but I also think this is gonna be, like, the narrative,
    the popular narrative against AI,
    but I also think a lot of this,
    there’s solutions being worked on right now.
– It is gonna get more efficient,
the graphics cards are gonna get more efficient,
    but there’s a reason that, like, Elon Musk
    is, like, doubling down on building, you know,
    factories and stuff in Texas, right?
    Like, aligning himself with, like,
    people who are, like, more pro-energy
    and, like, have more energy, you know, consumption,
    is because that’s probably where things are gonna go.
    Like, yeah, in the current state,
    yeah, it doesn’t use too much energy,
    but when we get to AGI, (laughs)
    ASI, yeah, you’re probably talking about
    massive amounts of energy consumption,
    probably the likes that humanity’s never seen before.
    – So this is really interesting.
I asked, I prompted Perplexity and I said,
"What's the ecological impact of a ChatGPT prompt?"
I was gonna say Perplexity prompt,
'cause I was like, I want something unbiased, right?
    (laughs)
    So, it says this.
The ecological impact of a single ChatGPT prompt
    is relatively small, but it can add up significantly
    when considering the massive scale of usage.
    Here’s the breakdown.
Each ChatGPT query is estimated to produce
    approximately 4.32 grams of CO2,
    but here’s where it gets really interesting.
I didn't realize, but, I mean, silly me,
that there's water consumption with this.
So, ChatGPT's water usage is particularly noteworthy.
    For every 20 to 50 queries,
    the system consumes about 500 milliliters of water,
    equivalent to a standard bottle of water.
    This water is primarily used for cooling the data centers
that power the AI model, so really interesting.
    And then, there’s more, and then there’s energy usage.
So, the energy consumption of ChatGPT is considerable.
    The system runs on an estimated 30,000 GPUs,
    which requires significant power to operate.
    So, to put this in perspective,
15 queries are equivalent to watching one hour of video.
    16 queries consume as much energy as boiling one kettle.
139 queries use as much energy
as washing one load of laundry.
And if current growth trends continue, by 2027,
ChatGPT's electricity consumption
    could rival entire nations like Sweden, Argentina,
    and the Netherlands.
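As a rough sanity check, the per-query figures quoted above can be turned into a back-of-envelope calculator. This is a minimal sketch that takes the episode's estimates at face value (4.32 g of CO2 per query, 500 mL of water per 20 to 50 queries); these constants are quoted numbers, not verified measurements.

```python
# Back-of-envelope footprint calculator using the estimates quoted in the episode.
# NOTE: these constants are the episode's figures, not verified measurements.
CO2_GRAMS_PER_QUERY = 4.32          # grams of CO2 per ChatGPT query (quoted)
WATER_ML_PER_QUERY_LOW = 500 / 50   # 500 mL per 50 queries -> 10 mL per query
WATER_ML_PER_QUERY_HIGH = 500 / 20  # 500 mL per 20 queries -> 25 mL per query

def footprint(queries: int) -> tuple[float, float, float]:
    """Return (kg of CO2, liters of water low estimate, liters high estimate)."""
    co2_kg = queries * CO2_GRAMS_PER_QUERY / 1000
    water_low_l = queries * WATER_ML_PER_QUERY_LOW / 1000
    water_high_l = queries * WATER_ML_PER_QUERY_HIGH / 1000
    return co2_kg, water_low_l, water_high_l

co2, w_low, w_high = footprint(1_000_000)
print(f"1M queries ~ {co2:,.0f} kg CO2 and {w_low:,.0f}-{w_high:,.0f} L of water")
```

The point of the exercise is the one made in the episode: each individual query is tiny, but the totals become noticeable at the scale of millions of queries a day.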
    – Well, I think the important part of this whole conversation
    is that people have conversations about it, right?
    Because I think when it comes to AI right now,
the pro-AI people want to pretend
like these negatives don't exist.
And the anti-AI people want to act like
solutions can't exist.
    And I think the important thing is that we,
    I don’t feel like AI should be this divisive topic.
    I feel like it should be a topic
    that we’re all talking about and collaborating on
    and figuring out solutions together,
both on the pro-AI and anti-AI side.
    How do we find this middle ground
    that’s gonna make everybody happy?
    That’s going to solve a lot of these problems.
    And I think one of the frustrations
    that I’ve had recently just being so deep in the AI space
    is it’s all so binary.
    It seems like everybody’s either,
    I’m totally against AI or I’m totally for AI,
    but I believe there’s a spectrum there
    and you can land anywhere on that spectrum.
    Well, I think that’s just the bigger issue
    with human beings, especially now is like,
    you have to be either this side or that side
    and there’s no in between, there’s no gray,
    like gray is dead now.
    And that’s just, it’s just being applied to AI.
    So let’s leave it on a positive note.
    (laughing)
    Go create some stuff.
    You know what I mean?
    You are listening to this, you understand the tools,
    you just gotta go build some stuff
    and go and create stuff that people are gonna love
that are gonna put a smile on their faces.
    It’s gonna add value to their lives,
    make them happier and healthier
    and contribute to making the world a better
    and happier place and you can do that now.
    Well, on that note, you make a lot of great content.
Everybody needs to go follow Greg on Twitter/X.
    Great content there.
    What’s the name of your podcast again?
– The Startup Ideas Podcast.
– The Startup Ideas Podcast, go check that out.
    Well, thanks so much for joining us.
    This has been an amazing conversation.
I'm sure it won't be the last time we have you on this show
    and I appreciate you joining us.
    – Thanks for having me.
    (upbeat music)

    Episode 16: How is AI transforming the future of SEO and job markets? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) are joined by innovator Greg Isenberg (https://x.com/gregisenberg), founder of Late Checkout and Boring Marketer. Greg hosts “The Startup Ideas Podcast”.

    In this episode, the trio delves into the shift from traditional SEO practices to AI-powered SEO 2.0. Greg explains the role of AI in creating personalized content, the pros and cons of AI note-taking tools, and the potential impacts of AI and automation on job markets. They also touch on the environmental and political implications of AI advancements and explore strategies for generating high-quality, interactive content that ranks well on search engines.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) AI to revolutionize content creation and ranking.
    • (06:30) Maximizing SEO through embedding tools and calculators.
    • (08:13) AI for SEO needs differentiation and interactivity.
    • (11:21) Create high-quality content for organic website traffic.
    • (14:56) Turning a stranger into a fan – the business game.
    • (20:18) AI will scrape internet data for convenience.
    • (21:30) Explore SEO 2.0, create AI content.
    • (25:32) Using Otter app for easy conference note-taking.
    • (28:28) There will be less certain types of jobs.
    • (30:54) Share evolving opinions openly and be open-minded.
    • (34:05) Access to global knowledge and market simplified.
    • (40:08) Climate, AI power usage, tech solutions narrative.
    • (43:13) AI conversation should unite, not divide.
    • (44:12) Humans polarized, but AI offers positivity.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • SB 1047: The Bill That Could Hinder AI Progress ft. Anjney Midha

    AI transcript
    Can you break down what is SB 1047? SB 1047 is a proposed law, but this one
    is the most harmful to little tech.
    When all your marketing team does is put out fires, they burn out. But with HubSpot,
    they can achieve their best results without the stress. Tap into HubSpot’s collection of AI tools,
    breeze, to pinpoint leads, capture attention, and access all your data in one place.
    Keep your marketers cool and your campaign results hotter than ever. Visit hubspot.com/marketers to
    learn more. Hey, welcome to the Next Wave Podcast. I’m Matt Wolf. I’m here with Nathan Lanz. And
today we've got a really important episode. Today, we're talking with Anjney Midha and he's a general
partner over at a16z. He was on the ground floor for companies like Midjourney, and Luma, and
    Anthropic, and some of the biggest AI companies in the world. And today, his message is really
    important. He’s talking about legislation that they’re trying to pass right now in California
    that could kill AI. And this bill is called SB 1047, and it will really, really hinder AI
    progress if it gets passed. And in this conversation, we’re going to talk to him about what this bill
    is, why you should care about this bill, some better options for AI regulation, and what we
    can all do about it to make sure that the right regulations get passed and the wrong regulations
    do not. So this is a fascinating conversation with a lot to learn. So let’s jump right in with
Anjney Midha. Welcome to the show. Anjney, thanks so much for joining us. We're really excited
    to talk to you about AI and AI regulation and AI investments. So thanks for joining us today and
    how you doing? I’m doing great. Thanks for having me. Why don’t we start by getting a little bit of
background? Can you break down what is SB 1047, like in layman's terms? What do we need to know
    about it? Yeah, so look, basically SB 1047 is a proposed law. It’s a California state bill
    that’s making its way through the California legislature right now. That’s part of a much
    broader wave of 750 or so new pieces of AI legislation that have been proposed in the US
    since Biden signed the AI executive order late last year. But this one is the most
    harmful to Little Tech. Little Tech is startups, open source researchers, academia. Unlike many
of those well-intentioned bills, which say, "Hey folks, AI is a powerful new useful technology,
    and like many useful technologies like electricity or the internet, which has good and bad uses,
    we should be thoughtful about how this technology is used. We should punish bad actors for doing
    bad things with that neutral technology.” Unlike that rational approach, this bill is drafted to
    attack underlying model researchers, scientists, and developers. And among other things, it’s trying
    to place civil and criminal liabilities on developers of AI models, as opposed to focusing
    on the malicious users of those models. So as proposed by this bill, overseeing these new laws
    would be a frontier model division, which is kind of like a new DMV they want to form,
    a new regulatory agency that would have the power to propose requirements on startups,
    on researchers, on academia that would dictate if a researcher or an engineer could ultimately
    be thrown in jail or not. Now, it’s so crazy that when this bill was proposed amongst tons and tons
    of other bills, most people read it and said, “Okay, crazy bills like this get proposed all
    the time. This is never going to get anywhere.” But the California Senate passed SB 1047 in May,
    32 to 1. And so this bill is now slated for a California Assembly vote in August,
    less than 60 days away. If passed, we are one signature from Gavin Newsom away from cementing
    this into California law. And so this is an incredibly dangerous piece of well-intentioned,
    but incredibly misguided regulation that is trying to make AI safer by focusing on the underlying
    model instead of the malicious misuses, which is really where we should be focusing.
    If that’s passed, I don’t see how you build an AI startup in California. Why would you take that
    risk? You’d go to Texas or somewhere else. I mean, no rational AI researcher or scientist
    is going to risk being thrown in jail just to pursue their research in California.
I think that if I had to be really sympathetic, I think he's probably trying to elevate
the attention that AI gets, but the drafting he's proposed is so misinformed that it's
completely divorced from the way these models are actually researched, trained, and developed in
the real world. This is frontier AI research being regulated by a local legislator who has no
background in AI development or technology development. Frankly, I believe the most real-world
experience the bill's co-authors have in a lab that has actually developed and productionized
these models is a four-month internship at Google. One of the co-sponsors of the bill is a
well-meaning think tank staffed by a couple of high school researchers or something.
From a substantive analysis, we've put out tons
    and tons of critiques of the substantive pieces of the bill, but the process around this bill,
    oh my god, I mean, it’s just a joke. That’s what ultimately led us to launching this website,
Stop SB 1047, two weeks ago, because after many attempts to provide the senator and his team with
    feedback on the problems with the bill and how to address it, and being ignored, our founders and
    the Little Tech community just got super frustrated when every new revision of the bill just ignored
    all that feedback and instead made the bill even worse. With this August vote deadline looming,
    it’s just become so important and urgent to amplify those voices that are being ignored,
    right? Like startups, researchers, the open-source community at large to voice their concerns.
    We just wanted to amplify those concerns that Scott Wiener’s team has been ignoring. I have
    learned now, even if you’re not interested in politics, politics takes an interest in you.
    A lot of legislators, I think, especially in other states, are being quite thoughtful about
    saying, “You know what? We’re open to feedback. Give us feedback,” and they’re making revisions to
    the bill that actually address that feedback. That’s not the case here, right? This is a process
    that’s been led by a legislator who keeps saying, “I’m open-minded to feedback.” It takes a bunch
    of founders’ time and companies’ time, and then when you see the new draft, it addresses none of
    the core issues. For a theoretical example of what this could mean, and you can definitely
    correct me if I’m wrong, but let’s say there’s an open-source model out there that some people
    developed and they put online, made it open-source. Somebody else grabs that open-source model,
    uses it to hack into a government system. I don’t know. Something like that. They use the model
    to do some bad actor stuff. The people who made the model are just as liable as the people who
    actually did the hacking, right? That’s right. If you read the bill, what the bill is saying is,
    if you open-source a model that meets some criteria that they put, which is completely arbitrary,
    we can get to that in a second. But if you open-source a covered model,
    you have to certify that this model cannot be used for any catastrophic harms,
    and if somebody downstream picks up your model and does something bad with it, fine-tunes it,
    changes it, modifies it in ways that you didn’t control and does something bad,
    you are liable as the open-source developer for the harm they did. They’re placing this perjury
    penalty on that developer, and you might go, “Okay, Anj, well, perjury is kind of,
    that’s pretty severe. If you’re a guilty perjury, you get thrown in jail.” Yes, you do. What this
    bill is proposing is that if an open-source model developer fails to certify appropriately, and
    by the way, there’s no real definitions proposed yet. All they’re saying is this new agency will
have the full rights to determine these definitions in the future. You could potentially be held liable.
    I think that’s just crazy.
    Yeah, definitely. So, is this only open-source, or does this apply to the closed-source models as
    well? They’re proposing civil and criminal liabilities on all model developers, open-source,
    closed-source. So, how is this disproportionately affecting the smaller businesses
building open-source than it is the OpenAIs, Microsofts, Googles of the world?
    Oh, this is classic regressive tax, right? If you just think about a concept of regressive tax
    versus a progressive tax, a regressive tax is something that disproportionately hits
    less resourced people than people with more resources, right? And the way they’ve drafted
    the bill by putting all of this burden of definitions that have no precise definition today,
    what’s going to happen is, if this bill passes, this agency is going to get lobbied by Big Tech,
    who has armies and armies of lawyers and compliance experts to shape the definitions
    in their favor. And tiny startups, open-source researchers, academic labs, who don’t have all
    those resources will just be left out in the cold. We’ve seen this happen with multiple industries,
    and that’s what’s going to happen here as well.
    So, it actually sort of helps some of these bigger companies with the regulatory capture that,
    you know, they’re not outright saying they’re going for, but they’re probably going for, right?
    100%. Let’s take one example from the bill, which the sponsor,
    the bill sponsor, Scott Wiener, keeps saying, “Oh, look, my definition of what a covered model is
    only applies to Big Tech companies because it only gets triggered by $100 million training
threshold." Okay, well, hold on a second. The Big Tech companies' training budgets are in the
billions. So, first of all, if all you cared about was really just attacking and regulating Big Tech,
    you would start your bill with the number B for billion, right? Number two, what even is a training
    budget? There’s no such canonical definition today. This space is so early that, you know,
    if I sampled the 16 different AI model startups that I’ve invested in over the last three years
    for their definition of training, every single one has a slightly different meaning, right?
    Pre-training versus post-training versus fine-tuning versus computing latent representations,
like past training runs. If I'm a startup and I took Llama 3,
    which costs, call it, you know, about $100 plus million to train, and then I fine-tuned it,
does their training expenditure apply to mine too? The bill's authors have proposed zero
definitions around these
pretty important issues, right? Do you think that's purposeful? Because, like, obviously,
    if you leave it vague like that, that gives them so much power and control over all of this, right?
    Look, I think there’s the generous interpretation and the, you know, the less generous one. The
    generous one, you know, there’s this idea of Occam’s razor, right? The simplest explanation is usually
    the right one. When I first read the bill, I was so worked up, I was like, wow, this has been
    maliciously vague, right, to put this burden on model developers. When I then looked at the bill’s
    authors and their backgrounds, then I realized that they just don’t know what they’re talking about,
    right? I mean, I kid you not, there’s literally zero beyond, I think beyond one researcher on
    that team who spent four months inside of a lab as an intern. They don’t have any experts on the
    drafting team who’ve actually trained models, who’ve deployed them, who’ve worked at startups
    for an extended amount of time that are frontier model companies. I mean, I just think they don’t,
    I think they’re well-intentioned. I wish I could tell you they had the competence to have done
    this maliciously. I think there’s good reason to believe they’re just way in over their heads with
no real-world experience here. Right, right. Don't attribute to malice what you can, you know,
explain with ignorance, or whatever. I don't remember the exact quote, but that seems to be
    what’s going on here. Right. We’ll be right back, but first, I want to tell you about another great
podcast you're going to want to listen to. It's called Science of Scaling, hosted by Mark Roberge,
    and it’s brought to you by the HubSpot Podcast Network, the audio destination for business
professionals. Each week, host Mark Roberge, founding chief revenue officer at HubSpot,
    senior lecturer at Harvard Business School, and co-founder of Stage 2 Capital,
    sits down with the most successful sales leaders in tech to learn the secrets,
    strategies, and tactics to scaling your company’s growth. He recently did a great episode called
How Do You Solve for Siloed Marketing and Sales, and I personally learned a lot from it.
    You’re going to want to check out the podcast, listen to Science of Scaling wherever you get your
    podcasts. I’m curious. If you were an advisor to help with creating some regulation, are there
    things that you believe should be regulated, or do you think it should just be open door,
    let’s just push forward, accelerate at all costs, or are there some areas where you’re like,
    okay, these are areas I think should be regulated? Oh, I’m absolutely in favor of regulation. Let’s
    make it clear. Models are powerful tools like electricity, they can be used for good and bad,
    and we should focus on preventing people from doing bad things with it. But this approach to
    regulating the underlying technology and placing burdens on researchers instead of placing the
    burdens on the misuses of the models is completely misguided. I have an issue with this particular
    piece of legislation. I don’t have a problem with regulation, especially regulation that’s
    thoughtful, that’s drafted in partnership with industry, that puts America first, that doesn’t
    just hand away our entire AI startup industry to China. Yes, I’m absolutely in favor of regulation.
    If you were asking me, if you were drafting legislation with policymakers to make
    sure AI is developed safely and responsibly, what would you prioritize? I’d probably look
    for three basic principles in that drafting. One, focus on the misuses, not the models.
    Right? Focus on the malicious users, not the underlying infrastructure.
    The second would be to prioritize concrete security problems over these sort of
    super theoretical borderline sci-fi, doomsday terminator scenarios that they’re calling AI
    safety. Those are not our most pressing safety issues where a model autonomously goes rogue
    and launches a cyber attack on our power grid. That’s the plot line of a Schwarzenegger movie.
    Right? What is happening, and I know this because our portfolio companies are being
    attacked by this, we get approached by law enforcement agencies all the time, is in fact
good old-fashioned spear phishing, misinformation attacks, and identity theft. Those are where
attacks are increasing in speed and scale, because bad actors are using AI tools. It's the same attack
    vectors. We have laws that say these are illegal. We don’t need more laws to say these should be
    even more illegal. What we do need is laws to bolster enforcement, invest in defensive tools
that our agencies can then use to fight this increasing speed and scale of AI-powered attacks. That's
    the problem we should be focusing on. Right? Anyway, that’s the second thing. Let’s prioritize
    concrete AI security over sort of doomsday safety scenarios that have almost zero empirical evidence
    that these will ever come to pass. Then I think the third thing I would do is to really prioritize
    open-source development in the United States to maintain the competitive edge we have globally.
    Right? Because us placing these burdens on our startups, our open-source researchers,
    our universities is not slowing down China. They’re full steam ahead. But if you prevent our
    open-source ecosystem from collaborating, from putting up models that people can research,
    can fine-tune, can red team to make them more secure, you’re going to hurt us,
and you're hurting U.S. national competitiveness, and nobody else's, while everybody
    else races ahead. Those are sort of the three simplest principles. We provided that feedback
    ad nauseum, to be honest, to the senator, but none of the amendments to the bill have addressed
    these core issues. Right, right. Speaking of China, I don’t know if you saw the news today,
but it looks like OpenAI is going to be banning ChatGPT in China. It looks like this is possibly
    in collaboration with the U.S. government, or at the direction of the U.S. government.
So I do wonder if we're going to end up in a scenario where OpenAI and the other major AI
    players who are closed-source, if they’re already in collaboration with the U.S. government behind
the scenes, a member of the NSA, I think, recently joined OpenAI's board, and
Mira Murati, the CTO, openly said in an interview that they collaborate with the U.S.
    government in terms of showing them the new models before they come out. I do wonder if
    that’s going to lead to a world where, yeah, the closed-source models, they’re collaborating with
the U.S. government because the U.S. government sees this as a national security issue. It's an asset to the U.S., but also a security threat, a risk as well. And I wonder if they will actually end up pushing that there should be no open source because of that. I'm almost kind of with what Founders Fund is saying there, and that's one area where I'm a little bit conflicted, because I saw some of the people at Founders Fund saying that open-source AI can be dangerous, right? And that it's actually going to help China. So I'd love to hear your thoughts on that, like how a16z is more on the side of open source, and it seems like Founders Fund is slightly against open source for AI. Look, I think any arguments that claim
that open-source AI is a threat to national security are either, frankly, misinformed, in that they're just coming from a place of not knowing the true state of reality on the ground, or they're malicious, in that they're designed to hold the United States back.
    And let me explain what I mean there. Number one is a very, I think,
    misinformed understanding of the state of information security at the best labs, right?
    There’s this idea we have that closed-source labs are so protective and secretive of their
    weights that China doesn’t have them, and we somehow have this amazing competitive advantage
    over China. For over 10 years now, the Chinese government has had a state-sponsored program
    to infiltrate targets of valuable IP development in the United States, and it’s not AI-specific,
    this is in all kinds of industrial processes, that is a nationally-sponsored strategy by the
    government of China to exfiltrate valuable IP from the United States to China. And while the FBI
    and other enforcement agencies can’t comment on ongoing investigations, I will tell you that you
don't have to look too far to find public evidence that this is already happening at the
    frontier labs. Just two months ago, there was an engineer from Google who was caught by the FBI
    boarding a plane to China with TPU schematics on a thumb drive. We’re not talking sophisticated
exfiltration here, guys, a thumb drive, okay? So, number one, I think any national security game theory that folks are relying on must take into account the reasonable likelihood
    that frontier model labs in the United States are already infiltrated by adversarial nation states.
Frankly, I think there's good evidence already from our enforcement agencies, and ongoing investigations that will soon become public will make that clear. But you just have to
    go read the news to know that this is happening. So, number one, any national security strategy
that says, oh, we're ahead and they can't get our weights from closed-source labs, is already
    giving away the game. Okay, so let’s start from an operating assumption that at best, we are at
par with them, and that they have our frontier developments today. I'm not even sure we can
    claim we’re ahead. Let’s just say the goal is to remain at parity, right? The idea that open source
    is somehow going to give away our national competitiveness fails to take into account
    that the way we got to the frontier in the first place was through collaboration between researchers
    of different labs, right? And the current big tech argument that, oh, open-sourcing our weights will
allow adversarial countries to get them does one thing and one thing only: it allows them to stop publishing their research, and it gives them a convenient excuse to tell their best researchers, who, by the way, want to publish, that they can't. The way the best AI researchers
    get more sort of feedback on their research is by presenting openly. The scientific process
    is you put out your research, you share about it publicly, other people then provide feedback,
    and then you improve, right? That entire process of open collaboration at the frontier of AI is
    about to basically be all but dead. And one of the biggest ways that we have shot ourselves in
the foot is by preventing academia in the United States from contributing to that research, right?
Open source, for example, is today the only way that allows frontier university labs to contribute to research at all. If Llama 3 was not open sourced, if Mistral was not open sourced,
    Stanford, Berkeley, MIT, like these institutions, the postdocs, the PhDs there would have zero way
    of contributing to AI research. And so I think if you believe that the public university system
    and open collaboration between labs is critical to keeping our national competitiveness ahead,
    then turning off open source is a great way to keep us behind, right? Especially at a time when
    our labs are already infiltrated. So if the enemy already has our best and then we’re slowing down
    our best, the most likely steady state is that we lose our national competitiveness and we fall
behind, right? So as for the people advocating for these bills, who are arguing that open source is bad for national security: frankly, a lot of them just don't know what they're talking about, because I don't think they're investors in enough frontier labs at this
    point. And frankly, I just think there’s a bunch of people who are culturally misguided because
    they think that these doomsday scenarios are more realistic than they really are.
So I have one sort of last question about SB 1047. It's a little bit of a devil's advocate question. Since this is a California bill, you know, some people just argue, well, just go do your research outside of California. I'm curious your thoughts on that. Yeah, it's a good question. So unfortunately, the drafting of the bill was amended to be
    even more clear that the bill stretches across state lines. You know, up until last week,
    there was some debate like, oh, Ange, like it doesn’t say that this applies outside of California.
But the bill's authors went out of their way to make it clear that the statute would reach across state borders. Oh my god. So really, I mean, I believe the bill's authors have actually been promoting that as a feature of the bill, not a bug. So this legislation is nationwide, whether we like it or not, or at least they're proposing it to be nationwide. So I think the most
    likely scenario will be that our best researchers, our best teams will move offshore to this emerging
    kind of region across the world that I’m calling an AI sanctuary. Basically, there’s sort of three
    things you need now as a world-class startup or a model research team. You need cheap electricity,
    right, cheap, abundant, sort of sustainable, clean electricity to run the massive amounts of
compute you need to train these models. You need access to that compute itself. And the last thing is you need regulatory certainty and protection to train these models, where, as a researcher, you're not being held to civil and criminal liability. And you know what, frankly, I have been shocked
    by how many, you know, nations have reached out since we started publicly speaking about the
    bill saying, hey, please send us your best and brightest. We will gladly protect them without
    regulations. And I think that will mean that our best companies do offshore to places that are
    offering them cheap and abundant energy, compute and regulatory protection.
    Yeah. So let’s talk about what people can do. If, you know, you’re listening to this and you’re
    going, OK, yeah, this definitely sounds like we need to stop this from happening, what can we do
about it? Yeah, I'm glad you asked. So, StopSB1047.com. It's a public website, a hub where researchers, academics, and anybody else concerned about the impact of the bill can go and write to their legislators. So if you oppose the bill, please visit the site. We've got a templatized letter that you can then customize for yourself and send to your assembly representative. We have a list
    there where you can easily pick who your representative is. We released the website
    last week. And in the first four days, we had the community send over 375 letters to the assembly.
    And so this is an important issue that a lot of people, a lot of startups, a lot of academics,
    and a lot of open source researchers are concerned about. But we need to get the word out to even
    more people. We have less than 60 days before the final assembly vote on this proposed law.
    So please tell others about the site, share the information, raise awareness among those who’ll
    be impacted by this bill. We basically think Little Tech deserves to have its voices heard.
    And so if you visit the website, we make it super simple for you to understand
    how this bill impacts you if you’re Little Tech and how to take action, which is to send a letter
    to your representative and make your voice heard. You know, the message about helping small startups,
    like I personally feel that, but I think a lot of people are probably going to resonate more with
    the fact that this is very important for the future of America. It’s kind of appropriate
that this is right after July 4th. We're celebrating America, and personally, that's the thing that inspires me: this is very important. If we just hand AI dominance to China... the reason America has been so successful is that we were, you know, the leaders in culture for a long time, with entertainment. We were leaders in technology, the internet. And so that's why freedom has spread around the world. And if we don't win in AI,
    it’s probably going to be the opposite of freedom that’s spreading around the world
through China. And so I think it's a big problem. And the opposite of freedom when you have AI can be quite scary, when you can use AI to mass-control people. And so I believe
    that it’s really important that America wins this. And I think that more people probably will
    resonate with that kind of message versus, you know, obviously, Silicon Valley people,
we're like, yeah, we want to support the startups. Right. But for a lot of other people, I think that's the more powerful message. I think you're absolutely right. And there's a professor at
Berkeley, Ion Stoica, who testified in Sacramento last week saying that this bill, well, this bill is called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. In its current form, it would do the opposite. It will hurt innovation in California,
    and it will result in a more dangerous rather than a safer world. And he goes on to explain that
    first if SB 1047 passes, when it comes to open source models, he predicts that within one year,
    we will all use open source models developed overseas, likely in China. Why? Because this law
    will discourage building open source models in California, and likely the United States,
    and Chinese open source models are already very competitive. And three of the top six
open-source models are already Chinese, according to the Berkeley LMSYS Chatbot Arena evaluation.
    The second is that if SB 1047 passes, then California will lose its competitive edge when it
comes to AI. Because as a researcher in a fast-moving field, you don't want to be constrained by such limitations, so you just go elsewhere, where you can do your best research. So more and more PhD students of Chinese origin will just go back to China, while others might consider going to, you know, other adversarial countries, where they can enjoy huge funding for their
    research. And this is already happening, according to him. And he’s a leading academic at one of the
    preeminent American university labs. And so when he’s saying it, we really have to, I think,
    sit up and pay attention. And then the last thing he did talk about is how SB 1047 incentivizes
companies that sell to enterprises to move out of California, since most of their enterprise customers already have headquarters, you know, out of California. And so for the California market, they will have to basically provide inferior models to conform with SB 1047, which will mean that the state will turn from a leader to a laggard.
    And that’s not a future any of us wants. And so it’s this is, you’re right, Nathan, that this is
    not just a California issue. This is an America issue. And I don’t think enough people across
    the U.S. realize just how dangerous this piece of legislation is for all of America. And I think
more people should be talking about it the way you are. I think on that note, that's
probably how we'll wrap up the episode. But everybody can head over to StopSB1047.com to get more details about the bill, as well as more details about how to help prevent this bill from actually getting passed. And Anjney, thank you so much for hanging out with us today. This has
    been a fascinating discussion. And I think it’s really going to open a lot of people’s eyes. I
    don’t think a lot of people even realize that this is kind of happening behind the scenes. It
    doesn’t seem like it’s getting a lot of publicity right now. So I think it’s important that we
    have these discussions and let people know that this is happening. This is what the California
    government’s shooting for. So if you like what we’re getting out of AI right now, and you like
    the progress we’ve seen, we need to do something about this. So I appreciate you sharing all your
    thoughts and all the details about this, because I do think it’s going to be eye-opening to a lot of
people. Oh, thank you guys. I'm a huge fan of the pod, and, you know, helping us spread the word and get the message about the cause out is deeply appreciated. So thank you. Absolutely. Amazing.
    Thanks again. All right. Thanks, guys.

    Episode 15: Is the future of AI development under threat due to new legislation? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) are joined by Anjney Midha (https://x.com/AnjneyMidha), a General Partner at a16z and a prominent voice in the tech community and advocate against SB 1047.

    In this episode, Anjney Midha dives deep into the potential ramifications of California’s proposed bill, SB 1047, on the tech industry, startups, and researchers. The discussion covers why regulations could force AI companies to leave California, how the bill might favor big tech over smaller developers, and the broader implications for America’s leadership in AI. Anjney also introduces StopSB1047.com, a platform to raise awareness and mobilize opposition to the bill.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Proposed bill aims to hold AI developers accountable.
    • (05:29) Launching website to stop SB 1047, frustration.
    • (07:21) Open source model certification liability issue explained.
    • (09:57) Sponsor questions bill’s effect on tech companies.
    • (15:59) OpenAI may ban ChatGPT in China.
    • (17:14) Open source AI not a national security threat.
    • (23:31) Global legislation may lead to AI sanctuary.
    • (26:06) Small startups need support to prevent AI dominance.
    • (27:57) Chinese dominance in open source AI models.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • Claude 3.5 Sonnet: Is OpenAI Falling Behind?

    AI transcript
    I need this from my email as soon as possible.
    Yeah, I’m like imagining this workflow where like you’re listening to your emails with some extra commentary.
    There’s something new being born there that never existed before.
    When all your marketing team does is put out fires, they burn out.
    But with HubSpot, they can achieve their best results without the stress.
    Tap into HubSpot’s collection of AI tools, Breeze, to pinpoint leads, capture attention, and access all your data in one place.
    Keep your marketers cool and your campaign results hotter than ever.
    Visit hubspot.com/marketers to learn more.
    Hey, welcome to the Next Wave Podcast.
    I’m Matt Wolf.
    I’m here with Nathan Lanz.
    And today we’re going to discuss a lot of the latest stuff to come out of the AI world.
    Things from Anthropic and OpenAI and HubSpot and all of these big players that are building stuff in the AI space right now.
    But we’re not just going to talk about the news.
    We’re going to talk about what the heck do we actually do with this stuff?
    You know, there’s a lot of discussion going on in the AI world around that’s a really, really cool thing.
    But how would I actually use that in my life?
    Well, we’re going to talk about that and some really cool stuff has come out from Anthropic and OpenAI lately.
    And we’re going to give you our thoughts on where this might be going and how you can actually integrate this and implement it in your business.
    So this week, I think the most exciting thing that I saw that’s actually useful is the new Claude 3.5 Sonnet.
    You know, I’ve been a big believer in OpenAI and even though people were saying, “Hey, Claude’s better at writing.”
    I’m like, “Ah, but still, you know, with OpenAI and ChatGPT, I have my custom instructions and it knows more about me and has more context about me.”
    Well, now with the new Claude, not only is it better than ChatGPT, but also they’ve added this new thing called Projects.
    And so Projects is basically like custom instructions, but better.
    For example, like with my newsletter, I created a project called The Lore Newsletter and then I gave it all these notes.
    So I said, “Here’s kind of my writing style.
    Here are like two of my favorite newsletters and the kind of writing style that I’ve learned from.
Here are my previous issues of the newsletter, and also the kind of general template I use to lay out my newsletter every week.”
And that has made Claude such a better editor than before.
    Like before, it had no context like, “Who’s Nathan?” or like, “What is he like?” or “What’s the newsletter?”
    And now you don’t have to do any of that. Like, it just knows all of it.
    And so it seems like, you know, yeah, for a newsletter, it’s a great use case, but it feels like this is going to be where companies can really start to find value with AI.
    Because my understanding too is you can actually share the projects, right?
    So you can make this for any kind of project and feed it all the context that’s relevant for that project.
    And now everybody who’s involved in that project, when they go into the AI, it’s already got everything set up.
    So it knows like what you’re trying to accomplish.
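Conceptually, a Project behaves roughly like a reusable system prompt plus attached context that everyone on the project shares. Here's a rough Python sketch of replicating that idea yourself via the Anthropic SDK; the helper function, its parameter names, and the model string are illustrative assumptions, not Claude's actual Projects implementation.

```python
# Sketch: a "project" is roughly a reusable system prompt plus attached
# context, shared by everyone who opens it. This illustrates the concept;
# it is NOT how Claude's Projects feature is implemented internally.

def build_project_context(goal, style_notes, reference_docs):
    """Bundle project knowledge into one system prompt string."""
    sections = [
        f"Project goal: {goal}",
        f"Writing style notes:\n{style_notes}",
        "Reference material:",
    ]
    sections += [f"--- {name} ---\n{text}" for name, text in reference_docs.items()]
    return "\n\n".join(sections)

system_prompt = build_project_context(
    goal="Edit the weekly newsletter",
    style_notes="Conversational tone, short paragraphs.",
    reference_docs={"issue-42": "Last week's issue text..."},
)

# With the Anthropic Python SDK you would then pass this as the `system`
# parameter on every request (model name below is illustrative):
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-3-5-sonnet-20240620",
#     max_tokens=1024,
#     system=system_prompt,
#     messages=[{"role": "user", "content": "Edit this draft: ..."}],
# )
```

Everyone who reuses the same `system_prompt` gets the same project context on every request, which is the "already set up for the whole team" effect described above.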
    So honestly, I haven’t played with the projects feature yet.
    You know, at the time of this recording, I’ve actually been traveling and I just got home and I haven’t played with that yet.
    But they also rolled out a feature called Artifacts, where it puts this like sidebar inside of Claude.
    So on the left of your screen, you type your prompts and it looks just like Claude’s always looked.
    But then on the right, whatever the sort of output you’re looking for will show up on the right.
    So one of the very first things that I tested with that was I wanted to see if I can have it write a game for me and actually test the game straight in Claude without ever leaving the website.
    And it worked. I asked it. I made a very, very simple game.
    I said, hey, make me a tic-tac-toe game that I can play against the computer.
    And with a single prompt, it generated that game and then it worked first try.
I mean, the computer actually sucked at tic-tac-toe; the logic of where it should place its pieces wasn't there yet.
    But I would put my X’s down and it would put an O down and I put an X down and it put O down and I beat it every single time.
    Because the AI that it generated wasn’t very smart for the game, but it was all playable right inside of Claude.
I didn't even need to copy the code, paste it into a text document, or open it as an HTML file.
    Didn’t need to do any of that.
    It just loaded up in this new artifact sidebar and I was able to test it out.
    And I just thought that was like so cool.
    You can test the code without ever leaving the website.
    And then if something doesn’t work with it, you just go to the left side of your screen and say, hey, this didn’t work.
    Fix it and it will rework the code and show it to you again in the right sidebar.
    And that was really, really cool.
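The single-prompt game described above can be sketched in a few lines. The actual generated code was HTML/JavaScript running inside Artifacts; this is an illustrative Python version of the same game logic, where the "dumb" computer (the kind that's easy to beat) just picks a random empty square. All names here are ours, not Claude's output.

```python
import random

# Minimal tic-tac-toe logic: human plays X, the computer plays O by
# picking a random empty square, which is why it was so easy to beat.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def computer_move(board):
    """The 'dumb' opponent: a uniformly random empty square."""
    return random.choice([i for i, v in enumerate(board) if v == " "])

def play_turn(board, square):
    """Place the human's X, then let the computer respond if the game goes on."""
    board[square] = "X"
    if winner(board) is None and " " in board:
        board[computer_move(board)] = "O"
    return board
```

Swapping `computer_move` for a minimax search is the one-line-of-feedback fix ("the computer sucked, make it smarter") you'd type into the left-hand chat pane.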
    But I haven’t actually played with the projects yet.
I just know that, you know, I have some custom GPTs that I made over in ChatGPT, where I mostly use them for
duplicatable processes that I find myself doing over and over again, kind of like you mentioned for your newsletter.
    And one of the ways that I was using the custom GPT was I was taking all of the links from one of my YouTube videos, right?
    I usually have like 20 tabs open and I’m like, here’s this thing and then here’s this thing and then here’s this thing.
And then I'd take all of the URLs from those tabs and plug them into a custom GPT that I made in ChatGPT.
And I'd say, you know, create a resource list for me, and it would give every single URL a sort of title: title, URL, title, URL.
And it used to work really, really well. When GPT-4o came out, it actually broke that custom GPT and it stopped working for me.
So when you mentioned just now that Claude has this Projects feature, and it's kind of similar to a system prompt or almost like a custom GPT,
I'm like, well, that sort of eliminates one of the use cases I used to go to ChatGPT for.
    Yeah, for sure. I mean, this is like the first time I feel like I can actually switch over.
    But you’re talking about the artifacts thing. So another thing with projects is apparently you can share the artifacts.
    I’m imagining like two, you know, two teenagers who are like coding on a project together, like how cool that’s going to be now,
    where they literally can go into the AI and say, hey, we want to make this little game together or this little website or whatever.
    And you can talk to the AI and you can actually see the output. You can see the output of the code and go back and forth on that together.
    There’s something new being born there that never existed before, like in terms of collaboration on projects.
I wonder if you can make like a multiplayer game, right? Just take the most basic possible example:
the tic-tac-toe game I made, where I log in and play the game against you, you know, over in Japan, right?
    Like that would be crazy. I don’t know if it’s there yet, right? You’re probably just seeing like the front facing code.
    You’re probably not seeing my input, but that would be cool if we could figure out how to like make multiplayer games in there,
    where you log in, you see like the move that I made and then you make your move and I see the move you made.
    And we can go back and forth and almost like create a multiplayer game inside of artifacts, but with the sort of team group feature that’s there.
Yeah. I mean, I think right now it's mainly JavaScript they're outputting. I could be wrong, but a lot of the stuff I've seen at least is JavaScript.
So you'd probably want to have some kind of backend.
But, you know, whether it's Microsoft or one of these guys: Anthropic, I think they're funded partially by Amazon.
    So in theory, they could just partner with AWS, right?
    And you could have something in the back end where it rolls up a server, usually with games, you know, net code is one of the kind of challenging things, engineering wise.
    But my understanding is there’s a few best practices for that now.
    So I don’t see why AI couldn’t learn how net code works and then just roll up a server for a game.
    And then, yeah, now you do have a multiplayer game.
    So I don’t think that’s currently possible, but that feels like something that’s probably six months away, maybe 12 months.
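For a turn-based game like tic-tac-toe, the netcode being discussed boils down to something very simple: a server that delivers each player's moves to the opponent, in order. Here's a toy in-memory sketch of that relay logic; a real service would run it behind websockets on a hosted backend, and the class and method names here are our own illustration.

```python
from collections import deque

class MoveRelay:
    """Toy version of turn-based netcode: the server's only job in a game
    like tic-tac-toe is ordered delivery of each player's moves to the
    other player. A real service would do this over websockets on a
    hosted backend; this in-memory class shows just the logic."""

    def __init__(self):
        self.inbox = {"p1": deque(), "p2": deque()}

    def send(self, player, move):
        # A move made by one player lands in the other player's inbox.
        other = "p2" if player == "p1" else "p1"
        self.inbox[other].append(move)

    def poll(self, player):
        # Each client polls for the opponent's next move; None if nothing yet.
        q = self.inbox[player]
        return q.popleft() if q else None
```

Each browser client would call `send` with its own move and `poll` for the opponent's, which is exactly the "you see my move, I see yours" loop described above.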
Yeah. And they're also backed by Google, too.
    So they have AWS and Google Cloud, you know, potentially at their disposal.
    Right. So yeah, I mean, I think that’ll be really interesting.
I need to play with that Projects feature a little more.
I mean, what are some of the super exciting use cases that we can think of for using that Projects feature?
    Because, you know, again, like we mentioned at the beginning of the show, like, this is really, really cool.
    All right. Now, how do I use this in my business?
    How do I use this in my daily life for productivity?
    How do I actually, like, integrate this new technology?
    I’m curious your thoughts there.
One, on the personal side, you know, I recently saw a tweet from Greg Eisenberg talking about micro weddings and how that's like a new thing,
    where people don’t want to spend hundreds of thousands of dollars on a wedding.
    They want to have like 10 to 20 friends.
But he talks about how that's a huge trend, and how it's so hard to manage putting together a wedding.
    And obviously, I just recently got married.
    We had a relatively small wedding, very expensive, though.
And the meetings with the staff were so long.
I was sitting there thinking, God, you know, now that there's this Claude, you know, with Projects,
    I’m like, I can make a project for the wedding.
I could have told it, like, here's the kind of wedding we're wanting to have; help us organize everything.
And here's like some reference images and stuff.
And sure, it couldn't have done everything, but I think it could have helped, basically been like the wedding organizer for sure.
    I wonder how close it is to like, you know, Anthropics got like visual capabilities as well, right?
    You can upload a photo and then it can see what’s in the photo.
    And you can ask questions about that photo.
    I wonder, can Claude and this is something I don’t know, maybe you do, but if you don’t, that’s OK.
    Can Claude actually like see what it built over in that right side, the artifact sidebar, right?
    Because if it can, how close is Claude to being like what we saw from Devin, right?
    Write me this game and then it writes the game.
    The game pops up in the right sidebar.
    It tests the game itself to see if the game is functioning the way that specific game should.
    And if it’s not, send that image or the response or the error report or whatever back to the code on the left sidebar.
    Fix it, iterate it and then present it again on the right sidebar.
    Like it feels to me like this is one step closer to that sort of agentic thing that everybody was looking for, especially for code.
    I mean, right now with Anthropic, the thing that everybody is sort of blown away by is how good it is at writing code.
It's gotten so much better than what GPT-4 can do at writing code in a single prompt, usually.
    So I’m just wondering that to me feels like where they’re going with all of this is like that sort of agentic.
    It can see what the output was, determine if the output was the output we were looking for.
    If it is cool, if not, throw it back into the chat and try again, you know?
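The loop Matt is describing here, generate, check the output, feed errors back, retry, can be sketched in a few lines. This is a hypothetical illustration of the agentic pattern, not something Claude actually does today (as Nathan notes next); the function and parameter names are our own, and `generate` stands in for a model call.

```python
def generate_test_fix(generate, run_checks, max_rounds=3):
    """Sketch of the hypothetical agentic loop described above: generate
    code, check the result, and feed any error back into the next attempt.
    `generate` stands in for a model call; `run_checks` stands in for
    executing the artifact and inspecting what it produced."""
    feedback = None
    for _ in range(max_rounds):
        code = generate(feedback)
        ok, error = run_checks(code)
        if ok:
            return code          # output matched what we were looking for
        feedback = error         # "throw it back into the chat and try again"
    return None                  # gave up after max_rounds attempts
```

With stub functions for the model and the checker you can see the control flow: a failing first attempt produces an error message, which becomes the feedback for the second attempt.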
    Yeah, I mean they definitely should go there. My understanding is currently it doesn’t do that.
    I could be wrong, but you know, I think right now, you know, Artifacts is basically like a UX experiment or UI experiment for them.
Basically, you know, it's actually one of the first things they've invented, as far as I know, that ChatGPT didn't have first.
Yeah, because every other feature they've created was basically a copy of ChatGPT's features so far.
    Right. And Artifacts is the first thing that’s like, oh, that’s new. That’s a really cool thing.
    You just see the output. Yeah, why not?
    Yeah, especially with JavaScript, HTML, CSS, there should be no barrier to being able to just present that on the same screen.
    Yeah, but I think that is where it’s going, like whether it’s Claude doing it or someone else.
    And I think, you know, chat GPT is coming. They’re not going to just going to lose this fight.
    They’re going to roll out some amazing stuff too.
But yeah, something I noticed a lot of engineers are talking about online is refactoring; that was a big thing.
I heard people saying, oh my God, you can go in with all your messy code,
you just copy and paste in all of it, and it makes it all tidy and removes anything that's unnecessary, redundant, or inefficient.
And it gives you code that's way better: runs better, cleaner, easier to read, and nothing breaks.
Before, apparently, when you would do that, yeah, it would make the code cleaner, but sometimes that would break things.
    And I’m hearing a lot less people saying, oh, it breaks things now.
    It seems like it’s way better. Like it understands what it’s doing and yeah, it can clean up your code and it doesn’t break anything.
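The key property of the refactoring described here is that it's behavior-preserving: the cleaned-up code must return exactly what the messy code did. Here's a small hypothetical before/after pair (our own example, not anything from the show) where the safety check is simply that both versions agree on every input.

```python
def total_positive_messy(items):
    # "Before": the kind of cluttered code you might paste in, with a
    # redundant else-branch and manual index bookkeeping.
    result = 0
    for i in range(0, len(items)):
        if items[i] > 0:
            result = result + items[i]
        else:
            result = result + 0
    return result

def total_positive_clean(items):
    # "After": same behavior, idiomatic and easy to read.
    return sum(x for x in items if x > 0)

# "Nothing breaks" just means behavioral equivalence: for any input,
# both versions must return the same answer.
```

Running both versions over a set of inputs and asserting they agree is the cheap way to verify an AI-suggested refactor before trusting it.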
That's huge.
Like, one of the hardest things about engineering, on the projects I've worked on, especially if there's other people involved, is going into somebody else's code and understanding what's going on.
It's so hard, because they've been sitting there for, you know, sometimes even 24 hours coding on something, and it just makes sense in their head.
    And then when somebody else looks at it, it’s like, how is all this stuff interconnected and what the hell is this and what’s that?
And so yeah, I think that's a big thing that is actually going to be really useful for companies now:
your code will be cleaner and run better.
I think one of the things that I'm sort of feeling lately is that OpenAI is starting to fall behind a little bit.
    And again, we’ve talked about this plenty of times on the podcast.
    We have no idea what OpenAI is doing behind the scenes, right?
    They’re fairly secretive. Most of the stuff that they’ve announced and shown off, people didn’t see that coming.
    Because they don’t like to show it off until it’s ready to show off.
    But lately it’s starting to feel like companies like Anthropic are passing them in a lot of areas, you know, and other examples outside of the large language model is the video stuff.
We actually kind of talked about this on a different episode, but you've got Sora, which they announced seemingly to overshadow Google, right?
    Like when Sora was announced, I think it was the same week that Google announced Gemini 1.5 or whatever.
    It was a really big leap for Google with like a million token context window.
    Well, that week OpenAI went and showed off Sora and nobody was talking about Gemini 1.5 once they saw Sora.
    Sora was all anybody in the AI world was talking about or looking at or caring about, right?
    Well, here we are several months later.
    We still don’t have Sora access, but now we’ve got Luma AI video available.
    We’ve got Runway Gen 3, which is about to be available most likely before Sora.
    We don’t have it yet, but most likely before Sora.
    And then you’ve got the demo that they did for the sort of advanced audio assistant, I think, is what they call it, right?
    They did that demo once again.
    I feel like they timed that demo to sort of slap Google, right?
    Like Google had their Google I/O event the same week; Google I/O started on Tuesday.
    Well, what did OpenAI do?
    Monday, we’re announcing an event where we’re going to make a big announcement, right?
    They made this big announcement on Monday, which kind of took some of the shine away from the Google I/O event.
    Here we are a month later.
    We still don’t have it.
    And they just made an announcement that, oh, it’s getting delayed.
    We’re pushing it out.
    It’s not going to be rolled out.
    I mean, some people seem to be getting it.
    But for the most part, they’re telling us we’re not going to get it till fall now.
    And it’s like they’re making all of these big announcements with the intention of sort of overshadowing Google.
    But then they’re not actually shipping anymore.
    And I don’t think it’s a good look for OpenAI, in my opinion.
    And it also kind of feels like it’s giving a lot of these other companies an opportunity to catch up and pass them.
    Like Claude 3.5 is beating OpenAI on all of the LLM benchmarks right now.
    Like the companies are starting to pass them up.
    I agree that it’s not a good look.
    But I’m still convinced that behind the scenes, they’re pretty far ahead.
    And there’s starting to be some evidence of that.
    It’s like the stuff with Sora, even though, yeah, they did a demo and it’s not out yet.
    But the video that came out, I think it was yesterday, the Toys R Us commercial.
    Have you seen that yet?
    I haven’t watched it yet, but I saw the news.
    That’s generated with Sora.
    Yeah, I heard that.
    So Sora, there are deals going on behind the scenes where the technology is being used.
    I think with Sora, it’s probably just so expensive.
    They’re like, yeah, we can’t just roll this out.
    Like, people would have to pay, like, hundreds of dollars to use it.
    And people are, you know, they’re not going to do that.
    The funny thing is, I didn’t even know that Toys R Us was like back.
    It’s like, and they’re back with Sora.
    Yeah, like this was their introduction to go like, hey, look, we’re here again.
    Yeah, yeah. And so, you know, I thought that commercial was pretty cool.
    Like, you know, is it perfect?
    No, but like, that’s the worst it’s ever going to get.
    And you can see that they’re able to accomplish things in that commercial
    that probably would have traditionally cost a lot, a lot more.
    Probably to the point where they never would have made that kind of commercial.
    It’s just like you’re almost making like a short film at that point.
    I think that behind the scenes that they’re still doing quite well, I agree.
    They’ve kind of slipped up in terms of how they presented things.
    I don’t think it’s good. Like, I highly prefer how Claude’s doing it,
    where they talk about something and then it’s just out: hey, try it.
    Like, you know, like the old Steve Jobs style.
    Like, I think that’s way better personally.
    We’ll be right back.
    But first, I want to tell you about another great podcast you’re going to want to listen to.
    It’s called Science of Scaling, hosted by Mark Roberge.
    And it’s brought to you by the HubSpot Podcast Network,
    the audio destination for business professionals.
    Each week, host Mark Roberge, Founding Chief Revenue Officer at HubSpot,
    Senior Lecturer at Harvard Business School, and Co-Founder of Stage 2 Capital,
    sits down with the most successful sales leaders in tech to learn the secrets,
    strategies, and tactics to scaling your company’s growth.
    He recently did a great episode called How Do You Solve For Siloed Marketing and Sales,
    and I personally learned a lot from it.
    You’re going to want to check out the podcast.
    Listen to Science of Scaling wherever you get your podcasts.
    But I still think, you know, GPT-5 is coming, and it’s probably way better than Claude.
    So right now, Claude’s better for the moment.
    But yeah, I guess we’ll see.
    Yeah, we’ll see.
    We’ll see.
    The Toys R Us video is interesting to me because I’m wondering what sort of reception
    it’s going to get.
    Like, if it starts, I don’t know if that’s airing on TV already right now,
    or if it’s just kind of circulating on the internet.
    But almost every scenario where AI was used in this kind of thing,
    there was massive negative backlash, right?
    Oh, there is online right now.
    Oh, there is.
    Yeah, for this.
    It’s really, yeah, people are quite hateful.
    Like, you know, screw Toys R Us.
    Screw Macy’s.
    They should have just stayed dead, and all this kind of stuff.
    It’s like horrible stuff.
    I don’t get it.
    I don’t quite understand it.
    You know, I guess the argument is like, well, why didn’t you just hire real people,
    real visual effects artists to go make that?
    You know, you would have employed more people if you did.
    But I mean, they’re a company that went bankrupt, you know.
    So like, they’re probably looking to cut some costs in some places.
    Yeah, yeah.
    But yeah, I just remember when there was that Disney Plus show,
    the secret... what was it called?
    Secret Invasion.
    And parts of the intro were done with AI and like people were boycotting,
    watching that show because the intro was made with AI, right?
    And like, there was like a magazine post where there was like a clock.
    I don’t remember exactly what the ad was for.
    But there was like an ad where like the clock in the background,
    people are like, wait a second, the numbers on the clock look weird.
    That image is AI.
    You know, boycott this company.
    They use AI for their images.
    I don’t understand that.
    Like, you’re throwing the baby out with the bathwater.
    You’re like, okay, because they used AI in this one ad,
    the product that they’re offering is, like, I don’t know.
    It’s a weird take to me that people are like that passionate about like,
    this company is saving costs by using AI.
    Therefore, screw that company.
    But that seems to be the sentiment whenever this kind of stuff comes out.
    I don’t know if that’s like the vocal minority or if that’s how most people feel.
    I kind of get the impression there’s like this vocal minority of AI haters.
    Most of the world is either like AI is cool or I really don’t care one way or the other.
    But then there’s this vocal minority of people that
    tend to speak the loudest on the internet.
    Yeah, I think it’s the vocal minority.
    But it will be interesting to see long-term.
    It’s definitely going to be like one of the biggest transitions in human history,
    at least in our lives, where things are going to radically change.
    And a lot of it’s going to be good.
    And there’s going to be some bad and complicated stuff too,
    like we talked about before.
    It’s, you know, I think we should accelerate, but also at the same time,
    it’s like, yeah, but also like care about people because it’s like,
    some people are going to lose their jobs.
    You know, so we’ll see how that works long-term.
    I think there will be some backlash.
    But at some point, you know, like you get these, like the new show from Disney,
    I already forgot the name of it, the new Star Wars show.
    But it’s horrible.
    Like in the commercial they’re all chanting “the power of one” and all.
    It’s like, it looks like a comedy skit.
    My wife and I have actually watched them all.
    And I know we’re about to start off on a tangent here.
    Oh no, I think the show’s horrible, but it’s almost like I’m watching it now
    to watch this train wreck in slow motion.
    There was an interview with the musician will.i.am, the guy from the
    Black Eyed Peas.
    And he said that, you know, the AI video stuff he’s seen, you know,
    he feels like it is going to take power away from Hollywood.
    And it’s going to give it back to individual creators for them to create new things.
    And he thinks that’s going to be like a more beautiful world, where people can
    go and create the kind of stuff they want to see, you know.
    And so I am hoping like technology gets way better.
    And then Hollywood’s like, has to like produce better films with humans in it
    and stuff that humans actually want to see.
    And then there’ll be a whole other category.
    Like you said, probably a niche of where people can use AI to make stuff that’s
    personalized for them, what they want to interact with.
    So yeah, it’s interesting.
    I definitely see the arguments, right?
    Like I see the arguments of like, okay, this is taking away the need to hire
    visual effects artists and things like that.
    But I also think that the visual effects artists that learn to like leverage a lot
    of these AI tools are going to be much better at their craft, you know,
    using all the tools that are at their disposal, right?
    Why let yourself get left behind because you’re going to boycott a certain technology?
    Well, your competitors probably aren’t going to boycott that same technology.
    Like, it’s ridiculous, but I do understand: it’s change.
    People are scared of change.
    But I think those that leverage it as another tool in their toolbox
    will only get better at their craft.
    Yeah, one of the other interesting things that came out this week was
    Eleven Labs, the AI voice company, they rolled out this Reader app.
    So basically you can feed it any document and then you can use an AI voice to read it to you.
    So like if there was a book that didn’t have an audio version of the book,
    you could have it read it to you.
    Or if there’s a presentation or any kind of PDF or anything like that,
    you could feed it to it and have it read it to you.
    So like it feels like that’s going to be really powerful for people.
    Like it feels like it moves us closer and closer to like not having to be on your computer all the
    time, where you’ll be able to go off and live your life and do work at the same time.
    Like you can be going for a walk and staying healthy while hearing the presentation that
    your boss just sent to you or whatever.
    Like, yeah, I don’t have time to like look at that this moment,
    but like, hey, let’s just send that to Eleven Labs and have it read it to me while I’m like
    going off and enjoying my life for a little bit.
    No, that to me will be really interesting because it kind of makes it so like
    everything can be an audio book.
    Like articles you read online, they can be an audio book.
    Like you mentioned, like the memos from a meeting that you don’t really feel like reading.
    Turn it into an audio book.
    Go for a walk and listen to the summary of the meeting.
    Like anything can be an audio book or a podcast that you can listen to and multitask.
    I need to run to the store real quick.
    I’ll listen to the audio of this meeting that was generated while I’m there.
    Or I’ll listen to this blog post that I came across on the way to the store real quick.
    I told Amar, who’s working over there, like, I need this for my email as soon as possible.
    And then when it gets to the point where then you’re able to actually like talk back to it
    and tell it how to like craft response back.
    I mean, we’re probably like six months away from all this, which is awesome.
    Do they have an API for that feature?
    I think Eleven Labs has a pretty robust API.
    So if they don’t, I bet they will soon.
    Yeah. And that sort of gives me ideas of like, you know, using something like Zapier or make.com
    that you can start tying these tools together and every time an email comes in,
    automatically send it over to the Eleven Labs API, make an audio version of it,
    and then queue it up in my podcast player, right?
    And then when you go anywhere, you just open up your podcast player
    and listen to all the emails that have come in in the last couple hours, you know?
    Well, and what you could do too.
    And I’m just like, I’m sort of totally nerding out right now.
    But like, I like to make workflows with Zapier and make.com.
    And I’m imagining this workflow where an email comes in,
    it automatically sends it to the Anthropic API, has Anthropic add its own commentary
    to the email for you, and then sends that over to Eleven Labs, which makes a podcast for you.
    And now it’s like, you’re listening to your emails with some extra commentary.
    Like you can tell Anthropic, make me some jokes about this email.
    Make me some jokes about this contract.
    So as you’re listening to the contract, it adds in a little bit of, like, an entertainment factor.
    You’re more, like, interested in reading it, you know?
    Like every once in a while laugh at something that’s said in this contract.
    So you can be listening to it and they’re like laughing at the contract
    as they’re reading it to you.
    Like that kind of stuff I could actually see myself doing
    just to like help me get through more of the content I’m trying to get through.
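    The email-to-podcast chain described here could be sketched roughly as follows in Python. Treat all of it as a hypothetical sketch, not a tested integration: the endpoint URLs and field names reflect our understanding of the public Anthropic and Eleven Labs HTTP APIs, and the code only builds the request payloads rather than sending them (Zapier or make.com would do the actual wiring).

    ```python
    # Hypothetical email -> Claude commentary -> Eleven Labs audio pipeline,
    # written as pure payload builders so the data flow is visible.
    # API shapes below are assumptions; no network requests are made.

    def build_anthropic_payload(email_text: str) -> dict:
        """Build an Anthropic Messages API payload asking Claude to add
        light commentary to an incoming email."""
        return {
            "model": "claude-3-5-sonnet-20240620",
            "max_tokens": 1024,
            "messages": [{
                "role": "user",
                "content": ("Narrate this email, adding brief, light "
                            "commentary between paragraphs:\n\n" + email_text),
            }],
        }

    def build_elevenlabs_request(narration: str, voice_id: str):
        """Build the URL and body for an Eleven Labs text-to-speech call."""
        url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
        return url, {"text": narration}

    def email_to_podcast(email_text: str, voice_id: str):
        """Wire the two steps together; the commented lines mark where a
        Zapier/make.com scenario would actually fire the HTTP calls."""
        claude_payload = build_anthropic_payload(email_text)
        # POST https://api.anthropic.com/v1/messages with claude_payload
        narration = "<Claude's commentary-enhanced narration goes here>"
        tts_url, tts_body = build_elevenlabs_request(narration, voice_id)
        # POST tts_url with tts_body -> MP3 bytes to queue in a podcast player
        return claude_payload, tts_url
    ```

    In practice each commented POST would carry an API key header, and the returned MP3 would be dropped into a private podcast feed.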
    Yeah. I mean, so a lot of the stuff we’re talking about with the email though,
    you know, a lot of that’s, you know, maybe some of it could be doable with Zapier
    and other things right now.
    I’m not sure, I’d have to go play with it and see, but if not, it’s going to be available soon.
    But one thing you can actually use like right now, which I’ve been finding really useful
    is, you know, and this is, you know, I’m not saying it because Dharmesh is, you know,
    the co-founder of HubSpot.
    But Dharmesh, the co-founder and CTO of HubSpot, launched this thing called agent.ai.
    And my understanding is he’s building different agents for different business use cases.
    I’m not sure that’s going to get rolled up into HubSpot products or what.
    But what he has right now that you can use without even signing up is this thing
    where you can forward an email to agent@agent.ai.
    And then it basically connects it to an LLM and you can ask it things.
    Like you can say, you can ask it a question about the email or you can ask it.
    The thing I’ve been finding very useful is summarize this email.
    Yeah.
    Right. And I don’t have to go anywhere else or anything.
    I literally just, you know, forward it to agent@agent.ai and say,
    “Hey, this email that’s like six paragraphs, summarize it into like three bullet points.”
    And you’re just writing that in the message that you send.
    So you’re like forwarding it and then in the message that above the forwarded message,
    you’re asking a question or whatever.
    That’s all you do.
    And you get a response back in about 30 seconds.
    Mine was like 20 to 30 seconds every time I tried it.
    And it’s great.
    It’s like, you know, it’s, I’m not sure what, you know, they’re using behind the scenes.
    Maybe they’re using Claude or, you know, whatever.
    But the responses are good.
    Like the summaries are pretty similar to what you’d get from putting it into Claude.
    Of course you could go copy and paste and do all that.
    But, you know, for a lot of people who don’t know how to do all that or, you know,
    who are not as fast at doing that as I might be, like it’s super useful.
    And you can use it like right now.
    You just like forward the email.
    So anyone listening: if you haven’t tried it and you’ve got emails that are, like,
    super long, you know, I suggest just trying it out.
    Like forward an email to agent@agent.ai and just say,
    “Hey, summarize this email.”
    It’s pretty magical.
    Yeah. Yeah. And so, yeah, there’s a lot there to think about.
    You know, we talked about a lot of the new tools that are coming out.
    We talked a lot about the use cases that you might be able to use them for.
    Even talked a little bit about the future implications of where some of these tools
    might go.
    I think this has been a really fun, fascinating discussion.
    And if you enjoyed this discussion, make sure you like this video
    if you’re watching on YouTube, and subscribe wherever you’re listening or watching.
    If you’re listening on a podcast platform, subscribe to us there as well.
    We really appreciate it.
    It helps get this show more reach in front of more people.
    We can’t thank you enough for tuning in and nerding out around AI with us.
    We really appreciate you.
    We’ll see you in the next one.
    [Music]

    Episode 14: How soon will artificial intelligence be able to create multiplayer games? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) explore this fascinating possibility alongside a range of innovative AI tools and their impact on various industries.

    In this episode, Matt and Nathan delve into the technical aspects of multiplayer games powered by AI, predicting significant advancements within the next six to twelve months. They also discuss the benefits of AI-driven project management features, AI’s ability to refactor code, and creating audiobooks from diverse content. Finally, the hosts examine the potential collaboration between OpenAI and the U.S. government, the implications for privacy and data sharing, and the future of open-source AI technology.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Exciting new Claude 3.5 sonnet enhances editing.
    • (03:39) Testing code in sidebar; using custom GPTs.
    • (08:24) Possible advancement of Anthropic’s visual capabilities in AI.
    • (10:29) Engineers discussing refactoring and improving code efficiency.
    • (14:33) OpenAI delays rollout, overshadowing Google. Opportunity for competitors.
    • (18:29) Confusion over AI sentiment in the world.
    • (19:10) Anticipating major societal changes with some concerns.
    • (23:12) Excitement for integrating APIs into workflow processes.
    • (26:09) Forward long email to Agent AI, magical.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • The Rise of Generative AI Video Tools

    AI transcript
    I don’t think I’ve had as much fun with AI as I have in the last like month
    playing with these tools that are coming out right now.
    People are going to be very addicted to these things.
    Right. All the tools are getting better too.
    Now we’re starting to wonder, did Sora kind of blow it?
    When all your marketing team does is put out fires, they burn out fast.
    Sifting through leads, creating content for infinite channels,
    endlessly searching for disparate performance KPIs.
    It all takes a toll.
    But with HubSpot, you can stop team burnout in its tracks.
    Plus, your team can achieve their best results without breaking a sweat.
    With HubSpot’s collection of AI tools, Breeze,
    you can pinpoint the best leads possible, capture prospects’ attention
    with clickworthy content and access all your company’s data in one place.
    No sifting through tabs necessary.
    It’s all waiting for your team in HubSpot.
    Keep your marketers cool and make your campaign results hotter than ever.
    Visit hubspot.com/marketers to learn more.
    Hey, welcome to the Next Wave Podcast.
    I’m Matt Wolf.
    I’m here with Nathan Lanz and today we’re going to talk about AI video.
    There’s been these really interesting AI video generators out there, right?
    We’ve had Gen 2, we’ve had Pika Labs, and we’ve had Leonardo Motion.
    And there’s been all these really cool AI video tools,
    but they’ve really been kind of just that, just sort of cool, right?
    They haven’t really had great practical use cases.
    We haven’t been able to create videos with one of these tools and legitimately
    use it as like B-roll or make like a really good film out of it.
    They’ve all had this sort of weirdness to it.
    That is until we got a sneak peek of Sora from OpenAI earlier this year.
    When everybody saw Sora, we saw this AI text-to-video platform that made
    videos that actually looked realistic and pretty much everybody in the AI world
    got ultra, ultra excited about Sora and what it could possibly do and how
    realistic it can make these videos.
    But then we never got access to it.
    We never actually got to play with it.
    We kept getting teaser videos.
    They gave it to like a handful of creators, like three or four different
    creators were allowed to use it and we got some demos from that.
    But still to this day, most of the world hasn’t gotten access to Sora.
    Well, now we’re starting to get some alternatives to Sora that are looking
    pretty dang good.
    We recently got Luma who released their dream machine.
    We have Gen 3 from Runway, which has been sort of teased, but we haven’t
    gotten access to it yet, and now we’re starting to wonder:
    did Sora kind of blow it?
    That’s kind of the discussion we want to have today is, you know, where is AI
    video going?
    Where did it come from?
    Where’s it going?
    What’s available now?
    What’s coming in the future?
    I think there’s a really interesting discussion here.
    I think probably the general consensus online right now is that OpenAI did
    wait too long.
    That’s kind of what most people think.
    I think I disagree with that.
    Honestly, like I was probably one of the first people on Twitter, like doing
    like really big AI video threads, like when it was all first starting.
    Like that was like one of my main, you know, like things I was doing
    every week was like putting out here’s the top AI videos this week.
    And I kind of stopped after Sora came out.
    Because the videos were kind of cute.
    And then Sora came out, and like, okay, yeah, sure,
    I could still get clicks on this and views,
    but I felt kind of dumb putting out, here’s these amazing AI videos, after
    people had seen Sora, you know. By them putting it out so early, it made
    everything else look bad.
    These new ones are like catching up, like especially like Gen three.
    I thought it was pretty amazing.
    Dream machine is pretty awesome.
    But still they don’t look as good as Sora.
    And so what I would say is, yeah, it’s not released yet, but whatever they
    showed then, it’s going to be better by the time it’s actually released.
    Most likely.
    And so when they do come out with something, you know, it’s going to be,
    you know, almost kind of like how Apple was back in the day,
    where they would come out with the very best product.
    Maybe it wasn’t out first, but it would be the best when it came out.
    No one’s actually found real use with AI video yet.
    And it feels like Sora is the most likely one that when it actually comes
    out, it’ll be the first one that will have real use.
    And that’s why they’ve talked with Hollywood and other, you know, like
    they’ve been talking to major studios. Apparently right now, the main
    players are Sora, I mean, OpenAI’s Sora, as well as probably Gen 3.
    Cause Gen 3 also, it looks just barely behind Sora.
    Yeah.
    No, it’s funny you say that because I used to make a lot of YouTube
    videos about, man, look how far AI video has come, right?
    And I would show off like how much better Pika labs has gotten or how much
    better runway Gen two has gotten.
    And I was, and then we had stable video diffusion and there was all these
    different AI video models that came out, but they were all, you know, they
    had weirdness to them, right?
    Like every video for whatever reason looks like it’s moving in slow motion.
    People would, like, morph, they would start looking like one person and then
    morph into a completely different person, and all the AI video models I’ve
    seen so far still really suck at hands, right?
    So there were all of these video tools that were kind of cool, but then
    OpenAI went and showed off Sora, and I was like, okay, well, they
    just raised the bar of what AI coolness looks like.
    So now anything I ever show off in a YouTube video that is me trying to
    say, look at this cool new AI video tool, looks lame compared to Sora.
    So I kind of stopped making those kind of videos, but now I’m making
    them again because we’re starting to see Gen three.
    We’re starting to see Luma’s dream machine.
    We’re seeing these other tools pop up now.
    Yeah.
    And it is exciting like dream machine you can actually use.
    So that’s that is the exciting part.
    Like it’s not as good as Sora.
    It’s probably not as good as gen three either, but it’s not that far behind.
    And you can actually use it right now.
    Like I saw your video where you made a music video, you know?
    And I thought that was awesome.
    Like, oh, it’s like, oh, that’s actually, yeah, I wouldn’t like put that on TV yet.
    That’s probably like six.
    That’s probably like six months or 12 months away from being like almost
    like TV quality and the idea that you’ve got all these new tools are coming out.
    You got like, you know, Udio and I don’t know.
    It feels like we’re at the very beginning of like this creative explosion
    where all these tools combine and there’s the level of like art and
    entertainment in the world is going to go up dramatically.
    I think because like everyone’s going to be able to make this stuff.
    It’s going to be awesome.
    Yeah.
    No, I totally agree.
    I think, you know, it’s a buzzword, but it really democratizes video creation.
    Right.
    One of the things that I’m really excited about is just B roll.
    Right.
    I make a lot of YouTube videos and I don’t like to be just on camera the whole time.
    I like it to change what you’re looking at.
    I want the video’s pace to keep going.
    And oftentimes it’s hard to find B roll.
    And when you do go find B roll, you’re searching for like stock video sites.
    Right.
    I use story blocks is the one that I use.
    And when I go through story blocks, like you can find videos that are sort of
    relevant, but you’re not fooling anybody that it’s not stock video.
    Right.
    It all looks like stock video when you have like that corporate conference room
    and like five people in a suit are all leaning forward over like a conference
    call or something.
    Everybody’s seen those exact videos.
    Even if you haven’t seen that stock video footage before, you just know what
    stock video looks like.
    And so this really excites me.
    Anything I can imagine, I can say any wild thing I want on one of my YouTube
    videos and now I can create a little bit of a B roll for that whatever random
    wild thing I said was.
    Did you see how good Gen 3 is at text in video?
    I haven’t yet.
    It’s perfect.
    I’m actually at augmented World Expo right now as we’re recording this.
    And a lot of these tools and announcements are dropping while I’m at this event.
    So I haven’t actually been seeing as many of the demos, but I will say
    about about the Luma dream machine is that it’s really, really good when you
    start with an image and you turn that image into a video.
    But if you go in there and you enter a text prompt and try to generate a
    video from a text prompt, which is what I do,
    it’s not great.
    Yeah.
    So Gen 3, the CEO, he’s the CEO of Runway,
    he’s been showing clips on Twitter, and I saw one yesterday.
    He showed like five in a row of text.
    Like, you know, you write out your name, you know, Mr.
    Eflow, or you write out The Next Wave. Text on screen is perfect in Gen 3.
    I mean, to the point it’s like crazy.
    Like, okay, you type in something and you want to be talking about time and you
    want it to have sand coming down or you want the words to come fly out of
    something and all of a sudden there’s sand dripping down.
    Perfect.
    And another one where like the words actually were like being like dragged
    through a jungle and they were made of dirt and then they popped up like, like
    anything you want to do with text, like for like advertisements or B roll or
    like intros to a show, it’s that’s already very, very good.
    Now I was kind of surprised.
    Like I want to type in lore and see what pops up when you do that.
    Yeah, no, that’s that’s awesome.
    Cause I mean, even most of the text to image generators still struggle with
    getting the text in the image for the most part.
    So to actually know that we’re getting like a video one that can do that as well
    is is pretty crazy.
    The other thing about the gen three is all of the clips that they’ve been showing
    off, I believe are 10 seconds ish, maybe even longer.
    But when it comes to Luma’s dream machine, you can only generate five seconds
    of video right now, but they did just add this extend so that you can get five
    seconds and then I think it pretty much uses the last frame of that video as the
    first frame of the next video.
    And so it extends it that way.
    But when you do AI video generation in that way, because it’s like building off
    of the last one gets a little bit worse quality, right?
    Every single extension looks a little bit worse than the extension before it.
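    Purely as an illustration of the mechanism described here, the last frame of each clip seeding the next one, here is a toy sketch. `generate_clip` is a hypothetical stand-in, not Luma’s actual API; the nested `frame(...)` strings just make the compounding visible.

    ```python
    # Conceptual sketch of the "extend" chaining described above: each new
    # five-second clip is seeded with the final frame of the previous one,
    # so artifacts in that frame get baked into everything that follows.
    # `generate_clip` is a hypothetical stand-in, not a real Luma API call.
    from typing import Optional

    def generate_clip(prompt: str, start_frame: Optional[str]) -> dict:
        """Pretend video generation: returns a clip plus its last frame."""
        seed = start_frame or "fresh"
        return {
            "frames": f"5s of video seeded from {seed}",
            "last_frame": f"frame({seed})",
        }

    def extend_video(prompt: str, segments: int) -> list:
        """Chain several generations into one longer video."""
        clips, last_frame = [], None
        for _ in range(segments):
            clip = generate_clip(prompt, last_frame)
            clips.append(clip)
            # The last frame of this clip becomes the first frame of the next,
            # which is why each extension looks a bit worse than the one before.
            last_frame = clip["last_frame"]
        return clips
    ```

    Running `extend_video("a monkey on roller skates", 3)` yields three clips whose seeds nest deeper each time, mirroring how generation errors accumulate across extensions.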
    Yeah.
    And I heard something from the Luma labs team saying that right now,
    yeah, they could do one minute videos, but with their current model, apparently
    after five or 10 seconds, like the animations just kind of stop.
    Like if you had a character doing like an action scene, running around
    with a gun, shooting it all around by 10 seconds, the person’s like kind
    of just like standing there with a gun looking around or something like this.
    Like the model’s not fully there yet in terms of like a long, long clip.
    And so apparently that’s the sweet spot currently for them.
    Yeah.
    And the other thing that I hear about gen three is it’s really fast, right?
    I think I saw Cristobal post something on Twitter about how it generates
    in about 45 seconds, where, I don’t know how much you’ve played around with Luma’s
    dream machine, but they have you wait in a queue.
    And then once you get through that queue, then it takes two minutes
    to generate minimum two minutes.
I’ve actually found it’s probably closer to three minutes, but the very first time I ever used Luma’s Dream Machine, I logged in, tried to generate a video, and it took seven hours in queue before that three-minute generation happened. I typed in my prompt and had it open thinking, oh, it’s going to generate any time now, any time now. Eventually I just walked away, went and ate dinner, probably watched a movie with my family, came back, and it was still in queue. All said and done, it took seven hours in queue before it finally generated. An amazing video, right? No, the video was horrible. I did the prompt “a monkey on roller skates” because that was the first AI video I ever generated, back when I was playing around with ModelScope a couple of years ago. So I wanted to see, okay, this is a monkey on roller skates that I generated two years ago, and this is a monkey on roller skates using Luma’s Dream Machine. The Dream Machine version was worse than the version I made two years ago with ModelScope, and it took me seven hours and three minutes to generate.
There are actually other video generators that a lot of people have been comparing to Sora. There was Kling, which came out of China, but you had to have a Chinese phone number to use it. Although I did hear some people just entered their US number and got access anyway. I haven’t attempted it yet. Nothing that I’ve seen from Kling makes me go, oh, this is Sora-level. I never saw anything from it that felt like it’s on the same level as that stuff, but I have seen stuff come out of Luma’s Dream Machine, and I have seen some of the Gen-3 videos, where I’m like, that looks pretty damn close, especially when you start with a realistic-looking image, or even a real image, in Luma and have it animated. It’s actually pretty dang good looking.
Yeah, I mean, there are a lot of good scenes coming out, man. I’m especially impressed by Gen-3. I think it looks pretty amazing. There are parts where you can see that if it was higher resolution, and it’s not high enough resolution yet, it would already be good enough to put in as b-roll in major films. Yeah. So that’s exciting. And then did you see the stuff with the little anime clips? It even does anime pretty well, and so, yeah.
I don’t think I’ve had as much fun with AI as I have in the last month, just playing with Suno and Dream Machine and Udio and all of these tools
that are coming out right now. You know, it reminds me of about two and a half years ago, two years and a couple of months ago, when Midjourney first came out and I started playing with Midjourney for the first time, and I just lost sleep, right? I would stay up until 1:30 a.m. just generating images, going, can I do this? Can I do this? And then when I learned about Stable Diffusion and fine-tuned a model on my face, I was able to make myself Superman, or make myself riding a horse, or myself an astronaut, or whatever. And that was the next time I was like, all right, I just lost a whole day of my life playing with this, generating images. Well, now I feel that way again with the combo of Suno and Leonardo’s new AI image generator and Midjourney and the Dream Machine. To me, it is so much fun. I made that video on YouTube where I showed myself making a music video. I used Suno to make the song, and then Midjourney to make the starting images, and then I took the Midjourney images and put them into Dream Machine to animate them all, and then I used DaVinci Resolve to edit them all together. That video actually probably took me a good ten hours to produce, just because of all the waiting time for the processing in Luma, but it was so much fun. I was so blown away with some of the videos that were coming out of it. Not all of them. Some of the videos were really, really impressive though.
We’ll be right back, but first I want to tell you about another great podcast you’re going to want to listen to. It’s called Science of Scaling, hosted by Mark Roberge, and it’s brought to you by the HubSpot Podcast Network, the audio destination for business professionals. Each week, host Mark Roberge, founding chief revenue officer at HubSpot, senior lecturer at Harvard Business School, and co-founder of Stage 2 Capital, sits down with the most successful sales leaders in tech to learn the secrets, strategies, and tactics to scaling your company’s growth. He recently did a great episode called How Do You Solve for Siloed Marketing and Sales, and I personally learned a lot from it. You’re going to want to check out the podcast. Listen to Science of Scaling wherever you get your podcasts.
Yeah, and all the tools are getting better too, and more fun to use. Even Midjourney, now you don’t have to use it on Discord. They have the website, and that interface is so much more pleasant to use, and also the personalized feature. Have you tried that out yet? Yes, in Midjourney. I tried it on and off. I’m like, oh yeah, the one where it’s personalized for me. Yeah,
I like that better. That is kind of cool. And to realize, long-term, all these models are going to learn, whether it’s the AI art, the videos, the music, games in the future. They’re all going to learn what kind of stuff you personally like and help you amplify your own creativity. And it’s exciting to think about, like, yeah, these are all going to get better. This is the worst it’s ever going to get. And just imagining in, like, two years how fun it’s going to be to produce music and videos and whatever you want.
    It’s probably going to be like way faster too. A lot of this stuff is probably going to get almost
    instant. There’s no reason this stuff can’t be instant at some point. So imagine
    that you could just like type in stuff instantly and you’ve just created a song.
    You’ve now created a video and you’re like in real time like editing these
    things together yourself. It’s going to be awesome. I’m excited. Yeah, everybody’s
going to have essentially their own custom Midjourney model, right? Like, I can enter a prompt into Midjourney, you can enter the identical prompt, and if we’re both using our own personalized models, we’re going to get two probably pretty dramatically different outputs, because it’s going to make one for my taste and one for your taste, and I just think that’s really, really cool. I think, you know, the other
side of the coin of this conversation is the type of comments I’ve been getting on my YouTube video where I made a music video, or when I actually shared the music video over on X and on Instagram. You know, I start getting a lot of these comments like, oh great, you’re making a video that’s helping people perpetuate the downfall of the music industry, the downfall of the video industry. Oh, these tools are trained on copyrighted material, so this is just as bad as stealing the original material and using that in your videos. And those are the types of, I mean, not most of the comments, but I’m seeing those kinds of comments, right? About the copyright implications, the implications of, if I can make music with this, does that diminish the work of artists, and all that kind of stuff. I’m personally in the camp of, I call BS on all of that. I don’t think it diminishes anybody’s work. I
think about the fact that I can make an AI image that I think looks really cool, and the fact that this person over here can actually draw it with their own hands and make something that looks really cool, and I’m way more impressed by that second version than the version that I made. And I think I always will be, just by the fact that a human was behind it making it. Yeah. Did you see the blowback that Ashton
Kutcher got when he was talking about, basically, he got access to Sora, he said it’s good, he’s like, it’s going to change Hollywood, and he made some really big statements about it. And people were like, oh my god, you went with tech over Hollywood now, and you’re turning your back on creators, and you’re okay with screwing them all over. And he was like, no, I think humans are still going to be involved. But, yeah, of course entertainment is going to evolve like it always has. Obviously, over the last 20 years, CGI has really taken over Hollywood, right? How many major films use CGI? A lot of them now. This is a further evolution of entertainment. And I think that’s what humanity is, kind of our purpose, to evolve and continue getting better and better. But, you know, there’s a natural instinct to be worried about change. Change is scary. And so I understand people being worried, because, yeah, there probably will be periods where there will be some job losses related to this stuff, for sure. Yeah. I mean, George Lucas was
asked what he thought about all this AI stuff, right? And his response was essentially, well, it’s all inevitable. It’s going to happen anyway. Just like, you know, we were doing everything with practical effects, and then we got computer graphics and started doing everything with CG, and now we’ve got AI. And so his sort of analogy was, when cars started to
    come out and people started going, yeah, but we’re just going to stick with
    horses. Well, you can stick with horses, but these machines are going to keep
    going. We’re going to keep evolving. We’re going to keep improving them. You
    can stay with horses if you want, but that’s not how the world works. We’re
    going to figure out new, better, innovative solutions to accomplish the same
    goal. That’s just what humans do. We try to figure out how to get more
    efficient, how to optimize processes, how to get better at what we do, how to use
    technology in our favor to leverage that technology to make our lives easier.
    That’s what technology essentially exists for is how can we use tech to make
    the things that used to be more manual, less manual for us? That’s how it’s
always evolved. Speaking of evolving, did you see where Runway, with Gen-3, put up this blog post talking about how they’re creating general world models? Apparently that’s the way they’re producing AI video now, which was rumored to be what Sora was doing as well: they’re kind of creating an idea of what the world is like, a model of it. Like a digital twin kind of thing, yeah. Yeah,
    that’s why it can be so consistent, right? That’s why you could have a train
    actually moving and seeing things as it moves is because it’s kind of produced a
world that it’s inhabiting. I think that’s fascinating. And to think, you know, NVIDIA has talked about this as well, NVIDIA who just became the number one company in the world, thinking about how that’s going to change games, videos. Imagine, the online games now, the worlds are so limited, but with this new technology, you’ll be able to create something kind of like how in Minecraft you go to the edge of the world and it produces more. Imagine those kinds of things with AI, where you get to the edge of the world, you get to the edge of space, and, oh, here’s now the new planet, or here’s now whatever. It’s infinite. It goes forever. Those things are going to become possible, at really high fidelity, not like Minecraft. Don’t let the game developers hear that, because if there’s any group of people that are vicious towards the folks that are pro-AI, it’s the game developers. I’ve had debates with
    people that are in film and music and things like that and, you know, some of
them are pretty upset by what’s going on, but I have never seen the level of hate that I’ve seen from some of the game developer community on some of the stuff I’ve posted. If you talk about AI taking over game development, they’re probably the first ones to just absolutely try to disrupt it. I mean, yeah, the reality is, though, the game industry is in a
    really stagnant moment. Like the game industry, you know, it’s worse than
    what’s happening in Hollywood, I would say, where, you know, the games are so
    expensive to make that everyone just copies the previous game and gamers are
    getting tired of it. I think that’s why you see the growth numbers have
    stagnated. I feel like there’s a big like sort of renaissance of like indie
    developers right now. Like most of the games that I play, I’m a big gamer
    myself, most of the games I play are from indie studios. They’re not the big
triple-A games. Yeah. Yeah. And AI is going to help them. Sure, some of them will be resistant to it right now, but once they see what it can actually do for them, they’ll be like, oh, you can be five people and now compete with EA. You’ll be able to produce an entire world. You’ll have help with the storylines, with the characters, with the creation of the art assets. That’s all coming in the next two years. And so I think it’s going to be great. Like, yeah, the game industry is very stagnant now, in my opinion, and I think that’s going to change in two years. And yeah, sure, people are yelling about it right now, but it won’t matter. Some people will lead the way, and then everyone else will have to follow after that, I think. Yeah. You know who would be a really good guest for this show, that I think would be really fun to talk to, is somebody who
    can intelligently speak to us about copyright law and how copyright law is
    going to be impacted by a lot of this stuff. So I don’t know if this is a
hot take or not, but in my opinion, copyright law is a big part of what’s perpetuating AI. All of the companies, all of the people that are out there fighting against AI because they’re worried about it using copyrighted material, I sort of have this opinion that they’re pushing AI forward faster than if they had just not brought this stuff up. And the reason I say that is, look at stock photos, right? So I had a blog where I actually hired an editor to come through after I wrote a blog post,
    sort of clean up the blog post and then add imagery to the blog for me,
    right? Well, they came in and they added some images and I looked at the blog
post. Cool, this is cool, let’s publish it. We published the blog post. I assumed the images they used were just from a regular stock photo site. Well, it turns out they did a Google search. They pulled a photo that was owned by the Associated Press, and I got an invoice in my email for using that photo: eight hundred dollars. So for this one photo that was used on the blog post, that my editor grabbed from Google Images, I had to pay eight hundred dollars for the right to use that photo. And I emailed them, like, well, can I just take it down and use a different photo? And they’re like, no, the damage is done, pay the invoice, essentially. And so in my mind,
    when I saw AI image generation, I went awesome. I don’t have to like worry
    about that anymore. I could just go generate any image I want now. And so
    like these like copyright pressures that they’re putting on creators are
pushing creators towards using AI. Same with Suno, right? You look at something like Suno. How many people have you heard of that put up YouTube videos, maybe there was a song playing in the video, and they got the video copyright struck, and either had the video completely removed or had a hundred percent of the monetization from that video go to the copyright holder? I’ve had that happen, where I made a thirty-minute video and maybe ten seconds of that video had an audio clip that was copyrighted by somebody else. Just kind of an oversight, it slipped. Well, because of that ten-second clip, all of the revenue for that full thirty-minute video had to go to the copyright owner of that ten-second clip. That doesn’t make any sense. That’s not fair to me. Give them ten percent of the revenue, not a hundred percent of the revenue, you know? So that kind of stuff comes up. Well, now that stuff like
    Suno exists, what am I going to do? I’m not going to go use music that I find
    online. I’m just going to go generate the perfect song for that video right
now. However, had copyright law been a little bit different, and creators were allowed to use some images they find online, or use some music they find online in their videos, without worrying about it affecting their livelihood, I don’t think people would be jumping to generate music with Suno, or jumping to generate images with Midjourney as part of their content, as quickly, because they could just use content that was created by other people. So I have this opinion that I really, really think copyright law needs to change, and if copyright law were different than it is now, it would probably stop a lot of creators from jumping to these AI tools. Anyway, that’s the end of my little rant about
copyright. Yeah, yeah. I think this will be a moment where copyright is forced to evolve. You know, copyright is so complicated. I mean, my last startup, Binded, we ended up pivoting into copyright, which was not the initial intention. I spoke in Washington, DC about the future of copyright on a panel, and I still feel like I barely understand copyright. It’s so complicated. And I met with the guy who was at
    that time heading Creative Commons and talked to him a lot about, you know,
    copyright and all the issues. And, you know, Creative Commons was always
    interesting, but also Creative Commons is so complicated. Like, there’s so many
    different versions of Creative Commons and it increases so much cognitive
    overhead of like, okay, which one do I pick and how do I do it? And it’s all so
complicated. I don’t know where copyright goes in the future, because in the new world it almost doesn’t make sense in its current state. I think there should be laws around, okay, if you directly copy somebody, like if it’s Michael Jackson and now you’ve got Michael Jackson singing in the song, yeah, probably his estate should be paid something. But if it’s not directly copying people, I just don’t see how copyright exists in its current form ten years from now. Yeah. Yeah. And I don’t know the solution either, right? Like, I do
    think that people that spend the time to create the art, people that spend the
    time to make the music, to, you know, generate the stock video, to take the
    photos, I think they should be compensated for the
    work they’re doing. I do think that’s important. And I
don’t know how that works. I mean, right now copyright is just kind of the best solution they’ve got, but I don’t think it’s the final solution. I think the way copyright works, and the way that companies are going around slapping down creators for using their content, is just not helpful to their cause in the long run, right? That’s sort of my opinion on it. Like, look at TikTok, and what was the company that took all of their music off of TikTok for a little while? And now it’s back on, but you couldn’t use Taylor Swift’s music, and there were all these artists that you couldn’t use on TikTok, because the deal between that music company and TikTok fell through. I don’t know if you
remember that a couple months ago. What ended up happening was, a lot of the artists that were on this record label got a ton of exposure because content creators were using their music in their TikTok videos. There are bands like AJR, and I’m actually a big fan of AJR, who credit a lot of their fame and their success, their music blowing up, to the fact that TikTokers were using their music on their little clips and things. People heard the songs in the TikToks, TikTok always put the name of the band on there, and then people would click on the name of the band and go find more songs by that band. It was actually a really, really good growth mechanism for these bands to allow content creators to just use the music in their videos. And then when the record label had their beef with TikTok, the record label shut off this stream of awareness around these bands. It just doesn’t make sense to me. The way copyright works right now just doesn’t make sense. Let
    content creators use it and let it be an exposure mechanism for these things.
I mean, so I saw an article yesterday saying that Perplexity is trying to figure out some kind of deal with publishers to pay them. And I think that could make sense when you’re directly citing something: okay, yeah, this is directly from this article, and that’s where we got the information from, and now I’m doing some kind of payment or revenue-share deal for something that clear. But with art, it’s way more complicated, because artists have always gone to art museums and things like that to get inspired. That’s what the AI art models are doing. They are not copying the art. Yeah, music too. They are getting inspired by it. And so I think it’s different, and I don’t see how you ever properly reward those people for having created that, the same way that if you got inspired by going to an art museum, you don’t go back and pay that artist something, right?
Well, I feel like music’s even muddier, right? Because you have bands that go and sample other bands. Like, Run-DMC samples Aerosmith for “Walk This Way,” right? You get stuff like that, and now you’ve got multiple artists in the mix, and the waters are really muddy. And, you know, I feel like I’ve beaten this horse to death. I just think you and I are both on the same page here: yeah, we understand why copyright exists, we understand that creators need to get paid for what they’re creating, it just needs to be rethought somehow. A lot of things, like, you know, I’m pro-capitalism, but the whole system is probably going to have to be rethought at some point. A lot of things stop making sense in the next ten years as things become more and more abundant and there’s less scarcity, especially with robots. You combine AI and robots, and a lot of things have to change, a lot of things. So I’m excited about that. I’m excited about the robot K-pop bands, right? You get, like, five
robots that can all sing, you teach them dance moves, you put them on stage, and now people are going to go watch these. And then they can just clone those robots, and then it’s like the Blue Man Group, right? The Blue Man Group can do multiple tours at the same time, because it doesn’t have to be the same blue men at every single show, right? Is that the future of music entertainment? We’re going to see K-pop robots singing on stage, but they could be doing multiple tours at the same time.
Yeah, there used to be this thing in Tokyo, there was a robot show that you’d go to, and I think it was just girls dressed up like robots or something, and they may have had one or two real robots that did some small moves. Unfortunately, they stopped doing that, but it used to be a huge tourist attraction. Yeah, I think in the future people are going to not have to work as many hours, because AI is just going to make people so much more efficient, and you combine that with all these new technologies. Yeah, we’re going to have some really amazing live experiences: music and robots and everything you can imagine. It’s going to be, you know, I love people who are so scared of this, and I’m like, imagine where we’re going to be in ten years. It’s going to be fun. The world’s going to look way different than now. Stuff out of movies is going to become real. You know, it’s a very exciting time to be alive. At least for me, you know, it’s hard to get me excited about regular day-to-day things. Right. I find it pretty mundane, and so I’m like, yeah, for the world to change more, that sounds great. Yeah. I’ll be excited to wake up every day. That’s awesome.
    Yeah, yeah. I think, yeah, it’s going to be exciting. It’s going to be fun.
I’m loving all the latest AI video, AI audio, AI image tech. I love seeing it progress, but at the same time, you know, I still love real art. I still love going to shows and watching bands play in concert. I still love making my own music with a real guitar and actually playing something that I’m proud of. I like looking at art that I know was painted by hand with oil paints, or going to the theater and watching a movie that I know took two years to produce. I don’t really see AI eliminating that stuff, which is what I feel like most people are scared of. I think, more likely, we’re going to see AI cut down the process of, you know, maybe some of the small b-roll they use in videos, or fill in the backgrounds of videos with fake actors, right? I think the people that are really in trouble in Hollywood are probably the extras, if I’m being honest. Like, say you have a scene with a big crowd. Well, with AI now, and really just with visual effects in general, this doesn’t require AI, but with visual effects in general, you can have just that front row of people be real people and then everybody behind them be generated with AI. You don’t need to fill in with extras, right? So I think that’s probably going to be the most affected group in Hollywood. But overall, I think we’re going to see some big change. I have no clue what it’s going to look like, but I think it’s going to be fun to continue
to have conversations about it. Yeah, I kind of think it may be more extreme than what you just said. I kind of think AI may replace all actors at some point, and the human-made stuff becomes more the niche product. But I don’t know, right? It may be like, okay, yeah, people read books, but how many more people watch movies? The AI stuff might become more like the movies, and the human stuff more like the books, where, yeah, some people enjoy that, but a lot of other people don’t care. So I agree and I disagree. I agree that I think they will be able to make
    full movies without actors. It’ll be like they can AI generate it, but I think
    it’s going to be like a genre, right? I think you look at like Disney movies,
    right? For the longest time, you had all of the Disney movies that were drawn
    by hand and animated the old fashioned way and then Pixar came along and then
    we got this like 3D style of movies. Well, Disney didn’t like ditch the old
    style of movies and only make the 3D Pixar style movies, right? They still
    made Frozen and Moana and all these other movies long after Pixar came out.
I think it’s just going to be a different style of movie. Creators will make movies, and it’ll be a big deal that they used AI for it, and it’ll be like its own genre of movies that use AI actors. But I still think people are going to want to go and see talented actors act out their craft. I still think that’s going to always exist. I don’t think AI is ever going to completely replace it to the point where Hollywood is only making AI-generated stuff. I just don’t see that happening. I think humans like watching other humans too much.
Yeah, I agree. I just don’t know what piece of the pie that’s going to be. I don’t know if it’s going to be, yeah, they want to see humans, but how many people is that? Is it like 5% of the market that wants to see humans, or is it like 90%? I’m not sure yet. Yeah, yeah. Which is why I think it’ll be its own genre. I think you’ve got people that will just
    refuse to see it. Like I don’t really go and watch rom-coms in the theater, right?
But that doesn’t mean there’s not a market for them, right? So I just think it’ll find its own market. I mean, right now they’ve already done
    like screenings of AI films in theaters and stuff. And to be honest, I love AI.
    I don’t really have any desire to go and sit through a fully AI generated movie
right now. I just don’t. The tech isn’t good enough yet for me to be that excited about sitting through that. You know, show me something that’s really impressive in two or three minutes and I’m good. I don’t need to sit through an
hour-and-a-half movie. Yeah. Yeah, I’m really excited for the idea of, you know, movies where it’s almost like when I was young, I would read those books where you can make choices, you know? Yeah, yeah, choose-your-own-adventure stuff. Did you see that? Yeah, Netflix did that with, what was it, Bandersnatch or whatever it was called, which was a cool experiment. I’m sure it was really hard for them to do, I’m sure the cost to produce that was quite high, and that’s probably why they didn’t continue doing it. But with AI, you’re going to be able to do that kind of stuff. I think that’s going to be a huge genre: you’re watching the movie, and it’s like, oh, something just happened, and oh, yeah, I want to do this, and now it generates it. And when it gets to the point where the quality is good enough, where it’s like, okay, it’s 99% as good as a Hollywood movie, that’s going to be so fun. Like, oh yeah, I want the character to go pick up a bottle on the bar and smash it, or whatever crazy thing I want to see happen. Just to be able to say that out loud and then it happens. That’s going to be so fun.
Yeah, it is. And the cool thing about that, the thing I think the movie studios will absolutely
love, is that the replay value of that content is huge, because every single time you watch that film,
    it’s going to be different, right? Like that’s where I think gaming is going to. You know,
    we’ve talked about this in the past, too. I think gaming, all the dialogue and gaming
    eventually is going to be generative. They’re going to have guidelines they need to stay
    within so they don’t sort of spoil the rest of the game or anything for you, right? You can’t
    go to a character and say, Hey, how does the game end? And it just tells you because it’s
    trained in the LLM, right? Like it’s got to have some sort of guidelines, but I think
    gaming and the sort of choose your own adventure content on Netflix. I think both of those kinds
of things are inevitable, because for the studios that create it, it just, like,
infinitely cranks up the replay value, the rewatch value of that content.
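[Editor's note: the "guidelines" being described for generative game dialogue can be sketched as a simple output filter. This is purely illustrative, not from any real game or engine; the spoiler list, NPC line, and matching logic are all assumptions, and a real system would enforce this in the prompt and with far better matching than substring checks.]

```python
# Screen a generated NPC line against story facts the player hasn't
# unlocked yet, so the character can't blurt out how the game ends.
SPOILERS = {
    "the king is the traitor",
    "the ending",
}

def guard(npc_line: str, unlocked: set[str]) -> str:
    """Return the NPC line, or a deflection if it leaks a locked spoiler."""
    lowered = npc_line.lower()
    for fact in SPOILERS - unlocked:  # only facts still hidden from the player
        if fact in lowered:
            return 'The old man shrugs. "Some things you must discover yourself."'
    return npc_line
```

Once the player has unlocked a fact through play, it drops out of the blocked set and the same line passes through untouched.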
Yeah. Yeah. I think of it like I’ve mentioned it before, but like Baldur’s Gate 3,
    massive, you know, world with like a huge story and the characters are super interesting and you
    can make all these different choices. But in reality, the story is kind of mediocre. It’s
    like, it’s not great. Like some of the Dungeons and Dragons stories are like, they’re okay.
    Like the world’s awesome. And so I’m like, for sure, AI can probably do as good of a job on
    the story. And if you could just create a new world every time, like being able to type that in
    and it just produces all that, that is going to be, people are going to be very addicted to these
    things. Yeah. Yeah. Well, I mean, you already have so many games right now that are already,
    you know, procedurally generated, right? Where the story doesn’t really revolve around the world
    that you’re in because the world’s different every time, right? Minecraft, Valheim, Fortnite,
like some of the most popular games in the world, are procedurally generated, where every time you get
dropped into a level, it’s, you know, following a set of guidelines, but that level is a completely
different level that most likely nobody else has seen before, you know. And I think AI just
takes that further, and that is what increases the replay value of a lot of these open world survival
    games is every time you play it. It’s a totally different game than the last time you played it.
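[Editor's note: the procedural generation being described here can be sketched in a few lines. The tile symbols, weights, and dimensions are illustrative assumptions, not from any actual game: the fixed rule weights are the "guidelines," and the seed is what makes every drop into a level different.]

```python
import random

def generate_level(seed: int, width: int = 16, height: int = 8) -> list[str]:
    """Build a tile map from a seed: same seed -> same level, new seed -> new level."""
    rng = random.Random(seed)  # deterministic per-seed random stream
    tiles = []
    for _ in range(height):
        # '.' = open ground, '#' = obstacle, '$' = loot; the weights are the
        # fixed "guidelines", the seed decides the actual layout.
        row = "".join(rng.choices(".#$", weights=[70, 25, 5], k=width))
        tiles.append(row)
    return tiles

same_a = generate_level(seed=42)
same_b = generate_level(seed=42)   # identical to same_a
other  = generate_level(seed=43)   # a level nobody with seed 42 has seen
```

The point of the conversation is that a generative model could play the role of the weighted rules here, with far richer output than tiles.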
    AI just amplifies that in my opinion. Yeah, big time. So yeah, it’s going to be really exciting
    times. We’re both excited to see how it plays out and we’re going to keep on making videos and
    podcasts and sharing the journey and showing what we’re finding. So make sure that you like this
    video. If you found it helpful, subscribe to this channel if you aren’t already because we have some
    amazing guests coming up and a lot more fun, interesting discussions like this. And once
    again, thank you so much for tuning into the Next Wave podcast, but we will see you in the next episode.
    [Music]

    Episode 13: What impact will AI-generated content have on the entertainment industry? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive into this topic, envisioning a future where AI generates interactive movies and complex gaming worlds with infinite replay value.

    In this episode, Matt and Nathan explore the potential of AI video tools such as Sora, Luma’s Dream Machine, and Runway’s Gen-3. They discuss how these advancements could democratize video creation, enhance b-roll, and expand creative possibilities, as well as the implications for copyright laws, gaming, and traditional creative industries. They also touch on George Lucas’ views on technological progress, Ashton Kutcher’s controversial support for AI, and the role of indie game developers in a rapidly evolving landscape.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Sora is the most anticipated AI video model.
    • (03:39) AI video tools improve, but have quirks.
    • (09:23) Runway Gen-3 is fast, unlike Luma’s Dream Machine.
    • (12:04) Excitedly exploring and creating with new AI.
    • (14:48) Custom Midjourney models personalize prompts, raise concerns.
    • (17:28) George Lucas acknowledges inevitability of AI development.
    • (21:40) Copyright law impacting AI and technological innovation.
    • (25:31) Copyright evolution in the new world uncertainty.
    • (27:28) TikTok boosted exposure for music artists.
    • (32:32) Excited about AI tech but still loves art.
    • (33:18) AI may replace video extras, changing Hollywood.
    • (38:52) Procedural generation and AI enhance game replayability.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • Are Coding Jobs at Risk? AI’s Impact on the Future of Coding ft. Python Simplified | Mariya Sha

    AI transcript
    In the end of the day, when you think of the future, the first route is a utopia.
    The second one is a dystopia.
    The third one is something that I kind of envisioned.
    Hey, welcome to the Next Wave Podcast.
    My name is Matt Wolfe.
    I’m here with my co-host, Nathan Lands, and today we’ve got an amazing guest.
    She has an amazing YouTube channel all about Python and how to code.
    Her tutorials are amazing.
    So many people learn Python from her, and we’re excited to have her on the show to
    talk about the future of coding and how it overlaps with AI.
    When all your marketing team does is put out fires, they burn out.
    But with HubSpot, they can achieve their best results without the stress.
    Tap into HubSpot’s collection of AI tools, Breeze, to pinpoint leads, capture attention,
    and access all your data in one place.
    Keep your marketers cool and your campaign results hotter than ever.
    Visit hubspot.com/marketers to learn more.
    So welcome to the show, Mariya Sha.
    Thank you for being on.
    Yeah.
    Absolutely.
    Thank you for inviting me.
    Great to meet you.
    So let’s talk a little bit about the AI world, because I know, you know, when we were out
    at GTC, the talk of the whole event was AI, right?
    Like everything is AI, and obviously coding is one of those areas where AI has completely
    taken over.
    I know Nathan was one of the sort of early adopters of GitHub Copilot, and yeah, I guess
    I just want to know, what are your sort of like overall thoughts?
    Let’s just kind of start from like the 30,000 foot like general overview.
    Like what are your thoughts of like that overlap of AI and coding?
    Are you excited that AI is making coding easier?
    Are you worried about coders losing jobs?
    Like where do you kind of stand on the whole thing right now?
    I’m slightly confused because I’ve tried a few of the AI models to kind of see what
    they’re all about.
    And I don’t see a lot of difference between just kind of copying the prompt and pasting
    it in a search engine.
    Like I think the biggest difference I see is the fact that when you do so with a search
    engine, you can see a whole bunch of sources and then you decide which one of these is
    your favorite.
    And you kind of go from there.
    But I find these co-pilots as a middleman between the developer and the documentation.
    And for me, being so nerdy, I really like documentation.
    I really appreciate it.
    It’s my favorite place to be.
    And that’s how I write most of my tutorials.
    So getting it from the source is more important to me than kind of saving a minute or so
    or even less.
    So that’s where I’m a bit confused, because I hear a lot of my viewers
    really excited about ChatGPT, about Copilot, and probably Devin, even though I don’t even
    know if it’s open to the wide public yet, but maybe Nathan can share his opinion about
    it.
    So that’s something I was thinking about too, for your YouTube channel.
    It’s got to be– I’ve seen other YouTubers as well who are like– their channel is primarily
    about coding that they’re very skeptical of it.
    And it feels like there’s a slight conflict there, too, though, because if AI does get
    so good that you don’t need to learn coding anymore, it’s like a lot of those channels
    that have to pivot to some other kind of content, which I’m sure you’re coming from a genuine
    place.
    And yeah, there’s definitely major limitations right now.
    And yeah, I just have a– like you guys were talking about the NVIDIA conference, Jensen
    said that in the future, you won’t need coders.
    When you think of the future, you need to think of all the possible routes that we can
    go through.
    And the whole purpose of artificial intelligence is to be competitive or better than us in
    all the cognitive tasks that humans are engaged in.
    And this was from the inception of AI.
    There is nothing new about it.
    This is in– people’s loss of jobs has been a very major ethical concern of AI.
    And people were reasoning about it since the 1950s.
    I noticed a very weird trend.
    I’m kind of– I’m taking you guys off topic because when– ChatGPT–
    Let’s go.
    Matt loves rabbit holes, so.
    When ChatGPT emerged, there was a wave of folks who were saying that it will never replace
    us.
    There were people saying that this will never happen.
    And I was kind of warning people about it.
    I was saying that, hey, if you’re filming videos and you’re saying on record that ChatGPT
    is better than you at coding, what are the chances that your employer will hear it?
    Why would you do that?
    Like, even if it’s true, why would you admit something like that?
    And I found that a lot of people were telling me, Maria, you’re crazy.
    It’s not going to happen.
    You’re just paranoid.
    You’re just speaking doomsday stuff.
    Like now that folks like Jensen are saying very similar things, maybe they take me a
    bit more seriously.
    I’m not sure.
    Yeah, there was a recent tweet from– I think his name is Ethan Mollick, the professor from
    Wharton, where he was sharing some kind of recent study that showed– I think he showed
    that 85% of employees in polls are saying that they’re using ChatGPT at work.
    And then also, I think it was something like 77% of them don’t tell their employer.
    So it’s like, a lot of people are using ChatGPT at work for emails and all kinds of different
    tasks, and they’re just not telling their bosses.
    So yeah.
    Well, it’s funny because I look at the whole scenario from somebody who doesn’t really
    know code.
    Like, I don’t know anything about Python.
    I don’t know how to write JavaScript.
    I know a little HTML and CSS, and that’s about it, right?
    And so when I look at it from that perspective, I was actually able to get into ChatGPT.
    This was back when it was 3.5, I think.
    And I was actually able to develop a game using JavaScript that was playable.
    It had graphics.
    I actually used Midjourney to generate some images, and it was this little side-scroller
    game where you jumped and collected coins, and it looked really good because I used Midjourney
    and all the code worked.
    So from somebody who doesn’t know JavaScript at all, I was able to go from– I have an idea
    for a real simple game I want to build to– I actually have a playable game, and I never
    actually touched a line of code myself.
    So I think, from the perspective of somebody who doesn’t code, I think it’s really exciting
    to see, oh, maybe I’ll actually be able to code now.
    From people that actually know how to code, it’s a different story, right?
    So I’m at this Cisco conference now out in Vegas, so anybody who’s watching the video
    and sees a different background, that’s why.
    But somebody on stage was talking about how right now, if you know how to code, it’s still
    actually faster to just code something than to use ChatGPT, because if you use ChatGPT,
    it’ll write the code, and then you spend just as much time double-checking and debugging
    the code to get it to work.
    But from somebody who’s never been able to code JavaScript, all I did was go, “That didn’t
    work.
    What do I try next?
    That didn’t work.
    What do I try next?”
    It took me hours to code the game, but I actually eventually got there without ever touching
    a line of code myself.
    So it’s kind of like– I can see why there’s excitement around it from
    non-coders, but I can also see why coders would be like, “Ah, right now it’s more of a nuisance
    than it is helpful.”
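[Editor's note: the loop Matt describes, run the code, paste the error back, try the next suggestion, can be sketched as a retry structure. `ask_model` here is a canned stand-in, not a real API call; the snippets it returns are illustrative.]

```python
def ask_model(error):
    """Stand-in for a chat model: given the last error (or None), return the next snippet."""
    canned = [
        "score = scor + 1",    # raises NameError ("that didn't work")
        "score = int('ten')",  # raises ValueError ("what do I try next?")
        "score = 9 + 1",       # finally runs
    ]
    ask_model.calls = getattr(ask_model, "calls", 0) + 1
    return canned[ask_model.calls - 1]

def build_until_it_runs(max_tries: int = 5) -> dict:
    """Keep asking for code and feeding errors back until a snippet executes."""
    error = None
    for _ in range(max_tries):
        snippet = ask_model(error)
        ns: dict = {}
        try:
            exec(snippet, ns)  # "run the game"
            return ns          # it worked, ship it
        except Exception as e:  # "that didn't work, what do I try next?"
            error = repr(e)
    raise RuntimeError("gave up after max_tries")
```

The structure is the whole point: the human never touches the code, only relays errors, which is exactly how Matt got a playable game in hours without knowing JavaScript.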
    For sure, for sure.
    There are a few reasons why folks who code for a living wouldn’t be using it, but
    I think it’s cool that it gave you these superpowers that you weren’t able to use JavaScript.
    You never learned it, and suddenly you have your own JavaScript game, which is amazing.
    I find it to be an incredible way to take your creativity and kind of manifest it in
    a way that you never thought you could.
    So it’s pretty cool.
    So in terms of the coding, the world of coding, there are a few reasons why.
    It’s probably not a good idea to use these type of models for that, because our industry
    is very dynamic.
    Things change on a daily basis.
    Just because something works now, it doesn’t mean that it’s going to work tomorrow or in
    a few days.
    Python has new versions almost every couple of weeks, and it’s something that keeps changing.
    So you need to be on top of things, and it’s an occupational hazard, you can say, that
    you learned something in university, and by the time you’re done, nobody’s using it anymore.
    So that’s number one.
    If you’re using a model like ChatGPT, how often is it being updated?
    If there’s an update happening in one of the libraries, how long before ChatGPT is aware
    of it?
    There’s a bit of a problem there.
    Another issue is vulnerabilities, cybersecurity, computer security.
    Because these models, they tend to produce very similar code.
    Whenever you have a similar prompt, it will show you a similar code.
    Basically it creates a way for very malicious people to exploit a lot of software all at
    once.
    So if you write a prompt of, help me make this game, this car game.
    When everybody gets the same piece of code for their car game, if somebody wants to write
    a malicious software to target this specific bit of code, it will apply on all the software
    at once.
    So whenever you’re using this code, make sure you do a bit of variation.
    That’s what I would do, at least.
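[Editor's note: Mariya's "do a bit of variation" advice can be sketched mechanically with Python's `ast` module: rename the identifiers in a generated snippet so two people who got the same code from the same prompt don't ship byte-identical programs. The snippet and name mapping are invented for illustration, and renaming alone is a weak defense; real hardening means reviewing the logic, this only makes the monoculture point concrete.]

```python
import ast

# Hypothetical snippet "everyone" got back from the same car-game prompt.
GENERATED = """
def move_car(pos, speed):
    pos = pos + speed
    return pos
"""

class Renamer(ast.NodeTransformer):
    """Rewrite variable and parameter names according to a mapping."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node):
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

tree = ast.parse(GENERATED)
tree = Renamer({"pos": "position", "speed": "velocity"}).visit(tree)
varied = ast.unparse(tree)  # same behavior, different text
```

`ast.unparse` requires Python 3.9+; the rewritten code behaves identically but no longer matches the common copy an attacker would fingerprint.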
    Yeah, it feels like both of those are just current limitations, at least to me.
    Because you’re talking about the complexity and things changing fast, and I think AI could
    be way better at dealing with that than humans.
    Humans are not very good at dealing with fast change and massive amounts of data and processing
    that data.
    So yeah, ChatGPT, GPT-4 kind of suck at this, but I think GPT-5, GPT-6, I think these problems
    are going to be solved.
    Perplexity also, they pull in more real-time data and feed it into the LLM.
    So I think in terms of packages changing and things like that, you probably already could
    get past that.
    Maybe just no one’s done it in a good way yet, but it feels like that’s something that’s
    going to be solvable.
    No, you’re absolutely right.
    The things that humans find easy are usually what AI finds very complex, and the
    things that humans find complex, such as mathematics, analytics, and reading large bodies of text
    and making conclusions out of them, are
    things that models are doing better than us.
    So basically, it’s all about a symbiotic relationship, I think, in the end of the day.
    We need to use these models to make our lives better, but it’s a challenge.
    Yeah, engineering is all about problem solving, right?
    I think ultimately, coding is just a way that you solve problems.
    So long-term, I kind of imagine it will have a new class of Uber engineers who are probably,
    they know how to code, but they’re also manning an army of AI bots that are helping them do
    things that they don’t want to spend time doing.
    The menial kind of coding task, right?
    But then they have the ability to go in, possibly with the help of a co-pilot as well, to kind
    of go in and where they need to to modify things.
    I think that’s going to be a really interesting world where you have these brilliant engineers
    who can go off and have a swarm of AI helping enhance what they do.
    For sure.
    Now, this is more of a philosophical question, but I’m definitely curious to hear your answer.
    If Jensen is right, and in five years, we don’t need coders, do you think, do you still
    think it’s important that people learn to code right now?
    I think this is a problem that is not unique to folks who code.
    I can say that about many industries.
    I can say it about accountants.
    I can say it about truck drivers.
    I can say it about many, many professions.
    So…
    100%.
    Is it worth doing anything?
    Right?
    Definitely a fair question, given the current state of AI.
    I think that there’s a few routes that we can go through in the future.
    The first route is a utopia.
    The second one is a dystopia.
    The third one is something that I kind of envisioned.
    So I imagined, instead of everyone using the same type of models, instead of millions of
    people using ChatGPT, let’s say that everyone will have their own type of AI that is customized
    to ourselves.
    So, for example, if I have Maria GPT, I will train it on the books I read, on the movies
    I watched, on the values that my parents taught me, on the collection of knowledge that I experienced
    through the years.
    I will train it on the countries I visited.
    I will train it on anything that will make it closer to myself.
    I will train it on my political affiliations.
    I will give it my biases, because everyone has biases.
    My model would love Python.
    And if somebody else uses this model, who loves C++, they’ll be very upset, right?
    But I’m the only one who’ll be using my model.
    And needless to say, these models will be not public.
    Everyone will have it stored.
    And I can see how this would be an enhancement of yourself.
    And it’s going to be proprietary to you, and it’s going to be used in a way to make you
    indispensable, rather than just part of the herd that is using the exact same model to
    accomplish the same task.
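[Editor's note: a stdlib-only toy of the "Maria GPT" idea above. This isn't training a model, just the retrieval step a personalized assistant could use: given a question, surface the owner's own note that best matches it. The notes, tokenization, and scoring are all illustrative assumptions.]

```python
from collections import Counter
import math

# Stand-in for the personal corpus: books read, trips taken, opinions held.
NOTES = [
    "I prefer Python over C++ for almost everything",
    "Trip journal: two weeks traveling through Japan",
    "Book summary: Clean Code, main takeaways on naming",
]

def _vec(text: str) -> Counter:
    """Naive bag-of-words vector: lowercase whitespace tokens with counts."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_note(question: str) -> str:
    """Return the owner's note most similar to the question."""
    q = _vec(question)
    return max(NOTES, key=lambda note: _cosine(q, _vec(note)))
```

A real personal model would go much further (embeddings, fine-tuning, private storage), but even this toy captures the asymmetry Mariya describes: the same question gets a different answer depending on whose notes sit behind it.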
    Yeah, that’s, I agree, it’s kind of like, for some reason, I’m thinking of like Star Trek
    versus Star Wars.
    I’m thinking like the Borg in Star Trek versus like the more like space pirate kind
    of vibe of Star Wars, you know, where like people are more independent and doing their
    own thing.
    Like, so I’m a big proponent of open source, but I’m also, you know, we’ve talked about
    this on the podcast several times now, like, I’m a big proponent of open source, and I’m
    also very worried that, and excited, I’m very mixed feelings about open AI, I think they’re
    probably very far ahead of other people, based on like rumors I’ve heard about GPT-5 from
    friends in San Francisco, I think they’re very far ahead.
    And so yeah, that could lead to a Borg type of scenario where, yeah, everyone’s outsourcing
    all of their thinking to this genius AI brain.
    And so everyone acts very, very similar.
    Now Sam Altman has said that like they plan to make it a little bit more personalized
    over time.
    And everyone, he said the same kind of thing, everyone has biases.
    And so they want to make it, they don’t want to make it left wing or right wing, whatever.
    It’s like, you know, whatever your biases are, like it learns that and it kind of adapts
    to you.
    So I hope they actually take that seriously long term so we don’t end up in a Borg type
    scenario.
    I mean, I’m on the same page.
    I think that’s where all of this is headed.
    I think that’s kind of like Sam Altman’s vision right now.
    I think that’s what, you know, Satya Nadella over at Microsoft, they’re trying to do that
    with everything at Microsoft, Sundar over at Google, starting to do that with Google.
    I think that’s sort of the vision all of these big companies have is turning this into
    like this personal AI assistant that is totally trained on you and what you like.
    And it can see your calendar, it can see your emails, it can, you know, go back through
    some of the conversations you’ve had and it just kind of knows everything about you
    and what you need to do next and can direct you through the day, hey, don’t forget, you’ve
    got this meeting at two o’clock and just sort of like be there with you all the time to
    assist you.
    I do think that’s sort of where that’s, where it’s all headed.
    And there was that interview with the CEO of Bumble as well, I don’t know if you saw
    that clip where she talked about like, that’s going to be the future of dating too, where
    you create like this AI avatar of yourself and your AI avatar that’s trained on you goes
    and dates other people’s avatars.
    And then when the two avatars find a compatibility, it comes back and says, here’s a match we’ve
    found for you.
    To me, it sounds awful, actually.
    That’s a dystopian one, yeah.
    We’ll be right back, but first I want to tell you about another great podcast you’re going
    to want to listen to.
    It’s called Science of Scaling hosted by Mark Roberge and it’s brought to you by the HubSpot
    Podcast Network, the audio destination for business professionals.
    Each week, host Mark Roberge, founding chief revenue officer at HubSpot, senior lecturer
    at Harvard Business School and co-founder of Stage 2 Capital, sits down with the most
    successful sales leaders in tech to learn the secrets, strategies, and tactics to scaling
    your company’s growth.
    He recently did a great episode called How Do You Solve for a Siloed Marketing in Sales
    and I personally learned a lot from it.
    You’re going to want to check out the podcast, listen to Science of Scaling wherever you
    get your podcasts.
    So for me, a future where Sam Altman controls our personal AIs that know everything about
    us is also kind of dystopian.
    You mentioned open source, and for many, many years, the models that researchers in
    the field of AI published were all accompanied by a paper that tells you with full transparency
    what kind of data the model was trained on, the architecture of the model, even what kind
    of equipment it used so that you can recreate the same conditions on your end and so you
    can provide what is called a peer review because it’s not truly science unless somebody else
    can recreate it and for many, many years I’ve been reading those papers and they were kind
    of the standard of publishing new architecture, so for example, AlexNet, which I’m sure you
    guys are familiar with, there’s ResNet, there’s VGG, they all are accompanied by papers that
    are fully transparent. So as we are entering this realm of proprietary models where people
    are hiding the type of data that they’re using, it worries me, because it hasn’t been the
    case for many, many years, so I wonder what changed.
    It did kind of seem to change with OpenAI, obviously. I don’t know if you’ve seen the
    whole argument on X that Elon Musk and Yann LeCun have been having, where Yann tried
    to claim that it’s not science unless there’s a paper, and Elon, yeah, there’s a whole drama
    going on.
    Both of them are scientists, both of them.
    The fact that they are passionately arguing with one another, it’s just an example of
    how scientific both of them are.
    Yeah, it is interesting though, even Anthropic, who seems to be pretty ethical, pretty above
    board, pretty safety-minded, still has a closed model that they don’t share what’s going on
    underneath the model, which is interesting to me, but yeah, I wonder why that is.
    It does seem like it kind of started from OpenAI, because isn’t GPT-2 fully documented?
    Can’t you go and isn’t that one openly available to run off of?
    You can download it, you can use it in your software, you can get it right now if you
    want, from Hugging Face.
    Yeah, so it must have been GPT-3 that started it all.
    Yeah, they’re saying it’s from the advancement of the capability and what that enables is
    what they’re saying. Balaji, I don’t know if you know Balaji, but he tweeted yesterday
    something about the people who are against open source AI, they started with left wing
    talking points and then now they’ve migrated to right wing talking points and they’re
    trying to get both sides on the same page because they’re like, “Okay, we need this
    because otherwise it’s going to be biased against certain people,” and so they started
    on that angle and then now they went to more the national security thing of like, “You
    don’t want China to get this,” and that’s starting to get some traction even in Silicon
    Valley.
    So I have a lot of connections of people, I used to mentor for Peter Thiel’s 20 Under 20, so
    I have some connections at Founders Fund and they’ve been tweeting stuff about this recently
    where they seem to be more supporting closed source AI for this national security reason,
    whereas obviously A16Z and most other VCs are really supporting open source because, I mean
    even if it’s for their own personal reasons, you can’t really have many AI startups if
    everything’s closed source and you have to rely on open AI, but it is interesting.
    I would say the national security one is the one that I have mixed feelings about because
    I totally get what they’re saying.
    They’re like, “Yeah, if we open all of this up and then China or Russia or whoever can
    just copy it,” that is the one argument that I have really mixed feelings about.
    Yeah, I’m going to say something kind of controversial here, I think, but I think that’s kind of underselling
    China too.
    I think China is pretty far ahead.
    So many of the papers that I’ve seen lately have been coming out of China, not the US.
    But their LLMs are really far behind, and my understanding is their reason is because
    of their censorship.
    So they actually have a lot of issues there where their censorship is actually hurting
    how they can train their models because they don’t even want the people who are training
    the models to be able to talk about sensitive topics.
    And so it’s actually like really, apparently that’s one of the reasons they’re behind.
    That’s what I’ve heard from friends from China.
    I’m sure they’re trying to really bake their sort of governmental bias into all the AI.
    Yeah, which slows everything down, right?
    I think that when it comes to China, they’re more focused on facial recognition and kind
    of preventing you from using certain services.
    Because the whole idea about China is that they have an app that is like the everything
    app.
    They call it WeChat.
    They use it to pay for transactions.
    They use it to communicate to one another, send messages.
    They get coupons for food through this app.
    So everything you want to do, they do on this app.
    And obviously all this information goes to the source that trains those artificial intelligence
    models.
    So their LLM capabilities, it’s reasonable that they’re not there, obviously,
    because there’s a lot of censorship there.
    But there are other models that, I bet you, they’re way more advanced than what we have.
    And we wouldn’t know because they haven’t been used against us yet.
    Also wonder if TikTok is directly connected to China, because there’s a lot of information
    that people voluntarily post on TikTok.
    I don’t have a TikTok, so I don’t know.
    But I know that a lot of people are posting three videos a day about all the things they
    do.
    I wonder if they already know us really, really well.
    And they don’t really need LLMs because they just use the video input to train a model
    that is also looking at the facial expressions you have.
    I think you’re probably right there because TikTok is known for having the best algorithm
    in social media.
    So I think it’s a fair point.
    The best or the most addictive?
    I think it’s probably the most addictive.
    Both.
    It’s the most addictive because they know people the best.
    They understand people and I do believe, like I’m a proponent of TikTok being divested
    because when COVID went down, I had a lot of friends, like I actually studied Mandarin,
    I lived in Taiwan.
    So I’m probably slightly biased on the Taiwan side, but because I have a lot of Silicon
    Valley friends who are all like Taiwan founders or Taiwanese Americans.
    And like during COVID, there was like a thing that kind of went down that was not really
    talked about because everyone was talking about COVID.
    But during the same time period, China just like took over the boards of like every single
    major tech company in China.
    And people don’t really talk about that, but it’s kind of why it was like a wild transition
    to happen.
    Because before it was like, yeah, maybe they kind of control it, but it’s not like directly
    controlling it.
    It’s like, yeah, they can threaten them or whatever.
    Maybe the US government even does that kind of stuff.
    But like during the COVID time period, they literally inserted board members into all the
    major tech companies.
    And so they basically control every single major tech company in China.
    And so yeah, I would not be surprised if they also have the same, you know, China is involved
    in TikTok and using that data to get way ahead in AI.
    And like you said, probably we don’t even know.
    Maybe it’s an architecture that is way beyond, you know, LLMs.
    They just don’t share it with us.
    So I don’t know if this argument of national security is entirely applicable, because yeah,
    they can copy it, but you still need to train it, right?
    Because even if you show people how you made your model, you still need to have the equipment
    and the time, you know, the processing power to train it.
    And we’re talking about models like ChatGPT that are being trained actively for years
    on supercomputers.
    And it’s very hard to catch up to them, even if they come up like, and that’s the reason
    why I don’t understand why they don’t post, you know, the data, because no matter how

    But they want Taiwan and all the computing is happening in Taiwan.
    Of course.
    Well, it’s an important center.
    Yeah.
    I think every piece of electronic in the world depends on this manufacturing facility.
    Even cars, even cell phones, things like that.
    So if they’re being, you know, taken out of the equation, we’re facing a very weird future.
    At least those of us who haven’t invested in a very fancy computer.
    There’s not going to be a lot of parts.
    Yeah.
    I know, like, I know Yann LeCun has been very outspoken about his belief that AGI isn’t going
    to come from large language models, right?
    It’s going to come from some other sort of AI model, like some sort of world simulator
    model or something like that.
    Who knows?
    You know, maybe China’s already got something like that and they’re like, yeah, let them
    have fun with their little LLMs.
    This is what we’re working on, you know?
    You never know.
    They probably wouldn’t make it public and put it on GitHub if they were working on that.
    You will know about it when it’s too late.
    Right.
    Exactly.
    Exactly.
    I’m curious about your thoughts on the sort of ethics of how the code is trained, right?
    Because recently we heard about this partnership between Stack Overflow and OpenAI, allowing
    OpenAI to basically train on Stack Overflow’s data and then people started going and trying
    to sabotage the code that was on Stack Overflow to try to poison the data.
    How do you feel about coder’s work just being trained into the data without their permission?
    I think if it was a different organization, if it wasn’t OpenAI, if it was like some
    open source type of model, Mistral, or something along these lines that was entering
    this partnership with Stack Overflow, I don’t think those moderators would go to the length
    of deleting their contributions, right?
    Because it’s been there for years.
    It is general knowledge, like it’s something that I’ve been using for many years and since
    this website existed, it was helping many people solve their errors.
    I think that people mistrust OpenAI in particular, first of all, because it’s not really open.
    How can you call your company OpenAI if it’s not open?
    That’s the number one.
    Closed AI.
    Right?
    It’s like the number one warning sign.
    That’s number one.
    Mm-hmm.
    Second of all, I think that there’s just a major level of mistrust to something that
    is not open source in the field of programming.
    We support open source from the very get-go, you know, in everything. Even in the computer science
    program that I’m taking right now, I’m doing a computer science BSc.
    My lecturer was teaching us about open source.
    He was basically explaining why Windows is not as good as Linux, you know, these type
    of stuff.
    We have it for years and we’ve been teaching people for years about these concepts.
    And suddenly there’s this big company that everyone is using.
    It is far ahead of everyone else.
    They got a supercomputer for free from Jensen, you know, back in the time because they were
    open and free and they suddenly decided to kind of close all the doors and take all this
    transparency and just throw it to the garbage.
    Why would they do it?
    You know, is it because of money?
    I think they have enough money.
    I don’t think it’s because of that.
    So what are you hiding?
    And a lot of people are thinking that.
    So these moderators, I don’t think they just have a problem with some AI learning,
    because you can web scrape any piece of information from any website.
    You can make a bot that copies all the data you want.
    It’s not a problem.
    Plus, all the data on Stack Overflow, as far as I’m concerned, is public.
    You don’t need to log in to be able to access it.
    So from a legal perspective, you can already use it.
    You don’t need to go into this partnership.
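    The claim that anyone can copy publicly accessible pages is easy to illustrate. Here is a toy sketch using only Python’s standard library; the HTML snippet and the `answer` class name are made up for illustration (not Stack Overflow’s real markup), and a real bot would first fetch the page, for example with `urllib.request`:

    ```python
    from html.parser import HTMLParser

    # Made-up stand-in for a publicly accessible page; no login required to read it.
    PAGE = '<html><body><div class="answer">Use a context manager.</div></body></html>'

    class AnswerExtractor(HTMLParser):
        """Collects the text inside <div class="answer"> elements."""

        def __init__(self):
            super().__init__()
            self.in_answer = False
            self.answers = []

        def handle_starttag(self, tag, attrs):
            if tag == "div" and ("class", "answer") in attrs:
                self.in_answer = True

        def handle_endtag(self, tag):
            if tag == "div":
                self.in_answer = False

        def handle_data(self, data):
            if self.in_answer:
                self.answers.append(data.strip())

    parser = AnswerExtractor()
    parser.feed(PAGE)
    print(parser.answers)  # ['Use a context manager.']
    ```

    Point the same parser at any page that loads without a login and it will happily collect the text, which is the speaker’s point: technically, no partnership is required to read public data.
    
    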
    So it’s very weird to me that Stack Overflow agreed, because I don’t know what they’re earning
    in this equation.
    I think they’re worried.
    Yeah.
    I think because, like, apparently their traffic’s way down because of people not using Stack
    Overflow.
    They just ask ChatGPT the same question.
    I don’t think they had a plan B. I think ChatGPT came up and ate a bunch of
    Stack Overflow’s lunch.
    Stack Overflow tried to create their own OverflowAI, or whatever they were calling it.
    They created their own model and nobody cared, because they were already indoctrinated into
    ChatGPT, and they went, well, okay, if we can’t beat them, join them.
    I think that’s kind of what happened, honestly, but I also think, you know, the whole like
    ethos of like open source is like, I’m putting this code out there, but if you build something
    with this code, also make it open source, right?
    Like that’s sort of the whole like open source code, whatever you build with this open source
    stuff, make that available.
    And so it’s weird to me that companies like OpenAI can go train on all of
    this code and then close it off, when all of this stuff was put publicly available and was
    designed for: whatever you build with this, also make that publicly available.
    You know?
    Yeah.
    I agree.
    It’s kind of shady.
    You know, I don’t know what the purpose of it is, but it makes me mistrust them.
    And when you mistrust... you know, you had a lot of trust towards Stack Overflow as a developer,
    from the get-go. Like, it’s the best source to solve your errors.
    And, you know, you can find multiple solutions to the same problem and choose what kind of
    solution you prefer, basically.
    So it’s a very trusted name that has a lot of integrity, right?
    So when it goes into partnership with something like that, for folks like ourselves, like the
    majority of the audience that is still using Stack Overflow, we never left.
    We kept using it, right?
    It’s kind of a slap in the face. And I don’t think that Stack Overflow should
    have been chasing them, you know, preventing them from deleting their comments, because that’s
    their decision, you know; they entered this partnership.
    They should have known there’s going to be a backlash.
    You know, they have marketing experts that, you know, chat with users, they understand
    what’s going on, you know, it was clear to me, and if it was clear to me, I assume it
    was clear to Stack Overflow.
    So antagonizing them further, basically telling them, you can’t touch your account,
    it’s suspended, we are checking if you really meant to delete your messages or not... like,
    come on, you guys, that’s even worse.
    You’re just burning the creators that create on your platform even more.
    They’re training on all the GitHub data too, right?
    I think that’s an even bigger deal, because that’s the actual code itself.
    GitHub is owned by Microsoft and, you know, Microsoft and OpenAI are, they’re like this.
    You know, I would imagine all the GitHub stuff is, you know, freely available to OpenAI.
    Yeah.
    And why not?
    You know, it’s public data. Like, maybe the private repositories you cannot really
    access, but as long as something is public data... And I know it because there was a company
    that was scraping data from platforms, and they were accused
    (Bright Data, that’s their name) of, like, stealing content. They were sued, but when they came
    to court, when it was time to actually talk about the laws behind it, it’s like,
    hey, we never logged in to, you know, whatever platform we were taking information
    from.
    Anyone can copy it.
    So what is the difference if a bot, an automated entity, is copying it, or somebody
    that physically sits with a keyboard does it?
    And I think they won.
    Yeah.
    Yeah.
    I mean, we’ve been seeing a lot of the same sort of arguments over on the art side as
    well, right?
    You know, a lot of these AI art generators have trained on other people’s art, and
    a lot of that has gone to court and been fought there, and same with the writing too.
    We had a lot of authors take ChatGPT to court saying they trained on their writing, and almost
    all of these cases end up getting thrown out, because, well, it’s publicly available data.
    I guess they’re allowed to train on it.
    So yeah, it’s, it’s definitely interesting, but I personally have very mixed feelings
    about it all, right?
    Like, I was actually, for a little while, doing photography and uploading it to sites
    like Shutterstock to earn, you know, the income from selling the stock photography.
    Well, a couple of years go by, and now we find out that, okay, Shutterstock trained their
    new AI model on all of the images that were uploaded.
    And Adobe did the same thing with Adobe Stock, right?
    Everything that’s been uploaded to Adobe Stock over, you know, however long Adobe Stock’s
    been around, all of a sudden that became training data for their AI art generator.
    Well, cool.
    When I was uploading photos years ago to try to make money, I didn’t really say you had
    permission to use that photo as part of the training data.
    I don’t really care that much, because I’m, like, in AI and I’m an AI optimist, and
    I’m kind of fine with it. But I definitely see both sides of that coin,
    you know?
    Yeah, absolutely.
    Yeah.
    I think it’s a gray area.
    I think it’s a gray area where, like, you know, there’s a reason OpenAI doesn’t
    want to talk about it, right?
    Like the CTO, there was that famous clip of her stumbling when she was asked if they were
    training on YouTube data, and she was like, I don’t know exactly, you know, kind of thing.
    And then obviously she knows the answer, like obviously she knows.
    And I think it’s because probably when they even started the company or when they started
    really thinking about this stuff, they would have talked with lawyers and they would have,
    it’s kind of like when Uber launched, where technically it could be argued that Uber was
    illegal when it started, but it was a gray area where they knew they had a legal argument
    because there was no precedent, exactly.
    And so there’s no exact precedent here because, you know, you have public data, but you can’t
    just copy public data.
    So they’re going to argue, well, the AI is learning from it.
    It’s just like if you go to a museum and you see some art or whatever, you’re learning
    from that.
    You’re not copying it.
    And so I’m pretty sure that’s like the legal case they will make long term, but I think
    they’re really trying to delay having that huge legal battle, but I think it’ll happen
    at some point.
    Because right now, say I decide to train my artificial intelligence model on a subset of data.
    You can only see the final product.
    You will not see what kind of data I trained on.
    I don’t have to share it.
    I don’t have to say anything about it, right?
    It’s unethical, but you would never know because all you do is you take a bunch of images,
    you feed it into a neural network, and then it finds patterns in those images and it basically
    creates something new out of this data.
    So you can say it’s like a musician.
    So, for example, the Beatles were inspired by Elvis, and Pink Floyd were inspired by the
    Beatles.
    So who copied from whom?
    It’s the chicken and the egg.
    And you’re right when you’re saying that it’s inspiration because what the model creates
    is not an exact match to the training data.
    It has to be something else.
    They pull it out of latent space when it comes to generative AI models.
    And this latent space, I don’t think anybody owns it, and those types of things, we should
    have thought of it way before.
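    The latent-space idea mentioned here can be shown with a deliberately tiny toy. Everything below (the weights, the 2-D latent, the 8-value "image") is invented for illustration; a real generative model learns its decoder from millions of images, but the principle is the same: sampling a new latent point yields an output that matches no single training example exactly.

    ```python
    import random

    # Toy "decoder": maps a 2-D latent vector to an 8-value "image".
    # These weights stand in for patterns a real network would learn from
    # training images; all numbers here are made up for illustration.
    WEIGHTS = [
        (0.9, 0.1), (0.7, 0.3), (0.5, 0.5), (0.3, 0.7),
        (0.1, 0.9), (0.6, 0.4), (0.2, 0.8), (0.8, 0.2),
    ]

    def decode(z1, z2):
        """Turn a latent point (z1, z2) into an output vector."""
        return [round(w1 * z1 + w2 * z2, 3) for w1, w2 in WEIGHTS]

    # Two "training" images sit at particular latent points...
    train_a = decode(1.0, 0.0)
    train_b = decode(0.0, 1.0)

    # ...but sampling anywhere else in latent space yields something new,
    # influenced by both yet identical to neither.
    random.seed(0)
    generated = decode(random.random(), random.random())

    assert generated != train_a and generated != train_b
    ```

    Swap the linear map for a trained neural network and the picture scales up: outputs are drawn from the learned latent space rather than copied from any one input, which is the "inspiration, not duplication" argument being made here.
    
    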
    Right now, when we’re talking about the legal copyright implications, it’s something we
    could have talked about years ago, before it became a problem, before people’s livelihoods
    were affected by Shutterstock and places like that.
    By the way, Firefly, Adobe’s Firefly, is amazing.
    I might actually use it in my final project.
    I’m doing a final project for university, and I think that that’s my way to go.
    I’m going to make a database.
    But even with Firefly, though, they were pitching that it was the responsible way to use AI
    images, and then apparently they trained on Midjourney images.
    They basically just went another layer down and were like, “Yeah, we didn’t do it.
    They did it.”
    Yeah, well, I mean, they didn’t specifically go out of their way to train on Midjourney,
    but they allowed people to upload Midjourney images to be sold as stock images.
    So the Midjourney images were uploaded to Adobe Stock, and then when Adobe went to
    go and train their Firefly model, well, there was a ton of Midjourney- and Stable Diffusion-
    generated images in it already, because they were allowing people to sell those images as
    stock.
    So they didn’t go and train specifically on Midjourney, but Midjourney and Stable
    Diffusion and DALL·E images were in the data set, because people were allowed to sell
    that stuff on Adobe Stock.
    That makes more sense.
    It’s crazy.
    If you start nitpicking what pieces of data went into the model, it’s a never-ending story.
    You can always find somebody who contributed to create this image.
    So what?
    They deserve to get paid too.
    It’s a rabbit hole that I don’t…
    And how much?
    Right?
    I’m a very simple person.
    I like the internet how it was when I was 12, which was basically wild, wild west.
    You could copy anything.
    You could get anything.
    You can download anything.
    And as a content creator, I know that it’s a terrible thing to say because people will
    copy my content.
    You know what?
    If they want to, please, it only helps Python get taught.
    I make enough money from what I do.
    I don’t care if somebody else is basically using some of it to kind of grow this Python
    world and kind of teach others.
    If a teacher is using it in his lecture, I’m not going to chase him, asking him for some
    royalties.
    It doesn’t make any sense.
    Why would I do it?
    Same goes for a musician where some 12-year-old kid decides to use his piece of art, his music,
    in his video.
    Why would this musician chase the kid?
    He is sharing his music with the world.
    So yeah, you don’t get paid for it, but you get paid enough.
    It should cover fan art, if you want to call it that.
    So I find it upsetting that people are really nitpicky with those copyright laws.
    And if we weren’t, maybe the internet would have been a nicer place.
    Yeah.
    Yeah.
    I mean, I really think we’re getting to a point where copyright in general just needs
    to be rethought.
    I don’t know the solution, but I think it just needs to be rethought because we’re going
    to be able to generate images that look like other people’s images but weren’t created
    by that person.
    We can already create songs that sound like other people’s songs but weren’t created
    by them.
    When we get into Sora and Veo and some of these new video generators, we’re going to
    be able to generate videos that look like the style of other people’s videos but weren’t
    created by them.
    It’s just going to get so muddy that I don’t feel like the way copyright law was originally
    written is still going to be relevant.
    It just needs to be rethought.
    I agree.
    Absolutely.
    But I don’t know how to rethink it.
    I’m not the one to figure that out.
    Actually, I had meetings in the Library of Congress with people in the Copyright Office.
    I don’t know if you know my last startup, Binded.
    We started off not being involved in copyright, and then, unfortunately, our startup kind
    of pivoted towards copyright. But we were not trying to do enforcement and stuff
    like that; it was more just attribution and things like this.
    I spent a lot of time thinking about how you can change this, and I met with people in
    the government, and now I’m convinced that I can’t.
    They have no interest in changing things.
    I don’t think there’s going to be a fundamental like, “Oh, the government’s going to decide
    that copyright should change” or something.
    Who knows?
    Who knows?
    I mean, it could be a generational thing.
    Right?
    Maybe once there’s a little bit more turnover from the more elderly people in the government.
    Maybe some of the younger generations will see that, “Okay, technology has changed a
    lot since a couple hundred years ago when so many of these laws were written.
    Maybe we should update some of this stuff for the way the world is now instead of the
    way the world was a hundred, two hundred years ago.”
    Yeah.
    Maybe GPT-6 will help us figure it out, you know?
    Absolutely.
    Well, this has been an absolutely amazing conversation.
    We’ve had a blast talking to you, Maria, and I want to make sure that people can go check
    out your stuff.
    Where should people go and follow you after tuning into this episode?
    Where’s the best place to learn tutorials from?
    Yeah, the best place is always YouTube.
    That’s my main platform.
    This is where you can find shorts, you know, like TikTok-style shorts.
    You can find tutorials that are quite long.
    This is where you find me explaining very complex concepts in a simple language that even a
    six-year-old can understand, hopefully.
    It’s called Python Simplified.
    Python Simplified.
    Yeah.
    Python Simplified.
    You can find me on YouTube.
    You can find me on X as well, even though I’m not there very often.
    The best place to find me is YouTube: Mariya Sha, Python Simplified.
    Well, very cool.
    Thank you so much for hanging out and talking code with us.
    This was a conversation that we wanted to have, but we wanted to bring on somebody that
    knows a little bit more about the coding world than us.
    I’m so thankful that you were able to join us and actually have this conversation with
    us.
    I really appreciate it.
    Awesome.
    Thank you.

    Episode 12: Are coding jobs at risk with the rise of AI? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive into this compelling topic with guest Mariya Sha (https://x.com/mariyasha888), a seasoned coder and the creator of the popular YouTube channel Python Simplified.

    This episode delves into the contradictions and synergies between artificial intelligence and coding, featuring Mariya Sha, who started coding at a young age and later found success with her YouTube channel that simplifies Python programming. Together, they explore the changing landscape of coding due to AI advancements, ethical concerns, and the future of AI-integrated coding environments. Mariya shares her skepticism and hopes for the future, particularly AI’s potential impact on coding jobs and the importance of a personalized touch in YouTube content creation.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Confusion about AI models and documentation use.
    • (05:35) Exciting potential for non-coders to code.
    • (08:36) AI is better at handling fast change.
    • (09:49) Engineering and coding solve problems, with AI help.
    • (14:51) Future AI control raises transparency and ethical concerns.
    • (17:03) Debate over open source AI vs national security.
    • (19:33) Concerns about LLM capabilities and potential surveillance.
    • (25:20) Jensen’s free supercomputer and transparency questioned.
    • (26:24) Lack of plan B led to GPT domination.
    • (32:02) AI model training ethics and inspiration discussion.
    • (35:04) Sharing is important, copyright laws are nitpicky.
    • (36:43) Startup pivoted towards copyright, government unwilling to change.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • Is the Apple-OpenAI Deal a Stopgap? Elon Musk’s Next Move, and the Future of AI Partnerships

    AI transcript
    Yeah, and it’s not artificial intelligence, it’s Apple intelligence.
    Seems like we’re like we’re heading towards a world where you’ll be able to open up your phone
    and it’s actually going to have tons of context about you and know who you are.
    Once these features are all rolled out into the tech, I think that’s when we really see AI like
    mainstream adoption. When all your marketing team does is put out fires, they burn out.
    But with HubSpot, they can achieve their best results without the stress.
    Tap into HubSpot’s collection of AI tools, breeze to pinpoint leads, capture attention,
    and access all your data in one place. Keep your marketers cool and your campaign results hotter
    than ever. Visit hubspot.com/marketers to learn more.
    Hey, welcome to the Next Wave Podcast. I’m Matt Wolf. I’m here with Nathan Lanz. And on this show,
    it’s our goal to keep you completely looped in on the world of AI, all the latest news,
    tools, updates, drama, all of that fun stuff. We talk about it here on the Next Wave Podcast,
    so that you’re always looped in. And today, we’re talking about the Apple event, WWDC 2024.
    Apple has been dancing around AI for months and months and months now. Last year, when they did
    WWDC, they literally did not mention the words AI once during that entire event. People sat there
    and combed through the transcripts. They said machine learning. They said neural networks.
    They said everything but AI. Well, this year, they kind of flipped that. And everything was about AI.
    Yeah, Apple intelligence. Yeah, I was pretty impressed with it, though. I mean, I think when
    we had a previous episode, I kind of put my score at, what, like a three or something in terms of
    how good I think it was going to be? I think it came out at probably, I don’t know, a six-ish,
    seven? Yeah. I mean, I thought it was better than I expected because they have their own AI,
    which I was surprised by. So, it’s not all just OpenAI stuff. They’ve got their own LLM. The
    benchmarks were, you know, looked pretty good, similar to GPT-4 in several areas. And also,
    it seems like they’re really, you know, planning on integrating AI into the entire OS, like the
    entire level. It seems like we’re heading towards a world where you’ll be able to open up your phone
    or your Mac and just talk to your device, whether it’s like with voice or text. And it’s actually
    going to have tons of context about you and know who you are and kind of know the kind of stuff you
    want to do. And I think that’s actually going to be one of the first really, really mainstream
    use cases is when regular people can just talk to their devices and it’ll help them do whatever
    they need to do. Yeah, no, I agree. I think this event... well, I don’t necessarily
    know if this event is what’s making AI mainstream. But once these features are all rolled out into
    the tech, I think that’s when we really see AI like mainstream adoption because it’s just like
    in the devices that everybody uses anyway. You know, a lot of the stuff, like a lot of the
    criticism that Google gets is they have these big announcement fest where it’s announcement after
    announcement after announcement and then rolling out soon, coming summer 2025, you know, coming
    next year, they make all these big announcements. And then we don’t get our hands on it. We’re like,
    that looked really cool. It’s really impressive. When do we get it? And then when they finally
    roll it out, we, you know, we get stuff like, Hey, you should put glue on your pizza, right?
    They kind of pulled a Google, where with every single feature they showed, they had this, like,
    feature after feature after feature, but none of it’s ready. None of it is in the
    device. None of it do we have access to yet. They say Apple Intelligence is coming fall 2024. So
    still very, very vague. That’s like a three month window that it could fall in. And, you know, I
    feel like that’s very un-Steve-Jobs-like. I feel like when Steve Jobs used to do keynotes like this,
    he would get up on stage and talk about all of these cool features and all of this new tech that
    they’ve built. And then at the end, he would say something like, Oh, and by the way, it’s available
    for you in stores today. And then the crowd would just go like wild, right? Yeah, the one more thing.
    And then, Oh, by the way, it’s actually like out, it’s in the store right now. Like, Holy crap,
    you know, yeah. And there was an interesting interview with Steve Wozniak, you know, the other
    co-founder of Apple. And they were asking him, like, what did you think about it? He’s like, Oh,
    it all seemed cool, you know, but I want to actually try it and get my hands on it and see if it
    actually works and how it works, you know. And he made this joke about their whole, you
    know, AI rebranding, rebranding it as Apple Intelligence. He’s like, I have my own actual intelligence.
    Overall, with the event, I was, I was sort of impressed, but I was also sort of like,
    there was nothing that they showed off that I hadn’t seen before, right? It wasn’t like some of
    the past OpenAI keynotes where they would show off a new feature. And I’d just be like, Whoa,
    I didn’t see that coming. Like this is something out of the blue that’s completely new, right?
    Like when we saw Sora for the first time, and everybody’s like, Yeah, whoa, this is such a
    huge leap from what we get from Runway and Pika and ModelScope and all these other tools that
    were available for text to video, right? Everything Apple showed off was like, Oh, cool, we’ve seen
    that before: OpenAI has done that, Google’s done that, Anthropic’s done that. Like, everything
    that we saw, we’ve seen before. It’s just now, like, got that Apple flair on it, you know.
    The one thing I thought was pretty cool was the whole helping you organize your emails. I mean,
    that seemed like a small thing, but actually, I thought, like, for business people, that’s going
    to be pretty huge. Text messages too, and they’ll actually learn from that as well. And like I
    said, if you’re talking to, you know, Siri in the future, it’s got to have context from those
    emails and text messages, which is exciting. But it also then goes into all the privacy concerns.
    People were talking about this. We’re like, you know, Apple announced a lot of the same kind of
    stuff that Microsoft announced. Yeah, when Microsoft announced it, people were like, Oh my God,
    privacy, privacy, like, Holy crap, they’re, you know, stealing all our information. And then
    Apple announced very similar things, and people were like, this is amazing, I can’t believe
    they’re doing this for us.
    Apple, I think, has built this reputation of like, we put a big focus on privacy, right? So like,
    they are doing most of the AI inference on device, right? So when you are asking questions, it’s
    trying to do it all on the device. So theoretically, you don’t even need to be connected to the internet
    and you should still be able to have conversations with your AI. But they do have, like, their
    own AI cloud that’s somehow private, fully encrypted. So it’s encrypted on the device,
    it’s encrypted on the cloud side, nobody supposedly can get access to it. And then also they have the
    option for it to send your queries to OpenAI, which they claim is all completely anonymized;
    it doesn’t collect your IP. So even OpenAI isn’t getting any data on who is actually asking the
    question or when the question was asked. The big question that I have about all of that is, like,
    how is that getting paid for? Right? Like, right now with OpenAI, you can obviously use ChatGPT
    for free, but it’s very limited. I think you get like 10 queries and then it kicks you out for
    an hour or whatever, right? Yeah. How is it that on device, we’re going to be able to just have
    infinite conversations with OpenAI, but even paying Pro members can’t have unlimited conversations
    with OpenAI? So I don’t totally know how that part’s going to work out yet. They did say you can,
    like, connect your API key or your login to OpenAI, and you’ll actually be able to use your premium
    features and that sort of thing. But I don’t know, there’s still a little bit of, like, ambiguity for
    me. Yeah, I mean, we talked about that in the previous episode with Matthew Berman, right? Where
    we were talking about who’s paying for it. Is OpenAI paying for it, or is Apple paying for it?
    Yeah, yeah. I’m personally convinced that most likely Apple’s paying for it, if I had
    to guess. Yeah. Tim Cook probably met with Sam Altman. Sam Altman probably showed him what they’ve
    got coming. Yeah. And they had to compare it, right? Like for sure, Tim Cook talked to Google,
    he would have talked to Anthropic, he would have talked to everyone, and he would have seen their
    private demos of what’s coming next. And then whichever one was best, that’s the one he would
    pick. And if he thought that there was no way that Apple was going to catch up anytime soon,
    that’s why he would make an alliance with someone like OpenAI. And so I’m convinced that’s what’s
    happened. I’m convinced, like, yeah, Apple has some pretty cool AI that’s like GPT-4 level. And it’s
    like, oh, by the way, all the rumors you hear about GPT-5, yeah, there’s a reason those
    rumors exist. Yeah, it’s gonna be way, way, way better. It’s a leap forward, probably coming later
    this year. And then that’d be perfect, right? Because then they would be able to power Siri with
    that. Like, yeah, they showed some cool demos with, you know, updates to Siri. But yeah, you
    could make, you could have a GPT-5 powered Siri later this year, which would really blow people
    away. When OpenAI did their GPT-4o demo, I think everybody saw that demo and went,
    this is what Siri should have been. This is like what all of us imagined Siri would be, right?
    And so I think Apple probably saw that. And this is just pure speculation, right? But I feel like
    Apple probably saw that and went like, yeah, that’s what we want for Siri. That’s where we want to
    get to with this. But I don’t know, I still think that Apple is the type of company that’s going
    to use OpenAI or Google Gemini or whatever as, like, a stopgap until they can just build their own
    version of it just as good. It may be a few years, but I still think they’re going to do what they
    did with Intel and then their own Apple Silicon, right? They’re going to use this product until
    their own product is ready. And then when their own product is ready, they’re just going to be like,
    bye, bye, we’ve got our own thing now. Yeah, I agree. That’s what they’ll try to do. Like,
    the question is whether that’ll actually make sense. Like, if OpenAI stays so far ahead, then
    yeah, maybe that’s the game Apple thinks they’re playing, like, oh yeah, we’re just going to catch up.
    But I’m not sure with AI that’s how things are going to play out, right? Because, like,
    as soon as AI gets to self-improvement, whoever gets that, they kind of win. And if OpenAI’s, like,
    GPT-6 starts improving itself, Apple’s most likely not going to catch up, possibly ever.
Yeah, when you can go into ChatGPT and say, hey, ChatGPT, make yourself smarter. And it’s like,
    okay, I just did. Yeah, yeah. Hey, real quick, you and I both know how quickly AI is evolving.
    So if you’re tired of constantly playing catch up in the AI space, I’ve got just the solution for
you. HubSpot’s AI for Business Builders Guide is a four-step resource that covers prompt engineering,
API integration, fine-tuning, and product development. If you want to get a grip on the tech,
boost performance, and get the most out of your AI investment, this guide will be your saving grace.
    They figured out a way to make this info easy for non-techie folks to understand,
    and they’re giving it away for free. Read the guide, get your AI questions sorted,
    and kick off the AI journey in your business today. Click the link in the description below,
    and now back to the show. Despite all of the sort of cool AI features that they’re rolling out,
    I feel like Apple also really played it safe with a lot of what they did with AI.
    And here’s what I mean by that. So they’re building out a new image generator. I already
    forgot what they’re calling it, but they have their own AI generative art feature in there.
    But if you notice when they showed off the generative art feature, it gave you three options.
    It gave you animation, sketch, and illustration. No option for realism. So I think Apple was like,
    we’re not going to let you generate AI images that look like real images because we don’t want to
    ever be associated with being capable of creating deep fakes. When you look at their large language
    model for text generation, the on-device stuff is really just using the context of what it sees
    on your device. It knows your emails, your text messages, what apps you use, things like that.
    And it’s sort of internally training on that so that when you ask questions or ask it to do tasks
    for you, it does that based on what it knows on your device. And then it has their own cloud thing.
    But the questions of like, hey, write me an article about X or what’s the latest news on Tesla or
    whatever, all of that, it sends to a third party company. So if there’s ever any sort of trademark
    infringement, copyright infringement coming out of the content that’s generated, it doesn’t
land on Apple. It lands on OpenAI because all of the actual sort of unique new creation,
the new generated content all comes from OpenAI stuff. So they’ve sort of like
    washed their hands of that as well. So I feel like they made some really, really smart plays
    in the way of going, we’re not going to be liable for deep fakes. We’re not going to be liable for
    trademark infringement from generated text. We’re not going to be liable for, oh, they’re scraping
    news from websites without permission. They’re letting other companies do that and then just
    tapping into those other companies. Yeah, but I mean, I wonder how that’ll work long term, right?
    Because like, yeah, they’re tapping into those other companies, but they could just like cut
    people off. Like if there’s like apps in the App Store long term, like AI apps, like they just,
    they easily just shut anyone off that they don’t like, right? So I do wonder long term how they’ll
    adapt all that. We’ll be right back. But first, I want to tell you about another great podcast
    you’re going to want to listen to. It’s called Science of Scaling, hosted by Mark Roberge.
    And it’s brought to you by the HubSpot Podcast Network, the audio destination for business
    professionals. Each week, host Mark Roberge, founding chief revenue officer at HubSpot,
    senior lecturer at Harvard Business School, and co-founder of Stage 2 Capital,
    sits down with the most successful sales leaders in tech to learn the secrets, strategies, and
    tactics to scaling your company’s growth. He recently did a great episode called,
    “How do you solve for a siloed marketing and sales?” And I personally learned a lot from it.
    You’re going to want to check out the podcast, listen to Science of Scaling wherever you get
    your podcasts. They also rolled out that whole feature where you can use AI to create emojis,
    I think, right? Yeah, yeah, yeah. And someone was like, “When are you going to see the first,
    you know, AI emoji, you know, Hitler or something like that?” And I’m like, “Oh, yeah.” I’m sure
they get those kind of things. Like, I’m sure they got like a blacklist of words that you can’t
produce. Yeah, I’m sure it’s going to happen probably pretty quickly after normal people
get their hands on it, honestly. Yeah, but I feel like the big sort of like major lawsuit-type
    stuff, they’ve sort of put quite a bit of risk mitigation in place for that right now.
    What did you think about that tweet from Elon Musk? Did you see the one where he was talking about,
    you know, if Apple does this, he’s like, “This is outrageous. If they do this,
    like, I’m banning Apple phones from all of my companies and you’ll have to not bring, you can’t
    even bring the phone in if you’re a visitor.” Yeah, yeah, yeah. Like, that’s like, holy crap.
    Yeah, yeah. I mean, my thoughts, my thoughts on Elon at the moment is,
    Elon is the king of engagement farming. Yeah. I mean, and he doesn’t even need to be the king
    of engagement farming. He’s already, all he has to do is tweet anything and millions of people
    are going to see it. But I think he likes stirring the pot. I just think Elon loves going on Twitter,
    X, whatever you want to call it, and just stirring stuff up. Like, I just think that’s his MO.
    I don’t really think he would ban iPhones. I think a lot of his employees would be pissed
    off about that. Like, I don’t, I just, I can’t see him doing that. But he was basically saying,
    like, if it was like running locally on their phones, like, what did the actual tweets say?
    If they integrate OpenAI at the OS level, then Apple devices will be banned at my companies.
    Yeah. So do you really see Apple integrating OpenAI at the OS level? I don’t know. I have a hard
    time seeing them do that. I think OpenAI, they’re still going to use the API. It’s still going to
    send to the cloud. It’s still, you know, it’s still going to go that direction. I don’t actually
    see Apple integrating it at the OS level. Yeah. He probably watched the video and it was kind of
    vague about how those integrations are going to work. And he probably made some really quick
    assumptions and tweeted this out. I could see him being generally, you know, like he
    currently hates OpenAI. That’s like, now he is like, enemy number one, you know. Yeah. I could
    see him being like paranoid because like, yeah, in theory, you could have situations where, yeah,
    the phones are just like listening to you and like it’s feeding it to this AI brain. So yeah,
    you wouldn’t want somebody from this company that you hate having a device where it’s like
    feeding information to them in theory. Yeah. Yeah. Yeah. I mean, I definitely see where his concern
is. I just, I don’t see Apple going that direction. You know, Apple teased, like, right now you can
connect to OpenAI with more models coming in the future. So that makes me think that, you know,
    you’re going to ask a question and it’ll, you know, how it has a pop up now that says do you
want us to send this to ChatGPT? It might have like, you set a default, right? Oh, I want my
default to be ChatGPT. I want my default in the same way, like if you use Safari, for instance,
on your mobile phone, you can go and set the default search engine, right? You could say
I want my default search engine to be Google, I want my default search engine to be DuckDuckGo.
Not that anybody actually does that, but you could. I think it’s going to be similar to that.
    I think you’re going to be able to go into like your Siri settings and then say, what’s the default
    large language model you want to use if we have to send this off to the cloud, it’ll give you
    the options of Gemini, Anthropic, OpenAI, you pick the one of your choice, and then it kind of goes
and does that in the cloud and then sends the response back. So an X phone, that was the other
thing, that tweet kind of started a whole thing on Twitter where people were speculating about one. Oh, that wouldn’t
    shock me at all. I feel like Elon just wants to have his fingers in every industry. So that would
    not shock me at all. That could be like the reason, like you’re saying he’s like engagement,
    farming to then set up for something like that, right? Yeah, yeah, like that’s his, that’s his
    market research right there. I’m going to tweet this, see what people say about it. And if there’s
    enough demand, I’ll go build another company around it. Yeah, I mean, actually, he just really
    doesn’t like this new alliance between Microsoft, Apple and OpenAI. It’s like, yeah, I get that too.
    But Microsoft and Apple are still not like fans of each other, right? I know, it’s a really,
    it’s a really odd relationship, the whole thing. Yeah, because supposedly there’s rumors that
    Microsoft is upset that OpenAI is now working with Apple and, but there’s nothing Microsoft can do
    about it. And so it’s a lot of interesting drama. Yeah. And there was also that rumor before the
    event that like, oh, maybe they’re like building robots together. Because everyone’s been saying
    that Sam Altman is like hiring people to build robots right now. That’s like, that’s a rumor
    in Silicon Valley. And I think they were doing the job posting or something about it too. And so
    they’re like, well, maybe they’re just going to collaborate with Apple or my crazy theory was
like, you know, OpenAI invested in that robotics company Figure. Like, I would not, I would not
be shocked if Apple or OpenAI or someone just snaps up Figure and just rolls that up internally to
    then build robots and then put their AI into it. So I assume now Elon is just seeing all of them as
    a threat. Like he’s really doubling down on robots, possibly at Tesla, if he actually, you know, if
    his whole pay package, that’s another whole thing that’s going on right now. But like, if his pay
    package gets passed, he’ll probably do it at Tesla. If not, I assume he’s going to be doing it at XAI
and not Tesla, which is going to be dramatic. I also heard recently too that the lawsuit that
    Elon Musk had against OpenAI, he finally dropped it. So there must not have been a whole lot there.
Yeah, well, there was that whole thing where the emails came out too, right, where it was shown it
was not exactly as he had painted it, you know, because basically he walked
away. Right. So it was like, they didn’t kick him out of the company. He was like, yeah, I want to
go in a different direction. You guys disagree. So yeah, well, he basically said, I want control,
right? He said, I want control. I want to be the decision maker or I’m out. From the Apple
keynote, just to quickly sort of recap it, like, what are some of the things that
    you thought were like the coolest things to come out of the keynote? The stuff about like, okay,
you can move around the icons on the screen or wherever you want to put them. I was like, oh, what is
this? Like Steve Jobs would be like freaking the hell out if he saw that. Like, what, you can just
make your thing a freaking mess now. You can just do whatever. That’s like the opposite of what Apple’s
always been about. It’s like, okay, they’re basically like MySpace-ifying the Apple device now,
like where you can just like put all these crazy colors and just change everything.
    Well, they’re Androidizing it, right? They’re taking the things that people seem to like about
    Android and going, okay, we’ll do that too, I guess, which Steve Jobs would have never taken that
    path, right? He would have been like, everybody else is doing it. We need to find a better way to
    do it, you know? Yeah. And it was a lot of stuff around like, you know, using Siri and then like
    the context there, too, were like, oh, it actually knows, like, I usually make this kind of appointment
    or I do this. I think they showed a few demos of that kind of stuff. And that was interesting to
    me. But again, that’s all like, it’s all stuff that’s like coming later on. And like, it’s not
    like a hands-on demo or I don’t think it was, it might have been. But I think a lot of it was
    pre-recorded, right? So I guess, you know, I think pretty much all of the keynote that we saw was
    pre-recorded. Yeah. So I mean, we’ll see like when it’s actually live, like how, you know,
    all these things like latency matters, like how many times does it make mistakes, matters a lot.
    So like, all the demos they’re showing, it’s very fast. And it gets everything right. Like,
when you actually use it, if it’s slower, and it makes a lot of
    mistakes, well, that’s a dramatically different thing. So yeah, yeah, I think, I think the on-device
    stuff will probably be pretty dang fast. I think when it has to go to OpenAI or their cloud,
    it’s going to be a bit slower. And if you don’t have internet, then you’re not going to be able
    to use those features. Yeah. I actually did think, and this sounds silly, but I actually
did think the calculator app was pretty dang cool on the iPad. Yeah, people were, people were memeing
on it before the event, right? They were like, oh yeah, guys, it’s time to like sell Apple. It’s
    like, you know, Steve Jobs watching the new Apple event. It’s like, we’ve got a new calculator app.
    Yeah, yeah, yeah. But the way they did like the math notes, where you could just sort of like
    write the math problems in it, like solves it, as it watches your handwriting, and you can draw
    graphs and charts and it’ll find the angles for you and do all that kind of complex stuff.
    Now, honestly, I don’t think I will use that a lot. Like there’s not a whole lot of scenarios
where I’m sitting around just like handwriting math on a day-to-day basis. But I did think
it was really cool. Yeah, I was like hoping they would bring something like that into
like the regular Notes app. I use this one, uh... They announced that they will. Yeah. Oh, okay.
    Yeah, yeah. They said it’s going to be on notes for iPad, iPhone, Mac OS, all of that stuff.
    Oh, cool. I mean, because right now I use this Japanese app called Numie, which I kind of like,
    it’s like basically like this really simple, uh, notes app, where you can also, you can basically
    have like variables and things like that, where you like say like, oh, this equals this, and you
    can like add up numbers and it’s pretty cool. Like you can like do basic code stuff in there as well.
    Yeah. I mean, I thought the upgraded Siri features with the context of your phone, right? Like you’ve
got these companies like Rabbit and Humane that have the AI Pin and the Rabbit R1 and that kind of
    stuff. And everybody was like, why do we need a handheld device? I actually, the funny thing is
I’ve got a Rabbit R1 right here. It arrived on the same day as the Apple Keynote. So I watched the
Apple Keynote, I got a ring at my doorbell, and my Rabbit R1 arrived, you know, months and months
    and months after I ordered it, months and months, you know, a month or so after everybody said it
    sucks. But I’m like, I’m not going to cancel it. It’s going to be a relic of the future of AI. Like
    I just want to put it on my shelf and never use it. But one day people will be like, oh, you got one
    of those. You must be an idiot at the time. But anyway, the point is I got this the same day they
    made those announcements. And I feel like Siri, when it rolls out on our phones, will be able to do
    every single thing this can do plus more. So it’s like, what did we need these things for again?
Yeah, yeah. I mean, you know, it’s kind of amazing it’s taking them this long to make
Siri good, right? Like it’s been like, what, over 10 years now? And yeah. Any betting pools on
what comes out first between GPT-4o voice access and Apple’s new version of Siri?
I wonder what we see first. Oh, I think definitely OpenAI. I think OpenAI. Yeah, I assume that’s
coming. I would bet like a month or two max, I would say. Yeah. Well, somebody, I mean,
obviously don’t put any weight on this at all. But somebody told me that they asked ChatGPT
when the voice feature was coming out. And ChatGPT told them June 16th. And they said,
they claimed that they asked it in multiple ways in multiple like versions and did all
sorts of stuff to like confirm it. And every single time they asked, it kept saying June 16th.
So obviously take that with a grain of salt because AIs hallucinate and that’s most likely a
hallucination. But I bet, I bet we actually do get the voice features from OpenAI this month.
    If I had to guess, I bet it’s in June. Yeah. There’s been a lot of things where they’ve said,
    oh, it’s coming out. And it came out like two or three months later. There’s been like several
    times that’s happened. So that’s this, that’s why I’m saying like a month or two. That’s just my
    gut feeling. Yeah. So all of this stuff from the keynote is, is super exciting, really,
    really cool, really fun to watch. But how, let’s discuss this a little bit. How do we think it’s
    actually going to impact the world? How do we think it’s going to impact normal people?
    How do we think it’s going to impact businesses? Where do you think it’s going to take the world?
    Because we sort of talked at the front of the show about how we believe that this is what’s
    going to make AI mainstream. And I do believe that because it’s going to be in one of the most
    popular devices on the planet, right? Everybody who has an iPhone is just going to have this AI
    in it now. And they may not even realize they’re using AI. Like when people are using things like
    Alexa and Siri, those are very, very like archaic versions of AI. I mean, they’re technically AI.
    They’re just like, compared to what we’ve got today, very archaic versions. Yeah. And people
    don’t think of those as AI. So it’s interesting because I think we might be moving into this
    world where right now everybody’s freaking out about AI. You’re either in like two camps. You’re
    either like, oh, AI is the coolest thing ever. It makes my life so much easier. Or you’re in the
camp of like, I hate AI. It’s killing creativity. It’s making everybody dumber. You know,
kill AI before it kills us, right? Like those seem to be the two camps that everybody
is in. And I feel like this will probably bring that middle ground where people are using it every
    day, but not thinking about it as AI. That’s how I see it as well. I was like, you know,
right now, a lot of people, if they know about AI, they’re thinking about ChatGPT. And then,
    yeah, that’s scary. And it’s just text. Or if they don’t know that much, they might actually
    think about Siri. Like my mom, she was thinking AI is like Siri. And she thought state-of-the-art AI
    was Siri, right? And so, and that technology is so outdated and so, so bad compared to what
    we currently have. So I think for a lot of people, it’s going to be shocking when it’s just like on
    their device all of a sudden. They buy the new iPhone and now they could just talk to it and
    it understands stuff about them. And they can also chat with it. I think that’s going to be like the
    mind-blowing moment where they’re just like, holy crap, what is this magic? And they’re not going
    to be thinking about like, oh, is it AI? What’s it going to do to my job or whatever? They’re just
    going to be blown away by the actual magic of the experience, I think. And I think that’s where
it’ll actually really start to go mainstream, because it’ll kind of transcend beyond
just like thinking about jobs or whatever the latest news story is. It’ll be just like,
    this is magical. And I love it. I think it’ll be that simple.
    Yeah. And I also think, you know, I think a lot of people struggle with what’s the
    practical use case in my life, right? Like, I think a lot of people look at chat GPT and go,
    that’s cool, but I’m not writing essays. I don’t need it to, you know, write my essay for me.
    I’m not a YouTuber. I don’t need it to write outlines for me. I’m not a blogger. I don’t
need it to help me write blog posts. You know, they look at stuff like Midjourney and Stable
Diffusion and, you know, maybe like Sora and some of these like AI video tools and they go,
    that looks really cool. I don’t have a use for that in my life, but that’s really cool that
    that exists, right? I think a lot of people look at AI and go, oh, that’s, you know, that’s cool
    tech that’s out there that some people are using. I just don’t see how to integrate that in my daily
    life. I feel like with what Apple showed off, it clicks a little, right? Like with what Apple
showed off, it’s now like, okay, I got 10 text messages while I was on a flight, right?
I was on a flight from San Diego to New York. I landed, I got 10 text messages.
Crap. Well, it’ll now prioritize the most important text message for you. Most likely
it’ll be a thing on your home screen that’s like, all right, here’s what you need to look at now,
    save the rest for later, right? Same with like emails. You know, when it comes to email,
one of the things we were talking about offline was, you know, Kipp and Kieran over on the Marketing
Against the Grain show, they were having a conversation about how this is going to massively
    affect marketers, right? Because if your emails are now all getting prioritized to like the most
    important emails rise to the top and the ones that tend to be newsletters or, you know,
    marketing emails or things like that, those are going to get de-prioritized because now your
    email reader inside of your iPhone is going to read all your emails for you, figure out a quick
    summary and give you a one sentence summary of what that email is about. So you can quickly decide
    whether this email is important to read or just archive it without even opening it because it’s
going to tell you that right from the home screen, right? You and I, Nathan, we have newsletters.
I’ve got the Future Tools newsletter. You’ve got the Lore newsletter. That’s going to impact us.
If you do a good job with your newsletter, people should be
looking forward to those newsletters and opening them up. But if you’re like a marketer who just like
sends affiliate links about here’s the latest, coolest tool you should check out, buy through my
affiliate link, like a lot of that stuff’s just not going to be as effective as it once was because
    that’s going to get de-prioritized. Eventually there’s going to be an AI that’s like, hey,
you haven’t opened any of this person’s emails for the last 10 emails. Do you want me to just
unsubscribe for you? Yeah. Like that’s coming, guaranteed. Yeah. Yeah. I think if you have
good quality, it’ll probably actually recommend your newsletter because like, oh, I read this
newsletter every single week, I’m always engaging with it. You’ll probably even see it more,
if I had to guess. But yeah, if you have the one that’s like, yeah,
    occasionally click it and it’s like, oh, I don’t know why I opened it, but maybe I’ll open it
    again sometime and close it again. That person, they may no longer see that at some point or get
    auto unsubscribed. Or they’ll eventually just do what Google does to us where you ask the question
in Google and Google just summarizes it. They’ll get the email and, like, they’ll hover
over the email, and it’ll just say, here’s the 10 pieces of news you need to know about without even
    opening the email, which would kind of be a bummer because then sponsors will no longer want to
    do that. But there are implications towards businesses and people who focus on marketing
    and newsletters and things like that. I think as a user of the iPhone, I love that idea.
    As somebody who sends newsletters, there’s a little bit of, you know, I’m a little scared
    that my newsletters are going to be seen by less people in the future, right? So
there’s definitely pros and cons to that. But, you know, the other thing I was talking to
Bilawal about, who was on the show recently, was that the notifications on your
    iPhone are a mess, right? Like whenever I look at my phone, I’ve always got like just a bunch of
    jumbled notifications that are all overlapping each other. And that really hasn’t updated in
    a long time. Like there’s been no new updates to the way the notifications look. Right. Well,
    with AI, it’s going to start prioritizing those notifications. It’s also going to pay attention
    to which ones you pay attention to. And if you never pay attention to them, you’re just going to
stop seeing them. Yeah, and I bet too, with them putting AI in the OS as well, like you’ll be
able to actually talk to it about the settings you want to change. Like, I hate how this looks. I
    hate how that works. Yeah, yeah, yeah. And instead of like looking through all these freaking options,
    like if you look at the settings on the iPhone now, there’s like so many crazy, they always add
new things, you’ll just be able to talk to it and it’ll just change it for you, versus you
having to spend all that time. So I think that’s the kind of stuff where like regular people are
going to love this because like it’ll save you time. Like, in some ways
you’ll spend less time on the devices, at least doing things that you don’t want to be doing.
Right. I don’t really like sitting on my phone, messing with settings and looking at crap, or
scanning through a hundred emails when I only want to read one or two of them.
    Like, and I think that’s where people will start to just really love AI because it just
    makes their life better. And they’ll, you know, that’ll, that’ll be what they remember over all
    the news stories about everything else. Yeah. But I think Apple’s going to do their best to
    try to make you forget that you’re using AI. Yeah. So yeah, it’s going to be interesting. I think
    it’s going to change a lot of the, I think a lot more people are going to be onboarded into
    using AI on a daily basis without ever actually realizing that they’re using AI on a daily basis.
    Yeah. And you know, I think, I think it’s a net positive. There’s some stuff that worries me about
    it, but I think overall it’s going to make our lives better, easier, less notifications, less
    clutter. If you are a marketer, if you are sending newsletters and things like that,
    you got to find a way to do better to stand out to be the one that does get prioritized,
    you know? So I think the overall net of it is going to be positive for people. And
it’s only a matter of time before Google, I mean, Google with Android just does this all the time,
right? Oh, those are some cool features, that’s now going to be in the Google phone also, right? So like,
all of the phones that are running on Android are probably eventually going to get
anything that Apple showed off. The funny thing is most of the stuff Apple showed off
is already available in most of the Android phones, except a lot of the AI features, which are sort of,
    you know, kind of novel. Awesome. Well, that’s all we got for you today. Thank you so much for
    tuning into the Next Wave podcast. If you enjoyed this episode, make sure that you like this episode
    and subscribe to this channel on YouTube, on Spotify, on Apple podcasts, wherever you’re
    listening or watching this show. It really helps us grow the show, get in front of more people.
    And thank you once again for tuning in. We really appreciate you. We’ll see you in the next episode.
    Bye.

    Episode 11: Is the Apple-OpenAI Deal a Strategic Move or Just a Stopgap? Nathan Lands (https://x.com/NathanLands) and Matt Wolfe (https://x.com/mreflow) delve into the intricacies of the recent Apple and OpenAI collaboration.

    This episode explores the potential limitations for OpenAI’s pro paying members, the speculation around Apple possibly paying for OpenAI’s tech, and the implication of this on the future AI developments by Apple themselves. They also dive deep into the rapid advancements of AI, the risks of deepfakes, trademark infringements, and how AI’s integration into daily devices could reshape our interaction with technology.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Apple’s features not available; vague timeline expected.
    • (04:11) Overall, not completely new, already seen before.
• (08:24) OpenAI’s GPT-4o demo wows, potential for Siri.
    • (11:04) Language model uses device context for generation.
    • (15:13) Apple unlikely to change, default settings discussed.
    • (17:09) Rumors about robots in Silicon Valley abound.
• (21:28) Upgraded Siri, Rabbit R1: future AI.
    • (25:38) New AI will be shocking when it arrives on iPhone.
    • (27:24) iPhone updates prioritize important emails for users.
    • (31:41) AI integration will change daily life positively.

    Mentions:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • AI Entrepreneur Matthew Berman On The Power of LLMs – “It blows my mind.”

    AI transcript
    In the long run who wins oh man, let’s just say I’m a big believer in open source. I hope you’re right
    Yeah, they’re taking the scorched earth mentality the scorched earth strategy and by the way
    I’m a hyper competitive person, so I love it
    Hey, welcome to the next wave podcast. My name is Matt Wolf. I’m here with my co-host Nathan Lanz and today
    We’re talking to a serial entrepreneur and AI expert Matthew Berman. He’s got a very popular AI YouTube channel
    But let’s just go ahead and get right into it. I’m curious about your story a little bit
    I you know, we see all of your YouTube videos. You seem to be super tapped into the AI world
    But how did you get into AI in the first place?
I saw ChatGPT along with the rest of the world and I was completely enamored with it
    so I decided to start a YouTube channel just to document my
    learning process and hopefully share my learnings with other people and
Luckily my third video went pretty viral. It was about Leonardo.AI and I thought oh well, this is easy and
    Yeah, the channel just went from there and midway through last year. I went full-time on it
    When all your marketing team does is put out fires, they burn out fast: sifting through leads, creating content for infinite channels, endlessly searching for disparate performance KPIs. It all takes a toll. But with HubSpot, you can stop team burnout in its tracks. Plus, your team can achieve their best results without breaking a sweat with HubSpot's collection of AI tools, Breeze. You can pinpoint the best leads possible, capture prospects' attention with click-worthy content, and access all your company's data in one place. No sifting through tabs necessary. It's all waiting for your team in HubSpot. Keep your marketers cool and make your campaign results hotter than ever. Visit hubspot.com/marketers to learn more.
    When it comes to YouTube and creating a lot of this AI content, what's your source of information for all of this? It's one of the questions I get asked a lot: how do you keep your finger on the pulse of all this stuff? And I know for me, just keeping my finger on the pulse is literally my full-time job now. It seems like you've found this niche for yourself. I mention you in my videos, and a lot of the other AI creators mention you in their videos now as the guy to go watch whenever a new large language model comes out. Like, Phi-3 just came out, and I was like, Phi-3's out. I could talk about it, but Matthew Berman's gonna do a better job than me, so go check out his channel because he'll probably make a video about it. I love that you're doing that. How did you fall into that? What is it about large language models, and comparing them and figuring them out, that made you want to go down that rabbit hole?
    I had this real love for the local model: being able to download a model, have it on my computer, run it, and, I know this isn't exactly true, but essentially have the entirety of world knowledge in just a handful of gigabytes on your computer. Still, just saying it out loud blows my mind. And so I wanted to benchmark the models that I was playing with.
    I'm curious, right now, what's the best local model you could try for text?
    Yeah, I'm gonna mention two companies, and I'm sure everybody watching this has heard of them. Obviously Meta, Meta AI, with the Llama 3 models. Those are probably the best, and maybe only slightly better than Mistral's models. So Mistral AI has Mistral, and they have Mixtral, which is a mixture-of-experts model. If I'm going to recommend a local model, obviously it depends on how much RAM you have, how much VRAM you have, but generally you're gonna choose either one of the Mistral models or one of the Llama models. Now, Microsoft just recently released the Phi model, P-H-I, Phi-3, and those are pretty capable. They tend to be smaller, and they tend to need more fine-tuning for specific tasks, so they don't have as broad knowledge, but they're really performant and they are still very high quality. So between those three families of models, probably my favorite is gonna be the Llama 3 models, but all three of them are fantastic.
    Yeah, I've used Llama 3 quite a bit, especially when you combine it with Groq. Not Grok with a K, but G-R-O-Q. When you use it with their hardware, what are they called, the LPU, their language processing unit or something? The speed that you get back from the models on there is just insane.
    The Groq company, yeah, it's kind of nuts. They run Llama 3 8B at something like 800 tokens per second. And you would think, okay, wow, you've achieved something incredible, now take a break. But no, now they're like, okay, that wasn't enough, we're just gonna continue to work on increasing speeds. I can't remember the exact number, but they have some absolutely insane input token speeds, I think it's like 2,000 tokens per second. Don't quote me on that. That company is kind of nuts. They are all about the speed and they're doing really well.
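    Berman's caveat earlier that picking a local model "depends on how much RAM you have, how much VRAM you have" comes down to simple arithmetic: a quantized model needs roughly (parameters x bits per weight / 8) bytes, plus runtime overhead. A back-of-the-envelope sketch; the flat 1.2x overhead factor is an assumption, since real usage adds KV-cache and context-dependent costs:

```python
def approx_model_memory_gb(n_params_billions: float, bits_per_weight: int,
                           overhead: float = 1.2) -> float:
    """Rough memory footprint for running a quantized model locally.

    Back-of-the-envelope only: the overhead multiplier is a guessed
    stand-in for KV-cache and runtime allocations, not a measurement.
    """
    bytes_total = n_params_billions * 1e9 * bits_per_weight / 8
    return round(bytes_total * overhead / 1e9, 1)

# An 8B model at 4-bit quantization fits in a handful of gigabytes,
# which is why it can run on an ordinary laptop.
print(approx_model_memory_gb(8, 4))   # ~4.8 GB
print(approx_model_memory_gb(70, 4))  # ~42.0 GB
```

    This is also why Phi-3-class small models matter: halving parameters or bits roughly halves the footprint.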
    So would you say they're a competitor to Nvidia? Are they making chips to compete with Nvidia, or do they work in tandem with Nvidia chips?
    They are definitely a competitor to Nvidia. However, Nvidia doesn't actually offer inference as a service. They sell you the chips, and then these big data center companies can buy the chips and offer inference. Groq used to sell chips and offer inference, and they actually acquired this company, I'm forgetting the name of it now, but essentially a company that's like the front end of the inference service. So like I mentioned, they used to sell chips, and then at a certain point, maybe a month ago, they decided to stop selling chips and just offer inference. So they are building out massive data centers with their own chip technology and strictly offering an endpoint, an API endpoint, or a cloud service like ChatGPT, except it's lightning fast and you can use different open-source models.
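    A sketch of what "an API endpoint... like ChatGPT" means in practice: Groq serves an OpenAI-compatible chat completions API, so a call is just an HTTP POST with a bearer token. The endpoint path and model id below are assumptions that may change, so check Groq's current docs. The request is built but deliberately not sent:

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against Groq's documentation.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(api_key: str, prompt: str, model: str = "llama3-8b-8192"):
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    return req, payload

req, payload = build_request("sk-demo", "Why does low-latency inference matter?")
```

    Sending it would be `urllib.request.urlopen(req)` with a real key; because the shape is OpenAI-compatible, the same code works against other hosts by changing the URL.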
    So are they trying to be cheaper and faster, but lower quality, where in some use cases that's okay? Is that the kind of thing?
    Yeah, you're probably right, Nathan. If you're talking about that last 5% of quality, GPT-4o is gonna win, but open source is catching up quickly when it comes to large language models.
    Have you found that this model is good for X, and this model is good for Y, and this model is good for Z? Have you found certain models to be better for specific use cases?
    Yeah, and that's actually really why I like open source, because these companies like Meta will put out this raw model, like a Llama 3, and then Eric Hartford will make the Dolphin flavor of it, and all of these versions are very good at a particular thing, whether it's being uncensored, or role-playing, or math, or coding. In fact, Mistral just released Codestral, I believe it's called, just today, so I haven't had a chance to look at it. But that is also why I love open source: you can have all of these fine-tuned models specific to different use cases, and in that particular use case, nine times out of ten they'll be better than a GPT-4. More broadly speaking, GPT-4o is just a better model across the board. But if you look at each individual use case and you find the best open-source model for it, typically you can find one that is just as good, and yes, much cheaper, and you can get it to be much faster.
    It's interesting, because you've got OpenAI and you've got Anthropic. They're both closed models, right? You've got to basically use them through their website, you can't install them locally, we don't really know what they're trained on. It's all closed off. And then you've got the open models like Llama, Mistral, Mixtral 8x7B. Google has Gemini, which is closed source, and Gemma, which is open source. And Mistral also has a mix of both closed- and open-source stuff as well.
    Exactly.
    In the long run, who wins? I know that's a very loaded question, but I'm just curious what your initial gut thought is.
    You know, when you have a person, or I should say a company, Mark Zuckerberg with Meta AI, dumping hundreds of millions of dollars into buying these server farms, chip farms, attracting the best talent in the world, and then just giving it away for free? Crazy, right? They're taking the scorched earth mentality, the scorched earth strategy. And by the way, I'm a hyper-competitive person, so I love it. They were behind, right? They weren't anywhere close to OpenAI, so they're using the scorched earth strategy. I think what is likely going to happen is that for a while, closed source, specifically Claude and ChatGPT, will probably be three to six months ahead of open source. But it's like the four-minute mile: once you've seen somebody else do it, you know it's possible. And so once you see, for example, a GPT-4o, where they have the multimodal model, the voice sounds super real, and all of a sudden the intonations in your voice can be an input to the model, other open-source builders look at it and say, okay, now we know it's possible, let's go do that. So I think there's going to be this gap, but it's going to shrink over time.
    And here's my other thought about who wins in the long run. If you're OpenAI, you have ChatGPT, and the models themselves are becoming commoditized quickly. So you have ChatGPT, and the value is in all of the developer tools that you build around ChatGPT. However, all of those developer tools are only applicable to ChatGPT. So if a developer or a business wanted to use a different model, they couldn't; they are completely locked into the OpenAI platform. Whereas with Meta, or if somebody builds a suite of developer tools on top of open-source models, you could swap out the open-source models as much as you like. You can find that perfect fine-tuned model for you that is efficient, high quality, low cost. I think that's a really powerful strategy, because as a buyer of inference, a buyer of the model outputs, I'm not locked into a platform. So if I were to choose as a business, I would probably do the initial experimentation of whatever I'm building using one of the closed-source models, and then as soon as I found something that worked and my code is pretty sound, I would try to convert it over to open source as quickly as possible, because platform lock-in is real.
    Yeah, I mean,
    I kind of agree with you, and I hope you're right. I want open source to win, but I'm pretty skeptical. A lot of the stuff you're saying kind of assumes that GPT-5 is only gonna be slightly better, and also that OpenAI is not gonna get to some form of self-improvement before open source does. If they do, that changes everything. And the rumors from people I know who know Sam, a lot of things sound like GPT-5 is very, very good, and people are going to be shocked quite soon. So I think everyone's comparing to GPT-4, and that's really old. I think OpenAI is probably starting GPT-6 right now, and GPT-5 is basically done, and it's gonna be way better than anything that currently exists.
    So I think there's some validity to what you're saying.
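    Berman's earlier suggestion, prototype on a closed model and then convert to open source, works because most hosts expose OpenAI-compatible endpoints, so "swapping the model" becomes a config change rather than a rewrite. A minimal sketch, where the base URLs and model ids are illustrative assumptions to verify against each provider's docs:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    base_url: str
    model: str

# Illustrative endpoints and model ids (assumptions). Because these hosts
# speak the same OpenAI-style API, switching providers means switching
# this config, not rewriting the application code.
PROVIDERS = {
    "openai": Provider("https://api.openai.com/v1", "gpt-4o"),
    "groq":   Provider("https://api.groq.com/openai/v1", "llama3-70b-8192"),
    "local":  Provider("http://localhost:11434/v1", "llama3"),  # e.g. Ollama
}

def chat_endpoint(name: str) -> str:
    """Resolve the chat-completions URL for a configured provider."""
    p = PROVIDERS[name]
    return f"{p.base_url}/chat/completions"

print(chat_endpoint("local"))
```

    This is the anti-lock-in pattern in miniature: the application only ever sees a provider name.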
    There was an interview that Yann LeCun, the chief AI scientist at Meta AI, gave on the Lex Fridman podcast, and the way that he talked about it was that the path to AGI is not like an on/off switch. It's not suddenly gonna happen. There are not these huge step functions of improvement; it's more gradual, more subtle than that.
    But he's always been trailing so far. Just because he works at Meta... he has not innovated much at all. He's always been behind and catching up; he hasn't actually been the one who's created new things.
    You know you're sounding like Elon Musk right now?
    But it's true. That is the first-principles way of looking at it. Unless it's published... okay, yeah, which is crazy. So yeah, I agree, that's crazy. So it would be hard for me to believe that OpenAI has some mind-blowing technology innovation that we just could not even fathom at this point, and that they would release it all at once. It's hard for me to imagine that scenario. The most ahead I would guess they are is 15%, maybe, and then that gap gets closed within six months. So I'm less bullish on the OpenAI long-term play, especially because models are becoming commoditized, and as they become commoditized, they are going to find it more and more difficult to attract the best talent, to get more funding, to get the subscribers, because you can go anywhere. So it's just a race to the bottom on pricing, and that will actually help defeat the moat that they have in terms of model quality.
    I hope you're right. My gut is that you're very wrong.
    We'll be right back, but first I want to tell you about another great podcast you're going to want to listen to. It's called Science of Scaling, hosted by Mark Roberge, and it's brought to you by the HubSpot Podcast Network, the audio destination for business professionals. Each week, host Mark Roberge, founding chief revenue officer at HubSpot, senior lecturer at Harvard Business School, and co-founder of Stage 2 Capital, sits down with the most successful sales leaders in tech to learn the secrets, strategies, and tactics to scaling your company's growth. He recently did a great episode called "How do you solve for siloed marketing and sales?", and I personally learned a lot from it. You're going to want to check out the podcast. Listen to Science of Scaling wherever you get your podcasts.
    I feel like Sam Altman, too, has been really trying to set these expectations, because whenever you hear him talk, he's constantly saying that the world doesn't like massive changes, that people want this stuff to happen gradually. He keeps using that kind of wording every time he's interviewed. So I lean more on the side of: I don't know if GPT-5 is going to be as big of a leap as everybody might think it is, because of Sam's subtle little hints. To me, it sounds like he's trying to manage expectations by saying things like, the world wants incremental steps, they don't want this quantum leap all at once.
    I think it's because GPT-5 is going to be amazing, and he's trying to calm people down.
    Is he saying that because they have some incredible thing, and they want to drip it out over time without revealing their secrets? Or is he saying that because they don't, and they want to manage expectations? It could be either.
    Yeah, it could go either way.
    You're totally right. You know, I have this theory about OpenAI, and you guys can definitely debate me and try to find holes in my logic here, but I have this feeling that OpenAI is actually not in a very good place right now. Because if you look at OpenAI, they've got two main things: they've got their API, which other companies can go and build AI-related platforms on top of, and they've got their consumer-facing product, which is ChatGPT. Well, their consumer-facing product, ChatGPT, is becoming more and more commoditized. It's just built into Telegram and WhatsApp and Google Search. I mean, it's not very great in Google Search yet, but it's built in, and it's very, very commoditized. Anybody who wants to talk to an AI, there's a hundred different places they can do it now. So the need to go and pay 20 bucks a month to do it at ChatGPT, the value of that is getting smaller and smaller. And you look at the API side: GPT-3.5, or GPT-3, was the only game in town for a long, long time if you wanted to use an AI API. But now we've got Claude from Anthropic, we've got Gemini, we've got Llama, we've got Mistral, we've got all of these other options for APIs, and a lot of them are cheaper than what OpenAI offers. So their two main business models have both become kind of commoditized. And then you stack all of the safety stuff and all of the weird drama and lawsuits on top of that. To me, it paints a picture that OpenAI has got to do something. They either have to have something really big with GPT-5, or, you know, Siri uses GPT now, and when the new iPhone launches with GPT as the Siri model, that could reinvigorate OpenAI. But right now I kind of feel like they're in trouble. I'm curious about your thoughts. Are there holes in that logic?
    Okay, a lot to unpack. I think for the most part you're 100% right. I think from a consumer perspective, OpenAI actually has pretty big dominance. They have a pretty big moat just because ChatGPT is the verb now. It is AI for most people; nobody even knows of Anthropic. But to your point, Google Search is going to have it, Facebook is going to have it, Instagram is going to have it, but it's not going to be ChatGPT. It's going to be Llama, it's going to be Gemini, and it's going to be more native to the existing interaction of whatever that app is. And I remember Sam Altman, do you guys remember when GPT apps came out for a little bit, where you could call, like, Priceline?
    Oh, the plugins.
    Yeah, plugins, that's what they were called. So just a couple months after launch, he said something super interesting that you just reminded me of. He said: we kind of realized people don't want to have their apps in ChatGPT, they want to have ChatGPT in their apps. And that stuck with me, and it's very akin to what you're just talking about, Matt. If Google Search has AI, Facebook has AI, Telegram, Instagram, all the 'grams have AI, well, then you're not going directly to ChatGPT. So then that leads us to the Apple partnership, potentially, right?
    Yeah. First of all, that blows my mind. What is Apple doing? Trillions of dollars, and they couldn't do this? You know, a handful of engineers in a basement can pump out a decent model. So it's very disappointing as somebody who's been a long-time Apple fanboy.
    Yeah, but that goes back to what I was saying: I think OpenAI is very far ahead. Sure, Apple could throw together something like GPT-3.5, but when you actually see what's behind closed doors, GPT-5 is so far ahead that Apple would just say, we're bowing out, we're partnering with you. We're gonna make the best hardware, we're gonna continue, but you have won the game, so we'll make an alliance with you. I think that's what's happening with Apple, in my opinion.
    It wouldn't actually be the first time that Apple did something like this, right? They bowed out of the search game, and Google pays Apple billions of dollars a year to be their search engine. So would it be a similar setup? I don't know; that is interesting. Can you see OpenAI paying Apple?
    I think Apple's actually gonna pay them, if I had to guess. It's interesting, because the Apple real estate is important, obviously. I assume with the Google deal, Apple had more leverage, because they could come in and say, hey, we actually do have the talent to build something like Google. Maybe it's not gonna be as good, maybe it's 95% as good. With that argument, they could get it to where, okay, Google, you pay us. But if OpenAI is very far ahead, then Apple doesn't have that leverage. If they can't say, hey, we can just catch up overnight, then they have no leverage, and I think that would result in OpenAI getting paid.
    Well, I think this is just a stopover for Apple. Look at what Apple did with Intel: they put Intel chips in all of their computers for the longest time, but as soon as their M series of chips came along, bye-bye, Intel. I think it's the same kind of thing. I think ChatGPT is their stopover to whatever they're building.
    I think that's a good analogy. Yeah, they'd be crazy not to be trying.
    Yeah, I think it'd be crazy long-term to not be trying. I mean, look, when I first saw GPT-4o and the voice interactions, I was like, okay, well, that's Siri. That is the promise of what Siri should have been a long time ago, and they couldn't accomplish it. And GPT-4o, which, as you said, Nathan, is not even GPT-5, was able to accomplish something really impressive. By the way, there were also rumors that it was Google's Gemini that was going to be powering Siri, so I don't know.
    No wonder OpenAI is hiring for internal risk. Like a spy, right? Didn't you see that job posting? They were hiring for an internal risk assessor, or basically: prevent the leaks. That's really the job description, prevent the leaks.
    Then again, you know, Microsoft is OpenAI, right? And Apple and Microsoft have a long, contentious history.
    Well, I would not say they are OpenAI, because of the whole thing where OpenAI can get out of the deal when they have AGI. And you could possibly argue they already have AGI, depending on the level and the definition. So I don't think Microsoft has a complete hold over OpenAI. They have strong influence, but not a complete hold.
    Yeah, so I was just watching an interview with Elon Musk, I think it was at Viva Tech or something like that. He called it "Microsoft's OpenAI." He said it multiple times, and I could tell, oh, he's poking for sure. It's the same kind of jab as him saying, you're just following orders, boy.
    I thought it was so funny. It definitely gave me a smile when I heard him say that. But I have a different position than you, Nathan. I think Satya Nadella is playing 4D chess, and everybody, Elon Musk, Meta AI, they're all his pawns. Because he invests in OpenAI, takes all their tech, builds it internally, builds it into every level of Windows, but doesn't call it OpenAI. He also partners with Meta on the open source. I think he also did an investment, I might be wrong on this, with Anthropic. He basically put his chips on every potential option.
    Well, he basically bought Inflection AI too. Inflection AI just got consumed by Microsoft. I think Satya Nadella is just blowing my mind right now. 4D CEO chess.
    Yeah, it's kind of like the classic Microsoft playbook. What was it? Embrace, extend... what's the other part of that? Extinguish. Oh, I remember. That was always Microsoft's playbook, right? Get things, then basically kill off the competition. They missed mobile; they were strong, then a little bit late, but they got very strong with cloud, and now they're early and strong on AI. So, very bullish on Microsoft. Not investment advice.
    Yeah, I mean, he's doing way better than the CEO of Google, that's for sure.
    Yeah, that's right. I mean, if you do any searches right now about Google AI, all you're gonna find is their blunder with their image generation model that couldn't get the races correct, and how Google is teaching people that they should be eating rocks. Google is in a tough spot right now, I'd say.
    But that goes back to the thing about adding AI into everything that you were talking about earlier, Matthew. I'm not so sure about that. Yeah, you add AI to Instagram and all these things, but I think we're gonna be seeing more new experiences, versus just tacking AI onto old things, personally. Because if you look at Google, they've basically just tacked on AI because they're in panic mode. And they're in panic mode for a few reasons that I think people are not really thinking about. Yes, sure, AI is going to eat their lunch or whatever, but also, there's a flood of AI content on the internet now, and they're having to deal with that, and they don't have a clear answer for how to deal with it. The algorithm leak that came out a few days ago shows that they don't currently have any way to deal with any of this. They're kind of falling back on, okay, who has the highest authority? Well, now it's a Reddit site. Well, okay, now people are just shitposting on Reddit, so how do you answer that? The highest-authority sites have people shitposting. So I'm not so sure that just tacking AI onto things is gonna be the play.
    Yeah, I don't know where Google even goes from here. I agree with you, Nathan. I think that'll level the playing field. It'll give people a lot more options to maybe be first exposed to AI. But I agree, I don't know what it looks like, but new experiences with AI seem like the inevitable future. GPT-4o keeps coming to mind: that experience of being able to actually just have a real conversation with AI that can understand my tone, my emotion, and also reflect back its own tone and emotion. That's really powerful. The voice interface, which really hasn't been tackled, seems like the most obvious user interface.
    Yeah, and that's where I do hope open source stays close to OpenAI, so that startups can actually be building those new experiences. It's not just all OpenAI.
    Agreed. Another thing that I've followed you a lot for is your coverage of AI agents. You've talked about AI agents a lot in videos, and it feels like most of the stuff that I've played around with isn't quite there yet. What are your thoughts on AI agents? Have you played with anything that's exciting you in that world? Is there anything bubbling up where you're like, all right, if you want to see an AI agent at work, go mess with this?
    Yeah, so I'm very, very bullish on AI agents. There are really two main products: AutoGen by Microsoft, which is more of a research project, and then CrewAI. Disclosure: I'm an investor in CrewAI. So, very bullish on agents, and there are a few reasons, and I'll also answer your question. I'm bullish on agents because when you just do a single prompt to AI, you're not going to get the type of results you would if you, first of all, had a more complex prompt, but also allowed it to iterate with itself, to reflect on its own output, to work with other different models in coordination, to give tools to them. So at the end of the day, agents to me are really two things. It's the ability to put together multiple large language models to work together, which has been proven through different research papers, like Reflexion and Tree of Thoughts, to output better results. And then it's also all of the infrastructure around the large language model and the workflows that you need to bring it to a production-level environment: like I mentioned, tools, benchmarking, logging. All of this stuff is in these agent frameworks now, so if you're building production-level AI, it goes hand in hand to use one of them. That's how I see it. But I also agree it's still very early days. The agents often don't behave exactly like you need them to, and that's the nondeterministic factor at work. When you're talking about use cases that work really well, it's automating things that are very well defined. So in the work environment: research, analysis, crafting content. All of these are use cases that agents do really, really well. Beyond that, we're still trying to figure it out. I think the improvement is going to come because of two things: model improvements and framework improvements. As you combine those two and they both get better, they'll build off of each other and get exponentially better over time, and we'll be able to automate more and more real-world tasks with agents.
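    The "iterate with itself, reflect on its own output" loop that frameworks like CrewAI and AutoGen wrap is small once the model call is stubbed out. A toy sketch, where `call_model` and `critique` are hypothetical stand-ins, not any framework's real API:

```python
def call_model(prompt: str) -> str:
    """Stub LLM: a real agent would call an actual model here."""
    # Pretend the model revises the draft each time it sees a critique.
    return prompt.split("DRAFT:")[-1].strip() + " (revised)"

def critique(draft: str):
    """Stub reviewer: ask for another pass until the draft is 'good enough'."""
    return None if draft.count("(revised)") >= 2 else "tighten the wording"

def reflect_loop(task: str, max_rounds: int = 5) -> str:
    """Draft, critique, revise: the core agent reflection pattern."""
    draft = call_model(f"DRAFT: {task}")
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback is None:  # reviewer satisfied; stop iterating
            break
        draft = call_model(f"Feedback: {feedback}\nDRAFT: {draft}")
    return draft

print(reflect_loop("write a tagline"))
```

    The `max_rounds` cap matters: because model output is nondeterministic, a real loop needs a hard stop, which is exactly the kind of scaffolding the agent frameworks provide.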
    Do you think we'll ever actually see a large action model?
    Oh man. Okay, so look, I was a fan of the Rabbit device. I got it. That's what you're really asking about, right?
    Yeah, I want to get into the Rabbit thing.
    So, large action models: there are actually a few examples in the field right now. There are two projects. There was one research paper which allowed the large language model to control a Windows, Mac, or Linux environment through kind of a special version of those environments, and it worked really well.
    Open Interpreter, that was the other one, right?
    Open Interpreter, yeah, and now they have the 01, I think it's just called, which is a little device, but it essentially allows you to control your computer. And that's really what a large action model is: can the large language model write a script to execute things on a computer dynamically? I think we're gonna have that as a middle ground, but that is the stopgap to a place where large language models can just execute code directly. You're just speaking your command, they interpret the command, and then they write code for the end device, whatever that is. So let's say you have a smart fridge. You say, tell me what's in my fridge. It writes a script to go execute on that fridge. Now, I'm sure a lot of people who are wary of security are shuddering right now, but that I do see as the future, and I made a video about how developers probably won't be needed in 10 years.
    Yeah, we actually recorded a podcast episode about that concept as well, but we never released it.
    I would be interested in watching that. So yeah, I guess right now, large action models don't work very well, especially the ones that overlay a grid on top of an operating system, because it's just hard for the large language models to predict an x,y coordinate on top of an image. But I've seen some decent examples of it. And, this is why again I'm so bullish on Microsoft: as they expose more of their operating system to the AI directly, and the AI can control the operating system directly through a well-defined interface between them, the idea of AI controlling your computer becomes more real.
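    The smart-fridge example can be sketched as a toy: a stubbed "model" emits a script for a well-defined device interface, and the host executes it in a restricted namespace. Everything here is illustrative, and the stripped-builtins trick is not a real sandbox; production systems would need proper isolation:

```python
FRIDGE = {"milk": 1, "eggs": 6}

def model_write_script(command: str) -> str:
    """Stub for an LLM that emits code for the target device.

    A real large-action-model setup would generate this dynamically;
    here it's canned so the execute step is visible.
    """
    if "fridge" in command:
        return "result = [k for k in fridge]"
    return "result = None"

def execute_on_device(script: str, device: dict):
    """Run a model-written script against a copy of the device state."""
    # Restricted namespace: the script only sees the device contents.
    # (NOT a real sandbox; exec of untrusted code needs true isolation.)
    scope = {"fridge": dict(device), "__builtins__": {}}
    exec(script, scope)
    return scope.get("result")

print(execute_on_device(model_write_script("tell me what's in my fridge"), FRIDGE))
# ['milk', 'eggs']
```

    The point of the sketch is the division of labor: the model only writes code against a narrow device interface, and the host decides where and how that code runs.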
    Yeah, when it comes to the Rabbit, I loved the concept of it. I like that large action model concept of, hey, go do this thing: you train it once at your computer, and then forever beyond that, you can press a button and get it to do that same thing again. But obviously, as we've seen so far, the Rabbit sort of under-delivered on some of its promises. We could just kind of leave it there.
    Yeah, I think there's a degree of over-promise, under-deliver. A very strong degree of that. You can watch Coffeezilla's video about whether it's a scam or not. I don't believe so, but it certainly under-delivered.
    Yeah, I honestly don't think they built it with the intention to scam people, but I do think maybe they got in over their heads or something like that.
    So our mutual friend Bilawal Sidhu runs the TED AI podcast. He just had Helen Toner on, who was one of the board members of OpenAI. What a get. One of the big things she said in that interview was she broke down a lot of the things that Sam Altman lied about. One of the things she mentioned was that they found out about ChatGPT on Twitter.
    Yeah, I think the most damning one was that OpenAI’s board didn’t know that Sam Altman owned the OpenAI fund, and in parallel he went in front of the Senate and said, “I have no financial incentive in OpenAI.”
    Now, I guess if you want to break it down to technicalities, maybe that’s true, but even then, that’s just plain old false to me, right?
    Yeah, it’s crazy.
    I mean, at first I thought, this is crazy, they learned about it from Twitter. But the more comments I got and the more I thought about it, the more I’m like, that’s actually not that big a deal, right?
    Because at the time the API was being used in a lot of tools. It was out there in the wild, and so basically OpenAI made their own sort of tools using their own API to show off what it was capable of.
    It was a research preview when they first put it out, right? So if you think you’re just putting out a research preview, are you really running that by the non-profit board that’s over here and not involved on a day-to-day basis? Probably not, right?
    The more time I’ve had to think about it, the more I’m like, yeah, that’s probably not that big of a deal.
    But the thing you’re talking about, where he’s flat-out saying, “I have no financial interest in this,” when he does have a financial interest in the OpenAI startup fund, that to me feels very shady.
    The whole thing is weird, though. That he doesn’t own equity in the company is also the shocking thing to me. Why is the board not fixing that? That’s the thing they should be fixing: how can you have a CEO who doesn’t have upside?
    And I respect Bilawal, obviously, he’s our friend. But Helen Toner, what’s her background? She’s basically a political writer who has done some puff pieces for China, right? So what does she bring to the board? How did she get on the board in the first place is my big question.
    I’m not saying we should say that she’s lying, but I also don’t like Sam Altman saying the opposite of what she’s saying. As a fellow builder, and I have friends who know Sam and say he’s a great person, a very honest person, I wouldn’t say we should just assume that Sam’s lying and Helen’s telling the truth. She could be lying.
    I can’t remember exactly, but I think she has some background in AI ethics and things of that nature. But let’s put that aside for a second.
    Yeah, right. She was ousted, kicked to the curb, kicked off the board, after essentially attempting a mutiny. So she’s going to be biased.
    It’s easily provable whether or not he owned, or owns, the OpenAI fund, so that’s one issue. The other issue is whether he released ChatGPT without telling them. I think, Matt, I’m in your corner on this: I didn’t really see that as a big deal.
    He even said early on, “I didn’t think it was going to be as big as it was. I thought we were going to put out this little experiment, get some feedback, and then roll out a bigger project.” So I don’t think he put it out intending to go behind the board’s back. I think he just didn’t think it was a big deal, and he put it out.
    There’s also the report of, like, emotional abuse.
    And I’m like, what does that mean?
    Yeah, I’m gonna say, what does that mean? Give me the receipts. I need to see some documentation or something, because what some person might consider emotional abuse might just be a strong disagreement on something, or you didn’t get what you wanted. I’m not saying it’s not true, but how could you say it is true?
    The only one that is just cold, hard fact is, yeah, he did own OpenAI’s fund.
    I’ve been trying to think: it depends on what “he owns the fund” means. Is he actually just the person who signs for the fund, or does he actually own the entire thing? There’s nuance there that I think should be unwrapped a little bit. There are different levels of this.
    I mean, I think it’s crazy. Like I said, I think he should own the fund, or own part of it, or something. There should be some financial upside for him.
    If he said that to the Senate, that’s a major screw-up. A lot of my critiques right now are about his judgment: why did he have Helen on the board? Why does he not have any ownership in the company? I don’t think that’s the smartest way to go about things.
    Well, he’s been very, very rich for a very long time, and when they started OpenAI it was just a research lab, so I don’t think they ever planned to necessarily commercialize it. He was already ultra-rich. Maybe he just didn’t have that long-term view of, oh, I need to be financially incentivized or aligned with this company.
    Well, he seems to be of the mind that in the long term it doesn’t matter. If you reach AGI, does money matter? That’s why he did those universal basic income experiments, which leads you to believe that, yeah, he believes that once you reach AGI, money’s no longer a real thing, so does it matter?
    And by the way, Nathan, I’ve heard the same thing from friends of mine who know Sam Altman, and also just kind of what everyone’s saying on Twitter, their stories. They all say, yeah, great guy, was there for me, trustworthy. So the only thing that I really have to point at is the OpenAI fund thing, and I don’t know exactly what happened there.
    Also, I just pulled up an article from Axios from about a month ago that says he’s no longer the owner of, or in control of, that fund in association with the company. So obviously, if that is true, he and OpenAI more generally, after the board exodus, realized that was not a good idea.
    So I’m wondering, when he did the Senate hearing, was he still an owner at that time?
    Yeah, he was, for sure. I mean, we can go back and parse the exact words he used, and I’m sure people will.
    If I’m the Senate, I’m thinking, hey, we need to give him a call and bring him back here to answer some questions, because it was almost a joke. I forget who asked, but it was like, “Well, you don’t have any incentive in the company? How is that possible?” And I forget his exact words, but it was something like, “Well, I have money, and no, I don’t.”
    What he was trying to convey was, “Of course I don’t have any monetary incentive in the company, so thus I can make the best decisions for the company, because I don’t have some financial incentive.” Which is actually kind of the opposite of the way the world works, and actually not the truth.
    Yeah
    So I’m curious. There’s been a lot of news recently, right? There’s the Scarlett Johansson thing, which I think is probably more sort of, I don’t want to say coincidental, but there’s nothing wrong with going, “I like the sound of Scarlett Johansson’s voice, let me go hire someone that’s got a kind of similar voice.” There’s nothing illegal about that, and nothing I even feel is immoral about that, right?
    Do you think any of this crazy news that’s been coming out, Sam putting himself on the new safety board, things like that, has changed your opinion or perspective on Sam or OpenAI?
    I think so. I watched the All-In Podcast a week ago, and David Sacks had a lot of good points on this. He said one coincidence is okay, but once you have these coincidences stacking up sequentially, all of a sudden maybe they’re not coincidences, and maybe OpenAI isn’t as well run as we all thought.
    I think the new OpenAI safety committee is the most surface-level, complete PR stunt. It’s essentially Sam Altman, who runs the board, having a subgroup of the board do this safety committee, when just last week Ilya and Jan, the two top safety guys at OpenAI, left. It’s like, okay, so you’re basically creating your own committee to oversee your own board and your own company. I’m sure that’s not biased whatsoever.
    Yeah, and don’t forget, too, the government is creating their own sort of AI safety committee, and guess who’s on that board? Sam Altman, Satya Nadella, Sundar Pichai.
    Right. Yeah, regulatory capture is a real thing.
    Look, I like OpenAI. I like that they brought all of this incredible innovation and really opened the world’s eyes to what’s possible with artificial intelligence. But let’s just say I’m a big believer in open source.
    I’m a big believer in open source too, but I think a lot of the stuff you’re seeing at OpenAI is what you would expect to see from a company that is approaching the most important thing humanity could ever build, right? AGI.
    You would expect that emotions would be very high. You would expect that major mistakes would be made; we’re still human. You would expect culture clashes, people having very different opinions about what you do with this new fire that we’re inventing, right?
    And so I personally think that’s what’s going on. He’s under immense pressure. He’s definitely made some judgment mistakes. Even tweeting out “her”: the whole thing about the voice is mostly bullshit, but him tweeting out “her” was a mistake, right?
    And so I think that’s what we’re seeing here, even all the drama, people leaving, the Helen statement. I see that as culture clash. The people who were more concerned about what AGI could do are now out, and the people who are more on the side of “we want to build AGI, we think net it’s good for humanity,” they’re the ones now in charge of the company. And I personally think that’s a good thing.
    Yeah, I mean, it’s hard to speculate about what’s going on inside. I think, Nathan, you could be very right about that, and actually likely are. It’s just a tumultuous time in the company. Hyper-growth, emotions are high. They’ve gotta be.
    Yeah, and Matt, by the way, you mentioned using a Scarlett Johansson sound-alike, right? I’ll reference David Sacks again from the last All-In Podcast. He said this actually happens all the time: if you’re a director and you can’t afford to hire Scarlett Johansson, you essentially say, “Get me a Scarlett Johansson type,” and it’s somebody who kind of fits that role, you know, female, blonde, has the same history of movies. It happens all the time, so I don’t think they did anything wrong with that. But the fact that they tweeted “her,” yeah, that’s the misstep right there, right?
    Yeah, he tweeted “her,” and then the voice sounded similar to Scarlett. People put those together and went, “Oh, he was trying to clone Scarlett’s voice.” I think that was the misstep, not actually hiring a voice actor that sounds close to her. Tweeting the word “her” is the misstep, right?
    So, Matthew, I’d love to hear what you think about the news from xAI, that Elon Musk raised six billion dollars on an eighteen billion pre-money valuation. A few people said that means the company’s worth 18 billion. No, that means it’s worth 24 billion; that’s how pre- and post-money valuations work.
    So at a 24 billion valuation, at this early stage, I think that’s the largest fundraise ever at an early stage. What do you think he’s going to do with that money? Do you think it’s just all about Grok, or is something bigger in the works?
    Him raising that much money is all about his name, rightfully so. It’s not like this is some random dude coming out of the woodwork building an AI company. This is Elon Musk, who has proven he is one of, if not the, best entrepreneurs of all time. Okay, so that aside.
    I think he already tipped his hand. He said he’s going to be buying a ton of GPUs, right? He’s going to be investing heavily into Nvidia cards, building out, and hopefully I’m saying this right, the biggest GPU cluster ever: a hundred thousand H100s, I believe.
    Wow.
    So I think that is the right play, for a few things. That is the way to attract talent, because the GPUs are really the bottleneck. If you have the GPUs, you can attract the best talent, who can hopefully build the best models. I’m hoping, and I am all for, more competition in the space, closed source or open source, I don’t care. Competition is good for the end user, the consumer. So I’m very bullish on using all of that money for buying compute. I really like that strategy.
    And they have a dataset that is incredible, that really nobody else has. OpenAI has been announcing all of these partnerships, trying to get all this data, but xAI has Twitter’s data. That’s crazy.
    And possibly Tesla’s, and then Neuralink’s, and then SpaceX’s.
    And so that’s the other question, Nathan. How does xAI relate to Tesla? That’s actually something that worries me as a Tesla investor.
    Yeah, me too.
    Elon is kind of holding the company hostage right now: if they don’t approve his comp package, which essentially means he wants to own 25% of the company, he’s going to go do AI somewhere else.
    Holding the company hostage is not right, I would say, but the fact that they’re trying to hold back his comp package is something else. He basically did a deal where, if you look at the video clips from when the comp package was introduced, and I don’t know how long ago that was, was it already ten years, was it five? It’s been a long time. But anyway, there are clips of people saying the goals in the comp package were so ridiculous, like, what the hell is he doing? He’s crazy. There’s no way anyone would ever reach these goals, and if he doesn’t reach them, he’s not going to get paid. He’s insane.
    And so, of course, if you reach those goals, you should be compensated for reaching those goals. It’s insane that, after more than a decade of building this company into a huge behemoth in the industry, some investors are now trying to say, “Oh, we don’t like your politics or whatever, so you shouldn’t get paid for what you did the last 10 years.”
    That’s crazy. It’s absurd.
    He is. Yeah, I think you told that story perfectly. It’s easy with 20/20 hindsight to look back and say, wow, you made an absurd, crazy amount of money, which he did. But when his initial comp package was set, Tesla was nothing. When he joined, it was an off-the-shelf electric motor put in, what are those little cars, the Lotus, right? That’s all it was. He’s the one who turned it into one of the most valuable companies in the world. He is the one who changed the car industry forever.
    Holding the company hostage hurts me as a shareholder, though, so I don’t like that. But I understand, and I want to say: go ahead. If you want 25%, fine. If you want to make every decision, fine. But if he takes AI and builds it somewhere else, at xAI, without integrating it into Tesla, then all of a sudden, where’s Autopilot going to be? Tesla’s just a car company. All of a sudden their valuation is going to plummet, because it’s not this vision anymore. It’s just a car company, and there are a lot of car companies out there. So I don’t know. What do you think?
    Yeah, you know, there’s a reason they say “Silicon Valley pirates,” right? My friend Ollie Murty, who had this company called Peanut Labs back in the day, used to do this big pirate cruise every year, and tons of Silicon Valley elites would come out to it. We’d all dress up like pirates, every year. There’s a reason that kind of tradition exists. People who start big companies often break rules, and they often do things that, from the outside, people would be kind of shocked about. There is kind of pirate behavior: going off to get the treasure and doing whatever the hell it takes to get it, breaking rules along the way.
    So from that perspective, yeah, he’s being screwed over at Tesla, so I’m not shocked at all that there are rules saying he can’t hold them hostage and he’s trying to do it anyway. That’s not shocking at all.
    I mean, you can look at founder-led companies versus companies that maybe previously were founder-led. Let’s look at a few examples. Meta AI: very much founder-led. Zuckerberg is in charge. He still has those super-voting shares, as far as I remember, and so he’s able to invest a ton of money into VR and AR, and then, on a dime, when the market doesn’t like it, turn the ship around 180 degrees, and now they are booming with AI.
    Now you look at Google, where Sergey and Larry are no longer as day-to-day as they once were, and you can see Google was slow to get to AI. They literally wrote the Transformer paper, “Attention Is All You Need,” which was the defining paper that the entire current wave of AI is built on, and they couldn’t productize it, couldn’t commercialize it. Even when they did, they stumbled; they had the “woke AI” episode, and it’s just so slow. I think they’re finally starting to get their act together, which is good.
    But it’s very clear: when you have a founder-led company, you have somebody who can make decisions quickly, somebody who can break rules, not break laws, but break the rules, break the mold. As a Tesla shareholder: give Elon his money. Just let him make the decisions, and if he’s wrong, hopefully he doesn’t get paid. He’s proven he can do it, so let him try it again.
    I think we covered so much ground on this episode and talked about so many things, but this has been awesome. We should definitely do a round two at some point if you’re open to it.
    I would love to.
    Thank you so much for hanging out with us. Before we wrap, though, where should people go? I know you’ve got your YouTube channel, you’re doing stuff over on X. Where’s the best place to check out what you’re up to?
    Yeah, definitely check out my YouTube channel, Matthew Berman, just search it in the search bar, and then MatthewBerman.com if you want to check out my newsletter.
    Awesome. Well, thanks again for coming on and just sort of nerding out about AI with us today.
    Yes, sir. Anytime, guys. Seriously, this was fun.

    Episode 10: Are closed-source or open-source AI models the future of artificial intelligence? Nathan Lands (https://x.com/NathanLands) and Matt Wolfe (https://x.com/mreflow) delve into this question with guest Matthew Berman (https://x.com/MatthewBerman), a serial entrepreneur and founder of a popular AI YouTube channel.

    In this episode, we explore the pros and cons of closed models from giants like OpenAI and Google versus open models like Llama and Mixtral. Matthew Berman shares his insights on the evolving AI landscape, the potential of future models like GPT-5, and the impact of integrating AI into major platforms such as Google and Facebook. We also speculate about OpenAI’s partnership with Apple and the wider implications for the tech industry. Additionally, the discussion dives into the job market for AI specialists, Silicon Valley’s pirate culture, and the challenges of hyper-growth within companies pushing the envelope on AI innovations.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) Tracking AI advancements.
    • (05:58) Love for open source: diverse, specific, cost-effective.
    • (09:23) Swap open source model for flexibility and quality.
    • (11:41) Doubts about OpenAI’s long-term future, models commoditized.
    • (15:03) Competition and challenges ahead for OpenAI.
    • (19:06) Apple’s leverage with Google, OpenAI’s potential payment.
    • (20:14) GPT-4’s accomplishments versus rumors of Google’s Gemini.
    • (23:26) Concern about integrating AI into existing platforms.
    • (28:25) Large action model can control computers dynamically.
    • (34:31) Sam Altman downplayed release intended for feedback; denied malintent.
    • (37:39) Senate considering recall after contradictory financial statements.
    • (42:56) Elon Musk raised $6 billion, largest fundraise.
    • (45:32) Questioning validity of Elon’s compensation package.
    • (47:35) Silicon Valley pirates break rules for success.

    Mentions:

    Free Resources:

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
