AI transcript
I don’t think I’ve had as much fun with AI as I have in the last like month
playing with these tools that are coming out right now.
People are going to be very addicted to these things.
Right. All the tools are getting better too.
Now we’re starting to wonder, did Sora kind of blow it?
When all your marketing team does is put out fires, they burn out fast.
Sifting through leads, creating content for infinite channels,
endlessly searching for disparate performance KPIs.
It all takes a toll.
But with HubSpot, you can stop team burnout in its tracks.
Plus, your team can achieve their best results without breaking a sweat.
With HubSpot’s collection of AI tools, Breeze,
you can pinpoint the best leads possible, capture prospects' attention
with click-worthy content, and access all your company's data in one place.
No sifting through tabs necessary.
It’s all waiting for your team in HubSpot.
Keep your marketers cool and make your campaign results hotter than ever.
Visit hubspot.com/marketers to learn more.
Hey, welcome to the Next Wave Podcast.
I’m Matt Wolf.
I’m here with Nathan Lanz and today we’re going to talk about AI video.
There’s been these really interesting AI video generators out there, right?
We've had Gen-2, we've had Pika Labs, and we've had Leonardo Motion.
And there’s been all these really cool AI video tools,
but they’ve really been kind of just that, just sort of cool, right?
They haven’t really had great practical use cases.
We haven’t been able to create videos with one of these tools and legitimately
use it as like B-roll or make like a really good film out of it.
They've all had this sort of weirdness to them.
That is until we got a sneak peek of Sora from OpenAI earlier this year.
When everybody saw Sora, we saw this AI text-to-video platform that made
videos that actually looked realistic, and pretty much everybody in the AI world
got ultra, ultra excited about Sora and what it could possibly do and how
realistic it could make these videos.
But then we never got access to it.
We never actually got to play with it.
We kept getting teaser videos.
They gave it to like a handful of creators, like three or four different
creators were allowed to use it and we got some demos from that.
But still to this day, most of the world hasn’t gotten access to Sora.
Well, now we’re starting to get some alternatives to Sora that are looking
pretty dang good.
We recently got Luma, who released their Dream Machine.
We have Gen-3 from Runway, which has been sort of teased, but we haven't
gotten access to it yet, and now we're starting to wonder:
did Sora kind of blow it?
That's kind of the discussion we want to have today is, you know, where is AI
video at?
Where did it come from?
Where's it going?
What’s available now?
What’s coming in the future?
I think there’s a really interesting discussion here.
I think probably the general consensus online right now is that OpenAI did
wait too long.
That’s kind of what most people think.
I think I disagree with that.
Honestly, like, I was probably one of the first people on Twitter doing
really big AI video threads, like when it was all first starting.
Like that was one of my main things I was doing
every week, putting out, here's the top AI videos this week.
And I kind of stopped after Sora came out.
Because, like, the videos were kind of cute.
And then Sora came out and, like, okay, yeah, sure,
I could get clicks on this and views,
but I felt kind of dumb putting out, like, here's these amazing AI videos after
people had seen Sora. You know, by them putting it out so early, it's made
everything else look bad.
These new ones are catching up, like especially Gen-3.
I thought it was pretty amazing.
Dream Machine is pretty awesome.
But still, they don't look as good as Sora.
And so what I would say is, yeah, it’s not released yet, but whatever they
showed then, it’s going to be better by the time it’s actually released.
Most likely.
And so when they do come out with something, you know, it's going to be
almost kind of like, I think, Apple back in the day, where
they would come out with the very best product.
Maybe it wasn't out first, but it would be the best when it came out.
No one’s actually found real use with AI video yet.
And it feels like Sora is the most likely one that when it actually comes
out, it’ll be the first one that will have real use.
And that's why they've talked with Hollywood and others, you know, like
they've been talking to major studios. Apparently right now, the main
players are Sora,
I mean, OpenAI's Sora, as well as probably Gen-3,
'cause Gen-3 also, it looks like it's just barely behind Sora.
Yeah.
No, it's funny you say that, because, like, I used to make a lot of YouTube
videos about, man, look how far AI video has come, right?
And I would show off, like, how much better Pika Labs has gotten or how much
better Runway Gen-2 has gotten.
And then we had Stable Video Diffusion, and there were all these
different AI video models that came out, but they all, you know,
had weirdness to them, right?
Like every video for whatever reason looks like it's moving in slow motion.
People would morph, where they'd start looking like one person and then
morph into a completely different person, and all the AI video models I've
seen so far still really suck at hands, right?
So there were all of these video tools that were kind of cool, but then
OpenAI went and showed off Sora, and now I was like, okay, well, they
just raised the bar of what AI coolness looks like.
So now anything I ever show off in a YouTube video that is me trying to
say, look at this cool new AI video tool, looks lame compared to Sora.
So I kind of stopped making those kinds of videos, but now I'm making
them again because we're starting to see Gen-3.
We're starting to see Luma's Dream Machine.
We’re seeing these other tools pop up now.
Yeah.
And it is exciting. Like, Dream Machine you can actually use.
So that is the exciting part.
Like, it's not as good as Sora.
It's probably not as good as Gen-3 either, but it's not that far behind.
And you can actually use it right now.
Like I saw your video where you made a music video, you know?
And I thought that was awesome.
Like, oh, that's actually, yeah, I wouldn't put that on TV yet.
That's probably like six months or 12 months away from being almost
like TV quality. And you've got all these new tools coming out.
You got, like, you know, Udio, and I don't know.
It feels like we're at the very beginning of, like, this creative explosion,
where all these tools combine, and the level of, like, art and
entertainment in the world is going to go up dramatically.
I think because like everyone’s going to be able to make this stuff.
It’s going to be awesome.
Yeah.
No, I totally agree.
I think, you know, it’s a buzzword, but it really democratizes video creation.
Right.
One of the things that I'm really excited about is just B-roll.
Right.
I make a lot of YouTube videos and I don’t like to be just on camera the whole time.
I like it to change what you’re looking at.
I want the video’s pace to keep going.
And oftentimes it's hard to find B-roll.
And when you do go find B-roll, you're searching through, like, stock video sites.
Right.
Storyblocks is the one that I use.
And when I go through Storyblocks, like, you can find videos that are sort of
relevant, but you’re not fooling anybody that it’s not stock video.
Right.
It all looks like stock video when you have like that corporate conference room
and like five people in a suit are all leaning forward over like a conference
call or something.
Everybody’s seen those exact videos.
Even if you haven’t seen that stock video footage before, you just know what
stock video looks like.
And so this really excites me.
Anything I can imagine. I can say any wild thing I want in one of my YouTube
videos, and now I can create a little bit of B-roll for whatever random
wild thing I said.
Did you see how good Gen-3 is at text in video?
I haven’t yet.
It’s perfect.
I'm actually at Augmented World Expo right now as we're recording this.
And a lot of these tools and announcements are dropping while I’m at this event.
So I haven't actually been seeing as many of the demos, but I will say,
about the Luma Dream Machine, it's really, really good when you
start with an image and you turn that image into a video.
But if you go in there and you enter a text prompt and try to generate a
video from a text prompt, which is what I did,
it's not great.
Yeah.
So Gen-3, the CEO, he's the CEO of Runway,
he's been showing clips on Twitter, and I saw one yesterday.
He showed like five in a row of, like, text.
Like, you know, you write out your name, you know, mreflow,
or you write out The Next Wave, or Lore. Text on screen is perfect in Gen-3.
I mean, to the point it’s like crazy.
Like, okay, you type in something and you want to be talking about time and you
want it to have sand coming down or you want the words to come fly out of
something and all of a sudden there’s sand dripping down.
Perfect.
And another one where, like, the words actually were being dragged
through a jungle and they were made of dirt, and then they popped up. Like,
anything you want to do with text, like for, like, advertisements or B-roll or,
like, intros to a show, it's already very, very good.
Now I was, I was kind of surprised.
Like, I want to type in Lore and see what pops up when you do that.
Yeah, no, that's awesome.
'Cause I mean, even most of the text-to-image generators still struggle with
getting the text in the image, for the most part.
So to actually know that we're getting, like, a video one that can do that as well
is pretty crazy.
The other thing about Gen-3 is all of the clips that they've been showing
off, I believe, are 10 seconds-ish, maybe even longer.
But when it comes to Luma's Dream Machine, you can only generate five seconds
of video right now, but they did just add this extend feature, so you can get five
seconds, and then I think it pretty much uses the last frame of that video as the
first frame of the next video.
And so it extends it that way.
But when you do AI video generation in that way, because it's building off
of the last one, it gets a little bit worse quality, right?
Every single extension looks a little bit worse than the extension before it.
Yeah.
And I heard something from the Luma Labs team saying that right now,
yeah, they could do one-minute videos, but with their current model, apparently
after five or 10 seconds, the animations just kind of stop.
Like, if you had a character doing, like, an action scene, running around
with a gun, shooting it all around, by 10 seconds the person's kind
of just standing there with a gun, looking around or something like that.
Like, the model's not fully there yet in terms of, like, a long, long clip.
And so apparently that's the sweet spot currently for them.
Yeah.
And the other thing that I hear about Gen-3 is it's really fast, right?
I think I saw Cristóbal post something on Twitter about how, like, it generates
in about 45 seconds. Whereas, I don't know how much you've played around with Luma's
Dream Machine, but you have to wait in a queue.
And then once you get through that queue, then it takes a minimum of two minutes
to generate.
I've actually found it's probably closer to three minutes. But the very first
time I ever used Luma's Dream Machine, I logged in, tried to generate a video
from it, and it took seven hours in queue before that three-minute generation
happened. So I actually typed in my prompt and I had it open thinking,
oh, it's going to generate any time now, any time now, any time now.
And eventually I just, like, walked away and went and ate dinner and probably,
like, watched a movie with my family, and then came back, and it was still in
queue. All said and done, it took seven hours in queue before it
finally generated. And was it an amazing video? No, the video was horrible. I did the
prompt, a monkey on roller skates, because that was the first AI video I ever
generated back when I was playing around with ModelScope, like,
a couple years ago. And so I wanted to see, okay, this is a monkey on roller
skates that I generated two years ago, and this is a monkey on roller skates
using Luma Dream Machine. The Luma Dream Machine version was worse than the
version I made two years ago with ModelScope, and it took me seven hours and
three minutes to generate. So there's
actually other video generators, too, that a lot of people have been comparing to
Sora. There was that Kling that came out of China, but you had to have a
Chinese phone number to use it. Although I did hear some people just entered
their, like, US number and they got access anyway. I haven't attempted it yet. I
don't know, though. Nothing that I've seen from Kling makes me go, like, oh, this is
Sora level. Like, I don't know. To me, I never saw anything that
made me go, that's on the same level as this stuff. But I have seen stuff come out of
Luma's Dream Machine, I have seen some of the Gen-3 videos, where I'm like, that
looks pretty damn close, especially when you start with, like, a realistic-looking
image or even a real image in Luma and have it animated. It's actually pretty
dang good looking. Yeah, I mean, there's a lot of scenes coming out, man. I'm
especially impressed by Gen-3. I think it looks pretty amazing. Like,
there's parts where you can see, like, if it was higher resolution (it's not high
enough resolution yet), it would already be good enough to put in as, like, B-roll in,
like, major films. Yeah. So that's exciting. And then did you see the stuff with, like,
anime? Like, little anime clips? I mean, like, it even does anime pretty well, and so
yeah.
I don't think I've had as much fun with AI as I have in the last, like,
month, just playing with Suno and Dream Machine and Udio and all of these tools
that are coming out right now. You know, it reminds me of about two, I don't know, two
and a half years ago, two years and a couple months ago, when Midjourney first
came out and I started playing with Midjourney for the first time, and, like, I
just, like, lost sleep, right? Like, I would stay up until one-thirty a.m. just
generating images, going, can I do this? Can I do this? And then when I learned
about Stable Diffusion and I fine-tuned a model on my face and I was able to make
myself Superman or make myself riding a horse or make myself an astronaut or
whatever. And that was the next time I was like, all right, I just lost a whole
day of my life playing with this, generating images. Well, now I feel that
way again with the combo of, like, Suno and Leonardo's new AI image generator
and Midjourney and the Dream Machine. Like, to me, it is so much fun. I made
that video on YouTube where I showed myself making a music video, and I used
Suno to make the song and then Midjourney to make the starting images, and
then I took the Midjourney images and I put them into Dream Machine to animate
them all, and then I used DaVinci Resolve to edit them all together. And that
video actually probably took me a good, like, ten hours to produce, just because
of all the waiting time for the processing in Luma. But it was so much
fun. I was, like, so blown away with some of the videos that were coming out of
it. Not all of them. Some of the videos were really, really impressive, though.
We’ll be right back, but first I want to tell you about another great podcast
you’re going to want to listen to. It’s called Science of Scaling hosted by
Mark Roberge and it’s brought to you by the HubSpot Podcast Network, the audio
destination for business professionals. Each week hosts Mark Roberge, founding
chief revenue officer at HubSpot, senior lecturer at Harvard Business School
and co-founder of Stage 2 Capital, sits down with the most successful sales
leaders in tech to learn the secrets, strategies, and tactics to scaling your
company's growth. He recently did a great episode called How Do You Solve for
Siloed Marketing and Sales, and I personally learned a lot from it. You're
going to want to check out the podcast, listen to Science of Scaling wherever you
get your podcasts.
Yeah, and all the tools are getting, like, better too, and, like,
more fun to use. Like, even Midjourney, now you don't have to use it on Discord.
They have the website, and that interface is so much more pleasant to use, and also
the personalized feature. Have you tried that out yet? Yes, in Midjourney. I've tried
it on and off. I'm like, oh yeah, the one where it's, like, personalized for me. Yeah,
I like that better. That is kind of cool. And to realize, like, long-term, all
these models are going to learn whether it’s like the AI art, the videos, the
music, games in the future. They’re all going to learn what kind of stuff you
personally like and help you amplify your own creativity and it’s
exciting to think about like, yeah, these are all going to get better. This is
the worst it’s ever going to get and just imagining in like two years how
fun it’s going to be to like produce music and videos and whatever you want.
It’s probably going to be like way faster too. A lot of this stuff is probably going to get almost
instant. There’s no reason this stuff can’t be instant at some point. So imagine
that you could just like type in stuff instantly and you’ve just created a song.
You’ve now created a video and you’re like in real time like editing these
things together yourself. It's going to be awesome. I'm excited. Yeah, everybody's
going to have essentially their own custom Midjourney model, right? Like, I can
enter a prompt into Midjourney, you can enter the identical prompt, and if we're
both using our own personalized models, we're going to get two probably pretty
dramatically different outputs, because it's going to make one for my taste and one for your
taste and I just think that’s really, really cool. I think, you know, the other
side of the coin of this conversation is the type of comments I’ve been getting
on my YouTube video where I made a music video or when I actually shared the
music video over on X and on Instagram is, you know, I start getting a lot of
these comments of like, oh great, you’re making a video that’s helping people, you
know, perpetuate the downfall of the music industry, the downfall of the video
industry. Oh, these tools are trained on copyrighted material. So, you know, this
is just as, you know, bad as stealing the original material and using that in your
videos and those are the types of like, I mean, not most of the comments, but I’m
seeing those kinds of comments, right? Of, like, the copyright implications,
the implications of, like, if I can make music with this, that diminishes the
work of artists and all that kind of stuff. I’m personally in the camp of
like, I call BS on all of that. I don’t think it diminishes anybody’s work. I
think the fact that I can make an AI image that I think looks really cool and
the fact that this person over here can actually draw it with their own hands and
make something that looks really cool, I'm way more impressed by that version
than the version that I made and I think I always will be just by the fact that a
human was behind it, making it. Yeah. Did you see the blowback that Ashton
Kutcher got when he basically got access to Sora? He said
it's good. He's like, it's going to change Hollywood, and he made some, like,
really, you know, big statements about it, and people were like, oh my god,
you're, like, you know, you went with tech over,
you know, Hollywood now, and you're, like, turning your back on creators, and you're
okay with screwing them all over. And he was like, no, I, you know, I think humans
are still going to be involved. But like, yeah, of course, entertainment is going
to evolve like it always has. Like, you know, obviously over the last 20
years, you know, CGI has, like, really taken over Hollywood, right? Like,
how many major films use CGI? Like, a lot of them now. This is a further evolution
of entertainment. And I think that's, like, what humanity, it's kind of, like, our
purpose, to evolve and continue getting better and better. So, but you know,
there's a natural instinct to be worried about change. Like, change is
scary. And so I understand, like, people being worried, because, like, yeah,
there probably will be periods where there will be some job
loss related to this stuff, for sure. Yeah. I mean, George Lucas, he was
asked what he thought about all this AI stuff, right? And his response was
essentially, well, it's all inevitable. It's going to happen anyway. Just
like, you know, we were doing everything with practical effects, and
then we got computer graphics and we started doing everything with CG. And
now we’ve got AI. And so like his sort of analogy was like when cars started to
come out and people started going, yeah, but we’re just going to stick with
horses. Well, you can stick with horses, but these machines are going to keep
going. We’re going to keep evolving. We’re going to keep improving them. You
can stay with horses if you want, but that’s not how the world works. We’re
going to figure out new, better, innovative solutions to accomplish the same
goal. That’s just what humans do. We try to figure out how to get more
efficient, how to optimize processes, how to get better at what we do, how to use
technology in our favor to leverage that technology to make our lives easier.
That’s what technology essentially exists for is how can we use tech to make
the things that used to be more manual, less manual for us? That’s how it’s
always evolved. Speaking of evolving, did you see where they, Gen-3, Runway,
put up this blog post talking about how they're
creating general world models? Like, apparently that's the way that
they are producing AI video now, which was rumored to be what Sora was
doing as well: they're kind of creating an idea of what
the world is like, a model of it. Like a digital twin kind of thing, yeah. Yeah,
that’s why it can be so consistent, right? That’s why you could have a train
actually moving and seeing things as it moves is because it’s kind of produced a
world that it's inhabiting. I think that's fascinating, and to think,
like, you know, and NVIDIA has talked about this as well, you know, NVIDIA, who
just became the number one company in the world, thinking about how that's
going to change games, you know, videos. Like, imagine, like, the online
games now, like, the worlds are so limited, but it seems like with this new
technology, you'll be able to, you know, create, you could create kind of like
how in Minecraft, you know, you go in the world and, like, you go to the edge of it
and it produces more. Like, imagine those kinds of things with AI, where you get to
the edge of the world, or you get to the edge of space, and, like, oh, here's now the
new planet, or here's now whatever. Like, it's infinite. It goes forever. Like,
those things are going to become possible, like at a really high fidelity, not like
Minecraft. Don't let the game developers hear you say that, because if there's any group
of people that are more vicious towards the AI community, towards people that are
pro-AI, it's the game developers. Like, I've, you know, I've had debates with
people that are in film and music and things like that, and, you know, some of
them are pretty upset by what's going on, but I have never seen the level of
hate on some of the stuff I've posted like what I've seen from some
of the, like, game developer community. If you talk about AI taking over game
development, they're probably the first ones to, like, just absolutely try to
disrupt it. I mean, yeah, I mean, the reality is, though, the game industry is in a
really stagnant moment. Like the game industry, you know, it’s worse than
what’s happening in Hollywood, I would say, where, you know, the games are so
expensive to make that everyone just copies the previous game and gamers are
getting tired of it. I think that’s why you see the growth numbers have
stagnated. I feel like there’s a big like sort of renaissance of like indie
developers right now. Like most of the games that I play, I’m a big gamer
myself, most of the games I play are from indie studios. They’re not the big
triple-A games. Yeah. Yeah. And AI is going to help them. Like, and sure, some of
them will be resistant to it right now, but once they see what it can actually
do for them, they'll be like, oh, you can be, like, five people and you can now
compete with EA, you know. You'll be able to produce an entire world.
You'll have help with the storylines, with the characters, with the creation of
the art assets. That's all, like, coming in the next two years. And so I think
it's going to be great. Like, oh yeah, the game industry is very stagnant
now, in my opinion, and I think that's going to change in two years. And yeah,
sure, people are yelling about it right now, but it won't matter. Like, some
people will lead the way, and then everyone else will have to follow after
that, I think. Yeah. You know who would be a really good
guest for this show, that I think would be really fun to talk to, is somebody who
can intelligently speak to us about copyright law and how copyright law is
going to be impacted by a lot of this stuff. So I don't know if this is a
hot take or not, but in my opinion, copyright law is a big part of the
perpetuation of AI. So all of the companies, all of the people that are
out there, like, fighting against AI because they're worried about it using
copyrighted material, I sort of have this opinion that they're pushing AI
forward faster than had they just, like, not brought this stuff up. And the
reason I say that is, like, look at, like, stock photos, right? So I had a blog
where I actually hired an editor to come through after I wrote a blog post,
sort of clean up the blog post, and then add imagery to the blog for me,
right? Well, they came in and they added some images, and I looked at the blog
post. Cool, this is cool, let's publish it. We published the blog post. I assumed the
images they used were just from a regular stock photo site. Well, it turns out
they did a Google search. They pulled a photo that was owned by the
Associated Press, and I got an invoice in my email for using that photo, for
eight hundred dollars. So for this one photo that was used on the blog post,
that my editor grabbed from Google Images, I had to pay eight hundred
dollars for the right to use that photo. And I emailed them, I'm like, well,
can I just take it down and use a different photo? And they're like, no,
the damage is done, pay the invoice, essentially. And so in my mind,
when I saw AI image generation, I went awesome. I don’t have to like worry
about that anymore. I could just go generate any image I want now. And so
like these like copyright pressures that they’re putting on creators are
pushing creators towards using AI. Same with Suno, right? You look at something
like Suno. How many people have you heard of that put up YouTube videos,
and maybe there was a song playing in the YouTube video, and they got the video
copyright-struck and either had the video completely removed or had a
hundred percent of the monetization from that video go to the copyright holder?
I've had that happen, where I made a thirty-minute video and maybe ten
seconds of that video had an audio clip that was copyrighted by somebody else,
just kind of an oversight, it slipped in. Well, because of that ten-second clip,
all of the revenue for that full thirty-minute video had to go to the copyright
owner of that ten-second clip. That doesn't make any sense. That's not fair
to me. Like, give them ten percent of the revenue, not a hundred percent of the
revenue, you know? So that kind of stuff comes up. Well, now that stuff like
Suno exists, what am I going to do? I'm not going to go use music that I find
online. I'm just going to go generate the perfect song for that video right
now. However, had copyright law been a little bit different, and creators were
allowed to, you know, use some images they find online or use some music that
they find online in their videos without worrying about it affecting
their livelihood, I don't think people would be jumping to go and generate
music with Suno or jumping to generate images with Midjourney as part of
their content as quickly, because they could just use content and stuff that was
created by other people. So I have this opinion that I really, really think
copyright law needs to change, and if copyright law were different than it is
now, it would probably, you know, stop as many creators from jumping to
using these AI tools. Anyway, that's the end of my little rant about
copyright. Yeah, yeah. I think this will be a moment where
copyright is forced to evolve. Like, you know, copyright is so complicated. I mean,
my last startup, Binded, we ended up pivoting into copyright, which was not
the initial intention. I spoke in front of, I spoke in Washington, DC, about the
future of copyright on a panel and I still feel like I barely understand
copyright. It’s so complicated. You know, and I met with the guy who was at
that time heading Creative Commons and talked to him a lot about, you know,
copyright and all the issues. And, you know, Creative Commons was always
interesting, but also Creative Commons is so complicated. Like, there’s so many
different versions of Creative Commons and it increases so much cognitive
overhead of like, okay, which one do I pick and how do I do it? And it’s all so
complicated. I don't know where copyright goes in the
future, because it's just, like, yeah, I mean, in the new world it almost
doesn't make sense in its current state. Like, I think there should be laws around,
like, okay, if you directly copy somebody, like, and it's, you know,
Michael Jackson, and now you've got Michael Jackson singing in the
song. Yeah. Yeah, like, probably his estate should be
paid something. But if it's not directly copying people, I just
don't see how copyright exists in its current form, like, 10 years from now.
Yeah. Yeah. And I don't know the solution either, right? Like, I do
think that people that spend the time to create the art, people that spend the
time to make the music, to, you know, shoot the stock video, to take the
photos, I think they should be compensated for the
work they're doing. I do think that's important. And I
don't know how that works. I mean, right now copyright is just kind of
the best solution they've got, but I don't think it is
the, you know, the final solution. I think the way
copyright works and the way that companies are going and sort of,
you know, slapping down creators for using the content,
it's just not helpful to their cause
in the long run, right? I think that's sort of my opinion on it, right?
Like, you look at, like, TikTok, and what was the company that
took all of their music off of TikTok for a little while, and now it's back
on? But, like, you couldn't use Taylor Swift's music, and there's,
like, all these artists that you couldn't use on TikTok, because the
deal between this music company and TikTok fell through. I don't know if you
remember that, a couple months ago. What ended up happening was, a lot of the
artists that were on this record label, they got a ton of exposure, because
content creators were using their music in their TikTok videos. And, like,
there's bands like AJR, and I'm actually a big fan of AJR; they actually credit
a lot of their fame and their success, and their music
blowing up, to the fact that TikTokers were using their music on their, you
know, their little clips and things. So people heard the songs in the TikToks.
You know, TikTok always put the name of the band on there, and then
people would click on the name of the band and go find more songs by that
band. It was actually a really, really good growth mechanism for these bands,
to allow content creators to just use the music in their videos. And then when
the record label had their beef with TikTok, the record label shut off this
stream of, like, awareness around these bands. It just doesn't make sense to
me. Like, the way copyright works right now just doesn't make sense. Let
content creators use it and let it be an exposure mechanism for these things.
I mean, I saw an article yesterday saying that Perplexity is trying to figure out some kind of deal with publishers to pay them. And I think that could make sense when you’re directly citing something: okay, this is directly from this article, that’s where we got the information from, and now there’s some kind of payment or revenue-share deal for something that clear. But with art it’s way more complicated, because artists have always gone to art museums and things like that to get inspired. That’s what the AI art models are doing, and music too. They’re not copying the art; they’re getting inspired by it. So I think it’s different, and I don’t see how you ever properly reward those people for having created it, the same way that if you got inspired by going to an art museum, you don’t go back and pay that artist, right?
Well, I feel like music’s even muddier, right? Because you have bands that go and sample other bands. Run-DMC goes and samples Aerosmith for “Walk This Way,” right? You get stuff like that, and now you’ve got multiple artists in the mix, and the waters are really muddy. I feel like I’ve beaten this horse to death. I think you and I are both on the same page here: we understand why copyright exists, we understand that creators need to get paid for what they’re creating, it just needs to be rethought somehow. I’m pro-capitalism, but the whole system is probably going to have to be rethought at some point. A lot of things stop making sense in the next 10 years as things become more and more abundant and there’s less scarcity, especially with robots. You combine AI and robots, and a lot of things have to change. A lot of things. So I’m excited about that. I’m excited about the robot K-pop bands, right? You get five robots that can all sing, you teach them dance moves, you put them on stage, and now people are going to go watch these. And then you can just clone those robots, and it’s like the Blue Man Group, right? The Blue Man Group can do multiple tours at the same time because it doesn’t have to be the same Blue Men at every single show. Is that the future of music entertainment? We’re going to see K-pop robots singing on stage, but they could be doing multiple tours at the same time.
Yeah, there used to be this thing in Tokyo, a robot show you’d go to. I think it was mostly girls dressed up like robots, and they may have had one or two real robots that did some small moves. Unfortunately they stopped doing it, but it used to be a huge tourist attraction. I think in the future people are not going to have to work as many hours, because these tools are just going to make people so much more efficient, and you combine that with all these new technologies. Yeah, we’re going to have some really amazing live experiences: music and robots and everything you can imagine. You know, I love it when people are so scared of this, and I’m like, imagine where we’re going to be in 10 years. It’s going to be fun. The world’s going to look way different than it does now. Stuff out of movies is going to become real. It’s a very exciting time to be alive, at least for me. It’s hard to get me excited about regular day-to-day things.
Right.
I find it pretty mundane, so I’m like, yeah, for the world to change more? That sounds great.
Yeah.
I’ll be excited to wake up every day. That’s awesome.
Yeah, yeah. I think it’s going to be exciting. It’s going to be fun. I’m loving all the latest AI video, AI audio, and AI image tech. I love seeing it progress, but at the same time, I still love real art. I still love going to shows and watching bands play in concert. I still love making my own music with a real guitar and actually playing something that I’m proud of. I like looking at art that I know was painted by hand in oils, or going to the theater and watching a movie that I know took two years to produce. I don’t really see AI eliminating that stuff, which is what I feel like most people are scared of. I think it’s more likely we’re going to see AI cut down the process: maybe some of the small B-roll they use in videos, or filling in the backgrounds of videos with fake actors. I think the people who are really in trouble in Hollywood are probably the extras, if I’m being honest. Say you have a scene with a big crowd. With visual effects in general now, this doesn’t even require AI, you can have just the front row be real people and everybody behind them be generated. You don’t need to fill in with extras. So I think that’s probably going to be the most affected group in Hollywood. But overall, I think we’re going to see some big change. I have no clue what it’s going to look like, but I think it’s going to be fun to continue to have conversations about it.
Yeah, I kind of think it may be more extreme than what you just said. I kind of think you may replace all actors at some point, and the human stuff becomes more of a niche product. But I don’t know, right? It may be like books: okay, people read books, but how many more people watch movies? The AI stuff might become more like the movies and the human stuff more like the books, where some people enjoy it but a lot of other people don’t care. So I agree and I disagree. I agree that they will be able to make
full movies without actors; they’ll be able to AI-generate them. But I think it’s going to be like a genre, right? Look at Disney movies. For the longest time, all of the Disney movies were drawn by hand and animated the old-fashioned way, and then Pixar came along and we got this 3D style of movies. Well, Disney didn’t ditch the old style of movies and only make the 3D Pixar-style movies, right? They still made Frozen and Moana and all these other movies long after Pixar came out. I think it’s just going to be a different style of movie. Creators will make movies, and the fact that they used AI for it will be a big deal, and it’ll be its own genre of movies that used AI actors. But I still think people are going to want to go and see talented actors act out their craft. I think that’s always going to exist. I don’t think AI is ever going to completely replace it, to the point where Hollywood is only making AI-generated stuff. I just don’t see that happening. I think humans like watching other humans too much.
Yeah, I agree. I just don’t know what piece of the pie that’s going to be. I don’t know how many people will still want to see humans. Is it 5% of the market or is it 90%? I’m not sure yet.
Yeah, which is why I think it’ll be its own genre. You’ve got people who will just refuse to see it. I don’t really go and watch rom-coms in the theater, right? But that doesn’t mean there’s not a market for them. So I think it’ll find its own market. I mean, right now they’ve already done screenings of AI films in theaters and stuff. And to be honest, I love AI, but I don’t really have any desire to go and sit through a fully AI-generated movie right now. I just don’t. The tech isn’t good enough for me to be that excited about sitting through it. Show me something that’s really impressive in two or three minutes and I’m good. I don’t need to sit through an hour-and-a-half movie.
Yeah. I’m really excited for the idea of movies that are almost like those books I read when I was young, where you can make choices, you know?
Yeah, yeah, choose-your-own-adventure stuff.
Yeah, and Netflix did that with, what was it, Bandersnatch or whatever it was called, which was a cool experiment. I’m sure it was really hard for them to do, and the cost to produce it was probably quite high. That’s probably why they didn’t continue doing it. But with AI, you’re going to be able to do that kind of stuff. I think that’s going to be a huge genre: you’re watching the movie, and something just happens, and you go, oh yeah, I want to do this, and now it generates it. When it gets to the point where the quality is actually good enough, where it’s 99% as good as a Hollywood movie, that’s going to be so fun. Like, I want the character to go pick up a bottle on the bar and smash it, or whatever crazy thing I want to see happen. Just to be able to say that out loud and then it happens. That’s going to be so fun.
Yeah, it is. And the cool thing about that, which I think the movie studios will absolutely love, is that the replay value of that content is huge, because every single time you watch that film, it’s going to be different. That’s where I think gaming is going too. We’ve talked about this in the past: I think all the dialogue in gaming is eventually going to be generative. They’re going to have guidelines it needs to stay within so it doesn’t spoil the rest of the game for you, right? You can’t go to a character and say, hey, how does the game end, and have it just tell you because that’s trained into the LLM. It’s got to have some sort of guardrails. But I think gaming, and the sort of choose-your-own-adventure content on Netflix, are both inevitable, because for the studios that create it, it just infinitely cranks up the replay value, the rewatch value, of that content.
Yeah. I think of it like, I’ve mentioned it before, Baldur’s Gate 3: a massive world with a huge story, the characters are super interesting, and you can make all these different choices. But in reality, the story is kind of mediocre. It’s not great. Some of the Dungeons & Dragons stories are just okay; the world’s awesome, though. So I’m like, for sure, AI can probably do as good a job on the story. And if you could create a new world every time, being able to type that in and have it produce all of that, people are going to be very addicted to these things.
Yeah. Well, I mean, you already have so many games right now that are procedurally generated, where the story doesn’t really revolve around the world you’re in because the world’s different every time. Minecraft, Valheim, Fortnite, some of the most popular games in the world, are procedurally generated: every time you get dropped into a level, it’s following a set of guidelines, but that level is a completely different level that most likely nobody else has seen before. And that’s what increases the replay value of a lot of these open-world survival games: every time you play, it’s a totally different game than the last time you played. AI just amplifies that, in my opinion.
Yeah, big time. So yeah, it’s going to be really exciting
times. We’re both excited to see how it plays out, and we’re going to keep on making videos and podcasts, sharing the journey and showing what we’re finding. So make sure you like this video if you found it helpful, and subscribe to this channel if you aren’t already, because we have some amazing guests coming up and a lot more fun, interesting discussions like this. Once again, thank you so much for tuning in to the Next Wave podcast, and we will see you in the next episode.
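As a closing aside: the procedural generation discussed above, a fixed set of guidelines plus a random seed yielding a level that most likely nobody else has seen, can be sketched in a few lines. The following is a minimal, hypothetical Python illustration; the tile names and grid size are invented for demonstration and are not taken from any of the games mentioned.

```python
import random

def generate_level(seed, width=8, height=8):
    """Build a small tile map from a seed.

    The 'guidelines' (tile set, grid dimensions) stay fixed, but each
    seed produces a different layout, which is where replay value comes from.
    """
    rng = random.Random(seed)  # isolated, deterministic RNG
    tiles = ["grass", "water", "rock", "tree"]  # hypothetical tile names
    return [[rng.choice(tiles) for _ in range(width)] for _ in range(height)]

# The same seed reproduces the identical level, while a fresh seed
# drops the player into a brand-new world.
replay = generate_level(seed=42)
fresh = generate_level(seed=99)
```

This is, in spirit, how seed sharing works in games like Minecraft: publishing a seed lets anyone regenerate the same world, while the generation rules guarantee every world still plays by the game’s guidelines.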
[Music]

Episode 13: What impact will AI-generated content have on the entertainment industry? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive into this topic, envisioning a future where AI generates interactive movies and complex gaming worlds with infinite replay value.

In this episode, Matt and Nathan explore the potential of AI video tools such as Sora, Luma’s Dream Machine, and Runway’s Gen-3. They discuss how these advancements could democratize video creation, enhance B-roll, and expand creative possibilities, as well as the implications for copyright law, gaming, and traditional creative industries. They also touch on George Lucas’ views on technological progress, Ashton Kutcher’s controversial support for AI, and the role of indie game developers in a rapidly evolving landscape.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:

  • (00:00) Sora is the most anticipated AI video model.
  • (03:39) AI video tools improve, but have quirks.
  • (09:23) Runway Gen-3 is fast, unlike Luma’s Dream Machine.
  • (12:04) Excitedly exploring and creating with new AI.
  • (14:48) Custom Midjourney models personalize prompts, raise concerns.
  • (17:28) George Lucas acknowledges inevitability of AI development.
  • (21:40) Copyright law impacting AI and technological innovation.
  • (25:31) Uncertainty around how copyright evolves in the new world.
  • (27:28) TikTok boosted exposure for music artists.
  • (32:32) Excited about AI tech but still loves art.
  • (33:18) AI may replace video extras, changing Hollywood.
  • (38:52) Procedural generation and AI enhance game replayability.

Mentions:

Check Out Matt’s Stuff:

• Future Tools – https://futuretools.beehiiv.com/

• Blog – https://www.mattwolfe.com/

• YouTube- https://www.youtube.com/@mreflow

Check Out Nathan’s Stuff:

The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
