AI transcript
0:00:07 playing with these tools that are coming out right now.
0:00:08 People are going to be very addicted to these things.
0:00:11 Right. All the tools are getting better too.
0:00:14 Now we’re starting to wonder, did Sora kind of blow it?
0:00:23 When all your marketing team does is put out fires, they burn out fast.
0:00:26 Sifting through leads, creating content for infinite channels,
0:00:29 endlessly searching for disparate performance KPIs.
0:00:31 It all takes a toll.
0:00:35 But with HubSpot, you can stop team burnout in its tracks.
0:00:38 Plus, your team can achieve their best results without breaking a sweat.
0:00:41 With HubSpot’s collection of AI tools, Breeze,
0:00:45 you can pinpoint the best leads possible, capture prospects’ attention
0:00:50 with click-worthy content, and access all your company’s data in one place.
0:00:52 No sifting through tabs necessary.
0:00:55 It’s all waiting for your team in HubSpot.
0:00:58 Keep your marketers cool and make your campaign results hotter than ever.
0:01:01 Visit hubspot.com/marketers to learn more.
0:01:06 Hey, welcome to the Next Wave Podcast.
0:01:07 I’m Matt Wolf.
0:01:12 I’m here with Nathan Lanz and today we’re going to talk about AI video.
0:01:16 There’s been these really interesting AI video generators out there, right?
0:01:21 We’ve had Gen-2, we’ve had Pika Labs, and we’ve had Leonardo Motion.
0:01:24 And there’s been all these really cool AI video tools,
0:01:28 but they’ve really been kind of just that, just sort of cool, right?
0:01:31 They haven’t really had great practical use cases.
0:01:35 We haven’t been able to create videos with one of these tools and legitimately
0:01:39 use it as like B-roll or make like a really good film out of it.
0:01:41 They’ve all had this sort of weirdness to them.
0:01:46 That is until we got a sneak peek of Sora from OpenAI earlier this year.
0:01:53 When everybody saw Sora, we saw this AI text-to-video platform that made
0:01:57 videos that actually looked realistic, and pretty much everybody in the AI world
0:02:03 got ultra, ultra excited about Sora and what it could possibly do and how
0:02:06 realistic it could make these videos.
0:02:08 But then we never got access to it.
0:02:10 We never actually got to play with it.
0:02:12 We kept getting teaser videos.
0:02:16 They gave it to like a handful of creators, like three or four different
0:02:20 creators were allowed to use it and we got some demos from that.
0:02:24 But still to this day, most of the world hasn’t gotten access to Sora.
0:02:29 Well, now we’re starting to get some alternatives to Sora that are looking
0:02:31 pretty dang good.
0:02:34 We recently got Luma, who released their Dream Machine.
0:02:39 We have Gen-3 from Runway, which has been sort of teased, but we haven’t
0:02:44 gotten access to it yet. And now we’re starting to wonder:
0:02:46 did Sora kind of blow it?
0:02:51 That’s kind of the discussion we want to have today is, you know, where is AI
0:02:52 video going?
0:02:52 Where did it come from?
0:02:53 Where’s it going?
0:02:54 What’s available now?
0:02:55 What’s coming in the future?
0:02:58 I think there’s a really interesting discussion here.
0:03:02 I think probably the general consensus online right now is that OpenAI did
0:03:03 wait too long.
0:03:04 That’s kind of what most people think.
0:03:06 I think I disagree with that.
0:03:09 Honestly, I was probably one of the first people on Twitter doing
0:03:12 really big AI video threads, back when it was all first starting.
0:03:15 Like, that was one of my main things I was doing
0:03:18 every week: putting out, here’s the top AI videos this week.
0:03:21 And I kind of stopped after Sora came out.
0:03:25 Because the videos were kind of cute,
0:03:27 and then Sora came out, and, okay, yeah, sure,
0:03:29 I could get clicks on this and views,
0:03:32 but I felt kind of dumb putting out, here’s these amazing AI videos, after
0:03:37 people had seen Sora. By putting it out so early, they made
0:03:38 everything else look bad.
0:03:41 These new ones are catching up, especially Gen-3.
0:03:42 I thought it was pretty amazing.
0:03:44 Dream Machine is pretty awesome.
0:03:47 But they still don’t look as good as Sora.
0:03:51 And so what I would say is, yeah, it’s not released yet, but whatever they
0:03:54 showed then, it’s going to be better by the time it’s actually released.
0:03:55 Most likely.
0:03:58 And so when they do come out with something, you know, it’s going to be
0:04:00 almost like how Apple was back in the day:
0:04:02 they would come out with the very best product.
0:04:04 Maybe it wasn’t out first, but it would be the best when it came out.
0:04:08 No one’s actually found real use with AI video yet.
0:04:11 And it feels like Sora is the most likely one that when it actually comes
0:04:13 out, it’ll be the first one that will have real use.
0:04:16 And that’s why they’ve talked with Hollywood;
0:04:19 they’ve been talking to major studios. Apparently right now, the main
0:04:21 players are Sora,
0:04:24 I mean, OpenAI’s Sora, as well as probably Gen-3,
0:04:27 because Gen-3 also looks like it’s just barely behind Sora.
0:04:27 Yeah.
0:04:30 No, it’s funny you say that, because I used to make a lot of YouTube
0:04:30 videos about, man,
0:04:33 look how far AI video has come, right?
0:04:37 And I would show off like how much better Pika labs has gotten or how much
0:04:39 better Runway Gen-2 has gotten.
0:04:43 And then we had Stable Video Diffusion, and there were all these
0:04:47 different AI video models that came out, but they all, you know, they
0:04:48 had weirdness to them, right?
0:04:52 Like, every video for whatever reason looks like it’s moving in slow motion.
0:04:56 People would morph: they would start looking like one person and then
0:05:00 morph into a completely different person. And all the AI video models I’ve
0:05:03 seen so far still really suck at hands, right?
0:05:08 So there were all of these video tools that were kind of cool, but then
0:05:14 OpenAI went and showed off Sora, and I was like, okay, well, they
0:05:18 just raised the bar of what AI coolness looks like.
0:05:23 So now anything I ever show off in a YouTube video that is me trying to
0:05:28 say, look at this cool new AI video tool, looks lame compared to Sora.
0:05:31 So I kind of stopped making those kinds of videos, but now I’m making
0:05:33 them again, because we’re starting to see Gen-3.
0:05:35 We’re starting to see Luma’s Dream Machine.
0:05:38 We’re seeing these other tools pop up now.
0:05:38 Yeah.
0:05:40 And it is exciting, because Dream Machine you can actually use.
0:05:42 So that is the exciting part.
0:05:44 Like, it’s not as good as Sora.
0:05:48 It’s probably not as good as Gen-3 either, but it’s not that far behind.
0:05:49 And you can actually use it right now.
0:05:52 Like I saw your video where you made a music video, you know?
0:05:54 And I thought that was awesome.
0:05:58 Like, oh, that’s actually, yeah, I wouldn’t put that on TV yet.
0:06:03 That’s probably like six months or 12 months away from being almost
0:06:06 TV quality. And you’ve got all these new tools coming out.
0:06:09 You’ve got, you know, Udio, and, I don’t know.
0:06:12 It feels like we’re at the very beginning of this creative explosion
0:06:16 where all these tools combine, and the level of art and
0:06:18 entertainment in the world is going to go up dramatically,
0:06:20 because everyone’s going to be able to make this stuff.
0:06:20 It’s going to be awesome.
0:06:21 Yeah.
0:06:22 No, I totally agree.
0:06:28 I think, you know, it’s a buzzword, but it really democratizes video creation.
0:06:28 Right.
0:06:31 One of the things that I’m really excited about is just B roll.
0:06:32 Right.
0:06:36 I make a lot of YouTube videos and I don’t like to be just on camera the whole time.
0:06:38 I like it to change what you’re looking at.
0:06:40 I want the video’s pace to keep going.
0:06:43 And oftentimes it’s hard to find B roll.
0:06:47 And when you do go find B roll, you’re searching for like stock video sites.
0:06:47 Right.
0:06:50 Storyblocks is the one that I use.
0:06:54 And when I go through Storyblocks, you can find videos that are sort of
0:06:57 relevant, but you’re not fooling anybody; it’s obviously stock video.
0:06:58 Right.
0:07:02 It all looks like stock video, when you have that corporate conference room
0:07:05 and five people in suits are all leaning forward over a conference
0:07:06 call or something.
0:07:08 Everybody’s seen those exact videos.
0:07:11 Even if you haven’t seen that stock video footage before, you just know what
0:07:13 stock video looks like.
0:07:14 And so this really excites me.
0:07:20 Anything I can imagine: I can say any wild thing I want in one of my YouTube
0:07:24 videos, and now I can create a little bit of B-roll for whatever random
0:07:26 wild thing I said.
0:07:30 Did you see how good Gen-3 is at text in video?
0:07:32 I haven’t yet.
0:07:33 It’s perfect.
0:07:37 I’m actually at Augmented World Expo right now as we’re recording this.
0:07:41 And a lot of these tools and announcements are dropping while I’m at this event.
0:07:45 So I haven’t actually been seeing as many of the demos. But I will say,
0:07:51 about the Luma Dream Machine, that it’s really, really good when you
0:07:54 start with an image and you turn that image into a video.
0:07:58 But if you go in there and you enter a text prompt and try to generate a
0:08:00 video from a text prompt,
0:08:01 it’s not great.
0:08:02 Yeah.
0:08:05 So Gen-3: the CEO of Runway
0:08:08 has been showing clips on Twitter, and I saw one yesterday.
0:08:11 He showed like five in a row of text.
0:08:13 Like, you know, you write out your name, you know, mreflow,
0:08:20 or you write out The Next Wave, or Lore. Text on screen is perfect in Gen-3.
0:08:22 I mean, to the point it’s like crazy.
0:08:24 Like, okay, you type in something, and you’re talking about time, and you
0:08:28 want it to have sand coming down, or you want the words to fly out of
0:08:30 something, and all of a sudden there’s sand dripping down.
0:08:31 Perfect.
0:08:33 And another one where the words were actually being dragged
0:08:37 through a jungle, and they were made of dirt, and then they popped up. Like,
0:08:40 anything you want to do with text, for advertisements or B-roll or
0:08:44 intros to a show, it’s already very, very good.
0:08:45 Now, I was kind of surprised.
0:08:48 Like, I want to type in Lore and see what pops up when you do that.
0:08:50 Yeah, no, that’s awesome.
0:08:55 Because, I mean, even most of the text-to-image generators still struggle with
0:08:57 getting the text in the image, for the most part.
0:09:01 So to know that we’re getting a video one that can do that as well
0:09:02 is pretty crazy.
0:09:06 The other thing about Gen-3 is, all of the clips that they’ve been showing
0:09:11 off, I believe are 10 seconds ish, maybe even longer.
0:09:15 But when it comes to Luma’s Dream Machine, you can only generate five seconds
0:09:20 of video right now, but they did just add this extend so that you can get five
0:09:24 seconds and then I think it pretty much uses the last frame of that video as the
0:09:25 first frame of the next video.
0:09:27 And so it extends it that way.
0:09:33 But when you do AI video generation in that way, because each clip is building
0:09:36 off of the last one, the quality gets a little bit worse, right?
0:09:43 Every single extension looks a little bit worse than the extension before it.
0:09:43 Yeah.
0:09:47 And I heard something from the Luma labs team saying that right now,
0:09:51 yeah, they could do one minute videos, but with their current model, apparently
0:09:55 after five or 10 seconds, like the animations just kind of stop.
0:09:58 Like if you had a character doing like an action scene, running around
0:10:01 with a gun, shooting it all around by 10 seconds, the person’s like kind
0:10:04 of just like standing there with a gun looking around or something like this.
0:10:08 Like the model’s not fully there yet in terms of like a long, long clip.
0:10:10 And so apparently that’s the sweet spot currently for them.
0:10:11 Yeah.
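The extend mechanism described above, where each new clip is seeded with the last frame of the previous one, can be sketched roughly like this. This is a minimal sketch of the chaining logic only, under stated assumptions: `generate` is a hypothetical stand-in for a text/image-to-video call (it is not Dream Machine’s real API), and frames are represented abstractly.

```python
# Hedged sketch of the "extend" chaining described above. `generate` is a
# hypothetical stand-in for a text/image-to-video call (e.g. something like
# Dream Machine); it takes a prompt plus a starting frame and returns a
# list of frames for one short segment.

def extend_video(generate, prompt, seed_frame, segments=3):
    """Chain several short generations into one longer clip.

    Each new segment is seeded with the last frame of the previous one,
    which is why quality tends to drift: every segment only "sees" a
    single frame of context from the segment before it.
    """
    all_frames = []
    start = seed_frame
    for _ in range(segments):
        frames = generate(prompt, start_frame=start)
        all_frames.extend(frames)
        start = frames[-1]  # last frame becomes the next segment's seed
    return all_frames


# Toy usage: a fake generator that emits 5 "frames" (here just numbers),
# so three chained segments yield a 15-frame clip.
if __name__ == "__main__":
    def fake_generate(prompt, start_frame):
        return [start_frame + i for i in range(1, 6)]

    clip = extend_video(fake_generate, "a monkey on roller skates", seed_frame=0)
    print(len(clip))  # three 5-frame segments chained together
```

The design trade-off is exactly what the hosts describe: chaining keeps each generation short and cheap, but because only the final frame carries over, motion and identity can degrade a little with every extension.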
0:10:14 And the other thing that I hear about Gen-3 is it’s really fast, right?
0:10:19 I think I saw Cristóbal post something on Twitter about how it generates
0:10:23 in about 45 seconds. Whereas, I don’t know how much you’ve played around with Luma
0:10:26 Dream Machine, but you have to wait in a queue.
0:10:30 And then once you get through that queue, it takes a minimum of two minutes
0:10:31 to generate.
0:10:36 I’ve actually found it’s probably closer to three minutes. But the very first
0:10:41 time I ever used Luma’s Dream Machine, I logged in, tried to generate a video
0:10:46 from it and it took seven hours in queue before that three minute generation
0:10:50 happened. So I actually typed in my prompt and I had it open thinking,
0:10:53 oh, it’s going to generate anytime now, anytime now, anytime now.
0:10:56 And eventually I just like walked away and went and ate dinner and probably
0:11:01 like watched a movie with my family and then came back and it was still in
0:11:06 queue. All said and done, it took seven hours in queue before it
0:11:12 finally generated. An amazing video, right? No, the video was horrible. I did the
0:11:16 prompt “a monkey on roller skates,” because that was the first AI video I ever
0:11:21 generated, back when I was playing around with ModelScope, like a couple years
0:11:25 ago. And so I wanted to see, okay: this is a monkey on roller skates
0:11:29 that I generated two years ago; this is a monkey on roller skates
0:11:33 using Luma Dream Machine. The Luma Dream Machine version was worse than the
0:11:37 version I made two years ago with ModelScope, and it took me seven hours and
0:11:41 three minutes to generate. There are actually other video generators, too,
0:11:45 that a lot of people have been comparing to Sora. There was Kling, which came
0:11:48 out of China, but you had to have a Chinese phone number to use it. Although I
0:11:52 did hear some people just entered their, like, US number and got access anyway.
0:11:58 I haven’t attempted it yet. I don’t know, though; nothing that I’ve seen from
0:12:02 Kling makes me go, oh, this is Sora-level. I never saw anything that
0:12:06 made me go, that’s on the same level as this stuff. But I have seen stuff come out of
0:12:15 Luma’s Dream Machine. I have seen some of the Gen-3 videos where I’m like, that
0:12:20 looks pretty damn close, especially when you start with a realistic-looking
0:12:24 image, or even a real image, in Luma and have it animated. It’s actually pretty
0:12:29 dang good looking. Yeah, I mean, there’s a lot of stuff coming out, man. I’m
0:12:34 especially impressed by Gen-3. I think it looks pretty amazing. Like,
0:12:38 there are parts where you can see that if it was higher resolution (it’s not high
0:12:42 enough resolution yet), it would already be good enough to put in as B-roll in
0:12:46 major films. Yeah. So that’s exciting. And then did you see the stuff with
0:12:50 anime, like little anime clips? I mean, it even does anime pretty well. And so,
0:12:56 yeah. I don’t think I’ve had as much fun with AI as I have in the last like
0:13:02 month, just playing with Suno and Dream Machine and Udio and all of these tools
0:13:09 that are coming out right now. You know, it reminds me of about two
0:13:12 and a half years ago, when Midjourney first
0:13:16 came out and I started playing with Midjourney for the first time. And I
0:13:20 just lost sleep, right? Like, I would stay up until 1:30 a.m. just
0:13:23 generating images, going, can I do this? Can I do this? And then when I learned
0:13:27 about Stable Diffusion, and I fine-tuned a model on my face, I was able to make
0:13:31 myself Superman, or make myself riding a horse, or an astronaut, or
0:13:37 whatever. And that was the next time I was like all right. I just lost a whole
0:13:41 day of my life playing with this generating images. Well, now I feel that
0:13:47 way again with the combo of Suno and Leonardo’s new AI image generator
0:13:55 and Midjourney and Dream Machine. To me, it is so much fun. I made
0:13:59 that video on YouTube where I showed myself making a music video, and I used
0:14:02 Suno to make the song, and then Midjourney to make the starting images, and
0:14:06 then I took the Midjourney images and put them into Dream Machine to animate
0:14:12 them all, and then I used DaVinci Resolve to edit them all together. That
0:14:16 video actually probably took me a good ten hours to produce, just because
0:14:20 of all the waiting time for the processing in Luma. But it was so much
0:14:23 fun. I was so blown away with some of the videos that were coming out of
0:14:27 it. Not all of them. But some of the videos were really, really impressive.
0:14:33 We’ll be right back, but first I want to tell you about another great podcast
0:14:37 you’re going to want to listen to. It’s called Science of Scaling hosted by
0:14:42 Mark Roberge and it’s brought to you by the HubSpot Podcast Network, the audio
0:14:47 destination for business professionals. Each week hosts Mark Roberge, founding
0:14:51 chief revenue officer at HubSpot, senior lecturer at Harvard Business School
0:14:55 and co-founder of Stage 2 Capital, sits down with the most successful sales
0:15:00 leaders in tech to learn the secrets, strategies, and tactics to scaling your
0:15:04 company’s growth. He recently did a great episode called “How Do You Solve for
0:15:09 Siloed Marketing and Sales,” and I personally learned a lot from it. You’re
0:15:13 going to want to check out the podcast, listen to Science of Scaling wherever you
0:15:19 get your podcasts. Yeah, and all the tools are getting better too, and
0:15:22 more fun to use. Like, even Midjourney, now you don’t have to use it on Discord.
0:15:26 They have the website, and that interface is so much more pleasant to use, and also
0:15:31 the personalization feature. Have you tried that out yet? Yes, in Midjourney. I’ve tried
0:15:34 it on and off. I’m like, oh yeah, the one where it’s personalized for me. Yeah,
0:15:39 I like that better. That is kind of cool. And realize that, long-term, all
0:15:45 these models are going to learn whether it’s like the AI art, the videos, the
0:15:49 music, games in the future. They’re all going to learn what kind of stuff you
0:15:54 personally like and help you amplify your own creativity and it’s
0:15:57 exciting to think about like, yeah, these are all going to get better. This is
0:16:01 the worst it’s ever going to get and just imagining in like two years how
0:16:06 fun it’s going to be to like produce music and videos and whatever you want.
0:16:09 It’s probably going to be like way faster too. A lot of this stuff is probably going to get almost
0:16:13 instant. There’s no reason this stuff can’t be instant at some point. So imagine
0:16:16 that you could just like type in stuff instantly and you’ve just created a song.
0:16:21 You’ve now created a video and you’re like in real time like editing these
0:16:25 things together yourself. It’s going to be awesome. I’m excited. Yeah, everybody’s
0:16:29 going to have essentially their own custom Midjourney model, right? Like, I can
0:16:33 enter a prompt into Midjourney, you can enter the identical prompt, and if we’re
0:16:37 both using our own personalized models, we’re going to get two probably pretty
0:16:40 dramatically different outputs, because it’s going to make one for my taste and one for your
0:16:45 taste. And I just think that’s really, really cool. I think, you know, the other
0:16:51 side of the coin of this conversation is the type of comments I’ve been getting
0:16:56 on my YouTube video where I made a music video or when I actually shared the
0:17:01 music video over on X and on Instagram, is, you know, I start getting a lot of
0:17:07 these comments of like, oh great, you’re making a video that’s helping people, you
0:17:11 know, perpetuate the downfall of the music industry, the downfall of the video
0:17:16 industry. Oh, these tools are trained on copyrighted material. So, you know, this
0:17:21 is just as, you know, bad as stealing the original material and using that in your
0:17:26 videos. And those are the types of, I mean, not most of the comments, but I’m
0:17:30 seeing those kinds of comments, right? Like, the copyright implications, the
0:17:35 implications that if I can make music with this, that diminishes the
0:17:40 work of artists, and all that kind of stuff. I’m personally in the camp of,
0:17:44 like, I call BS on all of that. I don’t think it diminishes anybody’s work. I
0:17:49 think the fact that I can make an AI image that I think looks really cool and
0:17:53 the fact that this person over here can actually draw it with their own hands and
0:17:56 make something that looks really cool. I’m way more impressed by that version
0:18:00 than the version that I made and I think I always will be just by the fact that a
0:18:06 human was behind it making it. Yeah. Did you see the blowback that Ashton
0:18:11 Kutcher got when he was talking about he basically he got access to Sora. He said
0:18:14 it’s good. He’s like, it’s going to change Hollywood and he made some like
0:18:17 really, like, you know, big statements about it. And people were like, oh my god,
0:18:20 you’re, you know, turning your back on Hollywood. You went with tech over
0:18:24 Hollywood now, and you’re turning your back on creators, and you’re
0:18:28 okay with screwing them all over. And he was like, no, you know, I think humans
0:18:31 are still going to be involved. But like, yeah, of course, entertainment is going
0:18:35 to evolve, like it always has. Like, you know, obviously over the last 20
0:18:39 years, CGI has really taken over Hollywood, right? Like,
0:18:44 how many major films use CGI? A lot of them now. This is a further evolution
0:18:48 of entertainment. And I think that’s like what humanity is kind of like our
0:18:52 purpose is to evolve and continue getting better and better. So, but you know,
0:18:56 there’s there’s a natural instinct to be worried about change. Like change is
0:19:00 scary. And so I understand like people being worried because like, yeah,
0:19:03 there probably will be periods where there will be some job
0:19:08 losses related to this stuff, for sure. Yeah. I mean, George Lucas was
0:19:11 asked what he thought about all this AI stuff, right? And his response was
0:19:16 essentially: well, it’s all inevitable. It’s going to happen anyway. Just
0:19:20 like, you know, we were doing everything with practical effects, and
0:19:24 then we got computer graphics and we started doing everything with CG. And
0:19:30 now we’ve got AI. And so like his sort of analogy was like when cars started to
0:19:34 come out and people started going, yeah, but we’re just going to stick with
0:19:38 horses. Well, you can stick with horses, but these machines are going to keep
0:19:42 going. We’re going to keep evolving. We’re going to keep improving them. You
0:19:45 can stay with horses if you want, but that’s not how the world works. We’re
0:19:49 going to figure out new, better, innovative solutions to accomplish the same
0:19:54 goal. That’s just what humans do. We try to figure out how to get more
0:19:58 efficient, how to optimize processes, how to get better at what we do, how to use
0:20:04 technology in our favor to leverage that technology to make our lives easier.
0:20:09 That’s what technology essentially exists for is how can we use tech to make
0:20:12 the things that used to be more manual, less manual for us? That’s how it’s
0:20:17 always evolved. Speaking of evolving, did you see where Runway, with Gen-3,
0:20:20 put up this blog post talking about how they’re
0:20:24 creating general world models? Apparently, that’s the way that
0:20:28 they’re producing AI video now, which was rumored to be what Sora was
0:20:31 doing as well: they’re kind of creating an idea of what
0:20:36 the world is like, a model of it. Like a digital twin kind of thing, yeah. Yeah,
0:20:39 that’s why it can be so consistent, right? That’s why you could have a train
0:20:42 actually moving and seeing things as it moves is because it’s kind of produced a
0:20:47 world that it’s inhabiting. I think that’s fascinating and to think that,
0:20:51 like, you know, and NVIDIA has talked about this as well, you know, and NVIDIA who
0:20:56 just became the number one company in the world, thinking about how that’s
0:21:02 going to change games, you know, videos. Like, imagine: the online
0:21:05 games now, the worlds are so limited, but it seems like with this new
0:21:09 technology, you’ll be able to create something kind of like
0:21:12 Minecraft, where you go into the world and you go to the edge of it
0:21:16 and it produces more. Imagine those kinds of things with AI, where you get to
0:21:20 the edge of the world, and you get to the edge of space, and, oh, here’s now a
0:21:25 new planet, or here’s now whatever. It’s infinite. It goes forever. Those
0:21:28 things are going to become possible, at really high fidelity, not like
0:21:33 Minecraft. Don’t let the game developers hear you say that, because if there’s any group of
0:21:38 people that is more vicious towards the AI community, towards people that are
0:21:45 pro-AI, it’s the game developers. Like, I’ve, you know, had debates with
0:21:48 people that are in film and music and things like that, and, you know, some of
0:21:53 them are pretty upset by what’s going on. But I have never seen the level of
0:21:58 hate on some of the stuff I’ve posted like what I’ve seen from some
0:22:02 of the game developer community. If you talk about AI taking over game
0:22:07 development, they’re probably the first ones to just absolutely push
0:22:11 back. I mean, yeah, the reality is, though, the game industry is in a
0:22:15 really stagnant moment. Like the game industry, you know, it’s worse than
0:22:19 what’s happening in Hollywood, I would say, where, you know, the games are so
0:22:22 expensive to make that everyone just copies the previous game and gamers are
0:22:25 getting tired of it. I think that’s why you see the growth numbers have
0:22:31 stagnated. I feel like there’s a big like sort of renaissance of like indie
0:22:35 developers right now. Like most of the games that I play, I’m a big gamer
0:22:38 myself, most of the games I play are from indie studios. They’re not the big
0:22:42 triple A games. Yeah. Yeah. And AI is going to help them. Like, and sure, some of
0:22:46 them will be resistant to it right now, but once they see what it can actually
0:22:50 do for them, they’ll be like, oh, you can be, like, five people and you can now
0:22:55 compete with EA, you know. You’ll be able to produce an entire world.
0:22:59 You’ll have help with the storylines, with the characters, with the creation of
0:23:03 the art assets. That’s all coming in the next two years. And so I think
0:23:06 it’s going to be great. Like, yeah, the game industry is very stagnant
0:23:09 now, in my opinion. And I think that’s going to change in two years. And yeah,
0:23:14 sure, people are yelling about it right now, but it won’t matter. Like some
0:23:16 people will lead the way and then everyone else will have to follow after
0:23:20 that, I think. Yeah. You know who would be a really good
0:23:24 guest for this show, who I think would be really fun to talk to, is somebody who
0:23:32 can intelligently speak to us about copyright law and how copyright law is
0:23:35 going to be impacted by a lot of this stuff. So I don’t know if this is a
0:23:42 hot take or not, but in my opinion, copyright law is a big part of what’s
0:23:48 perpetuating AI. So all of the companies, all of the people that are
0:23:53 out there fighting against AI because they’re worried about it using
0:24:00 copyrighted material, I sort of have this opinion that they’re pushing AI
0:24:04 forward faster than if they had just not brought this stuff up. And the
0:24:11 reason I say that is, like, look at stock photos, right? So I had a blog
0:24:16 where I actually hired an editor to come through after I wrote a blog post,
0:24:20 sort of clean up the blog post and then add imagery to the blog for me,
0:24:25 right? Well, they came in and they added some images and I looked at the blog
0:24:29 post. Cool, this is cool, let’s publish it. We published the blog post. I assumed the
0:24:33 images they used were just from a regular stock photo site. Well, it turns out
0:24:36 they did a Google search. They pulled a photo that was owned by the
0:24:42 Associated Press, and I got an invoice in my email for using that photo, for
0:24:46 eight hundred dollars. So for this one photo that was used on the blog post,
0:24:50 which my editor grabbed from Google Images, I had to pay eight hundred
0:24:54 dollars for the right to use that photo. And I emailed them, like, well,
0:24:57 can I just take it down and use a different photo? And they’re like, no,
0:25:01 the damage is done, pay the invoice, essentially. And so in my mind,
0:25:06 when I saw AI image generation, I went awesome. I don’t have to like worry
0:25:10 about that anymore. I could just go generate any image I want now. And so
0:25:15 like these like copyright pressures that they’re putting on creators are
0:25:20 pushing creators towards using AI. Same with Suno, right? You look at something
0:25:24 like Suno. How many people have you heard of that put up YouTube videos?
0:25:28 Maybe there was a song playing in the YouTube video. They got the video
0:25:32 copyright-struck and either had the video completely removed or had a
0:25:37 hundred percent of the monetization from that video go to the copyright holder.
0:25:41 I’ve had that happen where I made a thirty minute video and maybe ten
0:25:45 seconds of that video had an audio clip that was copyrighted by somebody else
0:25:51 just kind of an oversight; it slipped through. Well, because of that ten-second clip,
0:25:56 all of the revenue for that full thirty-minute video had to go to the copyright
0:26:00 owner of that ten-second clip. That doesn’t make any sense. That’s not fair
0:26:05 to me. Like, give them ten percent of the revenue, not a hundred percent of the
0:26:09 revenue, you know? So that kind of stuff comes up. Well, now that stuff like
0:26:14 Suno exists, what am I going to do? I’m not going to go use music that I find
0:26:18 online. I’m just going to go generate the perfect song for that video right
0:26:24 now. However, had copyright law been a little bit different and creators were
0:26:28 allowed to, you know, use some images they find online or use some music that
0:26:33 they find online in their videos without worrying about it affecting
0:26:38 their livelihood, I don’t think people would be jumping to go and generate
0:26:42 music with Suno or jumping to generate images with Midjourney as part of
0:26:47 their content as quickly, because they could just use content and stuff that was
0:26:52 created by other people. So I have this opinion that I really, really think
0:26:56 copyright law needs to change, and if copyright law were different than it is
0:27:02 now, it would probably, you know, stop as many creators from jumping to
0:27:06 using these AI tools. Anyway, that’s the end of my little rant about
0:27:10 copyright. Yeah, yeah. Yeah, I think I think this will be a moment where
0:27:14 copyright is forced to evolve. Like, you know, copyright is so complicated. I mean,
0:27:18 my last startup, Binded, we ended up pivoting into copyright, which was not
0:27:24 the initial intention. I spoke in Washington, DC on a panel about the
0:27:27 future of copyright, and I still feel like I barely understand
0:27:32 copyright. It’s so complicated. You know, and I met with the guy who was at
0:27:36 that time heading Creative Commons and talked to him a lot about, you know,
0:27:39 copyright and all the issues. And, you know, Creative Commons was always
0:27:42 interesting, but also Creative Commons is so complicated. Like, there’s so many
0:27:45 different versions of Creative Commons and it increases so much cognitive
0:27:49 overhead of like, okay, which one do I pick and how do I do it? And it’s all so
0:27:53 complicated. I don’t know where copyright goes in the
0:27:56 future because, I mean, in the new world, it almost
0:27:59 doesn’t make sense in its current state. Like, I think there should be laws around,
0:28:03 okay, if you directly copy somebody, like, say
0:28:05 it’s, you know, Michael Jackson and now you’ve got Michael Jackson singing in the
0:28:09 song. Yeah. Yeah, like probably his estate should be
0:28:12 paid something. But if it’s not directly copying people, I just,
0:28:16 I don’t see how copyright exists in its current form like 10 years from now.
0:28:21 Yeah. Yeah. And I don’t know the solution either, right? Like, I don’t, I do
0:28:25 think that people that spend the time to create the art, people that spend the
0:28:30 time to make the music, to, you know, generate the stock video, to take the
0:28:33 photos, I think they should be compensated for the
0:28:37 work they’re doing. I do think that’s important. And I
0:28:41 don’t know how that works. I mean, right now copyright is just kind of
0:28:46 the best solution they’ve got, but I don’t think it is
0:28:50 the, you know, the final solution. I think the way
0:28:54 copyright works and the way that companies are going and sort of,
0:29:00 you know, slapping down creators for using the content,
0:29:04 it’s just, it’s not helpful to their cause
0:29:07 in the long run, right? I think that’s sort of my opinion on it, right?
0:29:12 Like, you look at TikTok and what was the, what was the company that
0:29:16 took all of their music off of TikTok for a little while and now it’s back
0:29:19 on, but like you couldn’t use Taylor Swift’s music, there’s
0:29:23 like all these artists that you couldn’t use on TikTok because the
0:29:26 deal between this music company and TikTok fell through. I don’t know if you
0:29:30 remember that a couple months ago. What ended up happening was a lot of the
0:29:36 artists that were on this record label. They got a ton of exposure because
0:29:40 content creators were using their music in their TikTok videos, and like
0:29:46 there’s bands like AJR, who I’m actually a big fan of. They actually credit
0:29:51 a lot of their fame and their success and their music
0:29:57 blowing up to the fact that TikTokers were using their music on their, you
0:30:00 know, their little clips and things. So people heard the songs in the TikToks.
0:30:03 They, you know, TikTok always put the name of the band on there and then
0:30:06 people would click on the name of the band and go find more songs by that
0:30:11 band. It was actually a really, really good growth mechanism for these bands
0:30:17 to allow content creators to just use the music in their videos. And then when
0:30:22 the record label had their beef with TikTok, the record label shut off this
0:30:28 stream of like awareness around these bands. It just doesn’t make sense to
0:30:31 me. Like, the way copyright works right now just doesn’t make sense. Let
0:30:36 content creators use it and let it be an exposure mechanism for these things.
0:30:41 I mean, so I saw an article yesterday saying that Perplexity is trying to
0:30:46 figure out some kind of deal with publishers to pay them. And I think
0:30:50 that could make sense when like you’re directly citing something like, okay,
0:30:53 yeah, this is directly from this article. And that’s where we got the information
0:30:57 from. And now they’re doing some kind of payment or revenue-share deal for
0:31:01 something that’s that clear. But with art, it’s way more complicated because
0:31:05 it’s just like artists have always gone to like art museums and
0:31:09 things like that to get inspired. That’s what AI, that’s what the AI art
0:31:15 models are doing. They are not copying the art. Yeah, music too. They are
0:31:20 getting inspired by it. And so I think it’s different and
0:31:24 I don’t see how you ever properly reward those people for having created that,
0:31:27 the same way you don’t pay, you know, if you got inspired by going to an art
0:31:32 museum, you don’t go back and pay, you know, something to that artist, right?
0:31:36 Well, I mean, it’s just, I feel like music’s even muddier, right? Because
0:31:42 you have bands that go and sample other bands, right? So like, you know, Run-DMC
0:31:48 goes and samples Aerosmith for, you know, “Walk This Way,” right? You get stuff
0:31:52 like that. And now you’ve got multiple artists in the mix and, you know, it’s
0:31:58 just, the waters are really muddy. And, you know, I feel like I’ve beaten
0:32:01 this horse to death. I just, I think you and I are both on the same page here of
0:32:05 like, yeah, like we understand why copyright exists. We understand that
0:32:09 creators need to get paid for what they’re creating. It just needs to be
0:32:15 rethought somehow. A lot of things, like, you know,
0:32:18 I’m pro-capitalism, but like the whole system is probably going to have to be
0:32:23 rethought at some point. Like a lot of things stop making sense in the next
0:32:27 10 years as things become more and more abundant, you know, and there’s less
0:32:33 scarcity, especially like with robots, like you combine AI and robots and a lot
0:32:38 of things have to change, like a lot of things. So I’m excited about that.
0:32:42 I’m excited about the robot K-pop bands, right? You get, you get like five
0:32:46 robots that can all sing and you put them on, you teach them dance moves, you put
0:32:50 them on stage and now now people are going to go watch these and then they
0:32:54 can just like clone those robots, and then it’s like the Blue Man Group, right?
0:32:58 Like the Blue Man Group can do multiple tours at the same time because
0:33:02 it doesn’t have to be the same Blue Men at every single show, right? Like is that
0:33:06 the future of like music entertainment? We’re going to see like K-pop robots
0:33:10 singing on stage, but they could be doing multiple tours at the same time.
0:33:13 Yeah, there used to be this thing in Tokyo. There was like a robot show that
0:33:16 you’d go to and I think it was just like girls dressed up like robots or
0:33:19 something and they may have had like one or two real robots that did some small
0:33:22 moves or something. Unfortunately, they stopped doing that but
0:33:26 that used to be a huge tourist attraction. Yeah, I think in the future
0:33:30 people are going to not have to work as many hours because these tools are just
0:33:34 going to make people so much more efficient and you combine that with all these new
0:33:36 technologies. Yeah, we’re going to have some really
0:33:41 amazing live experiences and yeah, music and robots and
0:33:44 everything you can imagine. It’s going to be, you know... I love how people are so
0:33:46 scared of this, and I’m like, imagine where we’re going to be in 10 years.
0:33:49 It’s going to be fun. Like the world’s going to look way different than now.
0:33:53 Stuff out of movies is going to become real. You know, it’s a very exciting
0:33:57 time to be alive. Like at least for me, you know, it’s hard to get me excited
0:34:00 about things like regular day-to-day things. Right.
0:34:04 I find it pretty mundane and so I’m like, yeah, for the world to change more,
0:34:05 that sounds great. Yeah.
0:34:09 Things will, you know, I’ll be excited to wake up every day. That’s awesome.
0:34:13 Yeah, yeah. I think, yeah, it’s going to be exciting. It’s going to be fun.
0:34:19 I’m loving all the latest AI video, AI audio, AI image tech. I love seeing it
0:34:25 progress, but at the same time, you know, I still love real art. Like I still love
0:34:29 going to shows and watching bands play in concert. You know, I still love making
0:34:35 my own music with a real guitar and, you know, and actually playing something
0:34:39 that I’m proud of. You know, I like looking at art that I know was painted
0:34:44 by hand with oil paints, or going to the theater
0:34:49 and watching a movie that I know took, you know, two years to produce. To me,
0:34:53 I don’t really see AI eliminating that stuff, which is what I feel like most
0:34:59 people are scared of. I think more likely we’re going to see AI sort of cut down
0:35:04 the process of, you know, maybe some of the small B-roll they use in videos or
0:35:10 to like fill in the backgrounds of videos with like fake actors, right? Like I
0:35:12 think the people that are really in trouble in Hollywood are probably like
0:35:17 the extras. If I’m being honest, right? Like yeah, you have like a scene with a
0:35:22 big crowd. Well, with AI now, I mean really just with visual effects in
0:35:26 general. This doesn’t require AI, but with visual effects in general, you know,
0:35:29 you can have just that front row of people be real people and then everybody
0:35:34 behind them all be generated with AI. You don’t need to fill in with extras,
0:35:37 right? So I think that’s probably going to be the most affected group in
0:35:42 Hollywood. But overall, I think we’re going to see some big change. I have no
0:35:45 clue what it’s going to look like, but I think it’s going to be fun to continue
0:35:50 to have conversations about it. Yeah, I kind of think it may be more
0:35:53 extreme than what you just said. I mean, I kind of think that they may replace
0:35:56 all actors at some point and the human stuff becomes more the niche product.
0:35:59 But I don’t know, right? Like it may be like, you know, okay, yeah,
0:36:03 people read books, but how many more people watch movies and like the AI stuff
0:36:06 might become more like the movies and the human stuff, maybe more like the books
0:36:11 where, yeah, people, some people enjoy that, but a lot of other people don’t
0:36:17 care. So I agree and I disagree. I agree that I think they will be able to make
0:36:22 full movies without actors. It’ll be like they can AI generate it, but I think
0:36:26 it’s going to be like a genre, right? I think you look at like Disney movies,
0:36:29 right? For the longest time, you had all of the Disney movies that were drawn
0:36:33 by hand and animated the old fashioned way and then Pixar came along and then
0:36:38 we got this like 3D style of movies. Well, Disney didn’t like ditch the old
0:36:42 style of movies and only make the 3D Pixar style movies, right? They still
0:36:47 made Frozen and Moana and all these other movies long after Pixar came out.
0:36:51 I think it’s just going to be a different style of movie. I think people
0:36:57 might go to like creators will make movies and it’ll be a big deal with the
0:37:02 fact that they used like AI for it and they’ll be like its own genre of like
0:37:06 movies that used AI actors, but I still think people are going to want to go
0:37:11 and see talented actors act out their craft. I still think that’s going to
0:37:15 always exist. I don’t think AI is ever going to completely replace it to a
0:37:19 point where Hollywood is only making AI generated stuff. I just I don’t see
0:37:24 that happening. I think humans like watching other humans too much.
0:37:27 Yeah, I agree. I just don’t know what piece of the pie that’s going to be like. I
0:37:29 don’t know if it’s going to be like, yeah, they want to see humans, but how many
0:37:33 people is that? Like is it like 5% of the market wants to see humans or is it
0:37:38 like 90%? I’m not sure yet. Yeah, yeah. Yeah, which is why I
0:37:42 think it’ll be like it’s its own genre. I think you got people that will just
0:37:46 refuse to see it. Like I don’t really go and watch rom-coms in the theater, right?
0:37:50 But that doesn’t mean there’s not a market for them, right? So I just
0:37:54 think it’ll find its own market. I mean, right now they’ve already done
0:37:59 like screenings of AI films in theaters and stuff. And to be honest, I love AI.
0:38:03 I don’t really have any desire to go and sit through a fully AI generated movie
0:38:08 right now. I just don’t. The tech isn’t good enough for me to be that
0:38:11 excited about sitting through that. You know, show me something that’s really
0:38:14 impressive in two or three minutes and I’m good. I don’t need to sit through an
0:38:19 hour and a half movie. Yeah. Yeah, I’m really excited for the idea of like,
0:38:22 you know, movies where it’s almost like when I was young, I would read those
0:38:26 books, you know, where you can like make choices, you know? Yeah, yeah, choose your
0:38:28 own adventure stuff. Did you see that? Yeah, and Netflix did that with,
0:38:32 what was it, Bandersnatch or whatever it was called, which was a cool experiment.
0:38:35 I’m sure it was really hard for them to do. I’m like, I’m sure the cost to produce
0:38:39 that was quite high. And that’s probably why they didn’t continue doing it.
0:38:42 But like with AI, you’re going to be able to do that kind of stuff. I think
0:38:44 that’s going to be a huge genre is like you’re watching the movie and it’s like,
0:38:49 oh, something just happened. And oh, yeah, I want to do this. And now it generates it.
0:38:51 And when it gets to the point where it’s actually the quality is good enough where
0:38:56 it’s like, OK, it’s 99% as good as a Hollywood movie. That’s going to be so fun.
0:39:00 Like, oh, yeah, with the character to go, I want him to go pick up a, you know,
0:39:04 bottle on the bar and smash it or whatever crazy thing I want to see happen.
0:39:09 Just to be able to like say that out and then it happens. That’s going to be so fun.
0:39:14 Yeah, it is. And the cool thing about that, that I think the movie studios will absolutely
0:39:20 love is the replay value of that content is huge because every single time you watch that film,
0:39:24 it’s going to be different, right? Like that’s where I think gaming is going to. You know,
0:39:27 we’ve talked about this in the past, too. I think all the dialogue in gaming
0:39:31 eventually is going to be generative. They’re going to have guidelines they need to stay
0:39:35 within so they don’t sort of spoil the rest of the game or anything for you, right? You can’t
0:39:39 go to a character and say, Hey, how does the game end? And it just tells you because it’s
0:39:43 trained in the LLM, right? Like it’s got to have some sort of guidelines, but I think
0:39:49 gaming and the sort of choose your own adventure content on Netflix. I think both of those kinds
0:39:53 of things are inevitable because for the studios that create it, it just like
0:39:58 infinitely cranks up the replay value, the rewatch value of that content.
0:40:02 Yeah. Yeah. I think of it like, I’ve mentioned it before, but like Baldur’s Gate 3,
0:40:07 massive, you know, world with like a huge story and the characters are super interesting and you
0:40:11 can make all these different choices. But in reality, the story is kind of mediocre. It’s
0:40:14 like, it’s not great. Like some of the Dungeons and Dragons stories are like, they’re okay.
0:40:20 Like the world’s awesome. And so I’m like, for sure, AI can probably do as good of a job on
0:40:24 the story. And if you could just create a new world every time, like being able to type that in
0:40:28 and it just produces all that, that is going to be, people are going to be very addicted to these
0:40:31 things. Yeah. Yeah. Well, I mean, you already have so many games right now that are already,
0:40:38 you know, procedurally generated, right? Where the story doesn’t really revolve around the world
0:40:42 that you’re in because the world’s different every time, right? Minecraft, Valheim, Fortnite,
0:40:46 like some of the most popular games in the world, are procedurally generated, where every time you get
0:40:51 dropped into a level, it’s, you know, following a set of guidelines, but that level is a completely
0:40:56 different level that most likely nobody else has seen before, you know. And that
0:41:00 is what increases the replay value of a lot of these open world survival
0:41:05 games: every time you play it, it’s a totally different game than the last time you played it.
0:41:10 AI just amplifies that in my opinion. Yeah, big time. So yeah, it’s going to be really exciting
0:41:15 times. We’re both excited to see how it plays out and we’re going to keep on making videos and
0:41:22 podcasts and sharing the journey and showing what we’re finding. So make sure that you like this
0:41:26 video. If you found it helpful, subscribe to this channel if you aren’t already because we have some
0:41:32 amazing guests coming up and a lot more fun, interesting discussions like this. And once
0:41:37 again, thank you so much for tuning into the Next Wave podcast, but we will see you in the next episode.
0:41:47 [Music]
0:41:57 [Music]
Episode 13: What impact will AI-generated content have on the entertainment industry? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive into this topic, envisioning a future where AI generates interactive movies and complex gaming worlds with infinite replay value.
In this episode, Matt and Nathan explore the potential of AI video tools such as Sora, Luma’s Dream Machine, and Runway’s Gen-3. They discuss how these advancements could democratize video creation, enhance B-roll, and expand creative possibilities, as well as the implications for copyright laws, gaming, and traditional creative industries. They also touch on George Lucas’ views on technological progress, Ashton Kutcher’s controversial support for AI, and the role of indie game developers in a rapidly evolving landscape.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) Sora is the most anticipated AI video model.
- (03:39) AI video tools improve, but have quirks.
- (09:23) Runway Gen 3 is fast, unlike Luma’s Dream Machine.
- (12:04) Excitedly exploring and creating with new AI.
- (14:48) Custom Midjourney models personalize prompts, raise concerns.
- (17:28) George Lucas acknowledges inevitability of AI development.
- (21:40) Copyright law impacting AI and technological innovation.
- (25:31) Uncertainty around copyright’s evolution in the new world.
- (27:28) TikTok boosted exposure for music artists.
- (32:32) Excited about AI tech but still loves art.
- (33:18) AI may replace video extras, changing Hollywood.
- (38:52) Procedural generation and AI enhance game replayability.
—
Mentions:
- Sora: https://sora.aitubo.ai/
- Runway Gen-3 Alpha: https://runwayml.com/blog/introducing-gen-3-alpha/
- Ashton Kutcher’s Support for AI: https://variety.com/2024/film/news/ashton-kutcher-ai-movies-sora-hollywood-1236027196/
- Luma Dream Machine: https://lumalabs.ai/dream-machine
- Midjourney: https://www.midjourney.com/home
- Augmented World Expo: https://www.awexr.com/
- Perplexity AI: https://www.perplexity.ai/
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano