Ranking 13 of the Most Popular AI Video Tools (Q4 2024 Tier List)

AI transcript
0:00:04 I’ve listed out Pika, Sora, Hotshot, Veo.
0:00:06 Just incredible to think that none of this is real.
0:00:09 Like I showed some of the videos to my son yesterday, and he was like,
0:00:10 that’s not real.
0:00:14 My benchmark up until now was I like to generate various monkeys on roller skates.
0:00:19 I don’t know why, but it passes my monkey on roller skate test pretty well.
0:00:20 This one is definitely real.
0:00:21 Nobody will ever know.
0:00:26 Hey, welcome to the Next Wave podcast.
0:00:27 I’m Matt Wolf.
0:00:29 I’m here with Nathan Lanz.
0:00:32 And today we’re going to go down the AI video rabbit hole.
0:00:34 We’ve recently seen Sora.
0:00:35 We’ve seen Veo.
0:00:40 We’ve had access to Pika and Luma and all sorts of AI video tools.
0:00:43 Half of them, Nathan and I have never even touched.
0:00:47 So we decided, let’s bring on a guest who has played with all of them.
0:00:51 Put them all through the motions, knows the various differences between them.
0:00:55 And today we’ve got Tim from one of my favorite YouTube channels,
0:01:01 Theoretically Media on the show, and we’re going to go down this AI video rabbit hole.
0:01:05 Look, if you’re curious about custom
0:01:08 GPTs or are a pro that’s looking to up your game, listen up.
0:01:13 I’ve actually built custom GPTs that help me do the research and planning
0:01:19 for my YouTube videos and building my own custom GPTs has truly given me a huge advantage.
0:01:21 Well, I want to do the same for you.
0:01:25 HubSpot has just dropped a full guide on how you can create your own custom GPT.
0:01:27 And they’ve taken the guesswork out of it.
0:01:31 They’ve included templates and a step-by-step guide to design and implement
0:01:35 custom models so you can focus on the best part, actually building it.
0:01:38 If you want it, you can get it at the link in the description below.
0:01:40 Now, back to the show.
0:01:44 So let’s start with Pika.
0:01:49 In my mind, Pika, where I think it really shines is like some of the new effects
0:01:54 that they’ve put into it, like the ability to like cut like a cake or to like squish it
0:01:58 or like the hydraulic press, those are kind of like fun for like memes.
0:02:01 And then they also — it is Pika, right? —
0:02:06 that just released that new feature where you can like put your own face into videos
0:02:09 and then put yourself in like a certain sweater.
0:02:11 I haven’t really played with those features yet.
0:02:16 But what do you feel are, like, Pika’s winning features?
0:02:18 What would you go to Pika for?
0:02:23 I think that lately they do seem to be kind of moving into, yeah,
0:02:27 that squish it, cut it, cake it, explode it.
0:02:31 It feels a lot more like they’re aiming
0:02:34 for like social, kind of like fun tools.
0:02:38 But I know that, you know, you can upload recipes, essentially,
0:02:40 of like a person, a place and a thing.
0:02:41 And then, you know, stamp them all together.
0:02:44 And then that’ll create, you know, a video.
0:02:46 But my understanding is that it’s sort of on the template side.
0:02:51 So once again, this is sort of falling into the area of like, you know,
0:02:54 just cool, quick videos to share on social.
0:02:59 Yeah. So with Pika, I really think that they’ve got like this sort of meme
0:03:04 thing down, but when I was trying to play with the newest features in there,
0:03:06 the ones where you can sort of add your own face and blend it
0:03:08 with other styles and stuff like that.
0:03:12 I was actually messing with it on a live stream earlier in the week.
0:03:16 And I could not get it to generate what I wanted to.
0:03:19 I was using like two images and then somebody’s like, Oh, it doesn’t work
0:03:21 unless you actually give it a text prompt as well.
0:03:23 So I was starting to blend it with text prompts.
0:03:26 And even then it was still giving me like really funky results.
0:03:28 And I couldn’t actually get it to generate what I wanted.
0:03:32 I feel like Pika is one of those tools where I have to re-roll
0:03:35 so many times before I finally get what I want.
0:03:38 But eventually I get there.
0:03:41 Yeah, I think they were the first one I saw that did like cartoon faces
0:03:43 really well, like even like the expressions in the eyes.
0:03:45 I think they were the first ones that kind of, you know,
0:03:46 didn’t perfectly nail it.
0:03:49 But like at that point, it was way better than Runway.
0:03:52 But since they came out, like all the other models that came out
0:03:56 and just visually they’re all more impressive, I think,
0:03:58 like most of the top models now.
0:03:59 So I don’t know where they go.
0:04:03 Like it almost feels like Pika should be some kind of new app or something.
0:04:07 B or C, just because I struggle to see a lot
0:04:10 of the real practical use cases outside of meming.
0:04:12 I just don’t know whether it should fit in B or C.
0:04:15 I don’t think it’s D tier because I think it’s, like, an OG.
0:04:18 They were one of the earlier ones, them and Runway.
0:04:23 They were kind of like the two big OGs that were putting out these video tools.
0:04:27 And I do find some fun usefulness in it in that like meme thing.
0:04:30 I’ve also used it like the end of the videos where I’ll be like,
0:04:32 all right, guys, see you in the next one.
0:04:35 And then like the thing crushes me at the end, you know.
0:04:38 So like there’s some fun stuff that you can do with it.
0:04:40 I’m kind of leading towards B tier on that one.
0:04:45 Look, every video platform is like one update away
0:04:47 from just becoming the greatest platform in the world.
0:04:49 So, you know, that’s yeah, yeah, yeah.
0:04:51 I mean, the last time we did a tier list,
0:04:54 we we definitely said some stuff to like save our butts
0:04:57 so these companies don’t hate us and we’ll do it again on this one.
0:05:01 All of these tools could be D’s right now,
0:05:03 but are like one update away from being A’s.
0:05:06 And I feel that about every single one that we’re going to talk about.
0:05:10 I could see Pika being like a really viral app at some point.
0:05:12 If they keep going down the route they are now
0:05:14 of putting your own self into stuff, maybe it’s not going to be the thing
0:05:17 that like Hollywood uses, but maybe it becomes like
0:05:19 a really popular app with teenagers or something at some point.
0:05:23 So I think I think B makes sense because like right now
0:05:25 it is actually useful for memes and stuff like that.
0:05:27 And like you said, like little parts of videos,
0:05:30 whereas a lot of these, you know, tools, even though they look cool,
0:05:33 they’re not super useful in a practical way right now.
0:05:34 So I’d say B.
0:05:36 Well, I’m going to I’m going to move along here.
0:05:38 The next one I want to talk about is Hotshot,
0:05:44 because Hotshot, in my opinion, is like the D tier of the list for me.
0:05:46 I don’t know what you guys think, if you’ve played with Hotshot yet.
0:05:49 Could we see Hotshot? Because I haven’t seen it.
0:05:52 Yeah, I played with Hotshot, and for me, Hotshot,
0:05:56 I could not get it to generate anything that looked good for me.
0:06:01 And they charge $99 a month for it, which is really high for what it is.
0:06:06 And literally none of the generations I got it to do looked any good at all.
0:06:10 But saying that we can move this around based on what Tim says,
0:06:12 because Tim’s probably played with it a little bit more than I have.
0:06:13 I have, I have.
0:06:17 I do, you know — those guys are bootstrapping.
0:06:21 They’re trying as best they can to get that model up and running.
0:06:24 I do agree with you, though, a lot of the generations out of it
0:06:28 always kind of have this to me, at least have this graininess to them.
0:06:31 Or there’s a lot of morphing inconsistencies with them.
0:06:35 And they were one of the first that kind of did the face-swapping thing as well.
0:06:41 But it always felt like my face kind of came out a little too much on the red side.
0:06:45 Like, you know, the skin tones and color consistency weren’t great.
0:06:46 I don’t know.
0:06:50 I was really struggling to get much out of Hotshot personally.
0:06:53 Well, what model should we talk about next?
0:06:56 I actually have not played a ton with Kling.
0:07:00 But I know everybody swears by how good Kling is.
0:07:01 It can be.
0:07:04 I’ve generated maybe a couple of videos, but I haven’t played with it a ton.
0:07:07 So I’m going to have to lean on you a little bit for how good Kling is.
0:07:12 You know, I think it’s right now kind of leading, at least on the Chinese
0:07:15 models right now. I mean, it’s like neck and neck.
0:07:18 A while back, I did a short film
0:07:21 that was entirely AI-generated, called Dead Sea.
0:07:25 It was a Pirates versus Vampires thing.
0:07:27 It was kind of stupid, but it was fun.
0:07:30 And that was all generated in Kling 1.0.
0:07:34 Nice. So would you put Kling in like A tier S tier?
0:07:37 I mean, I think we’re talking top of the chart here.
0:07:42 So, you know, if we’re talking about what I would put in S tier
0:07:49 personally, I think Veo is it. Like, S tier for me is where —
0:07:51 is where Veo is at.
0:07:53 Maybe another one can make it.
0:07:58 But that’s where I would put it because the one thing that I do like about Veo
0:08:02 is that every time you give it a prompt, it’s going to generate four videos.
0:08:05 So you hear people going, Oh, are these cherry picked?
0:08:06 Well, kind of.
0:08:09 But they built the platform sort of designed for cherry picking.
0:08:12 They gave you four and say, which one’s your favorite of these four?
0:08:17 Right. So like out of the four, almost every single time,
0:08:19 I’m impressed by at least one of the four that it gets.
0:08:21 That is true. You know, I think it.
0:08:22 Yeah, I think it’s I think it is really good.
0:08:27 I mean, again, the model itself has only really been released for a couple of days.
0:08:30 So I think there’s a lot of — that’s a generation that looks really good, man.
0:08:34 Yeah, there is definitely some more work ahead of it.
0:08:37 You know, I’ve got a lot of good generations.
0:08:39 I also do have a lot of funky ones, too.
0:08:42 Like I tried to get it to generate like somebody doing a Rubik’s cube
0:08:45 and I definitely got bad results from it.
0:08:48 Yeah. Yeah, not horrible. Definitely not horrible.
0:08:50 I mean, again, the fingers are not turning to spiders.
0:08:53 Like true. That is a way to address.
0:08:56 Yeah. It really sucks at gymnastics.
0:08:58 Nothing has passed the gymnastics test.
0:09:00 Yeah. Yeah. I found.
0:09:02 That’s the new test after spaghetti.
0:09:04 Yeah, the gymnastics is the new benchmark.
0:09:09 This is probably my favorite generation, this rhino walking — so cool.
0:09:11 Like, the camera’s panning with it.
0:09:15 You know, my benchmark up until now was I like to generate various
0:09:17 like monkeys on roller skates. I don’t know why.
0:09:20 But it passes my monkey on roller skate test pretty well.
0:09:23 There’s one of the monkeys on roller skates that I got.
0:09:25 Here’s another monkey on roller skates.
0:09:28 This is the one that I shared on X that I thought was really good.
0:09:33 I got it to do like a first person looking out of a jet fighter
0:09:35 and it’s just another incredible thing that none of this is real.
0:09:38 Like I showed some of the videos to my son yesterday from Veo,
0:09:40 and he was like, that’s not real.
0:09:42 Like he’s like, oh, that one’s not real. That one’s not real.
0:09:44 And then there was one with like an animal or something.
0:09:48 He’s like, that’s not real. No, it’s not real. Disappointed.
0:09:49 Yeah. Let’s see.
0:09:53 One of the ones that really did not work out well was I tried to tell it
0:09:57 to make a guy playing with his dog with dolphins jumping in the background.
0:09:59 And that’s like barely even a video.
0:10:02 Here’s another one that it did.
0:10:05 And the dolphins are just floating there in the background.
0:10:09 So it’s not always like the best thing you’ve ever seen, right?
0:10:12 Like there’s definitely some issues with some of them.
0:10:15 But again, every single time it gives you four
0:10:18 and like one of the four is usually pretty good.
0:10:23 Were you doing, was that a lot of the image to video or is that more text to video?
0:10:25 That was all text to video.
0:10:28 Text to video. Yeah, I do find that it tends to generate a lot less wonk
0:10:31 when you’re doing text to video as opposed to image.
0:10:36 And one thing about Veo, too, is when you do image to video,
0:10:38 you can’t upload your own image.
0:10:41 You have to generate an image with their image generator.
0:10:45 Yes. And then you can turn the image that they generated for you into a video.
0:10:49 So I think it’s using their Imagen 3, which looks good.
0:10:52 I got to say, that’s actually I can’t really complain about like that.
0:10:57 Imagen 3 has taken a pretty big jump up from Imagen 2.
0:10:59 It actually looks pretty solid in all honesty.
0:11:00 Yeah, it is good.
0:11:02 And it does people really well. Yeah.
0:11:03 I haven’t been able to try this yet.
0:11:07 So like in terms of a product, like how do you, how does it compare to like,
0:11:10 you know, Runway’s got a pretty great editor tool.
0:11:12 Sora now has a pretty awesome storyboard.
0:11:14 It’s definitely still a beta product.
0:11:17 It’s definitely still a beta product, you know, like Tim mentioned,
0:11:20 it does not seem to have like any sort of library.
0:11:23 So anything you generate, if you refresh your page
0:11:25 and you didn’t download those videos, those generations are gone.
0:11:27 You can’t find them again.
0:11:29 It doesn’t really have any editing features yet.
0:11:33 It doesn’t have the ability to upload your own images and convert them to videos.
0:11:36 It’s definitely, it’s definitely beta.
0:11:39 Yeah, that’s where I have mixed feelings on, like, the S tier,
0:11:41 because, you know, Google’s done this so many times
0:11:42 where like they have amazing technologies.
0:11:45 I know, you know, you call me cynical the other day, but like I kind of am about
0:11:49 Google, I know I am, but they haven’t really launched great products though.
0:11:51 So even though they have great technology, that doesn’t mean it will
0:11:54 turn into a consumer product that people actually use.
0:11:56 Yeah, no, I agree.
0:12:00 Like, I think that’s where the debate would live for Veo, right?
0:12:03 It’s like, as far as the video generation technology goes,
0:12:05 I think it’s the best right now.
0:12:07 I think that’s the state of the art right now.
0:12:11 As far as the fact that most people don’t have access to it yet.
0:12:14 Who knows when more people will get access to it.
0:12:15 Who knows what they’re going to charge for it.
0:12:18 Who knows if they’re going to build a good UI around it.
0:12:22 There’s still a lot of questions around it, which, you know, I, I can see the
0:12:25 arguments for knocking it down a tier because of those reasons.
0:12:26 Yeah, yeah.
0:12:29 I mean, Sora has like a, you know, great domain, Sora.com.
0:12:31 Sexy landing page looks beautiful.
0:12:32 It’s got the storyboard feature.
0:12:36 I mean, those are all, you know, those matter for like regular people.
0:12:39 So yeah, but some of the generations are definitely looking really, really good
0:12:44 out of it. As far as their overall plans, I mean, I think that we will definitely,
0:12:46 I think we’re going to get a definite release out of it.
0:12:50 There’s already been talk about YouTube integration with it.
0:12:54 And then Matt, I don’t know if you saw as well, but did a little banner
0:13:01 on your channel pop up that asked if you will allow for your videos to be used as training data?
0:13:02 It might have.
0:13:04 I might have closed it real quick and didn’t pay any attention to it.
0:13:09 But I knew, I know they have that feature just rolled out where you can decide
0:13:11 whether or not you want to allow your channel to be trained on or not.
0:13:13 Which I figure, I mean, it’s all fair game.
0:13:15 I mean, I will turn it on.
0:13:19 I will, I mean, like, look, who wants to prompt a guy, you know,
0:13:22 in a studio talking to a camera with a bunch of like Midjourney images
0:13:26 and AI video? You’re going to train off of AI video, which, you know, whatever.
0:13:29 You know, but I just, I feel like they had to have already trained on YouTube video.
0:13:32 Oh, no, yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah.
0:13:36 This is like now like covering their asses after the fact, like, hey, do you opt into this?
0:13:40 No matter when we did it.
0:13:41 I have been desperately trying.
0:13:44 That’s another one of the tests that I really
0:13:46 love to do, which is to try to prompt me.
0:13:50 So I, you know, I’ve run that into ChatGPT a number of times.
0:13:54 Just a screenshot of me and say, look, describe everything that you see here
0:13:56 and make me a video or image prompt out of it.
0:13:59 And I’m constantly writing that just to see, like, how close it gets.
0:14:02 So far, I haven’t gotten me, but I’m also not Marques Brownlee.
0:14:04 So I’m not that level of famous.
0:14:07 So, yeah, I do think they will roll it out.
0:14:11 I bet you it’ll roll out, like, into Gemini or something where you just sort
0:14:14 of can generate videos in line with your chat and things like that.
0:14:17 Kind of like DALL-E in ChatGPT.
0:14:20 Yeah, yeah, I think, I think that’s probably where they’ll go with it.
0:14:26 But I do want to keep on moving along here because we’re about a third
0:14:28 of the way through these tools here.
0:14:30 Okay, they still get an S.
0:14:32 They still get an S because it’s like video wise, it’s the best quality.
0:14:35 So yeah, I’ll leave them an S just for that reason.
0:14:40 But moving on, let’s talk about Sora, because Sora is another big one.
0:14:45 Sort of, yeah, I mean, but let’s just go ahead and get that one over with
0:14:47 because I know we probably have a lot to say about that one.
0:14:51 The thing about Sora — and here’s where I’m struggling to place Sora.
0:14:55 What we have access to right now is Sora Turbo, right?
0:14:59 And Sora, Sora Turbo gets us 20 second generations.
0:15:02 They’ve got some additional frame interpolation built into it so that
0:15:06 they don’t have to generate as many frames, but it also makes it a little
0:15:09 less smooth with how the videos come out.
0:15:13 But then you also have like Sora regular, which is like some of the demos
0:15:16 we saw nine months ago or whatever.
0:15:18 And those look a hell of a lot better than what we’re getting out of
0:15:20 Sora Turbo.
0:15:22 So it’s like, how do we rank it?
0:15:24 Are we ranking it based on what we have access to now?
0:15:25 I think you got to rate, yeah.
0:15:26 What we’ve seen from Sora.
0:15:31 I think that you have to rank it based on what was released and what is available.
0:15:38 And to that, I mean, like, listen, I know that there’s a large segment
0:15:42 of the population of the AI video generation
0:15:45 community where it kind of landed with a wet fart, you know, like
0:15:47 people were not stoked about it.
0:15:51 But I also think that there is still something there.
0:15:53 Is it worth $200 a month for most people?
0:15:54 Probably not.
0:15:57 It probably is one of those things where everybody came in with their
0:15:59 standard sort of prompt structure, tried it out.
0:16:02 It didn’t work or it was just like weird, wonky results.
0:16:06 I think that we just have to spend more time digging into it and figuring
0:16:07 out what it’s really good at.
0:16:13 The video aspect, the remix and the blend and the fact
0:16:15 that you can cut, there’s a lot of power in that.
0:16:20 Probably those are features that we haven’t seen in any other video model.
0:16:22 So, yeah, there is really good stuff in there.
0:16:26 Again, I know the controversy side of it is just like you can’t generate
0:16:28 people on the on the $20 plan.
0:16:32 Yeah, there’s a lot of you can’t you can’t you can’t unless you pay $200 to.
0:16:36 I have not gotten anything that I’m super impressed with out of Sora.
0:16:40 If I’m being honest, you know, here’s my monkey on roller skates.
0:16:46 That was probably the best prompt that I got out of Sora.
0:16:51 But then my Veo prompt with a monkey on roller skates, like, blew this
0:16:52 one out of the water.
0:16:56 Yeah, I asked it to do a wolf howling at the moon and it literally
0:17:00 just gave me like images of wolves howling at the moon, not even a video.
0:17:04 And then I started to learn, all right, you need to give it more detail
0:17:06 in your prompts if you really want to get something exciting out of it.
0:17:11 So when I started, you know, putting a lot more detail into these prompts,
0:17:15 like you could see how long of a prompt that is on my screen.
0:17:18 The video started coming out quite a bit better.
0:17:22 But still, I don’t think they’re on par with what we’re getting out of Veo,
0:17:24 if I’m being honest.
0:17:25 Yeah, no, I would agree with that.
0:17:28 And that I think definitely comes down to training data.
0:17:34 I mean, obviously Google likely has access to the entirety of YouTube.
0:17:41 And, you know, so, yes — well, if you believe Mira Murati, they did not have
0:17:46 access to all of YouTube. But yeah, or, you know, Turbo might also —
0:17:47 that’s actually kind of cool —
0:17:50 Turbo actually might not have as much of that data in there.
0:17:55 I think the product itself, though, is probably the best looking AI video
0:17:57 product, though, in terms of like the user interface.
0:17:59 100%, like how they present it.
0:18:04 Yeah, there is definitely a very Midjourney-esque aspect to when you
0:18:06 first log in and see that sort of endless scroll.
0:18:10 But yeah, in that, you know, again, the remix, just all of the little
0:18:13 pulldowns, the simplicity of the prompt, like, no, it looks great.
0:18:18 I think, again, this is one of those things where I always try to be
0:18:23 a little gracious with them when they release a V1 product.
0:18:27 And, you know, like, essentially we’re beta testing their V1 right now.
0:18:30 By the time this hits, like, well, you know, again, remember how weird
0:18:33 and wonky Midjourney V1 looked compared to where it looks now.
0:18:37 So where, what will Sora 6.1 or 7 look like?
0:18:39 That’s, you know, the spider one is cool.
0:18:43 Yeah, I pretty much did all of the exact same prompts with Sora that I did
0:18:46 with, like, Veo, because I wanted to compare it with those.
0:18:47 Yeah.
0:18:52 And for the most part, like, Veo won on every single generation.
0:18:52 Yeah.
0:18:58 So I can’t put what we currently have from Sora in S tier.
0:19:02 Maybe what I’ve seen from Sora, some of the videos they gave us six
0:19:06 months ago, nine months ago, some of those might fit in S tier, but what
0:19:12 we’ve got now, I think what it’s got going for it is the UI is really cool.
0:19:14 The storyboard feature is really, really cool.
0:19:19 I like that a lot, but at the end of the day, it’s like most of my generations.
0:19:22 I don’t feel like I would actually use as, like, a B roll over one of my
0:19:24 other videos or something like that.
0:19:26 I still feel like it’s, like, looks like the second best.
0:19:29 I don’t know, maybe like the top of A or something like that.
0:19:30 Well, I haven’t used Kling a whole lot.
0:19:36 So it’s like, how would you, like, if you were ranking Veo, Kling and Sora,
0:19:38 like, how would you rate those?
0:19:43 If I was to do all three right now for my use cases and for sort of, like,
0:19:48 the, the pack that I travel with, it would actually probably be Kling on
0:19:49 the S tier between those three.
0:19:53 Well, probably Kling on the S tier, only because you can generate images
0:19:58 from outside of the platform.
0:20:01 So with Veo, you have to generate
0:20:06 in Imagen, whereas in Kling, they’re like, you can generate in Stable Diffusion.
0:20:07 You can generate in Flux.
0:20:09 You can generate in Midjourney.
0:20:09 They don’t care.
0:20:13 It’s just that you bring it in and from there you have motion brushing
0:20:15 and all of your other various tools.
0:20:20 So there’s a lot more control of what you’re choosing to input in with Kling
0:20:22 than there is with, you know, Veo right now.
0:20:26 Or, well, Sora, you can actually bring your own
0:20:27 images into as well.
0:20:30 I’m noticing it’s still going to take a little bit more testing, but they tend
0:20:33 to begin with your initial image first, and then they kind of wander off
0:20:34 into their own world.
0:20:37 Like the first frame will be like, you know, oh, this is exactly what
0:20:40 you’re looking for, but I’m going to do something complete, not completely
0:20:43 different, like something that’s based off of what I see here, but it’s not
0:20:45 going to be, that’s not going to be the first frame.
0:20:47 I’m not going to continue on with that.
0:20:52 Which I mean, that may be just the new way of generating a video.
0:20:58 But in some ways that the video, those like this, this wave of video models
0:21:02 is kind of saying like, listen, we’re the director and we’re the production team
0:21:06 here, like you’re writing this, but we’re in charge of actually making the thing.
0:21:09 So it’s almost like, you know, you’re sort of like a producer or a writer.
0:21:12 And, you know, you’re handing out a script to a production company in
0:21:14 like Bulgaria, and yet you don’t know what’s going on.
0:21:17 They’re just sending you back stuff and you’re like, well, this isn’t exactly
0:21:19 what I asked for, but I guess it kind of works.
0:21:23 So we’ll see, we’ll see, you know. And the other thing that I would
0:21:27 definitely say on the Sora side is the prompting part of it.
0:21:30 It is weird because I do think, again, like Matt, like you said, you need
0:21:34 those longer prompts, whereas, as we have seen with — well, we’ll
0:21:38 look at Minimax later on — and with Kling, there are sort of those AI enhancement,
0:21:42 you know, like tools that are built in there that you can do something like a
0:21:43 monkey riding a skateboard.
0:21:47 And then the model will fill in a lot of the additional details that it
0:21:48 needs to generate the image.
0:21:52 It is so weird that OpenAI did not put that in.
0:21:54 Like, why isn’t ChatGPT right in there?
0:21:55 So I don’t know.
0:21:58 Yeah, Sora Turbo feels kind of like rushed out.
0:22:01 Like, I think they were pressured because like the public, you know,
0:22:03 consensus was like, what the hell is going on with Sora?
0:22:04 It looks so cool and we haven’t seen anything.
0:22:06 They don’t ship anything.
0:22:09 So then they had to get like a cost-efficient model out as quick as
0:22:13 possible. And it did leak like a week earlier, and enough people managed
0:22:15 to get access to it a little bit early.
0:22:19 And then they had their 12 days of Christmas and it was like, all right,
0:22:22 well, I guess this is probably the opportunity to push it out.
0:22:23 But yeah, for sure.
0:22:26 Well, you mentioned Hailuo AI.
0:22:29 Let’s, like — is it MinMax, Minimax?
0:22:30 I always just call it Minimax.
0:22:32 Like, it’s just because — Hailuo,
0:22:36 I think, again, is the actual name, and then Minimax, I think,
0:22:39 is the name of the video model.
0:22:41 So yeah, I got you.
0:22:42 Yeah.
0:22:46 So I always call it Hailuo, but I always just do it almost
0:22:48 to like make fun of it when I make videos, right?
0:22:51 Like I just sort of screw up the name on purpose because I think it’s funny.
0:22:56 But so where would you rank Minmax or Minimax?
0:22:58 Like that’s one I have not played with yet at all.
0:23:00 Oh, really?
0:23:04 Literally, my entire audience would come at me with Pitchforks
0:23:06 if I did not say that was S tier.
0:23:10 Minimax is definitely the one that I think is like current crowd favorite.
0:23:12 Everybody loves it.
0:23:12 I love it too.
0:23:13 It’s a great model.
0:23:15 Would you put it above Veo?
0:23:17 Yeah, currently.
0:23:17 Yeah, for sure.
0:23:18 Really?
0:23:20 So see, I haven’t played with Minimax,
0:23:24 but like all the examples I’ve seen, like all the demos, like, I don’t know,
0:23:26 like Veo seems like way better than Minimax.
0:23:30 Yeah, maybe. Well, again, with Minimax, I think that the trick —
0:23:32 You can use external images and stuff like that.
0:23:33 Yeah, it’s the control.
0:23:39 Like, you know, there’s so much control in there and prompt adherence.
0:23:41 Yeah, I mean, they’re not as worried about being sued.
0:23:44 Like, definitely, that’s a problem that Google and OpenAI have.
0:23:47 Is that, you know, making sure they’re not being sued.
0:23:51 You can’t just like copy people’s faces and make entire films out of it.
0:23:54 So, yeah, these were I was running.
0:23:59 Yeah. So, you know, bad tests here, but I was running the the Sora Viking thing,
0:24:02 trying to get that kind of look out of it.
0:24:05 And again, I mean, it’s like it’s, you know, compared to that Sora leak demo
0:24:09 that we saw earlier, yeah, of course, it’s not going to be as great.
0:24:12 But like that looks pretty smooth and good.
0:24:15 The other thing that I think it’s really good at, I just did a video not too long
0:24:18 ago, it was about an interview.
0:24:21 So it was this guy and this guy.
0:24:23 So it was a job interview.
0:24:24 So this was our interviewer.
0:24:29 And then this was, yeah, this was a hitman that he was hiring or that
0:24:33 was going out for a job interview and, you know, the running joke.
0:24:35 There’s like, what kind of experience do you have?
0:24:38 And he’s like, well, I’m very patient, you know, I guess for hours.
0:24:45 So and I used Runways Act one to do all of them and then 11 labs to do all the voices.
0:24:47 So this kind of stuff, it’s really, really good at.
0:24:49 And then we had like various flashbacks.
0:24:51 He’s like, I have an international experience.
0:24:52 My wife was killed in Thailand.
0:24:56 Like, we meant to run away from this life together.
0:24:58 But then the triads killed her.
0:25:04 You know, like, so this is the kind of stuff that I think it does extremely well.
0:25:09 Like, you know, for kind of doing these short clips, like, I mean,
0:25:12 like, yeah, Minimax is really on point.
0:25:15 It really does a great job of understanding exactly what you’re looking for.
0:25:17 A lot of these are running, by the way, no prompt.
0:25:19 Like, I’m just like, I’m not even putting in a prompt.
0:25:21 It just it’s like, oh, you have a city there?
0:25:22 Yeah, I know you kind of want to do a thing.
0:25:24 No talking minimal head movement.
0:25:27 You know, you don’t have to go crazy with your prompts.
0:25:30 Like, I think the girlfriend here was no prompt.
0:25:32 It knows waves.
0:25:34 It knows, you know, move her around.
0:25:37 Yeah. So this is the kind of stuff that like no prompt, like.
0:25:40 So are you generating images and then pulling them in?
0:25:40 Is that what’s going on?
0:25:42 So this is this is all generated.
0:25:45 The initial images are generated up in Midjourney.
0:25:47 That’s usually how I tend to do it — Midjourney or Flux.
0:25:51 I’ll go back and forth on either of those and then drop them in.
0:25:53 And for the most part, I mean, yeah,
0:25:56 Hailuo will always understand what you’re kind of looking for.
0:25:59 Yeah, I mean, I’m looking at their website right now
0:26:02 and all the ones that are on the Explore page are really impressive.
0:26:08 Like, yeah, like really aesthetically pleasing.
0:26:09 The lighting looks great.
0:26:11 The colors are great.
0:26:12 Like, I don’t know.
0:26:13 I have to actually kind of agree.
0:26:17 Just looking at the Explore page that it’s really impressive.
0:26:19 Yeah, I’m kind of surprised by this.
0:26:22 I had seen it before, but like, this is better than I remember.
0:26:24 And like, again, a lot of the prompts here
0:26:28 that you can see in these demos are very and again,
0:26:30 you can kind of go either way, always, you know, prompt
0:26:32 if you’re not getting the results that you want to.
0:26:33 But a lot of this stuff is very
0:26:39 it almost kind of looks like this prompt here is actually just the image prompt.
0:26:39 You know what I mean?
0:26:43 Like, and then this — like, that was the prompt they ran in Midjourney.
0:26:44 And then they dropped it into here
0:26:48 and just kind of reran the prompt in here to kind of reinforce the idea.
0:26:51 Now, is that S tier or?
0:26:54 Yeah, right now I’ve got that top of S tier, but it’s right next to Veo.
0:26:58 Like, they’re sharing that tier together.
0:27:00 All right, so let’s talk about InVideo.
0:27:03 Have you played around with InVideo much?
0:27:06 And InVideo is one of my sponsors for my other channel.
0:27:11 So it’s hard for me to, like — I’ll put it in D tier.
0:27:17 I think they’ve offered it.
0:27:19 They’ve offered to sponsor me a couple of times as well.
0:27:23 I really haven’t got a not gotten the chance to like I dove into them.
0:27:25 I think early when they first launched
0:27:27 and I really haven’t got a chance to circle back to them.
0:27:29 So yeah, that’s one that I’m not —
0:27:32 yeah, I haven’t played around with that much.
0:27:36 Yeah, so with InVideo, one thing that, like, makes InVideo stand out
0:27:41 is you give it a concept for like a short film, right?
0:27:44 And you can actually say generate a five minute video for me.
0:27:49 And it will actually go and generate all of the scenes for your video
0:27:52 similar to what you can do in like LTX Studio.
0:27:56 But yeah, you can you basically say like I want to make a movie
0:27:59 about a Viking, you know, hitting the shores
0:28:03 and then attacking the people, the locals or something, right?
0:28:04 Yeah. And it will go and generate
0:28:07 like a three minute video of that happening.
0:28:09 And each scene in it is generated.
0:28:14 I don’t know 100 percent if InVideo is using an API
0:28:16 or if it is their own internal model.
0:28:18 I believe it is their own internal model.
0:28:20 It’s pretty good.
0:28:25 What I actually use InVideo for more, and like it even better for,
0:28:27 is that it’s got like stock footage.
0:28:30 And so you can give it a concept for a video
0:28:32 and it will go find stock footage for that video.
0:28:34 I actually think its stock footage
0:28:37 features are really where InVideo shines,
0:28:40 more than its generative features at the moment.
0:28:43 But saying that, you know, they’re only like one update away
0:28:46 from, you know, jumping up the tier list.
0:28:51 But right now, just speaking about like the generative capabilities of it,
0:28:54 I’d have to probably put it in B or C.
0:28:58 But the stock footage feature is really, really, really good.
0:29:01 It’s really good at going and finding stock footage for whatever you give it.
0:29:05 And it’s funny because I remember the video that I did this.
0:29:10 They got me for a sponsored spot a while back when they first launched.
0:29:12 So this is before they were doing generative stuff.
0:29:14 And it was just the stock image stuff.
0:29:17 And of course, me being me, I wanted to push it into weird directions
0:29:18 and see what we could do with it.
0:29:22 So I ended up making — it was a zombie survival guide.
0:29:25 But it was, you know, picking everything from stock footage
0:29:26 and I generated up the script.
0:29:29 It was like a British guy, like how to survive the first 10 days of a zombie
0:29:32 apocalypse, and it’s, you know, very kind of like,
0:29:35 I guess, like, YouTube explainer-y.
0:29:36 But it was, yeah, it was a neat idea.
0:29:42 So you can kind of like weirdly thread narratives out of stock footage as well.
0:29:44 So it’s all just a matter of kind of how you approach it.
0:29:45 I think it did the serviceable job.
0:29:48 But again, I just haven’t gone back and checked it out
0:29:50 from a generative standpoint yet.
0:29:53 Yeah. And it also does like it’ll write the story for you.
0:29:55 It’ll do the voiceover for you.
0:29:59 You can upload your own voice and actually have it use your voice in the video.
0:30:02 You can do all that kind of stuff.
0:30:04 So it’s got some really, really cool features.
0:30:06 But the generative capabilities.
0:30:10 I mean, I don’t think I can put its generative capabilities
0:30:12 on the same level as any of these other ones.
0:30:16 So I think I would probably put it in like the lower B tier right now.
0:30:21 It’s possibly bordering on C tier, but that’s not to say that InVideo’s
0:30:22 not a great product.
0:30:26 It’s just I feel like the generative functionality has a ways to go.
0:30:29 Well, I’ve got to — I just thought about this.
0:30:31 I got a controversial S tier coming up.
0:30:33 A controversial, uh-huh.
0:30:36 Yeah, yeah. Well, let’s jump to the end.
0:30:39 Oh, should we leave that one to the end?
0:30:40 No, it’s whatever you want to do.
0:30:42 It’s fine. All right.
0:30:45 So which one which one would you put into S tier next?
0:30:47 Oh, so this is going to be controversial,
0:30:50 especially coming off of the the NVIDIA one.
0:30:52 I I’m going to put LTX in S tier.
0:30:55 Why? OK, like aside from this,
0:30:58 they did that we both met and I are sponsored by them.
0:31:03 But it’s actually it’s not that.
0:31:05 No one’s putting that money in my pocket right now.
0:31:09 No, I literally because they open source their video model.
0:31:12 That is like, is there any other company here
0:31:15 that open source their own video model?
0:31:17 Not of this list.
0:31:18 I don’t believe so.
0:31:20 Meta didn’t do it, looking through that list.
0:31:23 I don’t believe any of them are open source other than LTX.
0:31:25 No. So I mean, that is that.
0:31:28 Do you think LTX from the generative side of it, though,
0:31:33 do you think LTX generates videos on the same level
0:31:36 as what you’d get out of Minimax or Veo?
0:31:37 Not currently.
0:31:41 But again, the code is out there and people can tweak it
0:31:45 and people can do things with it to the open source community
0:31:48 can bring it up to speeds or to a quality of that level
0:31:50 or perhaps even higher.
0:31:52 They can do they can tinker with it, you know, because again,
0:31:57 now you’ve got a worldwide, you know, network of people
0:32:01 that can now futz with that model to make it do kind of whatever they want.
0:32:03 And it actually does generate extremely fast.
0:32:05 I mean, I think that’s the selling point of the whole thing,
0:32:08 is that it actually generates videos fast. That sounds great in theory,
0:32:12 but it’ll take, like, massive compute to, like, advance these models in the next year.
0:32:14 Yeah, I think it’ll take — open source sounds nice,
0:32:17 but like, how are they going to compete with, like, Sora and Microsoft?
0:32:19 The training data and all of that stuff, and all of that.
0:32:25 I mean, you know, again, I think it’s not necessarily about the
0:32:28 massive ramifications.
0:32:31 But just in the fact that it’s like, you know,
0:32:33 they gave their model away to everyone.
0:32:37 That’s actually, again, that’s just super cool, and giving back to,
0:32:41 you know, a community of developers that ended up bringing, you know,
0:32:43 bringing all of this technology together.
0:32:44 So yeah, I think it’s good.
0:32:47 I think we should try to judge it based on like how useful it would be to people.
0:32:50 I am sort of leaning more towards Nathan’s take on it.
0:32:56 Like, I see why it goes up the chain because of the open source.
0:33:00 But at the same time, if we’re judging, you know,
0:33:05 if I judge OpenAI and Sora based on
0:33:08 like what we’re getting today right now, I feel like we should be judging
0:33:12 LTX on what we get today right now from it. Yeah.
0:33:15 If we were judging the future, I would literally put Sora at the top of the top.
0:33:18 Honestly, it would be my personal take. Right. Right. Right.
0:33:20 Yeah. No, that’s valid.
0:33:22 Whatever you guys landed on tier-wise,
0:33:26 I think it should just bump up one for the fact that, you know,
0:33:29 they — I would have probably put the generative capabilities on par
0:33:31 with what you’d get out of like InVideo or Pika.
0:33:37 But because it’s open source, I say it bumps up a tier and gets an A.
0:33:39 I’ll say, have you been around lately?
0:33:41 Because they’ve actually been doing a lot of improvements in there as well.
0:33:44 The video model does look a lot better, a lot smoother.
0:33:47 I know it was a little bit on the rocky side to start.
0:33:49 They added flux in as their image generator too.
0:33:52 So that tends to look a lot better.
0:33:57 And I think today they kind of put in was it sort of like that advanced
0:34:01 live portrait thing where you could kind of do expression control and everything.
0:34:06 It’s funny talking to those guys that it’s they have such an interesting
0:34:10 problem in that they are kind of a do it all platform.
0:34:13 Right. And as like new technology kind of appears,
0:34:15 they’re kind of like putting it all together.
0:34:17 But it’s not just a matter of just plugging it in.
0:34:19 It’s not like, oh, here’s a module that we just plug right in.
0:34:23 It’s almost like, you know, you’ve got this existing thing.
0:34:26 It’s not like a car where you’re like, oh, I’m going to drop in the — or actually,
0:34:29 you know, a good example is, it’s not a computer where you’re just dropping
0:34:30 in like a new RAM slot into it.
0:34:35 It is more like an 18 ton semi where you’re like trying to drop a new engine
0:34:36 into it, or you know what I mean?
0:34:39 Like a new, like, catalytic converter or something.
0:34:43 So there’s a lot of like playing around to get the whole thing to work together.
0:34:47 But I think they do a pretty solid job of trying to stay up on everything
0:34:49 and kind of creating that all in one platform.
0:34:54 So then they also have LTX Studio, which does similar things to what we talked
0:34:58 about within video, where you can give it an idea for a video, right?
0:35:02 And then with that idea, it will actually generate all of the scenes for you.
0:35:05 And then you could go in and swap out scenes.
0:35:06 It’ll do the voiceovers.
0:35:10 It’ll essentially make like a little short film for you based on just like a prompt,
0:35:15 which I think, you know, that feature sort of bumps it up a little bit as well.
0:35:15 I think where people get sassy with that, too, is that, you know, they kind
0:35:24 of go in there, like maybe a serial killer werewolf meets an alien, you know,
0:35:27 vampire robot story. My exact.
0:35:30 And then, you know, yeah, which I would I would totally watch that sounds like
0:35:32 something I would definitely watch when I was 13 on Cinemax.
0:35:36 And then they get something that is like, it’s kind of there and it’s kind of like
0:35:39 it just in the video, the the images that they pick aren’t great.
0:35:42 Like it is that thing where you’ve got to spend a lot of time in there.
0:35:44 It’s not just like prompt and like, I’m instantly entertained.
0:35:46 It’s like it’s prompt.
0:35:50 And then you got to start going in and you know, changing things on a granular
0:35:53 level. I think that that’s where people are just like, oh, so that good.
0:35:56 And it’s like, no, it is you just have to spend a lot of time working on it.
0:36:00 It’s not going to — you know, you’re still going to have to do the work,
0:36:05 basically. So Luma — we’ve got Luma Dream Machine.
0:36:08 Where would you rank Luma Dream Machine?
0:36:13 I I really like the generations that Luma Dream Machine comes up with.
0:36:18 But I don’t think it’s on par with what we’ve seen from Veo or from
0:36:24 Minimax. I’m curious where you would rank it based on all of your testing.
0:36:27 But the new image model looks really good.
0:36:29 So you can now generate on platforms.
0:36:33 They’re kind of doing something a little bit on the similar side to — it’s not
0:36:36 quite LTX, but kind of more like Leonardo’s Flow State, I guess,
0:36:38 where you can kind of have a bit of a conversation.
0:36:43 It doesn’t quite do, like, that cascading level of images
0:36:44 that keep going down.
0:36:48 But you kind of end up — you remember when DALL-E 3 first came out
0:36:52 and you were, like, supposedly able to chat with your image
0:36:55 and, you know, get it to do things?
0:36:57 That’s kind of the kind of that idea.
0:36:59 And yeah, it looks really good.
0:37:01 You got to give it up because their API is everywhere.
0:37:03 Anywhere that you go, you can now generate in Luma.
0:37:07 The first frame, last frame thing, you can do some really cool things with.
0:37:12 I think I would probably put them into the A category.
0:37:16 Somewhere between A and B — like, they’re just
0:37:18 like one update away from, like, hopping up to an S.
0:37:23 Most of what I do with them is the image to video.
0:37:28 I feel like with this one, starting with an image, you tend to get
0:37:32 a much better result than if you just give it a text prompt.
0:37:36 But, you know, the other thing — let’s see, this is — I think I have the right tool.
0:37:40 This is the one where you can give it a start and an end frame, right?
0:37:45 Yes, I’ve got some — so, like, you can give it like a starting image
0:37:49 and an ending image and it will sort of animate between the two images, right?
0:37:53 Yeah, I’ll show you something that I was playing around with this idea
0:37:56 about wanting to do kind of a Game of Thrones type opening.
0:38:01 So, you know, I started off by generating up some, you know, I like still images
0:38:06 of like these castle things that wasn’t quite it ended up kind of arguing
0:38:12 with with Luma a bit, ended up zooming out, you know, with once we established
0:38:16 a look, zoomed out a little further, kind of got into a map look.
0:38:19 And then, you know, from there, you could kind of start taking taking it into
0:38:21 to see if it was one of the 12 second ones.
0:38:22 What’s this user interface?
0:38:24 Is this is this Luma?
0:38:25 This is yeah, this is Luma now.
0:38:29 So, you know, we can kind of get these kind of cool effects using first frame,
0:38:30 last frame, you know what I mean?
0:38:37 Like kind of getting this real sort of like swooping, droney, not quite Game of Thrones,
0:38:39 but, you know, kind of like — by the end of it, I had, like, you know,
0:38:45 fairly quickly ended up with, like, this 19-second shot of, like, this, you know, fly-through.
0:38:50 Yeah, this kind of stuff is like really fun and very easy to do in Luma.
0:38:56 So, yeah. Yeah, I think Luma is a great little — it’s great for exploration.
0:38:58 It’s, yeah, it’s great for exploring.
0:39:03 I actually played around with an anime style too, which I normally do not do on my channel.
0:39:06 Yeah, I’d say Luma for me is probably a tier.
0:39:09 I really like that sort of transition feature where you give it two images
0:39:11 and then it can transition between them.
0:39:12 I think that’s really, really handy.
0:39:15 So for me, I say Luma’s A tier.
0:39:16 Yeah, for sure.
0:39:21 And then, like, so moving along here, next up, we’ve got Runway, and Runway
0:39:23 in my mind is like the OG, right?
0:39:27 When it comes to video generators, they were out there doing it first, right?
0:39:32 We had Gen 1, Gen 2, all those — three seconds of video, my friend.
0:39:34 Three seconds of video.
0:39:36 It was when people complained.
0:39:39 So it’s like you can’t put them in D tier just because they’re like
0:39:42 the godfathers of like the video tools.
0:39:47 I, you know, maybe a hot take on this, but actually,
0:39:50 they’re one that I would put up in the S tier, only because, again,
0:39:53 the suite of other tools that they have as well.
0:39:56 I mean, Gen 3 is great and it’s fast.
0:39:58 I mean, it is super fast.
0:40:01 I don’t know if you’ve played around that much with it on the image to video
0:40:04 stuff, but you add Act 1 into there and like the expand and all of that
0:40:06 other stuff that they’ve been doing lately.
0:40:08 I mean, is that killer suite?
0:40:10 And then on top of it, you can just take all of that stuff
0:40:15 and instantly rotoscope and green screen in there and like the subtitle thing.
0:40:19 I think if you go into, like, the greater — they’re always adding new things, right?
0:40:20 Like, always new features they’re releasing —
0:40:24 the greater, like, Runway ecosystem.
0:40:27 Like, there’s just so many tools down there as well.
0:40:30 Don’t they have over 36 tools, right?
0:40:37 You got audio, lip sync, removing backgrounds, text images built in.
0:40:41 I mean, one of the guys that runs Runway — he’s one
0:40:43 of the original creators of Stable Diffusion as well.
0:40:48 So it’s like they’re OGs that created Runway.
0:40:53 They go way back to, you know, the very beginning of AI image generation.
0:40:57 And yeah, there’s just so many tools like I use the green screen tool
0:41:00 quite a bit where you could take another video and remove the background.
0:41:03 I think that’s really, really a handy tool.
0:41:06 But yeah, when it when it comes to like Act 1, where you can just sort of
0:41:11 upload a video of you talking and change it into like any sort of cartoon.
0:41:13 That’s super cool.
0:41:16 You can now do it with — I really think it should have been called Act 2 as well.
0:41:22 Now you can drive that Act 1 video footage onto any other like footage as well.
0:41:26 So you can have somebody walking around in a 16, 9 frame and then just completely
0:41:29 change, you know, essentially what they’re saying in their facial expressions.
0:41:31 Like that’s super handy, like it’s huge.
0:41:33 Yeah. Well, I mean, you got all these camera controls, right?
0:41:36 Horizontal, vertical, zoom, roll, tilt, pan.
0:41:40 You know, you’ve got expand video where you can give it
0:41:45 like a vertical video and turn it into a 16 by 9 and have it actually stretch
0:41:47 it out for you or vice versa.
0:41:51 You know, it’s got it’s got a ton of cool features.
0:41:54 It’s really, really good at image to video as well.
0:41:56 Like you mentioned, it’s it’s fast, too.
0:42:00 Let me see if I can just find something real quick to drop in here and see how
0:42:03 fast it generates. Oh, and this one has a first and last frame, too.
0:42:05 I didn’t even realize they’d added that feature to Runway.
0:42:10 So here’s like a colorful, swirly one.
0:42:13 And they also go pretty quick.
0:42:17 If you miss a Friday — that seems to be when Runway loves to drop an update.
0:42:19 So you always got to pay attention.
0:42:21 I that’s. Yeah.
0:42:25 Yeah. Well, I always make my news videos on Thursdays and release them on Fridays.
0:42:28 So yeah, news is always like.
0:42:30 Yeah, I’m always talking about it.
0:42:32 A week behind. Yeah.
0:42:34 So Runway’s got the best editing tools now, right?
0:42:36 Like the best editing tools.
0:42:39 But I still feel like the visuals are not the best, man.
0:42:41 They’re good, but they’re not the best of the best.
0:42:43 Is that a fair take?
0:42:46 I think depending on what you’re what you’re generating, a lot of times.
0:42:48 Like that one, the mountain one here looks pretty good.
0:42:53 Like, and again, you do have all of your video controls,
0:42:55 your motion controls and all of that stuff.
0:42:59 And depending on what because this was promptless too, right, Matt?
0:43:04 Yeah, you know, there’s I think it depends on what you’re looking for.
0:43:07 And then on top of it, because you can take this up to whatever, 10 seconds.
0:43:09 And there’s all kinds of other tricks.
0:43:11 There’s like, I’m always thinking a lot in terms of like,
0:43:16 you know, you know, use all of the meat of the animal.
0:43:20 So when I see a generation like this, what I’m thinking is like,
0:43:21 OK, I’m going to bring it in.
0:43:25 I’m going to run it through Topaz and then maybe I’ll speed it up.
0:43:27 And I’m only going to take the first like four seconds of it
0:43:30 to give it a quicker vibe or quicker feel to it.
0:43:33 Maybe put a zoom in and the, you know, I’m just thinking about all the other
0:43:36 ways that I’m going to use this, not necessarily just the generation.
0:43:41 Yeah. And that’s another thing that when it comes to like generating videos,
0:43:46 a lot of people don’t really think about that is in most scenarios,
0:43:49 you really only need about three seconds, right?
0:43:52 Yeah. If you ever turn on your TV and watch a TV show or you watch a movie
0:43:56 or something like that, they never linger on a shot for more than three to four seconds.
0:43:57 Right. Yeah.
0:44:01 So if you can get these tools generating eight, 10, 20 second videos for you,
0:44:07 if just three to four seconds of that video is good, you’ve got perfect B-roll, right?
0:44:11 That’s something that — when that Sora thing, when Sora was first announced
0:44:14 and it was like the 1080p, one-minute-long generation — it’s like, no one’s ever going
0:44:17 to... you’re never going to watch one minute of that, like, you know what I mean?
0:44:20 Like people can can barely hold their attention through a one minute
0:44:23 TikTok or YouTube short where it is cutting like crazy.
0:44:28 Like try making a YouTube short one day when it’s just you literally
0:44:31 just talking to the screen for a minute and like, I guarantee you,
0:44:33 it’ll be the lowest-watched short of all time.
0:44:38 So Runway — I don’t know, my problem with S tier is I feel like the visuals,
0:44:42 it’s like B, like, in terms of that. I was just looking at how good the
0:44:45 videos look. Like, recently I put together, like, some Twitter threads, like,
0:44:48 comparing, and I didn’t actually include many videos from Runway because I
0:44:50 just didn’t think they looked as good.
0:44:52 I didn’t think they would perform as well in the thread.
0:44:57 And I, you know, when I looked at all the different models, I mean, it was like,
0:45:01 you know, Veo was by far the best and then Sora was pretty good.
0:45:06 And I felt like Runway was, like, slightly a tier below visual-quality-wise,
0:45:07 but the editing tools are amazing.
0:45:09 And it’s not the most controllability though.
0:45:13 Like if you saw the camera angles and the like from a controllability
0:45:16 standpoint, I think runway is up there.
0:45:22 Whether I’d put it on the same level as what I saw from Minimax and from Veo.
0:45:26 So then, all right, we’ve got two last tools and these are the two that we
0:45:27 don’t have access to.
0:45:30 I mean, technically, I think all three of us actually have access to
0:45:34 Adobe Firefly video, but yeah, the emails I’ve gotten from Adobe says
0:45:36 I’m not allowed to share those videos yet.
0:45:38 Oh, D tier.
0:45:47 Cause I thought the model was released and out to the public.
0:45:50 No, the Firefly video model.
0:45:54 I literally got an email from them this morning saying that you can play with it,
0:46:00 but make sure that you send any videos to us for review before showing them off.
0:46:00 Yeah.
0:46:06 So what I figured we can do for this is — we’ve got the current model that’s out.
0:46:06 That probably is what we should do.
0:46:12 So for Adobe Firefly, we’ve got their page here with some probably pretty
0:46:18 cherry-picked demos, but you’ve got like this dog video here and it seems
0:46:22 to be one of the better ones, maybe with text from what I’ve seen.
0:46:26 But some of this isn’t great.
0:46:27 Some of it is pretty good.
0:46:33 Based on what you’ve tested with Firefly, we can’t actually show any
0:46:36 videos that we generated because of the rules there.
0:46:39 But based on like the videos that you’ve tested, like how would you rank it?
0:46:44 You know, I think it's very Adobe.
0:46:50 Like, I think that if you're
0:46:57 working as a professional editor and you need some B-roll of a thing,
0:47:02 it's much the same way as, how much are professional photographers
0:47:04 using Generative Fill, you know what I mean?
0:47:08 Like, I think that it’s become a tool that they’ll try out a few times.
0:47:11 They figured out where the use cases work really best.
0:47:14 They know where it doesn’t do great.
0:47:19 Like, you know, if you’re a wedding photographer and you need, you know, I don’t know,
0:47:21 something like a couple of birds up in a tree or something like that.
0:47:25 I'm sure that, you know, Firefly or Gen Fill is going to do that for you.
0:47:29 If you need dragons flying in and attacking the wedding party, it's another tool
0:47:30 that you're going to use for it.
0:47:36 So I feel that, that Firefly video is kind of in the same, in the same area.
0:47:42 If you're doing what I would call regular movies or
0:47:46 regular videos, kind of, I don't want to say, I guess,
0:47:47 stockish kind of stuff.
0:47:49 Like, yeah, it'll perform very well.
0:47:53 If you're trying to make Avatar
0:47:56 with it, then you're going to struggle.
0:48:01 So the other thing that I've noticed about Adobe, and this is more with the
0:48:05 Firefly image generation, but I imagine it translates to the video as well.
0:48:09 Is as far as like censorship goes, they’re the worst.
0:48:13 Like they’re the ones that, like I tried to generate an image of a person
0:48:15 standing in front of the Eiffel Tower and it’s like, oh, we can’t generate the
0:48:16 Eiffel Tower, right?
0:48:17 Interesting.
0:48:22 There were lots of things that it would not generate for me
0:48:24 that it didn't make sense for it to refuse.
0:48:28 It's always hard for me to argue against that, because Adobe is
0:48:31 obviously very much taking a hard stance of, we are trying to be as
0:48:32 ethical as possible.
0:48:34 And it is sometimes hard for me to be like, well, stop.
0:48:38 So I’ll do it.
0:48:39 I’ll say stop.
0:48:46 But, uh, yeah, I'll be the bad guy on this show, I have no problem with that.
0:48:53 So yeah, I think that eventually that will change.
0:48:58 But again, I think that where Adobe, where Firefly and Gen Fill really do
0:49:02 shine is the fact that they are going to be directly integrated into Premiere,
0:49:04 into, you know, into Photoshop as we’ve seen.
0:49:08 And so it becomes the thing you reach for first when you're stuck
0:49:13 on something. Like, I edit in Premiere, and at some
0:49:17 point, when I need a quick shot, am I going to bounce
0:49:20 out to another video model while I'm editing?
0:49:23 Like, if I'm like, oh, this just occurred to me,
0:49:23 I want to try this out.
0:49:27 Or am I going to give it a shot real quick with the new Firefly video
0:49:30 model, just because I'm already there?
0:49:30 Yeah.
0:49:31 Yeah.
0:49:34 The other thing that I really like that, uh, Firefly is going to do,
0:49:35 I don't believe we have it yet,
0:49:38 is that generative extend, right?
0:49:41 Where you have a video that you needed to cover eight seconds, but right
0:49:43 now it only covers six seconds.
0:49:46 You can, you can extend it out by two more seconds.
0:49:50 And it figures out what the rest of that video would have looked like. Right
0:49:51 now, when we're doing video editing,
0:49:54 I don't know if you're like me, but what I'll do is I'll literally slow
0:49:58 down the clip so that it's in slightly more slow motion to fill that gap.
0:50:03 But I like that idea of having that generative fill inside of video to
0:50:05 sort of fill out that extra one or two seconds that I need.
0:50:06 Yep.
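That slow-the-clip-down trick is easy to script outside of any of these tools. Here's a minimal sketch, assuming ffmpeg is installed; the stretch_clip helper and the file names are hypothetical, and this is not a feature of any of the tools discussed in the episode.

    import subprocess

    def stretch_clip(src, dst, current_secs, target_secs):
        """Slow a clip down so it covers target_secs instead of current_secs."""
        factor = target_secs / current_secs  # e.g. 8 / 6 -> play ~1.33x slower
        subprocess.run(
            [
                "ffmpeg", "-y", "-i", src,
                # setpts stretches the video timestamps; -an drops the audio,
                # which would otherwise drift out of sync (fine for B-roll).
                "-filter:v", f"setpts={factor:.4f}*PTS",
                "-an", dst,
            ],
            check=True,
        )

    # Example: a six-second generation that needs to cover an eight-second edit.
    stretch_clip("broll_6s.mp4", "broll_8s.mp4", current_secs=6.0, target_secs=8.0)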
0:50:10 I think that is probably going to be more the use case, then.
0:50:14 I mean, again, even with Firefly image, I think that most people use
0:50:19 it through Gen Fill; people don't really go to adobe.com/firefly
0:50:22 and generate stuff in the image model there.
0:50:24 You know, they do it in Photoshop.
0:50:27 So I think that’s really, that’s really where it’s going to live.
0:50:31 So I don't even know if Adobe really belongs
0:50:33 on this list, necessarily.
0:50:37 I don’t really think of it as like a full platform generator, as much as I
0:50:40 do an extension of Photoshop and Premiere.
0:50:41 Yeah, gotcha.
0:50:42 Yeah.
0:50:46 Well, and then the last one we've got on the list here is Make-A-Video
0:50:49 from Meta, which nobody has access to yet.
0:50:57 So, yeah, I don’t know if we will or not, but maybe they’ll make it open source
0:51:00 or, you know, Facebook’s version of open source.
0:51:03 Every time I say that Meta is open source, I get a million comments
0:51:05 from people going, it’s not open source.
0:51:06 Stop calling it open source.
0:51:07 Not really, yeah.
0:51:13 But, you know, Meta's level of open source, maybe they'll
0:51:15 open-source-ish this one.
0:51:18 I'm curious, now that we're sort of getting this
0:51:22 second tier, because I think that,
0:51:25 in my head, since I've been following this for so long, I
0:51:30 consider this era that we're moving into as AI video 3.0.
0:51:34 And definitely with Sora now released and with Veo released,
0:51:38 it's kind of forcing Meta's hand a little bit to
0:51:39 do something with this, right?
0:51:43 I mean, I don’t know if Zuck really cares, like, you know, at the same time.
0:51:46 But, I mean, you know, just comparing it to what else is out there,
0:51:50 this looks like the last generation of AI video, right?
0:51:53 Like, it doesn’t look like the current gen that we’re seeing now.
0:51:58 But this was the one that had all of the sound effects and you can do,
0:52:00 you know, everything generates as one, right?
0:52:02 It feels like a, it feels like a Pika competitor.
0:52:04 It feels like Pika’s going to go more in the social direction and then
0:52:06 Facebook’s already there.
0:52:10 So they’re kind of like, if they approve this, they’ll end up kind of causing
0:52:11 problems for Pika, possibly.
0:52:13 Like, if they made a cool social product with it, you know.
0:52:17 My other question, too, given that it is Meta, is,
0:52:19 where does this live?
0:52:21 What are they thinking about this for?
0:52:23 Is this a Facebook thing?
0:52:24 Is this an Instagram thing?
0:52:26 Like, where does this live in the Meta ecosystem?
0:52:28 Like, what is it?
0:52:29 Yeah, what's it for, why?
0:52:30 That’s a very good question.
0:52:32 That's something I've been saying for a while:
0:52:37 there’s some new social product with AI that doesn’t exist yet.
0:52:37 I don’t know what that is.
0:52:41 There are probably multiple ones, but, like, Instagram
0:52:42 started because of filters.
0:52:45 Well, maybe there’s something new now that you could create with these
0:52:48 new AI technologies, like whether it’s like video and you take a photo of
0:52:50 yourself and now you made a really cool viral video or something, you know,
0:52:51 something really funny.
0:52:54 Well, and I know the other thing that everybody's really hot on right
0:52:57 now, or not everyone, but a lot of companies
0:53:00 are very much trying to become the Netflix of AI video.
0:53:05 And I don't believe that needs to happen yet.
0:53:07 I think that we need to have AI video on Netflix.
0:53:11 I think I would like to see an AI-generated,
0:53:16 AI-produced kind of thing, like, uh,
0:53:18 did you guys watch Scavengers Reign on HBO?
0:53:18 So cool.
0:53:19 So good.
0:53:25 Uh, it was that animated series about the... Ah, there's no HBO in Japan, which is a bummer.
0:53:26 Really?
0:53:29 There's no... have you heard of this thing called a VPN?
0:53:31 I was going to say, yeah, and it does sound.
0:53:36 VPN and the iron.
0:53:40 Ah, well, what sucks though is, I watch content,
0:53:41 I watch stuff with my wife, right?
0:53:43 And so you want the subtitles.
0:53:47 So, you know, we're
0:53:50 finishing up watching Vikings right now, which was kind of funny for me
0:53:52 because we’re doing all this Viking AI video stuff here.
0:53:54 We’re finishing up that series right now.
0:53:58 And I was like, well, I'd actually never watched Vikings before.
0:53:59 And it’s pretty good.
0:54:01 I was like, but I like Game of Thrones better, except for the last season.
0:54:03 And so I’m wanting her to watch Game of Thrones.
0:54:07 She's a fantasy nerd like me and she's never watched Game of Thrones.
0:54:11 And it’s just like, I had to buy the entire show.
0:54:14 Like, it’s quite expensive, like actually buying it.
0:54:18 So right now this is, this is our current rankings of the tools.
0:54:20 This is where I feel like they all landed.
0:54:22 Does anything need to be adjusted?
0:54:25 What do you guys think? He's never coming on our podcast now.
0:54:27 Oh, he’ll be fine.
0:54:31 Well, I think going back to like the only reason I put Meta down in the
0:54:33 D tier is A, we don’t have access to it.
0:54:38 B, it still kind of looks like the last gen AI, like Meta tomorrow could drop
0:54:41 something that looks just as good as Veo and be like, we've been working on this
0:54:44 behind the scenes and all of a sudden they’re in A or an S tier, right?
0:54:46 So I think they’ll have some cool stuff coming out.
0:54:49 Same as I think, you know, Grok will have some cool AI video stuff at some point too.
0:54:50 And, you know, yeah, yeah.
0:54:53 So I mean, obviously none of this is a knock on any of the companies.
0:54:57 We’re just kind of like ranking them based on where we feel their current
0:55:02 generation of technology is and any one of them like is an update away from,
0:55:04 you know, bumping up a tier or two.
0:55:08 And also like, like Tim mentioned, I don’t really feel like we know what
0:55:11 Meta is going to do with AI video yet, right?
0:55:15 Like they are trying to shoehorn like the AI image generators like into
0:55:19 their chat apps and stuff, but I don’t think anybody’s still really using them.
0:55:21 I don’t think that’s going to work just adding it on stuff.
0:55:22 It’s like the same problem Adobe has.
0:55:25 You just add on AI to some existing thing.
0:55:27 Like, no, you need entirely new things for this new world.
0:55:29 Now there's so many new possibilities.
0:55:31 Why do you want to be tied to these old systems?
0:55:34 But they have to, because those are their cash cows, right?
0:55:38 Well, and I think that as we move into 2025, too, there's
0:55:41 going to be a big shake-up amongst all of these.
0:55:43 And we're going to see, if we come back and do this again next year,
0:55:47 we're going to have five or six other companies
0:55:49 that we’ve never even heard of now on this list.
0:55:51 The list is going to be all over the place.
0:55:55 We’re going to see a lot more of that 3D element with what we were talking
0:55:59 about before with World Labs and, in a lot of ways, with
0:56:02 what Genie 2 is doing, those world builders.
0:56:06 And I know that Midjourney's been... because Midjourney video will be coming along.
0:56:07 That's what I was about to say.
0:56:08 Yeah, that's what they've been wanting to do.
0:56:09 Yeah. Yeah.
0:56:11 So Midjourney is also working on a video model, too.
0:56:14 They keep on saying they're going to show it, because, I'm sure...
0:56:17 Yeah, with all these new great video models,
0:56:19 how do they release something without it being spectacular, right?
0:56:22 Actually, I was just in their office hours a couple of weeks ago.
0:56:26 And bless them for it, too,
0:56:29 the team was just like, we could release it.
0:56:30 It’s the holidays.
0:56:33 We’d like to spend time with our families and like so that’s like
0:56:37 legitimately why they’re not, you know, because like if they release it,
0:56:40 they're going to be hit with all kinds of questions, and the servers
0:56:41 are going to go down, this and that.
0:56:44 Like, just let them let them go home and hang out with their families.
0:56:47 And then let’s be honest, right now is not the time to be releasing
0:56:50 something with what OpenAI and Google are doing, right?
0:56:52 They’re just going to get overshadowed by those two companies right now.
0:56:55 Whew. Rock 'em, sock 'em robots going at it right now.
0:56:57 It’s like, so, yeah.
0:57:02 Meanwhile, what’s your prediction on January, do you think we’re going to see
0:57:05 a massive slowdown or do you think we’re going to ramp up again?
0:57:11 So I’m kind of waiting to see what OpenAI releases over the next two days.
0:57:14 Yeah, I'm sort of giving away when we're recording this episode
0:57:15 compared to when it’s getting released.
0:57:19 But there’s two more days of OpenAI about to release updates.
0:57:24 I think if they don’t release tool use as one of the next two updates,
0:57:26 I think that’s coming in January.
0:57:29 They’ve already hinted that that’s probably going to come in January.
0:57:33 So I think we’ll see the tool use come from OpenAI.
0:57:37 And then, you know, knowing what I know about Google and spending some time there,
0:57:43 they’re really working on the sort of like agentic tool use kind of stuff as well.
0:57:46 So I think that’s going to be the big theme of 2025 overall,
0:57:49 is these tools that can use tools on your behalf.
0:57:52 Yeah, I’m excited about that.
0:57:55 I don’t think we’re going to see like a new AI video model,
0:57:59 for instance, in January, unless it's from, like, Midjourney
0:58:01 and they’ve already got it ready to go, you know.
0:58:03 Yeah, I think we’ll see the rate of progress improve.
0:58:06 Like, probably everyone's going to be copying
0:58:09 what OpenAI is doing in terms of their o1 model.
0:58:14 And from my understanding, since it's not entirely
0:58:17 just relying on training with massive amounts of data,
0:58:19 that’s one part of it, but that’s not the only way it improves.
0:58:21 We’ll probably see these models just like keep getting better,
0:58:23 like over and over and over.
0:58:25 I mean, we might get to a point where like every month,
0:58:29 it's like, it got 20% better this month, right?
0:58:34 And I think AI video will end up in a similar scenario quite soon.
0:58:36 If it’s not already there.
0:58:40 You know, it’s funny because I talked to Caleb over at Curious Refuge
0:58:42 about this a bunch of times, he and I chat back and forth of like,
0:58:46 you know, as the video models get better and better and better
0:58:51 and like, you know, we are always reporting and doing tutorials on all of it.
0:58:54 And what's the point when it's like, OK,
0:58:57 it's perfect now, what do we make then?
0:59:00 And the fact is that it'll never be there.
0:59:03 It's just like there are still people doing, you know,
0:59:07 camera tutorials and camera reviews and all of that.
0:59:10 It's always, you know, last year's Canon is terrible,
0:59:13 this year's Canon is better, Nikon is better than that.
0:59:15 You know, it's just always going to turn into that. So.
0:59:17 Yeah, yeah, I agree. I think it’s going to get better.
0:59:19 I think we’re going to get higher and higher resolution.
0:59:23 I think there’s always going to be room to focus on the finer details within
0:59:28 images, you know, we’re going to get more controllability out of the images,
0:59:31 right, where you have a little more control over the lighting, over the
0:59:35 the style of it. We’re already seeing that with, I mean, basically,
0:59:37 Midjourney's got LoRAs available now.
0:59:41 They don't call it that, but their mood boards are essentially that.
0:59:47 So like, I think there’s always going to be places to improve,
0:59:50 but it’s going to be more and more nuanced as time goes on.
0:59:53 And that’s one of the things that I kind of want to start
0:59:56 maybe like preaching a little bit more on the channel is like,
0:59:59 you start moving into some of the tools that help you develop story.
1:00:03 Because ultimately, as fidelity gets better, as details get better,
1:00:06 as control gets better, what are we doing with this stuff?
1:00:08 Like, are we telling a good story?
1:00:11 Like, is it something that's going to make us laugh, cry, get excited,
1:00:13 whatever? I mean, that ultimately is the thing.
1:00:16 Like, none of us watched Star Wars when we were kids for the effects.
1:00:18 You know, we all loved the special effects,
1:00:22 but that's not what drew us in, you know. Right, right, for sure, for sure.
1:00:26 And I’ve actually really started to see some impressive AI generated short films.
1:00:28 I saw one that somebody made recently.
1:00:30 I’m sure you’ve come across it, Tim.
1:00:33 In fact, it might have even been your X account that I came across it on.
1:00:37 But there was recently like a little Batman short film that came out.
1:00:39 Oh, yeah, yeah, yeah, really, really good.
1:00:41 That got copyright struck. Really good, though.
1:00:42 Oh, did it get that?
1:00:46 I was looking for it and it got knocked down for...
1:00:48 Yeah, Warner had knocked it down.
1:00:51 So yeah, I mean, I’m not surprised by that.
1:00:56 They did basically just, you know, steal somebody else’s IP and make a short film out of it.
1:00:57 That was really good.
1:00:58 It was actually, yeah, it was really solid.
1:01:00 Great sound design throughout that whole thing as well.
1:01:01 They really captured the tone.
1:01:04 There was an interesting trick that those guys pulled where
1:01:07 they used that first frame feature, but they were taking screenshots
1:01:11 from the Robert Pattinson, you know, Batman movie.
1:01:16 And then basically running their own generations to kind of create iterations of that scene.
1:01:18 It’s really smart in all honesty,
1:01:22 but I can see exactly why Warner was like, no, no, no, that is gone now.
1:01:26 They'll release something like that on their own in the future, right?
1:01:29 Like here’s your custom Batman story.
1:01:30 I’m sure, I’m sure.
1:01:34 But I mean, like just that idea alone opens up the door for so many
1:01:35 like fan films and that kind of stuff.
1:01:38 I actually, truthfully, I think it’s kind of stupid that Warner knocked it down.
1:01:39 I really think that it’s like, if anything,
1:01:42 that’s just like free publicity for your Batman movie.
1:01:45 So I don't know, you know, but again, I'm not a corporate lawyer.
1:01:51 So again, the point is people are figuring out how to use these tools.
1:01:55 Knowing the limitations just creates bumpers
1:01:58 that they know they have to stay between, and, you know, with those bumpers,
1:02:02 they’re figuring out how to make stuff look really good and sound really good.
1:02:06 And I think we’re just going to see more and more of that, you know,
1:02:08 ideally people should create their own IP instead of...
1:02:13 I agree with that. With Marvel or companies like that, that kind of stuff is
1:02:15 probably going to get slapped down every day of the week.
1:02:17 We saw it with the South Park generation, too,
1:02:18 where there was a whole
1:02:21 AI-generated South Park episode, which was actually a decent episode.
1:02:23 I watched the whole thing. It was like, that’s not bad.
1:02:26 Yeah. But that also got slapped down, right?
1:02:30 That got that got pulled off the internet because that’s somebody else’s IP.
1:02:34 Yeah, the simulation guys are sort of moving out of that South Park thing.
1:02:37 And now they’re starting to do kind of like their own branded IPs.
1:02:40 And the last one, I think that it kind of looked a little bit more
1:02:42 like that Star Trek: Lower Decks show.
1:02:44 It’s not, but it kind of has that vibe to it.
1:02:47 It’s like a spaceship thing and kind of an animated series.
1:02:49 But that’s, yeah, I agree with you.
1:02:52 I think that, like, just don’t make fan fiction for stuff.
1:02:53 Go make your own fiction.
1:02:56 Like, you know, it can be very heavily based on
1:02:58 the thing that you want to make. It's not the Punisher,
1:03:03 it's the Punishment, or whatever you want to do.
1:03:05 The vindicator. Yeah, exactly.
1:03:08 Yeah. So, you know, just make that instead.
1:03:10 So at least you own it.
1:03:12 Well, cool, Tim, this has been amazing.
1:03:14 You know, obviously this episode isn’t two hours long,
1:03:19 but for anybody who’s listening right now, we’ve actually been recording for two hours.
1:03:22 That's how nerded out we've gone on some of these video tools.
1:03:25 But before we go, shout out your platforms.
1:03:28 I know you’ve got an awesome YouTube channel you post on X.
1:03:29 Where should people go check you out?
1:03:31 That’s really about it right now.
1:03:33 I think next year I’ve got to start
1:03:38 with probably picking your brain a little bit on doing a website
1:03:42 and probably a newsletter at some point or another.
1:03:44 Actually, plug the channel name at the end.
1:03:46 Oh, sorry. Yeah.
1:03:50 So Theoretically Media on YouTube is probably the easiest way.
1:03:53 And then from there, you can find links. Absolutely. Yeah.
1:03:55 Well, thanks again, Tim.
1:03:57 Thanks for everybody who’s tuning in.
1:04:01 Make sure that if you’re not already subscribed, you subscribe on YouTube.
1:04:03 That’s where you’re going to get all the cool visuals.
1:04:06 You’re going to be able to see our breakdown and examples of these tools.
1:04:08 You’re going to be able to see the tier list that we made.
1:04:14 See the handsome faces of Matt and Tim, and the face of Nathan.
1:04:15 I was hoping you were going to say Nathan, too.
1:04:17 You’re going to say Matt.
1:04:21 Well, this is awkward.
1:04:26 But again, if you do prefer audio, we are available wherever you listen to podcasts.
1:04:29 So thanks again, Tim, and thanks everybody for tuning in.
1:04:32 [MUSIC PLAYING]

Episode 39: How are AI video tools revolutionizing content creation? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) are joined by Tim Simmons (https://x.com/theomediaai), founder of Theoretically Media, to delve into the latest advancements in AI video tools.

This episode ranks 13 of the most popular AI video tools of Q4 2024, discussing their features, strengths, and weaknesses. They explore the capabilities of tools like Sora, Runway, and Adobe Firefly, and predict future developments in AI integration. The conversation highlights the evolving landscape of AI video generation and its impact on content creation.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:

  • (00:00) Pika struggles with customization; requires frequent rerolls.
  • (06:49) Veo ranks S tier for generating videos.
  • (13:51) Leaving Sora in category S, discussing Turbo.
  • (20:21) You write scripts; creators adapt them independently.
  • (25:40) Prompts reinforce ideas, rerun if needed.
  • (26:51) InVideo generates videos from concepts automatically.
  • (33:28) Integrating new technology requires complex adaptation.
  • (40:50) Runway: Advanced camera controls, fast image-to-video.
  • (45:58) Adobe’s generative tools: Occasional use, specific purposes.
  • (48:08) Firefly and Generative Fill integrate into Adobe apps.
  • (57:19) AI models continually improve, likely accelerating progress.
  • (59:09) Focus on storytelling, not just technical details.

Mentions:

Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt’s Stuff:

• Future Tools – https://futuretools.beehiiv.com/

• Blog – https://www.mattwolfe.com/

• YouTube- https://www.youtube.com/@mreflow

Check Out Nathan’s Stuff:

The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
