AI transcript
diving into the world of AI and visual effects. Today, we’re chatting with Nikola Todorovic,
the CEO of Wonder Dynamics. Now, if you’re not familiar with Wonder Dynamics, it’s a tool
that allows you to film yourself or any human on video and then re-skin that video with like an AI
generated or 3D CG character. So if you think of Lord of the Rings and you’ve got Gollum,
that was all acted by a human. And then visual effects artists went in later and
sort of re-skinned him as the CG character Gollum. That’s what Flow Studio from Wonder Dynamics does
right now. And anybody can use it and it makes it super easy. And Nikola is actually going to demo
exactly how to do that in this video. And make sure you stick around as well, because if
you’ve got a business or are doing marketing or you’re a content creator, we’re going to dive
into all sorts of use cases for you as well. So let’s go ahead and jump in with Nikola from
Wonder Dynamics. Thanks so much for joining us on the show today. How are you doing?
Pretty good. Thanks for having me. It’s good to see you guys.
Yeah, thanks. You know, we were saying this right before we hit record, but the first time that we ever
saw the Wonder Dynamics studio tool, both of us thought it was fake. We saw the tool and went,
I don’t believe this. I think they’re pulling our legs. And then we finally got our hands on it a
couple of weeks later and we’re actually blown away that it did what it said it could do.
Yeah, I know. You know, as I said, I think a lot of people thought that, you know, a lot of people
thought we faked our demo, which is always a compliment, right? When people think that. So it’s
actually a nice thing. But I’ve got to say, Matt, when you got access and did coverage, our team was
really excited because a lot of people on our team follow you for news on tech and AI and all that
stuff. So they’re really pumped that we got that coverage because we were working in stealth mode for
about four years, right? So that’s one of those things with demos as well: you don’t know what the
reaction is going to be when you’ve been so close to something for so long. So we’re very grateful,
you know, to get the coverage we did. Yeah. Yeah. I appreciate it. Yeah. I remember I made a video
where I was outside my yard and I was playing basketball. Right. And then I switched myself
with a robot. And then Elon Musk went and shared that video and said, maybe we’ll have this in real
life soon. I was like, oh, this is cool. That’s awesome. So very cool. Well, let’s get into your
background a little bit. Like how did Wonder Dynamics come to be? Were you in visual effects
and Hollywood? What’s the backstory there? Yeah. So I always wanted to be in filmmaking since I was,
you know, 12 years old and I come from a really small country. I was born in Bosnia, lived in
Croatia and Serbia all my life. You know, I couldn’t really afford to go to a film school or something
like that. But, you know, I started about 12, 13 years old, started watching YouTube tutorials,
you know, video co-pilot Andrew Kramer and those guys, you know, that was kind of my little VFX
school. And, you know, I moved to the U.S. really to pursue a career in filmmaking and, you know,
started working as a VFX artist, as a compositor first, worked for, you know, freelance and a lot
of different studios, you know, everything from ads to indie films. And then I started working as a
supervisor. And I met my co-founder, Ty Sheridan, who’s a young actor and a producer. And we really
started writing together. We wanted to, you know, tell our own films. Usually whatever we wrote,
it was, you know, sci-fi with robot characters. So we really wanted to tell the story about the near
future, you know, near future when we coexist with robotics. And every time we wrote something,
we realized, all right, this is about $200 million budget. There is no way we’re going to ever get
that, you know, and then we started looking into a bit of AI. You know, first we built something of a
more interactive nature. Around 2018, we built this product where you can have a conversation with a
character. So let’s say you’re watching a murder mystery, and a little switch happens from the stream into a
digital double. That’s a 3D representation that you can have a chat with. And that was kind of the moment,
you know. Cognitive AI wasn’t there yet. This is before ChatGPT and all this push,
but we really saw a big opportunity in visual. We call it visual AI. There was no gen AI or any terms
like that. And we said, okay, this is going to change production. So I always say selfishly,
we wanted to just do it to make our own films. So we’re like, what’s the worst thing that could happen?
You know, and the worst thing that could happen, we both concluded, is we’re just going to learn what
the future of filmmaking is before other people. So we kind of get a little headstart. And then about
six months in, we realized, all right, this is bigger than just two of us. Let’s turn it into a tool and a
platform. I mean, that’s really how we started. We did a complete pivot, bootstrapped the first three years,
you know, kind of trying to find your way, what it is that you really want to do. And then,
yeah, 2018, 19, we really focused on this and kept growing the team and kept building.
What was the first prototype? And like, what made you confident that you guys could actually
build it?
You know, what was the first prototype? We actually recreated, if you know it, the Spectre opening sequence in
Mexico, where James Bond walks on a ledge. We did that as a test, and we kind of, you know,
just focused on the mocap, you know, AI mocap. And obviously the quality wasn’t even close. But
then we really realized the potential, you know, was huge. You know, we really started looking more
into kind of technology around, you know, self-driving vehicles and robotics, which is all
about, you know, understanding the world around you. You know, we call it scene understanding. How
much can I understand what I’m seeing in the pixels? And really, you know, as a VFX artist,
you’re always trying to do that. You’re trying to understand what looks real, what doesn’t,
what is it in 3D space from a 2D plate that I have, right? So that’s how we just kept building on top
of that.
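The “scene understanding” idea described above, working out what a 2D plate corresponds to in 3D space, can be illustrated with a minimal pinhole-camera back-projection sketch. All numbers here (focal lengths, principal point, depth) are invented for the example; a real system has to estimate depth and camera parameters from the footage itself.

```python
# Toy illustration of "scene understanding": lifting a 2D pixel from a
# plate back into 3D space with a pinhole camera model. The intrinsics
# and depth value are made up for the example.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) at a known depth to a 3D point in camera space."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel slightly off-center in a 1920x1080 frame, 4 units from the camera:
print(backproject(u=980, v=560, depth=4.0, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0))
# (0.08, 0.08, 4.0)
```

The hard part in practice is that neither `depth` nor the camera parameters are given; estimating them from pixels is exactly what the interview calls scene understanding.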
Yeah. I wanted to share too, you mentioned your co-founder partner on this was Ty Sheridan.
For anybody who’s not familiar with him: if you’ve seen Ready Player One, Ty Sheridan’s the main
character in Ready Player One, right? So you’ve probably seen him and not even realized it,
but very cool. I would love to dive in and actually take a look and show off to anybody who actually
hasn’t seen this product yet, what it’s capable of. So, you know, if you wouldn’t mind sharing your
screen and showing us some of the sort of sizzle reel of what this thing could do, we’d love to just
kind of take a peek and chat about that.
Yeah, absolutely. So I can guide you a little bit on some capabilities here.
This is Flow Studio, and we have about four project types right now. There’s live action,
which you mentioned earlier, and that’s really tailored to having a CG character inside a live
action shot. And then we have motion capture. That’s more for when I just want to get the mocap
performance out of a shot. And then recently we launched something called video to 3D scene,
which is more towards animation. But I’ll show you quickly how it works.
We really wanted to build an easy interface. So we have three steps. The idea behind Wonder was
always, how do I get someone who’s not that proficient in 3D to also get into 3D? But then
also people that are proficient, how do they use it in their existing pipelines? This is a big issue I
saw, you know, in generative AI even early on. It was a lot of black boxes. You get certain results,
but you can’t really push it and edit it.
So from the get-go, we said, okay, it’s not there yet, but if I can get 60% there,
what is the data I can get out that I can then plug into existing tools and push it there?
Because as you know, you know, being an artist is all about control of every single element,
whether you want to control your performance or animation, or you want to control the camera
or lighting, et cetera. So that was the idea behind it. But I’ll show you a quick
workflow of how that works. And the other thing we wanted to do: we wanted to not just work shot by
shot, but we said, let’s enable it so it works on a sequence. So I’ll show you quickly a couple of
shots that you’ve probably seen, but it’s good to show. So, you know, a bit of an unrelated shot first.
And then we have one with a couple of connected shots with the same actor. So let’s say I’m
happy with my edit. I would go to my next step. And then what I do is I scan the frame for actors, and
it looks for the actor in the shot. And then we have characters here that you can apply. You know,
we have a few characters we provide for people that don’t have
their own. But the idea is that you can upload your own characters. Right now we have a Blender
add-on and a Maya add-on to help you prep your character a bit more easily in those tools. So the idea was always,
you know, create your character traditionally, as you do until AI gets there, you can completely
control it. So let’s say for this first one, we’ll assign this test crash dummy. And then
I have my second shot and we’ll come back to this one. And for this one, I’ll assign, we have
this little alien character. So I’ll just drag and drop. And then I have a couple of shots with the
same actor. I don’t have to go target for each one. I just need to target once. And then we use
something called re-ID that’s looking for the same actor in multiple shots. So let’s say we’ll do this
also, test crash dummy. And pretty much that’s it as far as interaction. Now, about where the power of the
software is: I can get a video out, but we say that’s post-vis at best. So it’s just meant to show
you where the AI worked and where it didn’t work, right? It’s not meant to be your final VFX shot.
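The re-ID step mentioned a moment ago, finding the same targeted actor across multiple shots, can be sketched as a nearest-embedding match. The vectors below are invented; a real re-ID model learns appearance embeddings from images, and Wonder’s actual model is not public.

```python
# Toy sketch of the re-ID idea: represent each detected actor as an
# appearance embedding vector, and match actors across shots by cosine
# similarity. The embeddings here are invented for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def match_actor(query, candidates):
    """Return the candidate whose embedding is most similar to the query."""
    return max(candidates, key=lambda name: cosine(query, candidates[name]))

targeted = [0.9, 0.1, 0.3]                   # actor targeted once, in shot 1
next_shot = {
    "person_a": [0.88, 0.12, 0.31],          # same actor, slightly different view
    "person_b": [0.05, 0.95, 0.20],          # a different person in the frame
}
print(match_actor(targeted, next_shot))  # prints "person_a"
```

This is why the actor only needs to be targeted once: subsequent shots are matched automatically against that one reference.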
Really the power is in these elements and the scenes. So I can export my mocap. I can export clean plate,
alpha masks, camera track, and character pass. And then I can export, you know, a 3D scene out of it.
So essentially what it takes, it takes that plate. We have about 25 models in there. And these models
are everything from, you know, facial performance tracking, body pose tracking, camera tracking,
lighting estimation, and things like that. So then it takes basically from a 2D plate and puts it in
3D. In this case, let’s say I’ll export a Blender scene. And then something like this will take about,
you know, 70 minutes to process. I have it from before so I can show you the results. You know,
and these shots are, if anything, considered really easy VFX shots. So you’re going to
get pretty decent results out of it, right? But for a trained eye, you’re going to be like, all right,
you know, obviously I can see a little leftover of the actor there. Not happy with the lighting here,
et cetera. So what I would do here is I would basically download my clean plate. So my compositor
can go in and clean up what didn’t clean up well, you know, around the actor. But most importantly,
I would really download these 3D scenes. So let’s say I download this 3D scene. So clean plate,
obviously pretty straightforward. You get, you know, image sequence of that shot. So obviously I can see
the actor remaining. So my compositor would clean that out. But when it comes to 3D scene,
essentially I’m getting each one of these shots in a separate 3D file. So if I open one,
what happens is it takes that performance and estimates the animation and the camera. So it
actually tracks the camera and it tracks the animation, right? And then I have an option to
download my pass, which is my texture for the character. And I can download the clean plate,
which would be my background. Then I can download textures of that character. Now I have that
character’s textures as well, but then I’m also going to add that background, because that clean plate,
let’s say I cleaned it and I’m happy with it, is essentially my background,
right? Because it’s a 2D background. So essentially that gives me that shot, right?
In the elements I need. So I have full animation data in it that I can control. I have obviously the camera,
I have lighting info. So if I want to adjust my lighting, right? So I would really continue my
animation the way I do. So in this case, let’s say, you know, I want to control to the smallest
details. I want the character to actually look up a little bit higher. I can control that. So the idea
always has been around, how do I not lose control as an artist, right? When I do things like that. So
that’s why the data you get out of it is really the main thing of what we did inside of Flow Studio
on that side. And then obviously one of the things that we always hear from artists is, you know, I
don’t need all the passes. Can you just give me a camera track and a clean plate? So we opened up these
Wonder Tools as well, which are essentially just those single models, because we have so many models running
in the background. So this runs quicker as well. And let me show you a couple of other things that I
think are interesting. You know, you probably saw the demo we launched for the animation use case.
The idea for it was that if I have multiple sequences in one space, can I calculate my camera setups?
So in this case, I have a two-shot that goes to a one-shot, then another shot, and then it goes to a
two-shot from a little behind them. What it does is actually calculate the footprint of those actors and the trajectory
of the cameras. And this is all one camera, but I just edited it, right? But what the animation use case does
is really place it in 3D space based on the cuts, right? So it tries to guess where those cameras
are. So it pretty much sets you up with a kind of virtual production inside of Maya or Blender just
based on that sequence. So the idea here was, you know, I also select my environment, not just the
characters, but more importantly, what we wanted to do here is, you know, what if I’m in my living room
and I want to frame my animation how I like it, but I also want to cut it how I like it, but I don’t want
to deal with each shot separately. I want to have that performance to be one continuous performance
with different camera setups throughout the shot, right? So there’s obviously a lot of
models working here together to try to calculate that. And one thing we saw with AI mocap, obviously,
you know, is that pose estimation is just one or two models out of the twenty-something we have.
But the problem when you’re doing markerless mocap is that you are basically only guessing the
position of joints and everything just based on an image, right? So once you lose an actor, or they’re
occluded by another actor or an object, it’s really hard not to have it break, right? So one thing we
recently released after, you know, we’ve been building this for almost about 10 months to a year
is, you know, for this issue: anything you film, as you know, is always going to have
occlusion, right? So it’s very hard to rely only on an image. So we built something called motion
prediction that essentially predicts the motion if you lose the character. So if it sees a few frames before
the character goes behind this tree, it will guess, okay, most likely they’re still walking, right? Or if it only sees half of
the subject, it’s still, you know, going to guess what it is doing: in this position, I see the upper body,
most likely it’s sitting, so I’m going to pose the lower body accordingly. Why is this important? Because,
you know, for live action, you kind of only care about what you see in the frame, right? Because that’s how
you framed it. But for animation, you really want a full 3D body pose, you know, so you can
control everything, and it’s going to affect your animation as well. So another thing it does:
if I go into a closeup and you only see the top of the body, it would otherwise float, right? The bottom part
would float, as you can see here. So even if it only sees the upper body like that in a closeup,
it’s still going to generate what the lower body is doing, right? And that’s important for any animator,
right? They don’t want to be doing so much cleanup when things break. So that’s a couple
of things, you know, we had added recently that really came as a natural flow of things that we were
looking to add for artists. We’ll be right back to the next wave. But first I want to tell you about
another podcast I know you’re going to love. It’s called Marketing Against the Grain, hosted by Kip
Bodner and Kieran Flanagan. It’s brought to you by the HubSpot Podcast Network, the audio destination
for business professionals. If you want to know what’s happening now in marketing, what’s coming,
and how you can lead the way, this is the podcast you want to check out. They recently did a great
episode where they show you how you can integrate AI into the workplace. Listen to Marketing Against
the Grain wherever you get your podcasts.
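The motion-prediction idea Nikola described before the break, guessing what a hidden character is doing from the frames before the occlusion, can be sketched in its simplest form as constant-velocity extrapolation of joint positions. This is a toy illustration under that assumption; real systems, Wonder’s included, use learned motion priors rather than straight-line extrapolation.

```python
# Toy sketch of motion prediction through an occlusion: when a tracked
# joint disappears behind an occluder, keep extrapolating its last
# observed per-frame velocity for the hidden frames.

def predict_through_occlusion(last_two_positions, n_hidden_frames):
    """Extrapolate (x, y, z) joint positions for occluded frames."""
    (x0, y0, z0), (x1, y1, z1) = last_two_positions
    vx, vy, vz = x1 - x0, y1 - y0, z1 - z0   # per-frame velocity estimate
    return [(x1 + vx * i, y1 + vy * i, z1 + vz * i)
            for i in range(1, n_hidden_frames + 1)]

# An actor walking along x at 1 unit per frame, then hidden for 3 frames:
visible = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(predict_through_occlusion(visible, 3))
# [(2.0, 0.0, 0.0), (3.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
```

The same idea covers the closeup case in the interview: when only the upper body is visible, the unseen joints are filled in from a prior over plausible full-body poses instead of being left to float.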
Very cool. I’m curious, like, how has the sort of reception been to this in like Hollywood? I know,
you know, there’s obviously when it comes to AI in general, there’s a lot of sort of fear around
job loss and things like that. How has the reception been?
Yeah, we were very cautious about it. I think if you notice in our demo, I always say we don’t
generate art. We’re accelerating, right? It’s still the artist that made that character,
still the artist that made the environment. We’re still picking up the performance from the actor,
right? So we’re very cautious about how we build it. And obviously, you know, we’re a bit different
also because, you know, everything we train on is synthetic data, you know, because we’re not
generating art. We didn’t really have to rely on scraping the internet or similar. So, you know,
the way we started, we also had some really big names on the board, you know, from the Russo Brothers to
Spielberg, etc. And Ty is an actor. So we come from this space, from the entertainment industry.
So as an artist, I saw the fine line, right? It’s a really tricky one. Sometimes you almost have to
resist building certain things just because of that, right? But, you know, to your question of
how Hollywood reacted: I think OpenAI’s recent release is a good example of
how these reactions can sometimes be very, you know, drastic. I saw it early on. You’ve probably seen it,
you know, 2023. A lot of panic around it and how things are trained. And then it calmed down a
little bit and it comes back up. Yep. Seems to be waves for sure.
There’s waves. Yeah, there’s waves. I would say, you know, we see a lot of innovation and a lot of
directors that really accept it because it gives them an opportunity to, you know, make things quicker and
iterate quicker. We see people that are scared of it, you know, sometimes mostly from a lack of understanding.
Again, you know, Flow Studio is a bit different. So we’re in this interesting intersection where,
you know, with a bunch of the studios we worked with, we had to go through a rigorous security
process and stuff. So the fact that we’re not generating art is always something that is a big
relief, right? But I’ve also seen studios, some accepting gen AI, some not, you know, some can talk
about it, some cannot. But for me, we really did build this for indie artists, first and foremost,
even though we have studios as customers, you know, I always say studios will be okay. They’ll figure
out their way, no matter what the change is. To me, what’s exciting about AI, and I think for a lot of
people is it is a very hard industry to break into. You know, I got a lot of luck on my side, took many
years to break into the industry, but I know a lot of artists that are way better than me that
didn’t have those opportunities, right? So I think that’s why people are excited: it’s really
going to open up opportunities for people to tell stories. And maybe I’m a little bit naive, but I
don’t think storytelling should be tied to any kind of socioeconomic status. Storytelling is
too important to require that you be a good salesperson, or know a producer, to be able to tell your story. Right?
So to me, it’s like, you know, it should be a clear path and easier to tell stories. So.
Yeah. Nikola, I spent a little bit of time in Hollywood. I was partnered with Barrie Osborne,
the producer of Lord of the Rings and The Matrix, and we tried to create a movie
studio together. And it’s the same thing you’re talking about: you know, it takes like 200
million dollars to do a film with big special effects. I was like, this is nuts. If you want
to like start a new studio, you’re talking about like raising hundreds of millions of dollars.
That was what I found exciting about Wonder Studio. Just imagine like in the future, small
independent creators, whatever’s in their head, they’ll be able to get it out of their head and on
the film, which just has not been possible before. Yeah. And Nathan, you know this well. I mean,
there’s so much sacrifice from a script. It’s very hard to write scripts. You know, I
used to write and I still write, and sometimes I say starting a company might be easier
than writing a script. But you know how much sacrifice there is going from a script, what’s on paper,
to what’s on screen. You know, as they say, you can never quite push that imagination onto
the screen. There are so many technical challenges. And, I don’t know, to me, it’s a little bit
ridiculous. You know, Ty and I always joke when we walk around. I think we were in San
Francisco and we were like, how much do you think that building costs? And we looked it up,
it’s like $40 million. We’re like, someone makes one movie that costs three of these buildings,
spends three years on it, and then has a weekend to potentially get the money back, because they have
to make double that amount. It’s such a broken system. I mean, think about it from the standpoint
of starting a company: if you have a startup that you built for five years
and put $200 million into, and you only have that one weekend to get your money back. And you have no
control of what’s going to happen that weekend. Nobody would fund it. Nobody would fund it. Right.
So I had a friend who said, the difference between the tech and film industries is
that, you know, in the tech space there’s obviously much more capital, and in film we do it out of love,
and occasionally there’s a hit here and there. Yes. It’s also, like, sexy to be involved in,
right? Like, some people invest because, if I get my money back, that’s cool,
but I get to be involved around Hollywood. Which is kind of how I got involved, through
friends in Silicon Valley, actually. Yeah. It’s, it’s interesting. I mean,
it’s such a, such a passion thing, right? Storytelling is very important. You, you know,
I was fortunate enough to work with a lot of people that are really, you know, not in it to become
famous or anything. They’re in it because they really believe their story can make a difference.
And, you know, that’s noble to me. So, you know, that’s one of the reasons I came into
it as well. Very cool. You mentioned that like some of your clients are big studios. Are there
any like films out in the wild or maybe even stuff in production right now where this technology is in
use? Yeah, we have a lot, but unfortunately we have hefty NDAs, as you can imagine. No AI involved in
any of the films, okay? Fortunately, we do have some public ones. I really like what Boxel Studio did:
they did a case study on Superman &amp; Lois, where they used it. So that’s one of the public
ones that you can see, for a TV show. And it’s very cool. I like those guys at Boxel Studio. They’re
really innovative. They’re throwing a lot of tools together, playing with them, combining them, and being
more efficient on turnaround. But obviously, besides film and TV, we’ve also seen it being used in gaming,
you know, also marketing, a lot of brands as well on that side. And then obviously a lot of content
creators, you know, kind of individual content creators. And what’s exciting for me is we saw early on
people that didn’t even know what rigging is, you know, getting into 3D and starting to learn,
because it seems easier, right? Because, to me, 3D was always daunting. I used to do compositing. And then when I
switched to 3D, I was like, this is so overwhelming. It’s so hard to start. It’s so complicated. CG is so
slow. So I think that’s the beauty, you know, if we can make it easier for people to get in the field
and not be so, you know, scared of even going that way. As you said, Nathan, it’s like, sometimes when you
look at the process, filmmaking process from the outside, you’re like, oh my god, this is so hard.
This is so scary, right? Yeah. I got a tour of Weta and also like got to go backstage when they were
working on Mulan for Disney and got to see the early special effects and things like that. I was very
intimidated. Like, I was like, there’s so much money involved. It’s so hard to get involved. I wish I
could just be like, just raise like 5 million to be able to create like something cool just out of my
head versus having to depend on some gigantic studio like Disney.
Yeah. No, I think a lot of people feel like that. And also, I think a lot of artists are introverted
as well. So it’s very hard for you to go and pitch a project or, you know, go in a room and try to ask
for money and things like this. So a lot of true artists I know are like brilliant artists, but
they’re just not good at that part of the game. And unfortunately today, you have to be good at both
parts of the game. So I do see a lot of benefits with this AI. You know,
obviously it’s not there yet, whatever tool you’re looking at, but it’s getting there. I’m
excited. And also it’s going to be much more global. You know, right
now it’s still very local. You have to be in a certain part of the world to be able to get funding
or greenlit on a project. I’m kind of curious what your thoughts are on like AI video. Cause I imagine
using something like Wonder, or what are you calling it now? Wonder Studio, or maybe it’s
Flow Studio. I imagine using something like that and getting the shot I want and then
going into some kind of AI video tool in the future and like spicing it up and changing it.
Yeah. Yeah. We’ve seen people do it the other way. Also we’ve seen people generate AI video
and then using that as an animation source, so they can run it, get mocap out of
it, and then get a 3D source out of it, right? So we’ve seen it both ways. Yeah. We’ve seen
it both ways. My problem still with AI video, I think it’s going to get there, but it’s really
that editability because it’s too general. I’m a big believer in separation of elements. Like I have
to control my character, but I have to control individual elements of my character, whether it’s
hair, whether it’s eyes, whatever it is, right? And then, um, I’m also a big believer that you can’t prompt
a performance. Performance is too subtle, especially reactions, right? If you look at your reaction now,
Nathan, I can’t describe what you’re doing down to the word, right? It’s very, very subtle, and every actor
is going to do it differently. So I do think this multimodality is really where we’re going,
you know. Even though we marketed this as kind of animation and live action, we really are building
foundational models, so we can add models on top of them that can help us get consistency in,
you know, spatial awareness. It’s a big issue, as you know, because right now with
AI video, you can get a couple of good shots. But if I go from a wide to a closeup, the character takes five
steps, and then I cut to another shot, will it really be consistent about how much space they covered,
right? This is the limitation of the training data really, because these, you know, AI video models are
trained on a 2D video that doesn’t have this 3D awareness, the world awareness. So we’re seeing a lot
of companies now understanding what we’ve been betting on from the beginning: let’s build this
synthetic world data, and so we can then have this consistency in 3D space, but then we can also
control it. But as I mentioned, I do think it’s going to take a little longer than people think.
So the pipelines will not change overnight, at least for professionals. You know, you still need
passes. You’re going to still get notes. You’re going to still have to control every single element
of your video. So I think it’ll get there. It’s just, you know, right now it probably is going to get
used a little bit quicker on, you know, social media, maybe advertising, et cetera. And then we’ll see
where we end up with the copyright thing, because a lot of people that want to make commercial things still
can’t touch it, because it’s still, you know, to be determined whether fair use is a real thing
or not a real thing. So we’ll see what happens.
Yeah. I still really feel like none of the AI video tools have nailed character consistency.
You know, I feel like a lot of them are starting to claim that they’ve got character consistency now,
but you watch them back and you’re like, eh, do they really though?
It’s a hard problem.
Yeah. I don’t really feel like they have. I do like that workflow though, of using something
like Flow Studio, generating sort of what you want to see, and then sort of doing
like a, you know, a video to video sort of AI transfer, that kind of stuff I think is going
to get really, really cool. Especially, you know, I know early on when I first heard about
Flow Studio, when it was Wonder Studio, I remember the Corridor Crew guys, they did some content around
it as well. And those guys also did this big video where they were trying to use AI to make an anime
film. And yeah, I mean, how much easier has that gotten now, right? You can actually use stuff like
Wonder Studio, Flow Studio to actually go and, you know, create all those scenes now,
run them through a video to video workflow and get a very sort of consistent look. So I think
these workflows are really, really improving right now.
Yeah. A hundred percent. That’s one thing we also want to do: have
at least that look, that sort of post-vis, and then really push it forward more. I think the,
you know, rendering approach will change. We’re going to see a lot of change in it. I love
what companies like Pika and Runway and Luma and those guys are doing and it’s really cool and it’s
progress. But I think what’s really exciting for me is that, you know, before this AI boom,
you had VFX studios that had engineers, maybe 30 or 50 at a big studio, right? And that’s who was building
these tools. A bunch of the major tools, like Nuke, came out of Digital Domain.
It was engineers and TDs inside these VFX houses who had to solve a problem, and then they built
software that ended up being global software, right? But now you have millions of people working
on these tools. So the pace of innovation we’re seeing is such a step up. So that’s exciting.
I think we’re going to see things move much, much quicker. And I’m excited. Also, content creation is
such a big thing now. Yeah. You know, people talk about it, but it’s also a little bit ironic.
The idea behind AI and robotics has always been, you know, how do we help humanity stop doing monotonous
things and focus on creativity, which is really what we’re, you know, meant to do, right? And then the first
industry in danger is the creative industry. So it’s like, okay, an irony, right? So yeah. Yeah. I’ve
seen memes going around that are like, yeah, I want AI to, you know, do my dishes and fold laundry
so I can be creative. I don’t want AI to be creative. So I have to go do my dishes and fold laundry.
I know. I know. But I think also it’s a bit like, you know, Nathan, as you said,
if we can get quicker to a shot, we can iterate quicker. And really to me, that creative aspect,
you know, obviously you have to build it ethically. And I think we need to build it inside of our
industry as well. I think the best tool will be built by storytellers. And it’s a good question.
I’m interested in what you guys think. So do you believe that we’re going to have a future? Because
a lot of these AI video tools are generating humans, right? Do we believe that five years from now,
all the celebrities and actors in the box office top five are all synthetic, right? Are we going to be
okay with that? I’m not a believer in that. I do think we love, take certain TV shows like
Severance or things like this. We love talking about it because we also, we love those artists
behind it. Right. We love their performances. It’s so, I don’t know. I don’t know. I have this
conversation a lot lately. So interested to see what you guys think. I mean, I’ve got my take on it.
I don’t know if it’s the same as Nathan’s take or not, but I still believe that, you know,
humans appreciate the skills of other humans. You know, people go to live theater. I still go to the
theater with my wife and we watch like, you know, Hamilton and we just watched a Harry Potter play.
We’re going to see Book of Mormon in a couple of weeks. Like we still go to live theater,
even though movies exist, even though I could watch it on my home TV. Like I still like that sort of
getting to see humans be talented in front of me. And I don’t think AI is going to take that away.
Personally. I think people are still going to really, really appreciate it. I can generate
any song with AI that I can imagine now with tools like Suno and Udio and things like that.
But I still love to go and watch live acts. I still want to hear music that I know was created by a
human. And I mean, I do think AI is going to get better and better, but I think just the idea of
knowing that a human created that is still an important factor to the listener and to the viewer.
And I don’t really see that going away personally.
Yeah, my take is pretty similar, right? I do think we’ll have a new genre in the future where
it is like entirely AI video generated, you know, entirely AI generated. It’ll be like
just crazy stuff that humans can barely even imagine, especially like in horror and sci-fi
and anime and some of these categories. I think AI video will be able to do some stuff that humans
just haven’t done yet. Like I’ve seen some things from AI video. It’s like that’s more terrifying than
anything I’ve ever seen in any horror film ever. Right. Right. And so I think in some of those areas,
it’ll be like new genres or it’ll, you know, I think overall, I think you’re right. I mean,
like the human performance is so important, but I think you’ll probably start to kind of like mix it
too. You’ll have human performances and then like probably stuff with like, you know, Flow Studio
and things like that. You’ll integrate that human performance into having AI enhance it in other
ways, different special effects or changing the environment or things like that.
Yeah. Yeah. I think so too. I think as long as to me, it’s like, okay, if you’re going to have a
character that doesn’t exist in this world, it makes sense, right? Whether you’re going to do traditional
CG or you’re going to kind of generate it. But if you want to just generate a human that doesn’t exist,
you know, that is where I’m a little bit like, okay, maybe we’re going to have a few celebrities,
like Lil Miquela, if you remember, you know, it’s like kind of a CG character, but I just don’t see it.
And I also hope we don’t have a future where, because I know these technologies are starting
like this, but really, you know, they’re kind of the same as what we’re doing with, you know, capturing facial
performance. Like you want to transfer that performance and drive a character with a human
performance. That’s why I think multimodality is really the way to go, but I hope
we’re not in a future where, you know, certain actors are kind of licensing their likenesses,
sitting at home and doing five productions at the same time. You know, that also to me,
it’s like a bit of a scary future that might, that might happen. You know, some people might do that.
It could. I mean, so like the one possible positive there, if that technology does get that
good to like, you know, recreate like human performance, you know, you imagine someone like
a Quentin Tarantino where they’re like a control freak. Well, now they can really be a control
freak and like actually control, you know, really control their actors. Like they want to, you know,
like, like, hey, you know, virtual character, do it exactly this way. Oh, you didn’t do it right.
I want to tweak you a little bit. You know, on that side, I kind of find that exciting, but, uh,
I’m not sure if, you know, if we’ll get there. Yeah. But I also feel like a sort of future like
that could lead to just like too much choice that the cream kind of has to rise to the top. I mean,
if you look at like streaming services right now, you’ve got Netflix, Hulu, Disney Plus,
HBO Max, like the list goes on and on and on. At any given moment, you have 2 million options of
something that you can watch right now. Yet we still all talk about Severance, right? We all still
talk about Breaking Bad and The Office and some of the cream that rose to the top. And so I think if
you kind of hit that future where, you know, The Rock is in 17 releases all in the same month,
there’s going to be a lot of slop mixed in there and the cream will have to sort of rise to the top.
Yeah. I don’t know. I don’t see that being a future that like the majority of people want.
Yeah. No, I agree too. I mean, similar when digital cameras came out and DSLRs came out,
right? Everybody’s like, everybody’s going to be a filmmaker, but I do think the artistic eye,
and I see it with, with, uh, you know, artists at Wonder, you know, we, you know, we have internal
artists and we worked on some projects, some big films really. And you can see how it takes so much
time to train an artistic eye. It doesn’t matter that your tools sped you up from a week to a day.
You still need to recognize when a shot is good, when something’s good. And let’s say recognizing
when a shot is good is going to be easier. But whether a story is good, whether your beats are good,
whether your performances are good, right? That’s really hard. That’s what I learned. Like, you know,
Nathan, you can probably back me up on this. Like, if you look at your work 10 years ago, you can be like,
I can’t believe I thought this was good. I can’t believe I actually showed this to someone.
It’s embarrassing. Right. So I think that’s what really comes down to, you know, it doesn’t matter
what tool it is. Like, can you recognize when there’s a good story that you made? Yeah.
Does this shot and sequence work together? Right. Does this, you know, structure of your story work
together? So that’s why I think I agree with you, Matt. I think it’s going to be, you know, kind of like
what happened with music as well. You have, everybody can release their own song on Spotify and self
release, but one is you’re going to get, as you say, like the cream on top, but I think also
marketing is going to be such a hard thing. Like, who can break through the noise? Right.
Right. Because it’s really hard to break through the noise on that side. So I hope it doesn’t happen
like music though. So I feel like music’s been in a horrible spot for a long time. So when you say that,
that’s kind of terrifying to me. Like it’s going to be like music? It’s like, oh, it’s sucked
for like 30 years. Okay.
It’s funny. But I, you know, I do look at like, you know, like Lord of the Rings is a great example,
right? You’ve got Gollum in Lord of the Rings, and we know that a human played all of that behind the
scenes, right? You can actually see the emotions in Gollum. You can see that acting come through,
even though it’s sort of a CG character overlaid on top of that. Right. And something like what Flow
Studio does now is it kind of democratizes being able to do that for more people and more companies,
like smaller budgets can do that same kind of thing now. Yeah. And that to me is exciting.
Regardless of Wonder, I think in AI in general, I think if we can get more people... We’ve had,
when we launched, especially a lot of smaller productions tell us, hey, we had this project
forever. We’re a small team. We could have never done it. Thank you so much. Now we can finally do it.
And that’s so rewarding, you know, as a founder and a storyteller, it’s very rewarding to hear that
because you’re like, okay, I made something that, you know, we made something that people
actually can, you know, do something they couldn’t before because of the financial constraints.
And yeah, it’s nice to hear. Yeah, absolutely. I want to go back to something you said a little
bit earlier in the conversation. You mentioned that you’ve actually been seeing it used for,
you know, marketing and by content creators. And, you know, our audience on this show specifically
is a lot of those like solopreneurs, small businesses, content creators that makes up a lot of the
audience. So I’m curious if maybe you can share some of the like ways you’ve seen it used in those
worlds. Yeah. I think a lot of it’s been used by people. Well, let’s maybe separate YouTube and some
other social media platforms. We’ve seen a lot of use by kind of VFX creators. Like you have a lot of
these at home VFX creators that do incredible things. And I love seeing that because again, you know,
going back to Video Copilot, Andrew Kramer, that was my intro to visual effects. I saw someone,
you know, who was producing and making things that inspired you to follow. So we see it a lot. And,
you know, kind of this short form content where people are creating content with CG characters or
they’re just doing it to drive, you know, their camera or something they need out of the two elements.
And then on the YouTube side, actually, we’ve also seen it a lot with, you know, obviously 3D content creators.
There’s a, you know, subcategory on YouTube of 3D content creators, pretty large. We’ve also seen a lot
of kids’ animated shows, which is cool to see. You know, kind of some of these mainstream
kid animations. I think the reason why it lends itself naturally to YouTube as well is because,
you know, you guys know this well, my release time in a film is two to three years. My release time in a
TV show is a year or two. My release time on YouTube is a week. Right. I can’t really do CG much when I
only have a week. Right. And you mentioned Corridor Crew. That’s what they do. You know,
two, three weeks, they release a new video and it’s so impressive how quickly they do it. I mean,
I remember I was in some indie films and you can’t afford to do any roto because your budget is only
10 million. And you’re like, and the producer is like, no, we just can’t. We can’t even do roto,
let alone green screen. And then you’re so limited. You’re like, wow, like 10 million is little.
When you tell someone who’s not in the film space that 10 million is a really small budget,
they’re like, 10 million dollars? Like, are we talking dollars? Right. So that’s that concept.
And I think this new generation of, you know, YouTubers, they’re just, they’re just so crafty
and they move much quicker. Right. And so they find the tools, they combine a couple of tools
and they’re releasing content. So I think that’s what’s going to happen. I think we’re going to see
a bit of a shift of like, you had big studios. They’re the only ones that can do big visual films.
And then indie films cannot, so you got more grounded, more like live action. And then you’ll have,
you know, your social media content creators. I think what’s going to happen is like your indie
filmmakers are going to be able to produce now major visual stories. But to me, that’s exciting
because now you’re going to have grounded stories, like more character-driven stories that can take
higher risks, because studios cannot really take risks. If you make something for 200 million,
you can’t really make an art piece. Yeah. You can finally explore new ideas.
Like films have been repeating the same things with barely any new ideas for a long time. So yeah.
You said that Nathan, you said that.
It’s the same thing in the game industry, same thing in the game industry, you know,
so like in both of those, I’m excited for AI to like kind of change that and bring new ideas, hopefully.
So I do think, like, where indie filmmakers are, we’re going to see, you know, kind of these
social media content creators really push it more. So they’re going to be creating. And then I think
studios will just push it higher. So that’s really exciting for me. It’s kind of like
we’re shifting where everything takes a little step forward. Right. And, you know, I don’t know,
I’ve seen some in the AI video space as well. I’ve seen some things that are so creative. You’re like,
wow, because if I had to do this traditionally, I would have spent so much time. And then during that
process, you still have to keep that vision and then creation of some element will take you forever.
You know, animating with hand, everything will take you forever. So you might not actually
keep that original vision because of the limitations will push you back and you might change. You’ll
sacrifice, you do something. So, you know, kind of that like artistic, I guess, instinct works fast.
You know, like when you’re ideating something, you’re like, oh, I got it, it’s this. Right. But sometimes
that comes and then three months later, you’re still trying to work it out. And now you question it a
million times. You’re like, is that really what I wanted? Maybe this is better. And then this I can’t do
because I don’t have money, et cetera. So.
you know, e-commerce companies or, you know, any sort of like marketing use cases where maybe they’ve,
you know, had an alien selling their product or something like that. I’m just curious about some
of the like really fun sort of marketing. It’s out of this world. Yeah. Yeah, exactly.
We have, we have seen a bunch of it, and we can recognize our characters. And our characters,
you know, we opened them up free, license-wise. People can download them. So we’ve seen it. And sometimes
it’s going to pop up on my YouTube as an ad and I’m like, oh, that’s our robot. I’ve also seen
ads on my Instagram where it’s like a, they’re advertising a tennis game. And like, you know,
those ads on Instagram where you like see it and it looks so good. Then you download it. It’s nothing
like what they advertise. It’s like not even close. Right. I’ve seen one. It’s like a tennis game.
And I’m like, I’m pretty sure this is Flow Studio. I’m pretty sure this is ours. Cause I can recognize the
animation. I can see where like the clean plate didn’t go fully. And I can also, I’m a tennis player.
I can recognize this is real footage. This is based on real movements. This is not someone animating it.
So it’s funny. Like you see it in those aspects. And then we’ve had some major brands also that we
worked with closely that did it with certain characters as well. So it’s been fun. I think
advertising makes sense because you have a lot of kind of spokesperson characters, like, you know, like Geico,
but not every, you know, brand can afford to spend that much money, because Geico
traditionally is known for, you know, high production value, and I love those ads. Right. So,
but it’s not cheap to make. So, yeah, it’s been, uh, interesting to see like how
some of the ads, but some of the ads are just, you know, also kind of terrible.
Yeah. I actually think that’s a really smart approach. I think like a company should start
making their own little like mascot, you know, like the Geico Gecko mascot or Tony
the Tiger kind of mascot, and start using something like Flow Studio to have that be their brand
representative. Yeah.
It’s safer than having a person, right? You know, a person could go off and say some crazy
things on social media or whatever. Yeah. It’s not us. It’s not us. It’s the lizard.
But it’s funny how we all remember these characters, right? They’re not real, but we all,
as you mentioned, you know, the, the Tiger, the Gecko, there’s so many of them that we see.
What is the fox? The Carfax one, right? It’s a, it’s a fox.
Yeah. You got Jack in the Box, the Jack character.
Yeah. Yeah. So it’s interesting how that stays with us. So yeah.
Yeah. Yeah. So this is sort of the last little rabbit hole I want to go down with you. And I
don’t know how deep you want to go on it, but I’m curious about the relationship with Autodesk. I know
you guys were recently acquired by Autodesk. How did that whole thing happen? Was there a reason you
decided that they’re the people we want to work with?
Yeah. Always happy to go down the rabbit hole with you, man.
How big was the check?
Well, I’d say, you know, we started the Autodesk partnership maybe a year before acquisition.
And we did it because we knew a lot of our users use Maya, obviously, Maya being, you know,
a leading tool in animation and character creation for so long. So, you know, we spent a year working
with them, and with Diana, who runs the media and entertainment part of it, we kind of really aligned on the vision.
And then one thing that was really important for me is, you know, you always have this perception
as a startup founder. Like once you go in a corporation, they’re going to tell you what to do.
It’s going to be like, you know, they’re going to turn you into an add-on or whatever. So from the get
go, Diana was very honest, like, hey, the roadmap, your product is yours. I believe in your vision.
You’re still running it as your startup. And that really has been true. And every decision has been,
you know, our team’s and mine on what we’re building, where the roadmap is going, how the product is going
to look. I mean, I just showed you, we use Blender as one of the outputs, you know, we actually doubled
down on that. You know, we were like, you know, let’s have an open ecosystem. So I’ve been very
fortunate on that side, I got to say, because I had a lot of friends as founders that really like
went into a bigger company and, you know, kind of lost that control completely. So, you know, from the get
go, Autodesk came to us and said, hey, you guys have been building this for a while. We want to learn
from you guys. And, you know, we don’t want to be steering you in certain directions. We really want
you to be independent and we like the product and you should build the platform and the vision you have.
So we always went with like, here’s my five year vision. And, you know, so far that’s been really
supported on their side. And to us, it made sense because we are big believers in 3D. You know, like you
have your AI video, but I do think the 2D and 3D approaches need one another. You know, that kind of
consistency, whether it’s latent consistency or you’re talking spatial consistency, it’s a big
problem to solve. And I think you need to be in 3D space to
be able to control it. What I’ve seen with a lot of startups is this: they come from the research space,
they create something that generates, and then you’re like, okay, how do I control animation?
Oh, what’s a rig? How do I control a body, right? Oh, how do I control a camera? So they’re kind of
learning the film terms as they go, right? They’re research first, and then the film terms.
We went kind of bottoms up. You know, I’m a big believer in like, okay, I have to be able to control
a camera to the inch. I have to be able to control a performance. And then I have to go back and forth
a lot. So for us, that made sense. And also, as I mentioned earlier, I don’t think the pipelines will
change. I don’t think it’s going to be one AI tool that’s going to replace it all. I think it’s going to be
a combo, because they were built for a reason by very smart people over the past 30 years,
right? In the industry, around how the creative process works. So I’m a big believer it’s going to be a
mix. And also we never wanted to build, you know, our product to be like, yeah, I’m going to disrupt
completely. So what we built, let’s fit it in until some of this research gets better, so we
can take on more of the pipeline, with control and editability being the main aspect of it. So
that’s where our vision is really aligned on that side.
Nikola, are you still going to make a studio one day or like?
We’re still going to make movies. Yeah. Ty and I are still writing. We’re still looking at projects,
you know, looking for projects to produce and also write and direct. So I’m still doing that. I’m still
writing. I don’t have as much time, obviously, you know, running a company and writing at the same time,
but you know, it’s my passion. Being a part of storytelling one way or another,
that’s something I’ll always do. Cool. Awesome. Well, so is wonderdynamics.com,
is that the best place for people to go check it out and use it themselves?
Yes, wonderdynamics.com. Cool. And if anybody wants to follow you personally,
do you have any sort of social platforms that you hang out on? Anything like that?
Oh man, I’m not big on that. Yeah. Yeah. I’m more of someone who like goes,
you know, on Twitter and Instagram to follow things than to post.
Gotcha. I’m not the best at it. I’d say follow the Wonder Dynamics socials. Like there you go.
Follow the Wonder Dynamics socials. Yeah. Awesome. You’re not going to learn anything too smart from
my socials. Well, amazing. This has been absolutely fascinating. Thank you so much for demoing everything
and going down these rabbit holes with us. We really, really appreciate it. And thanks for your time on the
episode today. Yeah. Thanks so much for having me guys. I’m a big fan of what you guys do. And I think
it’s important that you’re, you know, kind of educating people in this moving environment
we’re in. Appreciate it. Thank you. Yeah. Thanks guys. Thank you.
Thank you so much for tuning into this episode. If you haven’t already, make sure you go subscribe
on Apple Podcasts or YouTube or wherever you like to listen to podcasts. And also one last thing,
this podcast is up for a Webby Award in the business category. So if you can do us a huge favor and go cast
a vote for this podcast, we might actually win an award, and we would really appreciate it. So thanks again for
tuning in and hopefully we’ll see you in the next one.
Episode 54: Ever wondered if AI tools could be as good as they claim? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) delve into this question with Nikola Todorovic (https://www.linkedin.com/in/nikola-todorovic3/), the CEO of Wonder Dynamics.
In this episode, the hosts discuss with Nikola how Wonder Dynamics’ Flow Studio allows anyone to reskin videos with AI-generated characters, reminiscent of the stunning special effects seen in major films like Lord of the Rings. Nikola explains the evolution of Wonder Dynamics, the skepticism they faced, and the blend of creativity and technology that drives their success. Discover how this groundbreaking tool is democratizing filmmaking for indie creators, and explore Nikola’s vision for the future of AI in Hollywood and beyond.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) From Bosnia to VFX Artist
- (04:38) AI Mocap’s Potential Unveiled
- (06:24) Bridging 3D Skills for All
- (12:33) Motion Prediction for Markerless Mocap
- (13:34) Animation Control Enhancements Explained
- (17:44) Hollywood’s Unsustainable Financial Model
- (22:45) 3D Video Consistency Challenges
- (24:14) AI Workflow Innovations in Studio Production
- (29:19) Future of Digital Characters
- (30:41) Content Overload: Cream Rises
- (34:22) VFX and 3D in Social Media
- (38:24) Misleading Ads Featuring AI Characters
- (42:32) Startups’ Film Knowledge Evolution
- (43:08) Creative Collaboration and Controlled Evolution
—
Mentions:
- Nikola Todorovic: https://www.instagram.com/nikola_todorovic3/
- Wonder Dynamics: https://wonderdynamics.com/
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
Vote for us! https://vote.webbyawards.com/PublicVoting#/2025/podcasts/shows/business
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano