Everything You Need To Know About A.I. Avatars in 2025

AI transcript
0:00:06 Hey, welcome to the Next Wave Podcast. I’m Matt Wolfe. And today we’re talking with the
0:00:12 founder of Mindstream, a daily AI newsletter. And well, talking to him, I learned that he
0:00:19 hates making video. So he became an expert on all of the various AI avatar tools to help
0:00:24 him create videos. So in this episode, we’re going to dive down the rabbit hole of AI avatars,
0:00:28 how to use them, how to create them, how to make them the most effective you possibly can.
0:00:32 It’s an amazing episode. So let’s go ahead and dive in with Adam Biddlecombe.
0:00:38 Hey, we’ll be back to the pod in just a minute. But first, I wanted to tell you about something
0:00:43 very exciting happening at HubSpot. It’s no secret in business that the faster you can pivot,
0:00:47 the more successful you’ll be. And with how fast AI is changing everything we do,
0:00:53 you need tools that actually deliver for you in record time. Enter HubSpot’s spring spotlight,
0:00:57 where we just dropped hundreds of updates that are completely changing the game.
0:01:02 We’re talking Breeze agents that use AI to do in minutes what used to take days,
0:01:08 workspaces that bring everything you need into one view, and marketing hub features that use AI to
0:01:15 find your perfect audience. What used to take weeks now happens in seconds. And that changes everything.
0:01:20 This isn’t just about moving fast. It’s about moving fast in the right direction. Visit
0:01:27 hubspot.com forward slash spotlight and transform how your business grows starting today.
0:01:31 Thanks for joining me today, Adam. I’m excited to dive in. How are you doing?
0:01:36 Yeah, I’m good. Thanks so much for having me on. I’ve been looking forward to this for a while.
0:01:41 And yeah, I’m really interested in these AI avatars. I’m someone who creates like a lot of content for
0:01:44 social. I’ve been building my following specifically on LinkedIn for a couple of years.
0:01:49 And dude, I just hate doing video. So anything that can like stop me setting up,
0:01:54 sitting down in front of a camera, I’m all in. Yeah, yeah. I mean, I really love the concept of
0:01:59 sort of making an AI avatar who can do, you know, maybe some of the short form video for you. But I’ve
0:02:05 been really, really sort of scared to do it myself because of how the audience might react, which I’m
0:02:10 sure we’ll get into some of that conversation a little bit deeper in. But maybe let’s start with
0:02:16 just sort of the landscape, like what tools are available out there to build this kind of stuff.
0:02:21 And maybe we’ll get into some of the pros and cons of each. Yeah, 100%. So there’s really two big
0:02:26 players, which is HeyGen and Synthesia. I kind of think of these as maybe like the ChatGPT and Claude.
0:02:31 They’re the ones that have a lot of funding, a lot of support behind them, a lot of usage. And they’re
0:02:35 kind of like, you know, generalist tools as such, like a lot of people are using them for all of
0:02:40 the different use cases. And I’ve spent most of my time playing around with HeyGen. At HubSpot,
0:02:45 we have some support from HeyGen. So I can get like that little bit of extra love that isn’t maybe
0:02:49 available to everyone else. But then there are some other tools. I’ve been playing around with Argil
0:02:54 recently, which is specifically made more for people creating this short form content. So they’ve got a
0:02:59 bit more of like editing built in. And it’s just like very, very quick and easy to use. Whereas HeyGen and
0:03:01 Synthesia, there might be a little bit of a learning curve.
0:03:05 Gotcha. Like HeyGen, when they first came out, they actually were called something else. And then
0:03:12 they rebranded. But Synthesia was actually the first one that I came across early on and was like
0:03:17 super impressed by it. And the way they were sort of angling these things, like originally when they
0:03:23 first launched was this like marketing tool where you can make customized videos to your audience.
0:03:30 So let’s say somebody joins your email newsletter, you can actually email them a personalized video
0:03:33 that would be talking to you and would say like, Hey Adam, thank you so much for joining the Mindstream
0:03:38 newsletter. I really appreciate you joining. Here’s some things you can expect from us. And it would
0:03:43 like actually personalize that video. That was the use case that they were pitching, but the use case that
0:03:51 it’s sort of evolved into has been more of this short form content, like AI avatar thing, which has been
0:03:56 really, really interesting to watch. You know, we were talking before we hit record about Rowan Cheung,
0:04:01 who’s been doing a lot of this kind of stuff. And he kind of modeled that from Varun Mayya.
0:04:06 And then we started to see a whole bunch of other channels pop up that are doing this kind of thing.
0:04:12 And it sort of blows my mind, like how well they do, right? Like I’m so impressed that they’re not
0:04:17 getting pushback or people going, Oh, this is gross. It’s an AI character, you know?
0:04:22 Yeah. Rowan Cheung’s Instagram is insane. And he posted a really great case study on it that
0:04:28 really kind of inspired me. He spoke about how the account had grown to 50,000 followers
0:04:32 and 7 million views at the time of posting. I think he’s at more than double that amount of followers now.
0:04:37 And it’s all exclusively these AI avatar videos. And generally what you find with these pieces of content,
0:04:43 if you haven’t seen them, is you’ll get kind of like 20% of the video will be the avatar speaking
0:04:48 to camera. And then 80% will be B-roll. Right. And I think part of the reason with that is if you’re
0:04:53 seeing the avatar in like quick cuts, it isn’t really on screen enough to kind of maybe put you off or make
0:04:59 you realize that it is an AI avatar because the fidelity is getting pretty good, but it’s not yet
0:05:06 kind of human. It’s maybe like 85%. Yeah. Um, so yeah, a lot of B-roll and a lot of cuts to kind of
0:05:10 get it to that level where the video is really captivating. Yeah. Yeah. When I first saw the
0:05:16 Varun Mayya doing it, the first few videos I saw, I did not even realize they were AI. He actually had
0:05:21 to point out like, that’s actually not me speaking. That’s actually AI. I believe he used
0:05:25 ElevenLabs to do the voice, to make it sound like him. And it was HeyGen to do the actual video.
0:05:29 And then Reid Hoffman, I don’t know if you’re familiar with, uh, you know, he was the founder
0:05:35 of LinkedIn. He actually made like an AI virtual Reid and he does like interviews with himself.
0:05:40 And originally he was using some sort of like custom model, but it got to a point where HeyGen got so
0:05:47 good that now the whole virtual Reid is also just HeyGen. It’s pretty crazy. That video was insane.
0:05:51 It was a full kind of like studio setup with him sat down, speaking to himself. When that came out,
0:05:55 that really blew my mind. Yeah. And you see these kinds of use cases out there, like with these
0:06:00 sort of leaders in the space. And like, you do know that they’re working very closely with the
0:06:04 technology leaders to get it to that level. Yeah. Yeah. They may even have access to like
0:06:07 a tier that might not be totally public yet. Some of these guys.
0:06:12 Yeah, I think so. What I’m really interested in. And I think a lot of people in this space is like,
0:06:18 what can we get easily? Like that’s where it’s going to become adopted by loads and loads and loads
0:06:23 of people. So like, where is the technology at with like a very, very basic input. Right.
0:06:26 And I’ve got some examples I can show you. Yeah. Yeah. I’d love that.
0:06:32 I’m going to show you like 10, 20 seconds of three models. So I’m going to start off with HeyGen.
0:06:37 And they’re all a little bit different. But what we have with HeyGen is the input recording that I’ve
0:06:41 used to train the avatar is the same across all of these three. So I recorded two and a half minutes.
0:06:45 The script was actually provided by Synthesia. And you just kind of like speak to camera and read
0:06:49 the script. And then I’ve trained the three models. This here is the HeyGen test.
0:06:56 Do you think faking your Instagram birthday still works? Not anymore. Instagram has leveled up its AI
0:07:03 to spot underage users. Even if they lie about their age, it can read between the lines from a sweet 16
0:07:07 caption to user reports. Yeah, that’s really good. That was generated with HeyGen.
0:07:12 Yeah. So the video is in HeyGen. What I’ve done is I’ve actually uploaded my own audio,
0:07:18 like true audio recorded into this microphone. So this, in my opinion, is one of the ways to get
0:07:24 like a real jump in the output. HeyGen makes this really easy to do. I’ve tested the HeyGen
0:07:30 audio, which I have elsewhere. But natively, the audio options from these tools, I don’t think are
0:07:34 great. You mentioned ElevenLabs earlier, they do kind of integrate with ElevenLabs. So you can go and train
0:07:40 a voice on ElevenLabs separately and like connect that in. But if you do have the time to record like a 60
0:07:45 second audio input, that’s probably where I’d go. And just for general context, we’re going to start
0:07:49 posting these clips like every day across the Mindstream Instagram and LinkedIn actually starting
0:07:56 today. So this is what we’re going to be doing currently. So jumping over to Synthesia. Synthesia
0:08:02 is the same input, but in this case, we’re using their audio. Think faking your Insta birthday still
0:08:09 works? Not anymore. Instagram has leveled up its AI to spot underage users, even if they lie about their
0:08:15 age. It can read between the lines from a sweet 16 caption to user reports. And if it thinks
0:08:20 you’re under 16, you’re automatically placed in a teen account. So what I think is really interesting
0:08:26 is I actually think the lip syncing and the fidelity is better here, but maybe that’s because you’re using
0:08:28 their audio. Right.
0:08:33 Right. Whereas if you’re kind of uploading the audio separately, the model is kind of struggling to
0:08:38 maybe pick up your audio. Yeah. So you almost have this like trade-off of like, what’s more important,
0:08:43 the audio or the kind of lip syncing fidelity. Yeah. And it’s so funny. I remember when I was
0:08:48 listening to you speaking about ElevenLabs, you said that when you listen to your own voice, it sounds
0:08:53 rubbish. But when anyone else listens to it, they tell you it’s good. And I kind of think that it might
0:09:00 be the same with me critiquing the Adam AI avatar. And I probably need to give these to somebody else
0:09:04 to tell me which really is the better option. Yeah. Yeah. It’s a weird phenomenon that happens.
0:09:07 Yeah. With the ElevenLabs one, I’m like, this doesn’t sound like me at all. And then all of the comments
0:09:12 are like, it sounds exactly like you. And when I watched this, I was really impressed with the voice that
0:09:16 came out of it. Yeah. I feel like maybe with HeyGen, there was a little bit more motion in your
0:09:20 head where this one, it felt like you looked a little stiffer, you know, but the voice sounded
0:09:24 great to me. Yeah. And the other thing that I’ve learned kind of when playing around with these
0:09:30 models is the audio input, like when you’re using your own voice really dictates how the video output
0:09:36 comes out. So if you record like the audio input and be like really dynamic and energized, then the kind
0:09:39 of avatar will move around a little bit more. And if you speak a little bit slower and monotone,
0:09:44 it doesn’t give that level of dynamism. Right. Right. The third one I wanted to show you is
0:09:50 Argil. And what I really liked about Argil is I trained the model with the same input and then to
0:09:54 create the video, I put in the script. And then as you’re going through the creation process, it just
0:10:00 added subtitles for me. And it said to me, do you want to add B-roll? And I was like, sure. Okay. So
0:10:06 this output, I have done no editing. I have not like checked it or anything. I’ve just accepted the
0:10:11 subtitles, accepted the B-roll. And when you talk about speed to execution, this is really exciting.
0:10:12 Cool. Let’s check it out.
0:10:19 Think faking your Insta birthday still works? Not anymore. Instagram has leveled up its AI to spot
0:10:24 underage users. Even if they lie about their age, it can read between the lines from a sweet 16 caption
0:10:31 I think this is awesome. I think to get that level of output from just putting a script in, like it’s
0:10:36 reading the script, it’s finding B-roll that’s relevant. And like the B-roll is very relevant.
0:10:41 It’s added that music in. Obviously, this is all adjustable. But to go from like idea to posting
0:10:47 something in this format with no editor researching your own B-roll or whatever, I think this is a
0:10:50 pretty cool kind of like one-stop solution for this specific use case.
0:10:57 With Argil, is it its own custom model or is it using like a HeyGen or Synthesia on the back end?
0:10:57 Do you know?
0:11:02 I don’t actually know. I should look that up. But yeah, it’s an interesting thing you raised,
0:11:06 because I think at the start of this, I mentioned how like, you know, I think HeyGen and Synthesia are
0:11:12 like the comparables to ChatGPT, etc. I do think that like a lot of these kind of wrapper companies
0:11:17 are going to pop up with like specific use cases. I’ve got a friend who’s running one specifically for
0:11:22 coaching. So they’ve got like the avatar thing built in, but then they’ve also got in the software,
0:11:26 like all of the marketing tools that you need and like booking appointments and all of these things
0:11:31 specifically so you can create an AI coach of yourself. So yeah, I do think that’s interesting.
0:11:32 Go on.
0:11:36 Now, I’m curious, when you showed me the first one, the HeyGen one, you said that you actually
0:11:42 recorded your own voice into it. Is there a benefit to doing that versus, you know, just flipping on the
0:11:47 camera and, you know, recording your voice to the camera? Because, you know, at the end of the day,
0:11:51 it’s probably the same amount of time that you put into like actually recording your voice
0:11:55 to feed it to HeyGen versus just flipping on the camera and recording into a camera.
0:12:02 I think for myself, if I want to record a video, even if I use something like a teleprompter,
0:12:06 I personally just find it quite hard. Like first, I’ve got to like turn all my lights on and everything,
0:12:11 get my camera set up, and then I’ve got to deliver and read to camera and be dynamic and all of these
0:12:17 things. And for me to record like a 60-second clip like that might take 30, 40 minutes. But if I have a 60-second
0:12:22 script to read, I can read that in one take pretty dynamically. Maybe two. But I feel quite
0:12:27 comfortable just recording audio compared to video. That’s just me. Obviously, I’m speaking to
0:12:31 you, Matt, who’s been doing YouTube forever. So for you, it probably seems like second nature to just
0:12:36 turn on the camera and click record. But I think where this is going to be particularly interesting
0:12:39 is for people like myself who are not so used to recording videos.
0:12:44 You know, there’s a lot of these things that have gone like really viral. Like we mentioned
0:12:47 Varun Mayya, we mentioned Rowan Cheung. There was another one that you mentioned earlier.
0:12:48 Ruben Hassid, yeah.
0:12:52 We mentioned a few others. I think maybe it’d be cool to show off some of these videos
0:12:56 that have gone viral so people could kind of see what sort of results others are getting with these.
0:12:59 A hundred percent. So here’s one of Rowan’s.
0:13:04 This AI can turn any photo into a 3D world you can explore. World Labs, founded by the godmother
0:13:10 of AI, Fei-Fei Li, created a system that transforms regular images into interactive 3D environments.
0:13:14 Their system lets you step inside the picture and look around as if you were really there.
0:13:20 You can also add real-time visual effects, change depth of field, and experiment with camera effects
0:13:20 like dolly zooms.
0:13:25 Very cool. Yeah, that one looks like he pretty much did the whole thing generatively.
0:13:28 You know, it sounded like the audio was probably like an ElevenLabs kind of audio.
0:13:32 And then he had all the B-roll. I don’t know if Rowan specifically
0:13:36 uses an AI tool that sort of sources the B-roll for him,
0:13:40 or if he does this sort of AI avatar, the whole video is shot,
0:13:45 and then he sends it to a team member who goes and finds B-roll to kind of hide any of the uncanniness, you know?
0:13:49 Yeah, the interesting thing is the fact that the avatar is in there for about two or three
0:13:56 seconds in the middle. What this really is, is like great short form storytelling video with B-roll.
0:14:01 And the avatar gives it that personal touch to kind of tie it back to like a personal brand,
0:14:06 or even if it was a business. Whereas like, you know, when I scroll through like my Instagram or
0:14:11 TikTok or whatever, there’s so many like great AI generated videos that don’t have any personality.
0:14:16 The recent viral ones I’ve seen, have you seen those like, you wake up in France in the 1800s?
0:14:23 Yeah. So like people are out there making like great AI videos, but here is a way to kind of take
0:14:27 these AI videos and give a little bit of personality towards it. And I think the interesting thing as
0:14:32 well is, you know, the specific use case we’re talking about is kind of me expanding and leveraging
0:14:38 my personal brand. But I think there’s opportunity for people here to create net new personal brands
0:14:42 that are completely fabricated, completely AI. Right. You can go and design an avatar around
0:14:46 someone and then kind of add that as an extra element to these like faceless YouTube channels
0:14:52 that you hear about. Yeah. Yeah. So you can almost be sort of anonymous yourself, but create a character
0:14:56 that’s out into the world that people kind of assume is a real character. I mean, we’ve actually been
0:15:02 seeing that quite a bit with the whole Instagram AI influencer thing, which, you know, I have very mixed
0:15:07 feelings about it. Right. I don’t totally get it, but a lot of people are doing, you know,
0:15:12 they’re really successful with these AI avatars and like having Instagram accounts with hundreds of
0:15:17 thousands of followers. And the character isn’t even a real person, which, you know, kind of blows my mind
0:15:21 still. It’s insane. Are you aware of Lil Miquela? I think she’s like the most famous one.
0:15:26 That was probably like the original, right? That has kind of like the genesis of it.
0:15:31 Yes. I was doing a bit of research. I think she launched in like 2016. Yeah. It was a startup
0:15:38 called Brud. Trevor McFedries and Sara DeCou developed and managed her persona, social media
0:15:44 presence, brand collaborations. And then this company was acquired in 2022. The crazy thing is,
0:15:51 Lil Miquela earns $10 million per year from brand partnerships. And she has some insane
0:15:57 collaborations, like with BMW of all companies, which shows that like real brands are kind of
0:16:03 willing to work with these AI personas. And I think the thing that’s interesting is that we talk about
0:16:09 this being launched in like 2016. This was pretty technologically revolutionary at the time, really,
0:16:14 like this is not easy to do. But now with some of these tools, like anyone can do this for like 50
0:16:21 bucks a month and go and create these personas. I think this AI influencer industry thing is really
0:16:26 going to expand. And yeah, I don’t really know what that means for like, you know, the future.
0:16:32 It’s quite scary. Yeah. I don’t know. I think there’s sort of like a generational thing, right?
0:16:37 Like I feel like, you know, at my age, like I don’t totally get it, but I feel like I might be sort of
0:16:41 like aged out of it, right? Because a lot of like younger generations are really into the whole
0:16:47 Character.AI thing and, you know, chatting with fictional characters on Character.AI. And that seems to be a
0:16:52 popular thing. Obviously, a lot of these Instagram AI influencers get a lot of followers. For whatever
0:16:57 reason, it doesn’t click in my brain. Like when it comes to social media, I like to connect with other
0:17:02 humans. But I also, again, I think it’s a generational thing. I think as younger generations sort of grow up
0:17:07 with this kind of technology sort of being natively in their lives, it’s just going to become more and
0:17:12 more normal. I don’t know how I feel about that. Yeah, it’s very strange. I was kind of looking
0:17:17 through her Instagram earlier and this one video like really blew my mind. She was reviewing a
0:17:23 skincare brand. So it’s this digital avatar, like putting this kind of like skincare, like
0:17:26 makeup stuff on and be like, this is great. It’s going to make your skin really like clean and
0:17:31 whatever. And you kind of think the people who are watching that ad to go like, yes, I want to buy
0:17:36 that thing now. Like surely you need to see it on a human and see kind of, you know, before or after.
0:17:41 It’s better now. It’s all digital. Yeah. It’s so weird. So we were talking about Rowan earlier
0:17:46 and you mentioned he put out a report and in the report, he did comment about like how it was
0:17:51 perceived. What were some of the like things that were talked about? Because, you know, like we sort
0:17:57 of mentioned in the beginning, my sort of biggest worry around doing that kind of thing, a short form
0:18:01 video is, is just how it’s going to be perceived. The funny thing is I watch Rowan’s videos and I watch
0:18:06 Varun’s videos and I don’t really think twice about it, but I’m worried about doing it myself
0:18:11 because I’ve sort of put myself out there. I haven’t been an AI avatar. If I start doing it,
0:18:16 are people going to be like, oh, he’s starting to go the lazy route or whatever. You know, I don’t know
0:18:20 how it’ll be perceived. And that’s what worries me about it. Yeah. It’s wild. Like Rowan put out this
0:18:26 tweet and maybe we can link it below, but the kind of crux of it is he says like nobody cared. Like he
0:18:32 didn’t have any negative sentiment. And I think that is partly due to the AI thing,
0:18:36 but also partly because of the kind of quality of the work he did. I think if he was to start posting
0:18:41 like AI avatar content and like the B-roll wasn’t as kind of relevant and researched and all of those
0:18:47 things, like we said, and the script wasn’t so good at storytelling, you know, I think people need good
0:18:53 content. That’s what people are like desperate for. And if you can deliver that with this technology,
0:18:56 then I don’t think people are going to be too unhappy. And the other thing he said,
0:19:01 which I think is crazy is he believes that the avatar is better than he is on camera. So again,
0:19:05 I think he’s potentially quite similar to me. Don’t really want to do video. Don’t have a bunch of
0:19:09 experience with it. Let’s look at this technology. Whereas again, for you, I think it’d be quite a
0:19:13 leap. Yeah. You know, biggest AI YouTuber. He’s had enough of doing video.
0:19:19 Yeah. I mean, I know Rowan personally pretty well, and I can, I can attest to the fact he doesn’t want
0:19:23 to be on camera. We’ve tried to get him on this podcast. I’m calling him out right now. We’ve tried to get him on
0:19:27 this podcast a handful of times and he’s always been like, yeah, I don’t really want to be on
0:19:33 camera. So yeah, I know that’s kind of the case with him is he prefers to be behind the camera,
0:19:39 run his newsletter and put out videos like that. So I think, you know, like the sort of next logical
0:19:46 discussion here is like outside of this sort of virtual influencer kind of concept, maybe we can
0:19:51 sort of rattle off some of the various like other ways to use kind of this technology. I already
0:19:55 mentioned one in the very beginning when this stuff first came out, sort of personalized
0:20:00 videos that look like you created this video specifically for the person that just opted in,
0:20:06 right? There’s APIs that can automatically, whenever you opt into an email list, feed the person’s name
0:20:11 into like a HeyGen generator. HeyGen generates the videos. And then, you know, whatever, 30 minutes
0:20:17 later, you get a welcome email with a personalized video to you, which is really, really going to
0:20:22 probably increase retention because people feel like, Oh my God, this guy just sent me a video.
0:20:26 Like, even if they know it’s AI, they’ll be like, I can’t believe, you know, they sent me this
0:20:31 personalized thing. So that’s like one other use case that I’ve seen, but you know, what are some
0:20:36 other sort of business implications? This is obviously a HubSpot podcast. They’re a B2B company,
0:20:39 like anybody that’s listening to this, how else can they use this stuff?
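
[Editor’s note: a minimal Python sketch of the opt-in-to-welcome-video workflow Matt describes above. HeyGen does offer an API, but the endpoint URL, payload fields, response shape, and the send_welcome_email helper below are hypothetical placeholders for illustration, not HeyGen’s documented interface.]

```python
import requests

# Hypothetical avatar-video endpoint; swap in your provider's real API.
AVATAR_API_URL = "https://api.example.com/v1/videos"
API_KEY = "YOUR_API_KEY"

def send_welcome_email(to_addr: str, video_url: str) -> None:
    # Stub: in practice, call your email provider's API here.
    print(f"Emailing {to_addr} a welcome video: {video_url}")

def handle_newsletter_optin(name: str, email: str) -> None:
    """Triggered by your email platform's opt-in webhook."""
    # 1. Build a script personalized with the subscriber's name.
    script = (
        f"Hey {name}, thank you so much for joining the newsletter. "
        "Here's what you can expect from us every day..."
    )
    # 2. Ask the avatar service to render the script with your trained
    #    avatar and cloned voice (field names are illustrative only).
    resp = requests.post(
        AVATAR_API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "avatar_id": "my_personal_avatar",
            "voice_id": "my_cloned_voice",
            "script": script,
        },
        timeout=30,
    )
    resp.raise_for_status()
    video_url = resp.json()["video_url"]  # assumed response shape
    # 3. Drop the finished video into the welcome email.
    send_welcome_email(email, video_url)
```

In practice the render is asynchronous (Matt mentions a roughly 30-minute turnaround), so a real version would poll a job ID or receive a callback rather than expect a finished URL immediately.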
0:20:44 Yeah. Just one thing I’ll say on that personalization. Like if you think about this AI avatar,
0:20:48 like say HeyGen, as part of like a big tool stack, personalization is getting more and more
0:20:51 important. And like when you receive an email in your inbox that doesn’t have a level of
0:20:57 personalization, it’s almost an insult today. I’m really excited to see people integrating
0:21:02 this tool with something like Clay. So like you get the personalization of like, Hey Matt,
0:21:07 as an introduction, but then it can also bring in like extra personalization. Like I’ve scraped your
0:21:12 LinkedIn and I know everything about you and I can make that personal message specifically to you.
0:21:17 I think when you start getting videos like that land in your inbox. And the other thing is I get
0:21:21 these sales emails quite often now where someone’s like, they’re offering me like, let’s say LinkedIn
0:21:27 personal branding services. And they’ve done like a 20-second Loom of them scrolling up and down my
0:21:32 LinkedIn account while they’re kind of playing me a not very personalized thing over the top.
0:21:37 I think personalization in outreach where you were using an AI avatar to give the personalization of
0:21:41 yourself, but you’re integrating with tools like Clay to get a lot of information on someone and have
0:21:44 very, very targeted outreach. I think that’s super exciting.
0:21:49 Yeah, no, that’s definitely another thing. I mean, you’ve got these tools like, um, I think it’s
0:21:54 pronounced n8n or Natan. I don’t know, but it’s like an AI automation tool where you can sort of
0:21:59 start to connect all of these various tools together. So it could do things like that. Like when somebody
0:22:05 opts in, go scrape their LinkedIn, add it to my CRM, go, you know, look at their Twitter bio, add it to my CRM.
0:22:11 You know, do they have a website, go add that info to my CRM. And then it can pull all that data, feed it
0:22:17 into, you know, ChatGPT or Claude or Gemini or whatever, and write up like a personalized message
0:22:22 about them. Like, Hey, I know you’re into guitar and surfing. That’s really cool. I’m into that too,
0:22:27 whatever. Right. And send like a personalized email based on all of this information that it grabbed
0:22:32 and then even feed it into HeyGen and create a personalized video with all of that information.
0:22:38 And I mean, right now I feel like that stuff’s still kind of expensive and still pretty slow, but I mean,
0:22:43 it’s the worst it’s ever going to be, right? Like as the saying goes, it’s only going to get easier and faster.
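
[Editor’s note: a rough Python sketch of the enrichment-to-personalization chain Matt outlines, the kind of pipeline you would wire up visually in n8n. The enrich_contact step is stubbed with fake data standing in for the LinkedIn/Twitter/website scraping and CRM writes; the model name is an illustrative choice. It uses the OpenAI Python SDK, though any LLM API would work.]

```python
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from env

client = OpenAI()

def enrich_contact(email: str) -> dict:
    # Stub: in n8n this would be scraper/CRM nodes pulling the subscriber's
    # LinkedIn, Twitter bio, and website details into your CRM record.
    return {"name": "Matt", "company": "Future Tools",
            "interests": ["guitar", "surfing"]}

def write_personalized_message(contact: dict) -> str:
    # LLM node: turn the scraped profile into a short, personal script.
    prompt = (
        f"Write a friendly 60-second welcome script for {contact['name']} "
        f"from {contact['company']}. Mention their interests "
        f"({', '.join(contact['interests'])}) naturally. Under 120 words."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    contact = enrich_contact("new.subscriber@example.com")
    script = write_personalized_message(contact)
    print(script)
    # Next step: submit this script to the avatar tool, as sketched earlier.
```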
0:22:48 Yeah, a hundred percent. I think when you look at like how things are changing across any type of content creation.
0:22:55 So we’ve talked about social media, like content creators, you could look at onboarding documents for your team.
0:23:00 You can look at sales outreach. You can look at weekly reports in your Slack. You can look at investor emails,
0:23:08 like any of these sorts of things. You’re kind of starting to see often like a little bit at the top of the email where you can listen to the email.
0:23:15 I think in the future, there’s the option where you can watch the email. And I think the nice thing about the democratizing of content creation,
0:23:22 let’s say it enables people to create content for others in the way that they like to consume it.
0:23:28 You know, someone who is a newsletter operator, now they have a video podcast with HeyGen. They also have an audio podcast.
0:23:35 They also have short form content they’ve created and you’re just able to kind of meet your audience where they want to be met.
0:23:44 I wonder how long it’s going to be before, you know, companies like Twitter or Facebook, Instagram allow sort of this extra personalization on the video, right?
0:23:49 You’re scrolling your Instagram feed and it’s like, Hey Matt, stop for a sec. You know, Hey Adam, stop for a sec.
0:23:53 Have you checked out this elderberry supplement, whatever. Right.
0:24:01 But it’s like, it personalizes and stops you based on like, um, you know, knowing your name and all the details these companies have on you.
0:24:09 I bet you it’s not far off. Although I do feel some of those companies may like test that and then sort of pull back on it as people get creeped out by it.
0:24:15 Yeah, I think so. The adoption curve is going to take a bit of time. I’m pretty sure you might know about this.
0:24:21 I’m pretty sure I saw that Instagram was testing translation, which is another thing that these avatars can do.
0:24:28 Um, so, you know, if you’re putting out content generically in English, like it’s going to automatically translate it into Spanish or whatever for different audiences.
0:24:30 So that’s almost like the first step of that coming out.
0:24:36 Yeah. Yeah. And not only is it translating it, I don’t know if Instagram specifically is doing it, but I know HeyGen does it.
0:24:43 It actually lip syncs it. So it looks like you’re actually saying it in that other language where like, you know, YouTube has native translation.
0:24:50 Now I don’t think it’s rolled out to everybody yet, but they have this native audio translation feature, but it doesn’t sync anything up.
0:24:59 It just sort of overdubs a translated voice-over onto the rest of the video, but with HeyGen and Synthesia and some of these tools that are out now, it actually changes your lips.
0:25:03 So it looks like you’re saying it in that language. It doesn’t actually look like a dub anymore.
0:25:18 Yeah. A hundred percent. And I think if you are a big content producer, whatever, you know, even looking at a company like HubSpot that produces all of this video content, being able to distribute that worldwide to like every market is a huge piece of leverage of this technology.
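
[Editor’s note: a hypothetical sketch of that multi-market idea, fanning one script out to several languages and requesting one lip-synced render per language. The translate stub, endpoint, and field names are placeholders; per the conversation above, tools like HeyGen handle the translation and lip sync themselves.]

```python
import requests

AVATAR_API_URL = "https://api.example.com/v1/videos"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"
TARGET_LANGUAGES = ["es", "pt", "de", "hi", "ja"]

def translate(script: str, lang: str) -> str:
    # Stub: in practice an LLM or translation API call.
    return f"[{lang} translation of] {script}"

def localize_video(script: str) -> dict:
    """Request one lip-synced avatar render per target language."""
    jobs = {}
    for lang in TARGET_LANGUAGES:
        resp = requests.post(
            AVATAR_API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "avatar_id": "my_personal_avatar",
                "script": translate(script, lang),
                "language": lang,  # illustrative field; real APIs vary
            },
            timeout=30,
        )
        resp.raise_for_status()
        jobs[lang] = resp.json().get("job_id")  # assumed async job handle
    return jobs
```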
0:25:35 Yeah. I mean, even on YouTube, there was an interview that Colin and Samir did with Mark Zuckerberg. And then later in the interview, Mr. Beast jumped on. Right. And when Mr. Beast was on this interview, he was telling Mark Zuckerberg that only about 30% of his audience speaks English.
0:25:50 And the reason he’s not putting more focus on Facebook is because Facebook doesn’t have that native translation and YouTube does. And, you know, when he puts it on YouTube again, 30% of people are listening in English. The other 70% are completely different languages.
0:25:57 So it’s like, if he puts it on Facebook, he’s like missing out on potentially 70% of the people that could watch his videos.
0:26:06 That’s insane. Mr. Beast is like so hot on this translation thing. He’s been looking at it for a while and he like has a separate company or part of his company that does this, this translation.
0:26:15 And he’ll like hire movie stars in the countries that he wants to distribute to kind of be him on these videos, which I think is so smart.
0:26:16 Yeah. Yeah.
0:26:21 The thing we should talk about is the negative use case, the possibility of scammers using this technology.
0:26:31 I was listening to an episode that you did recently where you’re talking about audio and scamming and like how, you know, you can personalize someone’s voice and call their mother or grandma or whatever.
0:26:42 I actually had this moment like a couple of days ago, I was trying to set up a new Google account for a new business I was starting and I tried my number for the 2FA and it was like, no, you’ve got too many Google emails with this number.
0:26:53 You can’t use it. I tried my partner’s, the same thing. So then I sent a 2FA code to my mom and texted her saying, Hey mom, can you give me that code? And then I realized like, Oh wow, this is the kind of scam workflow.
0:27:01 So I sent my mom a picture of myself and said, it’s actually me. Send me the code. She sent me the code. That was just a selfie that I took in that moment.
0:27:08 You know, imagine if someone kind of got an access to a photo of me from anywhere, then they have that. That’s kind of how easy it is to get someone to send you a code.
0:27:16 But they don’t even need to get a selfie. They could probably generate one. If there’s enough images of you online to train on, people will be able to generate them of you.
0:27:32 A hundred percent. So when you think about that and then you can have my voice perfectly emulated and then someone can also have my likeness personally emulated, sending a selfie video to my family member saying, I’ve just crashed my car. Can you send me some money? Like this is a pretty scary place we’re going towards.
0:27:44 Yeah. And I mean, we’ve already seen it in like scam ads as well. You know, we mentioned Mr. Beast, Joe Rogan apparently has had it happen to him. It’s kind of popped up quite a bit where, you know, they’ll start running Facebook ads, Instagram ads, Twitter ads.
0:27:57 And it looks like Joe Rogan or Mr. Beast or one of these big names is actually promoting a product, but they just used an AI tool, trained that person’s voice into it, and then generated an ad as those people.
0:28:13 I don’t think that’s going away anytime soon. I think a lot of these companies are going to try to figure out how to put guardrails up against it, but it’s going to be a constant cat and mouse game, right? Like whenever new guardrails come up, the people that are trying to do this stuff are just going to figure out more sophisticated ways to get around it.
0:28:22 But yeah, so we’re just kind of entering this world where you’ve got to be really careful. You know, you’ve got to know that this exists and kind of question everything almost, right?
0:28:32 Like you mentioned with the scenario with your mom, like I told my parents, if you ever get a call from me saying I’m in trouble and need money or something like that, ask for this passcode, right?
0:28:46 And then this is how you can confirm that it’s actually me is I will give you this word. If they can’t give you this word back, then you know, it’s not really me and it’s possibly a scam because look, I’m somebody who puts my likeness online a lot.
0:28:58 I’ve got thousands of hours of video, thousands of hours of audio. I’m probably one of the easier targets for some of this kind of stuff, but you know, it’s only going to get easier and easier for people to kind of do that sort of thing.
0:29:12 A hundred percent. And that’s a physical word that you’ve said to your mom in person or written down somewhere. That’s really smart. I haven’t heard that before, but that is potentially one of the only ways to do it in this new world we’re looking towards. The other thing that’s interesting:
0:29:31 you mentioned there these kind of fake and disingenuous ads, but I do think that these avatars are really interesting for UGC ads. There are a couple of companies that I noted down, Arcads and Creatify, that are kind of specifically for these use cases where you can go in and use like hundreds of their UGC creators.
0:29:41 They’re kind of preloaded. I was looking through earlier and like, you know, you click through and it will be John, and then you can have John laying in a hammock or sat in a podcast studio or lying on his bed or whatever.
0:29:46 And, you know, you can put in scripts and then B-roll and you can have them kind of holding your product.
0:29:58 So I think when you look at like speed to market of testing and validating ideas, you can get pretty decent AI ads and test them without having to hire an actual UGC creator and ship products to people and all these sorts of things.
0:30:04 Yeah. Yeah. I was looking at Arcads the other day. I thought it was interesting that they actually used real actors, right?
0:30:09 They didn’t go and create like AI characters and now you can go and generate videos with those AI characters.
0:30:14 They actually got a whole bunch of actors to come in and actually do a bunch of training.
0:30:19 And then they use those actors inside, which if I was one of those actors, I don’t know, that would make me nervous, right?
0:30:23 Like you’re scrolling Twitter and you see yourself in a Viagra ad or something, you know?
0:30:27 That’s crazy. I didn’t, I didn’t realize they’d use real people to create their avatars.
0:30:39 That has to tell you that AI cannot create the likeness of a fictional person as convincingly as it can mimic a real person, which is quite interesting.
0:30:45 Yeah. Yeah. I think they had these people go and do a lot of the sort of reactions because if you look at Arcads, it’s a lot of reactions.
0:30:51 It’s people like pointing up or people doing like the shocked face or crying or, you know, laughing or that kind of thing.
0:31:00 And from what I understand, they brought in actors, had them do all of those things, and now you can prompt it and it will sort of mimic what they did.
0:31:02 But obviously, you know, there’s speaking involved and stuff like that.
0:31:14 So it’s sort of generating the reaction, but they actually have footage of those people doing those real reactions as well to sort of, you know, sort of fine tune it on those kinds of reactions.
0:31:17 That’s really smart. I’d love to know the deal that those UGC creators made.
0:31:22 Is it like a, is it a one-time fee that they got paid to kind of lease their likeness forever?
0:31:26 Are they getting like two cents every time someone makes an ad with their likeness?
0:31:28 Yeah, I have no clue.
0:31:42 I was actually reading a story the other day, though, about how a handful of Hollywood actors are all like really, really upset because they actually gave permission and gave people the ability to use their likeness in ads and things like that.
0:31:50 And now they’re all frustrated because they’re starting to see ads that they would have never actually approved being in spreading online.
0:31:55 So, yeah, it’s definitely a really weird and interesting world we’ve entered into with this.
0:32:03 But I mean, the sort of like ethical use cases of making your own creators and making your own short form videos or creating ads for your own products.
0:32:10 To me, it’s really exciting, but, you know, you do have that sort of unethical counterbalance that needs to be figured out as well.
0:32:14 A hundred percent. I think that’s the case with kind of all of these new AI technologies.
0:32:19 The other thing I think is interesting is the idea of brands and creator-led brands.
0:32:27 And, you know, sometimes you will have a brand that kind of starts with a creator and then that, you know, creator will move on for whatever reason and the brand kind of gets left.
0:32:35 I think that there’s a world where we see kind of brands that create AI personas that become the creator for that brand.
0:32:37 But, you know, it’s not a real person. It’s forever licensed to the brand.
0:32:49 Like, even kind of like the Duolingo owl, like, is there a version of that where, you know, that owl is actually like an AI avatar, like creator type thing that’s doing all this, like, video content?
0:32:52 There’s obviously a range of, like, more and less human.
0:32:54 Well, like this Lil Miquela example.
0:33:00 Like, if Lil Miquela was to license herself forever to BMW and was forever the spokesperson for BMW,
0:33:06 you can kind of create these brands around creators and get the momentum of a creator-led brand,
0:33:09 but never, ever have the risk of that creator moving away from the brand.
0:33:15 Yeah, we were actually talking, we had Nikola from Wonder Studio on the show not too long ago.
0:33:23 And when we were talking to him, we were talking about this concept of, like, if you’re a brand, you can create your own, you know, GEICO Gecko, you know, Tony the Tiger,
0:33:30 you know, all of these companies that have, like, this mascot that’s not a real human, but people know the mascot, right?
0:33:35 Anybody can go and create that now and have that mascot do these ads for them.
0:33:41 You know, this was in the context of using Wonder Studio and actually creating, like, a 3D character model of it
0:33:44 and then putting them in a world using something like Wonder Studio.
0:33:49 But, yeah, I mean, like, anybody can go and do that now and have their own sort of mascot.
0:33:56 Like, for me, I can have, like, an animated cartoon wolf or something that pops up in my videos, and I own him forever.
0:33:57 That’s my IP.
0:33:59 He’s never going to, like, you know, go look for another job.
0:34:05 Yeah, and then when you think about where you can place that character, like, so, yeah, you pick up the phone to kind of, like, tell them you crashed your car
0:34:11 and it’s the GEICO Gecko that answers you, you know, and you, like, do the chatbot on the website and, like, the avatar of them pops up
0:34:12 and it’s not just a chatbot.
0:34:14 It’s, like, the Gecko speaking to you, you know.
0:34:21 I think it’s a really smart opportunity for some brands to go out and really, really own this and put this, like, whatever it is, everywhere.
0:34:23 Like, all the touch points with this brand.
0:34:30 You can even imagine walking into a store, like, walking into the Apple store, and there’s just this, like, Apple, like, talking to you, like a hologram of an Apple.
0:34:38 The company, I think they’re called Hypervision, that are doing these holograms, which are similar to the avatars for, like, conference booths and, like, welcoming in stores.
0:34:41 So, I think that’s a kind of exciting tangent of it.
0:34:48 I went to a small meetup up in San Francisco a couple months ago, and as you walked in the door, it wasn’t a hologram.
0:34:51 It was, like, a giant flat screen TV that had, like, a camera on the top.
0:34:55 But as you walk in the door, it was like, hey, welcome into our store.
0:34:56 Oh, I love the shirt you’re wearing.
0:34:57 That plaid looks really good on you.
0:34:59 Oh, and you’ve got a nicely trimmed beard.
0:35:01 Thanks for joining us today, right?
0:35:05 And it was actually, like, commenting on your appearance as you walked in, right?
0:35:13 I can totally see stuff like that, either in hologram or, you know, in the beginning, maybe on just, like, big flat screen TVs that’s sort of interacting with you.
0:35:19 But it’s a character interacting with you and actually sort of giving feedback and actually responding to what it sees.
0:35:23 So, it’s actually specifically talking about you as you walk through the door.
0:35:24 100%.
0:35:27 It sounds insane, and it sounds like a Black Mirror episode.
0:35:32 But really, with the kind of, like, piecing these tools together, it could be, like, 6, 12 months away.
0:35:35 I think you could probably do it today, you know, if you wanted to.
0:35:37 I think it’s super exciting.
0:35:38 Yeah, yeah, for sure.
0:35:43 Well, is there any other avenues that we should travel down that we haven’t around some of these concepts?
0:35:47 I’ve got one more thing I can show if you like.
0:35:51 So, I’ve got a breakdown of the personal avatar compared to studio avatar done in HeyGen.
0:35:52 Okay, cool.
0:35:52 Yeah, yeah.
0:35:54 So, let’s start with the personal avatar first.
0:35:59 So, if anyone’s created an avatar with HeyGen or Synthesia before, you’ve probably created a personal avatar.
0:36:02 You kind of sit down, you can do it with your webcam, your phone.
0:36:07 You do a little 10-second recording of yourself saying, I give permission for Synthesia to make this.
0:36:10 And then, you’ll read a kind of two to three-minute script.
0:36:12 So, that’s the personal avatar.
0:36:19 So, this is, like, I think similar to the use cases that we’ve seen from Ruben previously and some of the tests I’ve done before.
0:36:23 DeepSeek R1, and it went viral overnight.
0:36:29 Built at a fraction of the cost, DeepSeek’s open-source model offers advanced math, coding, and reasoning skills.
0:36:31 Within days, it topped the app store charts.
0:36:33 So, that’s the idea of the personal avatar.
0:36:39 Now, the studio avatars, when I recorded this studio avatar, I went to a local studio.
0:36:50 I didn’t do it with HeyGen, but they gave us a very, very specific list where they said, use this camera, this light, stand in front of this type of green screen, have this type of microphone.
0:36:51 Like, it was very, very specific.
0:36:57 And we had very specific instructions on how to kind of speak and all of those sorts of things.
0:36:59 And the output of this one is pretty insane.
0:37:00 Let’s have a look.
0:37:08 DeepSeek, the Chinese AI startup, recently dropped its game-changing chatbot, DeepSeek R1, and it had gone viral overnight.
0:37:14 Built at a fraction of the cost, DeepSeek’s open-source model offers advanced math, coding, and reasoning skills.
0:37:17 Within days, it topped the app store charts.
0:37:18 So, there you go.
0:37:19 What do you make of the difference between those two, Matt?
0:37:21 Yeah, I mean, they look really good.
0:37:25 One thing I’ve noticed is that, you know, the second one, it had more of the full-body shot.
0:37:33 And one thing that I’ve seen, I think HeyGen is the one that does it, is they can actually do videos now of you, like, walking down the street.
0:37:39 And it sort of, like, changes your lips so it looks like you’re walking and talking to the camera, which sort of blows my mind.
0:37:42 But yeah, so you can record them full-body.
0:37:46 It’s funny, like, out of those two, the studio one was very expensive to record.
0:37:50 And it had a lot longer process, and we worked kind of more in collaboration with HeyGen.
0:38:02 But because you have the green screen, you can imagine that is amazing for, like, there’s so many use cases you can imagine that from, you know, changing the background all the time, presentations, you know, you could have someone kind of speaking over slides, like all of those sorts of things.
0:38:10 But for the specific use case that we’ve been working on, the short form video, the best background ever is the same background that I have when I’m on podcasts or whatever.
0:38:12 So that personal avatar actually works in that case.
0:38:17 I’m curious with the one that you showed, was the B-roll, did you go and do that, like, separately?
0:38:20 Or was that one of the tools that actually helped put the B-roll?
0:38:29 Because you had the, I don’t know what the sort of editing style is called, but when you have the words and it jumps between all the articles but stays focused on the same word, was that something that you, like, custom edited?
0:38:33 Or was that, like, an AI sort of function of one of these tools you’re using?
0:38:39 Yeah, so that one did run kind of, like, through an editor, and that is the workflow we’re working with now.
0:38:47 Obviously, we’re creating these videos on the Mindstream brand, which is owned by HubSpot, and we’ve got to be, like, really, really careful with what we put out and make sure it is the highest quality.
0:38:56 I haven’t spent a lot of time with, like, AI editing softwares, but I don’t know right now if we’re at the level that it kind of really can replace a full-time editor at the highest level.
0:38:56 Right.
0:39:01 In, like, kind of, like you say, matching all those and getting that really kind of, like, high level of video editing.
0:39:02 Yeah, yeah.
0:39:12 Descript actually put out a video about a new AI video agent that they’re calling the cursor for video editing, where you can basically say, hey, at, you know, this minute, go change this.
0:39:15 During this scene, add this B-roll into it.
0:39:16 At this scene, do this.
0:39:20 And you just keep on chatting with it, and it edits your video through natural language.
0:39:22 I mean, I haven’t gotten my hands on it.
0:39:26 I don’t know how effective it is, but the concept is really interesting to me.
0:39:28 Yeah, that’s really smart.
0:39:34 My editor has said to me that, like, Descript is his favorite, but it’s kind of like vibe editing, like we’re seeing this vibe coding now, right?
0:39:34 Yeah.
0:39:43 I think the magic of it with any of these things is if an AI tool can make it easier to do something, you still need to know what you’re trying to do to get there.
0:39:54 And someone who’s got a kind of 1,000 hours of video editing experience, put the AI tool in their hand, they’re going to get like a 10x output compared to myself who’s never edited videos.
0:39:56 And it’s like, oh, where do I even start?
0:40:04 Yeah, now that really, really solid editor can get you that edit back in three days instead of a week and a half, you know, but still just as high quality.
0:40:11 You know, that’s what excites me about it is it sort of up levels the abilities of the people that actually know what they’re doing.
0:40:25 Like I’ve done a lot of the vibe coding stuff, but because I’m not that proficient of a coder, I know like a little bit, but because I’m not that proficient, I’ll run into bugs and I’ll sit there for like three hours trying to fix the smallest, tiniest bug.
0:40:30 That seems like it should be a simple fix, but the AI can’t figure out how to overcome that bug.
0:40:33 And because I don’t know enough about coding, I can’t get to where I want to go.
0:40:37 So, you know, vibe coding often turns into rage coding very, very quickly.
0:40:39 A hundred percent.
0:40:43 One more thing I’ll say that I found really interesting with these avatars.
0:40:48 And I do think it’s a little bit of a barrier when you’re just kind of like, you said the layman sitting down to make their first avatar.
0:40:52 When we created ours, we did the studio avatar first with HeyGen.
0:40:55 We did that back in January and I’ve worked closely with them.
0:40:58 And recently we did a personal avatar.
0:40:59 So this is like the basic one.
0:41:05 But I did the recording with a Heygem kind of consultant on call with me.
0:41:09 So I went and did the two minute recording and he said, okay, now let’s change this thing.
0:41:14 And we did eight inputs until he said that input will create the best avatar.
0:41:19 And what that input is doesn’t really feel like it will create the best avatar.
0:41:23 You want to create an avatar that’s going to do kind of social media reels.
0:41:27 You’d expect the input to be like, hey guys, it’s Adam here.
0:41:29 And like, you know, do like the YouTube voice.
0:41:30 Right.
0:41:32 If you do that, it will break the avatar.
0:41:36 To get the best avatar output, you need to be the least human.
0:41:40 The take that the HeyGen tech team were most happy with was
0:41:46 me literally sitting there going, hello, my name is Adam, speaking really, really slowly.
0:41:53 And when you do a hand movement, they say like, bring it up, hold it for like three seconds and then take it down really slowly.
0:41:55 If you do this, it will break the model.
0:42:01 And so then you create this avatar in this really robotic way that is very, very unnatural.
0:42:08 And then when you do the inputs, you know, like I said earlier, the input audio dictates the kind
0:42:16 And then you can, you know, with all these hand movements you do, you can kind of like set them as expressions and you can actually like retrofit them in to train the avatar.
0:42:21 But I think that’s really hard to communicate in like a kind of onboarding process.
0:43:27 Like, hey guys, don’t record the avatar how you want it to sound; record it like a robot to look human.
0:42:29 That’s interesting.
0:42:41 So in the output, you can still get something that looks excited and energetic, even though you train it on a sort of monotone, almost like boring sort of version of yourself.
0:42:41 Yeah.
0:42:44 And I’ll clarify, it’s not monotone or boring.
0:42:45 It’s very, very slow.
0:42:46 Okay.
0:42:46 Slow.
0:42:46 Okay.
0:42:47 Very slow.
0:42:57 And when you’re kind of doing your voice movement, if you’re doing like facial features or whatever, you kind of have to accentuate them and move slowly.
0:42:59 So it’s unnatural.
0:43:00 It’s very, very unnatural.
0:43:03 The kind of input you have to do.
0:43:09 And yeah, again, if you’re sitting down and you’re like, well, I want my avatar to be very, very professional or very dynamic or whatever.
0:43:15 That’s kind of all done in the edit, really not in the input recording of the avatar.
0:43:17 I think that’s a really, really good takeaway.
0:43:21 I’m glad you added that in because I think that’ll be like really, really helpful to people.
0:43:30 You know, one thing I mentioned sort of off recording is that I have issues because I have a beard and sometimes like the beard starts to look a little blurry and fuzzy around my lips as I start talking.
0:43:35 Or it looks like I have hair on my lips or, you know, something weird like that when I go to generate something.
0:43:39 But what you just described could be the solution to it, right?
0:43:42 I tend to get really excited and just talk really, really fast.
0:43:47 So when I go and train one of those models, I do it how I do it on video.
0:43:48 I talk fast.
0:43:48 I sound excited.
0:43:50 Maybe that’s the problem.
0:43:52 Maybe that’s what I’m doing wrong.
0:43:53 Potentially, potentially.
0:43:58 If you’re speaking kind of at the level you are now, I do think the model will struggle, but that’s what I’ve been taught.
0:44:01 Another obvious thing I haven’t mentioned: don’t ever cover your mouth.
0:44:03 So the way you’re sat now, you’re sat quite close to your microphone.
0:44:07 You know, if you did like kind of dip into that, that’s going to break it as well.
0:44:13 So yeah, like super slowly, you have to take long pauses kind of between the scripts they give you.
0:44:15 You also have to have a like home base.
0:44:18 So on the studio avatar, I was kind of holding my hands like that.
0:44:22 You know, I had to do that and move away from that, but always come back to that.
0:44:28 So, you know, if you don’t want your hands in shot, keep them out of shot, bring them into shot for a movement, but then take them back out.
0:44:33 You have to kind of have that home base that you always come back to, which will be like the home base of the avatar.
0:44:34 Right, right.
0:44:37 And same rules apply for either of the models that you’re using.
0:44:38 I believe so.
0:44:41 This, all this kind of learning I’ve got is from working with HeyGen.
0:44:45 I would assume it applies to Synthesia and Argil and all of them as well.
0:44:45 Awesome.
0:44:46 Well, very cool.
0:44:48 I think that’s super helpful for people.
0:44:50 So this has been super fascinating.
0:44:52 I really, really appreciate all the tips and insights.
0:45:01 And I think a lot of people are going to have a lot of ideas of how they can go and use this technology inside of their marketing, or if they want to go and become a creator or things like that.
0:45:05 If people want to go and learn more from you, learn more about what you’re up to, where can they go check you out?
0:45:07 So I’m very active on LinkedIn.
0:45:08 I post twice a day there.
0:45:09 It’s just my name, Adam Biddlecombe.
0:45:12 But also, I’m still running the Mindstream newsletter.
0:45:14 It’s now owned by HubSpot.
0:45:18 So if you want kind of daily AI updates, make sure to subscribe to Mindstream.
0:45:18 Cool.
0:45:20 And where do they go to subscribe to it?
0:45:21 Mindstream.news.
0:45:23 You can tell I don’t do too many pods, can’t you?
0:45:26 This has been so much fun, Matt.
0:45:26 Thanks for having me on.
0:45:27 It’s been a blast.
0:45:28 Yeah, thanks for joining me.
0:45:36 And everybody who’s listening, if you enjoy this type of content, make sure you like this video and subscribe to this podcast because we’ve got more where that came from.
0:45:37 Thanks so much.
0:45:37 Thanks so much.

Episode 56: Is it possible to build a thriving content strategy—without ever stepping in front of a camera? Matt Wolfe (https://x.com/mreflow) is joined by guest Adam Biddlecombe (https://x.com/adam_bidd), founder of Mindstream, the daily AI newsletter now owned by HubSpot. Adam has rapidly grown his audience (especially on LinkedIn) while openly hating making videos. His solution? Becoming an expert in AI avatar tools to handle his video content creation.

In this episode, Matt and Adam dive deep into the world of AI avatars: the tools, the workflow, the best approaches for maximizing quality, and how these avatars are powering everything from viral Instagram channels to hyper-personalized B2B outreach. Whether you’re camera-shy, looking to scale your personal brand, or curious about the ethical and business implications of AI-driven video, this is the ultimate guide to the current landscape (and what’s coming next) for AI videos—straight from the creators who use them every day.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

—

Show Notes:

  • (00:00) Name Changes and Synthesia’s Evolution

  • (05:30) Testing Avatar Models: HeyGen Analysis

  • (09:17) Instagram Enhances AI for Age Detection

  • (10:59) Video Recording Challenges

  • (14:53) AI Influencers: Expanding Industry Trends

  • (18:54) Personalized AI Videos Boost Retention

  • (19:43) AI-Driven Email Personalization Trends

  • (25:19) Scamming Risks in Voice Tech

  • (28:02) UGC Avatars for Advertising Innovation

  • (32:13) Create Your Own Brand Mascot

  • (33:03) Brand-Interactive Avatars Revolution

  • (38:59) Enhancing Efficiency with Proficient Editors

  • (40:40) Challenges in Avatar Creation

  • (42:56) Microphone Usage Guidelines

—

Mentions:

Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

—

Check Out Matt’s Stuff:

• Future Tools – https://futuretools.beehiiv.com/

• Blog – https://www.mattwolfe.com/

• YouTube- https://www.youtube.com/@mreflow

—

Check Out Nathan’s Stuff:

The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
