AI transcript
0:00:10 When you open Gemini, it has a pop-up that says, we got a nano-banana.
0:00:11 Would you like to do something with it?
0:00:13 A little pane where you have to type something.
0:00:15 I don’t know what to do.
0:00:20 These are product nuances that I think make people actually take the first step.
0:00:25 The models have gotten to the level of quality that you can build a real scalable app on top of them.
0:00:30 And so the hope is 2026 will be a huge year for consumer builders.
0:00:36 As 2025 comes to a close, consumer AI is starting to look very different than it did at the beginning of the year.
0:00:40 A small number of products now dominate everyday usage.
0:00:46 New multimodal models have gone viral, and the big labs have pushed harder than ever into consumer experiences.
0:00:53 To take stock of the year, the a16z team, Anish Acharya, Olivia Moore, Justine Moore, and Bryan Kim
0:00:56 break down what actually worked in 2025 and what didn’t.
0:00:59 They discuss which model launches and interfaces changed user behavior,
0:01:03 why small product details matter more than raw model quality,
0:01:07 and whether the consumer AI market is trending toward winner-take-most.
0:01:12 The conversation also looks ahead to 2026, where there is still room for startups,
0:01:15 how templates and multimodality are reshaping creation,
0:01:19 and why this may finally be the moment when scalable consumer AI apps break out.
0:01:25 Today we’re talking about who won consumer AI in 2025.
0:01:29 This was arguably the year that we saw the big model providers,
0:01:34 OpenAI and Google, most out of everyone, make a major push of their own into consumer,
0:01:39 both in terms of new models they release, but also in terms of new products, features,
0:01:41 and interfaces that target the mainstream user.
0:01:45 You might wonder, why does it matter who is in the lead here?
0:01:51 There are some early signs that the general LLM assistant space might be trending towards winner-take-all,
0:01:53 or at least winner-take-most.
0:01:59 So only 9% of consumers are paying for more than one out of the group of ChatGPT,
0:02:01 Gemini, Claude, and Cursor.
0:02:08 And for most of the year, less than 10% of ChatGPT users even visited another one of the big LLM providers like Gemini.
0:02:17 If we had to call it now, ChatGPT is currently in the lead, by far, at 800 to 900 million weekly active users.
0:02:23 Gemini’s at an estimated 35% of their scale on web and about 40% on mobile.
0:02:25 And everyone else significantly trails this.
0:02:29 So Claude, Grok, Perplexity are all about 8% to 10% of the usage.
0:02:34 But, especially in the last three to six months, things are changing very quickly.
0:02:37 With the launch of new viral models like NanoBanana,
0:02:42 Gemini is now growing desktop users 155% year-over-year,
0:02:46 which is actually accelerating even as they reach more scale, which is pretty crazy to see.
0:02:50 And ChatGPT is only growing 23% year-over-year.
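[Editor's note: a rough back-of-the-envelope in Python on the figures just quoted. It simply compounds the stated year-over-year rates; the starting share blends the ~35% web and ~40% mobile estimates, and holding those rates constant across surfaces is an assumption, not a forecast.]

```python
# Back-of-the-envelope: how fast the gap closes if the quoted rates held steady.
# All inputs are the rough estimates cited in this conversation, not measured data.
gemini_share = 0.375      # Gemini at ~35% of ChatGPT's scale on web, ~40% on mobile
gemini_growth = 1.55      # Gemini desktop users up ~155% year-over-year
chatgpt_growth = 0.23     # ChatGPT up ~23% year-over-year

years = 0
while gemini_share < 1.0 and years < 10:
    # Each year the Gemini-to-ChatGPT ratio scales by the ratio of their growth factors.
    gemini_share *= (1 + gemini_growth) / (1 + chatgpt_growth)
    years += 1

print(f"Gemini reaches ~{gemini_share:.0%} of ChatGPT's scale after {years} year(s) at these rates.")
```

At those rates the ratio roughly doubles each year, which is why a catch-up within a year or two reads as plausible rather than guaranteed.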
0:02:56 And we’re starting to see players like Anthropic almost specialize within consumer,
0:02:58 owning different verticals like the hyper-technical user.
0:03:03 So today, we’ve brought together the A16Z consumer team to recap what we saw this year
0:03:08 from the big model companies in consumer, and also to predict what might be ahead of us in 2026.
0:03:10 Cool. Well, thank you, Olivia.
0:03:11 It’s been a super fun year.
0:03:14 If we kind of wind the timeline back to last January,
0:03:17 maybe we should start with what we saw, launches, products,
0:03:18 what worked, what didn’t.
0:03:20 So, Justine, tell us what you saw this year.
0:03:22 OpenAI, Google, what are you paying attention to?
0:03:23 What have you changed your mind on?
0:03:27 Yeah, those two in particular had a ton of consumer launches, like Olivia mentioned.
0:03:31 From a model perspective, I would argue their most viral models this year,
0:03:34 at least among consumers, were in image and video.
0:03:39 So, for OpenAI, it was the ChatGPT 4o image model, the Ghibli moment,
0:03:41 which is crazy that that was this year.
0:03:42 It doesn’t seem like it was this year.
0:03:44 It feels like it was years ago.
0:03:47 And then Sora, obviously, Sora 2.
0:03:51 And then for Google, it’s Veo, Veo 3, and Veo 3.1,
0:03:54 and then Nano Banana and Nano Banana Pro in image models,
0:03:57 which went insanely viral, probably comparable to,
0:04:00 if not beyond, the Ghibli moment for OpenAI.
0:04:03 I think in terms of the product layer,
0:04:09 what we saw was OpenAI tended to keep more things in the ChatGPT interface.
0:04:14 So, like, Pulse, group chats, shopping, research, tasks,
0:04:17 all of these features launched inside ChatGPT as the core.
0:04:21 The exception there is obviously Sora as a standalone video app,
0:04:24 whereas Google tended to launch more things as standalone products.
0:04:27 So, they did ship a lot through, like, Google AI Studio
0:04:30 and Google Labs and Gemini
0:04:34 and the plethora of Google Surfaces there are to launch a product.
0:04:36 But they would also ship things as standalone websites
0:04:38 that you could go to and visit,
0:04:40 which basically allowed for a more custom interface
0:04:42 for each type of product,
0:04:46 not just the kind of chat entry, chat exit, or image video exit.
0:04:48 Well, so, Justine, I have a question for you on that.
0:04:51 So, it felt like 18 months ago, we were talking about Midjourney,
0:04:55 and most of the multimodal models were defined by aesthetics and realism.
0:04:56 Is that still true?
0:04:57 What changed this year?
0:05:00 Yeah, I think there was definitely different styles still.
0:05:01 And I think Midjourney,
0:05:04 when you talk to people really deep in image and video,
0:05:07 it still kind of stands apart for this, like, aesthetic sensibility
0:05:10 that a lot of the models don’t have if you don’t know how to prompt for it.
0:05:12 But I would say, this year in particular,
0:05:14 we made a lot more strides on realism
0:05:18 and also on reasoning within both image and video.
0:05:22 Like, all of the little details that make an image or a video actually seem real.
0:05:25 For example, if you have a person walking and talking,
0:05:27 the people in the cars in the background,
0:05:30 if they’re on a street, should be moving in the correct direction,
0:05:33 like they shouldn’t be morphing and looking strange.
0:05:37 In image, we were able to have multiple input images and text
0:05:40 and sort of reason across all of those uploads
0:05:43 to create, like, a cohesive design or something like that,
0:05:45 which was not something we saw happening last year, for sure.
0:05:49 Yeah, I remember when we were excited about having
0:05:52 a letter show up correctly in images.
0:05:55 And now we have insane infographics.
0:05:55 Yes.
0:05:58 We can just put in an amazing YouTube video and say,
0:05:59 give me an image that explains this.
0:05:59 Yeah.
0:06:01 That’s incredibly different.
0:06:03 Nano Banana Pro can even generate, like, market maps.
0:06:05 Like, I would tell it, generate a market map in the space.
0:06:06 It generated a market map. It’s incredible.
0:06:10 And it either has or will go do the web research within the image model,
0:06:12 which is crazy, to get the correct list of companies
0:06:14 and then pull their correct logos, which is insane.
0:06:15 I know.
0:06:18 There’s one benchmark left that the reasoning image models have not cracked.
0:06:21 I tested GPT Image 1.5 yesterday.
0:06:25 They sometimes struggle with reasoning, especially multi-step reasoning.
0:06:29 So what I’ve been testing is you upload a picture of a monopoly board
0:06:31 and you say, remove the names of all the properties
0:06:35 and replace them with names of AI labs and startups.
0:06:39 And GPT Image 1.5 is actually the closest,
0:06:44 but it’s very hard for them to do all of those steps.
0:06:46 Remove it, come up with the new names,
0:06:47 put all of the new names in the correct places,
0:06:51 make sure there aren’t overlaps, or one company mentioned three times
0:06:53 and another big player never mentioned.
0:06:56 So there’s still some room to go on the image evals.
0:06:57 And it’s interesting that,
0:06:59 especially with the image model from ChatGPT,
0:07:02 you can actually see persistence, like,
0:07:06 it carries a character over into multiple image generations,
0:07:07 the same style.
0:07:07 Yeah.
0:07:09 And I thought that was like,
0:07:10 oh, like this is actually very interesting.
0:07:11 We’re storyboarding.
0:07:11 Totally.
0:07:13 makes you want to generate more.
0:07:14 Yeah.
0:07:15 You know, for me,
0:07:17 it felt like the most underhyped aspect of Nano Banana
0:07:19 was the integration with Search,
0:07:21 because it feels like there’s realism,
0:07:24 which is physics and sort of other things
0:07:25 that feel like uncanny valley.
0:07:27 There is reasoning,
0:07:29 which is apply modifications
0:07:31 that are adherent to what the user asked for.
0:07:33 But then there’s also sort of accuracy.
0:07:36 And for me, a good example of this is product photography.
0:07:38 If you say, hey, generate a photo of this album cover
0:07:42 or a historically accurate photo of this moment in time,
0:07:43 you have to actually have the search integration.
0:07:45 And that was sort of non-intuitive,
0:07:46 but it’s actually very useful.
0:07:47 Totally.
0:07:47 Yeah.
0:07:49 It’s kind of like the Veo 3 moment
0:07:52 when I don’t think it was intuitive to people
0:07:53 that video would be cracked necessarily
0:07:56 by bringing audio together with video in the same place.
0:07:57 And that ended up being the thing
0:07:59 that made AI video go viral.
0:08:00 Yeah.
0:08:01 Like since Veo 3,
0:08:03 and now Sora maybe dominates,
0:08:04 but like since Veo 3,
0:08:07 my social feeds have been like full of
0:08:08 really realistic looking.
0:08:09 I counted.
0:08:11 About one-fifth of my feeds are AI generated.
0:08:12 Amazing.
0:08:12 Wow.
0:08:13 What do you guys do?
0:08:15 There’s so many launches this year
0:08:16 and many of them went well,
0:08:17 like Veo and Nano.
0:08:18 What do you think is under-hyped
0:08:19 or products that you think
0:08:20 didn’t get enough attention?
0:08:21 Bryan?
0:08:23 It’s a good question.
0:08:26 I think the Pulses of the world
0:08:27 are probably still under-hyped.
0:08:29 And we’re talking about
0:08:31 OpenAI, Google,
0:08:33 which to me falls under productivity category.
0:08:35 So if you actually think about,
0:08:36 if you go to App Store today,
0:08:39 top five out of top 10 productivity apps
0:08:40 are all Google.
0:08:41 It’s insane.
0:08:42 And ChatGPT is number one.
0:08:44 So we’re talking about a productivity category
0:08:46 where it helps you do things.
0:08:48 And I feel like a lot of people
0:08:50 are trying this from a different angle.
0:08:52 Like how do I actually ingest your data
0:08:53 or your schedule,
0:08:54 your email,
0:08:55 to make it more helpful
0:08:56 and give more proactive
0:08:58 nudges and notifications to you?
0:09:00 I think a lot of people are working on it.
0:09:01 Given the frequency
0:09:03 of people using ChatGPT,
0:09:04 which I think is what,
0:09:06 25 times a week,
0:09:07 pretty good,
0:09:08 pretty good,
0:09:08 three to four times a day,
0:09:11 it feels like it’s a really good position
0:09:14 to actually give you proactive nudges
0:09:15 and summary
0:09:17 and help your life in general.
0:09:20 So I feel like the Everything app
0:09:21 was always this myth
0:09:22 in the Western world.
0:09:24 I think OpenAI is trying to move
0:09:24 in that direction
0:09:26 where it’s ingesting enough,
0:09:27 people are going there enough
0:09:29 to start giving really useful,
0:09:30 proactive nudges.
0:09:32 and I think that’s a space
0:09:33 that I’m excited about.
0:09:34 It’s interesting.
0:09:35 But are you a DAU?
0:09:37 I am not a DAU.
0:09:37 Of Pulse?
0:09:39 Not of Pulse.
0:09:41 Similarly, I tried Pulse for a while
0:09:43 and have kind of largely turned off of it.
0:09:44 But I would agree with you
0:09:46 that I feel like Pulse
0:09:47 and a couple other examples
0:09:49 that OpenAI launched this year
0:09:50 are kind of new primitives
0:09:52 or ideas that feel underhyped.
0:09:53 But because the execution
0:09:55 is a little off.
0:09:55 I think it’s execution.
0:09:57 The usage is off.
0:09:58 Another example that I would give,
0:09:59 which is similarly like
0:10:01 personal context,
0:10:02 would be their connectors.
0:10:03 So now you can,
0:10:05 and you can do this on Claude as well.
0:10:06 You can connect your calendar,
0:10:07 your email, your documents.
0:10:08 And so hypothetically,
0:10:10 you could say to ChatGPT,
0:10:11 you know,
0:10:12 read all of my memos
0:10:13 over the past six months
0:10:14 and like summarize
0:10:15 what’s most interesting,
0:10:16 least interesting.
0:10:17 I think when that works,
0:10:18 it’s really exciting.
0:10:19 I have found it to be
0:10:21 a little bit unreliable so far,
0:10:22 but I think as the models get better,
0:10:24 they have a real chance
0:10:25 to kind of own
0:10:26 the prosumer workspace
0:10:27 if they get that right.
0:10:28 Prosumer is a perfect category
0:10:29 because we talk about it sometimes,
0:10:31 but 99% of people
0:10:33 don’t run their lives on calendar.
0:10:33 Yeah.
0:10:34 We do.
0:10:34 Right.
0:10:35 So that’s what I’m thinking about,
0:10:38 the actual average frequency
0:10:39 of using ChatGPT.
0:10:40 And look,
0:10:42 if it’s 24 times a week,
0:10:44 that’s a pretty good place to start.
0:10:44 Yeah.
0:10:45 Olivia,
0:10:46 I feel like you are
0:10:47 the ultimate power user.
0:10:49 What are you still using?
0:10:50 What’s your stack?
0:10:51 It’s a great question.
0:10:53 From all of the larger model companies,
0:10:54 actually,
0:10:55 I would have to say the thing
0:10:56 that I’m still using the most
0:10:57 and was maybe the most impressed
0:10:58 by this year
0:11:00 was the Perplexity Comet browser.
0:11:01 And I don’t use,
0:11:04 and was not using, Perplexity
0:11:06 as my core general LLM assistant.
0:11:08 I use ChatGPT and Claude much more,
0:11:11 but I think they really executed on it
0:11:12 in a first class way
0:11:14 in terms of both the agentic model
0:11:15 within the browser,
0:11:17 but also perhaps more importantly,
0:11:18 all of the workflows
0:11:19 that you can set up
0:11:21 that allow you to basically run
0:11:23 the same task over and over,
0:11:24 either at a preset time
0:11:25 or when you trigger it
0:11:26 on a certain web page.
0:11:26 So that to me
0:11:27 was a really exciting launch.
0:11:28 And if you look at the data,
0:11:30 like the spike at launch
0:11:31 and the sustained traffic
0:11:32 for Comet
0:11:33 was actually much higher
0:11:35 than for ChatGPT’s
0:11:36 own browser launch,
0:11:36 Atlas,
0:11:38 which is kind of crazy
0:11:39 given how much more distribution
0:11:40 ChatGPT has
0:11:41 than Perplexity.
0:11:42 But I think
0:11:43 they also launched
0:11:45 an email assistant this year,
0:11:45 Perplexity did,
0:11:46 and they made
0:11:47 a couple acquisitions
0:11:48 of really strong
0:11:49 agentic startups.
0:11:51 And so what I would love
0:11:52 to see from them next year
0:11:53 is like more of these
0:11:55 dedicated prosumer interfaces.
0:11:56 I feel like that would be
0:11:57 an awesome direction
0:11:57 for them to kind of
0:11:58 double down in.
0:11:59 They do feel like
0:12:00 the startup that has
0:12:01 the biggest breadth
0:12:02 of ambition,
0:12:02 you know,
0:12:03 alongside the labs
0:12:04 and sort of big tech,
0:12:05 like it’s very,
0:12:05 very impressive
0:12:06 just the number of things
0:12:07 they’ve shipped this year.
0:12:08 Yes, definitely.
0:12:10 What, you know,
0:12:11 one thing I wanted to ask
0:12:11 you, Justine,
0:12:12 was sort of Gemini
0:12:13 feels like it’s having
0:12:14 a real moment
0:12:15 because of all the
0:12:16 image and video models.
0:12:17 Do you think it can
0:12:18 overtake ChatGPT?
0:12:19 Is there truly
0:12:20 that much demand
0:12:21 for these types of models?
0:12:23 I think, yeah.
0:12:23 So what we,
0:12:24 what I’ve seen basically
0:12:27 is there is always
0:12:29 nearly infinite demand
0:12:30 for like the best
0:12:31 in class image
0:12:32 or video model
0:12:34 because then you have
0:12:35 a mix of tons
0:12:36 of different people
0:12:36 seeing it
0:12:37 and wanting to use it.
0:12:37 You have,
0:12:39 like if you’re using
0:12:39 it professionally,
0:12:40 if you’re marketing
0:12:41 or in entertainment
0:12:42 or storyboarding
0:12:42 or whatever,
0:12:44 you always want
0:12:45 to be using
0:12:45 what’s at the forefront
0:12:46 of the field
0:12:47 and so you’re totally
0:12:47 fine to go somewhere
0:12:49 other than ChatGPT
0:12:49 and Sora
0:12:51 to get access to Veo.
0:12:52 Even if you’re
0:12:53 an everyday consumer,
0:12:55 so many new viral trends
0:12:55 are created
0:12:57 around new capabilities
0:12:58 of the best in class
0:12:59 image and video models
0:13:01 and so that ends up
0:13:02 driving users
0:13:03 into different products
0:13:05 that they may have
0:13:05 never tried before.
0:13:06 Like you might be
0:13:08 downloading the Gemini app
0:13:10 or accidentally ending up
0:13:11 on Google AI Studio,
0:13:12 which I know they’re trying
0:13:12 to make be more
0:13:13 for developers
0:13:15 to use Nano Banana Pro,
0:13:16 which a lot of users
0:13:17 I think experienced
0:13:19 in the past couple of months.
0:13:19 Yeah.
0:13:21 The interesting thing
0:13:22 about Gemini to me
0:13:23 is like hypothetically
0:13:24 they benefit from
0:13:25 the massive Google
0:13:26 distribution advantage.
0:13:27 Like if you look
0:13:27 at Android,
0:13:29 Gemini is at like
0:13:31 50% of ChatGPT’s
0:13:32 scale on mobile,
0:13:33 whereas on iOS
0:13:34 it’s like 17%.
0:13:35 So like clearly
0:13:36 something is working there.
0:13:37 They launched a little
0:13:37 Gemini widget
0:13:39 within Chrome recently
0:13:40 that encourages you
0:13:41 to use it.
0:13:41 They’re launching it
0:13:42 within Google Docs
0:13:43 and Gmail and other things.
0:13:44 Yeah.
0:13:46 But I think that most
0:13:47 the average person
0:13:48 is still just using
0:13:49 one AI product
0:13:51 and ChatGPT is like
0:13:52 the Kleenex of AI.
0:13:53 Like it is the brand
0:13:53 that has become
0:13:54 the noun.
0:13:55 Exactly.
0:13:55 Yes, yes, yes.
0:13:56 And so I think
0:13:57 that Gemini still has
0:13:58 a pretty big hurdle
0:13:59 to overcome
0:14:01 just in terms of that.
0:14:01 Yeah.
0:14:03 But if they keep
0:14:03 doing what they’re
0:14:04 doing on these
0:14:06 amazing viral
0:14:07 consumer creative
0:14:08 tool launches
0:14:09 and model launches
0:14:10 like they could
0:14:11 get there next year.
0:14:12 I’m thinking about this.
0:14:13 It’s really interesting
0:14:15 when you look at
0:14:16 Gemini,
0:14:17 which is everywhere.
0:14:18 Yeah.
0:14:18 Yeah.
0:14:19 But yet nowhere
0:14:20 to some extent, right?
0:14:21 You don’t like,
0:14:22 you know,
0:14:22 when you look at
0:14:23 the actual usage,
0:14:24 people still think
0:14:24 of the Kleenex.
0:14:25 Yep.
0:14:26 And they go to
0:14:27 ChatGPT.
0:14:28 But the interesting
0:14:29 thing also is on
0:14:30 the product sensibility.
0:14:32 So this morning
0:14:33 I had like two panes open,
0:14:35 OpenAI’s image model
0:14:37 and Google’s Gemini
0:14:38 and basically use
0:14:40 an image functionality.
0:14:41 When you open Gemini,
0:14:43 it’s a blank screen.
0:14:45 It has a pop-up
0:14:45 that says,
0:14:46 we got a nano banana.
0:14:47 Would you like
0:14:48 to do something with it?
0:14:50 And it’s a little pane
0:14:51 where you have to
0:14:51 type something.
0:14:52 Yeah.
0:14:53 I don’t know what to do.
0:14:53 Yeah.
0:14:55 ChatGPT,
0:14:55 you go in
0:14:56 and it has a very
0:14:58 TikTok-like style
0:14:58 of like,
0:14:59 here’s a trending
0:15:00 themes that you
0:15:01 might want to generate
0:15:02 and you click on
0:15:03 I want a sketch pen
0:15:04 or whatever
0:15:05 and then just like
0:15:06 use one other picture
0:15:06 and it creates
0:15:07 something amazing.
0:15:08 And then it says,
0:15:08 would you like
0:15:09 a holiday card?
0:15:09 Would you like
0:15:10 a blah, blah, blah, blah, blah.
0:15:12 These are product nuances
0:15:13 that I think
0:15:14 makes people actually
0:15:15 take the first step
0:15:17 to generate it.
0:15:18 And then once you have it,
0:15:19 you have character consistency.
0:15:19 Yeah.
0:15:20 So you keep going.
0:15:21 Right.
0:15:22 So that’s interesting
0:15:22 in that I think
0:15:23 OpenAI
0:15:24 and ChatGPT
0:15:25 has proven
0:15:25 that there is
0:15:27 deeper product sensibility.
0:15:28 Yeah.
0:15:29 But then I,
0:15:30 this is a funny thing,
0:15:31 maybe a little
0:15:32 non-cosher thing to say,
0:15:33 but you know,
0:15:35 I worked at Snap.
0:15:36 So when you look
0:15:37 at Meta versus Snap,
0:15:38 famously,
0:15:39 Evan Spiegel
0:15:40 was chief product
0:15:40 officer of Meta.
0:15:41 Yeah.
0:15:41 Yeah.
0:15:44 I wonder if there’s
0:15:44 a world where
0:15:46 the ChatGPT team
0:15:47 that innovates
0:15:48 on the product front
0:15:49 again and again,
0:15:50 Google with distribution,
0:15:51 looks at them like,
0:15:52 that’s cool.
0:15:53 Let’s just,
0:15:54 let’s just integrate it
0:15:55 and keep going
0:15:56 and actually play that game.
0:15:57 The interesting thing there
0:15:59 is that images pane
0:16:00 just launched yesterday
0:16:01 when we’re filming this.
0:16:02 In ChatGPT.
0:16:03 In ChatGPT.
0:16:03 Yeah.
0:16:04 Brand new.
0:16:04 Yeah.
0:16:04 And it took them,
0:16:06 like they had image models
0:16:07 for years
0:16:08 and it took them that long
0:16:09 to come up with a separate
0:16:11 relatively basic interface
0:16:12 for generating images.
0:16:14 I would almost argue
0:16:16 the application layer companies
0:16:17 like the CRIAs,
0:16:18 the Hedras,
0:16:19 the Higgs fields of the world
0:16:22 popularized that template format
0:16:23 and did it first
0:16:23 and did it better.
0:16:24 I agree.
0:16:26 And they are ChatGPT’s
0:16:27 product people
0:16:28 and then maybe
0:16:28 the ChatGPT product people are Google’s.
0:16:29 So it’s a supply chain
0:16:30 of product ideas.
0:16:30 Exactly.
0:16:32 Always.
0:16:33 Well, maybe going
0:16:34 in a slightly different direction,
0:16:34 BK,
0:16:35 I’m very curious for your take
0:16:37 on OpenAI’s social features
0:16:38 because it does feel like
0:16:39 that’s something
0:16:40 that you really have to get
0:16:41 product execution right on
0:16:42 but also network design.
0:16:43 You know,
0:16:43 there’s some efforts
0:16:44 around Sora too.
0:16:45 We should talk about that.
0:16:46 There’s also group chats
0:16:47 within ChatGPT.
0:16:49 You’re our sort of social guy
0:16:50 or have been historically
0:16:51 bullish, bearish.
0:16:51 Where’s your head at?
0:16:53 Bearish for now.
0:16:54 Okay.
0:16:56 And the reason to me
0:16:57 is twofold.
0:16:58 Historically,
0:16:59 we look at sort of,
0:17:00 it’s funny,
0:17:01 I look at products
0:17:03 based on what I call
0:17:04 inception theory.
0:17:05 You go like three
0:17:05 to four layers down
0:17:06 to figure out
0:17:07 what the one liner is
0:17:08 which is like
0:17:09 I want my dad to love me.
0:17:10 And so,
0:17:10 you know,
0:17:11 when they think
0:17:12 about products,
0:17:12 Is that for you
0:17:13 or for the world?
0:17:13 That’s for me
0:17:15 as well as for a lot of people.
0:17:15 Okay.
0:17:16 Yes, yes.
0:17:17 And so,
0:17:18 I look at some of the,
0:17:19 you know,
0:17:20 products like,
0:17:21 like ChatGPT.
0:17:22 Ultimately,
0:17:23 when you peel the onion
0:17:23 five times,
0:17:24 I think essentially
0:17:25 is help me be better.
0:17:26 Like,
0:17:27 help me get that information.
0:17:28 Help me be more productive.
0:17:29 Help me be more efficient.
0:17:31 And then when I think
0:17:32 about social features,
0:17:33 meta,
0:17:33 Instagram,
0:17:34 what have you,
0:17:35 or even TikTok,
0:17:37 the two layers
0:17:37 of information
0:17:38 or the,
0:17:39 you know,
0:17:40 emotion that it’s trying
0:17:40 to address to me
0:17:41 is for TikTok,
0:17:43 entertain me.
0:17:44 I want my clown
0:17:45 to entertain me.
0:17:45 Yeah.
0:17:46 And then the other layer
0:17:47 is I’m lonely,
0:17:48 I want to be seen,
0:17:49 I want to connect
0:17:49 with people.
0:17:51 And to me,
0:17:52 these are pretty
0:17:54 two different parallels
0:17:55 in the product direction.
0:17:57 And OpenAI’s product
0:17:58 is incredible.
0:17:59 It’s magic.
0:18:00 It’s amazing.
0:18:01 But it’s ultimately
0:18:02 a see me
0:18:03 or help me category,
0:18:04 which essentially
0:18:05 is why
0:18:06 it’s the number one
0:18:07 in productivity category.
0:18:07 Yeah.
0:18:09 Now we’re trying
0:18:10 to take this
0:18:11 and shove it
0:18:11 in people’s life
0:18:12 and say,
0:18:12 guys,
0:18:13 connect,
0:18:14 connect better
0:18:15 and like actually
0:18:17 feel like you’re being seen.
0:18:18 And even the group chat function,
0:18:19 which I love,
0:18:20 it’ll be so good
0:18:21 to plan a trip
0:18:22 and like actually
0:18:23 have that common pain.
0:18:25 But I think
0:18:26 it still stops
0:18:28 at probably
0:18:28 end count
0:18:30 of two to three people
0:18:31 planning something
0:18:32 in a help me way
0:18:32 versus,
0:18:33 oh,
0:18:34 I feel like
0:18:35 I understand
0:18:36 a niche so much better
0:18:38 because I’ve sort of
0:18:38 done that.
0:18:40 So largely over time,
0:18:42 I think that’s the reason
0:18:43 of that division.
0:18:44 But that is not to say
0:18:45 you can build
0:18:46 a separate product
0:18:47 that completely
0:18:48 sort of addresses that.
0:18:49 I think Sora,
0:18:51 so we talked about group chat,
0:18:52 Sora 2 was the other big
0:18:53 I think social push
0:18:54 this year
0:18:55 from all the consumer AI giants.
0:18:56 Which was basically
0:18:58 like a TikTok feed
0:18:59 but all AI generated
0:19:00 video and you can make
0:19:01 cameos of your friends.
0:19:03 The cameos was a very good bet.
0:19:03 Yeah.
0:19:04 It was a strong bet.
0:19:04 Yeah.
0:19:06 But I think what we’ve seen
0:19:08 is like in the retention data
0:19:09 and how we’re seeing it used
0:19:11 is it was massively successful
0:19:12 as a creator tool.
0:19:13 Like now,
0:19:15 my feed is probably
0:19:16 two-thirds AI slop
0:19:16 if not more.
0:19:19 And over 50% of it
0:19:20 is now Sora.
0:19:20 Whereas before
0:19:21 it was like all Veo
0:19:22 and some Kling.
0:19:24 but it has not been
0:19:25 as successful
0:19:26 as like a social app.
0:19:27 Consumption.
0:19:28 Yeah.
0:19:28 People are like
0:19:30 a small number of creators
0:19:31 are creating a ton of content
0:19:33 and then bringing it out
0:19:34 to like TikTok,
0:19:34 Instagram,
0:19:35 X,
0:19:35 Reddit
0:19:36 where it’s going
0:19:37 massively viral.
0:19:38 But it doesn’t seem
0:19:39 like there’s a
0:19:40 as much
0:19:42 consumption happening
0:19:42 in the app.
0:19:43 Yeah.
0:19:44 As much remixing,
0:19:44 as much commenting,
0:19:46 especially as there was initially.
0:19:47 You know,
0:19:47 in a funny way,
0:19:48 the way I think about it
0:19:50 is like Sora’s competition
0:19:51 or analogy
0:19:53 isn’t actually TikTok.
0:19:54 It’s actually Capcom.
0:19:57 It’s like a funny way.
0:19:57 It’s almost like
0:19:58 a creative tool.
0:19:58 Yes.
0:19:59 Interesting.
0:19:59 Yes.
0:20:00 Olivia,
0:20:00 what’s your thing?
0:20:00 Well,
0:20:01 I was going to say
0:20:02 like I think it goes back
0:20:03 to your earlier point
0:20:04 which is like
0:20:06 the kind of motion
0:20:07 that drives social apps
0:20:09 is both these like
0:20:10 positive and negative
0:20:11 feelings of like
0:20:12 oh,
0:20:13 I’m publishing this thing
0:20:14 of myself
0:20:15 that’s kind of sensitive
0:20:16 or that I want people
0:20:17 to think it’s this
0:20:17 or that
0:20:18 or this other thing
0:20:19 and so that’s kind of
0:20:21 what drives participation
0:20:21 on the app.
0:20:22 Yeah.
0:20:22 The status game.
0:20:23 Yeah.
0:20:24 A little bit of the status game.
0:20:25 Exactly the status game
0:20:27 and when it’s AI generated content
0:20:28 and people know
0:20:29 it’s not real
0:20:31 like a real representation
0:20:32 of you as a human being
0:20:33 the status game
0:20:34 is lost a little bit.
0:20:35 Absolutely lost.
0:20:35 Yeah.
0:20:36 I think the status game
0:20:37 comes then with
0:20:39 can you prompt
0:20:40 something very cool?
0:20:40 Yeah.
0:20:41 But that’s a different type
0:20:42 of product
0:20:43 and that’s why
0:20:44 I think it goes viral
0:20:45 on like Twitter
0:20:45 and all these other
0:20:46 existing platforms.
0:20:47 I mean,
0:20:48 my sort of counterpoint
0:20:49 or bull case
0:20:49 for Sora 2
0:20:50 is I actually think
0:20:51 the status game
0:20:51 was about humor
0:20:53 more than anything else
0:20:54 and humor is the intersection
0:20:56 of knowing how to prompt
0:20:56 and sort of being
0:20:57 culturally aware.
0:20:58 Yeah.
0:20:59 So I think that if they
0:21:00 iterated on that
0:21:01 that’s like a direction
0:21:02 that nobody has captured before.
0:21:02 Yeah.
0:21:03 Yes,
0:21:04 but if you can export
0:21:05 those videos
0:21:06 isn’t it true
0:21:08 that like TikTok
0:21:09 with Sora videos on it
0:21:11 is strictly better than Sora?
0:21:12 We talked about it so much
0:21:13 where like
0:21:13 the ultimate
0:21:15 social product
0:21:16 is where consumption
0:21:17 and creation
0:21:18 both live together
0:21:19 and that the output
0:21:20 of it is
0:21:21 not native
0:21:22 to other platforms
0:21:23 like TikTok
0:21:24 like YouTube Shorts.
0:21:27 So what do folks
0:21:27 think of the challengers?
0:21:28 You know,
0:21:29 we’re talking about
0:21:29 Sora 2
0:21:30 who I mean
0:21:31 Meta
0:21:32 it’s crazy
0:21:32 to talk about
0:21:33 Meta as a challenger
0:21:34 I guess in this context
0:21:34 they are
0:21:35 but I think
0:21:36 Claude,
0:21:36 Perplexity,
0:21:37 Grok
0:21:38 are the more obvious
0:21:39 names for challengers.
0:21:39 Olivia,
0:21:40 what’s your take?
0:21:41 I love Claude.
0:21:41 I talk to Claude
0:21:42 all the time.
0:21:43 Claude is somewhat
0:21:44 replaced ChachiBT
0:21:45 for me as my
0:21:46 general LLM.
0:21:47 I think Claude
0:21:48 is opinionated
0:21:49 in an interesting way.
0:21:51 I also love Claude
0:21:52 because I’m willing
0:21:53 to invest time
0:21:53 into building out
0:21:54 AI workflows.
0:21:55 I think Claude
0:21:56 actually launched
0:21:56 a lot of really
0:21:57 powerful things this year
0:21:58 around like artifacts
0:21:59 and skills
0:22:00 where you can
0:22:01 essentially set up
0:22:03 tasks or workflows
0:22:05 to run over time.
0:22:06 I do think the reason
0:22:07 it hasn’t hit
0:22:08 the mainstream yet
0:22:08 is even the way
0:22:09 they built those things
0:22:10 is geared towards
0:22:12 a technical user
0:22:13 or an engineer.
0:22:14 It’s
0:22:15 I think they tried
0:22:16 to make skills
0:22:17 as easy as they could
0:22:18 to create
0:22:19 and it still
0:22:20 was not anywhere
0:22:21 near easy enough
0:22:22 for the mainstream consumer.
0:22:23 Another example
0:22:24 would be
0:22:25 they were actually
0:22:25 the first
0:22:27 of the big players
0:22:28 to kind of launch
0:22:29 file creation
0:22:30 slide deck creation
0:22:31 editing
0:22:33 and they branded it
0:22:33 as like
0:22:34 file generation
0:22:35 and analysis
0:22:36 or something
0:22:36 and it was like
0:22:37 a toggle feature
0:22:39 within a setting bar
0:22:40 of a setting bar
0:22:41 or something
0:22:41 so like
0:22:42 very few people
0:22:42 used it
0:22:43 and yet to me
0:22:44 it’s still
0:22:45 the best product
0:22:46 across all of them
0:22:47 at doing that
0:22:48 kind of complex work.
0:22:50 So I love Claude
0:22:50 but I think
0:22:51 if they want
0:22:52 to be a true
0:22:53 mainstream consumer
0:22:54 product
0:22:55 they need
0:22:56 to
0:22:57 dumb it down
0:22:58 even more
0:22:59 in terms
0:22:59 of accessibility.
0:23:00 There was
0:23:00 that survey
0:23:00 you found
0:23:01 recently
0:23:01 of U.S.
0:23:02 teens.
0:23:03 Yeah there’s
0:23:04 I think it was
0:23:05 three times
0:23:05 more U.S.
0:23:06 teens have
0:23:06 ever used
0:23:07 character AI
0:23:07 than have
0:23:08 used Claude.
0:23:08 Yeah.
0:23:09 So I think
0:23:09 that shows
0:23:10 that like
0:23:10 Claude is
0:23:11 a pretty niche
0:23:12 thing.
0:23:13 Claude is beloved
0:23:13 amongst
0:23:14 tech people
0:23:15 but outside
0:23:16 of tech people
0:23:16 I think
0:23:17 they are
0:23:18 maybe struggling
0:23:18 to pick up
0:23:19 relevance.
0:23:20 It is
0:23:20 interesting
0:23:20 though
0:23:20 like
0:23:21 if you
0:23:21 look at
0:23:21 the sort
0:23:21 of
0:23:22 aesthetics
0:23:22 the product
0:23:23 design
0:23:24 the craft
0:23:24 like three
0:23:25 things that
0:23:26 Anthropic did
0:23:27 were MCP,
0:23:29 Skills, and
0:23:30 the command-line
0:23:30 interface,
0:23:31 Claude Code
0:23:32 like those
0:23:32 are three
0:23:33 surprising bets
0:23:35 especially Claude code
0:23:35 I would have
0:23:36 said command line
0:23:37 interface really
0:23:37 like is this
0:23:38 the way that
0:23:38 people want
0:23:38 to interact
0:23:39 I thought
0:23:39 you were
0:23:39 going to talk
0:23:40 about taking
0:23:41 over airmail
0:23:41 and the thinking
0:23:42 cap
0:23:43 yeah that too
0:23:44 they’re a consumer
0:23:46 so three
0:23:46 things like
0:23:47 where’s the
0:23:47 thinking cap
0:23:49 but it’s
0:23:49 sort of
0:23:50 very high
0:23:50 minded
0:23:51 design
0:23:51 it’s
0:23:52 sort of
0:23:52 like
0:23:52 versus
0:23:52 mass
0:23:53 market
0:23:54 or maybe
0:23:54 that’s
0:23:55 apologetic
0:23:55 on their
0:23:55 behalf
0:23:56 but I
0:23:56 think
0:23:56 it is
0:23:57 that
0:23:57 it’s
0:23:58 opinionated
0:23:58 and it’s
0:23:58 great
0:24:00 I do need
0:24:00 to hear
0:24:01 Justine’s
0:24:01 take on
0:24:01 both
0:24:01 Meta
0:24:02 and
0:24:02 Grok
0:24:02 as I
0:24:03 feel like
0:24:03 they both
0:24:03 had
0:24:04 fascinating
0:24:04 years
0:24:05 in different
0:24:05 ways
0:24:06 so
0:24:07 Meta
0:24:08 hired
0:24:09 all those
0:24:09 researchers
0:24:10 I think
0:24:10 their
0:24:10 strongest
0:24:10 models
0:24:11 are
0:24:11 actually
0:24:12 not
0:24:12 consumer
0:24:12 facing
0:24:13 models
0:24:13 it’s
0:24:13 their
0:24:14 SAM
0:24:15 3
0:24:15 series
0:24:15 so
0:24:16 like
0:24:16 segment
0:24:16 anything
0:24:17 for
0:24:17 video
0:24:18 for
0:24:18 image
0:24:19 and
0:24:19 for
0:24:19 audio
0:24:20 and
0:24:20 basically
0:24:20 like
0:24:21 for
0:24:21 video
0:24:22 for example
0:24:22 you can
0:24:23 upload
0:24:23 a video
0:24:23 and you
0:24:24 can describe
0:24:24 in natural
0:24:25 language
0:24:25 like
0:24:26 find
0:24:27 the kid
0:24:27 in the
0:24:27 red
0:24:28 t-shirt
0:24:29 and it
0:24:29 will find
0:24:29 and track
0:24:30 that person
0:24:31 across
0:24:32 every
0:24:32 the entire
0:24:33 video
0:24:33 even if
0:24:33 they’re
0:24:34 coming in
0:24:34 and out
0:24:34 of the
0:24:34 frame
0:24:35 it will
0:24:35 let you
0:24:35 apply
0:24:36 effects
0:24:36 like
0:24:36 blurring
0:24:37 them out
0:24:37 or removing
0:24:38 them
0:24:38 or whatever
0:24:39 and
0:24:39 you
0:24:39 can
0:24:39 imagine
0:24:39 a
0:24:40 similar
0:24:40 thing
0:24:40 with
0:24:41 audio
0:24:42 with
0:24:42 different
0:24:43 stems
0:24:43 and
0:24:43 then
0:24:44 with
0:24:44 image
0:24:44 with
0:24:45 different
0:24:45 objects
0:24:45 in an
0:24:46 image
0:24:47 I
0:24:47 think
0:24:47 we’re
0:24:47 going
0:24:47 to
0:24:47 see
0:24:48 next
0:24:48 year
0:24:48 hopefully
0:24:49 some
0:24:49 incredible
0:24:50 consumer
0:24:50 products
0:24:50 built
0:24:51 on top
0:24:51 of
0:24:51 those
0:24:52 models
0:24:53 but
0:24:53 today
0:24:54 they’re
0:24:54 more
0:24:54 of a
0:24:54 playground
0:24:55 for
0:24:55 developers
0:24:56 than
0:24:56 they
0:24:56 are
0:24:56 a
0:24:57 consumer
0:24:57 which
0:24:57 is
0:24:57 surprising
0:24:58 given
0:24:58 just
0:24:59 like
0:24:59 the
0:24:59 DNA
0:24:59 of
0:24:59 the
0:25:00 company
0:25:00 yeah
0:25:01 so
0:25:01 the
0:25:01 one
0:25:02 good
0:25:02 consumer
0:25:02 feature
0:25:03 I
0:25:03 think
0:25:03 they’ve
0:25:03 launched
0:25:03 this
0:25:04 year
0:25:04 with
0:25:04 AI
0:25:05 is
0:25:05 the
0:25:05 Instagram
0:25:06 AI
0:25:06 translations
0:25:08 where
0:25:10 when you’re uploading a reel now
0:25:12 you can opt in to enable translations
0:25:13 and it will
0:25:14 clone your voice
0:25:16 translate it into five different languages
0:25:19 apply the translation with your voice
0:25:20 so you know
0:25:21 and then re-dub with
0:25:22 the lip sync
0:25:23 wow
0:25:25 and so it basically makes it seem like you’re a native speaker
0:25:26 in whatever language
0:25:29 so I would love to see more of that stuff come to
0:25:31 to the meta products
0:25:33 Grok I think has had
0:25:35 so Grok had a crazy year
0:25:36 with like the companions
0:25:37 yes
0:25:38 with all of the LLM progress
0:25:39 and the coding progress
0:25:42 I think their image and video progress
0:25:44 is probably the steepest slope
0:25:46 I’ve seen of any of the companies
0:25:47 like
0:25:50 it was probably like six months ago
0:25:52 they didn’t even like have image and video models
0:25:53 and
0:25:55 they’re shipping so fast
0:25:56 to launch new features
0:25:58 like it was initially just image to video
0:26:00 they added text to video
0:26:01 they added audio
0:26:02 then they added lip sync with speech
0:26:04 then they added 15 second videos
0:26:05 like
0:26:07 they’re just not slowing down the speed of progress
0:26:08 and
0:26:10 Elon has made a bunch of statements about like
0:26:14 wanting more interactive video game type content out of Grok
0:26:17 and wanting movies out of Grok by the end of next year
0:26:17 so
0:26:20 let’s hope it continues to go at that pace
0:26:22 do you feel like it’s a pincer movement
0:26:23 where like on one hand
0:26:24 there’s like a very
0:26:27 infrastructural model layer of like
0:26:27 let’s get to the
0:26:30 let’s top the LMArena charts
0:26:31 and then the other one is like
0:26:33 let’s go Ani
0:26:35 I think
0:26:37 it’s like a little bit of like a bifurcated move
0:26:38 right
0:26:39 like the entertainment and the like
0:26:40 absolutely
0:26:41 but entertainment in a way that like
0:26:43 we’re talking about you know
0:26:46 Anthropic and ChatGPT as these general population products,
0:26:49 but you just said character AI is way more popular
0:26:49 yes
0:26:51 so then like how do we think about that
0:26:52 and I think you know
0:26:53 it’s a very interesting
0:26:55 strategy in my mind
0:26:57 and Grok like in the image and video app
0:26:59 since pretty early on
0:27:00 they’ve had templates
0:27:01 of popular things
0:27:02 like
0:27:03 you’re standing somewhere
0:27:05 and suddenly like a thing drops
0:27:06 a rope drops from the ceiling
0:27:07 and you grab onto it
0:27:08 and it like swings you out of the scene
0:27:09 like
0:27:10 some really good ones
0:27:11 that go viral regularly
0:27:13 on TikTok and other places
0:27:14 yeah
0:27:15 really really interesting
0:27:17 well so maybe switching gears
0:27:18 from 25 to 26
0:27:20 what are some of all
0:27:21 your predictions for next year
0:27:22 what do you think we’ll see
0:27:23 hardware
0:27:24 models
0:27:24 commerce
0:27:26 we haven’t spoken about yet
0:27:28 so what do we think will play out
0:27:29 I think
0:27:30 I know this is
0:27:31 we’re talking about consumer
0:27:32 but one of the things
0:27:33 that’s been really
0:27:35 maybe underrated for me
0:27:35 about ChatGPT
0:27:37 that we might see more of next year
0:27:39 is they’ve really made a push
0:27:40 into the enterprise
0:27:41 both with the traditional
0:27:42 enterprise licenses
0:27:43 and then working with
0:27:44 specific companies
0:27:46 to even like train models for them
0:27:47 and I think when we think about
0:27:48 the fact that
0:27:50 most consumers only use
0:27:52 one general LLM product
0:27:54 ChatGPT enterprise usage
0:27:55 they publish a big study
0:27:57 but it’s up somewhat like
0:27:58 8 or 9x year over year
0:27:58 yeah
0:28:00 and so if we’re entering a world now
0:28:01 where people
0:28:02 have to
0:28:03 use ChatGPT
0:28:04 for their company
0:28:05 or as part of their work
0:28:06 yeah
0:28:07 that could really translate
0:28:08 into consumer usage
0:28:08 yeah
0:28:09 or
0:28:11 maybe they become
0:28:12 the workspace
0:28:13 with the connectors
0:28:14 and some of the other things
0:28:15 that they’re investing in
0:28:16 and someone else
0:28:18 owns the consumer
0:28:19 consumer use cases
0:28:19 yeah
0:28:20 I think
0:28:21 to that end
0:28:22 we have to talk about
0:28:23 their push into apps
0:28:23 and I think
0:28:24 whether or not
0:28:25 that works
0:28:25 is going to be
0:28:27 kind of the defining question
0:28:27 for them next year
0:28:28 yeah
0:28:29 and I think that the
0:28:30 we’ve all discussed
0:28:30 the importance
0:28:31 of the apps SDK
0:28:32 and the apps directory
0:28:33 as they’re calling it
0:28:34 and it’s going to be
0:28:35 a huge new channel
0:28:35 for a consumer
0:28:36 I think what’s less discussed
0:28:37 is it’s hyper relevant
0:28:38 to enterprise
0:28:39 so I think
0:28:40 where ChatGPT shines
0:28:41 is where it’s able
0:28:42 to operate
0:28:43 across a number
0:28:43 of tools
0:28:44 for one workflow
0:28:45 and if you think
0:28:46 about the number
0:28:46 of things you do
0:28:47 in your sort of
0:28:48 business day to day
0:28:49 that operates
0:28:50 across many tools
0:28:51 it’s most of those things
0:28:52 yeah
0:28:52 so I think
0:28:52 that will have
0:28:53 very interesting
0:28:54 implications
0:28:55 for the SaaS ecosystem
0:28:56 and it’s a part
0:28:57 of the app store
0:28:57 we’re not talking
0:28:58 about as much
0:28:58 yeah
0:28:59 yeah
0:29:00 maybe less of a prediction
0:29:01 but I’m thinking
0:29:03 through 2025
0:29:04 and we talked
0:29:05 about all the big
0:29:06 moves from big labs
0:29:08 and from the startup
0:29:08 point
0:29:09 I think one of the
0:29:11 biggest trends
0:29:11 we’ve seen
0:29:12 is app generation
0:29:14 and I think
0:29:15 there is a real world
0:29:17 where we see
0:29:18 the big labs
0:29:19 with the distribution
0:29:20 and the frequency
0:29:20 of usage
0:29:22 of people coming in
0:29:23 to start saying
0:29:24 look like
0:29:24 maybe there is
0:29:25 a common
0:29:27 type of product
0:29:27 and apps
0:29:28 that we could
0:29:28 actually help you
0:29:29 generate
0:29:30 within the confines
0:29:32 of the big lab
0:29:32 products
0:29:32 yeah
0:29:33 I think that’s
0:29:34 like one of the
0:29:34 interesting thing
0:29:35 which you know
0:29:35 again going back
0:29:36 to the supply chain
0:29:37 of ideas
0:29:37 and research
0:29:38 maybe that’s
0:29:38 one thing
0:29:39 and again
0:29:40 nothing groundbreaking
0:29:41 but as we know
0:29:43 the Ghibli
0:29:44 broke the internet
0:29:45 my cousin
0:29:47 who knows
0:29:47 nothing
0:29:48 about tech
0:29:50 sent me
0:29:50 a Ghibli
0:29:51 photo
0:29:52 well let’s not
0:29:53 send this to your
0:29:53 cousin then
0:29:54 yeah
0:29:56 and I think
0:29:57 that goes
0:29:58 to show
0:29:59 that templates
0:30:00 matter
0:30:00 yeah
0:30:01 that style
0:30:01 matters
0:30:02 yeah
0:30:03 and I think
0:30:03 about video
0:30:04 and like
0:30:05 it’s pretty
0:30:06 freaking good
0:30:06 yeah
0:30:08 and it’s possible
0:30:08 that we’re
0:30:09 already at a point
0:30:10 that it’s not
0:30:11 necessarily
0:30:12 just about
0:30:12 the capability
0:30:13 of models
0:30:14 of the big labs
0:30:15 but the
0:30:16 stylistic things
0:30:16 the template
0:30:17 think of
0:30:18 TikTok
0:30:19 the core
0:30:20 capability is
0:30:21 largely still
0:30:21 the same
0:30:22 music
0:30:22 trend
0:30:23 dance
0:30:23 go
0:30:25 except the
0:30:26 trend and format
0:30:26 keeps on changing
0:30:27 keeps it extremely
0:30:28 fresh
0:30:29 so I feel like
0:30:29 there’s a real
0:30:30 world where
0:30:32 the repurposor
0:30:32 team or what have you
0:30:33 can start thinking
0:30:34 about ways to
0:30:35 actually really
0:30:36 build in
0:30:37 video first
0:30:38 products
0:30:38 into these
0:30:39 lab models
0:30:40 and I think
0:30:40 the cost will go
0:30:41 down enough
0:30:41 for people to
0:30:42 try it out
0:30:42 and I’m
0:30:43 excited to
0:30:43 see that
0:30:44 yeah I think
0:30:45 what I’m
0:30:45 most excited
0:30:46 about is
0:30:47 sort of along
0:30:48 those lines
0:30:49 basically everything
0:30:50 becoming multimodal
0:30:51 like I call it
0:30:52 like anything in
0:30:53 to anything out
0:30:54 which is
0:30:55 basically initially
0:30:56 especially with
0:30:57 these image and
0:30:57 video models
0:30:59 it was
0:31:00 you put in
0:31:00 a text prompt
0:31:02 and you get
0:31:03 an image out
0:31:04 or a video out
0:31:04 you couldn’t
0:31:05 really do much
0:31:05 with it
0:31:06 and now
0:31:08 we started
0:31:08 to see this
0:31:09 with the
0:31:09 image edit
0:31:10 models
0:31:10 with like
0:31:11 nano banana
0:31:11 and with
0:31:12 flux
0:31:14 and with
0:31:14 the new
0:31:14 OpenAI
0:31:15 model
0:31:16 where you
0:31:17 can put an
0:31:17 image in
0:31:18 now and get
0:31:19 another image
0:31:19 out
0:31:20 you can put
0:31:20 an image in
0:31:21 with a text
0:31:21 prompt and a
0:31:22 direction
0:31:23 or put an
0:31:23 image with
0:31:24 a template
0:31:25 another reference
0:31:26 image and get
0:31:26 another image
0:31:26 out
0:31:28 what happens
0:31:28 when you can
0:31:29 put a video
0:31:30 in and get
0:31:31 images out
0:31:32 that are
0:31:33 related to
0:31:33 or the
0:31:34 next iteration
0:31:35 of the video
0:31:35 or you can
0:31:36 put a video
0:31:37 in and a text
0:31:38 prompt about
0:31:38 what you want
0:31:39 to edit
0:31:39 and get the
0:31:40 edited video
0:31:40 out
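[Editor's note: a minimal sketch of the "image plus text in, image out" pattern described here, written against Google's google-genai Python SDK. The model name, call shape, and response fields follow the SDK's published image-generation examples, but treat the exact identifiers as assumptions and check the current docs before relying on them.]

```python
# Sketch: one reference image plus one text instruction in, an edited image back out.
from io import BytesIO

from google import genai   # pip install google-genai
from PIL import Image      # pip install pillow

client = genai.Client()    # expects an API key in the environment

source = Image.open("product_shot.png")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # the "Nano Banana" image model; name may change
    contents=[source, "Place this product on a marble countertop in soft morning light."],
)

# The response interleaves text and image parts; save any returned image bytes.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None):
        Image.open(BytesIO(part.inline_data.data)).save("edited.png")
    elif getattr(part, "text", None):
        print(part.text)
```

The same call shape extends to several reference images at once, which is the "anything in, anything out" direction the speakers are describing.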
0:31:42 from my
0:31:43 conversations
0:31:43 with the
0:31:44 labs
0:31:44 a lot of
0:31:44 them are
0:31:45 trying to
0:31:46 basically
0:31:47 combine all
0:31:48 these largely
0:31:48 separate efforts
0:31:49 they’ve had
0:31:50 across like
0:31:51 text reasoning
0:31:52 and intelligence
0:31:52 the LLM
0:31:53 space and
0:31:54 image and
0:31:55 video into
0:31:56 like what
0:31:56 if we can
0:31:57 merge
0:31:58 those all
0:31:58 into like
0:31:58 a mega
0:31:59 model that
0:32:00 can take
0:32:00 a lot
0:32:01 different
0:32:01 forms of
0:32:02 content
0:32:03 and produce
0:32:04 much more
0:32:04 I think
0:32:05 it’s also
0:32:05 going to
0:32:05 have huge
0:32:06 implications
0:32:06 for like
0:32:07 design
0:32:08 because if
0:32:08 you think
0:32:09 about it
0:32:09 a lot of
0:32:10 design is
0:32:10 combining
0:32:11 images
0:32:12 with text
0:32:13 with video
0:32:14 with different
0:32:14 elements
0:32:15 in kind
0:32:15 of interesting
0:32:16 ways
0:32:16 yeah
0:32:17 I guess
0:32:18 if I think
0:32:18 about like
0:32:18 a macro
0:32:19 level
0:32:19 prediction
0:32:21 I think
0:32:21 it’s actually
0:32:22 going to be
0:32:22 more of the
0:32:22 same
0:32:23 in that
0:32:24 when we
0:32:24 talk about
0:32:25 what all
0:32:25 of the
0:32:26 labs have
0:32:26 launched
0:32:27 in consumer
0:32:28 they’ve done
0:32:28 a great
0:32:29 job with
0:32:29 models
0:32:30 and they’ve
0:32:30 done a great
0:32:31 job with
0:32:31 incremental
0:32:32 things that
0:32:33 improve the
0:32:33 core experience
0:32:34 of using
0:32:34 like a
0:32:35 ChatGPT
0:32:35 or
0:32:36 Gemini
0:32:37 in my
0:32:37 opinion
0:32:38 we’ve gone
0:32:38 through dozens
0:32:39 of things that
0:32:40 they’ve launched
0:32:40 or tried as
0:32:41 new consumer
0:32:42 products or
0:32:42 new consumer
0:32:43 interfaces like
0:32:44 group chat
0:32:45 like pulse
0:32:46 like atlas
0:32:47 like sora
0:32:48 google has had
0:32:49 a long tail
0:32:49 like Stitch,
0:32:50 Gems,
0:32:51 Opal,
0:32:51 Doppl,
0:32:52 tons
0:32:53 none of
0:32:53 those are
0:32:54 really working
0:32:55 and I think
0:32:55 it’s because
0:32:56 it’s not the
0:32:57 core competency
0:32:57 of these
0:32:58 companies
0:32:58 anymore
0:32:59 to build
0:33:00 opinionated
0:33:01 standalone
0:33:01 consumer
0:33:02 ui
0:33:02 out of
0:33:03 all of
0:33:03 those
0:33:04 I think
0:33:04 the product
0:33:04 that’s
0:33:05 working the
0:33:05 most is
0:33:06 like NotebookLM
0:33:07 and that’s
0:33:07 one of
0:33:08 like maybe
0:33:09 20 things
0:33:09 that google
0:33:09 has tried
0:33:10 or experimented
0:33:10 with
0:33:11 so I think
0:33:11 it’s actually
0:33:12 very positive
0:33:13 for startups
0:33:13 in that
0:33:14 consumer
0:33:15 startups
0:33:15 and that
0:33:15 the models
0:33:16 will keep
0:33:16 getting better
0:33:17 which the
0:33:17 startups
0:33:18 can use
0:33:19 and they’ll
0:33:19 keep
0:33:19 you know
0:33:20 they’ll make
0:33:20 ChatGPT
0:33:21 better and
0:33:21 better but
0:33:21 I don’t
0:33:22 necessarily
0:33:22 think that
0:33:23 ChatGPT
0:33:25 like verticalizes
0:33:26 into all of
0:33:26 these other
0:33:27 amazing use
0:33:27 cases or
0:33:28 products and
0:33:28 there’s still
0:33:29 room for
0:33:29 startups to
0:33:30 be building
0:33:30 there I
0:33:30 have a
0:33:31 yes and
0:33:31 to that
0:33:32 okay
0:33:32 where
0:33:33 absolutely
0:33:34 but however
0:33:35 when the
0:33:36 input and
0:33:37 output is
0:33:37 text
0:33:38 yep
0:33:39 where
0:33:40 ChatGPT
0:33:40 and Gemini
0:33:40 of the world
0:33:41 shine the
0:33:42 most no
0:33:43 matter how
0:33:43 deep you
0:33:44 go no
0:33:44 matter how
0:33:45 specific you
0:33:45 think your
0:33:46 text output
0:33:46 is going
0:33:47 to be
0:33:47 essentially
0:33:48 given the
0:33:49 frequency of
0:33:50 usage of the
0:33:50 main
0:33:51 big lab
0:33:51 product
0:33:52 yeah
0:33:52 I think
0:33:52 it’s going
0:33:52 to be
0:33:53 really hard
0:33:54 to stitch
0:33:54 that and
0:33:55 get that
0:33:55 away from
0:33:56 that usage
0:33:57 if your
0:33:57 product is
0:33:59 mainly text in
0:33:59 and text
0:34:00 out
0:34:00 yeah
0:34:01 so I do
0:34:02 think you
0:34:02 have to be
0:34:03 creative around
0:34:04 what is the
0:34:05 angle that
0:34:06 you can like
0:34:07 go steal people
0:34:07 away from
0:34:08 you know I love
0:34:09 that you use
0:34:09 the word
0:34:10 opinionated
0:34:10 because I
0:34:11 think that
0:34:11 for labs
0:34:12 certainly for
0:34:13 big tech and
0:34:14 perhaps increasingly
0:34:14 for labs
0:34:15 the priorities
0:34:16 get set in
0:34:16 their promo
0:34:16 committee
0:34:17 always and
0:34:18 if you’re a
0:34:18 PM and
0:34:19 it’s always
0:34:19 the sort of
0:34:20 mid-career
0:34:20 PMs and
0:34:21 I’ve been one
0:34:21 of these
0:34:22 and like the
0:34:22 incentives are
0:34:23 always to get
0:34:24 promoted and
0:34:24 the way to get
0:34:25 promoted is to
0:34:25 build something
0:34:26 safe that
0:34:27 extends a core
0:34:28 metric and a
0:34:29 core feature
0:34:30 so building
0:34:31 opinionated products
0:34:32 is a very risky
0:34:32 way to manage
0:34:33 your career
0:34:33 you know because
0:34:34 they’re probably
0:34:34 not going to
0:34:35 work they’re
0:34:35 probably going to
0:34:36 have a bunch of
0:34:37 implications for
0:34:37 legal and
0:34:38 compliance and
0:34:39 the CEO might
0:34:39 yell at you
0:34:40 so I just think
0:34:41 that they are so
0:34:42 structured to do
0:34:43 incremental things
0:34:43 the more founders
0:34:44 do opinionated
0:34:45 things the more
0:34:46 advantaged they are
0:34:47 I think honestly
0:34:48 the big thing
0:34:48 we haven’t
0:34:49 discussed here
0:34:49 too is compute
0:34:50 which is the
0:34:51 labs have this
0:34:52 inherent tension
0:34:53 between there’s a
0:34:54 limited amount of
0:34:55 compute and they
0:34:56 either spend it on
0:34:57 like training models
0:34:58 or they spend it on
0:34:59 inference and even
0:34:59 with inference
0:35:01 there’s this split
0:35:01 between like the
0:35:02 entertainment Ghibli
0:35:03 use cases and the
0:35:04 like coding
0:35:05 intelligence use
0:35:06 cases I think
0:35:07 XAI is probably the
0:35:08 only model company
0:35:09 that is not
0:35:09 bottlenecked on
0:35:10 compute from my
0:35:11 understanding whereas
0:35:12 the others have
0:35:13 to make really
0:35:14 like serious and
0:35:15 significant calls
0:35:16 of like if we
0:35:17 release
0:35:18 Nano Banana and it goes
0:35:19 super viral like it
0:35:20 may slow down the
0:35:21 next like big LLM
0:35:22 we’re trying to push
0:35:23 forward whereas
0:35:24 startups who focus on
0:35:25 the app layer don’t
0:35:26 have that problem
0:35:27 because there’s no
0:35:28 tension there
0:35:29 absolutely yeah
0:35:30 we’ve talked about
0:35:31 this before I also
0:35:32 think that there are
0:35:33 categories in which
0:35:34 being multi-model
0:35:35 just allows you
0:35:36 to deliver a better
0:35:37 proposition to the
0:35:38 customer and the
0:35:39 labs and big tech
0:35:39 are always going to
0:35:40 be sort of
0:35:41 definitionally first
0:35:41 party model only
0:35:42 yeah so I think
0:35:43 as all the models
0:35:44 get better perhaps
0:35:45 80% of what you
0:35:47 need can be received
0:35:47 from a single
0:35:48 model but for the
0:35:49 power users and so
0:35:50 much of AI is a
0:35:51 power user story
0:35:52 yeah you know you
0:35:53 always said that like
0:35:54 well power users are
0:35:55 just power users and
0:35:55 I think that’s true in
0:35:57 a pre-AI world but
0:35:58 now the kind of depth
0:35:59 of value and the
0:35:59 depth of monetization
0:36:01 is so much higher that
0:36:02 maybe all of AI is
0:36:03 actually a power user
0:36:04 story you know and
0:36:05 everyone else is just
0:36:06 traffic yes yeah
0:36:07 which is why we’re
0:36:08 also seeing like
0:36:09 consumer products for
0:36:10 the first time ever
0:36:11 have more than 100%
0:36:12 revenue retention
0:36:13 yes yes and that’s
0:36:14 separating the good
0:36:15 from the great from
0:36:16 the exceptional in
0:36:18 consumer AI and to
0:36:18 be clear how that
0:36:20 happens is they charge
0:36:21 for usage often in
0:36:22 addition to a
0:36:22 subscription so you
0:36:23 can use beyond
0:36:24 whatever your quota
0:36:25 is for the month
0:36:26 given your subscription
0:36:26 and pay more
0:36:27 it’s either upgrade of
0:36:28 the tier yeah or
0:36:29 actually buying tokens
0:36:32 or more usage yeah
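[Editor's note: a toy Python calculation of how usage charges on top of a subscription can push a cohort's net revenue retention past 100%, as described above. Every number is invented for illustration.]

```python
# Toy cohort math: subscription revenue alone decays with churn, but paid
# overage/credits from heavy users can more than offset it. All figures are made up.
users_month_1 = 1000
sub_price = 20.0                          # $/month subscription

rev_month_1 = users_month_1 * sub_price   # $20,000: subscriptions only at the start

users_month_12 = users_month_1 * 0.60     # 40% of the cohort churned over the year
power_users = users_month_12 * 0.25       # heavy users who blow past their monthly quota
overage_spend = 60.0                      # extra $/month each power user pays for usage

rev_month_12 = users_month_12 * sub_price + power_users * overage_spend

nrr = rev_month_12 / rev_month_1          # 21,000 / 20,000 = 1.05
print(f"Net revenue retention for the cohort: {nrr:.0%}")   # -> 105%
```

Pre-AI consumer subscriptions rarely had an expansion lever like this, which is the "that doesn't compute" joke that comes up a minute later.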
0:36:33 it’s that’s what
0:36:34 differentiates it like
0:36:35 you know if you told
0:36:38 me pre-AI we’d see a
0:36:38 consumer company with
0:36:40 100 plus retention and
0:36:41 money I’m like that
0:36:41 that doesn’t make any
0:36:42 sense, that doesn’t
0:36:45 compute yeah yeah no
0:36:46 pun intended exactly
0:36:48 exactly well guys okay
0:36:49 maybe let’s talk about
0:36:50 start with specific
0:36:51 recommendations like
0:36:52 after this pod what are
0:36:53 the products people
0:36:54 should download or the
0:36:54 features or the
0:36:55 models what should
0:36:56 folks be using today
0:36:59 I guess on the
0:37:00 multimodal point I
0:37:01 think one really
0:37:03 under hyped product
0:37:04 that people should
0:37:05 check out not because
0:37:05 they’ll use it every
0:37:06 day but because it
0:37:07 shows sort of what is
0:37:08 possible when you
0:37:09 combine like an agent
0:37:11 with image with text is
0:37:13 Pomelli so this is like
0:37:14 the Google Labs product
0:37:15 where you put in the
0:37:16 URL of your business
0:37:18 and it has an agent go
0:37:20 to the website pull all
0:37:20 of the product and
0:37:22 brand photos summarize
0:37:23 what it thinks your
0:37:25 brand’s aesthetic is
0:37:26 what it stands for what
0:37:27 kind of customers it’s
0:37:28 targeting and then it
0:37:30 will generate three
0:37:31 different ad campaigns
0:37:32 for you and it will
0:37:33 generate not only the
0:37:34 text but it will
0:37:35 generate like the
0:37:36 Instagram posts it will
0:37:37 generate the flyer it
0:37:38 will generate like the
0:37:40 photo of your product in
0:37:42 this you know whatever
0:37:43 wherever it thinks it
0:37:44 should be based on your
0:37:47 customer and very cool
0:37:49 product would be hard to
0:37:49 become a giant
0:37:50 standalone product within
0:37:52 Google I think but
0:37:53 show sort of the future
0:37:54 of what happens if we
0:37:56 combine agents with
0:37:58 generation models that
0:37:59 have sort of really
0:38:00 deep understanding of
0:38:02 context that an image
0:38:03 model or video model
0:38:04 normally wouldn’t have
0:38:06 startup products though
0:38:06 do you have a favorite
0:38:07 startup
0:38:09 creative tool? Yes, in
0:38:10 creative tools, yes, I
0:38:11 think I mean we’re
0:38:12 investors in Korea so
0:38:14 this is bias but I think
0:38:15 they they’ve they’ve
0:38:16 really done an
0:38:17 exceptional job of
0:38:19 being the best place to
0:38:21 use every model or
0:38:22 every quality model across
0:38:24 every modality and also
0:38:25 building more of the
0:38:26 interface on top of
0:38:27 these models like I
0:38:28 now prefer to use
0:38:30 nano banana pro on
0:38:31 Korea because Korea
0:38:33 allows you to save
0:38:34 elements which are
0:38:35 essentially characters or
0:38:37 styles or objects that
0:38:38 you can like at tag to
0:38:40 reprompt versus having
0:38:41 to drag the same image
0:38:42 reference into nano
0:38:43 banana over and over
0:38:44 again it’s a good one
0:38:47 I suppose it falls under the startup category, again shilling companies, but the one I use the most is actually ElevenLabs Reader.
0:38:55 The reason is we've seen an explosion in podcasts, and I think there's a reason for that: people are a lot more on the go, and our capacity for reading is, I think, going down over time. So let's not fight that reality, let's embrace it, and actually take written material and translate it into listening.
0:39:19 I used to be a power user of tools like Pocket. I didn't have time to read everything I wanted to read, and it's a saving behavior, right? You go around saving all the things you eventually want to consume.
0:39:30 What I do now is similar: I go get all the things I want to read and either PDF them or put them on ElevenLabs Reader, and once in a while when I'm on a walk I'll spend three or four minutes at 1.5x or 2x speed, listen to one of them, and get the gist of it.
0:39:46 I think that's been a good way to use a little bit of time as a sort of semi-normal person.
0:39:53 Yeah.
0:39:55 Well, first of all, I love this question, because I'm strongly of the opinion that by far the best way to get up to speed on AI is just to try a ton of products, and you get opinionated really quickly.
0:40:06 Justine and I are actually on Twitter for the whole month of December publishing one new consumer product a day for people to check out, so that's one way.
0:40:15 I'll name three others that I think are relevant or interesting and that people can plug into their workflows. One is Gamma for slide deck generation: you can go from a text prompt to a slide deck or from a document to a slide deck. I use it for everything. The slides are also flexible sizes, so you're no longer editing every little pixel in Google Slides to get everything to fit on one slide, which is great.
0:40:37 Granola for note-taking. You might not have any meetings over the holidays, but in the new year it just gets better and better the more meetings you have on it, because it has the context of what you talked about before.
0:40:48 And lastly, I'm still going to plug the Comet browser. If you want to try an AI-native workspace, I think that's one of the most accessible ones to start with.
0:40:57 For me, I've spent my whole year obsessed with coding and AI code. It's just been so tremendously fun.
0:41:03 By the way, Bryan, I would take the other side of your argument that the big labs or big tech will win app generation. I think they just lack the focus; products like Opal have been released with a whimper, and they're one model only. So I don't think they will win it.
0:41:17 I think we will see them doing it, yes.
0:41:20 I think that's true. But for the pure consumer side, of course, Wabi is really fun and really capable, and I think they're creating the right constraints on app generation so that you get a really satisfying, functional result. So far there's been a lot of over-promising in app generation, which has discouraged early users.
0:41:41 I also think that if you haven't tried GPT-5.2 in Codex or in Cursor, it's worth trying, even for non-technical people. It's just amazing. Being technical is almost a constraint, because you have a pre-existing idea of what these models can do, and they can do a lot more. I'm hearing increasingly about people doing knowledge work and writing essays in Cursor instead of just writing code.
0:42:02 Just one thing I'm going to do at year end: plug into a popular trend I see on TikTok, where people ask, what is the most unhinged thing I said this year? And it actually does a review of all the things you said.
0:42:17 I think it'll be a good thing to do something similar at year end: tell me how to live a better life next year. Give me actual, unvarnished opinions and some direction. I think it'll be helpful.
0:42:32 I love that idea. I'm going for a worse life next year.
0:42:37 Fantastic, let's go full degen. Guys, any closing thoughts?
0:42:44 The obvious one is that we are very actively investing in consumer companies, and I genuinely believe, I know a lot of people say this, that the models have gotten to the level of quality that you can build a real, scalable app on top of them. Wabi is a great example of this.
0:43:02 And so the hope is that 2026 will be a huge year for consumer builders, not just for consumers being consumers of a product.
0:43:12 Yes. Well, thank you all for a super fun year in consumer and AI. We'll be back with more next year. Merry Christmas, guys, this is a wrap.
0:43:19 Yeah, happy holidays.
0:43:25 Thanks for listening to this episode of the a16z podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family.
0:43:34 For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode.
0:43:49 As a reminder, the content here is for informational purposes only and should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
As 2025 comes to a close, consumer AI is entering a new phase. A small number of products now dominate everyday use, multimodal models have unlocked entirely new creative workflows, and the big labs have pushed aggressively into consumer experiences. At the same time, it is becoming clearer which ideas actually changed user behavior and which ones did not.
In this episode, a16z consumer investors Anish Acharya, Olivia Moore, Justine Moore, and Bryan Kim look back at the biggest product and model shifts of 2025 and then look ahead to what 2026 may bring. They discuss why consumer AI appears to be trending toward winner-take-most, how subtle product design choices can matter more than raw model quality, and why templates, multimodality, and distribution are shaping the next wave of consumer products.
Where do startups still have room to win? How will the role of the big labs continue to change? And what will it actually take for consumer AI apps to break out at scale in 2026?
Resources:
Follow Anish: https://x.com/illscience
Follow Olivia: https://x.com/omooretweets
Follow Justine: https://x.com/venturetwins
Follow Bryan: https://x.com/kirbyman01
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.