Author: a16z Podcast

  • Cybersecurity’s Past, Present, and AI-Driven Future

    AI transcript
    0:00:03 – It’s time to hand over cybersecurity to computers.
    0:00:05 – Entropy is increasing.
    0:00:09 They have more apps, more entitlements, and more actors.
    0:00:11 – Every single year, it’s exponential growth
    0:00:12 in the number of public breaches,
    0:00:15 the size of the breaches, the damage in the breaches.
    0:00:17 Vendors still exploding.
    0:00:19 – How can they watch out for a bank run
    0:00:22 that’s orchestrated by a deep-fake campaign?
    0:00:23 If this is indeed state-backed,
    0:00:25 this is probably not the only thing they did
    0:00:26 in that two-year period.
    0:00:30 – In 2022, $8.8 billion was lost
    0:00:32 by consumers alone in the U.S.
    0:00:35 – How can we build compound businesses from day one?
    0:00:38 How can you actually build a platform from day one,
    0:00:40 even though you’re a startup?
    0:00:41 – Who does security?
    0:00:42 Nobody does security.
    0:00:46 – The cost to launch a disinformation campaign
    0:00:49 that’s AI generated is quickly approaching zero.
    0:00:52 – Now that the cybersecurity industry commands
    0:00:55 a market of hundreds of billions of dollars,
    0:00:57 it’s easy to forget that this industry
    0:00:59 once didn’t exist.
    0:01:01 And in its few decades of rapid growth,
    0:01:03 things have changed a whole lot.
    0:01:06 So in today’s episode, we’ll take you on a tour
    0:01:08 through the history of security,
    0:01:09 which can’t be disentangled
    0:01:12 from the history of the internet and culture.
    0:01:13 This episode was actually recorded
    0:01:16 at A16Z’s campfire sessions event this April,
    0:01:18 where our infrastructure team
    0:01:21 brought in some of the top security minds in the industry.
    0:01:23 And just like any good campfire session,
    0:01:26 today you’ll hear four people talk candidly
    0:01:28 about what’s really keeping them up at night,
    0:01:30 from what really happened with the xz-utils attack,
    0:01:32 to new AI threat vectors
    0:01:34 that are already impacting companies,
    0:01:37 to empowering overworked developers, and a lot more.
    0:01:41 For those both inside and outside the security community,
    0:01:43 I hope this episode is a helpful reminder
    0:01:45 of just how much has changed throughout the years
    0:01:49 for both offenders and defenders of trustworthy computing.
    0:01:52 So with that, we’ll start with Travis McPeak,
    0:01:54 co-founder and CEO of Resourcely.
    0:01:57 And he’ll walk us through how we really got here.
    0:01:59 Let’s kick things off in 1995.
    0:02:05 As a reminder, the content here
    0:02:07 is for informational purposes only.
    0:02:09 Should not be taken as legal, business, tax,
    0:02:10 or investment advice,
    0:02:12 or be used to evaluate any investment or security,
    0:02:14 and is not directed at any investors
    0:02:17 or potential investors in any A16Z fund.
    0:02:19 Please note that A16Z and its affiliates
    0:02:20 may also maintain investments
    0:02:23 in the companies discussed in this podcast.
    0:02:25 For more details, including a link to our investments,
    0:02:28 please see A16Z.com/disclosures.
    0:02:35 – Okay, phase zero, The Dark Ages.
    0:02:37 The year is 1995.
    0:02:40 Billboard number one song, “Gangsta’s Paradise.”
    0:02:43 The box office number one was “Batman Forever.”
    0:02:45 Nostalgia for the old people here.
    0:02:46 Who does security?
    0:02:47 Nobody does security.
    0:02:48 It was a totally different world.
    0:02:50 You have to realize that
    0:02:52 we didn’t have much internet connectivity.
    0:02:54 Patching wasn’t really much of a thing.
    0:02:56 Vendors were basically antivirus
    0:02:58 and the start of firewalls.
    0:03:00 Milestones of this Dark Ages time,
    0:03:02 we had the first DEFCON,
    0:03:03 we had the first CISO,
    0:03:04 Steve Katz at Citicorp.
    0:03:07 So that year, they actually had a breach
    0:03:09 where somebody stole money.
    0:03:12 And they said, “This can never happen again
    0:03:14 “without us having someone to go chop their head off
    0:03:15 “when it happens.”
    0:03:17 So this is the first CISO.
    0:03:19 We had the first Word macro virus.
    0:03:20 The first bug bounty came from Netscape.
    0:03:21 As we’ll get to, Netscape
    0:03:24 did a lot of cool things that moved forward security.
    0:03:26 And of course, the Hackers movie.
    0:03:28 It was web 1.0.
    0:03:30 It wasn’t an app that you went and dealt with.
    0:03:31 It was a site that you came to.
    0:03:33 So this is Apple’s site from ’97.
    0:03:35 Hackers are like these dingy people.
    0:03:36 It’s not like an actual job.
    0:03:39 One of the things that really moved us from this
    0:03:41 to the next phase was web browsers went from
    0:03:43 like that Apple thing that I just showed you
    0:03:45 to a place that you go do business.
    0:03:47 Netscape made a lot of those things possible.
    0:03:50 So they brought forward SSL.
    0:03:52 They had the first bug bounty.
    0:03:53 They were putting forward a standard
    0:03:55 of how we’re gonna build out apps on the internet.
    0:03:57 And that standard was JavaScript.
    0:03:59 At the same time, we had Java,
    0:04:02 which was one of the first ways of building apps
    0:04:04 on the internet from an old company called Sun,
    0:04:06 today known as Facebook.
    0:04:09 Check Point was founded in 1993
    0:04:11 by somebody that came directly out of the IDF
    0:04:12 and used all of the stuff that they learned
    0:04:15 to productize the firewall.
    0:04:17 Okay, phase one.
    0:04:19 Security is an actual thing, but it’s a function of IT.
    0:04:21 So the year is 2001.
    0:04:23 Billboard number one is “Hanging by a Moment.”
    0:04:25 Box office number one is Harry Potter
    0:04:26 and the Sorcerer’s Stone.
    0:04:27 Who does security?
    0:04:29 IT does security.
    0:04:31 So context here, this is the start
    0:04:32 of when we get like big hacking.
    0:04:35 So it’s not just like a thing that happens once in a while.
    0:04:37 Businesses have all either moved online
    0:04:39 or rapidly moving online.
    0:04:43 Vendors now are antivirus, firewalls, systems management.
    0:04:46 Milestones here: Microsoft engineers coined
    0:04:48 the term SQL injection in ’98.
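[The term the speaker mentions is easy to see in miniature. Below is an illustrative sketch, not anything from the talk: a toy table and attacker string of our own invention, using Python's built-in sqlite3 to contrast string-spliced SQL with a parameterized query.]

```python
import sqlite3

# Hypothetical one-column table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Vulnerable: the attacker's input is spliced into the SQL text, so the
# injected OR clause becomes part of the query and matches every row.
attacker_input = "nobody' OR '1'='1"
query = f"SELECT name FROM users WHERE name = '{attacker_input}'"
print(conn.execute(query).fetchall())   # [('alice',)] -- data leaks anyway

# Fixed: a parameterized query treats the input as data, never as SQL.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (attacker_input,))
print(safe.fetchall())                  # [] -- no row literally has that name
```

The same splice-versus-parameterize distinction applies to every SQL driver, which is why the vulnerability class has survived since the term was coined.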
    0:04:50 The first big internet worm
    0:04:53 that made it like bad for business was Code Red.
    0:04:56 The first patch Tuesday was in 2003.
    0:04:58 And I don’t know, for anybody that’s old like me,
    0:04:59 we had this Y2K thing,
    0:05:01 which was actually like complete nothing burger.
    0:05:03 But what was interesting about it is
    0:05:05 we cared enough about computers
    0:05:07 and what they do that we thought it might be a thing.
    0:05:12 So one of the changes here was Bugtraq and full disclosure.
    0:05:14 So back in the day, we had mailing lists like Bugtraq, where
    0:05:17 people would send security vulnerability reports
    0:05:19 and vendors would basically do nothing with it.
    0:05:20 They just sit on it forever.
    0:05:22 And so there was this big moment at the time,
    0:05:23 full disclosure where it’s like, okay, well,
    0:05:26 we’re just gonna put like the full gory details
    0:05:28 of this thing and force action from vendors.
    0:05:30 And then that led to regular patching cycles.
    0:05:32 So Microsoft quickly copied that.
    0:05:36 We also had the first web application security tools.
    0:05:38 So this is Nikto, an old one from 2001.
    0:05:39 It was kind of open source,
    0:05:41 but this is the beginning of these tools
    0:05:43 being broadly available.
    0:05:45 And then this is the beginning of what I call
    0:05:46 the tail wagging the dog
    0:05:48 when it comes to vendors and security.
    0:05:50 So from some of the folks I talked to,
    0:05:52 we basically have these new attack paths
    0:05:54 and the buyers, in this case, IT,
    0:05:56 were very uneducated about how this works.
    0:05:59 So it’s like, you need to have your web port open.
    0:06:01 It needs to be legit open.
    0:06:02 And I can get in and compromise you through that.
    0:06:04 IT didn’t understand it very well.
    0:06:06 So vendors had to do their part
    0:06:09 to come and educate the IT buyers that this was possible.
    0:06:10 What this looked like was basically,
    0:06:12 I just completely compromised all your systems.
    0:06:13 And they said, how did you do that?
    0:06:16 And then you explain why this web application security
    0:06:20 is an actual thing and why they need vendor solution for it.
    0:06:23 All right, phase two is the risk sign-off function.
    0:06:25 So the year is 2004.
    0:06:27 Billboard number one is “Yeah!”
    0:06:30 by Usher featuring Lil Jon. Box office number one is Shrek 2.
    0:06:32 This is what phones look like.
    0:06:34 By the way, these phones will last longer than you will.
    0:06:36 These things were like basically indestructible.
    0:06:37 Who does security?
    0:06:38 Now we have a security team that does it.
    0:06:40 So this isn’t just like a thing that like IT does
    0:06:41 with some of their time.
    0:06:43 So this is when we start to get the beginning
    0:06:45 of traditional security activities.
    0:06:48 We have Microsoft basically getting popped in the mouth
    0:06:49 and they need to do some stuff differently.
    0:06:51 Tech companies start hiring people
    0:06:53 that are actually called security.
    0:06:54 Vendors are now exploding.
    0:06:57 So we have antivirus and firewalls still, email security, web
    0:06:59 application firewalls, DAST and SAST.
    0:07:02 Milestones here, we had the first use of the term
    0:07:04 cross-site scripting again by Microsoft engineers.
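[Cross-site scripting is also easy to see in a few lines. This is an illustrative sketch with a made-up comment field, using the textbook script payload and Python's standard html.escape to show the fix: encode untrusted input so the browser renders it as text, not markup.]

```python
from html import escape

# Hypothetical untrusted user input carrying the classic payload.
comment = '<script>alert(1)</script>'

# Reflected XSS: the raw input is pasted into HTML and would run as script.
unsafe_page = f"<p>Latest comment: {comment}</p>"

# The standard fix: HTML-encode the input before it reaches the page.
safe_page = f"<p>Latest comment: {escape(comment)}</p>"
print(safe_page)  # <p>Latest comment: &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Real frameworks do this escaping automatically in their template engines, which is one of the "paved road" ideas the talk returns to later.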
    0:07:07 OWASP was founded in 2001.
    0:07:08 The first use of the term shift left.
    0:07:10 I actually thought it was much more recent,
    0:07:11 but this is a very old term.
    0:07:13 And then SOX regulation was,
    0:07:14 I think the first compliance standard
    0:07:17 that actually mandated some security activities.
    0:07:19 There was a growing community of folks
    0:07:21 that were really interested in web security
    0:07:23 and all of what’s possible here.
    0:07:25 And Mark Curphey started this group called OWASP
    0:07:28 to basically make this knowledge more socialized
    0:07:29 so that people knew about it.
    0:07:32 One of the first projects in OWASP was the OWASP Top 10.
    0:07:33 And that immediately became like,
    0:07:36 how can I get my vendor shit to be one of the top 10 things
    0:07:37 that people are buying?
    0:07:39 So this is, you know, yet more tail wagging the dog.
    0:07:41 It’s like, oh, my thing should be, you know,
    0:07:42 in the top five for sure,
    0:07:44 because it’s going to help us sell a lot more of it.
    0:07:47 Now we have the beginning of the big internet worms.
    0:07:49 So at the time windows basically
    0:07:50 didn’t come with any firewall.
    0:07:52 You started up, it would get immediately
    0:07:53 compromised by stuff.
    0:07:55 The worms here were costing a lot of money.
    0:08:00 So we had attacks like MafiaBoy’s DDoS in 2000.
    0:08:02 It took down like more than 1 million
    0:08:03 of the 5 million IIS servers
    0:08:06 and cost an estimated $2.6 billion in damages.
    0:08:08 And so for part of this,
    0:08:10 basically Microsoft had these big customers
    0:08:11 that were saying like,
    0:08:13 hey, we’re just getting killed because we’re using windows.
    0:08:16 And then this led to in part to trustworthy computing.
    0:08:18 Basically we need to see the light.
    0:08:20 We can’t just keep doing business as is.
    0:08:23 Bill Gates saw a very early version of a book
    0:08:25 that Microsoft folks were writing
    0:08:26 on these security practices.
    0:08:29 And basically that led him to say like,
    0:08:31 we need to completely change what we’re doing.
    0:08:32 We’re losing trust with customers.
    0:08:33 And then that was the beginning
    0:08:36 of what we consider traditional security activities today.
    0:08:38 We have threat modeling, STRIDE,
    0:08:41 all of these things are being birthed around this time.
    0:08:43 We also get more compliance.
    0:08:46 So PCI DSS version one was written in 2004.
    0:08:48 This mandated security activities.
    0:08:50 Again, vendors are trying to get themselves
    0:08:53 into the standards so that they can sell more product, right?
    0:08:54 It’s like, okay, well,
    0:08:56 if you’re going to deal with payment card data,
    0:08:59 then you need to do web scanning, for example.
    0:09:01 Proofpoint was an example of one of the companies here.
    0:09:04 This was founded in 2002, still around today,
    0:09:06 very successful at email security, right?
    0:09:08 So as soon as you have email being used
    0:09:09 as widely as it is today,
    0:09:11 and we also have email viruses, it’s okay,
    0:09:12 we’re going to need something
    0:09:14 to filter out spam and viruses.
    0:09:16 So Proofpoint started that.
    0:09:19 And then also Imperva, a big web application firewall
    0:09:21 that’s also still around today.
    0:09:23 Okay, phase three is DevSecOps.
    0:09:25 So the year is 2013,
    0:09:26 Billboard number one is “Thrift Shop,”
    0:09:29 box office number one is Iron Man 3.
    0:09:30 Who does security?
    0:09:31 It’s everybody’s job.
    0:09:32 We’ve collectively decided
    0:09:34 that basically security doesn’t scale.
    0:09:36 Like we’ve been this sign off function
    0:09:38 that you have to do with security
    0:09:40 before you ship your product for the year.
    0:09:41 And now we’re moving to cloud
    0:09:43 and we’re doing continuous deployment.
    0:09:43 And security is like,
    0:09:45 I don’t know when I do these assessments anymore.
    0:09:48 So what we do is we basically take every single developer
    0:09:50 and tell them, guess what,
    0:09:52 good news, you’re a security person now.
    0:09:54 So we’re also getting more and more mega breaches.
    0:09:56 If you look at the numbers from this time,
    0:09:58 every single year it’s exponential growth
    0:10:00 in the number of public breaches,
    0:10:02 the size of the breaches, the damage in the breaches,
    0:10:04 vendors still exploding.
    0:10:06 So EDR, Next Gen Firewall detection,
    0:10:09 all the posture managements, dev training, bug bounty.
    0:10:12 Milestones, the first use of the term DevSecOps
    0:10:13 was actually in 2013.
    0:10:15 And we had the first CSPM,
    0:10:17 which gave birth to this massive posture management industry
    0:10:18 that we have today.
    0:10:20 We start to see KnowBe4, right?
    0:10:22 It’s like we’re gonna train developers continuously.
    0:10:24 Developers are gonna learn about
    0:10:25 all of the types of cross-site scripting
    0:10:28 and SQL injection with one day,
    0:10:29 like once per year of training where they learn it
    0:10:32 and then they immediately forget it the next day.
    0:10:34 We also have big bug bounties.
    0:10:36 So crowd sourcing more and more vulnerabilities
    0:10:38 in the hopes that the attackers aren’t gonna use these things
    0:10:40 to cause massive breaches for us.
    0:10:42 So much posture management.
    0:10:45 So the first was cloud security posture management.
    0:10:47 Evident was the first company here.
    0:10:49 At Netflix, they had also created Security Monkey,
    0:10:51 which is basically open source posture management.
    0:10:53 And since then it’s just like posture management
    0:10:55 just exploding all over the place.
    0:10:57 We have AppSec posture management,
    0:10:58 Data Security posture management,
    0:11:01 Identity posture management, SSPM,
    0:11:03 like whatever that bottom posture management is,
    0:11:05 just so much posture management everywhere.
    0:11:07 And what these things are really good at doing
    0:11:08 is like going and finding problems
    0:11:10 after they’re already deployed, right?
    0:11:11 And then you have to go do something about it.
    0:11:12 ‘Cause just knowing about risk isn’t enough.
    0:11:14 You can just tell your boss like,
    0:11:16 “Hey, okay, well, here’s all the risk that we have,”
    0:11:18 but they’re gonna want you to reduce it somehow.
    0:11:19 And so what we moved to,
    0:11:21 since this is now developers owning security,
    0:11:22 is we rip a bunch of JIRA tickets for them
    0:11:24 and we call it a day.
    0:11:26 So we also are getting at this time job shortage.
    0:11:29 The first time the job shortage news articles
    0:11:31 was in 2015, early 2016.
    0:11:34 We’re short a million jobs already in 2016.
    0:11:35 This is just piling up more and more.
    0:11:36 We don’t have enough security people
    0:11:39 to actually do the work that we need them to do.
    0:11:41 So where does this leave us?
    0:11:43 I think that we’re entering a new phase,
    0:11:44 phase four of security,
    0:11:46 where basically telling developers,
    0:11:48 “it’s your job, you fix security all the time,”
    0:11:49 didn’t particularly scale well.
    0:11:52 I think that that’s becoming very evident today.
    0:11:53 So the year is 2020,
    0:11:55 “Blinding Lights” is number one,
    0:11:57 box office is Bad Boys for Life.
    0:11:58 Who does security?
    0:12:00 I think systems do security.
    0:12:02 What we’re doing doesn’t scale.
    0:12:04 We have developer fatigue.
    0:12:05 I hear people tell me all the time like,
    0:12:07 “Oh, we take the posture management
    0:12:08 and then we just filter out everything
    0:12:09 that’s not high or critical.
    0:12:12 And then we ship those JIRA tickets to developers.
    0:12:13 Training relentlessly, obviously,
    0:12:15 it doesn’t matter how many times we’ve trained developers
    0:12:17 on like all the SQL injection types.
    0:12:19 They still don’t remember it
    0:12:20 and really they shouldn’t have to.
    0:12:22 So Milestones, one of the projects
    0:12:24 that really informed how I see this is Lemur,
    0:12:26 which Netflix released in 2015.
    0:12:30 Google launched the Identity-Aware Proxy in 2017.
    0:12:33 Chrome added a password manager by default back in 2018.
    0:12:35 And Clint Gibbler, one of my friends
    0:12:37 and somebody that has done a lot of work in the space
    0:12:39 did his talk in 2021
    0:12:42 called “How to Eradicate Vulnerability Classes.”
    0:12:45 So Lemur: when I got to Netflix, it was in 2017.
    0:12:46 And I remember just being blown away
    0:12:48 at how easy it was for our developers
    0:12:50 to just get things like certificates
    0:12:53 without having to select a cipher suite
    0:12:54 and pick crypto parameters and rotate it
    0:12:57 and store your private keys securely.
    0:12:58 It just made it like dead simple.
    0:13:00 And the benefit of this is that developers
    0:13:02 never have to learn about crypto anything.
    0:13:03 They just get it for free.
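[The "developers get good crypto for free" idea has a small analogue in Python's own standard library, shown here as an illustrative sketch unrelated to Lemur itself: one helper hands back a TLS context with sensible protocol, cipher, and verification choices already made, so application code never picks crypto parameters by hand.]

```python
import ssl

# One call; protocol versions, cipher suites, and verification policy
# are all chosen by the library, not by the application developer.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: peer certs are verified
print(ctx.check_hostname)                    # True: hostnames are checked
# Old, broken protocol versions are refused by default on current Pythons.
```

The design choice is the same one the speaker credits to Lemur: make the secure path the zero-decision path.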
    0:13:06 Google has done just probably more work than anybody here.
    0:13:10 So we’re gonna upgrade people to HTTPS automatically.
    0:13:12 Chrome updates itself, which became standard
    0:13:14 for many other pieces of software.
    0:13:16 We have these basically like impossible
    0:13:18 to mess up Golang libraries
    0:13:20 to handle a lot of security things.
    0:13:22 And actually, my mom sent me this article recently.
    0:13:25 Mom’s so funny, she knows that I work in security
    0:13:27 and sends me like everything that has security in it
    0:13:28 out of Wall Street Journal.
    0:13:29 And usually it’s like something
    0:13:31 that either happened three months ago
    0:13:33 or it’s got nothing to do with me.
    0:13:35 But this one was written by Larry Ellison
    0:13:36 and it’s not very old.
    0:13:38 His point is it’s time to hand over
    0:13:40 cyber security to computers.
    0:13:42 Basically just relentlessly hounding the users
    0:13:44 and like trying to get the users to be smarter.
    0:13:45 Like it doesn’t work anymore.
    0:13:47 What we want to get is developers
    0:13:50 back to just writing app code, like working on the business
    0:13:52 and not having to be like security people all the time.
    0:13:54 So today, if you think about it,
    0:13:57 devs have to burn down this never-ending pile of Jira tickets.
    0:13:59 This causes annoyance with the security team.
    0:14:00 If you had a friend that only showed up
    0:14:02 when they wanted you to do something,
    0:14:03 you’re probably gonna start avoiding that friend
    0:14:04 and we’re getting a ton of that.
    0:14:07 What if instead they just used systems
    0:14:09 that made good security choices on their behalf,
    0:14:11 and forget about all of this
    0:14:13 relentless training all the time?
    0:14:15 So conclusions, I was part of this move
    0:14:18 from like waterfall to continuous and then saw this.
    0:14:21 We just heap stuff onto our developers’ plates
    0:14:23 and then saw developers learn to resent
    0:14:24 and avoid security more and more.
    0:14:27 I think what we should do instead is help them out.
    0:14:28 Like they’re very, very busy people.
    0:14:31 We should build a system that makes it fast and easy
    0:14:33 for them to go do something they want to do
    0:14:35 and then has security happen as a side effect.
    0:14:38 So it’s like when you want your dog to take vitamins,
    0:14:40 you don’t just put vitamins in your hand
    0:14:41 and offer them to the dog.
    0:14:42 You put the vitamins in the peanut butter
    0:14:43 and the dog wants the peanut butter
    0:14:45 and the dog gets the vitamins too.
    0:14:46 I think this is what we should be doing
    0:14:47 for our developer users.
    0:14:50 – Speaking of needing to make things easier
    0:14:52 for our developers, let’s get a sense
    0:14:55 of what these hacks can really look like in 2024.
    0:14:57 – Now, usually in this talk,
    0:14:58 I like to talk about SolarWinds,
    0:15:00 but we actually have a better example
    0:15:03 that was gifted to us, the xz-utils attack.
    0:15:05 So everybody here has heard about this by now,
    0:15:09 but this was some group likely, I think backed by a state
    0:15:12 that infiltrated an open source data compression project
    0:15:14 called xz-utils.
    0:15:19 – That was Feross Aboukhadijeh, founder and CEO of Socket.
    0:15:22 So xz-utils has taken the security industry by storm
    0:15:25 since it introduced a backdoor via OpenSSH,
    0:15:27 which is a critical piece of infrastructure
    0:15:30 used by millions of servers around the world.
    0:15:32 Let’s hear from Feross regarding what really happened there.
    0:15:34 To get a sense of the kind of security offenders
    0:15:37 we’re now dealing with in 2024
    0:15:38 that can involve multiple years,
    0:15:40 multiple contributors, social engineering,
    0:15:42 the potential for state actors and more.
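[For context before Feross's walkthrough: the backdoor shipped in xz-utils releases 5.6.0 and 5.6.1. The sketch below is a toy triage helper of our own, not a tool from the episode; it only flags those version strings, whereas a real audit would inspect the installed liblzma itself rather than trust version output.]

```python
# Known-backdoored xz-utils releases (5.6.0 and 5.6.1).
BACKDOORED = {"5.6.0", "5.6.1"}

def flags_backdoored(version_output: str) -> bool:
    """Parse the first line of `xz --version` (e.g. 'xz (XZ Utils) 5.6.1')
    and report whether it matches a known-bad release."""
    version = version_output.splitlines()[0].split()[-1]
    return version in BACKDOORED

print(flags_backdoored("xz (XZ Utils) 5.6.1\nliblzma 5.6.1"))  # True
print(flags_backdoored("xz (XZ Utils) 5.4.6\nliblzma 5.4.6"))  # False
```

Checks like this circulated widely in the days after disclosure, precisely because the payload hid in release tarballs rather than in the visible source history.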
    0:15:46 – The way that they did this was just so interesting.
    0:15:49 And it’s something that, I mean, look, I’m sad that it happened,
    0:15:51 but I’m also like, I’ve been telling you guys
    0:15:52 about this for so long.
    0:15:54 I’m sort of like kind of satisfied in a way
    0:15:56 that finally there’s an example
    0:15:58 that’s really caught the imaginations of folks.
    0:16:02 So what happened here was we had a group,
    0:16:03 like I said, probably state backed,
    0:16:05 winning over the contributor of the project
    0:16:07 over several years of work.
    0:16:09 So that’s like a scale of time invested in this
    0:16:12 that we haven’t seen in other attempts like this.
    0:16:14 And then they introduced a sophisticated though
    0:16:17 not flawless backdoor that was aimed
    0:16:19 at compromising SSH servers.
    0:16:22 So it’s a pretty multi-layered vulnerability.
    0:16:23 There were multiple personas involved
    0:16:25 from identities that hadn’t been seen
    0:16:26 anywhere on the internet before.
    0:16:28 So that kind of is another indication
    0:16:31 that probably this was someone relatively sophisticated.
    0:16:33 This wasn’t just someone doing it for the lulz.
    0:16:36 And so probably suggesting kind of state backed actors here.
    0:16:38 And then just the way the timeline
    0:16:40 and the kind of some of the stuff that they did
    0:16:42 also seems to indicate that it might be
    0:16:44 like the same people behind SolarWinds.
    0:16:46 Probably, but again, this is all just kind of speculation.
    0:16:47 I want to kind of go into a little bit of,
    0:16:49 so you can kind of see just the character
    0:16:51 of what this attack kind of looks like.
    0:16:54 So this is kind of individual who ended up committing
    0:16:56 and releasing the malicious code.
    0:17:00 And this is his first email patch to the mailing list
    0:17:04 where they do the development for this project, xz-utils.
    0:17:05 And it’s interesting.
    0:17:08 This is just kind of a totally pointless patch, right?
    0:17:09 This is like the kind of thing that as a maintainer
    0:17:12 you get all the time: someone just drive-by dropping in
    0:17:15 an .editorconfig file, which basically does nothing, right?
    0:17:17 It’s a no op in terms of the functionality of the project.
    0:17:19 And oftentimes you’ll see these from people
    0:17:20 who just want to get to be able to say
    0:17:22 that they’re a contributor to a project.
    0:17:24 It doesn’t require any understanding of the project.
    0:17:26 So it’s just noise, but you can see their first attempt
    0:17:28 to kind of get involved in the project.
    0:17:30 Then they sent another patch a month later,
    0:17:33 fixing some kind of build problem.
    0:17:36 And they also sent a couple of more patches after this one,
    0:17:38 all totally ignored by the maintainer,
    0:17:41 who at this point has been maintaining this project
    0:17:43 for about 15, maybe 20 years.
    0:17:45 This is a long time project.
    0:17:47 And the guy running it is just,
    0:17:49 at this point it’s in maintenance mode.
    0:17:51 It’s basically, he’s sort of burned out.
    0:17:53 He’s sort of kind of half maintaining it,
    0:17:55 checking the mailing list once in a while,
    0:17:57 but really not actively working on this anymore.
    0:18:00 So it’s something that a lot of the maintainers go through.
    0:18:01 And so then finally the maintainer,
    0:18:03 this is like, I think three more months
    0:18:05 after the last email, we see that the maintainer
    0:18:09 just randomly comes by and merges a couple line change
    0:18:11 to the project that is the first code
    0:18:14 from this Jia Tan individual that’s actually
    0:18:15 included in the project.
    0:18:17 And what I think is interesting about this is
    0:18:19 all of his other patches were ignored.
    0:18:22 The patch that was merged is this like trivial two line patch
    0:18:24 that you can just look at and kind of,
    0:18:25 as an overloaded maintainer, you can look at this
    0:18:27 and sort of figure out what it’s doing.
    0:18:28 And oh, it fixes a bug, cool.
    0:18:29 Let me just merge it and move on.
    0:18:33 The bigger multi-hundred line patches were ignored, right?
    0:18:34 Typical, also typical behavior
    0:18:36 for an overloaded maintainer, right?
    0:18:38 Okay, then a couple of months go by
    0:18:41 and now we see a new character enter the picture.
    0:18:45 This guy Jigar Kumar sends kind of a few emails
    0:18:49 complaining that some of Jia Tan’s patches weren’t landing.
    0:18:53 This is often used to pressure maintainers
    0:18:55 to include code in projects.
    0:18:56 Patches spend years on this mailing list.
    0:18:58 There’s no reason to think anything is coming soon.
    0:18:59 So aggressive, right?
    0:19:01 At this point, remember he’s already landed
    0:19:03 a few of the patches, but the pressure is building here.
    0:19:07 And then this is insert project name still maintained.
    0:19:09 That is the bane of a maintainer’s existence.
    0:19:11 It’s the meanest kind of issue you can open up
    0:19:13 on a project, in my opinion.
    0:19:15 This has happened to me many times.
    0:19:16 I had a couple screenshots here.
    0:19:18 Is this still being developed?
    0:19:19 And like on a perfectly active project
    0:19:21 because their PR wasn’t looked at for a little while, right?
    0:19:23 Here’s another one on one of my projects.
    0:19:24 Is this project dead?
    0:19:25 It’s not nice.
    0:19:27 Don’t do this, people.
    0:19:28 And I think one of the interesting things
    0:19:29 about this whole situation is that,
    0:19:31 this is another one of the things I’ve seen change
    0:19:33 in the way that open source is done is,
    0:19:35 traditionally, you think of a project like Linux
    0:19:37 or WordPress or these big foundation-backed projects.
    0:19:39 They have the structure up here at the top
    0:19:41 where you have one project, one entity,
    0:19:43 with many, many maintainers that are participating
    0:19:44 in the project.
    0:19:46 A lot of times they’re paid by their employer
    0:19:47 to even work on the project
    0:19:49 and to submit patches as part of their day job, right?
    0:19:52 But what we see a lot more of as we’ve shifted
    0:19:55 into this world of many, many, many dependencies,
    0:19:58 a lot of tiny dependencies is more of a structure like this
    0:20:00 where you have an individual with hundreds, potentially,
    0:20:02 hundreds of projects that they take care of.
    0:20:04 And that was the case here with Lasse Collin.
    0:20:06 He had multiple projects that he was managing
    0:20:08 as an individual maintainer.
    0:20:09 Okay, so let’s continue on.
    0:20:11 So this is three months has gone by.
    0:20:13 He replies, he apologizes for the slowness,
    0:21:16 and he also adds in a bit about how Jia Tan
    0:21:19 has helped him off-list with xz-utils.
    0:20:21 So probably they have some kind of chat conversation
    0:20:24 going off-list now and they’re collaborating more closely,
    0:20:25 building up the trust.
    0:20:28 And he says he might have a bigger role in the future,
    0:21:29 at least with xz-utils.
    0:20:31 It’s clear that my resources are too limited
    0:20:33 and something has to change in the longterm.
    0:20:36 So the kind of idea has now been planted in his mind
    0:20:38 that he probably should give access to somebody else
    0:20:40 to help maintain the project.
    0:20:41 And again, this all sounds nefarious
    0:20:43 ’cause I’m doing it in a talk and I have slides up here,
    0:20:45 but this is also open source working correctly.
    0:20:46 This is thinking about, oh, hey,
    0:20:47 maybe I’m not the best maintainer.
    0:20:49 Maybe I should hand this off to somebody
    0:20:51 that’s pretty normal as well.
    0:20:53 At this point, nothing actually nefarious has happened.
    0:20:54 By the way, there’s no bad code that’s been included.
    0:20:56 This is just laying the foundation.
    0:20:57 So a couple of weeks go by.
    0:21:00 So now we have this character, Jigar Kumar, who enters
    0:21:03 and this person’s much more aggressive
    0:21:04 and really starts to apply more pressure.
    0:21:07 So they go over one month and no closer to being merged.
    0:21:08 Not a surprise.
    0:21:10 So like dropping into threads to just sort of
    0:21:12 nag the maintainer and kind of make him feel
    0:21:13 like he’s not doing a good job.
    0:21:16 Progress will not happen until there is a new maintainer.
    0:21:18 And then the maintainer finally replies and pushes back
    0:21:19 and says, hey, I haven’t completely lost my interest here,
    0:21:21 but I’ve been having some mental health issues
    0:21:23 and I have a lot of things going on in my life.
    0:21:25 But again, maybe Jia Tan will have a bigger role
    0:21:26 in the project.
    0:21:28 And so a few months after that,
    0:21:30 Lasse Collin merges the first commit with Jia Tan
    0:21:32 as the author, as you can see here.
    0:21:33 And they actually are listed as an author.
    0:21:36 This is a pretty innocuous change.
    0:21:39 And then again, the pressure continues from Jigar and Dennis
    0:21:41 who’s this other persona that are both there
    0:21:43 and really just support the idea
    0:21:44 that Jia should be made a maintainer.
    0:21:46 And you can see here, you ignore the patches
    0:21:48 that are rotting away on this mailing list.
    0:21:50 Right now you choke your repo.
    0:21:53 Why wait until 5.4.0 to change maintainer?
    0:21:55 Why delay what your repo needs?
    0:21:56 Right?
    0:21:58 So applying the pressure.
    0:21:59 And then again, the last one here is great.
    0:22:01 Like, why can’t you commit this yourself, Jia?
    0:22:02 I see you have recent commits.
    0:22:03 So just kind of pushing more and more.
    0:22:06 And then finally Lasse says, again,
    0:22:08 Jia Tan has been really helpful off-list.
    0:22:10 He’s practically a co-maintainer already.
    0:22:12 And then finally, this is the first email
    0:22:15 about two years after the very first interaction
    0:22:17 with the mailing list where Jia Tan
    0:22:20 is actually now doing the release notes for the project.
    0:22:21 He’s been made a maintainer
    0:22:23 and this is the first release going out.
    0:22:26 So two year kind of effort here.
    0:22:27 If this is indeed state-backed,
    0:22:29 this is probably not the only thing they did
    0:22:31 in that two year period, right?
    0:22:33 They probably have other things going at the same time, right?
    0:22:35 So we shouldn’t overreact and assume
    0:22:37 that Linux is like totally backdoored or anything like that.
    0:22:39 But also like, probably this isn’t the only thing
    0:22:40 that these folks were working on, right?
    0:22:42 So the truth is like somewhere in the middle here.
    0:22:46 – Sophisticated software supply chain attacks
    0:22:49 are not the only ones on our hands in 2024.
    0:22:50 In fact, the XZ Utils attack
    0:22:53 was performed entirely without AI.
    0:22:56 So let’s hear from Kevin Tian, founder and CEO of Doppel,
    0:22:59 around the ways that AI is introducing new threat vectors
    0:23:02 and already impacting real world businesses.
    0:23:08 – In 2022, $8.8 billion was lost by consumers alone in the US.
    0:23:11 We’ve had 39 billion credentials
    0:23:14 stolen by bad actors that same year.
    0:23:18 And the cost to launch a disinformation campaign
    0:23:21 that’s AI generated is quickly approaching zero.
    0:23:24 So if you’ve seen a lot of the startups
    0:23:26 that are currently pitching about
    0:23:29 how we can make it easy to generate AI videos
    0:23:33 or how we can make it easy to generate AI voices, right?
    0:23:35 That same sort of stuff is going to the bad guys as well.
    0:23:37 And so how are we seeing this manifest today
    0:23:41 with real world people and real world businesses?
    0:23:45 So one common scheme that has grown super quickly
    0:23:46 just in these past couple of months
    0:23:49 has been the emergence of a lot of deep fake videos,
    0:23:53 specifically deep fake videos of individual personas.
    0:23:56 It could be Taylor Swift, could be Travis Kelce,
    0:23:57 could also be your CEO
    0:24:00 or could be your financial institution’s
    0:24:02 chief technology officer.
    0:24:04 And so what we’ve quickly been seeing here, right,
    0:24:09 in terms of the landscape is more and more deep fake videos
    0:24:11 being produced in the exact same way,
    0:24:14 models being trained in a very similar way,
    0:24:16 the voice being generated in very similar way
    0:24:18 and the intention of the tech being operated
    0:24:21 in a very similar way all across different platforms,
    0:24:23 whether it’s YouTube, TikTok,
    0:24:26 any sort of video platform out there.
    0:24:27 We’re already seeing deep fakes emerge
    0:24:31 and this impacts a whole bunch of different sort
    0:24:34 of individuals, whether it’s business,
    0:24:37 whether it’s celebrities or even political campaigns.
    0:24:39 Of course, big federal election this year,
    0:24:41 it’s top of mind for everyone.
    0:24:44 The good news, bad news is that it’s already happening
    0:24:46 and we’re seeing it happen across a lot
    0:24:47 of different platforms.
    0:24:49 So I think the biggest thing here though is like,
    0:24:52 this is not necessarily an entirely novel
    0:24:55 attack surface, right, or an entirely new threat, right?
    0:24:57 Like we’ve always had social media,
    0:24:59 we’ve always had video platforms
    0:25:02 and we’ve had bad guys try to create fake content
    0:25:04 to achieve certain means.
    0:25:06 I think the main lesson here
    0:25:08 in terms of what we’re seeing is that
    0:25:10 it’s just become a lot easier to do.
    0:25:12 And so there are entire markets around phishing kits
    0:25:16 and there’s entire markets around cyber crime in general.
    0:25:17 We’re gonna start seeing,
    0:25:20 and we’re already seeing that same sort of stuff
    0:25:23 come around with deep fake technology,
    0:25:24 impersonation technology and just,
    0:25:27 how do you personalize attacks more and more
    0:25:29 for your target victim?
    0:25:31 I think the biggest thing too is that
    0:25:33 we’re seeing this not only to run scams,
    0:25:36 but ultimately this stuff is impacting businesses at large.
    0:25:38 Actually, just this morning I was
    0:25:40 chatting with some big banks out there
    0:25:41 and one of the biggest concerns for them
    0:25:44 is how can they watch out for a bank run
    0:25:46 that’s orchestrated by a deep fake campaign, right?
    0:25:48 Or we’ve even seen this effect
    0:25:50 companies outside the financial sector
    0:25:52 where a pharmaceutical company had an impersonator
    0:25:54 talk about how Viagra’s gonna be free now
    0:25:58 and saw that impact its stock price very, very quickly.
    0:26:03 It’s again stuff that has happened before,
    0:26:05 but what we’re seeing in 2024
    0:26:08 and what we’re expecting in 2025 and beyond
    0:26:10 is that this just gets easier and easier to do
    0:26:13 and it gets to the point where it makes it really hard
    0:26:15 to tell what’s real or not online.
    0:26:18 And it’s not just deep fakes.
    0:26:20 Here’s a completely different approach.
    0:26:23 This one is an SEO poisoning case,
    0:26:27 so specifically something that we’ve seen out there
    0:26:30 a lot for airline industry, finance industry,
    0:26:33 any industry that has customer support, phone numbers,
    0:26:35 things like that, right?
    0:26:38 We’ve got the traditional SEO poisoning attack
    0:26:41 where people will find a way to get content upranked
    0:26:42 for any given company.
    0:26:45 And what’s interesting is basically
    0:26:48 how well can people do this in 2024?
    0:26:50 What we’re seeing a lot of things happening today
    0:26:53 is that they’re putting it on these third party sites
    0:26:55 that do have great domain ranks.
    0:26:58 Things like Microsoft, it could be LinkedIn.
    0:27:00 We’ve seen a lot with Hub as well of course
    0:27:02 and Webflow, other platforms like that.
    0:27:04 And so they’re taking advantage of the fact
    0:27:06 that these are legitimate third party sites
    0:27:08 with great domain health,
    0:27:10 stuff that Google will quickly uprank
    0:27:12 or any other search engine will quickly uprank.
    0:27:16 And they’re generating content and conversations on forums.
    0:27:19 For example, how do I speak to a live agent at United?
    0:27:22 How do I speak to a live agent at Uber, right?
    0:27:24 And what we see happen here is,
    0:27:27 they’re able to generate a bunch of the spam content
    0:27:29 across these different third party forums,
    0:27:30 get them all upranked,
    0:27:34 get them all to dominate that first page of search results.
    0:27:36 And again, it’s just a classic case of,
    0:27:38 well, before they would have had to script this, right,
    0:27:40 and generate the content by hand. Now,
    0:27:42 they can make it more dynamic with AI
    0:27:44 and generate the content with AI specifically.
    0:27:48 – Of course, it’s not all doom and gloom.
    0:27:50 With every opening on offense,
    0:27:52 there’s equal opportunity for defense.
    0:27:55 Here is Andrej Safundzic, founder and CEO of Lumos,
    0:27:58 taking us back to where we started in this episode
    0:28:00 through a historical arc that brings us
    0:28:03 to a digital era of autonomy.
    0:28:05 So what do we do now that we’re in this new era?
    0:28:06 And if you happen to be a company
    0:28:08 hiring security professionals,
    0:28:11 should you be thinking about things any differently?
    0:28:14 – I just want to take you a little bit
    0:28:17 on a historical journey, all right?
    0:28:20 So the funny thing is, if you look 60 years back,
    0:28:22 we all worked in idea factories.
    0:28:24 So there’s two types of factories.
    0:28:28 There’s a product factory and there’s an idea factory.
    0:28:30 So what the product factory is,
    0:28:32 is usually where the cars are born, right?
    0:28:34 Or where windows are made.
    0:28:36 And where the idea factory is,
    0:28:40 is where we create and design those cars, right?
    0:28:44 And especially the idea factory changed in the recent years
    0:28:47 and changed like two years ago again.
    0:28:51 So the idea factory looks something like the office
    0:28:53 or more like it did, you know, in the ’60s.
    0:28:55 In the ’50s and ’60s, there were no computers.
    0:28:57 So it was really interesting.
    0:29:01 And we mostly used typewriters and pen and paper.
    0:29:03 So then the computers came about
    0:29:05 and we digitized the office.
    0:29:07 That was kind of the first step.
    0:29:11 IBM, SAP, Oracle, Microsoft,
    0:29:14 all those big companies came about and digitized it.
    0:29:16 So that was step one.
    0:29:20 Step two is we cloudified, I guess, the office.
    0:29:22 It started with Salesforce.
    0:29:25 They kicked it off and Workday and Atlassian,
    0:29:26 those were the first cloud companies.
    0:29:27 So suddenly we’re in the cloud.
    0:29:29 That was when AWS was born.
    0:29:33 I think 2004, 2005, that’s when we cloudified it.
    0:29:35 Then something interesting happened
    0:29:37 is we made it collaborative, right?
    0:29:39 Workday is not really collaborative.
    0:29:40 Neither is Salesforce.
    0:29:44 But then suddenly Zoom, Slack, Figma, Airtable,
    0:29:46 all those kind of great companies
    0:29:48 came about in the 2010s.
    0:29:50 And suddenly it became very collaborative.
    0:29:51 So that was like kind of, I would say,
    0:29:55 the third change that happened in software,
    0:29:56 which is pretty cool.
    0:30:00 Now, what changed in the last two years
    0:30:04 is we moved from just like digitizing it to cloud,
    0:30:08 to collaboration, to autonomy, right?
    0:30:11 So we’re creating more and more autonomous software.
    0:30:12 And it started honestly for the first time
    0:30:14 with something like a Grammarly,
    0:30:17 where they are like more like kind of co-pilots
    0:30:18 that help you kind of do a job better.
    0:30:20 Even like GitHub Copilot,
    0:30:21 they’re in the middle.
    0:30:23 They’re not fully autonomous,
    0:30:25 but they help you do your job better.
    0:30:27 The big trend that we’re seeing right now
    0:30:29 is especially OpenAI is bringing out
    0:30:30 at the end of the year,
    0:30:33 reasoning models, models that can reason.
    0:30:35 And they can literally talk with themselves
    0:30:37 and do certain things, so really spooky.
    0:30:39 And we’ve seen this as well with Devin,
    0:30:41 that’s kind of a new type of software engineer,
    0:30:43 an AI software engineer
    0:30:45 that just basically codes by itself.
    0:30:48 So we’re moving from GitHub Copilot or Grammarly
    0:30:50 to actually systems and services
    0:30:53 that build things themselves.
    0:30:56 So that is actually a whole new paradigm
    0:30:56 that’s changing.
    0:30:58 And we’re like, okay, shoot,
    0:31:00 how do we equip ourselves for that?
    0:31:02 So to summarize,
    0:31:03 actually there are kind of three waves,
    0:31:05 as I just walked through.
    0:31:07 The first wave is the digitization,
    0:31:09 the second one is a collaboration,
    0:31:11 the third one is the autonomy.
    0:31:13 And now we’re at the third one.
    0:31:15 So the interesting thing is that I’m thinking about
    0:31:18 on a daily basis is apps and access.
    0:31:21 If you think about everything that you’re using,
    0:31:22 those are apps.
    0:31:23 We’re on Zoom, then on Slack,
    0:31:26 then we go and SSH into a server,
    0:31:28 which is also an app more or less,
    0:31:30 then we use GitHub, so everything is apps.
    0:31:33 Apps are literally our lifeblood. Without apps,
    0:31:35 we can’t do things.
    0:31:36 The question is like,
    0:31:37 I think that we as security professionals
    0:31:40 need to ask ourselves more and more is,
    0:31:43 how are we gonna manage all those apps
    0:31:45 with more and more service accounts coming up, right?
    0:31:49 and with software doing the job itself.
    0:31:50 So how do we deal with that?
    0:31:54 So I love the metro framework.
    0:31:55 I really love it.
    0:31:58 If you think about identities,
    0:32:00 there are certain identities on different tracks.
    0:32:03 So marketing has their identities, right?
    0:32:07 Marketing ops, demand gen, content,
    0:32:09 customer success has their tracks.
    0:32:13 And each station is more or less an application
    0:32:15 or like an entitlement, right?
    0:32:17 And some of those overlap, right?
    0:32:20 So for example, customer success and sales overlap
    0:32:21 maybe in Salesforce.
    0:32:25 Then design and marketing overlap in Figma.
    0:32:27 And then especially engineering,
    0:32:29 there are probably like multiple engineering departments
    0:32:32 if we zoom in and they overlap when it comes to,
    0:32:34 especially on an entitlement level,
    0:32:36 different permissions that they have access to.
    0:32:38 So the only interesting thing is people,
    0:32:41 which are more of those wagons,
    0:32:44 they jump from one station to another.
    0:32:47 And each station again is an app or entitlement.
    0:32:49 And why I think that this is interesting is,
    0:32:51 right now we think about the world
    0:32:52 as a world of RBAC.
    0:32:55 – Quick interruption here.
    0:32:59 For the uninitiated, RBAC means role-based access control.
    0:33:01 So instead of assigning permissions individually,
    0:33:03 you’re granting them based on a role.
    0:33:08 – RBAC is not moving stations.
    0:33:11 RBAC basically means, you are a marketing person
    0:33:15 and you have access to everything on this marketing tier.
    0:33:19 Even though probably a lot of that stuff you never use.
    0:33:22 And sales or engineering is especially spooky.
    0:33:24 Engineering, you and DevOps,
    0:33:26 you have access to all customer data
    0:33:29 because an incident might happen and you need access to it.
    0:33:31 Now on top of that,
    0:33:34 we have all those service accounts coming up
    0:33:38 and soon autonomous actors, agents coming up,
    0:33:41 that will also, if we still use RBAC,
    0:33:44 get access to all of those things.
    0:33:45 Even though they don’t need it.
    0:33:47 So the concept is I’m a metro station
    0:33:49 and I need each permission entitlement
    0:33:51 just for a short amount of time.
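The metro-station idea above can be sketched in code. Below is a minimal, hypothetical contrast between standing RBAC grants and just-in-time grants that expire after a short window; all role and permission names are made up for illustration and are not taken from any real product.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: standing RBAC grants vs. just-in-time (JIT) grants.
# Role and permission names are illustrative assumptions only.

RBAC_ROLES = {
    "marketing": {"figma:edit", "salesforce:read", "hubspot:admin"},
    "devops": {"aws:ec2:ssh", "prod-db:read", "prod-db:write"},
}

def rbac_permissions(role: str) -> set[str]:
    """Standing access: everything on the role's 'track', all the time."""
    return RBAC_ROLES.get(role, set())

class JitGrant:
    """A permission that exists only for a short, audited window."""
    def __init__(self, user, permission, minutes=60):
        self.user = user
        self.permission = permission
        self.expires = datetime.utcnow() + timedelta(minutes=minutes)

    def is_active(self, now=None):
        return (now or datetime.utcnow()) < self.expires

# RBAC: a DevOps engineer always holds prod write access.
assert "prod-db:write" in rbac_permissions("devops")

# JIT: the same access exists only while the "train is at the station".
grant = JitGrant("alice", "prod-db:write", minutes=30)
assert grant.is_active()
assert not grant.is_active(now=datetime.utcnow() + timedelta(hours=1))
```

The design point is simply that a JIT grant carries its own expiry, so an agent or service account never accumulates standing access it no longer needs.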
    0:33:55 And I think especially as complexity rises.
    0:33:58 So we are going from like a hundred actors
    0:34:00 to a thousand to 10,000.
    0:34:02 And also the apps become more complicated.
    0:34:06 So instead of having just one or two or three metro stations,
    0:34:08 I will have thousands of metro stations.
    0:34:12 Because I can get access to 10 EC2 instances
    0:34:14 and just like the granularity and the cloud
    0:34:15 and the snowflake is gonna become
    0:34:17 more and more and more granular.
    0:34:19 So the question is like, how are we gonna manage that?
    0:34:22 What’s the new paradigm to manage that?
    0:34:25 So what I believe, how we need to rethink things
    0:34:28 is security was often seen as analysts, right?
    0:34:31 Actually, security started as hackers.
    0:34:34 Security people were those people that hacked the networks
    0:34:36 and they were the people that were deep in Linux
    0:34:38 as sysadmins.
    0:34:40 And actually most security people were sysadmins before
    0:34:43 because there was no security 30 years ago
    0:34:45 and they were true hackers.
    0:34:47 And then suddenly all those kind of great solutions
    0:34:50 came about and they said, here’s an alert,
    0:34:52 there’s an alert, here’s an alert.
    0:34:53 And we’re gonna alert you about all those things
    0:34:56 and you can remediate it very easily.
    0:34:58 And so I feel like more and more security
    0:35:01 became an operating department.
    0:35:02 Similar thing happened to IT.
    0:35:05 IT used to be the hackers and slowly but surely
    0:35:07 they became ticket resolvers.
    0:35:10 Security became a little bit of alert resolvers.
    0:35:12 IT became ticket resolvers.
    0:35:14 And I think the new paradigm that we need to think about
    0:35:16 as we’re thinking about entitlements and access
    0:35:20 as a metro station, security and IT needs to see themselves
    0:35:25 as the architects of that metro station, more or less.
    0:35:28 It’s like what DevOps and infrastructure are to full stack teams.
    0:35:31 So I think the same thing we need to think about
    0:35:32 IT and security.
    0:35:37 IT and security need to become, so to speak, infrastructure teams
    0:35:40 to each department, right?
    0:35:42 And this kind of moves us back to security
    0:35:46 actually hiring for engineering rather than analysts.
    0:35:48 Especially also, as the AI will probably automate
    0:35:50 most of the analyst work.
    0:35:52 So that’s I think a very important insight
    0:35:54 is when it comes to career development,
    0:35:57 as it comes to what type of profile you need to hire,
    0:35:59 especially engineers and analysts
    0:36:01 and building on top of solutions that you’re buying
    0:36:03 is very important.
    0:36:07 So basically the premise in this first act is
    0:36:09 software is becoming autonomous.
    0:36:12 It enables us to create more and more.
    0:36:15 Because of that, entropy is increasing.
    0:36:19 There are more apps, more entitlements and more actors.
    0:36:23 And so what needs to change is security needs to handle
    0:36:27 this infrastructure with some type of technology operations,
    0:36:30 some kind of technology infrastructure.
    0:36:33 So I think that is kind of one important change
    0:36:36 that we need to see as this whole market is changing.
    0:36:39 Now, here’s the second thing.
    0:36:41 It’s about startups by the way.
    0:36:44 This is like kind of an appeal to all my entrepreneurs.
    0:36:46 I believe that we need to build compound businesses
    0:36:47 from day one.
    0:36:49 So what does that mean?
    0:36:52 So security CISOs probably have this problem
    0:36:56 that they need to use 50 different tools.
    0:36:57 And actually, in the last two years,
    0:37:00 especially as the economy has gone down a little bit,
    0:37:02 CISOs have been asking themselves a lot,
    0:37:05 in terms of like, how can I consolidate?
    0:37:07 And that kind of sucks for startups at the beginning,
    0:37:08 I would say.
    0:37:12 Like, okay, we’re starting solving this unique pain point.
    0:37:13 But then CISOs are like, yeah,
    0:37:16 but you know, I have 80 vendors to manage.
    0:37:19 And so the question is that I ask myself a ton
    0:37:23 is how can we build compound businesses from day one?
    0:37:26 So how can you actually build a platform from day one,
    0:37:27 even though you’re a startup?
    0:37:29 And actually counter, when people say,
    0:37:30 I need to consolidate,
    0:37:33 that your startup actually helps them consolidate.
    0:37:35 So in 2023,
    0:37:37 the top priorities for CISOs
    0:37:40 were vendor consolidation and optimizing SaaS licensing.
    0:37:43 Because of course you don’t wanna let people go.
    0:37:46 You’d rather kind of first decrease your software spend.
    0:37:48 So what does it mean for entrepreneurs?
    0:37:49 The question for entrepreneurs is like,
    0:37:51 how can I build a compound business from day one?
    0:37:54 We’ve seen this actually done well across many companies.
    0:37:56 I think Datadog is an awesome company
    0:37:59 that does this super well more on the DevOps side.
    0:38:03 For the longest time, right, they’ve had one product.
    0:38:04 And then actually they switched
    0:38:06 and became this kind of layered product
    0:38:08 for anything observability,
    0:38:10 whether it’s security observability,
    0:38:13 infrastructure observability, application observability,
    0:38:15 they were able to build a compound product.
    0:38:18 And Figma rethought this whole kind of process
    0:38:21 of before there was Sketch, there was Zeplin.
    0:38:23 And what basically Figma said is like,
    0:38:24 what is the underlying concept
    0:38:27 that’s the same across all of those?
    0:38:30 And how can I build a solution that covers that all?
    0:38:30 And I think by the way,
    0:38:32 the whole kind of thing that we’ve seen in here
    0:38:34 is like we had first the bundling era.
    0:38:37 By the way, with Microsoft Oracle and SAP,
    0:38:38 people didn’t have a lot of applications.
    0:38:41 They said like, Oracle is doing it all.
    0:38:42 That was that at the beginning.
    0:38:44 And then slowly with like cloud,
    0:38:47 especially AWS and Azure made that happen,
    0:38:50 cloud became so approachable by everyone
    0:38:51 that suddenly, you know,
    0:38:54 we had all those collaboration tools come up.
    0:38:59 I do think we’re changing back to an industry of rebundling,
    0:39:02 especially as we have this autonomous wave coming up.
    0:39:03 I do believe, I mean, like Wiz is actually
    0:39:05 a great example of that,
    0:39:07 is they started with like kind of a point solution,
    0:39:10 but spread out very aggressively
    0:39:12 and build a compound product very quickly.
    0:39:15 So how are you going to manage that complexity?
    0:39:17 And then the question is like,
    0:39:19 how do I protect against insider threats in some way?
    0:39:20 Why?
    0:39:23 Because go back to the metro station,
    0:39:25 if the developer has access to everything,
    0:39:27 suddenly this intruder can just like hop
    0:39:30 from one station to another and do harm.
    0:39:33 So how can we make sure that it’s kind of just in time,
    0:39:35 only when you are at the station,
    0:39:37 you actually can have access to it?
    0:39:39 Now, that gets kind of hard
    0:39:42 with like millions of permissions.
    0:39:43 So what I believe it’s going to happen,
    0:39:45 and this is something that we are really working on right now
    0:39:48 with models that are coming out that can reason.
    0:39:52 Basically, I think models will be able to reason better
    0:39:54 than our security analysts
    0:39:58 in terms of what a certain role should have access to, right?
    0:40:01 So basically an agent on your identity
    0:40:04 and access management system will look into, okay,
    0:40:09 we had 20 new tickets where these engineers needed access
    0:40:13 to this type of database that live in North America.
    0:40:16 They will automatically update your roles
    0:40:17 and downgrade your roles,
    0:40:19 or at least at the beginning be a co-pilot for you
    0:40:22 and suggest, hey, this role should be updated in this way,
    0:40:25 or those two roles should be merged in that way.
    0:40:27 So this is just like a case study
    0:40:31 where agents will have a huge impact.
    0:40:33 The biggest story I think about security is,
    0:40:36 is that there’s enormous complexity and risk,
    0:40:38 you can never reduce risks to zero.
    0:40:42 The cool thing is if you move more to an engineering mindset,
    0:40:45 where you actually fine-tune your agents and models
    0:40:47 on top of your infrastructure,
    0:40:50 you will be able to solve certain problems
    0:40:53 that you were never able to solve before.
    0:40:56 The RAG will look into, okay, is this privileged access?
    0:40:58 So basically the AI will be able,
    0:41:00 you think about you have a million permissions,
    0:41:02 how are you gonna tag where this permission
    0:41:05 is actually sensitive or not?
    0:41:06 It doesn’t always say read only,
    0:41:09 it doesn’t always say admin access.
    0:41:12 So the AI will be able to understand or can understand
    0:41:14 if that permission is sensitive or not, right?
    0:41:15 So you can reason, okay,
    0:41:18 this person has privileged access or not,
    0:41:21 and then it can also reason on role anomalies.
    0:41:24 Oh man, you know, you are in sales
    0:41:27 and you have access to this right, access in AWS,
    0:41:31 and no one else on your team has that access.
    0:41:32 So basically, you know,
    0:41:35 what a RAG will ask itself is,
    0:41:38 how privileged is this permission, right?
    0:41:40 What is your usage in that permission?
    0:41:44 And is anyone else that has similar HRIS characteristics,
    0:41:45 do they have that access?
    0:41:48 And you can already do this now pretty easily, right?
    0:41:49 This is like kind of more,
    0:41:51 it’s not the model reasoning by itself,
    0:41:53 but you kind of guide it to go through those steps.
    0:41:55 That’s what chain of thought means.
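The guided steps just described, how privileged is the permission, is it actually used, and do peers with similar HRIS attributes hold it, can be sketched as a simple rule pass. This is a hypothetical illustration: the function, field names, and the 90-day threshold are assumptions, and a real system would feed these signals to a model rather than hard-code them.

```python
# Illustrative sketch of the guided anomaly checks described above.
# All names and thresholds are assumptions, not from any real product.

def flag_role_anomaly(user, permission, usage_days_ago, peers):
    """Return human-readable reasons why this grant looks anomalous."""
    reasons = []
    # Step 1: how privileged is this permission?
    if permission.get("privileged"):
        reasons.append("permission is privileged")
    # Step 2: is the user actually using it?
    if usage_days_ago is None or usage_days_ago > 90:
        reasons.append("unused for 90+ days")
    # Step 3: do teammates with similar HRIS characteristics hold it?
    peers_with_access = [p for p in peers if permission["name"] in p["grants"]]
    if not peers_with_access:
        reasons.append("no similar peer holds it")
    return reasons

perm = {"name": "aws:iam:write", "privileged": True}
peers = [{"grants": {"salesforce:read"}}, {"grants": {"figma:edit"}}]
reasons = flag_role_anomaly("sales-rep", perm, usage_days_ago=120, peers=peers)
assert len(reasons) == 3  # privileged, unused, and no similar peer holds it
```

Each check mirrors one step an analyst, or a model guided through a chain of thought, would take before suggesting a role downgrade.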
    0:41:56 And the last thing I want to say is like,
    0:41:59 the cool thing about access is it can be preventative.
    0:42:02 So here’s one thing that we’re already doing.
    0:42:04 If you create a ticket in JIRA,
    0:42:06 or if you create a Slack message and say like,
    0:42:09 hey, can I get this access please in a public channel?
    0:42:12 AI can detect that you’re asking for access.
    0:42:15 And usually the worst thing that can happen
    0:42:16 is like back channel access.
    0:42:18 What that means is someone gives you access
    0:42:20 without following processes.
    0:42:23 Now, you can alert yourself that this happened,
    0:42:25 oh, this person got access without approval,
    0:42:26 but the better way is to prevent
    0:42:29 that from happening in the first place.
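One way to picture that preventative idea: catch access requests posted in public channels and route them into an approval flow before anyone grants access out-of-band. This is a toy sketch with made-up patterns and a made-up workflow; real detection would use a classifier rather than a regex.

```python
import re

# Toy sketch of preventing back-channel access: spot access requests in
# chat and route them into an approval flow. Patterns are illustrative.

ACCESS_REQUEST = re.compile(
    r"\b(can i get|need|requesting)\b.*\b(access|permission|admin)\b",
    re.IGNORECASE,
)

def route_message(message: str) -> str:
    if ACCESS_REQUEST.search(message):
        # In a real system this would open a ticket with an approver,
        # instead of letting a teammate grant access directly.
        return "opened approval request"
    return "ignored"

assert route_message("hey, can I get prod DB access please?") == "opened approval request"
assert route_message("lunch at noon?") == "ignored"
```

The point is that interception happens before the grant, so the unapproved access never exists, rather than being alerted on after the fact.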
    0:42:30 I think the main takeaway is,
    0:42:32 there will be less and less analysts
    0:42:33 because agents will take over
    0:42:36 and you need to upskill them to become more engineers
    0:42:38 or even prompt engineers.
    0:42:39 That’s kind of one big thing.
    0:42:41 The second big thing is think about now,
    0:42:43 like the world is changing so quickly,
    0:42:46 what you can do and what you can demand from vendors
    0:42:50 or what you as an entrepreneur can implement
    0:42:52 when a system can reason by itself,
    0:42:54 that’s the second thing.
    0:42:55 And the third thing is I believe
    0:42:57 because I’m passionate about the industry
    0:42:59 is that the role of identity will increase
    0:43:01 over the next couple of years, more and more.
    0:43:06 – All right, that is all for now.
    0:43:09 Obviously security is always a moving target.
    0:43:11 A cat and mouse chase through progressively
    0:43:15 more complex terrain with more complex tools on both sides.
    0:43:17 Now, if you do have any suggestions
    0:43:20 for future topics to cover, feel free to reach out to us
    0:43:22 at podpitches@a16z.com.
    0:43:24 And if you did like these exclusive excerpts
    0:43:27 from our A16Z campfire sessions event,
    0:43:28 make sure to leave us a review
    0:43:32 at ratethispodcast.com/a16z.
    0:43:34 We’ll see you next time.
    0:43:37 (upbeat music)
    0:43:39 (upbeat music)
    0:43:42 (upbeat music)

    Is it time to hand over cybersecurity to machines amidst the exponential rise in cyber threats and breaches?

    We trace the evolution of cybersecurity from minimal measures in 1995 to today’s overwhelmed DevSecOps. Travis McPeak, CEO and Co-founder of Resourcely, kicks off our discussion by tracing the historical shifts in the industry. Kevin Tian, CEO and Founder of Doppel, highlights the rise of AI-driven threats and deepfake campaigns. Feross Aboukhadijeh, CEO and Founder of Socket, provides insights into sophisticated attacks like the XZ Utils incident. Andrej Safundzic, CEO and Founder of Lumos, discusses the future of autonomous security systems and their impact on startups.

    Recorded at a16z’s Campfire Sessions, these top security experts share the real challenges they face and emphasize the need for a new approach. 

    Resources: 

    Find Travis McPeak on Twitter: https://x.com/travismcpeak

    Find Kevin Tian on Twitter: https://twitter.com/kevintian00

    Find Feross Aboukhadijeh on Twitter: https://x.com/feross

    Find Andrej Safundzic on Twitter: https://x.com/andrejsafundzic

     

    Stay Updated: 

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

     

  • The Science and Supply of GLP-1s

    AI transcript
    0:00:00 People have no idea what’s coming and how big of an innovation this is going to be.
    0:00:06 One of my earliest experiences in healthcare was my pediatrician telling my mom that I should
    0:00:12 go to fat camp. Specialist obesity training. They’re typically missing a few things. The
    0:00:19 average one will have four hours of training in obesity in medical school. We’re actually all
    0:00:26 users of GLP-1s. There we go. In the sense of GLP-1 is a hormone that we all have.
    0:00:31 Obesity. A choice or a condition. Well, regardless of what you believe, this chronic
    0:00:40 disorder continues to impact millions of Americans, with nearly 70% of Americans fitting the description
    0:00:47 of overweight or affected by obesity. And moreover, 55% have cancelled appointments due to the
    0:00:54 anxiety of being weighed, at least according to Knownwell. Knownwell is a company trying to rethink
    0:01:00 obesity medicine and its founder and CEO, Brooke Boyarsky Pratt, joins a16z general partner,
    0:01:06 Vineeta Agarwala. Together they discuss what it’ll really take to change some of these statistics
    0:01:12 and how new technologies like GLP-1s fit into this mix. Brooke herself has personal experience in this
    0:01:18 domain as a former patient with obesity, having even been told by her pediatrician that she has to
    0:01:24 go to a fat camp. Now she’s using that fuel to rethink obesity care herself. Now this episode
    0:01:31 was also part of our sister podcast, Raising Health’s GLP-1 series. And if you’ve been
    0:01:36 paying attention over the last year, GLP-1s went from being an unassuming acronym to a familiar
    0:01:42 class of drugs that some recent studies have even pegged as many as one in eight Americans trying.
    0:01:48 The recent adoption of these drugs has also springboarded companies like Novo Nordisk,
    0:01:53 the manufacturer of Ozempic, to the largest company in Denmark. So if you, like many others,
    0:02:00 are interested in learning more about GLP-1s, make sure to tune into the rest of the series
    0:02:05 on Raising Health. You can find a link to the show or the full series in our show notes. Let’s get started.
    0:02:11 Hello and welcome to Raising Health, where we explore the real challenges and enormous
    0:02:20 opportunities facing entrepreneurs who are building the future of health.
    0:02:23 I’m Olivia. And I’m Chris. You’re joining us for the second episode in our deep dive series on
    0:02:34 the science and supply of GLP-1s. Last week, we heard from Carolyn Jasek, Chief Medical Officer at
    0:02:40 Omada Health. If you haven’t listened to that one, it’s a great primer on GLP-1s from a clinical
    0:02:44 experience. Today, we’ll hear from Brooke Boyarsky Pratt, the founder and CEO of Knownwell.
    0:02:50 Next week’s episode will be with Chronis Manolis of UPMC Health Plan on the pharmacy implications
    0:02:54 of GLP-1s. Brooke talks with Vineeta Agarwala, general partner of a16z Bio and Health,
    0:03:00 about the value of obesity-specific practitioners, patient-centric medical homes,
    0:03:04 and how she thinks the metabolic health space will evolve over time.
    0:03:08 If a patient is given a choice, they prefer a medical home. And that’s what we’ve seen with
    0:03:12 our patients. So we’ve seen a lot of patients leave point solutions, because they say, “Wow,
    0:03:17 you can also do my primary care. I can occasionally see you in person. You are a real doctor who I
    0:03:23 talk to and who I have a care team I know and respect.” So I think that’s really important,
    0:03:29 particularly as it relates to symptom management.
    0:03:32 You’re listening to Raising Health from a16z Bio and Health.
    0:03:35 I am incredibly excited to welcome to the Raising Health pod today Brooke Boyarsky Pratt,
    0:03:44 founder and CEO of an incredible company called Knownwell that we’ve had the real privilege of
    0:03:50 partnering with recently. Brooke’s here to join us to share a little bit about
    0:03:54 how she’s looking at the obesity medicine space as a whole, the role that she hopes Knownwell
    0:04:00 will play there, and a little bit about why she’s building the company at a personal level.
    0:04:05 So Brooke, I’d love for you to just introduce yourself and share the story behind this company.
    0:04:11 Yes, such a pleasure to be here. Thank you so much, Vineeta, for having me on.
    0:04:16 I am a patient. I mean, that’s really what brought me to Knownwell and ultimately led
    0:04:22 me to founding it. I’m someone who’s been in a larger body my whole life. One of my earliest
    0:04:28 experiences in healthcare was my pediatrician telling my mom that I should go to fat camp,
    0:04:33 and that really unfortunately started the process of how I viewed interacting with the
    0:04:40 healthcare system when it came to my body size. And as I got older and sort of had different
    0:04:46 educational and work opportunities, it led me to move a lot. And every time I moved and re-established
    0:04:53 primary care, I felt like I was needing to re-establish the idea that I was a thoughtful
    0:04:59 person who took my healthcare seriously, not for any fault of the primary care doctors I met with.
    0:05:05 They just have a lot of patients every day, and I’m sure it’s frustrating for them to see
    0:05:10 so many patients struggling with the same disease state. So I was dreading going to the doctor,
    0:05:16 and it was even harder when I was actually looking for treatment for my metabolic health
    0:05:21 to find something that was accessible to me. And in my very last move, when I was walking
    0:05:27 to my new PCP for the first time, I started getting curious about if other people feel
    0:05:33 that same dread. Other people who were like me, and I was overwhelmed by the research,
    0:05:40 and I’ll just say briefly, that was kind of the nights and weekends I started pursuing this passion
    0:05:46 of could we really create a patient-centered home for people with overweight and obesity?
    0:05:52 Well, in the venture world, investors talk about founder market fit, and I cannot imagine
    0:06:01 a more compelling and more deeply connected founder to build a company that’s going after
    0:06:08 tackling not only the bias and access and comfort issues that patients face, but also
    0:06:16 the care quality. Let’s talk for just a second about your professional background, though,
    0:06:21 because it’s also really interesting that while you’ve had that patient journey and patient connection
    0:06:26 to the obesity medicine space, you didn’t come from healthcare. So you look a little different
    0:06:32 than some of our other healthcare portfolio CEOs and leaders in the healthcare space,
    0:06:38 but you’re remarkably proficient and picked it up so fast and are so compelling now. But what made
    0:06:44 you jump? You had a background in finance and consulting. Yeah, very traditional kind of business
    0:06:51 background of Penn and Harvard Business School and McKinsey, and worked at a commercial real estate
    0:06:57 company. So certainly not in healthcare. I joke that I’m sort of glad I didn’t know what I was
    0:07:02 getting myself into when I jumped into Knownwell and creating the company. But ultimately,
    0:07:08 I had always hoped that I would one day find something to work on that I felt was my life’s
    0:07:14 work. And before Knownwell, what I was always drawn to was doing a good job delivering for clients
    0:07:20 and being the best colleague and boss I could be in terms of the people I worked with. But if I’m
    0:07:26 being honest, there wasn’t a day that I woke up and thought, I am so passionate about commercial
    0:07:31 real estate today, right? I mean, it was just something that was important to do well and to
    0:07:36 deliver for people. But as I started, I had always been drawn to healthcare, like just to the healthcare
    0:07:42 industry, because I always felt like, how could you have a more direct impact on people than in
    0:07:49 healthcare? And I’d always joke with my physician friend that they made the right decision going
    0:07:54 into healthcare. So it almost sort of came together naturally that as I started to feel like there
    0:08:01 was an opportunity here to really touch a group that had been underserved, even though it’s been a
    0:08:08 steep learning process, it sort of felt natural that I found my way to healthcare.
    0:08:12 Consider me one of your physician friends. I’m very happy to have you join the forces
    0:08:18 in healthcare. Let’s talk a little bit about what obesity and overweight management and medicine
    0:08:25 actually is. We’ve already used the term a few times. So let’s backtrack for our
    0:08:29 listeners for a second and just make sure we’re all on the same page. What is obesity medicine?
    0:08:36 Like, is that a specialty? What is it? It is a specialty. So if we take a step back,
    0:08:43 overweight and obesity is defined by BMI. We could have a whole podcast about how that’s not the
    0:08:50 best measure to use, but it’s the best and sort of easiest measure we have today to diagnose the
    0:08:57 disease. So this is really about living in a body where your weight is higher than your height would
    0:09:04 suggest on a bell curve. And that puts you into overweight and obesity. And it was not long ago.
    0:09:10 I mean, the reason Medicare, for example, doesn’t cover obesity treatment is they see it as a
    0:09:15 cosmetic disease. And it wasn’t long ago that that’s how everyone viewed overweight and obesity.
    0:09:20 2012 saw the first ever board certification in obesity. So that’s how recently this disease
    0:09:30 state has been really viewed as something in the medical community. And obesity medicine is really
    0:09:36 about comprehensive treatment of the state of obesity and overweight. And if a person is interested
    0:09:44 in intentional weight loss, really helping them on that path, both to address the obesity and also
    0:09:51 potentially other metabolic health conditions, such as diabetes, hyperlipidemia, that can sometimes
    0:09:58 go with these diseases. So typically a clinician who practices obesity medicine is a primary care
    0:10:06 doctor from their medical school and residency days, though not always. You can see cardiologists
    0:10:12 and OBGYNs who do have a board certification in obesity. And nowadays we have fellowships. So
    0:10:19 some folks go and do an obesity medicine fellowship. And then we also have a board
    0:10:24 certification. And I’ll note it’s actually the fastest growing specialty in the US. And there’s
    0:10:30 something called the Obesity Medicine Association, which is the largest association of
    0:10:35 clinicians who practice obesity medicine. And we’re really fortunate that my co-founder and our CMO,
    0:10:40 Angela Fitch, is the president of it. So what about a primary care doctor like me who has
    0:10:46 an overweight or obese patient who I’m really trying to figure out how to serve? Help us understand
    0:10:54 the bridge between all primary care physicians in America and the subset of physicians who are
    0:11:01 trained to deliver obesity medicine care and certified to do so, given that we have a situation
    0:11:09 where over 40% of all Americans actually fit that definition. Totally. And as we mentioned earlier,
    0:11:16 everyone’s doing their best. So the difference between a primary care doctor who practices
    0:11:20 obesity medicine and one who doesn’t is not how much they wish to help the patient. So a PCP who doesn’t have
    0:11:28 specialist obesity training, they’re typically missing a few things. The average one will have
    0:11:34 four hours of training in obesity in medical school. This is an unbelievably complicated disease,
    0:11:42 right? So very little formal education on obesity, typically very little continuing education on the
    0:11:48 innovations that are coming on to the market. So in addition to the education issue, though,
    0:11:58 they’re also typically lacking the resources where they practice. So to practice really great
    0:12:04 obesity medicine, we also want things like dietitians, health coaches, a movement program,
    0:12:10 ideally behavioral health, and even like a good prior authorization process if you incorporate
    0:12:17 anti obesity medications. So most primary care doctors are practicing without any of those
    0:12:23 additional services. And what they’ll say to us all the time is, look, I suppose I can prescribe
    0:12:29 something. I don’t even really understand the meds. I don’t understand how to get them approved.
    0:12:34 And the truth is, I know the patient needs more help, and I don’t have the services to help them.
    0:12:39 Whereas someone who’s been in obesity, who’s obesity specialized, not only has the formal
    0:12:44 education and the continuing medical education, but they are typically part of weight centers or
    0:12:50 other groups that have the wraparound services. You mentioned Dr. Fitch. That’s exactly her
    0:12:57 background. She’s set up weight centers in multiple places in the country, both health system
    0:13:03 affiliated and now at Knownwell. You mentioned she’s president of the Obesity Medicine Association.
    0:13:09 But if you could sort of channel her thinking, your thinking, Knownwell’s thinking on this topic,
    0:13:16 what are the key pillars of comprehensive, evidence-based care for patients with overweight
    0:13:24 and obesity? So there’s something called weight normative medicine and weight inclusive. And
    0:13:28 it’s a way we practice using the broad way here. Weight normative is you should be, you know,
    0:13:34 I’m 5’4″, you should be 150 pounds. And every time you come to the office, I’m going to tell you
    0:13:40 you should be 150 pounds. It turns out patients actually gain more weight when they have experiences
    0:13:45 like that. The other approach is called weight inclusive, which is, “Hey, Brooke, I recognize
    0:13:50 you’re not 150 pounds today. Let’s work on the wellness goals and actions you can take inclusive
    0:13:57 of your current body size.” And what’s really interesting is the research suggests that that
    0:14:02 actually leads to much better health outcomes, right? So at its core is the approach around how
    0:14:08 do we work with patients. Then there are multiple pillars of the actual actions we take. So typically,
    0:14:15 though not always an anti-obesity medication will be used and we’ll talk more about those,
    0:14:19 typically a nutrition program is used, either one-on-one coaching with the dietitian or group
    0:14:27 classes. You want to address sleep and stress management that could go as far as someone’s
    0:14:34 undiagnosed sleep apnea and could be as light of an intervention as meditation, right? And other
    0:14:40 things that we work on with patients. Ideally, you would include a movement program, which we do,
    0:14:45 health coaching and remote patient monitoring. So allowing that connectedness to the clinic of
    0:14:52 having a scale at home, if you have heart disease, having a blood pressure cuff at home, a connected
    0:14:57 glucometer and working with a health coach. And lastly, really a behavioral health, right? So
    0:15:02 to the extent that a patient is interested in working on their behavioral health in addition
    0:15:07 to the other medical components, those taken together are really considered the core
    0:15:12 comprehensive program for obesity management. Amazing. And referrals to surgeries and other
    0:15:20 interventions as needed that wrap around the medical care. The right interventions have to be
    0:15:26 matched to the right patients, as with all of medicine. And I’ll also note that one of the
    0:15:32 things I’ve learned from you all is just kind of a better awareness of how much of an adolescent
    0:15:38 problem we have as well in this country and around the world. But we’ve got 14 million
    0:15:43 American children and teens also living with obesity. And so this isn’t just an adult medicine
    0:15:50 or an adult primary care challenge in front of us. It’s really one that affects the pediatric
    0:15:54 community. That’s a community that’s even less trained on the whole in managing conditions
    0:16:02 and comorbidities associated with obesity and overweight. So I think it’s just we have a really
    0:16:07 important opportunity ahead of us in our healthcare system to get this right, to get obesity and
    0:16:14 overweight evidence based care right and to do it at scale. So incredibly excited about what
    0:16:20 you’re building. Let’s double click on one of the pillars that you mentioned, which are obesity
    0:16:25 medicines. And not a day goes by at this point where GLP-1s are not headlining news stories,
    0:16:33 whether it’s around cost, access, new drugs, new therapies, oral versions of the therapies,
    0:16:41 you know, anyone reading healthcare news at this point is basically inundated with GLP-1
    0:16:46 headlines. What are GLP-1s? Yeah, you know, we’re actually all users of GLP-1s.
    0:16:52 There we go. So GLP-1 is a hormone that we all have. It has a lot of different functions,
    0:16:59 but primarily it helps with insulin regulation as we eat and consume food. It affects the speed
    0:17:06 of digestion after we eat, and it also affects the feeling of fullness. So kind of the signals
    0:17:12 that go to our brain. So what a GLP-1 therapy is typically going to do is mimic the
    0:17:19 hormone that we have naturally occurring in our body. And then you’ll hear about things like
    0:17:24 dual and tri-agonists, which is, you know, as these drugs get more advanced, they’re mimicking
    0:17:30 additional hormone pathways. So, you know, simply just kind of stacking on top of each other more
    0:17:37 of the pathways that we believe impact obesity as well as, of course, diabetes. And, you know,
    0:17:43 what I’ll mention with GLP-1s, and what Angela, Dr. Fitch, would say if she were on the phone
    0:17:47 with us, is they’ve been around a really long time, right? The first GLP-1 was approved by
    0:17:52 the FDA in 2005. So we often talk about it as if these came out of thin air. But the truth is,
    0:18:00 especially endocrinologists and physicians who have been in the obesity space for a long time
    0:18:05 have been using these medications. When did you first learn about these medications and their
    0:18:12 potential and what role did they play in your conception of impact at known well?
    0:18:19 2018 was a big year for me, because first it was the first time I had ever heard about obesity
    0:18:25 medicine, had never heard of it as a subspecialty. And I had seen a primary care doctor in Philadelphia.
    0:18:31 And she knew I was struggling with my weight. And she said, well, you should see Dr. Jeanine
    0:18:35 Crullos. She’s a leader in the field in Philadelphia. And she practices obesity medicine. And I was
    0:18:41 like, there are people who could just help me. So I saw her and she was the first person who
    0:18:48 talked about ozempic with me. And I thought it was just absolutely wild. I was like, there’s an
    0:18:53 injectable and it helps with weight, of course, at the time was being used just to treat diabetes.
    0:18:58 So that was the first time I had heard of it. Obesity medicine physicians were using it off
    0:19:04 label at that point to treat obesity. But it’s really interesting that I feel like I actually
    0:19:12 ended up hearing about it much earlier than it sort of came on to the mainstream. But it’s not
    0:19:16 a surprise that people who knew what they were doing knew how big of a deal it was. And I’ll say
    0:19:22 when I first started talking to Dr. Fitch in 2020, when it was a little bit more getting into the
    0:19:28 public eye, she was just like, people have no idea what’s coming and how big of an innovation
    0:19:34 this is going to be as these get more obesity indications approved by the FDA. So she was
    0:19:40 certainly a fortune teller. Yeah, the weight loss data was staring us in the face in the diabetes
    0:19:45 trials. So all the dates you just mentioned, 2018, 2020, are well in advance of kind of the
    0:19:52 current moment in time when GLP-1 receptor agonists have reached peak public awareness.
    0:19:59 But it is interesting to reflect on that that data was sort of staring us in the eyes.
    0:20:03 What maybe wasn’t as obvious just because the studies hadn’t been done specifically in patients
    0:20:09 who do not have diabetes but have obesity. What was not maybe as obvious was just the
    0:20:14 role that they would play outside of diabetes specifically for the indication of weight loss.
    0:20:20 But at this point in time, they’re in the arsenal. What are some of the biggest myths
    0:20:26 about GLP-1 drugs? Yeah, one is that they’re a miracle that works for everyone.
    0:20:32 They are our most effective treatment, right? That is no question. But when you
    0:20:38 look at the data on semaglutide slash Ozempic, right, 40% of patients will lose 20% of their
    0:20:44 body weight. So that means 60% of patients won’t lose 20% of their body weight. So that’s a lot
    0:20:51 of folks. And the data gets better with newer medications. So with tirzepatide, 60% of
    0:20:57 patients lose 20% of their body weight, which still leaves 40% who don’t. And the reason
    0:21:02 I think that’s so important to call out is one, it’s important you get the right medication with
    0:21:07 the right patient, which is not always a GLP-1. They may not be a responder. And the second is,
    0:21:13 boy, for the patients who fall in that 60% or that 40%, unfortunately, they can feel like a
    0:21:19 failure. You know, folks who have already felt like failures this whole time with their weight
    0:21:25 oftentimes. And then when they don’t turn out to be a responder, I think that’s where we need to
    0:21:30 improve the education so that there isn’t this shame and stigma around the disease state and the
    0:21:36 person. The second thing I would say is that there’s a myth around the tolerability of these drugs.
    0:21:44 So you see a lot of PBM data and other data sets that show, you know, after one year, only 40%
    0:21:52 of people are still on the drug. And that’s often used to show there’s a lot of waste, you know,
    0:21:58 there’s a lot of issues with the medication. First, this data is often
    0:22:03 confounded by the fact that people lose access to the medication, right, from their insurance or
    0:22:07 they move jobs or whatever. But even for the people who stay on, what we have found in our clinic
    0:22:13 with our own research is well over 90% of our patients stay on the medication. And we think the
    0:22:19 difference is really twofold. One, it’s better understanding the patient before you put them
    0:22:25 on the medication, full health history, family, right, are you putting the right person on the
    0:22:30 right medication. And then the second is actively managing the symptoms of those patients. So we
    0:22:37 know exactly what to expect for certain archetypes of patients when they start a medication,
    0:22:43 whether it’s phentermine or, you know, Mounjaro. So we are able to say like, hey, we expect in three
    0:22:50 days you could start to experience nausea, actually eating small meals and making sure you start in
    0:22:55 the morning, even if you’re not hungry, will help curb that nausea, right. So things you can do to
    0:23:00 really better educate the patient and actually reduce those side effects over time. So I think
    0:23:07 those are kind of two important myths. I would say there is one last one that doesn’t come up
    0:23:12 quite as often, which is around food quality. A lot of times people say the food quality no
    0:23:17 longer matters. Patients can really eat whatever they want if they’re on these medications because
    0:23:22 they’re so effective at reducing and curbing appetite. The last thing I’d say about that is
    0:23:27 actually we have pretty good research to show that maintaining or increasing protein intake is
    0:23:33 unbelievably important, even more so on these medications. It’s actually more like having had
    0:23:38 bariatric surgery. So when we work with our patients, we have such a keen focus on things
    0:23:45 like protein intake, even if that means having to supplement occasionally with a protein bar or
    0:23:50 shake, because it can be really dangerous to the patient’s long term health to have them, you know,
    0:23:56 losing a lot of muscle mass. Especially their muscle mass. Yeah, exactly. I think sometimes
    0:24:01 that detail gets lost in the headlines. So let’s come back to that 40% of patients who, even on
    0:24:07 tirzepatide, do not sort of hit the weight loss goal that they might have set jointly with their
    0:24:16 doctor. Can you just educate us on what some of the both medical and non-medical interventions
    0:24:22 that we might be able to offer that subset of less responsive patients are? Absolutely. So as
    0:24:31 I first mentioned, there actually could be a medication that’s better for that patient. You
    0:24:36 know, interestingly, you’ll find sometimes that patients who are higher responders to
    0:24:40 fentermine, by the way, a drug that’s like $10 a month, if you get a generic, are better or higher
    0:24:46 responders than that patient would be for a GLP-1. Part of that we think is the biological
    0:24:53 process around what’s driving the obesity, that they actually respond better to different
    0:24:58 medications. So first is making sure that we’re trying different medical therapies and combination
    0:25:03 therapies to see if there is a more effective medication for them. You know, ideally, actually,
    0:25:09 you’re starting with that therapy and moving up to GLP-1s. The second, of course, is bariatric
    0:25:14 surgery. There’s a big belief in the market that bariatric surgery is going to tank. We actually,
    0:25:20 we have a little bit of a contrarian view there. We feel so many more people are finally seeking
    0:25:26 treatment that for a period of time, we actually could see an increase in bariatric surgery,
    0:25:31 because people are finally having these conversations with physicians. So, you know,
    0:25:37 bariatric surgery, particularly for a higher BMI individual, especially if they have a comorbidity,
    0:25:43 in today’s world, maybe not with innovations 10 years down the line, but in today’s world
    0:25:49 is a really effective treatment. And then there are things like the nutrition therapy,
    0:25:55 the behavioral health. This is such a complicated disease state. And what we found with all of our
    0:26:02 patients is there’s generally not a silver bullet. So how do you work across these different
    0:26:07 modalities and really problem solve with the patient to understand what’s going on?
    0:26:13 What do we know about the long-term impact of these drugs, especially in a world where
    0:26:19 so many patients are just getting on these drugs, but there exists, as you pointed out,
    0:26:26 an evidence base of patients who have been on, you know, this drug class since 2005.
    0:26:33 So what do we learn from that body of data? Yeah. So from that earlier data, which of course,
    0:26:39 as we talked about, it’s going to be limited because it’s certain types of patients
    0:26:42 who were being tested back then, they seem pretty darn safe, right? I mean, there are a few things
    0:26:49 like potential risk of thyroid cancer that have been called out from animal studies,
    0:26:54 but have never actually been replicated in human studies, right? Like when you look at real world
    0:26:59 and clinical trial data, you are not seeing an increased risk of thyroid cancer for patients
    0:27:05 who have been on the medication for a long time. So generally speaking, obviously there are certain
    0:27:11 things that are coming out that are still being investigated like suicidal thoughts and other
    0:27:15 potential side effects that they’ll certainly keep following up on. But for the best data,
    0:27:21 you know, peer-reviewed, double-minded studies that we have today, there really aren’t large,
    0:27:28 concerning kind of pieces of evidence that we’ve seen in terms of longitudinal data.
    0:27:34 In fact, it looks like cautious optimism, but it looks like some of the long-term benefits
    0:27:41 of the drug class over the longer-term horizon for patients could be quite interesting with respect
    0:27:48 to cardiovascular disease risk, with respect to treatment, potentially even reversal of
    0:27:54 fatty liver disease and steatosis, potentially even an impact on addiction states and other
    0:28:03 behavioral health conditions. What do you make of that? How does Dr. Fitch have those conversations
    0:28:09 with patients who come to know and well and are curious about this range of impact?
    0:28:16 It’s really exciting, right? I mean, like you said, the earlier the data, the more we want
    0:28:22 to be thoughtful about our excitement around it. But look, this select trial around cardiovascular
    0:28:27 risk was pretty darn compelling. Was it one large trial? Yes, right? So we’re going to see more
    0:28:33 data. Obviously, the study that came out around patients who were HIV positive a couple of days
    0:28:38 ago with their reduction in fatty liver was extremely exciting. We’re doing actually a
    0:28:42 clinical trial in fatty liver. To your point, things like addiction, while it’s early in the data,
    0:28:49 we see it in spades in our patients, right? So if you talk to a doctor in clinical practice,
    0:28:54 they will say, “I would be shocked if the data doesn’t end up proving out what we’re seeing in
    0:28:59 our clinical practice.” So the way Dr. Fitch generally talks about this is, again, we always
    0:29:05 want to be cautiously optimistic when the data is early, but it’s really compelling. I mean,
    0:29:10 in my clinic, even I see cancer survivors and obesity and overweight is not the most common
    0:29:15 complication in cancer survivors who’ve been through pretty aggressive therapy. But sometimes,
    0:29:21 these other states, whether it’s unexplained cardiometabolic profiles after a bone marrow
    0:29:27 transplant or fatty liver disease associated with prior steroid therapy and things like that.
    0:29:32 So there are all these indications that are popping up in places that I wouldn’t necessarily
    0:29:37 have expected, but are really encouraging and make me optimistic about the drug class and the
    0:29:42 role it will play in improving health outcomes. Let’s talk about access. Clinics like Knownwell
    0:29:50 are playing an important role in figuring out how to scale access to these medications in a safe way
    0:29:57 and in a way that’s evidence-based and consistent with where we want these drugs to go based on
    0:30:03 the evidence that you outlined. So what is the best practice for prescribing these drugs? How
    0:30:08 does a provider determine if a patient is eligible? Yeah, so we think the best practice includes a
    0:30:15 few things. So patient medical records. I know that sounds silly, but really, I mean, it’s actually
    0:30:22 pretty rare in the GLP-1 space right now. So understanding the longitudinal health history
    0:30:28 of a patient, what are their comorbidities? When did those comorbidities start? So we get medical
    0:30:34 records on all of our patients, for example. As we talked about earlier, thorough kind of social
    0:30:39 and family history. Because while the data on things like thyroid cancer today aren’t terribly
    0:30:46 compelling in terms of being nervous about it, if you have an aunt or a mom who has had a specific
    0:30:51 type of thyroid cancer, we’re going to have a much longer conversation about if a GLP-1 is the
    0:30:56 right answer for you, just given the data we have today. So thoroughly understanding the patient
    0:31:02 from those kind of medical perspectives, understanding the emotional and behavioral
    0:31:07 elements of the obesity for the patient. We always talk about what was the age at which
    0:31:12 you started struggling with your weight, what’s been your highest weight. Because for example,
    0:31:17 someone who may eat emotionally could actually need different kind of intervention
    0:31:22 from someone who actually just eats in excess at different times. So there are different elements
    0:31:29 of how a patient’s relationship with food has evolved that may impact what their treatment
    0:31:36 should be. So I think that’s extremely important and something we spend a lot of time on. Third,
    0:31:41 of course, is what does the patient actually have access to? The worst thing you could do is tell a
    0:31:46 patient after all of this evaluation, spending an hour with you live synchronously, I think you’d
    0:31:53 be an amazing candidate for Mounjaro, for Zepbound, and then you find out their insurance
    0:31:59 doesn’t cover it and it’s a formulary exclusion. So for us, we try to have that information on the
    0:32:04 patient before they even walk in the door. And then the last element is really that synchronous
    0:32:10 interaction. We think it’s helpful to occasionally see a patient in person, but we don’t always.
    0:32:16 But whether it’s live or via video, again, it sounds a little bit silly, but like being able
    0:32:22 to see the patient understanding their emotional response when you’re talking about different
    0:32:27 interventions, we think is really important. Couldn’t agree more. It’s not necessarily how all
    0:32:33 GLP-1 receptor agonist access is happening, though, today. And you’re seeing it, we’re seeing it,
    0:32:43 patients are seeing it. There are emerging different channels through which medication access
    0:32:50 may become possible. There are still supply shortages and expense hurdles that make those
    0:32:56 channels not totally a turnkey solution. But going back to comprehensive obesity medicine,
    0:33:02 how do you think about where that goes in a world where there are other avenues by which
    0:33:12 patients are understandably looking to access medication that they think could really help
    0:33:18 them? If a patient is given a choice, they prefer a medical home. And that’s what we’ve seen with
    0:33:24 our patients. So we’ve seen a lot of patients leave point solutions because they say, “Wow,
    0:33:30 you can also do my primary care. I can occasionally see you in person. You are a real doctor who I
    0:33:35 talk to and who I have a care team I know and respect.” So I think that’s really important,
    0:33:41 particularly as it relates to symptom management and comorbidities.
    0:33:46 By the way, if you have comorbidities that are real medical conditions,
    0:33:49 we can also manage your diabetes and everything else. And we’ve had patients who say to us like,
    0:33:54 “Look, I was throwing up for four days. I ended up in the ER. I couldn’t get anyone in the app
    0:33:58 to respond to me.” So I think there’s a real patient safety and patient comfort in going to
    0:34:05 something that’s more clinically oriented. But I think to your point, look, patients are so
    0:34:11 desperate for access that there will always be a role to play, whether good or bad in some instances,
    0:34:19 of this more direct prescribing with less interaction with the patient. But I think at
    0:34:25 their core, most patients, and I’ll speak for myself, want to feel known well. If they can find
    0:34:32 that locally, with someone who takes their insurance, that is their preferred method. So I think both models
    0:34:38 will exist in the long term. The one thing I would add though is I do wonder if we’ll see
    0:34:45 more scrutiny around that prescribing. We have a physician who’s joining us from
    0:34:50 another company who had said, “Look, the big reason I’m leaving is that a year ago
    0:34:54 we stopped having any synchronous visits with patients. I get a survey that’s filled out
    0:35:01 by the patient and I’m meant to prescribe, and I’ve never even seen or talked to that patient
    0:35:05 synchronously.” I’m going to go out on a limb, and I’m not a doctor, and say, I don’t know that
    0:35:10 that’s the best medical practice. I could claim to be a 75-year-old man if you’ve never seen
    0:35:15 my medical record and you’ve never seen me. So I do think, probably at the most extreme end,
    0:35:21 we’ll see some curbing of that kind of behavior. What else will change with access going forward?
    0:35:27 So it’s really important to note that obesity is one of the only disease states that’s not
    0:35:34 a standard benefit on insurance. I bet you’ve never been a part of a conversation that says,
    0:35:40 “Is an employer or an insurer going to cover breast cancer? It’s expensive, but we cover it.
    0:35:48 Are we going to cover diabetes? It’s expensive, but we cover it.” So I think obesity is more
    0:35:55 akin to what you’ve seen with fertility where it was considered this carve-out rider and that puts
    0:36:02 employers in the really tough position of when they’re self-insuring or when they’re going to
    0:36:09 payers and buying something off the shelf. It’s not typically within the standard benefit,
    0:36:15 and that means they need to make a decision about if they’re going to include it or not,
    0:36:19 and are they going to raise their costs. So we think ultimately the most important thing for
    0:36:24 access, and Dr. Fitch was on the Hill on Monday advocating for it, is getting TROA, the Treat and Reduce Obesity Act, passed,
    0:36:30 which would have Medicare cover obesity, which would really be the first step in establishing
    0:36:36 obesity as a standard part of any insurance benefit. Until then, we’re left to each insurance company
    0:36:45 and each employer trying to navigate what’s a really difficult situation. We do think over time
    0:36:52 access in terms of insurance is going to continue to improve. We see more Medicaid states approving
    0:36:58 obesity treatment. We have never seen the momentum we currently see behind TROA
    0:37:05 as it relates to Medicare, and you are seeing employees and patients use their voice and
    0:37:11 getting their insurance package to cover it. So we think it’s actually going to be a good
    0:37:16 news story, particularly as more disease states seem to be treated by GLP-1s.
    0:37:22 And the argument around treating obesity effectively, having such a wide-ranging impact
    0:37:30 on overall health and on concomitant conditions, whether that’s diabetes, cardiovascular conditions,
    0:37:36 hypertension, the list goes on, is enormous. So that argument puts obesity in a special category
    0:37:44 of something that patients want care for, something that providers want to deliver care for,
    0:37:50 and something that I think ultimately will lower overall medical expenditures in the quest to achieve
    0:37:59 great health outcomes, which is what we all want. So the fertility analogy is very interesting, and
    0:38:03 I’m also a big proponent of access to great fertility care. The behavioral health analogy is also
    0:38:10 interesting. It took some time for our collective communities to understand that those are medical
    0:38:16 conditions. They have implications for other medical conditions and for overall patient well-being,
    0:38:21 patient cost, patient access, patient return to work. I don’t want to lose sight of something
    0:38:26 you slipped in there, which I just thought was so beautiful and important. But this idea of
    0:38:32 building a medical home where patients feel known well or well-known to their care team
    0:38:42 is just really beautiful. It’s something that I think every patient wants, every parent wants
    0:38:48 for their child. Everybody wants their doctor to know them well, no matter how much technology
    0:38:55 is coming into healthcare. You want your doctor to feel like they know you well enough to make
    0:39:01 the right choices when they have many choices, which is kind of the incredible realm we’re
    0:39:07 entering in obesity medicine. Your doctor’s going to have many choices. And so choosing between those
    0:39:13 choices presumably requires knowing you well. So I just love that mission, that name. We could end
    0:39:20 this just amazing conversation on the topic of building a company to execute on that mission.
    0:39:27 How do you scale that? How do you blow it out to everybody who wants it?
    0:39:31 A lot of things have to go right. And they are. One is, look, while obesity medicine is the fastest
    0:39:38 growing subspecialty in the US, we still have like 6,500 clinicians, most of whom don’t practice
    0:39:44 obesity medicine, versus 115 million Americans who need treatment. So the answer certainly can’t be that we just hire every
    0:39:51 obesity medicine certified physician in the country. So the first thing we’re doing is hiring,
    0:39:57 you know, APPs and physicians, PCPs, right, who have not had the training in obesity, but are really
    0:40:04 excited to get it. And something we do is we train them in depth in the best practices of
    0:40:10 delivering obesity medicine. This is actually something Dr. Fitch does across the country today
    0:40:15 with PCPs. We do that not only for our own PCPs, but frankly others in the community because we
    0:40:21 think it’s so important for people to expand access for patients. So first is hiring non-specialized
    0:40:29 in addition to specialists, physicians, so that we can treat more patients and deliver that care
    0:40:34 at scale. The second is investing in the technology to help automate the stuff that doesn’t matter.
    0:40:40 As you well know, so much of practicing medicine, and especially on the primary care side,
    0:40:47 so much of that is administratively burdensome, but not something that really deepens the
    0:40:53 relationship between the patient and the physician, right. It’s things like getting referrals done,
    0:40:58 getting the imaging in, getting the prior auths out. So things that are just necessary for care.
    0:41:04 So we’re investing a lot in making sure that those pieces of our process are more automated so that
    0:41:10 we can scale and leave physicians the time to spend with their patients. And that’s the last
    0:41:16 thing I would say is really around physician clinical decision support and productivity.
    0:41:21 We are big believers that to serve the patient, you also have to serve the clinician.
    0:41:28 We know there is a huge burnout problem in the country. So how do we help clinicians work to
    0:41:33 the top of their licensure? We’re working on a lot of systems processes and tools to really reduce
    0:41:39 that burden for clinicians so that they are able to see more patients and spend time with those
    0:41:45 patients. What’s something that you’ve learned as a founder about standing up a business to do
    0:41:52 all those things and kind of move closer to your vision around scale? What’s something you’ve learned
    0:41:59 that they just don’t teach in business school? You have to be the one who jumps.
    0:42:04 So I’ll share a little story with you. I was in Hawaii and it was 8 a.m., and I had finally worked
    0:42:13 up the courage to schedule a meeting with my CFO and CEO at 5 p.m. that day, which was a big
    0:42:20 deal. I was on the executive team, and I was going to let them know I was going to step down.
    0:42:26 This is at my prior company, because I was going to start Knownwell, and I was dreading that
    0:42:31 conversation because I love the company so much and love them. At 8 a.m. right before I was going
    0:42:36 to head out to a farm, I got a text from Dr. Fitch that said, “I know you’re going to quit today.
    0:42:43 Don’t quit. What if we can’t do it?” This was many years ago now. It was funny. I looked to my husband
    0:42:53 and I said, “You know what? I’m always going to have to be the one who jumps.” Like someone,
    0:42:59 some founder has to be the one who just says, “I’m going off the cliff because someone’s got to be
    0:43:05 the first one to do it.” And I called Dr. Fitch. She does not mind me sharing this story and I said,
    0:43:11 “We’re going to do it and I’m going to be the first person who jumps and I know you’ll end up following
    0:43:16 me.” And she did. She jumped too. I underestimated, and I don’t mean that, I’m not
    0:43:25 a military vet or anything else, but I underestimated the amount of courage it takes as a founder.
    0:43:31 You focus so much on all the day-to-day stuff, the long hours and building the team.
    0:43:36 But it’s like every day you really need to be the person who says, “I’m all in and I’m going to set
    0:43:42 the cultural tone that we are going to get there.” And I just, I didn’t realize how important that
    0:43:49 would be on the path. I love that. That’s a great note to end on. It is incredible what you’re
    0:43:55 building in such an important space on behalf of patients who want to access amazing comprehensive
    0:44:03 obesity medicine. Thank you for joining us on the Raising Health podcast. And thank you for being
    0:44:08 all in. Thanks for having me.
    0:44:12 Thank you for listening to Raising Health. Raising Health is hosted and produced by me,
    0:44:22 Chris Tatiosian, and me, Olivia Webb, with the help of the Bio and Health team at A16Z.
    0:44:28 The show is edited by Phil Hegseth. If you want to suggest topics for future shows,
    0:44:32 you can reach us at raisinghealth@a16z.com. Finally, please rate and subscribe to our show.
    0:44:38 The content here is for informational purposes only, should not be taken as legal,
    0:44:44 business, tax, or investment advice, or be used to evaluate any investment or security,
    0:44:48 and is not directed at any investors or potential investors in any A16Z fund. Please
    0:44:53 note that A16Z and its affiliates may maintain investments in the companies discussed in this
    0:44:58 podcast. For more details, including a link to our investments, please see A16Z.com/disclosures.

    Brooke Boyarsky Pratt, founder and CEO of knownwell, joins Vineeta Agarwala, general partner at a16z Bio + Health.

    Together, they talk about the value of obesity medicine practitioners, patient-centric medical homes, and how Brooke believes the metabolic health space will evolve over time.

    This is the second episode in Raising Health’s series on the science and supply of GLP-1s. Listen to last week’s episode to hear from Carolyn Jasik, Chief Medical Officer at Omada Health, on GLP-1s from a clinical perspective.

     

    Listen to more from Raising Health’s series on GLP-1s:

    The science of satiety: https://raisinghealth.simplecast.com/episodes/the-science-and-supply-of-glp-1s-with-carolyn-jasik

    Payers, providers and pricing: https://raisinghealth.simplecast.com/episodes/the-science-and-supply-of-glp-1s-with-chronis-manolis

     

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

  • The State of AI with Marc & Ben

    AI transcript
    0:00:00 Most of the content created on the Internet is created by average people, and so kind
    0:00:03 of the content on average, you know, as a whole on average is average.
    0:00:09 The test for whether your idea is good is how much can you charge for it?
    0:00:12 Can you charge the value?
    0:00:14 Or are you just charging the amount of work it’s going to take the customer to put their
    0:00:18 own wrapper on top of OpenAI?
    0:00:22 The paradox here would be the cost of developing any given piece of software falls, but the
    0:00:27 reaction to that is a massive surge of demand for software capabilities.
    0:00:31 And I think this is one of the things that’s always been underestimated about humans is
    0:00:37 our ability to come up with new things we need.
    0:00:41 There’s no large marketplace for data.
    0:00:43 In fact, what there are is there are very small markets for data.
    0:00:47 In this wave of AI, big tech has a big compute and data advantage.
    0:00:52 But is that advantage big enough to drown out all the other startups trying to rise up?
    0:00:57 Well, in this episode, A16Z co-founders Marc Andreessen and Ben Horowitz, who both, by the
    0:01:02 way, had a front row seat to several prior tech waves, tackled the state of AI.
    0:01:07 So what are the characteristics that’ll define successful AI companies?
    0:01:12 And is proprietary data the new oil, or how much is it really worth?
    0:01:17 How good are these models realistically going to get?
    0:01:19 And what would it take to get 100 times better?
    0:01:23 Mark and Ben discuss all this and more, including whether the venture capital model needs a
    0:01:27 refresh to match the rate of change happening all around it.
    0:01:31 And of course, if you want to hear more from Ben and Mark, make sure to subscribe to the
    0:01:35 Ben and Mark podcast.
    0:01:37 All right, let’s get started.
    0:01:41 It is kind of the darkest side of capitalism when a company is so greedy, they’re willing
    0:01:46 to destroy the country and maybe the world to, like, just get a little extra profit.
    0:01:50 When they do it, like, the really kind of nasty thing is they claim, oh, it’s for safety.
    0:01:55 You know, we’ve created an alien that we can’t control, but we’re not going to stop
    0:01:59 working on it.
    0:02:00 We’re going to keep building it as fast as we can, and we’re going to buy every freaking
    0:02:03 GPU on the planet.
    0:02:04 But we need the government to come in and stop it from being open.
    0:02:08 This is literally the current position of Google and Microsoft right now.
    0:02:13 It’s crazy.
    0:02:16 The content here is for informational purposes only, should not be taken as legal, business,
    0:02:20 tax or investment advice or be used to evaluate any investment or security and is not directed
    0:02:26 at any investor or potential investors in any A16Z fund.
    0:02:30 Please note that A16Z and its affiliates may maintain investments in the companies discussed
    0:02:35 in this podcast.
    0:02:36 For more details, including a link to our investments, please see A16Z.com/disclosures.
    0:02:41 Hey, folks, welcome back.
    0:02:44 We have an exciting show today.
    0:02:45 We are going to be discussing the very hot topic of AI.
    0:02:49 We are going to focus on the state of AI as it exists right now in April of 2024, and
    0:02:53 we are focusing specifically on the intersection of AI and company building.
    0:02:57 Hopefully, this will be relevant to anybody working on a startup or anybody at a larger
    0:03:01 company.
    0:03:02 We have as usual solicited questions on X, formerly known as Twitter, and the questions
    0:03:05 have been fantastic.
    0:03:06 We have a full lineup of listener questions, and we will dive right in.
    0:03:11 First question, so three questions on the same topic.
    0:03:14 Michael asks, “In anticipation of upcoming AI capabilities, what should founders be
    0:03:18 focusing on building right now?”
    0:03:20 Gwen asks, “How can small AI startups compete with established players with massive compute
    0:03:25 and data scale advantages?”
    0:03:27 Alistair McLea asks, “For startups building on top of OpenAI, etc., what are the key
    0:03:32 characteristics of those companies that will benefit from future exponential improvements
    0:03:36 in the base models versus those that will get killed by them?”
    0:03:39 Let me start with one point, Ben, and then we’ll jump right to you.
    0:03:42 Sam Altman recently gave an interview, I think maybe Lex Fridman or one of the podcasts,
    0:03:45 and he said something I thought was actually quite helpful.
    0:03:48 Let’s see, Ben, if you agree with that.
    0:03:49 He said something along the lines of, “You want to assume that the big foundation models
    0:03:54 coming out of the big AI companies are going to get a lot better, so you want to assume
    0:03:57 they’re going to get like a hundred times better.
    0:04:00 As a startup founder, you want to then think, “Okay, if the current foundation models get
    0:04:03 a hundred times better, is my reaction, oh, that’s great for me and for my startup, because
    0:04:08 I’m much better off as a result, or is your reaction the opposite as it, oh, shit, I’m
    0:04:12 in real trouble.”
    0:04:13 Let me just stop right there, Ben, and see what you think of that as general advice.
    0:04:16 Well, I think generally that’s right, but there’s some nuances to it, right?
    0:04:22 So I think that from Sam’s perspective, he was probably discouraging people from building
    0:04:28 foundation models, which I don’t know that I would entirely agree with that, and that
    0:04:34 a lot of the startups building foundation models are doing very well, and there’s many
    0:04:38 reasons for that.
    0:04:39 One is there are architectural differences, which lead to how smart is the model, there’s
    0:04:43 how fast is the model, there’s how good is the model in a domain.
    0:04:47 And that goes for not just text models, but image models as well, there are different
    0:04:53 domains, different kinds of images that responds to prompts differently.
    0:04:58 If you ask Midjourney and Ideogram the same question, they react very differently depending
    0:05:03 on the use cases that they’re tuned for.
    0:05:07 And then there’s this whole field of distillation where Sam can go build the biggest, smartest
    0:05:13 model in the world, and then you can walk up as a startup and kind of do a distilled
    0:05:18 version of it and get a model very, very smart at a lot less cost.
    0:05:22 So there are things that, yes, the big company models are going to get way better, kind of
    0:05:29 way better at what they are.
    0:05:31 So you need to deal with that.
    0:05:34 So if you’re trying to go head to head full frontal assault, you probably have a real
    0:05:37 problem just because they have so much money.
    0:05:41 But if you’re doing something that’s different enough or a different domain and so forth,
    0:05:49 for example, at Databricks, they’ve got a foundation model, but they’re using it in
    0:05:54 a very specific way in conjunction with their kind of leading data platform.
    0:06:00 So, okay, now if you’re an enterprise and you need a model that knows all the nuances
    0:06:07 of how your enterprise data model works and what things mean and needs access control
    0:06:14 and what needs to use your specific data and domain knowledge and so forth, then it doesn’t
    0:06:19 really hurt them if Sam’s model gets way better.
    0:06:21 Similarly, ElevenLabs with their voice model has kind of embedded into everybody.
    0:06:28 Everybody uses it as part of kind of the AI stack.
    0:06:31 And so it’s got kind of a developer hook into it.
    0:06:35 And then they’re going very, very fast at what they do and really being very focused
    0:06:40 in their area.
    0:06:41 So there are things that I would say like extremely promising that are kind of ostensibly, but
    0:06:47 not really competing with OpenAI or Google or Microsoft.
    0:06:52 So I think it sounds a little more coarse-grained than I would interpret it if I was building
    0:06:57 a startup.
    0:06:58 Right.
    0:06:59 Let’s dig into this a little bit more.
    0:07:00 So let’s start with the question of do we think the big models, the god models are going
    0:07:03 to get 100 times better?
    0:07:04 I kind of think so and then I’m not sure.
    0:07:07 So if you think about the language models, let’s do those because those are probably
    0:07:11 the ones that people are most familiar with.
    0:07:13 I think if you look at the very top models, you know, Claude and OpenAI and Mistral and
    0:07:19 Llama, the only people who I feel like really can tell the difference as users amongst those
    0:07:26 models are the people who study them, you know, like they’re getting pretty close.
    0:07:31 So you would expect if we were talking 100x better that one of them might be separating
    0:09:36 from each other a lot more, but the improvement, so 100x better in what way?
    0:07:42 Like for the normal person using it in a normal way, like asking it questions and finding
    0:07:47 out stuff.
    0:07:48 Well, let’s say some combination of just like breadth of knowledge and capability.
    0:07:52 Yeah.
    0:07:53 Like I think some of them maybe are, yeah.
    0:07:55 Right.
    0:07:56 But then also just combined with like sophistication of the answers, you know, sophistication of
    0:07:59 the output, the quality of the output, sophistication of the output, you know, lack of hallucination,
    0:08:03 factual grounding.
    0:08:04 Well, that I think is for sure going to get 100x better.
    0:08:08 Like that.
    0:08:09 Yeah.
    0:08:10 I mean, they’re on a path for that.
    0:08:11 The things that argue against that, right, the alignment problem where, okay, yeah, they’re
    0:08:18 getting smarter, but they’re not allowed to say what they know.
    0:08:21 And then that alignment also kind of makes them dumber in other ways.
    0:08:25 And so you do have that thing.
    0:08:27 The other kind of question that’s come up lately, which is kind of do we need a breakthrough
    0:08:32 to go from what we have now, which I would categorize as artificial human intelligence
    0:08:40 as opposed to artificial general intelligence, meaning it’s kind of the artificial version
    0:08:45 of us.
    0:08:46 We’ve structured the world in a certain way using our language and our ideas and our
    0:08:50 stuff.
    0:08:52 And it’s learned that very well, amazing.
    0:08:55 And it can do kind of a lot of the stuff that we can do, but are we then the asymptote?
    0:09:02 Or do you need a breakthrough to get to some kind of higher intelligence, more general intelligence.
    0:09:08 And I think if we’re the asymptote, then in some ways it won’t get 100x better because
    0:09:15 it’s already like pretty good relative to us.
    0:09:18 But yeah, like it’ll know more things, it’ll hallucinate less on all those dimensions,
    0:09:22 it’ll be 100x better.
    0:09:24 There’s this graph floating around.
    0:09:26 I forget exactly what the axes are, but it basically shows the improvement across the
    0:09:29 different models.
    0:09:30 To your point, it shows an asymptote against the current tests that people are using that’s
    0:09:33 sort of like at or slightly above human levels, which is what you would think if you’re being
    0:09:37 trained on an entirely human data.
    0:09:39 Now, the counter argument on that is are the tests just too simple, right?
    0:09:42 It’s a little bit like the question people have about the SAT, which is if you have a lot
    0:09:45 of people getting 800s on both math and verbal on the SAT, is the scale too constrained, do
    0:09:49 you need a test that can actually test for Einstein?
    0:09:52 Right.
    0:09:53 It’s memorized the tests that we have and it’s great.
    0:09:57 You can imagine SAT that really can detect gradations of people who have ultra-high IQs,
    0:10:02 who are ultra-good at math or something.
    0:10:03 You can imagine tests for AI, you can imagine tests that test for reasoning above human
    0:10:07 levels, one assumes.
    0:10:08 Yeah, well, maybe the AI needs to write the test.
    0:10:11 Yeah, and then there’s a related question that comes up a lot, it’s an argument we’ve
    0:10:15 been having internally, and it’s also where I’ll start to make some more provocative
    0:10:18 and probably more bullish or, as you would put it, sort of science-fictiony predictions
    0:10:21 on some of this stuff.
    0:10:22 There’s this question that comes up, which is, okay, you take an LLM, you train it on
    0:10:25 the internet.
    0:10:26 What is the internet data?
    0:10:27 What is the internet data corpus?
    0:10:28 It’s an average of everything, right?
    0:10:29 It’s a representation of sort of human activity.
    0:10:32 A representation of human activity is going to have, because of the sort of distribution
    0:10:34 of intelligence in the population, most of its mass somewhere in the middle.
    0:10:37 And so the data set on average sort of represents the average human.
    0:10:40 You’re teaching it to be very average, yeah.
    0:10:42 Yeah, you’re teaching it to be very average.
    0:10:43 It’s just because most of the content created on the internet is created by average people.
    0:10:46 And so kind of the content on average as a whole on average is average.
    0:10:51 And so therefore, the answers are average, right?
    0:10:53 You’re going to get back an answer that sort of represents the kind of thing that an average,
    0:10:56 100-IQ person would say, you know, kind of by definition, the average human is 100 IQ, IQ is indexed
    0:10:59 to 100.
    0:11:00 That’s the center of the bell curve.
    0:11:01 And so by definition, you’re kind of getting back the average.
    0:11:03 I actually argue like that may be the case for the default prompt today.
    0:11:06 Like you just asked the thing, does the earth revolve around the sun or something?
    0:11:09 You get like the average answer to that and maybe that’s fine.
    0:11:12 This gets to the point as well.
    0:11:13 Okay, the average data might be of an average person, but the data set also contains all
    0:11:17 of the things written and thought by all the really smart people.
    0:11:20 All that stuff is in there, right?
    0:11:21 And all the current people who are like that, their stuff is in there.
    0:11:24 And so then it’s sort of like a prompting question, which is like, how do you prompt
    0:11:26 it in order to get basically, in order to basically navigate to a different part of
    0:11:29 what they call the latent space, to navigate to a different part of the data set that basically
    0:11:33 is like the super genius part.
    0:11:35 And you know, the way these things work is if you craft the prompt in a different way,
    0:11:37 it actually leads it down a different path inside the data set, gives you a different
    0:11:40 kind of answer.
    0:11:41 And here’s another example of this.
    0:11:42 If you ask it write code to do X, write code to sort a list, you know, whatever, render
    0:11:46 an image, it will give you average code to do that.
    0:11:48 If you say write me secure code to do that, it will actually write better code with fewer
    0:11:53 security holes, which is very interesting, right?
    0:11:55 Because it’s accessing a different part of the training data, which is secure code.
    0:11:58 Right.
    0:11:59 And if you ask, you know, write this image generation thing the way John Carmack would
    0:12:01 write it, you get a much better result because it’s tapping into the part of the latent space
    0:12:04 represented by John Carmack’s code, who’s the best graphics programmer in the world.
    0:12:08 And so you can imagine prompting crafts in many different domains such that you’re kind
    0:12:11 of unlocking the latent super genius, even if that’s not the default answer.
    0:12:15 Yeah.
    0:12:16 I think that’s correct.
    0:12:18 I think there’s still a potential limit to its smartness in that.
0:12:18 So we had this conversation in the firm the other day: there's the world,
0:12:28 which is very complex.
    0:12:30 And intelligence kind of is, you know, how well can you understand, describe, represent
    0:12:35 the world?
    0:12:36 But our current iteration of artificial intelligence consists of human structuring the world and
    0:12:45 then feeding that structure that we’ve come up with into the AI.
    0:12:50 And so the AI kind of is good at predicting how humans have structured the world as opposed
    0:12:56 to how the world actually is, which is, you know, something more probably complicated,
    0:13:01 maybe the irreducible or what have you.
0:13:05 So do we just get to a limit where, like, it can be really smart, but its ceiling is going
0:13:10 to be the smartest humans, as opposed to smarter than the smartest humans?
    0:13:14 And then kind of related, is it going to be able to figure out brand new things, you know,
    0:13:21 new laws of physics and so forth?
    0:13:22 Now, of course, there are like one in three billion humans that can do that or whatever.
    0:13:28 That’s a very rare kind of intelligence.
0:13:30 So AIs are still extremely useful, but they play a different role if they're
0:13:37 kind of artificial humans than if they're, like, artificial, you know, super-duper mega-humans.
0:13:45 So let me make the sort of extreme bull case for the 100x, because okay, so the cynic would
    0:13:50 say that Sam Altman would be saying they’re going to get 100 times better precisely if
    0:13:53 they’re not going to.
    0:13:55 Yeah, yeah, yeah, yeah, yeah.
    0:13:57 Right?
    0:13:58 Because he’d be saying that basically in order to scare people into not competing.
0:14:01 Well, I think that whether or not they are going to get 100 times better, Sam would
0:14:06 be very likely to say that. That's just Sam.
    0:14:08 For those of you who don’t know him, he’s a very smart guy, but for sure, he’s a competitive
    0:14:13 genius.
    0:14:14 There’s no question about that.
    0:14:15 So you have to take that into account.
    0:14:17 Right.
    0:14:18 So if they weren’t going to get a lot better, he would say that.
    0:14:20 But of course, if they were going to get a lot better to your point, he would also say
    0:14:22 that.
    0:14:23 Yes.
    0:14:24 Why not, right?
    0:14:25 And so let me make the bull case that they are going to get 100 times better or maybe
0:14:28 even, you know, on an upward curve for a long time.
    0:14:31 And there’s like enormous controversy, I think, on every one of the things I’m about to say,
0:14:34 but you can find very smart people in the space who believe basically everything I'm
    0:14:38 about to say.
    0:14:39 So one is there is generalized learning happening inside the neural networks.
    0:14:42 And we know that because we now have introspection techniques where you can actually go inside
    0:14:46 and look inside the neural networks to look at the neural circuitry that is being evolved
    0:14:49 as part of the training process.
    0:14:50 And you know, these things are evolving, you know, general computation functions.
    0:14:54 There was a case recently where somebody trained one of these on a chess database and, you
    0:14:57 know, just by training lots of chess games, it actually imputed a world model of a chess
    0:15:00 board, you know, inside the neural network and, you know, that was able to do original
    0:15:04 moves.
    0:15:05 And so the neural network training process does seem to work.
    0:15:06 And then specifically not only that, but, you know, Meta and others recently have been
    0:15:10 talking about how so-called overtraining actually works, which is basically continuing to train
    0:15:15 the same model against the same data for longer, you know, putting more and more compute cycles
    0:15:18 against it.
    0:15:19 You know, I’ve talked to some very smart people in the fields, including there, who basically
    0:15:22 think that actually that works quite well.
0:15:24 The diminishing returns people were worried about with more training didn't show up.
0:15:26 And they proved it in the new Llama release, right?
    0:15:29 That’s a primary technique they use.
    0:15:31 Yeah, exactly.
    0:15:32 Like one guy in the space basically told me, basically, he’s like, yeah, we don’t necessarily
    0:15:35 need more data at this point to make these things better.
    0:15:37 We maybe just need more compute cycles.
    0:15:39 We just train it a hundred times more and it may just get actually a lot better.
0:15:41 So on the data labeling, it turns out that supervised learning ends up being a huge boost
0:15:47 to these things.
    0:15:48 Yeah.
    0:15:49 So we’ve got that.
    0:15:50 We’ve got all of the kind of, you know, let’s say rumors and reports of various kinds of self-improvement
    0:15:54 loops, you know, that kind of underway.
    0:15:56 And most of the sort of super advanced practitioners in the field think that there’s now some form
    0:15:59 of self-improvement loop that works, which basically is, you basically get an AI to do
    0:16:03 what’s called chain of thoughts.
    0:16:04 You get it to basically go step by step to solve a problem.
    0:16:06 You get it to the point where it knows how to do that.
    0:16:08 And then you basically retrain the AI on the answers.
    0:16:10 And so you’re kind of basically doing a sort of a forklift upgrade across cycles of the
    0:16:14 reasoning capability.
    0:16:15 And so a lot of the experts think that sort of thing is starting to work now.
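The loop just described can be sketched in a few lines. Everything here is a stand-in for illustration (`model`, `is_correct`, `retrain` are assumed callables, not any lab's real pipeline); real systems filter for verified answers before folding them back into training.

```python
# Sketch of the chain-of-thought self-improvement loop described above.

def solve_with_cot(model, problem):
    # Elicit step-by-step reasoning rather than a bare answer.
    return model(f"Think step by step, then answer: {problem}")

def self_improvement_round(model, problems, is_correct, retrain):
    traces = [(p, solve_with_cot(model, p)) for p in problems]
    verified = [(p, t) for (p, t) in traces if is_correct(p, t)]
    # "Forklift upgrade": retrain on the model's own verified reasoning.
    return retrain(model, verified)

# Toy demo: the "model" echoes its prompt, the checker accepts traces
# containing a "2", and "retraining" just counts what it was given.
new_model = self_improvement_round(
    model=lambda prompt: prompt.upper(),
    problems=["1+1=?", "2+2=?"],
    is_correct=lambda p, t: "2" in t,
    retrain=lambda m, data: len(data),
)
print(new_model)  # number of verified traces retained for retraining
```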
    0:16:18 And then there’s still a raging debate about synthetic data, but there’s quite a few people
    0:16:21 who are actually quite bullish on that.
    0:16:23 Yeah.
    0:16:24 And then there’s even this trade-off.
    0:16:25 There’s this kind of dynamic where like LLMs might be okay at writing code, but they might
    0:16:29 be really good at validating code.
    0:16:30 You know, they might actually be better at validating code than they are at writing it.
0:16:33 That would be a big help.
    0:16:34 Yeah.
0:16:35 Well, but that also means, like, AIs are maybe able to self-improve.
    0:16:36 They can validate their own code.
    0:16:37 Yeah.
    0:16:38 Yeah.
    0:16:39 They can validate their own code.
0:16:40 And this anthropomorphic bias is very deceptive with these things, because
0:16:43 you think of the model as an it, and so it’s like, how could you have an it that’s better
0:16:46 at validating code than writing code? But it’s not an it.
    0:16:48 What it is is it’s this giant latent space, it’s this giant neural network.
    0:16:51 And the theory would be there are totally different parts of the neural network for
    0:16:54 writing code and validating code.
    0:16:56 And there’s no consistency requirement whatsoever that the network be equally good at both of
    0:16:59 those things.
    0:17:00 And so if it’s better at one of those things, right, so then the thing that it’s good at
    0:17:04 might be able to make the thing that it’s bad at better and better.
0:17:06 Right.
0:17:11 Sure.
    0:17:14 Sort of a self-improvement thing.
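The asymmetry just described, stronger at validating than at writing, is the intuition behind best-of-n sampling. A minimal sketch, where `generate` and `validate` are stand-ins (assumptions) for two calls into the same underlying model:

```python
# Minimal best-of-n sketch: sample several candidates from the "writer,"
# keep the one the (presumed stronger) "validator" scores highest.

def best_of_n(generate, validate, prompt, n=4):
    candidates = [generate(prompt) for _ in range(n)]
    scored = [(validate(prompt, c), c) for c in candidates]
    return max(scored, key=lambda pair: pair[0])[1]

# Toy demo with deterministic stand-ins: "generate" cycles through drafts,
# and "validate" prefers the shortest one.
drafts = iter(["x = 1+1+0", "x = 2", "x = 1 + 1"])
pick = best_of_n(lambda p: next(drafts), lambda p, c: -len(c), "compute 2", n=3)
print(pick)  # the validator's favorite draft: "x = 2"
```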
    0:17:15 And so then on top of that, there’s all the other things coming, right?
0:17:16 Which is all these practical things: there’s an enormous chip constraint
    0:17:17 right now.
0:17:18 And the AI that anybody uses today, its capabilities are basically being gated by the availability
    0:17:22 of chips, but like that will resolve over time.
    0:17:24 You know, there’s also your point on like data labeling, there is a lot of data in these
    0:17:27 things now, but there is a lot more data out in the world.
    0:17:30 And there’s, you know, at least in theory, some of the leading AI companies are actually
    0:17:32 paying to generate new data.
    0:17:34 And by the way, even like the open source data sets are getting much better.
    0:17:36 And so there’s a lot of like data improvements that are coming.
    0:17:39 And then, you know, there’s just the amount of money pouring into the space to be able
    0:17:41 to underwrite all this.
    0:17:42 And then by the way, there’s also just the systems engineering work that’s happening,
    0:17:45 right?
0:17:46 Which is a lot of the current systems,
0:17:47 you know, were basically built by scientists.
    0:17:48 And now they’re really world-class engineers are showing up and tuning them up and getting
    0:17:51 them to work better.
0:17:52 And you know, maybe that’s not a...
    0:17:55 Which makes training, by the way, way more efficient as well, not just inference, but
    0:18:00 also training.
    0:18:01 Yeah.
    0:18:02 Exactly.
0:18:03 And then even, you know, another improvement area is basically Microsoft released their
0:18:05 Phi small language model yesterday.
    0:18:06 And apparently it’s competitive.
    0:18:08 It’s a very small model, competitive with much larger models.
    0:18:10 And the big thing they say that they did was they basically optimized the training set.
    0:18:14 So they basically de-duplicated the training set.
    0:18:16 They took out all the copies and they really optimized on a small amount of training data,
    0:18:19 on a small amount of high quality training data, as opposed to the larger amounts of
    0:18:22 low quality data that most people train on.
0:18:23 You add all these up and you’ve got eight or ten different combinations of sort of practical
    0:18:27 and theoretical improvement vectors that are all in play.
    0:18:31 And it’s hard for me to imagine that some combination of those doesn’t lead to like
    0:18:33 really dramatic improvement from here.
    0:18:35 I definitely agree.
    0:18:36 I think that’s for sure going to happen, right?
0:18:38 Like, going back to Sam’s proposition, I think if you were a startup and you were
    0:18:43 like, okay, in two years I can get as good as GPT-4, you shouldn’t do that.
    0:18:48 Right.
    0:18:49 That would be a bad mistake.
    0:18:51 Right.
    0:18:52 Right.
    0:18:53 Well, this also goes to, you know, a lot of entrepreneurs are afraid of, well, I’ll give
    0:18:55 you an example.
    0:18:56 So a lot of entrepreneurs, here’s this thing they’re trying to figure out, which is, okay,
    0:18:58 I really think, I know how to build a SaaS app that harnesses an LLM to do really good
    0:19:02 marketing collateral.
    0:19:03 Let’s just make it very similar.
    0:19:04 A very similar thing.
    0:19:05 Yeah.
    0:19:06 And so I build a whole system for that.
    0:19:07 Will it just turn out to be that the big models in six months will be even better in
    0:19:11 making marketing collateral just from a simple prompt, such that my apparently sophisticated
    0:19:16 system is just irrelevant because the big model just does it?
    0:19:18 Yeah.
    0:19:19 Yeah.
    0:19:20 Let’s talk about that.
    0:19:21 Like apps, you know, another way you can think about it is that the criticism of a lot
0:19:24 of current AI app companies is they’re, quote unquote, you know, GPT wrappers, they’re sort
    0:19:28 of thin layers of wrapper around the core model, which means the core model could commoditize
    0:19:31 them or displace them.
    0:19:32 But the counterargument, of course, is it’s a little bit like calling all, you know, old
    0:19:36 software apps, you know, database wrappers, you know, wrappers around a database.
0:19:40 It turns out, like, actually wrappers around a database is, like, most modern software, and
0:19:43 a lot of that actually turned out to be really valuable.
    0:19:45 It turns out there’s a lot of things to build around the core engine.
    0:19:47 So yeah.
    0:19:48 So Ben, how do we think about that when we run into companies thinking about building
    0:19:50 apps?
    0:19:51 Yeah.
    0:19:52 You know, it’s a very tricky question because there’s also this correctness gap, right?
    0:19:56 So you know, why do we have co-pilots?
    0:20:00 Where are the pilots?
    0:20:01 Right?
0:20:02 Where are the AI pilots?
0:20:03 There are no AI pilots.
0:20:04 There are only AI co-pilots.
    0:20:05 There’s a human in the loop on absolutely everything.
    0:20:09 And that really kind of comes down to this, you know, you can’t trust the AI to be correct
    0:20:16 in drawing a picture or writing a program or, you know, even like writing a court brief
    0:20:24 without making up citations, you know, all these things kind of require a human and
0:20:30 it kind of turns out to be fairly dangerous not to have one.
    0:20:33 And then I think that so what’s happening a lot with the application layer is people
    0:20:37 saying, well, to make it really useful, I need to turn this co-pilot into a pilot.
    0:20:43 And can I do that?
    0:20:44 And so that’s an interesting and hard problem.
    0:20:48 And then there’s a question of, is that better done at the model level or at some layer on
    0:20:53 top that, you know, kind of teases the correct answer out of the model, you know, by doing
    0:20:59 things like using code validation or what have you?
    0:21:01 Or is that just something that the models will be able to do?
    0:21:04 I think that’s one open question.
0:21:06 And then, you know, as you get into kind of domains and, you know, process-heavy
0:21:11 things, I think there’s a different dimension than what the models are good at, which is
0:21:16 what is the process flow. To go back to the database kind
    0:21:23 of analogy, there is like the part of the task in a law firm that’s writing the brief,
    0:21:31 but there’s 50 other tasks and things that have to be integrated into the way a company
    0:21:38 works, like the process flow, the orchestration of it.
    0:21:42 And maybe there are, you know, a lot of these things, like if you’re doing video production,
    0:21:46 there’s many tools or music, even, right, like, okay, who’s going to write the lyrics,
    0:21:51 which AI, I’ll write the lyrics and which AI, I’ll figure out the music, and then like,
    0:21:56 how does that all come together and how do we integrate it and so forth.
    0:22:00 And those things tend to, you know, just require a real understanding of the end customer and
    0:22:08 so forth in a way, and that’s typically been how like applications have been different
    0:22:13 than platforms in the past is like, there’s real knowledge about how the customer using
0:22:19 it wants to function that doesn’t have anything to do with the kind of intelligence, or is just
    0:22:26 different than what the platform is designed to do.
    0:22:30 And to get that out of the platform for a kind of company or a person turns out to be
    0:22:35 really, really hard.
    0:22:36 And so those things, I think, are likely to work, you know, especially if the process
    0:22:41 is very complex.
0:22:42 And it’s funny: as a firm, you know, we’re a little more hardcore technology
0:22:47 oriented, and we’ve always struggled with those, you know, in terms of, oh, this is
0:22:52 like some process application for, like, plumbers to figure out this, and we’re like, well,
0:22:59 where’s the technology?
    0:23:01 But you know, a lot of it is how do you encode, you know, some level of domain expertise and
    0:23:07 kind of how things work in the actual world back into the software.
0:23:13 I’ve often told founders that you can think about this in terms of price,
    0:23:16 you can kind of work backwards from pricing a little bit, which is to say sort of business
    0:23:19 value and what you can charge for, which is, you know, the natural thing for any technologists
    0:23:23 to do is to kind of say, I have this new technological capability, and I’m going to sell it to people
    0:23:26 and like, what am I going to charge for it is going to be somewhere between, you know,
    0:23:29 my cost of providing it and then, you know, whatever markup I think I can justify, you
    0:23:33 know, and if I have a monopoly providing it, maybe the markup is infinite.
    0:23:36 But, you know, it’s kind of this technology forward, you know, kind of supplier supply
    0:23:40 forward, you know, pricing model, there’s a completely different pricing model for kind
    0:23:44 of business value backwards, and sort of, you know, so-called value pricing, value-based
    0:23:49 pricing.
    0:23:50 And that’s, you know, to your point, that’s basically a pricing model that says, okay,
    0:23:53 what’s the business value to the customer of the thing?
    0:23:56 And if the business value is, you know, a million dollars, then can I charge 10% of
    0:24:01 that and get $100,000, right, or whatever?
0:24:04 And then, you know, why does it cost $100,000 as compared to $5,000? Because, well, because
    0:24:09 to the customer, it’s worth a million dollars, and so they’ll pay 10% for it.
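The arithmetic is easy to make concrete. A back-of-envelope sketch contrasting the two pricing models; the dollar figures are the conversation's illustrative numbers, not real data.

```python
# Cost-plus ("technology forward") vs value-based ("business value
# backwards") pricing, as discussed above.

def cost_plus_price(unit_cost, markup):
    # Price = your cost plus whatever markup you think you can justify.
    return unit_cost * (1 + markup)

def value_based_price(customer_value, capture_rate):
    # Price = a share of the business value delivered to the customer.
    return customer_value * capture_rate

print(cost_plus_price(1_000, 4.0))         # $1k cost, 400% markup -> 5000.0
print(value_based_price(1_000_000, 0.10))  # 10% of $1M of value -> 100000.0
```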
    0:24:12 Yeah, actually, so a great example of that, like, we’ve got a company in our portfolio,
    0:24:19 Crest AI that does things like debt collection, okay, so if I can collect way more debt with
    0:24:28 way fewer people with my, you know, it’s a co-pilot type solution, then what’s that
    0:24:36 worth?
0:24:37 Well, it’s worth a heck of a lot more than just buying an OpenAI license because an
0:24:43 OpenAI license is not going to easily collect debts, or kind of enable your debt collectors
    0:24:50 to be massively more efficient, or that kind of thing, so it’s bridging that gap between
    0:24:56 the value.
    0:24:57 And I think you had a really important point, the test for whether your idea is good is how
    0:25:00 much can you charge for it?
    0:25:02 Can you charge the value?
    0:25:04 Or are you just charging the amount of work it’s going to take the customer to put their
0:25:10 own wrapper on top of OpenAI? Like, that’s the real test of, like, how deep and
0:25:17 how important is what you’ve done.
    0:25:19 Yeah, and so to your point on, like, the kinds of businesses that technology investors have
    0:25:24 had a hard time with, you know, kind of thinking about, you know, maybe accurately is sort of,
    0:25:28 it’s the company that is, it’s a vendor that has built something where it is a specific
    0:25:32 solution to a business problem, where it turns out the business problem is very valuable
    0:25:36 to the customer.
0:25:37 And so therefore they will pay a percentage of the value provided back in the form of the
0:25:43 price for the software, and that actually turns out you can have businesses that are
    0:25:47 not very technologically differentiated that are actually extremely lucrative.
    0:25:52 And then because that business is so lucrative, they can actually afford to go think very
    0:25:56 deeply about how technology integrates into the business, what else they can do.
    0:26:00 You know, this is like the story of a Salesforce.com, for example, right?
0:26:03 And by the way, there’s kind of a chance, a theory, that the models are all getting really
    0:26:09 good.
0:26:10 There are open source models that are, like, awesome, you know, Llama, Mistral,
    0:26:16 like these are great models.
0:26:18 And so the actual layer where the value is going to accrue is going to be, like, tools,
    0:26:24 orchestration, that kind of thing, because you can just plug in whatever the best model
    0:26:28 is at the time, whereas the models are going to be competing, you know, in a death battle
    0:26:33 with each other and, you know, be commoditized down to the, you know, the cheapest one wins
    0:26:39 and that kind of thing.
0:26:40 So, you know, you could argue that the best thing to do is to kind of connect
    0:26:47 the power to the people.
    0:26:49 Right.
    0:26:50 Right.
    0:26:51 So that actually takes us to the next question, and this is a two-in-one question.
    0:26:54 So Michael asks, and these are, and I’ll say these are diametrically opposed, which
    0:26:57 is why I paired them.
    0:26:58 So Michael asks, why are VCs making huge investments in generative AI startups when it’s clear
    0:27:03 these startups won’t be profitable anytime soon, which was a loaded, loaded question,
    0:27:07 but we’ll take it.
    0:27:08 And then Kaiser asks, if AI deflates the cost of building a startup, how will the structure
    0:27:12 of tech investment change?
    0:27:14 And of course, Ben, this goes to exactly what you just said.
    0:27:16 So it’s basically the questions are diametrically opposed, because if you squint out of your
    0:27:20 left eye, right, what you see is basically the amount of money being invested in the
    0:27:23 foundation model companies kind of going up to the right at a furious pace, you know,
    0:27:26 these companies are raising hundreds of millions, billions, tens of billions of dollars.
    0:27:29 And it’s just like, oh my God, look at these sort of capital, you know, sort of, I don’t
    0:27:33 know, infernos, you know, that hopefully will result in value at the end of the process.
    0:27:37 But my God, look at how much money is being invested in these things.
    0:27:39 If you squint through your right eye, you know, you think, wow, that now all of a sudden it’s
    0:27:43 like much easier to build software.
    0:27:45 It’s much easier to have a software company.
    0:27:46 It’s much easier to like have a small number of programmers writing complex software because
    0:27:49 they’ve got all these AI co-pilots and all these automated software development capabilities
    0:27:53 that are coming online.
    0:27:54 Yeah.
    0:27:55 So on the other side, the cost of building an AI like application startup might, you
    0:27:59 know, crash.
    0:28:00 And it might just be that like the, you know, the Salesforce, the AI Salesforce.com might
    0:28:04 cost, you know, a tenth or a hundredth or a thousandth of the amount of money that it
    0:28:07 took to build the, you know, the old database driven Salesforce.com.
0:28:10 And so yeah, so what do we think of the dichotomy, which is you can actually
0:28:14 look out of either eye and see costs either going to the moon, like for startup funding,
0:28:19 or costs actually going to zero.
    0:28:21 Yeah.
    0:28:22 Well, like so it is interesting.
    0:28:24 I mean, we actually have companies in both camps, right?
    0:28:27 Like I think probably the companies that have gotten to profitability the fastest, maybe
    0:28:33 in the history of the firm have been AI companies or been, you know, AI companies in the portfolio
    0:28:37 where the revenue grows so fast that it actually kind of runs out ahead of the cost.
    0:28:44 And then there are like, you know, people who are in the foundation model race who are
    0:28:49 raising hundreds of millions, you know, even billions of dollars to kind of keep pace and
    0:28:54 so forth.
    0:28:55 They also are kind of generating revenue at a fast rate.
    0:29:00 The headcount in all of them is small.
0:29:02 So I would say, you know, where AI money goes, and even, you know, like if you look at
0:29:09 OpenAI, which is the big spender in startup world, which, you know, we are also investors in,
0:29:16 it is, you know, headcount wise, pretty small against their revenue.
    0:29:20 Like it is not a big company headcount.
    0:29:22 Like if you look at the revenue level and how fast they’ve gotten there, it’s pretty
    0:29:27 small.
    0:29:28 Now, the total expenses are ginormous, but they’re going into the model creation.
    0:29:33 So it’s an interesting thing.
    0:29:35 I mean, I’m not entirely sure how to think about it, but I think like if you’re not building
    0:29:40 a foundation model, it will make you more efficient and probably gets profitability
    0:29:45 quicker.
    0:29:46 Right.
0:29:47 So the counter, and this is a very bullish counterargument, the counter
0:29:51 argument to that would be basically that falling costs for, like, building new software
0:29:55 companies are a mirage.
    0:29:57 And the reason for that is this thing in economics called the Jevons paradox, which I’m going
    0:30:01 to read from Wikipedia.
    0:30:02 So the Jevons paradox occurs when technological progress increases the efficiency with which
    0:30:07 a resource is used, right, reducing the amount of that resource necessary for any one use.
    0:30:12 But the falling cost induces increases in demand, right, elasticity, enough that the
    0:30:17 resource use overall is increased rather than reduced.
    0:30:20 Yeah.
    0:30:21 That’s certainly possible.
    0:30:23 Right.
0:30:24 And so you see versions of this, for example: you build a new freeway
    0:30:27 and it actually makes traffic jams worse, right?
    0:30:30 Because basically what happens is, oh, it’s great.
    0:30:31 Now there’s more roads.
    0:30:32 Now we can have more people live here.
    0:30:34 We can have more people that, you know, we can make these companies bigger and now there’s
    0:30:36 more traffic than ever.
    0:30:37 And now the traffic is even worse.
0:30:39 Or you saw the classic example during the Industrial Revolution: coal consumption. As
0:30:43 the price of coal dropped, people used so much more coal that the overall consumption
0:30:48 actually increased.
0:30:49 And people were getting a lot more power, but the result was the use of a lot more coal.
0:30:53 That’s the paradox.
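The coal story can be put in numbers with a constant-elasticity demand curve: when price elasticity of demand exceeds 1, a cheaper resource gets used more in total, not less. All numbers here are made up for illustration.

```python
# Toy numeric model of the Jevons paradox.

def total_use(price, elasticity=1.5, k=100.0):
    # Constant-elasticity demand: quantity = k * price^(-elasticity).
    # With elasticity > 1, a price drop raises total consumption.
    return k * price ** (-elasticity)

before = total_use(price=2.0)  # resource at its old effective price
after = total_use(price=1.0)   # efficiency gains halve the effective price
print(round(before, 1), round(after, 1), after > before)
```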
    0:30:54 And so the paradox here would be, yes, the cost of developing any given piece of software
    0:30:59 falls, but the reaction to that is a massive surge of demand for software capabilities.
    0:31:05 And so the result of that actually is, although it even, it looks like starting software companies,
    0:31:08 the price is going to fall.
    0:31:09 Actually, what’s going to happen is it’s going to rise for the high quality reason that you’re
    0:31:12 going to be able to do so much more, right, with software.
    0:31:16 The products are going to be so much better and the roadmap is going to be so amazing
    0:31:19 of the things you can do.
    0:31:20 And the customers are going to be so happy with it that they’re going to want more and
    0:31:22 more and more.
0:31:24 And by the way, another example of the Jevons paradox playing out in
0:31:27 a related industry is Hollywood: you know, CGI in theory should have reduced the
    0:31:31 price of making movies.
    0:31:33 In reality, it’s increased it because audience expectations went up.
    0:31:36 And now you go to a Hollywood movie and it’s wall-to-wall CGI.
    0:31:39 And so, you know, movies are more expensive to make than ever.
0:31:41 And so, you know, the result in Hollywood is at least much more,
    0:31:45 let’s say visually elaborate, you know, movies, whether they’re better or not is another question,
    0:31:48 but like much more visually elaborate, compelling, kind of visually stunning movies through CGI.
    0:31:52 The version here would be much better software, like radically better software to the end user,
    0:31:57 which causes end users to want a lot more software, which causes actually the price of development
    0:32:01 to rise.
    0:32:02 You know, if you just think about like a simple case like travel, like, okay, booking a trip
    0:32:07 through Expedia is like complicated, you’re likely to get it wrong, you’re clicking on
0:32:12 menus and this and that and the other. And like, you know, an AI version of that would
    0:32:17 be like, you know, send me to Paris, put me in a hotel I love at the best price, you know,
    0:32:22 send me on the best possible kind of airline, an airline ticket and then, you know, like
    0:32:29 make it like really special for me and like maybe you need a human to go, okay, like we’re
0:32:35 going to, you know, or maybe the AI gets more sophisticated and says, okay, well, we know
    0:32:40 the person loves chocolate and we’re going to like, you know, FedEx in the best chocolate
    0:32:45 in the world from Switzerland into this hotel in Paris and this and that and the other.
0:32:50 And so, like, the quality could get to levels that we can’t even imagine
0:32:56 today just because, you know, the software tools aren’t yet what they’re going to
0:33:00 be.
    0:33:01 So that’s right.
    0:33:02 Yeah, I kind of buy that actually.
0:33:04 I think I bought the argument. Or how about: I’m going
0:33:10 to land in, whatever, Boston at six o’clock, I want to have dinner at seven with a table
0:33:13 full of, like, super interesting people.
0:33:15 Yeah, right, right.
0:33:21 No, no travel agent would do that for you today, nor would you want them to.
    0:33:24 No.
    0:33:25 No.
    0:33:26 Right.
0:33:27 Well, and then you think about it, it’s got to be integrated into my personal AI.
0:33:33 And this is, you know, there’s just, like, an unlimited number of things that you can
0:33:37 do.
    0:33:38 And I think this is one of the kind of things that’s always been underestimated about humans
    0:33:43 is like our ability to come up with new things we need.
    0:33:48 Like that has been unlimited.
    0:33:50 And there’s a very kind of famous case where John Maynard Keynes, the kind of prominent
    0:33:56 economist in the kind of first half of last century, had this thing that he predicted,
0:34:01 which is like, because of automation, nobody would ever work a 40 hour work week,
0:34:08 you know, because once their needs were met, needs being like shelter and food.
    0:34:15 And you know, I don’t even know if transportation was in there.
    0:34:17 Like that was it.
    0:34:18 It was over.
    0:34:19 You would never work past the need for shelter and food.
    0:34:23 Like why would you?
    0:34:24 Like there’s no reason to, but of course needs expanded.
    0:34:27 So then everybody needed a refrigerator, everybody needed not just one car, but a car for everybody
    0:34:32 in the family.
    0:34:33 Everybody needed a television set, everybody needed like glorious vacations, everybody,
    0:34:38 you know.
    0:34:39 So what are we going to need next?
    0:34:41 I’m quite sure that I can’t imagine it, but like somebody’s going to imagine it and it’s
    0:34:46 quickly going to become a need.
    0:34:48 Yeah, that’s right.
0:34:49 By the way, as Keynes famously said, his essay, I think, was “Economic Possibilities for Our Grandchildren,”
    0:34:55 which was basically that.
    0:34:56 Yeah.
    0:34:57 You just articulate it.
    0:34:58 So Karl Marx said another version of that, I just pulled up the quote.
    0:35:00 So when, you know, when the Marxist utopia of socialism is achieved, “society regulates
    0:35:06 the general production.”
    0:35:07 That makes it possible for me to do blah, blah, blah, to hunt in the morning, fish in
    0:35:12 the afternoon, rear cattle in the evening, criticize after dinner.
    0:35:18 What a glorious life.
    0:35:20 What a glorious life.
    0:35:21 Like if I could just list four things that I do not want to do, it’s hunt, fish, rear
    0:35:26 cattle and criticize.
    0:35:27 Yeah.
    0:35:28 Yeah.
    0:35:29 Right.
    0:35:30 And by the way, it says a lot about Marx that those were his four things.
    0:35:32 Well, the criticizing being his favorite thing, I think it’s basically communism in
    0:35:37 a nutshell.
    0:35:38 Yeah.
    0:35:39 Exactly.
    0:35:40 I don’t want to get too political, but yes, yes, 100%.
    0:35:43 And so yeah, what Keynes and
    0:35:46 Marx have in common is just this incredibly constricted view of what people
    0:35:50 want to do.
    0:35:51 And then, correspondingly, the other thing is just, you know, people
    0:35:53 who want to have a mission.
    0:35:55 I mean, probably some people just want to fish and hunt, but a lot
    0:35:58 of people want to have a mission.
    0:35:59 They want to have a cause.
    0:36:00 They want to have a purpose.
    0:36:01 They want to be useful.
    0:36:02 They want to be productive.
    0:36:03 It’s actually a good thing in life.
    0:36:04 It turns out.
    0:36:05 It turns out.
    0:36:06 Yeah.
    0:36:07 In a startling turn of events.
    0:36:09 Okay.
    0:36:10 So yeah.
    0:36:11 So yeah, I’ve long felt, a little bit like the software eats the
    0:36:13 world thing a decade ago,
    0:36:15 that basically demand for software is sort
    0:36:18 of perfectly elastic, possibly to infinity.
    0:36:20 And the theory there basically is if you just continuously bring down the cost of software,
    0:36:24 which has been happening over time, then demand
    0:36:26 basically perfectly correlates upward.
    0:36:29 And the reason is because, you know, kind of as we’ve been discussing, but it’s kind
    0:36:32 of there’s, there’s always something else to do in software.
    0:36:35 There’s always something else to automate.
    0:36:36 There’s always something else to optimize.
    0:36:37 There’s always something else to improve.
    0:36:40 There’s always something to make better.
    0:36:41 And, you know, in the moment with the constraints that you have today, you may not, you know,
    0:36:44 think of what that is, but the minute you don’t have those constraints, you’ll imagine
    0:36:47 what it is.
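One simple way to formalize the elasticity claim above: if total willingness to spend on software stays roughly constant, then the quantity demanded scales inversely with unit cost. A toy sketch, with an invented budget figure (not a real statistic):

```python
# Toy model of highly elastic software demand: quantity demanded scales as
# 1/cost, so usage grows without bound as the unit cost of software falls.
# TOTAL_SPEND is a hypothetical, purely illustrative number.

TOTAL_SPEND = 1_000_000  # hypothetical fixed total spend on software

def quantity_demanded(unit_cost: float) -> float:
    """Unit-elastic demand curve: quantity = budget / cost."""
    return TOTAL_SPEND / unit_cost

for cost in (100.0, 10.0, 1.0, 0.1):
    print(f"unit cost ${cost:>6}: {quantity_demanded(cost):>12,.0f} units demanded")
```

Each 10x drop in cost yields a 10x rise in quantity, which is the "demand perfectly correlates upward" dynamic described above.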
    0:36:48 I’ll just give you an example.
    0:36:49 I mean, I’ll give you an example playing out with AI right now, right?
    0:36:51 And we have, you know, companies that do this.
    0:36:54 There have been companies that have made
    0:36:56 software systems for doing security cameras forever, right?
    0:37:00 And it’s like, for a long time, it was like a big deal to have software that would do
    0:37:03 like, you know, have different security camera feeds and store them on a DVR and be able
    0:37:06 to replay them and have an interface that lets you do that.
    0:37:09 Well, it’s like, you know, AI security cameras, all of a sudden can have like, actual like,
    0:37:13 semantic knowledge of what’s happening in the environment.
    0:37:14 And so they can say, you know, hey, that’s Ben, and then they can say, oh, hey, you know,
    0:37:18 that’s Ben, but he’s carrying a gun.
    0:37:19 Yeah.
    0:37:20 Right.
    0:37:21 Right.
    0:37:22 And by the way, that’s Ben and he’s carrying a gun, but that’s because like he hunts on,
    0:37:24 you know, on Thursdays and Fridays, as compared to that’s Mary and she never carries a gun
    0:37:28 and like, you know, like something is wrong and she’s really mad, right?
    0:37:32 She’s got a, yeah, really steamed expression on her face and we should probably be worried
    0:37:35 about it, right?
    0:37:36 So there’s like an entirely new set of capabilities you can do just as one example for security
    0:37:40 systems that were never possible pre AI and the security system that actually has a semantic
    0:37:44 understanding of the world is obviously much more sophisticated than the one that doesn’t
    0:37:48 and might actually be more expensive to make, right?
    0:37:50 Right.
    0:37:51 Well, and just imagine healthcare, right?
    0:37:53 Like you could wake up every morning and have a complete diagnostic, you know, like
    0:38:01 how am I doing today?
    0:38:02 Like what are all my levels of everything?
    0:38:04 And, you know, how should I interpret them, you know, better than, you know, this is one
    0:38:09 thing where AI is really good is, you know, medical diagnosis because it’s a super high
    0:38:14 dimensional problem.
    0:38:16 But if you can get access to, you know, your continuous glucose reading, you know, maybe
    0:38:21 sequester blood now and again, this and that and the other, yeah, you’ve got an incredible
    0:38:26 kind of view of things and who doesn’t want to be healthier, you know, like now we have
    0:38:32 a scale.
    0:38:33 That’s basically what we do, you know, maybe check your heart rate or something, but like
    0:38:39 pretty primitive stuff compared to where we could go.
    0:38:41 Yeah, that’s right.
    0:38:42 Okay, good.
    0:38:43 All right.
    0:38:44 So let’s go to the next topic.
    0:38:45 So on the topic of data, Major Tom asks, as these AI models allow us to copy existing
    0:38:50 app functionality at minimal cost, proprietary data seems to be the most important moat.
    0:38:55 How do you think that will affect proprietary data value?
    0:38:58 What other moats do you think companies can focus on building in this new environment?
    0:39:01 And then Jeff Weishaupt asks, how should companies protect sensitive data, trade secrets, proprietary
    0:39:06 data, individual privacy, and the brave new world of AI?
    0:39:09 So let me start with a provocative statement, Betsy, if you agree with it, which is, you
    0:39:15 know, you sort of hear a lot, this sort of statement or cliche is like data is the new
    0:39:18 oil.
    0:39:19 And so it’s like, okay, data is the key input to training AI, making all this stuff work.
    0:39:23 And so, you know, therefore, you know, data is basically the new resource.
    0:39:26 It’s the limiting resource.
    0:39:27 It’s the super valuable thing.
    0:39:29 And so, you know, whoever has the best data is going to win, and you see that directly
    0:39:32 in how you train AI’s.
    0:39:33 And then, you know, you also have like a lot of companies, of course, that are now trying
    0:39:36 to figure out what to do with AI.
    0:39:38 And a very common thing you’ll hear from companies is, well, we have proprietary data, right?
    0:39:42 So I’m, you know, I’m a hospital chain or I’m, you know, whatever, any kind of business,
    0:39:46 insurance company or whatever.
    0:39:47 And I’ve got all this proprietary data that I can apply, you know, that I’ll be able to,
    0:39:50 you know, build things with my proprietary data with AI that won’t just, you know, be
    0:39:54 something that anybody will be able to have.
    0:39:56 Let me argue that in almost every case like that,
    0:40:01 it’s not true.
    0:40:02 It’s basically what the Internet kids would call cope.
    0:40:04 It’s simply not true.
    0:40:05 And the reason it’s just not true is because the amount of data available on the Internet
    0:40:10 and just generally in the environment is just a million times greater.
    0:40:16 And so, while it may not, you know, while it may not be true that I have your specific
    0:40:19 medical information, I have so much medical information off the Internet for so many people
    0:40:24 in so many different scenarios that it just swamps the value of quote, your data, you
    0:40:30 know, just, it’s just, it’s just like overwhelming.
    0:40:31 And so your, your, your proprietary data as, you know, company acts will be a little bit
    0:40:35 useful on the margin, but it’s not actually going to move the needle.
    0:40:37 And it’s not really going to be a barrier to entry in most cases.
    0:40:40 And then let me cite as proof for my belief that this is mostly cope:
    0:40:45 there has never been, nor is there now, any sort of rich
    0:40:49 or sophisticated marketplace for data. There’s no
    0:40:54 large marketplace for data.
    0:40:56 In fact, what there are is very small markets for data.
    0:40:59 So there are these businesses called data brokers that will sell you, you know, large
    0:41:01 numbers of like, you know, information about users on the Internet or something.
    0:41:05 And they’re just small businesses, like they’re just not large, it just turns out like information
    0:41:09 on lots of people is just not very valuable.
    0:41:11 And so if the data actually had value, you know, it would have a market price and you
    0:41:15 would see it transacting and you actually very specifically don’t see that, which is
    0:41:19 sort of a, you know, yeah, sort of quantitative proof that the data actually is not nearly
    0:41:23 as valuable as people think it is.
    0:41:25 Where I agree, so I agree that data, in the sense of, here’s a bunch of data and I can sell
    0:41:34 it without doing anything to it, is massively overrated. I definitely agree
    0:41:42 with that.
    0:41:43 And like maybe I can imagine some exceptions, like some, you know, special population genomic
    0:41:49 databases or something that are, that were very hard to acquire, that are useful in some
    0:41:53 way that’s, you know, that’s not just like living on the Internet or something like that.
    0:41:57 I could imagine where that’s super highly structured, very general purpose and not widely available.
    0:42:04 But for most data in companies is not like that.
    0:42:07 And that it tends to not, it’s either widely available or not general purpose.
    0:42:12 It’s kind of specific.
    0:42:14 Having said that, right, like companies have made great use of data, for example, a company
    0:42:20 that you’re familiar with, Meta, uses its data to kind of great ends itself, feeding
    0:42:26 it into its own AI systems, optimizing its products in incredible ways.
    0:42:31 And I think that, you know, us, Andreessen Horowitz, actually, you know, so we just raised
    0:42:35 $7.2 billion and it’s not a huge deal.
    0:42:40 But we took our data and we put it into an AI system and our LPs were able, there’s a
    0:42:47 million questions investors have about everything we’ve done, our track record, every company
    0:42:53 we’ve invested and so forth.
    0:42:55 And for any of those questions, they could just ask the AI. They could wake up at
    0:42:58 three o’clock in the morning, go, “Do I really want to trust these guys?”
    0:43:02 And go in and ask the AI a question and boom, they’d get an answer back instantly.
    0:43:05 They wouldn’t have to wait for us and so forth.
    0:43:07 So we really kind of improved our investor relations product tremendously through use
    0:43:12 of our data.
    0:43:14 And I think that almost every company can improve its competitiveness through use of its own
    0:43:21 data.
    0:43:22 But the idea that it’s collected some data that it can go like sell or that is oil or
    0:43:30 what have you, that’s, yeah, that’s probably not true.
    0:43:36 I would say, and you know, it’s kind of interesting because a lot of the data that you would think
    0:43:41 would be the most valuable would be like your own code base, right?
    0:43:46 Your software that you’ve written, so much of that lives in GitHub.
    0:43:49 Nobody is actually, I don’t know of any company, we work with, you know, whatever a thousand
    0:43:55 software companies and do we know any that’s like building their own programming model
    0:44:00 on their own code?
    0:44:02 Like, or, and would that be a good idea?
    0:44:05 Probably not just because there’s so much code out there that the systems have been
    0:44:09 trained on.
    0:44:10 So like that’s not so much of an advantage.
    0:44:14 So I think it’s a very specific kind of data that would have value.
    0:44:17 Well, let’s make it actionable then.
    0:44:19 If I’m running a big company, like an insurance company or a bank
    0:44:23 or a hospital chain or, you know, a consumer packaged
    0:44:27 goods company, Pepsi or something, how should I validate
    0:44:32 that I actually have a valuable proprietary data asset that I should really be focusing
    0:44:36 on using? Or, in the alternative, should I take all the effort I would spend
    0:44:40 on trying to optimize use of that
    0:44:43 data and spend it entirely trying to build things using internet data instead?
    0:44:47 Yeah, so, so I think, I mean, look, if you’re right, if you’re in the insurance business,
    0:44:53 then all your actuarial data is both interesting and, I don’t know that anybody publishes
    0:45:00 their actuarial data.
    0:45:03 And so like, I’m not sure how you would train the model on stuff off of the internet.
    0:45:08 Yes.
    0:45:09 That’s good.
    0:45:10 Let me, let me, can I challenge that one?
    0:45:11 So that, that would be good.
    0:45:12 That’d be a good thing.
    0:45:13 That’d be a good test case.
    0:45:14 So I’m an insurance company.
    0:45:15 I’ve got records on 10 million people and, you know, the actuarial tables and when they,
    0:45:17 when they get sick and when they die.
    0:45:18 Okay.
    0:45:19 That’s great.
    0:45:20 But like there’s lots and lots of actuarial, general actuarial data on the internet for
    0:45:24 large scale populations, you know, because governments collect the data and they process
    0:45:28 it and they publish reports.
    0:45:29 And there’s lots of, there’s lots of academic studies.
    0:45:32 And so is your large data set giving you any additional actuarial information
    0:45:38 that the much larger data set on the internet isn’t already providing you?
    0:45:41 Like, are your insurance clients actually actuarially any different than just
    0:45:46 everybody?
    0:45:47 I think so.
    0:45:48 Cause on intake on the, you know, when you get insurance, they give you like a blood test.
    0:45:55 They’ve got all these things we know if you’re a smoker and so forth.
    0:45:58 And in the, I think in the general data set, like, yeah, you know who dies, but you don’t
    0:46:02 know what the fuck they did coming in.
    0:46:05 And so what you really are looking for is like, okay, for this profile of person with
    0:46:09 this kind, with these kinds of lab results, how long do they live?
    0:46:13 And that’s, that’s where the value is.
    0:46:15 And I think that, you know, interesting, like, you know, I was thinking about like a company
    0:46:20 like Coinbase where, right, they have incredibly valuable assets in the terms of money.
    0:46:27 They have to stop people from breaking in.
    0:46:29 They’ve done a massive amount of work on that.
    0:46:32 They’ve seen all kinds of break-in types.
    0:46:34 I’m sure they have tons of data on that.
    0:46:36 It’s probably like weirdly specific to people trying to break into crypto exchanges.
    0:46:42 And so, you know, like, I think it could be very useful for them.
    0:46:45 I don’t think they could sell it to anybody, but, you know, I think every company’s got
    0:46:51 data that if, you know, fed into an intelligent system would help their business.
    0:46:57 And I think almost nobody has data that they could just go sell.
    0:47:02 And then there’s this kind of in-between question, which is, what data would you want to let
    0:47:08 Microsoft or Google or OpenAI or anybody get their grubby little fingers on?
    0:47:13 And that I’m not sure.
    0:47:19 That’s a — that I think is the question that enterprises are wrestling with more than — it’s
    0:47:24 not so much should we go like sell our data, but should we train our own model just so
    0:47:29 we can maximize the value?
    0:47:32 Or should we feed it into the big model?
    0:47:35 And if we feed it into the big model, do all of our competitors now have the thing that
    0:47:39 we just did?
    0:47:40 And, you know, or could we trust the big company to not do that to us, which I kind of think
    0:47:47 the answer on trusting the big company not to F with your data is probably I wouldn’t
    0:47:52 do that.
    0:47:55 If your competitiveness depends on that, you probably shouldn’t do that.
    0:47:58 Well, there are at least reports that certain big companies are using all kinds of data
    0:48:02 that they shouldn’t be using to train their models already.
    0:48:04 So.
    0:48:05 Yep.
    0:48:06 I think like I think those reports are very likely true.
    0:48:10 Right.
    0:48:11 Or they’d have open data, right?
    0:48:12 Like, you know, we’ve talked about this before, but the same companies
    0:48:17 that are saying they’re not stealing all the data from people or taking it in an unauthorized
    0:48:22 way, refuse to, say, open their data.
    0:48:26 Like why not tell us where your data came from?
    0:48:28 And then in fact, they’re trying to shut down all openness, no open source, no open weights,
    0:48:32 no open data, no open nothing, and go to the government and try and get to do that.
    0:48:36 You know, if you’re not a thief, then why are you doing that?
    0:48:39 Right.
    0:48:40 Right.
    0:48:41 Right.
    0:48:42 What are you hiding?
    0:48:43 By the way, there’s other twists and turns here.
    0:48:44 The insurance example, I kind of deliberately loaded it because you may know it’s actually
    0:48:48 illegal to use genetic data for insurance purposes, right?
    0:48:51 So there’s this thing called GINA, the Genetic Information Nondiscrimination Act of 2008.
    0:48:58 And it basically bans health insurers in the U.S. from actually using genetic data
    0:49:02 for the purpose of doing, you know, actuarial health assessment, which by
    0:49:06 the way, because now the genomics are getting really good.
    0:49:08 Like that data probably actually is, you know, among the most accurate data you could have
    0:49:12 if you were actually trying to predict, like, when people are going to get sick and die.
    0:49:15 And they’re literally not allowed to use it.
    0:49:18 Yeah, it is.
    0:49:20 I think that this is an interesting, like weird misapplication of good intentions in
    0:49:28 a policy way that’s probably going to kill more people than ever get saved by every kind
    0:49:37 of health FDA, et cetera, policy that we have, which is, you know, in a world of AI, having
    0:49:46 access to data on all humans, why they get sick, what their genetics were, et cetera,
    0:49:51 et cetera, et cetera, is the most, that is, you know, you’re talking about data being
    0:49:55 the new oil, like that is the new oil, that’s the healthcare oil is, you know, if you could
    0:49:59 match those up, then we’d never not know why we’re sick, you know, you could make everybody
    0:50:05 much healthier, all these kinds of things.
    0:50:08 But you know, to kind of stop the insurance company from kind of overcharging people who
    0:50:15 are more likely to die, we’ve kind of locked up all this data, a kind of better idea would
    0:50:23 be to just go, okay, for the people who are likely to, like we subsidize healthcare, like
    0:50:29 massively for individuals anyway, just like differentially subsidize, and, you know, and
    0:50:38 then like you solve the problem and you don’t lock up all the data.
    0:50:41 But yeah, it’s typical of politics and policy, I mean, most of them are like that, I think.
    0:50:47 Yeah.
    0:50:48 Well, there are these interesting questions, like basically, one
    0:50:50 of the questions people have asked about insurance is, if you had perfectly predictive
    0:50:53 information on like individual outcomes, does the whole concept of insurance actually still
    0:50:57 work, right, because the whole theory of insurance is risk pooling, right, it’s precisely the
    0:51:03 fact that you don’t know what’s going to happen in the specific case that means you build
    0:51:06 these statistical models, and then you risk pool, and then you have variable payouts depending
    0:51:10 on exactly what happens.
    0:51:11 But if you literally knew what was going to happen in every case, because for example,
    0:51:15 you have all this predictive genomic data, then all of a sudden it wouldn’t make sense
    0:51:18 to risk pool, because you just say, well, no, this person’s going to cost X, that person’s
    0:51:22 going to cost Y, there’s no.
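The risk-pooling logic here can be made concrete with a toy simulation (all numbers invented for illustration): while individual outcomes are unknown, everyone can pay the population's expected loss; with a perfectly predictive model, each person's actuarially fair premium collapses to their own known cost, and there is nothing left to pool.

```python
import random

random.seed(0)  # deterministic, for the sketch

# Hypothetical population: each person has a 2% chance of a $100,000 loss.
N, P_LOSS, LOSS = 10_000, 0.02, 100_000
losses = [LOSS if random.random() < P_LOSS else 0 for _ in range(N)]

# Without individual prediction, everyone pays the pooled expected loss.
pooled_premium = P_LOSS * LOSS  # $2,000 per person
print(f"pooled premium per person: ${pooled_premium:,.0f}")

# With perfect per-person prediction, the actuarially fair premium is just
# each person's own (known) loss: the unlucky few owe the full $100,000
# and everyone else owes $0, so risk pooling no longer makes sense.
print(f"highest 'fair' premium: ${max(losses):,}")
print(f"lowest  'fair' premium: ${min(losses):,}")
```

The sketch shows why insurance is premised on uncertainty: the $2,000 pooled premium only exists because nobody knows in advance which 2% will incur the loss.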
    0:51:24 Self-insurance already doesn’t make sense in that way, right, like insurance, the idea
    0:51:29 of insurance is kind of like the, it started with crop insurance where like, okay, you
    0:51:35 know, my crop fails, and so we all put money in a pool in case like my crop fails so that,
    0:51:41 you know, we can cover it.
    0:51:43 It’s kind of designed for it to risk pool for a catastrophic unlikely incident.
    0:51:49 Like everybody’s got to go to the doctor all the fucking time.
    0:51:53 And some people get sicker than others in that kind of thing.
    0:51:56 But like, the way our health insurance works is like all medical gets, you know, paid for
    0:52:02 through this insurance system, which is this layer of loss and bureaucracy and giant companies
    0:52:08 and all this stuff when like, if we’re going to pay for people’s healthcare, just pay for
    0:52:14 people’s healthcare.
    0:52:15 Like, what are we doing, right, like, and if you want to disincent people from like
    0:52:20 going for nonsense reasons and just up the copay, like, it’s like, what are we doing?
    0:52:27 Well, then from a justice standpoint, from a fairness standpoint, would it
    0:52:31 make sense for me to pay more for your healthcare
    0:52:35 if I knew that you were going to be more expensive than me? If everybody
    0:52:38 knows what future healthcare cost is per person, because there’s a
    0:52:43 very good predictive model for it, you know, societal willingness to pool in the way
    0:52:46 that we do today might really diminish.
    0:52:47 Yeah, yeah.
    0:52:48 Well, and then like, you could also, if you knew, like, there’s things that you do genetically
    0:52:54 and maybe we give everybody a pass on that, it’s like, you can’t control your genetics,
    0:52:58 but then like, there’s things you do behaviorally that like, dramatically increases your chance
    0:53:02 of getting sick.
    0:53:03 And so maybe, you know, we incentivize people to stay healthy instead of just like paying
    0:53:09 for them not to die.
    0:53:12 There’s a lot of systemic fixes we could do to the healthcare system.
    0:53:17 It couldn’t be designed in a more ridiculous way, I think.
    0:53:20 Well, it couldn’t be designed in a more ridiculous way.
    0:53:22 It’s actually more ridiculous in some other countries, but it’s pretty crazy here.
    0:53:27 Nathan, Nathan Odie asks, what are the strongest common themes between the current state of
    0:53:31 AI and web 1.0?
    0:53:33 And so let me start there.
    0:53:34 Let me give you a theory, Ben, and see what you think.
    0:53:36 So, you know, Ben, you were with
    0:53:39 me at Netscape, and we get this question a lot because of our role early on
    0:53:43 with the internet.
    0:53:44 You know, the internet boom was like a major, major event in technology, and it’s still
    0:53:47 within a lot of, you know, people’s memories.
    0:53:49 And so, you know, the sort of, you know, people like to reason from analogy.
    0:53:53 So it’s like, okay, the AI boom must be like the internet boom.
    0:53:55 Starting an AI company must be like starting an internet company.
    0:53:58 And so, you know, what, what is this like?
    0:54:00 And we actually got a bunch of questions like that, you know, that are kind of an analogy
    0:54:03 questions like that.
    0:54:04 I actually think, you know, and then Ben, you know, you and I were there for the internet
    0:54:06 boom.
    0:54:07 So we, you know, we lived through that and the bust and the boom and the bust.
    0:54:10 So I actually think that the analogy doesn’t really work for the most, it works in certain
    0:54:14 ways, but it doesn’t really work for the most part.
    0:54:16 And the reason is because the internet, the internet was a network, whereas AI is a computer.
    0:54:24 Yep.
    0:54:25 Okay.
    0:54:26 Yeah.
    0:54:27 So, so some people understand what we’re saying.
    0:54:29 So, you know, like the PC boom or even, I would say the microprocessor, like my best
    0:54:35 analogy is to the microprocessor or even to like the original computers, like back to
    0:54:39 the mainframe era.
    0:54:40 And the reason is because, yeah, look, what the internet did was the internet, you know,
    0:54:43 obviously was a network, but the network connected together many existing computers.
    0:54:47 And then of course, people built many other new kinds of computers to connect to the internet.
    0:54:50 But fundamentally, the internet was a network and then, and that’s important because most
    0:54:54 of, most of the sort of industry dynamics, competitive dynamics, startup dynamics around
    0:54:59 the internet had to do with basically building either building networks or building applications
    0:55:03 that run on top of networks.
    0:55:04 And this, you know, the internet generation of startups was very consumed by network
    0:55:07 effects and, you know, all these positive feedback loops that you get when you connect
    0:55:12 a lot of people together.
    0:55:13 And, you know, things like the so-called Metcalfe’s Law, which says the value of
    0:55:17 a network expands as you add more people
    0:55:20 to it.
    0:55:21 And then, you know, there were all these fights, you know, these fights, you know, all the
    0:55:23 social networks or whatever fighting to try to get network effects and try to steal each
    0:55:26 other’s users because of the network effects.
    0:55:29 And so it was kind of, you know, it was dominated by network effects, which is what you expect
    0:55:32 from a network business.
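Metcalfe's Law, mentioned above, is usually stated as network value growing with the square of the number of users, i.e. proportional to the number of possible pairwise connections. A minimal illustrative sketch, where the proportionality constant is an arbitrary assumption:

```python
# Metcalfe's heuristic: network value ~ k * n * (n - 1) / 2, i.e. the number
# of possible pairwise connections among n users, scaled by a constant k.

def metcalfe_value(n_users: int, k: float = 1.0) -> float:
    """Value proportional to the number of possible user-to-user links."""
    return k * n_users * (n_users - 1) / 2

# Doubling the user base roughly quadruples the modeled value:
print(metcalfe_value(1_000))                          # 499500.0
print(metcalfe_value(2_000) / metcalfe_value(1_000))  # ~4.002
```

This quadratic growth is the positive feedback loop internet-era startups fought over; it is a property of networks rather than of computers, which is exactly the distinction being drawn here.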
    0:55:34 AI, like, there are some network effects in AI that we can talk about, but it’s more
    0:55:39 like a microprocessor.
    0:55:40 It’s more like a chip.
    0:55:41 It’s more like a computer in that it’s a system that basically, right, data comes in, data
    0:55:47 gets processed, data comes out, things happen.
    0:55:50 That’s a computer.
    0:55:51 It’s an information processing system.
    0:55:52 It’s a computer.
    0:55:53 It’s a new kind of computer.
    0:55:54 It’s a, you know, we like to say the sort of computers up until now have been what are
    0:55:58 called von Neumann machines, which is to say they’re deterministic computers, which is
    0:56:01 they’re like, you know, hyper literal and they do exactly the same thing every time.
    0:56:05 And if they make a mistake, it’s, it’s the programmer’s fault, but they’re very limited
    0:56:08 in their ability to interact with people and understand the world.
    0:56:11 You know, we think of AI and large language models as a new kind of computer, a probabilistic
    0:56:15 computer, a neural network based computer that, you know, by the way, is not very accurate
    0:56:19 and is, you know, doesn’t give you the same result every time.
    0:56:22 And in fact, might actually argue with you and tell you that it doesn’t want to answer
    0:56:25 your question.
    0:56:26 Yeah, which is very different in nature than the old computers.
    0:56:31 And it makes composability, you know, the ability to build big
    0:56:37 things out of little things, more complex.
    0:56:40 Right.
    0:56:41 But the capabilities are new and different and valuable and important because they can
    0:56:45 understand language and images and all these
    0:56:48 things that you see when you use them.
    0:56:49 All the problems we could never solve with deterministic computers,
    0:56:53 we can now go after, right?
    0:56:55 Right.
    0:56:56 Yeah, exactly.
    0:56:57 And so I think, I think, Ben, I think the analogy and I think the lessons learned are
    0:57:00 much more likely to be drawn from the early days of the computer industry or from the
    0:57:02 early days of the microprocessor than the early days of the internet.
    0:57:05 Does that, does that sound right?
    0:57:06 I think so.
    0:57:07 Yeah.
    0:57:08 I definitely think so.
    0:57:09 And that doesn’t mean there’s no like boom and bust and all that because that’s just
    0:57:12 the nature of technology, you know, people get too excited and then they get too depressed.
    0:57:18 So there’ll be some of that, I’m sure.
    0:57:19 There’ll be overbuild outs, you know, potentially eventually of chips and power and that kind
    0:57:24 of thing.
    0:57:25 You know, we start with the shortage, but, but I agree.
    0:57:27 Like I think networks are fundamentally different in the nature of how they evolved in computers
    0:57:34 and the kind of just the adoption curve and all those kinds of things will be different.
    0:57:39 Yeah.
    0:57:40 So then, and this kind of goes to where, how I think the industry is going to unfold.
    0:57:43 And so this is kind of my best theory for kind of what happens from here of this kind
    0:57:46 of this, you know, this, this giant question of like, you know, is, is the industry going
    0:57:49 to be a few God models or, you know, a very large number of models of different sizes
    0:57:52 and so forth.
    0:57:54 So the computer, like famously, you know, the, the original computers, like the original
    0:57:58 IBM mainframes, you know, the big computers, you know, they were very, very large and expensive
    0:58:03 and there were only a few of them.
    0:58:05 And the prevailing view actually for a long time was that’s all there would ever be.
    0:58:09 And there was this famous statement by Thomas Watson senior, who was the creator of IBM,
    0:58:13 you know, which was the dominant company for the first like, you know, 50 years of the
    0:58:16 computer industry.
    0:58:17 And he said, and I believe this is actually true, he said, I don't
    0:58:21 know that the world will ever need more than five computers.
    0:58:24 And I think the reason for that, it was literally, it was like the government’s going to have
    0:58:27 two, and then there’s like three big insurance companies.
    0:58:30 And then that’s it.
    0:58:31 Yeah.
    0:58:32 Who else would need to do all that math?
    0:58:34 Exactly.
    0:58:35 Yeah.
    0:58:36 Who else needs to keep track of huge amounts of numbers?
    0:58:38 Who else needs that level of, you know, calculation capability?
    0:58:41 It's just, you know, not a relevant concept.
    0:58:44 And by the way, they were like big and expensive.
    0:58:46 And so who else can afford them, right?
    0:58:48 And who else can afford all the headcount required to manage them and maintain them?
    0:58:51 I mean, this is in the days when these things were so big
    0:58:53 that you would have an entire building that got built around a computer, right?
    0:58:57 And they’d have like, they’d famously have all these guys in white lab coats, literally
    0:59:00 like taking care of the computer, because everything had to be kept super clean or the
    0:59:03 computer would stop working.
    0:59:05 And so, you know, it was this thing where, you know, today we have the idea of an AI God
    0:59:08 model, which is like a big foundation model; back then we had the idea of
    0:59:11 a God mainframe, like there would just be a few of these things.
    0:59:14 And by the way, if you watch old science fiction, it almost always has this sort of conceit.
    0:59:19 It’s like, okay, there’s a big supercomputer and it either is like doing the right thing
    0:59:22 or doing the wrong thing.
    0:59:23 And if it’s doing the wrong thing, you know, that’s, that’s often the plot of the science
    0:59:26 fiction movies is you have to go in and try to figure out how to fix it or defeat it.
    0:59:30 And so it’s sort of this, this idea of like a single top down thing.
    0:59:33 And that view held for a long time.
    0:59:35 Like it held for, you know, the first few decades.
    0:59:37 And then, you know, eventually computers started to get smaller.
    0:59:40 So then you had so-called minicomputers, which was the next phase.
    0:59:42 And so that was a computer that, you know, didn't cost $50 million.
    0:59:45 Instead, it cost, you know, $500,000, but even still $500,000 is a lot of money.
    0:59:50 People aren't putting minicomputers in their homes.
    0:59:52 And so it's like midsize companies can buy minicomputers, but certainly individuals can't.
    0:59:56 And then of course, with the PC, they shrunk down to like $2,500.
    0:59:59 And then with the smartphone, they shrunk down to $500.
    1:00:01 And then, you know, sitting here today, obviously you have computers of every shape, size, description,
    1:00:06 all the way down to, you know, computers that cost a penny.
    1:00:08 You know, you’ve got a computer in your thermostat that, you know, basically controls the temperature
    1:00:12 in the room and it, you know, probably cost a penny.
    1:00:13 And it's probably some embedded ARM chip with firmware on it.
    1:00:16 And there’s, you know, many billions of those all around the world.
    1:00:18 You buy a new car today.
    1:00:19 New cars today have something on the order of 200 computers in them.
    1:00:23 And maybe more at this point.
    1:00:25 And so sitting here today,
    1:00:28 you just kind of assume that everything has a chip in it.
    1:00:30 You assume that everything, by the way, draws electricity or has a battery
    1:00:33 because it needs to power the chip.
    1:00:35 And then increasingly, you assume that everything’s on the internet
    1:00:37 because basically all computers are assumed to be on the internet or they will be.
    1:00:41 And so as a consequence, what you have is the computer industry today is this massive pyramid.
    1:00:46 And you still have a small number of like these supercomputer clusters
    1:00:49 or these giant mainframes that are like the God model, you know, the God mainframes.
    1:00:53 And then you’ve got, you know, a larger number of mini computers.
    1:00:56 You’ve got a larger number of PCs.
    1:00:57 You’ve got a much larger number of smartphones.
    1:00:58 And then you’ve got a giant number of embedded systems.
    1:01:01 And it turns out like the computer industry is all of those things.
    1:01:03 And, you know, what size of computer you want is based on
    1:01:08 what exactly you're trying to do and who you are and what you need.
    1:01:11 And so if that analogy holds, it basically means actually we are going to have AI models
    1:01:16 of every conceivable shape, size, description, capability, right?
    1:01:20 Trained on lots of different kinds of data, running at very different kinds of scale,
    1:01:24 very different privacy, different policies, different, you know, security policies.
    1:01:28 You know, you’re just going to have like enormous variability and variety.
    1:01:32 And it’s going to be an entire ecosystem and not just a couple of companies.
    1:01:35 Yeah, let me see what you think of that.
    1:01:37 Well, I think that’s right.
    1:01:38 And I also think that the other thing that's interesting about this era of computing,
    1:01:42 if you look at prior eras of computing from the mainframe to the smartphone,
    1:01:47 a huge source of lock-in was basically the difficulty of using them.
    1:01:53 So, you know, nobody ever got fired for buying IBM because like, you know,
    1:01:58 you had people trained on them, you know, people knew how to use the operating system.
    1:02:03 Like it was, you know, it was just kind of like a safe choice.
    1:02:07 Due to the massive complexity of like dealing with a computer.
    1:02:12 And then even with the smartphone, like, you know, why is the Apple
    1:02:19 smartphone so dominant, you know, what makes it so powerful?
    1:02:23 Well, because, like, switching off of it is so expensive and complicated and so forth.
    1:02:27 It’s an interesting question with AI because AI is the easiest computer to use by far.
    1:02:32 It speaks English.
    1:02:33 It’s like talking to a person.
    1:02:35 And so like, what is the lock in there?
    1:02:38 And so are you completely free to use the size, price, choice, speed that you need
    1:02:45 for your particular task, or are you locked into the God model?
    1:02:49 And, you know, I think it’s still a bit of an open question, but it’s pretty interesting.
    1:02:56 And that thing could be very different than in prior generations.
    1:03:01 Yeah, yeah, that makes sense.
    1:03:03 And then just to complete the question, what would we say?
    1:03:05 So, Ben, what would you say are lessons learned from the internet era that we lived through
    1:03:08 that would apply that people should think about?
    1:03:11 I think a big one is probably just the boom-bust nature of it.
    1:03:19 That like, you know, the demand, the interest in the internet, the recognition of what it
    1:03:25 could be was so high that money just kind of poured in in buckets, and, you know,
    1:03:32 the underlying thing in the internet age was the telecom infrastructure, and fiber
    1:03:37 and so forth got just unlimited funding, and unlimited fiber was built out, and then eventually
    1:03:42 we had a fiber glut and all the telecom companies went bankrupt, and that was great fun.
    1:03:49 But you know, like, we ended up in a good place, and I think something like that's
    1:03:53 probably pretty likely to happen in AI where like, you know, every company is going to
    1:03:58 get funded.
    1:03:59 We don’t need that many AI companies.
    1:04:01 So a lot of them are going to bust.
    1:04:02 There are going to be, you know, huge investor losses.
    1:04:06 There will be an overbuild out of chips for sure at some point.
    1:04:11 And then, you know, we’re going to have too many chips and yeah, some chip companies will
    1:04:15 go bankrupt for sure.
    1:04:17 And then, you know, I think probably the same thing with data centers and so forth:
    1:04:21 like, we'll be behind, and then we'll overbuild at some point.
    1:04:26 So that will all be very interesting.
    1:04:29 And that's kind of the pattern with every new technology.
    1:04:34 So Carlota Perez has done, you know, amazing work on this, where,
    1:04:39 like, that is just the nature of a new technology: you underbuild it, then
    1:04:43 you overbuild, and, you know, there's a hype cycle that funds the buildout and a
    1:04:49 lot of money is lost.
    1:04:50 But we get the infrastructure and that’s awesome because that’s when it really gets adopted
    1:04:54 and changes the world.
    1:04:55 I want to say, you know, with the internet, the other kind of big thing
    1:05:01 is the internet went through a couple of phases, right?
    1:05:04 Like it went through a very open phase, which was unbelievably great.
    1:05:08 It was probably one of the greatest boons to the economy.
    1:05:11 It, you know, it certainly created tremendous growth and power in America, both, you know,
    1:05:16 kind of economic power and soft cultural power and these kinds of things.
    1:05:21 And then, you know, it became closed with the next generation architecture with, you
    1:05:26 know, kind of discovery on the internet being owned entirely by Google and, you know, kind
    1:05:31 of other things, you know, being owned by other companies and, you know, AI, I think
    1:05:36 could go either way.
    1:05:37 So it could be very open, or, like, you know, with kind of misguided regulation, you know,
    1:05:42 we could actually force ourselves away from something that, you know, is open source, open weights,
    1:05:48 anybody can build it.
    1:05:49 We'll either have a plethora of this technology and, like, use all of American innovation
    1:05:55 to compete, or we'll, you know, cut it all off, we'll force it into the hands
    1:06:02 of the companies that kind of own the internet today, and, you know, we'll put ourselves
    1:06:08 at a huge disadvantage, I think, competitively against China in particular, but everybody
    1:06:14 in the world.
    1:06:15 And so I think that's something that definitely, you know, we're involved
    1:06:20 with trying to make sure doesn't happen, but it's a real possibility right now.
    1:06:24 Yeah.
    1:06:25 There's sort of an irony, which is that networks used to be all proprietary and then they opened
    1:06:29 up.
    1:06:30 Yeah, yeah, yeah.
    1:06:31 LAN Manager, AppleTalk, NetBEUI, NetBIOS.
    1:06:34 Yeah, exactly.
    1:06:35 And so these are all the early proprietary networks from all individual specific vendors
    1:06:38 and then the internet appeared and kind of TCP/IP and everything opened up.
    1:06:41 AI is trying to go the other, I mean, the big companies are trying to take AI the other
    1:06:44 way.
    1:06:45 It started out as, like, open, just basically open source.
    1:06:48 Everything was open source in AI, yeah.
    1:06:50 Right.
    1:06:51 Right.
    1:06:52 And now they’re trying to, they’re trying to lock it down.
    1:06:53 So it's a fairly nefarious turn of events.
    1:06:56 Yeah.
    1:06:57 Yeah.
    1:06:58 Very nefarious.
    1:06:59 You know, it's remarkable to me.
    1:07:01 I mean, it is kind of the darkest side of capitalism when a company is so greedy, they’re
    1:07:08 willing to destroy the country and maybe the world to like just get a little extra profit.
    1:07:12 You know, and they do it, like, the really kind of nasty thing is they claim, oh,
    1:07:17 it's for safety.
    1:07:18 You know, we’ve created an alien that we can’t control, but we’re not going to stop
    1:07:22 working on it.
    1:07:23 We’re going to keep building it as fast as we can and we’re going to buy every freaking
    1:07:26 GPU on the planet, but we need the government to come in and stop it from being open.
    1:07:32 This is literally the current position of Google and Microsoft right now.
    1:07:37 It’s crazy.
    1:07:38 And we’re not going to secure it.
    1:07:40 So we're going to make sure that, like, Chinese spies can just, like, steal our chip plans, take
    1:07:44 them out of the country.
    1:07:45 We won’t even realize for six months.
    1:07:46 Yeah.
    1:07:47 It has nothing to do with security.
    1:07:48 It only has to do with monopoly.
    1:07:49 Yes.
    1:07:50 The other thing, you know, just, Ben, going back on your point about speculation, so there's this
    1:07:54 critique that we hear a lot, right?
    1:07:56 Which is like, okay, you idiots, basically it’s like you idiots, you idiots, entrepreneurs,
    1:07:59 investors, you idiots.
    1:08:00 It’s like there’s a speculative bubble with every new technology, like basically like
    1:08:04 when are, when are you people going to learn to not do that?
    1:08:06 Yeah.
    1:08:07 There's an old joke that relates to this, which is: the four most
    1:08:10 dangerous words in investing are, "this time is different."
    1:08:13 The 12 most dangerous words in investing are, "the four most dangerous words in investing
    1:08:17 are this time is different," right?
    1:08:18 Like, so like, does history repeat?
    1:08:21 Does it not repeat? My sense of it, and you referenced Carlota Perez's book, which
    1:08:25 I agree is good,
    1:08:26 although I don't think it works as well anymore.
    1:08:28 We can talk about that some time, but, you know, it is a good, at least, background piece
    1:08:31 on this.
    1:08:32 You know, it's just incontrovertibly true that basically every significant technology
    1:08:35 advance in history was greeted by some kind of financial bubble, basically since financial
    1:08:39 markets have existed.
    1:08:40 And this, you know, by the way, includes everything from, you know, radio and
    1:08:43 television to the railroads, you know, lots and lots of prior examples. By the way,
    1:08:47 there was actually a so-called electronics boom-bust in the 60s called
    1:08:51 the "-tronics" boom; every company had a name ending in "-tronics."
    1:08:55 And so, you know, there was that, and then, you know, there was like a laser boom
    1:08:58 bust cycle.
    1:08:59 There, there were all these like boom bust cycles.
    1:09:00 And so basically it's like any new technology, that's what economists
    1:09:04 call a general purpose technology, which is to say something that can be used in lots
    1:09:07 of different ways.
    1:09:08 Like it inspires sort of a speculative mania.
    1:09:10 And you know, and look, the critique is like, okay, why do you need to have a speculative
    1:09:14 mania?
    1:09:15 Why do you need to have a cycle?
    1:09:16 Because like, you know, some people invest in these things,
    1:09:18 they lose a lot of money, and then there's this bust cycle that, you know, causes everybody
    1:09:21 to get depressed.
    1:09:22 Maybe it delays the rollout.
    1:09:23 And it’s like two things.
    1:09:25 Number one is like, well, you just don't know, like if it's a general purpose technology,
    1:09:28 like AI is, and it's potentially useful in many ways, like nobody actually knows upfront
    1:09:32 what the successful use cases are going to be, or what the successful companies are going
    1:09:36 to be. Like, you actually have to learn by doing.
    1:09:38 You're going to have misses.
    1:09:39 That’s venture capital.
    1:09:40 Yeah.
    1:09:41 We, we…
    1:09:42 Yeah, exactly.
    1:09:43 Yeah, exactly.
    1:09:44 So yeah, the true venture capital model kind of wires this in, right?
    1:09:46 Yeah.
    1:09:47 We, we basically, in core venture capital, the kind that we do, we sort of assume that half
    1:09:50 the companies fail, half the projects fail.
    1:09:52 And you know, if any of us or any of our…
    1:09:55 Fail completely, like lose money, lose money.
    1:09:57 Exactly.
    1:09:58 Yeah.
    1:09:59 And so like, and of course, if we or any of our competitors, you know, could figure
    1:10:02 out how to do the 50% that work without doing the 50% that don't work,
    1:10:05 We would do that.
    1:10:06 But you know, here we sit 60 years into the field and like nobody’s figured that out.
    1:10:10 So there is that unpredictability to it.
    1:10:13 And then the other kind of interesting way to think about this is like, okay, what
    1:10:16 would it mean to have a society in which a new technology did not inspire speculation?
    1:10:20 And it would mean having a society that basically is just like inherently like super pessimistic
    1:10:25 about both the prospects of the new technology, but also the prospects of entrepreneurship
    1:10:29 and you know, people inventing new things and doing new things.
    1:10:31 And of course there are many societies like that on planet earth; they just
    1:10:35 fundamentally don't have the spirit of invention and adventure that, you know,
    1:10:40 that a place like Silicon Valley does and, you know, are they better off or worse off?
    1:10:44 And, you know, generally speaking, they’re worse off.
    1:10:46 They're just, you know, less future-oriented, less focused on building
    1:10:50 things, less focused on figuring out how to get growth.
    1:10:53 And so, at least my sense is, there's a comes-with-the-territory thing.
    1:10:57 Like we would all prefer to avoid the downside of a speculative boom-bust cycle,
    1:11:01 but like it seems to come with the territory every single time.
    1:11:03 And at least no society I'm aware of has ever figured out
    1:11:07 how to capture the good without also having the bad.
    1:11:09 Yeah.
    1:11:10 And like, why would you?
    1:11:11 I mean, it's kind of like, you know, the whole Western United States was built
    1:11:16 off the gold rush, and like every treatment in the popular culture of the gold rush
    1:11:23 kind of focuses on the people who didn't make it, but there were people who made
    1:11:27 a lot of money, you know, and found gold. And, you know, the internet bubble, you
    1:11:33 know, was completely ridiculed by, you know, kind of every movie.
    1:11:38 If you go back and watch any movie between like 2001 and 2004, they're all like how only
    1:11:46 morons did dot-com and this and that and the other, and there are all these funny documentaries
    1:11:51 and so forth.
    1:11:52 But like, that's when Amazon got started, you know, that's when eBay got started, that's
    1:11:58 when Google got started. You know, these companies that worked
    1:12:03 started in the bubble, in the kind of time of this great speculation. There was gold in
    1:12:08 those companies.
    1:12:10 And if you got any one of those, like, you funded, you know, probably the next set of companies,
    1:12:15 you know, which included things like, you know, Facebook and, you know, Snap
    1:12:19 and all these things.
    1:12:20 And so, yeah, I mean, like, that’s just the nature of it.
    1:12:24 I mean, like, that’s what makes it exciting.
    1:12:26 And you know, it's just an amazing kind of thing that, you know, look, the transfer
    1:12:33 of money from people who have excess money to people who are trying to do new things
    1:12:39 and make the world a better place is the greatest thing in the world.
    1:12:43 Like, and if some of the people with excess money lose some of that excess money in trying
    1:12:49 to make the world a better place, like, why are you mad about that?
    1:12:53 Like, that's the thing that I can never get. Like, why would you be mad at, you know, young
    1:12:59 ambitious people trying to improve the world, getting funded, and some of that being misguided?
    1:13:06 Like why is that bad?
    1:13:07 Right.
    1:13:08 Right.
    1:13:09 As compared to, yeah, especially as compared to everything else in the world
    1:13:12 and all the people who are not trying to do that.
    1:13:14 So you’d rather, like, we just buy, like, you know, lots of mansions and boats and jets.
    1:13:19 Right.
    1:13:20 Like, what are you talking about?
    1:13:21 Right.
    1:13:22 Right.
    1:13:23 Exactly.
    1:13:24 We're donating money to ruinous causes.
    1:13:25 Yeah, ruinous causes.
    1:13:26 Right.
    1:13:27 Such as ones that are in the news right now.
    1:13:31 Okay.
    1:13:32 So, all right.
    1:13:33 We're at an hour twenty.
    1:13:34 We made it all the way through four questions.
    1:13:35 We’re doing good.
    1:13:36 We’re doing great.
    1:13:37 So let’s call it here.
    1:13:38 Thank you, everybody, for joining us.
    1:13:39 And I believe we should do a part two of this, if not parts three through six, because we
    1:13:42 have a lot more questions to go.
    1:13:43 But thanks, everybody, for joining us today.
    1:13:45 All right.
    1:13:46 Thank you.
    1:13:46 Bye.
    1:13:47 Bye.
    1:13:48 Bye.
    1:13:49 Bye.
    1:13:49 (upbeat music)

    In this latest episode on the State of AI, Ben and Marc discuss how small AI startups can compete with Big Tech’s massive compute and data scale advantages, reveal why data is overrated as a sellable asset, and unpack all the ways the AI boom compares to the internet boom.

     

    Subscribe to the Ben & Marc podcast: https://link.chtbl.com/benandmarc

     

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

  • Predicting Revenue in Usage-based Pricing

    AI transcript
    0:00:06 I happily accept some of the risks of a consumption-based model because I think that the benefits
    0:00:09 far exceed the cost.
    0:00:14 These providers need to be held accountable to continuously delivering value.
    0:00:20 It is not okay to simply sell a deal, walk away for 11 months, and then one month before
    0:00:25 the renewal is set to go, then you re-engage and say, “Hey, how was the last 11 months?”
    0:00:27 If we would have just asked that question, they would have told us.
    0:00:31 But instead, we put it down on our chart as a trend that would endure for the next year
    0:00:34 and we called it ARR, and that’s a mistake.
    0:00:40 I think, actually, what you’re going to see is more hybrid pricing models.
    0:00:44 It involves also telling them proactively how to spend less on your company by implementing
    0:00:48 some best practices that will reduce their consumption.
    0:00:55 There is no shortcut to creating long-term successful businesses.
    0:01:00 Pricing is hard, which is why so many companies have defaulted to standard pricing models
    0:01:02 like subscription.
    0:01:06 And that should come as no surprise, because predictable revenue is the linchpin of any
    0:01:10 company’s planning, execution, and ultimately valuation.
    0:01:15 But, it also happens to be one of the most difficult things to nail about implementing
    0:01:17 another pricing model.
    0:01:22 That is usage-based pricing, which is what we’re here to talk about today.
    0:01:26 Because once you’ve established the right processes, org and compensation structures,
    0:01:31 and tech stack to operationalize it, your revenue can actually become more predictable
    0:01:35 with usage-based pricing than it might be with traditional SaaS over time.
    0:01:40 So, today you’ll hear from A16Z growth partner, Mark Regan, as he sits down with Travis Ferber,
    0:01:47 VP of strategy at Fivetran, and Dan Burrell, head of sales at Alchemy, who both have implemented
    0:01:50 and embraced, you guessed it, usage-based pricing.
    0:01:55 So, today they share guidance on best practices, the very real ups and downs of usage-based
    0:02:01 pricing, key metrics to hone in on both short-term and long-term planning, and ultimately why
    0:02:06 it’s so important to orient your business around the value that a customer is getting.
    0:02:10 The first voice you’ll hear is Mark, then Travis, then Dan.
    0:02:14 Oh, and for more content just like this on usage-based pricing, make sure to check out
    0:02:22 A16Z.com/Pricing-Packaging. Enjoy!
    0:02:26 As a reminder, the content here is for informational purposes only, should not be taken as legal,
    0:02:31 business, tax, or investment advice, or be used to evaluate any investment or security,
    0:02:35 and is not directed at any investors or potential investors in any A16Z fund.
    0:02:39 Please note that A16Z and its affiliates may also maintain investments in the companies
    0:02:41 discussed in this podcast.
    0:02:51 For more details, including a link to our investments, please see A16Z.com/disclosures.
    0:02:57 What I’d love to hear from you guys first is your perspective on why usage-based pricing
    0:03:02 has become so popular in the industry, and why is it working so well at each of your
    0:03:04 respective companies.
    0:03:11 I think it comes down to, usage-based pricing allows customers to pay for what they use.
    0:03:16 It helps tie value directly to the product, and from a customer standpoint, it’s also
    0:03:20 an easy way to help customers come in and experience your product without making a big
    0:03:21 commitment to that.
    0:03:27 So it’s really helpful in landing customers and bringing them in, and I think over time
    0:03:31 with that, you can build stronger relationships with those customers, and you can see that
    0:03:33 growth over time.
    0:03:37 There’s maybe one other thing I’ll add to that, which is usage-based pricing forces
    0:03:41 you as a company to think about the customer all the time.
    0:03:45 Every part of the organization has to be thinking about the customer, and a bookings model where
    0:03:50 you come in and you get them a subscription, the salesperson is like, “Great, did my job.
    0:03:55 I’ll see you in 10 months when we’re getting ready to start talking about that renewal.”
    0:03:59 And your engineering team is somewhat tied to them, but you don’t get the immediate
    0:04:00 feedback with the customer.
    0:04:02 You don’t get the immediate, “Are they being successful?”
    0:04:04 Have they adopted the product in the way that you thought?
    0:04:06 And with usage-based pricing, you can see that.
    0:04:07 You get the telemetry data.
    0:04:12 You get the information right away, and you can focus the entire organization to make
    0:04:14 sure that those customers are successful.
    0:04:19 And I think the founder or an individual who wants to build a company that drives great
    0:04:23 value with customers and great relationships with that, usage-based pricing is a good mechanism
    0:04:29 for aligning all the organization around the success of the customer, because your revenue
    0:04:31 is directly tied to the success of that customer.
    0:04:32 I think you nailed that.
    0:04:37 I've been in Silicon Valley venture-backed companies for 12-ish years now.
    0:04:41 And what’s really interesting when I think about this question is that I think back
    0:04:46 to my very first sales training ever out of college, where I had not yet moved to Silicon
    0:04:47 Valley.
    0:04:52 I was in a Fortune 100 company, and I was given sort of conventional sales training.
    0:04:59 In that training, I was taught how to gait information and access to information about
    0:05:04 our products and services behind all this process that we were supposed to put these
    0:05:07 prospects and potential customers through.
    0:05:11 And instantly, that never sat right with me.
    0:05:13 That never felt like a reasonable trade.
    0:05:18 I felt like we should be freely giving more access to information.
    0:05:23 Then I moved to Silicon Valley, and I loved the updated philosophy that I saw.
    0:05:30 At the time, we were all talking about the consumerization of IT, of enterprise infrastructure,
    0:05:35 and making sure that the tools that employees want to use inside a company to do great work
    0:05:41 match the great experience of consumer tools that were on the market, that were available
    0:05:42 for people.
    0:05:45 And there was this discussion of shadow IT and people bringing in their own technologies
    0:05:49 from outside that weren’t necessarily sanctioned because these tools were so much better to
    0:05:50 use.
    0:05:55 And the key to that whole motion and what has driven this is the consumer preference.
    0:05:58 And the consumer in this case could be the enterprise customer, the enterprise knowledge
    0:05:59 worker.
    0:06:01 They deserve great tools.
    0:06:06 Not only do they deserve access to information about the tools that they might buy, we’ve
    0:06:12 now taken it and progressed it over the last 10 years to include consumption of that tool.
    0:06:17 So as part of this consumption-based pricing model, of course, we have different tiers
    0:06:22 and access to different tiers for these solutions, including free tiers, which the internet has
    0:06:26 helped democratize access to information about all these solutions.
    0:06:30 And of course, you can actually go use them, which I think is a huge benefit for customers
    0:06:33 and I think is the right expectation for the industry to have.
    0:06:37 It’s going to help ensure that companies are able to find the right tool to meet their
    0:06:41 business cases and their actual needs.
    0:06:42 So I love that we’ve done that.
    0:06:46 Now I may be cheating ahead a little bit on additional questions, but where I think this
    0:06:52 naturally goes is if we’re giving out all this access to use our tools with very generous
    0:06:59 free tiers and we’re spending money to provide that solution for free, what are we then trading
    0:07:04 in exchange for those scale customers and long-term relationships and commitments and
    0:07:10 how do we fuse product-led growth with the appropriate level of sales-led growth?
    0:07:14 And there’s actually a ton of exciting stuff in that category where my perspective is you
    0:07:20 can do that in a really healthy way to manage your company’s resources effectively.
    0:07:23 But to go back to the original intent of the question, why does this make sense?
    0:07:24 Why is this so popular?
    0:07:26 Why is this not going away?
    0:07:28 Because it’s the right thing to do for customers.
    0:07:32 They know it deep in their heart that they should be able to use these tools.
    0:07:36 They should be able to have complete access to information about these tools and these
    0:07:41 providers need to be held accountable to continuously delivering value.
    0:07:48 It is not okay to simply sell a deal, walk away for 11 months, and then one month before
    0:07:52 the renewal is set to go, then you re-engage and say, “Hey, how was the last 11 months?
    0:07:54 Hopefully you’re ready to renew and expand.”
    0:07:59 That’s not an okay motion, and that’s not the way to maximize business value for anybody.
    0:08:04 I love both of your perspectives on that, and it almost seems overwhelmingly positive.
    0:08:08 I think we all know, living in the reality of this, especially when you’re going through
    0:08:13 that arc as these growth stage companies, there are a ton of challenges with this model
    0:08:15 in practicality.
    0:08:21 I know you guys have lived through it, and I’m really interested in what your key observations
    0:08:23 have been around those challenges.
    0:08:29 Just as importantly, what you've seen in the organizations you've worked in to try to mitigate
    0:08:33 those, to try to overcome those, and to really be able to operate this model at scale.
    0:08:34 Oh, man.
    0:08:35 There’s a lot there.
    0:08:37 There’s a lot to unpack on that one.
    0:08:38 A little bit of background.
    0:08:42 Fivetran, when it was started, was a bookings-based business.
    0:08:44 We had a variety of connectors.
    0:08:48 We priced those connectors in different groups, and said, “Congratulations, here you go.
    0:08:49 You bought your connector.
    0:08:50 Go forth.
    0:08:51 Talk to you in a year.”
    0:08:55 Then we switched to a usage-based pricing model using something called monthly active
    0:08:57 rows.
    0:09:03 With that switch, we had an established sales culture built around that bookings-based
    0:09:08 business model that had to move over to usage-based, and that was a hard switch, making sure that you
    0:09:12 had all the systems in place and everything else that comes along with that.
    0:09:15 I think there’s a couple of different ways to think about the challenges of usage-based
    0:09:16 pricing.
    0:09:21 One is on your systems and internal systems and processes to be able to manage that.
    0:09:23 It’s a bigger investment.
    0:09:32 It is much easier to run a bookings-based business from just a planning and comp standpoint with your internal systems.
    0:09:35 All of that is way easier, way simpler.
    0:09:41 Therefore, you have to make a lot of investments into your operations teams, into the systems,
    0:09:45 into the data models that you have to run, all the telemetry data from your product.
    0:09:47 You have to have that information.
    0:09:49 You need that information to help drive that.
    0:09:50 That’s one challenge.
    0:09:57 The other side of this is that it does introduce a lot more variability, particularly when you
    0:09:59 have fewer customers.
    0:10:02 It can introduce a lot more variability into your revenue, because customers can change
    0:10:06 their usage, and you have less commitment from a customer.
    0:10:09 You can see customers come and go and move up and down, so you always have to focus on
    0:10:10 value.
    0:10:13 At Fivetran, we’ve seen a couple of different drivers of that.
    0:10:15 There are multiple drivers on predictability.
    0:10:19 We try to give a lot of flexibility with customers so that they can match what they need and
    0:10:25 their value to what the product is offering, but in doing that flexibility, that can drive
    0:10:28 a lot of variability in what customers actually are using and how they’re changing that.
    0:10:33 They can optimize a lot for the needs of their business, and that can drive some unpredictability
    0:10:34 in the revenue.
    0:10:38 There’s this other factor that we see, which is just general macroeconomic things, just
    0:10:40 things that happen in the world.
    0:10:44 As we move data, there’s stuff that can be sometimes outside of the control of the customer
    0:10:46 that can impact their usage.
    0:10:50 For example, we see a lot of our retail customers around November and December timeframe, huge
    0:10:53 spikes in usage, because that’s when all the POS systems are going.
    0:10:54 All their sales are happening.
    0:10:56 You get these big spikes in usage.
    0:11:02 If you have a diversified customer base, you can sometimes mellow out those spikes through
    0:11:06 broader industry diversification or through understanding and planning for those spikes,
    0:11:10 but you have to have some history, some data history to understand that.
    0:11:14 The final one depends on what your product is; again, at Fivetran, the unique thing is we’re
    0:11:20 interacting with other products, so we pull data out of what’s happening from other users
    0:11:25 or other applications, and then we move that data over, and so those applications make changes.
    0:11:27 That impacts our product.
    0:11:31 HubSpot, I think, a few years ago, made a change to their API, and it forced a
    0:11:37 complete resync for all of our customers, which caused a huge spike in usage across the board
    0:11:40 for all of our customers that were using the HubSpot connector.
    0:11:42 It’s one of those things where you have to be on top of that all the time.
    0:11:47 You have to be watching the interactions and stay on top of those things so you can protect
    0:11:50 the customers from these unnatural spikes that can happen.
    0:11:54 That means more investment in your product and your engineering teams.
    0:11:57 Those are the complications of usage-based pricing when you’re coming from bookings-based,
    0:12:02 where it’s simpler to plan, simpler to use, the salespeople understand it, and quite
    0:12:04 frankly, the procurement people understand it better, too.
    0:12:05 You go from, “I know what I’m buying.
    0:12:07 I know what this is going to cost me.
    0:12:12 I can predict this in my budget,” to, “Hey, I’m not really sure how much I’m going to
    0:12:13 use.
    0:12:15 I’m not really sure what this is going to do for my budget over time.
    0:12:16 I’m not sure if I have control over that.”
    0:12:19 So, there’s a lot of education that has to happen.
    0:12:23 Well, Travis, there’s something else that I hear in that, too, which I feel like our
    0:12:29 industry loses sight of a little bit in this discussion: the idea
    0:12:34 that whether I choose to be a bookings-based business or a usage-based business is somehow
    0:12:38 the sole determining factor as to whether or not we’re going to be a good company or
    0:12:39 a bad company.
    0:12:40 Oh, yeah.
    0:12:41 This is just the strategy.
    0:12:43 This is the strategy to unlock growth.
    0:12:48 I personally think that it’s a very good strategy to unlock growth, but it also comes with costs,
    0:12:51 and we can talk about those, and we can talk about mitigations for those, and you should
    0:12:58 be eyes-wide open, but there is no shortcut to creating long-term successful businesses.
    0:13:02 Fundamentally, at the heart of all of this, and you alluded to this, Travis, and I totally
    0:13:08 agree, your products and services have to deliver immense amounts of value to your customers,
    0:13:13 plain and simple, and that is regardless of what growth strategy you choose or what sales
    0:13:14 model you choose to have.
    0:13:20 These software products are never done being built, absolutely never done being built.
    0:13:24 And so consumption, actually, that growth strategy lines up beautifully with that because
    0:13:27 as long as you’re continuing to build these software products and you’re adding that
    0:13:32 flexibility, that feature set, that next generation of innovation, then you’re able to command
    0:13:37 good margins, make customers wildly successful, and they’re excited to come back and spend
    0:13:41 more and more and more on that consumption every period because they know they’re getting
    0:13:44 more value than they’re paying you, and that’s how you build a great business.
    0:13:50 So I do think people lose sight of that, so you need to be tightly aligned with your product
    0:13:54 organization and thinking about that product roadmap because that’s going to be a much
    0:13:55 bigger determining factor.
    0:13:59 Now, I think part of this question, too, is what are some of those costs?
    0:14:06 We should recognize that in good economic times, a consumption-based model can be a big accelerant
    0:14:12 because there is less friction to customers being able to use more, consume more, and
    0:14:15 therefore, your company getting to make more revenue when they do that.
    0:14:21 In tougher economic times where you’ve got percentages of your business and your revenue
    0:14:25 that is tied to full consumption where there are no bookings commitments in place, obviously
    0:14:30 that represents a risk, and it’s also going to be a lower-friction way for those businesses
    0:14:35 who are your customers to save money by pulling back their consumption on your service and
    0:14:39 lopping off use cases or shutting down one department’s use of that solution, and we’ve
    0:14:41 seen that in the last couple of years.
    0:14:44 There was a ton written about that and a ton of analysis.
    0:14:48 For me personally, when I think about building a great business, first and foremost, I want
    0:14:53 that amazing product roadmap where we are so confident in the value that our solution
    0:14:54 provides.
    0:15:00 Secondarily, I happily accept some of the risks of a consumption-based model because I think
    0:15:03 that the benefits far exceed the costs.
    0:15:08 Even knowing that there will be tough times ahead, our customers may, as a result of
    0:15:14 a need to save money and extend runway or drive more profitability, reduce consumption
    0:15:15 with us.
    0:15:20 I’m willing to accept that and deal with that turbulence and do right by our customers
    0:15:23 in those moments because I actually think those moments, even though they don’t feel
    0:15:29 good because maybe revenue is pulling back on our side, those are incredible opportunities
    0:15:33 for us to build long-term trust and long-term relationships with those customers.
    0:15:38 They will remember how we treated them when they needed our help, and that will factor
    0:15:42 into their decision when times are good again and they’re ripping and they’re investing
    0:15:43 in growth.
    0:15:48 They’ll remember which providers stuck by them, took good care of them, and recognized
    0:15:51 that they were in a tough moment and they needed some forgiveness or some help or some
    0:15:57 actual assistance saving money with best practices that enabled them to lower consumption of
    0:15:58 your service.
    0:16:03 That’s a separate big topic of the role of account management and customer success,
    0:16:04 but hopefully that addressed the question.
    0:16:05 Definitely.
    0:16:07 Dan, I’ll go right back to you.
    0:16:12 Getting into a bit of the operational nitty-gritty of this.
    0:16:16 I’m particularly interested in your perspective as a sales leader when it comes to forecasting
    0:16:22 the business and just living with the pressure of a quarter or a couple quarters ahead of
    0:16:23 you.
    0:16:26 How have you learned to confidently forecast the business?
    0:16:30 You’re growing quickly, but you have all these challenges of just not a heck of a lot
    0:16:32 of data in the rearview mirror.
    0:16:35 You don’t have perfect signal detection and leading indicators.
    0:16:37 How are you working through that?
    0:16:39 How have you learned to become confident in your forecasting?
    0:16:41 I really appreciate that question.
    0:16:47 My answer may surprise you slightly because the key to good forecasting, even in a consumption
    0:16:51 based business, is a very healthy bookings element.
    0:16:56 The foundation of the relationship with our customers may still be entirely consumption
    0:16:57 based.
    0:16:59 That’s how we have the conversation.
    0:17:00 That’s how we meter their usage.
    0:17:02 That’s how we talk about their usage.
    0:17:06 That’s how we forecast their usage purely in the form of what they’re going to consume.
    0:17:09 As Travis said earlier, they’re going to pay for what they use.
    0:17:10 That’s the objective.
    0:17:15 That being said, I think it’s still totally fair and reasonable that my business
    0:17:18 values predictability, like you just talked about.
    0:17:21 I’ve got a job to do, which is to forecast accurately.
    0:17:24 We all know why those forecasts are so important.
    0:17:27 That enables us to make healthy forward-looking decisions about the business, how we’re going
    0:17:29 to invest, what teams we need.
    0:17:33 There’s a ton there that requires a great forecasting methodology, and because I’m going to
    0:17:39 get a bunch of business value from a healthy forecast, I can return value to my customers
    0:17:42 who are willing to make commitments to us.
    0:17:47 That’s a super fair exchange of value, and it’s on this beautiful continuum.
    0:17:52 The more flexibility that my customer requires, the more fair it is for them to pay a premium
    0:17:55 for the consumption that they’re going to use.
    0:18:00 The more they’re willing to commit to me and my team and my company, which enables me to
    0:18:05 be better at forecasting, the more I’m happy to return discounts and commercial incentives
    0:18:09 to them, and we’ll execute that on a bookings contract.
    0:18:15 This is part of the motion that you want to breed in the sales team, which is that you’re
    0:18:19 continuously selling, you’re continuously taking care of them, you’re continuously monitoring
    0:18:23 their use case, you’re continuously forecasting with all of your customers.
    0:18:30 It is expected in any organization that I’m running that if you’re taking care of a customer,
    0:18:35 you are continuously monitoring their use case in the telemetry that Travis was
    0:18:39 talking about. It is very important that you give your sales team and your customer
    0:18:44 success team the monitoring capabilities to understand, in very granular detail, how their
    0:18:49 customers are consuming products from a growth telemetry perspective.
    0:18:54 You also expect that those folks are deeply understanding the dynamics within the customer
    0:18:55 business.
    0:18:56 What is causing that growth?
    0:19:00 It’s not enough to just know what the growth rate is.
    0:19:04 I want my team to explain to me why that growth rate is.
    0:19:07 Is it because they’re aggressively expanding into a new market?
    0:19:12 Is it because they just acquired another company and now we’ve combined two teams’ uses?
    0:19:16 We actually have to know why because that is the key to good forecasting.
    0:19:22 I can’t tell you the number of times that I’ve seen this issue of massively over forecasting
    0:19:28 a given customer’s usage because the team didn’t understand that the behavior that customer
    0:19:30 was engaging in was a one-time thing.
    0:19:34 It was only ever going to last for one quarter, and if we would have just asked that question,
    0:19:35 they would have told us.
    0:19:39 But instead, we put it down on our chart as a trend that would endure for the next year,
    0:19:42 and we called it ARR, and that’s a mistake.
    0:19:43 There’s a whole lot.
    0:19:47 I could probably go on for another hour about what drives good forecasting.
    0:19:51 It’s a combination of instilling in your team great discovery skills, Scott, and an expectation
    0:19:56 that they’re doing ongoing discovery to always know the business drivers behind the usage
    0:19:57 trends.
    0:19:59 You can’t just know the trends.
    0:20:04 It’s arming them with great telemetry tools and monitoring/BI solutions to track it at a very
    0:20:09 granular level so they can get specific, and it is offering your customers fair contracts
    0:20:13 and discounts in exchange for commitments, which are really valuable for your business
    0:20:19 because you value the ability to forecast, you value certainty, and you’re happy, in
    0:20:24 my opinion, to give discounts to customers who can sign up for that level of commitment
    0:20:26 of minimum amounts of consumption.
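    The flexibility-versus-commitment continuum Dan describes can be sketched as a simple pricing function: the more a customer commits upfront, the bigger the discount they earn off list price. This is a minimal, hypothetical sketch; the list price, tier breakpoints, and discount rates below are invented for illustration, not any company's actual pricing.

    ```python
    # Hypothetical sketch: per-unit price as a function of committed consumption.
    # Customers who commit to more upfront usage earn deeper discounts; fully
    # flexible pay-as-you-go customers pay the list-price premium.

    LIST_PRICE = 1.00  # hypothetical list price per unit of consumption

    # (minimum annual commitment in units, discount off list price) -- hypothetical
    COMMIT_TIERS = [
        (0,         0.00),  # pure pay-as-you-go: full flexibility, no discount
        (100_000,   0.10),
        (500_000,   0.20),
        (1_000_000, 0.30),
    ]

    def unit_price(committed_units: int) -> float:
        """Return the discounted per-unit price for a given commitment level."""
        discount = 0.0
        for threshold, tier_discount in COMMIT_TIERS:
            if committed_units >= threshold:
                discount = tier_discount
        return LIST_PRICE * (1 - discount)

    def annual_contract_value(committed_units: int) -> float:
        """Bookings value of the committed portion of the contract."""
        return committed_units * unit_price(committed_units)

    print(unit_price(0))                     # 1.0 -> flexible customers pay full list
    print(unit_price(500_000))               # 0.8 -> commitment earns a discount
    print(annual_contract_value(1_000_000))  # 700000.0
    ```

    The committed portion becomes the predictable bookings element of the forecast, while consumption above the commitment stays variable, which is exactly the exchange of value described here.
    
    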
    0:20:29 The point around bookings is spot on.
    0:20:31 We have a couple different components to our business.
    0:20:36 We have this great big self-service group of customers that come in, use the product,
    0:20:39 never talk to a salesperson, they just go there, pay as you go.
    0:20:43 But then there’s this other portion of the business which makes commitments, as in, “Hey,
    0:20:47 I’m going to buy upfront this much for a year,” in exchange for built-in discounts.
    0:20:54 As you use more, discounts come into play, and those bookings help drive predictability
    0:20:58 for a portion of the business and forecasting for how we’re doing.
    0:21:02 When we think about long-term planning, I think Snowflake is famous for their RPO, Remaining
    0:21:04 Performance Obligation.
    0:21:05 How much have people booked?
    0:21:06 How much have we recognized?
    0:21:07 How much is left?
    0:21:11 And a big part of that is monitoring: giving that information to your customer success
    0:21:14 teams to help make sure customers are getting what they say they want.
    0:21:17 They’ve made a commitment to you, and they’ve given you an indication of what is valuable
    0:21:20 to them and what their levels are, and you can see, “Are we getting there?”
    0:21:25 I think the mechanics of actually building the predictability, what kind of systems you
    0:21:27 have to have in place, and how do you do that?
    0:21:30 We’ve gone through many iterations of this, and this has been an evolution over several
    0:21:31 years.
    0:21:36 We had to make major investments in our infrastructure from an analytics standpoint.
    0:21:38 So Fivetran moves a bunch of data.
    0:21:43 So we have a fairly large analytics team, and we’ve built a predictive model that says,
    0:21:47 “Okay, based on what usage has looked like,” because we’ve got this cohort of customers
    0:21:48 that haven’t made bookings.
    0:21:54 So based on their past usage, where are they going on an account-level basis?
    0:21:55 When did they join us?
    0:21:57 So what kind of usage curve are they on?
    0:22:00 Based on historical data that we’ve looked at, we said, “Okay, cool, customers that
    0:22:04 are about this size in this region, they perform on this kind of growth curve.”
    0:22:08 So if you have enough customers, those averages will work out, and you can see that as we
    0:22:12 apply those curves to these customers that come in, and so you layer those cohorts together,
    0:22:16 and that gives you a predictability about what’s kind of going on from a revenue standpoint.
    0:22:21 And it’s all like the data science side, and you can take into account what plan tier
    0:22:22 are they on?
    0:22:24 We’ve got five or six different plan tiers.
    0:22:26 What’s their discount for each individual customer?
    0:22:31 You have to layer all that stuff in so that you can build a more accurate view of their
    0:22:35 performance over time, and then take into account historical churn rates.
    0:22:37 Churn for us isn’t just a customer that has left us.
    0:22:42 Churn for us can be, “Hey, I’ve turned off a use case, so I’ve reduced that thing.”
    0:22:45 So you want to look and take that into account, and that’s our data science model.
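    The cohort approach Travis outlines, applying a historical growth curve to each customer based on their age and then layering the cohorts together, can be sketched in a few lines. Everything here is a hypothetical illustration: the growth-curve breakpoints, multipliers, and example book of customers are invented, not Fivetran's actual model.

    ```python
    # Hypothetical sketch of a cohort-based usage forecast: apply an average
    # historical growth curve (indexed by months since signup) to each customer's
    # current revenue, then layer the cohorts together for a book-level forecast.

    # Average month-over-month growth multiplier by customer age in months,
    # hypothetically derived from historical data for one size/region segment.
    GROWTH_CURVE = {0: 1.40, 3: 1.20, 6: 1.10, 12: 1.05, 24: 1.02}

    def growth_multiplier(age_months: int) -> float:
        """Pick the curve value for the largest breakpoint <= the customer's age."""
        applicable = [m for m in GROWTH_CURVE if m <= age_months]
        return GROWTH_CURVE[max(applicable)]

    def forecast_customer(current_revenue: float, age_months: int, horizon: int) -> float:
        """Project one customer's monthly revenue `horizon` months out."""
        revenue = current_revenue
        for month in range(horizon):
            revenue *= growth_multiplier(age_months + month)
        return revenue

    def forecast_book(customers: list[dict], horizon: int) -> float:
        """Layer cohorts: sum projected revenue across the whole customer book."""
        return sum(
            forecast_customer(c["revenue"], c["age_months"], horizon)
            for c in customers
        )

    book = [
        {"revenue": 10_000, "age_months": 2},   # young, fast-growing cohort
        {"revenue": 50_000, "age_months": 30},  # mature, slow-growing cohort
    ]
    print(round(forecast_book(book, 3)))
    ```

    With enough customers in each cohort, the averages wash out individual noise; the sales-insight overrides discussed next would then adjust individual accounts up or down from this baseline.
    
    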
    0:22:51 But the data science model only sees historical data and telemetry data from customers.
    0:22:55 They don’t have that piece of data I’m talking about, which is the customer discovery piece,
    0:22:58 the sales insight side of this, which is the second part.
    0:23:02 The sales team has the insights on, “Are they going to add another use case?
    0:23:03 Are they going to turn a table off?”
    0:23:06 They have the insights that the data team can’t have.
    0:23:09 They don’t know what the customers are going to do because they’re not having the conversations
    0:23:10 with the customers.
    0:23:16 So you give the sales team, “Here’s the predictive revenue for your book, for your customers.”
    0:23:20 And then the sales team can go like, “Well, actually, I know they’re going to add a new
    0:23:24 use case, and it’s going to come online in the next two months.
    0:23:27 And I know that that’s going to be worth this amount of money.”
    0:23:32 But that then gives you the insights to then modify your data science model, and that gives
    0:23:33 you a little bit more confidence.
    0:23:37 And I can tell you in the beginning, your sales team will get it wrong.
    0:23:43 Their predictions will be way off, particularly with our model, where the more you use, the cheaper
    0:23:45 your usage is.
    0:23:50 Therefore, unless you’re a savant and you can do multivariable calculus in your head,
    0:23:52 you have to have these tools in place to do that.
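    Travis's point that per-unit price falls as usage rises means revenue is a nonlinear function of usage, so a rep can't just multiply a usage forecast by a single unit price. A minimal sketch of graduated (banded) volume pricing, with hypothetical tier thresholds and rates chosen purely for illustration:

    ```python
    # Hypothetical graduated volume pricing: each band of usage is billed at its
    # own rate, so revenue grows sublinearly in usage and sizing an opportunity
    # requires tooling rather than a flat price-times-units mental estimate.

    # (units up to this threshold, price per unit) -- hypothetical tiers
    TIERS = [
        (100_000, 0.010),
        (500_000, 0.008),
        (float("inf"), 0.005),
    ]

    def revenue_for_usage(units: float) -> float:
        """Bill each band of usage at that band's rate (graduated pricing)."""
        revenue, prev_threshold = 0.0, 0
        for threshold, rate in TIERS:
            band = min(units, threshold) - prev_threshold
            if band <= 0:
                break
            revenue += band * rate
            prev_threshold = threshold
        return revenue

    # Doubling forecast usage does not double forecast revenue:
    print(revenue_for_usage(400_000))  # 1000 + 2400          = 3400.0
    print(revenue_for_usage(800_000))  # 1000 + 3200 + 1500   = 5700.0
    ```

    This is why the data science baseline plus sales overrides described above get combined inside a tool: the revenue impact of "they'll add a use case worth X more units" depends on which pricing band that incremental usage lands in.
    
    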
    0:23:57 And so you run this rigorous process where data science model comes in, sales modifies
    0:24:01 it based on their knowledge of the customer and what’s moving up or down, and then they’ve
    0:24:05 got the tools enabled to predict or size those different opportunities.
    0:24:07 I love everything you went through there, Travis.
    0:24:12 What I’m really curious about is how that extends when you need to do annual planning
    0:24:15 and you’re thinking about your longer-term investments.
    0:24:19 Obviously, that still requires you to forecast revenue going forward.
    0:24:25 How has that parlayed into longer-term planning, accuracy, or what else do you need to do in
    0:24:28 addition to those key concepts to do that well?
    0:24:29 Yeah.
    0:24:33 Your capacity model is a very interesting thing that you have to build.
    0:24:38 And we shifted to more of a demand-driven capacity model so we can look at historical
    0:24:39 demand.
    0:24:40 What have we been seeing?
    0:24:43 And then how does that demand translate into dollars for us?
    0:24:49 And then what ramp do customers go on when they come in to build these waterfall ramps?
    0:24:52 The basic outline still is the same, though.
    0:24:55 It’s just you have more assumptions that can go into that.
    0:24:59 We do a three-year long-term plan, which is as much for general growth rates, like what’s
    0:25:01 the macro economy kind of look like.
    0:25:03 It’s a roadmap from a product standpoint.
    0:25:06 It’s more of a “here’s where we think we’re going to get to.”
    0:25:07 It’s not a “this is the prediction,
    0:25:09 this is really honed in.”
    0:25:12 On the annual plan, it’s much more detailed.
    0:25:15 And that’s where we try to hone in and we work on these other assumptions because you’re
    0:25:20 assuming things like churn within the product, not just churn of customers.
    0:25:25 You assume things like expansion of growth rates over time, how those have been going.
    0:25:26 What have you seen in the past?
    0:25:28 You’re not just doing like an NRR assumption.
    0:25:32 You kind of have to look at this cohorted basis for each of your customers and how they’re
    0:25:35 going to grow within that year.
    0:25:39 And how big are each of those cohorts coming into the year?
    0:25:42 So like Q4, Q3 of the previous year, where did you land?
    0:25:44 Where are those customers on their growth cohort?
    0:25:48 That helps give you more predictability about your early-stage revenue, and then what
    0:25:52 your pipeline looks like for those new customers that could be coming in, that will
    0:25:57 then land in Q1 and Q2, and then the revenue that they’re going to generate for
    0:25:59 you in Q3 and Q4 as you look forward.
    0:26:04 So it’s a lot of the same kind of skeleton that you have from the annual planning basis,
    0:26:09 with this extra layer of cohorts and the growth of that revenue over time and where
    0:26:12 they are, and so those are the assumptions that you have to layer in.
    0:26:14 Well, I was just going to ask you a follow-up there.
    0:26:22 Fundamentally, aren’t you just taking all of that extra rigor and analysis to normalize
    0:26:27 an account executive’s contribution in the form of a quarterly add to the business?
    0:26:32 Whether you’re metering that in ARR or MRR, you’re still just doing all that extra
    0:26:38 rigor to normalize what an incremental head is giving to your business so that you
    0:26:40 can plan essentially in a normal way.
    0:26:41 Yeah, totally.
    0:26:46 I think, man, we think about it in terms of like there’s kind of two parts to the business.
    0:26:51 There’s this demand-driven part of the business, which is customers come in and they don’t
    0:26:55 talk to salespeople so it’s the self-service portion and that part is not about adding
    0:26:56 salespeople.
    0:27:01 So you kind of have to look at the demand part of the model up front and on the enterprise
    0:27:05 side, we look at larger organizations where it’s actually the salespeople are driving
    0:27:06 demand.
    0:27:10 They’re creating demand with customers or developing those relationships.
    0:27:14 That’s a little bit more where you’re like, cool, if I add another salesperson, I’m adding
    0:27:20 more revenue and it’s not as constrained, but yes, to your point broadly speaking, yes,
    0:27:24 you are kind of normalizing how much incremental revenue are you driving by each person that
    0:27:27 you’re adding to the organization and then defending revenue too.
    0:27:30 Well, Mark, I think what we’re both saying, what Travis and I are in complete alignment
    0:27:35 on, is that a very huge component of this planning exercise that you’re talking
    0:27:39 about is attribution for your revenue.
    0:27:41 What was the source of that pipeline?
    0:27:44 Was that customer spending on their own?
    0:27:48 Did they self-serve and how far did they self-serve?
    0:27:53 And then this is actually where I think the most important thing for any organization is
    0:28:00 to have really great communication and alignment among the business leaders between sales leadership,
    0:28:04 operational leadership, revenue excellence leadership, and of course finance.
    0:28:09 Those parties need to be in complete alignment about the relative value of these different
    0:28:14 buckets and where they come from. Take a dollar of revenue that was self-prospected entirely
    0:28:16 by an account executive:
    0:28:21 How does that relate to a dollar of bookings that was converted from somebody who was already
    0:28:22 spending?
    0:28:27 And you can even get as fancy and as nuanced as applying modifiers within a comp plan, for
    0:28:30 example, to different kinds of dollars of revenue.
    0:28:35 But it all goes back to having some basic systems of attribution to know where revenue
    0:28:37 is coming from and where it started out.
    0:28:42 This whole product-led growth motion, it is another example of how it can accelerate but
    0:28:46 how it can add some cost because now you have this whole new bucket.
    0:28:53 I remember the first time I heard the acronym PQL, it was probably around 2016 that I heard
    0:28:58 that for the first time and prior to that we’d only ever talked in MQLs.
    0:29:04 And it’s now this mega funnel of opportunity for your business if you’re driving a consumption
    0:29:07 model of product-qualified leads.
    0:29:09 And so what’s the definition of that?
    0:29:11 What are the expectations for follow-up?
    0:29:16 What are the expectations for compensation when a seller closes a deal with somebody
    0:29:17 who was already using?
    0:29:19 Lots of good considerations there.
    0:29:23 The advice I would give is there is no one-size-fits-all solution.
    0:29:27 I’ve been across three or four different businesses now that have some element of consumption-based
    0:29:28 pricing.
    0:29:32 The key to getting that right is to actually listen to the needs of your customers and
    0:29:34 the dynamics of your business.
    0:29:41 I have seen this change dramatically, and therefore the compensation plans that we write are custom,
    0:29:46 based on the competitive pressures that we’re feeling, the market dynamics that we’re feeling,
    0:29:50 our stage of growth, our orientation towards profitability.
    0:29:54 There just really is no one-size-fits-all piece of advice here.
    0:29:58 You’ve got to respond to what you’re seeing, where churn is happening, where growth is
    0:30:00 happening, what competitive pressure you’re facing.
    0:30:02 It’s got to be custom every time.
    0:30:03 Really good stuff, guys.
    0:30:07 I want to hit you with one more question, looking into your crystal balls.
    0:30:08 Where is this all going?
    0:30:13 A lot of new technology out there, you’ll be remiss if I didn’t at least make a quick
    0:30:18 mention of generative AI, but there are an array of things out there in addition to the
    0:30:23 innovations there, but where do you see this model going over the next few years?
    0:30:25 Let’s break this question into two pieces.
    0:30:27 Where is the consumption-based model going?
    0:30:32 We’ve been in this consumption-based model for four years, and a lot of the business is
    0:30:37 around how you try to find the intersection of value for customers and value for the
    0:30:42 business, and pricing is a mechanism that you can use for that.
    0:30:45 Consumption-based model is one that I don’t think is going to go away because it is so
    0:30:51 valuable to customers and it aligns everything together, but it’s not the only tool that
    0:30:52 you have.
    0:30:57 It’s not the only pricing tool, and we’ve heard since we started the consumption-based
    0:31:04 model from a lot of larger customers, like big enterprises, we want some more predictability.
    0:31:10 I think actually what you’re going to see is more hybrid pricing models where you have
    0:31:15 consumption-based to allow customers to come in and understand the product, and some customers
    0:31:16 will love that.
    0:31:22 You’ll have, for example, ELAs, Enterprise License Agreements, that set one price for
    0:31:23 all you can use.
    0:31:28 That gives more predictability to other customers, and you’re going to see some of these hybrid
    0:31:31 mixes that’ll come around because you’ve got different types of customers that are looking
    0:31:35 for different solutions and different pricing for them, and you want to be responsive to
    0:31:37 what customers need, and you want to meet them where they are.
    0:31:38 That’s part one of your questions.
    0:31:44 The second one about generative AI, I mean, holy cow, this moves so fast.
    0:31:48 I talked about data science models earlier on predicting where customers’ usage is going
    0:31:53 to go, and I think that generative AI, particularly the predictive part of that, can be quite
    0:31:56 valuable to us, so it can shortcut
    0:31:59 the long time it took us to figure out what was driving customers’ usage and what are
    0:32:03 the key indicators, and how do we know when to intervene with a customer, when not to
    0:32:08 intervene, or what’s the next step to action, and generative AI can help drive that part
    0:32:14 of the business and help give smaller companies advantages that we didn’t have when we
    0:32:15 were a smaller company.
    0:32:19 Those are advantages we had to earn over a long period of time, just working at it and having humans work
    0:32:23 on these models. And then there’s also using generative AI to give sales
    0:32:27 teams insights into when they should reach out to customers, what’s actually happening.
    0:32:31 I talked about this, why are customers doing the things that they’re doing?
    0:32:35 What’s happening? And I think that’s where generative AI can start to parse all this data, all this
    0:32:36 telemetry information.
    0:32:41 We have tons of data on our customers, tons of data on usage, but not everything is valuable.
    0:32:44 There’s gems out there, and you’re searching for those gems all the time, it’s the diamonds
    0:32:48 in the rough, and I think generative AI can help identify what are those gems, and then
    0:32:52 give actions to the sales team so that they can go out and have closer relationships with
    0:32:53 their customer.
    0:32:57 It’s never going to replace human-to-human relationships in business, particularly in the enterprise
    0:32:58 space.
    0:33:03 That is a human-to-human business, and we want to make sure that we maintain tight ties to
    0:33:07 the people that are involved, and so it’s not a replacement, it’s a supplement.
    0:33:10 I personally love this, and I agree with everything Travis said too.
    0:33:14 First of all, I think that consumption-based usage models and pricing models are here to
    0:33:15 stay.
    0:33:19 These are the kinds of solutions that I want to sell and represent as a sales leader, as
    0:33:22 an employee, because I believe it’s the right thing for the customer, and it’s the right
    0:33:27 alignment of incentives for my employer and the company that I represent, too, so I don’t
    0:33:28 think it’s going anywhere.
    0:33:29 I think it makes way too much sense.
    0:33:33 I would bucket where I think this is going in three different categories.
    0:33:34 The first is in the tooling.
    0:33:35 We talked about that.
    0:33:40 I know that the companies that are building software to support the sales and marketing
    0:33:47 stack are very focused on modules and advanced tools for usage-based pricing specifically.
    0:33:49 We talked about all the challenges in that category.
    0:33:53 I’m looking forward to seeing advancements in the tooling that helps us with the telemetry,
    0:33:58 that helps us with the triggers for outreach, that helps us with the measurement, the forecasting,
    0:33:59 all of that.
    0:34:02 I think there’s plenty of room for advancement there, and AI definitely plays a role.
    0:34:06 The second I would say that we’re going to continue to need to push hard on the seller
    0:34:12 skill set, and I include account managers, account executives, customer success representatives,
    0:34:15 sales engineers, all of that in the sales skill set.
    0:34:19 We’ve joked a couple of times about if only you would have just asked that one customer,
    0:34:23 “Hey, what’s behind this massive surge in consumption that you just had?”
    0:34:25 Well, we’re talking about the proliferation of this model.
    0:34:31 We need to recognize that you are not the only person asking that customer what’s behind
    0:34:33 that blip.
    0:34:36 There is risk of fatiguing these customers.
    0:34:41 You’re asking them to explain themselves and their business drivers to everybody all the
    0:34:42 time.
    0:34:46 That means that we need an improvement and evolution in the skill set, and in how you’re
    0:34:50 making sure that the folks on your team that are engaging with your customers are doing
    0:34:53 so in a way that is continuously adding value.
    0:34:58 That includes showing up with helpful tips, showing up with insights about their usage
    0:35:00 that they might not have known on their own.
    0:35:03 That actually involves, I alluded to this earlier, but it involves also telling them
    0:35:08 proactively how to spend less on your company by implementing some best practices that will
    0:35:10 reduce their consumption.
    0:35:13 There’s a myriad of ways that you can make sure that you’re continuously adding value
    0:35:16 while maintaining a very high touch engagement with those customers.
    0:35:20 We need to continue to progress that playbook as an industry, make sure that we’re doing
    0:35:23 right by ourselves and our customers in the process.
    0:35:24 That’s the second category.
    0:35:30 Then the third where I hope that all of this culminates is in the product roadmaps.
    0:35:35 If we’ve done these other categories well, if customers are driving consumption towards
    0:35:41 you because you’re the highest value solution for them at that exact moment in time, you
    0:35:45 know that you can continue to earn that right and earn that business by delivering more and
    0:35:49 more value through great product roadmap, delivering more value in your products, including
    0:35:52 more value, competing hard against your competition.
    0:35:58 I hope that all this results in fierce product competition so that the best product is always
    0:35:59 the one that’s winning.
    0:36:02 The one that is offering the most value to customers is the one that they should be going
    0:36:04 with at all times.
    0:36:07 For me, I hear three things occurring over and over.
    0:36:14 It is the tight interlock between customer value realization and what they’re actually
    0:36:16 paying for the product.
    0:36:20 If you even think into the future, what you guys just described a bit, it’s really just
    0:36:22 making that even tighter and more predictable.
    0:36:24 It’s not changing the algorithm.
    0:36:30 You’re still trying to just get to that same exchange and trying to optimize that.
    0:36:35 The other thing I’m hearing a lot around is just the significant investment and dedication
    0:36:40 you have to have around the mastery of the data and the tooling on top of that to just
    0:36:46 be able to take all of this data, remove the clutter, see the signal, and be able to use
    0:36:51 it as much as possible to predict the future of the way that your product is being used.
    0:36:52 That just carries forward.
    0:36:57 AI will be great, but it’s just yet another way to further tweak that.
    0:37:01 I particularly like what Travis was saying around the empowerment for the smaller companies
    0:37:05 too, who have a lot of challenges with this and don’t have as much of the ability to invest
    0:37:06 in infrastructure right away.
    0:37:11 This is empowering for them if they are able to get AI into the fight early on.
    0:37:14 Then finally, it’s kind of the side of the people.
    0:37:19 This is a big thing that you guys kept coming back to as well as just giving your customer
    0:37:24 facing folks the tools and the expertise and enablement to be really good at this and to
    0:37:28 be just great partners with the customers and to try to understand the way they’re going
    0:37:30 to consume value of the product.
    0:37:33 I appreciate you guys giving us your valuable time to go through this.
    0:37:35 I can talk all day about this.
    0:37:40 You guys have so much insight, so my sincere thanks for dedicating the time that you gave
    0:37:41 us here today.
    0:37:42 Thank you.
    0:37:43 Awesome.
    0:37:44 Thanks for having us, Mark.
    0:37:45 Thanks.
    0:37:52 If you liked this episode, if you made it this far, help us grow the show, share with
    0:38:00 a friend, or if you’re feeling really ambitious, you can leave us a review at www.ratethispodcast.com/a16z.
    0:38:05 You know, candidly, producing a podcast can sometimes feel like you’re just talking into
    0:38:09 a void, and so if you did like this episode, if you liked any of our episodes, please let
    0:38:10 us know.
    0:38:12 I’ll see you next time.
    0:38:14 (upbeat music)

    Over the past decade, usage-based pricing has soared in popularity. Why? Because it aligns cost with value, letting customers pay only for what they use. But, that flexibility is not without issues – especially when it comes to predicting revenue. Fortunately, with the right process and infrastructure, your usage-based revenue can become more predictable than the traditional seat-based SaaS model. 

    In this episode from the a16z Growth team, Fivetran’s VP of Strategy and Operations Travis Ferber and Alchemy’s Head of Sales Dan Burrill join a16z Growth’s Revenue Operations Partner Mark Regan. Together, they discuss the art of generating reliable usage-based revenue. They share tips for avoiding common pitfalls when implementing this pricing model – including how to nail sales forecasting, adopt the best tools to track usage, and deal with the initial lack of customer data. 

    Resources: 

    Learn more about pricing, packaging, and monetization strategies: a16z.com/pricing-packaging

    Find Dan on Twitter: https://twitter.com/BurrillDaniel

    Find Travis on LinkedIn: https://www.linkedin.com/in/travisferber

    Find Mark on LinkedIn: https://www.linkedin.com/in/mregan178

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

  • California’s Senate Bill 1047: What You Need to Know

    AI transcript
    0:00:07 The cost to reach any given benchmark of reasoning or capability is dropping by about 50 times every five years.
    0:00:17 The definitions for what was dangerous in the Cold War became obsolete so fast that a couple of decades later when the Macintosh launched, it was technically a munition.
    0:00:26 Great technologies always find their way into downstream uses that the original developers would have had no way of knowing about prior to launch.
    0:00:35 No rational startup founder or academic researcher is going to risk jail time or financial ruin just to advance the state of the art in AI.
    0:00:39 There’s no chance we’d be here without open source.
    0:00:49 The state of California ranks as the fifth largest economy in the world, and on a per capita basis, the Golden State jumps all the way up to number two.
    0:01:00 Now, one of the drivers of those impressive numbers is, of course, technology, with California being the home of all but one of the FAANG companies and a long, long tail of startups.
    0:01:09 But something happened recently that has the potential to dislocate the state’s technical dominance and set a much more critical precedent for the nation.
    0:01:14 On May 21st, the California Senate passed Bill 1047.
    0:01:31 This bill, which sets out to regulate AI at the model level, wasn’t garnering much attention until it slid through an overwhelming bipartisan vote of 32 to 1 and is now queued for an assembly vote in August, which, if passed, would cement it into law.
    0:01:34 So here is what you need to know about this bill.
    0:01:40 Senate Bill 1047 is designed to apply to models trained above certain compute and cost thresholds.
    0:01:52 The bill also makes developers both civilly and even criminally liable for the downstream use or modification of their models by requiring them to certify that their models won’t enable quote “hazardous capability.”
    0:01:57 The bill even expands the definition of perjury and could result in jail time.
    0:02:05 Third, the bill would result in a new frontier model division, a new regulatory agency funded by the fees and fines on AI developers.
    0:02:10 And this very agency would set safety standards and advise on AI laws.
    0:02:13 Now, if all of this sounds new to you, you’re not alone.
    0:02:20 But today you have the opportunity to hear from a16z General Partner Anjney Midha and venture editor Derek Harris.
    0:02:30 Together, they break down everything the tech community needs to know right now, including the compute threshold of 10 to the power of 26 flops being targeted by this bill.
    0:02:39 Whether a static threshold can realistically even hold up to exponential trends in algorithmic efficiency and compute costs, historical precedents that we can look to for comparison,
    0:02:47 the implications of this bill on open source, and the startup ecosystem at large, and most importantly, what you can do about it.
    0:02:55 Now, this bill really is the tip of the iceberg with over 600 new pieces of AI legislation swirling in the United States today.
    0:03:08 So if you care about one of the most important technologies of our generation and America’s ability to continue leading the charge here, we encourage you to read the bill and spread the word.
    0:03:15 As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax or investment advice,
    0:03:21 or be used to evaluate any investment or security and is not directed at any investors or potential investors in any a16z fund.
    0:03:27 Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast.
    0:03:39 For more details, including a link to our investments, please see a16z.com/disclosures.
    0:03:49 Before we dive into the substance of the California Senate Bill 1047, can you start with giving your high level reaction to the bill and maybe give listeners a sense of why it’s such a big deal right now?
    0:03:52 No shock, disbelief.
    0:04:05 It’s hard to overstate just how blindsided startups, founders, the investor community that have been heads down building models, building useful AI products for customers, paying attention to what the state of the technology is,
    0:04:07 and ultimately just innovating at the frontier.
    0:04:12 These folks, the community broadly feels completely blindsided by this bill.
    0:04:22 When it comes to policymaking, especially in technology at the frontier, the spirit of policymaking should be to sit down with your constituents, startups, founders at the frontier, builders,
    0:04:24 and then go solicit their opinion.
    0:04:37 And what is so concerning about it right now is that this bill, SB 1047, was passed in the California Senate with a 32 to 1 overwhelming vote, bipartisan support.
    0:04:44 And now it’s headed to an assembly vote in August, less than 90 days away, which would turn it into law.
    0:04:49 And so if it passes in California, it will set the precedent in other states.
    0:05:02 It will set a nationwide precedent and ultimately that’ll have rippling consequences outside of the US to other allies and other countries that look to America for guidance and for thought leadership.
    0:05:09 And so what is happening here is this butterfly effect with huge consequences on the state of innovation.
    0:05:20 There’s a lot to get into with the proposed law and some of its shortcomings or oversights, but the place I want to start is both SB 1047 and President Biden’s executive order from last year,
    0:05:36 established mandatory reporting requirements for models that are trained, and this is a little difficult to speak, bear with me listeners, that are trained using 10 to the 26 floating point operations, or FLOPs, of compute power.
    0:05:41 So can you explain to listeners what FLOPs are and why they’re significant in this context?
    0:05:47 Right. So FLOPs in this context refers to the number of floating point operations used to train an AI model.
    0:05:55 And floating point operations are just a type of mathematical operation that computers perform on real numbers as opposed to just integers.
    0:06:02 And the amount of FLOPs used is a rough measure of the computing resources and complexity that went into training a model.
    0:06:08 And so if models are like cars, FLOPs might be the amount of steel used to make a car, to borrow an analogy.
    0:06:12 It doesn’t really tell you much about what the car can and cannot do directly.
    0:06:19 But it’s just one way to kind of measure the difference between the steel required to make a sedan versus a truck.
    0:06:26 And this 10 to the 26 FLOP threshold is significant because that’s how the bill is trying to define what a covered model is.
    0:06:33 It’s an attempt to define the scale at which AI models become potentially dangerous or in need of additional oversight.
    0:06:46 And this all starts from the premise that foundation models trained with this immense amount of computation are extremely large and capable to the point where they could pose social risks or harm inherently if not developed carefully.
    0:06:57 But tying regulations to some fixed FLOP count or equivalent today is completely flawed because algorithmic efficiency improves, computing costs decline.
    0:07:09 And so models that take far fewer resources than 10 to the 26 FLOPs will match the capabilities of a 10 to the 26 FLOP model of today within a fairly short time frame.
    0:07:16 So this threshold would quickly expand to cover many more models than just the largest, most cutting edge ones being developed by tech giants.
    0:07:21 It will basically cover most startups in open source too within a really short amount of time.
    0:07:35 And so while today in 2024, realistically, only a handful of the very largest language models like GPT-4 or Gemini and other top models from big tech companies are likely to sit above that 10 to the 26 FLOP threshold.
    0:07:43 In reality, most open source and academic models will soon be covered by that definition as well.
    0:07:50 This would really hurt startups, it would burden small developers and ironically it’s going to reduce the transparency and collaboration around AI safety.
    0:07:53 By discouraging open source development.
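    [Editor's sketch, not part of the conversation: to make the 10^26 threshold concrete, a widely used rule-of-thumb approximation for training compute is FLOPs ≈ 6 × parameters × training tokens. The 70B-parameter / 15T-token model below is a hypothetical example, not one named in the episode.]

    ```python
    # Rough back-of-the-envelope check against SB 1047's covered-model
    # threshold, using the common FLOPs ~ 6 * params * tokens approximation
    # for dense transformer training (an assumption, not from the episode).
    THRESHOLD_FLOPS = 1e26  # the bill's 10^26 FLOP line

    def training_flops(params: float, tokens: float) -> float:
        """Approximate total training FLOPs for a dense model."""
        return 6 * params * tokens

    # Hypothetical: a 70B-parameter model trained on 15T tokens
    flops = training_flops(70e9, 15e12)
    print(f"{flops:.1e} FLOPs, covered today: {flops > THRESHOLD_FLOPS}")
    ```

    Under that approximation such a model sits around 6.3 × 10^24 FLOPs, roughly an order of magnitude under the threshold today, which is exactly why the falling-cost trends discussed here matter: the gap closes quickly.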
    0:08:04 What we see frequently is people in labs going out there and saying we’re going to build big state-of-the-art models that cost less to train, that use fewer resources, that use more data or different types of data.
    0:08:08 There are all these different knobs to pull to get performance out of these models.
    0:08:14 Seems like you could have this sort of performance for a fraction of the cost in a small number of years.
    0:08:22 Right, so that all comes down to two key trends. One, the falling cost of compute and two, the rapid progress in algorithmic efficiency.
    0:08:28 Empirically, the cost per FLOP for GPUs is halving roughly every two to two and a half years.
    0:08:39 And so this means that a model that costs about $100 million to train today would only cost about $25 million in about five years and less than $6 million in a decade.
    0:08:44 Just based on hardware trends alone, just Moore’s Law. But that’s not even the whole story, right?
    0:08:52 Algorithmic progress is also making it dramatically easier to achieve the same benchmark performance with way less compute rapidly.
    0:09:03 And so when you look at those trends, we observe that the compute required to reach a given benchmark of reasoning or capability is decreasing by half about every 14 months or less.
    0:09:17 So if it takes $100 million worth of FLOPs to reach some given benchmark today, in five years, it would only take around $6 million worth of FLOPs to achieve that same result, just considering the algorithmic progress alone.
    0:09:28 Now, when you put these two trends together, it paints a pretty stunning picture because the cost to reach any given benchmark of reasoning of capability is dropping by about 50 times every five years.
    0:09:38 And so that means that if a model costs $100 million to train to some benchmark in 2024, by 2029, it will probably cost less than $2 million.
    0:09:40 That’s well within a startup budget.
    0:09:51 And by 2034, a decade out, that cost will drop to somewhere between $40,000 and $50,000, putting it within the reach of literally millions of people.
    0:09:58 And despite these clear trends, the advocates for the bill seem to be overlooking or underestimating this rapid progress.
    0:10:05 Some folks are suggesting that, oh, these smaller companies might take 30 years or more to reach this 10 to the 26 FLOP threshold.
    0:10:09 But as we’ve just discussed, that’s a pretty serious overestimation.
    0:10:19 So even assuming a model costs $1 billion to train to that level today, it’s going to cost as little as $400,000 in just a decade.
    0:10:25 And it is easily within the range for most small businesses who are going to then have to grapple with compliance and regulation and so on.
    0:10:37 And so look, the bottom line is that given the breakneck pace of progress and compute costs and efficiency, we can expect smaller companies and academic institutions to start hitting these benchmarks in the very near future.
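    [Editor's sketch of the arithmetic the speaker walks through, taking the episode's own combined figure of roughly a 50x cost drop every five years as given:]

    ```python
    # Project the cost of reaching a fixed capability benchmark, assuming
    # the episode's combined hardware-plus-algorithms trend of ~50x
    # cost reduction every five years.
    DROP_PER_5_YEARS = 50

    def projected_cost(cost_today: float, years: float) -> float:
        """Cost of reaching the same benchmark `years` from now."""
        return cost_today / (DROP_PER_5_YEARS ** (years / 5))

    frontier_run = 100_000_000  # a $100M training run in 2024
    for years in (5, 10):
        print(f"+{years}y: ${projected_cost(frontier_run, years):,.0f}")
    # +5y: $2,000,000
    # +10y: $40,000
    ```

    This reproduces the figures quoted above: about $2 million by 2029 and on the order of $40,000 to $50,000 by 2034, and the same formula gives the $1 billion-to-$400,000 decade example.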
    0:10:47 Yeah, I think it’s a relevant touch point to remind people that a smartphone today, like an iPhone 15 has more FLOPs, more performance than a supercomputer did about 20 years ago.
    0:10:52 Like the world’s fastest supercomputers, your iPhone can do more FLOPs than that.
    0:11:00 The Apple Macintosh G4, I think back in 1999 had enough computing power that it would have been regulated as a national security threat.
    0:11:03 So these numbers, these are very much sliding scales to your point.
    0:11:13 That’s right. That’s right. That’s a great historical example. I think there was this 1979 Export Administration Act that the US had written in the Cold War era in the 70s.
    0:11:23 And the definitions for what was dangerous in the Cold War became obsolete so fast that a couple of decades later when the Macintosh launched, it was technically a munition.
    0:11:34 So we’ve been here before and we know that when policymakers and regulators try to capture the state of a current technology that’s dramatically improving really fast, they become obsolete incredibly fast.
    0:11:35 And that’s exactly what’s happening here.
    0:11:48 The other thing is, at the time we’re recording this, there are some proposed amendments floating around to SB 1047, one of which would limit the scope of the bill to applying, again, only to models trained at that compute capacity.
    0:11:54 And additionally, that also cost more than $100 million to train.
    0:12:03 So what’s your thought on that? And again, if we attach a dollar amount to this, doesn’t it make the compute threshold kind of obsolete?
    0:12:13 Yeah, so this $100 million training-cost amendment might seem like a reasonable compromise at first, but when you really look at it, it has the same fundamental flaws as the original FLOP threshold.
    0:12:22 The core issue is that both approaches are trying to regulate the model layer itself, rather than focusing on the malicious applications or misuses of the models.
    0:12:30 Generative AI is still super early, and we don’t even have clear definitions for what should be included when calculating these training costs.
    0:12:38 Do you include the data set acquisition, the researcher salaries? Should we include the cost of previous training runs or just the final ones?
    0:12:42 Should human feedback for model alignment expenses count?
    0:12:46 If you fine-tune someone else’s model, should the cost of the base model be included?
    0:12:59 These are all open questions without clear answers and forcing startups, founders, academics to provide legislative definitions for these various cost components at this stage would place a massive burden on these smaller teams.
    0:13:04 Many of whom just don’t have the resources to navigate these super complex regulatory requirements.
    0:13:13 Plus, when you just look at the rapid pace of model engineering, these definitions would need to be updated constantly, which would be a major drain on innovation.
    0:13:26 So when you combine that ambiguity with the criminal and monetary liabilities proposed in the bill, as well as the broad authority they’re trying to give to the new frontier model division, which is sort of like a DMV for AI models that they’re proposing,
    0:13:31 which can arbitrarily decide these matters, the outcome is clear, right?
    0:13:41 Most startups will simply have to relocate to more AI friendly states or countries while open source AI research in the US will be completely crushed due to the legal risks involved.
    0:13:47 So in essence, the bill is creating this disastrous regressive tax on AI innovation.
    0:13:54 Large tech companies that have armies of lawyers and lobbyists will be able to shape the definitions to their advantage.
    0:13:59 While smaller companies, open source researchers and academics will be completely left out in the cold.
    0:14:10 It’s almost like saying we’ve just invented the printing press and now we’re only going to let those folks who can afford $100 million budgets make these printing presses and decide what can and cannot be printed.
    0:14:16 It’s just blatant regulatory capture and it’s one of the most anti competitive proposals I’ve seen in a long time.
    0:14:25 And what we should be focusing on instead is regulating specific high risk applications and malicious end users.
    0:14:29 That’s the key to ensuring that AI benefits everyone, not just a few.
    0:14:40 Now, you mentioned that the purported goal of some of these bills 1047 in particular is to prevent against what you might call catastrophic harms or existential risks from artificial intelligence.
    0:14:50 But I’m curious, do you think, I mean, are the biggest threats from LLMs really weapons of mass destruction or bio weapons or autonomously carrying out criminal behavior?
    0:15:04 I mean, if we’re going to regulate these models, I mean, should we not regulate use cases that are like proven in the wild and can actually do real damage today versus hypothetically at some point, these things could happen?
    0:15:22 Absolutely. I mean, basically what we have is a complete over-rotation of the legislative community around entirely non-existent concerns of what is being labeled as AI safety, when what we should be focusing on is AI security.
    0:15:32 These models are no different than databases or tools in the past that have given humans more efficiency, better ways to express themselves.
    0:15:34 They’re really just neutral pieces of technology.
    0:15:44 Now, sure, they may be increasing or allowing bad actors to increase the speed and scale of the attacks, but the fundamental attack vectors remain the same.
    0:16:02 It’s spear phishing, deep fakes, it’s misinformation, and these attack vectors are known to us and we should focus on how to strengthen enforcement and give our country better tools to enforce those laws in the wake of increasing speed and scale of these attacks.
    0:16:10 But the attacks themselves, the attack vectors haven’t changed. It’s not like AI suddenly has exposed us to tons of new ways to be attacked.
    0:16:15 And that’s just so far off and frankly unclear and today largely in the realm of science fiction.
    0:16:29 And so the safety debate often centers around what is called existential risk or these models autonomously going rogue to produce weapons of mass destruction or the Terminator Skynet situation where they’re hiding their true intentions from us.
    0:16:38 And sure, maybe there’s some theoretically tiny likelihood that that happens many, many, many years from now, but exceptional claims require exceptional evidence.
    0:16:50 And so the real threat here is from us not focusing on the misuses and malicious users of these models and putting the burden of actually doing that on startups, on founders and engineers.
    0:16:51 Right.
    0:17:00 And to your point, even if a model made it marginally easier to learn, let’s say how to build a bio weapon, like one, people know how to do that today.
    0:17:11 We have all of these things. We have labs dedicated to all of these things. You still need to get materials to carry out these attacks and there are regulations around acquiring those materials and databases around who’s buying what.
    0:17:17 Yes, it does seem like the existing legal framework for some of these major threats is very robust.
    0:17:18 Exactly.
    0:17:24 What we really need is more investment in defensive artificial intelligence solutions, right?
    0:17:41 What we need is to arm our country, our defense departments, our enforcement agencies with the tools they need to keep up with the speed and scale at which these attacks are being perpetuated, not slowing down the fundamental innovation that can actually unlock those defensive applications.
    0:17:51 And look, the reality is America and her allies are up against a pretty stiff battle from adversarial countries around the world who aren’t stopping their speed of innovation.
    0:17:56 And so it’s almost an asymmetric warfare against ourselves that’s being proposed by SB 1047.
    0:18:00 Yeah, I’m certain there are governments that would in fact fund those hundred million dollar models.
    0:18:01 Well north of that, right?
    0:18:02 Yeah.
    0:18:10 And we have increasing evidence that this is happening and that our national security actually depends on improving and accelerating open source collaboration.
    0:18:25 And just two months ago, the Department of Justice revealed and published a public investigation, the conclusion of which was that a Google engineer was boarding a plane to China with a thumb drive with frontier AI hardware schematics from Google.
    0:18:30 This was a nation state sponsored attack on our ecosystem.
    0:18:39 And the only defense we have against that is actually making sure that innovation continues at breakneck speed in the country, not adding more burden to model innovation.
    0:18:52 The other thing that SB 1047 would do, which we haven’t really touched on is impose liability, civil and in some cases criminal liability on model developers for the civil liability part.
    0:19:03 If they build a model that’s covered by this bill, I need to be able to prove with beyond reasonable assurance or whatever the language is that this could not possibly be used for any of these types of attacks.
    0:19:11 And also they have to be able to prove that no one else could come along and say fine tune their model and use it for some sort of attack, right?
    0:19:22 So that’s a whole new level to be on the hook for money as an individual or jail time as an individual for building this model and not making it quote unquote safe enough.
    0:19:24 Oh no, you’re absolutely right.
    0:19:34 The idea of imposing civil and criminal liability on model developers when downstream users do something bad is so misguided and such a dangerous precedent.
    0:19:42 First off, the bill requires developers to prove that their models can’t possibly be used for any of the defined hazardous capabilities.
    0:19:47 But as we just discussed, these definitions are way too vague, ambiguous and subject to interpretation.
    0:19:53 How can a developer prove a negative, especially when the goalposts keep moving?
    0:19:55 It’s an impossible standard to meet.
    0:20:04 Second, the bill holds developers responsible for any misuse of their models, even if that misuse comes from someone else who’s fine tuned or modified the model.
    0:20:05 It’s ridiculous.
    0:20:11 It’s like holding car manufacturers liable for every accident caused by a driver who’s modified their car.
    0:20:15 So it's an absurd standard that no other industry is held to.
    0:20:21 The practical effect of these liability provisions will be to drive AI development underground or offshore.
    0:20:29 No rational startup founder or academic researcher is going to risk jail time or financial ruin just to advance the state of the art in AI.
    0:20:35 They’ll simply move their operations to a jurisdiction with a more sensible regulatory environment and the US will lose out.
    0:20:36 Period.
    0:20:40 The worst part, these liability provisions actually make us less safe, not more.
    0:20:49 By driving AI development into the shadows, you lose the transparency and open collaboration that's essential for identifying and battle-hardening vulnerabilities in AI models.
    0:20:53 What we need is more open source development, not less.
    0:21:03 So while the bill sponsors may have good intentions, imposing blanket liability on model developers for hypothetical future misuse is the exact opposite of what we need.
    0:21:04 Right.
    0:21:10 Supporters might argue, well, we need to put someone behind bars for lying to the government about the capabilities of their models.
    0:21:14 But again, like you might not know the capabilities of your models, right?
    0:21:17 Or what a downstream user could do with that.
    0:21:20 I wanted to ask you too, because you’ve built startups, you invest in startups.
    0:21:31 I mean, can you walk through like the kind of wrench this type of compliance would throw into whether it’s the finances or the operation or just the general way that startups and innovative companies work?
    0:21:32 Oh, yeah.
    0:21:36 Look, I love California and that’s why I’m fighting so hard for this.
    0:21:39 I did my undergraduate and graduate work here in the Bay.
    0:21:41 I founded my first company here.
    0:21:44 I sold that to another California company.
    0:22:03 And over the last decade plus that I’ve been here, it’s only become more and more clear to me that a huge part of what makes the entire startup ecosystem even work is the ability for founders to take bold technology risks without having to worry about the kinds of ambiguity and liability risks that this bill is proposing.
    0:22:13 When we first started Ubiquity6, my last company, the goal was to empower developers to use our computer vision pipeline for all kinds of new use cases that we hadn't even imagined.
    0:22:19 We had some idea of what people would do originally augmented reality applications.
    0:22:31 But after we’d launched it, we found millions of users who used our 3D mapping technology for entirely new kinds of uses from architecture and robotics to VFX and entertainment that we hadn’t even considered.
    0:22:45 And so the whole engine and the beauty of platform businesses is that developers can focus on developing general and highly flexible technology and then just let the market figure out entirely new niche use cases at scale.
    0:22:57 And this is true of almost every great AI business I've either worked with directly or invested in, right, whether it was Midjourney in image generation, Anthropic in language models, or ElevenLabs in audio models.
    0:23:14 Great technologies always find their way into downstream uses that the original developers would have had no way of knowing about prior to launch.
    0:23:31 To burden that process with the liability of this bill, of saying that developers have to somehow, prior to launch, demonstrate beyond any shred of reasonable doubt, which again is a completely ambiguous standard in the bill, that these uses were known about, that their risks were understood, and that exhaustive safety testing had been done to make sure none of these things would be possible...
    0:23:33 That would just completely kill that engine.
    0:23:44 If we went back in time and this bill had passed as currently envisioned, as much as I hate to say it, there's no chance I would have founded my company in California.
    0:23:53 Speaking of startups, that’s to say nothing about open source projects and open source development, which have been like a huge driver of innovation over the past couple of decades.
    0:24:00 We’re talking about very, very bootstrapped, skeletal budgets and some of these things, but hugely, hugely important.
    0:24:07 Oh, fundamentally, I don’t think the current wave of modern generative scaling laws based AI would even exist without open source, right?
    0:24:18 If you just go back and look at how we got here, transformers, kind of the atomic unit of how these models learn, were an open source, widely collaborated-on development, right?
    0:24:26 In fact, it was produced at one lab, Google, and open publishing and collaboration allowed another lab, OpenAI, to actually continue that work.
    0:24:29 And there’s no chance we’d be here without open source.
    0:24:32 The downstream contributions of open source continue to be massive today.
    0:24:43 When a company like Mistral or Facebook open sources models and releases their weights, that allows other startups to then pick up on their investments and build on top of them.
    0:24:48 It's like having the Linux to the closed-source Windows operating system.
    0:24:57 It's like having the Android to the closed-source iOS, and without those, there's no chance that the speed at which the AI revolution is moving will continue.
    0:24:59 Certainly not in California, and probably not in the United States.
    0:25:02 Open source is kind of the heart of software innovation.
    0:25:10 And this bill slows it down, has a chilling effect on open source by putting liability on the researchers and the builders pushing open source forward.
    0:25:11 Yes.
    0:25:17 And the other thing about open source is, I guess this is true of any model theoretically, but the idea if someone takes it and builds on it, right?
    0:25:21 In AI, in generative AI or foundation models, you would call that fine tuning, right?
    0:25:24 Where you retrain a model to your own purposes using your own data.
    0:25:29 And again, this bill would, as written, impose liabilities again on the original developers.
    0:25:35 If someone is able to fine tune their model to perform theoretically some sort of bad act, right?
    0:25:46 I mean, how realistic is it for someone to even build a model that would be resistant or resilient against these types of fine tuning attacks or optimizations for lack of a better term?
    0:25:47 Yes.
    0:25:49 So this is another can of worms as well.
    0:25:56 Again, a symptom of the root cause of this bill’s flawed premise of regulating models instead of misuses.
    0:26:09 So in the current bill draft, the language says that these restrictions and regulations will extend to a concept of a derivative model, which is a model that is a modified version of another model, such as a fine tuned model.
    0:26:15 So if someone makes a derivative model of my base model that’s harmful, I am now liable for it.
    0:26:25 It’s akin to saying that if I’m a car manufacturer and someone turns a car I made into a tank by putting guns on it and shoots people with it, I should get thrown in jail.
    0:26:29 The definition of what a derivative model is also super vague.
    0:26:35 And so now the bill sponsors are considering an amendment that says, oh, let’s add a compute cap to this definition.
    0:26:39 And they’ve decided to pick 25%, which is quite arbitrary.
    0:26:51 And to say if somebody uses more than 25% of the compute that the base model developer used to fine tune a model, then it’s no longer a derivative model and you’re off the hook for it as the base model developer.
    0:26:54 Well, that’s absolutely nonsensical as well.
    0:27:11 As some great researchers like Ion Stoica at Berkeley have shown, it takes an extremely small amount of compute to fine-tune a model, like Vicuna, where with just 70,000 ShareGPT conversations, they fine-tuned LLaMA to become one of the best open source models at the time,
    0:27:18 showing it really doesn't take much compute or data to turn a car into a tank, to borrow an analogy.
    0:27:25 And so like with the 10 to the 26 compute threshold issue we discussed earlier, this is just another arbitrary magic number.
    0:27:36 The bill authors are pulling these out of thin air to try and define model-layer computing dynamics that are so early and changing that it's absolute over-regulation and will kill the speed of innovation here.
    0:27:45 All right, so you’ve alluded to this, but I wanted to ask directly, if we say not all regulation is bad, if you were in charge of regulating AI, how would you approach it?
    0:27:52 Or how would you advise lawmakers who feel compelled to address what seemed like concerns over AI, what would be your approach?
    0:27:59 Non-negotiable really here should be zero liability at the model layer, right?
    0:28:08 What you want to do is target misuses and malicious users of AI models, not the underlying models and not the infrastructure.
    0:28:21 And that’s the core battle here. I think that’s the fundamental flaw of this bill is it’s trying to regulate the model and infrastructure and not instead focus on the misuses and malicious users of these models.
    0:28:40 And so over time, I think it would prove out that the right way to keep the US at the frontier of responsible, secure AI innovation is to actually focus on the malicious users and misuses of models, not slow down the model and infrastructure layer.
    0:28:50 We should focus on concrete AI security and strengthening our enforcement and our defenses against AI security attacks that are increasing at speed and scale.
    0:28:58 But fundamentally, these safety concerns that are largely science fiction and theoretical are a complete distraction at the moment.
    0:29:03 And lastly, we have no choice but to absolutely accelerate open source innovation.
    0:29:11 We should be investing in open source collaboration between America and our allies to keep our national competitiveness from falling behind our adversarial countries.
    0:29:23 And so the three big policy principles I would look for from regulators would be to regulate and focus and target misuses, not models, to prioritize AI security over safety and to accelerate open source.
    0:29:34 But the current legislation is absolutely prioritizing the wrong things and is rooted in a bunch of arbitrary technical definitions that will be outmoded, obsolete and overreaching fairly soon.
    0:29:40 One might say we should regulate the same way we regulate the internet, which is to say, let it thrive.
    0:29:46 It really is tantamount to saying we’ve barely just invented the printing press or we’ve barely just invented the Model T Ford car.
    0:30:04 And now what we should immediately do is try to rush and prevent future improvements to cars or to the printing press by largely putting the responsibility for any accidents that happen from people irresponsibly driving the car out on the streets on Henry Ford or on the inventors of the printing press.
    0:30:11 So then the final question here, taking everything into account, what can everyday listeners do about this, right?
    0:30:22 I mean, if I’m a founder, if I’m an engineer, if I’m just concerned, what can I do to voice my opinion about SB 1047 about frankly any regulation coming down the line?
    0:30:24 How should people think about making their voice heard?
    0:30:26 Yeah, so I think three steps here.
    0:30:28 The first would be to just read the bill.
    0:30:30 It’s not very long, which is good.
    0:30:33 But most people just haven’t had a chance to actually read it.
    0:30:46 Step two, especially for people in California, the most effective way to have this bill be opposed is for each listener to call their assembly rep and tell them why they should vote no on this bill in August, right?
    0:30:47 This is less than 90 days away.
    0:30:57 So we really don’t have much time for all of the assembly members to hear just how little support this bill has from the startup community, tech founders, academics.
    0:30:59 And step three is to go online.
    0:31:10 You know, make your voice heard on places like Twitter, where it turns out, you know, a lot of both state level and national level legislators do listen to people’s opinions.
    0:31:17 And so, look, I think if this bill passes in California, it sure as hell is going to create a ripple effect throughout other states.
    0:31:19 And then this will be a national battle.
    0:31:37 If you liked this episode, if you made it this far, help us grow the show, share with a friend, or if you're feeling really ambitious, you can leave us a review at ratethispodcast.com/a16z.
    0:31:42 You know, candidly producing a podcast can sometimes feel like you’re just talking into a void.
    0:31:47 And so if you did like this episode, if you liked any of our episodes, please let us know.
    0:31:49 I’ll see you next time.
    0:31:51 (upbeat music)

    On May 21, the California Senate passed bill 1047.

    This bill – which sets out to regulate AI at the model level – wasn't garnering much attention, until it slid through an overwhelming bipartisan vote of 32 to 1 and is now queued for an assembly vote in August that would cement it into law. In this episode, a16z General Partner Anjney Midha and Venture Editor Derrick Harris break down everything the tech community needs to know about SB-1047.

    This bill really is the tip of the iceberg, with over 600 new pieces of AI legislation swirling in the United States. So if you care about one of the most important technologies of our generation and America’s ability to continue leading the charge here, we encourage you to read the bill and spread the word.

    Read the bill: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

  • The GenAI 100: The Apps that Stick

    AI transcript
    0:00:04 We have almost redefined retention for consumer.
    0:00:11 We’ve been seeing a lot of companies actually get up to tens of millions of dollars of annualized revenue in a very quick manner.
    0:00:17 Many of these products are getting floods of users in traffic like we’ve never seen before.
    0:00:29 The willingness to try and willingness to pay has been so high for these products that the velocity to get from nothing to maybe tens of millions of revenue has never been higher.
    0:00:36 Consumer AI has been characterized so far by categories where randomness and hallucinations are a feature.
    0:00:42 Human connection is important, but maybe it’s not the human part that you just need to feel connected.
    0:00:49 We have done a ton of recent coverage around consumer AI because, quite frankly, the field is moving so quickly.
    0:00:54 Every day can feel like the entire industry is shape-shifting, so who’s really winning here?
    0:01:06 Today we bring in A16Z consumer partners Bryan Kim and Olivia Moore to discuss our GenAI 100 list and what it really takes to stay at the top and withstand the AI tourist phenomenon.
    0:01:09 So what categories are capturing the attention of consumers?
    0:01:12 Are broad or niche models pulling ahead?
    0:01:14 Where are these apps actually getting their distribution?
    0:01:16 Does paid acquisition make sense?
    0:01:19 And do network effects exist like they did in prior cycles?
    0:01:22 These are all things to think about, but here’s the thing.
    0:01:30 We’re finally at the point in the cycle where we’re starting to get that data, not just in rankings, but other key consumer benchmarks like D7 retention.
    0:01:35 And perhaps we’re also unveiling new metrics for this new wave.
    0:05:41 We'll cover all that and more, but I did want to note that this episode was recorded before OpenAI's Spring Update,
    0:01:46 so if you’re eager to catch up on that, make sure to check out our episode from last week.
    0:01:47 Alright, let’s get started.
    0:01:56 As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax or investment advice,
    0:02:03 or be used to evaluate any investment or security and is not directed at any investors or potential investors in any A16Z fund.
    0:02:09 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
    0:02:15 For more details, including a link to our investments, please see a16z.com/disclosures.
    0:02:25 Both of you have a lot of experience in the consumer sphere, not just during this AI wave but before,
    0:02:28 but you pulled this thing called the GenAI 100 list.
    0:02:30 What is this list and how is it pulled?
    0:02:34 Very good question, because there’s a lot of ways to pull and look at this data.
    0:02:42 I think the central question that our team had was, there’s so much buzz around AI, there’s so many products that are coming out every day, every hour even.
    0:02:46 What are the things that normal, everyday people are using?
    0:02:48 So what are those applications?
    0:02:55 There's some that everyone knows, like ChatGPT and Midjourney, but we were curious if we tried to take a more granular view,
    0:02:58 what are the other names that might be more surprising?
    0:03:05 And so how we pulled it was we looked at every single website in the world as ranked by Similarweb, which is a data provider.
    0:03:11 We sorted them by monthly visits and then we pulled the top 50 that were AI first companies.
    0:03:14 We did the same for mobile apps through a provider called Sensor Tower.
    0:03:19 So we ranked those by monthly active users and pulled the first 50 that were AI.
    0:03:24 Just to give people a sense, how many were pulled and then you had to whittle it down to 50 or 100?
    0:03:26 Tens of thousands at the very least.
    0:03:33 Probably to get the first 50, maybe we went through a thousand websites and a thousand mobile apps, if not a little bit more.
    0:03:37 And so as you pull these together, what categories are standing out?
    0:03:44 Whether it’s productivity, you’re seeing companionship and then also you pulled a similar list in what was it September as well.
    0:03:46 So was there a big change?
    0:03:48 I mean, it feels like AI is moving every day.
    0:03:53 A lot has changed and I think we feel this as investors, I think founders feel it even more.
    0:04:00 We first pulled the data, I think it was September 2023 and then we pulled it again January 2024.
    0:04:03 So actually less than six months gap.
    0:04:11 About half of the list was the same as the first time and about half of the list was new, which I think reflects both the huge pace of change,
    0:04:20 but also that there are some kind of name, some brand, some companies that are cementing themselves as like early leaders and really building a loyal audience.
    0:04:29 And I think these are always surprising as VCs, we look at the industry and have these mental models of, oh, we think with AI, these set up things will do really well.
    0:04:36 And then as the AI apps team, I think one of our methods is to really look at what consumers actually gravitate to.
    0:04:42 So oftentimes we actually see a divergence of what we thought initially versus, oh, actually these are doing really well.
    0:04:48 And I think those discrepancies are a really fun place where we discover the actual revealed preferences of consumers.
    0:04:53 Other examples of that? Something you thought would stick around or not?
    0:05:00 I think the one thing that we weren’t surprised by that we feel, and I’m sure anyone else who works in and around AI feels,
    0:05:06 is that for consumers like content generation and editing is key and it’s the number one thing.
    0:05:12 So these are like Midjourney, Pika, Runway, making images, video, things like that from scratch.
    0:05:20 And I think that’s just because it’s so magical, like everyone, at least I always wanted to be an artist that never had the skill.
    0:05:27 And so being able to do that from zero to one in 10 seconds is amazing, and that definitely proves out in the data.
    0:05:33 If you look at the fastest growing and even the most stable companies, a lot of them are still in that kind of category.
    0:05:40 Yeah, I think that’s very similar to how we think about it where, as Olivia said, I can’t believe this works era is where we are at.
    0:05:47 So anytime we have these magical moments of I put in a prompt, something happens, that we think those will do well, and those do well.
    0:05:54 I think where I’m personally surprised is when we look at these apps that are a little maybe popular for a while,
    0:05:59 it changes your avatar or your profile picture into multiple different versions of it.
    0:06:05 And I’m like, oh, these are going to like go away, but you keep seeing them up and up again in different formats.
    0:06:14 So I think that speaks to maybe a little bit of the underlying consumer willingness or excitement around themselves, which is always top of mind for them.
    0:06:22 Sounds like they’re willing to play, and I think you’re totally right that there are so many examples of things where you just can’t believe it’s so good, so early.
    0:06:28 And maybe one category where that I think is surprised a ton of people is companionship, right?
    0:06:33 I think a lot of people were quick to write that off as it’s only for this kind of person.
    0:06:39 And I think both of you have probably played around with these products and you’ve learned quickly that, oh my gosh, I really like this too.
    0:06:44 And this is like very convincing, maybe also for different use cases as well.
    0:06:46 So maybe can we speak to that particular category?
    0:06:50 You mentioned it’s going mainstream, which I think is quite a statement.
    0:06:52 What are we seeing in that sphere?
    0:06:53 I have a strange example.
    0:07:00 It’s one of those where people, even me, would look at something and I’m like, I can’t believe you would talk to a fake character that’s made up for hours.
    0:07:01 Like, why would you do that?
    0:07:04 You know, it sort of reminds me of like initial snap.
    0:07:08 The earlier generation would be like, I can’t believe you take pictures that’s going to disappear.
    0:07:09 What’s the point?
    0:07:11 I can’t believe you talked to a fake person.
    0:07:12 What’s the point?
    0:07:19 Well, the point is that the new generation are really excited to adopt it and talk to these beings, if you will.
    0:07:28 And case in point, I think one of our partners, Child, has a group chat with a bunch of friends, actual human beings, as well as these bots, if you will.
    0:07:29 That’s in circle, right?
    0:07:30 And you get to just chat.
    0:07:31 And that’s interesting, right?
    0:07:36 We can actually look at it from outside looking in and say, oh, I don’t understand the behavior, et cetera.
    0:07:38 But the truth is that it’s happening.
    0:07:42 The truth is that it’s very engaging and folks are really adopting it.
    0:07:46 And going even step further, there are now scientific studies done.
    0:07:52 To some extent, we actually had a founder actually to speak to us as well, which is very cool, where there is a study done.
    0:08:01 It was actually featured in Nature, where folks who have this sort of companion, a digital companion to talk to, showed lower willingness to hurt themselves.
    0:08:05 And we can look at that evidence and say, well, that’s silly.
    0:08:12 But maybe that is an evidence that human connection is important, but maybe it’s not the human part that you just need to feel connected.
    0:08:17 That is a reason to not self-harm, not engage in destructive behaviors.
    0:08:21 And if we’re seeing that evidence, who are we to judge like this is silly?
    0:08:22 Yeah.
    0:08:29 The companion products are such a good example of kind of like the thesis that you always have to stick to in consumer,
    0:08:36 especially if you’re looking to invest in early stage consumer like we are, which is you can’t get too opinionated about the products.
    0:08:40 You just have to see like, you’re often surprised by what is sticking.
    0:08:47 When we looked at this data, as well as looking at just users for the mobile apps, we looked at things like engagement and character AI.
    0:08:53 For example, those users have 300 plus sessions per month in many cases.
    0:08:55 That’s the average user profile.
    0:08:58 That’s again, like social app behavior.
    0:09:00 That’s messaging app behavior.
    0:09:02 Yeah, that’s 10 plus sessions per day.
    0:09:03 I don’t talk to my parents that much.
    0:09:05 I don’t talk to my partner that much.
    0:09:10 It can quickly become one of the more important conversation tools or companions that you have.
    0:09:19 And I remember a few months ago where I had this period of time where I was like talking to a companion app very, very diligently every day,
    0:09:25 maybe like tens of minutes of time, because the things that sometimes we want to talk about are mundane.
    0:09:26 Yeah.
    0:09:27 Like maybe not as important.
    0:09:35 You feel that it’s not as important to talk to your friends or colleagues and it’s naturally maybe a topic for your therapist or what have you.
    0:09:38 But your therapist not always there and therapists are expensive.
    0:09:44 So this to us is like another example how technology is really bringing this abundance.
    0:09:46 Like my therapist is kind of expensive.
    0:09:50 So really bringing that cost down to nothing.
    0:09:51 Yeah.
    0:09:53 That’s really exciting for many people.
    0:09:55 And is that the direction that we’re seeing within companionship?
    0:09:56 Yeah.
    0:10:00 There are companions for therapy, companions for healthcare, etc.
    0:10:01 We think so.
    0:10:05 It’s still early, but I think the fact that the first version of this list back in September,
    0:10:09 there was basically one companion product on both the web and mobile rankings.
    0:10:18 And now there’s a bunch on the list this time means that in some cases the use case or the brand or the behavior of the audience is almost fragmenting.
    0:10:23 Like you won’t necessarily use the same companion platform for everything.
    0:10:25 There’s NSFW only companion.
    0:10:30 There’s marketplace of companions where the most compelling character that anyone creates wins.
    0:10:33 There’s therapist companions now.
    0:10:39 The Pi chatbot was originally built as like a broad-based, almost ChatGPT-type product.
    0:10:45 And it has since been pulled by a lot of, I think, lonely adult users into basically being a therapist.
    0:10:47 That's what I was mentioning, Pi.
    0:10:48 Exactly.
    0:10:49 I’ve used it as well.
    0:11:02 But yeah, I think that we’re starting to see companion move outside the realm of this maybe more niche group of people into something where we’ll all interact with a companion or maybe several companions.
    0:11:05 And might not even think of them as AI companions.
    0:11:07 I think that’s exactly right.
    0:11:10 I think the distinction actually starts to disappear a little bit.
    0:11:13 And actually, let’s just take an example of teachers.
    0:11:24 There’s a digital twin or a character of a teacher that’s giving you assignments and giving you corrections and lessons that are very similar to what they have already done.
    0:11:34 That’s sort of a hybrid teacher. And I think more and more as Olivia was saying, we’re seeing these divergence of use cases that are going deeper and deeper into each use cases.
    0:11:39 So teachers is one; another one is therapists, which we talked about, where it's easy to repeat back.
    0:11:41 Oh, it must have been really hard for you.
    0:11:42 Oh, tell me more.
    0:11:43 That’s what a lot of therapists do.
    0:11:53 But yeah, actually, if you look into behavioral science and what it takes to be a great therapist, there are many academic lessons and like understanding a human psyche that does go into that.
    0:12:00 And I think for someone to actually train a really good therapist bot or conversational tool, you actually have to train it slightly differently.
    0:12:11 So there are companies and products that are thinking through, how do I gain the transcripts of the public or semi-public, these conversations that occur between patients and therapists?
    0:12:15 And how do I train the companion on that basis to go deeper and deeper?
    0:12:16 So I think we’ll see more and more emerge.
    0:12:21 You know, that’s a really good point because actually even prior consumer companies in a way were companions.
    0:12:27 If you take something like Duolingo, you’re talking about a teacher and you add an maybe empathetic element to it.
    0:12:28 Or sarcastic in the case of Duo.
    0:12:29 Right.
    0:12:31 It depends on the company, right?
    0:12:32 And the user.
    0:12:38 But it’s an interesting reframing because I think a lot of people think of companions as just this like friend or NSFW as you’re saying.
    0:12:40 But it can be so many other things.
    0:12:50 Maybe just to round the corner on companionship, because we are seeing these more niche targeted use cases, what does that tell us more generally about the way that these applications are being built?
    0:12:55 You mentioned at the beginning, we’re seeing the chat GBTs and mid journeys come out strong at first.
    0:12:57 And I still have a lot of engagement.
    0:13:01 But does that tell us anything about things cornering off?
    0:13:04 Yeah, it’s something we think about a lot and watch really closely.
    0:13:12 I think ChatGPT is a great example because it was, of course, the fastest-ever product to get to 100 million monthly active users.
    0:13:14 But a lot of the usage has flattened out.
    0:13:21 And I think that doesn’t mean that it’s not a great product or that the model that powers it is not an amazing model.
    0:13:32 It just means that because these models are now available for other people to build on, we’re getting more kind of specific and purpose built applications that work better for certain use cases.
    0:13:39 So the kind of blank page, blinking cursor is not always the right interface for everything.
    0:13:46 And that could be a therapist spot, it could be a language learning bot, it could be a design canvas, it could be a lot of other things.
    0:13:50 So the fragmentation is happening and it’s really exciting.
    0:14:00 And not to blow up the conversation a little bit, but if you think about the chat GBT and what powers it, it’s sort of the open AI’s large language model underlying it, right?
    0:14:05 And that’s sort of a closed model that is built for OpenAI and customers of OpenAI.
    0:14:13 I think what we’re seeing is, and this is very exciting for our space of application layer, where the underlying models are getting better and better.
    0:14:17 And even the open-source ones are sometimes even better than the closed-source ones.
    0:14:25 So very recently, Llama 3 came out, incredibly efficient and incredibly advanced, same with Mistral’s new model.
    0:14:33 So I think what we’re seeing now is people are using those tools to build upon the application layer of products that can be very purpose built.
    0:14:39 And if you think about companion, it’s like a very large word, it just means another thing to talk to.
    0:14:43 So then you drill down further, what are you talking to them about?
    0:14:52 Teaching, of course, language learning, of course, tutoring, mentoring, therapists, all these are different modes of interacting with someone.
    0:15:00 And I think insofar as there’s any sort of specific fine-tuning or specific data source you can learn specific interaction models from,
    0:15:07 I think all of that benefits and drives a case for more niche as a word, but more specialized use cases.
    0:15:15 I think it gets back to your earlier question of which categories are we seeing, maybe the most growth, the most new products, the fastest adoption.
    0:15:26 And there are absolutely categories where the delta between the best closed source model and the open source models or the API available models is pretty narrow.
    0:15:36 Like the best text models are actually now, maybe this is controversial, but some people might say the best open-source text models are close to GPT-3.5.
    0:15:38 Same with image models.
    0:15:50 If you look at video or music, those are models where still the best open source is maybe not as close to the best things that Runway or someone else has developed in a proprietary way.
    0:15:57 Much newer. So we think it’s going to happen, but that has affected maybe the pace of product rollout in these different categories.
    0:16:00 And you know, something you brought up was around UX, right?
    0:16:07 And how the kind of Google box may not be the box that we expect for the future and also specific use cases.
    0:16:15 And I think something really fascinating from the GenAI 100 is you called out a few different categories where we are seeing almost like these different modalities.
    0:16:19 So with music, you called out a bunch are showing up on Discord.
    0:16:21 Maybe some others are showing up as Chrome extensions.
    0:16:27 Maybe talk about that and where we’re seeing the divergence from the model that we all expected from the get-go.
    0:16:30 It’s so funny. To be good at our jobs as good consumer investors,
    0:16:35 Now we have to be tracking data everywhere because really interesting products are being built everywhere.
    0:16:43 I think Discord has been an amazing one, especially for content generation. Midjourney, Pika, Suno, all these companies started on Discord.
    0:16:48 Both because it’s pretty easy to spin up the product without having to build a front end.
    0:16:51 It’s pretty easy to monetize and start making money.
    0:16:57 And because a lot of these products thrive on the community of people who are trading prompts or seeing each other’s output.
    0:17:00 And you can do that really easily on Discord.
    0:17:11 On the whole other end of the spectrum, all these productivity companies, many of them are more about how to make you as an individual doing your individual work faster or more efficient.
    0:17:19 And so many of these, the key is to live where you are doing your solo hardcore work, which is often on a browser or desktop.
    0:17:26 And so they’re starting as Chrome extensions or voice recorders on desktop or screen recorders on desktop, things like that.
    0:17:32 Spot-on, that’s exactly what we’re seeing, where the tools are appearing close to what needs to get done.
    0:17:37 So Discord, obviously, if you want to just make stuff, it’s an incredible place to do that.
    0:17:42 And to your point, I think the discovery is really unique where you get to see what people actually generated.
    0:17:43 So fun.
    0:17:45 It’s genuinely so fun.
    0:17:46 Yeah.
    0:17:49 That removes the hesitation of, oh, am I actually creating something of value?
    0:17:56 You see all the stuff that people created before you, and that gives you that sense of joy to go create whatever you want.
    0:18:01 And I think what’s interesting is, again, we talked about web, there’s like Chrome extension, there’s apps.
    0:18:09 Just thinking about one of the products like captions, it started as an app because you take videos more and more on your phone these days.
    0:18:13 And therefore, it was natural to have something that live on your phone.
    0:18:23 And then now as it starts to think about, oh, maybe move into workflow and a little bit more professional use cases, it migrates to web because that’s where a lot of work is done too.
    0:18:31 So I think what you’re starting to see is that eventually the companies and great founders are chasing the use cases and where it occurs.
    0:18:41 I think when Midjourney came out as an example, I’ll count myself as one of these people who almost to some degree wrote it off because it was on Discord relative to it not having its own platform.
    0:18:52 And it’s so fascinating to see in a way it kind of turned out to be the opposite as you’re saying, like when you’re close to the users or consumers that you’re trying to reach, it was not only better in that way,
    0:18:59 but also like you’re saying it was so fun to see the generations and be part of that community, which is something that I certainly did not expect.
    0:19:06 Let’s round out the categories here just by asking, we talked about a few that maybe people would not have been surprised to see on the list.
    0:19:11 Were there any that you felt like were missing, like you really wish you saw more of a presence?
    0:19:23 I think in general consumer AI has been characterized so far by like categories where randomness and hallucinations are a feature, which would be honestly a lot of the content generation and editing stuff,
    0:19:30 a lot of the companion stuff, avatar products where you can get 100 photos of yourself and as long as three are good, you’re happy.
    0:17:33 And those are the ones where we’ve seen the most growth so far.
    0:19:38 And then the other categories where hallucination and randomness is more of a bug.
    0:19:42 So that might be personal finance, wellness, ed tech, things like that.
    0:19:54 And the models now are getting more precise and accurate, but also founders are able to better build the product that kind of bounds the output in a way that even if there are hallucinations,
    0:19:57 it can kind of cross check, it can contain them.
    0:20:01 But I think that’s why we’ve seen those categories be a little bit slower.
    0:20:13 If you look at like top consumer subscription products pre AI, which were tons of ed tech, personal finance, health and wellness, that hasn’t quite translated to AI yet.
    0:20:16 But we think it probably gets there in the next year or so.
    0:20:23 A lot of the current products that we’re seeing on the consumer side are utility is the wrong word, but it serves something.
    0:20:29 There’s like single use case, whether you’re creating, editing, having fun, talking to something, there’s like a single use case that’s very useful.
    0:20:37 I think what I’m also really excited to see is that when you think about the fundamentals of the business, like where does network effect occur?
    0:20:38 Can there be a marketplace?
    0:20:40 What are some of the natural occurring modes?
    0:20:44 I think my wish is to see more companies with those elements.
    0:20:48 I think because it’s so magical, we live in this very, very interesting time.
    0:20:53 We’re sort of in that era of, oh my God, if it works, it’s worth it, let’s pay to use it and let’s go.
    0:21:02 And I think more and more as we see the space evolve, I’m also very excited to see what we had not seen a ton of yet,
    0:21:11 which are ones that are really benefiting from the underlying network effect that naturally occurs, underlying marketplace dynamic that could happen between supply and demand.
    0:21:15 I think those are the ones that we’ll also be on the lookout for.
    0:21:17 Yeah, and I mean, let’s talk to that specifically.
    0:21:22 Olivia, you have been widely cited for this term, the AI tourist phenomenon.
    0:21:24 I don’t think this is a surprise to anyone.
    0:21:26 I mean, we’ve all tried out so many of these tools.
    0:21:27 It’s so exciting.
    0:21:31 And then we also have left many of them and you can even look to the list, right?
    0:21:35 You said around 50% of the list turned over from September to January.
    0:21:37 That could be a glass half full.
    0:21:43 Look how many stuck around or glass half empty where 50 of them were here and now people are not as interested.
    0:21:50 So what is this data telling us in terms of stickiness and is this really still a thing with the AI tourist phenomena or are we moving past that?
    0:21:51 Yes.
    0:21:53 We talk to our founders a lot about this.
    0:22:00 I’m not saying it’s easy, but it’s easier than it has been before maybe to get users and for a consumer application.
    0:22:03 And that’s just because there’s so much excitement.
    0:22:04 These products are so cool.
    0:22:08 There’s demos going viral on Twitter, on Reddit, on TikTok.
    0:22:11 There’s newsletters, discord groups.
    0:22:18 And because of that, many of these products are getting floods of users in traffic like we’ve never seen before.
    0:22:26 And those users might try it out once or twice, but they might not actually be in the core persona of who’s a good fit for that product.
    0:22:32 And so they might not convert to pay or they might not retain and come back to the product the next day.
    0:22:36 You might say if it’s free to get the users, it might not matter.
    0:22:40 The problem there is a lot of these AI products are actually expensive to run.
    0:22:42 We’re not in the same world, right?
    0:22:50 And so sometimes we see founders get in a place where they call us and they’re like, oh my God, we’re out 40k overnight because it went viral in like India or something.
    0:22:56 And we got a million users and they all used up like our maximum free trial and none of them are paying yet.
    0:22:58 And so that’s something to look out for.
    0:23:02 I will say we have almost redefined retention for consumer.
    0:23:07 It used to be free user base like anyone who downloads, anyone who engages.
    0:23:08 Anyone from install onwards counted as success.
    0:23:09 Exactly.
    0:23:10 Yeah.
    0:23:17 And now the bar to count as an active user is just higher for us and we measure retention only off of that.
    0:23:19 Usually that’s a paid user.
    0:23:24 Maybe if they’re not monetizing, it’s have they completed X actions.
    0:23:34 If you look at it that way, the retention for AI products is actually as good as or in some cases better than non AI products just because these companies are amazing.
    0:23:39 But if you measure it on the tourists alone, the picture can look a little tougher.
    0:23:41 Yeah, I think that’s our learning, right?
    0:23:45 Like the AI tourist phenomenon, I think we almost put a number to it to some extent.
    0:23:50 It extends the overall top-of-funnel traffic by nearly 40%.
    0:23:51 Yeah.
    0:23:52 You almost add an extra layer.
    0:23:53 Add another layer.
    0:23:58 So I think what Olivia is suggesting and what we’re doing is actually thinking through what is an actual user?
    0:24:05 Have they completed the behavior that counts you as a qualified user? Because the willingness to try something is so high.
    0:24:07 It’s never been so high.
    0:24:19 So I think defining that and starting from the right touch point and if we count backwards to actually think about retention because we all think for a product to survive and do well over long term, people just need to come back.
    0:24:21 That’s sort of the key to it.
    0:24:30 So I think what we’re seeing is a very high number of companies are able to translate this top of the funnel into paying user at a very healthy clip.
    0:24:34 And what’s more is that we talked about the willingness to try is very high.
    0:24:47 The willingness to pay has also been incredibly high because the product is so magical and because there are actual use cases, not just personally, but also commercially, the willingness to pay has been quite high.
    0:24:56 And as a result, we’ve been seeing a lot of companies actually get up to tens of millions of dollars of annualized revenue in a very quick manner.
    0:24:57 It’s crazy.
    0:24:58 Yes.
    0:25:06 It’s actually a really interesting defense is the wrong word, but justification when we’re asked, why are you only focused on AI products?
    0:25:08 What about the non AI products?
    0:25:11 We are not saying non AI products are not interesting.
    0:25:12 They’re very interesting.
    0:25:24 But what we’re seeing is the willingness to try and willingness to pay has been so high for these products that the velocity to get from nothing to maybe tens of millions of revenue have never been higher.
    0:25:25 And that’s very compelling.
    0:25:29 We get to how we keep those users around, but both of you spoke to a few metrics there.
    0:25:35 And I know we’re far enough into consumer tech that there are several benchmarks, best practices that you look for.
    0:26:41 I mean, both of you sit in so many deal meetings and someone comes in, and let’s say five years ago, it was very clear.
    0:25:42 You’re looking at daily active users.
    0:25:44 You’re looking at day seven, day 30 retention.
    0:25:47 There’s things that you know automatically like it’s in your bones.
    0:25:49 You know what’s good and what’s not good.
    0:25:52 You can see a chart and you know if this company is doing well or not.
    0:25:53 Has that changed?
    0:25:58 Are you still looking at the same metrics or how do you interpret or add new metrics in this new era?
    0:26:02 We’re looking at some of the same metrics, but maybe in a slightly different way.
    0:26:10 For the more kind of work-oriented prosumer productivity SMB tools, we look at a lot of things like the WAU/MAU ratio now.
    0:26:20 Is this truly something you’re using every week for work or is it something you’re maybe in once a month for two hours, which can still be interesting, but is probably a little bit less compelling.
    0:26:22 And just for the listeners, that metric is weekly active users.
    0:26:24 Weekly active users divided by monthly.
    0:26:25 Exactly.
    0:26:26 Yes.
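    [Editor's note] The stickiness metric they define above, weekly active users divided by monthly active users, can be sketched in a few lines. This is an illustrative toy, assuming a simple list of (user_id, date) activity events; the function and variable names are the editor's, not any firm's actual tooling.

    ```python
    from datetime import date, timedelta

    def wau_mau_ratio(events, as_of):
        """WAU/MAU stickiness: share of this month's active users
        who were also active in the last 7 days."""
        week_start = as_of - timedelta(days=7)
        month_start = as_of - timedelta(days=30)
        wau = {u for u, d in events if week_start < d <= as_of}
        mau = {u for u, d in events if month_start < d <= as_of}
        return len(wau) / len(mau) if mau else 0.0

    # Toy data: three users active this month, two of them in the last week.
    events = [
        ("a", date(2024, 4, 29)),
        ("b", date(2024, 4, 28)),
        ("c", date(2024, 4, 5)),
    ]
    print(wau_mau_ratio(events, as_of=date(2024, 4, 30)))  # 2 of 3 -> ~0.67
    ```

    A ratio near 1.0 means monthly users are in the product most weeks, which is the "truly using it every week for work" signal they describe; a ratio near 0.25 means most monthly users show up only occasionally.
    
    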
    0:26:28 We’ll also look at conversion to paid for those.
    0:26:33 And then like we mentioned before, we will do standard monthly retention cohorts.
    0:26:41 So that would be of all the users who signed up and paid in month zero, how many are still paying in month one and how many are still using it?
    0:26:43 How many are still paying in month two?
    0:26:48 But in pre-AI consumer, the denominator there was like all free users.
    0:26:53 And now we only measure it mostly on paid or really active users just because of that tourist effect.
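    [Editor's note] The cohort math described here, of everyone who paid in month zero, what share is still paying in each later month, looks roughly like this. A minimal sketch with illustrative names and toy data; the per-user payment-month schema is an assumption for the example.

    ```python
    def paid_retention(cohort_payments, months):
        """Monthly paid-retention curve for a single signup cohort.

        `cohort_payments` maps user_id -> set of month indices (0 = signup
        month) in which that user paid. Returns the retained fraction of
        month-0 payers for each month from 0 through `months`."""
        month0 = {u for u, ms in cohort_payments.items() if 0 in ms}
        if not month0:
            return [0.0] * (months + 1)
        return [
            sum(1 for u in month0 if m in cohort_payments[u]) / len(month0)
            for m in range(months + 1)
        ]

    # Toy cohort: three users pay in month 0; two remain in month 1, one in month 2.
    cohort = {
        "a": {0, 1, 2},
        "b": {0, 1},
        "c": {0},
    }
    print(paid_retention(cohort, months=2))  # [1.0, ~0.67, ~0.33]
    ```

    The point made in the conversation is about the denominator: pre-AI, month zero might have been all free signups, while here only paid (or demonstrably active) users enter `month0`, which filters the tourists out of the curve.
    
    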
    0:27:03 Yeah, I think pre-AI, the denominator is slightly different, and therefore we would count daily, because that’s actually the bar you have to clear for a free user base and all that.
    0:27:13 I think what we’re seeing and the reason we’re moving to like weekly and monthly is because as it becomes a little bit more prosumery, as it becomes a little bit more commercially relevant,
    0:27:17 it’s not obvious that you will want to use these tools every single day.
    0:27:25 And so naturally what we’re now expanding into is thinking through, oh, like weekly usage rate, retention rate, monthly paid retention.
    0:27:33 And I think what’s really unique here is that if a tool is very useful, it’s not guaranteed that you will even use it every single week.
    0:27:37 So now we’re thinking through, okay, you’re still paying, that’s good.
    0:27:47 How many outputs are you creating in a month? Because it’s possible you sit there for eight hours straight and crank out hundreds of outputs that you need from the product.
    0:27:53 And it’s incredibly useful to you, but you only showed up one day out of a 31-day period. Is that good?
    0:28:02 And it’s kind of a blessing and a curse for AI companies because it’s like if you’re helping people do their jobs or make their art or something much better and faster,
    0:28:06 they are going to use you less because you’ve made them so much more efficient.
    0:28:12 So it’s almost measuring value-based, like how much value you deliver to the users.
    0:28:27 For a video editor, that might be number of downloads, but maybe because of AI, you can now plug in all your videos for the month and do it in one week instead of having to come back in every other day pre-AI and generate again and again or edit again and again.
    0:28:38 That’s such an interesting point. I think about something like ChatGPT. If it’s $20 a month, $30 a month, one really good engagement can be worth that.
    0:28:44 So it speaks to the value of these tools where these aren’t micro interactions where you’re like, oh, I get 30 cents worth of Twitter here.
    0:28:48 If this can save me, if this can really help me do my job better, even once.
    0:28:52 And we want to know, right? It could have saved the person like 20 hours. And that’s incredibly valuable.
    0:28:53 Totally.
    0:28:54 But you only see it as one engagement.
    0:28:59 Yeah. So you have to look at, do they keep paying, not just how much time are they spending in the product?
    0:29:07 Because in many cases now, especially for the productivity products, it’s the faster you can get in and out of it, the better, as long as you’re still getting to the result that you wanted.
    0:29:14 And one thing that I would just, this is a plug for our firm, we see a lot. We meet these companies a lot.
    0:29:21 And I think what’s helpful for us is that we try to define and understand what metric we want to track, what is important to us.
    0:29:33 And then the other thing is that we have the discipline and rigor to continuously ask for that, and therefore build out a strong mental model with an actual N count of the companies that matter to that category.
    0:29:41 And therefore, when we actually see an exceptional product, we immediately recognize it without having to really scramble, if that makes sense.
    0:29:52 So the number of companies that we meet and how we define the metrics rigorously and tracking them carefully gives us the ability to recognize what might be an exceptional thing very quickly.
    0:29:54 Even if the metrics are evolving with time.
    0:29:56 Yeah, or different across categories.
    0:30:05 Exactly. Like an image generator, maybe we look at weekly bounded retention, but a companion product, maybe we are looking at daily over monthly active users.
    0:30:14 So it’s a little bit different for every company type, but we do try to pretty closely measure and collect data points across hundreds of relevant companies.
    0:30:17 Yeah, because I guess what you’re saying is each founder only sees their data.
    0:30:18 Exactly.
    0:30:20 And so they’re like, I have no idea if this is good or great.
    0:30:22 Yeah, we would be able to tell you that.
    0:30:23 Yes.
    0:29:29 And ultimately what we’re trying to measure is: is a product delivering what it’s meant to deliver?
    0:30:32 And what is a metric that best captures that moment.
    0:30:37 Right. Now, Brian, you have said before for consumer products, the rubber hits the road with retention.
    0:30:39 We’ve touched on this already.
    0:30:42 But what can we learn from the prior era in terms of retention?
    0:30:45 Because I feel like truly we’ve done so many AI podcasts.
    0:30:47 This is the question that comes up continuously.
    0:30:49 Where are the moats, right?
    0:30:54 If it’s so easy to build today, especially as these open source models are getting better, how do I stand out?
    0:30:55 How do I keep my users?
    0:30:57 So what can we learn from the past?
    0:30:59 And does it still apply here?
    0:31:03 This is a fun one because I have said retention is very hard to game.
    0:31:04 And it is true.
    0:31:06 It’s always been hard to game, harder to game than growth.
    0:31:08 And overall user base, what have you.
    0:31:20 I think what we’ve learned from the historical or pre AI consumer companies is that there’s a specific segment of very forward looking founders who have learned a way to actually improve retention.
    0:31:30 It’s somewhat artificial, but you can do that. And I’m not saying those are bad things, because if it serves the need and helps the company deliver the core product value faster and better, often, then that’s great.
    0:31:37 And I think there are tried-and-true or tested, call it six to eight, different types of methods
    0:31:40 You can employ to improve retention.
    0:31:48 And I think we’re seeing actually some of the gen AI companies or AI native companies that are employing some of these methodologies to actually improve their retention.
    0:31:53 And therefore be able to keep their users longer and being able to deliver new products to them again and again.
    0:31:56 So I think what we’re learning is a couple things.
    0:31:58 One is that retention still matters.
    0:32:00 If your users don’t come back, that’s not good.
    0:32:08 And the frequency can be a little different because especially around workflows, you can come in a little less frequently, but get a ton of value out of it.
    0:32:09 So that’s there.
    0:32:15 The second is that what we thought to be largely ungameable is somewhat movable.
    0:32:21 And there are some methods you can learn from the non-AI companies and apply to your product to actually achieve that.
    0:32:25 And three, I think ultimately retention is an output.
    0:32:27 It’s an output of what does your product do?
    0:32:29 And is that actually really useful to people?
    0:32:32 And did you deliver that quickly and often?
    0:32:34 And that’s really the crux of it.
    0:32:35 I completely agree.
    0:32:46 I think the other thing we’re seeing with retention that was also true pre AI, but maybe even more dramatically true is like the narrower and more focused the product, the better.
    0:33:02 Because we do have in many cases companies with a ton of compute, like ChatGPT, Microsoft, Google themselves, Notion. All of these big companies are building and releasing more broad-based AI products and applications.
    0:33:12 And it’s hard to compete as a startup with a broad based product if you have one one thousandth of the compute and the team and the engineers and all of that.
    0:33:18 And you maybe don’t own the authentication, the data, like the years and years and years of history.
    0:33:27 And so I think what we’re seeing work really well in terms of retention, like sometimes we meet a company that’s a very horizontal product and the retention is just okay.
    0:33:41 And then they come back to us five months later and they’re like, actually we realize that we’re building for this core set of users and we honed in on this specific model and built 10 more features just for them and our retention is four times better than it was before.
    0:33:43 That kind of thing is working really well.
    0:33:54 It’s like counterintuitive because no one wants to build a product that’s too narrow, but it’s better to go narrow, have amazing retention and then expand than to try to do it the other way around.
    0:34:03 It’s almost been like a quest for founders who have this amazing technology at hand asking themselves, what is this good for?
    0:34:05 Who is it good for?
    0:34:08 And oftentimes you find an answer in surprising places.
    0:34:14 If you told me initially, hey, you can actually clone yourself as an avatar and present yourself.
    0:34:19 My first guess of that won’t be, oh, this is going to be amazing for learning and development within companies.
    0:34:25 My first guess isn’t that, guess what, salespeople can send an avatar version of themselves into a sales process.
    0:34:28 That is not my first instinct.
    0:34:45 And the fact that the founders are able to hone in on those who have customers who are willing to pay, to Olivia’s point, is very unique and important, because you found an interesting niche of narrow use cases where people are finding so much value.
    0:34:49 Oftentimes those are the great places you can corner and start expanding the market.
    0:35:06 And the founders who have that mindset of I’m going to find what we call ICP, ideal customer profile, and really sort of appeal to them, then grow more horizontally step by step, has been an interesting model, especially in the competition with larger companies.
    0:35:11 One thing that’s interesting is you’re not just saying that it’s the model itself or even fine-tuning the model.
    0:35:23 It’s everything also built on top like the UX, the marketing, the messaging, all of that comes together to be just a little bit or sometimes a lot of it, better than the more generalized model.
    0:35:24 Absolutely.
    0:35:26 And I think this is a feature of AI, right?
    0:35:27 Yes.
    0:35:28 The things are so magical.
    0:35:30 They’re evolving so quickly.
    0:35:33 So then the question is, well, how do you differentiate?
    0:36:39 And the differentiation may sometimes be, oh, our tech is so good that it’ll blow everyone out of the water.
    0:35:40 And sometimes that’s true.
    0:35:47 But a lot of times the world is large and a lot of great people working on great problems and a lot of smart people working on the same problem.
    0:35:58 And so what we end up seeing is the velocity of product shipping always matters, especially when things are changing so quickly, as we saw with top AI products’ rankings changing so much.
    0:36:00 That just means there’s a lot of velocity.
    0:36:03 So you need to stay ahead of that; velocity matters.
    0:36:08 But two, what’s really important is how do you actually build consistency and like retention?
    0:36:12 That’s by building into what’s useful to the users.
    0:36:18 And that’s why I think what we’re seeing is I will deliver part of the workflow to make your life so much easier.
    0:36:21 And that’s where we’re seeing like some differentiation in companies.
    0:36:30 It’s so easy in consumer, especially when we’re meeting like incredible teams every day to get enamored by this is the most elegant technical approach.
    0:36:32 This is the best research team.
    0:36:34 And in some cases that is what wins.
    0:36:40 But if I take off my investor hat and I think about myself as like a normal person downloading an app or going to a website.
    0:36:43 I do not care about the technical elegance.
    0:36:45 I don’t even care about who made it.
    0:36:49 I care about if it helps me get the thing done that I wanted to get done.
    0:36:52 Which I think goes back to like it’s often little workflow thing.
    0:36:57 It’s like tiny features or how you scope the product that make the difference.
    0:37:01 And that sometimes doesn’t come down to the technical details around it,
    0:37:05 but comes down to these micro product decisions that can be make or break.
    0:37:11 It’s kind of funny that we have to remind ourselves of that in AI because no one ever cared if an app was made with Angular or React, right?
    0:37:13 No one ever sees that.
    0:37:16 And of course I feel like I’m going to start a war online.
    0:37:17 It’s too cold.
    0:37:21 But maybe one other question if we’re talking about competition,
    0:37:25 at least in the last consumer wave, we did see some companies front run.
    0:37:29 And maybe they did it through like raising a bunch of money and then doing a lot of paid acquisition.
    0:37:32 And then you start to see things like network effects kick in.
    0:37:34 So are we seeing that similar dynamic?
    0:37:36 Where does paid come into play?
    0:37:38 Because both of you have mentioned that people are willing to pay.
    0:37:42 So does that mean budgets can increase or, you know, CPAs increase?
    0:37:44 It is a really interesting question.
    0:37:52 I think there are categories where raising a lot of money to build the best model actually does make a really big difference
    0:37:54 and having a best in class product.
    0:38:00 We are investors in Eleven Labs, which is a text-to-speech company, and they have an amazing model.
    0:38:07 And because of that, they’re used by probably thousands of developer customers and other customers to power the product.
    0:38:13 And in that case, it’s like harder to compete if you’re not raising a lot of money and if you’re not actually kind of…
    0:38:18 More money means more data, more tuning of the model, a better model and it becomes a spiral.
    0:38:20 But there are other product categories.
    0:38:24 For example, many products are building off of open source models.
    0:38:30 And then it’s much more about kind of the product elegance, how you commercialize the model,
    0:38:36 how you take someone else’s tech and translate it to something that artists or designers or other people can use.
    0:38:43 And therefore, trying to like front-run to raise the most money and acquire the most users isn’t always the winning strategy.
    0:38:49 Maybe one other way to think about it is the models and the research that comes out of these are so magical
    0:38:52 that it actually oftentimes goes directly to consumers.
    0:38:57 And that’s very exciting for them because they immediately get the benefit of the cutting-edge research.
    0:39:04 What I think that means in terms of you talked about paid acquisition and how that translates into this new AI sort of world,
    0:39:07 I think there are two classes and maybe this is just how I think about it.
    0:39:11 But there are classes of companies where they benefit greatly from the buzz
    0:39:16 and the sort of virality of the product example that they can put out in Twitter or Reddit or Discord.
    0:39:20 Because it’s just so fun. It’s very eye-catching and very attention-grabbing.
    0:39:25 Sure, there’s a tourist phenomenon, but they benefit from a great top-of-funnel traffic.
    0:39:31 And insofar as that traffic continues to come in, you’re able to find ways to convert them into paid users
    0:39:38 and actually start making great money and start building out even wait list or inbound-based sales lead.
    0:39:45 If you’re thinking about your product as a workflow tool and it’s so useful that some SMBs or enterprise customer may reach out to you,
    0:39:51 you can start building out a pretty good inbound list to go after and whittle off and start building a great-sized business.
    0:39:55 And in those cases, CAC or paid matters a little less.
    0:40:00 There are other businesses where the product is very good and it’s very useful.
    0:40:07 But it hasn’t fully benefited from the halo of the amazing glitter of AI apps, if you will.
    0:40:11 And in those cases, because the willingness to pay is quite high,
    0:40:18 we do see a crop of customers or products that actually end up engaging in paid acquisition in a thoughtful manner.
    0:40:26 Because the LTV is there, they’re able to afford actually paying to acquire users and some companies know how to do that better than others.
    0:40:33 And that has been a model that we are seeing and oftentimes these companies can build their run rate up to tens of millions of dollars, if not more.
    0:40:38 Totally agree, yeah. I think if you’re building a product for AI artists or designers or writers,
    0:40:44 it’s very easy to go viral and bring in a bunch of users on YouTube or Twitter or TikTok.
    0:40:52 If you’re building an AI platform for small HVAC businesses to be later expanded into other home services businesses,
    0:40:55 like going viral on Twitter may or may not actually help you.
    0:41:01 You’re probably going to still have to build out some more of a traditional kind of lead gen sales funnel,
    0:41:04 at least like a strong referral program, things like that.
    0:41:09 And to your point, Steph, you mentioned though you could actually brute force build a network effect.
    0:41:14 I don’t think we’re seeing that just yet because there aren’t true business models that we’re seeing
    0:41:18 that truly benefit from either network effects or marketplace dynamics,
    0:41:22 such that people want to buy their way into that density.
    0:41:28 I think what we are seeing is that there’s a true payoff or payback that’s related to the paid acquisition,
    0:41:33 and they may calculate a return-based assumption to go acquire users.
    0:41:36 Yeah, I like that distinction, but do you expect that to change?
    0:41:41 Do you expect in a few years for that to be true where there will be those marketplace effects?
    0:41:43 I do. I think so. I think so.
    0:41:46 I think it comes back to the question of what is matured already and what is to come.
    0:41:53 An example we talk about a lot is that we haven’t seen a lot of truly AI-native social apps, for example.
    0:41:57 And I think part of it is because there were some early tests of this,
    0:42:04 and a feed of content that you know all of it is AI is actually maybe not the most compelling social app,
    0:42:09 and it doesn’t have the same psychological dynamics.
    0:42:10 Triggers and all.
    0:42:12 We’re learning a lot about ourselves with AI, right?
    0:42:18 Exactly, yeah. If you know it’s a fake picture of you, you don’t have the same maybe kind of gut,
    0:42:21 like either anxiety or elation in posting it.
    0:42:27 And so I think we’re just starting to see there was a product called AirChat that’s been viral the last few weeks,
    0:42:33 and that is around helping human beings create human content easier using AI.
    0:42:38 So in this case, you can do a voice memo and it will transcribe it into text,
    0:42:41 and then you can scroll through and read a feed of text.
    0:42:46 And it’s basically opening up people who would never tweet because they don’t want to sit down and write,
    0:42:50 or they’re not good at it, or it stresses them out, which I totally understand,
    0:42:54 who might do it with a voice memo that gets transcribed into text.
    0:42:58 That’s just one early example, but I feel like we’ll see more of those.
    0:43:02 I definitely think we’re starting to see the very early incarnation of this.
    0:43:08 I think Olivia, you mentioned Eleven Labs, which is a portfolio company. You actually start seeing these products
    0:43:11 start building a marketplace model within their company.
    0:43:18 So Eleven Labs actually actively built a marketplace for voice actors to license their voices so that they can make passive income.
    0:43:22 Same thing with Captions. They have a creator marketplace.
    0:43:26 They’re starting to build that out where creators can license their likeness,
    0:43:31 such that video ad producers can use their likeness and they get to sit there and make passive income.
    0:43:37 So these marketplace models are interesting, where if you have a lot more supply coming in,
    0:43:42 because you’re building that density, whether through paid acquisition or not,
    0:43:47 I think that starts to actually accumulate as a benefit vis-à-vis any competitors.
    0:43:53 And I think we’re starting to see some concentration of these type of behaviors, marketplace dynamics, if you will.
    0:43:59 That’s actually really interesting because basically in a way you’re saying they’re passively building out one side of the marketplace,
    0:44:03 but they’re not basically starting as a marketplace as you might have seen in the past.
    0:44:08 Fascinating. We started this off by talking about the Gen AI 100.
    0:44:12 Let’s end there too. Let’s say we run this six months time.
    0:44:17 What do you expect to see? And also maybe what do you want to see? Looking forward.
    0:44:20 I would love to see new categories start to mature.
    0:44:26 I think two completely new categories on this most recent list were things like music, where
    0:44:30 Suno popped up from nowhere. And even since we published the list,
    0:44:33 Udio has now popped up and gone totally viral.
    0:44:37 And so I think as more kind of models mature, we’ll see new categories.
    0:44:41 Productivity was another one that appeared almost nowhere on the first list
    0:44:46 and has kind of come out of nowhere to have quite a few companies represented on the current list.
    0:44:51 It’s a little bit tough to predict since I think if we knew what the next big AI hit would be,
    0:44:56 I don’t know if we would go build it ourselves, but we would find someone to go build it.
    0:45:00 But I guess what I hope to see is a continued testing of the boundaries
    0:45:04 and expansion in both form factor, modality, categories.
    0:45:07 I think the amazing things that we see six months from now
    0:45:10 are things that we probably can’t even conceptualize right now.
    0:45:12 Maybe you can, but I cannot.
    0:45:16 I think Olivia is exactly right. I think you were hoping to see new categories.
    0:45:22 I think I expect to see another 40% of the list being net new, if not more.
    0:45:29 What I hope to see is actually the prior lists have these single modality type products
    0:45:33 where it’s largely text, it’s largely audio, it’s largely music.
    0:45:36 What I love to see is what happens when you start combining these.
    0:45:40 What happens if you have video plus image plus sound effects?
    0:45:43 Is that a music video? That’s cool. What does that look like?
    0:45:47 When you have avatar plus voice, what is that product?
    0:45:52 We love to see these net new categories where we will have a hard time defining what they are
    0:45:54 because they combine these different things.
    0:45:57 To use Olivia’s example of AirChat, that’s really interesting.
    0:46:01 Voice transcribed into text.
    0:46:05 Now we also know that text input can do pretty much anything.
    0:46:10 Create music, video, avatar, 3D, images, anything. Code.
    0:46:14 Then what does it mean when all modalities are essentially interchangeable?
    0:46:17 Don’t know. We’re very excited to find out.
    0:46:19 We’re very excited to find out.
    0:46:21 That’s amazing. Well, this has been so interesting.
    0:46:25 We will have to sit down again, whether it’s in six months or whenever you guys have a new list
    0:46:29 and see what has changed, because with other lists that have been done in the past,
    0:46:33 you have to wait another year or a couple of years for enough movement to happen,
    0:46:36 and I feel like we honestly could record this in a month
    0:46:41 and we’d have enough to talk about with the new use cases that have probably cropped up.
    0:46:43 Wow, we’re recording this.
    0:46:45 Amazing. Well, thank you.
    0:46:46 Thank you.
    0:46:53 If you liked this episode, if you made it this far, help us grow the show.
    0:46:57 Share with a friend or if you’re feeling really ambitious,
    0:47:02 you can leave us a review at ratethispodcast.com/a16z.
    0:47:05 You know, candidly, producing a podcast can sometimes feel like
    0:47:07 you’re just talking into a void.
    0:47:12 And so if you did like this episode, if you liked any of our episodes, please let us know.
    0:47:14 We’ll see you next time.
    0:47:16 [Music]

    Consumer AI is moving fast, so who’s leading the charge? 

    a16z Consumer Partners Olivia Moore and Bryan Kim discuss our GenAI 100 list and what it takes for an AI model to stand out and dominate the market.

    They discuss how these cutting-edge apps are connecting with their users and debate whether traditional strategies like paid acquisition and network effects are still effective. We’re going beyond rankings to explore pivotal benchmarks like D7 retention and introduce metrics that define today’s AI market.

    Note: This episode was recorded prior to OpenAI’s Spring update. Catch our latest insights in the previous episode to stay ahead!

     

    Resources:

    Link to the Gen AI 100: https://a16z.com/100-gen-ai-apps

    Find Bryan on Twitter: https://twitter.com/kirbyman01

    Find Olivia on Twitter: https://x.com/omooretweets

     

    Stay Updated: 

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

  • Finding a Single Source of AI Truth With Marty Chavez From Sixth Street

    AI transcript
    0:00:07 I cannot believe he said this to me in 1981, but he said, “The future of the life sciences
    0:00:10 is computational.”
    0:00:13 The through arc is my entire career.
    0:00:21 I’ve been building digital twins of some financial or scientific or industrial reality.
    0:00:26 We looked at that and thought, “Wow, we better do something about this very large, unhedged
    0:00:28 position.”
    0:00:31 That was the history of Dodd-Frank, like we don’t really know what went wrong in the
    0:00:33 financial crisis.
    0:00:34 So let’s just go regulate everything.
    0:00:41 And I think 99% of it was red tape that did not make the world a better place.
    0:00:45 This was one of the many early nuclear winters of AI.
    0:00:47 I walked right into it.
    0:00:48 Hello, everyone.
    0:00:51 Welcome back to the A16Z podcast.
    0:00:52 This is your host, Steph Smith.
    0:00:58 Now, today we have a very special episode from a new series called In the Vault.
    0:01:02 This series features some of the most influential voices across the finance ecosystem, including
    0:01:06 of course our guest today, Marty Chavez.
    0:01:10 Marty is now a partner and vice chairman of Sixth Street Partners; however, he’s long
    0:01:15 had a knack for spotting how a healthy serving of technology can disrupt other industries.
    0:01:20 From his PhD applying artificial intelligence to medicine, to being one of the founding
    0:01:23 engineers on the team that created SecDB.
    0:01:27 That’s the software that perhaps couldn’t predict the global financial crisis, but famously
    0:01:29 helped Goldman survive it.
    0:01:34 So today, Marty sits down with A16Z general partner, David Haber, and they talk about
    0:01:39 a lot more, including where the puck is moving in this new wave of technology and the role
    0:01:41 of regulators and lawmakers within that.
    0:01:45 And of course, if you like this episode, don’t forget to check out our new series, In the
    0:01:46 Vault.
    0:01:51 You can find that on our A16Z live feed, which we’ll also include in the show notes.
    0:01:55 There you can find other episodes with Global Payments CEO Jeff Sloan and Marco Argenti, the
    0:01:57 CIO of Goldman Sachs.
    0:02:08 All right, David, take it away.
    0:02:13 Hello and welcome to In the Vault, A16Z’s fintech podcast series, where we sit down
    0:02:16 with the most influential leaders in financial services.
    0:02:21 In these conversations, we offer a behind-the-scenes view of how these leaders guide and manage
    0:02:24 some of the country’s most consequential companies.
    0:02:28 We also dive into the key trends impacting the industry and, of course, discuss how AI
    0:02:30 will shape the future.
    0:02:33 Today we’re excited to have Marty Chavez on the show.
    0:02:37 Marty is currently a partner and vice chairman of Sixth Street Partners, a global investment
    0:02:41 firm with more than 75 billion in assets under management.
    0:02:45 Prior to Sixth Street, Marty spent over two decades at Goldman Sachs, where he held a
    0:02:50 variety of senior roles, including chief information officer, chief financial officer, head of
    0:02:54 global markets and served as a senior partner on the firm’s management committee.
    0:02:58 He was also one of the founding engineers behind the legendary software system SecDB,
    0:03:03 which many believe helped Goldman avoid the worst of the global financial crisis.
    0:03:07 In our conversation, Marty talks through the evolution of technology in financial services,
    0:03:10 and the potential impact of artificial intelligence.
    0:03:11 Let’s get started.
    0:03:15 As a reminder, the content here is for informational purposes only.
    0:03:19 It should not be taken as legal, business, tax, or investment advice, or be used to evaluate
    0:03:24 any investment or security, and is not directed at any investors or potential investors in
    0:03:26 any a16z fund.
    0:03:29 For more details, please see a16z.com/disclosures.
    0:03:31 Awesome, Marty.
    0:03:32 Thank you so much for being here.
    0:03:33 We really appreciate it.
    0:03:34 David, it’s a pleasure.
    0:03:36 I’ve been looking forward to this.
    0:03:38 Marty, you’ve had a fascinating career.
    0:03:43 Obviously, you’ve played a really pivotal role in turning the Wall Street trading business
    0:03:46 into a software business, especially during your time at Goldman Sachs and also now at
    0:03:47 Sixth Street.
    0:03:52 But you also serve on the boards of the Broad Institute, on Stanford Medicine, and a bunch
    0:03:53 of amazing companies.
    0:03:58 Maybe walk us through your career arc, and what is sort of the through line in those
    0:03:59 experiences?
    0:04:03 Well, let me talk about a few of the things I did, and then the arc will become apparent.
    0:04:06 So, I grew up in Albuquerque, New Mexico.
    0:04:13 I had a moment, really, like the movie The Graduate, when I was about 10, and my father
    0:04:21 put his arm around my shoulder and said, “Martin, computers are the future, and you will be
    0:04:23 really good at computers.”
    0:04:29 And this was 1974, and it was maybe not obvious to everybody.
    0:04:31 It was obvious to my father.
    0:04:35 He was a technical illustrator at one of the national laboratories, and there was this
    0:04:43 huge computer that they had just bought that his team used to draw these beautiful blueprints
    0:04:48 for the weapons in the nuclear arsenal, and they really had the latest and greatest equipment
    0:04:53 when it was very clunky and very expensive, and my dad knew where it was going.
    0:04:59 So, in New Mexico, you don’t have a ton of choices, especially at that time.
    0:05:03 It was basically tourism and the military-industrial complex.
    0:05:10 And so, I went for the military-industrial complex, and my very first summer job when
    0:05:15 I was 16 was at the Air Force Weapons Lab in Albuquerque.
    0:05:22 The government had decided that blowing up bombs in the Nevada desert was really problematic
    0:05:28 in a lot of ways, and some scientists had this idea, crazy at the time, that we could
    0:05:34 simulate the explosion of bombs rather than actually detonating them.
    0:05:39 And they had one of the early Cray One supercomputers, and so for a little computer geek kid, this
    0:05:46 was an amazing opportunity and my very first job was working on these big Fortran programs
    0:05:53 that would use Monte Carlo simulations, like an early baptism in that technique, and you
    0:05:59 would simulate individual Compton electrons being scattered out of a neutron bomb explosion,
    0:06:04 and then calculate the electromagnetic pulse that arose from all that scattering, and my
    0:06:11 job was to convert this program from MKS units to electron rest mass units, and so that certainly
    0:06:17 seemed more interesting to me than jobs in the tourism business, and so I did that, and
    0:06:24 then the next big moment was I went to Harvard as a kid, and I took sophomore standing.
    0:06:26 And did you, by any chance?
    0:06:27 Did you do sophomore standing?
    0:06:31 I didn’t do sophomore standing, I also went to Harvard, I think we also studied, you studied
    0:06:34 biochemistry, so yeah.
    0:06:39 So you have to declare a major, a concentration right away if you take sophomore standing,
    0:06:43 and I didn’t know that, and I didn’t know what major I was going to declare, I was going
    0:06:48 to be some kind of science, for sure, and I went to the science center, and the science
    0:06:54 professors were recruiting for their departments, and I remember Steve Harrison sitting opposite
    0:06:57 a table saying, “What are you?”
    0:07:04 And it was a little bit like a Hogwarts question, I suppose, and I said, “I’m a computer scientist,”
    0:07:10 and I cannot believe he said this to me in 1981, but he said, “The future of the life
    0:07:17 sciences is computational,” and that was amazing, right, and so profound, and so prescient,
    0:07:22 and I thought, “Wow, this must be true,” and he said, “We’ll construct a biochem major
    0:07:28 just for you, and we’ll emphasize simulation, we’ll emphasize building digital twins of
    0:07:34 living systems,” and so I walked right into his lab, which was doing some of the early
    0:07:42 work on x-ray crystallography of protein capsids and working to set up the protein data bank,
    0:07:47 and who knew that, well, even back then, he wanted to solve the protein folding problem,
    0:07:51 and I remember he said it might take 50 years, it might take 100 years, and we might never
    0:07:55 figure it out, and that’s obviously really important, because that protein data bank
    0:08:01 was the raw data for AlphaFold, which later came in and solved the problem, and so the
    0:08:03 through arc is my entire career.
    0:08:11 I’ve been building digital twins of some financial or scientific or industrial reality, and the
    0:08:16 amazing thing about a digital twin is you can do all kinds of experiments, and you can
    0:08:22 ask all kinds of questions that would be dangerous or impossible to ask or perform in reality,
    0:08:28 and then you can change your actions based on the answers to those questions, and so for
    0:08:33 Wall Street, if you’ve got a high-fidelity model of your trading business, which was
    0:08:39 something that I, with many other people, worked on as part of a huge team that made
    0:08:45 SecDB happen, then you could take that model and you could ask all kinds of counterfactual
    0:08:51 or what-if questions, and as the CEO of Goldman Sachs, Lloyd Blankfein, who really commissioned
    0:08:58 and sponsored this work for decades, would say, “We are not predicting the future.
    0:09:03 We are excellent predictors of the present,” and I’ve been doing some variation of that
    0:09:04 ever since.
    0:09:05 That’s fascinating.
    0:09:09 I do want to spend more time kind of digging into SecDB, because that was also a
    0:09:13 prescient decision, obviously, during the financial crisis, but maybe just going back.
    0:09:18 I know you ended up doing some graduate work in healthcare and in AI, kind of how did you
    0:09:19 go from that into Wall Street?
    0:09:24 Maybe walk us through that transition, because it’s not probably obvious, maybe for most,
    0:09:28 and then would love to kind of dig into your time at Goldman and as a founder, et cetera.
    0:09:36 I got so excited about these problems of building digital twins of biology that it seemed obvious
    0:09:41 to me that continuing that in grad school was the right thing to do.
    0:09:46 I actually wanted to go ahead and start making money, and I really owe it to my mom, who convinced
    0:09:50 me that if I didn’t get a PhD then, I wasn’t going to do it.
    0:09:53 I’m sure she was right about that, and so I applied to Stanford.
    0:10:01 That was my dream school, and so what happened is I was working on this program, Artificial
    0:10:09 Intelligence in Medicine, that had originated at Stanford under Ted Shortliff, who was extremely
    0:10:15 well known even back then for building one of the first expert systems to diagnose blood
    0:10:18 bacterial infections.
    0:10:26 I joined his program and we and a bunch of my colleagues in the program took his work
    0:10:32 and thought, “Can we put this work, this expert system inference, in a formal Bayesian probabilistic
    0:10:33 framework?”
    0:10:38 The answer is you can, but the downside is it’s computationally intractable.
    0:10:46 My PhD was finding fast randomized approximations to get provably nearly correct answers in
    0:10:47 a shorter period of time.
    0:10:52 This was amazing as a project to work on, but we realized pretty early on that the computers
    0:10:58 were way too slow to get anywhere close to the kinds of problems we wanted to solve.
    0:11:03 The actual problem of diagnosis in general internal medicine is you’ve got about a thousand
    0:11:10 disease categories and about 10,000 various clinical laboratory findings or manifestations
    0:11:12 or symptoms.
    0:11:16 The joint probability distribution that you have to calculate is therefore on the order
    0:11:20 of 1,000 to the 10,000, and this is a big problem.
    0:11:26 We made some inroads, but it was clear that the computers were just not fast enough.
    0:11:31 We were all despondent, and this was one of the many early nuclear winters of AI.
    0:11:33 I walked right into it.
    0:11:35 I stopped saying artificial intelligence.
    0:11:37 I was embarrassed.
    0:11:41 This is not anything like artificial intelligence.
    0:11:48 A bunch of us were casting around looking for other things to do, and I didn’t feel too
    0:11:54 special as I got a letter in my box at the department, and the letter was from a head
    0:11:57 hunter that Goldman Sachs had engaged.
    0:11:58 I remember the letter.
    0:11:59 I probably have it somewhere.
    0:12:04 It said, “I’ve been asked to make a list of entrepreneurs in Silicon Valley with PhDs
    0:12:09 in computer science from Stanford, and you are on my list.”
    0:12:15 In 1993, before LinkedIn, I had to go do some digging to construct that list.
    0:12:22 I thought, “I’m broke, and AI isn’t going anywhere anytime soon, and I have no idea
    0:12:26 what to do, and I have a bunch of college friends in New York, and I’ll scam this bank
    0:12:32 for a free trip,” and that’s how I ended up at Goldman Sachs, and it didn’t seem auspicious.
    0:12:34 I just liked the idea.
    0:12:37 They were doing a project that seemed insane.
    0:12:45 The project was we’re going to build a distributed, transactionally protected, object-oriented
    0:12:50 database that’s going to contain our foreign exchange trading business, which is inherently
    0:12:55 a global business, so we can’t trade out of Excel spreadsheets, and we need somebody
    0:13:02 to write a database from scratch in C, and fortunately, I had not taken the database
    0:13:06 classes at Harvard, because if I had, I would have said, “That’s crazy.
    0:13:10 Why would you write a database from scratch, and I don’t know anything about databases,”
    0:13:17 and so I just had the fortune to join as the fourth engineer on the three-person core
    0:13:22 SecDB design team, and then a very lucky move.
    0:13:27 One day, the boss comes into my office and said, “The desk strategist for the commodities
    0:13:29 business has resigned.
    0:13:30 Congratulations.
    0:13:36 You are the new commodity strategist, and go out onto the trading desk and introduce yourself.”
    0:13:41 He was never going to introduce me to them, and we were kind of scared of them, to be
    0:13:46 honest, and so there I was in the middle of the oil trading desk, kind of an odd place
    0:13:54 for a gay Hispanic computer geek to be in 1994 Wall Street.
    0:13:57 It’s such an amazing story, and one of my favorite lines, which I believe and I repeat
    0:14:02 often, is that opportunities live between fields of expertise, and I personally love
    0:14:03 exploring those intersections.
    0:14:06 I feel like your career has sort of been at these intersections.
    0:14:09 Maybe fast forward kind of into the financial crisis.
    0:14:13 Famously, my understanding is that SecDB really helped the firm navigate that period, and
    0:14:15 really same global stack.
    0:14:21 So what was it about SecDB that was different than other Wall Street firms who lost billions
    0:14:24 of millions of dollars in that moment, and how did you guys sort of navigate that?
    0:14:25 Yes.
    0:14:29 Well, this is where we’re going to start to get into the pop culture, because of course
    0:14:33 you have to mention the big short when you start talking about these things, right?
    0:14:41 And so, SecDB showed the legendary CFO of Goldman Sachs during the financial crisis,
    0:14:48 David Vineer, that we and everybody else had a very large position in collateralized debt
    0:14:52 obligations, CDOs that were rated AAA.
    0:14:58 So in SecDB, it’s another thing, and it has a price, and that price can go up and down
    0:15:02 and there’s simulations where it gets shocked according to probability distribution, and
    0:15:09 then there’s nonparametric or scenario based shocks, and we looked at that and thought,
    0:15:15 wow, we better do something about this very large unhedged position, namely, sell it down
    0:15:16 or hedge it.
    0:15:19 We didn’t know that the financial crisis was coming.
    0:15:25 Of course, we got in the press and elsewhere accused of all kinds of crazy things.
    0:15:30 Like, they were the only ones who hedged, so they must have known it was coming.
    0:15:35 We were just predictors of the present and thought, better hedge this position, hence
    0:15:36 the big short.
    0:15:43 And the question was, if Lehman fails, what happens then?
    0:15:53 And we talk about Lehman as if it is a single thing, we had risk on the books to 47 distinct
    0:16:00 Lehman entities with complex subsidiary guarantee, non-guarantee, collateralized, non-collateralized
    0:16:01 relationships.
    0:16:07 And so, it was super complicated, but in SecDB, it was all in there, and you could just flip
    0:16:08 it around.
    0:16:11 You could just as easily run the report from the counterpart side.
    0:16:13 Now, I make it sound like it was perfect.
    0:16:14 It was a little less than perfect.
    0:16:20 We had to write a lot of software that weekend, but the point is, we had everything in one
    0:16:23 virtual place and it was a matter of bringing it together.
    0:16:26 So, it’s also part of the legend, but it’s also factual.
    0:16:37 We had our courier show up at Lehman’s headquarters within an hour of its filing bankruptcy protection
    0:16:45 for the 47 entities, and we had 47 sheets of paper with our closeout claim against each
    0:16:50 of those entities rolled up firm-wide across all the businesses.
    0:16:58 And it took many of the major institutions on Wall Street months to do this.
    0:17:04 And so, that was the power of SecDB, and of course, it was wildly imperfect, but it was
    0:17:06 something that nobody else had.
    0:17:12 Just to piggyback on that last point, what impact has regulation had historically on
    0:17:14 technology’s impact on financial services?
    0:17:19 And I think about the different asset classes, for example, in global markets that shifted
    0:17:22 to be traded electronically, right?
    0:17:30 Was it often historically driven by regulatory change, emergent technologies, both I’m curious
    0:17:32 about that and also how it informs the future?
    0:17:33 Yes.
    0:17:38 Well, so regulation is a powerful driver of change, and so is technological change.
    0:17:46 And some things are just inevitable. I’m a strong believer in capitalism with constraints and
    0:17:51 rules, and we can, and we’ll have a vigorous debate about the nature of the rules and the
    0:17:56 depth of the rules and who writes the rules and how they’re implemented and all that matters
    0:17:57 hugely.
    0:18:01 But to say, oh, we don’t need any rules or trust us, we’ll look after ourselves.
    0:18:04 I just haven’t seen that work very well.
    0:18:08 And so, in some cases, the regulators will say something.
    0:18:15 For instance, in the Dodd-Frank legislation, there’s a very short paragraph that says that
    0:18:22 the Federal Reserve shall supervise a simulation, it was called the DFAST simulation, the Dodd-Frank,
    0:18:25 and I don’t even remember what the rest stands for, right?
    0:18:31 And that will be part of the job of the Federal Reserve, a simulation of how banks will perform
    0:18:34 in a severely adverse scenario.
    0:18:37 And that was a powerful concept, right?
    0:18:42 You have to simulate the cash flow, the balance sheet, the income statement, several quarters
    0:18:44 forward in the future.
    0:18:49 None of this was specified in detail in the statute, but then the regulators came in and
    0:18:54 really ran with it and said, you will simulate nine quarters
    0:18:56 into the future, right?
    0:18:59 The whole bank, all of it, end to end.
    0:19:06 And then, in a very important move, the acting supervisor for regulation at the time, Dan
    0:19:13 Tarullo, the Federal Reserve Governor, said, we’re going to link that simulation to capital actions,
    0:19:18 whether you get to pay a dividend or whether you get to buy your shares back or whether
    0:19:20 you get to pay your people, right?
    0:19:25 Because he knew that that would get everybody’s attention if it’s just a simulation.
    0:19:30 That’s one thing, but if you need to do it right before you can pay anybody, including
    0:19:35 your shareholders and your people, then you’re going to put an awful lot of effort into it.
    0:19:41 So that caused a massive change and made the system massively safer and sounder.
    0:19:43 We saw that in the pandemic.
    0:19:49 There’s actually a powerful lesson for us in the early days of electronic trading.
    0:19:56 For the early days of artificial intelligence, right, there was a huge effort by the regulators
    0:20:02 to say, we’ve got to understand what these algos are thinking because they could manipulate
    0:20:03 the market.
    0:20:04 They could spoof the market.
    0:20:06 They could crash the market.
    0:20:11 And we would always argue, you’re never going to be able to figure out or understand what
    0:20:12 they are thinking.
    0:20:18 That’s a version of the halting problem, but at the boundary between a computer doing
    0:20:24 some thinking and the real world, there’s some API, there’s some boundary.
    0:20:30 And at the boundary, just like in the old days of railroad control, at those junctions,
    0:20:35 you better make sure that two trains can’t get on a collision track, right?
    0:20:38 And it’s the junction where it’s going to happen.
    0:20:41 But then when the trains are just running on the track, just leave them running on the
    0:20:42 track.
    0:20:44 Just make sure they’re on the right track.
    0:20:49 That’s going to be an important principle for LLMs and AIs generally.
As they start acting as agents and causing change in the world, we have to care a lot about
    0:20:54 those boundaries.
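The railroad-junction idea can be sketched in code: you never inspect what the algo is "thinking"; you only enforce limits at the boundary where its orders would touch the market. The limits, field names, and thresholds below are all made-up assumptions for illustration.

```python
# Pre-trade checks at the API boundary; the algo upstream stays a black box.
MAX_ORDER_QTY = 10_000       # per-order size limit
MAX_NOTIONAL = 1_000_000.0   # per-order notional limit
PRICE_COLLAR = 0.10          # reject prices more than 10% from the reference price

def gate_order(order, reference_price):
    """Return (allowed, reason) for a single order at the junction."""
    if order["qty"] > MAX_ORDER_QTY:
        return False, "quantity exceeds per-order limit"
    if order["qty"] * order["price"] > MAX_NOTIONAL:
        return False, "notional exceeds per-order limit"
    if abs(order["price"] - reference_price) / reference_price > PRICE_COLLAR:
        return False, "price outside collar"
    return True, "ok"

ok, why = gate_order({"qty": 500, "price": 101.0}, reference_price=100.0)
blocked, why_blocked = gate_order({"qty": 500, "price": 120.0}, reference_price=100.0)
```

Between the junctions, the trains just run on the track; the checks live only where collisions can happen.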
And maybe that's a good transition to the present day.
    0:21:01 You were a huge force in the digitization of Goldman Sachs and Wall Street in general
    0:21:05 and kind of the rise of the developer as decision maker.
    0:21:09 Maybe talk a little bit about generative AI specifically today.
    0:21:13 How is this technology different from the AI of your PhD in 1991?
    0:21:17 And what are the impacts that you see, not just in financial services, but perhaps in
    0:21:18 other industries as well?
Well, for full disclosure, I remember, late '80s, early '90s, being in this program at Stanford.
    0:21:27 We were the Bayesians, right?
And then we would look at these connectionists, the neural network people.
    0:21:33 And I hate to say it, but it’s true.
    0:21:34 We felt sorry for them.
We thought, like, that'll never work. Simulate neurons? You've got to be kidding.
Well, so they just kept simulating those neurons, and look what happened.
    0:21:48 Now, in some ways, there’s nothing new under the sun.
I had a fantastic talk not so long ago with Yoshua Bengio, who's really one of the four
    0:22:01 or five luminaries in this renaissance of AI that’s delivering these incredible results.
    0:22:09 And he was talking about how his work is based on taking those old Bayesian decision networks
    0:22:14 and coupling them with neural networks, where the neural networks designed the Bayesian
    0:22:16 networks and vice versa.
    0:22:23 And so some of these ideas are coming back, but it is safe to say that the thread of research,
    0:22:29 or the river of research that took this connectionist neural network approach is the one that’s
    0:22:31 bearing all the fruit right now.
And David, the way I would describe all of those algorithms, because they are just software, right? Everything is Turing equivalent, right?
    0:22:42 But they’re very interesting software.
    0:22:47 They started off with images, images of cats on the internet, people are putting up pictures
    0:22:48 of cats.
    0:22:52 Well, now you’ve got billions of images that people have labeled as saying this image contains
    0:22:53 a cat.
    0:22:56 And you can assume all the other images don’t contain a cat.
    0:23:00 And you can train a network to see whether there’s a cat or not.
    0:23:02 And then all the versions of that, how old is this cat?
    0:23:04 Is this cat ill?
    0:23:05 What illness does it have?
    0:23:12 All of these things over the last maybe starting 10 years ago, you started to see amazing results.
    0:23:17 And then after the transformer paper, now we’ve got another version of it, which is fill
    0:23:22 in the blank or predict what comes next or predict what came before.
    0:23:28 And these are the transformers and all the chat bots that we have right now.
    0:23:29 It’s amazing.
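The "predict what comes next" objective can be shown in miniature with nothing but bigram counting; the real transformers learn the same kind of conditional distribution with billions of parameters instead of a count table, so this is a toy illustration only.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count, for each word, which word follows it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most likely next word given the training data."""
    return following[word].most_common(1)[0][0]
```

Fill-in-the-blank and predict-what-came-before are variants of the same trick: learn the distribution of a missing token conditioned on the tokens around it.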
    0:23:34 I wish we all understood in more detail how they do the things that they do.
    0:23:36 And we’re starting to understand it.
    0:23:38 It all depends on the training set.
    0:23:43 And it also depends crucially on a stationary distribution, right?
So the reason all this works on "is it a cat or not a cat" is that cats change very slowly in evolutionary time.
    0:23:52 They don’t change from day to day.
For the things that change from day to day, such as markets, it's a lot less clear how powerful these techniques are going to be.
    0:24:02 But here they are, they’re doing amazing things.
    0:24:09 We’re using this in my firm and we’re using it in production and we’re deeply aware of
    0:24:10 all the risks.
    0:24:12 And we have a lot of policies around it.
    0:24:22 It reminds me a lot of the early Wild West days of electronic trading where we’re authorizing
    0:24:29 a few of us to do some R&D, but very careful about what we put into production.
    0:24:31 And we’re starting with the easy things.
It feels like a unique moment, or maybe, to me, there's a lot of momentum happening both bottom up and top down. Bottom up because, you know, something like 40% of the Fortune 100 is using GitHub Copilot or some other AI product from OpenAI or Microsoft.
    0:24:54 And then conversely, every CEO or every board member, right, can plug a prompt into one
    0:24:59 of these models and kind of understand intuitively the magic and imagine the impact that it could
    0:25:00 have on their business.
    0:25:04 And so it seems like the employees of many of these companies want the productivity gains
    0:25:06 that you’re describing.
    0:25:11 Boards are like, you know, how is this going to impact the human capital efficiency of
    0:25:12 our company?
    0:25:13 Like where can we deploy this technology?
    0:25:18 I guess when other CEOs of large companies, you know, come to you for your advice, like
    0:25:23 how are you advising them on how to deploy AI in their organizations?
    0:25:24 Where within those companies?
    0:25:26 Like what’s the opportunity you see maybe in the near term and, you know, in the middle
    0:25:28 or long-term?
    0:25:30 Really first order of business.
    0:25:36 And this is something that we worked on at Goldman for a long time and I’m happy that
    0:25:41 we left Goldman in a place where it’s going to be able to capitalize on Gen AI really,
    0:25:47 really quickly, which is having a single source of truth for all the data across the
enterprise, a time-traveling source of truth.
    0:25:52 So what is true today?
And what did we know to be true at the close of business on some day three years ago, right?
    0:26:04 And we have all of that and it’s cleaned and it’s curated and it’s named.
    0:26:10 And we know that we can rely on it because all of this training of AI’s is still garbage
    0:26:12 in garbage out.
    0:26:19 And so if you don’t have ground truth, then all you’re going to do is fret about hallucinations
    0:26:26 and you’re just going to be caught in hallucinations and imaginings that are incorrect and not actionable.
    0:26:32 And so getting your single source of truth right, that data engineering problem, I think
    0:26:35 a lot of companies have done a terrible job of it.
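A minimal sketch of what a "time-traveling" source of truth means in practice: store every fact with the date you learned it, append-only, so you can ask both "what is true now?" and "what did we know at close of business on some past day?" The schema, keys, and dates below are illustrative assumptions, not any firm's actual data model.

```python
from datetime import date

# (key, value, knowledge_date): append-only, never overwrite history.
facts = [
    ("counterparty_rating:ACME", "A",   date(2021, 3, 1)),
    ("counterparty_rating:ACME", "BBB", date(2023, 6, 15)),  # later revision
]

def as_of(key, knowledge_date):
    """Latest value for key among facts known on or before knowledge_date."""
    known = [(d, v) for k, v, d in facts if k == key and d <= knowledge_date]
    return max(known)[1] if known else None

view_now = as_of("counterparty_rating:ACME", date(2024, 5, 1))
view_then = as_of("counterparty_rating:ACME", date(2022, 1, 1))
```

With ground truth organized this way, a model trained or prompted on the data sees what was actually believed at the time, instead of hallucinating around inconsistent copies.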
I'm really excited about the new Gemini 1.5 context window, a million tokens. That one I just want to shout from the mountaintops. If you've been in this game and you've been using RAG, retrieval-augmented generation, which is powerful, you run into this problem of: I've got to take a doc, a complicated doc that references pieces of itself, and chunk it, and you're going to lose all of that unless you have a really big context window. Overcoming that quadratic time complexity in the length of the context window is just monumental. And I think over the next few months, you're going to see a lot of those changes: problems that were really hard are going to become really easy.
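The chunking problem described here can be shown in a few lines: naive fixed-size chunks split a document that references its own earlier sections, so a retrieved chunk can arrive without the definitions it depends on. A large enough context window lets you skip chunking and pass the whole document. The chunk size and the sample document are illustrative.

```python
def chunk(tokens, size):
    """Naive fixed-size chunking, no overlap."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

doc = ("Section 1 defines the Borrower. "
       "Section 9 obligations apply to the Borrower as defined in Section 1.").split()

chunks = chunk(doc, size=8)

# Only the first chunk carries the definition; the clause in a later chunk
# arrives at retrieval time without it.
has_definition = ["defines" in c for c in chunks]
```

Overlapping windows and smarter splitters mitigate this, but self-referential documents like contracts are exactly where they break down.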
    0:27:17 I don’t know.
    0:27:18 What do you think?
Look, I think every company, using Goldman maybe as an analogy: so much of the organization, and in particular many parts of the Federation, can and should be leveraging software, and a lot of those workflows can be augmented with AI, right, from legal to compliance, to vendor onboarding, to risk management, as we're talking about.
    0:27:42 But I think it’s going to have a profound impact on the enterprise, obviously, we’re
    0:27:43 quite biased.
    0:27:48 I guess one topic that people debate quite often is the impact of regulation on the adoption
    0:27:49 of this technology.
I'm just curious about your view on the government's role in this, in generative AI, and what advice you have on accelerating this versus what responsibility they have.
Well, one of the things that I learned during the financial crisis was a huge amount of respect for the regulators and the lawmakers. They have a really tough job, and it's really important to collaborate with them and to become a trusted source of knowledge about how a business works.
    0:28:23 And I just lament the number of people who just go into a regulator and they’re just
    0:28:28 talking their own book and hoping that the regulator or lawmaker won’t understand it.
    0:28:33 I think that is a terrible way to approach it and has the very likely risk of just making
    0:28:36 them angry, right, which is definitely not the right outcome.
So I've been spending a lot of time with regulators and legislators in a bunch of different jurisdictions
    0:28:51 and you already heard a bit of what I have to say, which is let’s please not take the
    0:28:55 approach that we first took with electronic trading.
    0:29:02 That approach was write a big document about how your electronic trading algo works.
    0:29:07 And then step two was hand that document over to a control group who will then read the
    0:29:11 document and assert the correctness of the algo, right?
    0:29:13 This is the halting problem squared.
    0:29:17 It’s not just a bad idea, it’s an impossible idea.
    0:29:23 And instead, let’s put a lot of emphasis, a lot of standards and attestations at all
    0:29:29 the places where there’s a real world interface, especially where there’s a real world interface
    0:29:32 to another computer, right?
    0:29:39 So the analogy is in electronic trading, there was not a lot you could do to prevent a trader
    0:29:47 from shouting into a phone an order that would take your bank down, right?
    0:29:51 How are you going to prevent that from happening, right?
    0:29:58 But what you really worried about was computers that were putting in millions of those trades,
    0:29:59 right?
    0:30:03 Even if they were very small, they could do it very fast and you could cause terrible
    0:30:04 things to happen.
    0:30:10 And so another thing I’m always telling the regulators is, please, please, the concept
    0:30:11 of liability, right?
    0:30:18 They start with this idea, let’s make the LLM creators liable for every bad thing that
    0:30:20 happens with an LLM.
    0:30:28 To me, that is the exact equivalent of saying, let’s make Microsoft liable for every bad
    0:30:31 thing that someone does on a Windows computer, right?
    0:30:37 They’re fully general, and so these LLMs are a lot like operating systems.
    0:30:41 And so I think the regulation has to happen at these boundaries, at these intersections,
    0:30:45 at these control points first, and then see where we go.
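The same boundary principle applies to LLM agents: rather than trying to certify what the model is "thinking", you attest and enforce limits at the point where it acts on the world. The tool names, allow-list, and rate limit below are made-up assumptions, a sketch of the control point, not any real API.

```python
# Gate an agent's proposed tool calls at the real-world interface.
ALLOWED_TOOLS = {"read_account", "draft_email"}   # no money movement by default
MAX_CALLS_PER_MINUTE = 60

call_log = []  # timestamps (seconds) of allowed calls

def gate_tool_call(tool, args, now):
    """Allow or refuse a proposed action; the model itself stays a black box."""
    if tool not in ALLOWED_TOOLS:
        return False, f"tool '{tool}' not permitted"
    recent = [t for t in call_log if now - t < 60]
    if len(recent) >= MAX_CALLS_PER_MINUTE:
        return False, "rate limit exceeded"
    call_log.append(now)
    return True, "ok"

ok, _ = gate_tool_call("read_account", {"id": 42}, now=0)
blocked, reason = gate_tool_call("wire_transfer", {"amount": 1e6}, now=1)
```

This is the electronic-trading lesson restated: a human shouting one order into a phone is survivable; a computer placing millions of them is not, so the junction is where the control belongs.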
    0:30:50 And I would like to see some of these regulations in place sooner rather than later.
    0:30:54 Unfortunately, the pattern of human history is we usually wait for something really bad
    0:31:00 to happen and then go put in the cleanup regulations after the fact and generally overdo it.
    0:31:03 That was the history of Dodd-Frank.
    0:31:06 We don’t really know what went wrong in the financial crisis.
    0:31:07 So let’s just go regulate everything.
    0:31:13 And I think 99% of it was red tape that did not make the world a better place.
And some of it, such as the CCAR regulations, was profound and did make the system safer
    0:31:21 and sounder.
    0:31:26 And I would want us to do those things first and not just the red tape.
    0:31:29 Well, I know you’re also very passionate about life sciences.
You started your graduate career there, and I believe you now sit on the board of Recursion Pharmaceuticals.
    0:31:35 I do, yes.
Yeah, maybe talk through the implications that you're seeing for Gen AI in life sciences and biotech in particular.
    0:31:44 Well, it’s epic, isn’t it?
    0:31:48 I had an amazing moment just a couple of months ago.
I had the opportunity of being the fireside chat host for Jensen of NVIDIA at the JPMorgan
    0:31:56 Healthcare event.
And there was a night that Recursion was sponsoring.
    0:32:04 And we really talked about everything he learned from chip design.
So Jensen, incredibly modest, will say, well, he was just part of that generation of chip designers who were the first to use software to design chips from scratch. And it was really the only way he knew how to design them.
    0:32:24 And he likes to say that NVIDIA is a software company, which it is, right?
    0:32:27 But that seems counterintuitive, because it’s supposed to be a hardware company.
    0:32:33 And he talks about the layers and layers of simulations that go into his business.
Those layers do not go all the way to Schrödinger's equation. And we can't even do a good job on small molecules, right? Solving Schrödinger's equation for small molecules.
    0:32:48 But it does go very low, and it goes very high to what algorithm is this chip running.
    0:32:51 And that’s all software simulation.
    0:32:57 And he said in that chat that at some point, he then has to press a button that says, “Take
    0:33:03 this chip and fabricate it,” and the pressing of that button costs $500 million.
    0:33:08 And so you really want to have a lot of confidence in your simulations.
    0:33:14 Well, drugs have that flavor very much so, except they cost a lot more than $500 million
    0:33:17 by the time they get through phase three.
    0:33:25 And so it seems obvious to all of us that you ought to be able to do these kinds of simulations
    0:33:26 and find the drugs.
Now, the first step is going to be to just slightly improve the probability of success of a phase two or phase three trial.
    0:33:38 That’s going to be incredibly valuable, because right now, so many of them fail, and they’re
    0:33:41 multi-billion-dollar failures.
    0:33:45 But eventually, will we be able to just find the drug?
    0:33:49 The needle in the haystack nature of this problem is mind-blowing.
Depending on the size of the carbon chain, but let's just pick a size, there are about 10,000 trillion possible organic compounds, and there are 4,000 approved drugs globally.
    0:34:03 So that’s a lot of zeros.
    0:34:07 And if the AIs can help us navigate that space, that’s going to be huge.
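The "lot of zeros" can be made explicit with back-of-the-envelope arithmetic on the two numbers quoted in the conversation:

```python
# 10,000 trillion candidate organic compounds vs. roughly 4,000 approved drugs.
candidate_compounds = 10_000 * 10**12   # 10,000 trillion = 1e16
approved_drugs = 4_000

# Roughly 2.5 trillion candidate compounds for every approved drug.
compounds_per_drug = candidate_compounds // approved_drugs
```

That ratio, about 2.5 x 10^12 candidates per approved drug, is the needle-in-the-haystack scale of the search problem.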
    0:34:12 But I’m going to bet that we will map biology in this way.
    0:34:18 It’s just, biology is so many orders of magnitude, more complicated than the most complicated
    0:34:19 chip.
    0:34:24 And we don’t even know how many orders of magnitude and how many layers of abstraction
    0:34:25 are in there.
But the question is, do we have enough data so that we can train the LLMs to infer the
    0:34:32 rest of biology?
    0:34:35 Or do we need an awful lot more data?
    0:34:38 And I think everybody’s clear we need more data.
I think what we're less clear on is, do we need 10 orders of magnitude more data, or 100 orders of magnitude more?
    0:34:47 We just don’t know.
    0:34:49 Amazing time to be alive.
I say this all the time at the Alphabet board: what an incredible group of people. And when I hear Sergey and Larry say it's the best time ever to be a computer scientist, of course, I agree with that.
    0:35:04 It’s magical.
    0:35:05 Totally.
    0:35:06 Awesome.
    0:35:07 Well, Marty, thank you so much for your time.
    0:35:08 Always a pleasure.
    0:35:11 You’ve had such a fascinating career, and we really appreciate you spending time with us.
    0:35:12 David, great talking with you.
    0:35:13 Be well.
    0:35:14 Thanks.
    0:35:18 I’d like to thank our guests for joining In the Vault.
    0:35:24 You can hear all of our episodes by going to a16z.com/podcasts.
    0:35:30 To learn more about the latest in fintech news, be sure to visit a16z.com/fintech and
    0:35:33 subscribe to our monthly fintech newsletter.
    0:35:33 Thanks for tuning in.

    a16z General Partner David Haber talks with Marty Chavez, vice chairman and partner at Sixth Street Partners, about the foundational role he’s had in merging technology and finance throughout his career, and the magical promises and regulatory pitfalls of AI.

    This episode is taken from “In the Vault”, a new audio podcast series by the a16z Fintech team. Each episode features the most influential figures in financial services to explore key trends impacting the industry and the pressing innovations that will shape our future. 

     

    Resources: 
    Listen to more of In the Vault: https://a16z.com/podcasts/a16z-live

    Find Marty on X: https://twitter.com/rmartinchavez

    Find David on X: https://twitter.com/dhaber

     

    Stay Updated: 

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://twitter.com/stephsmithio

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

  • A Big Week in AI: GPT-4o & Gemini Find Their Voice

    This was a big week in the world of AI, with both OpenAI and Google dropping significant updates. So big that we decided to break things down in a new format with our Consumer partners Bryan Kim and Justine Moore. We discuss the multi-modal companions that have found their voice, but also why not all audio is the same, and why several nuances like speed and personality really matter.

     

    Resources:

    OpenAI’s Spring announcement: https://openai.com/index/hello-gpt-4o/

    Google I/O announcements: https://blog.google/technology/ai/google-io-2024-100-announcements/

     

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z


     

     

  • Remaking the UI for AI

    Make sure to check out our new AI + a16z feed: https://link.chtbl.com/aiplusa16z
     

    a16z General Partner Anjney Midha joins the podcast to discuss what’s happening with hardware for artificial intelligence. Nvidia might have cornered the market on training workloads for now, but he believes there’s a big opportunity at the inference layer — especially for wearable or similar devices that can become a natural part of our everyday interactions. 

    Here’s one small passage that speaks to his larger thesis on where we’re heading:

    “I think why we’re seeing so many developers flock to Ollama is because there is a lot of demand from consumers to interact with language models in private ways. And that means that they’re going to have to figure out how to get the models to run locally without ever leaving without ever the user’s context, and data leaving the user’s device. And that’s going to result, I think, in a renaissance of new kinds of chips that are capable of handling massive workloads of inference on device.

    “We are yet to see those unlocked, but the good news is that open source models are phenomenal at unlocking efficiency.  The open source language model ecosystem is just so ravenous.”

    More from Anjney:

    The Quest for AGI: Q*, Self-Play, and Synthetic Data

    Making the Most of Open Source AI

    Safety in Numbers: Keeping AI Open

    Investing in Luma AI

    Follow everyone on X:

    Anjney Midha

    Derrick Harris

    Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

     


  • How Discord Became a Developer Platform

    In 2009 Discord cofounder and CEO, Jason Citron, started building tools and infrastructure for games. Fast forward to today and the platform has over 200 million monthly active users. 

    In this episode, Jason, alongside a16z General Partner Anjney Midha—who merged his company Ubiquity6 with Discord in 2021—shares insights on the nuances of community-driven product development, the shift from gamer to developer, and Discord’s longstanding commitment to platform extensibility. 

    Now, with Discord’s recent release of embeddable apps, what can we expect now that it’s easier than ever for developers to build? 

    Resources: 

    Find Jason on Twitter: https://twitter.com/jasoncitron

    Find Anjney on Twitter: https://twitter.com/AnjneyMidha

     
