0:00:06 Hi, everyone. Welcome to the a16z Podcast. I'm Sonal. Today's episode was recorded at 0:00:10 our most recent annual Innovation Summit in our pop-up podcast booth with me and Kevin 0:00:15 Kelly, founding executive editor of Wired Magazine and author of several books. 0:00:19 In this quick, literally hallway-style chat, I ask him about two of his big ideas: one, the 0:00:24 notion of 1,000 true fans, which sometimes people misinterpret or miss the nuances of, 0:00:29 and two, the idea of being able to sell our own attention versus our attention being sold 0:00:33 for very little. And we try to connect the dots between these and other ideas, including 0:00:38 some new, never-been-heard-before ones in between. The second idea was also covered in his most 0:00:42 recent book, The Inevitable, which we did a podcast on with Chris Dixon, and his conversation 0:00:46 with Marc at this summit is also available in this feed as well. 0:00:47 Welcome, Kevin. 0:00:50 It's a real delight to be here. Thanks for having me. 0:00:54 I think of you as one of the original thinkers of the future. We're just coming off of the summit. 0:00:59 One of the big themes was about the future of business models after advertising. This 0:01:02 is a talk Connie gave last year, and then today she went further on that, like what 0:01:07 happens when things become a super app. We had Kevin Chou from Forte talking about business 0:01:12 models for crypto-economics and gaming. And then we had Jonah Peretti and Chris Dixon talking 0:01:17 about the evolution of the web and how so much of the promise of the web in some ways 0:01:21 came about, but in other ways didn't because of the sort of albatross around our neck of 0:01:27 advertising as a business model. So one idea I remember you talking about in your book 0:01:30 that just blew me out of the water, it was so interesting, like it was a Kevin Kelly 0:01:36 signature idea, is this idea that you can actually, in the future, we may be able to, 0:01:42 quote, reverse our attention economy. You should actually explain this idea. 0:01:51 So the idea is, in some ways, disintermediating attention. So the advertising model is, let's 0:01:58 say I am a company that is selling a widget. And I want people to know about the widget. 0:02:05 I want attention to the widget. So the normal way is I will hire advertising agencies. I 0:02:13 will make ads that will go out and people will see the ad. So I'm paying an advertising 0:02:18 company and they're going to make an ad that will then take that attention from the consumer. 0:02:26 But you could actually short-circuit that rather than having a two-step. What if I paid 0:02:33 the audience directly for their attention? Exactly. And so let's say I send a call out 0:02:40 and say I will pay you 25 cents to watch this ad. So you're getting paid for your attention, 0:02:45 to give your attention to an ad. And it's not just ads, it could even be something like 0:02:54 email. And so you could set up something saying I'll charge 25 cents to read your email. 0:03:02 So you have to pay me for my attention. And so if it's true that attention is the only 0:03:09 scarcity that we have in this world of abundance, how come you and I are giving our attention 0:03:13 away for free? I completely agree, which is why I'm so glad we're finally talking about 0:03:17 this. So a couple of quick things.
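(A minimal back-of-envelope sketch, in Python, of the reverse-attention mechanism Kevin describes above: the sponsor pays viewers directly for verified attention, and a recipient sets a price on their own inbox. The prices, function names, and conversion numbers here are illustrative assumptions, not figures from any real platform.)

```python
# Toy model of a "reverse attention economy": the sponsor pays viewers directly,
# and a recipient charges senders for inbox attention. All numbers are assumed.

AD_PAYOUT_PER_VIEW = 0.25   # sponsor pays each viewer 25 cents for a verified view
INBOX_PRICE = 0.25          # recipient's price for reading an unsolicited email

def sponsor_cost(verified_views: int, payout: float = AD_PAYOUT_PER_VIEW) -> float:
    """Sponsor pays only for attention actually delivered, not impressions served."""
    return verified_views * payout

def spam_campaign_profit(messages: int, conversion_rate: float,
                         revenue_per_conversion: float,
                         inbox_price: float = INBOX_PRICE) -> float:
    """Spray-and-pray economics once every inbox charges for attention."""
    cost = messages * inbox_price
    revenue = messages * conversion_rate * revenue_per_conversion
    return revenue - cost

if __name__ == "__main__":
    print(sponsor_cost(10_000))                          # 2500.0 for 10k verified views
    # A spammer sending 1M messages at a 0.01% conversion worth $20 each:
    print(spam_campaign_profit(1_000_000, 0.0001, 20))   # -248000.0, so spam stops paying
```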
On the example you just gave about email, that's 0:03:22 a great example that's commonly cited for a way that we can fight the spam problem, 0:03:27 especially when you think about combining with crypto and blockchain economies where 0:03:31 you can actually do micropayments in a scalable way, because right now it's actually very cost-prohibitive 0:03:38 to charge someone 25 cents to read their email. And then if someone is a spammer, it's pretty 0:03:41 unlikely that they're going to do a spray-and-pray method to try to get your attention, 0:03:46 or even a spear phisher, whoever; all the bad economic models of the web get broken, to your 0:03:51 point, with this perfect example of email, because the bad actors are not incented to pay for 0:03:54 your attention. But I want to really dig a little deeper because your idea is a lot 0:03:59 more nuanced and I really want to pause on the profound implications of what you're saying. 0:04:05 So you're putting the power back into the consumer. The power of the attention is with us who 0:04:10 have it. We're surrendering it. We're giving it away for free when we should really be charging. 0:04:14 We want to have a technology that reverses that. So the power is back with us. And there's 0:04:18 a second aspect of that. Oh, good. This is what I want to hear. 0:04:25 Which is that in media and publications, in that world, the publication, the magazine, 0:04:33 the newspaper, whatever that portal is, they don't really have a choice about what advertisements 0:04:43 they run. That is something that's decided by the advertiser. But what if anybody could 0:04:49 run an ad and you would get the benefits of that ad if people clicked on it, if people 0:04:55 watched it. So what you have is you have an outsourced, crowdsourced, decentralized version of 0:05:03 an ad network where anybody is making an ad and anybody can run the ad. And you have 0:05:07 the money flowing through the system, again, using crypto or blockchain to kind of 0:05:13 keep track of things. But what that would mean is that you would have very, very creative 0:05:20 people making ads that worked, and the sponsors have to pay up when people actually watch 0:05:28 them. And so what I'm trying to do is to imagine a decentralized advertising system that puts 0:05:38 power back into the audience, but would require something like crypto or blockchain to maintain 0:05:42 the integrity and to have that financing. The provenance, economics, the alignment of 0:05:46 incentives, all the things, all the features. Sending the credit and money through as it flows through 0:05:51 these different things. So that's a possibility that we haven't really thought about before. 0:05:55 I love it. But lest people think this is so far off, because you proposed this idea in your 0:05:59 book The Inevitable, which is about the future, and who knows if that's five years, 10 years, 0:06:03 20 years, 100 years. Lest people think that's so far off, let me give a concrete example 0:06:10 today. So TikTok, basically what people are already doing in a not necessarily decentralized, 0:06:15 but certainly a bottom-up manner. The centralized platform is TikTok. They are essentially making 0:06:20 ads. And these are short viral clips where they are promoting some idea, a product. Because 0:06:23 if you think about an ad, it's simply an ad for anything, whether it's a product, an 0:06:28 idea, whatever. They're short, they go viral.
And the reason they go viral is, unlike on 0:06:33 YouTube where the algorithm is very optimized for people who are mature creators, have an 0:06:38 established track record, et cetera, because it's all purely AI-based, it's not basing 0:06:44 things on specified intent but learned intent, it can let anybody, any creator, have a clip 0:06:48 go viral, even if they don't have a huge following. And that's hugely powerful. So what's really 0:06:52 fascinating is that what you're basically describing is kind of already happening with TikTok. 0:06:56 And now we just need to add the economics of getting those creators paid. Because the other 0:07:00 thing that I think is super interesting about this is that when you have new models, business 0:07:05 models, it then in turn changes the creator economy 0:07:10 that feeds it, which is something you and I both care about. Not only unlocking creators who maybe didn't come out before in the current 0:07:14 model, but more importantly, you don't even have to get that big of a scale in order to 0:07:20 be successful. It actually ties to your original idea of 1,000 true fans. But the part 0:07:24 of your idea that people don't talk about as much, because they don't get past the 1,000 true fans, 0:07:29 is that not only do you get the 1,000 true fans, but you get the nodes next to them. So I'd 0:07:33 love for you to explain that. And then maybe we can connect the dots between this attention 0:07:36 economy back to 1,000 true fans. 0:07:42 Just to summarize the 1,000 true fans theory very, very quickly, which is that in a world 0:07:46 in which you have direct contact with your audience, when you're not going 0:07:52 through the intermediary of a publisher, the studio, a record label, but you actually 0:08:00 have your fans and you're getting the money directly from them, then if you could 0:08:04 get a certain amount of money from them directly every year, the number that you 0:08:10 would need to make a living is in the neighborhood of a thousand. So 1,000 true fans: if you could 0:08:16 get $100 a year from each of your fans, then that's $100,000. 0:08:22 So then that's what I would call a true fan, someone who's going to buy whatever you make, 0:08:26 you know, the hardcover, the softcover, the singles, the box set; they're going to travel 0:08:32 200 miles to see you. That's your true fan. But that's just your true fans. And your true 0:08:38 fans become basically marketers for this other concentric circle around them, which is kind 0:08:42 of the casual fan. And so it's your true fans who actually are doing the hard work 0:08:50 of publicizing and promoting you to this other, even larger circle. So you get the income, 0:08:55 not just of your true fans, but you get the larger income of that concentric circle that's 0:08:56 right next to it. 0:09:00 The concentric circle of fans around you. And the other aspect of 1,000 true fans is that in a world of 0:09:05 billions, now that we are global and we have a global system, even if there's only one 0:09:10 in a million people who are interested in your idea, you still have a thousand potential 0:09:16 people, you just have to find them. So almost any idea you can come up with, anything 0:09:22 that you can imagine, can probably find a thousand people on the planet to be true fans 0:09:29 of it. And so that process of finding your true fans is really the process that we want 0:09:33 to use, and we want to have tools that enable us to do that easier and easier.
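(To make the arithmetic Kevin just summarized concrete, here is a hedged sketch in Python. The $100-per-true-fan figure and the one-in-a-million example come from the conversation; the casual-fan numbers for the outer concentric circle are purely illustrative assumptions.)

```python
# Back-of-envelope math for the 1,000-true-fans model, including the concentric
# circle of casual fans around the true fans. Casual-fan numbers are assumed.

TRUE_FANS = 1_000
SPEND_PER_TRUE_FAN = 100        # dollars per year, per the conversation

CASUAL_FANS_PER_TRUE_FAN = 10   # assumed size of the next concentric circle
SPEND_PER_CASUAL_FAN = 5        # assumed occasional purchases per year

true_fan_income = TRUE_FANS * SPEND_PER_TRUE_FAN
casual_fan_income = TRUE_FANS * CASUAL_FANS_PER_TRUE_FAN * SPEND_PER_CASUAL_FAN

print(true_fan_income)                       # 100000, the canonical $100,000/year
print(true_fan_income + casual_fan_income)   # 150000 once the outer circle is included

# Finding them: even a one-in-a-million interest yields thousands of potential
# true fans in a connected world of roughly 7 billion people.
print(7_000_000_000 // 1_000_000)            # 7000 potential fans to locate
```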
0:09:36 You wrote about 1,000 true fans in what year was it again? 0:09:41 Gee, it was right before Kickstarter, it was probably like, I don't know, 2007 or something. 0:09:45 It was very prescient as always. So it was very early. Then Chris Anderson, our mutual 0:09:50 friend, wrote The Long Tail either before or after that, I'm forgetting when he wrote it, 0:09:51 2006. 0:09:53 I think he wrote it before. 0:09:56 And the idea of the long tail is that the internet lets you find these niche communities. 0:09:59 So that goes to the discovery aspect and you can actually create communities around niches. 0:10:04 Then you wrote 1,000 true fans and now today we're talking about this idea of monetizing 0:10:08 and reversing the attention economy. What I'm hearing you say when we connect all those 0:10:14 dots is that we now finally have a business model for those 1,000 true fans to monetize, 0:10:18 because what readers are essentially doing, if you imagine a world where the reader is 0:10:24 at the center of a future media web, where there's a million publications like this in 0:10:30 whatever form, podcast, newsletter, blog posts, doesn't matter, you know, decks, whatever: 0:10:37 we now have a way, and add crypto in for an economic internet, that empowers creators of 0:10:43 all kinds and in turn empowers readers to monetize their attention and essentially curate their 0:10:49 custom, personalized, perfectly curated, dream, holy grail paper of their choice, one that'd 0:10:52 be printed on demand, by selling their attention. 0:10:57 I even have another idea that I think was patentable. Patents don't give you much. So I decided 0:11:02 to publish it instead of patenting it and it was called, I'll pay you to read my book. 0:11:04 Tell me more about this. I've never heard this. 0:11:09 So the problem with books these days is I don't care about selling books. I want people 0:11:15 to read my books, and attention is so, so scarce that I said, oh, look, I'll 0:11:19 pay you to read my book and I'm going to make money doing it. 0:11:20 How? 0:11:29 So it's an ebook and what it is, is I'll sell the book for, let's say, $4 and then I will 0:11:35 pay you $5 if you finish reading the book, and we can tell, Amazon can tell, whether you've 0:11:36 read the whole book or not. 0:11:38 Right, they already have that data. 0:11:42 And so most people probably won't finish it. And so I think the total amount that I 0:11:46 would make would exceed the amount that I have to pay out, because there'd probably be fewer 0:11:49 people who are going to finish it. 0:11:50 Interesting. 0:11:53 I could, I could adjust those numbers. But I would sell it for very little and I would 0:11:57 pay you to actually finish reading it. So the idea is I'm paying for your attention. 0:12:01 Yes, you are. The completeness of the attention, what I love is that you're not saying it's 0:12:07 an either-or, a binary yes or no; it's actually a degree. What I love about this is it very 0:12:13 much fits into a world we're entering now where there is no discrete beginning and end. 0:12:16 Like Doug Rushkoff and I talked about narrative collapse, you know, in one of the op-eds 0:12:20 he did for me at Wired, in this world of Game of Thrones binge watching, the everlasting 0:12:27 story, gaming economies, gaming narratives. We just talked today about how gaming is bigger 0:12:32 than music and entertainment combined, huge economies. And those narratives are endless 0:12:33 narratives.
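(A sketch of the economics of the "I'll pay you to read my book" idea as Kevin lays it out: the $4 price and $5 finisher payout are his example numbers; the completion rates below are assumptions used only to show where the scheme would break even.)

```python
# Sell the ebook cheaply, pay out more than the purchase price to anyone the
# platform can verify finished it. Prices are from Kevin's example; completion
# rates are assumed, and platform fees are ignored for simplicity.

PRICE = 4.00          # sale price per copy
FINISH_PAYOUT = 5.00  # paid to each verified finisher

def author_profit(copies_sold: int, completion_rate: float) -> float:
    """Net to the author after paying every verified finisher."""
    revenue = copies_sold * PRICE
    payouts = copies_sold * completion_rate * FINISH_PAYOUT
    return revenue - payouts

# Break-even happens when completion_rate == PRICE / FINISH_PAYOUT, i.e. 80%.
for rate in (0.25, 0.50, 0.80, 0.95):
    print(rate, author_profit(10_000, rate))
# 0.25 27500.0, 0.5 15000.0, 0.8 0.0, 0.95 -7500.0
```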
0:12:38 So what I love about what you're saying is essentially it's a way to optimize for the 0:12:44 few rare completers while also making money off the people who are dipping in and out 0:12:47 and not going to complete the thing. So it puts it on a degree and kind of a continuum. 0:12:52 But the second thing that I love about it is that, and this is the crazy counterintuitive 0:12:56 part of this, if your idea and your book is so damn good that people are going to read 0:13:00 the entire thing, you're actually going to pay them a lot more because you're paying 0:13:04 them a dollar extra to make this happen. And so tell me about the flip side of it. 0:13:07 Does it actually make creators not want to? Because one thing that Connie's talked about 0:13:12 in China is that there are actually apps that let readers weigh in on books as they're 0:13:16 being created. And that then in turn changes the narrative or how many chapters. This reminds 0:13:19 me of the Charles Dickens days of, like, getting paid by the word. 0:13:24 So the reality is that very few people make real money from books. I don't make my living 0:13:28 from books. I make my living from giving talks about the book. 0:13:33 So the book is like a vehicle for that and that's true for more and more people. The 0:13:38 actual book itself is just a part of this network. And so you can still lose money on 0:13:43 a book, and many authors do, and still make money overall. It's particularly important when 0:13:48 you are talking about ideas. Maybe this doesn't work if you're just writing novels, but if 0:13:49 you're trying to get ideas out. 0:13:51 This is what we both care about more than anything. 0:13:57 Then again, the battle for attention is so great that I am willing to pay you for your 0:13:58 attention. 0:13:59 Yes. 0:14:03 And by the way, one other thing about attention is I did these calculations of the total amount 0:14:10 of attention that is given to different media, and I found out that on average, we surrender 0:14:13 our attention for about $3 an hour. 0:14:15 Wow. That's so cheap. 0:14:16 That's ridiculous. 0:14:20 So look at the total amount of time that you spend reading a book and how much you're 0:14:25 charged, how much you pay for the book, for a movie, for whatever it is. And it comes out 0:14:31 to very, very low pay that we are accepting for our attention. 0:14:32 It's insane. 0:14:39 That's what we value our attention at. And with TV, if you take all 0:14:43 the hours that people watch TV and the total amount of TV revenue, that's what 0:14:44 it comes out to be. 0:14:45 Yeah. 0:14:49 It's like we're giving up our attention for such small wages. So we really want to be 0:14:50 charging more. 0:14:55 Well, what I love about this is it puts again people at the center. And what I love about 0:14:59 what you're saying is this is a way to be optimistic about the future, that readers and 0:15:05 creators can be empowered by putting better models in place that align incentives, that 0:15:10 remove adverse selection and bad alignment of incentives, so that we can actually embrace 0:15:11 a better future. 0:15:12 Right, right. 0:15:14 So thank you for joining this episode of the a16z Podcast. 0:15:18 Well, yeah. And if we were really doing things, we would be paying you, listener, right now. 0:15:19 The listener. 0:15:20 Yes. 0:15:22 That's fantastic, listeners. We need to be paying you. Thank you so much for listening.
0:15:26 Kevin, thank you so much for joining the a16z Podcast, live from the Andreessen Horowitz 0:15:29 annual Innovation Summit. The future is inevitable. 0:15:30 Great. 0:15:30 Good. 0:15:33 (audience laughing)
The idea of "1,000 true fans" — first proposed by Kevin Kelly in 2008 and later updated for Tools of Titans — argued that to be a successful creator, you don't need millions of customers or clients; you need only 1,000 true fans. Such a true, diehard fan "will buy anything you produce", and as such, creators can make a living from them as long as they: (1) create enough each year to earn a profit from each fan (and it's easier and better to give existing fans more); and (2) have a direct relationship with those fans, which the internet (and the long tail) now make possible.
But patronage models have been around forever; so what's new here? How has the web evolved, and how are media, and the ways audiences and voices find and subscribe to each other, changing as a result? If the 1,000-true-fans concept is also more broadly "useful to anyone making things, or making things happen" — then what nuances do people often miss about it? For instance: that there are also regular fans in the next concentric circle around true fans, and that the most obscure node is only one click away from the most popular node.
Finally — when you combine this big idea with another idea Kelly proposed in his most recent book The Inevitable (covered previously on this podcast) on inverting attention economies so audiences monetize their attention vs. the other way around, how do we connect the dots between them and some novel thought experiments? In this hallway-style episode of the a16z Podcast, which Sonal Chokshi recorded with Kevin in our pop-up podcast booth at our most recent a16z Summit, we discuss all this and more. Because on average, we all currently surrender our attention (whether to TV, books, or whatever) for about $3 an hour. Whoa?!
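(For the roughly $3-an-hour figure mentioned above: one hedged way to derive such a number is to divide what a medium collects from its audience by the hours of attention the audience gives it. The inputs below are placeholders for illustration, not Kelly's actual data.)

```python
# How a "dollars per hour of attention" figure can be estimated: total revenue a
# medium collects from us divided by the hours of attention we give it.
# The example inputs are illustrative assumptions only.

def attention_wage(total_revenue_dollars: float, total_hours: float) -> float:
    """Implied hourly rate at which the audience surrenders its attention."""
    return total_revenue_dollars / total_hours

# A $25 hardcover that takes about 8 hours to read:
print(round(attention_wage(25, 8), 2))   # 3.12 dollars per hour

# A hypothetical TV market: $150B of annual revenue over 50B viewer-hours:
print(attention_wage(150e9, 50e9))       # 3.0 dollars per hour
```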
0:00:06 Hi, everyone. Welcome to the a16z Podcast. I'm Sonal. I'm excited because this episode is about 0:00:10 one of my favorite topics. It's all about reading and writing and much more. Our special guests 0:00:15 are Robert Cottrell, the founder and editor of The Browser, which is a very popular newsletter that 0:00:21 shares five pieces of writing worth reading every day and which we discuss as an example of the 0:00:26 changing web today. Our other special guest is Chris Best, the CEO of Substack, which makes it 0:00:31 simple for writers to start an email newsletter or podcast that makes money from subscriptions if 0:00:36 they like, but more broadly is really a platform for voices and audiences to connect with each other. 0:00:40 And finally, we have General Partner Andrew Chen, who's been leading a lot of investments on our 0:00:45 consumer team into new media, gaming, and marketplaces. In this hallway-style jam, which took 0:00:50 place over an informal meetup in person recently, we cover writers and writing, readers and reading, 0:00:55 including all the forms that may now take today, business models for creators, and where new delivery 0:01:00 mechanisms and tools come in. I also ask Andrew and Robert to quickly share their stories of 0:01:05 how they built their content outlets. But we begin by going around the table, round-robin style, 0:01:10 to share a quick pulse check on where publishing is today, how we got here, and where we're going. 0:01:17 Niels Bohr said that science advances one funeral at a time. And I say that publishing 0:01:23 advances one bankruptcy at a time. I think we've known for some years that you can't really 0:01:28 finance good writing with bad advertising. It seemed inevitable to me that, as and when 0:01:34 the technology falls into place, we're going to see a shift in market power away from 0:01:40 publications towards writers. Now, I've been looking at a thousand pieces of writing a day. 0:01:46 That sums to somewhere between three and five million pieces of writing. So one thing that I've 0:01:55 deduced from all that time reading is that the best possible predictor of the value of a piece 0:02:03 of writing is the writer. Now, that might seem like an absurdly obvious thing to say, but the whole 0:02:09 strategy of the publishing industry up until now is to persuade you that the value lies in the 0:02:15 publication, the brand, that that's the guarantee of quality. That's what you should pay for. That's 0:02:24 what you should be loyal to. The quality of writing varies far more within any one publication 0:02:32 than does the quality of a given writer's writing across publications. So if I want to read 0:02:37 Susan Orlean, I don't care whether she's writing in Harper's or the New Yorker or the Financial 0:02:45 Times. So the ideal for me would be in some way to be able to subscribe directly to 0:02:51 Susan Orlean or Ta-Nehisi Coates or any good to great writer, stay with them, 0:02:58 pay them, feel a relationship with them. I think what really got me excited is all of a sudden 0:03:04 seeing folks like you and Ben Thompson from Stratechery and other kinds of individuals really 0:03:08 figure out how to build a whole business model behind it and do it really full-time. 0:03:15 Wow, that is actually like a complete alternative to the ad-supported media model that really has 0:03:19 been with us for like hundreds of years at this point.
In the early days, people would just, 0:03:24 as soon as they had steam-powered printing presses, this was like in the early 1800s, 0:03:28 at first you had a bunch of folks charging nine cents per issue, and then the 0:03:35 penny presses would basically sell advertising and sell the actual issues for one 0:03:39 cent, and then all of a sudden that was like, "Holy shit, you're actually giving it away, 0:03:44 basically." And it was very powerful to have this ad-supported model. You fast forward hundreds 0:03:52 of years later, and now we are seeing some of the tools to actually build from a brand new 0:03:58 foundation, a new ecosystem that's based on people writing primarily based on their passion. 0:04:02 We're seeing this in writing and newsletters, but we're obviously also seeing that in the 0:04:07 way that people are setting up e-commerce shops on Shopify. There's a lot happening in video 0:04:11 streaming. There's a lot happening in video games. There's a lot happening in podcasting, 0:04:15 and the list goes on and on, that there's this new ecosystem that's really based on the direct 0:04:21 relationship between consumers and the content creators and these tools that facilitate that. 0:04:26 And I'm very bullish that there can be an ecosystem that's as big as the media ecosystem, 0:04:29 but completely based on these new technologies enabled by the internet, et cetera, et cetera. 0:04:36 I think what you said about the bad incentive structures of ad-supported media is interesting. 0:04:42 Craig Mod calls them attention monsters, which is a very colorful term that I love. 0:04:47 I think that you do track this progression where ad-supported media has been with us for a long 0:04:53 time, but I think we're hitting a turning point where attention monsters, ad-supported media, have 0:04:59 eaten up enough of people's attention that there's just no more to give. With the smartphone, 0:05:04 things are demanding all of your attention all the time. The next frontier, as somebody who wants 0:05:10 to regain control of their mind, is to be thoughtful about what you want to put in there, 0:05:13 what you want to be reading. That's one of the things that people love about The Browser, 0:05:19 is it's a way to regain some signal in the noise of what I'm going to read, what I'm going to 0:05:24 focus my attention on, and not let it be dictated by an algorithm, but have it selected by somebody that 0:05:29 I greatly trust. I love that you're saying that, because when I think of the evolution of the 0:05:33 internet, Chris Anderson coined the long tail and how there's going to be this inevitable 0:05:38 shift from the big head to the long tail. People would not only go to a Blockbuster 0:05:43 to find the hits, but they would find the niche movies that they love. You have infinite 0:05:47 shelf space on the internet. Then the funny thing happened after that, which is that we had a little 0:05:54 too much long tail. I think it became cluttered in the first wave of Web 2.0. Now we're seeing 0:05:59 this shift to a more curated, artisanal thing. What I think is really interesting about what all 0:06:03 three of you are saying is it's an intersection: as you're saying, Robert, of people now finding 0:06:08 people, not just brands; as Chris is saying, the incentive structures being aligned; and as Andrew is 0:06:13 saying, that we are now entering a world where people can find the right business models to 0:06:19 do this. That's what was lacking in that first wave.
As a reader, there's no possible way you can 0:06:23 keep abreast of everything that's happening and make some rational choice about what you spend 0:06:29 your time on. At best, you're choosing which filters you want to see the world through. 0:06:35 If you choose to see the world through algorithmically driven feeds that make their 0:06:41 money by keeping you maximally addicted, there's going to be some predictable result of that on 0:06:46 your life. Whereas if you choose to put your faith in people who you have some sort of relationship 0:06:51 with, some sort of trust in, who have some sort of motivation either to serve you well or who, 0:06:56 beyond caring about you at all, just care about quality and care about interesting things, 0:07:04 you'll get different results. I think it's fair enough to say that any platform or publication 0:07:14 that proposes to personalize your experience is going to game you one way or another. I try to 0:07:21 accept that The Browser is simply my choice, my sensibility, and to stay firm with that, 0:07:28 to avoid as far as possible analytics, which would tell me what people want, because I'm otherwise in danger 0:07:32 of giving people what they want. If I may say, you recently had us hide how many links people 0:07:38 clicked; you had us hide it from the UI for you so that you could more effectively live by that. 0:07:45 There was a bit of me that felt, shouldn't we be kind of like jollying up a bigger click-through 0:07:52 rate somehow, which would mean popularization. But then I thought, no, because I don't think of 0:07:58 myself as a curator; that's what happens in museums. I think of myself as a critic, like a 0:08:04 theater critic or a music critic. So I'm pointing to a piece of writing and I'm saying, I think this 0:08:09 is worthy of your attention. And here's why I think it's worthy of your attention, and then giving 0:08:15 you enough of a flavor of it for you to make the decision as to whether or not you go and read it. 0:08:21 So actually, if people are happy to pay for The Browser as a newsletter, I've started to 0:08:27 think of that as a measure of success. But yes, seeing which articles provoked click-throughs 0:08:33 and which didn't, it was seductive. It was alluring. When you go to any publication, any website, 0:08:39 whether it's the BBC or the FT or the New England Journal of Medicine, you'll find that 0:08:48 the outlying most-read stories are always the lobster that spoke Italian or the hamster that 0:08:53 ate my baby or something. It's always the sensational things, even within the most distinguished of 0:08:57 places. Can I take the opposite side of that though? Because coming from the other side of media, 0:09:02 the writing side, I would say that right now it's sad how little information writers have 0:09:07 about their work. I've actually sent a lot of my friends to Substack because they get very limited 0:09:11 data streams. Their audiences are sprinkled all across the board, especially if you're a freelancer 0:09:14 across like 20 different outlets. If you're trying to follow a person, it's kind of a bummer, 0:09:20 because what's missing right now in the ecosystem is this matchmaking between this amazing curatorial 0:09:27 or critical ability, to use your phrase, Robert, with the ability to actually market and put your ideas 0:09:31 out there in a way that reaches the audience.
And so you have this divide where there's a lot of 0:09:36 writers who are really talented, who don't know how to connect with their audiences, don't have the 0:09:41 tools to do it. One of the really common threads when I hear about folks who've kind of built their 0:09:47 own audiences, you know, and it resonated with what you just said, is I find that there ends up being 0:09:53 this huge focus on the quality of the work as opposed to purely the metrics and the audience 0:09:58 around the work. I think it's actually common for really two reasons. One is you end up needing to 0:10:04 be motivated intrinsically as opposed to extrinsically. I think the second thing though as well is, 0:10:11 if you end up creating a business model around your work that is based on the interest and 0:10:15 the passion of your audience to consume your stuff, then what ends up happening is the quality of 0:10:20 your audience actually matters. Whether or not you really engage people deeply and you're building 0:10:24 relationships over a long period of time actually matters, because those are the folks that are 0:10:31 actually most likely to open their wallets, as opposed to something like the traditional advertising 0:10:36 model that really incentivizes a lot of people driving past a billboard, right? And that's a 0:10:41 very different kind of strategy. What's really interesting about what you said, Andrew, is that when 0:10:45 you think about this as being about selling and matching and creating this audience that you 0:10:50 want, I think what that means is it's not just about the writer; it puts the reader back at the center. 0:10:55 Kevin Kelly in his book, The Inevitable, talks about this future, which he thinks is inevitable, 0:11:00 where we'll be able to invert our attention. And this goes to your point about broken incentives, 0:11:07 where instead of people selling our attention, we as audiences will be able to sell our attention 0:11:13 to people. It's like negative interest rates of the attention economy. Oh, I love that. 0:11:21 I'm very optimistic about the future of reading. We've been doing it for upwards of 3,000 years, 0:11:28 and we're not going to stop now. I'm very, very curious to see what kind of a revolution 0:11:32 really good machine translation is going to wreak upon our reading universes when 0:11:40 that becomes free. I've been doing quite a lot of work with a computer science startup in London, 0:11:46 and I've been so massively impressed by the quality of machine translation. I'm talking here 0:11:53 about paid, cloud-based, neural-network-driven machine translation, which at the moment is 0:11:59 quite expensive. But to me, the results that it delivers are stunning. You can read a large 0:12:03 chunk of German in English or Spanish in English without knowing that you're reading a 0:12:09 translation. The best news magazine in the world right now is Der Spiegel. It's as good as Time 0:12:15 and Newsweek were in their golden age. I get a lot of my daily news now from Gazeta Wyborcza, 0:12:19 which I thought was going to tell me sort of lots of little things about Poland, but it's 0:12:26 actually a really good newspaper. That's only going to get better. And because all five majors 0:12:31 and then some are all competing at that same high level of machine translation, it's going to get 0:12:37 commoditized. It's going to be free.
So really good, almost invisible machine translation will 0:12:44 simply drop silently into Google News, Apple News levels of reading, and suddenly we'll be reading 0:12:49 the whole world without even knowing it. So I love that vision, but to me, that sounds like a 0:12:55 crazy explosion, which brings us back to this web where we're looking for, seeking, the signal in 0:12:59 the noise. And I want to think about centering it in the reader's experience. This is one of the 0:13:03 big motivations for why I was interested in starting Substack. It's not, I'm not a writer. 0:13:10 It's as a reader. As a reader, I am fed up with this feeling that there's a million, you know, 0:13:17 water, water everywhere and not a drop to drink. There's a million things that I could be reading, 0:13:22 and yet I find myself in these dark patterns where I'm sort of obsessively checking my 0:13:29 Twitter feed against my better judgment. And I just want there to be a better way for me to 0:13:35 see the world through the written word and to pay for the people that are creating something better. 0:13:40 We think of this thing as having two sides to it. There's the writer side: being able to 0:13:47 speak directly to your audience, not being mediated by an algorithm, getting funded directly 0:13:51 by your audience because they trust you. And so you have sort of aligned incentives. That's all 0:13:55 really great. But the other side of this is you have to have willing readers. You have to have 0:14:00 readers who subscribe directly to voices they trust. Today, for a lot of people that's happening 0:14:05 in their email app or in their RSS reader, if you're a little bit old school. 0:14:12 I think RSS is the most undervalued thing in the entire universe. But lately I've been 0:14:18 taking more newsletters. I'm generally frightened of my email. My inbox is full of claims on my 0:14:25 time and I hate to go there. But newsletters are actually a very efficient way of getting the 0:14:31 greatest hits from producers. When I think about why I like email in that way, I think the answer is 0:14:40 that it comes to me. And I think the same is true about audio. It comes to me. Two or three years 0:14:46 ago I thought of listening to a book as being lower status than reading the book. But now I 0:14:53 don't. Listening to the book is every bit as good. I'm equally if not more happy listening to things 0:15:02 as I am reading them. AirPods may be the most consequential Apple product since the iPhone. 0:15:09 You used to talk about always on and now we can talk about always in. So we can be permanently 0:15:16 in a world of listening. And as a matter of fact, when I hear Elon Musk talking about neural 0:15:21 implants, that actually strikes me as a relatively small practical leap. You're just moving the 0:15:27 metal another inch up your ear canal. And in a kind of funny way, that meshes with what I like 0:15:34 about Substack, because I live in my RSS reader. I would argue that the borders between the newsletter, 0:15:39 email, audio, podcasting are all just going to blur. I actually believe that people will start 0:15:44 distributing written articles in audio form in podcasting because it's about having ear share, 0:15:48 which is basically the best form of mind share, because you're essentially in people's heads 0:15:53 quite physically. And if you get your neural implant future, you will really be in people's 0:15:56 heads. And so I think that's a big part of it.
I would also argue, when you think about how 0:16:00 this is a golden age of reading, that that's absolutely true of TV and entertainment as visual 0:16:04 literature these days. I think of games as like immersive books. There's no difference. I think 0:16:09 it's actually a short blip in our human history that we've had to arbitrarily divide these media 0:16:13 when in fact they are really the same at the core. Yeah. If you kind of imagine a stack of authoring, 0:16:19 publishing, community and monetization that should exist for almost any kind of new modality, 0:16:25 whether that is audio and podcasts, or even something like video, it should be that for whatever is 0:16:29 your jam, whatever is the kind of content that you're into building, there are going to be these 0:16:34 stacks that end up being built. I think we'll see those kinds of tools for almost every media 0:16:42 type. But how do we monetize? I think we've already dismissed bad advertising. We've had a real 0:16:48 resurgence of science publishing thanks to foundation money, Simons. Oh, right, like Quanta 0:16:56 Magazine, Nautilus, Aeon, names like that. Exactly. So, individual and institutional 0:17:03 philanthropy. But if you think in terms of selling directly, then your orders of magnitude are going 0:17:10 to be somewhere between the hundreds and the million readers. I mean, The Economist has a 0:17:16 circulation of a million, The New Yorker likewise. So, they may not be the same million, but that's 0:17:23 roughly the number you can hope to hit at the very, very top of your game. Now, a lot of the 0:17:29 things that have happened so far, at least at the fundable end of the internet, the numbers have 0:17:36 always had to begin with a B. So, it's possible to have meaningful changes in the architecture 0:17:43 of writing and monetizing and reading, even for that up-to-one-million scale of readers that I 0:17:48 think the universe accommodates. You're basically saying that people now today, given the right 0:17:55 tools, can actually make a good living without having to be the size of a major traditional 0:18:00 media outlet. Oh, from the writer's point of view, the economics are transformationally better. 0:18:08 If you measure the writer's footprint, let's say by their following on Twitter, any good 0:18:13 byline journalist in a major publication can have a Twitter following in the tens of thousands. 0:18:18 And if they work at it, they can push it up to the hundreds of thousands and the millions. So, some 0:18:28 tiny fractional conversion of that footprint into paying readers: let us say that, hypothetically, 0:18:36 Susan Orlean publishes an original piece of writing and offers it for sale at $5. Well, 0:18:41 this seems to me to be a wholly reasonable price. And then the question becomes, are there 0:18:47 a thousand people who would read that? Are there 10,000 people who would want it? And at that point, 0:18:54 you're getting returns to the writer in this thought experiment of $50,000 for one piece 0:18:58 to which they still own all of the rights. I think there's one subtle point there, 0:19:03 which is it's certainly true that if you have a dedicated audience that loves the work, 0:19:09 and you have a relatively small number of people paying you cash, whether it's a subscription or 0:19:13 whatever, the economics work out very well, right? You have a thousand true fans or a couple 0:19:20 thousand people paying $50, $100 a year; that adds up really quickly.
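(Rough numbers behind the two thought experiments just discussed, Robert's single-piece sale and Chris's small paying audience. The $5 price, the 1,000 and 10,000 reader counts, and the $50 to $100 subscriptions are from the conversation; the follower count and conversion rate are illustrative assumptions.)

```python
# Direct-to-reader economics: selling one piece directly versus a small base of
# annual subscribers. Conversion numbers are assumed for illustration.

def single_piece_income(price: float, buyers: int) -> float:
    """Direct sale of one piece the writer still owns all the rights to."""
    return price * buyers

def subscription_income(subscribers: int, price_per_year: float) -> float:
    return subscribers * price_per_year

print(single_piece_income(5, 1_000))    # 5000
print(single_piece_income(5, 10_000))   # 50000, the $50,000-per-piece figure
print(subscription_income(2_000, 75))   # 150000, "a couple thousand at $50-$100"

# The "tiny fractional conversion" point: an assumed 100,000-follower footprint
# converting at an assumed 2% already clears the 1,000-true-fans threshold.
print(int(100_000 * 0.02))              # 2000 paying readers
```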
And the thing that is subtle 0:19:26 there to me is that the kind of work you do, if that's the outcome you want, is different than 0:19:32 the kind of work you do if you have to get a million clicks. And so it's not just a question of, 0:19:37 I have one piece of writing and either I could go and monetize it by turning it into clickbait, 0:19:40 or I could go and sell it to a bunch of really thoughtful people who might pay me for it. 0:19:46 When you choose the funding model of people who actually deeply value the work, the kinds of 0:19:51 things you can create are fundamentally different. I love that. And you can fund work that otherwise 0:19:58 would not exist. One of the most motivating things that I hear is, I'm getting to do work that I want 0:20:02 to do, that I think is valuable for the world, that's valuable for my readers, that my readers tell me 0:20:09 that they love, that would have been impossible at my last job. That unlocks entirely new types of 0:20:13 things that we're not even seeing yet from the talents and the voices and the people that we want to 0:20:18 follow, by finding a business model where people can monetize these things. And I think of 0:20:24 it as a market failure that that doesn't already exist. If there's 10,000 people out there that 0:20:28 want to read some writer and that writer really wants to write the kind of work that those people 0:20:33 would love to have, and yet it's not happening because we have a bad business model, that's a 0:20:39 huge net loss for the world that's totally unnecessary. Just to add to Chris's point, 0:20:44 I think one of the sort of starkest differences in this is the way that you apply business metrics 0:20:49 to the traditional media model versus one where it's based on subscribers. In the traditional 0:20:56 media model, almost everybody ends up measuring their monetization based on a CPM or an RPM, 0:21:00 whatever you want to call it basically. Cost per mille. Exactly, a cost per mille. So basically, 0:21:06 the idea is how much money are you making per thousand impressions of the ad? It's not like 0:21:10 you're developing a relationship. It's not about the people. It's literally an impression. 0:21:15 It's literally, again, driving past the billboard. And I think that's very interesting in contrast 0:21:19 to a world of subscription where you're really thinking about it as, well, what percentage 0:21:26 of my audience are subscribers? How many subscribers do I have? And I think, interestingly, 0:21:30 with that terminology, it really humanizes it; the business model actually matches 0:21:36 what you're really trying to do, which is to develop this ongoing audience and relationships 0:21:41 with individuals, because it's about people as opposed to just the fact that they looked at 0:21:45 something. I mean, I'm going to use an analogy from the enterprise world when we think about the 0:21:51 days of on-prem computing and then we moved to subscription software as a service, aka SaaS. 0:21:54 That was a very huge shift because in the olden days of companies, 0:21:59 you would do this big million-dollar contract upfront. Someone fancy would be in your office 0:22:04 installing it and managing the relationship and walking the hallways, and you had a crap, 0:22:09 crap product like 99% of the time. But in the new model of SaaS, it created a model where 0:22:15 subscribers, to your point, Andrew, could choose to leave if they didn't like something.
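(A side-by-side sketch of the two metrics Andrew contrasts: ad revenue priced per thousand impressions versus subscription revenue priced per reader relationship. The RPM, subscription price, audience size, and conversion rate are illustrative assumptions.)

```python
# CPM/RPM economics (the "driving past a billboard" model) versus a direct
# subscription relationship, with assumed example numbers.

def ad_revenue(pageviews: int, rpm: float) -> float:
    """Revenue per thousand impressions: priced on impressions, not people."""
    return (pageviews / 1_000) * rpm

def subscription_revenue(audience: int, conversion_rate: float,
                         price_per_year: float) -> float:
    """Revenue from the fraction of the audience with a direct paying relationship."""
    return audience * conversion_rate * price_per_year

AUDIENCE = 50_000  # assumed monthly readers

print(ad_revenue(AUDIENCE * 12, 10.0))            # 6000.0 per year at an assumed $10 RPM
print(subscription_revenue(AUDIENCE, 0.02, 100))  # 100000.0 per year at 2% paying $100
```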
0:22:19 You actually feel vulnerable when you hear the word "subscribing" because you think that it means 0:22:23 people can leave you. And the reality is you don't have them in the first place when they're not 0:22:27 subscribed. But secondly, when you do have them, you have more ways to offer them more things. 0:22:31 Because as we found in the SaaS model, people had to do better to make sure people stayed 0:22:35 subscribed, and then on top of it, a majority of subscribers would then pay for more things 0:22:39 because they just kept wanting more and more. It was a cross-sell, upsell, etc. So when you 0:22:42 think of the enterprise model of that, I think it's a very interesting analogy for what's actually 0:22:47 happening on the consumer side of this. It aligns incentives, which is the key thing. And if you take Robert's 0:22:52 point that in writing, you care a lot about the individual writer, and if you like one piece they 0:22:56 write, you're likely to continue to really want to follow everything they write, in this analogy 0:23:00 I suppose you're a happy SaaS customer; that's not how I would phrase it, but I think the 0:23:05 analogy holds there. I think the reason I'm just emphasizing this is that we tend to underestimate 0:23:10 how important delivery models are. When you combine the delivery infrastructure with the medium 0:23:14 and the ability of the internet, and then you have this creation on both sides, creation and 0:23:19 consumption unlocked, when you combine those three things, magic can really happen. Where technology 0:23:26 can really make an impact on the life of creators is really, you think about an entire media 0:23:32 operation. You have a bajillion people in a single room, and how do you squeeze that into a 0:23:39 really easy-to-use software stack that lets you do all those functions, but empowers a single person 0:23:44 working from a coffee shop to just run an entire media operation on their own. I do think that 0:23:51 what is fascinating about having this existing content industry and structure is they're both 0:23:57 aggregation points for audiences, but they also become gatekeepers. And then on the flip side, 0:24:02 if you are a star within the industry, if you are somebody who has really, really built an 0:24:08 audience, the chances that you've built a business model that truly allows you to creatively do 100% 0:24:13 of what you want to do... I think we're in a funny place where sometimes if you're 0:24:18 the top writer, maybe at a publication, you're generating a lot more value than you're actually 0:24:22 capturing. What ends up happening is a lot of that value that's being created is, of course, 0:24:27 being grabbed to subsidize the rest of the folks that may not be there, because we want a 0:24:31 level playing field. But what's interesting in a completely open marketplace is obviously, 0:24:36 for the folks that choose to do that, they can then build and own this audience that they can 0:24:42 keep for a lifetime, which is how I feel about my blog audience. The second piece is how does this 0:24:47 get to consumers in a way where they're able to experience it in the best light? You have this 0:24:51 layer of interactions with your audience, and then there's the monetization layer and the business 0:24:55 models that we've talked about, of which subscription is just one out of many, many, many different 0:25:00 options.
What I often end up seeing, and something I've experimented with myself, is you have subscription, 0:25:09 but then ultimately, you also run conferences, you sell premium education, you consult for Fortune 0:25:13 500 companies; there's a list of all of these different things. I'm curious for your guys' 0:25:17 thoughts on community. To me, what's amazing is that The Browser essentially has a community 0:25:23 of people, and what I was noticing is it's like birds of a feather. Robert, one of the things 0:25:27 that you and I are both chuckling about is how you and I have been trying to meet up for ages, 0:25:31 and you're like, "Oh, Sonal, I met with Tyler," and he's saying, "Hey, Sonal, 0:25:37 you met with Kevin"; we're talking about mutual friends. We haven't tried explicitly to develop 0:25:47 more of a community vibe. We also sense that our subscribers want their privacy to be respected. 0:25:52 That may be a misjudgment, but that, for the time being, is our sense. But shared reading habits 0:25:58 are a very powerful bond, and I think at present, they're underexploited. We certainly see this 0:26:05 effect in other publications where the people who follow a given writer or creator, especially the 0:26:10 people who love them enough to pay for them, tend to be birds of a feather and tend to… 0:26:11 Yes, exactly. 0:26:17 As soon as you give them a space to talk, they form a natural community. We've started adding some 0:26:23 very basic community discussion forum threads. The activity we see in there is very strong, 0:26:29 but more importantly, the tenor of the discussion that you see is different than other things that 0:26:32 tend to happen on the internet. I bet you they're high quality, right? They're not like crap comments. 0:26:37 It's the exact opposite of YouTube comments, basically. You have people that are interested 0:26:41 in the same thing, that are actually reading the thing, that already feel like they sort of 0:26:46 have a shared sensibility, as you say. And I think there's space to create profoundly positive 0:26:49 internet communities around content creators. 0:26:50 I agree. I don't think we should give up on this. 0:26:56 The other thing that I think is true about all good online communities is that I suspect you need 0:27:01 a benevolent dictator to make them work. Someone that just sets the tone, sets the culture, 0:27:06 and you have a natural person to do that when you have a writer who wants to foster a community. 0:27:09 I used to always fight with my colleagues about this at Wired, because they would always say, 0:27:12 "Never read the comments. That's like a known thing on the internet." 0:27:15 I had the best comment sections on my pieces, and what I would do is I would go in 0:27:18 and tell people when I had trolls and be like, "Hey, can you please…" 0:27:22 The minute I just showed my presence, they immediately settled down and behaved, 0:27:25 and then they would go like 300, 400 comments deep. 0:27:28 I think we've given up too soon on this, and when you go back to this idea of Kevin Kelly's 0:27:32 true fans, the point that people never bring up in his original article about it 0:27:36 is he talks about the power of it connecting you to other nodes in that network, 0:27:39 and that to me is a really powerful thing, and I think it's still early days. 0:27:42 We should not give up on the so-called comment section of the internet.
0:27:48 So my variation of this is I really love meeting people offline that are readers, 0:27:51 whether it's events or conferences or dinners. I felt like growing up, 0:27:53 it was like, "You should never meet anybody off the internet." 0:27:56 And then I'm like, "I exclusively meet people off the internet." 0:28:02 So I think one of the really interesting things there is that with the content that I'm writing, 0:28:05 I really think of it as I'm writing for that group of people, 0:28:08 and then what ends up happening, serendipitously, 0:28:12 as a result of doing the writing, is it becomes this really scalable, 0:28:14 enhanced form of professional networking. 0:28:18 And so I think one of the things that's helpful in thinking about it that way is, 0:28:23 I think it does really hone your authenticity around who is this really for, 0:28:26 like your readers, your customers, or people that you're going to end up seeing, 0:28:29 and so you're probably not going to push out a bunch of filler, 0:28:32 drivel, because then you're going to be embarrassed when you see them coming up in a coffee 0:28:36 shop, et cetera. But I think what ends up happening in the digital realm, 0:28:41 that's really powerful, is that there's probably enough other weird people, 0:28:44 now that the internet has billions of monthly actives, 0:28:48 that you're going to be able to find them and connect with them. 0:28:52 And one way in which to think about that is you're a writer and you're interacting with 0:28:56 your readers, but I think the other way, which we talk a lot about, is 0:29:01 you're putting out the bat signal for the people that are the same kind of weird as you, 0:29:07 right? And inevitably, you're able to pull together a set of real-life relationships 0:29:09 where the discovery happens in this kind of digital realm. 0:29:13 Yeah. Andrew, actually, tell me about how you got to your blog and your newsletter. 0:29:14 How many years has it been around for you? 0:29:17 I've been writing, I think, for 12 years now in the Bay Area. 0:29:22 If you go all the way back, I was, as a teenager, one of those people that kept a journal. 0:29:24 And so the first couple of blogs I did were just like for fun. 0:29:27 They're just more, you know, kind of here's my day. 0:29:30 Back in the day when blogging first came out, it was really a bit more about that. 0:29:34 But my favorite format actually was like a link blog. 0:29:36 And so I would just like find cool links from across the internet, 0:29:38 purely like curation, right? Back in the day. 0:29:40 But when I moved to the Bay Area in 2007, 0:29:43 I decided to make a professional blog. 0:29:45 And that's ultimately what ended up sticking. 0:29:50 I was an entrepreneur in residence working at another firm and I would just write down 0:29:54 interesting conversations I was having, interesting factoids I was learning. 0:29:57 And at the time it was super funny because again, this is 2007, 0:30:02 people would literally ask me, like, why are you writing all of your insights 0:30:04 and like publishing it? Like that's your edge. 0:30:05 Why are you giving away your edge? 0:30:08 Oh, interesting. That seems like so amazing that people would actually think that. 0:30:09 That's what people thought. Yeah, exactly. 0:30:12 And it's funny, of course, because now a decade later, 0:30:15 it's completely diametrically shifted.
0:30:18 It's also kind of ironic because looking over here at Chris and Substack, 0:30:23 if you had done it today, you would have actually been able to 0:30:26 do some of those for free, but also get others to pay for some of the really 0:30:30 specialized things if you really wanted to, which you didn't have the ability to do at the time. 0:30:34 That's right. That's right. I basically just added my friends from Seattle. 0:30:40 I added my mom, I added my sister, and then I just got to like tens of thousands of RSS subscribers. 0:30:43 And so what ended up happening after that was it became this chore or whatever. 0:30:45 I actually took a two-year break. 0:30:50 And so now I actually have completely flipped the other way, where I basically 0:30:56 think that starting a blog was the single most important decision that I made in my 20s. 0:30:59 It's the thing that sort of unlocked a lot of other opportunities. 0:31:06 And I tell people, instead of spending two hours at a conference or going out and doing a 0:31:12 bazillion networking coffees, if you just sit down and you take the time to write down some of the 0:31:16 really amazing, you know, conversations or articles or whatever that you're reading, 0:31:19 like that just scales. It lasts forever. I mean, I have people reading articles from 0:31:24 like eight or nine years ago that I wrote and finding them useful, which is just fantastic. 0:31:27 I think of it as a little bit like guerrilla warfare actually. War is a very dangerous thing. 0:31:31 This is a positive thing. But the whole point of that type of warfare is that you have this 0:31:36 asymmetric power to do things against these big powers. And in this case, we're talking about 0:31:41 centralized big gatekeepers. And so you can not only punch above your weight, but you can actually 0:31:44 take that leveling ability of the internet and deploy it. 0:31:49 You know, as a 24, 25, 26 year old, I'm not going to be the one that's going to be like 0:31:54 keynoting conferences and this and that. But like, can I write something awesome that then ends 0:31:59 up on the front page of Hacker News or, you know, ends up on Reddit and a lot of people forward it 0:32:04 around or whatever. I would, you know, look at my different logs from people visiting the blog 0:32:07 and people who are subscribing. What I would see is I would often see a whole bunch of people 0:32:11 from a single startup all signed up at the same time because they found an article that they 0:32:15 liked. And once I talked to them, it was clear that they were forwarding, you know, a particular 0:32:18 essay around and then people were subscribing as a result of that. 0:32:22 So tell me about how you came to newsletters. Like, what made you add the newsletter? 0:32:26 When I first started, I didn't really, I actually wasn't thinking at all about, 0:32:30 you know, how do I sort of keep users over a long time? You know, I just thought of it as like, 0:32:35 let's just create a one-time spike of users because I write something cool, and then I'll 0:32:38 create another spike and I'll create another spike. And so, you know, one day I remember 0:32:42 I crossed a thousand and I was like, wow, that's amazing. And then I crossed 10,000 and then I 0:32:49 crossed 50,000. At one point, I had almost 100,000 RSS subscribers, which is amazing, 0:32:57 except that is literally when Google Reader got shut down. I know, RIP Google Reader.
And what 0:33:02 I realized with that was maybe email’s actually the right way, you know, because I thought like, 0:33:06 well, should I focus on growing my Twitter following? Should I spend a lot of time on 0:33:11 Quora, which I did. But what I realized was like, look, you know, email is this thing that’s open 0:33:16 and durable. And so that’s my primary focus in terms of building an audience is newsletters and 0:33:21 email. And I actually think of my blog, the actual web pages themselves, as like landing pages to grow 0:33:26 my email newsletter. That’s a fantastic inversion of the conventional wisdom. And what I love about 0:33:30 what you said, though, which I also love personally, is that RSS is the backbone that 0:33:34 also drives the podcasting ecosystem. And the difference in this case, though, is that 0:33:39 Google Reader was a central choke point, because that was actually the one mainstream RSS reader 0:33:43 that so many people used, which is why that was such an issue, because otherwise it’s an open 0:33:48 ecosystem, just like email is. But in your case, unlike RSS, where you can’t take your 0:33:53 subscribers with you, your email list is portable. You can take it with you wherever you go. 0:34:00 That’s right. Yeah. I applaud Andrew’s point there about the authoritative writer, because I 0:34:08 find that the quality that I’ve come to value most in pieces of writing is honesty. And 0:34:15 you can only really be honest about something if you know it absolutely thoroughly. It doesn’t mean 0:34:21 that you can’t write about things simply by investigation, but it means that the best pieces 0:34:24 are the writings that come out of your life. That’s actually the thing that I think brought 0:34:29 us together, Robert, is that you have a bias for that. And I also did. So when I was at Wired, 0:34:33 and especially when I came to A6 and Z, the fundamental thesis that I used to shift our 0:34:39 editorial model was, why are we diluting the experts’ voices through reported stories when 0:34:44 you can just hear from the expert on quantum computing or the expert on bio directly, on a podcast, 0:34:50 with the authenticity that you get from sharing your raw, unfiltered insights and voice. 0:34:53 I would argue that newsletters and podcasting are almost the exact same thing, because 0:34:58 they both have this illusion of one-on-one communication, but they’re actually one to many. 0:35:03 Robert, tell me the backstory of The Browser and how you started the thing. 0:35:10 Well, let’s wind ourselves back to 2007. It all started on a cold rainy night. 0:35:18 It all started on a hot New York day, because I was running the editorial side of Economist.com. 0:35:24 And it was the time when Internet 2.0 was bedding down and the Economist was 0:35:33 moving its center of gravity from the print paper alone. And it was introducing new features, 0:35:40 dedicated content, first blogs, reader comments onto the website. And so I thought, well, 0:35:47 it’s actually quite hard work doing new things in a large established company. Why don’t I cut loose 0:35:54 and start something? So together with a friend, we decided we would start Vox 10 years in advance. 0:35:56 You were trying to do an explainer type of thing? 0:36:02 We were trying to speed up the Economist from a weekly to a daily rhythm, I would say. 0:36:07 But if you remember the timing of this, I’m afraid we then had the financial crash. 0:36:07 Right.
0:36:10 Everybody flatlined and the question suddenly became, 0:36:19 what can I do with no money sitting in my pajamas? And the answer to that was, I could read. 0:36:26 The Browser is a daily newsletter recommending five recent pieces of writing of lasting value. 0:36:32 The more I did it, the more I realized that it was actually a very valuable service, because there was a 0:36:38 fantastic amount of good writing being published free online. But there was an even more fantastic 0:36:45 amount of dross being published around it, which obscured it. So simply going to find, 0:36:53 point to, and praise the good writing was really useful. And now, 10 years later, five million 0:36:58 pieces of reading later, I feel like I’ve inadvertently invented the best job on earth. 0:37:04 That’s fantastic. Do you mind sharing some quick behind the scenes tips or tricks that you use 0:37:09 for when you think about what makes the cut for The Browser? I look at a thousand things a day, 0:37:14 which is to say, I look at the headline, I start to read, and I stop usually when it loses my interest. 0:37:21 That’s almost rule number one. If it doesn’t start well, the chances are vanishingly small 0:37:26 that it will improve later. Writers are wise to this. They know they’ve got to get it at the 0:37:31 first sentence. So if they don’t, then it’s not happening. One of the things I did at Wired was 0:37:36 I used Chartbeat to see where readers dropped off. And that made me very strongly think about those 0:37:42 first three paragraphs and the nut graph. I literally thought about getting the reader to get just 0:37:46 enough that they would go to the next paragraph and the next paragraph. So the beginning, what else? 0:37:52 And the other question I ask myself is, is this still going to be a good piece to read in six, 0:37:59 12, 24 months’ time? So you go for evergreenness. Lasting value, right? Lasting value. 0:38:07 And I also think that we place far too much of a premium on recency in journalism. I mean, 0:38:14 you never hear anybody say, I don’t want to see that film. It was made last year. But we do train 0:38:20 ourselves to think that today’s journalism is what we must have. And yesterday’s journalism 0:38:26 is kitty litter or wrapping your fish and chips. And I think those are the instincts that were embedded 0:38:33 in us by the legacy publishing industry, where you had to buy the new thing, the new newspaper, 0:38:39 and to make room for it, you had to throw out the old one. But if you’ve got a good piece by 0:38:46 Joan Didion or James Baldwin that was written 40 or 50 or 60 years ago, that’s going to be better 0:38:55 than 99.9999% of the pieces written this year. Okay. So any final parting thoughts for our audience? 0:39:02 Yeah, I think in five years, we’re going to have a convergence of listening and translation 0:39:09 and algorithmic summarization so that I’m going to be listening on my AirPods to summaries of the 0:39:16 news from around the entire world all the time. I am actually waiting for the day that we’re going 0:39:22 to have a one person media company that is worth a billion dollars. We saw this with software, 0:39:27 right? We saw that you can have these really small teams, you know, that built Instagram, 0:39:31 that built Tumblr. And I think it is inevitable that, even in the very early days of all this, 0:39:36 we’re going to end up with like a video streamer or like a podcaster that’s really just like one 0:39:40 person. I mean, we already have Joe Rogan making quite a fair amount.
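A quick aside on the reader drop-off tracking mentioned above: Chartbeat itself isn’t shown here, but the underlying arithmetic is simple. The following is a minimal sketch in Python, using hypothetical per-paragraph beacon counts, that computes a retention curve and flags the steepest drop, which is the kind of signal that makes an editor sweat the first few paragraphs and the nut graph.

def dropoff_report(paragraph_views):
    """paragraph_views: readers who reached each paragraph, in order."""
    total = paragraph_views[0] or 1
    retention = [views / total for views in paragraph_views]
    # Share of the original audience lost between consecutive paragraphs.
    drops = [retention[i] - retention[i + 1] for i in range(len(retention) - 1)]
    worst = max(range(len(drops)), key=lambda i: drops[i])
    return retention, worst

views = [1000, 740, 520, 180, 150, 140]  # hypothetical beacon counts per paragraph
retention, worst = dropoff_report(views)
print(["{:.0%}".format(r) for r in retention])
print("Biggest drop is after paragraph", worst + 1)  # here: paragraph 3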
Yeah, Joe Rogan and like, 0:39:44 you know, obviously Ben Thompson on the B2B side. And it’s going to be like the New York Times, 0:39:50 the Wall Street Journal, and then, like, you know, the amazing newsletter writer, and it’s going 0:39:56 to be wild. I’m so excited for that world. I just think that we’re going to live in a world 0:40:03 that’s so much better and more truthful, but also kind of weirder and richer. I think the net 0:40:08 effect, where a greater share of the media we consume, and therefore the lens through which 0:40:13 people see the world, shifts this way, is actually not just going to transform people’s reading habits, 0:40:17 it’s going to transform society if done right. That’s fantastic. Well, thank you guys for joining 0:40:23 the a16z Podcast. Thank you so much for having us. Thank you. I love the show and it’s a real 0:40:24 thrill to be here as well. Thank you.
We’ve been financing good writing with bad advertising and “attention monsters” (to quote Craig Mod) for way too long. So what happens when the technology for creators finally falls into place? We’re finally starting to see a shift in power away from publications as the sole gatekeepers of talent and towards individual writers, especially since the best possible predictor of the value of a piece of writing is, well, the writer. The publication’s brand is no longer the guarantee of quality, or the only entity we should be paying and be loyal to, now that a new ecosystem is forming around the direct relationship between consumers, content creators, and the tools and business models that facilitate all of this.
So where do readers come in… how do they find signal in the noisy world of drive-by billboard advertising, “attention-monster” feeds, and the death of Google Reader? Particularly as machine learning-based translation, summarization, and other mediums beyond text increasingly enter our information diets, for better and for worse?
This episode of the a16z Podcast features Robert Cottrell, formerly of The Economist and the Financial Times and now editor of The Browser (which selects five pieces of writing worth reading, delivered daily); Chris Best, formerly CTO of Kik and now co-founder and CEO of Substack (a full-stack platform for independent writers to publish newsletters, podcasts, and more); and Andrew Chen, formerly an independent blogger and newsletter publisher, now an a16z general partner investing in consumer, all in conversation with Sonal Chokshi. The discussion is all about writing and reading… but we’re seeing this phenomenon not just in newsletters and podcasting, but also in people setting up e-commerce shops, video streaming, and more. Is it possible that the stars, the incentives, are finally aligning between creators and consumers? What happens next? What happens when you get more than (and even less than) “1,000 true fans”?
0:00:05 The content here is for informational purposes only, should not be taken as legal, business, 0:00:10 tax, or investment advice or be used to evaluate any investment or security and is not directed 0:00:14 at any investors or potential investors in any A16Z fund. 0:00:19 For more details, please see a16z.com/disclosures. 0:00:22 Hello everyone and welcome to the A16Z podcast. 0:00:24 I’m Amelia. 0:00:29 Today’s episode is all about the past, present, and future of the web, featuring a conversation 0:00:34 between two people who have played key roles in shaping how the Internet has developed 0:00:35 to date. 0:00:42 A16Z general partner Chris Dixon is interviewed by the founder and CEO of BuzzFeed, Jonah Peretti. 0:00:47 The conversation originally took place at our most recent annual innovation conference, 0:00:51 the A16Z summit, and it was also previously released on YouTube if you’d like to check 0:00:52 it out there as well. 0:00:55 Hello, good to see you all. 0:01:00 So I am very excited to have a conversation with Chris today. 0:01:06 I first met Chris back in the New York tech scene. 0:01:11 Chris at the time was running this company called Hunch, and it was a company that was 0:01:16 really sparking a lot of thinking among all of the New York tech entrepreneurs. 0:01:23 I feel like Chris was very much in the scene in New York, and Hunch was a company that opened 0:01:28 people’s eyes to new possibilities and a kind of shift in the web. 0:01:33 Could you maybe give a little overview of what the Internet was like back then and what 0:01:34 Hunch was doing? 0:01:39 Yeah, so the way I think about the web, and I think the title of this talk is past and 0:01:40 future of the web. 0:01:41 So we’ll try to cover all of that, I guess. 0:01:46 But I think of it as sort of the web one era, which was in the ’90s, where really a lot 0:01:50 of what was going on was similar to a lot of forms of new media, where, if you look at early 0:01:54 films, they were just sort of filmed plays. 0:01:58 And then eventually they figured out, okay, you can have a close-up, you can have an establishing 0:02:02 shot, you can have a sort of new grammar, and now, of course, films look totally different 0:02:04 than plays and much better. 0:02:09 And so the early web was kind of like people were taking their magazine or their brochure and 0:02:10 putting it on the web. 0:02:13 I mean, there were exceptions and things like eBay and other things, but for the most part, 0:02:15 that was sort of the dominant thing. 0:02:19 So what was exciting, I think, when you and I got started as entrepreneurs in the early- 0:02:24 to-mid-2000s, was this idea that people were starting to realize that the web was fundamentally 0:02:28 a two-way or a multi-way communication device. 0:02:32 And what were all the new design possibilities that you could create, right? 0:02:39 And so that was a big sort of, you know, Twitter and Facebook moment, and all of the kind of 0:02:42 tagging and all these other new kinds of concepts, like every week there was this new kind 0:02:46 of concept, when we came up with tagging or, you know, if you remember Delicious 0:02:51 and Flickr and all these other cool things, and it was sort of this relatively small group 0:02:57 of people, because I think the conventional wisdom at that time was, outside of Google, 0:03:01 you know, the web was this great invention, but wasn’t a great business for the most part. 0:03:05 You know, people were still getting over the hangover of the dot com period.
0:03:08 But it was a great, in my mind, it was a great period of experimentation, right? 0:03:11 And then, like, Wikipedia as an example, which I still think is kind 0:03:13 of an underappreciated marvel. 0:03:18 Wikipedia, by the way, for years and years was just trashed. 0:03:22 I wrote a blog post about this a while back where I went back and found all of the negative 0:03:23 things. 0:03:26 It was actually like banned in schools, like it was going to destroy young minds. 0:03:27 It was so inaccurate. 0:03:32 There was finally, in 2007, there was a study done that said it was actually as accurate 0:03:33 as Encyclopedia Britannica. 0:03:35 It was like a Nature study and that was like a big revelation. 0:03:39 Of course, fast forward to today and like all the other things are bankrupt and Wikipedia 0:03:42 is sort of the dominant thing, and this idea that users could come 0:03:46 together and collectively... like, the interesting thing about Wikipedia is it’s something like 0:03:53 there’s like 100,000 per day, 100,000 sort of attacks on Wikipedia, people spamming it, 0:03:54 changing it. 0:03:57 But per day, there’s also more than 100,000 people fixing it, right? 0:04:02 It’s this big kind of ocean of, like, errors and hacks and mistakes, and then this other 0:04:06 kind of counterforce of, like, people doing good things and fixing stuff, right? 0:04:10 It’s like wrong, but for 15 minutes, and it’s right for five years. 0:04:11 That’s right. 0:04:17 And so that, to me, was sort of really kind of the big story of that decade 0:04:18 of the 2000s. 0:04:21 But then, you know, the next era, which you were deeply involved with, I remember you 0:04:25 telling me, I think it was like 2000, like it was pre-iPhone, 0:04:32 I believe you said someday people are going to read their news on smartphones via social 0:04:33 networks. 0:04:36 And at the time, like, you know, we had our StarTAC phone or whatever it was 0:04:38 and it sounded like completely insane. 0:04:42 I had a Sidekick for a little while, it was pretty cool. 0:04:50 But then in my mind, that was the next wave, right, which I think even at the time, 0:04:54 we all knew the iPhone, like we probably, I had an iPhone, you probably had an iPhone. 0:04:59 But the idea that it was going to be as big as it was, like even, essentially, 0:05:04 if you look back, even Clay Christensen said the iPhone is not a disruptive technology. 0:05:07 He said it’s just a high-end rich person’s smartphone. 0:05:13 What he didn’t realize is it actually was disrupting the PC and those things. 0:05:19 He likes disruption from the bottom moving up, and a fancier phone with bells and whistles 0:05:24 isn’t disruptive, but a cheap computer that you can take with you everywhere is disruptive. 0:05:29 So, but then, yeah, that was the era that you, you know, kind of helped pioneer. 0:05:35 Yeah, I mean, I would say like, it’s hard in retrospect because we take the internet 0:05:36 for granted today. 0:05:42 But if you look at the early internet, there was still this long period of 0:05:46 everyone figuring out what you could do with the internet, to your point about film, figuring 0:05:48 out the grammar of film.
0:05:53 And so I think initially it was like, oh, you can use the internet as a way of, 0:05:58 you know, making a portal, which is kind of like a newspaper and everyone sees the same 0:06:02 thing and there’s no personalization, there’s no two-way connection. 0:06:04 And then I think people started to realize all these things you could do with the internet 0:06:06 you couldn’t do in traditional media. 0:06:08 So it’s instantly global. 0:06:09 So that was one thing. 0:06:13 It’s like, oh, you put something online and people read it all around the world. 0:06:15 Then people started realizing, oh, it can be social. 0:06:17 Like you can share the content with a friend. 0:06:21 If you read a newspaper, I mean, some of you maybe have like a grandparent who will like 0:06:25 cut out a newspaper article and send it to you. 0:06:29 So it’s possible to share, but with the internet it became really easy to share. 0:06:32 It was possible to see the data of the users. 0:06:36 So like early HuffPost, we just did something super simple, which was a click meter on every 0:06:40 headline, and we could just see which headlines people were clicking and which ones they weren’t 0:06:41 clicking. 0:06:45 If there was an important story and no one was clicking it, we’d rewrite the headline. 0:06:50 So having this two-way data connection was another piece. 0:06:52 The instantaneousness of it was another one. 0:06:58 Like it used to be, you get a newspaper with yesterday’s news on your doorstep, or you’d 0:07:03 read Time or Newsweek, which would have news from a week or longer in the past. 0:07:06 And now you just instantly get a push notification. 0:07:12 So I think we keep seeing new things you can do with the internet, and it keeps surprising 0:07:13 people. 0:07:19 And so I guess one sort of question for you is, what are the surprises that the internet 0:07:21 still has in store for us? 0:07:28 If, over the course of 15 years, we figured out it’s global, and it’s social, and it’s 0:07:34 personalized, and it’s instant, and it has all of these characteristics that have really 0:07:38 changed lots of industries, are we going to discover new things about the internet in 0:07:41 the next few years that are going to open up new businesses and markets? 0:07:42 Yeah. 0:07:43 So that’s a great question. 0:07:46 I think to me that’s the big question right now, is sort of like, how will the internet 0:07:47 evolve? 0:07:48 And I’ll take that in a few parts. 0:07:52 Like, the first thing I’ll say is what I would call the kind of conventional view, 0:07:57 which is, if you read a book like Tim Wu’s The Master Switch, which is a very good book, I would describe 0:08:01 that as sort of the conventional view: the internet is like every other form of 0:08:05 media in the past, which is, it starts off, and it’s sort of the wild west, and then eventually 0:08:11 a few incumbents emerge, you know, ABC/CBS, radio, cable, and then it’s sort of, okay, 0:08:14 those incumbents control it, and it’s sort of game over, and they’re the gatekeepers, 0:08:15 and that’s it, right? 0:08:16 And that’s kind of the conventional view.
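An aside on the headline “click meter” Jonah describes above: the core of the idea is just impressions and clicks per headline, plus a threshold for when an important story deserves a rewrite. The sketch below is in Python, and every name and threshold in it is hypothetical rather than anything HuffPost actually ran.

from collections import defaultdict

impressions = defaultdict(int)
clicks = defaultdict(int)

def record_impression(headline):
    impressions[headline] += 1

def record_click(headline):
    clicks[headline] += 1

def headlines_to_rewrite(min_impressions=500, ctr_floor=0.01):
    """Flag headlines shown often enough but clicked too rarely."""
    flagged = []
    for headline, shown in impressions.items():
        if shown < min_impressions:
            continue  # not enough data to judge yet
        ctr = clicks[headline] / shown
        if ctr < ctr_floor:
            flagged.append((headline, ctr))
    return sorted(flagged, key=lambda item: item[1])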
0:08:21 Yeah, and that book, by the way, I think the thing that felt most analogous to the internet 0:08:27 was radio, because radio was started by a bunch of hobbyists who would put up an antenna 0:08:32 and broadcast in their local area, and it was a lot of hobbyists who were hacking radio 0:08:39 and building things out, and then slowly it ended up being consolidated into CBS as a national, 0:08:43 you know, media conglomerate that had lots of control over radio, so it kind of went 0:08:44 from hobbyists to — 0:08:48 I think of that as, the way I describe that is there’s technologies that have an outside-in 0:08:52 adoption pattern and inside-out, and so outside-in, like open source, would be the canonical example, 0:08:58 right, where it’s completely fringe stuff, I mean, it’s Richard Stallman making extreme libertarian 0:09:03 statements in the ’80s at MIT, and now it’s 95% of the operating systems in the world, 0:09:04 right? 0:09:08 So completely on the fringes, whereas like the iPhone, that was inside-out, it was Apple 0:09:10 in, you know, Cupertino, and a lot of — 0:09:16 Really early Apple, the PC, was outside-in; the smartphone was inside-out, right? 0:09:20 A lot of it has to do with the fact that you needed probably a billion dollars to build a proper 0:09:23 iPhone and to market it and everything else, and supply chain, and there’s a whole bunch 0:09:26 of complex reasons why that had to be inside-out. Crypto, which we’ll talk about, I think is very 0:09:30 much an outside-in kind of movement, sort of these hackers and hobbyists and smart 0:09:31 people doing it on a weekend. 0:09:34 And you like the outside-in movements generally? 0:09:39 I mean, yes, I do like them, I think that they — well, I think from a startup investor 0:09:42 point of view, both as an entrepreneur and as an investor, those are where the bigger 0:09:43 opportunities are, right? 0:09:48 Because it’s much harder — if it’s an inside-out and it’s going to require a new game console, 0:09:51 like it’s just the reality of the economics of it, it costs five billion dollars probably 0:09:56 to build that, to market it, to do all the exclusives, like, you know, it’s a very expensive 0:09:57 proposition. 0:09:58 So — 0:09:59 You have enough funds under management to cover that, right? 0:10:04 Yeah, I guess we’re getting there, hopefully, but from an entrepreneurial perspective, it’s 0:10:08 these kind of disruptive things that, you know, I like to say, they start off looking 0:10:13 like a toy, built by sort of hackers on the weekends, right? 0:10:19 There’s a deep reason why — like, I think there’s a deep reason why so many of these 0:10:25 technology movements were done sort of by hobbyists, and it’s not just sort of a cultural 0:10:29 thing, you know, that technologists like to wear flip-flops and hang out or something 0:10:30 like this. 0:10:37 Which is, you basically have — nine to five is governed by business people, right? 0:10:40 Nine to five is governed by — like, what you do during nine to five work hours is governed 0:10:45 by people that generally have a one- to three-year time horizon, right? 0:10:48 They have to, like, unless they’re the — you know, maybe Jeff Bezos is an exception or 0:10:51 something, but, like, for the most part, like, you want to keep your job as a manager of 0:10:54 a company, and you’ve got to manage to a one- to three-year time horizon. 0:10:59 So, where does the ten-year away stuff, the five- and ten-year away stuff, happen, right?
0:11:04 It happens when the smartest people get to vote with their own time, right? 0:11:08 And that’s why it happens on the — that’s why, like, if you go back 0:11:12 and read history, like, so much of the — there’s a great book about Henry Ford 0:11:17 I read recently, and you look at early cars, it’s, like, exactly like, you know, SOMA 2015 0:11:18 or something. 0:11:19 They were in Detroit. 0:11:20 They were hacking. 0:11:24 You go read, like, I was reading the old issues — The Horseless Age was, like, the 0:11:26 TechCrunch of the era. 0:11:28 It’s now, actually, Car and Driver, the same magazine. 0:11:31 And if you go read the old ones, it was all, like, oh, my God, this, like, cool new carburetor. 0:11:35 And, of course, what did they do? Like, today we think of Henry Ford, you know, wearing 0:11:39 a suit — no, he was, like, lying down, trying to race as fast as he could with his friends, 0:11:40 whoever could build the fastest car. 0:11:44 There are, like, these pictures of him covered in oil, like, you know, going 70 miles 0:11:47 an hour, the thing, like, practically blowing up, you know, it was basically, 0:11:51 like, Wozniak and, like, you know, these other hackers. 0:11:55 So, yeah, so I think — and so I think the big question — going back to the Tim Wu thing, 0:11:58 I think the big question with the web right now is, is it going to be like that? 0:11:59 Is it Comcast? 0:12:00 Is it over? 0:12:01 Is it sort of Google? 0:12:02 Apple? 0:12:03 And that’s it. 0:12:04 Or is it different? 0:12:05 I would argue it’s different. 0:12:09 The Internet is a very different type of medium than, let’s say, radio or cable, in that it’s 0:12:11 software-based. 0:12:15 The design of the Internet is you have very deliberately very simple, you know, core 0:12:18 protocols, like the Internet Protocol. 0:12:22 And then all of the smarts live on the edges, and the edges can upgrade themselves, and 0:12:27 they’re constantly — it’s a constantly evolving organism, and it evolves according to incentives. 0:12:31 And one of the very powerful things, and one of the reasons I’m so excited about the whole 0:12:37 kind of crypto blockchain movement, is the whole thing is around how do you design incentives 0:12:42 to get people to kind of upgrade and change the code they’re running on the Internet. 0:12:45 And so, to me, a huge question right now is just sort of, you know, is it going to be 0:12:50 like the last things, the last kind of radio, TV, etc., where it’s sort of, this is it, 0:12:54 and now startups just sort of pick up the scraps, or maybe, you 0:12:59 know — by the way, there’s plenty of other — one 0:13:01 interesting thing about tech is there’s so many different movements happening, right? 0:13:03 So this is sort of the core Internet. 0:13:06 Meanwhile, there’s all this interesting stuff happening in enterprise software, in fintech, 0:13:08 so that’s all going full speed ahead. 0:13:11 I’m talking more about just sort of the core Internet architecture. 0:13:15 I think another really interesting thing, if you look at past historical trends, is there’s 0:13:19 always sort of a first-order and second-order effect of any major new technology, okay? 0:13:23 So what I mean by that is, like, the car comes along, and the first-order effect is you can 0:13:25 drive from point A to point B faster, right?
0:13:28 The second-order effect took 50 years to play out. 0:13:34 It was suburbs, trucking companies, e-commerce, what — I don’t know, mechanized warfare. 0:13:39 Like, there’s just thousands of, like, kind of secondary, second-order implications of 0:13:41 this new technology, right? 0:13:43 But it took a really long time to play out. 0:13:47 And I think the thing that we’re seeing right now is with social media and the Internet, 0:13:51 and we’re in that kind of, I don’t know, if it’s a car, we’re probably in 1915, 0:13:55 not, you know, 1950 or something. We’re still very early on. 0:13:58 We’re seeing the effects on the media landscape. 0:14:02 We’re seeing the effects on the political world. 0:14:06 I think things like cryptocurrency, for example, in many ways, it’s a consequence of social 0:14:07 media. 0:14:10 If 20 years ago someone invented Bitcoin, you’d have, like, a couple of New York Times articles 0:14:13 quoting some Yale economist saying, “This is stupid. 0:14:14 It’s over.” 0:14:18 Maybe it’d be, like, a zine or whatever, like, you know, like some weird hobbyist magazine 0:14:19 you could read. 0:14:20 But that’d be it, right? 0:14:25 And now you’ve got an army of, I don’t know, probably 100 million plus cryptocurrency enthusiasts 0:14:29 who have Twitter followers, they have blogs, they have GitHub accounts, they 0:14:34 have Reddit karma, and they’re out proselytizing, and, you know, it’s the fifth estate. 0:14:36 Like, the fifth estate loves it, right? 0:14:37 The fourth estate doesn’t. 0:14:38 The fifth estate loves it. 0:14:40 And it turns out they have a lot of influence these days, right? 0:14:45 And so that’s, like, another example of something that, you know, is this sort of unexpected 0:14:46 second-order effect. 0:14:50 And what will those other second-order effects be, I guess is, to me, a big question. 0:14:53 So this concept of the fourth estate and the fifth estate, basically the press, and the 0:14:59 public, or active people on social media and public sentiment. 0:15:05 One thing about the press that I feel like has happened recently, you know, really, 0:15:10 Trump was maybe an inflection point, but it was a larger thing, which is, there now feels 0:15:16 to be a lot more fear in the press about decentralized networks. 0:15:21 And the fear seems to be, well, if there’s not a gatekeeper, there’s not someone checking 0:15:25 the facts, or if there’s not someone making sure that information has integrity, that you 0:15:31 might end up with, you know, fascist movements, populist movements, separatist movements, 0:15:38 people being driven by emotion and not facts, sort of post-truth where politicians can 0:15:43 just say whatever they want and just spread it on social media and kind of bypass the 0:15:45 press, kind of go direct to consumer. 0:15:52 And I think that that fear probably also has influenced press coverage of crypto, because 0:15:58 the promise of crypto is similar to the internet in that it is democratizing, giving 0:16:03 more people a voice, and more decentralized. 0:16:10 And so what’s legitimate about those concerns, what are the press missing, like how should 0:16:15 the press and the public be thinking about the value of decentralized systems? 0:16:18 I think, by the way, it’s not just crypto in the blockchain sense, but crypto in the encryption 0:16:19 sense. 0:16:22 Like I wouldn’t be surprised if we head into another era similar to the ’90s where there’s 0:16:23 a real battle.
0:16:26 I mean, we see it with Apple and the FBI and things like this, just like encryption in general. 0:16:32 I mean, it used to be in the ’90s, they were classified as munitions, like the RSA algorithm 0:16:33 and things. 0:16:35 And then this whole Clipper chip thing. Anyway. 0:16:39 So like that, like encryption alone, like Zuckerberg says they’re going to go to private 0:16:41 messaging and end-to-end encryption. 0:16:44 And, you know, so forget about blockchain, just like that alone is going to be a hot 0:16:45 button issue. 0:16:47 And the reality is you’re going to have bad stuff. 0:16:52 And even within Facebook, there were a lot of disagreements about should we have everything 0:16:56 being encrypted to protect privacy, or should we have content not being encrypted so we 0:17:02 can scan the content for child pornography or terrorist activity or abuse or other kinds 0:17:03 of things. 0:17:08 I mean, you know, look, take the telephone: were bad things 0:17:12 happening using the telephone? Like, probably a lot of bad stuff has happened using a telephone, 0:17:13 right? 0:17:17 We decided as a society that we wanted to put pretty strict privacy controls over telephone 0:17:18 use, right? 0:17:22 Like in court transcripts, whenever you see “call me,” you know something bad was going 0:17:24 to happen on that phone conversation. 0:17:29 Yeah, because we, we decided, like, we just... I hear some knowing laughs in this audience. 0:17:32 From a regulatory perspective, though, we decided to make that trade-off, right? 0:17:37 We decided to make the trade-off that we wanted to preserve, 0:17:42 you know, people’s feeling that this was a private medium (it was probably more of a feeling 0:17:48 than a reality, but the feeling that it was a private medium), and we decided to regulate it as such. 0:17:49 And I think that’s going to be a big question. 0:17:51 And look, there are going to be trade-offs. Like, the New York Times one week will 0:17:56 have an article about how these companies, you know, Google, etc., are surveilling you, and the next 0:17:59 week will have one about how they’re allowing terrible stuff to happen. 0:18:00 And so where do you draw the line? 0:18:01 It’s really hard. 0:18:06 I think, you know, I think that one of the great things about the internet in the first 0:18:10 era was the fact that it was sort of community governed and controlled. 0:18:14 And the second era, what was great is that we got amazing web services that were free 0:18:16 for billions of people. 0:18:21 And what my hope is, in the third era, is we could find kind of a happy medium where 0:18:24 we’d recapture some of the kind of community-controlled aspects. 0:18:29 So for some of these issues, you know, should political ads be allowed 0:18:30 on a social network? 0:18:34 Like, should this be decided by a single company or should it be decided by a community? 0:18:39 Decisions around DNS, for example, the naming service for the internet, were always 0:18:40 a community thing. 0:18:41 It was run by a non-profit. 0:18:45 It was like community standards, you know, it was done in an open way. 0:18:48 This is not, you know... it sounds very utopian. 0:18:52 It was actually reality until recently, you know, how the internet was governed. 0:18:55 And so the question for me is whether we can do that now, and I believe we now have the 0:18:57 technology where we can have kind of the best of both worlds.
0:19:00 I mean, why is everyone so upset on Twitter, right? 0:19:01 There’s a bunch of reasons people are upset. 0:19:05 But I think one of them is, you know, they helped create that platform. 0:19:10 I mean, I was early on Twitter, you were early on Twitter, and then you feel like, you know, 0:19:11 you helped create it. 0:19:14 And then suddenly all these new rules are getting imposed. 0:19:17 Like, you kind of felt like an owner in a lot of these cases if you 0:19:18 were early on these networks. 0:19:20 And then you realize later on, you’re not. 0:19:23 You’re just setting up a platform for Kim Kardashian. 0:19:24 Basically. 0:19:29 And it’s not just users, by the way, it’s people like you, I mean, companies 0:19:31 like yours, like media companies, right? 0:19:34 You have a partnership and the rules change, and like next week, the rules are different. 0:19:38 And like, you know, should you have a seat at the table deciding that, right? 0:19:42 To me, that’s a right. That doesn’t mean it’s anything goes. 0:19:45 It just means you have a kind of community-governed process. 0:19:48 And that’s the core value of the whole kind of crypto blockchain world. 0:19:49 Everything is open source. 0:19:51 Everything is community governed. 0:19:54 It’s a very deeply held belief. 0:19:59 And you know, it really comes from the open source movement. 0:20:04 I think of it as an extension of open source and what we were talking about backstage, 0:20:10 the difference between a city and Disneyland. 0:20:16 And you know, if you’re in Disneyland, everything is controlled. 0:20:20 There aren’t bad areas where there’s crime or things like that. 0:20:23 But also everything feels a little fake. 0:20:27 And if you’re in a vibrant city, there’s lots of, you know, dynamic things. 0:20:29 People building and creating things and making things. 0:20:30 There’s good parts. 0:20:32 There’s bad parts. 0:20:36 And that can be a metaphor for, you know, what different kinds of visions of the 0:20:41 internet we want: do we want the walled-garden Disneyland internet, or do we want the more 0:20:44 vibrant city internet, even with some of the downsides? 0:20:45 Yeah. 0:20:49 And so to me, like, one way to think of what a blockchain is, a simple way 0:20:50 to think of it: 0:20:56 it’s the first time that we’ve had a concept of a community owned and operated web service, 0:20:59 one that’s truly owned by a community, right? 0:21:02 And so I kind of think of it, if you think of an analogy to the real world, like, you have 0:21:06 an iPhone, that’s kind of like your home, it’s like your personal computer; you 0:21:09 can rent a computer at AWS, and it’s kind of like your office, right? 0:21:13 And then you have things like Facebook and Google, which are shared services that feel 0:21:16 like public spaces, but are actually controlled by a company. 0:21:20 And now we have the ability to create things that are kind of more like parks or like cities, 0:21:23 or community-controlled public spaces. 0:21:28 It’s a very interesting kind of new thing that that allows. 0:21:31 And you know, the rules aren’t going to be changed because the rules are written into 0:21:32 the rules. 0:21:33 That’s right. 0:21:34 The rules are written. 0:21:35 That’s actually the core feature of a blockchain.
0:21:40 The way I would define a blockchain is it’s a computer where there are strong game-theoretic 0:21:44 guarantees that the code will run as designed and the rules won’t change, essentially. 0:21:47 Like that’s fundamentally what it is. 0:21:53 And so until now, you know, if you were using a public computer, if I was on Facebook, just 0:21:58 by virtue of the fact that that computer is controlled by that company, they can change 0:21:59 the code. 0:22:00 They can change the rules, right? 0:22:01 This is the first time you had that. 0:22:02 Now there are trade-offs. 0:22:07 Like, you lose performance, and there’s a whole bunch of kind of weaknesses with 0:22:10 this architecture that I think will get fixed over the next few years; we’re investing 0:22:14 in a lot of these things to try to improve it. It’s still, you know, early and evolving. 0:22:18 But that’s a very powerful concept, and it means you can have a commons, and so you can 0:22:24 have a social graph, for example. You know, like, actually, if you go back, for people 0:22:29 that are interested in the history of this, like RSS was a real contender for a while. 0:22:30 So RSS is a protocol. 0:22:36 It’s like a sort of a blogging protocol that was a real contender for a while to compete 0:22:39 with the proprietary social networks. 0:22:42 But the problem with RSS, in my view (I was involved in this and invested in a bunch of 0:22:47 companies around it), is that you couldn’t make a user experience the way 0:22:50 you could with Facebook or Twitter or something like this, because you had to do all this kind 0:22:53 of weird technical stuff and set up your domain and all of that. 0:22:58 So back in those days, we did an open source project called Reblog. 0:23:05 Reblog was a server-side RSS reader where you could subscribe 0:23:10 to all the sites you wanted, you’d get this information, and then you could press a button 0:23:18 to repost anything you liked and it would say, “Chris has reblogged this.” 0:23:21 And that was a long time ago. 0:23:25 I don’t remember the exact year we did it, but then David Karp saw that and added reblogging 0:23:30 to Tumblr, and then Twitter’s community saw sort of Tumblr having this 0:23:35 functionality and then they added that to Twitter, where originally, when you would retweet something, 0:23:38 you would just write RT and retweet it and it wasn’t built into the software. 0:23:42 Actually Twitter then built it into the software and then Facebook added the share button kind 0:23:44 of looking at Twitter. 0:23:48 And so it was really to your point about RSS not being able to have that functionality.
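A small aside on Chris’s definition above, that a blockchain is a computer with strong guarantees that the rules won’t change: most of that guarantee comes from consensus and economic incentives, which are well beyond a few lines of code, but one narrow ingredient, tamper-evidence of the shared record, can be sketched with a hash chain. The Python below is a toy illustration under that assumption, not a description of any real chain.

import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain):
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False  # history before block i was rewritten
    return True

chain = []
append_block(chain, {"rule": "retweets are part of the protocol"})
append_block(chain, {"post": "hello"})
print(verify(chain))                      # True
chain[0]["data"]["rule"] = "changed later"
print(verify(chain))                      # False: the tampering is detectable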
0:23:56 We saw this need to make content social, you know, way before BuzzFeed, and we made it work 0:24:06 using RSS, which is this open platform, and, like, maybe 15 to 25 people installed 0:24:10 the Reblog software and there was like a little network where they would like reblog to, you 0:24:15 know, and they had sites that maybe got a few thousand readers, and so we would sometimes 0:24:19 have something get reblogged like three or four times, and so you sort of saw this is 0:24:24 how it should work, but to make it really good for a user, the Facebook and Twitter 0:24:28 model was a lot better, and we never tried to make it into a company, but had we tried 0:24:31 to make it into an open source company we would have been at a disadvantage compared 0:24:32 to… 0:24:37 Yeah, there was no way to have, like, a community-controlled place to store things, like 0:24:38 to store the social… 0:24:40 Like just simply the technology wasn’t there yet. 0:24:45 There was a Wired article (I have a blog post I wrote about it) where, in 2008 or something, 0:24:50 they tried to create an open source social, you know, kind of Facebook competitor, and 0:24:54 they said, like, basically the problem is there’s nowhere to store the social graph or something. 0:24:57 Now what we have today is we have these public commons, we have these publicly shared databases, 0:25:00 community-controlled databases, so what can we do with it, right? 0:25:06 And we’re in a very exciting period, I think, where, like, I always think of it as like the 0:25:11 history of technology is every 10 to 15 years there’s a major new computing cycle, so mainframes, 0:25:16 you know, PCs, internet, smartphones, and now today, what are we, you know, what 0:25:17 is the cycle? 0:25:24 I obviously have my beliefs, but what you have is, with each of those periods, you have kind 0:25:29 of a gestation phase where people are sort of experimenting, so with the early smartphones 0:25:34 you had the Sidekick and BlackBerry and Treo, and it was sort of, you know, they had a scale 0:25:37 of, like, you know, BlackBerry was more successful, the rest are on the scale of, like, a few 0:25:42 million, you know, maybe 10 million users, and then eventually you kind of get to the 0:25:46 point where you have like kind of a breakthrough device, and then you have this amazing golden 0:25:55 period where entrepreneurs flock to this new platform and then very rapidly explore 0:25:58 what I would call like the design space, like what can you do with this, right? 0:26:03 So like the smartphone comes along and, you know, if you ask people in 2007 what are you 0:26:06 going to do with a smartphone, they probably would have taken a lot of the ideas of what 0:26:07 happened with PCs, right? 0:26:13 They wouldn’t have thought, like, the killer thing will be calling a car or sending a vintage- 0:26:17 looking photograph to a network of people. Like, it may have 0:26:21 been on your list of 100 things, but it probably wasn’t, you know, “ephemeral 0:26:26 messaging will be the killer app for this.” I mean, there were clearly 0:26:29 entrepreneurs who believed that, but there were, you know, 10,000 credible attempts to 0:26:35 create mobile startups, and of that, probably, you know, 10 of massive significance kind 0:26:37 of emerged.
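An aside on the Reblog project described above: a server-side RSS reader with a one-button repost is a small amount of code today. The sketch below is a loose, hypothetical reconstruction in Python using the third-party feedparser library, not the original Reblog code; the feed URL and the in-memory storage are placeholders.

import feedparser  # third-party: pip install feedparser

subscriptions = ["https://example.com/feed.xml"]  # placeholder feed URL
my_reblogs = []                                   # a real version would persist this server-side

def fetch_entries():
    """Pull titles and links from every subscribed feed (when the feed provides them)."""
    entries = []
    for url in subscriptions:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            entries.append({"title": entry.get("title", ""), "link": entry.get("link", "")})
    return entries

def reblog(entry, who="Chris"):
    # The one-button action: republish the item under your own name.
    my_reblogs.append(dict(entry, note="%s has reblogged this." % who))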
0:26:41 And so I think that we, I believe, we’re on the precipice now, not just crypto, I think 0:26:45 AI and virtual reality and there’s a bunch of really exciting things happening, but we’re 0:26:49 about to hit that point where you get kind of the iPhone moment and then you get all 0:26:53 sorts of interesting experiments that get run and out of that, it’ll be a lot of chaos, 0:26:58 a lot of train wrecks, you know, so there’s pain in this process as well. 0:27:03 The other thing that I think happens is trends converge that you didn’t expect, you know, 0:27:09 so I think just as an analogy, when BuzzFeed started, the iPhone didn’t exist yet. 0:27:15 And then when the iPhone first started getting used, people were only consuming really text 0:27:18 content, maybe images on it and it wasn’t great for video. 0:27:21 And then there was this digital video trend. 0:27:26 So there was this digital video trend and this mobile trend and this social trend. 0:27:27 And so there’s three sort of different trends. 0:27:30 And by the way, I would add cloud to that too. 0:27:34 Like if it wasn’t for the work that Salesforce and AWS did, you couldn’t have stored all 0:27:36 that data so cheaply. 0:27:42 So there’s really, yeah, four different trends, you know, cloud, mobile, social, digital video, 0:27:50 and then it turned out that those trends all converged into mobile social, you know, being 0:27:55 on a mobile device on a social platform, watching digital video that streamed from 0:28:01 the cloud, you know, so all those things converge and then make something seem just 0:28:06 like one simple thing, which is like I’m scrolling through a feed and watching video. 0:28:11 And so I think you’ll see the same thing with crypto and, you know, where the other trends 0:28:21 you mentioned, AI and VR and crypto, you know, it’s hard to predict how those trends will 0:28:24 converge and end up making something that is more than the sum of its parts. 0:28:28 I actually think I would even go further and I would say not only, I think, are we about 0:28:33 on the cusp of multiple major technology breakthroughs converging. 0:28:36 So probably AI, but then I would call kind of new devices. 0:28:42 So that’s everything from talking speakers, to cars, to VR, to blockchain and crypto. 0:28:45 And I think those will be kind of like cloud, mobile, social and reinforce each other. 0:28:48 I think in addition to what else is happening right now, we’ve got, what is it, like three 0:28:51 to four billion people with smartphones, that’s going to go to eight billion. 0:28:55 In addition to that, the hours per day spent on these devices is going to continue to go 0:28:56 up. 0:28:59 So you’re just going to have essentially like two X the time spent, if not more. 0:29:05 Number three, you now have major areas of the economy that were previously relatively 0:29:10 untouched by technology, namely finance, education, healthcare. 0:29:16 I think now entrepreneurs have now figured out ways to kind of bring modern technology 0:29:18 into those industries. 0:29:25 So I kind of think like, if you combine like number of engaged users, level of engagement, 0:29:32 plus unlocking what’s basically 70% of GDP with these new kind of markets, plus what 0:29:36 I think is kind of like multiple major new trends coming together. 0:29:37 It’s not quite there. 
0:29:41 The technology is still, like, all these things, like even, you know, I mean, like Alexa 0:29:45 is awesome, but, like, you know, autocorrect breaks half the time, like it’s still not 0:29:47 quite there, but it’s very, very close. 0:29:50 And the same with the crypto blockchains: like, it’s slow and 0:29:53 Bitcoin has its issues and everything, it’s not quite there. 0:29:55 It’s still the Sidekick era. 0:29:59 You know, it’s not the iPhone era, but, yeah, I think those three things will 0:30:02 converge, and you have these other kinds of macro trends now. 0:30:05 And the economics too, like, you know, the cable bundle 0:30:08 is going to break at some point, I think soon, right? 0:30:11 But at some point, like, fundamentally, the economics will no longer 0:30:12 work to have the cable bundle. 0:30:15 I think that’s the relatively near future. 0:30:20 And then there will be this sort of massive, you know, wave of people and dollars shifting 0:30:23 over to digital technology. 0:30:27 My earlier comment about all the things the internet enables, TV hasn’t gotten the benefit 0:30:28 of those things. 0:30:34 Now, except for Netflix. And now all the big traditional media 0:30:38 companies are moving over, and they’re going to have the advantages of the internet. 0:30:42 Like for example, you know, the idea that you would flip through the channels, hoping 0:30:47 something good is on, was the way that traditional media has historically worked, as opposed 0:30:52 to, what is the best content for me that has ever been produced in the universe? 0:30:54 I’m going to, you know, watch that. 0:30:58 And so now that’s possible with Disney; it used to be that was possible with the Netflix 0:30:59 archive. 0:31:00 Now it’s possible. 0:31:03 You know, so we’re seeing a lot of big industries that are slower 0:31:06 to change starting to adopt. 0:31:07 Yeah. 0:31:12 I mean, it’s kind of surprising, 25 years into the 0:31:16 internet, that the only industries that have been really, quote, disrupted have been, I 0:31:21 think, media, maybe transportation, and maybe it’s starting to happen with retail, right? 0:31:24 And then the rest, I mean, if you look at the list of incumbents in every other industry, 0:31:26 it hasn’t changed that much. 0:31:27 Right. 0:31:28 And even the media world, right? 0:31:30 Like, still, the vast majority of people, they sit there and, you know, they watch 0:31:35 whatever reality TV on a regular cable box, and, like, it hasn’t changed that much 0:31:36 yet. 0:31:37 Yup. 0:31:38 Yup. 0:31:39 It’s slow. 0:31:40 It’s slow for these things to change. 0:31:43 There are also times when I feel like the converging trends end up creating some kind 0:31:47 of a Frankenstein situation where the trends are in conflict with each other. 0:31:53 The goal of being the future of media and recommending content to people, and the goal of being a place 0:31:59 for people to connect with their friends in a private way, you know, with close friends, 0:32:00 feel like they’re in conflict. 0:32:02 So you end up having Facebook. 0:32:05 It’s almost like we’re having a phone conversation and then you say something interesting and 0:32:08 then it’s like, oh, we’re going to show that to a million people.
0:32:12 You know, it’s a little weird if you’re talking with your friends and, you know, sharing content 0:32:15 with a small group, that that could end up reaching tons of people. 0:32:20 And there are trends that kind of smash into each other in a way that causes problems. And 0:32:25 yeah, it feels like we’re in a very interesting time now, with so much 0:32:30 attention on these platforms and, you know, the decisions they’re making. And, 0:32:35 I mean, maybe you have insight into it, but I don’t 0:32:43 feel like we fully understand, you know, the etiquette, the personal 0:32:48 etiquette. You know, like, is it okay to, you know, talk a certain way on Twitter, or 0:32:53 go find somebody’s old tweets, or, you know, should the platforms... like, all these things are 0:32:54 being figured out, right? 0:32:56 And it’s very confusing. 0:33:02 And I think, in my view at least, it’s causing a lot of upheaval, kind of social upheaval 0:33:03 or something. 0:33:04 Yeah. 0:33:09 People don’t know how to interact with each other, or how to figure out what 0:33:13 information they should pay attention to. 0:33:20 And then I feel like when you look at the press 0:33:25 coverage of all this, there are just so many trade-offs. And reporters, 0:33:29 I mean, I know this because, you know, BuzzFeed News covers this stuff. 0:33:33 And it’s not really the job of a reporter, at least 0:33:37 classically, to read everything and make a policy recommendation. 0:33:42 They’re just trying to say, oh, here’s a leaked, you know, document, or 0:33:45 here’s some information people didn’t know, or, oh, this company’s struggling with deciding 0:33:50 between these two different, you know, possibilities. 0:33:55 But it doesn’t feel like there’s a clear sense right now of, this is where 0:34:00 society is headed and this is how technology is part of progress. 0:34:05 And is there a way to get more of that back, where people see the benefits 0:34:10 of technology, you know, both to the economy, but also to people’s lives and to society? 0:34:11 0:34:12 Yeah. 0:34:13 That’s a great question. 0:34:18 I think, I mean, my view is, and this is again going back to 0:34:21 my main area of interest, the crypto stuff, but there’s this feeling I think a lot of 0:34:25 people have that it’s sort of this bait and switch, you know, that they helped build 0:34:29 up these networks, whether it be a marketplace or a social network. 0:34:33 And then the benefits go to other people, the governance goes to other people. 0:34:40 And so I think having more, you know, new architectural designs that provide incentives 0:34:44 and are more inclusive can help with that. 0:34:48 Now, getting that message out is very hard, because there’s a lot of, like, 0:34:55 negative, I think, misunderstanding around it, you know, going back to Bitcoin and crypto 0:34:56 in general. 0:35:00 And, like, it’s actually a very kind of utopian, meritocratic technology, but that’s 0:35:04 very hard to explain to people. You know, I think some of it from 0:35:09 the fifth estate is you’ll see people on social media being like, you know, triple your money 0:35:10 right now.
0:35:15 And like trying to basically pump an altcoin or something like that. 0:35:20 And how much has that driven progress? You know, the fact that people are trying to make 0:35:22 money is in itself a powerful incentive. 0:35:27 How much has that driven progress in crypto and, you know, blockchain, and how much is 0:35:28 that, like, holding it back? 0:35:36 And is there a way, in a more decentralized system, to shift the 0:35:41 narrative towards things that will lead to the overall benefit of the total community? 0:35:42 Yeah. 0:35:47 I think it’s definitely a good point, and, like, there was this big run-up in 2017, where a 0:35:52 lot of people kind of, you know, sort of had this get rich quick kind of mentality. 0:35:59 I do think what’s really important about it is there’s the negative side, as you highlighted. 0:36:04 The positive side is, it’s a business model that doesn’t depend on advertising and tracking 0:36:05 users, right? 0:36:10 So, Marc Andreessen talks about this. There’s actually an error code. 0:36:12 You know, people are probably familiar with error code 404, 0:36:16 when you go to a webpage and it’s not there. There’s also error code 402, 0:36:17 which was never implemented. 0:36:18 It says payment required. 0:36:21 And so the original idea with the original internet was to actually have, like, money as 0:36:22 a native unit. 0:36:24 Now, of course, we have credit cards and things. 0:36:25 It took a long time, by the way. 0:36:30 And in fact, SSL was very controversial, and you need SSL for credit cards. 0:36:34 And that was actually almost regulated away because people thought, well, who would want 0:36:36 encryption except for bad people? 0:36:38 But it turned out you needed it for banking and things. 0:36:42 Anyway, so we’ve now grafted kind of the legacy financial payment systems onto the 0:36:44 internet and it sort of works, right? 0:36:48 But still the vast majority of these big companies are funded through advertising. 0:36:53 So I think one other utopian idea is that, you know... I think one of the reasons 0:36:57 people are disillusioned is they feel this sense of, like, they’re not part of it. 0:36:59 They aren’t included. 0:37:04 And the business model they feel like is extractive. 0:37:07 And you know, we see this with GDPR, and I think there’ll be similar kinds of stuff 0:37:11 happening in the U.S. around sort of privacy. 0:37:17 And so, you know, I think there’s this huge, I mean, in my sort of corner of the internet 0:37:23 with digital media, there’s a huge number of intermediaries who are skimming money in 0:37:24 the ad tech space. 0:37:33 And oftentimes the main value they’re providing to an advertiser is surveilling and tracking 0:37:38 users, where, if users had the choice to say, do I want to be tracked this way or not, 0:37:40 they would almost always say no. 0:37:47 And these are companies that don’t actually make content or create apps or build things 0:37:49 that consumers value. 0:37:53 So there’s some hope that blockchain could help with that. 0:37:57 I think a lot of what we’re feeling right now, and why people are upset, 0:37:58 is they don’t feel like the interests are aligned. 0:38:01 And the question for me is, is there a way to realign that? 0:38:02 Or is it just too late, 0:38:03 and that’s just the state of things?
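One concrete footnote to the error-code history above: 402 “Payment Required” really is reserved in the HTTP spec and servers can return it today; what never got standardized is the payment mechanism behind it. Here is a toy Python sketch of a server that does so; the “X-Payment-Token” header and its check are invented for illustration, not part of any standard.

from http.server import BaseHTTPRequestHandler, HTTPServer

class PaywalledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical check: in practice this is where a real payment or
        # micropayment proof would be verified.
        if self.headers.get("X-Payment-Token") != "paid":
            self.send_response(402)  # Payment Required
            self.end_headers()
            self.wfile.write(b"402: payment required to read this page\n")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"the article\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8402), PaywalledHandler).serve_forever()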
0:38:07 And maybe that's the pessimistic view, and it's just going to be, you know, one trend 0:38:08 we have seen at BuzzFeed. 0:38:13 If you had asked me a couple of years ago, I would never have guessed the transformation 0:38:17 that's happened in our business, from advertising revenue being essentially the only kind of 0:38:25 revenue to now having so much of our audience looking to transact. 0:38:30 And so 500 million in GMV where people are seeing, you know, products we might recommend 0:38:37 or things that we're talking about, and a multiple of that in other kinds of transactions 0:38:39 that we're indirectly influencing. 0:38:45 But we're seeing, again, this convergence of trends: content shifted to mobile, 0:38:48 but people weren't transacting on mobile. 0:38:52 They were going back to their desktop to do their shopping, where they could 0:38:55 type in information and do things more easily. 0:38:57 Now it's better to shop on mobile. 0:38:59 That's where more people want to shop. 0:39:03 You can double-click and buy stuff, your credit card's integrated in. 0:39:08 And so we're seeing a huge shift where media has become much more transactional, where 0:39:11 people are looking to be inspired to do something. 0:39:15 Go on a trip or buy a product or try a new experience. 0:39:20 And I think a lot of it is also just that these online marketplaces are starting to dominate 0:39:22 the entire economy. 0:39:23 And there's infinite choice. 0:39:27 If you're Gen X, Y, or Z, or anyone actually who's using the Internet, you're used to 0:39:36 being able to watch any show on the streaming services, listen to any song on Spotify, go 0:39:40 to any travel destination with Airbnb or one of the OTAs. 0:39:45 And so you're looking for some culture and content and vicarious experience, something 0:39:49 that will inspire you to choose which option out of all of these options is worth doing. 0:39:52 And now with your phone, you can just transact. 0:39:58 And so I think that there is this larger shift towards transactional media as opposed 0:40:02 to the impression-based, advertising-based sort of thing. 0:40:11 And, tell me if I'm wrong, but also the sort of adjacent models, like using Tasty to build 0:40:17 a brand and then partnering with retailers to sell products related to that, right? 0:40:21 So it's sort of like new models like that, where you're inserting yourself into the kind of 0:40:22 the purchasing experience. 0:40:23 Yeah. 0:40:26 There's like a hundred-SKU line of Tasty products at Walmart. 0:40:30 So when you see a video where people are cooking, they're like, "Oh, I can actually make that 0:40:33 recipe and I can buy the pots and pans," you know, like connecting. 0:40:39 I feel like one sort of shift is just connecting the Internet to people's actual 0:40:43 lives and the things that they're doing every single day. 0:40:47 And I think that crypto and blockchain can help people do that in a way where there's 0:40:54 more trust and more security, and they're more a part of it and can build it as part 0:40:55 of a community. 0:40:59 All right, we're out of time. 0:41:04 I only asked the first question, though, so I don't know if we have a couple more hours, 0:41:07 but thank you, Chris, and I'm going to make that one. 0:41:08 Thank you, Jonah. 0:41:09 Thank you, Jonah. 0:41:09 Thank you, everyone. 0:41:12 (audience applauding)
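A quick technical aside on the HTTP 402 status code that comes up in the conversation above: 402 "Payment Required" really is reserved in the HTTP spec but was never fleshed out into a standard payment flow, which is the gap being described. The sketch below is only illustrative; the has_paid() check and the X-Payment-Token header are hypothetical stand-ins for whatever micropayment or crypto-receipt scheme might actually settle the payment, and it uses nothing but Python's standard library.

```python
# Minimal sketch of gating content behind HTTP 402 "Payment Required",
# using only Python's standard library. The X-Payment-Token header and the
# has_paid() check are hypothetical stand-ins for whatever micropayment or
# crypto receipt scheme would actually settle the payment.
from http.server import BaseHTTPRequestHandler, HTTPServer

def has_paid(headers):
    # Hypothetical check: a real scheme might verify a signed receipt
    # or an on-chain payment proof here.
    return headers.get("X-Payment-Token") == "example-receipt"

class PaywalledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if not has_paid(self.headers):
            self.send_response(402)  # Payment Required
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"402 Payment Required: include X-Payment-Token to read this.\n")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Here is the article your attention was paid for.\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PaywalledHandler).serve_forever()
```

Run it and request http://localhost:8000 with and without the header to see the two responses.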
How can we evolve the web for a better future? Has the web become a mature platform — or are we still in the early days of knowing what it can do and what role it might have in our lives? Just as “social/local/mobile” once did, what are the new trends — like crypto and blockchain networks and commerce everywhere — that might converge into new products and experiences?
Chris Dixon (general partner at a16z and co-lead of the a16z crypto fund) discusses all things internet with Jonah Peretti (founder and CEO of BuzzFeed). Their conversation ranges from the early days of the web to the way innovation happens (what Chris calls “outside-in vs inside-out”) to the promise of a community-owned and operated internet, and more.
Together they explore the possibilities that could co-evolve and converge as we enter the next era of the web, and they share how we might not be quite as far removed from the "wild west days" of the internet as we imagined.
0:00:02 Hi, and welcome to the A16Z podcast. 0:00:07 I'm Doss, and in this episode, Frank Chen interviews UC Berkeley Professor of Computer 0:00:09 Science, Stuart Russell. 0:00:13 Russell literally wrote the textbook for artificial intelligence that has been used to educate 0:00:16 an entire generation of AI researchers. 0:00:20 More recently, he's written a follow-up, Human Compatible: Artificial Intelligence and 0:00:22 the Problem of Control. 0:00:27 Their conversation covers everything from AI misclassification and bias problems, to 0:00:31 the questions of control and competence in these systems, to a potentially new and better 0:00:34 way to design AI. 0:00:38 But first, Russell begins by answering, "Where are we really when it comes to artificial 0:00:42 general intelligence, or AGI, beyond the scary picture of Skynet?" 0:00:48 Well, the Skynet metaphor is one that people often bring up, and I think, generally speaking, 0:00:49 Hollywood has got it wrong. 0:00:56 They always portray the risk as an intelligent machine that somehow becomes conscious, and 0:01:01 it's the consciousness that causes the machine to hate people and want to kill us all. 0:01:04 And this is just a mistake. 0:01:07 The problem is not consciousness, it's really competence. 0:01:13 And if you said, "Oh, by the way, your laptop's now conscious," it doesn't change the rules 0:01:14 of C++. 0:01:15 Right? 0:01:18 The software still runs exactly the way it was always going to run when you didn't think 0:01:19 it was conscious. 0:01:23 So, on the one hand, we have people like Elon Musk saying artificial general intelligence, 0:01:28 like that's a real possibility, it may be sooner than a lot of people think. 0:01:32 And on the other, you've got people like Andrew Ng who are saying, "Look, we're so far away 0:01:33 from AGI. 0:01:37 All of these questions seem premature, and I'm not going to worry about the downstream 0:01:43 effects of superintelligent systems until I worry about overpopulation on Mars." 0:01:45 So, what's your take on the debate? 0:01:51 Yeah, so he, in fact, upgraded that to overpopulation on Alpha Centauri. 0:01:56 So let's first of all talk about timelines and predictions for achieving human-level 0:01:59 or superhuman AI. 0:02:05 So Elon actually is reflecting advice that he's received from AI experts. 0:02:12 So some of the people, for example, at OpenAI, think that five years is a reasonable timeline, 0:02:18 and that the necessary steps mainly involve much bigger machines and much more data. 0:02:23 So no conceptual or computer science breakthroughs, just more compute, more storage, and we're 0:02:24 there. 0:02:25 Yeah. 0:02:30 So I really don't believe that; crudely speaking, the bigger and faster the computer, the 0:02:32 faster you get the wrong answer. 0:02:38 But I believe that we have several major conceptual breakthroughs that still have to happen. 0:02:42 We don't have anything resembling real understanding of natural language, which would be essential 0:02:48 for systems to then acquire the whole of human knowledge. 0:02:54 We don't have the capability to flexibly plan and make decisions over long time scales. 0:03:02 So we're very impressed by AlphaGo or AlphaZero's ability to think 60 or 100 moves ahead. 0:03:04 That's superhuman.
0:03:10 But if you apply that to a physical robot whose decision cycle is a millisecond, that 0:03:16 gets you a tenth of a second into the future, which is not very useful if what you're trying 0:03:21 to do is not just lay the table for dinner, but do it anywhere, in any house in any country 0:03:23 in the world, figure it out. 0:03:28 Laying the table for dinner is several million or tens of millions of motor control decisions. 0:03:33 And at the moment, the only way you can generate behavior on those timescales is actually to 0:03:39 have canned subroutines that humans have defined, pick up a fork. 0:03:43 Okay, I can train picking up a fork, but I've defined picking up a fork as a thing. 0:03:49 So machines right now are reliant on us to supply that hierarchical structure of behavior. 0:03:54 When we figure out how they can invent that for themselves as they go along and invent 0:04:00 new kinds of things to do that we've never thought of, that will be a huge step towards 0:04:01 real AI. 0:04:05 As we march towards general intelligence, this literal ability to think outside the 0:04:09 box will be one of the hallmarks, I think, we look for. 0:04:14 If you think about what we're doing now, we're trying to write down human objectives. 0:04:19 It's just that we tend, because we have very stupid systems, they only operate in these 0:04:21 very limited contexts, like a Go board. 0:04:26 And on the Go board, a natural objective is win the game. 0:04:31 If AlphaGo was really smart, even if you said win the game, well, I can tell you, here's 0:04:35 what chess players do when they're trying to win the game. 0:04:41 They go outside the game, and a more intelligent AlphaGo would realize, okay, well, I'm playing 0:04:43 against some other entity. 0:04:44 What is it? 0:04:45 Where is it? 0:04:51 There must be some other part of the universe besides my own processor and this Go board. 0:04:56 And then it figures out how to break out of its little world and start communicating. 0:05:01 Maybe it starts drawing patterns on the Go board with Go pieces to try and figure out 0:05:05 visual language it can use to communicate with these other entities. 0:05:10 Now, how long do these kinds of breakthroughs take? 0:05:16 Well, if you look back at nuclear energy, for the early part of the 20th century, when 0:05:23 we knew that nuclear energy existed, so from E equals mc squared in 1905, we could measure 0:05:26 the mass differences between different atoms. 0:05:31 We knew what their components were, and they also knew that radium could emit vast quantities 0:05:32 of energy over a very long period. 0:05:36 So they knew that there was this massive store of energy. 0:05:42 But mainstream physicists were adamant that it was impossible to ever release it. 0:05:43 To harness it in some way. 0:05:48 So there was a famous speech that Lord Rutherford gave, and he was the man who split the atom, 0:05:51 so he's like the leading nuclear physicist of his time. 0:05:54 And that was September 11th, 1933. 0:06:01 And he said that the possibility of extracting energy by the transmutation of atoms is moonshine. 0:06:04 But the question was, is there any prospect in the next 25 or 30 years? 0:06:07 So he said, no, it's impossible. 0:06:11 And then the next morning, Leo Szilard actually read a report of that in The Times and went 0:06:16 for a walk and invented the nuclear chain reaction based on neutrons, which people hadn't 0:06:17 thought of before. 0:06:18 And that was a conceptual breakthrough.
0:06:22 You went from impossible to now it's just an engineering challenge. 0:06:24 So we need more than one breakthrough, right? 0:06:30 It takes time to sort of ingest each new breakthrough and then build on that to get to the next 0:06:31 one. 0:06:36 So the average AI researcher thinks that we will achieve superhuman AI sometime around 0:06:38 the middle of this century. 0:06:41 So my personal belief is actually more conservative. 0:06:47 One point is, we don't know how long it's going to take to solve the problem of control. 0:06:52 If you ask the typical AI researcher, okay, how are we going to control machines that 0:06:56 are more intelligent than us, the answer is basically, that beats me. 0:07:02 So you've got this multi-hundred-billion-dollar research enterprise with tens of thousands 0:07:08 of brilliant scientists all pushing towards a long-term goal where they have absolutely 0:07:11 no idea what to do if they get there. 0:07:15 So coming back to Andrew Ng's prediction, the analogy just doesn't work. 0:07:22 If you said, okay, the entire scientific establishment on Earth is pushing towards a migration of 0:07:26 the human race to Mars, and they haven't thought about what we're going to breathe when we 0:07:30 get there, you'd say, well, that's clearly insane. 0:07:31 Yeah. 0:07:34 And that's why you're arguing we need to solve the control problem now, or at least 0:07:37 the right design approaches to solving this control problem. 0:07:43 It's clear that the current formulation, the standard model of AI as "build machines that 0:07:45 optimize fixed objectives," is wrong. 0:07:53 We've known this principle for thousands of years: be careful what you wish for. 0:07:57 King Midas wished for everything he touched to turn to gold. 0:08:02 That was the objective he gave to the machine, which happened to be the gods, and the gods 0:08:06 gave him his objective, and then his food, and his drink, and his family all 0:08:09 turned to gold, and then he dies in misery. 0:08:16 And so we've known this for thousands of years, and yet we built the field of AI around 0:08:22 this definition of machines that carry out plans to achieve objectives that we put into 0:08:23 them. 0:08:32 It works if and only if we are able to completely, perfectly specify the objective. 0:08:38 So the guidance is don't put fixed objectives into machines, but build machines in a way 0:08:44 that acknowledges the uncertainty about what the true objective is. 0:08:51 For example, take a very simple machine learning task, learning to label objects and images. 0:08:52 So what should the objective be?
0:09:53 If you do a search on Google Images for CEO, I think it was one of the women's magazines 0:10:00 that pointed out that the first female CEO appears on the 12th row of photographs and turns out 0:10:02 to be Barbie. 0:10:06 So if accuracy isn't the right metric, what are the design paths that you're suggesting 0:10:08 we optimize for? 0:10:13 If you're going to have that image-labeling system take action in the real world, and posting 0:10:16 a label on the web is an action in the real world, 0:10:20 then you have to ask, "What's the cost of misclassification?" 0:10:27 And when you think, "Okay, so ImageNet has 20,000 categories, and so there are 400 million, 0:10:32 or 20,000 squared, different ways of misclassifying one object as another." 0:10:38 So now you've got 400 million unknown costs, and obviously you can't specify a joint distribution 0:10:41 over 400 million numbers one by one. 0:10:42 It's far too big. 0:10:47 So you might have some general guidelines: that misclassifying one type of flower as 0:10:53 another is not very expensive, while misclassifying a person as an inanimate object is going 0:10:54 to be more expensive. 0:10:59 But generally speaking, you have to operate under uncertainty about what the costs are. 0:11:01 And then how does the algorithm work? 0:11:07 One of the things it should do, actually, is refuse to classify certain photographs, 0:11:13 saying, "I'm not sure enough about what the cost of misclassification might be, so I'm 0:11:15 not going to classify it." 0:11:18 So that's definitely a divergence from state-of-the-art today, right? 0:11:23 State-of-the-art today is you're going to assign some class to it, right? 0:11:26 That's a dog, or a Yorkshire terrier, or a pedestrian, or a tree. 0:11:30 And then the algorithms can say, "I'm really sure," or, "I'm not really sure," and then 0:11:31 a human decides. 0:11:36 You're saying something different, which is, "I don't understand the cost of uncertainty, 0:11:40 so therefore I'm not even going to give you a classification or a confidence interval 0:11:41 on the classification." 0:11:42 Like, I shouldn't. 0:11:43 It's irresponsible for me. 0:11:48 So I could give confidence intervals and probability, but that wouldn't be what image-labeling 0:11:50 systems typically do. 0:11:53 They're expected to plump for one label. 0:11:58 And the argument would be, if you don't know the costs of plumping for one label or another, 0:12:01 then you probably shouldn't be plumping, right? 0:12:08 And I read that Google Photos won't label gorillas anymore. 0:12:11 So you can give it a picture that's perfectly, obviously, a gorilla, and it'll say, "I'm 0:12:13 not sure what I'm seeing here." 0:12:19 And so how do we make progress on designing systems that can factor in this context, sort 0:12:22 of understanding the uncertainty, characterizing the uncertainty? 0:12:24 So there's sort of two parts to it. 0:12:30 One is, how does the machine behave, given that it's going to have radical levels of 0:12:34 uncertainty about many aspects of our preference structure? 0:12:38 And then the second question is, how does it learn more about our preference structure? 0:12:44 As soon as the robot believes that it has absolute certainty about the objective, it 0:12:47 no longer has a reason to ask permission.
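An aside on the decision rule Russell is sketching here: it is essentially cost-sensitive classification with a reject option, where the predicted label minimizes expected misclassification cost and the system abstains when even the best option is too risky. The labels, posterior probabilities, cost numbers, and threshold below are made-up illustrations, not anything from a production vision system.

```python
# Minimal sketch of cost-sensitive classification with a reject option.
# The labels, posterior probabilities, cost matrix, and threshold are all
# illustrative assumptions: the chosen label minimizes expected
# misclassification cost, and the system abstains when even the best
# label is too risky.
import numpy as np

labels = ["yorkshire_terrier_a", "yorkshire_terrier_b", "person", "gorilla"]

# cost[i][j] = cost of predicting labels[j] when the truth is labels[i].
# Confusing two terrier breeds is cheap; labeling a person as a gorilla
# (or as any non-person) is treated as far more costly.
cost = np.array([
    [0.0,   1.0,   5.0,   5.0],   # true: terrier A
    [1.0,   0.0,   5.0,   5.0],   # true: terrier B
    [500.0, 500.0, 0.0,   1e6],   # true: person
    [5.0,   5.0,   50.0,  0.0],   # true: gorilla
])

REJECT_THRESHOLD = 10.0  # abstain if even the best label's expected cost exceeds this

def decide(posterior):
    """posterior[i] = model's probability that the true label is labels[i]."""
    expected_costs = posterior @ cost        # expected cost of predicting each label
    best = int(np.argmin(expected_costs))
    if expected_costs[best] > REJECT_THRESHOLD:
        return "REJECT: not sure enough about the cost of being wrong"
    return labels[best]

print(decide(np.array([0.90, 0.10, 0.00, 0.00])))  # confident terrier -> labels it
print(decide(np.array([0.00, 0.00, 0.05, 0.95])))  # 5% chance of "person" -> abstains
```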
0:12:52 And in fact, if it believes that the human is even slightly irrational, which of course 0:12:59 we are, then it would resist any attempt by the human to interfere or to switch it off, 0:13:06 because the only consequence of human interference in that case would be a lower degree of achievement 0:13:08 of the objective. 0:13:14 So you get this behavior where a machine with a fixed objective will disable its own off-switch 0:13:21 to prevent interference with what it is sure is the correct way to go forward. 0:13:28 And so we want a very high threshold on confidence that it’s understood what my real preference 0:13:29 or desire is. 0:13:34 Well, I actually think it’s in general not going to be possible for the machine to have 0:13:38 high confidence that it’s understood your entire preference structure. 0:13:43 You may understand aspects of it, and if it can satisfy those aspects without messing 0:13:49 with the other parts of the world that it doesn’t know what you want, then that’s good. 0:13:55 But there are always going to be things that it never occurs to you to write down. 0:14:00 So I can see how this design approach would lead to much safer systems because you have 0:14:02 to factor in the uncertainty. 0:14:07 I can also imagine sort of a practitioner today sitting in their seat going, “Wow, that 0:14:08 is so complex. 0:14:10 I don’t know how to make progress. 0:14:14 So what do you say to somebody who’s now thinking, “Wow, I thought my problem was X-hard, but 0:14:17 it’s really 10X or 100X or 1000X-hard?” 0:14:26 So interestingly, the safe behaviors fall out as solutions of a mathematical game with 0:14:27 a robot and a human. 0:14:31 In some sense, they’re cooperating because they both have the same objective, which is 0:14:36 whatever it is the human wants, just that the robot doesn’t know what that is. 0:14:43 So if you formulate that as a mathematical game and you solve it, then the solution exhibits 0:14:49 these desirable characteristics that you want, namely deferring to the human, allowing yourself 0:14:55 to be switched off, asking permission, only doing minimally invasive things. 0:15:00 We’ve seen, for example, in the context of self-driving cars, that when you formulate 0:15:07 things this way, the car actually invents for itself protocols for behaving in traffic 0:15:09 that are quite helpful. 0:15:14 For example, one of the constant problems with self-driving cars is how they behave 0:15:19 at four-way stop signs, because they’re never quite sure who’s going to go first and they 0:15:21 don’t want to cause an accident. 0:15:25 They’re optimized for safety, so they’ll end up stuck at that four-way intersection. 0:15:30 So they’re stuck and everyone ends up pulling around them, and it will probably cause accidents 0:15:32 rather than reducing accidents. 0:15:37 So what the algorithm figured out was that if it got to the stop sign and it was unclear 0:15:43 who should go first, it would back up a little bit, and that’s a way of signaling to the 0:15:48 other driver that it has no intention of going first and therefore they should go. 0:15:54 That falls out as a solution of this game theoretic design for the problem. 0:15:57 Let’s go to another area where machine learning is often being used. 0:16:03 I’m about to make a loan to an individual, and so they’ve taken all this data, they figure 0:16:06 out your credit worthiness, and they say loan or not. 0:16:12 How would game theory inside loan decision making be different than traditional methods? 
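Another aside, on the off-switch behavior just described: it falls out of a simple expected-utility comparison. In the toy sketch below, U is the value the human places on the robot's proposed action; the Gaussian prior standing in for the robot's uncertainty is an assumption for illustration, not the exact formulation from Russell's papers.

```python
# Toy sketch of the "off-switch" argument: a robot that is uncertain about
# the human's utility U for its proposed action does better by deferring
# (letting the human veto) than by either acting unilaterally or switching
# itself off. The Gaussian prior and its parameters are illustrative assumptions.
import random

random.seed(0)
belief = [random.gauss(0.2, 1.0) for _ in range(100_000)]  # samples from the robot's belief about U

act_now    = sum(belief) / len(belief)                        # E[U]: just take the action
switch_off = 0.0                                              # do nothing at all
defer      = sum(max(u, 0.0) for u in belief) / len(belief)   # E[max(U, 0)]: propose the action,
                                                              # human allows it only when U > 0

print(f"act now:    {act_now:.3f}")
print(f"switch off: {switch_off:.3f}")
print(f"defer:      {defer:.3f}")

# defer >= max(act_now, switch_off), and strictly greater whenever the robot is
# genuinely unsure whether U is positive. If the robot were certain of U
# (a fixed objective), deferring adds nothing -- which is why a machine with
# a fixed objective has no incentive to leave its off-switch enabled.
```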
0:16:20 So what happens with traditional methods is that they make decisions based on past data, 0:16:27 and a lot of that past data reflects biases that are inherent in the way society works. 0:16:32 So if you just look at historical data, you might end up making decisions that discriminate 0:16:37 in effect against groups that have previously been discriminated against, because that prior 0:16:44 discrimination resulted in lower loan performance, and so you end up actually just perpetuating 0:16:48 the negative consequences of social biases. 0:16:53 So loan underwriting in particular has to be inspectable, and the regulators have to 0:16:59 be able to verify that you're making decisions on criteria that neither mention race nor 0:17:00 are proxies for race. 0:17:06 So the principles of those regulations need to be expanded to a lot of other areas. 0:17:14 For example, data seems to be suggesting that the job ads that people see online are extremely 0:17:16 biased by race. 0:17:22 If you're just trying to fit historical data and maximize predictive accuracy, you're missing 0:17:27 out on these other objectives about fairness at the individual level and the social level. 0:17:31 So economists call this the problem of externality. 0:17:37 And so pollution is the classic example, where a company can make more money by just dumping 0:17:43 pollution into rivers and oceans and the atmosphere rather than treating it or changing its processes 0:17:45 to generate less pollution. 0:17:47 So it's imposing costs on everybody else. 0:17:51 The way you fix that is by fines or tax penalties. 0:17:55 You create a price for something that doesn't have a price. 0:18:01 Now the difficulty, and this is also true with the way social media content selection 0:18:06 algorithms have worked, is that it's, I think, very hard to put a price on this. 0:18:12 And so the regulators dealing with loan underwriting have not put a price on it. 0:18:16 They put a rule on it saying you cannot do things that way. 0:18:21 So let's take making a recommendation at an e-commerce site, for here's a product that 0:18:26 you might like. How would we do that differently by baking in game theory? 0:18:32 So the primary issue with recommendations is understanding user preferences. 0:18:37 One of the problems I remember is with a company that sends you a coupon to buy a vacuum cleaner, 0:18:40 and you buy a vacuum cleaner, great. 0:18:44 So now it knows you really like vacuum cleaners, and it keeps sending you coupons for vacuum cleaners. 0:18:48 But of course you just bought a vacuum cleaner, so you've no interest in getting another vacuum 0:18:49 cleaner. 0:18:57 So just this distinction between consumable things and non-consumable things is really 0:19:00 important when you want to make recommendations. 0:19:08 And I think you need to come to the problem for an individual user with a reasonably rich 0:19:14 prior set of beliefs about what that user might like based on demographic characteristics. 0:19:19 How do you then adapt that and update it with respect to the decisions that the user makes 0:19:25 about what products to look at, which coupons they cash in, which ones they don't, and so 0:19:26 on? 0:19:32 And one of the things that you might see falling out would be that the recommendation system 0:19:35 might actually ask you a question: 0:19:39 I've noticed that you've shown no interest in all these kinds of products. 0:19:41 Are you in fact a vegetarian?
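One more aside, on the vacuum-cleaner problem: the consumable versus durable distinction Russell mentions is easy to encode. The catalog, replenishment windows, and purchase history below are hypothetical; the point is only that a durable good drops out of the recommendations once bought, while a consumable comes back after its replenishment window.

```python
# Minimal sketch of the consumable vs. durable distinction in recommendations.
# The catalog, replenishment windows, and purchase history are illustrative
# assumptions, not any real recommender system.
from datetime import date, timedelta

CATALOG = {
    "vacuum_cleaner": {"consumable": False},
    "coffee_beans":   {"consumable": True, "replenish_days": 14},
    "veggie_burgers": {"consumable": True, "replenish_days": 7},
}

def recommend(purchases, today):
    """purchases maps item name -> date it was last bought."""
    recs = []
    for item, info in CATALOG.items():
        last = purchases.get(item)
        if last is None:
            recs.append(item)                       # never bought: fair game
        elif info["consumable"]:
            if today - last >= timedelta(days=info["replenish_days"]):
                recs.append(item)                   # probably running low again
        # durable and already bought: stop pushing vacuum-cleaner coupons
    return recs

print(recommend({"vacuum_cleaner": date(2019, 11, 1),
                 "coffee_beans": date(2019, 11, 1)}, today=date(2019, 12, 1)))
# -> ['coffee_beans', 'veggie_burgers']
```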
0:19:45 As you look back at your own career in this space, are you surprised that the field is 0:19:47 where it is? 0:19:54 Ten years ago, I would have been surprised to see that speech recognition is now just 0:20:00 a commodity that everyone is using on their cell phones across the entire world. 0:20:03 When I was an undergrad, they said, "We definitely have to solve the Turing test before we're 0:20:06 going to get speaker-independent natural language." 0:20:13 And I worked on self-driving cars in the early 90s, and it was pretty clear that the perception 0:20:15 capabilities were the real bottleneck. 0:20:21 The system would detect about 99% of the other cars, so every 100th car, you just wouldn't 0:20:22 see it. 0:20:23 So these are things that are coming true. 0:20:26 They were sort of holy grails. 0:20:32 It's interesting that even though they achieve superhuman performance on these testbed datasets, 0:20:38 there are still these adversarial examples that show that actually it's not seeing things 0:20:40 the same way that humans are seeing things. 0:20:42 Definitely making different mistakes than we make. 0:20:45 And so it's fragile in ways that we don't understand. 0:20:53 For example, OpenAI has a system with simulated humanoid robots that learn to play soccer. 0:20:54 One learns to be the goalkeeper. 0:20:58 The other one learns to take penalties, and it looks great. 0:21:00 This was a big success. 0:21:04 They basically said, "Okay, can we get adversarial behavior from the goalkeeper?" 0:21:10 So the goalkeeper basically falls down on the ground immediately, waggles its leg in 0:21:17 the air, and the penalty taker, when it goes to kick the ball, just completely falls apart, right? 0:21:18 It doesn't know how to respond to that, 0:21:21 and never actually gets around to kicking the ball at all. 0:21:24 I don't know whether it's laughing, like, "Oh, you can't kick the ball, or what?" 0:21:31 So it's not that just because we have superhuman performance on some nicely curated data set, 0:21:35 we actually have superhuman vision or superhuman motor control learning. 0:21:38 Are you optimistic about the direction for the field? 0:21:43 So one reason I'm optimistic is that as we see more and more of these failures of the 0:21:47 standard model, people will say, "Oh, well, clearly we need to build these systems this 0:21:54 other way, because that sort of gives us guarantees that it won't do anything rash, it'll ask permission, 0:22:00 it will adapt to the user gradually, and it'll only start taking bigger steps when it's reasonably 0:22:03 sure that that's what the user wants." 0:22:08 I think there are reasons for pessimism as well, in misuse, for surveillance, misinformation. 0:22:15 I mean, there's more awareness of it, but there's nothing concrete being done about that, 0:22:22 with a few honorable exceptions like San Francisco's ban on face recognition in public spaces, 0:22:28 and California's ban actually on impersonation of humans by AI systems. 0:22:29 The deepfakes? 0:22:34 Not just deepfakes, but, for example, robo-calls where I'm pretending to schedule a haircut 0:22:38 appointment and I didn't self-identify as an AI. 0:22:44 So Google has now said they're going to have their machine self-identify as an AI. 0:22:46 It's a relatively simple thing to comply with. 0:22:50 It doesn't have any great economic cost, but I think it's a really important step that 0:22:54 should be rolled out globally.
0:22:59 That principle of not impersonating human beings is a fundamentally important principle. 0:23:04 Another really important principle is don't make machines that decide to kill people. 0:23:06 A ban on offensive weapons. 0:23:12 Sounds pretty straightforward, but again, although there's much greater awareness of 0:23:20 this, there are no concrete steps being taken, and countries are now moving ahead with this 0:23:21 technology. 0:23:30 So I just last week found out that a Turkish defense company is selling an autonomous quadcopter 0:23:37 with a kilogram of explosive that uses face recognition and tracking of humans and is 0:23:40 sold as an anti-personnel weapon. 0:23:45 So we made a movie called Slaughterbots to illustrate this concept. 0:23:48 We've had more than 75 million views. 0:23:52 So to bring this home to people who are sitting at their desks working on machine learning 0:23:57 systems, if you could give them a piece of advice on what they should be doing, what 0:24:00 should they be doing differently, having heard this podcast, that they might not have been 0:24:01 thinking about? 0:24:05 So for some applications, it probably isn't going to change very much. 0:24:10 One of my favorite applications of machine learning and computer vision is the Japanese 0:24:18 cucumber farmer who downloaded some software and trained a system to pick out bad cucumbers 0:24:19 from his… 0:24:23 And sorts them into the grades, the Japanese are very fastidious about the grades of produce, 0:24:26 and he did it so inexpensively. 0:24:32 So that's a nice example, and it's not clear to me that there's any particular way you 0:24:33 might change that because it's… 0:24:35 No game theory really needed for it. 0:24:36 It's a very… 0:24:39 I mean, in some sense, it's a system that has a very, very limited scope of action, which 0:24:43 is just to sort cucumbers. 0:24:47 The sorting is not public, and there's no danger that it's going to label a cucumber as a person 0:24:48 or anything like that. 0:24:54 But in general, you want to think about, first of all, what is the effect of the system that 0:24:56 I'm building on the world, right? 0:25:03 And it's not just that it accurately classifies cucumbers or photographs. 0:25:08 It's that of course people will buy the cucumbers or people will see the photographs, and what 0:25:11 effect does it have? 0:25:17 And so often when you're defining these objectives for a machine learning algorithm, they're 0:25:25 going to leave out effects that the resulting algorithm is going to have on the real world. 0:25:31 And so can you fold those other effects back into the objective rather than just optimizing 0:25:38 some narrow subset like click-through, for example, which could have extremely bad external 0:25:39 effects? 0:25:45 So the model, if you sort of want to anthropomorphize the model, you would rather 0:25:50 have the perfect butler than the genie in the lamp. 0:25:51 Right. 0:25:53 All-powerful, kind of unpredictable genie. 0:25:54 Right. 0:25:57 And very literal-minded about this is the objective. 0:25:58 Right. 0:25:59 Awesome. 0:26:01 Well, Stuart, thanks so much for joining us on the a16z podcast. 0:26:02 Okay. 0:26:03 Thank you, Frank.
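A final aside on this episode: "folding those other effects back into the objective" amounts to pricing an estimated externality into the score you optimize, the same move as the pollution tax discussed earlier in the conversation. The items, click-through rates, harm estimates, and the price lambda below are made-up numbers for illustration only, not any production ranking system.

```python
# Toy sketch of folding an externality estimate into a ranking objective,
# instead of optimizing raw click-through alone. The items, click-through
# rates, harm estimates, and the price LAMBDA are all illustrative assumptions.
LAMBDA = 0.5  # "price" placed on a unit of estimated external harm

candidates = [
    # (item, predicted click-through, estimated external cost, e.g. misinformation risk)
    ("celebrity_gossip",     0.30, 0.05),
    ("outrage_bait",         0.45, 0.80),
    ("local_news_explainer", 0.25, 0.02),
]

def score(ctr, external_cost):
    # The narrow objective would be just `ctr`; here the externality is priced in.
    return ctr - LAMBDA * external_cost

ranked = sorted(candidates, key=lambda c: score(c[1], c[2]), reverse=True)
for item, ctr, harm in ranked:
    print(f"{item:22s} ctr={ctr:.2f} harm={harm:.2f} score={score(ctr, harm):+.2f}")
# outrage_bait wins on raw click-through (0.45) but ranks last once its
# estimated external cost is priced in -- the pollution-tax idea from above.
```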
AI can do a lot of specific tasks as well as, or even better than, humans can — for example, it can more accurately classify images, more efficiently process mail, and more logically manipulate a Go board. While we have made a lot of advances in task-specific AI, how far are we from artificial general intelligence (AGI), that is, AI that matches general human intelligence and capabilities?
In this podcast, a16z operating partner Frank Chen interviews Stuart Russell, the founder of the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. They outline the conceptual breakthroughs, like natural language understanding, still required for AGI. But more importantly, they explain how and why we should design AI systems to ensure that we can control AI, and eventually AGI, when it's smarter than we are. The conversation starts by explaining what Hollywood's Skynet gets wrong and ends with why AI is better as "the perfect butler" than "the genie in the lamp."
0:00:03 Hi, and welcome to the a16z podcast. 0:00:04 I'm Hannah. 0:00:06 The federal agency known as the FDA, 0:00:07 or the Food and Drug Administration, 0:00:09 was born over 100 years ago 0:00:12 at the turn of the Industrial Revolution 0:00:14 in a time of enormous upheaval and change 0:00:16 and rapidly emerging technology. 0:00:19 All of those things could be said to be just as true today. 0:00:23 From CRISPR to synthetic biology to using AI in medicine, 0:00:24 our healthcare system is undergoing 0:00:27 massive amounts of innovation and change. 0:00:28 This wide-ranging conversation 0:00:31 between Principal Deputy Commissioner of the FDA, 0:00:32 Amy Abernethy, and Vijay Pande, 0:00:34 general partner at a16z, 0:00:38 took place at a16z's annual summit in 2019, 0:00:41 and covers everything from gene editing your dog 0:00:43 to tracking the next foodborne outbreak, 0:00:45 how advances in bioengineering 0:00:47 are transforming healthcare, clinical trials, 0:00:48 and drug development, 0:00:50 and how the federal agency is evolving 0:00:53 to keep pace with the scientific breakthroughs coming 0:00:55 while staying true to its core mission 0:00:58 of assessing safety and effectiveness for consumers 0:01:00 in the world of food and medicine. 0:01:02 – So thank you so much for joining us. 0:01:04 – Terrific to be here, hello. 0:01:06 – So, you know, in thinking about how I start this, 0:01:08 I was thinking about the origins of the FDA. 0:01:13 So the FDA started in 1906, 113 years ago. 0:01:17 And it's interesting to think about that time 0:01:20 because, you know, it's the turn of the previous century, 0:01:22 a time of a lot of tumult, innovation, 0:01:26 technical change, an Industrial Revolution, 0:01:28 you know, things that actually really were 0:01:31 the driving forces to create the FDA. 0:01:33 And like, here we are, another turn of the century, 0:01:34 another Industrial Revolution, 0:01:36 another amount of tumultuous change. 0:01:39 You know, what are the needs of the FDA right now? 0:01:42 And, you know, is its core mission really still relevant? 0:01:44 – Is the FDA still relevant? 0:01:45 – Yeah. 0:01:48 – So, I'm gonna go back that 113 years 0:01:50 to the time the FDA was formed, 0:01:53 and it continues to be the largest consumer protection agency; 0:01:55 it was formed out of a hundred laws. 0:01:57 I think that the issue that was going on at the time 0:02:00 was unhygienic conditions in the Chicago stockyards. 0:02:03 And you can imagine, there have been a lot of responsibilities 0:02:06 of the FDA over time, thalidomide, et cetera. 0:02:09 But practically speaking, the FDA is responsible 0:02:12 as a science-based agency to protect 0:02:15 and promote public health, including through making sure 0:02:18 that we have safe and effective medical products 0:02:20 to use every day with our patients, 0:02:22 as well as through promoting innovation. 0:02:24 – You know, your question was 0:02:26 whether or not the FDA is still relevant. 0:02:27 – Yes. 0:02:28 – And I would argue that in a time 0:02:31 of rapidly emerging biology, 0:02:34 when we've got more and more scientific innovations 0:02:37 and potential products coming to bear, 0:02:40 the need to make sure that we have an objective way 0:02:42 of assessing safety and effectiveness 0:02:45 and providing consumer confidence 0:02:48 that this treatment is appropriate for me 0:02:50 means that the FDA 0:02:52 actually has more responsibility, not less.
0:02:53 – Well, so, you know, in that context, 0:02:55 let's talk about what the FDA looks like today. 0:02:57 In those older days, you know, 0:03:00 the data came to the FDA by the truckload. 0:03:02 You know, I'm just imagining, like, 0:03:03 reams and reams of paper and so on. 0:03:05 And I'm just curious to get your take 0:03:08 on, you know, what does the current system look like? 0:03:10 And could you take us through, you know, 0:03:11 how this works? 0:03:12 – So, you know, a couple of things. 0:03:13 I actually think that there was a time 0:03:16 when it probably came on horse and buggy, not just trucks. 0:03:17 – Yes, yes, yes. 0:03:19 – So, practically speaking, 0:03:21 I usually think about five key elements 0:03:24 in drug and biologic product development. 0:03:26 So, there's the discovery phase, 0:03:28 then there's a time of preclinical development. 0:03:31 After you've done adequate preclinical development 0:03:33 in line with good laboratory practice, 0:03:36 then you'd submit an investigational new drug application 0:03:39 for a drug or a biologic to the FDA, 0:03:41 which gives permission then to start 0:03:44 clinical studies with people. 0:03:46 And a drug or biologic will go through 0:03:47 a series of clinical studies, 0:03:49 typically phase one through three, 0:03:51 although these days those lines are blurring 0:03:56 for exactly what phases drugs follow, followed then by a new drug application 0:03:59 or a biologics license application, a BLA, to the FDA. 0:04:02 And then the final stage after FDA marketing approval 0:04:03 would be post-marketing assessment 0:04:05 and continuous surveillance 0:04:07 of this particular medical product. 0:04:09 So, that's kind of the usual steps. 0:04:10 – Given that, you know, 0:04:13 what would you want to modernize about it? 0:04:16 You know, how do you take us into the next 100 years? 0:04:18 – Yeah, so, you know, I think that goes back 0:04:19 to the horse and buggy and the trucks 0:04:22 and how information has historically gotten to the FDA. 0:04:26 So, I came from the landscape 0:04:29 of a health tech startup company focused on data. 0:04:32 I was recruited to FDA with the expectation 0:04:35 that I was gonna show up and focus specifically 0:04:38 on digital innovation for the FDA, 0:04:43 and sort of went smashing into a realization 0:04:45 that in fact, the way the majority of our applications 0:04:49 still come in is through PDFs, 0:04:53 or sort of essentially large digital representations 0:04:56 of what used to come in on trucks. 0:04:59 We now receive most of our drug applications, 0:05:03 but not all, in some kind of electronic format. 0:05:05 As a matter of fact, just two weeks ago, 0:05:09 I had to approve a book of work in orphan diseases 0:05:11 where we're still getting things on paper, as one example. 0:05:15 But most things come in as a digital application, 0:05:18 also with some digital data that gets stored at FDA. 0:05:22 And as I think about where we're going in the future, 0:05:26 practically speaking, in order to become a more efficient FDA, 0:05:28 we're gonna need to receive more and more 0:05:31 of those applications in digital formats 0:05:33 that start to represent structured data, 0:05:36 structured data that we can review at scale 0:05:38 and also structured data that allows us 0:05:41 to now continuously surveil medical products 0:05:42 in a better way in the future.
0:05:45 And so what I realized was that if I was going to be 0:05:47 a person focused on data at FDA, 0:05:50 I was gonna need to also think about how do we modernize 0:05:53 the FDA's underlying infrastructure to take that forward. 0:05:55 And that's why I took on the CIO job. 0:05:56 – Yeah, in addition to the infrastructure, 0:05:59 it's interesting to think about the culture. 0:06:03 Because, for example, if there's a great drug 0:06:07 that nobody ever gets, nobody ever knows about it. 0:06:08 Let's say there was a cure to cancer 0:06:10 but the FDA didn't approve it. 0:06:13 There's no outcry because no one ever knew about it. 0:06:16 But on the other hand, if the FDA lets something through 0:06:18 that actually has harmful effects, 0:06:20 thalidomide and other classic examples, 0:06:23 then there's a huge backlash. 0:06:26 How do you build and how do you innovate in a culture 0:06:29 that has to deal with such strong asymmetries? 0:06:32 – So it's interesting as you describe those asymmetries, 0:06:35 what you're really describing is the practical reality 0:06:38 that that can make you very risk averse, right? 0:06:39 – Yes, exactly. 0:06:41 I'm really worried that I'm gonna do something wrong 0:06:43 and I'm gonna flub up, and that's actually gonna have 0:06:47 very public impact, and we see that all the time. 0:06:52 I think that in order to develop solutions 0:06:55 for regulatory innovation, then what you really have to do 0:06:57 is come up with flexible mechanisms 0:07:00 that also allow us to deal with the risks 0:07:03 but also take some risks when appropriate. 0:07:06 And so that means that as FDA, 0:07:08 one of the things that we focus on 0:07:11 is risk-based scientific decision-making 0:07:15 and trying to right-size the degree of review 0:07:18 and expectation with the potential risk 0:07:20 of this particular product. 0:07:22 And what do I mean by risk? 0:07:23 Sometimes there's safety risks, right? 0:07:26 So the risk of, for example, hepatic failure, 0:07:29 or the risk that the drug might take a person's life. 0:07:32 Sometimes the risk is the size of the population impacted. 0:07:34 So you're trying to balance the urgency 0:07:36 of this particular problem sitting in front of you 0:07:40 with the number of people this product may impact. 0:07:45 Also, there's risks of public perception and expectation, 0:07:48 and then the last sort of set of risks is, 0:07:49 where can you de-risk it? 0:07:50 – Exactly. 0:07:53 – So you can de-risk it by trying to make sure 0:07:55 that, for example, preconditions are met 0:07:57 as it relates to the manufacturing process. 0:07:59 You can de-risk it by understanding 0:08:01 in a consistent way toxicity. 0:08:04 You can de-risk it by having consistent expectations 0:08:06 around clinical effectiveness. 0:08:10 – So I think what's interesting is to think about that 0:08:12 in even the other context of what the FDA does. 0:08:15 I think people often don't realize that the FDA 0:08:17 isn't just about, let's say, approving drugs. 0:08:18 We think about clinical trials. 0:08:20 There's a lot of things that you do 0:08:23 to protect American consumers. 0:08:25 And when we're talking about it, it's almost like, 0:08:28 I feel like you could have a show that's like CSI: FDA 0:08:31 or something like that, where you have these investigations 0:08:32 of the crises. 0:08:35 Like, you know, sometimes it's slow-moving crises, 0:08:36 like the opioid epidemic.
0:08:39 How do you, like for a crisis like that, 0:08:43 where it slowly sneaks up on us and then it's too late, 0:08:45 you know, how does the FDA even think about that? 0:08:48 – Practically speaking, the FDA is responsible 0:08:49 for many types of medical products. 0:08:53 And any one of those can have a crisis, I've discovered. 0:08:55 So we have food and drugs. 0:08:57 We have biologics and devices. 0:08:59 We have animal food and drugs. 0:09:00 We have cosmetics. 0:09:03 We have nicotine-based products, vapes. 0:09:07 And so the distribution of potential crises is real. 0:09:11 And as you think about something like the opioid crisis, 0:09:16 a problem that snuck up on us in many different ways. 0:09:21 And as I step back, and I came to the FDA in March, 0:09:23 so I've had sort of the opportunity to watch this 0:09:26 as an insider/outsider during this period of time. 0:09:29 You know, as information starts to accumulate 0:09:32 that says we've got a really big problem here, 0:09:34 that information comes from the public. 0:09:37 It comes from across different places in government. 0:09:39 And now we need to step back as a nation, 0:09:41 but also as an agency, and say, what do we formally do? 0:09:45 And so as an agency, we put in place an action plan 0:09:48 that had several parts that focus 0:09:49 on what we're responsible for. 0:09:52 And I'm gonna come back to that key point in just a second. 0:09:55 But then also asked, how does that interdigitate 0:09:57 with all the other plans that are going on 0:10:00 across government and also across the healthcare setting 0:10:02 to try and solve for it? 0:10:05 So I'll say the last piece that I've found very interesting 0:10:09 since I've been in government is that there are very clear 0:10:11 rules of the road of our authorities. 0:10:13 It took me a while to get used to that word, 0:10:15 but the word would be authorities. 0:10:16 – Authorities, yes, yes. 0:10:19 – Yeah, so our areas of responsibility 0:10:21 and sort of legitimate jurisdiction. 0:10:25 And practically speaking, with respect to the opioid crisis, 0:10:28 we need to go back and say, where as FDA 0:10:32 do we have authority to try and help resolve this problem? 0:10:36 So practically speaking, we can help reduce 0:10:39 the number of opioid tablets, for example, 0:10:42 a patient has access to after back surgery or knee surgery, 0:10:45 in order to reduce the chance that this particular person 0:10:47 has access and becomes addicted in the first place. 0:10:51 As a second example, we can increase methods for access 0:10:55 to, for example, naloxone-based treatments in the field. 0:10:58 And we've done a number of projects to try and make sure 0:11:01 that there is patient-informed labeling 0:11:03 and other aspects in the field. 0:11:06 And then also, we can start to think about developing 0:11:08 and helping to develop new treatments for the treatment 0:11:10 of pain as well as new treatments for the treatment 0:11:13 of addiction, and start to solve the problem on that side. 0:11:15 So as FDA, we have to stick in our swim lanes, 0:11:17 but then we have to think about how that grooves 0:11:19 with everything else across government 0:11:22 so that there's more of a nationwide approach. 0:11:27 – Yeah, in that sense of handling the authority nature 0:11:30 of things, you've got to make some tough decisions, 0:11:32 like even things like contaminated food 0:11:35 going across the border, like you've got to inspect trucks. 0:11:37 How do you know which truck to look for?
0:11:40 – Yeah, so as I got to the agency, 0:11:42 I was surprised to find that not only are we responsible 0:11:47 for regulating about 20% of international GDP 0:11:51 as I mentioned, that sort of cuts across a broad number 0:11:53 of products, and the way that we regulate 0:11:55 is different by product. 0:11:58 So about 15% of our food is imported. 0:12:00 We need to be able to make sure that the food 0:12:02 is appropriately safe, 0:12:05 it's appropriately labeled, and that it's legitimate 0:12:06 for sale in this country. 0:12:08 And so that means that if you're sitting 0:12:13 at the border of Mexico, we need to basically investigate 0:12:15 trucks that are coming across the border 0:12:17 and look for violative products 0:12:19 that aren't appropriate for sale in the United States, 0:12:22 either for safety concerns or commercial concerns. 0:12:24 How do you know which truck to look at? 0:12:26 And if we don't get that right, 0:12:29 we can basically stop traffic for miles. 0:12:33 So in the case of inspecting trucks, 0:12:35 we have something called the Predict Program. 0:12:39 And the Predict Program is a 10-year-old rules engine 0:12:42 that's been written by some of the different centers 0:12:45 across FDA, where the rules start to predict 0:12:50 which truck is most likely to have unsafe food. 0:12:51 Now you can imagine that if we wrote those rules 0:12:55 10 years ago, they might be old rules. 0:12:58 And it's true that we update the rules every year, 0:13:00 but we do so by hand. 0:13:02 And as I think about trying to develop 0:13:03 a more modern agency, 0:13:06 can't we update the way that we modernize those rules? 0:13:08 And so right now we've got an experiment going on 0:13:11 where we're looking at machine learning based prediction 0:13:13 of which trucks we should inspect on the border. 0:13:18 And can we now use machine learning as a way to be smarter? 0:13:21 Importantly though, going back to your point 0:13:22 about we can't get it wrong, 0:13:25 because we lose consumer confidence when we get it wrong, 0:13:28 we can't just say machine learning is gonna be great, 0:13:28 let's go. 0:13:31 We actually have to thoughtfully do the experiments 0:13:35 to say, if we apply a new approach over the old rules engine, 0:13:38 are we going to now be able to improve 0:13:39 our inspection on the border? 0:13:41 – Yeah, no, I think it's fascinating to imagine 0:13:43 that there is this world where they have to use 0:13:45 deep learning, machine learning, to be able to do this. 0:13:48 You know, I think perhaps we also underestimate 0:13:52 maybe cases where maybe you have averted crises 0:13:53 that we never heard of. 0:13:55 You know, those might be the best TV shows. 0:13:56 I mean, are there any cases like that? 0:13:57 – As you were saying this, 0:13:59 I was trying to come up with some crises I could talk about. 0:14:01 That was actually what I was thinking. 0:14:04 So, you know, here's one that I just recently learned about. 0:14:07 I think we're all very worried about drug shortages. 0:14:08 I'm an oncologist by background. 0:14:10 I practiced in academia. 0:14:12 I took care of adults, 0:14:16 but certainly when we think about children with leukemia, 0:14:18 one of the drugs that's currently in shortage 0:14:21 is a leukemia drug for pediatric leukemia. 0:14:26 And we try and think through how do we help 0:14:27 avert drug shortages?
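An aside on the truck-inspection experiment described above: replacing a hand-maintained rules engine with a learned risk score is, in shape, a ranking problem. The sketch below uses synthetic shipments and a tiny hand-rolled logistic regression; it is not the FDA's Predict Program, just an illustration of what "machine learning based prediction of which trucks to inspect" could look like.

```python
# Minimal sketch of learning which shipments to inspect, framed as risk ranking.
# Features, synthetic data, and the hand-rolled logistic regression are all
# illustrative assumptions -- this is not the FDA's Predict Program, just the
# general shape of replacing hand-written rules with a learned score.
import math
import random

random.seed(1)

def synthetic_shipment():
    # Hypothetical features: perishability, days in transit, importer's past violation rate.
    perishable = random.random()
    transit_days = random.randint(1, 20)
    violation_history = random.random()
    # Synthetic "ground truth": riskier-looking shipments are more often violative.
    risk = 0.6 * perishable + 0.03 * transit_days + 0.8 * violation_history
    label = 1 if random.random() < min(risk / 2, 0.95) else 0
    return [perishable, transit_days / 20, violation_history], label

train = [synthetic_shipment() for _ in range(5000)]

# Tiny logistic regression trained with plain stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(30):
    for x, y in train:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def risk_score(x):
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Rank today's (synthetic) arrivals and send inspectors to the highest-risk trucks,
# instead of relying on rules that get updated by hand once a year.
arrivals = [synthetic_shipment()[0] for _ in range(10)]
for x in sorted(arrivals, key=risk_score, reverse=True)[:3]:
    print([round(v, 2) for v in x], "->", round(risk_score(x), 2))
```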
0:14:30 And in 2018, we had something north of 50 drug shortages, 0:14:34 but the little-known secret is that we helped to avert 0:14:36 over 160 drug shortages. 0:14:38 That's because, practically speaking, 0:14:40 we have developed a whole staff 0:14:42 focused on drug shortages. 0:14:45 We've tried to start to figure out ways to predict 0:14:47 what is going to cause a drug shortage 0:14:49 and try and intervene beforehand, 0:14:51 speaking directly to manufacturers. 0:14:54 It takes sometimes a while to build that muscle, right? 0:14:56 You can imagine, we have to first understand 0:14:57 what are the causes of drug shortages 0:14:59 and how are we gonna go after them? 0:15:02 But then practically speaking, once we do so, 0:15:04 we can help to avert that crisis. 0:15:06 I see the same kind of thing right now 0:15:07 in foodborne outbreaks, 0:15:10 where we have a call every morning at 9 a.m. 0:15:12 where we're sort of talking about the things 0:15:13 that worry us for the day. 0:15:15 I've officially stopped eating. 0:15:17 – Wow, that's not good news for any of us. 0:15:20 – 'Cause I'm getting very concerned 0:15:21 about what foodborne outbreak there's gonna be 0:15:22 through the day. 0:15:24 But there's sort of like this continuous sensing 0:15:26 to try and avert problems before they come. 0:15:27 – Yeah, fantastic. 0:15:29 So let's change the channel. 0:15:30 So we're watching CSI. 0:15:31 Let's switch to a different show. 0:15:33 Let's get into maybe something 0:15:34 more like Star Trek. 0:15:37 So let's talk about the future, 0:15:41 because the future is really becoming today, 0:15:45 like stuff that I remember from five, 10 years ago 0:15:46 were things that I thought would be sci-fi, 0:15:51 like gene therapy, gene editing, CRISPR therapies. 0:15:55 It's kind of crazy that I was actually talking 0:15:59 with my eldest daughter and she was finding 0:16:03 that you could get kits off of Amazon to do DIY CRISPR 0:16:06 and that she could make our dog glow in the dark. 0:16:09 I'd like her not to make our dog glow in the dark, 0:16:12 but in the end I think it was just crazy 0:16:14 that this is the world that we're living in. 0:16:17 And so, you know, there's two sides of this to dive into. 0:16:20 So maybe the first side is, like, we'll get to kids 0:16:21 and dogs glowing in the dark in a second, 0:16:25 but for thinking about the clinical side of this, 0:16:28 you know, how does the FDA think about something 0:16:31 like CRISPR? Because it's both gene editing, 0:16:35 it's a therapy, there's a delivery aspect. 0:16:38 You know, does this live with the FDA? 0:16:40 Does this live with the AMA? 0:16:42 You know, how do you even start to think about people 0:16:44 throwing these crazy new developments at you 0:16:48 that could radically transform medicine and cure disease, 0:16:50 but now have to really push the paradigm 0:16:52 for the FDA in new ways? 0:16:55 – So I think one of the things to go back to 0:16:59 is this issue of risk-based regulation. 0:17:02 You know, how are we gonna start to solve for this 0:17:05 in a way that appropriately has the right regulatory paradigm 0:17:10 for this problem at hand and aligns with the level of risk? 0:17:13 And as I think about this space, you know, 0:17:15 I think that we are living in a space 0:17:17 that gets closer and closer to customized 0:17:19 or individualized therapies, 0:17:21 the landscape of the n of 1, 0:17:23 and how do we make sense of that?
0:17:26 And practically speaking, in order to get there, 0:17:29 you have to have some kind of framework 0:17:31 that you apply that says, 0:17:34 all right, before we talk about CRISPR specifically 0:17:39 or antisense oligonucleotides, 0:17:41 you know, before we talk about any specific thing, 0:17:43 like, what's the framework we're going to apply, 0:17:47 and how do we do so in a risk-based way? 0:17:50 And practically speaking, that includes, 0:17:51 what do we know about safety? 0:17:54 What do we know about safety in vivo and in vitro, 0:17:56 in animals and in the petri dish? 0:17:59 What do we know about biological plausibility? 0:18:01 You know, what's our understanding 0:18:04 from a biology perspective that this would indeed work 0:18:06 in the way that we would expect it to work? 0:18:11 What do we have in terms of a predefined set 0:18:14 of expectations in terms of clinical outcomes 0:18:18 that we can monitor in an objective way 0:18:20 to understand whether or not this intervention 0:18:22 is making the difference that we expect it to make? 0:18:25 Also, what do we need to think about with respect 0:18:26 to whether or not this is going to apply 0:18:28 just to one individual person, 0:18:32 or we might now start to apply and scale this approach 0:18:34 across multiple individuals? 0:18:36 And that's actually going to start to balance 0:18:37 how much risk we're going to take 0:18:38 for this particular scenario. 0:18:41 What can we think about consistency in manufacturing? 0:18:44 And manufacturing, I think, starts to become more and more 0:18:46 of an issue across this space. 0:18:48 And then practically speaking, what are the ethics? 0:18:50 Like, what's going to happen in the clinic? 0:18:53 We may not be responsible in our authorities for ethics, 0:18:55 but I think we're responsible for at least consciously 0:18:57 thinking about what's going to go on. 0:19:01 So as I think about this space of incredible new therapies 0:19:06 coming forward, we all need frameworks that we can apply, 0:19:09 where every one of us can look at it through a different lens 0:19:11 and say, I can understand why that's the order 0:19:12 and that's how we're going to start 0:19:13 to work our way through it. 0:19:15 – Well, so let's drill down a little deeper, 0:19:18 because the n-of-1 framework sounds 0:19:20 mind-boggling from a therapeutic point of view, 0:19:23 but maybe there's actually a precedent. 0:19:26 So think about surgeries, like heart surgeries 0:19:28 are all kind of n equals one, people are different. 0:19:33 There are some similarities, and perhaps we're starting to look 0:19:36 at things like CAR-T, not just as, actually, back up. 0:19:39 So CAR-T is in this sci-fi category. 0:19:42 You take T cells out of your body, out of your blood, 0:19:46 you re-engineer them to make them sort of supercharged, 0:19:48 and you put them back into the patient. 0:19:50 And the results of that are just mind-boggling, 0:19:53 that tumors can melt away within days 0:19:55 and people are just literally cured of cancer. 0:19:59 And so CAR-T has sort of a biopharma aspect. 0:20:02 Maybe it has like a sense of doing molecular surgery, 0:20:05 you know, like in these n equals one cases. 0:20:06 So I'm curious to get your sense, 0:20:09 like n equals one is not completely unprecedented, 0:20:12 but what are the things that we need to do 0:20:14 to bring it into the future?
0:20:17 – So, you know, not only is n-of-1 not unprecedented, 0:20:19 if we kind of go back across medicine, across time, 0:20:21 we actually really started off as n-of-1 medicine 0:20:24 and became more quantitative across time, 0:20:25 especially as we had interventions 0:20:27 that were applicable to populations, 0:20:28 including in the millions. 0:20:30 So practically speaking, 0:20:33 we do have frameworks for n-of-1 therapies. 0:20:34 You mentioned surgery. 0:20:35 Another one, you know, in the landscape 0:20:37 that I came from in cancer medicine 0:20:39 was bone marrow or stem cell transplant, right? 0:20:44 These are places where we needed to first have 0:20:46 the scientific innovation that goes along 0:20:48 with biological plausibility 0:20:51 to start to figure out how we’re going to move forward 0:20:54 with new techniques and treatments about what to do, 0:20:57 but then start to apply a systematized set of expectations 0:20:59 about how to refine this and get it right. 0:21:02 So if I go back to bone marrow transplant, 0:21:05 you know, we started to develop refined processes 0:21:09 to understand which patient was appropriate for transplant, 0:21:11 where do we manage them in the hospitals? 0:21:13 Ultimately, we went to the home. 0:21:14 How are we gonna do that? 0:21:15 What’s the actual therapy as well? 0:21:17 What supportive therapies go around it, 0:21:19 including supportive therapy for the family? 0:21:22 And we slowly but surely worked our way through 0:21:25 not only the individual treatment, 0:21:27 but also all the processes that needed to go along with it. 0:21:29 And I think what you’re gonna see in, for example, 0:21:31 cell therapy and many of these other places 0:21:34 is that not only do we develop n-of-1 activities 0:21:37 where we say biological plausibility and safety 0:21:41 and good manufacturing and, you know, effectiveness, 0:21:44 but we also start to refine how they perform 0:21:48 in a greater scenario or greater system of care. 0:21:51 And we’ve seen that happen over and over again across time. 0:21:53 – So, you know, getting back to this genie out of the bottle, 0:21:56 the fact that you can get these CRISPR kits, you know, 0:21:59 on Amazon, and actually literally you can YouTube this, 0:22:01 there’s like some guy that has like 15 glowing dogs. 0:22:05 You know, what, how do you think about that 0:22:07 when people can do things like that, you know, 0:22:07 in their home? 0:22:10 How do you think about what should the FDA do 0:22:12 about things like that? 0:22:15 – So, you know, it goes back to this issue of authorities 0:22:18 and sort of where’s our, you know, core responsibility 0:22:20 and how we need to move things forward. 0:22:23 You know, practically speaking, we’re responsible 0:22:25 for thinking about medical products 0:22:29 that are now going into commercial use, 0:22:33 and specifically for now the treatment 0:22:37 of medical problems or sort of justifiable claims. 0:22:40 So, the individual patient who’s buying it off the internet, 0:22:42 injecting it into their side, 0:22:44 it gets into this very fuzzy area 0:22:46 of exactly what are our authorities. 0:22:49 And I think that becomes now really much more 0:22:52 of a national conversation around rights and privacy, 0:22:55 as opposed to, you know, FDA approval of the kit. 0:22:58 And practically speaking, if the kit was gonna be approved 0:23:02 for commercial purposes with claims and labeling, et cetera, 0:23:05 that’s when it starts to get into the FDA perspective.
0:23:09 It gets really murky when we live in this landscape 0:23:11 of the internet without claims. 0:23:14 You know, we see that pop up, not just in CRISPR kits, 0:23:16 but CBD and vaping. 0:23:18 Like, there’s a lot of other places. 0:23:20 And, you know, that’s why it kind of went back 0:23:21 to that point around authorities. 0:23:23 Like, there’s sort of like really clear guidelines 0:23:24 of what does the law say? 0:23:27 And then like as you move out, how do we think about that? 0:23:30 – So, and, you know, maybe the ultimate sci-fi example 0:23:34 of thinking about FDA into new areas are that, you know, 0:23:36 even algorithms themselves, you know, 0:23:38 can have therapeutic or diagnostic value. 0:23:42 And, you know, how do you think about sort of regulating 0:23:44 these algorithms themselves as they change 0:23:47 and go through revisions and have impact 0:23:49 on how we make these either clinical decisions 0:23:51 or even our therapies themselves? 0:23:53 – So this is a really important area of, again, 0:23:56 new regulatory paradigms and really trying to figure out, 0:23:59 like, how do we do this? 0:24:03 And as I think about algorithms and the 0:24:06 regulatory paradigms around them, first of all, 0:24:08 I tend to divide this into two main categories. 0:24:11 Algorithms that have a responsibility 0:24:12 of acting as a medical treatment. 0:24:14 So essentially software is a medical device. 0:24:16 And there, there’s a risk-based paradigm 0:24:19 that asks the question, does this particular 0:24:24 software product ultimately basically take the place 0:24:26 of the judgment of the physician 0:24:28 and ultimately now make a clinical decision 0:24:31 on the physician’s behalf without the physician intervening? 0:24:33 And depending on whether or not that’s gonna happen, 0:24:37 then there’s a differing set of expectations 0:24:41 in terms of the development of the regulatory paradigm. 0:24:42 So a couple of issues, though, that goes along with this. 0:24:47 One is that as software can update so quickly, 0:24:51 developing regulatory paradigms that allow also update cycles 0:24:53 that keep pace with software update cycles 0:24:56 is one of the things that, as FDA, we’re working on. 0:24:57 – Yeah, actually, is that even possible? 0:24:59 I mean, ’cause people can update software 0:25:00 obviously very quickly. 0:25:04 – So this is something that, as FDA, 0:25:07 we’ve been speaking publicly about quite a bit. 0:25:11 Can we come up with essentially preconditions 0:25:14 for software updates so that if there are 0:25:17 strong quality controls in the way software is developed, 0:25:22 well-understood product performance in terms 0:25:23 of the expectation of the updates, 0:25:28 can you now have algorithm updates that are as expected 0:25:30 and don’t require the same level of review? 0:25:32 And so that’s something that we’re certainly 0:25:34 spending a lot of time working on 0:25:36 through a series of pilot projects. 0:25:38 Also, the other part of what you’ve just mentioned 0:25:42 in terms of our sci-fi land is that, practically speaking, 0:25:46 software and algorithms are actually also innovating 0:25:51 all across the spectrum of life sciences and healthcare 0:25:53 just outside of what we regulate. 
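To make the risk-based split described above a bit more concrete, here is a minimal, purely illustrative Python sketch. The product fields, category labels, and routing logic are assumptions invented for illustration (they are not FDA criteria), but they capture the intuition from the conversation: the more a piece of software replaces the physician's judgment, and the more serious the condition, the more scrutiny it gets, and pre-agreed update conditions are what would let routine software updates ship without repeating a full review.

```python
from dataclasses import dataclass

@dataclass
class SoftwareProduct:
    """Hypothetical sketch of a software-as-a-medical-device candidate."""
    name: str
    makes_clinical_decision: bool   # does its output drive treatment directly?
    physician_in_the_loop: bool     # can a clinician review and override it?
    condition_is_serious: bool      # severity of the condition being managed
    updates_within_preagreed_envelope: bool  # do updates stay inside preconditions
                                             # agreed up front (quality controls,
                                             # expected performance)?

def review_path(p: SoftwareProduct) -> str:
    """Illustrative triage only; not an actual regulatory algorithm."""
    if p.makes_clinical_decision and not p.physician_in_the_loop:
        base = "highest scrutiny: full premarket review of clinical evidence"
    elif p.makes_clinical_decision and p.condition_is_serious:
        base = "elevated scrutiny: premarket review plus monitoring"
    else:
        base = "lighter touch: quality controls and postmarket monitoring"
    # The update-cycle idea from the conversation: pre-agreed conditions could
    # let expected updates ship without repeating the full review each time.
    if p.updates_within_preagreed_envelope:
        return base + "; routine updates cleared under pre-agreed conditions"
    return base + "; each significant update re-reviewed"

print(review_path(SoftwareProduct("auto-dosing-app", True, False, True, True)))
```

Again, this is only a sketch of the intuition; the actual expectations live in the guidance and pilot programs mentioned above.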
0:25:55 So it may not necessarily be a software product 0:26:00 that’s acting as a diagnostic or treatment activity, 0:26:03 but it’s a software product that’s intended 0:26:05 to support life sciences more globally, 0:26:08 whether that’s to make clinical trials more efficient, 0:26:09 to match patients to clinical trials, 0:26:13 to curate data, to help do workflow in the hospital. 0:26:15 And all of those kinds of software products, 0:26:19 we don’t directly regulate, but importantly, 0:26:21 those products also need some good signals 0:26:23 of here’s what good looks like, 0:26:25 and here’s how you should think about good software controls 0:26:26 in those settings as well. 0:26:28 – Actually, one bit of news that came up 0:26:32 was Google purchasing a huge amount of healthcare data. 0:26:36 And data, and understanding how that gets regulated, 0:26:40 I would think would also be a really tough question. 0:26:44 I mean, how do you think about how these new kinds of data 0:26:45 that people are generating, 0:26:48 and then new people who want to get access to that, 0:26:50 how do they have to think about ownership 0:26:53 of the data, privacy, and what are the opportunities 0:26:54 there and the challenges? 0:26:56 – I thought you were gonna go there in this session. 0:26:57 (laughing) 0:27:01 So data ownership and privacy. 0:27:03 So there’s an easy way for me to get out of this 0:27:05 as the FDA, which is, practically speaking, 0:27:07 what happens when the data comes to the FDA. 0:27:10 Much of the data that comes to the FDA 0:27:12 is the proprietary information and confidential information 0:27:14 that belongs to the company. 0:27:16 And so we treat it as confidential information. 0:27:18 And then there’s other information that we use, 0:27:21 for example, for drug surveillance and those kind of things 0:27:23 that are sort of more publicly available data sets. 0:27:28 So, in a lot of ways, I think that the easy FDA answer 0:27:30 is we don’t have a lot of things 0:27:32 that we specifically have to worry about. 0:27:35 But in my CIO role, I just recently started pushing 0:27:37 on the fact that I really think we need 0:27:39 a chief privacy role at FDA, 0:27:41 which we don’t currently have. 0:27:44 And when I brought this up, people said, 0:27:48 well, the data that comes to us is de-identified, 0:27:51 so this is not the problem that we’re really living in. 0:27:54 But if we go back to what prompted the question, 0:27:57 which is with the Ascension data 0:28:00 and what’s going on right now in the Google story, 0:28:05 I think practically speaking, our laws of the past, HIPAA, 0:28:07 really contemplated a different world 0:28:08 than we live in right now. 0:28:12 Whereby, essentially in 2019 and going forward, 0:28:16 it’s very hard to maintain privacy of any individual. 0:28:18 – It may be even hard to anonymize. 0:28:19 – It’s really very hard to anonymize. 0:28:22 And it’s not just genomics data. 0:28:24 The longitudinal story of your healthcare, 0:28:26 every single time you visited the doctor, 0:28:30 how much medicine you received, whether or not 0:28:33 you got an additional test such as an EKG, 0:28:36 that pattern is your unique footprint as well. 0:28:38 And so there’s lots of different ways 0:28:42 that data these days actually has a unique 0:28:45 and representative pattern that really is individual to you.
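As a toy illustration of why that footprint matters, here is a small hypothetical Python example; every record, date, and procedure in it is fabricated. It shows how a "de-identified" claims table with names stripped out can still be matched back to a single person once an outside observer knows even a couple of that person's visits.

```python
# Toy example: "de-identified" records re-identified by pattern matching.
# All data here is fabricated for illustration.

deidentified_claims = [
    {"record_id": "A91", "visits": [("2019-01-14", "EKG"), ("2019-03-02", "lipid panel")]},
    {"record_id": "B07", "visits": [("2019-01-14", "EKG"), ("2019-05-20", "MRI")]},
    {"record_id": "C33", "visits": [("2019-02-11", "flu shot")]},
]

# What an outside party might know about one person from other sources
# (a pharmacy receipt, a fitness app, a social media post about "getting an MRI").
known_about_target = {("2019-01-14", "EKG"), ("2019-05-20", "MRI")}

matches = [
    row["record_id"]
    for row in deidentified_claims
    if known_about_target.issubset(set(row["visits"]))
]

# If exactly one record matches, the "anonymous" row is effectively re-identified.
print(matches)  # -> ['B07']
```

The point is the uniqueness: once enough quasi-identifiers line up, exactly one record matches, which is part of why synthetic data and stricter access controls come up in the rest of the answer.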
0:28:48 And I think the reason I brought this up at FDA 0:28:52 as a Chief Privacy Officer is that I think practically speaking, 0:28:54 even information that is de-identified 0:28:56 from a HIPAA perspective actually still 0:28:59 is probably re-identifiable even in our context. 0:29:00 And we need to be starting to think about 0:29:02 what does that mean now and in the future. 0:29:05 And also, what are some of the creative ways 0:29:06 that we can start to prepare? 0:29:08 Some of it’s just having the conversation. 0:29:13 Some of it is making sure that we are absolutely fierce 0:29:16 when it comes to security and understanding 0:29:19 who’s accessing what data for what purposes. 0:29:20 But it’s also, when do we start using, 0:29:21 for example, synthetic data? 0:29:23 How do we actually start to think about 0:29:26 what new tools and techniques and tricks we can use 0:29:29 for data in the out and in the future to preserve privacy? 0:29:32 And I think it’s all of our responsibility. 0:29:33 – Well, so I also wanted to talk to you 0:29:34 about a different way to think about data, 0:29:37 which is we could imagine in clinical trial 0:29:39 the future that’s maybe fairly different. 0:29:43 So, I think, I don’t think anybody disagrees 0:29:46 or maybe I could be wrong that it’s important 0:29:48 for the FDA to test toxicity. 0:29:50 We don’t wanna put out things that are toxic. 0:29:53 But maybe a bold thing to say is that FDA 0:29:55 will run phase one clinical trials, 0:29:57 will review phase one clinical trials, 0:29:59 but maybe we don’t need phase two or three. 0:30:03 Maybe, especially for maybe life-threatening diseases, 0:30:07 we let real world evidence and payers decide efficacy 0:30:09 as they’re gonna do anyways through reimbursement. 0:30:11 And that maybe the FDA could actually pull back 0:30:13 and data gets used differently. 0:30:15 I think, do you think that’s viable? 0:30:19 – So, three things to kind of go into this 0:30:21 that will underscore ultimately what I believe 0:30:22 is a resounding yes. 0:30:25 So, practically speaking, we are already starting 0:30:30 to see new drug development paradigms in terms of 0:30:32 starting to shake up their traditional phase one, 0:30:35 phase two, phase three happen. 0:30:39 And practically speaking, we’ve seen drugs approved 0:30:40 based on phase one data. 0:30:42 We’ve seen expansion cohorts all happen 0:30:43 within the phase one setting, 0:30:46 which is basically now end up with phase one trials 0:30:47 with a thousand patients on a phase one trial. 0:30:50 That’s not the way I was taught as a clinical trial. 0:30:51 – And they’re sort of phase one slash two 0:30:52 or something like that. 0:30:53 – Yeah, there’s some of them phase one, too. 0:30:55 For them, they’re just expansion cohorts 0:30:57 in what’s traditionally caused phase one. 0:31:00 But practically speaking, I think that what we’re seeing 0:31:01 is a blurring of the phases. 0:31:02 What we’re also seeing now is 0:31:05 contemplation of platform trials. 0:31:07 We’ve been talking about platform trials for a while. 0:31:08 They’re really hard to pull off. 0:31:11 They’re hard to pull off because of issues 0:31:13 of contracting and intellectual property. 0:31:14 They’re hard to pull off 0:31:16 because the underlying infrastructure is tough, 0:31:18 which I’ll come back to. 
0:31:21 But practically speaking, we’ve talked about platform trials 0:31:24 where we can now, in one clinical trial setting, 0:31:26 start to evaluate multiple investigational products 0:31:28 simultaneously. 0:31:33 – We’ve started to say, once approving a drug, 0:31:38 start to now use information in the real world setting, 0:31:40 whether that is prospective or retrospective, 0:31:42 but classically said, real world data 0:31:44 and real world evidence to start to create 0:31:47 a total product story or a totality of the evidence 0:31:49 around this particular product. 0:31:51 So I think this is the landscape we’re going to. 0:31:54 I also think that we had essentially 0:31:59 an accelerant put into the story in December 2016, 0:32:03 which was 21st century cures. 0:32:04 If you look underneath the hood 0:32:06 of the 21st century cures legislation, 0:32:09 what you see are a number of elements 0:32:11 that push us in the direction of starting 0:32:14 to accelerate our clinical evidence development process. 0:32:15 What kind of elements? 0:32:19 So this includes starting to double down 0:32:21 on how we think about surrogate endpoints, 0:32:24 how we use patient-reported data in the process, 0:32:26 how we actually start to understand 0:32:28 and enable platform trials, 0:32:29 how we now use real world evidence, 0:32:33 and asking FDA to get smart about when 0:32:35 and we can confidently use real world evidence. 0:32:38 And so I think that all of those were enabling features 0:32:40 within 21st century cures, 0:32:41 and now we all have a responsibility 0:32:43 to start to figure it out. 0:32:45 My last point around this, 0:32:47 which is that it is really hard to do 0:32:50 because it takes putting the toe in the water, 0:32:53 and that has to happen with some company 0:32:58 or some investigators core baby. 0:32:59 And that actually is really hard 0:33:02 because it’s hard to want to subject 0:33:04 your particular product that you’re studying right now. 0:33:06 It may be your only shot on goal 0:33:08 into a clinical evidence framework 0:33:10 that we’re still trying to all figure out. 0:33:12 And so I’m not surprised it’s taking us a while 0:33:13 to figure out. 0:33:16 We have to figure out not only how to do the work 0:33:19 of new clinical evidence development paradigms, 0:33:21 but actually people have to be ready to participate 0:33:24 and it’s taking us a while. 0:33:25 – You know, and it’s interesting you mentioned 0:33:26 21st century cures. 0:33:28 You know, I’m curious about, you know, 0:33:30 to connect this to how we think about 0:33:32 how innovations like this happen 0:33:34 with all the political landscape 0:33:35 that has to make it happen. 0:33:37 And, you know, what is this interplay 0:33:40 between politics and the FDA? 0:33:41 I mean, I hear there’s an election, 0:33:42 you know, coming up sometime soon, 0:33:44 and that these are things that are, 0:33:47 you know, these things turn into realities 0:33:50 for the life that we all have to deal with here. 0:33:51 – It’s really interesting. 0:33:53 So if we look back, 0:33:56 so Medicare Modernization Act was passed in 2003. 0:33:57 So I’ll sort of use that as my starting point. 0:34:00 If I look back to MMA in 2003, 0:34:02 around that time, 0:34:07 we were contemplating sort of new payment delivery models, 0:34:09 comparative effectiveness. 
0:34:12 There was a report from the Institute of Medicine in 2007 0:34:14 around building a learning healthcare system 0:34:17 where basically available interconnected data 0:34:19 could help us continuously optimize 0:34:21 both healthcare delivery, 0:34:25 but also understanding performance of drugs and devices. 0:34:30 That piece of work from the Institute of Medicine 0:34:33 in 2007 basically said in order to pull this off, 0:34:35 we need a digital infrastructure. 0:34:37 And basically, you know, it was a treatise 0:34:39 that said here’s what’s going to happen. 0:34:41 The reason that’s so important 0:34:43 is that then we had something really important 0:34:46 in 2008 or so, the global financial crisis, 0:34:48 which then led to the stimulus bill. 0:34:51 So it was because that treatise was already ready 0:34:54 and also came along with the point of view 0:34:57 of we need a digital infrastructure to pull this off 0:34:59 that, embedded within the context of the stimulus bill, 0:35:00 we got the HITECH Act, 0:35:03 which led to the full-scale distribution 0:35:05 of electronic health records. 0:35:07 And we can all talk about electronic health records 0:35:09 and good and bad points of view, 0:35:14 but what you can see is that a big international experience, 0:35:18 the GFC, actually then had a direct day-to-day result 0:35:20 in terms of an enabling digital infrastructure 0:35:23 in the United States circa 2009. 0:35:27 There’s a bunch of different examples along the way, 0:35:30 but if I then think about what was happening 0:35:32 in terms of parallel legislation 0:35:34 on the House and the Senate side 0:35:36 that ultimately became 21st century cures, 0:35:39 we had this conversation going on 0:35:41 around innovative legislation 0:35:44 to try and accelerate the development of cures. 0:35:46 But, I don’t know if many of you remember, 0:35:48 it largely got put on the shelf. 0:35:53 And then we were moving into the election in November 2016, 0:35:55 the election happens, 0:35:59 and now it’s a country sort of in a rather tumultuous state 0:36:02 trying to figure out what might be bipartisan. 0:36:04 And 21st century cures gets pulled back off of the shelf 0:36:07 and in December 2016 gets signed into law. 0:36:11 And so I think, again, this was a piece of legislation 0:36:15 that had been formed over the prior two years, 0:36:18 a lot like that IOM report from 2007, 0:36:19 it was generally ready to go. 0:36:20 – Yep, and then just go– 0:36:21 – And then there was an event 0:36:23 and then that’s what pushed it along. 0:36:24 – No, that’s fascinating. 0:36:26 – Okay, so let’s change the channel one more time. 0:36:28 So let’s go to Food Network. 0:36:31 So there is an F in FDA, right? 0:36:36 And the food part I think is often underappreciated, 0:36:38 and so I’m curious to dive in there. 0:36:41 And, or we could combine shows, 0:36:42 I’m talking about Star Trek on the Food Network. 0:36:45 So one of the fascinating areas that we see 0:36:48 is genetic engineering and synthetic biology 0:36:50 connecting to food. 0:36:52 And you’re seeing things like cultured meat, 0:36:54 like meat that’s never, 0:36:56 that maybe originally the DNA came from an animal, 0:36:58 but what you get out of it 0:37:01 is like filet mignon or something like that, in principle. 0:37:06 And when you start to see that being created, 0:37:07 how do you even think about that? 0:37:11 Like, and what do you worry about?
0:37:13 How do you balance this innovation 0:37:16 with making sure that we’re being safe? 0:37:18 – So, I think these innovations 0:37:20 have shaken things up a lot, right? 0:37:22 So if we think about meat, 0:37:24 there’s a clear interplay in the United States 0:37:27 between what’s the responsibilities of the FDA, 0:37:30 what’s the responsibility of the USDA? 0:37:32 And so, just in this particular space, 0:37:35 we had to start to figure out, 0:37:37 again, that language of the authorities, 0:37:40 where do we appropriately say 0:37:42 this is the part that the FDA is responsible for, 0:37:46 and sort of our unique set of science-based skills, 0:37:48 versus this is the part that the USDA sees 0:37:50 as their core responsibility, 0:37:55 trying to keep markets intact. 0:37:59 And last year, we ultimately developed agreements 0:38:04 with USDA so that the parts of the cell culture food activity 0:38:07 they’ve got to do with cell culture, for example, 0:38:10 and that part of the equation 0:38:13 ultimately became the FDA’s responsibility. 0:38:16 And then as we moved now to marketing, et cetera, 0:38:19 it became USDA, and I’m actually sure 0:38:21 where the line got drawn, 0:38:23 but it sort of reminds me of a couple of things. 0:38:25 So first of all, I go back to this point 0:38:27 of authorities and jurisdiction. 0:38:29 So as new innovations come about, 0:38:31 we have to start to figure out, 0:38:34 do we need to change the regulatory paradigm 0:38:35 to make sure it works? 0:38:38 The second thing is that we also need to think about, 0:38:41 how do we make sure consumers understand what’s going on? 0:38:43 So what does labeling look like? 0:38:44 How do we talk about this? 0:38:46 How do we have consistent language? 0:38:50 Some of you may have heard the story last year 0:38:52 around almond milk and that almonds don’t lactate. 0:38:55 Well, you know, it’s ’cause like, practically speaking, 0:38:57 what’s rice milk and almond milk and dairy milk? 0:38:59 Like, you know, how do we make sure 0:39:02 that consumers understand what this is all about? 0:39:07 And so as I think about the innovations in food, 0:39:09 I also think about what does that mean 0:39:12 in terms of the innovations in the regulatory landscape? 0:39:15 And if we don’t try and keep those two things in lockstep, 0:39:16 we gum everything up. 0:39:18 – Well, and it’s interesting because like, 0:39:21 I’m not sure there’s long lines of people protesting 0:39:22 the fact that almonds don’t lactate, right? 0:39:23 – I think there are. 0:39:25 (laughing) 0:39:27 – But yeah, it comes from an interesting 0:39:30 different set of incentives there, right? 0:39:32 And so it’s just interesting, what do you call meat? 0:39:33 What do you call milk? 0:39:35 What do you call cheese? – Right, exactly. 0:39:36 – You know, another aspect of this 0:39:38 that I think is really fascinating is just also 0:39:40 all the things you have to do with foodborne illnesses 0:39:42 and just thinking about like, 0:39:45 how does the FDA sort of wrap their heads around that 0:39:47 considering that food could be coming from anywhere? 0:39:50 And these threats are coming from anywhere. 0:39:52 – Since we’re gonna go back to CSI for a second. 0:39:55 So these days, if there’s a foodborne illness, 0:39:58 we will take the bacteria and essentially 0:40:01 do whole genome sequencing to really understand 0:40:05 the outbreak as well as which individuals who are ill, 0:40:07 are they all related to the same outbreak? 
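As a rough sketch of that "are these cases related" step, here is a hypothetical Python example; the isolate names, pairwise distances, and threshold are all invented. The intuition is that isolates whose genomes differ by only a handful of variants get grouped into the same putative outbreak cluster, while a genome that is far away is treated as an unrelated background case. Real pipelines compare thousands of variants against shared national and international databases, but the clustering idea is the same.

```python
# Toy illustration of grouping bacterial isolates into outbreak clusters
# by genomic distance. Isolates, distances, and the threshold are invented.

snp_distance = {            # pairwise differences between sequenced isolates
    ("iso1", "iso2"): 3,    # very similar: likely same source
    ("iso1", "iso3"): 4,
    ("iso2", "iso3"): 2,
    ("iso1", "iso4"): 180,  # far apart: unrelated background case
    ("iso2", "iso4"): 175,
    ("iso3", "iso4"): 182,
}
THRESHOLD = 10  # "close enough to call related", purely illustrative

def dist(a, b):
    # look up the distance regardless of the order the pair was stored in
    return snp_distance.get((a, b), snp_distance.get((b, a)))

isolates = ["iso1", "iso2", "iso3", "iso4"]
clusters = []
for iso in isolates:
    # single-linkage: join the first cluster containing a close-enough isolate
    for cluster in clusters:
        if any(dist(iso, member) <= THRESHOLD for member in cluster):
            cluster.append(iso)
            break
    else:
        clusters.append([iso])

print(clusters)  # -> [['iso1', 'iso2', 'iso3'], ['iso4']]
```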
0:40:08 And for example, with listeria, 0:40:10 which particularly likes to be cold. 0:40:15 So it tends to get stuck on the nozzle in the plant. 0:40:17 And it ends up in, for example, 0:40:20 your frozen peas or other places that, you know, 0:40:24 ultimately you can trace now through whole genome sequencing 0:40:27 the fingerprint of the listeria then all the way back 0:40:32 to the individual who had the bad product 0:40:34 at their local Whole Foods, for example. 0:40:37 And so the ability to now trace that all the way through 0:40:40 is doable through modern technology. 0:40:42 And one of the things the FDA does in concert with CDC 0:40:45 and now really through international database 0:40:49 is maintain a database of all the different genomes 0:40:51 so that we can also track back and do this more quickly. 0:40:54 And so that’s kind of where things have been going 0:40:56 in the food outbreak space. 0:41:01 The other side of it is the application of technology 0:41:03 to trying to improve our ability 0:41:05 to go essentially farm to table. 0:41:08 So for example, blockchain and distributed ledger technology 0:41:12 to make sure that we can trace all the way from the farm 0:41:14 to the grocery store. 0:41:16 And one of the things we’ve been contemplating at FDA 0:41:19 is like ultimately, could you imagine the application 0:41:22 on your phone that allows you to scan peaches 0:41:26 and understand did the peaches have a full supply chain 0:41:28 that we could monitor. 0:41:30 And so these are the kinds of things 0:41:33 that are now a part of the lexicon at FDA. 0:41:36 We have a program called Smarter Food Safety 0:41:39 and that whole book of work is around thinking 0:41:42 about how do we move this field forward. 0:41:43 – That’s fascinating. 0:41:45 Okay, so we just have a few minutes left 0:41:46 and I want to take us now to the future. 0:41:49 So we started the discussion by talking about 0:41:52 the FDA being over 100 years old. 0:41:55 And I know we’ve sort of talked about the sort of challenges 0:41:57 and the work that’s been done. 0:42:00 Thinking about, let’s think about the next 100 years 0:42:01 and where does that go? 0:42:03 We’re gonna have new types of challenges. 0:42:06 One challenge maybe just to throw at you to start 0:42:09 is that we’re gonna have even just a different way 0:42:10 of thinking about disease. 0:42:14 That there’s all this science in the science of longevity 0:42:17 of just what can keep us healthier, longer. 0:42:18 Where it’s not about treating cancer, 0:42:20 it’s not about treating Alzheimer’s. 0:42:22 It’s about making sure you never get cancer 0:42:24 or you never get Alzheimer’s. 0:42:25 And all the therapeutics that would be done 0:42:29 to expand lifespan and expand healthy span. 0:42:32 That seems like just completely paradigm breaking. 0:42:34 How do you think about that? 0:42:38 – Well, so to begin with I think that it helps us 0:42:43 to distinguish between aspects of biological aberrance. 0:42:45 Where essentially biology’s gone bad 0:42:48 and we’re trying to think about treatments to fix it. 0:42:51 Versus the really difficult construct 0:42:52 that you’re talking about, 0:42:56 which is how do we essentially apply preventative approaches 0:42:59 and have the confidence that these approaches 0:43:02 are both safe and effective in a longitudinal frame 0:43:04 that really is hard to contemplate 0:43:07 ’cause we don’t know if we’ve ever gotten there. 
0:43:09 It’s really hard to know that this particular treatment 0:43:12 was indeed successful for this individual. 0:43:15 So I’m curious, as you do this here, 0:43:17 this has certainly been an area of focus for you. 0:43:18 What’s your thoughts? 0:43:21 – Yeah, I think a lot of it is also having the biomarkers 0:43:24 that you can to know that to your point, 0:43:26 I think a lot of what you’ve been talking about 0:43:28 is just it’s about measurement. 0:43:29 And it’s about understanding how those measurements 0:43:32 correlate with a harm. 0:43:34 And that I think we just need to know what to measure 0:43:37 and that’s work to be done. 0:43:39 But I think that that’s something 0:43:41 that I think is a part of the science already. 0:43:46 – And I think that if I follow on that line of thinking, 0:43:48 it’s also about longitudinality. 0:43:50 It’s about saying here’s a surrogate 0:43:52 or an intermediate set of endpoints 0:43:54 that we’re going to monitor, 0:43:56 but we’re actually gonna understand longitudinally. 0:43:59 How does that then translate to what we understand 0:44:01 is happening across time? 0:44:03 Historically, the way we’ve often thought about effectiveness 0:44:05 is sort of as a fixed book of work. 0:44:07 And I think that what you’re gonna see over time 0:44:08 is we’re gonna talk more and more 0:44:10 about longitudinal performance. 0:44:12 And this is a perfect example of that. 0:44:13 – And so one last really quick question. 0:44:16 So what does this mean for the next 100 years of the FDA? 0:44:20 What do you see, what’s your vision for it? 0:44:22 What are we gonna be talking about 100 years from now? 0:44:24 – So I think the FDA of the future 0:44:27 is gonna be far more digital and informed by data 0:44:28 at all times. 0:44:31 A lot of the activities are gonna be automated 0:44:32 so that we can focus our time and attention 0:44:34 on the things that need to happen first. 0:44:38 That we’re going to be able to ultimately understand 0:44:41 how products are performing across time 0:44:44 and actually use that information from across time 0:44:46 to right size indications in a smart way. 0:44:47 – Okay, well thank you so much. 0:44:48 – Thanks. 0:44:51 (audience applauding) 0:44:54 (audience applauding)
The federal agency known as the FDA, or the Food and Drug Administration, was born over 100 years ago—at the turn of the industrial revolution, in a time of enormous upheaval and change, and rapidly emerging technology. The same could be said to be just as true today. From CRISPR to synthetic biology to using artificial intelligence in medicine, our healthcare system is undergoing massive amounts of innovation and change.
Covering everything from gene-editing your dog to tracking the next foodborne outbreak, this wide-ranging conversation between Principal Deputy Commissioner of the FDA Amy Abernethy and Vijay Pande, GP on the Bio Fund at a16z, discusses how the agency is evolving to keep pace with the scientific breakthroughs coming, while staying true to its core mission of assessing safety and effectiveness for consumers in the world of food and medicine.
Highlights:
What the FDA looks like today and the key steps of the FDA process to getting a drug/product to market [2:20]
How to manage a culture when mitigating risk is a top priority while aiming to innovate for the future [5:22]
Creative problem-solving in times of crisis, such as the Opioid crisis [9:58]
Preparing for and preventing drug shortages at scale [13:30]
How advances in bioengineering are transforming healthcare [16:00]
How the FDA is thinking about n=1 therapies and its applications in the future [18:54]
0:00:01 – Hi, everyone. 0:00:03 Welcome to the A6NZ podcast. 0:00:04 I’m Sonal. 0:00:06 Today we have one of our reruns, 0:00:09 which was recorded during the JPMorgan Healthcare Conference 0:00:12 last year, where the A6NZ bioteam had a lot 0:00:14 and has a lot going on this year as well. 0:00:18 And it’s with Vas Narasimhan, the CEO of Novartis, 0:00:19 one of the largest healthcare 0:00:22 and pharmaceutical companies in the world. 0:00:23 In terms of volume, 0:00:25 they’re the largest producer of medicines 0:00:27 with 70 billion doses a year 0:00:29 across a wide range of therapeutic areas 0:00:32 from cancer to cardiovascular disease and more. 0:00:34 Joining me to interview him 0:00:37 are A6NZ general partners Jorge Conde and Vijay Pande 0:00:39 from the A6NZ bioteam. 0:00:42 And we cover the latest trends in therapeutics, 0:00:45 including the journey in chemistry and medicine 0:00:48 from large molecules and antibodies and proteins 0:00:51 to small molecules and other new modalities with RNA 0:00:55 and now moving more into the cell and gene engineered world. 0:00:58 We also cover when science becomes engineering 0:01:00 and what does that mean at an industry 0:01:02 and a big company innovation level? 0:01:05 And then we touch on topics such as clinical trials, 0:01:09 healthcare go-to-market, shifts in talent in the landscape 0:01:11 and startups working with Big Pharma. 0:01:14 But we begin with the business of science 0:01:17 with R&D and innovation both inside and out. 0:01:21 – You build up R&D expertise in our industry 0:01:23 over long periods of time. 0:01:25 If you think about cardiovascular disease, 0:01:26 we’ve been in it 40, 50 years. 0:01:28 When you think about transplant and immunology, 0:01:32 again, 40, 50 years, oncology, 25 years. 0:01:34 So you build up an accumulated expertise. 0:01:37 And really the art of it is to make sure you have a depth 0:01:40 of new medicines to keep filling your pipeline 0:01:42 in each one of those therapeutic areas. 0:01:44 Now there are instances where we find new breakthroughs 0:01:46 in areas we’re not in. 0:01:47 Those you have to really think about, 0:01:50 are you gonna really stay in that area for the long term? 0:01:52 The other element of the story 0:01:54 is when you really have exhausted your pipeline, 0:01:56 we’re not so good as an industry at this, 0:01:58 but you have to also be prepared to exit, I think, 0:02:00 areas where you’re gonna be subscale. 0:02:01 And that’s something we’re working on. 0:02:03 We’ve made a number of exits actually this year 0:02:05 where we just said these are areas 0:02:07 we just can’t sustain longer term. 0:02:08 – Can you give us a little bit more color 0:02:09 on how you make those decisions, 0:02:12 especially as a CEO steering this? 0:02:13 That’s a pretty big, I mean, frankly, 0:02:14 it’s a decision that every big company, 0:02:17 regardless of industry has to think about, 0:02:20 which is essentially what to proactively invest in 0:02:22 and what to proactively opt out of, 0:02:24 which killing things as we call it in the media business 0:02:26 is a pretty hard thing to do. 0:02:27 How do you think about that? 0:02:30 And how do you tease apart the signal from the noise 0:02:31 when you get a lot of inputs, 0:02:32 both internally and externally? 0:02:34 – So we do it currently at two levels. 0:02:37 One is from an overall portfolio standpoint.
0:02:39 We’ve made the decision to really focus 0:02:41 as a medicines company powered 0:02:44 by advanced therapy platforms and data science. 0:02:46 So in order to really make that happen, 0:02:51 we transacted in 2018 around $50 billion of deals 0:02:53 to really change the shape of the company. 0:02:56 We took principle decisions to leave consumer healthcare 0:02:58 ’cause we just didn’t believe we would be a long-term 0:03:00 leader in consumer healthcare. 0:03:02 A decision to spin our Alcon business, 0:03:05 which is to get out of medical devices and contact lenses. 0:03:08 And alongside that, as we moved out of those other areas, 0:03:11 we made significant investments in acquisitions 0:03:14 in this next wave of therapy, cell therapy, 0:03:18 gene therapy in an area called radio drug conjugates, 0:03:21 which is a nuclear medicine kind of area. 0:03:24 So that was at one level, at the portfolio level, 0:03:27 changing a 20 plus year trajectory 0:03:29 to actually become a very diversified company. 0:03:31 It really came out of a conviction in my mind 0:03:33 that science is moving so fast. 0:03:35 You have to focus your capital 0:03:37 and really focus your energies. 0:03:38 That’s at the macro level. 0:03:40 Now, when you zoom in to innovative medicines 0:03:42 and we have to decide, okay, which therapeutic area 0:03:44 do we stay in cardiovascular disease 0:03:46 or do we stay in ophthalmology? 0:03:48 I mean, those are pretty tough decisions 0:03:52 because if you take down an R&D effort, 0:03:55 so one example for us was infectious diseases, 0:03:57 where we had a longstanding effort. 0:03:58 It was not an easy decision. 0:03:59 I mean, it was a lot of going around. 0:04:00 Are we really sure? 0:04:03 Because you can’t change your mind now in three, four years 0:04:05 and say, I wish I had it back. 0:04:07 It’ll take you another 10 years to build it back up again. 0:04:12 The cycles of innovation and science are accelerating. 0:04:13 The science is moving much more quickly 0:04:14 than it has in the past. 0:04:16 So assuming that that’s the case, 0:04:17 will it continue to be true 0:04:21 that it will take decades to build up expertise 0:04:23 in any given therapeutic area? 0:04:26 In other words, will there be future emerging players 0:04:29 that come much more quickly than they have historically? 0:04:32 I think there can be very fast players 0:04:34 who are really working on a couple of medicines 0:04:36 or a couple of assets. 0:04:38 But when I talk about building up a capability, 0:04:41 I’m really talking about a scaled capability 0:04:45 that could generate new medicines consistently over time. 0:04:47 And while I do believe that pace of science 0:04:49 is improving dramatically, 0:04:51 we also have to keep reminding ourselves 0:04:52 and being humble with the fact 0:04:54 that we understand a fraction of human biology. 0:04:57 And actually, when you look at attrition rates 0:05:00 in our industry, really the chances of success that we have, 0:05:03 they haven’t moved in the last 15 years. 0:05:05 Still, when we bring a medicine into human beings, 0:05:08 on average, only one out of 20 works. 0:05:09 Finally. – Really? 0:05:11 – And that has stayed constant, 0:05:13 despite the fact we’ve had this explosion in new science. 0:05:14 – I just wanna quickly pause on that for a moment, 0:05:16 because that’s a pretty important point. 0:05:19 So only over the last, say, 10, 20 years, 0:05:23 only one out of 20 medicines actually work in the human body. 
0:05:24 – Once we get it into human beings, 0:05:28 we have about a 5% success rate, 5% to 10% success rate. 0:05:30 And it varies by therapeutic area. 0:05:32 We’ve actually been fortunate at our company, 0:05:35 we average in that same metric about 8% to 10%. 0:05:39 But if you look industry-wide, it is about 5%. 0:05:40 – The attrition rates are pretty constant, 0:05:42 but the costs still keep going up too. 0:05:43 – They do. 0:05:44 – How does that work out? 0:05:47 ‘Cause in a sense, one analogy that people use 0:05:50 is almost like trying to get oil out of the ground. 0:05:52 And the low-lying fruit, I’m mixing analogies here, 0:05:54 but the low-lying fruit has been taken, 0:05:57 and it’s just harder and harder to find new therapeutics. 0:05:59 Or do you feel like the science is moving fast enough 0:06:00 that that’s not an issue? 0:06:02 – You know, I think we go through waves. 0:06:03 I think there was a period of time 0:06:06 where probably in the 1990s and early 2000s, 0:06:08 we had a pretty big wave of innovation, 0:06:09 and we could bring a lot of medicines forward. 0:06:12 We went through a lull for seven, eight years. 0:06:14 Now, I think with, again, explosion and ability 0:06:17 to really understand the mechanisms of disease, 0:06:18 we’re seeing a renaissance, 0:06:21 a record number of FDA approvals. 0:06:23 We’re investing heavily in new therapy areas. 0:06:25 I mean, 15 years ago, people would have said, 0:06:27 you’re crazy if you think we’re gonna do gene therapy 0:06:29 and cell therapy and all the things now 0:06:31 that we’re doing at scale. 0:06:32 You know, the costs really come 0:06:34 from our ability to manage complexity. 0:06:36 When you look at it, over time, 0:06:38 the trials get more complex. 0:06:42 The requirements from regulators get more complex. 0:06:43 Because the science gets more complex, 0:06:45 we can actually measure more things. 0:06:48 So we add and add and add and add. 0:06:52 And that’s led to an interesting, pretty linear increase 0:06:55 in cost per patient in our clinical trials. 0:06:56 I don’t think it has to be that way. 0:06:59 I think what really our industry’s not been great at 0:07:01 is really deploying technology 0:07:02 to make this much more efficient. 0:07:04 So I think there’s a lot of opportunity. 0:07:05 – Yeah, well, why do you think that’s the case 0:07:07 that’s been so hard to deploy? 0:07:10 – I think it’s, you know, we’re a high margin industry. 0:07:13 You know, unless it’s easy enough 0:07:14 just to keep arguing to yourself, 0:07:15 it doesn’t really matter. 0:07:18 As long as we get another big medicine out, it’s okay. 0:07:19 Let’s just keep going. 0:07:21 – Well, and if you screw things up, 0:07:22 there’s a huge cost. 0:07:22 – There’s a big downside. 0:07:24 But I think now we’re reaching the point 0:07:25 where we have no choice, 0:07:28 but to really now engage technology. 0:07:31 I mean, there are estimates now from various sources 0:07:33 that believe you could take out 20% 0:07:34 of clinical trials costs 0:07:38 if you were actually to really deploy technology at scale. 0:07:40 – If the attrition rates have been flat 0:07:41 for as long as they’ve been, 0:07:43 and there have been all of these proliferations 0:07:46 of new platforms, cell therapies, gene therapies, 0:07:48 is there a measure that you qualitatively 0:07:50 or even quantitatively can look at that says 0:07:52 that the medicines that are getting through 0:07:55 are meaningfully better medicines or different medicines? 
0:07:58 In other words, the failure rate may be the same, 0:08:01 but the impact of success is at greater now 0:08:02 in any measurable way. 0:08:04 – So it’s a very important point. 0:08:07 And there’s no objective measure. 0:08:10 I mean, various institutes have different measures, 0:08:13 so it’s nothing I think that is used externally. 0:08:16 We internally have just set a very clear bar now 0:08:18 for ourselves, primarily because we live in a world 0:08:21 where nobody wants a Me Too medicine 0:08:24 or a medicine that’s just incrementally better. 0:08:25 We say to ourselves, 0:08:27 it has to replace the standard of care. 0:08:31 And that usually means it gives such a big clinical benefit 0:08:34 to patients that it just becomes the de facto medicine 0:08:36 of choice in that therapeutic area. 0:08:37 That’s a shift. 0:08:39 It means a lot of projects no longer make the cut 0:08:41 because you’re really asking yourself, 0:08:43 if I don’t have something really transformative, 0:08:45 I’m not gonna take it forward anymore. 0:08:47 And so all of our research teams and development teams 0:08:49 are having to now come to grips with that, 0:08:52 that we will stop projects unless we really believe 0:08:54 it can redefine the standard of care. 0:08:55 – I have a process question behind this 0:08:57 ’cause it’s a parallel to this idea 0:08:59 of basically going for a slugging average 0:09:02 versus batting average and like outsize hits 0:09:04 with great outsize impact. 0:09:07 So behind the scenes, what are some of the mindsets 0:09:10 that you and the R&D teams bring to bear 0:09:13 to make these investments for slugging versus batting average? 0:09:15 How do you set things up to make that happen? 0:09:18 – We have all of the various review committees 0:09:19 and portfolio meetings, et cetera, 0:09:23 but really what it takes is a lot of discipline 0:09:24 about the criteria that you’re using. 0:09:26 I mean, so we have very clear criteria. 0:09:28 We’ve tried to apply that rigor. 0:09:31 I think people, there’s a lot of romanticization 0:09:33 and R&D about big ideas. 0:09:36 So much of it is just about discipline and discipline. 0:09:37 – Nitty gritty. 0:09:38 – Nitty gritty, disciplined execution 0:09:39 of how you look at projects. 0:09:41 I think that’s one element. 0:09:43 I think second, you have to build patience 0:09:46 because part of the reason mediocre projects go forward 0:09:49 is you start to worry you don’t have enough in the pipeline 0:09:52 and you start to lose faith that something’s gonna come. 0:09:54 And you have to believe in your own scientists 0:09:57 and your own R&D engine to say I’m gonna say no five times 0:10:00 ’cause I believe the sixth one could be the big one 0:10:03 rather than get worried and just start letting things through 0:10:06 because actually what you do is you crowd out the money. 0:10:07 – It’s an opportunity cost. 0:10:09 – It’s a huge opportunity cost when you take those. 0:10:13 And that’s been a real ongoing challenge for us. 0:10:16 I think that the third element is to bring a real lens 0:10:19 of what does it take to be successful in the market. 0:10:21 I think historically we just had a belief 0:10:22 that if we had a great product, 0:10:23 it’ll all work itself out. 
0:10:26 Now we actually ask the market access teams 0:10:28 that have to negotiate with payers 0:10:30 to show up at every meeting and say actually, 0:10:33 even in phase two, so really early for us, 0:10:35 what is it really gonna take to, let’s say, 0:10:37 bring a new medicine forward in asthma, 0:10:39 a new medicine forward in multiple sclerosis. 0:10:41 And if we don’t make the cut, 0:10:43 we just have to be brutally honest with ourselves. 0:10:46 – Reimbursement’s more important or more on your mind 0:10:48 than sort of just getting past FDA. 0:10:50 – It used to be we’d think about reimbursement 0:10:51 as we got to launch. 0:10:54 Now we’re thinking about it really early in development. 0:10:56 – For people that are new to the space 0:10:57 or just a lot of entrepreneurs, 0:11:00 they think that the FDA is the real challenge. 0:11:02 And just getting something to clinical trials 0:11:04 is expensive and hard and that’s true. 0:11:06 But the reimbursement, being first in class, 0:11:09 having this huge jump in care, 0:11:10 that is the real challenge. 0:11:12 And so what I would love to see, 0:11:15 especially in our founders, is for them to work backwards. 0:11:17 But work backwards, not from getting through trials, 0:11:19 but work backwards from reimbursement. 0:11:21 – Yeah, and the way that Vas describes it, 0:11:22 I think is absolutely true, 0:11:26 is a lot of people view reimbursement as a process 0:11:28 to get to market access. 0:11:31 But reimbursement is really just a proxy 0:11:32 for value proposition. 0:11:34 So what are the actual user stories? 0:11:35 Who’s gonna actually value this? 0:11:36 Who’s willing to pay? 0:11:38 It’s almost like a pricing study. 0:11:40 It’s almost like a price discovery in the consumer world. 0:11:43 In this case, it’s obviously the payer’s 0:11:45 not the direct beneficiary of the therapeutic, 0:11:48 but they do bear the burden of the cost. 0:11:49 And so they’re the great arbiter of saying, 0:11:51 is there a true value proposition? 0:11:53 And actually that’s why when you talk about moving away, 0:11:55 industry moving away from me too drugs, 0:11:59 it was because a me too drug arguably could not show 0:12:03 a very significant marginal increase in value proposition, 0:12:05 and therefore it could be very difficult to justify 0:12:06 an increased premium price. 0:12:09 And so that historically has been the big challenge. 0:12:10 – On that note, I do find it ironic, 0:12:12 a big part of your business is still generics. 0:12:14 So I mean, what is that but a me too drug? 0:12:17 Like how does that fit into this big picture? 0:12:20 – Yes, if you look overall at Novartis, generics, 0:12:23 from a sales standpoint and value standpoint, 0:12:26 are a small portion of the company, 0:12:28 but if you look at a volume standpoint, 0:12:29 it’s the biggest part of access. 0:12:32 And so really what our generics business does 0:12:35 is take medicines when they go off patent, 0:12:36 and we then produce them at scale. 0:12:38 And we were the largest producer, for example, 0:12:40 of penicillins in the world. 0:12:42 I mean, so we have a huge role to play 0:12:45 in providing access to medicines around the world. 0:12:47 I mean, right now Novartis reaches 0:12:50 about a billion patients a year through our work. 0:12:52 And a lot of that is through our Sandoz generics unit. 0:12:54 – So if you break it down, so if you, 0:12:57 there’s 70 billion doses that are Novartis drugs 0:13:00 every year, how many of those 70 billion are generics?
0:13:03 – I would say roughly 80%. 0:13:06 – Is it standard that big pharmaceutical companies 0:13:08 have their own manufacturing facilities? 0:13:10 And do you see that changing anytime in the near future? 0:13:14 – Most pharmaceutical companies have their own manufacturing. 0:13:15 I mean, there’s different trends right now. 0:13:19 There’s a pretty significant increase of use of Chinese 0:13:23 and other producers for many elements of the manufacturing, 0:13:26 but still historically we’ve had our manufacturing facilities. 0:13:27 The biggest trend we have right now 0:13:30 is a shift to these advanced therapy platforms. 0:13:33 So what we’re having to do is, as our volumes go down 0:13:36 in kind of the older medicines that were produced 0:13:39 in huge volumes in the innovative medicines, 0:13:41 we’re now building up cell and gene therapy 0:13:43 production facilities around the world. 0:13:45 So that’s a shift we’re seeing. 0:13:49 – You talk about Novartis becoming a medicines company 0:13:52 using data science and novel platforms. 0:13:54 You’re very specific about saying medicines. 0:13:57 Are medicines and therapeutics synonyms 0:13:59 in the Novartis mindset? 0:14:00 – I would say yes. 0:14:01 There’s of course a gray zone here. 0:14:03 So what is a therapeutic? 0:14:07 We would say medicines is our proxy for therapeutics. 0:14:10 I mean, one example: we launched in the US 0:14:12 a digital medicine 0:14:14 with Pear Therapeutics. 0:14:17 This is the first digital app with an FDA label 0:14:19 that’s being used for opioid addiction 0:14:22 and other psychiatric illnesses. 0:14:26 And it is literally an app that has run clinical trials 0:14:28 and has gotten an FDA-approved label. 0:14:31 So that’s truly an example of a therapeutic. 0:14:34 But I would put that within our world of medicines. 0:14:35 – So software is a drug. 0:14:36 – Yeah, software is a drug. 0:14:39 – What’s the most surprising indication that you would expect 0:14:41 to see for a digital therapeutic? 0:14:45 ‘Cause I think most people assume that it’s gonna be around 0:14:48 behavioral health issues or addiction 0:14:50 like with the work you’ve done with Pear. 0:14:52 Can you imagine moving beyond that 0:14:55 from an indication standpoint for a digital therapeutic? 0:14:57 – I mean, my hope would be we could develop one 0:14:58 for obesity, right? 0:15:00 That somehow a digital therapeutic 0:15:02 could actually just move the needle 0:15:04 a little bit more on obesity. 0:15:06 It’s such a massive issue for society. 0:15:10 And it should be one where a behavioral intervention 0:15:13 on top of other interventions 0:15:14 could actually move the needle. 0:15:16 Because so much of it is behavioral. 0:15:19 – I mean, there’s not an example 0:15:21 that’s non-behavioral in your future? 0:15:24 – You’re not curing sickle cell with an app. 0:15:26 – I mean, I would put a guess around fertility, 0:15:28 but one could argue that’s also psychosomatic. 0:15:29 – Well, I mean, the things that actually, 0:15:33 so you think about like the modern medical sort of marvels, 0:15:34 I think about like an antibiotic. 0:15:35 Like I was sick when I was in college 0:15:37 and I had a super high fever. 0:15:40 I got an antibiotic and like the next few days I’m fine. 0:15:42 Maybe without that, I’d have been dead. 0:15:43 And so that’s kind of magical. 0:15:44 And it’s not like I have to take antibiotics 0:15:46 for the rest of my life or whatever like that. 0:15:47 I’m just cured.
0:15:50 But the amazing thing about behavioral 0:15:52 is that that’s where you don’t have this. 0:15:54 I can’t imagine that you have a molecule 0:15:55 that cures depression. 0:15:57 You take that and then you’re just done. 0:15:59 Or you take one, a couple of doses, 0:16:01 and then you no longer have type two diabetes. 0:16:02 And behavioral is really broad. 0:16:05 It’s depression, it’s smoking cessation. 0:16:06 It’s type two diabetes. 0:16:08 It’s even quite possibly Alzheimer’s. 0:16:09 I don’t know if you’ve seen like all these. 0:16:11 – I’ve seen a lot of recent papers on this. 0:16:12 It’s fascinating. 0:16:13 – And so these are actually the areas 0:16:16 where if you look at the biology of Alzheimer’s disease, 0:16:18 that’s just a mess. 0:16:19 So it could be that for these things 0:16:21 where you have a very clear target, 0:16:23 I just have to hit the ribosome of the bacteria 0:16:23 and then we’re done. 0:16:24 That’s easy. 0:16:26 But there may be actually the future of things 0:16:28 where it’s just hard to hit with a molecule. 0:16:30 And all that is primarily behavioral. 0:16:31 – Interesting. 0:16:32 So basically you’re almost arguing 0:16:33 the question might be moot 0:16:35 because all of disease is behavioral 0:16:35 in some capacity. 0:16:37 – Well, no, all the stuff that’s hard. 0:16:39 – The complex is complex. 0:16:44 – The low-lying fruit molecularly is not behavioral. 0:16:46 – There’s this infrastructure layer 0:16:48 that’s being created now around gene therapies. 0:16:50 So as folks figure out manufacturing, 0:16:52 as people think about delivery, 0:16:55 as people think about all of the various components 0:16:56 of modular aspects, 0:16:59 do you think those are things that necessarily 0:17:02 would be owned by one company, 0:17:05 or are these horizontal infrastructure layers 0:17:08 that a third party should develop 0:17:09 and sort of deploy across the industry? 0:17:10 How do you think this plays out? 0:17:13 In other words, is there a startup 0:17:14 that figures out AAVs? 0:17:17 Do they sort of supply AAV to the industry 0:17:20 or do they go and develop their own gene therapy? 0:17:22 – It’s a very timely question. 0:17:23 We don’t know the answer yet. 0:17:28 I think right now in this nascent phase that we’re in, 0:17:29 we believe we need to just own it 0:17:31 because the launches are so important 0:17:34 that we can’t afford there to be a lot of experimentation 0:17:37 and not really owning the supply chain. 0:17:39 We’ve done $15 billion of acquisitions 0:17:40 just last year in the space, 0:17:43 not including all of our internal work 0:17:44 in each of these areas. 0:17:48 So we’ve chosen to build out the infrastructure ourselves. 0:17:50 I think as the technology matures, 0:17:51 we’ll get more comfortable 0:17:53 about which areas we could send out. 0:17:56 I also think the entrepreneurial world 0:17:58 will also figure out where they can play a role. 0:18:01 I think that’s still all being figured out right now. 0:18:03 And I actually don’t have a view yet. 0:18:07 I don’t know what’s gonna be the elements we must own 0:18:09 and what are the elements that we could afford to give 0:18:10 to other parties. 0:18:11 – You know, on that note, 0:18:13 I’d love to hear from you more about 0:18:16 how you figured out the build versus buy piece then, 0:18:19 because a big part of your work is, you know, 0:18:20 focused on innovative medicines.
0:18:21 And you made this argument that it takes 10 years 0:18:24 to build up a base even longer, 20, 30 years. 0:18:27 And yet you’re also acquiring the expertise 0:18:29 for the very new cutting edge things, 0:18:31 which almost makes it seem like you don’t, 0:18:32 seem like you don’t have to even bother 0:18:34 building up that base, why not just acquire it? 0:18:35 So how do you sort of navigate 0:18:37 the build versus buy part of this? 0:18:40 – I think when you wanna enter very new areas, 0:18:42 sometimes it’s prudent to ask yourself, 0:18:45 does somebody have this much more figured out than you do? 0:18:47 So if you take the example of gene therapy, 0:18:49 we acquired a company called AVEXIS, 0:18:51 though it really is, I think the front leading edge 0:18:53 gene therapy company. 0:18:56 Now the scientists at AVEXIS, you know, 0:18:58 they’ve been working at this actually 0:19:00 in their academic labs for 25 years. 0:19:02 I mean, they’ve been working on trying to hone 0:19:06 how to use AAV vectors to get to the neuromuscular system 0:19:09 of children to address these issues. 0:19:11 They’d actually figured out the manufacturing. 0:19:13 They built the manufacturing site. 0:19:15 We were working on gene therapies ourselves in-house, 0:19:17 but when we looked at that, we said, 0:19:20 this is an opportunity to really accelerate what we’re doing. 0:19:24 And so it made sense, I think, to go external. 0:19:26 There’s always that balance. 0:19:28 You know, we are a company that’s very focused 0:19:29 internally on research. 0:19:32 We consistently invest at the high end on internal R&Ds 0:19:35 simply because we believe that’s the heart of the company. 0:19:38 But what I’m trying to keep asking our people is, 0:19:40 if there’s somebody out there who’s got it better than us, 0:19:43 let’s just go get that and build off of it. 0:19:46 I love that, but there is a classic NIH 0:19:47 non-invented here syndrome. 0:19:50 And when you have a strong internal R&D culture, 0:19:53 it does compete with NIH a lot. 0:19:55 So the question that really begs is how you then, 0:19:57 with all these amazing acquisitions, 0:20:00 integrate them into the company and actually make sure 0:20:03 the classic Chesbro study of all these acquisitions 0:20:04 not being killed by the big company, 0:20:06 like how do you balance that piece? 0:20:08 So I think there’s two things I’d say. 0:20:10 One is, as a R&D person, 0:20:13 I have a sort of the ability to really get in there 0:20:16 and have the discussions directly with the scientists 0:20:19 and argue why we need to actually go external 0:20:23 and really evaluate the case with hopefully objective eyes. 0:20:24 The other thing we’ve decided to do, 0:20:26 at least with these very new tech, 0:20:27 three new technology platforms, 0:20:29 is leave them as independent units 0:20:31 and really let them grow up independent 0:20:35 from the big R&D and manufacturing machine. 0:20:37 Because I think exactly for that concern, 0:20:40 it makes sense to let them build up 0:20:43 and really incubate these new technologies, 0:20:44 get them all sorted out, 0:20:45 and then we can ask the question, 0:20:47 what’s the right setup down the line? 0:20:48 – Right, right, right. 0:20:49 That is what the classic studies show, 0:20:51 that that sort of is the way to do the success. 0:20:52 I did, by the way, find it very fascinating 0:20:56 ’cause I wasn’t aware that you have a scientific background. 
0:21:00 It reminds me of this idea that we have around CTO-led, 0:21:03 really having technical people at the helm. 0:21:04 So I am curious about your view, 0:21:06 I mean, besides being able to talk to the internal scientists, 0:21:09 like how has that affected your own career 0:21:11 and trajectory at Novartis so far? 0:21:13 – Given the company’s heart is innovative medicine 0:21:16 and most of my background has been in drug development 0:21:17 and really developing vaccines 0:21:20 and then developing various medicines. 0:21:22 I think it gives me a really good insight 0:21:25 into the heart of the company, our key technology. 0:21:27 If you think about our pipeline today, 0:21:29 I know every asset, every clinical trial, 0:21:31 I know all the clinical trial endpoints. 0:21:34 So that, I think, gives you a certain insight 0:21:36 into where the company is heading. 0:21:39 And also, I think enables you to hopefully guide the company 0:21:43 into the right areas in the future. 0:21:44 I think it’d be self-serving to say that it’s better 0:21:48 to have an MD, R&D person running companies. 0:21:51 But I think it does give you a different perspective 0:21:53 on an R&D industry like ours. 0:21:55 – Right, it might even be able to help, 0:21:57 to be able to empathize when you are killing a project 0:21:59 that you actually know what it’s like to feel that. 0:22:00 – Well, that’s for sure. 0:22:02 – Okay, so on that note, 0:22:03 what are some of the most interesting 0:22:06 and most innovative medicines categories? 0:22:08 – You know, when you look broadly right now, 0:22:13 I think you’re seeing a few big areas of high innovation. 0:22:16 I mean, I think in the whole world of CAR-T, 0:22:18 so cell-based therapies, 0:22:21 really what this is is the harnessing the power 0:22:23 to take cells out of the human body, 0:22:26 reprogram those cells and put them back in the human body. 0:22:29 CAR-T is the way we do that in cancer, 0:22:30 but there’s certainly the opportunity to do that 0:22:32 in many other diseases. 0:22:33 There’s companies working on trying 0:22:35 to cure sickle cell disease, 0:22:38 others working on other inherited disorders. 0:22:41 So really reprogramming cells. 0:22:43 – So if you go back 10 years ago, 0:22:46 something like a CAR-T therapy would have seemed 0:22:49 science fiction-y, or at least maybe 20 years ago. 0:22:51 If we look forward 10 to 20 years, 0:22:55 what are the modalities of the future, do you think? 0:22:57 – I think a couple of things will likely come. 0:22:59 I think xenotransplantation, I mean, 0:23:01 which has been in and out and worked on, 0:23:03 and what’s interesting is every one of these 0:23:04 comes up and down. 0:23:08 So gene therapies, cell therapies popped up in the ’90s, 0:23:10 kind of went away, popped up in the 2000s, 0:23:11 kind of went away. 0:23:14 And then the key linchpin issues were solved, 0:23:16 and then it was, you know, 0:23:18 I mean, xenotransplantation where you were able 0:23:21 to make organs for transplantation in animals 0:23:24 that enable then to have a sufficient number 0:23:26 of transplantable organs for human beings. 0:23:28 I think we’re gonna probably get there 0:23:30 in the next 10 to 20 years. – Interesting. 0:23:33 – So regenerative medicine makes a real comeback. 0:23:36 – I mean, I think, well, I think another area, yes, 0:23:38 I think on xenotransplantation being one, 0:23:39 I think the other is gonna be, 0:23:43 we are gonna start to solve problems of regenerating tissue. 
0:23:46 We already see examples where we, in our own labs, 0:23:48 where we can start to crack, 0:23:50 how can you regenerate cartilage, 0:23:53 or how can you regenerate other tissues in the body? 0:23:55 Which would, again, seem like science fiction, 0:23:58 but I think actually harnessing the pathways 0:24:00 to really get regeneration to happen, 0:24:03 which would help healthy aging is another thing 0:24:05 I think will likely come. 0:24:10 So there’s a lot of things that are still on the way. 0:24:12 – Can you imagine a moment in time 0:24:16 where aging becomes a therapeutic area for pharma companies? 0:24:19 – Yeah, we had actually an aging program, 0:24:21 a small aging program for some time 0:24:24 where we were trying to work on things like sarcopenia, 0:24:28 which is muscle wasting and similar kinds of conditions. 0:24:30 It turns out to be very, very difficult 0:24:32 because, again, multifactorial, 0:24:36 and you probably need a medicine with behavior, 0:24:39 with diet, with exercise, 0:24:41 with all kinds of things to actually help 0:24:43 healthy aging happen. 0:24:44 But like I said, I mean, we continue to focus 0:24:47 on more the pure regenerative parts. 0:24:50 I mean, you think about the whole world of joints 0:24:52 and movement has not really been addressed and cracked. 0:24:55 And so this is an area where we have exploratory programs 0:24:57 to see maybe we could find something. 0:24:59 I mean, if you could regenerate cartilage or tendons 0:25:02 or enable muscle strength incrementally, 0:25:05 you might be able to improve a healthy aging quite a bit. 0:25:06 – Fabulous, why don’t we actually shift 0:25:09 into the innovative medicines set of therapies? 0:25:13 – Another big area, hot area, is in the world of RNAs. 0:25:16 So these are really ways to deliver, 0:25:20 let’s call it genetic instructions into specific cells. 0:25:22 This has been an area that’s been worked on for many years. 0:25:23 It’s always been difficult, 0:25:26 but I think companies are now starting to crack the problem 0:25:29 of delivering RNAs into specific cells 0:25:31 in a highly effective way. 0:25:32 – Can you give me just a concrete example 0:25:34 of how that plays out with like a real disease? 0:25:36 – So there’s a couple of really nice examples now 0:25:38 with RNA interference. 0:25:42 So one that our company is working on is RNA interference 0:25:46 to impact a factor that’s really a big part of heart disease. 0:25:47 It’s called LP little A. 0:25:50 LP little A is actually thought to be one 0:25:53 of the remaining risk factors for heart disease 0:25:54 that have not been addressed. 0:25:55 You know cholesterol, 0:25:58 everybody’s of course addressed cholesterol extremely well, 0:26:01 triglycerides, LP little A is another factor, 0:26:04 but there’s never been a medicine against it. 0:26:08 And it turns out it’s really hard to drug LP little A. 0:26:10 And so the only way to really target it turns out 0:26:12 to be using RNA-based therapies. 0:26:17 These RNA-based therapies are able to block the production 0:26:22 of the gene, a translation of the gene into the protein, 0:26:25 and then actually reduce the LP little A in the blood. 0:26:26 And so this is one example 0:26:28 of how we’re trying to take this into an area 0:26:31 where otherwise you wouldn’t necessarily have a therapeutic 0:26:33 against something that could have a big impact 0:26:35 for patients with prior heart conditions. 
0:26:38 – So RNA interference is essentially a mute button 0:26:39 for a gene of interest. 0:26:40 – That’s right, yeah. 0:26:42 – I love that, and that’s a great example. 0:26:44 And by the way, LP little A sounds like the name of a rapper. 0:26:46 (all laughing) 0:26:47 And just remind me really quickly, 0:26:50 obviously I know what I learned about RNA 0:26:53 from like biology class in the context of proteins, 0:26:55 but can you give us a little bit more distinction 0:27:00 about what’s unique about RNA-based therapeutic modalities? 0:27:01 – Absolutely. 0:27:02 So when you think about the history of our industry, 0:27:06 maybe another way to describe the trend I see that’s happening 0:27:08 is we used to be about chemicals, the small molecules. 0:27:10 So for probably a hundred years, 0:27:14 most of the pharmaceutical companies had their basis 0:27:16 in the chemicals industry. 0:27:18 And so we made these small molecules 0:27:20 that happened to have various effects on the body. 0:27:21 And over a hundred years, 0:27:22 we figured out we could really target 0:27:24 what those chemicals do. 0:27:26 Around the late 1980s, 0:27:28 we realized you could actually make large molecules, 0:27:31 large proteins, and make them be therapeutic. 0:27:34 So this is antibodies and recombinant proteins. 0:27:37 And that led to a whole new renaissance in our industry. 0:27:40 And so over the next 20 years and up to today, 0:27:43 probably still the largest category 0:27:44 is so-called biologic medicines. 0:27:47 These are antibodies and proteins. 0:27:51 What I see happening now is a shift to a next set of modalities 0:27:54 that move beyond small molecules and proteins. 0:27:56 And that is now really touching other elements 0:27:57 of what happens in a cell. 0:28:02 So one is RNAs, which is really the way DNA gets translated 0:28:04 into a protein, it goes through an RNA. 0:28:07 So that’s one new modality. 0:28:10 Then there are two more modalities, and both of them really are about 0:28:12 editing DNA in different ways. 0:28:14 One is to take the cells out of the body 0:28:15 and edit the DNA of the cell 0:28:18 or enable the cell to produce something different. 0:28:21 The other is to do it inside the body, 0:28:22 what we call gene therapy. 0:28:25 So we make that distinction as cell therapy and gene therapy. 0:28:28 So cell therapy is ex vivo, gene therapy is in vivo. 0:28:29 Inside, outside. 0:28:29 Inside, outside. 0:28:34 So these are new ways of actually delivering medicines 0:28:36 or creating medicines in the human body. 0:28:38 And now you see early stage companies 0:28:40 doing even more radical things, 0:28:42 trying to turn red blood cells into therapeutics, 0:28:43 amongst other things. 0:28:46 So it’s really an expansion, 0:28:48 if you think about it, of the game board 0:28:50 of how you can address human diseases. 0:28:52 I love sort of the sweeping history you have here 0:28:54 in terms of starting with chemistry 0:28:56 and then moving to large molecules 0:28:59 and then now moving more into the cell engineered world. 0:29:03 Historically, every single sort of drug program 0:29:05 has been a very bespoke thing. 0:29:08 A very sort of, you know, it’s a ground war, right? 0:29:10 You have your target discovery 0:29:11 and then you have your validation 0:29:12 and then you have your lead 0:29:15 and then you optimize that molecule and then so on and so on.
0:29:18 And at least my sense has always been 0:29:20 that because it’s so bespoke 0:29:23 that there are some learnings that are generalizable 0:29:24 in any given disease area, 0:29:27 but every sort of program is a unique thing. 0:29:29 When you start to move to the RNA world, 0:29:32 to the cell world, to the gene world, 0:29:35 is it going to become much more of a modular world 0:29:38 where, you know, the first version of a CAR T 0:29:41 is going to be by definition less sophisticated 0:29:41 than the second version, 0:29:43 but the second version will be built off the first? 0:29:47 And you go from being in a bespoke world 0:29:50 to going much more into sort of an iterative world. 0:29:51 – Unfortunately, in our industry, 0:29:53 the answer is always, it depends. 0:29:56 I think in the specific example of CAR T, 0:29:57 I do think that’s what’s going to happen, 0:30:00 because you have such a complex manufacturing 0:30:02 that you’re going to have the first generation, 0:30:04 let’s say, of a CD19 CAR, 0:30:08 which is a CAR that targets B cell cancers. 0:30:10 And you’re going to try to then move into a next generation 0:30:12 that hopefully has more rapid manufacturing, 0:30:13 maybe higher efficacy, 0:30:15 and then even more rapid manufacturing. 0:30:17 So you’re going to get into that iteration. 0:30:19 Now, it’s not like medical device iteration. 0:30:21 I mean, this is still going to take years to do, 0:30:23 but you are going to get to that iteration. 0:30:25 I think another thing I see happening, though, 0:30:29 with these new technologies is real platforms, insofar 0:30:33 as once you have the backbone of the production 0:30:36 and even the go-to-market model, depending, 0:30:39 you can put multiple products onto the platform. 0:30:41 What we’ve done at our company 0:30:44 is build a global network of manufacturing sites 0:30:46 that can take cells out of human beings 0:30:48 and reprogram the cells and put them back in the body. 0:30:51 And we’ve built the links into hospitals 0:30:53 to enable us to do that. 0:30:55 So you have that as a capability. 0:30:57 You also have the capability to understand 0:31:01 how to use what’s called a lentivirus to reprogram a cell. 0:31:03 So we’ve got all of that. 0:31:05 Now we can apply that in very different ways, 0:31:08 in cancer and sickle cell disease and inherited disorders, 0:31:10 and use that same infrastructure 0:31:13 to actually then keep pushing the medicines through. 0:31:15 That’s very different than what we’ve had to do in the past, 0:31:19 where every single medicine had a bespoke production process, 0:31:22 had to have its own manufacturing facility. 0:31:24 Now we can actually build that platform 0:31:25 and then layer medicines on. 0:31:27 It’s no different in gene therapies. 0:31:30 When you think about AAV vectors, 0:31:33 these are ways to deliver these gene therapies into the body. 0:31:35 Once you solve the process, let’s say, 0:31:36 for one of these vectors, 0:31:38 you can apply it to multiple different diseases 0:31:41 and not have to recreate everything again. 0:31:45 That’s a shift I see in how our industry operates. 0:31:46 – You know, I find that fascinating 0:31:48 ’cause it actually sounds a lot like what we talk about 0:31:50 around this theme of engineering biology 0:31:52 and when you bring engineering principles 0:31:53 and mindsets to biology.
0:31:56 – You know, you’ve just mentioned multiple places 0:31:59 where there’s sort of repeatability 0:32:02 and sort of different aspects of engineering 0:32:04 have already come in. 0:32:05 How is this trend gonna continue? 0:32:06 Where are there gonna be the new places 0:32:08 where engineering can play a role? 0:32:10 – I think the easiest place is gonna be 0:32:13 in continuing to innovate on the processes 0:32:16 by which we really manipulate cells and genes 0:32:18 and really get to the next wave of manufacturing. 0:32:23 ‘Cause I would say we’re really only learning to crawl 0:32:25 with respect to most of these technologies, 0:32:28 and how we produce them is pretty rudimentary. 0:32:30 And so I think there’s gonna be an engineering problem 0:32:32 of how do you handle cells 0:32:34 and how do you handle the vectors 0:32:37 and make this a much, much more efficient process. 0:32:38 And there’s a lot of, I think, 0:32:41 very smart engineering firms now working on that space. 0:32:43 So I think that’s one place. 0:32:44 The area I’m quite interested in 0:32:46 is how we can get much smarter 0:32:49 at actually engineering the medicines themselves. 0:32:52 I mean, we spend a lot of work investing in AI 0:32:54 and 3D visualizations, say 0:32:56 in the so-called world of chemical biology, 0:32:59 or if you even think about using quantum chemistry, 0:33:01 to really understand how to define 0:33:02 your monoclonal antibody. 0:33:06 How can we do a lot more engineering of medicines up front? 0:33:08 Because we really come from a heritage 0:33:10 where everything was just trial and error. 0:33:12 We just tried many, many, many molecules 0:33:15 until we found one that worked and we just took it forward. 0:33:17 How can we become much smarter about that? 0:33:18 And so at our research labs, 0:33:21 we’re spending a lot of time thinking about 0:33:23 how do we engineer the medicine up front 0:33:25 to do what we want it to do. 0:33:27 And that’s a whole new world, I think. 0:33:29 – Yeah, also I think there’s gonna 0:33:32 presumably have to be a culture that shifts along with this. 0:33:35 I read Alan Greenspan’s book, “The History of Capitalism.” 0:33:38 And he talked about how actually in Europe, 0:33:40 like furniture was bespoke 0:33:41 and you’d make this beautiful chair 0:33:42 and it’s this handicraft. 0:33:46 And they actually hated the idea of factories and engineering 0:33:48 because it takes the art out of it. 0:33:49 – It’s not artisanal anymore. 0:33:50 – Yeah, it’s not artisanal anymore. 0:33:53 But I think once you can have this ability to shift 0:33:56 towards that mindset where you have reproducibility 0:33:59 and almost like a factory process that can be built, 0:34:00 once you can have that shift, 0:34:03 as long as everyone is ready to make that shift, 0:34:05 then things can really start rolling. 0:34:07 But there has to be a major shift. 0:34:08 In terms of like in America, 0:34:11 people didn’t really care about that artisanal part as much. 0:34:14 And we got factories and that was a huge part of the early, 0:34:16 like late 1800s. 0:34:18 And I’m curious, you spoke so much 0:34:20 about how Novartis is changing. 0:34:23 And so presumably there’s an internal cultural change as well. 0:34:25 – Yeah, we’re making, 0:34:27 trying to make a quantum change, I think, in our culture. 0:34:30 I mean, what we have as context is, 0:34:32 I believe we’ve moved to become truly 0:34:33 just a knowledge organization.
0:34:36 I mean, so many of the rudimentary tasks 0:34:39 have been either automated or sent to third parties. 0:34:42 So we have a whole organization of knowledge workers, 0:34:44 50% of them are millennials. 0:34:46 And they want to work in a very different environment 0:34:49 than let’s say an industrial company 20 years ago. 0:34:53 And so we call our new culture inspired, curious and unbossed. 0:34:56 And we want our people to feel inspired by the work, 0:34:58 really curious about the outside world, 0:35:00 and not live in a bossed company, 0:35:03 but really live in an unbossed, much more empowered company. 0:35:05 And when we talk about areas like digital 0:35:08 and data science, cell and gene therapies, 0:35:11 it’s so critical because these are such complex areas, 0:35:13 you need your people to figure out the answers. 0:35:15 And we can’t be in a world where everybody’s waiting 0:35:17 for management to tell everybody what to do, 0:35:19 because none of us know what to do either. 0:35:22 ‘Cause these are whole new spaces for us. 0:35:23 So that’s a big shift. 0:35:25 The other element of that journey 0:35:27 is to get a lot more comfortable with rapid failure. 0:35:30 I mean, we have to be much more rapid cycle. 0:35:32 We can’t expect that we’re going to sort it all out 0:35:33 and it’s all going to work perfectly, 0:35:35 because the first thing we’ve learned already 0:35:38 in cell and gene therapy is nothing works 0:35:40 the way you expected it to work, right? 0:35:43 And so you built a platform for rapid iteration. 0:35:44 That’s the idea. 0:35:45 What I love about that is it reminds me 0:35:47 of computing software companies 0:35:50 and the shift from waterfall to like more DevOps, Agile. 0:35:51 Yeah, the same principle. 0:35:52 Even microservices architectures enable that. 0:35:54 We’re a little late to the party, but yeah, that’s the idea. 0:35:57 It’s the same kind of principle, that’s fascinating. 0:35:59 So we haven’t talked about the big elephant 0:36:01 in the room, in a good way, of AI and ML, 0:36:04 you know, artificial intelligence and machine learning. 0:36:06 Let’s talk about AI and ML and data. 0:36:09 I mean, it’s not a question of if or when, it’s how. 0:36:10 The question I have, 0:36:13 because quite frankly, it’s a very hyped topic too, 0:36:15 and people sort of promise all kinds of things 0:36:19 when they talk about applying AI and ML to medicine. 0:36:21 I’m very curious about your take as the head of Novartis, 0:36:26 like where do you see the strongest applications of AI and ML? 0:36:27 Well, I have to first say, I completely agree 0:36:29 about the hype cycle here. 0:36:31 I mean, as we’ve gotten quite scaled 0:36:33 in working on digital health and data science, 0:36:37 we’ve learned that there’s a lot of talk 0:36:41 and very little in terms of actual delivery of impact. 0:36:42 But we’ve learned a lot. 0:36:44 I think the first thing we’ve learned 0:36:47 is the importance of having outstanding data 0:36:49 to actually base your ML on. 0:36:52 And in our own hands, in our own shop, 0:36:55 we’ve been working on a few big, big projects. 0:36:56 And we’ve had to spend most of the time 0:36:58 just cleaning the data sets 0:36:59 before you can even run the algorithm. 0:37:02 That’s just taken us years just to clean the data sets. 0:37:04 And I think people underestimate 0:37:06 how little clean data there is out there 0:37:08 and how hard it is to clean and link. 0:37:11 It was never intended to have this type of analysis done, right?
0:37:14 It was intended for a given project and that was it. 0:37:16 Yeah, that’s been so much of it. 0:37:20 And then the other thing is, are there patterns 0:37:22 that can be really learned from the data? 0:37:25 I mean, do you have a good training data set 0:37:27 to actually train the algorithms? 0:37:28 So there’s a few places I think 0:37:30 we’ve seen a lot of traction. 0:37:32 One, I think the vision or image problem 0:37:34 has been very well solved. 0:37:37 So right now we’re in the process of digitizing 0:37:39 all of our pathology images 0:37:41 and having AI just be able to scan 0:37:44 all of these pathology images at Novartis. 0:37:46 And we have millions of, of course, 0:37:48 records of biopsies and tissue. 0:37:52 And so that’s a huge project we have called Path AI 0:37:54 that we really work on, as a single example. 0:37:56 I mean, that’s like a gold mine. 0:37:57 It should be. 0:37:57 I mean, it should be. 0:38:00 And if you then apply that as well to the vast stores 0:38:02 of imaging data we have from our clinical trials, 0:38:05 we have two million patients in clinical trials, 0:38:07 at least in the last 10 years. 0:38:11 And we have MRI, CT scans, retinal scans, heart scans, 0:38:13 and all of that as well. 0:38:16 I think ML can have a significant potential 0:38:19 to really find, hopefully, new insights. 0:38:20 So I think the vision image problem 0:38:23 has been one we’ve been able to really take on. 0:38:25 Another area is in our operations. 0:38:27 So we’ve built an operational command center. 0:38:30 It took us, as I said, two and a half years to build it. 0:38:31 We call it SENSE. 0:38:33 And what it enables us to do is have a team sitting 0:38:35 centrally in our headquarters 0:38:37 look at all of our clinical trials in the world. 0:38:40 And AI is predicting which trials are gonna enroll on time 0:38:42 or not enroll on time, 0:38:44 predicting which ones are gonna have quality issues 0:38:45 or not have quality issues. 0:38:47 And the reason we could do that is we had 10 years 0:38:49 of history to train the algorithms. 0:38:53 And we run about 400 to 500 clinical trials a year. 0:38:56 So we have a lot of data that we could use 0:38:57 to train the algorithms. 0:38:59 – Does that mean you’ve had to dig all the way back 0:39:03 into automating sort of real-time information 0:39:05 on clinical trials? 0:39:07 So the data entry on a clinical trial 0:39:08 as a patient is enrolling, 0:39:10 has that all been automated as well? 0:39:11 ‘Cause that used to be done on pads. 0:39:12 – It’s a great question. 0:39:14 I mean, really what we focus on is the operational data. 0:39:16 So one level up from the patient. 0:39:17 So is the trial enrolling on time? 0:39:19 Are the sites open? 0:39:21 All of that, all of those elements. 0:39:25 On the operational side, it was really easier to do this 0:39:27 than trying to get all the way down 0:39:29 to patient level data. 0:39:30 In the other area, interestingly, 0:39:32 in the financial area as well, 0:39:34 we find that AI does a great job 0:39:36 predicting our free cash flow, 0:39:39 predicting a lot of our sales for key products. 0:39:42 And it does better than our internal people, 0:39:43 because it doesn’t have the biases, 0:39:45 and the data is very clean, 0:39:47 and we’ve got very long-term data. 0:39:48 So that’s been all positive. 0:39:50 But there’ve been other areas where I think 0:39:52 it’s just simply not measured up.
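The SENSE-style setup described above, using roughly ten years of operational history to flag which trials are likely to enroll late, is at heart a standard supervised-learning problem. Below is a minimal, purely illustrative sketch of that idea; the column names (therapeutic_area, n_sites, enrolled_late, and so on), the toy data, and the model choice are all assumptions made for the example, not Novartis's actual system or schema.

```python
# A minimal, hypothetical sketch of the kind of model described above:
# train on historical trial operations data, then score ongoing trials.
# Column names and values are invented for illustration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Stand-in for ~10 years of operational history (hundreds of trials per year).
history = pd.DataFrame({
    "therapeutic_area": ["oncology", "cardiology", "neuroscience",
                         "oncology", "immunology", "cardiology"],
    "n_sites":          [40, 12, 25, 60, 18, 9],
    "n_countries":      [8, 3, 5, 12, 4, 2],
    "planned_patients": [300, 120, 200, 450, 150, 80],
    "phase":            [3, 2, 2, 3, 1, 2],
    "enrolled_late":    [1, 0, 1, 1, 0, 0],   # label taken from past outcomes
})
features = ["therapeutic_area", "n_sites", "n_countries", "planned_patients", "phase"]

model = Pipeline([
    # One-hot encode the categorical column, pass the numeric ones through.
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["therapeutic_area"])],
        remainder="passthrough",
    )),
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(history[features], history["enrolled_late"])

# Score an ongoing trial: an estimated probability that it will enroll late.
ongoing = pd.DataFrame([{"therapeutic_area": "oncology", "n_sites": 35,
                         "n_countries": 7, "planned_patients": 280, "phase": 3}])
print("P(enrolls late) =", model.predict_proba(ongoing)[0, 1])
```

In practice the hard part is exactly what is described earlier in the conversation: assembling enough clean, linked operational history for labels and features like these to be trustworthy.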
0:39:54 I mean, I think that the holy grail 0:39:57 of kind of having unstructured machine learning 0:39:59 go into big clinical data lakes 0:40:01 and then suddenly find new insights, 0:40:04 we’ve not been able to crack, mostly because of the data 0:40:05 and linking it up. 0:40:09 And I mean, we are spending a lot of our energy 0:40:11 just trying to get all of our data harmonized 0:40:16 so that some algorithm could maybe find anything of use. 0:40:18 – There’s an area that’s desperately in need, 0:40:19 I think, of innovation, 0:40:22 which is how we think about clinical trials. 0:40:24 Recognizing we have to operate 0:40:26 within the system that we live in. 0:40:30 But if you could design testing safety 0:40:32 and efficacy in humans on a blank sheet of paper, 0:40:33 what would look different 0:40:36 from a clinical trial perspective versus where we are today 0:40:37 and the way we do it now? 0:40:40 – I mean, the ideal world, if we could ever, 0:40:42 if we could get there, would be, 0:40:44 we would have integrated health records 0:40:47 where we could easily insert the fields 0:40:50 that we needed for clinical trials. 0:40:52 And then we could use something like a blockchain 0:40:55 or some other distributed architecture 0:40:56 that enabled patients to consent for us 0:41:01 then to access the data and then run the trials through that. 0:41:04 And that would eliminate so much of the effort 0:41:08 of creating a second database versus the EHR, 0:41:11 monitoring that database, QA’ing that database, 0:41:13 locking that database. 0:41:15 You could get the data on an ongoing basis. 0:41:18 I mean, we would radically simplify this. 0:41:22 I believe that’s a huge, huge opportunity. 0:41:25 I think we have a long way to go because EHRs 0:41:26 are not where they need to be. 0:41:28 We’re probably not where we need to be to get there. 0:41:31 But I see opportunities in baby steps 0:41:32 to actually get towards that. 0:41:35 And I think we’re experimenting with that. 0:41:38 I think other companies are as well. 0:41:39 The other thing people talk about, 0:41:41 but I mean, I’ll take a skeptical voice around it, 0:41:44 is the ability to use real world evidence 0:41:46 to try to get at these things. 0:41:48 But as somebody who’s worked in clinical trials 0:41:52 for most of their time in the industry, 0:41:57 I do believe that the power of randomization, 0:42:00 the power of blinding is what enables us 0:42:04 to control for all of the things we don’t know 0:42:07 about the complexity of human life and human biology, 0:42:09 and to think that we’re gonna take that away 0:42:11 and then be able to really determine 0:42:13 the efficacy of the medicine 0:42:16 puts a lot on the statistics that I don’t think we have. 0:42:20 And so I’m more of a real world evidence, 0:42:21 I don’t know if it’s a skeptic, 0:42:25 but realist, who sort of says after we have 0:42:28 randomized placebo-controlled data that really tells us 0:42:30 that something has the effect we think it does, 0:42:33 then to explore more effects or explore more uses 0:42:35 through real world evidence makes a lot of sense. 0:42:38 But I don’t see this as a panacea 0:42:41 that suddenly will make the world much easier. 0:42:42 I mean, that’s my expectation as well, 0:42:45 is that you’ll see it first come out like as a phase four, 0:42:48 something where you’re using real world evidence, 0:42:50 which is right now used for reimbursement anyway and so on.
0:42:53 And, but then maybe see how far it can go back, 0:42:55 but it’s not gonna replace it. 0:42:57 You guys don’t think a secular, I mean, not to sound naive, 0:42:59 but you don’t think a secular shift, 0:43:01 like sensor-ification of everything, 0:43:04 and everyone really truly has continuous wearables, 0:43:07 like everyone’s wearing a CGM by default. 0:43:09 I hear you on the statistical side, 0:43:10 and there’s a lot of other various variables 0:43:13 and things introduced into that equation, 0:43:17 but it is a huge, it’s a very deep, nuanced, 0:43:19 patient level set of data, 0:43:20 that seems like we can’t ignore the power of that. 0:43:23 Like where do you fall on that? 0:43:24 – When I think about, first of all, 0:43:26 I would say just in general, sensors is another place 0:43:29 where there’s been a lot of hype above expectations. 0:43:31 I mean, we’ve been really trying to explore the use 0:43:33 of sensors in clinical trials now for, 0:43:35 in my own experience, at least six years. 0:43:37 And it’s been tough to get sensors 0:43:40 that really meet clinical trial grade outcomes, 0:43:44 I mean, to really show that they can be validated 0:43:47 versus our current clinical endpoints. 0:43:50 Now, if it’s consumer products, fine. 0:43:52 I mean, they’re the perfect, perfect people can feel. 0:43:53 – But you’re talking medical grade. 0:43:55 – But here we need to really be able to replace 0:43:57 what are pretty rigorous tests. 0:43:59 And we haven’t seen that yet. 0:44:02 Now, we’re exploring, I think, use of many different sensors. 0:44:05 The real power of it is a continuous variable 0:44:07 to actually see how a patient’s doing 0:44:09 in between the study visits. 0:44:11 And so I think that will help a lot, 0:44:12 but I still think in the end, 0:44:15 you’re gonna need to randomize and blind. 0:44:18 I mean, I think if you don’t randomize, 0:44:20 I think it’s really hard to figure out 0:44:23 what is going on in a complex system. 0:44:24 – I agree in the short term. 0:44:28 I think longer term, my gut feeling is that, statistically, 0:44:30 this is a solvable problem, 0:44:33 because there are even issues with clinical trial design 0:44:36 that one has to overcome today, 0:44:39 because randomization isn’t just picking people 0:44:41 literally randomly, you know, necessarily. 0:44:41 – True. 0:44:42 – It’s a sample, not a population. 0:44:43 – And there’s been a lot of work 0:44:46 on causality theory and statistics around that. 0:44:49 So there are advances, but I think it’s not there now. 0:44:50 – Yes, I agree. 0:44:51 Small n, not capital N. 0:44:53 More to say there, that was really interesting. 0:44:55 – What’s the role of bringing innovation in 0:44:59 from the outside through partnerships and M&A in ML? 0:45:03 – Yeah, I think one of the things we’re working through 0:45:05 is how do we get the talent, you know? 0:45:08 As we really start to organize the data, 0:45:09 and we’ve brought in some great talent 0:45:11 to really help us work on data architecture 0:45:14 and come up with a whole data landscape for the company. 0:45:16 So that we’re always now thinking about 0:45:18 how do we treat data as an asset? 0:45:20 That’s one of the things we keep harping on, 0:45:22 is data as an asset, whatever data we collect 0:45:25 from the external world has to be organized 0:45:27 in a clear data architecture.
0:45:30 But then to take the next step to get the data scientists 0:45:32 to really find the insights, 0:45:35 we’re not the traditional place where a data scientist 0:45:38 coming out of Stanford is looking for 0:45:39 where they want to come to. 0:45:41 So we’re working through partnerships with universities, 0:45:44 potential partnerships with startups. 0:45:45 Actually here in the Bay Area, 0:45:46 we have a center called the Biome, 0:45:48 where we’re working with different startups. 0:45:51 And so these are the things we’re trying to do to engage 0:45:54 and hopefully create an ecosystem that helps us do this 0:45:56 and not just do it ourselves. 0:45:58 I don’t think we’ll be able to attract the scale 0:45:59 that you would need. 0:46:00 – Yeah, there’s a Reese’s Peanut Butter Cup issue, 0:46:03 because startups sometimes have some innovation 0:46:06 on the data science, but not the data. 0:46:07 And so bringing the two together, 0:46:09 I think, seems like a very natural combination. 0:46:11 – What is Reese’s Peanut Butter Cup? 0:46:12 – Peanut butter and chocolate, 0:46:14 like you’ve got the peanut butter and the chocolate. 0:46:16 – Oh my God, I’m like, I don’t remember those commercials. 0:46:18 – I don’t remember them. 0:46:19 I watched a lot of TV shows growing up, 0:46:20 but I don’t remember that. 0:46:23 I find it fascinating ’cause a lot of our bio entrepreneurs, 0:46:25 the number one thing that they tell me 0:46:27 is that drawing data scientists to bio companies 0:46:30 is one of the hardest challenges they have to face. 0:46:32 And so you’re saying with the Biome and other things 0:46:33 that you’re doing, that you’re essentially saying 0:46:35 you have to kind of create the pipeline, 0:46:36 not just source it. 0:46:38 – That’s right, that’s right. 0:46:40 And really, to your earlier point, 0:46:42 I mean, the opportunity is to say, 0:46:43 look, come and work with us 0:46:45 and we’ll let you work with our data 0:46:47 and you can learn and we’ll learn. 0:46:49 And maybe then there’s a partnership that’s created, 0:46:50 or maybe you want to come work for us, 0:46:52 which would also be great. 0:46:53 But that’s how we’re approaching it. 0:46:55 – Well, and there’s actually an interesting shift 0:46:57 that can happen in academia 0:46:58 with my group at Stanford. 0:47:00 Many people actually, during their PhD, 0:47:03 have gone to work in pharma, and it’s hard to, 0:47:05 it’s possible to pull the data out of pharma, 0:47:07 but it’s actually easier to put the grad student 0:47:08 into pharma. – Oh, nice. 0:47:11 – And so the grad student comes with the code, 0:47:14 runs it, you know, internally through the firewall of pharma, 0:47:14 and we see how it does. 0:47:16 And then you can still publish papers 0:47:19 where maybe you have to obscure what the target is 0:47:19 or something like that, 0:47:21 but you can at least see how things are going. 0:47:24 And there’s nothing like sort of trying in the real world. 0:47:26 – Yeah, yeah, makes total sense. 0:47:28 So on this question of bringing in talent, 0:47:29 as you guys operate globally, 0:47:32 obviously you’re in 150 countries, 0:47:34 your headquarters is in Switzerland, 0:47:37 NIBR is in the Boston, Cambridge area, 0:47:38 you have a presence out here in Silicon Valley. 0:47:41 So how do you guys think about innovation hubs? 0:47:45 Very simplistically, is all of the machine learning, 0:47:48 artificial intelligence talent going to be based out here?
0:47:51 What, you know, how do you sort of distribute teams 0:47:52 across the world? 0:47:54 – So it’s interesting. 0:47:55 You know, when you look at research, 0:47:56 we have three main hubs, 0:47:59 our three main hubs are in Cambridge, 0:48:01 in Basel, Switzerland, and in Shanghai in China. 0:48:04 Those are our three main research hubs. 0:48:06 In terms of development centers for product development, 0:48:09 you would add on to that list Hyderabad, India, 0:48:13 as kind of the main one, and East Hanover, New Jersey. 0:48:15 But when it comes to data science and digital, 0:48:16 what we’ve actually decided to do 0:48:18 is take a much more distributed approach. 0:48:20 So we’re building up these Biome centers 0:48:23 in San Francisco and in London, 0:48:26 other locations in the Middle East, perhaps in China, 0:48:27 just trying to say, 0:48:30 we’re not gonna constrain ourselves 0:48:31 with our current locations. 0:48:33 We’re gonna just try to source talent wherever it is, 0:48:35 particularly because talent in these areas 0:48:38 doesn’t necessarily have to be housed, 0:48:40 you know, next to the other functions. 0:48:42 We’re really asking these people to explore our data 0:48:45 and find big, big new insights. 0:48:48 So that’s the approach we’re taking right now. 0:48:49 It’s really saying, you know, 0:48:50 let’s go where the talent is 0:48:54 as opposed to forcing everyone to come to us. 0:48:57 So we’ll see, that’s the experiment we’re undertaking. 0:48:59 – How do you see the future of that sort of working out? 0:49:00 Like, do you see that, you know, 0:49:02 Boston, Silicon Valley, Basel, 0:49:04 like these places will specialize? 0:49:06 Will they distribute? 0:49:08 – Yeah, we have lots of debates about, if we were to build 0:49:11 a scaled hub in digital or in data science health, 0:49:12 where would we go? 0:49:14 I think one of the challenges in the Bay Area 0:49:17 is again, just the competition for talent is so intense, 0:49:19 especially in the tech sector. 0:49:22 So we’re in the business of funding early stage companies, 0:49:24 supporting entrepreneurs. 0:49:26 If I’m an entrepreneur, 0:49:28 I obviously see a ton of benefit 0:49:30 in partnering with Novartis. 0:49:33 Access to data that doesn’t exist elsewhere, 0:49:35 obviously validation of my approach 0:49:37 and my technology, et cetera. 0:49:38 But if I’m an entrepreneur, 0:49:43 I’m also scared to approach a large company like Novartis, 0:49:45 ’cause I’d worry about, you know, 0:49:47 basically you’re an elephant and I’m a mouse, 0:49:48 and if I want to dance, 0:49:50 I have to hope you’re a very graceful elephant. 0:49:52 (laughing) 0:49:53 Otherwise, you’re gonna crush me. 0:49:56 What advice would you give to entrepreneurs 0:49:58 about approaching biopharma, 0:50:01 a large biopharma, in the spirit of collaboration? 0:50:03 – Yeah, I think in data and digital, 0:50:06 what we’ve tried to do is make us feel a lot smaller, 0:50:09 ‘cause I think we recognize that we are a huge beast. 0:50:11 And so with things like the Biome, 0:50:14 we work with many other entities to try to say, 0:50:16 how can we make ourselves feel smaller, 0:50:18 work in smaller units? 0:50:21 We created our own digital data organization 0:50:25 so that entrepreneurs would have an input into Novartis 0:50:26 where it’s people like them. 0:50:27 I mean, the people on that team 0:50:30 all come from the tech sector. 0:50:32 They’re working in a much smaller, agile way.
0:50:34 They do sprints and scrums 0:50:36 and they work in all the ways 0:50:39 that these people are used to working. 0:50:42 And so I would say really engaging through some place 0:50:46 in a large company that I think has a natural affiliation 0:50:48 for the entrepreneur makes a lot of sense. 0:50:49 I think it is harder 0:50:52 on the kind of traditional biomedical side, right? 0:50:55 I mean, we have, I mean, if you just think of it, 0:50:57 we have 17,000 R&D people 0:51:01 and spend $9 billion plus a year in R&D. 0:51:02 So if you’re a small entrepreneur 0:51:04 who wants to start working with us, 0:51:07 it’s easy to get lost in the fray. 0:51:08 We’re trying to work on that. 0:51:10 I think most of the companies in our industry 0:51:14 try to have external offices that try to engage. 0:51:16 I mean, we have an external scholars program 0:51:19 where we really try to enable scientists 0:51:23 to use our facilities, interact with our scientists. 0:51:24 So we’re trying to experiment, 0:51:26 but I can’t say that we’ve completely figured that out 0:51:27 on the biomedical side. 0:51:30 I’m much more optimistic on the data and digital science side, 0:51:33 mostly because we just brought people in from that world 0:51:35 and they just think differently. 0:51:36 – There was something I wanted to ask you earlier, 0:51:38 which was about measurement. 0:51:41 ‘Cause when you talked about the portfolio approach, 0:51:44 I wanted to know how you think about actually measuring 0:51:47 the way you make those investments in a portfolio. 0:51:49 And the reason I ask is because there’s all these mindsets, 0:51:52 like Pasteur’s quadrant, like here’s a place 0:51:54 where we’re gonna put more emphasis on basic research 0:51:55 and we’re gonna put more emphasis 0:51:57 on something more practical. 0:52:00 Or there’s another approach at Xerox PARC. 0:52:03 They used a modified real options analysis 0:52:05 as a way to figure out how to do like short-term, 0:52:07 long-term, mid-term type investments. 0:52:09 Do you have a way of sort of closing the feedback loop 0:52:11 for how you measure the success 0:52:15 of how you’re allocating and deploying investments in R&D? 0:52:16 – Yeah, I mean, we have financial measures. 0:52:20 So we look at return on capital employed, NPV, peak sales. 0:52:23 So all the traditional financial measures. 0:52:27 We also look at really the scientific innovativeness, 0:52:28 for lack of a better word. 0:52:30 Is this really something that’s changing the game 0:52:31 from a scientific standpoint? 0:52:33 That’s a little bit more of a subjective measure, 0:52:35 but we try to ask teams, 0:52:37 is this really moving the needle 0:52:39 from the standard of care scientifically? 0:52:42 And we actually score that based on six different parameters. 0:52:43 – Oh, interesting. 0:52:44 Are you allowed to share those parameters? 0:52:46 – I don’t know them off the top of my head. 0:52:50 But we really try to score the medicines to say, 0:52:51 is this really transformative? 0:52:53 So you have a financial score, 0:52:55 you have a transformational score. 0:52:58 And then another kind of subjective element 0:52:59 is does this strategically fit? 0:53:02 So is it in one of our core therapeutic areas? 0:53:04 So if somebody comes with a great breakthrough, 0:53:08 which doesn’t happen that often, 0:53:09 in an area that we’re not in, 0:53:11 that’s the toughest one, 0:53:12 because it could be a breakthrough, 0:53:14 but we’re not in this space. 0:53:15 And what do we do now, right?
0:53:17 And do we really want to build this up, 0:53:19 or do we want to just out-license it 0:53:22 to a fund or do something else? 0:53:24 Those are tough discussions. 0:53:25 But we try to be disciplined, 0:53:27 because it’s, again, about the patience 0:53:30 and being really sure you build depth in your key areas. 0:53:31 Because if you take another program on, 0:53:33 that means that there’s another program you have to stop. 0:53:34 I mean, it’s a zero sum game for us. 0:53:36 – It’s an opportunity cost. 0:53:36 – One thing that’s funny, 0:53:39 just listening to you talk about what Sonal brought up, 0:53:43 this question of not-invented-here syndrome. 0:53:46 And when you contrast that with managing, 0:53:49 having an organization that is naturally curious 0:53:51 and unbossed, as you said. 0:53:52 – Inspired. 0:53:53 – Inspired. 0:53:56 But managing that not-invented-here syndrome 0:53:59 versus maintaining sort of the skepticism 0:54:01 that things might be in a hype cycle 0:54:03 and not sort of chasing hype. 0:54:05 It’s a very fine balance, right? 0:54:08 It’s kind of like the not invented here. 0:54:10 The other side of that coin is not invented yet. 0:54:12 And you got to figure out like where you are in that. 0:54:14 And I think that is one of the most difficult things 0:54:16 that I would imagine that an innovative company 0:54:18 at the scale at which Novartis operates 0:54:21 has to always find that balance between. 0:54:22 – Absolutely. 0:54:24 I mean, there is a balancing act 0:54:26 between the different forces. 0:54:28 And I find a lot of it comes down 0:54:32 to just encouraging people just to have open, frank debate. 0:54:32 – Yes. 0:54:35 – And be comfortable with task conflict 0:54:36 without personal conflict. 0:54:38 That’s what I keep telling our team. 0:54:41 We have to be incredibly curious about one another, 0:54:42 what one another thinks. 0:54:45 I think that’s just all about trying to get the best ideas, 0:54:46 and we’re just trying to debate. 0:54:48 But it’s never personal. 0:54:49 And it’s never, ’cause I think when, 0:54:50 particularly in the world of science, 0:54:52 it often becomes personal. 0:54:55 It becomes, this is about me and my science 0:54:57 versus you not believing in my science. 0:55:00 As opposed to saying, we need to just find a great medicine 0:55:01 or we need to just solve this problem. 0:55:04 That’s a journey I think we’re taking the organization on. 0:55:07 But I think that’s going to be what’s really critical 0:55:10 is having that radical transparency in the open debate. 0:55:13 – I find it fascinating because it alludes to the concepts 0:55:14 around skin in the game, 0:55:16 because you want people to have skin in the game. 0:55:18 But at the same time, they need to have just enough out 0:55:19 that they can see things a little more clearly, 0:55:23 where you’re not like only attacking their sacred cows. 0:55:25 – Skin in the game, but not vital organs. 0:55:25 – Yes, exactly. 0:55:27 That’s a great way of putting it. 0:55:27 I love that. 0:55:29 – How long have you been in the CEO chair now? 0:55:30 – One year. 0:55:33 – What’s been the most surprising thing to you now as the CEO, 0:55:34 having come up through the R and D side 0:55:37 of the organization, 0:55:40 given that R and D is such a big part of what the company does? 0:55:44 – I’m just amazed by how vast our company is.
0:55:46 I mean, I think even though I’ve been at the company 0:55:50 since 2005, now actually overseeing a company 0:55:53 that’s 120,000 people in 150 countries, 0:55:57 and you go anywhere, we are just a vast, vast company. 0:56:00 So that’s one thing that’s really, I think, surprised me, 0:56:02 just having to know, when you think about 0:56:04 making a transformation happen 0:56:07 and you try to make that happen in such a large enterprise, 0:56:11 that certainly really, I mean, that really hits you. 0:56:15 I think the other thing about this job is crisis management, 0:56:17 which you’re just not exposed to. 0:56:19 I mean, this job is a lot about managing crises, 0:56:22 and that’s been a big learning curve for me, 0:56:25 because in the world of R and D, we had clinical trials 0:56:26 that last two or three years 0:56:28 and everything’s sort of predictable. 0:56:32 I mean, we sort of know what decisions we need to make. 0:56:35 A lot of documentation that you can lean on. 0:56:38 Now you’re in the world of the ambiguous, the uncertain, 0:56:41 and then things hit you completely from the blind side, 0:56:43 and then you gotta keep moving ahead. 0:56:45 – If you were to write a letter to grad students 0:56:48 or just people kind of entering the space, 0:56:50 like what kind of skills would you encourage them to have? 0:56:54 Like if you could have added things 20 years ago, 0:56:55 what would you tell them to do? 0:56:58 – I’d say focus a lot on how you lead people. 0:57:01 I think there’s so much of a focus on technical expertise 0:57:02 and thinking that that’s gonna get you there. 0:57:05 It matters, of course, competence matters tremendously, 0:57:07 but what really makes the difference 0:57:09 is how you lead people, how you lead yourself. 0:57:13 And I think investing more in that would pay off a lot. 0:57:15 I think the other thing I’d say is don’t underestimate 0:57:18 the importance of getting multidisciplinary exposure. 0:57:21 I mean, I think most people get worried 0:57:22 when they have to make those jumps. 0:57:24 I’ve had a career at Novartis 0:57:27 where I’ve worked in commercial areas and marketing areas, 0:57:30 though most of my time was in R and D, and worked across 0:57:32 four different areas of the business. 0:57:35 And so with that diversity of experiences, 0:57:38 it enables you, I think, to take the right decisions. 0:57:40 – There was one other point I wanted to raise. 0:57:42 I think what’s often lost on some people, 0:57:44 ’cause you mentioned the miracles, right? 0:57:46 And how incredible it is 0:57:48 that we find any human medicines at all, 0:57:49 because if you think about it, 0:57:52 every human being is probably 40 trillion cells 0:57:54 that are working together. 0:57:55 – It’s amazing anything even works. 0:57:56 – It’s amazing. 0:57:58 We understand a fraction of the proteins, 0:58:02 what they do, 1,200 druggable proteins, 0:58:03 and there’s only a fraction of those 0:58:05 that we can actually drug. 0:58:08 We don’t know what most of RNA does, non-coding RNA. 0:58:12 We don’t know most of what the genome’s even talking about. 0:58:15 And if you look at it, since the creation of the FDA, 0:58:20 there’s only been about 1,500 new molecular entities ever found. 0:58:21 – Wow. 0:58:23 – And most of those are actually overlapping 0:58:25 in similar therapeutic areas.
0:58:26 So actually, if you were to account for, 0:58:27 I haven’t done the analysis, 0:58:28 but if you account for double counts, 0:58:32 my guess is it’s in the hundreds of medicines 0:58:33 that we’ve actually found. 0:58:36 – And by the way, what’s the predominant therapeutic area? 0:58:37 – Probably, I would guess, 0:58:39 hypertension, cardiovascular disease, 0:58:41 but I’ve not looked carefully. 0:58:45 But it’s worth reflecting on how hard it is to do what we do. 0:58:47 And when we find one, I tell our people, 0:58:49 you have to think every medicine we find 0:58:51 is a miracle that fits in the palm of your hand. 0:58:56 We’ve unlocked, in a sense, a billion years of evolution 0:58:58 of the eukaryotic cell and human biology. 0:59:02 And somehow we found something that was able to move the needle 0:59:04 in this incredibly complex system. 0:59:05 I think that’s easy to forget 0:59:10 when we just kind of overly simplify what we do. 0:59:12 – That’s a great note to end on. 0:59:14 Vas, thank you for joining the A6 and Zee podcast. 0:59:15 – Thank you. 0:59:16 – Thanks so much. – Thanks so much.
How does the world’s largest producer of medicines in terms of volume balance the science and the business of innovation? How does an enterprise at such vast scale make decisions about what to build vs. buy, especially given the fast pace of science today? How does it balance attitudes between “not invented here” and “not invented yet”?
Vas Narasimhan, CEO of Novartis, sat down with a16z bio general partners Jorge Conde and Vijay Pande, and editor in chief Sonal Chokshi, during the JP Morgan Healthcare Conference around this time last year, to discuss the latest trends in therapeutics; go to market and why both big companies and bio startups need to get market value signals (not just approvals!) from payers earlier in the process; clinical trials, talent, leadership, and more in this rerun of the a16z Podcast.
0:00:02 Hi everyone! Happy New Year! I’m Sonal. 0:00:06 As you may know, we launched a new short-form news show last year, 16 Minutes, 0:00:09 where we cover recent news, the A6NZ podcast way: 0:00:12 what’s hype, what’s real, and why they matter, from our vantage point in tech. 0:00:15 And that show has continued in a separate feed for quite some time now. 0:00:17 You can subscribe to it, if you haven’t already, 0:00:21 in your podcast app by searching for 16 Minutes A6NZ. 0:00:25 But I’m also sharing the latest episode here in this show feed, 0:00:27 since we sometimes cover not just multiple news items, 0:00:30 but a single topic, prompted by recent headlines, 0:00:34 like we did on our episodes on esports and the opioid crisis. 0:00:38 This week, the topic is personal genomics, the promise, the perils, 0:00:41 where are we really today and where could we be going next? 0:00:44 We start with an article by Peter Aldhous on, quote, 0:00:47 “Ten years ago, DNA tests were the future of medicine. 0:00:51 Now, they’re a social network and a data privacy mess.” 0:00:53 The article refers to a series of events, 0:00:56 everything from companies like 23andMe and the FDA, 0:00:58 to some of the headlines we’ve seen lately 0:01:01 around criminals being caught based on their relatives’ DNA. 0:01:04 There’s also a number of companies cited in the article who offer such tests. 0:01:06 To be clear, none of the following discussion 0:01:08 should be taken as investment advice. 0:01:11 Please see a6nz.com/disclosures for important information. 0:01:14 So that’s the context and a super quick summary. 0:01:16 Now, let me introduce our A6NZ expert, 0:01:20 general partner Jorge Conde, who has a long history in this area. 0:01:23 Since it’s the turn of a decade and the first episode of January, 0:01:25 I thought it’d be great for us to do sort of a Janus-themed 0:01:26 look back, look forward, 0:01:29 starting with quick reactions on reading the piece. 0:01:32 Well, when I read the BuzzFeed piece, which was super interesting, 0:01:35 it took me back to a very specific moment in time. 0:01:36 And I was living in this world. 0:01:38 I was in the personal genomics space. 0:01:41 I had just started a startup that was looking 0:01:45 to essentially interpret full genome data at scale. 0:01:47 If there was something in DNA that could be found 0:01:50 to be relevant or actionable, 0:01:53 we were building technology to detect that. 0:01:54 But what I thought was really neat is, 0:01:56 I’m reading this 10-year retrospective, 0:01:57 if I go back to that moment in time, 0:02:01 I actually participated in a piece that was in some way 0:02:03 a 10-year prospective look 0:02:06 on what the future of personal genomics would look like. 0:02:10 And this is in the 2008 timeframe, more or less. 0:02:13 I get an outreach from, of all things, GQ Magazine. 0:02:15 They had an author, a guy by the name of Richard Powers, 0:02:19 who had just written a book and won all kinds of awards. 0:02:21 He wanted to write about the experience 0:02:25 of what it would mean to have his full genome sequenced 0:02:27 and essentially revealed to him. 0:02:29 And we had started this company, Knome, 0:02:30 with the idea that we would be among the first 0:02:32 to fully sequence individuals 0:02:34 and interpret their DNA for them.
0:02:35 But what’s really interesting is, 0:02:37 if you almost read that piece as a companion 0:02:38 to this backward-looking look, 0:02:39 you get the forward-looking look 0:02:40 of what the next 10 years 0:02:42 in personal genomics would look like. 0:02:43 – What was it called? 0:02:44 – It was called “The Book of Me.” 0:02:45 – Oh, fantastic. 0:02:46 What a great title. 0:02:47 So then what is your take? 0:02:48 What’s hype, what’s real here 0:02:51 when it comes to the promise of personal genomics? 0:02:53 The whole complaint of this article 0:02:54 is that we were promised one thing. 0:02:55 They were supposed to be the future of medicine, 0:02:58 but hey, instead we got this big data privacy mess. 0:03:02 – So looking 10 years back, what was hype, 0:03:04 or at least over-expectation, 0:03:06 was that people, in general, 0:03:11 would have a deep curiosity to understand their DNA. 0:03:12 – You’re saying that part is hype? 0:03:14 I would think that part is reality. 0:03:15 – Ah, well, what’s really interesting 0:03:18 is if you look at the several companies named, 0:03:21 I think all had at some level an idea 0:03:24 that there would be a large number of people 0:03:27 that wanted to very deeply understand 0:03:29 any sort of secrets or actionable insights 0:03:31 that you could draw from your own genomic information. 0:03:33 And while those people definitely exist, 0:03:37 I don’t think that a large market materialized 0:03:38 around those people. 0:03:40 In fact, one of the eye-opening things for me 0:03:42 when I was starting my company back in 2008 was that 0:03:46 ancestry.com was primarily selling subscription services 0:03:48 for getting into these sort of ancestry databases. 0:03:50 – Yeah, online family trees, yeah. 0:03:54 – So I remember I downloaded the S-1, and ancestry.com’s 0:03:56 subscription revenue was something on the order 0:03:59 of $200 million that year. 0:04:00 So another question is, 0:04:03 do people fundamentally want to understand their DNA 0:04:05 in terms of health risks and the like? 0:04:07 Or do people have a fundamental curiosity 0:04:09 to know who they are and where they come from? 0:04:11 – Oh, that’s where you’re saying the difference 0:04:13 between what’s the actual market for this kind of, 0:04:15 there’s a curiosity, but not necessarily a market 0:04:16 for DNA around it. 0:04:18 – Exactly, so people want to understand 0:04:20 who they are and where they come from. 0:04:22 And if it happens to come from DNA data, great. 0:04:24 If it happens to come from looking at ancestry databases, 0:04:27 that seems to be a pretty reasonable substitute 0:04:28 for getting that insight. 0:04:30 – So okay, so you’re saying one of the things that’s hype 0:04:34 is that people may not necessarily want DNA data 0:04:36 specifically, what else is hype? 0:04:37 – I think one of the other things that was potentially 0:04:41 hyped, certainly at that time in the 2008, 2009, 2010 timeframe, 0:04:45 is that there would be something deeply concrete 0:04:47 about DNA that would determine 0:04:50 what your potential health risks 0:04:53 and therefore what your potential outcomes might look like. 0:04:55 You know, sort of it’s this idea that DNA is destiny 0:04:57 when it comes to your health. 0:05:00 Now that’s certainly true in some subset of diseases. 0:05:02 The subset of diseases that are known to be monogenic. 0:05:04 – Right, so single factorial driving it.
0:05:06 – Exactly, when there’s a mutation in a gene 0:05:08 that results in a specific condition, 0:05:09 like a sickle cell anemia. 0:05:11 – Right, which we talked about in our CRISPR episode. 0:05:12 – But when you start to look at things 0:05:15 that are much more complex, much more multifactorial. 0:05:17 – Like cancer, many other diseases. 0:05:19 – Cancer, metabolic disorders, you know, 0:05:21 pick any number of cardiovascular risks. 0:05:23 There are certainly genetic contributors, 0:05:25 but as a lot of experts in the field say is, 0:05:28 you probably get that same level of information 0:05:31 from getting a good family history. 0:05:33 – Right, so basically the second hype piece you’re saying 0:05:37 is that it is not a direct link, a map from oh, 0:05:40 here’s your DNA and then oh, here’s all the diseases 0:05:42 you’re gonna get, not get, et cetera. 0:05:44 And here’s the precise risk you have 0:05:46 for this disease based on me analyzing your DNA. 0:05:47 So I think that’s probably an area 0:05:48 where expectations were probably higher 0:05:50 than where we were in reality, 0:05:52 in terms of how actionable is this information 0:05:54 for someone that is seeking to manage their health. 0:05:55 – So that’s maybe one of the things 0:05:57 where maybe the promise hasn’t quite come through yet. 0:05:59 – And I think another area that is really interesting 0:06:03 is a lot of these businesses were conceived 0:06:06 as subscription businesses. 0:06:09 Where I would give someone a DNA kit for Christmas, 0:06:11 they would get their genome scan 0:06:13 and then they would engage with that 0:06:14 on some regular basis. 0:06:17 And I would suspect that the vast majority of people 0:06:21 that had those DNA scans done, oh, so many Christmases ago, 0:06:23 probably haven’t logged in in a while. 0:06:28 So if you had a genome scan done in 2009 0:06:31 and you did another genome scan in 2019, 0:06:35 I can almost guarantee you that your ancestral makeup 0:06:37 would look different over the course of those 10 years. 0:06:39 – Simply because of the available data. 0:06:40 – We just know more, that’s right. 0:06:41 You, of course, haven’t changed who you are, 0:06:44 but who an ancestry map tells you you are has changed. 0:06:46 – Okay, so that’s where maybe things were hyped 0:06:48 or not delivered yet or promised 0:06:50 and didn’t quite come through. 0:06:51 Now let’s quickly talk about the reality. 0:06:53 So where are we today? 0:06:57 What is possible right now, truly, with personal genomics? 0:07:00 – Well, the first thing I would say is genomics more broadly 0:07:03 has delivered a lot over the course of the last 10 years. 0:07:05 In fact, I will say this as an expert, 0:07:08 not as an entrepreneur in the genomic space, 0:07:10 but as a parent of many children, 0:07:13 one of the fascinating things that I saw was 0:07:15 the time when my wife was pregnant with her oldest child, 0:07:17 you still could not get enough of a signal 0:07:19 from maternal blood as to whether or not 0:07:22 there was sufficient fetal DNA in circulation 0:07:23 to determine whether or not there were 0:07:25 genetic abnormalities. 0:07:27 By the time we had our last child, 0:07:30 that was routine standard of care. 
0:07:32 The other example is when the child is actually born, 0:07:36 the genetic tests mandated by a state when a child is born, 0:07:39 and some of the ones you could also opt into, 0:07:42 that menu of tests that were available multiplied 0:07:44 in the relatively few years between the time 0:07:46 when we had our first child and our last child. 0:07:48 – So roughly a decade span. 0:07:50 – So we’ve seen a lot of advance just in the use 0:07:52 of genetic information and the practice of medicine, 0:07:54 and that’s a remarkable advance forward. 0:07:57 So now let’s focus on personal genomics specifically. 0:08:01 One of the promises of personal genomics even back in 2009 0:08:04 was predicated on the fact that there would be power 0:08:06 in numbers. 0:08:09 And in large part, that’s why some of the leaders 0:08:12 in this space, whether it’s 23andMe or ancestry.com 0:08:14 that eventually came into this, 0:08:17 there’s so much value in them amassing a large database. 0:08:20 Because in some ways, as you have more samples 0:08:22 in a database, you get better reads on who we are, 0:08:24 just genealogically, you have a higher resolution 0:08:26 map of the world. 0:08:30 Now, the risk of having a large aggregated data set is that 0:08:33 it also becomes a tempting target, 0:08:36 sometimes for legitimate uses for investigation, 0:08:38 sometimes perhaps for illegitimate uses. 0:08:40 – Right, this is where privacy concerns come in, exactly. 0:08:41 – Interestingly enough, 0:08:44 this privacy question, it sounds very futuristic, 0:08:46 but even in 2009, these concerns were very real. 0:08:48 If you read the terms and conditions 0:08:50 that these services had, to their credit, 0:08:53 they were very explicit that this information 0:08:55 could be used in unintended ways. 0:08:57 – Oh, in fact, the article even points out 0:08:59 that one of the companies had to actually expand 0:09:01 their definition of a violent crime 0:09:03 in order to cover it in their terms of service. 0:09:05 And secondly, that some of them are actually 0:09:08 moving to opting in to whether you can even be included 0:09:10 in that aspect of that database, 0:09:12 which is also fascinating that people can actually choose. 0:09:15 – Well, at Knome, we made a decision early on 0:09:19 where we said we’re actually not gonna aggregate 0:09:20 all of the data, we’re not gonna centralize it. 0:09:22 We did something inverse. 0:09:27 What we decided to do was we would sequence an individual 0:09:30 and place that sequence, that genomic data 0:09:32 on an encrypted key that would live 0:09:34 in a decentralized network. 0:09:38 And the thought was you could keep the queries centralized. 0:09:40 So let’s say a researcher wanted to understand 0:09:42 how many people in a population have this mutation 0:09:43 associated with this disease. 0:09:45 You would push the queries down 0:09:47 to the edges of the network. 0:09:49 The analysis would run locally. 0:09:51 The result, and only the result would come back, 0:09:53 get centralized, then you’d have an aggregated 0:09:55 answer to that question. 0:09:56 – That’s fascinating. 0:09:58 Funnily, even though it’s a very different example, 0:10:00 it reminds me of differential privacy. 0:10:02 And that was also something that Apple 0:10:03 made a big deal about in the last few years, 0:10:05 but in fact, it was based on a paper 0:10:07 from Microsoft researchers like a decade ago. 0:10:08 It’s a fundamental insight they have 0:10:09 for how to separate these two things.
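(A minimal sketch, in Python, of the kind of federated query pattern Jorge describes above, where each participant’s genome stays on its own node and only aggregate results travel back to the center. The EdgeNode class, the example rsIDs, and the optional Laplace noise are illustrative assumptions added here for clarity; this is not Knome’s actual implementation.)

```python
# Illustrative sketch only; not Knome's actual code. Each EdgeNode stands in
# for one participant's locally held genome: raw variant data never leaves
# the node, only a per-node result does.
import random
from dataclasses import dataclass

@dataclass
class EdgeNode:
    variants: set  # variant IDs held locally (rsIDs used purely as examples)

    def run_query(self, variant_id: str) -> int:
        # The analysis runs locally; only a 0/1 answer leaves the node.
        return 1 if variant_id in self.variants else 0

def laplace_noise(scale: float) -> float:
    # Laplace(0, scale) as the difference of two exponential draws,
    # a nod to the differential-privacy comparison in the conversation.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def federated_count(nodes, variant_id: str, noise_scale: float = 0.0) -> float:
    """Push the query to every node and aggregate only the returned results."""
    total = sum(node.run_query(variant_id) for node in nodes)
    return total + (laplace_noise(noise_scale) if noise_scale > 0 else 0.0)

cohort = [EdgeNode({"rs334"}), EdgeNode(set()), EdgeNode({"rs334", "rs12913832"})]
print(federated_count(cohort, "rs334"))  # 2: an aggregate answer, no raw genomes shared
```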
0:10:11 So it’s kind of funny, the synchrony of all that. 0:10:13 – Yeah, and arguably we were 10 years too early. 0:10:14 We came up with it. 0:10:15 – So it was about timing. 0:10:16 – The other thing, when we thought 0:10:17 through these questions of privacy, 0:10:18 a lot of these were perceived risks. 0:10:21 We didn’t know, but we wrote out a lot of risk factors 0:10:23 to getting yourself sequenced. 0:10:26 And among them, we had things that sound fantastical, 0:10:29 like if someone had an entire readout of your genome, 0:10:31 they could essentially synthesize your genome 0:10:33 and then plant your DNA at a crime scene. 0:10:35 – Right, fascinating. 0:10:35 – Right, and all of a sudden you have 0:10:37 these genetic fingerprints of a place 0:10:38 where you’ve never been. 0:10:39 My co-founder, George Church, 0:10:42 who’s a professor of genetics at Harvard Medical School, 0:10:44 he insisted on this: 0:10:45 if someone were getting sequenced, 0:10:47 we couldn’t ask them to get buy-in 0:10:49 from all of their family members, 0:10:52 but we could require that if any of them had a twin, 0:10:53 an identical twin, that that twin 0:10:54 would also have to sign up. 0:10:56 – Of course, that makes perfect sense. 0:10:57 So, okay, is there anything else 0:10:58 in what is possible right now 0:11:00 on the personal genomics front? 0:11:01 – Oh yeah, so in the present, 0:11:03 you could argue that on the ancestry side, 0:11:04 we’re getting much better at sort of 0:11:06 getting a high resolution view of who we are 0:11:07 and where we come from and all of that. 0:11:09 And it’s an n-of-one example, 0:11:12 but if you take the case of 23andMe, 0:11:13 over the course of the decade, 0:11:16 it has amassed a large enough genomic dataset 0:11:18 that it’s clearly valuable 0:11:20 from a research and development standpoint. 0:11:21 It wasn’t that long ago 0:11:22 when they announced a collaboration 0:11:24 with GlaxoSmithKline, with GSK, 0:11:26 where GSK is essentially paying them 0:11:28 something on the order of $300 million 0:11:30 to get access to this dataset, 0:11:32 to be able to drive some insights from it 0:11:34 and potentially even follow up with people 0:11:36 on a very opt-in basis. 0:11:38 And so, that will show you that, 0:11:40 at least on the original promise of personal genomics, 0:11:41 this is one example. – The value is there 0:11:43 in the data, yeah. – That’s been delivered, right? 0:11:45 And so, there is power in numbers. 0:11:46 And I think the question, like with 0:11:49 any other technology, with any other resource, is 0:11:50 can we find the right balance 0:11:51 where we’re benefiting the commons 0:11:52 and not at the expense of the individual? 0:11:54 And I think that’s where a lot of the debate 0:11:56 happens in terms of are we doing the right things? 0:11:58 – So, that’s where we are now. 0:11:59 Let’s talk about the future 0:12:02 since we’re doing this whole Janus-themed episode. 0:12:04 So, given that there was this past of promise 0:12:05 that was and wasn’t delivered, 0:12:07 present of where we are, 0:12:10 where are we going with personal genomics next? 0:12:11 Or what is actually possible 0:12:12 based on what we already know today? 0:12:14 – Yeah, well, I think there are certain things 0:12:15 that are possible based on what we know today. 0:12:18 The first one is as these datasets become more rich, 0:12:20 the ability to derive insights from them, 0:12:23 that’ll be relevant for how we diagnose or treat disease.
0:12:26 I think that becomes increasingly more valuable over time. 0:12:28 So, what I mean by that is, 0:12:31 one of the big knocks on drug discovery and development 0:12:33 is that it takes a long time, 0:12:35 it’s very expensive and the risk of failure is high. 0:12:37 One of the sort of lesser known data points 0:12:42 is that if you have a genetic insight driving the program, 0:12:45 saying that I think that a particular molecular compound 0:12:48 is going to be effective in a particular patient population 0:12:52 that’s defined by some sort of genetic or genomic marker, 0:12:54 that molecule, that compound, 0:12:56 that drug has a much higher, 0:12:59 significantly higher chance of success. 0:13:00 – What does that mean practically? 0:13:02 Does it mean that we can actually basically, 0:13:04 is it a natural extrapolation of that, 0:13:05 that there may be a future 0:13:07 where we do get personalized tailored medicine 0:13:08 based on those molecules? 0:13:09 – That’s right. 0:13:11 The extrapolation of that is that we’ll get better, 0:13:13 faster, cheaper drugs that are tailored 0:13:14 to the right population. 0:13:16 – Yeah, sort of like personalized cocktails 0:13:18 at a mass manufactured level. 0:13:21 – Essentially, yeah, personalized cocktails of therapies 0:13:23 that at least are targeted to specific populations. 0:13:24 – That’s actually the better way of saying that. 0:13:25 – Another potential future thing, 0:13:27 especially if we’re doing a 10 year prospective look 0:13:31 from today, is people talk about personal genomics. 0:13:34 I think genomics is but one omic. 0:13:35 – Ah, yes, multi-omics. 0:13:36 – That’s right. 0:13:37 – Very big thing. 0:13:39 – If the big revolution over the course of the last 10 years 0:13:42 is that we were able to sequence, 0:13:45 read DNA at a massive scale at a low cost 0:13:47 at high fidelity and all those things, 0:13:50 that’s increasingly true across many other ways 0:13:52 in which biology transmits information. 0:13:54 And I can bore you with all of the omics. 0:13:55 – Go through a couple of the hit list. 0:13:58 I mean, proteomics is one I know from when I was at PARC. 0:14:00 – So genomics is DNA, proteomics is proteins, 0:14:04 transcriptomics is RNA, epigenomics is gene regulation 0:14:05 and how gene levels are set. 0:14:08 Metabolomics, the set of metabolites in your body 0:14:10 and of course microbiomics 0:14:12 and how the microbiome interacts with all of that. 0:14:14 We’re increasingly going to read biology 0:14:16 across many, many frequencies. 0:14:18 And by the way, we can also increasingly read biology 0:14:20 at a higher and higher resolution, 0:14:23 which means you could read all of this information, 0:14:24 not for a single individual, 0:14:27 but increasingly from a single cell. 0:14:28 And that’s a very different thing 0:14:30 because now you could, for example, in a tumor, 0:14:33 you can understand how are the immune cells reacting 0:14:33 to the tumor cells? 0:14:35 How are the tumor cells reacting to the immune cells? 0:14:37 If you can read biology at that resolution, 0:14:40 we’re going to learn a lot more about biology.
0:14:41 Now, when you add to that the fact 0:14:44 that it’s not just omics being transmitted by the cell 0:14:46 that we can capture at high fidelity, 0:14:48 but increasingly we have more sensor data 0:14:49 than ever before, more ability to crunch data 0:14:51 than ever before, I think if you look over the course 0:14:54 of the next 10 years, it won’t be a question 0:14:55 of personalized genomics. 0:14:58 I think that will at some level be a dated term. 0:15:00 It’ll be the question of, you know, 0:15:02 can you quantify individuals fully? 0:15:04 And we’re getting closer and closer to that. 0:15:05 – What would you say though? 0:15:05 I have to ask this 0:15:08 because we don’t want to be sitting here 10 years from now 0:15:10 and asking, so what did we get wrong 10 years ago? 0:15:12 Or hey, when you and I talked about these topics, 0:15:14 is it possible that multi-omics 0:15:16 is also one of these much hyped things as well? 0:15:18 I mean, we can’t predict the future obviously, 0:15:20 things play out, it’s always a matter of timing sometimes, a when 0:15:23 not an if, so where are we really on the spectrum 0:15:24 of hype versus reality? 0:15:25 – Yeah, it’s a good question. 0:15:28 I think if we take the last 10 years as any guide, 0:15:31 there tends to clearly be sort of two stages, 0:15:33 two phases, two ages. 0:15:35 The first age is using technology to learn 0:15:37 and the second age is to use technology to act. 0:15:39 If we look at what happened with personal genomics, 0:15:44 that first age, where we took time to learn and gather data, 0:15:46 ended up being I think a lot longer 0:15:48 than people probably originally anticipated 0:15:51 and we’re seeing the benefits of how we can act on that 0:15:53 towards the tail end of the last 10 years. 0:15:54 I think it’s probably reasonable to assume 0:15:57 that if we look over the course of the next 10 years, 0:16:01 to your question, the dividing line between hype 0:16:03 and reality on something like multi-omics 0:16:05 for me is really the dividing line 0:16:08 of when do we shift from learning from information 0:16:10 to acting on information. 0:16:12 – So one thing I wondered about frankly is the parallels, 0:16:14 and your team talked about this a lot, 0:16:15 in terms of the parallels between engineering 0:16:17 and the engineering phase coming to biology, 0:16:19 which is that when it comes to DNA and genomics, 0:16:21 the thing that’s been most fascinating for me to watch 0:16:24 for the last decade is that there is a Moore’s law 0:16:27 in genomics and it’s much faster than the regular Moore’s law 0:16:31 and yet the pace of practical application 0:16:33 is not necessarily on par with what happened 0:16:34 with the semiconductor. 0:16:36 So that’s where the analogy really breaks down 0:16:37 despite an accelerated effect. 0:16:41 So one question for me is what is missing in the ecosystem? 0:16:44 Like is it that there isn’t the ability to manufacture? 0:16:46 Is it a missing market as you alluded to earlier? 0:16:49 Are there missing components or materials? 0:16:50 You know, when I think of the history of innovation, 0:16:51 what still needs to be built out 0:16:54 in addition to this core fundamental technology 0:16:56 for this vision to come to reality? 0:16:57 – Ah, that’s a great question. 0:17:00 But first of all, I think the reason why we see a faster 0:17:02 than Moore’s law trend in genomics 0:17:06 is because the ability to sequence and interpret DNA 0:17:11 is really the confluence of three or four engineering marvels.
0:17:14 You know, if you look at the next generation sequencer, 0:17:17 really what it is is, you know, you’re tracking 0:17:18 the history and evolution of our ability 0:17:21 to engineer better microfluidic systems 0:17:22 in order to move around tiny amounts of liquids. 0:17:24 – All about microfluidics from Xerox. 0:17:25 – There you go. 0:17:27 So you’re seeing improvements in the ability 0:17:30 to engineer better chemistry. 0:17:32 And this is both at the nucleotide level 0:17:35 so we can, you know, get more efficient reactions. 0:17:37 And at the surface chemistry of the platform, 0:17:40 so you can actually run more and more reactions 0:17:41 in tighter and tighter real-estates. 0:17:43 You get more density, that’s a second wave. 0:17:47 The third wave is we have massive improvements in optics. 0:17:50 So if you’re gonna run a bunch of chemical reactions 0:17:52 in very, very, very small real estate, 0:17:54 you need to be able to detect those. 0:17:54 – Optical detection. 0:17:56 – Optical detection. 0:17:58 So when the actions that drive sequencing 0:17:59 are occurring at such density 0:18:01 that they fall below the pixel detection level 0:18:03 of the optics, you can’t see the difference between them. 0:18:05 So we had to see improvement in optics. 0:18:08 And then all of that generated data 0:18:12 that had to be deconvoluted with advanced computation. 0:18:13 – And those are the four factors. 0:18:14 – And those are the four factors. 0:18:16 – Microfluidics, optical detection, 0:18:18 improvements in chemistry and data, fantastic. 0:18:19 – So that’s that revolution. 0:18:22 So now why haven’t we seen sort of the output look the same? 0:18:25 The difference there is the output of Moore’s law 0:18:28 is better and smaller semiconductors. 0:18:31 Those could be placed within a system 0:18:32 that’s been designed by people, 0:18:34 like human beings that could be optimized. 0:18:37 And therefore you can get new products. 0:18:40 – In the case of genomics, the output of that information 0:18:43 has to go into a system that was not designed by human beings. 0:18:44 – It was designed by nature. 0:18:46 So Jorge, bottom line it for me. 0:18:49 So in this journey from looking backward 0:18:50 and looking forward, 0:18:52 where are we in the personal genomics revolution 0:18:54 and what should our takeaway be? 0:18:57 – We’re still in the early days of this revolution. 0:19:00 If we look over the long course of time, 0:19:03 we are still very much in the learning phase 0:19:05 and the data collection phase 0:19:07 and the information gathering phase. 0:19:10 And it will be some time before we make a mass shift 0:19:11 into the taking action phase 0:19:14 or into the productization phase 0:19:15 from all of this information. 0:19:17 But when you look where we are heading, 0:19:18 that day will arrive. 0:19:20 And that’s why we are incredibly optimistic 0:19:23 about what the future of genomics of multiomics 0:19:26 and biology more broadly will bring to our benefit. 0:19:29 And when it comes to the privacy, in its full iteration, 0:19:33 we will get the maximum power from genomic information 0:19:36 when virtually everyone is sequenced. 0:19:38 If we have perfect information, 0:19:40 we can theoretically draw better insights. 
0:19:44 But that will come with important costs and considerations 0:19:47 for how we treat the concerns of individuals 0:19:50 that are contributing to that data 0:19:52 in a way that you’re still protecting the individual 0:19:54 but still benefiting the commons. 0:19:55 – Thank you for joining this episode. 0:19:56 – My pleasure.
This is a turn of the decade (and January-themed) look backward/ look forward into personal genomics, given recent and past retrospective and prospective pieces in the media on the promise, and perils, of the ability to sequence one’s DNA: What did it, and does it, mean for personalized medicine, criminal investigations, privacy, and more?
General partner Jorge Conde, who has a long history in the space, covers everything from where genealogy databases and large datasets come in to fetal testing, multi-omics, and other themes spanning the past, present, and future of personal genomics in conversation with Sonal Chokshi for episode #18 of our news show 16 Minutes, where we cover recent headlines, the a16z way, from our vantage point in tech — and especially what’s hype/ what’s real. While we typically cover multiple headlines, this is one of our special deep-dive episodes on a single topic. (You can catch up on other such deep dives, on the opioid crisis and other evergreen episodes, at a16z.com/16Minutes). And if you haven’t already, be sure to subscribe to the separate feed for “16 Minutes” to continue getting new episodes.
0:00:03 – Hi everyone, welcome to the A6NZ podcast. 0:00:05 I’m Sonal, happy new year. 0:00:08 Today’s episode is on why we should be optimistic 0:00:10 about the future, because it features two 0:00:13 of the most optimistic people together in conversation. 0:00:15 A6NZ co-founder, Marc Andreessen, 0:00:17 is interviewed by Kevin Kelly, 0:00:20 founding executive editor of Wired Magazine and more. 0:00:22 The conversation originally took place 0:00:25 at our most recent annual innovation conference, 0:00:28 the A6NZ Summit, and it was also previously released 0:00:29 on YouTube if you’d like to check it out there 0:00:30 as well. 0:00:33 – Good afternoon. 0:00:37 Thank you, Marc, for answering some questions. 0:00:38 I have a bunch of questions, 0:00:41 which I hope that we can talk about. 0:00:44 These all have to do about the future, where we’re going. 0:00:46 I want to start with a question about the past. 0:00:47 You know, a generation ago, 0:00:50 a lot of smart people didn’t think 0:00:52 the internet was gonna work, 0:00:55 and therefore they were unprepared for its benefits. 0:01:00 What are we smart people today not prepared for? 0:01:02 – Yeah, so you may remember actually, a generation ago 0:01:04 it wasn’t even just that a lot of people didn’t think 0:01:05 that the internet was gonna work, 0:01:06 a lot of smart people didn’t think that. 0:01:07 In fact, the inventor. 0:01:09 (laughing) 0:01:10 I can’t resist. 0:01:11 I can’t resist telling the story. 0:01:12 They actually, the inventor of Ethernet, 0:01:15 which is a foundational technology for the internet, 0:01:17 spent the ’90s actually predicting the internet 0:01:18 would crash, would collapse, 0:01:20 and what he called the gigalapse 0:01:23 would take down the internet by like 1996, 1997. 0:01:25 He wrote a column at the time for a magazine 0:01:27 called InfoWorld, and he said that if he was wrong, 0:01:28 by, I think it was like, 0:01:30 if the internet hadn’t collapsed by 1997, 0:01:32 he would eat his column. 0:01:36 And to his enormous credit in 1998, 0:01:37 he actually went on stage at a conference, 0:01:40 he actually ripped his column out of the magazine, 0:01:41 he put it in a blender with water, 0:01:43 blended it up, and he drank it on stage. 0:01:46 So it’s one of the more shining examples 0:01:49 of intellectual honesty I’ve ever seen. 0:01:51 As it turns out, he was wrong. 0:01:52 It turns out the internet did work. 0:01:53 So I think the big thing, 0:01:55 I’ve been thinking about this a lot, 0:01:57 you know, it feels to a lot of people 0:01:59 like things are getting strange. 0:02:01 And maybe I’m the only one who feels that way, 0:02:03 but if you read the news, 0:02:04 or just track things happening in the world, 0:02:06 just things feel kind of weird and different 0:02:07 over the last few years. 0:02:08 I actually think there’s like, 0:02:10 there’s an actual generational thing that’s happening, 0:02:12 and you alluded to the generational component. 0:02:14 Like it did take 25 years to get everybody online. 0:02:16 And like we’re not quite there yet, 0:02:18 but we’re getting very close. 0:02:19 Like I think the most exciting thing happening 0:02:21 in the world right now is Mukesh Ambani, 0:02:22 who’s the richest man in India, 0:02:24 has this program called Jio, 0:02:26 where he is literally providing internet access 0:02:28 to the 500 million lowest income Indians, 0:02:33 like literally it’s like free for six months. 0:02:34 And then it’s like a dollar a month.
0:02:35 It’s like the most amazing thing. 0:02:36 And it’s like, it’s working incredibly well. 0:02:39 And so we are very, very close to every, 0:02:42 at least every adult on the planet being internet connected. 0:02:43 But it took 25 years to get there. 0:02:46 And so for me, it’s like, okay, so then what? 0:02:48 One interpretation of that is, okay, we’re done. 0:02:49 We did it. 0:02:50 The other interpretation of that is actually, 0:02:52 okay, that’s just the beginning point. 0:02:53 – Right. 0:02:54 – That’s like the beginning point of what? 0:02:55 – Right. 0:02:56 – And I think it’s the beginning point of like, okay, 0:02:58 like what if you actually interconnect 0:02:59 with everybody on the planet? 0:03:00 Like what, you know, there’s like the metaphor 0:03:02 of the global mind of the global brain. 0:03:04 Like what if you actually connected everybody together 0:03:07 and let everybody find out what everybody else was thinking? 0:03:08 It’s one of those things that people think 0:03:09 sounds good. 0:03:10 And then they encounter it face to face 0:03:11 and they’re like, I don’t know. 0:03:12 – Right, right. 0:03:14 That was like, during my time 0:03:16 at Wired, people were kind of concerned 0:03:17 about the digital divide. 0:03:19 And I said, the digital divide is going to cure itself. 0:03:21 The thing you should be worried about 0:03:24 is what happens when everybody is online? 0:03:26 So you think we’re not prepared 0:03:28 for what will happen when everybody is online? 0:03:30 – No, and I think we’re not prepared. 0:03:31 And then I think it’s going to be very exciting. 0:03:33 I mean, I think we’re already seeing that in many ways. 0:03:37 I think the, and then I think we’ve kind of figured out 0:03:38 collectively that it’s going to be different. 0:03:40 And so the initial impulse is to say things 0:03:41 are going to get much worse, 0:03:42 and I don’t think that’s right. 0:03:43 I think things are going to get very different. 0:03:46 I think things will be much more positive. 0:03:48 And we’ll talk a lot about that today, hopefully. 0:03:50 But things are definitely going to be different. 0:03:53 – I think one lens that I’ve been trying to put on lately 0:03:55 is kind of think about it through a cultural lens. 0:03:57 Right, sort of what happens to culture 0:04:00 because culture, you know, Ben just wrote this book 0:04:02 about culture being kind of the foundation of behavior. 0:04:04 And I think that’s really true certainly in companies, 0:04:06 but I think it’s also true in countries and globally. 0:04:08 And it feels like the internet’s impact on culture 0:04:12 is just beginning in the sense of like a world 0:04:13 in which culture is based on the internet, 0:04:15 which is what I think is happening. 0:04:16 It’s just at the very start, right? 0:04:17 ‘Cause it had to get universal 0:04:19 before it could set the culture, 0:04:20 but that’s actually happening now. 0:04:21 – Okay. 0:04:23 And at the same time, a generation ago, 0:04:25 well, there were a few people 0:04:28 who actually did think the internet was going to work, 0:04:32 but they were also, like myself, expecting VR 0:04:36 and conversational AI to happen tomorrow. 0:04:39 So what are we expecting to happen now 0:04:40 that’s not gonna happen? 0:04:42 – Yep, so I object to the question. 0:04:44 (laughing) 0:04:45 Your Honor.
0:04:48 So this is one of those things in our business 0:04:49 that you deal with a lot, 0:04:50 which is ’cause you find yourself, 0:04:52 these entrepreneurs come in and they pitch an idea 0:04:53 and you kind of feel like you should pass judgment 0:04:54 on whether the idea is gonna work or not. 0:04:57 And it’s something I’m really leery of doing anymore. 0:04:59 And the reason for that, 0:05:01 and I think you know this from all of your reading, 0:05:04 every successful technology that I’m aware of, 0:05:05 the things that are like all of a sudden, 0:05:08 like the next big thing, like the iPhone in 2007, 0:05:09 or just as an example, 0:05:12 they all have this like incredible 25 or 40 0:05:14 or 50 year backstory to them. 0:05:16 And you sometimes have to go back and excavate, right? 0:05:17 Because you haven’t heard a lot of the backstory 0:05:20 because the previous efforts failed, right? 0:05:21 But if you go back and look, 0:05:24 like there’s often a multi-generational run-up, 0:05:26 and so I’ll just give you a few of my favorite examples. 0:05:29 So iPhone hit big in 2007. 0:05:30 I for years went around saying, 0:05:32 well, IBM, there was a 20 year project, 0:05:35 IBM shipped the first smartphone in 1997 called the Simon. 0:05:36 I thought that was true. 0:05:37 It actually turns out it’s not true. 0:05:41 I found the other day, RadioShack had a smartphone in 1982 0:05:43 with their, they literally had a phone version 0:05:45 of their TRS-80 mini computer. 0:05:47 They sold about four of them. 0:05:49 But it was a thing, right? 0:05:52 So that had a 25 year fuse on it. 0:05:54 Video conferencing, video conferencing goes back 0:05:57 at least to the mid ’60s, to the World’s Fair. 0:06:01 The fax machine was invented in the 1870s, 0:06:03 and then sat on a shelf for 100 years 0:06:05 before the Japanese turned it into an industry. 0:06:07 And then another favorite is fiber optics. 0:06:10 Nominally, or you can kind of stretch, 0:06:13 you could say fiber optics were invented in the 1840s. 0:06:18 Paris had an optical telegraph network under the city. 0:06:19 You could actually do, 0:06:21 you could do telegraphy in the 1840s in Paris. 0:06:23 And it was literally, they were shining flashes of light 0:06:24 through glass tubes. 0:06:27 So there’s this like this incredibly rich back story 0:06:29 to all these things. 0:06:31 And so as a consequence, it’s actually less a question 0:06:32 of like, what’s the new idea? 0:06:33 It turns out the idea is probably already out there 0:06:34 somewhere. – Right, okay. 0:06:35 – And then it’s less the question of like, 0:06:36 is it going to work? 0:06:37 It’s more of the question of like, 0:06:38 when is it gonna work? – Right. 0:06:40 – And I pushed it so far, and people in our office 0:06:41 have heard this. 0:06:42 I pushed it all the way to the point where I just think 0:06:43 we should assume that whatever 0:06:45 we’re being pitched is going to work. 0:06:46 (laughing) 0:06:48 It’s just a question of timing. 0:06:50 Then of course, timing turns out to be the hard part, 0:06:52 but it at least focuses the conversation. 0:06:53 – Right, right, right. 0:06:55 So it was the same idea of kind of looking 0:06:57 at the history of things. 0:07:00 One wonders, who really made all the money 0:07:02 when electricity came along? 0:07:03 It probably wasn’t the people 0:07:06 necessarily generating electricity. 0:07:08 Who do you think is gonna make the money 0:07:10 when AI comes along? 0:07:12 Is it the AI providers?
0:07:15 Is it the AI service? 0:07:18 Is it the algorithmic writers? 0:07:20 Who’s gonna be making money in AI? 0:07:22 – Yeah, so we think that there’s two 0:07:24 obvious business models, and probably others, 0:07:25 but the two obvious. 0:07:26 One is to be sort of a horizontal platform provider, 0:07:28 infrastructure provider, you know, 0:07:29 for AI kind of analogous to the operating system 0:07:32 or the database or the cloud. 0:07:33 You know, the other opportunity is kind of in, 0:07:34 would say in the verticals, 0:07:35 and so the applications of AI. 0:07:38 And there’s certainly a lot of those. 0:07:39 So that’s the general answer. 0:07:41 I think that the deeper answer is 0:07:41 there’s an underlying question 0:07:43 that I think is an even bigger question about AI 0:07:45 that reflects directly on this, 0:07:49 which is, is AI a feature or an architecture? 0:07:52 Is AI a feature? 0:07:54 We see this with pitches we get now, 0:07:55 which is just like we get the pitch, 0:07:58 and it’s like, here are the five things my product does, 0:08:00 right, and bullet points one, two, three, four, five, 0:08:02 and then oh yeah, number six is AI. 0:08:04 Right, and so you go, it’s always number six, right, 0:08:05 ’cause it’s the bullet that was added 0:08:06 after they created the rest of the deck. 0:08:08 (laughing) 0:08:10 And so it’s like, okay, if AI is a feature, 0:08:11 then that’s actually correct, 0:08:13 which is like every, basically everything 0:08:15 is just gonna kind of have AI sprinkled on it. 0:08:17 There’ll be AI features kind of in every product. 0:08:18 That’s possible. 0:08:21 We are more believers in the other scenario, 0:08:24 that AI is a platform and is an architecture. 0:08:26 If, in the same sense that like the mainframe 0:08:28 was architecture, or the mini computer was an architecture, 0:08:31 the PC, the internet, the cloud have been architectures, 0:08:33 we think there’s very good odds that AI 0:08:34 is the next one of those. 0:08:37 And if that’s the case, then it means that basically, 0:08:38 when there’s an architecture shift in our business, 0:08:40 it means basically everything above the architecture 0:08:41 gets rebuilt from scratch. 0:08:43 Because the fundamental assumptions 0:08:45 about what you’re building change, right? 0:08:47 And so you’re no longer building a website, 0:08:48 you’re no longer building a mobile app, 0:08:49 you’re no longer building any of those things 0:08:51 you’re building instead an AI engine 0:08:52 that is just like, in the ideal case, 0:08:53 is just giving you the answer 0:08:55 to whatever the question is. 0:08:56 And if that’s the case, 0:08:59 then basically all applications will change, 0:09:01 along with that all infrastructure will change. 0:09:03 Basically the entire industry will turn over again, 0:09:04 the same way that it did with the internet, 0:09:06 and the same way it did with mobile and cloud. 0:09:08 And so if that’s the case, then it’s just, 0:09:10 it’s going to be like an absolutely explosive period 0:09:12 of growth for this entire industry. 0:09:14 – ‘Cause it means then that all the incumbents, 0:09:17 suppose the incumbents really aren’t incumbent at all. 0:09:19 – Yeah, the products just won’t be relevant anymore. 0:09:20 I mean, I’ll just give you an example. 0:09:22 There are lots and lots of sort of business applications, 0:09:23 just business apps as an example. 
0:09:25 There’s lots of business apps, 0:09:27 where you basically, you type data into a form 0:09:28 and then it stores the data 0:09:29 and then later on you run reports 0:09:30 against the data and get charts. 0:09:32 And that’s been the model of business software 0:09:34 for 50 years in different versions. 0:09:36 What if that’s just not needed anymore? 0:09:38 Like what if in the future what you’ll do 0:09:40 is you’ll just give your AI and your business access 0:09:42 to all, email, all phone calls, all everything, 0:09:44 all business records, all financials in the company 0:09:46 and just let the AI give you the answer 0:09:47 to whatever the question was. 0:09:49 And you just don’t go through any of the other steps. 0:09:51 Google’s a good example of this. 0:09:52 Like they’re pushing hard on this. 0:09:54 Like the consumer version of this, right, is search, right? 0:09:58 So search has been, it’s been the 10 blue links 0:09:59 for 25 years now. 0:10:02 What Google’s, they talk about this publicly, 0:10:04 what they’re pushing towards this is just like, 0:10:05 no, it should be that answer. 0:10:06 Which is what they’re trying to do 0:10:07 with their voice UIs. 0:10:10 And so that concept might really generalize out, right? 0:10:11 And then everything gets rebuilt. 0:10:12 – Right. 0:10:16 So one of the new interfaces to AI 0:10:18 that people are talking about is voice 0:10:20 as the new interface. 0:10:24 What are we likely to get wrong about voice? 0:10:26 – Yeah, so I think the thing that, 0:10:27 if we’re gonna get something wrong about voice, 0:10:29 I think it’s gonna be that it would be a one-to-one 0:10:31 replacement for existing user interaction models 0:10:34 so that it would be like a replacement for keyboard 0:10:36 or that it’d be a replacement for the mouse or for touch. 0:10:39 Probably not, ’cause it’s a different modality, right? 0:10:42 It’s, you know, we know exactly what to keep. 0:10:43 After all this time, we know what the keyboard is for, 0:10:46 we know what touch is for and for voice 0:10:48 to displace those, seems like a stretch. 0:10:53 On the other hand, to the previous question, 0:10:55 there has been this turning point reached, 0:10:58 it feels like in AI applied to language 0:11:01 and from there to voice, right, to text and to speech. 0:11:03 Which is, it feels to us in the technology 0:11:05 like the natural language processing methods 0:11:06 that people have been working on for, again, 0:11:08 for 50 years, computer scientists have been working 0:11:11 on getting computers to understand basically speech. 0:11:13 And what we’re seeing now is in the technology 0:11:14 is that that now has started to work 0:11:15 in the same way that machine vision 0:11:18 started to work about seven years ago. 0:11:20 And so if that’s the case, then all of a sudden 0:11:23 the conversational UIs are about to get much better. 0:11:25 And again, and then you couple that with, okay, 0:11:26 what are you actually trying to achieve 0:11:29 when you talk to a computer? 0:11:30 Are you actually trying to like, you know, 0:11:31 are you trying to write a document? 0:11:32 Are you trying to read an email? 0:11:34 Are you trying to like do all these other things 0:11:34 that you do today? 0:11:36 Or are you fundamentally gonna be doing something different? 0:11:38 ‘Cause the machine’s gonna be so much smarter. 0:11:40 And I think that’s a very interesting open question. 
0:11:43 – When I think about the AR mirror world, 0:11:45 I find it very hard to imagine it without it 0:11:48 having a voice component where it can understand 0:11:50 what you’re saying besides what you’re looking at. 0:11:52 Is that an essential part of the AR world? 0:11:54 – Yeah, I think actually I’d go so far as to say 0:11:56 it may be the case that voice actually is the key 0:11:59 to the AR world, like voice may be the thing. 0:12:01 Voice may actually be the foundation of the whole thing. 0:12:03 You know, for, this is kind of a cliche at this point, 0:12:05 but like the Apple AirPods, 0:12:06 I think were a fundamental breakthrough. 0:12:08 Like it’s again one of these funny things where it’s like, 0:12:10 okay, wireless headphones, okay, cool. 0:12:12 Like wireless headphones where there’s, you know, 0:12:14 there’s not even a wire connecting the two things. 0:12:15 Cool, okay, it seems like more of the same, 0:12:18 but you know, if you want, the experience you can have now 0:12:21 is like you can wear one of these things basically all day 0:12:22 and you can talk to it all day. 0:12:24 And you know, they’re getting, you know, 0:12:25 the new versions are getting better. 0:12:27 You know, and Siri and Google Now and Cortana 0:12:30 and all these things are getting really good really fast. 0:12:33 And so it may be that we have just this constant 0:12:35 ongoing running dialogue. 0:12:37 This is kind of, you know, basically the machine 0:12:38 talking to our ear. 0:12:40 And then, you know, the visual overlay of AR 0:12:42 will obviously be important and valuable, 0:12:43 but it might be, it might, 0:12:45 the visual overlay might be supportive 0:12:47 on top of the voice experience. 0:12:51 – And we could very quickly have universal language 0:12:52 translation speaking in our ears. 0:12:55 And I think people underestimate the change 0:12:56 that that would bring about in the world. 0:12:59 You’d have millions of people who are highly skilled 0:13:02 in everything except the skill of English. 0:13:05 Now being able to participate in a global economy. 0:13:08 We were talking about expected and unexpected things. 0:13:13 Biology, which is a million times as complicated as digital. 0:13:16 We’re now talking about a biotech revolution. 0:13:20 Are we misunderstanding what biotechnology actually is? 0:13:22 – Yeah, so that’s the big bet that we’ve made 0:13:26 with our bio effort that we started a few years back. 0:13:29 We think biological science is at a turning point 0:13:31 at the scientific level and we think it’s at a turning point 0:13:35 from basically being a process of discovery 0:13:38 of how biology works to being able to engineer biology. 0:13:40 And up to and including literally being able 0:13:42 to program biology, being able to actually basically 0:13:45 be able to use electrical engineering and computer science 0:13:47 and mechanical engineering and these kinds of fields 0:13:49 for engineering things and be able to apply 0:13:52 those kinds of skills to biology. 0:13:55 If we’re right about that, then the whole concept 0:13:58 of kind of how bio and biotech work might be 0:14:00 on the verge of really changing. 0:14:03 The most obvious application would be in pharmaceuticals, 0:14:05 there’s this concept of drug discovery. 0:14:06 It’s always the word discovery. 0:14:08 It’s always like, discovery sounds great. 0:14:09 It’s like, it’s optimistic. 0:14:13 It’s like, ooh, discovering things is fantastic.
0:14:16 The problem is, discovering, they literally call it that 0:14:18 ’cause they literally have to run all these experiments 0:14:19 and try to discover the drug that works. 0:14:22 Like try to kind of reverse engineer back from nature. 0:14:25 And the problem is sometimes they discover it 0:14:26 and sometimes they don’t. 0:14:29 So the example we always give is we talk about 0:14:32 with computers, we’ve been on this kind of 50 year track 0:14:34 of what’s called Moore’s Law. 0:14:35 Where chips get faster and cheaper 0:14:37 every year, for a long time. 0:14:39 In biology, in drug discovery, 0:14:41 there’s what they call Eroom’s Law, 0:14:43 which is Moore spelled backwards, Eroom. 0:14:46 And it’s the cost of discovering a new drug. 0:14:48 And it’s exactly the wrong direction. 0:14:51 It’s up and to the right, billions of dollars now. 0:14:54 And so if you could actually engineer biology, 0:14:56 then all of a sudden you can start to apply 0:14:57 this like just these decades of skills 0:14:59 that we’ve built up in how to engineer things 0:15:00 and be able to do things like 0:15:02 engineer new pharmaceuticals from scratch. 0:15:05 – And it all runs on basically ultimately Moore’s Law. 0:15:08 Moore’s Law has been foundational to this era. 0:15:10 It’s almost hard to imagine anything we have 0:15:13 in the modern world today without Moore’s Law. 0:15:18 Do you think Moore’s Law has another 30 years run? 0:15:19 Is it limited? 0:15:20 Is it finite? 0:15:21 Will it go on forever? 0:15:23 We’ll define Moore’s Law in the broadest sense 0:15:28 of computers getting cheaper by half every couple of years. 0:15:30 So what’s your take on Moore’s Law? 0:15:32 – Yeah, so the traditional definition is the computer 0:15:35 in the form of the chip, and then specifically a chip. 0:15:36 So Moore’s Law has always been expressed 0:15:39 as kind of a unit of one chip. 0:15:40 And that could be, right, that could be a CPU 0:15:41 or it could be a graphics card 0:15:44 or it could be a graphics chip or a memory chip. 0:15:45 And then specifically what you were doing was 0:15:48 you were able to put more transistors on that chip 0:15:49 for the same cost. 0:15:51 And actually for a long time as you did that, 0:15:53 you were actually able to reduce the power requirement 0:15:56 per transistor, which was this kind of added benefit. 0:15:58 And so chips kind of got simultaneously, 0:16:00 they got faster, they got cheaper, 0:16:01 and they got more power efficient. 0:16:04 And that was kind of a cornucopia effect that generated, 0:16:06 as you said, most of what you see today 0:16:08 in the computer industry. 0:16:11 So the bad news is that that in that form 0:16:13 seems to be coming to something of an end, 0:16:15 which is we’re too good at it. 0:16:17 We’ve hit basically, we being the semiconductor industry 0:16:19 broadly, the tech industry have kind of hit 0:16:20 the limits of fundamental physics. 0:16:23 Like we’re now down at the sort of deep atomic level. 0:16:25 And it’s becoming much harder to make, 0:16:26 there’s still progress, 0:16:27 it’s becoming much harder to make progress 0:16:29 at the per chip level. 0:16:31 The good news is that the industry starting 0:16:34 10 or 15 years ago, the computer industry broadly 0:16:36 refocused off of what you do with a chip 0:16:39 to what you do with a large number of chips, right?
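(As a rough worked contrast of the two curves Marc names above: Moore’s Law is usually quoted as transistor counts doubling roughly every two years, while Eroom’s Law is usually quoted as the inflation-adjusted cost per newly approved drug roughly doubling every nine years. The doubling periods in the snippet below are those commonly cited rough figures, used purely for illustration, not measured data.)

```python
# Stylized comparison of Moore's Law vs. Eroom's Law compounding.
# The doubling periods are commonly quoted rough figures, not measurements.
MOORE_DOUBLING_YEARS = 2   # transistors per chip, roughly
EROOM_DOUBLING_YEARS = 9   # cost per approved drug, roughly

def fold_increase(years: float, doubling_period: float) -> float:
    # How many times over the quantity grows in the given span.
    return 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(f"{years:>2} yrs: transistors x{fold_increase(years, MOORE_DOUBLING_YEARS):>9,.0f}"
          f"   drug cost x{fold_increase(years, EROOM_DOUBLING_YEARS):.1f}")
# 40 yrs: transistors x1,048,576   drug cost x21.8
```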
0:16:41 So kind of the old model of a chip was 0:16:42 you make the chip more powerful 0:16:42 ’cause you’re trying to scale up 0:16:44 what you can do in the chip. 0:16:46 The new model is you use thousands of chips in parallel 0:16:48 and you have this kind of approach to scaling out. 0:16:49 And of course the full form of that 0:16:51 is what’s now known as the cloud. 0:16:53 And so we now have a 15 year head of steam going 0:16:55 to basically be able to get good 0:16:58 at using lots of chips to do things. 0:17:00 And that’s why you see the continued ability to, right? 0:17:03 To accelerate, you know, many, many things 0:17:05 that you deal with are getting still much faster 0:17:06 as if they’re still on Moore’s Law. 0:17:08 The experiences you’re having are getting faster. 0:17:09 So we think number one, 0:17:10 like the rise of scale-out architectures 0:17:11 is a really big deal. 0:17:13 Like, you know, in modern clouds as a developer, 0:17:15 you don’t really care about what the power 0:17:15 of any particular chip is. 0:17:17 You just like light up some more of them 0:17:18 and they don’t cost much. 0:17:20 So there’s that. 0:17:22 The other thing is chips are now specializing. 0:17:23 And in particular, you’ve got the rise 0:17:24 of these new dedicated chips 0:17:25 for things like neural networks 0:17:28 where there’s another level of opportunity to optimize. 0:17:31 And then the other kicker is the programmers. 0:17:34 Software, people like me get to step up. 0:17:38 In the old days, when computers were expensive, 0:17:39 programmers were really good at optimizing 0:17:42 every single step of a software program. 0:17:44 Programmers got out of that habit, 0:17:45 probably starting 30 years ago, 0:17:47 where it didn’t matter as much anymore 0:17:49 because Moore’s Law was working so well. 0:17:52 And so software today is just like massively inefficient. 0:17:53 There’s actually, I forget the name, 0:17:55 there’s something called Wirth’s Law, 0:17:58 which is, it was written at the time, 0:17:59 I don’t know if it still holds, 0:18:00 but it was, somebody did benchmarks 0:18:04 of you take Microsoft Office 2000 on a PC from 2000 0:18:07 and you take Microsoft Office 2007 on a PC from 2007 0:18:09 and every function you could do, 0:18:12 you could now do in twice the time, right? 0:18:13 So like literally like, 0:18:16 the old adage in tech in the 90s was 0:18:18 when Andy Grove was running Intel 0:18:19 and Bill Gates was running Microsoft, 0:18:21 it was Andy giveth in the form of Moore’s Law 0:18:25 and then Bill taketh away in the form of software bloat. 0:18:28 And so, and Wirth’s Law literally 0:18:31 is a mathematical proof of that. 0:18:33 And so like it’s become prime time again 0:18:35 for software programmers to get really good at optimization, 0:18:37 which is like what’s happening in AI world 0:18:39 and also in the cryptocurrency world. 0:18:40 And so with those different approaches, 0:18:42 it feels like we’ve got, 0:18:43 it feels like decades of advances ahead 0:18:46 that aren’t purely dependent on classic Moore’s Law. 0:18:48 – And because if we take the long-term, 0:18:52 like thinking of a 100 year span, to have prosperity 0:18:55 like we’ve seen would kind of require that computer power 0:18:59 sort of get cheaper every year, because absent that 0:19:01 it’s hard to imagine a world like that.
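(A minimal sketch of the scale-up versus scale-out shift Marc describes above: rather than waiting on a single faster chip, you split the same job across however many workers you light up. It is a generic Python illustration of the idea, not any particular cloud’s API; the chunking and worker counts are arbitrary.)

```python
# Generic scale-out sketch: the same job split across N worker processes.
# In a modern cloud the "workers" would be machines you simply provision
# more of, rather than one faster chip.
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk: range) -> int:
    # Stand-in for some per-chunk computation.
    return sum(i * i for i in chunk)

def scale_out(n: int, workers: int) -> int:
    step = n // workers
    chunks = [range(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(crunch, chunks))

if __name__ == "__main__":
    # Same answer with 1 worker or 8; only the wall-clock time changes.
    assert scale_out(1_000_000, 1) == scale_out(1_000_000, 8)
```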
0:19:04 So is your confidence that we could kind of keep this going 0:19:07 based on just sort of human ingenuity 0:19:11 or do you think that there’s some basic principles 0:19:14 of science that suggest that we’re just at the beginning 0:19:15 of what we can discover? 0:19:17 – Well, so Gordon Moore who invented Moore’s Law 0:19:18 as co-founder of Intel, 0:19:20 he always said Moore’s Law was interpreted 0:19:23 as a prophecy, and he always said it was not a prophecy. 0:19:25 It was a goal, right? 0:19:27 And it was a goal of basically what you could do 0:19:28 if you focused intensely, 0:19:30 if you focused the entire industry intensely 0:19:32 on a set of engineering optimizations, right? 0:19:34 Over a long period of time. 0:19:35 So he used to say it’s just like, 0:19:37 there’s nothing inevitable about it. 0:19:39 It’s a consequence of thousands and then tens of thousands 0:19:41 and then millions of engineers like working to actually 0:19:44 deliver on these kind of semi-arbitrary goals. 0:19:47 And so I think the answer to that is 0:19:49 we have many, many areas of improvement. 0:19:52 As I said, the problem is we don’t have the one that we had, 0:19:54 which is this transistor doubling kind of effect. 0:19:56 But we’ve got many, I mean, 0:19:58 there are far more engineers working on all this stuff today 0:20:00 than were working on it in 1965, 0:20:01 when he invented Moore’s Law, 0:20:03 or in 1995 when everybody bought a PC. 0:20:08 Like we have a lot of mind power going into this. 0:20:12 We’ve got a lot of different technological options. 0:20:14 We’ve got a lot of, you know, incredibly impressive work 0:20:15 happening all over the world. 0:20:16 The other thing is you can’t, you know, 0:20:19 you never like, you know, 0:20:21 one of those things like the transistor was not obvious 0:20:22 and then they invented that. 0:20:25 And then the integrated microchip was like not obvious 0:20:26 and then they invented that. 0:20:27 And so you don’t quite know, you know, 0:20:28 there are lots of technical proposals 0:20:30 for how to get to the next level of Moore’s Law. 0:20:31 You know, so there’s all kinds of theories 0:20:32 around optical computing 0:20:33 and then in the long run, biological computing. 0:20:34 – Quantum computing. 0:20:36 – Quantum computing, exactly. 0:20:38 And so over the course of the next like 20 years, 0:20:40 like, we’ll put it this way. 0:20:42 This is one of the world’s largest prizes, right? 0:20:45 If you’re the engineer who figures out 0:20:46 how to reaccelerate Moore’s Law 0:20:48 or how to shift computing onto a new substrate 0:20:49 like biology, that is the thing to do. 0:20:52 And so that’s the prize. 0:20:54 And that historically has been pretty motivating. 0:20:54 – Right. 0:20:57 So taking this kind of theme of marching forward 0:21:02 progressively, we have 4G, we’re talking about 5G. 0:21:05 So far 5G seems to be a faster 4G 0:21:06 with a lot of hype added to it. 0:21:09 There’s a technical specification for 5G, 0:21:11 which is really awesome. 0:21:14 You know, 100 gigabytes, two millisecond latency, 0:21:16 almost impossible. 0:21:19 Are you counting on that for the next decade 0:21:21 that we’re gonna have actual, 0:21:23 what they promise with 5G? 0:21:24 – Yes, I think there’s pretty good odds we will. 0:21:27 And the reason is because 5G has become 0:21:29 a national geopolitical battle. 0:21:31 Like it’s actually a very interesting twist.
0:21:33 It’s become actually a primary, like, you know, 0:21:35 if the Cold War between the US and the USSR 0:21:37 was like defined by the space race, 0:21:40 like at least the sort of nascent Cold War with China 0:21:43 is actually, a lot of it is around 5G, interestingly enough. 0:21:45 I mean, it could have been around a lot of things, 0:21:46 but it happens that it’s around 5G. 0:21:49 And so you now have nation states 0:21:51 that very, very badly need to win, 0:21:54 two big nation states in particular. 0:21:57 And so I think there’s gonna be a lot of, you know, 0:21:58 we’re gonna see the same kind of payoff we got from the space race, 0:22:00 like all the products that got, you know, 0:22:02 spun off from that, satellites and GPS 0:22:02 and everything else. 0:22:05 The other thing on 5G, you know, 0:22:07 people sometimes say 5G will lead to applications 0:22:08 they haven’t even thought of yet. 0:22:09 And I think that’s kind of true. 0:22:11 But I look at it a little bit differently. 0:22:13 It’s just a little bit like the Moore’s Law conversation 0:22:14 we were having, which is, 0:22:15 I look at it a little bit as a math kind of question, 0:22:18 which is there’s sort of three classic rules 0:22:20 for how networks scale 0:22:23 and how network scaling turns into value or usefulness. 0:22:24 And there’s sort of historically, 0:22:26 there’s what’s called Sarnoff’s law, 0:22:28 which was based on broadcast TV, 0:22:30 which is the value of a network is proportional 0:22:31 to the number of nodes, right? 0:22:33 So it scales with N, right? 0:22:36 So a TV network with 10 million viewers 0:22:38 is twice as valuable as a TV network of 5 million viewers. 0:22:39 That’s kind of the obvious one. 0:22:40 Then there was Metcalfe’s law, 0:22:42 which is basically the value of the network 0:22:44 is based on the number of point-to-point connections. 0:22:47 And that’s like how email works, right? 0:22:49 It just emails a person to person. 0:22:51 And that correlates to N squared. 0:22:52 So the value of the network rises 0:22:54 with N squared. 0:22:55 And then there’s this thing called Reed’s law, 0:22:57 which is called the group forming law, 0:22:59 which is the value of the network is proportional 0:23:00 to the number of groups and subgroups 0:23:02 that can form inside the network, 0:23:04 which turns out to be two to the N. 0:23:06 And if you wanna have fun on your plane flight home, 0:23:09 it’s like, just go on Excel and like chart 0:23:11 N, N squared and two to the Nth, right? 0:23:13 And two to the Nth just goes like straight vertical. 0:23:15 Like you can’t even put them on the same chart. 0:23:17 And two to the Nth is like what’s now happening 0:23:18 with like social networks, right? 0:23:20 So like Facebook groups and all these other things, 0:23:21 like WhatsApp groups and all these other things 0:23:23 people do with social networks and games 0:23:24 and all these other things. 0:23:26 And so those are like the three ways 0:23:27 in which network growth pays off. 0:23:29 And like all three are working now 0:23:32 based on broadband, wired broadband. 0:23:32 They’re all working, 0:23:35 you see it happening very much with mobile. 0:23:36 The introduction of 5G, 0:23:37 the way I think about it is it’s gonna turbocharge 0:23:39 those three network effects, in particular 0:23:42 that last one or those last two. 0:23:43 And so it’s gonna add a lot more N. 0:23:45 There’s just gonna be a lot more devices on the network.
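(Marc’s chart-it-in-Excel suggestion takes only a few lines of code. The three heuristics he names value a network of N nodes at roughly N for Sarnoff, on the order of N squared for Metcalfe, counted below as the N(N-1)/2 distinct pairwise connections, and 2^N for Reed’s group-forming law. The table is just an illustration of how quickly the group term dwarfs the others.)

```python
# Tabulate the three network-value heuristics: Sarnoff ~ N,
# Metcalfe ~ N^2 (pairwise links), Reed ~ 2^N (possible subgroups).
for n in (10, 20, 30, 40, 50):
    sarnoff = n
    metcalfe = n * (n - 1) // 2   # distinct point-to-point connections
    reed = 2 ** n                 # group-forming term
    print(f"N={n:>2}  Sarnoff={sarnoff:>3}  Metcalfe={metcalfe:>5}  Reed={reed:,}")
# By N=50 the Reed term is already about 1.1e15, effectively off the chart.
```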
0:23:47 There’re gonna be a lot more things 0:23:48 that those devices can do. 0:23:49 They’re gonna be a lot more point to point connections 0:23:51 that make sense to have. 0:23:52 There’s gonna be a lot more groups 0:23:54 that form, a lot more economic activity that happens. 0:23:56 – Something that was again, 0:23:58 we were expecting to happen, but didn’t, 0:24:01 was in the world of what’s sometimes called 0:24:02 the sharing economy. 0:24:05 There was a, after Airbnb and Uber, 0:24:08 there was a stampede of companies 0:24:09 that were gonna be Uber for X. 0:24:12 And then X was everything in the world. 0:24:13 Very few of them have succeeded. 0:24:16 Again, there was an expectation 0:24:18 we’d see more of them, but we haven’t. 0:24:21 So is that whole idea kind of at a dead end? 0:24:24 Is it just, we’re in a very slow disruption. 0:24:25 It’s gonna take a while. 0:24:28 Like the generational requirements 0:24:31 we were talking about with technology earlier. 0:24:32 Or is it something else? 0:24:35 So what do you think happened there? 0:24:36 And what are we looking at? 0:24:38 – Once again, I object to the question. 0:24:39 – Okay. 0:24:41 – Throw the gavel. 0:24:43 So I look at it a little bit differently, 0:24:45 which is, this is something we try hard to do 0:24:47 at our place. 0:24:48 It is very tempting. 0:24:50 And we do have this conversation all the time 0:24:51 at our place of like, okay, what about the trend? 0:24:52 What about the theme? 0:24:54 Right, what about the variations on the theme? 0:24:56 Kind of as you said, and this is something that happened. 0:24:58 When something wins big, you always get this kind of, 0:25:00 we describe it as kind of the Hollywood model 0:25:03 of, it’s like, what’s your new movie about? 0:25:05 It’s Pretty Woman Meets the Rock, right? 0:25:06 Or whatever. 0:25:10 And so in the Valley, it’s Uber for X, 0:25:11 or most recently, Superhuman for X, 0:25:12 which I’m very excited about, 0:25:13 is one of the big new trends 0:25:16 after another one of our companies. 0:25:20 So, but I don’t think it’s really that. 0:25:22 That’s not how the great ideas arrive. 0:25:23 They don’t look like that. 0:25:25 They look like very specific, 0:25:27 they look like very specific theories, 0:25:28 not general theories. 0:25:29 So they tend to be very specific 0:25:32 to the details of the market involved. 0:25:33 One of the things that I think we’ve learned 0:25:36 about ride sharing, why ride sharing works so well, 0:25:37 I mean, it worked well for many reasons. 0:25:38 One of the reasons it works so well as an idea 0:25:39 is because as long as the driver’s good, 0:25:41 as long as they’re rated at a certain level, 0:25:42 it doesn’t really matter who the driver is. 0:25:44 So like one of the classic examples 0:25:46 was Uber for cleaning your house or your apartment. 0:25:48 And it just, it turns out you just don’t want 0:25:49 a different person over every week 0:25:52 to clean your house, like, it’s a problem. 0:25:54 And so there’s a lot of these kinds of, 0:25:56 I would say, simple, you know, 0:25:58 sort of the simple applications to the idea 0:25:59 that don’t necessarily work. 0:26:01 Now, by trying all those ideas, 0:26:02 you kind of map the idea space 0:26:03 and you start to get a better sense 0:26:04 of like what your overall structure is.
0:26:06 And I think what's happening now 0:26:08 is you're starting to see another set of companies 0:26:09 coming out the other end 0:26:10 that have kind of fully internalized that lesson 0:26:12 and have figured out new models that work. 0:26:13 And so my favorite example, 0:26:15 one of my companies called Honor, 0:26:17 so Honor, you might loosely 0:26:19 think of it as kind of Uber for senior care, 0:26:22 for in-home care for seniors. 0:26:24 It's a loose model. 0:26:25 Actually, it turns out it's a very loose model 0:26:26 for a couple of reasons. 0:26:29 One is it's really deeply not a fungible service. 0:26:30 Like if you have an aging parent, 0:26:32 you actually very much don't want somebody different 0:26:34 to show up all the time. 0:26:35 You want the same person. 0:26:36 And so in that case, for example, 0:26:39 Honor actually has full-time, 0:26:42 salaried employment relationships with the workers. 0:26:43 Right, which of course is very different 0:26:45 than the Uber and Lyft model. 0:26:47 It actually turns out the matching problem 0:26:48 is much more complicated. 0:26:50 Right, because when you're matching human beings 0:26:52 in somebody's home, there's like 20 variables 0:26:53 that you need to match on 0:26:55 so that everybody's comfortable with the experience. 0:26:58 As an example, in some cases, you literally need people 0:27:00 with the physical strength to be able to lift people 0:27:00 when you're caring for them. 0:27:02 You do want to be able to do this kind of 0:27:04 multi-dimensional matching, and that model's really working. 0:27:07 And so I think we're gonna see a whole set of these. 0:27:09 Like I think there's a big kind of vista of exploration 0:27:11 that's gonna happen from here. 0:27:12 And I would suspect there will be dozens 0:27:14 or hundreds of new models that people figure out. 0:27:16 – So speaking of new models, 0:27:18 do you ever think about new models 0:27:19 for the VC industry itself, 0:27:22 and how you would apply the principles of innovation 0:27:26 and disruption to what you do in general? 0:27:28 So as you look out 30 years, 0:27:30 what kinds of innovations would you expect 0:27:33 in the basic business that you're in? 0:27:36 – Yeah, so there's something very timeless about venture, 0:27:37 which is there's actually a new book out called, 0:27:39 literally called VC. 0:27:41 It actually tells the story that has been kind of hard 0:27:43 to get at for a long time in a really clear way, 0:27:45 which is the modern venture model is actually, 0:27:46 one of the historical precedents for it 0:27:50 was actually how whaling expeditions got financed 0:27:53 in the 1600s, so coming up on 400 years ago. 0:27:58 So whaling, it was literally like, okay, 0:28:00 you're gonna have like a ship with a captain and a crew 0:28:02 that's gonna go out and try to like bring back a whale. 0:28:03 Right, and so it's like, problem number one is like, 0:28:05 only two thirds of the ships are gonna come back, right? 0:28:07 So like high failure rate. 0:28:10 Two is like, okay, what the ship is, really matters, 0:28:11 who the captain is really matters, 0:28:13 how do you know who a good captain is, 0:28:15 and then what's a good crew, 0:28:18 and are they gonna be willing to follow the captain?
0:28:20 And then there's all these like strategy questions, 0:28:21 like do you want the captain who knows 0:28:23 where all the whales have been caught recently, 0:28:25 so they go there, or do you want the captain that says, 0:28:27 no, that area's gonna be overfished, 0:28:29 do you wanna go someplace else? 0:28:31 And so literally all the whaling voyages, 0:28:34 like in the colonies 400 years ago, 0:28:37 got financed with basically angel syndicates, 0:28:39 basically venture capital effectively. 0:28:41 And then literally the term carry, 0:28:42 which is sort of how VCs get paid, 0:28:44 the so-called carried interest, 0:28:46 which is like the 25% that you make, 0:28:48 or the profits that you share, 0:28:52 the term carry actually was the percentage of the whale 0:28:54 that the ship carried. 0:28:55 It was literally physical carry. 0:28:56 It was literally that part of the whale, 0:28:58 like that's where that term came from. 0:29:01 And so there's a timelessness to the art of trying 0:29:04 to figure out how to finance these kind of expeditions 0:29:08 into the unknown that is likely to endure. 0:29:09 The big question for me is, 0:29:11 how will the shape of the companies, 0:29:14 or let's say the ventures themselves, change, right? 0:29:17 And so today there's like a well-known, understood template 0:29:20 for kind of the prototypical Silicon Valley venture 0:29:23 investment, and it's like a company in a certain place. 0:29:25 It's a C corporation, it's domiciled in the US, 0:29:26 it's financed a certain way, 0:29:28 it has certain types of employees, 0:29:30 a certain relationship with its employees, and so forth. 0:29:35 30 years from now, are we gonna be financing companies here, 0:29:40 or anywhere, or in two places, 50 places, 500 places, 0:29:42 are the companies still gonna have a physical place, 0:29:44 or are they gonna be fully virtual? 0:29:45 Are they gonna be companies, 0:29:47 or are they all gonna be blockchains, right? 0:29:49 Are they gonna have actual employment relationships, 0:29:51 or are they gonna have basically developers 0:29:52 incented through cryptocurrency? 0:29:53 That's a real model. 0:29:56 And so I think the big question is like, 0:29:57 we don't even know what the shape of companies 0:29:59 is gonna look like, or ventures is gonna look like, 0:30:00 in 30 years. 0:30:01 So if I could figure that out, 0:30:03 then I could answer what venture looks like. 0:30:05 Without that, I think it's hard to say. 0:30:08 – Okay, so we were tempted to do a little bit 0:30:10 of long-term thinking, and long-term thinking 0:30:15 is sort of rare and often ignored, 0:30:20 whereas civilizations demand it as being necessary. 0:30:22 So do you have any suggestions 0:30:25 about how long-term thinking could be applied 0:30:28 in Silicon Valley, and whether you have even 0:30:30 any suggestions to the people in this room 0:30:34 about how they could use long-term thinking? 0:30:36 – Yeah, so the thing I've always found about long, 0:30:39 I think long-term thinking is of course central. 0:30:40 Essentially one of the things about the valley 0:30:42 that I find outsiders miss the most 0:30:45 is it feels like it's all moving so fast, 0:30:47 and yet like any of the important companies 0:30:48 and any of the important products 0:30:49 take like a decade or more to build. 0:30:51 And so it's like everything important 0:30:52 basically takes a long time. 0:30:54 And so a lot of it actually feels quite slow.
0:30:58 And so long-term orientation is absolutely necessary, 0:30:59 and I think we probably all agree 0:31:01 there's not enough of it in the world. 0:31:03 The thing about long-term thinking I've found is like, 0:31:06 it's really easy if you know the thing is gonna work. 0:31:10 Like, boy, that's completely straightforward. 0:31:12 Like let's go on a 10-year journey to a place 0:31:14 where we know it's gonna be great. 0:31:15 The problem is it's long-term thinking 0:31:17 crossed with uncertainty, right? 0:31:18 And quite possibly fatality, 0:31:20 like the thing may just simply not work 0:31:22 for any of a thousand reasons. 0:31:23 And so that's the issue. 0:31:25 And so I think the issue is less around long-term thinking. 0:31:27 I think the issue is more about how to deal with risk 0:31:28 and how to deal with uncertainty 0:31:31 and how to make really big consequential decisions 0:31:35 in the face of literally an unknowable future landscape. 0:31:37 And for there, I mean, 0:31:39 this is kind of the one kind of secret weapon of venture. 0:31:41 It's like venture is the worst of all asset classes 0:31:44 in a lot of ways in that it's illiquid 0:31:45 and it's like incredibly volatile 0:31:48 and it's like hit and miss in this kind of crazy way. 0:31:50 The one thing that venture really has going for it 0:31:52 as an asset class is we have the concept 0:31:55 of the portfolio kind of wired into the model, 0:31:56 in which you just kind of assume, 0:31:58 in top-end venture, you just kind of assume, 0:31:59 fundamentally, half the companies 0:32:01 are gonna work and half of them aren't. 0:32:02 Right, and then the classic, right? 0:32:03 The classic, the cliche is like, the ones that work 0:32:05 then have to work enough so that they pay 0:32:09 for the ones that don't, to make the whole enterprise work. 0:32:11 And so if you can adapt yourself 0:32:14 from the mentality of will this thing work, right, 0:32:19 to will this portfolio of things basically pay off, right? 0:32:20 Will enough things work 0:32:21 that they'll actually pay for the portfolio? 0:32:22 Then at that point, you can start to make risk 0:32:26 a somewhat tractable thing to contemplate. 0:32:28 It's still hard to divorce yourself emotionally from it, 0:32:30 right, 'cause it's just like it's still like absolutely, 0:32:31 you know, it's just terrible 0:32:33 when any of the individual things don't work, 0:32:35 but at least you have a conceptual framework 0:32:37 for you to be able to make 10 long-run bets 0:32:39 and being able to get to the other side. 0:32:41 Now, the response that I often get to that is, oh, 0:32:43 that's great if you're a VC, you have a portfolio, 0:32:45 the problem is if you're a founder or a CEO, 0:32:46 like you don't get that, right? 0:32:48 You have the much harder version of the problem, 0:32:50 which is you're on a one-way journey, 0:32:52 like you're the captain of the whaling ship. 0:32:53 Yeah, there's all those other captains over there, 0:32:57 but like, you know, they're on their own, you're on your own. 0:32:58 Even there though, you know, 0:33:01 the best run companies tend to run experiments, 0:33:05 they tend to run multiple experiments against their goals, 0:33:07 and they certainly run those experiments sequentially 0:33:09 as they kind of, you know, try to figure out what works, 0:33:11 and in a lot of cases, they run experiments in parallel, 0:33:12 as they're trying to test different things.
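To make the "half will work, half won't" arithmetic concrete, here is a small illustrative simulation in Python. The portfolio size, check size, and return multiples are invented assumptions for the sketch, not numbers from the conversation.

```python
# Toy illustration of portfolio thinking: any single company may go to zero,
# but the relevant question is whether the portfolio as a whole pays off.
# All numbers (portfolio size, outcome distribution) are made-up assumptions.
import random

def simulate_fund(n_companies: int = 10, check: float = 1.0, seed: int = 0) -> float:
    """Return the fund's overall multiple on invested capital for one run."""
    rng = random.Random(seed)
    returned = 0.0
    for _ in range(n_companies):
        draw = rng.random()
        if draw < 0.5:
            multiple = 0.0                       # roughly half fail outright
        elif draw < 0.9:
            multiple = rng.uniform(1.0, 3.0)     # modest outcomes
        else:
            multiple = rng.uniform(10.0, 50.0)   # rare outliers carry the fund
        returned += check * multiple
    return returned / (n_companies * check)

if __name__ == "__main__":
    runs = [simulate_fund(seed=s) for s in range(1000)]
    print(f"mean fund multiple: {sum(runs) / len(runs):.2f}x")
    print(f"runs returning more than 1x: {sum(r > 1 for r in runs) / len(runs):.0%}")
```

The point is only the framing above: individual outcomes are binary and brutal, but a portfolio of ten long-run bets can still be reasoned about as a whole.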
0:33:14 And so I also think this kind of mentality 0:33:15 of sort of portfolio risk also applies 0:33:16 to how you run a company, 0:33:18 which is you want to basically, 0:33:20 you want to have a great deal of conviction 0:33:21 about where you're trying to head, 0:33:23 but you want to have a lot of flexibility inherent 0:33:24 in how you're going to get there, right, 0:33:25 and what the tactics are, 0:33:26 and then you want to be able to run 0:33:27 a lot of experiments against that, 0:33:29 and you can kind of diversify your risk 0:33:31 of any one theory by doing that. 0:33:33 – And that's what governments are in some senses, 0:33:37 they have a portfolio of different kinds of prospects 0:33:40 about the future, bets, I mean, in some senses. 0:33:43 – So you think that's an optimistic view of what governments do. 0:33:46 – Yeah, so, I mean, that's what they are, 0:33:49 and they're averse to risk, unfortunately. 0:33:50 – Well, the problem, the problem, 0:33:51 the problem governments have with risk 0:33:53 is like the N of one, right? 0:33:56 So there's only one government per, right? 0:33:57 We only get to run, you know, 0:33:59 I mean, excepting federalism, which has been a huge advantage, 0:34:00 I think, for the U.S., but like, you know, 0:34:02 the U.S. national government only gets to run one scenario. 0:34:03 – Right. 0:34:04 – And running experiments in the population 0:34:06 is not necessarily well-received. 0:34:07 – Right, 'cause you can't tolerate failure. 0:34:11 – Yeah, right, yeah, failure has real consequences, so. 0:34:14 – So there's currently introspection not only 0:34:16 about government, but also about capitalism, 0:34:21 and capitalism so far has depended on growth, 0:34:24 and growth is something that VCs pay attention to, 0:34:30 but we're now wondering what's the minimum amount 0:34:32 of growth that you might need to have prosperity. 0:34:33 Can you have prosperity with low growth? 0:34:36 Can you have prosperity with flat growth? 0:34:40 Do you have any insights about that 0:34:42 at the civilizational scale? 0:34:43 – Yeah, so I think, and actually I'd say 0:34:45 that the issue is even more intense these days, 0:34:47 'cause there's now very prominent people 0:34:49 in public life arguing that growth is bad, right? 0:34:53 And that in fact it's ruinous and destructive, 0:34:54 and that the right goal might actually be 0:34:56 to have no growth or to actually go into negative growth, 0:34:58 and that's especially a very common view 0:34:59 in the environmental movement. 0:35:02 So I'm a very strong proponent, a very strong believer 0:35:04 that growth is absolutely necessary, 0:35:06 and I'll come back to the environmental thing in a second 0:35:08 'cause it's a very interesting case of this. 0:35:10 I think growth is absolutely necessary, 0:35:11 and I think the reason growth is absolutely necessary 0:35:13 is because you can fundamentally have two different 0:35:15 mindset views of how the world works, right? 0:35:18 One is positive sum, which is rising tide 0:35:20 lifts all boats, we can all do better together, 0:35:24 and the other is zero sum, right? 0:35:25 Where for me to win, somebody else must lose, 0:35:26 and vice versa.
0:35:29 And the reason I think economic growth is so important 0:35:32 at core is because if there is fast economic growth, 0:35:34 then we have positive sum politics, 0:35:36 and we start to have all these discussions 0:35:38 about all these things that we can do as a society, 0:35:40 and if we have zero growth, 0:35:44 if we have flat growth or no growth or negative growth, 0:35:48 all of a sudden the politics becomes sharply zero sum. 0:35:50 And you just kind of see this 0:35:53 if you kind of track the political climate, 0:35:55 basically it's the wake of every recession, right? 0:35:57 It's just that in the wake of every economic recession, 0:36:00 the politics just go like seriously negative 0:36:03 in terms of thinking about the world as zero sum. 0:36:05 And then when you get a zero sum outlook in politics, 0:36:07 that's when you get like anti-immigration, 0:36:08 that's when you get anti-trade, 0:36:09 that's when you get anti-tech. 0:36:11 If the world's not growing, then all that's left to do 0:36:14 is to fight over what we already have. 0:36:16 And so my view is like, you need to have economic growth. 0:36:18 You need to have economic growth for all of the reasons 0:36:20 that I would say right wingers like economic growth, 0:36:21 which is you wanna have higher levels of material 0:36:23 prosperity, more opportunity, more job creation, 0:36:25 all those things. 0:36:28 You wanna have economic growth for the purpose 0:36:30 of having like sane politics, 0:36:32 like a productive political conversation. 0:36:34 And then I think the kicker is you also want 0:36:36 economic growth actually for many of the things 0:36:38 that left wing people want. 0:36:39 One of the best books this year, 0:36:41 new books this year, is by Andrew McAfee, 0:36:44 who has written a book called, I think, More From Less. 0:36:46 And it's actually a story of a really remarkable thing 0:36:48 that a lot of people are missing about what's happening 0:36:52 with the environment, which is globally, carbon emissions 0:36:55 are rising and resource utilization is rising. 0:36:58 In the US, carbon emissions and resource utilization 0:36:59 are actually falling. 0:37:02 And so in the US, we have figured out how to grow our economy 0:37:04 while reducing our use of natural resources, 0:37:07 which is a completely unexpected twist, right, 0:37:08 to the plot of what kind of, 0:37:10 if you listened to environmentalists in the 60s and 70s, 0:37:11 like nobody predicted that. 0:37:14 And it turns out, he talks about this in the book, 0:37:16 but it turns out basically what happens is economies, 0:37:18 when economies advance to a certain point, 0:37:20 they get really, really good at doing more with less, right? 0:37:23 They get really, really good at efficiency. 0:37:25 And they get really good at energy efficiency, 0:37:27 they get really efficient in their use of environmental resources, 0:37:29 they get really good at recycling 0:37:30 in lots of different ways. 0:37:31 And then they get really good at what's called 0:37:33 dematerialization, which is what is happening 0:37:35 with digital technology, right? 0:37:37 Which is basically taking things that used to require 0:37:38 atoms and turning them into bits, 0:37:41 which inherently consumes fewer resources.
0:37:42 And so what you actually want, 0:37:44 like my view on environmental issues is like, 0:37:46 you've got a global problem, 0:37:48 which is you have too many people in too many countries 0:37:52 stuck in kind of mid-industrial revolution, 0:37:54 they've got to grow to get to the point 0:37:55 where they're in a fully digital economy, 0:37:57 like we are, precisely so that they can start 0:38:00 to have declining resource utilization, right? 0:38:01 I mean, the classic example is energy. 0:38:04 Like, you know, the big problem with energy emissions globally, 0:38:06 like a huge problem with emissions and with health 0:38:09 from emissions, is literally people burning wood, 0:38:11 like in their houses, right, to be able to heat and cook. 0:38:13 And what you want to do is you want to go to like 0:38:15 hyper-efficient solar or ideally nuclear, right? 0:38:16 You want to go to these like super advanced forms 0:38:18 of technology. 0:38:20 So you want that, and then by the way, 0:38:22 if you want like a big social safety net, 0:38:23 you know, and all the social programs, 0:38:25 you want to pay for that stuff. 0:38:27 You also want economic growth because that generates taxes 0:38:27 that pay for that stuff. 0:38:30 And so like growth is the single kind of biggest 0:38:32 form of magic that we have, right? 0:38:33 To be able to like actually make progress 0:38:35 and hold the whole thing together. 0:38:38 – And to your point about the developing countries, 0:38:40 I think the idea of leapfrogging technology is a myth. 0:38:41 It doesn't really work. 0:38:42 You actually have to, 0:38:45 if you want to have a high tech infrastructure, 0:38:48 you actually need the intermediate roads, clean water. 0:38:50 You can't skip over that. 0:38:52 And so they all need to be built out in order to 0:38:54 have that prosperity at the end. 0:38:58 So, you know, it seems like you don't worry about much. 0:38:59 I don't worry about much. 0:39:02 But one thing I do worry about is cyber conflict, 0:39:05 cyber war, partly because I think we have no consensus 0:39:07 about what's allowable. 0:39:09 Does this worry you at all? 0:39:11 – So I think there's a lot of unknownness to it. 0:39:13 I think people are trying to figure this out, 0:39:16 but it's a complex issue to grapple with. 0:39:18 I will make an optimistic argument, 0:39:20 which is gonna sound a little strange. 0:39:24 If you kind of project forward what's happening 0:39:26 with cyber generally, with information 0:39:28 operations of different kinds, you know, 0:39:31 but also with drones, you know, UAVs. 0:39:35 And then also with, you know, unmanned fighter jets, right? 0:39:39 Unmanned, you know, ships increasingly being built. 0:39:42 You know, there'll be unmanned submarines at some point. 0:39:44 If you project this stuff forward, 0:39:46 you start to get this very interesting potential world 0:39:49 in which basically the way I think about it 0:39:51 is like all human conflict between peoples 0:39:53 or between nation states up until now 0:39:56 has been basically throwing people at each other, right? 0:39:57 Throwing soldiers at each other 0:39:59 and like letting them make the decision of who to shoot 0:40:00 and like hoping they don't get shot, 0:40:02 like with very serious repercussions 0:40:03 of all those individual human decisions. 0:40:06 You do have the prospect of basically a new world 0:40:07 of both offense and defense.
0:40:08 It's like completely motorized, 0:40:10 completely mechanized, completely software driven 0:40:11 and technology driven. 0:40:12 And a lot of people, it's just immediately like, 0:40:14 oh my God, that's horrible. 0:40:15 You know, Terminator, like, you know, Skynet, 0:40:17 like, you know, this is just the worst thing ever. 0:40:19 There's a novel called "Kill Decision." 0:40:21 If you want the dystopian theory, 0:40:22 there's a novel called "Kill Decision." 0:40:23 – By Daniel Suarez. 0:40:25 – Daniel Suarez, that extrapolates the drones forward, 0:40:28 and it'll keep you up late at night. 0:40:30 But the optimistic view would be like, boy, 0:40:33 isn't it good that there aren't human beings involved? 0:40:33 Isn't it good? 0:40:35 Like if the machines are shooting at each other, 0:40:36 like isn't that good? 0:40:38 Isn't that better than if they're shooting at us? 0:40:39 And by the way, and by the way, 0:40:41 I would go so far as to say like, 0:40:43 I don't know that I'm in favor of like the machines 0:40:45 making like kill decisions, like decisions on who to shoot. 0:40:48 But like the one thing I know is humans do that very badly. 0:40:50 Very, very, very badly. 0:40:51 I'm the opposite of pro war. 0:40:53 I don't want to see any of this stuff actually play out. 0:40:55 But if it has to play out, maybe having it be software 0:40:56 and machines is going to be actually a better outcome. 0:40:57 – Right. 0:41:00 I mean, it's kind of weird that we don't allow, 0:41:02 we don't want machines to kill humans. 0:41:03 We want other humans to kill humans. 0:41:05 – We want 18 year olds. 0:41:07 We want to take 18 year olds out of their homes, right? 0:41:08 And we want to put a gun in their hand 0:41:10 and send them someplace and tell them to decide who to shoot. 0:41:13 Like that that is going to go down in history 0:41:14 as having been a good idea 0:41:16 just strikes me as like unlikely. 0:41:19 – So we have only time for one last question, which is, 0:41:22 I'm usually, I claim to be the most optimistic person 0:41:24 in the room, but with you sitting across from me, 0:41:26 I don't think that may be true. 0:41:29 What is your optimism based on? 0:41:34 – So my optimism, okay, let me get cosmic for a second. 0:41:35 – Why not? 0:41:35 – I guess we're here. 0:41:36 – It's the last question. 0:41:37 – It's the last question. 0:41:41 – So the science fiction authors always talk about 0:41:42 what's called the singularity. 0:41:45 This kind of singularity, so the singularity is basically 0:41:46 what happens when the machines get so smart 0:41:48 that all of a sudden everything goes into exponential mode 0:41:52 and all of a sudden the entire world changes. 0:41:54 So my reading of history is actually 0:41:56 that we're actually in the singularity already, 0:41:59 and that it actually started 300 years ago. 0:42:03 And if you look at basically, if you look at basically 0:42:05 any chart of human welfare over time, 0:42:07 and you can look at, child mortality's an obvious one, 0:42:10 but there's many, many, many others, 0:42:12 and you just look at progress on that metric. 0:42:13 Just look at child mortality as an example, 0:42:15 and it's just basically flat, flat, flat, flat, flat, flat, 0:42:17 flat for like 50,000 years. 0:42:20 And this is the famous Thomas Hobbes, 0:42:22 life is nasty, brutish and short, right?
0:42:23 It was just like the thing, 0:42:25 like everything was terrible everywhere, 0:42:28 all the time, forever, the end, 0:42:29 until 300 years ago and all of a sudden 0:42:31 there's this knee in the curve. 0:42:33 And then all the indicators of human welfare, 0:42:36 not uniformly across the planet, 0:42:38 but in the societies that were making progress, 0:42:41 the societies that were making progress first, 0:42:42 all of a sudden, all those indicators 0:42:43 of human welfare went up and to the right, right? 0:42:45 And it all corresponded, by the way, 0:42:46 to economic growth. 0:42:48 But it was also, right, it was the Enlightenment, 0:42:49 it was the rise of democracy, 0:42:50 it was the rise of markets, 0:42:53 it was the rise of rationality, the scientific method, 0:42:57 by the way, human rights, free speech, free thought, right? 0:42:58 And they all kind of catalyzed, right, 0:43:01 around 300 years ago and they've been making their way 0:43:04 to the world in sort of increasing concentric circles 0:43:05 kind of ever since. 0:43:08 And so we have, I would argue like we have the answers, 0:43:11 like we actually don't need new discoveries 0:43:12 to have the future be much better, 0:43:13 we actually know how to do it, 0:43:16 which is to apply basically those systems. 0:43:20 And basically, contrary to the sort of constant temptation 0:43:23 from all kinds of people to try to compromise 0:43:25 on these things or subvert these things, 0:43:26 basically double down on these systems 0:43:27 that we know work, right? 0:43:28 So double down on economic growth, 0:43:30 double down on human rights, 0:43:33 double down on markets, on capitalism, 0:43:35 double down on the scientific method. 0:43:37 Fix science, like we got as far as we did with science 0:43:40 actually being pretty seriously screwed up right now 0:43:43 with the replication crisis, like so we should fix that. 0:43:44 And then science will all of a sudden 0:43:45 start to work much better. 0:43:49 Technology, right, use of technological tools. 0:43:52 So we literally have the systems, like we know how to do this, 0:43:54 we know how to make the planet much better in every respect. 0:43:57 And so what we just need to do is keep doing that. 0:44:00 And then what I try to do when I read the news 0:44:02 is, notwithstanding everything that's going on, 0:44:03 is basically try to look through whatever's happened 0:44:05 in the moment, try to look underneath and kind of say, 0:44:09 okay, are those fundamental systems actually still working? 0:44:11 Like is the world getting more democratic or less, right? 0:44:13 Is free speech spreading or receding, right? 0:44:16 Are markets expanding or shrinking, right? 0:44:17 Are more and more people able to participate 0:44:19 in a modern market economy or not? 0:44:20 And, you know, those indicators generally 0:44:23 are all still up and to the right. 0:44:25 – So let's go out and make the world better. 0:44:26 Thank you. 0:44:27 – Yeah, good, good. 0:44:28 Thank you everybody. 0:44:31 (audience applauding)
Many skeptics thought the internet would never reach mass adoption, but today it’s shaping global culture, is integral to our lives — and it’s just the beginning.
In this conversation from our 2019 innovation summit, Kevin Kelly (Founding Executive Editor, WIRED magazine) and Marc Andreessen sit down to discuss the evolution of technology, key trends, and why they’re the most optimistic people in the room.
***
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
0:00:03 The content here is for informational purposes only, 0:00:05 should not be taken as legal, business, tax, 0:00:06 or investment advice, 0:00:09 or be used to evaluate any investment or security, 0:00:11 and is not directed at any investors 0:00:14 or potential investors in any a16z fund. 0:00:19 For more details, please see a16z.com/disclosures. 0:00:21 – Hi everyone, welcome to the a16z Podcast. 0:00:23 Today's episode is all about CFIUS, 0:00:25 the Committee on Foreign Investment in the United States, 0:00:27 and their proposed updates to FIRRMA, 0:00:30 or the Foreign Investment Risk Review Modernization Act 0:00:34 of 2018, which took place in September 2019. 0:00:36 The conversation was hosted by Andreessen Horowitz 0:00:38 as part of an event for founders and others, 0:00:41 with general partner Katie Haun interviewing Michael Leiter, 0:00:44 a partner at the law firm Skadden, Arps, 0:00:47 who covers national security, cybersecurity and privacy, 0:00:50 CFIUS and more, which is what this Q&A is all about, 0:00:52 covering what it involves and doesn't, 0:00:54 to how to think about and structure your business 0:00:56 and partnerships strategically as a result. 0:01:00 But the conversation begins with what CFIUS is. 0:01:02 – So CFIUS stands for the Committee on Foreign Investment 0:01:04 in the United States. 0:01:08 And the basic function of CFIUS is to review 0:01:11 any foreign investment in a U.S. business 0:01:13 that produces national security concerns. 0:01:15 So that sounds relatively basic. 0:01:17 It's this big interagency body, 0:01:19 everything in Washington is interagency, 0:01:21 but you know– – How many agencies? 0:01:22 – And how many agencies? – 13 agencies. 0:01:26 It's run by the Department of the Treasury, unsurprising 0:01:28 because it's about foreign investment in the United States. 0:01:31 And it has traditionally sort of split 0:01:33 between two different camps. 0:01:35 Historically, it was those parts of the U.S. government 0:01:38 that wanted foreign investment in the United States, 0:01:40 Treasury, the U.S. Trade Representative, 0:01:43 State Department, like their job is to do this. 0:01:47 And those agencies that didn't necessarily want investment, 0:01:51 or were at least more concerned about security. 0:01:53 So think Department of Defense, Department of Justice, 0:01:55 now Homeland Security and elements 0:01:57 of the intelligence community. 0:02:00 Central Intelligence Agency, NSA, 0:02:02 organizations like that, FBI. 0:02:04 So once upon a time, we were really, really worried 0:02:06 about the semiconductor industry moving 0:02:08 from the United States to Japan. 0:02:10 So CFIUS was fundamentally created 0:02:13 to start to limit that movement of technology 0:02:16 from the U.S. to Japan, and that was generally being done 0:02:18 through limiting Japanese acquisition 0:02:21 of certain businesses here in the United States. 0:02:23 So that's now pretty ancient history. 0:02:29 Fast forward to when Katie and I were in government, 2006, 0:02:31 what is CFIUS concerned with then? 0:02:34 It's post-9/11. Dubai Ports World 0:02:37 decides that they want to buy American ports. 0:02:41 Washington says, wait, that has to be bad too. 0:02:44 So in the post-9/11 era, CFIUS honestly 0:02:47 focuses on things like that, worrying 0:02:51 about critical infrastructure and the Emiratis buying that.
0:02:54 Fast forward to today, and the joke I often use 0:02:56 is, although it's the Committee on Foreign Investment 0:02:59 in the United States, largely, it's 0:03:01 Chinese foreign investment in the United States. 0:03:05 And what has changed is not just that political lens, 0:03:07 but what's really changed and what 0:03:11 starts to really affect, I hate to say it, all of you, 0:03:17 is the changes in technology, the expansion of data, 0:03:21 the ability to use data in a huge variety of ways 0:03:25 that was never present 20 or 30 years ago, 10 years ago. 0:03:28 That now means that the US government is 0:03:32 focused on a huge range and fundamentally every sector 0:03:33 of society. 0:03:36 So CFIUS is not limited to technology. 0:03:38 It's not limited to aerospace and defense 0:03:40 and military technology. 0:03:42 It's true, not everything is covered by CFIUS, 0:03:46 but you have to assume it is if you're involved in technology. 0:03:48 You touch data. 0:03:50 You own any real estate. 0:03:52 You do any work with the US government, 0:03:55 or you have anything else. 0:03:59 You can even get into the world of dog food. 0:04:03 Ooh, do the SEALs buy dog food from you 0:04:05 for their bomb-sniffing dogs? 0:04:09 So there's no limitation on the sector. 0:04:11 There's no limitation on the size of the deal. 0:04:14 You mentioned that this all came about during the Japanese 0:04:18 semiconductor era, but now it's undergone some reforms 0:04:21 and then some new implementing proposed regulations. 0:04:23 So I want to talk about those reforms. 0:04:26 But before I do, you just mentioned data. 0:04:30 What about verticals like FinTech or crypto companies 0:04:31 who might have PII? 0:04:36 Yeah, so really, we've seen this for several years now, 0:04:40 a focus on personally identifiable information. 0:04:43 And CFIUS has looked at this through a very broad lens. 0:04:44 So once upon a time, it was, again, 0:04:47 if I knew everything about Katie in one place, 0:04:49 maybe CFIUS would care. 0:04:49 You probably do. 0:04:51 I do. 0:04:52 And I have stories for you. 0:04:54 So there are a number of things that 0:04:56 have happened which have really highlighted 0:05:00 how sensitive CFIUS is about data that is collected. 0:05:02 First of all, there's this thing called cybersecurity. 0:05:06 And what we've seen over the past, again, five to 10 years, 0:05:09 is obviously not just the ubiquity of data, 0:05:12 but the key vulnerability of data across every sector. 0:05:17 And we've seen countries, especially China, Russia, 0:05:21 and others, use cyber attacks and cyber 0:05:23 penetrations for their benefit. 0:05:25 And not just their national security benefit, 0:05:27 but also their economic benefit. 0:05:30 Everyone remember the OPM hack, Office of Personnel Management? 0:05:33 So if you start to put these pieces together, 0:05:35 you understand how foreign adversaries 0:05:38 can take advantage of lots and lots of data 0:05:41 in different places and piece that together. 0:05:44 So in terms of FinTech, all financial information, 0:05:47 that's been a clear area of focus for CFIUS. 0:05:49 And the good news is these new regulations 0:05:55 do sort of carve out standard consumer credit card information 0:05:57 from being an area of specific concern. 0:05:59 But beyond that, they specifically 0:06:04 cite things like credit reports, broader financial data.
0:06:08 So I think anyone in this sector working in FinTech 0:06:12 is inevitably going to have more than just your 16-digit credit 0:06:13 card number. 0:06:16 And that will absolutely be considered sensitive data 0:06:18 to CFIUS. 0:06:20 CFIUS also, in its newest regulations, 0:06:23 tries to say, OK, well, it's not really everything. 0:06:25 It's only if you have a lot of it. 0:06:28 Everyone remember Austin Powers, how much money he asks for? 0:06:30 $1 million. 0:06:35 And they're like, guys, $1 million isn't a lot. 0:06:37 So what does CFIUS come up with for the number 0:06:39 of people's personal data? 0:06:42 1 million people. 0:06:43 That's not a joke. 0:06:44 It really is. 0:06:47 And it's not even if you have a million people's data. 0:06:49 It's if you have a stated business purpose 0:06:51 to get to a million people. 0:06:53 Who here doesn't? 0:06:55 So again, these are draft regulations. 0:06:58 But it gives you a sense of, to some extent, 0:07:00 the dichotomy between how you're seeing business, 0:07:02 how you're trying to grow a business, 0:07:06 and how, from a Washington national security perspective, 0:07:08 everything starts to get encompassed. 0:07:11 And also in financial technologies, as you know, 0:07:13 I have a real interest in crypto. 0:07:15 Obviously, I want to talk a little bit about how 0:07:18 CFIUS reforms could affect the crypto industry. 0:07:22 I'll save that for later in our discussion of what 0:07:23 is a US business. 0:07:25 And indeed, in the context of crypto, 0:07:27 what is even a business at all? 0:07:29 But tell us about the reforms. 0:07:32 Like, what's new? 0:07:35 And why are we hearing about CFIUS so much more? 0:07:38 I think we've seen two major factors which 0:07:42 have driven everyone hearing about CFIUS. 0:07:44 The Chinese are a global competitor for the United States. 0:07:46 They're also a global partner in some ways, 0:07:48 certainly in investment and technology. 0:07:52 But stated Chinese policy is about being 0:07:54 a global competitor on a bunch of technological fronts. 0:07:57 And that really motivated the Congress 0:08:01 to be fearful of China and start to limit Chinese influence 0:08:04 in US business, which brought about one 0:08:06 of the only bipartisan things that 0:08:07 has happened over the past three years, which 0:08:09 was reform of CFIUS. 0:08:11 So that was kind of the driving impetus for it. 0:08:13 What did it actually change? 0:08:14 A couple of things. 0:08:16 First, CFIUS has always been purely voluntary. 0:08:18 Can you unpack that a little bit? 0:08:19 What do you mean purely voluntary? 0:08:23 Yeah, so if Alibaba shows up and buys one of you 0:08:26 tomorrow, prior to CFIUS reform, it 0:08:30 was up to you and Alibaba to go and actually submit 0:08:32 something to CFIUS or not. 0:08:36 And that did mean that the vast majority of transactions 0:08:38 were never seen by CFIUS. 0:08:39 And every once in a while, CFIUS 0:08:42 would go out and grab and pull a company in. 0:08:43 But again, there was no requirement 0:08:45 to ever present your matter. 0:08:47 And when you present a matter to CFIUS, 0:08:49 it's fundamentally the two parties coming together. 0:08:51 You describe the US business. 0:08:53 You describe the foreign acquirer. 0:08:56 You describe the transaction, the motivation for the transaction. 0:08:59 And there's a process whereby CFIUS reviews that 0:09:00 for national security concerns. 0:09:04 And CFIUS can then either say, you're good.
0:09:06 CFIUS can say, you're very, very bad. 0:09:09 And we're going to ask the president to block it. 0:09:12 And the president can, under his Article II authority, 0:09:14 block the transaction from occurring. 0:09:17 Or, what happens most often, in a sense, 0:09:21 is that CFIUS says, well, you can do the transaction, 0:09:24 but we're going to impose some mitigation 0:09:26 to reduce the national security risk. 0:09:27 And that can mean a lot of things. 0:09:30 That can be a separate board of US citizens overseeing 0:09:31 the company. 0:09:33 It could be limitations on access to technology, 0:09:36 controls over the data, all sorts of things. 0:09:38 So that's how CFIUS always operated. 0:09:41 People came to CFIUS, presented their transaction, 0:09:44 and one of those three things generally happened. 0:09:46 So, since it was voluntary, did you 0:09:49 see companies that would have otherwise fallen 0:09:51 under the jurisdiction of CFIUS saying, 0:09:53 I want to fly under the radar? 0:09:53 Absolutely. 0:09:58 So it happened all the time that people wouldn't actually 0:09:59 go to CFIUS. 0:10:02 So especially smaller transactions. 0:10:04 I mean, you have a big market transaction, 0:10:06 and it's all over the front page of the Financial Times 0:10:08 or the Wall Street Journal, a little harder 0:10:09 to fly under the radar. 0:10:11 But for a long time, especially on smaller transactions, 0:10:12 it wasn't occurring. 0:10:16 Now, why even go then if it's voluntary? 0:10:20 Because there's no statute of limitations on CFIUS. 0:10:24 So if you don't go to CFIUS, at any time the US government 0:10:27 can knock on your door and say, hey, Katie, 0:10:29 that deal you did three years ago? 0:10:31 We want to investigate that deal. 0:10:34 And ultimately, CFIUS has the authority 0:10:37 to force divestiture, unwind the transaction, 0:10:39 or impose mitigation. 0:10:42 And if you're a company and you're taking money 0:10:44 or you're buying something, that's 0:10:48 a pretty uncomfortable place to be for the rest of time. 0:10:49 And especially if you're a company that 0:10:53 is doing other transactions in the United States, 0:10:56 it gets harder and harder to say, ah, let's not worry about it. 0:11:00 But the reforms, again, did several things here 0:11:02 that have changed this. 0:11:05 First, there are pieces of CFIUS that are now mandatory. 0:11:06 So not voluntary. 0:11:07 Not voluntary. 0:11:09 And if you don't show up, you can 0:11:14 be fined up to $250,000 or the value of the transaction. 0:11:16 And so it's just like any other compliance scheme 0:11:18 at that point, export controls or something else. 0:11:20 And that's not every transaction, 0:11:25 but it does involve a lot of what people in the valley do. 0:11:29 In particular, it's mandatory if the company operates 0:11:32 in a certain sensitive sector that's listed by CFIUS, 0:11:37 and if you produce or design export-controlled technology. 0:11:40 That sounds like military stuff, but it's not just that, right? 0:11:40 Exactly. 0:11:41 It's not just military stuff. 0:11:44 It's also a huge range of other things 0:11:46 that are controlled by the Commerce Department 0:11:47 as dual-use technology. 0:11:49 So what does that include? 0:11:52 Things like encryption. 0:11:54 Your software has a certain level of encryption. 0:11:57 Your software is export-controlled.
0:11:59 That means if there's an investment in that company 0:12:01 over a certain size, or giving that 0:12:03 investor certain rights, it's 0:12:06 a mandatory filing with CFIUS. 0:12:08 What other kinds of things for companies 0:12:11 that you see out in the valley would be relevant that now, 0:12:13 under these new reforms, would be covered? 0:12:13 Yeah. 0:12:18 So today, it's certain sensors, LIDAR, for example. 0:12:20 The high-end types of LIDAR, defined 0:12:24 by certain wavelength for distance, those are controlled. 0:12:29 LIDAR that you use for your standard autonomous vehicle 0:12:32 test projects now, not controlled. 0:12:36 So it turns out that the export control regime actually 0:12:40 covers fundamentally everything that anyone makes. 0:12:42 And you either get classified under what's 0:12:46 known as EAR99, non-export-controlled, 0:12:48 or if there's something more sensitive about the technology, 0:12:51 it can be export-controlled to certain countries 0:12:52 for national security. 0:12:58 So it ranges from computing power, battery storage, sensors. 0:12:59 It's everything. 0:13:02 Now, if you're doing straight software, 0:13:07 it tends not to, unless you get into the world of encryption. 0:13:09 Can you talk a little bit more about that, 0:13:11 and about what now, as part of the reforms, 0:13:13 counts as a sensitive technology? 0:13:14 Yeah, so– 0:13:15 How is that defined? 0:13:17 Yeah, I'd say there are kind of three stages. 0:13:18 I've talked a lot about the historical, 0:13:20 all the just voluntary stuff. 0:13:22 Then we had the reform in 2018, which 0:13:25 starts to really edge more into this world of sensitive 0:13:28 technology, which is, if it's export-controlled 0:13:31 and you work in a certain sector, mandatory. 0:13:34 The reform continues because this is Washington. 0:13:36 So the law was passed in 2018. 0:13:38 There are still regulations being promulgated 0:13:40 to implement that law. 0:13:42 And those are the draft regulations you mentioned 0:13:43 that came out. 0:13:45 Ongoing throughout that, there is also 0:13:48 reform of all US export controls. 0:13:50 And this is where I think a lot of people 0:13:54 are going to be affected in the valley more than ever before. 0:13:56 The Commerce Department is now defining 0:14:01 what it means to be foundational and emerging technology. 0:14:02 Exactly what foundational and emerging means 0:14:04 is still to be defined. 0:14:08 But if your technology falls into that TBD category, 0:14:10 and that should be out in the next four to six months, 0:14:14 then any transaction there puts you back 0:14:15 into that mandatory bucket. 0:14:18 So what are the areas of foundational and emerging 0:14:22 technology that we know the US government is most focused on? 0:14:27 Artificial intelligence, machine learning, autonomy, right. 0:14:28 What about sensors? 0:14:30 So very, very high-end battery technology 0:14:32 has always been sensitive. 0:14:34 And the export control rules literally 0:14:37 get down to how much battery storage do you have for the weight? 0:14:41 What's the weight to storage capacity? 0:14:43 So this is all quite purposeful. 0:14:44 What is going on? 0:14:46 This is not an accident, because the view 0:14:49 in Washington and from CFIUS is 0:14:53 that our global competitors, in particular China, 0:14:57 have focused on early-stage startups 0:14:59 who are developing technology, or the engine of innovation 0:15:02 in our society, coming in as early investors, 0:15:04 getting access to that technology.
0:15:07 It's not being reviewed for national security purposes. 0:15:09 And eventually, that technology is 0:15:13 moving across to foreign companies and, in some cases, 0:15:15 foreign militaries. 0:15:18 There's actually a very strongly worded, pretty powerful article 0:15:22 in the Wall Street Journal about Chinese civil-military 0:15:26 cooperation and investments in US companies. 0:15:30 But we are now in a place where, in terms of reforms, 0:15:32 some more stuff is mandatory. 0:15:35 More stuff is going to become mandatory. 0:15:37 And one thing we didn't talk about is, 0:15:40 CFIUS is not about any investment. 0:15:41 It's not any investment. 0:15:42 What's covered? 0:15:46 So first of all, you have– think about it as kind of at least 0:15:47 two things, and then there's a plus. 0:15:49 One is there's got to be a US business. 0:15:51 A US business is somebody in the US 0:15:53 who's engaged in interstate commerce. 0:15:55 There's not a whole lot more definition than that. 0:16:01 It doesn't matter if it's, say, a French company that 0:16:04 has a US office and they're doing business in the US. 0:16:07 If someone goes to buy that French company, 0:16:10 CFIUS has nothing to do with that French acquisition, 0:16:13 but it still gets to look at, if it wants to, 0:16:16 the US element of that transaction. 0:16:18 So effectively, what you're saying 0:16:20 is you don't need to be a Delaware corporation. 0:16:21 Exactly. 0:16:23 You just have to be doing business in the US. 0:16:24 You have to be doing business here. 0:16:26 Now, that does mean if you're just selling assets, 0:16:28 you're not selling a business, as a general matter 0:16:29 that's not covered. 0:16:32 By the way, it also doesn't affect greenfield investments. 0:16:34 So a foreign company can come here, 0:16:38 start– flatten a lot in Palo Alto, 0:16:41 build everything, start everything on their own. 0:16:43 No CFIUS, except maybe the real estate. 0:16:45 We're not going to worry about that for now. 0:16:48 Second, it's got to be a foreign business. 0:16:50 You've got to have a foreign person making the acquisition 0:16:51 or the investment. 0:16:53 And what does it mean to be foreign? 0:16:57 So US business bought by another US business, 0:16:59 but the US business has a foreign parent? 0:17:00 That's foreign. 0:17:03 So CFIUS looks to the ultimate parent 0:17:09 and the ultimate ownership of the acquirer or the investor. 0:17:13 So foreign private equity, foreign venture capital, 0:17:14 that's all foreign. 0:17:15 And you said acquisition, but this 0:17:18 does importantly come up with investments, too, right? 0:17:21 So it doesn't need to just be an acquisition. 0:17:22 Straight acquisition is easy. 0:17:25 That's definitely CFIUS. 0:17:28 Traditionally, CFIUS was only about controlling transactions. 0:17:32 What does controlling mean to you, Katie, as someone in venture? 0:17:33 That you have a vote on a board. 0:17:35 You have more than 51%. 0:17:37 That's the case. 0:17:39 You have more than 51%. 0:17:42 For CFIUS, controlling for CFIUS purposes. 0:17:44 So again, you have to have a controlling investment 0:17:47 by a foreign person in a US company. 0:17:52 Controlling for CFIUS, more than 9.9% equity. 0:17:59 Or less than 9.9% equity with some other indicia of control. 0:18:02 So 8% and a board seat, controlling investment. 0:18:03 So it can be– 0:18:05 What, in addition to board seats, 0:18:07 are indicia of a controlling investment?
0:18:11 So anything in a commitment letter, side letter, MOU, 0:18:13 which suggests some ability, decision-making authority 0:18:16 beyond standard minority protections. 0:18:19 That's kind of the general rule. 0:18:21 But it gets even better, guys. 0:18:25 One of the big changes in CFIUS, in the reform, 0:18:27 it was always about controlling. 0:18:30 So again, it was a pretty low bar, 9.9%. 0:18:32 That's not what anyone normally thinks about control. 0:18:34 But in this case, it is. 0:18:37 The reform adds an entire category 0:18:39 of non-controlling investments. 0:18:41 If you're involved, if your business does technology, 0:18:43 we've already talked about what that means 0:18:46 to be involved in critical technology. 0:18:48 If you're involved in critical infrastructure– 0:18:50 and that's a really detailed list we won't go through– 0:18:55 or if you are a data company– 0:18:57 remember our data discussion, all those different categories, 0:18:59 one million, that sort of thing. 0:19:05 If you're any of those, even a less than 10% investment 0:19:09 in you, if the foreign investor has a board seat, 0:19:15 a board observer, the ability to influence decision-making 0:19:19 or control decision-making, or if they 0:19:24 have access to material, non-public technical information, 0:19:27 any of those things, they can be at 2%. 0:19:31 If you're a data company, they get some technology information, 0:19:33 that's a covered investment. 0:19:36 So what you've seen, again, I think 0:19:38 you've now seen most of the movie, 0:19:42 is it starts with this relatively narrow swath 0:19:44 of defense technology information, 0:19:48 and has now moved into even small investments, 0:19:51 if there are certain rights, in almost everything 0:19:53 that occurs here. 0:19:55 There's at least a voluntary filing, 0:19:59 and more and more, there are also some mandatory filings. 0:20:03 So mandatory filings, I'm sure CFIUS and the government 0:20:06 have a nimble process for reviewing 0:20:09 all of these mandatory filings they're now going to have? 0:20:11 When I think US government, I think nimble. 0:20:13 Nimble, agile. 0:20:16 OK, so we know about what historically the process has 0:20:16 been. 0:20:18 I guess we don't know what it will be, the review process, 0:20:22 with these new reforms, and presumably a lot more transactions 0:20:23 being submitted. 0:20:24 What was the historical process? 0:20:26 Yeah, so historically, the process 0:20:29 has been, we estimate, in most cases, 0:20:31 about four to six months start to finish. 0:20:34 Now, if you're in the valley, four to six months 0:20:36 is like life or death. 0:20:38 Obviously, larger deals, it actually 0:20:41 tends to align relatively well with things like Hart-Scott-Rodino 0:20:42 antitrust. 0:20:44 So it isn't always a huge problem in larger deals, 0:20:46 but for smaller deals, it is. 0:20:49 And that four to six months constitutes kind of from sign 0:20:50 to close. 0:20:51 You're prepping the documents. 0:20:53 You're sending the documents to CFIUS. 0:20:54 They review it. 0:20:55 There's a back and forth. 0:20:57 They finally accept it formally. 0:21:00 So that whole thing takes sort of a month. 0:21:04 After acceptance, they then review it for 45 days. 0:21:08 At the end of that 45 days, they can say, you're good. 0:21:09 Or they can say, actually, we need 0:21:11 45 more days of investigation. 0:21:14 You go into a second 45 days. 0:21:18 At the end of that, then they can say, thanks, you're good.
0:21:19 Or we're going to send it to the president. 0:21:22 Or what they often do, if it's a really hard case, 0:21:23 is say, we're not quite there. 0:21:26 We think we'd like you to restart the clock. 0:21:28 And you go through another 45-day period. 0:21:31 So that's the traditional construct, four to six months. 0:21:32 It's post-signing. 0:21:34 Some of it can be done pre-signing. 0:21:36 The problem is, inevitably, pre-signing 0:21:37 you can get some of your ducks in a row, 0:21:40 but there's certain information that the parties just 0:21:41 aren't willing to exchange yet. 0:21:42 Not to mention, pre-signing, 0:21:44 people are actually still trying to get to the signing. 0:21:48 So getting anyone's attention to do some of this is a challenge. 0:21:50 But I do want to come back to what 0:21:52 should absolutely be done pre-signing. 0:21:54 Because even if you're not filing, 0:21:55 there's an enormous amount of thought 0:21:58 that should go into that so you don't end up 0:22:00 in a CFIUS ditch. 0:22:02 The four to six months is not exactly nimble. 0:22:05 So they created– that traditional process 0:22:06 is called a notice. 0:22:10 They created what is known as a short-form declaration. 0:22:12 And a declaration is no more than about five pages. 0:22:13 It's a web form. 0:22:15 It's pretty easy. 0:22:18 There's a 30-day timeline for review. 0:22:21 So in the valley, that's actually relevant. 0:22:23 We just got to a place where we hope 0:22:25 they will have voluntary filings like that– 0:22:27 voluntary declarations. 0:22:28 We don't have that yet. 0:22:32 But that means that if you start a little bit before signing 0:22:37 getting stuff done, you spend a couple of weeks post-signing. 0:22:41 Realistically, you have probably a 45-day process. 0:22:43 Again, it's a 30-day review process 0:22:46 itself where the government has to give you an answer. 0:22:48 But realistically, you obviously need some time 0:22:52 before that to get your information together and file it. 0:22:53 So that's good. 0:22:55 You can do it in a shorter time frame. 0:22:58 The bad news is, CFIUS doesn't have 0:23:00 to give you an answer at the end of that. 0:23:02 And so then aren't you stuck in some kind of limbo? 0:23:05 Then, well, you are stuck in a limbo. 0:23:07 But in reality, what we tell most of our clients 0:23:11 is, CFIUS can do what we affectionately call the shrug, 0:23:13 and it's up to you what you want to do next. 0:23:15 They don't clear you. 0:23:17 They don't tell you to file a notice. 0:23:20 And it's totally up to you on what you want to do. 0:23:25 But the good news is, in most cases, that shrug means, go away. 0:23:27 We have more important things to do. 0:23:30 So you no longer get that safe harbor. 0:23:32 They can always come back to you later. 0:23:34 But what's the likelihood of that occurring? 0:23:36 Really, really small. 0:23:40 So the shrug, in most cases, is good enough for government work. 0:23:41 I want to ask you one thing. 0:23:44 What if someone wants to challenge CFIUS and say no? 0:23:46 Can people do that? 0:23:48 Because in most contexts with government agencies, 0:23:50 you have a right of judicial review. 0:23:51 But CFIUS is different. 0:23:53 CFIUS is different. 0:23:55 If you've been a litigator before and you never 0:23:57 want to litigate anything again, you're pretty much 0:23:59 safe in CFIUS land. 0:24:04 CFIUS provides a very, very, very narrow judicial review 0:24:06 provision. 0:24:09 You can't challenge the national security determinations.
0:24:12 You can challenge on sort of due process grounds. 0:24:14 There has been one. 0:24:18 Yes, count it, one challenge in federal court in CFIUS's 0:24:22 history, which did, in fact, involve 0:24:25 the acquisition of some wind turbines in southern Washington 0:24:27 state. 0:24:30 And that established the requirements for due process. 0:24:32 That CFIUS has to tell the parties 0:24:35 what its concerns are to the extent it can. 0:24:39 But unlike every other regulatory environment, 0:24:41 the ability to challenge CFIUS in court 0:24:43 is extremely narrow. 0:24:46 So the bottom line is, you have to get 0:24:48 what you can out of the regulatory process. 0:24:51 And it does make for, honestly, a very different sort 0:24:54 of negotiation than you see in most contexts. 0:24:58 Because the US government may not hold all the cards, 0:25:01 but it holds 51 of them. 0:25:05 And how does the CFIUS body decide, do they vote? 0:25:07 I mean, what if some of the entities of the 13 0:25:10 don't care about a transaction or an investment, 0:25:11 and then others do? 0:25:13 How does that work? 0:25:16 So it's a little bit like a jury. 0:25:18 You've got to be unanimous. 0:25:22 But if you have one holdout, you just keep going. 0:25:25 So it's all on consensus. 0:25:28 If one person keeps saying, I want mitigation, 0:25:30 you're still stuck in this loop of trying 0:25:32 to work through mitigation. 0:25:34 So you can't get cleared unless everyone agrees. 0:25:36 You probably won't get rejected unless everyone 0:25:37 agrees, too. 0:25:41 But it can be a very challenging fight about consensus. 0:25:43 And part of what we do with clients all the time 0:25:46 is think about the technology, think about the acquirer, 0:25:49 because CFIUS comprises 13 different agencies. 0:25:52 And those different agencies have very, very different concerns. 0:25:56 And sometimes, we, as CFIUS lawyers with our clients, 0:25:59 want to spend a lot of time with the Department of Defense, 0:26:01 because it's something they care about. 0:26:03 Sometimes, we know the Department of Defense 0:26:03 doesn't care at all. 0:26:07 We want to spend all our time with DOJ or NSA 0:26:08 on different pieces. 0:26:10 And part of the art is identifying 0:26:12 which agencies care about it and trying 0:26:15 to make sure that as you're going through the process, 0:26:17 not only are they not an impediment, 0:26:19 but the ones who have the biggest interest in the US 0:26:22 government are actually advocates for the deal getting 0:26:23 done. 0:26:24 We've been talking about acquisitions. 0:26:26 I want to talk a little bit about investments. 0:26:29 I talked about those mandatory categories 0:26:31 on that critical technology piece, 0:26:33 if you have export-controlled technology. 0:26:36 The draft regulations add one more thing that's mandatory: 0:26:41 if there's a foreign government control transaction 0:26:44 in those technology, infrastructure, or data spaces. 0:26:47 And a foreign government control transaction 0:26:52 is a foreign investment of 25% or more in the US business 0:26:55 by an entity that is 49% or more controlled 0:26:56 by a foreign state. 0:27:00 GIC, the sovereign wealth fund of Singapore, 0:27:04 may or may not be an issue, but if they make a 25% investment 0:27:07 in any of those types of companies, 0:27:11 that also drives a mandatory declaration.
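To make the thresholds just described a bit more concrete, here is a rough, purely illustrative sketch of the "foreign government control transaction" screen as described in the conversation. The function, parameter names, and example numbers are hypothetical assumptions for this sketch, not a legal test; a real CFIUS analysis turns on many more facts (indirect ownership, specific rights, exceptions).

```python
# Illustrative only: a rough screen for the mandatory-declaration trigger
# described above (a "foreign government control transaction"). Not legal
# advice; real CFIUS analysis involves far more nuance than three numbers.

def mandatory_declaration_likely(
    is_tid_business: bool,           # critical technology, critical infrastructure, or sensitive data
    investor_stake: float,           # foreign investor's stake in the US business (0.0 to 1.0)
    foreign_state_ownership: float,  # foreign state's stake in that investor (0.0 to 1.0)
) -> bool:
    """A 25%-or-more investment in a TID business by an entity that is
    49%-or-more controlled by a foreign state drives a mandatory declaration."""
    return (
        is_tid_business
        and investor_stake >= 0.25
        and foreign_state_ownership >= 0.49
    )

# A wholly state-owned sovereign wealth fund takes 25% of a US data company:
print(mandatory_declaration_likely(True, 0.25, 1.0))   # True
# The same fund takes 10%: below this particular mandatory trigger
# (though other covered-investment rules could still apply).
print(mandatory_declaration_likely(True, 0.10, 1.0))   # False
```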
0:27:14 So that’s going to change the environment a little bit 0:27:17 for some of the ever-present sovereign wealth funds 0:27:20 and related entities and their investments in the valley 0:27:21 as well. 0:27:23 What if those sovereign wealth funds are LPs 0:27:25 in venture capital funds? 0:27:28 So the way it’s written now in terms of a JV, 0:27:31 sovereign wealth funds in a joint venture, 0:27:34 and they have 49% of the joint venture, 0:27:35 in most cases, a sovereign wealth fund probably 0:27:39 wouldn’t be 49% LP. 0:27:43 The LP piece, this gets pretty complicated 0:27:44 because you’re combining two things. 0:27:47 They analysis on whether it’s foreign government control 0:27:50 and also the analysis on whether or not 0:27:54 the LP should even be considered as an investor of just the GP. 0:27:57 Because if it’s just an LP and that LP doesn’t 0:27:59 have certain decision-making rights, 0:28:01 then you don’t consider that LP at all. 0:28:04 So this is another way in which, for funds, 0:28:08 it becomes very, very important to look at LP agreements 0:28:10 and determine what rights you want to provide 0:28:12 and what rights you don’t want to provide. 0:28:16 So we’re seeing more and more excerpts of LP agreements, 0:28:19 which say under no circumstances will the limited partners have 0:28:22 access to material, non-public, technical information. 0:28:27 Because if they did, they no longer get the LP protection. 0:28:29 So it’s taking some of the language and some 0:28:31 of the art of what goes on at CFIUS, 0:28:33 inserting that into the LP agreement 0:28:37 and looking very carefully at the provisions around the advisory 0:28:39 committees, the investment committees for the LPs. 0:28:41 And in general, LPs want this. 0:28:45 Because LPs are joining a fund so they’re not the investor. 0:28:47 So this is basically just verifying their passive. 0:28:50 Now, the one complication you get in larger investments, 0:28:54 of course, is direct investment by that same LP. 0:28:55 That’s covered in CFIUS. 0:28:57 They don’t get to skate free of that because they 0:28:59 are an LP and another fund. 0:29:02 Well, a lot of founders, they’re not 0:29:06 yet kind of at the stage or later stage or growth investments. 0:29:09 How should they be thinking about CFIUS reforms? 0:29:12 I mean, they might have not yet a million users 0:29:15 for their product or service, but aspire to it. 0:29:19 And maybe they want to go raise money for a variety of reasons, 0:29:23 not just here in Silicon Valley, but outside of the US. 0:29:26 How should they be thinking about something like CFIUS? 0:29:28 They might be– it’s series A, they 0:29:31 might not have a general counsel or a huge legal budget 0:29:33 for outside counsel, such as yourself. 0:29:34 What should those founders– 0:29:36 So even if you don’t have a general counsel, 0:29:39 talk to the Katie Hans of the world to say, all right, 0:29:42 if I take this money from this foreign investor, 0:29:44 how is that going to affect my round? 0:29:46 What rights do I have to make sure are not involved? 0:29:49 How is that going to affect future business opportunities? 0:29:51 That’s really important. 0:29:53 So think strategically. 0:29:56 Of course, it can be very attractive 0:30:02 to take $5,000,000,000, $10,000,000, $15,000,000, 0:30:04 whatever it is from a foreign investor who 0:30:07 can write a big check and isn’t asking for much. 0:30:09 Or her provide some strategic– 0:30:10 That’s right. 0:30:12 Or strategic benefit for geographic diversity. 
0:30:14 I mean, all sorts of reasons. 0:30:15 Think it through. 0:30:18 I'm not saying don't do it, but potentially limit 0:30:19 certain information rights. 0:30:22 So you're a specialist in coming up 0:30:24 with innovative deal structures. 0:30:27 Maybe you trip into CFIUS, maybe you don't. 0:30:30 If someone didn't want to, what kind of things 0:30:34 would you tell a Series A or Series B founder if they said, 0:30:35 I want to take foreign investment, 0:30:38 but I really don't want to go through what sounds like a 0:30:42 not-nimble and fairly paper-intensive CFIUS process? 0:30:44 What kinds of things could they be doing strategically? 0:30:46 You've got to think about the rights that you're 0:30:47 including for that investor. 0:30:50 Those rights, board rights, information rights, 0:30:52 absolutely decision rights, may trip you into CFIUS 0:30:53 one way or another. 0:30:54 So that's critical. 0:30:57 And that investor might not care about those things. 0:30:57 That's right. 0:30:59 And you might tell that investor, if you get this, 0:31:02 it's going to delay things, and we have a long process. 0:31:05 They may say, oh, god, yeah, it's not worth it to me, either. 0:31:07 So think about those rights up front. 0:31:11 Second, think about how, in terms of timing, 0:31:12 you phase your investment in different ways. 0:31:15 So maybe they say 9% equity, but damn it, 0:31:18 I want a board seat if I'm giving you 9%. 0:31:19 And the answer may be, OK, great. 0:31:21 But let's phase the investment. 0:31:22 I need the 9% now. 0:31:24 I need the equity now. 0:31:26 But you'll only get your board seat 0:31:28 after we get through this CFIUS process. 0:31:30 Three years ago, this was hard, because you 0:31:33 had lots of foreign investors who weren't thinking about this. 0:31:36 Today, you've got a global environment 0:31:39 of very, very well-educated foreign investors who 0:31:42 don't want to run afoul of the rules 0:31:44 and are more thoughtful. 0:31:46 If you said to certain investors three years ago, 0:31:47 sorry, if I give you a board seat, 0:31:49 we're going to have a CFIUS issue, 0:31:50 they'd say, well, who cares? 0:31:51 Let's go through CFIUS. 0:31:53 Today, I think it's actually a very different environment, 0:31:56 and they might well say, totally understand. 0:31:57 I don't need a board seat. 0:31:59 So that's good news. 0:32:00 They're taking the equity position, 0:32:02 but you're delaying some other rights, 0:32:04 whether it's a board seat, board observer, and the like. 0:32:07 So phase it so it doesn't totally foul up your timeline. 0:32:09 Third, you really do have to think strategically 0:32:12 about where you're trying to go with the business. 0:32:14 Do you want to do work with the US government? 0:32:17 Is your priority in Asia or Europe or anywhere else? 0:32:18 In which case, 0:32:20 you may have to walk through some of this, 0:32:23 and you're perfectly fine closing one door to open another. 0:32:25 So don't just think about this. 0:32:28 I know that's hard when you need equity right away, 0:32:31 but you still have to be a little bit strategic looking 0:32:34 forward about how this will affect you in the future. 0:32:36 Think about if there are ways to structure it, 0:32:39 so they are an LP in another fund. 0:32:42 Now, funds of one aren't good, but to the extent 0:32:44 you can work with someone and say, listen, 0:32:46 we'd love to take your money, but it's problematic 0:32:49 if we do it that way, let's move it over to here.
0:32:50 That’s really good. 0:32:54 Another possibility is, again, early stage this gets hard, 0:32:57 but later stage, you might start looking at not actually 0:33:01 selling in the US business. 0:33:03 If you’ve already gone international, carving it up, 0:33:06 so they’re actually making an investment outside the United 0:33:09 States and some of your growth in other geographic regions. 0:33:10 Now, that gets tricky because then you 0:33:13 have to really be careful that you’re not contributing 0:33:16 a US business, that your R&D isn’t supporting it, 0:33:17 your people, all those things. 0:33:20 But at some point, that becomes quite valuable 0:33:23 to create potentially a joint venture overseas 0:33:26 with the foreign investor rather than investing 0:33:28 in the domestic US company. 0:33:32 And if you finally decide, well, we’ve got to do this, 0:33:34 but I’m worried about what Siphius might say, 0:33:37 then you’re in the world of how are you allocating that risk 0:33:41 that Siphius is going to show up and stop us 0:33:43 from doing something or sharing something. 0:33:44 So you really have to understand why you’re 0:33:45 doing the investment. 0:33:47 If you’re just doing the investment to get the money, 0:33:50 the risk isn’t that bad for you. 0:33:51 Because what would the mitigation be? 0:33:53 The mitigation might not be the money, 0:33:55 the mitigation might be a lack of information access 0:33:57 for the acquirer or the investor. 0:33:59 But what you’re trying to do is I 0:34:01 want to share information and technology 0:34:04 with this foreign investor because they have technology 0:34:06 or access to markets that I need. 0:34:08 Well, then you have to be really careful 0:34:11 because that might be exactly what Siphius cuts off. 0:34:13 So you have to understand what the investment thesis is not 0:34:15 just for them, but what it is for you 0:34:17 and how Siphius may affect that. 0:34:19 And then you fold into the deal documents 0:34:22 the allocation of risk like you would in any other deal, 0:34:25 what the efforts they might have to do for the regulatory 0:34:29 regime, when you have a right to walk, things like that. 0:34:32 What about– we talked about US business doesn’t necessarily 0:34:33 just mean US business. 0:34:35 It could mean just you have a presence in the US. 0:34:37 What about where there’s no company at all? 0:34:38 And here I’m thinking about crypto. 0:34:42 We have these things called DAOS, decentralized autonomous 0:34:45 organizations or distributed autonomous organizations. 0:34:47 In many times they’re set up as a nonprofit. 0:34:50 Do these rules, the new reforms speak to that kind 0:34:52 of circumstance or not really? 0:34:54 I think they really don’t. 0:34:56 I think that is over the horizon for Siphius. 0:34:58 One thing though, going back to the point 0:35:01 about judicial review, a lack of judicial review 0:35:04 means one really important thing on something like this. 0:35:08 Siphius has enormous discretion to interpret its rules 0:35:10 the way it wants to interpret its rules. 0:35:14 So it wants to say it’s a US business. 0:35:17 It wants to find enough indicia of it being a US business. 0:35:18 It can say it’s a US business. 0:35:20 And there’s not going to be a court which says, 0:35:22 how dare you say that? 0:35:25 And there aren’t that many investors or companies 0:35:27 that want to go fight the US government in court, 0:35:30 even if they could, on that sort of thing. 
0:35:32 Well, before we take time for questions, 0:35:33 I just wanted to ask you, what do you 0:35:37 think are the biggest surprises, which industries 0:35:39 or which types of business do you 0:35:41 think these reforms are really 0:35:44 going to affect most in our world? 0:35:48 I think I am nervous about anyone 0:35:52 who mentions artificial intelligence on their website, 0:35:53 which is everyone. 0:35:55 Artificial intelligence and machine learning 0:35:56 are inherently challenging places 0:36:00 because ultimately we're talking about algorithms, 0:36:03 and where do you actually draw the line between basic mathematical 0:36:06 science and research, and the application of that. 0:36:08 So I think it's going to have a potentially significant 0:36:09 impact there. 0:36:13 Biotech, anything health care related, 0:36:16 is increasingly becoming an area of focus. 0:36:18 Certainly, as I've already mentioned, 0:36:20 anyone who deals with identifiable information 0:36:24 in any way, this is a very hot topic. 0:36:28 Lastly, as I said, if you don't file with CFIUS, 0:36:30 CFIUS can always knock on your door, 0:36:36 and you can have a really, really uncomfortable period, 0:36:39 and they can impose penalties or impose divestiture. 0:36:40 There are two cases like that right now, 0:36:44 both involving Chinese investors and acquirers, 0:36:45 both pretty well known. 0:36:49 So I think, I hate to say it, but it's 0:36:52 hard to find areas that aren't of concern to CFIUS right now. 0:36:56 Again, that doesn't mean that everybody is going to be blocked. 0:36:58 It doesn't mean that everyone has mitigation. 0:37:02 It does mean, in most cases, thoughtful planning early 0:37:04 in the process, structuring in a way 0:37:07 where CFIUS is a lesser concern, becomes more and more important. 0:37:09 And where can people learn more? 0:37:11 If they want to thoughtfully think about this, say, 0:37:12 I don't want to go file anything, 0:37:15 but I want to keep my finger on the pulse of this world, 0:37:18 what are some resources that people could go look for? 0:37:23 So CFIUS is also a funny regime in that CFIUS never 0:37:25 publishes anything publicly. 0:37:27 In any other regime, you have a court case. 0:37:28 And what do lawyers do? 0:37:30 They go read the court case. 0:37:32 CFIUS doesn't release any of its decisions. 0:37:34 It's all confidential, which actually 0:37:35 is a good thing for the businesses, 0:37:38 because you'd rather not have the whole world know 0:37:40 what you're doing, who your investors are. 0:37:41 So that's a good thing, and things 0:37:43 tend not to leak out of CFIUS. 0:37:47 There's obviously an industry of lawyers who write on this. 0:37:49 There are one or two sites. 0:37:52 One slight warning: because CFIUS has become a bigger deal, 0:37:56 it's sort of like mushrooms after a rainstorm. 0:37:58 Experts are popping up everywhere. 0:38:00 They're national security experts or CFIUS experts. 0:38:03 If you call someone, if you call firms, 0:38:05 say, hey, how many CFIUS filings have you 0:38:07 done over the past five years? 0:38:10 And if the answer is less than about 100, 0:38:11 you ought to be scared. 0:38:13 So find the reputable firms that do this. 0:38:16 Talk to the reputable investors who understand this. 0:38:19 And as for keeping your finger on the pulse yourself, 0:38:21 I just think it's going to be too overwhelming. 0:38:23 There's too much change in this environment right now.
0:38:26 So find a lawyer that you trust who you can get on the phone 0:38:26 with. 0:38:29 It's not always hundreds of thousands of dollars. 0:38:31 Any reputable lawyer should say, hey, 0:38:32 let's talk through this for half an hour, 0:38:33 see if you have a problem. 0:38:36 The lawyer understands your technology, who the investor is, 0:38:38 what the timeline is, what your business goals are, 0:38:41 and can say yes, no, or maybe. 0:38:43 And then you make a decision about how you want to proceed. 0:38:45 And you're not going to get that from just reading something. 0:38:47 Reading is good background. 0:38:49 But again, you're trying to build a business or run a business. 0:38:52 You're only going to read so many articles from me, I guess. 0:38:52 Great. 0:38:54 Mike, thanks so much for being here. 0:38:56 I know people might have questions for you. 0:38:59 Are there ways you can accidentally back yourself 0:39:01 into a CFIUS situation? 0:39:04 So for example, you have down-round protection 0:39:05 or a secondary market, 0:39:07 someone buys your stock. 0:39:09 Or you go public and someone buys your stock. 0:39:13 Yeah, so there absolutely are situations like what you describe. 0:39:17 Everything from convertible debt, which comes into play when it converts, 0:39:19 and there are rules about how CFIUS 0:39:23 treats convertible debt instruments and the like. 0:39:24 And to CFIUS, it doesn't matter. 0:39:25 It doesn't matter if it's a direct investment 0:39:29 in a private company or ownership on the public market. 0:39:33 So if you get over 10% ownership on the public market, 0:39:38 that, too, implicates CFIUS; it doesn't matter 0:39:39 what form of ownership it is. 0:39:41 It's just about equity. 0:39:45 By the way, debt does not count. 0:39:47 Convertible debt gets more complicated, but debt doesn't. 0:39:49 So that's another way to structure 0:39:52 potentially, which can be helpful. 0:39:54 Now, if you have debt and you have some other rights 0:39:57 on top of that debt, it gets a little bit more complicated. 0:40:01 But yes, you have to be aware of your shareholder base, 0:40:05 public, private, regardless of how it comes in. 0:40:09 Now, it does matter to CFIUS in terms 0:40:11 of how they look at it, because if you've 0:40:14 been passive about this, you didn't reach an agreement. 0:40:16 Somebody just comes in and acquires 0:40:18 your debt on a secondary market. 0:40:21 They may be concerned about what the effects of that are, 0:40:24 but they absolutely look at the US business 0:40:27 and the target a little bit differently, 0:40:29 since you have obviously not signed up 0:40:32 to do something collaboratively with that foreign investor. 0:40:34 So it changes the color, but it doesn't change 0:40:36 the jurisdictional analysis. 0:40:39 Is the onus, the responsibility for filing, on the company, 0:40:42 or on the investor or acquirer? 0:40:46 Yeah, so the way the fines work, it's joint and several. 0:40:49 Now, they've only imposed one fine like that so far. 0:40:52 So we really don't have a lot of data to know– 0:40:53 Can you just explain? 0:40:54 I mean, joint and several. 0:40:55 Oh, sure, sure. 0:40:58 It means both of you are in trouble. 0:40:59 Yeah, so you both have a responsibility. 0:41:01 The filing is joint. 0:41:04 So any CFIUS filing is joint. 0:41:07 It isn't joint if there's been an outright acquisition 0:41:10 and it's after the fact, because then there's only one party.
0:41:13 So if you get acquired and you get your equity 0:41:17 and you get your cash and you walk away, listen, you're good, 0:41:19 depending on what the contract said. 0:41:22 But generally, if you're talking about an investment 0:41:25 in a US business, both parties go to CFIUS. 0:41:28 Both parties can be fined by CFIUS, 0:41:33 although each party generally 0:41:36 can only be responsible for its own information. 0:41:41 So you can't be fined for the foreign investor lying 0:41:42 about something. 0:41:46 But if you then have a mitigation agreement, 0:41:49 the foreign investor isn't going to get access to my technology, 0:41:51 and you provide them access, or you 0:41:54 do something that violates that national security 0:41:55 agreement with the US government, 0:41:59 then you are jointly responsible for that fine. 0:42:02 Is the application viewed differently based off 0:42:02 of the timing? 0:42:06 For example, let's say you do it before wiring money 0:42:08 versus three months after wiring the money. 0:42:10 And are there penalties associated with the timing? 0:42:11 Great question. 0:42:13 If it's a mandatory filing, then you 0:42:17 have to file 45 days before closing. 0:42:21 And closing, is that defined as when the money is transferred? 0:42:23 Well, probably, but it also depends 0:42:26 on how it's defined in the term sheet. 0:42:28 Most term sheets, it's going to be signed and closed. 0:42:33 But again, that means 45 days before the deal is completed, 0:42:34 you've got to file. 0:42:38 Now, if it's voluntary, there's no requirement 0:42:41 to file before or after. 0:42:43 So you can file it any time. 0:42:47 But here's the important but. 0:42:50 Unsurprisingly, CFIUS wants you to file before. 0:42:53 It's always a harder conversation if they already 0:42:55 have the investment, they already have the rights, 0:42:57 and you're going and explaining it. 0:43:01 So there are situations where we sort of work with CFIUS 0:43:03 and the parties close before they've 0:43:06 gotten approval from CFIUS. 0:43:08 But it has to be done very, very carefully 0:43:11 in a knowing, open, transparent way. 0:43:15 Or otherwise, CFIUS has the ability to take it out on you. 0:43:18 Well, we have time for one more question. 0:43:19 Where do you think this is all going? 0:43:24 The footprint of CFIUS has been expanding, arguably 0:43:27 beyond national security interests and into broader economic interests 0:43:29 of companies. 0:43:32 And you mentioned the dialogue with the export control 0:43:35 regulations, where it's not just at the time of an M&A 0:43:37 event or an investment, but on an ongoing basis, 0:43:39 you may need a license from the government 0:43:43 just to do business with entities located 0:43:46 in foreign jurisdictions. 0:43:50 A filing fee of 1% of the transaction is pretty onerous for a mandatory filing. 0:43:54 And then who knows what the export licenses would be. 0:43:57 Where is this all going in DC? Is there a stopping point? 0:43:59 So a couple of pieces on that. 0:44:02 First, this is about the only bipartisan thing 0:44:05 that has happened in Washington in the last three years. 0:44:06 So that tells you something. 0:44:11 Regardless of what happens in 2020, this isn't going away. 0:44:13 And a lot of this started at the end of the Obama 0:44:14 administration. 0:44:18 Now, you put on top of that, obviously, the US-China trade 0:44:18 tensions. 0:44:20 That's clearly exacerbated this.
0:44:23 And we’ve seen instances where these issues are all 0:44:28 getting thrown into a pretty messy stew. 0:44:30 I think Huawei is ZTE. 0:44:31 Is that national security? 0:44:34 Is it a negotiating tactic on trade deals? 0:44:38 So I think some of that, let’s assume, going forward, 0:44:42 comes down a little bit on the broader US-China trade front. 0:44:44 I think we’ll have a little bit more predictability. 0:44:47 But I think the basic trajectory of CIFIUS, 0:44:49 looking broadly at technology, data, 0:44:52 critical infrastructure, expanding 0:44:55 that definition of critical technology, 0:44:56 that’s not going away. 0:44:58 And there are still a lot of things 0:44:59 you can do outside of CIFIUS. 0:45:03 As I said, licensing of technology 0:45:04 does go through export control. 0:45:06 That will change, but it doesn’t go through CIFIUS. 0:45:09 So we can’t do anything to evade CIFIUS, 0:45:13 but there are still good ways to do business 0:45:17 with overseas investors, in overseas environment, that 0:45:19 doesn’t put you squarely in the crosshairs. 0:45:21 I don’t think, going forward, this 0:45:22 is going to radically change. 0:45:24 What is the stopping point? 0:45:26 I hope we don’t have a global downturn. 0:45:29 But right now, we’re in a world where 0:45:33 it’s not that hard to find capital. 0:45:35 There’s a lot of capital out there. 0:45:37 Those capital markets start to shrink a lot. 0:45:40 I think they’ll clearly be an incentive on the US front 0:45:43 to open those doors a little bit more widely. 0:45:45 So I think there are some macro trends 0:45:48 that could start to have this ebb. 0:45:50 But I think short of that, the trajectory 0:45:53 is going to remain relatively constant at this point. 0:45:55 Well, on that note, thank you so much, Mike. 0:45:56 Thanks for coming in. 0:45:59 [APPLAUSE] 0:46:07 [BLANK_AUDIO]
When innovation and capital go global, so do restrictions on trade, foreign investment, and more. Over the past couple years, U.S. policymakers have expanded the scope of the Committee on Foreign Investment in the U.S. (CFIUS) through the Foreign Investment Risk Review Modernization Act (FIRRMA) of 2018, which was recently updated through proposed regulations in September 2019.
So what does this all mean for tech founders taking investments from, or doing joint ventures with, foreign entities — or just doing business globally in general? What does and doesn’t CFIUS cover, and how might one structure partnerships strategically as a result? In this episode, a16z general partner Katie Haun interviews Michael Leiter (of law firm Skadden Arps) who specializes in CFIUS as well as matters involving U.S. national security and cybersecurity, cross-border transactions, aerospace and defense mergers and acquisitions, and government relations and investigations.
The Q&A took place in September 2019 as part of an event hosted by Andreessen Horowitz.
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
0:00:05 The content here is for informational purposes only, should not be taken as legal, business, 0:00:10 tax, or investment advice, or be used to evaluate any investment or security, and is not directed 0:00:14 at any investors or potential investors in any A16Z fund. 0:00:16 For more details, please see a16z.com/disclosures. 0:00:21 Hi, and welcome to the A16Z podcast. 0:00:26 I'm Lauren Murrow, and I'm here with Austen Allred, CEO and co-founder of the skills-based 0:00:32 online school Lambda School, and D'Arcy Coolican, a partner on the consumer tech team. 0:00:37 Today we're discussing an issue that has saddled much of a generation: student debt. 0:00:42 Student debt currently stands at more than $1.5 trillion, which makes it the second highest 0:00:45 consumer debt category behind mortgage debt. 0:00:50 The national total for student debt is higher than both credit cards and auto loans. 0:00:55 One solution that's been proposed is income share agreements, or ISAs. 0:00:59 It's a concept currently in the zeitgeist that's sparking debate across media and politics. 0:01:02 Here's the idea. 0:01:06 Rather than charging students tuition to attend college, often forcing them to then take out 0:01:09 loans in the process, they go to school for free. 0:01:14 But under an ISA, they are then required to pay back a percentage of their income after 0:01:20 graduation, but only if they land a job with a salary that meets a certain threshold. 0:01:23 In this episode, we delve into some of the greater implications ISAs may have for the 0:01:27 future of education, the economy, and more. 0:01:31 We also touch on some of the challenges of ISAs, why they've been relatively slow to 0:01:35 gain traction, why some have failed in the past, and why some in the political sphere 0:01:38 are still skeptical. 0:01:44 As we begin our conversation, Austen gives his explanation for why ISAs work. 0:01:49 I think of it as a very, very forgiving type of way to pay for education. 0:01:53 So if the education doesn't work, basically, you don't make payments. 0:01:58 So at Lambda School, you don't pay the school anything unless you get a job in the field 0:02:03 you trained in that pays more than 50K, and if you get that kind of a job, you pay a percentage 0:02:05 of your income for a couple of years, and that's it. 0:02:07 Put this into context for us. 0:02:11 If ISAs are the future, what's the big picture potential? 0:02:15 It just doesn't make sense for a lot of people to attend college anymore. 0:02:20 Incomes have stayed flat basically since the '80s, and tuition has gone up basically 3X, 0:02:24 and so that gap is getting bigger and bigger, and it's more and more risky every day for 0:02:26 a student to go to college. 0:02:30 So there are times when a student will enroll in a school when they're 18, and they'll pick 0:02:36 something to study, and when you look at the amount of student debt they're going to have, 0:02:40 everybody knows, from the school to the person funding the education to the government, that 0:02:44 that person is unlikely to be successful and ever pay back their student loans. 0:02:50 If you look at most of the student debt, a lot of it is from students who didn't graduate 0:02:55 or from students who aren't making enough, and almost all of the default in student loans 0:02:57 comes from those categories. 0:03:00 In the world of an ISA, that doesn't happen.
0:03:05 Almost by definition, a student who is unable to make payments on their ISA is a much, much 0:03:09 less likely scenario than a student who can't make payments on their loans. 0:03:13 But what it really allows us to do over time is better assess who will end up in the right 0:03:16 career path, and do everything we can to get them there. 0:03:19 What do you mean when you say get them in the right career path? 0:03:23 So you walk in on day one knowing that if you don't get a good job, you're not paying 0:03:24 us anything. 0:03:28 Everything is driven to making you successful in the career path that you may choose. 0:03:33 It also diversifies risk, and it allows people to do things that they otherwise might not 0:03:34 have been able to do. 0:03:38 If you look at education outcomes, especially if it's vocational schools where the ultimate 0:03:42 outcome is getting a job, it can be relatively binary. 0:03:46 You either get a job or you don't, and when you have those things that have a high variance, 0:03:50 it can oftentimes feel really risky for the person entering into that, because if they 0:03:53 don't get the job, that risk has then been placed on them. 0:03:57 The way that income share agreements work is they're taking that risk from the individual 0:03:59 and putting it on the school. 0:04:05 Say you have a school with a 75% graduation rate, and then of those graduates, 75% end 0:04:09 up being successful in the job that they train for. 0:04:10 Those are not bad rates. 0:04:14 In fact, most colleges would take those rates any day, but when you look at the tuition 0:04:20 dollars that are flowing into that school, basically 44% of those tuition dollars are 0:04:22 coming from people who are unsuccessful. 0:04:26 So if those people are saddled with loans, there's no way they're going to be able to 0:04:27 pay those off. 0:04:32 If you're talking a private student loan, your interest rate could be 10%, 11%, 12%. 0:04:37 If you're only making 30, 40, 50K a year, you might be financially ruined forever. 0:04:42 The more incomes spread out, the riskier financing education through debt 0:04:43 becomes. 0:04:46 The winners will win bigger, but the losers will lose bigger. 0:04:49 And to the extent that that's financed through debt, that just can become a massive overhang 0:04:52 on somebody that doesn't win coming out of that education lottery. 0:04:56 And one of the important things that economists have realized in the past few years, I mean, 0:05:00 Kahneman and Tversky, they came up with the notion of behavioral economics. 0:05:05 It was basically the realization that humans don't always act in an entirely rational way 0:05:09 because their downside risk can be so much greater than the upside. 0:05:14 So ISAs are a way of saying, let's say you have a 90% chance of your education working 0:05:19 out and a 10% chance of you being financially ruined for life, a rational person will still 0:05:21 not play those odds. 0:05:26 So if the school holds the risk, it now becomes a 90% chance that somebody is successful and 0:05:30 pays back and a 10% chance that the school has to eat those losses. 0:05:34 The school can actually afford the risk in the way that an individual can't. 0:05:39 And when an individual is de-risked, they can look for more optimum outcomes, and not having 0:05:42 to weigh that downside risk is really, really important. 0:05:47 Are there types of students that ISAs make sense for and don't make sense for?
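As a quick aside, the tuition-dollar figure mentioned above follows directly from the stated rates; here is a check of that arithmetic (purely illustrative, assuming every enrollee pays the same tuition):

```python
# Quick check of the arithmetic above: a 75% graduation rate and a 75%
# placement rate among graduates, with every enrollee paying the same tuition.
graduation_rate = 0.75
placement_rate = 0.75

successful_share = graduation_rate * placement_rate   # 0.5625
unsuccessful_share = 1 - successful_share             # 0.4375

print(f"{unsuccessful_share:.0%} of tuition dollars come from unsuccessful students")
# -> 44%
```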
0:05:51 Depending on the terms, I think it makes sense for every student. 0:05:56 As ISAs exist today, they're only slightly more expensive than upfront tuition at most 0:05:57 schools. 0:06:03 So you can take the risk and say, I'll pay $20,000 upfront tuition, or if I'm successful 0:06:06 and I get a six-figure job, I'll pay up to $30,000. 0:06:11 Turns out if you're successful and you have a six-figure job, that $10,000 delta doesn't 0:06:12 feel as bad. 0:06:16 And you would trade that for, and if I don't get a job, I don't pay anything. 0:06:20 In my ideal world, you only actually pay the school if it's less than the increase in 0:06:21 your income. 0:06:24 The place where maybe the jury's still out, where we're still trying to figure out whether 0:06:29 the economics work, is places where there's much lower variance. Traditional finance 0:06:33 theory would say that's a place where you should take out debt rather than some other form 0:06:34 of financing. 0:06:37 And so in those instances, which to a certain extent is kind of what college looked like 0:06:42 in the 1950s and '60s, in that it was much more predictable, debt was the natural 0:06:45 financial instrument to use at that point. 0:06:48 But now that we're at a place where the risk is so much higher, the outcomes are so much 0:06:51 more distributed, income share agreements are having this moment now, because that's 0:06:55 the kind of population that it really makes sense for. 0:07:00 But I do think it does come back to what is the upside risk, what is the downside risk. 0:07:05 This may not be the perfect example, but say an NBA athlete or someone where if you're 0:07:10 successful, the upside is millions and millions and millions of dollars, and you could de-risk 0:07:17 that by saying, "Okay, an ISA will pay me 100K for life in exchange for, if I'm wildly 0:07:20 successful, I'll give up 10% of my income." 0:07:26 And you can use the high potential for upside to de-risk it for people and cover the 0:07:27 downside risk. 0:07:30 That's the kind of financial engineering that I find fascinating. 0:07:32 The easiest way to think about it is it's just insurance. 0:07:38 You're saying, "If something really bad happens, I'm not going to be in really dire straits." 0:07:43 Insurance is a really good way to think of it, and that's the instrument we use in similar 0:07:50 scenarios where, "Okay, I'll pay $100 a month, and I don't love paying $100 a month. 0:07:55 But if someone wrecks my car and I now owe $20,000 to get a new car, I would like to 0:07:57 not have to pay that, because I can't afford to." 0:08:02 For most people, when you're talking houses and cars and potentially really high 0:08:06 downside risk and volatility, it makes more sense to de-risk yourself. 0:08:10 Let's say you're right and income share agreements become more common. 0:08:13 How might that actually shape education? 0:08:18 So what I hope will happen is that you can drive down the effective price of studying something 0:08:20 that is likely to pay well. 0:08:25 What I really think is missing in the economy is something that will tell you, "Hey, based 0:08:30 on who you are and what you know, and your talents and proclivities, here is the best 0:08:33 place for you in the economy, and we're going to help you get there." 0:08:37 I would imagine that the vast majority of the population could actually end up in a 0:08:41 better place than they are today if they only had any idea what opportunities are available 0:08:43 to them.
0:08:45 Universities don't really serve that function today. 0:08:49 When you show up at the front door, they're not handing you a book that says, "Here are 0:08:53 the best majors for you to choose," and if they did that, then the professors would lose 0:08:54 their minds. 0:08:58 So I hope that it becomes more transparent when you're a student. 0:09:06 And I think about our economy, with human labor being the majority of GDP, 52%, 53% of GDP, 0:09:10 and completely unoptimized, and that's insane. 0:09:14 Our biggest asset class that exists, what most of us spend most of our waking hours 0:09:18 doing, is not optimized at all. 0:09:23 Education as it is today is this kind of bundled value proposition where it is skills training, 0:09:26 it's career development, it's also networking. 0:09:31 What ISAs are really good at doing is they will kind of unbundle that, and they will 0:09:37 atomize the different parts of education and take this part around skills development and 0:09:42 training, the ability to focus on optimizing and maximizing your long-term income, and 0:09:48 it will make really clear which programs or institutions are optimizing for that. 0:09:53 I think it's insane that if, for example, there's a factory that goes out of business 0:09:58 in Detroit, there are all these people that are unemployed, and even if there is huge 0:10:02 demand in other parts of the economy for work those people could do, there's nothing that 0:10:03 matches those two. 0:10:08 There's nothing that will take an unemployed factory worker in Detroit and get them, even 0:10:12 if there's a company next door that desperately needs something, there's no bridge between 0:10:13 those two. 0:10:19 So at a higher scale, I hope ISAs will enable a broader economic clearinghouse where we 0:10:22 can move people to where they're most valuable in the economy, where they're happier, where 0:10:26 they're paid more, and eliminate the friction of doing that. 0:10:31 I think it's probably the biggest barrier to economic progress that exists today. 0:10:36 Assuming you have this ecosystem that's built on top of ISAs, then you will have this much 0:10:40 more efficient system that can move people in and out of the workforce and retrain and 0:10:41 reskill people much more quickly. 0:10:47 A critique of ISAs is that people say graduates with high salaries might end up paying more 0:10:49 under an ISA than, say, a traditional student loan program. 0:10:54 Yeah, it's a trade-off that you have to make for sure, but we offer both upfront tuition 0:11:00 and an ISA, and 98%, 99% of our students pick the ISA. 0:11:04 Because in the scenario where you are paying a little bit more, you're fine, and you're 0:11:08 making enough money that it's not a big deal, and frankly, for us, you only pay it for a 0:11:09 couple of years. 0:11:16 So it's actually not a big delta relative to if you had $20,000 on the line and you 0:11:20 have no idea whether it's going to work out and you owe interest on it no matter what. 0:11:23 Yeah, you might end up having to pay a little bit more than you would if you just paid straight 0:11:28 up, but you're getting the psychological ease of not potentially having that debt hanging 0:11:29 over you. 0:11:30 Most ISAs are capped. 0:11:34 So even if you're making gazillions of dollars, there is a maximum amount you will 0:11:39 pay; whether it's like $20,000 or $30,000 or $40,000 over the lifetime of an ISA, you'll 0:11:41 only pay X amount of money.
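Putting the pieces just mentioned together, the income floor, the income share, the payment term, and the lifetime cap, here is a toy model of a capped ISA. The specific numbers (a 17% share, 24 monthly payments, a $50K floor, a $30K cap) are illustrative assumptions for this sketch, not any particular school's actual terms.

```python
# A toy model of the capped ISA described above. The numbers are illustrative
# assumptions, not any particular school's actual terms.

def isa_total_paid(annual_income: float,
                   share: float = 0.17,            # share of income paid while in repayment
                   months: int = 24,               # "a couple of years" of payments
                   income_floor: float = 50_000,   # pay nothing below this income
                   cap: float = 30_000) -> float:  # lifetime maximum
    """Total paid over the term: nothing below the income floor, a fixed
    share of income above it, and never more than the cap."""
    if annual_income < income_floor:
        return 0.0
    uncapped = annual_income * share * (months / 12)
    return min(uncapped, cap)

print(isa_total_paid(40_000))    # 0.0      below the floor, no payments
print(isa_total_paid(80_000))    # 27200.0  a share of income for two years
print(isa_total_paid(250_000))   # 30000.0  the cap binds for high earners
```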
0:11:46 You’re paying for risk protection on the downside, which can have a bunch of knock-on effects 0:11:50 in terms of how you think about your lifetime earnings. 0:11:53 If you’re able to not only take the risk of going to school or going back to school or 0:11:57 taking time away from the workforce, but also thinking about your career over a longer time 0:12:02 horizon because you don’t necessarily have to service debt immediately upon graduation, 0:12:04 that’s one thing that can actually change the dynamics of a career. 0:12:10 Yeah, one of the instances that’s been most fascinating to watch is if, for example, you 0:12:15 took out a loan to pay Lambda School, the loan payments start becoming due immediately 0:12:16 upon graduation. 0:12:21 So we’ve seen circumstances a couple of weeks ago where a student got a job offer for $70,000 0:12:24 and he was saying, “Okay, that’s not a bad job offer. 0:12:28 I can take that,” and we thought he could do a little bit better. 0:12:32 So you might want to turn that down and keep looking at the market because based on what 0:12:35 we’re seeing, we think you could get more. 0:12:39 If you have a debt payment that’s about to come in, you take that $70,000 offer because 0:12:40 you have to make those payments. 0:12:43 In this circumstance, he waited a couple of months. 0:12:49 We waited longer to get paid, but this particular student ended up getting a job for $240,000. 0:12:54 So a massive, massive difference in income just because he had the psychological ability 0:13:00 to wait a little bit and to pick the actual optimum outcome for him as opposed to optimizing 0:13:04 for, “I’ve got these payments that I need to make that are going to be due no matter 0:13:06 what my life looks like.” 0:13:07 The financial ability to do that too. 0:13:12 It’s psychological and financially, it’s freed him or her to make that better decision. 0:13:17 We all know these people that graduate from university, they take the job in investment 0:13:20 banking or something that’s going to give them a high cash component for three years 0:13:23 because they want to pay off their student debt, and then at that point, they’re going 0:13:24 to go do what they really want to do. 0:13:29 That’s this common thing that you see amongst graduates all over the US, and income share 0:13:35 agreements can mitigate that and it can change the immediate incentives you have upon graduation. 0:13:39 We’re seeing those incentives play out at a broad scale. 0:13:44 People are starting families later, they’re not buying houses, they’re not starting businesses. 0:13:47 My combinator is trying to do a study right now that determines how many people would 0:13:50 be starting businesses if they didn’t have student debt. 0:13:55 If you have $1,000 a month in debt payments, it becomes really difficult to start a company. 0:13:59 So you’re saying the impact of ISAs may extend beyond the issue of student debt. 0:14:04 I think a lot of the reason that people get excited about ISAs is because of the second 0:14:05 and third level effects of it. 0:14:10 The ISAs are like the building blocks, but then you do need this robust ecosystem built 0:14:11 on top of it. 0:14:14 I think we’re in the early days of building that ecosystem right now. 0:14:16 What else is in the ecosystem? 0:14:21 If you think about ISAs broadly, there’s places where you can use ISAs beyond skills training. 
0:14:25 You can think about it as, I can move you from place A to place B and pay for that, and take 0:14:30 those upfront costs and then take an income share agreement on the back end, and that's 0:14:31 the way to finance that. 0:14:35 Right now, mentorship exists in this informal, nice-people-do-it type thing, but you can 0:14:37 think of a way where that gets much more formalized. 0:14:41 There's tons of value propositions that can be layered on top of income share agreements 0:14:43 that I think people get very, very excited about. 0:14:49 I've seen things go as far as an ISA for immigration, where we'll pay for all of your 0:14:54 legal fees, we'll do everything that we can, and then if you end up in a certain country 0:14:58 where you're making way more money, you pay a percentage of income for a few years. 0:15:02 But net, it's going to be more than you were making wherever you were, and you can create 0:15:03 a new life. 0:15:08 The design of it is so important; you really have to consider the details when structuring 0:15:09 it. 0:15:11 You're 100% right that the devil's in the details. 0:15:15 In theory, you could make an ISA that makes a lot of sense, or you can make an ISA that 0:15:18 makes no sense whatsoever. 0:15:25 My hope is that as there is more competition in the ISA space, and as there becomes better 0:15:30 regulation, more regulatory guidelines, more repayment history on these things, we can 0:15:33 start to figure out what the optimum rate is. 0:15:36 Right now, it's a little bit up in the air. 0:15:40 When we started, we were basically guessing about what repayment rates would be, and we 0:15:44 created terms that made sense in this imaginary model. 0:15:48 A loan has a par value, it has an interest rate. 0:15:53 You know what payments are going to be coming in every month, and then if they don't come 0:15:57 in, then you call that a default and you report it to the credit bureau. 0:16:02 With an ISA, there are times when a payment is not coming in and that's okay, because it's 0:16:04 built into the agreement itself. 0:16:06 In an ISA world, that's actually not a default. 0:16:10 That may be exactly what the instrument is intended for. 0:16:15 The entire ISA market today is maybe a couple hundred million dollars a year. 0:16:19 It's not big relative to basically any other asset class, whereas student loans are in 0:16:21 the trillions. 0:16:26 You bring up a good point in that ISAs, though they're getting a lot of buzz in politics 0:16:31 and in the media, are still only being pursued by a relatively small percentage of 0:16:32 schools. 0:16:34 Why aren't they gaining more traction? 0:16:39 As a school, today you will make less money with an ISA than you will with a loan. 0:16:45 The way ISAs are structured today, if my upfront tuition is $20,000, I'm actually 0:16:50 not planning to make $20,000 on average from an ISA, and I might have to wait a couple 0:16:52 of years for that to happen. 0:16:57 It's actually a worse deal for the school in many cases, but the important flip side 0:17:04 of that is it opens up access to students who wouldn't otherwise be your students. 0:17:10 Lambda School is based on saying, "What if we increase accessibility to the point where 0:17:13 basically anybody with a laptop can attend?" 0:17:17 Now there are millions and millions of people who wouldn't be willing to take the risk 0:17:22 upon their own financial future to go to a code school, and we say, "Try it out.
0:17:24 See if it works for you." 0:17:27 If it works for you, you're going to pay us back, but you have a great job now, so it 0:17:29 all works out. 0:17:35 But the cost of capital is high enough, and they're different enough from loans, that you 0:17:39 on average expect to make less. 0:17:42 The reason that so few schools are doing it is because you really have to design the 0:17:45 school around making that work. 0:17:52 Do you think then that ISAs work well for vocational schools and coding schools, but 0:17:55 the jury's still out on more traditional higher education? 0:18:00 One of the ways they're being used in traditional universities, which is interesting, is as a 0:18:03 tool to help people stay in school. 0:18:07 The biggest problem that all the universities deal with is retention, retention, retention. 0:18:11 A university looks at how much financial aid they have. 0:18:14 Some is from the government, some is internal. 0:18:17 And you can pretty well identify the students that are going to need that student aid or 0:18:19 they're going to drop out. 0:18:25 That student aid is not enough to cover everybody, so there are people for whom it's basically either 0:18:29 they're going to drop out of the university or you're going to give them an ISA. 0:18:35 So even if you don't get full tuition value, dollar for dollar, it's better to have 0:18:39 those students pay you, maybe it's 90 cents on the dollar, maybe it's 75 cents on the 0:18:42 dollar, to allow them to not drop out. 0:18:46 Obviously, that approach is different than a vocational school like ours that is built 0:18:49 in such a way that you don't pay unless you get a job. 0:18:53 There's a bunch of things probably throttling the ISA market right now. 0:18:56 I think regulatory uncertainty is probably one of those. 0:19:01 The investor side likes to invest based off of data, and they like to invest into things 0:19:02 that are very predictable. 0:19:06 The nature of ISAs is that they just take years and years and years to really build out that 0:19:07 data set. 0:19:13 So I think the nature of ISAs is just something where it's destined to grow relatively slowly, 0:19:16 just because the capital markets don't unlock without predictability. 0:19:22 Not to mention the fact that if you're a university, your COGS are so high that you can't create 0:19:25 an ISA with a two-year repayment window. 0:19:30 They're all eight years, 15 years, so that's going to take a very long time to know what 0:19:32 the repayment predictability is like. 0:19:36 So the number one thing holding us back from creating a bigger data set is there just 0:19:41 fundamentally aren't enough ISAs to get the capital in so you can watch the data roll through. 0:19:45 If you look at the history of lending, it was very similar. 0:19:47 Nobody knew what the right interest rate would be. 0:19:53 Nobody knew how to securitize loans and turn them into a bond and start selling them. 0:19:58 Today, there are trillions of dollars that are looking for returns, but they have to 0:20:03 be very, very sure returns before you'll put those trillions of dollars in. 0:20:08 So our hope is that we can make that data happen faster and bring down the risk faster. 0:20:12 You bring up an interesting point in that ISAs, though they're very buzzy right now, are not 0:20:14 a new concept. 0:20:19 People have tried this before and maybe failed; what are you doing differently? 0:20:24 The most popular experiment with ISAs was done by Yale in the '70s.
0:20:28 And they did it in a very interesting way, which was they said, "Okay, this class will 0:20:34 cost us this much in tuition, and everybody's going to pay a percentage of their income until 0:20:39 we've reached that amount plus a little bit of interest," but it was as a pool, not as 0:20:40 an individual. 0:20:45 And it was very explicitly, "Once we've hit that amount, then everybody's done," not 0:20:46 an individual ISA with a cap. 0:20:53 It was the cap for a class, which feels much worse if you're a high earner. 0:20:55 And so that actually didn't work. 0:20:59 Importantly, they also gave people the opportunity to buy out early. 0:21:02 So not only did they structure it as a class, and there was this rate of return that the 0:21:07 class had to hit as a cohort, but also they gave people the ability to pay, I think it 0:21:13 was like 150% of the cost of their tuition, so the people who would have been the really 0:21:16 big earners later on in life had the ability to buy out early. 0:21:19 And then obviously, people who weren't doing well were paying a smaller percentage of their 0:21:23 income, so the burden really fell on the middle of the class in this experiment. 0:21:27 And so I think the lesson from the Yale experiment, which I think is the lesson of all ISAs in 0:21:30 general, is that design is really, really, really important. 0:21:31 All financial instruments in general. 0:21:32 Yeah, exactly. 0:21:36 It's like, are you giving people the option to opt in or opt out, and when is that happening, 0:21:38 and how is that creating adverse selection in your pool? 0:21:41 How is that changing everybody else's burden? 0:21:42 These things matter a lot. 0:21:47 And so I think ISAs are extremely exciting at the conceptual level, but I think also, whether 0:21:52 it's the Yale experiment or the series of companies in the 2000s and the early 2010s 0:21:57 that came and went using ISAs, they didn't get those details right. 0:22:04 I think another big difference between ISAs today versus ISAs in the early 2010s and in 0:22:11 the past is most of the ISAs today are used in vehicles that are meant to shift your income. 0:22:17 ISAs are sometimes referred to as selling an out-of-the-money call option against yourself. 0:22:21 So you'll pay, but only in a circumstance where your income changes so much that you 0:22:23 don't care. 0:22:27 In the past, when it's just been, hey, anybody can get an ISA, there's the selection bias 0:22:32 of, why would I take out an ISA if I can take out a loan? 0:22:36 But when tied to an educational institution, the reason you take out an ISA as opposed 0:22:41 to a loan is because the institution is promising to shift your income or you don't pay, which 0:22:47 is very different than just a raw ISA to pay for a car repair or something like that. 0:22:51 They're not lending you dollars, they're lending dollars to a school that's going to 0:22:52 shift your income. 0:22:57 Which is one of the things that was new about this generation of income share agreements 0:22:59 that didn't exist in the previous generations. 0:23:02 So I want to get into the political sphere a little bit. 0:23:07 So the student debt crisis is something that has gotten a lot of attention in the media, 0:23:10 particularly leading up to the upcoming election.
0:23:15 In June, members of Congress sent a letter to the Secretary of Education that read, 0:23:20 in part, “ISAs carry many common pitfalls of traditional private student loans with the 0:23:24 added danger of deceptive rhetoric and marketing that obscure their true nature.” 0:23:25 What would you say to skeptics? 0:23:31 A world of ISAs, I think, would be demonstrably better net-net than a world of the existing 0:23:32 student loan arrangement. 0:23:38 I’ve spent a lot of time on Capitol Hill talking to Congresspeople and senators 0:23:40 on both sides of the aisle. 0:23:45 And what I’ve generally found is a lack of understanding, a curiosity, and kind 0:23:50 of a fear and skepticism that’s probably healthy: hey, there’s this new financial 0:23:55 instrument. And one of the reasons we’re pushing so hard for regulation in the space is because 0:24:00 it’s not hard to imagine an ISA that would be predatory, right? 0:24:06 The same way, if there were no lending regulations, there are a million different ways that you 0:24:07 could abuse loans. 0:24:12 You could say, this loan has a 9,000% interest rate, but you get your car today. 0:24:14 That’s not a win for anybody. 0:24:19 So I haven’t seen anybody abusing ISAs, but I think politicians are skeptical. 0:24:22 What the Department of Education is saying is, let’s look at this deeper. 0:24:28 Similarly, at the federal level we still haven’t seen that data yet. 0:24:29 We haven’t seen what the returns are like. 0:24:32 We haven’t seen what will happen to student repayment. 0:24:35 We haven’t seen what the burden is like. 0:24:40 We haven’t even, as the federal government, created rates for what ISA repayment ought to 0:24:41 be. 0:24:44 So I think it makes sense that there’s skepticism. 0:24:47 Let’s say someone completely wipes out existing student debt. 0:24:50 Does that affect the outlook for ISAs at all? 0:24:54 Well, if you wipe out the existing student debt, based on the way we’re pacing, we’ll 0:24:58 be basically back where we are today in about 10 years. 0:24:59 That’s depressing. 0:25:05 So you can wipe out the student debt, but the bigger question is who pays for the education 0:25:07 and how? 0:25:12 And is it just that the federal government will now make all university tuition free? 0:25:13 Which I suppose is an option. 0:25:15 I’d be shocked, but that’s an option. 0:25:18 Even in that world, there’s still probably a place for ISAs. 0:25:22 I’m from Canada, and healthcare is free, but there’s still fee-for-service medical care. 0:25:25 One of the things that’s really interesting is one of our bigger markets right now is 0:25:26 in the EU. 0:25:30 And in a lot of countries, there’s free education, but it takes a long time. 0:25:31 It’s low quality. 0:25:35 And a lot of our students are saying, even though I have the opportunity to go to college 0:25:40 for free, Lambda School is still providing a better return because it gets me the things 0:25:41 that I need: 0:25:47 getting into a career faster. And our ISA in the EU is 10% for four years. 0:25:53 And so we can make everybody’s income, I think, 10% higher than it would be at a university 0:25:55 over the same lifetime. 0:25:58 It makes sense that even if there’s a free option, there would be a premium option that 0:26:01 would exist that would attract a certain segment of the population. 0:26:05 The fact that you’re offering an ISA is a signal that we believe we can fundamentally 0:26:09 change your outcome in a way that’s a delta to the free system, right?
0:26:11 Could there then be unintended consequences? 0:26:18 To the extent that ISAs become the dominant, or even exclusive, way to finance education, the entire 0:26:23 education system will then reorient itself around this idea of maximizing future income, 0:26:26 which for a lot of students is what they want. 0:26:29 For some students, that is not what they want. 0:26:34 And so you would have these knock-on secondary effects: income is the thing we can measure. 0:26:38 It’s the thing we can optimize against with this financial instrument. 0:26:44 There’s all kinds of other value being created within the current educational system, and 0:26:50 where and how those things would slot into this new world of exclusively ISAs 0:26:51 is an open question. 0:26:56 So when we talk about secondary effects of this world, I think that is probably one 0:26:57 that is worth paying attention to. 0:26:58 Agreed. 0:27:00 We talked about immigration earlier. 0:27:03 Is there potential for this model to play out in fields beyond education? 0:27:08 When I think about the excitement around ISAs, education is one big piece of it. 0:27:14 The idea of extending it out into other layers of the educational sphere is exciting, but 0:27:16 more unproven. 0:27:18 And then you trace it out farther and farther and farther. 0:27:22 And you have this version where you can be using ISAs for all these other things, 0:27:27 whether it’s immigration, or whether it’s diversifying risk amongst athletes and artists, 0:27:29 which is another use case that people are talking about. 0:27:32 You can also think about how that plays out in larger and larger contexts. 0:27:38 Whether it’s IPOing cars or IPOing pieces of art, there is this trend towards more and more 0:27:40 securitization of different types of assets. 0:27:45 And so it makes sense that the ability to finance advancements in human capital would 0:27:47 naturally tend towards some version of that. 0:27:49 Well, thank you so much for joining us. 0:27:50 Yeah. 0:27:51 Thanks for having me.
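To make the design differences discussed in the interview concrete, here is a toy model contrasting an individual ISA with a per-person cap against a Yale-style pooled cohort that owes tuition plus a little interest as a group, with a 150% buyout for high earners. Every number and name below is invented for illustration; this is a sketch of the incentive problem, not a reconstruction of Yale's or Lambda School's actual terms.

```python
# Toy comparison of an individual capped ISA vs. a Yale-style pooled cohort.
# All figures are hypothetical and chosen only to illustrate the dynamics.

TUITION = 20_000                 # hypothetical tuition per student
SHARE = 0.10                     # hypothetical income share
TERM_YEARS = 10                  # hypothetical individual payment window
BUYOUT = 1.5 * TUITION           # pooled design: exit early at 150% of tuition
INDIVIDUAL_CAP = 1.5 * TUITION   # individual design: per-person repayment cap

# Hypothetical cohort: a non-earner, a low earner, two middle earners, one high earner.
incomes = {"A": 0, "B": 20_000, "C": 50_000, "D": 60_000, "E": 300_000}


def individual_isa(income):
    """Individual design: SHARE of income for TERM_YEARS, capped per person."""
    return min(SHARE * income * TERM_YEARS, INDIVIDUAL_CAP)


def pooled_isa(incomes):
    """Pooled design: the class owes tuition plus a little interest as a group,
    with no individual cap. Simplification: only someone whose yearly share
    already exceeds the buyout price exits early; everyone else pays the same
    share of income until the class target is met, so individual totals end up
    roughly proportional to income."""
    target = len(incomes) * TUITION * 1.05       # tuition plus "a little interest"
    paid = {name: 0.0 for name in incomes}
    for name, income in incomes.items():
        if SHARE * income >= BUYOUT:
            paid[name] = BUYOUT                  # high earner takes the buyout
    remaining = target - sum(paid.values())
    payers = {n: inc for n, inc in incomes.items() if paid[n] == 0 and inc > 0}
    total_income = sum(payers.values())
    for name, income in payers.items():
        paid[name] = remaining * income / total_income
    return paid


if __name__ == "__main__":
    for name, income in incomes.items():
        print(f"{name}: individual capped ISA -> {individual_isa(income):>9,.0f}")
    for name, total in pooled_isa(incomes).items():
        print(f"{name}: pooled cohort design  -> {total:>9,.0f}")
```

In this toy cohort, the non-earner pays nothing, the highest earner exits at the buyout price, and the balance of the class target lands on the remaining earners with no individual cap to protect them, which is the "burden falls on the middle" problem described in the interview.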
A bold proposal: You go to college for free, then pay back the school after graduation—but only if you get a job in your field of study and make a high enough salary to afford it. It’s called an income share agreement, and Austen Allred, the CEO and cofounder of Lambda School, thinks it’s the future of education.
Student debt currently stands at more than $1.5 trillion, which makes it the second-highest consumer debt category behind mortgage debt. The crisis has saddled much of a generation with debt and has far-reaching effects. Income share agreements, or ISAs, have been put forth as an alternative to the current system. Put simply, an ISA is an agreement between a school and a student for the student to pay a defined percentage of income to the school, for a particular period of time, up to a certain cap. It’s a seemingly simple conceit with complex design considerations, and it’s spurring debate across media and politics.
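For readers who want to see that mechanic concretely, here is a minimal sketch of the repayment logic just described. The specific numbers (a 10% share, a four-year window, a $30,000 cap, a $50,000 income floor) are hypothetical, not any particular school's terms, and in this simplified version low-income years still count toward the window, whereas real ISAs often pause the clock instead.

```python
# Minimal sketch of an ISA repayment calculation: a defined percentage of
# income, for a fixed number of years, up to a cap. All parameters below
# are hypothetical placeholders.

def isa_total_repayment(annual_incomes, share=0.10, years=4, cap=30_000,
                        income_floor=50_000):
    """Sum payments over the ISA term.

    annual_incomes: projected income for each year of the term
    share:          fraction of income owed each year (hypothetical 10%)
    years:          length of the payment window (hypothetical 4 years)
    cap:            maximum total repayment (hypothetical $30,000)
    income_floor:   no payment owed in a year below this income (hypothetical)
    """
    total = 0.0
    for income in annual_incomes[:years]:
        if income < income_floor:
            continue              # below the floor, nothing is owed that year
        total += share * income
        if total >= cap:
            return cap            # payments stop once the cap is reached
    return total


# Example: a graduate who lands a job in year two.
print(isa_total_repayment([0, 60_000, 70_000, 80_000]))  # 10% of 210,000 = 21,000
```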
In this episode, Lambda School CEO Austen Allred, a16z general partner D’Arcy Coolican, and a16z editorial partner Lauren Murrow delve into the greater implications ISAs may have for education and the economy. The discussion covers both the promise and the challenges of ISAs—why they’ve been relatively slow to gain traction, why they’ve failed in the past, and why some in the political sphere are still skeptical.