The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com/disclosures.

– Hi everyone, welcome to the a16z Podcast, I'm Sonal. This week, to continue our 10-year anniversary series since the founding of a16z, we're actually resurfacing some of our previous episodes featuring founders Marc Andreessen and Ben Horowitz. If you haven't heard our latest episode, with Stewart Butterfield turning the tables as the entrepreneur interviewing them, please do check that out, and other episodes in this series that we've been running all week, on our website at a16z.com/10. But this episode was actually recorded in 2014, on the five-year anniversary of the firm, and features Michael Copeland interviewing Ben and Marc about disruption theory, as well as key traits of entrepreneurs.

– Disruption theory has been in the news of late as it relates to Clayton Christensen, you know, the master of this. And I just wanna ask you guys, not so much about the criticism of him, but from where you sit: that theory and his thinking kind of galvanized itself into a book in 1997. You know, do you build companies differently today? Do those theories still hold water, or what's changed?

– Yeah, so I think his book was actually quite brilliant. It's funny that it's coming under criticism now, after he's been proven, like, completely right in the general idea that he had. It actually reminds me of the creationist attacks on evolution. It's, like, intellectualism at its worst, right? It's like, oh, here's something wrong with Darwin's original theory. And it's like, okay, now we've based all of biology on it. We've made tremendous progress. Like, how about that? And this is kind of like, you know, I don't believe in electricity, you know? This is kind of the business version of that, where, you know, he developed the theory for all of us in high tech. And it was an amazing business book at the time, because it explained a phenomenon that now is kind of obvious, but in 1997 was tricky, which is: why do there really need to be new companies?

– Right.

– And what's happened, and we just got through talking about this, is there's an explosion of new companies, and these companies aren't trivial, they're becoming very, very important companies, you know, companies like Google and Facebook and so forth. And so he's kind of been proven right. And not only has he been proven right on kind of the large level, but the mechanics that prevent the kind of incumbents from innovating at the same rate as the new companies are still completely in effect. And we, you know, use his models all the time in our thinking and our analysis.
And no doubt there are probably some minor problems with examples he's used, or, like, the way he worded it, or what have you, but basically he was right.

– Yeah, I would also say two things. One is, we actually use this theory basically to tell us what not to invest in.

– Yes, right.

– As well as what to invest in.

– So how so?

– Well, so for example, we have this theory that basically it's very, very dangerous. So one of the great things about our industry, about venture capital, is you get to do these things that basically disrupt sort of the big established incumbent companies. Conversely, a very dangerous thing to do is to attack companies that are what we internally call the new incumbents. And so it's one thing to, like, go attack, you know, a tech company that's been in business for 50 years and is on its sixth CEO or something like that. It's another thing to go attack Google being run by Larry Page.

– Right.

– Because, you know, Larry Page is fully aware of the theory of disruption and in full command of his company. And if he sees a disruptive threat coming, he is quite capable of doing the things to head it off that a, you know, fourth-generation professional CEO might not be able to do. So anyway, that was one thing I wanted to say. The other thing I wanted to say is about disruption. I agree with Ben, it's funny that this is a topic now, but since it is, it's worth talking about. The term disruption, by its very nature, the term itself, has negative connotations, right? Disruption seems like it's one step away from destruction. And so you see it in this kind of popular conception that there's something bad about it. The actual way that Christensen used the term was actually in a very sort of applied way, in a very specific circumstance in business, and actually in a very positive way, which is basically he described it as a way that progress happens, right? So progress doesn't happen by old companies deciding to do new things. Progress happens because new companies decide to do new things. And then disruption is the process by which the new things are able to take over from the old things. If you decide you don't like disruption, what you're basically saying is you don't like new things, right? To be against disruption is to basically be pro the status quo. And pro the status quo means: however the world is today, like, that's it, like, that's all we're gonna have. Like, the way things work today, this is as good as it's ever gonna get. The disruption argument is: no, no, no, things can become much better. Products can become much better. Businesses can become much better. Opportunities for people can become much better. And so it's a negatively connotated term that has very positive implications.
0:05:15 And I think that that’s really at least in the last couple 0:05:17 of years that’s been lost in a lot of the commentary. 0:05:19 – You mentioned Google and one of the things 0:05:20 that we’ve seen, you know, 0:05:22 through the technology industry’s history 0:05:25 is that it’s very, very hard to disrupt yourself 0:05:27 and kind of make a transition from one thing to another. 0:05:31 IBM, maybe the only company that’s done it. 0:05:34 Google, you know, they’re trying everything. 0:05:36 You know, and Facebook is trying everything. 0:05:39 And do these companies somehow change the rules? 0:05:42 Or is it the same rules applying and, you know, 0:05:45 disruption theory catches up with them in 50 years maybe. 0:05:48 – So I think you’re, I think kind of have to break 0:05:51 that back apart and go back to what Mark said. 0:05:53 I think that people often think of big companies 0:05:55 can’t innovate, little companies can, 0:05:59 but the real truth is new companies can innovate 0:06:03 and companies that are so old that the original inventors 0:06:06 are gone, have a lot of trouble doing it. 0:06:10 And so if you go back to HP or IBM 0:06:13 or any of these companies, when the founder, 0:06:15 when Thomas Watson was running the company, 0:06:17 when Dave Packard was running the company, 0:06:20 they didn’t have any trouble doing new things. 0:06:22 And they did a phenomenal, I mean, HP in particular 0:06:24 did like a crazy number of new things, 0:06:29 just amazing and in retrospect, really phenomenal. 0:06:30 And if Mark Zuckerberg’s running the company 0:06:32 or Larry Page is running the company, 0:06:33 you know, that’s not an old company. 0:06:34 That’s a new company. 0:06:39 And as innovators, they, you know, we believe 0:06:43 and this gets back to why we don’t attack them 0:06:46 because they’ll attack right back and very effectively. 0:06:48 You know, they’re going to be able to do new things. 0:06:50 And like sometimes that will mean 0:06:52 bringing in new talent through acquisition 0:06:55 or new technologies through acquisition, 0:06:59 but they’re going to be able to think about the problem 0:07:02 through a lens that is not the business they’re in. 0:07:05 And that’s kind of, this is the amazing thing 0:07:08 that Clayton Christiansen laid out was that, you know, 0:07:10 if you’re like, if you’re an old company 0:07:12 run by professional managers, 0:07:13 you’re really good at studying 0:07:15 and optimizing the business you’re in. 0:07:17 And so if there’s a new business that comes along 0:07:22 that doesn’t, is inconsistent with that, you get stuck. 0:07:23 But if you’re Mark Zuckerberg 0:07:26 who created a business from nothing, 0:07:27 then you have a very different view of the world. 0:07:31 And it’s not like, okay, how do I optimize the business 0:07:32 that I’m in? 0:07:33 It’s like, well, how do I get another business 0:07:34 that’s like Facebook? 0:07:36 That’s more the way you think about it. 0:07:37 – The other thing is the fact that Christiansen 0:07:38 was able to articulate this in a theory 0:07:40 that’s so clear and put in the book is, 0:07:43 I think that like the best professional CEOs 0:07:45 in the tech industry today, like now understand this 0:07:46 in a way that maybe their predecessors 0:07:48 10 or 20 years ago didn’t understand it. 0:07:49 So I’ll just give you two examples 0:07:51 of people I work with, John Donahoe at eBay. 
Like, when mobile came along, you know, sort of classical professional CEOs would look at it and say, well, I've got this great business on the web. If I move to mobile, it may or may not work as well. And so maybe I don't want to try to make the move. Maybe I want to stay on the web and reinforce the web, and not take the risk of, quote, disrupting myself by making the jump to mobile. But since John understands disruption theory, and it's been articulated and explained in a way that makes sense, he was able to act on it, and he's done a phenomenally successful job. He went full throttle into mobile, and they made the jump, and they've done it very well. Meg Whitman is doing the same thing. One example is this Project Moonshot, which is these cartridge-based servers.

– At HP, right?

– At HP, that are a direct attack on the existing blade server business. And the blade server business at HP is a very, very big and profitable business. And HP is basically self-disrupting with this new kind of cartridge-based server. And so again, when you have the discussion at the HP board meeting, you're like, okay, why are we taking the risk of damaging this big existing profitable business by doing this new thing? The answer is: because it's the right thing to do according to disruption theory. Like, there is a logical framework. And again, think about what's happening, which is something new is happening. Progress is happening, right? This is now the reason and the motivation and the explanation and the justification to be able to make progress. So it's an incredibly powerful, positive thing.

– Let's get to entrepreneurs and entrepreneurship. You guys founded the firm in part, I've been told, because you wished you'd been told or helped in certain ways. What's one thing both of you wish you knew, or someone had told you, as entrepreneurs?

– Well, that presumes we would have listened. You know, there's just so much that we did not know going through it the first time. And one of the great things about the entrepreneurial experience is it's just an amazing learning curve about everything from markets to organizational structures to compensation to everything. But probably one of the most challenging things to learn while you're out there is kind of how macroeconomics impact markets, and particularly how private funding can change very, very rapidly. You know, and this wasn't as harsh a lesson at Netscape, but at Opsware and Loudcloud, it was incredibly difficult for us to go from the funding environment where we basically had the highest multiples in the history of anything to there being no money available, period. I mean, it was the most dramatic fall imaginable, from the highest of highs to the lowest of lows.
And, you know, to have the NASDAQ fall over 80%, and that's the NASDAQ, that's not tech; tech fell 95%. It's just not something you could even imagine or get your head around. So I wish, you know, like, I wish we would have known that. I don't know if we would have believed anybody if they had told us that, but it would have probably made it a little less painful if we had any idea how bad it could be.

– It would have made a worse book, I'll tell you that, the one you wrote, but still.

– Yeah, yeah.

– On the other side of the table, what do you want more of or less of from entrepreneurs?

– Yeah, well, you know, it's very different across different businesses, but the one thing that would probably be nice if there was less of, and that's pretty consistent, is it'd be nice if it wasn't so important to entrepreneurs what their peers' valuations were. Like, that is probably the most meaningless thing imaginable to focus your mind on as an entrepreneur. It's just, like, irrelevant.

– You don't have anything else to base your value on, do you?

– No, no, no. You go ahead.

– Yeah, so it's not, actually. You know, your company is your company, their company is their company. You're looking at the price they got, not any of the business metrics that they have, or, like, how the company is going. So you're not actually basing your valuation on anything, in that sense. And there's better data to be gotten, for sure. Like, you know, we have better data. We can talk to them about all the kind of valuations based on actual revenue and so forth, as opposed to the person they went to school with or the person they worked with at their last company. But people get very wrapped around the axle on that, because, you know, it's kind of the thing that Peter Thiel talks about, where competition is actually, like, really destructive. And that's, like, the worst kind of competition, 'cause it's competition that's irrelevant to anything in life other than, you know, being able to go tell your friend what valuation you got. And I think that it causes bad, you know, errors in judgment, and delays in decisions that need to be made quickly, and things like that. So, you know, it's just one of those things where humanity would be better off with less of it. Less of that would be good.

– Marc, anything you would offer on that?

– Well, the thing that the great entrepreneurs all have in common, and we talk about this a lot, but you just see it every day, is the great entrepreneurs all have amazing courage. And so I would say we're blessed in that the entrepreneurs we work with, and we select for it, I mean, we try very hard to select for it, but the entrepreneurs we work with that are amazing, one of the things they all have in common is they're incredibly courageous, by which I mean they don't give up. They don't quit. They don't flinch.
They don't get demoralized. I mean, well, actually, they may get demoralized or depressed, but they show up to work the next day and they work their way out of whatever problem they're in. And they just keep pounding and pounding and pounding and pounding. And I think there's a little bit too much in the Valley right now of the pivot, and the lean startup, and the, you know, everything's an experiment, and minimum viable product, and failure is good, kind of all of these excuses to be able to give up when things aren't going well. And I think that the great entrepreneurs through history have always been the opposite kind of personality. They've always been: "I'm gonna make this thing work, hell or high water, no matter what. I am going to knock my way, you know, headfirst through any barrier that I run into. I don't care what people say about me. I don't care what kinds of problems I have. I'm gonna figure this out and I'm not gonna give up." And so I would just say we love working with people who have that personality type, and you can never have enough of them.

– Elon Musk comes to mind. I mean, cars and space.

– Yeah, so think about this. To start a new electric car company, and by the way, think about the last car company started in the United States. They literally made a movie about the catastrophe that resulted, which is this movie, Tucker. And so if you want, like, a story of a horrible business.

– Which went better than DeLorean.

– Yes, well, actually, yeah, DeLorean. Well, he had the added, he had the cocaine smuggling business on the side, which helped to defray the expenses. But, you know, car companies, like, all the car companies in the US that are successful are, you know, from the 1910s and 1920s. And so to start a new car company in the electric car category, when all the electric cars had failed; simultaneously to start the first new private rocketry company in the United States in probably 40 years, to go straight up against the big boys; to do those at the same time; and then to go through the 2008 crash, and he has actually recently opened up on this, like, he almost lost both companies in 2008, like, they almost both vaporized; and then to gut through both of those and have both come out the other side as just, like, screaming successes, is just a spectacular performance. And a huge part of it is he didn't give up.

– Ben, let's touch on your book a little bit, The Hard Thing About Hard Things. One thing is, it got a great reception, but people are like, well, yeah, that sounds good for you, Ben, but that was your story. How can I embrace that and make that my story? But, you know, was there anything in the response that you wish people had pushed you harder on?

– Well, the things that people pushed me on actually annoyed me. So it's hard to say that I wish for more of that.
I mean, I think that, to your point, though, it was my story, and there was a really specific reason for that, which is that building these companies tends to be very dynamic and very situational. And so a very frustrating thing about management advice in general, both in books and in the things that you often get from board members, or kind of pattern matchers as it were, is that they're giving you advice, and it's based on something, and that advice and what it's based on may or may not be relevant to you. And if you don't know what it is, it's very difficult to interpret it. And I always found that, you know, management books would give, like, guidance, and you'd be like, well, okay, is that what I should be doing? But I have no idea where it came from, and so it's hard to say. So a lot of putting my story in was just to say: look, this is why I'm telling you this. And, like, if your situation is completely different from this, then that might be the part of the book that you ignore, or, at least, maybe you can map it onto what you're doing. But I think that without knowing why somebody is telling you something, it's pretty difficult to get value out of it.

– And on the topic of the entrepreneurial journey, we have to go see a pitch.

– All right, that's what you guys get paid to do. So Ben and Marc, thanks so much. We will do this again. Well, we won't do it in five years, we'll do it much sooner than that. Thank you very much.

– Okay, thanks, Michael. – Thanks, Michael.
with Marc Andreessen (@pmarca), Ben Horowitz (@bhorowitz), and Michael Copeland
Continuing our 10-year anniversary series since the founding of Andreessen Horowitz (aka "a16z"), we're resurfacing some of our previous episodes featuring founders Marc Andreessen and Ben Horowitz.
This episode was actually recorded in 2014, on the 5-year anniversary of the firm, and features Michael Copeland interviewing Ben and Marc about disruption theory, as well as key traits of entrepreneurs.
You can find other episodes in this series at a16z.com/10.
– Hi, and welcome to the a16z Podcast. I'm Amelia Salyers. Today's episode is a special one, since it's the 10th anniversary of Andreessen Horowitz, which was founded in late June 2009. So we decided to turn the tables by asking Stewart Butterfield, founder and CEO of Slack, to interview our founders, Ben Horowitz and Marc Andreessen. The three of them discussed the differences between founders in 2009 and today, the business model of VC and a16z's history, and technology trends then, now, and into the future. And they also throw in a few good summer book and TV recommendations at the end. Please note that the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Any investments or portfolio companies mentioned, referred to, or described in this podcast are not representative of all a16z investments, and there can be no assurance that the investments will be profitable, or that other investments made in the future will have similar characteristics or results. A list of investments made by a16z is available at a16z.com/investments. For more details, please see a16z.com/disclosures.

– One question I wanna ask is: has the nature of the entrepreneurs or the founders changed? So in 2009, you have raised some money, you have some LPs, now you're out there looking for ways to invest. Who are you meeting, where are they coming from, and what are they like?

– Yeah, well, you know, I really feel like, in retrospect, that class of 2009 entrepreneurs were some of the most special ones that we've met in the entire history of the company. And when we see that again, we always say: like yourself, Todd McKinnon, Martine, Brian Chesky. You know, the thing all of you had in common is you had gone through something just unbelievable to get to the position you were in to start the company. You know, earned your stripes. Like, everybody in 2009 seemed like they'd paid their dues, like, in a pretty serious way to get into position. And I think, you know, one of the great things that's happened over the last 10 years is it's just become easier to start a company. But as a result, you don't have, you know, people who know exactly what it is, and what that means, and what they're about to face, and still wanna do it, which is, you know, a very special thing, but that's an unusual person.

And then I'd add two categories of entrepreneurs who have really risen in the last decade. So one is, you know, our partner Alex kind of coined this term, O2O. You know, kind of B2B, B2C, now O2O, and O2O is online to offline. So it's this whole broad category. It's, you know, Airbnb and Lyft and Uber and many of these, Postmates and DoorDash and all these. It's sort of these companies where you have an online experience that culminates in something happening in the real world.
And so those founders are much more, I would say, operationally focused, maybe, than the previous generation, right? It's not just software. Beyond just being software, you know, those companies have a big real-world, you know, logistical, infrastructure, operations component. And so that has turned out to be a different kind of founder, which is really interesting. Actually, almost even a little bit of a throwback model. Those people arguably are a little bit more like the semiconductor founders from, like, 30 years ago. They're, like, harder core. They're grouchier. There's dirt under the fingernails.

– Yeah, and they have to worry about more real-world stuff.

– Yeah, stuff can go wrong, you know, people can die. Like, you know, all kinds of stuff can happen. So that's been a particularly interesting rise of a new kind of founder, and it has been super interesting to work with those people. And then the other one, on the bio side, as we've gotten more involved in bio and in healthcare, is the rise of the deep domain expert in a science like biology. So somebody might come in with, you know, a biology PhD or chemistry PhD or something, where, you know, 10 years ago, if you met a newly minted biology PhD out of Stanford or MIT, they really wouldn't know that much about computers. It was kind of an afterthought, and they wouldn't really know, you know, they'd know a little bit how to code, but not much. And now you meet a newly minted biology PhD out of Stanford, and they basically have a dual PhD in computer science. Like, you know, it's kind of the story: they've been programming since they were little kids in most cases, and then they kept current. And in fact, a lot of the research that they did to get their bio PhD often had to do with computer science and math and algorithms and machine learning, right? A lot of that stuff's happening now. And so you've got these kind of dual-discipline founders for the deep science stuff. So dual discipline: biology and CS, mechanical engineering and CS, physics and CS, chemistry and CS. And those people are, like, super enticing, 'cause, you know, it's like two superpowers. Like, you know, he or she can fly and they're invulnerable. Like, it's a really good combination. And so a lot of the companies we're seeing on the bio side and in the harder sciences have that kind of founder. I think that's new.

– Yeah.

– Or, like, Satoshi: economics and computer science.

– Actually, it's funny, there's a revolution happening even apart from crypto in the field of economics. In the actual academic field of economics, there's a revolution happening towards what's called empirical economics, quantitative economics, in which a lot of the new economists who are kind of 40 and below are very focused on data and machine learning.
And, you know, the economists that are in their fifties and sixties were more inspired by physics, and they're much more into formulas, and they're much more abstract about what happens in the real world. And so there are actually companies increasingly being started by, or, you know, new inventions being created by, economists with a very strong CS background.

– That's true. That's true. So, from the beginning, I'm not sure if this was intentional or not, but certainly the positioning in the press was that this was different, like, you're going to blaze a new trail, kind of a different model. And right out of the gate, there was the investment in Skype, which was not a thing that VCs normally did. How intentional was that?

– Yeah, a lot of it was intentional. So there was a big kind of throwback element to what we were doing, which was basically: we wanted to work with the best founders to build the most important companies. And we could talk a lot about kind of the history of venture and the art and science of the whole thing, but there is a rich tradition there that we definitely drew from. And Ben and I, leading up to this, actually spent a lot of time really digging into the history of kind of where all this stuff comes from, and kind of what made it work across various areas. And so we were, you know, very inspired by a lot of the people who came before us. And then there were a bunch of new ideas. And so, you know, maybe I'll list several of the new ideas. One of the new ideas was just that we thought that a lot of the venture firms had lost their way, in the sense that they had been started by founders and operators who had built businesses. And so you would, you know, raise money from a firm and you would get somebody who'd been a CEO or general manager of an important business on your board, and they could really help you figure things out. And then over the years, a lot of the venture firms just, quote unquote, professionalized, and they ended up with a lot of GPs who didn't have that experience. And so the advice started to get more abstract and maybe less helpful. So that was one difference. Another difference was, you know, we took seriously the idea of building the institution, and in particular, around that, building a network and an ecosystem, and making a real long-term, systematic, and actually very costly investment in building that network. And that had to do with the fundamental staffing model of the firm. That's why we have all these operating functions, we have all these professionals here. And then that rippled over even into things like compensation. Like, we get paid very differently than most VCs. And so that was a big difference.
The Skype deal you alluded to: we did start at the very beginning with the idea of being stage agnostic, and that was probably a pretty new idea at the time. And the idea there, basically, was that if the priority is to work with the best founders to help build the most important companies, it shouldn't matter that much what stage the company's at. I mean, ideally you'd like to start working with people early, but, like, you know, you make mistakes, and so you miss things. And we were starting from scratch, and so there were a bunch of companies we thought were very impressive that we wanted to get involved in, you know, even though they already existed. And so we kind of had this idea that there should be multiple entry points from an investment standpoint, and then there would be things that we could actually do to be able to help and work with the entrepreneur. And so we kind of put ourselves in business from the beginning to operate across all the stages. You know, now, that took time. I would say it took time for us to get good at all the stages, and maybe we're still working on it. But that core fundamental idea was in there.

– You referenced looking back at history for some context and thinking about the investment theses. For us, 10 years doesn't seem like that long ago, but then we hire people who have, like, three or four years of experience, which means that they weren't even in college, sometimes, 10 years ago.

– Yeah, we experienced that a lot.

– Increasingly frequently, yeah. But that's interesting, 'cause that was a very particular moment in history that I have vivid memories of: the 2008 crisis, and then, like, almost immediately after that, the idea that we were in a tech bubble, and kind of the whiplashing back and forth. What was that like?

– Well, you know, it was interesting when we started. We, of course, had been through the actual tech bubble.

– The 2000...

– 2001 tech bubble, yeah. You know, the 2008 crisis was a banking crisis, not a tech crisis; it was a debt crisis, not an equity crisis. So it was very different in nature for startups. So a big advantage we had was it really did not bother us at all. And so walking in, one, like, everybody said, you can't possibly raise a fund in 2009, you can't raise a venture capital fund. So only two new funds were raised that year, us and Khosla, and because we, you know, didn't know any better, we were like, come on, like, this isn't bad. This is, like, not bad at all. And then, like, the other thing that really helped us was just the grouchiness of a lot of the other VCs. I remember, and he'll go unnamed, this venture capitalist, 'cause I remember it so well, 'cause I ran back to tell Marc. You know, he asked me, he's like, well, like, what are you interested in? And I was telling him about Okta and how, like, important it was gonna be in a SaaS world and so forth. And he gave me, like, a 30-minute lecture that SaaS was a bunch of BS.
It was never gonna happen, the only successful SaaS company was Salesforce, and the only SaaS company there was ever gonna be was Salesforce, and all this stuff. And so it was just like that. Like, everybody was just, like, pretty grouchy and, like, crusty, and we were, you know, new. So we were excited to invest in all that stuff.

– So thinking back to that point, I don't remember the exact year that everyone started. Obviously, my company started in 2009. I think Airbnb was, like, maybe that same year or a year later. Some things had happened a couple of years earlier, like AWS came out in 2006, the iPhone nominally in 2007, or really 2008, but a lot of those things hadn't actually picked up. So certainly my recollection is none of that was really visible at that time. But we were at what, in retrospect, was an incredible inflection point. To what degree do you think that worked to your advantage? 'Cause here you are starting a fund, people are skeptical, but meanwhile there's these massive secular trends which were pretty much invisible to everyone at that time.

– I think there are two things that are true. So one is, the big secular trends do drive a lot of what happens in this industry. And so, like, it is absolutely the case in the last decade that mobile and cloud in particular, like, drove just giant growth, and social as well. So they made a lot of people look like geniuses, including a few actual geniuses. So that's also true. But I think it's also true, and what Ben mentioned is, like, super important to underline, that people don't think these things are obvious in the beginning. Like, they're only obvious after the fact. They really don't look that obvious in the beginning. And so, like, you mentioned the iPhone came out. Like, I remember the iPhone in 2009, it was, like, a cool gadget, but, like, it couldn't hold a phone call. Like, that was the era in which, like, it wasn't even, was it on 3G at that point? It was like a big...

– No, it wasn't, it was pre-3G.

– The original iPhone was actually pre-3G, right? It had, like, an EDGE data connection, and then the 3G iPhone was a big upgrade, but then you couldn't hold a phone call on the thing. And that was in the era when Steve was telling people that you were holding the phone wrong.

– Yeah. – If it was dropping phone calls, you need that.

– And they built that crazy room to prove it to the journalists.

– Exactly, and then they shipped everybody the bumper, right? And so it's like, okay, to squint from that to, like, okay, now it's the mobile boom of all time, and we're gonna be sitting here 10 years later, and they're gonna have, you know, whatever it is now, a billion and a half of these things, you know, in the field, and it's gonna be kind of the defining, you know, device and interface for a generation. Like, that wasn't super obvious.
And then cloud, you know, there were lots and lots of companies of that era that were incredibly powerful, that were in the server business or networking business or storage business or software business, where this whole cloud thing, like AWS: it's a toy, it's a gimmick, like, it's never gonna make any money. It is really interesting. There is a big leap that has to happen even when they are the really big megatrends. Like, I had this in the early days of the internet: a lot of people in the early '90s, a lot of people in the press, a lot of people in the investment community, a lot of entrepreneurs...

– Well, the entire, actually, you should tell the story about when you went to raise money, you and Jim went to raise money from all the magazines and the newspapers.

– Yeah, so we started Netscape in early '94, and we went out and pitched all the media companies to become customers, partners, investors. And every single big media company, in fact, at that point, said: no, no, the future is AOL, because AOL pays us for our content, and on the internet, we have to spend money to put our content up, so that's never gonna work. And of course they all knew that normal people wouldn't use the internet if it didn't have Time Magazine on it. Right, 'cause Time Magazine would obviously be the killer app for the internet. You know, and by the way, it's like, a friend of mine says this thing is happening, it's going to fundamentally change the world, and people poo-poo it. Like, that might actually be the logical response, because there are many new things that come along where people claim they're going to change the world, and then most of those things don't change the world. And so maybe, you know, on average, the correct response is: no, this thing is stupid. And then maybe our lot in life as founders and VCs is to, you know, be the fringe element that, like, bucks that.

– Who's wrong 97% of the time.

– Right, right, exactly. And by the way, you know, we looked dumb a lot of the time, except, you know, during the times we looked, you know, really, really smart. And, you know, the reality is we're probably neither, right? We're probably neither super dumb nor super smart. We're probably just willing to take the risk at a time when other people aren't.

– So let's fast forward a couple of years, kind of like the middle era: 2012, Facebook went public. And I think, I don't remember what year that happened, but we started talking about unicorns, and there was another round of "this is a bubble." What was that like? And what do you remember about the investment discussions, like, basically the partner meetings after I would leave the room and the debate was happening? How much did "is this too expensive" factor into the conversations?

– Yeah, you know, we have, in the entire time, and I can say we're totally consistent on this,
and I think it's because of our history, never thought it was a bubble in our entire time doing the job. And a lot of it is, look, prices of companies are always incorrect, like, always, always, always incorrect, because they're valued on, like, future performance, which nobody knows what that is. So when most people are optimistic, the prices go a little higher. When most people are pessimistic, the prices go a little lower. But to get to a bubble, everybody's got to be optimistic. And that's what happened kind of in the '99, 2000 era. And, like, in our whole time, we never saw anything close to that. Like, prices never went anywhere within an order of magnitude of what they did in '99, 2000 for similar kinds of companies. And so we were always: no, there's no bubble. Like, what are you talking about? There's no bubble. But people want to believe there's a bubble so badly. I think in 2011, I was in a debate with Steve Blank in The Economist, and I argued it wasn't a bubble, and he argued it was, in 2011, a tech bubble, right? And at the end of the debate, he called me and he said, Ben, like, I voted for you. You won. The Economist readers voted 78% to 22% for him. 'Cause, like, that's how much people wanted to believe we were in a bubble.

– They still do.

– Yeah, there's a, I'm not sure if it's quite a cognitive bias, but I feel like there is a predisposition that a lot of people have to take the cynical bet. Somehow that seems smarter, 'cause either way, there's a payoff, and the payoff, if you said that's bullshit, and then it turns out you are right, seems greater to people than the opposite. And also there's a little bit of, you can't prove a negative: Popper, falsifiable hypotheses and stuff like that.

– Plus you get the victory as the most smug person in the room, too.

– Yeah, so you're generally betting against the cynicism in your business. Is that, like, something you can actually take advantage of? Or is that something that works in your favor, that cynicism?

– Oh, 100%. In fact, like, we have this thing that our friend came up with, which is the East Coast, West Coast arbitrage, which is: anything that the people on the East Coast think is ridiculous and a toy, and people on the West Coast think is the next big thing, that's the thing to bet on.

– And we said, you just keep flying back and forth.

– Yeah, and you find out what those things are, and then you just invest all your money in that.

– That really suggests a question for today. There are some big things happening today that aren't obvious. What kind of energy do you put in to find that? Are there, like, specialist researchers? Is it just every partner's kind of contribution? How much time do you spend looking for what isn't obvious today, but will be obvious in retrospect?

– Yeah, I think that ends up being, like, half the job: trying to understand, like, what is the next big platform? Where are things going?
What's going to be the user interaction model after the iPhone? That's a big open question right now. Like, what's the next platform? What is AI really going to mean? Is this whole crypto thing real? These are all open. Like VR and AR: at what point? It's hard to imagine that, 20 years from now, it's not working. So, like, how many years from now will it take?

– Yeah, and these are the kind of fundamental questions, always, in the venture capital business, I think.

– And you might add genomics, you might add CRISPR, you might add synthetic biology.

– Absolutely.

– Those are three of the big new frontiers on the biological front. They all have that characteristic.

– CRISPR seems like one to me that is going to have a really dramatic impact. Obviously there's a big moral debate to be had, and a policy debate to be had. How do you take something like that and try to look for the opportunities?

– Yeah, so the big thing is we default into thinking: okay, this is going to happen. We don't spend a lot of time on: okay, will this happen? Like, is this going to be a thing? In fact, one of my things, and I try very hard at the firm, is to actually kind of prevent us from having the discussion of, like, okay, is this going to happen? It's more a question of, like, okay, let's assume it does happen, right? And so then there are kind of two really critical questions that follow from that. The first is, like, okay, if it does happen, then where does it go? And the financial version of that question is what we call "how high is up," which is, like, okay, how big could it get, right? Which is a very interesting question for venture capitalists, because if you make an investment in something, even if it happens, if it turns out to be small, then it's still not worthwhile. And so you're looking for the things that could get really, really big. And then you're obviously looking for the founders, and looking for the specific ideas and applications, and you spend a lot of time on that. The other thing you think a lot about in this business is timing. And so, like, my observation is basically that everything happens. Like, in my entire history in this industry, at least for 25 years, basically everything that people said was going to happen happened at some point, up to and including online pet food delivery. Like, it all actually happens.

– All the things they made jokes about in all the early-2000s movies are all actually...

– They're all actually happening. But it's like, okay, when is it going to happen? And, like, is it ready now? You know, is it going to be on this cycle? And then how do you bet on this? Like, sometimes these things take three, four, five cycles, right, for the founders to really figure things out, for the technology to kind of fall into place. So it's kind of like, what are the exploratory bets? How are you kind of vetting whether the stuff is real?
And then there's kind of a multi-dimensional question, kind of to your point, of, like, okay, is the technology ready? And then you've got to kind of cross that with, like, do you think the market's ready? Like, do you think people are going to want this? And then you might have to cross that with other issues; in CRISPR, you might have to cross regulatory issues, like, are the regulators going to buy into this? And so that's where I think you have to kind of explore as you go, right? Which is, you have to kind of feel your way through it. Like, it's often not the first company in a category that ends up being the winner, right? Well, this is a Peter Thiel-ism that we quote a lot: it's not the first company that gets all the money, it's the last company in the market that gets all the money, right? In other words, it's the company that actually takes the market, right? It ends up actually being the dominant company and forecloses the opportunity for there to be new startups behind it. And so sometimes that's a pioneer, sometimes it's not. And so you're kind of having this constant discussion about timing. And then, at the end of all that, we kind of try to park that to a certain extent and just start talking to entrepreneurs and figure out: who is the person who's got this the most decoded? How much time and effort has that person put into it? How qualified are they to pursue this? What's their personality? And can they build a company around it?

– Yeah, that's where I was gonna actually go right there, because, knowing you as I do, I can't imagine it's ever: we have a thesis that this thing is going to work out, now we're going to look for the entrepreneurs who are doing this thing. It's much more gonna be the confluence of who you're meeting, who you get to talk to, what people are up to, and these background theses that give you the opportunities.

– Yeah, that's right.

– Yeah, I think that's right. And actually, I mean, I think for a while it always, like, started with the entrepreneur, in the early days of it, just 'cause it was only the two of us and we couldn't cover all the spaces in enough depth to do it any other way. But the other thing, kind of related to that, is that the platforms that we think are getting proximate from a timing standpoint are the ones where, like, the smartest entrepreneurs are all working. So if we see 20 genius entrepreneurs all working on crypto, that makes us pay attention, for example.

– Right. So, Marc, you've talked about five-year cycles in tech. Is that something that you think is a good way to imagine what's going on, or to picture it in context?

– So I would say there's two big cycles. It's hard to, you know, this stuff is just, these are just general frameworks, and so they vary a lot in practice, but there are two big general concepts.
And so, like, I would say one is the big technology changes, like the ones we've been talking about: they're generational changes. Like, they're quite literally human generational changes. So the typical cycle in those is, like, 25 years. And the reason, literally, is because a lot of the time, the people who are in positions of power and decision and influence when the new thing comes out just will not accept it. They won't accept it, they won't adapt to it, they won't recalibrate to it. And fundamentally they need to age out of the cohort that has the power, right? The purchasing authority and all the decision-making authority and all these other things. And then you need a new generation that, like, takes the stuff seriously, 'cause they grew up with it, right? And you need them to age into the cohort, right? So it's literally a generational turnover. And that's why you get these things: you'll see some of these new things that'll grow for 25 years. And I'm convinced, like, a big part of it is just simply that generational effect. So that's the good news: like, when they work, you can have literally decades of growth off of enormous skepticism from day one. You know, the bad news is, as you're well aware as a founder, no individual company gets 25 years, right? To prove something, right?

(laughing)

Nope, in fact, something well short of that, let's say. And so, like, our basic mental model is a company on average gets maybe five years to prove something, to prove the hypothesis. Like, if you're a top-end founder and you're super credible, you could probably raise the seed round, you know, series A, series B, you can get yourself five years of runway, you can get engineers and some product people to sign up for that, and you can prove it or not. But after five years, if it's not working, like, you start to have a problem, or, I should say, you have two problems. One is you start to have a morale issue, where people start to lose faith and they spin off and go to other things. You can also end up with an architecture issue, right? Which is, like, even if you're right, even if it starts to happen, you're built on the prior architecture, right? And so, you know, imagine being a mobile developer that started in, you know, 2002, right? Even if they were still around when the iPhone came out, you know, they'd built their entire, you know, system on BREW and, you know, Java and all these technologies that were now archaic. And so you kind of have this aging-in-place thing that happens. And so each company kind of has a five-year shot. So then what happens is...

– There are exceptions.

– Yes, there are particular founders who can get through this, but it does tend to be the exception.
0:22:55 And so, but, but then you think about it, 0:22:56 then there’s sort of the psychological thing 0:22:57 that happens as a consequence, 0:22:59 which is if the founder starts the company 0:23:00 in the first cycle, runs for five years 0:23:02 and it doesn’t prove the hypothesis, 0:23:03 that founder usually ends up 0:23:06 so bitter about the whole experience 0:23:07 that they become cynical about that category 0:23:09 for the rest of their lives, right? 0:23:11 And then, and then somebody else in sort of, you know, 0:23:13 cycle two, generation two, three, four does figure it out. 0:23:15 And like, you know, that, by the way, 0:23:17 I’m speaking also out of my own experience, 0:23:18 like talk about upset, 0:23:19 the fact that you couldn’t get it to work 0:23:22 and this other person did, it’s just like absolutely maddening. 0:23:23 And that’s just human nature. 0:23:25 The part of it that really bites the VCs 0:23:27 is the VCs do that too. 0:23:29 If you as a VC make a bet and go on a board 0:23:31 and you’re in the board meetings for five years 0:23:32 and it doesn’t work and the company shuts down 0:23:36 and then a new kid shows up, you know, three weeks later 0:23:38 and says, hey, I’ve got an idea. 0:23:40 Why don’t we do that, right? 0:23:41 And of course, what really makes you frustrated as a VC 0:23:43 is that kid half the time isn’t even aware 0:23:44 of the previous failed experiments 0:23:47 ’cause like they literally weren’t paying attention, right? 0:23:49 And so what happens is actually 0:23:52 the VCs will actually freeze themselves out. 0:23:54 And so, and it’ll be VCs who are much more naive 0:23:56 and much less aware of the previous failures 0:23:57 that will actually make the bet. 0:23:59 And so that puts you in this very weird spot. 0:24:00 If you think about being a VC 0:24:01 or running a venture capital firm, 0:24:03 which is you would like to say that the person 0:24:04 who knows the most about the domain 0:24:06 is the person who should make the investment decision. 0:24:07 But it may also be the case, 0:24:09 the person who knows the most about the domain 0:24:10 has the most scar tissue 0:24:11 and has the most fouled-up psychology. 0:24:14 And so a big part, exactly to your comment, 0:24:16 like a big part of this job, just like being a founder, 0:24:18 is like you have to suppress your natural instincts 0:24:20 to get bitter and resentful and envious and upset. 0:24:22 And it goes to even a more fundamental question is, 0:24:24 like, can you learn lessons, right? 0:24:26 Like what do you learn, like in this business, 0:24:27 what do you learn from a failure? 0:24:29 And maybe the answer is you should learn a lot 0:24:31 from a failure ’cause those are all hard-won lessons 0:24:33 and maybe the answer is you should learn absolutely nothing. 0:24:35 Maybe all the lessons are wrong, so. 0:24:37 – Yes, it is a lesson to just, that didn’t work. 0:24:38 – Yeah. 0:24:41 – That’s something that I spent a lot of time trying to, 0:24:44 to convince people on the team of, that 0:24:46 it’s the Wright brothers or Thomas Edison. 0:24:49 It’s just every day, what’s the best idea we got? 0:24:50 That didn’t work. 0:24:51 All right, what’s the next best idea we got? 0:24:55 And the characterization of celebrating failure 0:24:58 sometimes misleads people to characterize that as a failure.
0:24:59 ‘Cause if this plane didn’t fly 0:25:00 or this light bulb didn’t light up 0:25:02 or it blew up or whatever, is that a failure? 0:25:04 Or is that just the process of getting to the light bulb 0:25:06 that works, that they are playing that flies? 0:25:09 – Yeah, Edison tried 3000 compounds, I think, 0:25:10 for the light bulb. 0:25:11 – Yeah. 0:25:11 – Before he figured out the filament. 0:25:15 – That is a level of persistence I would like to hire. 0:25:18 So Mark, you just mentioned a little bit of like control 0:25:20 over your own emotions or your reactions, the cynicism. 0:25:22 Ben, that’s something that you talked a lot about 0:25:23 in the hard thing about hard things, 0:25:24 just like the power to overcome. 0:25:26 To what, I mean, obviously you’ve seen a lot of this, 0:25:29 you’ve experienced it yourself, to what degree 0:25:32 do you think you’ve been helpful and let me just say, 0:25:34 you have been helpful to me personally 0:25:36 in helping entrepreneurs through some of that. 0:25:38 Whether it’s the overwhelming emotional reaction 0:25:40 to a bunch of good stuff happening 0:25:42 or a bunch of bad stuff happening. 0:25:44 – Yeah, so I think, I mean, that’s probably 0:25:47 that the number one kind of consistent thing 0:25:49 that I get back on that book is like, 0:25:52 you know, what I’m feeling is so intense 0:25:55 and there’s nobody to talk to about it. 0:25:58 And then the book kind of goes like, 0:26:01 this is what it feels like, this is what it looks like. 0:26:05 And I think that just knowing that you’re not 0:26:07 the stupidest entrepreneur of all times 0:26:10 is like really valuable. 0:26:11 And it’s something that I always wish that I had. 0:26:13 It’s a lot of the reason I wrote the book 0:26:17 ’cause I used to go around, you know, and I, you know, 0:26:21 and I was like in the kind of horrible period of 2001 0:26:24 and I would talk to other founders and I’d be like, 0:26:25 you know, how’s it going? 0:26:27 And they’d be like, it’s amazing. 0:26:30 I can’t, this is the greatest experience of my life. 0:26:32 And I’d just be like, wow, I am like 0:26:34 the stupidest motherfucker of all times. 0:26:37 Like, ’cause my business is in horrible trouble. 0:26:38 And it just seems so bad. 0:26:40 But then like, you know, as because I’ve lived long enough 0:26:43 to see like most of those guys went bankrupt. 0:26:44 So they were all going through it. 0:26:45 I was going through. 0:26:46 They just like nobody would tell each other 0:26:48 ’cause it’s so embarrassing. 0:26:50 So it was, you know, one of those, those kinds of things. 0:26:53 But it’s been, you know, it’s been great help to me 0:26:57 in the work because when I sit down with an entrepreneur, 0:27:00 they go, okay, yeah, I know, you know what I’m talking about. 0:27:03 So we can talk about the real like horrible shit 0:27:06 and not just, you know, the happy stuff. 0:27:08 – I feel like there’s an obligatory question here 0:27:10 to talk about the things that you guys tried 0:27:12 that didn’t work and the failures. 0:27:14 But before we get there, just on the way, 0:27:15 there’s a bunch of things that obviously did work. 
0:27:20 So building up a bunch of capabilities in the firm 0:27:21 and in the partnership with something 0:27:23 that is now pretty widely emulated, 0:27:26 like having those services that the companies can call on, 0:27:28 being a little bit more stage agnostic 0:27:32 and even like industry technology vertical agnostic 0:27:35 has been something that’s worked out really well. 0:27:38 When you look back before we get to the failure question, 0:27:40 what do you think the best decisions you made are 0:27:42 in the way that you set up the firm? 0:27:44 – Yeah, so it’s a great question. 0:27:49 I think, you know, at the core, I think just this belief 0:27:54 that a venture capital firm has got to be able 0:27:57 to help the technical founder grow into a CEO. 0:28:00 It’s just so, you know, in retrospect, 0:28:02 that turned out to be profound 0:28:04 and just differentiating and important. 0:28:06 And it really is the work. 0:28:08 So when we think about what do we do 0:28:11 and why are we here, that’s it. 0:28:13 And then kind of backing that up, 0:28:15 the thing that helped us the most 0:28:19 is just taking what Michael Ovitz had done at CAA. 0:28:22 And that jump started us, I mean, 0:28:24 we probably saved five years by copying his model. 0:28:27 So that, and I can’t even believe how well it worked, 0:28:29 like every aspect of it worked. 0:28:32 So, you know, those are probably the two things. 0:28:34 – What was it from Michael Ovitz that you were copying? 0:28:35 – He has censor it in the book. 0:28:36 And so there’s a great book. 0:28:38 – Yeah, he finally revealed some of it. 0:28:40 – Some of it’s in the book, not all of it’s in the book, 0:28:42 but it’s the book is called “Who is Michael Ovitz?” 0:28:45 And it’s a highly entertaining book and we recommend it. 0:28:46 I mean, it’s actually really funny. 0:28:48 He kind of sat down and described the whole thing. 0:28:50 And it basically was this idea of, you know, 0:28:51 we’re not just going to be a collection of individuals. 0:28:53 We’re going to be an actual true team. 0:28:55 And then it’s not just going to be the principles. 0:28:57 It’s going to be an entire system, right? 0:28:58 It’s going to be an entire operating platform, 0:28:59 an entire infrastructure. 0:29:00 It’s going to be professionals 0:29:02 across all these different domains. 0:29:04 And, you know, we’re going to build this 0:29:06 enduring long run network that’s going to, you know, 0:29:08 it’s just going to constantly compound year after year 0:29:09 and build more and more value. 0:29:11 And then the next client comes longer, 0:29:12 the next, you know, founder comes along 0:29:14 and they can plug into this entire system, you know, 0:29:15 that’s been built, you know, 0:29:17 and we’ve been building the system now for a decade. 0:29:19 And so, you know, a new founder who works for this today, 0:29:21 like they’re walking into a, basically a system 0:29:22 that’s been built for a decade. 0:29:23 So then he said, basically what happens is 0:29:24 then it’s compounding advantage, 0:29:25 which is every year that goes by, 0:29:27 you just get more and more differentiation 0:29:28 off of the status quo. 0:29:30 He was competing at the time with William Morris, 0:29:31 which was this huge talent agency. 0:29:33 And it’s like, well, why wouldn’t they just copy you? 0:29:34 And he’s like, well, they’d have to vote themselves 0:29:36 big salary cuts, right? 0:29:38 Like they’re paying themselves all the money right now, right? 
0:29:40 And so they had to go hire, you know, 0:29:41 100 people to go do the stuff that we’re doing. 0:29:42 They’d have to like, 0:29:43 they’d have to free up that money from something. 0:29:45 So they’d have to vote themselves giant salary cuts. 0:29:47 And he’s like, they don’t like each other. 0:29:48 Like they don’t get along to start with. 0:29:50 And so imagine getting into the room. 0:29:50 And they’ve all got like, you know, 0:29:52 they’re like very successful people. 0:29:53 They’ve all got very high personal burn rates, right? 0:29:54 They’ve got all kinds of hobbies, you know, 0:29:56 they’ve got vineyards and yachts and all this stuff. 0:29:59 And so, you know, they’re going to now decide 0:30:01 to give themselves an 80% pay cut, right? 0:30:03 To compete with the startup, like no chance. 0:30:05 And so anyway, that was his explanation. 0:30:07 – Yeah, that’s been a long lasting advantage. 0:30:08 I think so. 0:30:10 – Yeah, but yeah, that continues. 0:30:11 As it actually turns out, 0:30:13 that didn’t just happen in the talent agency business. 0:30:16 What I discovered doing more research after that was that, 0:30:18 it was also exactly what happened for law firms. 0:30:20 It’s exactly what happened management consulting firms. 0:30:22 It’s what happened to ad agencies, accounting firms. 0:30:23 And then also investment banks, 0:30:25 private equity firms, hedge funds. 0:30:26 All these other industries have gone 0:30:27 through this transformation. 0:30:30 They basically professionalized and upleveled. 0:30:31 And it just happens that the venture industry 0:30:33 is doing that now. 0:30:34 And in fact, I think at this stage, 0:30:36 like it’s beyond just us. 0:30:36 – There’s something else there 0:30:38 that I actually hadn’t really realized, 0:30:40 but maybe it was implicit the whole time, 0:30:43 is CA’s investment was to make the people 0:30:44 that are representing more successful. 0:30:47 Smart idea, given that you got points 0:30:49 on their success is a little bit of the same thing. 0:30:51 ‘Cause you can make an investment decision 0:30:54 that it’s just like I’m gonna buy some copper futures 0:30:56 or oil or something like that. 0:30:59 And I can’t do anything about to make oil more valuable 0:31:01 for people or copper more valuable. 0:31:02 I invest in the startup 0:31:03 and there’s a ton of stuff I can do. 0:31:05 I have my connections and I know that personally 0:31:08 I benefited from being able to call on both of you 0:31:11 from John O’Farrell who joined our board, 0:31:14 from Margaret, from Jeff Stump on recruiting, 0:31:16 from the whole team running the EBCs. 0:31:18 There’s more than I can mention here. 0:31:20 And I think that has, 0:31:24 I don’t know what the ROI for you is on that, 0:31:26 on top of the dollars that you put in, 0:31:29 but I would say it’s probably 70% of the value to us 0:31:31 and 70% of the value created came 0:31:33 from that additional support beyond just the money. 0:31:36 – Yeah, and there’s also this little knock-on effect 0:31:37 which you appreciate, I’m sure, 0:31:42 which is part of the trouble with an inventor becoming a CEO 0:31:46 is you just don’t feel like a CEO. 0:31:48 You don’t know the people who CEOs know, 0:31:50 you don’t know how to do the things CEOs know how to do. 0:31:54 And so a lot of what you get out of the firm is, 0:31:55 no, I’m a CEO. 0:31:57 Like if I need to know how to do it, I’ll call Margaret. 0:32:00 Like, you know, I can do that, I can step up. 
0:32:03 And so like that’s a lot of what it conveys 0:32:06 at the end of the day is just like that confidence 0:32:08 which is often that little difference 0:32:10 between being able to stay in the job 0:32:13 and having to raise your hand and tap out. 0:32:15 – All right, so I promised that I was gonna ask 0:32:20 about any dumb decisions, mistakes, failures along the way. 0:32:21 What do you got? 0:32:22 – We haven’t made any. 0:32:24 I don’t really know why you would even ask that question. 0:32:26 There’s obviously nothing to talk about. 0:32:29 So we have a very specific philosophy on that 0:32:30 and the book I’d really recommend. 0:32:32 We were lucky enough to have her in the podcast a while ago. 0:32:34 So Annie Duke wrote a book called “Thinking in Bets” 0:32:36 where she talks about basically what is the nature 0:32:39 of a mistake in a probabilistic domain, 0:32:41 you know, with uncertainty of outcome. 0:32:44 And she uses the term in the book “resulting.” 0:32:46 It’s basically the process of looking at a bet 0:32:48 that was made in a probabilistic domain 0:32:49 that did not pan out. 0:32:51 And then concluding that that was a mistake 0:32:53 as compared to a bet that didn’t pan out. 0:32:54 And so basically what she says in the book, 0:32:57 she says the book is basically resulting 0:32:59 is the root of all evil if you’re in a probabilistic business 0:33:00 ’cause you will learn the wrong lessons 0:33:03 and you’ll torture yourself mentally to death by doing that. 0:33:06 And so she says the thing to do is basically 0:33:08 to very clearly separate in your own mind process 0:33:09 and outcome, right? 0:33:11 So you’re in a probabilistic domain, 0:33:11 you don’t know the outcome 0:33:13 of any particular bet ahead of time. 0:33:15 And so you need to design the best possible process 0:33:18 to generate the best possible set of outcomes over time. 0:33:21 And then basically when things go quote unquote wrong, 0:33:23 you don’t second guess the outcome. 0:33:24 You go back and you just make sure the process 0:33:26 was as good as it could have been. 0:33:28 And so, you know, from the outside, 0:33:30 the mistakes of venture firm makes are always like, 0:33:31 well, what’s the investment that you didn’t make 0:33:33 that worked or the investment that you made 0:33:34 that didn’t work inside the firm? 0:33:35 What we try to do is say, okay, 0:33:37 what have we done well in our process 0:33:39 and how can we improve our process? 0:33:41 You know, that’s a much more boring topic to talk about 0:33:42 ’cause it has to do with things like meeting structure 0:33:44 and memo documents and, you know, research 0:33:46 and due diligence and all these topics. 0:33:49 But that is the actual answer to the question, which is, 0:33:51 you know, when we started with a, as Ben said, 0:33:53 we started with a relatively lightweight process 0:33:54 on investments because it was just Ben and me 0:33:57 and then over the time we’ve evolved 0:34:00 to a much more rigorous process. 0:34:01 Today we do a lot more work on the investments 0:34:03 than we used to. 0:34:05 And then the other side of that is to try to keep all that 0:34:06 work from preventing us from making 0:34:07 the controversial investments. 0:34:08 – Getting people to that position. 0:34:11 And I wish I could remember who gave me this analogy, 0:34:14 but if you got super drunk and then you drove home 0:34:16 and you didn’t have a crash, 0:34:18 it’s not that that was a good decision. 0:34:18 The result was good. 
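Annie Duke's distinction between process and outcome is easy to make concrete with a tiny simulation. The sketch below is purely illustrative (the probabilities and payoffs are invented, not from the conversation): it shows a bet with positive expected value that still loses a large fraction of the time, which is exactly the situation where "resulting" would mislabel a sound decision as a mistake.

```python
import random

# Toy illustration: a positive-expected-value bet can still lose often.
# Judging any single losing outcome ("resulting") would call the process
# a mistake even though taking the bet was the right decision.

def simulate_bets(p_win=0.6, payoff_win=1.0, payoff_lose=-1.0, n=10_000, seed=0):
    rng = random.Random(seed)
    outcomes = [payoff_win if rng.random() < p_win else payoff_lose for _ in range(n)]
    expected_value = p_win * payoff_win + (1 - p_win) * payoff_lose  # +0.20 per bet here
    loss_rate = sum(o < 0 for o in outcomes) / n                      # ~40% of bets still lose
    realized_avg = sum(outcomes) / n
    return expected_value, loss_rate, realized_avg

if __name__ == "__main__":
    ev, loss_rate, realized = simulate_bets()
    print(f"expected value per bet: {ev:+.2f}")
    print(f"share of individual bets that lose: {loss_rate:.0%}")
    print(f"realized average over many bets: {realized:+.2f}")
```

Over many repetitions the realized average converges to the expected value, which is the sense in which judging the process rather than any single result is the only thing that scales.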
0:34:21 And we really, it’s a tough thing to build structures 0:34:23 inside the company to celebrate. 0:34:26 That was a great idea to change the homepage 0:34:28 so that whatever, and it turned out that it was wrong, 0:34:30 but you still got a bonus. 0:34:32 You get a bonus, you get a promotion, 0:34:35 you get recognition, even though it didn’t have the result 0:34:37 ’cause people do get super result fixated. 0:34:40 – Yeah, Bezos says a real good thing where he says, 0:34:42 "We rate people on the inputs, not the outputs." 0:34:45 – One of the last questions I wanted to get to 0:34:47 is the nature of the entrepreneurs. 0:34:48 Now it’s been long enough. 0:34:51 You’ve seen some two-time, three-time entrepreneurs 0:34:53 and you’ve backed some of them. 0:34:55 What kind of difference do you see between that, 0:34:56 the first time and the second time? 0:34:59 – So for top-end venture, basically the rule is 0:35:00 if you’re a first-time founder, 0:35:01 to get funded by a top-end venture firm 0:35:03 as a first-time founder, you have to have something working. 0:35:04 You have to have a product. 0:35:06 You have to have some sort of product-market fit. 0:35:08 Google.com already existed. 0:35:09 Facebook already existed. 0:35:11 Airbnb already existed when they raised money. 0:35:13 So that’s the general pattern. 0:35:15 And then the question of the first-time founders is, 0:35:16 do they know what they’re doing, right? 0:35:19 Can they then do the job of being a founder CEO 0:35:22 of a scaling company and some can and some can’t? 0:35:24 The second-time founders are like a huge relief 0:35:25 to deal with on the one hand 0:35:27 because it’s like, okay, they’ve been through it before. 0:35:28 Now they know what they’re doing, right? 0:35:30 They’ve got some experience and some gray hair 0:35:32 and some operational experience. 0:35:34 The problem with the second-time founders is 0:35:37 they can raise money before they have something working. 0:35:40 And then there’s this question of like, okay, what’s the idea? 0:35:41 And then we talk a lot about like, 0:35:44 is it an organic idea or is it like a synthetic idea? 0:35:45 Was the process, I have a great idea, 0:35:46 therefore I’m going to start a second company, 0:35:48 or was the process, I want to start a second company 0:35:50 and therefore I have to come up with an idea. 0:35:52 And what you often find is they want to start 0:35:55 a second company so badly that they come up 0:35:58 with basically a fragmentary idea, partial idea. 0:36:00 It’s a conceptually interesting idea, 0:36:02 but with nothing underneath it. 0:36:03 And we have this other concept 0:36:06 we use called the idea maze, which basically is the process 0:36:08 that a founder uses to figure out what the actual idea is, 0:36:09 which is like a hundred-step process 0:36:11 to work your way through all the different permutations 0:36:12 of the idea before you actually finally figure out 0:36:13 the real thing. 0:36:15 And the second-time founders often just haven’t gone 0:36:17 through the idea maze, but it’s really bizarre as a VC 0:36:19 because it’s like, here’s this founder who you love 0:36:21 and like they’ve showed incredible persistence 0:36:22 in the last company and like you so badly 0:36:24 want to work with them again and you can just tell, like, 0:36:26 for them the idea has almost become interchangeable. 0:36:28 Right, and it’s just like, that’s a super bad sign. 0:36:30 And so that’s what tortures you on those.
0:36:32 – Yeah, from the entrepreneur side, I can tell you, 0:36:33 having a whole bunch of money does take away 0:36:36 a very critical forcing function, which is like, 0:36:38 I’m about to run out of money, I better figure this out. 0:36:41 So we had to overcorrect for that. 0:36:42 – What’s the job of the founder? 0:36:45 Is the job of the founder to figure out the product, 0:36:47 figure out the market and get the idea nailed? 0:36:49 Or is the job of the founder to staff 0:36:51 an executive team and an employee base, right? 0:36:54 And those are two like, they’re overlapping responsibilities, 0:36:55 but there’s a lot of- 0:36:57 – The first one turns out to be a lot more important. 0:36:58 – Yeah, it’s like the startups where it’s like, 0:37:01 they’re doing all the outward facing things 0:37:03 involved with being a startup, but like there’s nothing there. 0:37:05 – Well, you know, and you always kind of know it 0:37:09 because the founding team is all vice presidents 0:37:10 and no engineers or something like that. 0:37:13 It’s just like, okay, what are you doing? 0:37:16 You can’t execute your way through like no ideas. 0:37:18 – 10 people in the company, they’ll have a chairman, 0:37:20 they’ll have a CEO, they’ll have a president, 0:37:21 they’ll have a VP of sales, a VP of marketing 0:37:23 and a VP of engineering. 0:37:25 That’s not a good, yep. 0:37:27 – A little bit of a pet peeve for me too. 0:37:30 So the first time that I met Ben, it was with Mark 0:37:32 and it was at the Creamery in Palo Alto. 0:37:34 And I think you were about probably about six months away 0:37:36 from starting the fund. 0:37:40 I never would have predicted how things would have turned out. 0:37:42 Did you predict how things would have turned out? 0:37:47 – You know, no, I think we dreamed that we would kind of 0:37:48 get to where we got to, 0:37:51 but it was a much longer timeframe on the dream. 0:37:54 I mean, things worked out way, way better 0:37:58 than I think either of us set out and expected. 0:38:01 And, you know, we had a lot of good luck along the way. 0:38:03 And then a lot of, you know, great help 0:38:07 from a lot of people, you know, people like actually starting 0:38:08 with like people like Jim Breyer, 0:38:13 who kind of taught us what it meant to like create an LP base 0:38:16 and how to think about investors and those kinds of things. 0:38:19 And then, you know, we ended up getting like very lucky 0:38:21 on the hiring, I think our first hire was Scott Kupor, 0:38:25 who we probably kind of built the firm with. 0:38:28 And then, you know, our first consultant was Margaret. 0:38:31 And, you know, there’s no way we would have like 0:38:32 pulled off the marketing thing without her. 0:38:35 So a lot of, a lot of really great luck 0:38:36 and a lot of really great help. 0:38:38 Oh, and Andy Rachleff, yeah, he helped us understand 0:38:39 what venture capital was. 0:38:41 – It turns out to be. 0:38:43 – Neither of us had any experience. 0:38:44 – Then obviously all the partners who have joined us 0:38:47 and so about 150, 150 people now, 0:38:49 by definition numerically, they get almost all the credit. 0:38:52 – So one thing I definitely want to get to is the transition 0:38:57 from a VC firm to a financial advisor for whatever that means. 0:38:59 And you were a little bit iconoclastic and different 0:39:00 from the beginning. 0:39:04 This also seems, we’ll see looking back, iconoclastic, 0:39:05 but definitely different.
0:39:06 What was the idea there? 0:39:08 – Yeah, so, you know, kind of the thing that catalyzed it 0:39:10 was actually crypto. 0:39:14 There’s a rule that exempts VCs from having to do 0:39:17 a bunch of kind of regulatory compliance stuff. 0:39:21 And part of the thing that keeps you as a VC 0:39:26 is you can’t invest more than 20% of your funds 0:39:30 in things that aren’t like primary equity investments. 0:39:33 So crypto would fall into that category, secondaries 0:39:34 and so forth. 0:39:37 And look, we believe crypto is going to be important. 0:39:39 Now there’s a lot of VCs who do, 0:39:41 who won’t take the step that we did to become regulated 0:39:43 in the way that we have. 0:39:46 But, you know, like this is kind of another advantage 0:39:49 from our background is we’re not afraid of governance 0:39:50 or regulation or these kinds of things. 0:39:52 And that, you know, it’s something that we understand 0:39:54 pretty well from being a public company. 0:39:55 We’ve done it before. 0:39:57 And it opens up a lot of opportunities 0:39:59 that we can now think about 0:40:02 because, you know, we’re in another category. 0:40:05 – Yeah, so this changed the categorization 0:40:06 and then the regulatory environment, 0:40:07 but you’re still a VC firm. 0:40:08 – Yes, yeah. 0:40:11 Now we still are in the exact same business we always were. 0:40:13 – Mark, I gotta also do TV shows that you recommend 0:40:17 ’cause you are a source of excellent viewing. 0:40:18 – There we go, good. 0:40:19 All right, well, I’ll struggle. 0:40:20 So I can’t help myself. 0:40:23 Deadwood is the best TV show of all time. 0:40:24 And it’s actually very relevant for founders. 0:40:27 It’s basically, it’s the story of the American frontier 0:40:29 through kind of a modern lens. 0:40:31 And it’s just astonishingly high quality. 0:40:32 And it’s basically the creation of a city. 0:40:33 It’s basically the creation of a city 0:40:34 and ultimately the creation of a state, 0:40:36 the state of South Dakota. 0:40:39 And it is, you know, in the face of just like, you know, 0:40:41 horrifying obstacles. 0:40:42 You know, and by the way, you know, 0:40:44 many ethical issues along the way and everything else. 0:40:45 If you think starting a tech company is hard, 0:40:47 you wanna watch a couple of seasons of Deadwood. 0:40:48 It’ll put you in the right frame of mind. 0:40:50 – And then the movie, so it got canceled, 0:40:51 it got canceled a decade ago, 0:40:52 after three seasons, 0:40:54 and it really should not have been. 0:40:56 And so they did a very rare thing. 0:40:56 They went back 10 years. 0:40:59 And all these people who were in the show became huge stars 0:41:00 afterwards. 0:41:01 So they got them all back 0:41:03 and made the fourth season into a movie. 0:41:04 – Wow. 0:41:05 All right, can’t wait to see it. 0:41:06 – Yep. 0:41:07 You can give me two books if you want, two books. 0:41:09 – Yeah, give me two books, give me five books. 0:41:11 – Favorite two books of the year. 0:41:14 Book number one, "Why History Gets Things Wrong," 0:41:18 is not written by a historian and it is basically, 0:41:19 if you’ve read Nassim Taleb, 0:41:21 he talks about something called the narrative fallacy, 0:41:24 which basically is, okay, why did something happen? 0:41:25 And then there’s some story as to why it happened. 0:41:27 And then it usually turns out, 0:41:28 like if you talk to the principals involved, 0:41:29 it wasn’t that story at all. 0:41:31 It was something much more complex.
0:41:34 And so this is like the next level of that theory 0:41:36 that basically says all of recorded history 0:41:37 is the narrative fallacy. 0:41:39 And so everything that we think we understand 0:41:41 about why the American revolution happened 0:41:44 or why Rome fell, right, or why Christianity emerged 0:41:45 or like any of these stories that we, 0:41:47 all the, any of these things that we teach you, 0:41:50 we’ll take 12 years of history class in school 0:41:51 and all this stuff, like it’s all wrong. 0:41:53 Like it’s worse than wrong 0:41:55 because it’s not like it could be corrected. 0:41:57 You couldn’t actually make a bunch of edits 0:41:59 to the book and make it correct. 0:42:00 You can’t do that at all. 0:42:02 And the reason why it’s worse than wrong 0:42:03 and you can’t ever get it right 0:42:06 is because reality is so complicated, right? 0:42:07 Reality is a complex adaptive system 0:42:09 when you’ve got human agents involved in everything. 0:42:12 And so you’ve got, and anything big that happens, 0:42:13 you’ve got thousands or millions of people 0:42:15 who are making all kinds of random decisions every day 0:42:16 for all kinds of random reasons 0:42:19 and it just happens that things result in a certain way. 0:42:21 – Is this wrong in the same way to think that 0:42:22 it didn’t rain because God was mad 0:42:24 or it did rain because we performed the ritual 0:42:27 in the right way, like puts some agency into a system 0:42:29 that there isn’t anyone making decisions. 0:42:30 It’s just the emergence. 0:42:31 – Yeah, exactly, in reality, the weather system. 0:42:33 I mean, you know, this is after 100 years 0:42:33 of meteorological science 0:42:35 and they still can’t predict it’s gonna rain tomorrow. 0:42:37 And it’s ’cause the atmospheric system 0:42:38 is a complex adaptive system 0:42:40 is too complicated to model. 0:42:42 And just like, and you can keep throwing supercomputers at it 0:42:44 and it’s still too complicated to model. 0:42:46 So, by the way, it’s the same thing, the human body. 0:42:47 Like we don’t, it’s actually, 0:42:48 we think about this a lot in the bio fund. 0:42:49 It’s like, we don’t understand. 0:42:50 Like we don’t even, there’s not even settled 0:42:51 in traditional science. 0:42:53 Like we still don’t know, like there’s still, 0:42:55 and there’s no, a whole new category revision of science 0:42:58 now questioning this whole, the whole protein fat, 0:42:59 you know, the whole protein fat thesis. 0:43:02 And so like it’s, the example he uses in the book 0:43:04 at the fall of Rome is like the, you know, 0:43:06 the single most studied kind of historical story 0:43:07 is like the rise and fall of Rome. 0:43:09 And it’s like, and basically what happens is like, 0:43:10 if like a science is working properly, 0:43:12 you converge on the correct answer. 0:43:13 Like you converge on Newton’s law. 0:43:15 So you converge on quantum mechanics 0:43:16 or something like that. 0:43:18 He’s like, the problem in history is that 0:43:19 the more time goes, Pat, 0:43:21 the more explanations they come up with, 0:43:23 the more new explanations they come up with. 0:43:24 And there’s like historians have documented, 0:43:26 there’s like 250 now different causes 0:43:28 for the fall of Rome, right? 0:43:30 And so like, it just, it leaves you with nothing. 0:43:33 And so it’s, it’s, it’s a, it’s a disconcerting theory 0:43:34 ’cause it basically says getting a handle 0:43:37 on cause and effect in the world is impossible. 
0:43:38 It’s a very convenient theory 0:43:41 ’cause it means you can just ignore history. 0:43:42 That saves you a lot of time. 0:43:43 It saves you a lot of time. 0:43:45 And then it’s an inspiring theory, 0:43:48 I find, ’cause it’s like, okay, 0:43:49 things can change. 0:43:51 Like nothing is actually carved in stone, 0:43:53 like not even a little bit. 0:43:55 And who knows what’s going to be the next person 0:43:56 who’s going to make the decisions 0:43:56 that’s going to cause everything 0:43:57 to go one way or the other. 0:43:58 And that could be you. 0:43:59 And so I find that inspiring. 0:44:03 And then the other book I love to tell people about 0:44:06 is by David Goggins, who’s the only guy in history 0:44:08 who’s triple qualified as a Navy SEAL and Army Ranger 0:44:11 and what’s called an Air Force tactical air controller, 0:44:13 which is a special forces unit of the Air Force. 0:44:15 So triple qualified special forces. 0:44:18 He wrote a book called "Can’t Hurt Me." 0:44:19 And it is one of the most amazing stories 0:44:20 that anybody has ever written. 0:44:23 His story is really amazing. 0:44:25 He’s one of the only African-American Navy SEALs 0:44:29 in history and just manages incredible accomplishments 0:44:30 both inside and outside the military. 0:44:32 And it’s the book. 0:44:34 Like if Ben’s book is about like the struggle in business, 0:44:36 like David’s book is about the struggle in life. 0:44:39 And so anytime anybody feels mopey 0:44:43 about what’s happening in their startup or in their life, 0:44:44 this is the book to read, 0:44:46 to kind of reset all the expectations. 0:44:46 All right, great. 0:44:48 Well, thank you so much. 0:44:51 It was a pleasure and an honor to be able to do this with you. 0:44:52 All right, Stewart, thank you so much. 0:44:53 Thank you, Stewart.
with Marc Andreessen (@pmarca), Ben Horowitz (@bhorowitz), and Stewart Butterfield (@stewart)
A lot in technology — and venture — happens in decades. New cycles of technology come and go, including some secular shifts; a new generation of founders matures; and so much more changes. So when Andreessen Horowitz (dubbed with the numeronym ”a16z”) was founded a decade ago as of this month, the tech landscape looked very different than it does now: not only had the global economy just seen a recession, but trends like mobile and cloud and even social were just taking off.
Now, 10 years later, what’s changed — not just in tech, but in profiles of entrepreneurs? And what’s changed in the firm itself, given that Marc and Ben — the Andreessen and the Horowitz — were yet again entrepreneurs in founding the firm too? As another repeat entrepreneur from then to now, guest host Stewart Butterfield, CEO of Slack, interviews the a16z co-founders in this special episode of the a16z Podcast to commemorate our 10th anniversary.
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
0:00:02 Hi and welcome to the inaugural edition 0:00:04 of the A16Z Bio Journal Club. 0:00:05 I’m Hannah. 0:00:08 Our goal here is to take an interesting new research paper 0:00:10 in the field and talk about why it’s cool, 0:00:12 break down a little of the science involved, 0:00:14 and consider what the implications of this research 0:00:15 for industry might be. 0:00:17 So in our first take, A16Z General Partner 0:00:21 on the Bio Fund Jorge Conde and Deal Team Partner Andy Tran 0:00:24 chat with me about two papers recently published. 0:00:27 The first, transposon-encoded CRISPR-Cas systems 0:00:30 direct RNA-guided DNA integration, 0:00:33 was published by a group under Samuel H. Sternberg 0:00:36 at Columbia in Nature in June 2019. 0:00:39 The second is RNA-guided DNA insertion 0:00:42 with CRISPR-associated transposases, 0:00:45 from a team under Feng Zhang from the Broad Institute, 0:00:48 published in Science, also in June 2019. 0:00:50 We talk about what these papers are all about 0:00:53 in the field of CRISPR development and beyond. 0:00:54 This mini podcast is available 0:00:57 as part of our new A16Z Bio Newsletter. 0:01:00 So if you like it and you want to hear more or read more, 0:01:05 please sign up for the newsletter at a16z.com/subscribe. 0:01:07 Let’s talk about what specifically is happening here. 0:01:10 What is a transposon-encoded CRISPR-Cas system 0:01:13 directing RNA-guided DNA integration? 0:01:14 Like, what does that actually mean? 0:01:16 Can you help me understand what was the interesting science 0:01:17 that was going on here? 0:01:18 – Yeah, so what’s really interesting 0:01:19 in the field of CRISPR lately, 0:01:22 and actually a few papers that came out in the recent times 0:01:26 developed a way to basically use this transposon machinery 0:01:28 inside the cell and use CRISPR to direct it 0:01:30 into a specific place in the genome 0:01:34 and really edit the genome without cutting it open at all. 0:01:36 So you can think of it as this scarless type 0:01:37 of genetic modification. 0:01:39 – Essentially what a transposon is, 0:01:42 is this sort of phenomenon that’s been observed 0:01:44 where you have sort of these genes 0:01:46 that jump around the genome. 0:01:47 – It’s kind of mysterious what the jumping 0:01:48 is really used for. 0:01:50 People surmise that, you know, potentially, you know, 0:01:54 it helps, you know, the cells and organisms evolve in general. 0:01:56 So what this paper shows is how to utilize 0:01:58 this CRISPR machinery known as Cascade, 0:02:02 and it’s formed by, you know, Cas6, Cas7, Cas8 proteins, 0:02:04 to really insert genes into the genome. 0:02:08 And this Cascade protein is directed to the chromosome 0:02:11 by using guide RNA, you know, CRISPR machinery, 0:02:15 and it then binds to this transposase-associated protein, 0:02:18 TniQ, and it allows them to recruit 0:02:20 these transposable elements and then effectively integrate 0:02:22 the gene into the genome. 0:02:25 And this is super powerful because now we’re able to, 0:02:27 you know, add genes in the genome directly 0:02:29 and precisely without cutting it open. 0:02:32 It’s a scarless way to modify the genome. 0:02:35 – So in some ways, this is almost like if CRISPR 0:02:38 is about surgically inserting or editing DNA, 0:02:41 this is almost in many ways like plastic surgery, right? 0:02:42 It’s scarless. – Ah, right, right.
0:02:45 – It doesn’t leave a mark and therefore in some ways 0:02:48 it’s less risky from an intervention standpoint 0:02:49 on the genome side. 0:02:51 – What it really boils down to is that 0:02:53 this machinery utilizes a way 0:02:56 to really efficiently integrate genes into the cell, 0:02:58 into the genome specifically, 0:02:59 without having to cut it open at all. 0:03:02 And to the metaphor of the surgery, 0:03:04 this is really important in the whole context 0:03:06 of gene therapy as a whole, right? 0:03:08 ‘Cause when we started off in gene therapy, 0:03:09 we had these random integration 0:03:12 of these trans genes into the cell. 0:03:15 And, you know, this was a more stochastic process. 0:03:17 So think of it almost as, you know, 0:03:20 the arrival of this paper and papers like it 0:03:24 are showing us that we’ve gotten to a point 0:03:28 where we have a fully programmable ability 0:03:32 to integrate new genes or new DNA into a genome 0:03:34 without having to first open up 0:03:36 or break apart the DNA to do that. 0:03:38 – So let’s go back and actually situate it 0:03:40 into the development of the science, 0:03:42 what it represents for where we’ve gotten 0:03:44 from where we began. 0:03:45 – Yeah, so if you want to do the really, 0:03:47 the fast forward montage version of this. 0:03:49 – Yeah, the 80s montage. 0:03:51 – Yeah, exactly, so I’m sure you’ll put 0:03:52 in the pop music behind it. 0:03:55 – Right, yeah, and the overalls in the paintbrushes. 0:03:56 – He always started in the 60s. 0:03:57 – In the time motion. 0:03:58 – Yeah, let’s go. 0:04:00 – But look, I mean, if you go all the way back, 0:04:02 you know, one of the earliest sort of technologies 0:04:04 that came to the fore was the discovery 0:04:05 of something called the restriction enzyme. 0:04:07 And the restriction enzyme was this ability 0:04:10 to take this protein that could cut DNA 0:04:12 at these predetermined sites, 0:04:15 essentially open up the DNA, 0:04:19 and then you could introduce new genetic material, 0:04:20 and it would eventually integrate randomly, 0:04:23 but it would integrate into that DNA. 0:04:25 That’s what gave us the ability to do 0:04:27 or recombinant DNA technology, 0:04:29 which gives rise to the entire biotech field. 0:04:32 One of the earliest applications of biotechnology is 0:04:35 getting bacterial systems to integrate human insulin 0:04:38 so that we could coax bacteria to make that drug 0:04:41 or human insulin on our behalf, of course, to treat diabetics. 0:04:43 So that starts the whole field. 0:04:45 Now, you know, if you sort of move forward, 0:04:46 the objective always has been, 0:04:51 can you cure disease by repairing 0:04:55 or replacing what’s broken in DNA? 0:04:58 And so the whole field of gene therapy arises 0:04:59 with this idea that, you know, 0:05:02 can we introduce a corrected version 0:05:05 of a non-functional gene in a patient 0:05:06 and have it do the job 0:05:08 that the non-functional gene cannot do? 0:05:12 The first version was just put that gene 0:05:16 into a viral vector, into almost like a delivery vehicle, 0:05:18 introduce that into patient cells, 0:05:20 that gets taken in by the cells, 0:05:22 and the gene just starts to sort of do its job. 0:05:23 It integrates randomly in the genome, 0:05:24 but it does the job it needs to do. 0:05:26 Hopefully it integrates, or maybe it doesn’t, 0:05:28 but it just does the job to compensate 0:05:30 for some mutated gene. 
0:05:33 Yeah, so it’s basically almost a parallel support system 0:05:35 that the cell has. 0:05:37 We just had the first gene therapies approved 0:05:39 over the course of the last couple of years. 0:05:42 If you look at the spark therapy, gene therapy drug for, 0:05:44 this rare inherited form of blindness. 0:05:47 And so that was one big advance forward. 0:05:49 Now, that is introducing a full gene 0:05:51 and just hoping it gets taken up by the cell. 0:05:52 Just kind of throw it in the mix. 0:05:54 Yeah, throw it in the mix and it has its own risk. 0:05:56 The original discovery of CRISPR-Cas9, 0:05:59 that system, the way that system works 0:06:02 is by making what’s called a double-stranded break 0:06:03 in the DNA molecule. 0:06:04 So if you think of, you know, 0:06:07 we all remember DNA from high school biology 0:06:10 as sort of the beautiful double helix. 0:06:12 So imagine sort of cutting that double helix in two 0:06:14 and making an end of it, right? 0:06:17 And then, you know, and then putting it back together. 0:06:18 That’s not riskless, right? 0:06:20 And by the way, it’s very well known 0:06:22 and documented that one of the big setbacks 0:06:25 that the gene therapy field had a couple of decades ago 0:06:27 was when they ran a clinical trial 0:06:28 at the University of Pennsylvania, 0:06:31 there was a patient named Jesse Gelsinger 0:06:32 who died after receiving the therapy 0:06:34 just because there’s a lot of risk associated 0:06:37 with introducing, you know, a viral capsid with a gene 0:06:39 into a cell system, into human being 0:06:41 where you could have a catastrophic result 0:06:41 and that patient died. 0:06:44 And that actually put a big pause on how we thought 0:06:47 about developing gene therapies for humans. 0:06:49 But that approach will have 0:06:51 and does have therapeutic potential. 0:06:52 And there are several companies 0:06:56 that are pursuing developing CRISPR-Cas9-based therapeutics. 0:06:58 As that advanced in parallel, 0:07:02 we started to see other gene or genome editing technologies 0:07:03 come to the fore. 0:07:06 The first one was one known as zinc finger nucleases, 0:07:08 which really didn’t have as much uptake 0:07:10 as one would expect 0:07:13 because it’s just very hard to deal with these proteins. 0:07:16 – How zinc fingers work is that you actually use these proteins 0:07:18 to bind onto DNA sequences. 0:07:21 And every time you want to iterate to find a new target, 0:07:22 cost tens of thousands of dollars 0:07:24 in a few months to develop one protein. 0:07:26 – So it’s enormously expensive and labor intensive. 0:07:27 – Exactly. 0:07:28 So it never really caught on like wildfire 0:07:30 in the community, right? 0:07:31 – It’s a very bespoke process. 0:07:34 It’s one of the type of thing. 0:07:36 And then there were other technologies. 0:07:38 There was Talens, 0:07:40 which was a bit better than zinc finger nucleases, 0:07:45 but really the big sort of shift in the field 0:07:47 was the arrival of CRISPR. 0:07:49 – And so when scientists discovered CRISPR, 0:07:52 they noticed that it was just always constant evolutionary 0:07:55 warfare between bacteriophages and bacteria 0:07:56 for billions of years. 0:07:58 – Bacteriophages are viruses. 0:07:58 – Yes. 
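For readers who think in code, here is a minimal, purely illustrative string model of the distinction this conversation keeps circling: cut-and-repair editing versus scarless, guide-directed insertion. The genome, guide, and payload strings below are invented, and real systems insert a short distance downstream of the target and involve protein machinery this sketch ignores entirely.

```python
# Toy string model (not real bioinformatics) of two editing modes discussed here.

def cut_and_replace(genome: str, target: str, replacement: str) -> str:
    """Cas9-style editing in miniature: excise the matched site and swap in new sequence."""
    i = genome.find(target)
    if i == -1:
        return genome  # no match, nothing edited
    return genome[:i] + replacement + genome[i + len(target):]

def scarless_insert(genome: str, guide: str, payload: str) -> str:
    """Transposase-style integration in miniature: leave the matched site intact
    and integrate the payload just after it."""
    i = genome.find(guide)
    if i == -1:
        return genome
    j = i + len(guide)
    return genome[:j] + payload + genome[j:]

if __name__ == "__main__":
    genome = "AAATTTGGGCCC"
    print(cut_and_replace(genome, "TTT", "TAT"))  # AAATATGGGCCC -- target site altered
    print(scarless_insert(genome, "TTT", "XYZ"))  # AAATTTXYZGGGCCC -- target site preserved
```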
0:08:03 – And so when these viruses inject their viral genome 0:08:04 into the bacteria, 0:08:06 you can think of it as the bacterial immune system, 0:08:10 whereas they’re able to sense these little snippets 0:08:11 of viral genome, 0:08:13 they would create a vaccination card from it, 0:08:14 and then they’ll put it back into this, 0:08:16 what is known as this CRISPR array. 0:08:18 And this would be like a vaccine card 0:08:20 or a vaccine database, if you will. 0:08:22 So every time it recognizes that foreign viral sequence. 0:08:23 – Right, then it knows what to do. 0:08:25 – It would know how to snip it away, right? 0:08:27 And then people have hijacked this machinery. 0:08:31 So what if we can program this outside of bacteria 0:08:32 and use it in human cells 0:08:35 and program it to sense not viral DNA, 0:08:38 but specific targets in the human genome. 0:08:42 And then then we can program and effectively target 0:08:44 anywhere we want in the cell, right? 0:08:46 – So now bring us forward to today. 0:08:48 So is this new development, 0:08:50 does it mean that the therapeutic potentials 0:08:52 are essentially less risky? 0:08:54 Or are there new possibilities 0:08:56 that we haven’t been able to do before? 0:08:56 Or both? 0:08:59 – Basically, you know, it’s really powerful 0:09:01 because when we talk about the first gen of gene therapy 0:09:03 to the second gen of CRISPR, 0:09:05 this paper really represents this third wave 0:09:07 in this scarless genomic editing. 0:09:10 This is basically genomic surgery at its finest, right? 0:09:13 You know, when we think about laparoscopic surgery 0:09:15 and all these advanced surgery tools, 0:09:16 we want to have, you know, 0:09:19 basically a scarless methodology of doing surgery. 0:09:21 This is a way to really cleanly integrate, 0:09:23 you know, genomic segments into the cell 0:09:25 without even touching, right? 0:09:28 And then this also represents an even broader tool 0:09:31 of, you know, the entire, you know, genomic toolkit landscape. 0:09:33 – The real promise on the near term side 0:09:37 is that we get to a point where you can make 0:09:40 scarless integrations of genetic material into the DNA. 0:09:42 So it’s less traumatic in that regard. 0:09:44 So if you can do that more precisely, 0:09:46 there’s less scar tissue that hopefully is, you know, 0:09:48 a better intervention altogether. 0:09:50 So that holds great promise from a potential 0:09:53 for future therapeutics based on this type of technology. 0:09:55 I think the other thing that’s worth noting 0:09:57 is that in a relatively short period of time, 0:10:01 the programmability of these kinds of systems 0:10:04 has improved dramatically. 0:10:06 So the kinds of things that we could do, 0:10:09 that we can do now with, you know, with CRISPR 0:10:11 based on these kinds of advances 0:10:15 gives us a very broad repertoire and toolkit to work with, 0:10:17 whether it’s for therapeutic applications 0:10:18 or for diagnostic applications 0:10:20 or for any other number of things. 0:10:25 When most people think about CRISPR developing CRISPR 0:10:26 for human health, 0:10:28 they’re thinking about therapeutic applications. 0:10:31 But the reality is there’s also a lot of potential 0:10:32 for diagnostic applications. 
0:10:34 So going back to the original discovery 0:10:35 of the CRISPR-Cas system, 0:10:38 this was essentially the memory bank of the immune system 0:10:40 for bacteria to remember what viruses 0:10:41 had attacked it before 0:10:43 so they could protect themselves going forward. 0:10:47 And the way it does that is by essentially cutting 0:10:49 that viral genetic material. 0:10:52 So it’s in effect essentially, basically, you know, 0:10:56 cutting it off at its Achilles heel, so to speak. 0:10:57 And so as you can imagine, 0:11:00 if you’re hijacking that capability 0:11:01 for therapeutic purposes, 0:11:03 you could also hijack that capability 0:11:05 for diagnostic purposes. 0:11:07 And a way that that could potentially work, for example, 0:11:12 is if you know what bacteria or what virus 0:11:15 or even what, you know, mutation in DNA you’re looking for 0:11:18 in say a human sample, a blood sample 0:11:20 or urine sample or anything like that. 0:11:24 If it’s present, it will get cut by the right Cas system. 0:11:27 You program the CRISPR-Cas system to say, 0:11:31 these are the sequences of genetic material 0:11:33 that I am looking for in the sample. 0:11:34 This is basically the search engine. 0:11:36 Like you’re doing essentially a Google search. 0:11:38 – And you basically just look for the kill switch 0:11:40 to have been activated. 0:11:41 – You just look for the identifier. 0:11:42 So you basically say, 0:11:44 if I want to look in this patient’s sample, 0:11:47 let’s say I want to look for a specific bacteria 0:11:48 or a specific virus 0:11:51 or specific genetic mutation associated with disease, 0:11:53 you can say, if you find the presence 0:11:55 of any one of these sequences, 0:11:56 those are almost like the search terms, 0:11:58 cut them and when you cut them, 0:12:00 you can essentially engineer the system 0:12:02 to send out some sort of a reporter, 0:12:04 usually it’s a visual marker. 0:12:06 And so basically if it lights up, 0:12:08 it’s because the CRISPR-Cas system cut the DNA 0:12:10 you told it to look for. 0:12:12 So that has a potential diagnostic application 0:12:13 and you could run that diagnostic 0:12:15 essentially without a lab, right? 0:12:16 ’Cause it just happens with the biology. 0:12:19 So a whole other kind of new tool, essentially. 0:12:21 – Yeah, and I think the diagnostic applications 0:12:23 for these kinds of technologies 0:12:26 are actually pretty intriguing 0:12:28 because most of the way we do diagnostics 0:12:33 is based on developing some very specific biological 0:12:34 or chemical assays. 0:12:36 So look for something and if so, 0:12:38 have the reaction take place 0:12:40 and one that I can visualize and quantify 0:12:42 or quantitate in some way. 0:12:44 But here you’re just basically letting the biology 0:12:45 do the work for you. 0:12:47 In the CRISPR toolkit, 0:12:51 a lot of the initial applications were using 0:12:53 CRISPR-Cas9, this one nuclease. 0:12:56 And actually the diagnostic application 0:12:57 that Jorge was talking about 0:12:59 was actually using these other nucleases 0:13:01 known as Cas13 and Cas12. 0:13:04 And so all these different CRISPR proteins, 0:13:08 Cas9, 12, 13, X, Y, that we’re continuing to discover 0:13:11 have all different fundamental applications, right? 0:13:12 Even fundamentally changing 0:13:14 what these CRISPR nucleases even do. 0:13:15 It doesn’t even cut anymore. 0:13:17 It can do scarless editing. 0:13:20 And then we can even add different applications on it.
0:13:22 There’s new applications being added, 0:13:24 deaminases to do base editing. 0:13:28 So we can really do single base pair resolution editing. 0:13:30 That’s really the final frontier of precision 0:13:32 in terms of genomic modification. 0:13:34 And I think what’s really important is that 0:13:36 we’ve really seen this shift from 0:13:38 when it was this random, bespoke science 0:13:40 and really turning into a full-fledged 0:13:42 engineering tool. 0:13:44 And this paper that we talk about here 0:13:47 is not only a great representation 0:13:49 of this engineering biology thesis, 0:13:53 but also a pretty huge potential step change 0:13:54 for the field as a whole. 0:13:56 – You can program this to turn genes up and down 0:13:58 as opposed to just editing them. 0:14:01 There’s even work that’s ongoing to use this technology 0:14:03 to image DNA directly, 0:14:04 which is a pretty remarkable thing 0:14:07 because since this is acting locally on DNA, 0:14:10 you can add all kinds of agents to make it imageable. 0:14:12 So therefore, you can observe chromatin 0:14:14 or genome structure directly. 0:14:16 So there’s a lot that can be done with this technology, 0:14:20 and you hear the old adage about the pickaxes for the gold. 0:14:22 And with these toolkits, 0:14:25 I mean, we are quite literally panning for gold here. 0:14:28 I mean, where these things get found, 0:14:31 they’re found in soil and in ocean vents. 0:14:32 – New York City subways. 0:14:34 – The New York City subway. 0:14:37 So these people are quite literally looking in nature 0:14:39 because nature has ingenious ways 0:14:41 to do a lot of the things that we’re trying to do 0:14:45 from an engineering biology or programmable biology standpoint. 0:14:50 And so I think it’s a remarkable moment to take pause 0:14:53 and see how far this technology has come 0:14:55 in a relatively short period of time. 0:14:58 This is this generation’s recombinant DNA or restriction enzymes, 0:15:01 the technologies that really gave rise to the biotechnology industry. 0:15:03 I think this tool kit, this CRISPR tool kit, 0:15:06 as we’re describing it and discussing it, 0:15:07 represents sort of the next frontier 0:15:10 for what will happen in biology. 0:15:13 So an incredible development of precision 0:15:15 in what the tools can do and at the same time, 0:15:17 a huge expansion of what that will enable us 0:15:18 to do going forward. 0:15:20 Thank you guys so much for joining us. 0:15:23 Should I say on the A16Z Journal Club? 0:15:23 – No, no, please don’t say that. 0:15:24 – Yeah, okay.
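One way to picture the "search engine" framing of CRISPR diagnostics from this conversation is a toy lookup: guide sequences act as search terms, and a match stands in for the Cas enzyme finding its target and triggering a reporter that "lights up." The sequences and marker names below are invented for illustration; real Cas12/Cas13 diagnostics rely on collateral cleavage chemistry, not string matching.

```python
# Toy sketch of the "programmable search" analogy for CRISPR-based diagnostics.
# GUIDES maps a hypothetical marker name to a made-up target sequence.

GUIDES = {
    "flu_A_marker": "ATGGCTAGCTAGGAT",
    "strep_marker": "TTGACCGGTAACGTA",
}

def run_diagnostic(sample_dna, guides):
    """Return the names of guides whose target sequence appears in the sample.

    In the analogy: each guide is a 'search term'; a hit stands in for the
    Cas enzyme cutting its target and switching on a visual reporter."""
    return [name for name, seq in guides.items() if seq in sample_dna]

if __name__ == "__main__":
    sample = "CCGT" + GUIDES["strep_marker"] + "GGATTACA"  # spiked toy sample
    hits = run_diagnostic(sample, GUIDES)
    print("reporter lit up for:", hits if hits else "nothing detected")
```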
Two recent scientific journal papers show what’s possible when CRISPR moves from a DNA-cutting tool to a full-fledged platform — expanding its toolkit for medicine across R&D, therapeutics, and diagnostics:
”Transposon-encoded CRISPR-Cas systems direct RNA-guided DNA integration” in Nature — by Sanne Klompe, Phuc Vo, Tyler Halpin-Healy, and Samuel Sternberg (of Columbia University)
”RNA-guided DNA insertion with CRISPR-associated transposases” in Science — by Jonathan Strecker, Alim Ladha, Zachary Gardner, Jonathan Schmid-Burgk, Kira Makarova, Eugene Koonin, and Feng Zhang (of the Broad Institute)
What do these two papers — both about techniques for getting rid of the need to cut the genome to edit it — make possible going forward, given the ongoing shift of biology becoming more like engineering? Where are we in the wave of the genome engineering ”developer community” building on top of CRISPR with a constantly growing suite of programmable functionalities? a16z bio general partner Jorge Conde and bio deal team partner Andy Tran chat with Hanne Tidnam about these trends — and these two papers — in this short internal hallway-style conversation, part of our new a16z Journal Club series.
This podcast is also part of our new a16z bio newsletter, which you can sign up for at a16z.com/subscribe
0:00:05 The content here is for informational purposes only, should not be taken as legal business 0:00:10 tax or investment advice or be used to evaluate any investment or security and is not directed 0:00:15 at any investors or potential investors in any A16Z fund. For more details, please see 0:00:22 a16z.com/disclosures. Hi, and welcome to the A16Z podcast. I’m 0:00:27 Hanne, and this episode is all about synthetic fraud, a new evolution of consumer fraud that’s 0:00:32 emerging in financial services to the tune of $1-2 billion a year. 0:00:36 In this episode, Naftali Harris, co-founder and CEO of Sentilink, which builds technology 0:00:41 to detect and stop synthetic fraud, talks to me and A16Z operating partner for information 0:00:46 security Joel de la Garza, all about what this new kind of fraud is, including the life 0:00:52 cycle of this long con, how these synthetic identities get made, incubated, and finally 0:00:57 busted out, and some of the wildest stories behind the strange fraud rings he’s seen. 0:01:02 We also touch on why this new fraud is on the rise, who the true victims are, and at 0:01:06 the end of the day, what the foundational security issue at the heart of it all truly 0:01:11 is. We’re here to talk about synthetic fraud, which I have to confess I didn’t even know 0:01:15 what that really meant when we first started talking about it. What does synthetic fraud 0:01:18 even mean? Almost no one hears about it in the public 0:01:22 outside of the financial services industry. I hope that neither of you have been the victim 0:01:23 of identity theft. I have. 0:01:29 Okay, I’m sorry to hear that. And if you haven’t, you probably know someone that has been. 0:01:32 And so the general public is very aware of identity theft because there’s that consumer 0:01:38 victim. With identity theft, you’re stealing a real person’s identity. With synthetic fraud, 0:01:41 you’re saying, forget the real person, I’m going to make up a totally fake one. 0:01:44 And that means like fake from the very ground up. 0:01:49 Yeah, from the ground up. So a fraudster will use a synthetic identity, so a made-up name, 0:01:54 date of birth, and SSN combination in order to open up an account with a bank or get a 0:01:58 loan from a bank. The key thing here is that there’s no one record, there’s no one 0:02:02 actual person that it all belongs to. And then what they’ll be able to do is actually 0:02:07 acquire quite a bit of credit, take out a lot of loans, usually a few tens of thousands of 0:02:13 dollars from every major bank and lender, and then use that to get a lot of money and 0:02:14 not repay any of it. 0:02:19 How prevalent is this kind of fraud in the industry? I mean, how much is this happening 0:02:21 versus like we all hear about identity theft all the time? 0:02:25 So this is actually one of the super interesting things. So we’ve added up the losses across 0:02:29 the industry and within lending, it’s somewhere from one to two billion dollars of losses 0:02:30 annually. 0:02:34 Wow. And how aware of it are the banks? At what point do they catch on? 0:02:38 That’s also one of the really interesting things because there is no consumer victim. 0:02:42 The banks have a really hard time figuring out which of their losses are attributable 0:02:46 to synthetic fraud as opposed to somebody that had a hardship or lost their job. 0:02:48 Oh, right. Same pattern of behavior. 0:02:52 Exactly.
With identity theft, what happens is somebody opens up an account, they get 0:02:56 a new credit card, they steal a lot of money from the bank, and the way the bank finds 0:03:01 out about it is eventually the victim contacts them and says, “Hey, I didn’t take out this 0:03:04 credit card. This wasn’t me.” And they’ll sign an affidavit and then the bank will realize 0:03:09 this was an actual victim of identity theft as opposed to someone that just had a hardship 0:03:13 and took out more money than they should have. And with synthetic fraud, all the bank sees 0:03:19 is a large set of people that haven’t been making payments for the loans, and they have 0:03:23 a really hard time figuring out which of these are people that have had some sort 0:03:26 of legitimate economic challenges. 0:03:29 Yeah. Legitimate need for the loan, basically. 0:03:32 Exactly. And which of them are people that were actually defrauding them? 0:03:37 Synthetic fraud is a relatively new-ish phenomenon. So I think it’s something that’s kind of 0:03:41 grown up. As banks have gotten better at spotting identity theft and credit freezes and those 0:03:46 sorts of things, it seems that that correlated to the rise in synthetic fraud. Identity theft 0:03:52 used to be ridiculously simple. If you think back 10 to 15 years ago, as bank fraud teams 0:03:55 got better, they got better tools to catch this kind of thing, you had credit freezes 0:03:59 come into effect, it seems like the fraudsters pivoted in this direction. 0:04:02 Yeah, that’s exactly right. I mean, another big one actually is the rise of the EMV chip. 0:04:04 Oh, that is a factor in this? 0:04:09 Absolutely. Fraudsters are committing fraud as a business. And what they do is they gravitate 0:04:14 towards channels, so to speak, that are profitable for them. And it used to be you can make a 0:04:17 lot of money doing card skimming. The EMV chip made that a lot harder. So you saw a lot 0:04:23 of fraud move online to card-not-present fraud. So people stealing credit cards online. There’s 0:04:27 been a lot of great technology that’s arisen there recently, which has made that harder 0:04:32 to do. Still certainly happens as we all know. And then a lot of progress against identity 0:04:36 theft, and that’s gotten harder. And so they’re moving on to synthetic fraud, which is very 0:04:40 challenging for banks and lenders to detect and quite lucrative for the fraudster. 0:04:44 But can we just go back to like that moment of opening the account? Why is it so hard 0:04:50 to verify like an actual birthday against an actual name against an actual SSN? Like 0:04:56 if those things are not matching, why is that initial moment not the place to catch it? 0:05:02 So what most people don’t realize is that financial institutions, so banks and lenders, 0:05:08 do not have a list of all name, date of birth, and SSN combinations in the United States. 0:05:11 A lot of people think that the credit bureaus have this list, you know, Experian, Equifax, 0:05:17 and TransUnion, and they don’t have it either. Essentially, the banks and lenders, 0:05:22 certainly until recently, had believed that the three credit bureaus had lists of all 0:05:25 name, date of birth, and SSN combinations. So everybody thought somebody else 0:05:26 was doing it. 0:05:27 Exactly. 0:05:28 That’s absurd. 0:05:31 It’s quite funny, actually. And this is the way that fraudsters actually create these 0:05:37 synthetic identities.
If you apply for credit with a name, date of birth, and SSN repeatedly, 0:05:41 the credit bureaus will believe that it’s a real person, and they’ll create a record 0:05:43 for this totally fake person. 0:05:48 Because they’re only tracking the applications, they’re not backing it up to reality. 0:05:50 Yeah, and they have no way of doing so. 0:05:56 I feel like we’re giving tips to everybody in the world. I don’t like how to create this. 0:05:57 Do not do this. 0:05:58 Exactly. 0:06:03 But that is such a gaping hole in the information flow, a weird blind spot that everybody else 0:06:05 just kind of assumes that… 0:06:09 Yeah, it’s pretty interesting. I mean, so banks and lenders believe that the bureaus have 0:06:13 records on everybody, and mostly general public believes that as well. The logic on the bureau 0:06:18 side is essentially banks and lenders have strong know-your-customer procedures, they’re 0:06:21 doing a great job of risk. And so consequently they say, “Oh, you know, everyone’s talking 0:06:26 about John Smith, that must be a real person.” But actually, nobody really knows here. And 0:06:28 so everyone’s pointing fingers at everybody else. 0:06:34 It seems like, actually, it was this gaping hole for quite a while, right? So why… Was 0:06:37 there always some level of this, and then it just spiked? 0:06:44 I think the interesting point is sort of the actual genesis of this whole situation, which 0:06:48 is that there is no source of truth for proofing identity. And that really lies at the center 0:06:53 of kind of a lot of these issues. There’s sort of a coordination and a collaboration 0:07:00 that has to happen in between entities that, while wanting to minimize fraud, these entities 0:07:04 are also competing with one another in a number of different product categories. And so there 0:07:09 isn’t always necessarily a line financial incentive for them to collaborate. 0:07:12 It’s always been possible, but the thing that’s really challenging about synthetic fraud is 0:07:15 it is such a long con. It’s challenging. 0:07:16 What do you mean by that? 0:07:22 It’s not sufficient to just make a fake identity. You can do that, and it’s pretty easy. But 0:07:28 when you do that, all you have is a person who exists on one of the bureaus, or all three 0:07:33 of them, but doesn’t actually have real credit to their name. No bank is going to give them 0:07:35 $100,000 or even $10,000. 0:07:38 Right. So it’s like me when I first got out of college or whatever. 0:07:42 Exactly. It’s like when you first entered the credit space. And so there are some fraudsters 0:07:46 that will just try to churn through $300 cards, but there’s not a ton of money in that. The 0:07:52 real money that the fraudsters are pursuing is getting access to all the prime credit 0:07:59 cards, to big auto loans, to huge unsecured personal loans. And that requires building 0:08:03 up their credit over a period of one to two years. Get some low limit credit cards, start 0:08:08 making a little bit of payment, build their credit. They do it quite aggressively because 0:08:12 they’re optimizing to when can they get to that 700 plus credit score or better, but 0:08:13 it does take a long time. 0:08:18 And I think that’s the answer as to why we hadn’t seen it in the past because in the 0:08:23 old days, you could go steal someone’s identity, open a line of credit, have access to that 0:08:27 credit within a week, maybe even a couple of days, depending on how you did the disbursement 0:08:33 of funds. 
But then sort of as people got better about reporting those things, as consumers 0:08:37 actually started to notice when lines of credit were open for them, or they had credit 0:08:42 monitoring capabilities, the response time was a lot quicker. So you couldn’t necessarily 0:08:46 get those funds out in that amount of time. And so this is kind of the new process that 0:08:51 they’ve moved on to. And to the earlier point, like this does take some amount of time and 0:08:55 preparation. So creating lots of identities, going through the process of establishing 0:08:59 credit for them over a period of one to two years, and then getting to a cash out that 0:09:03 in the old days, you could have done in five days to maybe a month. 0:09:06 So a lot more work for that same size hit. 0:09:10 So the hit actually can be even bigger than for identity theft. So with identity theft, 0:09:15 you’re racing against the clock, because the victim will actually notice this at some point 0:09:20 and they will, they will say, this wasn’t me. And so they go back to the, to the bank, 0:09:24 they go to the lender and they say, stop doing this. And they’ll put a freeze on their credit 0:09:29 report and so forth. But with synthetic fraud, there’s no race against the clock. There’s 0:09:35 no one who’s watching for this. There’s no one, there’s no one that, that is going to 0:09:37 notice this until they stop making payments. 0:09:40 Yeah. Are you seeing the synthetic fraudsters actually make payments? 0:09:41 Oh, absolutely. 0:09:42 Absolutely. 0:09:43 Wow. 0:09:44 So they’re, they’re taking out loans. They’re making the payments. So except for the initial 0:09:50 fraud of the identity, the behavior is not, at that point, doing 0:09:51 anything wrong. 0:09:56 So there are three phases in the lifetime of a synthetic identity. The first part is 0:10:02 the creation phase. So this is where a synthetic identity starts applying for credit a couple 0:10:06 of times. Oftentimes, we’ll actually start with any lender that does a pull from all 0:10:12 three credit bureaus. So most lenders only pull from one of the three bureaus. So TransUnion, 0:10:17 Experian, and Equifax. But when you first create a synthetic identity, you want to get 0:10:22 that synthetic ID to have credit records on all three of the major bureaus. So one of 0:10:27 the things that we see synthetic identities doing is initially the first place that they’ll 0:10:32 apply for credit is anywhere that does a tri-bureau pull that pulls from all three of the major 0:10:33 bureaus. 0:10:34 They want immediately to disperse that information. 0:10:35 Exactly. 0:10:36 Okay. 0:10:42 So in this creation phase of the synthetic identity’s life, they will apply for credit 0:10:46 at places that do tri-bureau pulls. They’ll sign the synthetic identities up for an email 0:10:49 address and for a phone number. 0:10:53 So it’s really, it’s becoming like a real identity almost in a lot of dimensions. 0:10:54 Yeah. 0:10:58 They’ll sign them up for social media accounts. So get them a Facebook or, even better, a 0:11:03 LinkedIn or a Twitter. The reason being that later on, a fraud investigator is going to 0:11:08 be looking for this person and this gives them a little bit more legitimacy. 0:11:12 That is so much, that’s so much attention paid at that early phase. 0:11:13 Absolutely.
0:11:17 So one of the things that we’ve, we noticed with a lot of the fraud rings, the traditional 0:11:23 fraud rings was a tremendous amount of technical sophistication. So highly automated, really 0:11:28 well, a really deep understanding of not just the fraud controls, but the entire technical 0:11:29 stack. 0:11:34 With this kind of fraud, it seems very manual. It seems very kind of almost like an artisanal 0:11:35 form of fraud. 0:11:37 Yeah. It’s like a bespoke, like you literally create these lives. 0:11:38 Absolutely. 0:11:47 The whole cloth. So, okay. So that’s phase one and then the birth of the fake person. 0:11:48 Yeah. 0:11:49 Exactly. 0:11:53 So then in phase two, that’s the buildup phase. This is where it takes one to two years. And 0:11:57 in this phase, the synthetic identity is acquiring credit as quickly as they can. So often this 0:12:03 means getting small credit cards, introductory credit cards, and actually making oftentimes 0:12:09 the minimum payments, but anything that shows this person has a good repayment history. 0:12:13 Now when eventually down the road, this is discovered and people are presumably going 0:12:18 back to figure out, can you start tracing those payments when you look back and start 0:12:23 understanding where that money comes from and have like understanding into the fraud from 0:12:24 that route? 0:12:30 Well, those payments often come from bank accounts in the names of the synthetic identities. 0:12:34 Isn’t there a point when you open the bank account where you need more than those three 0:12:35 pieces of information? 0:12:39 You’re supposed to collect four. It’s technically name, date of birth, SSN, and address. 0:12:40 Okay. 0:12:45 It’s called the customer identification program. And you’re supposed to verify these things 0:12:49 in a number of different ways, but because there’s simply no way of doing it, a lot of 0:12:54 times people say, “Oh, they have a credit record that’s sort of sufficient.” 0:13:00 Most of the account opening anti-fraud stuff people do is focused on identity theft, which 0:13:03 has traditionally been the big account opening from a fraud. 0:13:09 But for account opening, if you want to prevent identity theft, what you’re doing is trying 0:13:14 to see whether the person submitting the application is the same as the identity that they’re using 0:13:15 to apply for credit. 0:13:21 So as an example, if you see John Smith apply for credit using naftaliharris@gmail.com as 0:13:22 their email address. 0:13:23 Problem. 0:13:28 Yeah, problem. Exactly. It’s probably not John Smith doing it. It’s probably naftaliharris. 0:13:35 But if you see John Smith applying for credit with johnsmith@gmail.com, then it looks fine. 0:13:40 But what if it’s actually naftaliharris that made John Smith and made John Smith at gmail.com? 0:13:44 Let’s go back to the life cycle. So we talked about the birth, then we talked about the 0:13:45 development. 0:13:46 Incubation. 0:13:52 The incubation, where is the moment where they die, where you generally get killed? 0:13:56 So that’s every fraudster’s favorite part of the life cycle. It’s the bust out. Once 0:14:03 you have a synthetic identity that has been making payments, which has gotten access to 0:14:07 higher credit lines. So at the end of that incubation period, the synthetic ID has a 0:14:11 credit score over 700 or 750 plus, or even less than 800. 0:14:12 Pretty good. 0:14:13 Yeah. 0:14:14 Yeah, they look great. 0:14:15 Yeah. 
0:14:19 And every bank and lender, especially in today’s low-rate environment, wants to 0:14:23 throw as much money at them as they can. 0:14:27 And so in the bust out phase, fraudsters acquire as much credit as they possibly can. They max 0:14:32 out any credit card they’ve had, and all of a sudden they just stop making payments. They 0:14:35 go from your model customer to your worst one. 0:14:39 They stop paying their loans, and then what happens next? 0:14:43 So someone stops making payments, and so the bank starts pushing them through their 0:14:44 collections process. 0:14:46 So somebody starts calling. 0:14:52 Yeah, it’s usually, it’s a polite email, “Hey, John Smith. Noticed you missed your payment. 0:14:54 Could you please do that as soon as you can?” 0:14:58 And then that becomes a little bit more stringent, and then it turns into phone calls. In 0:15:02 some cases, the fraudsters will ignore it completely and vanish from the face of the 0:15:08 earth. And in that case, it’s uncollectable. In other cases, they’ll pick up the phone 0:15:14 and they’ll say, “Oh, I’m really sorry. I couldn’t make payments. I lost my job. I had 0:15:19 a hardship. Someone in my family got ill. I can’t make payments right now.” 0:15:20 And they buy some time. 0:15:24 And they buy some time, and eventually the loan gets charged off. 0:15:29 Why does this not, at that moment, trigger when you suddenly, your behavior suddenly 0:15:35 changes and you take a big loan? There are all sorts of legitimate reasons for that kind 0:15:41 of sudden big loan. But why is that not automatically getting flagged just for a little check at 0:15:42 that point? 0:15:46 To the earlier point, right? There’s a very big interest to grow your creditor base, to 0:15:51 grow the base of people you’re loaning money to. And in that process, friction is generally 0:15:57 frowned upon, right? It’s a risk determination. Some of these organizations, they’ve built 0:16:00 risk models that feel comfortable enough about the validity of this identity, and they make 0:16:06 kind of the business decision to take a risk on extending credit to them. And it’s probably 0:16:09 one of those things where they need to make some adjustments to that risk model. 0:16:14 So that’s, I would say that there’s probably some perfectly rational process-driven reason 0:16:19 why this is happening. Fraud, like most of these kinds of criminal enterprises, is very 0:16:24 much a game of cat and mouse. And this is just sort of the mouse finding a way around the 0:16:25 cat in this instance. 0:16:29 So where in the life cycle do you guys try and intervene? Like, how do you look at this 0:16:33 life cycle? And where do you think is the weak point? And with what kind of tools? 0:16:37 The places where they really are experts are on the U.S. credit system. They understand 0:16:44 that very deeply, honestly better than probably a lot of people who have that as their careers. 0:16:50 They know who does a tri-bureau credit pull. They know how to get through the KYC processes 0:16:56 at different organizations. They know who is weak at the beginning. And so, at a high 0:17:01 level, the way we actually solve this problem is we have a team of risk analysts that manually 0:17:06 review transactions, looking for fraud, investigating cases, deeply trying to understand individual 0:17:11 fraud transactions, and understanding what is new in the fraud world.
0:17:16 And then on the other hand, we have a sister team of technologists, so engineers, machine 0:17:20 learning engineers, data scientists, who are taking the insights and the labels from the 0:17:25 risk operations team and using those to build productionized machine learning models 0:17:28 that actually can detect this sort of fraud in real time. 0:17:32 It almost sounds like a detective agency on one side, and then building the tech on top 0:17:33 of the knowledge. 0:17:38 So, I mean, a lot of the tech is based on the fact that we understand synthetic fraud 0:17:42 extremely well. Different kinds of products naturally fall in one part or another of that life cycle, 0:17:48 so like a high-limit rewards credit card from a top 10 card issuer. Those will tend to get 0:17:53 hit towards the end of that process a little bit before the bust out. And so, in that case, 0:17:58 we have more history through which to actually identify an application as synthetic. 0:17:59 Right. 0:18:06 But we also work with card issuers that are trying to give cards to immigrants or to young 0:18:11 people even as early as in college. And there, we’re really playing at sort of the very beginning 0:18:14 during phase one or the very beginning of phase two to differentiate between those real 0:18:16 people and those fake people. 0:18:21 A big thing that we do is around clustering, connecting together applications that come 0:18:26 from the same fraud ring. So, for this form of synthetic fraud, most of it comes from 0:18:33 organized crime rings, and $100,000 per identity is great, but if you want to make a business 0:18:40 out of it, the fraudsters are a lot more ambitious. And so, they make a number of these different 0:18:43 synthetic identities and incubate all of them at the same time. 0:18:45 Oh my gosh, it sounds like the Matrix that way. 0:18:50 Yeah, it’s a lot of fake people. We’ve seen them be so ambitious as to actually make 0:18:51 families. 0:18:53 So, they’ll have like a mother… 0:18:56 But only families of lendable ages. 0:19:02 Exactly. So, they’ll be like a mother and father. So, they’ll have the same last name 0:19:07 with birthdays that are a couple years apart. And they’ll have like five kids, all of them 0:19:12 are in their early 20s or something like that, address history that’s shared at different 0:19:15 points, and they try to make the ages staggered and stuff like that. 0:19:20 It’s like scripting a story. So, you’ve seen that more than once? 0:19:24 We’ve seen a number of such families, quote-unquote, created. Internally, we call it the Keeping 0:19:29 Up with the Joneses approach because the first time we saw this, the last name was 0:19:30 Jones. 0:19:32 You know, a family that commits fraud together, stays together. 0:19:36 We need like a cymbal, ba-dum tss. 0:19:40 Thank you. I’ll be here all week. I was going to suggest we call this a fraudcast. 0:19:41 Yeah. 0:19:42 There you go. 0:19:49 Another good one. So, what are some of the other types of fraud rings that you guys see? 0:19:54 We oftentimes see people who allegedly have no relationship with each other who are sharing 0:20:00 address history at some point. And it’s really interesting what causes that. So, one reason 0:20:05 this happens is that a fraudster will oftentimes reuse the same address or for that matter 0:20:08 the same phone number or email address if they’re lazy.
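For readers who want a concrete picture of the clustering idea described in this exchange (linking applications that share an address, phone, or email, and grouping them into candidate rings), here is a minimal sketch. The data, field names, and the union-find approach are illustrative assumptions on my part, not Sentilink's actual pipeline.

```python
# Hypothetical sketch: link credit applications that share an address or email,
# then group them into connected components as candidate fraud rings.
# All data and field names below are made up for illustration.
from collections import defaultdict

applications = [
    {"id": 1, "name": "John Jones", "address": "12 Oak St",  "email": "a@x.com"},
    {"id": 2, "name": "Mary Jones", "address": "12 Oak St",  "email": "b@x.com"},
    {"id": 3, "name": "Sam Smith",  "address": "99 Elm Ave", "email": "b@x.com"},
    {"id": 4, "name": "Pat Brown",  "address": "7 Pine Rd",  "email": "c@x.com"},
]

parent = {app["id"]: app["id"] for app in applications}

def find(x):
    # Union-find root lookup with path compression.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Index applications by each shared attribute value, then link them.
by_attr = defaultdict(list)
for app in applications:
    by_attr[("address", app["address"])].append(app["id"])
    by_attr[("email", app["email"])].append(app["id"])

for ids in by_attr.values():
    for other in ids[1:]:
        union(ids[0], other)

# Collect components; anything with more than one application is a candidate ring.
rings = defaultdict(list)
for app in applications:
    rings[find(app["id"])].append(app["id"])

print([members for members in rings.values() if len(members) > 1])
# -> [[1, 2, 3]]  (apps 1-3 are transitively linked by a shared address and email)
```

A real system would obviously weigh many more signals than two fields, but the transitive-linking step is the part that turns "50 strangers who share history" into one investigable cluster.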
0:20:13 But during the incubation period, one of the ways in which fraudsters boost up someone’s 0:20:19 credit quite a bit is by purchasing authorized user trade lines. That’s when you give a credit 0:20:25 card to your spouse or one of your kids. So, like when you’re younger, sometimes your parents 0:20:29 will give you a credit card. The credit card, it actually is in the name of your parents 0:20:32 and they’re the ones that are actually responsible for making the payments. 0:20:36 But what a lot of people don’t realize is that that credit card will oftentimes show 0:20:41 up on the recipient’s credit report. So, if you’re a kid and your parent gives you a credit 0:20:45 card, which they’re responsible for, it’ll end up on your credit report. And that’s sort 0:20:51 of what all the major card issuers had historically thought was the point of having an authorized 0:20:57 user card. It’s to usually within a family or at most friends or maybe employees or something 0:21:04 like that. But actually, you’ll find hundreds of these, hundreds of these marketplaces that 0:21:08 let you purchase or sell a high-limit credit card that you have. 0:21:10 And that’s legitimate? 0:21:13 It’s not, but it is as far as I know, legal. 0:21:18 Whoa. So, you sell your ability to borrow to somebody else? I mean, it sounds like such 0:21:19 a bad idea. 0:21:25 The recipient won’t actually get the card. The card will show up on their credit report, 0:21:31 but the card actually won’t get sent to them. And the purpose of it actually is essentially 0:21:36 credit score arbitrage. If you have a high-limit $20,000 credit card that you’ve had since 0:21:41 2005, it looks really good when it shows up on somebody else’s credit report and they’re 0:21:42 willing to pay for it. 0:21:49 So fraudsters who are very prolific about buying and selling these authorized user cards will 0:21:53 oftentimes have shared addresses. And the reason the addresses are shared is that multiple 0:21:57 of these synthetic identities at one point or another bought the same authorized user 0:22:03 credit card. Our technology can detect this and realize that these people, 50 of them 0:22:07 throughout the United States, who should have no relationship to each other nonetheless 0:22:08 have shared history. 0:22:12 What’s the weirdest thing you’ve seen besides the Joneses? 0:22:19 So we saw one case where the fraudster actually had taken two totally different people and 0:22:23 mashed their identities together. And one of the identities that was mashed together was 0:22:29 someone that was actually in prison for murder. So that person, if they ever get out, might 0:22:30 be pretty upset about this. 0:22:36 So it’s like half identity fraud, half synthetic, like a kind of weird Hollywood mashup. Like 0:22:40 you take two movies and splice them together with lazy storytelling, basically. 0:22:46 One that I thought was just really amusing. And we saw a fraud ring that had so many identities 0:22:53 in it that the way they kept track of which identity had which SSN is actually included 0:23:00 the last four of the SSN in the email addresses of the synthetic identities. So lots of people 0:23:07 have, you know, Naftali Harris and then monthday@gmail.com or a lot of people have, you know, Naftali 0:23:14 Harris, year of birth@gmail.com. These fraudsters actually use Naftali Harris last four of SSN 0:23:15 at gmail.com. 0:23:16 Wow. 0:23:19 And they did this for all several hundred of their identities. 0:23:21 So that was an immediate first signal. 
0:23:26 Yeah. And essentially the identities all looked very cookie cutter to us as though somebody 0:23:31 was following directions for how to create a synthetic identity. They had something that 0:23:38 worked. They all used the same original institution as their first inquiry. They all were structured 0:23:44 the same way. They all had first name, last name, last four of SSN@gmail.com was the one 0:23:49 that they used. Everything about them was sort of similar even though none of the information 0:23:51 was overlapping in that case. 0:23:57 So, you know, when we looked at this, people use the SSN4 in their email address. Almost 0:24:01 everyone who did that was fraudulent, but there were some that were not. And some people 0:24:05 just didn’t realize that you’re not supposed to put the last four of your SSN in your email 0:24:10 address. I think most of us realize that, but, you know, some people don’t. 0:24:13 Yeah. That’s another tip for our listeners. If you’re doing that change of email right 0:24:14 now. 0:24:18 So you look for patterns. You look for clustering. Are there other hallmarks that you look for 0:24:21 that you guys are paying attention to? 0:24:26 It’s a lot around the consistency of the history. Synthetic identities have histories 0:24:31 that are not really cohesive. So we’ll do things like look at state-by-state migration 0:24:37 patterns. So it’s pretty common for people to move from Florida to Georgia. It’s a lot 0:24:42 less common for people to move from Florida to Alaska. Obviously it does happen. And apologies 0:24:47 to whoever’s listening and did just that. But statistically, it’s there are certain 0:24:52 patterns that are more or less likely. So we’ll look at when SSNs were issued and then 0:24:56 when and where those were issued and see if they match up with someone’s actual credit 0:25:01 history. We’ll look at where they’ve been moving, how fast that’s happening. It’s pretty 0:25:07 rare for someone to be in a, have a residential home in a new state every, you know, one or 0:25:12 two months. It’s just not very frequent. So we’ll look for a lot of things around cohesiveness 0:25:13 of the identity. 0:25:14 And weird outliers. 0:25:20 I think there’s a really interesting salient point here that’s being made, which is that 0:25:27 kind of the first two generations of large-scale consumer fraud were mostly about technical 0:25:31 weaknesses, underlying technology weaknesses, lack of two-factor authentication, inability 0:25:37 to secure endpoints, right? It was very kind of software-driven or computer breach-driven. 0:25:42 This is actually a business process hack or a hack of sort of existing broken business 0:25:46 process. Yeah. You know, essentially it’s social engineering and scale. 0:25:50 So in some ways, it sounds terrible to say, but it kind of feels a little bit like a 0:25:55 victimless crime because you’re not stealing money from another person. You’re stealing 0:25:56 it from this like institution, right? 0:25:57 The funny thing about that is… 0:25:58 I know that’s not true. 0:26:04 Having worked at institutions that had lots of things attempted to be stolen from them. 0:26:07 Yeah. Like can you talk about how that impacts the whole? 0:26:12 Yeah, absolutely, right? Losses of these nature, of this nature go directly against 0:26:17 the bottom line of the corporation, right? 
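As a rough illustration of the consistency checks just mentioned, here is a small sketch of two heuristic features: the last four digits of the SSN appearing in the email address, and an implausibly fast sequence of cross-state moves. The thresholds, field names, and example data are assumptions for illustration only, not anyone's production rules.

```python
# Two toy consistency heuristics inspired by the conversation above.
from datetime import date

def ssn4_in_email(ssn: str, email: str) -> bool:
    """Flag applications whose email local part contains the SSN's last four digits."""
    return ssn[-4:] in email.split("@")[0]

def implausible_migration(address_history, max_state_moves_per_year: int = 3) -> bool:
    """Flag identities that change state of residence unrealistically often.

    address_history: list of (move_in_date, state) tuples, oldest first.
    """
    if len(address_history) < 2:
        return False
    moves = sum(
        1
        for (_, prev_state), (_, next_state) in zip(address_history, address_history[1:])
        if prev_state != next_state
    )
    years = max((address_history[-1][0] - address_history[0][0]).days / 365.25, 1e-9)
    return moves / years > max_state_moves_per_year

# Example usage with fabricated data:
print(ssn4_in_email("123-45-6789", "naftali.harris6789@gmail.com"))  # True -> suspicious
print(implausible_migration([
    (date(2019, 1, 1), "FL"),
    (date(2019, 3, 1), "AK"),
    (date(2019, 5, 1), "GA"),
    (date(2019, 7, 1), "CA"),
]))  # True -> four states in six months
```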
Losses like these translate directly into 0:26:21 the financial performance of the stock, and these are the kinds of things that shareholders 0:26:26 and board members and anyone with fiduciary responsibility want to tackle as 0:26:30 quickly as possible because reducing losses in these kinds of categories can translate 0:26:35 into meaningful movement of stock, especially if you’re talking about a billion to $2 billion. 0:26:41 That’s not trivial. So usually the way that these start to materialize is that this will 0:26:46 translate into higher costs associated with borrowing for legitimate customers. So these 0:26:50 expenses, they’re not going to get eaten by the corporation. They’re going to get probably 0:26:55 pushed out in the forms of new fees or higher interest rates to people opening new accounts. 0:27:00 It’s going to translate probably into more internal controls, more expense on the back 0:27:05 end to start validating some of these transactions to do more verification, and we’re going to 0:27:10 pay for it. And it’s going to be maybe a tenth of a percent, maybe a fifth of a percent, 0:27:13 but it’s going to start to drive up costs of borrowing for consumers, and that’s usually 0:27:14 how it turns out. 0:27:19 There are actually two other sorts of ways in which certain groups are victims. So one 0:27:24 of them is that synthetic identities look like people that are new to credit, and those 0:27:29 populations, the legitimate populations there are often young people and immigrants. 0:27:33 Oh, so it’s making it harder for all the people who need credit the most. 0:27:38 Exactly. So it makes banks a lot less comfortable lending to immigrants and a lot less comfortable 0:27:42 lending to young people, or even just people that decided they didn’t need credit for 0:27:46 a long time. A lot of people will say, “I have no reason I should get a credit card and get 0:27:51 trapped in debt,” until they decide they might want a mortgage, and it makes it harder 0:27:54 for those kinds of people to acquire credit because they look like they might not be a 0:27:55 real person. 0:28:00 We did a podcast before about sort of different areas of cybercrime and different geographic 0:28:03 concentrations of different kinds of fraud. Is there a geographic concentration or is 0:28:07 there a type of fraudster that tends to gravitate towards this kind of fraud? 0:28:13 Yeah, a lot of this form of fraud is geographically concentrated. So we see a lot from Southern 0:28:20 California, a lot nowadays from the Atlanta region, a lot from South Florida. 0:28:24 And is that just because people get good at it and then the organization gets bigger? 0:28:26 Or they’re telling their friends, it’s like Amway or something. 0:28:27 Yeah. 0:28:28 Yeah. 0:28:29 A lot of it is organized crime. 0:28:33 Typically, the way we’ve seen these illicit criminal industries develop is that they start 0:28:38 off as sort of what you could think of as sort of familial clusters, right? Small groups 0:28:43 of individuals that figure out a neat trick, share it among a couple friends, perhaps locally, 0:28:46 which is why you’re seeing geographic concentration. 0:28:51 And then that information gets distributed more broadly, and other more professionalized 0:28:54 career-type criminals start to move in. 0:28:59 And as an industry develops, you’ll get sort of a one-stop shop, right? A group of individuals 0:29:03 that do, soup to nuts, this kind of fraud. Specific tasks now will start to get broken 0:29:08 up.
So you’ll be able to probably go buy these identities on the dark web. There are probably 0:29:12 places that are actually farming them, developing them, and then selling them to other parts 0:29:16 of the organization. And then you’ll get specific groups that are focusing on kind of the bust-out 0:29:18 rings and those sorts of things. 0:29:20 The industrialization of synthetic fraud. 0:29:24 I would suspect that we’re either in that phase or we’re moving towards it. We’re seeing 0:29:28 sort of that hockey stick growth of a new industry, right? And it’s just kind of the criminal 0:29:33 variant of it. And so as that starts to ramp up, it’s going to be interesting. So I am 0:29:38 not aware of any large scale arrests of people involved in this kind of activity. I’m interested 0:29:43 if you know whether any of the regulators have said anything about synthetic fraud or are interested 0:29:44 in looking at it. 0:29:48 You know, that’s one of the really interesting things. Synthetic fraud right now is a huge 0:29:54 money laundering issue, but a totally underappreciated one. If you look at the regulations around 0:30:02 KYC, so specifically the laws that require this, they really contemplated identity theft 0:30:08 and did not contemplate synthetic fraud almost at all. Everyone’s assumption for a really 0:30:14 long time has been that identities that are used to apply for credit are real. And as we’ve 0:30:17 discovered over the last couple of years, that’s really not the case. 0:30:21 So the banks are starting to understand it and noticing it and getting new tools to try 0:30:26 and notice it. When the banks catch this and they stop it, do they then alert the authorities? 0:30:28 Do people try and pursue this at all? 0:30:34 No, the first instinct of banks is to try to have it not happen again. And they’re not 0:30:42 quite as focused on having law enforcement step in and apprehend the people doing it. 0:30:46 What would be the tipping point for that to have to happen if it becomes this big industrial 0:30:49 loss? So it’s dollars, right? Arrests typically 0:30:55 happen towards the end of the life cycle of something like this. And so as it gets professionalized, 0:31:00 as you see kind of the industrialization of this sort of activity, regulators will start 0:31:05 to notice, law enforcement will start to notice. There may already be active investigations, 0:31:10 we don’t know, but they’ll start to kind of move against these sorts of organizations 0:31:16 as large scale criminal organizations that are engaged in things that may be drugs, may 0:31:21 be terrorism, could be things that are life-threatening. They’re always looking for new conduits for 0:31:25 money laundering. So sometimes what happens is that money or some of that activity will 0:31:31 find its way into some of these channels as a way to clean and rinse some of these funds. 0:31:33 And that’ll also draw the attention of law enforcement. 0:31:35 And then they really have to pay attention. Interesting. 0:31:39 Absolutely. You don’t necessarily see criminals from other forms of crime moving into this 0:31:44 sort of crime. So you won’t see racketeers or you won’t see narcotics traffickers like 0:31:46 quitting their day jobs and deciding to do synthetic fraud. 0:31:47 It’s the specialist. 0:31:52 Exactly. But they will sort of give money to people to run it through these systems 0:31:57 to clean it for a fee, right? And that’s usually where you start to see the real professionalization.
0:32:00 That’s where it starts spreading through the criminal system. 0:32:04 And then you start to see the cases come and you’ll see arrests made. And that’s usually 0:32:05 how these things start to get rolled up. 0:32:09 Are there sort of fundamentally new human behaviors that you’re noticing? Or is it the 0:32:15 same fundamental criminal behavior, but just manifesting itself in new and different ways? 0:32:20 I think that’s actually a really interesting point here about all of this. And I think 0:32:26 most of the fraud discussions and just broadly a lot of security issues we have in general, 0:32:29 it all comes back to that kind of earlier discussion about like the social security 0:32:33 number that, you know, if you look at your social security card, it says this is not 0:32:37 to be used for identification, right? Like this number should mean nothing 0:32:41 to you. I mean, it’s almost like Monty Python, right? Like we’ve built all these things on 0:32:46 something that said, don’t make me the Messiah. And we kind of did that. And then as a country, 0:32:50 we’ve sort of refused to meaningfully consider any kind of national level identity or identity 0:32:55 management. And so you have the proliferation of a lot of these issues. And that’s 0:32:59 sort of the really fascinating thing about almost all the fraud discussions. 0:33:05 So if there is this huge kind of foundational crack in all these systems that we’ve built 0:33:09 up, which makes it feel like a house of cards almost with this missing kind of giant 0:33:14 verification piece at the bottom, how do you get at the heart of that problem? 0:33:18 So I think one thing that Joel mentioned earlier was the sort of cat and mouse nature of a 0:33:22 lot of fraud. We want to go a step beyond that. There are many organizations out there, 0:33:28 even beyond financial services, that are verifying identities as part of their business. 0:33:35 So every major bank and lender does this, but so do online marketplaces like Lyft or Airbnb. 0:33:41 So do also retailers. So it’s something that you’re constantly having to do. Yeah, you probably did 0:33:45 this a couple of times even today. Yeah. And one thing that we’ve observed is that these 0:33:52 organizations with respect to customer identification don’t really work together, despite the fact 0:33:54 that it’s fundamentally the same problem they’re solving, like figure out if someone 0:33:59 is who they say they are and if they actually exist. All these organizations are fighting 0:34:04 the same fraudsters and they’re verifying the same 300 million Americans. So the way 0:34:10 this really should work is the government should step in and make a sort of national 0:34:17 ID, I think, to really solve this. One that does have printed on it: you should use this. 0:34:20 There are web standards for how to do this, and cryptography has advanced quite a bit, and there 0:34:25 are ways of doing this. I don’t think we’re going to see the U.S. government step in and 0:34:29 do this. And so we’re building it. Thank you so much for joining us on the A16Z podcast.
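To make the closing point about cryptographic identity a bit more concrete, here is a purely illustrative sketch of a signed identity credential that a relying party (say, a bank) could verify against a trusted issuer's public key. This is not Sentilink's product or any proposed government standard; it assumes the third-party Python `cryptography` package and a made-up credential format.

```python
# Toy signed identity attestation using an Ed25519 keypair.
# Requires: pip install cryptography  (an assumption, not part of this podcast)
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An issuer (imagine a national identity authority) holds the signing key;
# relying parties hold only the corresponding public key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

credential = json.dumps(
    {"subject": "did:example:1234", "name": "Jane Doe", "birth_year": 1990},
    sort_keys=True,
).encode()
signature = issuer_key.sign(credential)

def verify_credential(public_key, credential_bytes, sig) -> bool:
    """Return True only if the credential was signed by the trusted issuer."""
    try:
        public_key.verify(sig, credential_bytes)
        return True
    except InvalidSignature:
        return False

print(verify_credential(issuer_public_key, credential, signature))                  # True
print(verify_credential(issuer_public_key, credential + b"tampered", signature))    # False
```

The contrast with today's system is the point: a forged name, date of birth, and SSN combination passes because nothing is cryptographically bound to an issuer, whereas a tampered credential above fails verification immediately.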
Synthetic fraud—yes, it’s a thing: a new evolution of consumer fraud that’s been emerging in financial services, to the tune of $1-$2B a year.
In this episode of the a16z Podcast, Naftali Harris, co-founder and CEO of Sentilink, which builds technology to detect and stop synthetic fraud, talks with a16z’s Hanne Tidnam and operating partner for information security Joel de la Garza all about what this new kind of fraud is.
Where did this new form of fraud come from, and why is it on the rise? Who are true victims here (hint: it’s not the Joneses… or maybe it is!). And what is the fundamental security issue really at the heart of it all? The conversation covers the fascinating life cycle of this long con: how these “synthetic” identities get made, incubated, and finally busted out… and some of the wildest stories (and art of storytelling!) behind the strangest fraud rings we’ve seen.
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/ digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
0:00:05 The content here is for informational purposes only, should not be taken as legal business 0:00:10 tax or investment advice, or be used to evaluate any investment or security and is not directed 0:00:14 at any investors or potential investors in any A16Z fund. 0:00:17 For more details, please see a16z.com/disclosures. 0:00:21 Hi, and welcome to the A16Z podcast. 0:00:22 I’m Frank Chen. 0:00:28 Today, I’m here with Carnegie Mellon’s Professor Tom Mitchell, who has been involved with machine 0:00:30 learning basically his entire career. 0:00:34 So I’m super excited to have this conversation with Tom, where he can tell us a little bit 0:00:37 about the history and where all of our techniques came from. 0:00:40 And we’ll spend time talking about the future, where the field is going. 0:00:45 So Carnegie Mellon’s been involved in sort of standing up the fundamental teaching institutions 0:00:50 and research institutions of the big areas, computer science, artificial intelligence 0:00:52 and machine learning. 0:00:56 So take us back to the early days, when you and Newell and Jeff Hinton are teaching this 0:00:57 class. 0:00:58 What was your curriculum like? 0:00:59 Like what were you teaching? 0:01:02 Pretty different, I imagine, than what we teach undergrads today. 0:01:03 That’s right. 0:01:05 Well, at the time, this was the 1980s. 0:01:13 So artificial intelligence at that point was dominated by what we would call symbolic methods. 0:01:19 Where things like formal logic would be used to do inference. 0:01:24 And much of machine learning was really about learning symbolic structures, symbolic representations 0:01:25 of knowledge. 0:01:31 But there was this kind of young whippersnapper, Jeff Hinton, who had a different idea. 0:01:41 And so he was working on a book with Rumelhart and McClelland that became the very well-known Parallel 0:01:47 Distributed Processing book that kind of launched the field of neural nets. 0:01:49 And they were, if I remember, psychologists, right? 0:01:58 Yeah. Jay McClelland is a psychologist here at CMU; Rumelhart, kind of a neuroscientist. 0:02:01 And more, he was a very broad person. 0:02:07 And Jeff, so the three of them were kind of the rebels who were taking things off in a 0:02:09 different paradigm. 0:02:14 The empire wanted us to do research on knowledge representation and inference and first-order 0:02:15 logic. 0:02:20 I remember as an undergrad, I took this computer-aided class that John Etchemendy wrote called 0:02:24 Tarski’s World, where we learned all about first-order logic. 0:02:25 What could you prove? 0:02:26 What could you not prove? 0:02:30 And so that’s what the establishment, quote, unquote, was teaching. 0:02:37 And then Jeff was the rebel off in neural network land, and he gets his reprise later. 0:02:40 So take us back to the world of knowledge representation, because I’m actually seeing 0:02:44 a lot of startups these days who are trying to bring back some of these techniques to 0:02:49 complement deep learning, because there are well-known challenges with deep learning. 0:02:52 Like, we’re not encoding any priors, we’re learning everything for the first time, we 0:02:56 need tons of labeled data sets to make progress. 0:03:00 And so take us back to the days of knowledge representation. 0:03:04 What were we trying to solve with that set of techniques, and how might we use them today?
0:03:13 So back in the ’80s and the ’90s, and I have to say that some of the really senior people 0:03:20 in the field were totally devoted to this paradigm of logical inference, logical representations. 0:03:27 People like John McCarthy, for example, were very strong proponents of this, and really 0:03:34 essentially just saw that reasoning is theorem-proving, and therefore if we’re going to get computers 0:03:38 to do it, that’s what we have to do. 0:03:42 There were some problems with that, and there still are. 0:03:48 One thing that I remember from back then that was an example was the banana in the tailpipe 0:03:50 problem. 0:03:55 These logical systems were used to reason to do things like, how would you plan a sequence 0:03:57 of actions to achieve a goal? 0:03:59 Like, how would you get from here to the airport? 0:04:05 Well, you’d walk to your car, you’d put the key in, turn the car on, you’d drive out of 0:04:10 the parking lot, get on the interstate, go to the airport exit, et cetera. 0:04:12 But what if there’s a banana in the tailpipe? 0:04:17 Even back then, before it became a meme in Beverly Hills Cop, we were worried about 0:04:19 the banana in the tailpipe. 0:04:20 That’s right. 0:04:25 And the point of the banana in the tailpipe is there are an infinite number of other things 0:04:31 that you don’t say when you spin out a plan like that. 0:04:39 And any proof, if it’s a proof, really, is going to have to cover all those conditions, 0:04:43 and that’s kind of an infinitely intractable problem. 0:04:48 You couldn’t encode enough to do the inference you needed for your plans to be successful. 0:04:49 Right. 0:04:56 And so one of the big changes between the ’80s and 2019 is that we no longer really think 0:05:03 in the field of AI that inference is proving things, instead it’s building a plausible 0:05:05 chain of argument. 0:05:10 And it might be wrong, and if it goes wrong, if there is a banana in the tailpipe, you’ll 0:05:12 deal with it when it happens and when you figure it out. 0:05:13 Right. 0:05:18 So we move from certainty and proof to sort of probabilistic reasoning, right, Bayesian 0:05:20 techniques started becoming popular. 0:05:21 Right. 0:05:25 And so around 19– in the late ’90s, in fact. 0:05:31 So if you look at the history of machine learning, there’s an interesting trajectory where in 0:05:38 maybe up to the mid ’80s, things were pretty much focused on symbolic representations. 0:05:43 Actually, if you go back to the ’60s, there were the perceptron, but then it got swallowed 0:05:50 up by the end of the ’60s by symbolic representations and trying to reason that way and trying to 0:05:52 learn those kind of symbolic structures. 0:05:59 Then when the neural net wave came in around the late ’80s, early ’90s, that started competing 0:06:02 with the idea of symbolic representations. 0:06:09 But then in the late ’90s, the statisticians moved in and probabilistic methods became 0:06:11 very popular. 0:06:16 And at the time, there was this– if you look at this history, you can’t help but realize 0:06:22 what a social phenomenon technology advances in sciences and technology. 0:06:24 People influencing each other at conferences. 0:06:25 Right. 0:06:29 They were shaming them into adopting a new paradigm. 0:06:36 And so one of the slogans, or one of the phrases you kept hearing when people started working 0:06:41 on probabilistic, statistical probabilistic methods, they would never call them that. 
0:06:48 They would have called them, instead, principled probabilistic methods, just to kind of shine 0:06:53 the light on the distinction between neural nets, which are just somehow tuning a gazillion 0:06:57 parameters, and the principled methods that were being used. 0:07:03 And so that became really the dominant paradigm in the late ’90s and kind of remained in charge 0:07:12 of the field up through till about 2009, 2010, when now, as everybody kind of knows, deep 0:07:18 networks made a very serious revolution showing that they could do all kinds of amazing things 0:07:20 that hadn’t been done before. 0:07:21 Yeah. 0:07:26 We really are living in a golden age here in deep learning and neural network land. 0:07:29 But let’s go back to the original sort of rebel group. 0:07:35 This is Jeff Hinton hanging out in the shadow of sort of first order logic and saying, “No, 0:07:36 this is going to work.” 0:07:40 I think they were loosely inspired by the architecture of the brain. 0:07:41 Is that– 0:07:42 Definitely. 0:07:43 Definitely. 0:07:49 As for the kinds of arguments, Jerry Feldman was one of the people who gave some of these arguments. 0:07:55 He said, “Look, you recognize your mother in about 100 milliseconds. 0:08:03 Your neurons can’t switch state faster than a few milliseconds. 0:08:10 And so it looks like at most the chain of inference that you’re doing to go from your 0:08:16 retina to recognize your mother can only be about 10 deep, just from the timing.” 0:08:17 Oh, fascinating. 0:08:21 So it was an argument of sort of how long it took to recognize your mother. 0:08:22 Right. 0:08:23 And then how slow your neurons are, right? 0:08:26 Because they’re basically, these are biochemical processes, right? 0:08:27 Right. 0:08:28 Fascinating. 0:08:31 So really a computational efficiency argument. 0:08:37 And therefore, Jerry would say, “There must be a lot of stuff happening in parallel. 0:08:41 It must be a very wide chain of inference if it’s only 10 layers deep.” 0:08:45 And then he says, “Look at the brain, look at visual cortex.” 0:08:46 Yeah. 0:08:47 You got it. 0:08:51 And so neuroscientists at this time were making progress in understanding the structure of 0:08:55 neurons and how they connected to each other and how they formed connections, and those 0:08:58 connections could change strength over time, right? 0:09:02 All mediated by chemical interactions. And the computer science community was inspired by 0:09:03 this. 0:09:04 Definitely. 0:09:12 And the level of abstraction at which the computational neural nets met up with the 0:09:19 real biological neural nets was not a very detailed level, but where they kind of became 0:09:28 the same was this idea of distributed representations, that in fact it might be a collection of hundreds 0:09:35 or thousands or millions of neurons that simultaneously were firing that represent your mother instead 0:09:37 of a symbol. 0:09:38 Right. 0:09:39 Right. 0:09:46 It’s a completely different notion of what it even means to represent knowledge. 0:09:51 And really, one of the most exciting things that has come out of the last decade of research 0:10:00 in neural and deep networks is a better understanding, although we still don’t fully understand it, of 0:10:07 how these artificial neural networks can learn very, very useful representations.
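A quick back-of-the-envelope version of Feldman's timing argument, using the round numbers quoted in the conversation (about 100 ms to recognize a face, and roughly 10 ms per synaptic step, the latter an assumed figure within the "few milliseconds" range he mentions):

\[
\text{depth of the inference chain} \;\lesssim\; \frac{\text{recognition time}}{\text{time per synaptic step}} \;\approx\; \frac{100\ \text{ms}}{10\ \text{ms}} \;=\; 10\ \text{sequential steps.}
\]

Whatever computation produces recognition must therefore be shallow and massively parallel, which is the intuition behind wide, layered networks of simple units.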
0:10:14 And for me, a simple example of that, that in a sentence summarizes it, is we have neural 0:10:22 networks now that can take as input an image of photograph and output a text caption for 0:10:23 that photograph. 0:10:28 What kind of representation must be in the middle of that neural network in order to 0:10:35 actually capture the meaning well enough that you can go from a visual stimulus to the equivalent 0:10:43 textual content, that it’s really, it must be capturing a very basic core representation 0:10:45 of the meaning of that photograph. 0:10:46 Yeah. 0:10:50 And one of my favorite things about the brain, which is otherwise this very sort of slow 0:10:54 computer, right, if you just look at neuron speeds, is that not only can they do this, 0:10:59 but they can actually use this, the representation they’re deriving to actually inform our actions 0:11:01 and our plans and our goals, right? 0:11:05 So not only is it like this picture has a chair in it, but like I can sit in that chair. 0:11:06 I can simulate sitting in that chair. 0:11:10 I think like that chair is going to support my weight, and all of these things happen 0:11:14 in like milliseconds, despite the fact that the basic components of the brain are very 0:11:15 slow. 0:11:16 Yeah. 0:11:17 It’s an amazing thing. 0:11:21 In fact, now that you mentioned it, I have to tell you, a half of my research life these 0:11:29 days is in studying how the human brain represents meaning of language. 0:11:32 We use brain imaging methods to do this. 0:11:39 And in one set of studies, we put people in a fMRI scanner, and we showed them just common 0:11:49 nouns like automobile, airplane, a knife, a chair, and so forth. 0:11:56 And we would get a picture literally with about three millimeter resolution of the three-dimensional 0:12:02 neural activity in their brain as they think about these different words. 0:12:06 And we’re interested in the question of all kinds of fundamental questions like what do 0:12:08 these representations look like? 0:12:12 Are they the same in your brain and my brain? 0:12:17 Given that they don’t appear instantaneously, by the way, it takes you about 400 milliseconds 0:12:22 to understand a word, if I put it on the screen in front of you. 0:12:24 What happens during that 400 milliseconds? 0:12:30 How do these representations evolve and come to be? 0:12:36 And one of the most interesting things we found, we studied this question by training 0:12:46 a machine learning system to take as input an arbitrary noun and to predict the brain 0:12:49 image that we will see if a person reads that noun. 0:12:56 Now, we only had data for 60 nouns at that time, so we didn’t train it on every noun 0:12:57 in the world. 0:13:00 We trained it on 60. 0:13:05 In fact, what we did was we trained it only on 58, so we could hold out two nouns that 0:13:06 we hadn’t seen. 0:13:12 And then we would test how well it could extrapolate to new nouns that you had never seen by showing 0:13:18 it the two held out nouns and having it predict the images. 0:13:21 Then we’d show it two images and we’d say, “Well, which of those is strawberry and which 0:13:23 of those is airplane?” 0:13:26 And it was right 80% of the time. 0:13:29 So, you could actually predict essentially brain state, right? 0:13:31 I’m going to show you a strawberry. 0:13:35 Let me predict the configuration of your neurons and who’s lighting up and who’s not. 
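A toy sketch of the leave-two-out evaluation Mitchell describes: fit a linear map from noun features to (synthetic) brain images on 58 of 60 nouns, predict images for the two held-out nouns, and score whether the predictions pair up with the correct images rather than the swapped assignment. Everything here, including the random stand-in data and the plain least-squares fit, is an assumption for illustration; it is not the study's actual features, images, or model.

```python
# Leave-two-out, two-alternative matching evaluation on fabricated data.
import numpy as np

rng = np.random.default_rng(0)
n_nouns, n_features, n_voxels = 60, 25, 500

X = rng.normal(size=(n_nouns, n_features))          # semantic features per noun (fabricated)
true_map = rng.normal(size=(n_features, n_voxels))  # hidden feature-to-voxel mapping
Y = X @ true_map + 0.5 * rng.normal(size=(n_nouns, n_voxels))  # noisy "brain images"

def predicts_correct_pairing(train_idx, test_idx):
    # Fit a linear model on the 58 training nouns.
    W, *_ = np.linalg.lstsq(X[train_idx], Y[train_idx], rcond=None)
    pred = X[test_idx] @ W
    i, j = test_idx
    # Is the straight assignment (pred[0]->noun i, pred[1]->noun j) closer than the swap?
    straight = np.linalg.norm(pred[0] - Y[i]) + np.linalg.norm(pred[1] - Y[j])
    swapped = np.linalg.norm(pred[0] - Y[j]) + np.linalg.norm(pred[1] - Y[i])
    return straight < swapped

pairs = [[i, j] for i in range(n_nouns) for j in range(i + 1, n_nouns)]
all_idx = np.arange(n_nouns)
accuracy = np.mean([
    predicts_correct_pairing(np.setdiff1d(all_idx, pair), pair) for pair in pairs
])
print(f"leave-two-out matching accuracy: {accuracy:.2f}")  # chance level would be 0.50
```

The published work used learned semantic features and similarity scoring rather than this simple Euclidean comparison, but the hold-out-two-and-match structure is the same.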
0:13:42 So then we had a model that we trained with machine learning that captured something about 0:13:44 representations in the brain. 0:13:50 We used that to discover that the representations are almost identical in your brain and mine. 0:13:56 We could train on one set of people and decode what other people were thinking about. 0:14:02 And we also found that the representations themselves are grounded in parts of the brain 0:14:05 that are associated with perception. 0:14:10 So, if I give you a word like “peach,” the parts of your brain that code the meaning 0:14:16 of that are the ones associated with the sense of taste, with manipulation, because sometimes 0:14:19 you pick up a peach, and with visual color. 0:14:21 Yeah, that is fascinating. 0:14:25 Well, it’s so exciting to think that the brain structures are identical across people 0:14:30 because what everybody wants is sort of that, remember that scene in The Matrix where you 0:14:34 sort of like you’re jacked straight into your brain and you’re like, “Oh, now I know Kung-Fu.” 0:14:35 Right? 0:14:36 Like this is what we want, right? 0:14:42 We want to learn new skills and sort of new facts and new inferences just like loading 0:14:43 an SD card, right? 0:14:48 And so, the fact that we’re sort of converging to the same structures in the brain at least 0:14:50 makes that theoretically possible. 0:14:51 We’re a ways away from that. 0:14:52 We’re a ways away from that. 0:14:53 But I’m with you. 0:14:54 Yeah, awesome. 0:15:01 So, another area that interests you is finding biases, and why don’t we start by distinguishing 0:15:06 sort of two types of biases, because when you hear the word “bias” today in machine learning, 0:15:10 you’re mostly thinking about things like, “Gee, let me make sure my data set is representative 0:15:13 so I don’t draw the wrong conclusion from that,” right? 0:15:18 So the classic example being here that I don’t do good recognition on people with darker skin 0:15:23 because I didn’t have enough of those samples in my data set, and so the bias here is you’ve 0:15:29 selected a very small subset of the target data set that you want to cover and make predictions 0:15:31 on, and therefore your predictions are poor, right? 0:15:33 So, that’s one sense of bias. 0:15:37 But there’s another sense of bias, statistical bias, which is kind of what you want out of 0:15:38 algorithms. 0:15:39 So, maybe talk about this notion. 0:15:40 Yeah, sure. 0:15:45 And this is really a very important issue right now because now that machine learning 0:15:52 is being used in practice in many different ways, the issue of bias really is very important 0:15:54 to deal with. 0:16:00 You gave an example; another example would be, for instance, you have some historical 0:16:06 loan applications and which ones were approved, but maybe there’s some bias, say people 0:16:12 of one gender receive fewer loan approvals just because of their gender, and if that’s 0:16:18 inherent in the data and you train a machine learning system that’s successful, well, it’s 0:16:22 probably going to learn the patterns that are in that data. 0:16:29 So, the notion of what I’ll call social bias, socially unacceptable bias, is really this 0:16:36 idea that you want the data set to reflect the kind of decision making that you want 0:16:39 the program to make if you’re going to train the program. 0:16:44 And that’s kind of the common sense notion of bias that most people talk about.
0:16:50 But there’s a lot of confusion in the field right now because bias is also used in statistical 0:16:55 machine learning, really with a very different meaning. 0:17:02 We’ll say that an algorithm is unbiased if the patterns that it learns, the decision 0:17:10 rules that it learns for approving loans, for example, reflect correctly the patterns that 0:17:12 are in the data. 0:17:18 So that notion of being statistically unbiased just means the algorithm’s doing its job of 0:17:21 recapitulating the decisions that are in the data. 0:17:28 The notion of the data itself being biased is really an orthogonal notion. 0:17:32 And there’s some interesting research going on now. 0:17:39 So for example, typically when we train a machine learning system, say to do loan approval, 0:17:45 a typical thing would be you can think of these machine learning algorithms as optimization 0:17:46 algorithms. 0:17:54 They’re going to tune maybe the parameters of your deep network so that they maximize 0:18:00 the number of decisions that they make that agree with the training examples. 0:18:06 But your training examples might have this kind of bias, where maybe females receive fewer loan 0:18:09 approvals than males. 0:18:13 There’s some new work where people say, well, let’s change that objective that we’re trying 0:18:22 to optimize: in addition to fitting the decisions that are in the training data as well as possible, 0:18:29 let’s add another constraint, that the probability of a female being approved for a loan has 0:18:33 to be equal to the probability of a male being approved. 0:18:38 And then subject to that constraint, we’ll try to match as many decisions as possible. 0:18:44 So there’s a lot of really technical work right now trying to understand if there 0:18:51 are ways of thinking more creatively, more imaginatively about how to even frame the 0:18:58 machine learning problem so that we can take what might be biased datasets but impose constraints 0:19:01 on the decision rules that we want to learn from those. 0:19:03 Yeah, that’s super interesting. 0:19:08 We’re sort of envisioning the world we want rather than the data of the world that we 0:19:13 came from, because we might not be happy with the representation, the representativeness 0:19:16 I guess, of the data that we came from. 0:19:20 And it’s causing people to look a lot more carefully at even the very notion of what 0:19:25 it means to be biased and what it means to be fair. 0:19:29 Are there good measures for fairness that the community is driving towards, or do we 0:19:34 not really have a sort of an objective measure of fairness? 0:19:35 We don’t. 0:19:41 We don’t have an objective measure, and there’s a lot of activity right now discussing 0:19:48 that, including people like our philosophy professor, David Danks, who is very much part 0:19:54 of this discussion, and social scientists, technology people, all getting together. 0:20:01 And in fact, there are now a couple of conferences centered around how to introduce fairness and 0:20:07 explainability and trust in AI systems. 0:20:11 This is a very important issue, but it’s not only technical. 0:20:16 It’s partly philosophical, social, trying to get our heads around what it is 0:20:17 that we really want. 0:20:21 That’s a beautiful thing about AI and about computers in general. 0:20:27 It forces you to be way more precise about what you want when you are getting a computer to do 0:20:29 it.
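A toy sketch of the loan-approval idea described above: fit the historical decisions as well as possible while pushing the model toward equal approval rates across groups. The data, the group labels, the single income feature, and the penalty weight are all synthetic stand-ins, and the real work mentioned in the conversation frames this as a hard constraint rather than the simple penalty used here:

import numpy as np

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)              # synthetic group label (0 or 1)
income = rng.normal(size=n) - 0.5 * group       # group 1 has lower recorded income
# Historical approvals carry the bias: at the same income, lower odds for group 1.
p_hist = 1 / (1 + np.exp(-(1.5 * income - 1.0 * group)))
y = (rng.random(n) < p_hist).astype(float)

X = np.column_stack([np.ones(n), income])       # the model only sees income

def sigmoid(z):
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

def fit(lam, steps=3000, lr=0.3):
    """Logistic regression whose loss adds lam * (approval-rate gap)^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_fit = X.T @ (p - y) / n            # gradient of the usual log-loss
        gap = p[group == 1].mean() - p[group == 0].mean()
        s = p * (1 - p)
        dgap = (X[group == 1] * s[group == 1, None]).mean(axis=0) \
             - (X[group == 0] * s[group == 0, None]).mean(axis=0)
        w -= lr * (grad_fit + 2 * lam * gap * dgap)
    return w

for lam in (0.0, 25.0):
    p = sigmoid(X @ fit(lam))
    gap = p[group == 1].mean() - p[group == 0].mean()
    agree = ((p > 0.5) == (y > 0.5)).mean()
    print(f"penalty {lam}: approval-rate gap {gap:+.3f}, agreement with historical decisions {agree:.0%}")

On this synthetic data the tradeoff the conversation points at shows up directly: as the penalty grows, the approval-rate gap shrinks while agreement with the biased historical decisions drops.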
0:20:36 And so even if you just think about self-driving cars: when I was 16, I took a test 0:20:39 and I was approved to be a human driver. 0:20:43 They never asked me questions about whether I would swerve to hit the old lady or swerve 0:20:44 to hit the baby carriage. 0:20:45 Right. 0:20:47 The trolley problem was not on the DMV test. 0:20:48 Exactly. 0:20:50 But it’s on the test for the computers. 0:20:51 Right. 0:20:53 Yeah, it’s really interesting that we sort of hold computers to a different standard 0:20:55 because we’re programming them, right? 0:20:59 We can be explicit and we can have them sort of hit goals or not, right? 0:21:04 And those are design decisions rather than sort of, you know, bundled into a brain, right, 0:21:05 of a person. 0:21:06 Yeah. 0:21:12 And so I think of, you know, look, banks historically have hired loan officers. 0:21:16 Those loan officers may or may not be fair, right, according to the definitions that we’re 0:21:21 sort of talking about now, but we kind of hold those humans, those human loan officers, to 0:21:24 a different standard than we would hold the algorithms. 0:21:25 That’s true. 0:21:27 I mean, who knows which way it will go in the future. 0:21:33 If we continue to have human loan officers and some computer loan officers, will we 0:21:39 up the constraints on the humans so that they pass the same qualifications? 0:21:45 Or will we drop the constraints on the computers so that they’re no more titrated than the 0:21:46 people? 0:21:47 Yeah. 0:21:48 That’s fascinating, right? 0:21:49 Who is the master? 0:21:50 Who is the student, right? 0:21:55 The intuitive thing is let’s make the humans the models that we train our systems to approach, 0:21:56 right? 0:21:58 Like human competence being the goal. 0:22:00 The other way to think about it is no, right? 0:22:04 We can actually introduce constraints like, you know, equal number of men and women or 0:22:07 equal number of this ethnicity versus another ethnicity. 0:22:12 And our algorithms as a result of those constraints could be more fair than humans. 0:22:14 And so we invert it, right? 0:22:16 Let’s get humans up to that level of impartiality. 0:22:17 Right. 0:22:21 Like maybe the algorithm can end up teaching the human how to make those decisions in a 0:22:26 different way so that the fairness outcome you want is really achieved. 0:22:27 Yeah. 0:22:28 That’s fascinating. 0:22:32 And it’s great that sort of not just computer scientists are involved in this conversation, 0:22:36 but also the ethicists and the social scientists who are weighing in. 0:22:40 So that gives me hope that, you know, sort of smart people across disciplines are really 0:22:41 grappling with this. 0:22:43 So we get the outcomes that we want. 0:22:49 Well, sort of a related topic to this, right, sort of the social impact of AI. 0:22:56 You recently co-wrote a paper with MIT’s Erik Brynjolfsson about the workplace implications. 0:23:00 And I think you also testified on Capitol Hill about what AI is going to do to jobs. 0:23:04 So why don’t you talk a little bit about what you guys found in the paper? 0:23:10 Well, this actually started with Erik and I co-chairing a National Academy study on 0:23:18 automation in the workforce, which was a two-year affair with a committee of about 15 experts 0:23:24 from around the country who were economists, social scientists, labor experts, technologists. 0:23:30 And in that study, I think we learned so much.
0:23:35 It turns out when you really dig into the question of what’s going to be the impact 0:23:39 of AI and automation on jobs, 0:23:46 you can’t escape noticing that there are many different forces that automation and technology 0:23:48 are exerting on the workforce. 0:23:50 One of them, of course, is automation. 0:23:55 Toll booth operators are going away, do not sign up to be a toll booth operator. 0:24:02 But in other kinds of jobs, instead of the job going away, there will be a shift, a redistribution 0:24:05 of the tasks. 0:24:07 So take, for example, a doctor. 0:24:12 A doctor has multiple tasks, for instance, they have to diagnose the patient, they have 0:24:17 to generate some possible therapies, they have to have a heart-to-heart discussion with 0:24:25 the patient about which of those therapies the patient elects to follow, and they have 0:24:26 to bill the patient. 0:24:31 Now computers are pretty good at billing, but they’re getting better 0:24:35 at diagnosis and they’re getting better at suggesting therapies. 0:24:41 For example, just in the last couple of years, we’ve seen computers that are at the 0:24:49 same level, if not a little better than doctors, at things like diagnosing skin cancer and 0:24:50 other kinds of diseases. 0:24:55 Radiology, tissue biopsies, all of these things, we’re using these computer 0:24:59 vision techniques to get very good performance. 0:25:02 So what does this mean about the future of doctors? 0:25:07 Well, I think what it means is automation happens at the level of the individual tasks, 0:25:09 not at the job level. 0:25:16 If a job is a bundle of tasks, like diagnosis, therapy, heart-to-heart chat, what’s going 0:25:24 to happen is computers will provide future doctors with more assistance, to some degree, 0:25:30 hopefully automating billing, but some amount of automation or advice giving. 0:25:36 But for other tasks, like having that heart-to-heart chat, we’re very, very far from when computers 0:25:40 are going to be able to do anything close to that. 0:25:44 Good bedside manner is not going to be a feature of your RoboDoc anytime soon. 0:25:53 And so what you find, if you look into this, and Erik and I recently had a paper in Science 0:25:59 with a more detailed study of this, but what you find is that the majority of jobs are 0:26:04 not like toll-booth operators, where there’s just one task, and if that gets automated, 0:26:05 that’s the end of the job. 0:26:15 The majority of jobs, like podcast interviewer, or computer, or professor, or doctor, really 0:26:16 are a bundle of tasks. 0:26:23 And so what’s going to happen is that, according to our study, the majority, more than half 0:26:29 of jobs, are going to be influenced, impacted by automation, but the impact won’t be 0:26:30 elimination. 0:26:34 It’ll be a redistribution of the time that you spend on different tasks. 0:26:43 And we even conjecture that successful businesses in the future will, to some degree, be redefining 0:26:49 what the collection of jobs is that they’re hiring for. 0:26:53 Because they still have to cover the tasks through some combination of automation and 0:27:01 manual work, but the current bundles of tasks that form jobs today might shift dramatically. 0:27:06 So the key insight is to think of a job as a bundle of tasks, and that bundle might change 0:27:13 over time as AI enters and says, well, look, this specific task I’m very good at in algorithm 0:27:16 land, and so let’s get humans to focus on other things.
0:27:19 We just need to think of them as differently bundled. 0:27:24 Well, the last topic I wanted to talk with you about, Tom, is around whether this is 0:27:27 the best time ever for AI research. 0:27:29 So we started the grand campaign, 0:27:32 some would argue, in the summer of 1956 with the Dartmouth conference. 0:27:35 And we’ve had several winters and summers. 0:27:36 Where are we now? 0:27:39 And then what are you most excited about looking into the future? 0:27:45 I think we’re absolutely at the best time ever for the field of artificial intelligence. 0:27:48 And there have been, as you say, ups and downs over the years. 0:27:55 And for example, in the late ’80s, AI was very hot, and there was great expectation 0:27:57 of the things it would be able to do. 0:28:02 There was also great fear, by the way, of what Japan was going to do. 0:28:03 Yeah. 0:28:08 This was the fifth generation computer project and the entire national policy of Japan, right? 0:28:10 Focusing on this area. 0:28:16 And so in the U.S., there was great concern that this would have a big impact, Japan would 0:28:18 take over the economy. 0:28:21 So there are some parallels here. 0:28:24 Now again, AI, it’s very popular. 0:28:27 People have great expectations. 0:28:31 And there’s a great amount of fear, I have to say, about what China and other countries 0:28:34 might be doing in AI. 0:28:44 But one really, really important difference is that, unlike in the 1980s, right now, there’s 0:28:48 a huge record of accomplishment over the last 10 years. 0:28:57 We already have AI and machine learning being used across many, many different, really economically 0:29:00 valuable tasks. 0:29:06 And therefore, I think, really, there’s very little chance that we’ll have a crash, although 0:29:11 I completely agree with my friends who say, “But isn’t AI overhyped?” 0:29:13 Absolutely, it’s overhyped. 0:29:20 But there is enough reality there to keep the field progressing and to keep commercial 0:29:25 interest and to keep economic investment going for a long time to come. 0:29:27 So you would argue, this time it really is different. 0:29:28 It really is different. 0:29:31 Because we have real working stuff to point to. 0:29:36 And over the next 10 years, we’ll have a whole lot more real working stuff that influences 0:29:38 our lives daily. 0:29:44 So as a university researcher, I look at this and I say, “Where is this going and what should 0:29:47 we be doing in the university?” 0:29:51 If you want to think about that, you have to realize just how much progress there was 0:29:52 in the last 10 years. 0:29:59 When the iPhone came out, I guess that’s 11 years ago, computers were deaf and blind. 0:30:03 When the iPhone came out, you could not talk to your iPhone. 0:30:09 This is such a weird idea, but you could not talk to your iPhone because speech recognition 0:30:11 didn’t work. 0:30:19 And now computers can transcribe voice to text just as well as people. 0:30:26 Similarly, when you pointed your camera at a scene, it couldn’t recognize with any accuracy 0:30:29 the things that were on the table in the scene. 0:30:34 And now it can do that with accuracy comparable to humans. 0:30:38 And in some visual tasks, like skin cancer detection, even better than, you know… 0:30:40 Even better than trained doctors, yeah. 0:30:41 Better than trained doctors. 0:30:44 So it’s hard to remember that it’s only been 10 years. 0:30:47 And that’s the thing about progress in AI.
0:30:53 You forget, because it becomes so familiar, just how dramatic the improvement has been. 0:30:55 Now think about what that means. 0:31:00 That means we’re really in the first five years of having computers that are not deaf 0:31:01 and blind. 0:31:08 And now think about what kinds of intelligence you could exhibit if you 0:31:13 were deaf and blind. Well, you could do game playing and inventory control. 0:31:17 You could do things that don’t involve perception. 0:31:24 But once you can perceive the world and converse in the world, there’s an explosion of new 0:31:25 applications you can do. 0:31:30 So we’re going to have garage door openers that open for you because they recognize 0:31:32 your car coming down the driveway. 0:31:37 We’re going to have many, many things that we haven’t even thought about that just leverage 0:31:43 off this very recent progress in perceptual AI. 0:31:49 So going forward, I think a lot about how I want to invest my own research time. 0:31:51 I’m interested still in machine learning. 0:31:53 I’m very proud of the field of machine learning. 0:31:55 It’s come a long way. 0:32:01 But I’m also somebody who thinks we’re only at the beginning. 0:32:05 I think if you want to know the future of machine learning, all you need to do is look 0:32:10 at how humans learn and computers don’t yet. 0:32:16 So we learn, for example, we do learn statistically like computers do. 0:32:24 My phone watches me over time and statistically, it eventually learns where it thinks my house 0:32:27 is and where it thinks my work is. 0:32:30 It statistically learns what my preferences are. 0:32:33 But I also have a human assistant. 0:32:39 And if she tried to figure out what I wanted her to do by statistically watching me do 0:32:43 things a thousand times, I would have fired her so long ago. 0:32:46 A lot of false positives and false negatives, right? 0:32:47 Right. 0:32:48 So she doesn’t learn that way. 0:32:51 She learns by having a conversation with me. 0:32:56 I go into the office and I say, “Hey, this semester I’m team teaching a course with 0:33:00 Katarina on deep reinforcement learning. 0:33:02 Here’s what I want you to do. 0:33:04 Whenever this happens, you do this. 0:33:10 Whenever we’re preparing to hand out a homework assignment, if it hasn’t been pretested by 0:33:16 the teaching assistants two days before handout, you send a note saying, ‘Get that thing pretested.’” 0:33:22 So what I do is I teach her and we have a conversation, she clarifies. 0:33:32 So one of the new paradigms for machine learning that I predict we will see in the coming decade 0:33:35 is what I’ll call conversational learning: 0:33:42 using the kind of conversational interfaces that we have, say, with our phones to allow 0:33:52 people to literally teach their devices what they want them to do instead of having the device 0:33:54 statistically learn it. 0:34:01 And if you go down that road, here’s a really interesting angle on it. 0:34:07 It becomes kind of like replacing computer programming with natural language instruction. 0:34:15 So I’ll give you an example of a prototype system that we’ve been working on together 0:34:19 with Brad Myers, one of our faculty in HCI. 0:34:25 It allows you to say to your phone something like, “Whenever it snows at night, I want 0:34:27 you to wake me up 30 minutes earlier.” 0:34:33 If you live in Pittsburgh, this is a useful app, and none of the California engineers 0:34:34 have created that app.
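As a minimal sketch, here is roughly what that rule looks like if you program it by hand today. The functions get_current_conditions and set_alarm are hypothetical stand-ins for the weather and alarm apps; the barrier discussed next is that most phone users will never write even this much:

from datetime import datetime, time, timedelta

def get_current_conditions() -> str:
    # Placeholder: would ask the weather app for its "current conditions" field.
    return "SNOW"

def set_alarm(t: time) -> None:
    # Placeholder: would update the alarm app.
    print(f"Alarm set for {t}")

usual_alarm = time(7, 0)

def nightly_check() -> None:
    # "Whenever it snows at night, wake me up 30 minutes earlier."
    if get_current_conditions() == "SNOW":
        earlier = (datetime.combine(datetime.today(), usual_alarm) - timedelta(minutes=30)).time()
        set_alarm(earlier)
    else:
        set_alarm(usual_alarm)

nightly_check()   # prints: Alarm set for 06:30:00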
0:34:40 And today, I could create that app if I took the trouble of learning the computer language 0:34:43 of the phone; I could program it. 0:34:49 But far less than 1% of phone users have actually taken the time to learn the 0:34:51 language of the computer. 0:34:56 We’re giving the phone the chance to learn the language of the person. 0:35:03 So with our phone prototype, if you say, “Whenever it snows at night, wake me up 30 minutes 0:35:08 earlier,” it says, “I don’t understand, do you want to teach me?” 0:35:12 And you can say, “Yes, here’s how you find out if it’s snowing at night.” 0:35:17 You open up this weather app right here, and where it says current conditions, if that 0:35:20 says SNOW, it’s snowing. 0:35:25 Here’s how you wake me up 30 minutes earlier: you open up that alarm app, and this number, 0:35:27 you subtract 30 from it. 0:35:36 So with a combination of showing, demonstrating, and telling by voice, we’re trying to give users 0:35:44 the opportunity to create their own apps, their own programs, with the same kind of instruction, 0:35:49 voice, and demonstration that you would use if you were trying to teach me how to do it. 0:35:50 I love that. 0:35:53 It’s sort of a natural language front end to it. We have an investment in a company called 0:35:57 IFTTT, If This Then That, which is, you can program those things, but you have to be a 0:35:59 little sophisticated. 0:36:03 You’d like to just be able to talk to your phone and have it figure out how to fill 0:36:05 the slots that IFTTT wants. 0:36:07 Exactly, and If This Then That is a wonderful thing. 0:36:14 It has a huge library of these apps that you can download, but as you say, you still have 0:36:17 to learn the language of the computer to create those. 0:36:21 We’re trying to have the computer learn the language of the person. 0:36:27 If that line of research plays out, and I believe it will this decade, we’ll be in a very different 0:36:34 world, because we’ll be in a world where instead of the elite few, less than 1% of phone users, 0:36:39 being able to program, it’ll be 99% of phone users who can do this. 0:36:45 Now, think about what that does for the whole conception of how we think about human-computer 0:36:46 interaction. 0:36:47 Yeah. 0:36:49 That’s a profound shift in society, right? 0:36:55 Just like everybody became literate, not just the priests, and look what happened to society. 0:36:56 Exactly. 0:36:57 Yeah. 0:36:59 Think about what it means for the future of jobs. 0:37:04 Right now, if you have a computer introduced as your teammate, you, the human, and the 0:37:11 computer are a team, well, the computer is frozen, and the teammate who gets to do the 0:37:16 adapting is the human, because the computer has a fixed functionality. 0:37:21 What if in that team, the human could just teach the computer how they want the computer 0:37:22 to help them do their job? 0:37:25 It would be a completely different dynamic. 0:37:27 It would change the future of work. 0:37:28 Yeah. 0:37:29 That’s fascinating. 0:37:33 And then I think another thread that you’re super interested in on the future of machine 0:37:37 learning is something around never-ending learning, so tell us about that. 0:37:38 Sure. 0:37:44 Again, I just go back to what do humans do that computers don’t yet, and computers are 0:37:49 very good at, say, learning to diagnose skin cancer. 0:37:54 You give it some very specific tasks and some data, but if you look at people, people learn 0:37:57 so many things.
0:37:59 We learn to do all kinds of things. 0:38:00 You can tap the hands. 0:38:02 You can do double entry bookkeeping. 0:38:03 Right? 0:38:04 Right. 0:38:05 You can add numbers. 0:38:11 You can play music, all kinds of things, and a lot of those things we learn over time 0:38:17 in a kind of synergistic way, in a staged sequence. 0:38:21 First you learn to crawl, then you learn to walk, then you learn to run, then you learn 0:38:28 to ride a bike, and it wouldn’t make any sense to do them in the other sequence, because 0:38:31 you’re actually learning to learn. 0:38:37 When you acquire one skill, it puts you in a position that you now are capable of learning 0:38:38 the next skill. 0:38:44 So I’m very interested in what it would mean to give a computer that kind of capability 0:38:51 to do learning for days and weeks and years and decades. 0:38:59 We have a project we call our Never-Ending Language Learner, which started in 2010, running 0:39:03 24 hours a day trying to learn to read the web. 0:39:04 Fascinating. 0:39:07 And there’s something about sort of the longitudinal aspect, right? 0:39:09 We started it in 2010 and it just keeps on going. 0:39:13 So it’s not just like transfer learning from one model to another. 0:39:15 It’s like long running. 0:39:20 It’s long running and it has many different learning tasks. 0:39:25 It’s building up a knowledge base of knowledge about the world. 0:39:33 But to keep it short, I’ll just say we’ve learned so much from that project about how 0:39:39 to organize the architecture of a system so that it can invent new learning tasks as it 0:39:46 goes, so that it can get synergy once it learns one thing to become better at learning another 0:39:47 thing. 0:39:53 And how, in fact, very importantly, it can use unlabeled data to train itself instead of 0:39:57 requiring an army of data labellers. 0:40:04 So I just think this is an area that’s relatively untouched in the machine learning field. 0:40:11 But looking forward, we’re already seeing an increasing number of embedded machine learning 0:40:14 systems in continuous use. 0:40:22 And as we see more and more of those in the internet of things and elsewhere, the opportunity 0:40:29 for learning continuously for days and weeks and months and years and decades is increasingly 0:40:30 there. 0:40:36 We ought to be developing the ideas, the concepts of how to organize those systems to take advantage 0:40:37 of that. 0:40:38 Yeah. 0:40:41 I love both of these design approaches in that they’re sort of inspired by humans, and 0:40:50 sort of humans are mysteriously good at learning and adapting, and they sort of shine a spotlight 0:40:53 on where machine learning algorithms are not yet. 0:40:56 So it’s such a fertile area to look for inspiration. 0:41:00 Well, Tom, it’s been a great delight having you on the podcast. 0:41:03 Thanks for sharing about the history and the future of machine learning. 0:41:06 We can tell you’re still fired up after all of these decades. 0:41:11 And so it’s a great delight just to see somebody who has committed, basically, their 0:41:17 life to understanding the mysteries of learning, and we wish you many good decades to come as you 0:41:19 continue working on it. 0:41:20 Thanks. 0:41:21 Thanks for doing this podcast. 0:41:25 I think it’s a great thing to get a conversation going and it’s a great contribution to do 0:41:25 that.
How have we gotten to where we are with machine learning? Where are we going?
a16z Operating Partner Frank Chen and Carnegie Mellon professor Tom Mitchell first stroll down memory lane, visiting the major landmarks: the symbolic approach of the 1970s, the “principled probabilistic methods” of the 1980s, and today’s deep learning phase. Then they go on to explore the frontiers of research. Along the way, they cover:
How planning systems from the 1970s and early 1980s were stymied by the “banana in the tailpipe” problem
How the relatively slow neurons in our visual cortex work together to deliver very speedy and accurate recognition
How fMRI scans of the brain reveal common neural patterns across people when they are exposed to common nouns like chair, car, knife, and so on
How the computer science community is working with social scientists (psychologists, economists, and philosophers) on building measures for fairness and transparency for machine learning models
How we want our self-driving cars to have reasonable answers to the Trolley Problem, but no one sitting for their DMV exam is ever asked how they would respond
How there were inflated expectations (and great social fears) for AI in the 1980s, and how the US concerns about Japan compare to our concerns about China today
Whether this is the best time ever for AI and ML research and what continues to fascinate and motivate Tom after decades in the field
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/ digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
0:00:04 The content here is for informational purposes only, should not be taken as legal business 0:00:10 tax or investment advice or be used to evaluate any investment or security and is not directed 0:00:14 at any investors or potential investors in any A16Z fund. 0:00:18 For more details, please see a16z.com/disclosures. 0:00:21 Hi everyone, welcome to the A6NZ podcast. 0:00:22 I’m Sonal. 0:00:26 I’m here today with a very special guest visiting Silicon Valley, the former prime minister 0:00:32 of the United Kingdom, Mr. Tony Blair, who now runs an institute for global change working 0:00:35 with governments, policymakers and others all around the world. 0:00:39 Also joining us, we have Andreessen Horowitz managing partner Scott Kupor, who has a new 0:00:44 book just out called Secrets of Sand Hill Road: Venture Capital and How to Get It. 0:00:49 And given that startups are drivers of economic growth and innovation, Kupor also often weighs 0:00:54 in on various policy issues, especially those that affect the flow of capital, people and 0:00:55 ideas around the world. 0:00:58 And that’s the focus and theme of this episode. 0:01:02 It’s more of a hallway-style conversation where we invite our audience to sort of eavesdrop 0:01:04 on internal meetings and convos. 0:01:09 We discuss the intersection of governments and technology and where policy comes in, 0:01:13 focusing mainly on the mindsets that are required for all of this. 0:01:17 But then we do also suggest a few specific things we can do when it comes to supporting 0:01:21 tech change for the many, not just for the few. 0:01:22 Welcome Tony. 0:01:23 Thank you. 0:01:24 Did you ask me to call you that? 0:01:25 Thank you. 0:01:26 Can everyone know? 0:01:27 He said it was okay. 0:01:28 And Kupor, welcome. 0:01:29 Thank you. 0:01:30 So let’s just get started. 0:01:34 I think the context is that there’s so much discussion right now about tech in the context 0:01:35 of inequality. 0:01:40 One of the points of view that I have, particularly coming from a background where my family came 0:01:44 from India, et cetera, is that it’s also very democratizing. 0:01:49 And a lot of people can do new things in better ways because of technology. 0:01:53 But I think the big question, the question I think we care about today is how do we bring 0:01:58 more people into the system and make sure that tech benefits everyone? 0:02:03 The first thing I would say from the political perspective is that technology is essentially 0:02:04 an empowering and enabling thing. 0:02:08 So I regard it as benign, but it’s got vast consequence. 0:02:10 So the question is how do you deal with the consequence? 0:02:14 How do you access the opportunities and mitigate its risks and disbenefits? 0:02:17 So that is, I think, the right framework to look at it. 0:02:24 But because it’s accelerating in its pace of change and because the change is so deep, 0:02:28 and I look upon this technological revolution as like the 21st century equivalent of the 0:02:32 19th century industrial revolution, it’s going to transform everything. 0:02:37 So I think the fundamental challenge is that the policy makers and the change makers are 0:02:39 not in the right dialogue with each other. 0:02:44 And this is where misfortune will lie if you end up with bad regulation or bad policy and 0:02:49 where the tech people kind of go into their little huddle, because I say this with great 0:02:55 respect, but you come to this Silicon Valley and it’s like walking into another planet, 0:02:56 frankly.
0:02:57 Yes, that’s actually really interesting. 0:03:00 I’m personally offended by that comment. 0:03:03 Now I think the difficulty is that, yes, you’re right, it’s very empowering. 0:03:07 On the other hand, it’s actually quite frightening to people because you kind of all understand 0:03:11 it and the rest of the world doesn’t quite understand it. 0:03:16 And as far as they do understand it, they find it somewhat dystopian. And look, 0:03:19 I was actually sitting with some people from my old constituency in the north of England 0:03:24 a few months back and I said to them, I wonder what’s going to happen when we have driverless 0:03:28 cars, and their attitude was, it’s never going to happen. 0:03:34 And the role of a politician is to be able to, in a sense, articulate to the people those 0:03:37 changes and then fit them into a policy framework that makes sense. 0:03:41 And that’s the worry, because if the politicians don’t understand it, they’ll fear it. 0:03:43 If they fear it, they’ll try and stop it. 0:03:46 You articulated the vision that we’ve always had, which is we’ve always invested around 0:03:48 this theme called software is eating the world. 0:03:53 It’s exactly what you describe, which is technology no longer kind of sits in its own box. 0:03:57 It really is the case that technology will permeate almost every industry over time. 0:04:01 I think that’s where the big change is happening now, is it used to be that technology was a 0:04:02 piece of the puzzle. 0:04:04 Now every company is a technology company. 0:04:05 Yeah, exactly. 0:04:07 So that is the kind of board on which people are playing. 0:04:11 So the issue, I think, is this, is how do you get the structured dialogue between the 0:04:13 change makers and the policy makers? 0:04:16 What would you say the number one thing if you could give advice to entrepreneurs in 0:04:21 the Valley that they should do differently to engage this kind of a framework that you’re 0:04:22 describing? 0:04:26 My advice would be stop looking at your own narrow interest in what you’re doing and 0:04:31 understand you’ve got a collective interest in making the world of policy and politics 0:04:33 understand the technology. 0:04:34 What’s going to happen? 0:04:41 How you get, A, the right system of regulation and, B, how you allow government to enable 0:04:43 these transformative changes. 0:04:44 Yes. 0:04:45 Well, I actually have a question for both of you. 0:04:48 Today is the 30th anniversary of the web, the World Wide Web. 0:04:52 And there was just a Google Doodle this morning and a note from Tim Berners-Lee, his original 0:04:55 memo, “Information Management: A Proposal.” 0:04:59 The question I have is that a lot of people would argue that the best technologies can 0:05:04 develop when you don’t try to, A, a priori predict the consequences, because you cannot, 0:05:09 they’re complex adaptive systems, and, B, there was an environment of quote permissionless 0:05:13 innovation that allowed the web to thrive, because the original makers may have foreseen 0:05:17 some apps, but the whole point is that the innovation is what allowed it to thrive. 0:05:20 So I’d love to hear from both of you on how to balance that perspective. 0:05:21 So I agree with that. 0:05:25 I think though what’s different is we used to be able to compartmentalize technology. 0:05:29 It was a piece of software that you used at work to help you be more productive.
0:05:33 But if technology really is going to be part and parcel of everything, then I think it 0:05:37 changes the nature of how we think about that responsibility, because it is regulated industries 0:05:42 in many cases that have been largely immune over time from technology in a way that appears 0:05:43 to be different today. 0:05:45 So I would say that then there are two questions that derive from that. 0:05:48 One is how do you make regulation intelligent? 0:05:53 How do you make it abide by the public interest or enhance the public interest, but at the 0:06:00 same time not dampen that creative and in a way entrepreneurial drive behind the development 0:06:01 of new ideas? 0:06:06 And then secondly, what are the ways that government should be working with those that 0:06:08 are going to be impacted by technology? 0:06:11 If you’re in the car industry, it’s going to be a huge change, right? 0:06:15 I mean, if you get these driverless cars, it’s going to change obviously jobs. 0:06:17 It’s going to change insurance. 0:06:22 It’s going to change the method of production, what you produce, probably change the concept 0:06:23 of car ownership in some way. 0:06:24 Absolutely. 0:06:25 It might even reshape entire cities. 0:06:26 Everything will be impacted by it. 0:06:27 Exactly. 0:06:32 I am fascinated by the potential of technology to allow African nations and governments to 0:06:36 circumvent some of the legacy problems we have within our systems. 0:06:41 And that goes for everything from basic healthcare and education through to how you help agricultural 0:06:45 smallholders develop a better yield, cooperate better together, and link up better with the 0:06:46 market. 0:06:49 And in fact, one thing that’s happening in Africa today is there are applications of 0:06:52 tech that are growing up in interesting ways. 0:06:55 So my point is, you’ve got all these different facets. 0:07:00 And yet at the moment, the curious thing is, if you were to go to virtually any European 0:07:06 country or if you were to come here and say, okay, name the top four issues, where would 0:07:09 technology be in that list? 0:07:10 Would it be at the top of the list is what you’re saying? 0:07:12 No, I think it wouldn’t be in the list. 0:07:16 Kupor, you spent a lot of time actually in your role as a partner in front of Congress 0:07:20 and various entities giving testimonies about policy, and curious for your take on this. 0:07:21 Yeah. 0:07:25 There is a concern I often hear from entrepreneurs, which is, how do we know if we go there? 0:07:29 How does that not just bring us into the fold of regulation and therefore have negative 0:07:32 consequences versus, you know, we talk about things out here. 0:07:35 Sometimes you do things you ask for permission later is a better strategy. 0:07:36 Right. 0:07:37 Ask for forgiveness. 0:07:38 Ask for permission. 0:07:39 Yeah. 0:07:40 I completely get that. 0:07:44 That’s why I think it’s got to be a big, it’s got to be done in a big way from the collectivity 0:07:47 rather than individual people going, because of course you’re absolutely right, what will 0:07:50 happen is that the entrepreneurs think, okay, if I go and say, I’ve got the following five 0:07:53 problems that I can see in this technology I’m developing, they’re going to regulate 0:07:54 it away. 0:07:55 Yeah. 0:08:01 I think the hard question will be you’re getting people and companies that wield enormous power. 0:08:06 I mean, not just the big tech players, but the others as well.
0:08:11 So I think one of the things that in a sense, it’s my question to you is, how do you manage 0:08:18 to get into that dialogue with policymakers where, you know, these very powerful people 0:08:23 recognize that in the end, you know, however powerful they are, they are not more important 0:08:25 than the public interest. 0:08:28 Part of what we believe our role is, is to help provide, you know, visibility. 0:08:31 I wouldn’t, I don’t want to say education, because I think politicians are very well 0:08:35 educated and certainly well meaning, but to connect the divide between, in our case, DC 0:08:36 and Silicon Valley. 0:08:40 And so we will often reach out to regulators, legislators and help them understand this is 0:08:43 what’s happening from an innovation perspective and therefore these are things that you might 0:08:45 want to anticipate that you need to think about. 0:08:49 So autonomous driving is a great example, right, which you mentioned, is in order to 0:08:53 make that work in the United States, we probably need forward-looking governments to say there 0:08:58 are test zones or areas where we might have almost regulatory free zones for testing purposes, 0:09:01 right, that have proper supervision, but to enable something that otherwise might not 0:09:03 exist ahead of its time. 0:09:07 Obviously, you’ve got specific micro issues, I mean, they can be big in their impact like 0:09:11 driverless cars, but there is a specific thing, they’ve got specific issues attached to them. 0:09:18 But where does the tech sector go if it wants to engage on, you know, the bigger macro question 0:09:24 of how do you redesign government, by the way, as well as individual sectors, because 0:09:27 government itself is going to have to change. 0:09:28 That organization doesn’t exist today, at least. 0:09:30 I’m not aware of where you would do that. 0:09:34 And I think the other problem with it is we have to think beyond geographic and national 0:09:39 borders on this stuff, because technology and capital are free-flowing in our society. 0:09:44 You almost need a United Nations or some kind of, you know, type of organization to convene 0:09:45 to have those discussions. 0:09:46 Yeah. 0:09:47 I would say there’s a couple of things, too. 0:09:48 There’s a couple of factors. 0:09:54 One, there are obviously lobbying entities like the NVCA, there’s the Internet Association, 0:09:56 which a lot of major companies are a part of. 0:10:02 Then there is a group of players, like there’s a group of think tanks and a middle layer, 0:10:06 and then the government agencies themselves have been soliciting testimony. 0:10:10 Kupor has actually done testimony on all kinds of topics, from CFIUS to crypto to 0:10:12 various different topics. 0:10:18 But what’s really interesting to me, especially, is there’s organizations like 18F and USDS 0:10:23 in the US government, at least, where you have technologists doing literally rotating 0:10:24 apprenticeships. 0:10:28 It’s like the rotating missions, essentially, where they go for three years and they’re 0:10:30 contributing to actually reinventing government systems. 0:10:32 Now, this is a very important addition. 0:10:33 Yes. 0:10:34 I think it is, too. 0:10:36 And what’s really amazing is that it’s got tangible impacts. 0:10:41 So a specific example is we have a huge Veterans Administration that doesn’t get great healthcare.
0:10:46 So they redesigned the VA site in order to make sure that people who have accessibility 0:10:49 issues can use the site in a friendly way. 0:10:51 There’s many more applications of the types of things they’re doing. 0:10:53 We’ve actually had them on this podcast. 0:10:54 But I think those are some avenues. 0:10:56 But to Kupor’s point, there is no single entity. 0:11:01 I will say that at Wired, I edited a big set of op-eds around the ITU, which is sort of 0:11:03 like a UN for the internet. 0:11:06 And it was during the WCIT-12 hearings, which you might recall. 0:11:11 I think Hamadoun Touré was the head of the commission, and I edited him as well. 0:11:16 And what’s fascinating to me is that there’s a lagging versus a leading approach to it, 0:11:20 because you’re sort of taking the data that’s passed, not really looking forward. 0:11:23 And that was what I saw as a big drawback when I was working with the WCIT-12 op-ed. 0:11:28 So I’m curious for your take on how do we shift it, so you are listening to those being 0:11:34 affected by technology, but with the point of view that spins it forward for future generations. 0:11:38 Because if we had listened to all the farmers in the first wave of the Industrial Revolution, 0:11:43 we may not have many of the things today, but their grandkids are benefiting from those 0:11:44 things. 0:11:45 Yeah, no, absolutely. 0:11:48 So look, I think there are two gaps that I see, and I just look at this from the side 0:11:54 of the, as it were, ordinary politician, because I think there are initiatives that are happening 0:11:58 inside government where people or departments will get it, and therefore they’ll embrace 0:12:02 it and bring in smart people to help them and so on. 0:12:04 But I think there are two sort of lacunae. 0:12:10 One is your average politician does not understand a lot about this, and that is not sort of 0:12:12 a disrespect to your average politician. 0:12:18 It’s that it’s new, it’s complicated, it takes you time to get your head around it. 0:12:22 My eldest son is in technology, and I am always trying to get him to explain blockchain to 0:12:23 me. 0:12:24 We’re big on crypto. 0:12:25 I know. 0:12:31 I remember you sent me the other day saying, “This is the idiot’s guide to cryptocurrency,” 0:12:33 and I still couldn’t understand it. 0:12:35 I’m going to send you our crypto canon. 0:12:36 We took a stab at Compound Water Resources. 0:12:37 Right. 0:12:38 But you probably shouldn’t test me on it. 0:12:39 But that is one lacuna. 0:12:44 Those people have to understand this is like the 19th century Industrial Revolution. 0:12:47 So you’ve got to get your ordinary politicians to understand it. 0:12:51 And then there’s another lacuna, which is, I think, in getting the dialogue at the top 0:12:56 level between particularly the Americans and the Europeans, because I also think it would 0:13:00 be immensely helpful if we had a more transatlantic approach. 0:13:01 I think there’s a third piece. 0:13:03 There’s an incentives problem. 0:13:07 I would imagine if you did a survey of most politicians, they would say, “My fundamental 0:13:11 role is how do I improve long-term economic growth and job sustainability for my constituents?” 0:13:15 I mean, if people kind of cut through a lot of the politics, that’s really why they think 0:13:16 they’re there. 0:13:19 Look, they want to make a better life and make a better opportunity for their constituents.
0:13:23 The problem we have, though, is because their short-term incentive program is to get reelected, 0:13:25 which I understand is a good thing from a political perspective. 0:13:29 It’s very hard for them to take that long-term view because the shorter-term opportunity 0:13:34 is to say, “Look, I really need to do no harm to my constituents and by allowing technology, 0:13:39 which might be in the short-term, displacing and unsettling to job growth and other stuff, 0:13:41 particularly for different segments of the population.” 0:13:44 It’s very hard, I would imagine, as a politician to square those two things, which is how do 0:13:49 I help my constituents understand, you know, to Sonal’s point that, yes, over a period 0:13:54 of time, it was a good idea to have industrialized farming as opposed to pure manual agrarian 0:13:55 farming. 0:13:58 But that’s an incredibly unsettling thing, particularly in the U.S. here, if every two 0:14:01 years you have to get reelected or you go find a new job. 0:14:07 By the way, this happened in the 19th century and you had whole new politics created around 0:14:08 it. 0:14:10 And I think there are two things that are important here. 0:14:16 First of all, I think the technology will, in some way, provide solutions to what is 0:14:23 a constant dilemma for an ordinary politician, which is we need to do more for our people, 0:14:27 but we can’t just keep spending more and taxing more. 0:14:31 If the technology can help unlock part of that, that is something they’re prepared to 0:14:32 go for. 0:14:37 And secondly, with most politicians, if they’re able to see this within a longer-term perspective, 0:14:42 what you say to them, “Look, we’ll help you and guide you through this process of change, 0:14:44 but in the end, it’s a beneficial change.” 0:14:47 And what I found when I was in government is some of the most difficult reforms we put 0:14:54 through, for example, around education reform, healthcare reform, we were able, in some ways, 0:15:01 with at least some people, to say this short-term difficulty is going to be worth it. 0:15:03 How did you pull that off, though? 0:15:05 Was it the education, the explanation? 0:15:07 Was it consensus building? 0:15:12 I mean, let me take a very specific example, which, of course, is under attack now, but 0:15:18 we introduced tuition fees in the UK, but my point was very, very simple, that universities 0:15:22 are going to be major engines of economic growth in the future, in particular because 0:15:26 of the link between university and technology and the development of technology. 0:15:31 And therefore, we cannot afford for UK universities not to be able to get the best talent, and 0:15:34 they’re going to have to therefore have an extended funding base. 0:15:35 They can’t get it all from government. 0:15:39 And my point is, if you get it all from government in the end, some governments will start to 0:15:43 slice it away, and you’re always hand-to-mouth as universities. 
0:15:49 And I reckon when we did that, it was very difficult, in fact, it was extremely difficult, 0:15:55 but in the end, you were able to say to people, “Look, if we want to save our position as 0:16:00 a country that, along with the US, probably has the most high-quality universities in the 0:16:02 top 50 in the world, then we’ve got to be prepared to do that.” 0:16:07 Now, some people, by the way, rejected it, and today it’s a big political issue again, 0:16:12 but you can get to at least some form of alignment between long-term and short-term. 0:16:15 It’s a fundamental rethinking of what the role of government is, quite frankly, right? 0:16:19 Which is, again, if you take the premise that the overall objective for government is to 0:16:23 create economic conditions that hopefully generate long-term economic growth and sustainability 0:16:25 for individuals and companies, then you’re right. 0:16:29 Maybe the ancillary role of government is, how do we deal with short-term issues that 0:16:32 have market dislocations for people? 0:16:35 Maybe that’s a more proper way to describe what the role of government is, in many cases. 0:16:38 I think the other thing would be, I think there’s another question for politics which 0:16:43 would be very challenging, because what would be weird is if the whole of the world is undergoing 0:16:46 this revolution and politics is just kind of staying fixed. 0:16:49 The type of people who go into politics, what happens often is people leave university, 0:16:53 they’re going to become a researcher for an MP, and then they become an MP, and then they 0:16:55 become a minister, but they have no experience of the outside world, right? 0:16:59 So that’s one, and it becomes a constraint over time, and then the types of people who 0:17:00 work in government. 0:17:04 So you were saying something about the people who’ve been brought into, say, the Veterans 0:17:06 Administration here. 0:17:12 So how do you actually open up public service and then get a greater interoperability between 0:17:15 public service and the private sector? 0:17:20 Because all of the sort of pressure certainly coming from the media has been not to allow 0:17:25 that to happen, and not to allow politicians to have anything other than they’re usually 0:17:26 just focused on… 0:17:29 Okay, let’s say we all agree, which I think we do, that there needs to be a connection 0:17:32 between all the entities working together, no question. 0:17:36 More engagement, more explanation, more understanding, thinking of consequence. 0:17:37 I think those are all table stakes. 0:17:41 The question now is, how do you then think about unintended consequences? 0:17:46 Because the story, to me, is not that bad things have bad consequences. 0:17:51 It’s that often the worst consequences come from very well-intended things. 0:17:53 And quite frankly, the perfect example that comes to mind is GDPR. 0:17:57 Yeah, to make it concrete, there’s been a, over the last several months, and I’m sure 0:18:00 probably more so in Europe as well, there’s been a number of articles talking about when 0:18:05 you look at kind of the broad impact of GDPR, essentially, it’s inured largely to the benefit 0:18:08 of the very large incumbents, which was probably not what it was intended to do. 0:18:09 Because they’ve got the resources to better handle it. 0:18:10 That’s exactly right.
0:18:14 And the analogy we have here in the States was the Dodd-Frank legislation that came out 0:18:18 of the global financial crisis, where financial institutions had to comply with a whole new 0:18:20 set of regulations. 0:18:24 What it really did here in the U.S. was, it really entrenched those incumbents very well, 0:18:27 and it made it very hard for startup financial institutions to grow. 0:18:31 It was very hard for a new institution to get a banking license for many years, in part 0:18:33 because of the regulatory cost of doing so. 0:18:35 And so, how do you balance that? 0:18:38 And maybe the answer is, look, it’s an education problem, but well-meaning politicians 0:18:43 certainly expect and intend that regulation is the appropriate way to deal with these things. 0:18:47 It does, in some cases, interfere with the overall goal of entrepreneurship and startup 0:18:48 formation. 0:18:53 That’s why I think that the attitude of the technology sector to engagement with government 0:18:54 is so important. 0:18:58 Because if you’re engaging with government saying, look, we understand there’s a massive 0:19:01 set of issues here, and we’re really going to sit down and work with you as to how we 0:19:07 get the right answer, then government’s in a position where they regard you as a partner. 0:19:11 But I think for this moment in time, a bit like, actually, the aftermath of the financial 0:19:17 crisis, government kind of regards it as, you’re looking after yourselves, but we’ve got to look 0:19:19 after the public. 0:19:21 And that’s where it leads to poor regulation. 0:19:28 I mean, poor regulation is nearly always the consequence of a failure on the regulating 0:19:31 side to really understand what’s going on. 0:19:35 And on the founder’s side, what I’m hearing is to really communicate the benefits of the 0:19:36 technology upfront. 0:19:40 And not to be so defensive that you’re just thinking all the time, how can we ward these people 0:19:41 off? 0:19:46 But here’s the thing, you can sometimes, if you have wealth, which a lot of these big 0:19:51 tech players do, and power, and you also have access, and you can go and see whoever you 0:19:57 want to see, it can sometimes mask your essential underlying vulnerability. 0:19:58 Interesting. 0:19:59 Right. 0:20:05 And your vulnerability is there comes a point when suddenly the mood flips, and it doesn’t 0:20:10 matter how much wealth and power and access you have, you’re the target. 0:20:12 At that point, everything changes. 0:20:16 So if you want to avoid that, I think it’s got to be a dialogue that’s structured, and 0:20:22 it requires not just things happening between the tech sector and government, but for people 0:20:28 like my own institute to use our sort of convening power on the political side, to say 0:20:30 to the politicians, look, let’s get our heads around this. 0:20:32 Here’s my essential challenge. 0:20:37 How do you take this technological revolution and, as a politician, weave it into a narrative 0:20:39 of optimism about the future? 0:20:40 Yes. 0:20:41 I want that too. 0:20:42 Right. 0:20:43 Yeah. 0:20:44 So what’s driving the populism is pessimism. 0:20:46 If people are pessimistic, then they look for someone to blame. 0:20:52 If people are optimistic, they look for something to aspire to, and that’s the essential difference. 0:20:54 It’s really interesting also, Sonal and I have been having this conversation, and we’ve 0:20:58 been having this conversation in the U.S.
about ESG, right, which obviously, you know, certainly… 0:20:59 Environmental social governance. 0:21:00 Right. 0:21:02 Which Europe is way ahead of the U.S. there, and Larry Fink, who’s the head of BlackRock 0:21:05 here, has written this letter, you know, appealing to CEOs, and it really goes to the same issue 0:21:09 you’re talking about, which is fundamentally, what is the role of the corporation and how 0:21:12 do corporations think about obviously enhancing value for their shareholders, but also to your 0:21:17 point recognizing that they impact constituents in many other ways, and I think that’s kind 0:21:21 of the dialogue we ought to be having with politicians, which is, look, we can create 0:21:25 a world where it’s compatible to have, you know, maximizing shareholder opportunity, 0:21:28 but also recognizing and being a part of the broader community discussion about the impact 0:21:29 on society. 0:21:35 The other thing is to recognize that when we create these things, we have some obligation 0:21:36 to share. 0:21:40 It comes out of fundamental macroeconomics, right, which is we can improve growth for 0:21:44 a country by either population growth and/or productivity growth, right? 0:21:47 Those are the two levers in theory that we can impact, and if we can frame the discussion 0:21:50 around technology, that’s a lot of where the U.S. has done well, right? 0:21:54 We’ve generally, obviously times are changing, but we’ve generally been very open to immigration 0:21:58 and thought about population growth as a way to help improve the lot for people generally, 0:22:01 and we’ve also been very open to productivity growth, right, in the form of technology and 0:22:05 automation, and if we can frame it that way, but also to your point, recognize that there 0:22:09 are going to be disintermediations along the way, and part of our responsibility is to help 0:22:14 from a training and education perspective, and even potentially the role of government 0:22:19 in subsidizing the transition from less automated to more automated society. 0:22:21 What happens to education in all of this? 0:22:23 I don’t think we have a singular point of view on it. 0:22:27 We have talked about education a lot on this podcast and shared a diversity of views, but 0:22:32 I think a couple of the high level things are that universities are huge drivers, of course, 0:22:36 as you mentioned, of innovation, and in every study of regional innovation, every innovation 0:22:43 cluster is successful because of the collaboration between universities, government, local, entrepreneurial 0:22:44 communities. 0:22:47 The other key point, however, is it’s a combination of top-down and bottom-up. 0:22:52 People who have tried top-down, industrial, planned, smart cities or things like Silicon 0:22:53 Valley never work. 0:22:56 The only bottom-up ones alone don’t necessarily work. 0:22:57 You need a combination of the two. 0:23:00 That’s the number one finding, but the second thing, and this is a big topic we talk about 0:23:05 on this podcast, is the importance of apprenticeship and a new type of education that really thinks 0:23:07 about skills-based education. 0:23:12 We have this elitist attitude that education has to be a certain way when, in fact, in 0:23:16 this day and age, especially with increased automation and the need for jobs, we might 0:23:19 want to be really thinking about very specific skills-based education. 
0:23:24 It’s actually fascinating because, in fact, my eldest son’s got a company that’s on apprenticeships 0:23:28 and technology, so that’s exactly what he does. 0:23:32 I think it’s really, really interesting because of the idea that you don’t necessarily have 0:23:33 to go to university. 0:23:38 Well, there are alternative universities coming about too, like we’re investors in 0:23:39 Udacity. 0:23:40 There’s just Lambda School. 0:23:44 There’s all these interesting types of containers where people can get what they call nano-degrees 0:23:46 or microskills or specific skills. 0:23:50 There’s so much that’s actually in play, because the point I want to raise here, this is kind 0:23:55 of an underlying theme to me, is that technology, as you pointed out, you can take an optimistic 0:23:56 view. 0:23:59 It also gives you the means to address many of the problems that we are complaining about 0:24:04 because when I think of some of the trade-offs between waiting for a government to update 0:24:12 policy, what I love is that a mass of users on a platform can essentially vote with saying 0:24:18 leave that platform, and immediately that platform is going to act the next day in a 0:24:20 way that a lawmaker cannot overnight. 0:24:22 Yes, from a political perspective. 0:24:24 You want this thing at least to have some sort of rational- 0:24:25 Of course. 0:24:26 It shouldn’t be mobbed. 0:24:31 But I think the other thing is, if you take, so a lot of what drove, for example, Brexit 0:24:35 in the UK is, apart from the immigration issue, was this idea of communities, people left 0:24:36 behind. 0:24:41 So what is it that technology would do to go into those communities and help people gain 0:24:45 better education, get connectivity to the world, because in the end, this is what it’s 0:24:46 all about. 0:24:47 If you’re not connected, you’re not really– 0:24:48 If you’re left behind. 0:24:49 Right. 0:24:55 So I think one big question is, how does the technology sector help us as policymakers 0:25:00 reach those people for whom the conversation we’ve just been having may be sort of scratching 0:25:02 their heads and thinking about what these guys are on. 0:25:03 That’s a fantastic question. 0:25:05 And actually, it’s interesting because we’re investors in NationBuilder, which is one of 0:25:10 the companies that mobilize a lot of the communities that actually organize for pro and for con 0:25:11 around these things. 0:25:14 So a quick thing, I do want to make sure we actually give answers because we’re asking 0:25:15 a lot of questions. 0:25:19 So can you both give a little bit more on what concretely we need to do? 0:25:23 So from the point of view of my institute, what we’re doing is we’re creating a tech 0:25:25 and public policy center. 0:25:29 And the idea is to bring a certain number of people who really understand the tech side 0:25:32 and a certain number of people who come from the public policy side, put them together 0:25:33 in a structured way. 0:25:35 I will kind of curate that. 0:25:43 And out of it should come what I call serviceable policy and a narrative about the future, 0:25:46 which makes sense of this technological revolution. 0:25:50 And then to link up with politicians, not just in the UK and Europe, but actually over 0:25:55 here and create a sense that this technological revolution should be at the center of the 0:25:56 political debate. 0:25:57 How do we handle it? 0:26:00 How do we, as I say, mitigate its risks and access its opportunities? 
0:26:03 So that’s one very specific thing. 0:26:07 And then I think the other thing, frankly, is just to be out there, myself and a number 0:26:13 of other people, at least of access to the airwaves, to say, guys, we’ve got to switch 0:26:14 the conversation. 0:26:18 You’ve got to put this technology question at the heart of the political debate. 0:26:22 Now the solution, some people may go to the left, some people may go to the right. 0:26:24 Some people will never be in between. 0:26:25 But make it the conversation. 0:26:29 Put it at the top four of those priorities for every country, every organization. 0:26:30 I think that’s right. 0:26:32 Fundamentally, we’re talking around the issues. 0:26:36 It’s either immigration or its income inequality or other things that drive the debate. 0:26:40 But the fundamental question is exactly that, which is how do we move forward with broader 0:26:41 economic growth initiatives? 0:26:46 So sitting here in Silicon Valley, any individual company is probably better off, quite frankly, 0:26:49 taking the break glass first and then ask for forgiveness later. 0:26:53 And so it’s, I think, the idea of having kind of solving that collective action problem 0:26:56 through a convening organization makes a lot of sense. 0:26:59 But you come to the issues of very traditional income inequality. 0:27:03 Now there is a perfectly good question as to whether you raise the minimum wage, and 0:27:04 if so, by how much? 0:27:07 And my government was coming to introduce the minimum wage in the UK, so I’m very familiar 0:27:10 with all those arguments. 0:27:17 But in the end, there is a whole other dimension to that individual, which is about the world 0:27:20 that’s changing and their place in it, and whether they’re going to have the skills and 0:27:21 the aptitude to be able to. 0:27:25 So you’re just saying completely at every level reframe that technology is at the center 0:27:26 of that. 0:27:27 Right. 0:27:31 So it’s not that you displace traditional questions of taxation and inequality, but the 0:27:38 truth of the matter is it’s going to be probably in the long term more significant for that 0:27:43 individual and for the society if this technological revolution is handled properly. 0:27:47 So if you had a debate in the UK at the moment about our healthcare system, our national 0:27:53 health service, it would be, should we spend 10 billion pounds a year more on it or 5 billion? 0:27:59 But how do you change the whole of the way we implement care for people because of technology 0:28:00 is going to have a much bigger impact? 0:28:01 I agree. 0:28:05 I guess the only thing I would add to this, because I think about this a lot, interestingly, 0:28:10 is that we treat technology like this word, this homogenous, nebulous entity. 0:28:16 And the reality is that every single instance so depends on the specific technology. 0:28:19 So my call to action, I guess, would be to think about it very specifically. 0:28:25 The way we think about AI, that’s such a broad phrase and it’s a very scary phrase that 0:28:30 suggests everything from generalized intelligence to very specific automation that gets your 0:28:33 bank account updated automatically. 0:28:34 So I think there’s two things to this. 0:28:38 One that we need to be incredibly specific about what technology we’re talking about, 0:28:39 in what context. 0:28:43 And then B, we also dial in the right degree to what we’re talking about at what point 0:28:44 because I don’t really make a difference. 
0:28:45 I completely agree with that. 0:28:50 I mean, I think the only thing I would say is right now we're actually far away from 0:28:53 even getting the specifics. 0:28:54 Yeah, you've got a good point. 0:28:55 That's fair. 0:28:59 You know, I was just saying to people in politics when we were campaigning, and they'd say, you 0:29:03 know, I'd say, right, because we campaigned in '97, right, on the slogan new Labour, 0:29:04 new Britain. 0:29:05 Right. 0:29:06 And they say, no, but it's much more complicated now. 0:29:09 I say, okay, guys, it is, it's complicated, but sometimes you need to go 0:29:10 straight at it. 0:29:11 I hear you. 0:29:15 You're saying that when you go back to your old constituency 0:29:19 in Northern England, they don't care about the specifics, they just need to have their 0:29:20 fears addressed. 0:29:22 The first thing that you need to persuade them of, you've got to say to them, guys, technology 0:29:24 is going to change the world and we've got to prepare for it. 0:29:25 They're not there yet. 0:29:29 When you get them there, then obviously in all sorts of different ways, but this is where 0:29:35 I think that the gulf that there is between the technology sector and the policymakers and 0:29:36 therefore the people is so big. 0:29:40 How do we deal with the fundamental challenges that we have, as we talked about earlier, from 0:29:44 an incentive perspective and short tenures in, you know, office and people's ability 0:29:45 to be in office? 0:29:48 Is that a conversation you think that the country and the nations are prepared to have? 0:29:50 I think so, but it's a very good question. 0:29:55 I would also say that in all of the change that's going to happen, I mean, this is a 0:29:58 whole topic for another podcast, probably with different people. 0:30:08 But how you exchange information and the validation of that information is an essential part of 0:30:12 having a democratic debate, and that is a big problem in today's world. 0:30:19 So I think it is possible to have that conversation with people, but all political conversations 0:30:25 today are extremely difficult because they happen in such a fevered environment with 0:30:31 so much polarization, and the interaction between conventional and social media makes a rational 0:30:33 debate occasionally extremely difficult. 0:30:35 With that qualification, I'd answer yes. 0:30:39 I think the better way to approach that problem is to say, how do we make the U.S. and/or Europe 0:30:45 or other places attractive to entrepreneurship and encourage people to think about the regulatory 0:30:49 framework and the economic framework as, you know, wanting to be participants in these markets 0:30:53 as opposed to the anti, you know, kind of policies we have, which is let's make it harder for 0:30:56 free flow of capital and try to stave off those opportunities. 0:31:00 It used to be that 90% of venture capital and entrepreneurship happened in the U.S., literally 0:31:05 as recently as 20 years ago, and if you look at those numbers today, it's about 50%. 0:31:09 And so the amount of kind of capital that's kind of been distributed globally and therefore 0:31:12 the amount of opportunity set distributed globally is interesting. 0:31:15 We have to think about this beyond kind of regional borders. 0:31:18 We will have talent and people that are free-flowing across geographies. 0:31:21 And so we have to think about this from a broader, you know, global initiative.
0:31:25 Well, you guys, I just want to say thank you for joining the A6NZ podcast. 0:31:26 Thank you.
with Tony Blair (@InstituteGC), Scott Kupor (@skupor), and Sonal Chokshi (@smc90)
If the current pace of tech change is the 21st-century equivalent of the 19th-century Industrial Revolution — with its tremendous economic growth and lifestyle change — it means that even though it’s fundamentally empowering and enabling, there’s also lots of fears and misconceptions as well. That’s why, argues former U.K. prime minister Tony Blair (who now has an eponymous Institute for Global Change), we need to make sure that the changemakers — i.e., technologists, entrepreneurs, and quite frankly, any company that wields power — are in a structured dialogue with politicians. After all, the politician’s task, observes Blair, is “to be able to articulate to the people those changes and fit them into a policy framework that makes sense”.
The concern is that if politicians don’t understand new technologies, then ”they’ll fear it; and if they fear it, they’ll try and stop it” — and that’s how we end up with pessimism and bad policy. Yet bad regulations often come from even the very best of intentions: Take for example the case of Dodd-Frank in the U.S., or more recently, GDPR in Europe — which, ironically (but not surprisingly) served to entrench incumbent and large company interests over those of small-and-medium-sized businesses and startups. And would we have ever had the world wide web today if we hadn’t had an environment of so-called ”permissionless innovation”, where government didn’t decide up front how to regulate the internet? Could companies instead be more inclusive of stakeholders, not just shareholders, with better ESG (environment, social, governance)? Finally, how do we ensure a spirit of optimism and focusing on leading vs. lagging indicators about the future, while still being sensitive to short-term displacements, as with farmers during the Industrial Revolution?
This hallway-style episode of the a16z Podcast features Blair in conversation with Sonal Chokshi and a16z managing partner Scott Kupor — who has a new book just out, Secrets of Sand Hill Road: Venture Capital and How to Get It, and who also often engages with government legislators on behalf of startups. They delve into mindsets for engaging policymakers; touch briefly on topics such as autonomous cars, crypto, and education; and consider the question of how government itself and politicians too will need to change. One thing's for sure: The discussion today is global, beyond both sides of the Atlantic, given the flow of capital, people, and ideas across borders. So how do we make sure globalization works for the many… and not just for the few?
image credit: Benedict Macon-Cooney
The views expressed here are those of the individual personnel quoted and are not the views of a16z or its affiliates. This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors and may not under any circumstances be relied upon when making a decision to invest in any a16z funds. PLEASE SEE MORE HERE: https://a16z.com/disclosures/
0:00:03 Hi, and welcome to the A16Z podcast. I'm Hannah. 0:00:08 This episode is all about how artificial intelligence is coming to the doctor's office, how it will 0:00:13 impact the nature of doctor-patient interactions, diagnosis, prevention, prediction, and everything 0:00:14 in between. 0:00:20 The conversation between A16Z's general partner Vijay Pande and Dr. Eric Topol, cardiologist 0:00:25 and chair of innovative medicine at Scripps Research, is based on Topol's book, "Deep 0:00:29 Medicine," and touches on everything from how AI's deep phenotyping can shift our 0:00:34 thinking from population health to understanding the medical health essence of you, how the 0:00:38 industry might respond, the challenges in integrating and introducing the technology 0:00:43 into today's system, what the doctor's visit of the future might really look like, and 0:00:47 ultimately how AI can make healthcare more human. 0:00:50 Before we talk about technology and we talk about all the things that are changing the 0:00:54 world or making huge impacts, it's interesting to just think like, "What should a doctor 0:00:55 be doing?" 0:00:56 And how do you see that? 0:01:02 That's really what I was pondering, and I really did this deep look into AI. 0:01:07 I actually didn't expect it to be this back-to-the-future story, but in many ways, I think 0:01:13 it turns out that as we go forward, particularly in a longer-term view, the ability to outsource 0:01:19 so many things with help from AI and machines, I think, is going to get us back. 0:01:20 It could. 0:01:21 It could. 0:01:24 That's a big if, to where we were back in the '70s and before. 0:01:28 What was better then was that doctors were spending much more time with us, right? 0:01:29 Exactly. 0:01:34 That gift of time, the human side, which is the center of medicine, that's been lost. 0:01:40 The big business of healthcare and all of its components like electronic records and 0:01:48 relative value units and all this stuff basically has sucked out any sense of intimacy and time. 0:01:51 And it's also accompanied by lots of errors. 0:01:58 But of course, it's not a gimme, because administrators want more efficiency, more productivity. 0:02:02 I think you put this on Twitter where it's like some kid drew like a drawing of going 0:02:07 to the doctor and the picture was the doctor with their back turned working on a computer. 0:02:12 And that is what happens too much, but yet we're talking about technology coming in. 0:02:15 So how does this all work out, that more technology means less computer? 0:02:23 I think that is kind of the fundamental problem, doctors not even making eye contact, 0:02:30 and for a child to draw that picture shows how unnerving her trip to the pediatrician was. Natural 0:02:35 language processing can actually liberate us from keyboards. 0:02:41 And so it's already being done in some clinics and even in the UK in emergency rooms. 0:02:49 And so if we keep that up and build on that, we can eliminate that whole distraction of doctors 0:02:52 and nurses and clinicians being data clerks, because this is ridiculous. 0:02:58 So the fact that voice recognition is just moving so fast in terms of accuracy and speed 0:02:59 is really encouraging. 0:03:03 Alexa is a very basic version of voice recognition, but you're talking about something much more 0:03:04 sophisticated. 0:03:07 That's something where they're actually doing NLP, they're doing transcriptions so doctors 0:03:09 don't have to take notes.
0:03:13 If you were more sophisticated, you could put an ontology onto this such that you're 0:03:17 not just getting like a transcript of what's going on, but that you have very machine-learning- 0:03:19 friendly organized data. 0:03:20 Exactly. 0:03:26 So the notes that are synthesized from the conversation are far better than the notes 0:03:32 that you would get in Epic or Cerner, where 80% are cut and pasted and they're error-laden. 0:03:39 So I mean, just as Google AI published in JAMA about their experience, I think it's really 0:03:45 going much faster because the accuracy of the transcription and the synthesized note is 0:03:50 far better than what we have today, and it exceeds professional medical transcriptionists in 0:03:51 terms of accuracy. 0:03:52 Yeah. 0:03:55 And so I'm imagining like what the doctor visit is like then. 0:03:59 So we've got maybe NLP so the doctor doesn't have to be transcribing and not interacting 0:04:00 with Epic. 0:04:02 Who knows what's in the back end, but it doesn't even matter anymore. 0:04:03 Right. 0:04:10 So we've got billing and all that headache for like a PCP, all that's a huge headache. 0:04:13 That's all part of that conversation because you say, "Well, Mr. Jones, we're going to 0:04:18 have you have lab tests for such and such and then we're going to get this scan and 0:04:20 it's all done through the conversation." 0:04:22 We could bring in other technologies, right? 0:04:28 We've thought about just how imaging or other types of diagnosis come in and we've seen 0:04:33 all these cool things about how machine learning can improve this, but then there's also a fine line, 0:04:38 because at the same point, and you bring this up in the book, over-diagnosing can also 0:04:39 be very difficult. 0:04:40 Yeah. 0:04:44 Like it was very stunning where you talked about how the incidence of thyroid cancer 0:04:47 is going up, but the mortality's been flat. 0:04:48 Exactly. 0:04:53 And so I guess the challenge will be how can we bring these technologies in so doctors 0:04:56 can do the things they should be doing and not the things they shouldn't be doing. 0:04:57 Right. 0:05:03 Well, I think there is getting arms around a person's data, this whole deep phenotyping. 0:05:10 So no human could actually integrate all this data, not only the whole electronic record, 0:05:18 which all too often is incomplete, but also pulling together sensor data, genomic data, 0:05:24 gut microbiome, all the things that you'd want to be able to come up with not only better 0:05:29 diagnoses, but also the better strategy for prevention or treatment. 0:05:34 So I think what's going to make life easier for both doctors and for the patients is having 0:05:39 that data fully processed and distilled. 0:05:44 In the book, I tell the story about my knee replacement and how it was a total fiasco. 0:05:49 Part of that was because my orthopedist who did the surgery wasn't in touch with my congenital 0:05:50 condition. 0:05:57 And so that hopefully is going to be something we can transcend in the future. 0:06:02 And this is one thing that computers can do very well, logistics and coordination. 0:06:05 There's tons of cases where, like thyroid cancer, which we were just talking about, maybe 0:06:09 you have to bring in an endocrinologist in addition to an oncologist. 0:06:13 And it's shocking that often there's no discussion there, there's no communication. 0:06:19 But yet the challenge in my mind is how does the computer magically know things that we can't 0:06:20 do right now?
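Earlier in this exchange, Pande mentions putting an ontology on top of the visit transcript so that you end up with machine-learning-friendly organized data rather than free text. Here is a minimal, purely illustrative Python sketch of that idea. The ontology terms, the sample visit text, and the structure_transcript function are all invented for illustration; a real system would use a clinical NLP pipeline and standard vocabularies such as SNOMED or RxNorm rather than hand-written keyword lists.

# A toy sketch: map free-text transcript snippets onto a tiny hand-built ontology.
import re

ONTOLOGY = {
    "symptom": ["palpitations", "chest pain", "shortness of breath"],
    "order":   ["lipid panel", "chest x-ray", "ecg"],
    "med":     ["metformin", "lisinopril"],
}

def structure_transcript(text: str) -> dict:
    """Return ontology slot -> list of terms found in the transcript."""
    found = {slot: [] for slot in ONTOLOGY}
    lowered = text.lower()
    for slot, terms in ONTOLOGY.items():
        for term in terms:
            # whole-word match so "ecg" does not fire inside unrelated words
            if re.search(r"\b" + re.escape(term) + r"\b", lowered):
                found[slot].append(term)
    return found

if __name__ == "__main__":
    visit = ("Mr. Jones reports palpitations after exercise. "
             "We'll order an ECG and a lipid panel, and continue lisinopril.")
    print(structure_transcript(visit))
    # {'symptom': ['palpitations'], 'order': ['lipid panel', 'ecg'], 'med': ['lisinopril']}

The point of the sketch is only the shape of the output: structured slots that a downstream model or billing system can consume, instead of a wall of transcribed text.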
0:06:27 Well, that's, I think, where the complementarity, the synergy between machines and people is 0:06:35 so ideal, because we just have early satiety with data, whereas deep learning has an insatiable 0:06:36 appetite. 0:06:38 And so that contrasts. 0:06:45 But we have, as doctors and humans, we have just great contextual abilities, the judgment, 0:06:53 the wisdom, the experience, and just the features that can basically build on that machine 0:06:58 processing, because we don't ever want to trust an algorithm for a serious matter. 0:07:04 But if it tees it up and we have oversight and we fit it into that person's story, that 0:07:06 I think is the best of both worlds. 0:07:08 Well, here I'm really curious, because what is the ground truth? 0:07:13 Because generally we don't trust an individual person as like, as knowing everything either, 0:07:14 right? 0:07:19 But the ground truth would be like a second opinion or a third opinion or a fourth opinion 0:07:22 or even like, you know, a board to look at something. 0:07:24 And that would be what I think most people would view as the ground truth. 0:07:29 The ground truth, when it's applied to training an algorithm, of course, is knowing that it 0:07:32 is the real deal, that it is really true. 0:07:40 And I think a great example of that is in radiology, because radiologists have a false 0:07:44 negative rate of 32 percent, false negative. 0:07:49 And that's the basis for most of the litigation in radiology, which is, over the course of 0:07:54 a career, a third of radiologists get sued, mostly because they miss something. 0:08:00 So what you have are algorithms with ground truths that are trained on, you know, hundreds 0:08:05 of thousands of scans so that whether it's a chest x-ray, a CT scan or MRI, whatever 0:08:08 it is, it's not going to miss stuff. 0:08:12 But then, of course, you've got that over-read by the trained radiologists. 0:08:20 So, you know, I think that's an example of how we can use AI to really rev up accuracy. 0:08:23 Somebody is going to need to interpret the algorithms or sort of be on top of them, and then in 0:08:29 a sense, the doctor is freed from the stuff the doctor shouldn't be doing, like the BS 0:08:32 accounting or the typing or all these things. 0:08:34 And that's not the best use of doctors' time. 0:08:39 And it's funny because it seems like the best use of doctors' time is in understanding what 0:08:43 these tests would mean, whether it be an AI test or a cholesterol or whatever, and in 0:08:45 how to communicate with the patient. 0:08:48 And so that seems to be a theme running through the book. 0:08:51 So for the first half, in terms of the understanding, what do you think that's going to look like? 0:08:55 I mean, one of the things I've been constantly wondering about is whether there'll be a new 0:08:56 medical specialty. 0:09:00 Like, you know, because you don't have radiology without like CT or X-rays, right? 0:09:03 So presumably you didn't have radiology a hundred years ago. 0:09:05 Do we have AI-ology or something like that? 0:09:12 Saurabh Jha, who is a radiologist at Penn, he and I penned a JAMA editorial about the 0:09:16 information specialist, radiologists and pathologists. 0:09:21 Their foundation is reviewing patterns and information. 0:09:26 But what's interesting is this is an opportunity for them to connect with patients, because 0:09:27 they don't. 0:09:32 You know, right now a radiologist never sees a patient, radiologists live in the basement.
0:09:35 The pathologists look at the slides, that's that group of pathologists. 0:09:40 But they actually want to interact with patients, and they have this unique insight, because 0:09:43 they're like the honest brokers, they're not, they don't want to do a surgery. 0:09:46 They want to give you their expertise. 0:09:49 And so I think that's where we're going to see a pretty substantial change. 0:09:54 And as you've touched on, this new specialty, it'll look different than the way it is today. 0:09:58 In that case, it almost seems like everybody is better off. 0:10:01 The doctor is better off because the pathologist is not just looking at slides, but actually 0:10:02 is dealing with patients. 0:10:08 And presumably the patient's better off because you have these false negatives getting found. 0:10:12 The pathologists, you know, they have remarkable discordance when they look at slides, and to 0:10:18 be able to have that basically looked at as if hundreds of thousands of them were reviewed, 0:10:24 getting back to your second, third and nth opinion, to get that as input for them to help 0:10:26 and consult with a patient. 0:10:28 I think it's really a bonus. 0:10:31 You're talking about radiology here and pathology, but I mean, we could sort of think about this 0:10:33 as an issue for diagnosis in general. 0:10:36 And you know, there's one thing you point out in the book, I think really beautifully, you 0:10:40 know, you said once trained, doctors are pretty much wedged into their level of diagnostic 0:10:42 performance throughout their career. 0:10:46 That's kind of an amazing thing, because doctors, I guess, go through CPE and so on. 0:10:49 But like, you only go through med school once and that's an intense process. 0:10:52 You learn a lot, but you can't go through med school all the time. 0:10:53 No, it's so true. 0:10:58 And that gets me to, you know, Danny Kahneman's book about thinking fast and slow, and the 0:11:05 system one that is the reflexive thinking that happens automatically, versus what we 0:11:11 want, which is reflective thinking, system two, which takes time. 0:11:17 And it turns out that if a doctor doesn't think of the diagnosis of a patient in the 0:11:21 first five minutes, there's over a 70% error rate. 0:11:25 And that's actually how much time the average visit with patients is. 0:11:31 So we have the problem, as you've alluded to, of kind of a plateauing early on in a career, 0:11:35 but we also suffer, because of lack of time, with the system one thinking. 0:11:40 The thought is that the machine learning is reflecting what system two would look like, 0:11:44 because it's trained from doctors sort of doing system two. 0:11:45 Exactly. 0:11:53 That brings in the integration of what would be the ground truths of thousands for that 0:11:55 particular dataset. 0:11:57 So I think it has a potential. 0:12:03 And of course, a lot of this stuff needs validation, but there's a lot of promising studies to 0:12:05 date that suggest that's going to be very possible. 0:12:11 When I think about what ML or artificial intelligence could do, there's like two axes. 0:12:17 There's one like just scale, like the fact that you can scale up servers on Amazon trivially, 0:12:19 much more than you could scale up human beings. 0:12:22 We could scale up a thousand servers right now or 10,000 servers right now. 0:12:25 I don't think we could call 10,000 doctors and get them here.
0:12:29 The other axis you could sort of talk about is like sort 0:12:31 of intelligence or capability. 0:12:36 And you're talking about both, in a sense that you can scale up not just the fact that you 0:12:42 could have like a resident or med school students sort of doing things and having a lot of them, 0:12:48 but in addition to that, you have in a sense a doctor through AI, in diagnostics, 0:12:50 that's better than any single doctor. 0:12:51 Right. 0:12:55 I think that's really what is going to be one of the big early changes in this new AI 0:13:00 medical era, that diagnosis is going to get so much better. 0:13:05 Right now we have over 12 million serious errors a year in the United States. 0:13:09 And they're not just costly, but they hurt people. 0:13:14 So this is a real opportunity to upgrade that, and that's a much more significant problem 0:13:16 than most people realize. 0:13:20 And so far we've been talking about things that feel like the sci-fi fantasy version of 0:13:21 stuff, right? 0:13:26 I mean, like, because we've got like this doctor of sorts through diagnosis that can do what 0:13:30 no single doctor could do, presumably at scale it's doing this at lower cost. 0:13:35 It's allowing human beings to do the things they should be doing. 0:13:37 Is there any dystopian turn here, or how does this go? 0:13:39 Oh, there's no shortage of those. 0:13:42 And so how could this go wrong and what can we do to prevent it? 0:13:47 Well, I mean, I think one of the things we've harped on is that you've got to have human 0:13:48 oversight. 0:13:54 We can't trust an algorithm absolutely for any serious matter, because if we do that 0:13:59 and it has a glitch or it's been hacked or it has some kind of adversarial input, it 0:14:01 could hurt people at scale. 0:14:03 So that's one of the things that we've got to keep an eye on. 0:14:11 For example, if an algorithm gets approved by the FDA, oftentimes these days it's 0:14:14 an in silico, retrospective sort of thing. 0:14:20 And if we just trust that without seeing how it performs in a particular venue, a particular 0:14:24 cohort of people, these are things that we just shouldn't accept blindly. 0:14:30 So there's lots of deep liabilities, and it runs from, of course, privacy, security, the 0:14:31 ethics. 0:14:37 There are many aspects that are not ideal about this, but when you think about the need 0:14:41 and how much it could provide to help medicine, I think those are the trade-offs that we have 0:14:43 to really consider. 0:14:47 What should we be doing now, and what could people be doing now to sort of anticipate this? 0:14:48 Or do you think it's like too early? 0:14:51 I mean, because people have these algorithms now. 0:14:54 What do we see in one year versus five years versus 10 years? 0:14:57 Well, it is rolling out in other parts of the world. 0:15:03 I just finished this review with the NHS, and that was fascinating because they are really 0:15:04 going after this. 0:15:10 They are the leading force in the world in genomics, and now they want to be in AI. 0:15:15 So they already have emergency rooms that are liberated from keyboards, 0:15:18 and they are going after this. 0:15:23 And this is of course in the middle of Brexit, so that's kind of amazing.
0:15:27 But China is really implementing this, and you could say, well, maybe too fast, 0:15:32 out of desperation or need, but one of the advantages that we don't recognize with China 0:15:37 is not just that they have scale in terms of people, but that they have all the data for each 0:15:38 person. 0:15:40 And we have no data for each person. 0:15:46 Basically, our data is just spread around all these different doctors and health systems. 0:15:47 Nobody has all their data. 0:15:52 And that is a big problem, because without inputs that are complete. 0:15:53 For like you personally. 0:15:54 Yeah. 0:15:55 Yeah. 0:15:56 Then what are you going to get out of that? 0:16:00 So we are at a handicapped position in this country. 0:16:05 And the other thing of course is we have no strategy as a nation, whereas China, the UK and 0:16:10 many other countries, they are developing or have developed planning and strategy and 0:16:13 put in resources. Here as a nation, 0:16:14 we have zero resources. 0:16:19 In fact, we have proposed cuts to the same, you know, granting agencies that would potentially 0:16:20 help. 0:16:23 And so what should one do at that scale? 0:16:27 Like, you know, there's various things people propose, you know, is this something to have 0:16:31 a new national institute of health, you know, in this area? 0:16:35 I mean, when I think about the government playing a role, I think I want 0:16:40 them to try to help build the marketplace and set the rules, but we have to be careful 0:16:42 that we don't put too much regulation in as well. 0:16:46 I mean, when you say we don't have a strategy, 0:16:47 what's missing? 0:16:48 What should we be doing? 0:16:52 We have no national planning or strategy. 0:17:00 How is AI, not only for healthcare, but in general, how is it going to be cultivated and made 0:17:02 transformative? 0:17:07 The experience I had in the UK was really interesting because there they not only have the will, 0:17:12 but they have a whole wing of the NHS for education and training. 0:17:13 You just think about it. 0:17:20 They talked about professions within medicine that are going to have more of their daily 0:17:21 function change. 0:17:25 So, we're not well prepared, you know, as to who should take the lead. 0:17:29 One of the problems we have that you're touching on is our professional organizations haven't 0:17:31 really been so forward thinking. 0:17:36 They mainly are centered on maintaining reimbursement for their constituents. 0:17:43 The, you know, entities like NIH and NSF and others could certainly be part of the solution. 0:17:47 What you want to do here, I think, is to really accelerate this. 0:17:52 We're in the middle of an economic crisis in healthcare, of which the US is the worst 0:17:53 outlier. 0:18:00 I mean, we're spending over $11,000 per person and we have the worst outcomes: life expectancy 0:18:06 going down three years in a row, childhood mortality, infant mortality, maternal mortality. 0:18:09 The worst. People don't realize that. 0:18:14 Then you have the UK and so many other countries that are at the $4,000 per year level, and 0:18:16 they have outcomes that are far superior. 0:18:21 So, if we use this, we could actually reduce inequities. 0:18:27 We could make for a far better business model, paradoxically, but we're not grabbing the 0:18:28 opportunity.
0:18:31 Maybe there's another solution we could think about, which you also point to in the book, 0:18:36 which is, what can we do to drive this through consumer action? 0:18:39 For instance, a lot of our healthcare is sick care, right? 0:18:40 What happens when we get sick? 0:18:41 That's almost all of it. 0:18:44 What about, what can we do to stay healthy? 0:18:47 First thing I think of is diet and lifestyle, right? 0:18:50 That could go a long way in so many diseases, so many things that we deal with. 0:18:52 So, actually, you touch on diet. 0:18:56 Before we even talk about like diagnosing whether you have cancer, should we be diagnosing 0:18:57 what you should be having for lunch? 0:19:01 I couldn't agree more that that should be a direction. 0:19:09 We have had this so naive notion that everyone should have the same diet, and we never got 0:19:16 that right as a country, but now we know without any question that people have an individualized 0:19:19 and highly heterogeneous response. 0:19:21 That's not just glucose spikes. 0:19:26 If you and I ate the exact same food, the exact same amount, the exact same time, our 0:19:30 glucose response would be very different, but also triglyceride response would be different, 0:19:32 and they don't track together. 0:19:37 So, what we're learning is if you get all this multimodal data, not just your gut microbiome 0:19:43 and sensor data and your sleep and your activity, your stress level, and what exactly you eat 0:19:47 and drink, we can figure out what would be promoting your health. 0:19:50 We're not there yet, but we're seeing some pretty rapid progress. 0:19:55 What's intriguing to me is that there are cases now, especially, let's say, 0:20:00 just glucose, where you can take technology developed for type one or type two diabetics. 0:20:05 Now, I'm not diabetic, but I actually had the sort of, I was about to say joy, but at 0:20:09 least the intellectual intrigue of having a CGM on me for two weeks. 0:20:10 Yeah. 0:20:11 So, I got to play with all these experiments. 0:20:12 Right. 0:20:13 Right? 0:20:14 So, I tried white rice versus brown rice. 0:20:15 Yeah. 0:20:16 How's ice cream? 0:20:20 Wine versus scotch, all the important questions one has to figure out. 0:20:21 Exactly. 0:20:26 And it was actually a surprise to me how, for instance, I did not spike 0:20:27 on ice cream. 0:20:28 I spiked on brown rice. 0:20:29 Yeah. 0:20:32 I don't think I'm prepared to go on the ice cream diet just yet. 0:20:33 Yeah. 0:20:34 And I don't think you would prescribe that either, right? 0:20:37 But I think the idea is that it's just different for everybody, right? 0:20:39 So, maybe you spike on ice cream, I don't. 0:20:45 And, you know, what I think has been kind of so annoying about nutrition is that we 0:20:48 hear all these conflicting things, but perhaps part of the reason why we're hearing these 0:20:50 conflicting things is that it is so individual. 0:20:51 Exactly. 0:20:56 And that it's so complicated and such a fundamental data science problem that it probably takes 0:20:57 something like machine learning to figure it out. 0:20:59 Well, I think that's central. 0:21:02 If we didn't have machine learning, we wouldn't have known this. 0:21:06 And only, you know, thanks to the group at the Weizmann Institute in Israel, they cracked 0:21:07 the case on this. 0:21:09 So, Eran Segal's work? 0:21:10 Yeah. 0:21:11 Eran Segal. 0:21:15 And now, it's been replicated by many others and it's being extended.
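To make the personalized-response idea concrete, here is a toy sketch in Python of the general shape of such a model, trained entirely on synthetic data: predict an individual's post-meal glucose rise from multimodal features. The feature names, the data-generating rule, and the choice of gradient boosting are assumptions for illustration only, not the method used in the Weizmann work.

# A toy sketch on synthetic data: post-meal glucose rise from multimodal inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 100, n),   # carbs_g          (meal composition)
    rng.uniform(0, 40, n),    # fat_g
    rng.uniform(0, 1, n),     # microbiome_score (a made-up summary feature)
    rng.uniform(4, 9, n),     # sleep_hours
    rng.uniform(0, 15, n),    # activity_ksteps
])
# Synthetic "truth": the same carbohydrate load produces different spikes
# depending on the microbiome score and activity, i.e. an individualized response.
y = 0.8 * X[:, 0] * (1.2 - X[:, 2]) - 2.0 * X[:, 4] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))

The only point of the synthetic rule is the one made in the conversation: identical meals map to different predicted responses for different people, which is why population-level diet advice keeps coming out contradictory.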
0:21:17 What would be promoting your health? 0:21:22 And right now, it’s, you know, these proxy metrics like your glucose or your lipids in 0:21:30 the blood, but eventually we’ll see how outcomes and prevention can be fostered by your diet. 0:21:31 It’s really kind of mind blowing. 0:21:34 How difficult data science problem it seems nutrition is. 0:21:35 Yeah. 0:21:39 The problem VJ is a number of levels and the sea of data. 0:21:44 I mean, we’re talking about terabytes of data to crack the case for each individual. 0:21:48 So it’s not even just your gut microbiome of the species of bacteria and their density, 0:21:54 but now we know it’s the sequence of those bacteria that are part of the story. 0:21:59 Then you have, of course, these continuous glucose every five minutes for a couple of 0:22:00 weeks. 0:22:01 That’s a lot of data. 0:22:09 Besides that, you’ve got, you know, all your physical activity, your sensors for stress, 0:22:15 you know, your sleep data, and then even your genomics. 0:22:20 So when you add all this together, this is a real challenge. 0:22:23 No human being could assimilate all this data. 0:22:28 But what’s interesting is not only at the individual level, but then with thousands of people. 0:22:33 So take everything we just talked about, multiply by thousand or hundreds of thousands. 0:22:35 That’s how we learn here. 0:22:41 And so what I think is the biggest thing about the AI underappreciation is the things that 0:22:43 we’re going to learn that we didn’t know. 0:22:49 Like, for example, another great example is when you give a picture of a retina to international 0:22:55 retina expert, and you say, is this from a man or a woman, the chance of them getting 0:22:56 right is 50/50. 0:23:02 But you can train an algorithm to be over 97, 98% accurate. 0:23:06 And there’s so many examples like that, like you wouldn’t miss polyps in a colonoscopy, 0:23:08 which is a big issue. 0:23:13 Or you would be able to see your potassium through your smartwatch level in your blood 0:23:14 without any blood. 0:23:20 And then the imagination just runs wild, as far as what you could do when you train things. 0:23:27 And so training your diet with this torrent of data, not just from you, but from a population 0:23:29 is, I think, a realistic direction. 0:23:33 And what I think is interesting about this is that it’s something where, A, we don’t 0:23:39 need the AMA or NIA or anything else to get involved in terms of diet. 0:23:43 And B, actually, people want to take care of these problems, because I think most people 0:23:44 are motivated. 0:23:45 We just don’t know what to do. 0:23:46 Right. 0:23:52 And so many aspects of it, like now chronobiology is really this hot topic. 0:23:54 That’s about your circadian rhythm. 0:23:57 And should you eat only for eight hours during the day? 0:23:58 Well, certain people, yes. 0:24:03 But the whole idea that there’s this thing for everyone, we got to get over that. 0:24:08 That’s what deep phenotyping is all about, to learn about the medical health essence 0:24:09 of you. 0:24:13 And we haven’t had the tools until now to do that. 0:24:14 OK. 0:24:17 So there’s a ton of data, but a lot of it seems kind of subjective. 0:24:18 Right? 0:24:20 I mean, did I sleep well or not? 0:24:24 How do you sort of overcome the fact that not everything is quantitative, like my cholesterol 0:24:25 level? 0:24:30 Well, it turns out that was kind of old medicine where we just talked about your symptoms. 0:24:34 But new medicine is with all sorts of objective metrics. 
0:24:39 So a great example of this is state of mind or mood. 0:24:43 And that's going to be transformative for mental health, because now everything from 0:24:52 how you type on your smartphone to the voice, which is so rich in terms of tone and intonation, 0:24:57 to your breathing pattern, to facial recognition of yourself. 0:25:02 I mean, there's all these ways to say, you know, Vijay, you're really depressed. 0:25:03 Yeah. 0:25:04 You know that you're depressed. 0:25:12 So the point being is that you have objective metrics of one's mental health. As a cardiologist 0:25:13 for all these years, 0:25:18 I'd have these patients come and tell me, I feel my heart's fluttering. 0:25:20 And I would put in the note, the heart's fluttering. 0:25:22 That was so unhelpful. 0:25:26 Now I can say, well, you know, you should be able to record this on your phone or if 0:25:33 you have a smartwatch, and when your heart flutters, just send me the PDF of that. 0:25:38 And we have the diagnosis that is real world, no longer subjective. 0:25:39 A whole different look, really. 0:25:45 And by the way, when the patient who has the fluttering records their cardiogram, 0:25:47 they don't have to wait for me. 0:25:53 They already have an automated read from AI that's more accurate than a doctor. 0:25:55 There is something very anecdotal about the doctor visit. 0:25:57 It's not right there in the moment. 0:25:58 Yeah. 0:25:59 Right. 0:26:00 It's a one-off. 0:26:01 Yeah. 0:26:02 It's a one-off. 0:26:05 And so it's a fine thing, because people wonder about, let's say the knock on a wearable will 0:26:08 be that it's not like an eight-point EKG or something like that. 0:26:10 But on the other hand, it's there with you all the time. 0:26:11 Yeah. 0:26:12 No, exactly. 0:26:16 And then there's this contrived aspect of going to see the doctor where, you know, a lot of 0:26:19 people find that very stressful. 0:26:23 And when we talk about white coat hypertension, we don't even know what normal blood pressure 0:26:28 is, because we need to check that out in thousands, hundreds of thousands of people in their real 0:26:30 world to find out what's normal. 0:26:36 We've already had this chaos of the American Heart Association saying that they changed 0:26:42 the blood pressure guidelines on the basis of no data, speaking of the lack of some objective 0:26:43 metric. 0:26:44 Yeah. 0:26:47 Well, so one other area that I thought was really intriguing, and just to me, this was 0:26:52 almost paradoxical, the concept of AI being useful for empathy. 0:26:54 Because I would have thought like, if we're thinking about the things that a computer 0:26:58 is good at, like multiplying numbers, that's going to be something like they're going to 0:26:59 beat humans at any day. 0:27:04 I would have thought that empathy would be the one, like the last bastion of what we're 0:27:06 good at versus what the computer is good at. 0:27:08 But how does AI get to empathy? 0:27:12 Because we started the conversation with that being a key part of what a doctor 0:27:15 does, so we're like, what can AI do there? 0:27:19 Well, we are missing that in a big way today. 0:27:21 And how do we get it back? 0:27:27 Well, I think how we get it back is we take this deep phenotyping, we do deep learning 0:27:35 about the person, and that's all outsourced with oversight from a doctor or clinician.
0:27:42 Now, when you have this remarkable improvement in productivity in workflow and efficiency 0:27:46 and accuracy, all of a sudden, you have the gift of time. 0:27:54 If we just lay down, as doctors have over decades for administrators to go ahead and 0:28:03 just drive revenue and basically have no consideration for patients or doctors, we’re not going to 0:28:05 see any growth of empathy. 0:28:09 We’re not going to see the restoration of care in healthcare. 0:28:15 But if we stand up and if we say that time, all that benefit of the AI part, the machine 0:28:19 support, and by the way, that’s also at the patient level. 0:28:23 So the patients now, with their algorithmic support, they’re decompressing the doctor 0:28:24 load too. 0:28:25 Yeah, they’re doing some of it. 0:28:29 A lot of simple things, ear infections, skin rashes and all that sort of stuff that’s not 0:28:35 life threatening or serious, but that’s bypassing a doctor potentially almost completely. 0:28:43 So between this flywheel of algorithmic performance enhancement, if we stand up for patients, then 0:28:45 we have all this time to give back. 0:28:51 Once we have time to give back, then we tap into why did humans go into the medical profession 0:28:52 in the first place. 0:28:57 And the reason was because they want to care for their fellow human being, but they lost 0:28:58 their way. 0:29:03 And now we have the peak burnout and depression and suicide in the history of the medical 0:29:04 profession. 0:29:08 And by the way, not just in the US, in many parts of the world, and how are we going to 0:29:10 get that back? 0:29:15 Because it turns out, if you have a burnout doctor, you have a doubling of errors and it’s 0:29:20 a vicious cycle, you have errors, and they get more burnout, more depressed. 0:29:21 So we have to break that up. 0:29:29 And I think if we can get people, so there’s time together, and that real reason why the 0:29:34 mission of healthcare is brought back, we can do this. 0:29:38 It’s going to take a lot of activism, it’s not going to be easy, and it’s going to take 0:29:39 a while. 0:29:42 But if we don’t start planning for this now, it’s not going to happen. 0:29:45 How do you think that changes for how do you become a doctor? 0:29:50 I mean, getting into med school and all the training is really difficult. 0:29:53 What does the future of medical education look like? 0:29:59 Right now, pre-med degree is a lot of biology and chemistry, not too much effort in psychology 0:30:06 or empathy or in statistics or in machine learning. 0:30:07 What does that look like in the future? 0:30:10 I think we’re missing the mark there. 0:30:18 We continue to cultivate Brainiacs, who have the highest MCAT scores and grade point averages, 0:30:22 and oftentimes relatively low on emotional intelligence. 0:30:24 So we have tilted things. 0:30:25 We want to go the other way. 0:30:30 We want to emphasize who are the people who have the highest interpersonal skills, communicative 0:30:35 abilities, and who really are the natural empathetic people. 0:30:40 Because a lot of that Brainiac work is going to be machine generated. 0:30:45 And so it’s something that we all start to lean in that direction. 0:30:46 Yeah. 0:30:49 Now, and it’s intriguing because I think there’s a chicken and egg problem here because I think 0:30:51 first this has to be put in. 0:30:55 Often in these eddies, big changes, there will be resistance. 0:30:56 Who’s going to be fighting this? 
0:31:00 The resistance we have to anticipate is going to be profound. 0:31:08 One of the problems is that the medical profession, it may not be ossified, but it's very difficult 0:31:09 to change. 0:31:16 The only changes that have occurred rapidly, like the adoption of robotics in surgery, 0:31:19 were because it enhanced revenue. 0:31:21 None of these things are going to enhance revenue. 0:31:24 They're actually going to potentially be a hit. 0:31:27 We have all these interests that this is going to challenge. 0:31:33 But for example, we could get remote monitoring of everyone in their home instead of being 0:31:38 in a hospital room, unless they were needing an intensive care unit. 0:31:42 Now, do you think hospitals are going to allow that to happen? 0:31:47 Because they could be gutted, and then they won't know what to do with all their facilities. 0:31:50 So the American Hospital Association is not going to like this. 0:31:51 So maybe I'm being delusional here. 0:31:53 They're not going to like it revenue-wise. 0:31:58 Would they say that would put patients in danger, because obviously at home you don't 0:31:59 have what a hospital has? 0:32:00 Well, you know what the interesting thing is? 0:32:03 I don't know if you can get in more danger than going into our hospitals. 0:32:05 One in four people are harmed. 0:32:07 Yeah, you mean sepsis and infection. 0:32:10 The main thing, nosocomial infections from the hospital. 0:32:15 But also medication errors and other things, versus the comfort of your own home. 0:32:19 You can actually sleep, you'd be with your loved ones, the convenience. 0:32:22 But most importantly, just think of the difference in expense. 0:32:29 You could buy years of a broadband data plan for one night in the hospital, which is $5,000 0:32:30 on average. 0:32:31 It's amazing. 0:32:37 We have the tools to do that now, but you're not seeing it being seriously undertaken because 0:32:38 of the conflicts. 0:32:43 So if you think about how all this has to actually happen, we talked about what's possible. 0:32:48 But if you get to nuts and bolts, it's interesting to think who's going to do it. 0:32:53 Because if you take just a pure data scientist who doesn't understand the medicine, 0:32:58 I don't know if that would be enough, but also I don't know if you could take a doctor 0:33:01 that doesn't understand data science. 0:33:06 And so is it going to be teams of commingled groups that get this together? 0:33:09 Because there will be iterations between the data science and the biology and the clinical 0:33:14 aspects that have to come one after the other to be able to make these advances. 0:33:19 We need machines and people to get the best of both worlds. 0:33:28 So in the book, there's that example of how we cracked the potassium case between Mayo Clinic cardiologists 0:33:30 and an AliveCor data scientist. 0:33:36 And what was amazing about that experience to review with them was that the cardiologists 0:33:40 thought you should only look at one part of the cardiogram, which is historically known as 0:33:45 the so-called QT interval, because it was known to have something to do with potassium. 0:33:51 But when that flunked and the algorithm was a farce, the data scientist said, well, why 0:33:52 are you so biased? 0:33:54 Why don't we just look at the entire cardiogram? 0:33:58 And by the way, Mayo, you only gave us a few million cardiograms. 0:34:01 Why don't you give us all the cardiograms? 0:34:02 So then they nailed it.
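As a toy illustration of the lesson in that potassium story, that a model restricted to one expert-chosen window of a signal can miss information a model given the whole trace picks up, here is a minimal Python sketch on synthetic waveforms. The fake signals, the window positions, and the use of logistic regression are all invented for illustration; none of this is the actual Mayo or AliveCor pipeline.

# A toy sketch: expert-chosen window vs. the whole trace on synthetic "waveforms".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, length = 1000, 200
X = rng.normal(0, 1, (n, length))        # noise "waveforms", one row per trace
y = rng.integers(0, 2, n)                # binary label (e.g., abnormal vs. normal)
X[y == 1, 150:160] += 1.0                # informative bump lives at samples 150-160

expert_window = slice(40, 60)            # the analogue of "only look at the QT interval"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

restricted = LogisticRegression(max_iter=1000).fit(X_tr[:, expert_window], y_tr)
full = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("window-only accuracy:", round(restricted.score(X_te[:, expert_window], y_te), 2))
print("whole-trace accuracy:", round(full.score(X_te, y_te), 2))

On this synthetic setup the window-only model hovers near chance while the whole-trace model does well, which is the de-biasing point being made: let the data, not the prior, decide where the signal lives, while keeping humans to judge whether what it found is sensible.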
0:34:08 So the whole idea is that the biases that we have that are profound. 0:34:15 But when you start de-biasing both the data scientist and the doctors, the medical people, 0:34:18 then you start to get a really great result. 0:34:23 One of the scariest stories I saw was that this algorithm was getting cancer, no cancer 0:34:29 right with crazy high accuracy, like AUC of like 1.0, like never making a mistake. 0:34:33 And it turned out that there was some subtle difference between like a high Tesla magnet 0:34:39 and a low Tesla magnet, and that the patients who were very sick to start off with were 0:34:42 always getting one type of scan, and that a human being couldn’t tell the difference, 0:34:47 but that the machine was picking up some signal, not of whether it was cancer, no cancer, 0:34:52 but whether they were getting like the fancy measurement or the less complicated one. 0:34:56 Or another great example is like there’s a classic example where they’re I think predicting 0:35:01 tumors and they had rulers for the size of the tumor on all the tumor ones. 0:35:03 And so really ML was a great ruler detector. 0:35:04 Yeah. 0:35:10 The whole idea that as a pathologist we can’t see in a slide the driver mutation, but you 0:35:13 could actually train the algorithms. 0:35:17 So when the pathologist is looking at it, it’s already giving you what is the most likely 0:35:19 driver mutation. 0:35:20 It’s incredible. 0:35:27 And that does get me to touch on the deep science side of this, which we aren’t recognizing 0:35:31 is way ahead of the medical side, the ability to upend the microscope. 0:35:35 You don’t have to use fluorescence or H and E. You just train. 0:35:37 So you forget staining. 0:35:43 The idea that you used to be hard to find rare cells, just train the algorithms to find 0:35:44 the rare cells. 0:35:50 I mean, we’re seeing some things in science, no less in drug discovery, in processing cancer 0:35:57 and sequencing data and certainly in neuroscience, it’s a real quiet revolution that’s much 0:36:01 further ahead than on the medical side because there’s no regulatory hurdles. 0:36:05 And you make a good point because I think it’s tempting to just try to do what the human 0:36:08 can do better or what the human can do better now, try to do as well. 0:36:12 But now you’re talking about doing things that no human being could do. 0:36:13 Yeah. 0:36:17 Imaging plus genomics where the genomics read out, let’s say, or whatever the blood assay 0:36:18 is, the gold standard. 0:36:21 I don’t want to predict what the pathologist would say. 0:36:25 I want to predict the biopsy, I want to predict the blood or whatever the true gold standard 0:36:26 is behind it. 0:36:27 Right. 0:36:30 And if you’re training on the best labels, you can do things that no human being could 0:36:31 do. 0:36:36 Well, you know, this may be the most important point is that we have to start having imagination 0:36:44 because we don’t even have any idea of the limitless things that we could teach machines. 0:36:46 Because I’m getting stunned almost on a weekly basis. 0:36:49 I said, I never would have thought of that. 0:36:52 And so just fast forward, here we are in 2019. 0:36:53 What’s it going to be like? 0:36:56 You know, a few years of all the things, like when the Mayo Clinic told me they could look 0:37:01 at a 12-lead cardiogram for millions and be able to say this person’s going to get 0:37:06 atrial fibrillation in their life with X percent probability, I said, really? 0:37:07 And they’ve done it. 
0:37:10 And so I never would have expected that. 0:37:11 Yeah. 0:37:16 That’s a really fun point because you could think of it two ways: that the human 0:37:21 beings aren’t being imaginative enough, or what does imagination mean for an algorithm? 0:37:22 Right. 0:37:28 Well, until we get into heavy unsupervised learning, we’re a bit limited by the annotation 0:37:31 and the ground truth, going back to that. 0:37:35 You can only imagine things when you have those for supervised learning. 0:37:40 But you know, as we go forward, we’ll have more of those data sets to work with and we’ll 0:37:47 be better at going forward without them, well, with federated data sets and unsupervised learning. 0:37:50 So the opportunities going forward are pretty enthralling. 0:37:51 Yeah. 0:37:54 Well, the unsupervised learning is interesting because you can finally just, you know, and 0:37:57 for those who aren’t familiar with the term, it’s kind of like trying to find the clusters, 0:38:01 to sort of not have the labels, but to see the lay of the land. 0:38:04 And that’s interesting because no human being can sort of, especially in high-dimensional 0:38:07 space, like visualize that and see that. 0:38:08 And so that’s one thing. 0:38:13 The second thing is that if you just throw all of the data in and maybe have the algorithm 0:38:18 make sure that it’s not overfitting, that it’s not trying to find an overly complicated story, 0:38:23 almost like, you know, these conspiracy theories are like human beings overfitting for the 0:38:26 moon landing being a hoax or something like that when there’s a simpler explanation for 0:38:27 things. 0:38:30 If you keep it to a simple explanation, the computer can try everything. 0:38:31 Yeah. 0:38:32 Yeah. 0:38:33 So like you talked about, it could look at the whole cardiogram. 0:38:39 We could look at things that we don’t look at because either we’re expert enough to 0:38:42 know that couldn’t possibly be right, even if it is. 0:38:43 Or we just don’t have the time. 0:38:48 It reminds me sometimes these algorithms are almost like children, in that kids just don’t know, 0:38:49 so they’ll try things. 0:38:50 Yeah. 0:38:53 And that’s where imagination and creativity often comes from. 0:38:55 I couldn’t agree with you more. 0:38:58 So we’ve been spending a lot of time talking about diagnosis, but prediction is another 0:39:00 thing that is really important. 0:39:03 I’d call that a real soft spot in AI. 0:39:09 And I told the story in the book of my father-in-law, who was kind of my adopted father, 0:39:13 about how he was on death’s door. 0:39:19 He was about to come to our house to die and he was resurrected. 0:39:23 But any algorithm would have said he was a goner. 0:39:29 And so the idea that at the individual level, you could predict accurately whether it’s 0:39:33 end of life or when you’re going to die, or, in the hospital, 0:39:37 how long you’re going to stay or whether you’re going to be readmitted, all these things. 0:39:38 We’re not so good at that. 0:39:45 We can have a general sense at a population level, but so far prediction hasn’t really 0:39:51 panned out nearly as well as classification, diagnosis, triage, that kind of stuff. 0:39:57 And I still think that that’s one of the shakier parts, because then you’re going to tell a 0:40:01 person about a prediction, and we’re not very good at that. 0:40:06 When we talk to people with cancer and we tell them their prognosis, it’s all over 0:40:08 the place in reality.
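As a rough illustration of the unsupervised-learning idea described above (finding clusters without labels to "see the lay of the land" in high-dimensional data), here is a minimal sketch. The data is synthetic and the subgroup structure is invented; in practice the reduced coordinates would be plotted and the proposed clusters reviewed by someone with domain knowledge.

# Minimal sketch (not from the episode): unsupervised "lay of the land."
# Synthetic data; all names hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend these are ~1,000 patients with 50 measurements each, drawn from
# three unknown subgroups we never labeled.
centers = rng.normal(scale=4.0, size=(3, 50))
X = np.vstack([rng.normal(loc=c, size=(333, 50)) for c in centers])

# Reduce to 2 dimensions so a human can actually "see" the landscape ...
X_2d = PCA(n_components=2).fit_transform(X)
# ... and let a clustering algorithm propose groupings without any labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("patients per proposed cluster:", np.bincount(clusters))
# X_2d could now be plotted, colored by cluster, to inspect whether the
# proposed subgroups look meaningful to a clinician.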
0:40:13 And so the question is, are algorithms really going to do better or are they just going 0:40:17 to give us a little more precision, maybe not much? 0:40:21 Is there enough information to ever predict anything like that? 0:40:25 Well, that’s part of the problem too, is that the studies that have been done to date, 0:40:30 things like predicting Alzheimer’s, predicting all sorts of outcomes you can imagine, they’re 0:40:36 not with complete data, they’re just taking what you can get, like what’s in one electronic 0:40:41 health record, one system, rather than everything about that person. 0:40:43 So maybe it will get better when we fill in the holes. 0:40:47 I always think about what would be the interesting challenges to work on. 0:40:48 That’s like one of the most interesting ones. 0:40:54 I think it is, because there you could improve the efficiency if you knew who are the people 0:40:58 at the highest risk, and for whom you want to change the natural history or what the algorithm 0:41:01 is predicting, if it’s something that’s an adverse outcome. 0:41:06 So eventually we’ll probably get there, but it isn’t nearly as refined as the other areas. 0:41:11 But if you combine all these things together, this thing where you’re monitoring your body 0:41:16 every five minutes and your diet and your exercise and your drugs and you have all this 0:41:19 longitudinal data, that’s something that no one’s ever had before. 0:41:27 Yeah, well, you’re bringing up a big hole in the story, which is multimodal data processing. 0:41:29 We are not doing it yet. 0:41:35 Like a perfect example is like in diabetes, people have a glucose sensor and the only 0:41:40 algorithm they have tells them if the glucose is going up or down, that’s pretty dumb. 0:41:45 Why isn’t it factoring in everything they eat and drink and their sleep and activity 0:41:47 and the whole works? 0:41:51 Someday we’ll have multimodal algorithms, but we’re not there yet. 0:41:55 Well, so let’s go back to where we started, you know, a visit to the doctor in the future. 0:42:01 And like the good news is that the doctor doesn’t have to do any of the typing or recording, 0:42:06 AI is sort of figuring out the diagnosis, and the doctor has all the time now to actually 0:42:09 be empathetic and communicate, which is great. 0:42:11 But is that now all that’s left? 0:42:15 No, no, not at all, because of human touch. 0:42:18 So when you go to see a doctor, you want to be touched. 0:42:21 That’s the exam part of this. 0:42:28 People, when they get examined for their heart and you don’t even take off their shirt, they 0:42:30 know, they know there’s a shortcut going on. 0:42:31 Yeah, that’s interesting. 0:42:41 They want a thorough exam because they know that that’s part of the real experience. 0:42:44 And so what we’re talking about is the exam may change. 0:42:46 Like, you know, for example, I don’t use a stethoscope. 0:42:50 I use a smartphone ultrasound and do an echocardiogram. 0:42:54 And I show it to the patient as we’re doing it together in real time, which the person would 0:42:55 never see otherwise. 0:42:59 And by the way, they wouldn’t know what lub-dub looks like or sounds 0:43:02 like, but you sure can show them. 0:43:09 So the tools of the physical exam may change, but the actual hands-on aspects of it and 0:43:15 the interaction with the person, the patient, that’s the intimacy. 0:43:16 And we’ve lost that too.
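To illustrate the multimodal point made above (a glucose algorithm that only looks at the trend versus one that also folds in meals, sleep, and activity), here is a minimal sketch on synthetic data. The features, units, and generative assumptions are all hypothetical; it only shows the mechanical step of combining several signals into one forecasting model.

# Minimal sketch (not from the episode): a trend-only vs. multimodal forecast.
# Synthetic data; hypothetical features and generative story.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
glucose_now   = rng.normal(110, 25, n)   # mg/dL
glucose_slope = rng.normal(0, 1.5, n)    # recent trend
carbs_eaten   = rng.gamma(2.0, 20.0, n)  # grams in the last hour
sleep_hours   = rng.normal(7, 1.2, n)
steps_last_hr = rng.gamma(1.5, 800.0, n)

# Made-up target: glucose one hour from now, influenced by all the signals.
glucose_next = (glucose_now + 10 * glucose_slope + 0.4 * carbs_eaten
                - 0.005 * steps_last_hr - 2 * (sleep_hours - 7)
                + rng.normal(0, 8, n))

X = np.column_stack([glucose_now, glucose_slope, carbs_eaten,
                     sleep_hours, steps_last_hr])
X_tr, X_te, y_tr, y_te = train_test_split(X, glucose_next, random_state=0)

# Trend-only baseline vs. a multimodal model that uses every signal.
baseline = GradientBoostingRegressor().fit(X_tr[:, :2], y_tr)
multimodal = GradientBoostingRegressor().fit(X_tr, y_tr)
print("trend-only R^2:", round(baseline.score(X_te[:, :2], y_te), 3))
print("multimodal R^2:", round(multimodal.score(X_te, y_te), 3))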
0:43:23 You know, the physical exams have really become very much a detraction from what they used 0:43:24 to be. 0:43:25 I mean, we need to get back to that. 0:43:28 That’s what people want when they go see a doctor. 0:43:33 And people have deprecated exams because essentially they said they weren’t of value, but it sounds 0:43:36 like what was being done was not the part that needed to be done. 0:43:41 Well, when you’re dealing with analog tools, you know, they can be so superseded by 0:43:42 the things we have today. 0:43:46 And when you’re sharing them with the patient, so here’s what you have. 0:43:51 And then you send them the video files or the metrics that they can look at, you know, 0:43:54 when they get home and get more familiar with their body. 0:44:00 It’s not only the physical exam that happens instantaneously in the encounter, but the 0:44:05 ability to have that archived data that people can go back to, so they learn about themselves. 0:44:08 That’s all part of that awareness that’s important. 0:44:11 And you know, you talked about Back to the Future, there might be another sci-fi analogy. 0:44:15 I think there’s some Star Trek episodes like this where actually the group that has the 0:44:18 highest technology is the one where the technology is invisible. 0:44:19 Yeah. 0:44:23 And it sounds like that’s where all of this is going, it’s going to be in the background. 0:44:24 That’s right. 0:44:28 You really are interacting with a person and this person now has just these powers that 0:44:29 they couldn’t have before. 0:44:30 Yeah. 0:44:31 I’m with you all the way. 0:44:32 Well, thank you so much. 0:44:33 This has been fantastic. 0:44:34 I really enjoyed it.
with Eric Topol (@EricTopol) and Vijay Pande (@vijaypande)
Artificial intelligence is coming to the doctor’s office. In this episode, Dr. Eric Topol, cardiologist and chair of innovative medicine at Scripps Research, and Vijay Pande, a16z general partner on the Bio Fund, have a conversation around Topol’s new book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. What is the impact AI will have on how your doctor engages with you? On the nature of the doctor’s visit as a whole? How will AI impact not just doctor-patient interactions, but diagnosis, prevention, prediction, medical education, and everything in between?
Topol and Pande discuss how AI’s capabilities for deep phenotyping will shift our thinking from population health to understanding the medical health essence of you, how the industry might respond and the challenges in integrating and introducing the technology into today’s system, and ultimately, what the doctor’s visit of the future might look like.
0:00:03 The content here is for informational purposes only, 0:00:05 should not be taken as legal business tax 0:00:07 or investment advice or be used to evaluate 0:00:10 any investment or security and is not directed 0:00:14 at any investors or potential investors in any A16Z fund. 0:00:18 For more details, please see a16z.com/disclosures. 0:00:21 – Hey, I’m Andrew Chen from Andreessen Horowitz 0:00:24 and today we have Justin Kan, who is one 0:00:26 of our repeat entrepreneurs that we are very excited 0:00:28 to be working with on Atrium. 0:00:31 And so we’re gonna talk a bunch about 0:00:33 what is it like to be a repeat entrepreneur? 0:00:34 I think we were just going through the list. 0:00:37 There’s like five different companies on there 0:00:39 and you’ve learned a ton from every single one. 0:00:42 And so we’re gonna do a series of sort of compare 0:00:45 and contrast across quite a number of topics. 0:00:47 But as a very first step, 0:00:50 I think it’d be awesome to have Justin talk about 0:00:53 some of the companies that you’ve been involved in. 0:00:56 And I know the company you were running when we first met 0:00:57 where you were running around with a camera on your head 0:01:00 is not actually even your first one. 0:01:01 There’s one before that called Kiko. 0:01:03 So why don’t you talk about Kiko first? 0:01:04 – Sure, yeah. 0:01:06 So I’ve been an internet entrepreneur here 0:01:09 for the last 14 years since 2005. 0:01:11 Our very first company was called Kiko. 0:01:13 It was kind of like Google Calendar, 0:01:16 but it came out one month before Google Calendar came out 0:01:20 and wasn’t really that good if I’m honest about it. 0:01:23 And so that company didn’t work out super well. 0:01:26 We ended up fire-selling it on eBay 0:01:28 after several failed acquisition attempts 0:01:30 with the Silicon Valley players. 0:01:34 And then after we did that, we started another company. 0:01:39 This one was even less well thought out than Kiko was, actually, 0:01:42 which was, the idea was we would create our own 0:01:45 live video reality TV show on the internet. 0:01:48 I do think that tapped into several things 0:01:49 that were in the zeitgeist 0:01:50 that actually have become popular now. 0:01:54 But unfortunately, we as talent in our own show 0:01:56 were not very entertaining and not very popular. 0:01:58 And so we launched this live streaming show. 0:02:00 We called it Justin TV. 0:02:03 ’Cause I was the only one of our four founders 0:02:05 who was stupid enough to put the camera on his head 0:02:09 and be like, I’ll make myself the subject of this show. 0:02:12 And you literally wore, I remember you wore a backpack. 0:02:15 – Yeah, so this was 2007. 0:02:19 So it was pre-iPhone, pre-good cellular internet. 0:02:21 So we had this computer in a backpack 0:02:25 with like multiple cell phone modem connections. 0:02:27 And we hooked it up to a camera. 0:02:30 So there was a camera, this computer virtualized the video 0:02:33 as a webcam, basically sent it over to the server. 0:02:36 And then we had this very hacky way of streaming it out 0:02:39 to the millions of people watching, 0:02:40 actually not millions, hundreds, 0:02:43 but the people who were watching at home, 0:02:46 they were following along, it was actually pretty fun. 0:02:48 They could text, we put our number up there, 0:02:51 they would text me, usually fairly offensive things actually.
0:02:55 But then eventually we launched this show, 0:02:57 people started coming because they were like, 0:02:59 what is this guy doing, this is crazy. 0:03:02 They were like, you’re very boring, I hate your show, 0:03:04 but I want to create my own live video stream. 0:03:05 So how are you doing it? 0:03:07 And then the light bulb kind of went off 0:03:10 and we said, aha, let’s create a live video platform. 0:03:12 And that became Justin TV, the platform. 0:03:15 And then after that, we ran that for a couple of years, 0:03:17 I’ve condensed it because it’s a very long story, 0:03:19 but we raised a bunch of money, 0:03:21 ran it for a couple of years, 0:03:24 hit the nuclear winter of video startups 0:03:26 where all the other video startups in Silicon Valley died. 0:03:30 We were pretty scrappy and survived on like a ramen budget. 0:03:33 Eventually decided we needed to pivot to some new ideas. 0:03:36 And from there we incubated a few ideas internally. 0:03:38 One of them was a video app called Socialcam, 0:03:40 which we spun off and eventually sold to Autodesk. 0:03:45 And another one was a site that we, 0:03:48 my co-founder really thought of, 0:03:50 which was, the idea was, 0:03:53 let’s focus on the video game related video on our site. 0:03:57 And that became Twitch, which kind of grew and grew and grew. 0:03:59 We eventually pivoted the entire company to Twitch. 0:04:02 My co-founder Emmett was the CEO of the company 0:04:06 and eventually sold it to Amazon for $970 million in 2014. 0:04:08 About five years ago. 0:04:10 Along the way I started some other companies. 0:04:11 I started this company called Exec, 0:04:14 which is in the errand running/home cleaning space. 0:04:16 It’s kind of like a Handy or Homejoy. 0:04:19 We actually ended up selling it to Handy, 0:04:22 which through an act of God sold to Angie’s List this year. 0:04:26 And then more recently, 0:04:28 I’d been a partner at Y Combinator for a couple of years 0:04:30 and then incubated a few companies. 0:04:34 One of those was this video Q&A app called Whale. 0:04:37 And then just fast forwarding all the way 0:04:38 up till the present day, 0:04:42 decided I want to really put all my eggs back in one basket, 0:04:45 a precariously thrown-around basket. 0:04:47 And so I decided I’d start a new company. 0:04:48 That idea was Atrium, 0:04:51 which is a technology enabled law firm for startups, 0:04:53 really trying to solve all the problems that I had 0:04:56 as an entrepreneur when dealing with legal. 0:04:58 To make legal faster, more price predictable, 0:05:00 more transparent for me, the business owner, 0:05:02 you know, the business manager. 0:05:05 And that’s what we set out to do about two years ago. 0:05:06 It’s going pretty well. 0:05:08 We’re serving a bunch of startups here in Silicon Valley 0:05:09 with all of their needs. 0:05:12 And it’s been great, but I’m sure we’ll get to that. 0:05:13 – Awesome, yeah. 0:05:18 Well, I think one place where I’m going to start on this 0:05:23 is a lot of the advantages of being a repeat entrepreneur 0:05:25 are that, you know, you can raise more money 0:05:29 and there’s more, you know, maybe easier to recruit talent. 0:05:30 And there’s, you know, there’s all these advantages 0:05:32 that kind of, you know, come along with that. 0:05:35 One of the disadvantages I find ends up being that, 0:05:38 you know, there’s so many like distractions, right? 0:05:39 Like you could, there’s a million different things 0:05:40 you could do.
0:05:42 There’s a lot of things pulling at your attention. 0:05:45 I’m really fascinated by, you know, your movement 0:05:48 from going from YC and an incubator 0:05:51 where you can maybe kind of dip into a lot of little things 0:05:53 versus kind of putting all your eggs in one basket 0:05:54 and trying to like start a company. 0:05:56 Like, you know, so talk to us about that, 0:05:57 that kind of decision. 0:05:59 – Yeah, that’s a great question. 0:06:01 So once you become, you know, successful in some way 0:06:03 in Silicon Valley, whether that’s, you know, 0:06:04 you’ve been the executive at a company 0:06:06 that’s, you know, a rocket ship unicorn 0:06:08 or you’ve started a company, you know, 0:06:09 the world opens up, right? 0:06:11 People want you to be a VC. 0:06:15 They want you to, you know, work on projects with them. 0:06:18 You can start any company that you want, 0:06:19 which is great, right? 0:06:20 But there’s just the paradox of choice, 0:06:22 and focus can be a huge problem. 0:06:25 I know for a lot of friends of mine, it has been as well. 0:06:27 As an investor, as a partner at YC, 0:06:28 there were some parts I really liked. 0:06:31 You know, I loved helping early stage founders out 0:06:34 and like really working with them on these problems that, 0:06:36 like I felt like for five to 10% of them, 0:06:37 it was like life changing, right? 0:06:39 Like me helping them out in a way, 0:06:40 I came up with some great idea 0:06:42 or helped them at a critical moment. 0:06:43 That was life changing. 0:06:44 But then there was a large majority of them 0:06:46 that probably could have been listening 0:06:48 to a YouTube video of me or this podcast 0:06:50 and like you get the same information, right? 0:06:54 And so I didn’t feel like the feedback cycle 0:06:56 was fast enough also as an investor. 0:06:57 And I didn’t really feel ultimately, 0:06:58 like after the first couple of years, 0:07:00 that I was continuing to learn and grow. 0:07:02 And you know, I’m still in my 30s. 0:07:03 I was like, I need to do something 0:07:05 where I’m going to be forced to grow. 0:07:09 And really the number one vehicle for personal growth 0:07:11 that I’ve ever experienced in my life has been startups. 0:07:13 You know, so I went back to what I knew 0:07:17 and I really decided that in order to grow as a founder, 0:07:19 you know, we had a pretty big outcome with Twitch 0:07:21 and I’d seen a lot of different things, 0:07:24 in order to really grow to the next level, 0:07:26 I would have to do something that potentially 0:07:27 could be even bigger. 0:07:29 And so I really felt like it deserved, 0:07:30 that deserved my full attention. 0:07:33 And so that’s kind of how I decided. 0:07:35 I don’t necessarily think it’s the right answer 0:07:37 for everyone because what I didn’t mention 0:07:39 was that I had selective memory at the time 0:07:42 and I forgot just how painful starting a startup could be. 0:07:45 And so for the first couple months of Atrium, 0:07:46 I was like, oh man, this is a dream. 0:07:47 I’ve like finally leveled up. 0:07:48 I learned all these skills. 0:07:51 I’ve like, I made it as a founder. 0:07:53 And then, boom, reality set in. 0:07:56 And of course, nothing ever goes according to plan. 0:07:58 There were pains, there were struggles. 0:08:01 And that was, you know, that’s part of the journey. 0:08:03 But then the good part is, of course, 0:08:06 when you experience pain, that is a catalyst for learning.
0:08:08 And so I really got what I wanted in the end, 0:08:10 which was being forced to learn. 0:08:11 – Right, that’s great. 0:08:13 And I know one of the big differences 0:08:16 among some of the companies that you started 0:08:17 in the past versus Atrium, 0:08:20 and something that you’ve talked a lot about is, 0:08:21 you had this sort of succession 0:08:24 of very like consumer oriented startups, 0:08:26 sort of like watching other people play video games. 0:08:28 Like that’s as consumers, basically. 0:08:33 And so very interestingly, Atrium is a B2B thing. 0:08:36 And why did you choose B2B? 0:08:37 Was it just for novelty? 0:08:39 Was it just to push yourself? 0:08:43 Or do you think that there’s something different in mind 0:08:47 that maybe takes advantage of some of your newfound skills? 0:08:48 – Yeah, well, Twitch, I guess, is really the, 0:08:50 it’s like the ultimate consumption thing, 0:08:51 ’cause it’s you’re consuming someone 0:08:53 consuming video games. 0:08:54 – Right, right. 0:09:00 – For me, I felt like, maybe this was an analysis 0:09:01 I’ve done like retroactively, 0:09:03 but I think I’ve had this discussion 0:09:05 with a couple of people, with multiple-time founders. 0:09:07 And when you’re an early stage founder, 0:09:09 when we started Kiko and then Twitch, 0:09:10 when we started Twitch, we were like, 0:09:12 or Justin TV, it was like we were 23 years old. 0:09:14 And we had no skills. 0:09:17 I never had a real full time job in my life. 0:09:18 And even though we were programmers, 0:09:20 we were like new college grad programmers. 0:09:22 We were not good. 0:09:24 We were horrible managers. 0:09:26 We basically had nothing going for us. 0:09:28 When you have nothing going for you, 0:09:31 except for your, like, willingness to put in, 0:09:35 like, long hours and a lot of blood, sweat, and tears, 0:09:38 then you should focus on things that are like, 0:09:40 where there’s a lot of market risk, 0:09:43 because ideas where there’s market risk, 0:09:45 you can potentially win those, right? 0:09:50 Now, as someone who has abilities and skills, 0:09:53 where I’ve learned skills over the last 14 years, 0:09:57 you wanna focus much more on like execution risk things. 0:09:59 And so I felt like B2B startups 0:10:01 are more about execution risk. 0:10:03 I felt like this legal market, 0:10:05 particularly, was already a big market, 0:10:06 and you just have to figure out 0:10:08 how to do it 10 times better, right? 0:10:09 And I felt like I had a roadmap 0:10:11 for how to do that in my head. 0:10:14 And so that’s why I felt that this was a better use of time. 0:10:17 ’Cause some of the consumer startups that I had incubated 0:10:18 and played around with post-Twitch, 0:10:20 actually it was really hard to find product market fit, right? 0:10:22 Like, I don’t have any advantage 0:10:25 in finding product market fit with a consumer app 0:10:27 more than the 22 year old Justin. 0:10:28 – You might have a disadvantage. 0:10:29 – Yeah, I probably have a disadvantage 0:10:31 ’cause I’m like already more set in my ways. 0:10:34 I’m like less in tune with the culture. 0:10:36 I’m like, I don’t know what the kids are doing. 0:10:40 So, would the Justin of today invent the Twitch of today? 0:10:41 Like, I don’t think so, right? 0:10:43 Like I’m an old guy now, man. 0:10:44 (laughing) 0:10:45 It’s game over for me. 0:10:48 – I wanna unpack this market risk, execution risk.
0:10:49 ’Cause that’s obviously, 0:10:51 it’s such an important distinction, 0:10:55 but very colloquial, kind of, in our understanding of it. 0:10:57 What do you mean by market risk, 0:10:59 and sort of, maybe new entrepreneurs 0:11:00 can have an advantage in a market, 0:11:02 in tackling something with a new market? 0:11:04 How do you know if an idea has a lot of market risk? 0:11:05 – Sure, that’s great. 0:11:08 So, Twitch and Atrium are the perfect examples, right, almost. 0:11:10 So, Twitch, it’s like when we started, 0:11:12 when we pivoted Justin TV to Twitch, 0:11:13 or even Justin TV is a good example, 0:11:15 but Twitch is the best one probably. 0:11:17 When we pivoted Justin TV to Twitch, 0:11:20 nobody believed that this was a market, right? 0:11:22 No one believed, no investors, 0:11:25 very few of our even internal employees believed, 0:11:27 and even the founders were skeptical. 0:11:29 Emmett deserves the credit here, ’cause he had belief, 0:11:31 but a lot of the other co-founders were skeptical. 0:11:34 I’m like, does this exist as a business? 0:11:37 And so, the good part is that the competition there 0:11:39 was very low, right? 0:11:41 There weren’t experienced entrepreneurs being like, 0:11:43 this is gonna be a huge business, we should compete. 0:11:47 So, really it was, the entire thing was market risk 0:11:49 and figuring out, do we have product market fit, 0:11:50 how do we build product market fit? 0:11:51 – ’Cause in that case, 0:11:53 you’re trying to be the first in the category. 0:11:54 There’s no substitutes really. 0:11:57 You’re watching someone else play Street Fighter 0:11:59 at the arcade, that’s a substitute. 0:12:00 – Exactly, you have a lottery ticket, right? 0:12:01 – You have a lottery, yeah. 0:12:04 – It’s a lottery ticket, and if you pivot a bunch of times 0:12:05 and listen to your customers, 0:12:06 you might be buying more and more lottery tickets. 0:12:08 But your lottery tickets are just as valuable 0:12:11 as the experienced entrepreneur’s lottery tickets 0:12:12 that he’s buying. 0:12:14 So, he’s a fool to play that game, 0:12:16 and I don’t think you see as many experienced entrepreneurs 0:12:18 playing that same game. 0:12:20 Whereas, with execution risk businesses, 0:12:23 my lottery ticket now is way bigger than the guy 0:12:26 who’s like the 22-year-old Justin, right? 0:12:30 So, for a B2B startup, I know, oh, I can attract talent. 0:12:32 I can hire a sales team. 0:12:34 I can raise capital. 0:12:36 And so, it’s a lot more, for something 0:12:39 where it’s very established that that’s gonna be a business, 0:12:41 or some business is gonna be in there, 0:12:45 it’s like he’s stupid to play against me, you know? 0:12:47 – Right, that makes sense. 0:12:48 Well, you know, and I always find it funny 0:12:51 that in the consumer startup world, 0:12:54 that if you look at the last kind of decade of hits, 0:12:56 if you were to tell people, oh, yeah, 0:12:58 the biggest hits are gonna be this app 0:13:00 that lets you get in strangers’ cars, 0:13:02 this app that lets you stay at someone 0:13:04 you don’t know’s like house, 0:13:07 an app where you swipe left and right 0:13:09 in order to meet people on the internet, 0:13:12 and an app that lets you watch other people play video games. 0:13:16 You would be like, that’s the craziest list of 0:13:18 billion-dollar companies I’ve ever heard. 0:13:21 And yet, that is actually how the consumer 0:13:23 internet actually, you know, 0:13:25 the ecosystem unfolds, it’s insane.
0:13:27 – It takes a lot of people who have nothing to lose 0:13:30 to discover those ideas, right? 0:13:33 – Right, yeah, and so one of the things, 0:13:35 one of the clear advantages in all of this 0:13:38 is that, you know, let’s talk about fundraising, 0:13:42 and the decision on whether or not to raise 0:13:44 a bunch of money out of the gate, 0:13:48 versus doing the kind of, you know, 0:13:51 ramen profitable, you know, cockroach thing, right? 0:13:54 That’s sort of like, you know, one common contrast, 0:13:55 and then obviously you had a very unique 0:13:57 fundraising strategy as well. 0:13:59 So maybe talk about that decision, 0:14:01 and then kind of how you ended up pursuing it. 0:14:03 – Look, I’m not convinced that raising a ton of money 0:14:05 out of the gate is the right strategy. 0:14:08 You know, Silicon Valley is littered 0:14:10 with the dead bodies of these companies, 0:14:13 you know, the Juiceros, and the Kools, 0:14:15 and like all these companies that have raised 0:14:17 a ton of money, and then like they, 0:14:20 when you have a ton of money, you spend a ton of money, right? 0:14:23 Now there’s other companies like the Jet.coms 0:14:24 that, you know, they made it work. 0:14:28 So, you know, I’m not convinced it’s the worst strategy ever, 0:14:29 but I’m not convinced it’s the best. 0:14:32 But for me personally, you know, 0:14:34 where it’s an execution risk business, 0:14:37 I’m too rich to like, fuck around with the like, 0:14:39 you know, okay, I’m just gonna do a seed round, 0:14:42 and like, and just like, take a long time, right? 0:14:45 Like for me, speed to market and execution 0:14:47 was really important, and I felt also like 0:14:48 this market really supported and required it, 0:14:51 because, you know, it is the legal space, 0:14:54 there is, you know, a lot of value 0:14:57 in making sure that the clients and the talent, 0:15:00 the attorney talent, and other legal providers, 0:15:03 like, think this company’s gonna be around, right? 0:15:05 So that’s very important. 0:15:06 And so, you know, this is a strategy, 0:15:08 I don’t necessarily recommend to anyone 0:15:11 who can raise a ton of money that this is the right strategy, 0:15:13 it’s just the strategy that we picked. 0:15:16 In terms of the tactics of like how we did our round, 0:15:18 especially our Series A, you know, my idea was, 0:15:22 because VCs are kind of naturally adjacent to legal, 0:15:25 right, like, for example, when you fund a company, 0:15:27 they need someone to help them on the legal side 0:15:30 with all of the, you know, paperwork 0:15:32 and the execution of that funding round. 0:15:35 We felt like getting a lot of VCs on our side 0:15:37 would be a good tactic, and so we ended up going out 0:15:40 and raising money for our seed round from, you know, 0:15:44 over 90 different investors from all over Silicon Valley, 0:15:46 because I felt like it would be really good 0:15:48 to have those investors on our side 0:15:50 and recommending Atrium as a channel partner, 0:15:54 and so that was our tactic there.
0:15:56 – Right, and I think it’s something where, 0:16:00 you know, to your point, if something 0:16:03 that you’re working on is primarily execution, 0:16:05 then that means that, you know, you can, 0:16:08 there are times and places where you can use money 0:16:10 to solve it versus, it seems like, you know, 0:16:12 part of the market risk thing is it just, you know, 0:16:14 to your point, it sort of lets you buy more lottery tickets, 0:16:17 but it may not accelerate the process 0:16:19 of actually doing it, right, and so I think that, 0:16:21 that sort of feels like one main difference, 0:16:23 and then the other one is, you know, 0:16:25 just to build on what you’re saying, is that 0:16:28 if you are in an industry where trustworthiness 0:16:31 is really important, then being well-capitalized is key, 0:16:33 you know, the same way you wouldn’t, you know, 0:16:36 if you’re gonna, you know, for example, 0:16:39 you know, build a FinTech startup where, you know, 0:16:41 you’re gonna ask consumers to trust their money 0:16:44 with you, you know, like you wanna be legit, 0:16:46 you wanna be well-capitalized, you wanna have, like, 0:16:48 you know, super strong executives 0:16:50 and board members and investors, and like, 0:16:52 and that’s a strategy, kind of, that’s a little bit, 0:16:54 kind of, like, self-perpetuating as well. 0:16:55 – Yeah, that’s a great example. 0:16:57 If you’re gonna build, like, a new bank, right, 0:16:59 like an online, or like a mobile-first bank, 0:17:01 which, I think, some people are doing, 0:17:05 would you wanna raise, you know, 0:17:07 a $1 million seed round, or a $5 million seed round, 0:17:11 or a $50 million Series A out of the gate? 0:17:12 Obviously, if it’s available to you, 0:17:15 you want more money, ’cause you know people need banks, right? 0:17:17 It’s just a matter of can you do it better, right? 0:17:19 That’s a perfect example, and there’s, you know, 0:17:20 there’s many others here in Silicon Valley. 0:17:22 I think we’ve actually shifted more 0:17:25 as the cycle has changed over the last 10 years, 0:17:26 the tech cycle. 0:17:29 We’ve shifted more to these execution-risk startups, 0:17:31 and so, you know, you’ve consequently seen, 0:17:33 I mean, it’s a chicken and egg thing, really, 0:17:35 what came first, but like, you’ve seen these more and more, 0:17:37 like, bigger rounds, I’d say, 0:17:39 that are supporting these companies. 0:17:41 – Well, you know, one of my, 0:17:44 one of the partners here, Chris Dixon, 0:17:46 has talked about the idea that, you know, 0:17:50 if you have a set of problems that has not been able 0:17:52 to get solved and improved in, you know, 0:17:55 the 20 years of the modern internet, 0:17:59 then maybe all the techniques that we pioneered 0:18:01 in the last, you know, decade plus, 0:18:04 like being asset-light and just throwing software at it, 0:18:06 and just shipping really quickly, and being really lean, 0:18:09 like, maybe those techniques don’t work for a reason 0:18:12 in healthcare and fintech and legal services 0:18:14 and real estate, and like, you know, 0:18:15 some set of those things, right? 0:18:17 And so then, very quickly, you have to think, 0:18:19 okay, well, you know, if those techniques don’t work, 0:18:20 otherwise, it would have, 0:18:21 someone would have tried it already, 0:18:23 it would have, it would have sort of, you know, 0:18:25 a little bit like the efficient market hypothesis. 0:18:26 – Yeah, that’s the thing.
0:18:27 – It would have kind of like happened already, 0:18:31 like, maybe you need a foundationally different approach. 0:18:34 And so, I think that is actually one of the reasons why 0:18:35 there’s more of these like, quote unquote, 0:18:37 full-stack startups that are going after 0:18:39 these really, really difficult areas. 0:18:40 ’Cause like, you know, otherwise, 0:18:41 you wouldn’t be able to do it. 0:18:44 – Yeah, well, we’re running the experiment right now. 0:18:46 – Yes, yeah, right, no, I think that’s right, 0:18:48 I think that’s right. 0:18:50 So, you know, one of the fun topics, 0:18:52 one thing that I have a tremendous amount of respect 0:18:55 for you on is you’re very, you know, 0:18:58 you’re always on the leading edge, 0:19:00 thinking about, you know, self-improvement, 0:19:03 how to sort of, you know, improve your own, you know, 0:19:07 personal performance at work, you know, at home, et cetera. 0:19:10 And obviously, one of the big things about, you know, 0:19:12 running a company is that it is enormously stressful. 0:19:13 Right? – Yeah. 0:19:16 – And so, talk to us about like, you know, 0:19:18 when you were a first-time entrepreneur, 0:19:21 kind of Kiko, Justin TV, you know, 0:19:22 how did you think about, you know, 0:19:26 managing the stress of, you know, running a company? 0:19:27 And what was your approach there? 0:19:29 And then let’s talk about kind of like, 0:19:30 the new and improved Justin now, you know, 0:19:32 kind of 10 years later. 0:19:33 – Yeah, so in the early days, you know, 0:19:37 10 years ago, I was not doing anything 0:19:39 in terms of like improving myself. 0:19:44 In fact, I think I used to think about people’s attributes, 0:19:47 maybe not your skills so much in terms of like, you know, 0:19:48 it’s your skills at programming or stuff like that, 0:19:50 but more of like your attributes. 0:19:52 Like, I don’t know if you ever played Dungeons and Dragons, 0:19:55 but when you create a character in Dungeons and Dragons, 0:19:57 you roll this like 20-sided die and, you know, 0:20:00 your strength, it’s like 14, your intelligence, 0:20:02 you roll it and it’s six or whatever, 0:20:04 and that’s what you have, you can’t change it. 0:20:06 And so I felt like people’s attributes 0:20:07 were kind of like that. 0:20:09 And so I never worked on that very much, 0:20:11 self-improvement stuff outside of, you know, 0:20:12 like yeah, I became a better programmer 0:20:14 ’cause we were programming, you know, 0:20:17 but I didn’t work on things to like make myself, 0:20:20 I don’t know, smarter, right, or harder working, 0:20:22 or like awake more hours of the day, right, 0:20:25 like alert more hours a day or anything like that. 0:20:26 So it was very haphazard, you know, 0:20:28 everything was kind of accidental. 0:20:30 Like we would, you know, I was not dealing with stress 0:20:31 well at all. 0:20:35 If there was a problem in the company, 0:20:37 I would be very emotionally avoidant about it, 0:20:41 or I would like drown my sorrows in like alcohol, right, 0:20:43 which is not a very good coping mechanism at all. 0:20:46 And so, and then like, you know, 0:20:47 in terms of even just down to like 0:20:49 what we were like eating at lunch, 0:20:51 I was, you know, we were talking about this earlier, 0:20:53 but I would, we would have like pizza every day 0:20:56 at lunch at Justin TV, and then I’d like fall asleep. 0:20:57 Like I’d go into like a carb coma, 0:21:00 like a carb coma in the afternoon, like every day.
0:21:04 And so now more recently, like with Atrium, 0:21:06 it’s been a tremendous vehicle 0:21:08 for my own personal discovery, 0:21:10 because I experienced the stress again. 0:21:12 You know, I was like, oh my God, it’s stressful. 0:21:16 Again, this is crazy, why is it so stressful? 0:21:18 And so I started looking for ways to deal with that, 0:21:21 and I really, I mean, 0:21:24 I had a number of things that started working for me 0:21:26 after I started really exploring it, 0:21:27 a number of things that started working for me 0:21:31 starting last year, you know, everything from, 0:21:33 a friend of mine recommended a daily gratitude journal. 0:21:35 So I’ve just been doing that every day, 0:21:37 writing this gratitude journal, the Five Minute Journal. 0:21:39 You write down the three things every morning 0:21:40 that you’re grateful for. 0:21:42 And that seems like a very simple thing 0:21:44 and kind of hokey, actually, when I first heard about it. 0:21:48 But what I realized was it helps recontextualize 0:21:49 all the ups and downs that you experience 0:21:51 as an entrepreneur, especially, I mean, 0:21:53 the downs really, like throughout the day, 0:21:54 they’re not as big of a deal, 0:21:55 because in the morning, you’re writing down, 0:21:58 like, wow, I have this opportunity. 0:22:00 You know, I remember the day we came in and pitched 0:22:02 you guys here, I wrote in my gratitude journal, 0:22:05 I get to pitch, you know, Andreessen Horowitz. 0:22:06 That’s amazing, right? 0:22:09 Even whatever happens, that’s an incredible opportunity. 0:22:13 It puts me in the top 0.01% of people in the world 0:22:15 with that opportunity, you know? 0:22:18 So that was like pretty amazing. 0:22:20 And then, you know, every day there’s something, 0:22:21 like if you really think about it, 0:22:24 there’s so many amazing things that happen to you 0:22:27 as a human being here in Silicon Valley. 0:22:29 Even just like actually one thing I think about in the morning, 0:22:31 oftentimes, it’s like the supply chain, 0:22:33 the global supply chain to deliver coffee 0:22:35 from like around the world, 0:22:38 so that I can grind up fresh coffee 0:22:41 and do like a pour over in my Chemex in the morning. 0:22:41 – Right. 0:22:42 – That’s amazing. 0:22:43 – It is amazing. – Like if you think about it. 0:22:44 – It’s totally amazing. – Yeah. 0:22:45 – Right. 0:22:47 – So the gratitude journal really working for me. 0:22:47 And then another thing– 0:22:49 – You’re still eating pizza for lunch every day? 0:22:52 – Yeah, so diet, another thing, I stopped eating pizza. 0:22:54 – So, entirely? 0:22:56 – No, well I started eating a ketogenic diet, 0:22:58 which is a high-fat diet, 0:23:01 but really the reason for me is like, 0:23:05 I just don’t get as tired in the day anymore. 0:23:06 And so that’s been really helpful. 0:23:07 And last year I was experimenting, 0:23:11 I did like some one meal a day days, 0:23:13 you know, like four days a week. 0:23:15 Some of these weird like Jack Dorsey diets, 0:23:17 kind of Jack Dorsey light or whatever, 0:23:19 but it’s been good for me. 0:23:21 You know, you have to do what your body feels like, 0:23:21 what feels good. 0:23:24 So, you know, that’s some diet and exercise, 0:23:25 pretty religious about exercising, 0:23:28 you know, try to do something every day during the work days. 0:23:29 – Yeah. 0:23:31 – And then the last thing that was really big 0:23:33 is meditation for me.
0:23:35 I started off just with Headspace, you know, 0:23:38 I’m not the type of person that people, you know, 0:23:40 assume would be a heavy meditator 0:23:42 or very introspective or anything like that. 0:23:46 But for me, just like starting off with Headspace last year, 0:23:48 and then now I’ve been doing transcendental meditation, 0:23:51 which is kind of what Ray Dalio talks about 0:23:53 in Principles, that’s worked really well for me 0:23:58 to just be more present during the day in my life. 0:24:02 It’s pretty amazing, an amazingly profound effect. 0:24:03 – I feel like your Twitter stream 0:24:05 is part of your gratitude journal. 0:24:06 – Yeah. 0:24:07 – ’Cause I read your Twitter stream and I’m like, 0:24:09 oh, this is like very philosophical. 0:24:11 You know, it’s not like just pulling out like, 0:24:15 oh, here’s a blurb from the latest S-1 or something. 0:24:17 Like you’re like, you know, you’re like sharing ways 0:24:20 that the startup community can, you know, 0:24:21 think about themselves. 0:24:23 – Yeah, the way I think about it is like, 0:24:25 if you want something to be part of your identity, 0:24:26 talk about it. 0:24:28 If you want to learn something, teach it. 0:24:29 You know, and I really believe that. 0:24:31 So for me putting out, you know, 0:24:33 what I’ve been doing on the mindfulness side, 0:24:37 and for myself to be more emotionally kind of stable 0:24:40 throughout the days and weeks and months 0:24:42 as a startup founder, that’s been really valuable to me. 0:24:44 So if I can spread that to other people, 0:24:46 it’s gonna help reinforce it as an identity for me, 0:24:48 but it’s also gonna hopefully help those people as well. 0:24:49 – Right. 0:24:51 You know, one thing I’ve been working on a lot as well, 0:24:54 the last thing I’ll say, is just working on realizing 0:24:56 your attachments. In the very early days 0:24:59 of Justin TV and Twitch, Kiko and all these companies, 0:25:03 I had a huge ego attachment to the outcome of the company. 0:25:04 – Yeah. 0:25:06 – Right, my identity and the companies 0:25:07 were like very intertwined. 0:25:10 And more recently, I realized that was the same 0:25:11 at Atrium actually. 0:25:13 Like again, I was like creating that same pattern, 0:25:15 but it was like, it’s a very unhealthy pattern. 0:25:18 So what I realized was I needed to start telling, 0:25:21 reminding myself that no matter what happens 0:25:24 with this company, I’m not gonna be any happier 0:25:25 or any less happy in the long run. 0:25:27 There might be a short burst of unhappiness 0:25:30 if it fails or happiness if it succeeds amazingly, 0:25:33 but you’re not gonna get any happier. 0:25:35 Really, if you’re relying on outside things, 0:25:38 external factors to drive your inner happiness, 0:25:40 you will always be disappointed in the long run. 0:25:43 And the funny thing is I’ve actually run the experiment, 0:25:45 not an A/B test, but a linear experiment on that, 0:25:48 because we’ve had more and more success over time. 0:25:51 Like we started off really paying ourselves, 0:25:54 I think with Kiko, $7,800 a month, 0:25:57 but we only paid ourselves every other month. 0:26:00 And so then we raised funding for Justin TV 0:26:02 and we were able to make a little salary, 0:26:05 then we became profitable, we made a lot more, 0:26:07 and then we sold one company, then we sold a lot of companies. 0:26:09 And so we kind of ramped over time.
0:26:14 And after the basics of Maslow’s hierarchy of needs 0:26:17 were taken care of, I felt like I could go out to eat. 0:26:18 I don’t know if that’s on one of those Maslow’s 0:26:20 hierarchy of needs, but after that basic thing, 0:26:22 I could afford to go out to eat. 0:26:24 None of it ever mattered. 0:26:26 It never made me sustainably more happy. 0:26:28 And so just reminding yourself of that 0:26:30 and trying to remove those attachments in your mind, 0:26:31 it’s easier said than done, 0:26:34 but having that as like an active practice 0:26:36 has been really important to me. 0:26:38 – Let’s talk about mentorship as part of that, right? 0:26:41 So when you’re building Kiko, Justin TV, 0:26:45 and you’re a first time entrepreneur, 0:26:49 there’s a lot of experienced people 0:26:52 who are kind of, like, ahead of you in the thing. 0:26:55 And so that’s fantastic ’cause you can learn a ton. 0:26:57 Would love to hear kind of who you thought of 0:26:59 as your kind of lifelong mentors, 0:27:01 people that have helped you for a long time. 0:27:04 And then the other problem that’s super interesting is, 0:27:06 then you start Atrium and you’ve had 0:27:08 some major successes behind you. 0:27:11 And then in some ways, the number of people 0:27:13 you can learn from is a much smaller pool 0:27:16 and sort of like, how do you kind of curate your mentorship? 0:27:17 Maybe how has it changed over time? 0:27:20 That’s a broad topic, but would love to hear your opinion. 0:27:21 – Sure, absolutely. 0:27:23 So like the best part about Silicon Valley, in my opinion, 0:27:26 is that there are people here who have done it before 0:27:27 who are willing to help you. 0:27:30 We would never have made it here, even day one, 0:27:35 without people who helped us when it was like economically 0:27:37 a bad waste of time. 0:27:40 We weren’t like, didn’t look like a hot prospect 0:27:41 company or anything, right? 0:27:44 So those people, well, Paul Graham, great example, 0:27:45 founder of Y Combinator, Paul Buchheit, 0:27:47 who is like a partner at Y Combinator, 0:27:50 but he invented Gmail inside of Google. 0:27:52 These are people who invested in us very early on, 0:27:54 mentored us very early on and helped us out. 0:27:57 And that was pretty amazing. 0:28:01 And really that ethos perpetuates itself, 0:28:03 because then as a partner in Y Combinator, 0:28:05 even today as an entrepreneur, 0:28:08 where there’s like lots of conflicting, 0:28:09 competing interests for my time, 0:28:12 I always make sure to spend a little bit of time 0:28:14 mentoring other startups, 0:28:16 because it’s kind of like a pay it forward thing. 0:28:19 And you do the thing that people did for you 0:28:21 when you were younger. 0:28:25 – One of the really obvious sources of mentorship 0:28:26 is actually peer mentorship, right? 0:28:28 And some of the folks that kind of came up with you 0:28:32 at the same time and ended up running 0:28:34 really interesting companies on their own. 0:28:37 And I know they all live in DeBose with you 0:28:39 in San Francisco. 0:28:40 Talk to us about some of your friends 0:28:42 that you consider your mentors. 0:28:44 – Yeah, so it’s great to have friends 0:28:46 who are kind of on the same path in a lot of ways, 0:28:48 but are one or two steps ahead of you. 0:28:49 And that’s still the case. 0:28:52 Obviously Emmett, my co-founder who’s still running Twitch, 0:28:53 it’s over a thousand people. 0:28:56 I think it’s like 1500 people or something like that.
0:28:59 My brother, who’s COO of Cruise, 0:29:01 co-founder of Cruise, the self-driving car company. 0:29:02 And they’re like over a thousand people. 0:29:05 And Steve, the founder of Reddit, 0:29:08 they’re like 400 something people or whatever. 0:29:11 So, you know, seeing what their problems are, 0:29:13 you know, obviously the problems are always the same, 0:29:14 actually the problems are always like, 0:29:16 I don’t have the right alignment among my team 0:29:19 and I don’t have the right executive team. 0:29:21 It’s usually some variation of those things. 0:29:24 But, you know, hearing it from the horse’s mouth 0:29:28 is super helpful in terms of making decisions for me. 0:29:31 And then, you know, even outside of DeBose, 0:29:32 sometimes I venture into SoMa, 0:29:33 and having some of those, you know, 0:29:35 kind of early YC founders who have really made it, 0:29:38 you know, like Drew from Dropbox, for example, 0:29:41 and just knowing like how do they think about 0:29:44 everything, all those things, executive hiring, et cetera, 0:29:45 it’s been really helpful to me. 0:29:48 So, luckily here in Silicon Valley, 0:29:49 I have a lot of great resources. 0:29:51 And the last thing I’ll shout out is, 0:29:55 I have this new executive coach, Matt Mochary, 0:29:57 who’s like amazing. 0:29:58 This guy is the guru. 0:30:01 He’s, you know, mentoring a lot of different fast-growing 0:30:02 startups around here. 0:30:04 I just talked to him for like a 360 thing. 0:30:05 – Yes. – Yeah, yes. 0:30:07 – This guy. – Yeah, that was great. 0:30:09 – Incredible, highly recommend, can’t speak highly enough of him, 0:30:11 he’s like changing my life. 0:30:13 So, I feel like he’s an angel sent down from Heaven 0:30:16 to teach me, finally, after 14 years, 0:30:19 how to like manage a system. 0:30:21 And I’ve learned a lot from him, so. 0:30:21 – That’s great. 0:30:23 – Those are kind of three sets, you know, 0:30:26 executive coach, the peer mentors, 0:30:28 and then kind of those early stage mentors 0:30:29 that I had back in the day. 0:30:30 – That’s great. 0:30:33 And I know one of the topics that you must end up 0:30:36 talking about often is that when you’re building something 0:30:39 that has a little bit more just execution risk, 0:30:42 you know, you’ve raised some real money 0:30:44 to kind of get started. 0:30:46 A lot of it ends up being sort of like organizational 0:30:47 complexity, company culture. 0:30:51 I know this is the big, big, big, big area of focus for you. 0:30:53 And that’s something that is very different 0:30:54 when you’re trying to build something for the long run 0:30:56 versus when you’re kind of just trying to find 0:30:59 product market fit, and it’s kind of like 10 people, 0:31:00 and you’re just like, is this even going to make it? 0:31:02 Like, let’s not even work on this. 0:31:03 – Yeah. 0:31:05 – So talk to us about kind of how your approach has changed 0:31:07 on building the company and your leadership style. 0:31:09 – That is a great question because it’s something 0:31:10 I think a lot about. 0:31:15 So I had never thought, before a couple of months ago, 0:31:18 and this may sound stupid in a way, 0:31:20 but I’d never thought, what is the kind of company 0:31:22 that I want to show up to work at? 0:31:25 So 14 years later, finally thinking about it.
0:31:28 But the real answer is like, when you’re a 22 year old 0:31:30 just starting your company, or you’re in Silicon Valley 0:31:33 and you’re thinking funding rounds and exits, 0:31:35 you’re always thinking, what’s the next milestone? 0:31:37 Like, how do I just claw my way desperately, 0:31:39 however, whatever it takes, how do I get to that 0:31:40 next milestone? 0:31:41 It’s do or die. 0:31:43 And for some, you know, oftentimes it is do or die. 0:31:46 You don’t have the luxury, oftentimes, of thinking 0:31:48 about what kind of, or you feel like you don’t have 0:31:50 the luxury of thinking about what kind of company 0:31:51 you want to build culturally. 0:31:54 And so, I started Atrium actually very much 0:31:56 in the same way, but like, what are the metrics milestones 0:31:57 we want to hit? 0:31:58 What’s the next metrics milestone? 0:32:00 What do we need to get to for a Series B 0:32:02 or a next round of funding? 0:32:05 And so, the problem with that was that a year in, 0:32:07 I realized, oh, shoot, I need to like, 0:32:10 there are like a lot of things that I’ve neglected 0:32:13 that are actually affecting our ability to execute. 0:32:15 And the number one thing there was, 0:32:18 what’s the culture going to be? 0:32:21 People didn’t know, like, what’s the alignment aspect 0:32:22 of like, what are we building? 0:32:24 What kind of company are we? 0:32:25 Who are we? 0:32:26 What are we building? 0:32:28 And then, what’s the culture? 0:32:29 How are we being intentional about it? 0:32:32 So, we did a lot to work on that, 0:32:35 ran through a collaborative values process over a year ago 0:32:36 where we brought the whole company together 0:32:38 to figure out what we care about. 0:32:40 And then more recently, I’ve been thinking about, 0:32:43 after a lot of this self-work in terms of making myself 0:32:46 feel kind of consistently good every day 0:32:49 and remove my attachments to the outcome, 0:32:53 I’ve realized there’s a set of principles 0:32:55 that I want to implement at the company. 0:32:57 And that I think that execution will flow 0:32:59 from those things, right? 0:33:01 If we build a company that has a high empathy 0:33:04 for each other, where we have care for each other, 0:33:06 where people are very collaborative, 0:33:10 where people feel like the locus of control 0:33:12 for what’s happening is inside of them, 0:33:13 instead of outside of them. 0:33:16 Things are happening through them, not to them. 0:33:19 I think that all the execution will actually flow from that. 0:33:21 One of the things I never understood before, 0:33:23 which I feel like I really understand now, 0:33:26 is that saying that culture eats strategy, right? 0:33:28 I felt like I had a very good strategy with Atrium, 0:33:31 but I forgot about culture in that first part of the company, 0:33:34 and now I realize how important it is. 0:33:37 So one of the things I’ll say that we’re doing 0:33:39 is recently I read this book called 0:33:41 “The 15 Commitments of Conscious Leadership,” 0:33:42 which is an amazing book, 0:33:46 but it’s really about building a certain type of company, 0:33:49 what the authors call a conscious company. 0:33:54 But I would say it’s centered around that locus of control question. 0:33:56 Do you have radical responsibility 0:33:59 for what’s going on in your life, in your company, 0:34:00 regardless of who you are? 0:34:03 And I read that book, and I realized this is the type 0:34:05 of company that I want to work at.
0:34:08 It’s a company populated by team members 0:34:11 who really believe in these principles. 0:34:13 And so we’re kind of going through a process 0:34:15 of trying to implement that at our company. 0:34:18 And really, culture is one of the highest priorities. 0:34:20 It went from something that I didn’t prioritize 0:34:22 to my top priority. 0:34:24 – Yeah, well, and I think it’s really interesting 0:34:29 because you’d started Kiko out of school, right? 0:34:33 And so unlike some folks who maybe, 0:34:35 they go and they work at Google or Facebook or something, 0:34:36 and they maybe have a template 0:34:39 for the company culture they want to create, 0:34:41 this is something that you kind of had to learn 0:34:44 and adjust over many, many kind of company iterations 0:34:46 of various companies that you built. 0:34:48 – That’s right, we had never worked at a place 0:34:51 with a good culture, or any culture, right? 0:34:53 Because we had always worked at our own company, 0:34:55 so we were just making it up as we went along. 0:34:57 When you’re not intentional about your culture 0:34:59 or the type of company you want to be, 0:35:02 then the culture ends up being the accidental collection 0:35:05 of good and bad choices and personality quirks 0:35:08 and good and bad behaviors that your founding 0:35:12 and executive teams propagate, right? 0:35:13 And it’s accidental, right? 0:35:17 And oftentimes there’s things that are not good behaviors, 0:35:19 they get propagated culturally. 0:35:22 And oftentimes people justify it because they’re like, 0:35:26 well, they conflate the correlation with causation, right? 0:35:29 They’re like, because we, you know, 0:35:34 have behaviors where maybe we’re like a low empathy 0:35:35 company, let’s say, but they don’t call it that. 0:35:36 They’re just like, we make decisions 0:35:38 based on like meritocracy, right? 0:35:40 And they’re like, but the best idea is gonna win, 0:35:43 but then maybe that’s like because that’s a behavior 0:35:44 that they’ve propagated, 0:35:46 but that might not be the real reason 0:35:49 why they’ve actually been winning, right? 0:35:51 I think a lot of companies in Silicon Valley 0:35:55 kind of succeed despite their management actually, 0:35:56 not because of it. 0:35:59 And what I mean by that is like the idea was so good 0:36:02 that a bunch of 25-year-olds could run the company, right? 0:36:04 That core product market fit was so good, 0:36:05 it was just a rocket ship 0:36:07 and then people were just trying to hang on. 0:36:09 Now, eventually I think they do figure it out, 0:36:10 but oftentimes in those early days, 0:36:12 I think it’s actually quite, you know, 0:36:14 not intentional and oftentimes not that good. 0:36:15 – Yeah. 0:36:18 You know, in our conversation today, 0:36:20 we’ve talked about all the things that you’ve changed. 0:36:21 – Yeah. – Right? 0:36:24 You’ve changed, you know, from consumer to B2B, 0:36:27 you’ve changed, you know, how fast you fundraise, 0:36:30 you know, there’s been a lot of different changes. 0:36:33 You know, is there anything that you feel like you, 0:36:35 like there’s a core that you’re like, 0:36:37 okay, there’s this thread that I’m trying to do the same 0:36:39 between all the companies, 0:36:41 or is it just really like iterating very quickly 0:36:44 and, you know, you’re doing a lot that’s different?
0:36:46 – Well, I think that core ethos of, yeah, 0:36:50 iterating quickly, you know, that’s like a YC ethos, 0:36:52 that’s something that we really did, 0:36:54 carry on at the very beginning. 0:36:59 So speed was, you know, something that’s pretty important. 0:37:04 I think that really being helpful in the community, 0:37:06 not just your startup, I mean, including your startup, 0:37:08 but also the community of startups, 0:37:09 that’s something that we learn, 0:37:11 the behavior we learn from, you know, the early days of YC, 0:37:14 and even like our community of friends who were founders, 0:37:17 who all became successful, you know, helped each other out, 0:37:19 and then now today, you know, it’s an ethos 0:37:21 that would take the atrium to really build a company 0:37:24 that’s, you know, kind of for startups, you know, 0:37:26 by startups that helps out these, you know, 0:37:28 fast-growing startups. 0:37:30 So that ethos is probably something 0:37:33 that’s pretty similar, you know, kind of similar 0:37:35 to like what you guys have at Andreessen, right? 0:37:38 Which is like, if we can, you know, be the most helpful 0:37:40 in terms of providing these networks of services, 0:37:43 that is something that is going to, you know, 0:37:45 kind of pay dividends for us as a brand. 0:37:47 You know, that’s something I believed personally, 0:37:50 and then also at each room throughout my entire career. 0:37:51 – Right, right. 0:37:54 Yeah, I mean, as you know, 0:37:56 one of the things that’s great about the Bay Area 0:37:59 is that it ends up being this very long-running, 0:38:02 relationship-driven place where, you know, 0:38:05 you meet people like, I mean, we met like 10 plus years ago, 0:38:07 right, and that’s the kind of interesting thing 0:38:11 where there’s many, many cases where you can work together. 0:38:14 And so, you know, focusing on value creation, 0:38:16 as opposed to like, how am I going to try to position myself 0:38:18 to like capture the most value? 0:38:21 Like, I think that, you know, like that certainly runs, 0:38:22 like I think it’s one of the very special things 0:38:24 about the Bay Area. – Absolutely. 0:38:26 If I think about where we’re at, you know, 0:38:29 where all the people who I, you know, saw in the early days, 0:38:32 like 13 years ago, all these different founders, 0:38:33 you know, these two-person startups 0:38:35 that won’t even anything, where they’re at, 0:38:37 their company might not have succeeded, 0:38:40 but they have like created some value here in Silicon Valley 0:38:42 by being in that ecosystem, being helpful, 0:38:44 and then, you know, maybe becoming an executive 0:38:45 at someone else’s company, 0:38:47 or becoming an investor, early-stage investor 0:38:48 at another company that really worked. 0:38:51 And so, there really is like a feeling 0:38:55 of if you kind of get out what you put into this community, 0:38:57 and that’s one of the things I really love about it. 0:39:02 – How do you think about the idea that, you know, 0:39:05 when you’re first getting started, 0:39:07 you’re a first-time entrepreneur, 0:39:09 there’s kind of like low expectations, right? 0:39:11 ‘Cause you’re like, maybe this’ll work, 0:39:11 maybe this won’t work, 0:39:15 people’s expectations of you are like kind of low too, 0:39:16 ’cause they’re kind of like, I don’t know, 0:39:18 who knows, you know, Justin’s often in San Francisco 0:39:20 doing this thing, he’s running around with a camera 0:39:24 on his head, and it’s just kind of a fun thing. 
0:39:27 And then, now, several companies later, 0:39:30 because, you know, also you’ve raised money, 0:39:32 and because you’ve done a lot, et cetera, 0:39:34 you know, the expectations must be higher. 0:39:36 Like, how do you think about, you know, 0:39:38 how do you think about those expectations 0:39:39 managing your own, you know, 0:39:41 your own expectations around that? 0:39:46 – Well, I always think that every entrepreneur’s expectations 0:39:49 for themselves are very exceedingly high, right? 0:39:52 If you’re the type of person who was a PM or engineer 0:39:53 at, you know, one of these fan companies, 0:39:55 and then you’re like, I’m gonna start a startup 0:39:57 ’cause I see other people doing it, 0:39:59 then you don’t think, you don’t go into it thinking, 0:40:01 well, I’m just gonna create like a whatever, 0:40:04 something that’s like a nice small business, right? 0:40:06 You go in being like, I’m gonna raise series A 0:40:08 from Andreessen, and we’re gonna be, you know, 0:40:10 this product that just goes, 0:40:12 the next Snapchat or whatever. 0:40:16 And so I think that, like, it’s always a battle 0:40:20 against your own, like, the kind of devil 0:40:22 on your shoulders telling you, you’re not good enough, 0:40:24 you know, you’re not doing well enough, 0:40:25 you could be doing better. 0:40:27 And I think the way that you win that battle 0:40:31 is by really internalizing and realizing 0:40:36 that whatever happens, you’re gonna be fine. 0:40:38 And you’re probably gonna be the same, 0:40:40 you’re not gonna be happier or less happy, actually. 0:40:43 And now most people do not actually successfully 0:40:46 internalize that well enough, in my opinion. 0:40:48 But it is true, I firmly believe it’s true, 0:40:51 and it’s something that the sooner you start practicing 0:40:54 that in your head, really, you know, 0:40:58 feeling that and experiencing that in yourself internally, 0:41:00 then the happier you’ll be. 0:41:02 And that’s not to say that a lot of, you know, 0:41:04 people I come into contact with, like, friends even, 0:41:06 or people who work for me are like, 0:41:11 well, my drive, like, that need to win at all costs 0:41:12 is my edge. 0:41:14 But I really don’t believe, I mean, 0:41:15 maybe that’s true for other people, 0:41:17 but for me, I never found that to be the case. 0:41:21 It’s like, no, it was just like the kind of unhappiness 0:41:23 that was created around it that would make it 0:41:25 actually less sustainable for me to continue on, 0:41:27 because I was always, you know, you can’t, 0:41:28 human beings don’t wanna live in a high stress, 0:41:30 high anxiety, stay for too long. 0:41:31 That’s how you can burn out, right? 0:41:35 So startups are not, I mean, contrary to popular belief, 0:41:37 it’s not a sprint, it’s a marathon, right? 0:41:39 There’s these overnight successes, 0:41:42 Twitch came out of nowhere, eight years from 0:41:44 incorporating that company to selling it, 0:41:45 almost exactly eight years, right? 0:41:47 So that is a long time. 0:41:49 And in order to last a long time, 0:41:52 you need to figure out a way that you are okay 0:41:54 with what’s going on, and what’s going on 0:41:57 is always gonna involve bad things. 0:41:59 Like things, you know, there’s gonna be good 0:42:00 and there’s gonna be bad. 
0:42:01 So if you don’t figure out a way 0:42:04 that you psychologically are okay with that, 0:42:06 you’re gonna give up, and if you give up, 0:42:09 you’re never gonna see it to the ultimate potential 0:42:12 that whatever your startup is can be. 0:42:12 – Great. 0:42:18 I’m gonna throw in kind of two last questions in here. 0:42:23 One is, you know, what are you reading these days? 0:42:25 Do you have any sort of recommendations, podcasts 0:42:27 that you like, kind of media consumption 0:42:30 is kind of a way to learn yourself? 0:42:31 – Yeah. 0:42:33 – And anything sort of especially super impactful 0:42:37 over the last couple years that you wanna reference? 0:42:38 – Yeah, that’s great. 0:42:40 So reading a lot, actually, 0:42:43 ’cause I deleted all the entertainment apps off my phone, 0:42:45 including the browser, and I’d locked it 0:42:47 so that I can’t install new apps 0:42:48 ’cause I was a total phone addict. 0:42:49 That includes Twitch. 0:42:50 – Wait, how do you lock? 0:42:51 How do you lock your– 0:42:51 – So you can lock installing new apps, 0:42:52 you can delete the app store 0:42:54 and put a passcode lock on it. 0:42:55 – Oh. 0:42:56 – And I gave my wife the passcode, 0:42:58 or she put in a passcode, I don’t know what it is. 0:42:59 – Right. 0:43:00 – So I can’t– 0:43:01 – This is like a parental lock. 0:43:02 – It’s a parental lock, exactly. 0:43:04 So I don’t have control over my own phone anymore. 0:43:07 But the consequence of that is I read a lot more books, 0:43:09 which is good. 0:43:12 A couple of ones that have been particularly impactful to me. 0:43:15 Number one is this book called The Untethered Soul. 0:43:17 So this book changed my life. 0:43:20 It’s really about the idea that you are not 0:43:22 the thing that you think you are. 0:43:25 Most people go through life thinking they are the experiences 0:43:27 that they’ve experienced, the thoughts that they have, 0:43:29 or the emotions that they have. 0:43:33 But really, you’re just the observer of these things 0:43:34 that are happening. 0:43:36 And by creating, it’s almost like you’re watching a movie, 0:43:39 right, like a movie that has all the five senses, 0:43:40 plus emotions, plus thoughts. 0:43:45 So like a seven dimensional movie of the just end life, right? 0:43:49 And I think that was a very important message to me 0:43:52 to realize, like, and internalize, that like actually 0:43:55 these attachments, these things that I think will happen 0:43:57 that will like drive happiness, like experiences or events 0:43:59 or whatever will never actually drive 0:44:01 true internal happiness. 0:44:04 So amazing book, The Untethered Soul, highly recommend it. 0:44:07 Another book that I think made me a much better leader 0:44:10 was this book I read called Leadership and Self-Deception. 0:44:12 Amazing book. 0:44:14 The book, the premise of the book is really there’s two ways 0:44:17 to treat other people, inside the box and outside the box, 0:44:21 but really I mean like without empathy or with empathy, 0:44:25 like treating them like an object or a person, right? 0:44:28 And the idea, the fundamental idea behind the book 0:44:31 is if you treat people like an object, 0:44:33 number one, they don’t like it, right? 0:44:36 But the second thing is, if you treat people like objects 0:44:38 who are just there to fulfill something for you, right? 0:44:40 Like at work, it would be like to, you know, 0:44:43 hit some number or metric or whatever, do some job.
0:44:46 The problem is that you will lie to yourself about 0:44:50 when there’s negative situations about what your role is, 0:44:51 you’ll self deceive, you’ll say, 0:44:53 oh, this person is 100% at fault 0:44:55 and I’m 0% at fault. 0:44:58 And I found myself actually doing that a lot of times. 0:45:02 I realized, you know, I felt like I was, 0:45:07 you know, had a high degree of empathy for other people, 0:45:09 but I realized that was for people who, 0:45:10 I felt like we’re performing really good 0:45:13 where my observation was like high performance. 0:45:16 But I wasn’t, where my observation, 0:45:19 where my feeling was that there wasn’t high performance, 0:45:23 I felt like I would slip into this like low empathy mode. 0:45:25 And the problem is that when you’re in that mode, 0:45:27 you don’t admit, what are the things that I have done? 0:45:29 What, you know, Justin, what are the things I have done 0:45:30 to contribute to that situation? 0:45:34 So examples could be put someone in the wrong job, 0:45:37 didn’t give them clear enough criteria for success or failure, 0:45:39 didn’t support them with the right resources, right? 0:45:42 There’s many reasons why I could have contributed 0:45:44 to some situation failing. 0:45:47 And I found that like I would lie to myself in those situations. 0:45:51 So, you know, that was a really important book for me. 0:45:53 And those are probably two ones I really recommend. 0:45:55 – That’s great. 0:45:57 Are you doing a lot of podcasts right now? 0:45:59 – Listening to them? – Yeah. 0:46:01 – Been listening to– – That kind of entertainment. 0:46:03 – Yeah, I listened to some podcasts. 0:46:06 I’ve been listening to some of the Joe Rogan experience. 0:46:09 – Nice. – And that’s probably the, 0:46:10 this is probably it. 0:46:12 I like, I just listened to one with Alex Honol, 0:46:15 where he’s talking about climbing, you know, 0:46:16 so it’s pretty interesting. 0:46:20 – I’m gonna ask you the time travel question. 0:46:24 So if you were today, an enlightened, repeat entrepreneur, 0:46:27 to go back to yourself when, you know, 0:46:30 you’re 22, 23, just getting started doing this thing. 0:46:34 You know, what advice would you give yourself? 0:46:37 – When I’m just getting started, you know, 0:46:40 probably join Facebook. 0:46:45 No, but the, maybe the real answer, 0:46:47 I’ve actually, you know, I don’t really regret 0:46:49 any of the, like, economic choices or anything. 0:46:52 I think I’ve had this tremendous opportunity 0:46:55 to, like, build and discover these new things 0:46:59 and build companies and I wouldn’t trade it for anything. 0:47:01 I think the thing I could have done better 0:47:03 or, like, learned, you know, back then is, 0:47:07 you know, self-improvement is a thing. 0:47:08 You should probably, like, work on that. 0:47:09 Maybe that’s number one. 0:47:10 The second thing would be– 0:47:11 – Stop eating pizza at work. 0:47:12 – Yeah, stop eating pizza at work. 0:47:14 Number three would be, 0:47:18 you should, you know, things take time. 0:47:20 Like, don’t be in such a, you know, 0:47:22 it’s not, like, a one-year, one-and-done, 0:47:24 like, no, you’re a billionaire, you know? 0:47:26 Like, if you look at the, let’s say, 0:47:29 any sort of, like, the Amazon share price 0:47:30 or, like, market cap over time, right? 0:47:32 It looks, like, even through, like, 0:47:36 the last couple years looks like an exponential curve, right? 
0:47:39 And so, you know, if Bezos had been, like, at year, 0:47:41 you know, like, 15, 0:47:44 which is a long time to be starting a company, 0:47:46 like, oh, man, I made it, I’m done, 0:47:47 like, I’m gonna retire, like, 0:47:50 well, you know, the company would look a lot different, 0:47:52 right, so, you know, things take time 0:47:54 and I have to constantly remind myself that. 0:47:57 I think humans, you know, here in Silicon Valley, 0:47:59 especially, but then human beings in general, 0:48:01 are wired to, like, always want the new, 0:48:02 you know, look for the new thing. 0:48:03 What’s new? 0:48:05 What are you, like, what’s your new thing? 0:48:07 Something that everybody’s asking about in Silicon Valley? 0:48:10 They’re always, you know, that’s a question here. 0:48:12 But the best entrepreneurs here, 0:48:13 the ones who have created lasting companies 0:48:16 and lasting value, they stick with their thing 0:48:19 for decades, you know? 0:48:21 And that’s what impresses me most now 0:48:25 and I wish that I had kind of realized that, you know, 0:48:25 before. 0:48:28 – Awesome. 0:48:29 – Better late than never. 0:48:30 (laughing) 0:48:31 – Awesome. 0:48:33 Justin, thank you for coming by. 0:48:33 – Yeah. 0:48:35 – Thanks for having me. – Really good discussion.
Want actionable advice from a founder who has built multiple tech companies and has invested the time to be open, introspective, and transparent about lessons learned?
In this episode (which originally aired as a YouTube video), a16z General Partner Andrew Chen (@andrewchen) talks with Justin Kan (@justinkan). Justin is a repeat entrepreneur who co-founded Kiko Software (a Web 2.0 calendar that pre-dated Google Calendar by 4 years); Justin.tv (a lifecasting platform); Twitch.tv (a live streaming platform for esports, music, and other creatives, now part of Amazon); Socialcam; and now Atrium, a software-powered law firm for startups.
Justin reflects on his journey and shares 10 + 1 lessons he’s learned:
The paradox of choice: choosing a focus
Tradeoffs between B2B versus B2C companies
Market risk vs execution risk
Fundraising strategy: go big or stay lean?
Managing the stress of being a startup CEO (again!)
Seeking out mentors, coaches, and peers for help
Intentionally designing a culture to avoid the pitfalls of “culture eating strategy”
Things he’s still doing in his latest startup—and things he’s doing very differently
Managing higher expectations
What he’s reading and listening to
Bonus: advice he’d give his 20-year old self
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/ digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
0:00:02 Hi, and welcome to the A16Z podcast. 0:00:03 I’m Hannah. 0:00:05 Deep learning has come to the life sciences. 0:00:09 Lately it seems every week a published study comes out with code on top. 0:00:14 In this episode, A16Z general partner on the bio fund Vijay Pande and Bharath Ramsundar talk 0:00:17 about how AI and ML is unlocking the field in a new way. 0:00:22 In a conversation around their recently published book, Deep Learning for the Life Sciences, 0:00:25 written along with co-authors Peter Eastman and Patrick Walters. 0:00:30 The book aims to give developers and scientists a toolkit on how to use deep learning for 0:00:35 genomics, chemistry, biophysics, microscopy, medical analysis, and other areas. 0:00:36 So why now? 0:00:40 What is it about ML’s development that is allowing it to finally make an impact in this 0:00:41 field? 0:00:43 And what is the practical toolkit? 0:00:44 The right problems to attack? 0:00:46 The right questions to ask? 0:00:51 Above and beyond that, as this deep learning toolkit becomes more and more accessible, biology 0:00:54 is becoming democratized through ML. 0:00:57 So how is the hacker ethos coming to the world of biology? 0:01:01 And what might open source biology truly look like? 0:01:05 So Bharath, we spent a lot of time thinking about deep learning and life sciences. 0:01:10 It’s a great time, I think, for people to become practitioners in this space, especially 0:01:15 for people maybe that’s never done machine learning before from the life sciences side, 0:01:18 or maybe people from the machine learning side to get into life sciences. 0:01:22 But maybe the place to kick it off is what’s special about now? 0:01:23 Why should people be thinking about this? 0:01:28 The challenge of programming biology has been that we don’t know biology, and we make up 0:01:33 theoretical models, and the computers are wrong, and biologists and chemists understandably 0:01:36 get grumpy and say, “Why are you wasting my time?” 0:01:40 But with machine learning, the advantage is that we can actually learn from the raw data. 0:01:43 And all of a sudden, we have this powerful new tool there. 0:01:45 It can find things that we didn’t know before. 0:01:50 And this is why it now is the time to get into it, really to enable that next wave of breakthroughs 0:01:51 in the core science. 0:01:57 The part that still blows me away is just how fast this field is moving, and it feels 0:02:03 like it’s a combination of having the open source code on places like GitHub and arXiv, 0:02:07 and there’s a paper a week that’s impactful when it used to be maybe a paper a quarter 0:02:09 or a paper a year. 0:02:13 And the fact that code is coming with the paper, it’s just layering on top. 0:02:17 That seems to me to be the critical thing that’s different now. 0:02:21 I think when you can clone a repo off GitHub, you also don’t have new insights just because 0:02:23 I’m using a new language. 0:02:27 And now that thousands of people are getting into it, I think all of a sudden you’ll find 0:02:32 lots of semi-self-taught biologists who are really starting to find new, interesting things. 0:02:33 And that is why it’s exciting. 0:02:38 It’s like the hacker ethos, but kind of coming into the bio world, which has typically been 0:02:40 much more buttoned down now. 0:02:44 I think anyone who can clone a repo can start really making a difference. 0:02:47 I think that’s going to be where the real long-term impact arises from these types 0:02:48 of efforts.
0:02:53 You don’t need a journal subscription to get archive or to get the code, which is actually 0:02:54 that alone is kind of amazing. 0:02:59 It wasn’t that long ago where a lot of academics offer was sold, and it was maybe sold for 0:03:01 $500, which is very material. 0:03:02 That’s one piece. 0:03:08 You connect that to the concept of now AI or ML can unlock things in biology. 0:03:12 Then biology is becoming democratized as kind of your point. 0:03:17 And so let’s talk about that because we’re still learning biology collectively. 0:03:20 What is it about deep learning in biology now? 0:03:22 Because biology’s old, machine learning is old. 0:03:23 What’s new now? 0:03:26 Deep learning has this question all over the place. 0:03:27 Why does it work now? 0:03:30 The first neural nets kind of popped out in the 1950s. 0:03:32 And I think it’s really a combination of things. 0:03:38 I think that part of it is the hardware, really, the hardware, the software, the growth of kind 0:03:42 of rapid linear algebra stacks that have made it accessible. 0:03:47 I think also an underappreciated part of it is the growth of the cloud and the internet 0:03:48 really. 0:03:51 Neural nets are about as janky now as it used to be in the ’80s. 0:03:55 The difference is that I can now pull up a blog post where someone says, “Oh, these things 0:03:56 are janky. 0:03:57 Here’s the 17 things I did. 0:03:59 I can copy, paste that into my code.” 0:04:01 And all of a sudden, I’m a neural net expert. 0:04:02 It’s all quite that easy. 0:04:07 It turns it to a tradecraft almost that you can learn by just working through it. 0:04:09 That’s why the deep learning tool again has been accessible. 0:04:14 Then you get to biology, and the question is why biology, why now? 0:04:17 And I think you’re actually the question’s a little deeper. 0:04:21 I think that it’s really about, I think, representation learning. 0:04:27 So we have now reached this point where I think we can learn representations of molecules 0:04:28 that are useful. 0:04:33 This has been something that in the science of chemistry, we’ve been doing a long time. 0:04:38 There’s been all sorts of hand-encoded representations of parts of molecular behavior that we think 0:04:39 are important. 0:04:44 But I think now using the new technology from image processing, from word processing, we 0:04:47 can begin to learn molecular representations. 0:04:50 To be fair, I actually don’t think we’ve really broken through there. 0:04:55 If you look at what’s happening in images or text, there are five years ahead of us. 0:05:00 Well, let me break in here because just for the listeners to give a sense for why representation 0:05:05 is important, and one of my pet examples is that if I gave anybody, say, two five-digit 0:05:07 numbers to add, it’d be trivial. 0:05:12 If I gave you those same five-digit numbers in Roman numerals and you wanted to add them, 0:05:14 the representation there would make this insane. 0:05:15 And what would you do? 0:05:21 Well, you would convert into appropriate representation where the operations are trivial or obvious. 0:05:26 And then the operation is done, and maybe it re-encodes, auto-encodes back to the other 0:05:27 representation. 0:05:28 So this is the problem. 0:05:32 It’s like when you have a picture, representations are obvious because it’s pixels, and computers 0:05:34 love pixels. 0:05:39 And maybe even for DNA, DNA is like a one-dimensional image, and so you have bases that are kind 0:05:40 of like pixels. 
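To make the representation point concrete, here is a small Python sketch of the Roman-numeral analogy above: the addition itself never changes, it only becomes trivial once the numbers are converted into a representation the machine handles natively, and converted back afterwards. The two helper functions are purely illustrative.

ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(numeral):
    # Subtractive pairs (IV, IX, XL, ...) are handled by peeking at the next symbol.
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN[ch]
        if i + 1 < len(numeral) and ROMAN[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

def int_to_roman(n):
    table = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'), (100, 'C'), (90, 'XC'),
             (50, 'L'), (40, 'XL'), (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    out = []
    for value, symbol in table:
        while n >= value:
            out.append(symbol)
            n -= value
    return ''.join(out)

# Convert to the friendly representation, do the trivial operation, convert back.
a, b = 'MMXIX', 'XLVIII'                                 # 2019 and 48
print(int_to_roman(roman_to_int(a) + roman_to_int(b)))   # MMLXVII (2067)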
0:05:44 We used to joke early days that we would just take a photograph with a small molecule and 0:05:46 then use all the other stuff, but that’s kind of insane too. 0:05:51 And so with the right representation, things become transparent and obvious with the wrong 0:05:53 representation becomes hard. 0:05:54 This is really at the heart of machine learning. 0:05:59 It’s that there’s something about the world that I want to compute on, but computers only 0:06:06 accept very limited forms of input, zero ones, tack strings, like simple structures. 0:06:11 Whereas if you take a molecule, a molecule is like a frighteningly complex entity. 0:06:16 So one thing that we often don’t realize is that until 100 years ago, we barely had any 0:06:17 idea what a molecule was. 0:06:23 It’s this alarmingly strange concept that although we see little diagrams in 10th grade 0:06:26 chemistry or whatever, that isn’t what a molecule is. 0:06:31 It’s a much weirder, weirder quantum object, dynamic, kind of shifting, flowing. 0:06:33 We barely understand it even now. 0:06:37 So then you just really start asking the question of what is water, for example? 0:06:40 Is it the three characters, H2O? 0:06:43 Is it two hydrogens and oxygen? 0:06:45 Is it some quantum construct? 0:06:47 Is it this dynamic vibrating thing? 0:06:49 Is it this bulk mass? 0:06:52 There’s so many layers to kind of the science of it. 0:06:55 So what you really want to do is you’ve got to pick one, and this is where it gets really 0:06:56 hard, right? 0:07:01 Like, if I’m thirsty, what I care about in water is a glass of water. 0:07:06 If I’m trying to answer deep questions about the structure of Neptune, I might want a slightly 0:07:08 different representation of water. 0:07:14 The power of the new deep learning techniques is we don’t necessarily have to pick a representation. 0:07:17 We don’t have to say water is X or water is Y. 0:07:22 Instead, you say, let’s do some math, and let’s take that math and let the machine really 0:07:27 learn the form of water that it needs to answer the question at hand. 0:07:32 So one form of mathematical construct is thinking of a molecule as a graph. 0:07:37 And if you do this, you can begin to do these graph-deep learning algorithms that can really 0:07:41 extract meaningful structure from the molecule itself. 0:07:46 We’ve learned, finally, that here’s a general enough mathematical form we can use to extract 0:07:52 meaningful insights about molecules or these critical biological chemical entities that 0:07:56 we can then use to answer real questions in the real world. 0:08:00 What I think is interesting here in particular is that so much has been developed on images, 0:08:03 and there’s a lot of biology that’s images, and so we could just spend the whole time 0:08:08 talking about images, and it could be microscopy or radiology and tons of good stuff there. 0:08:12 But there’s a lot of biology that’s more than images, and molecules is a good example. 0:08:16 For a long time, it seemed like deep learning was being so successful in images that that’s 0:08:17 all it really did. 0:08:23 And if you could take your square peg and put in whatever holes you got, it would work. 0:08:26 What you’re talking about for graphs is kind of an interesting evolution of this, because 0:08:30 a graph and an image are different types of representations. 0:08:35 But at a technical level, convolutional networks for images or graph convolutions for graphs 0:08:39 are kind of a sort of borrowing a concept at a higher level. 
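Since the conversation lands on treating a molecule as a graph, here is a minimal sketch of what that representation looks like in practice, assuming RDKit is installed (pip install rdkit). The particular atom features and the single hand-rolled convolution step are illustrative choices, not the featurization of any specific model discussed in the book.

import numpy as np
from rdkit import Chem

def mol_to_graph(smiles):
    """SMILES string -> (per-atom feature matrix, adjacency matrix)."""
    mol = Chem.MolFromSmiles(smiles)
    # One row per atom: atomic number, degree, formal charge, aromatic flag.
    features = np.array(
        [[atom.GetAtomicNum(), atom.GetDegree(), atom.GetFormalCharge(), int(atom.GetIsAromatic())]
         for atom in mol.GetAtoms()],
        dtype=np.float32)
    adjacency = Chem.GetAdjacencyMatrix(mol).astype(np.float32)   # bonds become edges
    return features, adjacency

features, adjacency = mol_to_graph("c1ccccc1O")    # phenol: 7 heavy atoms
print(features.shape, adjacency.shape)             # (7, 4) (7, 7)

# One very simplified graph-convolution step: each atom mixes in its neighbours,
# then a weight matrix (random here, learned in a real model) transforms the features.
a_hat = adjacency + np.eye(adjacency.shape[0], dtype=np.float32)  # add self-loops
weights = np.random.randn(4, 8).astype(np.float32)
hidden = np.maximum(a_hat @ features @ weights, 0.0)              # ReLU(A_hat . X . W)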
0:08:44 The biology version of machine learning is starting to sort of grow up and starting to 0:08:49 not just be a direct copy of what was done with images and in other areas, but now starting 0:08:50 to be its own thing. 0:08:55 A five-year-old can really point out the critical points in an image, but you almost 0:08:58 need a PhD to understand the critical points of a protein. 0:09:04 So you have this like dual kind of weights, a burden of understanding, so it’s taken 0:09:09 a while for the biological machine learning approach to really mature because we’ve had 0:09:13 to spend so much time even figuring out the basics. 0:09:18 But now we’re finally at this point where it feels like we are diverging a little bit 0:09:23 from the core trunk of what people have done for images or text. 0:09:26 In another five years, I’m going to be blown away by what this thing does. 0:09:28 It’s going to understand more deeply. 0:09:35 So we kind of have this sort of connection between democratization of ML, ML into biology, 0:09:38 democratization into biology, but I don’t think we’re there yet. 0:09:42 I think for ML, I think there really is a sense of democratization. 0:09:49 You could code on your phone and do some useful things or certainly on a laptop, a cheap laptop. 0:09:51 But for biology, what is missing? 0:09:53 One is data, and there’s a fair bit of data. 0:09:58 In the book, we talk about the PDB, we talk about other data sets, and there are publicly 0:10:02 available data sets, but somehow that doesn’t get you into the big leagues. 0:10:06 So like if in this vision of democratizing biology, what’s left to be done? 0:10:12 In some ways, the democratization of ML is a teensy bit of an illusion even. 0:10:17 It’s because that the core constructs were mathematically invented, that there is this 0:10:24 convolutional neural net or its cousins, the LSTM or the other forms of core mathematical 0:10:28 breakthroughs that have been designed, that you can take these building blocks and just 0:10:30 apply them straight out. 0:10:35 In biology, as you pointed out earlier, I think we don’t have those core building blocks 0:10:36 just yet. 0:10:41 We don’t know what the LEGO pieces are that would enable a newcomer to really start to 0:10:44 do breakthrough work. 0:10:45 We’re closer than we were. 0:10:49 I think we’ve had the beginnings of a toolbox, but we’re not there yet. 0:10:53 Let’s think about what happened on the ML side as inspiration for the Bio side. 0:10:54 How much is it driven through academia? 0:10:56 How much driven through companies? 0:10:59 Because what I’m getting at is that there’s a lot of IO in academia. 0:11:02 I don’t know if we’re seeing that being made open sourced in companies. 0:11:07 We’re getting to this really weird set of influences where in order for companies to 0:11:09 gain influence, they need to open source. 0:11:14 This is why 10 years ago, I can’t imagine that Google would have open sourced TensorFlow. 0:11:20 It would have been core proprietary technology, but now they know that if they don’t do that, 0:11:24 developers will shift to some other platform by some other company. 0:11:25 Exactly. 0:11:30 It’s weird that the competitive market forces are driving democratization. 0:11:35 Most of high torch basically are Facebook-based and TensorFlow is from Google. 0:11:37 Let’s say Google kept TensorFlow proprietary. 0:11:39 What would be so bad for them if they did that? 0:11:41 What if everybody outside used high torch? 
0:11:45 I think there’s a really neat analogy to the financial sector. 0:11:50 A lot of financial banks have masses of functional programs that they keep under the hood, under 0:11:51 the covers. 0:11:55 If you look at Jane Street, or I believe Standard and Chartered, or a few other of these other 0:12:00 big institutions, lots and lots of functional code hiding behind those walls. 0:12:04 But that really hasn’t really infiltrated further out. 0:12:09 This actually, I think, in the long run weakens them because it’s harder to train, it’s harder 0:12:12 to find new talent, it’s more specialized. 0:12:17 A lot of the code base at Google is proprietary, like the original MapReduce was never put 0:12:18 out there. 0:12:22 This I think has actually caused them a little bit of a problem in that new developers coming 0:12:27 in have to spend months and months and months getting up to speed with the Google stack, 0:12:32 whereas if you look at TensorFlow, it doesn’t take any time at all, someone could walk in 0:12:34 and basically be able to write TensorFlow. 0:12:36 They’ve been using it for months to years. 0:12:37 Exactly. 0:12:42 And I think at the scale that Big Tech is at, this is just like, it’s a powerful market 0:12:43 advantage. 0:12:45 They’re almost outsourcing their education process. 0:12:48 And I guess if they don’t put it out, someone else will, and then they’ll learn on their 0:12:49 platform. 0:12:52 Yes, but then maybe what is the missing part in biology? 0:12:57 We’ve got pharma, a huge force there, but they have very specific goals. 0:13:02 A lot of agricultural companies, but it’s much more disparate. 0:13:08 Yeah, it’s dramatically hard to actually take an existing organization and turn it into 0:13:10 an AI machine learning organization. 0:13:17 So one thing I’ve honestly been surprised by is that when I’ve seen companies or organizations 0:13:22 I know try to incorporate AI into their drug discovery process, it ends up taking them 0:13:27 years and years and years, because they’re fighting all these upstream battles, weeks 0:13:32 to get their old computing system to upgrade to the right version of their numerical library 0:13:35 so they could even install TensorFlow. 0:13:41 And then they had all these things about who can actually say, upgrade the core software, 0:13:44 who is it this department? 0:13:47 How much do they need to talk to the biologists, to the chemists? 0:13:52 And the fact is that pharma and existing big codes are not built this way. 0:13:57 That’s not their core expertise, whereas if you look at Facebook or Google, they’ve been 0:14:02 machine learning for almost two decades now, from the first AdWords model. 0:14:07 So in some sense, they had to change very little about their culture, like, yeah, there’s 0:14:11 a slight difference instead of this function, use that function, but whatever. 0:14:15 But the core culture was there, and I think the culture, the people, changing that is 0:14:20 going to be dramatically hard, which is why I think it will really take, I think, ten 0:14:24 years and a generation of students who have been trained in the new way to come in and 0:14:25 shift it. 0:14:26 Yeah. 0:14:27 Well, Google was a startup too, right? 0:14:31 I think, you know, the thesis was that, and is that, that startups will be able to build 0:14:32 a new culture. 
0:14:37 And I think the key thing that I think we’re seeing sort of boots on the ground is that 0:14:41 culture has to be not, here’s your data scientists or machine learning people in one room and 0:14:45 your biologists in another room; they have to be the same. 0:14:50 What’s intriguing to me is just the size of the bio market. 0:14:55 Biology is healthcare, it’s agriculture, it’s food, it could be the future of manufacturing. 0:14:59 There’s so many different places that biology plays a role to date and will play a role, 0:15:02 but it just means that I think, I think to the point we’re talking about these companies 0:15:06 just are being built right now. 0:15:12 There’s I think this whole host of challenges here because biology is hard and building kind 0:15:17 of like that selective understanding of like, you know, of the 10 best practices that existed. 0:15:19 Five are actually still best practices. 0:15:23 The other five we need to toss out a window and stick in a deep learning model. 0:15:27 That kind of very painstaking process of experimentation and understanding. 0:15:31 That I think is like where the really hard innovation is happening. 0:15:32 And that’s going to take time. 0:15:36 You’re never going to be able to replace like a world-class biologist with any machine learning 0:15:37 program. 0:15:43 A world-class biologist is typically fricking brilliant and they often bring a set of understanding 0:15:46 that no programmer or no computer scientist can. 0:15:51 Now, the flip side holds true and I think that merger, as you said, that’s where like 0:15:53 there’s power for magic dynamism. 0:15:57 One really interesting factoid I heard from an entrepreneur in the space is that, you 0:16:03 know, the best biologists that they could hire had a market rate that was lower than 0:16:10 an introductory, intermediate, you know, front-end dev and, you know, of course, front-end is 0:16:11 very hard engineering. 0:16:15 I don’t want to put that down, but there’s so many fewer of these biologists, so there’s 0:16:21 almost this market imbalance of how is it possible that, you know, you can take really 0:16:27 a world-class biologist of whom there’s maybe a couple of hundred in the world and not have 0:16:29 them be valued properly by the market. 0:16:32 So do you even out those pay scales in one company? 0:16:37 Do you like have two awkward pay ladders that coexist and create tension in your company? 0:16:41 These are the types of like really hard operational questions that almost have nothing to do with 0:16:43 the science, but at the heart of it they do. 0:16:45 Maybe it’s interesting to talk about like how we can help people get there. 0:16:49 Yeah, so what’s like the training they should be doing, maybe we could even go like super 0:16:50 nuts and bolts. 0:16:52 So I got my laptop, what do I do? 0:16:58 So I mean, like, I guess there’s a couple key packages we install, like TensorFlow, maybe 0:17:00 DeepChem, something like that. 0:17:04 Python is often already installed, let’s say on a Mac, is that it? 0:17:06 And then we start going through papers and books and code. 0:17:11 I think the first place really is to, you need to form an understanding of like what 0:17:14 are the problems even that you can think about. 0:17:19 I think if you’re not trained as a biologist, and even if you are, you might not see that 0:17:25 intersection of these are the problems where biological machine learning can or cannot work.
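For the "I got my laptop, what do I do?" step, here is a minimal getting-started sketch in the spirit of the book, assuming the DeepChem 2.x MoleculeNet loaders and GraphConvModel (pip install tensorflow deepchem). Exact names and signatures shift between DeepChem versions, so treat this as a starting point rather than canonical code.

import deepchem as dc

# Tox21: a few thousand small molecules labelled across 12 toxicity assays.
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer="GraphConv")
train_set, valid_set, test_set = datasets

# Graph-convolutional model over the molecular graphs, one output per assay.
model = dc.models.GraphConvModel(n_tasks=len(tasks), mode="classification")
model.fit(train_set, nb_epoch=10)

# ROC-AUC on the validation split, un-doing any transforms applied by the loader.
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print(model.evaluate(valid_set, [metric], transformers))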
0:17:29 And that I think is really what the book tries to teach you, as in like, what’s the frame 0:17:30 of thinking? 0:17:34 What’s the lens at which you look at this world and say that, oh, that is data coming 0:17:36 out of a microscope. 0:17:40 I should spend 30 minutes, spin up a convnet, and iterate on that. 0:17:46 This is a really gnarly thing about how I prepare my like, you know, C. elegans samples. 0:17:49 I don’t think the deep learning is going to help me here. 0:17:52 And I think it’s that blend of knowledge that the book tries to give you. 0:17:53 It’s like a guidebook. 0:17:57 When you see a new problem, you ask, is this a machine learning problem? 0:17:59 If so, let me use these muscles. 0:18:03 If it’s not a machine learning problem, well, I know that I need to talk to someone who 0:18:05 does know these things. 0:18:06 And that’s what we try to give. 0:18:08 Andrew Ng has a great rule of thumb. 0:18:12 If, you know, a human can do it in a second, deep learning can probably figure it out. 0:18:19 So start with something like say microscopy, you have an image coming in and an expert 0:18:22 can probably eyeball and say, interesting, not interesting. 0:18:24 So there’s this binary choice. 0:18:30 And there’s some arcane black box that was trained within the expert’s head and experience. 0:18:34 That’s actually the sort of thing machine learning is like made to solve. 0:18:38 So really ask yourself, like, when you see something like that, is there some type of 0:18:44 perceptual input coming in, image, sound, text, and increasingly molecules, a weird 0:18:49 new form of perception, almost magnetic or quantum, but you have perceptual input coming 0:18:50 in. 0:18:56 And is there a simple right, wrong, left, right, intensity type answer that you want 0:18:57 from it? 0:19:00 If you do, that’s really a machine learning problem at its heart. 0:19:01 Well, so that’s one type of machine learning. 0:19:06 And I think the benefit there of that, what human can do in a second, deep learning can 0:19:12 do, especially since, in principle, on the cloud, you could spin up 100,000, 10,000 servers. 0:19:15 Suddenly you’ve got 10,000 people working to solve the problem. 0:19:17 And then they go back to something else. 0:19:19 That’s just something you can’t do with people. 0:19:24 Or you’ve got 10,000 people working 24/7, as necessary, can’t do that with people. 0:19:28 But there’s another type of machine learning, which is to do things people can’t. 0:19:32 Or maybe more specifically, do things individual people can’t, but maybe crowds could. 0:19:37 So like we see this in radiology, right, where the machine learning can have accuracies 0:19:41 greater than any individual, akin to what, let’s say, the consensus would be, which would 0:19:43 be the gold standard. 0:19:47 That’s maybe the real exciting part, sort of the so-called superhuman intelligence. 0:19:49 Where are the boundaries of possibilities there? 0:19:54 One of the biggest problems really with deep learning is that you have some like strange 0:19:56 and crazy prediction. 0:20:02 Now I think that there’s a fallacy that people fall into of trusting the machine too easily. 0:20:07 Because 90% of the time that’s going to be garbage. 0:20:12 And I think that really kind of the challenge of picking out these bits of superhuman insight 0:20:16 is to know how to shave off the trash predictions. 0:20:17 Yeah. 0:20:19 Is 90% an exaggeration or is it really 90%?
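The "spend 30 minutes, spin up a convnet" triage described above is roughly this much code in TensorFlow 2.x / Keras. The image size, the layer widths, and the variable names in the commented-out training call are placeholders for your own data.

import tensorflow as tf
from tensorflow.keras import layers

# Binary "interesting / not interesting" classifier for single-channel microscopy tiles.
model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(interesting)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)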
0:20:23 I like nice round numbers, so that might have just been something I picked out. 0:20:26 But there’s like this great example, I think, in medicine. 0:20:32 So there’s scans coming in and the deep learning algorithm is doing like amazing at predicting 0:20:33 it. 0:20:38 And then like they dug into it and it turned out that the scans came from three centers. 0:20:43 One of them had like some type of center label that was like the trauma center or something. 0:20:44 There’s the other non-trauma center. 0:20:49 The deep learning algorithm had like a kindergartner told to do this, learn to identify the trauma 0:20:52 flag and flag those and uptake those. 0:20:57 So if you did this like naive statistics of blending them all out, you’d look amazing. 0:20:58 But really it’s looking for a sticker. 0:20:59 Yeah. 0:21:00 I mean, there’s tons of examples like that. 0:21:04 One with the pathologist with the ruler in there and it’s becoming a ruler detector and 0:21:05 so on. 0:21:09 Like, you know, this AUC like a sense of accuracy of close to 1.0. 0:21:14 We all got to be very suspicious of that because just running a second experiment wouldn’t 0:21:16 predict the first experiment with that type of accuracy. 0:21:18 Anything that’s too good to be true probably is. 0:21:19 Yeah. 0:21:23 I think then you get into the really subtle challenges, which is that, you know, the algorithm 0:21:28 tells me this molecule should be non-toxic to a human and should have effect on this, 0:21:29 you know, indication. 0:21:31 Do I trust it? 0:21:34 Is it possible that there’s a false pattern learned there? 0:21:37 Humans make these types of mistakes all the time, right? 0:21:42 Like if you have any type of like actual biotech, you know that there’s gonna be molecules 0:21:44 made or DCs that are disproven. 0:21:49 So you’re getting into the hard core of learning, which is that, is this real? 0:21:51 The reality is we don’t have answers to these. 0:21:55 They were really kind of trending into the edge of machine learning today, which is that, 0:21:58 is this a causal mechanism? 0:21:59 Does A cause B? 0:22:01 Is it a spurious correlation? 0:22:04 And now we’re getting to that place where humans aren’t necessarily better. 0:22:09 We talk about some techniques for interpreting, for looking at kind of what informed the decision 0:22:11 of the deep learning algorithm. 0:22:16 And we do provide a few kind of tips and tricks to start thinking about it, but the reality 0:22:19 is that’s kind of the hard part of machine learning. 0:22:20 It’s the edge. 0:22:24 The interpreting chapter is one of my favorite ones because it’s often sort of become so-called 0:22:29 common wisdom that machine learning is a black box, but in fact, it doesn’t have to be and 0:22:32 there’s lots of things to do and we are quite prescriptive there. 0:22:36 So the interpretability I think also is frankly what’s going to make human beings more at 0:22:38 peace with this. 0:22:40 And this isn’t anything unique to machine learning. 0:22:46 If you had some guru who’s just spouting off stuff and said, “Buy this stock X and short 0:22:51 the stock Y and put all your life savings into it,” you probably would be thinking, “Okay, 0:22:54 well, maybe, but why?” 0:22:57 So I think this is just human nature and there’s no reason why our interaction with machines 0:22:59 would be any different. 0:23:04 What I think is interesting is human beings are notoriously bad at causality. 0:23:07 We kind of attribute things to be causal when they’re not causal at all. 
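One concrete guard against the trauma-center-label failure mode described above is to validate across collection sites rather than across randomly shuffled rows, so a model cannot pass evaluation by memorizing which site produced each scan. This sketch uses scikit-learn's GroupKFold on synthetic stand-in data; X, y, and centers are placeholders for your own arrays.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                                   # stand-in image features
y = rng.integers(0, 2, size=300)                                 # stand-in labels
centers = rng.choice(["site_A", "site_B", "site_C"], size=300)   # which site produced each scan

# Each fold holds out entire sites, so site-specific artifacts cannot leak into the score.
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=centers):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    held_out = sorted(set(centers[test_idx]))
    print(held_out, round(clf.score(X[test_idx], y[test_idx]), 2))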
0:23:12 We do that in our lives from why did that person give me that cup of coffee to why did 0:23:13 that drug fail? 0:23:15 All these different reasons. 0:23:17 There’s two big misconceptions about machine learning. 0:23:18 One is lack of interpretability. 0:23:22 The second one is correlation doesn’t mean causation, which is true, but somehow people 0:23:26 take that to mean it’s impossible to compute causality. 0:23:30 And that’s the part that I think people have to really be educated on because there are 0:23:33 now numerous theories of causality. 0:23:36 And you could use probabilistic generative models, PGMs. 0:23:38 There’s lots of ways to go after causality. 0:23:40 The whole trick though is you need time series data. 0:23:43 What’s beautiful about biology or at least in healthcare is that we’ve got time series 0:23:45 data in many cases. 0:23:51 So now perhaps finally there’s the ability to really understand causality in a way that 0:23:55 human beings couldn’t because we’re so bad at it and machines are good at it and we’ve 0:23:56 got the data. 0:24:00 Can you think of a place where in your experience the algorithms have succeeded in teasing out 0:24:03 a causal structure that people missed? 0:24:10 Yeah, so I think in healthcare we always think about what is leading to various changes like 0:24:15 this drug having adverse effects, this diet having positive or negative effects. 0:24:20 All of these things are being understood in the category of real world evidence, which 0:24:23 is a big deal in pharma these days. 0:24:28 And if you think about it like a clinical trial is really a poor man surrogate for not 0:24:32 understanding causality because if we don’t understand causality you’ve got to do this 0:24:36 thing where it’s double blind, we start from scratch and I’m following it in time and we 0:24:37 see it. 0:24:42 If you understood causality you might be able to just get a lot of results from just mining 0:24:43 the data itself. 0:24:47 As a great example you can’t do clinical trials for all pairs of drugs. 0:24:51 I mean just doing for a single drug is ridiculously expensive and important, but all pairs of 0:24:54 drugs would never happen, but people take pairs of drugs all the time. 0:24:59 And so their adverse effects from real world data is probably the only way to do it and 0:25:03 we can actually get causality and there’s tons of interesting journal medicine papers 0:25:07 sort of saying, “Aha, we found this from doing data analyses.” 0:25:09 I think that’s just starting out. 0:25:14 Honestly, I think that bio-AI drug discovery needs to take a page from the self-driving 0:25:19 car companies, in the neighboring self-driving car world, simulators are all the rage. 0:25:25 And really because it’s that same notion of causality almost, like there’s a structure 0:25:30 to the world like pedestrians walk out, chickens, alligators, whatever, crazy thing. 0:25:34 I saw this for the picture, it happens. 0:25:39 So I think there they’ve built this amazing infrastructure of being able to run these 0:25:44 repeated experiments, almost a randomized clinical trials, but informed by real data. 
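As one concrete flavor of mining real-world data for drug-pair adverse effects, the simplest classical screen is a reporting odds ratio computed from a 2x2 table of spontaneous adverse-event reports. The counts below are invented for illustration, and an elevated ratio is a hypothesis-generating signal, not evidence of causation.

import math

# 2x2 table of adverse-event reports (all counts are made up for illustration):
#                        event reported   event not reported
reports_with_pair    = (40,              960)       # patients on both drugs
reports_without_pair = (200,             98_800)    # everyone else

a, b = reports_with_pair
c, d = reports_without_pair

ror = (a / b) / (c / d)                              # reporting odds ratio
se_log_ror = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(ror) - 1.96 * se_log_ror)
ci_high = math.exp(math.log(ror) + 1.96 * se_log_ror)
print(f"ROR = {ror:.1f}, 95% CI ({ci_low:.1f}, {ci_high:.1f})")  # roughly 20.6 here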
0:25:49 We don’t yet have that infrastructure in bio-world and I know there’s a couple of exciting 0:25:53 startups are starting to kind of move towards that direction, but I think it’s when we 0:25:58 can really probe the causality at scale and then in addition to just probing it, when 0:26:04 the simulator is wrong, use the new data point that came in and have the simulator learn 0:26:05 to fix itself. 0:26:09 That’s when you get to this really amazing feedback loop that could really revolutionize 0:26:10 biology. 0:26:15 Yeah, so we talked about some basic nuts and bolts about how to get started and the framing 0:26:17 of questions, which is a key part. 0:26:22 So let’s say people, they’re set up, they’ve got their question, where do they go from 0:26:23 there? 0:26:27 I mean, in a sense, we’re talking about something closer to open source biology and to the extent 0:26:33 that biology is programmable and synthetic biology is, I think, very much, it’s been around 0:26:35 for a while, but I think it’s really starting to crest. 0:26:39 How do these pieces come together such that we could finally get to this sort of open source 0:26:41 biology democratization of biology? 0:26:45 A big part of this is really the growth of community. 0:26:49 There are people behind all these GitHub pages that you see. 0:26:54 There’s real decentralized, powerful organizations that, if you look at the Linux Foundation, 0:26:59 if you look at, say, the Bitcoin Core Foundation, there are networks of open source contributors 0:27:02 really that form this brain trust. 0:27:03 It’s very diffuse. 0:27:08 It’s not centralized in the Stanford, Harvard, Med Department, or whatever. 0:27:11 And I think what we’re going to see is the advent of similar decentralized brain trusts 0:27:13 in the bio world. 0:27:17 It’s in a network of experts who are kind of spread across the world and who kind of 0:27:20 contribute through these code patches. 0:27:22 And that, I think, is not at all new to the software world. 0:27:24 We’ve seen that for decades. 0:27:25 It’s totally new to biology. 0:27:26 It’s alien. 0:27:32 Like, you would be surprised how much skepticism there can be at the idea that a non-Harvard 0:27:35 trained, say, biologist can come up with a deep insight. 0:27:37 We know that to be a fact, right? 0:27:43 There is multiple PhDs worth of work in just like the Linux kernel that that community 0:27:46 really doesn’t care to get that stamp of approval. 0:27:50 So I think we’re going to see the similar parallel kind of knowledge base that grows 0:27:51 organically. 0:27:55 But it takes time because you’re talking about the building of another kind of almost educational 0:27:57 structure, which is this new and exciting direction. 0:28:00 Here’s the challenge I worry about the most, which is that, like, so you’re building a 0:28:05 Linux kernel, you can test whether it works or doesn’t work relatively easily. 0:28:09 Even as it is, there’s this huge reproducibility crisis in biology. 0:28:14 So how does one sort of become immune from that, or at least not tainted by that? 0:28:15 How do you know what to trust? 0:28:18 And this is a really, really interesting question. 0:28:23 And this is kind of shading a little bit almost into the crypto world, right? 0:28:27 Like you could potentially think about this experiment where you have like a molecule. 0:28:31 You don’t know what’s going to happen to it, but maybe you create a prediction market 0:28:34 that talks about the future and the small kill. 
0:28:38 And you could then begin to create these historical records of predictions. 0:28:42 And we all know there are kind of like expert drug pickers at Big Pharma who can like eyeball 0:28:45 and say that is going through, that is failing. 0:28:48 And five years later, you’re like, well, shit, okay, yes, I was right. 0:28:52 There is the beginnings of infrastructure for these feedback mechanisms, but it’s a really 0:28:53 hard problem. 0:28:54 Yeah. 0:28:55 I’m trying to think though what that would be like. 0:28:59 The huge thing is like, you could imagine if it was a simple question, like, is this 0:29:01 drug soluble? 0:29:03 Someone might run a cheap software calculation. 0:29:07 Someone might do the experiment and there’s different levels of cost of having different 0:29:09 levels of certainty. 0:29:14 You’re essentially describing a decentralized research facility. 0:29:16 Maybe the question is who would use it? 0:29:21 This is, I think, the really hard part because I think that biopharma tends to be a little 0:29:24 more risk averse for good reasons than many other industries. 0:29:29 But I actually think that in the long run, this could be really interesting because if 0:29:34 you have multiple assets in a company, you could kind of like, disbundle the assets and 0:29:39 then you could start to get this much more granular understanding of like, what assets 0:29:41 actually do well, what assets don’t. 0:29:47 And if you make it okay for people to like, place a bet on these assets, all of a sudden 0:29:53 it’s de-risked because if you’re a big pharma and you’re like, I don’t really believe that 0:30:00 Alzheimer’s molecule does what is claimed, but I’m going to say like 15% odds it goes 0:30:01 through. 0:30:04 I’ll just invest 15% of what I would have in another world. 0:30:07 The trick is, and especially what we’re talking about now is the world of financial instruments 0:30:11 as well, is the trick is you have to be able to know how to risk an asset. 0:30:15 And so it could be in the end, one of the first interesting applications of deep learning, 0:30:20 machine learning is to use all the available data to give the maximum likelihood estimator 0:30:22 of what we think this asset is going to be. 0:30:25 It prices the asset and then people can go from there. 0:30:29 It’s kind of a fun world where we’re sort of thinking about how the financial world, 0:30:33 machine learning world and biology come together to kind of decentralize it and democratize 0:30:34 it. 0:30:40 I think there’s opportunities to kind of like, allow for more risks, the long tail to be played 0:30:41 out. 0:30:45 You don’t have as many interesting hypotheses that go dead in the water because it wasn’t 0:30:48 de-risked enough for a big bet. 0:30:53 So, you know, what I think the big takeaway for me here is that there is that possible 0:30:56 world, but I forget if this is the way you learned how to program. 0:31:03 The way many of us did is I learned when I was like 11 on like actually a TI-99/4A and 0:31:07 I was just playing around with it and I learned so much because it was, I could just get my 0:31:09 hands right in it. 0:31:13 And I think kind of, my hope for the book is that it’s kind of the equivalent in biology 0:31:16 that people can get their hands in it and I don’t know where they’re going to go with 0:31:17 it. 0:31:19 I don’t know if they go where we’re describing. 0:31:22 That’s one of the possible, many futures, but I think that’s what we’re hopefully being 0:31:23 able to give people.
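The "15% odds, so invest 15% of what I otherwise would" arithmetic, combined with the maximum-likelihood remark, reduces to a very small sketch. The historical counts, the linear sizing rule, and the dollar figure are all toy assumptions, not anyone's actual model.

# The MLE for a simple success rate is just successes / trials over comparable past programs.
analog_programs, analog_successes = 40, 6
p_success = analog_successes / analog_programs       # 0.15

full_conviction_bet = 10_000_000                      # what you would commit if fully convinced
sized_bet = p_success * full_conviction_bet
print(f"estimated p(success) = {p_success:.0%}, sized bet = ${sized_bet:,.0f}")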
0:31:25 We are opening up the sandbox. 0:31:30 Here’s what we’ve learned in kind of these very exclusive academic institutions. 0:31:36 Let’s throw the gate open, say here’s as much as we know, as well as we can try to distill it down, 0:31:38 and do what you will with it. 0:31:42 Like, open source means no permission needed, so go to town and hopefully do something good for 0:31:44 the world, is kind of the dream. 0:31:45 That sounds fantastic. 0:31:46 Well, thank you so much for joining us. 0:31:47 Thank you for having me.
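To make the asset-pricing idea in the conversation above a bit more concrete, here is a minimal sketch in Python of the "15% odds, invest 15%" reasoning: fit a simple probabilistic model on whatever historical asset data exists, then size each bet in proportion to the model's estimated probability of success. This is not from the episode or the book; the features, data, and dollar figures are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: one row of features per past drug asset
# (e.g. assay scores, target novelty, trial-design quality) and whether
# the asset ultimately succeeded (1) or failed (0). Purely synthetic here.
rng = np.random.default_rng(0)
X_history = rng.normal(size=(200, 3))
y_history = (X_history @ np.array([1.5, -0.7, 0.4]) + rng.normal(size=200) > 0).astype(int)

# Fit a simple probabilistic model (logistic regression is itself a
# maximum-likelihood fit); richer models on richer biological data would
# slot in here.
model = LogisticRegression().fit(X_history, y_history)

def size_bet(asset_features: np.ndarray, full_bet: float) -> float:
    """Scale the investment by the estimated probability of success."""
    p_success = model.predict_proba(asset_features.reshape(1, -1))[0, 1]
    return p_success * full_bet

# An asset we would have funded with $10M at full conviction,
# now sized by its estimated odds.
new_asset = rng.normal(size=3)
print(f"Estimated P(success): {model.predict_proba(new_asset.reshape(1, -1))[0, 1]:.2f}")
print(f"Risk-adjusted bet:    ${size_bet(new_asset, 10_000_000):,.0f}")

A prediction market would replace the fitted model with probabilities aggregated from traders, but the bet-sizing step would look the same.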
with Vijay Pande (@vijaypande) and Bharath Ramsundar
Deep learning has arrived in the life sciences: every week, it seems, a new published study comes out… with code on top. In this episode, a16z General Partner Vijay Pande and Bharath Ramsundar talk about how AI/ML is unlocking the field in a new way, in a conversation around their book, Deep Learning for the Life Sciences: Applying Deep Learning to Genomics, Microscopy, Drug Discovery, and More (also co-authored with Peter Eastman and Patrick Walters).
So — why now? ML is old, bio is certainly old. What is it about deep learning’s evolution that is allowing it to finally make a major impact in the life sciences? What is the practical toolkit you need, the right kinds of problems to attack, and the right questions to ask? How is the hacker ethos coming to the world of biology? And what might “open source biology” look like in the future?
0:00:04 I’m Chris Lyons, and I lead the Cultural Leadership Fund here at Andreessen Horowitz, 0:00:08 a strategic investment vehicle that connects the world’s greatest cultural leaders to 0:00:10 the best new technology companies. 0:00:15 This segment of the A16Z podcast was based on an event hosted by the CLF in which we 0:00:20 featured a special early screening of Van Jones’s new series, The Redemption Project, followed 0:00:25 by a fireside chat between Van Jones and Shaka Senghor. 0:00:29 The Redemption Project is an eight-part series that looks at victims’ families of a life-altering 0:00:34 crime as they come together to actually meet their offender in hopes of finding personal 0:00:35 healing or peace. 0:00:40 It’s a rare glimpse into the U.S. prison system and also the incredible human potential 0:00:43 for redemption through restorative justice. 0:00:47 In this episode, Jones brought together a police officer who was shot and the man who 0:00:51 committed the crime decades earlier when he was only 17 years old. 0:00:54 In addition to the conversation between Van and Shaka, you’ll also hear two spoken 0:00:56 word performances. 0:01:00 Both artists are formerly incarcerated and have contributed to The Beat Within, an 0:01:05 organization and publication that serves over 5,000 youth annually through workshops 0:01:11 operated across California county juvenile halls and encourages literacy, self-expression, 0:01:15 and healthy and supportive relationships with adults from their community. 0:01:19 First off, we’ll open up with Kevin Gentry performing his piece, My Heart. 0:01:24 And please note, there is some profanity and mature material in this episode. 0:01:32 For all intents and purposes, this piece, I loosely call it a piece, it’s more a letter, 0:01:38 the recipients of which are going to become readily apparent as I read this. 0:01:43 Excuse me, I’m sorry, may I please have just a few minutes of your time to say how much 0:01:47 I’m sorry for destroying your life. 0:01:51 Strong words that fall so short I can only imagine. 0:01:56 How can I, especially I, even begin to measure the impact of what I’ve done. 0:02:02 The loss, the pain, the emptiness, the sorrow, the guilt, what ifs, if onlys. 0:02:03 Is that a good start? 0:02:05 Maybe, I don’t know. 0:02:11 For so long I have dreamt of just how, what to say, the right words, but everything just 0:02:13 feels so flat. 0:02:18 So now here I am, resigned to having faith in the process, releasing my heart to you 0:02:23 through the words, praying that they will do, sparing even the slightest amount of any 0:02:25 additional hurt. 0:02:31 In no way did you deserve these years of torment, the anguish, the pain, the emptiness, perhaps 0:02:37 even bearing the burden of having to be strong for others when support was the furthest thing 0:02:39 from your mind. 0:02:43 You didn’t deserve such a fate, I’m sorry. 0:02:48 Sorry that on that fateful day I largely treated others like I felt. 0:02:55 Empty and devoid of any value, I saw your loved one as an object, though human, an obstacle 0:02:57 to my hopes and dreams. 0:03:03 Hopes and dreams of belonging and feeling relevant in the eyes of others, relevance so 0:03:10 unattainable it seemed for so long, so empty such a void I felt barren to the core. 0:03:14 My attempts to self-heal I thought while I was perfecting. 0:03:21 If I get more I’ll be more, value was in the end more, irrelevance was in the knot.
0:03:29 Ingenious I believed back then, feel bad, fill with stuff, feel good, but not for long. 0:03:32 Try again, something’s wrong. 0:03:40 The pattern I repeated a revolving door in my life, try to feel, feeling full, just temporary, 0:03:43 once again feeling empty setting in. 0:03:50 The rightful expectation that a life, his life, our lives should be unrestrained and unimpeded 0:03:55 by the untrue, self-defeating and outwardly destructive thoughts and behavior of someone 0:03:57 just as me. 0:04:04 To stand in the way with an idea, a belief in some time, to cowardly step with hollow purpose 0:04:07 to fill a void that was never real. 0:04:13 Your loved ones so deserving of everything good, unaffected by me, unfortunately there 0:04:15 was me. 0:04:20 But thank God there is also you through which his life still lives. 0:04:26 Through the memories and lessons in love, the affection and joy and promise and hope 0:04:31 and countless other memories I’m sure, though I cut them way too short. 0:04:37 Now illuminated to the precious sanctity of life, the gift of the beauty and purpose that 0:04:44 lies within us all, staying ever mindful that I will never grasp the gravity of the destruction 0:04:46 I caused you that day. 0:04:55 I stay primed and fueled to walk boldly, purposefully, into any and every venue to answer my call. 0:05:00 To carry his memory in my heart to others with a message of life, of promise even on 0:05:03 the lowest rung to all. 0:05:09 Hope is eternal, believe it, a bright future can spring from even the darkest past. 0:05:16 The words that I now utter, I do so to breathe life into those who may feel that they have 0:05:24 gasped their last. 0:05:26 And now we’ll hear from Van Jones and Shaka Senghor. 0:05:31 Shaka was most recently the executive director of the Anti-Recidivism Coalition, a New York 0:05:36 Times bestselling author for his memoir Writing My Wrongs: Life, Death, and Redemption in an 0:05:41 American Prison, and star of a highly anticipated one-man show. 0:05:46 Van Jones is an American news commentator, author, co-founder of several non-profit organizations 0:05:52 including Reform and Yes We Code, host of The Van Jones Show and co-host of CNN’s political 0:05:54 debate show Crossfire. 0:05:58 Their conversation is all about The Redemption Project, the American prison system and how 0:06:03 we can normalize rehabilitation and restorative justice in our culture. 0:06:09 The journey toward redemption is one I understand on a very personal level. 0:06:14 And you and I, we’ve been friends for a while and we’ve had a chance to talk about, you 0:06:18 know, what does redemption look like for people? 0:06:24 What is something that you would say really stood out to you as a lesson that we can all 0:06:30 take away to create space for redemption to happen? 0:06:35 Doing this whole series has changed me in ways I haven’t really caught up to yet. 0:06:42 You know, now when I’m on TV and we’re supposed to be tearing each other up over some tweet 0:06:49 or some other nonsense that’s going on, which is terrible stuff, but I have a hard time 0:06:58 getting as petty and shitty as you have to be to do good television. 0:07:04 And that’s jeopardizing my career, so I have to figure out some way to get petty again. 0:07:08 I have some answers for you. 0:07:13 That one was the hardest one for me to do because my dad used to be a cop. 0:07:20 And my uncle Milton just retired from the Memphis City Police Force a couple of years ago.
0:07:25 And so that one was hard for me as much as I do criminal justice stuff and as much as 0:07:29 I’ve like, you know, been against police brutality. 0:07:33 That’s always your fear when you have a family member who’s a cop. 0:07:41 And you can see me struggling in this episode to be my usual sort of like open self. 0:07:42 Like I was really tight. 0:07:47 You know, I was really trying, but I wasn’t succeeding in this episode. 0:07:52 And I told Jason, I said, I don’t think this is going to go well. 0:07:58 Tom has admitted that he’s got racial bias, which was a big deal. 0:08:01 You know, this is not going to go well. 0:08:04 This is going to be a shit show. 0:08:08 And I guess one has to go terribly, like that was basically my view. 0:08:13 And so I didn’t have any hope in that one. 0:08:17 I was just waiting for him to come out and, you know, say some stuff that wasn’t going 0:08:18 to work. 0:08:26 And as soon as the door opened, just something changed, both of them became something different 0:08:30 than they had been up until the moment they saw each other. 0:08:36 Something fell away and, you know, between men, there’s almost always some shielding 0:08:43 in a patriarchal society, like you’ll tell a woman you just met more than you’ll tell 0:08:46 your homeboy of, you know, 20 years about how you actually feel. 0:08:49 You know, it’s just the trap. 0:08:54 And between white and black people, there’s always a lot of gulf. 0:09:03 And between cops and black people, it’s like planetary levels of gulf, and it all just disappeared. 0:09:08 And you saw these two guys who had literally tried to kill each other, laugh at each other, 0:09:14 see each other, have this conversation that I bet they couldn’t have with any other human 0:09:16 being. 0:09:18 And I haven’t processed it. 0:09:22 And there’s a lot of stuff in this series I haven’t processed. 0:09:23 Yeah, I can imagine. 0:09:26 I struggle with this episode. 0:09:31 You know, I’ve watched a few episodes, I’ve actually struggled with all of them. 0:09:38 And you know, for those who may not know my story, I was convicted of second degree homicide. 0:09:45 And while I was in prison, I got into an altercation and I punched the officer in the neck and 0:09:47 almost killed him. 0:09:54 The family of the man whose life I’m responsible for taking, one of them reached out to me 0:09:58 and extended a letter of forgiveness during my incarceration. 0:10:04 The officer that I got into the conflict with in prison advocated for me to die in solitary 0:10:05 confinement. 0:10:12 And so as I’ve done this work over the years, that’s one of the areas of my life I haven’t 0:10:14 been able to reconcile. 0:10:21 So watching Jason come out and seeing that through the lens of his 17-year-old self, 0:10:26 and knowing where he was back then, and knowing that I was him back then. 0:10:33 And I’m thinking about this larger conversation that this is presenting to the world about 0:10:35 how do we see what’s possible. 0:10:41 You know, I’ve been out of prison almost nine years now, I’ve been highly successful 0:10:46 and been able to do a lot of work in this space and prevent acts of violence in communities 0:10:48 throughout the country. 0:10:56 But the reality is, for many men like Jason, like myself, society just says, “Wash our 0:10:57 hands of them. 0:10:58 They’re broken. 0:10:59 They’re beyond repair. 0:11:00 Throw them away.
0:11:02 Let them die in prison.” 0:11:08 And one of the things that really struck me was that restorative justice gives space for 0:11:13 people who have been hurt by the Jasons of the world to have their say. 0:11:17 And we saw what happens when you create space for that. 0:11:22 You know, Tom’s a remarkable man, Christie is an extraordinary woman. 0:11:27 And the courage that she exhibited was honest, you know, she went from, you know, “I want 0:11:35 them to die in prison because we can’t kill them” because of a particular crime, to forgiveness. 0:11:41 And so as we think about this show, how do we amplify that part of the message? 0:11:46 How do we get people to understand that people do change in a very real way? 0:11:54 Well, look, I mean, part of what’s crazy about this show is that it exists at all. 0:12:00 You know, CNN has put this at nine o’clock on Sundays, which is prime time. 0:12:03 And that’s Anthony Bourdain’s slot. 0:12:06 Against Game of Thrones now. 0:12:13 So they either really like it or they really don’t. 0:12:23 Our idea was we wanted to do media that would be healing, that would be positive, that would 0:12:29 be transformative and, you know, living in Hollywood and all that, you know, you get 0:12:35 a lot of side-eye looks at you when you talk that way, as you know, until you actually 0:12:40 can produce something that makes the point, you’re just one of those people talking in 0:12:45 the cafe that everybody rolls their eyes at, which is half the population of LA. 0:12:51 Luckily, Jana’s best friend from college, Antonia, is married to a guy named Jason Cohen. 0:12:58 Jason Cohen is the guy that did “Facing Fear,” that Oscar-nominated film about a former U.S. 0:13:02 neo-Nazi who reconciled with the victim of his violence. 0:13:08 So Jason, having done that film, said, “Hey, let’s do this. 0:13:10 Let’s do this kind of a series.” 0:13:15 So we just went totally renegade, you know, CNN, I’m not allowed to do anything without 0:13:19 their permission on camera, but we just went totally renegade, shot something. 0:13:21 It wasn’t a good idea. 0:13:23 Let me stop you there, right? 0:13:30 So basically what you’re saying is that you were willing to compromise your career. 0:13:34 You’re risking something you’ve worked long and hard for. 0:13:38 Most people would, you know, who talk a lot, especially people on social media, they would 0:13:43 love to be on CNN sharing their opinions and views and thoughts. 0:13:51 And you were willing to sacrifice or compromise that because you felt so strongly about the 0:13:53 importance of this mission. 0:13:59 Yeah, but yeah, because who gives a shit if we’re going to just be up here, I mean, you’re 0:14:00 the same way. 0:14:03 I mean, you could, people in this room are the same way. 0:14:04 Look. 0:14:06 I might not quit my job. 0:14:08 And you’re about to quit. 0:14:11 I got a seven-year-old. 0:14:14 But honestly, like that’s how we got The Messy Truth on the air. 0:14:18 I think a very important point is that we have to take chances. 0:14:23 I mean, for me, I felt like this is the moment. 0:14:28 I feel like criminal justice reform is finally becoming a mainstream conversation. 0:14:33 The problem that we have right now is that there’s a level that people won’t go to. 0:14:36 So we can have the conversation about innocence, right? 0:14:40 And that’s an important conversation because that begins to chip away at people’s confidence 0:14:43 in the system, that innocent people are being put away in prison.
0:14:48 So it used to be risky to say that our system, our American system, is putting people to 0:14:49 death who are innocent. 0:14:50 That was radical. 0:14:52 But we’ve been able to establish that. 0:14:55 Then we went to the nonviolent drug offenders. 0:14:57 They’re guilty, but they’re guilty of stuff that you did in college. 0:14:59 So why are they in prison? 0:15:00 Or maybe you did this weekend. 0:15:05 So don’t raise your hand. 0:15:08 And so now that’s been established. 0:15:13 But then the danger is that, well, okay, but if you’re not innocent and 0:15:17 if you’re not nonviolent, well, then we really don’t have to care about you at all. 0:15:22 And we have all these funerals in the community and we have all this harm and we can’t talk 0:15:23 about it. 0:15:30 And I said, this true crime genre has to be hacked and used for something positive because 0:15:34 true crime on the left wing, it’s about exoneration. 0:15:35 Like whodunit? 0:15:38 Well, we’ve got to exonerate the person because they’re actually innocent. 0:15:41 Or on the right wing, it’s catch a killer. 0:15:45 But true crime as a whodunit genre doesn’t get to the truth because a lot of times we 0:15:47 know who did it. 0:15:48 We already know who did it. 0:15:53 It’s about the truth long after the crime, which is that growth is possible for people 0:15:55 who have done harm. 0:16:00 And healing is sometimes impossible for people who’ve been harmed because of separation. 0:16:04 Because we don’t let people actually eventually come back together. 0:16:06 And so I say it was worth the risk. 0:16:07 And so we did it. 0:16:08 It was a little bit nuts. 0:16:10 We showed it to CNN. 0:16:15 Look, that day when we did the first one, I literally, I cried so hard when it was over 0:16:17 that my nose started bleeding. 0:16:21 Like, because my blood pressure was so high, it was just such an intense thing to see a 0:16:24 man who had killed someone’s mother sit down with the daughter 20 years later and try to 0:16:26 explain. 0:16:29 And we showed that to CNN. 0:16:35 And at that point, you know, we had no other people to go talk to. 0:16:38 It wasn’t like there were thousands of people for us to go talk to, but CNN said if you 0:16:40 can find more, shoot it. 0:16:41 So we shot it. 0:16:42 Why am I saying all this? 0:16:47 I’m saying this to say that from my point of view, we’re at a point where those of us 0:16:53 who have privilege, earned or otherwise, those of us who have positions of power, those of 0:16:59 us who have positions where, you know, people have to listen to what we say. 0:17:00 We have to push. 0:17:03 Phaedra Ellis-Lamkins is here. 0:17:08 And she’s an African-American entrepreneur in the tech space, female. 0:17:13 You know, they say, like, that’s like a plaid unicorn or something, like, you know, it’s 0:17:16 not even supposed to exist in fantasy land. 0:17:22 And yet she’s building a company called Promise, pushing technology to solve some of these problems 0:17:25 in the community and winning, right? 0:17:27 You know, she doesn’t have to do that. 0:17:31 She could have taken an easy job and not tried, or put together a company to, like, you know, 0:17:36 make, you know, I don’t know, pictures or something, I don’t know. 0:17:40 But she’s doing the hard thing the hard way for the right reasons. 0:17:43 So all I’m saying is this. 0:17:47 This is not a show about criminal justice, first of all.
0:17:51 We have to market it that way and promote it that way, but it’s not about that. 0:17:52 It’s about humanity. 0:17:58 All of us have done something that we profoundly regret and don’t have any way to apologize 0:17:59 for. 0:18:03 All of us have had something done to us that’s hard to get past, and the stakes are higher 0:18:06 in our show, but this is humanity. 0:18:08 This is the human condition. 0:18:13 And yet in our culture, empathy is no longer trendy. 0:18:15 Compassion is no longer trendy. 0:18:19 It’s about the cancel culture, the call-out culture, and it’s poison. 0:18:21 This is the human condition. 0:18:26 We have to be able to listen to each other, to forgive each other, to hold each other, 0:18:27 to help each other. 0:18:28 That’s not fashionable. 0:18:31 And so we want to put some medicine back in the culture. 0:18:35 This show is our attempt to put some medicine back in the culture, and a little bit of medicine 0:18:37 can go a long way. 0:18:42 And so, you know, that’s what we’re trying to do. 0:18:43 [APPLAUSE] 0:18:47 I really want to push the envelope a little bit. 0:18:48 Eight episodes. 0:18:49 Yes, sir. 0:18:52 In each one of the episodes you see restorative justice happening, right? 0:18:55 In small pockets throughout the country, some prisons are a lot more progressive with 0:18:58 creating space for that. 0:19:02 But the reality is, it doesn’t happen for everybody. 0:19:07 So a lot of men, I work with men and women every day who come home from prison. 0:19:14 As executive director of the Anti-Recidivism Coalition, our staff is 54% system-impacted 0:19:16 men who have come out of prison. 0:19:19 A lot of them have armed robberies. 0:19:20 Homicides. 0:19:21 Attempted murder. 0:19:26 I have scores of friends who are coming home after the “War on Drugs” campaign. 0:19:33 Thousands of men and women come home every day who have served 15, 20, 30 years in prison. 0:19:38 They haven’t gone through a restorative justice process, because for years, our prison system 0:19:41 was designed for nothing more than punishment. 0:19:44 And that’s coming from somebody who was deeply immersed in that environment. 0:19:46 And I know the type of work it takes to get there, right? 0:19:50 I know what it takes to transform a life. 0:19:54 I can honestly say I was super blessed and fortunate, because I was actually literate 0:19:55 when I went to prison. 0:19:57 And so I was able to read books that inspired me. 0:20:03 I was able to read Malcolm and read Mandela and read books about personal transformation 0:20:05 and these things, right? 0:20:06 And then I put the work in. 0:20:09 That’s not the norm in prison. 0:20:12 This is not the norm in prisons throughout the country. 0:20:21 And so one of the things that I’m always thoughtful about is how do we normalize restorative justice? 0:20:24 How do we normalize redemption? 0:20:26 You watch somebody in their worst moment. 0:20:30 One of the things that I love that Sheriff Tom spoke about is that he met him in his 0:20:31 worst moment. 0:20:33 He met Jason in his worst moment, right? 0:20:36 But he was also in his worst moment. 0:20:40 And now we have many men and women coming home, and I deal with them all the time, and they’re 0:20:44 broken and they haven’t been able to make peace. 0:20:48 We think about the victim and them working through their trauma. 0:20:52 But there’s also work that those of us who have perpetrated a violent crime have to do 0:20:55 on our own.
0:21:00 And when I say on our own, oftentimes on our own, because in most cases we’re scary. 0:21:02 People are afraid. 0:21:04 You’ve killed another human being. 0:21:09 I don’t know if I can trust when you’re upset or when you’re angry or when things aren’t 0:21:13 going your way that you won’t react in that manner again. 0:21:20 So how do we create a space where there’s more honesty about what’s really not working? 0:21:23 We know about the policies and things like that, right? 0:21:29 But even once the policies work, there are real human beings coming home with deep, deep trauma. 0:21:35 My first 10 years in prison, I was in solitary my second year, and I ended up in solitary 0:21:38 my seventh year that extended to my 11th year. 0:21:43 So I did a total of seven years in hell, and I’m fortunate to have had that breakthrough. 0:21:50 But what about the men and women who don’t have space to reconcile their past? 0:21:52 And what is our responsibility? 0:21:57 Ultimately, I guess the question is, what is our societal responsibility when it comes 0:22:02 to welcoming those men and women home in a healthy way? 0:22:06 I think this is the key question for American society. 0:22:10 I don’t have an answer, but it’s the key question. 0:22:11 We have… 0:22:17 People have become almost numb to it; we throw out the numbers, but we have the biggest incarceration 0:22:23 industry in the world here in the United States, trafficking in human flesh, trafficking in 0:22:25 human bodies. 0:22:28 On the stock exchange, you have private prison companies that get more money the more people 0:22:32 are locked up, and there’s no business model in de-incarceration. 0:22:35 The business model is in incarceration. 0:22:40 But what I do know is this, this is a political problem, kind of. 0:22:42 It’s a policy problem, kind of. 0:22:45 It’s an economic problem, kind of. 0:22:48 It’s a spiritual problem, for sure. 0:22:49 Absolutely. 0:22:55 It’s a spiritual problem, and separation is the enemy. 0:22:56 That’s the problem. 0:23:02 And unfortunately, you have now both political parties preaching separation and superiority. 0:23:08 Those red state people, those bigots, those idiots, those Trump voters, they’re terrible. 0:23:13 It’s almost like we in the blue, we’re good, they’re bad. 0:23:15 And it’s almost like a colonial thing. 0:23:22 Like the people in the red state, these unwashed heathens that need to be conquered and converted 0:23:31 to the NPR religion, and force-fed some kale until they can rise up to our level of 0:23:32 civilization. 0:23:34 I mean, this is how people talk. 0:23:38 Separation and superiority, and then, of course, you know how the other side does. 0:23:40 And so, for me, it’s a spiritual problem. 0:23:42 Separation is the enemy. 0:23:49 And so, I have discovered all these diamonds behind those prison walls. 0:23:50 Absolutely. 0:23:52 No pressure, no diamonds. 0:23:53 There are diamonds behind those walls. 0:24:00 There are people behind those walls that are much wiser, much braver, much stronger, much 0:24:06 more creative than 99.99% of people who are on the outside. 0:24:12 When I worked in the Obama White House, on a Friday I was at San Quentin doing my work. 0:24:17 And then on Monday, I was in the Obama White House reporting for work. 0:24:20 So I went from the jailhouse to the White House in 72 hours. 0:24:26 And even under the Obama administration, the smartest people in the Obama administration 0:24:29 were no smarter than the smartest people at San Quentin.
0:24:35 But the wisest people at San Quentin were wiser than anybody in Washington, D.C. 0:24:39 All I know is that I have to tell the truth as I see it. 0:24:40 Absolutely. 0:24:45 And part of it is, you know, telling people, “Look, I went to Yale Law School. 0:24:50 I saw more kids doing drugs at Yale than I ever saw doing drugs in housing projects.” 0:24:51 Period. 0:24:54 And none of those kids even saw a police officer. 0:24:58 When they got in trouble, they went to rehab or France. 0:25:03 They sure didn’t go to prison. 0:25:08 And yet four or five blocks away, those kids, you know, doing fewer drugs because they had 0:25:12 less money and selling fewer drugs because they were dealing with a different clientele, 0:25:15 they almost all at least got arrested if they didn’t go to prison. 0:25:19 And yet now we sit here and say, “Well, I can’t, my God, I can’t hire you. 0:25:20 You’re a drug felon.” 0:25:24 You know what I mean? 0:25:30 So the hypocrisy of a society where almost everybody’s addicted to something and nobody 0:25:35 can survive, think about this, these phones we carry around, if I told you right now that 0:25:41 for the past three months we have been audiotaping and videotaping everything you’ve 0:25:49 been doing, and we’re now about to show it on the screen, you would run out of here because 0:25:52 none of us are as good all the time as we’re supposed to be. 0:25:53 Absolutely. 0:25:57 And nobody wants to be defined by their worst moment or their worst mistake, as you said 0:25:58 many times. 0:26:03 And so for me, I don’t know, but I do know that everybody in here has a lot of power 0:26:04 in the matter. 0:26:09 And everybody in here has a lot of ability to turn it. 0:26:12 And I think it’s trying to happen. 0:26:17 I think the fact that this many people are here, the fact that CNN put this up, I think 0:26:19 it’s trying to happen. 0:26:25 You know, your voice, Topeka Sam’s voice, Lewis Reed’s voice, the voice of people who 0:26:30 are directly impacted, people who are coming out of prison, you’re right, everybody doesn’t 0:26:33 come out of prison as whole as you. 0:26:36 Everybody doesn’t come out of prison and have Oprah as their best friend. 0:26:40 In fact, most people who haven’t gone to prison don’t have those things. 0:26:44 So art as a tool to shift culture. 0:26:52 How important is art and technology towards shifting this larger idea culturally? 0:26:58 You know, the opposite of humanization is criminalization. 0:27:01 If you can criminalize a whole population of people, everybody in the neighborhood 0:27:05 is bad, all the people from that racial group are bad. 0:27:10 If you can criminalize a whole population, then you dehumanize them and then anything 0:27:14 can be done and people won’t respond to it. 0:27:20 If it’s my child, does everybody say, oh my God, my child’s on drugs, give him 17 years in prison? 0:27:21 Nobody says that. 0:27:22 People say my child needs help. 0:27:28 And so what I would say is that the opposite of criminalization, though, is humanization. 0:27:32 And so art and technology, which help us to humanize and spread these stories, is really 0:27:33 critical. 0:27:34 Thank you. 0:27:40 Hey, Shaka, we love y’all. 0:27:41 All right. 0:27:46 A round of applause for Shaka and Van.
0:27:51 Now we’ll enjoy a performance by Missy Hart, who will share an amazingly powerful piece 0:27:55 called “Bloom,” a trilogy, and the titles of the three different poems are “Just Us,” 0:28:02 “The Dream,” and “What’s Your Seed?” 0:28:06 Before I share these pieces, I want to share a big part of myself. 0:28:09 And I feel it’s really important to really paint a picture of the power of healing and 0:28:11 redemption, and creative art therapies. 0:28:15 I’m from Norfolk, Redwood City, California, it’s not too far from here. 0:28:18 My beautiful struggle began when my father committed suicide before I was two. 0:28:21 So I was raised by my strong single mother, who had to work multiple jobs. 0:28:26 She came up out of the gang culture as well, and not just working jobs, but taking care 0:28:27 of my grandmother, who was mentally ill. 0:28:29 But most of the time it was me and my brother taking care of her. 0:28:31 So I had to grow up really fast. 0:28:37 And during that time, growing up in the streets and trying to find my identity, we all go 0:28:41 through those times trying to find our identity, and being biracial, and a lesbian growing up 0:28:45 in the late ’90s, early 2000s, I tried to find my place, you know, and I found my place 0:28:46 in the streets. 0:28:50 And I started gang banging when I was 10, and being a girl smaller than everyone else, 0:28:51 I had to go hard. 0:28:55 In the streets, you’re either all in, or you’re not, and you’re not going to survive. 0:28:59 So I was fully committed, went all in, caught my first case when I was 11, when they just 0:29:03 passed Prop 21, and then I went to the system. 0:29:06 When I was 13, I started writing for The Beat. And The Beat really gave me a voice, gave me 0:29:09 a way to express my truths in my way. 0:29:13 Because going through the system, you’re constantly trying to go through all these therapies 0:29:17 and stuff, but you don’t even have language, growing up and not being shown what you’re 0:29:21 feeling, so you just learn to speak with the language of violence and aggression. 0:29:24 And that’s what I learned to speak. 0:29:28 So over the next few years, I was in and out of the system, I became a ward of the court, 0:29:32 so I was in group homes, being locked up, and then being on the run, and then just in 0:29:33 this constant cycle. 0:29:38 And it wasn’t until I got released two months before my 18th birthday, and my mom’s boyfriend 0:29:41 didn’t want me at the house, so I was homeless, serving crack on Army Block. 0:29:46 I don’t know if y’all from the city, but I was on the blade on 2/6, and then I started changing 0:29:47 my life. 0:29:50 And then when I caught attempted murder charges the day before my 18th birthday, I fought the 0:29:52 whole case in solitary. 0:29:57 But by the grace of God, I was taken in and arrested when I was, because back where I was living, 0:30:00 my boy ended up stabbing the dude to death not even an hour later. 0:30:03 So if I didn’t get arrested when I did, I would be in there for murder. 0:30:05 He’s doing 25 with an L right now. 0:30:09 And I just really, you know, just started to see that my chances were running out. 0:30:12 And I got out, and you don’t change overnight, it’s a process, you know, and putting that 0:30:13 work in. 0:30:16 But it’s so important. 0:30:19 And I got out, you know, in and out of county, but then, you know, I started to change my 0:30:23 life and really see that education was the way to liberate myself. 0:30:26 So I went back to adult school, got my high school diploma.
0:30:30 And then I went to community college, when Jason was saying, like, you know, just having 0:30:32 someone believe in you, that is so powerful. 0:30:35 It may seem so little, but just even in times when you don’t believe in yourself and you’re 0:30:39 just raised, taught in a system that, like, breaks down your identity to nothing, you 0:30:42 know, and The Beat really gave us our voice, and The Beat really planted that seed for me 0:30:46 because now I’m doing, like, all these amazing things that I couldn’t even imagine back then. 0:30:50 So I went to community college, ended up winning a full ride to UC Santa Cruz, where I attend 0:30:52 now. I’m studying psychology and the history of consciousness. 0:30:53 Thank you. 0:30:54 Right on. 0:31:03 And I also just won a national scholarship to go study abroad this fall where I’m going 0:31:07 to study psychology, neuroscience, come back, go to DC, do an internship, come back, and 0:31:11 then I’m planning to get my PhD in positive psychology and my end goal. 0:31:12 Thank you. 0:31:16 Thank you, thank you. 0:31:19 My end goal is to open a group home with an art therapy program because our creative 0:31:23 art therapy is like, it’s so powerful, I can’t stress it enough, like, there’s no words 0:31:28 that I can even put to it, to express or explain how powerful it is. 0:31:31 So yeah, without further ado, I’m just going to share my pieces then. 0:31:35 So thank you, and I just want to say thank you because this is a privilege of me being 0:31:39 on the stage because a lot of my loved ones and people I don’t even know, you know, we 0:31:43 lost to the streets and the system, they don’t get the same opportunity, and I’m just thankful 0:31:46 and I don’t just do this for myself, but I do this for my people and everyone still 0:31:50 behind those walls and who are lost, you know, and I’m just, I got this motto, it’s called 0:31:51 be the change, lead the way. 0:31:57 Alright, so this is “Bloom,” so this first one’s called “Just Us,” a little spin on justice. 0:32:00 Is it just us who see no justice and no peace struggling to achieve the American dream? 0:32:05 A dream just to have an opportunity to succeed, but somehow it’s so far to reach. 0:32:08 Caught in the system that’s designed to keep us at war, at war with each other, a storm 0:32:10 is brewing right outside of your door.
0:32:14 Is it our choice to endure, or is there a power much greater than the plans of the hate-filled 0:32:17 hearts, waging a war on the people and the power within, a power capable of manifesting 0:32:22 a revolutionary change, a change that ripples through generations in time, seeded in this 0:32:24 message trying to reach your mind through a rhyme, because you see the power’s in the 0:32:28 people and the passion that’s in our hearts, but change only starts when you shine the light 0:32:32 on the dark, beginning within ourselves and branching out to the people, educating each 0:32:36 other to fight for our rights to be equal, no one who walks upon this earth is illegal, 0:32:39 all this misguided hate and bigotry is spiritually lethal, empowerment for each other starts 0:32:44 with the peaceful, not the deceitful, don’t let the con steer you wrong, the power you 0:32:48 hold within remains strong, you just gotta believe in the power of your seed to plant 0:32:52 amongst the weeds of the world’s evil deeds, to grow strong like a tree to feed the minds 0:32:56 of the future, but first you must take your time to find your design that creates change 0:33:00 in people’s lives one day at a time, then you will see it begins within thee, so may 0:33:04 the life you lead be the life of the seed, be the change, lead the way, and ask yourself, 0:33:06 what can I do today? 0:33:17 Thank you, thank you, so this next one is called “The Dream,” many underestimate the power 0:33:21 of the mind, but what really lies inside the complexity of the emotional pathways that 0:33:27 lead us to act in a certain way, what drives us to manifest positive change, is it love, 0:33:31 is it pain, maybe it’s the dream that we all dare to scheme, this dream to be free and 0:33:35 all live in peace, but it seems just to be that, but a dream, a dream that seems impossible 0:33:37 to conceive or is it? 0:33:41 We create our limitations gate, it’s the power of your mind that can grow with time or deplete 0:33:46 with lies depending on what vibe you choose to feed inside, it begins with the light that 0:33:49 burns deep and bright, the young activists that just want to raise their fists to fight 0:33:53 for the people in and out of sight because you see it’s not just about you or me, but 0:33:57 we, together we can be this dream that we dream, but in order to achieve this dream we 0:34:01 all need to see that I am you and you are me, that beating in your chest is your first 0:34:04 clue, purpose, you feel that? 0:34:07 That’s what we need to remember when faced with the choice to endeavor, it’s the power 0:34:10 of your mind that leads you to believe that you can achieve all that you seek, it just 0:34:14 leaves one question, what’s your seed? 0:34:22 Thank you, and you know to like kind of answer the question, like if you don’t got no purpose 0:34:24 when you get out, you’re going to end up right back in because you got nothing that’s going 0:34:27 to bring you up out of that, so I just want to say it because that’s whatever, everyone 0:34:30 has a seed, everyone has a seed to plant, you got to believe in that seed and believe in 0:34:31 yourself. 0:34:36 So this last one is called “What’s Your Seed,” I actually wrote this one when I started changing 0:34:40 my life when I went to community college, I wrote it before the other two, so it means 0:34:42 a lot to me, so.
0:34:45 Like a scientist gone mad, creativity flows out of me, like knowledge to history, like 0:34:49 wise words to a revolutionary, like the power in the people, but nobody is listening, while 0:34:53 the falling clock of life just keeps on ticking, life hitting you with trials and tribulations 0:34:57 and man you still don’t get it, you need to wake the fuck up and get on with it, 0:35:00 if not when judgment day comes don’t look at me to save you because I wasn’t the one, 0:35:03 while you’re gone and with your funds, steady stacking your funds, you feel the biggest 0:35:06 test kid, you had to prove you was worthy of the sun, instead of bringing peace to the 0:35:10 world, they brought hate and guns, put them in the kids’ heads, said have some fun, then 0:35:13 prove to the world cops are just out of killing the dumb, while we’re all blinded by the government’s 0:35:17 thumb, and you all fail to see we’re all kids of the sun, I’m on this road of righteousness, 0:35:21 steady fighting the wicked, seems like growth, love and spirituality are extremely restricted, 0:35:25 kids caught in the cycle, like to the shelter world it’s explicit, on your worst enemies 0:35:29 you wouldn’t wish it, I know because I lived it, but this life is not a burden, nah, because 0:35:33 I’m that seed planted in the garden of grief, they try to drown me statistically, put me 0:35:37 through some shit you wouldn’t even believe, but instead of dying, I rose from the depths 0:35:40 of despair, to breathe truths to share, soon I found myself the heir to the knowledge’s 0:35:44 lair, now it’s up to me to train my mind, to learn how to share, many stop and stare, 0:35:48 but not many opt to care, they’d rather shop and hate than bring these kids up and congratulate, 0:35:52 designing our future’s fate, and it doesn’t look pretty, so before the last grain of sand 0:35:57 falls in God’s land, what will you build to grow on with sand, thank you. 0:36:01 Thanks again for listening to this episode of the A16Z Podcast, and if you want to learn 0:36:05 more about the Cultural Leadership Fund, please visit a16z.com.
with Van Jones (@VanJones68), Shaka Senghor (@ShakaSenghor), and Chris Lyons (@clyons)
True redemption can be hard to come by in our justice system today. And yet, we need it more than ever before. In this episode (based on an event hosted by Andreessen Horowitz’s Cultural Leadership Fund), CNN news commentator and author Van Jones and Shaka Senghor, author of the New York Times bestseller Writing My Wrongs and Director’s Fellow at the MIT Media Lab, discuss the U.S. prison system; the human potential for redemption; and how we begin to go about normalizing restorative justice in our society.
The conversation, introduced by a16z partner Chris Lyons, followed a screening of an episode of Van Jones’ new series, The Redemption Project. The eight-part series looks at the families of victims of a life-altering crime as they come together to meet their offender; this episode featured the meeting between a police officer and the man who shot him decades earlier, when he was a boy of only 17. The episode also includes two spoken word performances before and after the conversation, from two formerly incarcerated artists: first, Kevin Gentry, with “My Heart”; and second, Missy Hart, with “Bloom: A Trilogy.” Both are contributors to The Beat Within, a publication and organization that serves youth across California county juvenile halls and encourages literacy, self-expression, and community.