Category: Uncategorized

  • a16z Podcast: Five Open Problems Toward Building a Blockchain Computer

    AI transcript
0:00:05 The content here is for informational purposes only, should not be taken as legal, business,
0:00:10 tax, or investment advice, or be used to evaluate any investment or security, and is not directed
    0:00:15 at any investors or potential investors in any A16Z fund. For more details, please see
    0:00:21 a16z.com/disclosures. Hi, welcome to the A16Z podcast. This is
    0:00:27 Frank Chen. Today’s episode is called “Five Open Problems for the Blockchain Computer.”
    0:00:32 It originally aired as a YouTube video. And you can watch all of our YouTube videos at
    0:00:39 youtube.com/a16zvideos. Well, welcome to the A16Z YouTube channel. Today,
    0:00:45 I’m here with Ali Yahya, our deal partner in the A16Z crypto team. And we’re going to
0:00:49 have a fun conversation. So here’s what we’re going to do. I’m going to pretend, Ali, to
0:00:54 be a Google software engineer or an Apple software engineer, right? So I’m somebody
    0:00:59 who knows how to write software, has been doing it for a while. And then all of a sudden,
    0:01:03 I saw my friends start to peel off and go to crypto startups. And I’m looking around
    0:01:08 going, “What’s happening? Google’s a great place or Apple’s a great place. Why are people
    0:01:13 leaving to go to crypto startups?” And maybe you can help me understand what’s causing
    0:01:16 all these smart, talented people to head into crypto land.
    0:01:17 I love it.
    0:01:22 Fantastic. So maybe let’s just start with the world in which we live today, which is,
    0:01:28 you know, I use my iPhone or my Android phone. I happen to use a Google phone, the Pixel.
    0:01:35 I use Google Photos. I use Gmail. My carrier is T-Mobile. It’s sort of a centralized world.
    0:01:39 And it works pretty well, right? Like, it’s pretty reliable. And Google gets all my photos
    0:01:44 and my mail arrives when I want it. And so that’s not a bad world. Is crypto really
    0:01:48 trying to overturn that world?
    0:01:53 That world does work fine, but it’s not the frontier. So what I would say is the reason
    0:02:00 that crypto is so exciting is because it offers a fundamentally new paradigm for computation
    0:02:03 that has features that are completely novel and different from the features that enable
    0:02:09 applications like social media, as it exists today, like sharing photos, like all of sort
    0:02:14 of the centralized services that we know and love today. And so I think with every successive
    0:02:22 wave of computation that we’ve seen throughout the history of computing, normally, the new
    0:02:28 paradigm tends to suck at first and tends to be pretty bad at most things that the old
    0:02:34 paradigm is very good at. But it happens to shine in one or two particular ways that enable
    0:02:38 new applications that previously were just not possible to build. And so with, I mean,
    0:02:43 I think one of the clearest examples is just the example of mobile phones enabling applications
0:02:49 like Uber, or applications like Instagram, by virtue of having a camera and a GPS bolted
    0:02:54 onto the phone, that enable those kinds of behaviors that with a personal computer, you
    0:02:57 couldn’t have possibly, couldn’t have possibly built.
    0:03:00 Your PC didn’t know where you were necessarily, so it couldn’t enable Lyft.
    0:03:04 Exactly. And it would have been also just deeply impractical for one to pull out one’s
    0:03:05 laptop.
    0:03:08 Hold on, let me get my PC.
    0:03:16 And so with crypto, I think the key dimension along which these decentralized computers
    0:03:22 that people are building in the world of crypto shine is that of trust. They provide this
    0:03:32 new angle that previous computers didn’t have because previous computers are owned and operated
    0:03:39 by individuals or by single entities like companies. And so you have to trust that individual,
    0:03:43 you have to trust that company to actually run the software that they’re claiming that
0:03:47 they’re running and to actually do what they claim they will do with your data. We trust
0:03:51 them basically with the entire interaction between us and them. And so we trust Google
    0:03:56 with our photos, we trust Google with our email, we trust Google with just about everything
    0:04:01 that entails the kinds of interactions that we have with Google.
    0:04:06 This new paradigm of computation is such that you now have a computational fabric that is
    0:04:11 not owned and operated by any one person. This is the whole point of decentralization.
    0:04:15 When people talk about decentralization in the world of crypto, they mean decentralization
    0:04:23 of human control. Not decentralization of computing in a geographic sense. It’s not
    0:04:28 decentralization in any other way that you may think. Like the key thing about crypto
    0:04:34 is decentralization of human power and human control over systems and figuring out clever
0:04:41 ways to build a system such that it is self-policing and such that its security and its
    0:04:47 trust emerges bottom up from its participants and from individuals as opposed to top down
    0:04:54 from like some trusted organization at the top that kind of enforces the rules.
    0:05:00 Got it. So instead of trusting Google or Facebook or Apple, I can then trust the collective
    0:05:05 of people who have contributed their computing, their storage, power, etc., etc., to deliver
    0:05:12 the service that I’m consuming. And that’s the big innovation. And so why don’t we go
    0:05:18 through the implications of that by sort of talking through, well, what will we need to
    0:05:26 rebuild in crypto land, starting with compute, so that all of the applications that run on
    0:05:31 top of this distributed computer sort of will have the power that you’re describing. So
    0:05:35 let’s start with distributed compute. What do we need? So I guess maybe to set the stage,
0:05:40 if we think of compute today, we have computers that can perform so many transactions
0:05:46 per second. We have Visa, which can clear so many financial transactions per second.
    0:05:49 And then we compare those things with things like, well, how many Bitcoin transactions
    0:05:54 can clear, how many Ethereum smart contracts can clear. So why don’t we talk about how
    0:05:57 do we get distributed compute to really sing in crypto land?
    0:06:04 Absolutely. So I think so much of the attention in crypto tends to be on this metric of transactions
    0:06:08 per second. And I think we would argue that that’s the wrong, I mean, it’s not even the
    0:06:13 right framing because we’re not talking about just a ledger that processes payments. We’re
    0:06:19 talking about a computer, we’re talking about a decentralized fabric for general computation.
    0:06:23 So the right metric is not really transactions per second, it’s really instructions per second.
0:06:28 How many instructions of some computation can you process in any given period of time?
0:06:33 And that’s what people know as throughput. And so that’s one of the
0:06:38 metrics for scalability that matter when it comes to compute. The other one is the
    0:06:42 latency to finality. It’s like, how do we know that the computation was done and that
0:06:49 it can no longer be reverted, that its output was final and that
    0:06:54 nothing can happen that could reverse it and have it be something different. You can trust
0:06:59 that that outcome is settled. So latency to finality is how much time you have to
    0:07:05 wait before that happens. And people talk all the time about how Bitcoin has terrible latency
    0:07:10 to finality, you have to wait 60 minutes before you can be reasonably sure that your
0:07:14 payment is final. So that’s another axis. And then the final one when we talk
    0:07:22 about scalability of computation is what is the cost per instruction? How much do I have
    0:07:26 to pay for that transaction? How much do I have to pay for just an arbitrary computation
0:07:31 on Ethereum or on some of the more general platforms for computation? And the reason
    0:07:35 obviously that all of this matters is because the kinds of applications that we want to
0:07:44 build will just require far greater scalability and also will require far lower cost to
0:07:49 really work. And I think it may be helpful to just exemplify what those applications
0:07:53 are. Some of the things that we are seeing already in the world
0:07:59 of Ethereum: the Ethereum ecosystem is maybe the richest so far in terms of actual
    0:08:04 developer activity on top. So we’ve seen kind of the emergence of this parallel financial
0:08:12 world where you have things like stablecoins, which are price-stable cryptocurrencies
0:08:18 that have some logic that modulates the supply of the token to keep it stable relative to some
    0:08:23 external reference like the US dollar. And then on top of that, people build things like
0:08:27 lending platforms and they build things like derivatives platforms, they build
    0:08:33 things like decentralized exchanges where you can exchange tokens or exchange crypto assets
0:08:38 without depending on some central exchange. So that’s one
0:08:43 example, one trend that is already happening among many other trends, and we can
0:08:48 talk about other examples later if you think it’s helpful. But all of that depends
0:08:59 on far greater throughput, far lower latency to finality, and far lower cost per instruction.
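The three dimensions just named can be captured in a small sketch. The class and the numbers below are illustrative, order-of-magnitude assumptions, not measurements from any real network:

```python
from dataclasses import dataclass

@dataclass
class ScalabilityProfile:
    """The three scalability dimensions discussed above."""
    throughput_ips: float        # instructions (or transactions) per second
    finality_latency_s: float    # seconds until an update is effectively final
    cost_per_instruction: float  # e.g. dollars per instruction

    def instructions_per_dollar(self) -> float:
        return 1.0 / self.cost_per_instruction

# Hypothetical numbers, purely for illustration of the gap:
bitcoin_like = ScalabilityProfile(7, 3600, 1.0)    # ~7 tps, ~60 min, ~$1/tx
centralized = ScalabilityProfile(1e6, 0.1, 1e-7)   # a typical cloud service
```

Framing the comparison this way makes the point explicit: it is not one metric but all three that decentralized computers need to close on.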
0:09:04 Just with this initial activity, we’re already seeing the limits of the
0:09:09 current technology. So it’s an open problem: how do we increase throughput? And
0:09:13 for that particular question, one of the things that matters the
0:09:19 most is the delay in propagation of messages in a distributed system. That’s what ends
0:09:28 up dominating the cost of that particular problem. So people talk about the block time
0:09:34 in crypto, which is how much time you have to wait before you can append a new block
0:09:39 to the blockchain, and blocks usually contain computations, they contain transactions. So
0:09:44 if you can lower the amount of time that you have to wait for new blocks to come along,
0:09:50 then you will process more transactions and more computations per unit time than you
0:09:55 would otherwise. And so propagating messages in the network is the dominant factor.
0:10:00 It’s what makes it slow to sort of finalize a transaction. Exactly. And the reason for
    0:10:04 this is that we are building a distributed system. And so if you think about it, what
0:10:09 is the difference between a distributed system and one that’s just centralized? It’s that
0:10:14 there’s distance between the different nodes that are participating in the system.
    0:10:18 So the key difference is that now there’s this additional communication cost between
0:10:24 the different nodes in the system. And that cost is significant because it’s bounded:
0:10:29 the lower bound on it is the speed of light. You cannot get faster than the speed
0:10:35 of light. So it causes this kind of lower bound as to how performant
0:10:40 it can possibly be. And you can only get so clever before you reach that kind
0:10:46 of lower bound. But it is the case that today we’re still far from that lower bound.
0:10:49 There’s still a lot of room for improvement. Yeah, I mean, people in general are pretty impatient.
    0:10:53 I remember when the chip and pin system started getting deployed here in the United States,
0:10:57 it was just a couple of years ago, right? And when you’d insert your credit
0:11:02 card, it would take like five seconds, right? And that was a lot slower than the swipe. And
    0:11:06 people were like, this is never going to work. I’m not waiting five seconds for my credit
    0:11:12 card to clear. And so maybe talk a little bit about, you know, sort of what is the propagation
    0:11:17 delay today? And then what’s practical to get to given sort of speed of light limitations?
    0:11:20 And then what’s the target? And how do we get there?
0:11:28 Yeah, for sure. So today, I mean, there’s enormous tension between, well,
0:11:33 to back up a little bit. So there are two things that matter here. One of them is how
0:11:38 much time does it take to send a message between two points in space? But then there’s
0:11:46 also the problem of what influence does the size of the message have on that amount
0:11:51 of time? And so basically the two angles here are latency and bandwidth, latency
0:11:55 being the amount of time that it takes to send a message, bandwidth being how
0:12:00 much data can you actually fit through the pipe per unit of time. So there’s
0:12:05 a tension in this space, and you can see this reflected in, say, the Bitcoin block
0:12:14 size debate. Yeah, between sort of the throughput that you can get out of a network and the
0:12:19 propagation delay that is caused by, say, increasing the block size. So in the case
0:12:24 of Bitcoin, people were talking about doubling the block size from one megabyte to
    0:12:29 two megabytes. So that would increase the throughput of the Bitcoin blockchain, because
    0:12:35 now you can fit twice as many transactions, and the blocks will still come at a 10 minute
    0:12:42 cadence. But that would increase the propagation delay for those blocks, which would cause
0:12:46 certain miners to no longer really be able to participate because they won’t get the
0:12:49 block in time. So they would eventually end up falling out, and you’d
0:12:55 end up with a more centralized system. So we see here there’s a trade-off between performance
0:12:59 and decentralization, assuming that you want to keep security constant, you don’t want
0:13:04 security to suffer. You still need the trust, right? You can’t give up on the trust, you can’t allow
0:13:10 double spending, right. So I always thought that the reason, say, Bitcoin was slow was the
0:13:15 proof of work was so computationally demanding. Is that still the case, or is that
    0:13:16 a solved problem?
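The block-size tradeoff described above can be put in numbers with a toy model. The parameters are assumptions for illustration (a 250-byte average transaction, Bitcoin’s 600-second block interval, and a hypothetical 1 MB/s peer link), not network measurements:

```python
def throughput_tps(block_size_bytes: int, avg_tx_bytes: int, block_interval_s: float) -> float:
    """Transactions per second a chain can clear, given its block parameters."""
    return (block_size_bytes / avg_tx_bytes) / block_interval_s

def propagation_delay_s(block_size_bytes: int, bandwidth_bytes_s: float, base_latency_s: float) -> float:
    """Rough time to relay one block to a peer: fixed latency plus transmission time."""
    return base_latency_s + block_size_bytes / bandwidth_bytes_s

ONE_MB, TWO_MB = 1_000_000, 2_000_000

# Doubling the block size doubles throughput (roughly 6.7 tps -> 13.3 tps
# at a 250-byte average transaction and a 600-second block interval)...
print(throughput_tps(ONE_MB, 250, 600), throughput_tps(TWO_MB, 250, 600))

# ...but every block now takes longer to reach each peer, which is what
# squeezes out slow miners and centralizes the network.
print(propagation_delay_s(ONE_MB, 1e6, 0.05), propagation_delay_s(TWO_MB, 1e6, 0.05))
```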
0:13:21 So it’s a very, very good point. So far we’ve been talking about
0:13:29 the throughput of instructions for a blockchain, which is one of the three different dimensions
0:13:35 of scalability of compute. The third one was the cost of an instruction:
0:13:41 how much does it cost to have a transaction be processed? The cost of proof of work
0:13:49 is what ends up driving the cost of an instruction so high. So there are
0:13:56 various different lines of work. There’s the line of work that’s
    0:14:01 trying to improve the propagation of messages and to make that more efficient. So there
0:14:07 are companies like bloXroute, which is building a kind of content delivery
0:14:13 network with advanced computer networking technology that allows miners to propagate
    0:14:18 their blocks to other miners very efficiently. And so that’ll help with the propagation delay,
    0:14:19 which will help with the throughput problem.
0:14:23 So it’s a new-generation CDN that is optimized for crypto.
    0:14:27 And in fact, they call it a blockchain distribution network, a BDN.
    0:14:28 Oh, got it.
0:14:31 And so that’s an interesting angle, and it operates at layer zero, the networking
    0:14:36 layer below the blockchain layer. And it can help any blockchain project, any blockchain
    0:14:42 project that builds on top of it will benefit from faster propagation of messages.
    0:14:45 And the classic internet definitely needed this. Like it’s impossible to imagine the
    0:14:50 internet without a CDN, right? You’d be waiting a lot longer for almost anything without that
    0:14:54 layer of infrastructure. So that makes sense. So there’s sort of a CDN layer.
    0:14:57 Yeah. And then, and then there are people who are working on this latency to finality
    0:15:01 dimension, which we also talked about, which is how much time do you have to wait before
    0:15:10 your message or your update, your computation is final. And so that is a consensus problem.
    0:15:16 How do we agree that the update is final? How do we agree that something can no longer
    0:15:23 be reversed? So proof of work is a probabilistic consensus algorithm in that there is always
    0:15:28 some probability that whatever update to the ledger was performed could be reverted at
    0:15:35 some point in time later. And the key aspect there is that the more time passes, the less
    0:15:40 likely it becomes that that update gets reverted. But it’s always probabilistic. And this is
    0:15:43 why you kind of have to wait 60 minutes before you know that it’s final, because that’s
    0:15:48 the point at which that probability becomes so minimal that you can effectively trust
    0:15:49 that it won’t be reverted.
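The Bitcoin white paper models this waiting game as a gambler’s-ruin problem. The sketch below uses the simplified closed form (q/p)^z and omits the Poisson correction for blocks the attacker mines while the merchant waits, so treat it as intuition rather than the exact bound:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Chance that an attacker controlling fraction q (< 0.5) of the hash
    power ever overtakes the honest chain from z blocks behind
    (Nakamoto's gambler's-ruin simplification)."""
    p = 1.0 - q  # honest hash-power share
    return (q / p) ** z

# Each extra confirmation shrinks the reversal probability geometrically;
# after 6 blocks (~60 minutes), a 10% attacker succeeds about 2 times in
# a million, which is why an hour's wait is treated as final.
for z in range(1, 7):
    print(z, catch_up_probability(0.10, z))
```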
    0:15:54 But there is innovation in consensus algorithms that are better than that, that are not probabilistic
    0:16:01 and that are actually deterministic and are final on a far shorter time span.
0:16:05 Yeah. And you need both, right? You need non-probabilistic and you need fast, right?
0:16:10 So I, you know, everybody talks about the Ethereum smart contract use cases, things
    0:16:16 like, hey, when you go rent an Airbnb, that lock will open because I know it’s you. There’s
    0:16:20 a smart contract that governs the, oh, you’re allowed to stay here tonight. No one’s going
    0:16:26 to wait there 60 minutes for that. And so is it feasible? Are we on a path to basically
    0:16:31 enable use cases like that where like, I’ve got my smart key and I’m in front of the Airbnb
0:16:35 and like in seconds that thing is going to open because the contract cleared. Is that possible
    0:16:38 or is that not quite possible yet? We don’t know the path.
0:16:43 I think, I mean, we do see a path. I think that, so given the improvements on
0:16:49 the networking layer with companies like bloXroute, improvements on the consensus layer with companies
    0:16:53 like DFINITY and Ethereum 2.0 and Cosmos and Polkadot, there’s like a large number of people
    0:17:00 working at that level. And then finally, improvements on the cost per instruction. Similarly, proof
    0:17:05 of stake and other consensus algorithms don’t use the expensive proof of work that the original
0:17:09 blockchains used. And so that cost can also come down; you can see a world where this
0:17:15 does come down to a degree that it becomes fairly practical for everyday use, for things
0:17:23 like quick, lightweight interactions between people or between people and machines.
    0:17:29 And so I think that that is certainly possible and I think we’re on our way. But I think
    0:17:35 it’s worth noting, decentralized systems will always be more expensive and less performant
    0:17:41 than centralized ones. There’s just an inherent tradeoff there and there’s an inherent cost
    0:17:47 to decentralizing a computer system. And so it won’t replace everything. There will
0:17:53 be applications that will always make sense for a centralized
0:17:57 world. And there will be some applications for which decentralization very much does
    0:18:03 make sense. And those are the ones where trust is the key differentiator, where trust is
    0:18:08 the bottleneck to scale. That’s where decentralized systems will shine.
    0:18:12 I want you to give me a couple of examples of sort of applications where trust is the
    0:18:18 key as opposed to performance or cost or whatever. But before we do that, I want to talk about
    0:18:23 this notion of proof of work, transitioning to proof of stake, because this is super important.
    0:18:30 I read all the time that Bitcoin mining is consuming some known fraction of the world’s
0:18:35 electricity because the math is so hard to actually do one of these proofs of work, right?
0:18:41 It’s hard math, similar to public-key cryptography. And so what’s happening here? Like, how do
    0:18:45 we get on a path where we’re not consuming all the world’s electricity doing these proofs?
    0:18:51 Yeah. So the key goal for crypto networks is to build trust in a way that is bottom-up
    0:18:55 and that does not depend on some central authority. And so as a result, you have to figure out
    0:19:01 a way to make the network be self-policing and to kind of have an incentive structure
    0:19:07 that makes its members police one another in a way that the entire network kind of works
    0:19:12 and sort of proceeds according to people’s expectations. So in a sense, you have to make
    0:19:20 it the rational equilibrium to play by the rules of the game rather than to defect and
    0:19:26 profit in some way that is against the rules and that kind of figures out a way to game
    0:19:31 the system. And so in Bitcoin, one of the key ways this worked was through this proof
    0:19:35 of work. So maybe talk a little bit about how did it work? Why did it consume so much
    0:19:40 electricity? And then where are we going? So the key problem that crypto networks have
    0:19:46 to solve is figuring out who gets to participate because there’s no one central party who is
    0:19:50 able to decide who gets to participate and who doesn’t. That’s the entire point we want
    0:19:56 to do away with that. And so proof of work does this by requiring every participant to
    0:20:01 compute an expensive proof of work. It’s a computation that’s done on top of every block
    0:20:05 that they want to add to the blockchain. So that’s an extrinsic resource that they have
    0:20:11 to come across, they have to procure to be able to participate and it prevents any one
    0:20:18 person from completely monopolizing the system and from having unilateral ability to modify
    0:20:22 the underlying blockchain. That of course is very expensive because you have to come
    0:20:27 across all of this computational power in order to participate. So proof of stake says
0:20:31 something different: instead of making the resource that you have to come across and
    0:20:37 you have to procure be extrinsic to the system, why not make it something that’s intrinsic
    0:20:44 to it, namely why not make it a crypto asset, why not make it a token that you have to own
    0:20:50 in order to buy your participation in the system. So what proof of stake does is it
    0:20:57 says if you own 2% of the tokens in the network, then by and large on average you’ll have 2%
    0:21:01 of the say in what blocks get to make it onto the blockchain and which ones don’t. But now
0:21:09 because of the asset itself, the resource that you need to be in possession of in order
    0:21:14 to participate, because it’s no longer extrinsic to the system, it’s no longer a resource that
    0:21:19 is sort of a physical resource like electricity and rather it’s entirely virtual, now the
    0:21:25 cost of actually participating in consensus and making the entire network work in real
0:21:31 terms comes down dramatically. And it’s just as secure, or at least theoretically can be made
0:21:37 just as secure, and that’s a controversial statement but I will sort of stand by it.
0:21:42 But it’s less expensive, and so on a cost-per-instruction
    0:21:47 basis it’ll be much more performant than a proof of work system.
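The "2% of the tokens gives you, on average, 2% of the say" idea boils down to sampling block proposers in proportion to stake. A minimal sketch, with hypothetical validator names and stake amounts:

```python
import random
from collections import Counter

def select_proposer(stakes: dict, rng: random.Random) -> str:
    """Pick the next block proposer with probability proportional to stake."""
    validators = list(stakes)
    return rng.choices(validators, weights=[stakes[v] for v in validators], k=1)[0]

# Hypothetical validators: alice holds 2% of all staked tokens.
stakes = {"alice": 2, "bob": 49, "carol": 49}
rng = random.Random(0)  # fixed seed so the sketch is reproducible
counts = Counter(select_proposer(stakes, rng) for _ in range(100_000))
print(counts["alice"] / 100_000)  # roughly 0.02: about 2% of the blocks
```

Real proof-of-stake protocols layer verifiable randomness, committees, and slashing on top of this basic weighted sampling, but the proportionality is the core idea.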
    0:21:51 So if I could restate that it sounds like in Bitcoin with its proof of work I had to
    0:21:56 bring electricity, consume the electricity, do this hard math and that was how I sort
0:22:00 of entered the system and participated, and my reward as a miner for burning all this
0:22:06 electricity is that I get paid in Bitcoin. In proof of stake I’m bringing basically tokens,
0:22:10 I’m bringing the tokens themselves, I’m not consuming electricity, I’m just bringing
0:22:17 the tokens themselves, and by virtue of my ownership of the tokens I can participate
    0:22:23 in a proof of stake that delivers the same trust properties as proof of work without
    0:22:25 burning all the electricity.
    0:22:26 Exactly.
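To make that contrast concrete, here is a toy version of the proof-of-work puzzle itself: finding the nonce is the expensive part that burns the electricity, while checking a claimed nonce is nearly free. (A 16-bit difficulty keeps the toy fast; Bitcoin’s real difficulty is vastly higher and its block structure is richer.)

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(block_data + nonce) falls below
    a target -- a toy version of Bitcoin's hash puzzle. The search costs
    about 2**difficulty_bits hash evaluations on average."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a claimed nonce costs a single hash."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = mine(b"toy block", 16)          # expensive to find...
assert verify(b"toy block", nonce, 16)  # ...nearly free to check
```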
    0:22:31 Got it. Got it. So it sounds like all of these things need to come together for us to build
    0:22:36 sort of distributed compute in this new world, right? We need the new CDNs, we need this transition
    0:22:40 to things that look like proof of stake so we’re not consuming all this electricity.
    0:22:46 Any other big innovations that need to happen in this space to bring the transaction cost
    0:22:48 down and the transaction speed up?
    0:22:54 I think those are the big ones. So we’re talking about the three pillars of computation, well
    0:23:00 the three pillars of scalability of computation, there’s throughput, there’s latency and then
0:23:03 there’s cost per instruction, and so we kind of addressed all three. There are companies
    0:23:06 that are working in all three and I think these are very much still open problems and
    0:23:12 there’s just a lot of green field for exploration and so I will tell you Google engineer, this
0:23:17 is where it’s exciting, this is where you, as a sort of distributed systems engineer
0:23:23 or machine learning expert, can leverage your skills to figure out some of these open
    0:23:24 problems.
    0:23:29 Got it, so if I’m motivated by doing things like I want to create a better TCP/IP, I want
    0:23:34 to create a better HTTPS and oh man I’m just too late to the party, like I arrived when
    0:23:38 all those protocols were already settled, like you’re saying this space is for me because
    0:23:42 a lot of the problems haven’t been settled yet, right? A thorny problem, unsettled, big
    0:23:43 world impact.
0:23:48 Yeah, even though it’s been 10 years since the publishing of the Bitcoin white paper, this
    0:23:55 is still very early days, because I think that it’s only recently that people have begun
    0:24:00 to conceive of blockchains as computers as opposed to just payment systems, so the emergence
0:24:09 of Ethereum was in 2014, and it’s only been four or five years, really, of people
    0:24:14 thinking of blockchains in this way and so it’s very early days, the space is very nascent
    0:24:17 and there’s just a lot of work to do, great.
    0:24:21 So perfect time, why don’t we sort of move on to part two, so we’ve talked about distributed
    0:24:28 compute, let’s talk about distributed storage and so we started with the Google photos example,
    0:24:34 I kind of trust Google to have all my storage, all my photos, but to do that they have huge
    0:24:38 servers with lots of hard drives in them scattered around the world, it’s pretty expensive and
    0:24:44 so if I was a startup trying to mount a frontal assault against that, I kind of only have
    0:24:50 two choices, one is raise like a trillion dollars and try to duplicate their infrastructure,
    0:24:58 put data centers everywhere, points of presence, or I could do what the distributed crypto community
    0:25:02 is trying to do which is convince you to lend me a bit of your hard drive space.
    0:25:09 The key reason that we need a decentralized layer of storage is because it itself will
    0:25:14 be a foundational building block for this decentralized world computer that we’re talking
    0:25:19 about and so in order for some of these applications that we’ve talked about that are not really
    0:25:26 possible to build on top of a centralized architecture to really work, we need the full extent of
    0:25:31 a computer that works in this way and so if we had a centralized storage layer instead
    0:25:38 of a decentralized one then that would be the weakest link, it would dilute the promise
    0:25:44 of the decentralized layer of computation if you don’t have all of the pieces themselves
    0:25:49 being decentralized and so that’s why it’s important because we want to enable these
    0:25:56 applications that kind of depend on decentralization for trust and it’s not so much to compete
    0:26:02 head on with Amazon because the economics are different, as we said like decentralizing
    0:26:08 a system always comes with the cost and so it won’t make sense for just storing your
    0:26:14 photos if storing your photos is something that Amazon/Google can do and if there’s
    0:26:19 no trust dimension to doing that, maybe if you care deeply about your photos not ever
    0:26:24 being seen by anyone but yourself or by anyone but your close friends then maybe you can
    0:26:28 imagine using a different kind of architecture, one that’s maybe more decentralized, but for
    0:26:32 that kind of use case I imagine sort of the centralized data center model, it works very
    0:26:38 well and they’re not paying the decentralization tax and so it’s always going to be cheaper
    0:26:45 for them to store your photos but there are interesting opportunities, so for example
    0:26:48 there’s a project called Filecoin, there are a number of others too that are working in
0:26:52 the same space, one of them is called Sia, another one is called Storj, and they are
    0:26:59 trying to build decentralized marketplaces for storage, so the idea is yes I can rent
    0:27:05 some of your idle storage space on your laptop and pay you for that storage in Filecoin and
    0:27:11 the reason that that’s now possible is because I can now trust that you will actually store
    0:27:15 my files and you can trust that I will pay you for that storage even if we’re complete
    0:27:20 strangers and reside across the world from one another because of the cryptographic guarantees
    0:27:23 of the underlying protocol.
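The cryptographic guarantee Ali mentions can be sketched with a toy challenge-response audit: the client precomputes random challenges before uploading, so the provider can only answer them if it really keeps the file. This is only the intuition, not Filecoin’s actual proof-of-replication or proof-of-spacetime constructions:

```python
import hashlib
import os

def make_audits(file_bytes: bytes, n: int) -> list:
    """Before uploading (and deleting the local copy), the client precomputes
    n random challenges and the responses an honest provider must give."""
    audits = []
    for _ in range(n):
        challenge = os.urandom(16)  # unpredictable, so answers can't be precomputed
        audits.append((challenge, hashlib.sha256(challenge + file_bytes).digest()))
    return audits

def prove(stored_bytes: bytes, challenge: bytes) -> bytes:
    """The storage provider answers a challenge using the file it stores."""
    return hashlib.sha256(challenge + stored_bytes).digest()

data = b"my photo archive"
audits = make_audits(data, n=3)   # the client keeps these and can drop the file
challenge, expected = audits[0]
assert prove(data, challenge) == expected               # honest provider passes
assert prove(b"something else", challenge) != expected  # a cheater fails
```

The real protocols go much further (they prove unique physical copies and continuous storage over time), but this is the basic reason two strangers can transact without trusting one another.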
    0:27:29 And so that’s important because previously without crypto and without blockchains that
    0:27:34 would have been a very difficult interaction to coordinate and it would have been very hard
    0:27:39 for us to establish trust from halfway across the world and make that exchange happen.
    0:27:44 So it’s a marketplace that now emerges where previously it couldn’t have and it gives us
    0:27:54 this property that no one controls this sort of layer of storage and we can use it in conjunction
    0:27:58 with a computation layer to build applications that are fully decentralized and that are
    0:28:07 unstoppable and kind of run in their own right and therefore command greater trust than
    0:28:09 applications that are centralized.
0:28:13 So I remember before the crypto craze there were definitely startups that were trying
0:28:17 to do this exact thing, which is sort of, let’s share hard drive space.
    0:28:22 I remember there were backup companies that basically say your price would be I’ll make
0:28:26 up a price, $20 per gig per month, but if you contribute your own hard drive space your
0:28:31 price is $10 per gig a month, or whatever, I’m making up those numbers, but they never really
    0:28:32 got to scale.
    0:28:39 So what are the advantages of doing this with sort of a cryptographic protocol as their
    0:28:43 intermediary as opposed to just hey there’s a company and there’s a service and there’s
    0:28:48 a price chart and please participate right and we’re going to sort of transact value
    0:28:50 and fiat currency.
    0:28:55 Yeah, well I think that the key difference is that those companies were operating on
    0:29:03 the assumption that the value here is economic, that, kind of
    0:29:12 like Uber, you’ll tap into all of this unused storage space that previously was inaccessible
    0:29:17 and offer it at a cheaper rate, but I think that in the end, because storage is
    0:29:24 the most commoditized of computational resources and because there are such strong economies
    0:29:31 of scale that benefit companies like Amazon as they build data centers, the economic argument
    0:29:32 just doesn’t work.
    0:29:37 So the reason that we need decentralized storage networks is not because they’re going to reduce
    0:29:43 the price of storage by orders of magnitude at least for most kinds of files and for most
    0:29:47 use cases I don’t believe that that’ll be the case.
    0:29:53 The value proposition is that again we now no longer have this central entity that’s
    0:29:57 controlling the storage on this network and so for the kinds of applications that depend
    0:30:02 on that the kinds of applications that really cannot be built unless you have that you just
    0:30:04 have no other option.
    0:30:05 Yeah.
    0:30:09 So you would pay the additional cost, a higher price, for storing your files in
    0:30:12 Filecoin, because that property matters.
    0:30:17 Is there a privacy argument here which is the because it’s decentralized for instance
    0:30:23 there’s nobody to give a government subpoena to to say I want to see your files.
    0:30:29 I think privacy comes into it to some extent but I think it’s a little bit orthogonal because
    0:30:33 you can imagine encrypting your files before storing them in a centralized service.
    0:30:34 Yeah.
    0:30:35 Fair.
    0:30:40 So you there are ways of building privacy into into sort of existing centralized storage
    0:30:41 networks.
    0:30:42 Okay.
    0:30:46 And then what are some of the challenging computer science problems in building these?
    0:30:51 If there was proof of work for computation, what are the analogous proofs in this space?
    0:30:52 Yeah.
    0:30:55 Well the biggest one is trusting that the people who are claiming to be storing your
    0:30:58 files actually are storing your files.
    0:31:04 So there’s this line of work that’s been spearheaded by the people at Filecoin and by
    0:31:11 Stanford’s cryptography lab, Dan Boneh’s lab, and people like Ben Fisch and Benedikt
    0:31:18 Bünz working with him have done a lot of work on figuring out how to create cryptographic
    0:31:20 proofs of retrievability.
    0:31:26 How can I prove to you that I actually am storing the files that I’m claiming that I
    0:31:27 am storing.
    0:31:28 Right.
    0:31:29 And it’s super interesting.
    0:31:33 It’s extremely cutting edge and it’s basically at the heart of how you make a system like
    0:31:34 this work.
    0:31:35 Yeah.
    0:31:39 So you basically have to catch the pretenders which is you don’t want somebody to be able
    0:31:44 to say yes I’ll store your files and then not actually store them right because it would
    0:31:49 be cheaper for them not to store them right and so these sort of proofs of retrievability
    0:31:51 are basically ways to catch the pretenders.
    0:31:52 Exactly.
    0:31:53 Right.
    0:31:54 And in a sort of mathematical fashion.
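[Editor's note: the intuition behind catching the pretenders can be sketched with a toy Merkle-tree challenge-response scheme. This is a simplified illustration of the general idea only, not Filecoin's actual proof-of-retrievability or proof-of-replication construction; the chunk size and the use of SHA-256 are arbitrary choices for the sketch.]

```python
import hashlib
import os
import random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(chunks):
    # Hash each chunk, then pairwise-hash levels up to a single root.
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(chunks, idx):
    # Sibling hashes from leaf to root; lets a client re-derive the root.
    level = [h(c) for c in chunks]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[idx ^ 1], idx % 2))  # (sibling hash, am-I-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(root, chunk, path):
    node = h(chunk)
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

# Client splits the file into chunks and keeps only the 32-byte root.
chunks = [os.urandom(64) for _ in range(8)]
root = merkle_root(chunks)

# Challenge: pick a random chunk; the server must return it with its path.
# A server that threw the data away cannot answer, so cheaters get caught.
i = random.randrange(len(chunks))
assert verify(root, chunks[i], merkle_path(chunks, i))
```

The client only stores one hash yet can spot-check any chunk; repeating random challenges over time makes "pretending to store" economically pointless.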
    0:31:56 So yeah, you actually had a conversation with Ben Fisch on this.
    0:32:01 So for people who are interested in exploring this topic further, there’s a couple YouTube
    0:32:02 videos.
    0:32:03 Awesome.
    0:32:04 So, okay.
    0:32:05 So we’ve talked about distributed compute.
    0:32:07 We’ve talked about distributed storage.
    0:32:10 I guess the third leg is now networking like what’s happening here.
    0:32:14 Actually this kind of reminds me, do you remember the company Fon?
    0:32:20 This is sort of back in 2006 and the idea was I could buy a Wi-Fi router from this company
    0:32:23 called Fon and then I could sort of do one of two things with it.
    0:32:29 One, I could sort of offer it in Linus mode, which is I gave away free Wi-Fi access
    0:32:30 right.
    0:32:34 Anybody that came to my house, within range of my Wi-Fi router, could access my Wi-Fi for free
    0:32:40 and then in exchange I could access anybody else’s Wi-Fi access point for free.
    0:32:44 So that’s mode one, or I could be in Bill mode, and in Bill mode basically I would say look
    0:32:52 my Wi-Fi is available to you but you’re going to rent it for two bucks and in exchange for
    0:32:56 that I would have to pay to access other people’s Wi-Fi.
    0:33:01 So I could be in open source mode or I could be in rent seeking mode and it was this attempt
    0:33:08 to basically create a distributed ISP out of millions and millions of wireless
    0:33:09 access points.
    0:33:12 Is something similar going on in crypto today?
    0:33:13 Absolutely.
    0:33:19 So the difficulty with those efforts has often been just the problem of density and the problem
    0:33:20 of incentives.
    0:33:26 So how do you get enough people to offer up hardware that forwards packets and provides
    0:33:32 bandwidth within a particular geographic region to make it make sense and to make it work
    0:33:39 and to be at all competitive with sort of the more kind of centralized top-down internet
    0:33:41 backbone infrastructure that we rely on.
    0:33:46 And so there are a number of projects that are, it’s very early because this is actually
    0:33:48 probably one of the hardest problems in this space to tackle.
    0:33:52 How do you decentralize even the networking layer, the communication between different
    0:33:58 nodes, so that it doesn’t depend on centralized internet infrastructure?
    0:34:05 So people are talking about incentivized mesh networking protocols where you can earn cryptocurrency,
    0:34:13 you can earn the asset that’s native to a particular protocol by setting up a router.
    0:34:17 This router can be a normal router that just forwards packets but it could also be wireless
    0:34:24 and provide a different layer of connectivity that essentially makes the
    0:34:31 networking layer more robust and more resistant to censorship and perhaps even more performant
    0:34:35 if you have just greater connectivity to the people you want to interact with.
    0:34:40 So yeah, I think this is one of these problem areas that’s fairly far out because it kind
    0:34:46 of depends on the other two building blocks and it has its unique challenges because now
    0:34:49 we’re talking about bringing hardware into the picture.
    0:34:51 That’s always a whole other can of worms.
    0:34:56 But it is very interesting and I think in the end, it’ll also be a piece of the puzzle.
    0:35:03 So that’s one angle to it, it’s decentralizing networking and then the other angle is making
    0:35:08 networking itself just more performant for the use cases of decentralization.
    0:35:12 So we talked a little bit about the CDNs for blocks.
    0:35:16 So that kind of falls into this category as well.
    0:35:17 Got it.
    0:35:20 So those are the key ingredients that you need to build a computer, right?
    0:35:24 You need compute, you need network, you need storage and it looks like there’s sort of
    0:35:27 efforts underway in all of these things.
    0:35:32 Let’s assume for a second that time has gone by and protocols, in sort of Darwinian fashion,
    0:35:38 have competed and a couple winners have emerged and these things look more like solved problems.
    0:35:43 So now the exciting opportunity is, okay, now we can build killer apps on top of the
    0:35:47 blockchain computer and so maybe talk to me about what is the community most excited about?
    0:35:50 What kinds of apps are you going to build?
    0:35:53 Because as you’ve been pointing out, it’s not going to be the straightforward replacements
    0:35:55 for the things that we know and love today.
    0:36:01 It’s not like instantly the replacement for Airbnb or Google Photos or Lyft because those
    0:36:04 systems don’t have to pay the decentralization tax.
    0:36:08 It’s probably going to be another class of apps at least to begin with.
    0:36:12 That is the killer question and I think as with any new technology it is very difficult
    0:36:16 to predict what applications will be the most impactful.
    0:36:22 I think one reason to believe that the kind of innovation that we’ll see will be enormous
    0:36:28 is that everything happens, all of the code that’s written in the space ends up being
    0:36:29 open source.
    0:36:31 And so as a result the ideas are out there.
    0:36:36 People share their ideas with other teams, other teams sort of build on top of one another’s
    0:36:37 ideas.
    0:36:43 And so the kind of innovation that we’re likely to see is just combinatorial in nature and
    0:36:49 likely more explosive and will accelerate more quickly than it has for previous waves
    0:36:52 of computing and previous waves of technology.
    0:36:59 If we do have this kind of decentralized world computer that is a kind of computational fabric
    0:37:04 on top of which applications can run and it is unified in that one application can easily
    0:37:06 talk to another.
    0:37:10 Then we have the possibility of composability of applications.
    0:37:16 So not only do we have the sharing of ideas that are just available to people because
    0:37:22 by virtue of being open source, but we also have the actual composability of running code.
    0:37:25 Code that runs on top of this computational fabric that builds on top of the code that
    0:37:27 other people have built.
    0:37:32 And this kind of composability will just fuel the flame of combinatorial innovation even
    0:37:33 further.
    0:37:38 So I feel like the kinds of applications that we’ll see as a result are fundamentally
    0:37:40 impossible to predict.
    0:37:44 But I will say, I think the kinds of things that we’ve started to see, the kinds of applications
    0:37:48 that seem to be working so far, and it’s still very early and they’re working only within
    0:37:56 kind of niche communities, are ones where trust is the bottleneck to scale.
    0:37:59 So I think the most obvious one began with Bitcoin.
    0:38:08 It’s attempting to be money and the only way that you would trust a program that maintains
    0:38:15 a ledger of tokens that claims that those tokens should be money is if that ledger isn’t
    0:38:20 in the control of any one entity or any one individual.
    0:38:27 I guess you would trust the central government, maybe, but you would not trust a company to
    0:38:28 do that.
    0:38:31 So you wouldn’t have been able to build Bitcoin on top of Amazon.
    0:38:35 Bitcoin is like one example of an application that you can build and a bunch of the applications
    0:38:41 that have worked so far in the Ethereum ecosystem primarily have been financial in nature, have
    0:38:47 been things that build on top of that initial idea, so things like lending platforms, derivatives,
    0:38:54 exchanges, things that depend on trust for them to really take off.
    0:38:59 But we’ve also started to see other applications that benefit from this feature.
    0:39:05 I think gaming is an interesting one where you can imagine taking the existing world
    0:39:09 of gaming, you can imagine, for example, a world of Warcraft where people have significant
    0:39:14 investment in their character and in the gear that they have and in the lives that they
    0:39:22 live within these games, taken to a whole other level where you actually own your character
    0:39:27 and you own the gear for your character and you can take your character and gear out of
    0:39:33 the game and maybe into another game because you now have this interoperable trustworthy
    0:39:36 fabric of computation that other developers can build on top of.
    0:39:43 So that kind of investment in your personality and in your character in the game is unlike
    0:39:44 what we’ve seen in gaming before.
    0:39:48 So this could take gaming just to a whole other level and that could be a very interesting
    0:39:49 set of applications.
    0:39:53 But it very much depends on these three building blocks, we need scalability before gaming
    0:39:55 can really take off.
    0:40:00 And we’ve seen examples of this, I think CryptoKitties is one where people became very invested
    0:40:08 in owning this digital collectible, which is something that is fundamentally new.
    0:40:13 Never before could you directly own something that’s digital; this is the first
    0:40:14 time that that’s possible.
    0:40:18 I love this idea of being able to take sort of a high level character that I’ve developed
    0:40:23 in one game and moving it to another because, you know, look, essentially a high level character
    0:40:27 in say, World of Warcraft is the ultimate proof of work, right, which is I had to do
    0:40:31 a lot in order to get this character to be super high level.
    0:40:34 And now I’m kind of stuck in World of Warcraft, which is great if I want to play more World
    0:40:38 of Warcraft, but like it’d be awesome if I could take my proof of work and move it to
    0:40:39 another system.
    0:40:40 Absolutely.
    0:40:41 Yeah.
    0:40:46 Yeah, I mean, I think there’s a story about how part of Vitalik’s inspiration
    0:40:51 for starting Ethereum, I’m not sure which game it was, was some
    0:40:58 gaming platform that revoked his ownership of a key item in the game.
    0:40:59 There it is.
    0:41:02 That made him like, made him like so, so mad, right.
    0:41:06 This is the problem with centralization, right, which is you have a company operating
    0:41:08 a game and they can do whatever they want with the game, right.
    0:41:13 One change to the terms of service and all of a sudden your proof of work is basically,
    0:41:14 it’s invalid.
    0:41:15 Exactly.
    0:41:16 Yeah.
    0:41:17 That would make you mad.
    0:41:20 Well, if you think about sort of trust being the key feature, I mean, there’s so many
    0:41:26 sort of, you know, properties that we think about on the internet that are essentially
    0:41:27 sort of brokers of trust, right.
    0:41:33 So LinkedIn is sort of the trusted entity to manage your resume and present your resume.
    0:41:37 And eBay is sort of the trusted marketplace where the sellers or Etsy is sort of the trusted
    0:41:42 place where you sort of send money and expect stuff, right.
    0:41:49 There’s Airbnb and Lyft, and so like trust seems like a super powerful primitive for
    0:41:50 creating killer apps.
    0:41:51 Yeah, definitely.
    0:41:58 And I think the web 2.0 world has figured out how to bootstrap trust in a way that
    0:42:05 depends on things like identity and reputation, where there’s social capital associated with
    0:42:08 your track record on the internet.
    0:42:16 So things like reviews on Yelp, or stars on Uber, or number of likes
    0:42:21 on Twitter and number of followers on just generally social media.
    0:42:28 These things are, this is like the mechanism for trust that’s used in web 2.0.
    0:42:30 And I think crypto is orthogonal to that.
    0:42:36 Crypto today has no sense of identity, that people are pseudonymous, people can create
    0:42:38 multiple addresses and pretend to be different people.
    0:42:43 People can abandon identities that maybe have a bad reputation and move over to new identities
    0:42:45 that don’t.
    0:42:53 And the entire fabric of trust, therefore, depends not on social capital but rather on
    0:42:54 financial incentives.
    0:43:00 So it’s this orthogonal layer where you’re incentivized to behave honestly because there
    0:43:03 is real money at stake.
    0:43:08 And if you lie or if you behave in a way that’s not in accordance to the rules of the protocol,
    0:43:10 then there’s something that you will lose as a result.
    0:43:17 So there’s sort of financial capital and financial incentives as a way of bootstrapping trust
    0:43:20 and then there’s social capital as a way of bootstrapping trust and I think that’s one
    0:43:29 of the key differences between the web 2.0 world and the now web 3.0 crypto-enabled world.
    0:43:34 And what will be very interesting to see is the two models coming together.
    0:43:36 So that’s something to kind of look out for.
    0:43:41 And it seems like Keybase is sort of an early attempt at that, which is on the one hand
    0:43:46 you had all of these private keys that represented you in these cryptographic networks and on
    0:43:50 the other hand you had sort of your Twitter profile and your LinkedIn profile and your
    0:43:56 Facebook profile and Keybase sort of bridged them.
    0:44:00 What other things do you think we will see in this space of sort of making identity more
    0:44:01 seamless?
    0:44:02 Yeah.
    0:44:04 I think Keybase is a key one.
    0:44:13 A key problem is how do you map a real human individual to a public key in a way that is
    0:44:15 trustworthy and in a way that you can rely on.
    0:44:21 So Keybase does this, and I think it’s a very apropos example, where you can use your existing
    0:44:26 web 2.0 world identity to bootstrap your web 3.0 identity.
    0:44:31 You can use your Twitter account and your Facebook account and your GitHub account and
    0:44:39 your website and point them all to this cryptographic identity that you can then use to interact
    0:44:43 with other people in the sort of crypto anonymous world.
    0:44:47 And they can verify that that really is you because of the cryptographic assurances of
    0:44:54 those connections between Twitter and so on and your public key.
    0:44:55 You can take that further though.
    0:45:00 I think once you do have identity in the crypto world and I think it is an unsolved problem,
    0:45:05 Keybase is the first kind of attempt, but there’s still a lot to do there.
    0:45:10 Once you have a solid layer of identity within crypto, that also doesn’t sacrifice privacy.
    0:45:15 So it’s worth noting there’s a big trade-off there: if you have strong identities,
    0:45:19 you have less privacy, and it’s kind of difficult to come to the right balance between
    0:45:22 the two and it will vary per application.
    0:45:26 But once you have a good system for that, then you can start building reputation systems.
    0:45:30 You can even imagine like a page rank style algorithm for reputation.
    0:45:41 If I trust Frank and Frank trusts Joe, then I kind of indirectly trust Joe.
    0:45:46 And you can imagine kind of taking this to the whole other level to really enhance the
    0:45:53 kind of trust that emerges from financial incentives with social capital and with reputation.
    0:45:54 That’s very powerful.
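[Editor's note: the "if I trust Frank and Frank trusts Joe, I indirectly trust Joe" idea can be illustrated with a tiny PageRank-style power iteration over a trust graph. The names and edges below are hypothetical, and a real reputation system would also need Sybil resistance and much more; this is only a sketch of the ranking mechanic.]

```python
# Directed "who trusts whom" graph (hypothetical names and edges).
trusts = {
    "ali":   ["frank"],
    "frank": ["joe", "ali"],
    "joe":   ["frank"],
}

people = list(trusts)
score = {p: 1.0 / len(people) for p in people}  # start with uniform trust
damping = 0.85  # standard PageRank damping factor

# Power iteration: each person splits their trust score among the
# people they trust; a damping share is redistributed uniformly.
for _ in range(50):
    nxt = {p: (1 - damping) / len(people) for p in people}
    for p, outs in trusts.items():
        for q in outs:
            nxt[q] += damping * score[p] / len(outs)
    score = nxt

# frank receives trust from both ali and joe, so he ranks highest.
print(max(score, key=score.get))  # prints "frank"
```

Trust flows transitively: joe never declared trust in ali, yet ali's standing still feeds joe's score through frank, which is exactly the indirect-trust property described above.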
    0:45:58 There’s some of this even in things like the Facebook marketplace today, which is you see
    0:46:02 somebody listing cheese or a bike or whatever and you’ll see, “Oh, this is a friend of Ali.”
    0:46:06 And then that brings a level of trust to that transaction that wouldn’t otherwise exist.
    0:46:07 Exactly.
    0:46:12 And one of the reasons this is so important for crypto is that today, every interaction
    0:46:16 in the world of crypto tends to be very transactional.
    0:46:20 You don’t even know who you’re dealing with and so it really is about that one transaction.
    0:46:27 It’s one-off and whenever there’s conflict, it’s a one-off prisoner’s dilemma style game.
    0:46:30 Whereas if you had identity and if you had reputation, you could turn all of those one-off
    0:46:35 prisoner’s dilemma style games into iterated prisoner’s dilemma style games, which are far
    0:46:36 easier to solve.
    0:46:38 You have like the long view of relationships.
    0:46:45 You can have a track record and rapport with the people that you interact with if you only
    0:46:47 had that other layer.
    0:46:50 So so many of the problems, the game theoretical problems that need to be solved for crypto
    0:46:54 to work that are so hard to solve will become easier once you have this additional lever
    0:46:55 to play with.
    0:46:56 Yeah, that’s super interesting.
    0:46:57 Right.
    0:47:01 So every prisoner’s dilemma type game sort of assumes perfect strangers go in and now
    0:47:05 we have to sort of mathematically model what will happen, not knowing anything about them.
    0:47:06 Exactly.
    0:47:09 But if you threw me and you into a prisoner’s dilemma, right, like we’ll have much higher
    0:47:14 fidelity predictions about what each other will do, like, “I’m not going to squeal on you, Ali.
    0:47:15 He’s a friend of mine.”
    0:47:18 Especially if we know that we’re going to be in a similar kind of game in the future.
    0:47:21 And it’s like if we cooperate now, then we’ll build rapport and then it’ll be easier for
    0:47:24 us to cooperate in the future.
    0:47:29 And if we cheat each other now, then we will kind of ruin that opportunity later on and
    0:47:32 make it harder for us to cooperate down the line.
    0:47:33 Super interesting.
    0:47:38 So one thing before we go, I want to talk a little bit about governance, because today
    0:47:43 it seems like there’s a lot of conversation in the tech community about, “Gee, maybe the
    0:47:47 tech giants have gotten too big,” right, because with the stroke of a pen and one change in
    0:47:52 terms of service, like all of a sudden, the rules of engagement or the winners and losers
    0:47:55 in that environment are dramatically different.
    0:48:00 In crypto land, the idea would be let’s not have one company which completely owns their
    0:48:05 terms of service control that, there’s going to be sort of a decentralized community.
    0:48:10 But we end up with some of the same questions, right, like who gets to change the terms of
    0:48:11 service?
    0:48:14 How do those changes come about, who proposes them?
    0:48:19 So maybe talk to me a little bit about what’s happening in the community as we iterate on
    0:48:20 systems of governance.
    0:48:25 Yeah, you’re hitting at one of the most fundamental questions in this space, which is that if
    0:48:32 you do build a system that is decentralized and that control over it does not rest with
    0:48:36 any one individual, then there’s a question, well, how do you go about updating it?
    0:48:39 How do you go about changing it in any meaningful way?
    0:48:43 If it is going to be a complex system that adapts and evolves over time, this question
    0:48:46 most certainly has to be answered in order for any of this to work.
    0:48:53 So this is a question of sort of governance of protocols, and there are enormous number
    0:48:57 of experiments that people are running, like different kind of approaches.
    0:49:04 The canonical and sort of initial approach was out of Bitcoin, which is that essentially
    0:49:07 you do just have to coordinate with all of the stakeholders, all of the people who are
    0:49:14 running the Bitcoin node software in order to change the protocol.
    0:49:18 And in this case, that would be that all miners, all of the people who are running the code
    0:49:23 to mine Bitcoin, have to modify their software, and this is a human level process.
    0:49:28 You have to call them up or you have to sort of issue an announcement saying that the protocol
    0:49:31 is being upgraded and get that to work.
    0:49:35 There are other approaches that people are exploring that are more
    0:49:38 formalized and are built into the protocol.
    0:49:41 So there’s the idea of being able to vote with tokens.
    0:49:50 So if I own a certain stake, a certain amount of the network, then I can use the tokens
    0:49:55 that constitute that stake to vote in favor or against proposals that may be made by the
    0:49:56 community.
    0:50:01 It’s just another approach to decentralize governance that tries to lower the barrier
    0:50:03 and tries to make it a little bit more seamless.
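[Editor's note: a toy sketch of the token-weighted voting Ali describes. The addresses and balances are made up for illustration; real on-chain governance systems add quorums, vote locking, delegation, and defenses against bribery, none of which are modeled here.]

```python
# Hypothetical token holdings per address.
balances = {"0xA1": 400, "0xB2": 250, "0xC3": 100}

votes = {}  # address -> "yes" or "no"

def cast_vote(addr, choice):
    assert addr in balances, "only token holders may vote"
    votes[addr] = choice  # re-voting simply overwrites the earlier ballot

def tally(threshold=0.5):
    # Stake-weighted, not one-person-one-vote: each token is one vote.
    yes = sum(balances[a] for a, v in votes.items() if v == "yes")
    total = sum(balances.values())
    return yes / total > threshold

cast_vote("0xA1", "yes")
cast_vote("0xB2", "no")
cast_vote("0xC3", "yes")
print(tally())  # 500 of 750 tokens voted yes -> prints "True"
```

Note that weighting by stake means a single large holder can dominate, which is one reason the approach inherits the governance problems discussed next.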
    0:50:07 There’s an enormous set of challenges associated with that because there are possible attacks
    0:50:10 where you can bribe people.
    0:50:12 There is the issue of voter participation.
    0:50:17 And all of the issues that you see in governance systems outside of the world of crypto, just
    0:50:18 in offline government.
    0:50:19 Offline?
    0:50:20 Who’s going to the election?
    0:50:21 How do they vote?
    0:50:22 Exactly.
    0:50:24 How do we prevent dead people from voting?
    0:50:25 Exactly.
    0:50:30 These problems have become replicated in crypto as well and they are therefore like fundamentally
    0:50:33 difficult problems that have been unsolved for millennia.
    0:50:39 So it’s not as if crypto will solve any of that, it’ll just have to figure out the right
    0:50:44 mechanisms and the right structures to be good enough and to enable systems that are
    0:50:51 decentralized to adapt and to change and evolve while striking a balance between sort of
    0:50:53 evolvability and decentralization.
    0:50:59 And actually, we did two podcasts specifically on this question, the question of governance
    0:51:03 and crypto, that will go much, much deeper and talk about all of the challenges.
    0:51:07 So if you’re interested in that topic, I recommend checking those out.
    0:51:08 Perfect.
    0:51:12 We’ll throw the links into the YouTube video so you can follow them easily.
    0:51:15 Well, Ali, this has been super interesting.
    0:51:20 There’s so many problems to be solved, so many meaty computer science things to
    0:51:25 be had, like how do you prove that I’m actually storing your photos instead of just pretending
    0:51:28 to store your photos and collecting the money.
    0:51:33 And so I guess the way I think about it is like if you have ever wished that you could
    0:51:41 have been like a semiconductor engineer at Bell Labs in the 1950s or a PC enthusiast in
    0:51:47 the 1970s and you were like, “I missed the 50s and then I missed the 70s.”
    0:51:53 And then like if you wished you were at UIUC with Mark at the dawn of the internet in the
    0:51:55 90s, like, “Look, here it is.
    0:51:57 This is the new computing platform.
    0:51:59 Here is your opportunity.
    0:52:00 It’s not too late.”
    0:52:05 And these are the times to exactly insert yourself into that conversation if sort of that’s
    0:52:09 what you wish you had the opportunity to do is influence some of these protocols, these
    0:52:12 incentive systems at the ground level.
    0:52:13 Absolutely.
    0:52:14 Awesome.
    0:52:15 Fantastic.
    0:52:18 So that’s it for this episode.
    0:52:22 And if you liked what you saw, go ahead and subscribe to the list.
    0:52:25 If you have comments, go ahead and leave them down below.
    0:52:28 Maybe you could pick one thing that you were super excited about, like what problem do
    0:52:32 you wish you could solve as an engineer.
    0:52:33 And we will see you next episode.

    Do you sometimes wish you had been born in a different decade so you could have worked on the fundamental building blocks of modern computing? How fun, challenging, and fulfilling would it have been to work on semiconductors in the 1950s or Unix in the 1960s (both at Bell Labs) or personal computers at the Homebrew Computer Club in the 1970s or on the Internet browser at the University of Illinois at Urbana-Champaign (and later Mountain View, CA) in the 1990s?

    Good news: it’s not too late. There’s a new computing platform being built today by a vibrant and rapidly growing cryptocurrency community. You might have noticed some of your coworkers and friends leaving big stable tech companies to join crypto startups.

    In this episode, which originally appeared on YouTube, a16z crypto partner Ali Yahya (@ali01) talks with Frank Chen (@withfries2) about five challenging problems the community is trying to solve right now to enable a new computing platform and a new set of killer apps:

    *Scaling decentralized computing
    *Scaling decentralized storage
    *Scaling decentralized networks
    *Establishing trusted identities and reputation
    *Establishing trusted governance models

    If you’re a software engineer, product manager, UX designer, investor, or tech enthusiast who thrives on the particular challenges of building a new computing platform, this is the perfect time to join the crypto community.


    The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.

    This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/ digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/.

    Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.

  • Bonus: Origins and Growth of The Side Hustle Show

  • a16z Podcast: A Guide to Making Data-Based Decisions in Health, Parenting… and Life

    AI transcript
    0:00:03 – Hi, and welcome to the A16Z podcast.
    0:00:04 I’m Hannah.
    0:00:06 Good data, bad data, there’s maybe no other area
    0:00:09 where understanding what the evidence actually tells us
    0:00:11 is harder than in health and parenting.
    0:00:14 In this episode, economics professor Emily Oster,
    0:00:16 author of “Expecting Better”
    0:00:18 and the recently released “Cribsheet,”
    0:00:21 a data-driven guide to better, more relaxed parenting,
    0:00:23 does just that, looking at the science and the data
    0:00:25 behind the studies we hear about
    0:00:27 and make decisions based on in those worlds.
    0:00:30 From whether to breastfeed your child to screen time
    0:00:32 to sleep training, we talk about what it means
    0:01:35 to make data-based decisions in these settings,
    0:00:37 in diet and in health and in life,
    0:00:39 like whether chia seeds are actually good for you
    0:00:42 and how we can tell what’s real and what’s not.
    0:00:44 We also talk about how guidelines and advice like this
    0:02:47 get formalized and accepted for better or for worse
    0:00:49 and how they can or can’t be changed.
    0:00:52 And finally, how the course of science itself
    0:00:55 can be changed by how these studies are done.
    0:00:57 – You describe yourself as teasing out causality
    0:00:58 in health economics.
    0:01:01 Can you give us a little primer on what exactly that means
    0:01:02 and how you start going about doing that?
    0:01:04 – So there are a lot of settings in health
    0:01:06 and in all of those settings,
    0:01:09 we have to figure out what does the evidence say.
    0:01:11 And I think about some of them in this context of parenting,
    0:01:14 but you can think about even questions like,
    0:01:15 is it a good idea to eat eggs
    0:01:17 or is it a good idea to take vitamins,
    0:01:19 other kinds of health decisions.
    0:01:20 And you can sort of think about there being
    0:01:23 kind of two types of data you could bring to that.
    0:01:25 One would be randomized data.
    0:01:26 So you could run a randomized trial
    0:01:29 in which half of the people got eggs
    0:01:30 and half of the people didn’t.
    0:01:31 And you followed them for 50 years
    0:01:33 and you saw which of them died.
    0:01:36 And that would be very compelling and convincing.
    0:01:38 And when we have data like that, it’s really great.
    0:01:40 – I mean, I kind of think of that as being the default.
    0:01:42 No, is that not at all the standard?
    0:01:45 – That is the gold standard, but it is not the default.
    0:01:47 So many of the kinds of recommendations
    0:01:48 that I look at in parenting,
    0:01:50 but that you look at in general in health
    0:01:52 are based on observational data,
    0:01:54 which is the other kind where we compare people
    0:01:56 who do one thing to people who do another thing
    0:01:58 and we look at their outcomes.
    0:02:00 And one of the ways in which the people differ
    0:02:02 is on the thing that you’re studying,
    0:02:06 but of course there are other ways that they may differ also.
    0:02:06 – A million other ways.
    0:02:08 – A million other ways, yes.
    0:02:10 And data like that is really subject
    0:02:12 to these kind of biases that the kind of people
    0:02:13 who make one choice are different
    0:02:16 from the kind of people who make another choice.
    0:02:17 One of the things that’s very frustrating
    0:02:19 in a lot of the health literature
    0:02:21 is that there isn’t always that much effort
    0:02:24 to improve the conclusions that we draw
    0:02:25 from those kind of data.
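    [Editor's note: the selection bias Oster describes here, where the kind of people who make one choice differ from those who make another, can be sketched in a small simulation. All numbers below are hypothetical, chosen only to illustrate why the naive observational comparison is biased while random assignment is not.]

    ```python
    import random

    random.seed(0)

    def simulate(randomized, n=100_000):
        """Return the egg-eater minus non-eater difference in bad-outcome rates."""
        treated_bad = treated_n = control_bad = control_n = 0
        for _ in range(n):
            healthy = random.random() < 0.5           # hidden confounder
            if randomized:
                eats_eggs = random.random() < 0.5     # assignment ignores health
            else:
                # self-selection: health-conscious people choose eggs more often
                eats_eggs = random.random() < (0.8 if healthy else 0.2)
            # eggs have NO true effect; only the confounder drives outcomes
            bad = random.random() < (0.1 if healthy else 0.3)
            if eats_eggs:
                treated_bad += bad
                treated_n += 1
            else:
                control_bad += bad
                control_n += 1
        return treated_bad / treated_n - control_bad / control_n

    naive = simulate(randomized=False)  # biased: eggs look protective
    rct = simulate(randomized=True)     # unbiased: close to the true zero effect
    ```

    Even though eggs do nothing in this toy world, the observational comparison shows egg eaters faring much better, purely because healthier people selected into eating them; randomization breaks that link.
    
    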
    0:02:27 – And we’re using that kind of approach
    0:02:31 because of the inability to have long longitudinal studies,
    0:02:33 or it does tend to be a shortcut.
    0:02:35 – So I think it is both things.
    0:02:38 So it is much easier, faster to write papers,
    0:02:40 to produce research about that,
    0:02:42 and it can be really useful for developing hypotheses.
    0:02:43 – So it’s like a scratch pad almost.
    0:02:46 – In the best case scenario, it’d be like a scratch pad.
    0:02:48 Like let’s just look in the data
    0:02:50 and see what kinds of things are associated
    0:02:52 with good health or associated with good outcomes for kids.
    0:02:54 And then we could imagine a next step
    0:02:58 where you would analyze it with a more rigorous gold standard.
    0:02:59 And sometimes that happens.
    0:03:01 So there’s one really nice example of the book
    0:03:03 where this happens exactly like you would hope,
    0:03:06 which is in studying the impact of peanut exposure
    0:03:07 on peanut allergies.
    0:03:10 So the first paper on that is written by a guy,
    0:03:12 and what he did was he just compared Jewish kids
    0:03:16 in the U.K. to Jewish kids in Israel,
    0:03:18 and he saw that the kids in Israel
    0:03:19 were less likely to be allergic to peanuts,
    0:03:22 and he said that’s because they eat this peanut snack
    0:03:24 when they’re babies, called Bamba.
    0:03:28 And so then that’s like the hypothesis generation,
    0:03:30 and then he went and did the thing you would really like,
    0:03:32 would just say, okay, let’s run a randomized trial,
    0:03:34 and let’s randomly give some kids early peanuts,
    0:03:35 and some kids not.
    0:03:37 And indeed, like he found that he was right.
    0:03:39 So that’s like a great example
    0:03:42 of like how you would hope that literature would evolve.
    0:03:44 But in many of the kinds of health settings
    0:03:47 we’re interested in, that you can’t do that,
    0:03:49 or it is much harder to do that,
    0:03:52 because the outcomes would take a long time to realize,
    0:03:54 or it’s expensive, or it’s hard to manipulate
    0:03:55 what people are doing.
    0:03:57 And then we often end up relying
    0:04:00 on these more biased sources of data
    0:04:02 to draw our conclusions, not just as a scratch pad.
    0:04:06 And I think that’s where we encounter problems.
    0:04:08 – That’s where it gets murky,
    0:04:10 and we never know whether we should eat eggs or not.
    0:04:13 Yeah, and that’s exactly the area that you tend to focus on.
    0:04:14 – Yeah, exactly.
    0:04:16 I try to first see are there good pieces of data
    0:04:19 that we can use, and then if we’re stuck with the data
    0:04:20 that isn’t good, trying to figure out
    0:04:25 which of the murky studies are better than others.
    0:04:27 And what would you mean by better?
    0:04:30 Well, it’s roughly like how good is this study
    0:04:34 at controlling or adjusting for the differences across people?
    0:04:38 – So you talk about kind of breaking it down
    0:04:42 into both into the relationship between data and preference.
    0:04:44 How do you factor in that in the healthcare system
    0:04:46 where it’s so diverse, where preference
    0:04:48 has such an incredible effect
    0:04:51 and puts you into so many different possibilities?
    0:04:53 – I think this is why in these spaces,
    0:04:55 decision-making should be so personal.
    0:04:59 We often run up in health and also in parenting
    0:05:01 and all of these spaces into a place
    0:05:04 where we’re telling people like there’s a right thing
    0:05:06 there’s a right thing to do.
    0:05:10 And I think that that can be problematic
    0:05:12 because it doesn’t recognize this difference
    0:05:14 in preferences across people.
    0:05:16 – You have to basically accept the variety
    0:05:17 in the system and then give a space
    0:05:18 for preference in the decision-making.
    0:05:20 – Yeah, but I think it’s exactly these preferences
    0:05:22 that of course make it hard to learn
    0:05:24 about these relationships in the data.
    0:05:26 ‘Cause once you recognize that a lot of the reason
    0:05:28 that some people choose to eat eggs
    0:05:29 and some people choose to eat cocoa crispies
    0:05:31 is that some people really like cocoa crispies
    0:05:32 and some people really like eggs.
    0:05:34 How can you ever learn about the impact of eggs
    0:05:37 because we know there must be differences across people.
    0:05:39 And I think that that becomes even more extreme
    0:05:41 when we think about really important decisions
    0:05:42 that people are making,
    0:05:44 like the kinds of choices they make in parenting
    0:05:45 or also in their diets.
    0:05:48 – So can you walk us through one example like that
    0:05:51 of where it was a really kind of murky gray area
    0:05:53 and how you pull out the causality?
    0:05:54 – The best example of this in the data
    0:05:57 in the parenting space is probably in breastfeeding.
    0:05:59 Let’s say you wanna know the impact of breastfeeding
    0:06:00 on obesity in kids.
    0:06:02 That’s a thing which you hear a lot.
    0:06:06 Breastfeeding is a way to make your kid skinny and so on.
    0:06:09 And so the basic way you might analyze that
    0:06:12 is to compare kids who are breastfed to kids
    0:06:13 who are not and look at their obesity
    0:06:14 when they’re say seven or eight.
    0:06:16 And indeed, if you do that,
    0:06:17 you will find that the kids who are breastfed
    0:06:20 are less likely to be obese than the kids who are not.
    0:06:24 But you will also find that there’s all kinds of relationships
    0:06:27 between obesity and mother’s income
    0:06:28 and mother’s education
    0:06:30 and other things about the family.
    0:06:32 And those things correlate with breastfeeding
    0:06:34 and they also correlate with obesity.
    0:06:36 – So you can’t really pull apart this web.
    0:06:38 – So it’s hard to pull apart the web.
    0:06:39 So I would say this is an example
    0:06:42 where the data is suggestive.
    0:06:43 It would certainly be consistent
    0:06:45 with an effect of breastfeeding on obesity,
    0:06:48 but I think it doesn’t prove an effect.
    0:06:50 And then you can sort of take the next step
    0:06:52 and say, okay, well, do we have any data that’s better?
    0:06:53 And in that example,
    0:06:55 we do have one kind of randomized data.
    0:06:57 But again, we run up against the limits
    0:06:59 of all kinds of evidence.
    0:07:02 So the randomized data on this question
    0:07:05 is from a randomized trial that was run in Belarus
    0:07:07 in the 1990s.
    0:07:09 They randomly encouraged some moms to breastfeed
    0:07:09 and some moms not.
    0:07:11 And so there’s a lot of good things
    0:07:12 that we can learn from that.
    0:07:14 – But such a specific place in time.
    0:07:15 – Exactly, it’s so specific.
    0:07:16 And you said like, well, you know,
    0:07:21 how do I take that result to the Bay Area in 2019?
    0:07:23 It’s a challenge.
    0:07:24 Okay, well, is there anything else within this space
    0:07:28 of non-randomized data that’s better?
    0:07:29 And in that case, there is,
    0:07:32 there are some studies that like compare siblings,
    0:07:35 where you look at two kids
    0:07:36 born to the same mom.
    0:07:39 One of whom was breastfed and one of whom was not.
    0:07:40 And then look at their obesity rates.
    0:07:41 And when you do that,
    0:07:43 you find there’s basically no impact.
    0:07:46 So then you’re kind of holding constant like who’s the mom.
    0:07:49 So if your worry was that there are differences
    0:07:51 across parents in their choices to breastfeed,
    0:07:53 well now you’re looking at the same parent.
    0:07:54 – Right, you’re normalizing.
    0:07:55 – You’re normalizing.
    0:07:57 And so you may think, oh, that’s great, perfect.
    0:07:57 I’m totally done.
    0:07:59 But of course you’re not,
    0:08:02 this isn’t perfect because why did the mom choose
    0:08:03 to breastfeed one kid and not the other?
    0:08:04 – People are not choosing at random.
    0:08:05 – You had a C-section one time,
    0:08:07 you didn’t another time.
    0:08:08 – If that were the reason that would be great, right?
    0:08:11 If the reason were just like kind of worked one time,
    0:08:12 didn’t work the other time.
    0:08:14 If there was something that was effectively
    0:08:16 a little bit random,
    0:08:19 then that would be exactly the kind of variation
    0:08:21 you’d wanna use.
    0:08:23 But the thing you worry about is like one kid
    0:08:25 is not doing well, is unhealthy.
    0:08:26 So the mom chooses not to breastfeed
    0:08:30 or chooses to breastfeed to try to make them healthier.
    0:08:32 Those are the kind of things where there’s some other reason
    0:08:34 that they’re choosing differences in breastfeeding
    0:08:36 which has its own effect on the kid’s outcomes.
    0:08:41 So you kind of like some of what I try to do in the book
    0:08:43 is sort of like put all of these pieces together
    0:08:46 and kind of like look at them
    0:08:50 and think about them all as a sort of totality of evidence
    0:08:52 and just think like how compelling is this altogether?
    0:08:55 – It sounds almost like sifting, like using a sifter.
    0:08:58 You take all this very murky data,
    0:09:00 very variable from all sorts of different contexts
    0:09:02 and like put it through the sifter of like
    0:09:03 this kind of data, this kind of data
    0:09:06 and then match it all up and say, okay, what do we have left?
    0:09:08 And then therefore, and then hand that over and say,
    0:09:11 and now you make the decision based on this.
    0:09:12 – Based on this.
    0:09:14 – Right, here’s kind of what we can be more or less
    0:09:15 or less sure about.
    0:09:17 – You talk a little bit about the idea
    0:09:20 of constrained optimization as being very important.
    0:09:23 Can you explain what that means and how that plays out?
    0:09:25 – In economics, we think about people
    0:09:27 optimizing their utility function.
    0:09:28 The idea is that you have a bunch of things
    0:09:30 that make you happy, that’s your utility.
    0:09:33 They produce your utility and you want to make the choices
    0:09:35 that are going to optimize your utility.
    0:09:39 They’re going to give you the most amount of happiness points,
    0:10:40 utils, utils.
    0:09:44 It’s really, it’s a very warm and fuzzy.
    0:09:46 – Yeah, I feel like I’m gonna go home and use that.
    0:09:47 – Absolutely.
    0:10:49 – Like you gave me some utils today.
    0:09:53 – But we also recognize that people have constraints.
    0:09:54 In the absence of constraints,
    0:09:58 like having money to buy things or time to do them,
    0:10:00 people would just have an infinite amount of stuff.
    0:10:02 That’s the thing that would make them the most happy.
    0:10:04 And so, but when you’re actually making choices,
    0:10:07 you’re constrained by either money or time.
    0:10:09 And in the book, I talk a lot about this
    0:10:12 in the context of time, that you’re as a parent,
    0:10:15 you’re making choices, and you have some preferences
    0:10:16 and things you would like to do,
    0:10:19 but you are also facing some constraints.
    0:10:22 – But is there, is information flow kind of what,
    0:10:25 and the data itself a constraint in that regard?
    0:10:27 Is that a, because it’s so piecemeal,
    0:10:29 the information you get.
    0:10:30 That feels almost totally random.
    0:10:32 Like some media story picks up on something,
    0:10:34 you tend, you know, some tidbit, you hear some,
    0:10:36 unless you’re like systemically
    0:10:38 studying a graduate seminar on parenting,
    0:10:41 which none of us do, you know, then it is random.
    0:10:44 – Yeah, and I think we wouldn’t necessarily think of that
    0:10:46 as in constraints, because of course in our models,
    0:10:49 people are fully informed about everything all the time.
    0:10:52 That’s one of the great things about the models.
    0:10:52 – But in real life?
    0:10:55 – But in real life, yeah, I think people face constraints
    0:10:58 associated with just not having all the information.
    0:11:01 And, you know, also the fact that this,
    0:11:04 these kind of information, like whipsaws over time,
    0:11:07 that you know, you get one piece and then you kind of,
    0:11:10 the next day there’s a different piece of information
    0:11:12 and we have a tendency to kind of
    0:11:15 glom onto whatever is the most recent thing
    0:11:16 that we have seen about this,
    0:11:19 as opposed to what is the whole literature
    0:11:21 over this whole period of time say.
    0:11:24 – Right, you say, you have a great quote where you say,
    0:11:26 in confronting the questions here,
    0:11:28 we also have to confront the limits of the data
    0:11:29 and the limits of all data.
    0:11:30 There’s no perfect studies,
    0:11:33 so there will always be some uncertainty about conclusions.
    0:11:35 The only data we have will be problematic.
    0:11:37 There will be a single not very good study
    0:11:39 and all we can say is that this study
    0:11:41 doesn’t support a relationship.
    0:11:44 So it feels kind of hopeless.
    0:11:46 I loved when you talked about the first three days
    0:11:47 of when you brought Penelope home
    0:11:50 and it really brought that back for me
    0:11:52 as I was just this dark room
    0:11:54 that you’re kind of alone making these decisions.
    0:11:57 How do you even begin to see this data,
    0:12:00 you know, as a decision making practice?
    0:12:02 Like how does that translate?
    0:12:04 – There are pieces where it’s easier,
    0:12:08 where the data is better and it is clearer
    0:12:10 about what you need to do or what the choices are.
    0:12:12 You will be making many choices
    0:12:17 without the benefit of evidence or data or very good data.
    0:12:19 I think part of what makes some of this parenting so hard
    0:12:21 is that for those of us who like, you know,
    0:12:24 evidence and facts and it’s hard to accept,
    0:12:26 I’m just going to have to make this decision
    0:12:29 basically based on what I think is a good idea.
    0:12:30 – Based on my gut.
    0:12:31 – Based on my gut.
    0:12:36 – And, you know, maybe based on my mom and, you know.
    0:12:37 – Which is a sample size of one.
    0:12:39 – A sample size of one.
    0:12:41 And, you know, maybe if you have like a mother-in-law
    0:12:43 and father-in-law, it’s like a sample size of two,
    0:12:46 but that’s kind of, that’s kind of it.
    0:12:48 And I think that that’s really scary,
    0:12:50 especially when the choices seem so important.
    0:12:52 – Yeah, I mean, but it feels like, you know,
    0:12:54 that’s kind of at heart what you’re trying to do, right?
    0:12:56 Is like to translate and to give tools
    0:12:58 in this decision making place.
    0:13:02 So how would you begin to systematize that?
    0:13:06 I mean, is there a way to bridge that gap better
    0:13:07 in the system?
    0:13:09 – I think that it would be helpful
    0:13:12 if more information was shared.
    0:13:14 So I think a lot of these things,
    0:13:17 there is a lot of information that is contained
    0:13:21 in people’s experiences that we are not using
    0:13:23 in our evidence production.
    0:13:27 So in the book I talk about like the sleep schedule, right?
    0:13:30 So you’re sort of told as a parent, like, oh, you know,
    0:13:32 this is kind of roughly like around six weeks,
    0:13:35 your kid will start sleeping like longer at night,
    0:13:38 but there’s no, the information that’s sort of
    0:13:42 typically conveyed to people is not a range.
    0:13:44 It’s just like around six weeks-ish,
    0:13:46 you know, that’ll start to happen.
    0:13:48 But the truth is like, yeah, that’s kind of right,
    0:13:51 but it’s, if you look at data on when that actually happens,
    0:13:54 it’s pretty, it’s a pretty wide range.
    0:13:57 And I think part of what is so stressful
    0:14:00 about this early, these like early parts of parenting
    0:14:03 are that it’s very hard to understand
    0:14:06 whether what you’re experiencing is like normal.
    0:14:08 And I think if you could understand, like, yeah,
    0:14:11 most kids don’t do this thing at this time
    0:14:13 or most parents have this experience or-
    0:14:15 – The way the graph plots kind of.
    0:14:16 – Yeah. – A little more broadly.
    0:14:17 – Exactly.
    0:14:19 What is, I think that would be, that would be super helpful.
    0:14:21 And that’s a place where I can imagine,
    0:14:23 you know, data collection helping, right?
    0:14:27 You know, we have a much more of an ability at this point
    0:14:30 to like get information about what is happening
    0:14:33 with our kids, what’s happening with, you know, with our health.
    0:14:35 There is a sense in which that could be helpful
    0:14:39 in just setting some norms for the normal,
    0:14:42 the standard variation across people.
    0:14:45 – So looking at the variation and providing that
    0:14:46 as like a piece of the information.
    0:14:47 – As a piece of the information.
    0:14:48 – Here’s also the variation on that.
    0:14:49 – Yeah. – Yeah.
    0:14:51 – Yeah, and I think that is kind of part of like
    0:14:53 generating the uncertainty and sort of showing people
    0:14:55 like what are the limits of the data, right?
    0:14:58 That how sure are you that this should happen at this time?
    0:15:00 – Not just how sure, but like what are some of the
    0:15:02 like other ends of the curve?
    0:15:04 I mean, that’s just information you just don’t get.
    0:15:05 – Yeah, you just don’t get, right.
    0:15:07 – So let’s zoom out a little bit
    0:15:09 as somebody who lives in the world deeply of data
    0:15:11 in the health system.
    0:15:14 We’re in a time of enormous shift, right, for data.
    0:15:17 Does the improvement, does our kind of the sea of data
    0:15:20 and like better data, cleaner data, more granular data,
    0:15:23 all that help this at all, this question?
    0:15:28 – Yeah, and I think we are collecting so much data
    0:15:32 on people, both sort of individual people
    0:15:34 are collecting a lot of data about themselves.
    0:15:37 Health systems are collecting a lot of data about people.
    0:15:40 This data is like underutilized, I think.
    0:15:41 We’re amassing pools of it,
    0:15:43 but not in ways that are especially helpful.
    0:15:45 So, you know, when I go to conferences
    0:15:47 and people who work on healthcare,
    0:15:48 like there’s a tremendous amount of data
    0:15:50 that’s being used on health claims, right?
    0:15:51 So if you sort of think about like,
    0:15:53 what are some kinds of data that we have?
    0:15:55 We have like health claims data, like payments,
    0:15:56 everything that we’re,
    0:15:58 where there’s an individual payment for it,
    0:16:00 we’ll like see, we’ll see it.
    0:16:02 There’s almost no work with medical records.
    0:16:05 Even though every hospital, everybody’s using Epic,
    0:16:08 you would think that that would make it straightforward
    0:16:11 to have that data in a usable form, but it’s not.
    0:16:12 And, but you know, at the same time,
    0:16:16 the potential for sort of going beyond like,
    0:16:18 here is all the tests that you ordered
    0:16:21 into actually like, what happened with those tests?
    0:16:23 And then what happened to this person later?
    0:16:27 Like that data is not being mined in the way
    0:16:30 that we could to try to look at some, you know,
    0:16:33 at some of the kinds of outcomes that are a result.
    0:16:36 – The causality that you would pull out afterwards.
    0:16:37 – Yeah, absolutely.
    0:16:39 You know, how can we improve our causality?
    0:16:40 More data is helpful.
    0:16:42 More information about people is helpful.
    0:16:43 Being able to look at, you know,
    0:16:45 the timing relationship between some treatment
    0:16:47 and some outcome, those are all the kinds of things
    0:16:50 that, you know, having better data would help us,
    0:16:51 would help us do.
    0:16:53 – Are there other areas where you start,
    0:16:56 you are starting to see the data coalesce in a way
    0:16:59 where you’re able to pull meaningful insights from it?
    0:17:01 – So I think, yes, you know, when we have better data,
    0:17:06 we can use better tools, even if we don’t have randomization.
    0:17:09 A classic example in health is looking at the impacts
    0:17:11 of like a really advanced neonatal care.
    0:17:14 Like how cost effective is it to have like,
    0:17:16 you know, kids in sort of getting like,
    0:17:17 really extensive NICU care?
    0:17:21 Like how effective is that in terms of improving survival
    0:17:22 and how much does it cost?
    0:17:23 – No, such a basic question.
    0:17:27 – Such a basic question, and super hard to imagine analyzing
    0:17:29 because of course, you know, babies that are very small
    0:17:32 and are sick cost more but also have worse outcomes.
    0:17:34 And so if you sort of looked at that,
    0:17:36 you would be like, well, actually like spending more,
    0:17:38 we’re not getting anything because those babies
    0:17:41 are more likely to die than babies we’re spending less on.
    0:17:46 We define very low-birthweight babies as less than 1500 grams,
    0:17:48 which means that the treatment that you get
    0:17:51 if you’re a baby at 1503 grams
    0:17:53 is very different than the treatment that you get
    0:17:56 as a baby at 1497 grams, which is completely arbitrary.
    0:17:59 I mean, the choice of 1500 grams has nothing to do with science.
    0:18:00 – It’s like this line in the sand.
    0:18:02 – That’s not a good way to set policy.
    0:18:04 However, having set the policy like that,
    0:18:07 you can then say, okay, well now we have some babies
    0:18:08 that are almost exactly the same,
    0:18:11 but the babies that are a little bit lighter
    0:18:12 that are like 1497 grams
    0:18:14 get all kinds of additional interventions
    0:18:17 relative to the babies that are 1503 grams.
    0:18:18 And when people have done that,
    0:18:21 they see actually the babies at 1497 grams do better.
    0:18:24 – So the line actually is beneficial in that way
    0:18:26 because you’re defining these two groups very closely.
    0:18:28 – Oh, interesting.
    0:18:29 – Setting this line in this arbitrary way
    0:18:31 lets you get at some causality.
    0:18:32 – Even though not good for the babies.
    0:18:34 – Sort of having done it good for research.
    0:18:35 – Good for information.
    0:18:36 Interesting.
    0:18:38 What are some of the other tools?
    0:18:39 Are there others in that list?
    0:18:44 – So that’s an example called regression discontinuity,
    0:18:47 that there’s some discontinuous change in policy
    0:18:49 on either side of a cutoff.
    0:18:52 And that has become a sort of part of a big toolkit
    0:18:55 of things people are using more of.
    0:18:57 The other is to look at sort of sharp changes
    0:19:01 in policies at a time, at like a moment in time.
    0:19:03 – Oh, so the same thing at the same time.
    0:19:05 – Then there’s that and then there’s looking across
    0:19:07 when different policies change differently
    0:19:08 for different groups.
    0:19:13 So, all of these things have become easier with more data
    0:19:15 and become more possible with more data.
    0:19:16 And I think that that has improved our inference
    0:19:19 in some of these settings.
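    [Editor's note: the regression discontinuity design discussed above can be sketched in the same spirit. The simulation below is hypothetical; none of its numbers come from the actual birthweight studies. Outcomes improve smoothly with weight, but babies just under the 1500-gram line also receive extra care.]

    ```python
    import random

    random.seed(1)

    CUTOFF = 1500  # grams; the arbitrary "very low birthweight" line

    def bad_outcome(weight_g):
        """Hypothetical bad-outcome draw: risk falls smoothly with weight,
        and the extra interventions below the cutoff reduce it further."""
        base = 0.5 - 0.0002 * (weight_g - 1000)  # smoothly better with weight
        extra_care = weight_g < CUTOFF           # policy: more care below the line
        p = base - (0.08 if extra_care else 0.0)
        return random.random() < p

    # Compare babies within ten grams of either side of the cutoff.
    just_below = [bad_outcome(random.uniform(1490, 1500)) for _ in range(50_000)]
    just_above = [bad_outcome(random.uniform(1500, 1510)) for _ in range(50_000)]

    rd_estimate = (sum(just_below) / len(just_below)
                   - sum(just_above) / len(just_above))
    ```

    Because babies within a few grams of the line are essentially comparable, the jump in outcomes right at the cutoff isolates the effect of the extra care, which is the variation the studies Oster mentions exploit.
    
    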
    0:19:21 – I love that you talked a little bit about the experience
    0:19:24 of doing your own data collection kind of in the wild
    0:19:27 with this spreadsheet after Penelope was born,
    0:19:27 which made me laugh so much
    0:19:30 ’cause it was so much like my spreadsheet.
    0:19:33 It was just so sad to think of like all these moms alone
    0:19:35 in their bedrooms at night.
    0:19:35 – I know, I know.
    0:19:40 I mean, I think there’s been a lot more apps
    0:19:44 since like we had that help, yeah.
    0:19:48 But still, I love that you said
    0:19:52 it gives the illusion of control, not control.
    0:19:55 And in that particular, in these kinds of like data vacuums,
    0:19:58 like if we’re not good at statistical analysis
    0:20:01 or like pulling out causality from these murky areas,
    0:20:03 like if we’re not Emily Oster basically,
    0:20:05 how do you like, or even if you are,
    0:20:08 how do you kind of stay on that line
    0:20:10 of like the illusion of control
    0:20:11 versus like actual knowledge
    0:20:14 that like impacts real decision-making?
    0:20:15 – No, I think it’s super hard
    0:20:17 because the thing is the illusion of control
    0:20:19 is a very powerful illusion.
    0:20:20 – Very, yeah.
    0:20:24 – And both empowering and dangerous in health context.
    0:20:25 – Exactly.
    0:20:26 Like we would, you know, you would sort of,
    0:20:29 we like people to feel like they’re in control.
    0:20:30 Some of the message of this book,
    0:20:31 I think people have taken, not quite right,
    0:20:33 but to say like, well, it doesn’t really matter
    0:20:36 what choices you make, like all choices are good choices.
    0:20:38 I think there’s, that’s not quite the right,
    0:20:39 it’s not quite the message.
    0:20:41 – That’s, I’m surprised that’s the message
    0:20:42 that people take from this.
    0:20:43 – Occasionally.
    0:20:45 – There are a lot of different good choices
    0:20:47 that you could make about parenting.
    0:20:50 And so I think that there is a piece that like,
    0:20:54 we maybe don’t need to be so like obsessive
    0:20:55 about all of these.
    0:20:55 – About one of those.
    0:20:57 – About one of those, any one of those,
    0:20:58 of those choices.
    0:20:59 – What’s your point about range?
    0:21:02 It’s like, well, let’s educate a little bit more
    0:21:03 about like the spectrum of possibilities.
    0:21:05 – Spectrum of good, of good choices.
    0:21:06 – Yeah.
    0:21:09 Another area I feel like where every other day
    0:21:11 there’s a new study that says something different.
    0:21:14 And it feels like there’s a plethora of studies
    0:21:15 is screen time.
    0:21:17 I’m just, I’m gonna put that out there right now.
    0:21:18 I’m sorry.
    0:21:21 Everybody, we’re gonna touch that third rail.
    0:21:22 – Three times.
    0:21:24 – So can you walk us through,
    0:21:28 like can you help guide us through some of that maze?
    0:21:29 – So when I looked into screen time,
    0:21:31 I had always thought about like screen time
    0:21:32 is like bad.
    0:21:35 Like it’s like, the question is, is it bad or not?
    0:21:37 But actually there’s like a whole other side of this,
    0:21:38 which is some people like screen time
    0:21:40 is the way to make your kid smart.
    0:21:42 Like you can, like your baby can learn from that.
    0:21:42 – Okay.
    0:21:44 So point number one is what does screen time
    0:21:45 actually mean?
    0:21:45 – Right.
    0:21:46 – Which is a bunch of different stuff.
    0:21:47 – Yeah.
    0:21:48 And I think that’s part of the,
    0:21:49 like that’s part of the problem with this is like,
    0:21:51 when you say screen time, like what do you mean?
    0:21:52 – Yeah.
    0:21:54 – Do you mean like, you know, educational apps?
    0:21:55 – Yeah.
    0:21:56 – Do you mean…
    0:21:57 – Do you mean Sesame Street?
    0:21:57 – Sesame Street?
    0:21:58 – While you like jump in the shower?
    0:21:59 – Yeah.
    0:22:01 – Or yeah, and at that point, like, while you jump in the shower,
    0:22:03 like what is the other thing you’re going to be doing
    0:22:04 with your time?
    0:22:07 I think this is where all of these recommendations
    0:22:11 seem to assume that the alternative use of your,
    0:22:13 like if your kid wasn’t watching Sesame Street,
    0:22:15 you would be like on the floor,
    0:22:16 like playing puzzles with them
    0:22:18 and like super engaged with them.
    0:22:19 Which like, maybe is true.
    0:22:21 – Taking them to the zoo and like having them touch
    0:22:22 different textures of animal skins
    0:22:25 or whatever like sensory development, yeah.
    0:22:26 – Yeah, which like is great stuff
    0:22:28 that you should definitely do with your kid.
    0:22:30 But some of the time when, you know,
    0:22:32 when our kids are watching TV,
    0:22:34 it’s cause, like,
    0:22:35 that maybe isn’t the thing
    0:22:36 that you would otherwise be doing.
    0:22:37 – Yeah.
    0:22:39 – You could be like pureeing healthy vegetables
    0:22:40 to like feed them well.
    0:22:42 – Yeah, exactly.
    0:22:43 I’m sure that’s what we’re all doing.
    0:22:47 – Or maybe watching a little reality TV
    0:22:49 for five minutes while you fold laundry.
    0:22:51 – You watch a little bit of Call the Midwife,
    0:22:53 you know, get a little bit of, yeah.
    0:22:54 – The problem with screen time is that the evidence
    0:22:56 is very, is very poor.
    0:22:58 – Can you just break up like why the evidence
    0:22:59 is so poor?
    0:23:00 Because this does seem like an area
    0:23:02 where there should have been time
    0:23:04 for that kind of gold standard randomized study
    0:23:05 to develop.
    0:23:07 No, what is the evidence problem?
    0:23:09 – So the, I think the evidence problem is twofold.
    0:23:11 One, it’s actually not a super easy thing
    0:23:14 to run a randomized trial on
    0:23:15 because these are choices
    0:23:17 that people are thinking a lot about.
    0:23:19 And, you know, think about something like an iPad.
    0:23:20 Like, do you want to be involved
    0:23:23 in a randomized trial of whether your kid…?
    0:23:24 – Oh, there’s too much intention,
    0:23:25 too much at stake.
    0:23:26 – Too much attention, exactly.
    0:23:28 – Too much like lifestyle stuff.
    0:23:30 Some people have been able to use like the introduction
    0:23:33 of TV, which was sort of had some random features
    0:23:34 to like look at the impacts of TV.
    0:23:36 And that evidence is sort of reassuring
    0:23:37 and suggested TV is okay.
    0:23:38 But of course it’s very old.
    0:23:40 It’s like from the fifties.
    0:23:42 – A whole different way of consuming everything.
    0:23:43 – Yeah.
    0:23:44 – And I think the other thing is,
    0:23:46 the other problem with the sort of current,
    0:23:48 answering the current questions people want,
    0:23:50 like what about iPads, what about apps, you know,
    0:23:52 is that they just haven’t been around long enough.
    0:23:54 So a lot of the kinds of outcomes you would want to know
    0:23:58 that even things like short run, like test scores,
    0:24:02 you know, I got the first iPad when my daughter was born.
    0:24:03 Like that was like one,
    0:24:04 and I remember getting in being like,
    0:24:06 this is never going to catch on.
    0:24:09 This is why I’m not in tech.
    0:24:10 I was like, who would use this?
    0:24:12 – I mean, while your daughter’s like swiping.
    0:24:14 – I mean, while you’re just like, okay.
    0:24:17 But you know, now she’s in second grade.
    0:24:20 Like that’s kind of the earliest that you could kind of
    0:24:22 imagine getting some kind of,
    0:24:24 what, we’d measure test scores or something like that.
    0:24:26 But even, you know, she didn’t use the iPad anywhere
    0:24:30 near as, like, facile a way as my four-year-old, right?
    0:24:32 This is evolving so quickly
    0:24:35 that any kind of even slightly longer term outcomes
    0:24:37 are really hard to imagine measuring,
    0:24:40 let alone, you know, absent a randomized trial,
    0:24:42 the, like if you weren’t able to randomize this,
    0:24:44 which I think you won’t be able to,
    0:24:48 the amount of time kids spend on these screens
    0:24:52 is really wrapped up with other features of their household.
    0:24:54 – Yeah, okay, so you have the definitions.
    0:24:59 You have the time and the speed at which things are changing.
    0:25:01 And then you have the willingness for people
    0:25:03 to actually like engage and change
    0:25:05 or do things differently.
    0:25:07 And then so all of that leads to what kind of,
    0:25:10 so what do the studies actually tend to look like
    0:25:12 in this space that we draw conclusions from?
    0:25:14 – So actually there’s almost nothing
    0:25:16 about iPads or phones.
    0:25:18 – That seems so contrary to like
    0:25:20 what the media is saying every five minutes.
    0:25:22 – Yeah, so there’s tons of studies on TV,
    0:25:24 which compare kids who watch more and less TV.
    0:25:26 And, you know, you can, but most of that, again,
    0:25:28 is sort of studies that are like based on data
    0:25:31 where before people were watching TV on these screens,
    0:25:35 maybe TV is TV, and you know, you can imagine
    0:25:36 that that would be kind of similar.
    0:25:39 But things like these apps, these just like no studies,
    0:25:41 you know, or there’ll be, there’s like,
    0:25:45 I think there’s one like abstract from a conference.
    0:25:47 This is not a paper, there’s like answering comments
    0:25:49 where it was just like we have some kids
    0:25:51 and we like compare the kids who like spend,
    0:25:52 like the babies who spend more time
    0:25:54 watching their parents’ phones.
    0:25:57 And then they like do worse, they’re like, look worse.
    0:25:59 – But it’s like, it’s pathetic, it’s sad.
    0:26:01 – It’s a terrible piece of evidence.
    0:26:04 – So is this an area in which you just go with your gut?
    0:26:07 – I mean, I try to generate a fancy version of go
    0:26:10 with your gut, which is called Bayesian updating.
    0:26:13 And so I basically try to say, look, you know,
    0:26:16 I mean, we want to step back and think about
    0:26:18 what are the places of uncertainty?
    0:26:21 Logic would tell you, you know, your kid is awake
    0:26:24 for, what is it, like 13 hours a day, 12 hours a day.
    0:26:27 If your two year old is spending seven of those 12 hours
    0:26:30 playing on the iPad, then there’s a lot of things
    0:26:31 that they are not doing.
    0:26:32 That’s probably not good.
    0:26:36 On the other hand, you know, if your kid is spending
    0:26:40 20 minutes every three days, it’s very hard to imagine
    0:26:41 how that could be bad.
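Oster’s “Bayesian updating” can be made concrete with a short sketch. This is a generic illustration, not anything computed in the episode: the prior, the likelihood ratio, and the screen-time framing are all invented numbers.

```python
# A minimal sketch of Bayesian updating, the "fancy version of go
# with your gut." All numbers here are invented for illustration.

def bayes_update(prior, likelihood_ratio):
    """Posterior probability after one piece of evidence.

    likelihood_ratio = P(evidence | claim true) / P(evidence | claim false).
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior belief: 30% chance this much screen time is harmful.
# A weak study reporting "harm" has a likelihood ratio of only 1.5.
posterior = bayes_update(0.30, 1.5)
print(round(posterior, 3))  # weak evidence moves the belief only modestly
```

The takeaway matches hers: weak evidence should shift a sensible prior only slightly, which is why a single conference abstract is thin grounds for changing a screen-time rule.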
    0:26:44 – So just thinking about it purely in terms of, like,
    0:26:46 time allotted to any one activity, basically.
    0:26:48 And then I think once you do that, then you’re sort of like,
    0:26:50 okay, but you know, there are things that we’re uncertain
    0:26:52 about, you know, what if my kid watches an hour of TV
    0:26:54 every day or spends an hour on a screen every day?
    0:26:56 Like is that too much? Where is the limit?
    0:26:59 If we sort of accept like five minutes a day is fine.
    0:27:00 Seven hours a day is too much.
    0:27:03 Like is the limit at an hour, is the limit at two hours?
    0:27:06 You know, and I think the truth is what we will find
    0:27:08 if we ever end up doing any studies like this,
    0:27:10 is that it depends a lot what other things
    0:27:13 they would be doing with their time.
    0:27:15 – Wouldn’t it also depend so much on the child?
    0:27:19 – Some children, you know, learn in a kind of way
    0:27:21 that lends itself to this technology.
    0:27:24 Some children need other kinds of learning, you know.
    0:27:25 It’s highly individual.
    0:27:27 – Yeah, I mean, I think this gets into the problem
    0:27:29 with studying older kids in general,
    0:27:30 that just like there’s so much,
    0:27:31 there’s so many differences across kids.
    0:27:33 It’s hard to even think about how you would structure
    0:27:34 a study to learn about them.
    0:27:37 Nevermind actually, like using evidence that exists.
    0:27:40 – It’s really interesting because the last time we went
    0:27:43 to take my daughter for her annual checkup,
    0:27:44 or maybe it was my son, I can’t even remember it.
    0:27:46 It’s so different from the first days
    0:27:47 of those early spreadsheet days, where now I’m like,
    0:27:49 did I even get down which one is that?
    0:27:51 – Yeah, exactly.
    0:27:54 Anyways, the doctor said very concretely,
    0:27:57 two hours, two hours max, within any day.
    0:27:58 But it was really interesting to me
    0:28:01 that it was such a specific line in the sand.
    0:28:04 And now I’m thinking about how that information
    0:28:06 would even get into that,
    0:28:08 to percolate down to that level of like the system
    0:28:10 and get kind of fossilized into the system,
    0:28:12 so that that recommendation is being passed on to parents.
    0:28:14 Like how does that happen with these studies?
    0:28:18 How do they translate to that level of advice?
    0:28:22 – Yeah, I think what happens is like organizations
    0:28:23 like the American Academy of Pediatrics,
    0:28:26 they bring people together to basically talk
    0:28:28 about the conversation we just had,
    0:28:30 which was like, okay, let’s agree,
    0:28:31 sort of like we don’t know that much about this,
    0:28:34 like five minutes seems fine, seven hours is too much.
    0:28:36 These are like smart people who see kids a lot,
    0:28:38 who presumably are using some knowledge
    0:28:41 that they have about kids to pick some number.
    0:28:43 But the answer is like, you could pick
    0:28:44 a lot of different numbers.
    0:28:47 We sort of say this and then it becomes like this rule.
    0:28:49 And people have some impression that it comes
    0:28:52 from some piece of evidence as opposed to sort of like,
    0:28:58 you know, a synthesis of expert opinion or something,
    0:29:01 which is really what it’s from.
    0:29:04 – You also work specifically on certain health recommendations.
    0:29:08 So how they change over time and how we stick to them.
    0:29:10 You wrote a paper on behavioral feedback.
    0:29:13 And then you talk about how those individual choices
    0:29:15 might in fact be changing the science itself.
    0:29:17 Can you talk about what that means
    0:29:18 and how that might be happening?
    0:29:20 – I was thinking about exactly this issue of like,
    0:29:21 okay, we just make some recommendation
    0:29:24 and sometimes those recommendations are kind of arbitrary,
    0:29:25 but then they go out in the world
    0:29:27 and people respond to them. – Take on lives of their own.
    0:29:28 – Exactly.
    0:29:30 And so like a sort of a good example,
    0:29:32 vitamin E supplements.
    0:29:35 Like in the early 90s, there were a couple of studies
    0:29:37 which suggested that like they are good for your health,
    0:29:38 that like prevent cancer.
    0:29:40 And so then there was like a recommendation,
    0:29:42 like people should take vitamin E.
    0:29:45 And then we had to ask a question like what,
    0:29:47 like who takes vitamin E after that?
    0:29:51 And one of the concerns is the kind of people
    0:29:53 who would adopt these new recommendations,
    0:29:55 like who listens to their doctor.
    0:29:59 It is people who are probably,
    0:30:01 maybe they’re more educated, maybe they’re richer,
    0:30:05 but like above all, they are interested in their health.
    0:30:07 So they are taking vitamin E, so they avoid cancer,
    0:30:09 but they’re also exercising, so they avoid cancer.
    0:30:11 And they’re eating vegetables, so they avoid cancer.
    0:30:12 We would call them selected.
    0:30:14 These people are like positively selected
    0:30:16 on other health things.
    0:30:18 And so indeed you can see in the data
    0:30:21 that the people who start taking vitamin E
    0:30:22 after this recommendation changes
    0:30:25 are kind of also exercising and not smoking
    0:30:27 and doing all kinds of other stuff.
    0:30:30 Well, why is that like interesting or problematic?
    0:30:34 Well, later we’re gonna go back to the data
    0:30:36 ’cause that’s like the way science works.
    0:30:38 But now the people who take vitamin E
    0:30:42 are even more different than they were before, right?
    0:30:43 So now these people are like.
    0:30:44 So you’ve added another layer.
    0:30:45 You’ve added another layer.
    0:30:47 So in fact, you can see that in the data.
    0:30:49 You can see that basically before these recommendations
    0:30:52 changed, there was sort of a small relationship
    0:30:56 between taking vitamin E and like subsequent mortality rates.
    0:30:58 But after the recommendations change,
    0:31:01 you see like a very large relationship
    0:31:03 between vitamin E and mortality rates.
    0:31:06 And so it looks like basically ends up looking like vitamin E
    0:31:08 like is really great for you.
    0:31:09 – Has this big impact.
    0:31:11 – But of course that’s because at least,
    0:31:13 it seems like it must be at least in part
    0:31:16 because the people who adopt vitamin E
    0:31:21 are the people who are also doing these other things.
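The selection story can be sketched with a toy simulation. This is my illustration of the mechanism, not Oster’s actual analysis, and every probability in it is invented: vitamin E does nothing here, yet once health-conscious people disproportionately adopt it, takers look healthier.

```python
# A toy simulation of selection into vitamin E adoption. Vitamin E has
# zero true effect here; only health-consciousness affects mortality.
import random

random.seed(0)

def mortality_gap(adoption_boost, n=100_000):
    """Mortality of non-takers minus mortality of takers.

    adoption_boost: extra probability that a health-conscious person
    takes vitamin E (0 before the recommendation, large after it).
    """
    takers, non_takers = [], []
    for _ in range(n):
        health_conscious = random.random() < 0.5
        p_take = 0.2 + (adoption_boost if health_conscious else 0.0)
        takes = random.random() < p_take
        # Mortality depends only on the other healthy behaviors.
        p_die = 0.05 if health_conscious else 0.15
        died = random.random() < p_die
        (takers if takes else non_takers).append(died)
    return sum(non_takers) / len(non_takers) - sum(takers) / len(takers)

before = mortality_gap(0.0)  # adoption unrelated to health habits
after = mortality_gap(0.6)   # the health-conscious rush to adopt
print(before, after)  # the apparent "benefit" of vitamin E grows
```

Before the recommendation the gap is statistical noise around zero; afterwards the takers are dominated by people who also exercise and eat well, so vitamin E appears strongly protective even though its effect is zero by construction.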
    0:31:23 – So what does that mean then?
    0:31:24 It feels like such a loss.
    0:31:26 Like how does one ever- – It’s so depressing.
    0:31:28 – Yes. (laughs)
    0:31:31 How would one ever develop like a recommendation
    0:31:33 based on what we think we know.
    0:31:34 – I know.
    0:31:36 – And untangle it from like-
    0:31:39 – So this paper is very destructive in some sense.
    0:31:42 – Other than saying like it probably doesn’t matter
    0:31:44 if you take vitamin E, so that’s like news you can use.
    0:31:46 You can take that home with you.
    0:31:50 But I mean, I think it does more or less just highlight
    0:31:54 some of the inherent and very deep limitations
    0:31:58 with our ability to learn about some of these effects,
    0:31:59 particularly when they’re small.
    0:32:01 – Is this basically part of the sort of crisis
    0:32:02 of reproducibility?
    0:32:04 – I think it’s not unrelated.
    0:32:06 So I often think about this idea of p-hacking,
    0:32:10 which refers to the idea that you keep running your studies
    0:32:12 until you get a significant result.
    0:32:16 There’s a bunch of people interested in this process
    0:32:19 of like how science evolves
    0:32:24 and the ways in which the evolution of science
    0:32:27 influences the science itself or the incentives
    0:32:31 for research influence how science works.
    0:32:35 And I think it’s particularly hard to draw conclusions
    0:32:39 in these spaces like diet or these health behaviors
    0:32:41 where the honest truth is probably a lot
    0:32:43 of these effects are very small.
    0:32:45 So if you ask the question like,
    0:32:47 what is the effect of chia seeds on your health?
    0:32:49 My dad is like really into chia seeds.
    0:32:50 – That was a thing.
    0:32:51 There was a moment.
    0:32:53 – Well, he’s still in that moment.
    0:32:53 He’s still in there.
    0:32:56 And what is the effect of those on your health?
    0:32:58 The actual effect is probably about zero.
    0:33:00 Maybe it’s not exactly zero,
    0:33:01 but it’s almost certainly about zero.
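The combination of p-hacking and near-zero true effects is easy to demonstrate with a toy simulation (mine, not from the episode): test enough “diets” whose true effect is exactly zero, and some will clear p < 0.05 by chance.

```python
# A toy demonstration of p-hacking: run many null "diet studies" and
# see how many come out statistically significant by luck alone.
import math
import random
import statistics

random.seed(1)

def p_value(sample):
    """Two-sided p-value for mean != 0, normal approximation."""
    n = len(sample)
    z = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def count_false_positives(num_tests, n=50):
    """How many null 'diet studies' clear the 0.05 threshold."""
    hits = 0
    for _ in range(num_tests):
        outcomes = [random.gauss(0, 1) for _ in range(n)]  # zero true effect
        if p_value(outcomes) < 0.05:
            hits += 1
    return hits

print(count_false_positives(20))  # usually at least one false "discovery"
```

Report only the significant one (kale! chia!) and you have a publishable-looking effect that is pure noise; at a roughly 5% false-positive rate, about one in twenty null “studies” comes out significant.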
    0:33:03 – But are there sometimes secret sleeper?
    0:33:05 Like, whoa, they’re actually might,
    0:33:08 the only way to find out is to do these things.
    0:33:09 – Yeah, yeah.
    0:33:10 And so maybe there are some secrets.
    0:33:12 – Like maybe kale really is magic.
    0:33:14 – Maybe it is, but it’s probably not.
    0:33:18 I spent a lot of time with these diet data
    0:33:21 and there’s these sort of like dietary patterns
    0:33:23 like the Mediterranean diet,
    0:33:26 which do seem to have some sort of vague support
    0:33:30 in the evidence, but I would be extremely surprised
    0:33:33 if we ever turn up like the one single thing.
    0:33:35 – One magical food.
    0:33:36 – So the point is it’s the pattern.
    0:33:38 – It’s the pattern and it’s all the other things
    0:33:39 that you’re doing, right?
    0:33:42 If you smoke three packs a day and you never exercise,
    0:33:45 but you eat some kale, that’s not gonna help you.
    0:33:46 – Yeah, yeah.
    0:33:47 – The kale’s not gonna help.
    0:33:50 – What about when you really do need to affect change?
    0:33:54 What are the ways in which these guidelines
    0:33:57 can shift over time with kind of new sources of information
    0:33:58 or data and statistics?
    0:33:59 Like what’s the positive?
    0:34:02 How does that actually play out in the right manner?
    0:34:05 – Yeah, so I think there are times in which we,
    0:34:08 the change in evidence is so big
    0:34:12 and so like compelling that we can get changes,
    0:34:15 best practices in obstetrics,
    0:34:19 like how do you deliver a breech baby, as an example,
    0:34:21 they change, like those changed over time
    0:34:25 because there was like one very big well-recognized study
    0:34:28 that everybody agreed like this is now the state of the art.
    0:34:30 – And it happens fast at that point?
    0:34:31 – And then it happens pretty fast.
    0:34:32 It doesn’t happen immediately.
    0:34:33 Like you might have thought that those kind of changes
    0:34:35 could be like immediately effected
    0:34:36 and I think that they’re not,
    0:34:39 but they do happen over time.
    0:34:44 Those examples really rely on there being like a cohort
    0:34:49 of sort of like experts who are all reading the guidelines
    0:34:51 and sort of seeing that they changed
    0:34:56 and then themselves are sort of doing this all the time.
    0:34:59 I think part of what’s hard in the broader health behavior
    0:35:01 space is that it’s people who need to make the choices,
    0:35:03 not physicians.
    0:35:06 – Yes, when it’s in the home and those dark bedrooms.
    0:35:08 – It’s like that’s much harder to get people
    0:35:10 to change their behavior in those spaces.
    0:35:12 – It’s not these pediatric guidelines.
    0:35:13 Those are not effective.
    0:35:14 – Yeah, I do not think those are effective.
    0:35:16 Or I think we don’t see any evidence in the data
    0:35:18 that those are effective at moving these,
    0:35:20 at least in these kind of spaces.
    0:35:21 – So what’s the answer?
    0:35:23 How do we positively affect change
    0:35:25 and gather these insights and have smart people
    0:35:26 making good recommendations?
    0:35:29 – So I mean, I think one answer is media attention.
    0:35:32 The kind of few times when we see very large spikes
    0:35:35 and changes, they actually seem to correspond
    0:35:36 with some media coverage.
    0:35:39 On the flip side, like media can often be very bad.
    0:35:41 Some of these big changes in these expert things
    0:35:44 kind of resulted from media coverage,
    0:35:46 which was really like sensationalist
    0:35:48 and like totally inappropriate.
    0:35:50 And, you know, it wasn’t like a very nice,
    0:35:53 like New York Times story about some study.
    0:35:56 It was like a sensationalist 20/20.
    0:35:57 – About what?
    0:36:00 – About, this is about vacuum extraction,
    0:36:02 which is a way of pulling the baby out
    0:36:04 and has gone down a lot over time.
    0:36:05 And it was like the sort of sensationalist
    0:36:08 like John Stossel 20/20 episode
    0:36:09 about how it could hurt your baby,
    0:36:11 which caused like big reductions.
    0:36:12 – Interesting.
    0:36:14 – Yeah, you know that like.
    0:36:15 – But the science was there.
    0:36:17 – The science, yeah, was there.
    0:36:19 I mean, he overstated the science,
    0:36:21 but it was, it was probably there.
    0:36:23 – So it’s almost like a random confluence
    0:36:24 of like when the science is there
    0:36:26 and the media hits it the right way,
    0:36:27 and then we see change?
    0:36:28 – Yeah.
    0:36:29 (laughing)
    0:36:30 – It’s okay. – That’s something
    0:36:31 to hope for.
    0:36:34 – Yeah, that doesn’t feel like we can plan so much for that.
    0:36:38 – You also study when we are resistant to change.
    0:36:40 You looked specifically at diabetes,
    0:36:42 people I think who had been diagnosed with diabetes
    0:36:44 and then whether or not their behavior changed,
    0:36:46 even given a certain amount of information.
    0:36:49 So what do you see there about our resistance to change
    0:36:52 even with the right kinds of information?
    0:36:54 – I mean, I think one of the big challenges
    0:36:56 in the health space at the moment
    0:36:57 is that like so much of the,
    0:37:00 so many of the health problems that we have in the US
    0:37:03 are like problems associated with behavior,
    0:37:06 just the fundamental fact that like people do not eat great
    0:37:09 and we have a lot of morbidity
    0:37:11 and expense associated with that.
    0:37:14 And I think there is often a lot of emphasis
    0:37:16 on the idea like if we just get the information out,
    0:37:19 if people just understood vegetables were good for them,
    0:37:20 they would eat their vegetables.
    0:37:21 – Doesn’t happen. – That’s not true.
    0:37:24 I think, and so this paper is about sort of looking
    0:37:26 at something where kind of a pretty extreme thing
    0:37:28 happens to people, like they are diagnosed with diabetes
    0:37:31 and we can see what happens to their diet.
    0:37:33 And the answer is, it improves a tiny amount.
    0:37:36 – Even with a real come to Jesus moment.
    0:37:38 – Exactly, and a lot of new information.
    0:37:41 – Right, and monitoring, follow-up, right?
    0:37:42 I mean, you’re diagnosed with diabetes,
    0:37:44 like you have to take medicine every day,
    0:37:46 you got to go to the doctor like get a tester,
    0:37:47 you know, test your insulin,
    0:37:48 at least for some period of time.
    0:37:51 So this isn’t like something where you can just forget
    0:37:54 that it happened and even then the changes in diet,
    0:37:56 you know, they’re there, but they’re really small.
    0:38:00 They’re like, you know, like one less soda a week or something.
    0:38:01 – Oh gosh. – Like really,
    0:38:02 like really small.
    0:38:03 – And how are you noticing these?
    0:38:06 – We’re inferring information on diagnosis
    0:38:09 from people’s purchases of testing products
    0:38:11 and then following their grocery purchases.
    0:38:13 So this is like an example of using,
    0:38:15 you know, a different kind of data.
    0:38:16 So not health data in this case.
    0:38:19 It’s actually like Nielsen data.
    0:38:21 So Nielsen data on what people buy,
    0:38:24 but then, you know, using some like machine learning techniques
    0:38:27 to try to figure out from the kinds of things people buy,
    0:38:29 when were they diagnosed with diabetes,
    0:38:31 and then looking at their diets over time.
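The inference strategy Oster describes can be caricatured in a few lines. This is a deliberately simplified sketch of the idea only: the real study uses Nielsen scanner data and machine-learning classifiers, and every product name, date, and household below is invented.

```python
# Sketch: infer a diagnosis date from the first purchase of a testing
# product, then compare a diet proxy before and after that date.
from datetime import date

# (date, product) pairs for one hypothetical household.
purchases = [
    (date(2018, 1, 5), "soda"),
    (date(2018, 1, 12), "soda"),
    (date(2018, 2, 2), "glucose test strips"),  # proxy for a diagnosis
    (date(2018, 2, 9), "soda"),
    (date(2018, 3, 1), "kale"),
]

def inferred_diagnosis_date(log):
    """Date of the first testing-product purchase, or None."""
    for when, product in sorted(log):
        if "glucose" in product:
            return when
    return None

def soda_counts(log):
    """Soda purchases before vs. on/after the inferred diagnosis."""
    dx = inferred_diagnosis_date(log)
    before = sum(1 for when, product in log if product == "soda" and when < dx)
    after = sum(1 for when, product in log if product == "soda" and when >= dx)
    return before, after

print(soda_counts(purchases))  # (2, 1): the diet changes, but only a little
```

Aggregated over many households, this before/after comparison is the shape of the finding described above: diet improves after diagnosis, but only by a small amount.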
    0:38:33 – So is the answer that there has to be
    0:38:36 some sensational story that talks about like–
    0:38:38 – I mean, I think there, I’m not even sure that would help.
    0:38:40 I think part of the problem is people really like
    0:38:42 the diets that they’re comfortable with.
    0:38:45 Like diet is like such a habit formation thing,
    0:38:48 and you know, people are willing to make
    0:38:52 important health sacrifices to maintain the diet
    0:38:53 that they like.
    0:38:55 We get into some of these questions of preferences,
    0:38:57 like, and you know, if people,
    0:38:59 if that is the choice that people wanna make,
    0:39:03 like should we be trying to intervene with policy?
    0:39:05 Like let’s say everybody had all the information,
    0:39:07 they knew that they shouldn’t drink so much soda
    0:39:08 and that they should lose weight,
    0:39:10 but they still chose not to.
    0:39:13 Like do we wanna develop policies that affect that?
    0:39:13 I’m not sure.
    0:39:15 – Yeah, maybe that’s just free will.
    0:39:16 – Yeah, maybe that’s just free will.
    0:39:18 And it comes up in the parenting stuff too.
    0:39:20 Like, you know, how much do we wanna be externally
    0:39:23 controlling the choices people make with their kids,
    0:39:25 even if we don’t think that they’re the right choices.
    0:39:28 – But I do think there’s a segment of people
    0:39:30 who want to make the change, but the gravity,
    0:39:32 you know, because of the information,
    0:39:33 but the gravity of the habit is so much
    0:39:36 that it’s hard to know where to go about it.
    0:39:39 – I guess I would say, where do you see this data going?
    0:39:42 Like if you had your fantasy for where you want
    0:39:44 the kind of data and the way that we see this data evolving
    0:39:47 and the way that you see that kind of percolating out
    0:39:50 to the public, I mean, in terms of being sort of a translator
    0:39:51 and providing people the tools,
    0:39:54 like what do you wanna see in terms of the way the system
    0:39:57 responds to or integrates this data in the future?
    0:40:00 – Yeah, I mean, I think the big message of the book
    0:40:03 is in some sense that you should use the data
    0:40:07 to make yourself confident and happy in your choices.
    0:40:10 I think so much of what is hard about parenting
    0:40:14 is that in the moment you are not often confident
    0:40:17 in your choices, and then when somebody asks you,
    0:40:20 like, why did you do that, then you feel bad, right?
    0:40:23 And I think that there’s a sense in which sort of
    0:40:24 looking at the data, but then confronting like,
    0:40:27 well, we don’t know, but you’d be like, okay, I made this choice.
    0:40:28 You know, I decided to let my kids watch an hour
    0:40:31 of TV every day, because like I thought about it
    0:40:33 and I thought there wasn’t any data,
    0:40:35 and like that’s the choice that I made,
    0:40:37 that sort of that confidence is like important
    0:40:40 for being happy, and if we could sort of like
    0:40:44 move in that direction, I think that would be good.
    0:40:46 – It reminds me a lot of what one of my good friends,
    0:40:47 Brandy said to me when I was in the trenches
    0:40:49 of like babyhood and having a lot of anxieties
    0:40:52 around all these hot button issues, breastfeeding,
    0:40:53 sleep time, like all of it.
    0:40:55 She had been through it, her kids were in college,
    0:40:57 and she was like, let me give you a piece of advice.
    0:40:59 Be wrong, but be wrong with confidence.
    0:41:00 – Yes.
    0:41:01 – Just be wrong with confidence.
    0:41:02 That’s all that matters.
    0:41:03 – Yeah. – Yeah.
    0:41:04 – No, exactly.
    0:41:05 – I love confidence, I love that.
    0:41:07 Yes, I am wrong with confidence so frequently.
    0:41:08 – Yes, and actually it turns out to be right.
    0:41:09 – Like it turns out that it’s fine.
    0:41:10 – The truth is there’s a lot of good options.
    0:41:11 – A lot of good options.
    0:41:12 – Yeah. – Thank you so much
    0:41:14 for joining us on the A16Z podcast.
    0:41:15 – Thank you for having me.

    with Emily Oster (@ProfEmilyOster) and Hanne Tidnam (@omnivorousread)

    Are chia seeds actually that good for you? Will Vitamin E keep you healthy? Will breastfeeding babies make them smarter? There’s maybe no other arena where understanding what the evidence truly tells us is harder than in health… and parenting. And yet we make decisions based on what we hear about in studies like the ones listed above every day. In this episode, Brown University economics professor Emily Oster, author of Expecting Better and the recently released book Cribsheet: A Data-driven Guide to Better, More Relaxed Parenting, from Birth to Preschool, in conversation with Hanne Tidnam, dives into what lies beneath those studies… and how to make smarter decisions based on them (or not). Oster walks us through the science and the data behind the studies we hear about — especially those hot-button parenting issues that are murkiest of all, from screen time to sleep training.

    How can we tell what’s real and what’s not? Oster shows us the research about how these guidelines and advice that we are “supposed” to follow get formalized and accepted inside and outside of healthcare settings — from obstetrics practices to pediatrics to diet and lifestyle; how they can (or can’t) be changed; and finally, how the course of science itself can be influenced by how these studies are done.

  • 334: Unfair Advantages: How to Find Yours—and 10 of Mine

    There’s a topic that no one really talks about in entrepreneurship or online business, and that’s the unfair advantages someone had starting out.

    Behind every killer case study, every income report, every “overnight” success, there’s a backstory you don’t always hear.

    The truth is, no one ever really starts from scratch. We bring our own history, perspectives, and baggage to the table.

    We also have the advantage of learning from everyone who’s gone before us. Like Newton said, we “stand on the shoulders of giants.”

    In this episode, I’ll share some unfair advantages that undoubtedly helped me, and offer up some you probably have working in your favor as well.

    Full Show Notes: Unfair Advantages: How to Find Yours—and 10 of Mine

  • 377. The $1.5 Trillion Question: How to Fix Student Loan Debt?

    As the cost of college skyrocketed, it created a debt burden that’s putting a drag on the economy. One possible solution: shifting the risk of debt away from students and onto investors looking for a cut of the graduates’ earning power.

  • a16z Podcast: Innovating in Bets

    AI transcript
    0:00:06 Hi, everyone. Welcome to the a16z podcast. I’m Sonal, and today Mark and I are doing another one
    0:00:10 of our book author episodes. We’re interviewing Annie Duke, who’s a professional poker player and
    0:00:17 World Series champ and is the author of Thinking in Bets, which is just out in paperback today.
    0:00:21 The subtitle of the book is Making Smarter Decisions When You Don’t Have All the Facts,
    0:00:25 which actually applies to startups and companies of all sizes and ages, quite frankly. I mean,
    0:00:30 basically any business or new product line operating under conditions of great uncertainty,
    0:00:34 which I’d argue is my definition of a startup and innovation. So that will be the frame for
    0:00:39 this episode. Annie is also working on her next book right now and founded HowIDecide.org,
    0:00:43 which brings together various stakeholders to create a national education movement around
    0:00:48 decision education, empowering students to also be better decision makers. So anyway,
    0:00:51 Mark and I interview her about all sorts of things in and beyond her book,
    0:00:55 going from investing to business to life. But Annie begins with a thought experiment,
    0:00:58 even though neither of us really know that much about football.
    0:01:02 So what I’d love to do is kind of throw a thought experiment at you guys so that we can
    0:01:06 have a discussion about this. So I know you guys don’t know a lot about football,
    0:01:09 but this one’s pretty easy. You’re going to be able to feel this one, which is do this thought
    0:01:16 experiment. Pete Carroll calls for Marshawn Lynch to actually run the ball.
    0:01:18 So we’re betting on someone who we know is really good.
    0:01:21 Well, they’re all really good, but we’re betting on the play that everybody’s expected.
    0:01:25 This is the default. This is the assumed rational thing to do.
    0:01:30 Right. So he has Russell Wilson hand it off to Marshawn Lynch. Marshawn Lynch goes to barrel
    0:01:35 through the line. He fails. Now they call the time out. So now they stop the clock,
    0:01:39 they get another play now, and they hand the ball off to Marshawn Lynch,
    0:01:45 what everybody expects. Marshawn Lynch again, attempts to get through that line and he fails.
    0:01:52 End of game, Patriots win. My question to you is, are the headlines the next day
    0:01:58 that worst call in Super Bowl history? Is Cris Collinsworth saying, I can’t believe the call,
    0:02:04 I can’t believe the call, or is he saying something more like, that’s why the Patriots are so good,
    0:02:09 their line is so great. That’s the Patriots line that we’ve come to see this whole season.
    0:02:15 This will seal Belichick’s place in history. It would have all been about the Patriots.
    0:02:21 So let’s sort of divide things into, we can either say the outcomes are due to skill or luck,
    0:02:27 and luck in this particular case is going to be anything that has nothing to do with Pete Carroll.
    0:02:31 And we can agree that the Patriots line doesn’t have anything to do with Pete Carroll. Belichick
    0:02:34 doesn’t have anything to do with Pete Carroll. Tom Brady doesn’t have anything to do with Pete
    0:02:38 Carroll as they’re sealing their fifth Super Bowl victory. So what we can see is there’s two
    0:02:44 different routes to failure here. One route to failure, you get resulting. And basically what
    0:02:50 resulting is, is that retrospectively, once you have the outcome of a decision, once there’s a
    0:02:55 result, it’s really, really hard to work backwards from that single outcome to try to figure out
    0:02:59 what the decision quality is. This is just very hard for us to do. They say, oh my gosh, the outcome
    0:03:05 was so bad. This is clearly, I’m going to put that right into the skill bucket. This is because of
    0:03:10 Pete Carroll’s own doing. But in the other case, they’re like, oh, you know, there’s uncertainty.
    0:03:15 What could you do? Weird, right? Yeah. Okay. So you can kind of take that and you can say, aha,
    0:03:21 now we can sort of understand some things. Like, for example, people have complained for a very
    0:03:28 long time that in the NFL, they have been very, very slow to adopt what the analytics say that
    0:03:32 you should be adopting, right? And even though now we’ve got some movement on like fourth down
    0:03:36 calls and when are you going for two point conversions and things like that, they’re still
    0:03:40 nowhere close to where they’re supposed to be. So they don’t make the plays corresponding to
    0:03:45 the statistical probabilities? No. In fact, the analytics show that if you’re on your own one
    0:03:51 yard line and it’s fourth down, you should go for it, no matter what. The reason for that is if you
    0:03:54 kick it, you’re only going to be able to kick to midfield. So the other team is basically almost
    0:03:59 guaranteed three points anyway. So you’re supposed to just try to get the, try to get the yards.
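The “go for it from your own one” argument is an expected-points comparison. Here is a minimal sketch; every probability and point value below is a made-up illustration, not real NFL analytics:

```python
# Toy expected-points sketch of the "4th down on your own 1" tradeoff.
# All probabilities and point values are hypothetical illustrations.

def punt_ev(p_opp_field_goal=0.8):
    # Punting from the 1 only reaches midfield, so the opponent
    # kicks a field goal (3 points) with high probability.
    return -3 * p_opp_field_goal

def go_ev(p_convert, ev_keep_ball=0.5, p_opp_td_on_fail=0.9):
    # Convert: keep the ball (small positive value).
    # Fail: opponent takes over on the 1 and usually scores a touchdown.
    return p_convert * ev_keep_ball + (1 - p_convert) * (-7 * p_opp_td_on_fail)

# The "right" call is whichever expected value is higher; the analytics
# argument is that the answer depends on these inputs, not on which play
# looks safer in the room afterward.
for p in (0.3, 0.5, 0.7):
    better = "go for it" if go_ev(p) > punt_ev() else "punt"
    print(f"p_convert={p:.1f}: go={go_ev(p):+.2f}, punt={punt_ev():+.2f} -> {better}")
```

With these stand-in numbers the recommendation flips as the conversion probability rises, which is the whole point: the decision quality lives in the inputs, not the outcome.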
    0:04:03 Like, when have you ever seen a team on their own one yard line on fourth down be like, yeah,
    0:04:08 let’s go for it. That does not happen. Okay. So we know that they’ve been like super slow
    0:04:12 to do what the analytics say is, is correct. And so you sit here and you go, well, why is that?
    0:04:18 And that thought experiment really tells you why, because we’re all human beings. We all
    0:04:23 understand that there are certain times when we don’t allow uncertainty to bubble up to the surface
    0:04:29 as the explanation. And there are certain times then we do. And it seems to be that we do when we
    0:04:35 have this kind of consensus around the decision, there’s other ways we get there. And so, okay,
    0:04:39 if I’m a human decision maker, I’m going to choose the path where I don’t get yelled at.
    0:04:45 Yeah, exactly. So basically we can kind of walk back and we can say, are we allowing the uncertainty
    0:04:49 to bubble to the surface? And this is going to be the first step to kind of understanding what
    0:04:55 really slows innovation down, what really slows adoption of, of what we might know is good decision
    0:04:58 making, because we have conflicting interests, right, making the best decision for the long run,
    0:05:03 or making the best decision to keep us out of a room where we’re getting judged or
    0:05:07 yelled at or possibly fired. So can I, let me propose the framework that I used to think about
    0:05:12 this and see if you agree with it. So it’d be a two-by-two grid, and it’s consensus
    0:05:17 versus non-consensus and it’s right versus wrong. And the way we think about it, at least in our
    0:05:24 business is basically: consensus right is fine. Non-consensus right is fine. In fact,
    0:05:29 generally you get called a genius. Consensus wrong is fine because you just, you know,
    0:05:32 it’s just the same mistake everybody else made. You all agree, right, it was wrong.
    0:05:36 Non-consensus wrong is really bad. It’s horrible. It’s radioactively bad.
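Mark’s two-by-two can be written out literally as a lookup from (consensus?, outcome) to the social verdict; the cell labels below paraphrase the conversation:

```python
# Mark's 2x2: the social verdict depends on whether the call was consensus
# and whether it happened to work out, not on decision quality.
verdict = {
    ("consensus", "right"): "fine",
    ("non-consensus", "right"): "genius",
    ("consensus", "wrong"): "fine (same mistake everybody else made)",
    ("non-consensus", "wrong"): "radioactively bad",
}

print(verdict[("non-consensus", "wrong")])
```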
    0:05:40 Right. And so, and then, and then as a consequence of that, and maybe this gets to the innovation
    0:05:44 stuff that you’ll be talking about, but as a consequence of that, there are only two scripts
    0:05:49 for talking about people operating in, in the non-consensus directions. One script is they’re
    0:05:54 a genius because it went right and the other is they’re a complete moron because it went wrong. Is
    0:05:58 that, does that map? That’s, that’s exactly, that’s exactly right. And I think that the problem
    0:06:04 here is what right and wrong mean in your two by two. Wrong and right really just mean: did it
    0:06:08 turn out well or not. Yeah, okay. And this is where we really get into this problem
    0:06:13 because now what people are doing is they’re trying to swat the outcomes away and they understand,
    0:06:19 just as you said, that on that consensus wrong, you will have like a cloak of invisibility over
    0:06:24 you. Like, you don’t have to deal with it. Right. So, let’s think about other things besides
    0:06:30 consensus. So, consensus is one way to do that, especially when you have like complicated cost
    0:06:34 benefit analyses going into it. I don’t think that people, when they’re getting in a car,
    0:06:41 are actually doing any kind of calculation about what the cost benefit analysis is to their own
    0:06:47 productivity versus the danger of something very bad happening to them. Like, well, as a society,
    0:06:50 someone’s done this calculation, we’ve all kind of done this together. And so therefore,
    0:06:54 like getting in a car is totally fine. I’m going to do that. And nobody second guesses anybody.
    0:06:57 Somebody dies in a car crash. You don’t say, wow, what a moron for getting in a car.
    0:07:03 No. Another way that we can get there is through transparency. So, if the decision is pretty
    0:07:09 transparent, people allow for the uncertainty. Another way to get there is status quo. So, like a good status quo example that I
    0:07:14 like to give because everybody can understand it is: you have to catch a plane and you’re with
    0:07:21 your significant other in the car and you go your usual route.
    0:07:26 Like, literally, this is the route that you’ve always gone, and there’s some sort of accident,
    0:07:30 there’s bad traffic, you miss the plane, and you’re mostly probably comforting each other
    0:07:35 in the car. It’s like, what could we do? But then you get in the car and you announce to
    0:07:41 your significant other, I’ve got a great shortcut. So, let’s take the shortcut to the airport. And
    0:07:46 there’s same accident, whatever, horrible traffic, you missed the flight. And that’s like that status
    0:07:50 quo versus non-status quo decision. Right. You’re going against what’s familiar and comfortable.
    0:07:56 Exactly. If we go back to the car example, when you look at what the reaction is to a pedestrian
    0:08:02 dying because of an autonomous vehicle versus because of a human, we’re very, very harsh with
    0:08:07 algorithms. For example, if you get in a car accident and you happen to hit a pedestrian,
    0:08:12 I can say something like, well, Mark didn’t intend to do that. Because I think that I understand
    0:08:17 your mind is not such a black box to me. So, I feel like I have some insight into what your
    0:08:23 decision might be and so more allowing some of the uncertainty to bubble up there. But if this
    0:08:29 black box algorithm makes the decision, now all of a sudden, I’m like, get these cars off the road.
    0:08:33 Never mind that the human mind is a black box itself. Of course. But we have some sort of
    0:08:37 illusion that I understand sort of what’s going on in there, just like I have an illusion that I
    0:08:40 understand what’s going on in my own brain. And you can actually see this in some of the
    0:08:46 language around crashes on Wall Street, too, when you have a crash that comes from human
    0:08:50 beings selling. People say things like, the market went down today. When it’s algorithms,
    0:08:56 they say it’s a flash crash. So now they’re sort of pointing out, this is clearly in the
    0:08:59 skill category. It’s the algorithm’s fault. We should really have a discussion about algorithmic
    0:09:04 trading and whether it should be allowed. When obviously the mechanism for the market
    0:09:08 going down is the same either way. So now if we understand that, so exactly your matrix,
    0:09:12 now we can say, well, okay, human beings understand what’s going to get them in the room.
    0:09:19 And pretty much anybody who’s living and breathing in the top levels of business at this point is
    0:09:22 going to tell you, process, process, process. I don’t care about your outcomes, process, process,
    0:09:27 process. But then the only time they ever have like an all hands on deck meeting is when something
    0:09:31 goes wrong. Like let’s say that you’re in a real estate investing group. And so you invest in a
    0:09:37 particular property based on your model. And the appraisal comes in 10% lower than what you
    0:09:42 expected. Like everybody’s in a room, right? You’re all having a discussion. You’re all examining
    0:09:46 the model. You’re trying to figure out, but what happens when the appraisal comes in 10% higher
    0:09:50 than expected? Is everyone in the room going, what happened here? Now there is the obvious
    0:09:54 reality, which is like, we don’t get paid in process. We get paid in outcomes. Poker players,
    0:09:58 you don’t get paid in process, you get paid in outcome. And so there is an incentive alignment.
    0:10:02 It’s not completely emotional. There’s also an actual, there’s a real component to it.
    0:10:08 Yeah. So two things. One is you have to make it very clear to the people who work for you that
    0:10:13 you understand that outcomes will come from good process. That’s number one. And then number two,
    0:10:19 what you have to do is try to align the fact that as human beings, we tend to be outcome driven
    0:10:28 to what you want in terms of getting an individual’s risk to align with the enterprise risk.
    0:10:31 Because otherwise you’re going to get the CYA behavior. And the other thing is that we want
    0:10:35 to understand if we have the right assessment of risk. So one of the big problems with the
    0:10:39 appraisal coming in 10% too high there could be that your model’s correct. It could be that you
    0:10:44 could have just a tail result, but it certainly is a trigger for you to go look and say, was there
    0:10:48 risk in this decision that we didn’t know was there? And it’s really important for deploying
    0:10:54 resources. I have a question about translating this to say non-investing context. So if in the
    0:11:01 example of Mark’s Matrix, even if it’s a non-consensus wrong, you are staking money
    0:11:06 that you are responsible for. In most companies, people do not have that kind of skin in the game.
    0:11:12 So how do you drive accountability in a process-driven environment when the results actually
    0:11:17 do matter? You want people to be accountable yet not overly focused on the outcome? How do you
    0:11:23 calibrate that? So let’s think about how can we create balance across three dimensions
    0:11:26 that makes it so that the outcome you care about is the quality of the forecast.
    0:11:33 So first of all, obviously this demands that you have people making forecasts. You have to state
    0:11:38 in advance: here’s what I think. This is my model of the world, here’s where all the pieces are going
    0:11:45 to fall. So now you’ve stated that, and whether the outcome is “good”
    0:11:50 or “bad” is how close you are to whatever that forecast is. So now it’s not just like,
    0:11:55 oh, you won or you lost; it’s: was your forecast good? So that’s piece number one: make
    0:12:00 sure that you’re being as neutral across outcomes as you can and focus more on forecast
    0:12:04 quality as opposed to like traditionally what we would think of as outcome quality.
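One standard way to score “how close you were to your forecast” is the Brier score, the mean squared error between stated probabilities and what actually happened. It isn’t named in the episode, but it implements exactly this idea; the example numbers are hypothetical:

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# Lower is better; it scores the quality of the forecast itself, not whether
# any single bet happened to win.
def brier(forecasts, outcomes):
    """forecasts: stated probabilities in [0, 1]; outcomes: 1 if it happened, else 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# An overconfident forecaster who shouts 99% scores worse over a run of
# events than a calibrated one who says 70%, even with identical outcomes.
overconfident = brier([0.99, 0.99, 0.99, 0.99], [1, 0, 1, 1])
calibrated = brier([0.70, 0.70, 0.70, 0.70], [1, 0, 1, 1])
print(overconfident, calibrated)
```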
    0:12:12 So now the second piece is directional. So when we have a bad outcome and everybody gets in the room,
    0:12:16 when was the last time that someone suggested, “Well, you know, we really should have lost more
    0:12:24 here.” Like nobody’s saying that. But sometimes that’s true. Sometimes if you examine it, you’ll
    0:12:29 find out that you didn’t have a big enough position. It turned out, okay, well maybe we should have
    0:12:36 actually lost more. So you want to ask both up, down, and orthogonal. So could we have lost less?
    0:12:42 Should we have lost more? And then the question of should we have been in this position at all?
    0:12:46 So in venture capital, after a company works and exits, they say it sells for a lot of money,
    0:12:51 you do often say, “God, I wish we had invested more money.” You never, ever, ever, ever,
    0:12:55 I have never heard anybody say on a loss we should have invested more money.
    0:12:59 See, I wouldn’t be great if someone said that. Like wouldn’t you love for someone to come up and
    0:13:03 say that to you? That would make you so happy. And what would be the logic of why they should say
    0:13:06 that? I still don’t get the point. Exactly. Why does that matter? I don’t really understand that.
    0:13:11 So let’s, can I just, like simple in a poker example. So let’s say that I get involved in a
    0:13:20 hand with you and I have some idea about how you play. And I have decided that you are somebody
    0:13:26 that if I, if I bet X, you will continue to play with me. Let’s say this is a spot where I know
    0:13:32 that I have the best hand. But if I bet X plus C that you will fold. So if I go above X that I’m
    0:13:36 not going to be able to keep you coming along with me, but if I bet X or below that you will. So I
    0:13:43 bet X you call, but you call really fast in a way that makes me realize, oh, I could have actually
    0:13:48 bet X plus C. You hit a very lucky card on the end and I happened to lose the pot. I should have
    0:13:52 maximized at the point that I was a mathematical favorite. Your model of me was wrong, which is
    0:13:56 a learning independent of the winner, the loss. Exactly. So you need to be exploring those questions
    0:14:01 in a real honest way. Because it has to do with how you size future bets. This is exactly like a
    0:14:05 company betting on a product line. Correct. And then like picking, like, you know, what the next
    0:14:09 product line is going to be, and then not having had the information that would then drive a better
    0:14:12 decision-making process around that. Right. So think about the learning loss that’s happening,
    0:14:16 because we’re not exploring that. The negative direction is, and now you should do this on
    0:14:22 wins as well. So if you do ever discuss a win, you always think like, how could I press? How
    0:14:25 could I have won more? How could I have made this even better? How could I do this again in the
    0:14:29 future? Should we have won less? We oversized the bet and then got bailed out by a fluke.
    0:14:33 We should have actually had less in it. And sometimes not at all, because sometimes
    0:14:37 the reasons that we invested turned out to be orthogonal to the reasons that it
    0:14:41 actually ended up playing out in the way that it was. And so had we had that information,
    0:14:45 we actually wouldn’t have bet on this at all, because it was completely orthogonal. Like,
    0:14:50 we totally had this wrong. It just turned out that we ended up winning. And that can happen.
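Annie’s bet-sizing spot can be put in expected-value terms. A toy sketch, where X, C, the win probability, and the call probabilities are all hypothetical stand-ins for her story:

```python
# Toy EV for Annie's bet-sizing spot: she is a mathematical favorite, and the
# question is whether the opponent would also have called a bigger bet.
# X, C, and p_win are hypothetical numbers, not from the episode.
def bet_ev(bet, p_win, p_call):
    # If called, win `bet` with probability p_win, otherwise lose it.
    # If the opponent folds, the bet earns nothing extra.
    return p_call * (p_win * bet - (1 - p_win) * bet)

X, C, p_win = 100, 50, 0.8
ev_small = bet_ev(X, p_win, p_call=1.0)      # opponent always calls X
ev_big = bet_ev(X + C, p_win, p_call=1.0)    # the fast call revealed X+C gets called too
# Losing this particular pot (the lucky river card) doesn't change which
# bet size had the higher expected value.
print(ev_small, ev_big)
```

The learning is in the model of the opponent, independent of the result: given that X + C would have been called, the larger bet was the better decision even in the hand she lost.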
    0:14:54 Obviously, that happens in poker all the time. But what does that communicate to the people on
    0:14:59 your team? Good, bad, I don’t care. I care about our model. I want to know that we’re
    0:15:03 modeling the world well and that we’re thinking about how do we incorporate the things that we
    0:15:09 learn? Because we can generally think about stuff we know and stuff we don’t know. There’s
    0:15:13 stuff we don’t know we know, obviously. So we don’t worry about that, because we don’t know
    0:15:18 we don’t know it. But then there’s stuff we could know and stuff we can’t know. It’s things like
    0:15:22 the size of the universe or the thoughts of others. Or what the outcome will actually be.
    0:15:28 We don’t know that. I have a question about this, though. What is a time frame for that forecast?
    0:15:32 So let’s say you have a model of the world, a model of a technology, how it’s going to adopt,
    0:15:38 how it’s going to play out. In some cases, there are companies that can take years to get traction.
    0:15:42 You want to get your customers very early to figure that out, right? So you can get that data.
    0:15:48 But how much time do you give? How do you size that time frame for the forecast? So you’re not
    0:15:52 constantly updating with every customer data point. And so you’re also giving it enough time for your
    0:15:57 model, your plan, your forecast to play out. You have to think about very clearly in advance,
    0:16:03 what’s my time horizon? How long do I need for this to play out? But also, don’t just do this
    0:16:06 for the big decisions, because there’s things that you can forecast for tomorrow as well,
    0:16:11 so that you end up bringing it into just the way that people think. And then once you’ve decided,
    0:16:15 okay, this is the time horizon of my forecast. And you would want to be thinking about what
    0:16:22 are forecasts we make for a year, two years, five years for the specific decision to play out.
    0:16:27 And then just make sure that you talk in advance at what point you’ll revisit the forecast.
    0:16:31 So you want to think in advance, what are the things that would have to be true
    0:16:36 for me to be willing to come in and actually revisit this forecast? Because otherwise, you can
    0:16:39 start, as you just said, you can turn into, like, a leaf in the wind.
    0:16:44 Exactly, because then you’re one bad customer and you suddenly over-rotate on that when,
    0:16:49 in fact, it could have been nothing. So if you include that in your forecast,
    0:16:52 here are the circumstances under which we would come in and check on our model.
    0:16:57 Then you’ve already got that in advance. So that’s actually creating constraints
    0:17:01 around the reactivity, which is helpful. Two questions on practical implementation of the
    0:17:05 theory. So what I’m finding is more and more people understand the logic of what you’re describing,
    0:17:09 because people are getting exposed to these ideas and kind of expanding in importance.
    0:17:12 And so more and more people intellectually understand this stuff. But there’s two kind of,
    0:17:16 I don’t know, so-called emotion-driven warps or something that people just really have a hard
    0:17:21 time with. So one is, you understand this could be true of investors, CEO, product line manager,
    0:17:24 in a company, kind of anybody, in one of these domains, which is you can’t get the
    0:17:27 non-consensus right results unless you’re willing to take the damage,
    0:17:32 the risk on the non-consensus wrong results. But people cannot cope with the non-consensus
    0:17:37 wrong outcome. They just emotionally cannot handle it. And they would like to think that
    0:17:39 they can. And they intellectually understand that they should be able to. But as you say,
    0:17:44 when they’re in the room, it’s such a traumatizing experience that it’s touching the hot stove,
    0:17:48 they will do anything in the future to avoid that. Is that just a… And so one interpretation would
    0:17:52 be, that’s just simply flat out human nature. And so to some extent, the intellectual understanding
    0:17:56 of here doesn’t actually matter that much because there’s an emotional override. And so that would
    0:18:00 be a pessimistic view on our ability as a species to learn these lessons. Or do you have a more
    0:18:04 optimistic view of that? I’m going to be both pessimistic and optimistic at the same time.
    0:18:09 So let me explain why. Because I think that if you move this a little bit, it’s a huge difference.
    0:18:13 You sort of have two tasks that you want to take. One is, how much can you move the individual to
    0:18:19 sort of train this kind of thinking for them? And that means naturally, they’re thinking in
    0:18:24 forecasts a little bit more, that when they do have those kinds of reactions, which naturally
    0:18:30 everybody will, they right the ship more quickly so that they can learn the lessons more quickly.
    0:18:35 Right? I mean, I actually just had this happen. I turned in a draft of my next book, the first
    0:18:38 part of my next book to my editor, and I just got the worst comments I’ve ever gotten back.
    0:18:42 And I had a really bad 24 hours. But after 24 hours, I was like, you know what, she’s right.
    0:18:48 Now, I still had a really bad 24 hours. And I’m, like, the give-me-negative-feedback queen,
    0:18:52 because I’m a human being. But I got to it fast. Like, I sort of got through it pretty quickly
    0:18:57 after this. I mean, I was on the phone with my agent saying, I’m standing
    0:19:01 my ground. This is ridiculous. And then he got a text the next day being like, no, she’s right.
    0:19:05 And then I rewrote it. And you know what? It’s so much better for having been rewritten. And now
    0:19:10 I can get to a place of gratitude for having the negative feedback. But I still had the really
    0:19:15 bad day. So it’s okay. It doesn’t go away. Right. Yeah. And it’s okay. Like, we’re all human.
    0:19:22 Like, we’re not robots. So number one is like, how much are you getting the individuals to say,
    0:19:27 okay, I improved 2%. That’s so amazing for my decision making and my learning going forward.
    0:19:32 And then the second through line is, what are you doing to not make it worse?
    0:19:37 Because obviously, for a long time, people like to talk about I’m results oriented.
    0:19:39 I mean, it’s like the worst sentence that could come out of somebody’s mouth.
    0:19:43 Why is that the worst? I’ve heard that a lot. Because you’re letting people know that all you
    0:19:47 care about is like, did you win or lose? That’s fantastic. Be results oriented to all you want.
    0:19:52 You should pay by the piece. You will get much faster work. But the minute that you’re asking
    0:19:56 people to do intellectual work, results oriented is like the worst thing that you could say to
    0:20:01 somebody. So I think that we need to take responsibility. And the people in our orbit,
    0:20:06 we can make sure at minimum that we aren’t making it worse. And I think that that’s,
    0:20:10 so that’s pessimistic and optimistic. I don’t think anyone’s making a full reversal here.
    0:20:13 So the second question then goes to the societal aspect of this.
    0:20:18 And so we’ll talk about the role of the storytellers or as they’re sometimes known,
    0:20:22 the journalists and the editors and the publishers. And so the very
    0:20:26 first reporter I ever met when I was a kid, this is Jared Sandberg at the Wall Street Journal.
    0:20:30 The internet was first emerging. Like there were no stories in the press about the internet.
    0:20:33 And I used to say, like, there’s all this interesting stuff happening. Why am I not
    0:20:36 reading about any of it in these newspapers? And he’s like, well, because
    0:20:39 the story of something is happening is not an interesting story. He said,
    0:20:42 there are only two stories that sell newspapers. He said, one is, oh, the glory of it.
    0:20:45 And the other is, oh, the shame of it. And basically he said, it’s conflict. So it’s
    0:20:48 either something wonderful has happened or something horrible has happened. Like those
    0:20:51 are the two stories. And then you think about business journalism as kind of our domain.
    0:20:54 You got to think about it. You’re like, those are the only two profiles of a CEO
    0:20:57 or a founder you’ll ever read. It’s just like, what a super genius for doing something,
    0:21:01 presumably not consensus and right, or what a moron. Like what a hopeless idiot for doing
    0:21:05 something, not consensus and wrong. And so, and so since I’ve become more aware of this,
    0:21:09 like it’s actually gotten, it’s gotten very hard for me to actually read any of the coverage
    0:21:12 of the people I know, because it’s like the people who got not consensus, right,
    0:21:15 they’re being lavished with too much praise. And the people who got not consensus wrong,
    0:21:19 they’re being damaged for all kinds of reasons. The traits are actually the same in a lot of
    0:21:25 cases. And so I guess as a consequence, like if you read the coverage, it really reinforces this
    0:21:30 bias of being results oriented. And it’s like, it’s not our fault that people don’t want to
    0:21:34 read a story that says, well, he tried something and it didn’t work this time, right?
    0:21:34 Yes, exactly.
    0:21:35 And so is there a…
    0:21:38 But it was mathematically pretty good. If we go back to Pete Carroll,
    0:21:43 this is a pretty great case. If we think about options theory, just quickly: the pass preserved
    0:21:47 the option for two run plays. So if you want to get three tries at the end zone instead of two,
    0:21:51 strictly for clock management reasons, you pass first.
    0:21:54 Right. And that’s not going to kick off ESPN Sports Center that night. And so optimistic or
    0:22:00 pessimistic that the narrative, the public narrative on these topics will ever move.
    0:22:08 I’m super, super pessimistic on the societal level, but I’m optimistic on if we’re educating
    0:22:13 people better, that we can equip them better for this. So I’m really focused on how do we
    0:22:19 make sure that we’re equipping people to be able to parse those narratives in a way that’s more
    0:22:27 rational. And particularly, now there’s so much information. And it’s all about the framing
    0:22:33 and the storytelling. And it’s particularly driven by what’s the interaction of your own
    0:22:36 point of view. We could think about it as partisan point of view, for example, versus
    0:22:40 the point of view of the communicator of the information and how is that interacting with
    0:22:44 each other, in terms of how critically are you viewing the information, for example.
    0:22:50 I think this is another really big piece of the pie and somewhat actually related to the question
    0:22:53 about journalism, which is that third dimension of the space. So we talked about two dimensions,
    0:22:57 which is sort of outcome quality, and how are you allowing that you’re exploring both
    0:23:02 downside and upside outcomes in a way that’s really looking at forecast. How are you thinking
    0:23:07 directionally, so that you’re more directionally neutral. But then the other piece of the puzzle
    0:23:13 is how are you treating omissions versus commissions? So one of the things that we know
    0:23:19 with this issue of resulting is, here’s a really great way to make sure that nobody ever
    0:23:26 results on you. Don’t do anything. If I just don’t ever make a decision, I’m never going to
    0:23:31 be in that room with everybody yelling at me for the stupid decision I made, because I had a bad
    0:23:36 outcome. But we know that not making a decision is making a decision. We just don’t think about it
    0:23:40 that way. And it doesn’t have to just be about investing. You can have a shadow of your own
    0:23:44 personal decision. So, you know, it’s really interesting. I remember I was giving somebody
    0:23:51 advice who was like 23. And so obviously, you know, newly out of college had been in this position
    0:23:56 for a year and was really, really unhappy in the position. And he was asking me like, I don’t know
    0:24:01 what to do. I don’t know if I should change jobs. And I said, well, you know, so I did all the tricks,
    0:24:04 you know, time traveling. And so I was like, well, okay, imagine it’s a year from now. Do you
    0:24:09 think you’re going to be happy in this job? No. Okay, well, maybe you should go and choose this
    0:24:14 other, maybe you should go and try to find another position. And this is what he said to me. And this,
    0:24:18 I think, shows you how much people don’t realize that the thing you’re already doing, the status
    0:24:23 quo thing, choosing to stay in that really is a decision. So he said to me, but if I go and find
    0:24:28 another position, and then I have to spend another year, which I just spent trying to learn the ins and
    0:24:33 outs of the company. And it turns out that I’m not happy there. I’ll have wasted my time. And I said
    0:24:38 to him, okay, well, let’s think about this, though, the job you’re in, which is a choice to stay in,
    0:24:44 you’ve now told me it’s 100% in a year that you will be sad. Then if you go to the new job,
    0:24:49 yes, of course, it’s more volatile. But at least you’ve opened your, you’ve opened the range of
    0:24:54 outcomes up. But he didn’t want to do it, because staying where he was didn’t
    0:24:59 feel like he was choosing it. So he felt like if he went to the other place and he
    0:25:04 ended up sad, somehow that would be his fault, a bad decision. So profound. In my case,
    0:25:08 this is maybe getting a little too personal, but in my case, it was a decision that I didn’t know
    0:25:13 I had made to not have kids. And it’s still an option, but it’s probably not going to happen.
    0:25:18 And my therapist kind of told me that my not deciding was a choice. And I was like so blown
    0:25:23 away by that, that it had then allowed me to then examine what was going on there in that
    0:25:28 framework in order to not do that for other arenas in my life, where I might actually
    0:25:32 want something, or maybe I don’t, but at least it’s a choice that there’s intentionality behind
    0:25:36 it. Well, I appreciate you sharing. I mean, I really want to thank you for that because I think
    0:25:41 that people, first of all, should be sharing this kind of stuff so that people feel like they can
    0:25:45 talk about these kinds of things. Number one, and number two, in my book, I’ve got all these
    0:25:49 examples in there of like, how are you making choices about raising your kids when it feels so
    0:25:52 consequential? You’re making decisions for other people. Right. And you’re trying to decide like,
    0:25:56 should I have kids or shouldn’t I have kids? Or this school or that school or where am I supposed
    0:26:03 to live? And the thing that I try to get across is, we can talk about investing like I’m putting
    0:26:09 money into some kind of financial instrument, but we all have resources that we’re investing.
    0:26:14 That’s right. It’s our time, your energy, your heart. It could be whatever, your friendships,
    0:26:19 your relationships. So you’re deploying resources and like for the kind of decision that you’re
    0:26:24 talking about, it’s like, if you choose to have children, you’re choosing to deploy
    0:26:29 certain resources with some expected return, some of it good, some of it bad. And if you’re
    0:26:34 choosing not to have children, that’s a different deployment of your resources toward other things.
    0:26:38 And you need to know that there are limits. Everything isn’t a zero-sum game.
    0:26:43 No. But approaching the world and the fact that evolution has approached the world as a zero-sum
    0:26:48 game and our toolkit makes it a zero-sum game, means that we need to still view everything
    0:26:52 as a zero-sum game when it comes to those trade-offs and resources. Because you are losing
    0:26:56 something every time, even in a non-zero-sum game. Right. So I don’t feel like the world is a zero-sum
    0:27:01 game in terms of like, most of the activities that you and I would engage on, we can both win too.
    0:27:06 But it’s a zero-sum game to go back to your therapist. It’s a zero-sum game between you
    0:27:10 and the other versions of yourself that you don’t choose. Exactly. Or an organization and
    0:27:14 the other versions of itself it doesn’t choose. Exactly. So there’s a set of possible futures
    0:27:20 that result from not making a decision as well. So on an individual decision, let’s put things
    0:27:26 into three categories. Clear misses, near misses, and hits. There’s some that would just be a clear
    0:27:30 miss, throw them out. And there’s some that I’m going to sort of really agonize over and I’m going
    0:27:36 to think about it and I’m going to do a lot of analysis on it. And so the ones which become a
    0:27:42 yes go into the hit category. And the other one is a near miss. I came close. What happens with
    0:27:48 those near misses is they just go away. So what I realized is that on any given decision, let’s
    0:27:52 take an investment decision. If I went to you or you came to me and said, well, tell me what’s
    0:27:57 happening with the companies that you have under consideration. On a single decision,
    0:28:02 when I explain to you why I didn’t invest in a company, it’s going to sound incredibly reasonable
    0:28:07 to you. So you’ll only be able to see in the aggregate, if you look across many of those
    0:28:13 decisions, that I tend to be having this bias toward missing. Towards saying, you know what,
    0:28:18 we’re not going to do it so that I don’t want to stick my neck out. Now this for you is incredibly
    0:28:21 hard to spot because you do have to see it in the aggregate because I’m going to be able to
    0:28:27 tell you a very good story on any individual decision. So the way to combat that and again,
    0:28:31 get people to think about what we really care around here as forecast, not really outcomes,
    0:28:36 is actually to keep a shadow book. The anti portfolio should contain basically all of your
    0:28:41 near misses, but then you have to take a sample of the clear misses as well, which nobody ever looks
    0:28:46 at, because the near misses tend to be a little in your periphery if they happen to be big hits.
    0:28:50 So here’s the problem. So the good news is bad news. So the good news is we have actually done
    0:28:54 this. And so we call it the shadow portfolio. Awesome. And the way that we do it is we make
    0:28:59 the investment. We take the other equivalent deal of that vintage of that size that we almost did,
    0:29:02 but didn’t do. We put that in the shadow portfolio and we’re trying to do kind of
    0:29:07 apples to apples comparison. In finance theory terms, the shadow portfolio may well outperform
    0:29:11 the real portfolio. And in finance terms, that’s because the shadow portfolio may be higher variance,
    0:29:16 higher volatility, higher risk, and therefore higher return. Because the fear is the ones that
    0:29:20 are hitting are the ones that are less, they’re less spiky, they’re less volatile, they’re less
    0:29:25 risky. Right. So what’s wonderful about that, when you decide not to invest in a company,
    0:29:29 you actually model out why that’s in there. It’s often, by the way, it’s often a single flaw that
    0:29:34 we’ve identified. Yeah. Like it’s just like, oh, we would do it except for X. Right. Where X looks
    0:29:37 like something that’s like potentially existentially bad. Right. And then that’s just
    0:29:41 written in there. And so you know that. So, and then just make sure people, like those ones that
    0:29:45 people are just rejecting out of hand. That’s my question. So we never do that. But let me ask
    0:29:48 you how to do that though. So that’s what we don’t do. And as you’re describing,
    0:29:51 I’m like, of course we should do that. I’m trying to think of how we would do that. Because the
    0:29:57 problem is we reject 99 for every one we do. Yeah. So you just literally, it’s a sample. You just
    0:30:00 take a random sample. A random sample. Okay. I mean, as long as it’s just sort of being kept
    0:30:06 in view a little bit, because what that does is it basically just acts as pushing against your model.
    0:30:10 You’re just sort of getting people to have the right kind of discussion.
    0:30:15 So all of that communicates to the people around you, like, I care about your model.
    0:30:19 So let me ask you a different question on, because you talk about sort of groups of decisions.
    0:30:22 So the other question, portfolios of decisions. So the other question is, early on in the firm,
    0:30:25 I happen to have this discussion with a friend of mine. And he basically looked at me and he’s like,
    0:30:29 you’re thinking about this all wrong. You were thinking about this as a decision. You’re thinking
    0:30:32 about investor not. He said, that’s totally the wrong way to think about this. You should be thinking
    0:30:38 about this is, is this one of the 20 investments of this kind, of this class size that you’re
    0:30:42 going to put in your portfolio. When you’re evaluating an opportunity,
    0:30:47 you are kind of definitionally talking about that opportunity. But it’s very hard to
    0:30:51 abstract that question from the broader concept of a portfolio or a basket.
    0:30:54 Yeah. What I would suggest there is actually just doing some time traveling that as people
    0:30:58 are really down in the weeds to say, let’s imagine it’s a year from now, and what does the portfolio
    0:31:04 look like of these investments of this kind. So I’m a big promoter of time traveling, of just
    0:31:08 making sure that you’re always asking that question, what does this look like in a year?
    0:31:11 What does this look like in five years? Are we happy? Are we sad?
    0:31:15 If we imagine that we have this, what percentage of this do we think will have failed?
    0:31:19 We understand that any one of these individual ones could have failed. So let’s remember that.
    0:31:24 And I think that that really allows you to sort of get out of what feels like the biggest decision
    0:31:29 on earth because that’s the decision you have to be making and be able to see it in the context of
    0:31:35 kind of all of what’s going on. It’s fantastic. One of the most powerful things my therapist
    0:31:39 gave me, and it was such a simple construct. It was sort of like doing certain things today is
    0:31:45 like stealing from my future self. Oh, it blew me. It blew my mind. So beautiful.
    0:31:49 It’s so beautiful. And it seems so like, you know, hokey, like personal self help you. But
    0:31:55 actually I had never thought of because we were on a continuum by making discreet individuals
    0:31:59 like Sonal in the past, Sonal today, Sonal this woman in the future I haven’t met yet.
    0:32:06 Wow. Like the idea of stealing from her was like I… That’s really a lovely way to put it.
    0:32:10 Yeah, she is. I have an amazing therapist. I like talking publicly about therapy because
    0:32:15 I like to destigmatize it. No, I’m very, very open about like, let’s not hide it. It’s totally fine.
    0:32:19 There’s no fucking reason to hide it. I totally agree. Some of the ways that we deal with this
    0:32:23 is actually prospectively employing really good decision hygiene, which involves a couple of
    0:32:28 things. One is some of this good time traveling that we talked about where you’re really imagining
    0:32:32 what is this going to look like in the future so that that’s metabolized into the decision.
    0:32:39 Two is making sure that you have pushback once there’s consensus reached. Great. Now let’s go
    0:32:44 disagree with each other. Then the next thing is in terms of the consensus problem is to make
    0:32:51 sure that you’re eliciting as much input not in a room with other people. So, you know,
    0:32:55 when somebody has a deal they want to bring to everybody that goes to the people individually,
    0:32:58 they have to sort of write their thoughts about it individually. And then it comes into the
    0:33:01 room after that. As opposed to the pile on effect that just happened. As opposed to the pile on
    0:33:06 effect. And that reduces the sort of effects of consensus anyway. So now this is how you then come
    0:33:11 up with basically what your forecast of the future is that then is absolutely memorialized because
    0:33:15 that memorializing of it acts as the prophylactic. First of all, it gives you your forecast, which
    0:33:19 is what you’re trying to push against anyway. You’re trying to change the attitude to be that
    0:33:24 the forecast is the outcome that we care about. And it acts as a prophylactic for those emotional
    0:33:28 issues, right? Which is now you, it’s like, okay, well, we all talked about this and we had our
    0:33:33 red team over here and we had a good steel man going on. And we kind of really thought about
    0:33:39 why we were wrong. We questioned if somebody, you know, let’s, if somebody has the outside view,
    0:33:45 what would this really look like to them by eliciting the information individually. We were
    0:33:51 less likely to be in the inside view anyway. We’ve done all of that good hygiene. And then that acts
    0:33:56 as a way to, to protect yourself against these kinds of issues in the first place. Again,
    0:34:02 you’re going to have a bad 24 hours. I’m just like, for sure. But you can get out of it more
    0:34:07 quickly, more often and get to a place where you can say, okay, moving on to the next decision,
    0:34:11 how do I, how do I improve this going forward? Yeah. So building on that, but returning real
    0:34:16 quick to my optimism pessimism question, if society is not going to move on these issues,
    0:34:19 but we can move as individuals. So one form of optimism would be more of us can move as
    0:34:23 individuals. The other form of optimism could be there will just always be room in these
    0:34:26 probabilistic domains for the rare individual who’s actually able to think about this stuff
    0:34:30 correctly. Like there will always be an edge. There will always be certain people who are
    0:34:34 like much better at poker than everybody else. There will. Oh, I think that’s for sure. Okay.
    0:34:38 Because most people simply, most people just simply can’t or won’t get there. Like a few
    0:34:42 people in every domain might be able to take the time and have the discipline of willpower to kind
    0:34:46 of get all the way there. But most people can’t or won’t. I think that in some, in some ways, maybe
    0:34:51 that, that’s okay. Like, I mean, I sort of think about from an evolutionary standpoint, that kind
    0:34:55 of thinking was selected for, for a reason, right? Like it’s better for survival, likely better for
    0:34:59 happiness. You mean the conventional, conventional wisdom? Yeah. Don’t touch the burn stove twice.
    0:35:03 Yeah. Or run away when you hear rustling in the leaves. Don’t sit around and say, well, it’s a
    0:35:06 probabilistic world. I have to figure out how often is that a lion that’s going to come eat me?
    0:35:09 Most people shouldn’t be playing in the World Series of Poker. I have people come up to me all
    0:35:14 the time and be like, Oh, you know, I play poker, but it’s just a home game. You know, and I’m like,
    0:35:17 what are you saying? Just a home game. Like there are different purposes to poker. Like
    0:35:21 you probably have a great time doing that. And it brings you a tremendous amount of enjoyment.
    0:35:24 And you don’t have an interest in becoming a professional poker player and why just be
    0:35:30 proud of that. I think that that’s amazing. Like I play tennis. I’m not saying, Oh, but you know,
    0:35:36 I’m just playing week, you know, I’m just playing in like USTA, like 3.5. Like I’m really happy with
    0:35:42 my tennis. I think it’s great. So I think we need to remember that like people have different things
    0:35:48 that they love. And this kind of thinking, I think that I would love it if we could spread it more.
    0:35:52 But of course, there are going to be some people who are going to be ending up in this
    0:35:56 category more than others. And that’s okay. Like not everybody has to think like this. I think
    0:36:00 it’s all right. So one of the things I get asked all the time is like, well, we can’t really do
    0:36:06 this because people expect us to be confident in our choices. Don’t confuse confidence and certainty.
    0:36:12 So I can express a lot of uncertainty and still convey confidence. Ready? I’m weighing these
    0:36:16 three options, A, B and C. I’ve really done the analysis. Here’s the analysis. And this is what
    0:36:22 I think. I think that option A is going to work out 60% of the time. Option B is going to work
    0:36:27 out 25% of the time. And option C is going to work out 15% of the time. So option A is the clear
    0:36:33 winner. Now, I just expressed so much uncertainty in that sentence. But also a lot of confidence.
    0:36:37 But also a lot of confidence. I’ve done my analysis. This is my forecast. And all that I
    0:36:42 ever ask people to do when they do that is make sure that they ask a question before they bank
    0:36:46 the decision, which is, is there some piece of information that I could find out that would
    0:36:51 reverse my decision that would actually cause, not that would make it go from 60 to 57. I don’t
    0:36:55 care about modulating so much. I care that you’re going to actually change. And your point is that organizations
    0:36:59 can then bake that into their process. And not just in the forecasting, but in arriving to that
    0:37:04 decision. So that then the next time they get to it right or wrong, they make a better decision.
    0:37:10 And if the answer is yes, go find it. Or sometimes the answer is yes, but the cost is too high. It
    0:37:15 could be time. It could be opportunity costs, whatever. Exactly. So then you just don’t. And
    0:37:18 then you would say, well, then you all recognize as a group, we knew that if we found this out,
    0:37:22 it would change our decision. But we agreed that the cost was too high. And so we
    0:37:25 didn’t. So then if it reveals itself afterwards, you’re not sad. Well, you’ve talked a lot about
    0:37:29 how people should use confidence intervals and communicating, which I love because we’re both
    0:37:37 ex-PhD psychology people, neither are finished. So I love that idea. One thing that I struggle with,
    0:37:40 though, is again, in the organizational context, like if you’re trying to translate this to a
    0:37:46 big group of people, not just one-on-one or small group decisions, how do you communicate a confidence
    0:37:52 interval and all the variables in it in an efficient kind of compressed way? Like honestly,
    0:37:57 part of communication and organizations is emails and quick decisions. And yes, you can have all
    0:38:03 the process behind the outcome. But how do you then convey that even though the people were not
    0:38:08 part of that room of that discussion? I think that there’s a simpler way to express uncertainty,
    0:38:12 which is using percentages. Now, obviously, sometimes you can only come up with a range.
    0:38:19 But for example, if I’m talking to my editor, and this is very quick in an email, I’ll say,
    0:38:24 you’ll have the draft by Friday, 83% of the time. By Monday, you’ll have it 97% of the time.
    0:38:27 Those are inclusive, right? That’s another way of doing a confidence interval, but without
    0:38:32 making it so wonky. Without making it so wonky. So I’m just letting her know, most of the time,
    0:38:37 you’re going to get it on Friday, but I’m building, like my kid gets sick or I have trouble with a
    0:38:42 particular section of the draft or whatever it is, and I set the expectations for that way.
    0:38:45 That’s fantastic. I mean, we’ve been trying to do forecasting even for like timelines for
    0:38:49 podcasts, editing and episodes. And I feel frustrated because I have like a set of frameworks,
    0:38:55 like if there’s accents, if there’s more than two voices, if there’s, you know, a complexing,
    0:38:59 room tone, like interaction, feedback, sound effects. I know all the factors that can go into
    0:39:05 my model, but I don’t know how to put a confidence interval in our pipeline spreadsheet for, you
    0:39:09 know, all the content that’s coming out. Yeah. So one way to do it is think about what’s the range,
    0:39:13 what’s the earliest that I could get it, and you put a percentage on that. And then you think about
    0:39:17 the latest day, they’re going to get it. And you put a percentage on that. And so now,
    0:39:23 what’s wonderful about that is that it’s a few things. One is I’ve set the expectations properly
    0:39:28 now so that I’m not getting, you know, yelled at on Friday, like where the hell’s the draft.
    0:39:32 Exactly. And I think that, and a lot of what happens is that because we’re sort of, we think
    0:39:37 that we have to give a certain answer. It ends up, boy, who cried wolf, right? So that if I’m
    0:39:44 telling her, I’m going to get it on Friday and, you know, 25% of the time, 25% of the time I’m
    0:39:50 late, she just starts to not put much stock in what I’ve said anyway. So that’s number one.
    0:39:54 Number two is what happens is that you really kind of infect other people with this in a good
    0:39:59 way, where you get them, it just moves them off of that black and white thinking. So like,
    0:40:02 I love that. One of the things that I love thinking about, and this is the difference
    0:40:09 between a deadline or kind of giving this range, is that I think that we ask ourselves,
    0:40:16 am I sure? And other people, are you sure way too often that it’s a terrible question to ask
    0:40:20 somebody because the only answer is yes or no. So what should we be asking? How sure are you?
    0:40:25 How sure are you? I have a quick question for you on this because earlier you mentioned uncertainty.
    0:40:30 How do you as an organization build that uncertainty in by default? So first of all,
    0:40:34 we obviously talked a little bit about time traveling and the usefulness of time traveling.
    0:40:39 So one thing that I like to think about is not overvalue the decision that’s right at hand,
    0:40:45 the things that are right sitting in front of us. So you can kind of think about it like,
    0:40:50 how are you going to figure out the best path? As you think about what your goals are and obviously
    0:40:54 the goal that you want to reach is going to sort of define for you what the best path is.
    0:40:58 If you’re standing at the bottom of a mountain that you want to summit, let’s call the summit your
    0:41:02 goal, all you can really see is the base of the mountain. So as you’re doing your planning,
    0:41:06 you’re really worried about how do I get the next little bit, right? How do I start?
    0:41:10 But if you’re at the top of the mountain, having attained your goal, now you can look at the whole
    0:41:14 landscape. You get this beautiful view of the whole landscape and now you can really see what
    0:41:18 the best path looks like. And so we want to do this not just physically like standing up on a
    0:41:23 mountain, but we want to figure out a cognitive way to get there. And that’s to do this really good
    0:41:28 time traveling. And you do this through back casting and premortem. And now let’s look backwards
    0:41:33 instead of forwards to try to figure out, this is now the headline. Let me think about why that
    0:41:37 happened. So you could think about this like as a simple like weight loss goal. I want to lose a
    0:41:41 certain amount of weight within the next six months. It’s the end of the six months. I’ve lost
    0:41:47 that weight. What happened? You know, I went to the gym. I avoided bread. I didn’t eat any sweets.
    0:41:53 I made sure that, you know, whatever. So you now have this list. Then in pairing with that,
    0:41:59 you want to do a premortem, which is I didn’t get to the top of the mountain. I failed to lose
    0:42:03 the weight. I failed to do whatever it is. And then all the things you can do to counter program
    0:42:07 against that. Exactly. Because that’s going to reveal really different things. It’s going to reveal
    0:42:13 some things that are just sort of luck, right? Let me think, can I do something to reduce the
    0:42:17 influence of luck there? Then there’s going to be some things that have to do with your
    0:42:22 decisions. Like I went into the break room every day and there were donuts there. And so I couldn’t
    0:42:26 resist them. So now you can think about how do I counter that, right? How can I bring other people
    0:42:30 into the process and that kind of thing? And then there’s stuff that’s just, you can figure out,
    0:42:33 it’s just out of your control. It turns out I have a slow metabolism. And now what happens is that
    0:42:36 you’re just much less reactive and you’re much more nimble because you’ve gotten a whole view of
    0:42:40 the landscape and you’ve gotten a view of the good part of the landscape and the bad part of the
    0:42:46 landscape. But I’m sure, as he told you, people are very loath to do these premortems because I think
    0:42:52 that the imagining of failure feels so much like failure that people are like, no, and you should
    0:42:56 posit, you know, positive visualization. I mean, even in brainstorming meetings, everyone’s like,
    0:43:00 don’t dump on an idea. But the exact point is you have to dump on an idea and kill it in the winnowing
    0:43:06 of options. No. As part of the process, you should be then premorteming it. Exactly. There’s
    0:43:11 wonderful research by Gabriele Oettingen that I really recommend people see; the references
    0:43:18 are in my book. In across domains, what she’s found is that when people do this sort of positive
    0:43:22 fantasizing, the chances that they actually complete the goal are just lower than if people
    0:43:27 do this negative fantasizing. And then there’s research that shows that when people do this
    0:43:32 time travel and this backwards thinking that increases identifying reasons for success or
    0:43:39 failure by about 30%, you’re just more likely to see what’s in your way. So like, for example,
    0:43:44 she did like one of the simple studies was she asked people who were in college,
    0:43:49 you know, who do you have a crush on that you haven’t talked to yet? She had one group who,
    0:43:53 you know, it was all positive fantasies. So like, oh, I’m going to meet them and I’m going to ask
    0:43:56 them out on a date and it’s going to be great and then we’re going to live happily ever after and
    0:44:01 whatever. And then she had another group that engaged in negative fantasizing. What if I asked
    0:44:06 them out and they say no? Like they said no and I was really embarrassed and so on and so forth.
    0:44:12 And then she revisited them like four months later to see which group had actually gone out on a date
    0:44:16 with the person that they had a crush on and the ones that did the negative fantasizing were much
    0:44:20 more likely to have gone out on the date. It’s fantastic. Yeah. So one of the things that I
    0:44:25 say is like, look, when we’re in teams to your point, we tend to sort of view people as naysayers,
    0:44:32 right? But we don’t want to think of them as downers. So I suggest divide those up into two
    0:44:37 processes. Have the group individually do a back cast, have the group individually write a narrative
    0:44:42 about a pre-mortem. And what that does is when you’re now doing a pre-mortem, it changes the
    0:44:47 rules of the game where being a good team player is now actually identifying the ways that you fail.
    0:44:51 I love what you said because it’s like having two modes as a way of getting into these two
    0:44:55 mindsets. Right. Where you’re not stopping people from feeling like they’re a team player. And I
    0:44:59 think that that’s the issue. As you said, it’s like, don’t sit there and like, you know, crap on
    0:45:04 my goal. Well, because what are they really saying? You’re not being a team player. So change the
    0:45:09 rules of the game. You had this line in your book about how regret is an unproductive emotion. The issue is
    0:45:14 that it comes after the fact, not before. So the one thing that I don’t want people to do is think
    0:45:18 about how they feel right after the outcome. Because I think that then you’re going to overweight
    0:45:25 regret. So you want to think about regret before you make the decision. You have to get it within
    0:45:29 the right timeframe. What we want to do instead is, right in the moment of the outcome when you’re
    0:45:34 feeling really sad. You can stop and say, am I going to care about this in a year? Think about
    0:45:39 yourself as a happiness stock. And so if we can sort of get that more 10,000 foot view on our own
    0:45:45 happiness and we think about ourselves as we’re investing in our own stock, our own happiness
    0:45:50 stock, we can get to that regret question a lot better. You don’t need to improve that much
    0:45:57 to get really big dividends. You make thousands of decisions a day. If you can get a little better
    0:46:04 at this stuff, if you can just, you know, de-bias a little bit, think more probabilistically,
    0:46:10 really sort of wrap your arms around uncertainty to free yourself up from sort of the emotional
    0:46:15 impact of outcomes. A little bit is going to have such a huge effect on your future decision making.
    0:46:19 Well, that’s amazing, Annie. Thank you so much for joining the a16z podcast.
    0:46:21 -Thank you very much. -Yes, thank you.

    with @annieduke, @pmarca, and @smc90

    Every organization, whether small or big, early or late stage — and every individual, whether for themselves or others — makes countless decisions every day, under conditions of uncertainty. The question is, are we allowing that uncertainty to bubble to the surface, and if so, how much and when? Where do consensus, transparency, forecasting, backcasting, pre-mortems, and heck, even regret, usefully come in?

    Going beyond the typical discussion of focusing on process vs. outcomes and probabilistic thinking, this episode of the a16z Podcast features Thinking in Bets author Annie Duke — one of the top poker players in the world (and a World Series of Poker champ), former psychology PhD candidate, and founder of the national decision education movement How I Decide — in conversation with Marc Andreessen and Sonal Chokshi. The episode covers everything from the role of narrative — hagiography or takedown? — to fighting (or embracing) evolution. How do we go from the bottom of the summit to the top of the summit to the entire landscape… and up, down, and opposite?

    The first step to understanding what really slows innovation down is understanding good decision-making — because we have conflicting interests, and are sometimes even competing against future versions of ourselves (or of our organizations). And there’s a set of possible futures that result from not making a decision as well. So why feel both pessimistic AND optimistic about all this?

  • a16z Podcast: Seven Trends in Blockchain Computing

    AI transcript
    0:00:05 The content here is for informational purposes only, should not be taken as legal business
    0:00:10 tax or investment advice or be used to evaluate any investment or security and is not directed
    0:00:14 at any investors or potential investors in any A16Z fund.
    0:00:19 For more details, please see A16Z.com/disclosures.
    0:00:22 Welcome to the A16Z YouTube channel.
    0:00:25 Today I’m here with Olaf from Polychain, a good friend of ours.
    0:00:31 We’re both longtime cryptocurrency enthusiasts; maybe, if you don’t mind, we’ll just go back
    0:00:32 a little bit.
    0:00:36 You were employee one at Coinbase back in what year was that?
    0:00:37 2013.
    0:00:38 Okay.
    0:00:39 And I guess you got interested in crypto before that?
    0:00:40 Yeah.
    0:00:44 So I was in college when I got into Bitcoin and I wrote my undergraduate thesis on Bitcoin
    0:00:45 in 2011.
    0:00:48 And what first got you excited about it?
    0:00:53 So when I first read about it, I thought there’s no way this is possible to have a native
    0:00:56 internet money that isn’t controlled by any sort of central party.
    0:00:58 So I found it fascinating on its face.
    0:00:59 It’s just sort of technically–
    0:01:00 Yeah.
    0:01:06 But then once I dug into it, I kind of thought about the nth order implications and you realize
    0:01:08 this is a huge deal.
    0:01:12 It means that for the first time, you can have digital scarcity on the internet.
    0:01:17 And of course, you could move to a global unified financial and monetary system that’s
    0:01:23 outside the scope of any sort of sovereign state, political control, and is really opt
    0:01:24 in by all the users.
    0:01:28 So the general idea was just really fascinating to me.
    0:01:32 And I really did right away sort of buy as much Bitcoin as I could.
    0:01:36 But that was back when Bitcoin was kind of the dominant idea and everyone thought the
    0:01:40 kind of main thing you could do with this kind of new architecture was digital money.
    0:01:45 Since then, the kind of possibility space, at least to me, feels like it’s expanded dramatically.
    0:01:46 Yes, it has.
    0:01:52 And so to me, the big moment was when Ethereum launched.
    0:01:58 For me, I started seeing– a big breakthrough in my head was when I realized that Ethereum
    0:02:03 wallets were actually more like browsers than bank accounts.
    0:02:07 And I started seeing some stuff get built on Ethereum that people in Bitcoin had tried
    0:02:08 to do for a long time.
    0:02:12 In Bitcoin, you only have one asset, which is Bitcoins.
    0:02:15 People had tried to build other sorts of tokens or assets that would settle to the Bitcoin
    0:02:16 blockchain.
    0:02:19 And there were projects like Mastercoin, Counterparty–
    0:02:20 We funded this project Lighthouse.
    0:02:21 We helped–
    0:02:22 Oh, yeah.
    0:02:24 And that was basically decentralized crowdfunding.
    0:02:25 Yeah.
    0:02:26 There was crowdfunding on Bitcoin.
    0:02:27 It was just very, very difficult.
    0:02:37 I mean, Bitcoin has decided, perhaps correctly, to trade off the expressiveness of the programming
    0:02:39 language for increased security.
    0:02:43 So they have a very weak programming language– very deliberate, though– which provides,
    0:02:48 I think, perhaps better security and kind of– it’s a more conservative kind of development
    0:02:49 path.
    0:02:51 So as a result, it’s very hard to build crowdfunding.
    0:02:56 And I remember when Ethereum came out, it was literally one of the 20-line pieces of
    0:02:57 code on the home page.
    0:03:02 And this is one of the things I think people really underestimate how much the developer
    0:03:04 abstraction matters.
    0:03:09 So it took Mike Hearn something like eight months to build Lighthouse using Bitcoin scripting.
    0:03:15 With the Ethereum ERC-20 system, you and I could practically do this on our cell phones now.
    0:03:21 And that layer of abstraction opens up use cases that I think people underestimate how
    0:03:22 big a deal it is.
    0:03:24 Well, it’s the same with all computing.
    0:03:30 You could have done– there were mobile phones that had GPS and cell phone connectivity
    0:03:32 pre-iPhone.
    0:03:35 But the iPhone made it so the app developer didn’t have to understand how any of that
    0:03:36 stuff worked.
    0:03:41 You could focus on recruiting drivers and building a beautiful UI.
    0:03:44 And to me, obviously, the iPhone– there’s a bunch of great things about the iPhone and
    0:03:45 the Android and what made smart phones take off.
    0:03:50 But a lot of it was that they figured out the right abstraction layer for the developers
    0:03:56 so that you could get a million apps and a whole bunch of creativity that happened as
    0:03:57 a result of that.
    0:04:02 And I actually think we’re going to see the next wave of that now with WebAssembly or
    0:04:08 Wasm, because there have been problems with the Ethereum Solidity language, huge security
    0:04:09 problems.
    0:04:13 And there’s not actually as much expressivity as people think there is.
    0:04:16 It’s still limited to Solidity.
    0:04:20 Even one other language was basically found to be totally insecure.
    0:04:29 So I think that these VM systems moving towards a Wasm compiler– and this is like Polkadot,
    0:04:34 DFINITY, eWasm, so like Ethereum 2– I think it’s a really big deal.
    0:04:37 So just to explain to people– so Bitcoin comes up with this new kind of architecture
    0:04:42 that I think of as– I think it’s, frankly, mischaracterized today as a ledger.
    0:04:43 I think of it as a computing platform.
    0:04:45 So what’s the difference between a computer and a ledger?
    0:04:46 A ledger is more like a hard drive.
    0:04:48 A computer is a hard drive plus a processor.
    0:04:50 And Bitcoin has a processor.
    0:04:54 It’s just a processor that is limited– but deliberately limited– in the applications
    0:04:55 it can run.
    0:05:00 And the main application it runs is the thing that moves Bitcoins around, right?
    0:05:02 Ethereum says, hey, let’s take that processor and let’s expand it a lot.
    0:05:06 But as you’re saying it does, they developed their own programming language, Solidity, which
    0:05:10 is kind of JavaScript-like, but it’s kind of eccentric.
    0:05:16 And just so people know, Wasm, so that’s WebAssembly, which is now baked into every browser.
    0:05:23 And so it’s sort of– there are now billions of computers that run Wasm natively.
    0:05:27 And it will soon become– there already is, and it’s going to continue to become the most
    0:05:31 dominant kind of runtime environment for software in the world.
    0:05:34 And what that means is now that all the blockchains are supporting Wasm, that means that all
    0:05:40 of these compilers that are built for other programming languages, Python, Rust, whatever
    0:05:44 your favorite language is, you already have– you get to piggyback off of all of
    0:05:47 the tooling that’s been built over the last 20 years for those other programming languages.
    0:05:52 So you make it a much more kind of familiar experience to developers.
    0:05:56 And so instead of needing to learn Solidity, which is, again, this custom language, it’s
    0:06:02 a pretty new language in the scheme of things, you can use your off-the-shelf favorite programming
    0:06:03 language.
    0:06:06 To me, this is a similar step function that we saw from Bitcoin scripting.
    0:06:08 Well, see, it’s not just the programming language then.
    0:06:11 It’s also like– it’s like the great thing about Python is not just like there’s 10,000
    0:06:13 GitHub projects, or I mean, there’s formal verification.
    0:06:17 So just as an example, why does it take a long time to release an Ethereum project today?
    0:06:21 I think at least half the development time probably is security audits, right?
    0:06:25 And that’s because you’ve got this really kind of this new programming language, people
    0:06:29 don’t fully understand it, there aren’t these kind of tools around it, and suddenly you
    0:06:34 switch to something like Python, and you’ve got just like 20 years of whatever, 15 years
    0:06:39 of incredible tools that are built around that environment.
    0:06:40 Yeah, that’s exactly right.
    0:06:48 And so to me, this is one way that we’re building useful abstractions to make this even easier
    0:06:51 to ship like end user applications.
    0:06:52 Yeah.
    0:06:55 Yeah, I mean, the big thing’s happening now, so I guess kind of jumping forward.
    0:06:58 So I think you and I probably see it similarly, there was kind of the first era, which was
    0:07:01 Bitcoin, but it was sort of the main– the only thing really in that first era, one of
    0:07:02 the only things.
    0:07:06 Then there’s sort of the Ethereum era, which sort of takes this idea of digital money and
    0:07:09 expands it to blockchain computers, right?
    0:07:16 And now I think what we’re seeing over the next 12 months or so, maybe 12 to 24 months,
    0:07:18 is the kind of the wave three happening, right?
    0:07:24 Which is taking the ideas of Ethereum, upgrading the developer experience like you just discussed,
    0:07:29 very importantly upgrading the scalability, which means multiple things.
    0:07:34 It basically means what we call in traditional venture capital, scale out, not scale up.
    0:07:38 So instead of getting scaled by adding more, a beefier computer, you can get scaled by
    0:07:40 adding more computers to the network, right?
    0:07:43 Which lets you kind of expand linearly with the demand.
    0:07:47 And that requires what’s known as sharding or some sort of parallelism that lets you
    0:07:48 run things in parallel.
    0:07:51 And that’s what a lot of these new projects are doing– better developer experience
    0:07:56 and things like Wasm and just all the other tooling around it– they’re building parallelism
    0:08:01 in from the start, right, as opposed to having to upgrade later.
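The partitioning step behind sharding can be sketched simply: state is deterministically assigned to shards so every node agrees, without coordination, which machines own which accounts. This is a toy illustration only (the shard count and function name are invented), and the hard parts, cross-shard transactions and per-shard consensus, are omitted:

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count

def shard_for(address: str) -> int:
    """Deterministically map an account address to a shard, so every
    node independently agrees on which shard owns which state."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Transactions that touch accounts on different shards can, in
# principle, be validated in parallel by different sets of nodes.
accounts = ["alice", "bob", "carol", "dave"]
assignment = {account: shard_for(account) for account in accounts}
```

This is the "scale out, not scale up" idea: adding shards adds capacity roughly linearly, because each shard's validators only process their own slice of the state.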
    0:08:04 And what else?
    0:08:08 I think a third one for me is they’re often building the ability to upgrade the protocol
    0:08:09 into the protocol.
    0:08:10 Yep.
    0:08:11 Yep.
    0:08:13 So the kind of governance of the protocol itself and also the governance of the smart contracts
    0:08:14 themselves.
    0:08:15 Yes, exactly.
    0:08:22 So to me, Bitcoin and Ethereum, and maybe very much intentionally, have not had formal
    0:08:25 systems to upgrade themselves.
    0:08:29 And that’s because it does open up a potential security threat to the system.
    0:08:32 If it can upgrade, then who controls that upgrade process?
    0:08:37 But if you can adequately design an upgrade process that is controlled by the same people
    0:10:43 that already control the consensus layer, you know, it’s an equivalent threat to making
    0:08:46 a bad block or something like that.
    0:08:51 So to me, you know, the ability to say, actually, there’s a better system, let’s upgrade and
    0:08:55 move to that system in a coordinated manner.
    0:08:56 You know, I think that’s really exciting.
    0:09:02 That’s the way I think of that, is there’s always a trade-off between the security of
    0:09:08 the system and the very promise of a blockchain computer to me is that it’s making a commitment
    0:09:12 that the code will continue to run as designed, there’s sort of game theoretic guarantees.
    0:09:15 And you want to, of course, maintaining that commitment is very, very important.
    0:09:16 Yep.
    0:09:22 But there’s a trade-off because software also, as we know from, you know, decades of experience,
    0:09:29 A, has bugs that needs to be fixed, and B, benefits from, you know, from sort of iterative
    0:09:30 upgrade cycles, right?
    0:09:31 Yeah.
    0:09:32 And so how do you balance those two things?
    0:09:37 And so Bitcoin and Ethereum kind of took the extreme kind of conservative route, which said the
    0:09:40 only way to upgrade is to kind of get a whole bunch of people to just literally upgrade their
    0:09:44 software simultaneously, which led to all these kind of offline things, including sort of
    0:09:49 famously the Bitcoin Civil War and then the Ethereum fork, which was very contentious.
    0:09:52 And so they were kind of built in a way to be very conservative with their governance
    0:09:53 methods.
    0:09:54 Exactly.
    0:09:55 Yep.
    0:09:56 And so how do you find the right balance?
    0:10:03 And the people are experimenting and trying new systems to get a better balance.
    0:10:09 So I think a big part of this is there are actors in the Bitcoin and Ethereum and other
    0:10:16 crypto systems that are part of what defines like the reality of those systems.
    0:10:21 And so you could call these node operators in Bitcoin, miners obviously have a role to
    0:10:22 play in it.
    0:10:26 In proof of stake protocols, it’s very much the token holders who are staking.
    0:10:29 And we’ve seen really, really strong participation.
    0:10:33 So in a lot of these delegated proof of stake protocols, you see, you know, 70, 80 percent
    0:10:36 of token holders participating in consensus.
    0:10:39 So they’re already defining what is the latest block in the blockchain.
    0:10:43 They’re already defining the rules of that computer.
    0:10:49 So in my mind, you know, how can we say we’re going to use a decentralized mechanism to
    0:10:55 come to consensus about the computer state, but we’re going to also say it’s impossible
    0:10:58 to come to a decision about how to change the rules of the computer.
    0:11:02 So I’m very skeptical of the claim that we can’t achieve very secure on-chain governance.
    0:11:04 I think we can.
    0:11:09 And to me, it’s a very big deal because if you get governance right, in theory, everything
    0:11:11 else should be a sort of waterfall down from that.
    0:11:16 And you can do very exciting things that I think we haven’t done.
    0:11:21 You know, I think a big problem for both Bitcoin and Ethereum has been funding of core protocol
    0:11:22 development.
    0:11:24 So application developers have found all sorts of ways to monetize.
    0:11:28 You can go raise a VC round, you can do a token sale, you know, there’s lots of money
    0:11:32 sloshing around in general if you’re building on top of these protocols.
    0:11:36 But Ethereum has this weird problem where there’s probably 100x the number of developers
    0:11:40 building apps on top as there are building core protocol stuff for Ethereum.
    0:11:41 And so to me…
    0:11:43 Well, that has to do with the history of Ethereum, right?
    0:11:48 So there was a foundation which has a certain amount of money, but there was never kind
    0:11:50 of a structure set up.
    0:11:51 Yeah, there’s no structure.
    0:11:52 And in reality…
    0:11:54 Set up to continuously fund the development.
    0:11:59 And in reality, there needs to be some sort of, basically like a tax system, where if
    0:12:02 I contribute to the core protocol and create all of this value…
    0:12:05 Well, it’s like what Zcash does, what they have inflation baked into the protocol and
    0:12:07 some portion of that goes through…
    0:12:13 Which is kind of crude, because their system is, you know, it’s designed around one team.
    0:12:17 I don’t think it’s designed to last 100 years in its current implementation.
    0:12:19 In their defense, I don’t think they do either.
    0:12:20 Yeah, yeah.
    0:12:23 I mean, I think that they think of it as a MVP to a better system.
    0:12:24 Yeah, to a better system, yeah.
    0:12:30 And so in my mind, the ability for developers to contribute new protocol suggestions and
    0:12:33 basically attach a bid to them.
    0:12:37 So then I could say, if this gets merged in and this actually becomes the new version
    0:12:41 of the protocol, me and my development team are actually going to inflate a certain number
    0:12:42 of coins.
    0:12:43 They’re just going to be created.
    0:12:46 It’s like dilution, basically, for the existing holders, and they’re going to be rewarded
    0:12:47 to us.
    0:12:51 And because this is a long-term iterative game between all the token holders and the developers
    0:12:54 who are going to contribute code to the protocol, it’s actually in the token holder’s best
    0:13:01 interest to pay them and say, “Okay, we’re going to pay you guys what I accept as an
    0:13:02 Ethereum holder.”
    0:13:04 Like a 1% dilution to ship Ethereum 2?
    0:13:06 Absolutely, right?
    0:13:07 It’s a no-brainer too.
    0:13:13 And so if you could create 1% of the Ethereum tokens and grant those to the development
    0:13:17 team, today that’s like, what, $200 million?
    0:13:18 It’s a large amount.
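The back-of-the-envelope math behind a dilution-funded developer grant is simple enough to sketch. The supply and price figures below are purely hypothetical placeholders, not actual Ethereum numbers:

```python
# Hypothetical figures, for illustration only.
total_supply = 100_000_000      # tokens outstanding before the grant
token_price = 200.0             # USD per token
dilution = 0.01                 # 1% inflation granted to the dev team

new_tokens = total_supply * dilution
grant_usd = new_tokens * token_price   # dollar value of the grant

# Each existing holder's share of total supply shrinks by about 1%.
holder_share_after = 1 / (1 + dilution)
```

The argument in the conversation is that, at these magnitudes, a roughly 1% haircut is a rational price for holders to pay for a protocol upgrade that grows the whole pie.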
    0:13:23 What do you say to the skeptics who think that proof-of-stake governance will devolve
    0:13:29 into either, like, a plutocracy on one hand, where the big investors or whatever– whatever
    0:13:35 type of -ocracy– kind of control it for their own interests, or alternatively is vulnerable
    0:13:38 to bribery attacks and other kinds of…
    0:13:44 Yeah, so I just think that we have relatively at scale proof-of-stake systems today.
    0:13:47 This argument seemed better 12 months ago before Tezos and Cosmos.
    0:13:48 Yeah, that’s my thing.
    0:13:51 It’s like, you see Tezos and Cosmos, it’s like, if you can get away with these attacks,
    0:13:54 there are $100 million bounties to pull them off.
    0:13:55 The biggest bug bounties?
    0:13:58 Yeah, I’m a big believer in economic incentives for these bug bounties.
    0:14:03 I mean, if you can attack Tezos and break consensus and get bad blocks through it…
    0:14:06 I haven’t followed the Tezos stuff, I’m sure there are people trying to attack it.
    0:14:08 Oh, I’m sure there are.
    0:14:11 And I’m just like there’s people trying to attack Bitcoin all the time, right?
    0:14:13 And these are highly adversarial environments.
    0:14:18 But in my view, proof-of-stake to me has a few features that I really like about it.
    0:14:25 So one, you have node operator and miner type participants and token holders.
    0:14:30 And in the Bitcoin system, we’ve actually seen cases where these parties don’t have
    0:14:37 each other’s best interests in mind, like there’s not a perfect overlap of their interests.
    0:14:40 And so in a way, you could argue there’s like a check and balance or something like that.
    0:14:43 But in a system like Tezos or Cosmos, those are the same people, right?
    0:14:45 So the token holders are the validators.
    0:14:49 And I think that just means in general, there’s going to be a better alignment of interests
    0:14:53 between the block producers and the token holders.
    0:14:59 The second thing is that if you attack a proof-of-stake network, mitigation of the attack after it
    0:15:01 happens is significantly easier.
    0:15:07 So if you come in with 51% of the coins, and in most proof-of-stake, it’s actually 34%
    0:15:11 of the coins is enough to attack, and you start doing bad things, right?
    0:15:14 Bad blocks and stuff like that.
    0:15:18 The minority people here can really just hard fork the chain and delete your coins and keep
    0:15:19 going.
    0:15:26 However, the reason they can do that is because that attacker’s validation was intra-protocol,
    0:15:30 like within the protocol, so you can delete their stuff and move forward.
    0:15:36 If you do that with hardware and proof-of-work systems, you actually have to change the hashing
    0:15:40 algorithm for the entire proof-of-work chain and burn everything to the ground, like for
    0:15:42 the good guys and the bad guys.
    0:15:46 Because you have to fork so that the hardware is now bad for everyone.
    0:15:50 And so you have to basically punish the good guys and the bad guys to mitigate a proof-of-work
    0:15:51 system.
    0:15:54 So to summarize that: in a proof-of-work system, the worst-case
    0:15:57 scenario is your attack doesn’t work.
    0:16:01 And in a proof-of-stake system, the worst-case scenario is– not only does
    0:16:04 your attack not work, but you also lose your entire life savings in that protocol.
    0:16:05 Yes, exactly.
    0:16:08 So it’s a much more punitive measure.
    0:16:13 Well, it’s disproportionately punitive to the bad guy in proof-of-stake.
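The asymmetry being described can be made concrete with a toy model: a failed proof-of-work attacker keeps their mining hardware, while a detected proof-of-stake attacker can have their entire stake slashed (or deleted via a community hard fork). The function names and dollar figures below are invented for illustration:

```python
def attacker_loss_pow(hardware_cost, energy_spent, attack_succeeds):
    # A failed PoW attacker loses the energy spent, but keeps the
    # hardware, which can even go back to mining honestly afterwards.
    return energy_spent if not attack_succeeds else 0

def attacker_loss_pos(stake, attack_detected):
    # A detected PoS attacker can lose the entire stake:
    # it lives inside the protocol, so it can be slashed or forked away.
    return stake if attack_detected else 0

# Illustrative numbers only.
pow_loss = attacker_loss_pow(hardware_cost=50_000_000,
                             energy_spent=1_000_000,
                             attack_succeeds=False)
pos_loss = attacker_loss_pos(stake=50_000_000, attack_detected=True)
```

In this sketch the downside for the PoS attacker is the whole bonded stake, while the PoW attacker's downside is bounded by energy costs, which is the "disproportionately punitive" point made above.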
    0:16:17 By the way, and so I would add also the other thing about proof– I mean, there’s also the
    0:16:18 energy use.
    0:16:19 Oh, well, yeah.
    0:16:20 Yeah.
    0:16:21 It doesn’t– Bitcoin mining destroys all this– wastes all this energy.
    0:16:22 It deliberately does.
    0:16:23 But it’s still bad.
    0:16:27 Also, very– for me, it’s a critical thing is– you’re talking about developer experience
    0:16:29 and user experience.
    0:16:34 You just simply can’t have sub-second transaction finality in a proof-of-work system.
    0:16:38 So Bitcoin, you really need to wait– each block is 10 minutes, and it has to do with
    0:16:43 the coordination among the network and the network propagation latency and things.
    0:16:44 But also, it’s a probabilistic method.
    0:16:47 So you really have to wait probably 60 minutes, if not longer.
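The "wait roughly an hour" rule of thumb comes from probabilistic finality: the chance that an attacker with a minority of hashpower ever overtakes the honest chain falls off with each confirmation. A sketch of the catch-up calculation from section 11 of the Bitcoin whitepaper:

```python
from math import exp, factorial

def attacker_success_prob(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of total hashpower
    ever catches up from z blocks behind (Bitcoin whitepaper, sec. 11)."""
    p = 1.0 - q                  # honest hashpower fraction
    lam = z * (q / p)            # expected attacker progress over z honest blocks
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# With 10% of hashpower, six confirmations (about an hour of
# 10-minute blocks) push the attacker's odds well below 1%.
risk = attacker_success_prob(q=0.10, z=6)
```

The probability never reaches exactly zero, which is why proof-of-work finality is only ever probabilistic, in contrast to the fast, explicit finality that many proof-of-stake designs aim for.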
    0:16:50 And from a user experience point of view, if I send you– and it’s the same with
    0:16:51 Ethereum today, it’s proof-of-work.
    0:16:55 And you go, if you download a wallet and you try to use some of these apps, there’s
    0:17:00 a lot of really cool apps as early, but you’ve got to wait 30 seconds after you click a button.
    0:17:02 That’s not a modern user experience.
    0:17:05 And the only way we’re going to get to modern user experience is through these proof-of-stake
    0:17:06 systems.
    0:17:09 They have all these different methods that get much faster transactions.
    0:17:11 So just, I think, a whole bunch of reasons why.
    0:17:15 For example, with sharding, no one that I’ve ever heard of knows how to do sharding in
    0:17:16 proof-of-work.
    0:17:20 So parallelism, scaling, all these other things we’re talking about require proof-of-stake.
    0:17:24 There’s a reason that every– I think 2017 was a major year of fundraising, and 2019
    0:17:25 is a major year of launches.
    0:17:30 And there’s a reason that every blockchain that’s launching today is mostly using proof-of-stake.
    0:17:32 I mean, with the exception of Grin and things like this, right?
    0:17:33 Yeah.
    0:17:38 But those are all just simple transactional– they don’t have smart contracts, they don’t
    0:17:41 have really scaling solutions.
    0:17:47 Grin is not– it’s very much focused on private payments and scalable payments.
    0:17:52 It’s not trying to open up a suite of new applications that were not possible with older protocols.
    0:17:54 Which to me is the really exciting thing.
    0:17:59 What is possible that we haven’t seen happening today?
    0:18:03 Because even the Ethereum developers, when they shipped the protocol in 2015, I don’t
    0:18:07 think any of them could have conceived of the whole ICO wave.
    0:18:14 And that was like 18 months away, and it was still hard to see that that was coming.
    0:18:17 To me, this is what makes computing interesting, right?
    0:18:18 Is there’s this interplay.
    0:18:23 If you go look at the PC, the internet, smartphones, I think we’re going to see it with crypto,
    0:18:27 I think we’re going to see it with VR in a couple– this year, in a couple years.
    0:18:30 There’s this interplay where you get– the platforms get better.
    0:18:32 In this case, we’re talking about Layer 1 smart contract platforms, right?
    0:18:36 Which are the ones we’re talking about that are coming out over the next 12 to 18 months.
    0:18:40 And those are kind of the equivalent of the Apple II or the iPhone or whatever in this
    0:18:41 world.
    0:18:42 To me, that’s cool.
    0:18:43 That’s great.
    0:18:44 And we’re into that, right?
    0:18:47 But the really cool part is all of the crazy stuff that people– no one imagined– it’s
    0:18:50 really funny if you go back and look at the early Apple II ads.
    0:18:53 So Apple II came out in ’77, PCs didn’t really take off for six years.
    0:18:56 And for those six years, people were trying to figure out what do you do with these
    0:18:57 things.
    0:18:58 And all the old ads are really funny.
    0:19:01 They always have people at the kitchen table doing their recipes, and computer companies
    0:19:02 didn’t really know.
    0:19:05 But then the developers came along and invented word processing, spreadsheets.
    0:19:06 All this other cool stuff.
    0:19:09 And so that to me is what’s really– like, right now, we’re seeing a little bit on the
    0:19:10 application side.
    0:19:14 But it’s limited because the platforms, the Layer 1 smart contract platforms, just aren’t
    0:19:15 there.
    0:19:16 Right?
    0:19:17 So we can’t– I mean, we’re seeing cool stuff.
    0:19:21 We’ll talk about it today, like in DeFi, for example, in terms of finance, where maybe
    0:19:24 the performance parameters are looser and things.
    0:19:26 They don’t need the kind of performance you need for other things.
    0:19:30 But what’s really going to get exciting to me is that period of, like, hopefully a year
    0:19:34 or two from now when we’ve got a great platform, and then we just see this explosion of creativity.
    0:19:38 Yeah, well, and the big thing is people need to untether themselves from thinking only
    0:19:42 in terms of efficiency improvements of existing processes.
    0:19:46 So like early use cases for Bitcoin that people talked about a lot is basically cost savings
    0:19:52 of remittance or cost savings of micropayments or something like that.
    0:19:57 But that’s really looking at existing use cases and applications that– like a recipe
    0:19:59 book and saying, oh, let’s put this on the computer.
    0:20:01 That’s how it always happens, by the way.
    0:20:04 Like, you look at early web, and they took magazines, or they put brochures, and they
    0:20:05 put them on the web.
    0:20:06 But that’s just how people think.
    0:20:09 And then it took people 10 years to realize, wait, this is a two-way medium.
    0:20:10 Yeah, there’s all these–
    0:20:12 You could generate content on YouTube and Facebook.
    0:20:16 So what I really care about are what are going to be the native apps that are only possible
    0:20:17 with blockchains.
    0:20:23 And also, the other thing is people are very caught on the Web2 model.
    0:20:27 People are talking about daily active users, but of financial products.
    0:20:28 It’s just an odd thing.
    0:20:31 They’re like, are you a daily active user of your mortgage?
    0:20:32 Yeah.
    0:20:33 Right?
    0:20:34 It’s like the wrong framework.
    0:20:35 It’s just the wrong question.
    0:20:36 Right?
    0:20:37 But to me, I think we need to–
    0:20:42 Well, the reason everyone was so focused on DAUs for Web2 was because the main business
    0:20:46 model was advertising, and that was proven– so it was a proxy for what the business model
    0:20:47 was.
    0:20:50 But ultimately, if you have a business model that is not dependent on DAUs, that’s not
    0:20:51 your main metric.
    0:20:52 Yeah, exactly.
    0:20:59 To me, I just think we’re going to see this iteration and explosion of basically financial
    0:21:03 services and finance, but at the speed of open source software development, which is
    0:21:04 really, really fast.
    0:21:10 And it’s highly iterative, and it’s like a big shared open code repository that people
    0:21:11 are building on.
    0:21:15 So to me, the innovation here is going to be very, very fast.
    0:21:18 I mean, it already has been, but it will continue to be.
    0:21:25 And the thing I look forward to is what’s going to happen that is sort of unimaginable
    0:21:30 today, and sort of by definition wasn’t possible with the old architecture.
    0:21:34 I think, to me, one of the– there are many kind of cool sci-fi things in crypto.
    0:21:41 I think one of the coolest things is the idea of a kind of code software that has agency
    0:21:43 or sort of autonomous software.
    0:21:49 So you think about maker today or compound, and this idea that the code itself actually
    0:21:54 controls money and has business processes and logic, and it’s not the code that’s run
    0:21:59 by– it’s not like code– Google code controls stuff, too, or PayPal does, but it’s not really
    0:22:00 the code that does it.
    0:22:04 They’re just the instruments through which the management of that company executes their
    0:22:05 will.
    0:22:09 Here, the code itself actually is autonomous and is no longer controlled.
    0:22:14 This is the sort of idea, to me, the key idea of a blockchain is that the code continues
    0:22:19 to run as designed, and it has sort of game theoretic guarantees that it will.
    0:22:24 And that gives code this autonomous– I use autonomous not in the sense of AI autonomous,
    0:22:28 but in the sense of having agency and self-control and runs forever.
    0:22:33 As we speak right now, these contracts on Ethereum are running and doing things and distributing
    0:22:36 money or collecting money or running other business logic.
    0:22:42 A rough but potentially useful analogy is thinking about the corporate structure.
    0:22:46 So the idea of a corporation, in theory, is that it kind of runs forever.
    0:22:50 And management can turn over, and there’s different types of capital formation to keep
    0:22:52 it funding and everything.
    0:22:57 And it’s all through legal contracts in certain regions, right?
    0:23:01 So the corporation, as a legal entity, is always sort of registered with the state in
    0:23:06 a specific geographic region, and it’s all papered through legal contracts.
    0:23:11 But could a system like that that coordinates capital from many, many different people and
    0:23:16 outlives any of the individual people, could that move to a pure software system, using,
    0:23:19 as you said, sort of autonomous software?
    0:23:23 Instead of these legal contracts that are based in specific geographic regions, can
    0:23:26 it be sort of sovereign to the internet?
    0:23:32 These are the types of ideas that– it sounds crazy today, but when you think about this
    0:23:38 sort of history of the corporation and the liquid stock markets that we have, all these
    0:23:42 concepts that we think of as– they’ve been around forever, they’re really only about
    0:23:44 100 years old or something like that.
    0:23:47 To me, then, an obvious question is, why would you want that?
    0:23:51 And to me, the answers are– one is very important you mentioned before is open source, the fact
    0:23:56 that all of these things we’re discussing, they’re all available by definition.
    0:23:58 They have to be, if they’re on Ethereum, they have to be open.
    0:24:01 You can go read the GitHub code, if you can’t do it yourself, you can have somebody else
    0:24:03 do it, so it’s completely open.
    0:24:09 But then another very important feature is this, what we call compositionality or composability,
    0:24:14 is the idea that you can have one organization here and I can take that and I can build another
    0:24:17 one on top of it that references it.
    0:24:20 And I know I can do– and that’s– the only reason I can do that is a couple of things.
    0:24:23 One is it’s software, so you can actually call the functions and things like that, and it’s
    0:24:25 open source and so I can audit it and trust it.
    0:24:30 But the third thing is because the code itself sort of exists on its own, I know I can build
    0:24:33 on top of it and the code will continue to operate that way, and there won’t be some
    0:24:38 whimsical change in business strategy by the owners of the code, right?
    0:24:39 Exactly.
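Composability as described here, one program building on another whose behavior is guaranteed to keep running as written, can be sketched with plain Python objects standing in for deployed contracts. This is illustrative only: the class names and prices are made up, and real composable contracts are on-chain code, not Python classes:

```python
class PriceOracle:
    """Stands in for a deployed contract whose code cannot be changed
    out from under its callers (illustrative only)."""
    def get_price(self, asset: str) -> float:
        prices = {"ETH": 2000.0, "BTC": 30000.0}  # made-up numbers
        return prices[asset]

class LendingPool:
    """A second 'contract' composed on top of the first: it calls the
    oracle's public function instead of reimplementing the logic."""
    def __init__(self, oracle: PriceOracle):
        self.oracle = oracle

    def collateral_value(self, asset: str, amount: float) -> float:
        return self.oracle.get_price(asset) * amount

pool = LendingPool(PriceOracle())
value = pool.collateral_value("ETH", 3.0)
```

The Lego-brick point is that the pool's author can rely on the oracle's interface precisely because the underlying code is open, callable, and guaranteed not to change on a whim.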
    0:24:45 Which to me, I guess, and this is informed partly through my experience in non-crypto-tech,
    0:24:50 is just so much– there’s so many issues created around platforms and around trusting platforms.
    0:24:54 And so you think about Zynga building on Facebook and the hundreds of entrepreneurs who tried
    0:25:00 to build on top of Twitter and just like, it would have been so cool if, to me, if Twitter
    0:25:04 were this sort of open protocol the way SMTP email is, and you could have– someone could
    0:25:11 build the superhuman of Twitter as opposed to– and the anti– people are complaining
    0:25:12 about spam on Twitter.
    0:25:15 Why isn’t there a third party marketplace with spam filters the way there is with email
    0:25:16 clients?
    0:25:17 It used to be.
    0:25:21 And just all the kind of cool stuff that you get– and you see in the open source world
    0:25:23 now where it’s like Lego bricks and you’re building these buildings out of the different
    0:25:27 bricks, and every piece of code is a new Lego brick, and then you get this kind of combinatorial
    0:25:28 explosion of innovation.
    0:25:29 Yeah.
    0:25:33 Well, and this is– I think a lot of people get caught up or confused on this.
    0:25:35 Decentralization is not an ideological thing.
    0:25:39 It’s an architecture to support that permission-less building.
    0:25:42 This is why the internet was so big.
    0:25:46 If there was like an intranet and Microsoft owned it, like MSN, the Microsoft
    0:25:51 Network, back in the day, we would never have seen Amazon, Google, and all these companies grow
    0:25:53 like they have.
    0:25:58 So to me, that decentralized architecture of all these systems, it’s not like an ideological
    0:25:59 thing.
    0:26:02 It’s really just an architecture that allows developers to build anything they want.
    0:26:07 And as you said, it’s all sort of permanent, and it’s like if every data structure on the
    0:26:10 entire internet was open and had an open API.
    0:26:15 We’ve seen the power of the kind of composability in the open source world now in the traditional
    0:26:19 open source world, meaning like Linux, and Apache, and all this other stuff.
    0:26:21 That has been a phenomenal success.
    0:26:23 90 plus percent of the software in the world today is open source.
    0:26:27 Every, you know, the bulk of the software on your iOS phone, the bulk of your software
    0:26:29 on your Android phone, every data center.
    0:26:30 Why has open source done so well?
    0:26:32 Because you can remix it, right?
    0:26:34 You can take the piece of code and you can do stuff with it.
    0:26:37 And it just gets this, you know, it starts off, and you go back to like when Linux came
    0:26:41 out in like whatever early 90s, it was definitely worse than Windows, but then it just followed
    0:26:47 this much faster like innovation curve because of this fact that you can compose these Lego
    0:26:48 bricks together.
    0:26:51 And you had just anyone in the world who can come contribute, some smart person in some
    0:26:55 random place can see some bug and fix it, just like all those effects.
    0:27:01 And now, but the problem was open source still depended on the goodwill or this financial
    0:27:04 interest of somebody to actually run the code.
    0:27:07 And that’s of course where AWS and Google Cloud stepped in, like we’re going to actually
    0:27:10 run it because open source was just code, right?
    0:27:16 And whereas blockchains are code instantiated, right, it’s code that’s running, and it doesn’t
    0:27:20 depend on the kindness of strangers or capitalists to run it.
    0:27:23 And therefore can’t be usurped in the same way, and it’s just much more powerful because
    0:27:27 it keeps state and has data and has computing ability and just all these other things that
    0:27:28 open source didn’t have.
    0:27:32 So to me, it’s like the best of those two worlds is like all the power of a modern computer
    0:27:36 and then the, and then the composability that made open source successful.
    0:27:37 Yep.
    0:27:38 Yep.
    0:27:44 And I do think that people underestimate just the scope of types of applications that will
    0:27:46 come out of this.
    0:27:52 I think that this idea of a global unified internet money is one of the basics and it’s
    0:27:54 a very, very big deal.
    0:28:01 And if we do have these sort of decentralized autonomous corporations or something, they’re
    0:28:05 going to be using the internet money in order to communicate among each other and create
    0:28:08 financial contracts and things like that.
    0:28:14 But this is why this is such an exciting area because it just feels like the possibilities
    0:28:15 are sort of limitless.
    0:28:19 So let’s talk about the kind of the state of the world right now too.
    0:28:24 So I think the New York Times just talked about how they think the crypto is over and
    0:28:27 there’s all these sort of negative articles about it.
    0:28:28 As usual.
    0:28:29 Yeah.
    0:28:30 I’ve been reading these since.
    0:28:31 I’ve been reading these for almost 10 years.
    0:28:36 I’ve been reading these about the internet too for even longer and technology for longer.
    0:28:44 But there has been a price downturn, I don’t know, maybe some of the excitement is down
    0:28:45 or something.
    0:28:46 I don’t know.
    0:28:49 But what I’m getting at is, where are we in the life cycle
    0:28:50 of this kind of thing?
    0:28:51 Yeah.
    0:28:58 So I do think that 2017 was a year of new financial instruments. And actually, I think a
    0:29:03 lot of people underestimate how small an amount of money was available in 2016 and before
    0:29:04 that.
    0:29:05 Yeah.
    0:29:06 For cryptocurrency and blockchains.
    0:29:09 The whole universe was just pretty small.
    0:29:13 You know, there was no billion dollar company anywhere.
    0:29:16 It was really just a sort of nichey thing.
    0:29:19 And for that reason, there just wasn’t a lot of capital available.
    0:29:22 Now the people that were very excited though about cryptocurrency were the people using
    0:29:23 cryptocurrency.
    0:29:27 But I do think that we saw a huge amount of funding and projects that had been in the
    0:29:29 works for many, many years.
    0:29:33 Got a ton of funding– Filecoin, Tezos, stuff like that.
    0:29:39 And so then, you know, I think 2019 is turning out to be the year of launches.
    0:29:44 You’ve just seen these hugely ambitious projects actually get across the finish line.
    0:29:47 And Cosmos is a great example launched just about a month ago.
    0:29:53 And it’s sort of the first system we’ve ever seen that will allow cross blockchain interaction.
    0:29:58 So we’ve always had these kind of siloed logic and state in say Ethereum.
    0:30:03 And now you could have smart contracts or tokens on Ethereum transfer like to other
    0:30:04 blockchains potentially.
    0:30:08 It also gives you a scaling story because you can have multiple blockchains.
    0:30:10 So it’s almost kind of like sharding, right?
    0:30:11 You have different blockchains that connect to it.
    0:30:12 Sort of.
    0:30:14 I think we think of it as heterogeneous shards as opposed to homogeneous shards.
    0:30:17 So each shard can run its own language and its own environment.
    0:30:18 That’s exactly right.
    0:30:22 And so the development momentum feels very strong to me and we’re going to see a lot
    0:30:24 of very, very exciting launches in 2019.
    0:30:27 However, I think that will be to very little fanfare.
    0:30:31 It’s kind of like Ethereum launched in, you know, the middle of crypto winter in 2015
    0:30:32 and nobody cared.
    0:30:36 You know, it’s not like Ethereum launching was a big event at the time.
    0:30:39 And it’s not like you’re going to launch it and it’s going to be an overnight success.
    0:30:41 You need to then, I think of this as a two-step go to market.
    0:30:44 So the first step is getting developers, right?
    0:30:47 And so, and you got to build that community and they got to build tools and you got to
    0:30:50 build like, you just think about all the stuff that we take for granted probably in the Ethereum
    0:30:55 world of like, you know, wallets and, you know, IDEs and debuggers and just like, you
    0:31:00 know, Etherscan and just like the whole, like caching– you know, Alchemy stuff, caching
    0:31:01 tools or whatever.
    0:31:02 There’s a whole set of infrastructure.
    0:31:03 Right.
    0:31:04 So that’s got to get built.
    0:31:05 You’ve got to get people fired up.
    0:31:06 You’ve got to have like hackathons.
    0:31:11 You’ve got to– people got to do the, you know, tutorials– just a whole set of things
    0:31:12 that have to happen.
    0:31:16 And so even when you launch these, these new layer ones, I think it’s probably, I don’t
    0:31:20 know, at least 12 months, probably before you see like higher quality applications coming
    0:31:21 out.
    0:31:25 The other thing about these is, because the code is
    0:31:27 autonomous– because once you write it, it’s out there–
    0:31:31 You really have to get the security right and some of that, those improvements will come
    0:31:34 through better programming languages and tools, but it also just takes longer.
    0:31:37 I think here people compare it to kind of hardware development versus software development.
    0:31:41 Like you can’t, if you build faulty hardware, you have to recall it physically.
    0:31:42 Yeah.
    0:31:43 Yeah.
    0:31:44 You know, with faulty SaaS software,
    0:31:47 you can fix a few things and deploy.
    0:31:48 And so it just takes a while.
    0:31:53 So I think, yeah, I do share your feeling that this will be the year of launches.
    0:31:57 However, it will be more of a developer kind of phenomenon than a user phenomenon.
    0:32:03 I do think it’ll take, yeah, 12 months, as you said, before we see a lot of the ways
    0:32:06 that these will be used in surprising manners, right?
    0:32:11 I do think that Ethereum was very exciting when it came out, but I really do think even
    0:32:17 the people that built Ethereum didn’t, couldn’t properly predict exactly how it would be used.
    0:32:20 And these, these use cases are like 18 months down the line.
    0:32:22 It’s not that far around the corner.
    0:32:27 This is one thing I love about cryptocurrency is if you miss like three months, you’re already
    0:32:32 behind on, on the scope of, of kind of what is possible and, and what is happening.
    0:32:36 So when you talk about applications, so what are you, so like, I think the thing that’s
    0:32:40 working the most probably on Ethereum today is, is DeFi, decentralized finance, right?
    0:32:44 And I know, let’s, maybe let’s talk a little bit about that and what you’re excited about
    0:32:46 and then like other, other types of applications.
    0:32:52 So I do think that one of the very big things being built on Ethereum that’s exciting are
    0:32:54 stablecoins.
    0:32:59 And particularly for me, it’s crypto-collateralized stablecoins, where the stablecoin is
    0:33:03 pegged to, say, the dollar– but it really could be anything, any asset that’s not
    0:33:04 endogenous to the blockchain.
    0:33:10 So it could be Google stock or it could be S&P 500, it could be a bond, whatever, whatever
    0:33:11 it might be.
    0:33:17 The backing for that value is a smart contract that’s holding, you know, Ethereum compatible
    0:33:19 assets.
    0:33:21 And this is like the MakerDAO system.
    0:33:25 I think it’s a really, really big deal because a lot of the use cases that people originally
    0:33:30 envisioned for cryptocurrencies related to financial services or payments had this significant
    0:33:32 problem, which is just the volatility.
    0:33:37 So even e-commerce with something as volatile as Bitcoin– say the one hour you
    0:33:41 wait until you have to actually close and receive those bitcoins.
    0:33:44 I mean, margin on e-commerce often is pretty low, right?
    0:33:49 You might be getting four or 5%, but the volatility in an hour in Bitcoin can be more than that.
    0:33:55 So I do think that these stablecoins are critical for other types of applications.
    0:34:01 And so the Augur prediction market, you know, other– even just like token trading, you know,
    0:34:05 what is the base pair you’re trading against in a decentralized exchange?
    0:34:08 Is it Ether against some other coin?
    0:34:11 I think also like you just think about lending, for example, like people don’t, you know,
    0:34:17 if you’re buying a house in dollars, you want your stablecoin pegged to dollars.
    0:34:21 And the stablecoin can actually then act as collateral in other types of use cases.
    0:34:25 So I do think that stablecoins are like a critical building block.
    0:34:30 The other thing about MakerDAO that’s interesting is just how it’s a very interesting kind of
    0:34:35 economic structure for how they enforce the peg and how they kind of incentivize the ecosystem.
    0:34:38 And the fact that that runs in a smart contract which holds a significant amount of money is
    0:34:47 just a real, I think to me, a testament to the power of the Ethereum design and the
    0:34:49 sort of what smart contract platforms can do.
    0:34:58 It’s one of many examples, but it’s got more traction than I think people realize, as in
    0:35:03 about 2% of all Ether is held in the MakerDAO contract.
    0:35:06 And now that’s hard capped by the protocol.
    0:35:08 So they could take off that cap.
    0:35:13 And when I say they, I mean actually the MKR holders who vote on these changes.
    0:35:19 And so if they wanted to potentially massively increase the amount of Ether locked in that
    0:35:21 contract, they really could.
    0:35:27 Now I do think it almost starts to create systemic risk at around, say, 5% of all Ether.
    0:35:31 I mean for the Ethereum protocol, for the MakerDAO protocol, so you don’t want half
    0:35:35 of all Ether held in this thing, but in just sheer dollar terms, you know, there’s hundreds
    0:35:42 of millions of dollars locked in this protocol that people are basically using to get a loan.
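The mechanism described above – lock Ether in a contract, mint a dollar-pegged stablecoin against it, and risk liquidation if the collateral value falls – can be sketched in a few lines. This is a deliberately simplified toy model loosely inspired by MakerDAO, not its actual implementation: there are no stability fees, auctions, oracles, or governance, and the 150% ratio and prices are illustrative numbers only.

```python
# Toy sketch of an overcollateralized stablecoin vault, loosely inspired
# by the MakerDAO mechanism discussed above. Deliberately simplified:
# no stability fees, liquidation auctions, or governance.

COLLATERAL_RATIO = 1.5   # collateral value must stay >= 150% of debt

class Vault:
    def __init__(self, eth_price):
        self.eth_price = eth_price   # USD per ETH (a price oracle in reality)
        self.eth_locked = 0.0
        self.dai_debt = 0.0

    def deposit(self, eth):
        self.eth_locked += eth

    def draw(self, dai):
        """Mint stablecoin against the locked collateral."""
        max_debt = self.eth_locked * self.eth_price / COLLATERAL_RATIO
        if self.dai_debt + dai > max_debt:
            raise ValueError("would breach collateral ratio")
        self.dai_debt += dai

    def is_safe(self):
        return self.eth_locked * self.eth_price >= self.dai_debt * COLLATERAL_RATIO

vault = Vault(eth_price=200.0)
vault.deposit(10)          # lock 10 ETH = $2,000 of collateral
vault.draw(1000)           # borrow 1,000 stablecoin (ratio allows up to ~1,333)
print(vault.is_safe())     # -> True
vault.eth_price = 140.0    # price drops: collateral now worth $1,400 < $1,500
print(vault.is_safe())     # -> False (the real system would liquidate here)
```

The overcollateralization is the whole trick: the peg is backed by more value than the stablecoin issued, which is also why the speakers describe users as effectively taking out a loan against their Ether.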
    0:35:47 And so it’s, while these DeFi things are very, very hard to use, it’s kind of a disaster
    0:35:48 from a UX perspective.
    0:35:52 You have to download all the software, you have to have Ether, you know, and you have
    0:35:56 to click through a million different things and have a mental model for what you’re doing.
    0:36:02 You have to be, I mean, it’s just a testament to how hardcore the enthusiasts are that…
    0:36:03 Yeah, exactly.
    0:36:07 And you know, I think a lot of them are arbitrageurs and folks like that that are just doing kind
    0:36:09 of profit seeking behavior.
    0:36:14 But it’s, yeah, I mean, to me, we are seeing kind of the early success of some of these
    0:36:17 low level stablecoin systems.
    0:36:22 And I think that stablecoins are going to be a critical part of the recipe for a lot of
    0:36:25 more abstracted, higher level use cases.
    0:36:30 I think of it as, our friend Balaji has a kind of framework I like, which is, you know,
    0:36:36 he would say, I think, is that the idea that you’d buy a cup of coffee using a cryptocurrency
    0:36:39 is sort of one of the least interesting use cases.
    0:36:43 And he has this kind of model where it’s kind of U-shaped where it’s, on the one hand, there’s
    0:36:47 about a billion and a half people that have smartphones but are unbanked, are not part
    0:36:49 of the internet economy.
    0:36:54 And for those people, it’s very interesting to have a digitally native currency, right?
    0:36:58 And architecturally, it makes a lot of sense because one of the key features of cryptocurrencies,
    0:37:03 it’s a bearer instrument, meaning the recipient can verify that they got paid using just sort
    0:37:07 of math on the internet and not having to rely on a bank or some third party and therefore
    0:37:10 doesn’t need an ID and doesn’t have fraud risk and everything else.
    0:37:13 So that’s sort of the one end where the stuff is so powerful.
    0:37:17 And then the other end is kind of the high end of the software developers and you now
    0:37:21 have programmable money, programmable loans, all these kind of cool new things you can
    0:37:24 do on the innovation side.
    0:37:30 I think of it as like what if, here’s a sort of metaphor, but the fact that photos are
    0:37:35 just a file format that you can send to people, allowed people to invent Facebook and Instagram,
    0:37:39 and if instead, this is again a metaphor, but if instead you had to kind of get permission
    0:37:44 every time you sent a photo, if it was a service and not a file format, like there would have
    0:37:48 been way less innovation around kind of media over the last 20 years and now what if money
    0:37:52 is a file format, it’s just a string of bits, it’s just a string of bits, it’s no longer
    0:37:55 a web service that’s connected to PayPal or Visa or something and they can’t take their
    0:37:58 money and screw it up or do whatever they want and make you get permission and,
    0:38:03 you know, disenfranchise a billion and a half people and everything else, like
    0:38:07 now it’s just bits and like what can you do, it’s a very powerful concept.
    0:38:15 It is and I do think that, you know, an interesting feature of cryptocurrencies for me is that
    0:38:19 the people that become knowledgeable about cryptocurrencies, I would say about 95% of
    0:38:25 them or more, think it’s a good idea once you become knowledgeable about it.
    0:38:31 And so to me, a lot of this is just an education process of like how do we get more and more
    0:38:37 people to recognize why cryptocurrencies have this extremely unique value?
    0:38:41 It’s the most misunderstood– I feel like tech is often misunderstood, but this is by far
    0:38:45 the most, at least of anything I’ve worked in– the delta between the reality and
    0:38:50 the perception. And partly it’s a self-inflicted wound because of the kind of early crypto
    0:38:53 movement and it was, you know, a lot of kind of political anarchist types got into it
    0:38:58 and things, but it’s that’s lingered and it’s just really misunderstood and it’s very
    0:38:59 where I agree with you.
    0:39:02 I have this, I have this a go over and over again, especially people that are technical,
    0:39:06 you give them like the Ethereum white paper, the Filecoin white paper, whatever, you know,
    0:39:10 just a bunch of the Bitcoin white paper and they come back and they’re like, oh my God,
    0:39:12 this is totally different than what people described to me and what I read about.
    0:39:13 Exactly.
    0:39:21 It’s because it’s easy to pay attention to the bad actors and prices and stuff when
    0:39:28 in reality, the kind of fundamental development, yeah, like you said, from Bitcoin to this
    0:39:32 more general computer to the more advanced applications that, again, like Filecoin being
    0:39:39 this low level building block that’s going to enable all sorts of new behaviors because
    0:39:44 just thinking about Filecoin, like, how am I supposed to build any sort of decentralized
    0:39:47 application if I can’t do file storage, right?
    0:39:51 It’s kind of this basic building block, but I can’t build Twitter the protocol or Uber
    0:39:55 the protocol to compete with the centralized web platform unless I have a decentralized
    0:39:59 file architecture underneath it, which today is not really possible.
    0:40:05 And so these low level systems, it’s really remarkable the rippling implications of what
    0:40:10 will become possible. And I do think that the number one barrier is just very simply
    0:40:11 education.
    0:40:17 This is an esoteric and complex area and there’s also a huge amount of smoke and mirrors, right?
    0:40:22 I do think that there are, have always been in the crypto space, it’s international and
    0:40:28 it’s permissionless. So there’s just a lot of crazy behaviors and crazy characters and
    0:40:30 it’s easy to focus on that stuff.
    0:40:35 That’s actually one of the good things about the price downturn is I think it’s cleaned
    0:40:41 up a lot of that and sort of put the focus back on innovation and technology.
    0:40:49 Yeah, I agree. I think that the sort of builders of all this stuff never really stop, but they’re
    0:40:56 also not who the media necessarily pays attention to. I think that the media tends to be a reflection
    0:41:00 of the investors and the investors tend to be really short sighted and focus very much
    0:41:04 on month to month or even day to day type volatility.
    0:41:11 So one interesting trend is what we call vertically integrated applications and something we’ve
    0:41:16 been talking about. And I think the way I think about it is sometimes when you don’t
    0:41:26 have the full kind of tech stack built out, sometimes for a project to kind of get adoption,
    0:41:31 they need to build more themselves. So like a good historical example is Blackberry, they
    0:41:37 come up with an email smartphone in 2003 and at the time you just didn’t have sort of a
    0:41:40 great smartphone platform like the iPhone, you didn’t have great connectivity, you didn’t
    0:41:43 have great backend. So they built the whole thing. They built this hardware, they built
    0:41:47 the software, they built the network, they built the backend and they were able to kind
    0:41:51 of get kind of, I think it was like pull the future forward. Eventually you could do this
    0:41:54 by building an app on the iPhone, but like at the time you couldn’t, so they had to build
    0:41:58 it all. And I think we’re seeing some of that pattern now because we don’t have all the
    0:42:05 layers kind of at the ideal state now, particularly like the layer one smart contract platform
    0:42:08 we were talking about earlier, just we don’t have kind of a great scalable everything else.
    0:42:16 But the, I mean the old wisdom was sort of build a low level, you know, extensible protocol
    0:42:21 and developers will come and build all the useful apps. And I think a great example of
    0:42:26 that was the 0x protocol system, which is like token trading on Ethereum using a
    0:42:31 smart contract. So they said, we’re not going to own sort of the end user interface, we’re
    0:42:36 going to build a low level system and then different people are going to come build web
    0:42:40 interfaces. I think the newer generation of smart contract developers, we’ve seen them
    0:42:44 say, we’re going to build that low level protocol, but we’re also going to own the user interface
    0:42:49 and kind of build that full stack experience. And that vertical integration as you put it,
    0:42:54 I think is potentially going to be a catalyst for a lot of the stuff to move a little bit
    0:43:02 faster than it has historically. And so there’s the project Celo that’s working on, first,
    0:43:09 a kind of low level stablecoin designed for payments and remittances, as well as an Android
    0:43:14 kind of mobile first application designed for folks that don’t have access to traditional
    0:43:18 banking or financial services. And so by owning kind of both pieces, they can kind of iterate
    0:43:24 a bit faster and potentially understand the full scope of how the customer is using this
    0:43:29 platform. And provide kind of a modern user experience
    0:43:33 that you would hope for from a non kind of blockchain app and they’ll provide kind of
    0:43:37 a similar user experience. But then also I think have the kind of the what I think is
    0:43:43 the modern crypto business model of, you know, they own some of the coins and they ultimately
    0:43:49 want to see the tokens appreciate, and are therefore okay with other people,
    0:43:52 for example, starting to build their own apps and even supplanting their app. They don’t
    0:43:57 need to control the end-to-end thing all the way in the future, because they have this business
    0:44:01 model that’s aligned with the community– this grow-with-the-community model
    0:44:06 where, like, the more you give away, the better you do for yourself. Whereas in
    0:44:09 Web2, it was kind of own everything and fight.
    0:44:12 So it’s interesting, because the model is sort of: start Web2, just
    0:44:16 to get the user experience right, but then have the business model that’s sort
    0:44:21 of Web3, and that lets you have this great property of grow the pie, not fight over
    0:44:27 the pie. Yeah, exactly. Okay, so then I guess one thing
    0:44:34 we haven’t covered– we talked about payments, we talked about decentralized finance. We talked
    0:44:38 a little bit about, like, Filecoin and kind of what I would call incentivized infrastructure,
    0:44:43 like kind of new infrastructure that has incentives built in. What are some of the areas that that
    0:44:49 you know kind of application areas. I mean, one thing with with these crypto protocols
    0:44:55 is you can build markets for anything. And so anything today that’s sort of a one to
    0:45:00 one service with, for example, in the case of Filecoin, Amazon Web Services, Microsoft
    0:45:05 Azure, whatever, Google Cloud, you can turn that into a competitive marketplace that sort
    0:45:10 of unifies all of these. And so while Filecoin builds this competitive spot market for
    0:45:14 file storage, you could have a similar thing for many of these kind of low level computer
    0:45:20 resources. So you could do that for compute. You could do that. I think AI data would be
    0:45:26 very interesting one. Yeah, or genetic data. Right. So then– it seems to me a critical
    0:45:29 question of the next 10 years is, where does AI data live?
    0:45:34 Does it live in Google and Amazon servers? Or is it an open protocol where you know anyone
    0:45:38 can access it and there’s some incentive model for providing it and for getting it? One interesting
    0:45:44 intersection is homomorphic encryption, which allows you to train a machine learning system
    0:45:47 based on data that you actually don’t know the plain text. So you only see the encrypted
    0:45:54 version. It allows people to say, okay, I’m going to share the data from my Tesla or my
    0:45:59 smartphone with a major corporation and get paid for that data. And that corporation will
    0:46:03 actually never learn the data but can still train the machine learning algorithm. It’s
    0:46:10 a bit abstract and I think it’s early on that type of use case, but it’s potentially very
    0:46:14 transformative. I think also, you know, you could architect social networks, marketplaces
    0:46:17 like ride sharing, all of this stuff could be architected using these methods and I think
    0:46:21 there would be benefits to all sorts of community members, kind of stakeholders, including the
    0:46:26 drivers and riders. And so that’s a separate, maybe a longer conversation. Yeah. Yeah. Yeah.
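The “train on data you never see in plain text” idea mentioned above can be made concrete with a partially homomorphic scheme. The sketch below is textbook Paillier encryption, which is additively homomorphic: multiplying two ciphertexts gives an encryption of the sum of the plaintexts, so an untrusted party can aggregate encrypted contributions (e.g. training statistics) without learning them. This is my illustration, not anything the speakers specify; the primes are toy-sized and the code is insecure for real use.

```python
# Textbook Paillier encryption: additively homomorphic, so a party can
# sum encrypted values without ever seeing the plaintexts.
# Toy-sized primes -- insecure, for illustration only.
import math
import random

def keygen(p, q):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    # With generator g = n + 1, the decryption constant mu is lam^-1 mod n.
    mu = pow(lam, -1, n)
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:          # r must be invertible mod n
        r = random.randrange(1, n)
    # c = (n+1)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    # L(x) = (x - 1) / n, then multiply by mu, all mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen(10007, 10009)
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
c_sum = (c1 * c2) % (pub[0] ** 2)       # homomorphic addition on ciphertexts
print(decrypt(priv, c_sum))             # -> 42
```

Fully homomorphic encryption, which supports arbitrary computation rather than just addition, is what general encrypted model training would need; Paillier just makes the core property easy to see.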
    0:46:32 Yeah. I do think the value accrual, when these things succeed, goes to the entire larger base, and
    0:46:37 it’s governed by that larger base– you know, instead of basically by an extractive
    0:46:42 corporation that owns the platform and at the end of the day has some level of an adversarial
    0:46:48 relationship with its users. Yeah. I mean, it’s today, yes, Facebook loves its users,
    0:46:54 but also it wants to put as many ads in front of the users as it possibly can, which actually
    0:47:00 disrupt the user experience. So it’s, yeah, it’s an odd relationship, I think that these
    0:47:04 Web2 platforms have with their user bases. I think another interesting area is, it’s
    0:47:09 kind of out of fashion at the moment, but I think it will come back as NFTs or, you
    0:47:15 know, digital goods. It’s always been, you know, there was a whole, I don’t know if you
    0:47:19 were around for this, but the, during when World of Warcraft was a big deal, there was
    0:47:24 this whole kind of underground market called farming. So people wanted to, instead of having
    0:47:28 to, you know, earn your way up to level 70, people wanted to buy their way and there was
    0:47:32 this whole thing where, like, people would– there were these off-game exchange
    0:47:37 websites where you could go do this and it was a big deal. And so a similar idea is to
    0:47:41 sort of take that and legitimize it and say, hey, you can earn, you know, in a game or
    0:47:45 in a virtual world or in some other kind of experience, you know, what if there are goods
    0:47:49 that the user can actually own and take from one game to another and buy and sell them
    0:47:53 and you add economic incentives and you can make a living doing this and you can actually
    0:47:56 own these things in a way that you can’t today. Today you’re really just kind of borrowing
    0:48:00 them and these games will come and go and they’ll, you’ll spend all this time earning
    0:48:04 stuff and it’ll all then disappear or you’ll forget about it. And this is just a much kind
    0:48:08 of more, it’s much more like the offline world, like when you get stuff, you keep it and people,
    0:48:11 and people that’s really popular in the offline world and I think it will be popular in the
    0:48:12 online world too.
    0:48:17 Oh, I mean, the rippling implications of it are big too. So if you can own your
    0:48:22 avatar and you can own the avatar sword and shield and everything, other, like we said
    0:48:25 earlier, everything here is interoperable. It’s like an open API. So any developer can
    0:48:31 then build an expansion pack or a mod on the game. It turns like the modding community
    0:48:37 around various games into like a real economic system. And so then you could actually imagine
    0:48:43 like in the longer term, it’s almost like, think about every like rupee you’ve ever
    0:48:48 earned in a game or every bit of gold. Imagine if that was actually all unified among like
    0:48:52 almost every game, right? And there were like secondary markets between one game and another
    0:48:58 game and you could actually maybe bring your avatar from one game to another game. There’s
    0:49:05 just, you know, it’s almost like turning the universe of video games into Minecraft, right?
    0:49:12 Obviously, that’s a sort of far future, but I do think these open and interoperable
    0:49:14 low level systems do enable that type of thing.
    0:49:17 Also, the other cool thing is with, with the economic incentives, you suddenly, for example,
    0:49:22 you could imagine funding your game instead of going to Activision and asking them for
    0:49:27 money, you can fund your game by pre-selling some of the goods. You could have third party
    0:49:32 creators who earn a living– some person, you know, wherever, with a smartphone, is designing
    0:49:35 virtual goods and selling them and earning a living that way.
    0:49:40 Well, and one of the most successful categories on Kickstarter is kickstarting video games,
    0:49:44 because, you know, gamers are hardcore and they want to support independent developers.
    0:49:48 Now imagine if you could take that from, I’m just going to buy your game, I’m actually
    0:49:54 going to invest in your game, right? It’s way more powerful, and it aligns the interest
    0:49:59 between the gamers and the indie developers. So to me, yeah, that could be a very big trend.
    0:50:04 And we have seen some level of that, and I think one of the problems was, you know, when
    0:50:08 you can pre-sell these game items, you get this investor community rather than the gaming
    0:50:14 community interested. And so I do think it’s important to, you know, make sure that it’s
    0:50:19 not, it’s like, it’s people who actually want to play the game, right, that are sort
    0:50:24 of buying those game items. But I do think that that interoperability of avatars and
    0:50:27 items and levels and stuff like that is, is a big deal.
    0:50:30 Yeah. Right, awesome. Thanks, thanks a lot for being here.
    0:50:31 Yeah, thanks for having me, Chris.

    In a followup to one of our most popular podcast episodes which originally aired in April 2017 (https://a16z.com/2017/04/03/cryptocurrencies-protocols-appcoins/), a16z Crypto Fund General Partner Chris Dixon returns to talk with Olaf Carlson-Wee of Polychain Capital in a free-wheeling conversation about the seven major trends they see happening in blockchain computing now as we shift from basic protocol design to pragmatic product launches:

    • Improving developer productivity
    • Scaling out versus scaling up
    • On-chain governance
    • Proof of Stake Networks, and especially their resilience to attacks
    • 2017: year of fundraising, 2019: year of launches
    • Autonomous and re-mixable code
    • Killer apps: distributed finance and beyond

    This conversation was originally recorded for our YouTube channel: https://www.youtube.com/c/a16zvideos

    The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investor or prospective investor, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund, which should be read in their entirety.) Past performance is not indicative of future results. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Please see a16z.com/disclosures for additional important information.

  • E31: Entrepreneurialism is a Disease

    It’s crazy to think how much things can change in such a short period of time. In this episode I discuss the concept of compartmentalising and 6 positive ways to approach this. I also talk about how I deal with fear in life, how running a business is li…

  • 333: The Raw Truth About Blogging – How a “Dog Mom” Built her Online Business on the Side

    Kimberly Gauthier has been running KeepTheTailWagging.com since 2011, but things really started to take off when she niched down her focus to raw feeding for dogs.

    Raw feeding is basically the paleo diet for dogs — attempting to mimic their ancestral diet.

    Her blog had more than 2 million page views last year — almost entirely from organic search in Google, and she’s monetized the traffic with a few different income streams.

    Tune in to hear how Kimberly arrived at her niche, what drives content and marketing today, and how the business makes money — all on the side from her day job.

    Full Show Notes: The Raw Truth About Blogging – How a “Dog Mom” Built her Online Business on the Side

  • 376. The Data-Driven Guide to Sane Parenting

    Humans have been having kids forever, so why are modern parents so bewildered? The economist Emily Oster marshals the evidence on the most contentious topics — breastfeeding and sleep training, vaccines and screen time — and tells her fellow parents to calm the heck down.