AI transcript
0:00:07 This year, over 60 countries have headed or will head to the polls,
0:00:11 representing just about half of the global population.
0:00:15 But governance extends past the leaders of nation states.
0:00:17 In fact, as the revenues of some companies
0:00:20 dwarf the economies of some nations,
0:00:22 the way these entities govern themselves
0:00:26 is both more interesting and more important than ever.
0:00:27 Listen in to this episode,
0:00:29 originally published on our sister podcast,
0:00:31 “Web3 with A16Z,”
0:00:34 as they explore topics ranging from content moderation
0:00:36 to incentivizing participation
0:00:37 to new governance experiments
0:00:40 from DAOs to oversight boards.
0:00:41 I hope you enjoy.
0:00:50 Hello and welcome to “Web3 with A16Z.”
0:00:51 I’m Robert Hackett,
0:00:53 and today we have a special episode
0:00:55 about governance in many forms,
0:00:57 from nation states to corporate boards
0:01:00 to internet services and beyond.
0:01:02 Our special guests are Noah Feldman,
0:01:04 constitutional law professor at Harvard,
0:01:07 who also architected the Meta Oversight Board,
0:01:09 among many other things.
0:01:11 He is also the author of several books.
0:01:14 And our other special guest is Andy Hall,
0:01:16 professor of political science at Stanford,
0:01:20 who is an advisor to A16Z crypto research,
0:01:23 and who also coauthored several papers and posts
0:01:25 about “Web3” as a laboratory
0:01:29 for designing and testing new political systems,
0:01:32 including new work we’ll link to in the show notes.
0:01:35 Our hallway-style conversation covers technologies
0:01:36 and approaches to governance
0:01:40 from constitutions to crypto blockchains and DAOs.
0:01:43 As such, we also discuss content moderation
0:01:45 and community standards,
0:01:47 best practices for citizens’ assemblies,
0:01:49 courts versus legislatures,
0:01:52 and much more where governance comes up.
0:01:56 Throughout, we reference the history and evolution of democracy,
0:01:58 from ancient Greece to the present day,
0:02:00 as well as examples of governance
0:02:04 from big companies like Meta to startups like Anthropic.
0:02:07 As a reminder, none of the following should be taken
0:02:11 as tax, business, legal, or investment advice.
0:02:14 See A16Zcrypto.com/disclosures
0:02:16 for more important information,
0:02:18 including a link to a list of our investments.
0:02:24 So I want to start with a really broad question.
0:02:25 Maybe it’s too broad,
0:02:30 but what is the right model for internet governance?
0:02:33 We have all of these companies that host platforms
0:02:36 that people participate in,
0:02:37 that they build on top of,
0:02:41 that they connect and communicate on.
0:02:44 And so a question is, what is the right way
0:02:46 to govern those platforms?
0:02:48 Right now, the most popular ones at least
0:02:50 are owned by corporations,
0:02:53 and they sort of get to call the shots there.
0:02:55 Is that the way that it should be?
0:02:57 Idealistically, in like a world
0:03:00 that we all live in and work in and play in and…
0:03:04 – May I start with a little bit on your formulation?
0:03:07 Corporations don’t exist in the ether,
0:03:09 much as we sometimes like to fantasize
0:03:12 either positively or negatively that they do.
0:03:15 They’re products of law, usually Delaware law,
0:03:16 because of the weirdness
0:03:19 of how the American federal system works,
0:03:22 and they’re governed by a plethora of obligations
0:03:24 and regulations that are not only
0:03:26 in their place of incorporation,
0:03:28 but are also state law everywhere and federal law.
0:03:30 And a lot of the duties that they have,
0:03:31 they can’t get rid of,
0:03:33 even if they tried to get rid of them.
0:03:35 So if you make and sell a product,
0:03:36 you’re liable for the bad stuff
0:03:39 that happens to people as a result of that product,
0:03:41 even if you pretend you’re not,
0:03:43 and even if you don’t want to be.
0:03:44 That’s just a given example of a duty
0:03:45 you can’t get away from,
0:03:48 that means that all corporations
0:03:50 are intensely regulated all the time already,
0:03:53 even before we get to the specific regulations
0:03:54 that are relevant, say,
0:03:57 to a user-generated content social media platform.
0:03:59 So the first thing is,
0:04:03 you’re gonna be governed by state and national laws,
0:04:04 and in a more tenuous way,
0:04:07 you’re gonna be governed by those international laws
0:04:10 that your government bothers to apply to you.
0:04:11 – And, by the way,
0:04:12 there are plenty of governments
0:04:13 that are democratically constituted,
0:04:16 and some of these laws are presumably
0:04:19 coming from the will of the people.
0:04:23 – Yeah, and some of them are undemocratic and lousy,
0:04:23 but they still affect you
0:04:26 if you want to do business in a country.
0:04:29 So all of these things are in place.
0:04:30 That’s the first thing I would say.
0:04:34 So there already is a lot of governance
0:04:35 that comes from governments,
0:04:37 like the good old-fashioned kind of governance.
0:04:41 Then there’s a further and really important question
0:04:44 of whether there should be more governance
0:04:45 coming from governments,
0:04:46 and if so of what kind,
0:04:49 and that’s a very hot topic right now at the US Supreme Court,
0:04:52 which this year is deciding three different sets of issues
0:04:55 all related to what are the rights of platforms,
0:04:56 what are the rights of users,
0:04:57 what is the role of government
0:04:59 with respect to social media.
0:05:01 And until now, there’s been exactly zero
0:05:04 Supreme Court doctrine on social media companies,
0:05:05 and now we’re gonna have a whole bunch of it.
0:05:08 So this is a major transformational era.
0:05:09 Unfortunately, we don’t know the answers
0:05:10 to any of those things yet definitively.
0:05:13 We have sort of tea leaves from the oral arguments.
0:05:14 So that’s another really huge issue,
0:05:16 like how far should it go?
0:05:19 How much regulation should the states of Florida and Texas
0:05:21 be able to weigh in on content moderation?
0:05:23 – Yeah, so you’re referring to the NetChoice case
0:05:25 that the Supreme Court is reviewing,
0:05:28 and what’s at issue there is these states
0:05:30 would like these social media platforms
0:05:34 not to censor or moderate certain types of speech.
0:05:36 – I mean, that’s how the states would put it.
0:05:41 The companies would say that the states want to force them
0:05:44 to carry content that they may not wanna carry,
0:05:47 or wanna limit the ways that they can moderate the content
0:05:49 on what they call their platforms.
0:05:51 And so that’s, like any Supreme Court case,
0:05:52 you’ve got two sides,
0:05:55 and those are the two main perspectives.
0:05:57 And then beyond that,
0:05:59 there’s to me a super fascinating question.
0:06:02 I know Andy’s thought very deeply about this too,
0:06:06 of modes of regulation that platforms can choose,
0:06:09 first of all, for themselves and use internally,
0:06:12 then there’s potential collaborations
0:06:14 as in between different platforms
0:06:17 on collective self-regulation,
0:06:19 which is a form of governance.
0:06:22 And then there’s the weird kind of hybrid structure
0:06:25 that the Facebook oversight board represents,
0:06:26 and then in a different way,
0:06:29 the Anthropic Long-Term Benefit Trust represents.
0:06:32 And those are methods whereby a company
0:06:34 creates a kind of hybrid structure
0:06:36 where there’s some independent actors
0:06:39 whom the company agrees to be governed by
0:06:41 for some limited set of purposes.
0:06:43 And that’s another really interesting
0:06:47 and to my mind kind of innovative and cutting edge
0:06:48 way of doing this,
0:06:50 which like any innovative cutting edge thing
0:06:51 has its pros and its cons
0:06:54 and is also very much in an experimental phase right now.
0:06:56 – I think that we’re already getting into something
0:06:57 very deep and interesting,
0:06:59 which is like, where are the boundaries?
0:07:02 Where do we think the real world governance ends?
0:07:05 And when or why would any type of organization
0:07:07 want to go beyond it?
0:07:08 Seems like there’s at least a few reasons,
0:07:09 but Noah, I’d be curious what you think.
0:07:13 I mean, there’s obviously just a sense
0:07:14 as a company or as an organization
0:07:18 that you want to do more than government has told you to do.
0:07:20 That’s like one possible motivation.
0:07:21 Seems like there’s also something
0:07:24 about the global reach of some of these platforms.
0:07:26 We know that global coordination
0:07:28 around regulation is very challenging.
0:07:30 And so in the absence of that coordination,
0:07:32 you might feel the obligation
0:07:35 or the strategic need to do something
0:07:37 to coordinate that yourself.
0:07:38 And then there’s also something,
0:07:39 this hasn’t come up yet,
0:07:42 but I think a third explanation
0:07:44 is something to do with competition.
0:07:49 And I think going back to your original question, Robert,
0:07:52 in a world where we had perfect competition
0:07:53 in a very classical sense
0:07:56 between all different types of internet services,
0:07:58 I think a lot of the governance
0:07:59 beyond traditional regulation
0:08:03 would be done by users voting with their feet.
0:08:04 And the challenge we get into
0:08:07 is that the scale of many internet services
0:08:09 and the network effects
0:08:11 put them in this kind of interesting position
0:08:13 where they’re at a very, very large scale,
0:08:15 which makes it in some ways hard
0:08:16 for users to vote with their feet
0:08:19 because they like being where their friends are.
0:08:20 But at the same time,
0:08:22 there’s still a lot of economic competition across services.
0:08:23 It’s not obvious that these companies
0:08:25 are really monopolies
0:08:27 in the traditional antitrust sense.
0:08:29 And it’s put them in this weird zone
0:08:33 where there is a sense that users
0:08:37 are unhappy being locked in to any particular platform.
0:08:38 But at the same time,
0:08:42 traditional antitrust tools don’t really seem to apply.
0:08:44 And in that weird gray area,
0:08:47 it might make sense for large platforms
0:08:49 to play with additional modes
0:08:51 of giving their users the ability
0:08:54 to decide together how the platform’s gonna work
0:08:57 in a world where they can’t freely individually move
0:08:59 between platforms for the same service.
0:09:01 – And Andy, I don’t think you would disagree with this.
0:09:02 There’s also the advertisers
0:09:04 who in a perfect economic picture
0:09:06 would just go where the people were,
0:09:07 where the users were.
0:09:09 So in that sense, they seem less important.
0:09:11 But as we’ve seen in the,
0:09:13 what you might call the X files,
0:09:15 the advertisers have turned out to be major players
0:09:18 because they don’t wait to see what the users will do.
0:09:20 I mean, they also do that,
0:09:22 but they take preemptive steps
0:09:24 based on their perceptions of what might happen
0:09:26 and the reputational costs
0:09:27 and all of those sorts of things.
0:09:29 And so they’re big players too.
0:09:32 And they’re another reason why a company,
0:09:34 a platform might want to have
0:09:36 its own self-regulatory mechanisms.
0:09:37 And here I just wanna add,
0:09:39 in this super polarized world,
0:09:43 there are no neutral decisions that you could say,
0:09:44 oh, well, everybody should leave me alone
0:09:46 ’cause I made a neutral decision.
0:09:49 Whether you take content down or leave it up,
0:09:50 you’re non-neutral.
0:09:53 Whether you amplify it or don’t amplify it,
0:09:54 you’re non-neutral.
0:09:56 Depending on how much you amplify it,
0:09:56 you’re non-neutral.
0:09:57 Depending on your algorithm,
0:09:58 you’re non-neutral.
0:10:00 Everyone has sort of realized by this point,
0:10:01 certainly in this ecosystem,
0:10:03 that there’s nothing neutral.
0:10:06 And where a lot of people are mad at you
0:10:08 and there is no neutral position,
0:10:11 you might have an incentive to offload some of the decisions
0:10:14 just so people will be mad at someone else.
0:10:17 And also because users might not trust you as the platform
0:10:20 and then you might think it’s better
0:10:22 that people will trust somebody else
0:10:23 more than they trust me,
0:10:25 even though they may not trust them fully either.
0:10:26 – I totally agree.
0:10:29 I think it’s an exercise in trying to rebuild trust.
0:10:31 I think there’s something to notice here that’s really important,
0:10:35 which is that trust in tech companies is falling
0:10:36 in a lot of parts of the world,
0:10:37 not in all parts of the world,
0:10:38 but in many parts of the world.
0:10:40 But at the same time, faith in government
0:10:43 is falling in a lot of those same places.
0:10:46 And so there is no obvious actor
0:10:49 to make some of these very hard calls
0:10:53 in a neutral, procedurally fair way.
0:10:55 The key is that, I think,
0:10:57 as a user, you’d like to be in a position
0:10:59 where a platform decides on a piece of content
0:11:03 or what app is allowed or what transactions are allowed,
0:11:05 in a way such that even if you disagree
0:11:07 with the particular decision,
0:11:09 you still buy into the process
0:11:12 by which the decision was taken.
0:11:15 And today, I think there’s a lot of people
0:11:18 who question both a company’s ability
0:11:19 to build such a process,
0:11:23 but also a government’s ability to build such a process.
0:11:27 And in the absence of trust in either of those processes,
0:11:29 it makes sense from a business perspective
0:11:34 to try to improve your trust in society as a company
0:11:36 by finding a third option
0:11:39 for a fair way to make that decision.
0:11:40 – This sounds like a really good segue
0:11:42 into the oversight board,
0:11:46 because that is an attempt to satisfy these conditions
0:11:47 that you’re talking about,
0:11:50 to kind of offload some of this responsibility
0:11:54 onto a third party to enhance transparency and trust,
0:11:57 hopefully, and create a process
0:11:59 that is reasonable and rule-bound
0:12:02 for moderation and all sorts of decision-making
0:12:03 behind the scenes.
0:12:05 Well, let’s talk about the formation of that
0:12:08 and the needs that led to its creation.
0:12:11 – I think the core background situation
0:12:15 that then-Facebook, now Meta, was facing,
0:12:20 when Mark Zuckerberg decided to create the oversight board,
0:12:23 was the recognition
0:12:25 that there were some very, very hard
0:12:27 content moderation questions.
0:12:29 They were still pretty simple at the time.
0:12:30 They were sort of, “Do you leave up content?
0:12:31 “Do you take it down?”
0:12:35 They were not as fine-grained as they now have become,
0:12:38 but ones on which he might have a strong intuition
0:12:40 about what the right answer was,
0:12:42 but reasonable people could differ.
0:12:46 And I think the insight that he was able to see
0:12:51 is that actually there might be a right and wrong decision.
0:12:52 I’m not a relativist.
0:12:54 I don’t think that there are always two decisions.
0:12:57 No, there might be a right and wrong decision on this,
0:12:59 but reasonable people could differ
0:13:01 about what the right decision is.
0:13:03 And it probably didn’t matter that much
0:13:06 from the standpoint of platform governance
0:13:09 which of those two decisions were taken.
0:13:12 But if he made it, it was gonna take him a lot of time.
0:13:13 It was gonna take him a lot of effort.
0:13:15 He had no special expertise
0:13:17 in any of the underlying questions.
0:13:20 And people were gonna be really angry at him
0:13:23 and distrust him so that no matter what he decided,
0:13:26 it would decrease the legitimacy of Facebook
0:13:27 as it then was.
0:13:30 And so the idea that I proposed to him
0:13:33 and that I think resonated for him was,
0:13:36 “Maybe this decision doesn’t have to be made by you.”
0:13:37 In the first instance,
0:13:39 the company is going to make a decision.
0:13:39 They’re gonna have a policy
0:13:42 and they’re gonna try to follow that policy.
0:13:43 But if it’s controversial,
0:13:46 why not create a group of independent experts
0:13:50 and bring those questions, the hardest questions, to them?
0:13:52 Have them answer the question
0:13:54 in light of Facebook’s principles,
0:13:58 international principles, commonsense good judgment.
0:14:00 Have them explain their decision
0:14:04 and see if it worked.
0:14:06 And I would say that the secret sauce there
0:14:09 is the explanation of their decision.
0:14:11 The secret sauce is that if you’re answering
0:14:12 a really hard question where you admit
0:14:14 there are different possible answers,
0:14:17 you should explain why you’re doing it.
0:14:20 And that explanation, that practice of reason giving
0:14:24 has the capacity to confer a lot of legitimacy
0:14:25 when it comes to hard decisions.
0:14:27 ‘Cause I think it reassures people
0:14:30 that thoughtful people gave thought to this,
0:14:31 that they had some degree of expertise,
0:14:33 that they deployed that expertise,
0:14:35 and that they tried their best.
0:14:36 It’s not a magic solvent.
0:14:39 Some people will still be really mad about the decision,
0:14:41 but everyone will be able to say, I hope,
0:14:45 this is the closest thing to a fair decision-making process
0:14:46 that we could have come up with.
0:14:48 And my last point on this is,
0:14:51 obviously this idea of legitimacy and fairness
0:14:53 did not come out of the blue.
0:14:56 It’s borrowed from real-world institutions,
0:15:00 especially courts, that often are not elected.
0:15:02 Often their decisions aren’t appealable.
0:15:04 If it’s like a Supreme Court or a High Court,
0:15:05 they’re the last word.
0:15:06 They’re not always right.
0:15:08 They’re often controversial.
0:15:09 People get mad about them.
0:15:13 But the institutions on the whole
0:15:14 have a fair amount of legitimacy.
0:15:16 I mean, it’s a funny moment to be saying it now
0:15:18 when the Supreme Court has, you know,
0:15:21 done a lot of self-harm on the legitimacy front,
0:15:23 but the Supreme Court still has much greater legitimacy
0:15:25 in the eyes of Americans.
0:15:27 And that’s even after a range of other things
0:15:29 that the Supreme Court has done
0:15:31 where the majority doesn’t seem to care
0:15:32 that it’s undermined their legitimacy.
0:15:35 So the idea that you can get legitimacy
0:15:39 from reasoned decision-making was borrowed from that realm
0:15:42 and applied experimentally to the Oversight Board.
0:15:45 And I think on the whole it’s working,
0:15:47 not perfectly, but that it is working.
0:15:49 But that’s a topic for further discussion.
0:15:51 – You mentioned that this is a funny moment
0:15:54 to be talking about the legitimacy of the courts.
0:15:55 Now, of course, it’s an institution
0:15:57 that’s hundreds of years old,
0:16:01 but based on the way things have shaken out,
0:16:03 is there anything you would have done differently
0:16:08 in your conception of this sort of analog body
0:16:10 to a court system for–
0:16:12 – Well, actually, if I can answer in the reverse way,
0:16:16 there is something that I’m actually kind of proud of,
0:16:18 which differentiates us from the Supreme Court.
0:16:22 The Oversight Board is a pretty big body of people
0:16:27 who decide cases, sometimes all of them,
0:16:32 but sometimes in subsets, and it isn’t partisan.
0:16:35 So it’s not set up so that when you change the composition
0:16:38 by a vote or two, everything flips on you,
0:16:40 which is what happens at the US Supreme Court.
0:16:43 And then there are retirements and deaths,
0:16:47 so it can all change in a second.
0:16:49 That volatility, the partisanship, the responsiveness
0:16:52 some of the time to political changes
0:16:54 are all features that we intentionally
0:16:56 did not put in the Oversight Board
0:16:59 to lower the temperature around its decision-making process.
0:17:03 – Is this something that the government should take
0:17:04 into account?
0:17:06 Like if you were setting up a new government
0:17:10 and a new court system, would you make changes of this sort?
0:17:12 – Yes, absolutely.
0:17:14 I mean, it’s crazy that the Supreme Court,
0:17:17 I mean, you have to go back to 1787.
0:17:19 People died a lot younger.
0:17:21 Being on the Supreme Court was not that good a job.
0:17:23 A bunch of people who had that job would quit
0:17:25 and go back to private law practice
0:17:27 ’cause they couldn’t stand the job and it didn’t pay enough
0:17:29 and you had to get on a horse and ride around
0:17:32 to your cases, it just wasn’t that pleasant.
0:17:36 Now people live longer, they get into that job
0:17:39 and they never leave, a lot of them leave the court
0:17:40 only when they die, which is very,
0:17:43 I mean, it’s unhealthy, it’s not good.
0:17:46 And we also have a much more polarized politics
0:17:48 than they certainly did at the beginning,
0:17:50 although there have been periods of intense polarization
0:17:51 in US history.
0:17:53 So yeah, if you were designing it from scratch now,
0:17:56 you would have staggered, time-bound terms.
0:18:00 It wouldn’t be the luck of Justice Scalia died,
0:18:02 Barack Obama was president, but Mitch McConnell
0:18:04 was able to block the confirmation hearings.
0:18:06 I mean, that kind of crazy town,
0:18:09 hardball politics should not affect the composition
0:18:10 of the Supreme Court.
0:18:14 And since one of my jobs is that I watch the court closely,
0:18:16 part of my job shouldn’t be having to have
0:18:20 like a not-quite-expert but pretty good amateur’s knowledge
0:18:22 of like the health of the individual justices,
0:18:24 if they’re diagnosed with a certain cancer,
0:18:25 what are the probabilities that they’ll live
0:18:30 and how long, like that’s distasteful and also like absurd
0:18:32 that that should be part of our way we decide things.
0:18:34 We shouldn’t be thinking in those terms.
0:18:36 – I wanna make sure
0:18:38 we spend more time on how the board is doing,
0:18:40 ’cause I think there’s a lot of interesting
0:18:42 different aspects of that to unpack.
0:18:44 But one thing I just wanna raise, that Noah and I have talked
0:18:47 a lot about in the past and lots of other people
0:18:50 have thought about, that I think is important, is
0:18:53 one important difference between, say, the Supreme Court
0:18:57 or US courts or courts in general and the oversight board
0:19:01 is sort of whose laws they’re interpreting.
0:19:06 So Noah referenced this earlier that they’re basically,
0:19:08 they’re considering cases on the basis
0:19:11 of Facebook’s principles or Meta’s principles.
0:19:14 And I think that’s the logical way to start.
0:19:16 But one thing people obviously then point out is,
0:19:21 well, the Supreme Court isn’t referencing
0:19:23 single corporation’s rules.
0:19:27 It’s referencing democratically written rules
0:19:29 that the legislature writes.
0:19:33 Nevertheless, it’s, I think, an interesting thing
0:19:34 to consider in the long run.
0:19:37 Can you get a sufficient amount of legitimacy
0:19:41 as Noah called it from a court online
0:19:44 that’s interpreting the rules of the company
0:19:46 when part of the concern is whether the company
0:19:49 is setting the rules in the right place or not.
0:19:52 And so one thing people have talked quite a bit about
0:19:54 but I don’t think we’ve cracked
0:19:58 is how would you democratize the legislative process
0:20:00 as well so that something like the oversight board
0:20:03 would be making decisions with respect
0:20:06 to democratically written policies.
0:20:07 – Yeah, this is super interesting.
0:20:10 And let me add one more twist
0:20:12 to the way things have been playing out.
0:20:14 ‘Cause this is something that has not been broadly discussed
0:20:17 or broadly covered except among a very narrow group
0:20:21 of specialists, mostly academics and NGOs
0:20:22 who follow the oversight board.
0:20:24 The oversight board has been really worried
0:20:27 about the thought that they’re following Meta’s rules.
0:20:29 They’re more ambitious than that.
0:20:31 They wanna follow something more democratic
0:20:33 but they can’t use the laws of any particular country
0:20:36 ’cause there isn’t one particular country that’s binding.
0:20:37 So what they’ve been doing is they’re claiming
0:20:40 to apply international law,
0:20:41 the international law of free speech
0:20:42 which is not anywhere near as democratic
0:20:44 as the laws of a democratic country
0:20:47 but it is sort of second order democratic
0:20:51 because these are treaties that were enacted by countries
0:20:54 some of which, indeed many of which, were democratic
0:20:58 and which all countries recognize as legally binding.
0:21:02 Now, the upside of that is that it sounds
0:21:03 a lot more legitimate than we’re deciding
0:21:06 these important questions based on Meta’s rules
0:21:08 and it’s also a little more democratic.
0:21:10 The downside of it is international law actually
0:21:12 isn’t that well set up to answer these questions
0:21:15 ’cause it’s not designed to govern what a platform should do.
0:21:19 It’s designed to govern what countries should do.
0:21:20 That’s why it’s international.
0:21:22 It’s the law of stuff between countries.
0:21:25 So the oversight board has had to be really creative.
0:21:28 Relying on some work by scholars,
0:21:31 they’ve basically created a whole new body
0:21:34 of what they call international law
0:21:37 that they say binds Meta and should in principle
0:21:39 bind other platforms.
0:21:41 And they’re out there making this law
0:21:43 and there’s no one to tell them no.
0:21:46 So I just supervised a student who wrote a dissertation
0:21:48 saying this is unjustified.
0:21:51 But my response to that was always that might be true
0:21:53 but who’s gonna stop them?
0:21:55 And they’re trying to do that.
0:21:57 So that’s a bid that is out there
0:21:58 and it’s interesting to watch.
0:22:00 And as I say, it hasn’t been very well covered.
0:22:02 But I also, I do wanna get back to the point
0:22:04 that Andy’s talking about which is
0:22:06 what are the experimental creative things
0:22:08 that a big platform like Meta could use
0:22:11 to bring democratic input to bear
0:22:13 on some of the hardest decisions that it makes?
0:22:16 – Well, it might be good to start with the history
0:22:19 which is that this is something apparently
0:22:21 Mark Zuckerberg has thought about for a long time
0:22:23 because back in the 2000s
0:22:26 when the platform was still pretty young,
0:22:30 they had a sort of a disagreement among users
0:22:33 about what the new community standards should be.
0:22:38 And Mark held a global referendum on the platform
0:22:40 and I mean, it’s super interesting.
0:22:41 You can actually, on the internet archive,
0:22:44 you can go and see the video he posted
0:22:47 encouraging all the users on Facebook
0:22:49 there were about 300 million at the time to vote.
0:22:53 And what they learned through that process was actually
0:22:55 it’s really hard to get people to vote.
0:22:58 So out of the 300 million users,
0:23:01 I think something like 60,000 of them voted.
0:23:04 And the reason for that is probably because
0:23:08 most people are not logging on to any part of the internet
0:23:10 to do the hard work of governance.
0:23:12 They’re logging on to have fun
0:23:14 or see their friends or whatever.
0:23:17 And in addition, the decisions being considered
0:23:18 were pretty abstruse.
0:23:20 Like you had to read these like hundred page documents
0:23:21 and stuff.
0:23:24 So I think it was a really innovative idea
0:23:27 that suggested early on this idea
0:23:31 that there’s a way to potentially make decisions
0:23:33 about a platform democratically,
0:23:34 but it turned out to be challenging
0:23:36 to make work in practice.
0:23:38 And so now if we fast forward to today,
0:23:43 yeah, I think lots of people are interested in ways to set
0:23:46 at least some kind of broad, simple policies
0:23:48 where the values that users hold are relevant
0:23:53 to the decision through some kind of democratic process.
0:23:54 No one has really figured out
0:23:57 how to solve this core problem of participation
0:24:00 as well as informed participation,
0:24:04 as well as if it’s a platform that’s global in scope,
0:24:06 how do you actually get people in different countries
0:24:09 who speak different languages to even work together
0:24:11 to figure it out?
0:24:15 What if they have radically contrasting values or goals?
0:24:17 There’s a lot of really hard questions,
0:24:18 but I think there’s, yeah,
0:24:21 there’s at least two veins of experiments
0:24:23 worth going deep on.
0:24:25 One, which I think Noah-
0:24:27 – By the way, voter participation rates are a problem,
0:24:30 not just in online voting, but in, you know,
0:24:31 IRL voting too.
0:24:32 – That’s true, that’s true.
0:24:33 Although the participation-
0:24:35 – It’s a giant thing and you could make it mandatory, you know,
0:24:38 you could make it so you open Instagram
0:24:40 and you can’t use Instagram unless you vote on these issues,
0:24:43 but then you’re not gonna get highly informed voting, right?
0:24:46 I mean, these ones that he’s describing,
0:24:47 just to make them concrete,
0:24:49 ’cause I think they’re two kind of good examples.
0:24:50 One is the problem that,
0:24:52 I don’t know if people who are listening
0:24:53 all have heard of this or not,
0:24:54 because it may be a little dated,
0:24:57 but what they call the Boaty McBoatface problem.
0:25:00 – Oh yeah, this was Reddit, right?
0:25:01 The, didn’t Reddit-
0:25:02 – It was the British government.
0:25:03 The British government decided,
0:25:06 they were gonna use, you know,
0:25:08 this new amazing thing, the internet,
0:25:12 to gather votes on what they should name a new battleship.
0:25:17 And, you know, the winning vote getter was Boaty McBoatface.
0:25:18 – Which I think we can all agree
0:25:20 is a fantastic name for a battleship.
0:25:21 – I don’t see the problem.
0:25:23 – Yeah, Her Majesty’s Ship,
0:25:24 I guess it was then Her Majesty’s Ship,
0:25:25 Boaty McBoatface,
0:25:28 somehow it didn’t land for the Royal Navy.
0:25:30 And then the other example that I like to use,
0:25:31 and Andy mentioned this,
0:25:34 is let’s imagine you’re setting a nudity policy.
0:25:36 Well, there’s one policy
0:25:39 that works in the South of France on a beach,
0:25:42 which is more permissive than what we have in the US.
0:25:44 And then there’s Saudi Arabia,
0:25:47 which is a lot less permissive than we have in the US.
0:25:49 And it’s not obvious that either of those
0:25:52 should control the entirety of the platform.
0:25:54 It’s not obvious that our model
0:25:55 should control the platform either.
0:25:59 It’s mostly a default that, you know,
0:26:01 a lot of these platforms are US-based companies,
0:26:04 and so US-based standards end up controlling.
0:26:04 Americans think, oh,
0:26:06 those are the reasonable standards.
0:26:08 But of course, there’s no inherent reasonableness answer
0:26:10 to the question of like how much skin you should show.
0:26:12 That’s a classic example of something
0:26:15 that different cultures are equally confident
0:26:16 that their way of doing it
0:26:18 is the only right way to do it.
0:26:20 So that’s a genuine challenge.
0:26:22 And then that leads to thinking,
0:26:23 well, maybe we should have regions
0:26:26 with different rules and different regions.
0:26:28 And it turns out it’s harder to do that
0:26:31 on a global platform than you’d think.
0:26:33 And most of the global platforms
0:26:35 have tried as much as they can to avoid that
0:26:37 because they don’t want this kind of
0:26:40 extremely balkanized platform
0:26:42 where it all depends on where you logged in from.
0:26:44 And, you know, if you have a VPN,
0:26:45 you can get different standards.
0:26:48 And you can imagine what a mess it can quickly become.
0:26:50 So those are part of the challenges.
0:26:54 And then the last part of the challenge is
0:26:56 all of our examples of democracy.
0:26:58 Andy, you can correct me if this is wrong
0:26:59 because you’re the political scientist,
0:27:01 but at least the ones that come to my mind,
0:27:04 all of our examples of democratic governance,
0:27:09 assume a denominator that is the relevant group of people.
0:27:10 It’s the people who live in your town
0:27:14 are gonna vote on town policy or your state, your country.
0:27:17 We don’t really have true democracy at the global level.
0:27:21 And so that raises the question of who’s the denominator?
0:27:25 Who’s the relevant community to vote on these issues?
0:27:26 And maybe it’s the users,
0:27:29 but a lot of issues affect non-users.
0:27:33 And so, you know, is it everybody, including non-users?
0:27:34 Is it a subset of the users?
0:27:37 That problem is, I think, the very first problem
0:27:38 to grapple with.
0:27:41 And in a lot of ways, it’s the most challenging.
0:27:43 – Let’s unpack the Boaty McBoatface thing for a second
0:27:45 ’cause I think it’s really important.
0:27:49 It’s a great example of two or really three
0:27:52 deeply related problems in the use of voting
0:27:54 to make collective decisions.
0:27:57 The first is just the lack of participation.
0:28:00 So very few people actually voted
0:28:02 on the Boaty McBoatface decision
0:28:04 because why would most people pay attention
0:28:06 or even know about it?
0:28:09 Second, which is closely related to the lack of participation
0:28:13 is that who then are the weirdos who choose to vote
0:28:15 when not very many other people are voting?
0:28:18 It’s people who have very extreme weird preferences
0:28:21 like the kind of people who think it would be hilarious
0:28:23 if there was a boat named Boaty McBoatface.
0:28:26 And so those two things go together quite tightly.
0:28:29 Low participation and then selection
0:28:33 for unusual, unrepresentative, extreme views
0:28:35 or preferences or trolling.
0:28:38 So I think that’s like an important thing to understand.
0:28:42 And then the third, which is also deeply related,
0:28:46 is just the general problem
0:28:49 that you’re asking people to vote on decisions
0:28:52 and they have no skin in the game.
0:28:55 And that, again, lowers the incentive to participate
0:28:57 and lowers the incentive for people
0:28:59 with representative views to participate.
0:28:59 And I think–
0:29:01 – What if you had to serve on that ship?
0:29:03 You might care a little bit more.
0:29:06 – Well, that kind of gets to Noah’s denominator point.
0:29:08 I think there’s a very broad challenge.
0:29:11 It’s much broader than just democracy,
0:29:13 which is how do we govern things
0:29:18 that are not economic in nature in ways that are smart
0:29:20 when the people we’re asking to do the governance
0:29:22 don’t have skin in the game.
0:29:26 And that was the problem with OpenAI’s governance.
0:29:28 It’s been a challenge with university governance.
0:29:31 The trustees in some sense don’t have skin in the game.
0:29:33 – You measure the skin by how much money they put in.
0:29:35 They would say, I mean,
0:29:36 not that I’m always sympathetic to them,
0:29:37 but the donors– – Good point.
0:29:39 – We’re the only ones with skin in the game.
0:29:40 We’re donating the money.
0:29:42 All you guys are doing is taking the salary.
0:29:44 – That’s a good point, that’s a good point.
0:29:45 – That’s what they say.
0:29:46 I mean, I’m not saying I’m on board with it.
0:29:47 Anyway, go on, yeah.
0:29:50 And when we look at all these voting problems,
0:29:53 if you’re asking people to vote on things
0:29:55 that don’t clearly affect them,
0:29:59 that’s a reason you might not get very informed participation.
0:30:02 Furthermore, when you ask large groups of people to vote,
0:30:05 you have this problem we call the paradox of voting,
0:30:08 which is that even if you do care a lot,
0:30:11 even if you do feel like you have skin in the game,
0:30:13 if lots of other people are voting,
0:30:15 then you know your vote doesn’t matter.
0:30:18 It’s not gonna swing the outcome.
0:30:22 And so you could care infinitely much about the decision,
0:30:23 and yet your incentive to actually pay attention
0:30:25 to it is very, very small.
0:30:28 So we put those things together and we ask,
0:30:31 okay, how are you gonna do something democratic
0:30:35 to make a big decision on a big online platform?
0:30:37 I think you have to start with this fundamental set
0:30:40 of problems that are, you wanna get people to participate,
0:30:42 you want them to pay attention,
0:30:44 you want them to feel like it matters,
0:30:46 and how on earth do you do that?
0:30:49 And I think there’s been several directions
0:30:51 of experiments that are pretty fascinating.
0:30:53 One, which Noah alluded to earlier,
0:30:56 is what Anthropic as well as Meta and others
0:30:59 have been playing with where you don’t use voting,
0:31:02 you use these things called citizens assemblies.
0:31:04 And then the other is what’s going on in crypto,
0:31:08 where you’re experimenting with different types of voting
0:31:10 that try to draw in only the people
0:31:13 who have some sense of skin in the game
0:31:14 through the tokens that they hold.
0:31:17 So those are I think two sets of experiments
0:31:18 we can learn a lot from.
0:31:21 – And I would just add one more framing thing here.
0:31:23 Most forms of democratic government
0:31:25 in the history of the world,
0:31:27 have tried to address these questions by saying,
0:31:30 we’re not gonna have direct governance
0:31:33 where every single person has to vote on every issue.
0:31:35 ‘Cause that’s also like a time consuming thing.
0:31:37 And so one way to solve both the,
0:31:40 how much time do you put in and the,
0:31:42 how much do you care and the skin in the game
0:31:45 is to have representative decision-making.
0:31:46 This is the move from like ancient Athens
0:31:49 where everyone can show up and vote every,
0:31:52 to be clear, every free man can show up as a citizen
0:31:56 and vote in the assembly to a system where you elect people
0:31:58 and your representatives are professionals
0:32:01 or quasi-professionals and they do the decision-making
0:32:04 and the voting, they have a skin in the game.
0:32:07 They have the job of caring about getting reelected
0:32:09 based on whatever incentives they have.
0:32:11 And that gives them the incentive to try to guess
0:32:13 or figure out what you’re thinking.
0:32:16 And one of the really interesting things about the internet,
0:32:18 and I think it’s important background to when we dive
0:32:20 into both crypto and citizen assemblies,
0:32:22 is that from its early days,
0:32:24 the internet has been broadly speaking,
0:32:26 the ideology of the internet
0:32:27 or the ideologies of the internet
0:32:30 have been really skeptical
0:32:32 of the idea of elected representatives.
0:32:34 There’s this impulse.
0:32:36 And it’s really interesting to explore why.
0:32:38 Is it because of hacker culture?
0:32:39 Is it because of coder culture?
0:32:41 Is it the personality type of the people
0:32:43 who first got really good at using the internet
0:32:44 before everyone used it?
0:32:46 I mean, there’s a lot of possible explanations.
0:32:48 But there’s been a deep impulse
0:32:50 to get around the idea of elected representatives.
0:32:54 And both of these experiments, the citizen assemblies
0:32:57 and the crypto, or DAO, organizations
0:33:01 are attempts to like say, we are gonna reinvent the wheel.
0:33:03 Like we’re not gonna do the thing
0:33:05 that all democracies have done,
0:33:07 namely rely on elected representatives.
0:33:09 We’re gonna do something different and better.
0:33:11 And that’s kind of appealing
0:33:13 because especially if you have some skepticism
0:33:14 in our elected representatives,
0:33:17 it’s a good time to try to reinvent what they’re doing.
0:33:19 But there’s also a reason to be modest
0:33:22 when all of the countries in the world
0:33:25 that use a certain technology, namely democracy,
0:33:27 have converged to a first approximation
0:33:29 on one solution to this problem,
0:33:31 namely elected representatives.
0:33:32 And you’re like, nah,
0:33:33 they don’t know what they’re talking about.
0:33:34 We can do better.
0:33:35 Maybe you can.
0:33:38 It’s definitely worth trying.
0:33:41 – So you wanna have like some degree of modesty
0:33:43 about the probabilities of coming up with something
0:33:46 that in 2,500 years of thinking about this,
0:33:49 no one has yet really managed to solve in any other way.
0:33:51 – I wanna build on that quickly
0:33:52 ’cause I think it’s really important.
0:33:55 And I completely agree with Noah 1,000%.
0:33:59 There’s something about technologists and people online
0:34:02 that makes them really crave direct democracy.
0:34:04 And one of the key claims they make
0:34:08 is that technology is going to allow for direct democracy,
0:34:11 that it didn’t work in the physical world
0:34:12 because it was too burdensome.
0:34:13 – This is gonna be my question.
0:34:16 Like does the internet, is it a step change?
0:34:17 Like have we entered a new era
0:34:21 where based on the tech that’s available now,
0:34:22 this is conceivable.
0:34:25 – There are no certainties in the social sciences,
0:34:28 but this is as close to certainty as I’m willing to go:
0:34:31 the technology is gonna make no difference
0:34:32 to this problem.
0:34:35 And this is just the way it is.
0:34:36 – It goes back to the paradox of voting
0:34:38 I was talking about.
0:34:42 It’s very burdensome to be asked to understand
0:34:45 and become informed on every possible decision
0:34:47 that a group is going to make.
0:34:50 And you might think, and this is I think the fallacy
0:34:53 that a lot of people in tech have committed,
0:34:58 you might think that having more quote unquote democracy
0:35:01 by putting more votes to the people
0:35:05 is gonna get you a more empowered user base,
0:35:07 but the opposite is in some sense true.
0:35:09 The more things you ask them to do,
0:35:11 the more burdensome it is for them to do it,
0:35:14 the less that they’ll choose to do it.
0:35:17 And that then creates a really important
0:35:19 and dangerous vacuum because now you have
0:35:23 very few people voting and then interest groups
0:35:27 can come in and capture the decision making process
0:35:28 at a relatively low cost
0:35:31 because there’s not that many other votes out there
0:35:32 for them to compete with.
0:35:36 And this is I think fundamentally why yeah,
0:35:38 no successful society on earth has stuck
0:35:41 with direct democracy for very long.
0:35:43 And it’s why technology is not gonna solve the problem
0:35:45 because it’s actually not a problem
0:35:48 about voting in person being too difficult
0:35:49 or anything like that.
0:35:53 It’s a problem of information acquisition and analysis.
0:35:55 And I’ll just note on that point,
0:35:58 people have also claimed that we’ll be able
0:36:01 to provide voters with all the information
0:36:04 they need to make every decision through some kind of app.
0:36:06 I’ve had a lot of undergrads in my office pitching me
0:36:09 on the next app that’s gonna inform us all about politics
0:36:12 so we can vote on everything ourselves.
0:36:13 And the problem is that information
0:36:15 or that data is not enough.
0:36:16 You actually need to analyze it
0:36:18 and then decide what to do.
0:36:20 And that’s actually quite difficult.
0:36:25 And so being buried in data on what your community
0:36:29 is voting on is just not enough to help you do this.
0:36:32 And so I think it’s a completely fundamental problem
0:36:35 that goes way beyond technology that has to be solved
0:36:38 through other models besides direct democracy.
0:36:43 And I’ll just note almost all of the big voting DAOs
0:36:47 in crypto now have some form of representative democracy
0:36:48 as a result of this.
0:36:51 If you’re listening to this and you think,
0:36:53 no, that, you know, Noah and Andy are wrong,
0:36:54 you know, like technology–
0:36:55 – Yeah, you guys are just elitists.
0:36:56 Come on, let the mob rule.
0:36:57 – Yeah, exactly.
0:37:00 What I would say is if you’re an aspiring technologist,
0:37:01 ask yourself, what’s your vision
0:37:03 for the company you’re gonna found?
0:37:06 And in 99.9% of cases, people are like,
0:37:07 oh, I wanna found the company
0:37:08 and then I wanna have founder shares
0:37:10 and I wanna control the decision-making.
0:37:12 And then you say, well, why?
0:37:13 And they say, well, because otherwise,
0:37:16 like the VCs will first get involved,
0:37:17 then they’ll be shareholders.
0:37:18 They’re gonna dilute my votes
0:37:19 and they’re gonna take the company
0:37:20 and they’re gonna do all kinds of stuff.
0:37:22 I don’t wanna do with it.
0:37:24 And I’m the one who knows, ’cause it’s my company,
0:37:26 I spend the most time, I don’t, I care about it.
0:37:28 That’s exactly the problem that Andy’s describing.
0:37:31 So I always find it amazing when, you know,
0:37:33 those students come in and say that ’cause I say to them,
0:37:34 oh, are you imagining a company
0:37:36 where you will have no special input
0:37:37 into how the company runs?
0:37:39 You know, and every decision will be made collectively
0:37:40 and they’re like, are you crazy?
0:37:41 Of course not.
0:37:42 And I’m like, well, then there’s a contradiction
0:37:43 in what you’re proposing.
0:37:46 So, take the, you know, intuition
0:37:47 that technology will change this.
0:37:49 You can easily fix that intuition
0:37:50 by just asking yourself,
0:37:52 when you get rich with your own company,
0:37:54 do you want to be in charge of it?
0:37:55 And in my experience,
0:37:58 almost everybody thinks the answer to that is yes.
0:38:00 Although there are some tweaks, right?
0:38:03 I mean, so another thing that Anthropic did,
0:38:04 and I know about this
0:38:06 ’cause I was involved in advising them on it,
0:38:09 is they’ve created a long-term benefit trust,
0:38:13 which is a trust that appoints at first just a few,
0:38:15 but eventually will appoint a majority of members
0:38:18 of the board of directors of Anthropic.
0:38:20 And the people on the trust
0:38:22 who are going to appoint those board members
0:38:25 are not themselves shareholders.
0:38:27 So what Anthropic is doing there is,
0:38:28 it’s not the full open AI thing
0:38:32 where the parent board was completely made up of people
0:38:33 who had no stake in the company,
0:38:35 which blew up as we know.
0:38:38 This is a more modest in-between version
0:38:39 where there will still be shareholders
0:38:41 on the board of directors.
0:38:44 But the idea is to have some external check.
0:38:45 So it’s a little bit better.
0:38:46 But that’s very rare.
0:38:48 – So let’s talk about the example of Anthropic.
0:38:50 You’re saying that in this case,
0:38:52 there are actual shareholders
0:38:53 that are able to serve on the board
0:38:58 and that this is different from Meta’s oversight board.
0:39:01 And it’s also different from open AI
0:39:03 where people did not have a stake in the company.
0:39:07 And so maybe there will be a more representative
0:39:09 collection of views in there
0:39:11 for keeping the concern going
0:39:15 and also mitigating risks that could arise.
0:39:17 – Yeah, it’s designed to avoid problems on both sides.
0:39:19 So take the open AI problem.
0:39:21 You wanna have some, you know, break glass measure.
0:39:23 And so the break glass measure
0:39:26 was they created this nonprofit entity.
0:39:30 And that was the overarching body
0:39:33 that the board of directors belonged to.
0:39:34 And then underneath that board of directors
0:39:36 was the for-profit entity
0:39:39 that we think of as OpenAI,
0:39:41 the actual company that does stuff.
0:39:44 If the people on the board of directors
0:39:47 of the nonprofit who had no financial stake at all
0:39:49 believed the company was going the wrong way,
0:39:50 they could break the glass
0:39:53 and fire the management of the company.
0:39:54 And they did that.
0:39:57 They believed that things were going terribly awry
0:39:58 that this was bad for the world.
0:39:59 And they broke the glass
0:40:01 and they fired the directors of the company,
0:40:02 the management of the company.
0:40:06 – Until it backfired and the glass went back in their face.
0:40:06 – Exactly.
0:40:08 They didn’t realize that what would then happen
0:40:11 would be that Sam Altman would then say,
0:40:12 well, I’ll just go to Microsoft
0:40:15 and I’ll take every single employee in the company with me.
0:40:17 And then the employees all went online
0:40:19 and signed up that they would go with him,
0:40:20 which is maybe not so shocking
0:40:22 because of course you’re gonna agree to jump ship.
0:40:24 And so then it backfired.
0:40:27 And then the members of that board of directors had to resign.
0:40:30 So what you need in the real world
0:40:32 is something that doesn’t go that far
0:40:37 where there are people who are not financially incentivized
0:40:38 who serve on the board.
0:40:40 But there are also people on the board
0:40:42 who understand the real world
0:40:44 and who do have financial incentives
0:40:46 and they develop a relationship with each other
0:40:48 so that it is possible for the people
0:40:50 who are worried about a problem
0:40:51 and want to break the glass
0:40:54 to do so in a thoughtful and responsible way
0:40:57 where it won’t blow up in their faces.
0:41:01 So that’s what the long-term benefit trust is trying to do.
0:41:06 And it does give some real reasonable responsibility
0:41:08 to people who don’t work for the company.
0:41:12 I mean, in that sense, it’s a kind of cousin of the oversight board.
0:41:13 But it’s different from the oversight board
0:41:15 ’cause those folks don’t have day-to-day
0:41:17 decision-making responsibility.
0:41:18 They’re there at the level of oversight
0:41:21 and break glass at an emergency.
0:41:24 Whereas the oversight board is meant to be an everyday,
0:41:26 there are hard problems that Meta is dealing with
0:41:28 and the oversight board weighs in on that.
0:41:30 So I would call them cousins.
0:41:32 They have some conceptual similarities,
0:41:36 but they’re not siblings and they’re definitely not twins.
0:41:38 Hey, it’s Steph.
0:41:42 Look, people will endlessly debate and record podcasts
0:41:45 on the most important ingredients to success.
0:41:47 Is it technical talent?
0:41:48 Is it timing?
0:41:49 How about hard work?
0:41:50 Luck?
0:41:52 Well, in my opinion,
0:41:54 one of the most critical ingredients
0:41:56 is the power of effective communication.
0:42:00 And that’s why I recommend the Think Fast, Talk Smart podcast
0:42:03 from the Stanford Graduate School of Business.
0:42:05 Host and Stanford lecturer, Matt Abrahams,
0:42:07 sits down with the experts
0:42:09 to discuss the best tips to help listeners
0:42:11 unlock their verbal intelligence,
0:42:13 whether it’s to excel in negotiations,
0:42:16 make a wedding toast or work presentation,
0:42:20 or even communicate better with AI chatbots.
0:42:22 And if you want to sneak peek,
0:42:27 Matt’s also got a great TED talk with over 1.5 million views.
0:42:29 So what are you waiting for?
0:42:32 Check out Think Fast, Talk Smart every Tuesday
0:42:33 wherever you get your podcasts,
0:42:36 whether that’s Apple Podcasts, Spotify, iHeart,
0:42:37 or even on YouTube.
0:42:44 – It seems a little bit interesting
0:42:46 that you have a certain set of people
0:42:48 who have this opportunity working side-by-side
0:42:50 with people who are, I assume,
0:42:51 just getting a regular salary.
0:42:54 And then how you get those people to really care about,
0:42:56 you know, the decisions that are being made.
0:42:58 – They have to care based on their reputation.
0:43:00 And this was relevant at the Oversight Board too.
0:43:02 So how do you get the Oversight Board members
0:43:05 actually to care that they do a good job?
0:43:06 – It’s not so simple,
0:43:09 but the short answer is it’s nobody’s full-time job.
0:43:13 And the jobs that they have are as activists
0:43:17 or as scholars or as people who care about free speech.
0:43:19 And so the idea was that they will care a lot
0:43:21 about their own reputations.
0:43:24 And that will give them the incentive to do a good job.
0:43:25 It’s not perfect,
0:43:27 but it was the best we could come up with.
0:43:29 If it was their full-time job,
0:43:32 their incentives might be too closely allied,
0:43:35 you know, with the power of their institution.
0:43:37 And they still probably want to do a good job,
0:43:39 but it wouldn’t be that they had a reputation
0:43:41 to preserve elsewhere.
0:43:44 So that was a kind of complicated compromise decision.
0:43:46 ‘Cause it’s not, it turns out it’s a hard problem.
0:43:49 You know, how do you get people to care about their jobs?
0:43:51 Usually it’s they get paid more if they do it well
0:43:53 and they get fired if they do it badly.
0:43:55 And on an independent body,
0:43:57 you can’t really do either of those things.
0:43:58 You know, you can’t like reward them
0:44:00 for making good decisions.
0:44:01 And if you can punish them by firing them
0:44:03 for making the wrong decisions,
0:44:05 they’re not independent.
0:44:07 So then you need something somewhere in between.
0:44:11 And reputation, I think, is the most powerful motivator.
0:44:12 And there I would just say, you know,
0:44:15 you might think, oh, no one’s motivated by reputation.
0:44:16 But, well, it turns out
0:44:18 there’s a whole bunch of people in the world,
0:44:19 a lot of them academics,
0:44:22 who take the jobs that they have
0:44:25 rather than some job that would pay a lot better
0:44:26 ’cause they care about the thing they’re doing.
0:44:28 They like doing it.
0:44:31 And then so you ask, like, okay, now you have tenure.
0:44:32 I remember, you know, getting tenure and thinking,
0:44:34 oh, this is so great.
0:44:34 And then someone said to me,
0:44:37 this is so great, you never have to do a day’s work again.
0:44:39 And I thought to myself, like,
0:44:40 I never thought of it that way.
0:44:41 Like I’m so much of a neurotic,
0:44:42 I’m gonna keep on working hard.
0:44:44 But rationally, if I were a real rational actor,
0:44:46 maybe I would stop.
0:44:47 And the main reason not to is,
0:44:49 then my reputation would be in tatters.
0:44:50 And people would say,
0:44:52 as they do say about some academics,
0:44:53 oh, there goes Feldman.
0:44:54 He’s one of those people who,
0:44:57 the day you got tenure, he never did anything ever again.
0:44:58 And, you know, that hurts.
0:44:59 So, and I care about it, not hurting.
0:45:01 – It’s impossible to picture, you know,
0:45:04 stopping working after tenure.
0:45:04 – Such a layabout.
0:45:08 – So, Anthropic is also one of the examples
0:45:10 of these citizens’ assemblies.
0:45:11 So it might be–
0:45:12 – Yeah, describe the citizens’ assemblies anyway.
0:45:14 Let’s dive into that.
0:45:16 – I think the general idea here
0:45:20 is you have cases where you’re either worried
0:45:24 that too many people won’t participate if you hold a vote
0:45:27 or that they won’t actually know enough about the issue
0:45:30 prior to having some kind of deliberation together
0:45:35 and/or briefings on the issue to make an informed decision.
0:45:37 And so instead of holding a vote,
0:45:42 you try to randomly sample a representative group
0:45:45 of your citizens, in the case of a citizens’ assembly,
0:45:48 or users, in the case of one of these online,
0:45:49 I’ll call them user assemblies.
0:45:52 Other people call them different things.
0:45:53 And then you bring those people together.
0:45:56 The idea is because you’ve randomly sampled,
0:46:00 they’ll be representative of the user base of your platform.
0:46:03 You get them together, you have them debate the issue,
0:46:05 you have them hear briefings from experts
0:46:07 so that they’re informed about the issue,
0:46:09 and then you have them make a decision
0:46:12 through voting or some other collective process
0:46:16 on whatever the difficult issue is.
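A minimal sketch of the sampling step Andy describes, in Python. The user fields, strata, and sizes here are hypothetical, and real assemblies typically weight and recruit strata far more carefully:

```python
import random

def draw_assembly(users, size, strata_key=None, seed=42):
    """Draw a panel of users; optionally stratify so the panel mirrors the user base."""
    rng = random.Random(seed)  # fixed seed keeps the draw auditable
    if strata_key is None:
        return rng.sample(users, size)
    # Stratified: sample from each stratum in proportion to its share.
    strata = {}
    for u in users:
        strata.setdefault(u[strata_key], []).append(u)
    assembly = []
    for group in strata.values():
        # proportional rounding may leave the panel slightly off the target size
        k = max(1, round(size * len(group) / len(users)))
        assembly.extend(rng.sample(group, min(k, len(group))))
    return assembly

users = [{"id": i, "region": region}
         for i, region in enumerate(["NA", "EU", "APAC"] * 1000)]
panel = draw_assembly(users, size=150, strata_key="region")
```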
0:46:18 And this is something,
0:46:22 it’s had some history in the real world going way back.
0:46:24 And it’s closely related to this idea
0:46:27 from ancient Greece of sortition.
0:46:29 – What is sortition?
0:46:34 – Sortition was this idea that you would randomly choose
0:46:36 people to be in charge of various issues.
0:46:38 – It’s election by lottery.
0:46:41 – Yeah, some people call them lottocracies also.
0:46:42 – The idea is basically,
0:46:45 we need someone to have the full-time job
0:46:47 of running some part of the government.
0:46:48 We don’t wanna vote for them
0:46:50 ’cause that creates all kinds of weird incentives.
0:46:54 So every year, we have a lottery, or twice a year,
0:46:57 and we pick one or two people who are gonna have this job.
0:46:59 And they only do it for one term,
0:47:02 they don’t have to worry about getting reelected,
0:47:05 but for reputational reasons, they try to do a good job
0:47:08 because they’ll look bad if they don’t do a good job
0:47:09 and people will think well of them if they do a good job.
0:47:11 So you pick them randomly,
0:47:13 and then you do it again the next year.
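And a toy version of that lottery itself, in the same hypothetical Python setting; excluding recent officeholders stands in for the one-term rotation just described:

```python
import random

def run_lottery(citizens, seats, recent_holders, seed):
    # exclude anyone who served recently, so the office rotates
    eligible = [c for c in citizens if c not in recent_holders]
    return random.Random(seed).sample(eligible, seats)

citizens = [f"citizen_{i}" for i in range(500)]
year_one = run_lottery(citizens, seats=2, recent_holders=set(), seed=1)
year_two = run_lottery(citizens, seats=2, recent_holders=set(year_one), seed=2)
```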
0:47:14 – It’s sort of like jury duty,
0:47:17 but like heightened to the absolute utmost level.
0:47:20 – Well, the Athenians had very aggressive jury duty.
0:47:22 They had much larger juries
0:47:25 and they paid them quite well, which is interesting too.
0:47:28 But I think the main justification for sortition
0:47:31 was this idea, I’m gonna paraphrase this,
0:47:34 but there’s this great quote from Douglas Adams,
0:47:35 the author of “The Hitchhiker’s Guide to the Galaxy”
0:47:37 and several other great books
0:47:41 that’s basically like any person who wants to be in charge
0:47:45 is ipso facto unqualified to be in charge.
0:47:47 And I think that was the idea of sortition,
0:47:52 is we don’t want these untrustworthy, overly ambitious,
0:47:58 potentially corrupt or sociopathic people
0:48:02 who desire power to be the ones in charge.
0:48:07 We want the average well-meaning patriotic citizen
0:48:09 to make decisions instead.
0:48:10 And so the way we’re gonna make sure
0:48:15 we don’t allow that sociopath to insert himself into power
0:48:18 is we’re gonna randomly choose who’s in charge.
0:48:19 And I just wanna note,
0:48:24 it’s an interesting solution to that problem.
0:48:26 It also raises a bunch of its own challenges.
0:48:30 If you think that a big part of what makes democracy work
0:48:32 is what we call accountability,
0:48:35 the idea that the person whose job it is
0:48:37 to pay attention, learn the issues,
0:48:39 figure out what maps to your preferences
0:48:42 and make decisions on that basis.
0:48:44 If the fact that they’re worried about
0:48:46 whether they’re gonna get to keep their job
0:48:49 is what makes them do all those things well,
0:48:52 then sortition is a terrible, terrible method
0:48:54 because I have no incentive to do anything.
0:48:57 I’ve been randomly chosen to do this job
0:49:00 and now I just have to do it no matter what.
0:49:02 I don’t have those strong incentives
0:49:05 that are coupled with my desire to win re-election.
0:49:07 So there’s no accountability in some sense
0:49:09 unless, like Noah was saying,
0:49:12 considerations like reputation, patriotism,
0:49:15 genuinely just caring about the issue are dominant.
0:49:18 So those are kind of, that’s one of the key trade-offs.
0:49:19 – But if you do a bad job,
0:49:22 maybe you’ll get ostracized and exiled.
0:49:24 – Also, they used to do that.
0:49:25 They would actually do that too.
0:49:26 But there’s another key trade-off,
0:49:28 which is that the background assumption of sortition
0:49:30 is that you don’t need very much expertise,
0:49:32 that all the expertise you need to do the job,
0:49:34 you already have or can pick up really fast.
0:49:37 So imagine you’re running your startup
0:49:42 and you’re gonna choose someone to be your CTO
0:49:43 and you’re like, oh, I’ll do it by sortition.
0:49:46 Like, bad luck for you if the lot falls on me
0:49:51 because I don’t code well enough to be able to do a good job
0:49:52 of reviewing the other people’s code
0:49:55 that’s necessary in the startup for the CTO.
0:49:56 And there’s no chance
0:49:58 that in a six-month or one-year period,
0:50:00 I could get up to speed fast enough.
0:50:01 Like maybe if you gave me a year,
0:50:03 I’d make some progress, maybe,
0:50:06 but I don’t have the relevant expertise.
0:50:09 So it’s also probably based on an amateur,
0:50:11 in the positive sense, idea of government,
0:50:13 kind of, you know, you can imagine
0:50:15 19th century British aristocrats
0:50:18 who think that like anyone in their club,
0:50:20 if called upon, could sit in government
0:50:21 and figure it all out, you know?
0:50:24 But life is very complicated today.
0:50:26 And the things that government does today
0:50:28 are infinitely more complex
0:50:32 than they were in, you know, an ancient society,
0:50:34 you know, in Athens.
0:50:36 And so even if you assume
0:50:39 that people could have done a good job then,
0:50:41 it’s not so clear they could do it now.
0:50:44 And even the Athenians didn’t do it for certain jobs.
0:50:46 Like they didn’t pick their generals by sortition.
0:50:48 They weren’t dumb enough to think
0:50:50 that they could go out and win wars
0:50:52 by picking a random person to be the general.
0:50:54 That required some expertise.
0:50:56 So that’s a second problem.
0:50:58 And then a third problem is,
0:51:01 is this job the kind of job you get better at over time?
0:51:03 ‘Cause if you get better at it over time,
0:51:04 you’re not gonna get a chance to,
0:51:05 ’cause you’re in for six months or a year,
0:51:07 and then you’re out.
0:51:09 And so you’re not, if it’s the kind of thing
0:51:11 where there’s upsides to doing it for longer,
0:51:12 you’re gonna lose the upsides.
0:51:13 Obviously there are also downsides
0:51:16 to having someone in the job for too long.
0:51:17 But, you know, this is sort of like
0:51:19 term limits on steroids.
0:51:22 – So I completely agree with all that.
0:51:23 And so where this has gone,
0:51:27 so I was one of a number of people
0:51:30 who helped design this user assembly for Meta.
0:51:32 It’s called the Community Forum.
0:51:34 And based on these concerns,
0:51:38 but also knowing that having straight up user voting
0:51:39 would be very challenging,
0:51:45 We really wanted to focus on really value-laden decisions
0:51:49 where it’s not really a matter so much of expertise,
0:51:53 but really it’s about trying to capture in this community,
0:51:57 what are your core values that affect this decision?
0:51:59 And that might be a place
0:52:02 where this is a more workable model,
0:52:04 but I do think it’s quite limited.
0:52:08 And going forward, I mean, my personal view,
0:52:08 I don’t know what Noah thinks.
0:52:10 My personal view is,
0:52:14 I think these things have to be expanded in some way
0:52:17 to bring in more expertise and to allow for delegation,
0:52:19 as we call it in crypto,
0:52:21 because of the things Noah was just saying.
0:52:24 I think it’s particularly hard
0:52:26 that someone serves on one of these things,
0:52:28 learns a bunch and then disappears.
0:52:31 And so, developing ongoing expertise matters:
0:52:35 for the types of decisions that are socially fraught,
0:52:37 going back to the beginning of this whole conversation,
0:52:40 the kinds of decisions a platform might want to give over
0:52:43 to its users, we’re gonna want people to develop
0:52:46 ongoing expertise and a sense of accountability
0:52:47 for that, I think.
0:52:48 – And as Andy says,
0:52:49 there could also be a lot of upside.
0:52:53 So I mean, take a hard kind of values-based,
0:52:56 ethical or moral question.
0:52:59 Let’s go back to what should the nudity policy be
0:53:01 on a social media platform, right?
0:53:02 It’s a hard question.
0:53:07 And you could imagine that if you take 1,000 people
0:53:10 and they’re genuinely representative of the users,
0:53:11 which is a tricky thing as we’ve talked about,
0:53:13 but imagine that they are.
0:53:16 And maybe they all have an impulse
0:53:17 one way or another way when they start,
0:53:19 but then you give them enough information
0:53:21 for them to have some thoughtful conversations
0:53:23 and then see where they land after, say,
0:53:27 three longish conversations over three days.
0:53:30 They may all land in a place
0:53:31 that’s a lot more thoughtful.
0:53:32 In fact, they will land in a place
0:53:36 that’s a lot more thoughtful than where they started.
0:53:37 And they may have a different perspective
0:53:40 than say the people who work inside the company
0:53:43 or the outsiders who start off very committed
0:53:44 to one point of view or another,
0:53:46 either because they’re like, I don’t know,
0:53:48 they’re pro-sex and they wanna have
0:53:51 as few clothes as possible or they’re deeply religious
0:53:54 and they wanna have as many clothes as possible.
0:53:57 And so you could imagine that you get,
0:53:58 and I think it’s plausible.
0:54:00 And I think in some of the experiments
0:54:02 that Andy’s run with Meta,
0:54:05 you get a more thoughtful, nuanced, balanced answer.
0:54:08 And I think for those purposes, it’s great.
0:54:10 And it’s better than a focus group
0:54:12 ’cause there’s maybe a little less control.
0:54:13 You know, in focus grouping,
0:54:15 the problem is if you’re good at running a focus group,
0:54:17 you can make the focus groups say almost anything.
0:54:20 And here, there are more people, there’s more space,
0:54:22 there are more protocols to stop
0:54:23 the people who are presenting the question
0:54:26 from driving the answer in a particular way.
0:54:28 And it just makes sense that,
0:54:31 in general, we get a better result, not every time.
0:54:33 I mean, because there’s some scenarios
0:54:37 where deliberation produces perverse results,
0:54:38 where people get into a cascade
0:54:41 where this is like how people burn witches, you know.
0:54:43 – Group think and herd mentality.
0:54:44 – Yeah, you get into that
0:54:47 and then everyone goes towards some crazy extreme thing.
0:54:49 That can happen, but there are a lot of techniques
0:54:52 that Andy’s design builds in
0:54:55 that are designed to stop that from happening.
0:54:57 – Now, part of this community forum
0:54:58 is you’re trying to provide information
0:55:01 so people can make an educated decision about something
0:55:04 and maybe change their mind about some things.
0:55:07 They arrive at a, you know, workable solution.
0:55:11 But my question there is how do you ensure
0:55:12 that the information and education
0:55:17 that people are receiving is non-biased or objective?
0:55:20 And, you know, ’cause whoever’s presenting you
0:55:22 the menu of options, they have a great amount of control
0:55:25 on steering you toward a certain outcome.
0:55:26 – Good question for Andy.
0:55:28 And then the other tricky problem is
0:55:31 how do you make decisions
0:55:33 about what’s the right information?
0:55:34 Right, imagine what we’re discussing
0:55:37 is precisely the misinformation problem.
0:55:38 What’s our baseline?
0:55:42 You know, where do we even start on what is reliable?
0:55:43 – It’s a super challenging problem.
0:55:46 I think there’s a couple different answers,
0:55:47 none of which are perfect.
0:55:50 And this is a broad problem in all democracy,
0:55:54 not just the online world and not just these assemblies,
0:55:56 though it’s perhaps sharpest in these assemblies
0:55:58 since the entity organizing the assembly
0:56:01 makes decisions about what information to present.
0:56:02 I would say a couple things.
0:56:07 One is you try not to have a monopoly on information.
0:56:10 And so, you know, a big part of these assemblies
0:56:13 is that people get to discuss and deliberate
0:56:15 and they’re completely free to bring in any information
0:56:18 they want gathered from whatever source they want.
0:56:21 So that’s, I mean, that’s the first most important thing,
0:56:21 probably.
0:56:24 And of course, you try to do as good a job as you can
0:56:27 of bringing in experts that are credible
0:56:30 to represent the different sides of the debate.
0:56:33 I think it’s become in vogue a little bit
0:56:37 on a certain part of the Twitter community
0:56:40 or X community, I guess it would be these days,
0:56:43 to say that there’s lots of fraught debates in society
0:56:47 where it’s obvious that one side is factually correct.
0:56:49 And so we shouldn’t show both sides,
0:56:53 that both-sidesism is itself a problem.
0:56:56 Of course, there are instances and places
0:56:58 where that might well be true.
0:56:59 But I don’t think you can get people
0:57:02 to make an informed decision about any issue
0:57:05 if you don’t let them hear both sides
0:57:07 or more than two sides of the issue,
0:57:10 even if you think the preponderance of evidence
0:57:12 rests more on one side than the other.
0:57:14 You really, I think, can’t get to an informed decision
0:57:16 unless you’ve heard all the arguments.
0:57:19 And so I think that’s a big effort in this
0:57:24 is to get a diverse set of experts to present all sides.
0:57:30 But long run, I think you want to be in a position,
0:57:32 if you really can build out a democratic system
0:57:34 that goes beyond these assemblies,
0:57:39 you’d want to foster a competitive ecosystem
0:57:42 in which different information providers crop up
0:57:44 and provide their different analyses and views.
0:57:46 And that’s something I’ve talked quite a bit about
0:57:49 with DAOs ’cause I think that’s an issue in crypto too.
0:57:51 – On that point, there’s also a body of literature
0:57:53 about people in brainstorming settings
0:57:56 and when there are bad ideas proposed,
0:57:58 it actually enables the group to arrive
0:58:00 at better decisions over time.
0:58:02 – Yeah, I mean, that idea, which I mean,
0:58:06 the classic modern formulation of that is John Stuart Mill.
0:58:08 His argument for free speech was,
0:58:10 we actually need the bad arguments
0:58:13 because working through why they’re bad and wrong
0:58:16 and false will help us get the right arguments.
0:58:18 And that’s, it sounds kind of like a trite argument,
0:58:21 but if you really think about it, it’s pretty deep
0:58:25 ’cause it’s not obvious and it’s interesting in that way.
0:58:28 I think another really interesting kind of aspect
0:58:33 of this whole challenge to me is jumping out
0:58:36 to the bigger political world of democracy.
0:58:39 So you were asking before, Robert, about how
0:58:40 it’s a funny time to be talking
0:58:41 about the US Supreme Court as a model.
0:58:44 And I think the same in a way is kind of true
0:58:47 about these ideas of democratizing,
0:58:50 but one of the aspirations of the citizens’ assemblies
0:58:52 or community forums
0:58:54 that Andy’s talking about
0:58:57 is to get real disagreement that is thoughtful
0:59:01 that doesn’t turn into polarized yelling at each other.
0:59:05 And one of the ways to do that is by narrowing down the issue.
0:59:08 So let me point to one weird feature of our polarization.
0:59:11 One weird feature of polarization is that people turn out
0:59:12 to have really strong views about things
0:59:14 they don’t know anything about,
0:59:17 and they have them instantaneously.
0:59:18 So you’re like, how?
0:59:19 Like, how do you, this happens to me sometimes,
0:59:21 I’m like talking to somebody and they’re like,
0:59:22 oh, I’m sure I think X.
0:59:23 And I’m like, I’ve known you for 30 years,
0:59:24 you don’t know anything about that.
0:59:26 Why do you have this strong position?
0:59:29 And the answer is as a time saving device
0:59:32 in a world where there’s so many issues
0:59:33 and they’re so complicated.
0:59:36 Once we’ve picked a team,
0:59:38 we’re blue or we’re red or we’re libertarian
0:59:41 or we’re anarchist or whatever we happen to be.
0:59:43 Once we’ve picked a team,
0:59:45 there’s now a list of positions
0:59:47 that’s associated with that team.
0:59:52 And so we just default, as a defensible time-saving heuristic,
0:59:54 to siding with our team on those issues.
0:59:56 ‘Cause we kind of think that someone else
0:59:57 has thought about it or a whole bunch of people
0:59:58 have thought about it.
1:00:01 And probably this is where I’m gonna end up
1:00:03 because I generally agree with this group of people
1:00:04 on various things.
1:00:08 And one of the things about doing a citizen’s assembly
1:00:10 or a community discussion about a narrower topic
1:00:15 is you can sometimes avoid this habit that we have
1:00:18 of defaulting to a polarized position
1:00:19 as a time saving device
1:00:21 because you’re gonna be told about it.
1:00:23 Like you’re gonna get the information in front of you.
1:00:25 And so it’s not only that we’re closed-minded
1:00:28 and that we don’t want to listen to the other side,
1:00:29 although that does happen.
1:00:30 We’re also partly closed-minded
1:00:33 because we’re barraged with so much information
1:00:35 that we don’t have time
1:00:37 to consider every perspective on every issue.
1:00:40 We just don’t have the cognitive bandwidth to do it.
1:00:42 I actually think that’s one of the problems
1:00:43 with our current polarization.
1:00:45 It’s not the only cause of it.
1:00:47 There have been polarizations in many societies
1:00:50 that do not have anywhere near as much information
1:00:50 as we’re getting.
1:00:52 So I’m not making some like,
1:00:54 this is the main causal factor,
1:00:56 but it is one reason that we do better
1:00:59 in a well-designed citizen’s assembly
1:01:02 often than we do out in the wild with politics.
1:01:04 – I think that’s been one of the most interesting
1:01:06 and, to me, surprising learnings from them,
1:01:09 both in the physical world and online,
1:01:11 is they’re generally organized around
1:01:15 the most fraught kind of culture wars type issues.
1:01:17 And when they’ve been run,
1:01:20 one of the big learnings I think has been that,
1:01:22 if you get a relatively small group of people together
1:01:25 and you structure the conversation well,
1:01:28 people generally are pretty reasonable.
1:01:32 It turns out they don’t want to be so partisan
1:01:36 or so hostile when they’re put into the right environment.
1:01:39 And that connects to other things
1:01:40 about our polarization, right?
1:01:42 Which is that it seems much larger than it is
1:01:44 because we consume so much of it
1:01:47 through these online platforms
1:01:49 where we hear the loudest voices,
1:01:52 but we don’t see the in-person interactions
1:01:54 that turn out to be usually less polarized.
1:01:56 – I had a funny example of that for my own life.
1:02:00 I was moderating a panel at Harvard
1:02:03 and some students decided they were gonna leaflet against me
1:02:05 and they were handing out leaflets to people as they came in.
1:02:08 And the leaflet was like, it was like a four-page leaflet,
1:02:14 but its text was entirely a download from a Twitter,
1:02:17 like a Twitter thread that someone had created
1:02:19 attacking something that I had written.
1:02:23 And I was like, it was just so weird
1:02:26 because it didn’t translate well to,
1:02:29 you know, the format of the leaflet.
1:02:31 But also I just thought to myself
1:02:33 the extremity of the formulations
1:02:37 that this person was using, you know, against me online,
1:02:39 you know, they didn’t really translate to the forum
1:02:41 where like we were having a thoughtful conversation.
1:02:43 And I sort of felt bad for the people who were leafleting
1:02:46 because it’s one thing to, you know, call what I said,
1:02:47 I think it was “vapid horseshit,”
1:02:49 which is what they were describing me as saying,
1:02:52 but it seemed in context weird and rude to say that
1:02:53 ’cause everyone in the room
1:02:55 was having a rational reasonable conversation,
1:02:57 whereas on Twitter, I’m sure it seemed great.
1:02:59 And then the punchline of it was,
1:03:01 there was like a substantive issue where I had,
1:03:03 in the thing I’d written that made them angry,
1:03:05 I’d made an argument, possibly wrong,
1:03:07 but I made an argument and I gave evidence for it
1:03:08 and I supported it.
1:03:10 And the response to this argument was,
1:03:12 it’s obvious that this is wrong,
1:03:14 so we’re not even going to entertain it.
1:03:17 And I was like, that just sounded so absurd
1:03:18 in the context of a university.
1:03:20 On Twitter, it sounds great, right?
1:03:22 Like, Feldman is full of vapid horseshit,
1:03:24 so why should we even bother to refute him?
1:03:27 All you need to know is that we think he’s stupid.
1:03:29 And then that’s enough to reach the conclusion.
1:03:31 But in personal conversation,
1:03:34 very hard to carry that off without looking silly, I think.
1:03:36 – Since you brought up Harvard,
1:03:38 are you feeling this intense tension
1:03:41 between the administration and the student body?
1:03:44 It just seems like such a powder keg over there.
1:03:47 – It was a very, very, very intense fall.
1:03:49 And I mean, very intense.
1:03:52 As intense as I have experienced on the university campus
1:03:54 and I’ve been on and off of this campus since 1988.
1:03:56 So it was, it was extreme.
1:03:59 Now things are sort of slowly getting back
1:04:01 to maybe normal is too strong a word,
1:04:03 but towards a calmer way of being.
1:04:05 And I will say during all of this,
1:04:11 our classes were normal, like school went on.
1:04:13 People learn stuff, they studied for exams.
1:04:16 We did have a normal campus life,
1:04:19 but all of the intensity that was being felt was,
1:04:20 it was felt.
1:04:23 So it was an experience, and, for the most part,
1:04:25 I think it’s fair to say, for most people,
1:04:27 it was not a good experience.
1:04:29 – Have things subsided a little bit?
1:04:33 – Yeah, I think so, you know, I mean, there’s now,
1:04:35 so what are universities bad at and what are they good at?
1:04:38 Universities are really bad at making fast decisions
1:04:39 about anything.
1:04:40 They don’t like to do it.
1:04:42 And if they do it, they make bad decisions.
1:04:44 ‘Cause if your personality were that you could really
1:04:46 react in real time, you know, you’d be like a trader.
1:04:48 You’d have some job
1:04:50 where you could really take advantage of that.
1:04:53 And of course, there are some people in university
1:04:56 who are really intellectually quick,
1:04:59 but the best of them use that intellectual quickness
1:05:02 to fill in their depth.
1:05:04 And then you make the decision over time.
1:05:06 So universities are bad at the fast stuff
1:05:08 and the fall was all fast stuff,
1:05:10 responding to a news cycle,
1:05:13 issuing statements and declarations.
1:05:16 Now we’re entering a phase where certainly here,
1:05:19 you know, the university has two task forces already set up
1:05:20 to study these issues.
1:05:23 There’s two more at least coming.
1:05:24 And this is going to be the part that universities
1:05:26 do pretty well, like take a deep breath,
1:05:28 figure out what we’ve done wrong,
1:05:29 figure out what we can do better,
1:05:32 give reasoned explanations of how we should do better
1:05:33 in the future.
1:05:36 And so the kind of nervous system of the university
1:05:40 will return closer to what it’s best at.
1:05:41 You know, and again, I want to be clear.
1:05:42 There are some people whose, you know,
1:05:44 nervous system is always like, go, go, go.
1:05:45 And that’s really valuable
1:05:47 in a whole bunch of dimensions of life,
1:05:50 but the university is not really one of them.
1:05:54 And so we do our best when we breathe a little bit.
1:05:58 And that’s now what’s happening, and that’s better.
1:06:00 So in that sense, we’re heading directionally
1:06:03 in the right way, though by no means are we there yet.
1:06:06 – I think it’s a really good and important example
1:06:08 of just the general topic we’ve been discussing,
1:06:11 ’cause a lot of it has to do with governance
1:06:15 and the governance of these super important very,
1:06:18 in some cases, very old, long running institutions
1:06:22 that are not straightforward businesses
1:06:25 and therefore face quite complex governance issues.
1:06:27 And certainly some of the ones, you know,
1:06:31 I’ve observed over the last, let’s say 10 years,
1:06:33 that make it so challenging is,
1:06:37 you have the exact same uneven participation problem.
1:06:40 So not every faculty member leans in and participates
1:06:42 in the governance of their university evenly.
1:06:45 The ones who choose to do that may have different views
1:06:47 than the ones who don’t.
1:06:48 And then on top of that, you have,
1:06:50 I don’t necessarily mean this in a negative way,
1:06:53 but like the mission creep of the university,
1:06:57 which is a lot less focused on protecting academic freedom
1:07:00 and developing the very best research in the world
1:07:03 and a lot more focused on a lot of other issues
1:07:06 that extend way beyond research.
1:07:09 And I think it’s hard, it’s hard to figure out
1:07:11 how to fix those governance problems
1:07:13 because the institutions are so big,
1:07:17 entrenched in so many different areas
1:07:18 and trying to spread their attention
1:07:20 across so many different things.
1:07:21 And I worry–
1:07:22 – One thing to do, at least in my view,
1:07:26 is to step back from thinking that
1:07:28 by making declarations,
1:07:31 the institutional part of the university contributes
1:07:33 to our understanding of the truth and of knowledge.
1:07:34 I don’t think that’s true.
1:07:37 So if the deans get together and hold a meeting
1:07:40 and announce that the second law of thermodynamics is true,
1:07:42 I don’t think that gives us much special knowledge
1:07:44 into whether the second law of thermodynamics
1:07:45 is in fact true.
1:07:49 However, when the people who are at the cutting edge
1:07:51 of some area of inquiry publish their research
1:07:53 in a peer review journal,
1:07:54 there’s reason to take that seriously.
1:07:55 It’s not always correct.
1:07:57 It’s always subject to revision,
1:08:01 but I would stop and listen closely to what they had to say
1:08:05 and give it some epistemological benefit of the doubt
1:08:07 because they are expert
1:08:09 and they’re speaking on their area of expertise
1:08:10 in a thoughtful way.
1:08:12 And I think one of the things that’s happened
1:08:15 is that our universities, and not only our universities,
1:08:18 have kind of fallen into this part of the mission creep
1:08:19 that Andy’s describing,
1:08:22 of thinking that they have to express a public statement
1:08:25 on every matter of public importance.
1:08:28 And I understand the moral impulse to say the right thing,
1:08:33 but that has to be balanced against, are you good at that?
1:08:35 And are you contributing to the university’s mission,
1:08:39 which is to pursue the truth in the broadest sense,
1:08:41 you know, through reading, writing and teaching.
1:08:43 But that’s different from, you know,
1:08:47 being in the announcement and declaration game.
1:08:48 And I don’t think universities are very good at that.
1:08:50 And I think stepping back from that is,
1:08:53 it’s by no means an overall solution,
1:08:54 but it’s the first step.
1:08:57 – This is happening all across corporate America and beyond,
1:08:58 you know, these calls for activism
1:09:02 and for institutions to take a position on various matters.
1:09:03 – I have clients coming to me,
1:09:05 corporate clients every day saying,
1:09:07 “We’re in so much trouble about this.”
1:09:10 You know, and I do try to tell them,
1:09:12 there’s no such thing as being neutral.
1:09:13 That’s the first point.
1:09:16 Genuine neutrality is not possible in the world.
1:09:19 But within the framework of realizing
1:09:20 that you can’t be neutral,
1:09:23 you can sometimes step back and say,
1:09:26 “Look, you know, we’re not gonna take a view
1:09:28 on this or that important thing.”
1:09:30 And the companies find themselves in these positions
1:09:32 because they’re lobbied.
1:09:34 And they’re lobbied by people
1:09:37 who are trying to affect consumer behavior often.
1:09:39 And sometimes they’re able to pull that off.
1:09:41 I mean, Meta has been subject to a boycott.
1:09:43 You know, I mean, other big companies
1:09:46 are also subject to various boycotts.
1:09:48 So you are living in a real world.
1:09:49 If you have customers,
1:09:51 you have to worry about your customers.
1:09:52 But you also have to be aware
1:09:55 that your customers could be all over the place.
1:09:59 And that’s one of the reasons to have step back policies,
1:10:01 including policies of referring something
1:10:03 to some other group of people
1:10:05 and saying, “This is a really hard one.
1:10:07 We’re not qualified to decide this.
1:10:08 And we’re handing it off to somebody
1:10:10 who is qualified to weigh in on it.”
1:10:11 – I agree.
1:10:14 And I think the key value is in tying your hands.
1:10:16 You really, you have to make what we would call
1:10:19 in the social sciences a credible commitment.
1:10:20 You have to get to the point
1:10:23 where the people who would try to force you
1:10:24 to take a position on this issue
1:10:27 that’s irrelevant to your core mission,
1:10:29 yet important to them,
1:10:31 believe that there’s nothing they can do
1:10:33 to make you take the position.
1:10:34 And I think the critical mistake
1:10:38 that a lot of the most important universities made,
1:10:40 as well as a lot of corporations made,
1:10:42 was to give in to those demands
1:10:44 and then create common knowledge
1:10:47 that they can be forced to make those statements
1:10:49 and that they aren’t committed
1:10:50 to not making those statements.
1:10:52 And people often point to it,
1:10:54 but one of the few universities
1:10:56 that had less of a problem with this stuff
1:10:58 has been the University of Chicago
1:11:00 because they had made a preexisting written commitment
1:11:02 to not take such positions.
1:11:06 And that has turned out to be a huge luxury for them
1:11:08 that other universities haven’t afforded themselves
1:11:10 because they hadn’t made that commitment
1:11:12 or the commitment wasn’t credible.
1:11:14 And I think that’s, I think, you know,
1:11:17 the central problem is that over a 10 plus year period,
1:11:20 a lot of the top universities demonstrated
1:11:23 that they did not have a credible commitment
1:11:24 to not making those statements.
1:11:26 And now they’re trying to walk that back,
1:11:28 but it’s hard to create that commitment out of nothing.
1:11:30 – I’d be remiss not to bring the conversation
1:11:32 back to some of the subjects we were discussing
1:11:36 at the very outset around internet governance.
1:11:40 And you’re mentioning this concept of credible commitments
1:11:42 in the context of the University of Chicago,
1:11:45 but when it comes to making commitments
1:11:49 with internet services and binding people to rules,
1:11:52 I wonder what the path forward is there.
1:11:54 When you look at something like the Meta Oversight Board,
1:11:56 they have made some decisions
1:12:00 that Meta doesn’t necessarily have to adhere to.
1:12:03 Now, it adds a lot of transparency to the process,
1:12:05 but I wonder if there are other systems
1:12:10 that would, you know, bind services and corporations
1:12:13 and applications
1:12:19 to some set of, you know, agreed-upon rules.
1:12:23 Of course, the thing that comes to mind is DAOs,
1:12:25 decentralized autonomous organizations,
1:12:29 and how you can sort of create these commitments
1:12:33 in blockchain code that bind all participants.
1:12:35 – I’ll just say, I mean, my view on this,
1:12:36 I think there’s two things that are interesting
1:12:40 about blockchain and DAOs with respect to this topic.
1:12:42 The first is this trust problem
1:12:46 and being able to write and commit to a process.
1:12:48 So one of the first things, you know, projects do
1:12:50 when they start in crypto is essentially
1:12:55 write a constitution, i.e. write down, in code,
1:12:58 how they’re gonna make decisions over different issues.
1:13:02 And, you know, exactly how binding those are
1:13:04 is up for debate, it’s a little bit complicated,
1:13:06 but in the long run with blockchain,
1:13:09 I think it really is true that the constitution
1:13:12 you commit to through that process in code
1:13:15 is a lot harder to change without resort
1:13:17 to some kind of democratic process
1:13:19 than if it’s not committed in that way,
1:13:22 especially if it goes way beyond what can be protected
1:13:25 through normal real world legal processes.
1:13:28 And so I think there is something really interesting
1:13:30 about being able to build an online community
1:13:33 where you can make a sort of promise into the future
1:13:36 that anytime this type of decision comes up,
1:13:39 this is the process by which we’re gonna make that decision.
1:13:42 I think there’s something quite interesting about that.
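A toy illustration of that constitution-in-code idea (not any real project's implementation): decision types are bound to processes up front, and the binding itself can only change through the supermajority rule the constitution names.

```python
class Constitution:
    def __init__(self, processes, amendment_threshold=0.67):
        self._processes = dict(processes)      # decision type -> process
        self._threshold = amendment_threshold  # supermajority needed to amend

    def process_for(self, decision_type):
        return self._processes[decision_type]

    def amend(self, decision_type, new_process, yes_votes, total_votes):
        # the only path to changing the rules is the rule for changing rules
        if total_votes == 0 or yes_votes / total_votes < self._threshold:
            raise PermissionError("amendment rejected: no supermajority")
        self._processes[decision_type] = new_process

c = Constitution({"treasury_spend": "token_vote",
                  "parameter_change": "delegate_vote"})
c.amend("treasury_spend", "assembly_vote", yes_votes=80, total_votes=100)
assert c.process_for("treasury_spend") == "assembly_vote"
```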
1:13:44 The second is more economic in nature.
1:13:46 It has to do with, you know,
1:13:48 one of the most important types of trust
1:13:50 for online platforms is the trust
1:13:53 between the different sides of the platform’s market.
1:13:54 And so, you know,
1:13:57 almost all online platforms have this characteristic
1:14:00 that they’re trying to bring together two sides of a market,
1:14:02 drivers and riders or developers and users
1:14:05 for the app store and so forth.
1:14:07 And you really need the producer side
1:14:12 of that two-sided platform to believe that into the future
1:14:15 they’re gonna have an economically beneficial relationship
1:14:16 with the platform.
1:14:17 And I think one of the biggest challenges
1:14:19 we’re seeing in the space right now
1:14:22 looking at the huge dispute between Epic Games and Apple,
1:14:26 for example, is that developers are starting to feel like
1:14:29 it’s still a tremendous opportunity to develop
1:14:33 on top of these extraordinarily good global platforms.
1:14:35 But at the same time, the taxes they pay
1:14:37 to the platforms are going up,
1:14:40 the decisions made around their services
1:14:42 are changing in unpredictable ways.
1:14:44 They would like to have a much longer term promise
1:14:47 about how the platform’s gonna treat them.
1:14:50 And that’s another place where I think being able to make
1:14:53 a long-term in some sense immutable promise
1:14:55 over the economic relationship between a platform
1:14:59 and its producers, I think could be important.
1:15:02 – In a broad sense, that’s what a constitution is.
1:15:05 I mean, a constitution, the best metaphor for it
1:15:09 is Odysseus and the Odyssey when he’s on his boat
1:15:11 and he’s going to the island of the Sirens.
1:15:14 And he knows they’re gonna be so appealing to him,
1:15:16 so beautiful, their song so beautiful
1:15:18 that he’s gonna jump ship.
1:15:21 And so he ties himself to the mast
1:15:24 and tells his crew members like, “Don’t let me go.”
1:15:26 And that, in one of the leading metaphors,
1:15:27 is what a constitution is.
1:15:30 I mean, the idea is sort of like,
1:15:32 we know that when the chips are down
1:15:35 and we’re in a panic, we’re gonna take away people’s rights.
1:15:37 We’re gonna silence them or we’re gonna take away their,
1:15:40 right against arbitrary arrest.
1:15:43 And so we try to bind ourselves from doing that.
1:15:45 And we try to create institutional mechanisms
1:15:47 to bind ourselves.
1:15:50 Notably in government, there are two parts to that.
1:15:52 There’s the written rules.
1:15:55 And then there’s the human ability to interpret
1:15:57 and maybe even override those rules.
1:15:59 And there’s a productive tension between those things.
1:16:00 And it goes all the way back to a debate,
1:16:02 believe it or not, I hate to be so ancient Greek,
1:16:04 but we’ve done a lot of ancient Greece today,
1:16:06 a debate between Plato and Aristotle
1:16:07 about which is better.
1:16:10 Whether it’s better to have the rules, in the end, in charge,
1:16:12 ’cause no one’s perfect, which is the Aristotle view,
1:16:14 or whether it’s better to have the wisest person
1:16:16 you can get your hands on in charge.
1:16:18 – Because the laws are king.
1:16:20 – Yeah, ’cause the rules won’t give you
1:16:21 the best outcome all the time,
1:16:23 which is broadly the Plato view.
1:16:25 And they’re both right.
1:16:28 You need some back and forth or productive tension
1:16:29 between those things.
1:16:34 And that’s true in DAOs, where you don’t want the thing
1:16:36 to lead to a total spiral down.
1:16:38 You need to have some break glass measure.
1:16:44 And it’s also gonna be true in something
1:16:48 like the oversight board, where the company sometimes
1:16:51 has committed itself absolutely to following their rulings.
1:16:54 Other cases, it’s asked for an advisory opinion.
1:16:58 But in the end, if Meta chooses to stop listening
1:17:02 to the oversight board altogether, it could.
1:17:03 They would just have to pay a reputational cost
1:17:05 for doing it.
1:17:09 So, I mean, I think, this really is in the realm of art
1:17:10 rather than science.
1:17:13 You always need to have some of each.
1:17:15 You need to have some meaningful constraint,
1:17:19 and then you have to have some capacity for flexibility.
1:17:22 And so it’d be nice to say there’s like a magic solution
1:17:23 to this, but there isn’t.
1:17:25 And there’s no purely technological solution,
1:17:27 but there’s no solution that has no technology,
1:17:30 because rules and the following rules are a technology.
1:17:32 So you need both.
1:17:37 And that may not be the most thrilling conclusion.
1:17:41 Rules always win, or people always win.
1:17:44 Actually, sometimes one wins and sometimes the other wins.
1:17:46 And that’s real life.
1:17:48 So maybe that’s not a terrible place
1:17:51 to end what I have to say on the topic.
1:17:53 – I want to build on that in one particular way,
1:17:55 which is like one of the things we study a lot
1:18:00 in the history of democracy is this idea of constitutions,
1:18:02 and what we call in political science
1:18:06 and political economy, self-enforcing constitutions.
1:18:09 There’s no external authority
1:18:12 that can bind a country to its constitution.
1:18:15 And so the constitution only has power in the long run
1:18:18 to force people to follow the rules
1:18:21 if there’s some track record of everyone
1:18:23 having agreed to it for long enough,
1:18:27 some special power that it accrues over time,
1:18:30 because there’s nothing to stop someone
1:18:32 from tearing it up and ignoring it.
1:18:34 And so that’s why we call them self-enforcing.
1:18:36 They only really bind in the long run
1:18:39 when everyone has a long enough track record
1:18:41 of agreeing to be bound by it.
1:18:42 – This goes back to the Magna Carta
1:18:45 where King John was forced to sign it, right?
1:18:46 And then he was sort of like,
1:18:48 well, actually I’m not gonna pay attention to it that much.
1:18:51 And then there was a battle
1:18:54 over the legitimacy of the crown and all that.
1:18:56 – Tons of history of this.
1:18:59 And the US constitution is somewhat unusual
1:19:01 in the length of time for which it’s proven
1:19:04 to be somewhat self-enforcing.
1:19:07 That same problem exists online.
1:19:09 And I think Noah is right.
1:19:12 He was hinting at this, I think,
1:19:15 that blockchain or ways
1:19:18 of writing immutable agreements into code
1:19:21 do not, in a comprehensive way,
1:19:23 solve this problem of self-enforcement
1:18:25 because people still always
1:18:27 have the option to fork and to leave
1:19:29 and to do whatever they want,
1:19:32 or to change the code as long as they agree to it.
1:19:35 And so there’s still something deep in,
1:19:37 like he was saying, art rather than science about it.
1:19:41 However, I do think for the online world,
1:19:44 blockchain provides something pretty fascinating
1:19:47 that I was really blown away by as a political scientist
1:19:48 when I discovered it.
1:19:51 And I think it’s best highlighted through an example.
1:19:54 So there’s a DAO called Lido.
1:19:58 And Lido has been discussing publicly for a while,
1:20:02 its desire to build in a veto
1:20:04 for certain types of decisions.
1:20:06 And what they wanna do is essentially update
1:20:08 their constitution to say,
1:20:10 when there’s this one set of decisions,
1:20:12 maybe the DAO just makes the decision,
1:20:15 but when it’s this other type of decision,
1:20:17 it goes to an external veto.
1:20:21 And when I was talking to them and to other people
1:20:24 about how that would work in practice,
1:20:29 one of my biggest concerns was that in the real world,
1:20:32 if you try to set some procedural rule
1:20:34 that only applies to one set of legislative votes
1:20:36 and not another,
1:20:38 then it’s just obvious that strategic actors
1:20:41 will channel it into whichever category
1:20:42 is favorable to them.
1:20:44 So, like,
1:20:45 if I don’t want the veto to apply to this,
1:20:47 they’ll just claim this is one of the votes
1:20:48 that the veto doesn’t count on.
1:20:51 And because legislatures get to make all their own rules,
1:20:53 you really can’t stop that.
1:20:54 And we’ve seen that in the Senate.
1:20:56 Occasionally the Senate parliamentarian
1:20:58 tries to stop the Senate from doing stuff.
1:21:00 There’s a weird equilibrium where sometimes the Senate
1:21:02 defers to the parliamentarian,
1:21:03 but in the long run,
1:21:06 the Senate has no real deep obligation to do that
1:21:09 and can always change its rules if it wants to,
1:21:10 if enough people want to.
1:21:12 And so I kind of thought,
1:21:13 this veto is not gonna work
1:21:16 because you can just redefine Lido’s votes
1:21:18 into one category or the other.
1:21:20 But the thing that I learned
1:21:22 that’s really interesting is: no,
1:21:26 because the topic of the vote is defined
1:21:29 by the smart contracts that the vote actually touches
1:21:31 or doesn’t touch,
1:21:34 you can define in a very deep and immutable way
1:21:36 whether this is one of the votes
1:21:38 that the veto’s gonna apply to or not.
1:21:40 And that allows for a form of commitment
1:21:43 that I’ve never seen before in a legislature.
1:21:46 I don’t think it’s a panacea.
1:21:48 I think there are still these broader issues
1:21:50 of self enforcement that matter
1:21:51 for all the reasons that Noah’s saying,
1:21:54 but I do think it offers something pretty fascinating
1:21:57 in the vein of binding commitments
1:21:59 to legislative procedures.
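A rough Python sketch of that scoping idea, with hypothetical contract addresses: whether the veto gate applies is computed from the contracts a proposal's calls actually touch, not from how the proposal is labeled, so a strategic actor can't relabel a vote to dodge it.

```python
# contracts whose state the external veto is allowed to guard (hypothetical)
VETOABLE_CONTRACTS = {"0xStakingModule", "0xWithdrawalQueue"}

def veto_applies(proposal_calls):
    """proposal_calls: list of (contract_address, calldata) a vote would execute."""
    touched = {address for address, _ in proposal_calls}
    # scope is derived from what the vote touches, not from its label
    return bool(touched & VETOABLE_CONTRACTS)

routine = [("0xGrantsRegistry", "fund(...)")]
sensitive = [("0xStakingModule", "setOperator(...)")]
assert not veto_applies(routine)   # ordinary vote: no veto gate
assert veto_applies(sensitive)     # touches a guarded contract: veto gate
```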
1:22:01 – Isn’t this the miracle of the US Constitution, Noah?
1:22:04 I mean, you’ve written a biography of James Madison,
1:22:06 the fact that you can set these rules up front
1:22:09 and yet make them flexible enough that they can endure.
1:22:11 – Well, if we had Madison here,
1:22:15 he would agree that it was his perfect design.
1:22:20 You have to ignore the civil war and some other blips.
1:22:22 But yeah, I mean, the aspiration was to create something
1:22:25 that would be able to be both rule-based
1:22:28 and also responsive to change over time.
1:22:30 Some of the design elements didn’t work well.
1:22:34 I mean, the amendment provision, we use it very, very rarely.
1:22:36 He probably thought that was generally okay,
1:22:37 but there have been some circumstances
1:22:40 where we really need reforms and we can’t really get them.
1:22:44 More fundamentally, it’s a system that’s designed
1:22:48 to enable compromise and to enable compromise in the middle
1:22:51 and to push politicians back towards the center.
1:22:53 And that’s why, right now, if you ask
1:22:56 about sort of the big question, the grand questions,
1:22:59 plaguing politics and political science,
1:23:03 I think it’s fair to say that if you have some confidence
1:23:06 that the laws of political science are still true,
1:23:09 then you think that we will migrate back towards the middle
1:23:11 because our system is designed
1:23:13 to push us back towards the middle.
1:23:16 And if you, on the other hand, are really panicked
1:23:18 about where things are going and the possibility
1:23:22 for breakdown, then
1:23:23 you might still believe in the rules,
1:23:25 but you think they might take too long to operate.
1:23:28 And then in the short term, you could get a breakdown
1:23:31 in overall faith in the system’s capacity.
1:23:33 And the other part of it is looking at the world around us
1:23:35 and seeing that for all of our polarization,
1:23:38 lots of elements of our society are still functioning
1:23:41 and functioning really reasonably well.
1:23:45 – Amazing, this has been a fantastic conversation.
1:23:47 Thank you both so much for all your time.
1:23:49 – Thank you so much, really fun.
1:23:50 – Thank you, yeah, this is great.
1:23:55 – We sort of took it by fiat
1:23:57 that direct democracy has problems.
1:23:59 We mentioned Boaty McBoatface,
1:24:01 which is a pretty humorous example,
1:24:05 but are there more serious examples from history,
1:24:08 from the physical world of direct democracy in action
1:24:09 and going awry?
1:24:12 – I mean, the most classic example people point to,
1:24:17 which America’s founders also pointed to, was Athens.
1:24:21 So Athens had a pretty robust direct democracy.
1:24:24 The reality of how it functioned was quite complicated.
1:24:26 It wasn’t as simple as just like everyone
1:24:28 showed up and made every decision.
1:24:30 There were actually, like, pretty complex layers
1:24:32 of decision makers and stuff,
1:24:35 but there was a significant component of direct democracy
1:24:38 among land-owning men.
1:24:44 And one way that people say it went awry
1:24:48 is that during the war with Sparta,
1:24:52 you know, this is just one telling,
1:24:56 but like the people got swept away with passion
1:24:59 and were uninformed about the strategic decisions
1:25:04 that had to be made and essentially forced Athens
1:25:10 to open a second front in the war by invading Sicily.
1:25:14 And that sort of turned into Athens’ Vietnam.
1:25:16 It was like a disaster.
1:25:19 And it massively sapped Athens’ power.
1:25:22 It was super, super costly.
1:25:27 And that is often held up as an example of like mob rule.
1:25:30 This is because the decision to open a second front
1:25:32 and invade Sicily was,
1:25:35 now you said that there are complex layers
1:25:37 of decision-making, but in this respect,
1:25:41 it was kind of a bottom-up, mob-like decision.
1:25:44 – The claim is that the mob was manipulated
1:25:49 by ego-driven generals who wanted to burnish
1:25:54 their reputations by opening a new invasion
1:25:57 and that they manipulated the mob into supporting them.
1:26:00 And if you look in the classical era,
1:26:02 and you could question the motives of all these authors,
1:26:06 that’s sort of the stereotype that arises.
1:26:09 And so post Athens, glory days,
1:26:13 a lot of Roman writing refers to mob rule
1:26:17 and the ability of demagogues to manipulate the mob
1:26:21 as reasons to be very skeptical of direct democracy.
1:26:25 There’s like a really famous passage in Virgil’s Aeneid
1:26:27 where he goes on at length
1:26:32 about how cynical leaders can whip up the mob
1:26:36 and what we really need are seasoned statesmen
1:26:40 who will make decisions carefully on behalf of people.
1:26:43 More modern examples, hard to say.
1:26:48 It’s so dysfunctional that it’s really not even tried.
1:26:51 Yeah, it’s really, I mean, you can point to like,
1:26:54 in Switzerland, there’s a pretty aggressive
1:26:56 local referendum system.
1:26:58 I think there’s general agreement
1:26:59 that it’s kind of crazy,
1:27:02 but, like, it doesn’t work as badly as some other things.
1:27:04 I mean, literally at the canton level in Switzerland,
1:27:08 you vote on who receives passports.
1:27:10 It’s really quite wild.
1:27:12 Yeah, there’s some great research on this.
1:27:15 And you know, in the US people point to California
1:27:19 as pretty extreme on the end towards direct democracy.
1:27:22 And that can go both ways.
1:27:24 In some ways, I think it’s an important
1:27:26 and potentially valuable institution
1:27:30 ’cause it allows voters to surface issues
1:27:32 that the legislature might want to ignore.
1:27:36 And so if you think your legislators are not doing a good job,
1:27:38 or are captured by special interests,
1:27:40 or for whatever reason are not
1:27:43 sufficiently accountable to voters,
1:27:45 then giving voters this alternative mechanism
1:27:50 to force issues could be quite valuable.
1:27:54 The downside is that you end up with tremendous voter fatigue
1:27:57 ’cause you have lots of these votes
1:28:00 and the California ballot is ridiculous.
1:28:03 It’s many, many pages long.
1:28:04 Most people, including myself,
1:28:07 can’t understand most of the issues being voted on.
1:28:09 And then the second problem,
1:28:10 and this is often a problem in direct democracy,
1:28:13 is interest group capture of the agenda.
1:28:17 And so how ballot initiatives get onto the ballot
1:28:20 is complicated, but it’s actually not that hard
1:28:22 for a well-resourced committed interest group
1:28:27 to force carve outs for themselves onto the ballot.
1:28:30 And so one thing that happens in California is every cycle,
1:28:35 we have to vote on this extremely abstruse ballot initiative
1:28:39 that has to do with whether doctors should be required
1:28:41 to sit in on all dialysis.
1:28:45 And you might think it’s doctors who are pushing this
1:28:47 because you might think like, oh, they make money from it.
1:28:50 But no, the doctors do not want to do this.
1:28:51 And they think it’s crazy
1:28:53 ’cause there’s no medical reason why a doctor needs
1:28:55 to be present for the administration of dialysis.
1:28:59 – Yeah, it sounds like just a ton of time wasted and…
1:29:01 – Yeah, yeah, it’s crazy.
1:29:05 And it’s being pushed by certain interest groups
1:29:08 that are kind of just more generally
1:29:09 in a conflict with doctors.
1:29:11 And they have stated publicly
1:29:14 that the reason they put it on the ballot every two years
1:29:17 is to force the doctors and interest groups to spend money
1:29:19 convincing everyone not to vote for it.
1:29:24 So it’s a complete waste of everyone’s time and resources.
1:29:25 And so that I think is a great example
1:29:27 of how these processes get messed up.
1:29:29 – By the way, the Irish referendum that just happened,
1:29:31 my brother-in-law was just visiting from Ireland.
1:29:34 So I was interested in what was going on.
1:29:40 It was interesting to see people reject so strongly
1:29:42 these proposed changes to the constitution.
1:29:44 I don’t know if you’ve been tracking it.
1:29:46 – No, I haven’t followed that very carefully,
1:29:49 but I do think that’s a good example
1:29:51 where referendums, if they’re embedded
1:29:53 into a broader process in a healthy way,
1:29:58 can be a really good way to get more signal from voters.
1:30:01 And this happened in California, I think it was 2018.
1:30:05 We had a bunch of pretty like culture war related
1:30:08 referendums or ballot initiatives.
1:30:09 – Yeah.
1:30:10 – And quite consistently,
1:30:12 the voters really signaled through them
1:30:15 that they were in a more centrist position
1:30:17 than California’s elected officials.
1:30:20 And I think that had a big impact
1:30:22 on how the elected officials then proceeded
1:30:26 after they lost these big ones.
1:30:28 So I do think it can be valuable,
1:30:31 but it’s complicated how you enact in practice.
1:30:34 And I think that’s kind of where DAO governance is heading.
1:30:38 There will still be some direct token holder voting,
1:30:41 but there will also be a lot more delegation
1:30:46 to professional experts on these issues.
1:30:47 – You need to protect against mob rule,
1:30:49 but also plutocracy.
1:30:50 – Exactly.
1:30:55 – And find some sort of middle way.
1:30:58 – Yeah, and I think that the delegation stuff,
1:30:59 it’s very valuable,
1:31:02 but it’s not gonna solve the participation problem
1:31:03 fundamentally.
1:31:05 You’re still gonna need the token holders
1:31:08 or the other voters to pay attention
1:31:11 and make sure their delegates don’t go rogue.
1:31:13 And that’s gonna require some pretty careful planning
1:31:16 because there’s definitely a temptation to set it
1:31:19 and forget it in terms of delegating your tokens.
1:31:23 But we have lots of reasons to suspect
1:31:27 that if that behavior manifests regularly,
1:31:29 the delegates won’t have those incentives
1:31:31 we want them to have to do a good job.
1:31:33 So I think a lot of the most interesting work right now
1:31:35 with DAO governance is around,
1:31:38 how do you build delegation programs
1:31:41 that first of all recruit and give good incentives
1:31:44 to delegates while at the same time, second of all,
1:31:46 still encourage token holders or other voters
1:31:50 to pay attention and to think about re-delegating
1:31:51 their votes on a regular basis.
1:31:55 So that delegates feel like they’re being watched.
1:31:56 – What are people putting in place?
1:32:00 What sorts of new rules or experiments are happening?
1:32:01 – First thing that’s happened,
1:32:02 which I think is really interesting
1:32:05 is a bunch of the largest DAOs have instituted,
1:32:07 they each have a different name,
1:32:09 I would call them Delegate Programs,
1:32:12 which is a combination of things: often it includes paying
1:32:14 the delegates, sometimes as a function
1:32:16 of how many votes they accrue,
1:32:20 and creating, like, an online web interface
1:32:22 that makes it easy for token holders
1:32:24 to delegate to different delegates.
1:32:26 And so you’re putting together,
1:32:30 you’re creating incentives for there to be delegates
1:32:34 and you’re helping token holders find delegates.
1:32:36 And we’ve actually done some research,
1:32:37 there’s some pretty interesting evidence
1:32:39 that rolling out those programs
1:32:44 actually does increase token holder voting participation,
1:32:47 probably because you’re asking the token holder
1:32:49 to do a much easier task.
1:32:52 Instead of voting on like literally changes
1:32:55 to the underlying protocol’s code,
1:32:56 you’re just asking them like,
1:32:58 find one of these delegates who you like
1:33:00 and give them your voting power.
1:33:03 And part of these programs is also having the delegates
1:33:07 write platform statements and or post videos
1:33:09 about what they wanna accomplish as a delegate
1:33:12 that again helps the token holders find like,
1:33:14 oh, that’s a delegate that shares my views,
1:33:16 I’m gonna delegate to them for now.
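A minimal sketch of the bookkeeping such a delegate program implies, with illustrative names and rates; re-delegation is just overwriting the holder's pointer, which is part of what keeps delegates feeling watched:

```python
from collections import defaultdict

class DelegateRegistry:
    def __init__(self):
        self.delegation = {}   # holder -> chosen delegate
        self.balances = {}     # holder -> token balance

    def delegate(self, holder, delegate, balance):
        self.balances[holder] = balance
        self.delegation[holder] = delegate  # overwriting = re-delegation

    def voting_power(self):
        power = defaultdict(int)
        for holder, chosen in self.delegation.items():
            power[chosen] += self.balances[holder]
        return dict(power)

    def pay(self, rate_per_vote=0.001):
        # e.g. compensate delegates as a function of votes accrued
        return {d: p * rate_per_vote for d, p in self.voting_power().items()}

registry = DelegateRegistry()
registry.delegate("alice", "delegate_A", 1_000)
registry.delegate("bob", "delegate_A", 500)
registry.delegate("carol", "delegate_B", 2_000)
print(registry.voting_power())  # {'delegate_A': 1500, 'delegate_B': 2000}
```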
1:33:17 – Fascinating.
1:33:19 So this is a paper that you’re working on currently?
1:33:20 – Yeah, yeah, I’m hoping to have it done
1:33:23 in the next month or so, we’ll see.
1:33:24 – Very cool.
1:33:25 And so basically getting people to delegate
1:33:29 their votes more, making it low friction,
1:33:31 making the information available
1:33:32 for people to make reasonable decisions
1:33:34 about who they want to delegate to
1:33:39 and the directions that they’ll take a Dow.
1:33:40 – Exactly, yeah.
1:33:42 No, and I think it’s gonna be a really important model
1:33:47 because this came up in the conversation with Noah,
1:33:50 I think a lot of people in other parts
1:33:54 of online governance are coming to the same conclusion
1:33:55 that we need representatives
1:33:58 ’cause we need this kind of expertise and accountability,
1:34:00 and we can’t rely on direct democracy.
1:34:04 But Web3 is years ahead, literally years ahead
1:34:07 in experimenting with how you actually set up
1:34:09 representative democracy.
1:34:12 So I think it’s gonna be, yeah, a really important development.
1:34:16 – I mean, we’ve got this intense, like, Darwinian combat going on,
1:34:18 all of these experiments just let loose
1:34:20 and seeing which ones will flourish.
1:34:25 How far away are we from actually determining what works,
1:34:29 what doesn’t work, and seeing the kind of best practices
1:34:30 shake out from all this?
1:34:31 – I don’t know.
1:34:33 I think there’s probably two aspects to it
1:34:36 that we need to have happen before we’ll know for sure.
1:34:42 One is to get DAOs to a place where they have
1:34:47 broader, killer use cases for society,
1:34:49 which will in turn make the governance decisions
1:34:51 higher stakes for society.
1:34:54 And then we’ll see which governance structures
1:34:56 can stand up to that pressure.
1:34:59 In some ways they’re already involved
1:35:01 in the high stakes decisions,
1:35:04 certainly in these like DeFi protocols and stuff,
1:35:09 but they don’t have the same kind of global public pressure
1:35:12 on them that other online platforms have faced
1:35:15 because other online platforms are much more mature,
1:35:18 have many more regular users and so forth.
1:35:21 So I think that’s gonna be a really big, interesting shift
1:35:25 as it occurs, one that’s gonna bring DAO governance
1:35:27 more together with what we talked about with Noah
1:35:30 in terms of, like, Web 2.0 governance.
1:35:33 And then the second is we need, over time,
1:35:36 to have more DAOs with broader distributions
1:35:40 of voting power, which is gradually happening.
1:35:43 A big criticism of DAOs historically has been that
1:35:45 they’re clothed in the rhetoric of democracy,
1:35:48 but the voting power is very unevenly distributed.
1:35:52 And that may make some of this, like, delegation experimentation
1:35:55 sort of unrepresentative of what would happen
1:35:58 in a broader democratic system
1:36:00 where there’s more conflict among users
1:36:02 with roughly equal amounts of power.
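One way to make "very unevenly distributed" concrete is to count how many of the largest holders it takes to control a majority of voting power, a Nakamoto-coefficient-style measure. A minimal sketch, with made-up balances:

```typescript
// How many of the biggest holders does it take to control >50% of voting power?
// A small answer means governance is effectively in a few hands.
function majorityHolderCount(balances: number[]): number {
  const sorted = [...balances].sort((a, b) => b - a); // largest first
  const total = sorted.reduce((sum, b) => sum + b, 0);

  let cumulative = 0;
  for (let i = 0; i < sorted.length; i++) {
    cumulative += sorted[i];
    if (cumulative > total / 2) return i + 1;
  }
  return sorted.length;
}

// Made-up distribution: one whale, a few mid-size holders, many small ones.
const balances = [400, 120, 80, 60, ...Array(100).fill(3)];
console.log(majorityHolderCount(balances)); // => 2: two holders control a majority
```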
1:36:06 And so that is the trend, I think,
1:36:08 in token holding over time.
1:36:10 And again, as these become larger,
1:36:13 as they touch on more killer use cases for society,
1:36:15 I think we’ll see that broader distribution
1:36:17 and that will be another kind of pressure test
1:36:19 for these systems.
1:36:22 So those are the two things I’m keeping my eye on.
1:36:24 (upbeat music)
with @NoahRFeldman, @ahall_research, @rhhackett
Resources for references in this episode:
- On the U.S. Supreme Court case NetChoice, LLC v. Paxton (Scotusblog)
- On Meta’s oversight board (Oversightboard.com)
- On Anthropic’s Long-Term Benefit Trust (Anthropic, September 2023)
- On “Boaty McBoatface” winning a boat-naming poll (Guardian, April 2016)
- On Athenian democracy (World History Encyclopedia, April 2018)
- The Three Lives of James Madison: Genius, Partisan, President by Noah Feldman (Random House, October 2017)
A selection of recent posts and papers by Andrew Hall:
- The web3 governance lab: Using DAOs to study political institutions and behavior at scale by Andrew Hall and Eliza Oak (a16z crypto, June 2024)
- DAO research: A roadmap for experimenting with governance by Andrew Hall and Eliza Oak (a16z crypto, June 2024)
- The effects of retroactive rewards on participating in online governance by Andrew Hall and Eliza Oak (a16z crypto, June 2024)
- Lightspeed Democracy: What web3 organizations can learn from the history of governance by Andrew Hall and Porter Smith (a16z crypto, June 2023)
- What Kinds of Incentives Encourage Participation in Democracy? Evidence from a Massive Online Governance Experiment by Andrew Hall and Eliza Oak (working paper, November 2023)
- Bringing decentralized governance to tech platforms with Andrew Hall (a16z crypto YouTube, July 2022)
- The evolution of decentralized governance with Andrew Hall (a16z crypto YouTube, July 2022)
- Toppling the Internet’s Accidental Monarchs: How to Design web3 Platform Governance by Porter Smith and Andrew Hall (a16z crypto, October 2022)
- Paying People to Participate in Governance by Ethan Bueno de Mesquita and Andrew Hall (a16z crypto, November 2022)