AI transcript
0:00:15 There is a sort of subtle distinction between wisdom of a random crowd and wisdom of an informed crowd.
0:00:27 Instead of having politicians decide what policies to have, politicians and voters would just decide on what our metric for success is going to be.
0:00:36 As you’re deciding which thing to build first or as we’re progressively decentralizing, what do we prioritize, you actually have to understand the market context you’re working in.
0:00:45 We as humans love making predictions, and to improve our predictive power, we’ve built up mechanisms that leverage the wisdom of the masses,
0:00:50 whether it be political polls, financial markets, even Twitter or X community notes.
0:01:00 One such mechanism that had its moment this year was prediction markets, with search queries for platforms like Polymarket or Kalshi going hyperbolic ahead of the election.
0:01:10 Today’s episode is all about prediction markets, including where they’re useful and where they’re limited, but also how they coexist with other mechanisms like the polls.
0:01:15 So was the attention they received an election-year phenomenon or a sign of something to come?
0:01:23 And what’s the difference between gambling and speculation anyway, and what implications does that question have for their future in the United States?
0:01:32 Given that this episode was originally published on our sister podcast Web3 with A16Z, we also explore where Web3 and decentralized networks play a role here.
0:01:40 Finally, if you’re excited about the next generation of the internet, be sure to check out Web3 with A16Z wherever you get your podcasts.
0:01:42 Alright, on to the episode.
0:01:54 Welcome to Web3 with A16Z, a show from A16Z Crypto about building the next generation of the internet.
0:01:59 I’m Sonal Chokshi, and today’s episode is all about prediction markets and beyond.
0:02:12 Our special guests are Alex Tabarrok, professor of economics at George Mason University and chair in economics at the Mercatus Center, and Scott Kominers, research partner at A16Z Crypto and professor at Harvard Business School.
0:02:23 Prediction markets hit the main stage in the recent election, which we covered briefly, especially to tease apart the hype from the reality there, since people have talked about the promise and perils of these for a very long time.
0:02:32 But we also go more deeply into the how, why, and where these markets work, challenges, and opportunities, including implications for designers throughout.
0:02:45 We also briefly cover other information aggregation mechanisms and discuss applications for all these markets, including touching on trends like futarchy, AI entering the market, DeSci, and more.
0:02:50 About halfway through, we discuss where blockchain and crypto technologies do and don’t come in.
0:02:59 And as a reminder, none of the following should be taken as business, investment, legal, or tax advice. Please see A16Z.com/disclosures for more important information.
0:03:06 Be sure to also check out the show notes for this episode. We have a really rich set of links, including all the research cited in this conversation.
0:03:15 But first, we begin with a quick overview of what prediction markets are. The first voice you’ll hear is Alex’s, followed by Scott’s.
0:03:22 So I think a prediction market is a very simple idea. The bottom line is that we’re interested in forecasting.
0:03:33 Lots of people are interested in forecasting things. And prediction markets are some of the best methods of forecasting which we have yet created.
0:03:41 They tend to be better than complicated statistical models. They tend to be better than polls, or at least as good.
0:03:45 And there’s one reason for that. Suppose that a model was better.
0:03:53 Suppose I have the Nate Silver statistical model of predicting elections, and it’s better than the prediction market.
0:04:02 Suppose that were true. Well, if that were true, I could make money. I could use Nate’s model to go and make bets.
0:04:07 And in making bets on the prediction market, I push the prediction market closer to the truth.
0:04:17 So almost by definition, the prediction markets have to be at least as good, and typically they’re better than other methods of forecasting.
0:04:24 And actually, that is an illustration of why we think of these things as information aggregation mechanisms. What are they really doing?
0:04:32 They’re aggregating information from all of the people in the market. And so if many different people are out there doing their own private forecasts and like calibrating their own models,
0:04:37 there’s Nate Silver and, like, Jonathan Gold, Melanie Bronze, you know, we’ll make up all of our variations.
0:04:43 You know, they all have their own models. They all have their own estimates, which they trust with some degree of confidence.
0:04:50 They come together, right? They’re all buying or selling the prediction asset based on what their model leads them to believe.
0:04:58 And so as a result, the asset is sort of like aggregating all of this information. It’s price discovery, just like we think about in financial markets and commodities markets.
0:05:03 Like everybody’s demand together discovers the price at which the market clears.
0:05:12 And here, because what the value of the asset is depends on probability, right? It’s like its value is sort of like a function of the probability of the outcome of the event.
0:05:15 The price aggregates people’s estimates of that probability.
0:05:28 Yeah, exactly. I think it’s useful that these are markets and actually all markets do this. And we learned this going back to Hayek’s 1945 article, “The Use of Knowledge in Society.”
0:05:33 This is a Nobel Prize winning paper, which doesn’t have a single equation in it.
0:05:38 So anybody can go and read this paper and they should. It’s a fantastic paper.
0:05:39 I’ll link it in the show notes.
0:05:40 Awesome.
0:05:49 And so, you know, prior to Hayek, people were thinking that prices, you know, coordinate, and they make demand equal supply and production equal consumption.
0:05:52 Hayek said, no, no, no, you’re thinking about the price system all wrong.
0:05:56 The price system is really about aggregating and transmitting information.
0:06:02 And he said, look, there’s all this information sort of out there in the world and it’s in heads, right?
0:06:07 It’s in people’s heads, like what people prefer their preferences, but also people know things.
0:06:14 They know what the substitutes are, what the complements of everything are, how to increase supply, what the demands and the supplies are.
0:06:15 It’s all in heads.
0:06:20 And for a good economy, you want to use that information, which is buried in people’s heads.
0:06:22 But how do you get it out?
0:06:23 Because it’s dispersed.
0:06:25 It’s dispersed in millions of people’s heads.
0:06:29 The information, sometimes it’s fleeting information.
0:06:31 It’s sometimes tacit.
0:06:33 It’s hard to communicate to a central planner.
0:06:45 So what Hayek said is that markets do this because markets give people an incentive through their buying and selling to reveal this kind of information.
0:06:51 To pool this dispersed information from millions of people. And people who are buying, they’re pushing the price up.
0:06:53 People are selling, they’re pushing the price down.
0:06:56 Suppliers, consumers, they’re all in the same market.
0:07:09 And so all of this dispersed information comes to be embedded in the prices and kind of remarkably, the price can sort of know more than any person in the market.
0:07:15 I just want to pause on that for a quick second because you guys might take it for granted, but that’s a very profound insight.
0:07:24 Like what you’re basically saying is that it’s really surfacing what people know collectively at scale and getting at the truth in that way.
0:07:27 I mean, that’s a very profound thing, so I’m going to pause on that for a quick second.
0:07:38 Exactly. What economists have found is that these markets are actually very good at producing predictions which tend to be more accurate than polls.
0:07:50 So if you go to a prediction market, for example, the recent election with Trump and Harris, you can buy an asset which pays off a dollar if Trump wins and nothing if Trump doesn’t win.
0:07:53 Now you think about how much are you willing to pay for that asset?
0:08:04 Well, if you think that Trump has a 70% chance of winning and you go to the market and you see that the price of that asset is 55 cents, you’re going to want to buy.
0:08:11 Because you’re buying something you think is worth 70 cents, 70% chance that Trump wins and you get a dollar, and you can buy it for 55 cents.
0:08:22 So you expect to make 15 cents. And by doing that, you push the price closer to 70 cents so you can interpret the price as a prediction.
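To make the arithmetic above concrete, here is a minimal sketch in Python (the 70% belief and 55-cent price are the hypothetical numbers from the example):

```python
# Expected profit from buying one binary prediction-market contract.
# Hypothetical numbers from the example above: you believe P(win) = 0.70,
# the contract pays $1 if the event happens and currently trades at $0.55.

def expected_profit(belief: float, price: float, payout: float = 1.0) -> float:
    """Expected profit per contract: belief * payout, minus what you pay."""
    return belief * payout - price

print(expected_profit(belief=0.70, price=0.55))  # ~0.15, i.e., 15 cents
```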
0:08:30 And in the most recent election, the prediction markets were tending to predict a Trump win even when the polls were closer to 50/50.
0:08:36 Actually, the Polymarket CEO said a lot of people trust the market, not the polls, at least when it came to the election.
0:08:42 Like, do you guys agree with that or no? I’m just curious, because if that’s a place where we can quickly tease some hype versus signal.
0:08:47 I don’t think polling is dead. Polling is one of the inputs into a prediction market. It’s pretty useful.
0:08:53 I do think people need to be more sophisticated about how they poll and who they poll.
0:09:00 It’s pretty clear that a lot of people now obviously are not answering their telephones and a lot of people don’t want to talk to the pollsters.
0:09:08 So there needs to be some new sophisticated techniques, but there has to be ways of drawing information from asking people questions.
0:09:09 That’s not going away.
0:09:14 I saw on Twitter that like landline poll response rates in the olden days were like above 60%.
0:09:23 But today the response rates are like 5%, which means you’re getting like a very bad sample bias in terms of who’s willing to answer a call on a poll.
0:09:25 Like, I’ll hang up right away if someone tries polling me.
0:09:29 Yeah. And in particular, it’s not that like prediction markets will outmode polls.
0:09:34 It’s actually they’re going to lead to revolutions in technology for doing this well.
0:09:39 If anything, like the availability of prediction markets increases the incentive to conduct polls, right?
0:09:47 Like, you know, as we literally saw with the whale, you know, they went out and ran their own poll precisely because they thought they could use it usefully in this market.
0:09:54 That’s fantastic. I have to ask though, so this may seem obvious to you, but the key point is that you’re putting a price on it where people are putting skin in the game.
0:10:01 Essentially with their opinion or prediction, so to speak, and that seems very interesting and useful.
0:10:08 How is that different from betting? I mean, can prediction markets be incredibly tiny amounts that don’t have big value to be valid?
0:10:12 Like, how does the pricing part of this all work in terms of the incentive design?
0:10:17 Well, so at some fundamental level, the pricing works exactly as Alex described.
0:10:22 If you think the probability that Trump is going to win is 70%, you see the price of 55 cents.
0:10:26 If you believe your prediction, you now have, you know, an incentive to show up and buy.
0:10:31 And like, you know, if enough people have beliefs of different types and they all come into the market and they all purchase,
0:10:38 eventually the price sort of converges according to the convex combination of all of their different predictions.
0:10:47 But when you ask about, like, the size of the market: the size of a betting market doesn’t matter once people are there and they’ve already formed their opinions,
0:10:55 but it might affect their incentive to gather information, for example, you know, if the size of the market is capped at $1,000 and you think the probability is 70%,
0:11:00 you’re not going to invest like $10,000 to get a more precise estimate, right?
0:11:09 Like if the maximum possible upside for you is on the order of $1,000, you can’t possibly invest more than that to learn new information that will change your estimates
0:11:13 and thus potentially sort of like inform the information in the market even more.
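As a stylized illustration of the “convex combination” point, here is a minimal sketch. It assumes, purely for illustration, that each trader’s influence on the price is proportional to the amount they stake; real market microstructure is more complicated.

```python
# Stake-weighted aggregation sketch: the price lands between the traders'
# beliefs, pulled toward whoever stakes the most. Purely illustrative.

def stake_weighted_price(beliefs, stakes):
    """Weighted average of beliefs, with weights proportional to stakes."""
    total = sum(stakes)
    return sum(b * s for b, s in zip(beliefs, stakes)) / total

beliefs = [0.70, 0.55, 0.40]  # hypothetical traders' probability estimates
stakes = [1000, 500, 250]     # dollars each trader is willing to stake
print(stake_weighted_price(beliefs, stakes))  # ~0.61, between the extremes
```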
0:11:24 And it’s funny, I mean, we’ve been talking a lot in the wake of this most recent presidential election about, you know, sort of prediction markets as having been very strong predictors in the trend direction of what actually happened.
0:11:28 But of course, if you look at say the 2016 election, that didn’t happen at all, right?
0:11:34 You know, the prediction markets totally didn’t call Trump and they also didn’t call Brexit, which happened, I think, the preceding summer or so.
0:11:36 Oh yeah, yeah.
0:11:59 And like there, people were asking like, well, what happened? Like how did these miss this? And at the time I wrote an opinion column where I argued that this information thing was a key part of the story, that like at least at the time prediction markets were relatively narrow, both in terms of the total amount that could be, you know, sort of the total upside, the total amount that was enclosed in the market, and in terms of who participated in them, right?
0:12:11 That’s sort of like concentrated in a small number of locations and those participants, because the upside was not necessarily that high, didn’t necessarily have an incentive to go out and find out, you know, sort of like, what’s going on in other parts of the country.
0:12:18 And so you end up aggregating information just from the people who were already there, which might not be a good estimate in that circumstance.
0:12:20 I want to push back a little bit on what Scott said.
0:12:22 Oh, yeah, that’s what I want.
0:12:32 Cards on the table, I am much more of a, like, prediction market bear than Alex is. We’re both really excited about them, but there’s a stack rank in our estimates. Oh, funny.
0:12:42 Well, so I agree, you want a thick market, of course, and it helps to have people willing to bet a lot of money, because then they’re willing to invest a lot in making their predictions accurate.
0:12:54 The part which I want to push back on, however, is this idea that the market did not predict well if it predicted a 40% chance of Trump winning and Trump, you know, actually won, right?
0:12:57 Because this is what people always do, and it’s frustrating, right?
0:13:03 Because you can go back and look at individual examples and say, well, did the market predict well?
0:13:09 But that’s just like, you know, you flip a coin and it says 50% chance of coming up heads and it came up tails.
0:13:13 You say, oh, well, your probability theory isn’t very good, is it?
0:13:18 You said it had a 50% chance and it came up 100% of the time.
0:13:20 So what’s the real test?
0:13:30 Well, the real test is you need a large sample of predictions, which could be predictions from political markets, but prediction markets predict other things as well.
0:13:44 You need a large sample, and then you have to say, in the sample of cases in which the market predicted a 40% chance of a win for, you know, the Republican or whatever, how many times did the Republican actually win?
0:13:54 And what you find is that it’s pretty close: when the market predicted a 40% chance of a win, Republicans actually won about 40% of the time in those cases.
0:14:01 So in other words, there’s sort of a linear relationship that when the market predicts a high chance of winning, that happens a lot.
0:14:05 When markets predict something with a low chance of winning, that doesn’t happen very often.
0:14:08 But of course, sometimes it does happen, right?
0:14:13 You know, something that happens with only a 5% probability ought to happen once every 20 times.
0:14:17 And that’s exactly what you see with these prediction markets.
0:14:25 They tend to be more accurate than other methods of forecasting, and they tend to be not systematically biased.
0:14:31 We can talk about there’s some odd biases which are possible, but they tend not to be systematically biased.
0:14:36 So it’s not the case that something which is predicted 40% of the time actually only happens 20% of the time.
0:14:42 The markets systematically get it right: when something is predicted at 40%, it happens 40% of the time.
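A minimal sketch of the calibration test Alex describes, assuming you have a large history of market predictions and realized outcomes (the function just buckets predictions and compares predicted probability with empirical frequency):

```python
# Calibration check: group predictions into probability buckets and compare
# each bucket's average predicted probability with how often the event
# actually happened. A calibrated market's ~40% bucket hits ~40% of the
# time, and its ~5% bucket hits about once every 20 times.

from collections import defaultdict

def calibration_table(predictions, outcomes, n_buckets=10):
    """predictions: probabilities in [0, 1]; outcomes: 1 if event happened."""
    buckets = defaultdict(list)
    for p, y in zip(predictions, outcomes):
        buckets[min(int(p * n_buckets), n_buckets - 1)].append((p, y))
    for b in sorted(buckets):
        pairs = buckets[b]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(y for _, y in pairs) / len(pairs)
        print(f"predicted ~{mean_p:.0%}, happened {freq:.0%} (n={len(pairs)})")

# Demo with synthetic data: 200 events priced at 40%, happening 40% of the time.
preds = [0.40] * 200
outs = [1] * 80 + [0] * 120
calibration_table(preds, outs)  # prints: predicted ~40%, happened 40% (n=200)
```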
0:14:46 Let me give a simple non-market example, which I think illustrates this. It’s kind of a famous one.
0:14:48 People have heard of the wisdom of the crowds, right?
0:14:52 And so you ask people, how much does this cow, does this cow weigh?
0:14:58 And people are not that good at, you know, figuring out how much a cow weighs, some are too high, some are too low.
0:15:07 But if you take the median prediction of how much the cow weighs, the median prediction tends to be very, very accurate.
0:15:13 So in a sense, the crowd knows more than any individual predictor knows.
0:15:21 And in the same way markets do the same thing, they embed in the price more information than any single individual knows.
0:15:25 Right, and just to be super precise, like you’re specifically saying the median, not the mean, not the mode.
0:15:31 It has to be like the exact middle point, literally not like averaging out from the extremes.
0:15:33 In that particular example, yes.
0:15:35 In that particular example, but it varies by context.
0:15:36 Got it.
0:15:37 Exactly.
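A quick simulation of the cow example (all numbers made up): individual guesses are noisy, but the median of many guesses lands close to the truth.

```python
# Wisdom-of-the-crowd sketch: 1,000 noisy guesses of a cow's weight.
# Some guesses are far too high, some far too low, but the median is close.

import random
import statistics

random.seed(0)
TRUE_WEIGHT = 1200  # hypothetical cow weight in pounds
guesses = [random.gauss(TRUE_WEIGHT, 250) for _ in range(1000)]

print(statistics.median(guesses))  # lands near 1200, unlike most single guesses
```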
0:15:43 Let me build on that and, like, illustrate it again, sort of like through a simple example, but in the language of the price system.
0:15:52 So when you’re going around and polling people about the weight of a cow, you do have to go around and ask them and they don’t necessarily have a strong incentive to figure it out.
0:16:00 But suppose you have a very large amount of money to invest in commodities or commodities futures or something of the sort.
0:16:05 And you have a predictive model that tells you what you think is going to happen to these markets.
0:16:12 Like you have reason to believe that there’s going to be a big shortage of oil or surplus of orange juice or something of the sort.
0:16:21 You can buy and sell in the market in a way that reflects that estimate that you have and it pushes the price accordingly.
0:16:22 Right.
0:16:25 So if you think there’s going to be a big shortage of oil, you’re going to stockpile oil today.
0:16:32 You’re going to buy a lot of it today and that’s going to push up the price because, you know, suddenly there’s more demand than there was before.
0:16:41 And so when you see the price of oil going up, it’s like it’s a signal that somehow people think oil is more valuable right now than it was five minutes ago.
0:16:48 By the way, of course, you know, these are all hypotheticals, like none of this is investment advice, like people should not go out and like buy a bunch of oil or oil futures or whatever.
0:16:53 But like conceptually, that’s how the price reflects the information.
0:16:59 And the more strongly you believe that there’s going to be a shortage, the more you’re going to be willing to pay to buy right now.
0:17:00 Right.
0:17:06 The sharper the price movement, even, sort of, the stronger the inference about the information that that buyer brought to the market.
0:17:07 Yeah.
0:17:11 If you want to know whether there’s going to be a war in the Middle East, keep an eye on the price of oil.
0:17:13 I remember that as a child in the 80s.
0:17:14 And it’s still true today.
0:17:15 That’s exactly right.
0:17:18 And know that the oil market is a little bit of a prediction market too.
0:17:19 Right.
0:17:25 The oil market is revealing information about people’s beliefs about things that are correlated with the availability of oil.
0:17:26 Yeah.
0:17:27 Like whether there’s a war in the Middle East.
0:17:32 Well, that actually goes perfectly to the question I was about to ask because I still want to dig a little bit more into the economic and market foundations.
0:17:36 And then we can go more into the challenges of prediction markets and where they’re going.
0:17:43 But on that very note of oil, actually a great example, Scott, the question I wanted to ask you both is where does this break?
0:17:48 Because in the oil example, one could argue, well, it’s not like a quote pure market.
0:17:49 You have cartels.
0:17:51 You have other forces at play.
0:17:59 Now, you might be saying it doesn’t matter because all that matters is people’s opinions, which is what the prediction market is taking as its inputs.
0:18:00 Or doesn’t it matter?
0:18:03 I guess my question is really getting at what are the distortions that can happen here?
0:18:12 Like there are things that can manipulate it or other distortions where people’s behavior changes so significantly that they untether the market from reality.
0:18:13 Yeah, sure.
0:18:20 I mean, one of the things about these markets, you know, oil predicting possibility of war in the Middle East, of course, they’re not designed to do that.
0:18:21 Right.
0:18:25 In those cases, the information is sort of a leakage.
0:18:28 It’s an unintended consequence of market behavior, which is very useful.
0:18:34 You know, it’s very useful for economists to be able to pull information out of these market prices.
0:18:45 It’s with the creation of prediction markets, the first ones of which really go back to the Iowa political prediction markets, created in 1988.
0:18:53 It was there almost for the first time that a market was created in order to produce information.
0:18:54 Right.
0:19:04 So there’s a much more direct connection between the output of the market, the prices on the market and the predictions, because that’s what they were designed to do.
0:19:14 Now, of course, you’re totally correct that if you want to get a market to predict the future, you’re going to want, as Scott said earlier, to have lots of people.
0:19:25 Because you’re going to take advantage of all the dispersed knowledge because, you know, there are people in Pennsylvania who have extra knowledge, you know, about what their neighbors are talking about.
0:19:31 You know, that can give them a little bit of insight, right, that you might not have if you’re living in New York or San Francisco.
0:19:40 So you want lots of people to participate and you want the markets to be quite thick, because you want people to want to kind of invest some time and energy in the prediction,
0:19:45 maybe apply some models to it, things like that.
0:19:50 And of course, you want it to be free and open and you have to be a little bit worried about manipulation.
0:19:51 Yeah.
0:19:55 There are some like funny edge cases that we’ve seen crop up occasionally.
0:20:07 In fact, there were even allegations that maybe that was going on here, where there’s some external outcome, or even some, like, internal behavioral outcome, that conditions on the prediction.
0:20:08 Right.
0:20:22 So if like political candidates are going to decide how hard to campaign in a given state based on what the prediction for that state says, you might want to influence the price, not for the sake of earning money in the prediction market, although that might happen too.
0:20:28 But rather because you just want to place the prediction in a given position.
0:20:34 Now, that’s very hard to do because you actually have to change beliefs to do that. In the, I think it was the Obama versus McCain campaign,
0:20:38 somebody tried to sink in a bunch of money to move the McCain percentage.
0:20:46 And then, you know, people who had estimates that the Obama probability was higher just sort of arbitrage that out over an hour or two.
0:20:47 Right.
0:20:48 You know, markets work.
0:20:51 If you see something that looks to you like a market anomaly, you buy or sell accordingly.
0:20:52 Yes.
0:20:53 Yes.
0:20:54 I mean, we’re all market purists here.
0:20:55 So that seems like that’s working.
0:20:56 Yes.
0:21:02 But if the market is thin or if the information signals are very dispersed, maybe you can convince people, right?
0:21:08 If you have enough money to, like, swing the market in a very sharp way, especially if you’re doing it through sybils, like, many identities.
0:21:20 If you’re doing it through many identities, where it looks like a surge of people who have a given belief, you might actually change the beliefs of the market participants in a way that actually distorts the probability and could have various other impacts.
0:21:27 And then the other thing is this idea that the oil markets are leaking information.
0:21:28 We’ll stick with that example.
0:21:33 The oil markets leak information about potential conflict in the Middle East, right?
0:21:35 That’s a feature and a bug, right?
0:21:39 The fact that it’s an oil market that is informative about the Middle East.
0:21:46 On the one hand, as Alex said, it means that the market is not optimized for specifically answering the question, what’s going to happen in the Middle East?
0:21:52 There’s lots of other stuff that affects the oil market, like how popular electric vehicles are at that given moment in time, right?
0:21:55 So you have this very complicated signal extraction problem, right?
0:21:57 You see a big spike in the oil price.
0:22:06 Is it because there’s a potential conflict coming in the Middle East or is it because there’s just been, like, some new electric vehicle test that failed and, like, somebody knows that.
0:22:09 And so they know that oil is going to be more important next month.
0:22:12 Whereas if you have a market that’s just predicting, will there be a conflict in the Middle East?
0:22:14 That’s all it’s predicting.
0:22:16 But of course, that’s now a zero-sum market.
0:22:22 It’s sort of a harder market to participate in if you only have dispersed information, right?
0:22:29 If you don’t actually know whether there’s a conflict in the Middle East forthcoming, but know that some things that are happening, like sort of suggest that.
0:22:31 For example, you saw an oil price change.
0:22:41 You have to do a much more complicated inference, and you’re taking a slightly, in some ways, riskier bet by participating in a prediction market where you’re staking everything on this one outcome rather than on something that’s, like, heavily correlated,
0:22:49 where there are many different things that could have related predictions, which could be mostly correct even if your main prediction is wrong.
0:22:53 So the takeaway is a prediction market’s narrowness is a feature and a bug.
0:22:59 It’s sort of dual to the sense in which ordinary markets’ broadness is a feature and a bug, right?
0:23:09 Because a prediction market is a narrow zero-sum contract on a specific event, many people’s information about that event is actually coming from all these correlates.
0:23:15 It’s not that they know specifically like is there a conflict coming in the Middle East as they see a lot of potential signals on it.
0:23:23 And so if you’re buying and selling in a market that responds to those signals, that sort of, like, insures you a little bit, right?
0:23:35 If you get the main estimate wrong, but all your signals were correct, you know, you’re at less risk than if you go into a prediction market and had all the signals right but the final estimate wrong and then, you know, you’re just betting on the wrong side of the event.
0:23:40 I think what Scott said also has implications for why don’t we have prediction markets in everything?
0:23:45 I mean, if these markets are so great and they work so well at predicting things, you know, why don’t we have more of them?
0:23:49 And I think Scott was basically giving the answer there. This is how I would put it.
0:23:58 You know, if you have the market for oil, then there are lots of people who are buying and selling oil who are not interested in what’s going on in the Middle East.
0:24:18 Okay. They’re not trying to, you know, predict that, right? But it’s precisely because you have lots of sort of organic demand and supply that this provides a subsidy to the sharks who go in there in order to make the price more accurate.
0:24:29 Or take the example of, you know, wheat. There are lots of farmers who are buying and selling in the market for wheat just to insure themselves, just to hedge themselves.
0:24:47 And it’s because of that native organic demand that the market is thick enough that you then have all of the sharks who are not themselves farmers, but they go in there and they use models and techniques and whatever to predict which way the market for wheat is going to go and they make that market more accurate.
0:25:01 Now, if you didn’t have the organic demand, then you’re going to have a market with just sharks in it, no farmers and just sharks. And who wants to be in a market where you’re only with other sharks, right?
0:25:11 If I know that the other guy is just trying to predict this one thing as much as I am trying to predict it, you know, I don’t want to be in a market with Scott. He’s just too smart, right?
0:25:14 Right. I would say the same thing about you.
0:25:36 And that’s why the market wouldn’t work. So some of these markets, even though they might be forecasting something which is useful, there isn’t enough organic demand, and you have to subsidize it from outside the market in order to get a useful prediction out of it, in order to get the sharks willing to go against one another to try and predict this thing.
0:25:43 And that’s why we don’t have markets in everything yet, potentially. This is maybe jumping ahead a little bit, but I just have to ask at this point.
0:25:59 I mean, Scott, you’re like a market design expert. So on the market design front, what does that mean if there isn’t organic demand? Is there a way for market designers to essentially create markets in situations where there isn’t that kind of latent organic or existing thing to harness?
0:26:07 Like, can you actually manufacture that market without distorting it and kind of create conditions that could design a market into place?
0:26:16 That’s a great question. I mean, there are two different ways to get at it. One of them is, which is sort of what the framing of the question is pointing at is, could you find a way to create latent demand?
0:26:27 And Alex was saying you could subsidize it, right? You could basically like somehow subsidize the experience of some people trying to predict this event. Like, you know, subsidize a bunch of college students developing forecasting models.
0:26:32 So then they have a lower cost of entering the prediction market or something of the sort. Again, not advocating this specific policy.
0:26:43 Right. Although in Alex’s example, that subsidy was not intentionally a subsidy. It’s just a result of the behavior. Like, it wasn’t like people are trying to subsidize. It was just subsidizing because of their natural behaviors.
0:26:56 True. No, exactly. So like, we started this conversation with the recent presidential election and all of these other associated elections. Those have proven at least in practice to be much thicker markets because there are some people who seem just interested in betting on them.
0:27:05 Right. A lot of people have some amount of information, some amount of opinion. And so there’s a little bit of that latent demand that sort of comes from people’s general interest in the question.
0:27:14 Yeah. One could try and create that for other contexts, right? You can try and like help people feel that something is interesting or feel that they have an opinion about it enough that they’re willing to participate in a prediction market.
0:27:24 The other thing you can do is you can use other types of information elicitation mechanisms. Prediction markets are one of many ways of doing incentivized information aggregation.
0:27:40 And others are things like incentivized surveys or peer prediction mechanisms. There’s a whole class of what are called peer prediction mechanisms where what you’re in effect doing is asking people what they believe about an outcome and what they think other people will believe about an outcome.
0:27:50 And then you use sort of their beliefs about others as a way of cross-examining whether they were telling the truth because you survey a lot of people, you get sort of like your crowd volume.
0:28:02 You sort of know the aggregate belief of the population and you can check whether someone’s own belief about the population is sort of the right mixture of that aggregate belief and their belief.
0:27:18 So like, if you yourself think that Trump is more likely to win, then you yourself are more likely to believe that other people think Trump is likely to win, because from the frame you have, your information sort of, like, indicates that at least one person in the market believes that.
0:28:27 And so one can cross-examine your predictions with your estimate of what the population believes and what the population actually reveals that they believe.
0:28:35 And then you can reward people based on how well they did, in effect, like how good are you at estimating what everyone else thinks given what you think.
0:28:43 And those sorts of mechanisms you can incentivize, you can pay people immediately, incidentally, unlike prediction markets where the event has to be realized, the payment’s only realized at the end.
0:28:50 Here you’re not paying people based on their accuracy about the event, you’re paying them based on their accuracy about everyone’s estimates.
0:28:56 And so you can do that all at once, right? Collect all the estimates, pay people, they go home, you have your estimate.
0:29:06 And these have been shown in practice to be very effective for small population or like opinion estimates, things where there isn’t a thick market and like a very, very big public source of signal.
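A toy sketch of the peer-prediction idea, simplified for intuition rather than taken from any specific mechanism in the literature: score each person’s estimate of the crowd against the realized average of everyone’s reports, so payment doesn’t have to wait for the event.

```python
# Toy peer-prediction scoring. Each respondent reports (a) their own belief
# about the event and (b) their estimate of the population's average belief.
# We score (b) against the realized average of all the (a) reports with a
# quadratic rule, so everyone can be paid immediately.

def peer_prediction_payouts(reports, budget=1.0):
    """reports: list of (own_belief, estimate_of_population_mean) pairs."""
    pop_mean = sum(own for own, _ in reports) / len(reports)
    # Closer guesses about the crowd's average earn more.
    return [budget * (1 - (est - pop_mean) ** 2) for _, est in reports]

reports = [(0.70, 0.60), (0.50, 0.55), (0.40, 0.50)]  # hypothetical survey
print(peer_prediction_payouts(reports))
```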
0:29:18 That really answers that question. And by the way, it brings up a very important point that we did not address in the recent example of the election, which is the French quote, “whale,” who won by using the neighbor poll where, you know,
0:29:26 their neighbors won’t say what they think, but when you ask them, like, who do you think your neighbors are going to vote for? It’s kind of a way to indirectly reveal their own preferences.
0:29:32 And that’s the so-called neighbor poll. I don’t know if that’s a standard thing or that just came up in this election. It’s the first time I heard of it.
0:29:40 But it’s a great example of something I did study in grad school when I was doing ethnography work, which is: never trust what people say they’re going to do; trust what they actually do.
0:29:43 This goes to your economist world of revealed preferences.
0:29:44 Absolutely.
0:29:59 Right. Very similar. But anyway, in that case, that person polled his neighbors and then used that data essentially off-chain to then go back onto the market, Polymarket in that case, to up his bet and essentially won big as a result.
0:30:06 So, like, that would be an example of what you were mentioning. Although in that context, you were mentioning it in how can we address a case where there’s a thin market.
0:30:10 This is a case where that played out in a thick market of the election.
0:30:16 Well, you might say that he was using this in the thin market of trying to understand his neighbors’ sort of, like, local preferences and estimates, right?
0:30:18 There you go. That’s more precise, yeah.
0:30:22 Although we actually don’t know the details of how he produced these estimates. It doesn’t sound like they were incentivized.
0:30:35 So, it’s not exactly like what I was talking about with peer prediction, but you’re right. It’s the same core idea that, like, using people’s beliefs about the distribution can be much more effective than using their personal beliefs a lot of the time.
0:30:45 So, I would underline two things there. One, yeah, the market is a way of bringing all of this dispersed information and creating an aggregation, but it’s not the only way.
0:30:58 That’s kind of what Scott is saying, right? And the market is one of the first information aggregation mechanisms which we have studied and understood reasonably well, but there are other ones.
0:31:12 And so, you can think about prediction markets as being one example of a class of mechanisms which take dispersed information and pool out of that some knowledge which none of the people in the market,
0:31:18 none of the people you polled, might individually be aware of, and yet somehow it is in the air, as it were.
0:31:20 That’s fantastic.
0:31:30 There are also other ways of subsidizing these markets, which is something that corporations may be very interested in doing,
0:31:41 because corporations are interested in forecasting the future, and some of them in the past have created their own internal prediction markets.
0:31:45 So, one famous example of this is Hewlett-Packard.
0:31:55 They were interested in forecasting how many printers are going to be sold in the next quarter, in the next two quarters, three quarters, four quarters, and so forth.
0:32:06 So, they created a market where if you correctly predicted how many printers would be sold in which time period you could earn money, and they subsidized that market.
0:32:12 So, everybody going in, which is just HP employees, got like $100 to play with.
0:32:21 So, that’s a way of trying to get more people involved and interested in playing on these markets to elicit this information.
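The conversation doesn’t specify how HP’s internal market was implemented, but one standard way a sponsor can subsidize a thin prediction market is Hanson’s logarithmic market scoring rule (LMSR), where the sponsor acts as an automated market maker and the parameter b caps its maximum loss (b times ln 2 for a binary market). A minimal sketch:

```python
# LMSR sketch for a binary market. q_yes and q_no are outstanding shares;
# the instantaneous YES price is interpretable as a probability, and the
# sponsor's worst-case loss is b * ln(2).

import math

def lmsr_cost(q_yes: float, q_no: float, b: float) -> float:
    """LMSR cost function over outstanding YES/NO shares."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price_yes(q_yes: float, q_no: float, b: float) -> float:
    """Current YES price: exp(q_yes/b) / (exp(q_yes/b) + exp(q_no/b))."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

b = 100.0
print(lmsr_price_yes(0, 0, b))  # 0.5 at launch, before any trades
cost = lmsr_cost(50, 0, b) - lmsr_cost(0, 0, b)  # cost of buying 50 YES shares
print(round(cost, 2), round(lmsr_price_yes(50, 0, b), 3))  # price moves above 0.5
```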
0:32:25 That example is actually really interesting to me, because when I was at Xerox PARC, we talked about that.
0:32:32 And one of the things that came up is it’s a very useful mechanism to your point, Alex, for getting certain things right.
0:32:40 But it is not a useful mechanism for actually figuring out the future in terms of what to invent, because it doesn’t address a case of you don’t know what you don’t know.
0:32:41 You only know what you know.
0:32:42 And this came up just yesterday.
0:32:46 Trump announced his candidate for attorney general.
0:32:53 And one of the examples someone cited on Twitter was it’s the first time they’ve seen a Polymarket contract resolve to zero for all potential outcomes,
0:32:58 because Gaetz wasn’t even listed among the 12 potential nominees in that range of possible outcomes.
0:33:05 So, that’s an example in that case where you have to have the right information itself in that prediction market.
0:33:13 And maybe you guys can explain that a little bit more too really quickly, because I think that HP example is super interesting on multiple levels.
0:33:19 Yeah, so these markets are good when you figure people have got some knowledge and it’s hard to aggregate that knowledge.
0:33:26 The other thing they’re good at, you know, the people have run these markets for predicting when a project will be complete, right?
0:33:28 And this is a classic case.
0:33:35 But if you ask people, they’re going to be, oh, no problem, it’ll be ready in five weeks, you know, whatever, right?
0:33:37 They’re very optimistic.
0:33:40 And yet they tell the boss it’s going to be ready in five weeks.
0:33:44 But they go back and tell their friends, oh my God, it’s delayed, we’ve got all these problems.
0:33:50 But if you let people bid anonymously in these markets, then the truth comes out.
0:34:00 So this is a way of the corporate leaders can learn information that their employees know but are not willing to tell them, right?
0:34:08 But to your larger point, yeah, I mean, nothing is more difficult to predict than the future.
0:34:10 Right, right.
0:34:13 And, you know, Trump is a chaos agent, right?
0:34:16 Whatever he’s going to do, like, it is hard to predict.
0:34:20 And I agree, I don’t think anybody predicted Matt Gaetz.
0:34:28 Well, and indeed, actually, so this sort of highlights, you know, we were talking about what prediction markets are good at versus where you might want to use other sorts of information elicitation mechanisms.
0:34:40 The two examples that Alex gave of within company prediction markets, you know, predicting sales or sales growth or something that’s like, you know, a metric that many people in the firm are tracking and have different windows of information into.
0:34:49 Predicting when a product is going to launch where, like, you know, you might have product managers who know something, you might have engineers who know there’s a hidden bug that they haven’t even told the product managers about yet.
0:35:01 Again, it’s like, these are contexts where many of the people in the company have some information that only they have and that the aggregate of all that information is a pretty good prediction of the truth.
0:35:06 Because the actual outcome is the aggregate of all those people’s information directly, right?
0:35:12 It’s like, how many sales calls are you making that are succeeding? Or, you know, how is the coding for this specific feature going?
0:35:20 By contrast, you mentioned with Xerox PARC, you know, trying to predict whether a new sort of totally imagined product is going to succeed.
0:35:25 Well, that’s really, really hard. And it doesn’t rely on information in particular that the company has, right?
0:35:32 Like, yes, the company has some idea of what products people might buy, but you might be like, you know, AT&T and invent the first picture phone or something of the sort.
0:35:39 And like, you thought that was a great idea, but you don’t actually know until you put in the market and see whether people are like interested in using it.
0:35:44 And so even with the aggregate of all the information in the company, there, there’s a product they went through with, right?
0:35:48 They concluded was a good idea based on all the signal that everyone in the company could see and it still flopped.
0:35:55 The total information in the company wasn’t high enough to actually like provide the right answer even when aggregated.
0:36:04 Right. But I do think there is a sort of subtle distinction between wisdom of a random crowd and wisdom of an informed crowd, right?
0:36:12 Like, again, with our Hewlett-Packard example, Hewlett-Packard sort of knows that if you’re trying to figure out now like, you know, whether a product could launch on time,
0:36:15 a random person on the street has no information about this.
0:36:19 You don’t want to like pull together a focus group of miscellaneous Hewlett-Packard customers and ask them,
0:36:23 “When do you think we’re going to finish designing our new printers?” Right? I don’t know.
0:36:26 Like you released a printer last year, probably next year, maybe. Who knows?
0:36:32 And so there is this question, are you learning things from the right crowd?
0:36:36 You know, you could have the best incentivized information elicitation mechanism on the planet.
0:36:42 And if you only survey people who don’t know anything at all about the topic, you incentivize them.
0:36:46 You’ll learn what they believe truthfully, but you won’t be able to do anything with it.
0:36:51 Yeah. And then back to the future, like the whole idea of the best way to, you know, predict the future is to invent it.
0:36:56 Like that goes, just like Jobs and the, you know, the iPhone. Like, you can ask a million people, will they ever use a touch phone?
0:37:02 People’s behaviors can also evolve and change in ways that they themselves are not aware of, which is that other, that example.
0:37:05 Yeah, a prediction market, it’s like a candle in a dark room, right?
0:37:10 I mean, it helps us see a little bit, but there are still areas where you can’t see very far.
0:37:13 Great. I’m going to ask a couple of quick follow-up questions from you guys so far.
0:37:20 So just to be super clear. So thin versus thick, you guys are talking about the depth of the market, like in terms of the number of participants.
0:37:25 Thin is too few. Thick is many. Is that correct or is there a better, more precise way of defining that?
0:37:31 Yeah. So, I mean, in the prediction market, a thin market is few people betting small amounts.
0:37:39 And in fact, one of the problems we’ve had is that prediction markets are mostly illegal in the United States.
0:37:47 So the biggest one in this past election was Polymarket, which was illegal for U.S. citizens to bet on.
0:37:51 That’s slowly changing, but we do have this kind of ridiculous situation.
0:38:00 I think it’s ridiculous anyway, that we have huge markets in sports betting, gambling, huge, huge markets.
0:38:08 And we allow that, and yet here we have a kind of gambling market, a prediction market, where the output is actually really quite useful.
0:38:11 It’s quite socially valuable, and we don’t allow it.
0:38:22 So making these markets legal and open to more U.S. citizens would thicken those markets, make them more accurate, attract more dispersed information.
0:38:25 And I think would be really quite useful.
0:38:31 But to your bigger point, Alex, you’re basically arguing that they can be a public good in the right context informationally.
0:38:32 Absolutely.
0:38:40 And interestingly, if you think about some of these prediction markets that are getting served notices and whatnot, and we don’t know why to be clear,
0:38:46 but it’s interesting because in some cases, people might argue some people trying to get information is a manipulation of the market.
0:38:54 But in fact, to your guys’ entire point throughout this discussion, it’s actually ways to provide more input of information into the market itself, too.
0:38:57 So that’s kind of an interesting point on the public interest side.
0:39:01 Let me give you another example on this public good nature of these prediction markets.
0:39:08 One of the most interesting, fascinating uses of these prediction markets is to predict which scientific papers will replicate.
0:39:09 Oh, yeah.
0:39:18 You know, we have this big replication crisis in the sciences, psychology, and other fields as well of, you know, lots of research and it doesn’t replicate.
0:39:24 Well, what some people have done is it’s expensive to replicate a paper.
0:39:30 But one thing people have done is to have a betting market, a prediction market, in which papers will replicate.
0:39:33 And that turned out to be very accurate.
0:39:39 And then you only have to replicate a few of those papers in order to have the markets pay off.
0:39:46 And for the rest of them, you use the prediction market result as a pretty good estimate of whether it will replicate or not.
0:39:51 So this is a way of improving science, making science better and quicker and more accurate.
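A sketch of the settlement trick Alex describes, with made-up papers and prices: run markets on many papers, actually replicate only a random subset to pay those markets off, and use the remaining prices as replication estimates.

```python
# Replication-market settlement sketch. Only the sampled papers incur the
# cost of a real replication; the rest are estimated from market prices.

import random

random.seed(1)
market_prices = {"paper_A": 0.85, "paper_B": 0.30, "paper_C": 0.55, "paper_D": 0.10}

to_replicate = set(random.sample(sorted(market_prices), k=2))  # the costly step
for paper, price in market_prices.items():
    if paper in to_replicate:
        print(f"{paper}: run the actual replication and settle the market")
    else:
        print(f"{paper}: take market price {price:.0%} as the replication estimate")
```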
0:39:52 I love that.
0:40:01 I ran a lot of op-eds when I was at Wired on open access and science and kind of like evolving, you know, peer review and replication crisis and the whole category and theme.
0:40:05 So it’s very exciting to me to hear that that’s something that we can do to address that.
0:40:20 It leads to a quick follow-up question, which actually happens to be on my list of follow-up questions for you in the lightning round of this, which is about when you guys were talking earlier about just kind of tapping this intuition and information dispersed across many people into these prediction markets.
0:40:25 One of the first questions that came to mind is, do you need domain experts or does that actually distort a market?
0:40:30 And this actually comes up as a perfect segue from your point, Alex, that example of scientific papers.
0:40:41 Because that’s a case where one would imagine that people in that industry or that domain or just other scientists who have the experience of analyzing research would be the best at predicting things.
0:40:42 But is that necessarily true?
0:40:46 And do we have any research or data into domain expertise in these markets?
0:40:48 I don’t know the answer to that last part.
0:40:51 Let me talk about the first part because it also speaks to your thick versus thin.
0:40:52 Great. Yeah, good.
0:40:59 So when Alex said a thin market is a small number of participants betting small dollar amounts, why is that a thin market?
0:41:03 It’s because the total information is small in two ways.
0:41:06 One is that there are few people bringing their own individual estimates.
0:41:08 You just have like a small number of people saying things.
0:41:19 And second, because they’re betting small dollar amounts, it’s sort of a signal that their information is not a very, like, strong signal or confident one, at least relative to what it could be otherwise.
0:41:28 You know, if you are staking a very large amount of money on this, the market inference is that you have done the research, you know, and indeed you have the incentive to do the research.
0:41:37 You know, why is the inference that you’ve done the research? Because if you’re staking a large amount of money, you should have done the research, because otherwise, you know, you’re putting money at risk without sort of full information.
0:41:39 Like the French whale who did the neighbor poll to find out.
0:41:40 Right, exactly.
0:41:46 And one can argue about how good or bad that new poll was or whatever, like whether he should have trusted his information that much.
0:41:57 But it’s unambiguous that part of his confidence and he said this part of the confidence that he had to make that huge bet was that he thought he had a signal that was accurate and the market had missed.
0:42:05 And so like thickness and thinness, like the proxy for the way we think about measuring it is how many people and how much are they staking?
0:42:09 How much value are they putting behind their beliefs?
0:42:11 Thickness and thinness is really in terms of the information.
0:42:20 It’s like, do we have a lot of different signals of information that are strong coming together and mixing to determine the price?
0:42:24 Or is it really just like a very small number of pretty uninformed signals?
0:42:33 That’s the tension when Alex says it’s a problem that the biggest prediction market on the US election was not actually in the US and was not legal to participate in from the US.
0:42:38 Well, yeah, a lot of the information, a lot of the like real signal is in the United States.
0:42:44 And so without those people being able to participate in the market, you miss at least sort of a lot of that to a first order, right?
0:42:48 You know, people internationally will be figuring out ways to aggregate and sort and try and use it.
0:42:52 But like you miss a lot of the people who have that information already at their fingertips.
0:42:55 And so you ask about domain expertise.
0:43:01 It’s not exactly domain expertise versus not, but rather information richness.
0:43:09 For example, in predicting scientific replication success or failure, domain experts are especially well equipped to do that, right?
0:43:16 Like a random person chosen off the street, you know, you can tell them a scientific study and maybe they’ll have an instinct one way or another, whether they think they believe it.
0:43:22 But like a lot of the detail of figuring out whether something will replicate comes from knowing how to read the statistical analyses,
0:43:26 trying to understand the setup of the experiment and like the surrounding literature.
0:43:30 And so their domain experts have a particularly large amount of information.
0:43:40 If you think about something like a political betting market, maybe domain experts who are focused in the world of politics and polls and so forth have, like, a big slice of the information, they do.
0:43:49 But there also might be other categories of people, like people who know that their neighborhood has like recently switched its political affiliation in a way that isn’t yet captured in the national polls.
0:43:56 Or our French whale who went and ran his own sort of poll using a custom chosen method.
0:44:04 And so the context of the question the prediction market is trying to evaluate, and this is, like, true for any information elicitation problem,
0:44:14 not just prediction markets, right? The context of the type of information you’re trying to learn tells you something about who has the most information to bring to the market and thus who it’s important to have there.
0:44:16 Yeah, I agree with everything Scott said.
0:44:25 One of the interesting things is you often don’t know who the domain expert is, right, until after the market has been run.
0:44:33 So, of course, it’s absolutely true that, you know, if you’re going to be predicting political events, you want people who are interested in politics.
0:44:38 If you’re predicting scientific articles, people need to be able to read stats and things like that.
0:44:48 But one of the guys in the scientific replication markets paper, he made like $10,000, was just one of these super obsessive guys, right, who just really got into it.
0:44:53 And, you know, was running all kinds of regressions and was doing all kinds of things and stuff like that.
0:45:08 And so when you say domain expert, I think one of the virtues of these prediction markets is that they’re open to everyone and they don’t try and say, oh, no, only the experts, you know, get to have a voice, right?
0:45:14 It’s more that only ex post do we learn, hey, who really made some money at these markets?
0:45:15 Absolutely.
0:45:22 I’m so glad I asked you guys about the definition of thick versus thin because you guys gave me so much interesting nuance to that.
0:45:28 Because people, I think, following this podcast definitely understood what you meant about thin versus thick early on, but you guys just took it to a new level.
0:45:31 If you’re so smart, why aren’t you rich? Hey, I am rich.
0:45:32 Yes, exactly.
0:45:34 I made some money in this market.
0:45:41 Well, and that again, that’s about the incentives where we talk about like the dollar value staked, like the amount of money someone is staking on their prediction.
0:45:50 Again, in equilibrium, it should be a measure of their confidence, how confident they are in their own beliefs and how much effort they’ve put in to learn the information to be precise.
0:45:58 And so exactly as Alex says, one person who might be really good at predicting a scientific replication failure is someone who works in that exact same area.
0:46:06 Another one, it might be someone who just like enjoys doing this for fun and like has never had a real incentive to triple down on doing it, but now suddenly they can.
0:46:07 Right, right.
0:46:12 And by the way, Scott, does it have to be dollar and price incentives?
0:46:17 I’m asking you this question specifically because you and I have done a lot of pieces in the past on reputation systems.
0:46:24 And I almost wonder if the skin in the game can just be karma points and not even any money because I think from a pride perspective, 100%.
0:46:27 So like Alex mentioned subsidy, right?
0:46:35 Like one way that you can subsidize, I think he said Hewlett Packard subsidized by giving all their employees $100 and saying spend it all on this market.
0:46:45 You can subsidize people with cash, but you can also subsidize them with tokens or, you know, reputation or like various other sources of value.
0:46:52 And one of the advantages of using tokens is that that way you can deliver a subsidy that’s sort of only useful in this market, right?
0:46:56 You know, if it’s like a personal non-transferable token, but I give you a bucket of them.
0:47:04 And the only thing you can do with it is use it to enter predictions and you just choose which prediction markets you choose to enter into and how much you spend in each one, right?
0:47:06 And then you earn payoffs.
0:47:14 Payoffs are also measured in tokens, and maybe downstream you might get prizes for having large numbers of tokens or something, you get to join the elite predictor force, or it even just serves as a measurement of your reputation.
0:47:18 How good you are at making predictions, which maybe you leverage into something else, right?
0:47:22 Like people who win data science contests leverage that into data science jobs.
0:47:25 Maybe you like leverage this into a forecasting job or something.
0:47:27 All of that.
0:47:37 So long as you find people who are willing to be incentivized by those types of outcomes, you can subsidize their participation in a unit that locks them into the market, right?
0:47:45 Where the one thing to do with it is to participate in the market, and it reinforces more and more participation among the people who are most successful and most engaged.
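A minimal sketch of the token-subsidy idea (the class, names, and numbers are hypothetical): each participant gets a grant of non-transferable tokens that can only be staked on predictions, and the running balance doubles as a public track record.

```python
# Non-transferable prediction-token ledger sketch. The grant is the subsidy;
# because tokens can't be cashed out or transferred, the only thing to do
# with them is participate, and the balance becomes a reputation signal.

class PredictionLedger:
    def __init__(self, grant: float = 100.0):
        self.grant = grant
        self.balances = {}

    def enroll(self, user: str) -> None:
        self.balances[user] = self.grant  # subsidy usable only in this market

    def stake(self, user: str, amount: float) -> float:
        amount = min(amount, self.balances[user])  # can't stake what you lack
        self.balances[user] -= amount
        return amount

    def settle(self, user: str, payout: float) -> None:
        self.balances[user] += payout  # winnings accrue as track record

ledger = PredictionLedger()
ledger.enroll("alice")
stake = ledger.stake("alice", 40.0)
ledger.settle("alice", stake / 0.55)  # e.g., won a binary bet bought at $0.55
print(ledger.balances["alice"])       # above 100: a visible track record
```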
0:47:46 That’s super interesting.
0:47:52 And I’m going to push back on you on that actually because I actually wonder if it necessarily needs to be crypto based and you can just do any kind of.
0:47:54 Oh, yeah, no, it’s any like internal marker.
0:48:04 But for all the reasons we normally know, like it’s much better to do this in an open protocol form because, for example, if the token is eventually going to be leveraged for reputation, you want anyone to be able to verify that you have it.
0:48:05 Right.
0:48:07 And if they audit it, see it, hence blockchains.
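A minimal sketch of the non-transferable “prediction credits” subsidy Scott describes, assuming a simple parimutuel-style binary market; the class and method names here are illustrative, not any real platform’s API:

```python
# Sketch of a non-transferable prediction-credit subsidy (illustrative names).
# Credits can only be spent by staking predictions; winnings stay in-system.

class PredictionCredits:
    def __init__(self):
        self.balances = {}   # user -> credits; credits never move between users
        self.stakes = []     # open stakes: (user, market_id, outcome, amount)

    def grant(self, user, amount):
        """Subsidize a participant, like the HP $100-per-employee example."""
        self.balances[user] = self.balances.get(user, 0) + amount

    def stake(self, user, market_id, outcome, amount):
        """Spend credits on a prediction; this is the only way to spend them."""
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient credits")
        self.balances[user] -= amount
        self.stakes.append((user, market_id, outcome, amount))

    def resolve(self, market_id, realized_outcome):
        """Winning stakes split the whole pool pro rata (parimutuel payout)."""
        pool = [s for s in self.stakes if s[1] == market_id]
        self.stakes = [s for s in self.stakes if s[1] != market_id]
        win_total = sum(amt for _, _, out, amt in pool if out == realized_outcome)
        if win_total == 0:
            return  # nobody backed the outcome; in this toy model stakes burn
        all_total = sum(amt for _, _, _, amt in pool)
        for user, _, out, amt in pool:
            if out == realized_outcome:
                self.balances[user] += all_total * amt / win_total
```

The point of the design, per the discussion above, is the lock-in: good predictors accumulate more of a unit that only buys more predictions, so reputation compounds inside the market.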
0:48:08 Got it.
0:48:09 Great.
0:48:10 And we’ll talk a little bit more about that.
0:48:11 Just a few more lightning questions.
0:48:12 Go for it.
0:48:15 So where does work on superforecasters, like Philip Tetlock’s, come into all of this?
0:48:17 Like, are they especially good at prediction markets?
0:48:23 Because that’s a case where they’re, like, generally better than the general public at, sort of, quote, forecasting and making predictions.
0:48:26 Is there a place for them in this world or are they kind of the outliers here?
0:48:28 Or does it not even matter here?
0:48:30 I think there’s two things.
0:48:39 I think the basic lesson of Tetlock’s work is that most people, even the ones who are in the forecasting business, are terrible forecasters.
0:48:40 Right.
0:48:51 I mean, he first started tracking so-called political experts and seeing what their forecasts were, you know, 10 years later, were they right or five years later, and they were completely wrong.
0:48:57 So he then shifted into looking for, is anybody ever right, are there super forecasters?
0:49:05 And yes, he found that some people, you know, not typically the ones in the public eye, but some people can definitely forecast better than others.
0:49:10 One of the things those people can do is then participate in these markets.
0:49:18 And by their participation, they push the market price closer to their predicted probabilities.
0:49:26 So forecasters have an incentive to be in these markets and by being in these markets, they make the markets more accurate.
0:49:30 Now, is the market always going to be more accurate than the super forecaster?
0:49:31 No.
0:49:37 I mean, Warren Buffett, you know, he has made a lot of money even though markets are basically efficient.
0:49:45 But Warren Buffett has shown that he, in many cases, is able to predict better than the market price itself and more power to him.
0:49:50 And so there are going to be some super forecasters, but they’re hard to find, they’re rare.
0:49:56 And a virtue of the price is that everyone can see it, right?
0:49:57 It’s public.
0:50:11 So this actually gets at a bigger, maybe more obvious point to you guys, but a recurring theme I’m hearing is it’s not that the prediction market is only taking in, like, guesses and people’s intuitions and bets and opinions and any information it has.
0:50:15 But theoretically done well, it’s taking in all information.
0:50:17 It could be super forecasters contributing to it.
0:50:23 It could be Nate Silver taking his 80,000 simulations and feeding his inputs and adding that signal into it.
0:50:28 It could be people who are pollsters putting their data and predictions.
0:50:31 Basically, it doesn’t even matter how people get at their intuition.
0:50:35 All that matters is that they’re pricing that information into that market, essentially.
0:50:39 Do you know the famous WallStreetBets “everything is priced in” post?
0:50:40 No, I don’t actually.
0:50:41 I don’t know this one either.
0:50:43 Let me read it just a little bit.
0:50:44 It’s a fantastic post.
0:50:46 It’s like five years ago.
0:50:50 It’s called “everything is priced in,” and it says: the answer is yes, it’s priced in.
0:50:52 Think Amazon will beat the next earnings?
0:50:54 That’s already been priced in.
0:50:58 You work the drive-through for Mickey D’s and found out that the burgers are made of human meat?
0:50:59 That’s priced in.
0:51:02 You think insiders don’t already know that?
0:51:08 The market is an all-powerful, all-encompassing being that knows the very inner workings of your subconscious.
0:51:18 Your very existence was priced in decades ago when the market was valuing Standard Oil’s expected future earnings based on population growth.
0:51:21 That is so great.
0:51:24 Okay, you have to send me that link, Alex, and then I’ll put it on the show notes.
0:51:28 So you’re basically agreeing that the market’s price has everything in it.
0:51:30 Yeah, I mean, that’s an exaggeration.
0:51:33 But yeah, I mean, anything is fair game.
0:51:38 I want to push back, but we’re fine here because anything is fair game.
0:51:43 But you have to wonder who’s going to show up to those markets and where their signals are coming from.
0:51:49 If you’re a super forecaster, maybe you work for like a super secretive hedge fund.
0:51:53 And the last thing you want to do is directly leak what it is you believe.
0:51:57 And in fact, you would prefer that the market be confused by this public signal.
0:51:59 We talked about manipulation.
0:52:06 You might show up and tank the prediction in one direction or the other just to take advantage of that in the financial market off to the side.
0:52:13 And so while in principle, these things can be very comprehensive, you still have to think about who participates in which market we’re in.
0:52:18 Just like we see in other markets where like some people trade in dark pools, some people trade in public exchanges,
0:52:23 and that selection sort of affects what information price is really aggregating where.
0:52:25 That’s fantastic. Yeah.
0:52:32 The other thing about public forecasters, super or otherwise, is that they are very salient to the average person.
0:52:37 And so another thing we see in prediction markets is herd behavior.
0:52:44 Again, just like we see in other types of markets, like, you know, if a lot of people are suddenly buying oil futures,
0:52:49 does that mean that they all have knowledge that there’s going to be a conflict in the Middle East?
0:52:54 Or does it mean they saw other people buying oil futures and are like, oh, gosh, like, I’d better do this, too.
0:52:59 Or, you know, did they see one analyst report and they all saw the same analyst report?
0:53:02 And as a result, they all went and bought oil futures because they believe the report.
0:53:10 Or worse, did they see one analyst report that said, you know, like oil is going to be expensive next quarter.
0:53:14 And they went and bought oil futures not because they believe the report.
0:53:18 Maybe they even have information that it’s not true, but they know everyone else is going to see the report.
0:53:20 And so there will be purchasing pressure.
0:53:21 Yes.
0:53:27 There’s this very famous paper by Morris and Shin in the American Economic Review called “Social Value of Public Information.”
0:53:29 Okay, I’m going to put that in the show notes.
0:53:31 It talks about information herding, right?
0:53:37 The idea is basically if you have a market where everyone has private signals and then there are some very salient public signals
0:53:39 and people have to coordinate, right?
0:53:40 You know, are you going to run on a bank or not?
0:53:43 Or like, what do you think is the probability of this thing happening?
0:53:50 People might ignore their private signals if the public signal is strong enough that they think other people are going to follow it.
0:53:51 Yes.
0:53:59 And so when a very prominent forecaster makes a prediction, like as the sort of polls were coming in and the week leading up to the election,
0:54:07 a new major poll would drop and then the prediction markets would judder around and sort of veer off at least briefly in the direction of that poll.
0:54:10 And that’s this like public information effect, right?
0:54:11 This is salient.
0:54:14 You expect a lot of market movement based on this information.
0:54:16 And so the market actually moves even more.
0:54:21 It incorporates not just the information, but also the fact that other people are incorporating the information too.
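To put a rough model behind that “moves even more” effect — a sketch, using what I believe is the standard Morris–Shin beauty-contest parameterization: each trader sees a private signal $x_i = \theta + \varepsilon_i$ with precision $\beta$, everyone sees a public signal $y = \theta + \eta$ with precision $\alpha$, and each trader’s payoff rewards both matching the fundamental $\theta$ and matching the average action, with coordination weight $r \in (0,1)$. The equilibrium action is

$$
a_i \;=\; \frac{\alpha\, y + \beta(1-r)\, x_i}{\alpha + \beta(1-r)},
$$

so the weight on the public signal, $\alpha/(\alpha + \beta(1-r))$, strictly exceeds the purely Bayesian weight $\alpha/(\alpha+\beta)$: traders overweight the salient public poll precisely because everyone else does too.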
0:54:26 And are there any market design implications for how to avoid that happening?
0:54:30 Like if you’re setting up the conditions of a perfect great prediction market?
0:54:32 Oh, gosh, that’s a great question.
0:54:34 I mean, first of all, it’s not completely avoidable.
0:54:41 You can’t have a market where a sufficiently strong public signal doesn’t generate some herd behavior, right?
0:54:43 It’s just at that level, it’s unavoidable.
0:54:46 But you can try and do things to dampen the effect.
0:54:48 Off the top of my head, I can think of two.
0:54:49 There are probably others.
0:54:52 One is you could basically like slow trading a little bit, right?
0:54:57 You could sort of like limit people’s abilities to enter or exit positions very, very quickly.
0:54:59 So it sort of forces people to like average.
0:55:03 Well, it’s also kind of an example of slowing contagion, right?
0:55:05 Like an infection spreading very fast.
0:55:06 Totally.
0:55:07 Kind of like the herding becoming viral.
0:55:10 Yeah, contagion is a very good example of what Scott’s talking about.
0:55:12 You know, in stock markets, we have…
0:55:13 Circuit breakers.
0:55:14 Circuit breakers.
0:55:15 Yes, yes, yes, exactly.
0:55:16 Circuit breakers.
0:55:17 There we go.
0:55:18 So that’s one of the ways.
0:55:25 Another thing you could do is try and refine your market contracts in a way that orthogonalizes,
0:55:31 by which I mean it sort of extracts out the signal that is independent of that signal, right?
0:55:34 So a prediction market contract somehow incorporates the information,
0:55:37 sort of like adjusted for whatever Nate Silver claims.
0:55:40 Let me give you an example because my colleague, Robin Hanson,
0:55:43 who is one of the founders of prediction markets,
0:55:45 Robin is usually many steps ahead.
0:55:47 He has a very clever proposal for this,
0:55:49 which I don’t think anyone has ever implemented,
0:55:53 but he says you have a prediction market and then you have a second prediction market
0:55:57 on whether that prediction market will revert in the future to something else.
0:55:58 Yes.
0:55:59 Oh, so genius.
0:56:00 Yes, exactly.
0:56:01 That’s the way you do it.
0:56:02 That’s the way you orthogonalize.
0:56:03 Perfect.
0:56:04 That’s way better than my example.
0:56:08 That’s so great because I was actually going to guess something like combining the reputation thing.
0:56:12 And this is essentially a way of combining reputation by having a parallel market
0:56:13 that verifies and validates.
0:56:14 Exactly, totally.
0:56:15 That’s so interesting.
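One hypothetical way to pin down the resolution rule for that second-layer, “will this market revert” contract — a sketch, with an arbitrary 50% retrace threshold that is an illustrative assumption here, not Robin Hanson’s actual spec:

```python
# Toy resolution rule for a "market on the market": the second binary market
# pays the "reverted" side if the first market's post-news jump retraces.

def jump_reverted(pre_signal, post_signal, at_horizon, frac=0.5):
    """True if the move from pre_signal to post_signal retraced >= frac of its size."""
    jump = post_signal - pre_signal
    if jump == 0:
        return False
    retrace = post_signal - at_horizon  # how far the price fell back by the horizon
    return retrace / jump >= frac

# A poll drops; the main market jumps from 0.50 to 0.62, then settles at 0.53.
print(jump_reverted(0.50, 0.62, 0.53))  # True -> the "will revert" side pays
```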
0:56:17 And by the way, that’s not futarchy, right?
0:56:18 His new thing.
0:56:22 One of the criticisms of futarchy was precisely the point that Scott made.
0:56:25 And then Robin’s response to that is, well,
0:56:29 the solution to a problem of futarchy is more futarchy.
0:56:30 Okay.
0:56:32 And by the way, just quickly define futarchy for me.
0:56:33 Yeah.
0:56:40 So Robin Hanson’s idea is, let’s take these decision markets and apply them to government.
0:56:42 Let’s create a new form of government.
0:56:44 You know, there aren’t many new forms of government in the world.
0:56:49 There’s democracy, monarchy, you know — futarchy is a new form of government.
0:56:57 And the way it would work is that instead of having politicians decide what policies to have,
0:57:03 politicians and voters would just decide on what our metric for success is going to be.
0:57:07 So it might be something like GDP would be one metric of success,
0:57:12 but you might want to adjust it for inequality or for environmental issues.
0:57:16 So you’re going to create some net statistic GDP plus.
0:57:22 Then anytime you have a question, should we pass this health care policy?
0:57:24 How should we change immigration rules?
0:57:26 Should we have this new immigration rule?
0:57:34 You have a market on whether GDP plus would go up or down if we pass this new law.
0:57:36 And then you just choose which one.
0:57:39 If GDP plus goes up, you say, okay, we’re going to do that.
0:57:43 And so people would just submit new ideas to the futarchy.
0:57:45 Here’s a proposal for immigration.
0:57:47 Here’s a proposal for health care.
0:57:50 Here’s one for science policy.
0:57:52 And then you just run a prediction market.
0:57:56 Would GDP plus go up with that or would it go down?
0:57:58 And then you choose whichever comes out.
0:58:05 So Robin expands this idea of decision markets to an entirely new form of government.
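A minimal sketch of the decision rule Alex just walked through, assuming each proposal gets a pair of conditional markets on the welfare metric, with trades in the branch that never occurs refunded; all names and numbers are illustrative:

```python
# Futarchy decision rule: enact whichever proposal the conditional markets
# expect to raise "GDP+" the most, relative to rejecting it.

def futarchy_decide(proposals):
    """proposals: dict of name -> (GDP+ price if adopted, GDP+ price if rejected)."""
    best, best_gain = None, 0.0
    for name, (if_adopted, if_rejected) in proposals.items():
        gain = if_adopted - if_rejected  # market's expected improvement in GDP+
        if gain > best_gain:
            best, best_gain = name, gain
    return best  # None means: the market expects no proposal to help

print(futarchy_decide({
    "immigration proposal": (104.2, 101.7),  # illustrative GDP+ index levels
    "science policy":       (100.9, 101.4),
}))  # -> "immigration proposal"
```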
0:58:07 That’s fascinating.
0:58:11 It relates so much to one of our partner-collaborators’ work — Andrew Hall at Stanford.
0:58:15 He studies a lot about on-chain governance and kind of liquid democracies and more.
0:58:16 That’s very interesting.
0:58:20 Thank you for explaining that, Alex, because I’ve actually never fully gotten what futarchy is.
0:58:22 People toss it around and I’m like, but actually what is it?
0:58:23 I still don’t get it.
0:58:24 So that was very helpful.
0:58:27 It also sounds like it could be the subject of like a Borges short story or something.
0:58:28 Oh my God.
0:58:29 Yes.
0:58:30 Yes, absolutely.
0:58:31 Oh gosh.
0:58:34 What was the last one that we put in the last reading list, Scott, for the founder summit?
0:58:36 Was it the Labyrinths short story collection?
0:58:37 I think it was Labyrinths, right?
0:58:38 Yeah, yeah, I think so.
0:58:40 That’s so funny.
0:58:42 So a few more questions and I want to switch to crypto.
0:58:48 So since we’re talking actually about like kind of market theories and practice in this recent segment, Alex,
0:58:51 did you want to say a little bit more about efficient markets?
0:58:52 Sure, sure.
0:58:58 So another fascinating example of how markets could leak information, which then could be used for other things,
0:59:06 is — if you’ve ever seen the movie Trading Places, you probably know that the main determinant of orange juice futures
0:59:09 is what the weather is going to be in Florida.
0:59:10 Of course.
0:59:14 So Richard Roll, who is a finance economist, had this interesting question.
0:59:18 Well, can we use orange juice futures to predict the weather?
0:59:22 And what he found is that there was information in those market prices,
0:59:26 which could be used to improve weather forecasts in Florida.
0:59:30 Kind of an amazing example, because no one, again, knew this.
0:59:36 No one was even predicting this, but this was kind of a leakage of this amazing information.
0:59:37 Fantastic.
0:59:45 Another fascinating one is, you know, Richard Feynman famously demonstrated that it was the O-rings,
0:59:52 which were responsible for the Challenger disaster, by dipping the O-ring in ice water at the congressional hearing.
1:00:03 However, economists went back, and when they looked at the prices of the firms which were supplying inputs into NASA and to the Challenger,
1:00:10 they found that the stock price of Morton Thiokol, which was the firm that produced the O-rings,
1:00:15 dropped much more quickly and by a much larger amount than any of the other firms.
1:00:21 So the stock market had already predicted and factored in that it was probably the O-rings,
1:00:27 which were the cause of the Challenger disaster even before Richard Feynman had figured this out.
1:00:30 And by the way, it’s another that ties back to your HP example in a way,
1:00:36 because if I recall, part of the backstory with the Challenger was also that it was a case of death by PowerPoint,
1:00:43 because of the way they were communicating information internally and that the format and the structure kind of constrained how that information was presented.
1:00:47 I think Tufte gives a famous case study of this in one of his many books.
1:00:51 So another way of putting that actually, which is kind of disturbing,
1:00:57 but I think you’re right in that the people on the ground, they knew this wasn’t a good idea.
1:01:04 They knew it was not a good idea to launch the Challenger on such a cold day.
1:01:10 And if there had been a prediction market of like what’s going to happen or should we do this,
1:01:17 then I think it is quite likely that that dispersed information, which no one was willing to tell their bosses,
1:01:21 you know, no one was willing to stand up and say, we should not do this.
1:01:23 Instead, it got buried in PowerPoints.
1:01:28 That dispersed information might have found its way to the top if there had been a prediction market.
1:01:31 And is this launch going to go well?
1:01:34 Exactly. Or, said another way, per the earlier definition of a prediction market,
1:01:39 it would have been another way for management to elicit better information from their employees.
1:01:42 Exactly. That is a mechanism for communication, essentially.
1:01:49 Exactly. The HP thing really kind of struck me because I just remember that as like a communication no-no for how information is presented.
1:01:55 And that’s actually a good segue, by the way, to the crypto section because I want to ask you guys,
1:02:00 and this is going to help me break some things down — you know, I love doing a good taxonomy of definitions in any podcast —
1:02:04 because one of the things we talk about in crypto is the ethos of decentralization.
1:02:07 Sometimes the information is public, on a public blockchain.
1:02:11 It’s open source, distributed. It can be real-time.
1:02:15 I don’t know if it’s necessarily accurate information, but the information can be corrected very quickly,
1:02:19 which then makes it more likely to be accurate because of the speed of revision,
1:02:23 which by the way, we also saw in the recent election, I think, compared to media.
1:02:27 One of the observations people made is that media didn’t move fast enough to update — or even want to,
1:02:33 because of biases — their polls and predictions, whereas the prediction markets were faster and self-correcting.
1:02:38 So, one question I have for you guys to kind of kick off the section about the underlying technology
1:02:41 and how it works is first, let’s tease apart all those words.
1:02:44 I just gave you, like, a big buzzword-bingo soup of words.
1:02:50 What are the words that actually matter when it comes to this context of eliciting better information
1:02:52 and aggregating that information in a market?
1:02:54 Like, what are the key qualities that we should start with?
1:02:57 And then we can talk about the technologies underlying that.
1:03:04 One way of answering that question might be, like: the largest prediction market was Polymarket,
1:03:06 the crypto prediction market.
1:03:10 And the question is, is crypto a necessary part of this?
1:03:13 And I think the answer is probably no.
1:03:15 I think, why was the crypto market particularly successful?
1:03:19 Well, because it was open to anybody in the world, barring U.S. citizens, right?
1:03:20 Yes.
1:03:23 And the market, because of that, was much thicker than the other markets.
1:03:27 So there are some prediction markets which limit people’s bets to a thousand dollars.
1:03:32 And on the crypto market, whales were betting millions of dollars on these markets.
1:03:39 So that’s why the crypto market, I think as a kind of regulatory arbitrage, became very important.
1:03:42 And, you know, now the FBI is kind of looking at this.
1:03:44 The French are looking at this.
1:03:45 Was it legal?
1:03:47 Is it violating some laws?
1:03:51 But I think the crypto part of it was not actually necessary.
1:03:52 Yeah.
1:03:57 I’m glad you pointed that out, Alex, because I think people have been kind of hyping and over-inflating the crypto part of it.
1:03:59 And I actually agree with you completely.
1:04:05 Like, I don’t know if crypto was at the heart of the way that that market works, except in those qualities you mentioned.
1:04:07 Scott, any thoughts on that point?
1:04:09 So I totally agree with all of that.
1:04:17 One thing that crypto does very well on top of being open and interoperable and transparent is it enables commitment, right?
1:04:21 You can write a piece of software that is going to run in the exact specified way.
1:04:25 It can be audited by all of the users, and then they can be convinced that it’s going to run correctly.
1:04:31 And some ways we do information aggregation have challenges with commitment.
1:04:38 If you’re going to survey people and pay them six months from now based on whether their survey estimate was accurate or not,
1:04:41 they might be worried that you’re not going to show up and pay them.
1:04:45 And so long as whatever the information is can also exist on chain, right?
1:04:51 The resolution of the uncertainty can somehow be visible on chain either through an oracle or if it were like an on-chain function to begin with,
1:04:54 like just what is the price of this asset or something.
1:05:00 You can commit in a way that you can’t necessarily or you can’t do easily without complicated contracts.
1:05:03 You can just commit that it’s going to run as expected.
1:05:10 Now, in order for that to work, your information aggregation mechanism has to be fairly robustly committed and often also decentralized.
1:05:20 Like Polymarket, by contrast, famously changed the terms of a couple of their resolutions because something happened that didn’t quite make sense
1:05:23 in the context of the way they said they were going to evaluate the outcome.
1:05:31 And so they post hoc, this is after people have already bought in under the original terms of resolution, changed the terms of resolution.
1:05:38 And so that’s like a lack of commitment that’s actually hard for markets to form when people don’t trust that they’re going to be resolved as described.
1:05:43 Right. I mean, isn’t that the most basic rule of markets — you can’t just suddenly change the rules out from under people?
1:05:48 Isn’t that why we always talk about why we don’t trust governments that don’t enforce property rights and whatnot?
1:05:49 Like you just can’t mess around.
1:05:50 No, you’re exactly right.
1:05:59 And the same way that blockchains create a form of property right that you can trust even without sort of a very trustworthy entity having established it
1:06:03 because, you know, the property right itself lives in this immutable ledger.
1:06:14 Same thing here, like you can at least in principle set up resolution contracts that are trustable and immutable and therefore expand the scope of the set of marketplaces we can configure.
1:06:23 Right. You know, it’s not just the set of tools we had when you have to be able to trust the market organizer, but actually now this sort of like, you know, commitment enables you to go further.
1:06:30 Just to break this down a little bit more because I think you said some really important things in there and I want to pause and make sure we flesh it out for our audience.
1:06:38 So first of all, based on what Alex said earlier in the case of Polymarket, one of the key points was being public and the information being out there. That’s one.
1:06:43 I mentioned earlier the example of it being updated quickly as compared to media at least.
1:06:52 You just mentioned the importance of credible commitments, and we’ve often described blockchains as computers that make commitments.
1:06:54 So that’s a third or fourth.
1:06:56 I don’t know the number count, but I’ll just keep listing the features.
1:07:02 And then you also mentioned potentially decentralized, but I couldn’t tell if it really needed to be decentralized or not.
1:07:05 Can you give me more bottom line on decentralization where you stand there?
1:07:06 Yeah, it’s a great question.
1:07:08 And actually, maybe we should have started here.
1:07:12 The necessity of all of these different features moves around with the type of market.
1:07:15 The more complicated your information elicitation mechanism is.
1:07:20 And this is especially important for the context where sort of pure information markets don’t work.
1:07:25 The more complicated your information elicitation mechanism is, the more likely it is that you want something that looks like crypto rails.
1:07:27 That’s actually good to know.
1:07:36 So like if Hewlett Packard is running an internal prediction market, first of all, it doesn’t have to be open to the entire world because you’re only trying to learn information from your employees.
1:07:38 So openness is important within the firm.
1:07:42 Maybe there’s someone in the mailroom who knows something that you don’t know they know.
1:07:45 And so you actually want that market of people to be able to participate.
1:07:54 But Hewlett Packard does not necessarily care what a person on the street thinks about printer sales and certainly doesn’t need to build the architecture to bring in like random people’s estimates of printer sales.
1:08:03 And so you need some amount of transparency because you need people to be able to see what the current price is and see whether they agree or disagree and they can sort of move the price around.
1:08:07 But in other types of elicitation mechanisms, maybe you don’t need transparency.
1:08:14 If you’re just going to pay someone based on the accuracy of their forecast down the line, you don’t need them to be able to see what else is happening.
1:08:19 You just need them to believe that you have committed and that the final accuracy is going to be transparent.
1:08:24 That they can verify that you didn’t just stiff them by like the thing they predicted happened exactly.
1:08:27 But then you said, no, it didn’t. And then you don’t pay them.
1:08:33 And so transparency is important only there with respect to the resolution, not with respect to the interim states.
1:08:40 But by contrast, like commitment is incredibly essential and needs to be believed or else the user won’t even participate.
1:08:43 Right. By the way, great that you gave the example of the transparency.
1:08:50 And I’ll let you finish your example in a second. But I’m just jumping in because it reminds me of how we talk about the things that can be done on chain and off chain
1:08:55 when it comes to scaling blockchains and approvers versus verifiers when it comes to zero knowledge or whatnot.
1:09:01 And it’s really interesting you pointed that out because I want to make sure people are listening who are builders listen to that because that means
1:09:08 you can do certain things on chain in order to whatever your goals of the design are and then put other things off chain.
1:09:13 You don’t have to have this purest view of how truth must be transparent. It’s very smart to point that out.
1:09:15 Anyway, keep going with your other example.
1:09:23 Yeah. And I completely agree by the way. I mean, like one of the things when I talk to teams, I’m constantly trying to get them to think about
1:09:29 which features of the marketplace are the most essential for market function.
1:09:34 And it varies by market context, even if eventually you’re planning on having all of these features.
1:09:40 Yeah, like as you’re deciding like which thing do we build first or like as we’re progressively decentralizing like what do we prioritize?
1:09:43 You actually have to understand the market context you’re working in.
1:09:49 That’s so smart, because it’s basically another way to hit product-market fit too, because then you’re not, like, overbuilding and over-featuring something.
1:09:51 Anyway, yeah, but keep going with your other side of that.
1:09:56 Totally 100%. So to get to the question of like when does decentralization matter?
1:10:00 Decentralization has lots of different components that might make it matter.
1:10:04 One of them is just the ability to like make these commitments even more enforceable.
1:10:09 Like, it makes it possible to be confident in function and liveness and so forth.
1:10:18 All of those things are important for a market because if your prediction market goes down the night before the election, you know, first of all, you lose the information signal from it.
1:10:23 Second of all, you lose the ability for people to participate in the market, which would sort of adjust the price and move the signal around.
1:10:34 Similarly, if you lose the ability to like resolve the truth, then maybe you can’t finally resolve the market and you have all of these bets that are sitting in limbo because the market doesn’t know what happened.
1:10:44 The key is everyone is bringing in their own information, but in order to finally resolve the contract and determine who gets the payout for the bet, you have to have the chain have a way to know what actually happened.
1:10:53 Another place decentralization is sometimes very important is in that resolution function. Like, you know, if the market is on chain, you somehow have to get what actually happened onto the chain.
1:10:59 And maybe the biggest bettor happens to also control the resolution function.
1:11:05 And so they can now sort of rob the prediction market by just lying about the resolution of the event.
1:11:09 They tell the system, like, you know, candidate A won when actually candidate B won.
1:11:15 And by the time people realize this wasn’t correct, they might not have a way to fix it — and even if they do, that person might just be gone.
1:11:21 So decentralization and resolution, just like we think about decentralized oracle sort of mechanisms, this is basically an oracle, right?
1:11:26 You have to bring off chain information on chain in a lot of these contexts to resolve the contract.
1:11:33 Or if you’re doing this in a centralized platform, the users have to trust the centralized platform to resolve the contract correctly.
1:11:37 By contrast, if the information does not need to be brought in through an oracle, right?
1:11:46 If it already lives in a system that’s verified and the resolution is like provably going to do what it’s claimed, then you don’t actually care about decentralization, say, in the discovery of the resolution.
1:11:50 You’re actually just like reading information and your commitment contract takes care of everything else.
1:11:53 And just really quick, Scott, you’ve said oracle a few times.
1:11:56 Can you actually properly define what you mean by oracle in this context?
1:11:58 I know we talk about it in crypto.
1:12:01 Yeah, and indeed oracle is not a completely uniformly well-defined term.
1:12:09 In this context, I’m talking about oracles as like a truthful source of information about what the actual resolution of the event was.
1:12:13 So if Trump won the election, the oracle tells us Trump won the election.
1:12:16 And if Harris won the election, the oracle tells us Harris won the election.
1:12:22 And the reason we’re using that is because the election is, of course, not being conducted on a blockchain — or, at least, maybe in the future we can dream.
1:12:26 But in 2024, the U.S. presidential election was very much not conducted on a blockchain.
1:12:37 And so if you’re going to have an on-chain prediction market, you somehow need the chain to be able to learn the information of what actually happened in the off-chain election.
1:12:40 And so the oracle is like basically the source of that information.
1:12:46 The key of the oracle, as God says, is to bring it off-chain and bring it on-chain.
1:12:51 I mean, the thing about off-chain is that people can look at the New York Times, right?
1:12:58 And so the New York Times is often considered an oracle in that you go by whatever is printed in the New York Times.
1:13:00 That would be a way of resolving a lot of bets.
1:13:03 Like, did the New York Times report that Trump won?
1:13:06 That might be one way of resolving these bets.
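To make the trust problem concrete, here is a toy model of an on-chain binary market that can only settle from a designated oracle report — a sketch, not any real contract standard; every name is an illustrative assumption:

```python
# Toy oracle-resolved binary market with parimutuel payouts. The whole
# discussion above lives in resolve(): whoever controls `oracle` controls
# where all the money goes.

class BinaryMarket:
    def __init__(self, question, oracle):
        self.question = question
        self.oracle = oracle        # the only party allowed to report the truth
        self.positions = {}         # (user, outcome) -> total stake
        self.outcome = None

    def bet(self, user, outcome, stake):
        assert self.outcome is None, "market already resolved"
        key = (user, outcome)
        self.positions[key] = self.positions.get(key, 0) + stake

    def resolve(self, reporter, outcome):
        # If the biggest bettor also controls the oracle, they can lie here
        # and drain the market -- hence the case for decentralized resolution.
        assert reporter == self.oracle, "only the oracle can resolve"
        self.outcome = outcome

    def payout(self, user):
        assert self.outcome is not None, "market not yet resolved"
        total = sum(self.positions.values())
        winning = sum(v for (u, o), v in self.positions.items()
                      if o == self.outcome)
        mine = self.positions.get((user, self.outcome), 0)
        return total * mine / winning if winning else 0.0
```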
1:13:07 Yeah, great.
1:13:17 But the key problem is to bring that off-chain knowledge on-chain in a way in which the information is not distorted in the transmission.
1:13:26 And the reason why that transmission, you’re worried about it being distorted is precisely because it’s the revelation where all the money is, right?
1:13:31 So there are big incentives to distort the transmission of that information.
1:13:41 In fact, a lot of the crypto hacks which have happened have happened because people found a way of distorting the oracle and then using that on the crypto market.
1:13:49 The market resolves in one way, and if you can change the oracle, then you can make a huge amount of profit out of doing that.
1:13:52 So there’s a big incentive to mess with the oracle.
1:13:54 That’s why it’s really difficult.
1:13:56 And we can stick with the New York Times example, right?
1:14:03 A lot of people are going to make their morning trading decisions based on what they see in the New York Times and on the Bloomberg terminal and so forth.
1:14:09 And so if you could, in a coordinated way, feed the wrong information to that, it would change many, many people’s behavior.
1:14:12 And you could trade against that because you knew that they were going to get the wrong information.
1:14:13 Exactly.
1:14:15 So this can happen in the off-chain world.
1:14:23 And indeed, we saw there was one tweet, right, that the SEC was going to approve, you know, Bitcoin ETFs.
1:14:27 It looked like, you know, it was an official ruling and it turned out to be a hack.
1:14:31 It turned out to be correct, but that wasn’t revealed until days later.
1:14:34 But yeah, so if you can distort an oracle, you can make money.
1:14:35 Totally.
1:14:41 Or, I mean, if we’re talking about the New York Times, it would be remiss for us not to mention the, like, “Dewey Defeats Truman,” right?
1:14:46 Famous front page, like huge text headline that just turns out to be inaccurate.
1:14:47 Right.
1:14:49 That’s a famous case of what we did in media at Wired, too.
1:14:52 It’s called the pre-write and then you accidentally print it sooner and you get it wrong.
1:14:58 There actually have been cases where someone’s obituary, written months or years in advance, goes out and says they’re dead.
1:15:05 Okay, one thing you conflated earlier — and I agree they’re generally connected and similar — but there are some nuances between decentralized and distributed.
1:15:13 Like, distributed can just be, like, redundant systems that have multiple nodes — like the system going down the night before the election, the example you were giving.
1:15:17 That’s a case where being distributed matters, but it doesn’t have to be decentralized necessarily.
1:15:21 I.e., there could be distributed nodes managed by a centralized entity, for instance.
1:15:22 Absolutely.
1:15:25 I just want to make sure we’re very clear about the distinction between decentralized and distributed as well.
1:15:26 Totally.
1:15:30 Whereas by contrast with the oracles, for example, you might really care about being decentralized, right?
1:15:35 You might care that no individual entity can sort of unilaterally change how the contracts resolve.
1:15:36 Exactly.
1:15:37 Just one other point.
1:15:41 Another advantage of doing all this stuff on blockchains is that it’s composable.
1:15:45 It’s not that we’re just like intrinsically interested in some of these questions.
1:15:46 Like maybe so, right?
1:15:50 Some people are just like, you know, intellectually curious, like who’s going to win the presidency in a month.
1:15:54 But rather like lots of other stuff depends on it, right?
1:16:02 If you’re making decisions about which supplies to order in advance, you need to have beliefs about the likelihood that tariffs are imposed under the next administration.
1:16:11 And so having these things live on open composable architectures is useful because they can be wrapped with other information and other processes.
1:16:19 You can tie your corporate operations in a very direct way into these sort of information aggregation mechanism signals.
1:16:24 Yeah, to put it even a more basic way, just because I don’t know if everyone necessarily knows composable in the way that we talk about it.
1:16:36 It’s like the Lego building blocks, the markets on chain or the information on chain is a platform that people can build around, build with, bring in pieces of information, combine it with other tools, etc.
1:16:38 And you can create like different things.
1:16:39 And that’s a composability.
1:16:43 And I’ll put a link in the show notes to post explaining composability as well.
1:16:45 And then the other quick one is open source.
1:16:51 Does the code itself have to be open source, auditable, public good?
1:16:55 Again, it depends how much you trust the market creator.
1:17:00 And again, this is true across the board for applications that can be run on blockchains or not.
1:17:07 Like you’re always making tradeoffs between trust through reputational incentives and institutions and trust through code.
1:17:14 You know, for example, like in actual commodities markets, there’s a lot of trust through institution and legal contract.
1:17:29 But there’s an architecture in place to establish the trust between the institutions and the contracts and their enforceability via the institutions for those contracts to be real enough that people believe in them enough to pay money for them and to have all of these market features.
1:17:39 Blockchains enable these sorts of trusted activities in lots of contexts where the institutions are not strong enough or present enough to do it for you.
1:17:45 If you’re having like $5 bets, like small money bets on some incredibly minor question.
1:17:51 Like, will the horse that wins the Kentucky Derby have a prime number of letters in their name or something like this?
1:18:00 Right. You’re not going to have necessarily an institution that is even able to evaluate and like set up that contract in a way that is worth doing at the amount of money it’s going to raise.
1:18:04 I like how Scott changes that Kentucky Derby into something he would be interested in.
1:18:10 Well, if it involves prime numbers, horses, forget horses, but prime numbers.
1:18:13 That’s so funny.
1:18:15 I love how well you know.
1:18:21 I will have you know the Kentucky Derby is also interesting because it has all sorts of cool statistical questions going on.
1:18:22 And cool hats.
1:18:24 Fascinating hats.
1:18:27 Absolutely fascinating hats — pun definitely intended.
1:18:28 I love it.
1:18:35 So, like, substituting code as the source of trust for these, like, very unusual or sort of micro or international markets —
1:18:37 There’s not a clear jurisdiction, right?
1:18:41 All of these contexts sort of push you more into security via code rather than security via institution.
1:18:44 Let me add one more point on the blockchain.
1:18:49 So I think generally speaking, as I said, the blockchain is not necessary.
1:18:56 However, as we’re looking towards the future, it may become more and more useful to have these very decentralized rails.
1:19:02 So Vitalik Buterin recently wrote a post on info finance talking about prediction markets.
1:19:05 And he credited you at the top as one of the people who reviewed it.
1:19:07 But yeah, keep going.
1:19:08 Exactly.
1:19:15 And so one of the interesting points which he made is that AIs may become very prominent predictors.
1:19:19 They may become very prominent participants in these prediction markets.
1:19:25 Because if you can have a lot of AIs trying to predict things, well that lowers the cost tremendously.
1:19:31 And that opens up the space of possibilities of what you can use prediction markets for.
1:19:37 And so the blockchain, you know, is very good for that — you know, nobody knows you’re an AI on the blockchain.
1:19:38 Right, right, right.
1:19:46 And so if we’re going to have a lot of AIs interacting and acting as participants in markets, then the blockchain is very good for that.
1:19:47 That’s absolutely right.
1:19:54 And we have a lot of content that’s already on this topic, which actually gets at the intersection of crypto and AI and where they’re a match made in heaven.
1:20:11 In fact, not only because of AI’s centralizing tendencies and crypto’s decentralizing tendencies, but because of concepts like proof of personhood — being able, in privacy-preserving ways, even on a public blockchain, to find ways of adding attribution.
1:20:13 And there’s just so much more that you can do with crypto.
1:20:14 I agree, Alex.
1:20:15 And I’m so glad you brought that up.
1:20:27 It’s funny because when you were saying earlier that in the early definition of a prediction market as this way to kind of elicit information that’s dispersed across many people, I immediately went to like, oh, that’s the original AGI.
1:20:32 If you think about artificial intelligence, let’s just talk about human intelligence at scale.
1:20:34 Like that’s what a prediction market can be.
1:20:38 I do want to make sure we also touch on other applications a little bit on the future.
1:20:43 One quick thing though, before we do that, so now we’ve summarized some of the key features we’ve talked about the election.
1:20:47 We’ve talked about some of the underlying market foundations and some of the nuances.
1:20:56 We’ve talked about what does and doesn’t make prediction markets work and also mentioned earlier that they’re part of a class of mechanisms that can aggregate information.
1:21:09 So I want to really quickly, before we talk about applications in the future, near future, I want to quickly summarize what are some of those other mechanisms that could get at this kind of information aggregation that aren’t necessarily prediction markets.
1:21:10 Awesome.
1:21:17 So first of all, like, again, just to think about what is this class of information aggregation mechanisms that Alex defined earlier.
1:21:28 These are mechanisms that bring together lots of dispersed information to produce like an aggregate statistic or set of statistics that combine the information of many different sources.
1:21:30 And ideally that that aggregate is informative.
1:21:32 Now there are lots of ways to do that, right?
1:21:39 Like some of the simplest ones we actually talked about earlier are just to like ask people for their predictions and later pay them based on whether they’re correct, right?
1:21:44 And you can do that with random people, wisdom of the crowd style, or you could do that with experts, right?
1:21:53 And so, like, very simple types of information aggregation mechanisms just, like, incentivize people to tell you what they know, or even just go and survey them, right?
1:21:59 Surveying people like in an unincentivized context, but where people have no incentive to lie and just like have an opinion, right?
1:22:03 They don’t have to do any research or like invest any effort to know their version of the answer.
1:22:14 You just run a survey. But then, you know, sort of there’s this whole menagerie maybe of incentivized elicitation mechanisms that are designed around different elicitation challenges.
1:22:17 So I mentioned earlier, peer prediction mechanisms.
1:22:22 These are the mechanisms where you ask people for their beliefs and their beliefs about other people’s beliefs.
1:22:31 And then you use people’s estimate of the population beliefs to infer like whether they were lying to you about what they believe and or like how informed they were in aggregate.
1:22:36 So if you can use that to figure out where the person fits in the distribution and pure prediction is like an incentivized version of that.
1:22:43 So you’re going to actually like pay people based on how accurate they are, but you’re not paying them based on how accurate they are about what actually happens in the future.
1:22:48 Rather, you’re paying them based on, you know, how accurate they are about the population estimate.
1:22:49 Right.
1:22:52 And so that enables you to pay people upfront immediately.
1:22:57 These are used for like, you know, subjective information or sort of like information that’s dispersed among small populations.
1:23:08 Maybe it’s not big enough to have a thick prediction market, but people are informed enough that if you can directly incentivize them to tell you the truth, then you can actually like aggregate the information usefully.
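A simplified sketch of that pay-immediately idea: score each respondent’s forecast of the crowd against the realized distribution of answers, using a proper (log) scoring rule. This is a loose reading of the peer-prediction family, not any specific paper’s exact mechanism:

```python
import math
from collections import Counter

def empirical_distribution(answers):
    """Realized share of each answer across all respondents."""
    counts = Counter(answers)
    n = len(answers)
    return {a: c / n for a, c in counts.items()}

def crowd_forecast_score(forecast, answers, floor=1e-6):
    """Expected log score of one respondent's forecast of the crowd.
    Proper: reporting your honest forecast maximizes it in expectation,
    and it's payable the moment the survey closes -- no ground truth needed."""
    emp = empirical_distribution(answers)
    return sum(p * math.log(max(forecast.get(a, 0.0), floor))
               for a, p in emp.items())

answers = ["yes", "yes", "no"]        # what the crowd actually said
alice = {"yes": 0.7, "no": 0.3}       # Alice's forecast of the crowd
print(round(crowd_forecast_score(alice, answers), 3))  # ~ -0.639
```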
1:23:23 A couple of my colleagues at HBS — Reshmaan Hussam, Natalya Rigol, and Ben Roth — have this beautiful paper where they use these peer prediction mechanisms in the field, in a developing-country context, where they ask people who in their community is likely to be the most successful micro-entrepreneur.
1:23:29 And then they allocate sort of funding according to these predictions. And it turns out that like the predictions are actually quite accurate.
1:23:43 So, like, the incentivized peer prediction mechanism sort of produces answers that line up with who actually ends up being successful in these businesses down the line, in a way that is more effective, say, than just asking people and telling them, oh, we’re going to allocate the money according to whatever you said,
1:23:47 because then people will lie and say, oh, my neighbor or my friend is like, you know, the best.
1:23:49 I’ll put that paper in the show notes too.
1:23:52 Yeah, it’s a great paper. Super fun to read, very readable too.
1:24:00 So one way in which the wisdom of the crowds doesn’t work, of course, is when the crowd thinks they know the answer to a problem, but they actually don’t.
1:24:03 Oh, okay, of course. Yeah.
1:24:13 So there’s this great paper by Prelec, Seung, and McCoy, and they give the example of: suppose you ask people, what’s the capital of Pennsylvania?
1:24:21 And most people will think, oh, well, it’s probably Philadelphia, right? It’s the biggest city, popular city, you know, American Heritage, Liberty Bell, all that kind of stuff.
1:24:28 But it actually is the wrong answer. So if you go just by the wisdom of the crowds, you’re going to get Philadelphia and that’s wrong.
1:24:32 The correct answer is actually Harrisburg, which most people don’t know.
1:24:39 However, a small minority of people do know the correct answer. So how do you elicit this?
1:24:45 So their mechanism for doing this is what they call the surprisingly popular mechanism.
1:24:54 And what you do is you do what Scott says, is you ask people, not only what do they think is the correct answer, but what do they think other people will say?
1:25:00 And most people, of course, will think, well, I think the correct answer is Philadelphia, other people will say Philadelphia.
1:25:05 But then you’re going to see a bump, right, of Harrisburg. It’s going to be very surprising.
1:25:12 There’s going to be a substantial number of people who will say Harrisburg, and that will be quite different than what people expect.
1:25:18 And if you choose that, the authors show that this can improve on the wisdom of the crowds.
1:25:27 So the surprisingly popular answer, the answer which a minority chooses in contrast to the majority, that can actually get you more information out.
1:25:39 So depending upon the question, there are these clever ways of pulling this inchoate information out of the crowd and eliciting the truth,
1:25:42 even when most people in the crowd don’t know the truth.
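The surprisingly-popular rule is simple enough to write down directly — a sketch of one reading of it, with made-up numbers for the Pennsylvania example:

```python
from collections import Counter

def surprisingly_popular(own_answers, predicted_shares):
    """own_answers: each respondent's own answer.
    predicted_shares: each respondent's forecast of the crowd's answer shares.
    Returns the answer whose actual support most exceeds its predicted support."""
    n = len(own_answers)
    actual = {a: c / n for a, c in Counter(own_answers).items()}
    avg_pred = {a: sum(p.get(a, 0.0) for p in predicted_shares) / len(predicted_shares)
                for a in actual}
    return max(actual, key=lambda a: actual[a] - avg_pred[a])

# 7 people answer Philadelphia and expect almost everyone to agree; 3 know the
# answer is Harrisburg but still expect most people to say Philadelphia.
own = ["Philadelphia"] * 7 + ["Harrisburg"] * 3
pred = [{"Philadelphia": 0.9, "Harrisburg": 0.1}] * 7 \
     + [{"Philadelphia": 0.7, "Harrisburg": 0.3}] * 3
print(surprisingly_popular(own, pred))  # -> "Harrisburg"
```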
1:25:48 That’s fantastic. I’m obviously going to include all these things we’re referencing in our show notes, but that one is really interesting.
1:25:57 Right. That’s wild. And then maybe one other piece in the menagerie, of course, the listeners of this podcast will be very familiar with, are simple auctions, right?
1:26:00 Auctions are information aggregation mechanisms, too.
1:26:05 We talk about price discovery in an ordinary, like sort of very liquid market as being an information aggregation source.
1:26:09 But some markets aren’t like big and liquid all the time.
1:26:11 They don’t have like lots of flow transactions.
1:26:20 Maybe it’s a super rare piece of art, but an auction is still exactly useful for figuring out what the art is sort of like worth in the eyes of the market.
1:26:22 And you can often discover things, right?
1:26:28 Like there’s some artist that was not popular to the best of your knowledge, and then they have a piece with like a major sale.
1:26:36 And people’s estimates of the values of all of their other works change accordingly because of the information that’s been revealed about people’s change in taste or whatever from this one sale.
1:26:39 While we’re thinking from things from the show notes, there’s an incredible book.
1:26:47 Oh, actually, I think this is my very first A16Z Crypto booklist contribution, called Auctions: The Social Construction of Value, by Charles Smith,
1:26:55 which talks about auctions from a sociological perspective as a way of establishing an understanding of value in a bunch of different contexts.
1:27:03 That’s great. And by the way, I do want to plug the episode you, me, and Tim Roughgarden did, where we literally dug into auction design all day, for hours.
1:27:04 That was so much fun.
1:27:07 So even just, like, arcing through these different types of mechanisms.
1:27:18 It’s a really good reminder that the type of question you’re asking and the type of market participants you have — like we were just saying — shape your decisions about how to, like, structure your market mechanism.
1:27:22 It also shapes your decisions about what type of market mechanism to use, right?
1:27:34 Like if you think that the population is not super informed on average, but like informed at the second order level, then this mechanism Alex was describing is like perfect because the information is there.
1:27:36 It’s just not, like, immediately apparent there.
1:27:37 Right.
1:27:43 What I love that you guys are talking about and we can now segue into some quick discussion of some applications in the future and then we can wrap up.
1:27:51 We’ve been talking about implications for design throughout this podcast, but I think it is very interesting because you’ve been saying throughout both of you that it really depends on the context and your goals.
1:27:53 And then you can design accordingly.
1:27:59 And that’s actually what incentive mechanism design is all about, as I’ve learned from you and Tim Roughgarden, and then seen over and over and over again.
1:28:07 But two quick things just lightning round style that I want to make sure I touch on one multiple times you both have alluded to this payout feedback loop.
1:28:14 Like I’m inferring from what you’ve said that the payouts have to be almost quick that you get like an instant feedback loop on your outcomes.
1:28:19 Because you gave an example earlier where if it’s like delayed by two weeks or so and so it may be less effective.
1:28:21 Is that necessarily true?
1:28:23 Depends on trust and attention.
1:28:24 Right.
1:28:30 Some people have said that one of their concerns about prediction markets is that people like betting on sports because you know it’s happening in real time.
1:28:34 You know the answer within a couple of hours or in the case of a horse race within minutes.
1:28:40 Whereas these prediction markets often take months to resolve the final answer or the time of resolution might not even be known.
1:28:41 Right.
1:28:49 It might be, you know, sort of, who will be appointed to this position. So there’s a possibility that speed is relevant for who chooses to participate in some contexts — whether they find it fun.
1:28:53 The other context we were talking about is when time matters for trust.
1:29:07 If you’re in the developing world trying to figure out how to allocate grants, people might not trust — or even just have the infrastructure support to participate in — a mechanism where they’re going to be paid six months out based on the resolution of some confusing outcome.
1:29:09 Whereas if you could pay them today they’ll participate today.
1:29:13 Hence why they experimented with peer prediction mechanisms in that context in the first place.
1:29:17 It was sort of a setting where you could in principle pay people based on the outcome.
1:29:22 Like you know how successful their neighbor was at being an entrepreneur with whatever grant they’d received.
1:29:29 But a lot of complexity goes into actually doing that in practice because you have to track down the people again and all of that.
1:29:30 Ah yeah.
1:29:32 One other quick buildery thing that came up.
1:29:42 It again seems so obvious to you guys, probably, but prediction markets and such systems seem to work best when there is a discrete event, like an election, or something to be resolved.
1:29:47 It probably wouldn’t work for some ongoing, kind of loosely defined, non-discrete event, or...
1:30:01 So the prediction market mechanism — sort of, like, the canonical prediction market as we’ve described it — is a mechanism where you’re buying, like, an asset that has a payout as a function of a discrete event.
1:30:15 But that is of course not even the average case of markets, right? Like, you know, when you’re buying oil futures or something, most of the transactions in many of these markets are actually sort of in the interim — it’s based on changes in people’s estimates.
1:30:30 And so if you have a market where, you know, it’s possible to sort of continually update and trade as estimates change, then, like, you can still gather a lot of information, even if the value attained is in a flow, or in stages, or something of the sort.
1:30:32 It could be sort of a single cutoff date.
1:30:34 I think you can design them in different ways.
1:30:36 They do have to resolve at a point in time.
1:30:42 But the way that they resolve could be based upon a stock price or something like that.
1:30:43 Yeah.
1:30:53 And you can have, like, dividends or something, right? You can have things that pay out over time based on sort of interim steps — like, lots of things have continuous payouts based on, like, the growth of a company or something of the sort.
1:30:56 And so you could imagine like prediction securities that are kind of like that.
1:30:57 I.e., the stock market.
1:30:58 Exactly.
1:30:59 I.e., the stock market.
1:31:04 The HP example I gave earlier divided the time into two-month periods.
1:31:05 Right.
1:31:09 So is it May to June, or is it July to August, or is it September to October?
1:31:16 So you know you can always take a continuous event and make it into five or six discrete periods.
1:31:17 Yeah.
1:31:18 Yeah.
1:31:19 Even if somewhat arbitrary that makes so much sense.
1:31:25 So, so far, these prediction markets have been used just for what we’ve been saying — for predicting something.
1:31:32 But you can also create — and here I’m going to riff off Robin Hanson, my colleague, again on these questions —
1:31:36 And he says we can also create these conditional markets.
1:31:46 So the question would be something like, as I said earlier with futarchy: what would happen to GDP if we put together this science policy?
1:31:51 Now, we might not want to jump all the way from democracy into futarchy in one go.
1:31:53 We’re probably not ready for that.
1:31:54 We’re not ready for the full...
1:31:57 Not quite ready for prime time, I think.
1:31:58 Yeah.
1:32:05 But here’s a fascinating idea of Robin’s, which I think we are ready for, which we should use.
1:32:08 And that is: what would happen if we fired the CEO?
1:32:12 So this is a huge question that companies want to know.
1:32:21 You know, we saw a few years ago — it was kind of remarkable — when Steve Ballmer left Microsoft and the stock price went way up.
1:32:26 You know, suggesting that the market thought that Ballmer was not a great CEO.
1:32:36 Or we just saw it, you know, with Brian Niccol — he moved to Starbucks from Chipotle, he had been extremely successful at Chipotle, and he moved to Starbucks.
1:32:44 On the day that Starbucks announced that they were hiring Brian Niccol as CEO, the price of Starbucks jumped up.
1:32:47 So why, however, do we need to wait?
1:32:57 How about creating a continuous market which says: at any given time, would the price of Starbucks be higher if they fired the CEO?
1:33:01 And so you can create these decision markets — prediction markets.
1:33:11 You create a prediction market on: would the stock price be higher if we kept the same CEO, or would the stock price be higher if we fired the CEO?
1:33:14 Now that’s an incredibly useful piece of information.
1:33:15 Yes.
1:33:21 So for companies, billions of dollars every single day are based upon exactly this question.
1:33:28 And that’s a question which I think decision markets — prediction markets — would be really good at answering.
1:33:38 We already have the stock market — people already investing billions of dollars in exactly this question — and we can make it more precise and more detailed and more usable.
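The readout from such a conditional CEO market would be a one-line comparison — a toy sketch with illustrative numbers (in practice, trades in the branch that doesn’t occur would be unwound):

```python
# Conditional decision-market signal: compare the market's expectation of the
# stock price under "CEO retained" vs. "CEO replaced".

def ceo_market_signal(price_if_replaced, price_if_retained):
    if price_if_replaced > price_if_retained:
        return "market expects the stock to be worth more under a new CEO"
    return "market expects the stock to be worth more under the current CEO"

print(ceo_market_signal(price_if_replaced=103.50, price_if_retained=98.25))
```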
1:33:44 What I really like about that application is it leverages a type of information that people are already developing.
1:33:45 Right.
1:33:49 Like people are spending a lot of time reasoning about what’s going to change the stock price of Starbucks.
1:33:52 And they have a lot of different refined ways of doing it.
1:33:57 But it uses it to address a question that’s like useful sort of as a practical hypothetical.
1:34:00 As Alex said it brings the information forward in time.
1:34:08 You know normally in a current market context we can only learn what happens if Starbucks replaces the CEO when they replace the CEO.
1:34:13 But actually that's, like, the least important time for us to learn it. We actually want to know it when they're deciding: should they replace the CEO?
1:34:14 Yeah exactly.
1:34:15 You want to know it before.
1:34:16 Yeah.
1:34:32 And so being able to harness that same effort that people are putting into understanding what affects the stock price of Starbucks and like you know which companies are well run and which aren’t and like pushing it towards this question can reveal important information at a time when it’s more useful.
1:34:35 Leveraging things people are already good at predicting.
1:34:36 Exactly.
1:34:41 That’s such an interesting and such a useful and extremely real and possible right now thing to do.
1:34:45 We’re not just being crazy futuristic like 10 15 20 years from now.
1:34:46 That’s so great.
1:34:48 Can I be crazy futuristic pushing a little bit more.
1:34:49 Yeah.
1:34:50 Yeah.
1:34:51 We actually want a little of that.
1:34:52 Go for it.
1:34:53 You’re absolutely right.
1:34:57 The should-we-fire-the-CEO market could be implemented right now and it would be extremely useful.
1:35:06 And it's the first step towards making more decisions by, like, DAOs, by a blockchain consensus.
1:35:07 Right.
1:35:09 I mean, so, if you can make a decision about should we fire the CEO.
1:35:13 Should we expand into Argentina or into China.
1:35:16 Should we have a new model this year.
1:35:17 Right.
1:35:21 You can start asking the market lots of these types of questions.
1:35:31 So let's start with should-we-fire-the-CEO, one of the biggest, most important, most salient of these questions, where, as Scott says, it's an information-rich environment.
1:35:35 People are already collecting lots of information on exactly this question.
1:35:42 And once we've got some experience in this market, we can start applying it to further markets down the line.
1:35:43 Okay.
1:35:44 I love that application too.
1:35:50 And that ties into what we talked about earlier, you know, the importance of maybe running these markets in, like, an internal currency.
1:35:54 You know an advantage there is you can use it to put everyone on the same footing at the outset.
1:35:55 Right.
1:36:04 Like, you know, for the Starbucks CEO question, there are many different, sort of, very-high-value entities with a high ability to trade that are already, like, participating in this style of question.
1:36:13 Whereas for a DAO you actually might have tremendous inequality in the wealth of the participants, but you can make them wealthy in proportion to their reputation or something.
1:36:20 You know, in the internal token, which can then be used to, like, you know, sort of have them all participate equitably at the entrance to these decisions.
1:36:28 I love this, and this is where I'm very proud that we have published a deep body of research, across many people, not just our own team, into DAOs: what makes them work, what doesn't work,
1:36:32 what are effective governance mechanisms. I'm going to link to that in the show notes.
1:36:38 Because also we’re arguing that sometimes you can do a lot of these things not just in the crypto world but you can apply them to other decentralized communities.
1:36:43 And I want people to remember that that's a useful use of DAOs, which are just decentralized autonomous organizations.
1:36:48 Are there any other pet applications either current or futuristic that either of you have.
1:36:51 I have one but I’m going to wait till you guys are done.
1:36:54 I mean two other very quick hits.
1:37:00 You know we haven’t touched directly yet in the podcast on the idea of markets for private data.
1:37:10 Right, like, you know, another form of information aggregation is: maybe a lot of people have information that will be useful in designing a new pharmaceutical or medical treatment.
1:37:20 And they have their own private information of this form, and we'd like to be able to elicit it from them in a way that also fairly compensates them for their participation, or something of the sort.
1:37:28 And we have some mechanisms for this already like you might have you know surveys managed by a health center and they pay you sort of a show up fee for participating in the survey or whatever.
1:37:37 But there’s a possibility for much richer markets of that form that leverage sort of like individual data ownership and like permissioning and so forth.
1:37:49 Yeah, one example, by the way, just concretely, is in the DeSci movement, decentralized science, where people are putting their information, like medical data, on blockchains to bring more ownership, transparency, consent, which they don't have.
1:37:50 That’s just one example.
1:37:52 What’s the other one you had Scott.
1:38:02 The other one, you know, is getting incentivized subjective beliefs. Right, we've talked a lot about, like, predictions of things that have an objective truth.
1:38:11 But another big frontier for information aggregation is getting really good estimates of things that people believe that are fundamentally subjective.
1:38:16 Right and like you know if you’re trying to do like market research for your product you know do people want this.
1:38:24 You know one of the advantages of crowdfunding for example is that it’s a better information elicitation mechanism where you could go and ask 10,000 people do you want to buy this and some of them might say yes.
1:38:29 But unless you’re actually taking money from them you don’t know whether that’s like a truthful representation.
1:38:30 Yeah.
1:38:37 And so crowdfunding lets you learn about the total market for your sort of initial version of the product in a way that’s incentivized.
1:38:42 More broadly I think like subjective elicitation is like a really important direction to go into.
1:38:51 Can you quickly maybe give a very short definition, in the uniquely crypto blockchain context, of a Bayesian truth serum here? Because isn't this where Bayesian truth serums apply?
1:38:52 Sure.
1:38:57 I mean, the Bayesian truth serum is actually an example of those peer prediction mechanisms we described, and there are many different versions of it.
1:39:03 But loosely the idea is: if I ask you your opinion on something, did you like this movie?
1:39:08 And then I ask you: what's the likelihood that, you know, another person I ask will say that they liked the movie?
1:39:12 You might have a reason to lie to me about whether you like the movie or not.
1:39:16 You might say, oh, I really liked it, because, you know, you produced it, what am I going to do? But you actually hated it.
1:39:22 Your estimates of everybody else's beliefs will be sort of tilted in the direction of them mostly disliking it.
1:39:29 So long as I'm going to reward you proportionally to your accuracy: like, you know that you disliked it, and so you expect everyone else probably did too, because you're a Bayesian.
1:39:41 And so, looking at everybody else's responses, I can detect whether you, sort of, told me a distribution of other people's beliefs that's consistent with what you said your belief is. Great.
1:39:55 One of my quick applications, and kind of an obvious one, but I want to just call it out: I find it very boring when people say the same thing, like, oh, media, whatever. What I find very interesting is, people often talk a lot about having mechanisms for, quote, finding truth.
1:40:03 But sometimes I find it to be very pedantic and moralistic, and equally as grating as the very people they're trying to bring down.
1:40:07 And so it's a pet peeve of mine when I'm on the Twitter discourse, like, oh God, I'm so bored by this.
1:40:18 But I do find it very interesting that some of the commentary surfaced that prediction markets basically resolve more accurately and faster than mainstream media, but without some of the same filtering of partisan interest.
1:40:23 I mean although this might be different with certain communities of DAOs if you do predictions limited to certain DAOs.
1:40:25 Yeah again it depends who’s in your market.
1:40:28 Yeah exactly let’s get back to your point about thick and thin.
1:40:43 But it's also interesting because it's a way to put a little bit more skin in the game. One of the biggest drawbacks in current media is, like, the people writing don't have skin in the game, which is why I've always been a believer in not having third-party voices: having the experts write their own posts, and then editing them, is more interesting to me.
1:40:52 So I do think it's very interesting to think about this use case of reinventing news media using prediction markets, and Vitalik's post actually had a great line, which is:
1:40:58 Think of a prediction market as a betting site for participants and a news site for everyone else.
1:40:59 That’d be my application.
1:41:03 So I think more generally it is odd how we do quite a bit of journalism.
1:41:17 So for example, it's totally standard practice, right, for it to be against company policy for a financial journalist to invest in the companies which they're recommending.
1:41:24 And as an economist I kind of think, wait a second, don't we want the exact opposite, right?
1:41:25 You want more skin in the game exactly.
1:41:27 Yeah more skin in the game right.
1:41:30 So you know I say that a bet is a tax on bullshit right.
1:41:31 I like that line.
1:41:32 That’s a great line.
1:41:33 I love it.
1:41:36 So you know how about you have to be upfront about it.
1:41:39 You have to be honest about it transparent about it.
1:41:46 But maybe journalists should say this is what I think will happen and these are the bets which I’ve made and you can see my bets on chain right.
1:41:49 And let’s see what their past track record is right.
1:41:57 Like it’s kind of amazing that we do not have any track record of opinion editorialists whatsoever.
1:42:01 Only Tetlock, you know, started to create that, and found that they were terrible, right.
1:42:14 But how about we create a series of bets, on chain, and this would, you know, change the types of people who become, you know, editorialists, who get these jobs in the first place, right?
1:42:21 So let's start making sure you bet your beliefs, and then let's promote people whose bets turn out to be accurate.
1:42:28 And that’s going to change journalism entirely if we were to change the metrics by which journalists are evaluated.
1:42:29 I agree.
1:42:30 Annie Duke talks a lot about this too.
1:42:31 Yes.
1:42:41 It’s not just bets like in a binary true false way but bets that are weighted in terms of likelihood probability of like you don’t have to make a binary like it will be this or that.
1:42:42 Absolutely.
1:42:53 You can say: I think there's an 80% chance that X will happen. And that is also another way to assess things in a more nuanced way, and it gives a lot of room for the nuances that are often true when it comes to guessing the truth.
1:42:54 Absolutely.
1:42:55 Exactly.
1:42:57 There’s a big incentive to say this is never going to happen.
1:42:58 This is impossible.
1:42:59 Right.
1:43:04 But then if you ask them, well, if it's never going to happen, are you willing to bet $10 on that?
1:43:05 Exactly.
1:43:09 They should all be willing to, of course. But they're never willing to make those bets.
1:43:10 That’s right.
1:43:21 But then, take Elon Musk: journalists will start saying, well, actually, I'm going to bet on that guy building X, because I saw that, you know, shuttle launch, and now I'm thinking, OK, maybe I'll increase that from 10 to 20% or whatever.
1:43:22 Yeah.
1:43:23 Exactly.
1:43:25 So betting could reduce the hyperbole.
1:43:26 That’s exactly right.
1:43:27 Yeah.
1:43:28 Totally.
1:43:36 By the way, this borders on another really critical information elicitation mechanism that uses a different version of this sort of cross-examining of some people's beliefs against others'.
1:43:38 Community notes on Twitter.
1:43:40 That’s an information aggregation mechanism.
1:43:41 Right.
1:43:48 It’s like getting a lot of people’s opinions and then only deciding that they’re correct if you have agreement from people who usually disagree.
1:43:49 Yes.
1:43:50 Exactly.
1:43:53 Because that’s where Wikipedia failed when they had the cabal of expert reviewers.
1:43:56 They didn’t have that kind of check and balance mechanism.
1:43:57 Yeah.
1:43:58 Totally.
1:43:59 Community notes is a great one.
1:44:03 I have one last question for you guys, because we don't have enough time to go into the policy side.
1:44:08 In general, like, some of these became popular because they were offering contracts that were banned from the market.
1:44:12 So a big question is whether the offshore crypto markets will follow the rules or not.
1:44:15 So how do you sort of create like innovation obviously in that environment.
1:44:20 To me the core question here is what’s the difference between gambling and speculation.
1:44:21 Is there a difference.
1:44:24 I’m curious if you guys have a thought on as a parting note on this.
1:44:32 I mean so one very important thing to remember is that depending on the context like you may be in a different point on a continuum.
1:44:44 Like, part of what makes sporting events, like, exciting and suspenseful is that there's a lot of stochasticity, and, like, sort of, the amount of information that any individual has is reasonably small, even if they put a lot of effort into figuring it out.
1:44:48 But there might be some amount of like sort of informed betting in sporting events.
1:44:59 And then as you move towards things where there’s a lot of information to be had and a lot of like value also to knowing the answer and a lot of market value to actually figuring it out.
1:45:00 Right.
1:45:09 So, how do we allocate goods in markets? Going back to the very beginning, when we were talking about, like, the role of markets in determining the value of something and clearing supply and demand.
1:45:10 Right.
1:45:14 Like, there is value generated through the process of people engaging.
1:45:17 Now there’s one really important caveat about speculation.
1:45:19 We talk about this like a lot in crypto land.
1:45:20 Right.
1:45:23 There is speculation of the form:
1:45:29 I have beliefs, and, you know, I'm investing to support a product that I think will exist and that I want to exist.
1:45:31 And that I think other people will want.
1:45:37 And then there’s also speculation on speculation where you’re actually not so much betting based on your own beliefs.
1:45:41 You’re betting on you know what you think other people will choose to bet on like we talked earlier about herding.
1:45:46 You know you might place bets because you think other people are going to place bets in a given direction.
1:45:51 Not because you actually have any information about what’s going to happen just because you have information about how the market might move.
1:45:52 That’s right.
1:45:53 That’s speculating on speculation.
1:46:07 Exactly. That’s speculating on speculation. So there’s this sort of like valuable type of speculation which is people moving resources around in a way that reflects their beliefs and sort of like can help us make markets work better and achieve better outcomes.
1:46:13 Like that’s sort of in this midspace between the randomness where moving the money around has no impact on outcomes.
1:46:14 Right.
1:46:22 You’re just betting on coin flips like you know your money does nothing and this other edge where moving the money around becomes sort of its own project that is independent of outcomes.
1:46:25 And so again like sort of doesn’t provide information. Right.
1:46:34 Like these prediction markets are particularly well architected again at least in the cases where they’re very large and thick and all the things we talked about that you need to make them work.
1:46:47 They’re particularly well architected to try and be in that midspace where the information provided is valuable and comes out of like real knowledge and activity in a way that actually sort of means the market does something valuable.
1:46:57 Yeah. And by the way, on the earlier example, when we talk about the obvious examples where it plays out, there's, like, the Carlota Perez framework, where a speculative frenzy helps drive the installation phase of a new technology.
1:47:08 That's, like, a driver of technology cycles. There's also the example of Byrne Hobart, who wrote a piece for me a few years ago on how bubbles are actually a good thing when they have a certain type of quality.
1:47:14 And he also recently wrote a book about it for Stripe Press with Tobias Huber, in which they go into greater detail about that.
1:47:15 I should read that.
1:47:29 It’s basically an example of quote. I don’t want to put moralistic terms on it necessarily but useful speculation that kind of leads to other things as an outcome versus speculating for the sake of speculating which is partly the distinction you’re pointing out.
1:47:44 Well, I think people in Las Vegas who are at the slot machines, they're gambling, because they have no way of influencing or of improving their predictions of what the slot machine is going to show, right?
1:47:55 It’s just pure random chance. On the other hand there are many many areas in which we are trying to predict the future and in which investing can help us improve our predictions.
1:48:04 And this is why I think prediction markets should be completely legal, should be legalized. Because of all the forms of gambling, of all the forms of speculation,
1:48:17 this is one of the most useful forms. So we want to incentivize the type of speculation, or gambling, which as a side product produces, you know, this useful public good, which is trying to predict the future.
1:48:27 Incredibly important. Think about all of the questions that we have, you know: what is happening with climate change, which of these scientific predictions are accurate,
1:48:42 who is the best candidate for the presidency. All of these questions we have, prediction markets can help us answer in a way which is more objective, more accurate, and more open to everyone.
1:48:46 So I think the case for legalizing these is very very strong.
1:48:52 That’s amazing. I’m going to give you the last word on that Alex. You guys thank you so much for joining this episode. That was so fun.
1:48:55 Thanks, Sonal. Thanks, Scott. It's been fantastic being here.
1:48:58 Thanks so much. Really fun conversation and QED.
1:49:02 Ha, QED.
1:49:14 Thank you for listening to web3 with a16z. You can find show notes with links to resources, books or papers discussed, transcripts, and more at a16zcrypto.com.
1:49:18 This episode was produced and edited by Sonal Chokshi. That's me.
1:49:22 The episode was technically edited by our audio editor Justin Golden.
1:49:27 Credit also to Moonshot Design for the art, and thanks to the support from a16z crypto.
1:49:35 To follow more of our work and get updates, resources from us and from others, be sure to subscribe to our web3 weekly newsletter.
1:49:43 You can find it on our website at a16zcrypto.com. Thank you for listening and for subscribing. Let's f***ing go.
1:49:48 [music fades out]
This episode was originally published on our sister podcast, web3 with a16z. If you’re excited about the next generation of the internet, check out the show: https://link.chtbl.com/hrr_h-XC
We’ve heard a lot about the premise and the promise of prediction markets for a long time, but they finally hit the main stage with the most recent election. So what worked (and didn’t) this time? Are they really better than pollsters, is polling dead?
So in this conversation, we tease apart the hype from the reality of prediction markets, from the recent election to market foundations… going more deeply into the how, why, and where these markets work. We also discuss the design challenges and opportunities (including implications for builders throughout). And we also cover other information aggregation mechanisms — from peer prediction to others — given that prediction markets are part of a broader category of information-elicitation and information-aggregation mechanisms.
Where do domain experts, superforecasters, pollsters, and journalists come in (and out)? Where do (and don’t) blockchain and crypto technologies come in — and what specific features (decentralization, transparency, real-time, open source, etc.) matter most, and in what contexts? Finally, we discuss applications for prediction and decision markets — from things we could do right away, to the near future, to sci-fi — touching on trends like futarchy, AI entering the market, DeSci, and more.
Our special expert guests are Alex Tabarrok, professor of economics at George Mason University and Chair in Economics at the Mercatus Center; and Scott Duke Kominers, research partner at a16z crypto, and professor at Harvard Business School — both in conversation with Sonal Chokshi.
As a reminder: None of the following should be taken as business, investment, legal, or tax advice; please see a16z.com/disclosures for more important information.
Resources:
(from links to research mentioned to more on the topics discussed)
- The Use of Knowledge in Society by Friedrich Hayek (American Economic Review, 1945)
- Everything is priced in by rsd99 (r/wallstreetbets, 2019)
- Idea Futures (aka prediction markets, information markets) by Robin Hanson (1996)
- Auctions: The Social Construction of Value by Charles Smith
- Social value of public information by Stephen Morris and Hyun Song Shin (American Economic Review, December 2002)
- Using prediction markets to estimate the reproducibility of scientific research by Anna Dreber, Thomas Pfeiffer, Johan Almenberg, Siri Isaksson, Brad Wilson, Yiling Chen, Brian Nosek, and Magnus Johannesson (Proceedings of the National Academy of Sciences, November 2015)
- A solution to the single-question crowd wisdom problem by Dražen Prelec, Sebastian Seung, and John McCoy (Nature, January 2017)
- Targeting high ability entrepreneurs using community information: Mechanism design in the field by Reshmaan Hussam, Natalia Rigol, and Benjamin Roth (American Economic Review, March 2022)
- Information aggregation mechanisms: concept, design, and implementation for a sales forecasting problem by Charles Plott and Kay-Yut Chen, Hewlett Packard Laboratories (March 2002)
- If I had a million [on deciding to dump the CEO or not] by Robin Hanson (2008)
- Futarchy: Vote values, but bet beliefs by Robin Hanson (2013)
- From prediction markets to info finance by Vitalik Buterin (November 2024)
- Composability is innovation by Linda Xie (June 2021)
- Composability is to software as compounding interest is to finance by Chris Dixon (October 2021)
- resources & research on DAOs, a16z crypto
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.