AI transcript
0:00:05 88% of the work week is spent communicating,
0:00:08 so it’s important to make sure your team does it well.
0:00:10 Enter Grammarly.
0:00:14 Grammarly’s AI can help teams communicate clearly the first time.
0:00:17 It shows you how to make words resonate with your audience,
0:00:19 helps with brainstorming,
0:00:23 and lets you instantly create and revise drafts in just one click.
0:00:29 Join the 70,000 teams and 30 million people who use Grammarly to move work forward.
0:00:33 Go to grammarly.com/enterprise to learn more.
0:00:37 Grammarly, Enterprise Ready AI.
0:00:42 When you think of what makes us human,
0:00:45 what marks us as living beings,
0:00:49 would you say our powers of prediction?
0:00:54 I probably wouldn’t have, at least until this conversation.
0:00:57 It’s true that our ability to process information
0:00:59 and use it to predict what’s going to happen
0:01:04 helps us craft survival strategies and pursue our goals.
0:01:09 But too much predictive power is usually the stuff of dystopian sci-fi stories,
0:01:14 where being creative and unpredictable are the hallmarks of humanity,
0:01:18 while the power of prediction is cast as the weapon of technology.
0:01:22 And yet, one of the latest big theories in neuroscience
0:01:27 says that we humans are fundamentally creatures of prediction,
0:01:29 and creativity isn’t at odds with that,
0:01:33 but that actually creativity and prediction can go hand in hand.
0:01:36 That life itself is one big process
0:01:40 of creatively optimizing prediction as a survival strategy
0:01:44 in a universe that’s otherwise trending towards chaos.
0:01:48 So, how should we think about the balance between
0:01:51 what’s predictable and what surprises us?
0:01:53 How can they work together?
0:01:58 And what happens when you get too much of one and not enough of the other?
0:02:01 I’m Oshan Jarow, sitting in for Sean Illing,
0:02:03 and this is The Gray Area.
0:02:17 My guest today is Mark Miller.
0:02:19 He’s a philosopher of cognition
0:02:23 and a research fellow at the University of Toronto’s psychology department
0:02:27 and Monash University’s Centre for Consciousness and Contemplative Studies.
0:02:31 He’s also the host of the Contemplative Science podcast.
0:02:35 Miller’s work starts with this big idea known as predictive processing,
0:02:41 which says that your brain and body are constantly taking in information,
0:02:44 using it to build predictive models of the world,
0:02:49 and that our conscious experience is shaped by these predictions.
0:02:52 Predictive processing explains why we’re so quick to notice
0:02:54 when something unusual happens,
0:02:57 when a vinyl record playing a familiar song scratches,
0:03:02 or you notice that the tree that’s always been outside your apartment is suddenly gone.
0:03:07 These are prediction errors that your brain feeds on to update its model of the world.
0:03:09 And according to Miller,
0:03:13 using prediction errors to get better and better at doing this sort of thing
0:03:15 is a pretty big deal.
0:03:19 He’s argued that it could even be one of the keys to happiness.
0:03:21 But when the brain gets too deep into prediction
0:03:25 without a healthy dose of creativity and surprise,
0:03:26 it can cause problems.
0:03:30 Miller says that it’s healthy for us to be pushed to the edge
0:03:32 of what he calls informational chaos,
0:03:38 where our predictive models begin to break down and we encounter the unknown.
0:03:42 So I invited Miller on the show to help unravel this paradox.
0:03:46 What does it mean to be creatures that survive on prediction
0:03:48 but need chaos in order to thrive?
0:03:54 Mark, welcome to the show.
0:03:55 Hi.
0:03:56 Thank you so much for being here.
0:03:57 Thanks for having me.
0:03:59 I mean, this is, I love Vox.
0:04:03 I love what you guys do and I love the podcast and I was stoked to get an invite.
0:04:03 So thanks.
0:04:04 Wonderful.
0:04:06 I’m excited to dig into your work.
0:04:10 I think the foundational idea for a lot of your work
0:04:14 is this big theory known as predictive processing.
0:04:18 How would you describe that to someone without a neuroscience background?
0:04:20 One thing you can do is you can say what it’s not.
0:04:24 So, you know, for a few hundred years,
0:04:28 we thought that perception works in one way and that is, you know,
0:04:33 there’s light, there’s light and sound and things to feel out in the environment.
0:04:33 Let’s take light.
0:04:35 That’s a nice example.
0:04:37 So light is out in the environment.
0:04:38 It bounces off of objects.
0:04:41 That light then hits our sensory apparatus like our eye.
0:04:47 And then what the brain’s job is in the older model is to take that information
0:04:51 and then render it comprehensible as that information rolls up
0:04:53 through the visual hierarchy.
0:04:57 It’s getting more and more fleshed out until the end product is, you know,
0:04:59 the rich, world-revealing experience.
0:05:00 The world.
0:05:03 Exactly the world, the world that we have, right?
0:05:03 And that’s fine.
0:05:06 And I think most people think that that’s how it is and we feel pretty comfortable with that.
0:05:11 But if this is right, then that’s wrong or really important parts of it are wrong.
0:05:16 Rather than thinking of the brain as largely passive,
0:05:20 like the brain is waiting around in that vision.
0:05:24 It’s waiting for signals from the world and then it’s only working once it gets those signals.
0:05:28 This framework takes that idea and literally flips it on its head.
0:05:30 So rather than thinking about the brain waiting around for anything,
0:05:34 no, no, no, what if we recast the brain as radically proactive?
0:05:36 It’s not waiting around for anything.
0:05:39 If this model is right, this framework is right.
0:05:42 The brain is first and foremost a prediction engine.
0:05:43 It’s an anticipatory engine.
0:05:50 It’s using what it knows about the world and it’s seeking out to understand the world for itself
0:05:56 so that it can create from the top down what it expects to be happening next.
0:05:58 And then it only uses signals from the world.
0:06:00 Those signals aren’t what you perceive.
0:06:07 Those signals are now just used as tests to see how good your own top down modeling is.
0:06:11 So, if that didn’t make you feel a little bit funny (sometimes when I say this),
0:06:16 then you didn’t quite catch it, because what it means is that you’re not seeing light from the world per se.
0:06:20 You’re seeing your own best guess about what’s happening right now
0:06:24 and the light from the world is just there to update your model.
0:06:28 So you are, in a way... I like Anil Seth’s way of putting it. It’s a little bit provocative, but I like it.
0:06:32 Anil Seth, a neuroscientist of consciousness at the University of Sussex, says,
0:06:37 “Then we might say something like, perception is controlled hallucination.”
0:06:40 It’s hallucination because it’s being generated from the top down,
0:06:45 but it’s controlled hallucination because it’s not just that you’re having any experience,
0:06:50 you’re hallucinating your brain’s best guess about what’s actually happening.
0:06:52 So, of course, it’s controlled by real-world dynamics.
0:06:56 So, just to try and understand how this actually works,
0:06:59 right now I’m looking out my window and I see a particular scene.
0:07:04 And naively, it seems to me like the light is coming in from the outside into my body,
0:07:06 reaching my brain, and that’s what I’m seeing.
0:07:10 What you’re telling me is actually what I’m seeing is the model being predicted by my brain.
0:07:15 What happens, though, when the sensory stimuli, when the light actually does get passed through my body?
0:07:19 Am I experiencing that at any point or when do we switch from experiencing our
0:07:22 predictions of the world to raw sensory data?
0:07:27 Right. Probably never. You don’t ever have access. Maybe Kant was right.
0:07:32 There’s just this noumenal realm where you just don’t have access to it.
0:07:33 That’s just not what you’re built to do.
0:07:36 And actually, you don’t need access to it.
0:07:42 What you need is you need the driving signal from the world to be making sure that the models
0:07:46 that you’re generating are elegant, sophisticated, tracking real-world dynamics
0:07:50 in touch with real temporal stuff. That’s what you need most.
0:07:52 This does get dizzying the more you think about it.
0:07:53 Yeah, right.
0:07:59 But it really is. This is a huge claim, right? My experience of the world is not a direct experience
0:08:03 of objective reality. It is my brain’s best guess of the world outside of my skull.
0:08:07 How early stage is predictive processing as a theory?
0:08:13 Well, not that early. I don’t think it’s irresponsible to say that it’s the
0:08:19 preeminent theory today in all sorts of communities, computational psychiatry,
0:08:27 computational psychology, neuroscience. If it’s not the foremost theory, it’s adjacent.
0:08:34 I guess it’s a mix. It’s younger than the others. It is the new kid on the block in a way,
0:08:39 but it’s a very popular new kid and very exciting. That being said, of course,
0:08:41 we’re not at the end of science.
0:08:49 So you wrote a paper about how this predictive framework can explain a lot about what makes
0:08:54 us humans happy, right, taking the predictive framework and turning it on these other big
0:08:57 questions. So tell me about that. What is the predictive account of happiness?
0:09:04 Yeah, gosh, that’s such a good question. Let me start by telling you what it’s not. For five or
0:09:10 six or seven years, I worked with people like Julian Kiverstein and Erik Rietveld and other
0:09:14 really wonderful people. Were these neuroscientists or philosophers?
0:09:23 Both. Philosophers of neuroscience and others on producing new models of various psychopathologies.
0:09:33 So we have work on addiction, depression, OCD, PTSD, dissociative disorders, anxiety.
0:09:40 So there’s a big range of psychopathologies that people are applying this framework to better
0:09:45 understand what is that pathology all about. So one of the things that we
0:09:49 kept bumping into is that a huge number of these psychopathologies we’re looking at
0:09:54 all had this one quality in them, which was like a kind of sticky bad belief network.
0:10:00 So the system starts predicting something about itself or something about the world.
0:10:04 When you say system, do you mean is this a human being?
0:10:05 Yeah, sure. Yeah, right.
0:10:07 Yeah, like a cognitive system?
0:10:08 Yeah. Yeah, that’s right.
0:10:12 Yeah, the human system, right. So the system that makes us up.
0:10:16 So the system starts predicting for one reason or another that the world is some way.
0:10:24 And then the trouble starts when that prediction becomes strong enough and divergent
0:10:30 enough from the way things actually are. So we say it has a sticky quality to it.
0:10:36 Just think about depression. So you’ve installed the belief for whatever reason
0:10:41 that you just can’t fit with the world that either it’s because you are not good enough
0:10:45 or the world isn’t good enough. But for some reason, you can’t resolve this difference
0:10:49 between the way that you want the world to be and the way the world actually is,
0:10:51 either because of something on your side or something on the world’s side.
0:10:56 And if you get that belief installed, one thing that marks depression
0:11:00 is that that belief persists even if the conditions were to change,
0:11:07 right? Even if you were to change the situation entirely, there’s a sticky quality to these
0:11:14 pathologies. Maybe even a better one is PTSD. PTSD in a war zone, in a way, can be really
0:11:20 adaptive to wake up often, to wake up ready for combat when you’re in a highly volatile state.
0:11:26 That’s not a completely pathological state to be in. But when you shift from a really scary,
0:11:34 uncertain situation like war to a peacetime experience and the system can’t let go of
0:11:40 the structure that’s been embedded in it, then we start calling it pathological. And the sticky
0:11:45 quality is the thing that’s really the problem there, is that there’s one sort of way of believing
0:11:50 or predicting the world that won’t budge even though you get better evidence.
0:11:54 So you’re saying that when we ask about happiness, we’re going to start by pointing
0:11:59 to what it isn’t. That’s right. And you get problems that arise when the predicted model
0:12:05 of the world that our brains and bodies are generating diverges from the world itself
0:12:10 and sticks to its model as opposed to updating with the world. Good. You got it. You got it, right?
0:12:15 So a divergent belief, a bad divergent belief, I mean, a divergent belief that causes harm,
0:12:20 causes suffering, that then gets stuck, it’s resistant to change. And indeed,
0:12:25 even sometimes looks like it protects itself. So I’ll give you an example. They did this great
0:12:31 study on depression where they had people who were suffering major depression who
0:12:36 self-reported being depressed all the time. So how often are you depressed? And they said,
0:12:41 “I’m depressed all the time. I wake up depressed. I’m depressed all day. I go to sleep depressed.
0:12:47 I’m always depressed.” Then they gave them a beeper and they beeped them randomly and had them
0:12:51 write down what mood they were in, what they were experiencing, what they were thinking about
0:12:58 at the time of the beeper. And what they found was that something like 9% to 11% of the time,
0:13:03 they were feeling depressed. And the rest of the time, they were either in neutral or positive
0:13:09 affective states. So what’s happening there? Because when you ask them what’s your experience
0:13:15 like, it’s not like they were lying. It’s not like they were trying to deceive the investigator.
0:13:21 What’s likely happening is they just don’t notice all of the other experiences because
0:13:26 it doesn’t conform to the model they have of themselves in the world. The model here,
0:13:32 the prediction is so strong, it’s drowning out the signal that should be helping it update.
0:13:39 And that can happen for a number of reasons. So let me ask you then about swinging back to the
0:13:43 positive dimension, happiness in particular. That’s a picture of depression and psychopathology
0:13:49 and mental illness. So what does this predictive framework say about the feeling of happiness
0:13:56 itself? Well, I’m going to say two things. There’s a difference between momentary, subjective happiness
0:14:02 and well-being. Eudaimonic well-being, like having a good life.
0:14:06 Is that Aristotle’s thing? It is. Yeah, you’re right. Yeah, exactly. The ancients were on to it.
0:14:11 And the ancients were on to that too. So, just in case anybody doesn’t know what
0:14:17 these are: momentary, subjective well-being is like hedonic well-being. That’s just the
0:14:24 feeling-good stuff. Is that like pleasure? Yeah, right, right, exactly. And the overall well-being doesn’t
0:14:28 look like it’s exactly identical with that because to have a really rich, meaningful,
0:14:38 good life may mean you’re in pain quite a lot. Momentary, subjective well-being is a reflection,
0:14:47 at least in part, of predicting better than expected. So we have this idea that valence,
0:14:54 valence is that good or bad feeling that comes as part of your embodied system evaluating.
0:15:00 It’s telling you, how’s it going? So when you feel good, that’s your body, and we’ve known this for
0:15:04 a long time, that’s your body and nervous system and brain telling you, I’ve got it. Whatever’s
0:15:08 happening right now, I’m on top of it. I’m predicting it for us, I’m predicting it well,
0:15:14 I’m managing uncertainty really well. And when you feel bad, that’s an indicator. I don’t understand
0:15:18 something here. When you feel good, you want to engage a little bit more with that. That keeps
0:15:24 us doing things where we’re succeeding. When we feel bad, usually we pull away or we task switch
0:15:29 because that’s an indicator that maybe something is a little suboptimal. In predictive parlance,
0:15:34 we think it has to do with prediction. So we feel good when we’re predicting better than expected,
0:15:39 we feel bad when we’re predicting worse than expected, and we use those good feelings or
0:15:46 those bad feelings to hone how we’re predicting our environments. So that feeling of pleasure
0:15:52 or valence is a signal that we’re on a good track. But at the same time, you mentioned this isn’t
0:15:57 just about maximizing pleasure, there’s more to well-being. And you actually used substance
0:16:03 addiction as a really nice example of showing why just maximizing these pleasure in these loops is
0:16:08 not enough, it’s too narrow. So what does addiction show us about why pleasure alone is not enough
0:16:15 to talk about happiness here? So if your brain is an optimal engine, optimal predictive engine,
0:16:22 how is it that we keep finding ourselves in all of these suboptimal cul-de-sacs like addiction,
0:16:28 depression, anxiety, because those don’t seem very optimal. And addiction is such a good example,
0:16:34 it’s a good test case to see how that happens. In the case of opioids, for instance, the opioid
0:16:44 signals to the brain directly that you have predicted better than expected sort of over
0:16:50 all of your cares and concerns. Opioid signals the brain directly that whatever just happened,
0:17:00 whatever behavioral package, whatever context was just on tap, you have just found an amazing
0:17:04 opportunity, better than anything you’ve ever found before, wildly unexpected reductions in
0:17:09 uncertainty. And that cashes out as that burst of pleasure, the pleasure is what’s signaling that
0:17:15 to me. Massive pleasure, massive pleasure. The reason heroin feels as good as it does
0:17:21 is because it’s signaling to the brain directly. You’ve got to remember this framework really
0:17:26 exposes this. The predictive system, like your brain and nervous system, they don’t have access
0:17:31 to the outside world per se. All they have are the signals at the edge that they’re making predictions
0:17:37 over. So you feed it a signal using an opioid, you feed it a signal that just says, well, whatever
0:17:43 just happened, you just hit the jackpot. And for a system like you and me and everyone else,
0:17:49 that is basically an uncertainty managing system. It’s not surprising that people do heroin. It’s
0:17:55 surprising not everybody does heroin. We have evolved to manage uncertainty. This chemical
0:18:01 signals to us that uncertainty has been completely managed. And so of course, the drug seeking and
0:18:08 taking behaviors that produced that signal are the ones that the system then puts the volume up on.
0:18:24 What is AI actually for? Everybody is talking about AI. Wall Street is obsessed with AI. AI will
0:18:30 save us. AI will kill us. It’s just AI, AI, AI, everywhere you turn. But what does any of that
0:18:35 actually mean, like in your life right now? That’s what we’re currently exploring on the
0:18:41 Vergecast. We’re looking for places and products where AI might actually matter in your day-to-day
0:18:46 life. And I don’t just mean for making your emails less mean, though I guess that’s good too.
0:18:50 Lots of big new ideas about AI this month on The Vergecast, wherever you get podcasts.
0:19:00 You’ve written a paper about horror movies and predictability. Can you tell me how you got
0:19:06 started on that research and what you found there? The paper is called Surfing Uncertainty with Screams.
0:19:13 It was done with some excellent people. And there were a few steps up to it, starting from the idea
0:19:19 that we feel good not by getting rid of all error, not by vanquishing uncertainty,
0:19:26 but that we feel good when we have the right kinds of uncertainty to reduce. We start there,
0:19:34 and then we moved into developing a model of play. And they invited me onto that paper
0:19:44 to think about playfulness. And play there was showcased as exciting and alluring and super fun
0:19:51 because play so often creates these at-edge experiences. So we tie one leg up, we blindfold
0:19:56 ourselves, we do everything we can to create a bunch of uncertainty that we then resolve. And
0:20:01 that’s sort of the nature of lots of what we do in terms of play. So if you’ve already got sort of
0:20:06 risky play on tap, then it’s sort of a hop, skip, and a jump to think about really risky things,
0:20:12 like potentially going to horror theme parks or going to horror movies. And so we started digging
0:20:18 there. But then when we’re investigating that, lo and behold, a number of other benefits started
0:20:24 to be exposed. There are all sorts of bits of life that are really critical for us to understand,
0:20:28 but that we get no exposure to because of the kind of cultures that we live in, like death
0:20:34 or pain or, you’re like, why do you, why do you rubberneck when you drive past a car accident?
0:20:39 Even if you’re the best person in the world, why do you look? Why do you really look? Why
0:20:43 when your friend comes to you and says, my partner died, no matter how compassionate and
0:20:49 skillful of a person you are, you want to say, how, how exactly, how exactly did they die?
0:20:53 Like before you even say, I’m so sorry, you’re like, wait, wait, wait, how old were they?
0:20:57 How old were they and how did it happen? And we might feel ashamed that we have those little
0:21:01 thoughts, but that’s just the generative model doing what it’s doing. It’s trying to figure out
0:21:05 what are the, what are the variables in the world that I need to know about so that I’m predicting
0:21:10 well moment to moment to moment. And actually horror movies turn out to be a treasure trove
0:21:14 of this kind of information. We can see what is it like if I get chased? What is it like if
0:21:19 somebody ended up in my house? What is it like if I was under extreme duress? That’s all model
0:21:24 updating stuff. Is that the idea with horror movies? Is it just that exposes me to a form
0:21:29 of uncertainty that ultimately helps me become a better predicting creature? That’s right,
0:21:38 exactly. So horror is like the smaller step, you know, cousin of those sorts of more extreme cases.
0:21:46 Got it. So what horror does is it produces a safe kind of uncertainty for us to get involved in.
0:21:51 It’s certain uncertainty in a way. It’s not volatility. It’s not actually being chased by
0:21:55 somebody with a chainsaw. You get to go to a place where you know you’re safe, where most
0:22:00 of you know that you’re safe, and you can still flirt with all of these sort of uncertainty
0:22:05 generating and uncertainty minimizing dynamics, which we find thrilling because the evolutionary
0:22:10 system sort of like turns on. It acts as if you’re being chased and then the rest of the system
0:22:18 goes, “Hey, wait, we’re in the theater. It’s all good.” Right. So so far, we’ve told this story
0:22:22 that prediction can produce, getting better and better at prediction produces these feelings of
0:22:28 happiness coupled with exposing ourselves to the right kind of uncertainty that can broaden the
0:22:33 scope of our predictive powers. This conversation we’re having today, it’s part of a series we’re
0:22:39 doing on creativity. And I think at this point, we’ve probably set up enough context for me to just
0:22:45 ask you directly, how does creativity fit into this story? I think a starting point for thinking
0:22:55 about creativity using this model is to start by maybe showing the puzzle. So we ran into the
0:23:02 same puzzle thinking about horror. So why would a predictive system that looks like it’s trying to
0:23:08 reduce uncertainty be attracted to situations and indeed make those situations where it’s bumping
0:23:15 into uncertainty? Why do we build roller coasters? Why do we go to horror movies? When I give this
0:23:20 lecture, similar lectures to this in different spaces and I ask people, raise your hand if
0:23:30 you would want to be one of the first people to colonize Mars, which is an insane thing to want
0:23:35 to do. I’m not raising my hand here. No, it’s massively uncertain. It’s like the most uncertain
0:23:40 thing you could possibly do. I have never asked that question and had no one put up their hand.
0:23:45 It’s always 5, 6, 10 people put up their hands and you push them and they’re like, yeah, given the
0:23:51 opportunity, I think I’d really take that chance. So there’s a puzzle there or there’s a seeming
0:23:57 puzzle. Why would a system that looks like it’s trying to reduce uncertainty actually not only
0:24:01 be attracted to uncertainty, but systematically create uncertainty in all of these different
0:24:08 situations? And part of the answer I think we’ve exposed in these papers is that too much certainty
0:24:13 is a problem for us, especially when that certainty drifts from the real world dynamics.
0:24:20 So in order to protect our prediction engine, our brain and nervous system, from getting into
0:24:24 what we’ve called the bad bootstrap, that is from getting very, very certain about something that’s
0:24:29 wrong, because that’s really dangerous for the kind of animal that we are. It’s really dangerous.
0:24:37 We are built to get it right. So in that kind of world and for that kind of system, it really
0:24:45 behooves us to occasionally inject ourselves with enough uncertainty, with enough like humility,
0:24:52 intellectual humility in a way, like be uncertain about your model enough that you can check to see
0:24:57 whether or not you’ve been stuck in one of these bad bootstraps. And I think if you’re with me up to
0:25:04 there, then we have a wonderful first-principles approach to thinking about the benefit of creativity
0:25:10 and art, especially provocative art, especially art that calls you to rethink who you are and how
0:25:17 things are. Because as far as we’ve seen, and you know, the research just keeps pointing in this direction,
0:25:24 anything that gets you out of your ordinary mode of interacting with the world so that you can check
0:25:28 to see how good it is or how poor it is, is going to be a benefit for us. It’s going to protect us
0:25:34 from those bad siloed opportunities. And I think art does that, right? You can go somewhere, see
0:25:40 something grand, see something beautiful, see something ugly and horrible. And if you let yourself
0:25:47 be impressed by it, it can be an opportunity for you to be jostled out of your ordinary way of
0:25:51 seeing the world, which would let the system check to see whether or not it’s running optimal models
0:26:00 or not. So it sounds like you’re likening creativity to this injection of the right kind of uncertainty
0:26:05 into our experience of the world. And it’s really interesting. In the paper on horror movies,
0:26:09 actually, you used a term that I think captures a lot of this. It’s a thread that seems to run
0:26:14 through everything so far, the art, the creativity, the horror movies, meditation and psychedelics
0:26:20 we’ll get to. You wrote that the brain evolved to seek out the edge of informational chaos,
0:26:23 which is a place where our predictive models begin to break down.
0:26:29 And in those uncertain zones, we actually have much to learn. It’s a very rich learning environment.
0:26:34 And so it sounds to me like this edge of chaos actually explains at least one perspective on
0:26:40 why art, why creativity, why play, why all these things benefit us, because that edge is a really
0:26:45 healthy place to be. So I wanted to ask you about this framing of the edge of informational chaos
0:26:50 and why that’s a place that our brains would want or benefit from.
0:26:58 You already say it so beautifully. Where are we going to learn the most if you are a learning
0:27:05 system? And this is amazing. We have right from the lab, we see that animals and us,
0:27:11 we get rewarded not only when we get fed and watered and sexed, we get rewarded when we get
0:27:16 better information. Isn’t that amazing to acknowledge? Like if you get better information,
0:27:21 my system is treating it like I’ve been fed. That’s how important good information is for us.
0:27:27 And in fact, in lots of situations, it’s more rewarding for us than the food itself. Because
0:27:33 one bit of food is one thing, information about how to get food over time, that could be much,
0:27:40 much more important, right? So where do we learn? Where do we learn the most if really what matters
0:27:45 is that we’re learning? Well, we don’t learn where our predictive models are so refined
0:27:49 that everything is just being done by rote. We’re definitely not learning much there.
0:27:57 And we’re not learning the most way out in deep volatility, unexpected uncertainty environments.
0:28:00 That’s like where you not only do you not know what’s going on, but you don’t know how to get to
0:28:05 knowing what’s going on. That’s why we have culture shock. If we move somewhere else,
0:28:09 sometimes some people can have these really disorienting, even hallucination-
0:28:14 engendering experiences. Because not only do you not get it, you don’t know how to get to getting it.
0:28:17 You’re not only uncertain about this,
0:28:20 you’re uncertain about yourself trying to get a hold of this.
0:28:26 That’s no good for us either. So where do we learn the most? We learn it at this Goldilocks zone,
0:28:33 which is that healthy boundary between order and chaos, between what’s knowable
0:28:37 and leverageable and that thing which is not known. And you said it so beautifully,
0:28:43 right at that edge is where our predictive models necessarily break down. It is by its very nature
0:28:49 the place that the model breaks down. And the hope there is is that in breaking down,
0:28:56 new, better models are possible. Every chance you get to be at that edge is a chance to be learning,
0:29:02 breaking and making better models. And I love the research agenda that’s looking at all the
0:29:06 benefit and all the ways that we can find that edge and leverage all the good stuff at that edge,
0:29:12 including horror movies and provocative art. Well, this is really dangerous territory because
0:29:19 it sounds to me like what you’re saying from the predictive perspective is when I settle in to watch
0:29:24 my Netflix series that is perfectly predictable, where I know the template, I know the plot,
0:29:28 how it’s going to unfold, but I just enjoy watching it kind of fill in the lines anyway.
0:29:34 I’m not getting that uncertainty, whereas when I watch a really strange indie movie where things
0:29:38 are happening that I don’t know why they’re happening, I can’t follow the plot, that I’m
0:29:42 getting uncertainty out of that that’s going to benefit my predictive system. Is that kind of
0:29:48 the case? Well, if you can’t catch the plot, I don’t know how much benefit there is because that
0:29:52 sounds to me like it’s a little bit too far outside of your spectrum. Like if all you know is punk
0:29:56 music and somebody takes you to a classical concert, there might not be a bunch of useful
0:30:02 uncertainty here. That might just be aggravating uncertainty. I just don’t know what to do here.
0:30:06 So that’s probably not going to be all that important for your system.
0:30:13 What you would want is to be at your edge. So if you love reading and you’re into
0:30:20 science fiction or something, and then you get a chance to get your hands on Dostoevsky,
0:30:25 there might be, you know how to read, you know how to engage with literature.
0:30:29 There’s an edge here that you don’t really understand. Pushing that edge is going to be
0:30:34 valuable because it’s going to expose you to different species of information that might
0:30:40 have the knock-on effect of improving your grip in lots of different scenes. But why is it
0:30:46 that we’re attracted to really regular things? If what we’ve been saying here is I’m especially
0:30:52 charged to find my edge and hang at my edge and where I’m improving my predictions, that feels
0:30:57 super good. Why is it that I like, you know, sometimes we find ourselves just rewatching
0:31:04 the same show over and over and over again? One of the answers looks like the degree to which
0:31:11 you expect everything else in your life to be highly, highly uncertain is the degree to which
0:31:17 doing something that’s really, really regular feels to the system as if it’s doing better than
0:31:26 expected at managing uncertainty. So watching Friends for the 17th time can feel very rewarding
0:31:32 insofar as you have expectations that everything other than watching Friends tonight
0:31:38 is volatility city. My essay isn’t working right. My editing of this thing isn’t working right. I
0:31:44 have this work coming up that I don’t know what to do about. My relationship has tanked. If you see
0:31:51 uncertainty dynamics going all uphill from where you are, then just doing something
0:31:56 super regular actually gets registered by the system as if you’re reducing error better than
0:32:02 expected because the temporary reprieve of Friends is reducing error better than expected
0:32:08 relative to the runaway error everywhere else. Yeah. I’m very happy you’ve provided justification
0:32:13 for me to continue watching predictable shows. Hold on. If you want more, I’ll give you one more
0:32:18 because you definitely should do that. One of the things that looks like it engenders depression is
0:32:24 repeated failures, where you are just getting information back that everything you try in
0:32:29 order to improve your predictive grip on the scene is failing. You reach and slip and reach and slip
0:32:33 and reach and slip and reach and slip and reach and slip. Eventually what the system does to
0:32:38 manage that is it installs this deep level belief that, look, this is just the kind of place where
0:32:43 you reach and slip. That’s it. You are a reach and slip thing. As soon as it has that prediction,
0:32:48 then you go about trying to confirm that prediction. One of the ways you can protect yourself from
0:32:56 that is giving yourself lots of wins. We saw this deep in COVID: Animal Crossing was a massively
0:33:04 popular game because you get a cute, easy, regular, close to hand opportunity to have some wins.
0:33:09 And actually, I think that’s totally protective. I’m a meditation teacher. I don’t know how
0:33:15 avant-garde this is, but I’m quick to say you should watch Netflix and play video games when
0:33:20 you don’t feel well. I don’t think that’s always a numbing process. I think avoidance technologies
0:33:27 are real technologies and getting little wins when the world is especially vicious in terms of
0:33:31 uncertainty, I think is a really great way to protect the predictive system from having one of
0:33:37 those dumps where, oh, I just can’t do anything. And so, I better turn on sickness behaviors and
0:33:44 back up. From this perspective, do you think there’s a difference between me setting up an easel
0:33:50 and painting versus going to a museum and consuming and looking at a painting? How do you see those
0:33:56 from the lens of uncertainty? I would say there might be a difference between taking painting as
0:34:03 a craft, where what you have here is you have an opportunity to improve your painting skills when
0:34:09 you sit at the easel. And so, potentially, you have the opportunity
0:34:14 here to get lots of little bumps of doing better than expected as you’re increasing your skills.
0:34:20 So, that’s nice. Every new painting is a little bit of uncertainty that you’re managing in a small
0:34:25 way, but I think something else could be happening there too, especially if you think about it as
0:34:31 like art therapy, where you’re not just trying to paint the scene, but you’re trying to paint
0:34:37 something about yourself as you’re painting a scene. You’re trying to expose something
0:34:43 about yourself while you’re engaging in this creative act. And why would you want to do that?
0:34:48 Why would you want to take something hidden and put it somewhere public? What do you think?
0:34:54 Well, I imagine it’s going to help me resolve some things that have been uncertain about
0:34:59 something in my understanding of the world. Love that, right? So, if the first thing we are
0:35:05 is informational machines, we’re epistemic machines. We’re trying to figure out how the world is.
0:35:10 The most important part about the world, potentially, is figuring out ourselves, right? And there’s a
0:35:15 bunch of things that are hidden to us. They’re just deep down in the subconscious. We don’t have
0:35:19 access to them. The degree to which we don’t have access to them means we’re running over a model
0:35:24 that’s not complete. And that’s dangerous, actually, for a predictive system like us. Every
0:35:30 opportunity you get to bring out stuff that’s hidden and better understand it, that’s good stuff. So,
0:35:36 one, you’re going to start knowing yourself better. Two, if you put it out into a public sphere,
0:35:42 you might invite people that you trust to come and talk about it, which is going to let you
0:35:48 possibly optimize some of these things in yourself. If you can’t expose it, how do you work on it?
0:35:55 And so, bringing that up and out into a public sphere where then you can have friends look at it
0:36:00 and give suggestions relative to that is really, again, really valuable for a predictive system
0:36:04 like us. You’re exposing part of your generative model and you’re exposing it in a way where you
0:36:11 can have people talk about it and where then you can reimbibe it and potentially benefit from its
0:36:17 exposure and its digestion. I think art can definitely do that. Expose something that you
0:36:21 didn’t know about yourself in a way that can let you optimize over that thing for yourself.
0:36:34 Support for this podcast comes from Shopify. Every business owner knows how valuable a great
0:36:40 partner can be. Growth and expansion feels a lot more doable when you team up with someone who is
0:36:45 tech savvy, loaded with cutting-edge ideas and really great at converting browsers into buyers.
0:36:51 Finding that partner, though, is easier said than done until now. That is, thanks to Shopify.
0:36:56 Shopify is an all-in-one digital commerce platform that wants to help your business
0:37:01 sell better than ever before. When you partner up with Shopify, you may be able to convert
0:37:07 more customers and end those abandoned shopping carts for good thanks to their shop pay feature.
0:37:12 There’s a reason companies like Allbirds turn to Shopify to sell more products to more customers,
0:37:18 whether they’re online in a brick-and-mortar shop or on social media. You can upgrade your business
0:37:23 and get the same checkout Allbirds uses with Shopify. You can sign up for your $1 per month
0:37:30 trial period at Shopify.com/VoxBusiness. Just go to Shopify.com/VoxBusiness to upgrade your
0:37:42 selling today. Shopify.com/VoxBusiness. You’ve written about how this predictive view of the mind
0:37:47 can explain why some digital technologies, particularly social media, can undermine or
0:37:51 harm our mental health. I’m curious, given this framework we’ve talked about,
0:37:56 how do you think about the impact of social media and this growing role of digital technologies
0:37:59 on well-being? Yeah, I love that. What a great question.
0:38:06 You know, this long-form podcast that you guys have is so good, because we can actually get
0:38:10 through some territory, because I think we have enough on the table now to say something
0:38:17 moderately sophisticated about that. Social media is so dangerous in its current form.
0:38:21 I don’t mean it can’t be good or that it doesn’t have good qualities. I don’t want to go that far,
0:38:28 but just think about it. If there’s a problem where you install models of the world that drift
0:38:35 from reality, I mean, do I have to even say anymore or are we all on the same page?
0:38:41 Social media is a lie factory. It’s made to deceive us about reality. That’s what it is,
0:38:48 by its very nature, and this is how it’s being used. We’re all the time looking
0:38:56 to improve our model and the design and the kind of media that people are benefiting from posting
0:39:02 has almost, by its very nature, this quality of being both attractive and deceptive.
0:39:09 And no wonder we’re increasingly uncertain and increasingly anxious when you are literally
0:39:16 being fed models that don’t track reality. You are inundating your generative model
0:39:22 with bad evidence. You are literally doing what might be the worst possible thing for this kind
0:39:28 of system. You are just feeding it bad evidence about the world. Nobody’s home looks like that.
0:39:37 Nobody’s kid is always happy. No couples are always blissful. This does not exist.
0:39:43 This is just not realistic. And what we’re doing is we’re bending our generative model.
0:39:49 We’re bending our generative model to say this is actually how it is. It is so upsetting and so
0:39:54 dangerous for our kind of system because it does exactly the thing that we think is problematic.
0:39:59 First of all, it’s creating a model of the world that is divergent from the real world.
0:40:03 Two, you’re spending so much time with it that it’s pinning it. Even though you might be getting
0:40:09 regular counter evidence from your world, you are spending more time there than you are garnering
0:40:14 evidence from the world. Now, you have a sticky bad belief that is divergent from the real world
0:40:21 model. We’re saying these technologies like social media are presently designed in a way that can
0:40:27 hijack the brain’s predictive models in a way that freezes us into rigid patterns and habits of
0:40:33 mind rather than helping us towards some more flexible ones you’ve talked about that get us up
0:40:37 to the edge of informational chaos. But you’re saying that’s not something inherent to digital
0:40:41 tech or social media. It’s something that could presumably be designed otherwise.
0:40:46 Absolutely. I don’t think anybody did it on purpose. I’m a big optimist. I don’t think anybody
0:40:51 was trying to do this. I think this is an emergent feature. It’s an emergent feature of a confluence
0:40:58 of pressures, including making sure the people investing in you are happy and individual influencers
0:41:02 are making a living doing this. I think it’s a confluence of problems, and yet it is a real problem.
0:41:08 One other aspect of that that fits very succinctly here that I’m worried about and that I’ve written
0:41:16 about is, according to the framework, if you have persistent error, you can resolve that error a
0:41:21 couple of different ways. One way is you can update your model to better fit the world. You run into
0:41:27 some new evidence, and you might just go, “Oh, well, that’s just a better way to believe.” Model
0:41:33 gets updated. Or you can change the world to better fit the model. Let’s say you believe
0:41:38 the earth is flat, and then you go to Thanksgiving dinner, and somebody in your family says,
0:41:43 “That’s stupid. You should believe something else.” You can either be like, “Oh, maybe you’re
0:41:48 right. That is good counter-evidence, and I’m going to update.” Or you can behave in the world
0:41:52 in a way that gets you back to status quo. In that example, what you’re doing is you’re leaving,
0:41:56 you’re cutting off your family, you’re getting out of Thanksgiving dinner, and you’re getting back
0:42:02 to your echo chamber. You’re getting back to the filter bubble where you’re now going to be exposed
0:42:11 to the evidence that aligns with your prediction. Conspiracy theory thinking falls so naturally
0:42:17 from this kind of system, because this system, remember, if you’re putting yourself in a situation
0:42:25 where you are constantly awash with bad evidence, it will inevitably adjust the generative model,
0:42:28 which is just to say it will inevitably change the reality you live in.
0:42:33 And so where you’re getting your information from, the people you’re spending time with,
0:42:38 the information that you’re exposing yourself to, that is all having a really direct and serious
0:42:44 impact on your reality-generating mechanisms. I wanted to loop in your work on contemplative
0:42:51 practices. We’ve talked about how art and creativity can bring us to that edge of chaos,
0:42:57 but you’ve also said elsewhere that meditation can do a similar kind of thing, which is confusing
0:43:00 at first, because meditation looks pretty different than watching a horror movie, for example.
0:43:06 In meditation, I’m sitting there very quietly in what looks like the opposite of chaos.
0:43:10 So how do you understand what meditation is doing in this predictive framework,
0:43:14 and how does that relate to creativity and these beneficial kinds of uncertainty?
0:43:23 So I think horror movies can help us get exposed to scary stuff. I think being exposed to scary
0:43:29 stuff at our edge in a safe way helps us. It helps us get better at managing our own emotions.
0:43:33 It helps us get better at managing uncertainty. I think that’s valuable for an uncertainty
0:43:39 minimizing machine. Yes, it’s cool to hang out at our edge. How does that relate to meditation?
0:43:43 So we get this idea, I think commonly now, especially in the West, meditation might be
0:43:47 more about relaxation, maybe addressing- Stress relief and so on.
0:43:51 Addressing stress or pain, but that’s not the meat. That’s not the meat of that
0:43:59 program. At the center of that program is a deep, profound and progressive investigation
0:44:05 about the nature of who we are, how our own minds work. It is a deep investigation about the way
0:44:08 that our emotional system is structured and the way that it works. It is ultimately a deep
0:44:12 investigation of the nature of our own conscious experience. What are we experiencing? Why are
0:44:18 we experiencing it? What does that have to do with the world? And then, how can we adjust
0:44:24 progressively and skillfully the shape of who and what we are so that we fit the world the best,
0:44:29 so that we are as close as possible to what’s real and true and so that we can be as serviceable as
0:44:36 possible. But that’s really what it’s for. And ultimately, I think you can do everything that
0:44:39 we’ve been talking about, including all the stuff that psychedelics do for the predictive system,
0:44:42 all the stuff that horror and violent video games do for the predictive system.
0:44:46 You can do it all contemplatively in a way that’s better for you, I think.
0:44:53 Yeah. So, you’re saying one way to kind of try to find that thread that puts meditation and horror
0:44:58 movies in kind of the same vein of practice. Is it thinking about meditation, and you mentioned
0:45:04 psychedelics as well, as these modes of injecting uncertainty into our experience, and particularly
0:45:09 about kind of provoking us out of our ordinary habits of how we experience the world? Is that
0:45:14 kind of the common currency there? Absolutely. And you get that through these imaginative
0:45:21 contemplative practices, but you also get it directly from the more standard, well-known
0:45:29 attention and awareness program too. Now, whether you’re encountering useful uncertainty because
0:45:36 you’re generating uncertainty-provoking images, like your own death, or you’re just looking closer and
0:45:44 closer at your own experience, your own self-experience, and it’s increasingly reflected back to you
0:45:50 that your old ideas of who and what you are might not stand. In both of those directions,
0:45:55 you are on a steep learning curve about who you are and what matters here.
0:46:02 Let me ask you this. After this whole story we’ve unpacked, there’s still a kind of tension
0:46:10 that leaves me a little bit uncomfortable. It feels like we’re saying that creativity is just
0:46:17 kind of an input or a means towards juicing the powers of prediction. And part of me pushes
0:46:23 against that in that it almost feels reductive. Is creativity really just this evolutionary
0:46:29 strategy that makes us better predictive creatures? Does that make creativity feel less
0:46:35 intrinsically valuable? Because when I think about creativity, at least in part, it doesn’t
0:46:40 just feel like a tool for survival that evolution has honed. Sometimes it feels like it is that
0:46:45 which makes life worth living, that it has intrinsic value of its own, not as a tool for the
0:46:50 predictive powers that be, my brain or the algorithms or whatever it is. So I’m curious if
0:46:56 you feel this tension at all and how you think about creativity being framed in the service of
0:47:04 prediction. So two things. One, even though we are excited by this new framework, I don’t think
0:47:10 we need to be afraid of it being overly reductionistic. I mean, in a way, it’s radically reductionistic.
0:47:14 We’re saying that everything that’s happening in the brain can be written on a t-shirt,
0:47:23 basically. But the way that it actually gets implemented in super complex, beautiful systems
0:47:31 like us shouldn’t make us feel like all of the wonderful human endeavors are simply explainable
0:47:38 in a sort of overly simplified way. I don’t have any worry like that. I think if it turned out that
0:47:45 life was operating over a simple principle of optimization, that’s the most beautiful thing
0:47:52 I’ve ever heard, first of all, that all of life is about optimization. All of life is this resistance
0:47:59 to entropy. That’s just what it is to be alive: your optimal resistance to entropy.
0:48:05 As the universe expands and entropy is inevitable, life is that single force that’s defying
0:48:14 that gradient. That’s so beautiful. When it comes to art, I want to be careful not
0:48:19 to say that art is only about finding this critical edge. I think that’s one really interesting way
0:48:22 of thinking about it. It’s one way that we’ve been thinking about it. If you consider movies and
0:48:29 video games as forms of art also, another central reason that this kind of system might benefit
0:48:34 from artistic expression that we didn’t cover, but that’s completely relevant for our discussion,
0:48:41 is that art creates this wonderful opportunity for endless uncertainty and uncertainty management.
0:48:49 Not very many things do that. As you progressively create dancing, painting, singing, whatever,
0:48:54 the enthusiasm of that, literally being in the spirit of that creative endeavor,
0:49:00 is you managing uncertainty in a new and remarkable way that’s never been done before.
0:49:05 In all of existence through all time, nobody has ever encountered and resolved that uncertainty
0:49:12 in particular. It should be endlessly rewarding, fascinating, and I think no wonder we find it
0:49:20 so beautiful. It might be, by its very nature, the purest expression of uncertainty generation
0:49:26 and management. Like you say, that would make it intrinsically valuable for an uncertainty
0:49:32 minimizing system like us. I think that’s a great place to wrap up. Mark Miller, thank you so much
0:49:42 for being here. This was a pleasure. This was the best interview I’ve ever had. You’re awesome.
0:49:52 All right. I hope you enjoyed the episode. I definitely did. For me, optimization usually
0:49:59 conjures the idea of a cold and calculating logic of efficiency, not what it ultimately means to be
0:50:05 alive, supported by the creative injection of uncertainty into our experience of the world.
0:50:10 But I thought that Mark made the case beautifully. As always, we want to know what you think.
0:50:15 So drop us a line at thegrayarea@vox.com. And once you’re finished with that,
0:50:22 go ahead and rate and review and subscribe to the podcast. This episode was produced
0:50:29 by Beth Morrissey and hosted by me, Oshan Jarow. My day job is as a staff writer with Future Perfect
0:50:34 at Vox, where I cover the latest ideas in the science and philosophy of consciousness,
0:50:39 as well as political economy. You can read my stuff over at vox.com/futureperfect.
0:50:48 Today’s episode was engineered by Erica Huang, fact-checked by Anouck Dussaud, edited by Jorge Just,
0:50:54 and Alex Overington wrote our theme music. New episodes of the Gray Area drop on Mondays.
0:50:59 Listen and subscribe. The show is part of Vox, and you can support Vox’s journalism by joining
0:51:06 our membership program today. Go to vox.com/members to sign up. And if you decide to sign up because
0:51:20 of this show, let us know.
0:51:30 Your own weight loss journey is personal. Everyone’s diet is different. Everyone’s
0:51:35 body is different. And according to Noom, there is no one-size-fits-all approach.
0:51:40 Noom wants to help you stay focused on what’s important to you with their psychology
0:51:45 and biology-based approach. This program helps you understand the science behind your eating
0:51:50 choices and helps you build new habits for a healthier lifestyle. Stay focused on what’s
0:51:57 important to you with Noom’s psychology and biology-based approach. Sign up for your free trial
0:52:00 today at Noom.com.
In part three of our series on creativity, guest host Oshan Jarow speaks with philosopher of neuroscience Mark Miller about how our minds actually work. They discuss the brain as a predictive engine that builds our conscious experience for us. We’re not seeing what we see. We’re predicting what we should see. Miller says that depression, opioid use, and our love of horror movies can all be explained by this theory. And that injecting beneficial kinds of uncertainty into our experiences — embracing chaos and creativity — ultimately makes us even better at prediction, which is one of the keys to happiness and well-being.
This is the third conversation in our three-part series about creativity.
Learn more about your ad choices. Visit podcastchoices.com/adchoices