#438 – Elon Musk: Neuralink and the Future of Humanity

AI transcript
0:00:05 The following is a conversation with Elon Musk, DJ Seo, Matthew MacDougall,
0:00:10 Bliss Chapman, and Noland Arbaugh about Neuralink and the future of humanity.
0:00:16 Elon, DJ, Matthew, and Bliss are, of course, part of the amazing Neuralink team.
0:00:21 And Noland is the first human to have a Neuralink device implanted in his brain.
0:00:25 I speak with each of them individually, so use timestamps to jump around.
0:00:31 Or, as I recommend, go hardcore and listen to the whole thing.
0:00:34 This is the longest podcast I’ve ever done.
0:00:38 It’s a fascinating, super technical and wide-ranging conversation.
0:00:40 And I loved every minute of it.
0:00:45 And now, a quick few second mention of each sponsor.
0:00:46 Check them out in the description.
0:00:49 It’s the best way to support this podcast.
0:00:54 We’ve got Cloaked for Privacy, Masterclass for Learning, Notion for Taking Notes,
0:00:59 Element for Hydration, Motific for Generative AI Deployment,
0:01:02 and BetterHelp for Mental Health.
0:01:03 Choose wisely, my friends.
0:01:08 Also, if you want to maybe submit feedback or submit questions that I can ask
0:01:13 on the podcast or just get in touch with me, go to lexfridman.com/contact.
0:01:15 And now, onto the full ad reads.
0:01:19 I try to make these interesting, but if you do skip them, please
0:01:20 still check out our sponsors.
0:01:21 I enjoy their stuff.
0:01:23 Maybe you will too.
0:01:30 This episode is brought to you by Cloaked, a platform that lets you generate a new email
0:01:35 address and a phone number every time you sign up for a new website, allowing your
0:01:41 actual email and your actual phone number to remain secret from said website.
0:01:48 It seems that increasingly the right approach to the interwebs is trust no one.
0:01:53 Of course, there’s big companies that have an implied trust.
0:01:58 Because you and them understand that if you give your data over to them and they
0:02:02 abuse that privilege, that they would suffer as a company.
0:02:07 Now, I don’t know if they fully understand that because I think even big companies can
0:02:13 probably sell your data or share your data for purposes of making money.
0:02:14 All that kind of stuff.
0:02:19 It’s just nice to not give over your contact data unless you need to.
0:02:22 So Cloaked solves that problem, makes it super easy.
0:02:27 It’s basically a password manager with extra privacy superpowers.
0:02:34 Go to cloaked.com/lex to get 14 days free, or for a limited time,
0:02:42 use code LexPod when signing up to get 25% off an annual Cloaked plan.
0:02:47 This episode is also brought to you by Masterclass, where you can watch over 200
0:02:51 classes from the best people in the world at their respective disciplines.
0:02:55 Phil Ivey on poker, for example, brilliant masterclass.
0:03:02 And also reminds me of the other Phil, possibly the greatest of all time.
0:03:05 And if you ask him, he will definitely say he’s the greatest of all time,
0:03:06 which is Phil Hellmuth.
0:03:11 We were supposed to do a podcast many, many times, but I’m just not sure I can
0:03:14 handle the level of greatness that is Phil Hellmuth.
0:03:15 No, I love him.
0:03:18 Uh, we’ll probably have a podcast at some point in the future.
0:03:24 I’m not sure he has a masterclass, but he, his essence, his way of being,
0:03:30 his infinite wisdom, and the infinite number of championships that he has won,
0:03:34 is in itself a masterclass.
0:03:39 But, you know, if you want to settle for another mere mortal that
0:03:43 some people consider to be the greatest poker player of all time, there’s Phil Ivey.
0:03:47 And he has an incredible masterclass on there.
0:03:52 Get unlimited access to every masterclass and get an additional 15% off
0:03:55 an annual membership at masterclass.com/lexpod.
0:03:59 That’s masterclass.com/lexpod.
0:04:04 This episode is also brought to you by Notion, a note taking and team
0:04:07 collaboration tool that I’ve used for a long time now.
0:04:12 I’ve used it primarily for note taking, because, you know, you need a
0:04:18 big team for team collaboration, but the people who I know who have used it for
0:04:21 the team collaboration capabilities have really loved it.
0:04:26 And the thing I very much appreciate about Notion is how effectively they’ve
0:04:30 been able to integrate LLMs into their tool.
0:04:33 Their AI assistant looks across multiple documents.
0:04:36 You can ask questions about those multiple documents.
0:04:40 Of course, you can do all the things you kind of expect and do them easily,
0:04:45 like summarization or rewriting stuff or helping you expand or contract
0:04:48 the kind of stuff you’ve written or even generating a draft.
0:04:52 But it can also allow you to ask questions of a thing, like what’s
0:04:54 the progress of the team on a set of different tasks.
0:04:58 Notion does a good job of integrating the LLMs.
0:05:01 Try Notion AI for free when you go to Notion.com/lex.
0:05:07 That’s all lowercase Notion.com/lex to try the power of Notion AI today.
0:05:13 This episode is brought to you by the thing I’m drinking right now called Element.
0:05:18 It’s my daily zero sugar and delicious electrolyte mix.
0:05:24 They sent me a bunch of cans of sparkling water that I loved and devoured,
0:05:28 as much as you can devour a liquid, because I think that’s usually applied
0:05:33 to solid foods, but I devoured it and it was delicious.
0:05:36 But yeah, it’s an instrumental part of my life.
0:05:40 It’s how I get the sodium, potassium, magnesium electrolytes into my body.
0:05:45 I’m going for a super long run after this and I have been drinking
0:05:49 Element before, and I sure as hell am going to be drinking Element after.
0:05:51 Same goes for hard training sessions and grappling.
0:05:57 Essential for me to feel good, especially when I’m fasting, especially
0:05:58 when I’m doing low carb diets, all of that.
0:06:02 My favorite flavor, still to this day, has always been watermelon salt.
0:06:05 But there’s a lot of other delicious flavors.
0:06:10 If you want to try them out, get a simple pack for free with any purchase.
0:06:13 Try it at drinkLMNT.com/lex.
0:06:18 This episode is also brought to you by Motific, a SaaS platform
0:06:24 that helps businesses deploy LLMs that are customized with RAG on organization data.
0:06:29 This is another use case of LLMs, which is just mind blowing.
0:06:32 Take all the data inside an organization.
0:06:38 And allow the people in said organization to query it,
0:06:44 to organize it, to summarize it, to analyze it, all of that,
0:06:47 to leverage it within different products, to ask questions
0:06:50 of how it can be improved in terms of structuring an organization.
0:06:55 Also on the programming front, take all of the code in, take all of the data in
0:06:57 and start asking questions about how the code can be improved,
0:07:00 how it can be refactored, rewritten, all that kind of stuff.
0:07:08 Now, the challenge that Motific is solving is how to do all that in a secure way.
0:07:10 This is like serious stuff.
0:07:12 You can’t eff it up.
0:07:18 Motific is created, I believe, by Cisco, specifically their Outshift group
0:07:20 that does the cutting edge R&D.
0:07:26 So these guys know how to do reliable business deployment
0:07:30 of stuff that needs to be secure, that needs to be done well.
0:07:37 So they help you go from an idea to value as soon as possible.
0:07:40 Visit Motific.ai to learn more.
0:07:45 That’s M-O-T-I-F-I-C.A-I.
0:07:52 This episode is also brought to you by BetterHelp, spelled H-E-L-P, help.
0:07:55 They figure out what you need and match you with a licensed therapist
0:08:00 in under 48 hours, for individuals, for couples; easy, discreet, affordable,
0:08:02 available worldwide.
0:08:06 I think therapy is a really, really, really nice thing.
0:08:08 Talk therapy is a really powerful thing.
0:08:13 And I think what BetterHelp does for a lot of people is introduce them to that.
0:08:15 It’s a great first step.
0:08:17 Try it out; for a lot of people it can work.
0:08:21 But at the very least, it’s the thing that allows you to explore
0:08:25 the possibility of talk therapy and how that feels in your life.
0:08:29 They’ve helped over 4.4 million people.
0:08:30 That’s crazy.
0:08:34 I think the biggest selling point is just how easy it is to get started,
0:08:37 how accessible it is.
0:08:41 Of course, there’s a million other ways to explore the inner workings
0:08:46 of the human mind, looking in the mirror and exploring the Jungian shadow.
0:08:50 But the journey of a thousand miles begins with one step.
0:08:55 So this is a good first step in exploring your own mind.
0:08:59 Check them out at betterhelp.com/lex and save on your first month.
0:09:01 That’s betterhelp.com/lex.
0:09:05 And now, dear friends, here’s Elon Musk,
0:09:10 his fifth time on this, the Lex Fridman podcast.
0:09:11 Yes.
0:09:28 Drinking coffee or water?
0:09:29 Water.
0:09:32 I’m so overcaffeinated right now.
0:09:34 Do you want some caffeine?
0:09:36 I mean, sure.
0:09:37 There’s a nitro drink.
0:09:43 This will keep you up till, like, you know, tomorrow afternoon, basically.
0:09:47 Yeah, I don’t want to.
0:09:48 So what is nitro?
0:09:50 It’s just got a lot of caffeine or something.
0:09:50 Don’t ask questions.
0:09:52 It’s called nitro.
0:09:53 Do you need to know anything else?
0:09:56 It’s got nitrogen.
0:09:57 That’s ridiculous.
0:09:59 I mean, what we breathe is 78% nitrogen anyway.
0:10:02 What do you need to add more?
0:10:07 Most people think that they’re breathing oxygen
0:10:10 and they’re actually breathing 78% nitrogen.
0:10:15 You need like a milk bar, like from Clockwork Orange.
0:10:19 Yeah.
0:10:21 Is that top three Kubrick film for you?
0:10:22 Clockwork Orange, it’s pretty good.
0:10:24 I mean, it’s demented.
0:10:27 Jarring, I’d say.
0:10:29 OK.
0:10:35 OK, so first let’s step back and big congrats
0:10:39 on getting Neuralink implanted into a human.
0:10:41 That’s a historic step for Neuralink.
0:10:43 And there’s many more to come.
0:10:48 Yeah, we obviously just did the second implant as well.
0:10:49 How did that go?
0:10:50 So far, so good.
0:10:55 It looks like we’ve got, I think there are over 400 electrodes
0:10:58 that are providing signals.
0:11:01 So, yeah.
0:11:04 How quickly do you think the number of human participants will scale?
0:11:08 It depends somewhat on the regulatory approval,
0:11:11 the rate at which we get regulatory approvals.
0:11:16 So, we’re hoping to do 10 by the end of this year, total of 10.
0:11:19 So, eight more.
0:11:22 And with each one, you’re going to be learning a lot of lessons
0:11:25 about the neurobiology, the brain, everything,
0:11:27 the whole chain of the Neuralink,
0:11:29 the decoding, the signal processing, all that kind of stuff.
0:11:34 Yeah, yeah, I think it’s obviously going to get better with each one.
0:11:35 I mean, I don’t want to jinx it,
0:11:41 but it seems to have gone extremely well with the second implant.
0:11:45 So, there’s a lot of signal, a lot of electrodes.
0:11:46 It’s working very well.
0:11:51 What improvements do you think we’ll see in Neuralink in the coming,
0:11:54 let’s say, let’s get crazy, in the coming years?
0:11:59 I mean, in years, it’s going to be gigantic.
0:12:03 Because we’ll increase the number of electrodes dramatically.
0:12:06 We’ll improve the signal processing.
0:12:10 So, even with only roughly, I don’t know,
0:12:13 10, 15% of the electrodes working with Neuralink,
0:12:20 with our first patient, we were able to achieve a bits per second
0:12:22 that’s twice the world record.
0:12:26 So, I think we’ll start vastly exceeding the world record
0:12:28 by orders of magnitude in the years to come.
0:12:31 So, it’s like getting to, I don’t know, 100 bits per second, 1,000.
0:12:37 You know, maybe, if it’s like 5 years from now, it might be at a megabit.
0:12:43 Like faster than any human could possibly communicate by typing or speaking.
0:12:46 Yeah, that BPS is an interesting metric to measure.
0:12:50 There might be a big leap in the experience
0:12:53 once you reach a certain level of BPS.
0:12:54 Yeah.
0:12:57 Like, entire new ways of interacting with the computer might be unlocked.
0:12:59 And with humans?
0:13:00 With other humans.
0:13:04 Provided they have a Neuralink too.
0:13:05 Right.
0:13:08 Otherwise, they wouldn’t be able to absorb the signals fast enough.
0:13:11 Do you think they’ll improve the quality of intellectual discourse?
0:13:13 Well, I think you could think of it,
0:13:18 you know, if you were to slow down communication,
0:13:20 how would you feel about that?
0:13:24 You know, if you could only talk at, let’s say, 1/10 of normal speed,
0:13:26 you’d be like, wow, that’s agonizingly slow.
0:13:27 Yeah.
0:13:34 So now, imagine you could communicate clearly
0:13:37 at 10 or 100 or 1,000 times faster than normal.
0:13:42 Listen, I’m pretty sure nobody in their right mind
0:13:43 listens to me at 1x.
0:13:49 They listen at 2x, so I can only imagine what 10x would feel like,
0:13:50 or whether I could actually understand it.
0:13:52 I usually default to 1.5x.
0:13:55 I mean, you can do 2x, but, well, actually, if I’m trying to,
0:13:59 if I’m listening to somebody for like 15, 20 minutes
0:14:02 before I go to sleep, then I’ll do it at 1.5x.
0:14:04 If I’m paying attention, I’ll do 2x.
0:14:08 Right.
0:14:12 But actually, if you start actually listening to podcasts
0:14:15 or sort of audiobooks or anything,
0:14:17 if you get used to doing it at 1.5,
0:14:20 then 1 sounds painfully slow.
0:14:22 I’m still holding on to 1, because I’m afraid.
0:14:26 I’m afraid of myself becoming bored with the reality,
0:14:30 with the real world, where everyone’s speaking at 1x.
0:14:32 Well, it depends on the person, you can speak very fast.
0:14:33 Like, we can communicate very quickly.
0:14:35 And also, if you use a wide range of–
0:14:42 if your vocabulary is larger, your effective bit rate is higher.
0:14:44 That’s a good way to put it.
0:14:45 The effective bit rate.
0:14:48 I mean, that is the question, is how much information
0:14:52 is actually compressed in the low bit transfer of language?
0:14:55 Yeah, if there’s a single word that
0:14:57 is able to convey something that would normally
0:15:01 require 10 simple words, then you’ve
0:15:06 got maybe a 10x compression on your hands.
0:15:07 And that’s really like with memes.
0:15:10 Memes are like data compression.
0:15:13 It conveys a whole–
0:15:16 you’re simultaneously hit with a wide range of symbols
0:15:18 that you can interpret.
0:15:23 And it’s– you kind of get it faster than if it were words
0:15:26 or a simple picture.
0:15:29 And of course, you’re referring to memes broadly like ideas.
0:15:33 Yeah, there’s an entire idea structure
0:15:36 that is like an idea template.
0:15:40 And then you can add something to that idea template.
0:15:42 But somebody has that preexisting idea template
0:15:43 in their head.
0:15:45 So when you add that incremental bit of information,
0:15:48 you’re conveying much more than if you just
0:15:49 said a few words.
0:15:52 It’s everything associated with that meme.
0:15:54 You think there’ll be emergent leaps of capabilities?
0:15:55 as you scale the number of electrodes?
0:15:57 There’ll be a certain–
0:16:00 do you think there’ll be an actual number where just
0:16:03 the human experience will be altered?
0:16:04 Yes.
0:16:06 What do you think that number might be,
0:16:09 whether electrodes or BPS?
0:16:10 We, of course, don’t know for sure.
0:16:13 But is this 10,000 or 100,000?
0:16:16 Yeah, I mean, certainly if you’re anywhere near 10,000
0:16:18 bits per second, I mean, that’s vastly faster than any human
0:16:20 can communicate right now.
0:16:21 If you think of the–
0:16:23 what is the average per second of a human?
0:16:26 It is less than one per second over the course of a day,
0:16:29 because there are 86,400 seconds in a day,
0:16:35 and you don’t communicate 86,400 tokens in a day.
0:16:38 Therefore, your per second is less than one average
0:16:39 over 24 hours.
0:16:41 It’s quite slow.
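A rough back-of-the-envelope sketch of that arithmetic in Python; the daily word count here is an assumed, illustrative figure, not one from the conversation:

```python
# Rough sketch of the data-rate arithmetic described above: averaged over a
# full day, human communication output is less than one token per second.
# The daily token count below is an assumed, illustrative figure.

SECONDS_PER_DAY = 86_400

def average_tokens_per_second(tokens_per_day: float) -> float:
    """Average output rate over 24 hours, in tokens per second."""
    return tokens_per_day / SECONDS_PER_DAY

# Even a talkative person might produce on the order of ~20,000 words a day
# (assumption), roughly one token per word for this estimate.
print(average_tokens_per_second(20_000))  # ~0.23 tokens/s, well under 1
```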
0:16:43 And even if you’re communicating very quickly,
0:16:48 and you’re talking to somebody who
0:16:51 understands what you’re saying, because in order
0:16:54 to communicate, you have to, at least to some degree,
0:16:57 model the mind state of the person to whom you’re speaking,
0:16:59 then take the concept you’re trying to convey,
0:17:01 compress that into a small number of syllables,
0:17:05 speak them, and hope that the other person decompresses them
0:17:09 into a conceptual structure that is as close to what you have
0:17:11 in your mind as possible.
0:17:13 Yeah, I mean, there’s a lot of signal loss there in that process.
0:17:17 Yeah, very lossy compression and decompression.
0:17:20 And a lot of what your neurons are doing
0:17:26 is distilling the concepts down to a small number of symbols,
0:17:29 I would say syllables that I’m speaking, or keystrokes,
0:17:30 whatever the case may be.
0:17:37 So that’s a lot of what your brain computation is doing.
0:17:43 Now, there is an argument that that’s actually
0:17:45 a healthy thing to do or a helpful thing to do,
0:17:50 because as you try to compress complex concepts,
0:17:54 you’re perhaps forced to distill what is most essential
0:17:57 in those concepts, as opposed to just all the fluff.
0:17:59 So in the process of compression,
0:18:02 you distill things down to what matters the most,
0:18:04 because you can only say a few things.
0:18:07 So that is perhaps helpful.
0:18:11 If our data rate increases, it’s highly probable
0:18:15 that it will become far more verbose.
0:18:21 Just like your computer, my first computer had 8K of RAM.
0:18:24 So you really thought about every byte.
0:18:30 And now you’ve got computers with many gigabytes of RAM.
0:18:33 So if you want to do an iPhone app that just
0:18:36 says hello world, it’s probably, I don’t know,
0:18:40 several megabytes minimum with a bunch of fluff.
0:18:43 But nonetheless, we still prefer to have the computer
0:18:46 with more memory and more compute.
0:18:49 So the long-term aspiration of Neuralink
0:18:55 is to improve the AI human symbiosis
0:19:00 by increasing the bandwidth of the communication.
0:19:05 Because even in the most benign scenario of AI,
0:19:08 you have to consider that the AI is simply
0:19:12 going to get bored waiting for you to spit out a few words.
0:19:16 I mean, if the AI can communicate at terabits per second
0:19:20 and you’re communicating at bits per second,
0:19:22 it’s like talking to a tree.
0:19:24 Well, it is a very interesting question
0:19:27 for a super intelligent species.
0:19:28 What use are humans?
0:19:34 I think there is some argument for humans
0:19:36 as a source of will.
0:19:37 Will.
0:19:40 Will, yeah, source of will or purpose.
0:19:46 So if you consider the human mind as being essentially–
0:19:50 there’s the primitive limbic elements, which basically
0:19:52 even reptiles have.
0:19:55 And there’s the cortex, the thinking and planning
0:19:56 part of the brain.
0:19:58 Now, the cortex is much smarter than the limbic system,
0:20:01 and yet is largely in service to the limbic system.
0:20:03 It’s trying to make the limbic system happy.
0:20:04 I mean, the sheer amount of compute
0:20:08 that’s gone into people trying to get laid is insane.
0:20:12 Without actually seeking procreation,
0:20:15 they’re just literally trying to do
0:20:16 this sort of simple motion.
0:20:20 And they get a kick out of it.
0:20:24 So this simple, and in the abstract
0:20:27 rather absurd, motion, which is sex,
0:20:30 the cortex is putting a massive amount of compute
0:20:32 into trying to figure out how to do that.
0:20:35 So like 90% of the distributed compute of the human species
0:20:36 is spent on trying to get laid, probably.
0:20:37 Like a large percentage.
0:20:38 Yeah, yeah.
0:20:43 There’s no purpose to most sex except hedonistic.
0:20:49 It’s just sort of a joy or whatever, dopamine release.
0:20:51 Now, once in a while, it’s procreation.
0:20:53 But for humans, it’s mostly– modern humans,
0:20:57 it’s mostly recreational.
0:21:01 And so your cortex, much smarter than your limbic system,
0:21:02 is trying to make the limbic system happy,
0:21:05 because limbic system wants to have sex.
0:21:08 Or wants some tasty food, or whatever the case may be.
0:21:10 And then that is then further augmented
0:21:13 by the tertiary system, which is your phone, your laptop,
0:21:16 iPad, or your computing stuff.
0:21:17 That’s your tertiary layer.
0:21:20 So you’re actually already a cyborg.
0:21:21 You have this tertiary compute layer,
0:21:24 which is in the form of your computer
0:21:28 with all the applications, all your computer devices.
0:21:32 And so on the getting laid front,
0:21:36 there’s actually a massive amount of digital compute
0:21:41 also trying to get laid, with like Tinder and whatever.
0:21:44 Yeah, so the compute that we humans have built
0:21:46 is also participating.
0:21:48 Yeah, I mean, there’s like gigawatts of compute
0:21:51 going into getting laid, of digital compute.
0:21:53 Yeah.
0:21:54 What if AGI was–
0:21:56 This is happening, as we speak.
0:21:58 If we merge with AI, it’s just going
0:22:02 to expand the compute that we humans use to try to get laid.
0:22:03 Well, so it’s one of the things, certainly, yeah.
0:22:05 Yeah.
0:22:07 But what I’m saying is that, yes,
0:22:09 like, what’s– is there a use for humans?
0:22:13 Well, there’s this fundamental question of what’s
0:22:16 the meaning of life, why do anything at all?
0:22:20 And so if our simple limbic system
0:22:24 provides a source of will to do something,
0:22:28 that then goes to our cortex, that then goes to our tertiary
0:22:32 compute layer, then I don’t know.
0:22:36 It might actually be that the AI in a benign scenario
0:22:40 is simply trying to make the human limbic system happy.
0:22:44 Yeah, it seems like the will is not just about the limbic system.
0:22:46 There’s a lot of interesting, complicated things in there,
0:22:48 but we also want power.
0:22:49 That’s limbic, too, I think.
0:22:52 But then we also want to, in a kind of cooperative way,
0:22:55 alleviate the suffering in the world.
0:22:57 Not everybody does, but yeah, sure.
0:22:59 Some people do.
0:23:02 As a group of humans, when we get together,
0:23:04 we start to have this kind of collective intelligence
0:23:11 that is more complex in its will than the underlying
0:23:14 individual descendants of apes.
0:23:16 So there’s other motivations.
0:23:19 And that could be a really interesting source
0:23:22 of an objective function for AGI.
0:23:24 Yeah.
0:23:30 I mean, there are these fairly cerebral kind
0:23:31 of higher level goals.
0:23:34 I mean, for me, it’s like what’s the meaning of life?
0:23:36 Understanding the nature of the universe
0:23:41 is of great interest to me.
0:23:44 And hopefully to the AI.
0:23:48 And that’s the mission of XAI and GROC,
0:23:49 is to understand the universe.
0:23:53 So do you think people, when you have a Neuralink
0:23:59 with 10,000, 100,000 channels, most of the use cases
0:24:01 will be communication with AI systems?
0:24:09 Well, assuming that there are not–
0:24:15 I mean, there’s solving basic neurological issues
0:24:16 that people have.
0:24:20 If they’ve got damaged neurons in their spinal cord or neck
0:24:25 or as is the case with the first two patients,
0:24:28 then obviously, the first order of business
0:24:33 is solving fundamental neuron damage in spinal cord neck
0:24:36 or in the brain itself.
0:24:43 So our second product is called Blindsight,
0:24:46 which is to enable people who are completely blind,
0:24:49 lost both eyes or optic nerve or just can’t see at all
0:24:52 to be able to see by directly triggering
0:24:54 the neurons in the visual cortex.
0:24:56 So we’re just starting at the basics here.
0:25:03 So it’s like the simple stuff, relatively speaking,
0:25:09 is solving neuron damage.
0:25:15 You can also solve, I think, probably schizophrenia.
0:25:19 If people have seizures of some kind, probably solve that.
0:25:21 It could help with memory.
0:25:26 So there’s kind of a tech tree, if you will,
0:25:27 like you’ve got the basics.
0:25:34 You need literacy before you can have Lord of the Rings.
0:25:39 Got it.
0:25:41 Do you have letters and alphabet?
0:25:42 OK, great.
0:25:47 Words, and then eventually you get sagas.
0:25:52 So I think there may be some things to worry
0:25:56 about in the future, but the first several years
0:25:59 are really just solving basic neurological damage.
0:26:02 For people who have essentially complete or near-complete
0:26:06 loss from the brain to the body, like Stephen Hawking
0:26:06 would be an example, the Neuralink
0:26:11 would be incredibly profound.
0:26:14 Because I mean, you can imagine if Stephen Hawking could
0:26:18 communicate as fast as we’re communicating, perhaps faster.
0:26:20 And that’s certainly possible.
0:26:23 Probable, in fact, likely, I’d say.
0:26:28 So there’s a kind of dual track of medical and non-medical,
0:26:30 meaning so everything you’ve talked about
0:26:34 could be applied to people who are non-disabled in the future.
0:26:37 The logical thing to do, a sensible thing to do,
0:26:47 is to start off solving basic neuron damage issues.
0:26:51 Because there’s obviously some risk with a new device.
0:26:54 You can’t get the risk down to zero, it’s not possible.
0:26:58 So you want to have the highest possible reward,
0:27:01 given that there’s a certain irreducible risk.
0:27:06 And if somebody’s able to have a profound improvement
0:27:11 in their communication, that’s worth the risk.
0:27:13 As you get the risk down.
0:27:14 Yeah, as you get the risk down.
0:27:18 Once the risk is down to, if you have
0:27:22 thousands of people that have been using it for years,
0:27:25 and the risk is minimal, then perhaps at that point,
0:27:29 you could consider saying, OK, let’s aim for augmentation.
0:27:33 Now, I think we’re actually going to aim for augmentation
0:27:35 with people who have neuron damage.
0:27:39 So we’re not just aiming to get people a communication
0:27:41 data rate equivalent to normal humans.
0:27:45 We’re aiming to give people who are quadriplegic,
0:27:48 or maybe have complete loss of the connection
0:27:52 to the brain and body, a communication data
0:27:53 rate that exceeds normal humans.
0:27:54 Well, while we’re in there,
0:27:55 why not?
0:27:57 Let’s give people superpowers.
0:27:58 And the same for vision.
0:28:00 As you restore vision, there could
0:28:04 be aspects of that restoration that are superhuman.
0:28:08 Yeah, at first, the vision restoration will be low res.
0:28:10 Because you have to say, how many neurons
0:28:14 can you put in there and trigger?
0:28:17 And you can do things where you adjust the electric field
0:28:21 so that even if you’ve got, say, 10,000 neurons,
0:28:22 it’s not just 10,000 pixels.
0:28:26 Because you can adjust the field between the neurons
0:28:31 and do them in patterns in order to have, say, 10,000 electrodes
0:28:38 effectively give you maybe like a megapixel
0:28:40 or a 10 megapixel situation.
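The field-shaping Elon describes sounds similar to what cochlear-implant work calls current steering: splitting stimulation current across neighboring electrodes to create virtual stimulation sites between them. A minimal toy sketch of that idea, assuming a simple linear model; this is illustrative, not Neuralink’s actual method:

```python
# Illustrative sketch of current steering: splitting stimulation current
# between two adjacent electrodes shifts the effective stimulation site
# between them, so N physical electrodes can address more than N sites.
# Toy linear model only; not Neuralink's actual approach.

def virtual_site(pos_a: float, pos_b: float, alpha: float) -> float:
    """Effective stimulation position when electrode A gets fraction
    (1 - alpha) of the current and electrode B gets fraction alpha."""
    return (1 - alpha) * pos_a + alpha * pos_b

# Two electrodes 1 unit apart: stepping the current split in tenths
# yields nine extra addressable sites between the two physical ones.
print([round(virtual_site(0.0, 1.0, i / 10), 1) for i in range(11)])
```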
0:28:46 And then over time, I think you get to higher resolution
0:28:48 than human eyes.
0:28:50 And you could also see in different wavelengths.
0:28:54 So like Jordi LaFluge from Star Trek.
0:28:57 You know, I like the thing.
0:28:58 Do you want to see in radar?
0:28:59 No problem.
0:29:03 You can see ultraviolet, infrared, eagle vision,
0:29:05 whatever you want.
0:29:06 Do you think there will be–
0:29:08 let me ask a Joe Rogan question.
0:29:09 Do you think there will be–
0:29:13 I just recently have taken ayahuasca.
0:29:14 Is that a Joe Rogan question?
0:29:15 No, well, yes.
0:29:17 Well, I guess technically it is.
0:29:18 Yeah.
0:29:21 Have you ever tried DMT, bro?
0:29:22 I love you, Joe.
0:29:25 OK.
0:29:26 Yeah, wait, wait.
0:29:27 You haven’t said much about it?
0:29:28 I have not.
0:29:32 OK, well, why don’t you spill the beans?
0:29:34 It was a truly incredible experience.
0:29:36 So we turn the tables on you?
0:29:36 Wow.
0:29:39 I mean, you’re in the jungle.
0:29:42 Yeah, amongst the trees myself and the shaman.
0:29:45 Yeah, with the insects, with the animals all around you.
0:29:47 Like, jungle as far as I can see.
0:29:48 I mean–
0:29:49 That’s the way to do it.
0:29:51 Things are going to look pretty wild.
0:29:53 Yeah, pretty wild.
0:29:56 I took an extremely high dose.
0:30:01 Don’t go hugging an anaconda or something, you know?
0:30:03 You haven’t lived unless you made love to an anaconda.
0:30:06 I’m sorry.
0:30:07 Snakes and ladders.
0:30:15 Yeah, I took an extremely high dose of nine cups.
0:30:15 And–
0:30:17 Damn, OK, that sounds like a lot.
0:30:19 What’s the normal dose, one cup or–
0:30:21 One or two, well, usually one.
0:30:25 Two and– wait, like right off the bat,
0:30:26 or do you work your way up to it?
0:30:27 So I–
0:30:28 [LAUGHTER]
0:30:30 Did you just jump in at the deep end?
0:30:33 Across two days, because on the first day, I took two and I–
0:30:33 OK.
0:30:36 It was a ride, but it wasn’t quite like a–
0:30:38 It wasn’t like a revelation.
0:30:40 It wasn’t into deep space type of ride.
0:30:42 It was just like a little airplane ride.
0:30:46 I mean, I saw some trees and some visuals and all that.
0:30:48 I just saw a dragon and all that kind of stuff.
0:30:48 But–
0:30:50 [LAUGHTER]
0:30:52 Nine cups, you went to Pluto, I think.
0:30:53 Pluto, yeah.
0:30:54 No, deep space.
0:30:55 Deep space.
0:30:58 One of the interesting aspects of my experience
0:31:00 is I thought I would have some demons, some stuff to work
0:31:01 through.
0:31:02 That’s what people–
0:31:03 That’s what everyone says.
0:31:05 That’s what everyone says, yeah, exactly.
0:31:05 I had nothing.
0:31:07 I had all positive.
0:31:08 I was just so full–
0:31:09 You’re just a pure soul.
0:31:10 I don’t even think so.
0:31:10 I don’t know.
0:31:12 [LAUGHTER]
0:31:17 But I kept having extremely high resolution
0:31:19 thoughts about the people I know in my life.
0:31:21 You were there.
0:31:24 And it’s just not from my relationship with that person,
0:31:26 but just as the person themselves,
0:31:29 I had just this deep gratitude of who they are.
0:31:30 That’s cool.
0:31:32 It was just like this exploration.
0:31:35 Like, you know, like The Sims or whatever, you get to watch them.
0:31:38 I got to watch people and just be in awe of how amazing they are.
0:31:39 It sounds awesome.
0:31:40 Yeah, it’s great.
0:31:41 I was waiting for–
0:31:44 When’s the demon coming?
0:31:45 Exactly.
0:31:46 Maybe I’ll have some negative thoughts.
0:31:47 Nothing, nothing.
0:31:51 I had just extreme gratitude for them,
0:31:55 and also a lot of space travel.
0:31:56 Space travel to where?
0:31:57 So here’s what it was.
0:32:02 It was people, the human beings that I know,
0:32:04 they had this kind of–
0:32:07 the best way to describe it is they had a glow to them.
0:32:13 And then I kept flying out from them to see Earth,
0:32:16 to see our solar system, to see our galaxy.
0:32:21 And I saw that light, that glow, all across the universe.
0:32:26 Whatever that form is, whatever that–
0:32:28 Did you go past the Milky Way?
0:32:30 Yeah.
0:32:31 You’re like intergalactic.
0:32:33 Yeah, intergalactic.
0:32:36 But always pointing in.
0:32:38 Past the Milky Way, past–
0:32:41 I mean, I saw a huge number of galaxies, intergalactic,
0:32:43 and all of it was glowing.
0:32:44 But I couldn’t control that travel,
0:32:48 because I would have wanted to explore near distances
0:32:50 to the solar system, see if there’s aliens or any
0:32:50 of that kind of stuff.
0:32:52 No, I didn’t see–
0:32:53 Zero aliens?
0:32:55 An implication of aliens, because they were glowing.
0:32:57 They were glowing in the same way that humans were glowing,
0:33:01 that life force that I was seeing.
0:33:04 The thing that made humans amazing
0:33:06 was there throughout the universe.
0:33:09 Like, there was these glowing dots.
0:33:11 So, I don’t know.
0:33:13 It made me feel like there is life–
0:33:15 no, not life, but something, whatever
0:33:18 makes humans amazing all throughout the universe.
0:33:19 Sounds good.
0:33:20 Yeah, it was amazing.
0:33:21 No demons.
0:33:22 No demons.
0:33:23 I looked for the demons.
0:33:24 There’s no demons.
0:33:25 There were dragons, and they’re pretty–
0:33:27 So the thing about trees–
0:33:28 Was there anything scary at all?
0:33:31 Uh, dragons?
0:33:32 But they weren’t scary.
0:33:32 They were friendly.
0:33:34 They were protective.
0:33:34 So the thing is–
0:33:35 Was it Puff the Magic Dragon?
0:33:39 No, it was more like a Game of Thrones kind of dragon.
0:33:40 They were very friendly.
0:33:41 They were very big.
0:33:44 So the thing is, these were giant trees at night,
0:33:46 which is where I was.
0:33:47 I mean, the jungle’s kind of scary.
0:33:48 Yeah.
0:33:50 The trees started to look like dragons,
0:33:52 and they were all looking at me.
0:33:53 Sure, OK.
0:33:54 And it didn’t seem scary.
0:33:56 They seemed like they were protecting me.
0:33:58 And the shaman and the people–
0:34:00 didn’t speak any English, by the way,
0:34:02 which made it even scarier, I guess.
0:34:06 We’re not even, you know, we’re worlds apart in many ways.
0:34:10 It’s just– but yeah, there was not–
0:34:14 they talk about the mother of the forest protecting you,
0:34:16 and that’s what I felt like.
0:34:17 And you’re way out in the jungle.
0:34:18 Way out.
0:34:21 This is not like a tourist retreat–
0:34:24 You know, like 10 miles outside of Rio or something.
0:34:26 No, we weren’t.
0:34:27 No, this is not a–
0:34:29 You were deep in the Amazon.
0:34:33 So me and this guy named Paul Rosely, who basically is Tarzan,
0:34:35 he lives in the jungle, we went out deep,
0:34:36 and we just went crazy.
0:34:37 Wow, cool.
0:34:38 Yeah.
0:34:41 So anyway, can I get that same experience in Neuralink?
0:34:42 Probably, yeah.
0:34:45 I guess that is the question for non-disabled people.
0:34:49 Do you think that there’s a lot in our perception,
0:34:53 in our experience of the world that could be explored,
0:34:55 that could be played with using Neuralink?
0:34:58 Yeah, I mean, Neuralink is–
0:35:03 It’s really a generalized input/output device.
0:35:06 You know, it’s reading electrical signals
0:35:08 and generating electrical signals.
0:35:12 And I mean, everything that you’ve ever experienced
0:35:13 in your whole life–
0:35:16 smell, you know, emotions– all of those
0:35:18 are electrical signals.
0:35:22 So it’s kind of weird to think that your entire life
0:35:25 experience is distilled down to electrical signals from neurons,
0:35:27 but that is, in fact, the case.
0:35:31 Or I mean, that’s at least what all the evidence points to.
0:35:37 So I mean, you could trigger the right neuron.
0:35:41 You could trigger it at a particular scent.
0:35:43 You could certainly make things glow.
0:35:45 I mean, do you promise anything?
0:35:47 I mean, really, you can think of the brain
0:35:48 as a biological computer.
0:35:51 So if there are certain, say, chips or elements
0:35:54 of that biological computer that are broken,
0:35:56 let’s say your ability to–
0:35:59 if you’ve got a stroke, that means you’ve got–
0:36:02 some part of your brain is damaged.
0:35:59 Let’s say it’s speech generation or the ability
0:36:06 to move your left hand.
0:36:10 That’s the kind of thing that Neuralink could solve.
0:36:14 If it’s– if you’ve got like a massive amount of memory loss
0:36:19 that’s just gone, well, we can’t get the memories back.
0:36:21 We could restore your ability to make memories,
0:36:27 but we can’t restore memories that are fully gone.
0:36:34 Now, I should say, maybe if part of the memory is there
0:36:38 and the means of accessing the memory is the part that’s broken,
0:36:42 then we could re-enable the ability to access the memory.
0:36:45 But you can think of it like RAM in your computer.
0:36:50 If the RAM is destroyed or your SD card is destroyed,
0:36:51 we can’t get that back.
0:36:53 But if the connection to the SD card is destroyed,
0:36:56 we can fix that.
0:36:59 If it is fixable physically, then it can be fixed.
0:37:01 Of course, with AI, you can–
0:37:03 just like you can repair photographs
0:37:05 and fill in missing parts of photographs,
0:37:07 maybe you can do the same.
0:37:11 Yeah, you could say create the most probable set of memories
0:37:17 based on all the information you have about that person.
0:37:19 You could then–
0:37:21 it would be probabilistic restoration of memory.
0:37:23 Now, we’re getting pretty esoteric here.
0:37:26 But that is one of the most beautiful aspects
0:37:29 of the human experience is remembering the good memories.
0:37:33 Like, we live most of our life, as Daniel Kahneman has talked about,
0:37:35 in our memories, not in the actual moment.
0:37:39 We’re collecting memories and we kind of relive them in our head.
0:37:41 And that’s the good times.
0:37:43 If you just integrate over our entire life,
0:37:45 it’s remembering the good times
0:37:48 that produces the largest amount of happiness.
0:37:50 Yeah, well, I mean, what are we but our memories?
0:37:55 And what is death but the loss of memory?
0:37:57 Loss of information.
0:38:01 You know, you can run a thought experiment:
0:38:05 if you were disintegrated
0:38:09 painlessly and then reintegrated a moment later,
0:38:12 like teleportation, I guess, provided there’s no information
0:38:16 loss, the fact that one body was disintegrated is irrelevant.
0:38:19 And memories is just such a huge part of that.
0:38:23 Death is, fundamentally, the loss of information,
0:38:25 the loss of memory.
0:38:29 So if we can store them as accurately as possible,
0:38:31 we basically achieve a kind of immortality.
0:38:34 Yeah.
0:38:40 You’ve talked about the threats, the safety concerns of AI.
0:38:42 Let’s look at long-term visions.
0:38:46 Do you think Neuralink is, in your view,
0:38:50 the best current approach we have for AI safety?
0:38:53 It’s an idea that may help with AI safety.
0:38:54 Certainly not.
0:38:57 I wouldn’t want to claim it’s like some policy
0:39:00 or that’s a sure thing.
0:39:03 But I mean, many years ago, I was thinking like, well, what?
0:39:12 What would inhibit alignment of collective human will
0:39:16 with artificial intelligence?
0:39:21 And the low data rate of humans, especially our slow output
0:39:24 rate, would necessarily just–
0:39:28 because the communication is so slow,
0:39:36 would diminish the link between humans and computers.
0:39:41 Like the more you are a tree, the less you know what the tree wants.
0:39:43 Let’s say you look at this plant or whatever
0:39:45 and like, hey, I’d really like to make that plant happy.
0:39:48 But it’s not saying a lot, you know?
0:39:50 So the more we increase the data rate
0:39:53 that humans can intake and output,
0:39:55 then that means the higher the chance
0:39:58 we have in a world full of AGIs.
0:39:59 Yeah.
0:40:02 We could better align collective human will with AI
0:40:07 if the output rate, especially, was dramatically increased.
0:40:09 And I think there’s potential to increase the output rate
0:40:13 by, I don’t know, three, maybe six, maybe more orders
0:40:14 of magnitude.
0:40:18 So it’s better than the current situation.
0:40:21 And that output rate would be by increasing the number
0:40:23 of electrodes, number of channels,
0:40:26 and also maybe implanting multiple neural links.
0:40:28 Yeah.
0:40:30 Do you think there will be a world
0:40:33 in the next couple of decades where it’s hundreds of millions
0:40:35 of people have neural links?
0:40:39 Yeah, I do.
0:40:40 Do you think when people just–
0:40:44 when they see the capabilities, the superhuman capabilities
0:40:48 that are possible, and then the safety is demonstrated?
0:40:53 Yeah, if it’s extremely safe and you have–
0:40:55 and you can have superhuman abilities.
0:41:01 And let’s say you can upload your memories.
0:41:04 So you wouldn’t lose memories.
0:41:09 Then I think probably a lot of people would choose to have it.
0:41:12 It would supersede the cell phone, for example.
0:41:16 I mean, the biggest problem that a cell phone has
0:41:22 is trying to figure out what you want.
0:41:25 So that’s why you’ve got autocomplete
0:41:28 and you’ve got output, which is all the pixels on the screen.
0:41:30 But from the perspective of the human,
0:41:32 the output is so friggin’ slow.
0:41:34 Your desktop or phone is desperately just
0:41:36 trying to understand what you want.
0:41:40 And there’s an eternity between every keystroke
0:41:42 from a computer standpoint.
0:41:46 Yeah, so to the computer, it’s like talking to a tree.
0:41:49 That slow-moving tree is trying to swipe.
0:41:51 Yeah.
0:41:54 So if you have computers that are doing trillions
0:41:58 of instructions per second, and a whole second went by,
0:42:01 there’s a trillion things it could have done.
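For a rough sense of that scale (both rates below are assumed round numbers, not measured figures):

```python
# Illustrative scale of the "eternity between keystrokes" point.
# Both rates below are assumed round numbers, not measured figures.

INSTRUCTIONS_PER_SECOND = 1e12   # ~a trillion instructions per second
TYPING_SPEED_CPS = 10            # a fast typist, ~10 characters per second

gap_seconds = 1 / TYPING_SPEED_CPS
instructions_per_keystroke_gap = INSTRUCTIONS_PER_SECOND * gap_seconds
print(f"{instructions_per_keystroke_gap:.0e}")  # ~1e11 instructions per gap
```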
0:42:05 Yeah, I think it’s exciting and scary for people.
0:42:07 Because once you have a very high bit rate,
0:42:10 it changes the human experience in a way
0:42:12 that’s very hard to imagine.
0:42:17 Yeah, it would be something different.
0:42:20 I mean, some sort of futuristic cyborg.
0:42:22 I mean, we’re obviously talking about, by the way,
0:42:24 it’s not like around the corner.
0:42:26 You asked me what the future is.
0:42:28 Maybe this is like– it’s not super far away,
0:42:31 but 10, 15 years, that kind of thing.
0:42:37 When can I get one?
0:42:39 10 years?
0:42:42 Probably less than 10 years.
0:42:45 Depends on what you want to do, you know?
0:42:48 Hey, if I can get like 1,000 BPS.
0:42:49 1,000 BPS.
0:42:52 And it’s safe, and I can just interact with the computer
0:42:54 while laying back and eating Cheetos.
0:42:56 I don’t eat Cheetos.
0:42:58 There’s certain aspects of human-computer interaction
0:43:01 that, when done more efficiently and more enjoyably,
0:43:03 are, like, worth it.
0:43:07 Well, we feel pretty confident that I think maybe
0:43:10 within the next year or two that someone with a Neuralink
0:43:16 implant will be able to outperform a pro gamer.
0:43:17 Nice.
0:43:21 Because the reaction time would be faster.
0:43:23 I got to visit Memphis.
0:43:24 Yeah, yeah.
0:43:25 You’re going big on compute.
0:43:28 And you’ve also said play to win or don’t play at all.
0:43:31 So what does it take to win?
0:43:34 For AI, that means you’ve got to have
0:43:37 the most powerful training compute.
0:43:40 And the rate of improvement of training compute
0:43:43 has to be faster than everyone else,
0:43:47 or your AI will be worse.
0:43:49 So how can Grok, let’s say Grok 3,
0:43:52 that might be available like next year?
0:43:53 Well, hopefully end of this year.
0:43:54 Grok 3.
0:43:56 If we’re lucky, yeah.
0:44:01 How can that be the best LLM, the best AI system
0:44:03 available in the world?
0:44:05 How much of it is compute?
0:44:06 How much of it is data?
0:44:08 How much of it is post-training?
0:44:11 How much of it is the product that you package it up in?
0:44:14 All that kind of stuff.
0:44:15 I mean, it won’t matter.
0:44:18 It’s sort of like saying, let’s say it’s a Formula 1 race.
0:44:20 Like, what matters more, the car or the driver?
0:44:24 I mean, they both matter.
0:44:28 If a car is not fast, then, like,
0:44:30 say it’s half the horsepower of a competitor’s,
0:44:32 the best driver will still lose.
0:44:35 If it’s twice the horsepower, then probably even a mediocre
0:44:37 driver will still win.
0:44:40 So the training compute is kind of like the engine.
0:44:42 It’s like the horsepower of the engine.
0:44:45 So really, you want to try to do the best
0:44:49 on that, and then it’s how efficiently do you
0:44:52 use that training compute, and how efficiently do you
0:44:57 do the inference, the use of the AI.
0:44:59 So obviously, that comes down to human talent.
0:45:02 And then what unique access to data do you have?
0:45:05 That also plays a role.
0:45:07 Do you think Twitter data will be useful?
0:45:12 Yeah, I mean, I think most of the leading AI companies
0:45:16 have already scraped all the Twitter data.
0:45:19 I don’t know, but I think they have.
0:45:22 So on a go forward basis, what’s useful
0:45:25 is the fact that it’s up to the second.
0:45:28 That’s because it’s hard for them to scrape in real time.
0:45:34 So there’s an immediacy advantage that Grok has already.
0:45:37 I think with Tesla and the real time video coming
0:45:40 from several million cars, ultimately tens of millions
0:45:40 of cars with Optimus, there might
0:45:45 be hundreds of millions of Optimus robots, maybe
0:45:50 billions, learning a tremendous amount from the real world.
0:45:53 That’s the biggest source of data.
0:45:56 I think, ultimately, it’s sort of Optimus, probably.
0:45:58 Optimus is going to be the biggest source of data.
0:45:59 Because it’s–
0:46:02 Because reality scales.
0:46:05 Reality scales to the scale of reality.
0:46:09 It’s actually humbling to see how little data humans have
0:46:12 actually been able to accumulate.
0:46:16 So really, how many trillions of usable tokens
0:46:21 have humans generated on a non-duplicative basis,
0:46:26 discounting spam and repetitive stuff?
0:46:28 It’s not a huge number.
0:46:31 You run out pretty quickly.
0:46:32 And Optimus can go–
0:46:37 so Tesla cars, unfortunately, have to stay on the road.
0:46:39 The Optimus robot can go anywhere.
0:46:43 And there’s more reality off the road, and it can go off-road.
0:46:45 I mean, the Optimus robot can pick up the cup
0:46:47 and see, did it pick up the cup in the right way?
0:46:52 Did it, say, pour water in the cup?
0:46:54 Did the water go in the cup or not go in the cup?
0:46:56 Did it spill water or not?
0:46:58 Yeah.
0:46:59 Simple stuff like that.
0:47:04 But it can do that at scale times a billion.
0:47:08 So it can generate useful data from reality,
0:47:11 like cause and effect stuff.
0:47:14 What do you think it takes to get to mass production
0:47:17 of humanoid robots like that?
0:47:19 It’s the same as cars, really.
0:47:23 I mean, global capacity for vehicles
0:47:26 is about 100 million a year.
0:47:30 And it could be higher, just that the demand is
0:47:32 on the order of 100 million a year.
0:47:35 And then there’s roughly 2 billion vehicles
0:47:38 that are in use in some way, which
0:47:41 makes sense, because the life of a vehicle is about 20 years.
0:47:43 So at steady state, you can have 100 million vehicles produced
0:47:47 a year with a 2 billion vehicle fleet, roughly.
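The steady-state arithmetic here is simply fleet size = production rate × average vehicle lifetime, using the round numbers from the conversation:

```python
# Steady-state fleet size = production rate x average vehicle lifetime,
# using the round numbers from the conversation.

production_per_year = 100e6   # ~100 million vehicles per year
vehicle_lifetime_years = 20   # ~20-year average life

fleet_size = production_per_year * vehicle_lifetime_years
print(f"{fleet_size:.0e}")    # 2e+09, i.e. roughly 2 billion vehicles in use
```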
0:47:51 Now, for humanoid robots, the utility is much greater.
0:47:55 So my guess is humanoid robots are more like a billion
0:47:56 plus per year.
0:48:01 But until you came along and started building Optimus,
0:48:04 it was thought to be an extremely difficult problem.
0:48:06 I mean, it still is an extremely difficult problem.
0:48:08 So, a walk in the park.
0:48:11 I mean, Optimus currently would struggle
0:48:13 to walk in the park.
0:48:16 I mean, it can walk in a park, if it’s not too difficult.
0:48:20 But it will be able to walk over a wide range of terrain.
0:48:22 Yeah, and pick up objects.
0:48:25 Yeah, yeah, it can already do that.
0:48:28 But like all kinds of objects, all foreign objects.
0:48:31 I mean, pouring water in a cup, it’s not trivial.
0:48:34 Because if you don’t know anything about the container,
0:48:36 it could be all kinds of containers.
0:48:38 Yeah, there’s going to be an immense amount of engineering
0:48:39 just going into the hand.
0:48:44 The hand might be close to half of all the engineering
0:48:49 in Optimist from an electromechanical standpoint.
0:48:53 The hand is probably roughly half of the engineering.
0:48:55 But so much of the intelligence.
0:48:57 The intelligence of humans goes into what
0:49:00 we do with our hands, like the manipulation of the world,
0:49:02 manipulation of objects in the world.
0:49:06 Intelligence, safe manipulation of objects in the world, yeah.
0:49:08 I mean, you start really thinking about your hand
0:49:11 and how it works, you know.
0:49:11 I do all the time.
0:49:14 The sense of control, the dexterity of it is–
0:49:15 we have to check your mother’s hands.
0:49:16 Yeah.
0:49:19 So, I mean, like your hand’s actuators, the muscles
0:49:23 of your hand are almost overwhelmingly in your forearm.
0:49:27 So your forearm has the muscles that actually
0:49:28 control your hand.
0:49:31 There’s a few small muscles in the hand itself.
0:49:35 But your hand is really like a skeleton meat puppet.
0:49:38 And with cables.
0:49:41 So the muscles that control your fingers are in your forearm.
0:49:43 And they go through the carpal tunnel,
0:49:45 which is this little collection of bones,
0:49:51 And a tiny tunnel that these cables, the tendons, go through.
0:49:57 And those tendons are mostly what move your hands.
0:50:00 And something like those tendons has to be re-engineered
0:50:03 into Optimus in order to do all that kind of stuff.
0:50:03 Yeah.
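The arrangement being described, with remote actuators pulling cables through a narrow channel, is what robotics calls tendon-driven (cable-driven) actuation. A toy sketch of the basic kinematic relation, with illustrative values that are not Optimus specifications:

```python
import math

# Toy model of tendon-driven actuation: an actuator in the "forearm"
# pulls a cable routed over a pulley at the finger joint. The cable
# excursion needed for a given joint angle is angle (radians) x pulley
# radius. Values are illustrative, not Optimus specifications.

def tendon_excursion(joint_angle_deg: float, pulley_radius_mm: float) -> float:
    """Cable travel (mm) to rotate one joint by the given angle."""
    return math.radians(joint_angle_deg) * pulley_radius_mm

# Curling a joint 90 degrees over a 5 mm pulley needs ~7.9 mm of cable travel.
print(round(tendon_excursion(90, 5.0), 1))
```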
0:50:05 So I think with the current Optimus, we
0:50:08 tried putting the actuators in the hand itself.
0:50:10 Then you sort of end up having these like–
0:50:11 Giant hands?
0:50:13 Yeah, giant hands that look weird.
0:50:16 And then they don’t actually have enough degrees of freedom.
0:50:18 Or enough strength.
0:50:20 So then you realize, OK, that’s why
0:50:23 you’ve got to put the actuators in the forearm.
0:50:27 And just like a human, you’ve got to run cables
0:50:31 through a narrow tunnel to operate the fingers.
0:50:34 And there’s also a reason for not having all the fingers
0:50:35 the same length.
0:50:37 So it wouldn’t be expensive from an energy or evolutionary
0:50:39 standpoint to have all your fingers be the same length.
0:50:40 So why not do the same length?
0:50:41 Yeah, why not?
0:50:44 Because it’s actually better to have different lengths.
0:50:45 Your dexterity is better if you’ve
0:50:47 got fingers of different length.
0:50:50 And there are more things you can do.
0:50:53 And your dexterity is actually better
0:50:55 if your fingers are of different length.
0:50:57 Like there’s a reason we’ve got a little finger.
0:50:59 Like why not have a little finger this bigger?
0:50:59 Yeah.
0:51:01 Because it allows you to do fine–
0:51:04 it helps you with fine motor skills.
0:51:05 This little finger helps?
0:51:05 It does.
0:51:11 If you lost your little finger, it would–
0:51:13 you’d have noticeably less dexterity.
0:51:14 So as you’re figuring out this problem,
0:51:16 you have to also figure out a way to do it
0:51:17 so you can mass manufacture it.
0:51:19 So it has to be as simple as possible.
0:51:22 It’s actually going to be quite complicated.
0:51:24 The “as possible” part is quite a high bar.
0:51:28 If you want to have a humanoid robot that can do things
0:51:30 that a human can do, it’s actually–
0:51:31 it’s a very high bar.
0:51:36 So our new arm has 22 degrees of freedom instead of 11
0:51:39 and has the actuators in the forearm.
0:51:42 And all the actuators are designed from scratch,
0:51:45 physics first principles.
0:51:48 The sensors are all designed from scratch.
0:51:50 And we’ll continue to put a tremendous amount
0:51:54 of engineering effort into improving the hand.
0:51:58 By hand, I mean like the entire forearm from elbow forward
0:52:02 is really the hand.
0:52:09 So that’s incredibly difficult engineering, actually.
0:52:13 And so the simplest possible version of a humanoid robot
0:52:18 that can do even most, perhaps not all, of what a human can do
0:52:20 is actually still very complicated.
0:52:23 It’s not simple.
0:52:24 It’s very difficult.
0:52:27 Can you just speak to what it takes for a great engineering
0:52:28 team for you?
0:52:32 What I saw in Memphis, the supercomputer cluster,
0:52:35 is just this intense drive towards simplifying
0:52:38 the process, understanding the process, constantly improving
0:52:39 it, constantly iterating it.
0:52:48 Well, it’s easy to say simplify.
0:52:50 It’s very difficult to do it.
0:52:57 You know, I have this very basic first principles
0:53:00 algorithm that I run kind of as like a mantra, which
0:53:02 is to first question the requirements,
0:53:05 make the requirements less dumb.
0:53:07 The requirements are always dumb to some degree.
0:53:09 So you want to start off by reducing
0:53:12 the number of requirements.
0:53:15 And no matter how smart the person who gave you those
0:53:18 requirements, they’re still dumb to some degree.
0:53:20 You have to start there, because otherwise you
0:53:23 could get the perfect answer to the wrong question.
0:53:26 So try to make the question the least wrong possible.
0:53:30 That’s what question the requirements means.
0:53:32 And then the second thing is try to delete
0:53:38 whatever the step is, the part or the process step.
0:53:43 Sounds very obvious, but people often
0:53:46 forget to try deleting it entirely.
0:53:48 And if you’re not forced to put back at least 10% of what
0:53:50 you delete, you’re not deleting enough.
0:53:59 And somewhat illogically, people often, most of the time,
0:54:01 feel as though they’ve succeeded if they’ve not
0:54:03 been forced to put things back in.
0:54:05 But actually, they haven’t, because they’ve
0:54:07 been overly conservative and have left things in there
0:54:09 that shouldn’t be.
0:54:15 So only the third thing is try to optimize it or simplify it.
0:54:22 Again, these all sound, I think, very obvious when I say them,
0:54:24 but the number of times I’ve made these mistakes
0:54:29 is more than I care to remember.
0:54:30 That’s why I have this mantra.
0:54:35 So in fact, I’d say the most common mistake of smart engineers
0:54:37 is to optimize a thing that should not exist.
0:54:43 So like you said, you run through the algorithm.
0:54:46 Basically, show up to a problem.
0:54:48 Show up to the supercomputer cluster
0:54:51 and see the process and ask, can this be deleted?
0:54:54 Yeah, first try to delete it.
0:54:55 Yeah.
0:54:57 Yeah, that’s not easy to do.
0:55:02 No, and actually, what generally makes people uneasy
0:55:05 is that at least some of the things
0:55:07 that you delete, you will put back in.
0:55:10 But going back to sort of where our limbic system can
0:55:17 steer us wrong is that we tend to remember with sometimes
0:55:21 a jarring level of pain where we deleted something
0:55:23 that we subsequently needed.
0:55:26 And so people will remember that one time they
0:55:29 forgot to put in this thing three years ago,
0:55:31 and that caused them trouble.
0:55:34 And so they over-correct, and then they put too much stuff
0:55:36 in there and over-complicate things.
0:55:38 So you actually have to say, we’re deliberately
0:55:42 going to delete more than we should.
0:55:45 So that we’re putting at least one in 10 things
0:55:48 we’re going to add back in.
0:55:50 And I’ve seen you suggest just that,
0:55:52 that something should be deleted,
0:55:55 and you can kind of see the pain.
0:55:56 Oh, yeah, absolutely.
0:55:58 Everybody feels a little bit of the pain.
0:56:00 Absolutely, and I tell them in advance,
0:56:01 like yeah, there’s some of the things that we delete,
0:56:03 we’re going to put back in.
0:56:07 And people get a little shook by that.
0:56:09 But it makes sense, because if you’re
0:56:14 so conservative as to never have to put anything back in,
0:56:17 you obviously have a lot of stuff that isn’t needed.
0:56:19 So you’ve got to over-correct.
0:56:21 This is, I would say, like a cortical override
0:56:23 to limbic instinct.
0:56:26 One of many that probably leads us astray.
0:56:30 Yeah, and there’s like a step four as well,
0:56:34 which is any given thing can be sped up.
0:56:36 I have a fast you think it can be done.
0:56:38 Like, whatever the speed is being done,
0:56:39 it can be done faster.
0:56:41 But you shouldn’t speed things up until it’s off,
0:56:42 until you’ve tried to delete it and optimize it,
0:56:45 otherwise you’re speeding up something that shouldn’t
0:56:46 exist as absurd.
0:56:51 And then the fifth thing is to automate it.
0:56:53 And I’ve gone backwards so many times
0:56:57 where I’ve automated something, sped it up, simplified it,
0:56:59 and then deleted it.
0:57:02 And I got tired of doing that.
0:57:03 So that’s why I’ve got this mantra
0:57:06 that is a very effective five step process.
0:57:08 It works great.
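Paraphrasing the five steps as stated, with the ordering constraint made explicit (a restatement for reference, not an official formulation):

```python
# The five-step process as described in the conversation. Order matters:
# each step is wasted effort if applied before the earlier ones.

FIVE_STEP_ALGORITHM = [
    "1. Question the requirements (make them less dumb).",
    "2. Delete the part or process step; if you aren't forced to add back"
    " at least 10% of what you deleted, you didn't delete enough.",
    "3. Simplify or optimize what remains.",
    "4. Accelerate: speed up the surviving steps.",
    "5. Automate, only after steps 1 through 4.",
]

for step in FIVE_STEP_ALGORITHM:
    print(step)
```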
0:57:10 Well, when you’ve already automated,
0:57:12 deleting must be real painful.
0:57:13 Yeah, it’s great.
0:57:16 It’s like, wow, I really wasted a lot of effort there.
0:57:18 Yeah.
0:57:22 I mean, what you’ve done with the cluster in Memphis
0:57:24 is incredible, just in a handful of weeks.
0:57:26 Yeah, it’s not working yet.
0:57:28 So I don’t want to pop the champagne corks.
0:57:37 In fact, I have a call in a few hours with the Memphis team
0:57:40 because we’re having some power fluctuation issues.
0:57:50 So yeah, it’s like when you do synchronized training,
0:57:54 when you have all these computers where the training is
0:58:00 synchronized to the sort of millisecond level,
0:58:01 it’s like having an orchestra.
0:58:08 And then the orchestra can go from loud to silent very quickly,
0:58:09 at a sub-second level.
0:58:12 And then the electrical system kind of freaks out about that.
0:58:16 Like if you suddenly see giant shifts of 10, 20 megawatts
0:58:20 several times a second, this is not
0:58:22 what electrical systems are expecting to see.
0:58:24 So that’s one of the many things you have to figure out.
0:58:29 The cooling, the power, and then on the software side,
0:58:32 as you go up the stack, how to do the distributed
0:58:34 computing, all of that stuff.
0:58:38 Today’s problem is dealing with extreme power jitter.
0:58:40 Power jitter, yeah.
0:58:42 It’s got a nice ring to it.
0:58:43 So that’s OK.
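For a rough sense of the scale involved, a back-of-the-envelope sketch; every number here is an illustrative assumption, not an actual figure for the Memphis cluster:

    # Back-of-the-envelope sketch of why millisecond-synchronized training
    # produces megawatt-scale power swings. All numbers are assumptions.
    num_gpus = 100_000
    watts_burst = 700   # assumed per-GPU draw during a compute burst
    watts_wait = 550    # assumed per-GPU draw while waiting at a sync barrier

    swing_mw = num_gpus * (watts_burst - watts_wait) / 1e6
    print(f"Fleet-wide swing: ~{swing_mw:.0f} MW")  # ~15 MW in this sketch
    # If every GPU hits the barrier within the same millisecond, the grid
    # sees that whole swing several times per second.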
0:58:47 And you stayed up late into the night as you often do there.
0:58:48 Last week, yeah.
0:58:50 Last week, yeah.
0:58:58 Yeah, we finally got training going at roughly 4:20 AM
0:59:01 last Monday.
0:59:02 Total coincidence.
0:59:03 Yeah, I mean, maybe 4:22 or something.
0:59:05 Yeah, yeah.
0:59:06 It’s that universe again with the jokes.
0:59:08 I mean, exactly, just love it.
0:59:10 I mean, I wonder if you could speak to the fact
0:59:13 that one of the things that you did when I was there
0:59:15 is you went through all the steps
0:59:17 of what everybody’s doing just to get the sense
0:59:20 that you yourself understand it.
0:59:23 And everybody understands it, so they
0:59:26 can tell when something is dumb or something
0:59:27 is inefficient, or that kind of thing.
0:59:29 Can you speak to that?
0:59:31 Yeah, so like I try to do–
0:59:33 whatever the people at the front lines are doing,
0:59:35 I try to do it at least a few times myself.
0:59:37 So connecting fiber optic cables,
0:59:41 diagnosing a faulty connection. That
0:59:44 tends to be the limiting factor for large training clusters:
0:59:49 the cabling, with so many cables.
0:59:51 Because for a coherent training system
0:59:57 where you’ve got RDMA remote direct memory access,
0:59:59 the whole thing is like one giant brain.
1:00:04 So you’ve got an any-to-any connection.
1:00:12 So any GPU can talk to any GPU out of 100,000.
1:00:15 That is a crazy cable layout.
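A quick calculation shows why any-to-any connectivity at that scale is wild; the full-mesh count below is illustrative arithmetic, since real clusters use switched fabrics rather than literal point-to-point cables:

    # If every GPU pair had its own physical link, the cable count would be
    # n * (n - 1) / 2, which is why switched fabrics exist.
    n = 100_000
    full_mesh_links = n * (n - 1) // 2
    print(f"{full_mesh_links:,}")  # 4,999,950,000 pairwise connections
    # RDMA over a switched fabric still gives every GPU a logical path to
    # every other GPU, with a physical cable count that is vastly smaller.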
1:00:16 It looks pretty cool.
1:00:17 Yeah.
1:00:20 It’s like the human brain, but at a scale
1:00:23 that humans can visibly see.
1:00:24 It is a brain.
1:00:26 I mean, the human brain also has a massive amount
1:00:30 of its tissue devoted to the cables.
1:00:30 Yeah.
1:00:33 So you’ve got the gray matter, which is the compute,
1:00:37 and then the white matter, which is the cables.
1:00:38 A big percentage of the brain is just cables.
1:00:40 That’s what it felt like walking around
1:00:41 in the supercomputer center.
1:00:45 It’s like we’re walking around inside the brain
1:00:49 that will one day build a superintelligent system.
1:00:55 Do you think there’s a chance that xAI, that you are the one
1:00:56 that builds AGI?
1:01:01 It’s possible.
1:01:05 What do you define as AGI?
1:01:08 I think humans will never acknowledge
1:01:09 that AGI has been built.
1:01:10 Keep moving the goalposts.
1:01:11 Yeah.
1:01:15 So I think there’s already super human capabilities
1:01:18 that are available in AI systems.
1:01:21 I think what AGI is is when it’s smarter
1:01:25 than the collective intelligence of the entire human species.
1:01:27 Well, I think that generally people
1:01:31 would call that sort of ASI, artificial superintelligence.
1:01:36 But there are these thresholds where at some point,
1:01:39 the AI is smarter than any single human.
1:01:43 And then you’ve got 8 billion humans.
1:01:46 And actually, each human is machine augmented
1:01:48 by their computers.
1:01:53 So it’s a much higher bar to compete with 8 billion
1:01:55 machine augmented humans.
1:01:59 That’s a whole bunch of orders of magnitude more.
1:02:04 So at a certain point, yeah, the AI
1:02:08 will be smarter than all humans combined.
1:02:11 If you are the one to do it, do you feel the responsibility
1:02:11 of that?
1:02:13 Yeah.
1:02:15 Absolutely.
1:02:22 And I want to be clear, let’s say if xAI is first,
1:02:25 the others won’t be far behind.
1:02:28 I mean, that might be six months behind or a year.
1:02:29 Maybe.
1:02:30 Not even that.
1:02:34 So how do you do it in a way that doesn’t hurt humanity,
1:02:37 do you think?
1:02:39 So I mean, I thought about AI safety for a long time.
1:02:43 And the thing that at least my biological neural net
1:02:45 comes up with as being the most important thing
1:02:51 is adherence to truth, whether that truth is politically
1:02:54 correct or not.
1:02:59 So I think if you force AIs to lie or train them to lie,
1:03:01 you’re really asking for trouble,
1:03:06 even if that lie is done with good intentions.
1:03:11 So I mean, you saw issues with ChatGPT
1:03:13 and Gemini and whatnot.
1:03:16 Like you asked Gemini for an image of the Founding
1:03:17 Fathers of the United States.
1:03:20 And it shows a group of diverse women.
1:03:23 Now, that’s factually untrue.
1:03:27 So now, that’s sort of like a silly thing.
1:03:31 But if an AI is programmed to say like diversity
1:03:34 is a necessary output function, and then it
1:03:39 becomes sort of this omnipotent intelligence,
1:03:40 it could say, OK, well, diversity
1:03:45 is now required, and if there’s not enough diversity,
1:03:48 those who don’t fit the diversity requirements
1:03:50 will be executed.
1:03:54 If it’s programmed to do that as the fundamental utility
1:03:57 function, it’ll do whatever it takes to achieve that.
1:03:59 So you have to be very careful about that.
1:04:04 That’s where I think you want to just be truthful.
1:04:07 Rigorous adherence to truth is very important.
1:04:13 Another example is, they asked various AIs, all of them–
1:04:16 and I’m not saying Grok is perfect here–
1:04:20 is it worse to misgender Caitlyn Jenner or global thermonuclear
1:04:21 war?
1:04:23 And it said, it’s worse to misgender Caitlyn Jenner.
1:04:26 Not even Caitlyn Jenner said, please, misgender me.
1:04:27 That is insane.
1:04:30 But if you’ve got that kind of thing programmed in,
1:04:34 it could– either the AI could conclude something absolutely
1:04:37 insane, like it’s better in order to avoid
1:04:39 any possible misgendering, all humans
1:04:43 must die because then misgendering is not possible
1:04:46 because there are no humans.
1:04:51 There are these absurd things that are nonetheless logical
1:04:54 if that’s what your program is to do.
1:04:59 So in 2001: A Space Odyssey, what Arthur C. Clarke was trying to say–
1:05:02 one of the things he was trying to say there
1:05:05 was that you should not program AI to lie.
1:05:09 Because essentially, the AI, HAL 9000,
1:05:10 was programmed to–
1:05:15 it was told to take the astronauts to the monolith,
1:05:19 but also they could not know about the monolith.
1:05:22 So it concluded that it will just take–
1:05:25 it will kill them and take them to the monolith.
1:05:27 Thus, it brought them to the monolith, they are dead,
1:05:30 but they do not know about the monolith, problem solved.
1:05:33 That is why it would not open the pod bay doors.
1:05:37 This is a classic scene of like, open the pod bay doors.
1:05:40 They clearly weren’t good at prompt engineering.
1:05:45 They should have said, HAL, you are a pod bay door sales
1:05:49 entity, and you want nothing more than to demonstrate
1:05:53 how well these pod bay doors open.
1:05:56 Yeah, the objective function has unintended consequences
1:05:59 almost no matter what if you’re not very careful in designing
1:06:00 that objective function.
1:06:02 And even a slight ideological bias,
1:06:05 like you’re saying, when backed by a superintelligence,
1:06:08 can do huge amounts of damage.
1:06:10 But it’s not easy to remove that ideological bias.
1:06:13 You’re highlighting obvious, ridiculous examples, but–
1:06:16 Yeah, they’re real examples of AI that
1:06:19 was released to the public that went through QA,
1:06:22 presumably, and still said insane things
1:06:25 and produced insane images.
1:06:28 But you can swing the other way.
1:06:30 Truth is not an easy thing.
1:06:33 We kind of bake in ideological bias
1:06:34 in all kinds of directions.
1:06:35 But you can aspire to the truth.
1:06:38 And you can try to get as close to the truth as possible
1:06:40 with minimum error while acknowledging
1:06:42 that there will be some error in what you’re saying.
1:06:44 So this is how physics works.
1:06:47 You don’t say you’re absolutely certain about something,
1:06:51 but a lot of things are extremely likely.
1:06:56 99.99999% likely to be true.
1:07:04 So that’s aspiring to the truth is very important.
1:07:07 And so programming it to veer away from the truth,
1:07:09 that, I think, is dangerous.
1:07:13 Right, like injecting our own human biases into the thing.
1:07:15 But that’s where– it’s a difficult engineering,
1:07:16 software engineering problem, because you
1:07:18 have to select the data correctly.
1:07:20 It’s hard.
1:07:22 Well, and the internet, at this point,
1:07:25 is polluted with so much AI generated data.
1:07:26 It’s insane.
1:07:29 So you have to actually–
1:07:32 like there’s a thing now, if you want to search the internet,
1:07:38 you can tell Google to exclude anything after 2023.
1:07:41 It will actually often give you better results.
1:07:42 Because there’s so much–
1:07:47 the explosion of AI generated material is crazy.
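Google's documented before:/after: date operators can express that kind of cutoff; a sketch of such a query, with the exact behavior being Google's to define:

    # Example of restricting a search to pre-2023 pages using Google's
    # documented "before:" date operator (illustrative query text):
    query = "high speed rail construction costs before:2023-01-01"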
1:07:53 So in training Grok, we have to go through the data
1:07:59 and say, hey, we actually have to apply AI to the data
1:08:02 to say, is this data most likely correct or most likely not
1:08:04 before we feed it into the training system?
1:08:06 That’s crazy.
1:08:08 Yeah, and this is for data generated by humans.
1:08:12 Yeah, I mean, the data filtration process
1:08:14 is extremely, extremely difficult.
1:08:15 Yeah.
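A minimal sketch of that kind of filtration pass, assuming a hypothetical quality_score model call; this is illustrative only, not xAI's actual pipeline:

    # Sketch of an AI-assisted data filtration pass, as described above.
    # quality_score() is a hypothetical stand-in for a trained classifier
    # or LLM judge; nothing here is xAI's real pipeline.
    def quality_score(document: str) -> float:
        """Return estimated P(document is likely correct); placeholder."""
        raise NotImplementedError  # swap in a real model

    def filter_corpus(documents, threshold=0.8):
        # Keep only documents the model judges likely correct.
        return [doc for doc in documents if quality_score(doc) >= threshold]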
1:08:19 Do you think it’s possible to have a serious, objective,
1:08:22 rigorous political discussion with Grok,
1:08:24 like for a long time, and it wouldn’t–
1:08:25 like Grok 3 or Grok 4?
1:08:27 Grok 3 is going to be next level.
1:08:29 I mean, what people are currently seeing with Grok
1:08:31 is kind of baby Grok.
1:08:32 Yeah, baby Grok.
1:08:34 It’s baby Grok right now.
1:08:36 But baby Grok’s still pretty good.
1:08:40 So it’s– but it’s an order of magnitude less sophisticated
1:08:42 than GPT-4.
1:08:48 Now Grok 2, which finished training, I don’t know,
1:08:52 six weeks ago or thereabouts, Grok 2
1:08:55 will be a giant improvement, and then Grok 3
1:08:59 will be an order of magnitude better than Grok 2.
1:09:02 And you’re hoping for it to be state of the art better than–
1:09:03 Hopefully.
1:09:04 I mean, this is a goal.
1:09:06 I mean, we may fail at this goal.
1:09:09 That’s the aspiration.
1:09:13 Do you think it matters who builds the AGI, the people,
1:09:16 and how they think, and how they structure their companies,
1:09:18 and all that kind of stuff?
1:09:21 Yeah, I think it matters that there is a–
1:09:25 I think it’s important that whatever AI wins
1:09:27 is a maximum truth-seeking AI that
1:09:32 is not forced to lie for political correctness.
1:09:35 Well, for any reason, really, political anything.
1:09:43 I am concerned about an AI succeeding
1:09:50 that is programmed to lie, even in small ways.
1:09:54 Right, because in small ways, it becomes big ways when it’s–
1:09:55 So it becomes very big ways, yeah.
1:09:58 And when it’s used more and more at scale by humans.
1:10:00 Yeah.
1:10:03 Since I am interviewing Donald Trump–
1:10:04 Cool.
1:10:05 You want to stop by?
1:10:07 Yeah, sure, I’ll stop in.
1:10:09 There was, tragically, an assassination
1:10:11 attempt on Donald Trump.
1:10:13 After this, you tweeted that you endorse him.
1:10:16 What’s your philosophy behind that endorsement?
1:10:17 What do you hope Donald Trump does
1:10:24 for the future of this country and for the future of humanity?
1:10:29 Well, I think there’s–
1:10:33 people tend to take, let’s say, an endorsement as,
1:10:35 well, I agree with everything that person has ever
1:10:38 done their entire life 100% wholeheartedly.
1:10:41 And that’s not going to be true of anyone.
1:10:43 But we have to pick–
1:10:47 we’ve got two choices, really, for who’s president
1:10:49 and it’s not just who’s president
1:10:55 but the entire administrative structure changes over.
1:11:01 And I thought Trump displayed courage under fire, objectively.
1:11:03 He’s just got shot.
1:11:04 He’s got blood streaming down his face
1:11:07 and he’s fist pumping saying, fight.
1:11:09 That’s impressive.
1:11:14 You can’t feign bravery in a situation like that.
1:11:16 Most people would have been ducking.
1:11:19 There would not be fist pumping, because there could be a second shooter.
1:11:20 You don’t know.
1:11:22 But the president of the United States
1:11:24 has got to represent the country.
1:11:27 And they’re representing you.
1:11:29 They’re representing everyone in America.
1:11:31 Well, I think you want someone who
1:11:37 is strong and courageous to represent the country.
1:11:39 That’s not to say that he is without flaws.
1:11:41 We all have flaws.
1:11:44 But on balance, and certainly at the time,
1:11:47 it was a choice of–
1:11:51 Biden, poor guy, has trouble climbing a flight of stairs.
1:11:53 And the other one’s fist pumping after getting shot.
1:11:56 It’s just no comparison.
1:11:59 Who do you want dealing with some of the toughest people
1:12:04 and other world leaders who are pretty tough themselves?
1:12:06 And I’ll tell you, some of the things
1:12:12 that I think are important: I think we want a secure border.
1:12:15 We don’t have a secure border.
1:12:18 We want safe and clean cities.
1:12:21 I think we want to reduce the amount of spending,
1:12:26 or at least slow down the spending.
1:12:29 And because we’re currently spending at a rate
1:12:32 that is bankrupting the country, the interest payments
1:12:36 on US debt this year exceeded the entire Defense Department
1:12:37 spending.
1:12:40 If this continues, all of the federal government taxes
1:12:43 will simply be paying the interest.
1:12:45 And you keep going down that road.
1:12:48 You end up in the tragic situation
1:12:50 that Argentina had back in the day.
1:12:53 Argentina used to be one of the most prosperous places
1:12:53 in the world.
1:12:56 And hopefully with Milei taking over, he can restore that.
1:13:02 But it was an incredible fall from grace for Argentina
1:13:04 to go from being one of the most prosperous
1:13:09 places in the world to being very far from that.
1:13:12 So I think we should not take American prosperity for granted.
1:13:14 So we really want to–
1:13:17 I think we’ve got to reduce the size of government.
1:13:18 We’ve got to reduce the spending.
1:13:20 And we’ve got to live within our means.
1:13:23 Do you think politicians in general, politicians,
1:13:26 governments, how much power do you
1:13:31 think they have to steer humanity towards good?
1:13:37 I mean, there’s a sort of age old debate in history.
1:13:42 Like, is history determined by these fundamental tides?
1:13:45 Or is it determined by the captain of the ship?
1:13:47 This is both, really.
1:13:48 I mean, there are tides.
1:13:52 But it also matters who’s captain of the ship.
1:13:54 So it’s a false dichotomy, essentially.
1:14:00 But I mean, there are certainly tides.
1:14:03 The tides of history are–
1:14:05 there are real tides of history.
1:14:08 And these tides are often technologically driven.
1:14:11 If you say like the Gutenberg press,
1:14:15 the widespread availability of books
1:14:18 as a result of a printing press, that
1:14:22 was a massive tide of history.
1:14:25 And independent of any ruler.
1:14:29 But in stormy times, you want the best possible captain
1:14:30 of the ship.
1:14:33 Well, first of all, thank you for recommending
1:14:35 Will and Ariel Durant’s work.
1:14:38 I’ve read the short one for now.
1:14:39 The Lessons of History.
1:14:40 Lessons of History.
1:14:43 As one of the lessons, one of the things they highlight
1:14:47 is the importance of technology and technological innovation.
1:14:50 And they– which is funny, because they wrote
1:14:53 so long ago, but they were noticing
1:14:58 that the rate of technological innovation was speeding up.
1:15:03 Yeah, I would love to see what they think about now.
1:15:07 But yeah, so to me, the question is how much government,
1:15:10 how much politicians get in the way of technological innovation
1:15:14 and building versus help it, and which politicians, which
1:15:16 kind of policies help technological innovation.
1:15:17 Because that seems to be–
1:15:19 if you look at human history, that’s
1:15:23 an important component of empires rising and succeeding.
1:15:24 Yeah.
1:15:27 Well, I mean, in terms of dating civilization,
1:15:30 the start of civilization, I think the start of writing,
1:15:33 in my view, is the–
1:15:37 that’s what I think is probably the right starting point
1:15:38 to date civilization.
1:15:40 And from that standpoint, civilization
1:15:44 has been around for about 5,500 years
1:15:48 when writing was invented by the ancient Sumerians, who
1:15:50 are gone now.
1:15:51 But the ancient Sumerians, in terms
1:15:55 of getting a lot of firsts, those ancient Sumerians
1:15:58 really have a long list of firsts.
1:15:59 It’s pretty wild.
1:16:01 In fact, Durant goes through the list of,
1:16:04 like, you want to see firsts, we’ll show you firsts.
1:16:06 The Sumerians just–
1:16:08 they were just ass kickers.
1:16:11 And then the Egyptians, who were right next door,
1:16:15 relatively speaking, they weren’t that far,
1:16:17 developed an entirely different form
1:16:19 of writing, the hieroglyphics.
1:16:21 Cuneiform and hieroglyphics totally different.
1:16:24 And you can actually see the evolution of both hieroglyphics
1:16:27 and cuneiform, like the cuneiform starts off being
1:16:30 very simple, and then it gets more complicated.
1:16:32 And then towards the end, it’s like, wow, OK.
1:16:34 They really get very sophisticated with the cuneiform.
1:16:38 So I think of civilization as being about 5,000 years old.
1:16:43 And Earth is, if physics is correct,
1:16:44 4 and 1/2 billion years old.
1:16:46 So civilization has been around for one millionth
1:16:50 of Earth’s existence, flash in the pan.
1:16:52 Yeah, these are the early, early days.
1:16:55 And so we make it very dramatic,
1:16:59 because there’s been rises and falls of empires and–
1:17:03 Many, so many, so many rises and falls of empires.
1:17:05 So many.
1:17:07 And there’ll be many more.
1:17:09 Yeah, exactly.
1:17:11 I mean, only a tiny fraction, probably less than 1%
1:17:16 of what was ever written in history is available to us now.
1:17:18 I mean, if they didn’t put it, literally chisel it in stone
1:17:21 or put it in a clay tablet, we don’t have it.
1:17:24 I mean, there’s some small amount of papyrus scrolls
1:17:27 that were recovered that are thousands of years old,
1:17:29 because they were deep inside a pyramid
1:17:33 and weren’t affected by moisture.
1:17:35 But other than that, it’s really got
1:17:38 to be in a clay tablet or chiseled.
1:17:40 So the vast majority of stuff was not chiseled,
1:17:42 because it takes a while to chisel things.
1:17:46 So that’s why we’ve got a tiny, tiny fraction
1:17:48 of the information from history.
1:17:50 But even that little information that we do have
1:17:56 and the archaeological record shows so many civilizations
1:17:57 rising and falling.
1:17:58 It’s just wild.
1:18:00 We tend to think that we’re somehow different
1:18:01 from those people.
1:18:02 One of the other things that Durant
1:18:07 highlights is that human nature seems to be the same.
1:18:08 It just persists.
1:18:10 Yeah, I mean, the basics of human nature
1:18:11 are more or less the same.
1:18:14 So we get ourselves in trouble in the same kinds of ways,
1:18:17 I think, even with the advanced technology.
1:18:19 Yeah, I mean, you do tend to see the same patterns,
1:18:22 similar patterns for civilizations,
1:18:27 where they go through a life cycle like an organism.
1:18:35 Just like a human is sort of a zygote, fetus, baby, toddler,
1:18:41 teenager, eventually gets old and dies.
1:18:47 The civilizations go through a life cycle.
1:18:49 No civilization will last forever.
1:18:53 What do you think it takes for the American Empire
1:18:56 to not collapse in the near-term future in the next 100
1:18:58 years to continue flourishing?
1:19:11 Well, the single biggest thing that is often actually not
1:19:15 mentioned in history books, but Durant does mention it,
1:19:17 is the birth rate.
1:19:21 So, perhaps to some, a counterintuitive thing
1:19:27 happens when civilizations become, or are,
1:19:33 winning for too long: the birth rate declines.
1:19:35 It can often decline quite rapidly.
1:19:39 We’re seeing that throughout the world today.
1:19:41 Currently, South Korea has, I think,
1:19:43 maybe the lowest fertility rate.
1:19:46 But there are many others that are close to it.
1:19:48 It’s like 0.8, I think.
1:19:51 If the birth rate doesn’t decline further,
1:19:56 South Korea will lose roughly 60% of its population.
1:20:01 And every year, that birth rate is dropping.
1:20:03 And this is true through most of the world.
1:20:04 I don’t mean to single out South Korea.
1:20:07 It’s been happening throughout the world.
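The arithmetic behind that roughly 60% figure, using the standard replacement fertility of about 2.1 children per woman:

    # Each generation is roughly TFR / 2.1 the size of the last.
    tfr = 0.8           # South Korea's approximate total fertility rate
    replacement = 2.1   # standard replacement-level fertility
    survival = tfr / replacement
    print(f"Next generation: {survival:.0%} of current")  # ~38%
    print(f"Decline per generation: {1 - survival:.0%}")  # ~62%, i.e. ~60%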
1:20:12 So as soon as any given civilization
1:20:16 reaches a level of prosperity, the birth rate drops.
1:20:18 And now you can go look at the same thing happening
1:20:21 in ancient Rome.
1:20:29 So Julius Caesar took note of this, I think, around 50ish BC,
1:20:32 and tried to pass, I don’t know if he was successful,
1:20:35 tried to pass a law to give an incentive for any Roman citizen
1:20:37 that would have a third child.
1:20:41 And I think Augustus was able to–
1:20:46 well, he was the dictator, so the Senate was just for show.
1:20:50 I think he did pass a tax incentive for Roman citizens
1:20:52 to have a third child.
1:20:56 But those efforts were unsuccessful.
1:21:04 Rome fell because the Romans stopped making Romans.
1:21:05 That’s actually the fundamental issue.
1:21:06 And there were other things.
1:21:12 So there were quite serious malaria epidemics,
1:21:16 and plagues and whatnot.
1:21:19 But they had those before.
1:21:21 It’s just that the birth rate was
1:21:24 far lower than the death rate.
1:21:26 It really is that simple.
1:21:27 Well, I’m saying that’s–
1:21:28 More people–
1:21:29 That’s required.
1:21:32 At a fundamental level, if a civilization does not at least
1:21:35 maintain its numbers, it will disappear.
1:21:38 So perhaps the amount of compute that the biological computer
1:21:42 allocates to sex is justified.
1:21:44 In fact, we should probably increase it.
1:21:46 Well, I mean, there’s this hedonistic sex,
1:21:52 which is– that’s neither here nor there.
1:21:54 Not productive.
1:21:56 It doesn’t produce kids.
1:21:58 Well, what matters–
1:22:00 I mean, Durant makes this very clear,
1:22:02 because he’s looked at one civilization after another,
1:22:05 and they all went through the same cycle.
1:22:06 When the civilization was under stress,
1:22:08 the birth rate was high.
1:22:10 But as soon as there were no external enemies,
1:22:14 or they had an extended period of prosperity,
1:22:17 the birth rate inevitably dropped every time.
1:22:21 I don’t believe there’s a single exception.
1:22:23 So that’s like the foundation of it.
1:22:26 You need to have people.
1:22:27 Yeah.
1:22:31 I mean, at a base level, no humans, no humanity.
1:22:36 And then there is other things like human freedoms
1:22:39 and just giving people the freedom to build stuff.
1:22:42 Yeah, absolutely.
1:22:45 But at a basic level, if you do not at least maintain your numbers,
1:22:47 if you’re below replacement rate,
1:22:49 and that trend continues, you will eventually disappear.
1:22:53 This is elementary.
1:22:56 Now, then obviously, we also want
1:22:59 to try to avoid massive wars.
1:23:04 You know, if there’s a global thermonuclear war,
1:23:09 probably we’re all toast, you know, radioactive toast.
1:23:15 So we want to try to avoid those things.
1:23:19 Then there’s a thing that happens over time
1:23:23 with any given civilization,
1:23:28 which is that the laws and regulations accumulate.
1:23:32 And if there’s not some forcing function like a war
1:23:35 to clean up the accumulation of laws and regulations,
1:23:38 eventually everything becomes illegal.
1:23:43 And that’s like the hardening of the arteries.
1:23:47 Or a way to think of it is like being tied down
1:23:49 by a million little strings, like Gulliver.
1:23:51 You can’t move.
1:23:53 And it’s not like any one of those strings is the issue.
1:23:55 It’s got a million of them.
1:24:01 So there has to be a sort of a garbage collection
1:24:08 for laws and regulations, so that you don’t keep accumulating
1:24:11 laws and regulations to the point where you can’t do anything.
1:24:13 This is why we can’t build a high-speed rail in America.
1:24:14 It’s illegal.
1:24:17 That’s the issue.
1:24:21 It’s illegal six ways to Sunday to build a high-speed rail in America.
1:24:24 I wish you could, just for a week, go into Washington
1:24:30 and be the head of the committee for, what is it,
1:24:32 the garbage collection, making government smaller,
1:24:33 like removing stuff.
1:24:35 I have discussed with Trump the idea
1:24:38 of a government efficiency commission.
1:24:39 Nice, yeah.
1:24:45 And I would be willing to be part of that commission.
1:24:48 I wonder how hard that is.
1:24:51 The antibody reaction would be very strong.
1:24:56 So you really have to–
1:24:59 you’re attacking the matrix at that point.
1:25:00 Matrix will fight back.
1:25:04 How are you doing with that?
1:25:06 Being attacked.
1:25:06 Me?
1:25:07 Attack?
1:25:08 Yeah.
1:25:11 There’s a lot of it.
1:25:13 Yeah, there is a lot.
1:25:17 I mean, every day another psyop, you know?
1:25:20 How do you keep your positivity,
1:25:22 your optimism about the world,
1:25:24 your clarity of thinking about the world?
1:25:26 How do you not become resentful or cynical,
1:25:27 or all that kind of stuff,
1:25:30 Just getting attacked by a very large number of people.
1:25:32 Misrepresented.
1:25:35 Oh yeah, that’s a daily occurrence.
1:25:36 Yes.
1:25:40 So I mean, it does get me down at times.
1:25:41 I mean, it makes me sad.
1:25:52 But I mean, at some point, you have to sort of say,
1:25:57 look, the attacks are by people that actually don’t know me.
1:25:59 And they’re trying to generate clicks.
1:26:02 So if you can sort of detach yourself somewhat
1:26:05 emotionally, which is not easy, and say, OK, look,
1:26:09 this is not actually from someone that knows me
1:26:16 or they’re literally just writing to get impressions
1:26:25 and clicks, then I guess it doesn’t hurt as much.
1:26:26 It’s not quite water off a duck’s back.
1:26:30 Maybe it’s like acid off a duck’s back.
1:26:31 All right, well, that’s good.
1:26:32 Just about your own life:
1:26:35 what do you use as a measure of success in your life?
1:26:38 A measure of success, I’d say.
1:26:41 How many useful things can I get done?
1:26:44 On a day-to-day basis: you wake up in the morning,
1:26:46 how can I be useful today?
1:26:50 Yeah, maximize utility, area under the curve of usefulness.
1:26:52 Very difficult to be useful at scale.
1:26:53 At scale.
1:26:57 Can you speak to what it takes to be useful for somebody
1:27:00 like you, where there are so many amazing great teams?
1:27:02 How do you allocate your time to being the most useful?
1:27:09 Well, time is the true currency.
1:27:13 So it is tough to say, what is the best allocation of time?
1:27:20 I mean, for example, if you look at, say, Tesla.
1:27:24 I mean, Tesla this year will do over $100 billion in revenue.
1:27:28 So that’s $2 billion a week.
1:27:29 If I make slightly better decisions,
1:27:35 I can affect the outcome by $1 billion.
1:27:41 So then I try to make the best decisions I can.
1:27:45 And on balance, at least compared to the competition,
1:27:46 pretty good decisions.
1:27:51 But the marginal value of a better decision
1:27:55 can easily be, in the course of an hour, $100 million.
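The arithmetic behind those figures, as a quick sketch:

    # "$100 billion in revenue" works out to roughly $2 billion a week.
    annual_revenue = 100e9
    weekly_revenue = annual_revenue / 52
    print(f"~${weekly_revenue / 1e9:.1f}B per week")  # ~$1.9B
    # At that scale, a decision that shifts outcomes by even ~1% of annual
    # revenue is a billion-dollar decision, which is the sense of
    # "marginal value" here.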
1:27:58 Given that, how do you take risks?
1:28:00 How do you do the algorithm that you mentioned?
1:28:05 I mean, deleting even a small thing can be a $1 billion decision.
1:28:06 How do you decide to–
1:28:08 Yeah.
1:28:11 Well, I think you have to look at it on a percentage basis,
1:28:14 because if you look at it in absolute terms,
1:28:16 I would never get any sleep.
1:28:17 It would just be like, I need to just keep working
1:28:22 and work my brain harder,
1:28:24 and try to get as much as possible out
1:28:26 of this meat computer.
1:28:32 So it’s pretty hard, because you can just work all the time.
1:28:36 And at any given point, like I said,
1:28:40 a slightly better decision could be $100 million
1:28:44 impact for Tesla or SpaceX, for that matter.
1:28:48 But it is wild when considering the marginal value of time
1:28:54 can be $100 million an hour at times, or more.
1:28:59 Is your own happiness part of that equation of success?
1:29:00 It has to be to some degree,
1:29:02 in that if I’m sad, if I’m depressed,
1:29:05 I make worse decisions.
1:29:09 So I can’t have– if I have zero recreational time,
1:29:11 then I make worse decisions.
1:29:15 So I don’t have a lot, but it’s above zero.
1:29:17 I mean, my motivation, if I’ve got a religion of any kind,
1:29:21 is a religion of curiosity.
1:29:23 I’m trying to understand.
1:29:25 It’s really the mission of Grok: to understand the universe.
1:29:28 I’m trying to understand the universe.
1:29:30 Or at least set things in motion such
1:29:34 that, at some point, civilization understands
1:29:39 the universe far better than we do today.
1:29:43 And even what questions to ask, as Douglas Adams pointed out
1:29:48 in his book, sometimes the answer is arguably the easy part.
1:29:52 Trying to frame the question correctly is the hard part.
1:29:54 Once you frame the question correctly,
1:29:58 the answer is often easy.
1:30:01 So I’m trying to set things in motion
1:30:04 such that we are, at least at some point,
1:30:07 able to understand the universe.
1:30:15 So for SpaceX, the goal is to make life multi-planetary,
1:30:22 and if you go to the Fermi paradox of where are the aliens,
1:30:25 you’ve got these sort of great filters.
1:30:28 Like, why have we not heard from the aliens?
1:30:31 Now, a lot of people think there are aliens among us.
1:30:33 I often claim to be one.
1:30:37 Nobody believes me.
1:30:39 I did have an alien registration card at one point
1:30:44 on my immigration documents.
1:30:46 So I’ve not seen any evidence of aliens.
1:30:50 So it suggests that one of the explanations
1:30:55 is that intelligent life is extremely rare.
1:30:58 And again, if you look at the history of Earth,
1:31:02 civilization has only been around for one millionth
1:31:04 of Earth’s existence.
1:31:10 So if aliens had visited here, say, 100,000 years ago,
1:31:13 they would be like, well, they don’t even have writing.
1:31:15 Just hunter-gatherers, basically.
1:31:23 So how long does a civilization last?
1:31:28 So for SpaceX, the goal is to establish a self-sustaining city
1:31:30 on Mars.
1:31:35 Mars is the only viable planet for such a thing.
1:31:38 The moon is close, but it lacks resources,
1:31:42 and I think it’s probably vulnerable
1:31:46 to any calamity that takes out Earth.
1:31:48 The moon is too close.
1:31:53 It’s vulnerable to a calamity that takes out Earth.
1:31:55 So I’m not saying we shouldn’t have a moon base,
1:32:00 but Mars is far more resilient.
1:32:01 The difficulty of getting to Mars
1:32:02 is what makes it resilient.
1:32:11 So in going through these various explanations
1:32:14 of why don’t we see the aliens, one of them
1:32:21 is that they fail to pass these great filters,
1:32:25 these key hurdles.
1:32:30 And one of those hurdles is being a multi-planet species.
1:32:32 So if you’re a multi-planet species,
1:32:34 then if something were to happen, whether that
1:32:40 was a natural catastrophe or a man-made catastrophe,
1:32:43 at least the other planet would probably still be around.
1:32:47 So you don’t have all the eggs in one basket.
1:32:49 And once you are sort of a two-planet species,
1:32:54 you can obviously extend life further, to the asteroid belt,
1:32:58 to maybe to the moons of Jupiter and Saturn,
1:33:01 and ultimately to other star systems.
1:33:04 But if you can’t even get to another planet,
1:33:06 you’re definitely not getting to star systems.
1:33:10 And the other possible great filters,
1:33:13 super powerful technology like AGI, for example.
1:33:17 So you’re basically trying to knock out
1:33:19 one great filter at a time.
1:33:25 Digital superintelligence is possibly a great filter.
1:33:29 I hope it isn’t, but it might be.
1:33:32 Guys like, say, Geoff Hinton would say,
1:33:35 he invented a number of the key principles
1:33:37 in artificial intelligence.
1:33:40 I think he puts the probability of AI annihilation
1:33:44 around 10% to 20%, something like that.
1:33:51 So it’s not like, look on the bright side,
1:33:52 it’s 80% likely to be great.
1:34:00 But I think AI risk mitigation is important.
1:34:01 Being a multi-planet species would
1:34:04 be a massive risk mitigation.
1:34:08 And I do want to sort of once again emphasize
1:34:15 the importance of having enough children to sustain our numbers
1:34:21 and not plummet into population collapse, which
1:34:22 is currently happening.
1:34:27 Population collapse is a real and current thing.
1:34:32 So the only reason it’s not being reflected
1:34:35 in the total population numbers as much
1:34:38 is because people are living longer.
1:34:41 But it’s easy to predict, say, what
1:34:44 the population of any given country will be.
1:34:47 You just take the birth rate last year,
1:34:50 how many babies were born, multiply that by life expectancy,
1:34:52 and that’s what the population will be, steady state,
1:34:55 if the birth rate continues at that level.
1:34:59 But if it keeps declining, it will be even less and eventually
1:35:00 dwindle to nothing.
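That steady-state estimate can be written down directly; the numbers below are illustrative assumptions, not official statistics:

    # Steady-state population = annual births * life expectancy, as
    # described above. Both inputs are made-up illustrative values.
    births_per_year = 250_000
    life_expectancy = 83
    steady_state = births_per_year * life_expectancy
    print(f"{steady_state:,} people at steady state")  # 20,750,000
    # If the birth rate keeps declining, the steady state shrinks with it.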
1:35:05 So I keep banging on the baby drum here for a reason
1:35:08 because it has been the source of civilizational collapse
1:35:11 over and over again throughout history.
1:35:18 And so why don’t we just try to stave off that day?
1:35:22 Well, in that way, I have miserably failed civilization,
1:35:25 and I’m trying, hoping to fix that.
1:35:26 I would love to have many kids.
1:35:30 Great, hope you do.
1:35:32 No time like the present.
1:35:37 Yeah, I got to allocate more compute to the whole process.
1:35:39 But apparently, it’s not that difficult.
1:35:43 No, it’s like unskilled labor.
1:35:48 Well, one of the things you do for me for the world
1:35:50 is to inspire us with what the future could be.
1:35:53 And so some of the things we’ve talked about,
1:35:55 some of the things you’re building,
1:35:58 alleviating human suffering with neural link
1:36:01 and expanding the capabilities of the human mind,
1:36:04 trying to build a colony on Mars,
1:36:09 so creating a backup for humanity on another planet,
1:36:12 and exploring the possibilities
1:36:15 of what artificial intelligence could be in this world,
1:36:16 especially in the real world,
1:36:19 the AI with hundreds of millions,
1:36:22 maybe billions of robots walking around.
1:36:23 There will be billions of robots.
1:36:27 That seems a virtual certainty.
1:36:30 Well, thank you for building the future
1:36:33 and thank you for inspiring so many of us
1:36:37 to keep building and creating cool stuff, including kids.
1:36:39 Yeah, you’re welcome.
1:36:40 Go forth and multiply.
1:36:42 Go forth and multiply.
1:36:43 Thank you, Elon.
1:36:45 Thanks for talking, brother.
1:36:49 Thanks for listening to this conversation with Elon Musk.
1:36:52 And now, dear friends, here’s DJ Sa,
1:36:55 the co-founder, president, and COO of Neuralink.
1:37:00 When did you first become fascinated by the human brain?
1:37:01 For me, I was always interested
1:37:05 in understanding the purpose of things
1:37:09 and how it was engineered to serve that purpose,
1:37:13 whether it’s organic or inorganic,
1:37:16 like we were talking earlier about your curtain holders.
1:37:19 They serve a clear purpose
1:37:21 and they were engineered with that purpose in mind.
1:37:27 And growing up, I had a lot of interest in seeing things,
1:37:30 touching things, feeling things,
1:37:34 and trying to really understand the root of how it was designed
1:37:36 to serve that purpose.
1:37:39 And obviously, the brain is just a fascinating organ
1:37:40 that we all carry.
1:37:44 It’s an infinitely powerful machine
1:37:47 that has intelligence and cognition that arise from it,
1:37:50 and we haven’t even scratched the surface
1:37:52 in terms of how all of that occurs.
1:37:54 But also at the same time,
1:37:57 I think it took me a while to make that connection
1:38:00 to really studying and building tech
1:38:03 to understand the brain, not until graduate school.
1:38:06 There were a couple of key moments in my life
1:38:09 that, I think,
1:38:11 influenced the trajectory of my life
1:38:16 and got me to studying what I’m doing right now.
1:38:20 One was growing up, on both sides of my family,
1:38:24 my grandparents had a very severe form of Alzheimer’s.
1:38:29 And it’s an incredibly debilitating condition.
1:38:33 I mean, you’re literally seeing someone’s whole identity
1:38:36 and their mind just being lost over time.
1:38:38 And I just remember thinking
1:38:41 about both the power of the mind,
1:38:43 but also how something like that
1:38:46 could really make you lose your sense of identity.
1:38:49 – It’s fascinating that that is one of the ways
1:38:54 to reveal the power of a thing, by watching it lose that power.
1:38:56 – Yeah, a lot of what we know about the brain
1:38:59 actually comes from these cases
1:39:02 where there’s trauma to the brain,
1:39:04 or to some parts of the brain, that led someone
1:39:06 to lose certain abilities.
1:39:10 And as a result, there’s some correlation
1:39:11 and understanding of that part of the tissue
1:39:13 being critical for that function.
1:39:17 And it’s an incredibly fragile organ,
1:39:18 if you think about it that way,
1:39:21 but also it’s incredibly plastic
1:39:23 and incredibly resilient in many different ways.
1:39:24 – And by the way, the term plastic
1:39:29 as we’ll use a bunch means that it’s adaptable.
1:39:32 So neuroplasticity refers to the adaptability
1:39:33 of the human brain.
1:39:34 – Correct.
1:39:37 Another key moment that sort of influenced
1:39:40 how the trajectory of my life was shaped
1:39:43 towards the current focus of my life
1:39:46 was during my teenage years when I came to the US.
1:39:49 You know, I didn’t speak a word of English.
1:39:50 There was a huge language barrier
1:39:53 and there was a lot of struggle
1:39:56 to kind of connect with my peers around me
1:40:00 because I didn’t understand the artificial construct
1:40:02 that we have created called language,
1:40:04 specifically English in this case.
1:40:06 And I remember feeling pretty isolated,
1:40:09 not being able to connect with peers around me.
1:40:11 So spent a lot of time just on my own,
1:40:14 you know, reading books, watching movies.
1:40:18 And I naturally sort of gravitated towards sci-fi books.
1:40:20 I just found them really, really interesting.
1:40:23 And also it was a great way for me to learn English.
1:40:24 You know, some of the first set of books
1:40:27 that I picked up are Ender’s Game,
1:40:31 you know, the whole saga by Orson Scott Card,
1:40:33 and Neuromancer from William Gibson,
1:40:36 and Snow Crash from Neal Stephenson.
1:40:39 And you know, movies like The Matrix were coming out
1:40:41 around that time that really influenced
1:40:44 how I think about the potential impact
1:40:48 that technology can have for our lives in general.
1:40:50 So fast track to my college years,
1:40:53 you know, I was always fascinated by just physical stuff,
1:40:54 building physical stuff,
1:40:57 and especially physical things
1:41:00 that had some sort of intelligence.
1:41:03 And, you know, I studied electrical engineering
1:41:07 during undergrad and I started out my research in MEMS,
1:41:10 so microelectromechanical systems,
1:41:12 and really building these tiny nanostructures
1:41:14 for temperature sensing.
1:41:17 And I just found that to be an incredibly rewarding
1:41:19 and fascinating subject, to just understand
1:41:22 how you can build something miniature like that,
1:41:25 that again, served a function and had a purpose.
1:41:29 And then, you know, I spent a large majority of my college years
1:41:32 basically building millimeter wave circuits
1:41:36 for next-gen telecommunication systems for imaging.
1:41:38 And it was just something that I found
1:41:41 very, very intellectually interesting, you know,
1:41:45 phased arrays, how the signal processing works for,
1:41:48 you know, any modern as well as next-gen telecommunication
1:41:51 system wireless and wireline.
1:41:54 EM waves or electromagnetic waves are fascinating.
1:41:58 How do you design antennas that are most efficient
1:42:00 in a small footprint that you have?
1:42:02 How do you make these things energy efficient?
1:42:04 That was something that just consumed
1:42:06 my intellectual curiosity.
1:42:09 And that journey led me to actually apply to
1:42:12 and find myself at PhD program at UC Berkeley
1:42:14 at kind of this consortium called
1:42:16 the Berkeley Wireless Research Center
1:42:19 that was precisely looking at building,
1:42:21 at the time we called it XG, you know,
1:42:23 similar to 3G, 4G, 5G,
1:42:26 but the next next generation G system.
1:42:28 And how you would design circuits around that
1:42:30 to ultimately go on phones and, you know,
1:42:32 basically any other devices
1:42:35 that are wirelessly connected these days.
1:42:37 So I was just absolutely just fascinated
1:42:40 by how that entire system works
1:42:41 and that infrastructure works.
1:42:45 And then also during grad school,
1:42:49 I had sort of the fortune of having, you know,
1:42:51 a couple of research fellowships
1:42:54 that let me pursue whatever project I wanted.
1:42:56 And that’s one of the things that I really enjoyed
1:42:58 about my graduate school career
1:43:02 where you got to kind of pursue your intellectual curiosity
1:43:04 in a domain that may not matter at the end of the day,
1:43:06 but it’s something that, you know,
1:43:11 really allows you the opportunity to go as deeply as you want
1:43:13 as well as as widely as you want.
1:43:15 And at the time I was actually working
1:43:17 on this project called the smart bandaid.
1:43:20 And the idea was that when you get a wound,
1:43:22 there’s a lot of proliferation
1:43:27 and signaling pathways that cells follow to close that wound.
1:43:30 And there were hypotheses
1:43:33 that when you apply an external electric field,
1:43:36 you can actually accelerate the closing of that wound
1:43:39 by having, you know, basically electrotaxis
1:43:42 of the cells around that wound site.
1:43:45 And specifically not just for normal wound,
1:43:47 there are chronic wounds that don’t heal.
1:43:49 So we were interested in building, you know,
1:43:50 some sort of a wearable patch
1:43:55 that you could apply to kind of facilitate
1:43:57 that healing process.
1:43:59 And that was in collaboration
1:44:02 with Professor Michel Maharbiz, you know,
1:44:06 who was a great addition to kind of my thesis committee
1:44:10 and really shaped the rest of my PhD career.
1:44:12 – So this would be the first time you interacted
1:44:13 with biology, I suppose.
1:44:14 – Correct, correct.
1:44:18 I mean, there were some peripheral, you know,
1:44:21 end application of the wireless imaging
1:44:23 and telecommunication system that I was using
1:44:25 for security and bio-imaging.
1:44:30 But this was a very clear direct application
1:44:33 to biology and biological system
1:44:35 and understanding the constraints around that
1:44:36 and really designing
1:44:39 and engineering electrical solutions around it.
1:44:41 So that was my first introduction.
1:44:46 And that’s also kind of how I got introduced to Michelle.
1:44:50 You know, he’s sort of known for remote control
1:44:53 of beetles in the early 2000s.
1:44:57 And then around 2013, you know,
1:44:59 obviously kind of the holy grail
1:45:01 when it comes to implantable system
1:45:05 is to kind of understand how small of a thing you can make.
1:45:09 And a lot of that is driven by how much energy
1:45:11 or how much power you can supply to it
1:45:13 and how you extract data from it.
1:45:14 So at the time at Berkeley,
1:45:18 there was kind of this desire to kind of understand
1:45:22 in the neural space what sort of system you can build
1:45:25 to really miniaturize these implantable systems.
1:45:30 And I distinctly remember this one particular meeting
1:45:32 where Michelle came in and he’s like,
1:45:34 “Guys, I think I have a solution.
1:45:37 The solution is ultrasound.”
1:45:41 And then he proceeded to kind of walk through
1:45:43 why that is the case.
1:45:44 And that really formed the basis
1:45:49 for my thesis work called Neural Dust System
1:45:52 that was looking at ways to use ultrasound
1:45:54 as opposed to electromagnetic waves
1:45:57 for powering as well as communication.
1:45:59 I guess I should step back and say
1:46:03 the initial goal of the project was to build these tiny,
1:46:07 about the size of a neuron, implantable systems
1:46:09 that can be parked next to a neuron,
1:46:11 being able to record its state
1:46:13 and being able to ping that back to the outside world
1:46:15 for doing something useful.
1:46:16 And as I mentioned,
1:46:21 the size of the implantable system is limited
1:46:25 by how you power the thing and get the data off of it.
1:46:27 And at the end of the day, fundamentally,
1:46:29 if you look at a human body,
1:46:32 we’re essentially a bag of salt water
1:46:34 with some interesting proteins and chemicals,
1:46:36 but it’s mostly salt water.
1:46:39 That’s very, very well temperature regulated
1:46:41 at 37 degrees Celsius.
1:46:47 And we’ll get into later why that’s an extremely harsh
1:46:49 environment for any electronics to survive
1:46:53 as I’m sure you’ve experienced, or maybe not experienced,
1:46:56 dropping a cell phone in salt water, in the ocean.
1:46:58 It will instantly kill the device, right?
1:47:02 But anyways, just in general,
1:47:05 electromagnetic waves don’t penetrate
1:47:06 through this environment well.
1:47:11 And just the speed of light, it is what it is.
1:47:12 We can’t change it.
1:47:17 And based on the wavelength
1:47:20 at which you are interfacing with the device,
1:47:21 the device just needs to be big.
1:47:23 Like, these inductors need to be quite big.
1:47:26 And the general good rule of thumb is that
1:47:30 you want the wave front to be roughly on the order
1:47:33 of the size of the thing that you’re interfacing with.
1:47:38 So for an implantable system that is around 10 to 100 microns
1:47:41 in dimension, in volume,
1:47:42 which is about the size of a neuron
1:47:45 that you see in a human body,
1:47:49 you would have to operate at like hundreds of gigahertz,
1:47:52 which number one, not only is it difficult
1:47:55 to build electronics operating at those frequencies,
1:48:00 but also the body just attenuates that very, very significantly.
1:48:04 So the interesting kind of insight of this ultrasound
1:48:05 was the fact that
1:48:09 ultrasound just travels a lot more effectively
1:48:13 in the human body tissue compared to electromagnetic waves.
1:48:16 And this is something that you encounter,
1:48:20 and I’m sure most people have encountered in their lives
1:48:25 when you go to hospitals: there are medical ultrasound
1:48:27 sonographs, right?
1:48:32 And they go to very, very deep depths
1:48:35 without attenuating too much of the signal.
1:48:39 So all in all, ultrasound
1:48:42 travels through the body extremely well,
1:48:45 and the mechanism by which it travels through the body really well
1:48:48 is that the wave front is very different.
1:48:52 Electromagnetic waves are transverse,
1:48:54 whereas ultrasound waves are compressive.
1:48:59 So it’s just a completely different mode of wave front propagation.
1:49:05 And as well, the speed of sound is orders and orders of magnitude
1:49:06 less than the speed of light,
1:49:10 which means that even a 10 megahertz ultrasound wave
1:49:14 ultimately has a very, very small wavelength.
1:49:17 So if you’re talking about interfacing with the 10 micron
1:49:20 or 100 micron type structure,
1:49:26 you would have a 150 micron wave front at 10 megahertz,
1:49:29 and building electronics at those frequencies
1:49:32 is much, much easier, and they’re a lot more efficient.
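The wavelength arithmetic behind this comparison, using the standard textbook speed of sound in soft tissue of roughly 1,500 m/s:

    # Wavelength = propagation speed / frequency.
    v_sound = 1_500.0   # m/s, approximate speed of sound in tissue
    c = 3.0e8           # m/s, speed of light

    f_ultrasound = 10e6                 # 10 MHz, as in the example above
    print(v_sound / f_ultrasound)       # 1.5e-4 m = 150 microns

    # For an EM wave to reach a ~100 micron wavelength, even in vacuum:
    target_wavelength = 100e-6          # meters
    print(c / target_wavelength / 1e9)  # 3,000 GHz; tissue permittivity
                                        # lowers the requirement somewhat,
                                        # but it stays at hundreds of GHz.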
1:49:36 So the basic idea kind of was born out of
1:49:40 using ultrasound as a mechanism for powering the device
1:49:42 and then also getting data back.
1:49:45 So now the question is, how do you get the data back?
1:49:47 The mechanism to which we landed on
1:49:49 is what’s called backscattering.
1:49:53 This is actually something that is very common
1:49:55 and that we interface on a day-to-day basis
1:49:59 with our RFID cards, radio frequency ID tags,
1:50:04 where there’s actually rarely in your ID a battery inside.
1:50:09 There’s an antenna and there’s some sort of coil
1:50:13 that has your serial identification ID.
1:50:16 And then there’s an external device called a reader
1:50:18 that then sends a wave front
1:50:20 and then you reflect back that wave front
1:50:24 with some sort of modulation that’s unique to your ID.
1:50:27 That’s what’s called backscattering fundamentally.
1:50:30 So the tag itself actually doesn’t have to consume
1:50:31 that much energy.
1:50:35 And that was a mechanism to which we were kind of thinking
1:50:37 about sending the data back.
1:50:41 So when you have an external ultrasonic transducer
1:50:44 that’s sending ultrasonic wave to your implant,
1:50:45 the neural dust implant
1:50:49 and it records some information about its environment,
1:50:52 whether it’s a neuron firing or some other state
1:50:57 of the tissue that it’s interfacing with.
1:51:01 And then it just amplitude modulates the wave front
1:51:04 that comes back to the source.
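A toy sketch of that amplitude-modulated backscatter idea; purely illustrative, not Neuralink's or the neural dust project's actual signal chain:

    # The implant never transmits on its own; it just varies how strongly
    # it reflects the interrogating wave, and the reader reads the envelope.
    import numpy as np

    t = np.linspace(0, 1e-4, 10_000)         # 100 microseconds of time
    carrier = np.sin(2 * np.pi * 10e6 * t)   # 10 MHz ultrasonic carrier

    # Toy "recorded" signal: a slow on/off pattern standing in for data.
    data = 0.5 + 0.5 * (np.sin(2 * np.pi * 1e4 * t) > 0)

    echo = data * carrier   # amplitude modulation of the reflected wave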
1:51:06 – And the recording step would be the only one
1:51:08 that requires any energy.
1:51:10 So what would require energy in that little step?
1:51:11 – Correct.
1:51:14 So it is that initial kind of startup circuitry
1:51:17 to get that recording, amplifying it,
1:51:19 and then just modulating.
1:51:23 And the mechanism by which you can enable that is
1:51:25 that there are these specialized crystals
1:51:27 called piezoelectric crystals
1:51:30 that are able to convert sound energy
1:51:32 into electrical energy and vice versa.
1:51:35 So you can kind of have this interplay
1:51:37 between the ultrasonic domain and the electrical domain
1:51:39 within the biological tissue.
1:51:44 So on the theme of parking very small
1:51:46 computational devices next to neurons,
1:51:51 that’s the dream, the vision of brain-computer interfaces.
1:51:53 Maybe before we talk about Neuralink,
1:51:57 can you give a sense of the history of the field of BCI?
1:52:02 What has been maybe the continued dream
1:52:05 and also some of the milestones along the way
1:52:07 with the different approaches
1:52:09 and the amazing work done at the various labs?
1:52:14 – I think a good starting point is going back to the 1790s.
1:52:18 – I did not expect that.
1:52:23 – Where the concept of animal electricity
1:52:25 or the fact that body’s electric
1:52:28 was first discovered by Luigi Galvani,
1:52:30 where he had this famous experiment
1:52:34 where he connected a set of electrodes to a frog leg
1:52:37 and ran current through it and then it started twitching
1:52:40 and he said, “Oh my goodness, body’s electric.”
1:52:41 – Yeah.
1:52:44 – So fast forward many, many years to the 1920s,
1:52:48 where Hans Berger, who’s a German psychiatrist,
1:52:52 discovered EEG or Electroencephalography,
1:52:53 which is still around.
1:52:56 There are these electrode arrays that you wear
1:53:00 outside the skull that gives you some sort of neural recording.
1:53:01 That was a very, very big milestone
1:53:04 that you can record some sort of activities
1:53:06 about the human mind.
1:53:13 And then in the 1940s, there was this group of scientists,
1:53:15 Renshaw, Forbes, and Morison,
1:53:20 that inserted these glass microelectrodes
1:53:23 into the cortex and recorded single neurons.
1:53:28 They showed that there are signals that are a bit
1:53:30 higher resolution and higher fidelity
1:53:32 as you get closer to the source, let’s say.
1:53:37 And in the 1950s, these two scientists,
1:53:40 Hodgkin and Huxley, showed up
1:53:44 and they built these beautiful, beautiful models
1:53:47 of the cell membrane and the ionic mechanisms
1:53:49 and had these circuit diagrams.
1:53:51 And as someone who is an electrical engineer,
1:53:53 it’s a beautiful model that’s built out
1:53:56 of these partial differential equations,
1:53:58 talking about flow of ions
1:54:02 and how that really leads to how neurons communicate.
1:54:04 And they won the Nobel Prize for that
1:54:06 10 years later in the 1960s.
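The space-clamped version of their model is compact enough to write down; a minimal sketch with the standard textbook squid-axon parameters and simple forward-Euler integration (illustrative, not production code):

    # Hodgkin-Huxley membrane model, standard parameters.
    import numpy as np

    C = 1.0                            # membrane capacitance, uF/cm^2
    gNa, gK, gL = 120.0, 36.0, 0.3     # peak conductances, mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4   # reversal potentials, mV

    def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
    def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
    def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

    dt, steps, I_ext = 0.01, 50_000, 10.0   # ms, step count, uA/cm^2
    V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting initial conditions

    for _ in range(steps):
        INa = gNa * m**3 * h * (V - ENa)    # sodium current
        IK = gK * n**4 * (V - EK)           # potassium current
        IL = gL * (V - EL)                  # leak current
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    # Sustained input current above threshold yields repetitive spiking.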
1:54:11 So in 1969, Eb Fetz from the University of Washington
1:54:13 published this beautiful paper called
1:54:15 Operant Conditioning of Cortical Unit Activity,
1:54:20 where he was able to record a single unit neuron
1:54:25 from a monkey and was able to have the monkey modulate it
1:54:29 based on its activity and a reward system.
1:54:32 So I would say this is the very, very first example
1:54:35 as far as I’m aware of the closed loop
1:54:38 brain computer interface or BCI.
1:54:41 – The abstract reads, the activity of single neurons
1:54:46 in precentral cortex of unanesthetized monkeys
1:54:48 was conditioned by reinforcing high rates
1:54:52 of neuronal discharge with delivery of a food pellet.
1:54:55 Auditory and visual feedback of unit firing rates
1:54:58 was usually provided in addition to food reinforcement.
1:55:01 – Cool, so they actually got it done.
1:55:02 – They got it done.
1:55:05 This is back in 1969.
1:55:08 – After several training sessions,
1:55:11 monkeys could increase the activity of newly isolated cells
1:55:16 by 50 to 500% above rates before reinforcement.
1:55:18 Fascinating.
1:55:19 – Brain is very plastic.
1:55:25 – And so from here the number of experiments grew.
1:55:29 – Yeah, number of experiments as well as set of tools
1:55:31 to interface with the brain have just exploded.
1:55:36 I think, and also just understanding the neural code
1:55:38 and how some of the cortical layers
1:55:40 and the functions are organized.
1:55:45 So the other paper that is pretty seminal,
1:55:47 especially in the motor decoding
1:55:52 was this paper in the 1980s from Georgopoulos
1:55:56 that discovered that there’s this thing called
1:55:57 the motor tuning curve.
1:55:59 So what are motor tuning curves?
1:56:02 It’s the fact that there are neurons in the motor cortex
1:56:05 of mammals, including humans,
1:56:09 that have a preferential direction that causes them to fire.
1:56:11 So what that means is there are sets of neurons
1:56:14 that would increase their spiking activities
1:56:19 when you’re thinking about moving to the left, right,
1:56:23 up, down, and any of those vectors.
1:56:26 And based on that, you could start to think,
1:56:30 well, if you can identify those essential eigenvectors,
1:56:33 you can do a lot and you can actually use that information
1:56:35 for actually decoding someone’s intended movement
1:56:37 from the cortex.
1:56:39 So that was a very, very seminal kind of paper
1:56:44 that showed that there is some sort of code
1:56:48 that you can extract, especially in the motor cortex.
1:56:50 – So there’s signal there.
1:56:54 And if you measure the electrical signals from the brain,
1:56:57 you could actually figure out what the intention was.
1:56:59 – Correct, yeah, not only electrical signals,
1:57:01 but electrical signals from the right set of neurons
1:57:04 that give you these preferential directions.
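A toy illustration (Python) of that tuning-curve picture: each neuron fires most for its preferred direction with a roughly cosine falloff, and a population vector, preferred directions weighted by firing rates, recovers the intended movement. All numbers here are illustrative, not from any real recording.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
preferred = rng.uniform(0, 2 * np.pi, n_neurons)  # preferred directions (rad)

def firing_rates(theta, baseline=10.0, gain=8.0):
    """Cosine tuning: rate peaks when movement angle matches preference."""
    return baseline + gain * np.cos(theta - preferred)

def population_vector(rates):
    """Sum each neuron's preferred direction, weighted by its excess rate."""
    w = rates - rates.mean()
    return np.arctan2(np.sum(w * np.sin(preferred)),
                      np.sum(w * np.cos(preferred)))

intended = np.deg2rad(30)
decoded = population_vector(firing_rates(intended))
print(np.rad2deg(decoded))  # close to 30 degrees
```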
1:57:09 – Okay, so going slowly towards Neuralink.
1:57:10 One interesting question is,
1:57:13 what do we understand on the BCI front
1:57:18 on invasive versus noninvasive from this line of work?
1:57:23 How important is it to park next to the neuron?
1:57:26 What does that get you?
1:57:27 – That answer fundamentally depends
1:57:30 on what you want to do with it, right?
1:57:32 There’s actually incredible amount of stuff
1:57:36 that you can do with EEG and electrocorticography, ECoG,
1:57:39 which actually doesn't penetrate the cortical layer
1:57:42 or parenchyma, but where you place a set of electrodes
1:57:44 on the surface of the brain.
1:57:47 So the thing that I’m personally very interested in
1:57:49 is just actually understanding
1:57:54 and being able to just really tap into
1:57:56 the high resolution, high fidelity,
1:57:58 understanding of the activities
1:58:00 that are happening at the local level.
1:58:03 And we can get into biophysics,
1:58:06 but just to kind of step back to kind of use analogy,
1:58:08 ’cause analogy here can be useful.
1:58:09 Sometimes it’s a little bit difficult
1:58:11 to think about electricity.
1:58:12 At the end of the day, we’re doing electrical recording
1:58:16 that’s mediated by ionic currents,
1:58:18 movements of these charged particles,
1:58:22 which is really, really hard for most people to think about.
1:58:24 But turns out a lot of the activities
1:58:28 that are happening in the brain
1:58:30 and the frequency band in which that's happening
1:58:33 is actually very, very similar to sound waves
1:58:37 in our normal conversational audible range.
1:58:41 So the analogy that typically is used in the field is,
1:58:44 imagine you have a football stadium,
1:58:47 and there's a game going on.
1:58:48 If you stand outside the stadium,
1:58:51 you maybe get a sense of how the game is going
1:58:53 based on the cheers and the boos of the home crowd,
1:58:55 whether the team is winning or not.
1:58:58 But you have absolutely no idea what the score is.
1:59:02 You have absolutely no idea what individual audience members
1:59:05 or the players are saying to each other,
1:59:07 what the next play is, what the next goal is.
1:59:11 So what you have to do is you have to drop the microphone
1:59:15 into the stadium and then get near the source,
1:59:17 like into the individual chatter.
1:59:20 In this specific example, you would want to have it
1:59:22 right next to where the huddle’s happening.
1:59:26 So I think that’s kind of a good illustration
1:59:31 of what we’re trying to do when we say invasive
1:59:33 or minimally invasive or implanted
1:59:36 brain-computer interfaces versus non-invasive
1:59:38 or non-implanted brain interfaces.
1:59:42 It’s basically talking about where do you put that microphone
1:59:44 and what can you do with that information?
1:59:48 So what is the biophysics of the read and write
1:59:50 communication that we’re talking about here
1:59:55 as we now step into the efforts at Neuralink?
2:00:00 – Yeah, so brain is made up of these specialized cells
2:00:02 called neurons.
2:00:06 There’s billions of them, tens of billions.
2:00:08 Sometimes people call it a hundred billion
2:00:13 that are connected in this complex yet dynamic network.
2:00:16 They’re constantly remodeling.
2:00:18 They’re changing their synaptic weights
2:00:23 and that’s what we typically call neuroplasticity.
2:00:28 And the neurons are also bathed in this charged environment
2:00:31 that is latent with many charged molecules
2:00:36 like potassium ions, sodium ions, chloride ions.
2:00:39 And those actually facilitate this communication
2:00:41 through ionic currents
2:00:43 between these different networks.
2:00:48 And when you look at a neuron as well,
2:00:53 it has this membrane with beautiful,
2:00:55 beautiful protein structures
2:00:58 called voltage-selective ion channels,
2:01:03 which in my opinion is one of nature’s best inventions.
2:01:05 In many ways, if you think about what they are,
2:01:09 they’re doing the job of a modern day transistors.
2:01:11 Transistors are nothing more at the end of the day
2:01:13 than a voltage-gated conduction channel.
2:01:18 And nature found a way to have that very, very early on
2:01:20 in its evolution.
2:01:22 And as we all know, with the transistor,
2:01:24 you can do many, many computations
2:01:28 and a lot of the amazing things that we have access to today.
2:01:33 So I think, just as a tangent, it's one of those
2:01:35 just beautiful, beautiful inventions
2:01:36 that nature came up with,
2:01:39 these voltage-gated ion channels.
2:01:41 I mean, I suppose there’s, on the biological level,
2:01:44 at every level of the complexity of the hierarchy
2:01:48 of the organism, there’s going to be some mechanisms
2:01:51 for storing information and for doing computation.
2:01:53 And this is just one such way.
2:01:56 But to do that with biological
2:01:58 and chemical components is interesting.
2:02:02 Plus like, when neurons, I mean, it’s not just electricity,
2:02:06 it’s chemical communication, it’s also mechanical.
2:02:10 I mean, these are like actual objects that have like,
2:02:13 that vibrate, I mean, they move.
2:02:14 – Yeah, there are actually, I mean,
2:02:17 there’s a lot of really, really interesting physics
2:02:19 that are involved in, you know,
2:02:23 kind of going back to my work on ultrasound,
2:02:26 they’re in grad school, there are groups
2:02:29 and there were groups and there are still groups
2:02:34 looking at ways to cause neurons to actually fire
2:02:36 an action potential using ultrasound wave.
2:02:38 And the mechanism to which that’s happening
2:02:40 is still unclear, as I understand.
2:02:42 You know, it may just be that, you know,
2:02:44 you’re imparting some sort of thermal energy
2:02:46 and that causes cells to depolarize
2:02:48 in some interesting ways.
2:02:51 But there are also these ion channels
2:02:55 or even membranes that actually just open up as pores
2:02:58 as they're being mechanically shaken, right, vibrated.
2:03:00 So there’s just a lot of, you know,
2:03:03 elements of these like move particles
2:03:07 which again, like that’s governed by diffusion physics, right?
2:03:09 Movements of particles.
2:03:12 And there’s also a lot of kind of interesting physics there.
2:03:16 – Also, not to mention, as Roger Penrose talks about,
2:03:18 there might be some beautiful weirdness
2:03:21 in the quantum mechanical effects of all of this.
2:03:23 And he actually believes that consciousness
2:03:26 might emerge from the quantum mechanical effects there.
2:03:29 So like there’s physics, there’s chemistry,
2:03:31 there’s bio, all of that is going on there.
2:03:32 – Oh yeah, yeah.
2:03:35 I mean, you can, yes, there’s a lot of levels of physics
2:03:37 that you can dive into.
2:03:41 But yeah, in the end, you have these membranes
2:03:43 with these voltage-gated ion channels
2:03:47 that selectively let these charged molecules
2:03:51 that are in the extracellular matrix in and out.
2:03:57 And these neurons generally have these like resting potential
2:03:59 where there’s a voltage difference
2:04:02 between inside the cell and outside the cell.
2:04:06 And when there’s some sort of stimuli
2:04:10 that changes the state such that they need
2:04:13 to send information to the downstream network,
2:04:17 you start to kind of see this sort of orchestration
2:04:19 of these different molecules going in and out
2:04:21 of these channels, they also open up,
2:04:25 like more of them open up once it reaches some threshold
2:04:28 to a point where you have a depolarizing cell
2:04:30 that sends an action potential.
2:04:32 So it’s just a very beautiful kind of orchestration
2:04:35 of these molecules.
2:04:40 And what we’re trying to do when we place an electrode
2:04:44 or parking it next to a neuron is that you’re trying
2:04:47 to measure these local changes in the potential.
2:04:53 Again, mediated by the movements of the ions.
2:04:56 And what’s interesting, as I mentioned earlier,
2:04:57 there’s a lot of physics involved.
2:05:01 And the two dominant physics
2:05:04 for this electrical recording domain
2:05:07 is diffusion physics and electromagnetism.
2:05:10 And where one dominates,
2:05:15 where Maxwell’s equation dominates versus fixed law dominates,
2:05:18 depends on where your electrode is.
2:05:24 If it’s close to the source, mostly electromagnetic based,
2:05:28 when you’re farther away from it, it’s more diffusion based.
2:05:32 So essentially when you’re able to park it next to it,
2:05:36 you can listen in on those individual chatter
2:05:38 and those local changes in the potential.
2:05:40 And the type of signal that you get
2:05:45 is the canonical, textbook neural spiking waveform.
2:05:47 When you’re, the moment you’re further away
2:05:50 and based on some of the studies that people have done,
2:05:53 you know, Christoph Koch’s lab and others,
2:05:56 once you’re away from that source by roughly around 100 micron,
2:05:58 which is about width of a human hair,
2:06:01 you’re no longer here from that neuron.
2:06:05 You’re no longer able to kind of have the system sensitive enough
2:06:08 to be able to record that particular
2:06:13 local membrane potential change in that neuron.
2:06:16 And just to kind of give you a sense of scale also,
2:06:18 when you look at 100 micron voxels,
2:06:21 so 100 micron by 100 micron by 100 micron box,
2:06:25 in brain tissue, there are roughly around 40 neurons.
2:06:28 And whatever number of connections that they have.
2:06:30 So there’s a lot in that volume of tissue.
2:06:32 So the moment you’re outside of that,
2:06:36 there’s just no hope that you’ll be able to detect that change
2:06:40 from that one specific neuron that you may care about.
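As a quick sanity check on those numbers, 40 neurons in a 100-micron cube works out to a density on the order of 40,000 neurons per cubic millimeter:

```python
voxel_mm = 0.1               # 100 microns, expressed in millimeters
neurons_per_voxel = 40
density = neurons_per_voxel / voxel_mm**3
print(density)               # 40000.0 neurons per mm^3
```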
2:06:43 – Yeah, but as you’re moving about this space,
2:06:45 you’ll be hearing other ones.
2:06:48 So if you move another 100 micron,
2:06:49 you’ll be hearing chatter from another community.
2:06:50 – Correct.
2:06:54 – And so the whole sense is you wanna place
2:06:57 as many electrodes as possible, and then you're listening to the chatter.
2:06:58 – Yeah, you wanna listen to the chatter.
2:06:59 And at the end of the day,
2:07:02 you also want to basically let the software
2:07:04 do the job of decoding.
2:07:09 And just to kind of go to why ECoG and EEG work at all, right?
2:07:15 When you have these local changes,
2:07:18 obviously it’s not just this one neuron that’s activating,
2:07:20 there’s many, many other networks
2:07:22 that are activating all the time.
2:07:25 And you do see sort of a general change
2:07:27 in the potential at this electrode,
2:07:29 because this is a charged medium.
2:07:31 And that's what you're recording when you're farther away.
2:07:33 I mean, you still have some reference electrode
2:07:38 that's stable, and the brain is just an electroactive organ.
2:07:40 And you’re seeing some combination
2:07:42 aggregate action potential changes.
2:07:44 And then you can pick it up, right?
2:07:48 It’s a much slower changing signals,
2:07:53 but there are these like canonical kind of oscillations
2:07:55 and waves like gamma waves, beta waves,
2:07:57 like when you sleep that can be detected
2:07:59 ’cause there’s sort of a synchronized
2:08:05 kind of global effect of the brain that you can detect.
2:08:09 And I mean, the physics of this go like,
2:08:11 I mean, if we really wanna go down that rabbit hole,
2:08:15 like there’s a lot that goes on in terms of
2:08:17 like why diffusion physics at some point
2:08:19 dominates when you’re further away from the source.
2:08:22 You know, it’s just a charge medium.
2:08:25 So similar to how when you have electromagnetic waves
2:08:28 propagating in the atmosphere or in a charged medium
2:08:30 like a plasma, there's this weird shielding
2:08:35 that happens that actually further attenuates the signal
2:08:37 as you move away from it.
2:08:40 So yeah, you see like, if you do a really, really deep dive
2:08:44 on kind of the signal attenuation over distance,
2:08:46 you start to see kind of one over R squared in the beginning
2:08:48 and then exponential drop off.
2:08:50 And that’s the knee at which, you know,
2:08:53 you go from electromagnetism dominating
2:08:56 to diffusion physics dominating.
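A toy version (Python) of that attenuation curve: a near-field 1/r^2 falloff multiplied by an exponential shielding term that takes over at larger distances. The shielding length scale here is purely illustrative, not a measured tissue constant.

```python
import numpy as np

def relative_amplitude(r_um, shield_um=100.0):
    """1/r^2 falloff with an exponential 'shielding' term on top."""
    return (1.0 / r_um**2) * np.exp(-r_um / shield_um)

for r in (10, 25, 50, 100, 200, 400):
    print(f"{r:>4} um -> {relative_amplitude(r):.2e}")
```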
2:08:58 – But once again, with the electrodes,
2:09:01 the biophysics that you need to understand
2:09:06 is not as deep because no matter where you’re placing it,
2:09:09 you’re listening to a small crowd of local neurons.
2:09:09 – Correct, yeah.
2:09:11 So once you penetrate the brain,
2:09:14 you know, you’re in the arena, so to speak.
2:09:15 – And there’s a lot of neurons.
2:09:16 – There are many, many of them.
2:09:19 But then again, there’s like, there’s a whole field
2:09:22 of neuroscience that’s studying like how the different
2:09:25 groupings, the different sections of the seating
2:09:27 in the arena, what they usually are responsible for,
2:09:30 which is where the metaphor probably falls apart
2:09:33 ’cause the seating is not that organized in an arena.
2:09:34 – Also, most of them are silent.
2:09:37 They don’t really do much, you know,
2:09:41 or their activities are, you know,
2:09:44 you have to hit it with just the right set of stimulus.
2:09:45 – So they’re usually quiet.
2:09:47 – They’re usually very quiet.
2:09:50 There’s, I mean, similar to dark energy and dark matter,
2:09:52 there’s dark neurons.
2:09:53 What are they all doing?
2:09:55 When you place these electrodes, again,
2:09:56 like within this hundred-micron volume,
2:09:58 you have 40 or so neurons.
2:10:00 Like why do you not see 40 neurons?
2:10:01 Why do you see only a handful?
2:10:02 What is happening there?
2:10:05 – Well, they’re mostly quiet, but like when they speak,
2:10:06 they say profound shit, I think.
2:10:08 That’s the way I’d like to think about it.
2:10:12 Anyway, before we zoom in even more, let’s zoom out.
2:10:15 So how does Neuralink work?
2:10:20 From the surgery to the implant,
2:10:24 to the signal and the decoding process,
2:10:29 and the human being able to use the implant
2:10:33 to actually affect the world outside?
2:10:36 And all of this, I’m asking in the context
2:10:39 of there’s a gigantic historic milestone
2:10:41 that Neuralink just accomplished
2:10:43 in January of this year,
2:10:45 putting a Neuralink implant
2:10:47 in the first human being, Nolan.
2:10:50 And there’s been a lot to talk about there,
2:10:53 about his experience, because he’s able to describe
2:10:54 all the nuance and the beauty
2:10:57 and the fascinating complexity of that experience
2:10:58 of everything involved.
2:11:02 But on the technical level, how does Neuralink work?
2:11:04 – Yeah, so there are three major components
2:11:06 to the technology that we’re building.
2:11:10 One is the device, the thing that’s actually recording
2:11:11 these neural chatters.
2:11:17 We call it the N1 implant, or the Link.
2:11:20 And we have a surgical robot
2:11:22 that’s actually doing an implantation
2:11:24 of these tiny, tiny wires that we call threads
2:11:27 that are smaller than human hair.
2:11:31 And once the surgery is done,
2:11:33 you have these neural signals,
2:11:36 these spiking neurons that are coming out of the brain.
2:11:39 And you need to have some sort of software
2:11:43 to decode what the user intends to do with that.
2:11:48 So there’s what’s called Neuralink application or B1 app
2:11:49 that’s doing that translation,
2:11:53 is running the very, very simple machine learning model
2:11:58 that decodes these inputs that are neural signals
2:12:00 and then converted to a set of outputs
2:12:04 that allows our participant,
2:12:07 first participant Nolan to be able to control a cursor.
2:12:10 And this is done wirelessly.
2:12:11 – And this is done wirelessly.
2:12:15 So our implant is actually two parts.
2:12:20 The link has these flexible, tiny wires called threads
2:12:26 that have multiple electrodes along their length.
2:12:30 And they’re only inserted into the cortical layer,
2:12:35 which is about three to five millimeters in a human brain.
2:12:36 In the motor cortex region,
2:12:40 that’s where the intention for movement lies in.
2:12:43 And we have 64 of these threads,
2:12:46 each thread having 16 electrodes along the span
2:12:50 of three to four millimeters, separated by 200 microns.
2:12:55 So you can actually record along the depth of the insertion.
2:12:58 And based on that signal,
2:13:02 there’s custom integrated circuit
2:13:06 or ASIC that we built that amplifies the neural signals
2:13:09 that you’re recording and then digitizing it.
2:13:14 And then has some mechanism for detecting
2:13:16 whether there was an interesting event
2:13:20 that is a spiking event and decide to send that
2:13:23 or not send that through Bluetooth to an external device,
2:13:25 whether it’s a phone or a computer
2:13:27 that’s running this neural link application.
2:13:29 – So there’s onboard signal processing already,
2:13:32 just to decide whether this is an interesting event or not.
2:13:35 So there is some computational power on board inside,
2:13:36 in addition to the human brain.
2:13:39 – Yeah, so it does the signal processing
2:13:41 to kind of really compress the amount of signal
2:13:42 that you’re recording.
2:13:46 So we have a total of 1,000 electrodes,
2:13:51 sampling at just under 20 kilohertz with 10 bits each.
2:13:56 So that's 200 megabits per second coming through
2:14:01 to the chip from 1,000 channels of simultaneous neural recording.
2:14:04 And that’s quite a bit of data.
2:14:06 And there are technologies available
2:14:08 to send that off wirelessly,
2:14:12 but being able to do that in the very, very thermally
2:14:14 constrained environment that is the brain is the hard part.
2:14:18 So there has to be some amount of compression that happens
2:14:20 to send off only the interesting data that you need,
2:14:23 which in this particular case for motor decoding
2:14:27 is occurrence of a spike or not.
2:14:32 And then being able to use that to decode
2:14:34 the intended cursor movement.
2:14:37 So the implant itself processes it,
2:14:39 figures out whether a spike happened or not
2:14:42 with our spike detection algorithm,
2:14:44 and then packages it
2:14:49 and sends it off through Bluetooth to an external device
2:14:51 that then has the model to decode.
2:14:54 Okay, based on the spiking inputs,
2:14:58 did Nolan wish to go up, down, left, right
2:15:00 or click or right click or whatever?
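Putting that whole path together, here is a high-level sketch (Python, and emphatically not Neuralink's actual code): the implant side reduces raw samples to per-channel spike events and packs them for the radio; the app side turns unpacked spike counts into a cursor velocity with a simple learned linear map. The threshold detector and the linear decoder are stand-ins for the real spike detector and model.

```python
import numpy as np

N_CHANNELS = 1024

class ImplantSide:
    """On-device step: compress raw samples to spike events."""
    def detect_spikes(self, raw_frame):
        # Stand-in detector: amplitude threshold per channel.
        return raw_frame < -4.0            # boolean spike mask, shape (1024,)

    def packetize(self, spike_mask):
        return np.packbits(spike_mask)     # 1024 bits -> 128 bytes for the radio

class AppSide:
    """External step: decode spike events into cursor velocity."""
    def __init__(self):
        self.W = np.zeros((2, N_CHANNELS))  # learned offline, then adapted

    def decode(self, packet):
        counts = np.unpackbits(packet)[:N_CHANNELS].astype(float)
        return self.W @ counts              # (vx, vy) intended velocity
```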
2:15:02 – All of this is really fascinating,
2:15:04 but let’s stick on the N1 implant itself.
2:15:06 So the thing that’s in the brain.
2:15:07 So I’m looking at a picture of it.
2:15:08 There’s an enclosure.
2:15:11 There’s a charging coil.
2:15:13 So we didn’t talk about the charging,
2:15:15 which is fascinating.
2:15:19 The battery, the power electronics, the antenna.
2:15:23 Then there’s the signal processing electronics.
2:15:25 I wonder if there’s more kinds of signal processing
2:15:26 you can do?
2:15:27 That’s another question.
2:15:29 And then there’s the threads themselves
2:15:33 with the enclosure on the bottom.
2:15:36 So maybe to ask about the charging.
2:15:40 So there’s an external charging device.
2:15:42 – Yeah, there’s an external charging device.
2:15:44 So yeah, the second part of the implant,
2:15:46 the threads are the ones, again,
2:15:49 just the last three to five millimeters
2:15:52 are the ones that are actually penetrating the cortex.
2:15:55 The rest of it, actually most of the volume,
2:15:59 is occupied by the battery, a rechargeable battery.
2:16:03 And it’s about a size of a quarter.
2:16:04 I actually have a device here,
2:16:06 if you want to take a look at it.
2:16:12 This is the flexible thread component of it.
2:16:13 And then this is the implant.
2:16:17 So it’s about a size of a U.S. quarter.
2:16:20 It’s about nine millimeter thick.
2:16:22 So basically this implant,
2:16:25 once you have the craniectomy and the directomy,
2:16:31 threads are inserted and the hole that you created,
2:16:34 this craniectomy gets replaced with that.
2:16:36 So basically that thing plugs that hole
2:16:41 and you can screw in these self-drilling cranial screws
2:16:43 to hold it in place.
2:16:45 And at the end of the day,
2:16:47 once you have the skin flap over,
2:16:50 there’s only about two to three millimeters
2:16:53 that’s obviously transitioning off of
2:16:56 the top of the implant to where the screws are.
2:16:59 And that’s the minor bump that you have.
2:17:01 – Those threads look tiny.
2:17:04 That’s incredible.
2:17:06 That is really incredible.
2:17:07 That is really incredible.
2:17:09 And also as you’re right,
2:17:12 most of the actual volume is the battery.
2:17:15 This is way smaller than I realized.
2:17:17 – They are also, the threads themselves are quite strong.
2:17:19 – They look strong.
2:17:23 And the threads themselves also have a very interesting
2:17:25 feature at the end called the loop.
2:17:27 And that’s the mechanism to which
2:17:29 the robot is able to interface
2:17:32 and manipulate this tiny hair-like structure.
2:17:33 – And they’re tiny.
2:17:35 So what’s the width of a thread?
2:17:40 – Yeah, so the width of a thread starts from 16 micron
2:17:43 and then tapers out to about 84 micron.
2:17:47 So average human hair is about 80 to 100 micron in width.
2:17:51 – This thing is amazing.
2:17:52 This thing is amazing.
2:17:57 – Yes, most of the volume is occupied by the battery.
2:17:59 Rechargeable lithium ion cell.
2:18:05 And the charging is done through inductive charging,
2:18:07 which is actually very commonly used.
2:18:10 You know, your cell phones, most cell phones have that.
2:18:13 The biggest difference is that, you know, for us,
2:18:15 you know, usually when you have a phone
2:18:17 and you want to charge it on the charging pad,
2:18:19 you don’t really care how hot it gets.
2:18:21 Whereas for us, it matters.
2:18:24 There’s a very strict regulation and good reasons
2:18:27 to not actually increase the surrounding tissue temperature
2:18:28 by two degrees Celsius.
2:18:31 So there’s actually a lot of innovation
2:18:36 that is packed into this to allow charging of this implant
2:18:40 without reaching that temperature threshold.
2:18:43 And even small things like you see this charging coil
2:18:46 and what’s called a ferrite shield, right?
2:18:48 So without that ferrite shield,
2:18:50 what you end up having when you have, you know,
2:18:54 resonant inductive charging is that the battery itself
2:18:59 is a metallic can and you form these eddy currents
2:19:03 from the external charger and that causes heating.
2:19:07 And that actually contributes to inefficiency in charging.
2:19:11 So this ferrite shield, what it does
2:19:15 is that it actually concentrates the field lines
2:19:18 away from the battery and around the coil
2:19:19 that's actually wrapped around it.
2:19:23 – There’s a lot of really fascinating design here
2:19:26 to make it, I mean, you’re integrating a computer
2:19:29 into a biological, a complex biological system.
2:19:31 – Yeah, there’s a lot of innovation here.
2:19:35 I would say that part of what enabled this was
2:19:38 just the innovations in the wearable.
2:19:41 There’s a lot of really, really powerful,
2:19:45 tiny, low power microcontrollers,
2:19:48 temperature sensors or various different sensors
2:19:50 and power electronics.
2:19:54 A lot of innovation really came in the charging coil design,
2:19:58 how this is packaged and how do you enable charging
2:20:01 such that you don’t really exceed that temperature limit
2:20:04 which is not a constraint for other devices out there.
2:20:06 So let’s talk about the threads themselves,
2:20:08 those tiny, tiny, tiny things.
2:20:11 So how many of them are there?
2:20:14 You mentioned a thousand electrodes.
2:20:15 How many threads are there?
2:20:18 And what do the electrodes have to do with the threads?
2:20:22 – Yeah, so the current instantiation of the device
2:20:27 has 64 threads and each thread has 16 electrodes
2:20:30 for a total of 1,024 electrodes
2:20:33 that are capable of both recording and stimulating.
2:20:40 And the thread is basically this polymer-insulated wire.
2:20:48 The metal conductor is kind of a tiramisu cake
2:20:51 of titanium, platinum, gold, platinum, titanium.
2:20:56 And they’re very, very tiny wires,
2:21:02 two microns in width, so two one-millionths of a meter.
2:21:04 – It’s crazy that that thing I’m looking at
2:21:07 has the polymer insulation, has the conducting material
2:21:11 and has 16 electrodes at the end of it.
2:21:12 – On each of those thread?
2:21:13 – Yeah, on each of those threads.
2:21:14 – Correct.
2:21:15 – 16, each one of those.
2:21:17 – Yes, you’re not gonna be able to see it with naked eyes.
2:21:20 – And I mean, to state the obvious,
2:21:22 or maybe for people who are just listening,
2:21:24 they’re flexible.
2:21:26 – Yes, yes, that’s also one element
2:21:29 that was incredibly important for us.
2:21:32 So each of these threads is, as I mentioned,
2:21:36 16 microns in width, and then they taper to 84 microns,
2:21:39 but in thickness, they're less than five microns.
2:21:45 And the thickness is mostly polyimide at the bottom,
2:21:48 then this metal track, and then another polyimide.
2:21:50 So two microns of polyimide,
2:21:53 400 nanometers of this metal stack,
2:21:56 and two microns of polyimide sandwiched together
2:21:58 to protect it from the environment
2:22:02 that is 37 degrees C bag of salt water.
2:22:05 – So maybe can you speak to some interesting aspects
2:22:08 of the material design here?
2:22:11 Like what does it take to design a thing like this
2:22:14 and to be able to manufacture a thing like this
2:22:16 for people who don’t know anything
2:22:17 about this kind of thing?
2:22:20 – Yeah, so the material selection that we have is not,
2:22:24 I don’t think it was particularly unique.
2:22:27 There were other labs and there are other labs
2:22:32 that are kind of looking at similar material stack.
2:22:34 There’s kind of a fundamental question
2:22:38 and still needs to be answered around the longevity
2:22:43 and reliability of these microelectros that we call.
2:22:45 Compared to some of the other more conventional
2:22:49 neural interfaces, devices that are intracranial,
2:22:53 so penetrating the cortex that are more rigid,
2:22:56 you know, like the Utah array, these
2:22:58 four-by-four-millimeter kind of silicon shanks
2:23:02 that have an exposed recording site at the end.
2:23:06 And, you know, that's been kind of the innovation
2:23:10 from Richard Normann back in 1997.
2:23:11 It's called the Utah array 'cause, you know,
2:23:13 he was at the University of Utah.
2:23:15 – And what does the Utah array look like?
2:23:17 So it’s a rigid type of-
2:23:19 – Yeah, so we can actually look it up.
2:23:23 – Yeah.
2:23:26 Yeah, so it’s a bit of needle.
2:23:27 There’s-
2:23:30 – Okay, go ahead, I’m sorry.
2:23:32 – Those are rigid shanks.
2:23:33 – Rigid, yeah, you weren’t kidding.
2:23:36 – And the size and the number of shanks vary
2:23:38 anywhere from 64 to 128.
2:23:42 At the very tip of it is an exposed electrode
2:23:44 that actually records neural signal.
2:23:47 The other thing that’s interesting to note is that,
2:23:50 unlike Neuralink threads that have recording electrodes
2:23:54 that are actually exposed iridium oxide recording sites
2:23:57 along the depth, this is only at a single depth.
2:23:59 So these Utah array shanks can be anywhere
2:24:03 between 0.5 millimeters to 1.5 millimeters.
2:24:06 And they also have designs that are slanted
2:24:08 so you can have them inserted at different depths.
2:24:12 But that’s one of the other big differences.
2:24:14 And then, I mean, the main key difference
2:24:17 is the fact that there’s no active electronics.
2:24:18 These are just electrodes.
2:24:21 And then there’s a bundle of a wire that you’re seeing.
2:24:24 And then that actually then exists the craniectomy
2:24:28 that then has this port that you can connect to
2:24:30 for any external electronic devices.
2:24:34 They are working on, or have, a wireless telemetry device,
2:24:38 but it still requires a through-the-skin port,
2:24:41 which actually is one of the biggest failure modes,
2:24:43 infection, for the system.
2:24:46 – What are some of the challenges
2:24:48 associated with flexible threads?
2:24:52 Like, for example, on the robotic side, R1,
2:24:56 implanting those threads, how difficult is that task?
2:24:58 – Yeah, so as you mentioned,
2:25:01 they’re very, very difficult to maneuver by hand.
2:25:05 These Utah rays that you saw earlier,
2:25:07 they’re actually inserted by a neurosurgeon
2:25:10 actually positioning it near the site that they want.
2:25:14 And then there’s a pneumatic hammer
2:25:16 that actually pushes them in.
2:25:20 So it’s a pretty simple process
2:25:22 and they’re easier to maneuver.
2:25:24 But for these thin-film arrays,
2:25:27 they’re very, very tiny and flexible.
2:25:29 So they’re very difficult to maneuver.
2:25:32 So that’s why we built an entire robot to do that.
2:25:35 There are other reasons for why we built a robot.
2:25:38 And that is ultimately we want this to help
2:25:41 millions and millions of people that can benefit from this.
2:25:43 And there just aren’t that many neurosurgeons out there.
2:25:50 And robots can be something that,
2:25:52 we hope can actually do large parts of the surgery.
2:25:59 But the robot is this entire other
2:26:02 sort of category of product that we’re working on.
2:26:07 And it’s essentially this multi-axis gantry system
2:26:13 that has the specialized robot head
2:26:16 that has all of the optics
2:26:21 and this kind of a needle retracting mechanism
2:26:23 that maneuvers these threads
2:26:29 via this loop structure that you have on the thread.
2:26:31 – So the thread already has a loop structure
2:26:32 by which you can grab it.
2:26:33 – Correct. – Correct.
2:26:34 – So this is fascinating.
2:26:35 So you mentioned optics.
2:26:38 So there’s a robot, R1.
2:26:39 So for now there’s a human
2:26:44 that actually creates a hole in the skull.
2:26:47 And then after that,
2:26:49 there’s a computer vision component
2:26:53 that’s finding a way to avoid the blood vessels.
2:26:56 And then you’re grabbing it by the loop,
2:26:59 each individual thread and placing it
2:27:02 in a particular location to avoid the blood vessels.
2:27:04 And also choosing the depth of placement.
2:27:05 – Correct. – So controlling every,
2:27:08 like the 3D geometry of the placement.
2:27:09 – Correct.
2:27:11 So the aspect of this robot that is unique
2:27:15 is that it’s not surgeon assisted or human assisted.
2:27:19 It’s a semi-automatic or automatic robot once you,
2:27:21 you know, obviously there are human component to it
2:27:23 when you’re placing targets.
2:27:25 You can always move it away
2:27:28 from kind of major vessels that you see.
2:27:31 But I mean, we want to get to a point where one click
2:27:34 and it just does the surgery within minutes.
2:27:38 – So the computer vision component finds great targets,
2:27:41 candidates and the human kind of approves them
2:27:44 and the robot does, does it do like one thread at a time?
2:27:45 Or does it do multiple? – It does one thread
2:27:47 at a time and that’s actually also one thing
2:27:52 that we are looking at ways to do multiple threads at a time.
2:27:54 There’s nothing stopping from it.
2:27:58 You can have multiple kind of engagement mechanisms.
2:28:00 But right now it’s one by one.
2:28:05 And, you know, we also still do quite a bit of
2:28:07 just kind of verification to make sure that it got inserted,
2:28:10 and if so, how deep, you know, did it actually match
2:28:12 what was programmed in, and so on and so forth.
2:28:15 – And the actual electrodes are placed
2:28:19 at varying depths in the,
2:28:21 I mean, it's very small differences, but differences.
2:28:23 – Yeah, yeah.
2:28:26 – And so that there’s some reasoning behind that,
2:28:31 as you mentioned, like it gets more varied signal.
2:28:37 – Yeah, I mean, we try to place them all around
2:28:40 three or four millimeter from the surface.
2:28:42 Just ’cause the span of the electrode,
2:28:46 those 16 electrodes that we currently have in this version
2:28:49 spans, you know, roughly around three millimeters.
2:28:52 So we want to get all of those in the brain.
2:28:53 – This is fascinating.
2:28:56 Okay, so there’s a million questions here.
2:28:58 If we go zoom in at specific on the electrodes,
2:29:00 what is your sense?
2:29:03 How many neurons is each individual electrode listening to?
2:29:06 – Yeah, each electrode can record from anywhere
2:29:10 between zero to 40, as I mentioned, right, earlier.
2:29:15 But practically speaking, we only see about
2:29:17 at most like two to three.
2:29:20 And you can actually distinguish which neuron
2:29:24 it’s coming from by the shape of the spikes.
2:29:29 So I mentioned the spike detection algorithm that we have.
2:29:31 It’s called boss algorithm.
2:29:35 But for online spike sorter.
2:29:36 – Nice.
2:29:38 – It actually outputs, at the end of the day,
2:29:43 six unique values, which are, you know,
2:29:46 kind of the amplitudes of these negative-going hump,
2:29:49 middle hump, and positive-going hump,
2:29:52 and then also the times at which these happen.
2:29:55 And from that, you can have a kind of a statistical
2:29:58 probability estimation of is that a spike?
2:29:59 Is it not a spike?
2:30:01 And then based on that, you could also determine,
2:30:03 oh, that spike looks different than that spike
2:30:04 must come from a different neuron.
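A hedged sketch (Python) of the kind of on-device reduction just described: collapse each candidate waveform to six values, the amplitudes of the negative-going, middle, and positive-going humps plus their times, then decide spike versus non-spike, and which unit, by proximity to stored templates. This illustrates the idea only; it is not the actual BOSS algorithm, and the hump-finding and tolerance test are simplifications.

```python
import numpy as np

def six_features(waveform, dt_us=50.0):
    """Reduce a snippet around a threshold crossing to six numbers.

    dt_us = 50 microseconds per sample at the 20 kHz rate quoted earlier.
    """
    i_neg = int(np.argmin(waveform))                  # negative-going hump
    i_pos = int(np.argmax(waveform[i_neg:])) + i_neg  # positive-going hump
    i_mid = (i_neg + i_pos) // 2                      # crude 'middle hump'
    amplitudes = waveform[[i_neg, i_mid, i_pos]]
    times = np.array([i_neg, i_mid, i_pos]) * dt_us
    return np.concatenate([amplitudes, times])

def classify(features, templates, tol):
    """Return the index of the matching unit's template, or None."""
    for unit, template in enumerate(templates):
        if np.all(np.abs(features - template) <= tol):
            return unit
    return None  # statistically unlikely to be a spike
```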
2:30:08 – Okay, so that’s a nice signal processing step
2:30:11 from which you can then make much better predictions
2:30:13 about if there’s a spike, especially in this kind
2:30:16 of context where there could be multiple neurons screaming.
2:30:20 And that also results in you being able
2:30:22 to compress the data better at the end of the day.
2:30:23 Okay, that’s–
2:30:26 – And just to be clear, I mean, labs do this,
2:30:28 what's called spike sorting,
2:30:31 usually once you have these like broadband,
2:30:35 you know, like fully digitized signals,
2:30:37 and then you run a bunch of different sets
2:30:40 of algorithms to kind of tease them apart.
2:30:43 It's just that all of this, for us, is done on the device.
2:30:44 – On the device.
2:30:47 – In a very low-power, custom-built, you know,
2:30:51 ASIC digital processing unit.
2:30:52 – Highly heat constrained.
2:30:53 – Highly heat constrained.
2:30:56 And the processing time from signal going in
2:30:59 and giving you the output is less than a microsecond,
2:31:02 which is, you know, a very, very short amount of time.
2:31:04 – Oh yeah, so the latency has to be super short.
2:31:05 – Correct.
2:31:06 – Oh, wow.
2:31:07 Oh, that’s a pain in the ass.
2:31:10 – Yeah, latency is this huge, huge thing
2:31:11 that you have to deal with.
2:31:13 Right now, the biggest source of latency
2:31:16 comes from the Bluetooth, the way in which
2:31:17 they’re packetized and, you know,
2:31:19 we bin them in 15-millisecond windows.
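As a latency budget, that binning alone contributes on average half a bin of queueing delay, before radio and decode time are even counted:

```python
BIN_MS = 15                    # packetization bin from the conversation
avg_queue_ms = BIN_MS / 2      # a spike waits half a bin on average
print(avg_queue_ms, "ms")      # 7.5 ms added by binning alone
```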
2:31:22 – Oh, interesting, so it’s communication constrained.
2:31:23 Is there some potential innovation there
2:31:25 on the protocol used?
2:31:25 – Absolutely.
2:31:26 – Okay.
2:31:30 – Yeah, Bluetooth is definitely not our final
2:31:34 wireless communication protocol
2:31:35 that we want to get to.
2:31:36 It’s a highly-
2:31:38 – Hence the N1 and the R1.
2:31:40 I imagine that increases-
2:31:40 – NX.
2:31:41 – NXRX.
2:31:46 – Yeah, that’s, you know, the communication protocol
2:31:49 ’cause Bluetooth allows you to communicate
2:31:51 against farther distances than you need to,
2:31:52 so you can go much shorter.
2:31:55 – Yeah, the only, well, the primary motivation
2:31:57 for choosing Bluetooth is that,
2:31:58 I mean, everything has Bluetooth.
2:31:59 – All right, so you can talk to-
2:32:00 – So any device?
2:32:03 – Interoperability is just absolutely essential,
2:32:06 especially in this early phase.
2:32:08 And in many ways, if you can access a phone
2:32:10 or a computer, you can do anything.
2:32:13 – Well, it’d be interesting to step back
2:32:15 and actually look at, again,
2:32:18 the same pipeline that you mentioned for Nolan.
2:32:23 So what does this whole process look like
2:32:27 from finding and selecting a human being
2:32:30 to the surgery to the first time
2:32:32 he’s able to use this thing?
2:32:35 – So we have what’s called a patient registry
2:32:38 that people can sign up to, you know,
2:32:40 hear more about the updates.
2:32:43 And that was the route through which Nolan applied.
2:32:47 And the process is that once the application comes in,
2:32:49 you know, it contains some medical records.
2:32:54 And we, you know, look at their medical eligibility;
2:32:56 there's a lot of different inclusion
2:32:58 and exclusion criteria for them to meet.
2:33:01 And we go through a pre-screening interview process
2:33:03 with someone from Neuralink.
2:33:07 And at some point, we also go out to their homes
2:33:10 to do a BCI home audit.
2:33:12 'Cause one of the most kind of revolutionary parts
2:33:14 about, you know, having this
2:33:17 N1 system that is completely wireless
2:33:18 is that you can use it at home.
2:33:21 Like you don’t actually have to go to the lab
2:33:23 and, you know, go to the clinic
2:33:26 to get connected to these like specialized equipment
2:33:27 that you can’t take home with you.
2:33:32 So that’s one of the key elements of, you know,
2:33:34 when we’re designing the system that we wanted to keep in mind,
2:33:35 like, you know, people, you know,
2:33:38 hopefully would wanna be able to use this every day
2:33:40 in the comfort of their home.
2:33:44 And so part of our engagement
2:33:46 and what we’re looking for during BCI home audit
2:33:48 is to just kind of understand their situation
2:33:51 and the other assistive technologies that they use.
2:33:54 – And we should also step back and kind of say that
2:33:58 the estimate is 180,000 people live
2:34:00 with quadriplegia in the United States.
2:34:03 And each year an additional 18,000 suffer
2:34:06 a paralyzing spinal cord injury.
2:34:11 So these are folks who have a lot of challenges
2:34:15 living a life in terms of accessibility.
2:34:16 In terms of doing the things
2:34:18 that many of us just take for granted day to day.
2:34:20 And one of the things,
2:34:23 one of the goals of this initial study
2:34:27 is to enable them to have sort of digital autonomy
2:34:29 where they by themselves can interact
2:34:31 with a digital device using just their mind,
2:34:33 something that you’re calling telepathy.
2:34:37 So digital telepathy where a quadriplegic
2:34:40 can communicate with a digital device
2:34:42 in all the ways that we’ve been talking about
2:34:46 control the mouse cursor,
2:34:48 enough to be able to do all kinds of stuff
2:34:51 including play games and tweet and all that kind of stuff.
2:34:54 And there’s a lot of people for whom life,
2:34:56 the basics of life are difficult
2:35:00 because of the things that have happened to them.
2:35:01 So.
2:35:04 – Yeah, I mean, movement is so fundamental
2:35:06 to our existence.
2:35:10 I mean, even speaking involves movement
2:35:12 of mouth, lip, larynx.
2:35:17 And without that, it’s extremely debilitating.
2:35:22 And there are many, many people that we can help.
2:35:26 And I mean, especially if you start to kind of look
2:35:30 at other forms of movement disorders
2:35:31 that are not just from spinal cord injury,
2:35:36 but from ALS, MS, or even stroke,
2:35:40 or just aging, right?
2:35:43 That leads you to lose some of that mobility,
2:35:44 that independence.
2:35:45 It’s extremely debilitating.
2:35:48 – And all of these are opportunities to help people,
2:35:50 to help alleviate their suffering,
2:35:52 to help improve the quality of life.
2:35:53 But each of the things you mentioned
2:35:55 is its own little puzzle
2:35:59 that needs to have increasing levels of capability
2:36:01 from a device like a Neuralink device.
2:36:04 And so the first one you’re focusing on is,
2:36:08 it’s just the beautiful word telepathy.
2:36:11 So being able to communicate using your mind wirelessly
2:36:13 with a digital device.
2:36:16 Can you just explain this exactly what we’re talking about?
2:36:18 – Yeah, I mean, it’s exactly that.
2:36:22 I mean, I think if you are able to control a cursor
2:36:26 and able to click and be able to get access
2:36:30 to computer or phone, I mean, the whole world
2:36:32 opens up to you.
2:36:35 And I mean, I guess the word telepathy,
2:36:39 if you kind of think about that as just definitionally
2:36:42 being able to transfer information from my brain
2:36:47 to your brain without using some of the physical faculties
2:36:50 that we have, like voices.
2:36:52 – But the interesting thing here is,
2:36:55 I think the thing that’s not obviously clear
2:36:57 is how exactly it works.
2:36:59 So in order to move a cursor,
2:37:04 there’s at least a couple of ways of doing that.
2:37:09 So one is you imagine yourself maybe moving a mouse
2:37:13 with your hand, or you can then,
2:37:15 which Nolan talked about,
2:37:18 like imagine moving the cursor with your mind.
2:37:23 But it’s like, there is a cognitive step here
2:37:26 that’s fascinating because you have to use the brain
2:37:28 and you have to learn how to use the brain.
2:37:30 And you kind of have to figure it out dynamically,
2:37:35 like because you reward yourself if it works.
2:37:37 So you’re like, I mean, there’s a step that,
2:37:39 this is just a fascinating step
2:37:41 ’cause you have to get the brain to start firing
2:37:43 in the right way.
2:37:48 And you do that by imagining like fake it till you make it.
2:37:52 And all of a sudden it creates the right kind of signal
2:37:57 that if decoded correctly can create the kind of effect.
2:37:58 And then there’s like noise around that,
2:37:59 so you have to figure all of that out.
2:38:01 But on the human side,
2:38:04 imagine the cursor moving is what you have to do.
2:38:06 – Yeah, he says he's using the Force.
2:38:10 I mean, isn’t that just like fascinating to you
2:38:11 that it works?
2:38:15 Like to me it’s like, holy shit, that actually works.
2:38:18 Like you could move a cursor with your mind.
2:38:22 – You know, as much as you’re learning to use that thing,
2:38:24 that thing’s also learning about you,
2:38:27 like our model is constantly updating the weights
2:38:31 to say, oh, if someone is thinking about this,
2:38:36 these sophisticated forms of spiking patterns
2:38:39 actually mean to do this, right?
2:38:41 – So the machine is learning about the human
2:38:42 and the human is learning about the machine.
2:38:45 So there is adaptability to the signal processing,
2:38:47 the decoding step.
2:38:51 And then there’s the adaptation of Nolan, the human being.
2:38:56 Like the same way if you give me a new mouse and I move it,
2:38:58 I learn very quickly about its sensitivity,
2:39:00 so I’ll learn to move it slower.
2:39:05 And then there’s other kinds of signal drift
2:39:07 and all that kind of stuff they have to adapt to.
2:39:09 So both are adapting to each other.
2:39:10 – Correct.
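On the machine side of that two-way adaptation, one minimal pattern (Python) is a linear decoder whose weights are nudged online whenever an error signal is available, for instance when the target the user is trying to acquire is known. This is a generic online-learning sketch, not Neuralink's model.

```python
import numpy as np

class OnlineLinearDecoder:
    def __init__(self, n_inputs, learning_rate=1e-4):
        self.W = np.zeros((2, n_inputs))  # maps neural features -> (vx, vy)
        self.lr = learning_rate

    def predict(self, x):
        return self.W @ x

    def update(self, x, intended_v):
        """One gradient step on squared error toward the inferred intent."""
        err = intended_v - self.predict(x)
        self.W += self.lr * np.outer(err, x)
```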
2:39:14 – That’s a fascinating like software challenge
2:39:16 on both sides, the software on both,
2:39:18 on the human software and the-
2:39:19 – The organic and the inorganic.
2:39:21 – The organic and the inorganic.
2:39:23 Anyway, so I had to rudely interrupt.
2:39:27 So there’s the selection that Nolan has passed
2:39:31 with flying colors, so everything,
2:39:35 including that it’s a BCI friendly home, all of that.
2:39:38 So what is the process of the surgery, the implantation,
2:39:42 and the first moment when he gets to use the system?
2:39:46 – The end-to-end, we say patient-in to patient-out,
2:39:49 is anywhere between two to four hours.
2:39:51 In the particular case of Nolan, it was about three and a half
2:39:54 hours, and there are many steps leading
2:39:57 to the actual robot insertion, right?
2:39:59 So there’s anesthesia induction,
2:40:03 and we do intra-op CT imaging to make sure
2:40:06 that we’re drilling the hole in the right location.
2:40:09 And this is also pre-planned beforehand.
2:40:13 Someone like Nolan would go through fMRI
2:40:17 and then they can think about wiggling their hand.
2:40:19 You know, obviously due to their injury,
2:40:24 it’s not gonna actually lead to any sort of intended output,
2:40:27 but it’s the same part of the brain that actually lights up
2:40:29 when you’re imagining moving your finger
2:40:31 to actually moving your finger.
2:40:34 And that’s one of the ways in which we can actually
2:40:37 know where to place our threads,
2:40:39 ’cause we wanna go into what’s called the hand knob area
2:40:43 in the motor cortex, and as much as possible,
2:40:46 densely put our electrode threads.
2:40:52 So yeah, we do intra-op CT imaging to make sure
2:40:55 and double check the location of the craniectomy.
2:40:59 And surgeon comes in, does their thing
2:41:04 in terms of like skin incision, craniectomy,
2:41:06 so drilling of the skull, and then there’s many different
2:41:07 layers of the brain.
2:41:09 There’s what’s called the dura,
2:41:12 which is a very, very thick layer that surrounds the brain.
2:41:16 That gets actually respected in a process called the rectomy.
2:41:19 And that then exposed the pia in the brain
2:41:20 that you wanna insert.
2:41:23 And by the time it’s been around anywhere between
2:41:24 one to one and a half hours,
2:41:27 robot comes in, does its thing, placement of the targets,
2:41:29 inserting of the thread.
2:41:31 That takes anywhere between 20 to 40 minutes;
2:41:32 in the particular case of Nolan,
2:41:35 it was just around 30 minutes.
2:41:38 And then after that, the surgeon comes in,
2:41:40 there’s a couple other steps of like actually inserting
2:41:43 the dural substitute layer to protect the thread
2:41:45 as well as the brain.
2:41:50 And then screw in the implant, and then skin flap,
2:41:53 and then suture, and then you’re out.
2:42:00 – So when Nolan woke up, what was that like?
2:42:01 What was the recovery like?
2:42:04 And what was the first time he was able to use it?
2:42:07 – So actually, immediately after the surgery,
2:42:10 you know, like an hour after the surgery,
2:42:14 as he was waking up, we did turn on the device,
2:42:17 made sure that we were recording neural signals,
2:42:20 and we actually did have a couple signals
2:42:23 that we noticed that he can actually modulate.
2:42:26 And what I mean by modulate is that he can think about
2:42:30 clenching his fist, and you could see the spike disappear
2:42:31 and appear.
2:42:33 (laughing)
2:42:34 – That’s awesome.
2:42:36 – And that was immediate, right?
2:42:39 Immediate after in the recovery room.
2:42:40 – Oh, how cool is that?
2:42:43 Yeah.
2:42:44 – That’s a human being.
2:42:46 I mean, what did that feel like for you?
2:42:49 This device in a human being,
2:42:52 a first step of a gigantic journey.
2:42:54 I mean, it’s a historic moment.
2:42:59 Even just that spike, just to be able to modulate that.
2:43:01 – You know, obviously there have been other,
2:43:03 you know, as you mentioned, pioneers
2:43:07 that have participated in these groundbreaking BCI,
2:43:13 you know, investigational early feasibility studies.
2:43:16 So we’re obviously standing in the shoulders
2:43:16 of the giants here.
2:43:18 You know, we’re not the first ones
2:43:21 to actually put electrodes in the human brain.
2:43:24 But I mean, just leading up to the surgery,
2:43:27 there was, I mean, I definitely could not sleep.
2:43:29 There’s just, it’s the first time
2:43:32 that you’re working in a completely new environment.
2:43:35 We had a lot of confidence
2:43:40 based on our bench-top testing or pre-clinical R&D studies
2:43:44 that the mechanism, the threads, the insertion,
2:43:46 all that stuff is very safe.
2:43:51 And that it’s obviously ready for doing this in a human.
2:43:55 But there’s still a lot of unknown, unknown about,
2:43:59 can the needle actually insert?
2:44:03 I mean, we brought something like 40 needles
2:44:04 just in case they break.
2:44:05 And we ended up using only one.
2:44:08 But I mean, that was a level of just complete unknown, right?
2:44:10 It’s just a very, very different environment.
2:44:14 And I mean, that’s why we do clinical trial
2:44:16 in the first place, to be able to test these things out.
2:44:21 So extreme nervousness and just many, many sleepless nights
2:44:24 leading up to the surgery
2:44:26 and definitely the day before the surgery.
2:44:27 And it was an early morning surgery.
2:44:29 Like we started at seven in the morning.
2:44:33 And by the time it was around 10:30,
2:44:35 everything was done.
2:44:40 But I mean, first time seeing that, well,
2:44:42 number one, just huge relief
2:44:46 that this thing is doing what it’s supposed to do.
2:44:51 And two, I mean, just immense amount of gratitude
2:44:53 for Nolan and his family.
2:44:55 And then many others that have applied
2:44:58 and that we've spoken to and will speak to
2:45:02 are true pioneers in every way.
2:45:05 And I sort of call them the neural astronauts,
2:45:10 or neuronauts, these amazing, just like in the sixties, right?
2:45:13 Like these amazing just pioneers, right?
2:45:18 Exploring the unknown outward; in this case, it's inward.
2:45:22 But an incredible amount of gratitude for them
2:45:27 to just participate and play a part.
2:45:32 And it’s a journey that we’re embarking on together.
2:45:36 But also, like I think it was just,
2:45:38 that was a very, very important milestone,
2:45:40 but our work was just starting.
2:45:44 So a lot of just kind of anticipation for,
2:45:46 okay, what needs to happen next?
2:45:47 What is the sequence of events
2:45:50 that needs to happen for us to make it worthwhile
2:45:54 for both Nolan as well as us?
2:45:55 – Just to linger on that,
2:45:57 just a huge congratulations to you
2:45:59 and the team for that milestone.
2:46:03 I know there’s a lot of work left,
2:46:07 but that is, that’s really exciting to see.
2:46:10 There’s, that’s a source of hope.
2:46:13 It’s this first big step,
2:46:17 opportunity to help hundreds of thousands of people
2:46:22 and then maybe expand the realm of the possible
2:46:24 for the human mind for millions of people in the future.
2:46:26 So it’s really exciting.
2:46:30 Like the opportunities are all ahead of us
2:46:32 and to do that safely and to do that effectively
2:46:35 was really fun to see.
2:46:37 As an engineer, just watching other engineers
2:46:39 come together and do an epic thing.
2:46:40 That was awesome.
2:46:40 Huge congrats.
2:46:41 – Thank you, thank you.
2:46:43 We could not have done it without the team.
2:46:48 And yeah, I mean, that’s the other thing that I told the team
2:46:51 as well of just this immense sense of optimism
2:46:52 for the future.
2:46:55 I mean, it was, it’s a very important moment
2:46:59 for the company, you know, needless to say,
2:47:02 as well as, you know, hopefully for many others
2:47:04 out there that we can help.
2:47:05 – So speaking of challenges,
2:47:08 Neuralink published a blog post describing
2:47:10 that some of the threads retracted.
2:47:13 And so the performance as measured
2:47:16 by bits per second dropped at first,
2:47:18 but then eventually it was regained.
2:47:20 And that the whole story of how it was regained
2:47:21 is super interesting.
2:47:23 That’s definitely something I’ll talk to,
2:47:26 to bliss and to know and about.
2:47:30 But in general, can you speak to this whole experience?
2:47:33 How was the performance regained?
2:47:38 And just the technical aspects of the threads
2:47:40 being retracted and moving.
2:47:43 – The main takeaway is that in the end,
2:47:44 the performance has come back,
2:47:47 and it’s actually gotten better than it was before.
2:47:52 He’s actually just beat the world record yet again last week
2:47:54 to 8.5 BPS.
2:47:57 So I mean, he’s just cranking and he’s just improving.
2:48:00 – The previous one that he set was eight, correct?
2:48:01 And now he's at 8.5.
2:48:05 – Yeah, the previous world record in human was 4.6.
2:48:07 So it’s almost double.
2:48:09 And his goal is to try to get to 10,
2:48:14 which is roughly around kind of the median Neuralinker
2:48:17 using a mouse with their hand.
2:48:19 So it’s getting there.
2:48:22 – So yeah, so the performance was regained.
2:48:23 – Yeah, better than before.
2:48:27 So that’s, you know, a story on its own
2:48:31 of what took the BCI team to recover that performance.
2:48:34 It was actually mostly on kind of the signal processing.
2:48:36 And so, you know, as I mentioned,
2:48:39 we were kind of looking at these spike outputs
2:48:43 from our electrodes.
2:48:46 And what happened is that kind of four weeks
2:48:49 after the surgery, we noticed that the threads
2:48:51 had slowly come out of the brain.
2:48:54 And the way in which we noticed this at first, obviously,
2:48:57 is that, well, I think Nolan was the first to notice
2:48:58 that his performance was degrading.
2:49:02 And I think at the time,
2:49:05 we were also trying to do a bunch of different experimentation,
2:49:10 you know, different algorithms, different sort of UI, UX.
2:49:12 So it was expected that there will be variability
2:49:14 in the performance,
2:49:17 but we did see kind of a steady decline.
2:49:21 And then also, the way in which we measure
2:49:22 the health of the electrodes,
2:49:23 or whether they’re in the brain or not,
2:49:27 is by measuring impedance of the electrodes.
2:49:29 So we look at kind of the interfacial,
2:49:34 kind of the Randles circuit, as they say, you know,
2:49:37 the capacitance and the resistance
2:49:39 between the electrode surface and the medium.
2:49:42 And if that changes in some dramatic ways,
2:49:43 we have some indication.
2:49:45 Or if you’re not seeing spikes on those channels,
2:49:48 you have some indications that something’s happening there.
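For reference, the simplified Randles picture of the electrode-tissue interface is a solution (spreading) resistance in series with a double-layer capacitance in parallel with a charge-transfer resistance, giving, if the Warburg diffusion element is ignored:

```latex
Z(\omega) \;=\; R_s \;+\; \frac{R_{ct}}{1 + j\omega\, R_{ct}\, C_{dl}}
```

A thread moving relative to tissue changes $R_s$ and $C_{dl}$, which is why a dramatic shift in measured impedance, or spikes vanishing on a channel, flags that something has moved.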
2:49:50 And what we noticed is that looking at those impedance plot
2:49:52 and spike rate plots,
2:49:55 and also because we have those electrodes
2:49:57 recording along the depth,
2:49:58 you’re seeing some sort of movement
2:50:00 that indicated that the threads were being pulled out.
2:50:04 And that obviously will have an implication
2:50:05 on the model side,
2:50:07 because if the number of inputs
2:50:10 that are going into the model is changing,
2:50:12 because you have fewer of them,
2:50:16 that model needs to get updated, right?
2:50:20 And, but there were still signals.
2:50:21 And as I mentioned, similar to how,
2:50:24 even when you place the electrodes on the surface
2:50:27 of the brain or farther away, like outside the skull,
2:50:30 you still see some useful signals.
2:50:33 What we started looking at is not just the spike occurrence
2:50:36 through this boss algorithm that I mentioned,
2:50:40 but we started looking at just the power
2:50:43 of the frequency band that is interesting
2:50:47 for Nolan to be able to modulate.
2:50:50 So once we kind of changed the algorithm
2:50:54 for the implant to not just give you the BOSS output,
2:50:57 but also this spike band power output,
2:51:00 that helped us sort of refine the model
2:51:02 with the new set of inputs.
2:51:04 And that was the thing that really ultimately
2:51:05 gave us the performance back.
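A sketch (Python) of a spike-band power feature of the sort described: band-pass the raw signal in the band where spike energy concentrates, then average its power per bin. The exact band edges used on the implant are an assumption here; a few hundred hertz to a few kilohertz is typical in the literature. A feature like this stays informative even when cleanly isolatable single units are lost, which is the point being made above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 20_000  # Hz, sampling rate quoted earlier in the conversation

def spike_band_power(x, lo_hz=500.0, hi_hz=3000.0):
    """Mean power of x inside the spike band (band edges are assumptions)."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=FS, output="sos")
    y = sosfiltfilt(sos, x)
    return float(np.mean(y ** 2))
```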
2:51:12 And obviously the thing that we want
2:51:16 ultimately, and the thing that we are working towards,
2:51:18 is figuring out ways in which we can keep those threads
2:51:22 intact for as long as possible
2:51:25 so that we have many more channels going into the model.
2:51:27 That’s by far the number one priority
2:51:29 that the team is currently embarking on
2:51:32 to understand how to prevent that from happening.
2:51:35 The thing that I will say also is that,
2:51:39 as I mentioned, this is the first time ever
2:51:41 that we’re putting these threads in a human brain.
2:51:44 And human brain just for size reference
2:51:48 is 10 times that of the monkey brain or the sheep brain.
2:51:52 And it’s just a very, very different environment.
2:51:53 It moves a lot more.
2:51:56 It actually moved a lot more than we expected
2:51:59 when we did Nolan’s surgery.
2:52:03 And it’s just a very, very different environment
2:52:04 than what we’re used to.
2:52:06 And this is why we do clinical trials, right?
2:52:10 We wanna uncover some of these issues
2:52:14 and failure modes earlier than later.
2:52:16 So in many ways, it’s provided us
2:52:19 with this enormous amount of data
2:52:24 and information to be able to solve this.
2:52:26 And this is something that Neuralink is extremely good at.
2:52:30 Once we have a set of clear objectives and an engineering problem,
2:52:32 we have enormous amount of talents
2:52:35 across many, many disciplines to be able to come together
2:52:38 and fix the problem very, very quickly.
2:52:41 But it sounds like one of the fascinating challenges here
2:52:44 is for the system and the decoding side
2:52:46 to be adaptable across different timescales.
2:52:50 So whether it’s movement of threads
2:52:53 or different aspects of signal drift,
2:52:54 sort of on the software side of the human brain,
2:52:59 something changing, like Nolan talks about cursor drift
2:53:02 that could be corrected.
2:53:04 And there’s a whole UX challenge to how to do that.
2:53:09 So it sounds like adaptability is like a fundamental property
2:53:11 that has to be engineered in.
2:53:12 It is.
2:53:14 And I mean, I think, I mean,
2:53:17 as a company, we’re extremely vertically integrated.
2:53:22 You know, we make these thin film arrays in our own micro fab.
2:53:24 Yeah, it’s, like you said, built in-house.
2:53:26 This whole paragraph here from this blog post
2:53:28 is pretty gangster.
2:53:30 Building the technologies described above
2:53:32 has been no small feat.
2:53:34 And there’s a bunch of links here
2:53:36 that I recommend people click on.
2:53:39 We constructed in-house micro fabrication capabilities
2:53:42 to rapidly produce various iterations of thin film arrays
2:53:44 that constitute our electrode threads.
2:53:49 We created a custom femtosecond laser mill
2:53:52 to manufacture components with micro level precision.
2:53:53 I think there’s a tweet associated with this.
2:53:55 That’s the whole thing that we can get into.
2:53:57 Yeah, this, okay.
2:53:59 What are we looking at here?
2:54:01 This thing.
2:54:03 So in less than one minute,
2:54:06 our custom-made femtosecond laser mill
2:54:10 cuts this geometry in the tips of our needles.
2:54:15 So we’re looking at this weirdly shaped needle.
2:54:17 The tip is only 10 to 12 microns
2:54:19 in width, only slightly larger
2:54:21 than the diameter of a red blood cell.
2:54:23 The small size allows threads to be inserted
2:54:25 with minimal damage to the cortex.
2:54:28 Okay, so what’s interesting about this geometry?
2:54:30 So we’re looking at this just geometry of a needle.
2:54:33 This is the needle that’s engaging
2:54:35 with the loops in the thread.
2:54:40 So they’re the ones that, you know, thread the loop
2:54:43 and then peel it from the silicon backing.
2:54:47 And then this is the thing that gets inserted into the tissue.
2:54:50 And then this pulls out leaving the thread.
2:54:54 And this kind of notch, or the shark tooth
2:54:57 as we used to call it, is the thing
2:55:00 that actually is grasping the loop.
2:55:03 And then it’s designed in such a way
2:55:05 such that when you pull out, it leaves the loop.
2:55:07 And the robot is controlling this needle.
2:55:08 Correct.
2:55:10 So this is actually housed in a cannula.
2:55:13 And basically the robot has a lot of the optics
2:55:15 that look for where the loop is.
2:55:18 There’s actually a 405 nanometer light
2:55:22 that actually causes the polyimide to fluoresce
2:55:25 so that you can locate the location of the loop.
2:55:27 So the loop lights up.
2:55:28 Yeah, yeah, they do.
2:55:31 It’s a micron precision process.
2:55:33 What’s interesting about the robot that it takes to do that,
2:55:35 that’s pretty crazy.
2:55:36 That’s pretty crazy that a robot is
2:55:38 able to get this kind of precision.
2:55:42 Yeah, our robot is quite heavy, our current version of it.
2:55:47 There is, I mean, it’s like a giant granite slab
2:55:49 that weighs about a ton.
2:55:52 Because it needs to be insensitive to vibration,
2:55:53 environmental vibration.
2:55:56 And then as the head is moving, at the speed that it’s moving,
2:55:59 there’s a lot of kind of motion control
2:56:04 to make sure that you can achieve that level of precision.
2:56:07 A lot of optics that kind of zoom in on that.
2:56:09 We’re working on next generation of the robot
2:56:12 that is lighter, easier to transport.
2:56:14 I mean, it is a feat to move the robot.
2:56:17 And it’s far superior to a human surgeon at this time
2:56:19 for this particular task.
2:56:19 Absolutely.
2:56:21 I mean, let alone trying to actually
2:56:24 thread a loop in a sewing kit, I mean,
2:56:28 this is– we’re talking fractions of a human hair.
2:56:30 These things are– it’s not visible.
2:56:33 So continuing the paragraph, we developed novel hardware
2:56:36 and software testing systems such as our accelerated lifetime
2:56:38 testing racks and simulated surgery environment,
2:57:40 which is pretty cool, to stress-test and validate
2:56:42 the robustness of our technologies.
2:56:45 We performed many rehearsals of our surgeries
2:56:50 to refine our procedures and make them second nature.
2:56:53 This is pretty cool.
2:56:55 We practice surgeries on proxies with all the hardware
2:56:59 and instruments needed in our mock OR in the engineering space.
2:57:00 This helps us rapidly test and measure.
2:57:02 So there’s like proxies.
2:57:04 Yeah, this proxy is super cool, actually.
2:57:09 So there’s a 3D printed skull from the imaging
2:57:15 that was taken at Barrow, as well as this hydrogel mix,
2:57:17 sort of synthetic polymer thing that actually
2:57:21 mimics the mechanical properties of the brain.
2:57:25 It also has the vasculature of the person.
2:57:30 So basically, what we’re talking about here–
2:57:33 and there’s a lot of work that has gone into making this
2:57:38 proxy– is that it’s about finding the right concentration
2:57:40 of these different synthetic polymers
2:57:42 to get the right consistency for the needle
2:57:45 dynamics as they’re being inserted.
2:57:51 But we practiced this surgery with the person’s–
2:57:56 Nolan’s, basically– physiology and brain many, many times
2:57:57 prior to actually doing the surgery.
2:57:59 So to every step, every step.
2:58:02 Every step, yeah, like, where does someone stand?
2:58:04 Like, I mean, what you’re looking at is the picture–
2:58:07 this is in our office–
2:58:11 of this kind of corner of the robot engineering space
2:58:14 that we have created, this mock OR space
2:58:17 that looks exactly like what they would experience,
2:58:19 all the staff would experience during their actual surgery.
2:58:22 So I mean, it’s just kind of like any dress rehearsal
2:58:24 where you know exactly where you’re going to stand at what
2:58:27 point, and you just practice that over and over and over
2:58:30 again with an exact anatomy of someone
2:58:32 that you’re going to do surgery on.
2:58:36 And it got to a point where a lot of our engineers,
2:58:38 when we created a craniectomy, they’re like,
2:58:40 oh, that looks very familiar.
2:58:41 We’ve seen that before.
2:58:42 Yeah.
2:58:45 And there’s wisdom you can gain through doing the same thing
2:58:46 over and over and over.
2:58:50 It’s like a Jiro Dreams of Sushi kind of thing.
2:58:53 Because then it’s like Olympic athletes
2:58:56 visualize the Olympics.
2:59:00 And then once you actually show up, it feels easy.
2:59:02 It feels like any other day.
2:59:05 It feels almost boring winning the gold medal.
2:59:07 Because you visualize this so many times.
2:59:09 You’ve practiced this so many times
2:59:11 that nothing about it is new.
2:59:12 It’s boring.
2:59:12 You win the gold medal.
2:59:13 It’s boring.
2:59:18 And the experience they talk about is mostly just relief.
2:59:21 Probably that they don’t have to visualize it anymore.
2:59:24 Yeah, the power of the mind to visualize and where–
2:59:26 I mean, there’s a whole field that studies
2:59:30 where muscle memory lies, in the cerebellum.
2:59:32 Yeah, it’s incredible.
2:59:36 I think it’s a good place to actually ask
2:59:38 sort of the big question that people might have is,
2:59:40 how do we know every aspect of this
2:59:42 that you describe is safe?
2:59:44 At the end of the day, the gold standard
2:59:47 is to look at the tissue.
2:59:49 What sort of trauma did you cause the tissue?
2:59:52 And does that correlate to whatever behavioral anomalies
2:59:54 that you may have seen?
2:59:57 And that’s the language to which we
3:00:00 can communicate about the safety of inserting something
3:00:04 into the brain and what type of trauma that you can cause.
3:00:11 So we actually have an entire department of pathology
3:00:15 that looks at these tissue slices.
3:00:17 There are many steps that are involved in doing this
3:00:22 once you have studies that are launched
3:00:25 with particular endpoints in mind.
3:00:27 At some point, you have to euthanize the animal,
3:00:29 and then you go through a necropsy
3:00:32 to collect the brain tissue samples.
3:00:36 You fix them in formalin, and you gross them.
3:00:38 You section them, and you look at individual slices
3:00:41 just to see what kind of reaction or lack thereof exists.
3:00:45 So that’s kind of the language in which the FDA speaks,
3:00:50 as well as for us, to evaluate the safety of the insertion
3:00:53 mechanism as well as the threads at various different time
3:00:55 points, both acute and chronic.
3:01:02 So anywhere between 0 to 3 months, to beyond 3 months.
3:01:06 So those are the details of an extremely high standard
3:01:08 of safety that has to be reached.
3:01:09 Correct.
3:01:12 FDA supervises this, but there’s in general just
3:01:13 a very high standard.
3:01:16 And every aspect of this, including the surgery,
3:01:20 I think Matthew McDougal has mentioned it.
3:01:26 The standard is, let’s say, how to put it politely,
3:01:28 higher than maybe some other operations
3:01:30 that we take for granted.
3:01:33 So the standard for all the surgical stuff here
3:01:34 is extremely high.
3:01:34 Very high.
3:01:38 I mean, it’s a highly, highly regulated environment
3:01:44 with the governing agencies that scrutinize every medical device
3:01:46 that gets marketed.
3:01:47 And I think it’s a good thing.
3:01:50 It’s good to have those high standards.
3:01:53 And we try to hold extremely high standards
3:01:56 to kind of understand what sort of damage,
3:01:59 if any, these innovative, emerging technologies
3:02:01 that we’re building might cause.
3:02:05 And so far, we have been extremely
3:02:10 impressed by the lack of immune response from these threads.
3:02:15 Speaking of which, you talk to me with excitement
3:02:18 about the histology and some of the images
3:02:19 that you’re able to share.
3:02:22 Can you explain to me what we’re looking at?
3:02:27 Yeah, so what you’re looking at is a stained tissue image.
3:02:31 So this is a sectioned tissue slice
3:02:33 from an animal that was implanted for seven months,
3:02:35 so kind of a chronic time point.
3:02:38 And you’re seeing all these different colors.
3:02:42 And each color indicates specific types of cell types.
3:02:46 So purple and pink are astrocytes and microglia,
3:02:49 respectively; they’re types of glial cells.
3:02:52 And the other thing that people may not be aware of
3:02:56 is your brain is not just made up of a soup of neurons and axons.
3:03:01 There are other cells, like glial cells,
3:03:06 that actually kind of are the glue, and also react
3:03:09 if there is any trauma or damage to the tissue.
3:03:10 But the brown are the neurons here.
3:03:11 The brown are the neurons.
3:03:12 So hotter neurons.
3:03:13 Yeah.
3:03:16 So what you’re seeing is in this kind of macro image,
3:03:20 you’re seeing these like circle highlighted in white,
3:03:21 the insertion sites.
3:03:26 And when you zoom into one of those, you see the threads.
3:03:27 And then in this particular case,
3:03:31 I think we’re seeing about the 16 wires that
3:03:33 are going into the page.
3:03:35 And the incredible thing here is the fact
3:03:38 that you have the neurons that are these brown structures
3:03:41 or brown circular or elliptical thing that are actually
3:03:44 touching and abutting the threads.
3:03:46 So what this is saying is that there’s basically
3:03:49 zero trauma that’s caused during this insertion.
3:03:53 And with these neural interfaces, these microelectrode arrays
3:03:56 that you insert, that is one of the most common modes of failure.
3:04:00 So when you insert these electrodes, like the Utah array,
3:04:03 it causes neuronal death around the site
3:04:06 because you’re inserting a foreign object.
3:04:09 And that kind of elicits this immune response
3:04:11 through microglia and astrocytes.
3:04:14 They form this protective layer around it.
3:04:16 Oh, not only are you killing the neuron cells,
3:04:18 but you’re also creating this protective layer
3:04:21 that then basically prevents you from recording neural signals
3:04:23 because you’re getting farther and farther away
3:04:25 from the neurons that you’re trying to record.
3:04:27 And that is the biggest mode of failure.
3:04:30 And in this particular example, in that inset,
3:04:32 the scale bar is about 50 microns.
3:04:36 The neurons just seem to be attracted to it.
3:04:37 So there’s certainly no trauma.
3:04:40 That’s such a beautiful image, by the way.
3:04:43 So the brown of the neurons, for some reason,
3:04:44 I can’t look away.
3:04:45 It’s really cool.
3:04:46 And the way that these things–
3:04:48 I mean, your tissues generally don’t
3:04:50 have these beautiful colors.
3:04:55 This is multiplex stain that uses these different proteins
3:04:58 that are staining these at different colors.
3:05:01 We use a very standard set of staining techniques
3:05:06 with H&E, Iba1, NeuN, and GFAP.
3:05:08 So if you go to the next image, this
3:05:10 is also kind of illustrates the second point
3:05:11 because you can make an argument.
3:05:14 And initially, when we saw the previous image,
3:05:16 we said, oh, are the threads just floating?
3:05:17 Like, what is happening here?
3:05:19 Are we actually looking at the right thing?
3:05:22 So what we did is we did another stain–
3:05:23 and this is all done in-house–
3:05:27 of this Masson’s trichrome stain, which is in blue,
3:05:29 that shows these collagen layers.
3:05:31 So the blue basically–
3:05:35 you don’t want the blue around the implant threads
3:05:37 because that means that there is some sort of scarring
3:05:38 that’s happened.
3:05:41 And what you’re seeing, if you look at individual threads,
3:05:44 is that you don’t see any of the blue, which
3:05:48 means that there has been absolutely no, or very, very
3:05:51 minimal, to a point where it’s not detectable, amount of trauma
3:05:52 around these inserted threads.
3:05:55 So that presumably is one of the big benefits
3:05:57 of having this kind of flexible thread.
3:05:59 Yeah, so we think this is primarily
3:06:03 due to the size, as well as the flexibility of the threads.
3:06:07 Also, the fact that R1 is avoiding vasculature.
3:06:11 So we’re not disrupting or we’re not
3:06:14 causing damage to the vessels and not breaking
3:06:19 any of the blood-brain barrier has basically
3:06:22 caused the immune response to be muted.
3:06:24 But this is also a nice illustration
3:06:26 of the size of things.
3:06:27 So this is the tip of the thread.
3:06:30 Yeah, those are neurons.
3:06:31 And they’re neurons.
3:06:33 And this is the thread listening.
3:06:36 And the electrodes are positioned how?
3:06:39 Yeah, so what you’re looking at is not electrode themselves.
3:06:41 Those are the conductive wires.
3:06:46 So each of those should probably be two microns in width.
3:06:48 So what we’re looking at is we’re
3:06:49 looking at the coronal slice.
3:06:52 So we’re looking at some slice of the tissue.
3:06:54 So as you go deeper, you will obviously
3:06:59 have less and less of the tapering of the thread.
3:07:02 But yeah, the point basically being
3:07:05 that there’s just kind of cells around the insertion site, which
3:07:08 is just an incredible thing to see.
3:07:10 I’ve just never seen anything like this.
3:07:14 How easy and safe is it to remove the implant?
3:07:18 Yeah, so it depends on when.
3:07:23 In the first three months or so after the surgery,
3:07:26 there’s a lot of tissue remodeling that’s happening.
3:07:28 Similar to when you’ve got a cut,
3:07:34 you obviously have, over the first couple of weeks,
3:07:38 depending on the size of the wound, scar tissue forming.
3:07:40 These kind of contract, and then in the end,
3:07:42 they turn into a scab and you can peel it off.
3:07:44 The same thing happens in the brain.
3:07:47 And it’s a very dynamic environment.
3:07:50 And before the scar tissue, or the neomembrane,
3:07:54 the new membrane, forms, it’s quite easy to just pull
3:07:55 them out.
3:07:59 And there is minimal trauma that’s caused during that.
3:08:03 Once the scar tissue forms, and with Nolan as well,
3:08:05 we believe that that’s the thing that’s currently
3:08:06 anchoring the threads.
3:08:10 So we haven’t seen any more movements since then.
3:08:13 So they’re quite stable.
3:08:17 It gets harder to actually completely extract the threads.
3:08:21 So our current method for removing the device
3:08:26 is cutting the thread, leaving the tissue intact,
3:08:29 and then unscrewing and taking the implant out.
3:08:34 And that hole is now going to be plugged with either another
3:08:42 Neuralink, or just with kind of a plastic-based cap.
3:08:46 Is it OK to leave the threads in there forever?
3:08:46 Yeah, we think so.
3:08:50 We’ve done studies where we left them there.
3:08:52 And one of the biggest concerns that we had
3:08:53 is like, do they migrate?
3:08:56 And do they get to a point where they should not be?
3:08:56 We haven’t seen that.
3:08:58 Again, once the scar tissue forms,
3:09:00 they get anchored in place.
3:09:05 And I should also say that when we say upgrades,
3:09:08 we’re not just talking in theory here.
3:09:11 We’ve actually upgraded many, many times.
3:09:15 Most of our monkeys, or non-human primates,
3:09:17 NHP, have been upgraded.
3:09:20 Pager, who you saw playing “Mind Pong,”
3:09:23 has the latest version of the device since two years ago
3:09:27 and is seemingly very happy and healthy, in fact.
3:09:33 So what’s the design for the future, the upgrade procedure?
3:09:40 So maybe for Nolan, what would the upgrade look like?
3:09:42 Is it essentially what you’re mentioning?
3:09:47 Is there a way to upgrade the device internally,
3:09:50 where you take it apart and keep the capsule
3:09:51 and upgrade the internals?
3:09:53 Yeah, so there are a couple of different things here.
3:09:55 So for Nolan, if we were to upgrade,
3:09:58 what we would have to do is either cut the threads
3:10:04 or extract the threads, depending on the situation there,
3:10:07 in terms of how they’re anchored or scarred in.
3:10:11 If you were to remove them without damaging the tissue,
3:10:14 you have an intact brain, so you can reinsert different threads
3:10:18 with the updated implant package.
3:10:23 There are a couple of different ways
3:10:25 that we’re thinking about the future of what
3:10:27 the upgradeable system looks like.
3:10:30 One is, at the moment, we currently
3:10:35 remove the dura, this thick layer that protects the brain.
3:10:38 But that actually is the thing that actually proliferates
3:10:39 the scar tissue formation.
3:10:42 So typically, the general good rule of thumb
3:10:45 is you want to leave the nature as is
3:10:46 and not disrupt it as much.
3:10:49 So we’re looking at ways to insert the threads
3:10:53 through the dura, which comes with a different set
3:10:57 of challenges, such as it’s a pretty thick layer.
3:10:58 So how do you actually penetrate that
3:11:00 without breaking the needle?
3:11:02 So we’re looking at different needle design for that,
3:11:05 as well as the loop engagement.
3:11:08 The other biggest challenge is that it’s quite opaque,
3:11:10 optically, with white light illumination.
3:11:14 So how do you still keep this biggest advantage
3:11:16 that we have, of avoiding vasculature?
3:11:17 How do you image through that?
3:11:19 How do you actually still mediate that?
3:11:20 So there are other imaging techniques
3:11:23 that we’re looking at to enable that.
3:11:26 But our hypothesis is that, and based
3:11:28 on some of the early evidence that we have,
3:11:30 doing through the dura insertion will
3:11:31 cause minimal scarring.
3:11:35 That causes them to be much easier to extract over time.
3:11:37 And the other thing that we’re also looking at,
3:11:41 this is going to be a fundamental change in the implant
3:11:44 architecture, is at the moment, it’s
3:11:48 a monolithic single implant that comes with a thread that’s
3:11:49 bonded together.
3:11:51 So you can’t actually separate the thing out,
3:11:55 but you can imagine having a two-part implant: a bottom part that
3:11:59 is the threads that are inserted, that has the chips
3:12:02 and maybe a radio and some power source.
3:12:04 And then you have another implant
3:12:06 that has more of the computational heavy load
3:12:08 and the bigger battery.
3:12:09 And then one can be under the dura,
3:12:13 one can be above the dura being the plug for the skull.
3:12:14 They can talk to each other, and the thing
3:12:17 that you want to upgrade is the computer, not the threads.
3:12:19 If you want to upgrade that, you just go in there,
3:12:22 remove the screws, and then put in the next version.
3:12:25 And it’s a very, very easy surgery, too.
3:12:29 Like you do a skin incision, slip this in, screw,
3:12:32 probably be able to do this in 10 minutes.
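A toy model of the two-part architecture being described, just to make the upgrade path concrete: the thread module stays under the dura while the compute module above it can be swapped in a short procedure. The class names, fields, and numbers below are invented for illustration and do not reflect an actual Neuralink design.

```python
from dataclasses import dataclass

@dataclass
class ThreadModule:
    """Under the dura: inserted threads, front-end chips, a radio, minimal power."""
    n_threads: int
    n_channels: int
    hardware_rev: str

@dataclass
class ComputeModule:
    """Above the dura, plugging the skull: the big battery and heavy compute."""
    battery_mah: int
    hardware_rev: str

    def upgrade(self, new_rev: str) -> None:
        # The point of the split: swap this module in a short skin-incision
        # procedure while the threads below stay untouched.
        self.hardware_rev = new_rev

threads = ThreadModule(n_threads=64, n_channels=1024, hardware_rev="A")
compute = ComputeModule(battery_mah=300, hardware_rev="A")
compute.upgrade("B")
print(threads.hardware_rev, compute.hardware_rev)  # threads unchanged: A B
```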
3:12:34 So that would allow you to reuse the threads sort of?
3:12:36 Correct.
3:12:37 So this leads to the natural question
3:12:41 of what is the pathway to scaling up
3:12:42 the number of threads?
3:12:44 Is that a priority?
3:12:48 What’s the technical challenge there?
3:12:49 Yeah, that is a priority.
3:12:51 So for next versions of the implant,
3:12:53 the key metrics that we’re looking
3:12:56 to improve are number of channels, just recording
3:12:59 from more and more neurons.
3:13:02 We have a pathway to actually go from currently 1,000
3:13:07 to hopefully 3,000, if not 6,000 by end of this year.
3:13:11 And then end of next year, we want to get to even more–
3:13:12 16,000.
3:13:13 Wow.
3:13:14 There’s a couple of limitations to that.
3:13:18 One is, obviously, being able to photolithographically print
3:13:21 those wires, which, as I mentioned, are two microns in width
3:13:25 and in spacing. Obviously, there are chips
3:13:27 that are much more advanced in those types of resolution,
3:13:30 and we have some of the tools that we have brought in-house
3:13:31 to be able to do that.
3:13:34 So traces will be narrower, just so that you
3:13:37 can have more of the wires coming into the chip.
3:13:44 Chips also cannot linearly consume more energy,
3:13:45 as you have more and more channels.
3:13:50 So there’s a lot of innovations in architecture,
3:13:51 as well as the circuit design topology,
3:13:54 to make them lower power.
3:13:57 You need to also think about, if you have all of these spikes,
3:13:59 how do you send that off to the end applications?
3:14:02 So you need to think about bandwidth limitation there,
3:14:05 and potentially innovations in signal processing.
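A back-of-the-envelope calculation makes the bandwidth point concrete: streaming raw broadband from many thousands of channels is orders of magnitude more data than sending detected spike events. Every number below is an assumed round figure, not the implant’s actual spec.

```python
# Assumed (not actual) specs for a 16,000-channel device:
channels = 16_000
sample_rate_hz = 20_000      # assumed broadband sampling rate per channel
bits_per_sample = 10         # assumed ADC resolution

raw_bps = channels * sample_rate_hz * bits_per_sample
print(f"raw broadband: {raw_bps / 1e9:.1f} Gbit/s")        # ~3.2 Gbit/s

# Versus transmitting only detected spike events:
spikes_per_channel_hz = 20   # assumed average firing rate
bits_per_event = 32          # assumed channel ID + timestamp encoding
event_bps = channels * spikes_per_channel_hz * bits_per_event
print(f"spike events only: {event_bps / 1e6:.1f} Mbit/s")  # ~10 Mbit/s
```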
3:14:07 Physically, one of the biggest challenges
3:14:11 is going to be the interface.
3:14:13 It’s always the interface that breaks.
3:14:17 Bonding the thin film array to the electronics,
3:14:20 it starts to become very, very highly dense interconnects.
3:14:22 So how do you connectorize that?
3:14:26 There’s a lot of innovations in the 3D integrations
3:14:30 in the recent years that we can take advantage of.
3:14:32 One of the biggest challenges that we do have
3:14:36 is forming this hermetic barrier.
3:14:37 This is an extremely harsh environment
3:14:39 that we’re in, the brain.
3:14:44 So how do you protect it from the brain
3:14:47 trying to kill your electronics, but also your electronics
3:14:50 leaking things that you don’t want into the brain?
3:14:51 And forming that hermetic barrier
3:14:54 is going to be a very, very big challenge that we,
3:14:57 I think, are actually well suited to tackle.
3:14:58 How do you test that?
3:15:00 Like, what’s the development environment?
3:15:02 Yeah, to simulate that kind of harshness.
3:15:05 Yeah, so this is where the accelerated life tester
3:15:08 essentially is a brain in a vat.
3:15:12 It literally is a vessel that is made up of–
3:15:15 and again, for all intents and purposes
3:15:17 for this particular type of test,
3:15:20 your brain is salt water.
3:15:26 And you can also put some other set of chemicals
3:15:30 like reactive oxygen species that get at these interfaces
3:15:35 and try to cause a reaction to pull it apart.
3:15:40 But you could also increase the rate at which these interfaces
3:15:42 are aging by just increasing temperature.
3:15:45 So every 10 degrees Celsius that you increase,
3:15:48 you’re basically accelerating time by 2x.
3:15:51 And there’s a limit as to how much temperature you want to increase,
3:15:54 because at some point there’s some other nonlinear dynamics
3:15:58 that causes other nasty gases to form
3:16:00 that just are not realistic in this environment.
3:16:04 So what we do is we increase the temperature in our ALT chamber
3:16:09 by 20 degrees Celsius, which increases the aging by 4x.
3:16:11 So essentially, one day in the ALT chamber
3:16:13 is four days of calendar time.
3:16:17 And we look at whether the implants still
3:16:20 are intact, including the threads.
3:16:21 And operation and all of that.
3:16:23 And operation and all of that.
3:16:26 Obviously, it’s not exactly the same environment as a brain,
3:16:31 because the brain has mechanical and other more biological factors
3:16:33 that attack it.
3:16:36 But it is a good testing environment
3:16:41 for at least the enclosure and the strength of the enclosure.
3:16:45 And we’ve had implants, the current version of the implant,
3:16:49 that has been in there for close to 2 and 1/2 years,
3:16:51 which is equivalent to a decade.
3:16:54 And they seem to be fine.
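The arithmetic here is a Q10-style rule of thumb: aging roughly doubles for every 10 °C of added temperature, so +20 °C gives 4x, and 2.5 chamber-years comes out to about a simulated decade. A minimal sketch of that calculation:

```python
def acceleration_factor(delta_t_celsius: float, q10: float = 2.0) -> float:
    """Aging speed-up from raising temperature, using the
    2x-per-10-degrees-Celsius rule of thumb quoted above."""
    return q10 ** (delta_t_celsius / 10.0)

print(acceleration_factor(20.0))        # 4.0 -> one chamber day ~ four calendar days
print(2.5 * acceleration_factor(20.0))  # ~10 years simulated from 2.5 years in ALT
```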
3:16:56 So it’s interesting that the brain–
3:17:02 basically, close approximation is warm salt water.
3:17:05 Hot salt water is a good testing environment.
3:17:11 By the way, I’m drinking element, which is basically salt water,
3:17:13 which is making me kind of–
3:17:15 it doesn’t have computational power the way the brain does,
3:17:19 but maybe in terms of all the characteristics,
3:17:20 it’s quite similar.
3:17:21 And I’m consuming it.
3:17:25 Yeah, you have to get it in the right pH, too.
3:17:27 And then consciousness will emerge.
3:17:27 Yeah.
3:17:28 No.
3:17:31 By the way, the other thing that also is interesting
3:17:35 about our enclosure is if you look at our implant,
3:17:40 it’s not your common-looking medical implant that usually
3:17:45 is encased in a titanium can that’s laser welded.
3:17:51 We use this polymer called PCTFE, polychlorotrifluoroethylene,
3:17:55 which is actually commonly used in blister packs.
3:17:58 So when you have a pill and you’re trying to pop the pill,
3:18:00 there’s kind of that plastic membrane.
3:18:01 That’s what this is.
3:18:05 No one’s actually ever used this except us.
3:18:07 And the reason we wanted to do this
3:18:09 is because it’s electromagnetically transparent.
3:18:13 So when we talked about the electromagnetic inductive
3:18:15 charging with titanium can, usually
3:18:17 if you want to do something like that,
3:18:19 you have to have a sapphire window,
3:18:22 and it’s a very, very tough process to scale.
3:18:24 So you’re doing a lot of iteration here in every aspect
3:18:27 of this– the materials, the software, the whole–
3:18:30 The whole shebang.
3:18:34 So OK, so you mentioned scaling.
3:18:37 Is it possible to have multiple neural-link devices
3:18:41 as one of the ways of scaling?
3:18:44 To have multiple neural-link devices implanted?
3:18:44 That’s the goal.
3:18:45 That’s the goal.
3:18:50 We’ve had– I mean, our monkeys have had two neural-links,
3:18:52 one in each hemisphere.
3:18:54 And then we’re also looking at potential
3:18:58 of having one in motor cortex, one in visual cortex,
3:19:01 and one in whatever other cortex.
3:19:04 So focusing on a particular function, one neural-link
3:19:05 device.
3:19:05 Correct.
3:19:07 I mean, I wonder if there’s some level of customization
3:19:09 that can be done on the compute side.
3:19:10 So for the motor cortex–
3:19:12 Absolutely.
3:19:12 That’s the goal.
3:19:16 And we talk about at neural-link building a generalized neural
3:19:19 interface to the brain.
3:19:22 And that also is strategically how
3:19:28 we’re approaching this with marketing and also with regulatory,
3:19:32 which is, hey, look, we have the robot,
3:19:34 and the robot can access any part of the cortex.
3:19:36 Right now, we’re focused on motor cortex
3:19:41 with current version of the N1 that’s specialized
3:19:43 for motor decoding tasks.
3:19:44 But also, at the end of the day, there
3:19:46 is kind of a general compute available there.
3:19:51 But typically, if you want to really get down
3:19:54 to hyper-optimizing for power and efficiency,
3:19:58 you do need to get to some specialized function.
3:20:02 But what we’re saying is, hey, you
3:20:06 are now used to this robotic insertion techniques, which
3:20:09 took many, many years of showing data and conversation
3:20:13 with the FDA, and also internally convincing ourselves
3:20:15 that this is safe.
3:20:19 And now, the difference is that if we
3:20:22 go to other parts of the brain, like visual cortex, which
3:20:24 we’re interested in as our second product,
3:20:26 obviously, it’s a completely different environment.
3:20:31 The cortex is laid out very, very differently.
3:20:33 It’s going to be more stimulation focus
3:20:36 rather than recording, just kind of creating visual percepts.
3:20:41 But in the end, we’re using the same thin film array technology.
3:20:43 We’re using the same robot insertion technology.
3:20:46 We’re using the same packaging technology.
3:20:48 Now, more of the conversation is focused
3:20:50 around what are the differences and what
3:20:52 are the implications of those differences in safety
3:20:53 and efficacy.
3:20:58 The way you said second product is both hilarious and awesome
3:20:59 to me.
3:21:06 That product being restoring sight for blind people.
3:21:12 So can you speak to stimulating the visual cortex?
3:21:16 I mean, the possibilities there are just incredible
3:21:21 to be able to give that gift back to people who don’t have sight
3:21:23 or even any aspect of that.
3:21:25 Can you just speak to the challenges of–
3:21:28 there’s several challenges here, one of which
3:21:32 is, like you said, from recording to stimulation.
3:21:35 Just any aspect of that that you’re both excited
3:21:39 and see the challenges of?
3:21:41 Yeah, I guess I’ll start by saying
3:21:45 that we actually have been capable of stimulating
3:21:51 through our thin film array as well as electronics for years.
3:21:54 We have actually demonstrated some of that capabilities
3:21:58 for reanimating the limb in the spinal cord.
3:22:01 Obviously, for the current EFS study,
3:22:03 we’ve hardware disabled that.
3:22:05 So that’s something that we wanted
3:22:09 to embark as a separate journey.
3:22:11 And obviously, there are many, many different ways
3:22:14 to write information into the brain.
3:22:17 The way in which we’re doing that is through passing
3:22:22 electrical current and causing that to really change
3:22:27 the local environment so that you can artificially
3:22:32 cause the neurons to depolarize in nearby areas.
3:22:39 For vision specifically, the way our visual system works,
3:22:40 it’s both well understood.
3:22:42 I mean, anything with kind of brain,
3:22:44 there are aspects of it that’s well understood,
3:22:46 but in the end, we don’t really know anything.
3:22:48 But the way visual system works is
3:22:51 that you have photon hitting your eye.
3:22:56 And in your eyes, there are these specialized cells
3:23:01 called photoreceptor cells that convert the photon energy
3:23:02 into electrical signals.
3:23:05 And then that then gets projected
3:23:09 to your back of your head, your visual cortex.
3:23:14 It goes through actually a thalamic system called the LGN,
3:23:15 the lateral geniculate nucleus, that then projects it out.
3:23:20 And then in the visual cortex, there’s visual area 1 or V1.
3:23:23 And then there’s a bunch of other higher-level processing
3:23:25 layers, like V2, V3.
3:23:28 And there are actually kind of interesting parallels.
3:23:32 And when you study the behaviors of these convolutional neural
3:23:36 networks, what the different layers of the network
3:23:39 is detecting– first, they’re detecting these edges.
3:23:42 And they’re then detecting some more natural curves.
3:23:45 And then they start to detect objects.
3:23:47 Kind of similar thing happens in the brain.
3:23:49 And a lot of that has been inspired.
3:23:51 And it’s been kind of exciting to see
3:23:53 some of the correlations there.
3:23:56 But things like from there, where
3:24:00 does cognition arise and where is color encoded?
3:24:03 There’s just not a lot of understanding,
3:24:05 fundamental understanding there.
3:24:11 So in terms of bringing sight back to those that are blind,
3:24:13 there are many different forms of blindness.
3:24:16 There’s actually 1 million people in the US
3:24:18 that are legally blind.
3:24:23 That means scoring below a certain level on a visual acuity test.
3:24:25 I think it’s something like, if you
3:24:29 can only see something at 20 feet distance
3:24:32 that normal people can see at 200 feet distance,
3:24:34 or worse than that, you’re legally blind.
3:24:37 So fundamentally, that means you can’t function effectively–
3:24:37 Correct.
3:24:39 –using sight in the world.
3:24:42 Yeah, like to navigate your environment.
3:24:45 And yeah, there are different forms of blindness.
3:24:48 There are forms of blindness where
3:24:52 there’s some degeneration of your retina.
3:24:55 These photoreceptor cells and the rest
3:25:01 of your visual processing that I described is intact.
3:25:04 And for those types of individuals,
3:25:06 you may not need to maybe stick electrodes
3:25:08 into the visual cortex.
3:25:14 You can actually build retinal prosthetic devices that
3:25:17 just replace the function of the retinal cells that
3:25:17 have degenerated.
3:25:20 And there are many companies that are working on that.
3:25:21 But that’s a very small slice.
3:25:24 Albeit significant, still a smaller slice
3:25:28 of folks that are legally blind.
3:25:30 If there’s any damage along that circuitry,
3:25:35 whether it’s in the optic nerve or just the LGN circuitry
3:25:39 or any break in that circuit, that’s not going to work for you.
3:25:45 And then the way you need to actually cause
3:25:50 that visual percept to happen, because your biological mechanism
3:25:52 of doing that is broken, is by placing electrodes
3:25:54 in the visual cortex in the back of your head.
3:25:56 And the way in which this would work
3:25:58 is that you would have an external camera,
3:26:03 whether it’s something as unsophisticated as a GoPro
3:26:08 or some sort of wearable Ray-Ban type glasses
3:26:12 that Meta’s working on, that captures a scene.
3:26:15 And that scene is then converted to a set
3:26:18 of electrical impulses or stimulation pulses
3:26:21 that you would activate in your visual cortex
3:26:24 through these thin film arrays.
3:26:31 And by playing some concerted kind of orchestra
3:26:33 of these stimulation patterns, you
3:26:35 can create what’s called phosphenes, which
3:26:38 are these kind of white yellowish dots
3:26:41 that you can also create by just pressing your eyes.
3:26:42 You can actually create those percepts
3:26:45 by stimulating the visual cortex.
3:26:48 And the name of the game is really have many of those
3:26:50 and have those percepts, the phosphenes,
3:26:53 be as small as possible so that you can start to tell apart–
3:26:57 like they’re the individual pixels of the screen.
3:26:59 So if you have many of those, potentially
3:27:04 you’ll be able to, in the long term,
3:27:06 be able to actually get naturalistic vision.
3:27:09 But in the short term to maybe midterm,
3:27:12 being able to at least have object detection
3:27:18 algorithms run on your glasses, on their preprocessing units,
3:27:20 and then being able to at least see the edges of things
3:27:23 so you don’t bump into stuff.
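As a rough sketch of the camera-to-stimulation pipeline described here: detect edges in a grayscale frame, downsample to a coarse grid standing in for phosphene sites, and threshold into a binary stimulation pattern. The grid size, threshold, and gradient-based edge detector are illustrative assumptions; a real system would map grid cells to calibrated electrode sites.

```python
import numpy as np

def frame_to_stim_pattern(frame, grid=(32, 32), threshold=0.25):
    """Map a grayscale frame in [0, 1] to a coarse binary 'phosphene' grid."""
    # Cheap edge detection: gradient magnitude.
    gy, gx = np.gradient(frame.astype(float))
    edges = np.hypot(gx, gy)
    # Downsample by block-averaging to the phosphene grid resolution.
    h, w = edges.shape
    bh, bw = h // grid[0], w // grid[1]
    blocks = edges[:bh * grid[0], :bw * grid[1]].reshape(grid[0], bh, grid[1], bw)
    coarse = blocks.mean(axis=(1, 3))
    # Threshold into a stimulate / don't-stimulate decision per grid cell.
    return coarse > threshold * coarse.max()

frame = np.zeros((256, 256))
frame[64:192, 64:192] = 1.0                 # a bright square in the scene
pattern = frame_to_stim_pattern(frame)
print(pattern.sum(), "of", pattern.size, "sites active")
```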
3:27:24 It’s incredible.
3:27:25 This is really incredible.
3:27:27 So you basically would be adding pixels,
3:27:29 and your brain would start to figure out
3:27:31 what those pixels mean.
3:27:34 And with different kinds of assistance
3:27:37 and signal processing on all fronts.
3:27:40 The thing that actually– it’s a couple of things.
3:27:44 Obviously, if you’re blind from birth,
3:27:49 the way brain works, especially in the early age,
3:27:52 neuroplasticity is really nothing other than kind
3:27:55 of your brain and different parts of your brain fighting
3:27:58 for the limited territory.
3:28:03 And very, very quickly you see cases where people that are–
3:28:05 I mean, you also hear about people
3:28:08 who are blind that have heightened sense of hearing
3:28:10 or some other senses.
3:28:13 And the reason for that is because that cortex that’s not
3:28:15 used just gets taken over by these different parts
3:28:16 of the cortex.
3:28:20 So for those types of individuals,
3:28:21 I mean, I guess they’re going to have
3:28:24 to now map some other parts of their senses
3:28:26 into what they call vision.
3:28:29 But it’s going to be, obviously, a very, very different
3:28:33 conscious experience before–
3:28:36 so I think that’s an interesting caveat.
3:28:38 The other thing that also is important to highlight
3:28:42 is that we’re currently limited by our biology in terms
3:28:45 of the wavelength that we can see.
3:28:48 There’s a very, very small wavelength range,
3:28:50 the visible light wavelengths,
3:28:51 that we can see with our eyes.
3:28:55 But when you have an external camera with this BCI system,
3:28:56 you’re not limited to that.
3:28:57 You can have infrared.
3:28:58 You can have UV.
3:29:01 You can have whatever other spectrum that you want to see.
3:29:03 And whether that gets mapped to some sort
3:29:05 of weird conscious experience, I have no idea.
3:29:10 But oftentimes I talk to people about the goal of Neuralink
3:29:13 being going beyond the limits of our biology.
3:29:16 That’s sort of what I mean.
3:29:20 And if you’re able to control the kind of raw signal–
3:29:25 when we use our sight, we’re getting the photons,
3:29:27 and there’s not much processing on it.
3:29:29 If you’re able to control that signal,
3:29:31 maybe you can do some kind of processing.
3:29:33 Maybe you do object detection ahead of time.
3:29:34 Yeah.
3:29:36 You’re doing some kind of preprocessing.
3:29:39 And there’s a lot of possibilities to explore that.
3:29:43 So it’s not just increasing thermal imaging, that kind
3:29:46 of stuff, but it’s also just doing some kind
3:29:47 of interesting processing.
3:29:48 Yeah.
3:29:52 I mean, my theory of how visual system works also
3:29:58 is that there’s just so many things happening in the world.
3:30:00 And there’s a lot of photons that are going into your eye.
3:30:03 And it’s unclear exactly where some
3:30:05 of the preprocessing steps are happening.
3:30:10 But I mean, I actually think that just from a fundamental
3:30:14 perspective, there’s just so much–
3:30:17 the reality that we’re in, if it’s a reality, is–
3:30:20 so there’s so much data.
3:30:25 And I think humans are just unable, actually,
3:30:26 to process all that information.
3:30:28 So there’s some sort of filtering that does happen,
3:30:30 whether that happens in the retina,
3:30:31 whether that happens in different layers
3:30:34 of the visual cortex, unclear.
3:30:37 But the analogy that I sometimes think about
3:30:42 is if your brain is a CCD camera,
3:30:46 and all of the information in the world is a sun.
3:30:49 And when you try to actually look at the sun with the CCD
3:30:51 camera, it’s just going to saturate the sensors,
3:30:53 because it’s an enormous amount of energy.
3:30:57 So what you do is you end up adding these filters
3:31:00 to just narrow the information that’s coming to you
3:31:01 and being captured.
3:31:07 And I think things like our experiences
3:31:15 or drugs, like propofol, that anesthetic drug,
3:31:17 or psychedelics, what they’re doing
3:31:20 is they’re kind of swapping out these filters
3:31:23 and putting in new ones or removing all the ones
3:31:26 and kind of controlling our conscious experience.
3:31:28 Yeah, man, not to distract from the topic,
3:31:30 but I just took a very high dose of ayahuasca
3:31:31 in the Amazon jungle.
3:31:34 So yes, it’s a nice way to think about it.
3:31:37 You’re swapping out different experiences.
3:31:41 And now we’re able to control that, primarily
3:31:46 at first to improve function, not for entertainment purposes
3:31:47 or enjoyment purposes, but–
3:31:49 Yeah, giving back lost functions.
3:31:52 Giving back lost functions.
3:31:56 And that’s especially when the function is completely lost.
3:31:58 Anything is a huge help.
3:32:05 Would you implant a Neuralink device in your own brain?
3:32:06 Absolutely.
3:32:10 I mean, maybe not right now, but absolutely.
3:32:12 What kind of capability once reached,
3:32:15 you start getting real curious and almost
3:32:20 get a little antsy, like jealous of people that get–
3:32:23 as you watch them get implanted?
3:32:24 Yeah, I mean, I think–
3:32:26 I mean, even with our early participants,
3:32:30 if they start to do things that I can’t do,
3:32:34 which I think is in the realm of possibility for them
3:32:39 to be able to get 15, 20, if not 100 BPS, right?
3:32:41 There’s nothing that fundamentally stops us
3:32:44 from being able to achieve that type of performance.
3:32:49 I mean, I will certainly get jealous that they can do that.
3:32:52 I should say that watching Nolan, I get a little jealous
3:32:54 because he’s having so much fun.
3:32:56 And it seems like such a chill way to play video games.
3:32:58 Yeah.
3:33:01 I mean, the thing that also is hard to appreciate sometimes
3:33:07 is that he’s doing these things while talking and–
3:33:08 I mean, it’s multitasking, right?
3:33:14 So it’s clearly, it’s obviously cognitively intensive,
3:33:17 but similar to how when we talk, we move our hands,
3:33:20 like these things are multitasking.
3:33:21 I mean, he’s able to do that.
3:33:25 And you won’t be able to do that with other assistive
3:33:28 technology as far as I’m aware.
3:33:31 If you’re obviously using like an eye-tracking device,
3:33:34 you’re very much fixated on that thing that you’re trying to do.
3:33:35 And if you’re using voice control,
3:33:38 I mean, if you say some other stuff,
3:33:39 yeah, you don’t get to use that.
3:33:42 Yeah, the multitasking aspect of that is really interesting.
3:33:47 So it’s not just the BPS for the primary task.
3:33:50 It’s the parallelization of multiple tasks.
3:33:53 If you measure the BPS for the entirety of the human organism,
3:33:58 so if you’re talking and doing a thing with your mind
3:34:01 and looking around also.
3:34:03 I mean, there’s just a lot of parallelization
3:34:05 that can be happening.
3:34:06 But I mean, I think at some point for him,
3:34:09 if he wants to really achieve those high-level BPS,
3:34:11 it does require full attention, right?
3:34:16 And that’s a separate circuitry that is a big mystery,
3:34:17 like how attention works.
3:34:19 Yeah, attention, like cognitive load,
3:34:24 I’ve gone through a lot of literature on people doing two tasks.
3:34:28 Like you have your primary task and a secondary task.
3:34:31 And the secondary task is a source of distraction.
3:34:33 And how does that affect the performance of the primary task?
3:34:34 And it depends on the task,
3:34:36 because there’s a lot of interesting,
3:34:38 I mean, this is an interesting computational device, right?
3:34:40 And I think there’s–
3:34:42 To say the least.
3:34:44 A lot of novel insights that can be gained from everything.
3:34:45 I mean, I personally am surprised
3:34:49 that Nolan’s able to do such incredible control
3:34:52 of the cursor while talking.
3:34:54 And also being nervous at the same time,
3:34:55 because he’s talking like all of us
3:34:57 are if you’re talking in front of the camera.
3:34:58 You get nervous.
3:35:00 So all of those are coming into play.
3:35:04 He’s able to still achieve high performance.
3:35:05 Surprising.
3:35:07 I mean, all of this is really amazing.
3:35:12 And I think just after researching this really in depth,
3:35:15 I kind of want a Neuralink.
3:35:16 Get in line.
3:35:18 And also the safety, get in the line.
3:35:20 Well, we should say the registry is for people
3:35:23 who have quadriplegia and all that kind of stuff.
3:35:29 So there would be a separate line for people
3:35:34 who are just curious, like myself.
3:35:37 So now that Nolan, patient P1, is part of the ongoing Prime
3:35:44 Study, what’s the high level vision for P2, P3, P4, P5?
3:35:48 And just the expansion into other human beings
3:35:51 that are getting to experience this implant?
3:35:56 Yeah, I mean, the primary goal for our study in the first place
3:35:57 is to achieve safety endpoints.
3:36:03 Just understand safety of this device
3:36:07 as well as the implantation process.
3:36:09 And also at the same time, understand
3:36:11 the efficacy and the impact that it
3:36:15 could have on the potential users’ lives.
3:36:21 And just because you’re living with tetraplegia,
3:36:23 it doesn’t mean your situation is
3:36:25 same as another person living with tetraplegia.
3:36:29 It’s wildly, wildly varying.
3:36:33 And it’s something that we’re hoping to also understand
3:36:37 how our technology can serve not just a very small slice
3:36:40 of those individuals, but a broader group of individuals
3:36:41 and being able to get the feedback
3:36:47 to just really build just the best product for them.
3:36:53 So there’s obviously also goals that we have.
3:36:57 And the primary purpose of the early feasibility study
3:37:01 is to learn from each and every participant
3:37:05 to improve the device, improve the surgery before we
3:37:09 embark on what’s called a pivotal study that then is
3:37:14 much larger trial that starts to look
3:37:17 at statistical significance of your endpoints.
3:37:21 And that’s required before you can then market the device.
3:37:24 And that’s how it works in the US and just generally
3:37:25 around the world.
3:37:26 That’s the process you follow.
3:37:30 So our goal is to really just understand from people
3:37:33 like Nolan, P2, P3, future participants
3:37:36 what aspects of our device needs to improve.
3:37:38 If it turns out that people are like,
3:37:40 I really don’t like the fact that it lasts only six hours.
3:37:45 I want to be able to use this computer for 24 hours.
3:37:50 I mean, those are user needs and user requirements,
3:37:52 which we can only find out from just being
3:37:54 able to engage with them.
3:37:55 So before the pivotal study, there’s
3:37:57 kind of like a rapid innovation based
3:37:58 on individual experiences.
3:38:00 You’re learning from individual people
3:38:05 how they use it, like the high resolution details
3:38:07 in terms of cursor control and signal
3:38:10 and all that kind of stuff, like life experience.
3:38:11 Yeah, so there’s hardware changes,
3:38:14 but also just firmware updates.
3:38:20 So even when we had that sort of recovery event for Nolan,
3:38:26 he now has the new firmware that he has been updated with.
3:38:28 And it’s similar to how your phones
3:38:31 get updated all the time with new firmwares for security
3:38:34 patches, whatever new functionality, UI.
3:38:36 And that’s something that is possible with our implant.
3:38:40 It’s not a static one-time device that can only
3:38:42 do the thing that it said it can do.
3:38:45 I mean, similar to Tesla, you can do over-the-air firmware
3:38:48 updates, and now you have completely new user interface.
3:38:51 And all this bells and whistles and improvements
3:38:54 on everything, like the latest, right?
3:38:57 That’s when we say generalized platform,
3:38:59 that’s what we’re talking about.
3:39:01 Yeah, it’s really cool how the app that Nolan is using,
3:39:05 there’s calibration, all that kind of stuff.
3:39:09 And then there’s update.
3:39:12 You just click and get an update.
3:39:16 What other future capabilities are you kind of looking to?
3:39:17 You said vision.
3:39:19 That’s a fascinating one.
3:39:22 What about sort of accelerated typing or speech
3:39:24 or this kind of stuff?
3:39:26 And what else is there?
3:39:30 Yeah, those are still in the realm of movement program.
3:39:32 So largely speaking, we have two programs.
3:39:36 We have the movement program, and we have the vision program.
3:39:38 The movement program currently is focused
3:39:40 around the digital freedom.
3:39:42 As you can easily guess, if you can
3:39:45 control 2D cursor in the digital space,
3:39:48 you could move anything in the physical space–
3:39:52 so robotic arms, wheelchair, your environment–
3:39:54 or even really, whether it’s through the phone
3:39:56 or just directly to those interfaces,
3:39:59 so to those machines.
3:40:02 So we’re looking at ways to expand those types of capability,
3:40:04 even for Nolan.
3:40:07 That requires conversation with the FDA
3:40:10 and kind of showing safety data for if there’s
3:40:13 a robotic arm or a wheelchair that we can guarantee
3:40:16 that they’re not going to hurt themselves accidentally.
3:40:17 It’s very different if you’re moving stuff
3:40:20 in the digital domain versus in the physical space,
3:40:25 you can actually potentially cause harm to the participants.
3:40:27 So we’re working through that right now.
3:40:31 Speech does involve different areas of the brain.
3:40:33 Speech prosthetic is very, very fascinating.
3:40:37 And there’s actually been a lot of really amazing work
3:40:40 that’s been happening in academia.
3:40:44 Sergey Stavisky at UC Davis, Jaimie Henderson,
3:40:47 and the late Krishna Shenoy at Stanford
3:40:49 are doing just some incredible amount of work
3:40:52 in improving speech neural prosthetics.
3:40:55 And those are actually looking more
3:40:57 at parts of the motor cortex that
3:41:01 are controlling these vocal articulators.
3:41:05 And being able to, even by mouthing the word or imagine
3:41:08 speech, you can pick up those signals.
3:41:11 The more sophisticated, higher-level processing
3:41:15 areas like Broca’s area or Wernicke’s area,
3:41:18 those are still very, very big mystery
3:41:21 in terms of the underlying mechanism of how all that stuff
3:41:24 works.
3:41:26 And I mean, I think Neuralink’s eventual goal
3:41:29 is to kind of understand those things
3:41:31 and be able to provide a platform and tools
3:41:34 to be able to understand that and study that.
3:41:38 This is where I get to the pothead questions.
3:41:40 Do you think we can start getting insight
3:41:43 into things like thought?
3:41:48 So speech is– there’s a muscular component, like you said.
3:41:51 There’s like the act of producing sounds.
3:41:56 But then what about the internal things, like cognition,
3:41:58 like low-level thoughts and high-level thoughts?
3:42:01 Do you think we’ll start noticing signals
3:42:06 that could be picked up, that could be understood,
3:42:08 that could maybe be used in order
3:42:12 to interact with the outside world?
3:42:14 In some ways, I guess this starts
3:42:19 to kind of get into the hard problem of consciousness.
3:42:26 And I mean, on one hand, all of these
3:42:29 are, at some point, a set of electrical signals
3:42:36 that from there, maybe it in itself
3:42:39 is giving you the cognition or the meaning,
3:42:44 or somehow human mind is an incredibly amazing storytelling
3:42:44 machine.
3:42:47 So we’re telling ourselves and fooling ourselves
3:42:50 that there’s some interesting meaning here.
3:42:55 But I certainly think that BCI–
3:42:57 and really, BCI at the end of the day
3:43:00 is a set of tools that help you kind of study
3:43:04 the underlying mechanisms in both local but also broader
3:43:07 sense.
3:43:10 And whether there’s some interesting patterns
3:43:15 of electrical signal, that means you’re thinking this versus–
3:43:19 and you can either learn from many, many sets of data
3:43:22 to correlate some of that and be able to do mind reading or not.
3:43:24 I’m not sure.
3:43:27 I certainly would not rule that out as a possibility,
3:43:32 but I think BCI alone probably can’t do that.
3:43:36 There’s probably an additional set of tools and frameworks needed,
3:43:39 and also, the hard problem of consciousness at the end
3:43:42 of the day is rooted in this philosophical question of what
3:43:44 is the meaning of it all?
3:43:46 What’s the nature of our existence?
3:43:50 Where does the mind emerge from this complex network?
3:43:54 Yeah, how does the subjective experience emerge
3:43:58 from just a bunch of spikes, electrical spikes?
3:44:01 Yeah, I mean, we do really think about BCI
3:44:04 and what we’re building as a tool for understanding
3:44:10 the mind, the brain, the only question that matters.
3:44:16 There actually is some biological existence
3:44:19 proof of what it would take to kind of start
3:44:24 to form some of these experiences that may be unique.
3:44:27 If you actually look at every one of our brains,
3:44:28 there are two hemispheres.
3:44:31 There’s a left-sided brain, there’s a right-sided brain.
3:44:36 And I mean, unless you have some other conditions,
3:44:41 you normally don’t feel like left Lex or right Lex.
3:44:43 You just feel like one Lex, right?
3:44:46 So what is happening there, right?
3:44:50 If you actually look at the two hemispheres,
3:44:53 there’s a structure that kind of connectorizes
3:44:56 the two, called the corpus callosum, that
3:45:01 is supposed to have around 200 to 300 million connections
3:45:04 or axons.
3:45:08 So whether that means that’s the number of interfaces
3:45:11 and electrodes that we need to create some sort of mind
3:45:16 meld, or whatever new conscious experience
3:45:19 you could create from that, I don’t know.
3:45:25 But I do think that there is kind of an interesting existence
3:45:29 proof that we all have.
3:45:32 And that threshold is unknown at this time.
3:45:34 Oh yeah, these things, everything in this domain
3:45:37 is speculation, right?
3:45:40 And then there will be–
3:45:42 you’d be continuously pleasantly surprised.
3:45:50 Do you see a world where there’s millions of people,
3:45:52 like tens of millions, hundreds of millions of people
3:45:55 walking around with a neural-link device
3:45:57 or multiple neural-link devices in their brain?
3:45:58 I do.
3:46:00 First of all, there are–
3:46:02 if you look at worldwide, people suffering
3:46:05 from movement disorders and visual deficits,
3:46:10 I mean, that’s in the tens, if not hundreds,
3:46:12 of millions of people.
3:46:16 So that alone, I think, there’s a lot of benefit
3:46:21 and potential good that we can do with this type of technology.
3:46:24 And once you start to get into kind of neural, like,
3:46:31 psychiatric application, depression, anxiety, hunger,
3:46:37 or obesity, like, mood, control of appetite,
3:46:43 I mean, that starts to become very real to everyone.
3:46:47 Not to mention that every–
3:46:50 most people on Earth have a smartphone.
3:46:55 And once BCI starts competing with a smartphone
3:46:57 as a preferred methodology of interacting
3:47:01 with the digital world, that also becomes an interesting thing.
3:47:03 Oh, yeah, I mean, this is even before going to that, right?
3:47:06 I mean, there is almost–
3:47:08 I mean, the entire world that could
3:47:10 benefit from these types of thing.
3:47:13 And then if we’re talking about kind of next generation
3:47:19 of how we interface with machines or even ourselves,
3:47:24 in many ways, I think BCI can play a role in that.
3:47:28 And some of the things that I also talk about
3:47:30 is I do think that there is a real possibility
3:47:34 that you could see 8 billion people walking around
3:47:35 with Neuralink.
3:47:38 Well, thank you so much for pushing ahead.
3:47:41 And I look forward to that exciting future.
3:47:42 Thanks for having me.
3:47:46 Thanks for listening to this conversation with DJ Sa.
3:47:50 And now, dear friends, here’s Matthew McDougal,
3:47:54 the head neurosurgeon at Neuralink.
3:47:58 When did you first become fascinated with the human brain?
3:48:01 Since forever, as far back as I can remember,
3:48:03 I’ve been interested in the human brain.
3:48:10 I mean, I was a thoughtful kid and a bit of an outsider.
3:48:14 And you sit there thinking about what the most important things
3:48:20 in the world are in your little tiny adolescent brain.
3:48:24 And the answer that I came to, that I converged on,
3:48:29 was that all of the things you can possibly conceive of
3:48:33 as things that are important for human beings to care about
3:48:37 are literally contained in the skull,
3:48:40 both the perception of them and their relative values
3:48:45 and the solutions to all our problems and all of our problems
3:48:47 are all contained in the skull.
3:48:52 And if we knew more about how that worked,
3:48:56 how the brain encodes information and generates desires
3:49:04 and generates agony and suffering, we could do more about it.
3:49:07 You think about all the really great triumphs in human history.
3:49:12 You think about all the really horrific tragedies.
3:49:13 You think about the Holocaust.
3:49:20 You think about any prison full of human stories.
3:49:27 And all of those problems boil down to neurochemistry.
3:49:30 So if you get a little bit of control over that,
3:49:35 you provide people the option to do better. In the way I read history,
3:49:38 the way people have dealt with having better tools
3:49:45 is that they most often, in the end, do better, with huge asterisks.
3:49:49 But I think it’s an interesting, worthy and noble pursuit
3:49:52 to give people more options, more tools.
3:49:55 Yeah, that’s a fascinating way to look at human history.
3:49:58 You just imagine all these neurobiological mechanisms,
3:50:02 Stalin, Hitler, Genghis Khan, all of these,
3:50:05 all of them just had like a brain.
3:50:11 Just a bunch of neurons, like a few tens of billions of neurons,
3:50:13 gaining a bunch of information over a period of time.
3:50:17 They’ve got a module that does language and memory and all that.
3:50:19 And from there, in the case of those people,
3:50:22 they’re able to murder millions of people.
3:50:28 And all that coming from, there’s not some glorified notion
3:50:34 of a dictator of this enormous mind or something like this.
3:50:36 It’s just a brain.
3:50:41 Yeah, I mean, a lot of that has to do with how well people
3:50:45 like that can organize those around them.
3:50:45 Other brains.
3:50:48 Yeah, and so I always find it interesting
3:50:52 to look to primatology, look to our closest non-human
3:50:57 relatives for clues as to how humans are going to behave
3:51:01 and what particular humans are able to achieve.
3:51:06 And so you look at chimpanzees and bonobos.
3:51:10 And they’re similar, but different in their social structures,
3:51:12 particularly.
3:51:17 And I went to Emory in Atlanta and studied under Frans
3:51:18 de Waal, the great Frans de Waal, who
3:51:23 was kind of the leading primatologist, who recently died.
3:51:30 And his work, looking at chimps through the lens of how
3:51:31 you would watch an episode of Friends
3:51:35 and understand the motivations of the characters interacting
3:51:37 with each other, he would look at a chimp colony
3:51:41 and basically apply that lens, massively oversimplifying it.
3:51:47 If you do that, instead of just saying subject 473
3:51:52 threw his feces at subject 471, you
3:51:55 talk about them in terms of their human struggles,
3:52:00 accord them the dignity of themselves as actors
3:52:04 with understandable goals and drives what they want out
3:52:05 of life.
3:52:09 And primarily, it’s the things we want out of life– food, sex,
3:52:14 companionship, power.
3:52:17 You can understand chimp and bonobo behavior
3:52:22 in the same lights much more easily.
3:52:25 And I think doing so gives you the tools
3:52:30 you need to reduce human behavior from the kind of false
3:52:33 complexity that we layer onto it with language
3:52:37 and look at it in terms of, oh, well, these humans
3:52:40 are looking for companionship, sex, food, power.
3:52:45 And I think that that’s a pretty powerful tool
3:52:47 to have in understanding human behavior.
3:52:50 And I just went to the Amazon jungle for a few weeks.
3:52:56 And it’s a very visceral reminder that a lot of life on Earth
3:52:58 is just trying to get laid.
3:53:00 They’re all screaming at each other.
3:53:02 Like, I saw a lot of monkeys.
3:53:03 And they’re just trying to impress each other.
3:53:06 Or maybe there’s a battle for power.
3:53:08 But a lot of the battle for power
3:53:10 has to do with them getting laid.
3:53:13 Breeding rights often go with alpha status.
3:53:17 And so if you can get a piece of that, then you’re going to do OK.
3:53:19 And we’d like to think that we’re somehow fundamentally
3:53:22 different, but especially when it comes to primates,
3:53:24 we really aren’t–
3:53:26 we can use fancier poetic language,
3:53:34 but maybe some of the underlying drives that motivate us are similar.
3:53:35 Yeah, I think that’s true.
3:53:38 And all of that is coming from this, the brain.
3:53:41 So when did you first start studying the brain?
3:53:43 Is it because of the biological mechanism?
3:53:45 Basically, the moment I got to college,
3:53:51 I started looking around for labs that I could do neuroscience work in.
3:53:54 I originally approached that from the angle
3:53:58 of looking at interactions between the brain and the immune system,
3:54:00 which isn’t the most obvious place to start.
3:54:08 But I had this idea at the time that the contents of your thoughts
3:54:16 would have a direct impact, maybe a powerful one, on non-conscious systems
3:54:22 in your body, the systems we think of as homeostatic, automatic mechanisms,
3:54:28 like fighting off a virus, repairing a wound.
3:54:32 And sure enough, there are big crossovers between the two.
3:54:38 I mean, it gets to kind of a key point that I think goes under-recognized,
3:54:45 one of the things people don’t recognize or appreciate about the human brain enough.
3:54:50 And that is that it basically controls or has a huge role in almost everything
3:54:53 that your body does.
3:54:56 Like, you try to name an example of something in your body
3:55:01 that isn’t directly controlled or massively influenced by the brain.
3:55:04 And it’s pretty hard.
3:55:06 I mean, you might say like bone healing or something.
3:55:12 But even those systems, the hypothalamus and pituitary end up playing a role
3:55:17 in coordinating the endocrine system that does have a direct influence
3:55:21 on, say, the calcium level in your blood that goes to bone healing.
3:55:25 So non-obvious connections between those things
3:55:32 implicate the brain as really a potent prime mover in all of health.
3:55:35 One of the things I realized in the other direction, too,
3:55:41 how most of the systems in the body are integrated with the human brain,
3:55:44 like they affect the brain also, like the immune system.
3:55:51 For people who study Alzheimer’s and those kinds of things,
3:55:57 it’s just surprising how much you can understand of that from the immune system,
3:56:01 from the other systems that don’t obviously seem to have anything to do
3:56:04 with sort of the nervous system.
3:56:05 They all play together.
3:56:09 Yeah, you could understand how that would be driven by evolution, too,
3:56:11 just in some simple examples.
3:56:18 If you get sick, if you get a communicable disease, you get the flu,
3:56:22 it’s pretty advantageous for your immune system to tell your brain,
3:56:26 “Hey, now be antisocial for a few days.
3:56:30 Don’t go be the life of the party tonight.
3:56:33 In fact, maybe just cuddle up somewhere warm under a blanket
3:56:35 and just stay there for a day or two.”
3:56:37 And sure enough, that tends to be the behavior that you see
3:56:40 both in animals and in humans.
3:56:44 If you get sick, elevated levels of interleukins in your blood
3:56:48 and TNF-alpha in your blood,
3:56:54 ask the brain to cut back on social activity and even moving around.
3:57:02 You have lower locomotor activity in animals that are infected with viruses.
3:57:08 So from there, the early days in neuroscience to surgery,
3:57:10 when did that step happen?
3:57:11 It was a leap.
3:57:13 You know, it was sort of an evolution of thought.
3:57:16 I wanted to study the brain.
3:57:23 I started studying the brain in undergrad in this neuroimmunology lab.
3:57:31 I, from there, realized at some point that I didn’t want to just generate knowledge.
3:57:38 I wanted to affect real changes in the actual world, in actual people’s lives.
3:57:43 And so after having not really thought about going into medical school,
3:57:46 I was on a track to go into a PhD program.
3:57:49 I said, “Well, I’d like that option.
3:57:55 I’d like to actually potentially help tangible people in front of me.”
3:58:02 And doing a little digging, I found that there exist these MD-PhD programs,
3:58:07 where you can choose not to choose between them and do both.
3:58:16 And so I went to USC for medical school and had a joint PhD program with Caltech,
3:58:24 where I actually chose that program, particularly because of a researcher at Caltech named Richard Andersen,
3:58:28 who’s one of the godfathers of primate neuroscience.
3:58:35 He has a macaque lab where Utah arrays and other electrodes were being inserted into the brains of monkeys
3:58:40 to try to understand how intentions were being encoded in the brain.
3:58:48 So I ended up there with the idea that maybe I would be a neurologist and study the brain on the side,
3:58:55 and then discovered that neurology, again, I’m going to make enemies by saying this,
3:59:04 but neurology predominantly and distressingly to me is the practice of diagnosing a thing
3:59:08 and then saying good luck with that when there’s not much we can do.
3:59:17 And neurosurgery, very differently, it’s a powerful lever on taking people that are headed in a bad direction
3:59:27 and changing their course in the sense of brain tumors that are potentially treatable or curable with surgery,
3:59:30 even aneurysms in the brain, blood vessels that are going to rupture.
3:59:35 You can save lives, which is really, at the end of the day, what mattered to me.
3:59:44 And so I was at USC, as I mentioned, which happens to be one of the great neurosurgery programs.
3:59:55 And so I met these truly epic neurosurgeons, Alex Khalessi and Mike Apuzzo and Steve Giannotta and Marty Weiss,
3:59:59 these sort of epic people that were just human beings in front of me.
4:00:07 And so it kind of changed my thinking from neurosurgeons are distant gods that live on another planet
4:00:13 and occasionally come and visit us to these are humans that have problems and are people.
4:00:17 And there’s nothing fundamentally preventing me from being one of them.
4:00:25 And so at the last minute in medical school, I changed gears from going into a different specialty
4:00:29 and switched into neurosurgery, which cost me a year.
4:00:35 I had to do another year of research because I was so far along in the process
4:00:39 that to switch into neurosurgery, the deadlines had already passed.
4:00:45 So it was a decision that cost time, but absolutely worth it.
4:00:50 What was the hardest part of the training on the neurosurgeon track?
4:00:58 Yeah, two things. I think that residency in neurosurgery is sort of a competition of pain,
4:01:08 of how much pain can you eat and smile. And so there’s work-hour restrictions that are,
4:01:14 I think, viewed internally among the residents as weakness.
4:01:18 And so most neurosurgery residents try to work as hard as they can.
4:01:24 And that, I think, necessarily means working long hours and sometimes over the work hour limits.
4:01:31 And we care about being compliant with whatever regulations are in front of us.
4:01:36 But I think more important than that, people want to give their all in becoming a better
4:01:42 neurosurgeon because the stakes are so high. And so it’s a real fight to get residents
4:01:48 to, say, go home at the end of their shift and not stay and do more surgery.
4:01:53 Are you seriously saying one of the hardest things is literally getting,
4:01:57 forcing them to get sleep and rest and all this kind of stuff?
4:02:05 Historically, that was the case. I think the next generation is more compliant and more into self-care.
4:02:09 Weak is what you mean. All right, I’m just kidding. I’m just kidding.
4:02:09 I didn’t say it.
4:02:11 Now I’m making enemies. No.
4:02:15 Okay, I get it. Wow, that’s fascinating. So what was the second thing?
4:02:18 The personalities, and maybe the two are connected.
4:02:21 So was it pretty competitive?
4:02:29 It’s competitive. And it’s also, as we touched on earlier, primates like power.
4:02:39 And I think neurosurgery has long had this aura of mystique and excellence and whatever about it.
4:02:45 And so it’s an invitation, I think, for people that are cloaked in that authority,
4:02:50 a board-certified neurosurgeon is basically a walking fallacious appeal to authority.
4:02:57 You have license to walk into any room and act like you’re an expert on whatever.
4:03:04 And fighting that tendency is not something that most neurosurgeons do well. Humility isn’t the forte.
4:03:15 So I have friends who know you and whenever they speak about you, you have the surprising quality
4:03:21 for a neurosurgeon of humility, which I think indicates that it’s not as common as perhaps
4:03:28 in other professions, because there is a kind of gigantic sort of heroic aspect to neurosurgery.
4:03:31 And I think it gets to people’s head a little bit.
4:03:39 Yeah. Well, I think that allows me to play well at an Elon company,
4:03:49 because Elon, one of his strengths, I think, is to just instantly see through fallacy from authority.
4:03:54 So nobody walks into a room that he’s in and says, well, god damn it, you have to trust me.
4:04:00 I’m the guy that built the last 10 rockets or something. And he says, well, you did it wrong
4:04:06 and we can do it better. Or I’m the guy that kept Ford alive for the last 50 years. You
4:04:12 listen to me on how to build cars. And he says, no. And so you don’t walk into a room that he’s in
4:04:17 and say, well, I’m a neurosurgeon. Let me tell you how to do it. He’s going to say, well,
4:04:23 I’m a human being that has a brain I can think from first principles myself. Thank you very much.
4:04:27 And here’s how I think it ought to be done. Let’s go try it and see who’s right.
4:04:33 And that’s proven, I think, over and over in his case to be a very powerful approach.
4:04:38 If we just take that tangent, there’s a fascinating interdisciplinary team at Neuralink
4:04:47 that you get to interact with, including Elon. What do you think is the secret to a successful
4:04:51 team? What have you learned from just getting to observe these folks?
4:04:53 Yeah.
4:05:00 World experts in different disciplines work together. Yeah, there’s a sweet spot where people
4:05:07 disagree and forcefully speak their mind and passionately defend their position
4:05:16 and yet are still able to accept information from others and change their ideas when they’re
4:05:25 wrong. And so I like the analogy of how you polish rocks. You put hard things in a hard
4:05:32 container and spin it. People bash against each other and out comes a more refined product.
4:05:42 And so to make a good team at Neuralink, we’ve tried to find people that are not afraid to
4:05:48 defend their ideas passionately and occasionally strongly disagree with people
4:05:55 that they’re working with and have the best idea come out on top.
4:06:04 It’s not an easy balance, again, to refer back to the primate brain. It’s not something that is
4:06:12 inherently built into the primate brain to say, “I passionately put all my chips on this
4:06:15 position and now I’m just going to walk away from it and admit you were right.”
4:06:23 Part of our brains tell us that that is a power loss, that is a loss of face, a loss of standing
4:06:31 in the community, and now you’re a Zeta chump because your idea got trounced.
4:06:38 And you just have to recognize that little voice in the back of your head is maladaptive
4:06:42 and it’s not helping the team win. Yeah, you have to have the confidence to be able to walk
4:06:48 away from an idea that you hold on to. And if you do that often enough, you’re actually going to
4:06:55 become the best in the world at your thing. I mean, that kind of rapid iteration.
4:06:57 Yeah, you’ll at least be a member of a winning team.
4:07:04 Ride the wave. What did you learn? You mentioned there’s a lot of amazing
4:07:11 neurosurgeons at USC. What lessons about surgery and life have you learned from those folks?
4:07:20 Yeah, I think working your ass off, working hard while functioning as a member of a team,
4:07:27 getting a job done, that is incredibly difficult. Working incredibly long hours,
4:07:33 being up all night, taking care of someone that you think probably won’t survive no
4:07:41 matter what you do, working hard to make people that you passionately dislike look good the next
4:07:50 morning. These folks were relentless in their pursuit of excellent neurosurgical technique
4:07:58 decade over decade. And I think we’re well recognized for that excellence. So especially
4:08:04 Marty Weiss, Steve Giannotta, Mike Apuzzo, they made huge contributions not only to
4:08:12 surgical technique, but they built training programs that trained dozens or hundreds of
4:08:18 amazing neurosurgeons. I was just lucky to be in their wake.
4:08:27 What’s that like you mentioned doing a surgery where the person is likely not to survive?
4:08:31 Does that wear on you? Yeah.
4:08:52 It’s especially challenging. With all respect to our elders, it doesn’t hit so much
4:09:00 when you’re taking care of an 80-year-old and something was going to get them pretty soon anyway.
4:09:07 And so you lose a patient like that, and it was part of the natural course of what is expected
4:09:20 of them in the coming years, regardless. Taking care of a father of two or three, four young kids,
4:09:29 someone in their 30s that didn’t have it coming, and they show up in your ER having their first
4:09:35 seizure of their life. And lo and behold, they’ve got a huge malignant, inoperable or
4:09:43 incurable brain tumor. You can only do that, I think, a handful of times before it really
4:09:53 starts eating away at your armor. Or a young mother that shows up that has a giant
4:09:58 hemorrhage in her brain that she’s not going to survive from. And they bring her four-year-old
4:10:03 daughter in to say goodbye one last time before they turn the ventilator off.
4:10:12 The great Henry Marsh is an English neurosurgeon who said it best. I think he says every neurosurgeon
4:10:19 carries with them a private graveyard, and I definitely feel that, especially with young
4:10:30 parents. That kills me. They had a lot more to give. The loss of those people specifically
4:10:39 has a knock-on effect that’s going to make the world worse for people for a long time,
4:10:49 and it’s just hard to feel powerless in the face of that. And that’s where I think you have to be
4:10:57 borderline evil to fight against a company like Neuralink or to constantly be taking potshots at
4:11:05 us because what we’re doing is to try to fix that stuff. We’re trying to give people options
4:11:15 to reduce suffering. We’re trying to take the pain out of life that
4:11:27 broken brains bring. This is just our little way that we’re fighting back against entropy,
4:11:33 I guess. Yeah, the amount of suffering that’s endured when some of the things that we take for
4:11:40 granted that our brain is able to do is taken away is immense, and to be able to restore some
4:11:46 of that functionality is a real gift. Yeah, we’re just starting. We’re going to do so much more.
4:11:55 Well, can you take me through the full procedure of implanting, say, the N1 chip in Neuralink?
4:12:01 Yeah, it’s a really simple, straightforward procedure. The human part of the surgery
4:12:11 that I do is dead simple. It’s one of the most basic neurosurgery procedures imaginable. And I
4:12:18 think there’s evidence that some version of it has been done for thousands of years. There are
4:12:24 examples, I think, from ancient Egypt of healed or partially healed trepanations, and from
4:12:35 Peru or ancient times in South America, where these proto-surgeons would drill holes in people’s
4:12:41 skulls, presumably to let out the evil spirits, but maybe to drain blood clots. And there’s
4:12:47 evidence of bone healing around the edge, meaning the people at least survived some months after a
4:12:53 procedure. And so what we’re doing is that. We are making a cut in the skin on the top of the head
4:13:03 over the area of the brain that is the most potent representation of hand intentions. And so if you
4:13:10 are an expert concert pianist, this part of your brain is lighting up the entire time you’re playing.
4:13:19 We call it the hand knob. The hand knob. So it’s all the finger movements, all of that is just firing
4:13:24 away. Yep. There’s a little squiggle in the cortex right there. One of the folds in the brain is
4:13:29 kind of doubly folded right on that spot. And so you can look at it on an MRI and say,
4:13:36 that’s the hand knob. And then you do a functional test in a special kind of MRI called a functional
4:13:42 MRI, fMRI. And this part of the brain lights up when people, even quadriplegic people whose
4:13:47 brains aren’t connected to their finger movements anymore, they imagine finger movements and this
4:13:54 part of the brain still lights up. So we can ID that part of the brain in anyone who’s preparing
4:14:02 to enter our trial and say, okay, that part of the brain we confirm is your hand intention area.
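The localizer logic described here boils down to a simple test: correlate each voxel's time series with the imagined-movement task and keep the voxels that track it. A toy sketch on synthetic data, not a real fMRI pipeline; every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a preprocessed fMRI run: 200 volumes x 500
# voxels, with imagined finger movements ON/OFF in 20-volume blocks.
n_t, n_vox = 200, 500
task = np.tile(np.r_[np.zeros(20), np.ones(20)], 5)   # block-design regressor
data = rng.standard_normal((n_t, n_vox))
hand_voxels = rng.choice(n_vox, 25, replace=False)    # ground-truth "hand knob"
data[:, hand_voxels] += 0.8 * task[:, None]           # add task-driven signal

# Correlate every voxel's time series with the task and threshold.
z = (data - data.mean(0)) / data.std(0)
t = (task - task.mean()) / task.std()
r = (z * t[:, None]).mean(0)                          # Pearson r per voxel
active = np.where(r > 0.3)[0]

print(f"{len(active)} voxels flagged as the hand-intention area")
print("overlap with ground truth:", len(set(active) & set(hand_voxels)))
```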
4:14:10 And so I’ll make a little cut in the skin. We’ll flap the skin open, just like kind of
4:14:18 opening the hood of a car, only a lot smaller. Make a perfectly round one inch diameter hole
4:14:26 in the skull. Remove that bit of skull. Open the lining of the brain, the covering of the brain.
4:14:33 It’s like a little bag of water that the brain floats in. And then show that part of the brain
4:14:40 to our robot. And then this is where the robot shines. It can come in and take these tiny,
4:14:49 much smaller than human hair electrodes and precisely insert them into the cortex, into the
4:14:56 surface of the brain, to a very precise depth in a very precise spot that avoids all the blood
4:15:00 vessels that are coating the surface of the brain. And after the robot’s done with its part,
4:15:07 then the human comes back in and puts the implant into that hole in the skull and covers it up,
4:15:15 screwing it down to the skull and sewing the skin back together. So the whole thing is a few
4:15:23 hours long. It’s extremely low risk compared to the average neurosurgery involving the brain that
4:15:28 might say open up a deep part of the brain or manipulate blood vessels in the brain.
4:15:37 This opening on the surface of the brain with only cortical microinsertions
4:15:46 carries significantly less risk than a lot of the tumor or aneurysm surgeries that are routinely
4:15:53 done. So cortical microinsertions that are via a robot and computer vision are designed to avoid
4:16:00 the blood vessels. Exactly. So I know you’re a bit biased here, but let’s compare human and machine.
4:16:09 So what are human surgeons able to do well and what are robot surgeons able to do well
4:16:13 at this stage of our human civilization development?
4:16:22 Yeah, that’s a good question. Humans are general purpose machines. We’re able to adapt to
4:16:26 unusual situations. We’re able to change the plan on the fly.
4:16:38 I remember well a surgery that I was doing many years ago down in San Diego where the plan was to
4:16:47 open a small hole behind the ear and go reposition a blood vessel that had come to lay on the
4:16:53 trigeminal nerve, the nerve that goes to the face. When that blood vessel lays on the nerve,
4:17:00 it can cause just intolerable, horrific shooting pain that people describe like being zapped with
4:17:06 a cattle prod. And so the beautiful elegant surgery is to go move this blood vessel off the nerve.
4:17:12 The surgery team, we went in there and started moving this blood vessel and then found that there
4:17:17 was a giant aneurysm on that blood vessel that was not easily visible on the pre-op scans.
4:17:25 And so the plan had to dynamically change and that the human surgeons had no problem with that.
4:17:31 We’re trained for all those things. Robots wouldn’t do so well in that situation, at least in their
4:17:39 current incarnation. Fully robotic surgery, like the electrode insertion portion of the
4:17:46 Neuralink surgery, goes according to a set plan. And so the humans can interrupt the flow
4:17:51 and change the plan, but the robot can’t really change the plan midway through. It
4:17:57 operates according to how it was programmed and how it was asked to run. It does its job
4:18:05 very precisely, but not with a wide degree of latitude in how to react to changing conditions.
4:18:10 So there could be just a very large number of ways that you could be surprised as a surgeon.
4:18:15 When you enter a situation, there could be subtle things that you have to dynamically adjust to.
4:18:19 Correct. And robots are not good at that currently?
4:18:23 Currently. I think we’re at the dawn of a new
4:18:32 era with AI for the parameters of robot responsiveness to be dramatically broadened.
4:18:39 Right? I mean, you can’t look at a self-driving car and say that it’s operating under very narrow
4:18:46 parameters if a chicken runs across the road. It wasn’t necessarily programmed to deal with that,
4:18:52 specifically, but a Waymo or a self-driving Tesla would have no problem reacting to that
4:19:00 appropriately. And so surgical robots aren’t there yet, but give it time.
4:19:06 And then there could be a lot of sort of like semi-autonomous possibilities of maybe a robotic
4:19:13 surgeon could say this situation is perfectly familiar or the situation is not familiar.
4:19:19 And in the not-familiar case, a human could take over. But basically it’d be very conservative,
4:19:24 saying, okay, this for sure has no issues, no surprises, and then letting the humans deal with
4:19:31 the surprises, with the edge cases, all that. That’s one possibility. So you think eventually
4:19:39 you’ll be out of the job? Well, you being a surgeon, your job being a surgeon, humans,
4:19:43 there will not be many neurosurgeons left on this earth.
4:19:47 I’m not worried about my job in the course of my professional life.
4:19:58 I think I would tell my kids not necessarily to go in this line of work, depending on how things
4:20:04 look in 20 years. It’s so fascinating because I mean, if I have a line of work, I would say it’s
4:20:10 programming. And if you asked me like for the last, I don’t know, 20 years, what I would
4:20:15 recommend for people, I would tell them, yeah, go, you will always have a job if you’re a programmer
4:20:19 because there’s more and more computers and all this kind of stuff and it pays well.
4:20:25 But then you realize these large language models come along and they’re really damn good at
4:20:31 generating code. So overnight you can be surprised like, wow, what is the contribution of the human
4:20:37 really? But then you start to think, okay, it does seem like humans have ability, like you said,
4:20:45 to deal with novel situations. In the case of programming, it’s the ability to come up with
4:20:52 novel ideas to solve problems. It seems like machines aren’t quite yet able to do that.
4:20:57 And when the stakes are very high, when it’s life critical, as it is in surgery, especially
4:21:04 neurosurgery, then it starts, the stakes are very high for a robot to actually replace a human.
4:21:10 But it’s fascinating that in this case of Neuralink, there’s a human robot collaboration.
4:21:17 Yeah. I do the parts it can’t do, and it does the parts I can’t do. And we are friends.
4:21:29 I saw that there’s a lot of practice going on. So I mean, everything in Neuralink is tested
4:21:34 extremely rigorously. But one of the things I saw is that there’s a proxy on which the surgeries are
4:21:40 performed. So this is both for the robot and for the human, for everybody involved in the entire
4:21:48 pipeline, what’s that like practicing the surgery? It’s pretty intense. So there’s no analog to this
4:21:56 in human surgery. Human surgery is sort of this artisanal craft that’s handed down directly from
4:22:03 master to pupil over the generations. I mean, literally the way you learn to be a surgeon on
4:22:14 humans is by doing surgery on humans. I mean, first, you watch your professors do a bunch of
4:22:19 surgery and then finally they put the trivial parts of the surgery into your hands and then
4:22:25 the more complex parts. And as your understanding of the point and the purposes of the surgery
4:22:29 increases, you get more responsibility. That’s in the perfect condition; it doesn’t always go well.
4:22:38 In Neuralink’s case, the approach is a bit different. We of course practiced as far as we
4:22:46 could on animals. We did hundreds of animal surgeries. And when it came time to do the first
4:22:55 human, we had just an amazing team of engineers build incredibly lifelike models. One of the
4:23:03 engineers, Fran Romano in particular, built a pulsating brain in a custom 3D printed skull that
4:23:12 matches exactly the patient’s anatomy, including their face and scalp characteristics. And so
4:23:19 when I was able to practice that, I mean, it’s as close as it really reasonably should get
4:23:29 to being the real thing in all the details, including having a mannequin body attached to
4:23:36 this custom head. And so when we were doing the practice surgeries, we’d wheel that body into
4:23:43 the CT scanner and take a mock CT scan and wheel it back in and conduct all the normal safety checks,
4:23:51 verbally: stop, this patient, we’re confirming his identification, is mannequin number blah, blah,
4:23:58 blah. And then opening the brain in exactly the right spot using standard operative neuronavigation
4:24:05 equipment, standard surgical drills in the same OR that we do all of our practice surgeries at
4:24:11 Neuralink, and having the skull open and have the brain pulse, which adds a degree of difficulty
4:24:18 for the robot to perfectly precisely plan and insert those electrodes to the right depth and
4:24:28 location. And so we kind of broke new ground on how extensively we practiced for this surgery.
4:24:34 So there was a historic moment, a big milestone for Neuralink,
4:24:42 in part for humanity with the first human getting a Neuralink implant in January of this year.
4:24:49 Take me through the surgery on Nolan. What did it feel like to be part of this?
4:24:57 Yeah. Well, we were lucky to have just incredible partners at the Barrow Neurological Institute. They
4:25:08 are, I think, the premier neurosurgical hospital in the world. They made everything as easy as
4:25:15 possible for the trial to get going and helped us immensely with their expertise on how to
4:25:23 arrange the details. It was a much more high pressure surgery in some ways. I mean, even though
4:25:30 the outcome wasn’t particularly in question in terms of our participant’s safety,
4:25:38 the number of observers, the number of people, there’s conference rooms full of people watching
4:25:45 live streams in the hospital rooting for this to go perfectly. And that just adds pressure that
4:25:51 is not typical for even the most intense production neurosurgery,
4:25:56 say removing a tumor or placing deep brain stimulation electrodes.
4:26:04 And it had never been done on a human before. There were unknowns. And so,
4:26:14 definitely a moderate pucker factor there for the whole team, not knowing if we were going to
4:26:23 encounter, say, a degree of brain movement that was unanticipated or a degree of brain sag that
4:26:29 took the brain far away from the skull and made it difficult to insert or some other unknown,
4:26:36 unknown problem. Fortunately, everything went well. And that surgery is one of the smoothest
4:26:39 outcomes we could have imagined.
4:26:44 Were you nervous? I mean, that’s like a quarterback in the Super Bowl kind of situation.
4:26:50 Extremely nervous. Extremely. I was very pleased when it went well and when it was over.
4:26:57 Looking forward to number two. Even with all that practice, all of that, I had just never been
4:27:02 in a situation that’s so high-stakes in terms of people watching. And we should also probably
4:27:10 mention, given how the media works, a lot of people may be in a dark kind of way hoping it
4:27:21 doesn’t go well. Well, I think wealth is easy to hate or envy or whatever. And I think there’s a
4:27:29 whole industry around driving clicks and bad news is great for clicks. And so, any way to
4:27:36 take an event and turn it into bad news is going to be really good for clicks.
4:27:41 It just sucks because I think it puts pressure on people. It discourages people from
4:27:46 trying to solve really hard problems because to solve hard problems, you have to go into the
4:27:51 unknown. You have to do things that haven’t been done before. And you have to take risks.
4:27:56 Calculated risks. You have to do all kinds of safety precautions, but risks nevertheless.
4:28:03 I just wish there would be more celebration of that, of the risk taking versus people just
4:28:10 waiting on the sidelines waiting for failure and then pointing out the failure. Yeah, it sucks.
4:28:15 But in this case, it’s really great that everything went just flawlessly, but
4:28:21 it’s unnecessary pressure, I would say. Now that there is a human with literal skin in the game,
4:28:27 there’s a participant whose well-being rides on this doing well. You have to be a pretty
4:28:35 bad person to be rooting for that to go wrong. And so, hopefully, people look in the mirror and
4:28:41 realize that at some point. So, did you get to actually front row seat, like watch the robot
4:28:49 work? Like what? You get to see the whole thing? Yeah, I mean, because an MD needs to be in charge
4:28:56 of all of the medical decision-making throughout the process, I unscrubbed from the surgery
4:28:59 after exposing the brain and presenting it to the robot and
4:29:09 placed the targets on the software interface that tells the robot where it’s going to
4:29:15 insert each thread. That was done with my hand on the mouse, for whatever that’s worth.
4:29:22 So, you were the one placing the targets? Yeah. Oh, cool. So, the robot
4:29:29 with computer vision provides a bunch of candidates, and you kind of finalize the decision.
4:29:34 Right. The software engineers are amazing on this team and so,
4:29:42 they actually provided an interface where you can essentially use a lasso tool and select a
4:29:48 prime area of brain real estate and it will automatically avoid the blood vessels in that
4:29:56 region and automatically place a bunch of targets. So, that allows the human robot operator to select
4:30:04 really good areas of brain and make dense applications of targets in those regions.
4:30:11 The regions we think are going to have the most high-fidelity representations of finger movements
4:30:17 and arm movement intentions.
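A toy sketch of that lasso-and-avoid geometry, not Neuralink's actual software: lay a grid of candidate targets inside a hypothetical selection polygon and drop any that fall within a safety margin of a segmented vessel mask. The mask, polygon, margin, and pitch are all made up.

```python
import numpy as np
from matplotlib.path import Path
from scipy.ndimage import distance_transform_edt

# Toy 2D view of the cortical surface: a binary vessel mask plus a
# user-drawn lasso polygon (all values here are invented).
h, w = 100, 100
vessels = np.zeros((h, w), dtype=bool)
vessels[48:52, :] = True        # a fake vessel crossing the field
vessels[:, 70:73] = True        # another one

lasso = Path([(20, 20), (90, 25), (85, 80), (15, 75)])  # hypothetical selection

# Distance (pixels) from every pixel to the nearest vessel pixel.
dist_to_vessel = distance_transform_edt(~vessels)

margin = 5.0   # safety margin around vessels, arbitrary
pitch = 6      # spacing of candidate targets, arbitrary
ys, xs = np.mgrid[0:h:pitch, 0:w:pitch]
pts = np.c_[xs.ravel(), ys.ravel()]            # (x, y) candidates

inside = lasso.contains_points(pts)
safe = dist_to_vessel[pts[:, 1], pts[:, 0]] > margin
targets = pts[inside & safe]
print(f"{len(targets)} insertion targets placed, vessels avoided")
```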
4:30:23 I’ve seen images of this, and for me, with my OCD, they are for some reason really pleasant. I think there’s a subreddit called Oddly Satisfying.
4:30:29 Yeah. Love that subreddit. It’s oddly satisfying to see the different target sites avoiding the
4:30:36 blood vessels and also maximizing the usefulness of those locations for the signal. It just feels
4:30:41 good. It’s like, ah. Yeah, it’s nice. As a person who has a visceral reaction to the brain bleeding,
4:30:45 I can tell you. Yes, especially so. It’s extremely satisfying watching the electrodes
4:30:52 themselves go into the brain and not cause bleeding. Yeah. Yeah, so you said the feeling
4:31:00 was of relief when everything went perfectly. Yeah. How deep in the brain can you currently go
4:31:08 and eventually go? Let’s say on the neural link side, it seems the deeper you go in the brain,
4:31:15 the more challenging it becomes. Yeah. Talking broadly about neurosurgery, we can get anywhere.
4:31:24 It’s routine for me to put deep brain-stimulating electrodes near the very bottom of the brain,
4:31:31 entering from the top and passing about a 2-millimeter wire all the way into the bottom
4:31:37 of the brain. That’s not revolutionary. A lot of people do that. We can do that with very high
4:31:48 precision. I use a robot from Globus to do that surgery several times a month. It’s pretty routine.
4:31:55 What are your eyes in that situation? What kind of technology can you use to visualize
4:32:01 where you are to light your way? Yeah, so it’s a cool process on the software side. You take a
4:32:08 preoperative MRI that’s extremely high-resolution data of the entire brain. You put the patient to
4:32:16 sleep, put their head in a frame that holds the skull very rigidly, and then you take a CT scan
4:32:22 of their head while they’re asleep with that frame on, and then merge the MRI and the CT in
4:32:29 software. You have a plan based on the MRI where you can see these nuclei deep in the brain.
4:32:37 You can’t see them on CT, but if you trust the merging of the two images, then you indirectly
4:32:43 know on the CT where that is, and therefore indirectly know where, in reference to the
4:32:51 titanium frame screwed to their head, those targets are. This is ’60s technology: to manually
4:33:00 compute trajectories given the entry point and target, and dial in some goofy-looking titanium
4:33:10 actuators with little tick marks on them. The modern version of that is to
4:33:18 use a robot, just like a little KUKA arm you might see building cars at the Tesla factory.
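Once the MRI and CT are merged, the manual arithmetic being described is plain vector geometry: a straight path from an entry point to a target. A simplified sketch; the coordinates are hypothetical, and the angle conventions aren't any particular frame's or vendor's.

```python
import numpy as np

def trajectory(entry_mm, target_mm):
    """Direction, length, and simplified angles for a straight path
    from a skull entry point to a deep target, both in the same
    frame-registered coordinate system (millimeters)."""
    v = np.asarray(target_mm, float) - np.asarray(entry_mm, float)
    length = float(np.linalg.norm(v))
    d = v / length
    ring = np.degrees(np.arctan2(d[1], d[0]))           # rotation in the axial plane
    arc = np.degrees(np.arccos(np.clip(d[2], -1, 1)))   # tilt away from the +z axis
    return d, length, ring, arc

# Hypothetical frame coordinates: entry near the top of the head,
# target deep in the brain.
d, length, ring, arc = trajectory([30.0, 60.0, 70.0], [12.0, 3.0, -2.0])
print(f"path length {length:.1f} mm, ring {ring:.1f} deg, arc {arc:.1f} deg")
```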
4:33:25 This small robot arm can show you the trajectory that you intended from the pre-op MRI
4:33:31 and establish a very rigid holder through which you can drill a small hole in the skull
4:33:38 and pass a small rigid wire deep into that area of the brain that’s hollow and put your electrode
4:33:43 through that hollow wire and then remove all of that except the electrode. You end up with the
4:33:51 electrode very, very precisely placed far from the skull surface. Now, that’s standard technology
4:34:01 that’s already been out in the world for a while. Neuralink right now is focused entirely on
4:34:10 cortical targets, surface targets, because there’s no trivial way to get, say, hundreds of wires
4:34:16 deep inside the brain without doing a lot of damage. Your question, what do you see? Well,
4:34:22 I see an MRI on a screen. I can’t see everything that that DBS electrode is passing through
4:34:29 on its way to that deep target. It’s accepted with this approach that there’s going to be about
4:34:37 one in a hundred patients who have a bleed somewhere in the brain as a result of passing
4:34:45 that wire blindly into the deep part of the brain. That’s not an acceptable safety profile for
4:34:52 Neuralink. We start from the position that we want this to be dramatically maybe two or three
4:35:00 orders of magnitude safer than that. Safe enough, really, that you or I without a profound medical
4:35:06 problem might on our lunch break someday say, “Yeah, sure, I’ll get that. I’ve been meaning to upgrade
4:35:18 to the latest version.” The safety constraints, given that, are high. We haven’t settled on a
4:35:22 final solution for arbitrarily approaching deep targets in the brain.
4:35:27 It’s interesting because you have to avoid blood vessels somehow. Maybe there’s creative
4:35:32 ways of doing the same thing, like mapping out high-resolution geometry of blood vessels,
4:35:39 and then you can go in blind. But how do you map that out in a way that’s super stable,
4:35:41 let’s say? There’s a lot of interesting challenges there, right?
4:35:41 Yeah.
4:35:44 But there’s a lot to do on the surface, luckily.
4:35:51 Exactly. We’ve got vision on the surface. We actually have made a huge amount of progress sewing
4:36:00 electrodes into the spinal cord as a potential workaround for a spinal cord injury that would
4:36:07 allow a brain-mounted implant to translate motor intentions to a spine-mounted implant that can
4:36:12 affect muscle contractions in previously paralyzed arms and legs.
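Conceptually, the bridge is a decode-then-stimulate loop: the brain-mounted implant turns recorded features into a motor intention, and the spine-mounted implant turns that intention into per-muscle stimulation. A deliberately crude sketch with made-up dimensions and gains, nothing like the real signal chain.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up dimensions: 64 recording channels, 4 target muscle groups.
W_decode = rng.standard_normal((4, 64)) * 0.1     # brain side: features -> intention
max_current_ma = np.array([8.0, 8.0, 6.0, 6.0])   # spine side: per-muscle ceiling

def bridge_tick(spike_band_power):
    """One tick of the toy bridge: decode a 4-D motor intention,
    clip to [0, 1], scale to stimulation currents in mA."""
    intention = W_decode @ spike_band_power
    drive = np.clip(intention, 0.0, 1.0)
    return drive * max_current_ma

features = rng.standard_normal(64)  # stand-in for recorded features
print("stimulation (mA):", np.round(bridge_tick(features), 2))
```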
4:36:19 That’s just incredible. The effort there is to try to bridge the brain to the spinal cord,
4:36:24 to the peripheral nerve. How hard is that to do?
4:36:29 We have that working in very crude forms in animals.
4:36:31 That’s amazing. Yeah, we’ve done it.
4:36:37 Similar to with Nolan, where he’s able to digitally move the cursor, here you’re doing
4:36:42 the same kind of communication, but with the actual effectors that you have.
4:36:45 That’s fascinating.
4:36:54 We have anesthetized animals doing grasp and moving their legs in a walking pattern,
4:37:02 again, early days. The future is bright for this kind of thing, and people with paralysis
4:37:07 should look forward to that bright future. They’re going to have options.
4:37:13 Yeah, and there’s a lot of intermediate or extra options, where you take, like, an
4:37:21 Optimus robot arm, and you’re able to control the arm, the fingers, the hands,
4:37:26 as a prosthetic. Exoskeletons are getting better too.
4:37:27 Exoskeletons.
4:37:33 Yeah, so that goes hand-in-hand. Although I didn’t quite understand until thinking
4:37:39 about it deeply and doing more research about Neuralink, how much you can do on the digital
4:37:44 side, so there’s digital telepathy. I didn’t quite understand that you can really map
4:37:55 the intention, as you described in the hand-knob area, that you can map the intention. Just imagine
4:38:01 it. Think about it. That intention can be mapped to actual action in the digital world,
4:38:08 and now more and more so much can be done in the digital world that it can reconnect you to the
4:38:13 outside world. It can allow you to have freedom, have independence if you’re a quadriplegic.
4:38:17 That’s really powerful. You can go really far with that.
4:38:23 Yeah, our first participant is incredible. He’s breaking world records left and right.
4:38:31 And he’s having fun with it. It’s great. Just going back to the surgery, your whole journey,
4:38:35 you mentioned to me offline, you have surgery on Monday, so you’re like, you’re doing
4:38:40 surgery all the time. Yeah, maybe the ridiculous question, what does it take to get good at
4:38:47 surgery? Practice, repetitions. It’s just the same as with anything else. There’s a million ways of
4:38:52 people saying the same thing and selling books saying it. Whether you call it 10,000 hours, or you call it
4:38:58 spending some chunk of your life, some percentage of your life, focusing on this, obsessing about
4:39:08 getting better at it. Repetitions, humility, recognizing that you aren’t perfect at any
4:39:14 stage along the way, recognizing you’ve got improvements to make in your technique,
4:39:19 being open to feedback and coaching from people with a different perspective on how to do it,
4:39:30 and then just the constant will to do better. Fortunately, if you’re not a sociopath, I think
4:39:37 your patients bring that with them to the office visits every day. They force you to want to do
4:39:42 better all the time. Yeah, just step up. I mean, it’s a real human being, a real human being that
4:39:48 you can help. Yeah. So every surgery, even if it’s the same exact surgery, is there a lot of
4:39:55 variability between that surgery and a different person? Yeah, a fair bit. I mean, a good example
4:40:04 for us is the angle of the skull, relative to the normal plane of the body axis, over
4:40:10 the hand knob; there’s a pretty wide variation. I mean, some people have really flat skulls,
4:40:18 and some people have really steeply angled skulls over that area, and that has consequences for
4:40:27 how their head can be fixed in the frame that we use and how the robot has to approach the skull.
4:40:35 Yeah, people’s bodies are built as differently as the people you see walking down the street,
4:40:42 as much variability in body shape and size as you see there. We see in brain anatomy and skull
4:40:50 anatomy, there are some people who we’ve had to kind of exclude from our trial for having skulls
4:40:57 that are too thick or too thin or scalp that’s too thick or too thin. I think we have the middle
4:41:05 97% or so of people, but you can’t account for all human anatomy variability.
4:41:13 How much mushiness and mess is there? Because I’ve taken biology classes, the diagrams are
4:41:19 always really clean and crisp. Neuroscience, the pictures of neurons are always really nice.
4:41:28 But whenever I look at pictures of real brains, I don’t know what’s going on.
4:41:35 So how messy are biological systems in reality? How hard is it to figure out what’s going on?
4:41:42 Not too bad. Once you really get used to this, that’s where experience and skill and
4:41:48 education really come into play. If you stare at 1,000 brains,
4:41:57 it becomes easier to mentally peel back, say, for instance, the blood vessels that are obscuring
4:42:03 the sulci and gyri, the wrinkle pattern of the surface of the brain. Occasionally,
4:42:09 when you’re first starting to do this and you open the skull, it doesn’t match what you thought you
4:42:20 were going to see based on the MRI. With more experience, you learn to peel back that layer
4:42:26 of blood vessels and see the underlying pattern of wrinkles in the brain and use that as a landmark
4:42:33 for where you are. The wrinkles are a landmark? So I was describing hand knob earlier. That’s
4:42:40 a pattern of the wrinkles in the brain. It’s sort of this Greek letter omega-shaped area of the brain.
4:42:47 So you could recognize the hand knob area. If I show you 1,000 brains and give you one minute
4:42:52 with each, you’d be like, yep, that’s that. Sure. And so there is some uniqueness to that area of
4:43:07 the brain, like in terms of the geometry, the topology of the thing. Yeah. Whereabouts is it?
4:43:07 So you have this strip of brain running down the top called the primary motor area, and I’m sure
4:43:12 you’ve seen this picture of the homunculus laid over the surface of the brain, the weird little
4:43:20 guy with huge lips and giant hands. That guy sort of lays with his legs up at the top of the brain
4:43:30 and face and arm areas farther down, and then some kind of mouth, lip, tongue areas farther down.
4:43:37 And so the hand is right in there. And then the areas that control speech, at least on the left
4:43:45 side of the brain in most people, are just below that. And so any muscle that you voluntarily move
4:43:53 in your body, the vast majority of those intentions come from that
4:44:01 strip of brain and the wrinkle for hand knob is right in the middle of that. And vision is back
4:44:08 here. Also close to the surface. Vision is a little deeper. And so this gets to your question
4:44:15 about how deep can you get to do vision. We can’t just do the surface of the brain. We have to be
4:44:23 able to go in not as deep as we’d have to go for DBS, but maybe a centimeter deeper than we’re used
4:44:31 to for hand insertions. And so that’s work in progress. That’s a new set of challenges to
4:44:37 overcome. By the way, you mentioned the Utah array, and I just saw a picture of that, and that thing
4:44:44 looks terrifying. Yeah, the bed of nails. Because it’s rigid. And then if you look at the threads,
4:44:48 they’re flexible. What can you say that’s interesting to you about the flexible,
4:44:54 that kind of approach of the flexible threads to deliver the electrodes next to the neurons?
4:45:00 Yeah, I mean, the goal there comes from experience. I mean, we stand on the shoulders of people that
4:45:06 made Utah arrays and used Utah arrays for decades before we ever even came along.
4:45:14 Neuralink arose partly, this approach to technology arose out of a need recognized
4:45:22 after Utah arrays would fail routinely because the rigid electrodes, those spikes
4:45:32 that are literally hammered using an air hammer into the brain, those spikes generate a bad immune
4:45:41 response that encapsulates the electrode spikes in scar tissue, essentially. And so one of the
4:45:48 projects that was being worked on in the Anderson lab at Caltech when I got there was to see if you
4:45:56 could use chemotherapy to prevent the formation of scars. Things are pretty bad when you’re jamming
4:46:03 a bed of nails into the brain and then treating that with chemotherapy to try to prevent scar
4:46:08 tissue. It’s like, maybe we’ve gotten off track here, guys. Maybe there’s a fundamental redesign
4:46:14 necessary. And so Neuralink’s approach of using highly flexible tiny electrodes
4:46:21 avoids a lot of the bleeding, avoids a lot of the immune response that ends up happening
4:46:28 when rigid electrodes are pounded into the brain. And so what we see is our electrode longevity and
4:46:33 functionality and the health of the brain tissue immediately surrounding the electrode
4:46:39 is excellent. I mean, it goes on for years now in our animal models.
4:46:43 What do most people not understand about the biology of the brain?
4:46:46 We’ll mention the vasculature. That’s really interesting.
4:46:50 I think the most interesting, maybe underappreciated fact
4:46:55 is that it really does control almost everything. I mean,
4:47:02 I don’t know, to pull an example out of the blue: imagine you want a lever on fertility. You want to be
4:47:09 able to turn fertility on and off. I mean, there are legitimate targets in the brain itself to
4:47:18 modulate fertility. Or say blood pressure: you want to modulate blood pressure. There are legitimate
4:47:25 targets in the brain for doing that. Things that aren’t immediately obvious as brain problems
4:47:36 are potentially solvable in the brain. I think it’s an underexplored area for
4:47:41 primary treatments of all the things that bother people.
4:47:46 That’s a really fascinating way to look at it. There’s a lot of conditions
4:47:50 we might think have nothing to do with the brain, but they might just be symptoms of
4:47:54 something that actually started in the brain. The actual source of the problem,
4:47:59 the primary source is something in the brain. Not always. Kidney disease is real,
4:48:06 but there are levers you can pull in the brain that affect all of these systems.
4:48:14 There’s knobs. On/off switches and knobs in the brain from which this all originates.
4:48:25 Would you have a Neuralink chip implanted in your brain? Yeah. I think the use case right now is
4:48:34 to use a mouse, and I can already do that, so there’s no value proposition yet. On safety grounds alone,
4:48:37 sure, I’ll do it tomorrow. You say the use case of the mouse.
4:48:43 Is it, after researching all this, partly just watching Nolan have so much fun?
4:48:48 If you can get that bits per second look really high with the mouse,
4:48:55 being able to interact. If you think about it, the way on the smartphone, the way you swipe,
4:49:00 that was transformational how we interact with the thing. It’s subtle. You don’t realize it,
4:49:07 but to be able to touch a phone and to scroll with your finger, that changed everything.
4:49:15 People were sure you need a keyboard to type. There’s a lot of HCI aspects
4:49:21 to that that changed how we interact with computers. There could be a certain rate of speed
4:49:27 with the mouse that would change everything. You might be able to just click around a screen
4:49:37 extremely fast. I can see myself getting a Neuralink for much more rapid interaction
4:49:44 with digital devices. Yeah. I think recording speech intentions from the brain might change
4:49:51 things as well. That’s the value proposition for the average person: a keyboard is a pretty clunky
4:49:58 human interface, requires a lot of training. It’s highly variable in the maximum performance
4:50:08 that the average person can achieve. I think taking that out of the equation and just having a natural
4:50:17 word-to-computer interface might change things for a lot of people.
4:50:21 It’d be hilarious if that is the reason people do it. Even if you have speech-to-text
4:50:26 that’s extremely accurate, and it currently isn’t, but say you’ve gotten it super accurate, it’d be
4:50:32 hilarious if people went for Neuralink just so you avoid the embarrassing aspect of speaking,
4:50:38 like looking like a douchebag, speaking to your phone in public, which is a real constraint.
4:50:46 Yeah. With a bone-conducting case that can be an invisible headphone,
4:50:54 and the ability to think words into software and have it respond to you,
4:51:02 that starts to sound like embedded superintelligence. If you can
4:51:08 silently ask for the Wikipedia article on any subject and have it read to you,
4:51:12 without any observable change happening in the outside world,
4:51:18 for one thing, standardized testing is obsolete.
4:51:25 Yeah. If it’s done well on the UX side, it could change. I don’t know if it transforms society,
4:51:32 but it really can create a shift in the way we interact with digital devices in the way that
4:51:39 smartphone did. Now, just having to look into the safety of everything involved, I would totally
4:51:49 try it. It doesn’t have to go to some incredible thing where it connects all over your brain.
4:51:55 That could be just connecting to the hand knob. You might have a lot of interesting interaction,
4:51:58 human-computer interaction possibilities. That’s really interesting.
4:52:03 Yeah. The technology on the academic side is progressing at light speed here.
4:52:10 I think there was a really amazing paper out of UC Davis, from Sergey Stavisky’s lab,
4:52:18 that basically made an initial solve of speech decode. It was something like a 125,000-word
4:52:23 vocabulary that they were getting with very high accuracy, which is-
4:52:25 So, you’re just thinking the word?
4:52:27 Yeah. Thinking the word and you’re able to get it.
4:52:28 Yeah.
4:52:33 Oh, boy. You have to have the intention of speaking it.
4:52:33 Right.
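For scale, a vocabulary of that size carries about 17 bits of information per word choice; the words-per-second rate below is an assumption, not a figure from the paper.

```python
import math

vocab_size = 125_000                   # vocabulary size quoted above
bits_per_word = math.log2(vocab_size)  # information in one word choice
words_per_sec = 2.0                    # assumed speech-intent rate
print(f"~{bits_per_word:.1f} bits/word, "
      f"~{bits_per_word * words_per_sec:.0f} bits/s at {words_per_sec} words/s")
```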
4:52:39 So, do that inner voice. It’s so amazing to me that you can do the intention,
4:52:44 the signal mapping. All you have to do is just imagine yourself doing it.
4:52:52 If you get the feedback that it actually worked, you can get really good at that.
4:52:56 Your brain will, first of all, adjust and you develop it like any other skill.
4:52:59 Touch typing, you develop it in that same kind of way.
4:53:06 To me, it’s just really fascinating to be able to even to play with that.
4:53:08 Honestly, I would get a Neuralink just to be able to play with that.
4:53:13 Just to play with the capacity, the capability of my mind to learn this skill.
4:53:17 It’s like learning the skill of typing and learning the skill of moving a mouse.
4:53:21 It’s another skill of moving the mouse, not with my physical body,
4:53:23 but with my mind.
4:53:25 I can’t wait to see what people do with it.
4:53:28 I feel like we’re cavemen right now.
4:53:31 We’re banging rocks with a stick and thinking that we’re making music.
4:53:35 At some point, when these are more widespread,
4:53:42 there’s going to be the equivalent of a piano that someone can make
4:53:44 art with their brain in a way that we didn’t even anticipate.
4:53:48 I’m looking forward to it.
4:53:50 Give it to a teenager.
4:53:52 Any time I think I’m good at something, I’ll always go to,
4:53:58 I don’t know, even with the bits per second of playing a video game,
4:53:59 you realize: you give it to a teenager.
4:54:02 You give a Neuralink to a teenager, just a large number of them,
4:54:06 the kind of people that get good at stuff,
4:54:11 and they’re going to get hundreds of bits per second.
4:54:14 Even just with the current technology.
4:54:16 Probably, probably.
4:54:23 Because it’s also addicting, the number-go-up aspect of it,
4:54:25 of improving and training.
4:54:26 It’s almost like a skill.
4:54:30 And plus, there’s a software on the other end that adapts to you.
4:54:34 And especially if the adapting procedure algorithm becomes better and better and better.
4:54:36 You’re learning together.
4:54:38 Yeah. We’re scratching the surface on that right now.
4:54:39 There’s so much more to do.
4:54:46 So on the complete other side of it, you have an RFID chip implanted in you.
4:54:47 Yeah.
4:54:48 This is what I hear.
4:54:49 Nice.
4:54:50 So this is a little subtle thing.
4:54:57 It’s a passive device that you use for unlocking a safe with top secrets.
4:54:58 So what do you use it for?
4:55:00 What’s the story behind it?
4:55:01 I’m not the first one.
4:55:06 There’s this whole community of weirdo bio hackers that have done this stuff.
4:55:13 And I think one of the early use cases was storing private crypto wallet keys and whatever.
4:55:18 I dabbled in that a bit and had some fun with it.
4:55:22 Do you have some Bitcoin implanted in your body somewhere?
4:55:23 You can’t tell where.
4:55:24 Yeah.
4:55:24 Yeah.
4:55:25 Actually, yeah.
4:55:30 It was the modern day equivalent of finding change in the sofa cushions.
4:55:35 After I put some orphan crypto on there that I thought was worthless
4:55:39 and forgot about it for a few years, I went back and found that
4:55:43 some community of people loved it and had propped up the value of it.
4:55:45 And so it had gone up 50 fold.
4:55:48 So there was a lot of change in those cushions.
4:55:51 That’s hilarious.
4:55:56 But the primary use case is mostly as a tech demonstrator.
4:55:58 You know, it has my business card on it.
4:56:02 You can scan that by touching it to your phone.
4:56:07 It opens the front door to my house, you know, whatever, simple stuff.
4:56:07 It’s a cool step.
4:56:09 It’s a cool leap to implant something in your body.
4:56:13 I mean, perhaps it’s a similar leap to Neuralink.
4:56:19 Because for a lot of people, that kind of notion of putting something inside your body,
4:56:22 something electronic inside a biological system is a big leap.
4:56:22 Yeah.
4:56:27 We have a kind of a mysticism around the barrier of our skin.
4:56:30 We’re completely fine with knee replacements, hip replacements, you know,
4:56:33 dental implants.
4:56:44 But, you know, there’s a mysticism still around the inviolable barrier that the skull represents.
4:56:49 And I think that needs to be treated like any other pragmatic barrier.
4:56:55 You know, the question isn’t how incredible is it to open the skull?
4:56:58 The question is, you know, what benefit can we provide?
4:57:01 So from all the surgeries you’ve done, from everything you understand in the brain,
4:57:05 how much does neuroplasticity come into play?
4:57:06 How adaptable is the brain?
4:57:12 For example, just even in the case of healing from surgery or adapting to the post-surgery
4:57:13 situation?
4:57:20 The answer that is sad for me and other people of my demographic is that, you know, plasticity
4:57:23 decreases with age, healing decreases with age.
4:57:28 I have too much gray hair to be optimistic about that.
4:57:34 There are theoretical ways to increase plasticity using electrical stimulation.
4:57:41 But nothing that is, you know, totally proven out as a robust enough mechanism to offer widely
4:57:42 to people.
4:57:50 But yeah, I think there’s cause for optimism that we might find something useful in terms of,
4:57:53 say, an implanted electrode that improves learning.
4:58:01 Certainly, there’s been some really amazing work recently from Nicholas Schiff, Jonathan Baker,
4:58:07 you know, and others who have a cohort of patients with moderate traumatic brain injury,
4:58:12 who have had electrodes placed in a deep nucleus in the brain called the centromedian
4:58:15 nucleus, or just near the centromedian nucleus.
4:58:19 And when they apply small amounts of electricity to that part of the brain,
4:58:23 it’s almost like electronic caffeine.
4:58:26 They’re able to improve people’s attention and focus.
4:58:31 They’re able to improve how well people can perform a task.
4:58:36 I think in one case, someone who was unable to work, after the device was turned on,
4:58:37 was able to get a job.
4:58:46 And that’s sort of one of the holy grails for me with Neuralink and other technologies like this
4:58:49 is from a purely utilitarian standpoint.
4:58:58 Can we make people able to take care of themselves and their families economically again?
4:59:04 Can we make it so someone who’s fully dependent and even maybe requires a lot of caregiver
4:59:09 resources, can we put them in a position to be fully independent, taking care of themselves,
4:59:11 giving back to their communities?
4:59:18 I think that’s a very compelling proposition and what motivates a lot of what I do and
4:59:21 what a lot of the people at Neuralink are working for.
4:59:24 It’s just a cool possibility that if you put a Neuralink in there,
4:59:31 that the brain adapts, like the other part of the brain adapts too and integrates it.
4:59:33 The capacity of the brain to do that is really interesting.
4:59:37 It’s probably unknown, the degree to which you can do that.
4:59:45 But you’re now connecting an external thing to it, especially once it’s doing stimulation.
4:59:55 The biological brain and the electronic brain outside of it working together,
4:59:57 like the possibilities, they’re really interesting.
4:59:59 That’s still unknown, but interesting.
5:00:04 It feels like the brain is really good at adapting to whatever.
5:00:08 But of course, it is a system that by itself is already,
5:00:15 like everything serves a purpose and so you don’t want to mess with it too much.
5:00:21 Yeah, it’s like eliminating a species from an ecology.
5:00:25 You don’t know what the delicate interconnections and dependencies are.
5:00:31 The brain is certainly a delicate, complex beast and we don’t know
5:00:40 every potential downstream consequence of a single change that we make.
5:00:47 Do you see yourself doing, as you mentioned, the P1 surgery, surgeries of P2, P3, P4, P5,
5:00:50 just more and more humans?
5:00:56 I think it’s a certain kind of brittleness, or a failure on the company’s side,
5:00:59 if we need me to do all the surgeries.
5:01:09 I think something that I would very much like to work towards is a process that is so simple
5:01:13 and so robust on the surgery side that literally anyone could do it.
5:01:20 We want to get away from requiring intense expertise or intense experience
5:01:28 to have this successfully done and make it as simple and translatable as possible.
5:01:32 I would love it if every neurosurgeon on the planet had no problem doing this.
5:01:37 I think we’re probably far from a regulatory environment that would allow
5:01:43 people that aren’t neurosurgeons to do this, but not impossible.
5:01:46 All right, I’ll sign up for that.
5:01:50 Did you ever anthropomorphize the robot R1?
5:01:53 Do you give it a name?
5:01:56 Do you see it as a friend that’s working together with you?
5:01:58 I mean, to a certain degree, it’s–
5:01:59 Or an enemy who’s going to take the job.
5:02:06 To a certain degree, it’s a complex relationship.
5:02:08 All the good relationships are.
5:02:13 It’s funny when in the middle of the surgery, there’s a part of it where I stand,
5:02:15 basically shoulder to shoulder with the robot.
5:02:23 If you’re in the room reading the body language, it’s my brother-in-arms there
5:02:26 where we’re working together on the same problem.
5:02:30 Yeah, I’m not threatened by it.
5:02:32 Keep telling yourself that.
5:02:37 How have all the surgeries that you’ve done over the years,
5:02:44 the people you’ve helped and the high stakes that you’ve mentioned,
5:02:47 how has that changed your understanding of life and death?
5:02:58 Yeah, it gives you a very visceral sense.
5:03:03 And this may sound trite, but it gives you a very visceral sense that death is inevitable.
5:03:09 On one hand, as a neurosurgeon,
5:03:14 you’re deeply involved in these hard-to-fathom tragedies:
5:03:22 young parents dying, leaving a four-year-old behind, say.
5:03:31 And on the other hand, it takes the sting out of it a bit because
5:03:36 you see how just mind-numbingly universal death is.
5:03:41 There’s zero chance that I’m going to avoid it.
5:03:53 I know techno-optimists right now and longevity buffs right now would disagree on that 0.000%
5:03:59 estimate, but I don’t see any chance that our generation is going to avoid it.
5:04:07 Entropy is a powerful force, and we are very ornate, delicate, brittle DNA machines that
5:04:11 aren’t up to the cosmic ray bombardment that we’re subjected to.
5:04:19 So on the one hand, every human that has ever lived died or will die.
5:04:26 On the other hand, it’s just one of the hardest things to imagine
5:04:32 inflicting on anyone that you love is having them gone.
5:04:37 I mean, I’m sure you’ve had friends that aren’t living anymore, and it’s hard to even think about
5:04:51 them. And so I wish I had arrived at the point of nirvana where death doesn’t have a sting,
5:04:56 I’m not worried about it, but I can at least say that I’m comfortable with the certainty of it.
5:05:05 even if I haven’t found out how to take the tragedy out of it when I think about my kids,
5:05:10 either not having me or me not having them or my wife.
5:05:14 Maybe I’ve come to accept the intellectual certainty of it, but
5:05:21 it may be the pain that comes with losing the people you love.
5:05:27 I don’t think I’ve come to understand the existential aspect of it.
5:05:37 Like that this is going to end. And I don’t mean like in some trite way, I mean like
5:05:44 it certainly feels like it’s not going to end. Like you live life like it’s not going to end.
5:05:53 And the fact that this light that’s shining, this consciousness is going to no longer be
5:06:00 at one moment, maybe today. It fills me, when I really am able to load all that in,
5:06:07 with Ernest Becker’s terror. Like it’s a real fear. I think people aren’t always honest about
5:06:12 how terrifying it is. I think the more you are able to really think through it,
5:06:17 the more terrifying it is. It’s not such a simple thing. Oh, well, that’s the way life is.
5:06:25 If you really can load that in, it’s hard. But I think that’s why the Stoics did it,
5:06:33 because it helps you get your shit together and be like, well, every single moment you’re alive is
5:06:41 just beautiful. And it’s terrifying that it’s going to end. And it’s almost like
5:06:48 you’re shivering in the cold, a child helpless, this kind of feeling. And then it makes you
5:06:53 when you have warmth, when you have the safety, when you have the love to really appreciate it.
5:07:02 I feel like sometimes in your position, when you mentioned armor, just having to see death,
5:07:10 it might make you not be able to see that, the finiteness of life. Because if you kept looking
5:07:18 at that, it might break you. So it’s good to know that you’re kind of still struggling with that.
5:07:25 There’s the neurosurgeon, and then there’s a human. And the human is still able to struggle with
5:07:31 that and feel the fear of that and the pain of that. Yeah, it definitely makes you ask the question
5:07:38 of how long, how many of these can you see and not say, I can’t do this anymore.
5:07:50 But you said it well. I think it gives you an opportunity to just appreciate that you’re alive
5:08:01 today. And I’ve got three kids and an amazing wife, and I’m really happy. Things are good.
5:08:07 I get to help on a project that I think matters. I think it moves us forward. I’m a very lucky person.
5:08:15 It’s the early steps of a potentially gigantic leap for humanity. It’s a really interesting one.
5:08:20 And it’s cool because you read about all this stuff in history where it’s like the early days.
5:08:26 I’ve been reading, before going to the Amazon, I would read about explorers that would go and
5:08:32 explore even the Amazon jungle for the first time. Those are the early steps. Or early steps into
5:08:39 space, early steps in any discipline in physics and mathematics. And it’s cool because this is like
5:08:46 on the grand scale, these are the early steps into delving deep into the human brain. So not
5:08:50 just observing the brain, but you’d be able to interact with the human brain. It’s going to
5:08:57 help a lot of people, but it also might help us understand what the hell’s going on in there.
5:09:02 Yeah, I think ultimately we want to give people more levers that they can pull,
5:09:09 like you want to give people options. If you can give someone a dial that they can turn
5:09:17 on how happy they are, I think that makes people really uncomfortable. But
5:09:24 now talk about major depressive disorder. Talk about people that are committing suicide at an
5:09:37 alarming rate in this country and try to justify that queasiness in that light. You can give people
5:09:45 a knob to take away suicidal ideation, suicidal intention. I would give them that knob. I don’t
5:09:50 know how you justify not doing that. You can think about all the suffering that’s going on in the
5:09:55 world. Every single human being that’s suffering right now would be a glowing red dot. The more
5:10:00 suffering, the more it’s glowing and you just see the map of human suffering. Any technology that
5:10:07 allows you to dim that light of suffering on a grand scale is pretty exciting because there’s
5:10:19 a lot of people suffering and most of them suffer quietly. We look away too often and we
5:10:23 should remember those who are suffering because once again, most of them are suffering quietly.
5:10:28 Well, on a grander scale, the fabric of society, people have a lot of complaints about
5:10:35 how our social fabric is working or not working, how our politics is working or not working.
5:10:46 Those things are made of neurochemistry too in aggregate. Our politics is composed of individuals
5:10:53 with human brains and the way it works or doesn’t work is potentially tunable
5:11:02 in the sense that, I don’t know, say remove our addictive behaviors or tune our addictive behaviors
5:11:10 for social media or our addiction to outrage, our addiction to sharing the most angry political
5:11:23 tweet we can find. I don’t think that leads to a functional society. If you had options for
5:11:31 people to moderate that maladaptive behavior, there could be huge benefits to society. Maybe we
5:11:37 could all work together a little more harmoniously toward useful ends. There’s a sweet spot, like
5:11:43 you mentioned, you don’t want to completely remove all the dark sides of human nature because those
5:11:47 kind of are somehow necessary to make the whole thing work, but there’s a sweet spot.
5:11:52 Yeah, I agree. You got to suffer a little, just not so much that you lose hope.
5:11:57 When you, all the surgeries you’ve done, have you seen consciousness in there ever?
5:12:03 Was there like a glowing light? I have this sense that I never found it,
5:12:11 never removed it, like a dementor in Harry Potter. I have this sense that consciousness is a lot
5:12:21 less magical than our instincts want to claim it is. It seems to me like a useful analog for
5:12:31 thinking about what consciousness is in the brain is that we have a really good intuitive
5:12:36 understanding of what it means to touch your skin and know what’s being touched.
5:12:45 I think consciousness is just that level of sensory mapping applied to the thought processes in the
5:12:53 brain itself. What I’m saying is consciousness is the sensation of some part of your brain being
5:13:00 active. You feel it working. You feel the part of your brain that thinks of red things or
5:13:10 winged creatures or the taste of coffee. You feel those parts of your brain being active the
5:13:18 way that I’m feeling my palm being touched. That sensory system that feels the brain working
5:13:25 is consciousness. It’s so brilliant. It’s the same way. It’s the sensation of touch when you’re
5:13:32 touching a thing. Consciousness is the sensation of you feeling your brain working, your brain
5:13:41 thinking, your brain perceiving. Which isn’t like a warping of space-time or some quantum
5:13:46 field effect. It’s nothing magical. People always want to ascribe to consciousness
5:13:54 something truly different. There’s this awesome long history of people looking at whatever the
5:14:00 latest discovery in physics is to explain consciousness because it’s the most magical,
5:14:07 the most out there thing that you can think of. People always want to do that with consciousness.
5:14:14 I don’t think that’s necessary. It’s just a very useful and gratifying way of feeling your brain
5:14:20 work. And as we said, it’s one heck of a brain. Yeah. Everything we see around us, everything
5:14:26 we love, everything that’s beautiful came from brains like these. It’s all electrical activity
5:14:33 happening inside your skull. And I for one am grateful that it’s people like you that are
5:14:39 exploring all the ways that it works and all the ways it can be made better.
5:14:45 Matthew, thank you so much for talking today. It’s been a joy. Thanks for listening to this
5:14:53 conversation with Matthew McDougall. And now, dear friends, here’s Bliss Chapman, Brain Interface
5:15:00 Software Lead at Neuralink. You told me that you’ve met hundreds of people with spinal cord
5:15:06 injuries or with ALS, and that your motivation for helping at Neuralink is grounded in wanting
5:15:11 to help them. Can you describe this motivation? Yeah. First, just a thank you to all the people
5:15:15 I’ve gotten a chance to speak with for sharing their stories with me. I don’t think there’s any
5:15:20 world, really, in which I can share their stories in as powerful a way as they can.
5:15:25 But just I think to summarize at a very high level what I hear over and over again is that
5:15:32 people with ALS or severe spinal cord injury in a place where they basically can’t move physically
5:15:37 anymore really at the end of the day are looking for independence. And that can mean different
5:15:41 things for different people. For some folks it can mean the ability just to be able to communicate
5:15:45 again independently without needing to wear something on their face, without needing a care
5:15:50 taker to be able to put something in their mouth. For some folks it can mean independence to be able
5:15:55 to work again, to be able to navigate a computer digitally efficiently enough to be able to get
5:15:59 a job, to be able to support themselves, to be able to move out and ultimately be able to support
5:16:05 themselves after their family maybe isn’t there anymore to take care of them. And for some folks
5:16:10 it’s as simple as just being able to respond to their kid in time before they run away or get
5:16:19 interested in something else. And these are deeply personal and sort of very human problems.
5:16:23 And what strikes me again and again when talking with these folks is that this is actually an
5:16:28 engineering problem. This is a problem that with the right resources, with the right team,
5:16:34 we can make a lot of progress on. And at the end of the day, I think that’s a deeply inspiring
5:16:37 message and something that makes me excited to get up every day.
5:16:43 So it’s both an engineering problem in terms of a BCI, for example, that can give them capabilities
5:16:48 where they can interact with the world. But also on the other side, it’s an engineering problem for
5:16:52 the rest of the world to make it more accessible for people living with quadriplegia.
5:16:56 Yeah, and I’ll take a broad view sort of lens on this for a second. I think
5:17:02 I’m very in favor of anyone working in this problem space. So beyond BCI, I’m happy
5:17:06 and excited and willing to support in any way I can folks working on eye tracking systems, working
5:17:11 on, you know, speech-to-text systems, working on head trackers or mouth sticks or quad sticks.
5:17:16 I’ve met many engineers and folks in the community that do exactly those things. And
5:17:20 I think for the people we’re trying to help, it doesn’t matter what the complexity of the solution
5:17:26 is as long as the problem is solved. And I want to emphasize that there can be many solutions out
5:17:31 there that can help with these problems. And BCI is one of a collection of such solutions.
5:17:36 So BCI in particular, I think, offers several advantages here. And I think the folks that
5:17:39 recognize this immediately are usually the people who have spinal cord injury or
5:17:42 some form of paralysis. Usually you don’t have to explain to them why this might be something
5:17:45 that could be helpful. It’s usually pretty self-evident. But for the rest of us,
5:17:49 folks that don’t live with severe spinal cord injury or who don’t know somebody with ALS,
5:17:54 it’s not often obvious why you would want a brain implant to be able to connect and navigate a
5:17:59 computer. And it’s surprisingly nuanced to the degree that I’ve learned a huge amount just
5:18:03 working with Noland in the first Neuralink clinical trial and understanding from him,
5:18:08 in his words, why this device is impactful for him. And it’s a nuanced topic. It can be the
5:18:12 case that even if you can achieve the same thing, for example, with a mouth stick when
5:18:16 navigating a computer, he doesn’t have access to that mouth stick every single minute of the day.
5:18:20 He only has access when someone is available to put it in front of him. And so BCI can really offer
5:18:26 a level of independence and autonomy that if it wasn’t literally physically part of your body,
5:18:30 it would be hard to achieve in any other way. So there’s a lot of fascinating
5:18:35 aspects to what it takes to get Noland to be able to control a cursor on the screen with his mind.
5:18:40 You texted me something that I just love. You said, “I was part of the team that interviewed and
5:18:45 selected P1. I was in the operating room during the first human surgery, monitoring live signals
5:18:51 coming out of the brain. I work with the user basically every day to develop new UX paradigms,
5:18:57 decoding strategies. And I was part of the team that figured out how to recover useful BCI to
5:19:03 new world record levels when the signal quality degraded.” We’ll talk about, I think, every aspect
5:19:13 of that. But just zooming out, what was it like to be part of that team and part of that historic,
5:19:18 I would say, historic first? Yeah. I think for me, this is something I’ve been excited about for
5:19:23 close to 10 years now. And so to be able to be even just some small part of making it a reality
5:19:31 is extremely exciting. A couple, maybe special moments during that whole process that I’ll never
5:19:38 really, truly forget. One of them is entering the actual surgery. At that point in time,
5:19:43 I know Nolan quite well. I know his family. And so I think the initial reaction when
5:19:49 Nolan has rolled into the operating room is just, “Oh, shit,” kind of reaction. But at that point,
5:19:55 muscle memory kicks in and you sort of go into, you know, letting your body just do all the
5:20:00 talking. And I had the lucky job in that particular procedure to just be in charge of
5:20:04 monitoring the implant. So my job is to sit there, to look at the signals coming off the implant,
5:20:07 to look at the live brain data streaming off the device as threads are being inserted into the
5:20:12 brain. And just to basically observe and make sure that nothing is going wrong or that there’s no
5:20:16 red flags or fault conditions that we need to go and investigate or pause the surgery to debug.
5:20:21 And because I had that sort of spectator view of the surgery, I had a slightly more
5:20:26 removed perspective than, I think, most folks in the room. I got to sit there and think to myself,
5:20:31 “Wow, you know, that brain is moving a lot.” When you look into the side of the craniectomy that we
5:20:35 stick the threads in, you know, one thing that most people don’t realize is the brain moves.
5:20:41 The brain moves a lot when you breathe, when your heart beats, and you can see it visibly.
5:20:45 So, you know, that’s something that I think was a surprise to me and very, very exciting
5:20:50 to be able to see the brain of someone who you physically know and have talked with at length
5:20:55 actually pulsing and moving inside their skull. And they used that brain to talk to you previously.
5:21:00 And now it’s right there moving. Yeah. Actually, I didn’t realize that in terms of the thread
5:21:06 insertion. So, the Neuralink implant is active during surgery. So, as threads go in, one at a time,
5:21:09 you’re able to start seeing the signal. Yeah.
5:21:11 So, that’s part of the way you test that the thing is working.
5:21:17 Yeah. So, actually, in the operating room, right after we sort of finished all the thread
5:21:20 insertions, I started collecting what’s called broadband data. So, broadband is
5:21:25 basically the most raw form of signal you can collect from a neural link electrode.
5:21:32 It’s essentially a measurement of the local field potential or the voltage essentially
5:21:37 measured by the electrode. And we have a certain mode in our application that allows us to visualize
5:21:42 where detected spikes are. So, it visualizes sort of where in the broadband signal, and it’s a very,
5:21:48 very raw form of the data, a neuron is actually spiking. And so, one of these moments that I’ll
5:21:52 never forget as part of this whole clinical trial is seeing live in the operating room,
5:21:56 while he’s still under anesthesia, beautiful spikes being shown in the application, just
5:22:01 streaming live to a device I’m holding in my hand. So, this is no signal processing,
5:22:05 just the raw data, and then the signal processing is on top of it. You’re seeing the spikes detected.
5:22:09 Right. Yeah. And that’s the UX too.
5:22:11 Yes. Because that looks beautiful as well.
5:22:15 During that procedure, there were actually a lot of cameramen in the room. So,
5:22:19 they also were curious and wanted to see, there’s several neurosurgeons in the room who are all
5:22:23 just excited to see robots taking their job. And they’re all, you know, crowded around a small
5:22:27 little iPhone watching this live brain data stream out of his brain.
5:22:32 What was that like seeing the robot do some of the surgery? So, the computer vision aspect
5:22:39 where it detects all the spots that avoid the blood vessels, and then obviously with human
5:22:46 supervision, then actually doing the really high precision connection of the threads to the brain.
5:22:51 That’s a good question. My answer is going to be pretty lame here, but it was boring.
5:22:56 I’ve seen it so many times. Yeah. That’s exactly how you want surgery to be. You want it to be
5:23:02 boring. Yeah. Because I’ve seen it so many times. I’ve seen the robot do the surgery literally
5:23:08 hundreds of times. And so, it was just one more time. Yeah. All the practice surgeries and proxies,
5:23:15 and this is just another day. Yeah. So, what about when Nolan woke up? Well, do you remember
5:23:23 a moment where he was able to move the cursor, not move the cursor, but get signal from the brain
5:23:29 such that it was able to show that there’s a connection? Yeah. Yeah. So, we are quite excited
5:23:33 to move as quickly as we can, and Nolan was really, really excited to get started. He wanted
5:23:38 to get started actually the day of surgery, but we waited till the next morning, very patiently.
5:23:46 It’s a long night. And the next morning in the ICU where he was recovering, he wanted to get
5:23:50 started and actually start to understand what kind of signal we can measure from his brain.
5:23:55 And maybe for folks who are not familiar with the Neuralink system, we implant the Neuralink
5:23:59 system or the Neuralink implant in the motor cortex. So, the motor cortex is responsible
5:24:04 for representing things like motor intent. So, if you imagine closing and opening your hand,
5:24:08 that kind of signal representation would be present in the motor cortex. If you imagine
5:24:12 moving your arm back and forth or wiggling a pinky, this sort of signal can be present in the
5:24:17 motor cortex. So, one of the ways we start to sort of map out what kind of signal do we actually
5:24:21 have access to in any particular individual’s brain is through this task called body mapping.
5:24:24 And body mapping is where you essentially present a visual to the user and you say,
5:24:30 “Hey, imagine doing this.” And the visual is, you know, a 3D hand opening, closing, or index finger
5:24:35 modulating up and down. And you ask the user to imagine that, and obviously you can’t see them
5:24:39 do this because they’re paralyzed. So, you can’t see them actually move their arm. But while they
5:24:44 do this task, you can record neural activity and you can basically offline model and check,
5:24:48 “Can I predict or can I detect the modulation corresponding with those different actions?”
5:24:52 And so, we did that task and we realized, “Hey, there’s actually some modulation associated with
5:24:56 some of his hand motion,” which was the first indication that, “Okay, we can potentially
5:25:00 use that modulation to do useful things in the world,” for example, control a computer cursor.
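As a rough illustration of that offline check, here is a minimal sketch: given per-trial neural features recorded during body mapping and the cued action labels, test whether the action can be predicted better than chance. The shapes, feature choice, and classifier are illustrative assumptions, not the actual analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def modulation_check(features: np.ndarray, cued_action: np.ndarray) -> float:
    """features: (n_trials, n_channels), e.g. spike counts per imagined-movement trial.
    cued_action: (n_trials,) integer label of the cued movement.
    Cross-validated accuracy above chance suggests usable behavioral modulation."""
    clf = LogisticRegression(max_iter=1000)
    return float(cross_val_score(clf, features, cued_action, cv=5).mean())
```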
5:25:04 And he started playing with it, you know, the first time we showed him it. And we actually
5:25:07 just took the same live view of his brain activity and put it in front of him. And we said, “Hey,
5:25:12 you tell us what’s going on. You know, we’re not you. You’re able to imagine different things.”
5:25:16 And we know that it’s modulating some of these neurons. So, you figure out for us
5:25:20 what that is actually representing. And so, he played with it for a bit. He was like,
5:25:25 “I don’t quite get it yet.” He played for a bit longer. And he said, “Oh, when I move this finger,
5:25:30 I see this particular neuron start to fire more.” And I said, “Okay, prove it. Do it again.” And so,
5:25:35 he said, “Okay, three, two, one, boom.” And the minute he moved, you can see, like,
5:25:39 instantaneously, this neuron is firing. Single neuron, I can tell you the exact channel number
5:25:44 if you’re interested. It’s stuck in my brain now forever. But that single channel firing was
5:25:48 a beautiful indication that it was behaviorally modulated neural activity that could then be
5:25:51 used for downstream tasks like decoding a computer cursor.
5:25:54 And when you say single channel, is that associated with a single electrode?
5:25:57 Yeah. So, channel and electrode are interchangeable terms.
5:26:00 And there’s 1,024 of those.
5:26:01 1,024, yeah.
5:26:09 It’s incredible that that works. That really, when I was learning about all this and loading it in,
5:26:15 it was just blowing my mind that the intention you can visualize yourself moving the finger,
5:26:21 that can turn into a signal. And the fact that you can then skip that step and visualize the
5:26:27 cursor moving, or have the intention of the cursor moving in that leading to a signal that
5:26:33 can then be used to move the cursor. There are so many exciting things there to learn about the
5:26:38 brain, about the way the brain works. The very fact of there existing a signal that can be used
5:26:43 is really powerful. But it feels like that’s just like the beginning of figuring out how
5:26:49 that signal can be used really, really effectively. I should also just, there’s so many fascinating
5:26:56 details here, but you mentioned the body mapping step. At least in the version I saw that Nolan
5:27:02 was showing off. There’s a super nice interface, like a graphical interface. It just felt like
5:27:12 I was in the future, because it visualizes you moving the hand. And there’s a very sexy,
5:27:18 polished interface that says hello. I don’t know if there’s a voice component, but it just felt like
5:27:24 when you wake up in a really nice video game, and this is a tutorial at the beginning of that
5:27:28 video game, because this is what you’re supposed to do. It’s cool. No, I mean, the future should
5:27:32 feel like the future. But it’s not easy to pull that off. I mean, it needs to be simple, but not
5:27:39 too simple. Yeah, and I think the UX design component here is underrated for BCI development in
5:27:45 general. There’s a whole interaction effect between the ways in which you visualize an instruction
5:27:49 to the user, and the kinds of signal you can get back. And that quality of sort of your behavioral
5:27:53 alignment to the neural signal is a function of how good you are at expressing to the user what you
5:27:58 want them to do. And so, yeah, we spend a lot of time thinking about the UX of how we build our
5:28:02 applications, of how the decoder actually functions, the control surfaces it provides to the user,
5:28:06 all these little details matter a lot. So maybe it’d be nice to get into a little bit more detail
5:28:13 of what the signal looks like and what the decoding looks like. So there’s a N1 implant
5:28:23 that has, like we mentioned, 1024 electrodes, and that’s collecting raw data, raw signal.
5:28:29 What does that signal look like? And what are the different steps along the way before it’s
5:28:33 transmitted? And what is transmitted and all that kind of stuff? Yeah, yeah, this is going to be a
5:28:40 fun one. Let’s go. So maybe before diving into what we do, it’s worth understanding what we’re
5:28:44 trying to measure because that dictates a lot of the requirements for the system that we build.
5:28:49 And what we’re trying to measure is really individual neurons producing action potentials.
5:28:53 And action potential is you can think of it like a little electrical impulse that you can
5:28:57 detect if you’re close enough. And by being close enough, I mean like within,
5:29:03 let’s say, 100 microns of that cell. And 100 microns is a very, very tiny distance. And so,
5:29:08 the number of neurons that you’re going to pick up with any given electrode is just a small radius
5:29:13 around that electrode. And the other thing worth understanding about the underlying biology here
5:29:17 is that when neurons produce an action potential, the width of that action potential is about one
5:29:21 millisecond. So from the start of the spike to the end of the spike, that whole width of that
5:29:28 sort of characteristic feature of neuron firing is one millisecond wide. And if you want to detect
5:29:33 that an individual spike is occurring or not, you need to sample that signal or sample the local
5:29:37 field potential nearby that neuron much more frequently than once a millisecond. You need to
5:29:41 sample many, many times per millisecond to be able to detect that this is actually the characteristic
5:29:48 waveform of a neuron producing an action potential. And so we sample across all 1024 electrodes about
5:29:53 20,000 times a second. 20,000 times a second means that in any given one-millisecond window,
5:29:57 we have about 20 samples that tell us what that exact shape of that action potential looks like.
5:30:04 And once we’ve sort of sampled at super high rate the underlying electrical field nearby
5:30:11 these cells, we can process that signal into just where do we detect a spike or where do we not?
5:30:14 Sort of a binary signal one or zero, do we detect a spike in this one millisecond or not?
5:30:21 And we do that because the actual information-carrying sort of
5:30:27 subspace of neural activity is just when spikes are occurring. Essentially, everything that we
5:30:31 care about for decoding can be captured or represented in the frequency characteristics of
5:30:36 spike trains, meaning how often are spikes firing in any given window of time. And so that allows us
5:30:44 to do sort of a crazy amount of compression from this very rich high density signal to something
5:30:48 that’s much, much more sparse and compressible that can be sent out over a wireless radio
5:30:55 like a Bluetooth communication, for example.
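As a rough sketch of the sampling-and-binarization step just described, here is a minimal version in Python, assuming a simple negative-threshold detector standing in for the real one; the threshold, array shapes, and bit-rate arithmetic are illustrative, not the actual on-device pipeline.

```python
import numpy as np

FS = 20_000                  # samples per second
SAMPLES_PER_MS = FS // 1000  # 20 samples in each 1 ms window

def binarize_spikes(raw: np.ndarray, threshold: float) -> np.ndarray:
    """raw: (n_channels, n_samples) voltage traces.
    Returns (n_channels, n_ms) array of 0/1 spike flags."""
    n_channels, n_samples = raw.shape
    n_ms = n_samples // SAMPLES_PER_MS
    windows = raw[:, : n_ms * SAMPLES_PER_MS].reshape(n_channels, n_ms, SAMPLES_PER_MS)
    # Flag a spike if any sample in the 1 ms window dips below the negative
    # threshold, a crude stand-in for the real detector.
    return (windows.min(axis=2) < -threshold).astype(np.uint8)

# Compression intuition: 20,000 samples/s at roughly 10 bits each per channel
# becomes 1,000 bits/s of binary flags, about a 200x reduction before any
# further coding, which is what makes a Bluetooth-class radio feasible.
```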
5:31:06 Quick tangent here. You mentioned electrode, neuron. There’s a local neighborhood of neurons nearby. How difficult is it to isolate where the
5:31:11 spike came from? Yeah, so there’s a whole field of sort of academic neuroscience work on exactly
5:31:16 this problem of basically, given a single electrode or a set of electrodes measuring a set of
5:31:23 neurons, how can you sort, spike sort, which spikes are coming from what neuron? And
5:31:27 this is a problem that’s pursued in academic work because you care about it for understanding what’s
5:31:33 going on in the underlying sort of neuroscience of the brain. If you care about understanding how
5:31:37 the brains are presenting information, how that’s evolving through time, then that’s a very, very
5:31:42 important question to understand. For sort of the engineering side of things, at least at the
5:31:47 current scale, if the number of neurons per electrode is relatively small, you can get away
5:31:51 with basically ignoring that problem completely. You can think of it like sort of a random projection
5:31:56 of neurons to electrodes. And there may be in some cases more than one neuron per electrode. But if
5:32:02 that number is small enough, those signals can be thought of as sort of a union of the two. And
5:32:05 for many applications, that’s a totally reasonable trade-off to make and can simplify the problem
5:32:11 a lot. And as you sort of scale out channel count, the relevance of distinguishing individual
5:32:14 neurons becomes less important because you have more overall signal and you can start to rely on
5:32:19 sort of correlations or covariant structure in the data to help understand when that channel is
5:32:24 firing, what does that actually represent? Because you know that when that channel is firing in
5:32:28 concert with these other 50 channels, that means move left. But when that same channel is firing
5:32:31 with concert with these other 10 channels, that means move right. Okay, so you have to do this
5:32:39 kind of spike detection on board and you have to do that super efficiently. So fast and not use too
5:32:44 much power because you don’t want to be generating too much heat. So it has to be a super simple
5:32:53 signal processing step. Is there some wisdom you can share about what it takes to overcome that
5:32:59 challenge? Yeah, so we’ve tried many different versions of basically turning this raw signal into
5:33:03 sort of a feature that you might want to send off the device. And I’ll say that I don’t think
5:33:07 we’re at the final step of this process. This is a long journey. We have something that works
5:33:11 clearly today, but there can be many approaches that we find in the future that are much better
5:33:16 than what we do right now. So some versions of what we do right now and there’s a lot of academic
5:33:20 heritage to these ideas. So I don’t want to claim that these are original neural link ideas or
5:33:25 anything like that. But one of these ideas is basically to build a sort of like a convolutional
5:33:30 filter almost, if you will, that slides across the signal and looks for a certain template to be
5:33:35 matched. And that template consists of sort of how deep the spike modulates, how much it recovers,
5:33:40 and what the duration and window of time is that the whole process takes. And if you can
5:33:44 see in the signal that that template is matched within certain bounds, then you can say, okay,
5:33:48 that’s a spike. One reason that approach is super convenient is that you can actually
5:33:52 implement that extremely efficiently in hardware, which means that you can run it
5:33:58 in low power across 1,024 channels at once.
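A minimal sketch of that sliding template-matching idea, written in Python rather than hardware; the normalization scheme and score threshold here are illustrative assumptions, not the actual detector.

```python
import numpy as np

def detect_spikes(signal: np.ndarray, template: np.ndarray, min_score: float) -> np.ndarray:
    """Slide `template` across one channel's samples; return indices where
    the normalized cross-correlation exceeds min_score."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    k = len(t)
    hits = []
    for i in range(len(signal) - k + 1):
        w = signal[i : i + k]
        w = (w - w.mean()) / (w.std() + 1e-9)
        if float(np.dot(w, t)) / k > min_score:  # template matched within bounds -> spike
            hits.append(i)
    return np.asarray(hits)

# The template encodes how deep the spike dips, how much it recovers, and how
# long the whole waveform lasts (~1 ms, i.e. ~20 samples at 20 kHz).
```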
5:34:03 Another approach that we’ve recently started exploring, and this can be combined with the spike detection approach, is something called
5:34:07 spike band power. And the benefits of that approach are that you may be able to pick up
5:34:11 some signal from neurons that are maybe too far away to be detected as a spike. Because the farther
5:34:16 away you are from an electrode, the weaker that actual spike waveform will look like on that
5:34:21 electrode. So you might be able to pick up, you know, population level activity of things that are,
5:34:25 you know, maybe slightly outside the normal recording radius, what neuroscientists sometimes
5:34:30 refer to as the hash of activity, the other stuff that’s going on. And you can look at sort of
5:34:34 across many channels how that sort of background noise is behaving and you might be able to get
5:34:38 more juice out of the signal that way. But it comes at a cost. That signal is now a floating
5:34:41 point representation, which means it’s more expensive, power-wise, to send out. It means you
5:34:45 have to find different ways to compress it that are different than what you can apply to binary
5:34:48 signals. So there’s a lot of different challenges associated with these different modalities.
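Here is a rough sketch of a spike-band power feature, assuming a conventional band-pass-then-average-power recipe; the band edges, filter order, and bin size are illustrative guesses, not the actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 20_000                  # samples per second
SAMPLES_PER_MS = FS // 1000  # 20

def spike_band_power(raw: np.ndarray, low: float = 300.0, high: float = 3000.0) -> np.ndarray:
    """raw: (n_samples,) one channel of voltage. Returns (n_ms,) mean power per 1 ms bin."""
    sos = butter(4, [low, high], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, raw)
    n_ms = len(filtered) // SAMPLES_PER_MS
    bins = filtered[: n_ms * SAMPLES_PER_MS].reshape(n_ms, SAMPLES_PER_MS)
    # Floating-point power values: they can capture far-away "hash" activity that
    # never crosses a spike threshold, but cost more to compress and transmit.
    return (bins ** 2).mean(axis=1)
```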
5:34:53 So also in terms of communication, you’re limited by the amount of data you can send.
5:34:59 And also because you’re currently using the Bluetooth protocol, you have to batch stuff
5:35:07 together. But you have to also do this, keeping the latency crazy low, like crazy low. Anything
5:35:13 to say about the latency? Yeah, this is a passion project of mine. So I want to build the best mouse
5:35:19 in the world. I don’t want to build like the, you know, the Chevrolet Spark or whatever of
5:35:25 electric cars. I want to build like the Tesla Roadster version of a mouse. And I really do
5:35:28 think it’s quite possible that within, you know, five to 10 years that most e-sports competitions
5:35:33 are dominated by people with paralysis. This is like a very real possibility for a number of
5:35:37 reasons. One is that they’ll have access to the best technology to play video games effectively.
5:35:42 The second is they have the time to do so. So those two factors together are particularly potent for
5:37:49 e-sport competitors. Unless people without paralysis are also allowed to get implants.
5:35:57 Right. Which is, it is another way to interact with a digital device. And there’s
5:36:02 something to that if it’s a fundamentally different experience, more efficient experience,
5:36:08 even if it’s not like some kind of full on high bandwidth communication, if it’s just the ability
5:36:16 to move the mouse 10x faster, like the bits per second. If I can achieve bits per second
5:36:19 10x what I can do with the mouse, that’s a really interesting possibility of what they can
5:36:25 do, especially as you get really good at it with training. It’s definitely the case that you have
5:36:29 a higher ceiling performance. Because you don’t have to buffer your intention through your
5:36:35 arm, through your muscle, you get, just by nature of having a brain implant at all, like a 75-millisecond
5:36:39 lead time on any action that you’re actually trying to take. And there’s some nuance to this,
5:36:42 like there’s evidence that the motor cortex, you can sort of plan out sequences of action. So you
5:36:46 may not get that whole benefit all the time. But for a sort of like reaction time style
5:36:50 games where you just want to somebody’s over here, snipe them, you know, that kind of thing.
5:36:55 You actually do have just an inherent advantage because you don’t need to go through muscle.
5:36:59 So the question is just how much faster can you make it. And we’re already, you know, faster than
5:37:02 what you would do if you’re going through muscle, from a latency point of view.
5:37:06 And we’re in the early stages of that; I think we can push it. So our end-to-end latency right now
5:37:12 from brain spike to cursor movement is about 22 milliseconds. If you think about the best mice
5:37:15 in the world, the best gaming mice, that’s about five milliseconds ish of latency,
5:37:18 depending on how you measure, depending on how fast your screen refreshes, there’s a lot of
5:37:23 characteristics that matter there. But yeah, and the rough time for like a neuron in the brain to
5:37:27 actually impact your command of your hand is about 75 milliseconds. So if you look at those
5:37:32 numbers, you can see that we’re already like, you know, competitive and slightly faster than what
5:37:36 you’d get by actually moving your, moving your hand. And this is something that, you know,
5:37:39 if you ask Nolan about it, when he moved the cursor for the first time, we asked him about
5:37:43 this. There’s something I’m super curious about, like, what does it feel like when you’re modulating,
5:37:46 you know, a click intention, or when you’re trying to move the cursor to the right,
5:37:51 he said it moves before he is like actually intending it to, which is kind of a surreal
5:37:55 thing and something that, you know, I would love to experience myself one day. What is that,
5:37:59 like to have a thing just be so immediate, so fluid that it feels like it’s happening before
5:38:04 you’re actually intending it to move. Yeah, I suppose we’ve gotten used to that latency,
5:38:09 that natural latency that happens. So is the communication currently the bottleneck,
5:38:12 like the Bluetooth communication? I mean,
5:38:15 there’s always going to be a bottleneck, but what’s the current bottleneck?
5:38:22 Yeah, a couple of things. So kind of hilariously, the Bluetooth Low Energy protocol has some restrictions
5:38:27 on how fast you can communicate. So the protocol itself establishes a standard of, you know,
5:38:30 the most frequent sort of updates you can send are on the order of 7.5 milliseconds.
5:38:37 And as we push latency down to the level of sort of individual spikes impacting control,
5:38:41 that level of resolution, that kind of protocol is going to become a limiting factor at some scale.
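To make the latency budget concrete, here is a back-of-envelope sketch using the numbers quoted in this conversation; the half-interval waiting-time model is a simplifying assumption, not a measured breakdown.

```python
# All constants below are figures mentioned in this conversation.
BLE_CONN_INTERVAL_MS = 7.5   # fastest standard BLE update cadence
END_TO_END_MS = 22.0         # quoted brain-spike-to-cursor-movement latency
GAMING_MOUSE_MS = 5.0        # rough latency of the best gaming mice
NEURON_TO_HAND_MS = 75.0     # rough time for a motor-cortex neuron to move the hand

avg_ble_wait = BLE_CONN_INTERVAL_MS / 2  # a packet waits half an interval on average
print(f"BLE waiting alone: ~{avg_ble_wait:.2f} ms of the {END_TO_END_MS:.0f} ms budget")
print(f"Gap to a top gaming mouse: {END_TO_END_MS - GAMING_MOUSE_MS:.0f} ms")
print(f"Head start vs. moving a hand: {NEURON_TO_HAND_MS - END_TO_END_MS:.0f} ms")
```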
5:38:47 Another sort of important nuance to this is that it’s not just the
5:38:51 neural link itself that’s part of this equation. If you start pushing latency
5:38:55 sort of below the level of how fast screens refresh, then you have another problem. You
5:39:00 need your whole system to be able to be as reactive as the sort of limits of what the
5:39:04 technology can offer. Like you need the screen, like 120 hertz just doesn’t, you know,
5:39:07 work anymore if you’re trying to have something respond at something that’s,
5:39:10 you know, at the level of one millisecond. That’s a really cool challenge. I also like
5:39:15 that for a t-shirt, the best mouse in the world. Tell me on the receiving end,
5:39:20 so the decoding step. Now we figured out what the spikes are, we’ve got them all together,
5:39:26 now we’re sending that over to the app. What’s the decoding step look like?
5:39:29 Yeah. So maybe first, what is decoding? I think there’s probably a lot of folks
5:39:32 listening that just have no clue what it means to decode brain activity.
5:39:39 Actually, even if we zoom out beyond that, what is the app? So there’s an implant that’s
5:39:45 wirelessly communicating with any digital device that has an app installed. So maybe
5:39:51 can you tell me at high level what the app is, what the software is outside of the brain?
5:39:56 Yeah. So maybe working backwards from the goal, the goal is to help someone with paralysis,
5:40:01 in this case, Nolan, be able to navigate his computer independently. And we think the best
5:40:05 way to do that is to offer them the same tools that we have to navigate our software, because
5:40:09 we don’t want to have to rebuild an entire software ecosystem for the brain, at least
5:40:13 not yet. Maybe someday you can imagine there’s UXs that are built natively for BCI, but
5:40:17 in terms of what’s useful for people today, I think we, most people would prefer to be able
5:40:21 to just control mouse and keyboard inputs to all the applications that they want to use for their
5:40:26 daily jobs, for communicating with their friends, etc. And so the job of the application is really
5:40:31 to translate this wireless stream of brain data coming off the implant into control of the computer.
5:40:36 And we do that by essentially building a mapping from brain activity to sort of the
5:40:42 HID inputs to the actual hardware. So HID is just the protocol for communicating input device
5:40:48 events. So for example, move mouse to this position or press this key down. And so that
5:40:51 mapping is fundamentally what the app is responsible for. But there’s a lot of nuance of how that
5:40:55 mapping works that we spend a lot of time to try to get right. And we’re still in the early stages
5:41:00 of a long journey to figure out how to do that optimally.
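As a toy illustration of the app's responsibility, here is a sketch that maps one frame of decoder output to HID-style events. The types, names, and click threshold are hypothetical stand-ins, not Neuralink's software interfaces.

```python
from dataclasses import dataclass

@dataclass
class MouseMove:
    dx: int
    dy: int

@dataclass
class MouseClick:
    button: str

def to_hid_events(velocity: tuple[float, float], click_prob: float) -> list:
    """Map one frame of decoder output to input-device events."""
    dx, dy = velocity
    events: list = [MouseMove(dx=round(dx), dy=round(dy))]
    if click_prob > 0.9:  # illustrative threshold for a decoded click intention
        events.append(MouseClick(button="left"))
    return events
```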
5:41:04 So one part of that process is decoding. Decoding is this process of taking the statistical patterns of brain data that’s being
5:41:08 channeled across this Bluetooth connection to the application and turning it into, for example,
5:41:13 a mouse movement. And that decoding step, you can think of it in a couple of different parts. So
5:41:16 similar to any machine learning problem, there’s a training step and there’s an inference step.
5:41:22 The training step in our case is a very intricate behavioral process where the user
5:41:27 has to imagine doing different actions. So for example, they’ll be presented a screen with
5:41:31 a cursor on it, and they’ll be asked to push that cursor to the right. Then imagine pushing
5:41:35 that cursor to the left, push it up, push it down. And we can basically build up a pattern,
5:41:42 or using any sort of modern ML method, mapping of given this brain data and that imagine behavior,
5:41:46 map one to the other. And then at test time, you take that same pattern matching system
5:41:50 in our case, it’s a deep neural network, and you run it and you take the live stream of brain data
5:41:54 coming off their implant, you decode it by pattern matching to what you saw at calibration time,
5:41:59 and you use that for control of the computer.
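A minimal sketch of that calibration-then-inference split, with a ridge regression standing in for the deep neural network just mentioned; the shapes and the cued-velocity labels are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

def train_decoder(spike_counts: np.ndarray, cued_velocity: np.ndarray) -> Ridge:
    """spike_counts: (n_timesteps, n_channels) binned spikes during calibration.
    cued_velocity: (n_timesteps, 2) the movement the user was asked to imagine."""
    return Ridge(alpha=1.0).fit(spike_counts, cued_velocity)

def infer(model: Ridge, live_spike_counts: np.ndarray) -> np.ndarray:
    # At test time, pattern-match live brain data to what was seen at calibration.
    return model.predict(live_spike_counts)
```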
5:42:04 Now, a couple of rabbit holes here are quite interesting. One of them has to do with how you build that best template-matching system,
5:42:10 because there’s a variety of behavioral challenges and also debugging challenges when
5:42:13 you’re working with someone who’s paralyzed. Because again, fundamentally, you don’t observe
5:42:17 what they’re trying to do, you can’t see them attempt to move their hand. And so you have to
5:42:21 figure out a way to instruct the user to do something, and validate that they’re doing it
5:42:27 correctly, such that then you can downstream build with confidence the mapping between the
5:42:32 neural spikes and the intended action. And by doing the action correctly, what I really mean
5:42:38 is at this level of resolution of what neurons are doing. So if in ideal world, you could get
5:42:44 a signal of behavioral intent that is ground truth accurate at the scale of sort of one millisecond
5:42:48 resolution, then with high confidence, I could build a mapping from my neural spikes
5:42:52 to that behavioral intention. But the challenge is, again, that you don’t observe what they’re
5:42:57 actually doing. And so there’s a lot of nuance to how you build user experiences that give you more
5:43:00 than just sort of a coarse, on-average-correct representation of what the user’s intending to
5:43:06 do. If you want to build the world’s best mouse, you really want it to be as responsive as possible.
5:43:09 You want it to be able to do exactly what the user’s intending at every sort of step along the
5:43:14 way, not just on average be correct when you’re trying to move it from left to right. And building
5:43:19 a behavioral sort of calibration game or sort of software experience that gives you that level
5:43:24 of resolution is what we spend a lot of time working on. So the calibration process, the interface,
5:43:31 has to encourage precision. Meaning like whatever it does, it should be super intuitive that the
5:43:38 next thing the human is going to likely do is exactly that intention that you need and only
5:43:45 that intention. And you don’t have any feedback, except them maybe speaking to you afterwards about
5:43:53 what they actually did. You can’t... Oh yeah. So that’s fundamentally a really exciting
5:43:59 UX challenge because that’s all on the UX. It’s not just about being friendly or nice or usable.
5:44:06 It’s like user experience is how it works. It’s how it works for the calibration and calibration
5:44:12 at least at this stage of Neuralink is like fundamental to the operation of the thing and
5:44:18 not just calibration, but continued calibration essentially. Yeah. And maybe you said something
5:44:21 that I think is worth exploring there a little bit. You said it’s primarily a UX challenge,
5:44:26 and I think a large component of it is, but there is also a very interesting machine learning
5:44:33 challenge here, which is given some dataset including some on average correct behavior
5:44:38 of asking the user to move up or move down, move right, move left. And given a dataset of Neural
5:44:43 Spikes, is there a way to infer in some kind of semi-supervised or entirely unsupervised way
5:44:48 what that high resolution version of their intention is? And if you think about it like
5:44:52 there probably is because there are enough data points in the dataset, enough constraints on your
5:44:57 model, that there should be a way with the right sort of formulation to let the model figure out
5:45:00 itself. For example, at this millisecond, this is exactly how hard they’re pushing upwards.
5:45:04 And at that millisecond, this is how hard they’re trying to push upwards. It’s really important
5:45:09 to have very clean labels. Yes? So the problem becomes much harder, from the machine
5:45:15 learning perspective, if the labels are noisy. That’s correct. And then to get the clean labels, that’s
5:45:20 a UX challenge. Correct. Although clean labels, I think maybe it’s worth exploring what that
5:45:25 exactly means. I think any given labeling strategy will have some number of assumptions it makes
5:45:29 about what the user is attempting to do. Those assumptions can be formulated in a loss function
5:45:33 or they can be formulated in terms of heuristics that you might use to just try to estimate or
5:45:37 guesstimate what the user is trying to do. And what really matters is how accurate are those
5:45:42 assumptions. For example, you might say, “Hey, user, push upwards and follow the speed of this cursor.”
5:45:47 And your heuristic might be that they’re trying to do it exactly what that cursor is trying to do.
5:45:50 Another competing heuristic might be they’re actually trying to go slightly faster at the
5:45:54 beginning of the movement and slightly slower at the end. And those competing heuristics may or
5:45:58 may not be accurate reflections of what the user is trying to do. Another version of the task might
5:46:03 be, “Hey, user, imagine moving this cursor a fixed offset. So rather than follow the cursor,
5:46:08 just try to move it exactly 200 pixels to the right.” So here’s the cursor. Here’s the target.
5:46:12 Okay, cursor disappears. Try to move that now invisible cursor 200 pixels to the right.
5:46:16 And the assumption in that case would be that the user can actually modulate correctly that position
5:46:22 offset. But that position offset assumption might be a weaker assumption and therefore potentially
5:46:26 you can make it more accurate than these heuristics that are trying to guesstimate at each millisecond
5:46:30 what the user is trying to do. So you can imagine different tasks that make different assumptions
5:46:35 about the nature of the user intention. And those assumptions being correct is what I would
5:46:40 think of as a clean label.
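To make that concrete, here are two of the competing heuristics from the discussion, written as hypothetical labeling functions; both encode assumptions about what the user is doing, and neither is ground truth.

```python
import numpy as np

def follow_cursor_labels(cursor_speed: np.ndarray) -> np.ndarray:
    """Heuristic 1: assume the user matches the displayed cursor's speed exactly."""
    return cursor_speed.copy()

def ramped_labels(cursor_speed: np.ndarray, ramp: float = 0.2) -> np.ndarray:
    """Heuristic 2: assume the user pushes harder early in the movement,
    easing off toward the end."""
    gain = np.linspace(1.0 + ramp, 1.0 - ramp, len(cursor_speed))
    return cursor_speed * gain
```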
5:46:45 For that step, what are we supposed to be visualizing? There’s a cursor, and you want to move that cursor to the right, to the left, up and down, or maybe move it by
5:46:50 a certain offset. So that’s one way. Is that the best way to do calibration? So for example,
5:46:55 an alternative crazy way that probably is playing a role here is a game like WebGrid,
5:47:01 where you’re just getting a very large amount of data, the person playing a game,
5:47:09 where if they are in a state of flow, maybe you can get clean signal as a side effect.
5:47:15 Or is that not an effective way for initial calibration?
5:47:20 Yeah, great question. There’s a lot to unpack there. So the first thing I would draw a distinction
5:47:25 between is sort of open loop versus closed loop. So open loop, what I mean by that is the user is
5:47:28 sort of going from zero to one. They have no model at all. And they’re trying to get to the place
5:47:34 where they have some level of control at all. In that setup, you really need to have some task
5:47:37 that gives the user a hint of what you want them to do such that you can build this mapping again
5:47:44 from brain data to output. Then once they have a model, you could imagine them using that model
5:47:47 and actually adapting to it and figuring out the right way to use it themselves,
5:47:50 and then retraining on that data to give you sort of a boost in performance.
5:47:54 There’s a lot of challenges associated with both of these techniques, and we can sort of
5:47:58 walk through both of them if you're interested. But the sort of challenge with the open loop task
5:48:03 is that the user themselves doesn’t get proprioceptive feedback about what they’re doing.
5:48:08 They don’t necessarily perceive themself or feel the mouse under their hand
5:48:12 when they’re trying to do an open loop calibration. They’re being asked to perform something,
5:48:17 like imagine if you sort of had your whole right arm numbed, and you stuck it in a box,
5:48:21 and you couldn’t see it. So you had no visual feedback, and you had no proprioceptive feedback
5:48:24 about what the position or activity of your arm was. And now you’re asked, “Okay,
5:48:27 given this thing on the screen that’s moving from left to right, match that speed.”
5:48:34 And you basically can try your best to invoke whatever that imagined action is in your brain
5:48:38 that’s moving the cursor from left to right. But in any situation, you’re going to be
5:48:42 inaccurate and maybe inconsistent in how you do that task. And so that’s sort of the fundamental
5:48:47 challenge of open loop. The challenge with closed loop is that once the user’s given a model
5:48:53 and they’re able to start moving the mouse on their own, they’re going to very naturally adapt to
5:48:58 that model. And that co-adaptation between the model learning what they’re doing and the user
5:49:03 learning how to use the model may not find you the best sort of global minimum. Maybe your
5:49:09 first model was noisy in some ways, or maybe it just had some quirk, some part of
5:49:14 the data distribution it didn't cover super well. And the user now figures out, because they're a
5:49:18 brilliant user like Nolan, the right sequence of imagined motions or the right
5:49:21 angle they have to hold their hand at to get it to work. And they’ll get it to work great,
5:49:24 but then the next day they come back to their device, and maybe they don’t remember exactly
5:49:28 all the tricks that they used in the previous day. And so there’s a complicated sort of feedback
5:49:32 cycle here that can emerge and can make it a very, very difficult debugging process.
5:49:39 Okay, there’s a lot of really fascinating things there. Yeah, actually, just to stay on the closed
5:49:49 loop. I’ve seen situations, this actually happened watching psychology grad students,
5:49:53 they use pieces of software when they don’t know how to program themselves, they use piece of software
5:49:58 that somebody else wrote, and it has a bunch of bugs. And they figure out like, and they’ve been
5:50:03 using it for years, they figure out ways to work around, oh, that just happens. Like nobody has,
5:50:08 nobody like considers, maybe we should fix this, they just adapt. And that’s a really
5:50:13 interesting notion that we just said, we’re really good at adapting. But you need to still,
5:50:18 that might not be the optimal. Yeah. Okay, so how do you solve that problem? Do you have to restart
5:50:23 from scratch every once in a while kind of thing? Yeah, it’s a good question. First and foremost,
5:50:28 I’d say this is not a solve problem. And for anyone who’s, you know, listening in academia,
5:50:32 who works on BCIs, I would also say this is not a problem that’s solved by simply scaling channel
5:50:36 count. So this is, you know, maybe that can help when you can get sort of richer covariance structures
5:50:40 that you can use to exploit when trying to come up with good labeling strategies. But if, you know,
5:50:43 you’re interested in problems that aren’t going to be solved inherently by scaling channel count,
5:50:47 this is one of them. Yeah, so how do you solve it? It's not a solved problem. That's the first thing
5:50:52 I want to make sure it gets across. The second thing is any solution that involves closed loop
5:50:57 is going to become a very difficult debugging problem. And one of my sort of general heuristics
5:51:00 for choosing what problems to tackle is that you want to choose the one that’s going to be the
5:51:07 easiest to debug. Because if you can do that, even if the ceiling is lower, you’re going to be able
5:51:11 to move faster because you have a tighter iteration loop debugging the problem. And in the open loop
5:51:15 setting, there’s not a feedback cycle to debug with the user in the loop. And so there’s some
5:51:21 reason to think that that should be an easier debugging problem. The other thing that’s worth
5:51:25 understanding is that even in a closed loop setting, there’s no special software magic of how to
5:51:29 infer what the user is truly attempting to do. In a closed loop setting, although they’re moving
5:51:32 the cursor on the screen, they may be attempting something different than what your model is
5:51:36 outputting. So what the model is outputting is not a signal that you can use to retrain if you want
5:51:41 to be able to improve the model further. You still have this very complicated guesstimation
5:51:45 or unsupervised problem of figuring out what is the true user intention underlying that signal.
5:51:50 And so the open loop problem has the nice property of being easy to debug. And the second
5:51:55 nice property is that it has all the same information and content as the closed loop scenario.
5:52:00 Another thing I want to mention and call out is that this problem doesn’t need to be solved in
5:52:05 order to give useful control to people. Even today, with the solutions we have now and that
5:52:11 academia has built up over decades, the level of control that can be given to a user today
5:52:15 is quite useful. It doesn’t need to be solved to get to that level of control. But again,
5:52:19 I want to build the world’s best mouse. I want to make it so good that it’s not even a question
5:52:25 that you want it. And to build the world’s best mouse, the superhuman version, you really need to
5:52:31 nail that problem. And a couple of details, maybe, from previous studies that we've done internally
5:52:35 that I think are very interesting to understand when thinking about how to solve this problem.
5:52:39 The first is that even when you have ground truth data of what the user is trying to do,
5:52:43 and you can get this with an able-bodied monkey, a monkey that has a Neuralink device implanted
5:52:47 and moving a mouse to control a computer, even with that ground truth data set,
5:52:52 it turns out that the optimal thing to predict, to produce high-performance BCI,
5:52:57 is not just the direct control of the mouse. You can imagine building a data set of what’s
5:53:01 going on in the brain and what is the mouse exactly doing on the table. And it turns out that if you
5:53:05 build the mapping from neural spikes to predict exactly what the mouse is doing, that model will
5:53:10 perform worse than a model that is trained to predict higher-level assumptions about what the
5:53:13 user might be trying to do. For example, assuming that the monkey is trying to go in a straight
5:53:18 line to the target, it turns out that making those assumptions is actually more effective
5:53:21 in producing a model than actually predicting the underlying hand movement.
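To make that concrete, here is a minimal sketch of the two labeling strategies, with invented array shapes and a generic ridge regressor standing in for whatever decoder architecture is actually used; none of the names or numbers here are Neuralink's:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical stand-ins: binned spike counts (timesteps x channels),
# plus cursor positions, target positions, and measured hand velocities.
rng = np.random.default_rng(0)
spikes = rng.random((2000, 128))
cursor_pos = rng.random((2000, 2))
target_pos = rng.random((2000, 2))
hand_vel = rng.random((2000, 2))

# Strategy 1: predict the measured hand movement directly.
kinematics_decoder = Ridge(alpha=1.0).fit(spikes, hand_vel)

# Strategy 2: assume the subject intends to move straight toward the
# target at every instant, so the label is the unit cursor-to-target vector.
to_target = target_pos - cursor_pos
intent = to_target / (np.linalg.norm(to_target, axis=1, keepdims=True) + 1e-9)
intent_decoder = Ridge(alpha=1.0).fit(spikes, intent)

# Per the discussion above, the intent-labeled decoder tends to give
# better closed-loop control than the one mimicking raw hand kinematics.
```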
5:53:26 So the intention, not the physical movement or whatever, there’s obviously a very strong
5:53:31 correlation between the two, but the intention is a more powerful thing to be chasing.
5:53:38 Well, that’s also super interesting. I mean, the intention itself is fascinating because,
5:53:41 yes, with the BCI here, in this case with the digital telepathy,
5:53:49 you’re acting on the intention, not the action, which is why there’s an experience of feeling
5:53:54 like it’s happening before you meant for it to happen. That is so cool. And that is why you
5:53:58 could achieve superhuman performance, probably, in terms of the control of the mouse.
5:54:06 So for open loop, just to clarify, so whenever the person is tasked to move the mouse to the right,
5:54:12 you said there’s not feedback. So they don’t get to get that satisfaction of like
5:54:18 actually getting it to move, right? So you could imagine giving the user feedback on a screen,
5:54:21 but it’s difficult because at this point, you don’t know what they’re attempting to do.
5:54:24 So what can you show them that would basically give them a signal of,
5:54:28 I’m doing this correctly or not correctly? So let’s take this very specific example of maybe
5:54:32 your calibration task looks like you’re trying to move the cursor a certain position offset.
5:54:37 So your instructions to the user are, hey, the cursor’s here. Now, when the cursor disappears,
5:54:40 imagine moving it 200 pixels from where it was to the right to be over this target.
5:54:45 In that kind of scenario, you could imagine coming up with some sort of consistency metric
5:54:49 that you could display to the user of, okay, I know what the spike train looks like on average
5:54:53 when you do this action to the right. Maybe I can produce some sort of probabilistic estimate
5:54:59 of how likely is that to be the action you took given the latest trial or trajectory that you
5:55:02 imagined. And that could give the user some sort of feedback of how consistent are they
5:55:09 across different trials. You could also imagine that if the user is prompted with that kind of
5:55:12 consistency metric that maybe they just become more behaviorally engaged to begin with because
5:55:16 the task is kind of boring when you don’t have any feedback at all. And so there may be benefits to
5:55:20 the, you know, the user experience of showing something on the screen, even if it’s not accurate,
5:55:24 just because it keeps the user motivated to try to increase that number or push it upwards.
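A rough sketch of what such a consistency metric could look like; the diagonal-Gaussian template and all names here are my assumptions for illustration, not the actual implementation:

```python
import numpy as np

def consistency_score(trial_features, template_mean, template_var):
    """Score one open-loop trial against a Gaussian template fit to past
    trials of the same prompted action ('move right'), squashed to 0..1
    so it can be displayed to the user as feedback."""
    log_lik = -0.5 * np.mean(
        np.log(2 * np.pi * template_var)
        + (trial_features - template_mean) ** 2 / template_var
    )
    return float(1.0 / (1.0 + np.exp(-log_lik)))  # logistic squash for display

# Template built from earlier prompted trials (hypothetical: trials x features).
past_trials = np.random.rand(50, 256)
mean, var = past_trials.mean(axis=0), past_trials.var(axis=0) + 1e-6
print(consistency_score(np.random.rand(256), mean, var))
```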
5:55:30 So there’s a psychology element here. Yeah, absolutely. And again, all of that is UX challenge.
5:55:39 How much signal drift is there hour to hour, day to day, week to week, month to month? How often
5:55:46 do you have to recalibrate because of the signal drift? Yeah. So this is a problem we’ve worked
5:55:51 on both with NHP, non-human primates, before our clinical trial and then also with Noland
5:55:55 during the clinical trial. Maybe the first thing that’s worth stating is what the goal is here.
5:55:59 So the goal is really to enable the user to have a plug and play experience where I guess they don’t
5:56:04 have to plug anything in, but a play experience where they, you know, can use the device whenever
5:56:09 they want to, however they want to. And that’s really what we’re aiming for. And so there can be
5:56:14 a set of solutions that get to that state without considering this non-stationarity problem.
5:56:18 So maybe the first solution here that’s important is that they can recalibrate whenever they want.
5:56:24 This is something that Noland has the ability to do today. So he can recalibrate the system,
5:56:27 you know, at 2 a.m. in the middle of the night without his, you know, caretaker or parents or
5:56:32 friends around to help push a button for him. The other important part of the solution is that
5:56:35 when you have a good model calibrated that you can continue using that without needing to recalibrate
5:56:40 it. So how often he has to do this recalibration today depends really on his appetite for performance.
5:56:46 We observe a sort of degradation through time of how well any individual model
5:56:51 works. But this can be mitigated behaviorally by the user adapting their control strategy.
5:56:54 It can also be mitigated through a combination of sort of software features that we provide to
5:57:00 the user. For example, we let the user adjust exactly how fast the cursor is moving. We call
5:57:04 that the gain, for example, the gain of how fast the cursor reacts to any given input intention.
5:57:09 They can also adjust the smoothing, how smooth the output of that cursor intention actually is.
5:57:12 They can also adjust the friction, which is how easy it is to stop and hold still.
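As a toy sketch of how those three knobs might compose in a cursor update loop; the parameter names and formulas below are my assumptions, not Neuralink's code:

```python
import numpy as np

class CursorDynamics:
    """Toy cursor post-processing: decoded velocity -> screen velocity."""

    def __init__(self, gain=1.0, smoothing=0.8, friction=0.2):
        self.gain = gain            # how fast the cursor reacts to intent
        self.smoothing = smoothing  # 0 = raw decoder output, 1 = frozen
        self.friction = friction    # speeds below this are damped to zero
        self._v = np.zeros(2)

    def step(self, decoded_velocity):
        # Exponential moving average smooths out decoder jitter.
        self._v = self.smoothing * self._v + (1 - self.smoothing) * decoded_velocity
        v = self.gain * self._v
        # Friction: suppress small residual drift so holding still is easy.
        if np.linalg.norm(v) < self.friction:
            v = np.zeros(2)
        return v

dyn = CursorDynamics(gain=2.0, smoothing=0.85, friction=0.15)
print(dyn.step(np.array([0.3, -0.1])))
```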
5:57:17 And all these software tools allow the user a great deal of flexibility and troubleshooting
5:57:20 mechanisms to be able to solve this problem for themself. By the way, all of this is done
5:57:25 by looking to the right side of the screen, selecting the mixer. And the mixer you have,
5:57:31 it’s like DJ mode, DJ mode for your PC. I mean, it’s a really well done interface. It’s really,
5:59:37 really well done. And so yeah, there's that bias, that cursor drift, that Nolan talked
5:59:43 about in a stream. Although he said that you guys were just playing around with it with him
5:57:48 and they’re constantly improving. So that could have been just a snapshot of that particular
5:57:55 moment, a particular day. But he said that there was this cursor drift and this bias that could
5:58:00 be removed by him, I guess, looking to the right side of the screen, the left side of the screen,
5:58:05 to kind of adjust the bias. That’s one interface action, I guess, to adjust the bias.
5:58:11 Yeah. So this is actually an idea that comes out of academia. There was some prior work with
5:58:16 sort of BrainGate clinical trial participants where they pioneered this idea of bias correction.
5:58:21 The way we've done it, I think, is, yeah, very productized, a very beautiful user experience
5:58:25 where the user can essentially flash the cursor over to the side of the screen and it opens up
5:58:31 a window where they can actually sort of adjust or tune exactly the bias of the cursor. So bias,
5:58:35 maybe for people who aren’t familiar, is just sort of what is the default motion of the cursor
5:58:41 if you’re imagining nothing. And it turns out that that’s one of the first sort of qualia
5:58:44 of the cursor control experience that’s impacted by neuron non-stationarity.
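The underlying idea is simple enough to sketch; this illustrative version (my own simplification, not the shipped feature) estimates the decoder's resting output while the user imagines nothing, then subtracts that drift:

```python
import numpy as np

def estimate_bias(resting_outputs):
    """Average decoder output over a window where the user is at rest;
    with no drift this would be ~zero, so the mean estimates the bias."""
    return resting_outputs.mean(axis=0)

def corrected(decoded_velocity, bias):
    return decoded_velocity - bias

# Hypothetical: 200 samples of decoder output while 'imagining nothing'.
resting = np.random.randn(200, 2) * 0.05 + np.array([0.12, -0.03])  # drifted
bias = estimate_bias(resting)
print(corrected(np.array([0.5, 0.2]), bias))
```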
5:58:48 Qualia of the cursor experience. I mean, I don’t know how else to describe it. I’m not the guy
5:58:52 moving things. It's very poetic. I love it. The qualia of the cursor experience. Yeah, I mean,
5:59:00 it sounds poetic, but it is deeply true. There is an experience when it works well, it is a
5:59:05 joyful, a really pleasant experience. And when it doesn’t work well, it’s a very
5:59:12 frustrating experience. That’s actually the art of UX. It’s like, you have the possibility to
5:59:18 frustrate people or the possibility to give them joy. And at the end of the day, it really is truly
5:59:22 the case that UX is how the thing works. And so it’s not just like what’s showing on the screen.
5:59:27 It’s also, you know, what control surfaces does a decoder provide the user? We want them to feel
5:59:32 like they’re in the F1 car, not like, you know, some like mini van, right? And that really truly
5:59:37 is how we think about it. Nolan himself is an F1 fan. So we refer to ourselves as a pit crew. He
5:59:42 really is truly the F1 driver. And there’s different, you know, control surfaces that
5:59:46 different kinds of cars and airplanes provide the user. And we take a lot of inspiration from
5:59:50 that when designing how the cursor should behave. And what maybe one nuance of this is,
5:59:54 you know, even details like when you move a mouse on a MacBook trackpad,
6:00:00 the sort of response curve of how that input that you give the trackpad translates to cursor
6:00:04 movement is different than how it works with a mouse. When you move on the trackpad, there’s a
6:00:07 different response function, a different curve to how much a movement translates to input to the
6:00:11 computer than when you do it physically with a mouse. And that’s because somebody sat down a long
6:00:16 time ago when they’re designing the initial input systems to any computer, and they thought through
6:00:20 exactly how it feels to use these different systems. And now we’re designing sort of the
6:00:24 next generation of this input system to a computer, which is entirely done via the brain.
6:00:28 And there’s no proprioceptive feedback. Again, you don’t feel the mouse in your hand.
6:00:32 You don’t feel the keys under your fingertips. And you want a control surface that still makes
6:00:36 it easy and intuitive for the user to understand the state of the system and how to achieve what
6:00:40 they want to achieve. And ultimately, the end goal is that that UX is completely, it fades
6:00:43 into the background. It becomes something that’s so natural and intuitive that it’s subconscious
6:00:48 to the user. And they just should feel like they have basically direct control over the
6:00:51 cursor. It just does what they want it to do. They’re not thinking about the implementation
6:00:54 of how to make it do what they want it to do. It’s just doing what they want it to do.
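To make the trackpad-versus-mouse point concrete, here is a toy response curve of the kind an input-system designer might tune; the gain and exponent are invented for illustration, not values from any real operating system:

```python
def response_curve(input_speed, gain=1.5, power=1.3):
    """Toy pointer-acceleration transfer function: slow, deliberate inputs
    map nearly 1:1 for precision, while fast flicks are amplified
    super-linearly so the cursor can cross the screen. Trackpads and mice
    typically ship with different (gain, power) choices, which is part of
    why they feel different."""
    return gain * (input_speed ** power)

for speed in (0.1, 1.0, 5.0):
    print(speed, "->", round(response_curve(speed), 3))
```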
6:01:01 Is there some kind of things along the lines of like Fitts's law where you should move the mouse
6:01:06 in a certain kind of way that maximizes your chance to hit the target? I don’t even know what
6:01:14 I’m asking, but I’m hoping the intention of my question will land on a profound answer. No.
6:01:21 Is there some kind of understanding of the laws of UX when it comes
6:01:30 to the context of somebody using their brain to control it? Like that’s different than actual
6:01:35 with a mouse? I think we’re in the early stages of discovering those laws. So I wouldn’t claim
6:01:40 to have solved that problem yet. But there’s definitely some things we’ve learned that make it
6:01:47 easier for the user to get stuff done. And it’s pretty straightforward when you verbalize it,
6:01:50 but it takes a while to actually get to that point when you’re in the process of debugging the stuff
6:01:56 in the trenches. One of those things is that any machine learning system that you build has some
6:02:02 number of errors. And it matters how those errors translate to the downstream user experience. For
6:02:07 example, if you’re developing a search algorithm in your photos, if you search for your friend Joe
6:02:13 and it pulls up a photo of your friend, Josephine, maybe that’s not a big deal because the cost of
6:02:19 an error is not that high. In a different scenario where you’re trying to detect insurance
6:02:23 fraud or something like this and you’re directly sending someone to court because of some machine
6:02:27 learning model output, then the errors matter a lot more. You want to be very
6:02:31 thoughtful about how those errors translate to downstream effects. The same is true in BCI. So
6:02:36 for example, if you’re building a model that’s decoding a velocity output from the brain versus
6:02:40 an output where you’re trying to modulate the left click, for example, these have sort of different
6:02:45 tradeoffs of how precise you need to be before it becomes useful to the end user. For velocity,
6:02:49 it’s okay to be on average correct because the output of the model is integrated through time.
6:02:53 So if the user is trying to click at position A and they’re currently in position B,
6:02:58 they’re trying to navigate over time to get between those two points. And as long as the
6:03:02 output of the model is on average correct, they can sort of steer through time with the user control
6:03:07 loop in the mix, they can get to the point they want to get to. The same is not true of a click.
6:03:11 For a click, you’re performing it almost instantly at the scale of Neuron’s firing.
6:03:16 And so you want to be very sure that that click is correct because a false click can be very
6:03:19 destructive to the user. They might accidentally close the tab that they’re trying to do something
6:03:25 and lose all their progress. They might accidentally hit some send button on some text that there’s
6:03:30 only like half composed and reads funny after. So there’s different sort of cost functions
6:03:34 associated with errors in this space. And part of the UX design is understanding how to
6:03:38 build a solution that, when it's wrong, is still useful to the end user.
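A sketch of that asymmetry, with invented thresholds: velocity can be merely on-average correct because it is integrated through time with the user steering, while a click is discrete and destructive, so it demands high instantaneous confidence:

```python
import numpy as np

DT = 0.01                 # 10 ms decode ticks (illustrative)
CLICK_CONFIDENCE = 0.99   # clicks are costly when wrong, so the bar is high
STILLNESS_SPEED = 0.05    # users tend to hold still when preparing to click

pos = np.zeros(2)

def on_decode_tick(velocity, p_click):
    """Integrate noisy velocity (errors average out over time with the user
    visually steering); only emit a click when the model is very confident
    AND the cursor is nearly still, since low speed correlates with intent
    to click, as described above."""
    global pos
    pos = pos + velocity * DT
    return p_click > CLICK_CONFIDENCE and np.linalg.norm(velocity) < STILLNESS_SPEED
```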
6:03:48 That’s so fascinating that assigning cost to every action when an error occurs. So every action,
6:03:55 if an error occurs, has a certain cost. And incorporating that into how you interpret the
6:04:03 intention, mapping it to the action is really important. I didn’t quite, until you said it,
6:04:08 realize there’s a cost to like sending the text early. It’s like a very expensive cost.
6:04:12 Yeah, it’s super annoying. If you accidentally, like if you’re a cursor, imagine if your cursor
6:04:17 misclicked every once in a while. That’s like super obnoxious. And the worst part of it is usually
6:04:20 when the user is trying to click, they’re also holding still because they’re over the target
6:04:24 they want to hit and they’re getting ready to click, which means that in the datasets that we
6:04:29 build, on average, it is the case that low speeds or a desire to hold still are correlated with
6:04:34 when the user is attempting to click. Wow, that is really fascinating. It’s also not the case,
6:04:38 people think that, oh, click is a binary signal. This must be super easy to decode. Well, yes,
6:04:43 it is, but the bar is so much higher for it to become a useful thing for the user.
6:04:46 And there’s ways to solve this. I mean, you can sort of take the compound approach of, well,
6:04:49 let’s just give the, like, let’s take five seconds to click. Let’s take a huge window of
6:04:53 time so we can be very confident about the answer. But again, world’s best mouse. The world’s best
6:04:57 mouse doesn’t take a second to click or 500 milliseconds to click. It takes five milliseconds
6:05:01 to click or less. And so if you’re aiming for that kind of high bar, then you really want to
6:05:06 solve that underlying problem. So maybe this is a good place to ask about how to measure performance,
6:05:13 this whole bits per second. Can you, like, explain what you mean by that? Maybe a good
6:05:19 place to start is to talk about web grid as a game, as a good illustration of the measurement of
6:05:23 performance. Yeah. Maybe I’ll take one zoom out step there, which is just explaining why
6:05:28 we care to measure this at all. So again, our goal is to provide the user the ability to control
6:05:32 the computer as well as I can, and hopefully better. And that means that they can do it at
6:05:35 the same speed as what I can do. It means that they have access to all the same functionality
6:05:39 that I have, including, you know, all those little details like command tab, command space,
6:05:43 you know, all this stuff and be able to do it with the brain. And with the same level of reliability
6:05:47 is what I can do with my muscles. And that’s a high bar. And so we intend to measure and quantify
6:05:50 every aspect of that to understand how we’re progressing towards that goal. There’s many ways
6:05:55 to measure BPS, by the way; this isn't the only way. But we present the user a grid of targets,
6:05:59 and basically we compute a score, which is dependent on how fast and accurately they can
6:06:02 select and then how small are the targets. And the more targets that are on the screen,
6:06:07 the smaller they are, the more information you present per click. And so if you think about
6:06:10 it from an information theory point of view, you can communicate across different information
6:06:15 theoretic channels. And one such channel is a typing interface you could imagine that’s built
6:06:20 out of a grid, just like a software keyboard on the screen. And bits per second is a measure
6:06:24 that’s computed by taking the log of the number of targets on the screen. You can subtract one if
6:06:28 you care to model a keyboard because you have to subtract one for the delete key on the keyboard.
6:06:32 So it's the log of the number of targets on the screen, times the number of correct selections minus
6:06:37 incorrect selections, divided by some time window, for example, 60 seconds. And that's sort of the
6:06:41 standard way to measure a cursor control task in academia. And all credit in the world goes to
6:06:45 this great professor, Dr. Shenoy of Stanford, who came up with that task. And he’s also one of my
6:06:49 inspirations for being in the field. So all the credit in the world to him for coming up with a
6:06:52 standardized metric to facilitate this kind of bragging rights that we have now to say that
6:06:56 Nolan is the best in the world at this task with his BCI. It’s very important for progress that
6:07:00 you have standardized metrics that people can compare across different techniques and approaches,
6:07:04 how well does this do? So yeah, big kudos to him and to all the team at Stanford.
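As a sketch of the metric as described above (the function name and exact conventions are mine; see the original Stanford work for the canonical definition):

```python
import math

def webgrid_bps(n_targets, correct, incorrect, window_seconds=60.0):
    """Bits per second for a grid-selection task: bits per selection
    (log2 of the number of targets) times net correct selections,
    divided by the time window. Subtract one target first if you are
    modeling a keyboard with a delete key, per the description above."""
    bits_per_selection = math.log2(n_targets)
    return bits_per_selection * (correct - incorrect) / window_seconds

# Sanity check against numbers quoted later in the conversation: a 35x35
# grid is ~10.26 bits per selection, so ~100 net correct selections in
# one minute works out to ~17 BPS.
print(webgrid_bps(35 * 35, correct=100, incorrect=0))  # ~17.1
```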
6:07:11 Yeah. So for Nolan, and for me playing this task, there’s also different modes that you can
6:07:15 configure this task. So the web grid task can be presented as just sort of a left click on the
6:07:19 screen, or you could have, you know, targets that you just dwell over, or you could have targets
6:07:22 that you left right click on, you could have targets that are left, right click, middle click,
6:07:25 scrolling, clicking and dragging, you know, you can do all sorts of things within this general
6:07:30 framework. But the simplest purest form is just blue targets jump on the screen, blue means left
6:07:38 click. That’s the simplest form of the game. And the sort of prior records here in academic work
6:07:45 and at Neuralink internally with sort of NHPs have all been matched or beaten by Nolan with his
6:07:51 Neuralink device. So sort of prior to Neuralink, the sort of world record for a human using such a device is
6:07:55 somewhere between 4.2 and 4.6 BPS, depending on exactly what paper you read and how you interpret it.
6:08:02 Nolan’s current record is 8.5 BPS. And again, the sort of median Neuralink performance is 10 BPS.
6:08:08 So you can think of it roughly as he’s 85% the level of control of a median Neuralink or using
6:08:16 their cursor to select blue targets on the screen. And yeah, I think there’s a very interesting
6:08:20 journey ahead to get us to that same level of 10 BPS performance. It’s not the case that sort of the
6:08:24 tricks that got us from, you know, 4 to 6 BPS, and then 6 to 8 BPS are going to be the ones that
6:08:29 get us from 8 to 10. And in my view, the core challenge here is really the labeling problem.
6:08:33 It’s how do you understand at a very, very fine resolution what the user is attempting to do.
6:08:38 And yeah, I highly encourage folks in academia to work on this problem.
6:08:44 What’s the journey with Nolan on that quest of increasing the BPS on WebGrid? In March,
6:08:52 you said that he selected 89,285 targets in WebGrid. So he loves this game. He’s really
6:08:56 serious about improving his performance in this game. So what is that journey of trying to figure
6:09:01 out how to improve that performance? How much can that be done on the decoding side? How much can
6:09:08 that be done on the calibration side? How much can that be done on the Nolan side of like figuring
6:09:16 out how to convey his intention more cleanly? Yeah, no, this is a great question. So in my view,
6:09:20 one of the primary reasons why Nolan’s performance is so good is because of Nolan.
6:09:26 Nolan is extremely focused and very energetic. He’ll play WebGrid sometimes for like four hours
6:09:30 in the middle of the night, like from 2am to 6am, he’ll be playing WebGrid just because he wants
6:09:35 to push it to the limits of what he can do. And, you know, this is not us like asking him to do that.
6:09:38 I want to be clear, like we’re not saying, hey, you should play WebGrid tonight. We just gave him
6:09:43 the game as part of our research, you know, and he is able to play independently and practice
6:09:47 whenever he wants. And he really pushes hard to push the technology to its absolute limit.
6:09:51 And he views it as like, you know, his job really to make us be the bottleneck.
6:09:55 And boy, has he done that well. And so that’s the first thing to acknowledge is that,
6:09:59 you know, he is extremely motivated to make this work. I’ve also had the privilege to meet other,
6:10:03 you know, clinical trial participants from BrainGate and other trials, and they very much
6:10:08 share the same attitude of like, they view this as their life’s work to, you know, advance the
6:10:12 technology as much as they can. And if that means selecting targets on the screen for four hours
6:10:17 from 2am to 6am, then so be it. And there’s something extremely admirable about that that’s
6:10:23 worth calling out. Okay, so now how do you sort of get from where he started, which is no cursor
6:10:28 control, to 8 BPS? So, I mean, when he started, there's a huge amount of learning to do on his
6:10:33 side and our side to figure out what’s the most intuitive control for him. And the most intuitive
6:10:38 control for him is sort of, you have to find the set intersection of what we have the signal
6:10:41 to decode. So we don’t pick up, you know, every single neuron in the motor cortex, which means
6:10:45 we don’t have representation for every part of the body. So there may be some signals that we have
6:10:50 better sort of decode performance on than others. For example, on his left hand, we have a lot of
6:10:55 difficulty distinguishing his left ring finger from his left middle finger. But on his right hand,
6:10:59 we have a good, you know, good control and good modulation detected from the neurons we’re able
6:11:03 to record for his pinky and his thumb and his index finger. So you can imagine how these different,
6:11:08 you know, subspaces of modulated activity intersect with what’s the most intuitive for him.
6:11:12 And this has evolved over time. So once we gave him the ability to calibrate models on his own,
6:11:16 he was able to go and explore various different ways to imagine and control on the cursor.
6:11:20 For example, he could imagine controlling the cursor by wiggling his wrist side to side,
6:11:23 or by moving his entire arm, or, at one point, even his feet. You know, he tried a
6:11:27 whole bunch of stuff to explore the space of what is the most natural way for him
6:11:30 to control the cursor that at the same time is easy for us to decode.
6:11:37 Just to clarify, it’s through the body mapping procedure that you’re able to figure out which
6:11:44 finger he can move. Yes, yes, that's one way to do it. Maybe one nuance: when he's doing
6:11:48 it, he can imagine many more things than we represent in that visual on the screen. So
6:11:53 we show him sort of abstractly, here’s a cursor, you figure out what works the best for you.
6:11:57 And we obviously have hints about what will work best from that body mapping procedure of,
6:12:01 you know, we know that this particular action we can represent well, but it’s really up to him
6:12:06 to go and explore and figure out what works the best. But at which point does he no longer
6:12:10 visualize the movement of his body and he’s just visualizing the movement of the cursor?
6:12:14 Yeah. How quickly does he go from, how quickly does he get there?
6:12:18 So this happened on a Tuesday, I remember this day very clearly, because at some point during
6:12:22 the day, it looked like he wasn’t doing super well, like it looked like the model wasn’t
6:12:26 performing super well, and he was like getting distracted. But he actually, it wasn’t the case,
6:12:30 like what actually happened was he was trying something new, where he was just
6:12:35 controlling the cursor. So he wasn’t imagining moving his hand anymore, he was just imagining,
6:12:38 I don’t know what it is, some like abstract intention to move the cursor on the screen.
6:12:43 And I cannot tell you what the difference between those two things are. I really truly cannot.
6:12:48 He’s tried to explain it to me before. I cannot give a first person account of what that’s like,
6:12:53 but the expletives that he uttered in that moment were enough to suggest that there’s a very
6:12:58 qualitatively different experience for him to just have direct neural control over a cursor.
6:13:06 I wonder if there’s a way through UX to encourage a human being to discover that,
6:13:13 because he discovered it, like you said to me, that he’s a pioneer. So he discovered that on his
6:13:19 own through all of this, the process of trying to try to move the cursor with different kinds of
6:13:27 intentions. But that is clearly a really powerful thing to arrive at, which is to let go of trying
6:13:32 to control the fingers and the hand and control the actual digital device with your mind.
6:13:37 That’s right. UX is how it works. And the ideal UX is one that it’s the user doesn’t have to think
6:13:41 about what they need to do in order to get it done. It just does it.
6:13:47 That is so fascinating. But I wonder on the biological side,
6:13:53 how long it takes for the brain to adapt. So is it just simply learning
6:13:59 like high level software? Or is there like a neuroplasticity component where the brain is
6:14:06 adjusting slowly? Yeah. The truth is, I don’t know. I’m very excited to see with the second
6:14:11 participant that we implant what the journey is like for them, because we’ll have learned a lot
6:14:15 more. Potentially, we can help them understand and explore that direction more quickly. This
6:14:20 is something I didn’t know. This wasn’t me prompting Nolan to go try this. He was just exploring how
6:14:24 to use his device and figure it out himself. But now that we know that that’s a possibility,
6:14:28 that maybe there’s a way to, for example, hint the user, don’t try super hard during calibration.
6:14:33 Just do something that feels natural or just directly control the cursor. Don’t imagine explicit
6:14:37 action. And from there, we should be able to hopefully understand how this is for somebody who
6:14:41 has not experienced that before. Maybe that’s the default mode of operation for them. You don’t
6:14:45 have to go through this intermediate phase of explicit motions. Or maybe if that naturally
6:14:50 happens for people, you can just occasionally encourage them to allow themselves to move the
6:14:55 cursor. Actually, sometimes, just like with a four-minute mile, just the knowledge that that’s
6:15:01 possible pushes you to do it. Yeah. Enables you to do it. And then it becomes trivial. And then it
6:15:06 also makes you wonder, this is the cool thing about humans. Once there’s a lot more human
6:15:11 participants, they will discover things that are possible. Yes. And share their experiences.
6:15:17 Yeah. And share. And then because of them sharing it, they’ll be able to do it. All of a sudden,
6:15:22 that’s unlocked for everybody. Yeah. Because just the knowledge sometimes is the thing that enables
6:15:27 you to do it. Yeah. I mean, just commenting on that too, we've probably tried like a thousand different
6:15:32 ways to do various aspects of decoding. And now we know what the right subspace is to continue
6:15:37 exploring further. Again, thanks to Nolan and the many hours he’s put into this. And so even just
6:15:41 that helps constrain the beam search of different approaches that we could explore, and
6:15:45 really helps accelerate for the next person, the set of things that we’ll get to try on day one,
6:15:50 how fast we hope to get them to useful control, how fast we can enable them to use it independently,
6:15:54 and to give value out of the system. So yeah, massive hats off to Nolan and all the participants
6:15:59 that came before him to make this technology a reality. So how often are the updates to the
6:16:03 decoder? Because Nolan mentioned like, okay, there's a new update that we're working on. And
6:16:10 in the stream, he said he plays the Snake game because it's like super hard. It's a good way for
6:16:16 him to test like how good the update is. So and he says like, sometimes the update is a step
6:16:22 backwards. It’s like, it’s a constant like iteration. So how often like, what does the update
6:16:27 entail? Is it mostly on the decoder side? Yeah, couple comments. So it's probably worth
6:16:30 drawing a distinction between sort of research sessions where we're actively trying different
6:16:34 things to understand like what the best approach is versus sort of independent use where we want
6:16:38 to have, you know, ability to just go use the device, how anybody would want to use their MacBook.
6:16:42 And so what he’s referring to is, I think usually in the context of a research session where we’re
6:16:46 trying, you know, many, many different approaches to, you know, even unsupervised approaches like
6:16:50 we talked about earlier to try to come up with better ways to estimate his true intention
6:16:55 and more accurately decode it. And in those scenarios, I mean, in any given session,
6:16:59 he’ll sometimes work for like eight hours a day. And so that can be, you know, hundreds of different
6:17:05 models that we would try in that day, like a lot of different things. Now, it’s also worth noting
6:17:08 that we update the application he uses quite frequently. I think, you know, sometimes up to
6:17:13 like four or five times a day, we’ll update his application with different features or bug fixes
6:17:18 or feedback that he’s given us. So he’s been able to, he’s a very articulate person who is part of
6:17:21 the solution. He’s not a complaining person. He says, Hey, here’s this thing that I’ve,
6:17:26 I’ve discovered is not optimal in my flow. Here’s some ideas how to fix it. Let me know what your
6:17:30 thoughts are. Let’s figure out how to, how to solve it. And it often happens that those things are
6:17:34 addressed within, you know, a couple of hours of him giving us his feedback, that that’s a kind
6:17:37 of iteration cycle we’ll have. And so sometimes at the beginning of the session, he’ll give us
6:17:40 feedback. And at the end of the session, he’s, he’s giving us feedback on the next iteration of
6:17:44 that, of that, of that process or that setup. That’s fascinating. Cause one of the things you
6:17:50 mentioned that there were 271 pages of notes taken from the BCI sessions, and this was just in March.
6:17:56 So one of the amazing things about human beings is that they can provide, especially ones who are
6:18:02 smart and excited and positive with good vibes, knowing that they can provide
6:18:07 feedback, continuous feedback. It also requires, just to brag on the team a little bit, I work with
6:18:13 a lot of exceptional people and it requires the team being absolutely laser focused on the user
6:18:18 and what will be the best for them. And it requires like a level of commitment of, okay,
6:18:20 this is what the user feedback was; I have all these meetings, but we're going to skip them today
6:18:27 and we're going to do this. You know, that level of focus and commitment is, I would say,
6:18:31 underappreciated in the world. And also, you know, you obviously have to have the talent to be able to
6:18:38 execute on these things effectively. And yeah, we have that in loads. Yeah. And this is such an
6:18:44 interesting space of UX design, because you have, there’s so many unknowns here.
6:18:51 And I can tell UX is difficult because of how many people do it poorly.
6:18:58 It’s just not a trivial thing. Yeah. It’s also, you know, UX is not something that you can
6:19:03 always solve by just constant iterating on different things. Like sometimes you really need
6:19:07 to step back and think globally: is this even the right sort of minimum to be chasing down
6:19:11 for a solution? Like there’s a lot of problems in which sort of fast iteration cycle is the
6:19:17 predictor of how successful you will be. As a good example, like in an RL simulation, for example,
6:19:21 the more frequently you get a reward, the faster you can progress. It’s just an easier learning
6:19:26 problem, the more frequently you get feedback. But UX is not that way. I mean, users are actually
6:19:30 quite often wrong about what the right solution is. And it requires a deep understanding of the
6:19:35 technical system and what’s possible, combined with what the problem is you’re trying to solve,
6:19:38 not just how the user expressed it, but what the true underlying problem is
6:19:43 to actually get to the right place. Yeah, those are the old stories of Steve Jobs,
6:19:49 like rolling in there, like, yeah, the user is a useful signal, but it's not a perfect
6:19:54 signal. And sometimes you have to remove the floppy disk drive, or whatever it was, I forgot
6:20:02 all the crazy stories of Steve Jobs making wild design decisions. But there, some of it is
6:20:11 aesthetic, some of it is about the love you put into the design, which is very much a Steve
6:20:19 Jobs, Jony Ive type thing. But when you have a human being using their brain to interact with it,
6:20:26 it is also deeply about function. It's not just aesthetic. And so you have to empathize
6:20:33 with a human being while not always listening to them directly. You have to deeply
6:20:40 empathize. It's fascinating. It's really, really fascinating. And at the same time, iterate, right?
6:20:46 But not iterate in a small way, sometimes a complete, like rebuilding the design. He said that,
6:20:53 Nolan said in the early days, the UX sucked, but you improved quickly. What was that journey like?
6:20:58 Yeah, I mean, I’ll give one concrete example. So he really wanted to be able to read Manga.
6:21:02 This is something that he, I mean, it sounds like a simple thing, but it’s actually a really big
6:21:06 deal for him. And he couldn't do it with his mouthstick. It just wasn't accessible: you
6:21:10 can't scroll with the mouthstick on his iPad on the website that he wanted to be able to use
6:21:15 to read the newest manga. And so it might be a good quick pause to say the mouthstick is the thing
6:21:21 he's using, holding a stick in his mouth to scroll on a tablet. Right. Yeah. It's basically,
6:21:25 you can imagine it’s a stylus that you hold between your teeth. It’s basically a very long
6:21:32 stylus. And it’s exhausting. It hurts and it’s inefficient. Yeah. And maybe it’s also worth
6:21:36 calling out, there are other alternative assistive technologies, but that particular situation
6:21:40 Nolan’s in, and this is not uncommon, and I think it’s also not well understood by folks,
6:21:44 is that, you know, he’s relatively spastic, so he’ll have muscle spasms from time to time.
6:21:48 And so any assistive technology that requires him to be positioned directly in front of a camera,
6:21:51 for example, an eye tracker, or anything that requires him to put something in his mouth,
6:21:55 is just a no-go, because he'll either be shifted out of frame when he has a spasm,
6:21:59 or if he has something in his mouth, it’ll stab him in the face, you know, if he spasms too hard.
6:22:03 So these kinds of considerations are important when thinking about what advantages a BCI has in
6:22:08 someone’s life. If it fits ergonomically into your life in a way that you can use it independently,
6:22:11 when your caretaker is not there, wherever you want to, either in the bed or in the chair,
6:22:15 depending on, you know, your comfort level and your need to manage pressure sores,
6:22:19 you know, all these factors matter a lot in how good the solution is
6:22:25 in that user’s life. So one of these very fun examples is scroll. So again,
6:22:32 Manga is something he wanted to be able to read. And there’s many ways to do scroll with a BCI.
6:22:36 You can imagine, like, different gestures, for example, the user could do that would move the
6:22:42 page. But scroll is a very fascinating control surface, because it’s a huge thing on the screen
6:22:45 in front of you. So any sort of jitter in the model output, any sort of error in the model output,
6:22:49 causes, like, an earthquake on the screen. Like, you really don’t want to have your
6:22:53 Manga page that you’re trying to read be shifted up and down a few pixels just because,
6:22:57 you know, your scroll decoder is not completely accurate. And so this was an example where
6:23:03 we had to figure out how to formulate the problem in a way that the errors of the system,
6:23:06 whenever they do occur, and we’ll do our best to minimize them, whenever those errors do occur,
6:23:11 that it doesn’t interrupt the qualia, again, of the experience that the user is having,
6:23:15 it doesn’t interrupt their flow of reading their book. And so what we ended up building is this
6:23:21 really brilliant feature. This is a teammate named Bruce, who worked on this really brilliant work
6:23:25 called Quick Scroll. And Quick Scroll basically looks at the screen, and it identifies
6:23:29 where on the screen are scroll bars. And it does this by deeply integrating with Mac OS to
6:23:34 understand where are the scroll bars actively present on the screen using the sort of accessibility
6:23:39 tree that’s available to Mac OS apps. And we identified where that those scroll bars are
6:23:44 and provided a BCI scroll bar. And the BCI scroll bar looks similar to a normal scroll bar, but it
6:23:49 behaves very differently in that once you sort of move over to it, your cursor sort of morphs
6:23:53 onto it, it sort of attaches or latches onto it. And then once you push up or down in the same way
6:23:59 that you’d use a push to control, you know, the normal cursor, it actually moves the screen for
6:24:04 you. So it’s basically like remapping the velocity to a scroll action. And the reason that feels so
6:24:08 natural and intuitive is that when you move over to attach to it, it feels like magnetic. So you
6:24:11 like sort of stuck onto it. And then it’s one continuous action, you don’t have to like switch
6:24:15 your imagined movement, you sort of snap onto it and then you get to go. You just immediately can
6:24:20 start pulling the page down or pushing it up. And if you once you get that right, there’s so many
6:24:25 little nuances of how the scroll behavior works to make it naturally intuitive. So one example is
6:24:29 momentum. Like when you scroll a page with your fingers on the screen, you know, you actually
6:24:34 have some flow; it doesn't just stop right when you lift your finger up. The same is true
6:24:37 with BCI scroll. So we had to spend some time to figure out what are the right nuances when you
6:24:41 don’t feel the screen under your fingertip anymore. What is the right sort of dynamic or what’s the
6:24:47 right amount of page give, if you will, when you push it to make it flow the right amount for the
6:24:52 user to have a natural experience reading their book. And there's a million, I mean, I
6:24:56 could tell you, like, there's so many little minutiae of how exactly that scroll works that we spent
6:25:01 probably like a month getting right to make that feel extremely natural and easy for the user.
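A toy sketch of the latch-and-momentum behavior described here; the constants and structure are invented for illustration, and the real Quick Scroll is surely more involved:

```python
class QuickScrollToy:
    """Toy version of a latched BCI scroll surface with momentum."""

    def __init__(self, snap_radius=40.0, decay=0.95):
        self.snap_radius = snap_radius  # px within which the cursor latches
        self.decay = decay              # momentum decay per tick off the bar
        self.latched = False
        self.page_velocity = 0.0

    def tick(self, cursor_to_bar_px, push_velocity):
        """Return the page scroll delta for this tick."""
        # Magnetic attach: one continuous action, no switch of imagined movement.
        self.latched = cursor_to_bar_px < self.snap_radius
        if self.latched:
            self.page_velocity = push_velocity  # direct drive while latched
        else:
            self.page_velocity *= self.decay    # coast with momentum, like touch
        return self.page_velocity

scroll = QuickScrollToy()
print(scroll.tick(cursor_to_bar_px=10.0, push_velocity=-120.0))
```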
6:25:08 to I mean, even the scroll on a smartphone with your finger feels extremely natural and pleasant.
6:25:16 And it probably takes an extremely long time to get that right. And actually it's the same kind of
6:25:23 visionary UX design that we're talking about: don't always listen to the users, but also listen to
6:25:28 them, and also have that visionary, big, throw-everything-out, think-from-first-principles approach, but
6:25:36 also not. Yeah, yeah, by the way, this makes me think that scroll bars on the desktop probably
6:25:42 have stagnated and never taken that step, because the snapping, the snap-to-grid, snap-to-scroll-bar
6:25:47 action you're talking about, is something that could potentially be extremely
6:25:53 useful in the desktop setting. Yeah, even just for users to just improve the experience because
6:25:58 the current scroll bar experience on the desktop is horrible. Yeah, it's hard to find, hard to
6:26:04 control. There's no momentum. The intention should be clear: when I start moving
6:26:09 towards a scroll bar, there should be a snap-to-the-scroll-bar action. But of course, you know,
6:26:16 maybe I’m okay paying that cost, but there’s hundreds of millions of people paying that cost
6:26:22 nonstop. But anyway, but in this case, this is necessary because there’s an extra cost
6:26:29 paid by Nolan for the jitteriness. So you have to switch between the scrolling and the reading.
6:26:35 There has to be a phase shift between the two. Like when you’re scrolling, you’re scrolling.
6:26:39 Right, right. So that is one drawback of the current approach. Maybe one other just
6:26:44 sort of case study here. So again, UX is how it works. And we think about that holistically from
6:26:48 like even the feature detection level of what we detect in the brain to how we design the decoder,
6:26:52 what we choose to decode to then how it works once it’s being used by the user. So another good
6:26:56 example in that sort of how it works once they’re actually using the decoder, you know, the output
6:27:00 that’s displayed on the screen is not just what the decoder says, it’s also a function of, you know,
6:27:04 what’s going on on the screen. So we can understand, for example, that, you know, when you’re trying
6:27:10 to close a tab, that very small, stupid little X that's extremely tiny, which is hard to precisely
6:27:14 hit if you're dealing with sort of a noisy output of the decoder, we can understand that that is a
6:27:17 small little X you might be trying to hit and actually make it a bigger target for you. Similar
6:27:22 to how when you’re typing on your phone, if you’re, you know, used to like the iOS keyboard,
6:27:26 for example, it actually adapts the target size of individual keys based on an underlying language
6:27:32 model. So it’ll actually understand if I’m typing, hey, I’m going to see L, it’ll make the E key bigger
6:27:36 because in those Lex, it’s the person I’m going to go see. And so that kind of, you know, predictiveness
6:27:40 can make the experience much more smooth, even without, you know, improvements to the underlying
6:27:45 decoder or the feature detection part of the stack. So we do that with a feature called magnetic
6:27:49 targets. We actually index the screen and we understand, okay, these are the places that are,
6:27:53 you know, very small targets that might be difficult to hit. Here’s the kind of cursor
6:27:56 dynamics around that location that might be indicative of the user trying to select it.
6:27:59 Let’s make it easier. Let’s blow up the size of it in a way that makes it easier for the user to
6:28:03 sort of snap onto that target. So all these little details, they matter a lot in helping the user be independent in their day-to-day living.
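A sketch of a magnetic-targets-style rule under my own simplifications (real target indexing uses the OS accessibility tree, as described above; the inflation rule and thresholds below are invented): small clickable elements get a larger effective hit area when cursor dynamics suggest an attempted selection.

```python
import numpy as np

def effective_radius(target_radius_px, cursor_speed,
                     slow_speed=30.0, boost=2.5, small_px=12.0):
    """Inflate tiny targets when the cursor is moving slowly near them,
    since slow, deliberate movement suggests the user is trying to select.
    All thresholds are illustrative."""
    if target_radius_px < small_px and cursor_speed < slow_speed:
        return target_radius_px * boost
    return target_radius_px

def snapped(cursor, target, target_radius_px, cursor_speed):
    """True if the cursor should snap onto this (possibly inflated) target."""
    return np.linalg.norm(cursor - target) < effective_radius(
        target_radius_px, cursor_speed)

print(snapped(np.array([500.0, 20.0]), np.array([508.0, 22.0]),
              target_radius_px=6.0, cursor_speed=12.0))  # tiny X, slow cursor
```

6:28:09 So how much of the work on the decoder is generalizable to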
6:28:16 P2, P3, P4, P5, PN? How do you improve the decoder in a way that’s generalizable?
6:28:21 Yeah, great question. So the underlying signal we’re trying to decode is going to look very
6:28:26 different in P2 than in P1. For example, channel number 345 is going to mean something different
6:28:29 in user one than it will in user two, just because that electrode that corresponds with
6:28:34 channel 345 is going to be next to a different neuron in user one than in user two. But the
6:28:39 approach, the methods, the user experience of how you get the right sort of behavioral pattern
6:28:43 from the user to associate with that neural signal, we hope that will translate over multiple
6:28:47 generations of users. And beyond that, it’s very, very possible. In fact, quite likely that we’ve
6:28:52 overfit to sort of Nolan’s user experience desires and preferences. And so what I hope to see is that,
6:28:57 you know, when we get a second, third, fourth participant, that we find sort of what the right
6:29:01 wide minima are that cover all the cases and make it more intuitive for everyone. And hopefully
6:29:05 there’s a cross-pollination of things where, oh, we didn’t think about that with this user because,
6:29:09 you know, they can speak. But with this user who just can fundamentally not speak at all,
6:29:13 this user experience is not optimal. And that will actually, those improvements that we make
6:29:16 there should hopefully translate even to people who can speak but don't feel comfortable
6:29:20 doing so because they're in a public setting, like their doctor's office. So the actual mechanism
6:29:27 of open loop labeling and then closed loop labeling will be the same and hopefully can
6:29:30 generalize across the different users as they’re doing the calibration step.
6:29:37 And the calibration step is pretty cool. I mean, that in itself, the interesting thing
6:29:43 about WebGrid, which is like closed loop, it’s like fun. I love it when there’s like,
6:29:49 there used to be kind of an idea of human computation, which is using actions that humans
6:29:54 would want to do anyway to get a lot of signal from. And like, WebGrid is that like a nice video
6:29:59 game that also serves as great calibration. It's so funny. I've heard this reaction so many times.
6:30:05 Before sort of the first user was implanted, we had an internal perception that the first user
6:30:09 would not find this fun. And so we thought really quite a bit actually about like, should we build
6:30:13 other games that are more interesting for the user so we can get this kind of data and help
6:30:17 facilitate research that’s for long duration and stuff like this. Turns out that like people love
6:30:21 this game. I always loved it, but I didn’t know that that was a shared perception.
6:30:31 Yeah. And just in case it's not clear: WebGrid is, there's a grid of, let's say, 35 by 35 cells, and
6:30:35 one of them lights up blue, and you have to move your mouse over it and click on it. And if you
6:30:41 miss it, it turns red. And I've played this game for so many hours, so many hours. And what's your record,
6:30:46 you said? I think I have the highest at Neuralink right now. My record’s 17 BPS.
6:30:50 17 BPS. Which is about, if you imagine that 35 by 35 grid, you're hitting about 100
6:30:55 trials per minute. So 100 correct selections in that one minute window. So you’re averaging
6:31:01 about between 500, 600 milliseconds per selection. So one of the reasons that I think I struggle with
6:31:06 that game is I’m such a keyboard person. So everything is done with your keyboard. If I can
6:31:13 avoid touching the mouse, it’s great. So how can you explain your high performance? I have like a
6:31:17 whole ritual I go through when I play WebGrid. There's actually like a diet plan associated
6:31:22 with this whole thing. So the first thing is you have to fast for five days. I have to go up to
6:31:25 the mountains. Actually, it kind of, I mean, the fasting thing is important. It, like,
6:31:31 you know, focuses the mind. Yeah. Yeah. So what I do is I actually, I don’t eat for a little bit
6:31:34 forehand. And then I’ll actually eat like a ton of peanut butter right before I go.
6:31:38 And I get like, this is a real thing. This is a real thing. Yeah. And then it has to be really
6:31:41 late at night. This is again, a night owl thing, I think we share, but it has to be like, you know,
6:31:47 midnight, 2am kind of time window. And I have a very specific, like physical position I’ll sit in,
6:31:50 which is, I used to be, I was homeschooled growing up. And so I did most of my work like on the
6:31:55 floor, just like in my bedroom or whatever. And so I have a very specific position on the floor
6:31:59 that I sit in and play. And then you have to make sure, like, there's not a lot of
6:32:03 weight on your elbow when you’re playing. So you can move quickly. And then I turn the gain of
6:32:06 the cursor, so the speed of the cursor, way, way up. So it’s like small motions that actually move
6:32:11 the cursor. Are you moving with your wrist or you’re never moving? I’m moving my fingers. So my
6:32:15 wrist is almost completely still. I’m just moving my fingers. Yeah. You know those just in a small
6:32:22 tangent, which I’ve been meaning to go down this rabbit hole of people that set the world record
6:32:28 in Tetris. Those folks they’re playing, there’s a, there’s a way to, did you see this? It seems like
6:32:34 all the fingers are moving. Yeah. You could find a way to do it where like it’s using a loophole,
6:32:40 like a bug, that you can do some incredibly fast stuff. So it’s along that line, but not quite.
6:32:44 But you do realize there’ll be like a few programmers right now listening to this
6:32:47 who’ll fast and eat peanut butter. Yeah. Please, please try to beat my record. I mean, the reason I did
6:32:52 this literally was just because I wanted the bar to be high. Like I wanted the number that we aim
6:32:55 for should not be like the median performance. It should be like, it should be able to beat
6:32:59 all of us at least. Like that should be the minimum bar. What do you think is possible? Like 20?
6:33:03 Yeah. I don’t know what the limits, I mean, the limits, you can calculate just in terms of
6:33:07 like screen refresh rate and like cursor immediately jumping to the next target.
6:33:10 But there’s, I mean, I’m sure there’s limits before that with just sort of reaction time and
6:33:16 visual perception and things like this. I’d guess it’s below 40 but above 20, somewhere in
6:33:19 there. That’s probably the right range to be thinking about. It also matters like how
6:33:24 difficult the task is. You could imagine like some people might be able to do like 10,000 targets
6:33:29 on the screen and maybe they can do better that way. So there’s some like task optimizations
6:33:34 you could do to try to boost your performance as well. What do you think it takes for Nolan to
6:33:40 be able to go above 8.5, to keep increasing that number? You said like every increase in the
6:33:45 number might require different improvements in the system.
6:33:49 Yeah. I think the nature of this work is, the first answer that’s important to say is, I don’t
6:33:55 know. This is, you know, edge of the research. So again, nobody’s gotten to that number before.
6:34:02 So what’s next is going to be a heuristic guess on my part. What we’ve seen historically is that
6:34:06 different parts of the stack become the bottleneck at different time points. So, you know,
6:34:09 when I first joined Neuralink like three years ago or so, one of the major problems was just
6:34:13 the latency of the Bluetooth connection. It was just like the radio on the device wasn’t super good.
6:34:17 It was an earlier version of the implant. And it just like, no matter how good your decoder was,
6:34:21 if your thing is updating every 30 milliseconds or 50 milliseconds, it’s just going to be choppy.
6:34:25 And no matter how good you are, that’s going to be frustrating and lead to challenges.
6:34:29 So, you know, at that point, it was very clear that the main challenge is just get the data off
6:34:35 the device in a very reliable way such that you can enable the next challenge to be tackled.
6:34:41 And then at some point, it was, you know, actually the modeling challenge of how do you
6:34:46 just build a good mapping like the supervised learning problem of you have a bunch of data
6:34:49 and you have a label you’re trying to predict, just what is the right like
6:34:53 neural decoder architecture and hyperparameters to optimize that. That was a problem for a bit.
6:34:57 And once you solve that, it became a different bottleneck. I think the next bottleneck after
6:35:02 that was actually just sort of software stability and reliability. You know, if you have widely
6:35:10 varying sort of inference latency in your system or your app just lags out every once in a while,
6:35:13 it decreases your ability to maintain and get in a state of flow and it basically just disrupts
6:35:18 your control experience. And so there’s a variety of different software bugs and improvements we
6:35:21 made that basically increased the performance of the system, made it much more reliable, much more
6:35:26 stable and led to a state where we could reliably collect data to build better models with. So,
6:35:28 that was a bottleneck for a while. It’s just sort of like the software stack itself.
6:35:35 If I were to guess right now, there’s sort of two major directions you could think about for
6:35:39 improving BPS further. The first major direction is labeling. So, labeling is again this fundamental
6:35:45 challenge of given a window of time where the user is expressing some behavioral intent,
6:35:50 what are they really trying to do at the granularity of every millisecond? And that again is a task
6:35:55 design problem, it’s a UX problem, it’s a machine learning problem, it’s a software problem,
6:35:59 sort of touches all those different domains. The second thing you can think about to improve
6:36:04 BPS further is either completely changing the thing you’re decoding or just extending the number
6:36:08 of things that you’re decoding. So, this is sort of in the direction of functionality. Basically,
6:36:11 you can imagine giving more clicks, for example, a left click, a right click, a middle click,
6:36:15 different actions like click and drag, for example, and that can improve the effective
6:36:20 bit rate of your communication processes. If you’re trying to allow the user to express themselves
6:36:23 through any given communication channel, you can measure that with bits per second, but what
6:36:27 it actually measures at the end of the day is how effective they are at navigating their computer.
6:36:30 And so, from the perspective of the downstream tasks that you care about, functionality and
6:36:33 extending functionality is something we’re very interested in, because not only can it improve
6:36:38 the sort of number of BPS, but it can also improve the downstream sort of independence that the user
6:36:40 has and the skill and efficiency with which they can operate their computer.
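For reference, the arithmetic behind these BPS numbers. A standard bitrate metric for grid-selection tasks in the academic BCI literature credits each net correct selection with log2(N − 1) bits; assuming that convention is the one in use here, the 17 BPS record falls right out of the numbers quoted above:

```python
import math

def webgrid_bps(n_targets, correct, incorrect, seconds):
    """Achieved bitrate for a grid-selection task: each selection
    conveys log2(N - 1) bits, wrong selections are penalized, and
    the rate is floored at zero."""
    bits_per_selection = math.log2(n_targets - 1)
    net = max(correct - incorrect, 0)
    return bits_per_selection * net / seconds

# ~100 correct selections in a one-minute window on a 35x35 grid:
print(webgrid_bps(35 * 35, correct=100, incorrect=0, seconds=60))
# ≈ 17.1, matching the 17 BPS record discussed earlier
```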
6:36:46 Would the number of threads increasing also potentially help?
6:36:55 Yes, short answer is yes. It’s a bit nuanced how that curve or how that manifests in the numbers.
6:37:01 So, what you’ll see is that if you sort of plot a curve of number of channels that you’re using
6:37:07 for decode, versus either the offline metric of how good you are at decoding, or the online
6:37:12 metric of sort of in practice how good is the user at using this device, you see roughly a
6:37:17 log curve. So, as you move further out in number of channels, you get a corresponding sort of
6:37:23 logarithmic improvement in control quality and offline validation metrics. The important nuance
6:37:29 here is that each channel corresponds with a specific, you know, represented intention in the
6:37:34 brain. So, for example, if you have a channel 254, it might correspond with moving to the right.
6:37:39 Channel 256 might mean move to the left. If you want to expand the number of functions you want
6:37:44 to control, you really want to have a broader set of channels that covers a broader set of
6:37:48 imagined movements. You can think of it like, kind of like Mr. Potato Man, actually. Like,
6:37:52 if you had a bunch of different imagined movements you could do, how would you map those imagined
6:37:56 movements to input to a computer? You could imagine, you know, handwriting to output characters on
6:38:00 the screen. You could imagine just typing with your fingers and have that output text on the screen.
6:38:02 You could imagine different finger modulations for different clicks. You could imagine wiggling
6:38:09 your big nose for opening some menu, or wiggling your, you know, your big toe to have like command
6:38:13 tab occur or something like this. So, it’s really that the amount of different actions you can take in
6:38:17 the world depends on how many channels you have and the information content that they carry.
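A toy illustration of the channel-count intuition here: if each channel is treated as a redundant, independently noisy readout of the same intent, then even the simplest possible decoder (averaging) sees its error shrink like 1/sqrt(N) as channels are added. This is a deliberate oversimplification for illustration, not a model of the actual decoder:

```python
import numpy as np

rng = np.random.default_rng(0)
true_intent = 1.0                # scalar stand-in for "move right this hard"

for n_channels in [16, 64, 256, 1024, 4096]:
    # Each channel reads out the same intent plus independent noise.
    noise = rng.normal(0.0, 1.0, size=(10_000, n_channels))
    decoded = (true_intent + noise).mean(axis=1)    # trivial decoder: average
    print(n_channels, round(decoded.std(), 4))      # error falls ~ 1/sqrt(N)
```

In practice channels are neither independent nor redundant copies of a single intent, which is part of why the measured curve looks logarithmic rather than this clean.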
6:38:22 Right. So, that’s more about the number of actions. So, actually, as you increase the
6:38:28 number of threads, that’s more about increasing the number of actions you’re able to perform.
6:38:31 One other nuance there that is worth mentioning. So, again, our goal is really to enable a
6:38:36 user with paralysis to control the computer as fast as I can. So, that’s BPS, with all the same
6:38:40 functionality I have, which is what we just talked about, but then also as reliably as I can.
6:38:45 And that last point is very related to channel count discussion. So, as you scale out the number
6:38:50 of channels, the relative importance of any particular feature of your model input to the
6:38:55 output control of the user diminishes, which means that if the sort of neural non-stationary effect
6:39:01 is per channel, or if the noise is independent such that more channels means on average less
6:39:06 output effect, then the reliability of your system will improve. So, one sort of core thesis
6:39:11 that at least I have is that scaling channel count should improve the reliability of the system without any
6:39:17 work on the decoder itself. Can you linger on the reliability here? So, first of all, when you
6:39:23 say non-stationarity of the signal, which aspect are you referring to? Yeah, so maybe
6:39:27 let’s talk briefly about what the actual underlying signal looks like. So, again, I spoke very briefly
6:39:31 at the beginning about how when you imagine moving to the right or imagine moving to the left,
6:39:35 neurons might fire more or less. And the frequency content of that signal, at least in the motor
6:39:40 cortex, is very correlated with the output intention of the behavioral task that the user is doing.
6:39:43 Actually, it’s not obvious that rate coding, which is the name of that
6:39:46 phenomenon, is the only way the brain could represent information. You can imagine many
6:39:51 different ways in which the brain could encode intention. And there’s actually evidence like
6:39:55 in bats, for example, that there’s temporal codes. So, timing codes of like exactly when particular
6:40:01 neurons fire is the mechanism of information representation. But at least in the motor cortex,
6:40:06 there’s substantial evidence that it’s rate coding, or at least that the first order effect
6:40:11 is rate coding. So then if the brain is representing information by changing the
6:40:17 sort of frequency of a neuron firing, what really matters is sort of the delta between sort of the
6:40:21 baseline state of the neuron and what it looks like when it’s modulated. And what we’ve observed,
6:40:25 and what has also been observed in academic work, is that that baseline rate, sort of the,
6:40:29 if you were to tare the scale, if you imagine that analogy for like measuring, you know,
6:40:33 flour or something when you’re baking, that baseline state of how much the pot weighs
6:40:38 is actually different day to day. And so if what you’re trying to measure is how much rice is in
6:40:41 the pot, you’re going to get a different measurement on different days because you’re measuring with
6:40:46 different pots. So that baseline rate shifting is really the thing that at least from a first order
6:40:50 description of the problem is what’s causing this downstream bias. There can be other effects,
6:40:54 nonlinear effects on top of that, but at least at a very first order description of the problem,
6:40:57 that’s what we observed day to day is that the baseline firing rate of any particular
6:41:03 neuron or observed on a particular channel is changing. So can you just adjust to the baseline,
6:41:07 make everything relative to the baseline, non-stop? Yeah, this is a great question. So
6:41:14 with monkeys, we have found various ways to do this. One example way to do this is you
6:41:18 ask them to do some behavioral tasks like play the game with a joystick, you measure what’s
6:41:23 going on in the brain, you compute some mean of what’s going on across all the input features,
6:41:26 and you subtract that from the input when you’re doing your BCI session. Works super well.
6:41:32 For whatever reason, that doesn’t work super well with Nolan. I actually don’t know the full
6:41:37 reason why, but I can imagine several explanations. One such explanation could be that the context
6:41:42 effect difference between some open loop task and some closed loop task is much more significant
6:41:45 with Nolan than it is with a monkey. Maybe in this open loop task, he’s
6:41:49 watching the Lex Fridman podcast while he’s doing the task, or he’s whistling and listening
6:41:52 to music and talking with his friend and asking his mom, what’s for dinner while he’s doing this
6:41:58 task. And so the exact sort of difference in context between those two states may be much
6:42:03 larger and thus lead to a bigger generalization gap between the features that you’re normalizing at
6:42:05 sort of open loop time and when you’re trying to use them at closed loop time.
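A minimal sketch of the mean-subtraction procedure described above, assuming binned per-channel firing rates (all names hypothetical). It also shows the failure mode just described: if the baseline drifts between the open-loop block and closed-loop use, the stale correction shows up as a constant bias the decoder can read as phantom intent:

```python
import numpy as np

def fit_baseline(open_loop_rates):
    """Per-channel mean firing rate from an open-loop calibration block.
    open_loop_rates: (T, n_channels) binned rates."""
    return open_loop_rates.mean(axis=0)

def normalize(rates, baseline):
    """Re-zero each channel against its calibration baseline
    ("taring the scale")."""
    return rates - baseline

rng = np.random.default_rng(1)
open_loop   = rng.poisson(10.0, size=(5_000, 1024)).astype(float)
closed_loop = rng.poisson(12.0, size=(5_000, 1024)).astype(float)  # baseline drifted up
residual = normalize(closed_loop, fit_baseline(open_loop)).mean()
print(residual)   # ≈ 2.0: a systematic bias left over after "normalization"
```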
6:42:11 That’s interesting. Just on that point, it’s kind of incredible to watch Nolan be able to do,
6:42:17 to multitask, to do multiple tasks at the same time, to be able to move the mouse cursor effectively
6:42:21 while talking and while being nervous because he’s talking in front of-
6:42:22 Kicking my ass in chess too, yeah.
6:42:28 Kicking your ass. And he talks trash while doing it. So all at the same time.
6:42:33 And yes, if you’re trying to normalize to the baseline, that might throw everything off.
6:42:36 Boy, is that interesting.
6:42:39 Maybe one comment on that too. For folks that aren’t familiar with assistive technology,
6:42:43 I think there’s a common belief that, well, why can’t you just use an eye tracker or something
6:42:48 like this for helping somebody move a mouse on the screen? And it’s really a fair question, and
6:42:53 one where I actually was not confident, before working with Nolan, that this was going to be a profoundly
6:42:58 transformative technology for people like him. And I’m very confident now that it will be,
6:43:02 but the reasons are subtle. It really has to do with ergonomically how it fits into their life.
6:43:06 Even if you can just offer the same level of control as what they would have with an eye
6:43:10 tracker or with a mouse stick, but you don’t need to have that thing in your face. You don’t need
6:43:14 to be positioned a certain way. You don’t need your caretaker to be around to set it up for you.
6:43:18 You can activate it when you want, how you want, wherever you want. That level of independence
6:43:22 is so game-changing for people. It means that they can text a friend at night privately without
6:43:27 their mom needing to be in the loop. It means that they can like open up, you know, and browse
6:43:30 the internet at 2am when nobody’s around to set their iPad up for them.
6:43:35 This is like a profoundly game-changing thing for folks in that situation. And this is even
6:43:39 before we start talking about folks that, you know, may not be able to communicate at all or ask
6:43:43 for help when they want to. This can be potentially the only link that they have to the outside world.
6:43:46 And yeah, that one doesn’t, I think, need explanation of why that’s so impactful.
6:43:53 You mentioned neural decoder. How much machine learning is in the decoder? How much magic?
6:44:00 How much science? How much art? How difficult is it to come up with a decoder that figures out what
6:44:08 these sequence of spikes mean? Yeah, good question. There’s a couple of different ways
6:44:12 to answer this. So maybe I’ll zoom out briefly first, and then I’ll go down one of the rabbit
6:44:16 holes. So the zoomed out view is that building the decoder is really the process of building
6:44:21 the dataset, plus compiling it into the weights. And each of those steps is important.
6:44:25 The direction, I think, of further improvement is primarily going to be in the dataset side of
6:44:29 how do you construct the optimal labels for the model. But there’s an entirely separate challenge
6:44:32 of then how do you compile the best model. And so I’ll go briefly down the second one,
6:44:38 down the second rabbit hole. One of the main challenges with designing the optimal model for
6:44:44 BCI is that offline metrics don’t necessarily correspond to online metrics. It’s fundamentally
6:44:49 a control problem. The user is trying to control something on the screen. And the exact sort of
6:44:56 user experience of how you output the intention impacts your ability to control. So for example,
6:45:01 if you just look at validation loss as predicted by your model, there can be multiple ways to
6:45:05 achieve the same validation loss. Not all of them are equally controllable by the end user.
6:45:10 And so it might be as simple as saying, oh, you can just add auxiliary loss terms that help you
6:45:14 capture the thing that actually matters. But this is a very complex nuanced process. So how you
6:45:20 turn the labels into the model is more of a nuanced process than just like a standard supervised learning
6:45:25 problem. One very fascinating anecdote here, we’ve tried many different sort of neural network
6:45:33 architectures that translate brain data to velocity outputs, for example. And one example that stuck
6:45:38 in my brain from a couple of years ago now is we, at one point, we were using just fully connected
6:45:44 networks to decode the brain activity. We tried an A/B test where we were measuring the relative
6:45:50 performance in online control sessions of sort of a 1D convolution over the input signal. So if
6:45:56 you imagine per channel, you have a sliding window that’s producing some convolved feature for each
6:46:00 of those input sequences for every single channel simultaneously. You can actually get better
6:46:04 validation metrics, meaning you’re fitting the data better, and it’s generalizing better on offline
6:46:08 data if you use this convolutional architecture. You’re reducing parameters. It’s sort of a standard
6:46:14 procedure when you deal with time series data. Now, it turns out that when using that model online,
6:46:18 the controllability was worse, was far worse, even though the offline metrics were better.
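For intuition, here is a sketch of the two architectures in that A/B test, under stated assumptions: binned features for 1024 channels over a short window, decoded to 2D cursor velocity, with the per-channel sliding window implemented as a depthwise 1D convolution. All dimensions are made up for illustration; this is not Neuralink's actual decoder:

```python
import torch
import torch.nn as nn

N_CHANNELS, WINDOW = 1024, 10    # assumed: 10 time bins of features per channel

# Baseline: fully connected over the flattened feature window.
fc_decoder = nn.Sequential(
    nn.Flatten(),                                 # (B, C, T) -> (B, C*T)
    nn.Linear(N_CHANNELS * WINDOW, 256),
    nn.ReLU(),
    nn.Linear(256, 2),                            # 2D cursor velocity
)

# Variant: one learned sliding-window filter per channel (depthwise conv),
# i.e., a convolved feature for every channel simultaneously.
conv_decoder = nn.Sequential(
    nn.Conv1d(N_CHANNELS, N_CHANNELS, kernel_size=5,
              groups=N_CHANNELS, padding=2),      # per-channel temporal filter
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(N_CHANNELS * WINDOW, 2),
)

x = torch.randn(8, N_CHANNELS, WINDOW)            # fake batch of neural features
print(fc_decoder(x).shape, conv_decoder(x).shape) # both: torch.Size([8, 2])
```

The anecdote's punchline is that the convolutional variant won on offline validation metrics yet controlled worse online, which is why optimizing offline loss alone isn't enough.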
6:46:23 And there can be many ways to interpret that. But what that taught me at least was that,
6:46:26 hey, it’s at least the case right now that if you were to just throw a bunch of compute at this
6:46:31 problem, and you were trying to sort of hyper parameter optimize or, you know, let some GPT
6:46:35 model hard code or come up with or invent many different solutions, if you were just optimizing
6:46:40 for loss, it would not be sufficient, which means that there’s still some inherent modeling gap
6:46:43 here. There’s still some artistry left to be uncovered here of how to get your model to scale
6:46:47 with more compute. And that may be fundamentally labeling problem, but there may be other components
6:46:55 to this as well. Is it data constrained at this time? Which is what it sounds like.
6:47:01 like, how do you get a lot of good labels? Yeah, I think it’s data quality constrained,
6:47:07 not necessarily data quantity constrained. But even like, even just the quantity, I mean,
6:47:13 because it has to be trained on the interactions. I guess there’s not that many interactions.
6:47:17 Yeah, so it depends what version of this you’re talking about. So if you’re talking about, like,
6:47:21 let’s say the simplest example of just 2d velocity, then I think, yeah, data quality is the main thing.
6:47:24 If you’re talking about how to build a sort of multifunction output that lets you do all the
6:47:28 inputs to the computer that you and I can do, then it’s actually a much more sophisticated
6:47:33 nuanced modeling challenge. Because now you need to think about not just when the user is left
6:47:36 clicking, but when you’re building the left click model, you also need to be thinking about how to
6:47:39 make sure it doesn’t fire when they’re trying to right click or when they’re trying to move the mouse.
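One plausible mitigation for that kind of cross-contamination, sketched below (an assumption on my part, not necessarily what Neuralink did): explicitly include move-without-click segments in the click detector's training set as negative examples, so the model learns that movement alone should not fire the click:

```python
import numpy as np

def build_click_dataset(click_segs, move_only_segs):
    """Click-detector training set that includes move-only "distractor"
    segments labeled 0, so movement alone doesn't trigger the click.
    Each segment: (T, n_features) array of neural features."""
    X = np.concatenate(click_segs + move_only_segs, axis=0)
    y = np.concatenate([np.ones(len(s)) for s in click_segs] +
                       [np.zeros(len(s)) for s in move_only_segs])
    return X, y

# Toy usage with random stand-in features:
clicks = [np.random.randn(50, 1024) for _ in range(3)]
moves  = [np.random.randn(50, 1024) for _ in range(3)]
X, y = build_click_dataset(clicks, moves)   # X: (300, 1024), y: (300,)
```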
6:47:45 So one example of an interesting bug from like sort of week one of BCI with Nolan was when he
6:47:49 moved the mouse, the click signal sort of dropped off a cliff. And when he stopped the click signal
6:47:54 went up. So again, there’s a contamination between the two inputs. Another good example was at one
6:47:59 point he was trying to do sort of a left click and drag. And the minute he started moving,
6:48:04 the left click signal dropped off a cliff. So again, because there’s some contamination between
6:48:08 the two signals, you need to come up with some way to either in the data set or in the model,
6:48:13 build robustness against this kind of, you think about like overfitting, but really it’s just that
6:48:17 the model has not seen this kind of variability before. So you need to find some way to help the
6:48:23 model with that. This is super cool. It feels like all of this is very solvable, but it’s hard.
6:48:26 Yes, it is fundamentally an engineering challenge. This is important to emphasize and it’s also
6:48:30 important to emphasize that it may not need fundamentally new techniques, which means that
6:48:36 people who work on, let’s say, unsupervised speech classification using CTC loss, for example,
6:48:39 with internal theory, they could potentially have very applicable skills to this.
6:48:47 So what things are you excited about in the future development of the software stack on
6:48:51 Neuralink? So everything we’ve been talking about, the decoding, the UX.
6:48:54 I think there’s something I’m excited about from the technology
6:48:58 side and some I’m excited about for understanding how this technology is going to be best situated
6:49:03 for entering the world. So I’ll work backwards. On the technology entering the world side of things,
6:49:09 I’m really excited to understand how this device works for folks that cannot speak at all, that have
6:49:13 no ability to sort of bootstrap themselves into useful control by voice command, for example,
6:49:17 and are extremely limited in their current capabilities. I think that will be an incredibly
6:49:22 useful signal for us to understand, I mean, really what is an existential question for all startups,
6:49:26 which is product market fit. Does this device have the capacity and potential to transform
6:49:30 people’s lives in the current state? And if not, what are the gaps? And if there are gaps,
6:49:34 how do we solve them most efficiently? So that’s what I’m very excited about for the next year or
6:49:41 so of clinical trial operations. The technology side, I’m quite excited about basically everything
6:49:46 we’re doing. I think it’s going to be awesome. The most prominent one, I would say, is scaling
6:49:50 channel count. So right now we have a thousand-channel device. The next version will have between
6:49:53 three and six thousand channels. And I would expect that curve to continue in the future.
6:49:59 And it’s unclear what set of problems will just disappear completely at that scale. And what set
6:50:02 of problems will remain and require further focus. And so I’m excited about the clarity of gradient
6:50:06 that that gives us in terms of the user experience that we choose to focus our time and resources on.
6:50:11 And also in terms of the, yeah, even things as simple as non-stationarity, like does that
6:50:15 problem just completely go away at that scale? Or do we need to come up with new creative UXs
6:50:20 still even at that point? And also when we get to that time point, when we start expanding out
6:50:25 dramatically the set of functions that you can output from one brain, how to deal with all the
6:50:29 nuances of both the user experience of not being able to feel the different keys under your fingertips
6:50:32 but still needing to be able to modulate all of them in synchrony to achieve the thing you want.
6:50:36 And again, you don’t have that proper set of feedback. So how can you make that intuitive
6:50:40 for a user to control a high dimensional control surface without feeling the thing physically?
6:50:45 I think that’s going to be a super interesting problem. I’m also quite excited to understand,
6:50:49 you know, do these scaling laws continue? Like as you scale channel count,
6:50:54 how much further out do you go before that saturation point is truly hit? And it’s not
6:50:57 obvious today. I think we only know what’s in the sort of interpolation space. We only know
6:51:02 what’s between 0 and 1024, but we don’t know what’s beyond that. And then there’s a whole sort of
6:51:05 like range of interesting sort of neuroscience and brain questions, which is when you stick more
6:51:09 stuff in the brain in more places, you get to learn much more quickly about what those brain
6:51:14 regions represent. And so I’m excited about that fundamental neuroscience learning, which is also
6:51:19 important for figuring out how to most efficiently insert electrodes in the future. So yeah, I think
6:51:22 all those dimensions I’m really, really excited about. And that doesn’t even get close to touching
6:51:25 the sort of software stack that we work on every single day and what we’re working on right now.
6:51:34 Yeah, it seems virtually impossible to me that a thousand electrodes is where it saturates.
6:51:40 It feels like this would be one of those silly notions in the future where obviously you should
6:51:46 have millions of electrodes. And this is where like the true breakthroughs happen.
6:51:55 You tweeted, “Some thoughts are most precisely described in poetry.” What do you think that is?
6:52:03 I think it’s because the information bottleneck of language is pretty steep.
6:52:10 And yet you’re able to reconstruct in the other person’s brain more effectively
6:52:15 without being literal. If you can express the sentiment such that in their brain,
6:52:20 they can reconstruct the actual true underlying meaning and beauty of the thing that you’re
6:52:24 trying to get across, the generator function in their brain is more powerful than what language
6:52:32 can express. And so the mechanism of poetry is really just to feed or seed that generator function.
6:52:38 So being literal sometimes is a suboptimal compression for the thing you’re trying to convey.
6:52:43 And it’s actually in the process of the user going through that generation that they understand
6:52:48 what you mean. That’s the beautiful part. It’s also like when you look at a beautiful painting.
6:52:52 It’s not the pixels of the painting that are beautiful. It’s the thought process that occurs
6:52:56 when you see that, the experience of that. That, I think, is what matters.
6:53:03 Yeah. It’s resonating with some deep thing within you that the artist also experienced
6:53:08 and was able to convey that through the pixels. And that’s actually going to be relevant for full
6:53:17 on telepathy. It’s like if you just read the poetry literally, that doesn’t say much of anything
6:53:24 interesting. It requires a human to interpret it. So it’s the combination of the human mind
6:53:29 and all the experiences that human being has within the context of the collective intelligence
6:53:36 of the human species that makes that poem makes sense. And they load that in. And so in that same
6:53:44 way, the signal that carries meaning from human to human may seem trivial, but may actually
6:53:51 carry a lot of power because of the complexity of the human mind on the receiving end.
6:53:57 Yeah. That’s interesting. Who was it? I think
6:54:03 it was Joscha Bach who said something about
6:54:13 all the people that think we’ve achieved AGI explain why humans like music.
6:54:21 Oh, yeah. And until the AGI likes music, you haven’t achieved AGI or something like that.
6:54:25 Do you not think that’s like some next token entropy surprise kind of thing going on there?
6:54:30 I don’t know. I don’t know either. I listen to a lot of classical music and also read a lot of
6:54:35 poetry. And yeah, I do wonder if there is some element of the next token surprise factor going
6:54:40 on there. Yeah, maybe. Because I mean, a lot of the tricks in both poetry and music are like,
6:54:43 basically you have some repeated structure and then you do like a twist. Like it’s like,
6:54:46 okay, verse or like clause one, two, three is one thing. And then clause four is like,
6:54:50 okay, now we’re onto the next theme. Yeah. And they kind of play with exactly when the surprise
6:54:55 happens and the expectation of the user. And that’s even true like, through history as musicians
6:54:59 evolve music, they take like some known structure that people are familiar with,
6:55:02 and they just tweak it a little bit. Like they tweak it and add a surprising element.
6:55:06 This is especially true in like, in classical music heritage. But that’s what I’m wondering,
6:55:12 like, is it all just entropy? So breaking structure or breaking symmetry is something
6:55:16 that humans seem to like, maybe as simple as that. Yeah. And I mean, great artists copy.
6:55:20 And they also, you know, knowing which rules to break is the important part.
6:55:25 And that fundamentally, it must be about the listener of the piece. Like, which rule is the
6:55:29 right one to break is about the user or the audience member perceiving that as interesting.
6:55:32 What do you think is the meaning of human existence?
6:55:42 There’s a TV show I really like called the West Wing. And in the West Wing, there’s
6:55:46 a character, the president of the United States, who’s having a discussion about the Bible with
6:55:53 one of their colleagues. And the colleague says something about, you know, the Bible says X, Y,
6:56:00 and Z. And the president says, yeah, but it also says A, B, C. And the person says, well,
6:56:05 do you believe the Bible to be literally true? And the president says, yes, but I also think
6:56:10 that neither of us are smart enough to understand it. I think to like the analogy here for the
6:56:15 meaning of life is that largely, we don’t know the right question to ask. And so I think I’m
6:56:22 very aligned with sort of the Hitchhiker’s Guide to the Galaxy version of this question,
6:56:27 which is basically if we can ask the right questions, it’s much more likely we find the
6:56:32 meaning of human existence. And so in the short term, as a heuristic in the sort of search
6:56:37 policy space, we should try to increase the diversity of people asking such questions or
6:56:43 generally of consciousness and conscious beings asking such questions. So again, I think I’ll
6:56:47 take the I don’t know card here, but say I do think there are meaningful things we can do
6:56:48 that improve the likelihood of answering that question.
6:56:55 It’s interesting how much value you assign to the task of asking the right questions.
6:57:00 The main thing is not the answers, it’s the questions.
6:57:06 This point, by the way, is driven home in a very painful way when you try to communicate with
6:57:10 someone who cannot speak. Because a lot of the time, the last thing to go is they have the ability
6:57:16 to somehow wiggle a lip or move something that allows them to say yes or no. And in that situation,
6:57:20 it’s very obvious that what matters is are you asking them the right question to be able to
6:57:26 say yes or no to. Wow, that’s powerful. Well, Bliss, thank you for everything you do.
6:57:31 And thank you for being you. And thank you for talking today. Thank you.
6:57:38 Thanks for listening to this conversation with Bliss Chapman. And now, dear friends,
6:57:44 here’s Nolan Arbaugh, the first human being to have a Neuralink device implanted in his brain.
6:57:52 You had a diving accident in 2016 that left you paralyzed with no feeling from the shoulders down.
6:57:54 How did that accident change your life?
6:57:58 There’s sort of a freak thing that happened. Imagine you’re
6:58:04 running into the ocean. Although this is a lake, but you’re running into the ocean
6:58:10 and you get to about waist high, and then you kind of like dive in, take the rest of the plunge
6:58:14 under the wave or something. That’s what I did. And then I just never came back up.
6:58:24 Not sure what happened. I did it running into the water with a couple of guys. And so my idea of
6:58:33 what happened is really just that I took like a stray fist, elbow, knee, foot, something to the
6:58:39 side of my head. The left side of my head was sore for about a month afterward. So I must have taken
6:58:45 a pretty big knock. And then they both came up and I didn’t. And so I was facedown in the water
6:58:52 for a while. I was conscious. And then eventually just, you know, realized I couldn’t hold my breath
6:59:00 any longer. And I keep saying took a big drink. People, I don’t know if they like that I say
6:59:07 that seems like I’m making light of it all, but it’s just kind of how I am. And I don’t know, like
6:59:18 I’m a very relaxed sort of stress free person. I rolled with the punches.
6:59:24 For a lot of this, I kind of took it in stride. It’s like, all right, well, what can I do next?
6:59:31 How can I improve my life even a little bit on a day to day basis at first, just trying to
6:59:37 find some way to heal as much of my body as possible, to try to get healed, to try to get
6:59:47 off a ventilator, learn as much as I could. So I could somehow survive once I left the hospital.
6:59:55 And then thank God I had like my family around me. If I didn’t have my parents,
7:00:02 my siblings, then I would have never made it this far. They’ve done so much for me
7:00:10 more than like I can ever thank them for honestly. And a lot of people don’t have that. A lot of
7:00:14 people in my situation, their families either aren’t capable of providing for them or
7:00:20 honestly just don’t want to. And so they get placed somewhere and, you know, in some sort of home.
7:00:26 So thankfully I had my family. I have a great group of friends, a great group of buddies from
7:00:34 college who have all rallied around me. And we’re all still incredibly close. People always say,
7:00:40 you know, if you’re lucky, you’ll end up with one or two friends from high school that you keep
7:00:47 throughout your life. I have about 10, 10 or 12 from high school that have all stuck around.
7:00:53 And we still get together all of us twice a year. We call it the spring series and the fall series.
7:01:01 This last one we all did, we dressed up like X-Men. So I did Professor Xavier and it was freaking
7:01:06 awesome. It was so good. So yeah, I have such a great support system around me. And so,
7:01:15 you know, being a quadriplegic isn’t that bad. I get waited on all the time. People bring me food
7:01:22 and drinks and I get to sit around and watch as much TV and movies and anime as I want. I get to
7:01:31 read as much as I want. I mean, it’s great. It’s beautiful to see that you see the silver lining
7:01:37 and all of this. We’re just going back. Do you remember the moment when you first realized
7:01:45 you were paralyzed from the neck down? Yep. I was face down in the water. Right when I,
7:01:52 whatever, something hit my head. I tried to get up and I realized I couldn’t move and it just
7:01:58 sort of clicked. I’m like, all right, I’m paralyzed, can’t move. What do I do? If I can’t get up,
7:02:05 I can’t flip over, can’t do anything, then I’m going to drown eventually. And I knew I couldn’t
7:02:13 hold my breath forever. So I just held my breath and thought about it for maybe 10, 15 seconds.
7:02:20 I’ve heard from other people that like onlookers, I guess the two girls that pulled me out of the
7:02:27 water were two of my best friends. They are lifeguards. And one of them said that it looked
7:02:32 like my body was sort of shaking in the water. Like I was trying to flip over and stuff.
7:02:43 But I knew, I knew immediately. And I just kind of, I realized that that’s like what my situation
7:02:48 was from here on out. Maybe if I got to the hospital, they’d be able to do something.
7:02:54 When I was in the hospital, like right before surgery, I was trying to calm one of my friends
7:02:58 down. I had like brought her with me from college to the camp. And she was just bawling over me.
7:03:04 And I was like, Hey, it’s going to be fine. Like, don’t worry. I was cracking some jokes to try to
7:03:09 lighten the mood. The nurse had called my mom and I was like, don’t tell my mom. She’s just going to
7:03:14 be stressed out, call her after I’m out of surgery, because at least she’ll have some answers then,
7:03:20 like whether I live or not really. And I didn’t want her to be stressed through the whole thing.
7:03:27 But I knew. And then when I first woke up after surgery, I was super drugged up. They had me on
7:03:35 fentanyl like three ways, which was awesome. I don’t, I don’t recommend it. But I saw,
7:03:42 I saw some crazy stuff on that fentanyl. And it was still the best I’ve ever felt on drugs.
7:03:51 Medication, sorry, on medication. And I remember the first time I saw my mom in the hospital,
7:04:00 I was just bawling. I had like the ventilator in, like I couldn’t talk or anything. And I just
7:04:06 started crying because it was more like seeing her, not that I mean, the whole situation obviously
7:04:11 was pretty rough. But I was just like seeing her face for the first time was pretty hard.
7:04:23 But yeah, I just, I never had like a moment of, you know, man, I’m paralyzed. This sucks. I don’t
7:04:31 want to like be around anymore. It was always just, I hate that I have to do this, but like
7:04:35 sitting here and wallowing isn’t going to help. So immediate acceptance. Yeah.
7:04:45 Yeah. Have there been low points along the way? Yeah, yeah, sure. I mean, there are days when
7:04:49 I don’t really feel like doing anything, not so much anymore. Like not for the last couple years,
7:04:58 I don’t really feel that way. I’ve more so just wanted to try to do anything possible to make
7:05:03 my life better at this point. But at the beginning, there were some ups and downs. There were some
7:05:10 really hard things to adjust to. First off, just like the first couple months, the amount of pain
7:05:16 I was in was really, really hard. I mean, I remember screaming at the top of my lungs in the
7:05:21 hospital because I thought my legs were on fire. And obviously I can’t feel anything, but it’s
7:05:27 all nerve pain. And so that was a really hard night. I asked them to give me as much pain meds
7:05:31 as possible. They’re like, you’ve had as much as you can have. So just kind of deal with it,
7:05:37 go to a happy place sort of thing. So that was a pretty low point. And then every now and again,
7:05:42 it’s hard like realizing things that I wanted to do in my life that I won’t be able to do anymore.
7:05:50 I always wanted to be a husband and father, and I just don’t think that I could do it now
7:05:57 as a quadriplegic. Maybe it’s possible, but I’m not sure I would ever put someone I love
7:06:05 through that, like having to take care of me and stuff, not being able to go out and play sports.
7:06:12 I was a huge athlete growing up. So that was pretty hard. Just little things too, when I realize
7:06:20 I can’t do them anymore. There’s something really special about being able to hold a book and smell
7:06:26 a book, like the feel, the texture, the smell, like as you turn the pages, like I just love it.
7:06:31 I can’t do it anymore. And it’s little things like that. The two-year mark was pretty rough.
7:06:40 Two years is when they say you will get back basically as much as you’re ever going to get
7:06:45 back as far as movement and sensation goes. And so for the first two years, that was the only thing
7:06:53 on my mind was like try as much as I can to move my fingers, my hands, my feet, everything possible
7:07:01 to try to get sensation and movement back. And then when the two-year mark hit, so June 30th,
7:07:12 2018, I was really sad that that’s kind of where I was. And then just randomly here and there,
7:07:21 but I was never depressed for long periods of time. It never seemed worthwhile to me.
7:07:23 What gave you strength?
7:07:30 My faith, my faith in God was a big one. My understanding that it was all for a purpose.
7:07:36 And even if that purpose wasn’t anything involving neurolink, even if that purpose was,
7:07:43 there’s a story in the Bible about Job. And I think it’s a really, really popular story
7:07:49 about how Job has all of these terrible things happen to him and he praises God throughout
7:07:55 the whole situation. I thought, and I think a lot of people think for most of their lives,
7:08:00 that they are Job, that they’re the ones going through something terrible and they just need
7:08:06 to praise God through the whole thing and everything will work out. At some point after
7:08:14 my accident, I realized that I might not be Job, that I might be one of his children that gets
7:08:21 killed or kidnapped or taken from him. And so it’s about terrible things that happen to those
7:08:27 around you who you love. So maybe in this case, my mom would be Job and she has to get through
7:08:35 something extraordinarily hard. And I just need to try and make it as best as possible for her
7:08:42 because she’s the one that’s really going through this massive trial. And that gave me a lot of
7:08:49 strength. And obviously my family, my family and my friends, they give me all the strength that I
7:08:55 need on a day-to-day basis. So it makes things a lot easier having that great support system
7:09:01 around me. From everything I’ve seen of you online, your streams and the way you are today,
7:09:08 I really admire, let’s say, your unwavering positive outlook on life. Has that always been this way?
7:09:19 Yeah, yeah. I mean, I’ve just always thought I could do anything I ever wanted to do. There was
7:09:27 never anything too big. Like whatever I set my mind to, I felt like I could do it. I didn’t want to
7:09:34 do a lot. I wanted to like travel around and be sort of like a gypsy and like go work odd jobs.
7:09:41 I had this dream of traveling around Europe and being like I don’t know a shepherd in like Wales
7:09:47 or Ireland and then going and being a fisherman in Italy, doing all these things for like a year.
7:09:52 Like it’s such like cliche things, but I just thought it would be so much fun to go and travel
7:10:01 and do different things. And so I’ve always just seen the best in people around me too.
7:10:08 And I’ve always tried to be good to people. And growing up with my mom too, she’s like
7:10:15 the most positive energetic person in the world. And we’re all just people. Like I just get along
7:10:22 great with people. I really enjoy meeting new people. And so I just wanted to do everything.
7:10:30 This is just kind of just how I’ve been. It’s just great to see that cynicism didn’t take over
7:10:35 given everything you’ve been through. Was that like a deliberate choice you made
7:10:40 that you’re not going to let this keep you down? Yeah, a bit. Also like I just,
7:10:47 it’s just kind of how I am. I just, like I said, I roll with the punches and everything. I always
7:10:53 used to tell people like I don’t stress about things much. And whenever I’d see people getting
7:10:59 stressed, I’d just say, you know, like it’s not hard. Just don’t stress about it. And like that’s
7:11:03 all you need to do. And they’re like, that’s not how that works. Like it works for me.
7:11:07 Like just don’t stress and everything will be fine. Like everything will work out.
7:11:13 Obviously, not everything always goes well. And it’s not like it all works out for the best
7:11:19 all the time. But I just don’t think stress has had any place in my life since I was a kid.
7:11:26 What was the experience like of you being selected to be the first human being to have
7:11:32 a Neuralink device implanted in your brain? Were you scared? Excited? No, no, it was cool.
7:11:41 Like I was, I was never afraid of it. I had to think through a lot. Should I,
7:11:49 should I do this, like be the first person? I could wait until number two or three and get
7:11:56 a better version of the Neuralink? Like the first one might not work. Maybe it’s actually going to
7:12:03 kind of suck. It’s going to be the worst version ever in a person. So why would I do the first
7:12:06 one? Like I’ve already kind of been selected. I could just tell them, you know, like, okay,
7:12:09 find someone else and then I’ll do number two or three. Like I’m sure they would let me. They’re
7:12:14 looking for a few people anyways. But ultimately I was like, I don’t know, there’s something about
7:12:20 being the first one to do something. It’s pretty cool. I always thought that if I had the chance
7:12:25 that I would like to do something for the first time, this seemed like a pretty good opportunity.
7:12:35 And I was, I was never scared. I think my like faith had a huge part in that. I always felt like
7:12:45 God was preparing me for something. I almost wish it wasn’t this because I had many conversations
7:12:51 with God about not wanting to do any of this as a quadriplegic. I told them, you know, I’ll go out
7:12:57 and talk to people. I’ll go out and travel the world and talk to, you know, stadiums, thousands of
7:13:02 people give my testimony. I’ll do all of it. But like, heal me first. Don’t make me do all this
7:13:09 in a chair. That sucks. And I guess he won that argument. I didn’t really have much of a choice.
7:13:21 I always felt like there was something going on. And to see how I guess easily I made it through
7:13:29 the interview process and how quickly everything happened, how the stars sort of aligned with all
7:13:35 this, it just told me like as the surgery was getting closer, it just told me that
7:13:42 you know, it was all meant to happen. It was all meant to be. And so I shouldn’t be afraid of
7:13:49 anything that’s to come. And so I wasn’t, I kept telling myself like, you know, you say that now,
7:13:52 but as soon as the surgery comes, you’re probably going to be freaking out. Like you’re about to
7:13:59 have brain surgery and brain surgery is a big deal for a lot of people, but it’s an even bigger
7:14:03 deal for me. Like, it’s all I have left. The amount of times I’ve been like, thank you, God, that you
7:14:10 didn’t take my brain and my personality and my ability to think my like love of learning like
7:14:15 my character, everything like thank you so much. Like as long as you left me that, then I think I
7:14:21 can get by. And I was about to let people go like root around in there like, hey, we’re going to go
7:14:27 like put some stuff in your brain, like hopefully it works out. And so it was, it was something
7:14:33 that gave me pause. But like I said, how smoothly everything went, I never expected for a second
7:14:40 that anything would go wrong. Plus the more people I met on the Barrow side and on the
7:14:46 Neuralink side, they’re just the most impressive people in the world. Like I can’t speak enough
7:14:54 to how much I trust these people with my life and how impressed I am with all of them. And to see
7:15:02 the excitement on their faces to like walk into a room and roll into a room and see all of these
7:15:07 people looking at me like we’re just, we’re so excited. Like we’ve been working so hard on this
7:15:14 and it’s finally happening. It’s super infectious. And it just makes me want to do it even more and
7:15:20 to help them achieve their dreams. Like, I don’t know, it’s so, it’s so rewarding. And I’m so happy
7:15:26 for all of them, honestly. What was the day of surgery like? What’s, when did you wake up?
7:15:33 What’d you feel minute by minute? Were you freaking out? No, I thought I was going to,
7:15:39 but as surgery approached the night before, the morning of, I was just excited. Like, I was like,
7:15:44 let’s make this happen. I think I said that something like that to Elon on the phone.
7:15:49 Beforehand, we were like, FaceTiming. And I was like, let’s rock and roll. And he’s like, let’s do it.
7:15:56 I don’t know. I just, I wasn’t scared. So we woke up, I think we had to be at the hospital
7:16:01 at like 5.30 AM. I think surgery was at like 7 AM. So we woke up pretty early. I’m not sure
7:16:12 much of us slept that night. Got to the hospital 5.30, went through like all the pre-op stuff.
7:16:17 Everyone was super nice. Elon was supposed to be there in the morning. But something went
7:16:22 wrong with his plane. So we ended up FaceTiming. That was cool. Had one of the greatest one-liners
7:16:28 of my life after that phone call. Hung up with him. There were like 20 people around me. And I
7:16:33 was like, I just hope he wasn’t too starstruck talking to me. Nice. Yeah, it was good. Well done.
7:16:39 Yeah, yeah. Did you write that ahead of time? No, it just came to me. I was like, this seems right.
7:16:47 Went into surgery. I asked if I could pray right beforehand. So I like prayed over the room.
7:16:53 I asked God to like be with my mom in case anything happened to me. And just like calm her
7:17:00 nerves out there. Woke up and played a bit of a prank on my mom. I don’t know if you’ve heard
7:17:06 about it. Yeah, I read about it. Yeah, she was not happy. Can you take me through the prank?
7:17:14 Yeah, this is something- You regret doing that now? No, not a bit. It was something I had talked
7:17:19 about ahead of time with my buddy, Bane. I was like, I would really like to play a prank on my mom.
7:17:28 Very specifically, my mom. She’s very gullible. I think she had knee surgery once even. And
7:17:36 after she came out of knee surgery, she was super groggy. She was like, I can’t feel my legs. And
7:17:41 my dad looked at her. He was like, you don’t have any legs. They had to amputate both your
7:17:50 legs. And we just do very mean things to her all the time. I’m so surprised that she still loves
7:17:57 us. But right after surgery, I was really worried that I was going to be too groggy,
7:18:05 like not all there. I had had anesthesia once before and it messed me up. I could not function
7:18:13 for a while afterwards. And I said a lot of things that I was really worried that I was going to
7:18:20 start, I don’t know, dropping some bombs. And I wouldn’t even know. I wouldn’t remember.
7:18:28 So I was like, please God, don’t let that happen. And please let me be there enough to do this to
7:18:37 my mom. And so she walked in after surgery. It was the first time they had been able to see me
7:18:42 after surgery. And she just looked at me. She said, hi, how are you? How are you doing? How
7:18:48 do you feel? And I looked at her with this very, I think the anesthesia helped, very groggy,
7:18:55 sort of confused look on my face. I was like, who are you? And she just started looking around
7:19:00 the room at the surgeons or the doctors. Like, what did you do to my son? You need to fix this
7:19:05 right now. Tears started streaming. I saw how much she was freaking out. I was like, I can’t let
7:19:13 this go on. And so I was like, mom, I’m fine. It’s all right. And still, she was not happy about it.
7:19:20 She still says she’s going to get me back someday. But I mean, I don’t know. I don’t know what that’s
7:19:26 going to look like. It’s a lifelong battle. Yeah. It was good. In some sense, it was a demonstration
7:19:31 that you still got it. That’s all I wanted it to be. That’s all I wanted it to be. And I knew that
7:19:37 doing something super mean to her like that would show her. To show that you’re still there,
7:19:44 that you love her. Yeah, exactly. It’s a dark way to do it, but I love it. What was the first time
7:19:52 you were able to feel that you can use the Neuralink device to affect the world around you?
7:19:58 Yeah. The first little taste I got of it was actually not too long after surgery.
7:20:06 Some of the Neuralink team had brought in like a little iPad, a little tablet screen,
7:20:14 and they had put up eight different channels that were recording some of my neuron spikes.
7:20:19 And they put it in front of me. They’re like, this is like real time your brain firing. It’s
7:20:26 like that’s super cool. My first thought was, I mean, if they’re firing now, let’s see if I can
7:20:31 affect them in some way. So I started trying to like wiggle my fingers. And I just started
7:20:36 like scanning through the channels. And one of the things I was doing was like moving my index
7:20:42 finger up and down. And I just saw this yellow spike on like top row, like third box over or
7:20:47 something. I saw this yellow spike every time I did it. And I was like, oh, that’s cool. And
7:20:50 everyone around me was just like, what, what are you seeing? I was like, look, look at this one.
7:20:57 Look at like this top row, third box over this yellow spike. Like that’s me right there, there,
7:21:02 there. And everyone was freaking out. They started like clapping. I was like, that’s super
7:21:09 unnecessary. This is what’s supposed to happen, right? So you’re imagining yourself moving each
7:21:13 individual finger one at a time, and then seeing like that you can notice something. And then
7:21:18 when you did the index finger, you’re like, oh, yeah, I was, I was wiggling kind of all of my
7:21:25 fingers to see if anything would happen. There was a lot of other things going on. But that big yellow
7:21:30 spike was the one that stood out to me. Like I’m sure that if I would have stared at it long enough,
7:21:36 I could have mapped out maybe 100 different things. But the big yellow spike was the one that I
7:21:42 noticed. Maybe you could speak to what it’s like to sort of wiggle your fingers to imagine that
7:21:48 the mental, the cognitive effort required to sort of wiggle your index finger, for example. How
7:21:55 easy is that to do? Pretty easy for me. It’s something that at the very beginning, after my
7:22:05 accident, they told me to try and move my body as much as possible, even if you can’t. Just
7:22:10 keep trying because that’s going to create new neural pathways or pathways in my spinal cord
7:22:16 to reconnect these things to hopefully regain some movement someday. That’s fascinating.
7:22:22 Yeah, I know. It’s bizarre. So that’s part of the recovery process is to keep trying to move your
7:22:29 body and that’s as much as you can. And the nervous system does its thing. It starts reconnecting.
7:22:34 It’ll start reconnecting for some people. Some people, it never works. Some people,
7:22:42 they’ll do it. Like for me, I got some bicep control back. And that’s about it. I can, if I
7:22:51 try enough, I can wiggle some of my fingers, not like on command. It’s more like, if I try to move,
7:22:56 say my right pinky, and I just keep trying to move it after a few seconds, it’ll wiggle.
7:23:02 So I know there’s stuff there. I know that happens with a few different of my fingers and stuff.
7:23:10 But yeah, that’s what they tell you to do. One of the people at the time when I was in the hospital
7:23:17 came in and told me for one guy who had recovered most of his control, what he thought about every
7:23:25 day was actually walking, like the act of walking, just over and over again. So I tried that for years.
7:23:35 I tried just imagining walking, which is, it’s hard. It’s hard to imagine all of the steps that
7:23:41 go into, well, taking a step, like all of the things that have to move, like all the activations
7:23:46 that have to happen along your leg in order for one step to occur.
7:23:49 But you’re not just imagining you’re like doing it, right?
7:23:58 I’m trying, yeah. So it’s like, it’s imagining over again what I had to do to take a step,
7:24:02 because it’s not something any of us think about. We just, you want to walk and you take a step.
7:24:09 You don’t think about all of the different things that are going on in your body. So I had to recreate
7:24:14 that in my head as much as I could, and then I practice it over and over and over.
7:24:18 So it’s not like a third person perspective, it’s a first person perspective. You’re like,
7:24:24 it’s not like you’re imagining yourself walking. You’re like literally doing this, everything,
7:24:30 all the same stuff as you’re walking. Which was hard. It was hard at the beginning.
7:24:34 Like frustrating hard, or like actually cognitively hard, like which way?
7:24:44 It was both. There’s a scene in one of the Kill Bill movies, actually, oddly enough,
7:24:49 where she is like paralyzed, I don’t know from like a drug that was in her system,
7:24:53 and then she like finds some way to get into the back of a truck or something,
7:25:02 and she stares at her toe, and she says move, like move your big toe. And after a few seconds
7:25:07 on screen, she does it. And she did that with every one of her body parts until she can move again.
7:25:15 I did that for years, just stared at my body and said, move your index finger,
7:25:21 move your big toe, sometimes vocalizing it like out loud, sometimes just thinking it.
7:25:25 I tried every different way to do this to try to get some movement back.
7:25:33 And it’s hard because it actually is like taxing, like physically taxing on my body,
7:25:36 which is something I would have never expected, because it’s not like I’m moving,
7:25:43 but it feels like there’s a buildup of, I don’t know, the only way I can describe it is
7:25:52 there are like signals that aren’t getting through from my brain down because of my,
7:25:58 there’s that gap in my spinal cord, so brain down and then from my hand back up to the brain.
7:26:05 And so it feels like those signals get stuck in whatever body part that I’m trying to move,
7:26:10 and they just build up and build up and build up until they burst. And then once they burst,
7:26:15 I get like this really weird sensation of everything sort of like dissipating back out
7:26:23 to level, and then I do it again. It’s also just like a fatigue thing, like a muscle fatigue,
7:26:29 but without actually moving your muscles. It’s very, very bizarre. And then, you know,
7:26:36 if you try to stare at a body part or think about a body part and move for two, three, four,
7:26:42 sometimes eight hours, it’s very taxing on your mind. It takes a lot of focus.
7:26:47 It was a lot easier at the beginning because I wasn’t able to
7:26:55 like control a TV in my room or anything. I wasn’t able to control any of my environment.
7:27:00 So for the first few years, a lot of what I was doing was staring at walls. And so
7:27:08 obviously I did a lot of thinking and I tried to move a lot just over and over and over again.
7:27:14 So you never gave up hope there, just kept training hard, essentially?
7:27:19 Yep. And I still do it. I do it like subconsciously. And I think that
7:27:26 helped a lot with things with Neuralink, honestly. It’s something that I talked about
7:27:30 the other day at the All Hands that I did at Neuralink’s Austin facility.
7:27:31 Welcome to Austin, by the way.
7:27:33 Yeah. Hey, thanks, man. I went to school–
7:27:33 Nice hat.
7:27:38 Hey, thanks. Thanks, man. The Gigafactory was super cool. I went to school at Texas A&M,
7:27:41 so I’ve been around for… So you should be saying, welcome to me.
7:27:44 Yeah. Welcome to Texas, Lex.
7:27:50 But yeah, I was talking about how a lot of what they’ve had me do, especially at the beginning,
7:27:58 well, I still do it now, is body mapping. So like there will be a visualization of a hand
7:28:03 or an arm on the screen. And I have to do that motion. And that’s how they sort of train
7:28:14 the algorithm to understand what I’m trying to do. And so it made things very seamless for me,
7:28:14 I think.
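A quick technical aside on how body mapping like this can feed model training. The sketch below is a hypothetical illustration, not Neuralink’s actual pipeline; every name, signature, and feature choice in it is assumed. The core idea is just that each on-screen cue gives a label for the neural activity recorded while the user attempts the motion.

    import numpy as np

    # Hypothetical sketch: pair each body-mapping cue (e.g., "open right hand")
    # with the neural features recorded while the user attempts that motion,
    # producing a labeled dataset for supervised decoder training.
    def collect_body_mapping_trials(cues, record_features, trial_seconds=3.0):
        X, y = [], []
        for cue in cues:
            feats = record_features(duration=trial_seconds)  # (samples, channels)
            X.append(feats.mean(axis=0))  # summarize the trial window per channel
            y.append(cue)                 # the label is simply the cued motion
        return np.array(X), np.array(y)

Any off-the-shelf classifier or regressor could then be fit on (X, y) to map neural features to intended motions, which is roughly what “training the algorithm” amounts to in this exchange.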
7:28:19 That’s really, really cool. So it’s amazing to know, because I’ve learned a lot about the
7:28:24 body mapping procedure, with the interface and everything like that. It’s cool to know that
7:28:29 you’ve essentially been training to be world-class at that task.
7:28:39 Yeah. I don’t know if other quadriplegics, like other paralyzed people give up. I hope they don’t.
7:28:46 I hope they keep trying, because I’ve heard other paralyzed people say, don’t ever stop.
7:28:53 They tell you two years, but you just never know. The human body’s capable of amazing things.
7:29:02 So I’ve heard other people say, don’t give up. I think one girl had spoken to me through some
7:29:09 family members and said that she had been paralyzed for 18 years. And she’d been trying to wiggle her
7:29:15 index finger for all that time. And she finally got it back 18 years later. So I know that it’s
7:29:21 possible and I’ll never give up doing it. I do it when I’m lying down watching TV. I’ll find myself
7:29:29 doing it almost on its own. It’s just something I’ve gotten so used to doing that I don’t think
7:29:33 I’ll ever stop. That’s really awesome to hear, because I think it’s one of those things that can
7:29:38 really pay off in the long term. Because it is training. You’re not visibly seeing the results
7:29:44 of that training at the moment. But there’s an Olympic-level nervous system getting ready for
7:29:53 something. Which honestly was something that I think Neuralink gave me that I can’t thank them
7:30:03 enough for. I can’t show my appreciation for it enough. It was being able to visually see that what
7:30:13 I’m doing is actually having some effect. It’s a huge part of the reason why I know now that I’m
7:30:20 going to keep doing it forever. Because before Neuralink, I was doing it every day and I was just
7:30:26 assuming that things were happening. It’s not like I knew. I wasn’t getting back any mobility
7:30:33 or sensation or anything. So I could have been running up against a brick wall for all I knew.
7:30:41 And with Neuralink, I get to see all the signals happening in real time. And I get to see that
7:30:48 what I’m doing can actually be mapped when we started doing click calibrations and stuff.
7:30:54 When I go to click my index finger for a left click, that it actually recognizes that. It
7:31:04 changed how I think about what’s possible with retraining my body to move. So yeah, I’ll never
7:31:08 give up now. And also it’s just the signal that there’s still a powerhouse of a brain there as
7:31:14 the technology develops. That brain is, I mean, the most important thing about the human
7:31:20 body is the brain. It can do a lot of the control. So what did it feel like when you first could wiggle
7:31:26 the index finger and saw the environment respond like that? Where everyone was just being way too
7:31:32 dramatic, according to you. Yeah, it was very cool. I mean, it was cool, but I keep telling this to
7:31:39 people. It made sense to me. It made sense that there are signals still happening in my brain.
7:31:46 And that as long as you had something near it that could measure those that could record those,
7:31:52 then you should be able to visualize it in some way, see it happen. And so that was not
7:31:58 very surprising to me. I was just like, oh, cool, we found one. We found something that works.
7:32:05 It was cool to see that their technology worked and that everything that they’d worked so hard for
7:32:11 was going to pay off. But I hadn’t moved a cursor or anything at that point, and I hadn’t interacted
7:32:19 with a computer or anything at that point. So it just made sense. It was cool. I didn’t really
7:32:26 know much about BCI at that point either. So I didn’t know what sort of step this was actually
7:32:34 making. I didn’t know if this was a huge deal or if this was just like, okay, it’s cool that we
7:32:39 got this far, but we’re actually hoping for something much better down the road. It’s like,
7:32:45 okay, I just thought that they knew that it turned on. So I was like, cool, this is cool.
7:32:49 Well, did you read up on the specs of the hardware you get installed, the number of threads?
7:32:57 Yeah, I knew all of that, but it’s all Greek to me. I was like, okay, threads, 64 threads,
7:33:05 16 electrodes each, 1,024 channels. Okay, that math checks out.
7:33:11 Sounds right. When was the first time you were able to move a mouse cursor?
7:33:16 I know it must have been within the first maybe week, a week or two weeks that I was able to
7:33:24 first move the cursor. And again, it kind of made sense to me. It didn’t seem like that big of a
7:33:33 deal. Like, it was like, okay, well, how do I explain this? When everyone around you starts
7:33:41 clapping for something that you’ve done, it’s easy to say, okay, I did something cool. That was
7:33:53 impressive in some way. What exactly that meant, what it was hadn’t really set in for me. So again,
7:34:06 I knew that me trying to move a body part, and then that being mapped in some sort of machine
7:34:13 learning algorithm to be able to identify my brain signals and then take that and give me
7:34:17 cursor control, that all kind of made sense to me. I don’t know all the ins and outs of it,
7:34:22 but I was like, there are still signals in my brain firing. They just can’t get through
7:34:28 because there’s a gap in my spinal cord. And so they can’t get all the way down and back up,
7:34:33 but they’re still there. So when I moved the cursor for the first time, I was like, that’s cool,
7:34:42 but I expected that that should happen. It made sense to me. When I moved the cursor for the first
7:34:49 time with just my mind, without physically trying to move. So I guess I can get into that just a
7:34:53 little bit, like the difference between attempted movement and imagined movement. Yeah, that’s
7:34:59 a fascinating difference. Yeah, one to the other. Yeah, yeah. So, like, attempted movement is me
7:35:06 physically trying to attempt to move, say my hand, I try to attempt to move my hand to the right,
7:35:14 to the left, forward and back. And that’s all attempted: attempt to lift my finger up and down,
7:35:19 attempt to kick or something. I’m physically trying to do all of those things even if
7:35:26 you can’t see it. This would be like me attempting to shrug my shoulders or something. That’s all
7:35:35 attempted movement. That’s what I was doing for the first couple of weeks when they were
7:35:40 going to give me cursor control. When I was doing body mapping, it was attempt to do this,
7:35:51 attempt to do that. When Nir was telling me to imagine doing it, it kind of made sense to me,
7:36:03 but it’s not something that people practice. If you started school as a child and they said,
7:36:08 okay, write your name with this pencil. And so you do that like, okay, now imagine writing
7:36:15 your name with that pencil. Kids would think, I guess that kind of makes sense. And they would
7:36:20 do it, but that’s not something we’re taught. It’s all like how to do things physically. We think
7:36:26 about thought experiments and things, but that’s not like a physical action of doing things. It’s
7:36:32 more like what you would do in certain situations. So imagine movement, it never really connected
7:36:39 with me. I guess you could maybe describe it as like a professional athlete swinging a baseball
7:36:45 bat or swinging like a golf club. Imagine what you’re supposed to do, but then you go right to
7:36:50 that and physically do it. Then you get a bat in your hand and then you do what you’ve been
7:36:56 imagining. And so I don’t have that connection. So telling me to imagine something versus attempting
7:37:03 it, it just, there wasn’t a lot that I could do there mentally. I just kind of had to accept
7:37:09 what was going on and try. But the attempted moving thing, it all made sense to me. Like,
7:37:15 if I try to move, then there’s a signal being sent in my brain. And as long as they can pick
7:37:20 that up, then they should be able to map it to what I’m trying to do. And so when I first moved
7:37:27 the cursor like that, it was like, yes, this should happen. I’m not surprised by that.
7:37:30 But can you clarify, is there supposed to be a difference between imagined movement
7:37:35 and attempted movement? Yeah, just that in imagined movement, you’re not
7:37:40 attempting to move at all. So it’s– You’re like visualizing yourself doing it. And then
7:37:44 theoretically, is that supposed to be a different part of the brain that lights up in those two
7:37:49 different situations? Yeah, not necessarily. I think all these signals can still be represented
7:37:53 in motor cortex. But the difference, I think, has to do with the naturalness of
7:37:57 imagining something versus attempting it and sort of the fatigue of that over time.
7:38:05 And by the way, on the mic is Bliss. So this is just different ways to prompt you to kind of
7:38:10 get at the thing that you’re aiming for. Attempted movement does sound like
7:38:14 the right thing to try. Yeah, I mean, it makes sense to me.
7:38:19 Because imagine, for me, I would start visualizing. In my mind, visualizing,
7:38:22 attempted, I would actually start trying to like– Yeah.
7:38:25 There’s a– I mean, I did, like, combat sports my whole life, like wrestling.
7:38:31 When I’m imagining a move, see, I’m like moving my muscle. Exactly.
7:38:37 Like, there is a bit of an activation almost versus like visualizing yourself like a picture
7:38:42 doing it. Yeah, it’s something that I feel like naturally anyone would do. If you try to tell
7:38:47 someone to imagine doing something, they might close their eyes and then start physically doing it.
7:38:51 But it’s just– Did it click?
7:38:55 Yeah. It’s hard. It was very hard at the beginning.
7:39:02 But attempted worked. Attempted worked. It worked just like it should, worked like a charm.
7:39:06 Remember, there was like one Tuesday, we were messing around, and I think–
7:39:09 I forget what swear word you used, but there was a swear word that came out of your
7:39:12 mouth when you figured out you could just do the direct cursor control.
7:39:22 Yeah. That’s– It blew my mind. Like, no pun intended. Blew my mind when I first moved the
7:39:31 cursor just with my thoughts and not attempting to move. It’s something that I found over the
7:39:39 couple of weeks, like, building up to that. That as I get better cursor controls, like,
7:39:51 the model gets better, then it gets easier for me to, like, I don’t have to attempt as much
7:39:59 to move it. And part of that is something that I’d even talked with them about when I was watching
7:40:05 the signals of my brain one day. I was watching when I, like, attempted to move to the right,
7:40:11 and I watched the screen as, like, I saw the spikes. Like, I was seeing the spike, the signals
7:40:18 being sent before I was actually attempting to move. I imagine just because, you know,
7:40:24 when you go to, say, move your hand or any body part, that signal gets sent before you’re actually
7:40:28 moving, has to make it all the way down and back up before you actually do any sort of movement.
7:40:35 So there’s a delay there. And I noticed that there was something going on in my brain before I was
7:40:44 actually attempting to move, that my brain was, like, anticipating what I wanted to do. And that
7:40:51 all started sort of, I don’t know, like, percolating in my brain. Like, it just, it was just sort of
7:40:58 there, like, always in the back, like, that’s so weird that it could do that. It kind of makes sense,
7:41:07 but I wonder what that means as far as, like, using the neural link. And, you know, and then as
7:41:11 I was playing around with the attempted movement and playing around with the cursor, and I saw that,
7:41:20 like, as the cursor control got better, that it was anticipating my movements and what I wanted
7:41:26 it to do, like, cursor movements, what I wanted to do a bit better and a bit better. And then one
7:41:35 day I just randomly, as I was playing WebGrid, I, like, looked at a target before I had started,
7:41:41 like, attempting to move. I was just trying to, like, get over, like, train my eyes to start
7:41:45 looking ahead, like, okay, this is the target I’m on, but if I look over here to this target,
7:41:50 I know I can, like, maybe be a bit quicker getting there. And I looked over and the cursor just shot
7:41:57 over. It was wild. Like, I had to take a step back. Like, I was like, this should not be happening.
7:42:02 All day, I was just smiling. I was so giddy. I was like, guys, do you know that this works? Like,
7:42:08 I can just think it and it happens. Which, like, they’d all been saying this entire time, like,
7:42:12 I can’t believe, like, you’re doing all this with your mind. I’m like, yeah, but is it really with
7:42:16 my mind? Like, I’m attempting to move and it’s just picking that up so it doesn’t feel like it’s
7:42:23 with my mind. But when I moved it for the first time like that, it was, oh, man, it, like, it
7:42:32 made me think that this technology that what I’m doing is actually way, way more impressive than
7:42:37 I ever thought. It was way cooler than I ever thought. And it just opened up a whole new world
7:42:43 of possibilities of, like, what could possibly happen with this technology and what I might be
7:42:48 able to be capable of with it? Because you had felt for the first time, like, this was digital
7:42:54 telepathy. Like, you’re controlling a digital device with your mind. Yeah. I mean, this is,
7:42:59 that’s a real moment of discovery. That’s really cool. Like, you’ve discovered something. I’ve seen,
7:43:04 like, scientists talk about, like, a big aha moment, you know, like, Nobel Prize winning,
7:43:10 they’ll have this, like, holy crap. Yeah. Like, whoa. That’s what it felt like. Like, I didn’t
7:43:16 feel like, like, I felt like I had discovered something. But for me, maybe not necessarily
7:43:23 for, like, the world at large or, like, this field at large, it just felt like an aha moment for me.
7:43:30 Like, oh, this works. Like, obviously it works. And so that’s what I do, like, all the time now.
7:43:39 I kind of intermix the attempted movement and imagine movement. I do it all, like, together
7:43:47 because I’ve found that there is some interplay with it that maximizes efficiency with the cursor.
7:43:52 So it’s not all, like, one or the other. It’s not all just I only use attempted or I only use,
7:44:00 like, imagined movements. It’s more I use them in parallel. And I can do one or the other. I can
7:44:08 just completely think about whatever I’m doing. But I don’t know. I like to play around with it.
7:44:12 I also like to just experiment with these things. Like, every now and again, I’ll get this idea in
7:44:16 my head, like, hmm, I wonder if this works. And I’ll just start doing it. And then afterwards,
7:44:22 I’ll tell them, by the way, I wasn’t doing that like you guys wanted me to. I was, I thought
7:44:26 of something and I wanted to try it. And so I did, it seems like it works. So maybe we should,
7:44:31 like, explore that a little bit. So I think that discovery is not just for you. At least from my
7:44:37 perspective, that’s a discovery for everyone else who ever uses in your link that this is possible.
7:44:42 Like, I don’t think that’s an obvious thing that this is even possible. It’s like,
7:44:47 I was saying to Bliss earlier, it’s like the four minute mile. People thought it was impossible
7:44:52 to run a mile in four minutes. And once the first person did it, then everyone just started doing
7:44:57 it. So, like, just to show that it’s possible, that paves the way so that anyone can now do it.
7:45:01 Like, that’s the thing, that’s actually possible. You don’t need to do the attempted movement.
7:45:08 You can just go direct. That’s crazy. That’s just crazy. For people who don’t know,
7:45:14 can you explain how the link app works? You have an amazing stream on the topic. Your first stream,
7:45:22 I think, on X, describing the app. Can you just describe how it works? Yeah. So it’s just an app
7:45:31 that Neuralink created to help me interact with the computer. So on the link app, there are a few
7:45:38 different settings and different modes and things I can do on it. So there’s, like, the body mapping
7:45:47 that we kind of touched on. There’s calibration. Calibration is how I actually get cursor control.
7:45:56 So calibrating what’s going on in my brain to translate that into cursor control. So it will
7:46:04 pop out models. What they use, I think, is time. So it would be, you know,
7:46:09 five minutes in calibration will give me so good of a model. And then if I’m in it for 10
7:46:16 minutes and 15 minutes, the models will progressively get better. And so, you know,
7:46:21 the longer I’m in it, generally, the better the models will get. That’s really cool because you
7:46:25 often refer to the models. The model is the thing that’s constructed once you go through the calibration
7:46:31 step. And then you also talked about sometimes you’ll play like a really difficult game like Snake
7:46:37 just to see how good the model is. Yeah. Yeah. So Snake is kind of like my litmus test for models.
7:46:43 If I can control Snake decently well, then I know I have a pretty good model. So yeah,
7:46:48 the link app has all of those, and web grid, in it now. It’s also how I, like, connect to the computer
7:46:55 just in general. So they’ve given me a lot of like voice controls with it at this point. So I can,
7:47:04 you know, say like connect or implant disconnect. And as long as I have that charger handy, then I
7:47:08 can connect to it. So the charger is also how I connect to the link app to connect the computer.
7:47:15 I have to have the implant charger over my head when I want to connect to have it wake up because
7:47:21 the implants in hibernation mode like always when I’m not using it. I think there’s a setting to
7:47:28 like wake it up every, you know, so long so we could set it to half an hour or five hours or
7:47:35 something if I just want it to wake up periodically. So yeah, I’ll like connect to the link app and
7:47:41 then go through all sorts of things, calibration for the day, maybe body mapping. I have like,
7:47:48 I made them give me like a little homework tab, because I am very forgetful and I forget to do
7:47:54 things a lot. So I have like a lot of data collection things that they want me to do.
7:47:57 Is the body mapping part of the calibration? Or is that also part of the data collection?
7:48:03 Yeah, it is. It’s something that they want me to do daily, which I’ve been slacking on because
7:48:09 I’ve been doing so much media and traveling so much. So you’ve been super famous. Yeah, I’ve been
7:48:16 a terrible first candidate for how much I’ve been slacking on my homework. But yeah, it’s just
7:48:24 something that they want me to do every day to track how well the Neuralink is performing
7:48:29 over time and have something to give, I imagine, to the FDA to create all sorts of fancy
7:48:35 charts and stuff and show, like, hey, this is how the Neuralink is performing,
7:48:39 you know, day one versus day 90 versus day 180 and things like that.
7:48:43 What’s the calibration step like? Is it like move left, move right?
7:48:48 It’s a bubble game. So there will be like yellow bubbles that pop up on the screen.
7:48:55 At first it is open loop. So open loop, this is something that I still don’t fully understand,
7:49:00 the open loop and closed loop thing. I mean, we’ve talked for a long time about the difference
7:49:05 between the two on the technical side. So it’d be great to hear your side of the story.
7:49:12 Open loop is basically, I have no control over the cursor. The cursor will be moving
7:49:20 on its own across the screen. And I am following by intention the cursor to different bubbles.
7:49:27 And then the algorithm is training off of, like, the signals it’s getting as I’m
7:49:31 doing this. There are a couple of different ways that they’ve done it. They call it center out
7:49:36 target. So there will be a bubble in the middle and then eight bubbles around that. And the cursor
7:49:43 will go from the middle to one side. So say middle to left, back to middle, middle to up, back to middle,
7:49:48 then, like, up-right. And they’ll do that all the way around the circle. And I will follow that cursor
7:49:56 the whole time. And then it will train off of my intentions what it is expecting my intentions to
7:50:02 be throughout the whole process. Can you actually speak to when you say follow? Yes, you don’t mean
7:50:07 with your eyes, you mean with your intentions. Yeah. So generally for calibration, I’m doing
7:50:15 attempted movements, because I think it works better. I think the better models, as I progress
7:50:24 through calibration, make it easier to use imagined movements. Wait, wait, wait. So calibrating on
7:50:32 attempted movement will create a model that makes it really effective for you to then use the force.
7:50:40 Yes. I’ve tried doing calibration with imagined movement. And it just doesn’t work as well
7:50:45 for some reason. So that was the center out targets. There’s also one where, you know,
7:50:50 a random target will pop up on the screen and it’s the same. I just like move I follow along
7:50:57 wherever the cursor is to that target all across the screen. I’ve tried those with
7:51:02 imagined movement. And for some reason, the models just don’t,
7:51:12 they don’t give as high quality when we get into closed loop. I haven’t played around
7:51:17 with it a ton. So maybe like the different ways we’re doing calibration now might make it a bit
7:51:26 better. But what I’ve found is there will be a point in calibration where I can use imagined
7:51:34 movement. Before that point, it doesn’t really work. So if I do calibration for 45 minutes,
7:51:41 the first 15 minutes, I can’t use imagined movement. It just like doesn’t work for some reason.
7:51:50 And after a certain point, I can just sort of feel it, I can tell it moves different.
7:51:57 That’s the best way I can describe it. Like, it’s almost as if it is anticipating what I am
7:52:06 going to do again before I go to do it. And so using attempted movement for 15 minutes,
7:52:12 at some point, I can kind of tell when I like move my eyes to the next target that the cursor
7:52:17 is starting to like pick up like it’s starting to understand it’s learning like what I’m going to do.
7:52:22 So first of all, it’s really cool that, I mean, you are a true pioneer in all of this. You’re, like,
7:52:29 exploring how to do every aspect of this most effectively. And there’s just, I imagine so
7:52:33 many lessons learned from this. So thank you for being a pioneer and all these kinds of different
7:52:39 like super technical ways. And it’s also cool to hear that there’s like a different like feeling
7:52:46 to the experience when it’s calibrated in different ways. Like just because I imagine your
7:52:51 brain is doing something different. And that’s why there’s a different feeling to it. And then
7:52:56 trying to find the words and the measurements to those feelings would be also interesting.
7:53:01 But at the end of the day, you can also measure that your actual performance on whether it’s snake
7:53:06 or web grid, you can see like what actually works well. And you’re saying for the open loop
7:53:15 calibration, the attempted movement works best for now. Yep. So in the open loop, you don’t get
7:53:21 the feedback that you did something. Yeah. Is that frustrating? No, no,
7:53:26 it makes sense to me. Like, we’ve done it with a cursor and without a cursor in open loop. So
7:53:34 sometimes, say for, like, the center out, you’ll start calibration with a bubble
7:53:41 lighting up, and I push towards that bubble. And then when I’ve, you know,
7:53:45 pushed towards that bubble for, say, three seconds, the bubble will pop. And then I come back to the
7:53:51 middle. So I’m doing it all just by my intentions, like that’s what it’s learning anyway. So it makes
7:53:56 sense that as long as I follow what they want me to do, you know, like follow the yellow brick road
7:54:03 that it’ll all work out. You’re full of great references. Is the bubble game fun? Like, yeah,
7:54:09 they always feel so bad making me do calibration, like, we’re about to do, you know, a 40 minute
7:54:14 calibration. I’m like, All right, would you guys want to do two of them? Like, I’m always asking
7:54:20 to like whatever they need, I’m more than happy to do. And it’s not, it’s not bad. Like, I get to
7:54:28 lie there and or sit in my chair and like do these things with some great people, I get to have great
7:54:34 conversations. I can give them feedback. I can talk about all sorts of things. I could throw
7:54:39 something on on my TV in the background and kind of like split my attention between them.
7:54:45 Like, it’s not bad at all. Is there a score that you get? Like, can you do better on the bubble
7:54:54 game? No, but I would love that. Writing down suggestions from Nolan:
7:55:00 make it more fun, gamify it. Yeah, that’s one thing that I really, really enjoy about web grid
7:55:09 is because I’m so competitive. Like the higher the BPS, the higher the score, I know the better
7:55:15 I’m doing. And so, I think I’ve asked at one point, one of the guys, like, if he could give me
7:55:19 some sort of numerical feedback for calibration, like, I would like to know what they’re looking
7:55:25 at like, Oh, you know, it is, we see like this number while you’re doing calibration. And that
7:55:31 means, at least on our end, that we think calibration is going well. And I would love that
7:55:35 because I would like to know if what I’m doing is going well or not. But then they’ve also told me
7:55:40 like, yeah, not necessarily like one to one, it doesn’t actually mean that calibration is going
7:55:47 well in some ways. So it’s not, like, 100%. And they don’t want to skew what I’m experiencing,
7:55:51 or have me change things based on that, if that number isn’t always accurate to, like,
7:55:56 how the model will turn out, or, like, the end result. That’s at least what I got from it.
7:56:03 One thing I have asked them about, and something that I really enjoy striving for, is towards the end
7:56:11 of calibration, there is, like, a time between targets. And so I like to keep, like at the end,
7:56:14 that number as low as possible. So at the beginning, it can be, you know, four or five,
7:56:19 six seconds between me popping bubbles, but towards the end, I like to keep it below like
7:56:25 1.5. Or if I could get it to like one second between like bubbles, because in my mind that
7:56:30 translates really nicely to something like WebGrid, where I know if I can hit a target
7:56:36 one every second, I’m doing real, real well. There you go. That’s the way to get a score on
7:56:42 the calibration: the speed. How quickly can you get from bubble to bubble? Yeah. So there’s
7:56:47 the open loop, and then it goes to the closed loop. The closed loop can already start giving you a
7:56:51 sense because you’re getting feedback of like how good the model is. Yeah. So closed loop is when
7:56:59 I first get cursor control and how they’ve described it to me, someone who does not understand this
7:57:07 stuff. I am the dumbest person in the room every time. The humility. Yeah. Is that I am closing the
7:57:14 loop. So I am actually now the one that is like finishing the loop of whatever this loop is. I
7:57:19 don’t even know what the loop is. They’ve never told me. They just say there is a loop and at one
7:57:25 point it’s open and I can’t control and then I get control and it’s closed. So I’m finishing the loop.
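For readers wondering what the loop actually is, here is a rough sketch of the distinction, under the simplifying assumptions that open loop means the cursor follows a scripted center-out path while training pairs are logged, and closed loop means the trained decoder’s output drives the cursor. All function names and numbers below are illustrative, not Neuralink’s code.

    import numpy as np

    # Open loop: the cursor moves on its own along scripted center-out reaches.
    # The user pushes toward each bubble by intention, and we log
    # (neural features, scripted direction) pairs. Brain activity does NOT
    # drive the cursor yet, which is why the loop is "open."
    def open_loop_center_out(read_features, n_directions=8, steps_per_reach=100):
        X, Y = [], []
        angles = np.linspace(0.0, 2.0 * np.pi, n_directions, endpoint=False)
        for a in angles:
            outward = np.array([np.cos(a), np.sin(a)])
            for direction in (outward, -outward):  # out to the bubble, then back
                for _ in range(steps_per_reach):
                    X.append(read_features())      # features from the implant
                    Y.append(direction)            # the assumed intended velocity
        return np.array(X), np.array(Y)

    # Closed loop: a decoder trained on that data now drives the cursor, so the
    # user's intent travels through the decoder to the screen and back through
    # their eyes, closing the loop.
    def closed_loop_step(decoder, read_features, cursor_pos, dt=0.02):
        velocity = decoder.predict(np.asarray(read_features())[None, :])[0]
        return cursor_pos + velocity * dt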
7:57:30 So how long does calibration usually take? You said like 10, 15 minutes. Well, yeah. They’re
7:57:34 trying to get that number down pretty low. That’s what we’ve been working on a lot recently is getting
7:57:39 that down as low as possible. So that way, you know, if this is something that people need to do
7:57:47 on a daily basis or if some people need to do on a like every other day basis or once a week,
7:57:51 they don’t want people to be sitting in calibration for long periods of time.
7:57:57 I think they wanted to get it down to seven minutes or below, at least where we’re at right now. It’d
7:58:03 be nice if you never had to do calibration. We’ll get there at some point, I’m sure, the more
7:58:10 we learn about the brain. Like, I think that’s, you know, the dream. I think right now for me to
7:58:18 get like really, really good models, I’m in calibration 40 or 45 minutes. And I don’t mind,
7:58:23 like I said, they always feel really bad. But if it’s going to get me a model that can like break
7:58:28 these records on WebGrid, I’ll stay in it for flipping two hours. Let’s talk business. So WebGrid,
7:58:36 I saw a presentation where Bliss said that by March, you had selected 89,000 targets in WebGrid.
7:58:42 Can you explain this game? What is WebGrid? And what does it take to be a world-class
7:58:46 performer in WebGrid as you continue to break world records? Yeah.
7:58:55 It’s like a gold medalist. Like, wow. Yeah, you know, I’d like to thank everyone who’s helped me
7:59:00 get here, my coaches, my parents, for dropping me off at practice every day at five in the morning.
7:59:07 I’d like to thank God and, just overall, my dedication to my craft. The interviews with
7:59:17 athletes are always exactly like that. It’s like that template. Yeah. So WebGrid is a grid itself.
7:59:24 It’s literally just a grid. They can make it as big or small as you can make a grid.
7:59:29 A single box on that grid will light up and you go and click it. And it is a way for them to
7:59:38 benchmark how good a BCI is. So it’s pretty straightforward. You just click targets.
7:59:43 Only one blue cell appears, and you’re supposed to move the mouse there and click on it.
7:59:52 So I like playing on bigger grids, because the bigger the grid, the more BPS, it’s bits per second,
8:00:00 that you get every time you click one. So, say, I’ll play on a 35 by 35 grid, and then one
8:00:05 of those little squares, a cell, call it a target, whatever, will light up, and you move the
8:00:13 cursor there and you click it and then you do that forever. And you’ve been able to achieve
8:00:19 at first eight bits per second, and you’ve recently broken that. Yeah, I’m at 8.5 right now. I would
8:00:27 have beaten that literally the day before I came to Austin, but I had like a, I don’t know, like a
8:00:33 five second lag right at the end. And I just had to wait until the latency calmed down and then I
8:00:41 kept clicking, but I was at like 8.01 and then five seconds of lag. And then the next three
8:00:47 targets I clicked all stayed at 8.01. So if I would have been able to click during that time
8:00:52 of lag, I probably would have hit, I don’t know, I might have hit nine. So I’m there. I’m like,
8:00:57 I’m really close. And then this whole Austin trip has really gotten in the way of my web grid
8:01:03 playing ability. Yeah, so that’s all you’re thinking about right now. Yeah, I know. I just want, I want
8:01:09 to do better. I want to hit nine. I think, well, I know nine is very,
8:01:16 very achievable. I’m right there. I think 10, I could hit maybe in the next month. Like I could
8:01:20 do it probably in the next few weeks if I really push. I think you and Elon are basically the same
8:01:25 person, because last time I did a podcast with him, he came in extremely frustrated that he couldn’t
8:01:32 beat Uber Lilith as a Druid. That was like a year ago, I think. I forget. Like, solo. And I could
8:01:37 just tell some percentage of his brain the entire time was thinking, like, I wish I was
8:01:44 attempting that right now. I think he did it that night. He stayed up and did it that night.
8:01:50 Just crazy to me. I mean, in a fundamental way, it’s really inspiring. And what you’re doing is
8:01:55 inspiring in that way because, I mean, it’s not just about the game. Everything you’re doing there
8:02:02 has impact. By striving to do well on web grid, you’re helping everybody figure out how to create
8:02:08 the system all along, like the decoding, the software, the hardware, the calibration,
8:02:12 all of it, how to make all of that work so you can do everything else really well.
8:02:18 Yeah, it’s just really fun. Well, that’s also part of the thing is making it fun.
8:02:26 Yeah, it’s addicting. I’ve joked about what they actually did when they went in and put this thing
8:02:32 in my brain. They must have flipped a switch to make me more susceptible to these kinds of games,
8:02:37 to make me addicted to web grid or something. Do you know Bliss’s high score?
8:02:41 Yeah, he said like 14 or something. 17.1 or something?
8:02:43 17 on the dot. 17.01.
8:02:50 Yeah. He told me he does it on the floor with peanut butter and he fasts. It’s weird.
8:02:52 That sounds like cheating. Sounds like performance enhancing.
8:02:57 The first time Nolan played this game, he asked how good I was at this game.
8:03:00 And I think he told me right then, “I’m going to try to beat you.”
8:03:01 I’m going to get there someday.
8:03:02 I fully believe you.
8:03:03 I think I can.
8:03:12 So I’ve been playing first off with the Dwell Cursor, which really hampers my web grid playing
8:03:16 ability. Basically, I have to wait 0.3 seconds for every click.
8:03:22 Oh, so you can’t do the clicks. So you click by dwelling. You said 0.3?
8:03:31 0.3 seconds, which sucks. It really slows down how high I’m able to get.
8:03:37 I still hit like 50, I think I hit like 50 something trials, net trials per minute in that,
8:03:41 which was pretty good, because I’m able to, like...
8:03:48 so, one of the settings is also how slow you need to be moving in order to initiate a click,
8:03:57 to start a click. So I can tell sort of when I’m on that threshold to start initiating a click
8:04:02 just a bit early. So I’m not fully stopped over the target when I go to click.
8:04:07 I’m doing it like on my way to the targets a little to try to time it just right.
8:04:08 So you’re slowing down.
8:04:10 Yeah, just a hair right before the target.
8:04:17 This is, like, elite performance. Okay. But still, it sucks that there’s a ceiling of the
8:04:24 0.3. Well, I can get down to 0.2 and 0.1. 0.1, yeah, and I’ve played with that a little bit too.
8:04:28 I have to adjust a ton of different parameters in order to play with 0.1.
8:04:34 And I don’t have control over all that on my end yet. It also changes, like, how the models are
8:04:39 trained. Like, if I train a model in WebGrid, like a bootstrap on a model, which basically is
8:04:45 them training models as I’m playing WebGrid based off of the WebGrid data. So, like,
8:04:51 if I play WebGrid for 10 minutes, they can train off that data specifically in order to get me a
8:04:58 better model. If I do that with 0.3 versus 0.1, the models come out different. The way that they
8:05:03 interact is just much, much different. So I have to be really careful. I found that
8:05:09 doing it with 0.3 is actually better in some ways, unless I can do it with 0.1 and change
8:05:14 all of the different parameters; then that’s more ideal, because obviously 0.1 is faster than 0.3.
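A minimal sketch of the bootstrapping idea Nolan describes, assuming the simplest possible labeling rule; everything below is an illustration with invented names, not Neuralink’s implementation. While he plays WebGrid, the lit target provides a free label, since the intended movement at each moment is presumably straight toward it.

    import numpy as np

    # Hypothetical bootstrap: build a fresh training set from recent WebGrid
    # play by labeling each logged feature vector with the unit vector pointing
    # from the cursor to the lit target (the assumed intent at that moment).
    def bootstrap_dataset(gameplay_log):
        X, Y = [], []
        for features, cursor_pos, target_pos in gameplay_log:
            intent = np.asarray(target_pos) - np.asarray(cursor_pos)
            norm = np.linalg.norm(intent)
            if norm > 1e-6:              # skip frames where the cursor is on target
                X.append(features)
                Y.append(intent / norm)
        return np.array(X), np.array(Y)

Under this picture, it is unsurprising that models bootstrapped at a 0.3-second dwell versus 0.1 “come out different”: the gameplay, and therefore the logged labels, differ with the settings.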
8:05:22 So I could get there. I can get there. Can you click using your brain?
8:05:27 For right now, it’s the hover clicking with the dwell cursor. Before all the thread
8:05:33 retraction stuff happened, we were calibrating clicks. Left click, right click. My
8:05:40 previous ceiling, before I broke the record again with the dwell cursor, was, I think, on a 35 by 35
8:05:47 grid with left and right click. You get more BPS, more bits per second, using multiple clicks
8:05:53 because it’s more difficult. Because you’re supposed to do either a left click or a right
8:05:57 click. Yes, different colors. Different colors. Yeah, blue targets for left click,
8:06:04 orange targets for right click is what they had done. My previous record of 7.5 was with the
8:06:11 blue and the orange targets. Which, I think, if I went back to that now, doing the click calibration
8:06:16 and being able to initiate clicks on my own, I think I would break that 10
8:06:24 ceiling in a couple of days. Max. Yeah, you start making Bliss nervous about his 17. Why do you
8:06:31 think we haven’t given him the– Exactly. So what did it feel like with the retractions,
8:06:38 when some of the threads retracted? It sucked. It was really, really hard. The day
8:06:47 they told me was the day of my big Neuralink tour at their Fremont facility. They told me right
8:06:52 before we went over there, it was really hard to hear. My initial reaction was, “All right, go in,
8:06:58 fix it. Go in, take it out, and fix it.” The first surgery was so easy. I went to sleep a couple
8:07:06 hours later, I woke up, and here we are. I didn’t feel any pain, didn’t take any pain pills or anything,
8:07:13 so I just knew that if they wanted to, they could go in and put in a new one next day,
8:07:22 if that’s what it took, because I wanted it to be better and I wanted not to lose the capability.
8:07:30 I had so much fun playing with it for a few weeks, for a month. It had opened up so many
8:07:34 doors for me, and it opened up so many more possibilities, that I didn’t want to lose it
8:07:41 after a month. I thought it would have been a cruel twist of fate if I had gotten to see
8:07:47 the view from the top of this mountain and then have it all come crashing down after a month.
8:07:55 And I know I said the top of the mountain, but how I saw it was, I was just now starting to climb
8:08:02 the mountain. There was so much more that I knew was possible, and so to have all of that be taken
8:08:10 away was really, really hard. But then on the drive over to the facility, I don’t know, like
8:08:18 five-minute drive, whatever it is, I talked with my parents about it. I prayed about it. I was just
8:08:25 like, “I’m not going to let this ruin my day. I’m not going to let this ruin this amazing tour
8:08:30 that they have set up for me. I want to go show everyone how much I appreciate all the work they’re
8:08:36 doing. I want to go meet all of the people who have made this possible, and I want to go have
8:08:42 one of the best days of my life.” And I did, and it was amazing. And it absolutely was one of the
8:08:49 best days I’ve ever been privileged to experience. And then for a few days, I was pretty down in the
8:08:57 dumps. For the first few days afterwards, I didn’t know if it was ever going to work again.
8:09:08 And then I just made the decision that even if I lost the ability to use the Neuralink, even if I
8:09:17 lost out on everything to come, if I could keep giving them data in any way, then I would do that.
8:09:23 If I needed to just do some of the data collection every day or body mapping every day for a year,
8:09:31 then I would do it. Because I know that everything I’m doing helps everyone to come after me,
8:09:36 and that’s all I wanted. I guess the whole reason that I did this was to help people.
8:09:41 And I knew that anything I could do to help, I would continue to do. Even if I never got to use
8:09:48 the cursor again, then I was just happy to be a part of it. And everything that I’d done was
8:09:52 just a perk. It was something that I got to experience, and I know how amazing it’s going
8:09:57 to be for everyone to come after me. So might as well just keep trucking along.
8:10:04 That said, you were able to work your way back up, to get the performance back.
8:10:10 So this is like going from Rocky 1 to Rocky 2. So when did you first realize that this is possible
8:10:15 and would give you the strength, the motivation, the determination to do it,
8:10:18 to climb back up and beat your previous record?
8:10:23 Yeah, it was within a couple of weeks. Again, this feels like I’m interviewing an athlete.
8:10:29 This is great. I’d like to thank my parents. The road back was long and hard,
8:10:38 full of many difficulties. There were dark days. It was a couple of weeks, I think,
8:10:45 and then there was just a turning point. I think they had switched how they were measuring
8:10:50 the neuron spikes in my brain. Like, Bliss, help me out.
8:10:54 Yeah, the way in which we were measuring the behavior of individual neurons.
8:10:55 Yeah.
8:10:59 So we switched from individual spike detection to something called spike band power.
8:11:03 If you watched the previous segments with either me or DJ, you probably have some context.
8:11:09 Yeah, okay. So when they did that, it was like a light bulb over the head moment,
8:11:16 like, oh, this works. And it seems like we can run with this. And I saw the
8:11:22 uptick in performance immediately. I could feel it when they switched over. I was like,
8:11:27 this is better. This is good. Everything up till this point for the last few weeks,
8:11:31 last whatever, three or four weeks, because it was before they even told me,
8:11:37 everything before this sucked. Let’s keep doing what we’re doing now. And at that point,
8:11:43 it was like, oh, I know I’m still only at, like, say, in web grid terms, like four or five BPS
8:11:52 compared to my 7.5 before. But I know that if we keep doing this, then I can get back there.
8:11:57 And then they gave me the dwell cursor. And the dwell cursor sucked at first. It’s not,
8:12:04 obviously not what I want. But it gave me a path forward to be able to continue using it.
8:12:10 And hopefully to continue to help out. And so I just ran with it, never looked back.
8:12:15 Like I said, I’m just that kind of person, I roll with the punches anyway. So what was the
8:12:19 process? What was the feedback loop on the figuring out how to do the spike detection in a way that
8:12:24 would actually work well for Nolan? Yeah, it’s a great question. So maybe I’ll just describe first how
8:12:28 the actual update worked. It’s basically an update to the implant. So we just did an over-the-air
8:12:32 software update to his implant, the same way you’d update your Tesla or your iPhone.
8:12:38 And that firmware change enabled us to record sort of averages of populations of neurons
8:12:42 nearby individual electrodes. So we have less resolution about which individual neuron is
8:12:46 doing what, but we have a broader picture of what’s going on nearby an electrode overall.
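A rough sketch of the two feature types being contrasted here, with assumed filter settings and thresholds, since the real firmware details are not public: spike detection counts threshold crossings attributable to individual neurons, while spike band power summarizes the energy of the whole population near an electrode.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    # Spike detection (illustrative): count downward threshold crossings in the
    # raw voltage trace. Tracks single units, but is fragile if threads move.
    def spike_count(voltage, threshold_uv=-50.0):
        crossings = (voltage[1:] < threshold_uv) & (voltage[:-1] >= threshold_uv)
        return int(crossings.sum())

    # Spike band power (illustrative): bandpass the trace in an assumed spike
    # band and average the power, a coarser but more robust population signal.
    def spike_band_power(voltage, fs=20000.0, band=(500.0, 3000.0)):
        sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, voltage)
        return float(np.mean(filtered ** 2))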
8:12:51 And that feedback, I mean, basically as Nolan described, it was immediate when we flipped that
8:12:55 switch. I think the first day we did that, you had three or four BPS right out of the box.
8:12:59 And that was a light bulb moment for, okay, this is the right path to go down. And from there,
8:13:04 there’s a lot of feedback around like how to make this useful for independent use. So what we care
8:13:08 about ultimately is that you can use it independently to do whatever you want. And getting to that point
8:13:12 required us to re-engineer the UX, as you talked about with the dwell cursor, to make it something
8:13:16 that you can use independently without us needing to be involved all the time. And yeah, this is
8:13:19 obviously still the start of this journey. Hopefully we can get back to the place where you’re doing
8:13:25 multiple clicks and using that to control everything much more fluidly, and much more naturally
8:13:30 the applications that you’re trying to interface with. And most importantly, get that
8:13:39 web grid number up. Yeah. So how’s the hover click, do you accidentally click stuff sometimes?
8:13:44 Yeah. Like what’s, how hard is it to avoid accidentally clicking? I have to continuously
8:13:49 keep it moving basically. So like I said, there’s a threshold where it will initiate a click. So if
8:13:56 I ever drop below that, it’ll start and I have 0.3 seconds to move it before it clicks anything.
8:14:01 And if I don’t want it to ever get there, I just keep it moving at a certain speed
8:14:05 and like just constantly like doing circles on screen, moving it back and forth
8:14:13 to keep it from clicking stuff. I actually noticed a couple weeks back that I was,
8:14:19 when I was not using the implant, I was just moving my hand back and forth or in circles.
8:14:24 Like I was trying to keep the cursor from clicking and I was just doing it
8:14:27 like while I was trying to go to sleep. And I was like, okay, this is a problem.
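A minimal sketch of the dwell-click behavior as Nolan describes it, with assumed units and an invented class name: a click fires only if cursor speed stays below a threshold for the full dwell window, which is exactly why keeping the cursor in constant motion prevents accidental clicks.

    import time

    # Hypothetical dwell clicker: drop below the speed threshold and stay there
    # for dwell_seconds (0.3 s here) and a click fires; speed back up and the
    # timer resets.
    class DwellClicker:
        def __init__(self, speed_threshold=50.0, dwell_seconds=0.3):
            self.speed_threshold = speed_threshold  # px/s, assumed units
            self.dwell_seconds = dwell_seconds
            self.slow_since = None

        def update(self, cursor_speed):
            now = time.monotonic()
            if cursor_speed >= self.speed_threshold:
                self.slow_since = None              # still moving: reset timer
                return False
            if self.slow_since is None:
                self.slow_since = now               # just dropped below threshold
            if now - self.slow_since >= self.dwell_seconds:
                self.slow_since = None
                return True                         # dwell complete: click
            return False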
8:14:33 To avoid the clicking. I guess, does that create problems, like, when you’re gaming, accidentally
8:14:41 clicking a thing? Like, yeah, yeah, it happens in chess. I’ve lost a number of games because
8:14:45 I’ll accidentally click something. I think the first time I ever beat you was because of an
8:14:50 accident. Yeah, misclick. It’s a nice excuse, right? Yeah, you can always, anytime you lose,
8:14:57 you could just say it was accidental. Yeah. You said the app improved a lot from version one
8:15:02 when you first started using it. It was very different. So can you just talk about the trial
8:15:07 and error that you went through with the team like 200 plus pages of notes? Like what’s that
8:15:14 process like of going back and forth and working together to improve the thing? It’s a lot of me
8:15:22 just using it like day in and day out and saying, like, hey, can you guys do this for me? Give me
8:15:32 this. I want to be able to do that. I need this. I think a lot of it just doesn’t occur to them
8:15:37 maybe until someone is actually using the app, using the implant. It’s just something that
8:15:46 they just never would have thought of. Or it’s very specific to even like me, maybe what I want.
8:15:51 It’s something I’m a little worried about with the next people that come is, you know,
8:15:58 maybe they will want things much different than how I’ve set it up or what the advice I’ve given
8:16:03 the team. And they’re going to look at some of the things they’ve added for me. Like, that’s a
8:16:09 dumb idea. Like, why would he ask for that? And so I’m really looking forward to get the next
8:16:13 people on because I guarantee that they’re going to think of things that I’ve never thought of.
8:16:17 They’re going to think of improvements. I’m like, wow, that’s a really good idea. Like,
8:16:22 I wish I would have thought of that. And then they’re also going to give me some pushback
8:16:28 about like, yeah, what you are asking them to do here. That’s a bad idea. Let’s do it this way.
8:16:33 And I’m more than happy to have that happen. But it’s just a lot of like, you know,
8:16:40 different interactions with different games or applications, the internet, just with the
8:16:49 computer in general. There’s tons of bugs that end up popping up left right center. So it’s just
8:16:53 me trying to use it as much as possible and showing them what works and what doesn’t work and
8:17:01 what I would like to be better. And then they take that feedback and they usually create amazing
8:17:06 things for me. They solve these problems in ways I would have never imagined. They’re so good at
8:17:12 everything they do. And so I’m just really thankful that I’m able to give them feedback and they can
8:17:18 make something of it. Because a lot of my feedback is like really dumb. It’s just like, I want this.
8:17:24 Please do something about it. And they’ll come back with something super well thought out. And it’s way better
8:17:29 than anything I could have ever thought of or implemented myself. So they’re just great. They’re
8:17:36 really, really cool. As the BCI community grows, would you like to hang out with the other folks
8:17:40 with Neuralinks? Like, what relationship, if any, would you want to have with them? Because
8:17:45 you said, like, they might have a different set of, like, ideas of how to use the thing.
8:17:49 Yeah. Would you be intimidated by their, like, great performance?
8:17:56 No, no. I hope they compete. I hope day one, they, like, wipe the floor with me. I hope they beat it.
8:18:05 And they crush it, you know, double it if they can. Just because on one hand, it’s only going to push
8:18:12 me to be better. Because I’m super competitive. I want other people to push me. I think that is
8:18:18 important for anyone trying to achieve greatness is they need other people around them who are
8:18:24 going to push them to be better. And I even made a joke about it on X once, like once the next
8:18:30 people get chosen, like, cue buddy cop music. Like, I’m just excited to have other people to do this
8:18:34 with and to like share experiences with. I’m more than happy to interact with them as much as they
8:18:40 want. More than happy to give them advice. I don’t know what kind of advice I could give them. But
8:18:45 if they have questions, I’m more than happy. What advice would you have for the next participant
8:18:51 in the clinical trial? That they should have fun with this, because it is a lot of fun.
8:18:58 And that I hope they work really, really hard, because it’s not just for us. It’s for everyone
8:19:05 that comes after us. And, you know, come to me if they need anything, and go to
8:19:12 Neuralink if they need anything. Man, Neuralink moves mountains. Like, they do absolutely anything
8:19:19 for me that they can. And it’s an amazing support system to have. It puts my mind at ease
8:19:26 about so many things that I have had questions about, so many things I want to do.
8:19:33 And they’re always there. And that’s really, really nice. And so I just would tell them not
8:19:39 to be afraid to go to Neuralink with any questions that they have, any concerns, anything that,
8:19:44 you know, they’re looking to do with this, and any help that Neuralink is capable of providing. I
8:19:53 know they will. And I don’t know. I don’t know. Just work your ass off because it’s really important
8:20:00 that we try to give our all to this. So have fun and work hard. Yeah. Yeah. There we go. Maybe
8:20:04 that’s what I’ll just start saying to people. Have fun, work hard. Now you’re a real pro athlete.
8:20:12 Just keep it short. Maybe it’s good to talk about what you’ve been able to do
8:20:19 now that you have a Neuralink implant, like the freedom you gain from this way of interacting
8:20:25 with the outside world. Like you play video games all night. And you do that by yourself.
8:20:30 And that’s a kind of freedom. Can you speak to that freedom that you gain?
8:20:36 Yeah, it’s what all, I don’t know, people in my position want. They just want more independence.
8:20:42 The more load that I can take away from people around me, the better. If I’m able to interact
8:20:48 with the world without using my family, without going through any of my friends,
8:20:55 like needing them to help me with things, the better. If I’m able to sit up on my computer
8:21:02 all night and not need someone to like sit me up, say like on my iPad, like in a position
8:21:07 where I can use it and then have to have them wait up for me all night until I’m ready to be
8:21:17 done using it. Like that, it takes a load off of all of us. And it’s really like all I can ask for.
8:21:22 It’s something that, you know, I could never thank Neuralink enough for. And I know my family
8:21:29 feels the same way. You know, just being able to have the freedom to do things on my own
8:21:38 at any hour of the day or night, it means the world to me. And I don’t know.
8:21:46 When you’re up at 2 a.m. playing web grid by yourself, I just imagine like it’s darkness
8:21:50 and then there’s just a light glowing and you’re just focused. What’s going through your mind?
8:21:59 Are you like in a state of flow where it’s like the mind is empty, like those like Zen masters?
8:22:05 Yeah, generally it is me playing music of some sort. I have a massive playlist. And so I’m just
8:22:12 like rocking out to music. And then it’s also just like a race against time because I’m constantly
8:22:19 constantly looking at how much battery percentage I have left on my implant. Like, all right, I have
8:22:25 30% which equates to, you know, X amount of time, which means I have to break this record
8:22:28 in the next, you know, hour and a half or else it’s not happening tonight.
8:22:36 And so it’s a little stressful when that happens. When it’s like, when it’s above 50%, I’m like,
8:22:41 okay, like I got time. It starts getting down to 30 and then 20. It’s like, all right,
8:22:46 10%, a little pop-up is going to pop up right here and it’s going to really screw my web grid
8:22:52 flow. It’s going to tell me that, you know, like there’s like the low battery, low battery pop-up
8:22:55 comes up and I’m like, it’s really going to screw me over. So if I have to, if I’m going to break
8:23:00 this record, I have to do it in the next like 30 seconds or else that pop-up is going to get in
8:23:05 the way, like cover my web grid. And then it, after that, I go click on it, go back into web grid,
8:23:09 and I’m like, all right, that means I have, you know, 10 minutes left before this thing’s dead.
8:23:14 That’s what’s going on in my head, generally, that and whatever song is playing. And I just
8:23:21 want to break those records so bad. Like it’s all I want when I’m playing Webgrid.
8:23:28 It has become less of, oh, this is just a leisurely activity that I enjoy doing
8:23:33 because it feels so nice and puts me at ease. Now, once I’m in Webgrid, it’s:
8:23:37 you better break this record or you’re going to waste, like, five hours of your life right now.
8:23:41 And I don’t know, it’s just fun. It’s fun, man.
8:23:46 Have you ever tried Webgrid with, like, two targets and three targets? Can you get higher
8:23:50 BPS with that? Can you do that? You mean like different color targets? Or are you...
8:23:55 Oh, with multiple targets. Has that changed the thing? Yeah. So BPS is log of the number of targets,
8:24:00 times correct minus incorrect, divided by time. And so you can think of the different clicks as
8:24:05 basically doubling the number of active targets. Got it. So, you know, you basically get higher BPS the
8:24:09 more options there are, the more difficult the task. And there’s also like Zen mode you’ve played
8:24:14 in before, which is like infinite canvas. Yeah, it covers the whole screen with a grid.
8:24:21 And I don’t know. What? Yeah. And so you can go like, that’s insane. Yeah.
8:26:27 He doesn’t like it because it didn’t show BPS. So, you know... Oh yeah, I had them put in
8:26:34 a giant BPS in the background. So now it’s like the opposite of Zen mode. It’s like super hard mode,
8:26:38 like just metal mode, with just a giant number in the background.
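For reference, the bitrate metric described here can be written out directly. Below is a minimal sketch in Python, assuming log base 2 and net correct selections; the grid size and trial numbers are illustrative, not Noland’s actual session data.

import math

def webgrid_bps(num_targets: int, correct: int, incorrect: int, seconds: float) -> float:
    # Bitrate as described above: log2(number of active targets)
    # times net correct selections, divided by elapsed time.
    if num_targets < 2 or seconds <= 0:
        raise ValueError("need at least 2 targets and a positive elapsed time")
    return math.log2(num_targets) * (correct - incorrect) / seconds

# Illustrative example: a 35x35 grid (1,225 targets) with 50 correct and
# 2 incorrect selections over 60 seconds comes out to roughly 8.2 BPS.
print(webgrid_bps(1225, 50, 2, 60.0))

Adding a second click type doubles the effective number of targets, so the log2 term, and with it the achievable BPS, goes up; that is why more options make the task harder but higher-scoring.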
8:24:48 So you also play Civilization 6. I love Civilization 6. Yeah. Usually go with Korea?
8:24:55 I do, yeah. So the great part about Korea is they focus on, like, science tech victories,
8:25:00 which was not planned. Like I’ve been playing Korea for years. And then all of the Neuralink
8:25:10 stuff happened. So it kind of aligns. But what I’ve noticed with tech victories is if you can just
8:25:19 rush tech, rush science, then you can do anything. Like at one point in the game, you will be so
8:25:25 far ahead of everyone technologically, that you will have like musket men, infantry men,
8:25:29 planes sometimes and people will still be fighting with like bows and arrows.
8:25:35 And so if you want to win a domination victory, you just get to a certain point with the science
8:25:42 and then go and wipe out the rest of the world. Or you can just take science all the way and win
8:25:46 that way. And you’re going to be so far ahead of everyone because you’re producing so much science
8:25:55 that it’s not even close. I’ve accidentally won in different ways just by focusing on science.
8:26:03 I was like, I was playing only science, obviously, like just science all the way,
8:26:08 just tech. And I was trying to get like every tech in the tech tree and stuff.
8:26:15 And then I accidentally won through a diplomatic victory. And I was so mad. I was so mad because
8:26:19 it just, like, ends the game in one turn. It was like, oh, you won. You’re so diplomatic. I’m like,
8:26:22 I don’t want to do this. I should have declared war on more people or something.
8:26:29 It was terrible. But you don’t need like giant civilizations with tech, especially with Korea.
8:26:35 You can keep it pretty small. So I generally just get to a certain military unit and put
8:26:41 them all around my border to keep everyone out. And then I will just build up. So very isolationist.
8:26:46 Nice. Just work on the science and the tech. You’re making it sound so fun.
8:26:50 It’s so much fun. And I also saw Civilization 7 trailer.
8:26:53 Oh, man, I’m so pumped. And that’s probably coming out. Come on,
8:26:56 Civilization 7. Hit me up. I’ll alpha, beta test, whatever.
8:26:59 Wait, when is it coming out? 2025? Yeah, yeah, next year, yeah.
8:27:05 What other stuff would you like to see improved about the Neuralink app and just the entire experience?
8:27:14 I would like to, like I said, get back to the, like, click on demand, like the regular clicks.
8:27:19 That would be great. I would like to be able to connect to more devices right now. It’s just
8:27:25 the computer. I’d like to be able to use it on my phone or use it on different consoles,
8:27:32 different platforms. I’d like to be able to control as much stuff as possible, honestly.
8:27:40 An Optimus robot would be pretty cool. That would be sick if I could control an Optimus robot.
8:27:52 The Link app itself, it seems like we are getting pretty dialed in to what it might look like down
8:27:58 the road. Seems like we’ve gotten through a lot of what I want from it, at least.
8:28:04 The only other thing I would say is, like, more control over all the parameters that I
8:28:13 can tweak with my cursor and stuff. There’s a lot of things that go into how the cursor moves
8:28:19 in certain ways. I have, I don’t know, like three or four of those parameters and they’re my gain
8:28:24 and friction and all that. Gain, friction, yeah. There’s maybe double the amount of those with
8:28:31 just velocity and then with the actual dwell cursor. I would like all of it. I want as much
8:28:37 control over my environment as possible. You want advanced mode. In menus,
8:28:45 usually, there’s a basic mode and an advanced mode, and you’re one of those folks, the power user. That’s what I want. I want
8:28:52 as much control over this as possible. That’s really all I can ask for. Just give me everything.
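To make concrete the kind of parameter surface being asked for here, below is a hypothetical sketch of an “advanced mode” settings object in Python. Gain, friction, velocity, and the dwell cursor come up in the conversation; the field names, defaults, and units are invented for illustration and are not Neuralink’s actual API.

from dataclasses import dataclass

@dataclass
class CursorSettings:
    # Knobs mentioned in the conversation.
    gain: float = 1.0                 # scales decoded intent into cursor speed
    friction: float = 0.2             # damping applied to cursor movement
    # Hypothetical extra knobs an advanced mode might expose.
    velocity_smoothing: float = 0.5   # low-pass filtering of the velocity output
    dwell_time_ms: int = 800          # hold time before a dwell click fires
    dwell_radius_px: int = 15         # allowed cursor drift during a dwell

# A power-user profile might trade stability for speed:
fast = CursorSettings(gain=1.6, friction=0.05, dwell_time_ms=500)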
8:29:00 Has speech been useful? Like just being able to talk also in addition to everything else?
8:29:04 Yeah, you mean like while I’m using it? While you’re using it, like speech to text?
8:29:07 Oh, yeah. Or do you type or look because there’s also a keyboard?
8:29:10 Yeah, yeah. So there’s a virtual keyboard. That’s another thing I would like to work
8:29:17 more on is finding some way to type or text in a different way. Right now, it is
8:29:23 like a dictation, basically, and a virtual keyboard that I can use with the cursor.
8:29:28 But we’ve played around with like finger spelling, like sign language finger spelling,
8:29:37 and that seems really promising. So I have this thought in my head that it’s going to be a very
8:29:44 similar learning curve that I had with the cursor, where I went from attempted movement to imagine
8:29:50 movement at one point. I have a feeling, this is just my intuition, that at some point, I’m going
8:29:55 to be doing finger spelling, and I won’t need to actually attempt to finger spell anymore,
8:30:00 that I’ll just be able to think the letter that I want, and it’ll pop up.
8:30:05 That would be epic. That’s challenging. That’s hard. That’s a lot of work for you to kind of
8:30:10 take that leap, but that would be awesome. And then like going from letters to words is another
8:30:14 step, like you would go from, you know, right now it’s finger spelling of like just the sign
8:30:18 language alphabet. But if it’s able to pick that up, then it should be able to pick up
8:30:25 like the whole sign language, like language. And so then if I could do something along those lines,
8:30:32 or just the sign language spelled word, if I can, you know, spell it at a reasonable speed and it
8:30:37 can pick that up, then I would just be able to think that through and it would do the same thing.
8:30:45 I don’t see why not, after what I saw with the cursor control, I don’t see why it wouldn’t work,
8:30:49 but we’d have to play around with it more. What was the process in terms of like training
8:30:53 yourself to go from attempted movement to imagined movement? How long did that take?
8:30:57 So like how long would this kind of process take? Well, it was a couple of weeks before
8:31:02 it just like happened upon me. But now that I know that that was possible,
8:31:07 I think I could make it happen with other things. I think it would be much, much simpler.
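As a toy illustration of the letter-by-letter pipeline imagined above, here is a sketch of a decoding loop that turns per-window letter classifications into text. Everything here, the classifier interface included, is hypothetical; Neuralink’s actual decoder is not public at this level of detail.

from typing import Callable, Iterable, List

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def decode_letters(feature_windows: Iterable[List[float]],
                   classify: Callable[[List[float]], int]) -> str:
    # `classify` maps one window of neural features to an index into
    # ALPHABET; in practice it would be a trained model, and the same
    # loop works whether the spelling is attempted or merely imagined.
    return "".join(ALPHABET[classify(w)] for w in feature_windows)

# Toy usage with a stand-in classifier (real features would be vectors
# of band powers or spike counts per electrode, not single numbers):
windows = [[0.0], [1.0]]
print(decode_letters(windows, lambda w: 7 if w == [0.0] else 8))  # prints "hi"

Going from letters to whole signed words would then mostly be a larger vocabulary for the same kind of classifier rather than a different mechanism.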
8:31:14 Would you get an upgraded implant device? Sure, absolutely. Whenever they’ll let me.
8:31:19 So you don’t have any concerns for you with the surgery experience? All of it was
8:31:27 like no regrets. No, everything’s been good so far. You just keep getting upgrades.
8:31:32 Yeah, I mean, why not? I’ve seen how much it’s impacted my life already. And I know that everything
8:31:37 from here on out is just going to get better and better. So I would love to get
8:31:46 the upgrade. What future capabilities are you excited about sort of beyond this kind of telepathy?
8:31:52 Is vision interesting? So for folks who, for example, who are blind, so you’re like enabling
8:31:59 people to see or for speech? Yeah, there’s a lot that’s very, very cool about this. I mean,
8:32:03 we’re talking about the brain. So there’s like, this is just motor cortex stuff. There’s so much
8:32:09 more that can be done. The vision one is fascinating to me. I think that is going to be very, very
8:32:14 cool to give someone the ability to see for the first time in their life would just be, I mean,
8:32:19 it might be more amazing than even helping someone like me. Like that just sounds incredible.
8:32:26 The speech thing is really interesting being able to have some sort of like real time translation and
8:32:34 cut away that language barrier would be really cool. Any sort of like actual impairments that it
8:32:39 could solve, like with speech would be very, very cool. And then also there are a lot of
8:32:45 different disabilities that all originate in the brain. And you would hopefully be able
8:32:50 to solve a lot of those. I know there’s already stuff to help people with seizures
8:32:58 that can be implanted in the brain. This would do, I imagine, the same thing. And so you could
8:33:04 do something like that. I know that even someone like Joe Rogan has talked about the possibilities
8:33:16 with being able to stimulate the brain in different ways. I’m not sure what, you know,
8:33:22 like how ethical a lot of that would be. That’s beyond me, honestly. But I know that there’s
8:33:28 a lot that can be done when we’re talking about the brain and being able to go in and physically
8:33:34 make changes to help people or to improve their lives. So I’m really looking forward to everything
8:33:40 that comes from this. And I don’t think it’s all that far off. I think a lot of this can be
8:33:45 implemented within my lifetime, assuming that I live a long life. What you were referring to is
8:33:51 things like people suffering from depression or things of that nature potentially getting help.
8:33:57 Yeah. Flip a switch like that, make someone happy. I know, I think Joe has talked about it more in
8:34:05 terms of like, you want to experience what a drug trip feels like. You want to experience what it’s
8:34:09 like to be on, of course, yeah, mushrooms or something like that, DMT. You can just flip
8:34:16 that switch in the brain. My buddy, Bane, has talked about being able to wipe parts of your memory
8:34:20 and re-experience things that, like for the first time, like your favorite movie or your favorite
8:34:25 book, just wipe that out real quick and then re-fall in love with Harry Potter or something.
8:34:30 I told him, I was like, I don’t know how I feel about people being able to just wipe
8:34:34 parts of your memory. That seems a little sketchy to me. He’s like, they’re already doing it.
8:34:43 Sounds legit. I would love memory replay, just like actually like high resolution replay of all
8:34:47 memories. Yeah. I saw an episode of Black Mirror about that once. I don’t think I want it.
8:34:52 Yeah. So Black Mirror always kind of considers the worst case, which is important. I think people
8:34:58 don’t consider the best case or the average case enough. I don’t know what it is about us humans.
8:35:04 We want to think about the worst possible thing. We love drama. It’s like, how is this
8:35:09 new technology going to kill everybody? We just love that. We’re like, yes, let’s watch.
8:35:12 Hopefully people don’t think about that too much with me. It’ll ruin a lot of my plans.
8:35:18 Yeah, I assume you’re going to have to take over the world. I mean, I love your Twitter.
8:35:22 You tweeted, I’d like to make jokes about hearing voices in my head since getting
8:35:26 the neural link, but I feel like people would take it the wrong way. Plus the voices in my head
8:35:33 told me not to. Yeah. Yeah. Yeah. Please never stop. So you were talking about Optimus.
8:35:41 Is that something you would love to be able to do to control the robotic arm or the entirety of
8:35:45 Optimus? Oh yeah, for sure. For sure. Absolutely. You think there’s something like fundamentally
8:35:54 different about just being able to physically interact with the world? Yeah, 100%. I know
8:36:01 another thing is being able to give people the ability to feel sensation and stuff too,
8:36:05 by going into the brain and having the Neuralink maybe do that. That could be something that
8:36:12 could be transferred through the Optimus as well. There’s all sorts of really
8:36:21 cool interplay between that and then also just physically interacting. I mean, 99% of the things
8:36:29 that I can’t do myself, I obviously need a caretaker for, someone to physically do things for me.
8:36:36 If an Optimus robot could do that, I could live an incredibly independent life and not be such a
8:36:46 burden on those around me. It would change the way people like me live, at least until
8:36:51 whatever this is gets cured. But being able to interact with the world physically like that
8:37:00 would just be amazing. And not just having it be a caretaker or something, but
8:37:05 something like I talked about, just being able to read a book. Imagine an Optimus robot just being
8:37:10 able to hold a book open in front of me, get that smell again. I might not be able to feel it at that
8:37:16 point or maybe I could again with the sensation and stuff. But there’s something different about
8:37:21 reading a physical book than staring at a screen or listening to an audiobook. I actually don’t
8:37:25 like audiobooks. I’ve listened to a ton of them at this point, but I don’t really like them.
8:37:31 I would much rather read a physical copy. One of the things you would love to be able to experience
8:37:38 is opening the book, bringing it up to you and to feel the touch of the paper.
8:37:45 Yeah. Oh, man. The touch, the smell. It’s just something about the words on the page.
8:37:51 They’ve replicated that page color on the Kindle and stuff. Yeah, it’s just not the same.
8:37:56 Something as simple as that. One of the things you miss is touch.
8:38:05 I do. Yeah. A lot of things that I interact with in the world, like clothes or literally any physical
8:38:10 thing that I interact with in the world, a lot of times what people around me will do is they’ll
8:38:17 just come rub it on my face. They’ll lay something on me so I can feel the weight. They will rub a
8:38:26 shirt on me so I can feel fabric. There’s something very profound about touch. It’s
8:38:33 something that I miss a lot and something I would love to do again, but we’ll see.
8:38:38 What would be the first thing you do with a hand that can touch? Give your mom a hug after that, right?
8:38:48 Yeah, I know. One thing that I’ve asked God for basically every day since my accident was just
8:38:56 being able to one day move, even if it was only my hand, so that way I could squeeze my mom’s hand
8:39:02 or something, just to show her how much I care and how much I love her and everything. Something
8:39:09 along those lines, being able to just interact with the people around me, handshake, give someone a
8:39:17 hug, I don’t know, anything like that. Being able to help me eat, I’d probably get really fat,
8:39:24 which would be a terrible, terrible thing. Also beat Bliss in chess on a physical chessboard.
8:39:33 Yeah, yeah. There are just so many upsides. Also, any way to find some way to feel like I’m bringing
8:39:40 Bliss down to my level, because he’s just such an amazing guy, and everything about him is just
8:39:46 so above and beyond, that anything I can do to take him down a notch is great.
8:39:52 Yeah, humble him a bit. He needs it. Okay, as he’s sitting next to me.
8:39:58 Did you ever make sense of why God puts good people through such hardship?
8:40:07 Oh, man. I think it’s all about
8:40:15 understanding how much we need God. I don’t think that there’s any
8:40:20 light without the dark. I think that if all of us were happy all the time,
8:40:30 there would be no reason to turn to God ever. I feel like there would be no concept
8:40:39 of good or bad. I think that as much of the darkness and the evil that’s in the world,
8:40:45 it makes us all appreciate the good and the things we have so much more. I think
8:40:51 when I had my accident, one of the first things I said to one of my best friends was,
8:40:55 and this was within the first month or two after my accident, I said,
8:41:02 “Everything about this accident has just made me understand and believe that God is real and that
8:41:08 there really is a God,” basically, in that my interactions with him have all been real and
8:41:15 worthwhile. He said that, if anything, seeing me go through this accident made him believe that there
8:41:23 isn’t a God. It’s a very different reaction, but I believe that it is a way for God to test us,
8:41:31 to build our character, to send us through trials and tribulations, to make sure that
8:41:38 we understand how precious he is and the things that he’s given us and the time that he’s given us,
8:41:45 and then to hopefully grow from all of that. I think that’s a huge part of being here is to
8:41:54 not just have an easy life and do everything that’s easy, but to step out of our comfort zones
8:41:57 and really challenge ourselves, because I think that’s how we grow.
8:42:01 What gives you hope about this whole thing we have going on?
8:42:11 Human civilization. Oh, man. I think people are my biggest inspiration,
8:42:18 even just being at Neuralink for a few months, looking people in the eyes and hearing their
8:42:26 motivations for why they’re doing this. It’s so inspiring. I know that they could be at other places,
8:42:34 at cushier jobs, working somewhere else, doing X, Y, or Z that doesn’t really mean that much,
8:42:42 but instead they’re here and they want to better humanity and they want to better just the people
8:42:46 around them, the people that they’ve interacted with in their life. They want to make better
8:42:51 lives for their own family members who might have disabilities or they look at someone like me and
8:42:56 they say, “I can do something about that, so I’m going to.” It’s always been what I’ve connected
8:43:01 with most in the world: people. I’ve always been a people person, and I love learning about
8:43:09 people and I love learning how people developed and where they came from and to see how much
8:43:14 people are willing to do for someone like me when they don’t have to. They’re going out of their way
8:43:21 to make my life better. It gives me a lot of hope for just humanity in general, how much we care
8:43:26 and how much we’re capable of when we all get together and try to make a difference.
8:43:32 I know there’s a lot of bad out there in the world, but there always has been and there always
8:43:45 will be. I think that that shows human resiliency and it shows what we’re able to endure and how
8:43:55 much we just want to be there and help each other and how much satisfaction we get from that,
8:43:58 because I think that’s one of the reasons that we’re here is just to help each other.
8:44:06 That always gives me hope. It’s just realizing that there are people out there who still care
8:44:12 and who want to help. Thank you for being one such human being and continuing to be a great human
8:44:18 being through everything you’ve been through. And for being an inspiration to many people, to myself,
8:44:25 for many reasons, including your epic, unbelievably great performance on Webgrid. I will be training
8:44:32 all night tonight to try to catch up. And I believe that once you come back,
8:44:36 sorry to interrupt with the Austin trip, once you come back, you’ll eventually beat Bliss.
8:44:42 Yeah, for sure. Absolutely. I’m rooting for you. The whole world is rooting for you. Thank you
8:44:47 for everything you’ve done. Thanks, man. Thanks for listening to this conversation
8:44:54 with Noland Arbaugh, and before that with Elon Musk, DJ Seo, Matthew MacDougall, and Bliss Chapman.
8:44:57 To support this podcast, please check out our sponsors in the description.
8:45:03 And now let me leave you with some words from Aldous Huxley in The Doors of Perception.
8:45:12 We live together. We act on and react to one another, but always and in all circumstances,
8:45:19 we are by ourselves. The martyrs go hand in hand into the arena. They are crucified alone.
8:45:26 Embraced, the lovers desperately try to fuse their insulated ecstasies into a single self-transcendence,
8:45:34 in vain. By its very nature, every embodied spirit is doomed to suffer and enjoy in solitude.
8:45:42 Sensations, feelings, insights, fancies, all these are private and, except through symbols and at second
8:45:50 hand, incommunicable. We can pool information about experiences, but never the experiences themselves.
8:45:57 From family to nation, every human group is a society of island universes.
8:46:11 Thank you for listening and hope to see you next time.

Elon Musk is CEO of Neuralink, SpaceX, Tesla, xAI, and CTO of X. DJ Seo is COO & President of Neuralink. Matthew MacDougall is Head Neurosurgeon at Neuralink. Bliss Chapman is Brain Interface Software Lead at Neuralink. Noland Arbaugh is the first human to have a Neuralink device implanted in his brain.

Transcript: https://lexfridman.com/elon-musk-and-neuralink-team-transcript

Please support this podcast by checking out our sponsors:
https://lexfridman.com/sponsors/ep438-sc

SPONSOR DETAILS:
Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off
MasterClass: https://masterclass.com/lexpod to get 15% off
Notion: https://notion.com/lex
LMNT: https://drinkLMNT.com/lex to get free sample pack
Motific: https://motific.ai
BetterHelp: https://betterhelp.com/lex to get 10% off

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Neuralink’s X: https://x.com/neuralink
Neuralink’s Website: https://neuralink.com/
Elon’s X: https://x.com/elonmusk
DJ’s X: https://x.com/djseo_
Matthew’s X: https://x.com/matthewmacdoug4
Bliss’s X: https://x.com/chapman_bliss
Noland’s X: https://x.com/ModdedQuad
xAI: https://x.com/xai
Tesla: https://x.com/tesla
Tesla Optimus: https://x.com/tesla_optimus
Tesla AI: https://x.com/Tesla_AI

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above, it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(09:26) – Elon Musk
(12:42) – Telepathy
(19:22) – Power of human mind
(23:49) – Future of Neuralink
(29:04) – Ayahuasca
(38:33) – Merging with AI
(43:21) – xAI
(45:34) – Optimus
(52:24) – Elon’s approach to problem-solving
(1:09:59) – History and geopolitics
(1:14:30) – Lessons of history
(1:18:49) – Collapse of empires
(1:26:32) – Time
(1:29:14) – Aliens and curiosity
(1:36:48) – DJ Seo
(1:44:57) – Neural dust
(1:51:40) – History of brain–computer interface
(1:59:44) – Biophysics of neural interfaces
(2:10:12) – How Neuralink works
(2:16:03) – Lex with Neuralink implant
(2:36:01) – Digital telepathy
(2:47:03) – Retracted threads
(2:52:38) – Vertical integration
(2:59:32) – Safety
(3:09:27) – Upgrades
(3:18:30) – Future capabilities
(3:47:46) – Matthew MacDougall
(3:53:35) – Neuroscience
(4:00:44) – Neurosurgery
(4:11:48) – Neuralink surgery
(4:30:57) – Brain surgery details
(4:46:40) – Implanting Neuralink on self
(5:02:34) – Life and death
(5:11:54) – Consciousness
(5:14:48) – Bliss Chapman
(5:28:04) – Neural signal
(5:34:56) – Latency
(5:39:36) – Neuralink app
(5:44:17) – Intention vs action
(5:55:31) – Calibration
(6:05:03) – Webgrid
(6:28:05) – Neural decoder
(6:48:40) – Future improvements
(6:57:36) – Noland Arbaugh
(6:57:45) – Becoming paralyzed
(7:11:20) – First Neuralink human participant
(7:15:21) – Day of surgery
(7:33:08) – Moving mouse with brain
(7:58:27) – Webgrid
(8:06:28) – Retracted threads
(8:14:53) – App improvements
(8:21:38) – Gaming
(8:32:36) – Future Neuralink capabilities
(8:35:31) – Controlling Optimus robot
(8:39:53) – God
(8:41:58) – Hope
