AI transcript
0:00:08 This episode is all about how artificial intelligence is coming to the doctor’s office, how it will
0:00:13 impact the nature of doctor-patient interactions, diagnosis, prevention, prediction, and everything
0:00:14 in between.
0:00:20 The conversation between a16z's general partner Vijay Pande and Dr. Eric Topol, cardiologist
0:00:25 and chair of innovative medicine at Scripps Research, is based on Topol’s book, “Deep
0:00:29 Medicine,” and touches on everything from how AI’s deep phenotyping can shift our
0:00:34 thinking from population health to understanding the medical health essence of you, how the
0:00:38 industry might respond, the challenges in integrating and introducing the technology
0:00:43 into today’s system, what the doctor’s visit of the future might really look like, and
0:00:47 ultimately how AI can make healthcare more human.
0:00:50 Before we talk about technology and we talk about all the things that are changing the
0:00:54 world or making huge impacts, it’s interesting to just think like, “What should a doctor
0:00:55 be doing?”
0:00:56 And how do you see that?
0:01:02 That’s really what I was pondering and I really did this deep look into AI.
0:01:07 I actually didn’t expect it to be this back to the future story, but in many ways, I think
0:01:13 it turns out that as we go forward, particularly in a longer-term view, the ability to outsource
0:01:19 so many things with help from AI and machines, I think is going to get us back.
0:01:20 It could.
0:01:21 It could.
0:01:24 That’s a big if, to where we were back in the ’70s and before.
0:01:28 What was better then was that doctors were spending much more time with us, right?
0:01:29 Exactly.
0:01:34 That gift of time, the human side, which is the center of medicine that’s been lost.
0:01:40 The big business of healthcare and all of its components like electronic records and
0:01:48 relative value units and all this stuff basically has sucked out any sense of intimacy and time.
0:01:51 And it’s also accompanied by lots of errors.
0:01:58 But of course, it's not a gimme because administrators want more efficiency, more productivity.
0:02:02 I think you put this on Twitter where it’s like some kid drew like a drawing of going
0:02:07 to the doctor and the picture was the doctor with their back turned working on a computer.
0:02:12 And that is what happens too much, but yet we’re talking about technology coming in.
0:02:15 So how does this all work out that more technology means less computer?
0:02:23 I think that is kind of fundamental to the problem of doctors not even making eye contact,
0:02:30 and for a child to draw that picture, how unnerving that trip to the pediatrician must have been for her. Natural
0:02:35 language processing can actually liberate us from keyboards.
0:02:41 And so it’s already being done in some clinics and even in the UK in emergency rooms.
0:02:49 And so if we keep that up and build on that, we can eliminate that whole distraction of doctors
0:02:52 and nurses and clinicians being data clerks, which is ridiculous.
0:02:58 So the fact that voice recognition is just moving so fast in terms of accuracy and speed
0:02:59 is really encouraging.
0:03:03 Alexa is a very basic version of voice recognition, but you're talking about something much more
0:03:04 sophisticated.
0:03:07 That’s something where they’re actually doing NLP, they’re doing transcriptions so doctors
0:03:09 don’t have to take notes.
0:03:13 If you were more sophisticated, you could put an ontology onto this such that you’re
0:03:17 not just getting like a transcript of what’s going on, but you have very machine-learning-friendly,
0:03:19 organized data.
0:03:20 Exactly.
0:03:26 So the notes that are synthesized from the conversation are far better than the notes
0:03:32 that you would get in Epic or Cerner, where 80% are cut and pasted and error-laden.
0:03:39 So I mean, Google AI just published their experience in JAMA, and I think it’s really
0:03:45 going much faster because the accuracy of the transcription and the synthesized note is
0:03:50 far better than what we have today, and it exceeds professional medical transcriptionists in
0:03:51 terms of accuracy.
0:03:52 Yeah.
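As a minimal illustration of the idea discussed above, here is a sketch of turning a raw visit conversation into ontology-coded, machine-learning-friendly fields rather than a flat transcript. This is a hypothetical toy, not any vendor's actual pipeline; real systems use trained clinical NLP models and terminologies such as SNOMED CT, and the phrase-to-concept table below is invented purely for illustration.

```python
# Toy sketch: map phrases from a visit transcript into typed, structured fields.
# The concept table is hypothetical; real pipelines use trained clinical NER models
# and standard terminologies (e.g., SNOMED CT), not keyword lookup.
from dataclasses import dataclass, field
from typing import List

CONCEPT_TABLE = {
    "heart fluttering": {"type": "symptom", "concept": "palpitations"},
    "short of breath":  {"type": "symptom", "concept": "dyspnea"},
    "lab tests":        {"type": "order",   "concept": "laboratory_panel"},
    "scan":             {"type": "order",   "concept": "imaging_study"},
}

@dataclass
class StructuredNote:
    symptoms: List[str] = field(default_factory=list)
    orders: List[str] = field(default_factory=list)

def structure_transcript(transcript: str) -> StructuredNote:
    """Scan the transcript for known phrases and file them into typed fields."""
    note = StructuredNote()
    lowered = transcript.lower()
    for phrase, info in CONCEPT_TABLE.items():
        if phrase in lowered:
            if info["type"] == "symptom":
                note.symptoms.append(info["concept"])
            else:
                note.orders.append(info["concept"])
    return note

if __name__ == "__main__":
    text = ("Well, Mr. Jones, you mentioned your heart fluttering. "
            "We're going to have you get lab tests and then a scan.")
    print(structure_transcript(text))
    # StructuredNote(symptoms=['palpitations'], orders=['laboratory_panel', 'imaging_study'])
```

The point is only the shape of the output: typed fields a downstream model (or a billing system) can consume, instead of free text.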
0:03:55 And so I’m imagining like what the doctor visit is like then.
0:03:59 So we’ve got maybe NLP so the doctor doesn’t have to be transcribing and not interacting
0:04:00 with Epic.
0:04:02 Who knows what’s in the back end, but it doesn’t even matter anymore.
0:04:03 Right.
0:04:10 So we’ve got billing and all that headache for like a PCP, all that’s a huge headache.
0:04:13 That’s all part of that conversation because you say, “Well, Mr. Jones, we’re going to
0:04:18 have you have lab tests for such and such and then we’re going to get this scan and
0:04:20 it’s all done through the conversation.”
0:04:22 We could bring in other technologies, right?
0:04:28 We’ve thought about just how imaging or other type of diagnosis comes in and we’ve seen
0:04:33 all these cool things about how machine learning can improve this, but there’s also a fine line,
0:04:38 because at the same point, and you bring this up in the book, over-diagnosing can also
0:04:39 be a very difficult problem.
0:04:40 Yeah.
0:04:44 Like it was very stunning where you talked about how the incidence of thyroid cancer
0:04:47 is going up, but the mortality’s been flat.
0:04:48 Exactly.
0:04:53 And so I guess the challenge will be how we can bring these technologies in so doctors
0:04:56 can do the things they should be doing and not the things they shouldn’t be doing.
0:04:57 Right.
0:05:03 Well, I think part of it is getting our arms around a person’s data, this whole deep phenotyping.
0:05:10 So no human could actually integrate all this data, not only the whole electronic record,
0:05:18 which all too often is incomplete, but also pulling together sensor data, genomic data,
0:05:24 gut microbiome, all the things that you’d want to be able to come up with not only better
0:05:29 diagnoses, but also the better strategy for prevention or treatment.
0:05:34 So I think what’s going to make life easier for both doctors and for the patients is having
0:05:39 that data fully processed and distilled.
0:05:44 In the book, I tell the story about my knee replacement and how it was a total fiasco.
0:05:49 Part of that was because my orthopedist who did the surgery wasn’t in touch with my congenital
0:05:50 condition.
0:05:57 And so that hopefully is going to be something we can transcend in the future.
0:06:02 And this is one thing that computers can do very well: logistics and coordination.
0:06:05 There are tons of cases, like the thyroid cancer we were just talking about, where maybe
0:06:09 you have to bring in an endocrinologist in addition to an oncologist.
0:06:13 And it’s shocking that often there’s no discussion there, there’s no communication.
0:06:19 But yet the challenge in my mind is how does the computer magically know things that we can’t
0:06:20 do right now?
0:06:27 Well, that’s I think where the complementarity, the synergy between machines and people is
0:06:35 so ideal because we just have early satiety with data, whereas deep learning has an insatiable
0:06:36 appetite.
0:06:38 And so that contrasts.
0:06:45 But we have, as doctors and humans, we have just great contextual abilities, the judgment,
0:06:53 the wisdom, the experience, and just the features that can basically build on that machine
0:06:58 processing, because we don’t ever want to trust an algorithm for a serious matter.
0:07:04 But if it tees it up and we have oversight and we fit it into that person’s story, that
0:07:06 I think is the best of both worlds.
0:07:08 Well, here I’m really curious because what is the ground truth?
0:07:13 Because generally we don’t trust an individual person as like, as knowing everything either,
0:07:14 right?
0:07:19 But the ground truth would be like a second opinion or a third opinion or a fourth opinion
0:07:22 or even like, you know, a board to look at something.
0:07:24 And that would be what I think most people would view as the ground truth.
0:07:29 The ground truth, when it’s applied to training an algorithm, of course, is knowing that it
0:07:32 is the real deal, that it is really true.
0:07:40 And I think a great example of that is in radiology because radiologists have a false
0:07:44 negative rate of 32 percent, false negative.
0:07:49 And that’s the basis for most of the litigation in radiology, which is over the course of
0:07:54 a career, a third of radiologists get sued mostly because they miss something.
0:08:00 So what you have are algorithms with ground truths that are trained on, you know, hundreds
0:08:05 of thousands of scans so that whether it’s a chest x-ray, a CT scan or MRI, whatever
0:08:08 it is, that it’s not going to miss stuff.
0:08:12 But then, of course, you’ve got that over read by the trained radiologists.
0:08:20 So, you know, I think that’s an example of how we can use AI to really rev up accuracy.
0:08:23 Somebody is going to need to interpret the algorithms or sort of be on top of them, and in
0:08:29 a sense, the doctor is freed from the stuff the doctor shouldn’t be doing, like the BS
0:08:32 accounting or the typing or all these things.
0:08:34 And that’s not the best use of a doctor’s time.
0:08:39 And it’s funny because it seems like the best use of a doctor’s time is in understanding what
0:08:43 these tests would mean, whether it be an AI test or a cholesterol or whatever, and in
0:08:45 how to communicate with the patient.
0:08:48 And so that seems to be a theme running through the book.
0:08:51 So for the first half, in terms of the understanding, what do you think that’s going to look like?
0:08:55 I mean, one of the things I’ve been constantly wondering about is whether there’ll be a new
0:08:56 medical specialty.
0:09:00 Like, you know, because you don’t have radiology without like CT or X-rays, right?
0:09:03 So presumably you didn’t have radiology a hundred years ago.
0:09:05 Do we get an AI-ology or something like that?
0:09:12 Saurabh Jha, who is a radiologist at Penn, he and I penned a JAMA editorial about the
0:09:16 information specialist, radiologists and pathologists.
0:09:21 Their foundation is reviewing patterns and information.
0:09:26 But what’s interesting is this is an opportunity for them to connect with patients, because
0:09:27 right now they don’t.
0:09:32 You know, right now a radiologist never sees a patient; radiologists live in the basement.
0:09:35 The pathologists look at the slides with their group of pathologists.
0:09:40 But they actually want to interact with patients and they have this unique insight because
0:09:43 they’re like the honest brokers, they don’t want to do a surgery.
0:09:46 They want to give you their expertise.
0:09:49 And so I think we’re going to see a pretty substantial change.
0:09:54 And as you touched on, this new specialty will look different than the way it is today.
0:09:58 In that case, it almost seems like everybody is better off.
0:10:01 The doctor is better off because the pathologist is not just looking at slides, but actually
0:10:02 is dealing with patients.
0:10:08 And presumably the patient’s better off because you have these false negatives get found.
0:10:12 The pathologists, you know, have remarkable discordance when they look at slides, and to
0:10:18 be able to have that basically looked at as if hundreds of thousands of them were reviewed,
0:10:24 getting back to your second, third and nth opinion, and to get that as input for them to help
0:10:26 and consult with a patient,
0:10:28 I think is really a bonus.
0:10:31 You’re talking about radiology here and pathology, but I mean, we could sort of think about this
0:10:33 as an issue for diagnosis in general.
0:10:36 And you know, there’s one thing you point out in the book, I think really beautifully, you
0:10:40 know, you said once trained, doctors are pretty much wedged into their level of diagnostic
0:10:42 performance throughout their career.
0:10:46 That’s kind of an amazing thing, because doctors, I guess, go through CME and so on.
0:10:49 But like, you only go through med school once and that’s an intense process.
0:10:52 You learn a lot, but you can’t go through med school all the time.
0:10:53 No, it’s so true.
0:10:58 And that gets me to, you know, Danny Kahneman’s book, Thinking, Fast and Slow, and
0:11:05 System 1, which is the reflexive thinking that happens automatically, versus what we
0:11:11 want, which is reflective thinking, System 2, which takes time.
0:11:17 And it turns out that if a doctor doesn’t think of the diagnosis of a patient in the
0:11:21 first five minutes, there’s over a 70% error rate.
0:11:25 And actually, that’s about how much time the average visit with a patient is.
0:11:31 So we have the problem as you’ve alluded to of kind of a plateauing early on in a career,
0:11:35 but we also suffer because of lack of time with the system one thinking.
0:11:40 The thought is that the machine learning is reflecting what System 2 would look like
0:11:44 because it’s trained from doctors sort of doing System 2.
0:11:45 Exactly.
0:11:53 That brings in the integration of what would be the ground truths of thousands of cases for that
0:11:55 particular dataset.
0:11:57 So I think it has a potential.
0:12:03 And of course, a lot of this stuff needs validation, but there are a lot of promising studies to
0:12:05 date that suggest that’s going to be very possible.
0:12:11 When I think about what ML or artificial intelligence could do, there are like two axes.
0:12:17 There’s one like just scale, like the fact that you can scale up servers on Amazon trivially
0:12:19 much more than you could scale up human beings.
0:12:22 We could scale up a thousand servers right now or 10,000 servers right now.
0:12:25 I don’t think we could call 10,000 doctors and get them here.
0:12:29 The other axis you could sort of talk about is sort
0:12:31 of intelligence or capability.
0:12:36 And you’re talking about both, in a sense: you can scale up not just the fact that you
0:12:42 could have like residents or med school students sort of doing things and have a lot of them,
0:12:48 but in addition to that, you have in a sense a doctor through AI whose diagnostics
0:12:50 are better than any single doctor’s.
0:12:51 Right.
0:12:55 I think that’s really what is going to be one of the big early changes in this new AI
0:13:00 medical era is that diagnosis is going to get so much better.
0:13:05 Right now we have over 12 million serious errors a year in the United States.
0:13:09 And they’re not just costly, but they hurt people.
0:13:14 So this is a real opportunity to upgrade that and that’s a much more of a significant problem
0:13:16 than most people realize.
0:13:20 And so far we’ve been talking about things that feel like the sci-fi fantasy version of
0:13:21 stuff, right?
0:13:26 I mean like because we’ve got like this doctor of sorts through diagnosis that can do what
0:13:30 no single doctor could do, presumably at scale it’s doing this at lower cost.
0:13:35 It’s allowing human beings to do the things they should be doing.
0:13:37 Is there any dystopian turn here, or how does this go?
0:13:39 There’s no shortage of those.
0:13:42 And so how could this go wrong and what can we do to prevent it?
0:13:47 Well, I mean I think one of the things we’ve harped on is that you’ve got to have human
0:13:48 oversight.
0:13:54 We can’t trust an algorithm absolutely for any serious matter, because if we do that
0:13:59 and it has a glitch or it’s been hacked or it has some kind of adversarial input, it
0:14:01 could hurt people at scale.
0:14:03 So that’s one of the things that we got to keep an eye on.
0:14:11 For example, if an algorithm gets approved by the FDA, oftentimes these days it’s
0:14:14 an in silico, retrospective sort of thing.
0:14:20 And if we just trust that without seeing how it performs in a particular venue, a particular
0:14:24 cohort of people, these are things that we just shouldn’t accept blindly.
0:14:30 So there’s lots of deep liabilities and it runs from of course privacy, security, the
0:14:31 ethics.
0:14:37 There are many aspects that are not ideal about this, but when you think about the need
0:14:41 and how much it could provide to help medicine, I think those are the trade-offs that we have
0:14:43 to really consider.
0:14:47 What should we be doing now, and what could people be doing now, to sort of anticipate this?
0:14:48 Or do you think it’s like too early?
0:14:51 I mean, because people have these algorithms now.
0:14:54 What do we see in one year versus five years versus 10 years?
0:14:57 Well, it is rolling out in other parts of the world.
0:15:03 I just finished this review with the NHS and that was fascinating because they are really
0:15:04 going after this.
0:15:10 They are the leading force in the world in genomics, and now they want to be in AI.
0:15:15 So they already have emergency rooms that are liberated from keyboards,
0:15:18 and they are going after this.
0:15:23 And this is of course in the middle of Brexit, so that’s kind of amazing.
0:15:27 But China is really implementing this, and you could say, well, maybe too fast, because it’s
0:15:32 out of desperation or need, but one of the advantages that we don’t recognize with China is
0:15:37 not just that they have scale in terms of people, but that they have all the data for each
0:15:38 person.
0:15:40 And we have no data for each person.
0:15:46 Basically, our data is just spread around all these different doctors and health systems.
0:15:47 Nobody has all their data.
0:15:52 And that is a big problem, because you don’t have inputs that are complete.
0:15:53 For like you personally.
0:15:54 Yeah.
0:15:55 Yeah.
0:15:56 Then what are you going to get out of that?
0:16:00 So we are at a handicapped position in this country.
0:16:05 And the other thing of course is we have no strategy as a nation, whereas China, the UK and
0:16:10 many other countries are developing or have developed planning and strategy and
0:16:13 put in resources. Here as a nation,
0:16:14 we have zero resources.
0:16:19 In fact, we have proposed cuts to the same, you know, granting agencies that would potentially
0:16:20 help.
0:16:23 And so what should one do at that scale?
0:16:27 Like, you know, there are various things people propose, you know, is this something to have
0:16:31 a new national institute of health, you know, in this area?
0:16:35 I mean, when I think about the government playing a role, I think I want
0:16:40 them to try to help build the marketplace and set the rules, but we have to be careful
0:16:42 that we don’t put in too much regulation as well.
0:16:46 I mean, when you say we don’t have a strategy,
0:16:47 what’s missing?
0:16:48 What should we be doing?
0:16:52 We have no national planning or strategies.
0:17:00 How is AI not only for healthcare, but in general, how is it going to be cultivated and made
0:17:02 transformative?
0:17:07 The experience I had in the UK was really interesting because there they not only have the will,
0:17:12 but they have a whole wing of the NHS for education and training.
0:17:13 You just think about it.
0:17:20 They talked about professions within medicine that are going to have more of their daily
0:17:21 function change.
0:17:25 So, we’re not well prepared, you know, in who should take the lead on this.
0:17:29 One of the problems we have that you’re touching on is our professional organizations haven’t
0:17:31 really been so forward thinking.
0:17:36 They mainly are centered on maintaining reimbursement for their constituents.
0:17:43 The, you know, entities like NIH and NSF and others could certainly be part of the solution.
0:17:47 What you want to do here, I think, is to really accelerate this.
0:17:52 We’re in the middle of an economic crisis in healthcare, in which the US is the worst
0:17:53 outlier.
0:18:00 I mean, we’re spending over $11,000 per person and we have the worst outcomes: life expectancy
0:18:06 going down three years in a row, childhood mortality, infant mortality, maternal mortality,
0:18:09 the worst. People don’t realize that.
0:18:14 Then you have the UK and so many other countries that are at the $4,000 per year level and
0:18:16 they have outcomes that are far superior.
0:18:21 So, if we use this, we could actually reduce inequities.
0:18:27 We could make for a far better business model, paradoxically, but we’re not grabbing the
0:18:28 opportunity.
0:18:31 Maybe there’s another solution we could think about, which you also point to in the book,
0:18:36 which is: what can we do to drive this through consumer action?
0:18:39 For instance, a lot of our healthcare is sick care, right?
0:18:40 What happens when we get sick?
0:18:41 That’s almost all of it.
0:18:44 What about, what can we do to stay healthy?
0:18:47 First thing I think of is diet and lifestyle, right?
0:18:50 That could go a long way in so many diseases, so many things that we deal with.
0:18:52 So, actually, you touch on diet.
0:18:56 Before we even talk about like diagnosing whether you have cancer, should we be diagnosing
0:18:57 what you should be having for lunch?
0:19:01 I couldn’t agree more that that should be a direction.
0:19:09 We have had such a naive notion that everyone should have the same diet, and we never got
0:19:16 that right as a country, but now we know without any question that people have an individualized
0:19:19 and highly heterogeneous response.
0:19:21 That’s not just through glucose spikes.
0:19:26 If you and I ate the exact same food, the exact same amount, the exact same time, our
0:19:30 glucose response would be very different, but also triglyceride response would be different,
0:19:32 and they don’t track together.
0:19:37 So, what we’re learning is if you get all this multimodal data, not just your gut microbiome
0:19:43 and sensor data and your sleep and your activity, your stress level, and what exactly you eat
0:19:47 and drink, we can figure out what would be promoting your health.
0:19:50 We’re not there yet, but we’re seeing some pretty rapid progress.
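As a rough sketch of what figuring out "what would be promoting your health" from multimodal data can look like computationally, here is a toy model that predicts an individual's post-meal glucose rise from a handful of features. The feature set and the data here are synthetic placeholders chosen for illustration only; the published personalized-nutrition work used far richer inputs (full microbiome profiles, CGM traces, sleep, activity) and gradient-boosted trees at much larger scale.

```python
# Toy sketch of personalized glucose-response prediction from multimodal features.
# All features and data below are synthetic; real studies use far richer inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical per-meal features: carbs (g), fiber (g), hours slept,
# activity (thousands of steps), and a crude gut-microbiome diversity index.
X = np.column_stack([
    rng.uniform(10, 120, n),   # carbs
    rng.uniform(0, 15, n),     # fiber
    rng.uniform(4, 9, n),      # sleep
    rng.uniform(1, 15, n),     # activity
    rng.uniform(0.2, 1.0, n),  # microbiome diversity
])

# Synthetic "true" response: carbs raise the spike; fiber, sleep, activity, and
# diversity blunt it, plus noise. Purely to give the model something to fit.
y = (0.9 * X[:, 0] - 3.0 * X[:, 1] - 4.0 * X[:, 2]
     - 1.5 * X[:, 3] - 20.0 * X[:, 4] + rng.normal(0, 8, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 2))

# Predict one person's spike for a specific meal and context.
meal = np.array([[75, 2, 6.5, 3, 0.4]])
print("predicted glucose rise (mg/dL):", round(model.predict(meal)[0], 1))
```

The design point is simply that the same meal produces different predictions for different people, because the person's own context enters the feature vector.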
0:19:55 What’s intriguing to me is that there are cases now, especially, let’s say,
0:20:00 just glucose, where you can take technology developed for type 1 or type 2 diabetics.
0:20:05 Now, I’m not diabetic, but I actually had the sort of, I was about to say joy, but at
0:20:09 least the intellectual intrigue of having a CGM on me for two weeks.
0:20:10 Yeah.
0:20:11 So, I got to run all these experiments.
0:20:12 Right.
0:20:13 Right?
0:20:14 So, I tried white rice versus brown rice.
0:20:15 Yeah.
0:20:16 How’s ice cream?
0:20:20 Wine versus scotch, all the important questions one has to figure out.
0:20:21 Exactly.
0:20:26 And it was actually a surprise to me how, for instance, I did not spike
0:20:27 on ice cream.
0:20:28 I spiked on brown rice.
0:20:29 Yeah.
0:20:32 I don’t think I’m prepared to go on the ice cream diet just yet.
0:20:33 Yeah.
0:20:34 And I don’t think you would prescribe that either, right?
0:20:37 But I think the idea is that it’s just different for everybody, right?
0:20:39 So, maybe you spike on ice cream, I don’t.
0:20:45 And, you know, what I think has been kind of so annoying about nutrition is that we
0:20:48 hear all these conflicting things, but perhaps part of the reason why we’re hearing these
0:20:50 conflicting things is that it is so individual.
0:20:51 Exactly.
0:20:56 And that it’s so complicated and such a fundamental data science problem that it probably takes
0:20:57 something like machine learning to figure it out.
0:20:59 Well, I think that’s central.
0:21:02 If we didn’t have machine learning, we wouldn’t have known this.
0:21:06 And only, you know, thanks to the group at the Weizmann Institute in Israel, they cracked
0:21:07 the case on this.
0:21:09 So, Eran Segal’s work?
0:21:10 Yeah.
0:21:11 Eran Segal.
0:21:15 And now, it’s been replicated by many others and it’s being extended.
0:21:17 What would be promoting your health?
0:21:22 And right now, it’s, you know, these proxy metrics like your glucose or your lipids in
0:21:30 the blood, but eventually we’ll see how outcomes and prevention can be fostered by your diet.
0:21:31 It’s really kind of mind-blowing
0:21:34 how difficult a data science problem nutrition seems to be.
0:21:35 Yeah.
0:21:39 The problem, Vijay, is at a number of levels, and it’s the sea of data.
0:21:44 I mean, we’re talking about terabytes of data to crack the case for each individual.
0:21:48 So it’s not even just your gut microbiome in terms of the species of bacteria and their density,
0:21:54 but now we know it’s the sequence of those bacteria that’s part of the story.
0:21:59 Then you have, of course, these continuous glucose readings every five minutes for a couple of
0:22:00 weeks.
0:22:01 That’s a lot of data.
0:22:09 Besides that, you’ve got, you know, all your physical activity, your sensors for stress,
0:22:15 you know, your sleep data, and then even your genomics.
0:22:20 So when you add all this together, this is a real challenge.
0:22:23 No human being could assimilate all this data.
0:22:28 But what’s interesting is not only at the individual level, but then with thousands of people.
0:22:33 So take everything we just talked about, multiply by thousand or hundreds of thousands.
0:22:35 That’s how we learn here.
0:22:41 And so what I think is the biggest underappreciated thing about AI is the things that
0:22:43 we’re going to learn that we didn’t know.
0:22:49 Like, for example, another great example is when you give a picture of a retina to an international
0:22:55 retina expert, and you say, is this from a man or a woman, the chance of them getting it
0:22:56 right is 50/50.
0:23:02 But you can train an algorithm to be over 97, 98% accurate.
0:23:06 And there’s so many examples like that, like you wouldn’t miss polyps in a colonoscopy,
0:23:08 which is a big issue.
0:23:13 Or you would be able to see your potassium level in your blood through your smartwatch,
0:23:14 without any blood draw.
0:23:20 And then the imagination just runs wild, as far as what you could do when you train things.
0:23:27 And so training your diet with this torrent of data, not just from you, but from a population
0:23:29 is, I think, a realistic direction.
0:23:33 And what I think is interesting about this is that it’s something where, A, we don’t
0:23:39 need the AMA or NIH or anything else to get involved in terms of diet.
0:23:43 And B, actually, people want to take care of these problems, because I think most people
0:23:44 are motivated.
0:23:45 We just don’t know what to do.
0:23:46 Right.
0:23:52 And so many aspects of it, like now chronobiology is really this hot topic.
0:23:54 That’s about your circadian rhythm.
0:23:57 And should you eat only for eight hours during the day?
0:23:58 Well, certain people, yes.
0:24:03 But the whole idea that there’s this thing for everyone, we got to get over that.
0:24:08 That’s what deep phenotyping is all about, to learn about the medical health essence
0:24:09 of you.
0:24:13 And we haven’t had the tools until now to do that.
0:24:14 OK.
0:24:17 So there’s a ton of data, but a lot of it seems kind of subjective.
0:24:18 Right?
0:24:20 I mean, did I sleep well or not?
0:24:24 How do you sort of overcome the fact that not everything is quantitative, like my cholesterol
0:24:25 level?
0:24:30 Well, it turns out that was kind of old medicine where we just talked about your symptoms.
0:24:34 But new medicine is with all sorts of objective metrics.
0:24:39 So a great example of this is state of mind or mood.
0:24:43 And that’s going to be transformative for mental health, because now everything from
0:24:52 how you type on your smartphone to the voice, which is so rich in terms of tone and intonation,
0:24:57 to your breathing pattern, to the facial recognition of yourself.
0:25:02 I mean, there’s all these ways to say, you know, Vijay, you’re really depressed.
0:25:03 Yeah.
0:25:04 You know that you’re depressed.
0:25:12 So the point is that you have objective metrics of one’s mental health. As a cardiologist
0:25:13 for all these years,
0:25:18 I’d have these patients come and tell me, I feel my heart’s fluttering.
0:25:20 And I would put in the note, the heart’s fluttering.
0:25:22 That was so unhelpful.
0:25:26 Now I can say, well, you know, you should be able to record this on your phone or if
0:25:33 you have a smartwatch and when your heart flutters, just send me the PDF of that.
0:25:38 And we have the diagnosis that is real world, no longer subjective.
0:25:39 It’s a whole different look, really.
0:25:45 And by the way, when the patient who has the fluttering records their cardiogram,
0:25:47 they don’t have to wait for me.
0:25:53 They already have an automated read from AI that’s more accurate than a doctor.
0:25:55 There is something very anecdotal about the doctor visit.
0:25:57 It’s not right there in the moment.
0:25:58 Yeah.
0:25:59 Right.
0:26:00 It’s a one off.
0:26:01 Yeah.
0:26:02 It’s a one off.
0:26:05 And so it’s a funny thing, because people wonder about, let’s say, the knock on a wearable will
0:26:08 be that it’s not like an eight-point EKG or something like that.
0:26:10 But on the other hand, it’s there with you all the time.
0:26:11 Yeah.
0:26:12 No, exactly.
0:26:16 And then there’s this contrived aspect of going to see the doctor, where a lot of
0:26:19 people find that very stressful.
0:26:23 And when we talk about white coat hypertension, we don’t even know what normal blood pressure
0:26:28 is because we need to check that out in thousands, hundreds of thousands of people in their real
0:26:30 world to find out what’s normal.
0:26:36 We’ve already had this chaos of the American Heart Association saying that they changed
0:26:42 the blood pressure guidelines on the basis of no data, speaking of a lack of some objective
0:26:43 metric.
0:26:44 Yeah.
0:26:47 Well, so one other area that I thought was really intriguing and just to me, this was
0:26:52 almost paradoxical, the concept of AI being useful for empathy.
0:26:54 Because I would have thought like, if we’re thinking about the things that a computer
0:26:58 is good at, like multiplying numbers, that’s going to be something like they’re going to
0:26:59 beat humans at any day.
0:27:04 I would have thought that empathy would be the one, like the last bastion of what we’re
0:27:06 good at versus what the computer is good at.
0:27:08 But how does AI get to empathy?
0:27:12 Because we started the conversation talking about how that’s a key part of what a doctor
0:27:15 does, so we’re like, what can AI do there?
0:27:19 Well, we are missing that in a big way today.
0:27:21 And how do we get it back?
0:27:27 Well, I think how we get it back is we take this deep phenotyping, we do deep learning
0:27:35 about the person, and that’s all outsourced, with oversight from a doctor or clinician.
0:27:42 Now, when you have this remarkable improvement in productivity in workflow and efficiency
0:27:46 and accuracy, all of a sudden, you have the gift of time.
0:27:54 If we just lie down, as doctors have over decades, for administrators to go ahead and
0:28:03 just drive revenue and basically have no consideration for patients or doctors, we’re not going to
0:28:05 see any growth of empathy.
0:28:09 We’re not going to see the restoration of care in healthcare.
0:28:15 But if we stand up and we say that time, all that benefit of the AI part, the machine
0:28:19 support, should go back to patients, and by the way, that’s also at the patient level.
0:28:23 So the patients now, with their algorithmic support, they’re decompressing the doctor
0:28:24 load too.
0:28:25 Yeah, they’re doing some of it.
0:28:29 A lot of simple things, ear infections, skin rashes and all that sort of stuff that’s not
0:28:35 life threatening or serious, but that’s bypassing a doctor potentially almost completely.
0:28:43 So between this flywheel of algorithmic performance enhancement, if we stand up for patients, then
0:28:45 we have all this time to give back.
0:28:51 Once we have time to give back, then we tap into why humans went into the medical profession
0:28:52 in the first place.
0:28:57 And the reason was because they wanted to care for their fellow human being, but they lost
0:28:58 their way.
0:29:03 And now we have the peak burnout and depression and suicide in the history of the medical
0:29:04 profession.
0:29:08 And by the way, not just in the US, in many parts of the world, and how are we going to
0:29:10 get that back?
0:29:15 Because it turns out, if you have a burned-out doctor, you have a doubling of errors, and it’s
0:29:20 a vicious cycle: you have errors, and they get more burned out, more depressed.
0:29:21 So we have to break that up.
0:29:29 And I think if we can get people to where there’s time together, and that real reason, the
0:29:34 mission of healthcare, is brought back, we can do this.
0:29:38 It’s going to take a lot of activism, it’s not going to be easy, and it’s going to take
0:29:39 a while.
0:29:42 But if we don’t start planning for this now, it’s not going to happen.
0:29:45 How do you think that changes how you become a doctor?
0:29:50 I mean, getting into med school and all the training is really difficult.
0:29:53 What does the future of medical education look like?
0:29:59 Right now, a pre-med degree is a lot of biology and chemistry, not too much effort in psychology
0:30:06 or empathy or in statistics or in machine learning.
0:30:07 What does that look like in the future?
0:30:10 I think we’re missing the mark there.
0:30:18 We continue to cultivate Brainiacs, who have the highest MCAT scores and grade point averages,
0:30:22 and who are oftentimes relatively low on emotional intelligence.
0:30:24 So we have tilted things.
0:30:25 We want to go the other way.
0:30:30 We want to emphasize who are the people who have the highest interpersonal skills, communicative
0:30:35 abilities, and who really are the natural empathetic people.
0:30:40 Because a lot of that Brainiac work is going to be machine generated.
0:30:45 And so it’s something where we should all start to lean in that direction.
0:30:46 Yeah.
0:30:49 Now, it’s intriguing because I think there’s a chicken and egg problem here, because I think
0:30:51 first this has to be put in place.
0:30:55 Often with these big changes, there will be resistance.
0:30:56 Who’s going to be fighting this?
0:31:00 The resistance we have to anticipate is going to be profound.
0:31:08 One of the problems is that the medical profession, it may not be ossified, but it’s very difficult
0:31:09 to change.
0:31:16 The only changes that have occurred rapidly, like the adoption of robotics and surgery,
0:31:19 were because it enhanced revenue.
0:31:21 None of these things are going to enhance revenue.
0:31:24 They’re actually going to potentially be a hit.
0:31:27 We have all these interests that this is going to challenge.
0:31:33 But for example, we could get remote monitoring of everyone in their home instead of being
0:31:38 in a hospital room, unless they were needing an intensive care unit.
0:31:42 Now, do you think hospitals are going to allow that to happen?
0:31:47 Because they could be gutted, and then they won’t know what to do with all their facilities.
0:31:50 So the American Hospital Association is not going to like this.
0:31:51 So I’ll be delusional here.
0:31:53 They’re not going to like it revenue-wise.
0:31:58 Would they say that would put patients in danger, because obviously at home you don’t
0:31:59 have what a hospital has?
0:32:00 Well, you know what the interesting thing is?
0:32:03 I don’t know if you can get in more danger than going into our hospitals.
0:32:05 One in four people are harmed.
0:32:07 Yeah, you mean sepsis and infection.
0:32:10 The main thing is nosocomial infections from the hospital.
0:32:15 But also other medication errors and other things. And then the comfort of your own home.
0:32:19 You can actually sleep, you’d be with your loved ones, the convenience.
0:32:22 But most importantly, just think of the difference in expense.
0:32:29 You could buy years of a broadband data plan for the cost of one night in the hospital, which is $5,000
0:32:30 on average.
0:32:31 It’s amazing.
0:32:37 We have the tools to do that now, but you’re not seeing it being seriously undertaken because
0:32:38 of the conflicts.
0:32:43 So if you think about how all this has to actually happen, we talked about what’s possible.
0:32:48 But if you get to nuts and bolts, it’s interesting to think who’s going to do it.
0:32:53 Because if you take just a pure data scientist who doesn’t understand the medicine.
0:32:58 I don’t know if that would be enough, but also I don’t know if you could take a doctor
0:33:01 that doesn’t understand data science.
0:33:06 And so is it going to be teams, commingled groups, that get this together?
0:33:09 Because there will be iterations between the data science and the biology and the clinical
0:33:14 aspects that have to come one after the other to be able to make these advances.
0:33:19 We need machines and people to get the best of both worlds.
0:33:28 So in the book, there’s that example of how we cracked the potassium case between Mayo Clinic cardiologists
0:33:30 and an AliveCor data scientist.
0:33:36 And what was amazing about that experience to review with them was that the cardiologists
0:33:40 thought you should only look at one part of the cardiogram, which is historically known as
0:33:45 the so-called QT interval, because it was known to have something to do with potassium.
0:33:51 But when that flunked and the algorithm was a farce, the data scientist said, well, why
0:33:52 are you so biased?
0:33:54 Why don’t we just look at the entire cardiogram?
0:33:58 And by the way, Mayo, you only gave us a few million cardiograms.
0:34:01 And why don’t you give us all the cardiograms?
0:34:02 So then they nailed it.
0:34:08 So the whole idea is that the biases that we have are profound.
0:34:15 But when you start de-biasing both the data scientist and the doctors, the medical people,
0:34:18 then you start to get a really great result.
0:34:23 One of the scariest stories I saw was that this algorithm was getting cancer versus no cancer
0:34:29 right with crazy high accuracy, like an AUC of like 1.0, like never making a mistake.
0:34:33 And it turned out that there was some subtle difference between like a high Tesla magnet
0:34:39 and a low Tesla magnet, and that the patients who were very sick to start off with were
0:34:42 always getting one type of scan, and that a human being couldn’t tell the difference,
0:34:47 but that the machine was picking up some signal, not of whether it was cancer, no cancer,
0:34:52 but whether they were getting like the fancy measurement or the less complicated one.
0:34:56 Or another great example, there’s a classic example where they were, I think, predicting
0:35:01 tumors, and they had rulers for the size of the tumor in all the tumor images.
0:35:03 And so really ML was a great ruler detector.
0:35:04 Yeah.
0:35:10 The whole idea that as a pathologist we can’t see the driver mutation in a slide, but you
0:35:13 could actually train the algorithms to.
0:35:17 So when the pathologist is looking at it, it’s already giving you what is the most likely
0:35:19 driver mutation.
0:35:20 It’s incredible.
0:35:27 And that does get me to touch on the deep science side of this, which we aren’t recognizing
0:35:31 is way ahead of the medical side, the ability to upend the microscope.
0:35:35 You don’t have to use fluorescence or H and E. You just train.
0:35:37 So you forget staining.
0:35:43 The idea that it used to be hard to find rare cells, now you just train the algorithms to find
0:35:44 the rare cells.
0:35:50 I mean, we’re seeing some things in science, no less in drug discovery, in processing cancer
0:35:57 and sequencing data and certainly in neuroscience, it’s a real quiet revolution that’s much
0:36:01 further ahead than on the medical side because there’s no regulatory hurdles.
0:36:05 And you make a good point, because I think it’s tempting to just try to do better what the human
0:36:08 can do, or to do as well what the human can do now.
0:36:12 But now you’re talking about doing things that no human being could do.
0:36:13 Yeah.
0:36:17 Imaging plus genomics, where the genomics readout, let’s say, or whatever the blood assay
0:36:18 is, is the gold standard.
0:36:21 I don’t want to predict what the pathologist would say.
0:36:25 I want to predict the biopsy, I want to predict the blood or whatever the true gold standard
0:36:26 is behind it.
0:36:27 Right.
0:36:30 And if you’re training on the best labels, you can do things that no human being could
0:36:31 do.
0:36:36 Well, you know, this may be the most important point: we have to start having imagination
0:36:44 because we don’t even have any idea of the limitless things that we could teach machines.
0:36:46 Because I’m getting stunned almost on a weekly basis.
0:36:49 I say, I never would have thought of that.
0:36:52 And so just fast forward, here we are in 2019.
0:36:53 What’s it going to be like?
0:36:56 You know, a few years of all the things, like when the Mayo Clinic told me they could look
0:37:01 at a 12-lead cardiogram for millions and be able to say this person’s going to get
0:37:06 atrial fibrillation in their life with X percent probability, I said, really?
0:37:07 And they’ve done it.
0:37:10 And so I never would have expected that.
0:37:11 Yeah.
0:37:16 That’s a really fun point, because you could think of it two ways: either human
0:37:21 beings aren’t being imaginative enough, or, what does imagination mean for an algorithm?
0:37:22 Right.
0:37:28 Well, until we get into heavy unsupervised learning, we’re a bit limited by the annotation
0:37:31 and the ground truth, going back to that.
0:37:35 You can only imagine things when you have those for supervised learning.
0:37:40 But you know, as we go forward, we’ll have more of those data sets to work with and we’ll
0:37:47 be better at going forward with federated data sets and unsupervised learning.
0:37:50 So the opportunities going forward are pretty enthralling.
0:37:51 Yeah.
0:37:54 Well, the unsupervised learning is interesting because you can finally just, you know, and
0:37:57 for those who aren’t familiar with the term, it’s kind of like trying to find the clusters
0:38:01 to sort of not have the labels, but to see the lay of the land.
0:38:04 And that’s interesting because no human being can sort of, especially in high-dimensional
0:38:07 space, like visualize that and see that.
0:38:08 And so that’s one thing.
0:38:13 The second thing is that if you just throw all of the data in and maybe have the algorithm
0:38:18 make sure that it’s not overfitting, that it’s not trying to find an overly complicated story
0:38:23 almost like, you know, these conspiracy theories are like human beings overfitting for the
0:38:26 moon landing being a hoax or something like that when there’s a simpler explanation for
0:38:27 things.
0:38:30 If you keep it to a simple explanation, the computer can try everything.
0:38:31 Yeah.
0:38:32 Yeah.
0:38:33 So like you talked about, it could look at the whole cardiogram.
0:38:39 We could look at things that we don’t look at, because either we’re expert enough to
0:38:42 know that it couldn’t possibly be right, even if it is.
0:38:43 Or we just don’t have the time.
0:38:48 It reminds me, sometimes these algorithms are almost like children, in that kids just don’t know,
0:38:49 so they’ll try things.
0:38:50 Yeah.
0:38:53 And that’s where imagination and creativity often comes from.
0:38:55 I couldn’t agree with you more.
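For readers who want the unsupervised-learning idea just described in concrete form, here is a small sketch: no labels, just letting a clustering algorithm find the "lay of the land" in data with more dimensions than a person could eyeball, and preferring the simplest cluster count that scores well rather than an overly complicated story. The data are synthetic and the setup (three hidden groups, 50 dimensions) is purely an assumption of this toy example.

```python
# Toy sketch of unsupervised clustering: no labels, just structure-finding,
# with a simple score used to avoid an overly complicated (overfit) answer.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)

# Synthetic 50-dimensional data drawn from three hidden groups.
centers = rng.normal(0, 5, size=(3, 50))
X = np.vstack([center + rng.normal(0, 1, size=(200, 50)) for center in centers])

# Try increasing numbers of clusters and keep the one that scores best.
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print("silhouette by k:", {k: round(v, 2) for k, v in scores.items()})
print("chosen number of clusters:", best_k)  # typically 3 for this synthetic data
```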
0:38:58 So we’ve been spending a lot of time talking about diagnosis, but prediction is another
0:39:00 thing that is really important.
0:39:03 I’d call that a real soft spot in AI.
0:39:09 And I told the story of my father-in-law, who kind of was my adopted father, in the
0:39:13 book, about how he was on death’s door.
0:39:19 He was about to come to our house to die and he was resurrected.
0:39:23 But any algorithm would have said he was a goner.
0:39:29 And so the idea that at the individual level, you could predict accurately whether it’s
0:39:33 end of life or when you’re going to die or in the hospital.
0:39:37 This is how long you’re going to stay or you’re going to be readmitted, all these things.
0:39:38 We’re not so good at that.
0:39:45 We can have a general sense from a population level, but so far prediction hasn’t really
0:39:51 panned out nearly as well as classification, diagnosis, triage, that kind of stuff.
0:39:57 And I still think that that’s one of the shakier parts, because then you’re going to tell a
0:40:01 person about a prediction, and we’re not very good at that.
0:40:06 When we talk to people with cancer and we tell them their prognosis, it’s all over
0:40:08 the place in reality.
0:40:13 And so the question is, are algorithms really going to do better or are they just going
0:40:17 to give us a little more precision, maybe not much?
0:40:21 Is there enough information to ever predict anything like that?
0:40:25 Well, that’s a part of the problem too is that the studies that have been done to date,
0:40:30 things like predicting Alzheimer’s, predicting all sorts of outcomes you can imagine, they’re
0:40:36 not with complete data, they’re just taking what you can get, like what’s in one electronic
0:40:41 health record, one system, rather than everything about that person.
0:40:43 So maybe it will get better when we fill in the holes.
0:40:47 I always think about what would be the interesting challenges to work on.
0:40:48 That’s like one of the most interesting ones.
0:40:54 I think it is because there you could improve the efficiency if you knew who are the people
0:40:58 at the highest risk and for whom you want to change the natural history, or what the algorithm
0:41:01 is predicting, if it’s something that’s an adverse outcome.
0:41:06 So eventually we’ll probably get there, but it isn’t nearly as refined as the other areas.
0:41:11 But if you combine all these things together, this thing where you’re monitoring your body
0:41:16 every five minutes and your diet and your exercise and your drugs and you have all this
0:41:19 longitudinal data, that’s something that no one’s ever had before.
0:41:27 Yeah, well, you’re bringing up a big hole in the story, which is multimodal data processing.
0:41:29 We are not doing it yet.
0:41:35 Like a perfect example is like in diabetes, people have a glucose sensor and the only
0:41:40 algorithm they have tells them if the glucose is going up or down, that’s pretty dumb.
0:41:45 Why isn’t it factoring in everything they eat and drink and their sleep and activity
0:41:47 and the whole works?
0:41:51 Some day we’ll have multimodal algorithms, but we’re not there yet.
0:41:55 Well, so let’s go back to where we started, you know, a visit to the doctor in the future.
0:42:01 And like the good news is that the doctor doesn’t have to do any of the typing or recording,
0:42:06 AI is sort of figuring out the diagnosis, and the doctor has all the time now to actually
0:42:09 be empathetic and communicate, which is great.
0:42:11 But is that now all that’s left?
0:42:15 No, no, not at all, because human touch.
0:42:18 So when you go to see a doctor, you want to be touched.
0:42:21 That’s the exam part of this.
0:42:28 People, when they get examined for their heart and you don’t even take off their shirt, they
0:42:30 know, they know there’s a shortcut going on.
0:42:31 Yeah, that’s interesting.
0:42:41 They want a thorough exam because they know that that’s part of the real experience.
0:42:44 And so what we’re talking about is the exam may change.
0:42:46 Like, you know, for example, I don’t use a stethoscope.
0:42:50 I use a smartphone ultrasound and do an echocardiogram.
0:42:54 And I show it to the patient together as we’re doing it in real time, which the person would
0:42:55 never see.
0:42:59 And by the way, they wouldn’t know what love dub looks like, but you sure can see or sounds
0:43:02 like, but you sure can show them.
0:43:09 So the tools of the physical exam may change, but the actual hands-on aspects of it and
0:43:15 the interaction with the person, the patient, that’s the intimacy.
0:43:16 And we’ve lost that too.
0:43:23 You know, the physical exams have really gotten very much a detraction from what they used
0:43:24 to be.
0:43:25 I mean, we need to get back to that.
0:43:28 That’s what people want when they go see a doctor.
0:43:33 And people have deprecated exams because essentially they said they weren’t of value, but it sounds
0:43:36 like what was being done was not the part that needed to be done.
0:43:41 Well, when you’re dealing with analog tools and, you know, they can be so superseded by
0:43:42 the things we have today.
0:43:46 And when you’re sharing them with the patient, so here’s what you have.
0:43:51 And then you send them the video files or the metrics that they can look at, you know,
0:43:54 when they get home and get more familiar with their body.
0:44:00 It’s not only the physical exam that happens instantaneously in the encounter, but the
0:44:05 ability to have that archived data that people can go back to, so they learn about themselves.
0:44:08 That’s all part of that awareness that’s important.
0:44:11 And you know, you talked about back to the future, there might be another sci-fi analogy.
0:44:15 I think there’s some Star Trek episodes like this where actually the group that has the
0:44:18 highest technology is the one where the technology is invisible.
0:44:19 Yeah.
0:44:23 And it sounds like that’s where the, all of this is going to be in the background.
0:44:24 That’s right.
0:44:28 You really are interacting with a person and this person now has just these powers that
0:44:29 they couldn’t have before.
0:44:30 Yeah.
0:44:31 I’m with you all the way.
0:44:32 Well, thank you so much.
0:44:33 This has been fantastic.
0:44:34 I really enjoyed it.
with Eric Topol (@EricTopol) and Vijay Pande (@vijaypande)
Artificial intelligence is coming to the doctor’s office. In this episode, Dr. Eric Topol, cardiologist and chair of innovative medicine at Scripps Research, and a16z’s general partner on the Bio Fund Vijay Pande, have a conversation around Topol’s new book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. What is the impact AI will have on how your doctor engages with you? On the nature of the doctor’s visit as a whole? How will AI impact not just doctor-patient interactions, but diagnosis, prevention, prediction, medical education, and everything in between?
Topol and Pande discuss how AI’s capabilities for deep phenotyping will shift our thinking from population health to understanding the medical health essence of you, how the industry might respond and the challenges in integrating and introducing the technology into today’s system—and ultimately, what the doctor’s visit of the future might look like.