AI transcript
0:00:08 but completely change the world we live in today.
0:00:12 Just think, a hundred years ago, we didn’t have the internet, commercial flight,
0:00:15 or anything that resembles self-driving cars.
0:00:17 But equally in healthcare, a hundred years ago,
0:00:20 we were still decades away from the double helix discovery,
0:00:24 antibiotics, IVF, and so much more.
0:00:28 And perhaps there’s no better industry where we can make these truly monumental shifts
0:00:30 than in healthcare.
0:00:33 And in today’s episode, we explore these audacious grand challenges
0:00:38 over the next hundred years, or hopefully even less, in healthcare AI.
0:00:41 The immediate part is something that is so good,
0:00:43 like not 10% better than what you have now,
0:00:45 but like 10x better than what you have now,
0:00:47 that the adoption becomes natural.
0:00:49 From always on clinical trials.
0:00:54 A million clinical trials are just organically running in my population every day,
0:00:56 and I have no idea how to harness it.
0:00:58 Spot market pricing.
0:01:00 On any given day, at any given hour of the day,
0:01:03 you might have a very different dynamic of supply.
0:01:05 More effective scheduling.
0:01:08 Doctors design their schedule in a very protective, like almost a defensive way,
0:01:11 because they felt wronged by the system.
0:01:13 Continuous monitoring.
0:01:16 How do you augment that with more continuous information,
0:01:21 whether it be self-reported, whether it be remote patient monitoring data.
0:01:25 And even the Holy Grail of all Holy Grails, which is AI doctor.
0:01:27 And how far off that might be.
0:01:31 Joining us today are A16Z Bio and Health General Partners,
0:01:33 Vijay Pande and Julie Yoo.
0:01:37 This episode also originally came from our sister podcast, Raising Health.
0:01:41 So if you do like this, don’t forget to catch more just like it
0:01:43 by searching Raising Health wherever you get your podcasts,
0:01:46 or by clicking the link in our show notes.
0:01:49 All right, let’s get started.
0:01:54 So Vijay, you and I talk about the fact all the time that healthcare is an industry
0:01:58 that used to be intractable with respect to the adoption of technology.
0:02:03 But we are also super optimistic that healthcare is now potentially one of the biggest beneficiaries
0:02:05 of technology in the form of AI.
0:02:09 And so we wanted to have a conversation here today about basically what you and I do on a daily basis,
0:02:15 which is riff about some of the grand challenges that we see for builders in the healthcare AI space.
0:02:19 And so let’s actually start just with your sort of high-level thoughts on
0:02:23 where do you think AI will make the biggest difference in healthcare immediately?
0:02:25 Yeah, the immediate is the least hard part of the question, right?
0:02:28 Because if we talk about the 20-year arc, I think a lot can happen.
0:02:33 So the immediate part is an issue of both technology and also people.
0:02:36 What will people accept? What will people adopt?
0:02:39 And in many ways, I think when you think about the history of technology and healthcare,
0:02:43 who will buy it? Who will incorporate it? How will it work into doctors’ workflows?
0:02:46 So I think the immediate part is something that is so good,
0:02:50 not 10% better than what you have now, but 10x better than what you have now,
0:02:56 that the adoption becomes natural or so easy to adopt that even 10% better could work.
0:02:59 And so when you think about what could be 10x better,
0:03:04 it has to be something where maybe it’s making decisions or it’s helping doctors as a co-pilot
0:03:07 and something that’s like a superpower they didn’t have.
0:03:11 Or if it’s 10% better but easier to adopt, maybe it doesn’t even look like software.
0:03:15 Maybe it looks like staffing or maybe it looks like you’re texting something
0:03:18 and that’s easy to incorporate. And even if that does a little bit,
0:03:21 that could still be important because healthcare works at such great scales.
0:03:25 Right. Yeah, and I think it’s timely in the sense that obviously one of the number one crises
0:03:28 that our healthcare industry is facing right now is a labor crisis
0:03:32 in that we have a shortage of labor to do these kind of highly specialized jobs
0:03:35 that we have, whether it be clinical or whether it be administrative,
0:03:39 but also those individuals who are in those jobs today are extremely burnt out
0:03:42 because of, ironically, the technology burden that we put on them,
0:03:47 whether it be in the form of revenue cycle tasks or EHR workflows and things like that.
0:03:49 So that’s really something that we hear all the time.
0:03:52 I think the other thing that you touched upon is sort of the damn humans
0:03:55 that have gotten in the way in the past, not the technology per se,
0:03:58 but one of the hardest things in healthcare is behavior change
0:04:03 and whether that be on the part of the patient to adopt some sort of new behavior
0:04:06 that helps them get better or in the case of a clinician,
0:04:09 something that sort of changes the way that they do their job.
0:04:13 And I think that’s, to me, one of the biggest opportunities is how do you take things
0:04:17 from the world of behavior change that have been proven in very niche populations
0:04:22 and productize them, package them in a way that can all of a sudden be sort of globally applied
0:04:25 to the broader population that should benefit from it.
0:04:27 And so, I mean, we have a number of examples in our portfolio
0:04:29 that I think we’ll touch on along those lines,
0:04:32 but those are very good and big points to think about right now.
0:04:36 So given what you just described and let’s assume that we all fundamentally believe
0:04:38 in this optimistic view of what AI can do,
0:04:42 what are the use cases for which AI can actually have utility,
0:04:43 you know, in the near term.
0:04:46 And we put forth a sort of a two-by-two, like a good consultant.
0:04:47 Yeah, that’s what we’re supposed to do, right?
0:04:48 Yeah, exactly.
0:04:52 And we said, okay, on the one axis, you have B2B use cases.
0:04:55 So historically, a lot of technology first gets adopted by the people on the inside,
0:04:59 but then on the other side of the axis, you have consumers or patients.
0:05:03 And then the other axis is things that are administrative in nature,
0:05:06 so maybe more back office versus clinical in nature
0:05:09 where you’re actually delivering a clinical service to an end patient.
0:05:11 And you’ve talked about this, the stakes are very different.
0:05:12 Right.
0:05:13 In each of those quadrants.
0:05:16 The area that has had the lowest hanging fruit so far
0:05:20 has been really the administrative B2B side of that equation.
0:05:25 How do you think about sort of the healthcare administration internal facing set of use cases
0:05:28 and let’s talk about, like, what we’ve seen out there that we think has worked?
0:05:30 Well, that one seems like a no-brainer, right?
0:05:32 Because, like, I think what everyone’s worried about
0:05:34 is, like, being a doctor is really hard.
0:05:38 And if you’re having an AI do clinical recommendations or something like that,
0:05:42 we haven’t even figured out where the human goes in the loop and all these things.
0:05:43 And we will figure it out.
0:05:44 And that’s work to do.
0:05:46 If we’re thinking about today, the thing about the back office
0:05:48 is that we’ve got computers there already.
0:05:50 We’ve got algorithms already.
0:05:52 We also have tons of people.
0:05:55 And you can ask yourself, like, why do you have people doing RCM or other tasks?
0:05:58 Are these tasks that actually could be automatable?
0:06:02 And that actually could really make a huge impact on sort of the cost.
0:06:04 But actually, it also maybe changes how we think about this
0:06:08 as we think of the back office as a data problem instead of a staffing problem.
0:06:10 Yeah, that’s actually really interesting.
0:06:13 You said algorithm, and what that makes me think of is the claim.
0:06:16 So if you think about, like, the way that payments flow in health care,
0:06:19 90% of payments in health care are reimbursed revenue
0:06:23 where the provider has to submit literally a claim to the payer
0:06:23 that is effectively an algorithm in many ways.
0:06:28 You could think of, like, the claim as a unit of a piece of logic
0:06:30 that needs to be interpreted.
0:06:32 Yeah, I don’t know if anyone’s ever told me that.
0:06:33 That’s actually kind of interesting.
0:06:37 But then, right now, the way that it’s processed is, like, a very serialized workflow
0:06:40 where, first, you have to interpret, OK, what kind of claim is this?
0:06:43 Is it a professional claim? Is it an inpatient or outpatient claim?
0:06:45 And then, which payer product is it?
0:06:48 There’s, like, thousands of payer products in any given market.
0:06:50 So which one do we bump that up against?
0:06:53 And then, within that plan product, you have a whole bunch of rules about,
0:06:56 you know, under what circumstances should this kind of claim be a valid one, etc.
0:06:59 So, anyways, you have this whole value chain of decision-making.
0:07:01 Oftentimes, you have to bring a nurse into that
0:07:03 because there might be some clinical judgment.
0:07:05 That’s where, like, prior authorization comes into play.
0:07:09 So you can almost imagine a world in which, like, what if we were able to eliminate the claim
0:07:13 and basically say, because we have all this data, as you alluded to,
0:07:19 as well as the automation of the set of decisions that need to be made around that atom of data,
0:07:22 that you could actually just eliminate that entire end-to-end process
0:07:24 and just have real-time payments.
0:07:28 So that, to me, is basically you could eliminate 30% of the waste in our system
0:07:29 if you were able to do that.
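To make the "claim as an algorithm" idea above concrete, here is a minimal Python sketch of the serialized decision chain Julie describes: classify the claim, look up the payer product, then run that product's rules. Every claim type, payer product, rule, and rate below is hypothetical, and real adjudication involves far more logic (including the clinical review and prior authorization mentioned above).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Claim:
    # Hypothetical fields; real claims (e.g., X12 837 transactions) carry far more detail.
    claim_type: str      # "professional", "inpatient", or "outpatient"
    payer_product: str   # which plan product the member is enrolled in
    cpt_code: str        # billed procedure code
    billed_amount: float

# Each payer product maps to a set of rules: claim -> (allowed?, allowed amount).
Rule = Callable[[Claim], tuple[bool, float]]

def outpatient_office_visit_rule(claim: Claim) -> tuple[bool, float]:
    # Toy rule: allow office visits up to a contracted rate.
    if claim.cpt_code == "99213":
        return True, min(claim.billed_amount, 120.00)
    return False, 0.0

HYPOTHETICAL_PRODUCTS: dict[str, list[Rule]] = {
    "acme_gold_ppo": [outpatient_office_visit_rule],
}

def adjudicate(claim: Claim) -> dict:
    """Walk the serialized decision chain: claim type -> payer product -> rules."""
    if claim.claim_type not in {"professional", "inpatient", "outpatient"}:
        return {"status": "rejected", "reason": "unknown claim type"}
    rules = HYPOTHETICAL_PRODUCTS.get(claim.payer_product)
    if rules is None:
        return {"status": "rejected", "reason": "unknown payer product"}
    for rule in rules:
        allowed, amount = rule(claim)
        if allowed:
            # In a real-time-payments world, this is where payment would fire.
            return {"status": "paid", "amount": amount}
    return {"status": "denied", "reason": "no rule matched"}

print(adjudicate(Claim("outpatient", "acme_gold_ppo", "99213", 180.00)))
# {'status': 'paid', 'amount': 120.0}
```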
0:07:30 It’s a holy grail problem.
0:07:31 What’s holding that back?
0:07:33 I mean, one, it’s just so entrenched.
0:07:36 Two, there are companies that are out there that are doing this,
0:07:38 but you do have to digitize it.
0:07:40 You know, like, a lot of this is sitting in PDF documents.
0:07:42 Which, ironically, PDFs are not digitized in some sense.
0:07:43 Exactly.
0:07:44 You mean structured data.
0:07:45 Well, yeah, they’re very unstructured.
0:07:46 Unstructured.
0:07:48 And, yeah, we have, like, a company called Turquoise
0:07:50 that is basically doing this for contracts.
0:07:53 And, I mean, the average payer-provider contract,
0:07:56 which, by the way, could represent, like, hundreds of billions of dollars of revenue
0:08:01 for any given pairing of entities, is typically, like, a 200-page PDF document
0:08:05 that is completely monolithic, but any one line in that contract
0:08:09 might have huge implications for both the revenue to that provider,
0:08:11 as well as the cost structure for that payer.
0:08:13 And yet, those things don’t get litigated,
0:08:17 except every two years when they come up for renegotiation,
0:08:18 and even then no one’s looking at that one line.
0:08:20 They’re looking at the whole kind of aggregate thing.
0:08:25 So, you know, what if you were to digitize structured data within that contract
0:08:27 and be able to run scenarios on it
0:08:31 and say, like, what if the price for these 10 services was this versus that?
0:08:35 What would that implication be on the payments flow through that system?
0:08:38 You could maybe even redo these things faster than every two years, too.
0:08:39 Yeah, exactly.
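As a toy illustration of running scenarios over a digitized contract (not how Turquoise or any specific vendor actually works), one can represent contracted rates as structured line items and re-price historical volume under an alternative rate sheet. The service codes, rates, and volumes here are made up.

```python
# Hypothetical contracted rates per service code (the "one line" items in a
# payer-provider contract, once digitized into structured data).
current_rates = {"99213": 120.0, "99214": 180.0, "70450": 350.0}
proposed_rates = {"99213": 125.0, "99214": 170.0, "70450": 300.0}

# Hypothetical historical utilization: how many times each service was billed.
annual_volume = {"99213": 40_000, "99214": 25_000, "70450": 3_000}

def total_payments(rates: dict[str, float], volume: dict[str, int]) -> float:
    """Total expected payments under a given rate sheet."""
    return sum(rates[code] * count for code, count in volume.items())

baseline = total_payments(current_rates, annual_volume)
scenario = total_payments(proposed_rates, annual_volume)

print(f"baseline: ${baseline:,.0f}")
print(f"scenario: ${scenario:,.0f}")
print(f"delta:    ${scenario - baseline:,.0f}")  # impact of changing a few contract lines
```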
0:08:43 So, the other really interesting thing about this concept of newly digitized streams of information
0:08:46 and more longitudinal information about patients is,
0:08:51 what if we could have an always-on clinical trial infrastructure in our country
0:08:54 such that, on demand, you can slice and dice the population
0:08:56 for exactly the characteristics of humans you want
0:08:59 and produce a data analysis either retrospectively or prospectively.
0:09:01 You know, what do you think of that idea
0:09:04 and, like, what are the barriers for us to achieve something like that?
0:09:07 Yeah, the funny thing about that idea is, I mean, it’s a very exciting idea
0:09:09 because we could gain so much knowledge
0:09:11 and we could improve healthcare so much,
0:09:15 but, like, it’s a ridiculous thing to imagine doing, like, without something like AI.
0:09:18 Without AI, I don’t even know how you do that, how you could pay for that.
0:09:20 What do you mean by that? Like, what parts of that do you think is AI?
0:09:23 In many ways, this becomes a data problem in terms of slicing and dicing
0:09:26 and then keeping track of, like, this person got this drug
0:09:28 and, at this moment, had this response
0:09:30 and sort of understanding the causal nature.
0:09:31 Right.
0:09:33 So, what we love about clinical trials, and I’m going to get a little wonky,
0:09:36 is that, like, you have to have some sense of causality.
0:09:38 Like, I took this drug, this happened,
0:09:40 and people say correlation doesn’t mean causation,
0:09:42 but that doesn’t mean we can’t do causation.
0:09:45 That’s what trials are about. Trials are all about causation.
0:09:47 So, we have to understand the causal pathway.
0:09:50 But with all this data, AI is great, especially certain types of AI,
0:09:53 like Bayesian statistics, are really good at causality.
0:09:55 And so, we could actually have causal understanding.
0:09:58 And it could even be complicated, like, you know, you’re not supposed to take,
0:10:00 like, grapefruit juice with your birth control pill,
0:10:03 because the P450s will nullify this.
0:10:06 I don’t know how someone figured that out, but who knows what else is out there.
0:10:08 You know, that we just didn’t find yet.
0:10:11 And if you could have an app that, like, knows your diet and knows the basic things
0:10:14 and knows your drugs and has that information, that’s kind of mind-blowing.
0:10:17 Think how much we could improve with no new drugs,
0:10:20 no new treatments, just by optimizing.
0:10:22 We’re in such an unoptimized state,
0:10:25 because that optimization is almost impossible without AI.
0:10:28 But then on top of that, once you’ve built out this infrastructure,
0:10:31 now new drugs go into that infrastructure and can be optimized.
0:10:33 And finally, we don’t just optimize for health,
0:10:37 but we can jointly optimize for health and decrease in cost.
0:10:41 Like, this drug is, like, 10x better, 10x more expensive than the other drug,
0:10:44 but maybe the outcome for me is going to be no different.
0:10:46 So maybe I should get the other one.
0:10:47 Exactly.
0:10:51 Yeah, and how to think about that, that’s such a complex data problem.
0:10:54 And logistics problem, which is also another part of AI
0:10:56 that I think we could actually really finally tackle.
0:10:58 So as you can tell, very excited.
0:11:02 Yeah, yeah, I remember I had a conversation with, like, the CIO of the VA
0:11:05 many years ago where, you know, one of the ways he looked at his population,
0:11:09 he’s like, Julie, like a million clinical trials are just organically running
0:11:13 in my population every day, and I have no idea how to harness it, you know?
0:11:15 And so, I mean, even with just EHR data alone,
0:11:19 you can imagine the possibilities, let alone if you were also to layer
0:11:22 on top of that, just your daily behavioral data and all that kind of stuff.
0:11:25 And that’s where the LLMs could come in is just a conversational way
0:11:29 to capture the day-to-day journal of what you’re doing,
0:11:31 what you’re eating, who you’re interacting with, all that.
0:11:32 And it’s doctor behavior.
0:11:33 It’s like the whole system.
0:11:34 We can finally debug.
0:11:37 And the funny thing, coming from a tech background, like, you know,
0:11:41 if the RCT seems unfamiliar, this is basically a giant A/B test.
0:11:42 Exactly, yeah.
0:11:45 And so, this is deeply, deeply entrenched in tech.
0:11:47 Like, even every pixel is A/B tested.
0:11:51 I wish healthcare could be optimized the way pixels are in webpages.
0:11:54 And you’d have, like, an A/B/C/D/E/F test, on and on, in terms of the multivariate nature.
0:11:57 That degree of optimization is something that would be just fantastic.
0:11:58 Yeah.
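For anyone for whom the RCT-as-A/B-test analogy is new, the core of an A/B analysis fits in a few lines: compare an outcome rate between two arms and ask whether the difference exceeds the noise. The enrollment counts and outcomes below are fabricated, and a real trial analysis would add randomization checks, pre-registration, and more careful statistics than this simple two-proportion z-test.

```python
import math

# Fabricated example: patients recovered / enrolled in each arm.
recovered_a, n_a = 180, 1000   # arm A (e.g., standard of care)
recovered_b, n_b = 215, 1000   # arm B (e.g., new workflow)

p_a, p_b = recovered_a / n_a, recovered_b / n_b
p_pool = (recovered_a + recovered_b) / (n_a + n_b)

# Two-proportion z-test: is the difference bigger than chance would explain?
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(f"arm A rate: {p_a:.1%}, arm B rate: {p_b:.1%}, lift: {p_b - p_a:+.1%}")
print(f"z-statistic: {z:.2f}  (|z| > 1.96 is roughly significant at the 5% level)")
```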
0:12:00 And that gets to the notion, the other sort of holy grail problem
0:12:03 that you hear people talk about is, could you actually ever have
0:12:05 almost a spot market for pricing in healthcare?
0:12:08 And on any given day, at any given hour of the day,
0:12:12 you might have a very different dynamic of supply that’s available for a given service.
0:12:15 Why could you not price differently for it the same way we do in other industries?
0:12:16 Yeah, so why can’t that happen?
0:12:20 I mean, today it’s probably deemed illegal, honestly, by many of the contracts
0:12:23 because you’re sort of bound by these, again, these monolithic agreements
0:12:27 that highly specify, and the fact that you have this claim system.
0:12:32 There’s really no notion of a real-time adjudication of the actual price
0:12:34 that needs to be paid for that service.
0:12:38 And maybe this doesn’t require the fanciest type of AI per se,
0:12:42 but certainly the notion of being able to run machine learning on these things
0:12:44 and say, how many of these rules are just useless
0:12:47 because they don’t actually move the needle on cost or price,
0:12:51 but which are the ones that are most consequential that we actually should keep
0:12:55 and therefore have some kind of automated and systematic way to adjudicate them.
0:12:59 So huge opportunity, and that, again, represents a very significant portion
0:13:01 of what gets wasted in our system today.
0:13:03 So that’s a fun one to think about.
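One naive way to get at the earlier question of which rules actually move the needle is to re-adjudicate historical claims with each rule switched off and rank rules by how much the total allowed amount changes. This is purely illustrative: the rules, claims, and dollar amounts are fabricated and this is not how any payer actually does it.

```python
# Each hypothetical rule takes a claim and an allowed amount and returns a
# (possibly reduced) allowed amount. Fabricated rules and claims throughout.
def cap_office_visit(claim, amount):
    return min(amount, 120.0) if claim["cpt"] == "99213" else amount

def deny_duplicate(claim, amount):
    return 0.0 if claim.get("duplicate") else amount

def trim_one_cent(claim, amount):
    # A deliberately "useless" rule that barely moves the needle.
    return round(amount - 0.01, 2) if amount > 0 else amount

RULES = {"cap_office_visit": cap_office_visit,
         "deny_duplicate": deny_duplicate,
         "trim_one_cent": trim_one_cent}

claims = [
    {"cpt": "99213", "billed": 180.0},
    {"cpt": "99214", "billed": 250.0, "duplicate": True},
    {"cpt": "99213", "billed": 150.0},
]

def total_allowed(active_rules):
    total = 0.0
    for claim in claims:
        amount = claim["billed"]
        for rule in active_rules:
            amount = RULES[rule](claim, amount)
        total += amount
    return total

baseline = total_allowed(list(RULES))
# Rank each rule by how much the total changes when it is removed.
impact = {name: abs(total_allowed([r for r in RULES if r != name]) - baseline)
          for name in RULES}
for name, delta in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(f"{name:>18}: ${delta:,.2f} of impact")
```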
0:13:06 Admittedly, I mean, I mentioned, at least in my career in healthcare,
0:13:10 one of the sort of general themes of the problems that I’d like to go after
0:13:14 are where there is a fundamental mismatch between supply and demand.
0:13:18 And I think a lot of the companies in our portfolio represent that problem space.
0:13:20 And I had built a company that was in the scheduling space,
0:13:24 which when you think about the phenomenon by which I’m sure you’ve experienced this,
0:13:27 you as a patient are told to wait weeks for a doctor’s appointment,
0:13:31 and you assume that that’s because every doctor is booked out solid,
0:13:35 but it actually turns out that a lot of the capacity in our system goes completely wasted.
0:13:40 And if you could just simply get better visibility into the underlying data streams,
0:13:44 then potentially you could really mitigate the wait times and the experience for the consumer
0:13:47 while also helping the providers kind of best use their time.
0:13:50 Are there any examples that you’ve seen that you think are an interesting representation of that?
0:13:55 I know we have a lot of companies that are trying to both increase transparency
0:13:59 of supply, companies that are using coaching or provider groups,
0:14:02 and then bringing that to bear in care models.
0:14:05 Well, here I think you’re describing something which is even like maybe before we even get to AI,
0:14:12 which is like we got to get like off the whiteboards and onto some more modern sort of computer approaches for things.
0:14:13 Yeah, like systems of record.
0:14:14 Systems of record.
0:14:17 And that actually, you know, we often talk about like technology versus people.
0:14:19 Where’s the weak spot? Where’s the problem?
0:14:23 Maybe the most heretical thing one could say is that maybe the way of doing medicine has to change.
0:14:25 Tell me more. What do you mean by that?
0:14:31 Yeah, well, and so for instance, like just the workflow of a doctor, you think how a doctor goes through their day,
0:14:35 how can we support the doctor to do what we all want them to do and what they want to do,
0:14:38 which is maximize patient welfare.
0:14:39 Yeah.
0:14:41 But and maybe view it as something that takes it off of them.
0:14:47 So like I think about, like, Devoted and their infrastructure, or an ACO, like they address the system internally
0:14:51 and to the extent that they’re not optimized, that’s something that even they know and they can see.
0:14:52 Right.
0:14:56 So this is an example in the case of companies like Devoted who actually did start with a clean sheet,
0:15:01 right, and built their own scheduling system to accommodate the exact care model that they were going after.
0:15:04 Yeah, I remember from some of the work that we did at my old company,
0:15:08 you would see the way that doctors designed their schedule and very much to your point,
0:15:12 they would design the templates of their schedules specifically in a very protective,
0:15:18 like almost a defensive way because they felt wronged by the way that the system sent patients to them.
0:15:21 It’s a learned behavior because they want to protect their time.
0:15:22 Yeah.
0:15:23 And they’ve gotten screwed in the past perhaps.
0:15:24 Exactly.
0:15:29 We understand what this feels like when, you know, let’s say that we have 10 pitches with new entrepreneurs that we’ve never met in a row in a given day.
0:15:35 And you’re just on, you know, for like hours straight and you know how like physically and mentally exhausting that is.
0:15:40 It’s the same thing for doctors when they say if you give me, you know, four new patients back to back in the morning,
0:15:46 that is far more taxing to me than if you interspersed repeat patients or other tasks and whatnot.
0:15:47 And after seeing the pitches, then we have to go into Epic and write about the pitches.
0:15:50 Exactly.
0:15:52 That would be kind of worse, my God, if we had to do that.
0:16:00 But so you could see how the unfortunate side effect of the way that the systems traditionally have been designed has now caused this sort of ripple effect and defensive behavior.
0:16:09 But if you were to just kind of start from a clean sheet as companies like Devoted are able to do, could you actually design a much more logical system that actually learns from historical data, right?
0:16:15 And there could be almost like a reinforcement learning component to that where the doctors could provide feedback and you can learn over time.
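A toy sketch of the "learn from doctor feedback" idea: an epsilon-greedy, bandit-style loop that keeps a running preference score per schedule template and nudges it with end-of-day thumbs-up or thumbs-down feedback. The template names, ratings, and update rule are all invented for illustration, not drawn from any scheduling product.

```python
import random

# Hypothetical schedule templates a clinic might offer a physician.
templates = ["new_patients_back_to_back", "new_patients_interspersed", "admin_blocks_midday"]

# Running preference score per template, updated from physician feedback.
scores = {t: 0.0 for t in templates}
LEARNING_RATE = 0.2
EXPLORATION = 0.1  # occasionally try a non-top template to keep learning

def pick_template() -> str:
    if random.random() < EXPLORATION:
        return random.choice(templates)
    return max(scores, key=scores.get)

def record_feedback(template: str, rating: float) -> None:
    """rating in [-1, 1], e.g., an end-of-day thumbs-down/up from the physician."""
    scores[template] += LEARNING_RATE * (rating - scores[template])

# Simulated weeks: doctors consistently dislike back-to-back new patients.
random.seed(0)
for _ in range(50):
    chosen = pick_template()
    rating = -0.8 if chosen == "new_patients_back_to_back" else 0.7
    record_feedback(chosen, rating)

print(sorted(scores.items(), key=lambda kv: -kv[1]))
```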
0:16:17 So we talked about one idea which is clean sheet of paper idea.
0:16:18 Yeah.
0:16:19 Is that the only way?
0:16:27 So what I’m hoping also happens as part of this wave of AI is that it’s really a forcing function for people to actually take advantage of the data that we have now digitized, right?
0:16:36 So we’re actually only, what, 10, 11 years post-meaningful use in healthcare, which was the act that incentivized doctors to actually adopt electronic health records to begin with.
0:16:43 And it’s kind of remarkable to think that like even five years ago, less than 70% of doctors had any electronic health records.
0:16:50 We’re actually relatively new into the era of even having digitized forms of information about patients over a longitudinal period of time.
0:16:59 And so in many ways we haven’t at all scratched the surface of exploiting those data sets, and there really hasn’t been that much incentive to historically, I would argue.
0:17:07 And certainly not necessarily the technology capabilities at that middleware layer of the stack to be able to do anything meaningful and useful with that data.
0:17:12 But I think that’s where the advent of the tremendous technology shifts that we’ve seen on the AI side,
0:17:19 and what you can do with that information, how you can synthesize it, how you can present it to someone in a way that’s actually usable and friendly,
0:17:22 that could be the tipping point that actually gets people to unleash the data.
0:17:28 We’re also, by the way, in a period of time when provider organizations, hospitals are struggling financially.
0:17:32 And so many of them are looking at, OK, how can I actually monetize my data assets, right?
0:17:42 What do all of our companies want? They want to eat data. And one of the ways by which they can do that is actually partnering with these provider organizations who have these systems of record that they haven’t been able to exploit
0:17:49 and actually pay them and give them rev share or equity or whatever it is to get access to proprietary data sets to train their models.
0:17:57 Hey, it’s Steph. Look, people will endlessly debate and record podcasts on the most important ingredients to success.
0:18:01 Is it technical talent? Is it timing? How about hard work?
0:18:03 Luck?
0:18:09 Well, in my opinion, one of the most critical ingredients is the power of effective communication.
0:18:15 And that’s why I recommend the Think Fast, Talk Smart podcast from the Stanford Graduate School of Business.
0:18:23 Host and Stanford lecturer Matt Abrahams sits down with the experts to discuss the best tips to help listeners unlock their verbal intelligence,
0:18:32 whether it’s to excel in negotiations, make a wedding toast or work presentation, or even communicate better with AI chatbots.
0:18:39 And if you want to sneak peek, Matt’s also got a great TED Talk with over 1.5 million views.
0:18:45 So what are you waiting for? Check out Think Fast, Talk Smart every Tuesday wherever you get your podcasts,
0:18:50 whether that’s Apple Podcasts, Spotify, iHeart, or even on YouTube.
0:19:00 It’s interesting to ask, like, of the various crises, like the staffing crisis versus the issue hospitals are dealing with,
0:19:05 like, which crises are going to be catalysts and which will be impediments?
0:19:10 I think we kind of feel like the staffing crisis really is like tailwinds for AI.
0:19:11 Strangely, yes.
0:19:12 Yes, yes.
0:19:17 But maybe the — and COVID, I think, is tailwind for AI because we’re so used to virtual.
0:19:18 Yes.
0:19:22 But, like, maybe not all these crises will be helpful.
0:19:24 And I think that’ll be particularly interesting to see how it nets out.
0:19:25 Yeah, I think that’s a great point.
0:19:28 I think, certainly, there are many who would view it not as a tailwind,
0:19:36 but I think it’s a good forcing function because now people are at a breaking point where the status quo way of solving that problem,
0:19:39 which, again, is how do we produce more doctors? How do we produce more nurses?
0:19:41 We just — we can’t do that physically.
0:19:44 And so that is driving, I think, a lot of this adoption.
0:19:47 We were remarking as a team that at this last JPMorgan conference,
0:19:53 like 100% of the incumbent payers and providers got up on stage and talked about not just what they want to do with AI,
0:20:00 but how they’re actually deploying AI in practice because they found no other way to be able to solve those kind of more fundamental problems.
0:20:07 I think the other tailwind that some might call an impediment, but certainly builders in our universe call a tailwind,
0:20:09 is the business model change in healthcare, right?
0:20:16 So movement towards value-based care fundamentally breaks the kind of the schema of how healthcare has worked for decades,
0:20:21 and I think incumbents are more likely to be on their heels with that dynamic versus the upstarts
0:20:29 who themselves can build their entire care model and operating model on the basis of those new payment domains.
0:20:32 I actually don’t envy organizations who have to have one foot in each world, right?
0:20:38 Because having half of your shop in a fee-for-service model and then the other half in a value-based model is very, very challenging to do.
0:20:45 Well, it’s fun to think that, like, in a fee-for-service world, AI is nice, but maybe actually doesn’t go against what you want.
0:20:49 And in a value-based care world, actually, AI is the catalyst, right?
0:20:53 Because if you can do things better, you can see the value too.
0:20:57 Yeah, it’s actually funny, the example that that reminds me of is how the AIs are getting sued right now.
0:21:04 So there’s a bunch of major national payers who are using AI algorithms to automate prior authorization,
0:21:11 and so all they’re doing is taking the rules that humans wrote, that humans were executing slowly, and doing it faster.
0:21:15 And so now, all of a sudden, everyone is complaining, oh, everything, like, the denial rate is going up,
0:21:20 but it’s actually not, the rate of denials is not going up, it’s the speed with which the denials are happening that’s going up.
0:21:23 Don’t blame the technology, blame the humans who actually wrote the rules,
0:21:25 and you’re just seeing kind of an exacerbated version of it.
0:21:29 We’re going to deal with a lot of that, of kind of the finger pointing at the technology,
0:21:33 where it’s actually just implementing the broken system underneath it,
0:21:37 and that’s why this kind of move to, like, new business models give you the opportunity to clean that up
0:21:41 and start with logical ways to control spend.
0:21:46 Okay, so we talked about healthcare administration, we talked about scheduling opportunities.
0:21:48 Let’s actually talk about the EHR itself.
0:21:52 So LLM as an EHR, what do you think about that?
0:21:56 Yeah, well, so I think the thing that’s really underappreciated about the LLM is, like,
0:21:59 people think of it as, like, this oracle or something like that,
0:22:03 but I think it’s maybe, at least for us, I think of it as a UI.
0:22:06 And it’s kind of funny because we start with, like, command line interfaces, you know,
0:22:10 for those who have ever dealt with that, and, like, then we have GUIs because that’s better than command line,
0:22:15 but now we’re back to text and typing things in, except instead of, like, some weird command
0:22:18 that you have to memorize, you just, like, just tell me what you want.
0:22:19 Right, you just speak English.
0:22:22 Yeah, just speak English, and we’re so optimized for speaking English to each other.
0:22:25 I mean, that’s, like, easy, it doesn’t require training in the same ways,
0:22:28 and so I think as a UI now, that makes sense.
0:22:33 And now, when you’re saying the EHR as an LLM, then I guess you’re kind of meaning that the data’s in there
0:22:36 and it can be queried like this and maybe synthesized.
0:22:37 That’s all very natural.
0:22:40 I think, obviously, you want to be very clear about partitioning things.
0:22:42 And so maybe you’re doing it with, like, RAG or something like that,
0:22:44 where it’s getting information coming back.
0:22:49 But maybe the question to turn around is, like, again, the technology sounds very plausible.
0:22:52 You can imagine, you know, a hackathon that would put some pieces together and get that done,
0:22:57 but I think you need more than just, like, connecting to GPT-4 or Gemini or something like that.
0:22:59 You need something medically specific.
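For readers who haven't seen RAG (retrieval-augmented generation) in practice, here is a deliberately tiny sketch: score chart snippets against a question, pull the top few into a prompt, and hand that to a model. The snippets are fabricated, the retrieval is a crude bag-of-words overlap standing in for embeddings, and the LLM call is left as a stub; a medically specific system would use domain-tuned retrieval, strict partitioning of patient data, and a real model endpoint.

```python
# Hypothetical, de-identified chart snippets for one patient.
chart_snippets = [
    "2022-03-14 visit: started lisinopril 10 mg for hypertension.",
    "2023-01-09 lab: HbA1c 6.9%, up from 6.4% six months prior.",
    "2023-06-22 note: patient reports grapefruit juice daily with breakfast.",
]

def score(question: str, snippet: str) -> int:
    """Crude bag-of-words overlap; stands in for an embedding similarity score."""
    q_words = set(question.lower().split())
    return len(q_words & set(snippet.lower().split()))

def retrieve(question: str, k: int = 2) -> list[str]:
    return sorted(chart_snippets, key=lambda s: score(question, s), reverse=True)[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context from the chart:\n{context}\n\nQuestion: {question}\nAnswer:"
    # Stub: a real system would send `prompt` to a medically tuned LLM here.
    return prompt

print(answer("is the patient on lisinopril for hypertension?"))
```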
0:23:00 Yeah, absolutely.
0:23:03 This is where, I think, one of the prime examples where we certainly believe
0:23:07 that a specialist, you know, model is necessary to understand the specific nuances
0:23:11 of how to interpret medical information versus general internet information.
0:23:15 And, I mean, that’s certainly a big area of development for builders as we see it.
0:23:20 Companies that are building tools that can do everything from summarizing existing medical record data.
0:23:24 How do you tell the story of Vijay Pande before he walks in the door
0:23:29 so that you understand, like, his journey and not just look at a bunch of numerical records
0:23:32 and sort of sporadic information about different visits and whatnot,
0:23:36 but really truly the story of him, including things like your social determinants
0:23:38 and what happens in the home and outside of the clinical setting.
0:23:40 We see a lot of companies kind of building that.
0:23:45 The other obvious use case is the scribing use case, where you have a conversation with your doctor.
0:23:50 Actually look them in the eye, rather than them sitting at a keyboard during your entire visit.
0:23:53 And that also gets written as a story.
0:23:56 It’s a story that then gets added to your medical record
0:24:01 and it can create that flywheel effect of continuing to add to the narrative of your journey.
0:24:05 One way to sort of think about AI in healthcare is to take the existing jobs
0:24:07 and then see which ones AI can go into.
0:24:08 And that makes a lot of sense.
0:24:12 I’m also curious if you slice and dice it a different way, because if we didn’t have people, with AI,
0:24:14 how would we do it differently?
0:24:18 Because people are assigned specific jobs because of the way humans work.
0:24:22 But maybe when all is finally said and done, when AI can do everything,
0:24:25 maybe the resident isn’t the role that it would take.
0:24:31 If you were to, say, unbundle the job of the doctor, what components could you re-bundle into a different thing?
0:24:36 There was actually a time where this concept of a dataist was sort of popularized
0:24:41 where a baseline component of basically what every job in healthcare is doing
0:24:43 is some degree of data interpretation.
0:24:47 And so if you were to unbundle that component and create almost like a horizontal job
0:24:50 that was just doing data interpretation and this kind of thing.
0:24:53 And maybe that’s actually the better analog to what I described earlier.
0:24:57 It’s like what if there was a dataist role that effectively is an LLM
0:24:59 that is synthesizing all this information.
0:25:03 I think the thing that’s missing right now to make this a reality is
0:25:07 today’s information architecture is very sporadic.
0:25:11 So you pretty healthy person, you probably see your doctor maybe once or twice a year.
0:25:15 And so how do you augment that with more continuous information,
0:25:19 whether it be self-reported, whether it be remote patient monitoring data,
0:25:23 whether it be just other information sources to create that more holistic picture.
0:25:25 But I like that notion of flipping the jobs on their head
0:25:27 and thinking about the components a different way.
0:25:30 Well, the fun thing about the dataist is like, I think the ‘who’ is pretty important, right?
0:25:33 Because we’re talking about a team of people.
0:25:35 And medicine often is done by a team.
0:25:39 There might be a nurse or a PA or a doctor or a specialist and all these people.
0:25:42 And where the AI comes in, one idea is a co-pilot,
0:25:44 which is like each one of the team members has a co-pilot.
0:25:51 But what’s interesting about the dataist is like the AI is a peer contributor.
0:25:55 And has its role that actually everyone feels pretty good about.
0:25:58 And you think about it like you don’t put a person to do the dataist job.
0:26:01 I mean in principle with a calculator and a lot of time,
0:26:04 maybe it could do all what’s necessary, but you never have a human being do that.
0:26:10 And that might be a very easy first entry where it’s like they’re doing what they’re good at.
0:26:15 Yeah, that actually reminds me of a company that we saw that what if every nurse in the inpatient ward,
0:26:20 because the inpatient setting is very chaotic, very active, and surprises happen all the time.
0:26:25 And a lot of nursing teams have sort of either live like walkie-talkie type devices just on their shoulder,
0:26:29 or they’ll have some real time communication mechanism with the rest of their care team.
0:26:35 And this company was saying, why not put an LLM into the same walkie-talkie signal
0:26:38 and actually literally just have it be like almost a Jiminy Cricket sitting on everyone’s shoulder
0:26:43 being like, hey, I’m sensing X pattern by virtue of listening into your conversations.
0:26:46 Let’s all remember that this is happening with this patient over here.
0:26:49 And I think there could be a safety issue over there.
0:26:51 It’s almost like the way that I talk about Baymax all the time.
0:26:54 So like could everyone just have kind of a Baymax companion, you know,
0:26:58 hang out in their care team and be sort of the steward of all the information flow.
0:27:03 Synthesize it and read it back when they find something that probably warrants an alert within that group.
0:27:05 I don’t know if you spent much time in the ED.
0:27:07 I like break this and cut that.
0:27:09 No, I haven’t had that privilege.
0:27:11 I have lots of scars and stitches and so on.
0:27:13 And like, so I remember it was a few years ago.
0:27:17 Right around here, actually, I cut myself with a chef’s knife.
0:27:18 Awesome.
0:27:19 I was probably showing off to the kids.
0:27:20 Oh, geez.
0:27:22 And so it wasn’t looking good.
0:27:24 I was like, oh, I should get stitches.
0:27:25 And I go to the ED.
0:27:27 I’m there for like two hours.
0:27:28 Wow.
0:27:29 And I look around.
0:27:30 Just waiting.
0:27:31 Yeah, just waiting.
0:27:33 And I look around and I’m like, this could be another four hours.
0:27:35 And like, I’m doing my math.
0:27:39 And maybe for the first hour, I’m bugging the nurse and the inbound.
0:27:41 But like, after a while, I just leave.
0:27:44 And like, there’s various situations where I just want to talk to somebody,
0:27:47 but you can’t have everybody talk to somebody because they’re going to be overloaded.
0:27:49 If I could just be texting somebody.
0:27:51 I just want to know where things are.
0:27:53 And if it’s busy, that’s fine.
0:27:55 Or maybe I don’t even need to be there.
0:27:57 But like that triaging too could be huge.
0:27:58 Absolutely.
0:27:59 Yeah.
0:28:00 And this gets to, okay.
0:28:03 So going back to the concept of unbundling the role of a clinician,
0:28:05 there’s one part, which is actually the treatment part.
0:28:10 So that’s the part where maybe we can necessarily today build an LLM that will stitch your finger.
0:28:13 But the notion of triage getting you to the right site of care.
0:28:15 So should I stay home?
0:28:17 Should I go to an urgent care clinic?
0:28:21 Should I go immediately to the ED or am I okay just going to my PCP?
0:28:23 That is actually a critical role.
0:28:29 I wrote a piece about this where we said my version of that today is I call my doctor cousin, as all of my family members do.
0:28:34 The poor lady, she’s a cardiologist, but she gets every single call about every single specialty under the sun.
0:28:39 And she’ll tell me literally, like, should you take your son to the urgent care or does this need to go immediately to the ED?
0:28:42 That is like one of the roles that LLM construct could play,
0:28:47 which actually would also do a huge service to doctors that they don’t have to be the ones who are fielding those questions.
0:28:49 Well, especially the alternative is Dr. Google, right?
0:28:50 Correct.
0:28:51 Or WebMD or whatever.
0:28:52 Yeah.
0:28:53 And then the patient.
0:28:54 Which will tell you you have cancer.
0:28:55 Yeah.
0:29:01 The amateur is deciding, like, you know, and especially, like my friends, you’ve probably been through this, like when your kid is sick.
0:29:05 And you’re like, I probably don’t need to go in, but like it’s my kid.
0:29:07 So if it’s like even 5% maybe I’ll go.
0:29:08 Yep.
0:29:09 And that’s just a drain on everybody.
0:29:10 Yep.
0:29:11 Doctors and patients.
0:29:12 Yeah, yeah, yeah.
0:29:15 So we talked about EHRs and this notion of the patient’s story.
0:29:20 You know, we’re now getting into this notion of, okay, if you were to take almost like the front door experience to healthcare.
0:29:24 And what are the big opportunities for AI to make an impact there?
0:29:30 One is simply instead of going to Google, you know, going to a specialized tool or whatever it might be that’s trained in this way.
0:29:35 One of the questions that always comes up is, what are, like, the regulatory rails on this?
0:29:41 So like at what point do you sort of cross the line into actually clinical decision making and how should I think about this as a builder?
0:29:47 I know you’ve obviously done a ton of thinking on this and a ton of work, including like talking to the regulators and understanding what they think.
0:29:48 Yeah.
0:29:54 What kind of advice would you give to entrepreneurs who are trying to figure out where that line is and whether or not they should cross it?
0:29:55 Yeah.
0:29:58 So a couple of things I think some of the lines are more clear than others.
0:30:07 But in the cases where there is any gray zone, I think the regulators are eager to chat with startups, especially maybe on the software side, that might be one case, but, you know, and so on.
0:30:10 Like, but like to try to figure out where you are.
0:30:14 And, you know, we’ve seen a lot of successful founders who’ve done that type of collaboration.
0:30:20 And I think generally that’s a pretty strong approach because then there’s no surprises on either side.
0:30:23 The tricky part is when nobody knows, you know.
0:30:33 And so I think it’s not just about consultation, but it’s also leading and sort of taking the framework and the philosophy for how we regulate things right now and really understanding is this software as a device?
0:30:34 Is that the right framework?
0:30:38 And really sort of being a leader in terms of how we should be thinking about this.
0:30:43 And I think there’s actually a welcoming of that as well because it’s new for everybody.
0:30:50 As you’re alluding to, we are actually an industry that has a regulatory framework when it comes to AI specifically.
0:30:55 And so in many ways, these are like some of the rare cases where healthcare is actually ahead of the curve as far as technology goes.
0:31:06 What’s your sense of, you know, does generative AI, specifically LLMs, constitute enough of a sea change relative to historical waves of AI that it should warrant an entirely different regulatory framework?
0:31:11 Or do we think we should try to make, you know, the current system work for those new technologies?
0:31:13 So our space already has a ton of regulation.
0:31:21 And so in that case, you have to now ask, what’s the specific use case where more regulation is helpful for patients?
0:31:23 I don’t see people talking about that.
0:31:31 Yeah, the broader point of don’t necessarily focus on regulating the technology, but rather the thing for which the technology will be used.
0:31:32 Yeah, the use.
0:31:36 Okay, let’s go to the holy grail of all holy grails, which is AI doctor.
0:31:48 How far off in the horizon do you think that concept is where you could fully embody the full stack role of a clinician making diagnostic decisions and treatment decisions and whatnot?
0:31:54 And what needs to be true, do you think, in just like the broader ecosystem for that to be the case?
0:32:02 Yeah, so I think one of the two by twos I like, you know, is sort of trying to understand which decisions are complex and which are simple.
0:32:07 And then which answers are robust to mistakes and which are not robust to mistakes.
0:32:14 And so things that are simple and actually robust to mistakes, those can already be done by machine learning and so on.
0:32:19 Things that are simple, but actually have major consequences with mistakes, like driving a car.
0:32:22 Like everyone can drive a car, but like if you do it wrong, you could kill people.
0:32:28 So that one’s actually tricky, but you see people working on that with self-driving cars and there’s a lot of work.
0:32:34 I think that where medicine is hard is that it’s something that’s complex and mistakes can have huge impacts.
0:32:37 And so maybe what we could do is we should work our way up there.
0:32:43 And it’s not even a question of like, should we, but we kind of have to, if you think about some of these crises are coming.
0:32:48 And so maybe you start with nursing and we’ve seen this with Hippocratic and that makes sense.
0:32:53 You’re not doing diagnosing, you’re doing no harm, you know, literally.
0:32:58 And so that’s, I think, very clever. Then maybe you could work your way into PA, you know, physician assistant.
0:33:01 Maybe you could work your way from there into GP.
0:33:10 And I think the general practitioner concierge doctor, that tier is kind of a really interesting tier because largely you’re triaging and sending off to specialists.
0:33:14 So the AI doesn’t have to be an oncologist and a cardiologist and all these things.
0:33:19 And so that tier actually alone is kind of really interesting since so much of medicine is done at that tier.
0:33:23 And so many of the issues of access are about access to that tier too.
0:33:29 If everybody had the AI concierge doctor in their pocket, I think that would actually be dramatic in terms of the impact on health.
0:33:32 So even when we just get to that tier, I’ll be pretty excited.
0:33:36 And once we’re at that tier, then you can imagine sort of going into specialist world.
0:33:39 But that might be, that later part might be a bit off.
0:33:40 Yeah.
0:33:47 And I mean, to your point, this is, it’s an inevitability that we’ll have to figure out a way to create leverage on the supply side of this portion of our labor base.
0:33:50 I guess, what are your thoughts on co-pilots for doctors?
0:33:55 And is that a more near term tractable version of this that you think could have an impact in the near term?
0:33:56 I think so.
0:34:02 The whole problem with co-pilots is, can you work it in a way that goes into the doctor’s workflow where they view it as a benefit, not a nuisance?
0:34:03 Yeah.
0:34:06 You know, it’s not some alert or whatever.
0:34:08 It’s like something where they are going to it.
0:34:16 If you can do that, and maybe we’ve seen this with like scribes and so on, like something where doctors like, like, hell yeah, I want this, this is great.
0:34:24 If we can create that, and maybe that’s the sort of the challenge and the call to arms for founders, like create something, some product that like people are clamoring for it.
0:34:29 And that’s obviously knowing that space really well and knowing your customers and knowing how people are going to use it.
0:34:32 I think if you can get into the workflow, then I think it could go really well.
0:34:41 Bayesian did kind of the ultimate, which is like, let’s just embed it into the EHR workflow so that inevitably it’s just there when they open it up and there’s not really any need for individual physician buy-in.
0:34:42 Yeah, totally.
0:34:51 Yeah, given what we just talked about and all these grand challenges, what are some of the types of startups that, you know, we wish would walk through the door that we just haven’t seen yet?
0:35:01 Yeah, so one area that I’ve been waiting for, and I think it’s maybe a little early maintenance, maybe just right at the right time, is something where clinical trials can be addressed with AI.
0:35:04 And this is where it’s a confluence of a couple of things.
0:35:06 One is like clinical trials are obviously so important.
0:35:11 We talked about real world and like the ongoing clinical trials as a part of healthcare.
0:35:17 But then finally, clinical trials, because so much money flows through it, you could improve them 5% or 10%.
0:35:22 It’s not like you have to do something heroic; you don’t have to 10x it or 100x it.
0:35:30 I remember like a decade or so ago, I had an acquaintance who was working for Google and they were optimizing various filters and this and that.
0:35:35 And they made it like 5% better, one of the ad filters, and basically 5% was like 100 million dollars.
0:35:38 Yeah, exactly. 5% was 100 million dollars.
0:35:40 And so I was really jealous.
0:35:47 Like I’m working on something in drug design or whatever to make big leaps and bounds, and small things on big cash flows can have a huge impact.
0:35:55 So something for clinical trials could be huge, or even just picking, like, the rank ordering of clinical trials to sort of do a better job there.
0:36:00 Anything in that space, I think, would have a huge impact, and we haven’t seen very much.
0:36:06 And part of it is, like, it’s maybe not where, if you’re outside of that space, you would think to go.
0:36:08 I think that would be my pick.
0:36:22 Mine would be kind of comparable in the sense of the nature of the opportunity, which is if you were to design an AI-native health plan from scratch and basically be the way by which healthcare payments flow, given all the problems that we talked about earlier.
0:36:26 What are the components of a health plan? It’s a payments mechanism and claims.
0:36:30 It is an underwriting chassis in terms of how you score risk within a population.
0:36:37 And then it’s a network of providers that you would actually steer patients to on the basis of understanding, you know, what kinds of services they need.
0:36:51 And the way that those are built today, you know, you see huge opportunities to both leverage data and AI in the sense of exactly what you just talked about, where a 1% impact on the cost structure of a health plan, or the way that you underwrite risk
0:37:00 in a certain health plan, could literally mean hundreds of millions of dollars of either cost savings or better economics to the providers who are part of those networks.
0:37:13 So that to me is kind of this notion of a full-stack, AI-native health plan that takes full risk on populations and exploits all of these data sets that we’re talking about to really understand, almost at an individual level.
0:37:22 You know, you can almost sort of imagine like an individualized health plan that is like purpose-built for you on the basis of your behaviors and your medical history and things like that.
0:37:36 That is priced entirely differently than all of your employee peers who are in the same group plan, versus what it is today, where it’s so least common denominator and, like, sort of everyone loses because you’re trying to design for everyone in the same sort of brute force fashion.
0:37:38 So that would be mine.
0:37:39 That’d be fun.
0:37:48 Yeah. Well, those are some very big audacious grand challenges that we hope many builders go off and pursue and we’d obviously love to talk to anyone who’s working on problems of this ilk.
0:37:49 Yeah, absolutely.
0:37:57 [Music]
0:37:59 Thank you for listening to Raising Health.
0:38:06 Raising Health is hosted and produced by me, Chris Tatiosian, and me, Olivia Webb with the help of the Bio and Health team at A16Z.
0:38:08 The show is edited by Phil Hegseth.
0:38:13 If you want to suggest topics for future shows, you can reach us at raisinghealth@a16z.com.
0:38:17 Finally, please rate and subscribe to our show.
0:38:31 The content here is for informational purposes only, should not be taken as legal, business, tax or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
0:38:37 Please note that A16Z and its affiliates may maintain investments in the companies discussed in this podcast.
0:38:43 For more details, including a link to our investments, please see A16Z.com/disclosures.
0:38:45 (upbeat music)
Vijay Pande, founding general partner, and Julie Yoo, general partner at a16z Bio + Health, come together to discuss the grand challenges facing healthcare AI today.
They talk through the implications of AI integration in healthcare workflows, AI as a potential catalyst for value-based care, and the opportunity for innovation in clinical trials. They also talk about the AI startup they each wish would walk through the door.
Resources:
Find Vijay on Twitter: https://x.com/vijaypande
Find Julie on Twitter: https://x.com/julesyoo
Listen to more episodes from Raising Health: https://a16z.com/podcasts/raising-health/
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.