How SDSC Uses AI to Transform Surgical Training and Practice – Episode 241

AI transcript
0:00:11 [MUSIC]
0:00:14 Hello, and welcome to the NVIDIA AI podcast.
0:00:16 I’m your host, Noah Kravitz.
0:00:20 Our guest today is a machine learning and AI leader who’s worked on projects ranging from
0:00:26 sustainability and reforestation efforts to creating a virtual clothes swap platform for
0:00:30 environmentally friendly fashion. But for the past year and a half or so, she’s been
0:00:35 serving as director of machine learning at the non-profit Surgical Data Science Collective,
0:00:40 where she leads research focused on utilizing video data from surgeries to develop tools
0:00:45 that can provide surgeons with immediate feedback and insights on their performance.
0:00:50 She recently gave a TEDx talk titled, “Why You Want AI to Watch Your Surgery,”
0:00:54 which I encourage you all to go check out on YouTube after you listen to our conversation.
0:00:59 Because she’s here right now to talk with us about the potential for AI to help surgeons bring
0:01:05 better health care to everyone. Margaux Masson-Forsythe, welcome, and thank you so much for joining
0:01:10 the NVIDIA AI podcast. Hi, Noah. Thanks for having me. Margaux, maybe we can start
0:01:15 with a little bit about your background. I alluded to it just a little bit in the intro. You’ve
0:01:20 worked on various projects after studying machine learning and leveraging your skills
0:01:24 and experience for a lot of what we’d call AI for good projects, but really just,
0:01:28 in my view, projects that are helping improve quality of life for everyone.
0:01:33 So maybe you can detail that a little bit and then kind of bring us up to the present
0:01:37 with how you got involved with the Surgical Data Science Collective.
0:01:42 Yeah, sure. So like you said, I’ve been working in the AI field for quite some time.
0:01:48 My first field of study was computer science, so I did a lot of software development,
0:01:54 and then I realized that I wanted to do more scientific projects. So I went back
0:02:00 to finish my master’s and specialized in computer vision. Computer vision is something that I really,
0:02:07 really love because I’m a very visual person. And so analyzing images and videos is something
0:02:14 that I’m pretty passionate about, I would say. And from that, I’ve worked on many different
0:02:21 projects with very big video and image files. So like you said, I’ve worked on several projects
0:02:28 that go from analyzing LiDAR scans, for example, to satellite imagery to detect
0:02:36 deforestation, or now surgical videos. So it’s been quite a ride for sure. And I’ve learned
0:02:44 a lot, mostly about how to productize AI and how we can use AI to make an impact in the world and
0:02:50 have a focus on whether it is actually going to be useful, you know, and make a difference at some point.
0:02:53 So that’s kind of how I see my career so far.
0:02:59 Have you found that the projects you’ve been drawn to, has it been kind of being drawn to the
0:03:05 next sort of technical or scientific challenge, kind of pursuing the craft and kind of pushing
0:03:10 the boundaries of what computer vision and image and video analysis can do? Or have you been driven
0:03:15 more by the mission of these different projects, or have you just kind of found that you’ve sort
0:03:18 of been able to follow both of your passions, so to speak?
0:03:23 I would say both, actually, yes. Luckily.
0:03:31 I mean, I’ve always been passionate about different sciences. So I actually have a
0:03:37 hard time focusing on only one thing or one project. And when I learned about climate tech,
0:03:43 for example, I really wanted to see how I could help and how AI could help process all of this
0:03:48 giant satellite imagery, you know, remote sensing is really hard to process.
0:03:55 And then I learned about surgical videos and I learned that there were thousands of terabytes
0:04:01 of surgical videos that were not used. And it’s a pretty big challenge because surgical videos are
0:04:06 really heavy, really long, you can imagine an eight-hour procedure. No one really wants to watch
0:04:14 those videos. So that’s when I was thinking that AI is indeed the perfect tool for this kind of data
0:04:21 that is really long, but also has a temporal element to it, which is quite difficult.
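To make the scale concrete: an eight-hour recording at roughly 30 frames per second is close to a million frames, so even a first pass over the data usually means subsampling. Below is a minimal sketch of that kind of preprocessing, assuming OpenCV and a hypothetical local file; it is an illustration, not SDSC's actual pipeline.

```python
import cv2


def sample_frames(video_path, every_n_seconds=5.0):
    """Yield (timestamp_seconds, frame) pairs from a long surgical video,
    keeping only one frame every `every_n_seconds` so an eight-hour
    recording stays tractable for downstream models."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0           # fall back if metadata is missing
    step = max(1, int(round(fps * every_n_seconds)))  # frames to skip between samples
    idx = 0
    while True:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)         # jump ahead instead of decoding everything
        ok, frame = cap.read()
        if not ok:
            break
        yield idx / fps, frame
        idx += step
    cap.release()


# Hypothetical usage: one frame every 5 seconds of an endoscopic recording.
# for t, frame in sample_frames("endoscopy_case_001.mp4"):
#     ...
```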
0:04:28 And when I learned about that, it was a really interesting impactful project, but also in terms
0:04:32 of technical challenges, I thought it was really interesting. And that’s actually what made me
0:04:37 join SDSC in the first place. So how did you find out about it? What drew you into the
0:04:44 troves of surgical videos? I started to work at the Surgical Data Science Collective, SDSC,
0:04:50 when I met the founder, who is a pediatric neurosurgeon, Dr. Donoho. And he introduced me
0:04:56 to this issue, you know, he was telling me I have all these videos saved in drives and I don’t do
0:05:02 anything with them. And I know many of my coworkers and friends and other surgeons have drives with
0:05:07 hours and hours of surgical videos, and they are just sitting on the desk, not really doing anything
0:05:13 with them. Right. Just to set context for the audience, and for me too: is it standard procedure that
0:05:19 surgeries are all videoed? Are these cameras that are inside of people, sort of, you know,
0:05:25 guiding the surgeons? Or are they sort of operating room overhead cameras? How does
0:05:32 surgical video work? That’s a great question. It’s pretty diverse. We have a lot of endoscopic
0:05:37 videos and microscopic videos. So endoscopic will go, for example, through the nose or any
0:05:43 other part of the body where you need to see inside. And actually, the endoscopic videos are
0:05:50 a really good data point for us because the computer vision algorithm sees what the surgeon
0:05:55 sees. Right. And that is, you know, golden because there is a lot of information in these videos.
0:06:00 For the microscopic videos, it is also used by the surgeons sometimes when they do the surgery to
0:06:06 really magnify what they’re looking at. For example, if they’re operating on very small arteries,
0:06:12 they need to have this big, intense zoom. That’s what they’re going to be using. Often it is even 3D
0:06:18 for them, and we have some of those 3D videos as well. So yeah, it’s a mix of microscopic surgical
0:06:25 videos and endoscopic videos. And we don’t yet have the, you know, kind of security camera video
0:06:30 from the OR, but some of our collaborators use that kind of video to get a sense of what is
0:06:38 happening in the operating room. Right. And so you met the founder. So the surgical data science
0:06:43 collective was already in existence and you met the founder and got involved? Yes. It was pretty
0:06:50 early on. So the surgical data science collective is a non-profit organization that was started by
0:06:58 Dr. Daniel Donoho, and the main mission and the main idea was to create and analyze a repository of
0:07:05 surgical videos in order to improve surgical techniques and patient outcomes. Because still
0:07:13 today, there are 5 billion people who lack access to safe surgery and there are at least 4.2 million
0:07:21 people around the world who die within 30 days of surgery. So if we consider surgery as a disease,
0:07:27 it would be the third leading cause of death. That’s why, you know, the goal of SDSC is to
0:07:34 utilize these surgical videos to identify best practices, support medical education,
0:07:39 or even predict potential outcomes and complications in advance of surgery.
0:07:45 Right. So we’ve had other healthcare practitioners and people from the health industry on the podcast
0:07:51 talking about this. The one that comes to mind is analyzing still images, you know, x-rays
0:07:57 and scans, and using AI to discover things, often kind of, as you said, at that super zoomed-in level,
0:08:02 for, you know, cancer prediction and that kind of thing. Tell us about some of the opportunities
0:08:08 and challenges involved with analyzing all of the surgical video. I would assume, you know,
0:08:12 the first challenge is just gathering the data and then processing all of it. But
0:08:17 kind of take us through it. What is it that, you know, you said improving best practices and real
0:08:22 time feedback. So maybe you can speak to that as well. Yes. So the first challenge, like you said,
0:08:27 is actually to gather all of this data. And like I said earlier, a lot of these surgical
0:08:33 videos are stored on drives and it’s really difficult to get access to these drives. Sometimes,
0:08:38 you know, you kind of have to go and fly somewhere and meet with the surgeons to be able to get the
0:08:44 videos. Most of the time, the videos are not even recorded because people don’t know what
0:08:49 they can be used for. So why would they record them? So actually, one of the biggest challenges
0:08:55 is asking people to press the record button. Right. I mean, I’m laughing, but I’m imagining,
0:08:59 you know, if I was a surgeon, that’d probably be the last thing on my mind, right? So yeah.
0:09:05 Exactly. Yeah. I mean, that is definitely not the priority. And then if they think about recording,
0:09:12 so pressing this button, they have to export the video from the device, walk around with a USB key,
0:09:18 upload the videos on a laptop, upload to the cloud. So there are so many steps here for these people
0:09:24 who are extremely busy, who have so many other important things to do in their day. It’s definitely
0:09:29 not a priority. So that is our first challenge, and that has been one of the biggest challenges
0:09:35 that we’ve had. But we’ve been pretty successful in gathering at least a good first base of a
0:09:42 surgical video library. By now, we have about 40 terabytes of surgical videos. Okay. And we expect
0:09:50 to get more, you know, and the other challenge here is to get diverse surgical videos. We don’t want
0:09:57 obviously for an AI model, we don’t want videos from one surgeon in one hospital doing the same
0:10:01 procedure. Hundreds of hours of tonsillectomies is only going to get you so far, I’d imagine.
0:10:09 Yes, exactly. So that is the other challenge is how do we get these videos from diverse sources
0:10:16 and diverse fields, which is also a lot of networking, because you have to go and talk
0:10:21 to the people and ask them to record and then do they want to work with us so that we can start
0:10:27 gathering these videos. So this is the second part of this challenge of data collection.
0:10:36 But in terms of the other challenges, obviously, surgical videos are quite long, but they are also
0:10:43 temporal. They’re videos, right? So it is a different type of model than you would use for still images.
0:10:47 We have kind of the same architectures that you would use for other models,
0:10:52 but we always have to think about the temporality of what is happening in the video, and that is
0:10:58 actually how we implement most of our models. Let’s say you’re trying to track surgical
0:11:03 tools, you know, you have to think about all of the challenges that come with that in surgical
0:11:10 videos, which are going to be obstructions, and sometimes you have, you know, an explosion of blood
0:11:16 or something like that. And you want to be able to track the tools without losing them while
0:11:21 dealing with these problems, which are pretty similar to other computer vision problems.
0:11:28 But it is slightly more challenging because of how messy these environments are.
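To illustrate the tracking-through-occlusion problem described above, here is a minimal sketch of frame-to-frame association with a short "patience" window, so a tool is not dropped the moment blood or tissue hides it. The `detect_tools` call is a placeholder for whatever per-frame detector is actually used; none of this is SDSC's real code.

```python
from dataclasses import dataclass


@dataclass
class Track:
    track_id: int
    box: tuple          # (x1, y1, x2, y2) from the last confident detection
    missed: int = 0     # consecutive frames with no matching detection


def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0


def update_tracks(tracks, detections, next_id, iou_thresh=0.3, patience=15):
    """Greedy IoU matching; a track survives `patience` missed frames (occlusion)."""
    unmatched = list(detections)
    for tr in tracks:
        best = max(unmatched, key=lambda d: iou(tr.box, d), default=None)
        if best is not None and iou(tr.box, best) >= iou_thresh:
            tr.box, tr.missed = best, 0
            unmatched.remove(best)
        else:
            tr.missed += 1                      # likely occluded: keep the track alive
    tracks = [t for t in tracks if t.missed <= patience]
    for det in unmatched:                       # new tool entering the field of view
        tracks.append(Track(next_id, det))
        next_id += 1
    return tracks, next_id


# Hypothetical loop, where detect_tools(frame) returns a list of boxes per frame:
# tracks, next_id = [], 0
# for frame in frames:
#     tracks, next_id = update_tracks(tracks, detect_tools(frame), next_id)
```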
0:11:35 Sure, I can only imagine. And so from a technical perspective, you know, obviously there are,
0:11:40 we’re hearing all the time these days about video models in the news kind of being opened up for
0:11:44 you know, consumer use, that kind of thing. You’ve been working with the Data Science Collective
0:11:51 for going on two years now? Is that right? Yes. So are you building tools
0:11:58 yourself? Are you using off-the-shelf tools and kind of modifying them to suit? Do you have partnerships
0:12:03 with other AI labs? How are you kind of fine tuning the tools to get what you need out of them?
0:12:11 A mix of all of what you just said. So, you know, we’re a pretty small team and
0:12:17 a nonprofit organization, so we will try to use the most efficient methods for us. A lot of the
0:12:23 time we’ll be reusing architectures that already exist and then fine-tuning them to our
0:12:28 needs. Combining architectures is something that we’ve done a lot, especially with the temporal
0:12:35 models. So having, you know, a mix of a CNN and a temporal architecture, or we’ve been playing
0:12:41 with vision transformers and more recently with vision-text transformers, which are the big models
0:12:49 you’re talking about here. And we will always be careful about new technologies.
0:12:54 We want to try them, and we want to make sure that we stay, you know, on top of the innovation
0:12:59 that is happening, to see if we can apply it to the surgical data science field.
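A "mix of a CNN and a temporal architecture" of the kind mentioned above might look roughly like this in PyTorch: a frozen image backbone produces one embedding per sampled frame, and a small transformer encoder models the sequence, here framed as classifying the surgical phase of a clip. The backbone choice, dimensions, and the phase-classification task are illustrative assumptions, not SDSC's actual models.

```python
import torch
import torch.nn as nn
from torchvision import models


class ClipPhaseClassifier(nn.Module):
    """CNN per-frame features + transformer over time -> one label per clip."""

    def __init__(self, num_phases=7, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()              # keep the 512-d pooled features
        for p in backbone.parameters():          # start simple: freeze the CNN
            p.requires_grad = False
        self.backbone = backbone
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_phases)

    def forward(self, clip):                     # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))        # (B*T, 512)
        feats = feats.reshape(b, t, -1)                  # (B, T, 512)
        feats = self.temporal(feats)                     # temporal context across frames
        return self.head(feats.mean(dim=1))              # (B, num_phases)


# Hypothetical usage on a clip of 16 sampled frames:
# logits = ClipPhaseClassifier()(torch.randn(2, 16, 3, 224, 224))
```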
0:13:04 Something that is quite interesting and challenging with this kind of data is that it requires a lot of expertise
0:13:11 that, as computer scientists and engineers, we don’t have. So we need to work very closely
0:13:16 with clinicians and surgical experts. And that’s where the most important part of the work is
0:13:24 happening, actually, not even the model architecture or the new cool AI tools. For us, it’s really
0:13:31 understanding what the expertise is and then what model to apply to bring the information that
0:13:37 will be useful to the surgeons. Can you maybe walk us through an example of, and correct me if I’m
0:13:42 wrong here, but I imagine you have partnerships with surgeons and other medical professionals
0:13:50 and institutions and are you sending them images or videos to just kind of analyze and let you know
0:13:55 what they see or kind of what’s the, I guess I’m wondering what the process is or what it’s like
0:14:02 getting from footage, and people reviewing the footage, to then an outcome that other practitioners
0:14:07 can benefit from, whether it’s, you know, a new technique or refining a best practice or something
0:14:14 like that. So for most of our collaborations, we will work with clinicians and surgeons who have
0:14:19 videos, but they don’t have the computer science knowledge. So they will come to us to do all of
0:14:26 the computer vision and analysis. So when they start working with us on these projects, maybe I
0:14:33 can go through a concrete example. We’ve been working with several NGOs who are focused on
0:14:41 surgical education. One of them is called All Safe and they focus on teaching surgical procedures to
0:14:47 several students all across low income countries and they do it through a digital platform. So
0:14:53 it’s online courses and then the review is done through videos. And so what we’re trying to see
0:14:59 here is, can we analyze these videos and give feedback to the students with computer vision,
0:15:05 feedback that is useful? That is the important point: that it is useful. So then we will work
0:15:10 on developing the computer vision models to extract the features that we need
0:15:16 for the analysis, and then collaborate with the clinicians and the surgeons on what
0:15:22 exactly they need to have in the feedback, or what they believe is something we should focus
0:15:28 on because most of the time, you know, we’re going to look at something and I’m going to think with
0:15:32 my engineer mind. Oh, I’m going to look at this feature and I’m going to make this graph and
0:15:39 it’s going to be amazing. And then I show it to the surgeons and they’re like, what? So that’s why
0:15:46 it’s called the Surgical Data Science Collective, because the first step is creating a community
0:15:53 with the clinicians and the computer science experts. And we also have some collaboration
0:16:00 with computer scientist groups, where we will work with them to analyze some of the videos we
0:16:06 have. So that is almost a connection: we have surgeons who want to do something very specific,
0:16:12 and we have computer scientist collaborators who can help us do that specific task that maybe we
0:16:18 don’t have bandwidth for. So we are trying to expand that part of our community as well to really,
0:16:24 you know, have a real impact and scale that, because it’s really going to be a whole community
0:16:32 effort. I’m speaking with Margaux Masson-Forsythe. Margaux is the director of machine learning at
0:16:39 the Surgical Data Science Collective, a nonprofit that is using AI and machine learning tools to analyze
0:16:45 video data from surgeries to develop tools and feedback loops and other mechanisms that can
0:16:51 help surgeons with insights and feedback on their procedures and techniques and really just
0:16:56 bring better health care to more people across the globe. As Margaux was just talking about,
0:17:02 you mentioned, you know, being a nonprofit doing AI research is a little bit unusual right now.
0:17:06 What is that like? Are there big things that, well, I mean, I would imagine
0:17:11 resources are an issue, as they are for almost all nonprofits. But
0:17:17 are there things specific to being a nonprofit AI kind of research group
0:17:24 that stick out to you? So there are many interesting aspects that come from being a nonprofit. Like you
0:17:30 said, resources are indeed limited. So we have to be creative in the way we train computer vision
0:17:36 models. We will always start simple, which is actually something I’ve always done and advocated
0:17:41 for: if you want to start a computer vision project, maybe you don’t need to start with the
0:17:47 biggest model that exists. You know, start simple with a small data set, do a proof of concept,
0:17:54 and then iterate. So that is what we have as a development pipeline and research pipeline
0:18:02 process. We will always start simple and small and then scale. And that is limited because of
0:18:07 our resources, obviously. But to me, it’s something that is actually good. I would
0:18:13 do the same if I had, you know, 10x the budget. I would probably do the same, but it helps in that way.
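In practice, "start simple, then iterate" often means a cheap linear probe before any deep training: compute embeddings once with an off-the-shelf backbone, fit a lightweight classifier on a few hundred labeled frames, and only move to the heavier temporal models if there is signal. A hedged sketch on synthetic stand-in data, not a real SDSC experiment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in data: in reality X would be embeddings of a few hundred labeled
# frames from a pretrained backbone, and y the clinician-provided labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 512))              # 400 frames, 512-d features
y = rng.integers(0, 3, size=400)             # e.g. 3 coarse phase labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, probe.predict(X_te)))
# If even this cheap baseline shows signal, it justifies scaling up to the
# temporal models discussed earlier; if not, revisit the labels or the task.
```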
0:18:20 And then it brings a lot of different projects. Being a nonprofit, we’re able to work on a project
0:18:25 that maybe we wouldn’t be able to work on if we were a for-profit. For sure, it would
0:18:31 actually be completely different. And that’s why I really wanted to give SDSC a shot when I first
0:18:37 met Dr. Donoho, because I was curious about it. I was like, how are we gonna do that? You know,
0:18:43 I’ve never done AI, I’ve never seen AI done, in a nonprofit. There are some others, but it’s
0:18:49 really research focused and community focused, which you wouldn’t really be able to do as well
0:18:55 in a for-profit, I believe. I’m gonna ask this, and answer, please, based on what’s happened so far
0:19:00 and/or, you know, what you see coming in the near future. What are some of the big benefits
0:19:06 for clinicians, for patients that you’ve seen or expect to see from, you know, not just the work
0:19:11 that you’re doing at the collective, but more broadly leveraging AI to help with the surgical
0:19:17 process? The AI field will bring a lot of new and good things to the medical field, I believe.
0:19:25 In the surgical space, which is what I’ve been exposed to mostly, it will bring a lot of standardization.
0:19:31 I believe something I’ve discovered working in that field is that every surgeon and every hospital
0:19:37 will perform surgeries and procedures in different ways, and no one really knows
0:19:45 the ABCs of how you’re supposed to do a specific procedure. So by having a tool here, AI, to first
0:19:50 encourage people to collect the data. So first, we’re gonna get the surgical videos, we’re gonna
0:19:56 finally start looking at these videos that are not being looked at, and then share them between
0:20:02 surgeons all across the globe, that will bring a lot of standardization, or at least they will
0:20:07 start to talk to each other, which I think is kind of beautiful, because right now, they don’t really
0:20:13 have a good way to talk to each other. And through the surgical videos, the hope is that they will
0:20:18 start to talk to each other. And you can imagine so many applications, for example, the one
0:20:25 that always comes back is education. Instead of using a medical textbook with drawings, the students
0:20:30 can watch a tutorial on how to do the specific procedure. So that’s a big difference that I
0:20:37 think will change a lot of things. And then being able to find best practices through this
0:20:44 analysis of surgical videos is gonna be pretty interesting, because who knows what is in these
0:20:51 videos. And there’s so much that has to be discovered, and there is a big need to be creative
0:20:56 when we think about this data, because no one has ever looked at this data, and no one has ever
0:21:02 really thought about what can we do with all of that, and what is the question that I want
0:21:08 answered. And that’s one of our challenges, actually: sometimes we ask surgeons, “Oh,
0:21:13 what do you want to answer through all of these videos that you have?” And they don’t really know,
0:21:18 because they haven’t had this option before. Right. That’s interesting. It makes me think of
0:21:24 the examples I mentioned of MRI and scan analysis and cardiac care and that
0:21:32 kind of thing. And I’m thinking about the AI tools being able to help practitioners find differences
0:21:38 in cells on a very, very sort of nano basis, right? But even with that, I’m thinking, “Oh,
0:21:42 well, they know what they’re looking for.” Or even if they’re looking for an anomaly,
0:21:48 it’s still kind of, we know what we’re looking for. But yeah, with surgery, I was very kind of naive,
0:21:53 not knowing much about the field, coming into this conversation thinking, “Oh, well, video footage
0:22:00 is being used to train AI systems. Are we moving towards better education for humans or even training
0:22:04 robotic surgical algorithms or that kind of thing?” But it’s fascinating to hear you say that, and it
0:22:10 makes sense to me as a non-surgeon: what would they be looking for? It’s not the same as looking
0:22:16 for an anomaly in a cell that might stick out. I think at the beginning, probably, when
0:22:23 they first started to analyze MRIs with AI, they also had to be creative, because someone had to be
0:22:28 asking these questions. And for surgical videos, one of the first steps would be to look
0:22:33 at anomalies, which is actually what we’re trying to do now: what are the outliers? Who is using this tool
0:22:38 when no one else is using it for the same procedure? So we are kind of starting with the low-hanging
0:22:46 fruit, I guess, but the deeper existential questions are not there yet. And I’m really excited to work
0:22:51 with the clinicians to help them come up with these questions by showing them the data because
0:22:56 no one else is going to come up with these questions. It has to be the people who are working
0:23:02 every day in the OR. And actually, the videos are a really great source of data, but there is so much
0:23:07 more going on. Obviously, there is the patient data, there’s the patient outcomes, there is
0:23:13 everything that is going on in the operating room. And all our engineers have actually been
0:23:19 in the operating room so that they understand what is happening behind that camera. And I’ve
0:23:25 been in the operating room a couple of times now. And it’s really helped me understand better
0:23:29 what is happening. And sometimes when we have a new procedure type that we’re exposed to,
0:23:35 I go to the OR because I want to understand better. I have some random questions sometimes,
0:23:40 like, where are you? Where is it in the body? Or how many people are operating? Because sometimes
0:23:45 you have more than one surgeon. There are just so many things that you don’t capture in the video,
0:23:48 but there’s still obviously a lot of information in the videos.
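The "who is using this tool when no one else is" question from a moment ago is, computationally, a fairly standard outlier-detection problem once each video is summarized as a feature vector, for example the fraction of the procedure during which each tool class is visible. A minimal sketch with scikit-learn on made-up summary vectors rather than real SDSC data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Stand-in summaries: one row per video, one column per tool class, values are
# the fraction of frames in which that tool was detected. Real vectors would
# come from the tracking and detection models discussed earlier.
rng = np.random.default_rng(0)
usage = rng.dirichlet([4, 3, 2, 1, 1], size=200)         # 200 videos, 5 tool classes
usage[7] = [0.05, 0.05, 0.05, 0.05, 0.80]                # plant one unusual case

model = IsolationForest(contamination=0.02, random_state=0).fit(usage)
flags = model.predict(usage)                  # -1 marks a potential outlier
outlier_videos = np.where(flags == -1)[0]
print("Videos worth a second look from a clinician:", outlier_videos)
```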
0:23:55 Fantastic. Margaux, for listeners who would like to learn more, or hopefully perhaps there are even
0:24:00 some surgeons, some clinicians listening who are thinking, oh, I have surgical video that’s, you know,
0:24:05 on a shelf on a drive somewhere, maybe I can send it in and help out the cause. Where can listeners go
0:24:09 to find out more about the work that the Surgical Data Science Collective is doing,
0:24:14 the work that you are doing, perhaps to get involved as a partner? Who knows? Where can listeners go
0:24:21 to learn more? So our website is thesurgicalvideo.io, and you can also find us on social media
0:24:28 at Surgical Data Science Collective. And I would also encourage anyone who is a computer engineer
0:24:32 or computer scientist who wants to work on a different kind of project than they’ve been working on
0:24:37 and is interested in surgical AI to also reach out to us, because we are working with quite a lot
0:24:42 of different partners on this. So anyone who is interested should reach out to us.
0:24:47 Fantastic. Well, Margaux, thank you so much for taking the time to stop by, join the podcast,
0:24:52 and talk a lot about the work you’re doing. I don’t know, stories like this, where the technical
0:24:57 aspects kind of match up with the societal impact, I think are just fantastic stories.
0:25:01 There’s sort of something for everybody, right? And it sounds like you’re finding a really interesting
0:25:06 path to fuse your technical interests with making an impact in your own work. So congratulations
0:25:11 and all the best to you and all of your partners and cohorts at the collective.
0:25:15 Well, thank you, Noah. Thanks for having me on the podcast. I really enjoyed the conversation.
0:25:17 Me too, our pleasure.
0:25:21 [Music]

Margaux Masson-Forsythe, director of machine learning at the Surgical Data Science Collective (SDSC), discusses how AI-driven video analysis is transforming surgical training and practice, making surgery safer and more accessible to billions of people worldwide.
