AI explained: AI and product liability in life sciences
The rapid integration of AI and machine learning in the medical device industry offers exciting capabilities but also new forms of liability. Join us as we delve into the surge in AI-enabled medical devices. Product liability lawyers Mildred Segura, Jamie Lanphear and Christian Castile focus on AI-related issues likely to impact drug and device makers soon. They also preview how courts may determine liability when AI decision-making and other functions fail to deliver the desired outcomes. Don't miss this opportunity to gain valuable insights into the future of health care.
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Mildred: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, I, Mildred Segura, a partner here at Reed Smith in the Life Sciences Practice Group, along with my colleagues, Jamie Lanphear and Christian Castile, will be focusing on AI and its intersection with product liability within the life sciences space. Especially as we see more and more uses of AI in this space, there's a lot of activity going on in the regulatory and legislative landscapes, but not a lot of discussion about product liability and its implications for companies doing business in this space. So that's what prompted our desire and interest in putting together this podcast for you all. And with that, I'll have my colleagues briefly introduce themselves. Jamie, why don't you go ahead and start?
Jamie: Thanks, Mildred. I'm Jamie Lanphear. I am of counsel at Reed Smith, based in Washington, D.C., in the Life Sciences and Health Industry Group. I've spent the last 10 years defending manufacturers in product liability litigation, primarily in the medical device and pharma space. I think, like you said, this is just a really interesting topic. It's a new topic, and it's one that hasn't gotten a lot of attention or a lot of airtime. You know, you go to conferences these days and AI is sort of front and center in a lot of the presentations and webinars, and much of the discussion is around, you know, regulatory, cybersecurity and privacy issues. And I think that, in the coming years, we're going to start to see product liability litigation in the AI medical device space that we haven't seen before. Christian, did you want to go ahead and introduce yourself?
Christian: Yeah, thanks, Jamie. Thanks, Mildred. My name is Christian Castile. I am an associate at Reed Smith in the Philadelphia office. And much like Mildred and Jamie, my practice consists primarily of working alongside medical device and pharmaceutical manufacturers in product liability lawsuits. And Jamie, I think what you mentioned is so on point. It feels like everybody's talking about AI right now. And to a certain extent, I think that can be intimidating, but we actually are at a really interesting vantage point, with the opportunity to get in on the ground floor of some of this technology and how it is going to shape the legal profession. And so, you know, as the technology advances, we're going to see new use cases popping up across industries, and of course, of interest to this group in particular is the healthcare space. So it's really exciting to be able to grapple with this headfirst, and the people who are investing in this now are going to have a real leg up when it comes to evaluating their risk.
Mildred: So thanks, Jamie and Christian, for those introductions. As we said at the outset, we're all product liability litigators, and based on what we're seeing, AI product liability is the next wave of product liability litigation on the horizon for those in the life sciences space. We're thinking very deeply about these issues and working with clients on them because of what we see on the horizon and what we're already seeing in other spaces in terms of litigation. And that's what we're here to discuss today: the developments we're seeing in product liability litigation in these other spaces and the significant impact that litigation may represent for those of us in the life sciences space. And to level set our discussion today, we thought it would be helpful to briefly describe the kinds of AI-enabled med tech or medical devices that we're seeing currently out there on the market. And I know, Jamie, you and I were talking about this in preparation for today's podcast, in terms of FDA-cleared devices. What are the metrics that we're seeing with respect to that and the types of AI-enabled technology?
Jamie: Sure. So, we've seen a huge uptick in the number of medical devices that are incorporating artificial intelligence and machine learning. There are currently around 900 of those devices on the market in the United States, and more than 150 of those were authorized by FDA just in the last year. So, we're definitely seeing a growing number, and we can expect to see a lot more in the years to come. The majority of these devices, about 75%, are in the field of radiology. So, for example, we now have algorithms that can assist radiologists when they're reviewing a CT scan of a patient's chest and highlight potential nodules that the radiologist should review. We see similar technology being used to detect cancer. So there are algorithms that can identify cancerous nodules or lesions that may not even be visible to a radiologist because they are undetectable by the human eye. And the other areas where we're seeing these devices being used are cardiology and neurology.
Mildred: And I would add to that, we're also seeing it with respect to surgical robots, right? And even though we don't have fully autonomous surgical robots out there on the market, we do have some forms of surgical robots. And I think it's just on the horizon that we'll start to see, in the near future, these surgical robots using artificial intelligence-driven algorithms. And just the thought of that, right, that we're moving in that direction, I think, makes this discussion so important. And not just in the medical device arena, but also within the pharma space, where you're seeing the use of artificial intelligence to speed up and improve clinical development, drug discovery, and other areas. So you can see where the risks lie just within that space alone, in addition to medical devices. And Christian, I know that you've been looking at other areas as well, so I wanted you to tell us a little bit about those.
Christian: Sure. Yeah, very similar to the medical device space, there is a lot of really exciting room for growth and opportunity in the pharmaceutical space. We're seeing more and more technologies coming out that focus on streamlining things like drug discovery, using machine learning models, for example, to assist with identifying which molecules are going to be the most optimal to use in pharmaceutical products, but also with identifying mechanisms of action, to help explain some of the medicines and disease states that we're not able to explain as well today. And then looking even more broadly, right? So you have, of course, these very specific use cases tied to the pharmaceutical products that we're talking about. But even more broadly, you'll see companies integrating AI into things like manufacturing processes, for example, and really working on driving the efficiency of the business, both from a product development standpoint and from a product production standpoint as well. So there's lots of opportunity here to get involved in the AI space and lots of ways to grapple with how to best integrate it into a business.
Mildred: And I think that brings us to the question of what product liability is. For those listeners who may not be as familiar with the law of product liability, just to level set here too: typically we're talking about three common types of product liability claims, right? You have your design defect, manufacturing defect, and failure to warn claims. Those are the typical claims that we see. And each of these claims is premised on a product that leaves a manufacturer's facility with the defect in place, either in the product or in the warning. These theories fit neatly for products that remain unchanged from the moment they leave the manufacturer's facility, such as consumer goods that are sold at retail. But what about when you start incorporating AI and machine learning technologies into these types of devices, which are going to be learning and adapting? What does that mean for these types of product liability claims? And what is the impact? How will the courts address and assess these claims as they start to see these types of devices, and claims being made related to these technologies? And I think the key question that will come up, right, in the context of a product liability suit, is whether this AI-related technology is even a product. Historically, courts have viewed software as a service that is not subject to product liability causes of action. However, that approach may be evolving to reflect the fact that most products today contain software or are composed entirely of software. We're seeing some litigation in other spaces, which Jamie will touch on in a little bit, showing a change in the trend we had been seeing, now moving in a different direction, which is something that we want to talk about. So maybe, Jamie, why don't you share a little bit about what we're seeing in connection with product liability claims in other spaces that may inform what happens in the life sciences space.
Jamie: Yeah. So there have been a few cases and decisions over the last few years that I think help inform what we can expect to see with respect to products liability claims in the life sciences space, particularly around devices that incorporate artificial intelligence and software. One of those cases is the social media products liability MDL out of the Northern District of California. There you have plaintiffs who have filed suit on behalf of minors, alleging that operators of various social media platforms designed these platforms to intentionally addict children, and that this has allegedly resulted in a number of mental health issues and the sexual exploitation of minors. Now, last year, the defendants filed a motion to dismiss, and there were a lot of issues addressed in that motion, a lot of arguments made. We don't have time to go through all of them. But the one I do want to talk about that is relevant to our discussion today is the defendants' argument that their social media platforms are not products, they're services, and as such, they should not be subject to product liability claims. And that argument is really in line with the historical approach courts have taken toward software, meaning that software has generally been considered a service, not a product, so software developers have generally not been subject to product liability claims. And so that's what the defendants argued in their motion: that they were providing a platform where users could come, create content, share ideas; they weren't over in a warehouse making a good, distributing it to the general public, et cetera. The court did not agree. The court rejected the defendants' argument and refused to take what it called an all-or-nothing approach to evaluating whether the plaintiffs' design defect claims could proceed. Instead, the court took a more nuanced approach: it looked at the specific functions of these platforms that the plaintiffs were alleging were defective and evaluated whether each was more akin to tangible personal property or to ideas and content. So, for example, one of the claims the plaintiffs made was that the platforms lacked adequate parental controls and age verification. And so the court looked at, you know, what is the purpose of parental controls and age verification? The court said this has nothing to do with sharing ideas; it's more like products that contain parental controls, such as a prescription medicine bottle. And the court went through this analysis for each of the other allegedly defective functions. And interestingly, for each, it concluded that the plaintiffs' product liability claims could proceed. And so what I think is huge to take away from this particular decision is that the court really moved away from the traditional approach courts have taken toward software with respect to product liability. And I think this really opens the door for more courts to do the same, specifically to expand products liability law, strict products liability, to various types of software and software functions, such that the developers of the software can potentially be held liable for the software they're developing. And while there have been a few one-off cases over the years, mostly in state court, in which the court has found that products liability law does apply to software, here we have a huge MDL with a significant number of plaintiffs in federal court.
And I think this case, or this decision at least, is going to have a huge impact on future litigation.
Mildred: And I think that's all really helpful, Jamie, in terms of the way you laid out the court's analysis. One important thing to highlight is that in this particular case, the plaintiffs brought their causes of action both in strict liability and in negligence. The reason that's important, and why it's of concern, is that typically plaintiffs might bring these types of claims under a negligence standard, which involves a reasonable person standard and assessing whether there was a duty to warn. And the court did look at some of that, you know, was there a duty here to the plaintiffs. But they also brought strict liability, which is the one you don't typically see asserted in cases involving software applications. And so the fact that you're seeing plaintiffs moving in this direction, asserting strict product liability claims in addition to negligence, which is what you would typically see, I think, is what is worth paying attention to. And this decision was at the motion to dismiss stage, so it will be interesting to see how it unfolds as the case moves forward through discovery and ultimately summary judgment. And it's not the only case out there; there are other cases as well that are grappling with these issues. But in this particular case, as Jamie noted, the analysis was very detailed and very nuanced in terms of how the court got to where it did. It did a very thoughtful analysis, going through: is it software or a product? Once it answered that question, it moved, as Jamie noted, to analyzing each of the product claims being asserted. With failure to warn, it didn't really dive in because of the way that claim had been pleaded. But nevertheless, it's still a very important decision from our perspective. And that was within this product liability context. We've also seen other developments in the case law involving cases alleging design defect, not necessarily in the product liability context, but more so in the consumer protection space, if you will. Specifically, there's one case that we were talking about in preparation for this podcast involving a certain type of technology. What was it, Jamie, the specific technology at issue?
Jamie: Yeah, so the Roots case is extremely interesting. And although it's a consumer protection case, not a products case, I do think that it foreshadows the types of new theories that we can expect to see in products liability litigation involving devices that incorporate software, artificial intelligence, and machine learning. So Roots Community Center is a California state case in which a community health center filed suit against manufacturers, developers, distributors, and sellers of pulse oximeters, which are those devices that measure the amount of oxygen in your blood. The plaintiffs are alleging that these devices do not properly measure oxygen levels in people with darker skin, that the level of skin pigmentation can and does affect the output these devices generate by overestimating the oxygen level for these individuals, and that by doing so, these individuals think they have more oxygen than they do, they appear healthier than they are, and they may not seek or receive the appropriate care as a result. And the reason for this, according to plaintiffs, is that the developers of the software, when they were developing this device, did not take into account the impact that skin color could have; they essentially drew from data sets that were primarily white, and as such, they got results that largely apply to white folks. And so this issue of bias, right, is not one that I've ever seen raised as a theory of defect in a products case. And again, this isn't a products case, but I do expect to see this theory in products cases involving medical devices that incorporate artificial intelligence. You know, the FDA has been very clear that bias and health equity are at the forefront of its efforts to develop guidelines and procedures specific to artificial intelligence and machine learning-enabled devices, particularly given that the algorithms depend on the data being used to generate output. And if the data is not reflective of the population who will be using the device and inclusive of groups like women, people of color, et cetera, the outputs for these groups may not be accurate.
Mildred: And what about with respect to failure to warn? We know as product liability litigators that one of our typical defenses to a failure to warn claim is the learned intermediary doctrine, right? Which means that a manufacturer's duty to warn runs to the physician: you're supposed to provide adequate warnings to the physician to enable them to discuss the risks and benefits of a given device or pharmaceutical or treatment with the patient. That's in the case of prescription medical devices or prescription pharmaceuticals, right? But what happens to that defense when you start incorporating AI and these machine learning technologies into a product, whether it's a medical device or in the pharma space? Are you seeing anything that would change your mind about whether the learned intermediary doctrine is here to stay? If you ask me, I would say that based on what we're seeing so far, whether it's within the social media context or even cases in the life sciences space that may not be specific to AI or machine learning, the fact that we're not yet at the stage where the technology is fully autonomous, that it's more assistive, augmenting what a physician is doing, means you will still have this learned intermediary between the patient and the manufacturer who can say, perhaps this treatment, whether it's through a medical device or a pharmaceutical, is using this technology; here's what it will be doing for you; this is the way it will function, et cetera. But does that mean the manufacturer will have to make sure they're providing clear instructions to the physician? I think the answer to that is yes. And that's something that the FDA, through the guidance it's put out, is looking at and has spoken to. Jamie, to your point, right, not only with respect to bias, they're also looking to ensure that, to the extent these technologies are being incorporated, instructions related to their use are adequate for the end user, which in many cases is the physician. But it also does raise questions, as these technologies get more sophisticated, about who will be liable, right? What happens when you do start to see a more fully autonomous system that might be making decisions the physician just doesn't have the capacity to unpack, for instance, or fully evaluate? Who's responsible then? And I think that may explain a little bit of the reticence on the part of physicians to adopt these technologies. Ultimately, it's all about transparency: having clear, adequate information so they feel comfortable not only using the technology, but also knowing who ultimately will be responsible if, God forbid, something goes undetected, or the technology is telling the doctor to do something and the doctor overrides it, situations like that. And so I don't know if you all have any additional thoughts on that.
Christian: It's very interesting, right, this learned intermediary concept, particularly because as we see this technology grow, we're going to see the bounds of this doctrine get stretched a little bit. And to your point, Mildred, transparency here is going to be important, not only with respect to who is making those decisions, but also with respect to how those decisions are being made. So when you're talking about things like how the AI is working and how the algorithms underlying this technology are coming to the conclusions that they are, it's going to be really important in this warning context that everybody involved is able to understand specifically how the AI is being integrated into these products and what that means. What is the AI doing? How is it doing it? And how does that translate to the medical service or the benefit that this product or pharmaceutical is providing? That's all interesting and, I think, going to be incorporated in novel ways as we move forward.
Mildred: And Christian, sort of related to that, you mentioned regulation and guidance specific to FDA. What would you say about that in terms of how it goes hand in hand with product liability?
Christian: Absolutely. So, as we see increased levels of regulation and increased regulatory attention on this topic, I think one aspect that's going to be really critical to keep in mind is that the regulations that come out, especially at this beginning stage when we're still coming to a better understanding of the technology itself, are really going to represent the floor rather than the ceiling. And so it's going to be important for companies who are working in this space and thinking about integrating these technologies to think about how they can come into compliance with these regulations, but also what very specific concerns might be raised above and beyond those regulations. And so, Jamie, you were talking about some of these social media cases where some of the injuries alleged are very specific to subpopulations of users on these social media platforms. How are we, for example, going to address the vulnerabilities in the population that our products are being marketed to while staying in compliance with the regulations as well? That interplay is going to be really, really interesting to watch: to what degree these legal theories are stretched above and beyond what we're used to seeing, and how that will impact understanding of the way the regulations are integrated into the business.
Jamie: You raise a great point, Christian, with respect to the regulations being a floor rather than a ceiling. I think there are a lot of companies out there that reasonably think that as long as they're following the regulations and doing what they're supposed to be doing on that front, their risk in litigation is minimal or maybe even non-existent. But as we know, that's not the case. A medical device manufacturer can do everything right with respect to complying with FDA regulations and still be found liable in a courtroom. You know, plaintiffs' lawyers often come up with pretty creative theories to put in front of a jury regarding the number of things the manufacturer, and I have my air quotes going over here, could or should have done but didn't. And these are often things that are not legally required or even practical sometimes. And ultimately, at least with respect to negligence, it's up to the fact finder to decide if the manufacturer acted reasonably. And while this question often involves considerations of whether the manufacturer complied with regulations and guidances and the like, compliance, even complete compliance, is not a bar to liability. And as product liability litigators, we see plaintiffs relying on a lot of the same theories, a lot of the same types of evidence, and a lot of the same arguments. And so having that base of knowledge and being able to share that with manufacturers and say, hey, look, I know we're not there yet, I know this litigation isn't happening today, but here are maybe some things that you can do to help mitigate your potential future risks or defend against these types of cases later on. You know, that's one of the reasons why we wanted to start this conversation.
Mildred: Yeah, and I would definitely echo that, Jamie, because as Christian mentioned, the guidance being put out by FDA, for instance, really is the floor in many ways and not the ceiling. It's about looking at the guidance to provide input and insight into, okay, here's what we should be doing with respect to the design of this algorithm that will then be used for this clinical trial or to deliver this specific type of treatment, right, as you illustrated with the Roots case involving the allegation of bias in pulse oximeters, and really looking at mitigating the potential risk that is foreseeable and can be identified. Obviously, not every risk may be identifiable, and that all gets into the negligence standard in terms of what is foreseeable and what isn't. But when you're dealing with these very sophisticated, complex technologies, these questions that we're so used to dealing with in your normal product liability case, I think, will get more complex and nuanced as we start to see these types of cases within the life sciences space. We're already starting to see it within the social media context, as Jamie touched on earlier. And because we're getting close to the end of our podcast, I would say some key takeaways are: monitor the case law, even if it's not in the life sciences space, for instance, the social media space and what's going on there, as well as other areas. Monitor what's going on in the regulatory space, because clearly we have a lot going on there, and not just FDA; you also have the Federal Trade Commission issuing guidance and speaking to these issues. And if you don't have a prescription medical device that's governed by FDA, then you most certainly need to look at what other agencies govern your specific technology, especially if you're partnering with a medical device manufacturer or pharmaceutical company. And of course, legislation: we know there's a lot of activity, both at the federal and the state level, with respect to the regulation of AI. So there too, we have our eye on what, if any, legislation is coming out and how it will impact product liability. And Jamie and Christian, I know you guys have other thoughts on some of the key takeaways.
Jamie: Yeah. So I want to just return to this issue of bias and the importance of manufacturers making sure that they're looking at and taking into account the available knowledge, whether in scientific journals or medical literature, et cetera, related to how factors like race, age, and gender impact the medical risks, diagnosis, monitoring, and treatment of the condition that a particular device is intended to diagnose or treat. Monitoring those things is going to be really important. I also think being really diligent about investigating and documenting the reasons for making certain decisions typically helps. Not always, but usually in litigation, being able to show documentation explaining the basis for decisions that were made can be extremely helpful. So, for example, the FDA put out a guidance document related to a predetermined change control plan, which is something that was developed specifically for medical devices that incorporate artificial intelligence and machine learning. The plan is intended to set forth the modifications that manufacturers intend or anticipate will occur over time as the device develops and the algorithm learns and changes post-market. And one of the recommendations in that guidance is that the manufacturer engage with FDA early, before they submit the plan, to discuss the modifications that will be included. Now, it's not a requirement, but I expect that if a company elects not to do this, plaintiff's counsel in a products case would say that is evidence that the manufacturer was not reasonable, that the manufacturer could and should have talked to FDA, gotten FDA input, but didn't want to do that. Whereas if the manufacturer does do it, and there's evidence of discussions with the FDA and, even better, FDA's agreement with what the manufacturer ended up putting in its plan, that would be extremely useful to help defend against a products case, because you're essentially showing the jury that, hey, this manufacturer talked with FDA, ran the plan by FDA, FDA agreed, the company did what even FDA thought was right. Now, that wouldn't be a bar to liability, right? It's not going to completely immunize a manufacturer, but it is good evidence to support that the company acted reasonably at the time and under the circumstances.
Christian: Yeah. And I would just add, you know, Jamie, you touched on so many of the important aspects here. I think the only thing I would add at this point is the importance of making sure that you understand the technology that you're integrating. And this goes hand in hand, Jamie, with much of what you just said about understanding who's making the decisions and why. Investing the energy upfront into ensuring that you're comfortable with the technology and how it works will allow you, moving down the line, to be much more efficient in the way that you respond, whether that's to regulatory modifications or to legal risk. It will just put you in a much stronger position if you are able to really explain and understand what that technology is doing.
Mildred: And I think that's the key, Christian, as you said: being able to explain how it was tested, that the testing was robust, right? Yes, of course, it met the guidance, and if there's a regulation, even better. But also that all reasonable measures were taken to ensure that this technology being used is safe and effective, and that you tried to identify all of the potential risks that could be known based on the anticipated way the technology works. So with that, of course, we could do a whole podcast on this topic alone with respect to mitigating the risk, and I think that will be a topic we focus on in one of our subsequent podcasts. But unless Jamie or Christian, you have any other thoughts, that brings us to the end of our podcast.
Jamie: I think that pretty much covers it. Of course, there's a lot more detail we could get into with respect to the various theories of liability and what we're seeing and the developments in the case law and steps companies can be taking now, but maybe we can save that for another podcast.
Christian: And I completely agree. I think there's going to be so much to dig into over the next few months and years to come, so we're looking forward to it. Thank you, everybody, for listening to this episode of Tech Law Talks, and thank you for joining Mildred, Jamie, and me as we explore the dynamics between AI technologies and the product liability legal landscape. Stay connected by listening to this podcast moving forward; we're looking forward to putting out new episodes about AI and other emerging technologies. And we look forward to speaking with you soon.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.