AI explained: AI and the impact on medical devices in the EU


Regulatory lawyers Cynthia O’Donoghue and Wim Vandenberghe explore the European Union’s newly promulgated AI Act and its implications for medical device manufacturers. They examine the new opportunities being created by AI, but they also warn that medical device researchers and manufacturers have special responsibilities if they use AI to discover new products and care protocols. Join us for an insightful conversation on AI’s impact on health care regulation in the EU.


Transcript:

Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.

Cynthia: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI and life sciences, particularly medical devices. I'm Cynthia O’Donoghue. I'm a Reed Smith partner in the London office in our emerging technology team. And I'm here today with Wim Vandenberghe. Wim, do you want to introduce yourself?

Wim: Sure, Cynthia. I'm Wim Vandenberghe, a life sciences partner out of the Brussels office, and my practice is really about regulatory and commercial contracting in the life sciences space.

Cynthia: Thanks, Wim. As I mentioned, we're here to talk about the EU AI Act that came into force on the 2nd of August, and it has various phases for when different aspects come into force. But I think a key thing for the life sciences industry, and any developer or deployer of AI, is that research and development activity is exempt from the EU AI Act. The reason is that the EU wanted to foster research, innovation and development. But the headline sounds better than the detail: if, as a result of research and development, that AI product is going to be placed on the EU market and deployed, essentially sold or used in products in the EU, it does become regulated under the EU AI Act. And there seems to be a lot of talk about the interplay between the EU AI Act and various other EU laws. So Wim, how does the AI Act interplay with the medical devices regulations, the MDR and the IVDR?

Wim: That's a good point, Cynthia. And that's, of course, where a lot of medical device companies are looking: the interplay and potential overlap between the AI Act on the one hand, which is a cross-sectoral piece of legislation that applies to all sorts of products and services, and the MDR and the IVDR on the other, which are of course only applicable to medical technologies. So in summary, both the AI Act and the MDR and IVDR will apply to AI systems, provided, of course, that those AI systems are in scope of the respective legislation. Maybe I'll start with the MDR and IVDR and then turn to the AI Act. Under the MDR and the IVDR, there are many AI solutions that are either considered to be software as a medical device in their own right, or are a part or component of a medical technology. So to the extent that an AI system, as software, meets the definition of a medical device under the MDR or the IVDR, it would actually qualify as a medical device, and the MDR or IVDR is therefore fully applicable to those AI solutions. Stating the obvious, there are plenty of AI solutions that are already on the market and being used in healthcare settings. What the AI Act focuses on, particularly with regard to medical technology, is the so-called high-risk AI systems. And for a medical technology to be a high-risk AI system under the AI Act, essentially a twofold test needs to apply. First of all, the AI solution needs to be a medical device or an in vitro diagnostic under the sector legislation, so the MDR or the IVDR, or it is a safety component of such a medical product. Safety component is not really explained in the AI Act, but think about, for example, the failure of an AI system to interpret diagnostic IVD instrument data, which could endanger the health of a person by generating false positives. That would be a safety component. So that's the first step: does the AI solution qualify as a medical device, or is it a safety component of a medical device? And the second step is that this only applies to AI solutions that actually undergo a conformity assessment by a notified body under the MDR or the IVDR. So, to make a long story short, it means that medical devices in class IIa, IIb or III will be in scope of the AI Act, and for the IVDR, for in vitro diagnostics, that would be risk classes B to D. That essentially determines the scope and applicability of the AI Act. And Cynthia, maybe coming back to your earlier point on research, the other curious thing the AI Act doesn't really foresee is that, of course, to get an approved medical device you need to do certain clinical investigations and studies on that device. You really have to test it in a real-world setting, and that happens through a clinical trial or clinical investigation. The MDR and the IVDR have elaborate rules about that, and the fact that you do this prior to getting your CE mark and your approval, and then launch the product on the market, is very standard under the MDR and the IVDR.
However, under the AI Act, which also requires CE marking and approval, and we'll come to that a little bit later, there is no mention of such clinical and performance evaluation of medical technology. So if you read the AI Act on its own, it would actually mean that you need a CE mark for such a high-risk AI system, and only then can you do your clinical assessment. And of course, that wouldn't be consistent with the MDR and the IVDR. We can talk a little bit later about consistency between the two frameworks as well. The one thing that I do see as being very new under the AI Act is everything to do with data and data governance. And I wonder, Cynthia, given your experience, whether you can talk a little bit about what the requirements are going to be for data and data governance under the AI Act?

Cynthia: Thanks, Wim. Well, the AI Act obviously defers to the GDPR, which regulates how personal data is used within the EEA member states and transferred outside the EEA, and the two have to interoperate, in the same way as you were just saying that the MDR and the IVDR need to interoperate with the EU AI Act. You touched, of course, on clinical trials, so the clinical trials regulation would also have to work and interoperate with the EU AI Act. Obviously, if you're working with medical devices, most of the time it's going to involve personal data and what is called sensitive, or special category, data concerning health about patients or participants in a clinical trial. A key part of AI is the training data. The data that goes in, that's ingested into the AI system for the purposes of a clinical trial or a medical device, needs to be as accurate as possible. And obviously the GDPR also includes a data minimization principle, so the data needs to be the minimum necessary. At the same time, that training data, depending on the situation, might be more controlled in a clinical trial. But once a product is put on the market, there could be data ingested into the AI system that has anomalies in it. You mentioned false positives, but there's also a requirement under the AI Act to ensure that the EU's ethical principles for AI, which were non-binding, are adhered to, and one of those is human oversight. So if there are anomalies in the data and the outputs from the AI would give false positives or create other issues, the EU AI Act requires, once a CE mark is obtained, just like the MDR does, that there be ongoing conformity assessment to ensure that any anomalies, and the need for human intervention, are addressed on a regular basis as part of reviewing the AI system itself. So we've talked about high-risk AI. We've talked a little bit about the overlap and interplay between the GDPR and the EU AI Act, and between the MDR and the IVDR. Let's talk about some real-world examples. The EU AI Act also classes education as potentially high risk if any kind of vocational training is assessed solely by an AI system. How does that potentially work with the way medical device organizations and pharma companies might train clinicians?

Wim: It's a good question. Normally, those kinds of programs would typically not be captured by the definition of a medical device under the MDR, so they'd most likely be out of scope, unless the program actually extends to real-life diagnosis, cure or treatment, helping the physician to make their own decision. But if it's really just about training, it would normally fall out of scope. That would be very different here with the AI Act: it would be captured and would qualify as high risk. And what that would mean is that, unlike a medical device manufacturer, which would be very used to a lot of the concepts that are used in the AI Act, manufacturers or developers of this kind of software solution wouldn't necessarily be sophisticated in the medical technology sense in terms of having a risk and quality management system, having their technical documentation verified, et cetera. So I do think that's one of those examples where there could be a bit of a mismatch between the two. You will have to see, of course, for a number of these obligations in relation to specific AI systems under the AI Act, whether it's the high-risk devices or the training systems that you mentioned, Cynthia, which sit more, I think, in Annex III of the AI Act, that the European Commission is going to produce a lot of delegated acts and guidance documents. So it remains to be seen what the Commission is going to provide in more detail about this.

Cynthia: Thanks, Wim. I mean, we've talked a lot about high-risk AI, but the EU AI Act also regulates general-purpose AI, so chatbots and those kinds of things are regulated, but in a more minimal way under the EU AI Act. What if a pharma company or a medical device company has a chatbot on its website for customer service? Obviously, there are risks in relation to data and people inputting sensitive personal data, but there's also got to be a risk in relation to users of those chatbots seeking to use that system to triage or ask questions seeking medical advice. How would that be regulated?

Wim: Yeah, that's a good point. It would ultimately come down to the intended purpose of that chatbot. Is the chatbot really just about connecting the patient or the user with a physician, who then takes it forward? Or is it a chatbot that also functions more as a kind of triage system, where the chatbot, depending on the input or answers given by the user, starts making its own assessments and already points towards a certain decision, whether a cure or treatment is required, et cetera? That would already be much more in the space of the medical device definition, whereas a general-use chatbot would not necessarily be. But it really comes down to the intended purpose of the chatbot. The one thing that is specific to an AI system, versus more standard software or a standard chatbot, is that the AI's continuous learning may actually go above and beyond the intended purpose that was initially envisaged for that chatbot. And that might be influenced, like you say, Cynthia, by the input: because the user is asking different questions, the chatbot may react differently and may actually go beyond the intended purpose. And that, I think, is going to be a very difficult point of discussion, in particular with notified bodies, in case you need to have your AI system assessed by a notified body. Under the MDR and the IVDR, a lot of the notified bodies have gone on record saying that the process of continuous learning by an AI system ultimately entails a certain risk. And to the extent that a medical device manufacturer has not described the boundaries within which the AI system can operate, that would actually mean that the system goes beyond the approval and would need to be reassessed and reconfirmed by the notified body. So I think that's going to be something to watch, and it's really not clear under the AI Act. There is a certain idea about change: to what extent, if the AI system learns and changes, do you need to seek new approval and conformity assessment? And that notion of change doesn't necessarily correspond with what is considered to be a significant change, which is the wording used under the MDR. So again, the two frameworks don't necessarily correspond here either. And maybe one point, Cynthia, on the chatbot: it wouldn't necessarily qualify as high risk, but there are still certain requirements under the AI Act, right? If I understand well, a lot of it is about transparency, that you're transparent?

Cynthia: Mm-hmm. So anyone who interacts with a chatbot needs to be aware that that's what they're interacting with. So you're right, there's a lot about transparency and also explainability. But I think you're starting to get towards something about reconformity. I mean, if a notified body has certified the use, whether it be a medical device or an AI system, and there are anomalies in that data, or it starts doing new things, potentially what is off-label use, surely that would trigger a requirement for a new conformity assessment.

Wim: Yeah, absolutely. Under the MDR it wouldn't necessarily, although the MDR now has language that if you see a trend of off-label use, you need to report it. Under the AI Act, it's really down to what kind of change you can tolerate. And off-label use, by definition, is using the device for purposes other than the intended purposes the developer had in mind. And again, if you read the AI Act strictly, that would probably indeed trigger a new conformity assessment procedure as well.

Cynthia: One of the things that we haven't talked about, though we've alluded to it in discussing off-label use and anomalies in the data, is that the EU is also working on a separate AI liability regime, which is still in draft. So I would have thought that medical device manufacturers need to be very cognizant that there's potential for increased liability under the AI Act, you've got liability under the MDR, and obviously the GDPR has its own penalty system. So this will require quite a lot of governance to try and minimize risk.

Wim: Oh, absolutely. You touched on a very good point. I wouldn't say that in Europe we're moving entirely to a claimant-friendly or class action-friendly litigation landscape, but there are a lot of new laws being put in place that might trigger that a bit. You rightly mentioned the AI Liability Directive, which is still at the draft stage, but you already have the new General Product Safety Regulation in place, you have the class actions directive as well, and you have the whistleblower directive being rolled out in all the member states. So I think all of that combined, and certainly with AI systems, does create increased risk. Certain risks a medical technology company will be very familiar with: risk management, quality management, the drafting of technical documentation, labeling, document keeping, adverse event reporting, all of that is well known. What is less well known are the use cases that we discussed, but also the overlap and the potential inconsistencies between the different legal systems, especially on data and data governance. I don't think a lot of medical technology companies are that advanced yet. And we can tell already now, when a medical device that incorporates AI software is being certified, there are some questions. There's some language in the MDR about software and continuous compliance along the lifecycle of the software, but it's not nearly as prescriptive as what will happen now with the AI Act, where you'll have a lot more requirements on data quality: what data sets were used to train the algorithm, can we have access to them, et cetera. You need to disclose that. That's certainly a big risk area. The other risk area that I would see, and again this differs maybe a bit from the MDR, is that under the AI Act it's not just about imposing requirements on the developers, which essentially are the manufacturers, but also on who are called the deployers. And the deployers are essentially the users: that could be hospitals or physicians or patients. There are now requirements being imposed on them as well, and I do think that's a novelty to some extent. So it will be interesting to see how medical device companies are going to interact with their customers, their doctors and hospitals, to guarantee continuous compliance, not just with the MDR but now also with the AI Act.

Cynthia: Thanks, Wim. That's a lot for organizations to think about, as if those things weren't complicated enough under the MDR itself. I think some of the takeaways are obviously the interplay between the MDR, the IVDR and the EU AI Act, concerns around software as a medical device and the overlap with what is an AI system, which is obviously the potential for inferences and the generating of outputs, and then concerns around transparency, being able to be open about how organizations are using AI and explaining its use. We also talked a little bit about some of the risks in relation to clinician education, off-label use, anomalies within the data, and the potential for liability. So, please feel free to get in touch if you have any questions after listening to this podcast. Thank you, Wim. We appreciate you listening, and please be aware that we will have more podcasts on AI over the coming months. Thanks again.

Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.

Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.

All rights reserved.

Transcript is auto-generated.
