In 2024, AI is everywhere, including healthcare. But how should it be regulated? And what challenges and opportunities does regulation bring? To gain some clarity on this important and ever-evolving topic, I sat down with Elisabethann Wright, a lawyer who advises BrightInsight and a true expert on all things regulatory in healthcare technology.
Jamie: In Europe, the Artificial Intelligence (AI) Act was recently passed. What can you tell me about the AI Act?
Elisabethann: The AI Act is a regulation proposed by the European Commission and adopted by the Council and the European Parliament in May 2024. Because it’s a regulation, rather than a directive, it is directly applicable in all EU member states without the need for national implementing legislation. It was enacted to regulate “AI systems.” The Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment.
The broad definition of AI system is intended to address concerns that differ from those addressed, for example, in the EU legislation governing medical devices and in vitro diagnostics (IVDs).
Jamie: What are some of the concerns that have surfaced in the industry since this legislation was announced?
Elisabethann: One of the concerns is that the Regulation distinguishes between AI systems in general and “high-risk” AI systems. Software as a medical device (SaMD) is among the AI systems considered to be high risk.
We are, therefore, faced with the rather concerning situation in which SaMD is classified as Class IIa or Class IIb, mid-level risk devices under the EU medical device legislation, but is also classified as high risk on the basis of the AI Act. This could result in obligations being imposed on manufacturers that, at a minimum, vary depending on whether they are applied under the medical device legislation or the AI Act and that may, potentially, conflict.
As far as we can see (it's not terribly clear at the moment), the assessment process and the responsibilities and requirements that must be fulfilled when assessing software as a medical device under the AI Act differ from those that must be fulfilled on the basis of the Medical Device Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR).
As an example, the AI Act states that its purpose is to ensure consistency and to give manufacturers flexibility on operational decisions. However, the Act also acknowledges that the issues concerning AI systems that it addresses differ from those governed by the MDR and the IVDR. The Act acknowledges that this results in two different sets of rules, with a SaMD considered a mid-level risk device on the basis of the MDR or the IVDR but a high-risk device on the basis of the AI Act.
Jamie: What will be the consequences of this complex legislation on the pace of innovation and safety?
Elisabethann: The safety of SaMD is largely governed by the obligations to demonstrate safety and effectiveness already imposed by the MDR and the IVDR.
The issue I think we may see is that the AI Act adds an additional layer of compliance obligations on IVD and medical device manufacturers. The Act acknowledges that the assessment and the standards to which a SaMD will need to demonstrate compliance under the AI Act differ from those required under the medical device legislation.
Jamie: Do you think this complexity will cause manufacturers, IVD or otherwise, to abandon AI-related projects? Will it impact the eagerness to launch these more innovative software solutions?
Elisabethann: There is a risk that companies will view the obligations imposed by the Act as an additional layer of onerous regulation with which they must comply before they can place SaMD on the market in the EU. This, added to the already increased obligations resulting from the MDR and the IVDR, may lead some manufacturers to conclude that the sheer volume of resources, both financial and regulatory, required to interact with notified bodies simply is not worth it.
That’s where partnering with a company like BrightInsight, whose platforms, documentation, and compliance practices are all built to comply with the highest level of regulation, will be beneficial. BrightInsight can reduce the software compliance burden for its customers.
Jamie: Reducing the documentation burden is one of the first areas where biopharma companies are looking to AI. Are there other use cases you find interesting?
Elisabethann: Regulatory institutions are seeking to encourage manufacturers to rely on real-world evidence as part of their post-marketing surveillance obligations. AI tools can quickly search thousands of pages of post-marketing data. AI can therefore, at least in theory, teach itself to identify which real-world evidence is important and relevant to individual devices and manufacturers and which is less so. That would be great, if it works. The risk is that AI may focus on irrelevant or incorrect priorities and generate data that is not credible. An initial question must be how to measure the credibility both of the process used to search and generate data and of the data that is generated.
As AI technology moves forward at an increasingly fast pace in the healthcare space, people like Elisabethann Wright are meticulously monitoring the interplay between these new technologies and the legislation being implemented along the way.
While AI has the potential to provide invaluable information and improve our healthcare, there are also risks. No one can be certain that AI advances will always be in the best interests of the patient. At present, medical devices developed with AI are generally limited to no further learning once deployed. Regulators and healthcare professionals will need to see much more evidence before AI is given greater responsibility for diagnosing or treating patients.
Standards development and regulation always lag behind technological advances, as is the case with AI. Nonetheless, innovators are applying AI in many aspects of healthcare, with promising results. Standards bodies and regulators are watching with keen interest and investing substantial effort to ensure we apply AI in a responsible and safe way.