Navigating the New Era: The AI Act and Its Impact on Medical Device Manufacturers

Coenraad Davidsdochter, MSc Coenraad is a professional with over 20 years' experience in software development for the medical sector. He has worked in many roles, including requirements manager, functional designer, test engineer, implementation consultant and product owner, but in the last 10 years he has focused mostly on regulatory affairs, quality assurance, information security, GDPR and privacy at ICT Healthcare Technology Solutions (formerly BMA).

Light the fireworks: the AI Act has finally been published and will enter into force twenty days from the publication date. This marks the end of a bumpy road for the European Regulation on Artificial Intelligence, whose drafting began in 2021. Congratulations and best wishes to Europe on yet another Regulation and the New Era that is about to start. As a manufacturer of AI medical devices, you will not only have to comply with the MDR/IVDR, GDPR, Data Governance Act, Data Act and the Network and Information Systems Directive, but also with the AI Act.

Balancing Regulation and Innovation: Navigating the AI Act for Medical Device Manufacturers

Is it then all bad news for medical device manufacturers? Not necessarily. It is not unreasonable to demand some oversight of the development and use of AI systems, and the MDR/IVDR indeed fall a little short in this respect. Does that justify a separate regulation on AI? Looking at the medical domain alone, maybe not, but the decision has been made to introduce a horizontal regulation, and that is now the way forward. Most of the product requirements introduced in the AI Act are no surprise and should, or could, already have been addressed in current MDR/IVDR technical documentation; the AI Act merely makes them more explicit. Since the AI Act proposes a single Notified Body assessment covering both the AI Act and the MDR/IVDR, a single technical file can be prepared, covered by a single QMS.

During the legislative process, quite a number of improvements were made relative to earlier versions of the text. The definition of an AI system has been changed and is now better aligned with existing standards; among other things, statistical approaches are no longer automatically regarded as AI systems. The QMS requirements for AI systems can be implemented on top of an existing ISO 13485 QMS, adding sections where the requirements are not yet covered. A definition of risk has been added that is aligned with ISO 14971. Newly introduced is the requirement to state on the Declaration of Conformity that the AI device also complies with the GDPR.

Implementation Timeline and Compliance Requirements for AI Act

The AI Act will be applicable 24 months from the date of entry into force, with some exceptions, as follows:

  • six months for the general provisions and prohibited AI practices;
  • 12 months for the provisions on notifying authorities and notified bodies for high-risk AI systems, general-purpose AI models, governance, and penalties (with the exception of fines for providers of general-purpose AI models);
  • 36 months for the classification rules for high-risk AI systems and the corresponding obligations.

In addition, codes of practice are defined as a central tool for compliance with the obligations this Regulation places on providers of general-purpose AI models; these codes must be available nine months from the date of entry into force.

Most AI devices (MDSW) in healthcare will be considered high-risk AI systems and will need to comply with all the requirements of the AI Act. Embedded AI software can also be considered high-risk when the AI is used as a safety component of the device and the device is classified higher than class I (MDR) or class A (IVDR).

Health software, class I medical devices and class A IVD devices are not considered high-risk, except when they are used:

  • to evaluate and classify emergency calls by natural persons;
  • to dispatch, or to establish priority in the dispatching of, emergency first response services;
  • as emergency healthcare patient triage systems.

Even then, such systems are not considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. However, an AI system shall always be considered high-risk where it performs profiling of natural persons.

The biggest challenge for manufacturers of AI medical devices is complying with the specific harmonized standards that will be developed under the AI Act.

Download the official document here.

Do you want to learn more about the AI Act?

Qserve is hosting a 2-day training program in Amsterdam, The Netherlands on September 25th and 26th. Join this dynamic training program on AI and elevate your skills to the next level! Dive deep into the latest advancements, hands-on applications, and strategic insights in artificial intelligence with our expert-led sessions. For more information about the event, visit the dedicated webpage.


Need more information?

Do you have questions, or do you need more information about this topic? Please contact us.
