We are increasingly seeing Artificial Intelligence used to perform tasks that have previously required human intelligence, such as visual perception, speech recognition, decision-making, and natural language processing.

With the recent surge in interest in sophisticated AI models, it can be tempting to assume that AI will soon replace more conventional software algorithms, and even some human intervention, in the medical space. In this article, we look at current trends in the adoption of AI in the medical industry, the emerging regulatory landscape for AI, and what this may mean for your next medical product.

Fig 1: A graph showing the number of AI and Machine Learning enabled devices approved by the FDA over time

AI in the market

In the healthcare space, the global market for artificial intelligence is expected to expand at a compound annual growth rate of 38.4%, reaching USD 208.2 billion by 2030 [1]. It can be tempting to assume that AI is already everywhere within the medical space, but because of the strict regulatory framework and the need to weigh risks against benefits carefully in healthcare, there are still relatively few regulated medical devices on the market which incorporate AI today. Whilst approvals of such devices have been accelerating over the last five years, there are currently just over 500 on the market, a tiny fraction of the hundreds of thousands of devices regulated by the FDA in total [2].
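
To make the growth rate concrete, the short calculation below simply compounds the quoted 38.4% rate against the USD 208.2 billion 2030 figure. It is an illustrative sketch only: the 2022 base year is an assumption made purely to show the arithmetic, and the derived values are not figures taken from the cited report.

    # Illustrative CAGR arithmetic only; the 2022 base year and the
    # derived starting value are assumptions, not figures from [1].
    cagr = 0.384
    target_2030 = 208.2                 # USD billions, as quoted in [1]
    years = 2030 - 2022                 # assumed 8-year horizon

    implied_base = target_2030 / (1 + cagr) ** years
    print(f"Implied 2022 market size: ~USD {implied_base:.1f} billion")

    for year in range(2022, 2031):
        value = implied_base * (1 + cagr) ** (year - 2022)
        print(year, f"~USD {value:.1f} billion")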

The devices listed include the use of AI in blood glucose tracking, breast cancer screening, apps for atrial fibrillation tracking and surgical systems. It is notable that in each of these examples, the AI algorithm generally provides a secondary function or augmentation of an existing baseline behaviour, or is intended to support human decision making, rather than forming the core of the product. However, we expect adoption of AI in products to increase, and AI to become an increasingly fundamental part of products and their marketing claims.


An emerging regulatory landscape

Whilst AI algorithms are fundamentally regulated as medical software, the existing medical device regulatory framework simply wasn’t designed to accommodate complex AI and machine learning models. There are currently no laws or harmonised standards that specifically regulate the use of artificial intelligence in medical devices, but regulators are publishing an evolving collection of guidance on what is expected.

The concerns which regulators are seeking to address are summarised by the UK Regulatory Horizons Council under four key categories:

  • Bias in AI: Differential performance across groups of people, leading to the perpetuation or worsening of health inequalities (a minimal check for this is sketched after this list).
  • Failure of generalisability: Where an AI may perform well on training data but poorly when deployed in a new setting or population.
  • Evolving algorithms: The impending arrival of algorithms that continuously update in response to new data, offering an opportunity for continuous improvement but also the risk of drifting ever further from the algorithm for which regulatory approval was given.
  • Interpretability: How well a user can understand how the algorithm reached its output and challenge any decision arising from that output.
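
As an illustration of how the first of these concerns can be probed in practice, the sketch below computes a performance metric separately for each subgroup of a labelled test set. It is a minimal sketch only: the data, the column names and the choice of sensitivity as the metric are hypothetical, not drawn from any specific device or guidance document.

    # Minimal sketch: checking for differential performance across subgroups.
    # The data, column names ("sex", "label", "prediction") and metric are
    # illustrative assumptions.
    import pandas as pd
    from sklearn.metrics import recall_score

    test_results = pd.DataFrame({
        "sex":        ["F", "F", "M", "M", "F", "M", "F", "M"],
        "label":      [1, 0, 1, 1, 1, 0, 0, 1],
        "prediction": [1, 0, 0, 1, 1, 0, 0, 0],
    })

    # Sensitivity (recall) per subgroup; a large gap between groups would
    # flag potential bias that needs investigation before deployment.
    for group, rows in test_results.groupby("sex"):
        sensitivity = recall_score(rows["label"], rows["prediction"])
        print(f"Sensitivity for group {group}: {sensitivity:.2f}")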

In October 2021, the U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) published joint guidance identifying 10 guiding principles that can inform the development of Good Machine Learning Practice (GMLP) [3]. Some of these principles bring a much-needed medical focus to the development of AI and ML models, such as:

  • Ensuring that multi-disciplinary expertise is drawn on to consider the clinical workflow and patient risks.
  • Ensuring that the patient population is appropriately represented in a clinical study used to collect data.
  • Ensuring that testing demonstrates the performance of the algorithm on the target patient population and in the intended clinical environment.

Other aspects of the guidance echo long-established best practice in the development of AI and machine learning, such as ensuring the separation of training and test data sets, selecting models carefully to avoid common risks such as over-fitting, and ensuring that training data is representative of the real use case.
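
The sketch below illustrates the first of these points: splitting a dataset at the patient level so that no individual contributes data to both the training and test sets, which is one common way of keeping the two sets genuinely independent. The data, feature layout and split ratio are illustrative assumptions rather than anything prescribed by the guidance.

    # Minimal sketch: patient-level train/test separation to avoid data leakage.
    # The synthetic dataset and split ratio are illustrative assumptions.
    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    rng = np.random.default_rng(0)
    n_samples = 100
    X = rng.normal(size=(n_samples, 4))                 # e.g. features derived from sensor data
    y = rng.integers(0, 2, size=n_samples)              # e.g. binary clinical label
    patient_ids = rng.integers(0, 20, size=n_samples)   # several samples per patient

    # Split by patient rather than by sample: every patient's data ends up
    # entirely in the training set or entirely in the test set.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
    train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

    assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
    print(f"{len(train_idx)} training samples, {len(test_idx)} test samples")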

Fig 2: A timeline of FDA-approved devices using AI

The path forward

Whilst guidance such as that discussed above is clearly welcome, it is not yet clear whether a global consensus will form, leading to international standards or regulations, or whether guidance will continue to sit alongside existing medical software standards. Despite this, there are clear steps which are key to ensuring that your medical software is safe and effective and has the greatest chance of gaining regulatory approval, including:

  • Ensuring compliance with the MDR and FDA requirements specifically focused on medical software.
  • Ensuring that software is developed in compliance with IEC 62304, the international standard on medical device software life cycle processes.
  • Considering any hazards associated with the intended use of AI and managing risks in accordance with ISO 14971, the international standard on the application of risk management to medical devices.
  • Following currently published guidance on best practice in the application of artificial intelligence and machine learning to medical devices.

Is AI always the intelligent choice?

There is no doubt that AI allows us to solve problems which simply cannot be solved with traditional procedural software algorithms. However, it is important to recognise that there is often significant cost and time involved in gathering appropriate volumes of relevant training and test data, complexity in building accurate models, and some risk associated with the future regulatory landscape for AI. For these reasons, it is important to ensure that AI is being applied strategically to solve real clinical challenges.

Consider two devices: one which uses a camera to take an image of the skin and makes a diagnosis, and another which monitors data from a simple sensor and looks for peaks in the readings to alert a clinician. Whilst both systems could leverage AI algorithms, the latter could likely also be implemented effectively using a traditional algorithm, a route which may require significantly lower investment in time and cost, even if it comes with some limitations that a more flexible AI solution could overcome. It is therefore important to consider at an early project stage which aspects of a system may benefit from the use of AI, whether alternatives are available, and whether appropriate time and budget are available to carry the solution all the way through verification and into the market.
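
To make the comparison concrete, here is one way the second device's behaviour might be approximated with a simple procedural algorithm rather than a trained model. It is a minimal sketch: the synthetic signal, window length and threshold are illustrative assumptions, and a real device would derive these from its clinical requirements (an off-the-shelf routine such as scipy.signal.find_peaks could equally be used).

    # Minimal sketch: threshold-based peak detection on a sampled sensor signal.
    # The synthetic signal, window length and threshold are illustrative
    # assumptions, not requirements taken from any real device.
    import numpy as np

    def detect_peaks(signal, window=25, k=4.0):
        """Flag samples that rise more than k standard deviations above a
        rolling baseline estimated from the preceding `window` samples."""
        alerts = []
        for i in range(window, len(signal)):
            baseline = signal[i - window:i]
            if signal[i] > baseline.mean() + k * baseline.std():
                alerts.append(i)
        return alerts

    # Synthetic example: noisy baseline with two injected spikes.
    rng = np.random.default_rng(1)
    signal = rng.normal(0.0, 1.0, 500)
    signal[200] += 10.0
    signal[350] += 12.0

    print("Alert at samples:", detect_peaks(signal))

The point is not that this particular detector is adequate, but that a deterministic, easily verifiable algorithm may be sufficient for the task, without the data-collection and validation burden that an AI model would bring.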


Smart opportunities ahead

Whilst the lack of certainty around future regulatory pathways for medical AI is clearly challenging for startups and large businesses alike, AI has the power to radically change the way we address complex healthcare challenges and offers the opportunity to bring exciting, life-changing technologies to market. By working within existing internationally recognised standards and being proactive in adopting emerging guidance, we can smooth the path to market for entirely new classes of products.

Images are from AI: Who’s Looking After Me?, currently showing at King’s College London’s Science Gallery. The exhibition, in collaboration with FutureEverything, features a series of exhibits which reflect on what it might mean for society and individuals to entrust our healthcare to autonomous machines. It runs until 20 January 2024.

[1] AI in Healthcare Market 2022

[2] Number of Devices in FDA Database Linked by Class

[3] Good Machine Learning Practice for Medical Device Development: Guiding Principles

KD can help you to navigate the technical and regulatory challenges and bring your game-changing, AI-powered solutions to market. Click here to find out more about our Electronics & Software team and get in touch with Head of Portfolio Management, Sunny Panesar.