Guest Column | December 14, 2023

WHO's 6 Principles For An AI Regulatory Framework For Medical Product Development

By Sean Hilscher, vice president, and Tanvi Mehta, manager, Greenleaf Health


In mid-October 2023, the WHO published a paper titled Regulatory Considerations on Artificial Intelligence for Health,1 identifying the key principles that international regulatory frameworks for artificial intelligence (AI) should address and are, in fact, starting to coalesce around. The paper was developed in consultation with WHO’s Working Group on Regulatory Considerations (WG-RC) on AI for Health, whose members include regulatory authorities, policymakers, academics, and representatives from industry. The document is intended to serve as a resource for regulators, providing a list of 18 regulatory considerations to address in their emerging regulatory frameworks. Notably, the document’s scope extends beyond medical product development; it is also intended to inform healthcare delivery.

The 18 regulatory considerations discussed in the paper fall under six broad categories: documentation and transparency, risk management, intended use and validation, data quality, privacy and data protection, and engagement and collaboration. Many of these themes, like transparency, risk management, and validation, have been discussed by the FDA and the European Medicines Agency (EMA) in public forums or in publications. Reflecting its global mandate, WHO also focuses on issues that are not necessarily prioritized by national authorities, such as international collaboration and the challenges of navigating national privacy laws and data protection regulations.

1. Documentation and Transparency

In line with several key regulators, WHO identifies transparency and documentation as the cornerstone of any effective AI regulatory framework. Referencing AI’s ability to self-improve, WHO emphasizes that it is important for regulators “to be able to trace back the development process and to have appropriate documentation of essential steps and decision points.”2 This echoes similar points made by both FDA and EMA in recent reflection papers. In the May 2023 discussion paper Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products, FDA noted that documentation is needed across the entire medical product life cycle and stated that “accountability and transparency are essential for the development of trustworthy AI.”3

Importantly, WHO also identifies effective documentation as an essential tool to guard against bias and to establish trust. In its Reflection Paper on the Use of Artificial Intelligence (AI) in the Medicinal Product Lifecycle,4 EMA encouraged sponsors to document areas of potential risk for bias or discriminatory outcomes prior to using a particular data set.

2. Risk Management

Considering AI’s self-learning capabilities, WHO recommends a total life cycle approach to risk management. Specifically, WHO states that a “life cycle approach can facilitate continuous AI learning and product improvement while providing effective safeguards.”5 To properly evaluate risk in pre-market development, WHO encourages developers to consider risk in terms of the intended use of the AI system and the clinical context. WHO identifies the International Medical Device Regulators Forum (IMDRF) Risk Framework for Software as a Medical Device (SaMD)6 as a reasonable approach for regulators to adopt to support risk assessment.
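For readers who want to see the mechanics, the IMDRF framework assigns one of four risk categories (I, lowest, through IV, highest) based on the significance of the information the software provides and the state of the healthcare situation or condition. The sketch below encodes that lookup in Python; the category table follows the IMDRF document cited in reference 6, while the function and input vocabulary are our own illustrative choices, not part of any official tooling.

```python
# Minimal sketch of the IMDRF SaMD risk-categorization lookup (reference 6).
# The category table follows the IMDRF framework; the function and the
# input vocabulary are illustrative choices, not part of any official tool.

SAMD_CATEGORY = {
    # (state of healthcare situation, significance of information) -> I-IV
    ("critical",    "treat or diagnose"):          "IV",
    ("critical",    "drive clinical management"):  "III",
    ("critical",    "inform clinical management"): "II",
    ("serious",     "treat or diagnose"):          "III",
    ("serious",     "drive clinical management"):  "II",
    ("serious",     "inform clinical management"): "I",
    ("non-serious", "treat or diagnose"):          "II",
    ("non-serious", "drive clinical management"):  "I",
    ("non-serious", "inform clinical management"): "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Return the IMDRF risk category (I = lowest, IV = highest)."""
    return SAMD_CATEGORY[(situation.lower(), significance.lower())]

# Example: an AI tool that informs clinical management of a serious condition
print(samd_category("serious", "inform clinical management"))  # -> I
```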

In the post-market phase, WHO recommends that manufacturers develop surveillance plans specifying how they will monitor, identify, and respond to emerging risks, and that an active reporting and investigative function be established. While referencing CONSORT-AI as an appropriate resource, WHO stops short of endorsing a specific reporting standard.

3. Validation

Again acknowledging the likelihood of AI-related change, WHO emphasizes the importance of conducting analytical and clinical validation throughout the product life cycle. Analytical validation, also known as “technical validation,” is defined as the process of “validating the AI system using data but without performing interventional or clinical studies.”7 This process requires careful documentation of the development and data selection processes. Analytical validation can also include a benchmarking exercise in which the model is compared to other tools and established performance standards. To facilitate this process, WHO notes that benchmarking software is being developed as part of the Open Code Initiative.8
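As a concrete illustration of such a benchmarking exercise, the sketch below scores a candidate model’s predictions on retrospective, non-interventional data against both a pre-specified performance floor and an existing comparator tool. It is a minimal, hypothetical example: the metric (AUC), the acceptance threshold, and all names are our assumptions, not requirements from the WHO paper.

```python
# Minimal sketch of an analytical-validation benchmark: a candidate model is
# scored on retrospective (non-interventional) data against a pre-specified
# performance floor and an existing comparator tool. Metric, threshold, and
# names are illustrative assumptions, not requirements from the WHO paper.

from sklearn.metrics import roc_auc_score

PERFORMANCE_FLOOR = 0.85  # hypothetical pre-specified acceptance criterion

def benchmark(y_true, candidate_scores, comparator_scores):
    """Compare the candidate model to the floor and to a comparator tool."""
    candidate_auc = roc_auc_score(y_true, candidate_scores)
    comparator_auc = roc_auc_score(y_true, comparator_scores)
    return {
        "candidate_auc": round(candidate_auc, 3),
        "comparator_auc": round(comparator_auc, 3),
        "meets_floor": candidate_auc >= PERFORMANCE_FLOOR,
        "beats_comparator": candidate_auc >= comparator_auc,
    }

# Hypothetical held-out labels and model scores
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
candidate = [0.2, 0.3, 0.8, 0.7, 0.9, 0.4, 0.6, 0.1]
comparator = [0.4, 0.5, 0.6, 0.5, 0.7, 0.5, 0.5, 0.3]
print(benchmark(y_true, candidate, comparator))
```

Documenting the pass/fail result alongside the pre-specified criterion is what turns a one-off evaluation into the kind of traceable validation record the WHO paper calls for.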

Clinical validation is the process by which an AI product or tool is assessed in the context of its intended use. WHO again references the IMDRF’s work on clinical evaluation of SaMD as an effective risk-based approach for determining how much real-world data manufacturers should collect to evaluate a tool against its intended use. A regulator’s adoption of a specific approach to clinical validation should also depend on the resources and expertise available in that country.

4. Data Considerations

While recognizing data as the “most important ingredient for training AI/ML (machine learning) algorithms,” WHO highlights the challenge of identifying and defining good quality data. The difficulty lies in the many dimensions along which data can be analyzed. WHO encourages regulators and developers to classify data based upon the 10 V’s: volume, veracity, validity, vocabulary, velocity, vagueness, variability, venue, variety, and value. Armed with this classification system, developers can then home in on key data challenges within their development programs. Some key data challenges highlighted by WHO include data management, data inconsistency, data usability, and data integrity.
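To make this concrete, the sketch below shows one way a development team might document a training data set against the 10 V’s so that data-quality decisions are recorded and auditable. WHO names the dimensions; the structure and all example values here are hypothetical.

```python
# Minimal sketch of a data-set profile recording the WHO-cited "10 V's".
# The field names mirror the 10 V's; the DatasetProfile structure and the
# example values are illustrative, not prescribed by the WHO paper.

from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetProfile:
    volume: str       # e.g., record counts, storage size
    veracity: str     # known error rates, audit status
    validity: str     # conformance to schema / clinical definitions
    vocabulary: str   # coding systems used (ICD-10, LOINC, ...)
    velocity: str     # how often the data are refreshed
    vagueness: str    # ambiguous or free-text fields
    variability: str  # drift across sites or time
    venue: str        # where the data were collected
    variety: str      # modalities included (labs, imaging, notes)
    value: str        # relevance to the intended use

profile = DatasetProfile(
    volume="1.2M encounters",
    veracity="5% missing labs; audited 2023-Q3",
    validity="conforms to internal clinical data model",
    vocabulary="ICD-10-CM, LOINC",
    velocity="monthly refresh",
    vagueness="free-text triage notes",
    variability="two hospital systems, 2018-2023",
    venue="academic medical centers",
    variety="structured EHR plus lab results",
    value="covers the full intended-use population",
)
print(json.dumps(asdict(profile), indent=2))
```

Keeping such a profile alongside the training pipeline gives regulators the traceability WHO calls for in the documentation section above.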

Data quality matters, above all, because AI can readily amplify biases present in training data sets. Along these lines, WHO notes that improper data source selection can result in selection bias “when data used to provide the model are not fully representative of the actual data that the model may receive or the environment in which the model will function.”9 Inappropriate data selection can thus easily distort a model’s output, undermining its generalizability and raising ethical concerns.
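The following synthetic sketch makes that failure mode concrete: a model trained on one patient population performs well there but degrades sharply on an unrepresented population whose outcome pattern differs. Everything in it is invented for illustration and is not drawn from the WHO paper.

```python
# Synthetic illustration of selection bias: a model trained on one patient
# population degrades on a population it never saw, because the outcome
# pattern differs there. All data and numbers are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def cohort(n, center):
    """Simulated patients whose outcome threshold is cohort-specific."""
    X = rng.normal(center, 1.0, size=(n, 1))
    y = (X[:, 0] > center).astype(int)  # outcome rule differs by cohort
    return X, y

X_train, y_train = cohort(2000, center=0.0)   # data the developers had
X_seen, y_seen = cohort(500, center=0.0)      # represented population
X_unseen, y_unseen = cohort(500, center=2.0)  # unrepresented population

model = LogisticRegression().fit(X_train, y_train)
print("accuracy, represented population:  ",
      accuracy_score(y_seen, model.predict(X_seen)))      # high
print("accuracy, unrepresented population:",
      accuracy_score(y_unseen, model.predict(X_unseen)))  # near chance
```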

5. Privacy and Data Protection

WHO identifies the patchwork of different data protection regulations and privacy laws as a significant barrier to the further development of AI-enabled medical products. Indeed, WHO notes that “some 145 countries and regions have data protection regulations and privacy laws that regulate the collection, use, disclosure and security of personal information.”10 Many of these countries have their own definitions and interpretations of “privacy,” “confidentiality,” “identifiable,” “anonymous,” and “consent,” among other concepts. The jurisdictional scope of each country or region’s regulations may overlap and may restrict cross-border transfers of data.

To reduce barriers to cooperation among regulators and sponsors, WHO recommends agencies consider regulatory sandboxes, an approach that gained traction during COVID-19. These sandboxes can help regulators acquire “a better understanding of the AI systems during the development phase and before they are placed on the market.”11 Sandboxes provide a limited form of regulatory waiver that allows developers to test new technologies without regulatory repercussions, while giving regulators the opportunity to consider and test new regulatory approaches.

6. Engagement and Collaboration

WHO concludes the paper by stating that greater collaboration among all stakeholders can improve the quality and safety of AI in general. Using case studies from an array of health authorities around the globe, WHO encourages regulators to position themselves as facilitators of innovation and partners in development. It encourages a broad engagement strategy, one that includes patient advocacy groups, academia, healthcare professionals, industry, and other domestic government partners.

Conclusion

WHO’s timely document highlights the key principles upon which international regulators are beginning to build their regulatory frameworks. Common to all regulators is an emphasis on transparency, documentation, and effective risk management. The alignment of these principles is critical for international collaboration among stakeholders, as it can help improve the quality and safety of AI in healthcare. Data quality, along with divergent privacy laws and data protection regulations across jurisdictions, will continue to pose a challenge to regulators and developers alike.

References

  1. World Health Organization (WHO), “Regulatory Considerations on Artificial Intelligence for Health,” October 2023, https://iris.who.int/handle/10665/373421.
  2. Ibid., page 8.
  3. Food and Drug Administration (FDA), “Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products,” May 2023, https://www.fda.gov/media/167973/download?attachment.
  4. European Medicines Agency (EMA), “Reflection Paper on the Use of Artificial Intelligence (AI) in the Medicinal Product Lifecycle,” July 2023, https://www.ema.europa.eu/en/news/reflection-paper-use-artificial-intelligence-lifecycle-medicines.
  5. WHO, “Regulatory Considerations on Artificial Intelligence for Health,” page 29.
  6. International Medical Device Regulators Forum (IMDRF), “‘Software as a Medical Device’: Possible Framework for Risk Categorization and Corresponding Considerations,” September 2014, https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-140918-samd-framework-risk-categorization-141013.pdf.
  7. WHO, “Regulatory Considerations on Artificial Intelligence for Health,” page 22.
  8. International Telecommunication Union (ITU), “Open Code Initiative,” https://www.itu.int/en/ITU-T/focusgroups/ai4h/Pages/opencode.aspx.
  9. WHO, “Regulatory Considerations on Artificial Intelligence for Health,” page 28.
  10. Ibid., page 33.
  11. Ibid., page 37.

About The Authors:

Sean Hilscher is vice president of regulatory policy at Greenleaf Health. He works with clients on a range of regulatory and policy issues, including real-world evidence and digital health. Prior to Greenleaf, he managed a suite of real-world evidence platforms for providers, payers, and life science companies. He has an MBA from Georgetown University and an MA in politics, philosophy, and economics from the University of Oxford.

Tanvi Mehta is a manager of regulatory affairs and policy at Greenleaf Health. Formerly at Morgan Stanley and Invesco, she managed client relations and financial reporting. Later, her experience at Arc Initiatives in Washington, D.C., involved policy analysis, strategic communications, and regulatory assessments. Throughout her education, she served on the board of the Healthcare Business Association and participated in D.C.-based public policy initiatives. Mehta earned her MBA from Georgetown University’s McDonough School of Business and her B.A. in public health and economics from Agnes Scott College.