FDA Outlines Its Approach to Regulation of Artificial Intelligence

US regulation of artificial intelligence (AI) in medical devices will involve collaborative work across multiple centers within the FDA. On March 15, the FDA released “Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” which explains how the agency’s medical product centers plan to protect public health while fostering responsible innovation in AI used in medical products and their development.

This paper outlines four priorities for cross-center collaboration to promote consistency at FDA in managing the development, deployment, use, and maintenance of AI technologies throughout the life cycle of medical products.

These include:

Promote cooperation to protect public health.

  • Solicit input from interested parties to consider key aspects of the use of AI in medical products, such as transparency, explainability, governance, bias, cybersecurity, and quality assurance.
  • Promote the development of educational initiatives to help regulatory bodies, healthcare professionals, patients, researchers, and industry navigate the safe and responsible use of AI in medical products and in medical product development.
  • Continue to work with global collaborators to promote international collaboration on standards, guidelines and best practices to encourage consistency and harmonization in the use and assessment of AI in the medical product landscape.

Drive the development of regulatory approaches that support innovation.

  • Continuously monitor and evaluate trends and emerging issues, including in regulatory submissions, to detect potential knowledge gaps and opportunities, allowing timely adaptations that provide clarity on the use of AI across the medical product lifecycle.
  • Support regulatory science efforts to evaluate AI algorithms, develop mechanisms to identify and mitigate bias, and ensure the robustness and resilience of AI algorithms to withstand changing clinical inputs and conditions.
  • Leverage and continue existing initiatives to assess and regulate the use of AI in medical products and medical product development, including manufacturing.
  • Issue guidance on the use of AI in medical products and in medical product development, including: final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled device software functions; draft guidance on lifecycle management considerations and premarket submission recommendations for AI-enabled device software functions; and draft guidance on the use of AI to support regulatory decision-making for drugs and biological products.

Promote the development of standards, guidelines, best practices, and tools for the medical product lifecycle.

  • Continue to refine and develop considerations for evaluating the safe, responsible, and ethical use of AI across the lifecycle of medical products (for example, providing appropriate transparency and addressing safety and cybersecurity concerns).
  • Identify and promote best practices for monitoring the long-term safety and real-world performance of AI-enabled medical products.
  • Explore best practices for documenting and ensuring that data used to train and test AI models are fit for purpose, including adequately representing the target population (a minimal illustrative sketch follows this list).
  • Develop a framework and strategy for quality assurance of AI-enabled tools or systems used in the lifecycle of medical products, emphasizing continuous monitoring and risk mitigation.
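
To make the fit-for-purpose point above concrete, here is a minimal, illustrative sketch of the kind of documentation check a developer might run: it compares the demographic mix of a training dataset against an assumed target-population mix and flags under-represented subgroups. This is not drawn from the FDA paper; the column names, population shares, and 0.5 ratio threshold are hypothetical.

```python
# Illustrative sketch only: compare the demographic mix of a training dataset
# against an assumed target-population mix and flag under-represented subgroups.
# Column names, population shares, and the threshold are hypothetical, not an
# FDA-specified method.
import pandas as pd

# Hypothetical target-population proportions (e.g., from registry or census data).
TARGET_MIX = {"age_65_plus": 0.30, "female": 0.51, "hispanic_or_latino": 0.19}

def representativeness_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Flag subgroups whose share of the training data falls below
    `threshold` times their share of the target population."""
    rows = []
    for subgroup, target_share in TARGET_MIX.items():
        observed_share = df[subgroup].mean()  # columns assumed to be 0/1 indicators
        ratio = observed_share / target_share if target_share else float("nan")
        rows.append({
            "subgroup": subgroup,
            "target_share": target_share,
            "observed_share": round(observed_share, 3),
            "ratio": round(ratio, 2),
            "under_represented": ratio < threshold,
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # Toy training dataset with binary subgroup indicator columns.
    train = pd.DataFrame({
        "age_65_plus":        [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        "female":             [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
        "hispanic_or_latino": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    })
    print(representativeness_report(train))
```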

Support research on AI performance evaluation and monitoring.

  • Identify projects that highlight the various points where bias can be introduced in the AI development lifecycle and how it can be addressed, including through risk management.
  • Support projects that consider health disparities associated with the use of AI in medical product development to promote equity and ensure data representativeness, leveraging ongoing diversity, equity, and inclusion efforts.
  • Support ongoing monitoring of AI tools in clinical product development within demonstration projects to ensure adherence to standards and maintain performance and reliability throughout their lifecycle (see the monitoring sketch after this list).
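
As a companion illustration for the ongoing-monitoring point above, the sketch below tracks a rolling agreement rate between a deployed model's predictions and observed outcomes and raises an alert when it falls below a predefined acceptance threshold. Again, this is a hypothetical example, not a method specified by the FDA; the window size and 0.85 threshold are assumptions.

```python
# Illustrative sketch only: monitor a deployed AI tool by tracking a rolling
# agreement rate between predictions and observed outcomes, alerting when it
# drops below a predefined threshold. Window size and threshold are hypothetical.
from collections import deque
import random

class PerformanceMonitor:
    def __init__(self, window_size: int = 100, threshold: float = 0.85):
        self.window = deque(maxlen=window_size)  # rolling record of correct/incorrect flags
        self.threshold = threshold

    def record(self, prediction: int, outcome: int) -> None:
        """Record whether the model's prediction matched the observed outcome."""
        self.window.append(int(prediction == outcome))

    def rolling_accuracy(self) -> float:
        return sum(self.window) / len(self.window) if self.window else float("nan")

    def degraded(self) -> bool:
        """True once the window is full and accuracy has fallen below the threshold."""
        return len(self.window) == self.window.maxlen and self.rolling_accuracy() < self.threshold

if __name__ == "__main__":
    random.seed(0)
    monitor = PerformanceMonitor(window_size=50, threshold=0.85)
    for _ in range(500):
        outcome = random.randint(0, 1)
        # Simulated model that agrees with the outcome about 80% of the time.
        prediction = outcome if random.random() < 0.8 else 1 - outcome
        monitor.record(prediction, outcome)
        if monitor.degraded():
            print(f"Alert: rolling accuracy {monitor.rolling_accuracy():.2f} is below threshold")
            break
```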
