May 28, 2024 | 4 min read

WHO Releases AI Ethics and Governance Guidance for Large Multi-Modal Models

The World Health Organization and AI

The World Health Organization (WHO) has recently issued comprehensive guidance on the ethics and governance of large multi-modal models (LMMs) in healthcare. This guidance aims to address the rapid adoption of generative AI technologies and ensure their safe and effective use in promoting public health.

Understanding Large Multi-Modal Models (LMMs)

LMMs are a type of advanced AI capable of processing diverse data inputs—such as text, images, and videos—and generating varied outputs. These models have revolutionized consumer applications, with platforms like ChatGPT, Bard, and Bert gaining significant traction in 2023. Their ability to mimic human communication and perform complex tasks without explicit programming makes them a powerful tool in healthcare.
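
To make the "multi-modal" part concrete, the sketch below shows the general shape of an LMM call: a single request can combine text with an image (or audio, or video), and the model returns generated text. The MultiModalModel class, its generate method, and the file name are hypothetical placeholders for illustration, not any specific vendor's API.

```python
# Illustrative sketch only: MultiModalModel is a hypothetical placeholder,
# standing in for a real LMM client, not any specific vendor's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultiModalModel:
    """Stand-in for a large multi-modal model client."""
    name: str

    def generate(self, text: str, image_path: Optional[str] = None) -> str:
        # A real LMM would jointly encode the text and the image and decode
        # a free-text response; this stub simply echoes what it received.
        reply = f"[{self.name}] received prompt: {text!r}"
        if image_path:
            reply += f" plus image {image_path}"
        return reply

model = MultiModalModel(name="demo-lmm")
print(model.generate(
    text="Describe notable features of the lesion in this photo.",
    image_path="lesion.jpg",  # hypothetical file path
))
```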

The Potential of LMMs in Healthcare

WHO's guidance identifies five key applications of LMMs in the healthcare sector:

  • Diagnosis and Clinical Care: Assisting in responding to patients' queries.
  • Patient-Guided Use: Enabling patients to investigate symptoms and treatments.
  • Clerical and Administrative Tasks: Enhancing the documentation of patient visits.
  • Medical and Nursing Education: Providing trainees with simulated patient encounters.
  • Scientific Research and Drug Development: Facilitating the discovery of new compounds.

These applications highlight the transformative potential of LMMs in improving healthcare delivery and outcomes.

Addressing the Risks

Despite their potential benefits, LMMs pose several risks that need careful consideration:

  • Inaccurate Outputs: LMMs can generate false, biased, or incomplete information, potentially harming users.
  • Data Quality and Bias: The training data for LMMs may be of poor quality or biased with respect to race, ethnicity, gender, or age (a simple per-group check is sketched after this list).
  • Health System Risks: Issues like accessibility, affordability, and cybersecurity vulnerabilities could undermine trust in LMMs.
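
One practical way to probe the data quality and bias risk is to compare a model's accuracy across demographic groups instead of reporting a single aggregate score. The sketch below assumes a hypothetical set of evaluation records, each with a group label, the model's answer, and a reference answer; the field names and group labels are illustrative, not taken from the WHO guidance.

```python
# Hypothetical audit sketch: compare LMM accuracy across demographic groups.
# The records, field names, and group labels are illustrative placeholders.
from collections import defaultdict

records = [
    {"group": "age_18_40", "model_answer": "A", "reference": "A"},
    {"group": "age_18_40", "model_answer": "B", "reference": "B"},
    {"group": "age_65_plus", "model_answer": "A", "reference": "B"},
    {"group": "age_65_plus", "model_answer": "C", "reference": "C"},
]

correct = defaultdict(int)
total = defaultdict(int)
for r in records:
    total[r["group"]] += 1
    correct[r["group"]] += int(r["model_answer"] == r["reference"])

for group in sorted(total):
    rate = correct[group] / total[group]
    # Large gaps between groups are a signal to inspect training data and outputs.
    print(f"{group}: {rate:.0%} accuracy on {total[group]} cases")
```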

WHO emphasizes the need for a collaborative approach involving governments, technology companies, healthcare providers, and civil society to mitigate these risks and ensure ethical deployment.

Key Recommendations for Governments

WHO's guidance provides several recommendations for governments to regulate LMMs effectively:

  • Infrastructure Investment: Invest in public infrastructure, such as computing power and public datasets, that developers can access under ethical conditions.
  • Regulatory Frameworks: Implement laws and policies to ensure LMMs meet ethical standards and human rights obligations.
  • Regulatory Agencies: Establish or assign agencies to assess and approve LMMs intended for healthcare use.
  • Post-Release Auditing: Mandate independent third-party audits and impact assessments to ensure ongoing compliance and transparency.

Responsibilities of Developers

Developers of LMMs also have a crucial role in ensuring ethical AI:

  • Inclusive Design: Engage potential users and stakeholders from the early stages of development to address ethical concerns and improve design.
  • Task-Specific Accuracy: Design LMMs for well-defined tasks with high accuracy and reliability, anticipating potential secondary outcomes.
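
In practice, designing for well-defined tasks with high accuracy implies measuring the model against a fixed reference set before deployment and again after each update. The sketch below is a minimal version of that pattern; ask_model, the toy question set, and the threshold are hypothetical stand-ins for a real LMM client and a curated clinical benchmark, not anything specified in the WHO guidance.

```python
# Minimal pre-deployment accuracy check for a narrowly defined task.
# ask_model and the question set are hypothetical stand-ins for a real
# LMM client and a curated clinical benchmark.

def ask_model(question: str) -> str:
    """Placeholder for a call to the LMM under evaluation."""
    canned = {
        "Is paracetamol an antibiotic?": "no",
        "Is insulin used to treat type 1 diabetes?": "yes",
    }
    return canned.get(question, "unsure")

benchmark = [
    ("Is paracetamol an antibiotic?", "no"),
    ("Is insulin used to treat type 1 diabetes?", "yes"),
    ("Is ibuprofen an opioid?", "no"),
]

correct = sum(ask_model(q).strip().lower() == expected for q, expected in benchmark)
accuracy = correct / len(benchmark)
print(f"accuracy: {accuracy:.0%} ({correct}/{len(benchmark)})")

# A deployment gate might require accuracy above a pre-agreed threshold,
# reviewed again after each model update (post-release auditing).
ACCURACY_THRESHOLD = 0.9  # illustrative value, not from the WHO guidance
print("release candidate" if accuracy >= ACCURACY_THRESHOLD else "needs review")
```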

Conclusion

WHO's new guidance on the ethics and governance of LMMs is a significant step toward harnessing AI's potential in healthcare while safeguarding against its risks. By promoting transparent and inclusive development practices, this guidance aims to achieve better health outcomes and reduce health inequities worldwide.

For more details, see the full WHO document, Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models.

Author: Marwan D.
