Ethics and Governance of Artificial Intelligence for Health

Published: 5th Jul, 2021

Context

The World Health Organization (WHO) has issued its first global report on Artificial Intelligence (AI) in health, along with six guiding principles for its design and use.

Background

  • The WHO guidance on ‘Ethics & Governance of Artificial Intelligence for Health’ is the product of deliberation among leading experts in ethics, digital technology, law and human rights, as well as experts from Ministries of Health.
  • While new technologies that use artificial intelligence hold great promise
    • to improve diagnosis, treatment, health research and drug development, and
    • to support governments in carrying out public health functions, including surveillance and outbreak response,
  • such technologies must put ethics and human rights at the heart of their design, deployment and use.

Analysis

Ethical challenges to the use of artificial intelligence for health care

  • Assessing whether artificial intelligence should be used at all for a given purpose.
  • The digital divide, arising from poor digital literacy in many nations, affects equitable access to artificial intelligence.
  • Collecting and using data in an ethical way remains a concern owing to privacy issues.
  • Accountability and responsibility for decision-making with artificial intelligence are yet to be clearly assigned.
  • Autonomous decision-making by machines can be an impediment to decision-making based on human intelligence.
  • Bias and discrimination associated with artificial intelligence can deter people from taking up new R&D in the field of health, slowing the pace of development.
  • Artificial intelligence technologies pose risks to safety and to the cybersecurity of big data, which is both sensitive and personal.
  • The impact of artificial intelligence on labour and employment in health and medicine is a major challenge of the Fourth Industrial Revolution (Industry 4.0).

Key ethical principles for use of artificial intelligence for health

  • Protecting human autonomy:
    • In the context of health care, this means that humans should remain in control of health-care systems and medical decisions.
    • It also requires protection of privacy and confidentiality and obtaining valid informed consent through appropriate legal frameworks for data protection.
  • Promoting human well-being and safety and the public interest.
    • The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications.
    • Preventing harm requires that AI not result in mental or physical harm that could be avoided by use of an alternative practice or approach.
  • Ensuring transparency, explainability and intelligibility.
    • AI technologies should be intelligible or understandable to developers, medical professionals, patients, users and regulators.
    • Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology.
  • Fostering responsibility and accountability.
    • Responsibility can be assured by application of “human warranty”, which implies evaluation by patients and clinicians in the development and deployment of AI technologies.
    • Appropriate mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.
  • Ensuring inclusiveness and equity.
    • Inclusiveness requires that AI for health be designed to encourage the widest possible appropriate, equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.
    • No technology, AI or otherwise, should sustain or worsen existing forms of bias and discrimination.
  • Promoting AI that is responsive and sustainable.
    • Responsiveness requires that designers, developers and users continuously determine whether AI responds adequately and appropriately, and according to communicated, legitimate expectations and requirements.
    • Sustainability, apart from environmental obligations, also requires governments and companies to address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems and potential job losses due to the use of automated systems.

Framework for governance of artificial intelligence for health

  1. Governance of data
    • Governments should have clear data protection laws and regulations for the use of health data and protecting individual rights, including the right to meaningful informed consent.
    • Governments should require entities that seek to use health data to be transparent about the scope of the intended use of the data.
    • Mechanisms for community oversight of data should be supported. These include data collectives and establishment of data sovereignty by indigenous communities and other marginalized groups.

  2. Control and benefit-sharing
    • Governments should consider alternative “push-and-pull” incentives instead of IP rights, such as prizes or end-to-end push funding, to stimulate appropriate research and development.
    • Governments, research institutions and universities involved in the development of AI technologies should maintain an ownership interest in the outcomes so that the benefits are shared and are widely available and accessible, particularly to populations that contributed their data for AI development.
  3. Governance of the private sector
    • Governments should consider adopting models of co-regulation with the private sector to understand an AI technology, without limiting independent regulatory oversight.
    • Governments should also consider building their internal capacity to effectively regulate companies that deploy AI technologies and improve the transparency of a company’s relevant operations.
  4. Governance of the public sector
    • Governments and national health authorities should ensure that decisions about introducing an AI system for health care and other purposes are not taken by civil servants and companies alone.
    • Such decisions should involve the democratic participation of a wide range of stakeholders and respond to needs identified by the public health sector and patients.

Conclusion

AI for health is a fast-moving, evolving field, and many applications not yet envisaged will emerge with ever-greater public and private investment. India should consider issuing specific guidelines for such tools and applications and updating its approach periodically to keep pace with this rapidly changing field.
