
European Union’s Artificial Intelligence Act

Context

The EU AI Act, the world’s first comprehensive legislation on AI, aims to create a regulatory framework for AI technologies, mitigate the risks associated with AI systems, and establish clear guidelines for developers, users, and regulators.

Background

  • On June 14, 2023, the European Parliament passed its version of the Artificial Intelligence Act (the “Act”).
  • The Act then went to negotiators from the European Union’s (“EU”) three bodies – the European Commission, the Council, and the Parliament – to reconcile the different versions of the Act and finalise its implementing language.
  • Recently, the Act was passed by EU legislators.
  • The Act is the most comprehensive regulation affecting artificial intelligence (“AI”) systems to date.

About the Act:

  • The Act passed by the European Parliament would restrict what the EU believes to be AI’s riskiest uses, such as facial recognition software and chatbots like ChatGPT.
  • The EU’s stated goal is to ensure better conditions for the development and use of innovative technologies.
  • The EU recognizes AI’s benefits in the healthcare, transportation, manufacturing, and energy sectors but hopes to promulgate regulations that curb potential excesses and violations of EU fundamental rights.
  • The Act is expansive and would govern any entity providing a service that uses AI.
  • This includes services that produce content, predictions, recommendations, or decisions.

Key Provisions of the Act:

  • Risk-Based Regulation: The AI Act proposes a risk-based system to regulate AI, dividing systems into unacceptable risk, high risk, and low or minimal risk categories (a simplified sketch of this tiered structure follows this list).
  • Unacceptable Risk: AI systems posing a threat to people, such as cognitive manipulation and real-time biometric identification, fall under unacceptable risk and are banned.
  • High Risk: Systems operating critical infrastructure or impacting fundamental rights are labelled high risk, with compliance requirements and obligations for providers outlined in the Act.
  • Risk Management for High Risk: The Act outlines risk management efforts for high-risk systems, including documentation, transparency, human oversight, and obligations for providers to ensure compliance.
  • Low or Minimal Risk: Low or minimal risk AI systems must comply with minimal transparency requirements, allowing users to make informed decisions, but they are largely unregulated compared to high-risk systems.
  • Generative AI software such as ChatGPT will be required to comply with several transparency requirements, including:
    • Disclosing that the content was generated by an AI system;
    • Designing the model to prevent it from generating illegal content; and
    • Publishing summaries of copyrighted data used for the system’s training.
  • Providers of foundation models are subject to obligations to undertake risk assessments and mitigate reasonably foreseeable risks; to establish appropriate data governance measures; to meet obligations relating to the design of the foundation model (including from an environmental impact perspective); and to register the foundation model in an EU database.
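For readers who find the tiered structure easier to follow when laid out schematically, the sketch below models the three risk categories described above as a simple lookup. The tier names follow the summary in this article, but the example use cases and condensed obligations are simplified assumptions for illustration only, not a restatement of the Act's legal text.

```python
from enum import Enum


class RiskTier(Enum):
    """Simplified risk tiers, following the summary above (illustrative only)."""
    UNACCEPTABLE = "unacceptable"        # banned outright
    HIGH = "high"                        # compliance obligations for providers
    LOW_OR_MINIMAL = "low_or_minimal"    # light transparency requirements

# Hypothetical mapping of example use cases to tiers, based on the bullets above.
# The real Act defines these categories in far greater legal detail.
EXAMPLE_CLASSIFICATION = {
    "real-time biometric identification": RiskTier.UNACCEPTABLE,
    "cognitive manipulation of users": RiskTier.UNACCEPTABLE,
    "AI operating critical infrastructure": RiskTier.HIGH,
    "AI impacting fundamental rights": RiskTier.HIGH,
    "spam filtering": RiskTier.LOW_OR_MINIMAL,
}

# Condensed, illustrative obligations per tier, drawn from the provisions listed above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["risk management", "documentation", "transparency", "human oversight"],
    RiskTier.LOW_OR_MINIMAL: ["minimal transparency (inform the user)"],
}


def obligations_for(use_case: str) -> list[str]:
    """Return the illustrative obligations for a known example use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.LOW_OR_MINIMAL)
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(f"{case}: {obligations_for(case)}")
```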
