EU’s Artificial Intelligence Act

  • Published: 4th May, 2023
Context

Members of the European Parliament reached a preliminary deal on a new draft of the European Union’s ambitious Artificial Intelligence Act, first drafted two years ago, paving the way for the world's first set of comprehensive laws governing the technology.

Background
  • The legislation was drafted in 2021 with the aim of bringing transparency, trust, and accountability to AI and creating a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU.
  • The Artificial Intelligence Act aims to “strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
  • Just as the EU’s 2018 General Data Protection Regulation (GDPR) made the bloc an industry leader in the global data protection regime, the AI Act is intended to set the global benchmark for the regulation of AI.

What is the need to regulate AI?

  • Omnipresence: Artificial intelligence technologies have become omnipresent and their algorithms more advanced.
  • Associated risks: AI systems now perform a wide variety of tasks, from voice assistance, music recommendation and driving cars to detecting cancer and even deciding whether a candidate is shortlisted for a job; as their reach has grown, the risks and uncertainties associated with them have also ballooned.
  • Complex and unexplainable AI tools: Many AI tools are essentially black boxes, meaning even those who design them cannot explain what goes on inside them to generate a particular output.

What is in the Act?

  • Definition: The Act broadly defines AI as “software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.
    • Under this definition, the Act covers AI tools built with machine-learning and deep-learning techniques, knowledge- and logic-based approaches, and statistical approaches.
  • Classification (on risk basis): The Act’s central approach is the classification of AI tech based on the level of risk they pose to the “health and safety or fundamental rights” of a person.
    • Risk category: There are four risk categories in the Act: unacceptable, high, limited and minimal (a simplified sketch of this tiering appears after this list).
      • Unacceptable: The Act prohibits using technologies in the unacceptable risk category, with little exception.

  • Generative AI: Companies deploying generative AI tools, such as ChatGPT or the image generator Midjourney, will have to disclose whether copyrighted material was used to develop their systems.

  • High risk: The Act lays substantial focus on AI in the high-risk category, prescribing a number of pre- and post-market requirements for developers and users of such systems. The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies, or those under development, can be included if they meet the high-risk criteria.
  • Limited and minimal risks: AI systems in the limited and minimal risk categories, such as spam filters or video games, can be used subject to a few requirements, such as transparency obligations.
  • Non-compliance penalties: The Act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or 6% of global annual turnover, whichever is higher. Submitting false or misleading documentation to regulators can also result in fines.
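
The risk-based structure described above can be summarised in a short, purely illustrative sketch. Nothing below is drawn from the Act's legal text: the tier names and the €30 million / 6% figures come from the summary above, while the RiskTier enum, the OBLIGATIONS mapping and the max_fine_eur helper are hypothetical simplifications added here for illustration.

```python
from enum import Enum

# Illustrative sketch only: tier names follow the draft AI Act as summarised
# above; the obligations and penalty rule are simplified, not legal wording.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # use prohibited, with little exception
    HIGH = "high"                  # pre- and post-market requirements
    LIMITED = "limited"            # light requirements, e.g. transparency
    MINIMAL = "minimal"            # light requirements, e.g. transparency

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "use prohibited",
    RiskTier.HIGH: "pre- and post-market requirements; entry in EU-wide database",
    RiskTier.LIMITED: "transparency obligations",
    RiskTier.MINIMAL: "transparency obligations",
}

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling on non-compliance fines for a company: the higher of
    EUR 30 million or 6% of global annual turnover (simplified)."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

if __name__ == "__main__":
    # A company with EUR 1 billion turnover: 6% = EUR 60 million, which
    # exceeds the EUR 30 million floor, so the cap is EUR 60 million.
    print(OBLIGATIONS[RiskTier.HIGH])
    print(max_fine_eur(1_000_000_000))  # 60000000.0
```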