The European Union has reached a provisional deal on landmark rules governing the use of artificial intelligence.
EU legislation touching on data and automated decision-making began with the General Data Protection Regulation (GDPR), which took effect in 2018.
The GDPR is not primarily about AI, but it contains provisions often described as a ‘Right to explanation’ for decisions made by automated systems.
Later, the AI Act, proposed in Europe in 2021, classifies AI systems into three categories:
Systems that create an unacceptable amount of risk must be banned
Systems that can be considered high-risk need to be regulated
Safe applications, which can be left unregulated
Other similar regulations: Canada introduced the Artificial Intelligence and Data Act (AIDA) in 2022 to regulate companies using AI with a modified risk-based approach.
Unlike the AI Act, AIDA does not ban the use of AI even in critical decision-making functions; instead, developers must create risk-mitigation strategies as a safeguard.
About the Deal:
With the recent deal, the EU moves toward becoming the first major power to enact comprehensive laws governing AI.
The deal was struck between EU member states and members of the European Parliament.
The deal's transparency requirements include:
Drawing up technical documentation,
Complying with EU copyright law and
Disseminating detailed summaries about the content used for training.
High-impact foundation models with systemic risk will have to:
Conduct model evaluations,
Assess and mitigate systemic risks,
Conduct adversarial testing,
Report to the European Commission on serious incidents,
Ensure cyber security and
Report on their energy efficiency.
For governments' use: Governments may use real-time biometric surveillance in public spaces only for searches for victims of certain crimes, the prevention of genuine, present, or foreseeable threats such as terrorist attacks, and searches for people suspected of the most serious crimes.
The agreement bans cognitive behavioural manipulation,
The untargeted scraping of facial images from the internet or CCTV footage, and
Social scoring and biometric categorisation systems that infer political, religious, or philosophical beliefs, sexual orientation, and race.
Need for such regulation:
Unlimited Access: Easy, unrestricted access to such powerful technology is risky.
Job Loss: AI, especially generative AI, could displace workers across many sectors.
Biased Results: AI can be unfair. It learns from biased data and makes unfair choices.
Social Spying and Fakes: AI can clone voices and faces convincingly, enabling the creation of fake videos and photos.
AI in Wars: Preventing a race for AI-powered weapons is crucial for peace.
AI Regulation in India
India has taken a slightly different approach to the growth and proliferation of AI.
While the government is keen to regulate generative AI platforms like ChatGPT and Bard, there is no plan for a codified law to curb the growth of AI.
IT Minister Ashwini Vaishnaw recently stated that NITI Aayog, the Indian government's public policy think tank and successor to the Planning Commission, has issued some guiding documents on AI.
These include the National Strategy for Artificial Intelligence and the Responsible AI for All report.
While these documents list good practices and steer towards a vision for responsible AI, they are not legally binding.