Global Agreement on Ethics of Artificial Intelligence
1 December 2021
The United Nations adopted a historic text defining the common values and principles needed to ensure the healthy development of artificial intelligence.
- The agreement was adopted at the 41st session of the UNESCO General Conference, showing renewed cooperation on the ethics of artificial intelligence.
- The agreement is called the Recommendation on the Ethics of Artificial Intelligence.
- It approaches AI ethics as a systematic normative reflection, grounded in a holistic and evolving framework of interdependent values, principles, and actions. This framework can guide societies in dealing responsibly with the known and unknown impacts of artificial intelligence technologies on human beings, societies, and the environment, and offers them a basis to accept or reject those technologies.
What is Artificial Intelligence?
- Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
What is AI ethics?
- AI ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technology.
- As AI has become integral to products and services, organizations are starting to develop AI codes of ethics.
Why are AI ethics important?
- AI is a technology designed by humans to replicate, augment or replace human intelligence.
- These tools typically rely on large volumes of various types of data to develop insights. Poorly designed projects built on data that are faulty, inadequate or biased can have unintended, potentially harmful, consequences.
- Moreover, the rapid advancement in algorithmic systems means that in some cases it is not clear to us how the AI reached its conclusions, so we are essentially relying on systems we can't explain to make decisions that could affect society.
- An AI ethics framework is important because it shines a light on the risks and benefits of AI tools and establishes guidelines for their responsible use.
What are the ethical challenges of AI?
Enterprises face several ethical challenges in their use of AI technology.
- Explainability:
- When AI systems go awry, teams need to be able to trace through a complex chain of algorithmic systems and data processes to find out why.
- Organizations using AI should be able to explain their source data, the resulting data, what their algorithms do, and why.
- "AI needs to have a strong degree of traceability to ensure that if harms arise, they can be traced back to the cause," said Adam Wisniewski, CTO and co-founder of AI Clearing.
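The traceability idea above can be sketched in code. The following minimal, hypothetical helper records each prediction alongside its model version and a hash of its inputs, so that if a harmful decision surfaces later, it can be traced back to the exact data and model that produced it. The function and field names are illustrative assumptions, not part of the Recommendation or any standard.

```python
import json
import hashlib
from datetime import datetime, timezone

def audit_record(model_version, features, prediction):
    """Build an audit-log entry linking a prediction to its inputs.

    Hashing the features lets reviewers later verify exactly which input
    produced a decision without storing raw personal data in the log.
    """
    payload = json.dumps(features, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "prediction": prediction,
    }

# Example: log a (hypothetical) loan-approval decision for later review.
entry = audit_record("credit-model-v1.2", {"income": 42000, "age": 35}, "approved")
print(entry["model_version"], entry["prediction"])
```

In practice such entries would be appended to tamper-evident storage; the point here is only that each decision carries enough context to be traced to its cause.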
- Responsibility:
- Society is still sorting out responsibility when decisions made by AI systems have catastrophic consequences, including loss of capital, health or life.
- Responsibility for the consequences of AI-based decisions needs to be sorted out in a process that includes lawyers, regulators and citizens.
- One challenge is striking the right balance in cases where an AI system is safer than the human activity it replaces but still causes harm, for example weighing the merits of autonomous driving systems that cause fatalities, but far fewer than human drivers do.
- Fairness: In data sets involving personally identifiable information, it is extremely important to ensure that there are no biases in terms of race, gender or ethnicity.
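One simple way to check outcomes for the kind of group bias mentioned above is the demographic-parity gap: the difference in positive-outcome rates between groups. The sketch below is an illustration under assumed field names ("group", "approved"); real fairness audits use richer metrics and legally defined protected attributes.

```python
from collections import defaultdict

def demographic_parity_gap(records, group_key, outcome_key):
    """Return the largest difference in positive-outcome rates across groups.

    records: list of dicts, each with a group label and a 0/1 outcome.
    A gap near 0 suggests similar treatment; a large gap flags possible bias.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[outcome_key]
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: approval rates of 0.75 for group A vs 0.25 for group B.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
]
print(demographic_parity_gap(data, "group", "approved"))  # 0.5
```

A gap of 0.5 here would be a clear warning sign; what threshold counts as acceptable is a policy question, not a purely technical one.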
- Misuse:
- AI algorithms may be used for purposes other than those for which they were created.
- Wisniewski said these scenarios should be analyzed at the design stage to minimize the risks and introduce safety measures to reduce the adverse effects in such cases.
What is an AI code of ethics?
A proactive approach to ensuring ethical AI requires addressing three key areas:
- Policy:
- This includes developing the appropriate framework for driving standardization and establishing regulations; the Asilomar AI Principles are one example.
- Ethical AI policies also need to address how to deal with legal issues when something goes wrong.
- Companies may incorporate AI policies into their own code of conduct.
- Education:
- Executives, data scientists, front-line employees and consumers all need to understand policies, key considerations and potential negative impacts of unethical AI and fake data.
- Technology:
- Executives also need to architect AI systems to automatically detect fake data and unethical behavior.
- This requires not just looking at a company's own AI but vetting suppliers and partners for the malicious use of AI.
- Examples include the deployment of deepfake videos and text to undermine a competitor, or the use of AI to launch sophisticated cyberattacks.
- To combat this potential snowball effect, organizations need to invest in defensive measures rooted in open, transparent and trusted AI infrastructure.
The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the Ethics of Artificial Intelligence is a major answer. It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member States in its implementation and ask them to report regularly on their progress and practices.