How Is Ethical AI Different From Fair AI?

  • Category
    Ethics
  • Published
    24th Nov, 2020

Artificial intelligence is growing at a rapid pace to the point where it is making important decisions for every sector of society.

Context

However, the line between ethical AI and fair AI is a fine one, and the two are difficult to differentiate because they overlap in several respects.

Background

  • One of the biggest challenges AI systems face concerns the ethics and fairness of their operations.
  • The best demonstration of this is the secret AI tool that e-commerce giant Amazon used for recruitment from 2014.
  • Only a year later, the organisation realised that the AI system was partial towards male candidates, since it had been trained to vet applications by observing patterns in resumes submitted to the company over ten years; most of these applications were from men.
  • The result was a clear case of missed opportunity for the candidates the system unfairly screened out.
  • In order to understand these challenges, it is first necessary to differentiate between two aspects — ethics and fairness.
  • Ethical AI and fair AI are often used interchangeably, but there are a few differences.

Analysis

Ethical AI vs Fair AI

The concept of machine morality, especially in the case of AI, has been explored by computer scientists since the late 1970s. This research is mainly aimed at addressing the ethical concerns that people may have about the design and application of AI systems. Formally, the core idea of ethical AI is that it should never lead to rash actions, the result of poor learning, that could impact human safety and dignity.

Following are some of the main strong suits of an ethical AI system, which are accepted and prescribed by prominent field players such as Microsoft.

  • Technical robustness, reliability, and safety: It is important to build robust AI systems that are resilient to adversarial attacks. Such attacks manipulate the behaviour of the system by making changes to the input or training data. In worst-case scenarios, these attacks can prove fatal to the environment the system operates in. Additionally, an ethical AI system should be able to fall back to a ‘rule-based system’ and ask for human intervention to prevent it from going rogue.
  • Privacy and security: An ethical AI system must guarantee privacy and data protection throughout its lifecycle, covering both the information provided initially by the user and that generated during the course of interaction with the system. This is quite a slippery slope: since these systems primarily rely on data, they are always hungry for new information, and there have been multiple reports of tech giants, intentionally or otherwise, tapping into users’ sensitive information.
  • Transparency: The guidelines from the European Commission released in 2019 define AI transparency in three subparts: traceability, explainability, and communication. Vendors must make the decision-making capabilities of an AI device transparent to its users to protect humans and their rights from possible harm.
  • Fairness and inclusivity: Bias is one of the major problems with AI systems. These systems internalise the biases of the researchers building them and amplify them further. Experts believe that building a system completely devoid of such bias is impossible. However, a few steps can be taken to minimise it, including training these machines on inclusive datasets.
  • Accountability: An ethical AI has mechanisms that ensure responsibility and accountability, not just during its creation but also after development, deployment and use. Companies must adhere to rules and regulations to make sure that their systems conform to ethical principles.

So, what exactly is ‘fair AI’?

  • Having seen what an ethical AI system means, it is easy to infer that fairness is a prominent, yet only partial, component of it.
  • Fair AI refers to an attribute of ethical AI in the larger sense.
  • To define it, fair AI refers to probabilistic decision support that prevents a system from unfairly harming or benefiting a particular group.
  • There are multiple reasons why ‘unfairness’ creeps into a system: the data the system learns from, the way its algorithms are designed, and modelling, that is, selecting relevant features as inputs and combining them in meaningful ways.
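To make the idea of a system "unfairly harming or benefiting a particular group" concrete, the sketch below checks demographic parity, one common fairness measure, on invented hiring decisions. The group labels, numbers, and the 'four-fifths' threshold are illustrative assumptions, not details from the article:

```python
# Illustrative sketch: checking demographic parity on invented
# hiring decisions. All data and thresholds here are hypothetical.

def positive_rate(decisions):
    """Fraction of favourable (1) decisions in a group."""
    return sum(decisions) / len(decisions)

# 1 = shortlisted, 0 = rejected (made-up data for two groups)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 0, 1, 0, 0, 0]

rate_a = positive_rate(group_a)  # 0.625
rate_b = positive_rate(group_b)  # 0.250

# Disparate-impact ratio: the widely cited 'four-fifths rule'
# flags ratios below 0.8 as a sign of potential unfairness.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"positive rates: {rate_a:.3f} vs {rate_b:.3f}, ratio = {ratio:.2f}")
if ratio < 0.8:
    print("potential disparate impact detected")
```

A check like this only inspects outcomes; it says nothing about which of the three sources above (data, algorithm design, or modelling choices) introduced the disparity.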

Need for Ethics and Regulation

  • AI is a technological wave that has taken over the market across the globe and has seeped through the Indian markets as well.
  • Even though India has not gone as far as Saudi Arabia, which granted citizenship to the robot Sophia, personalised chatbots have flooded the Indian market.
  • With greater explorations into the space of AI, the world is moving towards a goal of near-complete automation of services. AI is wholly based on data generated and gathered from various sources.
  • The two main concerns that prominently come into the picture are:
    • who owns the data about users, and how that data is used to further power AI-based apps, services and platforms; and
    • what happens if the machine makes biased decisions or provides an incorrect response. If a chatbot does not respond correctly once deployed by a business, a human fallback is provided to correct the error based on the data generated and provided by the business.
  • This is where the question of the ethics of AI comes into the picture, which the government needs to tackle with some amount of regulation.
  • A systematic mechanism and policies are needed to understand how these algorithms are written, and how the data collected can be safeguarded, and perhaps tracked, to prevent breaches.

AI Laws Across the Globe

  • Australia: In the 2018–2019 budget, the government of Australia set aside AU$29.9 million to support the development of AI in Australia.
  • Canada: The element of end-to-end ‘human involvement’ has been insisted upon by most AI-advanced countries, such as Canada, to ensure the accountability and security of AI systems. At the same time, a Technology Roadmap, a Standards Framework and a national AI Ethics Framework will be created to support the responsible development of AI.
  • China: In 2017, China unveiled its ‘Next Generation Artificial Intelligence Development Plan’, which sets the ground as far ahead as the year 2030 for the development of AI in China, along with the regulations and ethics needed to promote it.
  • European Union: In 2018, the European Union released its Communication on Artificial Intelligence, which, among other issues, outlines the need to have an ethical and legal framework in place, and will prepare draft guidelines that member countries would most likely adopt as is, or with certain localised changes.

Conclusion

It is impossible to construct a 100% universally fair or unbiased system, partly because there are up to 20 different mathematical definitions of fairness. However, organisations can design AI systems to meet specific goals, thus mitigating unfairness and creating a more responsible system overall. Companies need to understand the difference between the two to develop a system that best suits their operations and creates an overall responsible AI system.
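The claim that many fairness definitions exist, and cannot all be satisfied at once, can be sketched with a small example. The decision data below is invented, and the two metrics shown (demographic parity and equal opportunity) are just two of the many definitions the article alludes to; a system can satisfy one while violating the other:

```python
# Invented (true_label, decision) pairs for two groups; 1 = positive.
# Demographic parity compares overall positive-decision rates;
# equal opportunity compares rates among truly qualified candidates.

def rate(xs):
    """Fraction of 1s in a list (0.0 if the list is empty)."""
    return sum(xs) / len(xs) if xs else 0.0

group_a = [(1, 1), (1, 1), (0, 0), (0, 0)]
group_b = [(1, 1), (1, 0), (0, 1), (0, 0)]

# Demographic parity: positive-decision rate per group
dp_a = rate([d for _, d in group_a])  # 0.5
dp_b = rate([d for _, d in group_b])  # 0.5

# Equal opportunity: positive-decision rate among qualified (y == 1)
eo_a = rate([d for y, d in group_a if y == 1])  # 1.0
eo_b = rate([d for y, d in group_b if y == 1])  # 0.5

print(f"demographic parity gap: {abs(dp_a - dp_b):.2f}")  # no gap
print(f"equal opportunity gap:  {abs(eo_a - eo_b):.2f}")  # large gap
```

Here both groups receive positive decisions at the same overall rate, so demographic parity holds, yet qualified candidates in one group are accepted half as often as in the other, so equal opportunity fails. Which definition to enforce is exactly the kind of specific goal organisations must choose.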
