Ethics of AI: Benefits and risks of artificial intelligence
10th May, 2021
The increasing scale of AI, in terms of the size of neural networks, their energy use, and the size and provenance of data sets, together with the technology's prevalence in society, is raising major ethical questions.
- The availability of vast amounts of big data, the speed and reach of cloud computing platforms, and advances in machine learning algorithms have given rise to a wide range of innovations in Artificial Intelligence (AI).
- Benefits: AI systems benefit government by improving healthcare services, education, and transportation in smart cities. Other public-sector applications that benefit from AI include the food supply chain, energy, and environmental management.
- The benefits that AI systems bring to society are great, but so are the challenges and worries.
- As these technologies learn and evolve, miscalculations and mistakes arise, resulting in unanticipated harmful impacts.
- This gives rise to ethical concerns around AI.
What is Ethical AI?
- Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.
- Concern Areas: The questioning of AI is made all the more urgent by the scale of its use and of its data.
- Sheer Size: AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume.
- Prevalence: Their prevalence in society, both in the scale of their deployment and in the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras.
- Outreach: At the same time, increasing scale means that several aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.
Reasons for ethical concern in the AI field
1. AI ethics: a new urgency and controversy
- A major area of concern is the application of AI to military and policing activities.
- For example, ImageNet has been used to enhance the U.S. military's surveillance systems.
- From the point of view of safety this is encouraging, but from the point of view of unchecked surveillance it raises serious concerns.
2. Mass surveillance backlash
- Calls are rising for mass surveillance, enabled by technology such as facial recognition, not to be used at all.
- The backlash against surveillance has grown: in the monitoring of ethnic Uyghurs in China's Xinjiang region, and following the February military coup in Myanmar, Human Rights Watch reports that human rights hang in the balance given the surveillance systems in place.
- There are fears that AI tools will be weapons of first resort in future conflicts, enabling mass surveillance and reducing the freedom of individuals.
3. Ethics of compute efficiency
- The huge cost of compute for ever-larger models has been a topic of debate for some time now.
- Measures of performance, including energy consumption, are often cloaked in secrecy.
4. AI ethics: racial bias in facial recognition
- Research has demonstrated how commercially available facial recognition systems achieved high accuracy on images of light-skinned men but were catastrophically inaccurate on images of darker-skinned women.
- Such inaccuracy was tolerated in commercial systems and raised questions of racial bias.
- AI in its machine learning form makes extensive use of principles of statistics. In statistics, bias is when an estimate of a quantity systematically deviates from the true value of that quantity.
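The statistical notion of bias above can be made concrete with a small sketch: the naive variance estimator that divides by n is biased and systematically underestimates the true variance, while dividing by n − 1 (Bessel's correction) removes that bias. This is a hedged illustration, not from the article; the distribution and sample sizes are illustrative assumptions.

```python
import random

random.seed(0)
true_var = 4.0  # variance of a normal distribution with std dev 2

# Draw many small samples and average two competing variance estimators.
n, trials = 5, 20000
biased_sum = unbiased_sum = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 2) for _ in range(n)]
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)
    biased_sum += ss / n          # divides by n: biased, underestimates
    unbiased_sum += ss / (n - 1)  # Bessel's correction: unbiased

print("biased estimator average:  ", biased_sum / trials)
print("unbiased estimator average:", unbiased_sum / trials)
```

On average the biased estimator lands near (n − 1)/n times the true variance (here about 3.2 rather than 4.0), while the corrected estimator centers on the true value; the same idea of a systematic gap between estimate and truth underlies discussions of bias in machine learning.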
5. The rise of the fake
- Beyond bias, a further ethical issue is the fact that neural networks are increasingly "generative."
- This means they are not merely acting as decision-making tools, such as a classic linear regression machine learning program; they are flooding the world with creations.
- Such software can be used to generate realistic faces, spawning an era of fake likenesses.
- AI systems can now compose text, audio, and images to a sufficiently high standard that humans have a hard time telling the difference between synthetic and non-synthetic outputs for some constrained applications of the technology.
6. Societal biases
- Generated text can propagate and amplify societal biases, as pointed out by the "Stochastic Parrots" paper.
- But other kinds of bias can be created by the algorithms that act on that data.
- This includes, for example, algorithms whose goal is to classify human faces as "attractive" or "unattractive." Generative algorithms, such as GANs, can be used to endlessly reproduce a narrow formulation of what is purportedly attractive, flooding the world with that particular aesthetic to the exclusion of all else.
The design and implementation of Artificial Intelligence systems must be held accountable. In time, AI systems might come to be treated as moral agents with attributed moral responsibility. Until then, the engineers and designers of AI systems must assume responsibility, and be held accountable, for their creation, design, and programming.