- Over the past few years, the MIT-hosted “Moral Machine” study has surveyed public preferences regarding the applications and behaviour of Artificial Intelligence in various settings.
- One conclusion from the data is that when an autonomous vehicle (AV) encounters a life-or-death scenario, its expected response largely depends on where one is from and what one knows about the pedestrians or passengers involved.
- For example, in an AV version of the classic “trolley problem,” some might prefer that the car strike a convicted murderer before harming others, or that it hit a senior citizen before a child.
- Still others might argue that the AV should simply roll the dice so as to avoid data-driven discrimination.
These recommendations are broad and do not carry the force of law, or even of rules. Instead, they seek to encourage member countries to incorporate these values and ethics into the development of AI.
Threats/Negatives of AI
Though the prospects are vast, a nascent technology cannot be left unchecked, especially after cases where:
- Google Translate's neural system reportedly developed its own internal representation for processing data, going beyond what its supervisors had explicitly programmed.
- Facebook AI chatbots developed their own shorthand language and began communicating in it, and the experiment had to be shut down.
- Human intelligence versus machine processing: machines process at high speed, learn quickly, and adapt, and can thus outperform humans.
- Morality issues: AI can be programmed to perform a specific task but cannot differentiate between right and wrong.
- Goal-oriented: an AI programmed to do something beneficial may develop a destructive method for achieving its goal, and may therefore use unethical means. This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult.
- If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for.
- If a super-intelligent system is tasked with an ambitious geo-engineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
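The airport example above is, at bottom, a problem of objective misspecification: the system optimizes exactly what we wrote down, not what we meant. The following is a minimal, hypothetical sketch (the routes, costs, and weights are all invented for illustration) showing how a planner that minimizes travel time alone picks the reckless option, while adding penalty terms for unstated human preferences changes the choice.

```python
# Toy illustration of goal misspecification (all values are hypothetical).
routes = {
    "highway":  {"minutes": 25, "violations": 0, "discomfort": 0},
    "reckless": {"minutes": 12, "violations": 9, "discomfort": 8},
}

def cost(route, violation_weight=0.0, discomfort_weight=0.0):
    """Cost = travel time plus weighted penalties for unstated preferences."""
    r = routes[route]
    return (r["minutes"]
            + violation_weight * r["violations"]
            + discomfort_weight * r["discomfort"])

def best_route(**weights):
    """Return the route with the lowest cost under the given weights."""
    return min(routes, key=lambda name: cost(name, **weights))

# Literal objective ("as fast as possible"): the planner chooses recklessly.
print(best_route())                                          # reckless
# Objective that also encodes what we actually care about:
print(best_route(violation_weight=3.0, discomfort_weight=2.0))  # highway
```

The point of the sketch is that nothing in the first call is a bug: the planner faithfully minimizes the objective it was given, and the failure lies entirely in the gap between the stated goal and the intended one.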
- The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties.
- Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation.
- Multi-tasking, consistency and speed have threatened jobs, especially semi-skilled ones; this can increase poverty and widen inequality.
- Costly: especially in India, where choosing between investing in expensive AI and reducing poverty remains a dilemma.
- The frenetic speed of digital adoption and the entry of machines is making the skills of millions of people irrelevant almost overnight.
- It also raises the need to determine what a machine should do and which tasks should remain under a human’s purview.
- Supporting human values through technology, building consumer trust in an era of robotic solutions, and the limits of deploying artificial intelligence (AI) are the issues generating debate in the corporate world today.
- Our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.