Teachers and academics have expressed concerns about ChatGPT’s impact on written assignments, as have those worried about its potential to produce malicious code.
Ethics related to ChatGPT as an AI system:
ChatGPT is remarkable: a new AI model from OpenAI designed to interact in a conversational manner.
AI systems are not capable of behaving in an ethical or unethical manner on their own, as they do not have the ability to make moral judgments.
Instead, the ethical behaviour of an AI system is determined by the values and moral principles that are built into the algorithms and decision-making processes that it uses.
For example, an AI system designed to assist with medical diagnoses might be programmed to prioritize the well-being of patients and avoid causing harm.
Similarly, an AI system designed for use in a self-driving car might be programmed to prioritize safety and follow traffic laws.
In these cases, the AI system's behaviour is determined by the ethical guidelines that are built into its algorithms and decision-making processes.
However, it's important to note that these guidelines are determined by the humans who design and implement the AI system, so the ethics of an AI system ultimately depend on the ethics of the people who create it.
What is Artificial Intelligence Ethics?
AI ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technology.
As AI has become integral to products and services, organizations are starting to develop AI codes of ethics.
An AI code of ethics, also called an AI value platform, is a policy statement that formally defines the role of artificial intelligence as it applies to the continued development of the human race.
The purpose of an AI code of ethics is to provide stakeholders with guidance when faced with an ethical decision regarding the use of artificial intelligence.
Why has ethics become necessary for AI?
AI is a technology designed by humans to replicate, augment or replace human intelligence.
These tools typically rely on large volumes of various types of data to develop insights. Poorly designed projects built on data that is faulty, inadequate or biased can have unintended, potentially harmful, consequences.
Moreover, the rapid advancement in algorithmic systems means that in some cases it is not clear to us how the AI reached its conclusions, so we are essentially relying on systems we can't explain to make decisions that could affect society.
Ethical challenges while using AI:
Explainability: When AI systems go awry, teams need to be able to trace through a complex chain of algorithmic systems and data processes to find out why. Organizations using AI should be able to explain the source data, resulting data, what their algorithms do and why they are doing that. "AI needs to have a strong degree of traceability to ensure that if harms arise, they can be traced back to the cause," said Adam Wisniewski, CTO and co-founder of AI Clearing.
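The traceability Wisniewski describes can be approximated in practice with a decision audit log: every prediction is recorded alongside the model version and inputs that produced it, so a harmful outcome can be traced back to its cause. The sketch below is a minimal, hypothetical illustration; the field names and the "credit-model-v3" identifier are invented for the example.

```python
import json
import time

def log_decision(audit_log, model_version, inputs, output):
    """Append one audit record so a decision can later be traced to its cause."""
    audit_log.append({
        "timestamp": time.time(),
        "model_version": model_version,  # which algorithm version decided
        "inputs": inputs,                # the source data it saw
        "output": output,                # the resulting decision
    })

def trace(audit_log, predicate):
    """Return all logged decisions matching a predicate, e.g. for an incident review."""
    return [entry for entry in audit_log if predicate(entry)]

audit_log = []
log_decision(audit_log, "credit-model-v3", {"income": 42000}, "denied")
log_decision(audit_log, "credit-model-v3", {"income": 95000}, "approved")

# If denials are later challenged, pull every matching record for review.
denials = trace(audit_log, lambda e: e["output"] == "denied")
print(json.dumps(denials[0]["inputs"]))  # {"income": 42000}
```

In a production system the log would go to durable, append-only storage rather than an in-memory list, but the principle is the same: no decision without a traceable record.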
Responsibility: Society is still sorting out responsibility when decisions made by AI systems have catastrophic consequences, including loss of capital, health or life. Responsibility for the consequences of AI-based decisions needs to be sorted out in a process that includes lawyers, regulators and citizens. One challenge is finding the appropriate balance in cases where an AI system may be safer than the human activity it is duplicating but still causes problems, such as weighing the merits of autonomous driving systems that cause some fatalities, but far fewer than human drivers do.
Fairness: In data sets involving personally identifiable information, it is extremely important to ensure that there are no biases in terms of race, gender or ethnicity.
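One simple, concrete way to screen a labelled data set for such bias is a demographic-parity check: compare the rate of positive outcomes across groups and flag large gaps. This is a minimal sketch, not a complete fairness audit; the column names ("group", "approved") and the sample records are illustrative assumptions.

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key="group", outcome_key="approved"):
    """Return the fraction of positive outcomes for each group in the data."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        if r[outcome_key]:
            positives[r[group_key]] += 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in positive-outcome rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy data set: group A is approved twice as often as group B.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
rates = positive_rate_by_group(records)
print(rates)
print(parity_gap(rates))  # a large gap signals the data needs closer scrutiny
```

Demographic parity is only one of several fairness criteria, and a gap does not by itself prove discrimination, but checks like this make hidden imbalances visible before a model is trained on them.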
Misuse: AI algorithms may be used for purposes other than those for which they were created. Wisniewski said these scenarios should be analysed at the design stage to minimize the risks and introduce safety measures to reduce the adverse effects in such cases.
Characteristics of an ethics-based AI model:
An ethical AI system must be inclusive and explainable, have a positive purpose, and use data responsibly.
An inclusive AI system is unbiased and works equally well across all segments of society.
Achieving this requires a careful audit of the trained model to filter out any problematic attributes learned during training. The model also needs to be closely monitored after deployment to ensure no corruption occurs in the future.
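The ongoing monitoring mentioned above can start as something very simple: compare the model's live behaviour against the baseline measured during the audit, and raise an alert when it drifts too far. The sketch below is a hypothetical illustration using the positive-prediction rate as the monitored statistic; the baseline value and tolerance are invented for the example.

```python
def positive_rate(predictions):
    """Fraction of positive (True/1) predictions in a batch."""
    return sum(predictions) / len(predictions)

def drift_alert(baseline_rate, recent_predictions, tolerance=0.10):
    """True when the live positive-prediction rate strays beyond the tolerance
    from the rate measured during the audited validation run."""
    return abs(positive_rate(recent_predictions) - baseline_rate) > tolerance

baseline = 0.30  # positive rate recorded when the model passed its audit

print(drift_alert(baseline, [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]))  # False: within tolerance
print(drift_alert(baseline, [1, 1, 1, 1, 1, 0, 1, 1, 0, 1]))  # True: behaviour has shifted
```

Real deployments monitor richer statistics (per-group rates, input distributions, error rates), but even this one-number check catches the gross corruption the passage warns about.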
An AI system endowed with a positive purpose aims to, for example, reduce fraud, eliminate waste, reward people, slow climate change, cure disease, etc.
An AI system that uses data responsibly observes data privacy rights. Data is key to an AI system, and often more data results in better models. However, it is critical that in the race to collect more and more data, people's right to privacy and transparency isn't sacrificed.
Responsible collection, management and use of data are essential to creating an AI system that can be trusted.