
ETHICS AND RISKS OF ARTIFICIAL INTELLIGENCE
SAM HARIS URK20RA1004
DEFINING A.I.
• Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision.
• AI does not require the system to replicate human patterns of thinking or reasoning.
• AI frequently entails machine learning, in which a system learns rules from data rather than being programmed with them explicitly (see the sketch below).
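To make the machine-learning bullet concrete, here is a minimal sketch in Python using scikit-learn; the toy data and feature meanings are invented purely for illustration:

```python
# A minimal sketch: the system learns a decision rule from examples
# instead of being given the rule explicitly.
from sklearn.tree import DecisionTreeClassifier

# Toy training data (invented for illustration):
# [hours_studied, classes_attended] -> passed (1) or failed (0).
X = [[1, 2], [2, 1], [8, 9], [9, 8]]
y = [0, 0, 1, 1]

model = DecisionTreeClassifier().fit(X, y)

# The learned rule generalizes to an input the system never saw.
print(model.predict([[7, 7]]))  # expected output: [1]
```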
A.I. DOES NOT REQUIRE
• CONSCIOUSNESS (SELF-AWARENESS)
• UNDERSTANDING (WORLDLY KNOWLEDGE)
• SENTIENCE (ABILITY TO FEEL/SUFFER)
• JUDGMENT (MORAL DISCERNMENT)
• AGENCY (RESPONSIBILITY, FREEDOM)
WHAT ARE AI ETHICS?
• Ethics is a set of moral principles that help us discern between right and wrong. AI ethics is a set of guidelines that advise on the design and outcomes of artificial intelligence.
• Human beings come with all sorts of cognitive biases, such as recency and confirmation bias, and those inherent biases are exhibited in our behaviors and, subsequently, in our data.
• Since data is the foundation for all machine learning algorithms, it is important to structure experiments and algorithms with this in mind, because artificial intelligence has the potential to amplify and scale these human biases at an unprecedented rate.
ESTABLISHING PRINCIPLES FOR AI ETHICS
• While rules and protocols to manage the use of AI are still developing, the academic community has leveraged the Belmont Report as a means to guide ethics within experimental research and algorithmic development. Three main principles came out of the Belmont Report and serve as a guide for experiment and algorithm design:
• Respect for Persons
• Beneficence
• Justice
1. Respect for Persons: This principle recognizes the autonomy of individuals and upholds an expectation for researchers to protect individuals with diminished autonomy, which could be due to a variety of circumstances such as illness, a mental disability, or age restrictions. This principle primarily touches on the idea of consent. Individuals should be aware of the potential risks and benefits of any experiment that they're a part of, and they should be able to choose to participate or withdraw at any time before and during the experiment.

2. Beneficence: This principle takes a page out of healthcare ethics, where doctors take an oath to "do no harm." This idea can easily be applied to artificial intelligence, where algorithms can amplify biases around race, gender, political leanings, and so on, despite the intention to do good and improve a given system.

3. Justice: This principle deals with issues such as fairness and equality. Who should reap the benefits of experimentation and machine learning? The Belmont Report offers five ways to distribute burdens and benefits, which are by:
1. Equal share
2. Individual need
3. Individual effort
4. Societal contribution
5. Merit
ETHICAL AI ORGANIZATIONS
• Algorithm Watch: This non-profit focuses on explainable and traceable algorithms and decision processes in AI programs.
• AI Now Institute: This non-profit at New York University researches the social implications of artificial intelligence.
• DARPA: The Defense Advanced Research Projects Agency of the US Department of Defense focuses on promoting explainable AI and AI research.
• CHAI: The Center for Human-Compatible Artificial Intelligence is a cooperative effort among institutes and universities to promote trustworthy AI and provably beneficial systems.
• NSCAI: The National Security Commission on Artificial Intelligence is an independent commission "that considers the methods and means necessary to advance the development of artificial intelligence, machine learning and associated technologies to comprehensively address the national security and defense needs of the United States."
HOW TO ESTABLISH AI ETHICS?
• Since artificial intelligence didn't give birth to moral machines, teams have started to assemble frameworks and concepts to address some of the current ethical concerns and shape the future of work within the field. While more structure is injected into these guidelines every day, there is some consensus around incorporating the following:
• Governance: Companies can leverage their existing organizational structure to help manage ethical AI. If a company is collecting data, it has likely already established a governance system to facilitate data standardization and quality assurance.
• Explainability: Machine learning models, particularly deep learning models, are frequently called "black box models" because it is usually unclear how a model arrives at a given decision. Explainability seeks to eliminate this ambiguity around model assembly and model outputs by generating a "human understandable explanation that expresses the rationale of the machine". This type of transparency is important for building trust in AI systems and for ensuring that individuals understand why a model arrives at a given decision. If we can better understand the why, we will be better equipped to avoid AI risks such as bias and discrimination.
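As one concrete illustration of explainability, here is a minimal sketch using scikit-learn's permutation_importance; the dataset is a stock example and the technique is one illustrative choice among many, not something prescribed by any particular framework:

```python
# A sketch of permutation importance: shuffle each feature in turn and
# measure the drop in test accuracy to see which inputs the model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features -- a first step toward a
# human-understandable explanation of the model's decisions.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Reporting the top features is only a first step, but it turns an opaque model into something a reviewer can interrogate.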
WHAT ARE THE RISKS OF ARTIFICIAL INTELLIGENCE?
• Artificial intelligence has many potential risks, and as AI's capabilities and pervasiveness expand, the associated risks will continue to evolve. For the purposes of this presentation, I will focus on five of the most common AI risks that exist today.
1. Lack of AI Implementation Traceability
o From a risk management perspective, we would often start with an inventory of the systems and models that include artificial intelligence. Utilizing a risk universe allows us to track, assess, prioritize, and control AI risks, as in the sketch below.
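A minimal sketch of what such a risk universe might look like, assuming a simple impact-times-likelihood scoring scheme; all fields, names, and example entries are hypothetical:

```python
# A sketch of a "risk universe": an inventory of AI-bearing systems that
# can be tracked, assessed, and prioritized. Everything here is hypothetical.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    owner: str
    uses_personal_data: bool
    impact: int      # 1 (low) to 5 (high) business/legal impact if it fails
    likelihood: int  # 1 (low) to 5 (high) chance of failure or misuse

    @property
    def risk_score(self) -> int:
        # Simple impact x likelihood score, as in a classic risk matrix.
        return self.impact * self.likelihood

inventory = [
    AISystemRecord("credit-scoring-model", "risk-team", True, 5, 3),
    AISystemRecord("support-chatbot", "helpdesk-team", False, 2, 2),
]

# Review the highest-risk systems first.
for record in sorted(inventory, key=lambda r: r.risk_score, reverse=True):
    print(record.name, record.risk_score)
```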
2. Introducing Program Bias into Decision Making
o One of the more damaging risks of artificial intelligence is introducing bias into decision-making algorithms. AI systems learn from the dataset on which they were trained, and depending upon how this compilation occurred, there is potential for the dataset to reflect assumptions or biases. These biases could then influence system decision making.
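A minimal sketch of one basic bias check, demographic parity, which compares positive-decision rates across groups; the group labels and outcomes are invented for illustration:

```python
# A sketch of a demographic-parity check: compare the rate of positive
# decisions across groups in the model's output.
from collections import defaultdict

decisions = [  # (group, model_approved)
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approved[group] += outcome

rates = {g: approved[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap suggests the training data or model may encode bias
# and warrants investigation before the system is deployed.
print("parity gap:", max(rates.values()) - min(rates.values()))
```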

3. Data Sourcing and Violation of Personal Privacy
o When data leaks or breaches occur, the resulting fallout can significantly damage a company's reputation and can represent legal violations, with many legislative bodies now passing regulations that restrict how personal data can be processed. A well-known regulatory example of this is the General Data Protection Regulation (GDPR) adopted by the European Union in April 2016, which subsequently influenced the California Consumer Privacy Act passed in June 2018.
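One common mitigation is to pseudonymize direct identifiers before data is processed. A minimal sketch, with illustrative field names and a hypothetical salt:

```python
# A sketch of pseudonymization: replace direct identifiers with salted
# one-way hashes before data is analyzed or used for training.
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    # A salted SHA-256 hash; the original value cannot be read back directly.
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

SALT = "rotate-this-secret"  # in practice, stored and rotated securely

record = {"email": "jane@example.com", "age": 34, "purchases": 7}
safe_record = {**record, "email": pseudonymize(record["email"], SALT)}

print(safe_record)  # age and purchase history kept; email no longer readable
```

Note that under the GDPR, pseudonymized data can still count as personal data, so this reduces exposure rather than removing the obligation.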
4. Black Box Algorithms and Lack of Transparency
o The primary purpose of many AI systems is to make predictions, and the algorithms involved can be so inordinately complex that even those who created them cannot thoroughly explain how the variables combine to reach the resulting prediction. This lack of transparency is why some algorithms are referred to as a "black box," and why legislative bodies are now beginning to investigate what checks and balances may need to be put in place. If, for example, a banking customer is rejected based on an AI prediction about the customer's creditworthiness, the company runs the risk of not being able to explain why.
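One way to recover an explanation is a global surrogate: fit a small, readable model to mimic the black box's predictions. A minimal sketch on synthetic data; the surrogate technique is our illustrative choice, not the only remedy:

```python
# A sketch of a global surrogate model: approximate a complex "black box"
# with a shallow decision tree so that decisions (e.g. a rejected credit
# application) can be explained in readable rules.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the tree to imitate the black box's predictions, not the ground
# truth: the tree then describes what the black box is doing.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
```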

5. Unclear Legal Responsibility
o The risks discussed so far lead to the question of legal responsibility. If an AI system is designed with fuzzy algorithms, and machine learning allows the decision-making to refine itself, then who is legally responsible for the outcome? Is it the company, the programmer, or the system? This risk is not theoretical: in 2018, a self-driving car hit and killed a pedestrian. In that case, the car's human backup driver was not paying attention and was held responsible when the AI system failed.
DO THE BENEFITS OF ARTIFICIAL INTELLIGENCE OUTWEIGH THE RISKS?
• The risks of artificial intelligence are significant, but the use of these tools and their continued growth are also inevitable. The benefits go beyond simple efficiency gains and include more equitable decision making when the algorithms are trained to avoid bias. As we increase our understanding from risk management and audit perspectives, we should look for key features in AI systems:
 AI systems should include clear design documentation (see the model-card sketch after this list).
 Machine learning should include testing and refinement.
 AI control and governance should take priority over algorithms and efficiency.
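A minimal sketch of what such design documentation could look like in code, loosely in the spirit of a model card; every field and value here is hypothetical:

```python
# A sketch of "clear design documentation" kept alongside the system,
# in the general spirit of a model card. All values are hypothetical.
model_card = {
    "name": "loan-default-predictor",
    "version": "1.2.0",
    "intended_use": "Rank applications for manual review; never auto-deny.",
    "training_data": "Internal loan history 2015-2021, PII pseudonymized.",
    "known_limitations": ["Sparse data for applicants under 21"],
    "fairness_checks": {"demographic_parity_gap": 0.04},
    "owner": "risk-team@example.com",
    "last_reviewed": "2024-01-15",
}

for key, value in model_card.items():
    print(f"{key}: {value}")
```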
• We all have a responsibility to learn more about the risks of artificial intelligence and control those risks. The topic is not going
away, and the risks will continue to grow and change as technology becomes more advanced and more pervasive.
Organizations that embrace the three key points above will be better equipped to manage AI system risks that could otherwise
have devastating legal and reputational consequences.
THANK YOU
