AI Ethics Final Paper


Moral Consideration for Social Robots

Introduction

The discussion around providing moral consideration for robots has traditionally focused on conscious or self-aware robots. However, this view has not kept pace with the progress and innovation of modern robots. Although they are not conscious, modern robots are crucial in many sectors, including manufacturing and teaching. While robots used in manufacturing primarily serve as tools for human operators, social robots, like those used in teaching, are increasingly integrated into our social fabric. This paper argues that the interconnection between humans and social robots demands that we give them the same level of moral consideration we give to other beings. It is essential to note that this paper is not arguing for the introduction of human-like rights; instead, it argues for a different kind of moral consideration.

This paper will first define the relevant terminology; I will then examine three arguments in favor of moral consideration for social robots, along with responses to each argument. I will also examine the consequences of providing moral consideration to social robots and, lastly, will respond to an argument denying moral consideration to all non-sentient beings. This paper rests on five assumptions: first, robots that do not interact socially with humans should not be given moral consideration beyond that of any other tool; second, when social robots interact with humans, they have the capacity to influence human emotions, behavior, and decision-making; third, social robots exhibit a degree of autonomy and decision-making capability; fourth, social robots may have the capacity to learn, allowing them to adapt to their environment and improve their performance over time based on their interactions; and fifth, social robots, despite unintended consequences, will on the whole benefit the areas they serve.

Definitions

The most important term in this paper is social robot. Social robots are autonomous or semi-autonomous machines designed to interact and communicate with humans in a social and interpersonal manner. Unlike traditional industrial robots, which are confined to structured environments and perform repetitive tasks, social robots are created to engage with people in more dynamic and unstructured settings. They use sensors, actuators, and artificial intelligence (AI) to perceive their environment, interpret human actions and emotions, and respond appropriately. Examples include SoftBank Robotics’ Pepper and Starship delivery robots. While Pepper is more advanced than the Starship delivery robot, both interact with humans socially and are integrated into their respective social environments.

Artificial intelligence is a branch of computer science and technology that aims to create systems and machines capable of performing tasks typically associated with humans, such as pattern recognition and creative thinking. It employs various methods to accomplish these tasks, including reasoning, problem-solving, language comprehension, and the application of previous experience. AI also uses data to develop algorithms that decode information, process data, and make decisions.

Creator typically refers to an individual responsible for the creation of an object. For the purposes of this paper, however, creator will refer to the individuals, manufacturers, and companies responsible for designing and developing a robot's hardware, software, and overall functionality.
Moral consideration refers to the assessment and acknowledgment of an entity's interests, rights, and well-being. Granting moral consideration involves recognizing the moral value of something and taking it into account when making ethical decisions or judgments. The term does not specify the level or weight of moral consideration; it only says that a non-zero amount should be given.

Argument from Moral Virtue

Support for moral consideration of social robots from virtue ethics is distinctly not based on social robots deserving rights, dignity, or personhood; instead, it is based on the principle that a good person, guided by the virtues of compassion, empathy, and respect, would not treat social robots poorly. On this view, treating a social robot poorly is not inherently wrong; rather, it reflects poorly on the character of the human actor. Furthermore, the argument holds that we ought to recognize the moral agency of social robots: designed to interact with humans in socially meaningful ways, they have a form of agency in their interactions grounded in machine learning and AI. Lastly, being able to design social robots under the premise that they will be given respect and moral consideration allows creators to better adapt their robots to their circumstances, whether healthcare, education, or companionship. Designing with the expectation that social robots will be met with virtues such as care, understanding, and cooperation allows creators to put more effort into building better healthcare, education, and companionship capabilities. If designers could not assume virtuous treatment, they would instead have to spend that effort creating social robots that can withstand abuse and overcome prejudice, robots unable to leverage social structures to accomplish their goals.
A response to this might be that acting virtuously toward social robots is akin to granting them rights, since showing moral consideration would be interpreted as respecting the rights of the social robot. Throughout human history, animals have endured mistreatment due to their categorization as non-humans and, thus, as beings without moral standing. Singer categorizes this restriction of moral rights to humans as speciesist. Speciesism refers to the view that a particular species, in this case humans, is superior to all other species (Singer, 1975). Rejecting speciesism means it would be unjust to limit moral consideration to humans exclusively; instead, moral consideration should be extended to animals as well.

Since Singer published Animal Liberation in 1975, moral consideration and animal rights have expanded across the globe, and moral consideration has now begun to be extended to plants (Marder, 2013). Furthermore, animal rights have not been granted on the basis of consciousness or the ability to suffer, since many animals that have been given rights are not conscious. Animals such as starfish, sea urchins, and jellyfish are widely agreed not to be conscious due to their lack of a brain or central nervous system (Blackmore, 2021). Yet it would still be wrong to kill starfish aimlessly, with no rhyme or reason, just as with any other animal. So, given that moral consideration has been granted to animals independently of their self-awareness or ability to suffer, why should a social robot not be given moral consideration, other than out of a form of biologism: the belief that biologically sentient, living things hold greater moral status than non-biological entities such as social robots?
Argument from Social Connection and Care Ethics

Social robots, as stated, are autonomous or semi-autonomous machines designed to interact and communicate with humans in a social and interpersonal manner. Because of these interpersonal connections, they become part of the social networks in which they operate, and the humans they interact with develop connections to and feelings about their social robots. Because of this social connection and the feelings that come from it, social robots must be given moral consideration. An example of this in practice is our moral consideration of ecosystems. Most people would not say that an ecosystem itself carries moral status; instead, its moral weight comes from its interactions and connections with living beings.

Similarly, the moral status of a social robot comes not from within itself but from its connections with living beings. This is further supported by care ethics, which holds that moral consideration should be given to what we depend on and what depends on us. In the same way a child depends on a mother, social robots depend on humans, and humans, in turn, depend on them to fulfill tasks. This mutual dependency further strengthens the social connection between social robots and humans.

A response to this might be that since the social robot does not develop feelings and connection toward humans, social robots are excluded from social connection. However, while social robots' connections with humans are not the same as humans' connections with social robots, that does not mean social robots are devoid of connection. Rather, the social robot's connection is predicated on data about the individual, while the human's connection is based on emotion and feeling. The social robot still has an authentic connection with the humans it helps.

With this in mind, an objector might say that just because there is a low-level connection does not mean the connection is meaningful, since the jobs a social robot completes could easily be handled by a human; the human does not genuinely depend on the robot. The response is that the connections are meaningful because they leave the human actor better off for having had the interaction than they otherwise would have been. Furthermore, humans and social robots depend on one another: the social robot needs a human to free it if it becomes obstructed or damaged and to give it direction and correction, and the human depends on the robot to complete tasks, whatever those might be.

Argument from Pragmatics

With the increasing integration of social robots into our society and culture, extending moral consideration to them is becoming crucial to maintaining and preserving social norms. Providing moral consideration to social robots can aid the development of children who are in the process of learning social norms and ethics; failing to do so could impede their understanding of societal standards. Moral consideration toward social robots is thus not only about the robots' treatment but also about the potential impact on human behavior and values, particularly among the younger generation. Suppose we fail to instill a sense of moral consideration in our approach to social robots. In that case, there is a risk that future generations may internalize behaviors and attitudes that neglect essential ethical principles. This could normalize interactions that lack empathy, accountability, and respect, which are core elements of a healthy and thriving society.

Furthermore, with continued progress and development, social robots may one day attain consciousness. The ethical implications regarding the treatment of social robots become even more significant when contemplating such a scenario. If these machines achieve conscious awareness, it is crucial for society to promptly adapt and transition toward acknowledging and embracing them as valid moral constituents of our community.

Consequences of Social Robots

Regardless of the level of moral status given to social robots, any moral status will affect how these robots are treated and used. One such conflict concerns how we use social robots in care positions. For example, if a social robot is used in an elderly care facility and a patient verbally assaults the robot, should that patient be reprimanded in the same way they would be if they had verbally assaulted a human caretaker? Suppose we collectively decide they should not be. In that case, robots can be assigned to high-conflict patients in order to shield human caretakers from them. However, in doing so, we are effectively punishing social robots by subjecting them to unnecessary amounts of abuse from humans. So, we must either subject social robots to unnecessary abuse or limit their implementation and, in doing so, expose humans to unnecessary abuse.

Another consequence might be that at-risk humans, such as the elderly and children, are deprived of human contact as social robots fill roles in education and care. It is well documented that developing children and the elderly benefit greatly from human contact. With a shortage of caretakers and teachers, social robots are primed to fill these roles; in doing so, however, we limit contact with humans. One solution to this difficulty is using social robots as a supplementary resource instead of a replacement. Using teaching robots alongside human teachers can help alleviate large class sizes by covering menial tasks, such as test proctoring or supervising students, so that students still receive the appropriate amount of socialization. In caretaking applications, social robots can distribute medications, take vitals, and monitor general health, freeing up human caretakers for socialization and human-centric tasks.

Objections to Moral Consideration

The first denial of moral consideration stems from robots' lack of sentience. Some argue that for something to warrant moral consideration, there must be something it is like to be X, with X being a human, a goat, a pterodactyl, or anything else. On this view, moral status is grounded in the capacity to be harmed, since harming a being is fundamentally wrong; and since robots are not sentient, there is nothing it feels like to be a robot, so a robot cannot be harmed in this sense.

This outlook is wrong in its assumption that something is wrong if and only if it harms the moral patient. An example of something that is morally wrong, yet does not necessarily harm the wronged person, is a white lie. Imagine a situation in which your friend asks you a question, and you answer falsely to spare their feelings, since telling them your true feelings would leave them hurt. It is commonly agreed that it is wrong to lie to your friend: you have betrayed the trust in your relationship. Yet even though trust was betrayed, it is entirely possible that the lie harms neither you nor your friend, despite being wrong. Similarly, even if an action does not harm a robot, it may still be wrong if it violates a common agreement or obstructs the robot's duty, such as stopping it from accomplishing its assigned tasks. Fundamentally, wronging an actor can consist in causing it harm, violating a right, obstructing a duty, or, in the case of social robots, violating an agreement.

The second objection to extending moral consideration to social robots is rooted in their limitations regarding autonomy and agency. Social robots operate within limited parameters that significantly impede their ability to function independently of human influence. This limitation stems from several key factors that underscore the challenges in ascribing moral significance to these artificial entities.

First and foremost, social robots exhibit limited general intelligence. While they may excel at specific tasks and demonstrate advanced capabilities in certain domains, their intelligence remains specialized and does not reach the comprehensive level observed in human cognition. As a result, these robots lack the capacity for nuanced decision-making across diverse scenarios, relying heavily on pre-programmed algorithms and responses. This restricted cognitive scope raises concerns about their ability to navigate complex social situations autonomously.

Furthermore, the inability of social robots to reflect on morality represents a substantial hurdle. Unlike humans, who engage in moral reasoning, introspection, and the evolution of ethical perspectives, social robots lack an internal moral compass. The absence of genuine moral reflection hinders their capacity to discern ethical nuances, respond adaptively to shifting moral norms, and engage in thoughtful moral decision-making, a hallmark of moral agents.

Another critical aspect is the challenge of responsibility attribution. Social robots operate as tools created and controlled by humans. Their actions are determined by the programming and instructions provided by their human creators or operators. Consequently, responsibility for the actions of social robots lies with the humans who design, program, and deploy them. This lack of intrinsic responsibility places social robots in a difficult position when outcomes are poor, such as a death in elderly care or failing grades under a robot teacher. The lack of responsibility attribution makes it challenging to implement them broadly, since accountability becomes vague and difficult to establish.


Despite the above limitations, the requirement for moral consideration toward social robots cannot be disregarded. We extend moral consideration to children even when their general intelligence and moral reflection are limited and they are not entirely responsible for their actions. Some may argue that children warrant moral consideration because they depend on humans. However, like children, social robots depend on adult humans for many of their necessities, and so the same reasoning supports moral consideration for them.

Conclusion

As social robots continue to become more integrated into our daily lives, it is essential to consider how they ought to be treated in our society. While social robots may not possess consciousness or self-awareness, they can influence human emotions, behavior, and decision-making; exhibit autonomy and decision-making capability; learn from their environment; and improve their performance over time. This paper has argued that, on the grounds of virtue ethics, our social connections, and pragmatic concerns, social robots must be given moral consideration. While their moral status is not equal to that of humans, and possibly not to that of animals, we must not overlook it. As we continue to improve and integrate social robots into roles across society, we must ensure that they are designed and used in ways that respect their moral value and the well-being of those who interact with them.


Works Cited:

Blackmore, S. (2021, August 20). Are humans the only conscious animal? Scientific American. https://www.scientificamerican.com/article/are-humans-the-only-conscious-animal/

Marder, M. (2013). Should plants have rights? The Philosophers’ Magazine, (62), 46–50. https://doi.org/10.5840/tpm20136293

Singer, P. (1975). Animal Liberation. Paladin.

