AI Policy - EXTERNAL
SAP AI Ethics Handbook
Handbook for applying SAP’s Global AI Ethics Policy across the AI Factory Process
External
SAP’s Definition of AI

AI is typically defined as the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, and problem-solving. It requires a system to correctly interpret external data, learn from such data, and use those learnings to achieve specific goals through flexible adaptation.

Types of AI systems:

Rule-Based AI
• Their behavior is fully defined by rules created by human experts. These systems are often described as symbolic or expert systems.

Learning-Based AI
• Learning-based AI systems are distinguished by the fact that humans define the problem and the goal, but the behavior, rules, and relationships required for the system are learned in an automated way. With the help of data, they learn how to solve a problem and continuously adapt their function in this process.

At SAP, risk assessments shall be performed on such learning-based AI systems during the development phase and subsequent phases to ensure that there are no unintended biases.
We engage with the wider societal challenges of AI

Addressing Bias & Discrimination / Transparency & Explainability

DPP-361 (Product Standard):
• Human Intervention: Allow human intervention for all automated decision processes.
• Transparency: Provide transparency when using AI during the processing of personal data.
• Bias: Ensure that the AI methods used do not result in inaccuracies detrimental to data subjects’ rights.
• Anonymization: Apply adequate anonymization techniques (see the sketch after this list).
• Safeguards: Implement safeguards to protect personal data used to train AI models.
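As an illustration of the anonymization and safeguard requirements above, the following is a minimal sketch of pseudonymizing direct identifiers before records are used for model training. The field names and the hashing approach are assumptions made here for illustration; hashing alone is pseudonymization rather than full anonymization, and the adequate technique must be chosen per use case.

import hashlib

# Illustrative only: replace direct identifiers with hashed values before a
# record is used for training. Field names are hypothetical.
def pseudonymize(record: dict, id_fields=("name", "email")) -> dict:
    out = dict(record)
    for field in id_fields:
        if field in out:
            out[field] = hashlib.sha256(str(out[field]).encode()).hexdigest()[:16]
    return out

# Example usage
print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "score": 0.7}))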
SAP classifies AI use cases as high-risk under certain circumstances. We base these criteria on what kind of AI has led to negative consequences for individuals or whole populations in the past (see, for example, Amazon’s case of a discriminating AI recruiting tool).

High-risk use cases are not prohibited within SAP; however, they must first go through an assessment process of the AI Ethics Steering Committee before they can be further developed, deployed, and sold.

Processing personal data:
Does the use case process any information relating to an identified or identifiable natural person for training purposes or during productive usage? Use cases with anonymized data sets, or use cases that only anonymize personal data, do not qualify as high risk.

Processing sensitive personal data:
Does the use case include the processing of sensitive personal data, such as information on sexual orientation, religion, or biometric data (including face imaging and/or voice recognition)?

High-risk application:
Does the use case belong to one of the following domains, e.g. categorisation of natural persons, management and operation of critical infrastructure, employment/HR, healthcare, private and public services and benefits, law enforcement, migration, or democratic processes?
[Risk classification flowchart]
Are you working on an AI use case? (no: stop; yes: complete the Risk Classification form)
Is it a Generative AI use case?
Is it processing sensitive personal data?
Is it processing personal data? If so: automated decision making? Does it negatively affect individuals? High-risk sector?
Depending on the answers, the use case requires completing the AI Ethics Policy Self-Assessment, is classified as a High Risk Case and assessed by the AI Ethics Steering Committee, or is not high risk.
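The flowchart can be read as simple decision logic. The sketch below encodes the questions as boolean fields; the field names and the exact branch order are assumptions for illustration only, not the official classification form.

from dataclasses import dataclass

# Hypothetical encoding of the risk-classification questions above; the exact
# branching of the official flowchart may differ.
@dataclass
class UseCase:
    is_ai_use_case: bool
    is_generative_ai: bool
    processes_personal_data: bool
    processes_sensitive_personal_data: bool
    automated_decision_making: bool
    negatively_affects_individuals: bool
    in_high_risk_sector: bool

def classify(uc: UseCase) -> str:
    """Return a rough classification outcome for an AI use case."""
    if not uc.is_ai_use_case:
        return "out of scope"
    if uc.is_generative_ai:
        return "complete AI Ethics Policy Self-Assessment"
    if uc.processes_sensitive_personal_data:
        return "High Risk Case: Steering Committee assessment"
    if uc.processes_personal_data and (
        uc.automated_decision_making
        or uc.negatively_affects_individuals
        or uc.in_high_risk_sector
    ):
        return "High Risk Case: Steering Committee assessment"
    return "not high risk"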
SAP AI Factory Process

02. Validation: Experiments to assess feasibility
03. Realization: Development of AI functions
04. Productization: Integrating AI functions into the business process
05. Operations: Deliver embedded AI functions to customers
Continuous Improvement
Human Centered Approach

Our AI serves the benefits of users and society in business by reducing the fears and/or existential threats our users may encounter. Thus, our AI processes must be conducted through an understanding of our user and stakeholder needs. Methods of user research, design thinking, and correctly defining user stories are mandatory aspects of our development.
Drawing on an understanding of his customers’ pain points, and to help them run efficiently, he identifies a business process that can be automated. Before the team builds a prototype of the AI use case, they collect all the relevant information about the use case together and discuss whether any of SAP’s red lines are touched (e.g., AI used for mass surveillance), and identify risks of unethical behaviour (e.g., automated decision-making). They fill out the use case risk assessment template and send it to the AI Ethics Office for due diligence.

A. Human Agency & Oversight

• Before implementation, the degree of freedom of the AI system must be defined.
The decision-making degrees of freedom of the AI system must be defined.

• The target definition of the AI system must be given by a human.

• Decisions by an AI system may always be overruled by a human.
AI systems shall be subject to appropriate human oversight, and the rights and freedoms of a human shall exceed those of AI systems.

• An appropriate governance mechanism has to be chosen.
Human oversight shall be achieved through an appropriate governance mechanism (a minimal human-in-the-loop sketch follows these requirements). This could include, but not be exclusive to, human-in-the-loop, human-on-the-loop, or human-in-command, and shall be decided on a case-by-case basis.

• Human oversight must be introduced where humans are directly impacted.
In situations where humans may be directly impacted by a decision made by SAP’s AI system, human oversight shall be introduced to safeguard that the AI system does not undermine human autonomy or introduce unintended consequences.
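To make the governance options concrete, here is a minimal sketch of a human-in-the-loop gate in which every automated recommendation is reviewed, and may be overruled, by a human before it takes effect. The Recommendation type and the review callback are hypothetical; the policy deliberately leaves the concrete mechanism to a case-by-case decision.

from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical human-in-the-loop gate: the AI system proposes, a human decides.
@dataclass
class Recommendation:
    decision: str
    explanation: str  # clear, simple explanation shown to the reviewer

def decide_with_oversight(
    recommend: Callable[[dict], Recommendation],
    human_review: Callable[[Recommendation], Optional[str]],
    case: dict,
) -> str:
    """Return the final decision; the human reviewer may always overrule the AI."""
    rec = recommend(case)
    overruled = human_review(rec)  # None means the reviewer accepts the proposal
    return overruled if overruled is not None else rec.decision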
They further develop the use case considering the instructions of the AI Ethics Steering Committee. They interact with potential customers and user groups to discuss expectations in regards to the functional scope as well as ethical considerations like transparency and human agency. Based on their findings they carry out experiments and build prototypes, iteratively aligning the results with their stakeholders. The team makes sure that the data used for exploration and experimentation is balanced and representative, and that the AI methods used are suitable for the desired level of explainability.

A. Human Agency & Oversight

• Before implementation, the degree of freedom of the AI system must be defined.
Before implementation, the decision-making degrees of freedom of the AI system must be defined.

• Decisions by an AI system may always be overruled by a human.
AI systems shall be subject to appropriate human oversight, and the rights and freedoms of a human shall exceed those of AI systems.

• An appropriate governance mechanism has to be chosen.
Human oversight shall be achieved through an appropriate governance mechanism. This could include, but not be exclusive to, human-in-the-loop, human-on-the-loop, or human-in-command, and shall be decided on a case-by-case basis.

• Human oversight must be introduced where humans are directly impacted.
In situations where humans may be directly impacted by a decision made by SAP’s AI system, human oversight shall be introduced to safeguard that the AI system does not undermine human autonomy or introduce unintended consequences.

• Clear and simple explanations have to be provided for automated decisions.
As far as is practical, a clear and simple explanation shall be provided as to how decisions were made by an AI system used in automated decision processes.
We will follow & document these requirements:
Addressing Bias & Discrimination

• Inclusive data must be used for training.
Where relevant, the data used to train AI systems shall be as inclusive as possible, representing as diverse a cross-section of the population or past situations as possible, and as free as possible from (or accounted and mitigated for) any historic or socially constructed biases, inaccuracies, errors, and mistakes.

• Where feasible, a fairness function shall be used.
Where feasible, a fairness function shall be applied to test AI systems for unbiased output (a minimal sketch follows these requirements).

• Measures to detect bias must be realized.
SAP shall endeavour to detect unfairly biased outputs and shall implement technical and/or organizational measures to prevent direct or indirect prejudice, discrimination, or marginalization of groups or individuals, e.g. by reducing bias in training data.

• The AI system should address the widest possible range of end-users.
AI software shall be user-centric, addressing the widest possible range of applicable end-users, and following relevant accessibility standards, regardless of users’ age, gender, abilities, or characteristics.

• Affected users should be involved in the development process.
Wherever possible, developers shall seek to involve impacted/affected users to evaluate and check that outputs are diverse and discrimination-free.

• The use of data for testing must comply with DPP Policy.
The use of data for the testing of AI systems shall comply with applicable data protection and privacy laws.

• Data used for training and testing has to be representative and generalizable.
The AI system shall be trained and tested on datasets that are as expansive as is feasible, representative, relevant, accurate, and generalizable.

Transparency & Explainability

The model architectures shall not include target variables, features, processes, or analytical structures which are unreasonable, ethically objectionable, or unable to be validated according to the principles laid out in this document.

• The data sets and the development processes must be documented.
The data sets and the processes that produce an AI system’s decisions, including those of data gathering and data labelling as well as the algorithms used by the developed AI system, shall be documented to allow for traceability and transparency.

• The AI system’s capabilities and limitations must be documented.
The capabilities and limitations shall be documented as part of the development process in a manner appropriate to the use case at hand. This shall include information regarding the AI system’s level of accuracy (performance metric), as well as its limitations and capabilities.

• For decisions affecting humans, explanations have to be provided to the data subject.
In alignment and compliance with applicable data protection and privacy law, AI systems that engage in profiling or automated decision-making must be able to provide explanations to the extent possible to data subjects upon request, describing the data segment the subject was placed into and the reasons they were placed there. In addition, the reasons as to why the decision was made shall be provided if requested by the data subject. The explanation must be such as to provide the data subject grounds to challenge the decision.

• Transparency on how personal data is processed must be provided.
In alignment and compliance with applicable data protection and privacy laws, products that use AI systems in the processing of personal data must provide transparency to the extent possible as to how the AI system was used, in clear and simple language if requested by the data subject.

• The methods used for development, testing and validation must be documented.
The methods used for developing, testing and validating, and the outcomes of or decisions made by the AI system, shall be fully documented as part of the development process according to SAP’s Global Development Policy and Product Development Standards.
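As a concrete example of a fairness function, the following sketch compares positive-outcome rates across two groups defined by a protected attribute. The metric (demographic parity difference) and the threshold are illustrative assumptions; the policy does not prescribe a specific fairness measure.

import numpy as np

# Illustrative fairness check for binary predictions and a single protected
# attribute; the acceptable gap is a use-case-specific decision.
def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Example: flag the model for review if the gap exceeds a chosen threshold.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
if demographic_parity_difference(y_pred, group) > 0.2:
    print("Potential bias detected; review training data and model.")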
Developer

Furthermore, they include a feedback control in the product, which allows users to share feedback on the quality of the AI recommendations. This feedback could be used to re-train the algorithm if needed (a minimal sketch follows below).

As far as is practical, a clear and simple explanation shall be provided as to how decisions were made by an AI system used in automated decision processes.
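One possible shape for such a feedback control is sketched below: each user rating of a recommendation is stored for later analysis and potential re-training. The record fields and the file-based store are purely illustrative assumptions, not part of the handbook.

import json
import datetime

# Hypothetical feedback record for an AI recommendation; stored entries can
# later be analysed to decide whether the model should be re-trained.
def record_feedback(recommendation_id: str, rating: int, comment: str = "") -> dict:
    entry = {
        "recommendation_id": recommendation_id,
        "rating": rating,  # e.g. 1 (poor) to 5 (good)
        "comment": comment,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }
    with open("feedback_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry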
The requirements on Addressing Bias & Discrimination and Transparency & Explainability listed above apply in this phase as well.
The AI Ops Engineer makes sure that the data used for on-boarding the customer is representative and regularly checks for undesired bias during execution. Further, the AI Ops Engineer forwards potential user requests regarding transparency and explainability to the Use Case Owner.

Because of the need for continuous improvement in the AI Factory Process, one year after the successful product launch the feedback control shows that the accuracy of the predictions provided by the AI needs to improve by including two new parameters. The AI Ops Engineer advises the development team to re-train the model including the new parameters. Before the optimized model is built into the product again, the team ensures that the model is free from any biases (a simple accuracy-monitoring sketch follows these requirements).

A. Human Agency & Oversight

• Additional testing is necessary in case of composed AI systems.
When two or more AI systems are connected to each other or embedded within each other, additional testing and control measures should be performed for the individual AI systems as well as for the overall system.

• AI systems automating decisions must be tested extensively to avoid unintended behaviour.
When human-on-the-loop models are used, appropriate extensive testing and governance shall be conducted during development and deployment to ensure the system behaves as intended by the developers and does not have any unintended behaviour, outputs, or usage.

• Clear and simple explanations have to be provided for automated decisions.
As far as is practical, a clear and simple explanation shall be provided as to how decisions were made by an AI system used in automated decision processes.
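The feedback-driven re-training described above implies some form of accuracy monitoring in operations. The sketch below keeps a rolling window of prediction outcomes and flags when accuracy falls below a threshold; window size and threshold are arbitrary illustrative choices, not part of the policy.

from collections import deque

# Illustrative accuracy monitor, assuming labelled outcomes become available
# after the fact during productive usage.
class AccuracyMonitor:
    def __init__(self, window: int = 500, threshold: float = 0.85):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        self.results.append(prediction == actual)

    def needs_retraining(self) -> bool:
        if not self.results:
            return False
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.threshold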
The same requirements on Addressing Bias & Discrimination and Transparency & Explainability apply during operations and continuous improvement.