
SALESFORCE AI ASSOCIATE CERTIFICATION PRACTICE QUESTIONS

1.A business analyst (BA) is looking to boost their company's performance by making improvements in their sales procedures and customer service.
What AI tools could the BA employ to address these requirements?

A. Lead Scoring, Opportunity forecasting, and Case Classification


B. Cleaning up sales data and ensuring proper governance of customer support data
C. Utilizing machine learning models and predicting chatbot behavior
(* Lead scoring, opportunity forecasting, and case categorization are important AI
applications for a business analyst (BA) aiming to enhance their company's sales
processes and customer support. These AI applications help the BA by providing data-
driven insights, automating manual tasks, and improving decision-making processes,
ultimately leading to improved sales and customer support performance.)

2.What is a key characteristic of machine learning in the context of AI capabilities?

A. Can perfectly mimic human intelligence and decision-making


B. Relies on preprogrammed rules to make decisions
C. Utilizes algorithms to learn from data and make decisions
(* Machine learning is a type of AI that uses algorithms to learn from data and make
decisions.)

3.In the realm of AI capabilities, what is the primary function of computer vision?

A. Analyzing and comprehending visual information


B. Improving images through image processing techniques
C. Forecasting future results using data
(* Computer vision is a type of AI that interprets and understands visual data.)

4.What constitutes the foundational components of AI systems?

A. Algorithms, software, and hardware elements


B. Algorithms, data, and hardware infrastructure
C. Algorithms, data, and computational resources
(* The core components of AI systems are algorithms, data, and computation.
Algorithms provide the rules and instructions for the AI system, data is used to train the
AI system, and computation is the process of executing the algorithms on the data.)
5.A sales manager wants to improve Salesforce operations with AI. What AI
application would offer the greatest benefits?

A. Handling and organizing data effectively


B. Generating sales-related dashboards and reports
C. Prioritizing leads and predicting sales opportunities
(* In Salesforce, AI improves sales by scoring leads and predicting future opportunities,
helping leaders prioritize leads and enhance sales processes.)

6.A service leader plans to use AI to help customers resolve their queries more
quickly with a guided self-service app. Which Einstein feature offers the most
suitable solution for this?

A. Automated Chatbots
B. Categorizing Cases
C. Offer personalized recommendations
(* Chatbots in a self-service app interact with customers in real time, understand their
questions, and offer quick solutions, making self-help more efficient and user-friendly.)

7.A marketing manager wants to use AI to better engage with their customers.
Which functionality provides the best solution?

A. Bring Your Own Model


B. Journey Optimization
C. Einstein Engagement
(With Salesforce Marketing Cloud Engagement.

Journey Optimization allows one to create, test, and optimize personalized campaign
variations with built-in predictive AI. Make every moment count by automating and
customizing all aspects of customer engagement — including channel, content, timing,
and send frequency. Scale dynamic journeys and improve productivity with AI.)

8.Which features of Einstein enhance sales efficiency and effectiveness?

A. Opportunity Scoring, Opportunity List View, Opportunity Dashboard


B. Opportunity List View, Lead List View, Account List View
C. Opportunity Scoring, Lead Scoring, Account Insights
(* Opportunity Scoring, Lead Scoring, and Account Insights are all features of Einstein
that contribute to enhancing sales efficiency and effectiveness.)
9.Cloudy Computing wants to use AI to enhance its sales processes and
customer support. Which capability should they use?

A. Dashboard of Current Leads and Cases


B. Sales Path and Automated Case Escalations
C. Einstein Lead Scoring and Case Classification
(* Einstein Lead Scoring enhances your sales processes and Case Classification
enhances your customer support processes using AI capabilities.)

10.In what way does AI aid in the process of lead qualification?

A. Generates customized SMS marketing campaigns


B. Engages with potential customers automatically
C. Evaluates leads using customer information
(* AI assists in the lead qualification process by analyzing customer data. AI algorithms
can assess various aspects of leads, such as their behavior, demographics, and
interactions with a company's website or products. By processing this information, AI
can assign scores or labels to leads, indicating their likelihood to convert into
customers. This automated evaluation streamlines the lead qualification process and
helps sales teams prioritize their efforts on leads that are more likely to result in
successful conversions.)
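
For illustration only (this is not Einstein's actual model), lead scoring of the kind described can be sketched as a simple classifier trained on past lead outcomes; the feature names and values below are hypothetical.

    # Illustrative lead-scoring sketch (hypothetical features, not Einstein's implementation)
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [email_opens, site_visits, demo_requested]; label: converted (1) or not (0)
    X_train = np.array([[5, 12, 1], [0, 1, 0], [3, 8, 1], [1, 2, 0]])
    y_train = np.array([1, 0, 1, 0])

    model = LogisticRegression().fit(X_train, y_train)

    new_lead = np.array([[4, 10, 1]])
    score = model.predict_proba(new_lead)[0][1]   # estimated likelihood of conversion
    print(f"Lead score: {score:.0%}")

The output is a probability-style score that a sales team could use to rank leads, which is the idea behind the data-driven prioritization described above.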

11.During a conversation with a customer considering AI implementation in Salesforce, what should be the consultant's top priority when discussing the ethical aspects of data management?

A. Privacy, bias, compliance, and security


B. Visualization, data storage, and retrieval
C. Network, software, and hardware
(* The consultant's main concerns with AI Ethics should be privacy, bias, security, and
following the rules. These things make sure AI is used responsibly and that people's
data is treated with respect.)

12.What is the term for bias that imposes the values of a system onto others?

A. Automation
B. Societal
C. Association
(* Automation bias means a system forces its own ideas onto others. For example, in a
beauty contest judged by AI in 2016, the AI mostly picked white winners because it was
trained on pictures of white women and didn't recognize the beauty in people with
different features or skin colors. This shows how the bias in the AI's training data
affected the contest's results.)

13.What could be the origin of bias in the training data used for AI models?

A. The data is gathered directly from source systems in real-time


B. The data is gathered from a wide variety of sources and people
C. The data primarily comes from a specific demographic or source
(* Skewed data can introduce bias into AI algorithms.)

14.What is the significance of data protection measures in AI usage?

A. Safeguards privacy and compliance


B. Expands the range of data collected
C. Enhances the quality of data
(* Data protection measures are primarily implemented to ensure privacy and
compliance with regulations.)

15.Why is the explainability of trusted AI systems important?

A. Clarifies how AI models reach decisions


B. Adds complexity to AI models
C. Boosts the security and precision of AI models
(* Explainability in AI systems is about providing clear explanations of how AI models
make decisions.)

16.What action leads to bias in the training data for AI algorithms?

A. Utilizing a sizable dataset that requires significant computational resources


B. Utilizing a dataset that includes a variety of perspectives and populations
C. Employing a dataset that lacks representation from various perspectives and
populations
(* Skewed data can introduce bias into AI algorithms.)

17.Why is it vital to address privacy issues when handling AI and CRM data?

A. Guarantees adherence to laws and regulations


B. Does not impact the quantity of data collected
C. Validates data accessibility for all users
(* Data protection measures are primarily implemented to ensure privacy and
compliance with regulations.)

18.What represents a significant obstacle in human-AI cooperation in decision-making?

A. Facilitates more knowledgeable and impartial decision-making


B. Diminishes the necessity for human participation in decision-making procedures
C. Encourages dependence on AI, possibly reducing critical thinking and
supervision
(Over-reliance on AI can potentially lead to less critical thinking and oversight.)

19.Can you provide an instance of successful cooperation between humans and AI systems?

A. Humans and AI collaborate to make well-informed decisions


B. Humans assign routine tasks to AI
C. Humans rely on AI for decision-making
(* Effective collaboration between humans and AI systems involves leveraging the
strengths of each, particularly in the context of Salesforce's suite of products humans
and AI to work together, leveraging the strengths of each to make more informed
decisions. This is evident in the design and implementation of Salesforce's Einstein
Bots, which are designed to work in tandem with human agents, not replace them.)

20.What is a method for reducing bias and promoting fairness in AI applications?

A. Continuously auditing and monitoring the data used in AI applications


B. Employing data that has a larger representation of minority groups compared to
majority groups
C. Including data features in the AI application to benefit a population
(* Regularly auditing AI models and implementing bias correction techniques is a
recognized method to mitigate bias and ensure fairness.)

21.What purpose do Salesforce's Trusted AI Principles serve within CRM systems?

A. Defining the technical requirements for AI integration


B. Establishing a structure for AI data model precision
C. Guiding the ethical and responsible utilization of AI
(* Salesforce's Trusted AI Principles are a set of guidelines that the company follows
when developing and using AI in its CRM systems. These principles are based on the
following five values: responsible, accountable, transparent, empowering, and
inclusive.)

22.Cloud Kicks wants to implement AI features on its Salesforce Platform but has
concerns about potential ethical and privacy challenges. What should they
consider doing to minimize potential AI bias?

A. Implement Salesforce's Trusted AI Principles.


B. Integrate AI models that auto-correct biased data.
C. Use demographic data to identify minority groups.
(Salesforce's Trusted AI Principles are designed to guide ethical and responsible AI
implementation, including addressing and mitigating bias in AI systems. Following these
principles helps ensure that AI is used in a way that minimizes potential bias and ethical
concerns while promoting fairness and transparency in AI applications.)

23.Regarding Salesforce's Trusted AI Principles, what is the main emphasis of the Responsibility principle?

A. Defining the technical requirements for AI integration


B. Establishing a structure for data model accuracy
C. Guaranteeing ethical AI usage
(The core emphasis of the Responsibility principle within Salesforce's trusted AI
principles is to secure ethical and accountable AI utilization.)

24.What is the main focus of the Accountability principle in Salesforce's Trusted AI Principles?

A. Taking responsibility for one's actions toward customers, partners, and society
B. Ensuring transparency in AI-driven recommendations and predictions
C. Safeguarding fundamental human rights and protecting sensitive data
(The core focus of the Accountability principle within Salesforce's trusted AI principles is
to guarantee that AI systems are responsible and their actions can be readily
understood.)

25.What does Salesforce's Trusted AI Principle of Transparency entail?

A. Tailoring AI features to align with particular business needs


B. Incorporating AI models into Salesforce workflows
C. Providing a clear and comprehensible explanation of AI decisions and actions
(* The principle of transparency in Salesforce's trusted AI principles primarily advocates
for the clear and understandable explanation of AI decisions and actions.)

26.Within Salesforce's Trusted AI Principles, what is the primary goal of the Empowerment principle?

A. Enable users to address complex technical challenges using neural networks


B. Enable users of varying skill levels to create AI applications through user-
friendly interfaces, without needing to write code
C. Enable users to actively participate in the expanding field of AI research and
knowledge
(* The principle of empowerment in Salesforce's trusted AI principles primarily aims to
empower users to understand and control AI systems.)

27.What is a fundamental aspect to think about when it comes to data quality in AI implementations?

A. The process of integrating AI models with Salesforce workflows


B. The role of data in training and refining Salesforce AI models
C. Methods for tailoring AI features in Salesforce
(* Data quality is vital because the data used to train and fine-tune AI models
significantly impacts their performance and outcomes. High-quality, representative, and
unbiased data is essential for training AI models that can make accurate predictions and
recommendations. Therefore, understanding the role of data in the training and
refinement of AI models is key to the success of AI implementations.)

28.What role does data quality play in accomplishing AI business goals?

A. Data quality is essential for generating precise AI data insights


B. Data quality is not needed because AI can handle all types of data
C. Data quality is crucial for adhering to AI data storage constraints
(* High-quality data is crucial for training AI models, making accurate predictions, and
providing valuable insights. Poor-quality data can lead to inaccurate or biased AI
results, which can hinder the achievement of business objectives. Therefore, ensuring
data quality is a fundamental requirement for AI to deliver meaningful and reliable
insights that can inform business decisions and strategies.)
29.What effect does a data quality evaluation have on business results for
companies utilizing AI?

A. Establishes a baseline for AI predictions


B. Speeds up the introduction of new AI solutions
C. Enhances the efficiency of AI recommendations
(* Assessing data quality provides insights into the quality of data, which is crucial for
ensuring accurate AI outcomes.)

30.Cloudy Computing employs Einstein for generating predictions but is experiencing inaccuracies. What could be a possible explanation for this?

A. Low data quality


B. Excessive data volume
C. Incorrect product choice
(* Good quality data is crucial for accurate predictions. Poor data quality can lead to
inaccurate predictions.)

31.A company utilizes Einstein and maintains a high data quality score, yet they
are not experiencing the advantages of AI. What might be a potential explanation
for not realizing the benefits of AI?

A. The company isn't employing a sufficient amount of data for predictive purposes.
B. The data score might exceed the minimum threshold, but the company isn't using it
as a reference for future outcomes.
C. The company isn't utilizing the appropriate Salesforce product.
(* Even with a high data quality score, the company needs to use the score as a
benchmark for future results to see the benefits from AI.)

32.How does data quality and transparency impact bias in generative AI?

A. It eliminates the likelihood of bias


B. It increases the likelihood of bias
C. It reduces the likelihood of bias
(* AI systems can pick up biases from the data they learn from. If the data is biased or
doesn't represent all perspectives, AI can make biased predictions. Good data quality
helps spot and reduce these biases but can't completely get rid of them.)

33.Cloudy Computing depends on data analysis to optimize its product recommendations; however, the company encounters a recurring issue of incomplete customer records, with missing contact information and incomplete purchase histories. How will this incomplete data quality impact the company's operations?

A. The accuracy of product recommendations is hindered.


B. The diversity of product recommendations is improved.
C. The response time for product recommendations is stalled.
(* Without comprehensive and accurate customer data, the AI system may struggle to
make precise recommendations, potentially impacting the company's ability to provide
relevant and effective product suggestions to customers. This incomplete data quality
can hinder the accuracy and relevance of the recommendations, which can, in turn,
affect the company's operations and customer satisfaction.)

34.A healthcare company implements an algorithm to analyze patient data and assist in medical diagnosis. Which primary role does data quality play in this AI application?

A. Reduced need for healthcare expertise in interpreting AI outputs


B. Enhanced accuracy and reliability of medical predictions and diagnoses
C. Ensured compatibility of AI algorithms with the system's infrastructure
(* High-quality data contributes to more precise and trustworthy medical predictions and
diagnoses, which is critical for patient care and treatment decisions. Inaccurate or
unreliable data could lead to incorrect diagnoses and treatment recommendations,
potentially harming patients. Therefore, data quality plays a primary role in enhancing
the accuracy and reliability of medical predictions and diagnoses in this AI application.)

35.What can happen if an organization experiences low data quality?

A. Loss of revenue, diminished customer service, and damage to reputation


B. Decreased employee satisfaction, reduced stock value, and difficulty in attracting top
talent
C. Technical challenges, inflexible system architecture, and slow data processing
(* Inaccurate or incomplete data can lead to errors in business operations, resulting in
financial losses. It can also affect customer service by causing delays, incorrect
information, and frustration among customers. Additionally, when an organization's data
quality is compromised, it can damage its reputation, eroding trust among customers
and stakeholders. Therefore, these consequences highlight the importance of
addressing data quality issues to maintain a successful and reputable organization.)

36.What is the possible outcome of poor data quality?


A. AI predictions become more focused and less robust.
B. AI models maintain accuracy but have slower response times
C. Biases in data can be inadvertently learned and amplified by AI systems.
(* When AI models are trained on data that is inaccurate, incomplete, or biased, it can
result in predictions and generated content that are unreliable and not representative of
the real-world scenarios. In other words, the quality of the data used directly impacts the
quality of the outcomes produced by AI models. Poor data quality can lead to imprecise
predictions and generative outputs, which can undermine the utility and effectiveness of
these AI applications.)

37.Cloudy Computing intends to employ an AI model for forecasting shoe demand based on historical sales data and regional attributes. Which data quality dimension is crucial for achieving this objective?

A. Age
B. Volume
C. Reliability
(* The Age, Completeness, Accuracy, Consistency, Duplication, and Usage of a dataset
are vital factors to assess when determining its suitability for AI models. However, the
size and the number of variables in the dataset are unrelated to its appropriateness for
AI models.)

38.A developer possesses a significant volume of data, yet it is dispersed across various systems and lacks standardization. Which fundamental data quality aspect should they prioritize to guarantee the efficiency of their AI models?

A. Consistency
B. Volume
C. Performance
(* It's important to emphasize the significance of consistency as a fundamental data
quality factor. Data volume and data location, on the other hand, are not directly tied to
data quality.)

39.Cloudy Computing aims to enhance the predictive accuracy of its AI model by leveraging a substantial volume of data. What data quality aspect should the company prioritize?

A. Location
B. Accuracy
C. Volume
40.A Business Analyst (BA) is in the process of creating a new AI use case. As
part of their preparations, they generate a report to examine whether there are
any null values in the attributes they intend to utilize. What data quality aspect is
the BA confirming by assessing null values?

A. Usage
B. Duplication
C. Completeness
(* For each business purpose, make a list of the necessary fields. Afterward, generate a
report indicating the percentage of empty values in these fields. Alternatively, you can
employ a data quality app from AppExchange.)
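
As a rough illustration of the completeness check described above (outside of Salesforce reporting), the percentage of empty values per field could be computed with pandas; the field names and records below are hypothetical.

    # Sketch of a completeness check: percentage of empty values per field (hypothetical data)
    import pandas as pd

    leads = pd.DataFrame({
        "Email": ["a@example.com", None, "c@example.com"],
        "Phone": [None, None, "555-0100"],
        "Industry": ["Retail", "Tech", None],
    })

    null_pct = leads.isna().mean() * 100   # fraction of missing values per column, as a percentage
    print(null_pct.round(1))               # e.g., Email 33.3, Phone 66.7, Industry 33.3

Fields with a high percentage of empty values are the ones to remediate before using them in an AI use case.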

41.Cloudy Computing aims to reduce the workload of its customer care agents by
deploying a chatbot on its website to handle common queries. Which area of AI is
best suited for this situation?

A. Visual data analysis


B. Predictive data insights
C. Natural language processing
(* Natural language understanding (NLU) is AI's way of understanding human language.
In this situation, Cloudy Computing wants to use a chatbot to talk to customers and
answer their questions. NLU tech helps the chatbot understand what customers say and
give them good answers. This helps a lot with handling common questions and making
customer support better.)

42.What AI method involves a network of connections that are influenced by weights and biases?

A. Neural networks
B. Rule-based systems
C. Predictive analytics
(* Neural networks are a type of AI tool made of interconnected nodes with weights and
biases. These connections are crucial for their ability to process and learn from data,
making them vital in AI tasks like recognizing patterns, understanding language, and
analyzing images. Neural networks work somewhat like human brain neurons, which is
why they're called "neural networks".)
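
A minimal sketch of how weights and biases connect nodes in a single layer may help make this concrete; the numbers are arbitrary and purely illustrative.

    # Minimal single-layer forward pass: output = activation(inputs x weights + bias)
    import numpy as np

    inputs = np.array([0.5, 0.2, 0.9])                         # one example with three features
    weights = np.array([[0.1, 0.4], [0.8, 0.3], [0.5, 0.9]])   # 3 input nodes -> 2 output nodes
    bias = np.array([0.05, -0.1])                              # one bias per output node

    z = inputs @ weights + bias        # weighted sum of inputs plus bias
    output = 1 / (1 + np.exp(-z))      # sigmoid activation squashes values into (0, 1)
    print(output)

Training a neural network amounts to adjusting these weights and biases so the outputs better match the training data.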

43.Which type of AI focuses on very specific tasks?

A. General AI
B. Narrow AI
C. Super AI
(Weak/narrow AI encompasses AI systems that are created to execute a particular task
or a predefined set of tasks.)

44.What kind of AI employs machine learning to generate fresh and unique output
based on a provided input?

A. Generative
B. Predictive
C. Probabilistic
(* Generative AI employs machine learning techniques to produce novel and unique
output based on a given input. It can create new content, such as text, images, or even
music, by learning patterns and relationships in the input data and generating new data
that fit those patterns. This is why generative AI is often used in creative applications
like art generation, text generation, and more.)

45.How does an organization benefit from using AI to personalize the shopping experience of online customers?

A. Customers are more likely to share personal information with a site that personalizes
their experience.
B. Customers are more likely to be satisfied with their shopping experience.
C. Customers are more likely to visit competitor sites that personalize their experience.
(* Using AI to personalize the online shopping experience leads to increased customer
satisfaction. This happens because personalized recommendations make shopping
more convenient, engaging, and relevant, which ultimately boosts conversion rates,
customer retention, and loyalty.)

46.What are three frequently employed examples of AI in CRM?

A. Predictive scoring, forecasting, recommendations


B. Predictive scoring, reporting, image classification
C. Einstein Bots, face recognition, recommendations
(* These are three common uses of AI in CRM involving predicting customer behavior,
forecasting future trends, and providing personalized recommendations to enhance
customer engagement and sales efficiency within the CRM system.)
47.What do predictive analytics, machine learning, natural language processing
(NLP), and computer vision refer to?
A. Different data models employed in Salesforce
B. Various AI applications applicable in Salesforce
C. Diverse automation tools used in Salesforce
(* Predictive analytics, machine learning, NLP, and computer vision represent distinct
forms of artificial intelligence utilized in Salesforce to improve various business functions
like sales, marketing, and customer service.)

48.What are some significant advantages of AI in enhancing customer experiences within CRM?

A. Improves case handling by organizing and monitoring customer support issues, recognizing subjects, and summarizing case solutions
B. Enhances security measures in CRM to protect sensitive customer information from
potential breaches and security risks
C. Completely automates the customer support journey, ensuring smooth, automated
interactions with customers
(* One big advantage of AI in CRM is that it sorts, organizes, and tracks customer
support cases and their details. This leads to more personalized and efficient customer
service.)

49.What are some key benefits of AI in improving customer experiences in CRM?

A. Streamlines case management by categorizing and tracking customer support cases, identifying topics, and summarizing case resolutions
B. Fully automates the customer service experience, ensuring seamless automated
interactions with customers
C. Improves CRM security protocols, safeguarding sensitive customer data from
potential breaches and threats
(* AI in CRM categorizes cases, tracks support types, prioritizes cases, monitors status,
identifies topics, reasons, and closure codes, and tracks case types and channels. This
leads to more personalized and efficient customer service.)

50.Cloudy Computing's latest email campaign is struggling to attract new customers. How can AI increase the company's customer email engagement?

A. Remove invalid email addresses


B. Create personalized emails
C. Resend emails to inactive recipients
(* Creating personalized emails is a well-known strategy to increase customer email
engagement. AI can analyze customer data and behavior to generate personalized
content and recommendations, making emails more relevant to individual recipients.
This personalization can lead to higher open rates, click-through rates, and overall
engagement.)

51.What is the involvement of humans in AI-powered CRM procedures?

A. Humans are excluded from AI-driven CRM processes


B. Humans have a crucial role in supervising AI-powered CRM processes, adding
context, and making ultimate decisions
C. Humans are solely involved in configuring AI-driven CRM processes but are not part
of the ongoing operation
(* Humans do have a vital function in supervising AI-powered CRM procedures, offering
context, and rendering ultimate judgments.)

52.To avoid introducing unintended bias to an AI model, which type of data should be omitted?

A. Transactional
B. Engagement
C. Demographic
(* To mitigate the risk of bias inherent in demographic targeting, use interest- and intent-
based targeting.)

53.What is the best method to safeguard customer data privacy?

A. Archive customer data on a recurring schedule.


B. Track customer data consent preferences.
C. Automatically anonymize all customer data.
(* By continuously tracking and respecting customer data consent preferences,
organizations can ensure that they are using customer data in compliance with privacy
regulations and the individual choices of their customers. This approach prioritizes
transparency and consent, which are essential principles in data privacy protection.)

54.What step should be followed to build and apply reliable generative AI while
considering Salesforce's safety guidelines?

A. Construct appropriately sized models to minimize environmental impact


B. Maintain transparency when AI generates and independently delivers content
C. Establish safeguards to mitigate harmful content and safeguard Personally
Identifiable Information (PII)
(* Establish safeguards to mitigate harmful content and safeguard Personally
Identifiable Information (PII)" is the correct answer because it aligns with the principles
of responsible and ethical AI development, as well as data protection.)

55.Which statement best reflects Salesforce's commitment to honesty in training AI models?

A. Manage bias, toxicity, and harmful content by implementing integrated guardrails and
guidance
B. Guarantee proper consent and transparency when employing AI-generated
responses
C. Reduce the AI model's environmental impact and carbon footprint during training
(* Ensuring that users are aware of and have given their consent for the use of AI-
generated responses demonstrates transparency and honesty in AI interactions. It
respects user preferences and privacy.)

56.What constitutes an instance of ethical debt?

A. Breaching data privacy regulations and neglecting fine payments


B. Introducing an AI feature after identifying a detrimental bias
C. Postponing the release of an AI product to retrain a data model
(* Ethical debt refers to situations where ethical concerns or issues are recognized but
not immediately addressed or corrected. In this case, launching an AI feature despite
knowing it has a harmful bias creates ethical debt because the issue of bias has been
acknowledged but not rectified. This can lead to negative consequences and ethical
dilemmas down the line.)

57.What data does Salesforce automatically remove from Marketing Cloud Einstein engagement model training to reduce bias and ethical risks?

A. Demographic
B. Geographic
C. Cryptographic
(* Demographic data includes information related to characteristics such as age,
gender, race, ethnicity, and other personal attributes. Excluding demographic data helps
prevent the AI model from learning biases associated with these attributes and
promotes fairness and non-discrimination in AI-driven processes. Salesforce's practice
of excluding demographic data aligns with ethical considerations in AI to avoid bias and
promote equity.)

58.What are some of the ethical challenges associated with AI development?

A. Implicit transparency of AI systems, which makes it easy for users to understand and
trust their decisions
B. Potential for human bias in machine learning algorithms and the lack of
transparency in AI decision-making processes
C. Inherent neutrality of AI systems, which eliminates any potential for human bias in
decision-making
(* The ethical challenges in AI development primarily revolve around the potential for
human bias in machine learning algorithms and the need for transparency in AI
decision-making processes. These challenges highlight the importance of addressing
biases and promoting transparency to ensure responsible and ethical AI development.)

59.Salesforce defines bias as using a person's immutable traits to classify them or market to them. Which potentially sensitive attribute is an example of an immutable trait?

A. Financial status
B. Nickname
C. Email address
(* Financial status is an example of an immutable trait, which is a characteristic that
cannot be changed or is highly resistant to change over time. Financial status typically
includes attributes like income level, wealth, or financial stability, which are not easily
altered by an individual. Using such immutable traits for classification or marketing
purposes can be sensitive and potentially discriminatory, which aligns with Salesforce's
definition of bias.)

60.Cloudy Computing is testing a new AI model. Which approach aligns with Salesforce's Trusted AI Principle of Inclusivity?

A. Test with diverse and representative datasets appropriate for how the model
will be used.
B. Rely on a development team with uniform backgrounds to assess the potential
societal implications of the model.
C. Test only with data from a specific region or demographic to limit the risk of data
leaks.
(* The Inclusive principle calls for considering diversity, equality, and fairness. Testing with diverse and representative datasets ensures the model's impact is understood across different situations, rather than only being discussed.)

61.A customer using Einstein Prediction Builder is confused about why a certain
prediction was made. Following Salesforce's Trusted AI Principle of
Transparency, which customer information should be accessible on the
Salesforce Platform?

A. A marketing article of the product that clearly outlines the product's capabilities and
features
B. An explanation of how Prediction Builder works and a link to Salesforce's Trusted AI
Principles
C. An explanation of the prediction's rationale and a model card that describes
how the model was developed
(* Transparency principle - Customers should comprehend the reasoning behind each
AI-generated recommendation and prediction. This involves offering comprehensive
details such as model cards.)

62.How can Cloudy Computing enhance its AI practices while adhering to Salesforce's Trusted AI Principles?

A. Conducting internal surveys among the company's employees


B. Embracing AI practices in line with industry trends and competition
C. Soliciting independent feedback from external ethics experts, customers, and
advisory boards
(* The Accountable principle underscores taking responsibility for one's actions towards
stakeholders and actively seeking external input for ongoing enhancements.)

63.The technical team at Cloudy Computing is evaluating the efficiency of their AI development procedures. Which well-established Salesforce model should guide the creation of reliable AI solutions?

A. Ethical AI Process Maturity Model


B. Ethical AI Prediction Maturity Model
C. Ethical AI Practice Maturity Model
(* This model is designed to guide the development of AI solutions in an ethically
responsible manner, emphasizing best practices, transparency, and compliance with
ethical principles. It provides a framework for evaluating and improving the ethical
maturity of AI practices within an organization, making it the most suitable choice for
Cloudy Computing's evaluation of their AI development processes.)

64.Which Salesforce Trusted AI Principle highlights the significance of designing AI models to reduce bias for everyone potentially affected?

A. Transparency
B. Inclusiveness
C. Accountability
(* This principle underscores the need to consider and include diverse perspectives,
demographics, and user groups when developing AI solutions to promote fairness and
equitable outcomes. It aligns with the goal of reducing bias and ensuring that AI benefits
a broad and inclusive audience.)

65.When should the use of natural language processing (NLP) for automated
customer service be disclosed to the customer, following Salesforce's Trusted AI
Principles?

A. After they have finished their interaction with AI


B. When they specifically ask for a live agent
C. At the outset of their conversation with AI
(* Disclosing the use of NLP for automated customer service at the beginning of the
conversation ensures transparency and informs the customer that they are interacting
with an AI system rather than a human agent. This upfront disclosure promotes trust
and transparency in the customer-agent interaction, allowing customers to make
informed decisions about their engagement with the AI system.)

66.In what way should a financial institution adhere to Salesforce's Trusted AI Principle of Transparency when executing a campaign for preapproved credit cards?

A. Clarify how risk factors like credit score may influence customer eligibility
B. Highlight sensitive variables and their stand-ins to avoid biased lending practices
C. Integrate customer input into the ongoing training of the model
(* Transparency involves providing clear and understandable explanations of how AI-
driven decisions are made. In the context of a financial institution's campaign for
preapproved credit cards, explaining to customers how risk factors like a credit score
can impact their eligibility is essential for transparency. It ensures that customers clearly
understand the criteria used to determine eligibility and fosters trust in the decision-
making process. This transparency also helps customers make informed choices about
applying for a pre-approved credit card.)

67.Cloudy Computing is getting a dataset ready for an AI model but notices certain irregularities in the data. What should the company do as the most suitable course of action?

A. Modify the AI model to accommodate the data irregularities


B. Investigate the data inconsistencies and implement data quality methods
C. Expand the amount of data used for training the model
(* When inconsistencies are identified in a dataset, it's essential to examine the root
causes of those inconsistencies and take steps to improve data quality. This typically
involves investigating why the data is inconsistent, identifying errors or missing values,
and applying data cleaning or data quality techniques to ensure the dataset is accurate
and reliable for training AI models. Simply adjusting the AI model or increasing the
quantity of data won't address the underlying data quality issues, which can lead to
inaccurate model outcomes.)

68.A system administrator acknowledges the necessity of establishing a data management strategy. What is a fundamental element of a data management strategy?

A. Naming conventions
B. Data backup
C. Color coding
(* Naming conventions play a crucial role in establishing the guidelines for a data
management strategy.)

69.Cloudy Computing conducts a data quality evaluation and identifies several contact records with future dates of birth. In this situation, which data quality aspect should be employed to assess the date of birth?

A. Consistency
B. Timeliness
C. Validity
(* In this case, having future dates of birth is not valid, as it contradicts the expected and
accurate date ranges for birthdays.)
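
As a simple sketch of a validity check of this kind (hypothetical records, not a Salesforce validation rule), records whose date of birth falls in the future can be flagged like this:

    # Validity check: flag contacts whose date of birth lies in the future (hypothetical data)
    from datetime import date

    contacts = [
        {"name": "Ada",  "birthdate": date(1990, 4, 12)},
        {"name": "Bram", "birthdate": date(2031, 1, 3)},   # invalid: date is in the future
    ]

    invalid = [c for c in contacts if c["birthdate"] > date.today()]
    print([c["name"] for c in invalid])   # ['Bram']

The same rule (birthdate must not be later than today) could equally be enforced at data entry so invalid values never reach the dataset.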

70.What does "data completeness" mean when discussing data quality?


A. The extent to which all necessary data points exist within the dataset
B. The act of combining multiple datasets from different databases
C. The capacity to retrieve data from various sources in real-time
(* Data completeness refers to how much of the required or necessary data is present
within a dataset. It measures whether all the relevant data points are there or if there
are missing or incomplete parts of the dataset.)

71.Cloud Kicks wants to create a custom service analytics application to analyze cases in Salesforce. The application should rely on accurate data to ensure efficient case resolution. Which data quality dimension should the company prioritize?

A. Consistency
B. Age
C. Duplication
(* Consistency in data quality ensures that data is uniform and follows a standardized
format, which is crucial for accurate analysis and efficient case resolution in this
scenario.)

72.A sales manager aims to improve the quality of lead data in their CRM system.
What is the most likely process to assist the team in achieving this objective?

A. Prioritize active leads quarterly


B. Review and update missing lead information
C. Redesign the lead conversion process

74.An administrator at Cloud Kicks wants to ensure that a field is set up on the
customer record so their preferred name can be captured. Which Salesforce field
type should the administrator use to accomplish this?

A. Rich Text Area


B. Text
C. Multi-Select Picklist
(* A text field allows for the entry of a single text value, which is appropriate for
capturing a customer's preferred name.)

75.How does data quality influence the ethical use of AI applications?

A. Good data quality is crucial to guarantee impartial and equitable AI decisions, uphold
ethical standards, and prevent discrimination
B. Having high-quality data ensures that the necessary demographic attributes are
available for creating personalized campaigns
C. Poor-quality data lowers the likelihood of unintentional bias since it doesn't overly
cater to specific demographic groups
(* This accurately describes the role of data quality in ensuring fair and ethical AI
applications. High-quality data helps AI systems make unbiased decisions, adhere to
ethical principles, and avoid discriminatory outcomes by providing a reliable foundation
for training and decision-making.)

76.How does data quality affect the reliability of AI-driven decisions?

A. High-quality data enhances the dependability and credibility of AI-driven decisions, building trust among users
B. Low-quality data decreases the likelihood of model overfitting, enhancing the
reliability of predictions
C. A combination of low-quality and high-quality data can enhance the accuracy and
dependability of AI-driven decisions
(* High-quality data improves the reliability and credibility of AI-driven decisions,
fostering trust among users.)

77.Why is the use of good data crucial for effective AI development?

A. Diminishes the necessity for post-launch implementation monitoring


B. Results in more precise and dependable predictions and outcomes
C. Guarantees a shorter model training duration
(* High-quality data enhances the precision and reliability of AI predictions and
outcomes.)

78.What is the expected outcome of high-quality data on customer relationships?

A. Improved customer trust and satisfaction


B. Increased brand loyalty
C. Increased expenses for acquiring customers
(* When a business uses high-quality data effectively, it can better understand its
customers' needs and preferences. This enables the company to provide more
personalized and relevant experiences, products, and services. As a result, customers
tend to trust the brand more and are more satisfied with their interactions, ultimately
leading to improved customer trust and satisfaction.)

79.What advantage does a diverse, well-rounded, and extensive dataset offer?


A. Training duration
B. Data security
C. Model precision
(* Having a diverse, balanced, and large dataset is advantageous for machine learning
models because it enhances their accuracy and precision. These types of datasets
enable models to generalize patterns effectively, reducing the risk of overfitting and
improving performance on new, unseen data. Additionally, large datasets provide a
wealth of information that allows models to uncover subtle patterns and make more
accurate predictions. While data privacy and training time are important considerations
in machine learning, they are not direct benefits of dataset diversity but rather depend
on other aspects of the machine learning process.)

80.Cloud Kicks wants to optimize its business operations by incorporating AI into its CRM system. What should the company do first to prepare its data for use with AI?

A. Remove biased data


B. Determine data availability
C. Determine data outcomes
(* Cloudy Computing needs to make sure that the data required for AI in their CRM
system is easy to access. This step is vital because if the necessary data isn't readily
available, it can cause delays and complications in the AI implementation process.
Once data accessibility is confirmed, the company can then proceed with tasks like
cleaning and enhancing the data and addressing biases, which come later in the data
preparation process.)

81.What could be a potential result of inadequate data quality?

A. Reduced diversity and resilience in AI predictions


B. Unintentional reinforcement of biases in AI systems due to flawed data
C. AI models retaining accuracy but experiencing delayed response times
(* When data is of low quality, it often contains inaccuracies and biases. When AI
systems are trained on such data, they can inadvertently learn and perpetuate these
biases, causing unfair or discriminatory outcomes. This is a significant concern in AI and
highlights the importance of ensuring data quality to prevent biased AI predictions and
decisions.)

82.A Salesforce administrator creates a new field to capture an order's destination country. Which field type should they use to ensure data quality?
A. Number
B. Picklist
C. Text

83.How does the "right of least privilege" reduce the risk of handling sensitive
personal data?

A. By reducing how many attributes are collected


B. By applying data retention policies
C. By limiting how many people have access to data
(* Treat Sensitive Data Carefully
Make sure that you're collecting only the data you need. Be intentional about why you're
collecting it. Using certain data—such as age, gender, or ethnicity—can introduce bias
into your personalization solution. Other data, such as postal codes (which can be
highly correlated with race), can serve as proxies for bias. Finally, observe the "right of
least privilege" and give access only to people that truly need it, and only when they
need it.)

84.What is one way to achieve transparency in AI?

A. Communicate AI goals and objectives with those involved prior to all interactions.
B. Establish an ethical and unbiased culture amongst those involved.
C. Allow users to give feedback regarding the inferences the AI makes about them.
(* Disclosing the use of NLP for automated customer service at the beginning of the
conversation ensures transparency and informs the customer that they are interacting
with an AI system rather than a human agent. This upfront disclosure promotes trust
and transparency in the customer-agent interaction, allowing customers to make
informed decisions about their engagement with the AI system.)

85.What is the key difference between generative and predictive AI?

A. Generative AI finds content similar to existing data and predictive AI analyzes existing data.
B. Generative AI creates new content based on existing data and predictive AI
analyzes existing data.
C. Generative AI analyzes existing data and predictive AI creates new content based on
existing data.
(* Generative AI is focused on generating new content, such as text, images, or even
music, based on patterns and information it has learned from existing data. It generates
novel output.

Predictive AI, on the other hand, uses existing data to make predictions or forecasts
about future events or outcomes. It analyzes data to identify patterns and trends that
can be used to predict specific outcomes.)

86.What is an example of Salesforce's Trusted AI Principle of Inclusivity in practice?

A. Working with human rights experts


B. Striving for model explainability
C. Testing models with diverse datasets
(* Inclusivity in AI refers to ensuring that AI systems are fair and unbiased across
different demographic groups. Testing models with diverse datasets means using a
variety of data that represents different demographics, backgrounds, and perspectives
to train and evaluate AI models. This helps identify and mitigate potential biases and
ensures that AI systems work well for a wide range of users and stakeholders.)

87.What should organizations do to ensure data quality for their AI initiatives?

A. Collect and curate high-quality data from reliable sources.


B. Prioritize model fine-tuning over data quality improvements
C. Rely on AI algorithms to automatically handle data quality issues.
(* High-quality data is fundamental for the success of AI initiatives. It's important to
collect data from reliable sources, ensure it's clean and relevant, and curate it to remove
any inconsistencies or errors. Prioritizing data quality is essential for building accurate
and reliable AI models.)

88.Cloud Kicks discovered multiple variations of state and country values in contact records. Which data quality dimension is affected by this issue?

A. Consistency
B. Accuracy
C. Usage
(* Inconsistent data can make it challenging to use the data effectively and can lead to
errors in analysis or operations. Therefore, improving consistency in state and country
values is essential for maintaining data quality.)
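
A consistency fix of the kind implied here might normalize the value variants to a single canonical form; the mapping and values below are illustrative only, not an actual Salesforce feature.

    # Normalize inconsistent country values to a canonical form (illustrative mapping)
    country_map = {"usa": "United States", "u.s.": "United States",
                   "united states": "United States", "uk": "United Kingdom"}

    raw_values = ["USA", "U.S.", "United States", "UK"]
    normalized = [country_map.get(v.strip().lower(), v) for v in raw_values]
    print(normalized)   # ['United States', 'United States', 'United States', 'United Kingdom']
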
89.What is a sensitive variable that can lead to bias?

A. Country
B. Education level
C. Gender
(* Gender is a sensitive variable that, when not handled appropriately in data analysis or
AI models, can lead to biased outcomes. It's essential to ensure that gender-related
data is treated with fairness, equity, and consideration to avoid perpetuating biases or
discrimination.)

90.What type of bias results from data being labeled according to stereotypes?

A. Interaction
B. Societal
C. Association
(* Association bias, also known as associative bias, is a type of bias that arises in data
when there are systematic and non-random associations between variables. This bias
occurs when data labels or attributes are influenced by societal stereotypes,
preconceptions, or cultural biases.

Stereotypes and Preconceptions: Association bias often results from stereotypes and
preconceived notions that people hold about certain groups or categories. These
stereotypes can affect how data is labeled or categorized.)

91.What is a benefit of a diverse, balanced, and large dataset?

A. Model accuracy
B. Training time
C. Data privacy
(* Having a diverse dataset that accurately represents various aspects of the problem
you're trying to solve can significantly improve the accuracy of machine learning
models.

1. Representativeness: Diverse data ensures that the model has seen a wide range of
examples and variations relevant to the problem.
2. Reduced Bias: A balanced dataset with representation from different groups or
classes can reduce bias in model predictions.
3. Generalization: Large datasets provide a rich source of information for the model to
learn from. With more data, the model can better generalize patterns and make more
accurate predictions on new data points.)

92.A developer is tasked with selecting a suitable dataset for training an AI model
in Salesforce to accurately predict current customer behavior. What is a crucial
factor that the developer should consider during selection?

A. Size of dataset
B. Number of variables in the dataset
C. Age of dataset
(* The age of the dataset is important because using outdated data may not accurately
reflect the current behavior and preferences of customers. Customer behavior can
change over time, and using a dataset that is not up-to-date could lead to inaccurate
predictions.)

93.What is an implication of user consent in regard to AI data privacy?

A. AI ensures complete data privacy by automatically obtaining user consent.


B. AI operates independently of user privacy and consent.
C. AI infringes on privacy when user consent is not obtained.
(* User consent is a fundamental aspect of data privacy and ethics. In most cases, AI
should not collect, process, or use personal data without the explicit and informed
consent of the user. Failing to obtain user consent can indeed infringe on privacy rights
and may lead to privacy violations or legal issues.)

94.Cloud Kicks learns of complaints from customers who are receiving too many
sales calls and emails. Which data quality dimension should be assessed to
reduce these communication inefficiencies?

A. Usage
B. Duplication
C. Consent
(* Assessing the consent dimension involves ensuring that the company has obtained
explicit and valid consent from customers to contact them through sales calls and
emails. This includes understanding the preferences of customers regarding
communication frequency and type. By respecting customer consent and preferences,
the company can improve communication efficiency and reduce the likelihood of over-
communication, which can lead to customer dissatisfaction.)
95.Cloud Kicks wants to ensure that multiple records for the same customer are
removed from Salesforce. Which feature should be used to accomplish this?

A. Duplicate management
B. Trigger deletion of old records
C. Standardized field names
(* Duplicate management in Salesforce is a feature that allows you to identify and
handle duplicate records effectively. It provides tools to detect and merge duplicate
records, ensuring that only a single, accurate record is retained for each customer.)
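
Outside of Salesforce's built-in Duplicate Management, the underlying idea can be sketched as matching records on a key field and keeping one surviving record; the contact data below is hypothetical.

    # Sketch of duplicate detection: match records on a key field (Email) and keep the first
    import pandas as pd

    contacts = pd.DataFrame({
        "Name":  ["Pat Lee", "Patricia Lee", "Sam Roy"],
        "Email": ["pat@example.com", "pat@example.com", "sam@example.com"],
    })

    duplicates = contacts[contacts.duplicated(subset="Email", keep="first")]   # the extra "Patricia Lee" row
    deduped = contacts.drop_duplicates(subset="Email", keep="first")           # one record per customer
    print(deduped)

In Salesforce itself, matching rules and duplicate rules play the role of the matching key, and merging replaces the simple "keep the first" step.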

96.Cloud Kicks wants to develop a solution to predict customers' product interests based on historical data. The company found that employees from one region use a text field to capture the product category, while employees from other locations use a picklist. Which data quality dimension is affected in this scenario?

A. Completeness
B. Consistency
C. Accuracy
(* The inconsistency in how product category information is captured, with some
employees using a text field and others using a picklist, represents a lack of consistency
in the data. Consistency is a data quality dimension that focuses on ensuring that data
is uniformly formatted and structured throughout a dataset.)

97.What is a key benefit of effective interaction between humans and AI systems?

A. Leads to more informed and balanced decision-making


B. Reduces the need for human involvement
C. Alerts humans to the presence of biased data
(* Effective collaboration between humans and AI systems involves leveraging the
strengths of each, particularly in the context of Salesforce's suite of products humans
and AI to work together, leveraging the strengths of each to make more informed
decisions. This is evident in the design and implementation of Salesforce's Einstein
Bots, which are designed to work in tandem with human agents, not replace them.)

98.How does data quality impact the trustworthiness of AI-driven decisions?

A. High-quality data improves the reliability and credibility of AI-driven decisions, fostering trust among users.
B. The use of both low-quality and high-quality data can improve the accuracy and
reliability of AI-driven decisions.
C. Low-quality data reduces the risk of overfitting the model, improving the
trustworthiness of the predictions.
(* High-quality data, which is clean, accurate, complete, and representative, provides a
solid foundation for training AI models.

When AI systems are trained on high-quality data, they are more likely to produce
accurate and reliable results.

Users are more likely to trust AI-driven decisions when they see that the data used to
train the model is of high quality, as it suggests that the model has learned from reliable
sources and is less prone to errors or biases.)

99.Cloud Kicks implements a new product recommendation feature for its shoppers that recommends shoes of a given color to display to customers based on the color of the products from their purchase history. Which type of bias is most likely to be encountered in this scenario?

A. Societal
B. Confirmation
C. Survivorship
(* Societal bias can occur when the historical data used to make recommendations
reflects existing societal biases or stereotypes. In this case, if the historical purchase
data contains biases related to the color preferences of customers, the recommendation
system may inadvertently perpetuate those biases by suggesting products of a certain
color more frequently. This can result in recommendations that align with societal biases
rather than providing fair and diverse recommendations.)

100.Which Einstein capability uses emails to create content for Knowledge articles?

A. Generate
B. Predict
C. Discover
(*Salesforce Einstein Discover is an AI-powered feature that can analyze emails and
suggest content for Knowledge articles. It helps identify relevant information from email
conversations and assists in creating knowledge articles based on that content. This
capability can improve the efficiency of knowledge management by automating the
process of article creation from email correspondence.)
101.How is natural language processing (NLP) used in the context of AI capabilities?

A. To cleanse and prepare data for AI implementations


B. To interpret and understand programming language
C. To understand and generate human language

102.What can bias in AI algorithms in CRM lead to?

A. Personalization and target marketing changes


B. Advertising cost increases
C. Ethical challenges in CRM systems

103.What is an example of ethical debt?

A. Violating a data privacy law and failing to pay fines


B. Launching an AI feature after discovering a harmful bias
C. Delaying an AI product launch to retrain an AI

104.A consultant conducts a series of Consequence Scanning workshops to support testing diverse datasets. Which Salesforce Trusted AI Principle is being practiced?

A. Transparency
B. Inclusivity
C. Accountability

105.A financial institution plans a campaign for preapproved credit cards. How should they implement Salesforce's Trusted AI Principle of Transparency?

A. Incorporate customer feedback into the model's continuous training.


B. Communicate how risk factors such as credit score can impact customer eligibility.
C. Flag sensitive variables and their proxies to prevent discriminatory lending
practices.

106.Cloud Kicks wants to decrease the workload for its customer care agents by
implementing a chatbot on its website that partially deflects incoming cases by
answering frequently asked questions. Which field of AI is most suitable for this
scenario?
A. Natural language processing
B. Computer vision
C. Predictive analytics

107.Which AI type plays a crucial role in Salesforce's predictive text and speech
recognition capabilities, enabling the platform to understand and respond to user
commands accurately?

A. Computer Vision
B. Natural Language Processing (NLP)
C. Predictive Analysis

108.Which feature of Marketing Cloud Einstein uses AI to predict consumer
engagement with email and MobilePush messaging?

A. Content Selection
B. Email Recommendations
C. Engagement Scoring

109.What is a unique and distinguishing feature of deep learning in the context of
AI capabilities?

A. Deep learning uses neural networks with multiple layers to learn from a large
amount of data
B. Deep learning uses historical data to predict future outcomes
C. Deep learning uses algorithms to cleanse and prepare data for AI implementations

110.A Salesforce Consultant is discussing AI capabilities with a customer who is
interested in improving their sales processes. Which type of AI would be most
suitable for enhancing sales processes in Salesforce Customer 360?

A. Predictive Analysis
B. Computer Vision
C. Natural Language Processing (NLP)
(This AI can enhance sales processes by predicting future outcomes based on historical
data)

111.What are the 3 main types of AI capabilities in Salesforce?


A. Predictive, Generative, Analytic
B. Predictive, Reactive, Analytics
C. Generative, Descriptive, Analytic

112.Which type of Salesforce application is recommended to enhance the sales
process?

A. Einstein Prediction Builder
B. Einstein Voice
C. Einstein Lead Scoring

113.What is a key benefit in implementing AI in a CRM system?

A. Enhanced customer support
B. Improved platform speed
C. Reduced data governance

114.CloudKicks is implementing AI in their CRM system and is focusing on data
management. What is the benefit of using a data management approach in AI
implementation?

A. Eliminates the need for data governance
B. Reduces the amount of data in the CRM system
C. Emphasizes the importance of data quality

115.A consultant discusses the role of humans in AI-driven CRM processes with
a customer. What is one challenge the consultant should mention about human-
AI collaboration in decision-making?

A. Difficulty interpreting AI decisions
B. High cost of AI implementation
C. Lack of technical skills on the team
(AI decisions are often based on complex algorithms and large datasets, making them
difficult to interpret without sufficient expertise and understanding of AI principles)

116.CloudKicks wants to implement Salesforce AI features. They are concerned
about potential ethical and privacy challenges. What should be recommended to
minimize potential AI bias?
A. Salesforce Trusted AI Principles
B. Demographic data to identify minority groups
C. AI models that auto-correct biased data

117.A consultant designs a new AI model for a financial services company that
offers personal loans. Which variable within their proposed model might
introduce unintended bias?

A. Loan Date
B. Postal Code
C. Payment Due Date

118.CloudKicks is planning to automate its customer service chat using natural
language processing. According to Salesforce's Trusted AI Principles, how
should this be disclosed to the customer?

A. They do not need to be informed that they're chatting with AI
B. Inform the customer they're chatting with AI when they request a live agent
C. Inform them at the beginning of the interaction that they're chatting with AI

119.CloudKicks wants to implement AI features in their CRM system. They have
expressed concerns about the quality of their existing data. What advice should
be given to them regarding the importance of data quality for AI implementation?

A. Assessing data quality is only necessary for large datasets
B. AI systems can handle any data inaccuracies
C. Assessing and improving data quality is crucial for accurate AI predictions and
insights

120.What role does data play in AI models?

A. Data is used for testing and training AI models
B. Data is only used for validating AI models
C. Data is only used for testing AI models

121.What data quality dimension refers to the frequency and timeliness of data
updates?

A. Data Source
B. Data Freshness
C. Data Leakage

122.A Salesforce consultant is considering the data sets to use for training AI
models for a project on the Customer 360 platform. What should be considered
when selecting data sets for the AI models?

A. Duplication, accuracy, consistency, storage location and usage of the data sets
B. Age, completeness, consistency, theme, duplication, and usage of the data sets
C. Age, completeness, accuracy, consistency, duplication, and usage of the data
set

123.Which statement exemplifies Salesforce's "honesty" guideline when training
AI models?

A. Minimize the AI's carbon footprint and environment impact during training
B. Ensure appropriate consent and transparency when using AI-generated
responses.
C. Control bias, toxicity, and harmful content with embedded guardrails and guidance

124.What is a key consideration regarding data quality in AI implementation?

A. Techniques for customizing AI features in Salesforce
B. Data's role in training and fine-tuning the Salesforce AI models
C. Integration process of AI models with Salesforce workflows

125.How does a data quality assessment impact business outcomes for
companies using AI?

A. Improves the speed of AI recommendations
B. Accelerates the delivery of new AI solutions
C. Provides a benchmark for AI predictions

126.What are some ethical challenges associated with AI development?

A. Potential for human bias in machine learning algorithms and lack of
transparency in AI-driven decision-making processes
B. Implicit transparency of AI systems, which makes it easy for users to understand and
trust their decisions
C. Inherent neutrality of AI systems, which eliminates any potential for human bias in
decision making

127.In the context of Salesforce's Trusted AI Principles, what does the principle of
"empowerment" aim to achieve?

A. Empower users of all skill levels to build AI applications with clicks not code
B. Empower users to contribute to the growing body of knowledge of leading AI
research
C. Empower users to solve challenging technical problems using neural networks

128.What should be done to prevent bias from entering an AI system when
training it?

A. Use alternative assumptions
B. Import diverse training data
C. Include proxy variables

129.What is a benefit of data quality and transparency as it pertains to bias in
generative AI?

A. Chances of bias are mitigated
B. Chances of bias are aggravated
C. Chances of bias are removed

DEFINITIONS
1.What is the definition of machine learning?
Uses models to generalize and apply their knowledge to new examples, learning from
labeled data to make accurate predictions and decisions.
2.What is the definition of Generative AI?
Creates new content based on existing data prompted by a human - ChatGPT

3.What is the definition of an algorithm?


A simple set of rules for turning an input into an output

4.What is the definition of a neural network?


A web of connections, guided by weights and biases.

5.What is Salesforce Data Cloud?


Harmonizes and stores customer data at massive scale, and transforms it into a
single, dynamic source of truth

6.What is the definition of GPT (Generative Pre-trained Transformer)?


An AI tool trained on large amounts of text data, to generate clear and relevant
text based on user prompts or queries.

7.What is the definition of deep learning?


Uses layered neural networks to recognize complex patterns in data. Modeled
after the human brain, it makes predictions based on the patterns it has learned before.

8.What are Einstein Bots?


Allows you to build a smart assistant into your customers' favorite channels like
chat, messaging or voice. Uses Natural Language Processing (NLP) to assist
customers.

9.What are Salesforce's Trusted AI Principles?


Responsible
Accountable
Transparent
Empowering
Inclusive

10.What is a bias?
An inaccuracy that is systematically incorrect in the same direction and affects
the success of the person or the algorithm that contains it
11.What is the disparate impact?
Discriminatory practices towards a particular demographic

12.What is a premortem?
A pre-project meeting to set measured and realistic expectations and catch 'what
went wrong' before it happens

13.What is Data Leakage (aka hindsight bias)?


Results from using variables in your prediction model that are known only after
the act of interest

14.What are data insights?


Findings in your data based on thorough analysis generated by Einstein
Discovery

15.What is a Numeric Prediction?
A derived value, produced by a model, that represents a possible future outcome

16.What is Einstein Discovery?


Statistical modeling and supervised machine learning in a no-code required,
rapid-iteration environment

Requires CRM Analytics Plus or Einstein Predictions license

17.What are the 5 guidelines Salesforce follows to guide their development of
generative AI?
Accuracy
Safety
Honesty
Empowerment
Sustainability

18.What are the data quality attributes?


Age
Completeness
Accuracy
Consistency
Duplication
Usage

19.Salesforce's Trusted AI Principles


• Responsible.
• Accountable.
• Transparent.
• Empowering.
• Inclusive.

20.Bias
An inaccuracy that is systematically incorrect in the same direction and affects
the success of the person or algorithm that contains it.

21.Disparate Impact
Discriminatory practices toward a particular demographic.

22.Premortem
A pre-project meeting to set measured and realistic expectations and catch the
"what went wrong" before it happens.

23.Data Leakage - (hindsight bias)


Results from using variables in your prediction model that are known only after
the act of interest.

24.Data Insights
Findings in your data based on thorough analysis generated by Einstein
Discovery.

25.Prediction
A derived value, produced by a model, that represents a possible future outcome.

Einstein Discovery
Statistical modeling and supervised machine learning in a no-code-required,
rapid-iteration environment.

26.Types of AI Capabilities
• Numeric Predictions.
• Classifications.
• Robotic Navigation.
• Language Processing.
27.Anthropomorphism
The attribution of human traits, emotions, or intentions to non-human entities,
often seen in AI when robots or software are given human-like attributes.

28.AI - Artificial Intelligence


Intelligence demonstrated by machines, enabling them to perform tasks typically
requiring human intelligence.

29.Artificial Neural Network


Computational models inspired by the human brain's structure, consisting of
interconnected nodes (neurons) used for pattern recognition and decision-
making.

30.Augmented Intelligence
Combining human intelligence with AI to enhance and scale decision-making
capabilities.

31.CRM with AI
Customer Relationship Management systems enhanced with AI features to
automate tasks and personalize customer interactions.

32.Discriminator (in GANs - Generative Adversarial Networks)


The discriminator evaluates the authenticity of data.

33.Ethical AI Maturity Model


Framework for ensuring AI systems are developed and deployed ethically and
responsibly.

34.GAN - Generative Adversarial Network


A type of AI model comprising two parts: a generator and a discriminator.

Generator
In GANs, it creates potential data outputs.

35.GPT - Generative Pre-trained Transformer


A type of deep learning model specialized in generating text.

36.Grounding
Linking abstract concepts in AI models to concrete instances or data.
37.Hallucination
When AI models produce outputs that aren't based on the data they were trained
on.

38.HITL - Human in the Loop


AI systems incorporate human intervention for decision-making.

39.LLM - Language Learning Models


A general term for models trained on linguistic data.

40.Machine Learning
AI's subset where systems learn from data to improve task performance without
explicit programming.

41.Model
Refers to the specific mathematical structure and algorithms used for a particular
AI task.

42.NLP - Natural Language Processing


AI that interprets and generates human language.

43.Parameters
Configurable variables in a model that are learned from the training data to make
predictions.

44.Prompt Defense
Techniques ensuring AI models respond safely to given prompts.

45.Prompt Engineering
Methods of refining AI model responses through prompt optimization.

46.Red-Teaming
A strategy of simulating adversarial attacks to identify vulnerabilities in systems.

47.Reinforcement Learning
A type of machine learning where agents learn by interacting with environments
and receiving feedback.

48.Supervised Learning
Machine learning where models learn from labeled data.
49.Toxicity
Harmful outputs or behaviors in AI models.

50. Transparency
Making the decision-making process of AI models clear and understandable.

51.Validation
Evaluating the performance of an AI model on a separate dataset not used during
training.

52.Zero Data Retention


Policy of not retaining user data to ensure privacy and security.

53.ZPD - Zone of Proximal Development


A concept from educational psychology about the difference between what a
learner can do with and without help. Not inherently an AI term but can be applied
in AI-based adaptive learning systems.

54.Numeric Predictions
The predictive nature of AI that often assigns values between 0 (indicating an
unlikely event) and 1 (indicating a certain event). While it can include percentage
values, it's not limited to them; for instance, AI can predict specific numbers, like
the amount in dollars.

55.Classifications
AI's ability to categorize data or patterns, often outperforming human abilities.
Each AI classifier is specialized for a distinct task. For instance, an AI designed
for detecting phishing emails would not perform well in identifying images of fish.

56.Robotic Navigation
AI's proficiency in navigating dynamic environments, such as autonomous
vehicles adapting to various road conditions, including curves, wind gusts from
large trucks, and sudden traffic stops.

57.Language Processing
A facet of AI known as Natural Language Processing (NLP) that understands
word associations to extract underlying intentions. This capability enables
functions like translating a document from English to German or summarizing a
lengthy scientific paper.

58.ML - Machine Learning


A subset of AI that uses vast amounts of data to train models to make
predictions. Unlike traditional algorithms which are explicitly programmed, ML
models learn from the data they are fed.

59.Structured vs. Unstructured Data


Structured data is organized and labeled, like a spreadsheet, making it easily
analyzable. In contrast, unstructured data, like a news article or an unlabeled
image, lacks a predefined structure. The type of data dictates the kind of AI
training possible, with unstructured data typically being used for unsupervised
learning.

60.Neural Networks
An AI methodology that simulates the structure and function of the human brain
to process information, enabling the machine to learn from and interpret data in a
human-like manner.

61.Machine Learning Bias


Refers to AI algorithms that inadvertently mirror human prejudices. This
phenomenon results in systematically skewed outcomes, usually a consequence
of mistaken assumptions during the machine learning process.

62.Yes-and-No Predictions
AI's capability to provide binary answers by analyzing historical data, such as
determining if a lead is viable for a business or predicting email engagement.

63.Deep Learning and Data


Analogous to the human mind's ability to establish connections and derive
insights, deep learning employs artificial neural networks across large databases.
Using algorithms, it sifts through information, derives conclusions, and refines
performance.
64.Einstein Agent Productivity
Armed with predictive intelligence, this feature offers real-time suggestions to
agents, equipping them to deliver exemplary customer service.

65.Characteristics of an Effective Chatbot


A successful chatbot is transparent, personable, thorough, and continuously
evolving. It should clearly identify its nature, embody the brand's tone, provide
comprehensive information, and adapt based on feedback.

66.Generative AI
Specialized AI capable of generating media like text or images. These models
learn from input data patterns and produce new content resembling the learned
characteristics.
(AI techniques used to create content or data models.)

67.Predictive AI
Studies historical data, identifies patterns, and makes predictions about the
future that can better inform business decisions. Predictive AI's value is shown in
the ways it can detect data flow anomalies and extrapolate how they will play out
in the future in terms of results or behavior; enhance business decisions by
identifying a customer's purchasing propensity as well as upsell potential; and
improve business outcomes

68.LLMs - Large Language Models


Trained on vast amounts of textual data from the internet, these models execute
advanced language tasks such as summarizing, translating, and error correction,
among others.

69.User Spoofing
The act of creating a believable online profile using AI-generated images. This
allows fake users to interact convincingly with real users, posing challenges for
businesses trying to identify networks of bots promoting their own content.

70.NLP - Natural Language Processing


A branch of artificial intelligence that combines computer science with
linguistics. Its goal is to empower computers with the ability to comprehend,
interpret, and reproduce human language in ways meaningful to humans.

71.NLU - Natural Language Understanding


The transformation of unstructured data into structured data. It's essentially the
process where machines extract meaning from human input, often seen as a
subtask of NLP.

72.Elements of Natural Language in English


Components that make up the language include vocabulary, grammar, syntax,
semantics, pragmatics, discourse, dialogue, phonetics, phonology, and
morphology.

73.Syntactic Parsing
An analysis technique that interprets the words in a sentence based on grammar
and arranges them to illustrate relationships among the words. Sub-components
include segmentation, tokenization, stemming, lemmatization, part of speech
tagging, and named entity recognition.

74.Sentiment Analysis
A method that evaluates digital text to discern the emotional undertone of a
message, categorizing it as positive, negative, or neutral.
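
A minimal Python sketch of the idea, using invented word lists rather than a trained
model (real sentiment analysis relies on ML-based NLP, not fixed keyword lists):

# Naive rule-based sentiment scoring, for illustration only.
POSITIVE = {"great", "love", "excellent", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "terrible", "refund", "disappointed"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support team was fast and helpful"))       # positive
print(sentiment("My order arrived broken and I want a refund"))  # negative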

75.Generative AI vs. Predictive AI


Generative AI generates new data resembling its training data, while Predictive AI
uses historical data to make future predictions.

76.Generative AI for Business Leaders


A type of AI that's capable of generating new content, ideas, or solutions. This
contrasts with predictive or traditional AI, which typically analyzes existing data
for patterns. Generative AI can be crucial for innovation and strategy planning.

77.Salesforce Einstein Discovery Use Cases


The platform addresses specific scenarios like regressions for numeric outcomes
(e.g., currency, counts), binary classification for text outcomes with two possible
results (e.g., churned or not churned), and multiclass classification for text
outcomes with 3 to 10 potential results.

78.NER - Named Entity Recognition


An NLP technique that uses algorithms to identify and classify named entities
such as people, dates, places, and organizations within text. This aids in tasks
like question answering and information extraction.
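
An illustrative sketch using the open-source spaCy library (assumes spaCy and its small
English model, en_core_web_sm, have been installed; the sentence is only an example):

# Named entity recognition with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Salesforce was founded by Marc Benioff in San Francisco in 1999.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. Marc Benioff PERSON, San Francisco GPE, 1999 DATE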

79.Tokenization
The process of splitting sentences into individual words or tokens. While
straightforward in English due to spaces, languages like Thai or Chinese present
challenges, necessitating a deep understanding of vocabulary and morphology.
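
A naive Python sketch for English text; production tokenizers are far more
sophisticated, especially for languages written without spaces:

import re

# Split text into lowercase word tokens, ignoring punctuation.
def tokenize(text):
    return re.findall(r"[A-Za-z0-9']+", text.lower())

print(tokenize("Einstein Bots can't answer every question."))
# ['einstein', 'bots', "can't", 'answer', 'every', 'question']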

80.Stemming vs. Lemmatization


Both methods aim to reduce words to their base or root form.

Stemming might simplify words in a rudimentary manner, which can lead to non-
existent stems.

Lemmatization, on the other hand, considers the part of speech and provides a
more valid base word or lemma.
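
A short comparison using the NLTK library (assumes NLTK is installed and its WordNet
data has been downloaded with nltk.download("wordnet")):

from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["studies", "running", "better"]:
    # Stemming can produce non-words ("studi"); lemmatization returns valid lemmas.
    print(word, "->", stemmer.stem(word), "|", lemmatizer.lemmatize(word, pos="v"))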

81.Speech Tagging
An essential function in NLP where each word in a sentence is labeled based on
its grammatical role, such as noun, verb, adjective, etc. This helps in
understanding the syntax and meaning of a sentence.

82.Einstein Discovery-Powered Solutions


These solutions leverage the power of Einstein Discovery for various business
scenarios. Examples include predicting numeric outcomes, classifying binary
text outcomes, and multi-class text classifications.

83.Einstein Discovery Insights


Einstein Discovery explores patterns, relationships, and correlations in
historical data. Using machine learning and AI, it predicts future outcomes,
guiding business users in prioritizing their tasks and making informed decisions.

84.Sensitive Variables in Einstein Discovery


Variables related to legally protected classes, such as age, race, and gender,
have use restrictions in regions like the US and Canada. Discrimination against
these classes in sectors like employment, healthcare, and lending is prohibited.
Proxy values can correlate with these sensitive variables, potentially leading to
biased interpretations if not handled correctly.
85.Einstein Prediction Service
A public REST API service that facilitates interaction with Einstein Discovery-
driven models and predictions. Its capabilities range from fetching predictions on
data and suggesting beneficial actions to managing model refresh jobs and bulk
scoring tasks.
86.Sales Cloud Einstein
Focused on maximizing sales productivity. This tool prioritizes leads and
opportunities that are most likely to convert, analyzes sales cycles, automates
data capture, and fosters relevant customer outreach based on CRM data.

87.Service Cloud Einstein


Centers on enhancing customer service experience. It expedites case
resolutions, boosts call deflection, reduces handle times, and offers agents smart
suggestions and knowledge recommendations to resolve issues faster.

88.Marketing Cloud Einstein


Empowers marketers to better understand their customers. It uncovers consumer
insights, suggests optimal engagement channels, personalizes content,
streamlines marketing operations, and even auto-generates subject lines and web
campaigns.

89.Commerce Cloud Einstein


A solution for multi-channel brand interaction. It personalizes shopping
experiences, offers insights into buying patterns, and optimizes both explicit and
implicit search experiences for shoppers. It also supports the automatic
generation of smart product descriptions.

90.Einstein Prediction Builder


An intuitive tool that allows for custom predictions on non-encrypted Salesforce
data across various business sectors. It emphasizes user-friendly functionality
without the need for coding.

91.NBA - Einstein Next Best Action


A system that provides timely and intelligent recommendations using both rules-
based and predictive models. These insights are directly integrated within
Salesforce for optimal impact.

92.Descriptive Insights in Einstein Discovery


Derived from historical data via descriptive analytics and statistical analysis,
these insights depict what has transpired in your data.

93.Diagnostic Insights in Einstein Discovery


Insights, originating from the model, delve deep to explain why certain events
occurred. They identify which variables most significantly influence the analyzed
business outcome.

94.Comparative Insights in Einstein Discovery


Insights, also derived from the model, elucidate the difference in outcomes by
contrasting two specific subgroups. They help in comparing the impact of factors
against global averages, with waterfall charts assisting in visualization.

95.Proxy Values
Attributes in a dataset that correlate with sensitive variables. A strong correlation,
such as Account Name being a 90% proxy for postal code, might infer
associations between various data points.

96.Declarative vs. Programmatic Predictions


With Einstein Prediction Service, predictions can be fetched declaratively through
Salesforce features like the PREDICT function, or programmatically using APEX
and REST APIs.

97.Einstein Embedded in Clouds


Sales Cloud focuses on sales, Service Cloud centers on customer service,
Marketing Cloud caters to personalized marketing, and Commerce Cloud
enhances the multi-channel shopping experience.

98.Einstein Discovery vs. Einstein Prediction Builder


Einstein Discovery focuses on outcomes without requiring a dedicated data
scientist, whereas Einstein Prediction Builder allows for custom predictions with
a user-friendly approach.

99.Einstein GPT
A tool that leverages large language models, grounding them in CRM data to
generate personalized content for businesses in a safe and secure manner.

100.Predictive Analysis
Dives deeper into data to offer insights into the future. While descriptive
analysis reveals what has happened and diagnostic analysis explains why
something has happened, predictive analysis forecasts what might happen in the
future. This analysis uses statistical algorithms and machine learning techniques
to identify the likelihood of future outcomes based on historical data. It's crucial
for businesses that want to make proactive decisions and stay ahead of potential
challenges.

101.Prescriptive Analysis
Provides recommendations on how to handle potential future scenarios.
Instead of just showing what might happen, it advises on how to take action. By
using insights from previous analyses (descriptive, diagnostic, and predictive),
prescriptive analysis offers potential solutions to decision-makers. For instance,
if predictive analysis forecasts a decline in sales, prescriptive analysis might
suggest increasing marketing spend or launching a promotional campaign.

102.AI Bias
Refers to the presence of systematic and unfair discrimination in outputs
delivered by AI models.

103.Deep Learning
A subset of machine learning inspired by the structure of the human brain. It uses
neural networks with many layers (hence "deep") to analyze various factors of
data.

104.Data Augmentation
Refers to the process of increasing the diversity of data available for training
models without actually collecting new data. Techniques like cropping, padding,
and horizontal flipping are used to artificially enhance the quantity of data by
introducing minor changes to existing data samples.
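
An illustrative augmentation pipeline using torchvision (assumes torch, torchvision, and
Pillow are installed; "photo.jpg" is a placeholder file name):

from PIL import Image
from torchvision import transforms

# Each pass through the pipeline yields a slightly different version of the image.
augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(size=200, padding=10),
    transforms.ToTensor(),
])

image = Image.open("photo.jpg")
augmented = augment(image)  # a new, randomly altered tensor on every call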

105.Neural Network Activation Functions


Decide the output of a neural network node, given an input or set of inputs.
Common activation functions include the sigmoid, hyperbolic tangent, and
rectified linear unit (ReLU). They introduce non-linearity to the model, enabling it
to learn from the error and make adjustments, which is essential for deep
learning.
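
A small NumPy sketch of the three activation functions named above:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes values into (0, 1)

def tanh(x):
    return np.tanh(x)                # squashes values into (-1, 1)

def relu(x):
    return np.maximum(0.0, x)        # zero for negatives, identity for positives

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x), tanh(x), relu(x))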

106.Machine Learning Ethics


The study of ensuring that machine learning models, data, and applications are
developed and deployed in a manner that respects human rights, fairness,
transparency, and justice. Ethical considerations involve understanding biases in
data, avoiding misuse of technology, respecting user privacy, and ensuring
models do not perpetrate or amplify harmful behaviors or societal inequalities.

107.Predictive Data Analysis


Uses past data and complex statistical techniques to foresee future outcomes.
It's akin to a crystal ball but grounded in historical facts.

108.Prescriptive Analysis
The culmination of descriptive and predictive analytics. Utilizing machine
learning, it offers actionable recommendations based on past events, acting as a
roadmap for businesses.

109.Continuous Variables
Forming an unbroken whole, without interruption. These are variables that cannot
be counted in a finite amount of time because there is an infinite number of
values between any two values.

110.Aggregation
A collection of quantitative data and can show large data trends.

111.Discrete Variables
Individually separate and distinct. Simply stated, if you can count it individually, it
is a discrete variable.

112.Quantitative Variables
Variables that can be measured numerically, such as the number of items in a set.

113.Ordinal
Being of a specified position or order in a numbered series.

114.Nominal Qualitative Variables


Categories that cannot be ranked.

115.Well-Structured Data Module


Data that is organized into columns, or fields, that are made up of variables,
one variable per field.
116.ETL - Extraction, Transformation, Loading
A process that extracts information from internal and external databases,
transforms the information using a common set of enterprise definitions and
loads the information into a data warehouse.
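
A minimal Python/pandas sketch of the pattern; the file name, column names, and SQLite
target are illustrative assumptions rather than a specific warehouse design:

import sqlite3
import pandas as pd

# Extract: pull raw records from a source system.
orders = pd.read_csv("orders.csv")

# Transform: apply common enterprise definitions.
orders["country"] = orders["country"].str.upper()
orders["total"] = orders["quantity"] * orders["unit_price"]

# Load: write the cleaned records into the warehouse.
with sqlite3.connect("warehouse.db") as conn:
    orders.to_sql("fact_orders", conn, if_exists="replace", index=False)
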
117.Quantitative Variable
A characteristic that can be measured numerically

118.Qualitative Variable
A qualitative variable describes qualities or characteristics, such as country of
origin, gender, name, or hair color.

119.Variable
A measurement, property, or characteristic of an item that may vary or change.

120.Accelerator
A class of microprocessors designed to accelerate AI applications.

121.ASI - Artificial Super Intelligence


Artificial intelligence that surpasses the capabilities of the human mind.

122.Back Propagation
An algorithm often used in training neural networks, referring to the method for
computing the gradient of the loss function with respect to the weights in the
network.

123.Chain of Thought
The sequence of reasoning steps an AI model uses to arrive at a decision.

124.CLIP - Contrastive Language-Image Pretraining


A model trained on large sets of image-caption pairs to associate images with text,
enabling tasks such as zero-shot image classification.

Compute
The computational resources (like CPU or GPU time) used in training or running
AI models.

125.CNN - Convolutional Neural Network


A type of deep learning model that processes data with a grid-like topology (e.g.,
an image) by applying a series of filters. Such models are often used for image
recognition tasks.

126.Diffusion
A technique used for generating new data by starting with a piece of real data and
adding random noise.
127.Double Descent
A phenomenon in machine learning in which model performance improves with
increased complexity, then worsens, then improves again.

128.Embedding
The representation of data in a new form, often a vector space. Similar data
points have more similar embeddings.
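
A toy NumPy example; the vectors below are invented for illustration, whereas real
embeddings are produced by a trained model:

import numpy as np

embeddings = {
    "laptop":   np.array([0.9, 0.1, 0.0]),
    "notebook": np.array([0.8, 0.2, 0.1]),
    "banana":   np.array([0.0, 0.9, 0.4]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, near 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["laptop"], embeddings["notebook"]))  # high (similar items)
print(cosine(embeddings["laptop"], embeddings["banana"]))    # low (dissimilar items)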

129.Emergence/Emergent Behavior
Refers to complex behavior arising from simple rules or interactions. "Sharp left
turns" and "intelligence explosions" are speculative scenarios where AI
development takes sudden and drastic shifts, often associated with the arrival of
AGI.

130.End-to-End Learning
Machine learning model that does not require hand-engineered features. The
model is simply fed raw data and expected to learn from these inputs.

131.Expert Systems
An application of artificial intelligence technologies that provides solutions to
complex problems within a specific domain.

132.XAI - Explainable AI
A subfield of AI focused on creating transparent models that provide clear and
understandable explanations of their decisions.

133.Fine-tuning
The process of taking a pre-trained machine learning model that has already been
trained on a large dataset and adapting it for a slightly different task or specific
domain. During fine-tuning, the model's parameters are further adjusted using a
smaller, task-specific dataset, allowing it to learn task-specific patterns and
improve performance on the new task.
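
A PyTorch sketch of the common pattern (assumes torch and torchvision; the weights
argument reflects recent torchvision versions, and the 5-class output layer is an
arbitrary example):

import torch.nn as nn
from torchvision import models

# Start from a model pre-trained on ImageNet.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained layers so only the new head is updated at first.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the new, smaller task-specific dataset (5 classes here);
# training then continues on that dataset to adjust the remaining parameters.
model.fc = nn.Linear(model.fc.in_features, 5)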

134.Forward Propagation
The process where input data is fed into the network and passed through each
layer (from the input layer to the hidden layers and finally to the output layer) to
produce the output. The network applies weights and biases to the inputs and
uses activation functions to generate the final output.
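
A NumPy sketch of one forward pass through a tiny network with a single hidden layer
(the weights are random placeholders rather than trained values):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, -1.2, 3.0])                 # input features
W1, b1 = np.random.randn(4, 3), np.zeros(4)    # hidden layer weights and biases
W2, b2 = np.random.randn(1, 4), np.zeros(1)    # output layer weights and biases

hidden = relu(W1 @ x + b1)                     # weights, biases, then activation
output = sigmoid(W2 @ hidden + b2)             # final output between 0 and 1
print(output)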

135.Foundation Model
Large AI models trained on broad data, meant to be adapted for specific tasks.

136.GPU - Graphics Processing Unit


A specialized type of microprocessor primarily designed to quickly render images
for output to a display.

137.Gradient Descent
In machine learning, gradient descent is an optimization method that gradually
adjusts a model's parameters based on the direction of largest improvement in its
loss function. In linear regression, for example, gradient descent helps find the
best-fit line by repeatedly refining the line's slope and intercept to minimize
prediction errors.
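
A NumPy sketch of gradient descent fitting that linear regression example (the data
points and learning rate are illustrative):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])        # roughly y = 2x + 1

slope, intercept, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    error = (slope * x + intercept) - y   # current prediction errors
    slope -= lr * 2 * (error * x).mean()  # step along d(MSE)/d(slope)
    intercept -= lr * 2 * error.mean()    # step along d(MSE)/d(intercept)

print(round(slope, 2), round(intercept, 2))  # close to 2 and 1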

138.Hidden Layer
Layers of artificial neurons in a neural network that are not directly connected to
the input or output.

139.Hyperparameter Tuning
The process of selecting the appropriate values for the hyperparameters
(parameters that are not learned from the data) of a machine learning model.
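
A common approach is a cross-validated grid search; below is a scikit-learn sketch with
a synthetic dataset and an illustrative parameter grid:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Candidate hyperparameter values to evaluate with 5-fold cross-validation.
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # hyperparameter values chosen by cross-validation
print(search.best_score_)   # mean cross-validated accuracy for that setting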

140.Inference
The process of making predictions with a trained machine-learning model.

141.Instruction Tuning
A technique in machine learning where models are fine-tuned based on specific
instructions given in the dataset.

142.Large Language Model - LLM


A type of AI model that can generate human-like text and is trained on a broad
dataset.

143.Latent Space
Refers to the compressed representation of data that a model (like a neural
network) creates. Similar data points are closer in latent space.

144.Loss Function (or Cost Function)


A function that a machine learning model seeks to minimize during training. It
quantifies how far the model's predictions are from the true values.
145.Mixture of Experts
A machine learning technique where several specialized submodels (the
"experts") are trained, and their predictions are combined in a way that depends
on the input.

146.Multimodal
In AI, this refers to models that can understand and generate information across
several types of data, such as text and images.

147.NeRF - Neural Radiance Fields


A method for creating a 3D scene from 2D images using a neural network. It can
be used for photorealistic rendering, view synthesis, and more.

148.Objective Function
A function that a machine learning model seeks to maximize or minimize during
training.

149.Overfitting
A modeling error that occurs when a function is too closely fit to a limited set of
data points, resulting in poor predictive performance when applied to unseen
data.

150.Pre-training
The initial phase of training a machine learning model where the model learns
general features, patterns, and representations from the data without specific
knowledge of the task it will later be applied to. This unsupervised or semi-
supervised learning process enables the model to develop a foundational
understanding of the underlying data distribution and extract meaningful features
that can be leveraged for subsequent fine-tuning on specific tasks.

151.Prompt
The initial context or instruction that sets the task or query for the model.

Regularization
In machine learning, regularization is a technique used to prevent overfitting by
adding a penalty term to the model's loss function. This penalty discourages the
model from excessively relying on complex patterns in the training data,
promoting more generalizable and less prone-to-overfitting models.
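
A NumPy sketch of an L2 (ridge) penalty added to a mean-squared-error loss; lam is the
regularization strength and the data is synthetic:

import numpy as np

def ridge_loss(weights, X, y, lam=0.1):
    mse = np.mean((X @ weights - y) ** 2)   # ordinary data-fit loss
    penalty = lam * np.sum(weights ** 2)    # penalty discouraging large weights
    return mse + penalty

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

print(ridge_loss(np.zeros(3), X, y))  # poor fit, no penalty
print(ridge_loss(true_w, X, y))       # good fit, only a small penalty remains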

152.RLHF - Reinforcement Learning from Human Feedback


A method to train an AI model by learning from feedback given by humans on
model outputs.

153.Singularity
In the context of AI, the singularity (also known as the technological singularity)
refers to a hypothetical future point in time when technological growth becomes
uncontrollable and irreversible, leading to unforeseeable changes to human
civilization.

154.Symbolic Artificial Intelligence


A type of AI that utilizes symbolic reasoning to solve problems and represent
knowledge.

155.TensorFlow
An open-source machine learning platform developed by Google that is used to
build and train machine learning models.

156.TPU - Tensor Processing Unit


A type of microprocessor developed by Google specifically for accelerating
machine learning workloads.

157.Training Data
The dataset used to train a machine learning model.

158.Transfer Learning
A method in machine learning where a pre-trained model is used on a new
problem.

159.Transformer
A specific type of neural network architecture used primarily for processing
sequential data such as natural language. Transformers are known for their
ability to handle long-range dependencies in data, thanks to a mechanism called
"attention," which allows the model to weigh the importance of different inputs
when producing an output.

160.Underfitting
A modeling error in statistics and machine learning when a statistical model or
machine learning algorithm cannot adequately capture the underlying structure
of the data.
161.Unsupervised Learning
A type of machine learning where the model is not provided with labeled training
data, and instead must identify patterns in the data on its own.

162.Validation Data
A subset of the dataset used in machine learning that is separate from the
training and test datasets. It's used to tune the hyperparameters (i.e.,
architecture, not weights) of a model.
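
A scikit-learn sketch of carving one dataset into training, validation, and test sets
(the 60/20/20 proportions are an illustrative choice):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# First split off 40%, then divide that portion equally into validation and test.
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200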

163.XAI (Explainable AI)


A subfield of AI focused on creating transparent models that provide clear and
understandable explanations of their decisions.

164.Zero-shot Learning
A type of machine learning where the model makes predictions for conditions not
seen during training, without any fine-tuning.

165.What are the 5 Salesforce's Trusted AI Principles?


Responsible: Safeguard human rights and protect the data we are entrusted with.
Accountable: Seek and leverage feedback for continuous improvement.
Transparent: Develop a transparent user experience to guide users through
machine driven recommendations. Be transparent about how AI is built.
Empowering: Promote economic growth and employment for our customers,
their employees and society as a whole. Empower customers to use AI
responsibly.
Inclusive: Respect the societal values of all those impacted, not just those of the
creators.

166.Trusted AI Principle: Responsible.


• Work with human rights experts.
• Educate and empower customers and partners.
• Open development and sharing research.

167.Trusted AI Principle: Accountable.


• Invite customer feedback.
• Ethical Use Advisory Council.
• External ethics review of research papers.

168.Trusted AI Principle: Transparent


• Strive for model explainability.
• Customers' control of their data and models.
• Clear disclosure of terms of use.

169.Trusted AI Principle: Empowering


• Build AI apps with clicks not code.
• Free AI education with Trailhead.
• Deliver AI research breakthroughs.

170.Trusted AI Principle: Inclusive


• Test models with diverse data sets.
• Conduct Consequence Scanning Workshops.
• Build inclusive teams.

171.What are the 5 Guidelines for Responsible Development of trusted Generative
AI?
• Accuracy.
• Safety.
• Honesty.
• Empowerment.
• Sustainability.

172.Generative AI at Salesforce
Sales: uses AI insights to identify the best next steps and close deals faster.
Service: uses AI to have human-like conversations and provide answers to
repetitive questions and tasks, freeing up agents to handle more complicated
requests.
Marketing: leverages AI to understand customer behavior and personalize the
timing, targeting, and content of marketing activities.
Commerce: uses AI to power highly personalized shopping experiences and
smarter e-commerce.

173.What are the 5 components of the Responsible AI Development Lifecycle?


• Scope.
• Review.
• Test.
• Mitigate.
• Launch & Monitor.

174.AI Development Lifecycle: Scope


· Should this exist?
· What are your assumptions?
· Who are you designing for?
· Known risks?

175.AI Development Lifecycle: Review


· Who will be impacted?
· Who is excluded?
· Consequence Scanning.
· User Research.

176.AI Development Lifecycle: Test


· Bias Assessment in Dataset & Model.
· Ethical Red Teaming.
· Community Feedback.

177.AI Development Lifecycle: Mitigate


· Bias Mitigation.
· Retrain Model.
· Compared to the Threshold Set for Launch.
· In-app Support.

178.AI Development Lifecycle: Launch & Monitor


· Publish model cards.
· Post-Launch.
· Assessments.
· Logging.
· Community Feedback.

179.Generative AI Use Cases


Text: generate credible text on various topics. It can compose business letters,
provide rough drafts of articles, and compose annual reports.
Images: output realistic images from text prompts, create new scenes, and
simulate a new painting.
Video: compile video content from text automatically and put together short
videos using existing images.
Music: compile new musical content by analyzing a music catalog and rendering
a new composition.
Product Design: can be fed inputs from previous versions of a product and
produce several possible changes that can be considered in a new version.
Personalization: personalize experiences for users such as product
recommendations, tailored experiences, and feeding material that closely
matches their preferences.

180.Predictive AI Use Cases for Financial Services


Predictive AI enhances financial forecasts. By pulling data from a wider data set
and correlating financial information with other forward-looking business data,
forecasting accuracy can be greatly improved.

181.Predictive AI Use Cases for Fraud Detection


Predictive AI can be used to spot potential fraud by sensing anomalous behavior.
In banking and e-commerce, there might be an unusual device, location, or
request that doesn't fit with the normal behavior of a specific user. A login from a
suspicious IP address, for example, is an obvious red flag.

182.Predictive AI Use Cases for Healthcare


Predictive AI is already in use in healthcare. It is finding use cases such as
predicting disease outbreaks, identifying higher-risk patients, and spotting the
most successful treatments.

183.Predictive AI Use Cases for Marketing


Predictive AI can more closely define the most appropriate channels and
messages to use in marketing. It can provide marketing strategists with the data
they need to write impactful campaigns and thereby bring about greater success.

184.What are Data Quality Dimensions?


Data Quality is a measurement of the degree to which data is fit for purpose.
Good data quality generates trust in data. Data Quality Dimensions are a
measurement of a specific attribute of a data's quality.

185.What are the six Dimensions of Data Quality?


• Completeness.
• Validity.
• Uniqueness.
• Timeliness.
• Consistency.
• Accuracy

186.Dimensions of Data Quality: Completeness


measures the degree to which all expected records in a dataset are present. At a
data element level, completeness is the degree to which all records have data
populated when expected.

187.Dimensions of Data Quality: Validity


measures the degree to which the values in a data element are valid.

188.Dimensions of Data Quality: Uniqueness


measures the degree to which the records in a dataset are not duplicated.

189.Dimensions of Data Quality: Timeliness


the degree to which a dataset is available when expected and depends on service
level agreements being set up between technical and business resources.

190.Dimensions of Data Quality: Consistency


measures the degree to which data is the same across all instances of the data.
Consistency can be measured by setting a threshold for how much difference
there can be between two datasets.

191.Dimensions of Data Quality: Accuracy


measures the degree to which data is correct and represents the truth.

192.What are the four Ethical AI Practice Maturity Model components?


• Ad Hoc.
• Organized & Repeatable.
• Managed & Sustainable.
• Optimized & Innovative.

193.Ethical AI Practice Maturity Model: Ad Hoc


• Someone raises their hand and starts asking not just "Can we do this?" but
"Should we do this?"
• Informal advocacy builds a groundswell of awareness.
• Ad hoc reviews and risk assessments take place among "woke" teams.

194.Ethical AI Practice Maturity Model: Organized & Repeatable


• Executive buy-in established.
• Ethical principles and guidelines are agreed upon.
• Build a team of diverse experts.
• Company-wide education.
• Ethics reviews are added onto existing reviews, often at the end of the dev
process.

195.Ethical AI Practice Maturity Model: Managed & Sustainable


• Ethical considerations are baked into the beginning of product development and
reviews happen throughout the lifecycle.
• Build or buy bias assessment and mitigation tooling.
• Metrics are identified to track progress and impact post-market for regular
audits.

196.Ethical AI Practice Maturity Model: Optimized & Innovative


• End-to-end inclusive design practices that combine ethical AI product and
engineering dev with privacy, accessibility, and legal partners.
• Ethical features and resolving ethical debt are a formal part of roadmap and
resourcing.
• Poor ethics metrics block launch.

197.What is bias in machine learning algorithms?


Systematic error in predictions due to unfairness or discrimination.

198.What can companies do to improve fairness in AI?


Examine context, design models with inclusion in mind, train on representative
data, and perform targeted testing.

199.How can examining context help improve fairness in AI?


By understanding where AI has struggled in the past, companies can build on
industry experience to improve fairness.

200.Why is it important to engage with humanists and social scientists in AI
design?
To ensure that AI models don't inherit bias from human judgment.

201.What is the importance of setting measurable goals for AI models?


To ensure equal performance across different use cases and age groups.

202.What is the significance of training AI models on complete and representative
data?
To avoid biased predictions, it is important to establish procedures for collecting,
sampling, and preprocessing training data.
203.How can companies identify potential sources of bias in training datasets?
By involving internal or external teams to spot discriminatory correlations and
sources of AI bias.

204.What is the purpose of performing targeted testing on AI models?


To uncover problems that may be masked by aggregate metrics and examine
performance across different subgroups.
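
A pandas sketch of the idea: break an aggregate metric out by subgroup so that weak
performance for one group is not hidden by the overall average (the data frame and
column names are invented for illustration):

import pandas as pd

results = pd.DataFrame({
    "age_group": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "correct":   [1, 1, 1, 0, 0, 0],   # 1 = model prediction was correct
})

print("Overall accuracy:", results["correct"].mean())   # looks acceptable in aggregate
print(results.groupby("age_group")["correct"].mean())   # exposes the weak subgroup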

205.Why is it important to continuously retest AI models?


To incorporate real-life data and user feedback, and to address any emerging
biases.

206.How can AI bias be identified and mitigated?


• Awareness: Accept AI can have biases.
• Data Analysis: Ensure representative data and inspect for biases.
• Algorithmic Fairness: Use tools to detect and reduce bias.
• Transparent Models: Opt for interpretable AI models.
• Diverse Teams: Diverse inputs reduce oversight.
• Regular Auditing: Continuously evaluate for fairness.
• Feedback Loops: Allow users to report biases.
• Ethical Guidelines: Prioritize fairness.
• External Audits: Seek third-party evaluations.
• User Education: Inform about AI limitations.
• Engage Stakeholders: Get community feedback.
• Accountability: Define responsibility for AI decisions.

207.What is AI explainability?
Understanding how AI generates predictions and uses data features.
