Salesforce AI Associate Certification Practice Questions
3.In the realm of AI capabilities, what is the primary function of computer vision?
6.A service leader plans to use AI to help customers resolve their queries more
quickly with a guided self-service app. Which Einstein feature offers the most
suitable solution for this?
A. Automated Chatbots
B. Categorizing Cases
C. Offer personalized recommendations
(* Chatbots in a self-service app interact with customers in real time, understand their
questions, and offer quick solutions, making self-help more efficient and user-friendly.)
7.A marketing manager wants to use AI to better engage with their customers.
Which functionality provides the best solution?
(* Journey Optimization lets marketers create, test, and optimize personalized campaign variations with built-in predictive AI. It automates and customizes every aspect of customer engagement, including channel, content, timing, and send frequency, and scales dynamic journeys to improve productivity.)
12.What is the term for bias that imposes the values of a system onto others?
A. Automation
B. Societal
C. Association
(* Automation bias occurs when a system imposes its own values on others. For example, in a 2016 beauty contest judged by AI, the system mostly picked white winners because it was trained largely on pictures of white women and did not recognize beauty in people with different features or skin tones. The bias in the training data directly shaped the contest's results.)
13.What could be the origin of bias in the training data used for AI models?
17.Why is it vital to address privacy issues when handling AI and CRM data?
22.Cloud Kicks wants to implement AI features on its Salesforce Platform but has
concerns about potential ethical and privacy challenges. What should they
consider doing to minimize potential AI bias?
31.A company utilizes Einstein and maintains a high data quality score, yet they
are not experiencing the advantages of AI. What might be a potential explanation
for not realizing the benefits of AI?
A. The company isn't employing a sufficient amount of data for predictive purposes.
B. The data score might exceed the minimum threshold, but the company isn't using it
as a reference for future outcomes.
C. The company isn't utilizing the appropriate Salesforce product.
(* Even with a high data quality score, the company needs to use the score as a
benchmark for future results to see the benefits from AI.)
32.How do data quality and transparency impact bias in generative AI?
A. Age
B. Volume
C. Reliability
(* The Age, Completeness, Accuracy, Consistency, Duplication, and Usage of a dataset
are vital factors to assess when determining its suitability for AI models. However, the
size and the number of variables in the dataset are unrelated to its appropriateness for
AI models.)
A. Consistency
B. Volume
C. Performance
(* It's important to emphasize the significance of consistency as a fundamental data
quality factor. Data volume and data location, on the other hand, are not directly tied to
data quality.)
A. Location
B. Accuracy
C. Volume
40.A Business Analyst (BA) is in the process of creating a new AI use case. As
part of their preparations, they generate a report to examine whether there are
any null values in the attributes they intend to utilize. What data quality aspect is
the BA confirming by assessing null values?
A. Usage
B. Duplication
C. Completeness
(* For each business purpose, make a list of the necessary fields. Afterward, generate a
report indicating the percentage of empty values in these fields. Alternatively, you can
employ a data quality app from AppExchange.)
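The null-value report described above can be sketched in a few lines of Python (the field names and records here are hypothetical examples, not a real Salesforce schema):

```python
# Minimal completeness check: percentage of empty (null) values per field.
records = [
    {"Email": "a@example.com", "Phone": None, "Industry": "Retail"},
    {"Email": None, "Phone": "555-0100", "Industry": None},
    {"Email": "c@example.com", "Phone": None, "Industry": "Tech"},
]

def null_percentage(rows, field):
    """Share of records where the field is missing, as a percentage."""
    missing = sum(1 for r in rows if r.get(field) in (None, ""))
    return 100.0 * missing / len(rows)

report = {f: null_percentage(records, f) for f in ("Email", "Phone", "Industry")}
print(report)
```

Fields with a high percentage of nulls are poor candidates for an AI use case until the data is filled in.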
41.Cloudy Computing aims to reduce the workload of its customer care agents by
deploying a chatbot on its website to handle common queries. Which area of AI is
best suited for this situation?
A. Neural networks
B. Rule-based systems
C. Predictive analytics
(* Neural networks are a type of AI tool made of interconnected nodes with weights and
biases. These connections are crucial for their ability to process and learn from data,
making them vital in AI tasks like recognizing patterns, understanding language, and
analyzing images. Neural networks work somewhat like human brain neurons, which is
why they're called "neural networks".)
A. General AI
B. Narrow AI
C. Super AI
(* Weak/narrow AI encompasses AI systems that are created to execute a particular task or a predefined set of tasks.)
44.What kind of AI employs machine learning to generate fresh and unique output
based on a provided input?
A. Generative
B. Predictive
C. Probabilistic
(* Generative AI employs machine learning techniques to produce novel and unique
output based on a given input. It can create new content, such as text, images, or even
music, by learning patterns and relationships in the input data and generating new data
that fit those patterns. This is why generative AI is often used in creative applications
like art generation, text generation, and more.)
A. Customers are more likely to share personal information with a site that personalizes
their experience.
B. Customers are more likely to be satisfied with their shopping experience.
C. Customers are more likely to visit competitor sites that personalize their experience.
(* Using AI to personalize the online shopping experience leads to increased customer
satisfaction. This happens because personalized recommendations make shopping
more convenient, engaging, and relevant, which ultimately boosts conversion rates,
customer retention, and loyalty.)
A. Transactional
B. Engagement
C. Demographic
(* To mitigate the risk of bias inherent in demographic targeting, use interest- and intent-
based targeting.)
54.What step should be followed to build and apply reliable generative AI while
considering Salesforce's safety guidelines?
A. Manage bias, toxicity, and harmful content by implementing integrated guardrails and
guidance
B. Guarantee proper consent and transparency when employing AI-generated
responses
C. Reduce the AI model's environmental impact and carbon footprint during training
(* Ensuring that users are aware of and have given their consent for the use of AI-
generated responses demonstrates transparency and honesty in AI interactions. It
respects user preferences and privacy.)
A. Demographic
B. Geographic
C. Cryptographic
(* Demographic data includes information related to characteristics such as age,
gender, race, ethnicity, and other personal attributes. Excluding demographic data helps
prevent the AI model from learning biases associated with these attributes and
promotes fairness and non-discrimination in AI-driven processes. Salesforce's practice
of excluding demographic data aligns with ethical considerations in AI to avoid bias and
promote equity.)
A. Implicit transparency of AI systems, which makes it easy for users to understand and
trust their decisions
B. Potential for human bias in machine learning algorithms and the lack of
transparency in AI decision-making processes
C. Inherent neutrality of AI systems, which eliminates any potential for human bias in
decision-making
(* The ethical challenges in AI development primarily revolve around the potential for
human bias in machine learning algorithms and the need for transparency in AI
decision-making processes. These challenges highlight the importance of addressing
biases and promoting transparency to ensure responsible and ethical AI development.)
A. Financial status
B. Nickname
C. Email address
(* Financial status is an example of an immutable trait, which is a characteristic that
cannot be changed or is highly resistant to change over time. Financial status typically
includes attributes like income level, wealth, or financial stability, which are not easily
altered by an individual. Using such immutable traits for classification or marketing
purposes can be sensitive and potentially discriminatory, which aligns with Salesforce's
definition of bias.)
A. Test with diverse and representative datasets appropriate for how the model
will be used.
B. Rely on a development team with uniform backgrounds to assess the potential
societal implications of the model.
C. Test only with data from a specific region or demographic to limit the risk of data
leaks.
(* Inclusive principle: consider diversity, equality, and fairness. Testing with diverse, representative data ensures the model's impact is understood across different situations, rather than only being claimed.)
61.A customer using Einstein Prediction Builder is confused about why a certain
prediction was made. Following Salesforce's Trusted AI Principle of
Transparency, which customer information should be accessible on the
Salesforce Platform?
A. A marketing article of the product that clearly outlines the product's capabilities and
features
B. An explanation of how Prediction Builder works and a link to Salesforce's Trusted AI
Principles
C. An explanation of the prediction's rationale and a model card that describes
how the model was developed
(* Transparency principle - Customers should comprehend the reasoning behind each
AI-generated recommendation and prediction. This involves offering comprehensive
details such as model cards.)
A. Transparency
B. Inclusiveness
C. Accountability
(* This principle underscores the need to consider and include diverse perspectives,
demographics, and user groups when developing AI solutions to promote fairness and
equitable outcomes. It aligns with the goal of reducing bias and ensuring that AI benefits
a broad and inclusive audience.)
65.When should the use of natural language processing (NLP) for automated
customer service be disclosed to the customer, following Salesforce's Trusted AI
Principles?
66.In what way should a financial institution
adhere to Salesforce's Trusted AI Principle of Transparency when executing a
campaign for preapproved credit cards?
A. Clarify how risk factors like credit score may influence customer eligibility
B. Highlight sensitive variables and their stand-ins to avoid biased lending practices
C. Integrate customer input into the ongoing training of the model
(* Transparency involves providing clear and understandable explanations of how AI-
driven decisions are made. In the context of a financial institution's campaign for
preapproved credit cards, explaining to customers how risk factors like a credit score
can impact their eligibility is essential for transparency. It ensures that customers clearly
understand the criteria used to determine eligibility and fosters trust in the decision-
making process. This transparency also helps customers make informed choices about
applying for a pre-approved credit card.)
A. Naming conventions
B. Data backup
C. Color coding
(* Naming conventions play a crucial role in establishing the guidelines for a data
management strategy.)
A. Consistency
B. Timeliness
C. Validity
(* In this case, having future dates of birth is not valid, as it contradicts the expected and
accurate date ranges for birthdays.)
A. Consistency
B. Age
C. Duplication
(* Consistency in data quality ensures that data is uniform and follows a standardized
format, which is crucial for accurate analysis and efficient case resolution in this
scenario.)
72.A sales manager aims to improve the quality of lead data in their CRM system.
What is the most likely process to assist the team in achieving this objective?
74.An administrator at Cloud Kicks wants to ensure that a field is set up on the
customer record so their preferred name can be captured. Which Salesforce field
type should the administrator use to accomplish this?
A. Good data quality is crucial to guarantee impartial and equitable AI decisions, uphold
ethical standards, and prevent discrimination
B. Having high-quality data ensures that the necessary demographic attributes are
available for creating personalized campaigns
C. Poor-quality data lowers the likelihood of unintentional bias since it doesn't overly
cater to specific demographic groups
(* This accurately describes the role of data quality in ensuring fair and ethical AI
applications. High-quality data helps AI systems make unbiased decisions, adhere to
ethical principles, and avoid discriminatory outcomes by providing a reliable foundation
for training and decision-making.)
83.How does the "principle of least privilege" reduce the risk of handling sensitive
personal data?
Predictive AI, on the other hand, uses existing data to make predictions or forecasts
about future events or outcomes. It analyzes data to identify patterns and trends that
can be used to predict specific outcomes.)
A. Consistency
B. Accuracy
C. Usage
(* Inconsistent data can make it challenging to use the data effectively and can lead to
errors in analysis or operations. Therefore, improving consistency in state and country
values is essential for maintaining data quality.)
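A consistency fix like the one described can be sketched as a simple normalization map (the state mapping below is illustrative, not an official Salesforce picklist):

```python
# Normalize inconsistent state values to a canonical form.
STATE_MAP = {
    "ca": "CA", "calif.": "CA", "california": "CA",
    "ny": "NY", "n.y.": "NY", "new york": "NY",
}

def normalize_state(raw):
    """Map a free-text state value to its canonical code where known."""
    key = raw.strip().lower()
    return STATE_MAP.get(key, raw.strip())

values = ["California", "CA", "calif.", "New York", "N.Y."]
print([normalize_state(v) for v in values])  # ['CA', 'CA', 'CA', 'NY', 'NY']
```

In practice, picklists prevent the inconsistency at entry time; normalization like this cleans up data that was captured as free text.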
89.What is a sensitive variable that can lead to bias?
A. Country
B. Education level
C. Gender
(* Gender is a sensitive variable that, when not handled appropriately in data analysis or
AI models, can lead to biased outcomes. It's essential to ensure that gender-related
data is treated with fairness, equity, and consideration to avoid perpetuating biases or
discrimination.)
90.What type of bias results from data being labeled according to stereotypes?
A. Interaction
B. Societal
C. Association
(* Association bias, also known as associative bias, is a type of bias that arises in data
when there are systematic and non-random associations between variables. This bias
occurs when data labels or attributes are influenced by societal stereotypes,
preconceptions, or cultural biases.
Stereotypes and Preconceptions: Association bias often results from stereotypes and
preconceived notions that people hold about certain groups or categories. These
stereotypes can affect how data is labeled or categorized.)
A. Model accuracy
B. Training time
C. Data privacy
(* Having a diverse dataset that accurately represents various aspects of the problem
you're trying to solve can significantly improve the accuracy of machine learning
models.
1. Representativeness: Diverse data ensures that the model has seen a wide range of
examples and variations relevant to the problem.
2. Reduced Bias: A balanced dataset with representation from different groups or
classes can reduce bias in model predictions.
3. Generalization: Large datasets provide a rich source of information for the model to
learn from. With more data, the model can better generalize patterns and make more
accurate predictions on new data points.)
92.A developer is tasked with selecting a suitable dataset for training an AI model
in Salesforce to accurately predict current customer behavior. What is a crucial
factor that the developer should consider during selection?
A. Size of dataset
B. Number of variables in the dataset
C. Age of dataset
(* The age of the dataset is important because using outdated data may not accurately
reflect the current behavior and preferences of customers. Customer behavior can
change over time, and using a dataset that is not up-to-date could lead to inaccurate
predictions.)
94.Cloud Kicks learns of complaints from customers who are receiving too many
sales calls and emails. Which data quality dimension should be assessed to
reduce these communication inefficiencies?
A. Usage
B. Duplication
C. Consent
(* Assessing the consent dimension involves ensuring that the company has obtained
explicit and valid consent from customers to contact them through sales calls and
emails. This includes understanding the preferences of customers regarding
communication frequency and type. By respecting customer consent and preferences,
the company can improve communication efficiency and reduce the likelihood of over-
communication, which can lead to customer dissatisfaction.)
95.Cloud Kicks wants to ensure that multiple records for the same customer are
removed from Salesforce. Which feature should be used to accomplish this?
A. Duplicate management
B. Trigger deletion of old records
C. Standardized field names
(* Duplicate management in Salesforce is a feature that allows you to identify and
handle duplicate records effectively. It provides tools to detect and merge duplicate
records, ensuring that only a single, accurate record is retained for each customer.)
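Outside of Salesforce's built-in feature, the core idea of duplicate detection can be sketched as key-based matching (matching on a normalized email is an illustrative choice, far simpler than Salesforce's actual matching rules):

```python
# Detect duplicates by a normalized key and keep one record per key.
records = [
    {"Id": "001", "Name": "Ada Lovelace", "Email": "ada@example.com"},
    {"Id": "002", "Name": "A. Lovelace", "Email": "ADA@example.com "},
    {"Id": "003", "Name": "Grace Hopper", "Email": "grace@example.com"},
]

def dedupe_by_email(rows):
    seen = {}
    for r in rows:
        key = r["Email"].strip().lower()
        # Keep the first record seen for each key; later ones are duplicates.
        seen.setdefault(key, r)
    return list(seen.values())

unique = dedupe_by_email(records)
print([r["Id"] for r in unique])  # ['001', '003']
```

Salesforce's duplicate management adds matching rules, fuzzy matching, and interactive merge on top of this basic idea.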
A. Completeness
B. Consistency
C. Accuracy
(* The inconsistency in how product category information is captured, with some
employees using a text field and others using a picklist, represents a lack of consistency
in the data. Consistency is a data quality dimension that focuses on ensuring that data
is uniformly formatted and structured throughout a dataset.)
When AI systems are trained on high-quality data, they are more likely to produce
accurate and reliable results.
Users are more likely to trust AI-driven decisions when they see that the data used to
train the model is of high quality, as it suggests that the model has learned from reliable
sources and is less prone to errors or biases.)
A. Societal
B. Confirmation
C. Survivorship
(* Societal bias can occur when the historical data used to make recommendations
reflects existing societal biases or stereotypes. In this case, if the historical purchase
data contains biases related to the color preferences of customers, the recommendation
system may inadvertently perpetuate those biases by suggesting products of a certain
color more frequently. This can result in recommendations that align with societal biases
rather than providing fair and diverse recommendations.)
A. Generate
B. Predict
C. Discover
(*Salesforce Einstein Discover is an AI-powered feature that can analyze emails and
suggest content for Knowledge articles. It helps identify relevant information from email
conversations and assists in creating knowledge articles based on that content. This
capability can improve the efficiency of knowledge management by automating the
process of article creation from email correspondence.)
101.How is natural language processing (NLP) used in the context of AI
capabilities?
practiced?
A. Transparency
B. Inclusivity
C. Accountability
106.Cloud kicks wants to decrease the workload for its customer care agents by
implementing a chatbot on its website that partially deflects incoming cases by
answering frequently asked questions. Which field of AI is most suitable for this
scenario?
A. Natural language processing
B. Computer vision
C. Predictive analytics
107.Which AI type plays a crucial role in Salesforce's predictive text and speech
recognition capabilities, enabling the platform to understand and respond to user
commands accurately?
A. Computer Vision
B. Natural Language Processing (NLP)
C. Predictive Analysis
A. Content Selection
B. Email Recommendations
C. Engagement Scoring
A. Deep learning uses neural networks with multiple layers to learn from a large
amount of data
B. Deep learning uses historical data to predict future outcomes
C. Deep learning uses algorithm to cleanse and prepare data for AI implementations
A. Predictive Analysis
B. Computer Vision
C. Natural Language Processing (NLP)
(* Predictive analysis can enhance sales processes by predicting future outcomes based on historical data.)
115.A consultant discusses the role of humans in AI-driven CRM processes with
a customer. What is one challenge the consultant should mention about human-
AI collaboration in decision-making?
117.A consultant designs a new AI model for a financial services company that
offers personal loans. Which variable within their proposed model might
introduce unintended bias?
A. Loan Date
B. Postal Code
C. Payment Due Date
121.What data quality dimension refers to the frequency and timelines of data
updates?
A. Data Source
B. Data Freshness
C. Data Leakage
122.A Salesforce consultant is considering the data sets to use for training AI
models for a project on the Customer 360 platform. What should be considered
when selecting data sets for the AI models?
A. Duplication, accuracy, consistency, storage location and usage of the data sets
B. Age, completeness, consistency, theme, duplication, and usage of the data sets
C. Age, completeness, accuracy, consistency, duplication, and usage of the data
set
A. Minimize the AI's carbon footprint and environment impact during training
B. Ensure appropriate consent and transparency when using AI-generated
responses.
C. Control bias, toxicity, and harmful content with embedded guardrails and guidance
127.In the context of Salesforce's Trusted AI Principles, what does the principle of
"empowerment" aim to achieve?
A. Empower users of all skill levels to build AI applications with clicks not code
B. Empower users to contribute to the growing body of knowledge of leading AI
research
C. Empower users to solve challenging technical problems using neural networks
DEFINITIONS
1.What is the definition of machine learning?
Uses models to generalize and apply their knowledge to new examples, learning from
labeled data to make accurate predictions and decisions.
2.What is the definition of Generative AI?
Creates new content based on existing data prompted by a human - ChatGPT
10.What is a bias?
An inaccuracy that is systematically incorrect in the same direction and affects
the success of the person or the algorithm that contains it
11.What is the disparate impact?
Discriminatory practices towards a particular demographic
12.What is a premortem?
A pre-project meeting to set measured and realistic expectations and catch 'what
went wrong' before it happens
15.What is a Numeric Prediction?
A derived value, produced by a model, that represents a possible future outcome
20.Bias
An inaccuracy that is systematically incorrect in the same direction and affects
the success of the person or algorithm that contains it.
21.Disparate Impact
Discriminatory practices toward a particular demographic.
22.Premortem
A pre-project meeting to set measured and realistic expectations and catch the
"what went wrong" before it happens.
24.Data Insights
Findings in your data based on thorough analysis generated by Einstein
Discovery.
25.Prediction
A derived value, produced by a model, that represents a possible future outcome.
Einstein Discovery
Statistical modeling and supervised machine learning in a no-code-required,
rapid-iteration environment.
26.Types of AI Capabilities
• Numeric Predictions.
• Classifications.
• Robotic Navigation.
• Language Processing.
27.Anthropomorphism
The attribution of human traits, emotions, or intentions to non-human entities,
often seen in AI when robots or software are given human-like attributes.
30.Augmented Intelligence
Combining human intelligence with AI to enhance and scale decision-making
capabilities.
31.CRM with AI
Customer Relationship Management systems enhanced with AI features to
automate tasks and personalize customer interactions.
36.Grounding
Linking abstract concepts in AI models to concrete instances or data.
37.Hallucination
When AI models produce outputs that aren't based on the data they were trained
on.
40.Machine Learning
AI's subset where systems learn from data to improve task performance without
explicit programming.
41.Model
Refers to the specific mathematical structure and algorithms used for a particular
AI task.
43.Parameters
Configurable variables in a model that are learned from the training data to make
predictions.
44.Prompt Defense
Techniques ensuring AI models respond safely to given prompts.
45.Prompt Engineering
Methods of refining AI model responses through prompt optimization.
46.Red-Teaming
A strategy of simulating adversarial attacks to identify vulnerabilities in systems.
47.Reinforcement Learning
A type of machine learning where agents learn by interacting with environments
and receiving feedback.
48.Supervised Learning
Machine learning where models learn from labeled data.
49.Toxicity
Harmful outputs or behaviors in AI models.
50. Transparency
Making the decision-making process of AI models clear and understandable.
51.Validation
Evaluating the performance of an AI model on a separate dataset not used during
training.
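A minimal sketch of holding out a validation set (the data is a stand-in for labeled examples):

```python
import random

# Hold out a validation set that the model never trains on.
random.seed(0)                   # fixed seed for a reproducible split
data = list(range(100))          # stand-in for 100 labeled examples
random.shuffle(data)
split = int(0.8 * len(data))     # 80/20 train/validation split
train, validation = data[:split], data[split:]
print(len(train), len(validation))  # 80 20
```

Evaluating on the held-out set reveals overfitting: a model that scores well on `train` but poorly on `validation` has memorized rather than generalized.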
54.Numeric Predictions
The predictive nature of AI that often assigns values between 0 (indicating an
unlikely event) and 1 (indicating a certain event). While it can include percentage
values, it's not limited to them; for instance, AI can predict specific numbers, like
the amount in dollars.
55.Classifications
AI's ability to categorize data or patterns, often outperforming human abilities.
Each AI classifier is specialized for a distinct task. For instance, an AI designed
for detecting phishing emails would not perform well in identifying images of fish.
56.Robotic Navigation
AI's proficiency in navigating dynamic environments, such as autonomous
vehicles adapting to various road conditions, including curves, wind gusts from
large trucks, and sudden traffic stops.
57.Language Processing
A facet of AI known as Natural Language Processing (NLP) that understands
word associations to extract underlying intentions. This capability enables
functions like translating a document from English to German or summarizing a
lengthy scientific paper.
60.Neural Networks
An AI methodology that simulates the structure and function of the human brain
to process information, enabling the machine to learn from and interpret data in a
human-like manner.
62.Yes-and-No Predictions
AI's capability to provide binary answers by analyzing historical data, such as
determining if a lead is viable for a business or predicting email engagement.
66.Generative AI
Specialized AI capable of generating media like text or images. These models
learn from input data patterns and produce new content resembling the learned
characteristics.
(AI techniques used to create content or data models.)
67.Predictive AI
Studies historical data, identifies patterns, and makes predictions about the
future that can better inform business decisions. Predictive AI's value is shown in
the ways it can detect data flow anomalies and extrapolate how they will play out
in the future in terms of results or behavior; enhance business decisions by
identifying a customer's purchasing propensity as well as upsell potential; and
improve business outcomes
69.User Spoofing
The act of creating a believable online profile using AI-generated images. This
allows fake users to interact convincingly with real users, posing challenges for
businesses trying to identify networks of bots promoting their own content.
73.Syntactic Parsing
An analysis technique that interprets the words in a sentence based on grammar
and arranges them to illustrate relationships among the words. Sub-components
include segmentation, tokenization, stemming, lemmatization, part of speech
tagging, and named entity recognition.
74.Sentiment Analysis
A method that evaluates digital text to discern the emotional undertone of a
message, categorizing it as positive, negative, or neutral.
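A toy lexicon-based sketch of sentiment analysis (real systems use trained models; the word lists here are tiny illustrative assumptions):

```python
# Score text by counting positive and negative lexicon words.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "slow"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("The service was terrible"))   # negative
```

Lexicon approaches miss negation and sarcasm ("not bad"), which is why production sentiment analysis uses machine-learned models.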
79.Tokenization
The process of splitting sentences into individual words or tokens. While
straightforward in English due to spaces, languages like Thai or Chinese present
challenges, necessitating a deep understanding of vocabulary and morphology.
Stemming might simplify words in a rudimentary manner, which can lead to non-
existent stems.
Lemmatization, on the other hand, considers the part of speech and provides a
more valid base word or lemma.
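The tokenization, stemming, and lemmatization steps can be illustrated with a toy sketch (the suffix rules and lemma table are simplified assumptions; note how the naive stemmer produces non-existent stems like "runn"):

```python
def tokenize(sentence):
    """Split an English sentence into lowercase word tokens."""
    return sentence.lower().replace(".", "").split()

def naive_stem(word):
    """Crude rule-based stemming: chop common suffixes."""
    for suffix in ("ing", "ies", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# A tiny lemma lookup table (real lemmatizers use dictionaries plus POS tags).
LEMMAS = {"studies": "study", "running": "run", "better": "good"}

def lemmatize(word):
    return LEMMAS.get(word, word)

tokens = tokenize("She studies running.")
print(tokens)                         # ['she', 'studies', 'running']
print([naive_stem(t) for t in tokens])   # crude stems, e.g. 'stud', 'runn'
print([lemmatize(t) for t in tokens])    # valid lemmas, e.g. 'study', 'run'
```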
81.Speech Tagging
An essential function in NLP where each word in a sentence is labeled based on
its grammatical role, such as noun, verb, adjective, etc. This helps in
understanding the syntax and meaning of a sentence.
95.Proxy Values
Attributes in a dataset that correlate with sensitive variables. A strong correlation,
such as Account Name being a 90% proxy for postal code, might infer
associations between various data points.
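One hedged way to screen for proxy values is to measure how strongly a candidate feature correlates with a sensitive attribute (the numeric encodings below are toy values, not a real dataset):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

postal_code = [1, 1, 2, 2, 3, 3]   # hypothetical encoded values
sensitive = [1, 1, 2, 2, 3, 3]     # perfectly correlated: r = 1.0
print(round(pearson(postal_code, sensitive), 2))
```

A correlation near 1.0 flags the feature as a likely proxy, so excluding the sensitive variable alone would not remove the bias.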
99.Einstein GPT
A tool that leverages large language models, grounding them in CRM data to
generate personalized content for businesses in a safe and secure manner.
100.Predictive
Analysis dives deeper into data to offer insights into the future. While descriptive
analysis reveals what has happened and diagnostic analysis explains why
something has happened, predictive analysis forecasts what might happen in the
future. This analysis uses statistical algorithms and machine learning techniques
to identify the likelihood of future outcomes based on historical data. It's crucial
for businesses that want to make proactive decisions and stay ahead of potential
challenges.
101.Prescriptive
analysis provides recommendations on how to handle potential future scenarios.
Instead of just showing what might happen, it advises on how to take action. By
using insights from previous analyses (descriptive, diagnostic, and predictive),
prescriptive analysis offers potential solutions to decision-makers. For instance,
if predictive analysis forecasts a decline in sales, prescriptive analysis might
suggest increasing marketing spend or launching a promotional campaign.
102.AI Bias
Refers to the presence of systematic and unfair discrimination in outputs
delivered by AI models.
103.Deep Learning
A subset of machine learning inspired by the structure of the human brain. It uses
neural networks with many layers (hence "deep") to analyze various factors of
data.
104.Data Augmentation
Refers to the process of increasing the diversity of data available for training
models without actually collecting new data. Techniques like cropping, padding,
and horizontal flipping are used to artificially enhance the quantity of data by
introducing minor changes to existing data samples.
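Cropping and horizontal flipping can be illustrated on a toy 3x3 "image" (real pipelines operate on image tensors, but the idea is the same):

```python
# A 3x3 grid of pixel values standing in for an image.
image = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

def hflip(img):
    """Mirror each row left-to-right."""
    return [list(reversed(row)) for row in img]

def crop(img, top, left, size):
    """Take a size x size window starting at (top, left)."""
    return [row[left:left + size] for row in img[top:top + size]]

print(hflip(image))          # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
print(crop(image, 0, 1, 2))  # [[2, 3], [5, 6]]
```

Each transformed copy is a "new" training example derived from the same original, expanding the dataset without new collection.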
108.Prescriptive Analysis
The culmination of descriptive and predictive analytics. Utilizing machine
learning, it offers actionable recommendations based on past events, acting as a
roadmap for businesses.
109.Continuous Variables
Forming an unbroken whole, without interruption. These are variables that cannot
be counted in a finite amount of time because there is an infinite number of
values between any two values.
110.Aggregation
A collection of quantitative data and can show large data trends.
111.Discrete Variables
Individually separate and distinct. Simply stated, if you can count it individually, it
is a discrete variable.
112.Quantitative Variables
Variables that can be measured numerically, such as the number of items in a set.
113.Ordinal
Being of a specified position or order in a numbered series.
118.Qualitative Variable
A qualitative variable describes qualities or characteristics, such as country of
origin, gender, name, or hair color.
119.Variable
A measurement, property, or characteristic of an item that may vary or change.
120.Accelerator
A class of microprocessors designed to accelerate AI applications.
122.Back Propagation
An algorithm often used in training neural networks, referring to the method for
computing the gradient of the loss function with respect to the weights in the
network.
123.Chain of Thought
The sequence of reasoning steps an AI model uses to arrive at a decision.
126.Diffusion
A technique for generating new data that starts with real data, progressively
adds random noise, and trains a model to reverse the process, recovering
structured output from noise.
127.Double Descent
A phenomenon in machine learning in which model performance improves with
increased complexity, then worsens, then improves again.
128.Embedding
The representation of data in a new form, often a vector space. Similar data
points have more similar embeddings.
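A sketch of comparing embeddings with cosine similarity (the vectors are made-up stand-ins for learned embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 3-dimensional embeddings.
cat = [0.9, 0.8, 0.1]
dog = [0.8, 0.9, 0.2]
car = [0.1, 0.2, 0.9]

print(round(cosine(cat, dog), 3), round(cosine(cat, car), 3))
```

Semantically related items ("cat" and "dog") end up with higher similarity than unrelated ones ("cat" and "car"), which is what makes embeddings useful for search and recommendations.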
129.Emergence/Emergent Behavior
Refers to complex behavior arising from simple rules or interactions. "Sharp left
turns" and "intelligence explosions" are speculative scenarios where AI
development takes sudden and drastic shifts, often associated with the arrival of
AGI.
130.End-to-End Learning
A machine learning approach that does not require hand-engineered features:
the model is fed raw data and learns directly from these inputs.
131.Expert Systems
An application of artificial intelligence technologies that provides solutions to
complex problems within a specific domain.
132.XAI - Explainable AI
A subfield of AI focused on creating transparent models that provide clear and
understandable explanations of their decisions.
133.Fine-tuning
The process of taking a pre-trained machine learning model that has already been
trained on a large dataset and adapting it for a slightly different task or specific
domain. During fine-tuning, the model's parameters are further adjusted using a
smaller, task-specific dataset, allowing it to learn task-specific patterns and
improve performance on the new task.
134.Forward Propagation
The process where input data is fed into the network and passed through each
layer (from the input layer to the hidden layers and finally to the output layer) to
produce the output. The network applies weights and biases to the inputs and
uses activation functions to generate the final output.
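The steps above can be sketched for a tiny network with two inputs, one hidden layer of two ReLU neurons, and one output; all weights and biases are made-up values:

```python
def relu(z):
    # A common activation function: passes positives, zeroes out negatives.
    return max(0.0, z)

inputs = [1.0, 2.0]
hidden_weights = [[0.5, -0.2], [0.3, 0.8]]   # one row per hidden neuron
hidden_biases = [0.1, -0.1]
output_weights = [1.0, -1.0]
output_bias = 0.5

# Input layer -> hidden layer: weighted sum, plus bias, through the activation.
hidden = [
    relu(sum(w * x for w, x in zip(weights, inputs)) + b)
    for weights, b in zip(hidden_weights, hidden_biases)
]
# Hidden layer -> output layer: another weighted sum plus bias.
output = sum(w * h for w, h in zip(output_weights, hidden)) + output_bias
print(output)
```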
135.Foundation Model
A large AI model trained on broad data, meant to be adapted to a wide range of
specific tasks.
137.Gradient Descent
In machine learning, gradient descent is an optimization method that gradually
adjusts a model's parameters in the direction that most quickly reduces its
loss function. In linear regression, for example, gradient descent helps find the
best-fit line by repeatedly refining the line's slope and intercept to minimize
prediction errors.
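The linear-regression example can be sketched directly; the data points and learning rate below are made up, and the loss is mean squared error:

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # generated by y = 2x + 1

slope, intercept = 0.0, 0.0
learning_rate = 0.05

for _ in range(2000):
    n = len(xs)
    # Gradient of the mean squared error with respect to each parameter.
    grad_slope = sum(2 * (slope * x + intercept - y) * x for x, y in zip(xs, ys)) / n
    grad_intercept = sum(2 * (slope * x + intercept - y) for x, y in zip(xs, ys)) / n
    # Step against the gradient, i.e. in the direction that reduces the loss.
    slope -= learning_rate * grad_slope
    intercept -= learning_rate * grad_intercept

print(round(slope, 2), round(intercept, 2))  # converges near 2.0 and 1.0
```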
138.Hidden Layer
Layers of artificial neurons in a neural network that are not directly connected to
the input or output.
139.Hyperparameter Tuning
The process of selecting the appropriate values for the hyperparameters
(parameters that are not learned from the data) of a machine learning model.
140.Inference
The process of making predictions with a trained machine-learning model.
141.Instruction Tuning
A technique in machine learning where models are fine-tuned based on specific
instructions given in the dataset.
143.Latent Space
Refers to the compressed representation of data that a model (like a neural
network) creates. Similar data points are closer in latent space.
146.Multimodal
In AI, this refers to models that can understand and generate information across
several types of data, such as text and images.
148.Objective Function
A function that a machine learning model seeks to maximize or minimize during
training.
149.Overfitting
A modeling error that occurs when a function is too closely fit to a limited set of
data points, resulting in poor predictive performance when applied to unseen
data.
150.Pre-training
The initial phase of training a machine learning model where the model learns
general features, patterns, and representations from the data without specific
knowledge of the task it will later be applied to. This unsupervised or semi-
supervised learning process enables the model to develop a foundational
understanding of the underlying data distribution and extract meaningful features
that can be leveraged for subsequent fine-tuning on specific tasks.
151.Prompt
The initial context or instruction that sets the task or query for the model.
152.Regularization
In machine learning, regularization is a technique used to prevent overfitting by
adding a penalty term to the model's loss function. This penalty discourages the
model from excessively relying on complex patterns in the training data,
producing models that generalize better and are less prone to overfitting.
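A minimal sketch of the most common form, an L2 penalty: a term proportional to the sum of squared weights is added to the loss, so large weights are discouraged. The weights, base loss, and penalty strength (lam) below are made-up values:

```python
def regularized_loss(weights, base_loss, lam):
    # L2 penalty: lam times the sum of squared weights.
    penalty = lam * sum(w * w for w in weights)
    return base_loss + penalty

weights = [3.0, -2.0, 0.5]
total = regularized_loss(weights, base_loss=1.0, lam=0.1)
print(total)
```

Larger weights inflate the penalty, so minimizing the regularized loss pulls the weights toward zero.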
153.Singularity
In the context of AI, the singularity (also known as the technological singularity)
refers to a hypothetical future point in time when technological growth becomes
uncontrollable and irreversible, leading to unforeseeable changes to human
civilization.
155.TensorFlow
An open-source machine learning platform developed by Google that is used to
build and train machine learning models.
157.Training Data
The dataset used to train a machine learning model.
158.Transfer Learning
A method in machine learning where a pre-trained model is used on a new
problem.
159.Transformer
A specific type of neural network architecture used primarily for processing
sequential data such as natural language. Transformers are known for their
ability to handle long-range dependencies in data, thanks to a mechanism called
"attention," which allows the model to weigh the importance of different inputs
when producing an output.
160.Underfitting
A modeling error in statistics and machine learning when a statistical model or
machine learning algorithm cannot adequately capture the underlying structure
of the data.
161.Unsupervised Learning
A type of machine learning where the model is not provided with labeled training
data, and instead must identify patterns in the data on its own.
162.Validation Data
A subset of the dataset used in machine learning that is separate from the
training and test datasets. It's used to tune the hyperparameters (i.e.,
architecture, not weights) of a model.
164.Zero-shot Learning
A type of machine learning where the model makes predictions for conditions not
seen during training, without any fine-tuning.
172.Generative AI at Salesforce
Sales: uses AI insights to identify the best next steps and close deals faster.
Service: uses AI to have human-like conversations and provide answers to
repetitive questions and tasks, freeing up agents to handle more complicated
requests.
Marketing: leverages AI to understand customer behavior and personalize the
timing, targeting, and content of marketing activities.
Commerce: uses AI to power highly personalized shopping experiences and
smarter e-commerce.
206.What is AI explainability?
Understanding how AI generates predictions and uses data features.