Aids 2 Mse
2. List and explain different search techniques with the help of a diagram.
9. Define Cognitive Computing. Draw a neat diagram of elements of the cognitive system and explain the
elements.
10. Describe the design Principles for Cognitive Systems with the help of diagrams.
11. Write a short note on 1) Supervised Learning 2) Reinforcement Learning 3) Unsupervised Learning
14. List steps in building a typical cognitive application. Explain the same for healthcare applications.
18. With the help of a diagram, explain the architecture of the Mamdani-type fuzzy control system.
19. Describe the design of Fuzzy Controllers with the help of a diagram.
Infrastructure and Deployment Modalities: organizations can leverage Software as a Service (SaaS)
applications and services to meet industry‐specific requirements. A highly parallelized and distributed
environment, including compute and storage cloud services, must be supported.
Data Access, Metadata, and Management Services: There will be a variety of internal and external data
sources that will be included in the corpus. To make sense of these data sources, there needs to be a set
of management services that prepares data to be used within the corpus.
The Corpus, Taxonomies, and Data Catalogs: The corpus is the knowledge base of ingested data and is used
to manage codified knowledge. The data required to establish the domain for the system is included in
the corpus. Ontologies are often developed by industry groups to classify industry‐specific elements such
as standard chemical compounds, machine parts, or medical diseases and treatments. In a cognitive
system, it is often necessary to use a subset of an industry‐based ontology to include only the data that
pertains to the focus of the cognitive system. A taxonomy works hand in hand with ontologies. A
taxonomy provides context within the ontology.
Link for the full answer: https://medium.com/@jorgesleonel/the-main-components-of-a-cognitive-computing-system-2dbfe9ee6c3
Limitations
1. Security challenges.
2. Long development cycle length.
3. Slow adoption.
4. Negative environmental impact.
Benefits
IBM Watson products unlock new levels of productivity by infusing AI and automation into core
business workflows.
It is a question-answering machine.
It uses natural language processing to understand grammar and context.
It evaluates all possible meanings, determines what is being asked, and then answers based on
supporting evidence and the quality of the information found.
Q. Advanced analytics of cognitive computing
Advanced analytics refers to a collection of techniques and algorithms for identifying patterns in
large, complex, or high-velocity data sets with varying degrees of structure.
It includes sophisticated statistical models, predictive analytics, machine learning, neural
networks, text analytics, and other advanced data mining techniques.
Some of the specific statistical techniques used in advanced analytics include decision tree
analysis, linear and logistic regression analysis, social network analysis, and time series analysis.
These analytical processes help discover patterns and anomalies in large volumes of data that
can anticipate and predict business outcomes.
Accordingly, advanced analytics is a critical element in creating long-term success with a
cognitive system that can provide the right answers to complex questions and predict outcomes.
1. Descriptive analytics
Descriptive analytics refers to the interpretation of historical data to better understand changes
that occur in a business.
Using a range of historic data and benchmarking, decision-makers obtain a holistic view of
performance and trends on which to base business strategy.
Descriptive analytics can help to identify the areas of strength and weakness in an organization.
Examples of metrics used in descriptive analytics include year-over-year pricing changes, month-
over-month sales growth, the number of users, or the total revenue per subscriber.
Descriptive analytics is used in conjunction with newer analytics, such as predictive and
prescriptive analytics.
2. Diagnostic analytics
Diagnostic analytics is the area of data analytics that is concerned with identifying the root cause
of problems or issues.
Diagnostic analytics is the process of using data to determine the causes of trends and
correlations between variables.
Diagnostic analytics can be leveraged to understand why something happened and the
relationships between related factors.
3. Predictive analytics
Predictive analytics is a branch of advanced analytics that makes predictions about future
outcomes using historical data combined with statistical modeling, data mining techniques, and
machine learning (a short illustrative sketch is given after this list).
Companies employ predictive analytics to find patterns in this data to identify risks and
opportunities. Predictive analytics is often associated with big data and data science.
The term predictive analytics refers to the use of statistics and modeling techniques to make
predictions about future outcomes and performance.
4. Prescriptive analytics
Prescriptive analytics is the process of using data to determine an optimal course of action. By
considering all relevant factors, this type of analysis yields recommendations for next steps.
Because of this, prescriptive analytics is a valuable tool for data-driven decision-making.
Machine-learning algorithms are often used in prescriptive analytics to parse through large
amounts of data faster—and often more efficiently—than humans can.
Prescriptive analytics utilizes similar modeling structures to predict outcomes and then utilizes a
combination of machine learning, business rules, artificial intelligence, and algorithms to
simulate various approaches to these numerous outcomes. It then suggests the best possible
actions to optimize business practices. It is the “what should happen.”
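As referenced under predictive analytics above, here is a minimal illustrative sketch (assuming scikit-learn and NumPy are available; the monthly sales figures are made up) that fits a linear trend to historical sales and forecasts the next month:

# Minimal predictive-analytics sketch: fit a trend to 12 months of
# (hypothetical) sales and forecast month 13.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)           # months 1..12
sales = np.array([100, 104, 110, 108, 115, 121,    # made-up monthly sales
                  126, 130, 129, 137, 142, 148])

model = LinearRegression().fit(months, sales)      # learn the historical trend
forecast = model.predict(np.array([[13]]))         # predict the next month
print(f"Forecast for month 13: {forecast[0]:.1f}")

Real predictive analytics would of course use richer features and models, but the workflow (historical data in, fitted model, forecast out) is the same.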
Autonomous data analytics will be the driver of business analytics in the future and will be
seamlessly integrated into our lives.
The future of business analytics is both adaptive and autonomous.
This means using machine learning to power the business analytics value chain which starts with
discovery, moves to preparing and augmenting data, then to analysis, modeling and finally to
prediction.
It is data-driven and is a powerful platform for innovation. Personal context is all-important: the
system must understand who I am, where I am, and what I need to know now.
This autonomous capability speeds up analysis, provides automatically generated narration, and
delivers quick, real-time insights.
The system can then offer automated recommendations for data standardization, cleansing, and
enrichment.
Explain Bayesian Theorem and its applications in AI
Bayes' theorem gives a mathematical rule allowing us to find the probability of a cause given its effect.
Bayes theorem is widely used to make predictions and improve the accuracy of systems
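Stated as a formula (with A as the cause or hypothesis and B as the observed effect):
P(A|B) = P(B|A) * P(A) / P(B)
where P(A) is the prior probability, P(B|A) is the likelihood of the effect given the cause, P(B) is the probability of the evidence, and P(A|B) is the posterior probability of A once B has been observed.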
Applications in AI
Bayes' theorem is a cornerstone of many AI techniques, particularly in probabilistic AI. Here
are some key applications:
1. Bayesian Networks: Graphical models that represent probabilistic dependencies between
variables and support reasoning under uncertainty (described in more detail later in these notes).
2. Naive Bayes Classification (a short spam-filter sketch is given after this list):
Text classification: A popular algorithm for text classification tasks like spam filtering
and sentiment analysis.
Assumption of independence: Naive Bayes assumes that the features of a data point are
independent given the class label.
Simple yet effective: Despite its simplicity, Naive Bayes often performs surprisingly
well in many real-world applications.
3. Bayesian Optimization: Builds a probabilistic (Bayesian) model of an expensive objective
function and updates it as new evaluations arrive, e.g., for hyperparameter tuning of
machine-learning models.
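As noted under Naive Bayes classification above, here is a minimal spam-filter sketch (the tiny corpus is made up; scikit-learn's CountVectorizer and MultinomialNB are assumed to be available):

# Minimal Naive Bayes spam-filter sketch on a tiny made-up corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win a free prize now", "lowest price guaranteed",        # spam
         "meeting moved to monday", "please review the report"]    # ham
labels = [1, 1, 0, 0]                                              # 1 = spam, 0 = ham

vectorizer = CountVectorizer()                 # bag-of-words features
X = vectorizer.fit_transform(texts)
clf = MultinomialNB().fit(X, labels)           # Bayes' rule with the naive
                                               # feature-independence assumption
test = vectorizer.transform(["free prize meeting"])
print(clf.predict(test), clf.predict_proba(test))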
The Centroid (centre of gravity) method computes the crisp output as
x* = ∫ x·μ(x) dx / ∫ μ(x) dx
where μ(x) is the aggregated output membership function and x ranges over the output universe of
discourse.
Limitations of the Centroid method:
Sensitive to outliers: Outliers with high membership grades can significantly influence the
centroid.
May not be suitable for highly multimodal fuzzy sets: If the fuzzy set has multiple distinct peaks,
the centroid might not represent the most relevant output.
Assumes linear membership functions: The centroid method is based on the assumption of
linear membership functions. For highly nonlinear functions, it might not provide accurate
results.
In conclusion, the Centroid method is a popular defuzzification technique due to its simplicity and ease
of implementation. However, its suitability depends on the specific characteristics of the fuzzy set and
the desired output. For applications where outliers or highly multimodal fuzzy sets are common, other
methods like Weighted Average or Height-Based Defuzzification might be more appropriate.
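A minimal sketch of the Centroid method applied to a sampled output fuzzy set (the triangular membership function and its range are assumed purely for illustration; NumPy assumed):

# Centroid (centre of gravity) defuzzification on a sampled fuzzy set.
import numpy as np

x = np.linspace(0, 10, 101)                    # output universe of discourse
mu = np.maximum(0, 1 - np.abs(x - 6) / 3)      # assumed triangular membership peaked at 6

crisp = np.sum(x * mu) / np.sum(mu)            # discrete form of  ∫ x·μ(x) dx / ∫ μ(x) dx
print(f"Defuzzified output: {crisp:.2f}")      # ≈ 6 for this symmetric set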
Cognitive computing, a subset of artificial intelligence that simulates human thought processes,
is poised to revolutionize the healthcare industry. By leveraging machine learning, natural
language processing, and data analytics, cognitive computing systems can assist healthcare
professionals in various tasks, improving patient outcomes and enhancing efficiency.
Specific Applications:
1. Personalized Medicine:
o Predictive analytics: Cognitive computing can analyze vast amounts of patient data,
including genetic information, medical history, and lifestyle factors, to predict disease
risk and tailor treatment plans.
o Drug discovery: By analyzing molecular structures and biological pathways, cognitive
systems can accelerate the discovery of new drugs.
Potential Benefits:
Improved patient outcomes: By enabling early detection, personalized treatment, and better
disease management, cognitive computing can enhance patient outcomes.
Increased efficiency: Automation of routine tasks can free up healthcare professionals to focus
on more complex and patient-centric activities.
Reduced costs: Cognitive computing can help reduce healthcare costs by optimizing resource
allocation, preventing medical errors, and improving efficiency.
Enhanced decision-making: By providing evidence-based recommendations and insights,
cognitive systems can support clinicians in making more informed decisions.
Improved patient experience: Personalized care and timely interventions can enhance the
overall patient experience.
List three industries where data science is extensively used and briefly describe
one application in each industry.
1. Financial Services
Credit risk assessment: Data science models can analyze vast amounts of customer data to
assess creditworthiness, helping financial institutions make informed decisions about lending.
2. Healthcare
Disease prediction: Data science algorithms can analyze patient data, including medical records,
genetic information, and lifestyle factors, to predict the likelihood of developing certain diseases.
3. Retail
Customer segmentation: Data science techniques can be used to segment customers into
different groups based on their behavior, preferences, and demographics, enabling retailers to
tailor marketing campaigns and product offerings.
What are the various metrics for evaluating classifier performance?
Metrics for Evaluating Classifier Performance
When evaluating the performance of a classifier, it's essential to use appropriate metrics that
align with the specific goals of the application. Here are some common metrics:
Accuracy:
Definition: The overall proportion of correct predictions (both true positives and true negatives)
out of the total number of predictions.
Formula: (TP + TN) / (TP + TN + FP + FN)
Suitable for: Balanced datasets where false positives and false negatives have similar costs.
Precision:
Definition: The proportion of positive predictions that were actually correct (true positives).
Formula: TP / (TP + FP)
Suitable for: Scenarios where false positives are more costly than false negatives (e.g., spam
filtering).
Recall:
Definition: The proportion of actual positive instances that were correctly predicted (true
positives).
Formula: TP / (TP + FN)
Suitable for: Scenarios where false negatives are more costly than false positives (e.g., medical
diagnosis).
F1-score:
Definition: The harmonic mean of precision and recall, giving a single score that balances the two.
Formula: 2 * (Precision * Recall) / (Precision + Recall)
Suitable for: Imbalanced datasets, or when both false positives and false negatives matter.
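A small sketch computing these metrics from made-up labels and predictions (scikit-learn assumed):

# Compute the metrics above for a toy set of true labels and predictions.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # actual class labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # classifier output

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))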
The XOR (exclusive OR) function is a logical operation that returns true if and only if the
inputs are different. It is a challenging problem for a single-layer McCulloch-Pitts (MP) network.
Linear separability: The XOR function is not linearly separable. This means it's impossible to
draw a straight line to separate the input patterns that produce true and false outputs.
Limitations of MP neurons: MP neurons can only represent linear decision boundaries.
To implement the XOR function, we need a two-layer network. The first layer (hidden layer)
creates two intermediate representations, and the second layer (output layer) combines these
representations to produce the final output.
Activation Function:
Step function: If the weighted sum of inputs exceeds the neuron's threshold, the neuron outputs 1;
otherwise, it outputs 0.
How it Works:
1. Hidden Layer:
o Neuron 1: Activates when both x1 and x2 are 1 (an AND of the inputs).
o Neuron 2: Activates when at least one of x1 and x2 is 1 (an OR of the inputs).
2. Output Layer:
o The output neuron activates only when Neuron 2 is active and Neuron 1 is not, i.e. when exactly
one of the inputs is 1, correctly implementing the XOR function.
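A minimal sketch of this two-layer construction using simple threshold (McCulloch-Pitts style) neurons; the weights and thresholds below are one possible choice:

# Two-layer network of threshold neurons implementing XOR.
def mp_neuron(inputs, weights, threshold):
    # Step activation: fire (1) if the weighted sum reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def xor(x1, x2):
    h1 = mp_neuron([x1, x2], [1, 1], 2)      # hidden neuron 1: AND(x1, x2)
    h2 = mp_neuron([x1, x2], [1, 1], 1)      # hidden neuron 2: OR(x1, x2)
    # Output fires when OR is active but AND is not (inhibitory weight on h1).
    return mp_neuron([h1, h2], [-2, 1], 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))          # prints 0, 1, 1, 0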
1. Focus on What Matters to Users
Purpose: The system should focus on solving the problems that matter to users.
How: Learn from the user’s goals and preferences to provide useful solutions.
2. Learn Continuously
Purpose: Cognitive systems must improve over time by learning from new information.
How: The system collects data and adjusts its responses or suggestions based on experience, like
a person who learns from their mistakes.
3. Be Context-Aware
Purpose: The system needs to understand the situation or environment it's operating in.
How: It uses data such as location, time, and user interactions to tailor its behavior accordingly,
like understanding the difference between a home and office setting.
4. Interact Naturally
Purpose: The system should communicate in a way that feels natural to humans.
How: Use language, gestures, or other forms of interaction that people are comfortable with,
like speaking in everyday language.
5. Support Decision-Making
Purpose: Help users make better decisions by analyzing information and providing
recommendations.
How: The system processes large amounts of data and presents the most relevant options or
answers, much like a personal assistant.
6. Be Transparent
Purpose: Users should understand why the system behaves a certain way or makes certain
suggestions.
How: Provide explanations for its actions or decisions so that users can trust and learn from the
system.
7. Handle Uncertainty
Purpose: Cognitive systems must deal with incomplete or uncertain information effectively.
How: When unsure, the system might offer multiple suggestions or ask clarifying questions,
similar to how humans handle uncertainty.
The holdout method is a simple technique used in machine learning to evaluate the performance
of a model. It involves randomly splitting the dataset into two parts: a training set and a testing
set. The model is trained on the training set, and its performance is evaluated on the testing set.
This helps to assess the model's ability to generalize to unseen data.
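A minimal holdout sketch (scikit-learn assumed; the Iris dataset and decision tree are just placeholders):

# Holdout method: random split into training and testing sets.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# Random 70/30 split into a training set and a held-out testing set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)   # train on the training set
print("Holdout accuracy:", model.score(X_test, y_test))  # evaluate on unseen data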
2. Defuzzification Methods
Defuzzification is the process of converting a fuzzy set into a crisp value. It is used in fuzzy logic
systems to obtain a meaningful output from the fuzzy inference process. Common
defuzzification methods include the Centroid (centre of gravity) method, the Weighted Average
method, and Height-based defuzzification.
Data science for text involves the analysis and understanding of textual data. It has numerous
applications, including:
Natural Language Processing (NLP): Tasks such as text classification, sentiment analysis, machine
translation, and question answering.
Information Retrieval: Searching and retrieving relevant information from large text corpora.
Text Summarization: Automatically generating concise summaries of text documents.
Text Generation: Creating new text.
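As an illustration of information retrieval over text, a minimal sketch that ranks a few made-up documents against a query using TF-IDF and cosine similarity (scikit-learn assumed):

# Rank documents against a query by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["fuzzy logic controls the air conditioner",
        "bayesian networks model uncertainty",
        "cognitive computing supports healthcare decisions"]
query = ["uncertainty in bayesian models"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors)[0]   # similarity to each document
best = scores.argmax()
print("Most relevant document:", docs[best])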
Defuzzification is the process of converting a fuzzy set into a crisp value. In fuzzy logic
systems, decisions are often represented as fuzzy sets, which are sets with elements that have
degrees of membership between 0 and 1. Defuzzification is necessary to translate these fuzzy
sets into concrete, actionable outputs that can be used in real-world applications.
Necessity of Defuzzification:
Real-world applications: Many real-world systems require crisp, numerical outputs. For
example, a control system for a robot might need to determine the exact speed and
direction of movement.
Human-machine interaction: Humans typically prefer crisp, understandable outputs. A
fuzzy logic system that provides a fuzzy set as output might be difficult for a human
operator to interpret.
Integration with other systems: Defuzzification is often necessary to integrate fuzzy
logic systems with other systems that require crisp inputs or outputs.
Describe Bayesian Networks.
Bayesian networks are graphical models that represent probabilistic relationships between
variables. They provide a framework for reasoning under uncertainty and making inferences
based on available evidence.
Key Components:
Nodes: Each node represents a variable, and its state can be discrete or continuous. The state of
a node can be observed (evidence) or unknown (query).
Edges: Directed edges (arrows) between nodes indicate conditional dependencies. If there's a
directed edge from node A to node B, it means that the probability distribution of B depends on
the value of A. In other words, knowing the value of A provides information about the likely
values of B.
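A minimal sketch of inference in a two-node network, Rain → WetGrass, with made-up probabilities (plain Python, inference by direct application of Bayes' theorem):

# Tiny Bayesian network: Rain -> WetGrass, probabilities made up for illustration.
p_rain = 0.2                                   # prior P(Rain)
p_wet_given_rain = 0.9                         # P(WetGrass | Rain)
p_wet_given_no_rain = 0.1                      # P(WetGrass | no Rain)

# Total probability of the evidence P(WetGrass).
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Posterior P(Rain | WetGrass) via Bayes' theorem.
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(f"P(Rain | WetGrass) = {p_rain_given_wet:.3f}")   # 0.18 / 0.26 ≈ 0.692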
A Hidden Markov Model (HMM) is a statistical model that describes a sequence of observable
events as a function of hidden states. It assumes that the underlying system is a Markov process,
meaning that the probability of the next state depends only on the current state. However, the
states themselves are not directly observed, hence the term "hidden."
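A minimal sketch of the forward algorithm for computing the likelihood of an observation sequence under an HMM (the weather/activity states and all probabilities are made up; NumPy assumed):

# HMM forward algorithm: hidden states = {Rainy, Sunny},
# observations = {walk, shop, clean}; all numbers are illustrative.
import numpy as np

start = np.array([0.6, 0.4])                   # initial state probabilities
trans = np.array([[0.7, 0.3],                  # transitions between hidden states
                  [0.4, 0.6]])
emit = np.array([[0.1, 0.4, 0.5],              # P(observation | hidden state)
                 [0.6, 0.3, 0.1]])

obs = [0, 1, 2]                                # observed sequence: walk, shop, clean

alpha = start * emit[:, obs[0]]                # forward variables for the first observation
for o in obs[1:]:
    alpha = (alpha @ trans) * emit[:, o]       # recursive forward step

print("P(observation sequence) =", alpha.sum())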
Imagine you have a bag of marbles that are different colors and sizes. A Gaussian Mixture Model
(GMM) is like trying to figure out how many different types of marbles are in the bag, what their
colors are, and how big they are.
1. Guess: We start by guessing how many types of marbles are in the bag and their
properties.
2. Assign: We look at each marble and decide which type it's most likely to be based on its
color and size.
3. Update: We use this information to refine our guesses about the number of types, their
properties, and how common they are.
We repeat steps 2 and 3 until our guesses stop changing much. This process is called the
Expectation-Maximization (EM) algorithm.
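A minimal sketch that lets the EM algorithm recover two "types of marbles" from made-up one-dimensional data (scikit-learn's GaussianMixture is assumed to be available):

# GMM fitted by EM on synthetic 1-D data drawn from two groups.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(2.0, 0.5, 100),     # "small marbles"
                       rng.normal(8.0, 1.0, 100)])    # "large marbles"
data = data.reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)  # EM: guess / assign / update
print("Estimated means:   ", gmm.means_.ravel())
print("Estimated weights: ", gmm.weights_)
print("Cluster of value 7:", gmm.predict([[7.0]]))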
1. Fuzzification:
Inputs: The crisp inputs to the system are converted into fuzzy sets.
Membership functions: Each input is assigned a degree of membership to different fuzzy sets
(e.g., "low," "medium," "high").
2. Rule Base:
Fuzzy rules: A collection of IF-THEN rules that define the system's behavior.
Antecedent: The IF part of the rule, which specifies the conditions under which the rule is
activated.
Consequent: The THEN part of the rule, which specifies the action to be taken if the antecedent
is satisfied.
3. Inference Engine:
Matching: The degree of match between the input fuzzy sets and the antecedents of the rules is
calculated.
Inference: The consequent fuzzy sets are modified based on the degree of match.
Aggregation: The modified consequent fuzzy sets are combined to produce a single fuzzy set
representing the overall output.
4. Defuzzification:
The aggregated output fuzzy set is converted into a single crisp control value, for example using
the Centroid method.
Mamdani-type fuzzy control systems are widely used in various applications, including robotics,
process control, and decision-making, due to their ability to handle uncertainty and provide
human-understandable rules.
Example:
Imagine a thermostat.
Fuzzification: The thermostat measures the temperature and decides if it's "hot," "warm," or
"cold."
Rules: It has rules like "If it's hot, turn on the AC."
Inference: It decides how much to turn on the AC based on how hot it is.
Defuzzification: It turns the AC to a specific temperature based on the decision.
1. Fuzzification: We convert the temperature into fuzzy terms like "hot," "warm," or "cold."
2. Rules: We set up rules like "If it's hot, turn on the AC high."
3. Inference: We use these rules to decide how much to turn on the AC.
4. Defuzzification: We convert the fuzzy decision back into a specific temperature.
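A minimal single-input Mamdani-style sketch of this thermostat (the temperature ranges, membership functions, and rules are assumed purely for illustration; NumPy assumed):

# Mamdani-style sketch: input = room temperature (°C), output = AC power (0-100 %).
# Assumed operating range roughly 10-45 °C.
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0)

def control(temp_c):
    # 1. Fuzzification: degree to which the temperature is cold / warm / hot.
    cold = tri(temp_c, 10, 15, 22)
    warm = tri(temp_c, 18, 24, 30)
    hot  = tri(temp_c, 26, 35, 45)

    power = np.linspace(0, 100, 201)            # output universe: AC power in %
    low    = tri(power, -1, 0, 40)
    medium = tri(power, 20, 50, 80)
    high   = tri(power, 60, 100, 101)

    # 2-3. Rules and inference (min for implication, max for aggregation):
    #   IF cold THEN power low; IF warm THEN power medium; IF hot THEN power high.
    agg = np.maximum.reduce([np.minimum(cold, low),
                             np.minimum(warm, medium),
                             np.minimum(hot, high)])

    # 4. Defuzzification by the centroid method.
    return np.sum(power * agg) / np.sum(agg)

print(f"AC power at 33 °C: {control(33):.1f}%")

A real controller would use more rules and carefully tuned membership functions, but the four steps are exactly those described above.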
Why is it useful?
Fuzzy logic helps us deal with situations that are not clear-cut. It's used in many things, like
washing machines, air conditioners, and even medical diagnosis.
Design of a Fuzzy Logic Controller (steps):
Inputs: Determine the relevant inputs for the controller. For example, a shower controller might
have inputs like desired water temperature and water pressure.
Outputs: Define the outputs that the controller will control. For a shower controller, the output
could be the flow rate of hot and cold water.
Fuzzy sets: Choose appropriate fuzzy sets to represent the range of values for each input and
output. For example, for temperature, you might use fuzzy sets like "low," "medium," and "high."
Membership functions: Determine the shape and parameters of the membership functions for
each fuzzy set. Common shapes include triangular, trapezoidal, and Gaussian.
Rule base: Develop a set of IF-THEN rules that describe the desired behavior of the system.
Antecedents: The IF part of the rule should specify conditions based on the input fuzzy sets.
Consequents: The THEN part of the rule should specify the corresponding output fuzzy sets.
Simulation: Simulate the fuzzy logic controller (FLC) using different input values to evaluate its performance.
Hardware implementation: If necessary, implement the FLC on a microcontroller or other
hardware platform.
Testing: Thoroughly test the controller under various conditions to ensure it meets the desired
specifications.
Example (shower controller):
Inputs: Desired water temperature, water pressure
Outputs: Flow rate of hot and cold water
Fuzzy sets: For example, "cold," "warm," and "hot" for the water temperature, and "low," "medium,"
and "high" for the water pressure and for each flow rate.
Fuzzy rules: For example, IF the temperature is cold AND the pressure is high, THEN set the
hot-water flow to high; IF the temperature is hot, THEN set the hot-water flow to low.
Taxonomies and ontologies are two fundamental methods used to represent knowledge in a
structured and organized manner. They are particularly valuable in fields like artificial
intelligence, knowledge management, and semantic web.
Taxonomies
A taxonomy is a hierarchical classification system that organizes concepts or entities into
categories based on their relationships. It is often represented as a tree structure, where each node
represents a category and the edges represent hierarchical relationships.
Ontology is like a dictionary that explains the meaning of words and how they relate to each other. It
helps computers understand the world in a way that's closer to how humans understand it.
Rich semantics: Ontologies capture more detailed information about concepts, including
their properties, relationships, and axioms.
Formal representation: Ontologies are often represented using formal languages like
OWL (Web Ontology Language).
Reasoning: Ontologies can be used for reasoning and inference, allowing for automatic
deductions and conclusions.
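A toy illustration of the difference (the medical example is made up): a taxonomy as a pure hierarchy, and an ontology as typed relationships that support simple reasoning:

# A taxonomy: a pure hierarchy of categories.
taxonomy = {
    "Disease": {
        "Infectious disease": {"Viral infection": {"Influenza": {}}},
        "Chronic disease": {"Diabetes": {}},
    }
}

# An ontology adds typed relationships and properties between concepts,
# written here as simple (subject, relation, object) triples.
ontology = [
    ("Influenza", "is_a", "Viral infection"),
    ("Influenza", "has_symptom", "Fever"),
    ("Influenza", "treated_by", "Antiviral drug"),
    ("Antiviral drug", "targets", "Virus"),
]

# The extra relations support simple reasoning, e.g. finding treatments.
treatments = [o for s, r, o in ontology if s == "Influenza" and r == "treated_by"]
print(treatments)   # ['Antiviral drug']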