AI Notes Summary
Problem Solving
Summary: Problem solving in AI involves finding a sequence of actions that leads to a desired goal. It is
often modeled as a search problem whose solution is a path from an initial state to a goal state. The
process includes defining the problem: its states, actions, transition model, and goal test.
Example: In a maze-solving problem, the initial state is the starting point in the maze, actions are
movements in the maze (up, down, left, right), the transition model defines how actions change the
state, and the goal test checks if the current state is the exit of the maze.
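The maze formulation above can be sketched in code. This is a minimal illustration, not a standard library API; the `MazeProblem` class, its method names, and the grid are all invented for the example.

```python
# A small grid maze: 'S' = start, 'G' = goal, '#' = wall, '.' = open cell.
GRID = [
    "S..",
    ".#.",
    "..G",
]

class MazeProblem:
    """Illustrative search-problem formulation: states, actions,
    transition model (implicit in actions), and goal test."""

    def __init__(self, grid):
        self.grid = grid
        self.initial = self._find("S")
        self.goal = self._find("G")

    def _find(self, char):
        for r, row in enumerate(self.grid):
            for c, cell in enumerate(row):
                if cell == char:
                    return (r, c)

    def actions(self, state):
        # Up, down, left, right moves that stay in bounds and avoid walls.
        r, c = state
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(self.grid) and 0 <= nc < len(self.grid[0]) \
                    and self.grid[nr][nc] != "#":
                yield (nr, nc)

    def goal_test(self, state):
        return state == self.goal

problem = MazeProblem(GRID)
print(problem.initial, problem.goal)    # (0, 0) (2, 2)
print(problem.goal_test(problem.goal))  # True
```

Any search algorithm from the following sections can be run against an object like this, since they only need the initial state, the successors of a state, and the goal test.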
Uninformed Search
Summary: Blind or uninformed search strategies have no information about the states beyond what the
problem definition provides. These methods systematically explore the search space to find a solution.
Examples:
1. Breadth-First Search (BFS): Explores all nodes at the present depth before moving on to nodes
at the next depth level. It's complete but can be very memory-intensive.
2. Depth-First Search (DFS): Explores as far down a branch as possible before backtracking. It's
memory-efficient but can get stuck in deep or infinite branches.
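A short sketch of BFS may make the level-by-level exploration concrete. The adjacency dictionary below is invented for illustration; DFS would be the same code with a stack (`pop()` from the end) instead of a queue.

```python
from collections import deque

# An illustrative directed graph as an adjacency dict.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def bfs(start, goal):
    # Explore level by level; the first path that reaches the goal
    # therefore uses the fewest edges.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()   # FIFO queue => breadth-first order
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("A", "E"))  # ['A', 'B', 'D', 'E']
```

The memory cost mentioned above shows up in `frontier`, which can hold an entire depth level at once; DFS keeps only the current branch.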
Informed Search
Summary: Informed search strategies use heuristic functions to evaluate the promise of each node in
the search space. These methods are generally more efficient than uninformed searches.
Examples:
1. A* Search: Uses a heuristic to estimate the cost from the current node to the goal and combines
it with the cost already incurred to reach that node. It is complete and optimal when using an
admissible heuristic.
2. Greedy Best-First Search: Selects the node that appears to be closest to the goal based on a
heuristic. It is not guaranteed to be optimal.
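A* can be sketched on a small grid with the Manhattan-distance heuristic, which is admissible here because every move costs 1. The grid, wall position, and function names are made up for illustration.

```python
import heapq

WALLS = {(1, 1)}   # one wall cell on a 3x3 grid
W, H = 3, 3

def astar(start, goal):
    def h(s):
        # Manhattan distance: never overestimates on a unit-cost grid.
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])

    # Priority queue ordered by f = g (cost so far) + h (estimated cost to go).
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        r, c = state
        for nxt in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
            nr, nc = nxt
            if 0 <= nr < H and 0 <= nc < W and nxt not in WALLS:
                if g + 1 < best.get(nxt, float("inf")):
                    best[nxt] = g + 1
                    heapq.heappush(
                        frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

path = astar((0, 0), (2, 2))
print(len(path) - 1)  # 4 moves: the shortest route around the wall at (1, 1)
```

Greedy best-first search would order the queue by `h` alone, which is faster in practice but loses the optimality guarantee.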
Game Playing
Summary: AI in game playing involves creating agents that can play games against humans or other
agents. These agents use search algorithms to make decisions based on game states.
Example: Minimax Algorithm: Used in two-player games like chess and tic-tac-toe. The algorithm
simulates all possible moves, assumes the opponent will always make the optimal move, and chooses
the move that maximizes the player's minimum payoff.
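Minimax is easiest to see on a game whose full tree fits in a few lines, so the sketch below uses a simplified Nim instead of chess: players alternately take 1 or 2 sticks, and whoever takes the last stick wins. The scoring convention (+1 for a maximizer win, -1 for a loss) is a common choice, not the only one.

```python
def minimax(sticks, maximizing):
    """Return +1 if the maximizer wins with optimal play, else -1."""
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return -1 if maximizing else +1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2) if take <= sticks]
    # Maximizer picks the best score; the opponent is assumed to pick the worst.
    return max(scores) if maximizing else min(scores)

print(minimax(3, True))  # -1: with 3 sticks the first player loses
print(minimax(4, True))  # +1: with 4 sticks the first player wins
```

The same recursion drives chess or tic-tac-toe engines; only the move generator and the terminal evaluation change (plus pruning, since those trees are far too large to enumerate fully).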
Knowledge Representation
Summary: Knowledge representation is the way AI systems encode information about the world to make
it understandable and usable for reasoning and decision-making. It involves structures like semantic
networks, frames, and ontologies.
Example: Semantic Networks: Represent relationships between concepts. For instance, in a semantic
network for animals, nodes represent animals, and edges represent relationships like "is a" or "has a."
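The "is a" / "has a" animal network above can be modeled with a set of (subject, relation, object) triples; the data and helper below are invented for illustration.

```python
# A toy semantic network as subject-relation-object triples.
triples = {
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "has_a", "wings"),
}

def is_a(node, category):
    # Follow "is_a" edges transitively, so properties of a category
    # can be inherited by its members.
    if node == category:
        return True
    return any(is_a(obj, category)
               for subj, rel, obj in triples
               if subj == node and rel == "is_a")

print(is_a("canary", "animal"))  # True: canary -> bird -> animal
```

This transitive lookup is the core of inheritance-style reasoning over semantic networks: a canary "has a" wings because a canary "is a" bird.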
Summary: Logic-based knowledge representation uses formal logic to encode information and allows for
rigorous reasoning. It includes propositional logic and first-order logic (FOL).
Examples:
1. Propositional Logic: Represents statements as true or false. For example, "It is raining" can be
represented as a proposition P.
2. First-Order Logic (FOL): Extends propositional logic by including objects, predicates, and
quantifiers. For instance, "All humans are mortal" can be represented as ∀x (Human(x) →
Mortal(x)).
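Both examples can be checked mechanically. The sketch below evaluates the proposition P against a truth assignment, and grounds the universal quantifier over a small finite domain, which is a common way to test FOL sentences when the domain is finite. The domain and sets are invented for illustration.

```python
# Propositional logic: P = "It is raining", evaluated under an assignment.
model = {"P": True}
def raining(m):
    return m["P"]
print(raining(model))  # True

# FOL over a finite domain: forall x (Human(x) -> Mortal(x)).
domain = ["socrates", "plato", "rock"]
human = {"socrates", "plato"}
mortal = {"socrates", "plato"}

# Implication A -> B is equivalent to (not A) or B.
all_humans_mortal = all((x not in human) or (x in mortal) for x in domain)
print(all_humans_mortal)  # True
```

Note that the quantified sentence is vacuously satisfied by "rock", which is not human; the implication only constrains elements that satisfy the antecedent.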
Knowledge-Based Systems
Summary: Knowledge-based systems (KBS) are AI systems that use knowledge representation to solve
complex problems. They include expert systems and rule-based systems.
Example: Expert Systems: Mimic the decision-making abilities of a human expert. For example, MYCIN
was an early expert system for diagnosing bacterial infections and recommending antibiotics.
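The rule-based core of such systems can be sketched as forward chaining: repeatedly fire any rule whose premises are all known facts. The rules and facts below are invented for illustration and are not taken from MYCIN.

```python
# Each rule: (set of premise facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

def forward_chain(facts):
    """Fire rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "cough"})))
# ['cough', 'fever', 'flu_suspected', 'recommend_rest']
```

Real expert systems add certainty factors, explanations of their reasoning chain, and far larger rule bases, but the match-fire loop is the same idea.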
Uncertainty
Summary: AI systems often need to operate under uncertainty due to incomplete, noisy, or ambiguous
information. Techniques to handle uncertainty include probability theory, Bayesian networks, and fuzzy
logic.
Example: Bayesian Networks: Graphical models that represent probabilistic relationships among
variables. They can be used for tasks like medical diagnosis where symptoms are probabilistically related
to diseases.
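The one-disease, one-symptom case reduces to Bayes' rule, which is the computation a Bayesian network automates across many variables. All probabilities below are made up for illustration.

```python
p_disease = 0.01           # prior P(D)
p_sym_given_d = 0.9        # P(S | D): symptom rate among the sick
p_sym_given_not_d = 0.05   # P(S | not D): false-positive rate

# Total probability of observing the symptom:
# P(S) = P(S|D) P(D) + P(S|~D) P(~D)
p_sym = p_sym_given_d * p_disease + p_sym_given_not_d * (1 - p_disease)

# Bayes' rule: P(D | S) = P(S | D) P(D) / P(S)
p_d_given_sym = p_sym_given_d * p_disease / p_sym
print(round(p_d_given_sym, 3))  # 0.154
```

Even with a 90% sensitive test, the posterior is only about 15% because the disease is rare; this prior-sensitivity is exactly why probabilistic reasoning matters in diagnosis.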
Summary: Simple decision-making involves selecting the best action from a set of alternatives based on
certain criteria. This often uses decision trees or rule-based systems.
Example: Decision Trees: Tree-like models where each node represents a decision point. For instance, a
decision tree for loan approval might split nodes based on income, credit score, and employment status.
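The loan-approval tree above can be hand-coded as nested conditions. The thresholds here are invented for illustration; in practice such trees are learned from data rather than written by hand.

```python
def approve_loan(income, credit_score, employed):
    """Each 'if' is one internal node of the decision tree;
    each return is a leaf."""
    if income >= 50_000:
        # High income: a moderate credit score suffices.
        return credit_score >= 600
    # Lower income: require employment and a stronger score.
    return employed and credit_score >= 700

print(approve_loan(60_000, 650, False))  # True
print(approve_loan(30_000, 650, True))   # False
```

Reading a prediction is just a root-to-leaf walk, which is why decision trees are valued for their interpretability.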
Summary: Complex decision-making involves multiple steps, trade-offs, and potentially conflicting
objectives. It often uses more advanced techniques like Markov Decision Processes (MDPs) and
reinforcement learning.
Example: Markov Decision Processes (MDPs): Used to model decision-making where outcomes are
partly random and partly under the control of the decision-maker. They are used in robotics, economics,
and automated control systems.
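Value iteration on a tiny MDP shows the core computation. The sketch below uses an illustrative four-state line world: actions move left or right deterministically, reaching state 3 pays reward 1, and state 3 is terminal. Real MDPs add stochastic transitions, which turn the `max` into an expectation over outcomes.

```python
GAMMA = 0.9                  # discount factor
states = [0, 1, 2, 3]

def step(s, a):
    """Transition model: a is -1 (left) or +1 (right), clamped to [0, 3]."""
    s2 = min(max(s + a, 0), 3)
    return s2, (1.0 if s2 == 3 else 0.0)

# Repeated Bellman backups: V(s) <- max_a [ r + gamma * V(s') ]
V = {s: 0.0 for s in states}
for _ in range(50):
    newV = {}
    for s in states:
        if s == 3:
            newV[s] = 0.0    # terminal: reward was collected on entry
            continue
        newV[s] = max(r + GAMMA * V[s2]
                      for a in (-1, +1)
                      for s2, r in [step(s, a)])
    V = newV

print(round(V[0], 2))  # 0.81: reward of 1, discounted over the 3-step path
```

The optimal policy falls out of the values: in every non-terminal state, the action achieving the `max` (here, always "right") is the best move.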
Learning
Summary: Machine learning enables AI systems to improve their performance based on data. Types
include supervised learning, unsupervised learning, and reinforcement learning.
Examples:
1. Supervised Learning: Models are trained on labeled data. For instance, a spam filter learns to
classify emails as spam or not spam based on a labeled training set.
2. Reinforcement Learning: Agents learn to make decisions by receiving rewards or penalties for
actions. For example, a robot learns to navigate a maze by receiving positive rewards for
reaching the end.
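The spam-filter example can be sketched as a bare-bones supervised learner: score each word by how often it appears in labeled spam versus ham training emails. The tiny dataset is invented, and real filters use probabilistic models (e.g. naive Bayes) rather than this raw count difference.

```python
from collections import Counter

# Labeled training data: (email text, label).
training = [
    ("win money now", "spam"),
    ("free money offer", "spam"),
    ("meeting at noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# "Training" = counting word occurrences per class.
spam_words, ham_words = Counter(), Counter()
for text, label in training:
    (spam_words if label == "spam" else ham_words).update(text.split())

def classify(text):
    # Positive score: words seen more often in spam; negative: more in ham.
    score = sum(spam_words[w] - ham_words[w] for w in text.split())
    return "spam" if score > 0 else "ham"

print(classify("free money"))    # spam
print(classify("noon meeting"))  # ham
```

The key supervised-learning ingredients are all present in miniature: labeled examples, a training step that extracts statistics, and a prediction function applied to unseen inputs.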
AI Applications
Summary: AI applications span various fields including healthcare, finance, education, and
transportation.
Examples:
1. Finance: AI is used in fraud detection, algorithmic trading, and customer service chatbots.
Robotics
Summary: Robotics involves creating robots that can perform tasks autonomously or
semi-autonomously. It integrates AI for perception, decision-making, and control.
Examples:
1. Autonomous Vehicles: Use AI to navigate and make driving decisions. For example, self-driving
cars from companies like Tesla and Waymo.
2. Industrial Robots: Perform repetitive tasks with high precision, such as assembling parts in
manufacturing.