
Artificial Intelligence

Introduction
What is AI?

• Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
• Examples of AI applications include expert systems, natural language processing (NLP), speech recognition and machine vision.
• AI requires specialized hardware and software for writing and training machine learning algorithms.
• No single programming language is used exclusively in AI, but Python, R, Java, C++ and Julia are all popular languages among AI developers.
How does AI work?

In general, AI systems work by ingesting large amounts of labeled training data, analyzing that data for correlations and patterns, and using these patterns to make predictions about future states.

For example, an AI chatbot that is fed examples of text can learn to generate lifelike exchanges with people, and an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.

Programming AI systems focuses on cognitive skills such as the following:

Learning. This aspect of AI programming involves acquiring data and creating rules, known as algorithms, to transform it into actionable information. These algorithms provide computing devices with step-by-step instructions for completing specific tasks.

Reasoning. This aspect involves choosing the right algorithm to reach a desired outcome.

Self-correction. This aspect involves algorithms continuously learning and tuning themselves to provide the most accurate results possible.

Creativity. This aspect uses neural networks, rule-based systems, statistical methods and other AI techniques to generate new images, text, music, ideas and so on.
Foundations of AI

Algorithms are the backbone of AI systems. Without algorithms, machines would not be able to learn from data, make predictions, or take actions based on those predictions.

Algorithms enable machines to process large amounts of data quickly and accurately.

AI refers to the simulation of human intelligence in machines that are programmed to think and act like humans.

It encompasses a wide range of capabilities, including problem-solving, learning, perception, and language understanding.
History of AI

Maturation of Artificial Intelligence (1943-1952)

• Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
• Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning, published "Computing Machinery and Intelligence", in which he proposed a test that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
• Year 1951: Marvin Minsky and Dean Edmonds created the initial artificial
neural network (ANN) named SNARC. They utilized 3,000 vacuum tubes to
mimic a network of 40 neurons.
History of AI

The birth of Artificial Intelligence (1952-1956)

• Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-
Playing Program, which marked the world's first self-learning program for
playing games.
• Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", which was named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for some of them.
• Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
History of AI

The golden years-Early enthusiasm (1956-1974)

• Year 1959: Arthur Samuel is credited with introducing the phrase "machine
learning" in a pivotal paper in which he proposed that computers could be
programmed to surpass their creators in performance.
• Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow
created STUDENT, one of the early programs for natural language
processing (NLP), with the specific purpose of solving algebra word
problems.
• Year 1972: The first intelligent humanoid robot was built in Japan, which
was named WABOT-1.
• Year 1973: James Lighthill published the report titled "Artificial Intelligence:
A General Survey," resulting in a substantial reduction in the British
government's backing for AI research.
History of AI

The first AI winter (1974-1980)

• The period from 1974 to 1980 was the first AI winter. An AI winter refers to a period in which computer scientists faced a severe shortage of government funding for AI research.
• During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)
Year 1980: After the AI winter, AI came back with "expert systems". Expert systems were programmed to emulate the decision-making ability of a human expert.
Year 1985: Judea Pearl introduced Bayesian network causal analysis, presenting statistical methods for encoding uncertainty in computer systems.
History of AI

The emergence of intelligent agents (1993-2011)


• Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.

• Year 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.

• Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released the paper "Large-scale Deep Unsupervised Learning using Graphics Processors", introducing the idea of using GPUs to train large neural networks.

• Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier, and Jonathan Masci created the first CNN to attain "superhuman" performance by winning the German Traffic Sign Recognition competition.
History of AI

Deep learning, big data and artificial general intelligence (2011-present)


• Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go player Lee Sedol in Seoul, South Korea, prompting comparisons with the Kasparov chess match against Deep Blue nearly two decades earlier. In the same year, Uber started a pilot program for self-driving cars in Pittsburgh, catering to a limited group of users.
• Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well. Google also demonstrated "Duplex", a virtual assistant that booked a hairdresser appointment over the phone without the person on the other end noticing that she was talking to a machine.
• Year 2021: OpenAI unveiled the DALL-E multimodal AI system, capable of producing images from text prompts.
• Year 2022: In November, OpenAI launched ChatGPT, offering a chat-oriented interface to its GPT-3.5 LLM.
Types of Artificial Intelligence:

Weak AI or Narrow AI:


• Narrow AI is a type of AI which is able to perform a dedicated task with intelligence. It is the most common and currently available type of AI in the world of Artificial Intelligence.
• Narrow AI cannot perform beyond its field or limitations, as it is only trained for one specific task. Hence it is also termed weak AI. Narrow AI can fail in unpredictable ways if pushed beyond its limits.
• Apple Siri is a good example of Narrow AI, but it operates with a limited, pre-defined range of functions.

General AI:
• General AI is a type of intelligence which could perform any intellectual task with efficiency, like a human.
• The idea behind general AI is to make a system which could be smarter and think like a human on its own.
• Currently, there is no system in existence which comes under general AI and can perform any task as perfectly as a human.
• Researchers worldwide are now focused on developing machines with General AI.
Types of Artificial Intelligence:

Super AI:
• Super AI is a level of intelligence of systems at which machines could surpass human intelligence and can perform any task better than a human, with cognitive properties. It is an outcome of general AI.
• Some key characteristics of strong AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
• Super AI is still a hypothetical concept of Artificial Intelligence. Developing such systems in the real world is still a world-changing task.
Artificial Intelligence type-2: Based on functionality
1. Reactive Machines
• Purely reactive machines are the most basic types of Artificial Intelligence.
• Such AI systems do not store memories or past experiences for future actions.
• These machines only focus on current scenarios and react to them with the best possible action.
• IBM's Deep Blue system is an example of reactive machines.
• Google's AlphaGo is also an example of reactive machines.

2. Limited Memory
• Limited memory machines can store past experiences or some data for a
short period of time.
• These machines can use stored data for a limited time period only.
• Self-driving cars are one of the best examples of Limited Memory systems.
These cars can store recent speed of nearby cars, the distance of other
cars, speed limit, and other information to navigate the road.
3. Theory of Mind
• Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
• This type of AI machine has still not been developed, but researchers are making lots of effort and progress toward developing such machines.

4. Self-Awareness
• Self-aware AI is the future of Artificial Intelligence. These machines will be super intelligent and will have their own consciousness, sentiments, and self-awareness.
• These machines will be smarter than the human mind.
• Self-aware AI does not yet exist in reality; it is still a hypothetical concept.
Agents in Artificial Intelligence
• An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators.
• An agent runs in a cycle of perceiving, thinking, and acting.

Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.

Robotic Agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.

Software Agent: A software agent can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
Agents in Artificial Intelligence
• Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

• Actuators: Actuators are the components of a machine that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

• Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
Intelligent Agents

An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.
Following are the main four rules for an AI agent (a small illustrative sketch follows):

Rule 1: An AI agent must have the ability to perceive the environment.

Rule 2: The observation must be used to make decisions.

Rule 3: A decision should result in an action.

Rule 4: The action taken by an AI agent must be a rational action.
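
Below is a minimal, illustrative Python sketch of a thermostat-style reflex agent that follows these four rules. The class name, thresholds, and actuator commands are assumptions made for illustration, not part of any particular library.

```python
# Minimal, illustrative sketch of a thermostat-style reflex agent.
# The class name, thresholds, and actuator commands are assumptions.

class ThermostatAgent:
    def __init__(self, target_temp: float = 21.0):
        self.target_temp = target_temp

    def perceive(self, temperature: float) -> float:
        # Rule 1: the agent perceives its environment (a temperature reading).
        return temperature

    def decide(self, percept: float) -> str:
        # Rules 2 and 3: the observation is used to make a decision,
        # and the decision results in an action.
        if percept < self.target_temp - 0.5:
            return "heater_on"
        if percept > self.target_temp + 0.5:
            return "heater_off"
        return "do_nothing"

    def act(self, action: str) -> None:
        # Rule 4: the chosen action is the rational one for the goal of
        # keeping the room near the target temperature.
        print(f"Actuator command: {action}")

agent = ThermostatAgent()
for reading in [18.0, 21.0, 23.5]:       # simulated sensor readings
    agent.act(agent.decide(agent.perceive(reading)))
```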


Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies to solve a specific problem and provide the best result.

Problem-solving agents are goal-based agents and use an atomic representation of states.
Search Algorithm Terminologies:

Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:

Search Space: The search space represents the set of possible solutions which a system may have.
Start State: The state from which the agent begins the search.
Goal Test: A function which observes the current state and returns whether the goal state has been achieved or not.
Problem-solving agents:

Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.

Actions: A description of all the actions available to the agent.

Transition model: A description of what each action does, represented as a transition model.

Path Cost: A function which assigns a numeric cost to each path.

Solution: An action sequence which leads from the start node to the goal node.

Optimal Solution: A solution is optimal if it has the lowest cost among all solutions. (A sketch showing how these components fit together in code follows.)
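
To make these terminologies concrete, here is a small, illustrative Python sketch of a search problem. The graph, state names, and step costs are made-up assumptions; the class simply maps each term above (start state, actions, transition model, goal test, path cost) onto a method or field.

```python
# Illustrative sketch: how the search-problem terminologies map onto code.
# The graph, state names, and step costs are made-up examples.

GRAPH = {                        # implicitly defines the search space
    "S": {"A": 2, "B": 3},       # state -> {neighbour: step cost}
    "A": {"C": 2, "D": 4},
    "B": {"G": 1},
    "C": {}, "D": {}, "G": {},
}

class SearchProblem:
    def __init__(self, start, goal, graph):
        self.start = start               # Start state
        self.goal = goal                 # used by the goal test
        self.graph = graph

    def actions(self, state):
        # Actions: all moves available to the agent in this state.
        return list(self.graph[state])

    def result(self, state, action):
        # Transition model: what each action does (here, move to that neighbour).
        return action

    def goal_test(self, state):
        # Goal test: does the current state satisfy the goal?
        return state == self.goal

    def step_cost(self, state, action):
        # Summed along a path by a search algorithm to obtain the path cost.
        return self.graph[state][action]

problem = SearchProblem("S", "G", GRAPH)
print(problem.actions("S"), problem.goal_test("G"))   # ['A', 'B'] True
```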
Types of search algorithms
Uninformed Search
The uninformed search does not use any domain knowledge, such as the closeness or location of the goal.

Uninformed search traverses the search tree without any information about the search space, such as the initial state, operators, and a test for the goal, so it is also called blind search.

It examines each node of the tree until it reaches the goal node.

Informed Search

Informed search algorithms use domain knowledge.

Informed search strategies can find a solution more efficiently than an uninformed search strategy.

Informed search is also called heuristic search.


Breadth-first Search:

• Breadth-first search is the most common search strategy for traversing a tree or graph.

• This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.

• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.

• The breadth-first search algorithm is an example of a general-graph search algorithm.

• Breadth-first search is implemented using a FIFO queue data structure, as in the sketch below.
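
Below is a minimal Python sketch of breadth-first search using a FIFO queue (collections.deque). The adjacency list is a hypothetical tree loosely based on the node names used in the later example, not the exact figure from the slides.

```python
from collections import deque

# Minimal breadth-first search sketch using a FIFO queue.
# The adjacency list below is a hypothetical tree, not the slide figure.

GRAPH = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["G", "H"],
    "C": ["E", "F"],
    "G": ["I"],
    "I": ["K"],
    "D": [], "E": [], "F": [], "H": [], "K": [],
}

def bfs(graph, start, goal):
    frontier = deque([[start]])      # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()    # expand the shallowest node first
        node = path[-1]
        if node == goal:
            return path
        for successor in graph[node]:
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None                      # no solution found

print(bfs(GRAPH, "S", "K"))          # ['S', 'B', 'G', 'I', 'K'] for this tree
```

Because the queue is first-in, first-out, all nodes at one level are expanded before any node at the next level, which is exactly the layer-by-layer behaviour described above.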


Breadth-first Search:
Advantages:
• BFS will provide a solution if any solution exists.

• If there is more than one solution for a given problem, BFS will provide the minimal solution, i.e. the one requiring the least number of steps.

• It also helps in finding the shortest path to the goal state, since it expands all nodes at the same level before moving to nodes at deeper levels.

• It is also easy to comprehend, and the level-by-level expansion makes it straightforward to rank paths by length.

Disadvantages:
• It requires lots of memory, since each level of the tree must be saved in memory in order to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
• It can be a very inefficient approach for searching through deeply layered spaces, as it needs to thoroughly explore all nodes at each level before moving on to the next.
Example:
In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K.

The BFS algorithm traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed by BFS until the shallowest goal node, where d is the depth of the shallowest solution and b is the branching factor (the number of successors at every state):
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.

Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
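
As a quick sanity check of the O(b^d) growth, the following snippet counts the nodes generated for an assumed branching factor b = 10 and shallowest solution depth d = 3 (both values are purely illustrative).

```python
# Quick numeric check of the O(b^d) growth, with illustrative values
# b = 10 (branching factor) and d = 3 (shallowest solution depth).
b, d = 10, 3
nodes = sum(b**i for i in range(d + 1))   # 1 + b + b^2 + b^3
print(nodes)                              # 1111 nodes in the worst case
```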
Depth-first Search
• Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.
• It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is similar to that of the BFS algorithm; a minimal sketch follows below.
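
Here is a minimal recursive Python sketch of depth-first search (the recursion uses the call stack in place of an explicit stack). The adjacency list is the same hypothetical tree as in the BFS sketch, and the goal node G is chosen only for illustration.

```python
# Minimal recursive depth-first search sketch.
# The adjacency list is a hypothetical tree; goal "G" is illustrative.

GRAPH = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["G", "H"],
    "C": ["E", "F"],
    "D": [], "E": [], "F": [], "G": [], "H": [],
}

def dfs(graph, node, goal, path=None, visited=None):
    path = (path or []) + [node]
    visited = visited or set()
    visited.add(node)
    if node == goal:                       # goal test on the current node
        return path
    for successor in graph[node]:          # follow one branch to its greatest
        if successor not in visited:       # depth before backtracking
            result = dfs(graph, successor, goal, path, visited)
            if result is not None:
                return result
    return None                            # backtrack: dead end on this branch

print(dfs(GRAPH, "S", "G"))                # ['S', 'B', 'G'] for this tree
```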
Advantage:
• DFS requires much less memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
• It only needs to keep the route currently being explored in memory, storing one path at a time, which saves memory.
Disadvantage:
• There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
• The DFS algorithm searches deep down a path, and sometimes it may go into an infinite loop.
• The depth-first search (DFS) algorithm does not always find the shortest path to a solution.
In the below search tree, we have shown the flow of depth-first search,
and it will follow the order as:

Root node--->Left node ----> right node.


It will start searching from the root node S and traverse A, then B, then D and E; after traversing E, it will backtrack up the tree because E has no other successors and the goal node has not yet been found.

After backtracking, it will traverse node C and then G, where it will terminate because it has found the goal node.
Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(b) = 1 + b + b^2 + ... + b^m = O(b^m)
where m is the maximum depth of any node (which can be much larger than d, the depth of the shallowest solution) and b is the branching factor.
Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Depth-Limited Search Algorithm:

A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit.

Depth-limited search can solve the drawback of infinite paths in depth-first search.

In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.

Depth-limited search can terminate with two conditions of failure (both are distinguished in the sketch below):

Standard failure value: It indicates that the problem does not have any solution.
Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.
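
The following Python sketch illustrates depth-limited search and how the two failure values can be distinguished. The graph, the sentinel strings "cutoff" and "failure", and the chosen limits are all illustrative assumptions.

```python
# Minimal depth-limited search sketch, distinguishing the two failure values.
# The graph, sentinel strings, and limits are illustrative assumptions.

CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF                 # no solution within the depth limit
    cutoff_occurred = False
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return [node] + result    # prepend current node to the solution path
    # Cutoff failure if some branch was cut off; standard failure otherwise.
    return CUTOFF if cutoff_occurred else FAILURE

GRAPH = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": []}
print(depth_limited_search(GRAPH, "S", "D", limit=2))   # ['S', 'B', 'D']
print(depth_limited_search(GRAPH, "S", "D", limit=1))   # 'cutoff'
```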
Uniform-cost Search Algorithm or Dijkstra's Algorithm:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge.

The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.

Uniform-cost search expands nodes according to their path costs from the root node.
It can be used to solve any graph/tree where the optimal cost is in demand.

A uniform-cost search algorithm is implemented with a priority queue.

It gives maximum priority to the path with the lowest cumulative cost, as in the sketch below.
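
Below is a minimal Python sketch of uniform-cost search using a priority queue (heapq) ordered by cumulative path cost. The weighted graph and its edge costs are illustrative assumptions.

```python
import heapq

# Minimal uniform-cost search sketch using a priority queue ordered by
# cumulative path cost. The weighted graph below is illustrative.

GRAPH = {                                # state -> {neighbour: edge cost}
    "S": {"A": 1, "B": 4},
    "A": {"B": 2, "C": 5},
    "B": {"C": 1},
    "C": {},
}

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]     # priority queue of (cost, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path so far
        if node == goal:
            return cost, path
        for successor, step_cost in graph[node].items():
            new_cost = cost + step_cost
            if new_cost < best_cost.get(successor, float("inf")):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None                          # no path to the goal exists

print(uniform_cost_search(GRAPH, "S", "C"))   # (4, ['S', 'A', 'B', 'C'])
```

With all edge costs set to 1, this behaves like breadth-first search, which is why uniform-cost search is often described as a cost-weighted generalization of BFS.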


Advantages:

• Uniform-cost search is optimal because, at every state, the path with the least cost is chosen.
• It is efficient when the edge weights are small, as it explores paths in an order that ensures the shortest path is found early.
• It is a fundamental search method that is not overly complex, making it accessible for many users.
• It is complete: the algorithm will find a solution whenever a viable one exists.
Disadvantages:

• It does not care about the number of steps involved in the search and is only concerned with path cost, due to which the algorithm may get stuck in an infinite loop.

• UCS must know all the edge weights before it can start the search.

• The search keeps the list of nodes it has already discovered in a priority queue. This becomes significant for a large graph: the algorithm allocates memory to store the prioritized path sequences, which can be memory intensive as the graph gets larger.

• Uniform-cost search can run into problems if the graph contains cycles whose edge costs are smaller than the cost of the shortest path.

• Because the priority queue must keep storing the explored paths, memory use grows with the graph and can eventually become excessive.
