CS8691 Unit 1
INTRODUCTION
"It is a branch of computer science by which we can create intelligent machines that can
behave like humans, think like humans, and make decisions."
Artificial Intelligence exists when a machine has human-like skills such as learning,
reasoning, and problem solving. With Artificial Intelligence you do not need to preprogram a
machine for every task; instead, you can create a machine with programmed algorithms that can
work with its own intelligence, and that is the power of AI. AI is not an entirely new idea: as per
Greek myth, there were mechanical men in ancient times that could work and behave like
humans.
1.2 DEFINITIONS OF AI
AI definitions can be grouped into four categories:
Systems that think like humans
Systems that think rationally
Systems that act like humans
Systems that act rationally
With the help of AI, you can build robots that can work in environments where human
survival would be at risk.
AI opens a path to new technologies, new devices, and new opportunities.
Typical tasks that require intelligence include:
1. Proving a theorem
2. Playing chess
3. Planning a surgical operation
4. Driving a car in traffic
5. Creating a system that can exhibit intelligent behavior, learn new things by itself,
demonstrate, explain, and advise its user.
To create AI, we should first understand how intelligence is composed. Intelligence is
an intangible property of the brain: a combination of reasoning, learning, problem solving,
perception, language understanding, and so on. To achieve these capabilities in a machine or
software, Artificial Intelligence draws on the following disciplines:
Year 1943: The first work which is now recognized as AI was done by Warren
McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning,
published "Computing Machinery and Intelligence" in 1950, in which he proposed a
test that checks a machine's ability to exhibit intelligent behavior equivalent to human
intelligence, now called the Turing test.
Year 1955: Allen Newell and Herbert A. Simon created the "first artificial
intelligence program", named "Logic Theorist". This program proved 38 of 52
mathematics theorems and found new, more elegant proofs for some of them.
Year 1966: The researchers emphasized developing algorithms which can solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was
named as ELIZA.
Year 1972: The first intelligent humanoid robot was built in Japan which was named as
WABOT-1.
The period from 1974 to 1980 was the first AI winter. An AI winter refers to a period in
which computer scientists faced a severe shortage of government funding for AI
research. During AI winters, public interest in artificial intelligence decreased.
Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems
were programs that emulate the decision-making ability of a human expert.
In 1980, the first national conference of the American Association of Artificial
Intelligence was held at Stanford University.
The period from 1987 to 1993 was the second AI winter. Investors and governments
again stopped funding AI research because of the high cost and poor results. Even
expert systems such as XCON had become very costly to maintain.
Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov,
becoming the first computer to defeat a reigning world chess champion.
Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic
vacuum cleaner.
1.4.8 Deep learning, big data and artificial general intelligence (2011-present)
Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show, where it had to
answer complex questions as well as riddles. Watson proved that it could understand
natural language and solve tricky questions quickly.
Year 2012: Google launched the Android app feature "Google Now", which could
provide predictive information to the user.
Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the
famous Turing test.
Year 2018: IBM's "Project Debater" debated complex topics with two master
debaters and performed extremely well.
Google demonstrated an AI program, "Duplex", a virtual assistant that booked a
hairdresser appointment over the phone; the person on the other end did not notice
that she was talking to a machine.
AI has now developed to a remarkable level. Concepts such as deep learning, big data,
and data science are booming. Companies like Google, Facebook, IBM, and Amazon are
working with AI and creating impressive devices. The future of Artificial Intelligence is
inspiring and promises ever higher intelligence.
1.4.9 Future of artificial intelligence
Autonomous Transportation:
In the future, automated transportation technology will evolve, and our roads will see
scenes reminiscent of Back to the Future: public buses, cabs, and even private vehicles will go
driverless on autopilot. With more precision, smart vehicles will take over the roads and pave
the way for safer, faster, and more economical transport systems.
Robots into Risky Jobs:
Today, some of the most dangerous jobs are done by humans. From cleaning sewage to
fighting fires and defusing bombs, it is we who get down, get our hands dirty, and risk our
lives, and the number of human lives lost in these processes is very high. In the near future,
we can expect machines or robots to take over such work. As artificial intelligence evolves
and smarter robots roll out, we may see them replacing humans at some of the riskiest jobs in
the world. That is the only place we should expect automation to take away jobs.
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through effectors. An agent runs in a cycle of perceiving,
thinking, and acting.
Hence the world around us is full of agents such as thermostat, cellphone, camera, and
even we are also agents.
Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the component of machines that converts energy into motion.
The actuators are only responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screen.
Robotic Agent: A robotic agent can have cameras and infrared range finders for sensors, and
various motors for actuators.
Software Agent: Software agent can have keystrokes, file contents as sensory input and act on
those inputs and display output on the screen.
A rational agent is an agent that has clear preferences, models uncertainty, and acts to
maximize its performance measure over all possible actions. A rational agent is said to do the
right thing. AI is about creating rational agents, which are used with game theory and decision
theory in various real-world scenarios. For an AI agent, rational action is most important
because in reinforcement learning the agent gets a positive reward for each best possible
action and a negative reward for each wrong action.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be
judged on the basis of following points:
Performance measure which defines the success criterion.
Agent prior knowledge of its environment.
Best possible actions that an agent can perform.
The sequence of percepts.
The task of AI is to design an agent program which implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program. It can be
viewed as:
Agent = Architecture + Agent program
Following are the three main terms involved in the structure of an AI agent:
Architecture: The machinery (hardware with sensors and actuators) on which the agent
executes.
Agent function: A map from a percept sequence to an action:
F : P* → A
Agent program: An agent program is an implementation of the agent function. An agent
program executes on the physical architecture to produce the function F.
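The relationship between the agent function F : P* → A and the agent program can be shown with a minimal Python sketch. The lookup table and percept names below are invented for illustration and are not part of the source; the point is only that the program maps the whole percept sequence seen so far to an action.

```python
# Minimal sketch (illustrative assumptions): an agent program implementing the
# agent function F : P* -> A by mapping the percept sequence to an action.

def table_driven_agent():
    percepts = []                     # the percept sequence P* seen so far
    # A tiny hypothetical lookup table from percept sequences to actions.
    table = {
        ("clean",): "move_right",
        ("dirty",): "suck",
        ("clean", "dirty"): "suck",
    }

    def program(percept):
        percepts.append(percept)      # extend the percept history
        # F maps the whole percept sequence to an action; default to a no-op.
        return table.get(tuple(percepts), "no_op")

    return program

agent = table_driven_agent()
print(agent("clean"))   # move_right
print(agent("dirty"))   # suck
```

In practice a full lookup table is infeasible (the table grows with every possible percept sequence), which is why the agent programs described below compute the action instead of looking it up.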
PEAS descriptions of some example agents:

Agent | Performance measure | Environment | Actuators | Sensors
1. Medical Diagnose | Healthy patient, minimized cost | Patient, Hospital, Staff | Tests, Treatments | Keyboard (entry of symptoms)
2. Vacuum Cleaner | Cleanness, Efficiency, Battery life, Security | Room, Table, Wood floor, Carpet, Various obstacles, Wall | Wheels, Brushes, Vacuum extractor | Camera, Dirt detection sensor, Cliff sensor, Bump sensor, Infrared wall sensor
3. Part-picking Robot | Percentage of parts in correct bins | Conveyor belt with parts, Bins | Jointed arms, Hand | Camera, Joint angle sensors
As per Russell and Norvig, an environment can have various features from the point of
view of an agent:
1. Fully observable vs Partially observable:
If an agent's sensors give it access to the complete state of the environment at each
point in time, the environment is fully observable; otherwise it is partially observable.
2. Deterministic vs Stochastic:
If an agent's current state and selected action can completely determine the next state of
the environment, then such environment is called a deterministic environment.
A stochastic environment is random in nature and cannot be determined completely by an
agent.
In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
However, in Sequential environment, an agent requires memory of past actions to
determine the next best actions.
4. Single-agent vs Multi-agent:
If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
5. Static vs Dynamic:
If the environment can change itself while an agent is deliberating then such environment
is called a dynamic environment else it is called a static environment.
Static environments are easy to deal because an agent does not need to continue looking
at the world while deciding for an action.
However for dynamic environment, agents need to keep looking at the world at each
action.
Taxi driving is an example of a dynamic environment whereas Crossword puzzles are an
example of a static environment.
6. Discrete vs Continuous:
If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment else it is
called continuous environment.
A chess game comes under discrete environment as there is a finite number of moves that
can be performed.
A self-driving car is an example of a continuous environment.
7. Known vs Unknown
Known and unknown are not actually a feature of an environment, but it is an agent's
state of knowledge to perform an action.
In a known environment, the results for all actions are known to the agent. While in
unknown environment, agent needs to learn how it works in order to perform an action.
It is quite possible that a known environment to be partially observable and an Unknown
environment to be fully observable.
8. Accessible vs Inaccessible
If an agent can obtain complete and accurate information about the environment's
state, the environment is called accessible; otherwise it is called inaccessible.
An empty room whose state can be defined by its temperature is an example of an
accessible environment.
Information about an event on earth is an example of Inaccessible environment.
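The properties above can be illustrated by classifying two familiar task environments. The labels below follow the standard Russell and Norvig analysis; encoding them as a Python dictionary is purely an illustration, not part of the source.

```python
# Illustrative classification of two task environments along the
# properties described above (labels follow Russell & Norvig's analysis).
environments = {
    "crossword puzzle": {
        "observable": "fully", "deterministic": True, "episodic": False,
        "static": True, "discrete": True, "multi_agent": False,
    },
    "taxi driving": {
        "observable": "partially", "deterministic": False, "episodic": False,
        "static": False, "discrete": False, "multi_agent": True,
    },
}

for name, props in environments.items():
    print(name, "->", props)
```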
In 1950, Alan Turing introduced a test to check whether a machine can think like a
human or not; this test is known as the Turing test. In this test, Turing proposed that a
computer can be said to be intelligent if it can mimic human responses under specific
conditions. The Turing test was introduced in Turing's 1950 paper, "Computing Machinery
and Intelligence," which considered the question "Can machines think?"
The Turing test is based on a party game, the "imitation game," with some modifications.
This game involves three players: one player is a computer, another is a human responder,
and the third is a human interrogator, who is isolated from the other two players and whose
job is to find out which of the two is the machine.
For example:
Interrogator: Are you a computer?
Player A (Computer): No
In this game, if an interrogator would not be able to identify which is a machine and
which is human, then the computer passes the test successfully, and the machine is said to be
intelligent and can think like a human.
"In 1991, the New York businessman Hugh Loebner announced a prize competition,
offering a $100,000 prize for the first computer to pass the Turing test. However, to date, no
AI program has come close to passing an undiluted Turing test."
ELIZA: ELIZA was a natural language processing computer program created by Joseph
Weizenbaum. It was created to demonstrate communication between machines and humans,
and it was one of the first chatterbots to attempt the Turing test.
Parry: Parry was a chatterbot created by Kenneth Colby in 1972. Parry was designed to
simulate a person with paranoid schizophrenia, a chronic mental disorder. Parry was
described as "ELIZA with attitude" and was tested using a variation of the Turing test in the
early 1970s.
The Simple reflex agents are the simplest agents. These agents take decisions on the basis
of the current percepts and ignore the rest of the percept history.
These agents only succeed in the fully observable environment.
The simple reflex agent does not consider any part of the percept history during its
decision and action process.
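A simple reflex agent can be sketched for the classic two-square vacuum world. The locations and condition-action rules below are illustrative assumptions, not from the source; the essential point is that the agent looks only at the current percept and keeps no history.

```python
# Minimal sketch of a simple reflex agent for a two-square vacuum world
# (locations "A"/"B" and the rules are illustrative assumptions).
# The agent acts only on the current percept (location, status).

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "dirty":        # condition-action rule: dirty -> suck
        return "suck"
    elif location == "A":        # clean at A -> move right
        return "right"
    else:                        # clean at B -> move left
        return "left"

print(simple_reflex_vacuum_agent(("A", "dirty")))  # suck
print(simple_reflex_vacuum_agent(("A", "clean")))  # right
```

Because the rules mention only the current percept, this agent works correctly only when the current percept fully describes the relevant state, which is why simple reflex agents succeed only in fully observable environments.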
Model: It is knowledge about "how things happen in the world," so it is called a Model-
based agent.
Internal State: It is a representation of the current state based on percept history.
These agents have the model, "which is knowledge of the world" and based on the model
they perform actions.
Updating the agent state requires information about:
How the world evolves independently of the agent.
How the agent's own actions affect the world.
The knowledge of the current state environment is not always sufficient to decide for an
agent to what to do.
The agent needs to know its goal which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
They choose an action, so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved. Such consideration of different scenarios is called searching
and planning, which makes an agent proactive.
These agents are similar to the goal-based agent but provide an extra component of utility
measurement which makes them different by providing a measure of success at a given
state.
Utility-based agents act based not only on goals but also on the best way to achieve the goal.
The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.
The utility function maps each state to a real number to check how efficiently each action
achieves the goals.
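The idea of a utility function mapping each state to a real number can be sketched as follows. The states, actions, and utility values here are invented for illustration; the point is that the agent chooses the action whose resulting state has the highest utility.

```python
# Sketch of utility-based action selection (all names and numbers are
# illustrative): the utility function maps a state to a real number, and
# the agent picks the action maximizing the utility of the next state.

def utility(state):
    # Hypothetical utility: prefer states closer to the goal position 10.
    return -abs(10 - state)

def result(state, action):
    # Hypothetical transition model on a 1-D line of positions.
    return state + {"left": -1, "stay": 0, "right": 1}[action]

def best_action(state, actions):
    # Choose the action whose resulting state has the highest utility.
    return max(actions, key=lambda a: utility(result(state, a)))

actions = ["left", "stay", "right"]
print(best_action(7, actions))   # right (moves toward position 10)
print(best_action(12, actions))  # left
```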
A learning agent in AI is the type of agent which can learn from its past experiences, or it
has learning capabilities.
It starts to act with basic knowledge and then able to act and adapt automatically through
learning.
A learning agent has mainly four conceptual components:
Learning element: responsible for making improvements by learning from the environment.
Critic: gives feedback to the learning element on how well the agent is doing with respect to
a fixed performance standard.
Performance element: responsible for selecting external actions.
Problem generator: responsible for suggesting actions that lead to new and informative
experiences.
Hence, learning agents are able to learn, analyze performance, and look for new ways to
improve the performance.
Problems are the issues which come across any system. A solution is needed to solve that
particular problem.
The definition of the problem must be stated precisely. It should contain the possible
initial and final situations, which should result in an acceptable solution.
Analyzing the problem and its requirement must be done as few features can have
immense impact on the resulting solution.
Identification of Solutions:
This phase generates reasonable amount of solutions to the given problem in a particular
range.
Choosing a Solution:
From all the identified solutions, the best solution is chosen based on the results produced
by the respective solutions.
Implementation:
After the best solution is chosen, it is implemented.
Problem Formulation
Problem formulation involves deciding what actions and states to consider, when the
description about the goal is provided. It is composed of:
Initial State - start state
Possible actions that can be taken
Transition model – describes what each action does
Goal test – checks whether current state is goal state
Path cost – cost function used to determine the cost of each path.
The initial state, actions and the transition model constitutes state space of the problem -
the set of all states reachable by any sequence of actions. A path in the state space is a sequence
of states connected by a sequence of actions. The solution to the given problem is defined as a
sequence of actions from the initial state to a goal state. The quality of a solution is measured
by the path cost function, and an optimal solution is the one with the lowest path cost among
all the solutions.
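The five components of problem formulation can be sketched for a toy route-finding problem. The graph, state names, and class layout below are illustrative assumptions, not from the source.

```python
# Sketch of the five problem-formulation components for a toy route-finding
# problem on a small invented graph (state names and costs are illustrative).

class Problem:
    def __init__(self, initial, goal, graph):
        self.initial = initial          # initial state (start state)
        self.goal = goal
        self.graph = graph              # encodes actions + transition model

    def actions(self, state):           # possible actions: move to a neighbor
        return list(self.graph.get(state, {}))

    def result(self, state, action):    # transition model: what each action does
        return action

    def goal_test(self, state):         # goal test: is this the goal state?
        return state == self.goal

    def step_cost(self, state, action): # path cost = sum of step costs
        return self.graph[state][action]

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
p = Problem("A", "C", graph)
print(p.actions("A"))       # ['B', 'C']
print(p.goal_test("C"))     # True
```

The initial state, the actions, and the transition model together define the state space; here it is the set {A, B, C} with the edges of the graph as transitions.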
1.9.2 TIC-TAC-TOE Game
An element contains the value 0, if the corresponding square is blank; 1, if it is filled with
“O” and 2, if it is filled with “X”.
Hence starting state is {0,0,0,0,0,0,0,0,0}. The goal state or winning combination will be
board position having “O” or “X” separately in the combination of ({1,2,3}, {4,5,6},
{7,8,9},{1,4,7},{2,5,8}, {3,6,9}, {1,5,9}, { 3,5,7}) element values. Hence two goal states can be
{2,0,1,1,2,0,0,0,2} and {2,2,2,0,1,0,1,0,0}. These values correspond to the goal States.
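The goal test for this representation follows directly from the winning combinations listed above. The helper function below is an illustrative sketch: the board is a list of 9 values (0 = blank, 1 = "O", 2 = "X") and the winning lines use the 1-based square numbers from the text.

```python
# Sketch of the goal test for the tic-tac-toe representation above:
# board[i] is 0 (blank), 1 ("O"), or 2 ("X"); lines use 1-based squares.

WINNING_LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),
                 (1, 4, 7), (2, 5, 8), (3, 6, 9),
                 (1, 5, 9), (3, 5, 7)]

def winner(board):
    """Return 1 ("O") or 2 ("X") if some line is complete, else 0."""
    for a, b, c in WINNING_LINES:
        v = board[a - 1]                 # convert to 0-based index
        if v != 0 and v == board[b - 1] == board[c - 1]:
            return v
    return 0

print(winner([2, 0, 1, 1, 2, 0, 0, 0, 2]))  # 2 (X on the 1-5-9 diagonal)
print(winner([0] * 9))                       # 0 (start state, no winner)
```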
It is a normal chess game. In a chess game problem, the start state is the initial
configuration of the chessboard. The final or goal state is any board configuration that is a
winning position for a player (clearly, there may be multiple final positions, and each such
board configuration is a goal state).
All or some of these production rules will have to be used in a particular sequence to find
the solution of the problem. The rules applied and their sequence are presented in the
following table.
1. What is AI?
Artificial Intelligence is a branch of computer science by which we can create intelligent
machines that can behave like humans, think like humans, and make decisions.
2. Define an agent.
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
An agent‟s behavior is described by the agent function that maps any given percept
sequence to an action.
Task environments are essentially the "problems" to which rational agents are the
"solutions". A Task environment is specified using PEAS (Performance, Environment,
Actuators, and Sensors) description.
The simplest kind of agent is the simple reflex agent. These agents select actions on the
basis of the current percept, ignoring the rest of the percept history.
Knowing about the current state of the environment is not always enough to decide what
to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The
correct decision depends on where the taxi is trying to get to. In other words, as well as a current
state description, the agent needs some sort of goal information that describes situations that are
desirable-for example, being at the passenger's destination.
Goals alone are not really enough to generate high-quality behavior in most
environments. For example, there are many action sequences that will get the taxi to its
destination (thereby achieving the goal) but some are quicker, safer, more reliable, or cheaper
than others. A utility function maps a state (or a sequence of states) onto a real number, which
describes the associated degree of happiness.
A learning agent can be divided into four conceptual components. The most important
distinction is between the learning element, which is responsible for making improvements,
and the performance element, which is responsible for selecting external actions. The
performance element is what we have previously considered to be the entire agent: it takes in
percepts and decides on actions. The learning element uses feedback from the critic on how
the agent is doing and determines how the performance element should be modified to do
better in the future.
Goal formulation
Problem formulation
Search
Search Algorithm
Execution phase
The process of looking for a sequence of actions from the current state to the goal state
is called search. A search algorithm takes a problem as input and returns a solution in the
form of an action sequence. Once a solution is found, the execution phase consists of carrying
out the recommended actions.
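A search algorithm of this kind can be sketched with breadth-first search over a small invented graph; the function, graph, and state names are illustrative assumptions, not from the source. The input is a problem description (initial state, goal, successors) and the output is a sequence of actions from the initial state to the goal.

```python
# Sketch of a search algorithm (breadth-first search) over a toy graph:
# input is a problem description, output is an action sequence or None.

from collections import deque

def breadth_first_search(initial, goal, neighbors):
    frontier = deque([(initial, [])])      # (state, actions taken so far)
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:                  # goal test
            return path                    # solution: the action sequence
        for nxt in neighbors.get(state, []):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                            # no solution exists

neighbors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search("A", "D", neighbors))  # ['B', 'D']
```

Executing the returned list of moves in order is then the execution phase described above.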
An omniscient agent knows the actual outcome of its action and can act accordingly; but
omniscience is impossible in reality.
An agent should act as a rational agent. A rational agent is one that does the right thing,
i.e., takes the right actions that cause the agent to be most successful in its environment.