Artificial Intelligence Assignment 01


Question 1: Define in your own words the following terms: agent, agent function, agent program, rationality, autonomy, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.

Answer: Each term is defined below.

Agent

An agent is anything that perceives its environment and takes actions according to the information it gains from it. A human agent has sensory organs to sense the environment and body parts to act, while a robot agent has sensors (such as cameras) to perceive the environment and motors or other actuators to act.

Agent Function

An agent is specified by an agent function f that maps sequences of percepts Y to actions A:

f : Y -> A
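The mapping f can be sketched, in the simplest case, as a literal lookup table from percept sequences to actions. The two-square vacuum world used below (locations "A"/"B", statuses "Dirty"/"Clean") and the particular table entries are illustrative assumptions, not part of the definition:

```python
# Hedged sketch: an agent function given literally as a table keyed by
# the full percept sequence (a hypothetical two-square vacuum world).
TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def agent_function(percepts):
    """Look up the action for the percept sequence observed so far."""
    return TABLE.get(tuple(percepts), "NoOp")  # default action if unlisted

print(agent_function([("A", "Dirty")]))  # Suck
```

A table like this grows impractically large for real environments; the agent program (below) is the concrete, compact implementation of such a function.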

Agent Program

An agent program in artificial intelligence is a program that has certain goals and ways of achieving them. In this sense, the program is an agent in its computer environment. For example, an agent program may be developed to find the most "food" in its environment, and to have certain methods of thinking or reasoning in order to get to that food.

Rationality

A rational agent is one that acts so as to achieve the best expected outcome. The rationality of an agent is measured by its performance measure, the prior knowledge it has, the percepts it can obtain from the environment, and the actions it can perform. This concept is central in Artificial Intelligence.

Autonomy

Autonomy is the ability to act independently of a ruling body. In AI, a machine or vehicle is referred to as autonomous if it does not require input from a human operator to function properly.

Reflex agents

Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. (The percept history is everything the agent has perceived to date.) The agent function is based on condition-action rules. A condition-action rule maps a state, i.e., a condition, to an action: if the condition is true, the action is taken; otherwise it is not. This agent function succeeds only when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable; it may be possible to escape them if the agent can randomize its actions.
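The condition-action idea can be sketched as a plain function of the current percept only. The vacuum-world percepts and rules here are an illustrative assumption:

```python
# Hedged sketch of a simple reflex agent: it sees only the current
# percept (location, status) and applies fixed condition-action rules.
def simple_reflex_agent(percept):
    location, status = percept     # no percept history is kept
    if status == "Dirty":          # rule 1: dirty square -> clean it
        return "Suck"
    if location == "A":            # rule 2: square A clean -> go right
        return "Right"
    return "Left"                  # rule 3: square B clean -> go left

print(simple_reflex_agent(("A", "Dirty")))  # Suck
```

Note that with no memory, the clean-world case makes the agent shuttle between A and B forever, illustrating the infinite-loop problem mentioned above.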

Model-based reflex agents

Like a simple reflex agent, it works by finding a rule whose condition matches the current situation; but a model-based agent can handle partially observable environments by using a model of the world. The agent keeps track of an internal state, adjusted by each percept, that depends on the percept history. This current state is stored inside the agent as a structure describing the part of the world that cannot currently be seen.
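A minimal sketch of this idea, again using the assumed two-square vacuum world: the agent's model records the last known status of each square, including the one it is not currently in, and its rules consult that model:

```python
# Hedged sketch of a model-based reflex agent: internal state (a model
# of both squares) is updated by each percept and consulted by the rules.
class ModelBasedVacuumAgent:
    def __init__(self):
        # internal model: last known status of each square (None = unknown)
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update internal state
        if status == "Dirty":
            return "Suck"
        # the model lets the agent stop once it believes all is clean
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Clean")))  # Right
```

Unlike the simple reflex agent, this one can halt, because its model remembers squares it has already seen.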

Goal-based agents

These agents make decisions based on how far they currently are from their goal (a description of desirable situations). Every action is intended to reduce the distance to the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge supporting its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning.
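The search-and-planning aspect can be sketched with a breadth-first search for a path to the goal state. The 1-D corridor state space below is an invented toy example:

```python
# Hedged sketch of the planning core of a goal-based agent:
# breadth-first search for an action/state sequence reaching the goal.
from collections import deque

def goal_based_plan(start, goal, neighbours):
    """Return a list of states from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:            # goal test
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# toy state space: a corridor of cells 0..4, moves are step left/right
corridor = lambda s: [n for n in (s - 1, s + 1) if 0 <= n <= 4]
print(goal_based_plan(0, 3, corridor))  # [0, 1, 2, 3]
```

The agent then executes the planned sequence, re-planning if the world changes.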

Utility-based agents

Utility-based agents choose actions based on a preference (utility) for each state. When there are multiple possible alternatives, a utility-based agent is used to decide which one is best. Sometimes achieving the desired goal is not enough: we may look for a quicker, safer, or cheaper trip to reach a destination, so the agent's "happiness" should be taken into consideration. Utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility-based agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number describing the associated degree of happiness.
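Expected-utility maximization can be sketched directly. The route names, probabilities, and utility values below are invented purely for illustration:

```python
# Hedged sketch of a utility-based agent: pick the action whose
# probability-weighted (expected) utility is highest.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict mapping action name -> list of (prob, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

routes = {
    "highway":  [(0.9, 10), (0.1, -50)],  # fast, but a crash is costly
    "backroad": [(1.0, 6)],               # slower, but certain
}
print(choose_action(routes))  # backroad
```

Here the highway's expected utility is 0.9*10 + 0.1*(-50) = 4, so the certain backroad (utility 6) wins, even though the highway's best case is better.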

Learning Agent
A learning agent in AI is an agent that can learn from its past experiences, i.e., it has learning capabilities.
It starts with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has four main conceptual components:
1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: gives the learning element feedback describing how well the agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
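The four components above can be wired together in a minimal sketch. The concrete behaviours (a reward-threshold critic, a random problem generator) are illustrative assumptions, not a standard design:

```python
# Hedged sketch of a learning agent's four conceptual components.
import random

class LearningAgent:
    def __init__(self):
        self.rules = {}                 # knowledge used by the performance element

    def act(self, state):
        """Performance element: select an external action."""
        return self.rules.get(state, "explore")

    def critic(self, reward):
        """Critic: compare outcome against a fixed standard (reward > 0)."""
        return reward > 0

    def learn(self, state, action, reward):
        """Learning element: improve the rules using the critic's feedback."""
        if self.critic(reward):
            self.rules[state] = action

    def problem_generator(self, states):
        """Suggest an unexplored state for a new, informative experience."""
        unexplored = [s for s in states if s not in self.rules]
        return random.choice(unexplored) if unexplored else None

agent = LearningAgent()
agent.learn("hungry", "eat", reward=1)
print(agent.act("hungry"))  # eat
```

Each component stays separate: the performance element acts, the critic scores, the learning element updates, and the problem generator steers exploration.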

Question 2: Distinguish between strong and weak Artificial Intelligence with a diagram.

Answer: The distinction between Strong and Weak AI is given below.

Strong AI | Weak AI
The machine can actually think and perform tasks on its own, just like a human being. | The devices cannot perform these tasks on their own but are made to look intelligent.
An algorithm is stored by a computer program. | Tasks are entered manually to be performed.
There are no proper examples of Strong AI. | An automatic car or remote-control devices.
Progress: initial stage. | Progress: advanced stage.

Fig: Artificial Intelligence
