Subject: AI

Dr. Yasser Fouad


Fuzzy Sheet

1. Define fuzzy sets that can be used to represent the following concepts:
a. Old (domain is real-valued age in years).
b. Not old (domain is real-valued age in years).
c. Young (domain is real-valued age in years).
d. A few (domain is positive integers).
e. Do "not old" and "young" should be the same?
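
A minimal sketch of how such fuzzy sets could be defined, assuming simple ramp-shaped membership functions; the breakpoints (50/70 years for "old", 20/35 for "young", a peak at 3 for "a few") are illustrative choices, not values fixed by the sheet:

def mu_old(age: float) -> float:
    # Ramps from 0 at age 50 up to 1 at age 70.
    if age <= 50:
        return 0.0
    if age >= 70:
        return 1.0
    return (age - 50) / 20.0

def mu_not_old(age: float) -> float:
    # Standard fuzzy complement: 1 - mu_old(age).
    return 1.0 - mu_old(age)

def mu_young(age: float) -> float:
    # Ramps from 1 at age 20 down to 0 at age 35.
    if age <= 20:
        return 1.0
    if age >= 35:
        return 0.0
    return (35.0 - age) / 15.0

def mu_a_few(n: int) -> float:
    # Peaks at 3 over the positive integers.
    table = {1: 0.3, 2: 0.8, 3: 1.0, 4: 0.7, 5: 0.3}
    return table.get(n, 0.0)

# Part (e): "not old" and "young" need not coincide. At age 40,
# mu_not_old(40) == 1.0 while mu_young(40) == 0.0, because "not old"
# also covers the middle-aged.
print(mu_not_old(40), mu_young(40))  # 1.0 0.0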

2. Imagine that you are designing a robot to run a maze. You decide to give it a reward of
+1 for escaping from the maze and a reward of zero at all other times. The task seems to
break down naturally into episodes (the successive runs through the maze), so you decide
to treat it as an episodic task, where the goal is to maximize the expected total reward
r1 + r2 + ... + rN, where N is the length of the episode. After running the learning agent for a
while, you find that it is showing no improvement in escaping from the maze. What is
going wrong? Have you effectively communicated to the agent what you want it to
achieve?
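
One way to see what may be going wrong, as a minimal sketch in Python; the episode lengths and the discount factor gamma = 0.95 below are illustrative assumptions, not part of the original problem:

def undiscounted_return(episode_length: int) -> float:
    # Reward is 0 on every step except the final escape step (+1),
    # so the total reward is 1.0 no matter how long escape takes.
    rewards = [0.0] * (episode_length - 1) + [1.0]
    return sum(rewards)

def discounted_return(episode_length: int, gamma: float = 0.95) -> float:
    # With discounting (or, in the same spirit, a reward of -1 per
    # step), shorter episodes earn strictly higher returns.
    rewards = [0.0] * (episode_length - 1) + [1.0]
    return sum(gamma ** t * r for t, r in enumerate(rewards))

for n in (10, 100, 1000):
    print(n, undiscounted_return(n), round(discounted_return(n), 4))
# 10 1.0 0.6302
# 100 1.0 0.0062
# 1000 1.0 0.0

Because every successful run yields a total reward of exactly +1 regardless of its length, this reward signal never tells the agent that escaping quickly matters.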
