Artificial Intelligence
Introduction
What is AI?
For example, an AI chatbot that is fed examples of text can learn to generate
lifelike exchanges with people, and an image-recognition tool can learn to
identify and describe objects in images by reviewing millions of examples.
Programming AI systems focuses on cognitive skills such as the following:
Reasoning. This aspect involves choosing the right algorithm to reach a desired
outcome.
• Year 1943: The first work that is now recognized as AI was done by
Warren McCulloch and Walter Pitts in 1943. They proposed a model
of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the
connection strength between neurons. His rule is now called Hebbian
learning.
• Year 1950: Alan Turing, an English mathematician who pioneered
machine learning, published "Computing Machinery and Intelligence"
in 1950, in which he proposed a test that checks a machine's ability
to exhibit intelligent behavior equivalent to human intelligence, now
called the Turing test.
• Year 1951: Marvin Minsky and Dean Edmonds created the initial artificial
neural network (ANN) named SNARC. They utilized 3,000 vacuum tubes to
mimic a network of 40 neurons.
History of AI
• Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-
Playing Program, which marked the world's first self-learning program for
playing games.
• Year 1955: Allen Newell and Herbert A. Simon created the "first artificial
intelligence program", which was named the "Logic Theorist". This program
proved 38 of 52 mathematical theorems and found new, more elegant
proofs for some of them.
• Year 1956: The term "Artificial Intelligence" was first adopted by American
computer scientist John McCarthy at the Dartmouth Conference, where AI
was coined as an academic field for the first time.
• Year 1959: Arthur Samuel is credited with introducing the phrase "machine
learning" in a pivotal paper in which he proposed that computers could be
programmed to surpass their creators in performance.
• Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow
created STUDENT, one of the early programs for natural language
processing (NLP), with the specific purpose of solving algebra word
problems.
• Year 1972: The first intelligent humanoid robot was built in Japan, which
was named WABOT-1.
• Year 1973: James Lighthill published the report titled "Artificial Intelligence:
A General Survey," resulting in a substantial reduction in the British
government's backing for AI research.
• The period from 1974 to 1980 was the first AI winter. An AI winter refers to
a time period in which computer scientists faced a severe shortage of
government funding for AI research.
• During AI winters, public interest in artificial intelligence declined.
A boom of AI (1980-1987)
Year 1980: After the AI winter, AI came back with the "expert system".
Expert systems were programmed to emulate the decision-making ability of a
human expert.
Year 1985: Judea Pearl introduced Bayesian network causal analysis,
presenting statistical methods for encoding uncertainty in computer systems.
• Year 2006: AI entered the business world; companies like Facebook,
Twitter, and Netflix started using AI.
• Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released the paper
"Large-scale Deep Unsupervised Learning using Graphics Processors," introducing
the concept of using GPUs to train large neural networks.
• Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier, and Jonathan Masci
created the first CNN to attain "superhuman" performance, by winning the
German Traffic Sign Recognition competition.
Limited Memory
• Limited memory machines can store past experiences or some data for a
short period of time.
• These machines can use stored data for a limited time period only.
• Self-driving cars are among the best examples of limited-memory systems.
These cars can store the recent speed of nearby cars, the distance to other
cars, the speed limit, and other information needed to navigate the road.
Theory of Mind
• Theory of Mind AI should understand human emotions, people, and beliefs,
and be able to interact socially like humans.
• This type of AI machine has not yet been developed, but researchers are
making many efforts toward building such machines.
Self-Awareness
• Self-aware AI is the future of artificial intelligence. These machines will be
superintelligent, and will have their own consciousness, sentiments, and self-
awareness.
• These machines will be smarter than the human mind.
• Self-aware AI does not yet exist in reality; it is still a hypothetical concept.
Agents in Artificial Intelligence
• An agent is anything that perceives its environment through sensors and
acts upon that environment through actuators.
• An agent runs in a cycle of perceiving, thinking, and acting.
Human agent: A human agent has eyes, ears, and other organs that work as sensors,
and hands, legs, and the vocal tract that work as actuators.
Robotic agent: A robotic agent can have cameras and infrared range finders as
sensors and various motors as actuators.
Software agent: A software agent can take keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.
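The perceive-think-act cycle above can be sketched as a small Python program. The thermostat-style environment, the rule names, and the temperature threshold are hypothetical illustrations, not a standard API.

```python
# A minimal sketch of the perceive-think-act agent cycle. The thermostat
# environment, rule names, and the threshold of 20 are hypothetical.

class ReflexAgent:
    """Maps a percept to an action via simple rules."""

    def __init__(self, rules):
        self.rules = rules  # maps rule name -> action

    def perceive(self, environment):
        # sensor: read the temperature from the environment
        return environment["temperature"]

    def think(self, percept):
        # decide: thermostat-style rule (20 is an arbitrary threshold)
        return self.rules["heat_on"] if percept < 20 else self.rules["heat_off"]

    def act(self, environment, action):
        # actuator: change the environment and report the action taken
        environment["heater"] = action
        return action


def run_cycle(agent, environment):
    """One pass of the perceive -> think -> act cycle."""
    percept = agent.perceive(environment)
    action = agent.think(percept)
    return agent.act(environment, action)


env = {"temperature": 15, "heater": None}
agent = ReflexAgent({"heat_on": "ON", "heat_off": "OFF"})
print(run_cycle(agent, env))  # -> ON
```

Running the cycle repeatedly, with the environment updated between passes, gives the continuous sense-decide-act loop described above.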
• Sensor: A sensor is a device that detects changes in the environment and
sends this information to other electronic devices. An agent observes its
environment through sensors.
• Effectors: Effectors are devices that affect the environment. Effectors
can be legs, wheels, arms, fingers, wings, fins, or a display screen.
Intelligent Agents
An intelligent agent may learn from the environment to achieve their goals. A
thermostat is an example of an intelligent agent.
Following are the main four rules for an AI agent:
Search tree: A tree representation of a search problem is called a search tree. The
root of the search tree is the root node, which corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Solution: An action sequence that leads from the start node to the goal node.
Optimal solution: A solution that has the lowest cost among all solutions.
Types of search algorithms
Uninformed Search
Uninformed search does not use any domain knowledge, such as the closeness or
location of the goal.
Uninformed search explores the search tree without any information about the
search space, such as the initial state, operators, or a test for the goal, so it
is also called blind search.
It examines each node of the tree until it reaches the goal node.
Informed Search
Informed search strategies can find a solution more efficiently than an uninformed
search strategy.
• Breadth-first search is the most common search strategy for traversing a tree or
graph.
• The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes of the next level.
BFS traverses the search tree in layers; in the example graph, it follows the path
shown by the dotted arrow, and the traversed path is:
S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
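As a sketch, BFS can be implemented with a FIFO queue of partial paths; the small adjacency-list graph below is a hypothetical example, not the graph from the figure.

```python
# A sketch of BFS over an adjacency-list graph. The example graph is
# hypothetical, chosen only to show level-order expansion.
from collections import deque

def bfs(graph, start, goal):
    """Return a path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])  # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:  # expand each node at most once
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["E"],
    "D": ["G"],
    "E": ["G"],
}
print(bfs(graph, "S", "G"))  # -> ['S', 'A', 'D', 'G']
```

Because the queue is first-in first-out, every node at depth k is expanded before any node at depth k+1, which is exactly the layer-by-layer behavior described above.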
Time Complexity: The time complexity of BFS is given by the number of nodes
traversed until the shallowest goal node, where d is the depth of the shallowest
solution and b is the branching factor (the number of successors at every state):
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
Space Complexity: The space complexity of BFS is given by the memory size of the
frontier, which is O(b^d).
In the example, after backtracking DFS traverses node C and then G, where it
terminates because it has found the goal node.
Completeness: The DFS algorithm is complete within a finite state space, as it
will expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm. It is given by:
T(n) = 1 + n + n^2 + n^3 + ... + n^m = O(n^m)
where m is the maximum depth of any node, which can be much larger than d
(the depth of the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the
root node, so the space complexity of DFS is equivalent to the size of the fringe
set, which is O(bm).
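A recursive DFS with backtracking can be sketched as below; the adjacency-list graph is a hypothetical example, not the one from the figure.

```python
# A sketch of recursive DFS with backtracking over a hypothetical
# adjacency-list graph.

def dfs(graph, node, goal, visited=None):
    """Return some path from node to goal (not necessarily shortest), or None."""
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            rest = dfs(graph, neighbor, goal, visited)
            if rest is not None:      # success below: extend the path
                return [node] + rest
    return None                       # dead end: backtrack

graph = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["E"],
    "D": ["G"],
    "E": ["G"],
}
print(dfs(graph, "S", "G"))  # -> ['S', 'A', 'D', 'G']
```

The `return None` at the bottom is the backtracking step: when a branch is exhausted without reaching the goal, control returns to the parent node, which then tries its next unvisited successor.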
Depth-Limited Search Algorithm:
In this algorithm, a node at the depth limit is treated as if it has no
further successor nodes.
Depth-limited search can terminate with two kinds of failure:
Standard failure value: Indicates that the problem has no solution at all.
Cutoff failure value: Indicates that there is no solution for the problem within
the given depth limit.
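The two failure values can be made concrete in a short sketch; the linear example graph and the string failure values are hypothetical choices for illustration.

```python
# A sketch of depth-limited search distinguishing the two failure values.
# The linear example graph is hypothetical.

CUTOFF = "cutoff"    # no solution found within the depth limit
FAILURE = "failure"  # no solution exists at all

def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        # node at the depth limit: treated as having no successors
        return CUTOFF
    cutoff_occurred = False
    for neighbor in graph.get(node, []):
        result = dls(graph, neighbor, goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return [node] + result
    return CUTOFF if cutoff_occurred else FAILURE

graph = {"S": ["A"], "A": ["B"], "B": ["G"]}
print(dls(graph, "S", "G", 3))  # -> ['S', 'A', 'B', 'G']
print(dls(graph, "S", "G", 2))  # -> cutoff
```

Returning `cutoff` rather than `failure` tells the caller that raising the depth limit might still yield a solution, which is what iterative deepening exploits.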
Uniform-cost Search Algorithm (Dijkstra's Algorithm):
Uniform-cost search is a searching algorithm used for traversing a weighted
tree or graph.
This algorithm comes into play when a different cost is available for each
edge.
The primary goal of uniform-cost search is to find a path to the goal node
that has the lowest cumulative cost.
Uniform-cost search expands nodes according to their path costs from the
root node.
It can be used to solve any graph or tree where an optimal-cost solution is required.
Advantages:
• Uniform-cost search is optimal because at every state the path with the least
cost is chosen.
• It is efficient when the edge weights are small, as it explores paths in an
order that ensures the shortest path is found early.
• It is a fundamental search method that is not overly complex, making it accessible
for many users.
• It is a complete algorithm: it will find a solution whenever a viable one exists.
Disadvantages:
• It does not care about the number of steps involved in searching and is
concerned only with path cost, so the algorithm may get stuck in an infinite
loop.
• UCS must know all the edge weights before the search can start.
• It keeps the list of nodes it has already discovered in a priority queue.
For a large graph this matters: the algorithm stores the path sequence for each
prioritized node, which can be memory-intensive as the graph grows.
• UCS can run into problems if the graph contains cycles whose edge costs are
smaller than that of the shortest path, since path costs may fail to increase
around such cycles.
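The priority-queue behavior described above can be sketched as follows; the weighted example graph and its edge costs are hypothetical.

```python
# A sketch of uniform-cost search using a priority queue ordered by
# cumulative path cost. The weighted example graph is hypothetical.
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) of the cheapest path from start to goal, or None."""
    frontier = [(0, start, [start])]  # min-heap of (path_cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path  # popped in cost order, so this is cheapest
        if node in explored:
            continue
        explored.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + weight, neighbor, path + [neighbor]))
    return None

graph = {
    "S": [("A", 1), ("B", 5)],
    "A": [("B", 2), ("C", 6)],
    "B": [("C", 1)],
}
print(ucs(graph, "S", "C"))  # -> (4, ['S', 'A', 'B', 'C'])
```

Note how the priority queue holds one entry per discovered partial path; this is the memory cost mentioned in the disadvantages, and the assumption of strictly positive edge weights is what rules out the problematic low-cost cycles.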