Module 1 AI
John McCarthy, the father of AI, defined Artificial Intelligence as "the science and
engineering of making intelligent machines, especially intelligent computer programs."
As the name suggests, AI means imparting intelligence to machines so that they operate
like human beings. AI is the branch of computer science that emphasizes the creation of
intelligent machines that work, operate, and react like human beings. AI enables machines
to make decisions based on real-time scenarios: an artificially intelligent machine
reads real-time data, understands the business scenario, and reacts accordingly.
Some of the activities that artificially intelligent machines are designed for are:
• Speech recognition
• Learning
• Planning
• Problem-solving
AI has now become a very important part of Information Technology. This branch aims to
create intelligent machines, and highly technical and specialized research is associated
with it.
Hence, AI developed with the objective of making machines that operate and react like
human beings. Transforming a computer into a computer-controlled robot, or designing
software that thinks and reacts the way a human being does, is what AI is all about.
In order to use AI to develop intelligent systems, one must understand how the human
brain functions: how it thinks, learns, decides, and operates while solving a problem.
The results of this study are then applied to software in order to develop smart,
intelligent systems.
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial
intelligence program", named "Logic Theorist". This program proved 38 of 52
mathematical theorems and found new, more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by the American
computer scientist John McCarthy at the Dartmouth Conference. For the first time,
AI was established as an academic field.
o At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were
invented, and enthusiasm for AI was very high.
o Year 1966: Researchers emphasized developing algorithms that could solve
mathematical problems. Joseph Weizenbaum created the first chatbot, named ELIZA,
in 1966.
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
o The period from 1974 to 1980 was the first AI winter. An AI winter refers to a
period in which computer scientists faced a severe shortage of government funding
for AI research.
o During AI winters, public interest in artificial intelligence declined.
A boom of AI (1980-1987)
o Year 1980: After the AI winter, AI came back with "Expert Systems": programs that
emulate the decision-making ability of a human expert.
o In 1980, the first national conference of the American Association of Artificial
Intelligence was held at Stanford University.
o The period from 1987 to 1993 was the second AI winter.
o Investors and governments again stopped funding AI research because of the high
costs and inefficient results; expert systems such as XCON proved very expensive
to maintain relative to the value they delivered.
o Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the
first computer to beat a reigning world chess champion.
o Year 2002: For the first time, AI entered the home, in the form of Roomba, a robot
vacuum cleaner.
o Year 2006: AI entered the business world. Companies like Facebook, Twitter, and
Netflix started using AI.
o Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex
questions as well as riddles. Watson proved that it could understand natural
language and solve tricky questions quickly.
o Year 2012: Google launched the Android app feature "Google Now", which could
provide predictive information to the user.
o Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous
"Turing test."
o Year 2018: IBM's "Project Debater" debated complex topics with two master debaters
and performed remarkably well.
o In the same year, Google demonstrated "Duplex", an AI virtual assistant that booked
a hairdresser appointment over the phone; the person on the other end did not
notice that she was talking to a machine.
AI has now developed to a remarkable level. Concepts such as deep learning, big data,
and data science are booming, and companies like Google, Facebook, IBM, and Amazon are
working with AI to create amazing devices. The future of Artificial Intelligence is
inspiring and promises highly intelligent systems.
Machines can only act, operate, and react like human beings if they are provided with
enough information about the business and the world. Hence, it is important that AI has
access to all the information regarding the objects, categories, properties, and
relations across business use cases, so that the machine can efficiently implement
knowledge engineering. However, imparting machines with common sense, decision-making,
reasoning, and problem-solving power is quite difficult and tedious.
Machine learning
Machine learning is programming computers to optimize a performance criterion using
example data or past experience. We have a model defined up to some parameters, and learning
is the execution of a computer program to optimize the parameters of the model using the
training data or past experience. The model may be predictive to make predictions in the future,
or descriptive to gain knowledge from data, or both.
Machine learning uses the theory of statistics in building mathematical models, because the
core task is making inference from a sample. The role of computer science is twofold: First, in
training, we need efficient algorithms to solve the optimization problem, as well as to store and
process the massive amount of data we generally have. Second, once a model is learned, its
representation and algorithmic solution for inference needs to be efficient as well. In certain
applications, the efficiency of the learning or inference algorithm, namely, its space and time
complexity, may be as important as its predictive accuracy. Application of machine learning
methods to large databases is called data mining. The analogy is that a large volume of earth
and raw material is extracted from a mine, which when processed leads to a small amount of
very precious material.
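To make "a model defined up to some parameters" concrete, here is a minimal sketch that fits a line to noisy example data by optimizing its two parameters against a mean-squared-error criterion. The data, learning rate, and iteration count are illustrative assumptions, not anything prescribed by the text:

```python
# A minimal sketch of "learning as parameter optimization":
# fit a line y = w*x + b to example data by gradient descent.
# (Toy data and learning rate are illustrative assumptions.)
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)   # noisy "past experience"

w, b = 0.0, 0.0                      # model defined up to parameters w, b
lr = 0.01                            # learning rate
for _ in range(1000):                # optimize the performance criterion (MSE)
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)   # gradient of MSE with respect to w
    b -= lr * 2 * np.mean(err)       # gradient of MSE with respect to b

print(f"learned w={w:.2f}, b={b:.2f}")   # close to the true 3.0 and 2.0
```

Here the "model" is the line, "training" is the loop that adjusts the parameters, and the learned line can then be used predictively on new inputs.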
The following describes how ML algorithms differ from programmed logic-based algorithms.
For a logic-based algorithm, the flow is well defined and known in advance; however,
there are several real-life scenarios (such as image classification) where the logic
cannot be defined by hand. In such cases, machine learning has proven extremely useful:
machine learning techniques take input parameters and expected reference output data and
generate the logic, which is then deployed into production.
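The contrast can be sketched in a few lines. In the hypothetical example below, the logic-based classifier's rule is written by hand in advance, while the ML classifier (a decision tree, chosen only for illustration) derives its own rule from toy (input, expected output) pairs:

```python
# Illustrative sketch with assumed toy data: a logic-based classifier
# has its rule written by hand; an ML classifier generates the rule
# from (input, expected output) examples.
from sklearn.tree import DecisionTreeClassifier

def logic_based(weight_g: float, smooth: int) -> str:
    # flow defined and known in advance by a programmer
    return "apple" if smooth == 1 and weight_g < 180 else "orange"

# ML: input parameters + expected reference outputs -> generated logic
X = [[150, 1], [170, 1], [140, 1], [130, 0], [190, 0], [175, 0]]  # [weight, smooth]
y = ["apple", "apple", "apple", "orange", "orange", "orange"]

model = DecisionTreeClassifier().fit(X, y)   # the logic is learned from data
print(model.predict([[160, 1]]))             # and deployed for new inputs
```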
Applications of ML
In the past decade, machine learning has transformed several industries, including
healthcare, social media, digital marketing, real estate, logistics, supply chain, and
manufacturing. Early movers in these industries have already reaped significant profits,
and there is a growing demand for a workforce skilled in machine learning along with
domain knowledge. ML techniques have played a significant role in applications across
all of these domains.
Deep learning
Deep learning is a subset of machine learning within artificial intelligence, based on
artificial neural networks and representation learning; it implements functions that
mimic the functionality of the brain by finding patterns in data. Deep learning is used
for decision-making in fields such as driverless cars (to detect pedestrians, street
lights, other cars, etc.), speech recognition, image analysis (e.g., identifying cancer
in blood and tumors), smart TVs, and more.
In general, we constantly perform two tasks, consciously or subconsciously:
categorization of what we feel through our senses (such as whether a mug is hot or
cold), and prediction, for example predicting the future temperature based on previous
temperature data. We perform categorization and prediction for many events in our daily
life, such as:
• Holding a cup of tea, water, or coffee, which may be hot or cold.
• Email categorization, such as spam or not spam.
• Daylight categorization, such as day or night.
• Long-term planning of the future based on our current position and resources,
which is prediction.
• Every creature in the world performs these tasks: a crow categorizes whether a
place is suitable for building its nest, a bee decides when and where to get honey
based on several factors, and a bat comes out during the night and sleeps during
the morning based on day/night categorization.
We can visualize the tasks of categorization and prediction geometrically: for
categorization, we separate cats from dogs by drawing a line through the data points,
and for prediction, we draw a line through the data points to estimate when a value
will increase or decrease.
1. Categorization
• In general, to categorize cats and dogs, or men and women, we do not literally
draw a line in our brains; the positions of the dogs and cats are arbitrary, for
illustration purposes only, and the way we categorize cats and dogs in our brains
is far more complex than drawing a line between points.
• We categorize two things based on shape, size, height, looks, etc., and sometimes
it is difficult to categorize with these features, such as a small dog with fur
versus a newborn cat, where there is no clear-cut separation into cats and dogs.
• Once we learn to categorize cats and dogs as children, from then on we can
categorize any dog or cat, even ones we have never seen before.
2. Prediction
• For prediction, we draw a line through the data points and use it to judge whether
the value is most likely to go upward or downward.
• The fitted curve also predicts new data points within the range of the existing
data points, i.e., how close a new data point lies to the curve.
• New data points may fall both within and beyond the range of the existing data
points, and the curve attempts to predict both.
Finally, both tasks, categorization and prediction, end at a similar point: drawing a
curved line through data points. If we can train a computer model to draw this curved
line from data points, we can extend the approach to other settings, such as drawing
curved surfaces in three-dimensional space and beyond. This is achieved by training a
model with a large amount of labeled and unlabelled data, which is called deep
learning.
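As a hedged illustration of "drawing a curvy line through data points", the sketch below fits a polynomial curve to invented noisy data and then uses it for prediction both within and beyond the range of the points. The data and polynomial degree are assumptions for illustration; deep learning replaces the polynomial with a neural network trained on far more data:

```python
# A minimal sketch of training a model to draw a curved line through
# data points (toy data; the polynomial stands in for a learned model).
import numpy as np

x = np.linspace(0, 10, 20)
y = np.sin(x) + np.random.default_rng(1).normal(0, 0.1, size=20)  # noisy points

coeffs = np.polyfit(x, y, deg=5)    # fit a degree-5 curve to the points
curve = np.poly1d(coeffs)

print(curve(2.0))    # prediction within the range of existing points
print(curve(11.0))   # prediction beyond the range (much less reliable)
```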
Examples of Deep Learning
As we know, deep learning and machine learning are subsets of artificial intelligence,
but deep learning technology represents the next evolution of machine learning. Machine
learning works with algorithms and features designed by humans, whereas deep learning
learns through a neural network model that acts similarly to the human brain and allows
machines or computers to analyze data much as humans do. This becomes possible when we
train neural network models with huge amounts of data, because data is the fuel for
neural network models.
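As a minimal, illustrative sketch of such a neural network model, here is a toy two-layer network learning the XOR pattern from labeled data. The architecture, data, and learning rate are assumptions for illustration, not a production recipe:

```python
# A toy two-layer neural network that learns the XOR pattern from
# labeled data rather than hand-written rules (illustrative setup).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden layer parameters
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):                   # training loop
    h = sigmoid(X @ W1 + b1)             # forward pass through hidden layer
    out = sigmoid(h @ W2 + b2)           # network output
    d_out = (out - y) * out * (1 - out)  # backpropagate the squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())   # typically converges toward [0, 1, 1, 0]
```

The examples below show where such models, trained at far larger scale, are applied: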
• Speech and Natural Language Processing: Natural language processing (NLP) deals
with algorithms that let computers understand, interpret, and manipulate human
language. NLP algorithms work with text and audio data and transform them into text
or audio output. Using NLP, we can perform tasks such as sentiment analysis, speech
recognition, language translation, and natural language generation.
• Autonomous Vehicles: Deep learning models are trained with huge amounts of data to
identify street signs; some models specialize in identifying pedestrians and other
humans so that driverless cars can react while driving.
• Text Generation: Deep learning models trained on language, grammar, and types of
text can be used to create new text with correct spelling and grammar, in styles
ranging from Wikipedia to Shakespeare.
• Image Filtering: Deep learning models can perform tasks such as adding color to
black-and-white images, which would take far more time if done manually.
Big data
Big data is a concept used to describe large volumes of data, both structured and
unstructured, that grow day by day in any system or business. However, it is not the
quantity of data that is essential; what matters is what a firm or organization can do
with the data. Big data can be analyzed for insights and predictions, which can lead to
better decisions and more reliable business strategies.
3Vs of Big Data: This concept gained traction in the early 2000s, when the industry
analyst Doug Laney articulated the mainstream definition of big data around three
pillars, the 3Vs:
• Volume: Organizations and firms gather and pull together data from different
sources, including business transactions, social media, login data, and sensor or
machine-to-machine data. Earlier, storing this data would have been an issue, but
with the advent of new technologies for handling extensive data, such as Apache
Spark and Hadoop, the burden of enormous data has decreased.
• Velocity: Data now streams in at exceptional speed and must be dealt with in a
timely manner. Sensors, smart meters, user data, and RFID tags are driving the need
to handle torrents of data in near real time.
• Variety: Data released from various systems comes in diverse types and formats,
ranging from structured to unstructured: numeric data in traditional databases,
text documents, emails, audio and video, stock ticker data, login data, encrypted
blockchain data, and even financial transactions.
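To give a hedged sense of how tools like Apache Spark tame volume, here is a classic word-count sketch that distributes the work across a cluster. It assumes pyspark is installed, and the file path is a placeholder, not a real dataset:

```python
# Hypothetical sketch: counting words across a large file with Apache
# Spark (assumes a local pyspark installation; the path is a placeholder).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
lines = spark.sparkContext.textFile("hdfs:///data/logs.txt")  # placeholder path

counts = (lines.flatMap(lambda line: line.split())  # split lines into words
               .map(lambda word: (word, 1))         # pair each word with 1
               .reduceByKey(lambda a, b: a + b))    # sum the counts per word

print(counts.take(10))   # a sample of (word, count) pairs
spark.stop()
```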
Big data brings benefits as well as challenges:
• Better decision-making vs. data quality: the quality of the data needs to be good
and well arranged before big data analytics can proceed.
• Increased productivity vs. hardware needs: the storage space needed to house the
data and the networking bandwidth to transfer it to and from analytics systems are
expensive to purchase and maintain in a big data environment.
Weak AI simply follows the rules set for it and is unable to deviate from them.
Characters in a computer game that behave realistically within the context of the game
but can do nothing else are a good illustration of weak AI. Weak AI is also known as
narrow AI.
Voice-based personal assistants like Siri and Alexa, for example, can be called weak AI
systems because they work within a limited, pre-defined set of functions, and their
responses are often pre-programmed.
Weak AI is merely the view that intelligent behaviour can be modeled and exploited by
machines to solve complicated problems and tasks; it makes no claim that machines
actually possess minds. Just because a machine can act intelligently does not mean it
is intelligent in the same way that a human is.
By finding patterns and making predictions, weak AI helps transform massive data into
meaningful knowledge. Meta's (formerly Facebook's) newsfeed, Amazon's suggested
purchases, and Apple's Siri, the iPhone technology that answers users' spoken
questions, are all examples of weak AI.
Spam filters in email are an example of weak AI: a computer uses an algorithm to
determine which messages are likely to be spam and sends them to the spam folder
instead of the inbox.
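A hedged sketch of the kind of algorithm such a filter might use: a naive Bayes classifier on invented toy data. Real filters are trained on vast labeled corpora and are far more sophisticated:

```python
# Toy spam-filter sketch: learn which messages look like spam from a
# handful of labeled examples (data invented for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "project meeting tomorrow moved",
          "claim your free prize now", "notes from the team lunch"]
labels = ["spam", "inbox", "spam", "inbox"]

vec = CountVectorizer()
X = vec.fit_transform(emails)        # bag-of-words features
model = MultinomialNB().fit(X, labels)

new = vec.transform(["free prize inside"])
print(model.predict(new))            # likely ['spam']
```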
Strong AI is a theoretical branch of AI that contends that machines can develop human
intelligence and consciousness in the same way a conscious person has them. A
hypothetical machine with such capabilities is referred to as a strong AI machine.
Weak AI (also known as narrow AI) is a type of artificial intelligence that uses advanced
algorithms to complete certain problem-solving or reasoning tasks that do not require the entire
range of human cognitive abilities.
In comparison to strong AI, weak AI has fewer functions. Weak AI is unable to achieve self-
awareness or demonstrate the full spectrum of human cognitive capacities.
Weak AI refers to systems that are programmed to solve a wide variety of issues but only
perform within a limited set of functions. Strong AI, on the other hand, refers to machines that
have the ability to think like humans.
The goal is to advance AI to the point where humans can communicate with conscious,
intelligent computers that are motivated by emotions and self-awareness.
Limitations of Weak AI
Aside from its restricted capabilities, one of the issues with weak AI is the potential
for harm if a system fails. Consider a driverless car that misjudges the location of an
oncoming vehicle, causing a fatal collision. A terrorist using a self-driving car to
detonate explosives in a crowded area is an example of how the technology might do harm
when used by someone who intends harm.
The loss of jobs due to the automation of an expanding number of tasks is another
problem associated with weak AI. Will unemployment grow, or will civilization devise
new ways for humans to generate income?
Although the thought of a huge number of workers losing their employment is frightening,
proponents of AI argue that if this happens, new jobs will emerge that we can't currently predict
as AI becomes more widely used.
Narrow AI refers to any AI that is in use today, not the depictions in films or
science fiction novels of robots taking over the world. Here are eight examples of
narrow AI in practice:
1. Voice-activated digital assistants (Siri, Alexa)
Digital voice assistants like Siri and Alexa, which we rely on every day, are often
cited as the best instances of weak AI. To function properly, the AI identifies data
and replies to requests at breakneck speed.
2. Recommender systems
Whether Netflix tells you what movie to watch next, or Amazon and other retail websites
give you useful advice on what else you might be interested in buying, these
recommendation engines are instances of narrow AI (a toy version is sketched below).
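A toy version of such an engine, assuming an invented user-by-movie ratings matrix (production recommenders use far richer models): find the user most similar to you and recommend what they liked that you have not seen.

```python
# Illustrative cosine-similarity recommender on an assumed toy matrix.
import numpy as np

# rows = users, columns = movies; 0 means "not watched"
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 3, 0],
                    [1, 0, 5, 4]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

me = ratings[0]
sims = [cosine(me, other) for other in ratings[1:]]
best = ratings[1 + int(np.argmax(sims))]          # most similar other user
recommend = np.where((me == 0) & (best > 0))[0]   # their items I haven't seen
print("recommend movie indices:", recommend)
```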
3. Search engines
Search engines such as Google are examples of weak AI. When you type your query into
the box, the algorithm goes to work classifying it and returning answers from its large
database.
4. Chatbots
If you've ever communicated with an organisation via chat, whether it's your bank,
internet service provider, or favourite e-commerce company, you've most likely been
conversing with an AI-powered chatbot. Most of the time, chat features are an AI
algorithm that answers simple questions, allowing humans to focus on higher-level work.
5. Automated vehicles
AI that enables vehicles to operate without a human driver is considered weak AI:
algorithms are used to fulfil pre-programmed functions. Because this AI lacks the
complete cognitive powers of a human brain, the difficulty is to program and teach it
to recognise any potential road hazard or situation the car may meet.
6. Image recognition
Image recognition, which helps radiologists detect disease in patient scans, is one
important way narrow AI is making an impact in healthcare. Image recognition is also
powered by weak AI in other industries, as are speech recognition and translation
systems like Google Translate.
7. Predictive analytics
Predictive analytics employs narrow AI. It examines historical data using algorithms
and machine learning to produce a prediction of a likely future outcome. In warehouses
and other places where heavy machinery is used, AI can help discover maintenance issues
that need to be addressed before a machine fails.
8. Robots
At the moment, robots do not have minds of their own. Drones and factory robots have
limited AI and can only carry out the limited range of tasks programmed for them.
During the pandemic, delivery bots proved extremely useful for complying with social
distancing orders, as did disinfecting robots.
Strong AI aims to create intelligent machines that are indistinguishable from the human mind.
But just like a child, the AI machine would have to learn through input and experiences,
constantly progressing and advancing its abilities over time.
While AI researchers in both academia and the private sector are invested in the
creation of artificial general intelligence (AGI), it exists today only as a
theoretical concept rather than a tangible reality. Some individuals, like Marvin
Minsky, have been quoted as being overly optimistic about what we could accomplish
within a few decades in the field of AI; others would say that strong AI systems cannot
even be developed. Until the measures of success, such as intelligence and
understanding, are explicitly defined, the latter are correct in this belief. For now,
many use the Turing test to evaluate the intelligence of an AI system.
Tests of Strong AI
Turing Test
Alan Turing developed the Turing Test in 1950 and discussed it in his paper "Computing
Machinery and Intelligence". Originally known as the Imitation Game, the test evaluates
whether a machine's behavior can be distinguished from a human's. In this test, a
person known as the "interrogator" seeks to distinguish computer-generated output from
human-generated output through a series of questions. If the interrogator cannot
reliably discern the machine from the human subjects, the machine passes the test.
However, if the evaluator can identify the human responses correctly, the machine is
not categorized as intelligent.
While there are no fixed evaluation guidelines for the Turing Test, Turing did specify
that a human evaluator should have no more than a 70% chance of making the right
identification after five minutes of questioning. The Turing Test introduced general
acceptance of the idea of machine intelligence.
However, the original Turing Test tests only one skill set, such as text output or
chess. Strong AI needs to perform a variety of tasks equally well, which led to the
development of the Extended Turing Test. This test evaluates the textual, visual, and
auditory performance of the AI and compares it to human-generated output. This version
is used in the famous Loebner Prize competition, where a human judge guesses whether
the output was created by a human or a computer.
Chinese Room Argument (CRA)
The Chinese Room Argument was created by John Searle in 1980. In his paper, he
discusses the definitions of understanding and thinking and asserts that computers can
never truly do either. This excerpt from his paper, taken from Stanford's website,
summarizes his argument well:
“Computation is defined purely formally or syntactically, whereas minds have actual mental
or semantic contents, and we cannot get from syntactical to the semantic just by having the
syntactical operations and nothing else…A system, me, for example, would not acquire an
understanding of Chinese just by going through the steps of a computer program that simulated
the behavior of a Chinese speaker (p.17).”
The Chinese Room Argument proposes the following scenario:
Imagine a person, who does not speak Chinese, sits in a closed room. In the room, there is a
book with Chinese language rules, phrases and instructions. Another person, who is fluent in
Chinese, passes notes written in Chinese into the room. With the help of the language
phrasebook, the person inside the room can select the appropriate response and pass it back to
the Chinese speaker.
While the person inside the room is able to provide correct responses using the
language phrasebook, he or she still does not speak or understand Chinese; it is just a
simulation of understanding, achieved by matching questions or statements with
appropriate responses. Searle argues that strong AI would require an actual mind with
consciousness and understanding. The Chinese Room Argument thus illustrates flaws in
the Turing Test and exposes differences in definitions of artificial intelligence.
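The scenario can be caricatured in a few lines of code, which makes Searle's point vivid: the "room" below returns fluent responses by pure symbol lookup, with no understanding of what the symbols mean (the phrasebook entries are invented for illustration):

```python
# A caricature of the Chinese Room: correct replies by symbol matching,
# with zero understanding (illustrative phrasebook).
phrasebook = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What is your name?" -> "I am Xiao Ming."
}

def room(note: str) -> str:
    # select the appropriate response from the rule book and pass it back
    return phrasebook.get(note, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))   # a fluent reply, yet the "room" speaks no Chinese
```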
Strong AI trends
While there are no clear examples of strong artificial intelligence, the field of AI is
rapidly innovating. Another AI theory has emerged, known as artificial superintelligence
(ASI), superintelligence, or super AI, which would surpass strong AI and exceed human
intelligence and ability. However, super AI is still purely speculative, since we have
yet to achieve even strong AI. That said, AI is already playing an increasingly
important role in many fields.
The five major components that make Artificial Intelligence successful are:
1. Discover: The basic ability of an intelligent system is to explore the data in the
available resources without any human intervention. The data is then processed by ETL
(extract, transform, load) pipelines and algorithms that explore large databases and
automatically find the relationships between the content and the solution needed for
the problem. This not only solves complex issues but also identifies emergent
phenomena.
2. Predict: This component is designed to identify future happenings through
classification, ranking, and regression. Algorithms used here include random forests,
linear learners, and gradient boosting (see the sketch after this list). Predictions
can still go wrong on some numerical values when there is bias in the data.
3. Justify: The application needs human intervention to give a more recognizable and
believable result, so it needs to understand and justify what is wrong and what is
right, and then give humans a correct solution for handling the situation. As in the
automation industry, it needs a nuts-and-bolts understanding of the machine to know why
it was repaired and what needs to be done next.
4. Act: An intelligent application needs to be active and live in the company in order
to discover, predict, and justify.
5. Learn: An intelligent system has the habit of learning and updating itself day by
day to keep up with the world's needs.
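A hedged sketch of the "Predict" component using one of the algorithms named above, a random forest classifier; the features and labels are an assumed toy example:

```python
# Illustrative "Predict" step: classify a likely future outcome from
# past observations using a random forest (invented toy data).
from sklearn.ensemble import RandomForestClassifier

# past observations: [feature1, feature2] -> outcome to be predicted
X = [[25, 0], [47, 1], [35, 1], [52, 0], [23, 1], [56, 1]]
y = ["no", "yes", "yes", "no", "no", "yes"]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[40, 1]]))   # classification of a future happening
```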
Examples
The programming languages most used in AI are as follows.
Python is unique and a favorite among computer programmers because of its simple and
versatile syntax. It is very comfortable to use and runs on all operating systems, such
as Unix, Linux, Windows, and Mac. Because Python has a systematic structure, it is
applied in object-oriented programming, neural networks, NLP development, and various
other types of programming, and it offers a wide variety of library functions.
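As a small illustration of that simplicity, a word-frequency table, a common first step in NLP, takes only a few readable lines (the sample text is made up):

```python
# Word frequencies in a few lines, showing Python's concise syntax
# for AI-style text tasks (sample text invented for illustration).
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"
freq = Counter(text.split())   # count each word
print(freq.most_common(3))     # e.g. [('the', 3), ('fox', 2), ...]
```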
C++ is applied in AI programming tasks mostly because it is time-sensitive: it has
minimal response time and quick execution, which is important for developing games and
search engines. It is reusable because of its inheritance and data-hiding properties,
and it is widely used to implement AI statistical techniques.
Java is another widely used AI programming language; thanks to virtual machine
technology, it does not need recompilation for each platform. It combines features of C
and C++ while being simpler and easier to debug. In addition, Java's automatic memory
manager reduces the developer's work.
LISP is used in parts of AI development. LISP has a specific macro system that eases
the implementation and exploration of multiple levels of intelligence. It is mostly
applied to solving logic tasks and in machine learning. It gives programmers freedom
and fast prototyping, which makes LISP a standard, user-friendly language in AI.
PROLOG provides automatic backtracking, tree-based data structuring, and pattern
matching as basic mechanisms, which are essential for AI. In addition, it is
extensively applied in medical science.
Artificial intelligence has successfully set milestones in all industries, such as
e-commerce, biotechnology, disease diagnosis, the military, mathematics and logistics,
heavy industry, finance, transportation, telecommunications, aviation, digital
marketing, telephone customer services, agriculture, and gaming.