Module 1 AI


Unit 1

Introduction to Artificial Intelligence


Artificial Intelligence is the ability to design smart machines or to develop
self-learning software applications that imitate the traits of the human mind
like reasoning, problem-solving, planning, optimal decision making, sensory
perception, etc. The capacity of artificial intelligence approaches to
outperform human actions in terms of knowledge discovery gained the
attention of business and research communities all over the world, and this
field of study has witnessed rapid progress in the past two decades.

According to John McCarthy, the father of AI, Artificial Intelligence is "the science and
engineering of making intelligent machines, especially intelligent computer programs."

As the name suggests, AI means imparting intelligence to machines so that they operate
like human beings. AI is the sector of computer science that emphasizes the creation of
intelligent machines that work, operate and react like human beings. AI enables machines
to make decisions based on the real-time scenario: an artificially intelligent machine
reads real-time data, understands the business scenario and reacts accordingly.

Some of the activities that the artificially intelligent machines are designed for are:

• Speech recognition
• Learning
• Planning
• Problem-solving
AI has now become a very important part of Information Technology. This branch aims to
create machines that are intelligent. AI has highly technical and specialized research associated
with it.

Philosophy of Artificial Intelligence


Humans have been using computer systems for a long time. While machines have always
helped human beings, people kept wondering how much more these machines could do. This
curiosity led to the question, "Can a machine be made to think and operate as human beings do?"

Hence, AI developed with the objective of making machines that operate and react like human
beings. Transforming a computer into a computer-controlled robot, or designing software
that thinks and reacts the way a human being does, is what AI is all about.
In order to use AI to develop intelligent systems, it is necessary that one understands how the
human brain functions. How the brain thinks, learns, decides and operates while solving a
problem is to be studied thoroughly. Then, the result thus obtained must be applied to the
software in order to develop smart and intelligent systems.

The core concept of AI research is Knowledge Engineering.

History of Artificial Intelligence


Artificial Intelligence is not a new term or a new technology for researchers; it is much
older than you might imagine. There are even myths of mechanical men in ancient Greek and
Egyptian mythology. Following are some milestones in the history of AI that trace the
journey from its beginnings to the present day.

Maturation of Artificial Intelligence (1943-1952)


o Year 1943: The first work which is now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950.
He published "Computing Machinery and Intelligence," in which he proposed a test of a
machine's ability to exhibit intelligent behavior equivalent to human intelligence,
now called the Turing test.

The birth of Artificial Intelligence (1952-1956)

o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial
intelligence program," named the "Logic Theorist." The program proved 38 of 52
mathematical theorems and found new and more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by American computer
scientist John McCarthy at the Dartmouth Conference, and AI was coined as an academic
field for the first time.

At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were being
invented, and enthusiasm for AI was very high.

The golden years-Early enthusiasm (1956-1974)

o Year 1966: Researchers emphasized developing algorithms that could solve mathematical
problems. Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.

The first AI winter (1974-1980)

o The period between 1974 and 1980 was the first AI winter. An AI winter refers to a
time period when computer scientists faced a severe shortage of government funding
for AI research.
o During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)

o Year 1980: After the AI winter, AI came back with "expert systems": programs that
emulate the decision-making ability of a human expert.
o In the year 1980, the first national conference of the American Association of
Artificial Intelligence was held at Stanford University.

The second AI winter (1987-1993)

o The period between the years 1987 and 1993 was the second AI winter.
o Investors and the government again stopped funding AI research because of the high
costs and inefficient results. Expert systems such as XCON proved very costly to
maintain relative to the value they delivered.

The emergence of intelligent agents (1993-2011)

o Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry
Kasparov, becoming the first computer to beat a reigning world chess champion.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic
vacuum cleaner.
o Year 2006: AI came into the business world. Companies like Facebook, Twitter, and
Netflix started using AI.

Deep learning, big data and artificial general intelligence (2011-present)

o Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show where it had
to solve complex questions as well as riddles. Watson proved that it could
understand natural language and solve tricky questions quickly.
o Year 2012: Google launched an Android app feature called "Google Now," which could
provide information to the user as predictions.
o Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous
"Turing test."
o Year 2018: IBM's "Project Debater" debated complex topics with two master debaters
and performed extremely well.
o Google demonstrated an AI program, "Duplex," a virtual assistant that booked a
hairdresser appointment over the phone, and the lady on the other side didn't notice
that she was talking to a machine.

Now AI has developed to a remarkable level. Concepts such as deep learning, big data, and
data science are now booming. Companies like Google, Facebook, IBM, and Amazon are working
with AI and creating amazing devices. The future of Artificial Intelligence is inspiring
and will come with high intelligence.

Machines can only act, operate and react like human beings if they are provided with enough
information relating to the business and the world. Hence, it is important that AI have
access to all the information regarding the objects, categories, properties, and relations
between all business use cases so that the machine can efficiently implement knowledge
engineering. However, imparting machines with common sense, decision making, reasoning,
and problem-solving power is quite difficult and tedious.

Advantages of Artificial Intelligence


Following are some main advantages of Artificial Intelligence:
o High accuracy with fewer errors: AI machines or systems are prone to fewer errors and
higher accuracy, as they take decisions based on prior experience or information.
o High speed: AI systems can be very fast at decision making; because of this, an AI
system can beat a chess champion in the game of chess.
o High reliability: AI machines are highly reliable and can perform the same action
multiple times with high accuracy.
o Useful for risky areas: AI machines can be helpful in situations such as defusing a
bomb or exploring the ocean floor, where employing a human can be risky.
o Digital assistant: AI can be very useful as a digital assistant for users; for example,
AI technology is currently used by various e-commerce websites to show products
matching customer requirements.
o Useful as a public utility: AI can be very useful for public utilities, such as
self-driving cars which can make our journeys safer and hassle-free, facial recognition
for security purposes, natural language processing to communicate with humans in
human language, etc.
Disadvantages of Artificial Intelligence
Every technology has some disadvantages, and the same goes for Artificial Intelligence.
Even though it is a highly advantageous technology, it still has some disadvantages which
we need to keep in mind while creating an AI system. Following are the disadvantages of AI:
o High cost: The hardware and software requirements of AI are very costly, as it requires
lots of maintenance to meet current world requirements.
o Can't think outside the box: Even though we are making smarter machines with AI, they
still cannot work outside the box; a robot will only do the work for which it is
trained or programmed.
o No feelings and emotions: An AI machine can be an outstanding performer, but it does
not have feelings, so it cannot form any kind of emotional attachment with humans, and
it may sometimes be harmful to users if proper care is not taken.
o Increased dependency on machines: With the advance of technology, people are becoming
more dependent on devices, and hence they are losing their mental capabilities.
o No original creativity: Humans are creative and can imagine new ideas, but AI machines
cannot match this power of human intelligence; they cannot be creative and imaginative.

Machine learning
Machine learning is programming computers to optimize a performance criterion using
example data or past experience. We have a model defined up to some parameters, and learning
is the execution of a computer program to optimize the parameters of the model using the
training data or past experience. The model may be predictive to make predictions in the future,
or descriptive to gain knowledge from data, or both.

Machine learning uses the theory of statistics in building mathematical models, because the
core task is making inference from a sample. The role of computer science is twofold: First, in
training, we need efficient algorithms to solve the optimization problem, as well as to store and
process the massive amount of data we generally have. Second, once a model is learned, its
representation and algorithmic solution for inference need to be efficient as well. In certain
applications, the efficiency of the learning or inference algorithm, namely, its space and time
complexity, may be as important as its predictive accuracy. Application of machine learning
methods to large databases is called data mining. The analogy is that a large volume of earth
and raw material is extracted from a mine, which when processed leads to a small amount of
very precious material.
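
To make "optimizing the parameters of the model using training data" concrete, here is a
minimal sketch (not from the source text) that fits the two parameters of a linear model by
gradient descent; the toy data and learning rate are invented for illustration:

```python
import numpy as np

# Toy training data: the "past experience" the model learns from
# (a noisy linear relationship, invented for this sketch).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)

# A model defined up to parameters: w (slope) and b (intercept).
w, b = 0.0, 0.0
lr = 0.01  # learning rate for gradient descent

for _ in range(2000):
    y_pred = w * x + b           # model predictions
    error = y_pred - y
    # Gradients of the mean-squared-error performance criterion.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w             # optimize the parameters
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # close to the true 3.0 and 2.0
```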

Why do we need Machine Learning?


In the digital age, data is abundantly available. The conventional way of programming is
not the best solution for problems involving pattern recognition or retaining a chunk of
memory from a previous interaction; it gets complex and messy when we try to update it for
new requirements. The traditional programming approach fails to handle a huge variety of
data, whereas with Machine Learning the more data, the merrier, and with the massive volume
of data that we generate, state-of-the-art neural net models for easy pattern recognition
are now possible.
Take some of the most common things that we do almost every other day, like ordering food,
groceries or even clothes: all of these, now just a click away, are powered by Machine
Learning, which can find patterns and behaviors and learn from them without being
programmed explicitly.

The following shows how ML algorithms differ from programmed, logic-based algorithms:

For a logic-based algorithm, the flow is well defined and known in advance; however, there
are several real-life scenarios (such as image classification) where the logic can't be
defined. In such cases, machine learning has proven to be extremely useful. Machine
learning techniques take input parameters and expected reference output data and generate
the logic, which is then deployed into production.
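
As a hedged illustration of this difference, the sketch below contrasts a hand-written
logic-based rule with a classifier that generates equivalent logic from input parameters
and expected reference outputs; the tiny weather-style dataset is invented:

```python
from sklearn.tree import DecisionTreeClassifier

# Logic-based approach: the rule is defined in advance by a programmer.
def rule_based(temperature, humidity):
    return "rain" if humidity > 70 and temperature < 25 else "no rain"

# ML approach: only input parameters and expected reference outputs
# are given; the classifier generates the decision logic itself.
X = [[30, 40], [20, 80], [22, 90], [35, 30], [18, 75], [28, 65]]
y = ["no rain", "rain", "rain", "no rain", "rain", "no rain"]

model = DecisionTreeClassifier().fit(X, y)

# The learned model is then "deployed" and queried just like the rule.
print(rule_based(21, 85))          # rain
print(model.predict([[21, 85]]))   # ['rain'], learned from examples
```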

Applications of ML
In the past decade, machine learning has transformed several industries,
including healthcare, social media, digital marketing, real estate, logistics, supply chain &
manufacturing. Early movers in these industries have already reaped significant profits. There
is a growing demand for a workforce skilled in machine learning along with domain
knowledge.
Following are a few applications where ML techniques have played a significant role:

1. Spam Mail Classification


ML can classify mail as spam/not spam by learning from labeled examples, using data such
as the message content, vocabulary typical of promotional emails, sender email address,
sender IP, use of hyperlinks, number of punctuation marks, etc.
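
A minimal sketch of such a spam classifier, assuming scikit-learn is available; the handful
of example messages and their labels are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny labeled corpus (invented): 1 = spam, 0 = not spam.
messages = [
    "WIN a FREE prize now!!! click here",
    "Lowest price offer, buy now, limited deal",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review my draft before tomorrow?",
]
labels = [1, 1, 0, 0]

# Turn message content into word-count features; promotional
# vocabulary and similar cues are captured implicitly.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

# Naive Bayes is a common baseline for spam filtering.
classifier = MultinomialNB().fit(X, labels)

new_mail = ["free prize offer, click now"]
print(classifier.predict(vectorizer.transform(new_mail)))  # [1] -> spam
```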
2. Cancer Detection
ML is increasingly being used in healthcare for diagnosis, including cancer detection,
using medical data from previous patients. For breast cancer detection, the training
algorithm takes features such as tumor size, radius, curvature, and perimeter as input.
At the output, we get the likelihood that the tumor is malignant.
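
As an illustrative sketch only (it uses scikit-learn's bundled breast cancer dataset, whose
features include radius and perimeter measurements, rather than exactly the features listed
above), a logistic regression can output the likelihood that a tumor is malignant:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Tumor measurements (mean radius, perimeter, etc.) with labels;
# in this dataset the classes are ordered ['malignant', 'benign'].
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Scale the features, then fit a logistic regression, which outputs
# a probability (the likelihood of malignancy) rather than a hard yes/no.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

print(model.predict_proba(X_test[:3]))  # columns follow data.target_names
print(model.score(X_test, y_test))      # held-out accuracy, roughly 0.95+
```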
3. Sales Prediction
An increasing number of vendors are digitizing their records; many of them have started using
machine learning tools to predict sales of a particular item in a given week so that they can
stock a sufficient amount of inventory. Machine learning techniques take inputs from previous
years' sales for different items, find patterns in seasonal variations, and give specific
predictions for the sales of certain items. We can also identify items that perform poorly
in terms of sales.
4. Face Recognition
You have probably observed, while uploading pictures to Facebook, that it tags your friends'
faces with their names. In the backend, machine/deep learning algorithms are doing this job.
The same fundamental machine learning principles are also used for face recognition, where
input face images are fed in and neural networks are trained to classify these images.
5. Text Classification
With an increasing share of the population coming online, it has become mandatory for
website/social media companies such as Twitter, Facebook and Quora to deploy text
classification-based systems. Twitter and Quora use this to identify hate comments/posts.
Some news companies also use text classification algorithms to group similar news articles.
6. Audio/Voice Interpretation
Ever wonder how devices like Alexa, Siri, and Google Assistant are becoming more intelligent
day by day at understanding audio data in different languages with different accents? These
devices are trained on huge amounts of data using machine learning techniques, which is
what makes it possible.
7. Fraud Detection Systems
ML-based fraud detection systems are deployed by several e-commerce companies to identify
customers creating fake orders and to eliminate vendors selling counterfeit products on the
platform. Banking industries and other financial technology startups rely heavily on ML
techniques to detect fraudulent transactions.
Advantages and Disadvantages of Machine Learning
Given below are the advantages and disadvantages mentioned:
Advantages:
• Automate time-consuming tasks: ML-based applications have automated several
tasks like low-level decision making, data entry, tele-calling, and loan approval processes.
• Cost saving: Once the algorithm is developed and put into production, it can yield
significant cost savings, as human labor and decision making are minimized.
• Turnaround time: For a lot of applications, total time is of paramount importance. ML
has been able to reduce time in domains such as auto insurance claims, where the user
uploads pictures and the insurance amount gets calculated. It has also helped e-commerce
companies in handling returns of inventory sold.
• Data-driven decision making: Not only corporations but also many governments are
relying on ML for decision making, such as deciding which projects to invest in and
how to optimally utilize existing resources.
Disadvantages:
• ML algorithms can be biased: Often, the input data to an ML algorithm is biased
with respect to gender, race, country, caste, etc. This results in ML algorithms
propagating unwanted bias into the decision-making process. This has been observed
in some applications that deployed ML, such as school/college admission processes and
social media recommendations.
• Require large data to achieve acceptable accuracy: While people can learn easily
from small datasets, for some applications machine learning requires huge amounts of
data to achieve sufficient accuracy.
• A model suited to current data may not suit the future: An ML technique trained on
the current dataset may not be well suited for the future, as the input distribution
may change significantly over time. One countermeasure is to re-train the model
periodically.

Deep learning
Deep learning is a subset of machine learning in artificial intelligence, based upon
artificial neural networks and representation learning; it is capable of implementing
functions that mimic the functionality of the brain by creating patterns and
processing data. Deep learning is also used for decision-making in fields like driverless
cars (to detect pedestrians, street lights, other cars, etc.), speech recognition, image
analysis (e.g., identifying cancer in blood and tumors), smart TVs, etc.
In general, we perform two tasks all the time, consciously or subconsciously: categorization
of what we feel through our senses (like whether a mug is hot or cold) and prediction, for
example, predicting the future temperature based on previous temperature data.
We perform categorization and prediction tasks for several events in our daily life, such as:
• Holding a cup of tea/water/coffee, which may be hot or cold.
• Email categorization, such as spam/not spam.
• Daylight-time categorization, such as day or night.
• Long-term planning of the future based on our current position and the things we have,
which is called prediction.
• Every creature in the world performs these tasks; for example, a crow will categorize
whether a place is suitable to build its nest, a bee will decide based on several
factors when and where to get honey, and a bat comes out during the night and sleeps
during the morning based on day and night categorization.

Let us visualize these two tasks, categorization and prediction: for categorization, we
separate cats from dogs by drawing a line through the data points, and for prediction, we
draw a line through the data points to predict when the value will increase or decrease.

1. Categorization
• In general, to categorize between cats and dogs, or men and women, we don't literally
draw a line in our brains; the positions of the dogs and cats are arbitrary, for
illustration purposes only, and it is needless to say that the way we categorize cats
and dogs in our brains is much more complex than drawing a red line.
• We categorize between two things based on shape, size, height, looks, etc., and
sometimes it is difficult to categorize with these features, such as between a small
furry dog and a newborn cat, so it is not always a clear-cut categorization into cats
and dogs.
• Once we are able to categorize between cats and dogs as children, from then onwards
we are able to categorize any dog or cat, even ones we have never seen before.

2. Prediction

• For prediction, based on the line we draw through the data points, we can predict
whether the value is most likely to go upward or downward.
• The curve also predicts new data points within the range of the existing data
points, i.e., how close a new data point is to the curve.
• New data points may fall both within and beyond the range of the existing data
points, and the curve attempts to predict both.

Finally, both tasks, categorization and prediction, end at a similar point: drawing a
curvy line through data points. If we can train a computer model to draw this curvy line
based on data points, we can extend the approach to other settings, such as drawing such
a line in three-dimensional space and so on. This can be achieved by training a model
with a large amount of labeled and unlabelled data, which is called deep learning.
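
A hedged sketch of this "draw a curvy line through data points" idea, using a simple
polynomial fit (NumPy) on invented data; a deep learning model replaces the polynomial
with a neural network that can learn far more flexible curves:

```python
import numpy as np

# Invented data points: a noisy wave the model has to trace.
x = np.linspace(0, 6, 30)
y = np.sin(x) + np.random.default_rng(1).normal(0, 0.1, size=30)

# "Drawing a curvy line": fit a degree-5 polynomial through the points.
coefficients = np.polyfit(x, y, deg=5)
curve = np.poly1d(coefficients)

# Prediction within the range of existing points (interpolation)...
print(curve(3.0))
# ...and beyond it (extrapolation), which is much less reliable.
print(curve(7.0))
```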
Examples of Deep Learning

As we know, deep learning and machine learning are subsets of artificial intelligence, but
deep learning technology represents the next evolution of machine learning. Classical
machine learning works with algorithms and features engineered by humans, whereas deep
learning learns through a neural network model that acts somewhat like the human brain and
allows machines or computers to analyze data in a similar way to how humans do. This
becomes possible when we train the neural network models with a huge amount of data, as
data is the fuel, or food, for neural network models. (A small code sketch follows the
examples below.)

• Computer Vision: Computer vision deals with algorithms for computers to
understand the world using image and video data, and with tasks such as image
recognition, image classification, object detection, image segmentation, image
restoration, etc.

• Speech and Natural Language Processing: Natural language processing deals with
algorithms for computers to understand, interpret, and manipulate human language.
NLP algorithms work with text and audio data and transform them into text or audio
output. Using NLP, we can do tasks such as sentiment analysis, speech recognition,
language translation, natural language generation, etc.

• Autonomous Vehicles: Deep learning models are trained with huge amounts of data
to identify street signs; some models specialize in identifying pedestrians,
humans, etc., for driverless cars while driving.

• Text Generation: Deep learning models trained on language, grammar, and types of
texts can be used to create new text with correct spelling and grammar, in styles
ranging from Wikipedia to Shakespeare.
• Image Filtering: Deep learning models can perform tasks such as adding color to
black-and-white images, which would take much more time if done manually.
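
As a small runnable sketch of the neural network models described above (using
scikit-learn's MLPClassifier on its bundled 8x8 digits dataset, chosen here purely for
convenience; real deep learning systems use much larger networks and datasets):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale digit images flattened to 64 features: a miniature
# stand-in for the large image datasets deep models are trained on.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, random_state=0)

# A small multi-layer neural network: two hidden layers of neurons
# learn their own internal representation of the pixel data.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_train, y_train)

print(net.score(X_test, y_test))  # typically above 0.95 on this toy task
```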

Big data
Big data can be defined as a concept used to describe a large volume of data, both
structured and unstructured, that grows day by day in any system or business. However,
it is not the quantity of data that is essential; what matters is what a firm or
organization can do with the data. Big data can be analyzed for insights and predictions,
which can lead to better decisions and a reliable strategy for business moves.

3Vs of Big Data: This concept gained momentum in the early 2000s when trade and business
analyst Doug Laney articulated the mainstream definition of big data in terms of the
three V's:

• Volume: Organizations and firms gather and pull together data from different
sources, including business transactions, social media, login data, and sensor and
machine-to-machine data. Earlier, storing this data would have been an issue, but
thanks to the advent of new technologies for handling extensive data, with tools like
Apache Spark and Hadoop, the burden of enormous data has decreased.
• Velocity: Data now streams in at exceptional speed and has to be dealt with in a
timely manner. Sensors, smart metering, user data and RFID tags are driving the need
to deal with an inundation of data in near real-time.
• Variety: Data released from various systems comes in diverse types and formats. It
ranges from structured to unstructured: numeric data in traditional databases,
non-numeric or text documents, emails, audio and video, stock ticker data, login data,
blockchain-encrypted data, and even financial transactions.

Importance of Big Data

Big Data is not about how much data there is, but about how it can be used. Data can be
taken from various sources and analyzed to find answers that enable:
• Reduction in cost.
• Time reductions.
• New product development with optimized offers.
• Well-informed decision making.
When you merge big data with high-powered data analytics, it is possible to achieve
business-related tasks like:
• Real-time determination of the root causes of failures, problems, or faults.
• Producing tokens and coupons according to the customer's buying behavior.
• Risk management in minutes by calculating risk portfolios.
• Detection of deceptive behavior before it has an impact.

Advantages and Disadvantages of Big Data


Given below are the advantages and disadvantages:

Advantages:
• Better decision-making
• Increased productivity
• Reduced costs
• Improved customer service

Disadvantages:
• Data quality: The quality of data needs to be good and well arranged to proceed with
big data analytics.
• Hardware needs: The storage space needed to house the data and the networking
bandwidth to transfer it to and from analytics systems are expensive to purchase and
maintain in a Big Data environment.
• Cybersecurity risks: Storing large amounts of sensitive data can make companies a
more attractive target for cyberattackers, who can use the data for ransom or other
wrongful purposes.
• Hiccups in integrating with legacy systems: Many old enterprises that have been in
business for a long time have stored data in different applications and systems across
different architectures and environments. This creates problems in integrating outdated
data sources and moving data, which further adds to the time and expense of working
with big data.

What Is Weak Artificial Intelligence (AI) and What Does It Mean?


Weak artificial intelligence (weak AI) is a research and development approach to AI which
holds that AI is, and will always be, a mimic of human cognitive function, and that
computers can only appear to think but are not conscious in any sense.

Weak AI simply follows the rules set for it and is unable to deviate from them.
Characters in a computer game who act realistically within the context of their game
character but are unable to do anything else are a good illustration of weak AI. Weak AI
is also known as narrow AI.

Voice-based personal assistants like Siri and Alexa, for example, can be called weak AI
systems because they work within a limited, pre-defined set of functions, meaning that
their responses are often pre-programmed.

Weak AI is merely the position that intelligent behaviour can be modeled and used by
machines to solve complicated problems and tasks; it makes no claim that machines actually
think. Just because a machine may act intelligently does not mean it is intelligent in the
same way that a human is.

By finding patterns and making predictions, weak AI aids in the transformation of massive
data into meaningful knowledge. Meta's (previously Facebook) newsfeed, Amazon's suggested
purchases, and Apple's Siri, the iPhone technology that answers users' spoken questions,
are all examples of weak AI.

Spam filters in email are an example of weak AI; a computer uses an algorithm to determine
which messages are likely to be spam and then sends them to the spam folder instead of the
inbox.

Difference between a Strong and a Weak AI

Strong AI is a theoretical branch of AI which contends that machines can develop human
intelligence and consciousness in the same way a conscious person has them. A hypothetical
machine with strong AI capabilities is referred to as a strong AI machine.

Weak AI (also known as narrow AI) is a type of artificial intelligence that uses advanced
algorithms to complete certain problem-solving or reasoning tasks that do not require the entire
range of human cognitive abilities.

In comparison to strong AI, weak AI has fewer functions. Weak AI is unable to achieve self-
awareness or demonstrate the full spectrum of human cognitive capacities.

Weak AI refers to systems that are programmed to solve a wide variety of issues but only
perform within a limited set of functions. Strong AI, on the other hand, refers to machines that
have the ability to think like humans.

The goal is to advance AI to the point where humans can communicate with conscious,
intelligent computers that are motivated by emotions and self-awareness.

Limitations of Weak AI

Aside from its restricted capabilities, one of the issues with weak AI is the potential
for harm if the system fails. Consider a driverless car that misjudges the location of an
oncoming vehicle, resulting in a fatal collision. A terrorist using a self-driving car to
detonate explosives in a crowded area is an example of how the technology might do harm
when utilised by someone who intends to cause harm.

The loss of jobs due to the automation of an expanding number of tasks is another problem
associated with weak AI. Will unemployment grow, or will society devise new ways for
humans to generate income?

Although the thought of a huge number of workers losing their employment is frightening,
proponents of AI argue that if this happens, new jobs will emerge that we can't currently predict
as AI becomes more widely used.

Practical Examples of Weak/Narrow AI

Narrow AI refers to any AI that is in use today, not the depictions from the silver screen
or the pages of science fiction novels of robots taking over the world. Here are eight
examples of narrow AI in practice:
1. Voice-activated digital assistants (Siri, Alexa)

Digital voice assistants like Siri and Alexa, which are sometimes referred to as the best
instances of weak AI, are examples of weak AI that we rely on every day. To function
properly, the AI identifies data and replies to requests at breakneck speed.

2. Recommender systems

Whether Netflix tells you what movie to watch next, or Amazon and other retail websites
give you useful suggestions about what else you might be interested in buying, these
recommendation engines are examples of narrow AI.

3. Search engines

Search engines such as Google are examples of weak AI. When you type your query into the
box, the algorithm goes to work, classifying it and returning replies from its large
database.

4. Chatbots

If you've ever communicated with an organisation via chat, whether it's your bank, internet
service provider, or favourite e-commerce company, you've most certainly been conversing
with a chatbot powered by AI. The majority of the time, chat features are an AI algorithm that
takes care of answering simple questions, allowing humans to focus on higher-level jobs.

5. Automated vehicles

AI that enables vehicles to operate without the assistance of a human driver is considered
weak AI: algorithms are used to fulfil pre-programmed functions. Because this AI lacks the
complete cognitive powers of a human brain, the difficulty lies in programming and teaching
it to recognise any potential road hazard or situation that the car may meet.

6. Recognition of images and voice

Image recognition, which aids radiologists in detecting disease in patient scans, is one
important way narrow AI is making an impact in healthcare. Narrow AI also powers image
recognition in other industries, as well as speech recognition and translation systems
like Google Translate.

7. Analytics and predictive maintenance

Predictive analytics employs narrow AI: it examines historical data using algorithms and
machine learning to produce a prediction of a likely future outcome. In warehouses and
other places where heavy machinery is used, AI can assist in discovering maintenance
issues that need to be addressed before a machine fails.
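
A hedged sketch of this predictive-maintenance idea on invented sensor readings; the
features, threshold rule, and labels are all synthetic stand-ins for real historical
machine data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented historical data: [vibration, temperature, running hours]
# with a label saying whether the machine failed soon afterwards.
rng = np.random.default_rng(42)
readings = rng.uniform([0, 40, 0], [10, 90, 5000], size=(500, 3))
# Synthetic failure rule: high vibration combined with high heat.
failed = ((readings[:, 0] > 7) & (readings[:, 1] > 75)).astype(int)

model = RandomForestClassifier(random_state=0).fit(readings, failed)

# Flag machines whose current readings look like pre-failure history.
current = np.array([[8.5, 80.0, 3200.0], [2.0, 55.0, 1000.0]])
print(model.predict(current))        # e.g. [1 0] -> first machine at risk
print(model.predict_proba(current))  # estimated failure likelihoods
```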

8. Robots
At the moment, robots do not have minds of their own. Drones and factory robots have
limited AI and can only carry out the limited range of tasks that have been programmed
into them. During the pandemic, delivery bots were extremely helpful in complying with
social distancing orders, as were disinfecting robots.

What is strong AI?


Strong artificial intelligence (AI), also known as artificial general intelligence (AGI)
or general AI, is a theoretical form of AI used to describe a certain mindset of AI
development. If researchers are able to develop Strong AI, the machine would have
intelligence equal to humans; it would have a self-aware consciousness with the ability
to solve problems, learn, and plan for the future.

Strong AI aims to create intelligent machines that are indistinguishable from the human mind.
But just like a child, the AI machine would have to learn through input and experiences,
constantly progressing and advancing its abilities over time.

While AI researchers in both academia and the private sector are invested in the creation
of artificial general intelligence (AGI), today it exists only as a theoretical concept
rather than a tangible reality. Some individuals, like Marvin Minsky, have been quoted as
being overly optimistic about what could be accomplished within a few decades in the field
of AI, while others say that Strong AI systems cannot even be developed. Until measures of
success such as intelligence and understanding are explicitly defined, the skeptics have a
point. For now, many use the Turing test to evaluate the intelligence of an AI system.

Tests of Strong AI
Turing Test
Alan Turing developed the Turing Test in 1950 and discussed it in his paper, "Computing
Machinery and Intelligence." Originally known as the Imitation Game, the test evaluates
whether a machine's behavior can be distinguished from a human's. In this test, a person
known as the "interrogator" seeks to identify the difference between computer-generated
output and human-generated output through a series of questions. If the interrogator
cannot reliably discern the machine from the human subject, the machine passes the test.
However, if the evaluator can identify the human responses correctly, the machine is not
categorized as intelligent.

While there are no set evaluation guidelines for the Turing Test, Turing did specify that
a human evaluator would have no more than a 70% chance of correctly identifying the
machine after five minutes of questioning. The Turing Test introduced general acceptance
of the idea of machine intelligence.

However, the original Turing Test only tests for one skill set, such as text output or
chess. Strong AI needs to perform a variety of tasks equally well, which led to the
development of the Extended Turing Test. This test evaluates the textual, visual, and
auditory performance of the AI and compares it to human-generated output. This version is
used in the famous Loebner Prize competition, where a human judge guesses whether the
output was created by a human or a computer.
Chinese Room Argument (CRA)
The Chinese Room Argument was created by John Searle in 1980. In his paper, he discusses
the definitions of understanding and thinking, asserting that computers would never be
able to do either. This excerpt from his paper, hosted on Stanford's website, summarizes
his argument well:
“Computation is defined purely formally or syntactically, whereas minds have actual mental
or semantic contents, and we cannot get from syntactical to the semantic just by having the
syntactical operations and nothing else…A system, me, for example, would not acquire an
understanding of Chinese just by going through the steps of a computer program that simulated
the behavior of a Chinese speaker (p.17).”
The Chinese Room Argument proposes the following scenario:
Imagine a person, who does not speak Chinese, sits in a closed room. In the room, there is a
book with Chinese language rules, phrases and instructions. Another person, who is fluent in
Chinese, passes notes written in Chinese into the room. With the help of the language
phrasebook, the person inside the room can select the appropriate response and pass it back to
the Chinese speaker.
While the person inside the room was able to provide the correct responses using the
language phrasebook, he or she still does not speak or understand Chinese; it was just a
simulation of understanding, achieved by matching questions or statements with appropriate
responses. Searle argues that Strong AI would require an actual mind to have consciousness
or understanding. The Chinese Room Argument illustrates the flaws in the Turing Test and
demonstrates differences in how artificial intelligence is defined.

Strong AI trends
While there are no clear examples of strong artificial intelligence, the field of AI is
innovating rapidly. Another AI theory has emerged, known as artificial superintelligence
(ASI), superintelligence, or Super AI: a type of AI that would surpass strong AI and
exceed human intelligence and ability. However, Super AI is still purely speculative, as
we have yet to achieve even Strong AI. That said, there are fields where AI is playing an
increasingly important role, such as:

• Cybersecurity: Artificial intelligence will take over more roles in organizations'
cybersecurity measures, including breach detection, monitoring, threat intelligence,
incident response, and risk analysis.
• Entertainment and content creation: Computer programs are already getting better and
better at producing content, whether it is copywriting, poetry, video games, or even
movies. OpenAI's GPT-3 text-generation model is already creating content that is
almost impossible to distinguish from copy written by humans.
• Behavioral recognition and prediction: Prediction algorithms will make AI stronger,
with applications ranging from weather and stock market predictions to, even more
interestingly, predictions of human behavior. This also raises questions around
implicit bias and ethical AI. Some researchers in the AI community are pushing for a
set of anti-discriminatory rules, often associated with the hashtag #responsibleAI.

Strong AI terms and definitions


The terms artificial intelligence, machine learning and deep learning are often used in
the wrong context. They are frequently used in describing Strong AI, so it's worth
defining each term briefly:
Artificial intelligence, as defined by John McCarthy, is "the science and engineering of
making intelligent machines, especially intelligent computer programs. It is related to
the similar task of using computers to understand human intelligence, but AI does not have
to confine itself to methods that are biologically observable."
Machine learning is a sub-field of artificial intelligence. Classical (non-deep) machine
learning models require more human intervention to segment data into categories (e.g.,
through manual feature engineering).
Deep learning is also a sub-field of machine learning, one which attempts to imitate the
interconnectedness of the human brain using neural networks. Its artificial neural
networks are made up of layers of models which identify patterns within a given dataset.
They leverage a high volume of training data to learn accurately, which in turn demands
more powerful hardware, such as GPUs or TPUs. Deep learning algorithms are the most
strongly associated with human-level AI.

Architects of Artificial Intelligence


Artificial Intelligence tools and Machine Learning tools are two areas that have been
aggressively taking up the market in recent times. AI has existed since the 1950s, but it
is only in very recent years that we have seen tremendous growth in AI and its
applications. We can say that Artificial Intelligence is intelligence demonstrated by
machines, which attempts to simulate the human intelligence process.

The five major components that make Artificial Intelligence successful are:
1. Discover: The basic ability of an intelligent system to explore data from available
resources without any human intervention. The data is then processed by an ETL (extract,
transform, load) pipeline that explores the large database and automatically finds the
relationships between the content and the solution needed for the problem. This not only
solves complex issues but also identifies emergent phenomena.
2. Predict: This approach is designed to identify future happenings by classification,
ranking, and regression. The algorithms used here include random forests, linear learners
and gradient boosting. Predictions rarely go wrong, though they can be off in some
numerical values when there is bias.
3. Justify: The application needs human intervention to give a more recognizable and
believable result, so it needs to understand and justify what is wrong and right and then
give humans a correct solution for handling the situation. As in the automation industry,
it needs a nuts-and-bolts understanding of the machine to know why it was repaired and
what needs to be done further.
4. Act: An intelligent application needs to be active and live in the company in order
to discover, predict and justify.
5. Learn: The intelligent system has the habit of learning and updating itself day by
day to keep up with the world's needs.

Examples
Most of the programming languages used in AI are as follows.
Python is unique and a favorite among computer programmers because of its simple and
versatile syntax. It is very comfortable to use and runs on all operating systems, such
as Unix, Linux, Windows, and Mac. As Python has a systematic structure, it is applied in
OOP, neural networks, NLP development and various other types of programming, and it has
a wide variety of library functions.
C++ is applied mostly in AI programming tasks because of its time-sensitive nature. It
has minimal response time and a quick execution process, which is important for
developing games and search engines. It is reusable because of its inheritance and
data-hiding properties, and it is widely used to implement AI statistical techniques.
Java is another widely used AI programming language; it does not need recompilation for
each platform because of virtual machine technology. It combines the features of C and
C++ while being simpler and easier to debug. In addition, the automatic memory manager
in Java reduces the developer's work.
LISP is used in parts of AI development. LISP has a specific macro system that eases the
implementation and exploration of multiple levels of artificial intelligence. It is
mostly applied to solving logic tasks and to machine learning, and it offers programmers
liberty and fast prototyping, which makes LISP a standard, user-friendly language in AI.
PROLOG provides automatic backtracking, tree-based structuring and pattern matching as
basic mechanisms, which are essential for AI. In addition, it is extensively applied in
medical science.

Artificial intelligence has successfully set milestones in all industries, such as
e-commerce, biotechnology, diagnosis of diseases, military, mathematics and logistics,
heavy industry, finance, transportation, telecommunications, aviation, digital marketing,
telephone customer services, agriculture, and gaming.

Areas and Applications of Artificial Intelligence


AI is being used extensively in a large number of areas. Let us discuss some of them:
1. Machine Learning
In Machine Learning, a goal is defined, and the steps to reach that goal have to be
learned by the machine. Let's take an example where we have a sample set of pictures of
cats and lions. The goal of the model is to say yes whenever a picture of a cat comes on
the screen. The machine can learn this by being exposed to a huge number of cat pictures
beforehand, so that it trains itself enough to identify a cat as soon as one comes on
the screen.
2. Robotics in Artificial Intelligence Tools
This area of machine learning focuses on the building and manufacturing of robots. Today,
robots exist in many forms: the ATM from which we withdraw cash is one form of robot, and
then there are many intelligent working robots. Amazon's warehouses have more than a
hundred thousand robots that do the shipping work inside the warehouse.
3. Natural Language Processing (NLP)
The process of interpreting speech or voice and text is known as Natural Language
Processing. We can derive many important conclusions through NLP. For example, we can
automate the task of feedback categorization: to find out whether users are happy or sad
with a service, we can apply NLP to analyze their comments and arrive at a conclusion.
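
A minimal, hedged sketch of that feedback-categorization idea, training a tiny sentiment
classifier on invented comments (a production system would use far more data and richer
NLP):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented user feedback, labeled as happy/sad with the service.
comments = [
    "love the service, super fast delivery",
    "great support team, very happy",
    "terrible experience, order arrived broken",
    "very slow and unhelpful, quite sad about this",
]
moods = ["happy", "happy", "sad", "sad"]

# TF-IDF text features plus logistic regression: a classic text pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, moods)

print(model.predict(["delivery was fast, I am happy"]))  # expected: ['happy']
print(model.predict(["broken item and slow support"]))   # expected: ['sad']
```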
4. Vision in Artificial Intelligence Tools
This field gives the machine the ability to see. For example, this ability can be given to a robot
or to a car that can use digital signal processing techniques to see through a camera.
5. Autonomous Driving and Vehicles
This area of Artificial Intelligence focuses on making driving and vehicles autonomous.
For instance, Uber has started making driverless autonomous vehicles, which are already
operating in a few cities.

Artificial intelligence holds a much higher significance and importance than what is
covered in this article, and it will continue to grow in the years to come. Don't miss
out; get involved, and have fun with the technology as much as you can.
