AI Complete Notes - Unit 1 To Unit 5


Unit 1st AI KMC 101

Krishna Engineering College Ghaziabad

Syllabus: The evolution of AI to the present , Various approaches to AI , What should all
engineers know about AI? , Other emerging technologies , AI and ethical concerns

Q 1 What is Intelligence?
• Intelligence can have many faces: creativity, solving problems, pattern recognition,
classification, learning, optimization, surviving in an environment, language processing,
planning, and knowledge.

• Intelligence: "the capacity to learn and solve problems"

• the ability to solve novel problems

• the ability to act rationally

• the ability to act like humans

Q 2 What's involved in Intelligence?

Ability to interact with the real world


• to perceive, understand, and act
• e.g., speech recognition and image understanding
• e.g., ability to take actions
Reasoning and Planning
• solving new problems, planning, and making decisions
• ability to deal with unexpected problems, uncertainties
Learning and Adaptation
• we are continuously learning and adapting
• our internal models are always being “updated”
• e.g., a baby learning to categorize and recognize animals
Q 3 What is AI?

❑ The field of AI aims to understand how humans perceive, interact and make decisions, and
then to take this understanding to create machines that rival human competence in a wide
range of tasks.
❑ The study of mental faculties through the use of computational models.

❑ The study of how to make computers do things at which, at the moment, people are
better. (Rich & Knight, 1991)

❑ It is the science and engineering of making intelligent machines, especially intelligent
computer programs. It is related to the similar task of using computers to understand
human intelligence, but AI does not have to confine itself to methods that are
biologically observable. (John McCarthy, Stanford University)

❑ A field of study that seeks to explain and emulate intelligent behavior in terms of
computational processes.

❑ AI is essentially a mapping of intelligence, where intelligence itself is boundless. The
boundaries of AI are:

1. Acting Humanly 2. Thinking Humanly

3. Thinking Rationally 4. Acting Rationally

Q 4 What is the Turing Test?

Acting humanly: The Turing Test approach

• Alan Turing proposed the "Turing Test" in the year 1950.

• He proposed that the Turing Test can be used to determine whether or not a machine can
be considered intelligent.
• The computer would need to possess the following capabilities:

• natural language processing to enable it to communicate successfully in English

• knowledge representation to store what it knows or hears

Ask questions of two entities and receive answers from both. If you can't tell which of the
entities is human and which is a computer program, then you have been fooled, and we should
therefore consider the computer to be intelligent.

Turing proposed that a computer can be said to possess artificial intelligence if it can mimic
human responses under specific conditions. The original Turing Test requires three terminals,
each of which is physically separated from the other two. One terminal is operated by a
computer, while the other two are operated by humans.
During the test, one of the humans functions as the questioner, while the second human and the
computer function as respondents. The questioner interrogates the respondents within a specific
subject area, using a specified format and context. After a preset length of time or number of
questions, the questioner is then asked to decide which respondent was human and which was a
computer.
The test is repeated many times. If the questioner makes the correct determination in half of the
test runs or less, the computer is considered to have artificial intelligence because the questioner
regards it as "just as human" as the human respondent.
Q 5 Explain the History of Artificial Intelligence

(1943-1952)
o Year 1943: The first work which is now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician and a pioneer of machine
learning, published "Computing Machinery and Intelligence" in 1950, in which he
proposed a test. The test checks a machine's ability to exhibit intelligent behaviour
equivalent to human intelligence, and is now called the Turing test.

The birth of Artificial Intelligence (1952-1956)


o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence
program", which was named "Logic Theorist".
o Year 1956: The term "Artificial Intelligence" was first adopted by American computer
scientist John McCarthy at the Dartmouth Conference. For the first time, AI was
established as an academic field.

(1956-1974)
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
(1974-1980)
o The duration between the years 1974 and 1980 was the first AI winter. An AI winter
refers to a period in which computer scientists dealt with a severe shortage of
government funding for AI research.
o During AI winters, public interest in artificial intelligence decreased.

(1980-1987)
o Year 1980: After the AI winter, AI came back with "expert systems". Expert systems
were programs that emulated the decision-making ability of a human expert.
o In the Year 1980, the first national conference of the American Association of Artificial
Intelligence was held at Stanford University.

(1987-1993)
o The duration between the years 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research, due to high costs
without efficient results. Even expert systems such as XCON proved very costly to
maintain.

(1993-2011)
o Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry
Kasparov, becoming the first computer to beat a world chess champion.
o Year 2002: for the first time, AI entered the home in the form of Roomba, a vacuum
cleaner.
o Year 2006: AI entered the business world by the year 2006. Companies like Facebook,
Twitter, and Netflix started using AI.

Deep learning, big data and artificial general intelligence (2011-present)


o Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to
answer complex questions as well as riddles. Watson proved that it could
understand natural language and solve tricky questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now", which was able
to provide information to the user as a prediction.
o Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition based
on the famous "Turing test."
o Year 2018: IBM's "Project Debater" debated complex topics with two master
debaters and performed extremely well.
o Google also demonstrated an AI program, "Duplex", a virtual assistant that booked a
hairdresser appointment over the phone, and the person on the other end did not notice
that she was talking to a machine.
Q 6 What are the different applications of AI, and what can AI do today?

Application of AI


1. AI in Astronomy

o Artificial Intelligence can be very useful to solve complex universe problems. AI technology can be helpful
for understanding the universe such as how it works, origin, etc.

2. AI in Healthcare

o In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to
have a significant impact on this industry.
o Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help
doctors with diagnoses and can warn when patients are worsening, so that medical help can reach the
patient before hospitalization.

3. AI in Gaming

o AI can be used for gaming purposes. AI machines can play strategic games like chess, where the
machine needs to think about a large number of possible positions.

4. AI in Finance

o AI and finance industries are the best matches for each other. The finance industry is implementing
automation, chatbot, adaptive intelligence, algorithm trading, and machine learning into financial
processes.

5. AI in Data Security
o The security of data is crucial for every company, and cyber-attacks are growing very rapidly in the digital
world. AI can be used to make your data more safe and secure. Some examples, such as the AEG bot and the
AI2 Platform, are used to detect software bugs and cyber-attacks more effectively.

6. AI in Social Media

o Social Media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to
be stored and managed in a very efficient way. AI can organize and manage massive amounts of data. AI
can analyze lots of data to identify the latest trends, hashtag, and requirement of different users.

7. AI in Travel & Transport

o AI is in high demand in the travel industry. AI is capable of doing various travel-related tasks,
from making travel arrangements to suggesting hotels, flights, and the best routes to customers.
Travel companies are using AI-powered chatbots that can interact with customers in a human-like way for
better and faster responses.

8. AI in Automotive Industry

o Some automotive companies are using AI to provide virtual assistants to their users for better performance.
For example, Tesla has introduced TeslaBot, an intelligent virtual assistant.
o Various companies are currently working on developing self-driving cars, which can make your journey
safer and more secure.

9. AI in Robotics:

o Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to
perform some repetitive task, but with the help of AI, we can create intelligent robots that can
perform tasks from their own experience without being pre-programmed.
o Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots
Erica and Sophia have been developed, which can talk and behave like humans.

10. AI in Entertainment

o We are currently using some AI based applications in our daily life with some entertainment services such
as Netflix or Amazon. With the help of ML/AI algorithms, these services show the recommendations for
programs or shows.

11. AI in Agriculture

o Agriculture is an area which requires various resources, labor, money, and time for the best result. Nowadays
agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI for agricultural
robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.
12. AI in E-commerce

o AI is providing a competitive edge to the e-commerce industry, and it is becoming increasingly central to the
e-commerce business. AI is helping shoppers discover associated products in their recommended size,
color, or even brand.

13. AI in education:

o AI can automate grading so that the tutor has more time to teach. An AI chatbot can communicate with
students as a teaching assistant.
o In the future, AI could work as a personal virtual tutor for students, accessible easily at any
time and any place.

Social Networking

• In Facebook, when you upload photos, the service automatically
highlights some faces and suggests friends to tag. How can it instantly identify which of
your friends is in the photo?

• Google uses AI to ensure that nearly all of the email landing in your inbox is
authentic. Their filters attempt to sort emails into the following categories (Primary
,Social, Promotions, Updates, Forums, Spam) The program helps your emails get
organized so you can find your way to important communications quicker.

• Chatbots : Chatbots recognize words and phrases in order to (hopefully) deliver helpful
content.

• Chatbots attempt to mimic natural language, simulating conversations as they help with
routine tasks such as booking appointments, taking orders etc

Q 7 What are some notable AI applications?

Robotic vehicles :
• A driverless robotic car named STANLEY sped through the rough terrain of the Mojave
Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand
Challenge.
• STANLEY is a Volkswagen Touareg outfitted with cameras, radar, and laser rangefinders
to sense the environment and onboard software to command the steering, braking, and
acceleration (Thrun, 2006).

Speech recognition

A traveller calling United Airlines to book a flight can have the entire conversation
guided by an automated speech recognition and dialog management system.
Autonomous planning and scheduling

• A hundred million miles from Earth, NASA’s Remote Agent program became the first
on-board autonomous planning program to control the scheduling of operations for a
spacecraft.
• REMOTE AGENT generated plans from high-level goals specified from the ground and
monitored the execution of those plans—detecting, diagnosing, and recovering from
problems as they occurred

Game playing

• IBM’s DEEP BLUE became the first computer program to defeat the world champion in
a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition
match.

Spam fighting

• Each day, learning algorithms classify over a billion messages as spam, saving the
recipient from having to waste time deleting what, for many users, could comprise 80%
or 90% of all messages, if not classified away by algorithms

Logistics planning

• During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and
Replanning Tool, DART

• To do automated logistics planning and scheduling for transportation.

• This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for
starting points, destinations, routes, and conflict resolution among all parameters.

• The AI planning techniques generated in hours a plan that would have taken weeks with
older methods

Robotics

• The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners for
home use.

• The company also deploys the more rugged PackBot to Iraq and Afghanistan, where it is
used to handle hazardous materials, clear explosives, and identify the location of snipers

Machine Translation

• A computer program automatically translates from Arabic to English

• The program uses a statistical model built from examples of Arabic-to-English
translations and from examples of English text totaling two trillion words.
Q 8 What are the different types of AI?

• Different Artificial Intelligence entities are built for different purposes, and that’s how
they vary.

• AI can be classified as Type 1 (based on capabilities) and Type 2 (based on functionalities).

Type-1:

Types of Artificial Intelligence:

• Artificial Narrow Intelligence (ANI)

• Artificial General Intelligence (AGI)

• Artificial Super Intelligence (ASI)

Artificial Narrow Intelligence (ANI)

• Designed to solve one single problem and able to execute a single task really well.

• They have narrow capabilities, like recommending a product for an e-commerce user or
predicting the weather.
Artificial General Intelligence (AGI)

• AGI is still a theoretical concept.

• It’s defined as AI which has a human-level of cognitive function, across a wide variety
of domains such as language processing, image processing, reasoning etc.

Artificial Super Intelligence (ASI)

• It would be able to surpass all human capabilities.

• This would include making decisions, taking rational actions, and even things
like making better art and building emotional relationships.

Q 9 What are the types of Artificial Intelligence (Type 2, based on functionalities)?

• Based on the ways the machines behave, there are four types of Artificial Intelligence
approaches –

• Reactive Machines

• Limited Memory

• Theory of Mind, and

• Self-awareness.

Reactive Machines

• These machines are the most basic form of AI applications. An example of a reactive
machine is Deep Blue, IBM's chess-playing supercomputer.

• The AI teams do not use any training sets to feed the machines, nor do the latter store
data for future references.

• Based on the move made by the opponent, the machine decides/predicts the next move.

Limited Memory

• These machines belong to the class II category of AI applications. Self-driven cars are the
perfect example.

• These machines are fed with data and are trained with other cars’ speed and direction,
lane markings, traffic lights, curves of roads, and other important factors, over time
Theory of Mind

• This is where we currently are: researchers are struggling to make this concept work,
but we are not there yet.

• Theory of mind is the concept whereby bots will be able to understand human
emotions and thoughts, and how to react to them.

Self-Awareness

• These machines are the extension of the Class III type of AI.

• It is one step ahead of understanding human emotions.

This is the phase where AI teams build machines with a self-awareness factor programmed into
them, which seems far-fetched from where we stand today.

Q 10 What is the difference between weak and strong AI?

1. Strong AI refers to a machine that approaches or supersedes human intelligence.


Strong artificial intelligence (AI), also known as artificial general intelligence (AGI) or
general AI, is a theoretical form of AI used to describe a certain mindset of AI
development.
Strong AI's goal is to develop artificial intelligence to the point where the machine's intellectual
capability is functionally equal to a human's.

• if it can do typically human tasks,


• If it can apply a wide range of background knowledge and
• If it has some degree of self-consciousness
Strong AI aims to build machines whose overall intellectual ability is indistinguishable from that
of a human being.

2. Weak AI refers to the use of software to study or accomplish specific problem solving or
reasoning tasks that do not encompass the full range of human cognitive abilities.
Example: a chess program such as Deep Blue.

Weak AI does not achieve self-awareness and does not demonstrate the full range of human-level
cognitive abilities; it is merely an intelligent, specific problem-solver.
Weak AI:

• Narrow application with a limited scope.
• Good at specific tasks.
• Uses supervised and unsupervised learning to process data.
• Example: Siri, Alexa

Strong AI:

• A wider application with a more vast scope.
• Has incredible human-level intelligence.
• Uses clustering and association to process data.
• Example: Advanced robotics

Weak artificial intelligence (AI), also called narrow AI, is a type of artificial intelligence that is
limited to a specific or narrow area. Weak AI simulates human cognition. It has the potential to
benefit society by automating time-consuming tasks and by analyzing data in ways that humans
sometimes cannot.
Q11 Give the Difference between Supervised and Unsupervised Learning

• In Supervised learning, you train the machine using data which is well "labeled."
• Unsupervised learning is a machine learning technique, where you do not need to
supervise the model.
• Supervised learning allows you to collect data or produce a data output from the previous
experience.
• Unsupervised machine learning helps you find all kinds of unknown patterns in data.
• For example, with supervised learning you can determine the time taken to reach home
based on weather conditions, time of day, and holidays.
• For example, a baby can learn to identify dogs, and later recognize other dogs, based on
past supervised learning.
• Regression and Classification are two types of supervised machine learning techniques.
• Clustering and Association are two types of Unsupervised learning.
• In a supervised learning model, input and output variables will be given while with
unsupervised learning model, only input data will be given.

• One practical example of supervised learning problems is predicting house prices. How is
this achieved?

• First, we need data about the houses: square footage, number of rooms, features, whether
a house has a garden or not, and so on. We then need to know the prices of these houses,
i.e. the corresponding labels. By leveraging data coming from thousands of houses, their
features and prices, we can now train a supervised machine learning model to predict a
new house’s price based on the examples observed by the model.
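The house-price idea above can be sketched in a few lines of code. This is a minimal illustration, assuming a single feature (square footage) and a made-up, perfectly linear dataset; real models would use many features and a library such as scikit-learn.

```python
# A minimal sketch of supervised regression: learning house price from
# square footage. The training data below is invented for illustration.

# Labeled training data: (square_footage, price_in_dollars)
training_data = [
    (1000, 150000),
    (1500, 200000),
    (2000, 250000),
    (2500, 300000),
]

# Fit a line price = w * sqft + b by ordinary least squares.
n = len(training_data)
mean_x = sum(x for x, _ in training_data) / n
mean_y = sum(y for _, y in training_data) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in training_data) / \
    sum((x - mean_x) ** 2 for x, _ in training_data)
b = mean_y - w * mean_x

# Predict the price of an unseen 1800 sq ft house.
predicted = w * 1800 + b
print(round(predicted))  # 230000: interpolated from the examples
```

Because the model has seen labeled examples (inputs paired with prices), it can generalize to a house it never saw, which is exactly the supervised-learning setting described above.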

• Example: Is it a cat or a dog?


• Image classification is a popular problem in the computer vision field. Here, the goal is to
predict what class an image belongs to. In this set of problems, we are interested in
finding the class label of an image. More precisely: is the image of a car or a plane? A cat
or a dog?
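As a toy sketch of the cat-or-dog idea, the snippet below classifies with 1-nearest-neighbour on two hand-picked numeric features (weight and height). The numbers are invented purely for illustration; real image classifiers learn from pixel data with trained models.

```python
# A toy "is it a cat or a dog?" classifier: 1-nearest-neighbour on two
# invented features (weight in kg, height in cm).

labeled_examples = [
    ((4.0, 25.0), "cat"),
    ((3.5, 23.0), "cat"),
    ((20.0, 50.0), "dog"),
    ((30.0, 60.0), "dog"),
]

def classify(features):
    """Return the label of the closest labeled example."""
    def distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    _, label = min(labeled_examples, key=lambda ex: distance(ex[0], features))
    return label

print(classify((4.2, 24.0)))   # close to the cat examples -> "cat"
print(classify((25.0, 55.0)))  # close to the dog examples -> "dog"
```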

• Example: Finding customer segments

• Clustering is an unsupervised technique where the goal is to find natural groups or
clusters in a feature space and interpret the input data. There are many different clustering
algorithms. One common approach is to divide the data points in a way that each data
point falls into a group that is similar to other data points in the same group, based on a
predefined similarity or distance metric in the feature space.

• Clustering is commonly used for determining customer segments in marketing data.
Being able to determine different segments of customers helps marketing teams approach
these customer segments in unique ways. (Think of features like gender, location, age,
education, income bracket, and so on.)

Q12 Why Artificial Intelligence?

Before learning about Artificial Intelligence, we should know why AI is important and why we
should learn it. Following are some main reasons to learn about AI:

o With the help of AI, you can create such software or devices which can solve real-world
problems very easily and with accuracy such as health issues, marketing, traffic issues,
etc.
o With the help of AI, you can create your personal virtual Assistant, such as Cortana,
Google Assistant, Siri, etc.
o With the help of AI, you can build such Robots which can work in an environment where
survival of humans can be at risk.
o AI opens a path for other new technologies, new devices, and new Opportunities.
o The Google search engine uses numerous AI (machine learning) techniques: grouping
together top news stories from numerous sources, analyzing data from over 3 billion
web pages to improve search results, and analyzing which search results are most often
followed, i.e. which results are most relevant.
o Artificial intelligence takes the role of an experienced clinical assistant who helps doctors
make faster and more reliable diagnoses.
o We already see AI applications in the areas of imaging and diagnostics, and oncology.
o AI algorithms are able to take information from electronic health records, prescriptions,
insurance records and even wearable sensor devices to design a personalized treatment
plan for patients.
o These AI-related technologies accelerate the discovery and creation of new medicines
and drugs.
Q 13 What are the Different Branches of AI

1. Machine Learning

It is the science that enables machines to learn from, interpret, and investigate data for
solving real-world problems. ML algorithms are created with complex mathematical
techniques and coded in a machine-readable form to make a complete ML system.
 Applications of Machine Learning
 Computer vision which is used for facial recognition and attendance mark through
fingerprints or vehicle identification through number plate.
 Information retrieval from search engines, such as text search and image search.
 Automated email marketing with specified target identification.
 Medical diagnosis of cancer tumors or anomaly identification of any chronic disease

2. Neural Network

Neural networks attempt to replicate the human brain, which comprises a vast number of
interconnected neurons; coding such brain-like neurons into a system or machine is what
a neural network does.
Applications of neural networks
Character Recognition - The idea of character recognition has become very important
as handheld devices like the Palm Pilot are becoming increasingly popular. Neural
networks can be used to recognize handwritten characters.
Image Compression - Neural networks can receive and process vast amounts of
information at once, making them useful in image compression. With the Internet
explosion and more sites using more images on their sites, using neural networks for
image compression is worth a look.
Stock Market Prediction - The day-to-day business of the stock market is extremely
complicated. Many factors weigh in whether a given stock will go up or down on any
given day. Since neural networks can examine a lot of information quickly and sort it all
out, they can be used to predict stock prices.
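A single artificial neuron can be sketched in code. The example below is a classic perceptron learning the logical AND function; it is a toy illustration of the neuron-like units neural networks are built from, not a production network.

```python
# A single artificial neuron (perceptron) learning the logical AND
# function with Rosenblatt's error-correction rule.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is >= 0."""
    return 1 if x >= 0 else 0

# Training data: inputs and the desired AND output.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

# Repeatedly nudge the weights toward the correct answers.
for _ in range(20):
    for (x1, x2), target in samples:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

for (x1, x2), target in samples:
    print((x1, x2), step(weights[0] * x1 + weights[1] * x2 + bias))
```

AND is linearly separable, so the perceptron converges in a handful of passes; character recognition and stock prediction use the same principle scaled up to many neurons and layers.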
3. Robotics

It covers the designing, producing, operating, and usage of robots, and deals with the
computer systems for their control, intelligent behaviour, and information processing.
Outer Space Applications: Robots play a very important role in outer space
exploration. Robotic unmanned spacecraft are used as the key to exploring stars,
planets, and more.
Military Applications: the Predator drone, which is capable of taking surveillance
photographs, and even accurately launching missiles at ground targets, without a pilot.
 Intelligent Home Applications: We can monitor home security, environmental
conditions and energy usage with intelligent robotic home systems. Doors and windows
can be opened automatically, and appliances such as lighting and air conditioning can be
pre-programmed to activate. This assists occupants irrespective of their state of mobility.

4 Expert systems

These are built to deal with complex problems via reasoning through bodies of
expert knowledge, expressed especially as "if-then" rules rather than conventional
procedural code.
 Medical Domain: Diagnosis systems to deduce the cause of a disease from observed
data; conducting medical operations on humans.
 Monitoring Systems : Comparing data continuously with observed system or with
prescribed behavior such as leakage monitoring in long petroleum pipeline.
 Process Control Systems : Controlling a physical process based on monitoring.
 Knowledge Domain : Finding out faults in vehicles, computers.
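The "if-then" style above can be sketched as a tiny forward-chaining rule engine for the vehicle-fault knowledge domain. The rules and fact names here are invented for illustration; real expert systems hold hundreds of rules elicited from human experts.

```python
# A tiny forward-chaining expert system for vehicle fault diagnosis.
# Each rule is (set of required facts, fact to conclude).
rules = [
    ({"engine_wont_start", "lights_dim"}, "battery_flat"),
    ({"engine_wont_start", "lights_ok"}, "starter_motor_fault"),
    ({"battery_flat"}, "recommend_recharge_battery"),
]

def infer(facts):
    """Apply if-then rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Observed symptoms lead to a diagnosis and a recommendation.
print(infer({"engine_wont_start", "lights_dim"}))
```

The knowledge (the rules) is kept separate from the inference mechanism (the `infer` loop), which is the defining architecture of an expert system.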

5 Fuzzy logic

It is a technique that represents and manipulates uncertain information by measuring the
degree to which a hypothesis is correct.
It extends logic from Boolean true/false to allow for partial truths, a characteristic of
human thinking used in many expert systems.
Consumer products that use fuzzy control include:
Washing machine (Matsushita, Hitachi)
Automatic transmission system (Nissan, Subaru, Mitsubishi)
Air conditioner (Mitsubishi)
Vacuum cleaner (Panasonic)
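The idea of partial truth can be sketched with a single membership function, the kind of rule a fuzzy appliance controller evaluates. The temperature breakpoints below are assumed values for illustration, not taken from any real controller.

```python
# Fuzzy logic sketch: instead of "warm: true/false", a triangular
# membership function gives a degree of truth between 0 and 1.

def warm_membership(temp_c, low=15.0, peak=25.0, high=35.0):
    """Degree to which temp_c counts as 'warm' (triangular profile)."""
    if temp_c <= low or temp_c >= high:
        return 0.0
    if temp_c <= peak:
        return (temp_c - low) / (peak - low)
    return (high - temp_c) / (high - peak)

for t in (10, 20, 25, 30, 40):
    print(t, warm_membership(t))  # 0.0, 0.5, 1.0, 0.5, 0.0
```

A fuzzy controller combines several such degrees (e.g. "warm" and "very dirty" for a washing machine) to choose a smoothly varying output rather than an on/off decision.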

6 Natural Language Processing

It is the processing of human language by computer programs. Examples include
spam detection, which looks at the subject line or text of an email and checks whether it
is junk.
o It performs automated generation and understanding of natural human languages.
o It translates text or speech from one natural language to another.
o It uses computer programs to translate words and sentences from one language to
another without much interpretation by humans.
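The spam-detection example can be sketched as a naive keyword score over the subject line. The keyword list and threshold are assumptions made for illustration; real filters use statistical models (e.g. naive Bayes) trained on large corpora.

```python
# A minimal keyword-based spam scorer for email subject lines.
SPAM_KEYWORDS = {"winner", "free", "prize", "urgent", "lottery"}

def spam_score(subject):
    """Fraction of words in the subject line that look spammy."""
    words = subject.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in SPAM_KEYWORDS)
    return hits / len(words)

def is_junk(subject, threshold=0.3):
    return spam_score(subject) >= threshold

print(is_junk("URGENT winner claim your FREE prize"))   # True
print(is_junk("Minutes from Monday's project meeting")) # False
```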

Q 14 What are the challenges in AI and Ethical Concern of AI

1. Unemployment

• Many companies and start-ups are automating a lot of the processes previously done by
humans.
• This leads to rising unemployment in many developed countries, and with the
growth of AI it will only increase; so far, few countermeasures appear to have been
taken.
• An example is the implementation of driverless cars: assuming the time comes
when taxis are driverless, what then becomes of the millions of taxi drivers in the
country?
• This is only one of many major employment providers that could suffer massive
job losses in the future.

2. AI is Imperfect — What if it Makes a Mistake?


• AIs are not immune to making mistakes, and machine learning takes time to become
useful. If trained well, using good data, AIs can perform well. However, if we feed
AIs bad data or make errors in internal programming, they can be harmful.

3. Should AI Systems Be Allowed to Kill?


• Some AIs are software that writes its own updates and renews itself. This means that, as
programmed, the machine is not created to do only what we want it to do; it does what it
learns to do. One described incident involved a robot called Tallon: its
computerized gun jammed and opened fire uncontrollably after an explosion, killing 9
people and wounding 14 more.

4. Legal Issues

o AI continues to grow so fast that the law can't keep up with the pace.
o A lot of legal issues arising with AI are normally determined on first impression.

An example is Uber's driverless testing in Arizona, United States. A test car
accidentally killed a 49-year-old lady in March 2018, and Uber had to suspend all testing
due to this accident. On a first-impression basis, test permits are given, but how can this
be regulated to ensure safety? The permit was presumably given based on research,
accident mitigation, and other variables, but a fatal accident still happened.
5. Reduction of human to human Interaction

• AI is very convenient for simple tasks like automating e-mails and using chatbots to
make conversation, but there will come a day when it can do much more.
• Humans tend to attach mental states to robots that are not meant to have any. Imagine
an AI that plays games with you, or an AI that regularly chats with you as though it
were your best friend.

This kind of interaction causes attachment. The damage that mobile phones and the internet
have done to social interaction can already be seen; in a situation where such
attachments grow, the already declining level of social interaction could drop even further
if care is not taken.

6. Lack of Privacy

• AI's sub-category machine learning can predict shopping preferences, music preferences,
and the locations a person might be at a certain time, and, as mentioned earlier, predict
the possibility of someone being a criminal, all from the data that is shared.
• When a machine is tracking your behaviour, there is almost zero privacy in these
situations. And during an investigation, can such data be valid in court?

.................................
UNIT 2

History of Data , Data Storage And Importance of Data and its Acquisition , The Stages
of data processing ,Data Visualization , Regression, Prediction & Classification ,
Clustering & Recommender Systems

Q1 What is data

• Data – a collection of facts (numbers, words, measurements, observations, etc) that has
been translated into a form that computers can process
• In general, data is simply another word for information.
• But in computing and business (most of what you read about in the news when it comes
to data, especially if it's about Big Data), data refers to information that is
machine-readable as opposed to human-readable.

• Human-readable (also known as unstructured data) refers to information that only
humans can interpret and study, such as an image or the meaning of a block of text.
• If it requires a person to interpret it, that information is human-readable.
• Machine-readable (or structured data) refers to information that computer programs can
process.
• A program is a set of instructions for manipulating data. And when we take data and
apply a set of programs, we get software.

Q 2 Why is data collection important

• It is through data collection that a business or management has the quality information
they need to make informed decisions from further analysis, study, and research.
• Without data collection, companies would stumble around in the dark using outdated
methods to make their decisions.
• Data collection instead allows them to stay on top of trends, provide answers to problems,
and analyze new insights to great effect.

Q3 What is Data Storage And Importance of Data and its Acquisition

• Data acquisition is the process of sampling signals that measure real world physical
conditions and converting the resulting samples into digital numeric values that can be
manipulated by a computer.
• Data acquisition systems, abbreviated as DAS or DAQ, typically convert analog
waveforms into digital values for processing.
• The primary purpose of a data acquisition system is to acquire and store the data, but
such systems are also intended to provide real-time and post-recording visualization and
analysis of the data.
• The components of data acquisition systems include:
• Sensors, to convert physical parameters to electrical signals.
• Signal conditioning circuitry, to convert sensor signals into a form that can be converted
to digital values.
• Analog-to-digital converters, to convert conditioned sensor signals to digital values.

Data acquisition applications are usually controlled by software programs developed using
various general purpose programming languages such as:

• Assembly,
• BASIC, C, C++,
• C#, Fortran, Java,
• LabVIEW, Lisp, Pascal, etc
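The core sampling-and-quantization step that an analog-to-digital converter performs can be sketched in a few lines of Python. This is a minimal illustration: the 8-bit resolution, the ±1 V input range, and the 5 Hz test signal below are invented for the example, not taken from any particular DAQ product.

```python
import math

def quantize(value, v_min=-1.0, v_max=1.0, bits=8):
    """Map an analog voltage to an integer code, as an ADC would."""
    levels = 2 ** bits
    # Clamp to the converter's input range, then scale to [0, levels - 1].
    clamped = max(v_min, min(v_max, value))
    return round((clamped - v_min) / (v_max - v_min) * (levels - 1))

# Sample a 5 Hz sine wave (the "physical signal") at 100 Hz for 10 samples.
sample_rate = 100
samples = [quantize(math.sin(2 * math.pi * 5 * n / sample_rate))
           for n in range(10)]
print(samples)
```

Each sample is now a plain integer that software can store, visualize, or analyze.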

Q 4 What are the benefits of Data Storage.

The benefits of data storage can be summarized as follows:

Capacity: Organizations may store the equivalent of a roomful of data on sets of disks that
take up little space. A simple disk for a personal computer holds the equivalent of 500 printed
pages.
Reliability: Data in secondary storage is basically safe, since secondary storage is physically
reliable. Also, it is more difficult for unauthorized users to access the data.

Convenience: With the help of a computer, authorized people can locate and access data
quickly.

Q 6 What are the different Stages of data processing.

Collection
Collection of data refers to gathering of data. The data gathered should be defined and accurate.
Preparation
Preparation is the process of constructing a dataset from data gathered from different sources,
for use in the processing step of the cycle.
Input
Input refers to the supply of data for processing. It can be fed into the computer through any
input device, such as a keyboard, scanner, or mouse.
Processing
Processing refers to the actual execution of instructions. In this stage, raw facts or data are
converted into meaningful information.
Output and Interpretation
In this stage, the output is displayed to the user in the form of text, audio, video, etc.
Interpretation of the output provides meaningful information to the user.
Storage
In this stage, we store data, instructions, and information in permanent memory for future
reference.
Collection, manipulation, and processing of collected data for the required use is known as data
processing. It is a technique normally performed by a computer; the process includes retrieving,
transforming, or classifying information.
However, the processing of data largely depends on the following −

• The volume of data that needs to be processed


• The complexity of data processing operations
• Capacity and inbuilt technology of respective computer system
• Technical skills
• Time constraints
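The stages above can be sketched as a tiny pipeline in Python. The temperature readings and the validation rule are invented for illustration; the point is only how collection, preparation, processing, and output chain together.

```python
# Each stage of the cycle as a small function; the data is a list of raw
# temperature readings (strings, as they might arrive from an input device).

def collect():                       # Collection: gather raw facts
    return ["21.5", "22.0", "bad", "23.1"]

def prepare(raw):                    # Preparation: drop records that fail validation
    return [r for r in raw if r.replace(".", "", 1).isdigit()]

def process(clean):                  # Processing: turn raw facts into information
    values = [float(v) for v in clean]
    return {"count": len(values), "mean": sum(values) / len(values)}

def output(info):                    # Output and interpretation
    return f"{info['count']} readings, mean {info['mean']:.1f} degrees"

result = output(process(prepare(collect())))
print(result)
```

The final string could then be stored (the Storage stage) for future reference.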

Q 7 What is Data Visualization and what are the types of data Visualization.

• Visualization transforms data into images that effectively and accurately represent
information about the data.
• Data visualization is a graphic representation that expresses the significance of data.
• It reveals insights and patterns that are not immediately visible in the raw data. It is an art
through which information, numbers, and measurements can be made more
understandable.
• Data visualization converts large and small data sets into visuals that are easy for
humans to understand and process.
• Data visualization tools provide accessible ways to understand outliers, patterns, and
trends in the data.
• In the world of Big Data, the data visualization tools and technologies are required to
analyze vast amounts of information.
• The main goal of data visualization is to communicate information clearly and effectively
through graphical means.
• By using visual elements like charts, graphs, and maps, data visualization tools provide
an accessible way to see and understand trends, outliers, and patterns in data.

 Different types of data Visualization Charts

Chart : The easiest way to show the development of one or several data sets is a chart. Charts
vary from bar and line charts that show the relationship between elements over time to pie charts
that demonstrate the components or proportions between the elements of one whole.
 Plots : Plots allow you to distribute two or more data sets over a 2D or even 3D space to
show the relationship between these sets and the parameters on the plot.

• Tables and Maps

Q8 What are the different tools for Data visualization.

Data visualization allows you to interact with data. Google, Apple, Facebook, and Twitter all
use data visualization to ask better questions of their data and make better business decisions.
Here are some data visualization tools that help you visualize data:

1. MS Excel
• You can display your data analysis reports in a number of ways in Excel. However, if
your data analysis results can be visualized as charts that highlight the notable points in
the data, your audience can quickly grasp what you want to project in the data. It also
leaves a good impact on your presentation style
• In Excel, charts are used to make a graphical representation of any set of data. A chart is
a visual representation of the data, in which the data is represented by symbols such as
bars in a Bar Chart or lines in a Line Chart. Excel provides you with many chart types
and you can choose one that suits your data or you can use the Excel Recommended
Charts option to view charts customized to your data and select one of those.

2. Tableau : Tableau is a data visualization tool. You can create graphs, charts, maps, and
many other graphics.
3 Infogram : Infogram is also a data visualization tool. It works in a few simple steps:
1. First, you choose among many templates, personalize them with additional visualizations
like maps, charts, videos, and images.
2. Then you are ready to share your visualization

4 Plotly : Plotly helps you create a slick, sharp chart in just a few minutes, starting from a
simple spreadsheet.
5 Chartblocks : Chartblocks is an easy-to-use online tool that requires no coding and builds
visualizations from databases, spreadsheets, and live feeds.

Q 9 What Is Regression ?

Regression is a statistical method used in finance, investing, and other disciplines that attempts to
determine the strength and character of the relationship between one dependent variable (usually
denoted by Y) and a series of other variables (known as independent variables).

In regression, we plot a graph between the variables that best fits the given datapoints; using
this plot, the machine learning model can make predictions about the data.

"Regression shows a line or curve that passes through the datapoints on the target-predictor
graph in such a way that the vertical distance between the datapoints and the regression line is
minimum."

Regression is a supervised learning technique which helps in finding the correlation between
variables and enables us to predict the continuous output variable based on the one or more
predictor variables. It is mainly used for prediction, forecasting, time series modeling, and
determining the causal-effect relationship between variables.
For example, suppose a company spends $200 on advertisement in the year 2019 and wants to
know the prediction about its sales for that year. To solve such prediction problems in
machine learning, we need regression analysis.

 Regression analysis is used in stats to find trends in data. For example, you might guess
that there’s a connection between how much you advertise and how much your sales
improve; regression analysis can help you quantify that.
Q 10 What are the different type of Regression

 Linear regression analysis :

The dependent and independent variables show a linear relationship, described by a slope and an
intercept. Simple regression: Y = b0 + b1x.

 Multiple Regression Analysis

Multiple regression analysis is used to see if there is a statistically significant relationship
between sets of variables. It’s used to find trends in those sets of data.

Multiple regression: Y = b0 + b1x1 + b2x2 + … + bnxn.

Nonlinear regression analysis

It is commonly used for more complicated data sets in which the dependent and independent
variables show a nonlinear relationship.
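Fitting Y = b0 + b1x by least squares can be sketched in a few lines of pure Python. The advertising-spend and sales numbers below are invented for illustration, echoing the $200 advertisement example above.

```python
# Fit Y = b0 + b1*x by least squares: advertising spend (x) vs. sales (y).
x = [100, 150, 200, 250, 300]
y = [20, 29, 41, 50, 59]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
# Slope: covariance of x and y divided by the variance of x.
b1 = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
     / sum((xi - mean_x) ** 2 for xi in x)
# Intercept: the fitted line passes through the point of means.
b0 = mean_y - b1 * mean_x

predicted = b0 + b1 * 200          # predicted sales for a $200 ad spend
print(round(b1, 3), round(b0, 3), round(predicted, 1))
```

Once b0 and b1 are known, predicting for any new spend is just plugging into the line.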

Q 11 What are the examples of regression

 Prediction of rain using temperature and other factors


 Determining Market trends
 Prediction of road accidents due to rash driving.
 Analyzing trends and sales estimates
 Salary forecasting
 Real estate prediction
 Arriving at ETAs in traffic.
Q12 What is classification and prediction?

Classification

It is the process of finding a model that describes and distinguishes data classes and concepts.
The idea is to use this model to predict the class of objects. The derived model is dependent on
the examination of sets of training data.

Prediction

Predicting the identity of one thing based purely on the description of another, related thing.

Classification is the process of identifying the category or class label of a new observation.
Prediction is the process of identifying the missing or unavailable numerical data for a new
observation. That is the key difference between classification and prediction: prediction is not
concerned with a class label the way classification is.

Prediction is like stating something that may happen in the future. Prediction may be viewed as
a kind of classification.

Following are the examples of cases where the data analysis task is Classification −

 A bank loan officer wants to analyze the data in order to know which customers (loan
applicants) are risky and which are safe.
 A marketing manager at a company needs to analyze whether a customer with a given
profile will buy a new computer.
 In both of the above examples, a model or classifier is constructed to predict the
categorical labels. These labels are risky or safe for loan application data and yes or no
for marketing data.

Following are the examples of cases where the data analysis task is Prediction −

 Suppose the marketing manager needs to predict how much a given customer will spend
during a sale at his company. In this example we are asked to predict a numeric value, so
the data analysis task is an example of numeric prediction. In this case, a model or a
predictor is constructed that predicts a continuous-valued function or ordered value.

 Supervised vs. Unsupervised Classification
 Supervised Classification = Classification
 We know the class labels and the number of classes
 Unsupervised Classification = Clustering
 We do not know the class labels and may not know the number of classes
Q13 How Does Classification Works?

 With the help of the bank loan application that we have discussed above, let us
understand the working of classification. The Data Classification process includes two
steps −
 Building the Classifier or Model
 Using Classifier for Classification

 Building the Classifier or Model


 This step is the learning step or the learning phase.
 In this step the classification algorithms build the classifier.
 The classifier is built from the training set made up of database tuples and their
associated class labels.
 Each tuple that constitutes the training set belongs to a predefined category or class.
These tuples can also be referred to as samples, objects, or data points.

 Using Classifier for Classification


 In this step, the classifier is used for classification. Here the test data is used to estimate
the accuracy of the classification rules. The classification rules can be applied to new data
tuples if the accuracy is considered acceptable.
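The two steps can be sketched with a toy 1-nearest-neighbour classifier. The (income, debt) tuples and the "safe"/"risky" labels below are invented to echo the bank-loan example; real systems use larger training sets and more sophisticated algorithms.

```python
# Step 1 (building the model): training tuples (income, debt) with class labels.
train = [((50, 5), "safe"), ((60, 2), "safe"),
         ((20, 25), "risky"), ((15, 30), "risky")]
test  = [((55, 4), "safe"), ((18, 28), "risky")]

def classify(point, training):
    """1-nearest-neighbour: label the point like its closest training tuple."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(training, key=lambda t: dist(point, t[0]))[1]

# Step 2 (using the classifier): estimate accuracy on held-out test tuples.
correct = sum(classify(p, train) == label for p, label in test)
accuracy = correct / len(test)
print(accuracy)
```

If the accuracy on the test tuples is acceptable, the model can be applied to new loan applications.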
Q14 What is Clustering ?
 Clustering is the task of dividing the population or data points into a number of groups
such that data points in the same groups are more similar to other data points in the same
group than those in other groups. In simple words, the aim is to segregate groups with
similar traits and assign them into clusters.
 Clustering is the process of making a group of abstract objects into classes of similar
objects.
 In clustering, a group of different data objects is classified as similar objects. One group
means a cluster of data. Data sets are divided into different groups in the cluster analysis,
which is based on the similarity of the data
 A cluster of data objects can be treated as one group.
 While doing cluster analysis, we first partition the set of data into groups based on data
similarity and then assign the labels to the groups.
 The main advantage of clustering over classification is that it is adaptable to changes and
helps single out useful features that distinguish different groups.
 Let’s understand this with an example. Suppose you are the head of a rental store and
wish to understand the preferences of your customers to scale up your business. Is it
possible for you to look at the details of each customer and devise a unique business
strategy for each one of them? Definitely not. But what you can do is cluster all of your
customers into, say, 10 groups based on their purchasing habits and use a separate
strategy for the customers in each of these 10 groups. And this is what we call clustering.
 Hard Clustering: In hard clustering, each data point either belongs to a cluster
completely or not. For example, in the above example each customer is put into one
group out of the 10 groups.
 Soft Clustering: In soft clustering, instead of putting each data point into a separate
cluster, a probability or likelihood of that data point being in those clusters is assigned.
For example, in the above scenario each customer is assigned a probability of being in
any of the 10 clusters of the retail store.

Q 15 Different Types of Clustering Algorithms

The following are the most important and useful ML clustering algorithms −

 K-means Clustering

This clustering algorithm computes the centroids and iterates until it finds the optimal centroids.
It assumes that the number of clusters is already known. It is also called a flat clustering
algorithm. The number of clusters identified from the data by the algorithm is represented by
‘K’ in K-means.

 Mean-Shift Algorithm

It is another powerful clustering algorithm used in unsupervised learning. Unlike K-means
clustering, it does not make any assumptions about the data; hence it is a non-parametric
algorithm.
 Hierarchical Clustering

It is another unsupervised learning algorithm that is used to group together the unlabeled data
points having similar characteristics.
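The assign-and-recompute loop at the heart of K-means can be sketched on 1-D data in pure Python. The purchase amounts and the naive initialisation are illustrative; real implementations choose initial centroids more carefully.

```python
def kmeans_1d(points, k=2, iters=10):
    """Minimal K-means on 1-D data: assign each point to the nearest
    centroid, then recompute each centroid as its cluster mean."""
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups of purchase amounts: around 10 and around 100.
print(kmeans_1d([9, 10, 11, 98, 100, 102]))
```

After a couple of iterations the centroids settle at the means of the two natural groups.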

Q16 Different Applications of Clustering

 Marketing : It can be used to characterize & discover customer segments for marketing
purposes.
 Biology : It can be used for classification among different species of plants and animals.
 Libraries : It is used in clustering different books on the basis of topics and information.
 Insurance : It is used to acknowledge the customers, their policies and identifying the
frauds.
 City Planning: It is used to make groups of houses and to study their values based on
their geographical locations and other factors present.
 Earthquake studies: By studying earthquake-affected areas, we can determine the
dangerous zones.

Q17 What is recommender System ?

 Recommender systems are among the most popular applications of data science today.
They are used to predict the "rating" or "preference" that a user would give to an item.
 Almost every major tech company has applied them in some form. Amazon uses it to
suggest products to customers, YouTube uses it to decide which video to play next on
autoplay, and Facebook uses it to recommend pages to like and people to follow.
 Based on previous(past) behaviours, it predicts the likelihood that a user would prefer an
item.
 For example, Netflix uses a recommendation system to suggest new movies to people
according to their past activities, such as watching and rating movies.
 The purpose of recommender systems is to recommend new things that people have not
seen before.
 What's more, for some companies like Netflix, Amazon Prime, Hulu, and Hotstar, the
business model and its success revolve around the potency of their recommendations.
Netflix even offered a million dollars in 2009 to anyone who could improve its system by
10%.
 There are also popular recommender systems for domains like restaurants, movies, and
online dating. Recommender systems have also been developed to explore research
articles and experts, collaborators, and financial services. YouTube uses the
recommendation system at a large scale to suggest you videos based on your history. For
example, if you watch a lot of educational videos, it would suggest those types of videos.
Q 18 What are the different type of Recommendation system ?

 Collaborative filtering engines (user-based): these systems are widely used; they try to
predict the rating or preference that a user would give an item based on the past ratings
and preferences of other users. Collaborative filters do not require item metadata, unlike
their content-based counterparts.
 Content-based recommenders(Item Based): suggest similar items based on a particular
item. This system uses item metadata, such as genre, director, description, actors, etc. for
movies, to make these recommendations. The general idea behind these recommender
systems is that if a person likes a particular item, he or she will also like an item that is
similar to it. And to recommend that, it will make use of the user's past item metadata. A
good example is YouTube, where, based on your history, it suggests new videos that you
could potentially watch.
 User-Based Collaborative Filtering
 Collaborative filtering makes recommendations by combining your own experience
with the experiences of other people.
 First, we build a user-vs-item matrix: each row is a user and each column is an item
such as a movie, product, or website.
 Second, we compute similarity scores between users: each row is a vector, and we
compute the similarity of these rows (users).
 Third, we find users who are similar to you based on past behaviour.
 Finally, the system suggests items that you have not experienced before.
 As an example of user-based collaborative filtering, think of two people: the first
watched two movies, The Lord of the Rings and The Hobbit; the second watched
only The Lord of the Rings.
 User-based collaborative filtering computes the similarity of these two people, sees
that both watched The Lord of the Rings, and therefore recommends The Hobbit to
the second person.

 Item-Based Filtering

 In this system, instead of finding relationships between users, the items themselves
(such as movies or products) are compared with each other.
 In user-based recommendation systems, users' habits can change, which makes
recommendation harder. In item-based systems, the movies or products themselves
do not change, so recommendation is easier.
 On the other hand, there are almost 7 billion people in the world, so comparing
people requires far more computational power than comparing items.
 In item-based recommendation systems, we build the same user-vs-item matrix used
in user-based recommender systems.
 Each row is a user and each column is an item such as a movie, product, or website.
 However, this time, instead of calculating similarity between rows, we calculate
similarity between columns, which represent the items.
 For example, there is a similarity between The Lord of the Rings and The Hobbit
because both are liked by the same three people, which gives the two movies a
similarity score.
 If the similarity is high enough, we can recommend The Hobbit to other people who
liked The Lord of the Rings.
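Both approaches start from the same user-vs-item matrix; the only difference is whether we compare rows (users) or columns (items). A minimal sketch with invented watch data and cosine similarity:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# User-vs-item matrix: rows are users, columns are
# ["lord of the rings", "hobbit", "other movie"]; 1 = watched.
matrix = [
    [1, 1, 0],   # user A watched LOTR and Hobbit
    [1, 0, 0],   # user B watched only LOTR
    [0, 0, 1],   # user C watched something else
]

# User-based: compare rows. A and B are similar, so recommend Hobbit to B.
print(cosine(matrix[0], matrix[1]))

# Item-based: compare columns. LOTR and Hobbit share an audience.
cols = list(zip(*matrix))
print(cosine(cols[0], cols[1]))
```

In both cases a high similarity score is what triggers the recommendation.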

Unit 3rd
Speech recognition , Natural language understanding , Natural language generation
Chatbots , Machine Translation

Q1 What is NLP ?
 Natural Language Processing is the subfield of computer science, and more specifically of
AI, that enables computers/machines to understand, process, and manipulate human
language.

 Natural Language Processing (NLP) refers to the AI method of communicating with an
intelligent system using a natural language such as English.

 Processing of natural language is required when you want an intelligent system like a
robot to perform as per your instructions, when you want to hear a decision from a
dialogue-based clinical expert system, etc.

 The input and output of an NLP system can be −

Speech
Written Text
Advantages of Natural Language Processing

 Automated Content Creation

 Significant Reduction in Human Involvement

 Predictive Inventory Management

 Performance Activity Management at Call Centre

Q2 Application of NLP

Sentiment Analysis
Mostly used for web and social media monitoring, Natural Language Processing is a great tool
to comprehend and analyse the responses to business messages published on social media
platforms. It helps to analyse the attitude and emotional state of the writer (the person
commenting on or engaging with posts).
Chatbots & Virtual Assistants

Chatbots and virtual assistants are used for automatic question answering,
designed to understand natural language and deliver an appropriate response
through natural language generation. Standard question answering systems follow
pre-defined rules, while AI-powered chatbots and virtual assistants are able to learn
from every interaction and understand how they should respond. The best part:
they learn from interactions and improve over time.

Speech Recognition

Speech recognition technology uses natural language processing to transform
spoken language into a machine-readable format. Speech recognition systems are
an essential part of virtual assistants like Siri, Alexa, and Google Assistant.
However, there are more and more use cases of speech recognition in business. For
example, by adding speech-to-text capabilities to business software, companies are
able to automatically transcribe calls, send emails, and even translate them.

Auto-Correct

Natural Language Processing plays a vital role in grammar checking software and
auto-correct functions. Tools like Grammarly, for example, use NLP to help you
improve your writing, by detecting grammar, spelling, or sentence structure errors.

Market Intelligence

Marketers can benefit from natural language processing to learn more about their
customers and use those insights to create more effective strategies.

Analyzing topics, sentiment, keywords, and intent in unstructured data can really
boost your market research, shedding light on trends and business opportunities.
You can also analyze data to identify customer pain points and to keep an eye on
your competitors (by seeing what things are working well for them and which are
not).

Text Summarization

Automatic summarization is pretty self-explanatory. It summarizes text by
extracting the most important information. Its main goal is to simplify the process
of going through vast amounts of data, such as scientific papers, news content, or
legal documentation.

Machine Translation

Machine translation (MT) is one of the first applications of natural language
processing. Even though Facebook’s translations have been declared superhuman,
machine translation still faces the challenge of understanding context.

However, if you’ve been an avid user of Google Translate over the years, you’ll know
that it has come a long way since its inception, mainly thanks to huge advances in
the field of neural networks and the increased availability of large amounts of data.

Text Extraction

Text extraction, or information extraction, automatically detects specific
information in a text, such as names, companies, places, and more. This is also
known as named entity recognition. You can also extract keywords within a text, as
well as pre-defined features such as product serial numbers and models.

Applications of text extraction include sifting through incoming support tickets and
identifying specific data, like company names, order numbers, and email addresses
without needing to open and read every ticket.
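A toy sketch of text extraction with regular expressions: the ticket text and the two patterns below are invented for illustration, and real systems use trained NER models for entities like names and companies rather than hand-written patterns.

```python
import re

ticket = ("Hi, my order AB-12345 hasn't arrived. "
          "Please reply to jane.doe@example.com.")

# Hypothetical patterns for two pre-defined features:
# order numbers ("two letters, dash, five digits") and email addresses.
order_numbers = re.findall(r"\b[A-Z]{2}-\d{5}\b", ticket)
emails = re.findall(r"\b[\w.]+@[\w.]+\.\w+\b", ticket)

print(order_numbers, emails)
```

This is exactly the kind of extraction that lets support tickets be routed without anyone reading them.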

Q3 What are the different types of NLP. or Component of NLP

• Natural Language Understanding (NLU)


o Mapping the given input in natural language into useful representations.

o Analyzing different aspects of the language.

 Natural language understanding (NLU) is a sub-topic of natural language processing
which focuses on machines understanding human language. Interesting applications
include text categorization, machine translation, and question answering.

 NLU makes it possible for machines to understand the overall context and meaning of
“natural language,” beyond literal definitions. Its goal is to understand written or spoken
language the way a human would.

 NLU is used in natural language processing (NLP) tasks like topic classification,
language detection, and sentiment analysis:

• Sentiment analysis automatically interprets emotions within a text and categorizes them
as positive, negative, or neutral. By quickly understanding, processing, and analyzing
thousands of online conversations, sentiment analysis tools can deliver valuable insights
about how customers view your brand and products.

• Language detection automatically identifies the language of written text. It is an
essential tool to help businesses route tickets to the correct local teams, avoid wasting
time passing tickets from one customer agent to the next, and respond to customer
issues faster.

 Topic classification is able to understand natural language to automatically sort texts into
predefined groups or topics. Software company Atlassian, for example, uses the
tags Reliability, Usability, and Functionality to sort incoming customer support tickets,
enabling them to deal with customer issues efficiently
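A toy lexicon-based sketch of sentiment analysis: the word lists below are invented for illustration, and production tools use trained models rather than simple word counting, but the idea of scoring text as positive, negative, or neutral is the same.

```python
# A toy lexicon-based sentiment scorer (real systems use trained models).
POSITIVE = {"great", "love", "good", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "slow"}

def sentiment(text):
    words = text.lower().split()
    # Score = positive hits minus negative hits.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The service was great and I love the product"))
print(sentiment("Terrible support and slow delivery"))
```

Running this over thousands of comments gives exactly the kind of brand-level insight described above.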

Natural Language Understanding Examples

o Machine Translation (MT)

Accurately translating text or speech from one language to another is one of the toughest
challenges of natural language processing and natural language understanding.

o Automated Reasoning

Automated reasoning is a subfield of cognitive science that is used to automatically prove
mathematical theorems or make logical inferences about a medical diagnosis. It gives machines
a form of reasoning or logic, and allows them to infer new facts by deduction.

o Question Answering

Question answering is a subfield of NLP and speech recognition that uses NLU to help
computers automatically understand natural language questions
• Natural Language Generation (NLG)

It is the process of producing meaningful phrases and sentences in the form of natural
language from some internal representation.

 Natural Language Generation (NLG), a subcategory of Natural Language Processing
(NLP), is a software process that automatically transforms structured data into
human-readable text.

 Using NLG, businesses can generate thousands of pages of data-driven narratives in
minutes, given the right data in the right format. NLG generates text based on
structured data.

 NLG is an AI-driven software solution that extracts data from complex sources to
produce naturally worded content.

 It creates unlimited content variations for personas, for hyper-personalized digital
experiences and better customer engagement.

 It influences buying behavior with more targeted and individualized content, for
increased sales.

 It means you have to spend less time on routine tasks and more time perfecting digital
experiences that compel customers to action.

 Stages of NLG

 Text planning − It includes retrieving the relevant content from the knowledge base.

 Sentence planning − It includes choosing the required words, forming meaningful
phrases, and setting the tone of the sentence.

 Text realization − It is mapping the sentence plan into sentence structure.
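A minimal template-based sketch of these stages in Python (the record fields and wording are invented): text planning picks the relevant fields, sentence planning chooses the phrasing, and realization fills the template.

```python
# Structured data in, human-readable text out.
record = {"city": "Ghaziabad", "temp_c": 31, "trend": "rising"}

def realize(r):
    # Sentence planning: choose a phrase that matches the data.
    trend_phrase = "and climbing" if r["trend"] == "rising" else "and steady"
    # Text realization: map the sentence plan into a sentence.
    return f"It is {r['temp_c']} degrees in {r['city']}, {trend_phrase}."

print(realize(record))
```

Commercial NLG systems do the same thing at scale, with far richer planning and wording.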

Q4 What are the steps performed in NLP?

There are five general steps −

• Lexical Analysis − It involves identifying and analyzing the structure of words. The
lexicon of a language means the collection of words and phrases in that language. Lexical
analysis divides the whole chunk of text into paragraphs, sentences, and words.
• Syntactic Analysis (Parsing) − It involves analysis of the words in a sentence for
grammar, and arranging the words in a manner that shows the relationships among them.
A sentence such as “The school goes to boy” is rejected by an English syntactic analyzer.
• Semantic Analysis − It draws the exact meaning or the dictionary meaning from the
text. The text is checked for meaningfulness. It is done by mapping syntactic structures
to objects in the task domain. The semantic analyzer disregards sentences such as “hot
ice-cream”.
• Discourse Integration − The meaning of any sentence depends upon the meaning of the
sentence just before it. In addition, it also brings about the meaning of immediately
succeeding sentence.
• Pragmatic Analysis − During this, what was said is re-interpreted on what it actually
meant. It involves deriving those aspects of language which require real world
knowledge.
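The lexical-analysis step, dividing text into sentences and words, can be sketched with the standard `re` module. This is a rough tokenizer for illustration, not a full lexical analyzer; the sample text reuses the examples above.

```python
import re

text = "The school goes to boy. Hot ice-cream is tasty."

# Lexical analysis: divide the chunk of text into sentences, then words.
sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
words = [re.findall(r"[A-Za-z-]+", s) for s in sentences]

print(sentences)
print(words)
```

The later steps (parsing, semantic analysis) would then operate on these token lists; here the syntactic analyzer would reject the first sentence and the semantic analyzer the second.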

Q 5 What is Speech Recognition?

 Speech recognition is the process that enables a computer to recognize and respond to
spoken words, converting them into a format that the machine understands. The
machine may then convert it into another form of data depending on the end goal.
 It is an interdisciplinary subfield of computer science and computational linguistics that
develops methodologies and technologies that enable the recognition and translation of
spoken language into text by computers.

 For example, Google Dictate and other transcription programs use speech recognition to
convert your spoken words into text while digital assistants like Siri and Alexa respond in
text format or voice

 Speech processing system has mainly three tasks −

 First, speech recognition that allows the machine to catch the words, phrases and
sentences we speak
 Second, natural language processing to allow the machine to understand what we speak,
and
 Third, speech synthesis to allow the machine to speak.

Q 6 What are the Components of Speech Recognition

• A speech capturing Device: It consists of a microphone, which converts the sound wave
signals to electrical signals and an Analog to Digital Converter which samples and
digitizes the analog signals to obtain the discrete data that the computer can understand.

• A Digital Signal Module or a Processor: It performs processing on the raw speech


signal like frequency domain conversion, restoring only the required information etc.

• Preprocessed signal storage: The preprocessed speech is stored in the memory to carry
out further task of speech recognition.

• Reference Speech patterns: The computer or the system consists of predefined speech
patterns or templates already stored in the memory, to be used as the reference for
matching.

• Pattern matching algorithm: The unknown speech signal is compared with the
reference speech pattern to determine the actual words or the pattern of words.


Q7 Applications of Speech Recognition

• Digital assistants,

• Smart speakers, Smart homes

• Automation for a variety of services,

• Products, and solutions.

• Security devices

• Smartphones for call routing, speech-to-text processing, voice dialling and voice search.

• Word processing applications like Microsoft Word, where users can dictate what they
want to show up as text

Q 8 What is Chat bots ?

 Recently, new tools designed to simplify the interaction between humans and computers
have hit the market: chatbots or virtual assistants. In banking, for example, they are
among the industry’s newest tools for simplifying that interaction.

 Chatbots are not a recent development. They are simulations which can understand
human language, process it and interact back with humans while performing specific
tasks.

 A chatbot is a software application used to conduct an on-line chat conversation via text
or text-to-speech, in lieu of providing direct contact with a live human agent

 From a technological point of view, a chatbot only represents the natural evolution of a
Question Answering system leveraging Natural Language Processing (NLP).

 Types of Chatbots

There are many types of chatbots available, a few of them can be majorly classified as
follows:

 Text-based chatbot: In a text-based chatbot, a bot answers the user’s questions via text
interface.

 Voice-based chatbot: In a voice- or speech-based chatbot, a bot answers the user’s
questions via a human voice interface.

There are mainly two approaches used to design the chatbots, described as follows:

 In a Rule-based approach, a bot answers questions based on some rules on which it is
trained. The rules defined can range from very simple to very complex. Such bots can
handle simple queries but fail to manage complex ones.

 Self-learning bots are the ones that use some Machine Learning-based approaches and are
definitely more efficient than rule-based bots. These bots can be further classified in two
types: Retrieval Based or Generative
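A rule-based bot of the kind described above can be sketched in a few lines; the keywords and canned replies below are made up purely for illustration:

```python
# Minimal rule-based chatbot sketch: each rule maps a keyword to a canned
# reply. Keywords and replies here are illustrative, not from any product.
RULES = {
    "balance": "Your account balance is available in the mobile app.",
    "hours":   "We are open 9am-5pm, Monday to Friday.",
    "hello":   "Hello! How can I help you today?",
}
FALLBACK = "Sorry, I did not understand. Could you rephrase?"

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK  # simple rules fail on queries they were not written for

print(reply("Hello there"))
print(reply("What are your hours?"))
print(reply("Explain derivatives trading"))  # falls through to the fallback
```

The fallback branch is exactly the weakness noted above: a rule-based bot can only handle queries its rules anticipate.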

Q 9 Different Chatbots Applications

 Chatbot’s for entertainment: Jokebot, Quotebot, Dinner ideas bot, Ruuh, Zo, Genius, etc
 Chatbot’s for health: Webot, Meditatebot, Health tap, etc
 Chatbot’s for news and weather: CNN, Poncho, etc
 Virtual reception assistant
 Virtual help desk assistant
 Virtual tutor or teacher
 Virtual driving assistant
 Virtual email, complaints, or content distributor
 Virtual home assistant [example: Google Home]
 Virtual operations assistant [example: Jarvis from the movie Iron Man]
 Virtual entertainment assistant [example: Amazon Alexa]
 Virtual phone assistant [example: Apple Siri]

Q 10 What is Machine Translation?

 Machine translation (MT), the process of translating text from one source language into
another language, is one of the most important applications of NLP.

 Machine Translation (MT) is the task of automatically converting one natural language
into another, preserving the meaning of the input text, and producing fluent text in the
output language

 In a machine translation task, the input consists of a sequence of symbols in one
language, and the computer program must convert it into a sequence of symbols in
another language.

 There are many challenging aspects of MT:

 1) the large variety of languages, alphabets and grammars;

 2) the task to translate a sequence

 3) there is no single correct answer (e.g. when translating from a language without
gender-dependent pronouns, “he” and “she” can map to the same word).

 Three major approaches of machine translation are :

 Rule-based Machine Translation (RBMT): 1970s-1990s


 Statistical Machine Translation (SMT): 1990s-2010s
 Neural Machine Translation (NMT): 2014-

 Rule-based Machine Translation: A rule-based system requires experts’ knowledge
about the source and the target language to develop syntactic, semantic and
morphological rules to achieve the translation.
 Statistical Machine Translation : This approach uses statistical models based on the
analysis of bilingual text corpora.
 Neural Machine Translation : The neural approach uses neural networks to achieve
machine translation. Compared to the previous models, NMTs can be built with one
network instead of a pipeline of separate tasks.

 A problem with neural networks occurs if the training data is unbalanced: the model
cannot learn from rare samples as well as from frequent ones.

 NMT examples

 Google Translate (from 2016)

 Microsoft Translate (from 2016)

 Translation on Facebook
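The rule-based idea can be illustrated with a toy dictionary translator. The vocabulary and the single reordering rule below are invented for illustration; a real RBMT system encodes far richer syntactic and morphological rules:

```python
# Toy rule-based translation sketch: a bilingual lexicon plus one
# reordering rule. Illustrative only, nothing like a production system.
LEXICON = {"la": "the", "casa": "house", "roja": "red"}

def translate(sentence):
    # Word-for-word substitution using the lexicon.
    words = [LEXICON.get(w, w) for w in sentence.lower().split()]
    # Toy syntactic rule: the source language puts adjectives after nouns
    # ("casa roja"), English puts them before ("red house").
    if len(words) >= 2 and words[-1] == "red":
        words[-2], words[-1] = words[-1], words[-2]
    return " ".join(words)

print(translate("la casa roja"))  # → the red house
```

Statistical and neural systems replace these hand-written rules with patterns learned from bilingual corpora.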
UNIT 4
Artificial Neural Networks

• Deep Learning
• Recurrent Neural Networks
• Convolutional Neural Networks
• Generative Adversarial Networks

Ques 1: What are Artificial neural network or ANN?

• Artificial Neural Network (ANN) is an efficient computing system whose central
theme is borrowed from the analogy of biological neural networks. ANNs are also
named “artificial neural systems,” “parallel distributed processing systems,” or
“connectionist systems.”

• ANN acquires a large collection of units that are interconnected in some pattern to allow
communication between the units. These units, also referred to as nodes or neurons, are
simple processors which operate in parallel.
• Every neuron is connected with other neuron through a connection link. Each
connection link is associated with a weight that has information about the input signal.
This is the most useful information for neurons to solve a particular problem because the
weight usually excites or inhibits the signal that is being communicated.
• Each neuron has an internal state, which is called an activation signal. Output signals,
which are produced after combining the input signals and activation rule, may be sent to
other units.

 A neural network can be understood as a network of hidden layers, an input layer and an
output layer that tries to mimic the working of a human brain.
 The hidden layers can be visualized as an abstract representation of the input data itself.
These layers help the neural network understand various features of the data with the
help of its own internal logic.
 These neural networks are non-interpretable models. Non-interpretable models are those
which can’t be interpreted or understood even if we observe the hidden layers. This is
because the neural networks have an internal logic working on its own, that can’t be
comprehended by us.
 We can just see them as a vector of numerical values. Since the output of a neural
network is a numerical vector, we need to have an explicit output layer that bridges the
gap between the actual data and the representation of the data by the network.
 An output layer can be understood as a translator that helps us understand the logic of
the network and converts its internal representation into the target values.
 Two main characteristics of a neural network −
 Architecture
 Learning
 Architecture
 It tells about the connection type: whether it is feed forward, recurrent, multi-layered,
convolutional, or single layered. It also tells about the number of layers and the number
of neurons in every layer.
 Learning
 It tells about the method in which the neural network is trained. A common way to train
a neural network is to use gradient descent and backpropagation
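The weighted-sum-plus-activation behaviour of a single neuron described above can be sketched as follows; the weights, bias and inputs are illustrative numbers only:

```python
import math

def sigmoid(z):
    """Activation function squashing the weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Illustrative numbers only: a positive weight excites the incoming
# signal, a negative weight inhibits it.
out = neuron(inputs=[1.0, 0.5], weights=[0.8, -0.4], bias=0.1)
print(round(out, 3))
```

Stacking many such neurons into layers, and adjusting the weights with gradient descent and backpropagation, is what training a network means.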

Ques 2 : What are types of learning strategies in ANN?

ANNs are capable of learning and they need to be trained. There are several learning
strategies −
• Supervised Learning − It involves a teacher that is more knowledgeable than the ANN
itself. For example, the teacher feeds some example data for which the teacher already
knows the answers.
For example, in pattern recognition, the ANN makes guesses while recognizing. The
teacher then provides the ANN with the answers, and the network compares its guesses
with the teacher’s “correct” answers and makes adjustments according to the errors.
• Unsupervised Learning − It is required when there is no example data set with known
answers. For example, searching for a hidden pattern. In this case, clustering i.e.
dividing a set of elements into groups according to some unknown pattern is carried out
based on the existing data sets present.
• Reinforcement Learning − This strategy is built on observation. The ANN makes a
decision by observing its environment. If the observation is negative, the network
adjusts its weights so as to make a different, required decision the next time.
Ques 3 :What is Deep Learning?

Deep learning is a branch of machine learning based entirely on artificial neural
networks; since neural networks mimic the human brain, deep learning is likewise a
kind of mimicry of the human brain.
Deep Learning essentially means training an Artificial Neural Network (ANN) with a
huge amount of data. In deep learning, the network learns by itself and thus requires
humongous amounts of data for learning.
 A deep neural network (DNN) is an ANN with multiple hidden layers between the input
and output layers. Similar to shallow ANNs, DNNs can model complex non-linear
relationships.
 The main purpose of a neural network is to receive a set of inputs, perform progressively
complex calculations on them, and give output to solve real world problems like
classification. We restrict ourselves to feed forward neural networks.
 Traditional machine learning, in contrast, is essentially a set of algorithms that parse
data and learn from it, then use this learning to make intelligent decisions.
 Each algorithm in deep learning goes through the same process: a hierarchy of
nonlinear transformations of the input is used to generate a statistical model as output.

Ques 4 : What is the difference between machine learning and deep learning?

Ques 5 : Explain Working of deep learning?

Consider the following steps that define the deep Learning process
 Identifies relevant data sets and prepares them for analysis.
 Chooses the type of algorithm to use
 Builds an analytical model based on the algorithm used.
 Trains the model on training data sets, revising it as needed.
 Runs the model to generate test scores.
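The steps above can be sketched end-to-end on a toy one-weight model trained with gradient descent. The synthetic data, learning rate and step count are invented for illustration:

```python
import random

# Toy end-to-end sketch of the deep learning process: prepare data, pick
# an algorithm (gradient descent), build a model, train it, then score it.
random.seed(0)

# 1-2. Prepare a tiny synthetic data set: y = 2x with a little noise.
data = [(x, 2.0 * x + random.gauss(0, 0.01)) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

# 3. The "model" is a single weight w (a one-neuron linear network).
w = 0.0
lr = 0.1

# 4. Train: repeatedly nudge w down the gradient of the squared error.
for _ in range(200):
    for x, y in data:
        error = w * x - y
        w -= lr * 2 * error * x   # gradient of (w*x - y)^2 w.r.t. w

# 5. Run the model and report a test score (mean squared error).
mse = sum((w * x - y) ** 2 for x, y in data) / len(data)
print(round(w, 2), round(mse, 4))
```

Deep networks follow the same loop; the only difference is that millions of weights are updated instead of one.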
Ques 6 : What are the advantages and disadvantages of deep learning?

Advantages :
1. Best in-class performance on problems.
2. Reduces need for feature engineering.
3. Eliminates unnecessary costs.
4. Identifies defects easily that are difficult to detect.
Disadvantages :
1. Large amount of data required.
2. Computationally expensive to train.
3. No strong theoretical foundation.

Q 7 What are the Applications of Deep Learning?
1. Automatic Text Generation – A corpus of text is learned, and from this model new
text is generated word-by-word or character-by-character. The model can learn how
to spell, punctuate and form sentences, and it may even capture the style.
2. Healthcare – Helps in diagnosing various diseases and treating them.
3. Automatic Machine Translation – Certain words, sentences or phrases in one
language is transformed into another language (Deep Learning is achieving top
results in the areas of text, images).
4. Image Recognition – Recognizes and identifies peoples and objects in images as
well as to understand content and context. This area is already being used in
Gaming, Retail, Tourism, etc.
5. Predicting Earthquakes – Teaches a computer to perform viscoelastic computations
which are used in predicting earthquakes.
Ques 8 : Define Convolutional neural network in brief?

Definition

A Convolutional Neural Network (CNN) is a neural network that has one or more convolutional
layers and is used mainly for image processing, classification, segmentation and other
auto-correlated data.

A convolution can be thought of as “looking at a function’s surroundings to make better/more
accurate predictions of its outcome.”

Common uses for CNNs

• The most common use for CNNs is image classification, for example identifying satellite
images that contain roads or classifying hand written letters and digits.
• CNNs have been used for understanding in Natural Language Processing (NLP) and
speech recognition, although often for NLP Recurrent Neural Nets (RNNs) are used.

Architecture of CNN

Convolutional neural networks are distinguished from other neural networks by their superior
performance with image, speech, or audio signal inputs. They have three main types of layers,
which are:

• Convolutional layer
• Pooling layer
• Fully-connected (FC) layer

Generally, a Convolutional Neural Network has three layers, which are as follows;

o Input: If the image is 32 pixels wide and 32 pixels high with three channels (R, G, B),
this layer holds the raw pixel values of the image ([32x32x3]).
o Convolution: It computes the output of those neurons, which are associated with input's
local regions, such that each neuron will calculate a dot product in between weights and a
small region to which they are actually linked to in the input volume. For example, if we
choose to incorporate 12 filters, then it will result in a volume of [32x32x12].
o ReLU Layer: It applies an activation function elementwise, such as max(0, x)
thresholding at zero. The output is still [32x32x12]; the size of the volume is
unchanged.
o Pooling: This layer is used to perform a downsampling operation along the spatial
dimensions (width, height) that results in [16x16x12] volume.
o Fully Connected: It can be defined as a regular neural network layer that receives an
input from the preceding layer and computes the class scores, resulting in a
1-dimensional array whose size equals the number of classes.
Working of CNN

We start with an input image to which we apply multiple feature detectors, also called filters, to
create the feature maps that make up a convolution layer. On top of that layer, we apply the
ReLU (Rectified Linear Unit) activation to increase non-linearity in our images.

Next, we will apply a Pooling layer to our Convolutional layer, so that from every feature map we
create a Pooled feature map as the main purpose of the pooling layer is to make sure that we have
spatial invariance in our images. It also helps to reduce the size of our images as well as avoid any
kind of overfitting of our data. After that, we will flatten all of our pooled feature maps into one
long vector of values, followed by inputting these values into our artificial neural network.
Lastly, we will feed it into the fully connected layer to achieve the final output.
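The convolution → ReLU → pooling pipeline described above can be sketched in plain Python. The tiny 4x4 "image" and the hand-picked edge-detector kernel are purely illustrative:

```python
def convolve2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most CNN
    libraries): slide the kernel over the image and take dot products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fmap):
    """Downsample a feature map by taking the max of each 2x2 block."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 4x4 "image" with a bright vertical edge, and a kernel that responds
# to dark-to-bright transitions.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 1],
          [-1, 1]]
fmap = convolve2d(image, kernel)                   # 3x3 feature map
relu = [[max(0, v) for v in row] for row in fmap]  # ReLU thresholding
print(max_pool2x2(relu))                           # pooled feature map
```

The feature map peaks exactly where the edge sits, which is the sense in which convolutional layers "detect" features.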

Ques 10: What are Recurrent Neural Networks?

Recurrent Neural Network(RNN) are a type of Neural Network where the output from previous
step are fed as input to the current step. In traditional neural networks, all the inputs and outputs
are independent of each other, but in cases like when it is required to predict the next word of a
sentence, the previous words are required and hence there is a need to remember the previous
words. Thus RNN came into existence, which solved this issue with the help of a Hidden Layer.
The main and most important feature of RNN is Hidden state, which remembers some
information about a sequence.
Working of RNN
• The network takes a single time-step of the input.
• We calculate the current state from the current input and the previous state.
• The current state output ht becomes ht-1 for the next step.
• There can be n such steps, and at the end all the information can be joined.
• After completion of all the steps, the final step calculates the output.
• Finally, we compute the error as the difference between the actual output and the predicted output.
• The error is backpropagated through the network to adjust the weights and produce a better outcome.
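The recurrence at the heart of these steps can be sketched with a scalar hidden state; all weights and inputs below are illustrative:

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One recurrent step: the new hidden state mixes the current input
    with the previous hidden state."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

# Run a short sequence through the (scalar, illustrative) recurrence.
w_x, w_h, b = 0.5, 0.8, 0.0
h = 0.0                                 # initial hidden state
for x_t in [1.0, 0.5, -1.0]:
    h = rnn_step(x_t, h, w_x, w_h, b)   # h_t becomes h_{t-1} next step
print(round(h, 3))
```

Because each h depends on the previous h, information from early inputs persists in the hidden state, which is what lets an RNN "remember" earlier words in a sentence.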

Applications of Recurrent Neural Networks

 Speech Recognition: A set of inputs containing phonemes (acoustic signals) from an audio clip is
used as input. The network computes the phonemes and produces phonetic segments together
with the likelihood of the output.
 Machine Translation: In machine translation, the input will be in the source language (e.g.
Hindi) and the output will be in the target language (e.g. English). The main difference
between machine translation and language modelling is that the output starts only after the
complete input has been fed into the network.
 Image recognition and characterization: A Recurrent Neural Network works together with a
ConvNet to recognize an image and give a description of it if it is unlabelled. This combination
works beautifully and produces fascinating results; the combined model even aligns the
generated words with features found in the images.
Ques : What is the difference between CNN and RNN?
CNN vs RNN:

• CNN is suitable for spatial data such as images; RNN is suitable for temporal data, also
called sequential data.
• CNN is considered to be more powerful than RNN; RNN includes less feature
compatibility when compared to CNN.
• CNN takes fixed-size inputs and generates fixed-size outputs; RNN can handle arbitrary
input/output lengths.
• CNN is a type of feed-forward artificial neural network with variations of multilayer
perceptrons designed to use minimal amounts of preprocessing; RNN, unlike feed-forward
networks, can use its internal memory to process arbitrary sequences of inputs.
• CNNs use a connectivity pattern between neurons inspired by the organization of the
animal visual cortex, whose individual neurons are arranged so that they respond to
overlapping regions tiling the visual field; RNNs use time-series information – what a user
spoke last will impact what he/she will speak next.
• CNNs are ideal for image and video processing; RNNs are ideal for text and speech
analysis.

Ques : What are Generative Adversarial Networks (GANs)?

GANs are a powerful class of neural networks that are used for unsupervised learning. It was
developed and introduced by Ian J. Goodfellow in 2014. GANs are basically made up of a
system of two competing neural network models which compete with each other and are able to
analyze, capture and copy the variations within a dataset.

GANs extend that idea to generative models:

• Generator: generate fake samples, tries to fool the Discriminator


• Discriminator: tries to distinguish between real and fake samples

• Train them against each other

• Repeat this and we get better Generator and Discriminator

In GANs, there is a generator and a discriminator. The Generator generates fake samples of
data(be it an image, audio, etc.) and tries to fool the Discriminator. The Discriminator, on the
other hand, tries to distinguish between the real and fake samples. The Generator and the
Discriminator are both Neural Networks and they both run in competition with each other in the
training phase. The steps are repeated several times and in this, the Generator and Discriminator
get better and better in their respective jobs after each repetition.
- when the generator fools the discriminator, it is rewarded, or no change is needed to the
model parameters, but the discriminator is penalized and its model parameters are
updated.
- At a limit, the generator generates perfect replicas from the input domain every time, and
the discriminator cannot tell the difference and predicts “unsure” (e.g. 50% for real and
fake) in every case. This is just an example of an idealized case; we do not need to get to
this point to arrive at a useful generator model.
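The alternating training loop described above can be sketched on a toy 1-D problem: real data clusters around 4.0, the generator is a single learnable offset added to noise, and the discriminator is one logistic unit. All numbers are invented for illustration, and the gradients of the standard GAN losses are written out by hand:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.1, 0.0     # discriminator parameters: D(x) = sigmoid(w*x + b)
m = 0.0             # generator parameter (starts far from the real mean)
lr = 0.05

for _ in range(3000):
    x_real = random.gauss(4.0, 0.5)        # sample from the real data
    x_fake = m + random.gauss(0.0, 0.5)    # generator output = m + noise

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (hand-derived gradients of log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: move fakes toward where D assigns high realness
    # (gradient of log D(fake) with respect to m).
    d_fake = sigmoid(w * x_fake + b)
    m += lr * (1 - d_fake) * w

print(round(m, 2))  # m should have drifted toward the real mean of 4.0
```

As the two updates alternate, the generator's offset drifts toward the real data's mean, at which point the discriminator can no longer tell real from fake – the idealized equilibrium described above.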

 The easiest way to understand what GANs are is through a simple analogy:
 Suppose there is a shop which buys certain kinds of wine from customers which they will
later resell.
 However, some customers sell fake wine in order to get money. In this case, the shop
owner has to be able to distinguish between fake and authentic wines.


 You can imagine that initially, the forger might make a lot of mistakes when trying to sell
the fake wine and it will be easy for the shop owner to identify that the wine is not
authentic. Because of these failures, the forger will keep on trying different techniques to
simulate the authentic wines and some will eventually be successful. Now that the forger
knows that certain techniques got past the shop owner's checks, he can start to further
improve the fake wines based on those techniques.
 At the same time, the shop owner would probably get some feedback from other shop
owners or wine experts that some of the wines that she has are not original. This means
that the shop owner would have to improve how she determines whether a wine is fake or
authentic. The goal of the forger is to create wines that are indistinguishable from the
authentic ones, and the goal of the shop owner is to accurately tell if a wine is real or not


 There are two major components within GANs: the generator and the discriminator. The
shop owner in the example is known as a discriminator network which assigns a
probability that the image is real.
 The forger is known as the generative network. This network takes some noise vector and
outputs an image. When training the generative network, it learns which areas of the
image to improve/change so that the discriminator would have a harder time
differentiating its generated images from the real ones.
 The generative network keeps producing images that are closer in appearance to the real
images while the discriminative network is trying to determine the differences between
real and fake images. The ultimate goal is to have a generative network that can produce
images which are indistinguishable from the real ones
Unit 5

5.1 Image and face recognition, Object recognition, Speech Recognition, Robotics Applications

Ques : Explain Image and face recognition

A facial recognition system is a technology capable of matching a human face from a digital
image or a video frame against a database of faces. Typically employed to authenticate users
through ID verification services, it works by pinpointing and measuring facial features in a
given image.

Facial recognition systems attempt to identify a human face, which is three-dimensional and
changes in appearance with lighting and facial expression, based on its two-dimensional image.
To accomplish this computational task, facial recognition systems perform four steps.

• First face detection is used to segment the face from the image background.
• In the second step the segmented face image is aligned to account for face pose, image
size and photographic properties, such as illumination and grayscale.
• The purpose of the alignment process is to enable the accurate localization of facial
features in the third step, the facial feature extraction.
• Features such as eyes, nose and mouth are pinpointed and measured in the image to
represent the face. The so established feature vector of the face is then, in the fourth step,
matched against a database of faces.
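The fourth step – matching the established feature vector against a database – can be sketched as a nearest-neighbour search. The embeddings and names below are made up; real systems use learned embeddings with many more dimensions:

```python
import math

# Hypothetical enrolled faces: name -> feature vector from step three.
DATABASE = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.7, 0.2, 0.8],
}

def identify(feature_vector, threshold=0.5):
    """Return the closest enrolled identity, or None if nothing is close."""
    best, best_dist = None, float("inf")
    for name, enrolled in DATABASE.items():
        dist = math.dist(feature_vector, enrolled)  # Euclidean distance
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist <= threshold else None

print(identify([0.12, 0.88, 0.31]))   # close to alice's template
print(identify([5.0, 5.0, 5.0]))      # no enrolled face nearby -> None
```

The distance threshold is the knob that trades false accepts against false rejects, which is why alignment and good feature extraction in the earlier steps matter so much.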

Ques :What is face recognition?

Face recognition is a method for identifying an unknown person or authenticating the identity of
a specific person from their face. It’s a branch of computer vision, but face recognition is
specialized and comes with social baggage for some applications, as well as some vulnerabilities
to spoofing.

How facial recognition works

Facial recognition is the process of identifying or verifying the identity of a person using their
face. It captures, analyzes, and compares patterns based on the person's facial details.

• The face detection process is an essential step as it detects and locates human faces in
images and videos.
• The face capture process transforms analog information (a face) into a set of digital
information (data) based on the person's facial features.
• The face match process verifies if two faces belong to the same person.
Face recognition applications
Face recognition applications mostly fall into three major categories: security, health, and
marketing/retail.

Security includes law enforcement, and that class of facial recognition uses can be as benign as
matching people to their passport photos faster and more accurately than humans can, and as
creepy as the “Person of Interest” scenario where people are tracked via CCTV and compared to
collated photo databases.

Health applications of facial recognition include patient check-ins, real-time emotion detection,
patient tracking within a facility, assessing pain levels in non-verbal patients, detecting certain
diseases and conditions, staff identification, and facility security.

Marketing and retail applications of face recognition include identification of loyalty program
members, identification and tracking of known shoplifters, and recognizing people and their
emotions for targeted product suggestions.

• Facial recognition software is based on the ability to first recognize faces, which is a
technological feat in itself.
• If you look in the mirror, you can see that your face has certain distinguishable landmarks.
These are the peaks and valleys that make up the different facial features.
• VISIONICS defines these landmarks as nodal points. There are about 80 nodal points on a
human face.
Face Recognition Operations
The technology system may vary when it comes to facial recognition. Different software applies
different methods and means to achieve face recognition. The stepwise method is as follows:
• Face Detection: To begin with, the camera will detect and recognize a face. The
face can be best detected when the person is looking directly at the camera as it
makes it easy for facial recognition. With the advancements in the technology, this
is improved where the face can be detected with slight variation in their posture of
face facing to the camera.
• Face Analysis: Then the photo of the face is captured and analyzed. Most facial
recognition relies on 2D images rather than 3D because it is more convenient to
match to the database. Facial recognition software will analyze the distance
between your eyes or the shape of your cheekbones.
• Image to Data Conversion: The photo is then converted to a mathematical formula,
and these facial features become numbers. This numerical code is known as a
faceprint. Just as every person has a unique fingerprint, they also have a unique
faceprint.
• Match Finding: The code is then compared against a database of other faceprints.
This database contains photos with identification that can be compared. The
technology identifies a match for your exact features in the provided database and
returns the match with attached information, such as name and address, depending
on what is saved in the database for that individual.

Application of face and image recognition

• Security/Counter-terrorism: Access control, comparing surveillance images to known
terrorists.
• Day Care: Verify identity of individuals picking up the children.
• Residential Security: Alert homeowners of approaching personnel
• Voter verification: Eligible voters are required to verify their identity during the
voting process; this is intended to prevent fraudulent votes.
• Banking using ATM: The software is able to quickly verify a customer’s face

Ques : What is Object Recognition?

• Object recognition is a computer vision technique for identifying objects in images or


videos.
• When humans look at a photograph or watch a video, we can readily spot people, objects,
scenes, and visual details.
• The goal is to teach a computer to do what comes naturally to humans: to gain a level of
understanding of what an image contains.
Object recognition is a key technology behind driverless cars, enabling them to recognize a stop
sign or to distinguish a pedestrian from a lamppost. It is also useful in a variety of applications
such as disease identification in bioimaging, industrial inspection, and robotic vision.

Humans can easily detect and identify objects present in an image. The human visual system is
fast and accurate and can perform complex tasks like identifying multiple objects and detect
obstacles with little conscious thought.
With the availability of large amounts of data, and better algorithms, we can now easily train
computers to detect and classify multiple objects within an image with high accuracy

Using object recognition to identify different categories of objects.

• Image classification involves predicting the class of one object in an image.
• Object localization refers to identifying the location of one or more objects in an image
and drawing a bounding box around their extent.
• Object detection combines these two tasks and localizes and classifies one or more
objects in an image.
• When a user or practitioner refers to “object recognition“, they often mean “object
detection“.
As such, we can distinguish between these three computer vision tasks:
• Image Classification: Predict the type or class of an object in an image.
• Input: An image with a single object, such as a photograph.
• Output: A class label (e.g. one or more integers that are mapped to class labels).
• Object Localization: Locate the presence of objects in an image and indicate their
location with a bounding box.
• Input: An image with one or more objects, such as a photograph.
• Output: One or more bounding boxes (e.g. defined by a point, width, and height).
• Object Detection: Locate the presence of objects with a bounding box and types or
classes of the located objects in an image.
• Input: An image with one or more objects, such as a photograph.
• Output: One or more bounding boxes (e.g. defined by a point, width, and height), and a
class label for each bounding box.
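Detection outputs like these are typically compared to ground truth with intersection-over-union (IoU); a minimal sketch, with illustrative box coordinates:

```python
# Object detection quality is usually scored by intersection-over-union
# (IoU) between a predicted box and the ground-truth box. Boxes here are
# (x_min, y_min, x_max, y_max); the values are illustrative.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlapping rectangle (zero area if the boxes do not intersect).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # partial overlap
print(iou((0, 0, 2, 2), (5, 5, 6, 6)))   # disjoint boxes -> 0.0
```

A common convention treats a detection as correct when its IoU with the ground truth exceeds some threshold such as 0.5.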

Applications of Object Recognition: the self-driving car example

If you think of self driving cars as an example (NOTE: the real self driving solutions are likely
more sophisticated with nuances, but go with this example for illustrative purposes), it requires
us to:
1. Determine the position of the identified object in the image. For example: if the identified
pedestrian is right in front or to the side
2. Identify more than one object. For example: a single image could have multiple cars,
many pedestrians, traffic light, etc
3. Identify the orientation of the object. For example: the front of the car is facing towards
and rear facing away (i.e. car is coming towards us or parked facing us)
• Accomplishing all this requires a little more to be done than the image classification
models.
Ques :What is a Robot (Robotics)?

A robot is a reprogrammable, multi-function manipulator designed to move material, parts,
tools or specialised devices through variable programmed motions for the performance of a
variety of tasks. (Robot Institute of America, 1979)

Robotics is the engineering science and technology of robots, and their design, manufacture,
application, and structural disposition. It requires a working knowledge of electronics,
mechanics, and software.

Classes of Robot
Most physical robots fall into one of three categories:
• Manipulators (robotic arms), which are anchored to their workplace and usually built from sets
of rigid links connected by joints.
• Mobile robots, which can move about in their environment using wheels, legs, etc.
• Hybrid robots, which include humanoid robots: mobile robots equipped with manipulators.

Robotics is a branch of engineering and science that draws on electronics engineering, mechanical
engineering, computer science, and so on. The branch deals with the design, construction,
operation, and control of robots, along with sensory feedback and information processing. Robots
are among the technologies expected to replace humans in many activities in the coming years.

Some characteristics of robots are given below:

• Appearance: Robots have a physical body. They are held together by the structure of
that body and are moved by their mechanical parts. Without a physical body, a robot
would be just a software program.
• Brain: Another name for a robot's brain is the on-board control unit. Through it the
robot receives information and sends out commands. The control unit is what lets the
robot know what to do; without it, the robot would be just a remote-controlled machine.
• Sensors: Robots use sensors to gather information from the outside world and send it
to the brain. These sensors contain circuits that produce a voltage in response to
what they measure.
• Actuators: The parts with whose help a robot moves are called actuators. Examples of
actuators are motors, pumps, and compressors. The brain tells these actuators when
and how to respond or move.
• Program: Robots only work or respond according to the instructions provided to them
in the form of a program. The program tells the brain when to perform which operation,
e.g. when to move or produce sounds, and how to use sensor data to make decisions.
• Behaviour: A robot's behaviour is decided by the program built for it. Once the robot
starts moving, one can often tell what kind of program has been installed inside it.
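The characteristics above fit together in a sense-think-act loop: sensors feed the brain, the program decides, and actuators carry out the command. The sketch below is a minimal, invented illustration of that loop; the sensor reading and commands are assumptions, not a real robot API.

```python
# Minimal sense-think-act loop sketch. The sensor value, threshold, and
# actuator command strings are invented for illustration only.

def read_sensor() -> dict:
    """Sensors: gather information from the outside world.
    Here we fake a fixed distance reading from an obstacle sensor."""
    return {"obstacle_distance_cm": 15}

def decide(reading: dict) -> str:
    """Brain/program: turn sensor data into a command."""
    if reading["obstacle_distance_cm"] < 20:
        return "turn_left"       # obstacle close: avoid it
    return "move_forward"        # path clear: keep going

def actuate(command: str) -> str:
    """Actuators: carry out the command (here, just report it)."""
    return f"motor: {command}"

# One iteration of the control loop:
print(actuate(decide(read_sensor())))  # motor: turn_left
```

A real robot would run this loop continuously, with the program (the `decide` step) determining the behaviour an observer sees.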
Advantages:
The advantages of using robots are given below:
• They can gather information that a human cannot.
• They can perform tasks quickly, efficiently, and without mistakes.
• Most robots are automatic, so they can perform many tasks without needing
human interaction.
• Robots are used in factories to produce items like planes, car parts, etc.
• They can be used for mining purposes and can be sent to the deepest parts of the earth.

Disadvantages:
The disadvantages of using robots are given below:
• They need a power supply to keep going.
• People working in factories may lose their jobs, as robots can replace them.
• They need high maintenance to keep working all day long, and the cost of
maintaining robots can be high.
• They can store huge amounts of data, but they are not as efficient as human
brains.
• Robots work only according to the program installed in them; beyond that program,
a robot cannot do anything different.
• Most importantly, if a robot's program falls into the wrong hands, it can cause a
huge amount of destruction.
Applications:
Some applications are given below:
• Caterpillar plans to develop remote-controlled machines and expects to have
heavy robots by 2021.
• Robots can also perform herding tasks.
• Robots are increasingly being used in manufacturing; in the auto industry,
more than half of the labour force consists of robots.
• Many robots are used as military robots.
• Robots have been used to clean up areas contaminated by toxic or industrial
waste.
• Agricultural robots.
• Household robots.
• Domestic robots.
