AI Complete Notes - Unit 1 To Unit 5
UNIT 1
Syllabus: The evolution of AI to the present, Various approaches to AI, What should all
engineers know about AI?, Other emerging technologies, AI and ethical concerns
Q 1 What is Intelligence?
• Intelligence can have many faces:
creativity, solving problems, pattern recognition, classification, learning, optimization,
surviving in an environment, language processing, planning, and knowledge.
❑ The field of AI aims to understand how humans perceive, interact and make decisions;
❑ then take this understanding to create machines that rival human competence in a wide
range of tasks.
❑ The study of mental faculties through the use of computational models.
❑ The study of how to make computers do things at which, at the moment, people are
better. (Rich & Knight, 1991)
❑ A field of study that seeks to explain and emulate intelligent behavior in terms of
computational processes.
• Alan Turing proposed that the Turing Test can be used to determine whether or not a machine can be considered intelligent.
• To pass the test, the computer would need capabilities such as natural language processing, knowledge representation, automated reasoning, and machine learning.
• In the test, an interrogator asks questions of two entities and receives answers from both. If the interrogator cannot tell which of the entities is the human and which is the computer program, the interrogator has been fooled and we should therefore consider the computer to be intelligent.
Turing proposed that a computer can be said to possess artificial intelligence if it can mimic
human responses under specific conditions. The original Turing Test requires three terminals,
each of which is physically separated from the other two. One terminal is operated by a
computer, while the other two are operated by humans.
During the test, one of the humans functions as the questioner, while the second human and the
computer function as respondents. The questioner interrogates the respondents within a specific
subject area, using a specified format and context. After a preset length of time or number of
questions, the questioner is then asked to decide which respondent was human and which was a
computer.
The test is repeated many times. If the questioner makes the correct determination in half of the
test runs or less, the computer is considered to have artificial intelligence because the questioner
regards it as "just as human" as the human respondent.
Q5 .Explain the History of Artificial Intelligence
(1943-1952)
o Year 1943: The first work which is now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950.
He published "Computing Machinery and Intelligence," in which he proposed a test that
can check a machine's ability to exhibit intelligent behaviour equivalent to human
intelligence, called the Turing Test.
(1956-1974)
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
(1974-1980)
o The duration between 1974 and 1980 was the first AI winter. AI winter refers to a period
in which computer scientists faced a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence declined.
(1980-1987)
o Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems
were programs that emulate the decision-making ability of a human expert.
o In the Year 1980, the first national conference of the American Association of Artificial
Intelligence was held at Stanford University.
(1987-1993)
o The duration between the years 1987 to 1993 was the second AI Winter duration.
o Investors and governments again stopped funding AI research due to high costs and
inefficient results. Expert systems such as XCON, which had initially been cost effective,
became too expensive to maintain.
(1993-2011)
o Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry
Kasparov, becoming the first computer to beat a world chess champion.
o Year 2002: for the first time, AI entered the home in the form of Roomba, a vacuum
cleaner.
o Year 2006: By 2006, AI had entered the business world. Companies like Facebook,
Twitter, and Netflix also started using AI.
Application of AI
1. AI in Astronomy
o Artificial Intelligence can be very useful to solve complex universe problems. AI technology can be helpful
for understanding the universe such as how it works, origin, etc.
2. AI in Healthcare
o In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have
a significant impact on it.
o Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help
doctors with diagnoses and can inform them when patients are worsening, so that medical help can reach the
patient before hospitalization.
3. AI in Gaming
o AI can be used for gaming purposes. AI machines can play strategic games like chess, where the
machine needs to think about a large number of possible positions.
4. AI in Finance
o AI and finance industries are the best matches for each other. The finance industry is implementing
automation, chatbot, adaptive intelligence, algorithm trading, and machine learning into financial
processes.
5. AI in Data Security
o The security of data is crucial for every company and cyber-attacks are growing very rapidly in the digital
world. AI can be used to make your data safer and more secure. Examples such as the AEG bot and the AI2
Platform are used to detect software bugs and cyber-attacks more effectively.
6. AI in Social Media
o Social Media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to
be stored and managed in a very efficient way. AI can organize and manage massive amounts of data. AI
can analyze lots of data to identify the latest trends, hashtag, and requirement of different users.
7. AI in Travel & Transport
o AI is in high demand in the travel industry. AI is capable of performing various travel-related tasks,
from making travel arrangements to suggesting hotels, flights, and the best routes to customers.
o Travel industries are using AI-powered chatbots which can interact with customers in a human-like way for
better and faster responses.
8. AI in Automotive Industry
o Some automotive industries are using AI to provide virtual assistants to their users for better performance.
For example, Tesla has introduced TeslaBot, an intelligent virtual assistant.
o Various industries are currently working on developing self-driving cars which can make journeys
safer and more secure.
9. AI in Robotics:
o Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to
perform some repetitive task, but with the help of AI we can create intelligent robots which can
perform tasks from their own experience without being pre-programmed.
o Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots named
Erica and Sophia have been developed, which can talk and behave like humans.
10. AI in Entertainment
o We are currently using some AI based applications in our daily life with some entertainment services such
as Netflix or Amazon. With the help of ML/AI algorithms, these services show the recommendations for
programs or shows.
11. AI in Agriculture
o Agriculture is an area which requires various resources, labor, money, and time for the best result. Nowadays
agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI for agricultural
robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.
12. AI in E-commerce
o AI is providing a competitive edge to the e-commerce industry, and it is becoming more demanding in the
e-commerce business. AI is helping shoppers to discover associated products with recommended size,
color, or even brand.
13. AI in education:
o AI can automate grading so that the tutor can have more time to teach. AI chatbot can communicate with
students as a teaching assistant.
o In the future, AI can work as a personal virtual tutor for students, which will be easily accessible at any
time and any place.
Social Networking
• Google uses AI to ensure that nearly all of the email landing in your inbox is
authentic. Their filters attempt to sort emails into the following categories (Primary,
Social, Promotions, Updates, Forums, Spam). The program helps your emails get
organized so you can find your way to important communications quicker.
• Chatbots : Chatbots recognize words and phrases in order to (hopefully) deliver helpful
content.
• Chatbots attempt to mimic natural language, simulating conversations as they help with
routine tasks such as booking appointments, taking orders etc
Robotic vehicles :
• A driverless robotic car named STANLEY sped through the rough terrain of the Mojave
Desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand
Challenge.
• STANLEY is a Volkswagen Touareg outfitted with cameras, radar, and laser rangefinders
to sense the environment and onboard software to command the steering, braking, and
acceleration (Thrun, 2006).
Speech recognition
A traveller calling United Airlines to book a flight can have the entire conversation
guided by an automated speech recognition and dialog management system.
Autonomous planning and scheduling
• A hundred million miles from Earth, NASA’s Remote Agent program became the first
on-board autonomous planning program to control the scheduling of operations for a
spacecraft.
• REMOTE AGENT generated plans from high-level goals specified from the ground and
monitored the execution of those plans—detecting, diagnosing, and recovering from
problems as they occurred
Game playing
• IBM’s DEEP BLUE became the first computer program to defeat the world champion in
a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition
match.
Spam fighting
• Each day, learning algorithms classify over a billion messages as spam, saving the
recipient from having to waste time deleting what, for many users, could comprise 80%
or 90% of all messages, if not classified away by algorithms
Logistics planning
• During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and
Replanning Tool, DART, to do automated logistics planning and scheduling for transportation.
• This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for
starting points, destinations, routes, and conflict resolution among all parameters.
• The AI planning techniques generated in hours a plan that would have taken weeks with
older methods
Robotics
• The iRobot Corporation has sold over two million Roomba robotic vacuum cleaners for
home use.
• The company also deploys the more rugged PackBot to Iraq and Afghanistan, where it is
used to handle hazardous materials, clear explosives, and identify the location of snipers
Machine Translation
• A computer program automatically translates text from one natural language to another (for example, Google Translate).
Types of AI based on capability
• Different Artificial Intelligence entities are built for different purposes, and that's how
they vary.
Type-1: Artificial Narrow Intelligence (ANI)
• Designed to solve one single problem and able to execute a single task really well.
• They have narrow capabilities, like recommending a product for an e-commerce user or
predicting the weather.
Artificial General Intelligence (AGI)
• It’s defined as AI which has a human-level of cognitive function, across a wide variety
of domains such as language processing, image processing, reasoning etc.
• This would include decision making, taking rational decisions, and even includes things
like making better art and building emotional relationships
• Based on the ways the machines behave, there are four types of Artificial Intelligence
approaches –
• Reactive Machines
• Limited Memory
• Theory of Mind
• Self-Awareness
Reactive Machines
• These machines are the most basic form of AI applications. An example of a reactive
machine is Deep Blue, IBM's chess-playing supercomputer.
• The AI teams do not use any training sets to feed the machines, nor do the latter store
data for future references.
• Based on the move made by the opponent, the machine decides/predicts the next move.
Limited Memory
• These machines belong to the Class II category of AI applications. Self-driving cars are a
perfect example.
• These machines are fed with data and are trained with other cars’ speed and direction,
lane markings, traffic lights, curves of roads, and other important factors, over time
Theory of Mind
• This is where we currently are: researchers are still struggling to make this concept work,
and we are not there yet.
• Theory of mind is the concept where machines will be able to understand human
emotions and thoughts, and how to react to them.
Self-Awareness
• These machines are the extension of the Class III type of AI.
This is the phase where AI teams build machines with a self-awareness factor programmed into
them, which seems far-fetched from where we stand today.
Weak AI refers to the use of software to study or accomplish specific problem-solving or
reasoning tasks that do not encompass the full range of human cognitive abilities.
Example: a chess program such as Deep Blue.
Weak AI does not achieve self-awareness or demonstrate the full range of human-level cognitive
abilities; it is merely an intelligent, specific problem-solver. Strong AI, in contrast, refers to
machines with human-level cognitive abilities across a wide range of domains.
Weak AI vs Strong AI
• Scope: Weak AI is a narrow application with a limited scope; Strong AI is a wider application with a much broader scope.
• Capability: Weak AI is good at specific tasks; Strong AI has human-level intelligence.
• Learning: Weak AI uses supervised and unsupervised learning to process data; Strong AI uses clustering and association to process data.
• Examples: Weak AI (Siri, Alexa); Strong AI (advanced robotics).
Weak artificial intelligence (AI), also called narrow AI, is a type of artificial intelligence that is
limited to a specific or narrow area. Weak AI simulates human cognition. It has the potential to
benefit society by automating time-consuming tasks and by analyzing data in ways that humans
sometimes cannot.
• In Supervised learning, you train the machine using data which is well "labeled."
• Unsupervised learning is a machine learning technique, where you do not need to
supervise the model.
• Supervised learning allows you to collect data or produce a data output from the previous
experience.
• Unsupervised machine learning helps you to find all kinds of unknown patterns in data.
• For example, with supervised learning you would be able to determine the time taken to
reach back home based on weather conditions, the time of day, and holidays.
• For example, a baby can identify other dogs based on its past (supervised) learning.
• Regression and Classification are two types of supervised machine learning techniques.
• Clustering and Association are two types of Unsupervised learning.
• In a supervised learning model, input and output variables will be given while with
unsupervised learning model, only input data will be given.
• One practical example of supervised learning problems is predicting house prices. How is
this achieved?
• First, we need data about the houses: square footage, number of rooms, features, whether
a house has a garden or not, and so on. We then need to know the prices of these houses,
i.e. the corresponding labels. By leveraging data coming from thousands of houses, their
features and prices, we can now train a supervised machine learning model to predict a
new house’s price based on the examples observed by the model.
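A minimal sketch of this supervised-learning workflow in Python (the feature values, prices, and the use of scikit-learn's LinearRegression here are illustrative assumptions, not data from the text):

```python
# Minimal supervised-learning sketch: predicting house prices from labeled examples.
# The numbers below are invented for illustration only.
from sklearn.linear_model import LinearRegression

# Features: [square footage, number of rooms, has_garden (1/0)]
X_train = [
    [1400, 3, 1],
    [1600, 3, 0],
    [1700, 4, 1],
    [1875, 4, 1],
    [1100, 2, 0],
]
# Labels: known selling prices for the houses above
y_train = [245000, 312000, 279000, 308000, 199000]

model = LinearRegression()
model.fit(X_train, y_train)          # learn from (features, price) pairs

new_house = [[1500, 3, 1]]           # unseen house described by the same features
predicted_price = model.predict(new_house)[0]
print(f"Predicted price: {predicted_price:,.0f}")
```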
Before learning about Artificial Intelligence, we should know the importance of AI and why we
should learn it. Following are some main reasons to learn about AI:
o With the help of AI, you can create such software or devices which can solve real-world
problems very easily and with accuracy such as health issues, marketing, traffic issues,
etc.
o With the help of AI, you can create your personal virtual Assistant, such as Cortana,
Google Assistant, Siri, etc.
o With the help of AI, you can build such Robots which can work in an environment where
survival of humans can be at risk.
o AI opens a path for other new technologies, new devices, and new Opportunities.
o The Google search engine uses numerous AI (machine learning) techniques: grouping
together top news stories from numerous sources, analyzing data from over 3 billion
web pages to improve search results, and analyzing which search results are most often
followed, i.e. which results are most relevant.
o Artificial intelligence takes the role of an experienced clinical assistant who helps doctors
make faster and more reliable diagnoses.
o We already see AI applications in the areas of imaging and diagnostics, and oncology.
o AI algorithms are able to take information from electronic health records, prescriptions,
insurance records and even wearable sensor devices to design a personalized treatment
plan for patients.
o These AI-related technologies accelerate the discovery and creation of new medicines
and drugs.
Q 13 What are the Different Branches of AI
1. Machine Learning
It is the science that enables machines to translate, execute and investigate data for
solving real-world problems. ML algorithms are created using complex mathematical skills
and are coded in a machine language in order to make a complete ML system.
Applications of Machine Learning
Computer vision, which is used for facial recognition, attendance marking through
fingerprints, or vehicle identification through number plates.
Information retrieval in search engines, such as text search and image search.
Automated email marketing with specified target identification.
Medical diagnosis of cancer tumors or anomaly identification of any chronic disease
2. Neural Network
A neural network replicates the human brain: the human brain comprises a vast number of
neurons, and coding such brain-like neurons into a system or a machine is what a neural
network does.
Applications of neural networks
Character Recognition - The idea of character recognition has become very important
as handheld devices like the Palm Pilot are becoming increasingly popular. Neural
networks can be used to recognize handwritten characters.
Image Compression - Neural networks can receive and process vast amounts of
information at once, making them useful in image compression. With the Internet
explosion and more sites using more images on their sites, using neural networks for
image compression is worth a look.
Stock Market Prediction - The day-to-day business of the stock market is extremely
complicated. Many factors weigh in whether a given stock will go up or down on any
given day. Since neural networks can examine a lot of information quickly and sort it all
out, they can be used to predict stock prices.
3. Robotics
It deals with the design, production, operation, and use of robots, together with the
computer systems for their control, intelligent outcomes, and information transformation.
Outer Space Applications: Robots play a very important role in outer space
exploration. Robotic unmanned spacecraft are key to exploring the stars,
planets...
Military Applications: The Predator drone, which is capable of taking surveillance
photographs and even accurately launching missiles at ground targets without a pilot.
Intelligent Home Applications: We can monitor home security, environmental
conditions and energy usage with intelligent robotic home systems. Door and windows
can be opened automatically and appliances such as lighting and air conditioning can be
pre-programmed to activate. This assists occupants irrespective of their state of mobility.
4 Expert systems
These are built to deal with complex problems via reasoning through bodies of
expertise, expressed mainly in the form of "if-then" rules rather than conventional
procedural code.
Medical Domain: Diagnosis systems to deduce the cause of disease from observed data,
and systems for conducting medical operations on humans.
Monitoring Systems : Comparing data continuously with observed system or with
prescribed behavior such as leakage monitoring in long petroleum pipeline.
Process Control Systems : Controlling a physical process based on monitoring.
Knowledge Domain : Finding out faults in vehicles, computers.
5 Fuzzy Logic
Fuzzy logic is a reasoning method that works with degrees of truth rather than the usual
true/false (Boolean) values, which makes it useful for handling imprecise or uncertain information.

AI and Ethical Concerns
1. Unemployment
• Many companies and start-ups are automating a lot of the processes previously done by
humans.
• This leads to increased unemployment in many developed countries, and with the
growth of AI it will only skyrocket; lately there do not seem to be any measures
being taken.
• An example is with the implementation of driver-less cars, assuming the time comes
when taxis are driver-less what then becomes of the millions of taxi drivers in the
country?
• This is only one of many major employment providers that in future could suffer a
massive amount of lost jobs.
4. Legal Issues
o AI continues to grow so fast that the law can't keep up with the pace.
o A lot of legal issues arising with AI are normally decided on a first-impression basis.
An example is Uber's self-driving car testing in Arizona, United States. A test car
accidentally killed a 49-year-old woman in March 2018, and Uber had to suspend all testing
due to this accident. On a first-impression basis test permits are given, but how can this
be regulated to ensure safety? The permit was obviously given based on results of research,
accident mitigation, and other variables, but a fatal accident still happened.
5. Reduction of human to human Interaction
• AI is very convenient in doing simple tasks like automating e-mails, using chat bots to
make conversation and other nifty uses but there will come a day when they can do so
much more.
• Humans tend to attach mental states to robots that are not meant to have any.
Imagine an AI that plays games with you or an AI that regularly chats with you as though
it were your best friend.
This kind of interaction causes attachment. The damage that mobile phones and the internet
have done to social interaction can already be seen; imagine a situation where these
attachments grow, and the already declining level of social interaction could drop even further
if care is not taken.
6. Lack of Privacy
• AI's sub-category Machine Learning can predict shopping preferences, music preferences,
and the locations a person might be at a certain time, and, as mentioned earlier, predict the
possibility of someone being a criminal, all from the data that is shared.
• When a machine is tracking your behaviour, there is almost zero privacy in these kinds of
situations; and during an investigation, can that data be considered valid in court?
.................................
UNIT 2
Syllabus: History of Data, Data Storage and Importance of Data and its Acquisition, The Stages
of Data Processing, Data Visualization, Regression, Prediction & Classification,
Clustering & Recommender Systems
Q1 What is data
• Data – a collection of facts (numbers, words, measurements, observations, etc) that has
been translated into a form that computers can process
• In general, data is simply another word for information.
• But in computing and business (most of what you read about in the news when it comes
to data – especially if it’s about Big Data), data refers to information that is machine-
readable as opposed to human-readable.
• Human-readable (also known as unstructured data) refers to information that only
humans can interpret and study, such as an image or the meaning of a block of text.
• If it requires a person to interpret it, that information is human-readable.
• Machine-readable (or structured data) refers to information that computer programs can
process.
• A program is a set of instructions for manipulating data. And when we take data and
apply a set of programs, we get software.
• It is through data collection that a business or management has the quality information
they need to make informed decisions from further analysis, study, and research.
• Without data collection, companies would stumble around in the dark using outdated
methods to make their decisions.
• Data collection instead allows them to stay on top of trends, provide answers to problems,
and analyze new insights to great effect.
• Data acquisition is the process of sampling signals that measure real world physical
conditions and converting the resulting samples into digital numeric values that can be
manipulated by a computer.
• Data acquisition systems, abbreviated as DAS or DAQ, typically convert analog
waveforms into digital values for processing.
• The primary purpose of a data acquisition system is to acquire and store the data. But
they are also intended to provide real-time and post-recording visualization and analysis
of the data
• The components of data acquisition systems include:
• Sensors, to convert physical parameters to electrical signals.
• Signal conditioning circuitry, to convert sensor signals into a form that can be converted
to digital values.
• Analog-to-digital converters, to convert conditioned sensor signals to digital values.
Data acquisition applications are usually controlled by software programs developed using
various general purpose programming languages such as:
• Assembly,
• BASIC, C, C++,
• C#, Fortran, Java,
• LabVIEW, Lisp, Pascal, etc
Advantages of secondary data storage:
Capacity: Organizations may store the equivalent of a roomful of data on sets of disks that
take up little space. A simple disk for a personal computer holds the equivalent of 500 printed
pages.
Reliability: Data in secondary storage is basically safe, since secondary storage is physically
reliable. Also, it is more difficult for unauthorized users to access the data.
Convenience: With the help of a computer, authorized people can locate and access data
quickly.
The Stages of Data Processing
Collection
Collection of data refers to the gathering of data. The data gathered should be defined and accurate.
Preparation
Preparation is a process of constructing a dataset of data from different sources for future use in
processing step of cycle.
Input
Input refers to supply of data for processing. It can be fed into computer through any of input
devices like keyboard, scanner, mouse, etc.
Processing
Processing refers to the actual execution of instructions. In this stage, raw facts or
data are converted into meaningful information.
Output and Interpretation
In this process, output will be displayed to user in form of text, audio, video, etc. Interpretation
of output provides meaningful information to user.
Storage
In this process, we can store data, instruction and information in permanent memory for future
reference.
The collection, manipulation, and processing of collected data for the required use is known as data
processing. It is a technique normally performed by a computer; the process includes retrieving,
transforming, or classifying information.
However, the processing of data largely depends on the following −
Q 7 What is Data Visualization and what are the types of data Visualization.
• Visualization transforms data into images that effectively and accurately represent
information about the data.
• Data visualization is a graphic representation that expresses the significance of data.
• It reveals insights and patterns that are not immediately visible in the raw data. It is an art
through which information, numbers, and measurements can be made more
understandable.
• Data visualization convert large and small data sets into visuals, which is easy to
understand and process for humans.
• Data visualization tools provide accessible ways to understand outliers, patterns, and
trends in the data.
• In the world of Big Data, the data visualization tools and technologies are required to
analyze vast amounts of information.
• The main goal of data visualization is to communicate information clearly and effectively
through graphical means.
• By using visual elements like charts, graphs, and maps, data visualization tools provide
an accessible way to see and understand trends, outliers, and patterns in data.
Chart : The easiest way to show the development of one or several data sets is a chart. Charts
vary from bar and line charts that show the relationship between elements over time to pie charts
that demonstrate the components or proportions between the elements of one whole.
Plots: Plots allow you to distribute two or more data sets over a 2D or even 3D space to
show the relationship between these sets and the parameters on the plot.
• Table, Map (a short charting sketch follows below).
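As a small illustration of a chart and a plot, here is a sketch using Python's matplotlib; the monthly sales and ad-spend numbers are invented placeholders:

```python
# Minimal data-visualization sketch: a bar chart and a scatter plot side by side.
# The values are placeholder data for illustration.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May"]
sales = [120, 150, 90, 180, 210]      # bar chart: development of one data set
ad_spend = [10, 14, 8, 17, 20]        # scatter plot: relationship between two sets

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.bar(months, sales, color="steelblue")
ax1.set_title("Monthly sales (chart)")
ax1.set_ylabel("Units sold")

ax2.scatter(ad_spend, sales, color="darkorange")
ax2.set_title("Ad spend vs. sales (plot)")
ax2.set_xlabel("Ad spend")
ax2.set_ylabel("Units sold")

plt.tight_layout()
plt.show()
```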
Data visualization allows you to interact with data. Google, Apple, Facebook, and Twitter all
ask better questions of their data and make better business decisions by using data
visualization. Here are some data visualization tools that help you to visualize data:
1. MS Excel
• You can display your data analysis reports in a number of ways in Excel. However, if
your data analysis results can be visualized as charts that highlight the notable points in
the data, your audience can quickly grasp what you want to project in the data. It also
leaves a good impact on your presentation style
• In Excel, charts are used to make a graphical representation of any set of data. A chart is
a visual representation of the data, in which the data is represented by symbols such as
bars in a Bar Chart or lines in a Line Chart. Excel provides you with many chart types
and you can choose one that suits your data or you can use the Excel Recommended
Charts option to view charts customized to your data and select one of those.
2. Tableau : Tableau is a data visualization tool. You can create graphs, charts, maps, and
many other graphics.
3 Infogram: Infogram is also a data visualization tool. It involves a few simple steps:
1. First, you choose among many templates, personalize them with additional visualizations
like maps, charts, videos, and images.
2. Then you are ready to share your visualization
4 Plotly : Plotly will help you to create a slick and sharp chart in just a few minutes or in a very
short time. It also starts from a simple spreadsheet.
5 Chartblocks: Chartblocks is an easy-to-use online tool which requires no coding and
builds visualizations from databases, spreadsheets, and live feeds.
Q 9 What Is Regression ?
Regression is a statistical method used in finance, investing, and other disciplines that attempts to
determine the strength and character of the relationship between one dependent variable (usually
denoted by Y) and a series of other variables (known as independent variables).
In Regression, we plot a graph between the variables which best fits the given datapoints, using
this plot, the machine learning model can make predictions about the data.
"Regression shows a line or curve that passes through all the datapoints on target-predictor graph
in such a way that the vertical distance between the datapoints and the regression line is
minimum."
Regression is a supervised learning technique which helps in finding the correlation between
variables and enables us to predict the continuous output variable based on the one or more
predictor variables. It is mainly used for prediction, forecasting, time series modeling, and
determining the causal-effect relationship between variables.
For example, suppose a company has data on its advertising spend and the corresponding sales in
past years. If the company now plans to spend $200 on advertising in 2019 and wants to predict
the sales for that year, we need regression analysis to solve such prediction problems in
machine learning.
Regression analysis is used in stats to find trends in data. For example, you might guess
that there’s a connection between how much you advertise and how much your sales
improve; regression analysis can help you quantify that.
Q 10 What are the different types of Regression?
Linear Regression: The dependent and independent variables show a linear relationship
described by a slope and an intercept. Simple linear regression: Y = b0 + b1 X.
Non-linear (Polynomial) Regression: It is commonly used for more complicated data sets in
which the dependent and independent variables show a nonlinear relationship.
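A short sketch of fitting the simple regression line Y = b0 + b1 X by ordinary least squares with NumPy; the advertising and sales figures are invented to echo the earlier example:

```python
# Fit Y = b0 + b1*X by ordinary least squares (simple linear regression).
# Invented data: advertising spend (X) vs. sales (Y).
import numpy as np

X = np.array([100, 120, 150, 170, 200], dtype=float)       # advertisement spend
Y = np.array([1100, 1300, 1500, 1700, 1900], dtype=float)  # sales

# Closed-form least-squares estimates of slope (b1) and intercept (b0)
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()

print(f"Fitted line: Y = {b0:.2f} + {b1:.2f} * X")
print("Predicted sales for $200 of advertising:", b0 + b1 * 200)
```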
Classification
It is the process of finding a model that describes and distinguishes data classes and concepts.
The idea is to use this model to predict the class of objects. The derived model is dependent on
the examination of sets of training data.
Prediction
Predicting the identity of one thing based purely on the description of another, related thing.
Classification is the process of identifying the category or class label of a new observation.
Prediction is the process of estimating missing or unavailable numerical values for a new
observation. That is the key difference between classification and prediction: prediction is not
concerned with a class label in the way classification is.
Prediction is saying something about what may happen in the future. Prediction may be viewed
as a kind of classification.
Following are the examples of cases where the data analysis task is Classification −
A bank loan officer wants to analyze the data in order to know which customers (loan
applicants) are risky and which are safe.
A marketing manager at a company needs to analyze a customer with a given profile,
who will buy a new computer.
In both of the above examples, a model or classifier is constructed to predict the
categorical labels. These labels are risky or safe for loan application data and yes or no
for marketing data.
Following are the examples of cases where the data analysis task is Prediction −
Suppose the marketing manager needs to predict how much a given customer will spend
during a sale at his company. In this example we need to predict a numeric value.
Therefore the data analysis task is an example of numeric prediction. In this case, a
model or a predictor will be constructed that predicts a continuous-valued-function or
ordered value.
Supervised vs. Unsupervised Classification
Supervised Classification = Classification
We know the class labels and the number of classes
Unsupervised Classification = Clustering
We do not know the class labels and may not know the number of classes
Q13 How Does Classification Works?
With the help of the bank loan application that we have discussed above, let us
understand the working of classification. The Data Classification process includes two
steps −
Building the Classifier or Model
Using the Classifier for Classification (a short code sketch of both steps follows)
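A minimal sketch of these two steps with a decision-tree classifier; the loan-applicant features and the risky/safe labels are made-up toy values:

```python
# Step 1: build the classifier from labeled training data.
# Step 2: use it to classify a new loan applicant.
# Toy data: [income in thousands, existing debt in thousands, years employed]
from sklearn.tree import DecisionTreeClassifier

X_train = [
    [25, 20, 1],
    [40, 5, 4],
    [60, 10, 8],
    [30, 25, 2],
    [80, 5, 10],
]
y_train = ["risky", "safe", "safe", "risky", "safe"]  # known class labels

clf = DecisionTreeClassifier()          # building the classifier (model)
clf.fit(X_train, y_train)

new_applicant = [[45, 15, 3]]           # using the classifier for classification
print(clf.predict(new_applicant)[0])    # prints "risky" or "safe"
```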
The following are the most important and useful ML clustering algorithms −
K-means Clustering
This clustering algorithm computes the centroids and iterates until it finds the optimal centroids. It
assumes that the number of clusters is already known. It is also called a flat clustering algorithm.
The number of clusters identified from the data by the algorithm is represented by 'K' in K-means.
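A short K-means sketch with scikit-learn, assuming K = 2 is already known; the 2-D points are arbitrary illustrative values:

```python
# K-means clustering: choose K, then iterate centroid updates until convergence.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([
    [1.0, 2.0], [1.5, 1.8], [1.2, 2.2],    # one apparent group
    [8.0, 8.0], [8.5, 7.5], [7.8, 8.3],    # another apparent group
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)  # K is assumed known
labels = kmeans.fit_predict(points)

print("Cluster label for each point:", labels)
print("Centroids found:\n", kmeans.cluster_centers_)
```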
Mean-Shift Algorithm
It is another unsupervised learning algorithm that is used to group together the unlabeled data
points having similar characteristics.
Applications of Clustering
Marketing: It can be used to characterize and discover customer segments for marketing
purposes.
Biology : It can be used for classification among different species of plants and animals.
Libraries : It is used in clustering different books on the basis of topics and information.
Insurance: It is used to analyse customers and their policies and to identify
frauds.
City Planning: It is used to make groups of houses and to study their values based on
their geographical locations and other factors present.
Earthquake studies: By learning the earthquake-affected areas we can determine the
dangerous zones.
Recommender systems are among the most popular applications of data science today.
They are used to predict the "rating" or "preference" that a user would give to an item.
Almost every major tech company has applied them in some form. Amazon uses it to
suggest products to customers, YouTube uses it to decide which video to play next on
autoplay, and Facebook uses it to recommend pages to like and people to follow.
Based on previous (past) behaviour, a recommender system predicts the likelihood that a user
would prefer an item.
For example, Netflix uses a recommendation system. It suggests new movies to people
according to their past activities, such as watching and rating movies.
The purpose of recommender systems is to recommend new things that people have not seen
before.
What's more, for some companies like Netflix, Amazon Prime, Hulu, and Hotstar, the
business model and its success revolves around the potency of their recommendations.
Netflix even offered a million dollars in 2009 to anyone who could improve its system by
10%.
There are also popular recommender systems for domains like restaurants, movies, and
online dating. Recommender systems have also been developed to explore research
articles and experts, collaborators, and financial services. YouTube uses the
recommendation system at a large scale to suggest you videos based on your history. For
example, if you watch a lot of educational videos, it would suggest those types of videos.
Q 18 What are the different type of Recommendation system ?
Collaborative filtering engines(User based): these systems are widely used, and they
try to predict the rating or preference that a user would give an item-based on past ratings
and preferences of other users. Collaborative filters do not require item metadata like its
content-based counterparts.
Content-based recommenders(Item Based): suggest similar items based on a particular
item. This system uses item metadata, such as genre, director, description, actors, etc. for
movies, to make these recommendations. The general idea behind these recommender
systems is that if a person likes a particular item, he or she will also like an item that is
similar to it. And to recommend that, it will make use of the user's past item metadata. A
good example could be YouTube, where based on your history, it suggests you new
videos that you could potentially watch
User Based Collaborative Filtering
Collaborative filtering makes recommendations according to a combination of your
experience and the experiences of other people.
First, we need to make a user vs item matrix.
Each row is a user and each column is an item such as a movie, product, or website.
Secondly, compute similarity scores between users.
Each row (user) is a vector.
Compute the similarity between these rows (users).
Thirdly, find users who are similar to you based on past behaviour. Finally, it suggests
items that you have not experienced before.
Let's make an example of user-based collaborative filtering.
Think that there are two people.
The first one watched two movies: The Lord of the Rings and The Hobbit.
The second one watched only The Lord of the Rings.
User-based collaborative filtering computes the similarity of these two people and
sees that both watched The Lord of the Rings.
Then it recommends The Hobbit to the second person, as can be seen in the sketch below.
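A compact sketch of this user-based idea using cosine similarity on a tiny watched/not-watched matrix (the two users and the third movie title are illustrative assumptions):

```python
# User-based collaborative filtering on a tiny "watched" matrix.
# Rows = users, columns = items (movies); 1 = watched, 0 = not watched.
import numpy as np

movies = ["Lord of the Rings", "Hobbit", "Some Other Movie"]
ratings = np.array([
    [1, 1, 0],   # user 0 watched LOTR and Hobbit
    [1, 0, 0],   # user 1 watched only LOTR
])

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

target_user, other_user = 1, 0
sim = cosine_similarity(ratings[target_user], ratings[other_user])

# Recommend items the similar user watched that the target user has not seen.
recommendations = [
    movies[i]
    for i in range(len(movies))
    if ratings[other_user, i] == 1 and ratings[target_user, i] == 0
]
print(f"Similarity between users: {sim:.2f}")
print("Recommended for user 1:", recommendations)   # -> ['Hobbit']
```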
UNIT 3
Q1 What is NLP?
Natural Language Processing is the subfield of computer science, more specifically of
AI, which enables computers/machines to understand, process and manipulate human
language.
Processing of natural language is required when you want an intelligent system like a
robot to perform as per your instructions, when you want to hear a decision from a dialogue-based
clinical expert system, and so on.
Natural language comes in two main forms: speech and written text.
Advantages of Natural Language Processing
Q2 Application of NLP
Sentiment Analysis
Mostly used in web and social media monitoring, Natural Language Processing is a great tool
to comprehend and analyse the responses to business messages published on social media
platforms. It helps to analyse the attitude and emotional state of the writer (the person
commenting/engaging with posts).
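As a toy illustration of the idea only, here is a tiny lexicon-based sentiment scorer; real sentiment analysis systems use trained NLP models rather than a hand-made word list:

```python
# Toy lexicon-based sentiment scorer: counts positive vs. negative words.
POSITIVE = {"good", "great", "love", "excellent", "happy", "amazing"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "sad", "awful"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, it is amazing"))        # positive
print(sentiment("Terrible support, really bad experience"))   # negative
```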
Chatbots & Virtual Assistants
Chatbots and virtual assistants are used for automatic question answering,
designed to understand natural language and deliver an appropriate response
through natural language generation. Standard question answering systems follow
pre-defined rules, while AI-powered chatbots and virtual assistants are able to learn
from every interaction and understand how they should respond. The best part:
they learn from interactions and improve over time.
Speech Recognition
Auto-Correct
Natural Language Processing plays a vital role in grammar checking software and
auto-correct functions. Tools like Grammarly, for example, use NLP to help you
improve your writing, by detecting grammar, spelling, or sentence structure errors.
Market Intelligence
Marketers can benefit from natural language processing to learn more about their
customers and use those insights to create more effective strategies.
Analyzing topics, sentiment, keywords, and intent in unstructured data can really
boost your market research, shedding light on trends and business opportunities.
You can also analyze data to identify customer pain points and to keep an eye on
your competitors (by seeing what things are working well for them and which are
not).
Text Summarization
Machine Translation
However, if you’ve been an avid user of Google Translate over the years, you’ll know
that it has come a long way since its inception, mainly thanks to huge advances in
the field of neural networks and the increased availability of large amounts of data.
Text Extraction
Applications of text extraction include sifting through incoming support tickets and
identifying specific data, like company names, order numbers, and email addresses
without needing to open and read every ticket.
Natural Language Understanding (NLU)
NLU makes it possible for machines to understand the overall context and meaning of
"natural language," beyond literal definitions. Its goal is to understand written or spoken
language the way a human would.
NLU is used in natural language processing (NLP) tasks like topic classification,
language detection, and sentiment analysis:
• Sentiment analysis automatically interprets emotions within a text and categorizes them
as positive, negative, or neutral. By quickly understanding, processing, and analyzing
thousands of online conversations, sentiment analysis tools can deliver valuable insights
about how customers view your brand and products.
• Topic classification is able to understand natural language to automatically sort texts into
predefined groups or topics. Software company Atlassian, for example, uses the
tags Reliability, Usability, and Functionality to sort incoming customer support tickets,
enabling them to deal with customer issues efficiently.
Accurately translating text or speech from one language to another is one of the toughest
challenges of natural language processing and natural language understanding.
o Automated Reasoning
o Question Answering
Question answering is a subfield of NLP and speech recognition that uses NLU to help
computers automatically understand natural language questions
• Natural Language Generation (NLG)
It is the process of producing meaningful phrases and sentences in the form of natural
language from some internal representation.
NLG is an AI-driven software solution that extracts data from complex sources to
produce naturally worded content.
It Influences buying behavior with more targeted and individualized content for increased
sales.
It Means you have to spend less time on routine tasks and more time perfecting digital
experiences that compel customers to action.
Stages of NLG
Text planning − It includes retrieving the relevant content from the knowledge base.
Sentence planning − It includes choosing the required words, forming meaningful phrases, and setting the tone of the sentence.
Text realization − It is mapping the sentence plan into sentence structure.
Speech Recognition
Speech recognition is the process that enables a computer to recognize and respond to
spoken words, converting them into a format that the machine understands. The
machine may then convert this into another form of data depending on the end goal.
It is an interdisciplinary subfield of computer science and computational linguistics that
develops methodologies and technologies that enable the recognition and translation of
spoken language into text by computers.
For example, Google Dictate and other transcription programs use speech recognition to
convert your spoken words into text while digital assistants like Siri and Alexa respond in
text format or voice
First, speech recognition that allows the machine to catch the words, phrases and
sentences we speak
Second, natural language processing to allow the machine to understand what we speak,
and
Third, speech synthesis to allow the machine to speak.
• A speech capturing Device: It consists of a microphone, which converts the sound wave
signals to electrical signals and an Analog to Digital Converter which samples and
digitizes the analog signals to obtain the discrete data that the computer can understand.
• Preprocessed signal storage: The preprocessed speech is stored in the memory to carry
out further task of speech recognition.
• Reference Speech patterns: The computer or the system consists of predefined speech
patterns or templates already stored in the memory, to be used as the reference for
matching.
• Pattern matching algorithm: The unknown speech signal is compared with the
reference speech pattern to determine the actual words or the pattern of words.
Q7 Applications of Speech Recognition
• Digital assistants,
• Security devices
• Smartphones for call routing, speech-to-text processing, voice dialling and voice search.
• Word processing applications like Microsoft Word, where users can dictate what they
want to show up as text
Recently, new tools designed to simplify the interaction between humans and computers
have hit the market: Chatbots or Virtual Assistants. In banking, chatbots and virtual
assistants are some of the industry’s newest tools designed to simplify the interaction
between humans and computers
Chatbots are not a recent development. They are simulations which can understand
human language, process it and interact back with humans while performing specific
tasks.
A chatbot is a software application used to conduct an on-line chat conversation via text
or text-to-speech, in lieu of providing direct contact with a live human agent
From a technological point of view, a chatbot only represents the natural evolution of a
Question Answering system leveraging Natural Language Processing (NLP).
Types of Chatbots
There are many types of chatbots available, a few of them can be majorly classified as
follows:
Text-based chatbot: In a text-based chatbot, a bot answers the user’s questions via text
interface.
There are mainly two approaches used to design chatbots, described as follows:
Rule-based bots answer questions using a set of predefined rules on which they are trained.
Self-learning bots are the ones that use Machine Learning-based approaches and are
definitely more efficient than rule-based bots. These bots can be further classified into two
types: Retrieval-Based or Generative.
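A minimal sketch of the retrieval-style idea: the bot picks a canned reply by keyword matching (the intents and replies are invented; real self-learning bots would use ML instead of fixed keywords):

```python
# Tiny retrieval-based chatbot: pick a canned reply by keyword matching.
RESPONSES = {
    "hello": "Hi! How can I help you today?",
    "balance": "Your account balance can be checked in the mobile app.",
    "hours": "We are open from 9 am to 5 pm, Monday to Friday.",
    "bye": "Goodbye! Have a nice day.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("Hello there"))
print(reply("What are your opening hours?"))
```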
Chatbot’s for entertainment: Jokebot, Quotebot, Dinner ideas bot, Ruuh, Zo, Genius, etc
Chatbot’s for health: Webot, Meditatebot, Health tap, etc
Chatbot’s for news and weather: CNN, Poncho, etc
Virtual reception assistant
Virtual help desk assistant
Virtual tutor or teacher
Virtual driving assistant
Virtual email, complaints, or content distributor
Virtual home assistant [example: Google Home]
Virtual operations assistant [example: Jarvis from the movie Iron Man]
Virtual entertainment assistant [example: Amazon Alexa]
Virtual phone assistant [example: Apple Siri]
Machine translation (MT), the process of translating text from one source language into another
language, is one of the most important applications of NLP.
Machine Translation (MT) is the task of automatically converting one natural language
into another, preserving the meaning of the input text, and producing fluent text in the
output language
Machine translation is the task of automatically converting source text in one language to
text in another language.
In a machine translation task, the input already consists of a sequence of symbols in some
language, and the computer program must convert this into a sequence of symbols in
another language.
One difficulty is that there is often no single correct answer (e.g., when translating from a
language without gender-dependent pronouns, "he" and "she" can both be valid).
A problem with neural networks occurs if the training data is unbalanced: the model
cannot learn from rare samples as well as from frequent ones.
NMT examples
Translation on Facebook
UNIT 4
Artificial Neural Networks
• ANN acquires a large collection of units that are interconnected in some pattern to allow
communication between the units. These units, also referred to as nodes or neurons, are
simple processors which operate in parallel.
• Every neuron is connected to other neurons through connection links. Each
connection link is associated with a weight that has information about the input signal.
This is the most useful information for neurons to solve a particular problem because the
weight usually excites or inhibits the signal that is being communicated.
• Each neuron has an internal state, which is called an activation signal. Output signals,
which are produced after combining the input signals and activation rule, may be sent to
other units.
A neural network can be understood as a network of hidden layers, an input layer and an
output layer that tries to mimic the working of a human brain.
The hidden layers can be visualized as an abstract representation of the input data itself.
These layers help the neural network understand various features of the data with the
help of its own internal logic.
These neural networks are non-interpretable models. Non-interpretable models are those
which can’t be interpreted or understood even if we observe the hidden layers. This is
because the neural networks have an internal logic of their own that cannot be
comprehended by us.
We can just see them as vectors of numerical values. Since the output of a neural
network is a numerical vector, we need an explicit output layer that bridges the
gap between the actual data and the network's representation of the data.
An output layer can be understood as a translator that helps us to understand the logic of
the network and convert the target values
Two main characteristics of a neural network −
Architecture
Learning
Architecture
It tells about the connection type: whether it is feed forward, recurrent, multi-layered,
convolutional, or single layered. It also tells about the number of layers and the number
of neurons in every layer.
Learning
It tells about the method in which the neural network is trained. A common way to train
a neural network is to use gradient descent and backpropagation
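A bare-bones sketch of gradient-descent training for a single sigmoid neuron; the tiny AND-like dataset, learning rate, and epoch count are arbitrary choices for illustration:

```python
# Train one sigmoid neuron by gradient descent on a tiny dataset.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)        # AND-like target

rng = np.random.default_rng(0)
w = rng.normal(size=2)                          # connection weights
b = 0.0                                         # bias
lr = 0.5                                        # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    out = sigmoid(X @ w + b)                    # forward pass (activation)
    error = out - y                             # difference from the "correct" answers
    grad_w = X.T @ (error * out * (1 - out))    # gradient of squared error w.r.t. weights
    grad_b = np.sum(error * out * (1 - out))
    w -= lr * grad_w                            # adjust weights against the gradient
    b -= lr * grad_b

print(np.round(sigmoid(X @ w + b), 2))          # outputs approach [0, 0, 0, 1]
```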
ANNs are capable of learning and they need to be trained. There are several learning
strategies −
• Supervised Learning − It involves a teacher that is more knowledgeable than the ANN itself. For
example, the teacher feeds some example data for which the teacher already knows
the answers.
For example, pattern recognition: the ANN makes guesses while recognizing; then the
teacher provides the ANN with the answers, and the network compares its guesses with the
teacher's "correct" answers and makes adjustments according to the errors.
• Unsupervised Learning − It is required when there is no example data set with known
answers. For example, searching for a hidden pattern. In this case, clustering i.e.
dividing a set of elements into groups according to some unknown pattern is carried out
based on the existing data sets present.
• Reinforcement Learning − This strategy is built on observation. The ANN makes a
decision by observing its environment. If the observation is negative, the network
adjusts its weights so as to make a different, required decision the next time.
Ques 3: What is Deep Learning?
Deep learning is a subset of machine learning that uses neural networks with many layers
(hence "deep") to learn representations of data directly from examples. Consider the following
steps that define the deep learning process:
Identifies relevant data sets and prepares them for analysis.
Chooses the type of algorithm to use
Builds an analytical model based on the algorithm used.
Trains the model on test data sets, revising it as needed.
Runs the model to generate test scores
Ques 6 : What are limitations ,advantages and disadvantages of deep learning ?
Advantages :
1. Best in-class performance on problems.
2. Reduces need for feature engineering.
3. Eliminates unnecessary costs.
4. Identifies defects easily that are difficult to detect.
Disadvantages :
1. Large amount of data required.
2. Computationally expensive to train.
3. No strong theoretical foundation.
Definition
A Convolutional Neural Network (CNN) is a neural network that has one or more convolutional
layers and is used mainly for image processing, classification, segmentation, and other
autocorrelated data.
• The most common use for CNNs is image classification, for example identifying satellite
images that contain roads or classifying hand written letters and digits.
• CNNs have been used for understanding in Natural Language Processing (NLP) and
speech recognition, although often for NLP Recurrent Neural Nets (RNNs) are used.
Architecture of CNN
Convolutional neural networks are distinguished from other neural networks by their superior
performance with image, speech, or audio signal inputs. They have three main types of layers,
which are:
• Convolutional layer
• Pooling layer
• Fully-connected (FC) layer
Generally, a Convolutional Neural Network is built up as follows:
We start with an input image to which we apply multiple feature detectors, also called filters, to
create the feature maps that make up a convolution layer. Then, on top of that layer, we apply the
ReLU (Rectified Linear Unit) to increase the non-linearity in our images.
Next, we apply a pooling layer to our convolutional layer, so that from every feature map we
create a pooled feature map; the main purpose of the pooling layer is to make sure that we have
spatial invariance in our images. It also helps to reduce the size of our images and to avoid
overfitting of our data. After that, we flatten all of our pooled feature maps into one long
vector or column of values, and input these values into our artificial neural
network. Lastly, we feed them into the fully connected layer to achieve the final output.
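A compact sketch of this stack (convolution with ReLU, pooling, flattening, fully connected layers) written with the Keras API, assuming 28x28 grayscale inputs and 10 output classes:

```python
# Minimal CNN in Keras: Conv+ReLU -> Pooling -> Flatten -> Fully connected layers.
# Assumes 28x28 grayscale inputs and 10 classes (e.g. digits 0-9).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),  # convolution layer + ReLU
    layers.MaxPooling2D(pool_size=(2, 2)),                     # pooling layer
    layers.Flatten(),                                          # flattening into one long vector
    layers.Dense(64, activation="relu"),                       # fully connected layer
    layers.Dense(10, activation="softmax"),                    # output layer (class scores)
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```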
Recurrent Neural Networks (RNN) are a type of neural network where the output from the previous
step is fed as input to the current step. In traditional neural networks, all the inputs and outputs
are independent of each other, but in cases like predicting the next word of a
sentence, the previous words are required, and hence there is a need to remember them.
Thus RNNs came into existence, solving this issue with the help of a hidden layer.
The main and most important feature of an RNN is its hidden state, which remembers some
information about a sequence.
Working of RNN
The network takes a single time-step of the input.
We calculate the current state from the current input and the previous state.
The current state output ht becomes ht-1 for the next step.
There can be n such steps, and in the end all the information can be combined.
After completion of all the steps, the final step is to calculate the output.
Finally, we compute the error as the difference between the actual output and the predicted output.
The error is backpropagated through the network to adjust the weights and produce a better outcome.
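A minimal NumPy sketch of this forward pass, where the hidden state h_t is computed from the current input x_t and the previous state h_(t-1); the sizes and random inputs are arbitrary illustration values:

```python
# Forward pass of a simple RNN cell: h_t = tanh(Wx * x_t + Wh * h_(t-1) + b)
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, steps = 3, 4, 5

Wx = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden weights
b = np.zeros(hidden_size)

h = np.zeros(hidden_size)                        # initial hidden state h_0
inputs = rng.normal(size=(steps, input_size))    # a toy input sequence

for t, x_t in enumerate(inputs):
    h = np.tanh(Wx @ x_t + Wh @ h + b)   # current state depends on input and previous state
    print(f"step {t}: hidden state = {np.round(h, 3)}")

# In training, an output would be computed from h at each step and the error
# backpropagated through time to adjust Wx, Wh and b.
```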
CNN vs RNN
• CNN is suitable for spatial data such as images; RNN is suitable for temporal data, also called sequential data.
• CNN takes fixed-size inputs and generates fixed-size outputs; RNN can handle arbitrary input/output lengths.
• CNN is a type of feed-forward artificial neural network with variations of multilayer perceptrons designed to use minimal amounts of preprocessing; RNN, unlike feed-forward neural networks, can use its internal memory to process arbitrary sequences of inputs.
• CNNs use a connectivity pattern between the neurons that is inspired by the organization of the animal visual cortex, whose individual neurons are arranged in such a way that they respond to overlapping regions tiling the visual field; recurrent neural networks use time-series information: what a user spoke last will impact what he/she will speak next.
• CNNs are ideal for image and video processing; RNNs are ideal for text and speech analysis.
Generative Adversarial Networks (GANs)
GANs are a powerful class of neural networks that are used for unsupervised learning. They were
developed and introduced by Ian J. Goodfellow in 2014. GANs are basically made up of a
system of two competing neural network models which are able to
analyze, capture and copy the variations within a dataset.
In GANs, there is a generator and a discriminator. The Generator generates fake samples of
data(be it an image, audio, etc.) and tries to fool the Discriminator. The Discriminator, on the
other hand, tries to distinguish between the real and fake samples. The Generator and the
Discriminator are both Neural Networks and they both run in competition with each other in the
training phase. The steps are repeated several times and in this, the Generator and Discriminator
get better and better in their respective jobs after each repetition.
• When the generator fools the discriminator, it is rewarded (or no change is made to its model parameters), while the discriminator is penalized and its model parameters are updated.
• In the limit, the generator generates perfect replicas from the input domain every time, and the discriminator cannot tell the difference, predicting "unsure" (e.g. 50% real, 50% fake) in every case. This is an idealized case; we do not need to reach this point to arrive at a useful generator model.
The easiest way to understand what GANs are is through a simple analogy:
Suppose there is a shop which buys certain kinds of wine from customers, which it will later resell.
However, some customers sell fake wine in order to get money. In this case, the shop owner has to be able to distinguish between fake and authentic wines.
You can imagine that initially, the forger might make a lot of mistakes when trying to sell
the fake wine and it will be easy for the shop owner to identify that the wine is not
authentic. Because of these failures, the forger will keep on trying different techniques to
simulate the authentic wines and some will eventually be successful. Now that the forger
knows that certain techniques got past the shop owner's checks, he can start to further
improve the fake wines based on those techniques.
At the same time, the shop owner would probably get some feedback from other shop
owners or wine experts that some of the wines that she has are not original. This means
that the shop owner would have to improve how she determines whether a wine is fake or
authentic. The goal of the forger is to create wines that are indistinguishable from the
authentic ones, and the goal of the shop owner is to accurately tell if a wine is real or not.
There are two major components within GANs: the generator and the discriminator. The
shop owner in the example is known as a discriminator network which assigns a
probability that the image is real.
The forger is known as the generative network. This network takes some noise vector and
outputs an image. When training the generative network, it learns which areas of the
image to improve/change so that the discriminator would have a harder time
differentiating its generated images from the real ones.
The generative network keeps producing images that are closer in appearance to the real
images while the discriminative network is trying to determine the differences between
real and fake images. The ultimate goal is to have a generative network that can produce
images which are indistinguishable from the real ones.
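A compact way to see this generator/discriminator competition is a toy training loop. The sketch below uses PyTorch on made-up two-dimensional data points; the architectures, learning rates and "real" data distribution are illustrative assumptions and not part of these notes.

```python
# Minimal GAN training loop sketch: discriminator pushed toward real=1/fake=0,
# generator rewarded when its fakes are labeled real (all sizes are assumptions)
import torch
import torch.nn as nn

noise_dim, data_dim = 8, 2
G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))        # generator
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0   # "real" samples from a toy distribution (assumed)
    fake = G(torch.randn(64, noise_dim))     # generator's fake samples

    # Discriminator: penalized when fooled, so push D(real) -> 1 and D(fake) -> 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator: rewarded when the discriminator labels its fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```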
Unit 5
5.1 Image and face recognition , Object recognition , Speech Recognition Robotics Applications
A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces. It is typically employed to authenticate users through ID verification services and works by pinpointing and measuring facial features from a given image.
Facial recognition systems attempt to identify a human face, which is three-dimensional and
changes in appearance with lighting and facial expression, based on its two-dimensional image.
To accomplish this computational task, facial recognition systems perform four steps.
• First face detection is used to segment the face from the image background.
• In the second step the segmented face image is aligned to account for face pose, image
size and photographic properties, such as illumination and grayscale.
• The purpose of the alignment process is to enable the accurate localization of facial
features in the third step, the facial feature extraction.
• Features such as the eyes, nose and mouth are pinpointed and measured in the image to represent the face. The feature vector established in this way is then, in the fourth step, matched against a database of faces.
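The first of these steps, face detection (segmenting faces from the image background), can be sketched with OpenCV's bundled Haar-cascade detector. The file names below are illustrative placeholders, not files referenced in these notes.

```python
# Minimal OpenCV sketch of the face-detection step (input/output file names are assumptions)
import cv2

image = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Haar-cascade frontal-face detector shipped with OpenCV
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:  # one bounding box per detected face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_detected.jpg", image)
```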
Face recognition is a method for identifying an unknown person or authenticating the identity of
a specific person from their face. It’s a branch of computer vision, but face recognition is
specialized and comes with social baggage for some applications, as well as some vulnerabilities
to spoofing.
How facial recognition works
Facial recognition is the process of identifying or verifying the identity of a person using their
face. It captures, analyzes, and compares patterns based on the person's facial details.
• The face detection process is an essential step as it detects and locates human faces in
images and videos.
• The face capture process transforms analog information (a face) into a set of digital
information (data) based on the person's facial features.
• The face match process verifies if two faces belong to the same person.
Face recognition applications
Face recognition applications mostly fall into three major categories: security, health, and
marketing/retail.
Security includes law enforcement, and that class of facial recognition uses can be as benign as
matching people to their passport photos faster and more accurately than humans can, and as
creepy as the “Person of Interest” scenario where people are tracked via CCTV and compared to
collated photo databases.
Health applications of facial recognition include patient check-ins, real-time emotion detection,
patient tracking within a facility, assessing pain levels in non-verbal patients, detecting certain
diseases and conditions, staff identification, and facility security.
Marketing and retail applications of face recognition include identification of loyalty program
members, identification and tracking of known shoplifters, and recognizing people and their
emotions for targeted product suggestions.
• Facial recognition software is based on the ability to first recognize faces, which is a technological feat in itself.
• If you look in the mirror, you can see that your face has certain distinguishable landmarks. These are the peaks and valleys that make up the different facial features.
• VISIONICS defines these landmarks as nodal points. There are about 80 nodal points on a human face.
Face Recognition Operations
The technology system may vary when it comes to facial recognition. Different software applies
different methods and means to achieve face recognition. The stepwise method is as follows:
• Face Detection: To begin with, the camera detects and locates a face. The face is best detected when the person is looking directly at the camera, as this makes facial recognition easier. With advancements in the technology, faces can now be detected even with slight variations in pose relative to the camera.
• Face Analysis: The photo of the face is then captured and analyzed. Most facial recognition relies on 2D images rather than 3D because 2D images are more convenient to match against a database. The software analyzes features such as the distance between your eyes or the shape of your cheekbones.
• Image to Data Conversion: The analysis is converted into a mathematical formula, and the facial features become numbers. This numerical code is known as a faceprint. Just as every person has a unique fingerprint, every person has a unique faceprint.
• Match Finding: The code is then compared against a database of other faceprints. This database contains photos with identification that can be compared. The technology identifies a match for your exact features in the provided database and returns the match along with attached information, such as a name and address, depending on what is saved in the database for that individual.
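As a hedged illustration of this detect → convert-to-faceprint → match pipeline, the open-source face_recognition Python library exposes each stage in a few calls. The image file names below are placeholders, not files referenced in these notes.

```python
# Sketch of detect -> encode (faceprint) -> match using the face_recognition library
# (file names are illustrative assumptions)
import face_recognition

known_image = face_recognition.load_image_file("known_person.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]    # "faceprint" of the known face
unknown_encodings = face_recognition.face_encodings(unknown_image)  # detection + conversion to data

for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]  # match finding
    print("Same person" if match else "No match")
```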
Humans can easily detect and identify objects present in an image. The human visual system is fast and accurate and can perform complex tasks such as identifying multiple objects and detecting obstacles with little conscious thought.
With the availability of large amounts of data and better algorithms, we can now easily train computers to detect and classify multiple objects within an image with high accuracy.
If you think of self-driving cars as an example (note: real self-driving solutions are likely more sophisticated and nuanced, but go with this example for illustrative purposes), object recognition requires us to:
1. Determine the position of the identified object in the image. For example: whether the identified pedestrian is right in front of us or off to the side.
2. Identify more than one object. For example: a single image could contain multiple cars, many pedestrians, a traffic light, etc.
3. Identify the orientation of the object. For example: whether the front of the car is facing towards us and the rear facing away (i.e. the car is coming towards us or is parked facing us).
• Accomplishing all this requires a little more than image classification models.
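As a hedged sketch of how such multi-object detection is handled in practice, pre-trained detectors in the torchvision library return a bounding box, class label and confidence score for every object found in an image. The image path and score threshold below are illustrative assumptions.

```python
# Sketch of multi-object detection with a pre-trained Faster R-CNN from torchvision
# (image path and confidence threshold are assumptions)
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = convert_image_dtype(read_image("street_scene.jpg"), torch.float)  # CHW float in [0, 1]
with torch.no_grad():
    predictions = model([image])[0]  # dict with boxes, labels and scores per detected object

for box, label, score in zip(predictions["boxes"], predictions["labels"], predictions["scores"]):
    if score > 0.8:  # keep confident detections only
        print(f"class {label.item()} at {box.tolist()} (score {score.item():.2f})")
```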
Ques :What is a Robot (Robotics)?
Robotics is the engineering science and technology of robots, and their design, manufacture,
application, and structural disposition. It requires a working knowledge of electronics,
mechanics, and software.
Classes of Robot
Most physical robots fall into one of three categories:
• Manipulators/robotic arms which are anchored to their workplace and built usually from sets
of rigid links connected by joints.
• Mobile robots which can move in their environment using wheels, legs, etc.
• Hybrid robots which include humanoid robots are mobile robots equipped with manipulators.
Robotics is a branch of engineering and science that includes electronics engineering, mechanical engineering, computer science and so on. This branch deals with the design, construction, operation and control of robots, along with sensory feedback and information processing. Robots are among the technologies expected to replace humans in many activities in the coming years.
Disadvantages:
The disadvantages of using robots are given below:
• They need a power supply to keep going.
• People working in factories may lose their jobs, as robots can replace them.
• They need high maintenance to keep them working all day long, and the cost of maintaining robots can be expensive.
• They can store huge amounts of data, but they are not as efficient as human brains.
• Robots can only do what the program installed in them tells them to; beyond that program, they cannot do anything different.
• The most important disadvantage is that if the program of a robot falls into the wrong hands, it can cause a huge amount of destruction.
Applications:
There are some applications given below:
• Caterpillar plans to develop remote-controlled machines and expects to have heavy robots in use by 2021.
• Robots can also perform herding tasks.
• Robots are increasingly being used in manufacturing; in the auto industry, more than half of the "labour force" consists of robots.
• Many robots are used as military robots.
• Robots have been used for cleaning up areas such as toxic or industrial waste sites.
• Agricultural robots.
• Household (domestic) robots.