
GRADE X - ARTIFICIAL INTELLIGENCE (417)

CHAPTER 3 : PROJECT CYCLE

INTRODUCTION

Let us assume that you have to make a greeting card for your mother as it is her birthday. You are very excited
about it and have thought of many ideas to execute the same. Let us look at some of the steps which you
might take to accomplish this task:

1. Look for some cool greeting card ideas from different sources. You might go online and check out
some videos, or you may ask someone who has knowledge about it.

2. After finalising the design, you would make a list of things that are required to make this card.

3. You will check if you have the material with you or not. If not, you could go and get all the items
required, ready for use.

4. Once you have everything with you, you would start making the card.

5. If you make a mistake in the card somewhere which cannot be rectified, you will discard it and
start remaking it.

6. Once the greeting card is made, you would gift it to your mother.

Are these steps relatable?

Do you think your steps might differ? If so, write them down!

These steps show how we plan to execute the tasks around us. Consciously or subconsciously, our mind makes
a plan for every task we have to accomplish, which is why things become clearer in our mind.
Similarly, if we have to develop an AI project, the AI Project Cycle provides us with an appropriate
framework which can lead us towards the goal. The AI Project Cycle mainly has 5 stages:

The five stages are: Problem Scoping, Data Acquisition, Data Exploration, Modelling, and Evaluation.

Starting with Problem Scoping, you set the goal for your AI project by stating the problem which you wish
to solve with it. Under problem scoping, we look at various parameters which affect the problem we wish to
solve so that the picture becomes clearer.

To proceed,

● You need to acquire data which will become the base of your project, as it will help you in
understanding the parameters that are related to problem scoping.

● You go for data acquisition by collecting data from various reliable and authentic sources. Since
the data you collect would be in large quantities, you can try to represent it visually through different types of
representations like graphs, databases, flow charts, maps, etc. This makes it easier for you to interpret the
patterns which your acquired data follows.

● After exploring the patterns, you can decide upon the type of model you would build to achieve
the goal. For this, you can research online and select various models which give a suitable output.

● You can test the selected models and figure out which is the most efficient one.

● The most efficient model is now the base of your AI project and you can develop your algorithm
around it.

● Once the modelling is complete, you now need to test your model on some newly fetched data.
The results will help you in evaluating your model and improving it.

● Finally, after evaluation, the project cycle is now complete and what you get is your AI project.

Let us understand each stage of the AI Project Cycle in detail.

PROBLEM SCOPING

It is a fact that we are surrounded by problems. They could be small or big, sometimes ignored or sometimes
even critical. Many times, we become so used to a problem that it becomes a part of our life. Identifying such
a problem and having a vision to solve it, is what Problem Scoping is about. A lot of times we are unable to
observe any problem in our surroundings. In that case, we can take a look at the Sustainable Development
Goals. 17 goals have been announced by the United Nations, which are termed the Sustainable Development
Goals. The aim is to achieve these goals by the end of 2030, and a pledge to do so has been taken by all the
member nations of the UN.

Here are the 17 SDGs. Let’s take a look:

As you can see, many goals correspond to the problems which we might observe around us too. One should
look for such problems and try to solve them as this would make many lives better and help our country
achieve these goals.

Scoping a problem is not that easy, as we need to have a deeper understanding of it so that the picture
becomes clearer while we are working to solve it. Hence, we use the 4Ws Problem Canvas to help us out.

4Ws Problem Canvas

The 4Ws Problem canvas helps in identifying the key elements related to the problem.

Who? What? Where? Why?

Let us go through each of the blocks one by one.

Who?
The “Who” block helps in analysing the people getting affected directly or indirectly due to the problem. Under this, we
find out who the ‘Stakeholders’ of this problem are and what we know about them. Stakeholders are the
people who face this problem and would benefit from the solution. Here is the Who Canvas:


What?
Under the “What” block, you need to look into what you have on hand. At this stage, you need to determine
the nature of the problem. What is the problem and how do you know that it is a problem? Under this block,
you also gather evidence to prove that the problem you have selected actually exists. Newspaper articles,
media reports, announcements, etc. are some examples of such evidence. Here is the What Canvas:

Where?
Now that you know who is associated with the problem and what the problem actually is; you need to focus
on the context/situation/location of the problem. This block will help you look into the situation in which the
problem arises, the context of it, and the locations where it is prominent. Here is the Where Canvas:


Why?
You have finally listed down all the major elements that affect the problem directly. By now it is clear
who would benefit from the solution, what is to be solved, and where the solution will be deployed. These
three canvases now become the basis of why you want to solve this problem. Thus, in the “Why” canvas,
think about the benefits which the stakeholders would get from the solution and how it would benefit them
as well as the society.

After filling the 4Ws Problem canvas, you now need to summarise all the cards into one template. The
Problem Statement Template helps us to summarise all the key points into one single template so that, in
future, whenever there is a need to look back at the basis of the problem, we can take a look at the Problem
Statement Template and understand its key elements.


Data Acquisition

As we move ahead in the AI Project Cycle, we come across the second stage, which is Data Acquisition.
As the term clearly mentions, this stage is about acquiring data for the project. Let us first understand what
data is. Data can be a piece of information or facts and statistics collected together for reference or analysis.
Whenever we want an AI project to be able to predict an output, we need to train it first using data.

For example, if you want to make an Artificially Intelligent system which can predict the salary of any
employee based on his previous salaries, you would feed the data of his previous salaries into the machine.
This is the data with which the machine can be trained. Now, once it is ready, it will predict his next salary
efficiently. The previous salary data here is known as the Training Data while the next salary prediction data set
is known as the Testing Data.
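A minimal Python sketch of this idea, using made-up salary figures, shows how the same dataset is separated into a training portion and a testing portion:

```python
# A minimal sketch of the salary example: the employee's past salaries act as
# training data, and the most recent salary is held back as testing data.
# All numbers here are invented for illustration.

past_salaries = [30000, 32000, 35000, 37000, 40000, 42000]

training_data = past_salaries[:-1]   # salaries used to train the model
testing_data  = past_salaries[-1:]   # the salary held back to check the prediction

print("Training data:", training_data)
print("Testing data :", testing_data)
```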

For better efficiency of an AI project, the Training data needs to be relevant and authentic. In the previous
example, if the training data was not of the previous salaries but of his expenses, the machine would not have
predicted his next salary correctly since the whole training went wrong. Similarly, if the previous salary data
was not authentic, that is, it was not correct, then too the prediction could have gone wrong. Hence….

For any AI project to be efficient, the training data should be authentic and relevant to the problem statement
scoped.


Data Features

Look at your problem statement once again and try to find the data features required to address this issue.
Data features refer to the type of data you want to collect. In our previous example, data features would
be salary amount, increment percentage, increment period, bonus, etc.
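As a small illustration, the data features can be thought of as the named fields of each record you collect. The figures below are invented purely for this sketch:

```python
# A hedged sketch: each record holds the data features named above.
# The values are made up for illustration only.
employee_records = [
    {"salary_amount": 30000, "increment_percentage": 6.5, "increment_period_months": 12, "bonus": 2000},
    {"salary_amount": 32000, "increment_percentage": 7.0, "increment_period_months": 12, "bonus": 2500},
]

# The keys of each record are the data features we plan to collect.
data_features = list(employee_records[0].keys())
print(data_features)
```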

After mentioning the Data features, you get to know what sort of data is to be collected. Now, the question
arises- From where can we get this data? There can be various ways in which you can collect data. Some of
them are:

Surveys, Web Scraping, Sensors, Cameras, Observations, API (Application Programming Interface)

Sometimes, you use the internet and try to acquire data for your project from some random websites. Such
data might not be authentic as its accuracy cannot be proved. Due to this, it becomes necessary to find a
reliable source of data from where some authentic information can be taken. At the same time, we should
keep in mind that the data which we collect is open-sourced and not someone’s property. Extracting private
data can be an offence. One of the most reliable and authentic sources of information is the set of open-sourced
websites hosted by the government. These government portals have general information collected in suitable
formats which can be downloaded and used wisely. Some of the open-sourced Govt. portals are: data.gov.in and
india.gov.in
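As a hedged sketch, once a dataset has been downloaded from such a portal and saved locally (the filename below is hypothetical), a library such as pandas can be used to load and preview it:

```python
# A sketch, assuming you have already downloaded a dataset as a CSV file from
# an open government portal such as data.gov.in and saved it locally.
# The filename "rainfall_data.csv" is hypothetical.
import pandas as pd

data = pd.read_csv("rainfall_data.csv")   # load the downloaded file
print(data.head())                        # preview the first few rows
print(data.columns)                       # see which features are available
```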

Data Exploration

In the previous modules, you have set the goal of your project and have also found ways to acquire data.
While acquiring data, you must have noticed that the data is a complex entity – it is full of numbers, and if
anyone wants to make some sense out of it, they have to work out some patterns from it. For example, if you go
to the library and pick up a random book, you first go through its contents quickly by turning the pages and
reading the description before borrowing it, because this helps you understand whether the book is appropriate
to your needs and interests or not.

Thus, to analyse the data, you need to visualise it in some user-friendly format so that you can:

● Quickly get a sense of the trends, relationships and patterns contained within the data.

● Define strategy for which model to use at a later stage.

● Communicate the same to others effectively.

To visualise data, we can use various types of visual representations.
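For instance, a simple bar graph can be drawn with matplotlib; the categories and counts below are made up only to illustrate the idea:

```python
# A minimal matplotlib sketch of one such visual representation (a bar graph).
# The category names and counts are invented for illustration.
import matplotlib.pyplot as plt

categories = ["Apples", "Bananas", "Oranges"]
counts = [40, 55, 30]

plt.bar(categories, counts)
plt.xlabel("Fruit")
plt.ylabel("Number of images collected")
plt.title("A quick look at the acquired data")
plt.show()
```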

Are you aware of visual representations of data? Fill them in below (one example, Bar Graphs, is given):

MODELLING

In the previous module of Data exploration, we have seen various types of graphical representations which
can be used for representing different parameters of data. The graphical representation makes the data
understandable for humans as we can discover trends and patterns out of it. But when it comes to machines
accessing and analysing data, they need the data in the most basic form of numbers (which is binary – 0s and
1s), and when it comes to discovering patterns and trends in data, the machine goes in for mathematical
representations of the same. The ability to mathematically describe the relationship between parameters is the
heart of every AI model. Thus, whenever we talk about developing AI models, it is the mathematical approach
towards analysing data which we refer to.

Generally, AI models can be classified as follows:

AI Models
● Rule Based
● Learning Based
  ● Machine Learning
  ● Deep Learning


Rule Based Approach

Refers to the AI modelling where the rules are defined by the developer. The machine follows the rules or
instructions mentioned by the developer and performs its task accordingly. For example, we have a dataset
which tells us about the conditions on the basis of which we can decide if an elephant may be spotted or not
while on safari. The parameters are: Outlook, Temperature, Humidity and Wind.

Now, let’s take various possibilities of these parameters and see in which case the elephant may be spotted
and in which case it may not. After looking through all the cases, we feed this data into the machine along
with the rules which tell the machine all the possibilities. The machine trains on this data and is now ready to
be tested. While testing the machine, we tell the machine that Outlook = Overcast; Temperature = Normal;
Humidity = Normal and Wind = Weak. On the basis of this testing dataset, the machine will now be able to
tell us whether the elephant may be spotted or not and will display the prediction. This is known as a
rule-based approach because we fed the data along with the rules to the machine, and the machine, after getting
trained on them, is now able to predict answers for the same. A drawback/feature of this approach is that the
learning is static. The machine, once trained, does not take into consideration any changes made in the original
training dataset. That is, if you try testing the machine on a dataset which is different from the rules and data
you fed it at the training stage, the machine will fail and will not learn from its mistake. Once trained, the
model cannot improve itself on the basis of feedback. Thus, machine learning gets introduced as an
extension to this: in that case, the machine adapts to changes in data and rules and follows the updated path,
while a rule-based model only does what it has been taught once.
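A minimal sketch of the rule-based idea in Python is shown below. The rule itself is an assumption made for illustration, since the chapter does not list the actual rules of the elephant dataset:

```python
# A minimal rule-based sketch: the developer writes the rules explicitly,
# and the program simply follows them. The rule used here is invented
# just to illustrate the idea.

def elephant_may_be_spotted(outlook, temperature, humidity, wind):
    # Example hand-written rule (an assumption for illustration):
    # elephants may be spotted when it is overcast and the wind is weak.
    if outlook == "Overcast" and wind == "Weak":
        return True
    return False

print(elephant_may_be_spotted(outlook="Overcast", temperature="Normal",
                              humidity="Normal", wind="Weak"))   # True under this rule
```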

Learning Based Approach

Refers to the AI modelling where the machine learns by itself. Under the Learning Based approach, the AI
model gets trained on the data fed to it and then is able to design a model which is adaptive to the change in
data. That is, if the model is trained with X type of data and the machine designs the algorithm around it, the
model would modify itself according to the changes which occur in the data so that all the exceptions are
handled in this case. For example, suppose you have a dataset comprising 100 images each of apples and
bananas. These images depict apples and bananas in various shapes and sizes. These images are then labelled as
either apple or banana so that all apple images are labelled ‘apple’ and all the banana images have ‘banana’
as their label. Now, the AI model is trained with this dataset and the model is programmed in such a way that
it can distinguish between an apple image and a banana image according to their features and can predict the
label of any image which is fed to it as an apple or a banana. After training, the machine is now fed with
testing data. Now, the testing data might not have similar images as the ones on which the model has been
trained. So, the model adapts to the features on which it has been trained and accordingly predicts if the image
is of an apple or a banana. In this way, the machine learns by itself by adapting to the new data which is flowing
in. This is the machine learning approach, which introduces dynamism into the model.
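A hedged sketch of this learning-based idea is given below using scikit-learn. Real image handling is beyond a short example, so each fruit is described here by two invented numeric features instead of an actual image:

```python
# A hedged sketch of the learning-based approach. Instead of actual images,
# each fruit is described by two made-up numeric features (length in cm and
# a yellowness score), which keeps the example short.
from sklearn.neighbors import KNeighborsClassifier

# Labelled training data: [length_cm, yellowness]
features = [[8, 0.2], [7, 0.3], [9, 0.1],      # apples
            [18, 0.9], [20, 0.8], [17, 0.95]]  # bananas
labels   = ["apple", "apple", "apple", "banana", "banana", "banana"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(features, labels)                 # the model learns from the labelled data

# Testing data the model has never seen before
print(model.predict([[19, 0.85], [7.5, 0.25]]))   # expected: ['banana' 'apple']
```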


The learning-based approach can further be divided into three parts:

● Supervised Learning
● Unsupervised Learning
● Reinforcement Learning

Supervised Learning

In a supervised learning model, the dataset which is fed to the machine is labelled. In other words, we can say
that the dataset is known to the person who is training the machine; only then is he/she able to label the data.
A label is some information which can be used as a tag for the data. For example, students get grades according
to the marks they secure in examinations. These grades are labels which categorise the students according to
their marks.

There are two types of Supervised Learning models:

Classification: Where the data is classified according to the labels. For example, in the grading system,
students are classified on the basis of the grades they obtain with respect to their marks in the examination.
This model works on a discrete dataset, which means the data need not be continuous.

Regression: Such models work on continuous data. For example, if you wish to predict your next salary, then
you would put in the data of your previous salary, any increments, etc., and would train the model. Here, the
data which has been fed to the machine is continuous.
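A short sketch of both types is given below using scikit-learn; the marks, grades and salary figures are invented for illustration:

```python
# A short sketch of the two kinds of supervised learning models.
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

# Classification: marks -> grade (discrete labels)
marks  = [[35], [48], [62], [75], [88], [95]]
grades = ["D", "C", "B", "B", "A", "A"]
classifier = DecisionTreeClassifier().fit(marks, grades)
print(classifier.predict([[80]]))        # predicts a grade label

# Regression: years of service -> salary (continuous values)
years    = [[1], [2], [3], [4], [5]]
salaries = [30000, 32000, 35000, 37000, 40000]
regressor = LinearRegression().fit(years, salaries)
print(regressor.predict([[6]]))          # predicts a continuous salary value
```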

Unsupervised Learning

An unsupervised learning model works on unlabelled dataset. This means that the data which is fed to the
machine is random and there is a possibility that the person who is training the model does not have any
information regarding it. The unsupervised learning models are used to identify relationships, patterns and
trends out of the data which is fed into it. It helps the user in understanding what the data is about and what
major features the machine has identified in it.

For example, suppose you have a random collection of 1000 dog images and you wish to find some patterns
in it. You would feed this data into the unsupervised learning model and train the machine on it. After
training, the machine would come up with patterns which it was able to identify in the data. The machine might
come up with patterns which are already known to the user, like colour, or it might even come up with
something very unusual, like the size of the dogs.

Unsupervised learning models can be further divided into two categories:

Clustering: Refers to the unsupervised learning algorithm which can cluster the unknown data according to
the patterns or trends identified out of it. The patterns observed might be the ones which are known to the
developer, or the machine might even come up with some unique patterns of its own.
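A minimal sketch of clustering with scikit-learn's KMeans algorithm is shown below; the points are invented and carry no labels:

```python
# A minimal clustering sketch. The points are unlabelled; the algorithm
# groups them purely from the patterns it finds in the data.
from sklearn.cluster import KMeans

points = [[1, 2], [1, 4], [2, 3],        # one natural group
          [9, 9], [10, 11], [8, 10]]     # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)                    # cluster number assigned to each point
```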

Dimensionality Reduction: We humans are able to visualise up to 3 dimensions only, but according to a lot
of theories and algorithms, there are various entities which exist beyond 3 dimensions. For example, in
Natural Language Processing, words are considered to be N-dimensional entities. This means that we
cannot visualise them, as they exist beyond our visualisation ability. Hence, to make sense of them, we need
to reduce their dimensions. This is where a dimensionality reduction algorithm is used.

As we reduce the dimensions of an entity, the information which it contains starts getting distorted. For
example, if we have a ball in our hand, it is 3-dimensional. But if we click its picture, the data gets transformed
to 2D, as an image is a 2-dimensional entity. Now, as soon as we reduce one dimension, at least 50% of the
information is lost, as we will no longer know about the back of the ball. Was the ball the same colour at the
back or not? Or was it just a hemisphere? If we reduce the dimensions further, more and more information
will be lost.

Hence, to reduce the dimensions and still be able to make sense out of the data, we use Dimensionality
Reduction.
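A small sketch of dimensionality reduction using PCA (Principal Component Analysis) from scikit-learn is shown below; the four-dimensional points are invented for illustration:

```python
# A hedged sketch of dimensionality reduction: four-dimensional points are
# reduced to two dimensions. Some information is lost in the process, but the
# overall structure of the data is preserved as far as possible.
from sklearn.decomposition import PCA

data_4d = [[2.5, 2.4, 0.5, 1.0],
           [0.5, 0.7, 2.2, 2.9],
           [2.2, 2.9, 0.4, 1.1],
           [0.3, 0.5, 2.5, 3.0]]

pca = PCA(n_components=2)
data_2d = pca.fit_transform(data_4d)      # each point now has only 2 values
print(data_2d)
print(pca.explained_variance_ratio_)      # how much information each new dimension keeps
```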

Evaluation

Once a model has been made and trained, it needs to go through proper testing so that one can calculate the
efficiency and performance of the model. Hence, the model is tested with the help of Testing Data (which
was separated out of the acquired dataset at the Data Acquisition stage), and the efficiency of the model is
calculated on the basis of the parameters mentioned below:

Accuracy, Precision, Recall, F1 Score

You will read more about this stage in Chapter 7.
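As a small preview, these four parameters can be computed with scikit-learn's metrics functions; the true and predicted labels below are invented:

```python
# A small sketch of the four evaluation parameters.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # what actually happened
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # what the model predicted

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 Score :", f1_score(y_true, y_pred))
```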

NEURAL NETWORKS

Neural networks are loosely modelled after how neurons in the human brain behave. The key advantage of
neural networks is that they are able to extract data features automatically without needing the input of the
programmer. A neural network is essentially a system of organising machine learning algorithms to perform
certain tasks. It is a fast and efficient way to solve problems for which the dataset is very large, such as in
images.

As seen in the figure given, the larger Neural Networks tend to perform better with larger amounts of data
whereas the traditional machine learning algorithms stop improving after a certain saturation point.


This is a representation of how neural networks work. A Neural Network is divided into multiple layers and
each layer is further divided into several blocks called nodes. Each node has its own task to accomplish which
is then passed to the next layer. The first layer of a Neural Network is known as the input layer. The job of an
input layer is to acquire data and feed it to the Neural Network. No processing occurs at the input layer. Next
to it are the hidden layers. Hidden layers are the layers in which the whole processing occurs. Their name
essentially means that these layers are hidden and are not visible to the user.

Each node of these hidden layers has its own machine learning algorithm which it executes on the data
received from the input layer. The processed output is then fed to the subsequent hidden layer of the network.
There can be multiple hidden layers in a neural network system, and their number depends upon the complexity
of the function for which the network has been configured. Also, the number of nodes in each layer can vary
accordingly. The last hidden layer passes the final processed data to the output layer, which then gives it to the
user as the final output. Similar to the input layer, the output layer too does not process the data which it
acquires; it is meant for the user interface.
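A toy sketch of this flow of data through the layers, written with NumPy, is given below. The weights are random, so the network is untrained; it only illustrates the input layer, a hidden layer and the output layer:

```python
# A toy sketch of data flowing through a neural network: an input layer,
# one hidden layer, and an output layer. The weights are random, so this
# network is untrained; it only illustrates the structure described above.
import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, 0.2, 0.9])            # input layer: 3 values, no processing here

W1 = rng.normal(size=(3, 4))             # weights from input layer to hidden layer (4 nodes)
W2 = rng.normal(size=(4, 2))             # weights from hidden layer to output layer (2 nodes)

hidden = np.maximum(0, x @ W1)           # hidden layer: each node processes its inputs (ReLU)
output = hidden @ W2                     # output layer: passes the final result to the user

print(output)
```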

Some of the features of a Neural Network are listed below:
