ANN Final Exam

Name: Abdul Basit Anwar    Registration No: 17-CS-48

Question No. 1

B. How does a CNN perform automatic feature extraction from data? Why is automatic feature extraction more important than manual feature extraction? What are the various ways of performing automatic feature extraction? Justify your answers.

Answer:

Feature extraction is one of the most important issues in machine learning. Finding suitable attributes of a dataset can enormously reduce the dimensionality of the input space and, from a computational point of view, can help all subsequent steps of a pattern recognition problem, such as classification or information retrieval. However, the feature extraction step is usually performed manually, and depending on the type of data there is a wide range of methods to choose from, so selecting an appropriate technique normally takes a long time. Deep learning addresses this by finding a good feature representation automatically: in a CNN, the convolutional filters are not hand-designed but are learned from the data itself through backpropagation, with early layers picking up low-level patterns such as edges and textures and deeper layers combining them into higher-level concepts such as shapes and objects.

Feature extraction involves reducing the number of resources required to describe a large set of data. When performing analysis of complex data, one of the major problems stems from the number of variables involved.

The following are various ways of automatic feature extraction (a PCA example is sketched below):

 Sparse filtering
 Isomap
 Kernel PCA
 Latent semantic analysis
 Partial least squares
 Principal component analysis
 Independent component analysis
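
As a concrete illustration, principal component analysis (one of the methods listed above) can be applied in a few lines with scikit-learn. This is a minimal sketch on a random placeholder dataset; the shapes and component count are illustrative assumptions:

import numpy as np
from sklearn.decomposition import PCA

# Toy dataset: 500 samples with 20 numeric features (random placeholder values)
X = np.random.rand(500, 20)

# Automatically extract 5 features (principal components) from the original 20
pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                      # (500, 5)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained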

Question No. 2

A. Suppose you have non-image data from some experiment. The data may be in the form of a table or CSV file with 20 features and 500 samples, with the feature values represented through F1, F2, ..., F20. What type of transformation will be required on this data so that we can apply a convolutional neural network (CNN) to it?
Answer:

We will have to do the following transformations:

We must transform our data into image-like form, because the Keras convolutional layers expect image-shaped inputs, while our CSV file is a flat 2-D array (500 samples by 20 features). We first separate X (input features) and Y (output labels) from the data, then reshape each 20-value feature vector into a small 2-D pixel-like matrix (for example 4 x 5), and finally add a channel dimension so each sample becomes a 3-D matrix.

We also rescale the data into the range 0-1 because it is faster to process. For image pixels this is done by dividing all values by 255 (the maximum pixel value); for tabular features like these, min-max scaling per feature achieves the same effect.
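
A minimal sketch of this transformation is shown below. The file name, the label column, and the 4 x 5 grid shape are illustrative assumptions; any factorization of the 20 features into a 2-D grid works:

import numpy as np
import pandas as pd

# Hypothetical CSV with 500 samples, feature columns F1..F20 and a label column
data = pd.read_csv('experiment.csv')
X = data[[f'F{i}' for i in range(1, 21)]].values   # shape (500, 20)
y = data['label'].values                           # assumed label column

# Min-max scale each feature into the 0-1 range
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Reshape the 20 flat features into a 4x5 single-channel "image" per sample
X = X.reshape(-1, 4, 5, 1)                         # shape (500, 4, 5, 1)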

Question No. 2

B. Compare the architectures of AlexNet and GoogLeNet. Which one performs better for image recognition and object detection in terms of accuracy, and why?

Answer:

GoogLeNet

The winner of the ILSVRC 2014 competition was GoogLeNet (a.k.a. Inception V1) from Google. It achieved a top-5 error rate of 6.67%, very close to human-level performance, which the organizers of the challenge were then obliged to evaluate. As it turns out, this was actually rather hard to do and required some training in order for a human to beat GoogLeNet's accuracy. After a few days of training, the human expert (Andrej Karpathy) was able to achieve a top-5 error rate of 5.1% (single model) and 3.6% (ensemble). The network used a CNN inspired by LeNet but implemented a novel element dubbed the inception module, which is based on several very small convolutions in order to drastically reduce the number of parameters; training also used image distortions and RMSprop. The architecture is a 22-layer deep CNN, yet it reduced the number of parameters from 60 million (AlexNet) to 4 million.

AlexNet

AlexNet is the name of a convolutional neural network that had a large impact on the field of machine learning, specifically in the application of deep learning to machine vision. It famously won the ImageNet LSVRC-2012 competition by a large margin. The network had a very similar architecture to LeNet by Yann LeCun et al., but was deeper, with more filters per layer, and with stacked convolutional layers.

AlexNet Architecture

1. It is based on convolutional neural networks.
2. The architecture consists of eight layers in total, of which the first 5 are convolutional layers and the last 3 are fully connected.
3. The first two convolutional layers are followed by overlapping max-pooling layers to extract a maximum number of features. The third, fourth, and fifth convolutional layers are directly connected to the fully connected layers.

GoogLeNet Architecture

1. It is also based on convolutional neural networks.
2. The GoogLeNet architecture is 22 layers deep (27 layers if pooling layers are counted).
3. There are 9 inception modules stacked linearly in total.
4. The ends of the inception modules are connected to a global average pooling layer.

GoogLeNet performs better for image recognition and object detection in terms of accuracy: it achieved a 6.67% top-5 error on ImageNet versus AlexNet's 15.3%, because its inception modules extract features at multiple scales in parallel while 1x1 convolutions keep the parameter count at roughly 4 million (against AlexNet's 60 million), making the much deeper network easier to train and less prone to overfitting. A sketch of an inception module follows.
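
To make the inception-module idea concrete, here is a minimal sketch in Keras. The input shape and filter counts are illustrative assumptions, not the exact values from the GoogLeNet paper:

from keras.layers import Input, Conv2D, MaxPooling2D, concatenate

inputs = Input(shape=(28, 28, 192))

# Parallel branches with different filter sizes; 1x1 convolutions reduce depth first
branch1 = Conv2D(64, (1, 1), padding='same', activation='relu')(inputs)
branch2 = Conv2D(96, (1, 1), padding='same', activation='relu')(inputs)
branch2 = Conv2D(128, (3, 3), padding='same', activation='relu')(branch2)
branch3 = Conv2D(16, (1, 1), padding='same', activation='relu')(inputs)
branch3 = Conv2D(32, (5, 5), padding='same', activation='relu')(branch3)
branch4 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(inputs)
branch4 = Conv2D(32, (1, 1), padding='same', activation='relu')(branch4)

# Concatenate the branch outputs along the channel axis
output = concatenate([branch1, branch2, branch3, branch4], axis=-1)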

Question No. 3

A. What do you know about recurrent neural networks (RNNs)? How can they be used to perform sentiment analysis of tweets or social media content? Provide the necessary steps and code for this.

Answer:

A Recurrent Neural Network (RNN) is a type of neural network where the output from the previous step is fed as input to the current step. In traditional neural networks, all inputs and outputs are independent of each other, but in cases such as predicting the next word of a sentence, the previous words are required, and hence there is a need to remember them.

Steps for sentiment analysis of tweets:

1) Get Twitter or social media API credentials

First, we must apply for an account to access the Twitter (or other social media) API.

2) Set up the API credentials in Python

Save your credentials in a config file and run source ./config to load the keys as environment variables. This is so you do not expose your keys in a Python script. Make sure not to commit this config file to GitHub.

We will use the Tweepy library in Python to get access to the Twitter API. It is a nice wrapper over the raw Twitter API and does a lot of the heavy lifting of creating API URLs and HTTP requests; we just need to provide our keys from step 1, and Tweepy takes care of talking to the Twitter API.

Run pip install tweepy to get the tweepy package in your virtual environment. (I've been using pyenv to manage different versions of Python and have been very impressed. You'll also need the pyenv-virtualenv package to manage virtual environments for you, but that is another blog in itself.)

Code:

import os
import json
import tweepy
from tweepy import Stream
from tweepy.streaming import StreamListener

consumer_key = os.getenv("CONSUMER_KEY_TWITTER")
consumer_secret = os.getenv("CONSUMER_SECRET_TWITTER")
access_token = os.getenv("ACCESS_KEY_TWITTER")
access_token_secret = os.getenv("ACCESS_SECRET_TWITTER")

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

api = tweepy.API(auth)

3) Getting tweet or social media data via the streaming API

After setting up the credentials we can now get tweet data using the API. We can use a filter to extract data on topics of interest.
Code:
class listener(StreamListener):
    def on_data(self, data):
        data = json.loads(data)
        # Filter out non-English tweets
        if data.get("lang") != "en":
            return True
        try:
            timestamp = data['timestamp_ms']
            # Get longer 280-char tweets if possible
            if data.get("extended_tweet"):
                tweet = data['extended_tweet']["full_text"]
            else:
                tweet = data["text"]
            url = "https://www.twitter.com/i/web/status/" + data["id_str"]
            user = data["user"]["screen_name"]
            verified = data["user"]["verified"]
            # write_to_csv is a user-defined helper that appends a row to a CSV file
            write_to_csv([timestamp, tweet, user, verified, url])
        except KeyError as e:
            print("Keyerror:", e)
        return True

    def on_error(self, status):
        print(status)
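
The listener above still has to be attached to a stream and started. A minimal sketch, where the track keywords are placeholder assumptions:

# Attach the listener to a stream and filter incoming tweets by keyword
twitter_stream = Stream(auth, listener())
twitter_stream.filter(track=['deeplearning', 'neuralnetwork'])  # example topics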
4) Get sentiment information

Sentiment analysis can be done either in the listener above or offline once we have collected all the tweet data. We can use out-of-the-box sentiment-processing libraries in Python; from what I saw, I liked TextBlob and VADER sentiment.
Code:
from textblob import TextBlob

ts = TextBlob(tweet).sentiment
print(ts.subjectivity, ts.polarity)

5) Plot sentiment information

Use a graph to plot the sentiment data.

6) Set this up on AWS or Google Cloud Platform

Run this on an AWS EC2 instance or on a Google Cloud Platform server. I am not going into the details of how to set that up; there are fantastic resources for it. Run the above script using "screen" and get access to topics of your interest on Twitter!
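
Since the question asks specifically for an RNN, the off-the-shelf sentiment step above can be replaced by a trained recurrent model. Below is a minimal sketch of an LSTM sentiment classifier in Keras, assuming the collected tweets have already been labelled (tweets is a list of strings, labels a matching array of 0/1 sentiment values; vocabulary size and sequence length are illustrative):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

# Turn raw tweet text into fixed-length integer sequences
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(tweets)
X = pad_sequences(tokenizer.texts_to_sequences(tweets), maxlen=50)

model = Sequential()
model.add(Embedding(input_dim=5000, output_dim=64, input_length=50))  # word embeddings
model.add(LSTM(64))                        # recurrent layer reads the tweet word by word
model.add(Dense(1, activation='sigmoid'))  # positive/negative probability

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, labels, epochs=5, batch_size=32, validation_split=0.2)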
Question No. 3

B. How do filter size, depth, width, epoch size, learning rate and dataset size affect Convolutional Neural Network learning?

Answer:

The size of the filters plays an important role in finding the key features. A larger kernel can pass over features too coarsely and skip essential details in the images, whereas a smaller kernel captures more local information but can also introduce more confusion. Thus, there is a need to determine the most suitable kernel/filter size.

Widening consistently improves performance across residual networks of different depths.

Increasing both depth and width helps until the number of parameters becomes too high and stronger regularization is needed. There does not seem to be a regularization effect from very high depth in residual networks, as wide networks with the same number of parameters as thin ones can learn the same or better representations. Furthermore, wide networks can successfully learn with 2 or more times as many parameters as thin ones, which would require doubling the depth of the thin networks, making them infeasibly expensive to train.

The number of epochs also matters: too few epochs leave the network under-trained, while too many let it overfit the training set. In addition, there is a high correlation between the learning rate and the batch size: when learning rates are high, large batch sizes perform better than they do with small learning rates.

Dataset size affects accuracy in transfer learning with deep convolutional neural networks. The first effect is on the baseline case (training the network with randomly initialized weights): the model starts to overfit the training data when the dataset size is artificially reduced, which leads to a steady decline in accuracy on both Tiny-ImageNet and MiniPlaces2. This can be explained by a sub-optimal parameter configuration caused by overfitting on the small dataset.
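
In Keras, most of these factors are explicit knobs. A minimal sketch of where each one is set, with all values illustrative (the training arrays X_train and y_train are placeholders, so the fit call is left commented out):

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))  # filter count (width) and filter size
model.add(Conv2D(64, (3, 3), activation='relu'))                           # stacking layers increases depth
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer=Adam(lr=0.001),  # learning rate
              loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=20, batch_size=32)  # epochs and batch size; dataset size is X_train itself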

Question No. 4

For even student IDs

You have to choose a dataset of images/video from a surveillance camera or drone, then do image detection and recognition using a Convolutional Neural Network (CNN) or any variation of it. Also provide the code and the overall steps of the working model.
Answer:

from keras.models import Sequential  # to initialize the NN
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout

# Initializing the CNN
classifier = Sequential()

# Step 1: Convolution
classifier.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu'))

# Step 2: Pooling
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Dropout(0.3))
classifier.add(Conv2D(32, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Dropout(0.2))

# Step 3: Flattening
classifier.add(Flatten())

# Step 4: Full connection
classifier.add(Dropout(0.2))
classifier.add(Dense(512, activation='relu'))
classifier.add(Dense(256, activation='relu'))
classifier.add(Dense(1, activation='sigmoid'))

# Compiling the CNN
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Fitting the CNN to the images
from keras_preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory(
    'training_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')
test_set = test_datagen.flow_from_directory(
    'test_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')

classifier.fit_generator(
    training_set,
    steps_per_epoch=2732 // 32,   # 2732 training samples / batch size 32
    epochs=20,
    validation_data=test_set,
    validation_steps=435 // 32)   # 435 validation samples / batch size 32
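
Once trained, the model can be used for recognition on new frames. A minimal sketch, where the file path is a placeholder:

import numpy as np
from keras_preprocessing import image

# Load a single surveillance frame and preprocess it the same way as the training data
frame = image.load_img('frame.jpg', target_size=(64, 64))
frame = image.img_to_array(frame) / 255.0
frame = np.expand_dims(frame, axis=0)         # shape (1, 64, 64, 3)

prediction = classifier.predict(frame)[0][0]  # probability for the positive class
print('object detected' if prediction > 0.5 else 'no object')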
