
Anna University Exams April May 2022 – Regulation 2017

Rejinpaul.com Unique Important Questions –BE/BTECH

CS8082 Machine Learning Techniques


PART B & PART C IMPORTANT QUESTIONS

Unit I

1. Briefly explain the types of machine learning, with examples.


2. Examine the Version Space and Candidate Elimination Algorithm in Detail
3. Give three computer applications for which machine learning approaches seem
appropriate and three for which they seem inappropriate, and include a justification
for each.
4. For the following training example, compute the new decision tree and show the
value of the information gain for each candidate attribute at each step in growing
the tree.

Sky    Air-Temp  Humidity  Wind  Water  Forecast  Enjoy-Sport?
Sunny  Warm      Normal    Weak  Warm   Same      No
5. Consider the following set of training examples:

a. What is the entropy of this collection of training examples with respect to the
target function classification?
b. What is the information gain of a2 relative to these training examples?
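Since the training-example table for this question is not reproduced above, the stand-in rows below are invented for illustration; the sketch only shows how entropy and the information gain of a2 would be computed:

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * log2(p) for p in probs)

def information_gain(examples, labels, attr_index):
    """Gain(S, A) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v)."""
    total = entropy(labels)
    n = len(examples)
    for v in set(x[attr_index] for x in examples):
        subset = [lab for x, lab in zip(examples, labels) if x[attr_index] == v]
        total -= len(subset) / n * entropy(subset)
    return total

# Invented stand-in data: each example is (a1, a2), label is '+' or '-'.
examples = [('T','T'), ('T','T'), ('T','F'), ('F','F'), ('F','T'), ('F','F')]
labels   = ['+', '+', '-', '+', '-', '-']
print(entropy(labels))                        # 1.0 for a 3+/3- split
print(information_gain(examples, labels, 1))  # gain of a2
```

With your actual exam table, substitute the real rows into `examples` and `labels`; the functions themselves do not change.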
6. Elaborate Hypothesis Space Search and Inductive Bias in Decision Tree Learning

7. Examine the issues in Decision Tree Learning.


Unit II

1. Briefly analyze the perceptron and its training rules.
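A minimal sketch of the perceptron training rule, w_i <- w_i + eta*(t - o)*x_i, may help here; the AND target function and the learning rate are chosen for illustration:

```python
# A minimal perceptron trained with the perceptron rule on the AND function.
def train_perceptron(data, eta=0.1, epochs=50):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias (threshold)
    for _ in range(epochs):
        for x, t in data:
            # threshold unit: output 1 if the weighted sum exceeds 0
            o = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            # perceptron rule: w_i <- w_i + eta * (t - o) * x_i
            w[0] += eta * (t - o) * x[0]
            w[1] += eta * (t - o) * x[1]
            b    += eta * (t - o)
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0 for x, _ in AND])  # [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron convergence theorem guarantees this loop settles on correct weights; XOR, by contrast, would never converge.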


2. Briefly explain the multilayer networks learned by the BACKPROPAGATION
algorithm.
3. Derive a gradient descent training rule for a single unit with output o, where

Download Important Question from rejinpaul.com & also refer your friends!!
4. Consider the alternative error function

Derive the gradient descent update rule for this definition of E. Show that it can
be implemented by multiplying each weight by some constant before performing the
standard gradient descent update.
5. Explain Genetic Algorithm in detail.
6. Illustrate the operation of the GP crossover operator by applying it using two copies
of our tree as the two parents.
Unit III

7. State Bayes theorem and illustrate it with an example.
8. Explain naive Bayes algorithm.
9. Use the naive Bayes algorithm to determine whether a red domestic SUV car is a
stolen car or not, using the following data:
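The data table for this question is not reproduced above, so the rows below are invented stand-ins; the sketch shows the naive Bayes computation for the query (red, SUV, domestic):

```python
from collections import Counter

def naive_bayes_predict(rows, labels, query):
    """P(c | x) is proportional to P(c) * product_i P(x_i | c),
    with both factors estimated by frequency counts (no smoothing)."""
    n = len(labels)
    classes = Counter(labels)
    scores = {}
    for c, count in classes.items():
        score = count / n  # prior P(c)
        for i, v in enumerate(query):
            match = sum(1 for r, lab in zip(rows, labels)
                        if lab == c and r[i] == v)
            score *= match / count  # likelihood estimate P(x_i | c)
        scores[c] = score
    return max(scores, key=scores.get), scores

# Invented stand-in rows: attributes are (color, type, origin).
rows = [("red", "sports", "domestic"), ("red", "sports", "domestic"),
        ("red", "sports", "imported"), ("yellow", "sports", "domestic"),
        ("yellow", "suv", "imported"), ("red", "suv", "imported")]
labels = ["yes", "no", "yes", "no", "no", "no"]   # stolen?
pred, scores = naive_bayes_predict(rows, labels, ("red", "suv", "domestic"))
print(pred)  # no
```

Note that a zero count for any attribute value zeroes out the whole class score (here "suv" never occurs with "yes"); in an exam answer you would mention Laplace smoothing as the standard fix.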


10. Explain the general MLE method for estimating the parameters of a probability
distribution.
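For a concrete instance of the MLE recipe, here is a short sketch estimating the parameters of a Gaussian; the data values are invented for illustration:

```python
# MLE for a Gaussian: mu_hat is the sample mean, and
# sigma2_hat = (1/n) * sum (x - mu_hat)^2.
# Note the 1/n, not 1/(n-1): the MLE of the variance is the biased estimator.
def gaussian_mle(xs):
    n = len(xs)
    mu = sum(xs) / n
    sigma2 = sum((x - mu) ** 2 for x in xs) / n
    return mu, sigma2

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(gaussian_mle(xs))  # (5.0, 4.0)
```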
11. Draw the Bayesian belief network that represents the conditional independence
assumptions of the naive Bayes classifier using your own example
12. Explain the probably approximately correct (PAC) learning model.
13. Explain the Naïve Bayes classifier.

Unit IV

1. (i) Discuss the K Nearest Neighbor algorithm (ii) Discuss Locally Weighted
Regression (iii) Discuss the learning task and the Q learning in the context of
reinforcement learning
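For part (i), a compact sketch of the basic (unweighted) k-NN classifier; the training points are invented for illustration:

```python
from math import dist           # Euclidean distance, Python 3.8+
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours."""
    neighbours = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [((1, 1), "a"), ((1, 2), "a"), ((2, 1), "a"),
         ((6, 6), "b"), ((7, 6), "b"), ((6, 7), "b")]
print(knn_classify(train, (2, 2)))  # prints "a"
print(knn_classify(train, (6, 5)))  # prints "b"
```

Locally weighted regression (part ii) follows the same "distance to the query" idea, but fits a local linear model with distance-based weights instead of voting.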

2. a. Discuss briefly the hidden Markov model.


Ram is a three-month-old baby. He can be happy, hungry, or having a wet diaper. Initially,
when he wakes up from his nap at 1 pm, he is happy. If he is happy, there is a 50% chance
that he will remain happy one hour later, a 25% chance to be hungry by then, and a 25%
chance to have a wet diaper. Similarly, if he is hungry, one hour later he will be happy with
25% chance, hungry with 25% chance, and wet diaper with 50% chance. If he has a wet
diaper, one hour later he will be happy with 50% chance, hungry with 25% chance, and wet
diaper with 25% chance. When he is happy, he smiles 75% of the time and cries 25% of the
time; when he is hungry, he smiles 25% and cries 75%; when he has a wet diaper, he smiles
50% and cries 50%.
Draw the HMM that corresponds to the above story; clearly mark the transition
probabilities and output probabilities.
The nanny left a note: "1 pm smile, 2 pm cry, 3 pm smile". What is the probability that
this particular observed sequence happens?
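The nanny-note probability can be checked with the forward algorithm, using exactly the probabilities stated above (Ram is assumed happy at 1 pm, as the story says):

```python
# Forward algorithm for the baby HMM: states Happy/Hungry/Wet,
# start state Happy at 1 pm, observations "smile, cry, smile".
states = ["happy", "hungry", "wet"]
start = {"happy": 1.0, "hungry": 0.0, "wet": 0.0}
trans = {
    "happy":  {"happy": 0.50, "hungry": 0.25, "wet": 0.25},
    "hungry": {"happy": 0.25, "hungry": 0.25, "wet": 0.50},
    "wet":    {"happy": 0.50, "hungry": 0.25, "wet": 0.25},
}
emit = {
    "happy":  {"smile": 0.75, "cry": 0.25},
    "hungry": {"smile": 0.25, "cry": 0.75},
    "wet":    {"smile": 0.50, "cry": 0.50},
}

def forward(observations):
    """P(observation sequence), summed over all hidden state paths."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["smile", "cry", "smile"]))  # 0.17578125 (= 45/256)
```

The answer 45/256 is the sum over all nine possible hidden-state paths of length three that start in the happy state.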

3. We have 4 medicines as our training data points, and each medicine has 2
attributes that represent its coordinates. Determine which medicines belong to
cluster 1 and which medicines belong to the other cluster.

Object      Attribute 1 (X): weight index   Attribute 2 (Y): pH
Medicine A  1                               1
Medicine B  2                               1
Medicine C  4                               3
Medicine D  5                               4
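The cluster assignment can be checked with a plain k-means sketch; taking the initial centroids at medicines A and B is an assumption (the question does not fix the seeds, but this is the usual choice in the classic worked example):

```python
from math import dist

# Plain 2-cluster k-means on the four medicines.
points = {"A": (1, 1), "B": (2, 1), "C": (4, 3), "D": (5, 4)}

def kmeans(points, centroids, iters=10):
    clusters = []
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in centroids]
        for name, p in points.items():
            nearest = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(name)
        # update step: move each centroid to the mean of its cluster
        # (assumes no cluster goes empty, which holds for this data)
        centroids = [tuple(sum(points[n][d] for n in cl) / len(cl) for d in (0, 1))
                     for cl in clusters]
    return clusters, centroids

clusters, centroids = kmeans(points, [points["A"], points["B"]])
print(clusters)    # [['A', 'B'], ['C', 'D']]
print(centroids)   # [(1.5, 1.0), (4.5, 3.5)]
```

The assignment stabilises after the third pass: {A, B} form one cluster and {C, D} the other.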

4. Consider the following alternative method for accounting for distance in locally
weighted regression. Create a virtual set of training examples D' as follows: for each
training example (x, f(x)) in the original data set D, create some (possibly fractional)
number of copies of (x, f(x)) in D', where the number of copies is K(d(x_q, x)). Now
train a linear approximation to minimize the error criterion.

The idea is to make more copies of the training examples that are near the query
instance, and fewer of those that are distant. Derive the gradient descent rule for this
criterion. Express the rule in the form of a sum over members of D rather than D',
and compare it with the rules.

Unit V

1. Explain the Q-learning algorithm in reinforcement learning with a suitable example.
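A small sketch of tabular Q-learning on a hypothetical 4-state corridor (invented for illustration, not the Cliff Walking grid) shows the update rule in action:

```python
import random

# Tabular Q-learning on a tiny corridor: states 0..3, actions -1 (left) and
# +1 (right), reward 1 on entering the terminal goal state 3, else 0.
random.seed(0)
N, GOAL = 4, 3
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(300):                                  # episodes
    s = 0
    for _ in range(100):                              # cap episode length
        if s == GOAL:
            break
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)                # deterministic move, walls clamp
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(greedy)  # [1, 1, 1]: the learned greedy policy moves right toward the goal
```

The learned Q-values approach 1, 0.9, and 0.81 for the right action in states 2, 1, and 0, reflecting the discount factor gamma = 0.9 applied per step from the goal.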



2. Solve the given problem using the Hidden Markov Model.




3. Apply the Apriori algorithm on the grocery store example with support threshold
s = 33.34% and confidence threshold c = 60%, where H, B, K, C and P are different
items purchased by customers.
• Show all final frequent itemsets
• Specify the association rules that are generated
• Show the final association rules sorted by confidence
• Represent the transactions as a graph

Transaction ID  Items
T1              H, B, K
T2              H, B
T3              H, C, P
T4              P, C
T5              P, K
T6              H, C, P
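The supports and confidences can be verified mechanically; the sketch below brute-forces all candidate itemsets rather than doing level-wise Apriori pruning, which is fine at this scale:

```python
from itertools import combinations

# Brute-force frequent-itemset and rule mining on the six transactions above:
# min support 33.34% of 6 transactions, min confidence 60%.
transactions = [{"H", "B", "K"}, {"H", "B"}, {"H", "C", "P"},
                {"P", "C"}, {"P", "K"}, {"H", "C", "P"}]
n = len(transactions)
min_sup, min_conf = 0.3334, 0.60

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / n

items = sorted(set().union(*transactions))
frequent = [frozenset(c) for size in (1, 2, 3)
            for c in combinations(items, size)
            if support(frozenset(c)) >= min_sup]
print([sorted(f) for f in frequent])  # [['C'], ['H'], ['P'], ['C', 'P']]

rules = []
for f in frequent:
    if len(f) < 2:
        continue
    for r in range(1, len(f)):
        for lhs in combinations(f, r):
            lhs = frozenset(lhs)
            conf = support(f) / support(lhs)  # conf(L -> R) = sup(L u R) / sup(L)
            if conf >= min_conf:
                rules.append((sorted(lhs), sorted(f - lhs), conf))
rules.sort(key=lambda t: -t[2])
print(rules)  # C -> P (conf 1.0), then P -> C (conf 0.75)
```

Note that the 33.34% threshold (rather than 33.33%) is what excludes the count-2 itemsets such as {B} and {H, C}, since 2/6 is approximately 33.33%.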
4. Discuss the Netflix case study on Recommendation Systems using Reinforcement
Learning.
5. How does Q-learning work? Explain with the Cliff Walking problem.

