Learning Vector Quantization

In the early 1980s, the Finnish professor Teuvo Kohonen observed that some areas of the brain develop structures with distinct regions, each highly sensitive to a specific input pattern. His models are built on competition among neural units, a principle called winner-takes-all.

Learning Vector Quantization (LVQ) is a prototype-based supervised classification algorithm. Here a prototype is a representative vector in the input space; one or more prototypes are used to represent each class in the dataset. New (unknown) data points are then assigned the class of the prototype that is nearest to them. For "nearest" to make sense, a distance measure has to be defined, typically the Euclidean distance. There is no limit on how many prototypes can be used per class; the only requirement is that there is at least one prototype for each class. LVQ is a special case of an artificial neural network and applies a winner-take-all, Hebbian-learning-based approach. It is closely related to the Self-Organizing Map (SOM) algorithm; both SOM and LVQ were invented by Teuvo Kohonen.
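
As a concrete illustration of this nearest-prototype decision rule, here is a minimal Python/NumPy sketch. The function name classify and the toy prototypes are illustrative assumptions, not part of the LVQ definition:

```python
import numpy as np

def classify(x, prototypes, labels):
    # Assign x the class of the nearest prototype, using squared
    # Euclidean distance as the distance measure.
    d = ((prototypes - x) ** 2).sum(axis=1)
    return labels[np.argmin(d)]

# One prototype for class "A", two for class "B": any number of
# prototypes per class is allowed, as long as each class has at least one.
W = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 0.8]])
C = np.array(["A", "B", "B"])
print(classify(np.array([0.2, 0.1]), W, C))  # prints "A"
```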


An LVQ system is represented by a set of prototypes W = (w_1, ..., w_n). In winner-take-all training algorithms, the winner is moved closer to the data point if it classifies the point correctly, and moved away if it classifies the point incorrectly. An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.
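
This conditional update is the heart of the algorithm and anticipates the rule given in Step 4 of the training algorithm below. A minimal sketch, assuming a learning rate lr (the helper name lvq_update is hypothetical):

```python
import numpy as np

def lvq_update(winner, x, correct, lr=0.1):
    # Winner-take-all rule: move the winning prototype toward x when
    # it classified x correctly, and away from x when it did not.
    sign = 1.0 if correct else -1.0
    return winner + sign * lr * (x - winner)
```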

Training Algorithm
Step 0: Initialize the reference vectors. This can be done in any of the following ways:
- From the given set of training vectors, take the first "m" (the number of clusters) training vectors and use them as weight vectors; the remaining vectors are then used for training.
- Assign the initial weights and classifications randomly.
- Use the k-means clustering method.
Also set the initial learning rate α.

Step 1: Perform Steps 2-6 if the stopping condition is false.

Step 2: Perform Steps 3-4 for each training input vector x.

Step 3: Calculate the squared Euclidean distance for each output unit j = 1 to m:

D(j) = ∑_{i=1}^{n} (x_i − w_ij)²

Find the winning unit index J such that D(J) is minimum.

Step 4: Update the weights of the winning unit w_J using the following conditions, where T is the target class of the training input x and C_J is the class represented by unit J.

if T = C_J then w_J(new) = w_J(old) + α[x − w_J(old)]

if T ≠ C_J then w_J(new) = w_J(old) − α[x − w_J(old)]

Step 5: Reduce the learning rate α

Step 6: Test for the stopping condition of the training process. (The stopping condition may be a fixed number of epochs, or the learning rate α having decayed to a negligible value.)
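
Putting Steps 0-6 together, the following is a minimal sketch of the LVQ1 training loop in Python/NumPy. It is an illustration under stated assumptions, not the tutorial's reference implementation: the name train_lvq1, the first-m-per-class initialization choice, and the defaults for alpha, decay, and epochs are all assumed here.

```python
import numpy as np

def train_lvq1(X, y, m_per_class=1, alpha=0.1, decay=0.95, epochs=20):
    # X: (N, n) array of training vectors; y: (N,) array of class labels.
    classes = np.unique(y)

    # Step 0: initialize the reference vectors by taking the first
    # m_per_class training vectors of each class as weight vectors.
    protos, proto_labels = [], []
    for c in classes:
        idx = np.where(y == c)[0][:m_per_class]
        protos.extend(X[idx])
        proto_labels.extend([c] * len(idx))
    W = np.array(protos, dtype=float)
    C = np.array(proto_labels)

    for _ in range(epochs):                  # Step 1: repeat until stop
        for x, t in zip(X, y):               # Step 2: each training vector
            D = ((x - W) ** 2).sum(axis=1)   # Step 3: squared distances
            J = int(np.argmin(D))            # winning unit index
            if t == C[J]:                    # Step 4: move winner toward x
                W[J] += alpha * (x - W[J])
            else:                            # ...or away from x
                W[J] -= alpha * (x - W[J])
        alpha *= decay                       # Step 5: reduce learning rate
        if alpha < 1e-4:                     # Step 6: stopping condition
            break
    return W, C
```

After training, a new point is classified by finding its nearest prototype in W and reading off the matching label in C, exactly as in the classification sketch earlier.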

