ASSOCIATIVE MEMORY NETWORK
• An associative memory network can store a set of patterns as memories.
• When the associative memory is presented with a key pattern, it responds by producing one of the stored patterns, which closely resembles or relates to the key pattern.
• The recall is through association of the key pattern with the help of the information memorized.
• These types of memories are also called content-addressable memories (CAM).
• Because the memory is content addressable, even an incorrect or incomplete version of the string "Albert Einstein" used as a key is sufficient to recover the correct name "Albert Einstein."
• Conventional memory in computers
– Address-based memory
• Associative memory
– Content-based memory
– Keyword-based memory
– Pattern-based memory
• The aim of an associative memory is to produce the associated output pattern whenever one of the stored input patterns is applied to the neural network.
• Input data is correlated with the stored data in the CAM.
• The stored patterns must be unique, i.e., a different pattern is stored in each location.
• Associative memory makes a parallel search within the stored data file, as illustrated in the sketch below.
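As a rough illustration of content-addressable recall (a sketch, not taken from these slides), the snippet below correlates a key with every stored bipolar pattern and returns the best match; the stored patterns, the noisy key, and the helper name cam_recall are all illustrative assumptions.

```python
import numpy as np

def cam_recall(key, stored_patterns):
    """Return the stored bipolar pattern most correlated with the key.

    The dot product serves as the correlation measure; conceptually the
    comparison against all stored patterns happens in parallel.
    """
    scores = stored_patterns @ key              # one correlation score per stored pattern
    return stored_patterns[int(np.argmax(scores))]

# Illustrative stored data file (bipolar patterns).
stored = np.array([[ 1,  1,  1, -1],
                   [-1,  1, -1,  1],
                   [ 1, -1, -1, -1]])

key = np.array([1, 1, 1, 1])                    # first pattern with its last bit flipped
print(cam_recall(key, stored))                  # -> [ 1  1  1 -1]
```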
CAM architecture
Two types of associative memories
• Auto-associative memory network
• Hetero-associative memory network
• Both of these nets are single-layer nets in which the weights are determined in such a manner that the net stores a set of pattern associations.
• Each of these associations is an input-output vector pair, s:t.
• If each of the output vectors is the same as the input vector with which it is associated, then the net is said to be an auto-associative memory net; if the output vectors differ from the input vectors, the net is a hetero-associative memory net. A small illustration of both kinds of training pairs is sketched below.
The weights of an associative memory net can be determined using:
• Hebb Rule
• Outer Products Rule
Hebb Rule
• It is widely used for finding the weights of an associative memory neural net.
• The weights are updated until there is no further weight change.
• It can be used with patterns that are represented as either binary or bipolar vectors (see the sketch below).
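A minimal sketch of the Hebb rule for a single association, assuming bipolar vectors; the function name hebb_update and the example pair are illustrative, not taken from the slides:

```python
import numpy as np

def hebb_update(W, s, t):
    """Hebb rule for one training pair s:t.

    Each weight grows by the product of the corresponding input and
    output activations: w_ij(new) = w_ij(old) + s_i * t_j.
    """
    for i in range(len(s)):
        for j in range(len(t)):
            W[i, j] += s[i] * t[j]
    return W

# Illustrative bipolar training pair.
s = np.array([1, -1, 1, -1])
t = np.array([1, -1])
W = hebb_update(np.zeros((s.size, t.size)), s, t)
print(W)
```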
Outer Products Rule
• The outer products rule is an alternative method for finding the weights of an associative net.
• Let s = (s1, ..., sn) and t = (t1, ..., tm) be row vectors. Then, for a particular training pair s:t, the weight matrix is the outer product
  W = s^T t,  i.e.,  w_ij = s_i * t_j.
• For a set of training pairs, the outer products of the individual pairs are summed, which gives the same weights as the Hebb rule (see the sketch below).
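A matching sketch of the outer products rule under the same assumptions (np.outer computes s^T t directly; the training pairs are illustrative):

```python
import numpy as np

# Illustrative training pairs s:t (bipolar vectors).
pairs = [
    (np.array([1, -1, 1, -1]), np.array([ 1, -1])),
    (np.array([1,  1, 1, -1]), np.array([-1,  1])),
]

# Sum the outer products s^T t over all training pairs;
# this yields the same weight matrix as applying the Hebb update pair by pair.
W = sum(np.outer(s, t) for s, t in pairs)
print(W)
```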
Auto-associative Memory Network
• The input layer consists of n input units and the output layer also consists of n output units.
• The training input and target output vectors are the same.
Training Algorithm
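A hedged sketch of one common formulation of the training procedure for an auto-associative net: initialize the weights to zero, then apply the Hebb/outer products rule with the target equal to the input. Function names and step comments are assumptions, not copied from the slides.

```python
import numpy as np

def train_autoassociative(patterns):
    """Train an auto-associative net on a list of bipolar patterns.

    Step 0: initialize all weights to zero.
    Step 1: for each training vector s, set x = s and y = s, then
            update every weight: w_ij(new) = w_ij(old) + x_i * y_j.
    """
    n = len(patterns[0])
    W = np.zeros((n, n))
    for s in patterns:
        s = np.asarray(s)
        W += np.outer(s, s)        # target equals input for auto-association
    return W
```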
Testing Algorithm
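Similarly, a hedged sketch of the testing (recall) phase: set the activations of the input units to the test vector, compute the net input y_in_j = sum_i x_i * w_ij, and apply a bipolar step activation. The convention of mapping a net input of exactly 0 to +1 is an assumption.

```python
import numpy as np

def recall(W, x):
    """Recall a stored pattern from a (possibly noisy) input vector x.

    Step 1: compute the net input y_in_j = sum_i x_i * w_ij.
    Step 2: apply the bipolar activation y_j = +1 if y_in_j >= 0 else -1.
    """
    y_in = np.asarray(x) @ W
    return np.where(y_in >= 0, 1, -1)
```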
Applications
• Speech processing
• Image processing
• Pattern classification, etc.
❑ An auto-associative net to store one vector: recognizing the stored vector.
❑ Step 0. The vector s = (1, 1, 1, -1) is stored with the weight matrix W = s^T s:

        [  1   1   1  -1 ]
    W = [  1   1   1  -1 ]
        [  1   1   1  -1 ]
        [ -1  -1  -1   1 ]

❑ Testing with the stored vector as input, x = (1, 1, 1, -1), the net input is x·W = (4, 4, 4, -4). Applying the activation function over the net input to calculate the output, we get y = (1, 1, 1, -1) = s; hence the correct response is obtained.
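A short sketch reproducing this worked example (self-contained, using the same outer-product construction): it stores s = (1, 1, 1, -1), verifies that the stored vector is recognized, and checks recall from an input with one flipped component.

```python
import numpy as np

s = np.array([1, 1, 1, -1])
W = np.outer(s, s)                          # weight matrix storing the single vector s

# Recall with the stored vector itself: net input (4, 4, 4, -4) -> (1, 1, 1, -1).
print(np.where(s @ W >= 0, 1, -1))          # [ 1  1  1 -1]

# Recall with the first component flipped: the stored vector is still recovered.
x_noisy = np.array([-1, 1, 1, -1])
print(np.where(x_noisy @ W >= 0, 1, -1))    # [ 1  1  1 -1]
```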
Testing the network (with the Y vectors as input):
For test pattern E, the input is now [-1 1]. Computing the net input, we have: