M7 Digital Memory Systems Module


Technological University of the Philippines

COLLEGE OF ENGINEERING
Electronics Engineering Department

Module 7
Digital Memory Systems

Almarinez, Mherby C.
Banao, Laureana Joy S.
Daily, Rex Robert A.
Ete, Frances Desiree D.
Formalejo, Ivy Joy I.
Gallo, John Laurence B.
Hermano, Angelo A.
Miguel, Richell Mark B.
Peregrino, Carl Sonmuel M.
BSECE 3A

Engr. Nilo M. Arago


Instructor
Module 7
Digital Memory Systems

I. Introduction to Digital Memory Systems

A memory unit is a device designed for the transfer and storage
of binary information, facilitating retrieval for processing as
needed. It comprises a collection of cells capable of storing a
substantial amount of binary data.

II. Two Types of Memory:

1. RAM (Random-Access Memory) - stores new information for later
use. When you open a program or create a file, that information is
temporarily stored in RAM for quick access by the processor. RAM
can perform both write and read operations, whereas ROM can
only perform read operations.
2. Read-Only Memory (ROM) - is a programmable logic device (PLD).
Binary information stored within the device is specified and
embedded within the hardware through a process called
programming. Programmable Logic Device (PLD) is an integrated
circuit with internal logic gates connected through electronic
paths that behave similarly to fuses.

Two Kinds of Memories:

1. Primary memory is the immediate task-performing memory, with
two subtypes: Dynamic Random Access Memory (DRAM), which
requires periodic recharging to retain data and exhibits slower
performance, and Static Random Access Memory (SRAM), the
fastest storage, with no internal capacitors but higher cost.

2. Secondary memory, used for permanent storage, is slower than
primary memory, provides permanent storage with a larger size,
is more cost-effective, and allows semi-random accessibility.
However, its sequential movement requirement results in slower
operations.

Random Access Memory (RAM)

• The time it takes to transfer information to or from a random
location is always the same.
• The time required to retrieve information that is stored on
magnetic tape depends on the location of the data.
• The architecture of memory is such that information can be
selectively retrieved from any of its internal locations.

− Words - groups of 1’s and 0’s that may represent binary-coded
information
− Byte - a group of 8 bits; most computer memories use words
that are multiples of 8 bits in length

Figure 1.1 Block Diagram of a Memory Unit

Where:
n data input lines - provide the information
n data output lines - supply the information
k address lines - specify the particular word chosen among the
many available
Write - signals that binary data is to be transferred into memory
Read - signals that binary data is to be transferred out of memory
Address - each word in memory is assigned an identification
number, starting from 0 up to 2^k - 1, where k is the number of
address lines.

• Memories vary greatly in size and may range from 1,024 words,
requiring 10 address bits, up to 2^32 words, requiring 32 address bits.
• It is essential to refer to the number of words (or bytes) in memory
with one of the letters K (kilo or 2^10), M (mega or 2^20), and G (giga
or 2^30).

Example:
- How many address lines and data lines are needed in the following
memory units?

Formula: M = 2^n

a. 2 KB
1 Byte = 8 bits = Data Lines
1K = 1,024 = 2^10
2K = 2^10 x 2^1 = 2^11, so n = 11 = number of Address Lines

b. 4 M x 4 bits
4 bits = Data Lines
1M = 2^20
4M = 2^2 x 2^20 = 2^22, so n = 22 = number of Address Lines

c. 2 GB
1 Byte = 8 bits = Data Lines
1G = 2^30
2G = 2^1 x 2^30 = 2^31, so n = 31 = number of Address Lines
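The worked examples above can be checked with a short sketch (Python used here for illustration; the function name is our own):

```python
import math

def memory_lines(words, bits_per_word):
    """Return (address_lines, data_lines) for a memory of the given size.

    A memory with M words needs k address lines where 2**k >= M,
    and one data line per bit of the word.
    """
    address_lines = math.ceil(math.log2(words))
    return address_lines, bits_per_word

# a. 2 KB  -> 2 * 2**10 = 2**11 words of 8 bits
print(memory_lines(2 * 2**10, 8))   # (11, 8)
# b. 4 M x 4 bits -> 2**22 words of 4 bits
print(memory_lines(4 * 2**20, 4))   # (22, 4)
# c. 2 GB -> 2**31 words of 8 bits
print(memory_lines(2 * 2**30, 8))   # (31, 8)
```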

RAM: Write and Read Operations

Steps in transferring a new word to memory:
1. Apply the binary address of the desired word to the address lines.
2. Apply the data bits that must be stored in memory to the data
input lines.
3. Activate the write input.

Steps in transferring a stored word out of memory:
1. Apply the binary address of the desired word to the address lines.
2. Activate the read input.

Table 1.1 Control Inputs to Memory Chip


Memory Enable Read/Write Memory Operation
0 X None
1 0 Write to selected word
1 1 Read from selected word
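A minimal sketch of the behavior in Table 1.1, assuming a simple dict-backed model (the names `MemoryUnit` and `access` are illustrative, not from any real library):

```python
class MemoryUnit:
    """Toy model of the memory chip in Table 1.1.

    enable=0       -> no operation
    enable=1, rw=0 -> write the selected word
    enable=1, rw=1 -> read the selected word
    """

    def __init__(self, k):
        self.size = 2 ** k      # addresses run from 0 to 2**k - 1
        self.words = {}         # address -> stored word

    def access(self, enable, read_write, address, data_in=None):
        assert 0 <= address < self.size
        if not enable:
            return None                       # chip not selected: no operation
        if read_write == 0:
            self.words[address] = data_in     # write to selected word
            return None
        return self.words.get(address, 0)     # read from selected word

mem = MemoryUnit(k=10)
mem.access(1, 0, address=5, data_in=0b1011)   # write 1011 to address 5
print(mem.access(1, 1, address=5))            # read back -> 11 (0b1011)
print(mem.access(0, 1, address=5))            # memory disabled -> None
```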
Memory Access Modes
- Memory access modes are determined by the type of components
used.
- Two main categories: Random-Access Memory (RAM) and Sequential-
Access Memory.

Random-Access Memory (RAM) vs. Sequential-Access Memory

Random-Access Memory (RAM):
- Word locations are separated in space, with each word occupying
a specific location.
- Access time is constant, regardless of word location.
- Examples: integrated circuit RAM units.

Sequential-Access Memory:
- Information is not immediately accessible but becomes available
at specific time intervals.
- Access time varies based on the position of the word with respect
to the read head.
- Examples:
Magnetic Disk - a storage device that uses a magnetization
process to write, rewrite, and access data. It is covered with a
magnetic coating and stores data in the form of tracks, spots,
and sectors. (Hard disks, zip disks, and floppy disks are common
examples of magnetic disks.)
Tape Unit - in magnetic tape, only one side of the ribbon is used
for storing data. It is a sequential memory containing a thin
plastic ribbon coated with magnetic oxide to store data. Data
read/write speed is slower because of sequential access. It is
highly reliable and requires a magnetic tape drive for writing
and reading data.

Integrated Circuit RAM Units: Static RAM (SRAM) vs. Dynamic RAM
(DRAM)

• Static RAM (SRAM):
- Internal latches store binary information.
- Information remains valid as long as power is applied.
- Easy to use, with shorter read and write cycles.

• Dynamic RAM (DRAM):
- Stores binary information as electric charges on capacitors.
- Requires periodic refreshing to prevent charge decay.
- Offers reduced power consumption and larger storage capacity.

SRAM vs. DRAM

- SRAM stores information as long as power is supplied; DRAM stores
information as long as power is supplied, or for a few milliseconds
after power is switched off.
- SRAM uses transistors to store information; DRAM uses capacitors.
- SRAM uses no capacitors, so no refreshing is required; in DRAM, the
contents of the capacitors must be refreshed periodically to store
information for a longer time.
- SRAM is faster; DRAM provides slower access speeds.
- SRAM has no refreshing unit; DRAM has a refreshing unit.
- SRAMs are expensive; DRAMs are cheaper.
- SRAMs are low-density devices; DRAMs are high-density devices.
- In SRAM, bits are stored in voltage form; in DRAM, bits are stored
as electric charge.
- SRAM is used in cache memories; DRAM is used in main memories.
- SRAM consumes less power and generates less heat; DRAM uses more
power and generates more heat.
- SRAM has lower latency than DRAM.
- SRAMs are more resistant to radiation than DRAMs.
- SRAM has a higher data transfer rate than DRAM.
- SRAM is used in high-speed cache memory; DRAM is used in
lower-speed main memory.
- SRAM is used in high-performance applications; DRAM is used in
general-purpose applications.

Volatile and Non-Volatile Memory

• Volatile Memory (Temporary Memory):

- Fetches/stores data at high speed.

- Loses stored information when power is turned off.

- Examples include CMOS integrated circuit RAMs and cache
memory.

• Non-Volatile Memory:

- Type of memory in which data or information is not lost even
when power is shut down.

- Enables computers to store essential programs that are needed
again after power-on.

- Examples include magnetic disks and ROM.


Volatile Memory vs. Non-Volatile Memory

- Volatile memory is the type of memory in which data is lost when
it is powered off; non-volatile memory is the type in which data
remains stored even when it is powered off.
- Contents of volatile memory are stored temporarily; contents of
non-volatile memory are stored permanently.
- Volatile memory is faster than non-volatile memory; non-volatile
memory is slower.
- RAM (Random Access Memory) is an example of volatile memory;
ROM (Read Only Memory) is an example of non-volatile memory.
- In volatile memory, data can be transferred more easily than in
non-volatile memory.
- In volatile memory, a process can read and write; in non-volatile
memory, a process can only read.
- Volatile memory generally has less storage capacity; non-volatile
memory generally has more.
- Volatile memory stores the data of programs currently being
processed by the CPU; non-volatile memory stores any data that
must be saved permanently.
- Volatile memory is more costly per unit size; non-volatile memory
is less costly per unit size.
- Volatile memory has a huge impact on the system's performance;
non-volatile memory has a huge impact on the system's storage
capacity.
- In volatile memory, the processor has direct access to data; in
non-volatile memory, it does not.
- Volatile memory chips are generally kept in the memory slot;
non-volatile memory chips are embedded on the motherboard.

Advantages of volatile memory:
• Fast speed
• Low power consumption
• Better system performance, as it increases speed

Disadvantages of volatile memory:
• Expensive
• Limited storage space
• Stores data temporarily

Advantages of non-volatile memory:
• More reliable
• Stores data permanently
• Inexpensive memory
• Helps in booting the operating system

Disadvantages of non-volatile memory:
• Slow speed
• Can only read data

Memory Decoding

How is Data Stored in Memory?

• Data is stored in memory in the form of bits.

• To store bits, we use Binary Cells, each of which can store 1 bit of
information.

• A collection of many bits (16, 32, etc.) is called a word.

• Read/write operations on memory occur one word at a time.


Internal Construction

Figure 2.1 Memory Cell

• A memory stores binary information in groups of bits called words.
A word in memory is an entity of bits that moves in and out of
storage as a unit.

• A binary cell is a storage unit that stores 1 bit of information.

If Read/Write == 0:
    Input data to SR latch
Else:
    Output data from SR latch


Decoding 4 x 4 RAM

Figure 2.2 Diagram of a 4 x 4 RAM

• There is a need for decoding circuits to select the memory word
specified by the input address.

• During the read operation, the four bits of the selected word go
through OR gates to the output terminals.

• During the write operation, the data available on the input lines
are transferred into the four binary cells of the selected word.

• A memory with 2^k words of n bits per word requires k address lines
that go into a k x 2^k decoder.
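The selection logic just described can be sketched as follows; this is an illustrative model of the 4 x 4 RAM in Figure 2.2, not a gate-level implementation (class and function names are our own):

```python
def decoder(address, k=2):
    """k x 2**k decoder: one-hot word-select lines for the given address."""
    return [1 if i == address else 0 for i in range(2 ** k)]

class RAM4x4:
    """Toy model of a 4 x 4 RAM: 4 words of 4 binary cells each."""

    def __init__(self):
        self.cells = [[0] * 4 for _ in range(4)]   # 16 binary cells

    def write(self, address, data):
        select = decoder(address)
        for word, sel in zip(self.cells, select):
            if sel:
                word[:] = data                     # cells of the selected word latch the input

    def read(self, address):
        select = decoder(address)
        out = [0, 0, 0, 0]
        # OR the column outputs: only the selected word drives the output lines
        for word, sel in zip(self.cells, select):
            for i in range(4):
                out[i] |= word[i] & sel
        return out

ram = RAM4x4()
ram.write(2, [1, 0, 1, 1])
print(ram.read(2))   # [1, 0, 1, 1]
```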
Coincident Decoding

Figure 2.3 Two-dimensional decoding structure for a 1K-word memory

• A decoder with k inputs and 2^k outputs requires 2^k AND gates
with k inputs per gate.

• Two decoders in a two-dimensional selection scheme can reduce the
number of inputs per gate.

• For a 1K-word memory, instead of using a single 10x1024 decoder, we
use two 5x32 decoders.

• Input: (0110010100)

• X = (01100)2, Y = (10100)2
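The two-decoder scheme amounts to cutting the address into two halves, one per decoder. A small sketch (the function name is our own):

```python
def coincident_decode(address, k=10):
    """Split a k-bit address into row (X) and column (Y) halves,
    as fed to the two 5x32 decoders of a 1K-word memory."""
    half = k // 2
    x = address >> half              # most significant half selects the row
    y = address & ((1 << half) - 1)  # least significant half selects the column
    return x, y

addr = 0b0110010100
x, y = coincident_decode(addr)
print(f"X = {x:05b} ({x}), Y = {y:05b} ({y})")  # X = 01100 (12), Y = 10100 (20)
```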

Refresh Cycles

• A refresh cycle is a process that happens in RAM to prevent data
loss.

• It involves reading and rewriting the contents of the memory.

• This is necessary because the electrical charge that represents data
in RAM can leak away over time.

• Without refresh cycles, this would turn all the ones in the RAM to
zeros, causing data loss.

• DRAM, used for the computer’s main memory, requires refresh
cycles; SRAM does not.

Address Multiplexing: SRAM vs DRAM

First, what are Address Multiplexing, SRAM, and DRAM?

Address Multiplexing

Address multiplexing permits you to use one tag (a multiplex
tag) to call multiple memory locations in the controller's address
area. You can have read and write access to the multiple memory
locations without having to define a tag for each individual address.
This is a very efficient method of processing large volumes of data.

SRAM vs. DRAM

Bit Storage:
- SRAM: stored in latches and flip-flops
- DRAM: stored in capacitors

Structure:
- SRAM: bistable circuits that consist of 4-6 transistors, latches,
and flip-flops
- DRAM: a single MOS transistor and a capacitor

Advantages:
- SRAM: easier to use and has shorter write cycles
- DRAM: reduced power consumption, higher storage density, lower
cost per bit, and larger storage capacity

Types:
- SRAM: Asynchronous SRAM, Synchronous Burst SRAM,
Pipeline-Burst SRAM
- DRAM: Single Data Rate DRAM, Double Data Rate (DDR) series
DRAM, Synchronous DRAM, Error Correction Code DRAM

Static Random-Access Memory (SRAM)

- type of random access memory that gives fast access to data but
is physically relatively large. Random access memory (RAM) is
computer main memory in which specific contents can be accessed
(read or written) directly by the central processing unit (CPU) in a
very short time regardless of the sequence (and hence location) in
which they were recorded. SRAM consists of flip-flops, bistable
circuits composed of four to six transistors. Once a flip-flop stores
a bit, it keeps that value until the opposite value is stored in it.
SRAM is used primarily for small amounts of memory called
registers in a computer’s CPU and for fast “cache” memory.

Types of SRAM:

• Asynchronous static RAM - Asynchronous RAM was the first type of
RAM and is usually used to offer an inexpensive memory or speed
upgrade to older machines.

• Synchronous burst static RAM - Synchronous burst static RAM is
expensive but very fast.

• Pipeline-burst static RAM (PBSRAM) - Pipeline-burst static RAM is
the most commonly used static RAM today. After the first round of
access, it is designed to allow subsequent access cycles to require
fewer machine cycles, allowing for a greater throughput of data.

Dynamic Random-Access Memory (DRAM)

- it is a type of Random Access Memory (RAM) that is commonly used
as the main memory in computers. DRAM is volatile storage, which
means the memory is cleared or reset when power is removed. When
a computer is turned off, the main memory is cleared and the
information is not retained. DRAM is essential because it allows
your computer to run efficiently by providing quick access to
essential data that your processor needs to operate at peak
performance. Without DRAM, your processor would have to look
through much slower storage mediums like hard drives or
solid-state drives every time it needed data. That would slow down
operations significantly.

Types of DRAM:

• SDR (Single Data Rate), DDR (Double Data Rate), DDR2 (Double Data
Rate 2), DDR3 (Double Data Rate 3), and DDR4 (Double Data Rate 4):
These types of DRAM are the most common. They all have their own
advantages and disadvantages depending on how much space they
take up, how fast they process data, and how much power they use.
SDR is the oldest type of DRAM and is not very popular anymore
because it does not support high-speed data transfer rates.

DDR is much faster than SDR but also uses more power. It's still
widely used in many electronic devices such as computers, laptops,
tablets, cell phones, etc. DDR2 is twice as fast as DDR but consumes
more power than its predecessor. DDR3 has higher speeds than both
DDR2 and DDR but uses less power than its predecessors. Finally,
DDR4 has higher speeds than all previous versions of DRAM but also
requires less power than them too.

• Synchronous DRAM or SDRAM: This type of DRAM works with a clock
signal that synchronizes it with the rest of the system components
in order to process data at a faster rate than asynchronous DRAM
can achieve on its own without a clock signal. It supports high
transfer rates, which makes it ideal for applications such as video
games that require real-time responses from the system components
in order to make sure that every action taken by players translates
into an immediate response from the system itself.

• ECC DRAM: This type of DRAM stands for Error Correction Code
which means that it checks for errors during data transfers in
order to ensure that no data is corrupted or lost during
transmission between two components within a system or when
sending/receiving information from external sources such as hard
drives or USB devices. This makes it ideal for mission-critical
applications where any kind of data corruption could cause
serious problems down the line due to lack of accuracy or integrity
when processing large amounts of information over long periods
of time.

Address Multiplexing in SRAM and DRAM

Address multiplexing is often used in both Static Random-Access
Memory (SRAM) and Dynamic Random-Access Memory (DRAM) to
optimize the use of address lines and reduce the complexity of memory
interfaces. However, the implementation and implications of address
multiplexing can differ between SRAM and DRAM.

In SRAM (Static Random-Access Memory), address multiplexing
is often not as common as in DRAM. SRAM typically has a simpler
memory interface compared to DRAM. Some SRAM designs may use
dedicated address lines for reading and writing operations, making
it less susceptible to the complexities of address multiplexing.
Figure 2.4 Address Multiplexing for a 64K DRAM

In DRAM (Dynamic Random-Access Memory), address
multiplexing is more prevalent, used to reduce the number of
address lines and simplify the memory interface. Due to their large
capacity, DRAMs use a two-dimensional array for address decoding,
often employing address multiplexing to reduce the number of pins
in the IC package. Address multiplexing involves applying the
address in two steps: first the row address and then the column
address, using the row address strobe (RAS) and column address
strobe (CAS) signals. This method enables a significant reduction in
the size of the package while allowing efficient memory operations.
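The two-step RAS/CAS addressing can be sketched as a toy model. This is a simplified illustration only (class and method names are our own; real DRAM signaling and timing are far more involved): a 64K device with a 16-bit address sends it over 8 shared pins as a row half and a column half.

```python
class MultiplexedDram:
    """Toy model of two-step addressing in a 64K DRAM: a 16-bit address
    is sent over 8 shared pins, the row half latched on RAS and the
    column half latched on CAS."""

    def __init__(self):
        self.row = None
        self.col = None

    def strobe_ras(self, pins):
        self.row = pins & 0xFF        # step 1: latch the row address

    def strobe_cas(self, pins):
        self.col = pins & 0xFF        # step 2: latch the column address

    def full_address(self):
        return (self.row << 8) | self.col

dram = MultiplexedDram()
address = 0xABCD
dram.strobe_ras(address >> 8)    # row address placed on the pins first
dram.strobe_cas(address & 0xFF)  # then the column address on the same pins
print(hex(dram.full_address()))  # 0xabcd
```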

Hamming Code

The Hamming Code method is a network technique designed by
R. W. Hamming for damage and error detection during data
transmission between multiple network channels. The Hamming Code
method is one of the most effective ways to detect single-data-bit
errors in the original data at the receiver end. It is used not only for
error detection but also for correcting errors in the data bits.

Important terms associated with the Hamming code:

• Redundant Bits - These are the extra binary bits added externally
to the original data bits to prevent damage to the transmitted
data; they are also needed to recover the original data.

• Parity Bits - The parity bit is a method of appending binary bits
to ensure that the total count of 1’s in the data is even or odd. It
is also used to detect errors on the receiver side and correct them.

Steps in Generating Hamming Code:

1. Find the lowest number r that satisfies the formula 2^r >= n + r + 1,
where n is the number of data bits and r is the number of parity
bits.

2. Assign each data bit to a position in the code word. The data bits
are placed in positions that are not powers of 2, in the same order
they appear in the original word. The remaining positions are
reserved for the parity bits.

3. Following the pattern (the parity number checked is the number
of bits that will be checked and skipped), calculate the value of
each parity bit in accordance with the type of parity generator
used.

4. Place the parity bits in the code word, generating the Hamming
code, which will be transmitted, received, and verified by the
receiver.
Example:
Hamming Code Generation
Generate a hamming code for the bit stream: 1001110. The system is
using an even parity generator.

1.
2^r >= n + r + 1 → 2^2 >= 7 + 2 + 1 → 4 ≱ 10

2^r >= n + r + 1 → 2^3 >= 7 + 3 + 1 → 8 ≱ 11

2^r >= n + r + 1 → 2^4 >= 7 + 4 + 1 → 16 ≥ 12 ✓

∴ r = 4. There are 4 parity bits required for this bit stream.

2.

1 2 3 4 5 6 7 8 9 10 11
P1 P2 1 P3 0 0 1 P4 1 1 0

3. For P1 (checks positions 3, 5, 7, 9, 11): bits are 1 0 1 1 0, so even
parity gives P1 = 1

For P2 (checks positions 3, 6, 7, 10, 11): bits are 1 0 1 1 0, so even
parity gives P2 = 1

For P3 (checks positions 5, 6, 7): bits are 0 0 1, so even parity
gives P3 = 1

For P4 (checks positions 9, 10, 11): bits are 1 1 0, so even parity
gives P4 = 0

4.

1 2 3 4 5 6 7 8 9 10 11
1 1 1 1 0 0 1 0 1 1 0

Hamming code to be transmitted: 11110010110
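The four generation steps above can be sketched as a small encoder (an illustrative implementation; `hamming_encode` is our own name):

```python
def hamming_encode(data_bits):
    """Even-parity Hamming encoder following the steps above.
    data_bits: string such as '1001110', position order preserved."""
    n = len(data_bits)
    r = 0
    while 2 ** r < n + r + 1:     # step 1: smallest r with 2^r >= n + r + 1
        r += 1
    length = n + r
    code = [0] * (length + 1)     # 1-indexed positions
    data = iter(data_bits)
    for pos in range(1, length + 1):
        if pos & (pos - 1) != 0:  # step 2: data bits fill non-power-of-2 slots
            code[pos] = int(next(data))
    for i in range(r):            # step 3: each parity bit covers positions
        p = 1 << i                #         whose index has bit i set
        ones = sum(code[pos] for pos in range(1, length + 1) if pos & p)
        code[p] = ones % 2        # even parity: make the covered count even
    return ''.join(str(b) for b in code[1:])   # step 4: assembled code word

print(hamming_encode('1001110'))  # 11110010110
```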

Steps in Error Detection and Correction for Hamming Code:

1. Assign each bit of the received bit stream to their expected position
during Hamming code generation.

2. Ignoring the bits assigned as parity bits, follow the pattern (the
parity number checked is the number of bits that will be checked
and skipped) and calculate the value of each parity bit in
accordance with the type of parity generator used.

3. Compare the parity bits generated in step 2 with the parity bits
from the received bit stream. If the bits do not match, there is an
error in the received code. Add the position numbers of the parity
bits that have an error; the resulting number is the position of the
error bit.

Error Detection and Correction of Hamming Code

Suppose the received Hamming code with even parity is
10110010110. Identify the position of the error bit.
1.

1 2 3 4 5 6 7 8 9 10 11
1 0 1 1 0 0 1 0 1 1 0
P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7

2.

For P1 (checks positions 3, 5, 7, 9, 11): bits are 1 0 1 1 0, so even
parity gives an expected P1 = 1

For P2 (checks positions 3, 6, 7, 10, 11): bits are 1 0 1 1 0, so even
parity gives an expected P2 = 1

For P3 (checks positions 5, 6, 7): bits are 0 0 1, so even parity
gives an expected P3 = 1

For P4 (checks positions 9, 10, 11): bits are 1 1 0, so even parity
gives an expected P4 = 0

3. By inspection, the received P2 bit differs from the expected parity
bit. Since P2 is at position 2, the incorrect bit is bit #2.
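The detection steps can likewise be sketched (illustrative; `hamming_check` is our own name). The returned syndrome is the sum of the failing parity positions, which is the position of the flipped bit:

```python
def hamming_check(code):
    """Locate a single-bit error in an even-parity Hamming code.
    code: string with position 1 first. Returns 0 if no error is
    detected, otherwise the 1-indexed position of the flipped bit."""
    bits = [0] + [int(c) for c in code]   # 1-indexed
    length = len(code)
    syndrome = 0
    p = 1
    while p <= length:
        # recount the positions covered by parity bit p (the parity bit included)
        ones = sum(bits[pos] for pos in range(1, length + 1) if pos & p)
        if ones % 2 != 0:                 # parity check fails
            syndrome += p                 # add the failing parity position
        p <<= 1
    return syndrome

print(hamming_check('10110010110'))  # 2  (bit #2 is in error)
print(hamming_check('11110010110'))  # 0  (no error)
```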
ROM (Read Only Memory)

- Read-only memory (ROM) is a class of storage medium used in
computers and other electronic devices.

- Data stored in ROM cannot be modified, or can be modified only
slowly or with difficulty.

- Read Only Memories (ROM) or Programmable Read Only Memories
(PROM) have: N input lines, M output lines, and 2^N decoded
minterms.

- Fixed AND array with 2^N outputs implementing all N-literal
minterms.

- Programmable OR array with M output lines to form up to M
sum-of-minterm expressions.

- A program for a ROM or PROM is simply a multiple-output truth
table.

- If an entry is 1, a connection is made from the corresponding
minterm to the corresponding output.

- If an entry is 0, no connection is made. (The device can be viewed
as a memory with the inputs as addresses of data (output values),
hence the ROM or PROM names.)

- ROM holds programs and data permanently, even when the computer
is switched off.

- Data can be read by the CPU in any order, so ROM is also direct
access.

- The contents of ROM are fixed at the time of manufacture.

- Stores a program called the bootstrap loader that helps start up
the computer.

- Access time is between 10 and 50 nanoseconds.

Important Differences Between ROM and RAM

• ROMs are “non-volatile”: data is preserved even without power. On
the other hand, RAM content disappears once power is lost.

• ROMs require special (and slower) techniques for writing, so they’re
considered to be “read-only” devices.

• Some newer types of ROMs do allow for easier writing, although the
speeds still don’t compare with regular RAMs.

• MP3 players, digital cameras and other toys use CompactFlash,
Secure Digital, or Memory Stick cards for non-volatile storage.

• Many devices allow you to upgrade programs stored in “flash
ROM.”

Read Only Memory

• N input bits

• 2^N words by M bits

• Implements M arbitrary functions of N variables

• Example, 8 words by 5 bits:

Figure 2.5 An 8-word x 5-bit ROM with 3 input lines (A, B, C) and 5
output lines (F0-F4)
Figure 2.6 Internal Structure of ROM

Sample Problems:

Specify a truth table for a ROM which implements:

1. F = AB + A’BC’
G = A’B’C + C’

H = AB’C’ + ABC’ + A’B’C
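One way to build the requested truth table is to enumerate all 2^3 minterms and evaluate each output; each 1 in a row corresponds to an OR-array connection in the ROM. A sketch (Python for illustration; the function name is our own):

```python
def rom_truth_table():
    """Truth table for F = AB + A'BC', G = A'B'C + C',
    H = AB'C' + ABC' + A'B'C, one row per minterm of (A, B, C)."""
    rows = []
    for m in range(8):
        a, b, c = (m >> 2) & 1, (m >> 1) & 1, m & 1   # A is the MSB
        f = (a & b) | ((1 - a) & b & (1 - c))
        g = ((1 - a) & (1 - b) & c) | (1 - c)
        h = (a & (1 - b) & (1 - c)) | (a & b & (1 - c)) | ((1 - a) & (1 - b) & c)
        rows.append((a, b, c, f, g, h))
    return rows

print("A B C | F G H")
for a, b, c, f, g, h in rom_truth_table():
    print(a, b, c, "|", f, g, h)
```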


Types of ROM:

1. Programmable Read Only Memory (PROM)

- Empty of data when manufactured

- May be permanently programmed by the user.

2. Erasable Programmable Read Only Memory (EPROM)

- Can be programmed, erased, and reprogrammed

- The EPROM chip has a small window on top allowing it to be
erased by shining ultraviolet light on it

- After reprogramming, the window is covered to prevent the new
contents from being erased

- Access time is around 45 - 90 nanoseconds

3. Electrically Erasable Programmable Read Only Memory (EEPROM)

- Reprogrammed electrically without using ultraviolet light

- Must be removed from the computer and placed in a special
machine to do this

- Access times between 45 and 200 nanoseconds

4. Flash ROM

- Similar to EEPROM; however, it can be reprogrammed while still
in the computer

- Easier to upgrade programs stored in Flash ROM

- Used to store programs in devices, e.g. modems

- Access time is around 45 - 90 nanoseconds


PROM
One step up from the masked ROM is the PROM (programmable
ROM), which is purchased in an unprogrammed state. If you were to
look at the contents of an unprogrammed PROM, the data is made up
entirely of 1's. The process of writing your data to the PROM involves a
special piece of equipment called a device programmer. The device
programmer writes data to the device one word at a time by applying
an electrical charge to the input pins of the chip. Once a PROM has
been programmed in this way, its contents can never be changed. If the
code or data stored in the PROM must be changed, the current device
must be discarded. As a result, PROMs are also known as one-time
programmable (OTP) devices.

PROM and EPROMs

- Programmable ROMs

- Build the array with transistors at every site

- Burn out fuses to disable unwanted transistors

- Electrically Programmable ROMs

- Use a floating gate to turn off unwanted transistors

- Examples: EPROM, EEPROM

Figure 2.7
Operation of EPROM
Development of the EPROM memory cell started with
investigation of faulty integrated circuits where the gate connections
of transistors had broken. Stored charge on these isolated gates
changed their properties. The EPROM was invented by Dov Frohman of
Intel in 1971, who was awarded US patent 3660819 in 1972. Each storage
location of an EPROM consists of a single field-effect transistor. Each
field-effect transistor consists of a channel in the semiconductor body
of the device. Source and drain contacts are made to regions at the
end of the channel. An insulating layer of oxide is grown over the
channel, then a conductive (silicon or aluminum) gate electrode is
deposited, and a further thick layer of oxide is deposited over the
gate electrode. The floating gate electrode has no connections to other
parts of the integrated circuit and is completely insulated by the
surrounding layers of oxide. A control gate electrode is deposited,
and further oxide covers it.

To retrieve data from the EPROM, the address represented by
the values at the address pins of the EPROM is decoded and used to
connect one word (usually an 8-bit byte) of storage to the output
buffer amplifiers. Each bit of the word is a 1 or 0, depending on
whether the storage transistor is switched on or off, conducting or
non-conducting. The switching state of the field-effect transistor is
controlled by the voltage on the control gate of the transistor. The
presence of a voltage on this gate creates a conductive channel in the
transistor, switching it on. In effect, the stored charge on the floating
gate allows the threshold voltage of the transistor to be
programmed.
EEPROM

EEPROMs are electrically erasable and programmable.
Internally, they are like EPROMs, but the erase operation is
accomplished electrically rather than by exposure to ultraviolet
light. Any byte within an EEPROM may be erased and rewritten.
Once written, the new data will remain in the device forever, or at
least until it is electrically erased. The primary tradeoff for this
improved functionality is higher cost, though write cycles are also
significantly longer than writes to a RAM. So you wouldn't want to
use an EEPROM for your main system memory.

Flash Memory
Stores information in an array of memory cells made from
floating-gate transistors. In traditional single-level cell (SLC) devices,
each cell stores only one bit of information. Some newer flash memory,
known as multi-level cell (MLC) devices, including triple-level cell (TLC)
devices, can store more than one bit per cell by choosing between
multiple levels of electrical charge to apply to the floating gates of
its cells. The floating gate may be conductive (typically polysilicon in
most kinds of flash memory) or non-conductive (as in SONOS flash
memory).

DRAM
Most computing systems use DRAM (Dynamic Random Access
Memory) as the technology of choice to implement main memory due
to DRAM’s higher density compared to SRAM (Static Random Access
Memory), and due to its lower latency and higher bandwidth
compared to nonvolatile memory technologies such as PCM (phase
change memory), Flash, and magnetic disks.

A DRAM cell is composed of an access transistor and a
capacitor. Data is stored in the capacitor as electrical charge, but
electrical charge leaks over time. Therefore, DRAM must be refreshed
periodically to preserve the stored data.

As the speed and size of DRAM devices continue to increase with
each new technology generation, the performance and power
overheads of refresh are increasing in significance.

DRAM Refresh: Status Quo


Asynchronous vs. Synchronous DRAMs

• Refresh Rate

In traditional “asynchronous” DRAM, there are two types of
devices, one with a standard refresh rate (15.6 us) and the other
with an extended refresh rate (125 us). In current SDRAM devices,
the required refresh rate only changes with temperature, regardless
of device organization. For example, all DDR3 devices require a
refresh rate of 7.8 us at the normal temperature range (0-85°C),
and 3.9 us at the extended temperature range (up to 95°C).

• Distributed and Burst Refresh.

In traditional “asynchronous” DRAM, the memory controller
can decide to complete all required refreshes in a burst or to
distribute the refreshes evenly over the retention time. In modern
DDRx devices, only the distributed refresh option is supported, to
keep refresh management simple. LPDDRx devices, on the other
hand, also support burst refresh, which can be used to meet the
deadlines of real-time applications.

• RAS-Only Refresh.

In traditional “asynchronous” DRAM, RAS-only refresh is
performed by asserting RAS with a row address to be refreshed,
while CAS remains de-asserted. The controller is responsible for
managing the rows to be refreshed. There is no equivalent command
in modern SDRAM. To accomplish something like RAS-only refresh,
one could issue an explicit activate command followed by a
precharge to the bank. As we show in later sections, this has higher
energy and performance costs. It would also place a higher
management burden on the memory controller.

• CAS-Before-RAS (CBR) Refresh.

In traditional “asynchronous” DRAM, CBR refresh starts by
first asserting CAS and then asserting only RAS. There is no
requirement of sending a row address, because a device has an
internal counter that increments with each CBR command. In
modern SDRAMs, a variation of CBR is adopted with two important
changes. First, both RAS and CAS are asserted simultaneously on the
clock edge, rather than one before the other. Second, instead of
internally refreshing only one row, SDRAM devices can refresh more
rows depending upon the total number of rows in a device.

• Hidden Refresh.

Refers to an immediate CBR command after a read or write
operation, performed by keeping CAS asserted while RAS is
de-asserted once and then asserted again. This means that the data
on the DQ lines is valid while performing the refresh. There is no
timing advantage compared to a read/write followed by an explicit
CBR command. Hidden refresh is implemented in “asynchronous”
DRAMs but not in SDRAMs.

SDRAM Refresh Modes:

• Auto-Refresh (AR):

In general, SR (Self-Refresh) is employed during idle


periods for power saving, while AR is utilized when the system is
busy. The transition from asynchronous to synchronous DRAM
devices brought changes to the refresh command interface and
protocols.

In SDRAM devices, AR commands in DDRx involve asserting
both row access strobe (RAS) and column access strobe (CAS)
signals, along with selecting the device via chip select. Each DRAM
device has an internal refresh counter; the memory controller
issues AR commands, and each command refreshes rows across all banks.
Normal memory operations resume after AR completion.

• Self-Refresh (SR):

The device internally generates refresh pulses using a
built-in analog timer. Like AR, all banks should be precharged
before entering SR. The device enters SR mode when the clock
enable (CKE) signal is sampled low with the command decoded as
refresh. Exiting SR requires a specified time delay, and before re-
entering SR mode, at least one AR must be issued.

Refresh Timings:

Modern DRAM devices have built-in refresh counters, requiring
the memory controller to issue refresh commands at appropriate
intervals. Each AR or Auto-Refresh command refreshes a specific
number of rows in each bank, ensuring each DRAM cell is refreshed
within its retention period.
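As a worked example, assuming the common DDRx figures of a 64 ms retention window and 8192 AR commands per window (typical values, but device-dependent), the controller's refresh interval tREFI works out as follows:

```python
# Typical DDRx figures (illustrative): every cell must be refreshed
# within a 64 ms retention window, and the controller spreads 8192
# auto-refresh (AR) commands evenly across that window.
RETENTION_MS = 64
AR_COMMANDS_PER_WINDOW = 8192

t_refi_us = RETENTION_MS * 1000 / AR_COMMANDS_PER_WINDOW
print(f"tREFI = {t_refi_us} us")  # 7.8125 us between AR commands

# A device with 65,536 rows per bank must refresh several rows per AR:
rows_per_bank = 65536
rows_per_ar = rows_per_bank // AR_COMMANDS_PER_WINDOW
print(f"rows refreshed per AR command: {rows_per_ar}")  # 8
```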

DRAM Retention Time:

DRAM cells lose charge over time due to various leakage
mechanisms, necessitating periodic refreshes to maintain data
integrity. Cells exhibit variable retention times, leading to an "inter-cell"
distribution of retention times. Another phenomenon, "intra-cell" variable
retention time, involves different meta-states with varying leakage
characteristics. Retention time is also highly sensitive to temperature,
with higher temperatures increasing leakage and shortening retention
time. DDRx devices adjust the refresh rate based on temperature, and
LPDDRx devices incorporate on-device temperature sensors for
adaptive refresh rates.
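A minimal sketch of temperature-compensated refresh, assuming the typical DDRx behavior of doubling the refresh rate above an 85 °C extended-temperature threshold (both figures are illustrative assumptions, not taken from a specific datasheet):

```python
def refresh_interval_us(temp_c, base_trefi_us=7.8125):
    """Halve the refresh interval (i.e., double the refresh rate) in the
    extended temperature range. The 85 C threshold and 2x factor are
    typical DDRx figures, assumed here for illustration."""
    return base_trefi_us / 2 if temp_c > 85 else base_trefi_us

print(refresh_interval_us(45))  # normal range: 7.8125 us
print(refresh_interval_us(95))  # hot device: 3.90625 us
```

LPDDRx devices with on-die temperature sensors can apply this kind of adjustment autonomously, as the text notes.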

III. Memory Hierarchy and Cache Memory

Memory Hierarchy

The memory hierarchy in computer systems is a vital component
for the performance of general-purpose computers, involving multiple
storage devices beyond main memory to improve speed. Modern
processors' performance is
closely tied to the memory hierarchy, employing a combination of
memory types to balance performance and cost. Hierarchical memory,
an approach in computer system design, uses different memory types
with varying speeds and capacities to achieve optimal performance.
The memory hierarchy aims to minimize access time by organizing
memory based on the locality of references principle, recognizing that
programs often access specific portions of their address space. This
hierarchy, comprising registers, cache, main memory, and secondary
storage, contributes to efficient and effective computer operation.
Figure 3.1 Memory Hierarchy

Memory Hierarchy Design is divided into 2 main types:

1. Inclusion (or Inclusive) Hierarchy:

An inclusion or inclusive memory hierarchy involves
replicating the contents of a lower memory level in higher levels. In
this design, data present in, for instance, the cache is also found in
the main memory. While this approach simplifies data
management and maintains consistency across levels, it can result
in higher memory usage and increased complexity in coherence
maintenance.

2. Non-Inclusion (or Exclusive) Hierarchy:

In a non-inclusion or exclusive memory hierarchy, lower-level
contents are not duplicated in higher levels, with each level
containing only a subset of data from the lower levels. This design
conserves space and can be memory-efficient but necessitates
additional mechanisms for managing data consistency and
coherence between different levels.
Two categories of memory based on their accessibility by the
processor:

- Internal Memory or Primary Memory:

• Main Memory (RAM): Volatile memory that is directly accessible by
the processor. It holds the currently executing programs and data.

• Cache Memory: Faster, smaller memory that temporarily stores
frequently accessed data to speed up CPU operations.

• CPU Registers: The fastest and smallest memory located within the
CPU itself, used for quick data access during processing.

- External Memory or Secondary Memory: Magnetic Disk, Optical
Disk, Magnetic Tape: Non-volatile peripheral storage devices that
are accessed by the processor through I/O (Input/Output) modules.
These are used for long-term storage and are typically slower than
internal memory.

Two forms of the designed memory hierarchy:

• Primary (Internal) Memory:

Components:

Registers: The fastest and smallest type of memory, located directly
within the CPU. Registers store small amounts of data for immediate
access by the processor during execution.

Cache Memory: A small-sized, high-speed memory that resides between
the main memory (RAM) and the CPU. It stores frequently accessed data
and instructions to reduce the time the CPU spends waiting for data.

Main Memory (RAM): Volatile memory that provides the working space
for the operating system, applications, and currently executing
processes. It is directly accessible by the CPU but is slower compared
to registers and cache.

Characteristics:

- Fast access times compared to secondary memory.

- Volatile nature, meaning data is lost when power is turned off.

- Limited in capacity compared to secondary memory.

Function:

- Storage of Active Programs and Data: Primary memory stores
programs and data actively being used or processed by the CPU.

- Immediate Access for CPU: Direct accessibility allows the CPU to
quickly retrieve and manipulate data during execution.

- Temporary Storage (RAM): RAM provides volatile temporary
storage for dynamic data manipulation during program
execution.

• Secondary (External) Memory:

Components:

Hard Disk Drives (HDDs) and Solid State Drives (SSDs): Common forms
of secondary storage that provide non-volatile, high-capacity storage
for long-term data retention. They are accessed by the CPU through I/O
(Input/Output) modules.

Optical Disks (e.g., DVDs, CDs): Another form of secondary storage,
typically used for data backup, software distribution, and archiving.
Characteristics:

- Slower access times compared to primary memory.

- Non-volatile, meaning data is retained even when power is turned
off.

- Greater storage capacity compared to primary memory.

Function:

- Secondary memory is used for long-term storage of data,
applications, and the operating system.

- It serves as a backup for data that is not actively being used by the
CPU.

- Data is transferred between secondary and primary memory as
needed during program execution.

Memory Hierarchy Levels

Level 0: Registers in CPU

Registers are the smallest and fastest type of memory, directly
integrated into the CPU. They store small amounts of data and
instructions that are actively used by the processor during its
operations.

Characteristics:

- Extremely fast access times.

- Limited in capacity due to their small size.

- Expensive to manufacture.
Level 1: Cache Memory

Cache memory is a small-sized, high-speed memory that sits
between the main memory (RAM) and the CPU. It serves as a buffer to
store frequently accessed data and instructions, reducing the time the
CPU spends waiting for information.

Characteristics:

- Faster access times compared to main memory.

- Relatively larger capacity than registers.

- More expensive than main memory.

Level 2: Main Memory

Main memory, or RAM (Random Access Memory), is the primary
volatile memory of a computer system. It holds the currently executing
programs and data that the CPU needs for immediate processing.

Characteristics:

- Slower access times compared to cache.

- Larger capacity compared to cache.

- Volatile, data is lost when power is turned off.

Level 3: Disk Cache

Disk cache is a small, high-speed buffer that sits between the
main memory and the storage devices (e.g., hard disk). It stores
frequently accessed disk data to improve overall system performance.
Characteristics:

- Faster access times compared to storage devices.

- Enhances data transfer speeds between main memory and storage.

Level 4: Magnetic Disk

Magnetic disks, such as hard disk drives (HDDs), are non-volatile
storage devices used for long-term data storage. They offer higher
capacity than main memory but at slower access speeds.

Characteristics:

- Slower access times compared to main memory.

- Non-volatile, retains data when powered off.

- Higher capacity and lower cost per byte.

Level 5: Optical Disk/Magnetic Tapes

Optical disks (e.g., DVDs, CDs) and magnetic tapes are external
storage devices used for archival and backup purposes. They have
large storage capacities but slower access times compared to
magnetic disks.

Characteristics:

- Slow access times compared to other levels.

- Non-volatile and suitable for long-term storage.

- Economical for storing large volumes of data.


Memory Hierarchical Design

Various factors, such as typical size, bandwidth, and access
time, shape the design at each level. The table below summarizes the
memory hierarchy design:
Level Name         Registers            Cache            Main Memory        Secondary Memory

Typical Size       < 1 KB               < 16 MB          < 16 GB            > 100 GB

Implementation     Customized           SRAM             DRAM               Magnetic
                   multiport                             (capacitor)        devices
                   (flip-flops)

Bandwidth (MB/s)   20,000 - 100,000     5,000 - 10,000   1,000 - 5,000      20 - 150

Access Time (ns)   0.25 - 0.5           0.5 - 2.5        80 - 250           5,000,000

Managed By         Compiler             Hardware         OS                 OS

Backed By          Cache                Main Memory      Secondary Memory   Compact Disk

Characteristics of Memory Hierarchy

The following are the main characteristics of memory hierarchy:

• Performance: Early computer systems were designed without a
memory hierarchy. As the speed gap between the CPU registers and
main memory grew, the large difference in access times lowered
overall system performance, so an enhancement was required. The
memory hierarchy model was introduced to close this gap and
improve system performance.

• Ability: The total quantity of data the memory hierarchy can store
is its ability (capacity); it grows as we move from the top to the
bottom of the hierarchy.

• Cost per bit: When we move from the bottom to the top of the
memory hierarchy, the cost of each bit increases, implying that
internal memory is more expensive than external memory.

• Access Time: In the memory hierarchy, the access time is the time
delay between a request to read or write and the moment the data
becomes available. Access time increases as we move from the top
to the bottom of the memory hierarchy.

• Capacity: It is the global volume of information the memory can
store. As we move from top to bottom in the hierarchy, the capacity
increases.

Advantages of Memory Hierarchy

Memory hierarchy is necessary. Here are a few advantages of
memory hierarchy:

• Memory distribution is easy and cost-effective.

• External destruction is removed.

• Data can be spread across the hierarchy.

• Allows for pre-paging and demand paging.

• Swapping will be a lot easier.

• Faster access to frequently used data

• Exploitation of locality

• Effective use of different memory types


• Improved system performance

• Dynamic adaptation to program behavior

A Typical Memory Hierarchy (With Two Levels of Cache)

Figure 3.2 Typical Memory Hierarchy (With Two Levels of Cache)

A typical memory hierarchy with two levels of cache includes the
following components:

Level 0: Registers

- Fastest and smallest memory directly integrated into the CPU.

- Holds small amounts of data and instructions for immediate
access.
Level 1 Cache (L1 Cache)

- First level of cache, located between the CPU and main memory
(RAM).

- Divided into instruction cache and data cache.

- Stores frequently used data and instructions for faster access.

Level 2 Cache (L2 Cache)

- Second level of cache, providing additional storage beyond L1
Cache.

- Offers a larger capacity compared to L1 Cache.

- Slower access times than L1 but faster than main memory.

Main Memory (RAM)

- Primary volatile memory for the computer system.

- Holds currently executing programs and data.

- Slower access times compared to cache but larger in capacity.

Secondary Storage (e.g., SSD or HDD)

- Non-volatile storage devices for long-term storage.

- Examples include Solid State Drives (SSDs) or Hard Disk Drives
(HDDs).

- Slower access times compared to RAM, higher capacity.

Tertiary Storage (e.g., Optical Disk, Magnetic Tape)

- Slower, high-capacity storage devices for archival and backup.


- Examples include Optical Disks (e.g., DVDs, CDs) and Magnetic
Tapes.

- Used for infrequently accessed data.

This hierarchy is designed to exploit the principle of locality,
ensuring that frequently accessed data is stored in the faster and
smaller cache levels (L1 and L2) closer to the CPU. As you move down the
hierarchy, access time and capacity increase while cost per byte
decreases, providing a balance between performance and storage capacity.
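This balance can be quantified with the average memory access time (AMAT). The sketch below uses illustrative latencies and miss rates, not figures from any particular CPU:

```python
def amat(l1_hit_ns, l1_miss, l2_hit_ns, l2_miss, mem_ns):
    """Average memory access time for a two-level cache hierarchy:
    every access pays the L1 hit time; L1 misses additionally pay the
    L2 hit time; L2 misses additionally pay the main-memory time."""
    return l1_hit_ns + l1_miss * (l2_hit_ns + l2_miss * mem_ns)

# Illustrative numbers: 1 ns L1 hit, 5% L1 miss rate, 5 ns L2 hit,
# 20% L2 miss rate, 100 ns main-memory access.
print(amat(1, 0.05, 5, 0.20, 100))  # 1 + 0.05*(5 + 0.20*100) = 2.25 ns
```

Even with a modest 95% L1 hit rate, the average access costs only a little more than an L1 hit, which is why small, fast caches pay off.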

Cache Memory

- small-sized

- volatile

- provides high-speed data access

- stores frequently used data

Figure 3.3 Cache Memory


How Cache Works

Cache Hit vs. Cache Miss

Figure 3.4 Cache Hit

• When the CPU must perform an operation and needs data, it checks
the cache first. If the CPU successfully finds the required data in
the cache, that is called a “cache hit”.

• But if the CPU fails to find the required data in the cache, it must
retrieve it from the main memory. This is called a “cache miss”. A
miss introduces a delay, because the CPU has to fetch the data from
a farther and slower memory level, which impacts overall system
speed.
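The hit/miss behavior above can be illustrated with a minimal direct-mapped cache model (the sizes and address layout are simplified assumptions):

```python
class DirectMappedCache:
    """Minimal direct-mapped cache to illustrate hits and misses.
    A block size of 4 words also demonstrates spatial locality: a miss
    loads the whole block, so neighbouring addresses then hit."""

    def __init__(self, num_lines=8, block_size=4):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines   # one tag per cache line

    def access(self, address):
        block = address // self.block_size
        index = block % self.num_lines   # which line the block maps to
        tag = block // self.num_lines    # identifies the block in that line
        if self.tags[index] == tag:
            return "hit"
        self.tags[index] = tag           # fetch block from main memory
        return "miss"

cache = DirectMappedCache()
print(cache.access(0))   # miss (cold cache)
print(cache.access(1))   # hit  (same block: spatial locality)
print(cache.access(0))   # hit  (re-use: temporal locality)
```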

Temporal Locality vs. Spatial Locality

- Temporal Locality: This principle suggests that if a memory
location is accessed, it is likely to be accessed again soon. Caching
mechanisms exploit this by keeping recently accessed data in
faster and smaller memory levels.

- Spatial Locality: This principle states that if a memory location
is accessed, nearby locations are also likely to be accessed soon.
Caching mechanisms take advantage of this by bringing in a block
of contiguous memory into a faster cache.
Levels of Cache

CPUs often have multiple cache levels: moving from L1 to L3, both
size and access time increase. These multiple levels of cache offer a balance
between speed, size, and cost, crucial for optimizing memory
performance.

Figure 3.5 Levels of Cache

L1 Cache

- Typically smaller, ranging from 16 KB to 128 KB

- Closest to the CPU cores

- Fastest, with low latency

L2 Cache

- Larger than L1, ranging from 128 KB to several megabytes

- Positioned between L1 and L3

- Slower than L1 but faster than L3 and the main memory

L3 Cache

- Larger than L2, often shared among multiple cores

- Shared among CPU cores, located further away from individual
cores
- Slower than L1 and L2 but faster than main memory

Benefits of Cache

- Faster Data Access - allows the processor to grab essential
information without having to search through the entire main
memory

- Reduced Latency - reduces the time it takes for the CPU to access
data

- Bandwidth Conservation - acts as a traffic manager, ensuring that
only essential data travels the high-speed lanes, leaving
more bandwidth for other critical tasks

- Enhanced CPU Utilization - reduces the idle time that is spent
waiting for data

- Improved Power Efficiency - minimizes unnecessary data transfers

- Responsiveness - ensures a seamless computing experience with
fast and responsive applications

IV. Refresh Cycles

Various Techniques to Manage Refresh Cycles

- Based on Row-Level Refresh

• Selective Refresh Architecture (SRA)

- It performs refresh operations at fine granularity and can
either select or skip refresh to a row.

- It can reduce refreshes if the controller has knowledge of
whether the data stored in the rows are going to be used in the
future.

- To implement SRA, add per-row flags in DRAM to indicate
whether a row should be refreshed or not.
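A minimal sketch of the per-row-flag idea (the flag array and row indices are illustrative; real hardware would store the flags inside the DRAM as described above):

```python
def selective_refresh(refresh_flags):
    """Selective Refresh Architecture sketch: refresh only the rows
    whose flag is set, and return the indices of the rows actually
    refreshed. The Boolean list stands in for per-row DRAM flags."""
    return [row for row, needed in enumerate(refresh_flags) if needed]

# Rows 1 and 3 hold live data; the remaining rows can skip refresh.
flags = [False, True, False, True, False]
print(selective_refresh(flags))  # [1, 3]
```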

- Retention Time Awareness

• Variable Refresh Period Architecture (VRA)

- The refresh interval for each row is chosen from a set of
periods.

- It reduces a significant number of refreshes by setting an
appropriate refresh period for each row.

- It ensures that only the rows with updated content are
refreshed, reducing unnecessary refresh cycles across the
entire device.
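A sketch of VRA's period selection, assuming an illustrative set of supported refresh periods (64/128/256 ms) and per-row retention measurements:

```python
def assign_refresh_period(row_retention_ms, periods_ms=(64, 128, 256)):
    """Variable Refresh Period sketch: pick, for each row, the longest
    supported period that is still shorter than the row's measured
    retention time. The period values are illustrative; 64 ms is the
    safe default for weak rows."""
    best = periods_ms[0]
    for p in periods_ms:
        if p < row_retention_ms:
            best = p
    return best

print(assign_refresh_period(100))  # 64  -> weak row keeps the default period
print(assign_refresh_period(300))  # 256 -> strong row needs 4x fewer refreshes
```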

- Refresh Scheduling Flexibility

• Elastic Refresh

- It relies on re-scheduling the refresh commands so that they
overlap with periods of DRAM inactivity.

- It postpones up to eight refresh commands during phases of
high memory demand, and then issues the pending
refreshes during idle memory phases at a faster rate, to
maintain the average refresh rate.

- This technique could be valuable in systems where memory
usage fluctuates, allowing for a more efficient allocation
of resources during peak demand while ensuring that the
memory remains refreshed and responsive.
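The postpone-then-catch-up behavior can be sketched as follows (the cap of eight pending refreshes follows the text; the tick-based interface is an illustrative simplification):

```python
class ElasticRefresh:
    """Elastic Refresh sketch: during busy phases, pending refresh
    commands are postponed (up to a cap of eight, as in the text);
    during idle phases the backlog is drained to restore the average
    refresh rate."""

    MAX_POSTPONED = 8

    def __init__(self):
        self.pending = 0

    def tick(self, memory_busy):
        """Called once per refresh interval; returns refreshes issued."""
        self.pending += 1                      # one refresh falls due
        if memory_busy and self.pending <= self.MAX_POSTPONED:
            return 0                           # postpone it
        issued = self.pending                  # idle (or cap hit): catch up
        self.pending = 0
        return issued

er = ElasticRefresh()
for _ in range(3):
    print(er.tick(memory_busy=True))   # 0, 0, 0 -- postponed while busy
print(er.tick(memory_busy=False))      # 4 -- backlog drained while idle
```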
• Adaptive Refresh

- This technique is used in computer memory systems to
dynamically adjust the refresh rate of memory cells based
on the accessed memory bandwidth.

- This approach aims to optimize memory performance by
determining the most suitable refresh granularity,
balancing between the normal refresh rate (1x) and a finer-
grained refresh rate (4x).

• Coordinated Refresh

- It focuses on both the performance and energy consumption of
refresh operations.

- This mechanism relies on the ability to re-schedule refresh
commands to overlap with periods of DRAM inactivity while
utilizing full flexibility of refresh commands as in Elastic
Refresh.

- It co-schedules the refresh commands and the low-power-mode
switching such that most of the refreshes are issued
energy-efficiently, in Self-Refresh (SR) mode.

- This technique helps enhance the energy efficiency of
memory systems by intelligently coordinating the timing of
refresh commands and low-power mode transitions,
prioritizing the energy-efficient SR mode for most refresh
operations.

V. Non-Volatile Memory Technologies

• A non-volatile memory is a type of memory that retains
information even after the power supply is removed. In magnetic
devices, for example, the stored data are represented by the
direction of magnetization. Non-volatile memory is typically used
for secondary storage or long-term persistent storage.

• A non-volatile memory enables digital computers to store
programs that will be needed again after the computer is turned
on. Before the power is turned off, the binary information from the
computer's RAM is transferred to the disk so that the information
will be retained.

Consider the desktops commonly used at home as an example:
they are equipped with DRAM, a type of volatile memory, serving as the
system's high-speed workspace for active processes. Simultaneously, it
also has a non-volatile memory which are storage devices like SSDs or
Hard Drives, responsible for persistent, long-term storage of the
operating system, applications, and user data, ensuring that
information is retained even when the computer is powered off.

NON-VOLATILE MEMORY VS. VOLATILE MEMORY

Although these two are complementary to each other, they are
two different types of computer memory, each serving a purpose in
storing and managing data. Their differences can be seen in
these aspects:

• Speed

- Volatile memory usually has faster read and write speeds
compared to NVMs.
Non-volatile memory operates at a slower pace since it must
retain as much information as needed, whereas volatile
memory discards data immediately when it is no longer
needed.

• Data Retention

- NVMs offer long-term storage for data, while volatile
memory holds temporary data essential for the operation of
the computer.

• Cost

- NVMs are usually cheaper than Volatile Memory.

The reason why volatile memory is more expensive than
NVM is that it involves more complex and costly
manufacturing since they are designed for rapid read and
write operations, with low latency and high bandwidth.

• Energy Consumption

- Many forms of volatile memory require repeated data
refreshes, which consume additional power.

Since volatile memory is faster and designed to handle a
lot of processing, it naturally consumes more power. Non-
volatile memory, on the other hand, is power-efficient since
it does not require a continuous supply of power to retain
its data.
Categories of NVMs

NVMs fall into two categories, which differ in cost, capacity, and
performance. An electrically addressed system is pricier but much
faster, while a mechanically addressed system has the advantage of
larger storage capacity.

• Mechanically Addressed System

In mechanically addressed systems, data is accessed or
modified using physical, mechanical components. An example of
mechanically addressed memory is a hard disk drive (HDD). In an
HDD, a mechanical arm positions read/write heads over specific
tracks on spinning disks to read or write data. The movement of
these physical components is part of the addressing mechanism.

- Magnetic storage devices

• Hard disks

• Magnetic Tapes

• Floppy Disks

• etc.

- Optical disks

- Punched tape and cards (early computer storage)

• Electrically Addressed System

In electrically addressed systems, data manipulation
occurs through electronic signals and electrical addressing
schemes. Random Access Memory (RAM), including both Dynamic
RAM (DRAM) and Static RAM (SRAM), is a common example of
electrically addressed memory. In RAM, data is accessed using
electronic signals without the need for physical movement. The
addressing of specific memory locations is achieved through
electrical signals.

- Read-only memory (ROM)

• Erasable programmable ROM [EPROM]

• Electrically erasable programmable ROM [EEPROM]

- Ferroelectric RAM [FRAM]

- Magnetoresistive RAM [MRAM]

- Phase-change Memory [PCM]

- Flash memory (e.g., NOR and NAND flash memory and solid-
state drives (SSD))

Ferroelectric RAM (FRAM)


Ferroelectric RAM (F-RAM or FeRAM) is a type of non-volatile
random-access memory that combines the fast read and write access
of dynamic RAM (DRAM) with the non-volatile capability of
ferroelectric materials.

It has distinct properties, such as extremely high endurance,
ultra-low power consumption, single-cycle write speeds, and gamma
radiation tolerance. It is built on ferroelectric technology, typically
using a thin film of ferroelectric material, often lead zirconate
titanate (PZT).

Ferroelectric materials have a unique property called
ferroelectricity. This property enables the material to exhibit
spontaneous polarization, meaning it can develop a net electrical
dipole moment even in the absence of an external electric field. We can
think of the two poles as a switch representing 0 and 1, a binary being
used by the FRAM to store data.

Magnetoresistive RAM (MRAM)


Magnetoresistive RAM (MRAM) stores data in magnetic storage
elements called magnetic tunnel junctions (MTJs). It combines the
high speed of static random-access memory (SRAM) and the high
density of dynamic RAM (DRAM), promising to significantly improve
electronic products by storing greater amounts of data, enabling
faster data access, and consuming less energy.

It also offers high endurance, ultra-low power consumption,
single-cycle write speeds, and resistance to high radiation and
extreme temperature conditions suitable for various applications,
including automotive, industrial, military, and space applications.

MRAM relies on the magnetic properties of materials for data
storage. It uses magnetic elements, typically made of ferromagnetic
materials, to represent binary data. Instead of the electric poles used
by FRAM, in MRAM the direction of magnetization in these elements
corresponds to the binary values of 0 and 1.

Phase-Change Memory (PCM)


Phase-change Memory (PCM) uses the unique properties of
chalcogenide materials to store data. Chalcogenide materials are
compounds containing sulfur, selenium, or tellurium, which can switch
between a crystalline and an amorphous phase under the influence of
thermal or electrical pulses.

Owing to this property, PCM can store data by altering the phase of
the chalcogenide material, providing high endurance, ultra-low
power consumption, and fast read and write speeds. PCM is made
using a germanium-antimony-tellurium (GST) alloy.
To write data to a PCM cell, an electrical current is applied to
the cell. The amount and duration of the current determine whether
the material in the cell transitions to the amorphous phase
(representing one binary state) or the crystalline phase (representing
the other binary state).
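A toy model of this write/read scheme (mapping 0 to the amorphous RESET state and 1 to the crystalline SET state is a common convention, assumed here for illustration; the pulse descriptions are simplified):

```python
def pcm_write(bit):
    """Phase-change memory write sketch: a short, high-current RESET
    pulse melts and quenches the cell into the amorphous phase, while
    a longer, moderate SET pulse recrystallizes it."""
    if bit == 0:
        return {"phase": "amorphous", "pulse": "short, high current (RESET)"}
    return {"phase": "crystalline", "pulse": "long, moderate current (SET)"}

def pcm_read(cell):
    """The amorphous phase has high resistance and the crystalline
    phase low resistance, so measuring resistance recovers the bit."""
    return 0 if cell["phase"] == "amorphous" else 1

cell = pcm_write(1)
print(cell["phase"])   # crystalline
print(pcm_read(cell))  # 1
```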
References:
Admin. (2022, October 26). Design and Characteristics of Memory Hierarchy
| GATE Notes. Retrieved from https://byjus.com/gate/design-and-
characteristics-of-memory-hierarchy-notes/

Ellero, F., Palese, G., Tomat, E., & Vatta, F. (2021). Computational complexity
analysis of hamming codes polynomial co-decoding. 2021
International Conference on Software, Telecommunications and
Computer Networks (SoftCOM).
https://doi.org/10.23919/softcom52868.2021.9559071

GeeksforGeeks. (2020a, April 28). Magnetic Tape memory.
https://www.geeksforgeeks.org/magnetic-tape-memory/

GeeksforGeeks. (2020b, May 3). DRAM full form.
https://www.geeksforgeeks.org/dram-full-form/

GeeksforGeeks. (2022a, March 25). What is Memory Decoding.
https://www.geeksforgeeks.org/what-is-memory-decoding/

GeeksforGeeks. (2022b, July 5). SRAM full form.
https://www.geeksforgeeks.org/sram-full-form/

GeeksforGeeks. (2023a, February 21). Difference between Volatile Memory
and Non Volatile Memory.
https://www.geeksforgeeks.org/difference-between-volatile-memory-
and-non-volatile-memory/
GeeksforGeeks. (2023b, May 13). Difference between SRAM and DRAM.
https://www.geeksforgeeks.org/difference-between-sram-and-dram/

Global Spec. (n.d.). SRAM Modules Selection Guide: Types, features,
applications - globalspec. Global Spec.
https://www.globalspec.com/learnmore/semiconductors/memory_
chips/sram_memory_modules

I. Bhati, M.-T. Chang, Z. Chishti, S.-L. Lu and B. Jacob, "DRAM Refresh
Mechanisms, Penalties, and Trade-Offs," in IEEE Transactions on
Mechanisms, Penalties, and Trade-Offs," in IEEE Transactions on
Computers, vol. 65, no. 1, pp. 108-121, 1 Jan. 2016, doi:
10.1109/TC.2015.2417540.

Introduction to Memory. (2021). YouTube. Retrieved December 28, 2023, from
https://www.youtube.com/watch?v=PujjqfUhtNo.

Kapoor, A. (2023, October 30). What is hamming code? technique to detect
and correct errors: Simplilearn. Simplilearn.com.
https://www.simplilearn.com/tutorials/networking-tutorial/what-is-
hamming-code-technique-to-detect-errors-correct-data

Lenovo. (n.d.). Dram: What is DRAM memory?: Understanding dynamic
random access memory. Dram: What is DRAM Memory? |
Understanding Dynamic Random Access Memory | Lenovo
Philippines. https://www.lenovo.com/ph/en/glossary/what-is-
dram/

Mano, M. M., & Ciletti, M. D. (2019). Digital Design. Pearson.

Rouse, M. (2023, March 15). What is a magnetic disk?. Techopedia.
https://www.techopedia.com/definition/8210/magnetic-disk

Siemens. (2007, January 31). How does address multiplexing work?. SIOS.
https://support.industry.siemens.com/cs/document/24509028/how-
does-address-multiplexing-work-?dti=0&lc=en-PH

Slideshare. (2014, September 19). Rom(Read only memory). PPT.
https://www.slideshare.net/rohitladdu/romread-only-
memory?fbclid=IwAR31RHUJ4LYrKD4sJOwo8o5akZwUVJO7mr
DaKBdYv4LPulrWmzY-fj9sQpM

Stokes, J. (n.d.). Ram Guide: Part I DRAM and SRAM Basics. Ars Technica.
https://archive.arstechnica.com/paedia/r/ram_guide/ram_guide.
part1-4.html

Zivanov, S. (2023, December 19). What is memory hierarchy? Retrieved from
https://phoenixnap.com/kb/memory-
hierarchy#:~:text=Memory%20hierarchy%20organizes%20memory%
20components,than%20less%20frequently%20used%20data./
