Unit 5 Memory System



Memory Concepts and Hierarchy – Memory Management – Cache Memories: Mapping and Replacement Techniques – Virtual Memory – DMA – I/O – Accessing I/O: Parallel and Serial Interface – Interrupt I/O – Interconnection Standards: USB, SATA
Memory Concepts
• The maximum size of memory that can be used in any computer is determined by the addressing scheme, i.e., the number of address bits.
• The processor uses the
- address lines to specify the memory location,
- data lines to transfer the data,
- control lines to carry the command indicating a Read or a Write operation and whether a byte or a word is to be transferred, and to provide the necessary timing information.
• Memory access time - the time that elapses between the initiation of an operation to transfer a word of data and the completion of that operation.
• Memory cycle time - the minimum time delay required between the initiation of two successive memory operations.
• A memory unit is called a random-access memory (RAM) if the access time to any location is the same, independent of the location's address.
Cache and Virtual Memory
• The processor of a computer can usually process instructions and data faster than they can be fetched from the main memory. Hence, the memory access time is the bottleneck in the system.
• One way to reduce the memory access time is to use a cache memory. This is a small, fast memory inserted between the main memory and the processor. It holds the currently active portions of a program and their data.
• Virtual memory: with this technique, only the active portions of a program are stored in the main memory, and the remainder is stored on the much larger secondary storage device.
• Sections of the program are transferred back and forth between the main memory and the secondary storage device.
Block Transfers
• Data move frequently between the main memory and the cache and
between the main memory and the disk. These transfers do not
occur one word at a time.
• Data are always transferred in contiguous blocks involving tens,
hundreds, or thousands of words.
• Data transfers between the main memory and high-speed devices
also involve large blocks of data.
• Hence, a critical parameter for the performance of the main
memory is its ability to read or write blocks of data at high speed.
Memory Hierarchy
• Ideally, memory would be fast, large, and inexpensive.
• Very fast memory can be implemented using static RAM chips.
• But, these chips are not suitable for implementing large memories,
because their basic cells are larger and consume more power than
dynamic RAM cells.
• A solution is provided by using secondary storage, mainly magnetic
disks, to provide the required memory space.
• Disks are available at a reasonable cost, and they are used
extensively in computer systems. However, they are much slower
than semiconductor memory units.
• A very large amount of cost-effective storage can be provided by
magnetic disks, and a large and considerably faster, yet affordable,
main memory can be built with dynamic RAM technology.
• The processor registers are at the top in terms of speed of access.
• At the next level of the hierarchy is a relatively small amount of
memory that can be implemented directly on the processor chip.
• This memory, called a processor cache, holds copies of the
instructions and data stored in a much larger memory that is provided
externally.
• There are often two or more levels of cache.
• A primary cache is always located on the processor chip. The primary cache is
referred to as the level 1 (L1) cache.
• A larger, and slower, secondary cache is placed between the primary cache
and the rest of the memory. It is referred to as the level 2 (L2) cache.
• Some computers have a level 3 (L3) cache of even larger size, in addition to
the L1 and L2 caches.
• The next level in the hierarchy is the main memory.
• Disk devices provide a very large amount of inexpensive memory, and
they are widely used as secondary storage in computer systems.
• They are very slow compared to the main memory.
• They represent the bottom level in the memory hierarchy.
CACHE MEMORIES: MAPPING AND REPLACEMENT
TECHNIQUES
• CACHE MEMORIES
• The cache is a small and very fast memory, interposed between the processor and the
main memory. Its purpose is to make the main memory appear to the processor to be
much faster than it actually is. The effectiveness of this approach is based on a property
of computer programs called locality of reference.
• Locality of Reference
• There are 2 types:
• Temporal
The recently executed instructions are likely to be executed again very soon.
• Spatial
Instructions with addresses close to those of recently executed instructions are also likely to be executed soon.
• The term cache block refers to a set of contiguous address locations of some size.
• When the processor issues a Read request, the contents of a block of
memory words containing the location specified are transferred into
the cache.
• The correspondence between the main memory blocks and those in
the cache is specified by a mapping function.
• When the cache is full and a memory word (instruction or data) that is
not in the cache is referenced, the cache control hardware must
decide which block should be removed to create space for the new
block that contains the referenced word.
• The collection of rules for making this decision constitutes the cache’s
replacement algorithm.
• Cache Hits - The cache control circuitry determines whether the requested
word currently exists in the cache. If it does, the Read or Write operation is
performed on the appropriate cache location.
• A Write operation is handled in one of two ways: 1) write-through protocol or 2) write-back protocol.
• Write-Through Protocol
• The cache location and the main memory location are updated simultaneously.
• Write-Back Protocol
• Update only the cache location and mark it with an associated flag bit called the dirty/modified bit.
• The word in memory is updated later, when the marked block is removed from the cache. This technique is known as the write-back, or copy-back, protocol.
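The two write policies can be contrasted with a toy cache model. This is a minimal sketch, not a real cache implementation; the dictionary-based structures and function names are illustrative only.

```python
# Toy model: one cached block at address 0x10, with a dirty bit.
memory = {0x10: 5}
cache = {0x10: {"data": 5, "dirty": False}}

def write_through(addr, value):
    # Update the cache location and the memory location simultaneously.
    cache[addr] = {"data": value, "dirty": False}
    memory[addr] = value

def write_back(addr, value):
    # Update only the cache location and mark the block dirty.
    cache[addr] = {"data": value, "dirty": True}

def evict(addr):
    # On eviction, a dirty block must be copied back to memory.
    block = cache.pop(addr)
    if block["dirty"]:
        memory[addr] = block["data"]

write_through(0x10, 7)
assert memory[0x10] == 7   # memory updated immediately

write_back(0x10, 9)
assert memory[0x10] == 7   # memory is stale until the block is evicted
evict(0x10)
assert memory[0x10] == 9   # copy-back happens at eviction time
```

The sketch shows the essential trade-off: write-through keeps memory always consistent at the cost of a memory access per write, while write-back defers the memory update until eviction.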
• Cache Misses - A Read operation for a word that is not in the cache
constitutes a Read miss. It causes the block of words containing the
requested word to be copied from the main memory into the cache.
• During a Read operation
• If the requested word is not currently in the cache, a read miss occurs.
• To reduce the read-miss penalty, the load-through (early restart) approach can be used.
• Load-Through Protocol
• The block of words that contains the requested word is copied from the memory into the cache.
• The requested word is forwarded to the processor as soon as it is read from the main memory, without waiting for the entire block to be loaded into the cache.

• During a Write operation
• If the requested word is not in the cache, a write miss occurs.
• If the write-through protocol is used, the information is written directly into the main memory.
• If the write-back protocol is used, the block containing the addressed word is first brought into the cache, and then the desired word in the cache is overwritten with the new information.
LATENCY & BANDWIDTH
• A good indication of performance is given by two parameters: 1) latency and 2) bandwidth.
• Latency
• It refers to the amount of time it takes to transfer a word of data to or from the memory.
• For a transfer of a single word, the latency provides the complete indication of memory performance.
• For a block transfer, the latency denotes the time it takes to transfer the first word of data.
• Bandwidth
• It is defined as the number of bits or bytes that can be transferred in one second.
• Bandwidth mainly depends on the speed of access to the stored data and the number of bits that can be accessed in parallel.
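A quick worked example of how the two parameters combine for a block transfer. The numbers here are illustrative, not from the text: the first word arrives after the latency, and each remaining word follows at the bandwidth-limited rate.

```python
# Illustrative values (assumptions, not from the slides):
latency_s = 40e-9        # time until the first word of the block arrives
bandwidth_wps = 250e6    # words per second (e.g. 4-byte words at 1 GB/s)
block_words = 16

# First word: latency; remaining 15 words: bandwidth-limited.
transfer_time = latency_s + (block_words - 1) / bandwidth_wps
print(f"{transfer_time * 1e9:.0f} ns")  # 40 ns + 15 * 4 ns = 100 ns
```

This also shows why block size matters: for large blocks the bandwidth term dominates, while for a single word the latency is the whole story.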
MAPPING-FUNCTION
• Direct Mapping
• Associative Mapping
• Set-Associative Mapping
• DIRECT MAPPING
• The block-j of the main-memory maps onto block-j modulo-128 of the cache (Figure
8.16).
• When memory blocks 0, 128, and 256 are loaded into the cache, they are all stored in cache block 0.
• Similarly, memory blocks 1, 129, and 257 are stored in cache block 1.
• Contention may arise
• when the cache is full, or
• when more than one memory block is mapped onto a given cache-block position.
The contention is resolved by allowing the new block to overwrite the currently resident block.
The memory address determines the placement of a block in the cache.
• The memory address is divided into 3 fields:
• Low-order 4-bit field
• Selects one of the 16 words in a block.
• 7-bit cache-block field
• These 7 bits determine the cache position in which the new block must be stored.
• 5-bit tag field
• The high-order 5 bits of the memory address of the block are stored in the 5 tag bits associated with the cache location.
• As execution proceeds,
• 5-bit tag field of memory-address is compared with tag-bits associated with
cache-location.
• If they match, then the desired word is in that block of the cache.
• Otherwise, the block containing the required word must first be read from the memory and loaded into the cache.
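The field decomposition described above (4-bit word, 7-bit block, 5-bit tag, for a 16-bit address) can be sketched directly with bit operations; the helper name is illustrative.

```python
def split_address(addr):
    # 16-bit address layout: 5-bit tag | 7-bit cache block | 4-bit word.
    word = addr & 0xF            # selects one of 16 words in the block
    block = (addr >> 4) & 0x7F   # cache position (block j mod 128)
    tag = (addr >> 11) & 0x1F    # stored in the cache location's tag bits
    return tag, block, word

# Memory block j maps onto cache block j mod 128, so memory blocks
# 0 and 128 both land in cache block 0 but carry different tags.
assert split_address(0 << 4)[1] == split_address(128 << 4)[1] == 0
assert split_address(0 << 4)[0] != split_address(128 << 4)[0]
```

On each access, the hardware compares the 5-bit tag of the address against the tag stored with the selected cache block; a match means a hit.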
ASSOCIATIVE MAPPING
• The memory block can be placed into any cache-block position (Figure 8.17).
• 12 tag bits are required to identify a memory block when it is resident in the cache.
• The tag bits of an address received from the processor are compared to the tag bits of each block of the cache to see if the desired block is present. This is called the associative-mapping technique.
• It gives complete freedom in choosing the cache location.
• A new block that has to be brought into the cache has to replace an existing block only if the cache is full.
• The cache control hardware has to determine whether a given block is in the cache by searching all the tags.
• Advantage: It is more flexible than the direct-mapping technique.
• Disadvantage: Its cost is high, because all the tags must be searched in parallel.
SET-ASSOCIATIVE MAPPING
• It is the combination of direct and associative mapping. (Figure 8.18).
• The blocks of the cache are grouped into sets.
• The mapping allows a block of the main-memory to reside in any block of the
specified set.
• The cache has 2 blocks per set, so memory blocks 0, 64, 128, …, 4032 map into cache set 0.
• A block can occupy either of the two block positions within the set.
• 6 bit set field
• Determines which set of cache contains the desired block.
• 6 bit tag field
• The tag field of the address is compared to the tags of the two blocks of the set.
• This comparison is done to check if the desired block is present.
• A cache that contains 1 block per set is a direct-mapped cache.
• A cache that has k blocks per set is called a k-way set-associative cache.
• Each block contains a control bit called a valid bit.
• The valid bit indicates whether the block contains valid data.
• The dirty bit indicates whether the block has been modified during its cache residency.
• Valid bit = 0 when power is initially applied to the system.
• Valid bit = 1 when the block is loaded from the main memory for the first time.
• If a main memory block is updated by a source that bypasses the cache, and a copy of that block already exists in the cache, the valid bit of the cache block is cleared to 0.
• Keeping the copies of data seen by the processor and the DMA consistent is known as the cache-coherence problem.
• Advantages:
• Contention problem of direct mapping is solved by having few choices for
block placement.
• The hardware cost is decreased by reducing the size of associative search.
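A minimal sketch of a 2-way set-associative lookup, using the 6-bit set and 6-bit tag fields described above plus a 4-bit word offset (the 4-bit offset is carried over from the direct-mapping slide and is an assumption here; the data structures are illustrative):

```python
def lookup(cache_sets, addr):
    # 16-bit address layout assumed: 6-bit tag | 6-bit set | 4-bit word.
    set_index = (addr >> 4) & 0x3F
    tag = (addr >> 10) & 0x3F
    # Compare the tag against both blocks of the selected set.
    for block in cache_sets[set_index]:
        if block["valid"] and block["tag"] == tag:
            return True   # hit
    return False          # miss

# 64 sets of 2 blocks each (128 cache blocks in total).
sets = [[{"valid": False, "tag": 0}, {"valid": False, "tag": 0}]
        for _ in range(64)]
sets[0][0] = {"valid": True, "tag": 1}   # memory block 64 cached in set 0

assert lookup(sets, 64 << 4)   # block 64 -> set 0, tag 1: hit
assert not lookup(sets, 0)     # block 0 -> set 0, tag 0, not valid: miss
```

The associative search is now limited to the two tags of one set, which is the hardware-cost saving the slide refers to.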
Replacement Algorithms
• When a new block is to be brought into the cache and all the
positions that it may occupy are full, the cache controller must decide
which of the old blocks to overwrite.
• This is an important issue, because the decision can be a strong
determining factor in system performance.
• In general, the objective is to keep blocks in the cache that are likely
to be referenced in the near future.
• When a block is to be overwritten, it is sensible to overwrite the one
that has gone the longest time without being referenced. This block is
called the least recently used (LRU) block, and the technique is called
the LRU replacement algorithm.
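The LRU policy above can be sketched with Python's OrderedDict tracking the order of use; the class and method names are illustrative, not a hardware description.

```python
from collections import OrderedDict

class LRUCache:
    """Tracks block use order; evicts the least recently used block."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # mark most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)     # evict the LRU block
        self.blocks[block_id] = True
        return "miss"

c = LRUCache(2)
assert [c.access(b) for b in [1, 2, 1, 3]] == ["miss", "miss", "hit", "miss"]
assert 2 not in c.blocks   # block 2 had gone longest without a reference
```

In real caches this ordering is maintained per set with a few control bits rather than a full list, but the replacement decision is the same.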
Virtual Memory
• If a program does not completely fit into the main memory, the parts
of it not currently being executed are stored on a secondary storage
device, typically a magnetic disk.
• As these parts are needed for execution, they must first be brought into the main memory, possibly replacing other parts that are already in the memory.
• These actions are performed automatically by the operating system,
using a scheme known as virtual memory.
• The binary addresses that the processor issues for either instructions or data are called virtual or logical addresses.
• A special hardware unit, called the Memory Management Unit
(MMU), keeps track of which parts of the virtual address space are in
the physical memory.
• When the desired data or instructions are in the main memory, the
MMU translates the virtual address into the corresponding physical
address.
• If the data are not in the main memory, the MMU causes the
operating system to transfer the data from the disk to the memory.
• Such transfers are performed using the DMA scheme.
Address Translation
• A simple method for translating virtual addresses into physical
addresses is to assume that all programs and data are composed of
fixed-length units called pages.
• Each page consists of a block of words that occupy contiguous
locations in the main memory.
• Each virtual address consists of a virtual page number (high-order bits) followed by an offset (low-order bits) that specifies the location of a particular byte (or word) within a page.
• Information about the main memory location of each page is kept in a
page table - includes the main memory address where the page is
stored and the current status of the page.
• An area in the main memory that can hold one page is called a page
frame.
• The starting address of the page table is kept in a page table base
register.
• By adding the virtual page number to the contents of this register, the
address of the corresponding entry in the page table is obtained.
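The translation steps above can be sketched as follows, assuming a 4 KB page size and a dictionary standing in for the page table (both are illustrative choices, not from the slides):

```python
PAGE_SIZE = 4096            # assumed page size; offset = low-order 12 bits
page_table = {0: 5, 1: 9}   # virtual page number -> page frame number

def translate(vaddr):
    vpn = vaddr // PAGE_SIZE       # high-order bits: virtual page number
    offset = vaddr % PAGE_SIZE     # low-order bits pass through unchanged
    frame = page_table[vpn]        # page-table entry gives the page frame
    return frame * PAGE_SIZE + offset

assert translate(0x1234) == 9 * 4096 + 0x234   # page 1, offset 0x234
```

A lookup that finds no entry (a KeyError here) corresponds to the page-fault case handled later in this section.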
Translation Lookaside Buffer
• The page table information is used by the MMU for every read and
write access.
• The portion maintained within the MMU consists of the entries
corresponding to the most recently accessed pages. They are stored
in a small table, usually called the Translation Lookaside Buffer (TLB).
• The TLB functions as a cache for the page table in the main memory.
Each entry in the TLB includes a copy of the information in the
corresponding entry in the page table.
• Figure 8.26 shows a possible organization of a TLB that uses the
associative-mapping technique. Set-associative mapped TLBs are also
found in commercial products.
Address translation proceeds as follows.
• Given a virtual address, the MMU looks in the TLB for the referenced
page. If the page table entry for this page is found in the TLB, the
physical address is obtained immediately.
• If there is a miss in the TLB, then the required entry is obtained from
the page table in the main memory and the TLB is updated.
Page Faults
• When a program generates an access request to a page that is not in the main
memory, a page fault is said to have occurred.
• The entire page must be brought from the disk into the memory before access
can proceed. When it detects a page fault, the MMU asks the operating system
to intervene by raising an exception.
• Processing of the program that generated the page fault is interrupted, and
control is transferred to the operating system. The operating system copies the
requested page from the disk into the main memory.
• If a new page is brought from the disk when the main memory is full, it must
replace one of the resident pages. The problem of choosing which page to
remove is just as critical as it is in a cache.
• Concepts similar to the LRU replacement algorithm can be applied to page
replacement.
Direct Memory Access
• Blocks of data are often transferred between the main memory and I/O
devices such as disks.
• Direct memory access (DMA) is a technique for controlling such transfers without frequent, program-controlled intervention by the processor.
• Data are transferred from an I/O device to the memory by first reading them
from the I/O device using an instruction such as
Load R2, DATAIN
which loads the data into a processor register.
• Then, the data read are stored into a memory location.
• An instruction to transfer input or output data is executed only after the processor determines that the I/O device is ready.
• An alternative approach is used to transfer blocks of data directly
between the main memory and I/O devices, such as disks.
• A special control unit is provided to manage the transfer, without
continuous intervention by the processor.
• This approach is called direct memory access, or DMA.
• Figure 8.12 shows an example of the DMA controller registers that are
accessed by the processor to initiate data transfer operations.
• Two registers are used for storing the starting address and the word
count.
• The third register contains status and control flags.
• The R/W bit determines the direction of the transfer.
• When this bit is set to 1 by a program instruction, the controller
performs a Read operation, that is, it transfers data from the memory
to the I/O device.
• Otherwise, it performs a Write operation. Additional information is
also transferred as may be required by the I/O device.
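The register model above can be mimicked with a toy simulation. This is a sketch only: the function name and data structures are illustrative, not a real controller interface. Following the text, R/W = 1 means a Read operation, transferring data from the memory to the I/O device.

```python
# Toy DMA model: 'memory' is main memory, 'device_buffer' stands in
# for an I/O device's data buffer.
memory = list(range(100))
device_buffer = []

def dma_transfer(start_addr, word_count, rw):
    """Simulate one DMA block transfer.
    rw = 1: Read operation (memory -> I/O device);
    rw = 0: Write operation (I/O device -> memory)."""
    for i in range(word_count):
        if rw:
            device_buffer.append(memory[start_addr + i])
        else:
            memory[start_addr + i] = device_buffer.pop(0)

# The processor programs the starting address, word count, and R/W bit;
# the controller then completes the block without further intervention.
dma_transfer(start_addr=10, word_count=4, rw=1)
assert device_buffer == [10, 11, 12, 13]
```

The point of the sketch is the division of labor: the processor only writes three registers, and the whole block moves without executing a Load/Store pair per word.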
Accessing I/O Devices
• The components of a computer system communicate with each other
through an interconnection network.
• The interconnection network consists of circuits needed to transfer
information between the processor, the memory unit, and a number
of I/O devices.
• When the I/O devices and the memory share the same address space, the arrangement is called memory-mapped I/O.
• With memory-mapped I/O, any machine instruction that can access
memory can be used to transfer data to or from an I/O device.
• For example, if DATAIN is the address of a register in an input device,
the instruction
Load R2, DATAIN
reads the data from the DATAIN register and loads them into
processor register R2. Similarly, the instruction
Store R2, DATAOUT
sends the contents of register R2 to location DATAOUT, which is a
register in an output device.
I/O Device Interface
• An I/O device is connected to the interconnection network by using a
circuit, called the device interface.
• The interface includes some registers that can be accessed by the
processor.
• One register may serve as a buffer for data transfers, another may hold
information about the current status of the device, and yet another may
store the information that controls the operational behavior of the
device.
• These data, status, and control registers are accessed by program
instructions.
• Typical transfers of information are between I/O registers and the
registers in the processor.
• Program-Controlled I/O
• Consider a task that reads characters typed on a keyboard, stores these data in the memory, and displays the same characters on a display screen.
• A simple way of implementing this task is to write a program that
performs all functions needed to realize the desired action. This method is
known as program-controlled I/O.
• In addition to transferring each character from the keyboard into the
memory, and then to the display, it is necessary to ensure that this
happens at the right time.
• The difference in speed between the processor and I/O devices creates
the need for mechanisms to synchronize the transfer of data between
them.
• One solution to this problem involves a signaling protocol.
• On output, the processor sends the first character and then waits for
a signal from the display that the next character can be sent. It then
sends the second character, and so on.
• An input character is obtained from the keyboard in a similar way. The
processor waits for a signal indicating that a key has been pressed.
Then the processor proceeds to read that code.
• The keyboard includes a circuit that responds to a key being pressed.
• Let KBD_DATA be the address label of an 8-bit register that holds the generated character.
• KBD_STATUS - an 8-bit status register for the keyboard.
• Its status flag KIN is used to determine when a character code has been placed in KBD_DATA.
• DISP_DATA - an 8-bit register used to receive characters from the processor.
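The polling scheme built on KBD_DATA, KBD_STATUS, and DISP_DATA can be sketched as below. The registers are simulated with a dictionary, and KIN is assumed to be bit 1 of KBD_STATUS; the bit position and helper names are illustrative, not specified in the text.

```python
# Simulated memory-mapped registers; KIN assumed at bit 1 of KBD_STATUS.
KIN = 0b10
regs = {"KBD_STATUS": 0, "KBD_DATA": 0, "DISP_DATA": 0}

def key_pressed(char):
    # The keyboard circuit places the character code in KBD_DATA
    # and sets the KIN status flag.
    regs["KBD_DATA"] = ord(char)
    regs["KBD_STATUS"] |= KIN

def read_char():
    # Program-controlled I/O: busy-wait until KIN = 1, then read.
    while not (regs["KBD_STATUS"] & KIN):
        pass
    regs["KBD_STATUS"] &= ~KIN   # reading the data clears the flag
    return regs["KBD_DATA"]

key_pressed("A")
regs["DISP_DATA"] = read_char()   # echo the character to the display
assert regs["DISP_DATA"] == ord("A")
```

The busy-wait loop is exactly the synchronization cost that interrupt-driven I/O and DMA, discussed elsewhere in this unit, are designed to avoid.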
Parallel I/O Interface
• Embedded system applications require considerable flexibility in
input/output interfaces.
• For example, in a microwave oven, a sensor is needed to generate a signal with the value 1 when the door is open.
• This signal is sent to the microcontroller on one of the pins of an input interface.
• The same is true for the keys on the microwave's front panel. Each of these simple devices produces one bit of information.
• Each parallel port in the figure has an associated eight-bit data direction register, which can be used to configure individual data lines as either input or output.
• Figure illustrates the bidirectional control for one bit in port A.
• Port pin PAi is treated as an input if the data direction flip-flop
contains a 0.
• The port pin serves as an output if the data direction flip-flop is set to
1.
• The figure shows only the part of the interface that controls the direction of data transfer.
• A versatile parallel interface may include two possibilities: one where
input data are read directly from the pins, and the other where the
input data are stored in a register as in the interface.
• The status register, PSTAT, contains the status flags.
• The PASIN flag is set to 1 when there are new data on port A.
• It is cleared to 0 when the processor accepts the data by reading the PAIN register.
• The PASOUT flag is set to 1 when the data in register PAOUT are accepted by the
connected device, to indicate that the processor may now load new data into
PAOUT.
• The PASOUT flag is cleared to 0 when the processor writes data into PAOUT.
• The flags PBSIN and PBSOUT perform the same function for port B.
• The status register also contains four interrupt flags. An interrupt flag, such as IAIN,
is set to 1 when that interrupt is enabled and the corresponding I/O action occurs.
• The interrupt-enable bits are held in control register PCONT.
• An enable bit is set to 1 to enable the corresponding interrupt. For example, if
ENAIN=1 and PASIN=1, then the interrupt flag IAIN is set to 1 and an interrupt
request is raised.
Thus, IAIN = ENAIN · PASIN
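The relation IAIN = ENAIN · PASIN is simply a logical AND of the interrupt-enable bit and the status flag, which can be checked directly (the function name is illustrative):

```python
def iain(enain, pasin):
    # IAIN = ENAIN . PASIN: the interrupt flag is the AND of the
    # interrupt-enable bit and the port-A input status flag.
    return enain & pasin

assert iain(1, 1) == 1   # enabled and new data on port A: request raised
assert iain(0, 1) == 0   # interrupt disabled: no request
assert iain(1, 0) == 0   # no new data: no request
```

The same AND structure applies to the other three interrupt flags and their enable bits in PCONT.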
• Port A has two control lines, CAIN and CAOUT, which can be used to provide
an automatic signaling mechanism between the interface and the attached
device.
• When the interface circuit sees CAIN = 1, it sets the status bit PASIN to 1.
Later, this bit is cleared to 0 when the processor reads the input data.
• This action also causes the interface to send a pulse on the CAOUT line to inform the device that it may send new data to the interface.
• For an output transfer, the processor writes the data into the PAOUT register.
• The interface responds by clearing the PASOUT bit to 0 and sending a pulse
on the CAOUT line to inform the device that new data are available.
• When the device accepts the data, it sends a pulse on the CAIN line, which
in turn sets PASOUT to 1.
• Control register bits PAREG and PBREG are used to select the mode of
operation of inputs to ports A and B, respectively.
Serial I/O Interface
• The serial interface provides the UART (Universal Asynchronous
Receiver/Transmitter) capability to transfer data.
• Double buffering is used in both the transmit and receive paths.
• Such buffering is needed to handle bursts in I/O transfers correctly.
• Input data are read from the 8-bit Receive buffer, and output data are loaded into the 8-bit
Transmit buffer.
• The status register, SSTAT, provides information about the current status of the receive and
transmit units.
• Bit SSTAT0 is set to 1 when there are valid data in the receive buffer.
• Bit SSTAT1 is set to 1 when the transmit buffer is empty and can be loaded with new data.
• Bit SSTAT2 is set to 1 if an error occurs during the receive process.
• For example, an error occurs if the character in the receive buffer is overwritten by a
subsequently received character before the first character is read by the processor. The status
register also contains the interrupt flags.
• Bit SSTAT4 is set to 1 when the receive buffer becomes full and the receiver interrupt is
enabled.
• Similarly, SSTAT5 is set to 1 when the transmit buffer becomes empty and the transmitter
interrupt is enabled.
• The serial interface raises an interrupt if either SSTAT4 or SSTAT5 is equal to 1. It also raises an
interrupt if SSTAT6 = 1, which occurs if SSTAT2 = 1 and the error condition interrupt is enabled.
• The control register, SCONT, is used to hold the interrupt-enable bits.
• If SCONT0 = 0, then the transmit clock is the same as the system
(processor) clock. If SCONT0 = 1, then a lower frequency transmit
clock is obtained using a clock-dividing circuit.
• The last register in the serial interface is the clock-divisor register, DIV, which controls the clock-dividing circuit.
• A counter is decremented by the system clock; when the count reaches zero, the counter is reloaded using the value in the DIV register, producing the lower-frequency transmit clock.
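A sketch of how a DIV value might be chosen for a target bit rate. This assumes the counter reloads from DIV after reaching zero, so the transmit clock runs at system_clock / (DIV + 1); the exact reload rule and the 50 MHz / 9600 baud figures are assumptions for illustration.

```python
def transmit_clock(system_clock_hz, div):
    # Assumed divider behavior: count down from div to 0, then reload,
    # giving one output cycle every (div + 1) system clock cycles.
    return system_clock_hz / (div + 1)

# Deriving a 9600-baud transmit clock from a 50 MHz system clock:
div = round(50_000_000 / 9600) - 1
rate = transmit_clock(50_000_000, div)
assert abs(rate - 9600) / 9600 < 0.01   # within 1% of the target rate
```

Because DIV is an integer, the achieved rate is only approximate; serial links tolerate small rate errors, which is why this scheme works in practice.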
Interconnection Standards
• A typical desktop or notebook computer has several ports that can be
used to connect I/O devices, such as a mouse, a memory key, or a disk
drive.
• Standard interfaces have been developed to enable I/O devices to use
interfaces that are independent of any particular processor.
• For example, a memory key that has a USB connector can be used
with any computer that has a USB port.
Universal Serial Bus (USB)
• The Universal Serial Bus (USB) is the most widely used interconnection
standard.
• A large variety of devices are available with a USB connector, including
mice, memory keys, disk drives, printers, cameras, and many more.
• The commercial success of the USB is due to its simplicity and low cost.
• The original USB specification supports two speeds of operation, called
low-speed (1.5 Megabits/s) and full-speed (12 Megabits/s). Later, USB
2, called High-Speed USB, was introduced.
• It enables data transfers at speeds up to 480 Megabits/s. As I/O
devices continued to evolve with even higher speed requirements, USB
3 (called Superspeed) was developed. It supports data transfer rates up
to 5 Gigabits/s.
The USB has been designed to meet several key objectives:
• Provide a simple, low-cost, and easy to use interconnection system
• Accommodate a wide range of I/O devices and bit rates, including
Internet connections, and audio and video applications
• Enhance user convenience through a “plug-and-play” mode of
operation.
• Mini USB is used with digital cameras and computer peripherals.
• Micro USB was developed for connecting compact and mobile devices such as digital cameras, smartphones, GPS devices, MP3 players, and photo printers.
• On most newer Android smartphones and other USB-connected devices, the USB Type-C cable is a relatively new type of connector.
• Device Characteristics
• The kinds of devices that may be connected to a computer cover a
wide range of functionality.
• The speed, volume, and timing constraints associated with data
transfers to and from these devices vary significantly.
• In the case of a keyboard, one byte of data is generated every time a
key is pressed, which may happen at any time.
• These data should be transferred to the computer promptly.
• Since the event of pressing a key is not synchronized to any other
event in a computer system, the data generated by the keyboard are
called asynchronous.
• Plug-and-Play
• Its plug-and-play feature means that when a new device is
connected, the system detects its existence automatically.
• The software determines the kind of device and how to communicate
with it, as well as any special requirements it might have.
• As a result, the user simply plugs in a USB device and begins to use it.
• The USB is also hot-pluggable, which means a device can be plugged
into or removed from a USB port while power is turned on.
• USB Architecture
• The USB uses point-to-point connections and a serial transmission
format. When multiple devices are connected, they are arranged in a
tree structure as shown in Figure.
• Each node of the tree has a device called a hub, which acts as an
intermediate transfer point between the host computer and the I/O
devices.
• At the root of the tree, a root hub connects the entire tree to the host
computer.
• The leaves of the tree are the I/O devices: a mouse, a keyboard, a
printer, an Internet connection, a camera, or a speaker.
• The tree structure makes it possible to connect many devices using
simple point-to-point serial links.
• If I/O devices are allowed to send messages at any time, two
messages may reach the hub at the same time and interfere with
each other.
• For this reason, the USB operates strictly on the basis of polling.
• A device may send a message only in response to a poll message
from the host processor.
• Hence, no two devices can send messages at the same time. This
restriction allows hubs to be simple, low-cost devices.
• Isochronous Traffic on USB
• Isochronous data need to be transferred at precisely timed regular intervals.
• To accommodate this type of traffic, the root hub transmits a
uniquely recognizable sequence of bits over the USB tree every
millisecond.
• This sequence of bits, called a Start of Frame character, acts as a
marker indicating the beginning of isochronous data, which are
transmitted after this character.
• Thus, digitized audio and video signals can be transferred in a regular
and precisely timed manner.
• Electrical Characteristics
• USB connections consist of four wires, of which two carry power, +5 V
and Ground, and two carry data.
• Thus, I/O devices that do not have large power requirements can be
powered directly from the USB.
• Two methods are used to send data over a USB cable.
• When sending data at low speed, a high voltage relative to Ground is
transmitted on one of the two data wires to represent a 0 and on the
other to represent a 1.
• Injecting a signal on a wire relative to Ground in this way is referred to as single-ended transmission.
SATA
• Serial ATA (Serial Advanced Technology Attachment or SATA) is a
command and transport protocol that defines how data is transferred
between a computer's motherboard and mass storage devices, such
as hard disk drives (HDDs), optical drives and solid-state drives (SSDs).
• SATA is based on serial signaling technology, where data is transferred
as a sequence of individual bits.
SATA Cables
• The SATA cables are long, and both end-points of the cable are thin
and flat. SATA cables are of different types, but the following two are
the main types of SATA cables:
• SATA Data Cables: These cables typically have seven pins for
transferring data. These connect the drives to the motherboard of the
computer systems. One end of the SATA cable plugs into the back of
the hard drive of the computer system and the other end plugs into
the computer's motherboard.
• SATA Power Cables: These cables typically have fifteen pins. These connect the drives to the power supply.
Revisions of SATA Interface
Following are the three major revisions of the SATA interface:
• SATA I: This interface is formally called SATA 1.5Gb/s. It is the first generation of SATA, running at 1.5 Gigabits per second.
• SATA II: This interface is formally called SATA 3Gb/s. It is the second generation of SATA, running at 3.0 Gigabits per second.
• SATA III: This interface is formally called SATA 6Gb/s. It is the third generation of SATA, running at 6.0 Gigabits per second.
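These line rates use 8b/10b encoding, in which every 8-bit byte is transmitted as a 10-bit symbol, so the usable data rate is 80% of the line rate. A quick calculation (the helper name is illustrative):

```python
def usable_mb_per_s(line_rate_gbps):
    # 8b/10b: 8 data bits per 10 line bits, then bits -> bytes -> MB.
    return line_rate_gbps * 1e9 * 8 / 10 / 8 / 1e6

assert abs(usable_mb_per_s(1.5) - 150.0) < 1e-9   # SATA I:   150 MB/s
assert abs(usable_mb_per_s(3.0) - 300.0) < 1e-9   # SATA II:  300 MB/s
assert abs(usable_mb_per_s(6.0) - 600.0) < 1e-9   # SATA III: 600 MB/s
```

This is why SATA III is commonly quoted as 600 MB/s of usable throughput rather than 750 MB/s.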
