
Concurrency Control in DBMS

Concurrency control is a very important concept in DBMS: it ensures that the simultaneous execution or manipulation of data by several processes or users does not result in data inconsistency. Concurrency control deals with the interleaved execution of more than one transaction.
What is a Transaction?
A set of logically related operations is known as a transaction. The main operations of a
transaction are:
• Read(A): The read operation Read(A), or R(A), reads the value of A from the database and
stores it in a buffer in main memory.
• Write(A): The write operation Write(A), or W(A), writes the value back to the database from
the buffer.
(Note: a write does not always reach the database immediately; it may only update the buffer.
This is where the dirty read problem comes into the picture.)
Let us take a debit transaction from an account that consists of the following operations:
1. R(A);
2. A=A-1000;
3. W(A);
Assume A’s value before starting the transaction is 5000.
• The first operation reads the value of A from the database and stores it in a buffer.
• The second operation decreases the value by 1000, so the buffer contains 4000.
• The third operation writes the value from the buffer to the database, so A’s final value
will be 4000.
But a transaction may also fail after executing some of its operations. The failure can be due
to hardware, software, or power problems, etc. For example, if the debit transaction discussed
above fails after executing operation 2, the value of A will remain 5000 in the database, which
is not acceptable to the bank. To avoid this, the database provides two important operations:
• Commit: After all instructions of a transaction are successfully executed, the changes made
by a transaction are made permanent in the database.
• Rollback: If a transaction is not able to execute all operations successfully, all the changes
made by a transaction are undone.
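To make these two operations concrete, here is a minimal Python sketch of the debit transaction above with commit/rollback semantics. It is only an illustration of the idea, not real DBMS code; the dictionaries standing in for the database and the buffer are assumptions of the example.

# Minimal sketch: a debit transaction with commit/rollback semantics.
# 'database' stands for durable storage, 'buffer' for main memory.
database = {"A": 5000}

def debit(amount):
    buffer = {}
    try:
        buffer["A"] = database["A"]   # R(A): read A into the buffer
        if buffer["A"] < amount:
            raise ValueError("insufficient balance")
        buffer["A"] -= amount         # A = A - amount (buffer only, so far)
        database["A"] = buffer["A"]   # W(A): write back, then commit
        print("Committed, A =", database["A"])
    except ValueError:
        # Rollback: discard the buffer; the database was never touched.
        print("Rolled back, A =", database["A"])

debit(1000)   # Committed, A = 4000
debit(9999)   # Rolled back, A = 4000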
Properties of a Transaction
Atomicity: As a transaction is a set of logically related operations, either all of them should be
executed or none. The debit transaction discussed above should either execute all three
operations or none. If the debit transaction fails after executing operations 1 and 2, then its new
value of 4000 will not be updated in the database, which leads to inconsistency.
Consistency: If operations of debit and credit transactions on the same account are executed
concurrently, it may leave the database in an inconsistent state.
• For example, with T1 (a debit of Rs. 1000 from A) and T2 (a credit of Rs. 500 to A) executing
concurrently, the database can reach an inconsistent state.
• Let us assume the Account balance of A is Rs. 5000. T1 reads A(5000) and stores the value
in its local buffer space. Then T2 reads A(5000) and also stores the value in its local buffer
space.
• T1 performs A=A-1000 (5000-1000=4000) and 4000 is stored in T1 buffer space. Then T2
performs A=A+500 (5000+500=5500) and 5500 is stored in the T2 buffer space. T1 writes
the value from its buffer back to the database.
• A’s value is updated to 4000 in the database and then T2 writes the value from its buffer
back to the database. A’s value is updated to 5500 which shows that the effect of the debit
transaction is lost and the database has become inconsistent.
• To maintain consistency of the database, we need concurrency control protocols. The
operations of T1 and T2 with their buffers and database have been shown in Table 1.

Table 1:

T1            T1’s buffer space    T2            T2’s buffer space    Database
                                                                      A = 5000
R(A);         A = 5000                                                A = 5000
              A = 5000             R(A);         A = 5000             A = 5000
A = A-1000;   A = 4000                           A = 5000             A = 5000
              A = 4000             A = A+500;    A = 5500             A = 5000
W(A);         A = 4000                           A = 5500             A = 4000
                                   W(A);         A = 5500             A = 5500
Isolation: The result of a transaction should not be visible to others before the transaction is
committed. For example, assume that A’s balance is Rs. 5000 and T1 debits Rs. 1000 from A.
A’s new balance will be 4000. If T2 credits Rs. 500 to A’s new balance, A will become 4500,
and if T1 then fails, we have to roll back T2 as well, because it used a value produced by T1.
So a transaction’s results are not made visible to other transactions before it commits.
Durability: Once the database has committed a transaction, the changes made by the transaction
must be permanent. For example, if a person has credited $500,000 to his account, the bank
cannot say that the update has been lost. To avoid this problem, multiple copies of the database
are stored at different locations.
What is a Schedule?
A schedule is a series of operations from one or more transactions. A schedule can be of two
types:
Serial Schedule: When one transaction completely executes before another transaction starts,
the schedule is called a serial schedule. A serial schedule is always consistent. For example, if
a schedule S has a debit transaction T1 and a credit transaction T2, the possible serial schedules
are T1 followed by T2 (T1 -> T2) or T2 followed by T1 (T2 -> T1). A serial schedule has low
throughput and less resource utilization.
Concurrent Schedule: When operations of a transaction are interleaved with operations of
other transactions in a schedule, the schedule is called a concurrent schedule. For example, the
schedule of the debit and credit transactions shown in Table 1 is concurrent. But concurrency
can lead to inconsistency in the database, and that example is in fact inconsistent.
Difference between Serial Schedule and Serializable Schedule

Serial Schedule                                    Serializable Schedule

Transactions are executed one after another.       Transactions are executed concurrently.

Serial schedules are less efficient.               Serializable schedules are more efficient.

Only one transaction executes at a time.           Multiple transactions can execute at a time.

Takes more time for execution.                     Execution is fast.
Concurrency Control in DBMS
• Executing a single transaction at a time increases the waiting time of the other transactions,
which may delay the overall execution. Hence, to increase the overall throughput and
efficiency of the system, several transactions are executed concurrently.
• Concurrency control ensures that the simultaneous execution or manipulation of data by
several processes or users does not result in data inconsistency.
• Concurrency control provides procedures that control the concurrent execution of
operations in the database.
Concurrency Control Problems
Several problems arise when numerous transactions are executed simultaneously in a random
manner. A database transaction consists of two major operations, Read and Write, and it is
very important to manage these operations in the concurrent execution of transactions in
order to maintain the consistency of the data.
The five concurrency problems that can occur in the database are:
1. Dirty Read Problem (Write-Read conflict OR Temporary Update Problem)
2. Lost Update Problem
3. Incorrect Summary Problem
4. Unrepeatable Read Problem
5. Phantom Read Problem

1. Dirty Read Problem (Write-Read conflict)

The dirty read problem occurs when one transaction updates an item and then fails due to
some unexpected event, but before it performs its rollback some other transaction reads the
updated value. This creates an inconsistency in the database. The dirty read problem arises
from a Write-Read conflict between transactions.
For example, consider two transactions T1 and T2. T1 starts a transaction and modifies a row
in a table, but does not commit. Meanwhile, T2 tries to read the same row before T1 commits
its changes. If T2 is allowed to read the uncommitted data, the result is a dirty read, potentially
leading to incorrect or inconsistent results.
To prevent dirty reads, SQL provides transaction isolation levels, which specify how
transactions should be isolated from one another. The isolation levels include:
• Read uncommitted: This level allows transactions to read uncommitted data from other
transactions, leading to potential dirty reads.
• Read committed: This level allows transactions to read only committed data, preventing
dirty reads.
• Repeatable read: This level prevents dirty reads and also ensures that a transaction
always reads the same data for a given query, even if other transactions modify the data
in the meantime.
• Serializable: This level provides the highest level of isolation and ensures that
transactions are executed serially, preventing dirty reads and other anomalies.
Example: Table Record

ID    Cus_Name     Balance
1     S Adam       100
2     Zee Young    150

Transaction: Transfer 10 from S Adam’s account to Zee Young’s account.
Input:
BEGIN TRY
BEGIN TRANSACTION
UPDATE [Table] SET Balance = Balance + 10 WHERE ID = 'C';
UPDATE [Table] SET Balance = Balance - 10 WHERE ID = 1;
COMMIT TRANSACTION
PRINT 'Committed'
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
PRINT 'Not Committed'
END CATCH

Output:
Not committed
Executing the above batch prints “Not Committed” because the first UPDATE fails: there is
no row with ID = 'C'. If another transaction were allowed to read the row changed inside this
failed transaction, that would be a dirty read. There is no partial commit: only if both UPDATE
statements succeed is “Committed” printed. Before execution:
Table Record

ID    Cus_Name     Balance
1     S Adam       100
2     Zee Young    150

After Execution:
Input:
BEGIN TRY
BEGIN TRANSACTION
UPDATE [Table] SET Balance = Balance - 10 WHERE ID = 1;
UPDATE [Table] SET Balance = Balance + 10 WHERE ID = 'C';
COMMIT TRANSACTION
PRINT 'Committed'
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
PRINT 'Not Committed'
END CATCH
Output:
(1 row affected)
(0 rows affected)
Not Committed
Note that with the valid ID first, the first UPDATE reports “1 row affected”, but the second
UPDATE still fails, so the whole transaction is rolled back and neither change persists.
Explanation: Suppose we have a ticket booking system. One customer starts booking a
ticket when the number of available tickets is 10. Before that customer completes the
payment, a second customer starts booking, and this second transaction sees that 9 tickets
are available. The twist: if the first customer does not have sufficient funds in his debit card
or wallet, the first transaction rolls back, and the availability of 9 seats that the second
transaction read was a dirty read.
Example: Available tickets for the 1st customer.
Step 1:
Input: Transaction 1
SELECT * FROM Bus_ticket;
Output:

ID    Bus_Name    Available_Seat
1     KA0017      10

Step 2: Booking time for 1st customer


Input:
-- Transaction 1
BEGIN TRANSACTION
UPDATE Bus_Ticket SET Available_Seat = 9
WHERE ID = 1

-- Payment for Transaction 1
WAITFOR DELAY '00:00:30'
ROLLBACK TRANSACTION
Available tickets for the 2nd customer, while the 1st customer is paying for the ticket.
Step 3: Input
-- Transaction 2
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM Bus_ticket WHERE ID = 1;
Output:

ID    Bus_Name    Available_Seat
1     KA0017      9

Note that during the first customer’s payment, the second transaction reads that 9 seats are
available. If the first transaction then rolls back, the value 9 was dirty read data: after the
rollback, 10 seats are available again. Steps 2 and 3 happen at the same time. The actual
availability after the rollback of transaction 1:

ID    Bus_Name    Available_Seat
1     KA0017      10
2. Lost Update Problem
The lost update problem occurs when two or more transactions modify the same data and
one transaction’s update is overwritten and lost because of another transaction’s write. The
problem can be illustrated with the scenarios below involving two transactions, T1 and T2.
Scenario 1:
1. Transaction T1 modifies a database record without committing the changes.
2. T2 reads the uncommitted data changed by T1
3. T1 performs rollback
4. T2 has already read the uncommitted data of T1 which is no longer valid, thus
creating inconsistency in the database.
Scenario 2:
1. T1 reads the value of an item from the database.
2. T2 starts and reads the same database item.
3. T1 updates the value of that data and performs a commit.
4. T2 updates the same data item based on its initial read and performs commit.
5. T1’s modification is lost, overwritten by T2’s write; this is the lost update
problem in the database.
T1                T2
Read_item(X)
                  Read_item(X)
X = X + N
Write(X)
                  X = X + M
                  Write(X)
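Scenario 2 can be traced in a short Python sketch. The interleaving is written out by hand to keep it deterministic, and the initial value of X and the amounts N and M are invented for the example.

# Sketch of the lost update interleaving from Scenario 2.
X = 100          # shared database item
N, M = 10, 20    # amounts added by T1 and T2

t1_local = X     # T1: Read_item(X)
t2_local = X     # T2: Read_item(X), same initial value

t1_local += N    # T1: X = X + N
X = t1_local     # T1: Write(X) and commit -> X is now 110

t2_local += M    # T2: X = X + M, based on its stale read of 100
X = t2_local     # T2: Write(X) and commit -> X is 120; T1's update is lost

print(X)         # prints 120, not the expected 130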

3. Incorrect Summary Problem:


Consider a situation where one transaction is applying an aggregate function to some
records while another transaction is updating those records. The aggregate function may
compute some values before they have been updated and others after they have been
updated.
Example:

T1               T2
                 sum = 0
                 Read(X); sum = sum + X
Read(X)
X = X - 10
Write(X)
Read(Y)
Y = Y + 10
Write(Y)
                 Read(Y); sum = sum + Y

In this example, transaction T2 is calculating the sum of some records while transaction T1
is updating them: T2 reads X before T1’s update but reads Y after it, so the computed sum
reflects neither the state before the transfer nor the state after it.
4. Unrepeatable Read Problem:
The unrepeatable read problem occurs when two or more read operations of the same
transaction read different values of the same variable.
Example:

T1               T2
                 Read(X)
Read(X)
X = X + 100
Write(X)
                 Read(X)

In this example, once transaction T2 reads the variable X, a write operation in transaction T1
changes its value. When T2 performs its second read, it sees the new value of X written by
T1, so the two reads within the same transaction disagree.
5. Phantom Read Problem:
The phantom read problem occurs when a transaction reads a variable once but when it
tries to read that same variable again, an error occurs saying that the variable does not
exist.
Example:

T1               T2
                 Read(X)
Read(X)
Delete(X)
                 Read(X)

In this example, once transaction T2 reads the variable X, transaction T1 deletes X without
T2’s knowledge. When T2 tries to read X again, the item no longer exists.
Solution:
To prevent concurrency problems in DBMS transactions, several concurrency control
techniques can be used, including locking, timestamp ordering, and optimistic concurrency
control.
• Locking involves acquiring locks on the data items used by transactions, preventing
other transactions from accessing the same data until the lock is released. There are
different types of locks, such as shared and exclusive locks, and they can be used to
prevent Dirty Read and Non-Repeatable Read.
• Timestamp ordering assigns a unique timestamp to each transaction and ensures that
transactions execute in timestamp order. Timestamp ordering can prevent Non-
Repeatable Read and Phantom Read.
• Optimistic concurrency control assumes that conflicts between transactions are rare
and allows transactions to proceed without acquiring locks initially. If a conflict is
detected, the transaction is rolled back, and the conflict is resolved. Optimistic
concurrency control can prevent Dirty Read, Non-Repeatable Read, and Phantom
Read.
Concurrency Control Protocols
Concurrency control protocols are the sets of rules maintained in order to solve the
concurrency control problems in the database. They ensure that concurrent transactions can
execute properly while maintaining database consistency. The concurrency control protocols
provide the concurrent execution of transactions with atomicity, consistency, isolation,
durability, and serializability.
▪ Lock Based Protocol
• Basic 2-PL
• Conservative 2-PL
• Strict 2-PL
• Rigorous 2-PL
▪ Time-Stamp Ordering Protocol
▪ Graph Based Protocol
▪ Multiple Granularity Protocol
▪ Multi-version Protocol
1. Lock Based Protocol
• Implementation of Locking in DBMS
• Locking protocols are used in DBMS as a means of concurrency control.
• Multiple transactions may request a lock on a data item simultaneously.
• Hence, we require a mechanism to manage the locking requests made by transactions.
Such a mechanism is called a Lock Manager.
• It relies on message passing: transactions and the lock manager exchange messages to
handle the locking and unlocking of data items.

• Data Structure in Lock Manager


The data structure required for the implementation of locking is called a Lock table.
1. It is a hash table where the names of data items are used as a hashing index.
2. Each locked data item has a linked list associated with it.
3. Every node in the linked list represents a transaction that requested a lock, the mode of lock
requested (shared/exclusive), and the current status of the request (granted/waiting).
4. Every new lock request for the data item will be added to the end of the linked list as a new
node.
5. Collisions in the hash table are handled by the technique of separate chaining.
Consider the following example of a lock table:

[Figure: Implementation of Locking in DBMS – an example lock table]


Explanation: In the above figure, the locked data items present in the lock table are 5, 47,
167, and 15. The transactions which have requested for lock have been represented by a
linked list shown below them using a downward arrow. Each node in the linked list has the
name of the transaction which has requested the data item like T43, T1, T27, etc. The color
of the node represents the status i.e. whether the lock has been granted or waiting. Note
that a collision has occurred for data items 5 and 47. It has been resolved by separate
chaining where each data item belongs to a linked list. The data item is acting as a header
for a linked list containing the locking request.
• Working of the Lock Manager
• Initially, the lock table is empty as no data item is locked.
• Whenever the lock manager receives a lock request from a transaction Ti on a particular
data item Qi, the following cases may arise:
• If Qi is not already locked, a linked list will be created and the lock will be granted to the
requesting transaction Ti.
• If the data item is already locked, a new node will be added at the end of its linked list
containing the information about the request made by Ti.
• If the lock mode requested by Ti is compatible with the lock mode of the transactions
currently holding the lock, Ti will acquire the lock too and its status will be changed to
‘granted’. Otherwise, the status of Ti’s request will be ‘waiting’.
• If a transaction Ti wants to unlock the data item it is currently holding, it will send an unlock
request to the lock manager. The lock manager will delete Ti’s node from this linked list.
The lock will be granted to the next transaction in the list.
• Sometimes transaction Ti may have to be aborted. In such a case all the waiting requests
made by Ti will be deleted from the linked lists present in the lock table. Once abortion is
complete, locks held by Ti will also be released.
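The lock table and the lock manager behaviour described above can be sketched in Python. This is a simplified, single-threaded illustration, not production code: a dict plays the role of the hash table, a plain list stands in for each linked list of request nodes, 'S'/'X' stand for shared and exclusive modes, and no fairness policy is modelled.

# Illustrative lock manager: a lock table of per-item request lists.
lock_table = {}   # data item -> list of {"txn", "mode", "status"} records

def compatible(mode, granted_modes):
    # S is compatible only with S; X is compatible with nothing.
    if mode == "S":
        return all(m == "S" for m in granted_modes)
    return len(granted_modes) == 0

def request_lock(txn, item, mode):
    queue = lock_table.setdefault(item, [])   # create the list on first request
    granted = [r["mode"] for r in queue if r["status"] == "granted"]
    status = "granted" if compatible(mode, granted) else "waiting"
    queue.append({"txn": txn, "mode": mode, "status": status})
    return status

def unlock(txn, item):
    # Remove txn's node, then try to grant the remaining waiting requests.
    queue = [r for r in lock_table.get(item, []) if r["txn"] != txn]
    lock_table[item] = queue
    for r in queue:
        if r["status"] == "waiting":
            granted = [g["mode"] for g in queue if g["status"] == "granted"]
            if compatible(r["mode"], granted):
                r["status"] = "granted"

print(request_lock("T1", "Q", "S"))   # granted
print(request_lock("T2", "Q", "S"))   # granted: S is compatible with S
print(request_lock("T3", "Q", "X"))   # waiting: X conflicts with the S locks
unlock("T1", "Q"); unlock("T2", "Q")
print(lock_table["Q"][0]["status"])   # granted: T3 finally gets the X lock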
• Advantages of Locking
• Data Consistency: Locking can help ensure data consistency by preventing multiple users
from modifying the same data simultaneously. By controlling access to shared resources,
locking can help prevent data conflicts and ensure that the database remains in a
consistent state.
• Isolation: Locking can ensure that transactions are executed in isolation from other
transactions, preventing interference between transactions and reducing the risk of data
inconsistencies.
• Granularity: Locking can be implemented at different levels of granularity, allowing for
more precise control over shared resources. For example, row-level locking can be used to
lock individual rows in a table, while table-level locking can be used to lock entire tables.
• Availability: Locking can help ensure the availability of shared resources by preventing
users from monopolizing resources or causing resource starvation.
• Disadvantages of Locking
• Overhead: Locking requires additional overhead, such as acquiring and releasing locks on
shared resources. This overhead can lead to slower performance and increased resource
consumption, particularly in systems with high levels of concurrency.
• Deadlocks: Deadlocks can occur when two or more transactions are waiting for each other
to release resources, causing a circular dependency that can prevent any of the
transactions from completing. Deadlocks can be difficult to detect and resolve and can
result in reduced throughput and increased latency.
• Reduced Concurrency: Locking can limit the number of users or applications accessing the
database simultaneously. This can lead to reduced concurrency and slower performance in
systems with high levels of concurrency.
• Complexity: Implementing locking can be complex, particularly in distributed systems or in
systems with complex transactional logic. This complexity can lead to increased
development and maintenance costs.
1.1 Basic Two Phase (2-PL) Locking Protocol: Two phase locking is a widely used technique
which ensures strict ordering of lock acquisition and release. The two phase locking protocol
works in two phases.
• Growing Phase: In this phase, the transaction acquires locks before performing any
modification on the data items. New locks may be acquired, but no lock can be released
in this phase.
• Shrinking Phase: In this phase, the transaction releases the acquired locks once it has
performed its modifications on the data items. Once the transaction starts releasing
locks, it cannot acquire any further locks.
• In lock based protocols, each transaction needs to acquire locks before it starts
accessing or modifying data items. There are two types of locks used in databases.
• Shared Lock S(a): A shared lock is also known as a read-only lock, which allows
multiple transactions to read the data simultaneously. A transaction holding
a shared lock can only read the data item; it cannot modify it.
• Exclusive Lock X(a): An exclusive lock is also known as a write lock. An exclusive lock
allows a transaction to update a data item. Only one transaction can hold the exclusive
lock on a data item at a time. While a transaction holds an exclusive lock on a data
item, no other transaction is allowed to acquire a shared or exclusive lock on the same
data item.
• Important points:
• A transaction may be granted a lock on an item if the requested lock is compatible
with locks already held on the item by other transactions.
• Any number of transactions can hold shared locks on an item, but if any
transaction holds an exclusive(X) on the item no other transaction may hold any
lock on the item.
Note: If lock conversion is allowed, then upgrading a lock (from S(a) to X(a)) is allowed in
the growing phase, and downgrading a lock (from X(a) to S(a)) must be done in the
shrinking phase.
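A minimal sketch of the two-phase rule itself, with invented class and method names: once a transaction releases any lock, any further lock request is refused.

# Sketch: enforcing the two-phase rule on a single transaction.
class TwoPhaseTxn:
    def __init__(self, name):
        self.name = name
        self.locks = {}          # item -> "S" or "X"
        self.shrinking = False   # becomes True at the first unlock

    def lock(self, item, mode):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock after an unlock")
        # Upgrading S -> X is allowed, but only in the growing phase.
        self.locks[item] = mode

    def unlock(self, item):
        self.shrinking = True    # the growing phase is over (lock point passed)
        del self.locks[item]

t1 = TwoPhaseTxn("T1")
t1.lock("A", "S")
t1.lock("B", "X")      # growing phase
t1.unlock("A")         # shrinking phase begins
try:
    t1.lock("C", "X")  # illegal under 2PL
except RuntimeError as e:
    print(e)           # 2PL violation: lock after an unlock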
Let’s see a transaction implementing 2-PL.

     T1             T2
1    lock-S(A)
2                   lock-S(A)
3    lock-X(B)
4    ……             ……
5    unlock(A)
6                   lock-X(C)
7    unlock(B)
8                   unlock(A)
9                   unlock(C)
10   ……             ……
This is just a skeleton transaction that shows how locking and unlocking work with 2-PL.
Note:
Transaction T1
• The growing phase is steps 1–3.
• The shrinking phase is steps 5–7.
• The lock point is at step 3.
Transaction T2
• The growing phase is steps 2–6.
• The shrinking phase is steps 8–9.
• The lock point is at step 6.
• Lock Point
The point at which the growing phase ends, i.e., when a transaction takes the final lock it
needs to carry on its work. With this in mind, look at the schedule again.
• Problems with Two-Phase Locking
• It does not ensure recoverability; this can be solved by strict two-phase locking and
rigorous two-phase locking.
• It does not ensure a cascadeless schedule; this can also be solved by strict two-phase
locking and rigorous two-phase locking.
• It may suffer from deadlock, which can be addressed by conservative two-phase locking.
• Problem With Simple Locking…
Consider the Partial Schedule:
     T1             T2
1    lock-X(B)
2    read(B)
3    B := B - 50
4    write(B)
5                   lock-S(A)
6                   read(A)
7                   lock-S(B)
8    lock-X(A)
9    ……             ……

Deadlock – consider the above execution phase. T1 holds an exclusive lock on B, and T2
holds a shared lock on A. In statement 7, T2 requests a lock on B, while in statement 8
T1 requests a lock on A. As you may notice, this creates a deadlock: neither transaction
can proceed with its execution.
Starvation – is also possible if the concurrency control manager is badly designed. For
example, a transaction may be waiting for an X-lock on an item while a sequence of
other transactions request and are granted an S-lock on the same item. This can be
avoided if the concurrency control manager is properly designed.
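Such a deadlock is typically detected as a cycle in a wait-for graph, where an edge Ti -> Tj means Ti is waiting for a lock held by Tj. Below is a small Python sketch of that detection step; the graph encoding is an assumption of the example.

# Sketch: detect deadlock as a cycle in the wait-for graph.
def has_cycle(wait_for):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:                  # back edge -> cycle
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in wait_for if n not in visited)

# From the partial schedule above: T2 waits for T1's X-lock on B,
# and T1 waits for T2's S-lock on A.
print(has_cycle({"T2": ["T1"], "T1": ["T2"]}))   # True -> deadlock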

1.2 Strict Two Phase Locking Protocol: This is almost similar to the two phase locking
protocol; the only difference is that in two phase locking a transaction can release its locks
before it commits, whereas in strict two phase locking a transaction may release its locks
only when it commits.
• In other words, in addition to the locking being 2-Phase, all Exclusive (X) locks held by
the transaction must not be released until after the transaction commits.
• Following Strict 2-PL ensures that our schedule is:
i. Recoverable
ii. Cascadeless
Hence, it gives us freedom from cascading aborts, which were still possible in Basic 2-PL,
and moreover guarantees strict schedules; but deadlocks are still possible.
1.3 Rigorous 2-PL –
This requires that, in addition to the locking being 2-Phase, all Exclusive (X) and Shared (S)
locks held by the transaction must not be released until after the transaction commits.
Following Rigorous 2-PL ensures that our schedule is:
i. Recoverable
ii. Cascadeless
Hence, it gives us freedom from cascading aborts, which were still possible in Basic 2-PL,
and moreover guarantees strict schedules; but deadlocks are still possible!
Note: The difference between Strict 2-PL and Rigorous 2-PL is that Rigorous is more
restrictive: it requires both exclusive and shared locks to be held until after the
transaction commits, and this is what makes Rigorous 2-PL easier to implement.
1.4 Conservative 2-PL (Static 2-PL) –
This protocol requires the transaction to lock all the items it will access before the
transaction begins execution, by predeclaring its read-set and write-set. If any of the
predeclared items cannot be locked, the transaction does not lock any of the items;
instead, it waits until all the items are available for locking.
Conservative 2-PL is deadlock free, but it does not ensure a strict schedule. It is also
difficult to use in practice because of the need to predeclare the read-set and the write-set,
which is not possible in many situations. In practice, the most popular variation of 2-PL
is Strict 2-PL.
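A minimal sketch of the conservative discipline, with invented names and exclusive locks only for brevity: either every predeclared item is locked before execution starts, or nothing is locked at all.

# Sketch: conservative 2PL locks the whole predeclared set up front.
held = {}   # item -> transaction currently holding it (exclusive only, for brevity)

def try_begin(txn, read_set, write_set):
    needed = set(read_set) | set(write_set)   # the predeclared items
    if any(item in held for item in needed):
        return False          # lock nothing; the transaction waits and retries
    for item in needed:
        held[item] = txn      # all locks are taken before execution starts
    return True

print(try_begin("T1", {"A"}, {"B"}))   # True: T1 locks A and B
print(try_begin("T2", {"B"}, {"C"}))   # False: B is taken, so T2 locks nothing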
Example:
Now consider the schedule below. Can this schedule be locked using 2-PL? If yes, show
how, and state which class of 2-PL the answer belongs to.

     T1          T2
1    Read(A)
2                Read(A)
3                Read(B)
4                Write(B)
5                Commit
6    Read(B)
7    Write(B)
8    Commit
Yes, the schedule is conflict serializable, so we can try implementing 2-PL.
Solution:

     T1          T2
1    Lock-S(A)
2    Read(A)
3                Lock-S(A)
4                Read(A)
5                Lock-X(B)
6                Read(B)
7                Write(B)
8                Commit
9                Unlock(A)
10               Unlock(B)
11   Lock-X(B)
12   Read(B)
13   Write(B)
14   Commit
15   Unlock(A)
16   Unlock(B)

Since both transactions release all of their locks, shared and exclusive, only after they
commit, this locking belongs to Rigorous 2-PL (and is therefore also Strict 2-PL).
2. Timestamp Based Protocol
• In this protocol each transaction has a timestamp attached to it. The timestamp is the
time at which the transaction entered the system.
• Conflicting pairs of operations can be resolved by the timestamp ordering protocol using
the timestamp values of the transactions, thereby guaranteeing that the transactions
take place in the correct order.
The main idea for this protocol is to order the transactions based on their Timestamps. A
schedule in which the transactions participate is then serializable and the only equivalent
serial schedule permitted has the transactions in the order of their Timestamp Values. Stating
simply, the schedule is equivalent to the particular Serial Order corresponding to the order of
the Transaction timestamps. An algorithm must ensure that, for each item accessed
by Conflicting Operations in the schedule, the order in which the item is accessed does not
violate the ordering. To ensure this, use two Timestamp Values relating to each database
item X.
• W_TS(X) is the largest timestamp of any transaction that executed write(X) successfully.
• R_TS(X) is the largest timestamp of any transaction that executed read(X) successfully.
Basic Timestamp Ordering –
Every transaction is issued a timestamp based on when it enters the system. Suppose, if an
old transaction Ti has timestamp TS(Ti), a new transaction Tj is assigned timestamp TS(Tj) such
that TS(Ti) < TS(Tj). The protocol manages concurrent execution such that the timestamps
determine the serializability order. The timestamp ordering protocol ensures that any
conflicting read and write operations are executed in timestamp order. Whenever some
Transaction Ti tries to issue a R_item(X) or a W_item(X), the Basic TO algorithm compares the
timestamp of Ti with R_TS(X) & W_TS(X) to ensure that the Timestamp order is not violated.
This describes the Basic TO protocol algorithm in the following two cases.
1. Whenever a Transaction Ti issues a W_item(X) operation, check the following conditions:
a. If R_TS(X) > TS(Ti), then abort and rollback Ti and reject the operation and
b. if W_TS(X) > TS(Ti), then abort and rollback Ti and reject the operation. else,
c. Execute W_item(X) operation of Ti and set W_TS(X) = TS(Ti).
2. Whenever a Transaction Ti issues a R_item(X) operation, check the following conditions:
a. If W_TS(X) > TS(Ti), then abort and rollback Ti and reject the operation, else
b. If W_TS(X) <= TS(Ti), then execute the R_item(X) operation of Ti and set R_TS(X) to the
larger of TS(Ti) and the current R_TS(X). [Set R_TS(X) = Max{R_TS(X), TS(Ti)}]
Whenever the Basic TO algorithm detects two conflicting operations that occur in the
incorrect order, it rejects the latter of the two operations by aborting the transaction that
issued it. Schedules produced by Basic TO are guaranteed to be conflict serializable. As
already discussed, using timestamps also ensures that the schedule will be deadlock free.
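The two cases above translate almost line for line into a Python sketch. R_TS(X) and W_TS(X) are kept per item exactly as defined; the timestamps are plain integers issued at transaction start, and the function names are invented for the example.

# Sketch of the Basic Timestamp Ordering checks.
read_ts, write_ts = {}, {}    # R_TS(X) and W_TS(X), defaulting to 0

def write_item(ts, item):
    if read_ts.get(item, 0) > ts or write_ts.get(item, 0) > ts:
        return "abort"        # a younger transaction already read or wrote X
    write_ts[item] = ts       # W_TS(X) = TS(Ti)
    return "ok"

def read_item(ts, item):
    if write_ts.get(item, 0) > ts:
        return "abort"        # a younger transaction already wrote X
    read_ts[item] = max(read_ts.get(item, 0), ts)   # R_TS(X) = Max{R_TS(X), TS(Ti)}
    return "ok"

print(read_item(2, "X"))    # ok: T2 reads X, so R_TS(X) becomes 2
print(write_item(1, "X"))   # abort: older T1 writes after younger T2's read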
One drawback of the Basic TO protocol is that Cascading Rollback is still possible. Suppose
we have a Transaction T1 and T2 has used a value written by T1. If T1 is aborted and
resubmitted to the system then, T2 must also be aborted and rolled back. So the problem of
Cascading aborts still prevails.
Let’s list the advantages and disadvantages of the Basic TO protocol:

• The timestamp ordering protocol ensures serializability, since the precedence graph is
acyclic: every edge points from an older transaction to a younger one.
[Figure: Precedence Graph for TS ordering]
• The timestamp protocol ensures freedom from deadlock, as no transaction ever waits.
• But the schedule may not be cascade free, and may not even be recoverable.
Strict Timestamp Ordering –
A variation of Basic TO called Strict TO ensures that schedules are both strict and
conflict serializable. In this variation, a transaction T that issues a R_item(X) or W_item(X)
such that TS(T) > W_TS(X) has its read or write operation delayed until the transaction T'
that wrote the value of X has committed or aborted.

Advantages:
• High Concurrency: Timestamp-based concurrency control allows a high degree of
concurrency by ensuring that transactions do not interfere with each other.
• Efficient: The technique is efficient and scalable, as it does not require locking and can
handle a large number of transactions.
• No Deadlocks: Since there are no locks involved, there is no possibility of deadlocks
occurring.
• Improved Performance: By allowing transactions to execute concurrently, the overall
performance of the database system can be improved.
Disadvantages:
• Limited Granularity: The granularity of timestamp-based concurrency control is limited
to the precision of the timestamp. This can lead to situations where transactions are
unnecessarily blocked even if they do not conflict with each other.
• Timestamp Management: To ensure that transactions are executed in the correct order,
the timestamps need to be carefully managed. If not managed properly, this can lead to
inconsistencies in the database.
• Timestamp Synchronization: Timestamp-based concurrency control requires synchronized
clocks across all transactions. If the clocks are not synchronized, it can lead to incorrect
ordering of transactions.
• Timestamp Allocation: Allocating unique timestamps to each transaction can be
challenging, especially in distributed systems where transactions may be initiated at
different locations.