Transaction Management


Transaction Management

• Transaction Concept
• Transaction State
• Concurrent Executions
• Serializability
Transaction State
• Active – the initial state; the transaction stays in this
state while it is executing
• Partially committed – after the final statement has been
executed.
• Failed -- after the discovery that normal execution can no
longer proceed.
• Aborted – after the transaction has been rolled back and
the database restored to its state prior to the start of the
transaction. Two options after it has been aborted:
• restart the transaction
• can be done only if no internal logical error
• kill the transaction
• Committed – after successful completion.
Transaction State (Cont.)
[State-transition diagram: active → partially committed → committed; active → failed; partially committed → failed; failed → aborted]
Transaction Concept
• A transaction is a unit of program execution that
accesses and possibly updates various data items.
• E.g. transaction to transfer $50 from account A to
account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)

• Two main issues to deal with:


• Failures of various kinds, such as hardware failures and
system crashes
• Concurrent execution of multiple transactions
Example of Fund Transfer
• Transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
• Atomicity requirement
• if the transaction fails after step 3 and before step 6, money will be “lost” leading
to an inconsistent database state
• Failure could be due to software or hardware
• the system should ensure that updates of a partially executed transaction are not
reflected in the database
• Durability requirement — once the user has been notified that the transaction has
completed (i.e., the transfer of the $50 has taken place), the updates to the database
by the transaction must persist even if there are software or hardware failures.
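The atomicity requirement above can be illustrated with a small sketch. This is not the system's actual mechanism, just a toy in-memory "database" (a dict) where rollback is done by restoring a snapshot; the name `transfer` and the failure flag are illustrative only.

```python
# Sketch: atomicity of the fund transfer, assuming a toy in-memory
# key-value "database". All names here are illustrative.

def transfer(db, a, b, amount, fail_after_write_a=False):
    """Transfer `amount` from account a to account b atomically."""
    snapshot = dict(db)          # remember state before the transaction
    try:
        db[a] = db[a] - amount   # steps 1-3: read(A), A := A - 50, write(A)
        if fail_after_write_a:
            raise RuntimeError("crash between step 3 and step 6")
        db[b] = db[b] + amount   # steps 4-6: read(B), B := B + 50, write(B)
    except Exception:
        db.clear()
        db.update(snapshot)      # roll back: restore pre-transaction state
        raise

db = {"A": 1000, "B": 2000}
transfer(db, "A", "B", 50)       # succeeds: A = 950, B = 2050

db2 = {"A": 1000, "B": 2000}
try:
    transfer(db2, "A", "B", 50, fail_after_write_a=True)
except RuntimeError:
    pass                         # db2 is back to its original state
```

Without the snapshot-and-restore step, the failed run would leave A debited but B not credited, the "lost money" state the slide describes.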
Example of Data Access
[Diagram: disk blocks A and B are brought into buffer blocks in main memory by input(A) and written back by output(B); transactions T1 and T2 copy values between the buffer and their private work areas (x1, x2, y1) using read(X) and write(Y).]
Example of Fund Transfer (Cont.)
• Transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
• Consistency requirement in above example:
• the sum of A and B is unchanged by the execution of the transaction
• In general, consistency requirements include
• Explicitly specified integrity constraints such as primary keys and foreign keys
• Implicit integrity constraints
• e.g. sum of balances of all accounts, minus sum of loan amounts must equal value of cash-in-hand
• A transaction must see a consistent database.
• During transaction execution the database may be temporarily inconsistent.
• When the transaction completes successfully the database must be consistent
• Erroneous transaction logic can lead to inconsistency
Example of Fund Transfer (Cont.)
• Isolation requirement — if between steps 3 and 6, another
transaction T2 is allowed to access the partially updated
database, it will see an inconsistent database (the sum A + B
will be less than it should be).
T1 T2
1. read(A)
2. A := A – 50
3. write(A)
read(A), read(B), print(A+B)
4. read(B)
5. B := B + 50
6. write(B)

• Isolation can be ensured trivially by running transactions


serially
• that is, one after the other.
• However, executing multiple transactions concurrently has
significant benefits, as we will see later.
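The isolation failure described above can be traced step by step. This sketch (illustrative only, not from the slides) interleaves T2's read between T1's steps 3 and 6:

```python
# Sketch: T2 reads A and B while T1's transfer is only half done,
# so T2 observes an inconsistent sum. Illustrative names only.

db = {"A": 1000, "B": 2000}

# T1, steps 1-3: debit A
db["A"] -= 50

# T2 runs here, between T1's steps 3 and 6
observed_sum = db["A"] + db["B"]   # 950 + 2000 = 2950, not 3000

# T1, steps 4-6: credit B
db["B"] += 50

final_sum = db["A"] + db["B"]      # 3000: consistent once T1 finishes
```

T2 sees $50 "missing" even though no money is ever lost; running the transactions serially would have given T2 either the before state or the after state, both with sum 3000.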
ACID Properties
A transaction is a unit of program execution that accesses and possibly
updates various data items. To preserve the integrity of data, the database
system must ensure:
• Atomicity. Either all operations of the transaction are properly reflected
in the database or none are.
• Consistency. Execution of a transaction in isolation preserves the
consistency of the database.
• Isolation. Although multiple transactions may execute concurrently,
each transaction must be unaware of other concurrently executing
transactions. Intermediate transaction results must be hidden from
other concurrently executed transactions.
• That is, for every pair of transactions Ti and Tj, it appears to Ti that
either Tj, finished execution before Ti started, or Tj started execution
after Ti finished.
• Durability. After a transaction completes successfully, the changes it
has made to the database persist, even if there are system failures.
Concurrent Executions
• Multiple transactions are allowed to run concurrently in the system. Advantages are:
• increased processor and disk utilization, leading to better transaction throughput
• E.g. one transaction can be using the CPU while another is reading from or writing to the disk
• reduced average response time for transactions: short transactions need not wait behind long ones.
• Concurrency control schemes – mechanisms to achieve isolation
• that is, to control the interaction among the concurrent transactions in order to prevent them from destroying the
consistency of the database
Schedules

• Schedule – a sequence of instructions that specifies the chronological order in which instructions of
concurrent transactions are executed
• a schedule for a set of transactions must consist of all instructions of those transactions
• must preserve the order in which the instructions appear in each individual transaction
• A transaction that successfully completes its execution will have a commit instruction as the last
statement
• by default, a transaction is assumed to execute a commit instruction as its last step
• A transaction that fails to successfully complete its execution will have an abort instruction as the last
statement
Schedule 1
• Let T1 transfer $50 from A to B, and T2 transfer 10% of the balance from A to
B.
• A serial schedule in which T1 is followed by T2 :
Schedule 2
• A serial schedule where T2 is followed by T1
Schedule 3
• Let T1 and T2 be the transactions defined previously. The following
schedule is not a serial schedule, but it is equivalent to Schedule 1.

In Schedules 1, 2 and 3, the sum A + B is preserved.


Schedule 4
• The following concurrent schedule does
not preserve the value of (A + B ).
Serializability

• Basic Assumption – Each transaction preserves database consistency.


• Thus serial execution of a set of transactions preserves database
consistency.
• A (possibly concurrent) schedule is serializable if it is equivalent to a
serial schedule. Different forms of schedule equivalence give rise to the
notions of:
1. conflict serializability
2. view serializability
Simplified view of transactions
• We ignore operations other than read and write instructions
• We assume that transactions may perform arbitrary computations on data in
local buffers in between reads and writes.
• Our simplified schedules consist of only read and write instructions.
Conflicting Instructions
• Instructions li and lj of transactions Ti and Tj respectively, conflict if and only if there exists some item Q
accessed by both li and lj, and at least one of these instructions wrote Q.
1. li = read(Q), lj = read(Q). li and lj don’t conflict.

2. li = read(Q), lj = write(Q). They conflict.

3. li = write(Q), lj = read(Q). They conflict.

4. li = write(Q), lj = write(Q). They conflict.


• Intuitively, a conflict between li and lj forces a (logical) temporal order between them.
• If li and lj are consecutive in a schedule and they do not conflict, their results would remain the same
even if they had been interchanged in the schedule.
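The four cases above reduce to one rule, sketched below with instructions represented as (transaction, operation, item) triples. The representation is an assumption of this sketch, not the book's notation:

```python
# Sketch: the conflict test. Two instructions conflict iff they belong to
# different transactions, access the same item, and at least one is a write.

def conflicts(i, j):
    ti, op_i, q_i = i
    tj, op_j, q_j = j
    return ti != tj and q_i == q_j and ("w" in (op_i, op_j))

assert not conflicts(("T1", "r", "Q"), ("T2", "r", "Q"))  # case 1
assert conflicts(("T1", "r", "Q"), ("T2", "w", "Q"))      # case 2
assert conflicts(("T1", "w", "Q"), ("T2", "r", "Q"))      # case 3
assert conflicts(("T1", "w", "Q"), ("T2", "w", "Q"))      # case 4
assert not conflicts(("T1", "w", "Q"), ("T2", "w", "R"))  # different items
```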
Conflict Serializability
• If a schedule S can be transformed into a schedule S´ by a series of swaps
of non-conflicting instructions, we say that S and S´ are conflict equivalent.
• We say that a schedule S is conflict serializable if it is conflict equivalent to
a serial schedule
Conflict Serializability (Cont.)
• Schedule 3 can be transformed into Schedule 6, a serial schedule where T2 follows T1, by a series of swaps
of non-conflicting instructions. Therefore, Schedule 3 is conflict serializable.

Schedule 3 Schedule 6
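In practice, conflict serializability is tested with a precedence graph rather than by trying swaps: add an edge Ti → Tj for each pair of conflicting steps where Ti's step comes first, and check for a cycle. This sketch (illustrative code and example schedules, assuming the same (transaction, op, item) representation as before) shows a serializable interleaving and a non-serializable one:

```python
# Sketch: conflict-serializability test via precedence-graph cycle detection.

def precedence_graph(schedule):
    edges = set()
    for k, (ti, op_i, q_i) in enumerate(schedule):
        for tj, op_j, q_j in schedule[k + 1:]:
            if ti != tj and q_i == q_j and "w" in (op_i, op_j):
                edges.add((ti, tj))   # Ti's conflicting step precedes Tj's
    return edges

def has_cycle(edges):
    nodes = {n for e in edges for n in e}
    adj = {n: [b for a, b in edges if a == n] for n in nodes}
    state = {}                        # None = unvisited, "in" = on stack
    def dfs(n):
        state[n] = "in"
        for m in adj[n]:
            if state.get(m) == "in" or (state.get(m) is None and dfs(m)):
                return True
        state[n] = "done"
        return False
    return any(state.get(n) is None and dfs(n) for n in nodes)

def conflict_serializable(schedule):
    return not has_cycle(precedence_graph(schedule))

# A Schedule-3-style interleaving of the two transfers: serializable
s3 = [("T1","r","A"),("T1","w","A"),("T2","r","A"),("T2","w","A"),
      ("T1","r","B"),("T1","w","B"),("T2","r","B"),("T2","w","B")]
# A Schedule-4-style interleaving: T1 -> T2 on A but T2 -> T1 on B (cycle)
s4 = [("T1","r","A"),("T2","r","B"),("T2","w","B"),("T1","w","A"),
      ("T2","r","A"),("T1","r","B"),("T1","w","B"),("T2","w","A")]
```

For `s3` every conflict orders T1 before T2, so the graph is acyclic and the schedule is equivalent to the serial schedule T1, T2; in `s4` the edges T1 → T2 and T2 → T1 form a cycle.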
Anomalies with Interleaved Execution
 Reading Uncommitted Data (WR Conflicts, “dirty reads”): T2 reads a value written by T1 before T1 commits; if T1 later aborts, T2 has acted on data that never officially existed.
Anomalies with Interleaved Execution
 Unrepeatable Reads (RW Conflicts): T1 reads the same item twice and sees different values because T2 wrote it in between.
Anomalies (Continued)
• Overwriting Uncommitted Data (WW Conflicts): T2 overwrites an uncommitted value written by T1, so one of the two updates is lost.
Concurrency Control
Outline
• Lock-Based Protocols
• Timestamp-Based Protocols
Lock-Based Protocols
• A lock is a mechanism to control concurrent access to a data item
• Data items can be locked in two modes :
1. exclusive (X) mode. Data item can be both read as well as
written. X-lock is requested using lock-X instruction.
2. shared (S) mode. Data item can only be read. S-lock is
requested using lock-S instruction.
• Lock requests are made to the concurrency-control manager by the
programmer. Transaction can proceed only after request is granted.
Lock-Based Protocols (Cont.)
• Lock-compatibility matrix

• A transaction may be granted a lock on an item if the requested lock is compatible with
locks already held on the item by other transactions
• Any number of transactions can hold shared locks on an item,
• but if any transaction holds an exclusive lock on the item, no other transaction may hold any lock on
the item.
• If a lock cannot be granted, the requesting transaction is made to wait till all
incompatible locks held by other transactions have been released. The lock is then
granted.
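The lock-compatibility matrix can be written down directly: S is compatible only with S. A minimal sketch of the grant check (the table encoding and function name are illustrative assumptions):

```python
# Sketch: the S/X lock-compatibility matrix and a grant check.
# A requested mode can be granted only if it is compatible with every
# mode already held by other transactions on the item.

COMPATIBLE = {("S", "S"): True, ("S", "X"): False,
              ("X", "S"): False, ("X", "X"): False}

def can_grant(requested, held_modes):
    return all(COMPATIBLE[(requested, h)] for h in held_modes)

assert can_grant("S", ["S", "S"])   # any number of shared locks coexist
assert not can_grant("X", ["S"])    # X must wait while others hold S
assert not can_grant("S", ["X"])    # nothing coexists with an X lock
assert can_grant("X", [])           # free item: grant immediately
```

When `can_grant` returns False, the requesting transaction waits until the incompatible locks are released, exactly as the bullet above describes.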
Lock-Based Protocols (Cont.)
• Example of a transaction performing locking:
T2: lock-S(A);
read (A);
unlock(A);
lock-S(B);
read (B);
unlock(B);
display(A+B)
• Locking as above is not sufficient to guarantee serializability — if A and B get
updated in-between the read of A and B, the displayed sum would be wrong.
• A locking protocol is a set of rules followed by all transactions while requesting
and releasing locks. Locking protocols restrict the set of possible schedules.
The Two-Phase Locking Protocol
• This protocol ensures conflict-serializable schedules.
• Phase 1: Growing Phase
• Transaction may obtain locks
• Transaction may not release locks
• Phase 2: Shrinking Phase
• Transaction may release locks
• Transaction may not obtain locks

• The protocol assures serializability. It can be proved that the transactions can
be serialized in the order of their lock points (i.e., the point where a
transaction acquired its final lock).
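The growing/shrinking rule is easy to enforce mechanically: the first unlock flips the transaction into its shrinking phase, after which lock requests are errors. A minimal sketch (the class and its names are illustrative, not a real lock manager; it tracks only the 2PL rule, not conflicts between transactions):

```python
# Sketch: enforcing the two-phase rule inside a transaction object.

class TwoPhaseTransaction:
    def __init__(self):
        self.locks = set()
        self.shrinking = False   # False = growing phase

    def lock(self, item, mode):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock after first unlock")
        self.locks.add((item, mode))

    def unlock(self, item, mode):
        self.shrinking = True    # first unlock ends the growing phase
        self.locks.discard((item, mode))

t = TwoPhaseTransaction()
t.lock("A", "X")
t.lock("B", "X")        # growing phase: OK
t.unlock("A", "X")      # shrinking phase begins
try:
    t.lock("C", "S")    # violates two-phase locking
except RuntimeError:
    pass                # request rejected, as the protocol requires
```

The lock point of `t` is just after `lock("B", "X")`; under 2PL, transactions serialize in lock-point order.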
The Two-Phase Locking Protocol (Cont.)
• There can be conflict serializable schedules that cannot be obtained if
two-phase locking is used.
• However, in the absence of extra information (e.g., ordering of access to
data), two-phase locking is needed for conflict serializability in the
following sense:
• Given a transaction Ti that does not follow two-phase locking, we can find a
transaction Tj that uses two-phase locking, and a schedule for Ti and Tj that is not
conflict serializable.
Lock Conversions
• Two-phase locking with lock conversions:
– First Phase:
• can acquire a lock-S on item
• can acquire a lock-X on item
• can convert a lock-S to a lock-X (upgrade)
– Second Phase:
• can release a lock-S
• can release a lock-X
• can convert a lock-X to a lock-S (downgrade)
• This protocol assures serializability. But still relies on the programmer to
insert the various locking instructions.
Deadlocks
• Consider the partial schedule

• Neither T3 nor T4 can make progress — executing lock-S(B) causes T4 to wait for T3 to release its lock
on B, while executing lock-X(A) causes T3 to wait for T4 to release its lock on A.
• Such a situation is called a deadlock.
• To handle a deadlock one of T3 or T4 must be rolled back
and its locks released.
Deadlocks (Cont.)

• Two-phase locking does not ensure freedom from deadlocks.


• In addition to deadlocks, there is a possibility of starvation.
• Starvation occurs if the concurrency control manager is badly designed. For
example:
• A transaction may be waiting for an X-lock on an item, while a sequence of other
transactions request and are granted an S-lock on the same item.
• The same transaction is repeatedly rolled back due to deadlocks.
• Concurrency control manager can be designed to prevent starvation.
Deadlocks (Cont.)
• The potential for deadlock exists in most locking protocols. Deadlocks are a necessary
evil.
• When a deadlock occurs there is a possibility of cascading roll-backs.
• Cascading roll-back is possible under two-phase locking. To avoid this, follow a
modified protocol called strict two-phase locking -- a transaction must hold all its
exclusive locks till it commits/aborts.
• Rigorous two-phase locking is even stricter. Here, all locks are held till commit/abort.
In this protocol transactions can be serialized in the order in which they commit.
Deadlock Handling
• System is deadlocked if there is a set of transactions such that every
transaction in the set is waiting for another transaction in the set.
• Deadlock prevention protocols ensure that the system will never enter into a
deadlock state. Some prevention strategies :
• Require that each transaction locks all its data items before it begins execution
(predeclaration).
• Impose partial ordering of all data items and require that a transaction can lock data
items only in the order specified by the partial order.
More Deadlock Prevention Strategies
• Following schemes use transaction timestamps for the sake of
deadlock prevention alone.
• wait-die scheme — non-preemptive
• an older transaction may wait for a younger one to release a data item
(older means smaller timestamp). Younger transactions never wait for
older ones; they are rolled back instead.
• a transaction may die several times before acquiring the needed data item
• wound-wait scheme — preemptive
• an older transaction wounds (forces rollback of) a younger transaction
instead of waiting for it. Younger transactions may wait for older ones.
• may result in fewer rollbacks than the wait-die scheme.
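The two schemes differ only in what happens to the requesting transaction when the holder is younger or older. A minimal sketch of both decisions (function names and return strings are illustrative; smaller timestamp means older):

```python
# Sketch: the wait-die and wound-wait decisions. Each function returns
# what happens to the *requesting* transaction (or to the lock holder).

def wait_die(ts_requester, ts_holder):
    # non-preemptive: older requester waits, younger requester dies
    return "wait" if ts_requester < ts_holder else "rollback"

def wound_wait(ts_requester, ts_holder):
    # preemptive: older requester wounds (rolls back) the younger holder,
    # younger requester waits
    return "wound holder" if ts_requester < ts_holder else "wait"

assert wait_die(1, 5) == "wait"            # older requester waits
assert wait_die(5, 1) == "rollback"        # younger requester dies
assert wound_wait(1, 5) == "wound holder"  # older preempts younger holder
assert wound_wait(5, 1) == "wait"          # younger waits for older
```

In both schemes a rolled-back transaction restarts with its original timestamp, so it eventually becomes the oldest and cannot starve.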
Deadlock prevention (Cont.)
• Both in wait-die and in wound-wait schemes, a rolled-back
transaction is restarted with its original timestamp. Older
transactions thus have precedence over newer ones, and starvation is
hence avoided.
• Timeout-Based Schemes:
• a transaction waits for a lock only for a specified amount of time. If the lock
has not been granted within that time, the transaction is rolled back and
restarted.
• Thus, deadlocks are not possible.
• simple to implement; but starvation is possible. It is also difficult to determine
a good value for the timeout interval.
Deadlock Detection
• Deadlocks can be described as a wait-for graph, which consists of a pair G =
(V,E),
• V is a set of vertices (all the transactions in the system)
• E is a set of edges; each element is an ordered pair Ti → Tj.
• If Ti → Tj is in E, then there is a directed edge from Ti to Tj, implying that Ti is
waiting for Tj to release a data item.
• When Ti requests a data item currently being held by Tj, then the edge Ti → Tj
is inserted in the wait-for graph. This edge is removed only when Tj is no
longer holding a data item needed by Ti.
• The system is in a deadlock state if and only if the wait-for graph has a cycle.
Must invoke a deadlock-detection algorithm periodically to look for cycles.
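The detection algorithm itself is just a cycle search over the wait-for graph. A minimal sketch, representing each edge as a pair (Ti, Tj) meaning "Ti waits for Tj" (the representation and function name are illustrative):

```python
# Sketch: deadlock detection by depth-first cycle search in the wait-for graph.

def deadlocked(edges):
    waits = {}
    for a, b in edges:
        waits.setdefault(a, []).append(b)
    visiting, done = set(), set()
    def dfs(n):
        visiting.add(n)              # n is on the current search path
        for m in waits.get(n, []):
            if m in visiting or (m not in done and dfs(m)):
                return True          # reached a node already on the path
        visiting.discard(n)
        done.add(n)
        return False
    return any(n not in done and dfs(n) for n in list(waits))

# T3 waits for T4 and T4 waits for T3: the cycle means deadlock
assert deadlocked([("T3", "T4"), ("T4", "T3")])
# A simple waiting chain with no cycle is not a deadlock
assert not deadlocked([("T1", "T2"), ("T2", "T3")])
```

The first example mirrors the T3/T4 deadlock from the earlier slide; recovery would pick one of the two as a victim and roll it back, removing its edges from the graph.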
Deadlock Detection (Cont.)

Wait-for graph without a cycle Wait-for graph with a cycle


Deadlock Recovery
• When deadlock is detected :
• Some transaction will have to be rolled back (made a victim) to break the deadlock.
Select as victim the transaction that will incur minimum cost.
• Rollback -- determine how far to roll back transaction
• Total rollback: Abort the transaction and then restart it.
• More effective to roll back transaction only as far as necessary to break deadlock.
• Starvation happens if same transaction is always chosen as victim. Include the
number of rollbacks in the cost factor to avoid starvation
Timestamp-Based Protocols
• Each transaction is issued a timestamp when it enters the system. If an old transaction
Ti has time-stamp TS(Ti), a new transaction Tj is assigned time-stamp TS(Tj) such that
TS(Ti) <TS(Tj).
• The protocol manages concurrent execution such that the time-stamps determine the
serializability order.
• In order to assure such behavior, the protocol maintains for each data Q two
timestamp values:
• W-timestamp(Q) is the largest time-stamp of any transaction that executed write(Q) successfully.
• R-timestamp(Q) is the largest time-stamp of any transaction that executed read(Q) successfully.
Timestamp-Based Protocols (Cont.)
• The timestamp ordering protocol ensures that any conflicting read and
write operations are executed in timestamp order.
• Suppose a transaction Ti issues a read(Q)
1. If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was
already overwritten.
 Hence, the read operation is rejected, and Ti is rolled back.
2. If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-
timestamp(Q) is set to max(R-timestamp(Q), TS(Ti)).
Timestamp-Based Protocols (Cont.)
• Suppose that transaction Ti issues write(Q).
1. If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing
was needed previously, and the system assumed that that value
would never be produced.
 Hence, the write operation is rejected, and Ti is rolled back.
2. If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an
obsolete value of Q.
 Hence, this write operation is rejected, and Ti is rolled back.
3. Otherwise, the write operation is executed, and W-timestamp(Q) is
set to TS(Ti).
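The read and write tests above translate almost line for line into code. A minimal sketch over per-item R- and W-timestamps held in two dicts (the dicts, default value 0, and return strings are illustrative assumptions; this is the basic protocol without the Thomas write rule):

```python
# Sketch: the timestamp-ordering read/write checks.
R, W = {}, {}   # R-timestamp(Q) and W-timestamp(Q), defaulting to 0

def read(ts, q):
    if ts < W.get(q, 0):
        return "rollback"           # Q was already overwritten
    R[q] = max(R.get(q, 0), ts)     # record the latest successful reader
    return "ok"

def write(ts, q):
    if ts < R.get(q, 0):
        return "rollback"           # a later transaction already read Q
    if ts < W.get(q, 0):
        return "rollback"           # Ti would write an obsolete value
    W[q] = ts
    return "ok"

assert read(2, "Q") == "ok"         # R-timestamp(Q) becomes 2
assert write(1, "Q") == "rollback"  # T1's value was "needed previously"
assert write(3, "Q") == "ok"        # W-timestamp(Q) becomes 3
assert read(2, "Q") == "rollback"   # T2 would read an overwritten value
```

Note that a rejected transaction never waits, it is rolled back, which is why the protocol is deadlock-free.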
Example Use of the Protocol
A partial schedule for several data items for transactions with
timestamps 1, 2, 3, 4, 5
Correctness of Timestamp-Ordering Protocol

• The timestamp-ordering protocol guarantees serializability since all the arcs in
the precedence graph are of the form:

  transaction with smaller timestamp → transaction with larger timestamp

Thus, there will be no cycles in the precedence graph.


• Timestamp protocol ensures freedom from deadlock as no transaction ever
waits.
• But the schedule may not be cascade-free, and may not even be recoverable.
Recoverability and Cascade Freedom
• Problem with timestamp-ordering protocol:
• Suppose Ti aborts, but Tj has read a data item written by Ti
• Then Tj must abort; if Tj had been allowed to commit earlier, the schedule is not recoverable.
• Further, any transaction that has read a data item written by Tj must abort
• This can lead to cascading rollback --- that is, a chain of rollbacks
• Solution 1:
• A transaction is structured such that its writes are all performed at the end of its processing
• All writes of a transaction form an atomic action; no transaction may execute while a
transaction is being written
• A transaction that aborts is restarted with a new timestamp
• Solution 2: Limited form of locking: wait for data to be committed before reading it
• Solution 3: Use commit dependencies to ensure recoverability
Failure Classification
• Transaction failure :
• Logical errors: transaction cannot complete due to some internal error condition
• System errors: the database system must terminate an active transaction due to an
error condition (e.g., deadlock)
• System crash: a power failure or other hardware or software failure causes the
system to crash.
• Fail-stop assumption: non-volatile storage contents are assumed to not be corrupted by
system crash
• Database systems have numerous integrity checks to prevent corruption of disk data
• Disk failure: a head crash or similar disk failure destroys all or part of disk
storage
• Destruction is assumed to be detectable: disk drives use checksums to detect failures
Q.
Which of the following schedules is (conflict) serializable? For each serializable schedule, determine the
equivalent serial schedule(s).
(a) r1(X); r3(X); w1(X); r2(X); w3(X)
(b) r1(X); r3(X); w3(X); w1(X); r2(X)
(c) r3(X); r2(X); w3(X); r1(X); w1(X)
(d) r3(X); r2(X); r1(X); w3(X); w1(X)
Q1. Consider the following two transactions:
T1 T2
Read(A) Read(B)
Read(B) Read(A)
If A = 0 then B = B + 1 If B = 0 then A = A + 1
Write(B) Write(A)
Let the consistency requirement be A = 0 ∨ B = 0, with A = B = 0 the initial values.
 
(a) Show that every serial execution involving these two transactions preserves the consistency of
the database.
(b) Show a concurrent execution of T1 and T2 that produces a non-serializable schedule.
(c) Is there a concurrent execution of T1 and T2 that produces a serializable schedule?
Q2. Consider the three transaction T1, T2 and T3 and the schedules S1 and S2 given below.
Draw the Serializability (Precedence) graph for S1 and S2 and state whether each schedule
is serializable (conflict) or not. If a schedule is serializable write down the equivalent serial
schedule(s)
T1: r1(X); r1(Z); w1(X)
T2: r2(Z); r2(Y); w2(Z); w2(Y)
T3: r3(X); r3(Y); w3(Y)
S1: r1(X); r2(Z); r1(Z); r3(X); r3(Y); w1(X); w3(Y); r2(Y); w2(Z); w2(Y)
S2: r1(X); r2(Z); r3(X); r1(Z); r2(Y); r3(Y); w1(X); w2(Z); w3(Y); w2(Y)
Recovery System
Types of Failure: failures are generally classified as transaction, system, and media failures.
There are several possible reasons for a transaction to fail in the middle of execution:

(1) A computer failure (system crash): a hardware, software, or network error occurs in
the computer system during transaction execution. Hardware crashes are usually media failures,
for example, a main-memory failure.

(2) A transaction or system error: some operation in the transaction causes it to fail, such
as integer overflow or division by zero. Transaction failure may also occur because of
erroneous parameter values or because of a logical programming error.

(3) Local errors or exception conditions detected by the transaction: during transaction
execution, certain conditions may occur that necessitate cancellation of the transaction.
For example, data for the transaction may not be found.

(4) Concurrency control enforcement: the concurrency control method may decide to
abort the transaction, to be restarted later, because it violates serializability or because several
transactions are in a state of deadlock.
 
LOG BASED RECOVERY
 Transaction identifier: the unique ID of the transaction that performed the write operation.
 Data-item identifier: the unique ID of the data item; typically it is the location of the data item on disk.
 Old value: the value of the data item before the write.
 New value: the value of the data item after the update.
Special log records also mark the start and the commit or abort of a transaction. We denote the various types of
log records as follows:
 <Ti, Start> transaction Ti has started
 <Ti, Xj, V1, V2> Ti = transaction id
Xj = data item
V1 = old value
V2 = new value
 <Ti, Commit> transaction Ti completed successfully
 <Ti, Abort> transaction Ti terminated.
There are two types of log-based techniques to ensure the atomicity of transactions.
(1) Deferred (Delayed) Database Modification: this technique ensures transaction atomicity by recording
all database modifications in the log but delaying the execution of write operations until the transaction
partially commits.
When a transaction partially commits, the information in the log associated with the transaction is used in
executing the delayed writes. If the system crashes before the transaction completes its execution, or if the
transaction aborts, the information in the log is simply ignored.
The execution of transaction Ti proceeds as follows: before Ti starts its execution, the record <Ti, Start> is
written to the log; a write(A) operation by Ti results in writing a new record to the log; and finally, when
transaction Ti enters the partially committed state, a record <Ti, Commit> is written to the log.
To understand how the log works, we take two transactions T0 and T1: T0 transfers Rs 50
from A to B, and T1 withdraws Rs 100 from C, with initial values as follows:
A = 1000, B = 2000, and C = 700
The transaction execution sequence is as follows:
T0 T1
Read(A) Read(C)
A = A - 50 C = C - 100
Write(A) Write(C)
Read(B)
B = B + 50
Write(B)
The log containing the relevant information on these two transactions is shown below:
LOG
<T0, Start>            The actual values of A, B, and C after
<T0, A, 1000, 950>     the transactions are:
<T0, B, 2000, 2050>    A = 950
<T0, Commit>           B = 2050
<T1, Start>            C = 600
<T1, C, 700, 600>
<T1, Commit>
To handle any failure that results in the loss of information in volatile storage, the recovery scheme uses
the following recovery procedure.
Recovery Algorithms
• Consider transaction Ti that transfers $50 from account A to account B
• Two updates: subtract 50 from A and add 50 to B
• Transaction Ti requires updates to A and B to be output to the database.
• A failure may occur after one of these modifications has been made but before
both of them are made.
• Modifying the database without ensuring that the transaction will commit may
leave the database in an inconsistent state
• Not modifying the database may result in lost updates if failure occurs just after
transaction commits
• Recovery algorithms have two parts
1. Actions taken during normal transaction processing to ensure enough information
exists to recover from failures
2. Actions taken after a failure to recover the database contents to a state that
ensures atomicity, consistency and durability
Log-Based Recovery

• A log is kept on stable storage.


• The log is a sequence of log records, and maintains a record of update
activities on the database.
• When transaction Ti starts, it registers itself by writing a
<Ti start>log record
• Before Ti executes write(X), a log record
<Ti, X, V1, V2>
is written, where V1 is the value of X before the write (the old
value), and V2 is the value to be written to X (the new value).
• When Ti finishes its last statement, the log record <Ti commit> is
written.
• Two approaches using logs
• Deferred database modification
• Immediate database modification
Immediate Database Modification
• The immediate-modification scheme allows updates of an uncommitted transaction to be
made to the buffer, or the disk itself, before the transaction commits
• Update log record must be written before database item is written
• We assume that the log record is output directly to stable storage
• (We will see later how to postpone log record output to some extent.)
• Output of updated blocks to stable storage can take place at any time before or after
transaction commit
• Order in which blocks are output can be different from the order in which they are written.
• The deferred-modification scheme performs updates to buffer/disk only at the time of
transaction commit
• Simplifies some aspects of recovery
• But has overhead of storing local copy
Transaction Commit
• A transaction is said to have committed when its commit log record is
output to stable storage
• all previous log records of the transaction must have been output already
• Writes performed by a transaction may still be in the buffer when the
transaction commits, and may be output later
Immediate Database Modification Example

Log                    Write               Output

<T0 start>
<T0, A, 1000, 950>
<T0, B, 2000, 2050>
                       A = 950
                       B = 2050
<T0 commit>
<T1 start>
<T1, C, 700, 600>
                       C = 600
                                           BB, BC
<T1 commit>
                                           BA

• Note: BX denotes the block containing X. Here BC is output before T1
commits, while BA is output only after T0 commits.
Concurrency Control and Recovery
• With concurrent transactions, all transactions share a single disk buffer and a
single log
• A buffer block can have data items updated by one or more transactions
• We assume that if a transaction Ti has modified an item, no other transaction
can modify the same item until Ti has committed or aborted
• i.e. the updates of uncommitted transactions should not be visible to other
transactions
• Otherwise how to perform undo if T1 updates A, then T2 updates A and commits, and finally T1
has to abort?
• Can be ensured by obtaining exclusive locks on updated items and holding the locks
till end of transaction (strict two-phase locking)
• Log records of different transactions may be interspersed in the log.
Undo and Redo Operations
• Undo of a log record <Ti, X, V1, V2> writes the old value V1 to X
• Redo of a log record <Ti, X, V1, V2> writes the new value V2 to X
• Undo and Redo of Transactions
• undo(Ti) restores the value of all data items updated by Ti to their old values, going
backwards from the last log record for Ti
• each time a data item X is restored to its old value V a special log record <Ti , X, V> is written out
• when undo of a transaction is complete, a log record
<Ti abort> is written out.
• redo(Ti) sets the value of all data items updated by Ti to the new values, going
forward from the first log record for Ti
• No logging is done in this case
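The undo and redo operations above can be sketched directly over a list of update records, each represented here as a tuple (txn, item, old, new) standing for <Ti, X, V1, V2> (the tuple encoding is an assumption of this sketch; the compensation log records written during undo are omitted for brevity):

```python
# Sketch: undo scans the transaction's records backwards restoring old
# values; redo scans forwards installing new values.

def undo(log, txn, db):
    for t, x, old, new in reversed(log):
        if t == txn:
            db[x] = old        # restore the old value V1

def redo(log, txn, db):
    for t, x, old, new in log:
        if t == txn:
            db[x] = new        # reinstall the new value V2

log = [("T0", "A", 1000, 950), ("T0", "B", 2000, 2050)]

db = {"A": 950, "B": 2050}
undo(log, "T0", db)            # db back to {"A": 1000, "B": 2000}
redo(log, "T0", db)            # db forward to {"A": 950, "B": 2050}
```

Scanning direction matters when a transaction updates the same item twice: backwards for undo leaves the earliest old value, forwards for redo leaves the latest new value.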
Undo and Redo on Recovering from Failure

• When recovering after failure:


• Transaction Ti needs to be undone if the log
• contains the record <Ti start>,
• but does not contain either the record <Ti commit> or <Ti abort>.
• Transaction Ti needs to be redone if the log
• contains the records <Ti start>
• and contains the record <Ti commit> or <Ti abort>

• Note that if transaction Ti was undone earlier and the <Ti abort> record written to
the log, and a failure occurs later, then on recovery from failure Ti is redone
• such a redo redoes all the original actions including the steps that restored old values
• Known as repeating history
• Seems wasteful, but simplifies recovery greatly
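The undo/redo decision rule above amounts to one scan of the log. A minimal sketch, with records simplified to (txn, kind) tuples (the encoding and function name are illustrative assumptions):

```python
# Sketch: classify transactions after a crash. A transaction with <start>
# but no <commit>/<abort> is undone; one with <commit> or <abort> is
# redone (repeating history).

def classify(log):
    started, finished = set(), set()
    for txn, kind in log:
        if kind == "start":
            started.add(txn)
        elif kind in ("commit", "abort"):
            finished.add(txn)
    undo_set = started - finished
    redo_set = started & finished
    return undo_set, redo_set

log = [("T0", "start"), ("T0", "commit"), ("T1", "start")]
undo_set, redo_set = classify(log)   # T1 is undone, T0 is redone
```

This matches case (b) of the recovery example that follows: T0 committed before the crash and is redone, while the incomplete T1 is undone.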
Immediate DB Modification Recovery Example
Below we show the log as it appears at three instances of time.

Recovery actions in each case above are:


(a) undo (T0): B is restored to 2000 and A to 1000, and log records
<T0, B, 2000>, <T0, A, 1000>, <T0, abort> are written out
(b) redo (T0) and undo (T1): A and B are set to 950 and 2050 and C is restored to
700. Log records <T1, C, 700>, <T1, abort> are written out.
(c) redo (T0) and redo (T1): A and B are set to 950 and 2050
respectively. Then C is set to 600
Checkpoints
• Redoing/undoing all transactions recorded in the log can be very slow
1. processing the entire log is time-consuming if the system has run for a long time
2. we might unnecessarily redo transactions which have already output their updates
to the database.
• Streamline recovery procedure by periodically performing checkpointing
1. Output all log records currently residing in main memory onto stable storage.
2. Output all modified buffer blocks to the disk.
3. Write a log record < checkpoint L> onto stable storage where L is a list of all
transactions active at the time of checkpoint.
• All updates are stopped while doing checkpointing
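The payoff of the <checkpoint L> record is that recovery only needs to consider the transactions in L plus those that started after the checkpoint; anything that committed before the checkpoint already has its updates on disk. A minimal sketch, with records simplified to tuples (the encoding is an assumption of this sketch):

```python
# Sketch: using the most recent <checkpoint L> record to limit recovery.

def transactions_to_consider(log):
    active = set()          # L of the latest checkpoint seen so far
    after = set()           # transactions started after that checkpoint
    for rec in log:
        if rec[0] == "checkpoint":
            active, after = set(rec[1]), set()   # newer checkpoint resets
        elif rec[1] == "start":
            after.add(rec[0])
    return active | after

log = [("T0", "start"), ("T0", "commit"),
       ("T1", "start"),
       ("checkpoint", ["T1"]),     # T1 was active at the checkpoint
       ("T2", "start")]
considered = transactions_to_consider(log)   # {"T1", "T2"}; T0 is skipped
```

T0's updates were flushed by step 2 of the checkpoint, so the recovery procedure never has to look at its records.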
