CIS 2210 - Database Management and Design

Chapter 10: Transaction Management and Concurrency Control


This lesson discusses material from chapter 10. Objectives important to this lesson:

  1. Transactions
  2. Concurrency control
  3. Locking methods
  4. Timestamping methods
  5. Optimistic methods
  6. ANSI levels of isolation
  7. Recovery management

Chapter 10


The chapter begins with a diagram of six relations on page 483. It shows that they have five 1-to-many relationships, the "many" being represented by infinity symbols in this case. The text begins to describe what a transaction is, using this set of tables as a background. When a customer makes a purchase from this company, there are a series of events that take place, updating several tables. The entire series of related events can be considered as an example of a transaction. From the customer's point of view, one thing happened: a purchase took place. From the company's point of view several things had to happen:

  • an invoice had to be generated, affecting the Invoice, Line, and Customer tables (where the balance is stored)
  • the product inventory had to be adjusted, affecting the Product table
  • the customer's order was logged in the Account_Transaction table as a new record

We can take the book's suggestion on page 484 that we should think of a transaction as a logical unit of work in which all of the steps need to be completed. If a problem keeps some step from being completed, the steps that were already completed need to be undone. This should make you think about the COMMIT command, which should be issued before starting a transaction, and again after all steps in a transaction have been successfully completed. Why "before starting a transaction"? So that the ROLLBACK command can be issued if any step in the transaction fails, making the system forget all of those steps. The text refers to this as maintaining a consistent database state. There should be no bill for this customer unless the order can be filled. From an accounting perspective, we do not increase our accounts receivable unless we also decrease our inventory. The books have to balance. (Yes, the customer's request could be placed on a Waiting_List table, but that is not part of this example.)
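A sketch of that purchase as one transaction, in MySQL-style SQL. The table names follow the chapter's example, but the column names and values here are invented for illustration:

```sql
START TRANSACTION;

-- generate the invoice and its line item
INSERT INTO INVOICE (INV_NUMBER, CUS_CODE, INV_DATE)
VALUES (1009, 10016, '2024-01-15');

INSERT INTO LINE (INV_NUMBER, LINE_NUMBER, P_CODE, LINE_UNITS, LINE_PRICE)
VALUES (1009, 1, '13-Q2/P2', 1, 114.95);

-- adjust the product inventory
UPDATE PRODUCT
SET P_QOH = P_QOH - 1
WHERE P_CODE = '13-Q2/P2';

-- update the customer's balance
UPDATE CUSTOMER
SET CUST_BALANCE = CUST_BALANCE + 114.95
WHERE CUS_CODE = 10016;

-- log the order
INSERT INTO ACCOUNT_TRANSACTION (TRX_NUMBER, CUS_CODE, TRX_AMOUNT)
VALUES (20001, 10016, 114.95);

COMMIT;   -- only when every step has succeeded
```

If any statement fails before the COMMIT, issuing ROLLBACK returns the database to the consistent state it was in before START TRANSACTION.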

The text gives us an example of a transaction that is only a SELECT query: we ask the database for the customer's balance. This can be considered a transaction simply because the consistent database state is maintained. This example is a special case: it is easy to roll it back, since it is only one command. Most transactions take more than one SQL command, which means that we have to be careful about where each transaction starts and ends. The number of steps in a transaction is measured by the number of database requests (commands) that are actually used. The text reminds us that there may be several operating system calls that happen as a result of each database request, but these are not counted as parts of the transaction.

On page 485, the text shows us a transaction example that inserts lines into three tables, and updates values in two others. Note the fourth step in this transaction. It is an UPDATE command whose SET clause adds the invoice amount to the value already stored in CUST_BALANCE (there is no semicolon after that clause, because the command does not end there).

This line would not work as written if the CUST_BALANCE attribute did not hold numeric data. Remember the rule: data must be numeric if you are going to do math with that data. The line does work because we are adding a number to the value held in a field, and assigning the result to that field.
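As a hypothetical sketch (the amount and customer code are invented), that step has this general shape:

```sql
UPDATE CUSTOMER
SET CUST_BALANCE = CUST_BALANCE + 50.00   -- read the stored value, add, assign the result back
WHERE CUS_CODE = 10016;
```

The arithmetic in the SET clause works only because CUST_BALANCE holds numeric data.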

At the end of that example, the COMMIT command is given. This should not be done until you are sure the required changes have been made. The text describes an electrical problem that could cause one or more steps of a transaction to fail. It states that a DBMS that supports transaction management would restore the database to a consistent state in that case by rolling back the completed transaction steps. To manage this manually in MySQL, you should issue a COMMIT command before starting a transaction and another one upon ending it. You may also use the command START TRANSACTION to mark the beginning of a transaction sequence. MS SQL Server uses the phrase BEGIN TRANSACTION. Access does not directly support transaction management.
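Side by side, the two dialects look like this (a sketch only; the steps between the markers are whatever the transaction requires):

```sql
-- MySQL
START TRANSACTION;
-- ... steps of the transaction ...
COMMIT;

-- Microsoft SQL Server
BEGIN TRANSACTION;
-- ... steps of the transaction ...
COMMIT;
```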

On page 487, the text introduces some terms that describe characteristics of a good transaction:

  • Atomicity - The steps of a transaction are all-or-nothing: either every step takes place, or none of them do. This also means that the set of steps should be simplified as much as possible, so that no step can be removed from the list without changing the nature of the transaction itself. This is atomicity: the state in which removing any part would change the transaction into something else. In atomic physics, if you remove an electron or a proton from an atom, that atom becomes something different than it was. The same applies to any process that displays atomicity.
  • Consistency - The quality of preserving a consistent state in the database. If a transaction does not preserve a consistent state, it is not a good transaction and it should be rewritten.
  • Isolation - A transaction should not access data currently being accessed by another transaction. A transaction should access data in such a way that the data is locked, and unavailable to any other transaction, until the locking transaction has been completed or has been rolled back. This method ensures that each access to data is isolated from every other access to that data.
  • Durability - Once a transaction is done, it is committed to the database, making it a durable part of the database. Its changes cannot be rolled back by the DBMS at that point. This is a more stable situation than waiting until the end of a day to process all transactions from that day.

Isolation brings up an issue that must be handled by a DBMS with multiple concurrent users. When concurrent sessions request access to the same data, the Isolation characteristic requires that the transactions be queued to happen one after another. The text refers to the ability to queue transactions in this way as serializability. It means that the transactions can be carried out one at a time, typically ordered by their timestamps. Serializability supports Isolation.

On page 488, the text reviews the use of COMMIT and ROLLBACK under normal circumstances, and under two less normal ones:

  • If a program that starts a transaction ends normally, but without calling a COMMIT command, changes to the database are saved as though a COMMIT command had been given.
  • If a program that starts a transaction ends abnormally, before a COMMIT command has been given, the system should issue a ROLLBACK command, removing any pending changes from consideration. (Do you feel a little funny now about just closing a browser window when shopping on the Internet?)
  • The text notes that you should always close a transaction with a COMMIT command, instead of depending on the automatic commit expected at the close of a script.

The text describes a useful feature of transaction management, the transaction log. This is a running file that holds all steps of all transactions that change the database. This feature is required by an ANSI standard to allow rolling back incomplete transactions, and rolling forward (redoing) committed transactions whose changes had not yet been written to the database when a failure occurred. Naturally, log files can and should be discarded after a time, but they should be retained long enough to reconstruct the state of the database as needed. Note the example transaction log on page 490. It does not hold actual SQL commands, but it holds information that could be used to reconstruct the commands that it represents. Why do you think that this is so?
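As a sketch of the idea, the log itself can be stored as a table. The column names below are my own, modeled loosely on the book's figure:

```sql
CREATE TABLE TRX_LOG (
  TRL_ID     INT PRIMARY KEY,    -- log entry number
  TRX_NUM    INT,                -- the transaction this entry belongs to
  PREV_PTR   INT,                -- previous log entry for this transaction
  NEXT_PTR   INT,                -- next log entry for this transaction
  OPERATION  VARCHAR(10),        -- START, INSERT, UPDATE, DELETE, COMMIT
  TABLE_NAME VARCHAR(30),        -- table affected
  ROW_ID     VARCHAR(20),        -- row affected
  ATTRIBUTE  VARCHAR(30),        -- column affected
  BEFORE_VAL VARCHAR(50),        -- value before the change (supports rollback)
  AFTER_VAL  VARCHAR(50)         -- value after the change (supports roll-forward)
);
```

Storing before and after values, rather than SQL text, lets the DBMS undo or redo a change directly, without re-parsing and re-running commands.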

Concurrency Control

On page 490, the text defines concurrency control as the coordination of the actions of multiple users of a database system. As noted above, transactions must be serialized to work in such an environment. When they are not, problems occur:

  • lost update problem - When two transactions are updating the same data element (such as a customer's balance) coordination is vital. Assume two transactions read a customer's record at the same moment. One takes an order and increases the current balance. The other takes a payment and decreases the customer's balance. Now imagine that they each read the customer's record before the other transaction updates the balance.

    Sale reads customer's balance: 100.00
    Payment reads customer's balance: 100.00
    Sale calculates new balance: 100.00 + 50.00 = 150.00
    Payment calculates new balance: 100.00 - 50.00 = 50.00
    Sale saves new balance: 150.00
    Payment saves new balance: 50.00

    If the actions take place as shown above, the customer may get the new order, but the store is out the price of it. Should it have happened this way instead?

    Payment reads customer's balance: 100.00
    Sale reads customer's balance: 100.00
    Payment calculates new balance: 100.00 - 50.00 = 50.00
    Sale calculates new balance: 100.00 + 50.00 = 150.00
    Payment saves new balance: 50.00
    Sale saves new balance: 150.00

    This way, we get an unhappy customer who wants to know what happened. In both cases, the database is inconsistent with the actual events. This illustrates why transactions should be isolated as well as sequential. Everything would be fine if either transaction had started and finished before the other one started.

  • uncommitted data problem - A related problem occurs when one transaction reads data, saves a new value, then rolls back, while another transaction reads the new saved value, calculates a change to it, then saves its own change, oblivious to the fact that the number it was working from was a false value. An example is shown on page 492. The first transaction should never have written its change before it was done. Having done so, the second transaction read a number that was later rolled back. We get the same kind of chaos as in the example above. No transaction should be allowed to start work on data that is still in flux in another transaction. Wait your turn, children. You will be happier that way.

  • inconsistent retrievals problem - This one comes from reading a table that is being updated while you are reading it. You get some old data and some new data, then perform an operation on your faulty dataset, resulting in a faulty interpretation of the data. This one is tricky. If you think about it, you could defend the idea that the data looked the way you read it at a specific moment in time. This is not a good argument if the changes being made were corrections, but what if they were ongoing live data? Would it make sense then? Trust that the concept is correct: don't read tables while they are being changed. At the very least, you are taking measurements at more than one moment, so you are not comparing all of the rows in the same timeframe. No aggregate measure taken on a database should be trusted if the database was being updated at the time. A database can only be consistent before updates and after updates, never during updates.
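The lost update trace above is prevented by row locking. In MySQL/InnoDB, for example, each session can read the balance with SELECT ... FOR UPDATE (a sketch with invented names; the Payment session would follow the same pattern):

```sql
-- Session: Sale
START TRANSACTION;

SELECT CUST_BALANCE
FROM CUSTOMER
WHERE CUS_CODE = 10016
FOR UPDATE;          -- exclusive row lock: a concurrent Payment session blocks here

UPDATE CUSTOMER
SET CUST_BALANCE = CUST_BALANCE + 50.00
WHERE CUS_CODE = 10016;

COMMIT;              -- lock released; Payment now reads 150.00, not the stale 100.00
```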

To address the problems discussed above, a DBMS typically has a scheduler, an organizing function that attempts to put the actions of concurrent transactions in something that approximates a proper serial order. You should be frightened a bit by the text's description of the scheduler interleaving the tasks of the various transactions. The word interleaving may make you think of shuffling a deck of cards. It is less like that, and more like stacking a deck of cards to make sure that players get particular hands. That's not a good idea at a poker table, but it's a very good idea in a DBMS. Even though the tasks are not done sequentially for each transaction, they are done in a way that is meant to accomplish the same result as if they had been done in an ideal sequence.

So, why does it do this, when we know that it is unsafe to try it? It does this to make better use of the hardware and software in the system, and to move ahead on each transaction as much as possible. Note that: "as much as possible". It is often necessary to wait for a lock to resolve. We discuss locks below.

Locking Methods

We have almost discussed the idea of locking data several times. Now it is time to define some ideas about it. A lock is like a tag that says only a particular process is granted access to a data object until that access is released. A pessimistic lock is a lock that is placed on a data object with the assumption that another process is likely to cause a problem. A lock should be placed on data before reading it, held on that data while changing it, and released only after the data changes are committed. A lock manager is the part of a DBMS that automatically applies locks and constrains other processes from ignoring them.

A lock can be placed on any of several kinds of data objects. The level of precision of a lock is called its granularity. For instance, a lock may be placed on an entire database, on a single table, on a row in a table, or on a field in a row. Each of these examples is more granular than the ones before it. The text also mentions a page lock, which locks a disk block (a page), the unit of a database file that is read into and written from memory.

Lock levels:

  • Database - The entire database is locked for one process. Good for a batch update, but not good for sharing among many users.
  • Table - Locks an entire single table. Only one process can access it, but another process can access a different table. Not very useful when users need to update multiple tables in a single transaction.
  • Page - Locks a section of the file that fills a portion of memory, commonly a 4KB block. This is more useful than the table lock when you have large tables, but less useful than a row lock.
  • Row - Locks a single row at a time in a table. Multiple transactions can access different rows at the same time. Can lead to a deadly embrace/deadlock if the database is not clever enough to avoid it.
  • Field - Locks only a single field in a single row of a table. Very flexible, but takes a lot of processing to manage it for more than a few users.
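In MySQL, for example, a table lock and a row lock look like this (a sketch, using the chapter's example table):

```sql
-- Table level: no other session can write to PRODUCT until UNLOCK TABLES.
LOCK TABLES PRODUCT WRITE;
-- ... batch update of PRODUCT ...
UNLOCK TABLES;

-- Row level (InnoDB): only the matching row is locked, until COMMIT.
START TRANSACTION;
SELECT * FROM PRODUCT WHERE P_CODE = '13-Q2/P2' FOR UPDATE;
-- ... change that one row ...
COMMIT;
```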

Each of these lock levels may have a lock type/mode as well. The text mentions two:

  • binary - If a lock is binary, it can only be locked or unlocked. If an object is locked, only the transaction having the lock can use the object. If an object is unlocked, any transaction can lock it. Once a transaction is done with an object, the lock must be unlocked. Generally considered too limiting for concurrency.
  • shared/exclusive - This type allows more choices. An exclusive lock is exclusive to the process that applies the lock. This is typically used by a process that has to change the object. A shared lock is a read-only permission, allowing multiple processes to have shared locks on the same object at the same time. In this system, an object can be unlocked, shared, or exclusive. An exclusive lock can only be granted if the object in question is unlocked. A shared lock can be granted if the object is shared or unlocked.
    • To make it confusing, the lock manager has three actions in this system: READ_LOCK to read an object's state, WRITE_LOCK to grant a process a lock to an object, and UNLOCK to release a lock currently held on an object.
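In MySQL/InnoDB terms, the two modes look like this inside a transaction (a sketch; both locks are held until the transaction ends):

```sql
-- Shared (read) lock: many sessions may hold this at once; writers must wait.
SELECT P_QOH FROM PRODUCT WHERE P_CODE = '13-Q2/P2' LOCK IN SHARE MODE;
-- (MySQL 8.0 also accepts the newer spelling: ... FOR SHARE)

-- Exclusive (write) lock: granted only when no other session holds a lock on the row.
SELECT P_QOH FROM PRODUCT WHERE P_CODE = '13-Q2/P2' FOR UPDATE;
```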

Locks can cause problems: either removing serializability, or creating a deadlock. The text describes methods to avoid both, starting on page 500.

  • two-phase locking - This is a method that addresses serializability. It does not address deadlocks.

    The first phase is the growing phase. The transaction acquires all locks needed to do its job, one by one. When it has them all, it is at the stage called the locked point. No actual work is done until the locked point is reached.

    The second phase is the shrinking phase. As you might guess from the name, all locks held by the transaction are released in this phase. During this phase, no new locks are allowed.

    As noted above, this method does not prevent deadlocks. If two transactions require the same exclusive locks, and each has obtained one or more of them, both transactions are blocked from attaining their locked points. As the text notes, this can only happen with exclusive locks. Shared locks have no limit on the number of transactions that may have them in common.
  • deadlock prevention - If the system notices that a deadlock is possible, the transaction being processed is aborted, its actions (if any) are rolled back, and its currently held locks are released. The transaction is rescheduled. If we were practicing two phase locking, there should have been no actions from this transaction yet.
  • deadlock detection - If a deadlock is detected (already happening), one of the transactions is chosen as the victim. This one is aborted and rolled back. The other transaction is allowed to continue. The victim transaction is restarted.
  • deadlock avoidance - Other discussions of this topic describe deadlock avoidance as granting access only to those resources that cannot lead to a deadlock. That makes more sense than the text's version, which reads more like a description of two-phase locking.
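A deadlock between two sessions can be sketched as interleaved SQL (names invented). A DBMS that practices detection, such as MySQL/InnoDB, picks one session as the victim, rolls it back, and returns an error so that it can be retried:

```sql
-- Session A                      -- Session B
START TRANSACTION;                -- START TRANSACTION;

UPDATE CUSTOMER                   --
SET CUST_BALANCE = 0              --
WHERE CUS_CODE = 10016;           -- (A now holds the CUSTOMER row lock)

                                  -- UPDATE PRODUCT
                                  -- SET P_QOH = 0
                                  -- WHERE P_CODE = '13-Q2/P2';
                                  -- (B now holds the PRODUCT row lock)

UPDATE PRODUCT                    --
SET P_QOH = 0                     --
WHERE P_CODE = '13-Q2/P2';        -- (A waits for B)

                                  -- UPDATE CUSTOMER
                                  -- SET CUST_BALANCE = 0
                                  -- WHERE CUS_CODE = 10016;
                                  -- (B waits for A: deadlock)
```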

Timestamping Methods

Timestamping is a method often used with network-based applications. In this case, the DBMS reads the current system time when each new request is received, converts it to a global time, and gives each request a unique timestamp. Timestamps must be unique to ensure that each request has a unique spot in a queue. The text refers to timestamps as being global. It means that the same time system is in effect regardless of the physical location of the requester. Often, timestamping is done with reference to Greenwich Mean Time (GMT) or Coordinated Universal Time (UTC). GMT and UTC represent the same time, the time kept by the Royal Observatory at Greenwich, England. (Longitudes are measured as being so far west (negative values) or east (positive values) of the prime meridian, which passes through Greenwich.) Despite other sources' statements to the contrary, GMT does not observe daylight saving time changes, nor does UTC. This provides a system of timekeeping for a network that covers the Earth.

The text gives us a new word that goes along with timekeeping: monotonicity. It is a noun. It means the quality of either never increasing or never decreasing. With reference to timestamps, it means never decreasing. Each timestamp applied to a new request must be greater than/later than the timestamps applied to all previous requests. Time flows forward on a network, never backward. (And in the real world, so far.) This quality supports the requirement that all timestamps be unique.

Getting back to the subject, the text tells us that the steps in a transaction can all receive the same timestamp, because they are parts of the same request. However, subsequent transactions will have later timestamps. This allows different transactions to have timestamps that put them in an order that does not change. This order is serialized.

As discussed above, transactions can still come into conflict with each other. When this occurs, one of the transactions must be stopped, rolled back, and rescheduled, which produces more overhead for the DBMS. The text discusses two schemes for managing conflicts in timestamp systems. Each has two possible rules, based on which transaction is requesting the lock:

  • the wait/die scheme is like a first come, first served system regarding requests for locks
    • The transaction with the earlier timestamp is requesting a lock that would cause a problem:
      The later transaction is favored.
      The transaction with the earlier timestamp waits for the other transaction to finish and release its locks. Then execution of the transaction with the earlier timestamp continues.
    • The transaction with the later timestamp is requesting a lock that would cause a problem:
      The earlier transaction is favored.
      The transaction having the later timestamp is chosen to die. It is stopped, rolled back, and rescheduled with its same timestamp. The transaction with the earlier timestamp continues.
  • the wound/wait scheme is the mirror image of wait/die:
    • The transaction with the earlier timestamp is requesting a lock that would cause a problem:
      The earlier transaction is favored.
      The earlier transaction wounds/preempts the later one. The later transaction is stopped, rolled back, and rescheduled with its same timestamp.
    • The transaction with the later timestamp is requesting a lock that would cause a problem:
      The earlier transaction is favored.
      The transaction with the later timestamp waits for the other transaction to finish and release its locks. Then execution of the transaction with the later timestamp continues.

As you can see, these two methods take similar actions, but they work in different ways. Only the wait/die method has a circumstance that favors the later transaction (when the older transaction is asking for a lock). All other circumstances favor the earlier transaction. To improve processing flow, the text mentions one other action: if any transaction requests a lock, and that lock is not granted within an associated time limit, the transaction is rolled back and started again.
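MySQL/InnoDB exposes that time limit as a session setting (a sketch; the default is 50 seconds):

```sql
-- If a requested row lock is not granted within this many seconds,
-- the waiting statement fails, and the transaction can be rolled back and retried.
SET SESSION innodb_lock_wait_timeout = 10;
```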

Optimistic Methods

Back in the beginning of the chapter, we were introduced to the idea of a pessimistic lock. The text takes a different approach next. What if we assume that most locks do not actually conflict with each other? This is an optimistic approach. With this approach we do not use locks or timestamps. (Are you afraid yet?) Transactions have three phases instead:

  • Read - Read the database, make a dataset, make changes to the dataset.
  • Validation - Check whether changes made to the dataset would create integrity or consistency errors in the database. If no errors (positive test), proceed to the next phase. If errors are detected (negative test), discard the dataset and restart the transaction.
  • Write - This phase is only entered if no errors are found in the validation phase. That being so, changes are written from the dataset to the database.

The text warns us that this approach will work only if there are few changes made to the data in the database in question. If there are frequent changes, locks and timestamps should be used.
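A common SQL approximation of the three phases uses a version column. The VERSION column here is an invented addition to the chapter's CUSTOMER table:

```sql
-- Read phase: fetch the row and remember its version.
SELECT CUST_BALANCE, VERSION
FROM CUSTOMER
WHERE CUS_CODE = 10016;
-- (suppose this returned CUST_BALANCE = 100.00, VERSION = 7)

-- Validation and write phases, combined: the write succeeds only if
-- no other transaction changed the row after our read phase.
UPDATE CUSTOMER
SET CUST_BALANCE = 150.00,
    VERSION = VERSION + 1
WHERE CUS_CODE = 10016
  AND VERSION = 7;
-- If zero rows were affected, validation failed: discard the work and restart.
```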

ANSI Levels of Transaction Isolation

In addition to the methods discussed above, the ANSI standard uses different terms to describe the amount of isolation, the protection of transaction data from other transactions. It begins by defining three kinds of reads that a transaction might allow other transactions to make:

  • dirty read - another transaction can read data that is not yet committed (saved)
  • nonrepeatable read - reads of the same row at different times give different results, because the row may have been changed or deleted
  • phantom read - a query run at two different times produces more matching rows at the later time

In each case, there is an inconsistency with these reads. The ANSI standard provides four levels of increasing restriction, each one providing more isolation than the last.

  ANSI level        Allows Dirty Read?  Allows Nonrepeatable Read?  Allows Phantom Read?  Isolation/Security Rating
  Read Uncommitted  Yes                 Yes                         Yes                   bad
  Read Committed    No                  Yes                         Yes                   better
  Repeatable Read   No                  No                          Yes                   good
  Serializable      No                  No                          No                    best

This is in keeping with the rest of the material in this chapter. The text explains that we can have more isolation/restriction/security, but if we do, we reduce the amount of concurrency we can have. It's a trade-off. ANSI SQL allows a transaction to have its isolation level set when it is begun, as in this example:
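A sketch in standard SQL (SERIALIZABLE could be any of the four levels):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... the transaction that follows runs at that level ...
```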


A MySQL command to do this might look like this:
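For example (either form is valid in MySQL):

```sql
-- Set the level for the rest of the session:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Or set it for just the next transaction:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```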


Recovery Management

The last section in the chapter discusses database recovery management. We have considered several times in this chapter how to recover from a failed transaction. The question in this discussion is how to recover from a failed database. The text begins by discussing events that could cause this kind of failure:

  • hardware failure
  • software failure
  • human error or intentional human intervention
  • physical disaster, such as fire, flood, power failure (The text calls this one natural disaster, but there is nothing natural about a power failure.)

When a recovery is needed, it may rely on one or more of these features:

  • write-ahead-log protocol - This tells the DBMS to write a transaction's details in a log before the transaction is processed. The idea is that a system failure that takes place during the transaction will not keep us from being able to roll back or complete the necessary transaction.
  • redundant transaction logs - This means to store backup copies of the logs in different places, each of which is unlikely to suffer from a disaster that wipes out the others.
  • database buffers - These are copies of parts of the database that are used as draft copies of the data a transaction is working on. It is much faster to change a copy in RAM, then write several changes at once, than it is to write every change to the database as it happens. It is the same idea as using a dataset to hold changes that are written to the database when they are all ready.
  • database checkpoints - When all open buffers are written at once to the disk version of the database, that is a database checkpoint. Checkpoints need to be run often enough to keep the disk copy of the database reasonably current. When a checkpoint has run, it is recorded in the transaction log.

Okay, with those features understood, the text continues on page 507 to discuss two kinds of recovery. Both kinds assume that you restored your database to its last working version.

  • The first type is a deferred-write (also called deferred update) recovery. This is done on systems that update the transaction log immediately, but do not update the database immediately. The text gives us four steps to conduct this kind of recovery:
    1. Find the last checkpoint in the transaction log. This is the last time the database was updated.
    2. Ignore transactions that were completed before the last checkpoint. They were written during that checkpoint.
    3. For any transaction that was committed after the last checkpoint, use the DBMS to reconstruct that transaction from the transaction log.
    4. Ignore transactions that were rolled back or still in process after the last checkpoint.
  • The second type is a write-through (also called an immediate update) recovery. This is done on systems that update the database as transactions are processed, before they are committed. On such systems, if there is an aborted transaction, that transaction needs to be rolled back. That makes this one different.
    1. Find the last checkpoint in the transaction log, as above.
    2. Ignore transactions that were completed before the last checkpoint. They were written during that checkpoint.
    3. For any transaction that was committed after the last checkpoint, use the DBMS to reconstruct that transaction from the transaction log.
    4. For any transaction that was rolled back after the last checkpoint, or had not reached its COMMIT point, use the DBMS to roll back or undo each of them, up to the checkpoint.

The chapter ends with a walkthrough of an example transaction log.