Database Transactions, Lost Updates & Idempotency

Engineering / Feb 10, 2023

Jozef Cipa, Backend Engineer

Web applications are often complex systems consisting of several parts such as UI (frontend), API (backend), a database and often other 3rd-party services that the application depends on. Designing the API service properly so it’s robust, secure and works as expected goes without saying — but sometimes other factors should be considered and handled adequately.

These factors include concurrency and duplication. It is important to know about them and be aware of the challenges they entail and how to deal with them. In this article, we will look at three situations in which our API would normally work just fine but which encounter issues once concurrent or duplicate requests come in.

Where’s My Data?

Imagine you’re working as a copywriter at a company. As part of your job, you prepare posts for the company blog and also proofread posts written by employees. One day, an employee writes a new post and asks you to review it. As you start reading it, you notice a typo.

  The quick brown fox jmps over the lazy dog

Naturally, you correct it and save the post, so it looks like this:

  The quick brown fox jumps over the lazy dog

But at the same time, as you’re reviewing the article, that employee’s also reading it and realizes they forgot to add something important.

  The quick brown fox jmps over the lazy dog Sphinx of black quartz, judge my vow

So they go ahead and add it, saving the article. Now, since the employee saves the article later, it looks like this:

  The quick brown fox jmps over the lazy dog Sphinx of black quartz, judge my vow

As you notice the change, you see that a whole new part was added, but the typo you just fixed has reappeared. This is known as the lost update problem (also referred to as “mid-air collisions”).

The reason for this is simple: Since both of you had a loaded version of the article, when you updated it, the employee’s version inherently became outdated. When they updated the article by adding the missing text, they only saved their changes while the rest remained as it was when the article was first loaded — rewriting all changes published by you in the meantime.

This problem can be avoided by using ETag, an HTTP header that identifies a specific version of the content. Its value is not strictly defined and can be anything the user prefers but, usually, it’s a document hash (e.g. 3da5415599), modification timestamp (e.g. 1672443352) or a simple revision number (e.g. 1, v1, v1.1).

The idea here is that every time a post is created or updated, the API generates an ETag string for it that represents the current revision. This value is then stored and returned as a header to the client whenever it asks for the given post.

Let’s write a simple Node.js API that returns some mocked data and sets the ETag HTTP header, so we can see how this works.

import Koa from 'koa'
import Router from 'koa-router'
import { koaBody } from 'koa-body'
import { createHash } from 'crypto'

const app = new Koa()
const router = new Router()

function createVersionHash(data) {
    return createHash('md5').update(data).digest('hex')
}

const demoText = 'The quick brown fox jmps over the lazy dog'
const demoPost = {
    id: 1,
    text: demoText,
    etag: createVersionHash(demoText),
}

router.get('/api/posts/:id', ctx => {
    ctx.set('ETag', demoPost.etag)
    ctx.body = demoPost
})

app.use(koaBody())
app.use(router.routes())
app.listen(3000)

If we run the script and call the endpoint, we will see that the ETag header is being set and returned in the response.

$ curl -i http://localhost:3000/api/posts/1

HTTP/1.1 200 OK
ETag: 961248836f12bcd8fada83b5ac06a7de
Content-Type: application/json; charset=utf-8
Content-Length: 182

{
  "id": 1,
  "text": "The quick brown fox jmps over the lazy dog",
  "etag": "961248836f12bcd8fada83b5ac06a7de"
}

Now, when the client wants to update the post, it sends the updated content along with the ETag in the If-Match: "<etag value>" header (note that, according to the specification, the ETag value is placed between double quotes).

The server first checks the header and validates it against the value stored in the database. If it matches, the request can proceed. If a change has been made, the ETag is outdated — so the request gets rejected, usually with an HTTP error 412 Precondition Failed. The client needs to resolve this situation by fetching the post again, thus retrieving the latest version.

Let’s add another endpoint that will handle the updating of posts.

// ...

router.put('/api/posts/:id', ctx => {
    const etagMatch = ctx.request.headers['if-match']?.replace(/"/g, '')
    if (!etagMatch) {
        ctx.status = 400 // Bad Request
        ctx.body = { err: 'ETag is missing' }
        return
    }
    if (etagMatch !== demoPost.etag) {
        ctx.status = 412 // Precondition Failed
        ctx.body = { err: 'ETag mismatch' }
        return
    }

    // received ETag matches the saved one, we can proceed
    // and update the post
    demoPost.text = ctx.request.body.text
    demoPost.etag = createVersionHash(demoPost.text)

    ctx.set('ETag', demoPost.etag)
    ctx.body = demoPost
})

// ...

Let’s see it in practice: the first update goes through seamlessly, but when we then try to update the content with a stale ETag, we get an error.

$ curl -i -X PUT http://localhost:3000/api/posts/1 \
  -H 'If-Match: "961248836f12bcd8fada83b5ac06a7de"' \
  -H 'Content-Type: application/json' \
  -d '{"text": "The quick brown fox jumps over the lazy dog"}'

HTTP/1.1 200 OK
ETag: 9e107d9d372bb6826bd81d3542a419d6
Content-Type: application/json; charset=utf-8
Content-Length: 182

{
  "id": 1,
  "text": "The quick brown fox jumps over the lazy dog",
  "etag": "9e107d9d372bb6826bd81d3542a419d6"
}

The update went through successfully and returned a new version of the ETag (9e107d9d372bb6826bd81d3542a419d6) in the response. But if the employee is sending their update simultaneously, this will fail — as the ETag is no longer up-to-date (the employee’s browser doesn’t know about the update, therefore it still uses the outdated 961248836f12bcd8fada83b5ac06a7de ETag).

$ curl -i -X PUT http://localhost:3000/api/posts/1 \
  -H 'If-Match: "961248836f12bcd8fada83b5ac06a7de"' \
  -H 'Content-Type: application/json' \
  -d '{"text": "The quick brown fox jmps over the lazy dog\nSphinx of black quartz, judge my vow"}'

HTTP/1.1 412 Precondition Failed
Content-Type: application/json; charset=utf-8
Content-Length: 23

{
  "err": "ETag mismatch"
}

As you can see, by utilizing ETags we can easily prevent lost updates and overwrites between concurrent users.

This approach is called optimistic concurrency control. Instead of relying on locks, each transaction, before committing, first verifies that no other transaction has modified the data it has read. This is great for applications where conflicts are rare — but not so good when they happen often, as it might have huge performance implications.
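To make the idea concrete, here is a minimal in-memory sketch of optimistic concurrency control, using a numeric version counter instead of a content hash (the names and the version scheme are illustrative, not part of the API above). In SQL, the same check-and-bump is often a single conditional UPDATE such as `UPDATE posts SET text = $1, version = version + 1 WHERE id = $2 AND version = $3`, where an affected-row count of zero means the write lost the race.

```javascript
// Minimal in-memory sketch of optimistic concurrency control (illustrative,
// not the Koa API above). A numeric "version" plays the role of the ETag.
const posts = new Map([
    [1, { id: 1, text: 'The quick brown fox jmps over the lazy dog', version: 1 }],
])

function updatePost(id, text, expectedVersion) {
    const post = posts.get(id)
    // Reject the write if someone else bumped the version in the meantime
    // (this maps to returning HTTP 412 Precondition Failed)
    if (!post || post.version !== expectedVersion) {
        return { ok: false }
    }
    post.text = text
    post.version += 1
    return { ok: true, version: post.version }
}

// Both clients loaded version 1; only the first write wins
const first = updatePost(1, 'The quick brown fox jumps over the lazy dog', 1)
const second = updatePost(1, 'The quick brown fox jmps over the lazy dog\nSphinx of black quartz, judge my vow', 1)
console.log(first)  // { ok: true, version: 2 }
console.log(second) // { ok: false }
```

The losing client gets a rejection instead of silently overwriting the winner’s change, and must re-fetch and retry, which is exactly the behavior the ETag flow above implements over HTTP.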

Note: ETags are also used for caching resources, where the client (browser) sends the ETag with the request to the server by using the If-None-Match: <etag value> header. The server only returns the content if the version on the server doesn’t match the one sent in the request.

Don’t Charge Me Twice!

Let’s imagine a different scenario. You’re traveling in the backseat of a car and decide to pass the time by browsing your favorite e-shop website. Suddenly, you spot a great deal. You don’t hesitate and jump on it straight away. As you’re pressing the “Buy” button, the car enters a tunnel and your cell connection is abruptly interrupted.

A few seconds later, you’re out of the tunnel, the connection is back and you press the “Buy” button again (or reload the website) to confirm the order. Next thing you notice, your bank account has been charged twice. How is that possible?

Well, you accidentally created two orders. When the connection got interrupted, your website froze on the shopping cart screen and didn’t receive the order confirmation from the server. Therefore, you naturally pressed the button again, even though the first request had reached the server before — there was just no way for it to send the result back to your phone.

This is a well-known edge case situation that must be handled properly, especially when money is involved (!). The solution to this problem is called idempotency.

Idempotency is a mechanism that ensures that the critical API endpoints — or other parts of the application — won’t perform the same operation multiple times when it’s called more than once. It is a crucial part of requests that do some changes, cause side-effects or mutate the state of something that shouldn’t be accidentally changed repeatedly.

The above-mentioned story is a good reason to make sure your payment handling logic is designed correctly. We can get inspired by Stripe, which describes this approach very nicely. However, payment systems are not the only area where idempotency is useful. Shopify, for instance, uses it to secure the flow of creating e-shop orders. It can even be used to properly sync data pipelines or to avoid spinning up more AWS EC2 instances than intended.

How Does Idempotency Work?

The key is in marking a request with a unique identifier so the API can keep track of it and decide what to do if the request has been seen before. Generally, this is done by sending a custom HTTP header, typically known as Idempotency-Key. The value is arbitrarily defined by the client — it can be something that specifically represents the user session, such as a shopping cart ID or even a random string. UUIDs are often used in this case.

If the API is configured to be idempotent, it always checks whether a request includes the idempotency header. If the header is present and there hasn’t been a request with such a key yet, it tries to proceed with the request as usual and records the idempotency key.

Now, if another request comes in with the same idempotency key, the API looks up the key and returns the same response as in the first request. The API doesn’t perform any business logic that could impact the response. Note that if the original request resulted in an error, the second request will also return the same error message.

In such cases, some APIs return an additional header communicating that a request with the provided idempotency key has already been seen and that the response therefore comes from the first (original) call. Stripe, for instance, returns an Idempotent-Replayed header.

It’s good practice to make idempotency keys expire after some time (e.g. 24 hours or 30 days), as the requests become stale anyway and you also won’t bloat your database.
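A hypothetical sketch of such expiration, with an in-memory Map standing in for a real store (in production this would typically be a database table with a cleanup job, or Redis keys with a TTL; the function and field names here are illustrative):

```javascript
// Hypothetical sketch: expiring idempotency keys after a TTL so the
// key store doesn't grow forever.
const IDEMPOTENCY_TTL_MS = 24 * 60 * 60 * 1000 // e.g. 24 hours

const keyStore = new Map() // idempotencyKey -> { response, createdAt }

function rememberResponse(key, response, now = Date.now()) {
    keyStore.set(key, { response, createdAt: now })
}

function lookupResponse(key, now = Date.now()) {
    const entry = keyStore.get(key)
    if (!entry) return null
    if (now - entry.createdAt > IDEMPOTENCY_TTL_MS) {
        keyStore.delete(key) // stale key: treat the request as brand new
        return null
    }
    return entry.response
}
```

Within the TTL, a lookup replays the saved response; after it, the key behaves as if it had never been seen.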

Following what we just said, let’s take a look at an example of how it can look in practice.

We have a list of user accounts and a new endpoint that processes payments. As it supports idempotency, we first check if a request is sent with the idempotency key. If it is — and the provided key has been already sent before — we just return the original response right away. Otherwise, the API processes the payment request and stores the idempotency key, so all the subsequent requests will get the same response without creating another payment.

// ...
import { randomBytes } from 'crypto'

function generatePaymentId() {
    return randomBytes(20).toString('hex')
}

const payments = []
const userAccounts = {
    'john.doe@example.org': {
        email: 'john.doe@example.org',
        balance: 200,
    }
}

router.post('/api/payment', (ctx) => {
    const idempotencyKey = ctx.request.headers['idempotency-key']
    const { sender, amount } = ctx.request.body
    const userAccount = userAccounts[sender]

    // Check if the idempotencyKey has been used
    // if so return the existing payment
    const paymentByIdempotencyKey = idempotencyKey &&
      payments.find(payment => payment.idempotencyKey === idempotencyKey)
    if (paymentByIdempotencyKey) {
        ctx.set('Idempotent-Replayed', 'true')
        ctx.status = paymentByIdempotencyKey.code
        ctx.body = {
            payment: paymentByIdempotencyKey,
            userAccount,
        }
        return
    }

    const payment = {
        id: generatePaymentId(),
        sender,
        amount,
        idempotencyKey,
        status: null,
        code: null
    }

    // check if sender has money
    if (userAccount?.balance >= amount) {
        // subtract amount from user account
        userAccount.balance -= amount
        payment.status = 'OK'
        payment.code = 200
    } else {
        payment.status = 'NO_MONEY'
        payment.code = 400
    }

    // record payment
    payments.push(payment)
    
    ctx.status = payment.code
    ctx.body = {
        payment,
        userAccount,
    }
})

// ...

Now that we have the code ready, let’s try to send a payment request and see how it works. Notice that we’re just sending a normal request and don’t include the idempotency key.

$ curl -i -X POST http://localhost:3000/api/payment \
  -H 'Content-Type: application/json' \
  -d '{"sender": "john.doe@example.org", "amount": 100}'
 

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 188

{
	"payment": {
		"id":"80f2b3655a23e66573e688ffb01c92a3ca8fba5e",
		"sender":"john.doe@example.org",
		"amount":100,
		"status":"OK",
		"code":200
	},
	"userAccount": {
		"email":"john.doe@example.org",
		"balance":100
	}
}

We can see that the request succeeded, the payment was created and our account was charged. Let’s send another request and see what happens.

$ curl -i -X POST http://localhost:3000/api/payment \
  -H 'Content-Type: application/json' \
  -d '{"sender": "john.doe@example.org", "amount": 100}'

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 188

{
  "payment": {
    "id":"2803f03514b292b57b210420fd47d504e860084e",
    "sender":"john.doe@example.org",
    "amount":100,
    "status":"OK",
    "code":200
  },
  "userAccount": {
    "email":"john.doe@example.org",
    "balance":0
  }
}

We can see that a new payment was created again (different payment ID), resulting in charging the user account again — so the userAccount.balance now equals 0 and we’re out of money. We can verify that by sending a third request.

$ curl -i -X POST http://localhost:3000/api/payment \
  -H 'Content-Type: application/json' \
  -d '{"sender": "john.doe@example.org", "amount": 100}'

HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8
Content-Length: 194

{
  "payment": {
    "id":"7b3c8c6c892857a5767b014fd7d4a0531ff80711",
    "sender":"john.doe@example.org",
    "amount":100,
    "status":"NO_MONEY",
    "code":400
  },
  "userAccount": {
    "email":"john.doe@example.org",
    "balance":0
  }
}

That would be okay if we had intended to send two separate payments, one after another. But even though we didn’t plan to pay twice, the API created a new payment each time because it had no way of knowing the request was a duplicate. This is really bad: we just charged our customer twice, and I can promise you they won’t be happy about that.

This is exactly what the idempotency key solves. Let’s try to send a new request, this time with the Idempotency-Key header included (don’t forget to restart the server first since we’re currently out of money).

$ curl -i -X POST http://localhost:3000/api/payment \
  -H 'Idempotency-Key: 77e76f80-0466-4e83-95bf-bf754eefa37c' \
  -H 'Content-Type: application/json' \
  -d '{"sender": "john.doe@example.org", "amount": 100}'

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 244

{
  "payment":{
    "id":"91f177563d4a69945d97ddb9f46fa11d4984e948",
    "sender":"john.doe@example.org",
    "amount":100,
    "idempotencyKey":"77e76f80-0466-4e83-95bf-bf754eefa37c",
    "status":"OK",
    "code":200
  },
  "userAccount":{
    "email":"john.doe@example.org",
    "balance":100
  }
}

We see that the payment went through seamlessly, just like in the first example. The payment was created and our account was charged. Let’s try to send the request again to see what happens.

$ curl -i -X POST http://localhost:3000/api/payment \
  -H 'Idempotency-Key: 77e76f80-0466-4e83-95bf-bf754eefa37c' \
  -H 'Content-Type: application/json' \
  -d '{"sender": "john.doe@example.org", "amount": 100}'

HTTP/1.1 200 OK
Idempotent-Replayed: true
Content-Type: application/json; charset=utf-8
Content-Length: 244

{
  "payment":{
    "id":"91f177563d4a69945d97ddb9f46fa11d4984e948",
    "sender":"john.doe@example.org",
    "amount":100,
    "idempotencyKey":"77e76f80-0466-4e83-95bf-bf754eefa37c",
    "status":"OK",
	  "code":200
  },
  "userAccount":{
    "email":"john.doe@example.org",
    "balance":100
  }
}

Now, we got the exact same result. Notice that the user’s account balance remained 100, the payment ID hasn’t changed and we even received the Idempotent-Replayed header confirming that the request has indeed been seen before — and the API just returned the original payment response instead of creating a new one. This is a very simple yet powerful method that can save you a lot of trouble and debugging in the future.

Deduplication ID

Idempotency isn’t only supported by API endpoints. There are other systems that implement this concept too, such as message queues. A similar problem might occur there as well when a message gets published multiple times. In this case, a deduplication ID can help. It works very similarly to the idempotency key. Once a message with the specific deduplication ID is published, any messages sent with the same ID won’t be delivered.

This only applies during the deduplication interval, which is the same concept as the expiration of idempotency keys but is usually much shorter; in the case of AWS SQS, it’s 5 minutes.
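The mechanics can be illustrated with a hypothetical in-memory sketch (this is not the actual SQS implementation; the queue, function and ID names are made up for illustration):

```javascript
// Hypothetical sketch of a deduplication window similar to SQS FIFO's
// 5-minute interval: messages with an already-seen deduplication ID are
// accepted but not delivered again while the window is open.
const DEDUP_INTERVAL_MS = 5 * 60 * 1000

const firstSeenAt = new Map() // deduplicationId -> timestamp of first publish

function publish(queue, message, deduplicationId, now = Date.now()) {
    const firstSeen = firstSeenAt.get(deduplicationId)
    if (firstSeen !== undefined && now - firstSeen < DEDUP_INTERVAL_MS) {
        return false // duplicate within the window, silently dropped
    }
    firstSeenAt.set(deduplicationId, now)
    queue.push(message)
    return true
}

const queue = []
publish(queue, { orderId: 42 }, 'order-42', 0)                     // delivered
publish(queue, { orderId: 42 }, 'order-42', 60000)                 // duplicate, dropped
publish(queue, { orderId: 42 }, 'order-42', DEDUP_INTERVAL_MS + 1) // window expired, delivered again
```

Note how the third publish succeeds: once the deduplication interval has passed, the same ID is treated as a brand new message, which is why a much longer expiration is used for idempotency keys guarding payments.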

Database Transactions

We’ve spoken about two different scenarios where multiple requests may cause issues. To fix them, we only needed to send an HTTP header and do some slight changes on the backend. However, some situations require a more sophisticated approach in order to ensure data integrity; using a header is not enough. This is where SQL database transactions come into play.

Transactions are a mechanism that allows for the manipulation of data and secures a transition from one consistent state to another. This is assured by implementing ACID principles:

  • Atomic - A transaction is treated as one unit; if a single query within the transaction fails, the whole transaction is rolled back. It’s an “all or nothing” approach, ensuring there are no inconsistencies or invalid states.
  • Consistent - Transactions respect all existing constraints and foreign references on tables, thus avoiding illegal states.
  • Isolated - Isolation guarantees that concurrently running transactions won’t interfere with each other and changes made in one won’t affect the other. This is done by using locks. The exact behavior and lock strictness can be configured by isolation levels.
  • Durable - Once a transaction succeeds, it is written permanently and will remain in the system, even after a system crash or power outage.

Isolation Levels

  • Read uncommitted - The least strict level; it allows dirty reads, i.e. one transaction can see the changes of another even before they are committed. Postgres doesn’t actually implement this level; requesting it gives you read committed behavior instead.
  • Read committed - The default isolation level in Postgres. Transactions only ever read data that has been committed.
  • Repeatable read - A stricter level that protects against several phenomena:
      ◦ Non-repeatable reads - If we have two transactions A and B, transaction A reads a row value and gets 100. At the same time, transaction B does some calculations, updates the value to 200 and commits. If transaction A now reads the value again, it sees 200. Thus, if the same query is executed multiple times within a transaction, it could return a different value each time.
      ◦ Phantom reads - Similar to non-repeatable reads but concerning the set of rows. Transaction A reads the count of rows and gets 3. Transaction B, in the meantime, deletes one row and commits. Transaction A reads the row count again and gets 2. Phantom reads are caused by inserts or deletes of rows.
      ◦ Lost updates - This happens when two transactions A and B both read the same row. Transaction A updates the value to 10 and commits. Transaction B, still working with the value it originally read, updates the same row to 20 and commits. The row value is now 20 and transaction A’s update is lost.
  • Serializable - The most strict isolation level; concurrent transactions have the same effect as if they were applied serially, one after another.

If you want to learn more about transactions and isolation levels, check out this article.

Locking

Transactions and some operations performed within them may block each other in order to keep data safe. This blocking is done via locks, which can be applied either to an entire table, to a subset of rows or to an individual row. When a lock is held by one transaction, other transactions cannot access or modify the data (depending on the type of the lock).

For instance, when we perform two concurrent UPDATE queries on a row, the second transaction must wait until the first one completes. Then, once again depending on the isolation level and the lock type, it can either continue or fail.

While this is enough when updating the data, sometimes it might be necessary to hold the lock longer for some additional processing. For such cases, SELECT ... FOR UPDATE may be used. In Postgres, it takes out a ROW SHARE lock on the table and, more importantly, a row-level lock on each selected row; this row-level lock conflicts with the one taken by UPDATE and therefore prevents any changes to those rows. Any other transaction that attempts to modify the rows, or to select them FOR UPDATE, will have to wait until the transaction completes and the lock is released.

Sometimes we don’t want to wait for the release of the lock. In this case, we can use SELECT ... FOR UPDATE NOWAIT, which will try to acquire the lock and, if not available, it will error out immediately.

There are a few other similar locks that are beyond the scope of this article — you can get more familiar with them here.

Example Situation (Another One!)

To give an example of where locks and transactions can be used, let’s imagine we have a website that collects users’ emails for further processing and categorization. It uses a 3rd-party service that sends webhooks anytime a new email is received as well as when there is a problem with the email account. Our app then listens for these webhooks and acts upon them.

If for whatever reason some webhooks come in twice (and it happens!), our app would process the same event two times. This might not be a problem if it’s just about updating a value where duplicate requests won’t make a difference, like accountStatus = connected; but when we want to update some counts, send a notification or make some other action, duplicate events become undesired.

Imagine you get two notifications saying that your email account has been connected. Not ideal. To fix that, you could use the select for update lock. Here is an example in Sequelize:

await sequelize.transaction(async trx => {
    // Only one transaction at a time will be able to fetch the user
    const user = await User.findOne({
        where: { id: webhook.userId },
        lock: trx.LOCK.UPDATE, // SELECT ... FOR UPDATE
        transaction: trx,
    })

    if (user.accountStatus !== webhook.userStatus) {
        // status has changed
        await user.update({ accountStatus: webhook.userStatus }, { transaction: trx })

        // send notification ...
    }
})

Since we used the SELECT FOR UPDATE query, when two concurrent requests come in, only one of them will acquire the lock; the other one will have to wait. Now, the first request sees that the account status has changed, so it updates the user and creates a notification. After that, the database transaction is committed and the second request finally acquires the lock. As it now sees that the status is the same (the previous webhook updated the status already), it will just complete the transaction with no changes.

If we didn’t use this lock mechanism, both transactions would fetch the user — and since they would both see the old status, they’d update it and also create a notification. Even though this would result in a correctly updated account status, the notification would be duplicated. This is exactly where the select for update command can help us as it holds the lock even after selecting the data from the database, allowing us to execute other commands before committing and releasing the lock.

A similar result could also be achieved by using the SERIALIZABLE isolation level. However, that constrains every concurrent transaction touching the same data rather than just the rows we explicitly lock; in Postgres, conflicting serializable transactions are aborted and have to be retried, which could be inefficient and time-consuming.
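If you do opt for SERIALIZABLE, the usual pattern is to wrap the transaction in a retry loop. A hypothetical sketch (the error code '40001' is Postgres's serialization_failure SQLSTATE; the run callback and the attempt limit are illustrative):

```javascript
// Hypothetical sketch: retrying a transaction aborted under SERIALIZABLE.
// Postgres signals such conflicts with SQLSTATE 40001 (serialization_failure).
async function withSerializableRetry(run, maxAttempts = 3) {
    for (let attempt = 1; ; attempt++) {
        try {
            return await run()
        } catch (err) {
            // only retry serialization conflicts, and only a bounded number of times
            if (err.code !== '40001' || attempt >= maxAttempts) throw err
        }
    }
}
```

Here, run would open the transaction at the SERIALIZABLE level, do its work and commit; on a conflict it simply runs again from the start.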

Summary

I’ve tried to show you a few things that are good to know, as they might occasionally come in handy. They’re not something you’ll be implementing every day on every project, but certain scenarios may require additional logic to fix the outlined problems. It is important to think about your project in advance, analyze it thoroughly and assess the potential challenges.

If you’ve reached this summary, you should now know a few things: You can utilize ETag to make sure multiple users won’t accidentally rewrite each other’s changes; if you’re working with important resources, you can mitigate the risk of unwanted actions by using idempotency; and database locks and transactions are vast, very important topics — but at this point, you know at least the basics, how they protect data integrity and how you can use them to your benefit.
