Using Memcached as a Distributed Locking Service

One of the beauties of memcached is that while its interface is incredibly simple, it is robust and flexible enough to be pressed into service for far more than caching.

As an example, I am currently working on a project where we need to do distributed locking. The reason is that we are running into what is generally known as the lost update problem. Basically, we need to read an object, update it, and write it back in a serialized fashion in order to ensure that no updates are lost. The easiest solution is a lock which must be acquired before doing the read/update/write sequence. Locks are easy within a set of threads, but once you break out of the context of a single machine you need something distributed. While there are certainly other libraries and more complex pieces of code that do this, I find the following to be a pretty elegant solution for a set of nodes that all have access to the same memcached servers.
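
To make this concrete, here is a minimal, self-contained sketch of the lost update problem using plain threads and an in-memory dict; the names and the artificial sleep are purely illustrative.

import threading
import time

store = {'counter': 0}  # stands in for the shared object being read and written

def unsafe_increment():
    value = store['counter']        # read
    time.sleep(0.001)               # widen the race window for demonstration
    store['counter'] = value + 1    # write back, possibly clobbering another update

threads = [threading.Thread(target=unsafe_increment) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without serialization the final value is usually well below 50, because
# threads that read the same starting value overwrite each other's updates.
print(store['counter'])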

Provided below is a class called MemcacheMutex which provides the standard acquire/release mutex interface, with the twist that it is backed by memcached.
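
A minimal sketch of what such a class can look like, assuming the python-memcached client (import memcache); the key prefix, the expiry parameter, and the polling interval are illustrative choices, not a canonical implementation. It relies on the fact that memcached's add operation only succeeds if the key does not already exist, which makes it usable as an atomic test-and-set.

import time

import memcache  # python-memcached client


class MemcacheMutex(object):
    """A mutex backed by memcached's atomic add operation."""

    def __init__(self, name, servers=('127.0.0.1:11211',), expiry=0):
        self.key = 'lock:%s' % name        # illustrative key naming scheme
        self.expiry = expiry               # 0 means the key never expires
        self.client = memcache.Client(list(servers))
        self.acquired = False

    def acquire(self, blocking=True, poll_interval=0.05):
        # add() only stores the key if it does not already exist, so the first
        # process to add it holds the lock; everyone else spins until release.
        while True:
            if self.client.add(self.key, 1, self.expiry):
                self.acquired = True
                return True
            if not blocking:
                return False
            time.sleep(poll_interval)

    def release(self):
        if self.acquired:
            self.client.delete(self.key)
            self.acquired = False

    def __del__(self):
        # Best-effort cleanup if the object is garbage collected while held.
        self.release()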


There is one big caveat. Since memcached is not persistent, it is possible for the lock key to get evicted from the cache. This could result in a case where P1 acquires the lock, the key gets evicted, and P2 is then able to acquire the lock even though it has not been released by P1. If your memcached instance is handling a huge volume of operations, or you are holding onto the lock for very long periods of time, this could become an issue. In that case, you should use the persistent version of memcached, memcachedb.
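
For completeness, here is how the sketch above could be used around the read/update/write sequence (the lock name is hypothetical); wrapping the critical section in try/finally ensures release() runs even if the update raises, though it does not help if the process dies outright, which is the scenario raised in the comment below.

lock = MemcacheMutex('inventory-update')

lock.acquire()
try:
    # read the shared object, apply the update, and write it back here
    pass
finally:
    lock.release()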

1 comment:

Unknown said...

The other problem with this approach is that if P1 were to die a horrible death and not release its memory as usual (i.e., __del__ is not called), then the lock would never be released and P2 would never be able to acquire it.
