Web Application Caching Strategies: Generational caching

This is the second in a two-part series on caching strategies. The first post provided an introduction and described write-through caching.

As described in the previous post, write-through caching is an excellent strategy, but it has somewhat limited application. To see why that is the case and how generational caching improves upon it, let's start with an example.

Imagine that you are creating a basic blogging application. It is essentially just a single “posts” table with some metadata associated with each post (title, created date, etc.). Blogs have two main types of pages that you need to be concerned about: pages for individual posts (e.g. “/posts/3”) and pages with lists of posts (e.g. “/posts/”). The question now becomes: how can you effectively cache this type of data?

Caching pages for individual posts is a straightforward application of the write-through caching strategy discussed in the previous post. Each post gets mapped to a cache key based on the object type and id (e.g. “Post/3”), and every update to a post is written back to the cache.
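
For example, a write-through update for an individual post might look roughly like this (a sketch only; the cache here is just a dict and the database call is a placeholder):

    cache = {}

    def post_key(post_id):
        # Cache key based on the object type and id, e.g. "Post/3".
        return "Post/%s" % post_id

    def update_post(post):
        save_post_to_database(post)         # placeholder for the real data-store write
        cache[post_key(post["id"])] = post  # write the change through to the cache

    def save_post_to_database(post):
        pass  # stand-in for the real INSERT/UPDATE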

The more difficult question is how to handle pages with lists of posts, like the home page or pages for posts in a certain category. In order to keep the cache consistent with the data store, any time a post is updated, every cache key containing that post must be expired. This can be difficult to track, since a post can appear under many cache keys at the same time (e.g. latest ten posts, posts in the Ruby category, posts favorited by user 15). While you could try to programmatically figure out all the keys that contain a given post, that is a cumbersome and error-prone process. A cleaner way of accomplishing this is what is called generational caching.

Generational caching maintains a "generation" value for each type of object. Each time an object is updated, its associated generation value is incremented. Using the blog example, any time someone updates a post, we increment the post generation as well. Then, any time we read or write a grouped object (like a list of posts) in the cache, we include the generation value in the key. Here is a sequence of actions and what would happen in the cache at each step (the key names below are illustrative):

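1. Request the latest-posts page. The application reads the posts generation from the cache (say it is 1), builds the key "posts/1/latest", misses the cache, queries the database, and stores the result under that key.
2. Request the latest-posts page again. The generation is still 1, so the same key "posts/1/latest" is read and the cached result is returned without touching the database.
3. Update post 3. The post itself is written through to "Post/3", and the posts generation is incremented to 2.
4. Request the latest-posts page once more. The generation is now 2, so the key becomes "posts/2/latest", which misses the cache and triggers a fresh database query; the old "posts/1/latest" entry is simply never read again.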


By including the generation in the cache key, any time a post is updated the next request will miss the cache and query the database for fresh data. The result of this strategy is that any time a post is updated or deleted, all keys containing multiple posts are implicitly expired. I say implicitly expired because we never actually delete those objects from the cache; by incrementing the generation we ensure that the old keys are simply never accessed again.

Here is some sample code for how this could be implemented (in pseudo-Python):

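This is only a sketch: ToyCache stands in for a memcached-style client, the database calls are placeholders, and the get_generation()/get_generation_key() helpers follow the split discussed in the comments below (the generation is always read from the cache rather than held in a local variable).

    import time

    class ToyCache:
        # In-memory stand-in for a memcached-style client (get/set/incr).
        def __init__(self):
            self.store = {}

        def get(self, key):
            return self.store.get(key)

        def set(self, key, value):
            self.store[key] = value

        def incr(self, key):
            # Note: a real memcached incr requires the key to already exist.
            self.store[key] = self.store.get(key, 0) + 1
            return self.store[key]

    cache = ToyCache()

    def get_generation_key(model):
        # One generation counter per model type, e.g. "generation/Post".
        return "generation/%s" % model

    def get_generation(model):
        # Resolved by querying the cache on every call so that all
        # application servers see the same generation value.
        key = get_generation_key(model)
        generation = cache.get(key)
        if generation is None:
            # Seed with the current time so a freshly evicted counter
            # never collides with a previously used generation.
            generation = int(time.time())
            cache.set(key, generation)
        return generation

    def post_key(post_id):
        # Write-through key for an individual post, e.g. "Post/3".
        return "Post/%s" % post_id

    def save_post(post):
        save_post_to_database(post)              # placeholder data-store write
        cache.set(post_key(post["id"]), post)    # write the post through to the cache
        # Incrementing the generation implicitly expires every list key.
        cache.incr(get_generation_key("Post"))

    def get_latest_posts():
        # Include the generation in the key so a stale list is never read.
        key = "Post/%s/latest" % get_generation("Post")
        posts = cache.get(key)
        if posts is None:
            posts = query_latest_posts_from_database()   # placeholder query
            cache.set(key, posts)
        return posts

    def save_post_to_database(post):
        pass  # stand-in for the real INSERT/UPDATE

    def query_latest_posts_from_database():
        return []  # stand-in for the real "latest ten posts" query

With this in place, calling save_post() immediately invalidates every cached list of posts, while individual post pages continue to be served from their write-through keys.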


After implementing this strategy in multiple applications, I want to point out a few things about how it works and its performance properties. In general, I have found that this strategy can dramatically improve application performance and considerably lessen database load. It can save tons of expensive table scans from ever happening in the database, and by sparing the database those requests, the queries that do hit the database complete more quickly.

In order to maintain cache consistency, this strategy is conservative in nature, which results in keys being expired that don’t necessarily need to be. For example, if you update a post in a particular category, this strategy will expire the keys for every category, not just that one. While this may seem inefficient and ripe for optimization, I’ve often found that most applications are so read-heavy that these optimizations don’t make a noticeable difference in overall performance. Plus, the code to implement them becomes application- or model-specific and more difficult to maintain.

I previously mentioned that in this strategy nothing is ever explicitly deleted from the cache. This has some implications for the caching tool and eviction policy that you use. The strategy was designed for caches that employ a Least Recently Used (LRU) eviction policy (like Memcached). An LRU policy results in keys with old generations being evicted first, which is precisely what you want. Other eviction policies (e.g. FIFO) can be used, although they may not be as effective.

Overall, I’ve found generational caching to be a very elegant, clean, and performant caching strategy that can be applied in a wide variety of applications. So next time you need to do some caching, don’t forget about generations!

Comments:

Unknown said...

Interesting reading. I see you haven't added the "pseudo-Python" code though, which would be very interesting.

swingkid said...

Question on how you would implement the get_generation() and get_generation_key().

For get_generation(), would it be a correct assumption that the latest generation is always dynamically resolved by querying the cache, as opposed to being locally maintained in, say, some static variable?

Also, why are there both get_generation() and get_generation_key()? Wouldn't you just need one method?

Jonathan Kupferman said...

Hi swingkid,
If you take a look at lines 35 and 38 you will see the implementation for get_generation and get_generation_key.

You are correct about needing to store the generation value in the cache. The generation values are dynamic and change when the underlying models change (see line 27).

get_generation() and get_generation_key() are broken out into separate methods since there are times when you need the generation key in order to update its value (see lines 27, 32, 36). Those three lines could actually be replaced with a one-liner method for increment_generation_key().

