Taking Azure Redis Cache for a ride – testing a simple Producer/Consumer program

In my “younger” days, I worked in a team that developed/maintained an application written in Ada, which used Oracle 8 as its database. Our performance requirements didn’t allow us to query the DB all the time, so the team had developed an in-memory cache on top of the database. We had a full-blown ORM mapping from the DB to Ada, with in-memory querying mechanisms, cache refreshes, and all the good stuff.

But why am I talking about this now? Because that experience taught me to really like caching mechanisms. And now that Azure Redis Cache is out, I had to get my hands dirty. I’d heard a lot about Redis from many sources, but I never had the time to learn about it. Last week I decided to give it a try.

The specific use case I had in mind was using it as a FAST medium of communication between servers, in a publish-subscribe fashion. What a delight it was to find that this use case is supported by Redis! I wanted to know how long it takes for a message to travel from one server to another, so I created a simple ping-pong program, where one server sends a ping and the other responds with a pong. I measured the full round trip because I don’t trust clock-sync algorithms enough, and wanted to take all time measurements on the same machine. Since the code is very simple, dividing by 2 gives the approximate time it takes for a message to get from one server to the other.

I tried the two different cache tiers provided by Azure: the Basic tier with no replication, and the Standard tier with replication. Both offer shared (not dedicated) caches at the smaller sizes, and dedicated caches at 1GB and above. For my tests, I tried both the Basic and Standard tiers at the 1GB size, so that in both cases the cache is dedicated.

The test code is fairly simple: two Azure worker roles. The Producer creates messages and publishes them to a Redis pub/sub channel; the Consumer subscribes to that channel and, when it receives a message, publishes it back on another channel to which the Producer is subscribed. The Producer checks that the value received is the same as the value sent, and measures the time it took for the message to go back and forth. I used the StackExchange.Redis NuGet package to access Redis.

This is the code for the Producer:
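In outline, a Producer along these lines might look like the following sketch (the connection string, the channel names "ping"/"pong", and the synchronization details are placeholders, not the original listing):

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using StackExchange.Redis;

class Producer
{
    const int Iterations = 1000;

    static void Main()
    {
        // Placeholder connection string - substitute your cache's DNS name and key.
        var redis = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,ssl=true,password=<your-key>");
        var sub = redis.GetSubscriber();

        string lastSent = null;
        long totalMs = 0;
        var watch = new Stopwatch();
        var gotReply = new AutoResetEvent(false);

        // Each "pong" should echo back exactly what we sent on "ping".
        sub.Subscribe("pong", (channel, message) =>
        {
            watch.Stop();
            if ((string)message != lastSent)
                Console.WriteLine("Mismatch: sent {0}, received {1}", lastSent, (string)message);
            totalMs += watch.ElapsedMilliseconds;
            gotReply.Set();
        });

        for (int i = 0; i < Iterations; i++)
        {
            lastSent = i.ToString();
            watch.Restart();
            sub.Publish("ping", lastSent);  // send the ping...
            gotReply.WaitOne();             // ...and block until the pong comes back
        }

        Console.WriteLine("Average round trip: {0}ms", (double)totalMs / Iterations);
    }
}
```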

And the code of the Consumer (which is pretty simple) is this:
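A Consumer in the same spirit is little more than an echo (again, the connection string and channel names are placeholders):

```csharp
using System.Threading;
using StackExchange.Redis;

class Consumer
{
    static void Main()
    {
        // Same placeholder connection string as the Producer.
        var redis = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,ssl=true,password=<your-key>");
        var sub = redis.GetSubscriber();

        // Echo every ping straight back on the pong channel.
        sub.Subscribe("ping", (channel, message) => sub.Publish("pong", message));

        Thread.Sleep(Timeout.Infinite);  // keep the worker role alive
    }
}
```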

The results of the tests were the same in both cases. After running 1000 iterations, both caches gave an average latency of 15ms for the full round trip. Assuming that the delay inside the Consumer is negligible, this means a message takes around 7.5ms to reach its destination. That is pretty damn fast!

I hope to have some more time to learn about Redis and its uses, and to have a real project where I can test its capabilities in a real-world environment. But for starters, it is very, very promising.

2 thoughts on “Taking Azure Redis Cache for a ride – testing a simple Producer/Consumer program”

  1. Latency seems ok, on par with what you’d get in AWS: 2-4ms per hop in the same zone. It’s not great, particularly compared to what you’d get with physical hardware, as virtualization seems to still suck when it comes to the networking stack. For some applications, such as ad-serving and the like, this is actually pretty high latency.

    I wonder what numbers you’d get when running the standard redis-benchmark tool on Azure between VMs. On *nix machines, if using local Redis slaves to cut down on latency, you could also use UNIX sockets to go round the network stack entirely, and have considerably better throughput.

    If you’re trying to achieve any kind of reliable RPC mechanism with pub/sub, I’m not sure this is the right tool, since (a) pub/sub does not enqueue messages for disconnected clients, and (b) it sends the same message to every subscriber. This means that if your server is down for a millisecond for any reason, you would lose messages. Besides that, you cannot have multiple servers which share the request load.

    Another common tool, maybe more fitting for the job, is using Redis lists as queues. In this way, multiple workers can easily share the load. Sending responses back can be done in a variety of ways, one of which is a Redis queue per client – so the client itself can have multiple queued responses.
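    The pattern the comment describes could be sketched roughly like this with StackExchange.Redis (the key names and payload format are illustrative; note that the library deliberately does not expose the blocking BRPOP command, hence the polling loop):

    ```csharp
    var db = redis.GetDatabase();

    // Client: enqueue a request tagged with its private reply-queue name.
    db.ListLeftPush("requests", "client42|do-work");

    // Worker loop: pop requests and push each response onto the
    // requesting client's own reply queue, so multiple workers can
    // share the load and each client gets only its own responses.
    while (true)
    {
        RedisValue request = db.ListRightPop("requests");
        if (request.IsNull) { Thread.Sleep(10); continue; }

        var parts = ((string)request).Split('|');
        db.ListLeftPush("replies:" + parts[0], "result-of-" + parts[1]);
    }
    ```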

    Have fun, Redis is great. It now also does HyperLogLogs, which are very cool for analytics.

    • Hi Elad, and thanks for the comment! You are right that this is not the correct way to implement a pub/sub mechanism – I’ve been reading a lot more since I started playing with Redis, and as you say, the best way to do it is by using lists and atomic enqueue/dequeue operations when taking an element from the list. This is what I’ll be implementing in my next “production” environment.
