In my “younger” days, I worked on a team that developed and maintained an application written in Ada, with Oracle 8 as its database. Our performance requirements didn’t allow us to query the DB all the time, so the team had built an in-memory cache on top of the database. We had a full-blown ORM mapping from the DB to Ada, with in-memory querying, cache refreshes, and all the good stuff.
But why am I talking about this now? Because that experience taught me to really like caching mechanisms. So when Azure Redis Cache came out, I had to get my hands dirty. I’d heard a lot about Redis from many sources, but I never had the time to learn about it. Last week I decided to give it a try.
The specific use case I had in mind was using it as a FAST medium of communication between servers, in a publish-subscribe fashion. What a delight it was to find that this use case is supported by Redis! I wanted to know how long it takes for a message to travel from one server to another, so I created a simple ping-pong program, where one server sends a ping and the other responds with a pong. I measured the full round trip because I don’t trust clock-sync algorithms enough, and I wanted all time measurements taken on the same machine. Since my code is very simple, dividing the round-trip time by 2 gives the approximate time it takes for a message to get from one server to the other.
I tried the two cache tiers provided by Azure: the Basic tier with no replication, and the Standard tier with replication. Both tiers offer caches that are shared (the smaller sizes) and dedicated (1GB and above). For my tests, I used both the Basic and Standard tiers at the 1GB size, so that in both cases the cache was dedicated.
The test code is fairly simple: two Azure worker roles. The Producer creates messages and publishes them to a Redis pub/sub channel; the Consumer subscribes to those messages and, when it receives one, publishes it back on another Redis channel to which the Producer is subscribed. The Producer checks that the value received is the same as the value sent, and measures the time it took for the message to go back and forth. I used the StackExchange.Redis NuGet package to access Redis.
This is the code for the Producer:
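(The original listing accompanied the post; a minimal sketch of what such a Producer could look like with StackExchange.Redis is below. The connection string, channel names, and iteration count are illustrative, not the author’s actual values.)

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using StackExchange.Redis;

class Producer
{
    static void Main()
    {
        // Hypothetical connection string for an Azure Redis cache.
        var redis = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,password=...");
        var sub = redis.GetSubscriber();

        var stopwatch = new Stopwatch();
        var pongReceived = new AutoResetEvent(false);
        string lastSent = null;

        // Listen for the echo coming back on the "pong" channel
        // and verify it matches the value we sent.
        sub.Subscribe("pong", (channel, value) =>
        {
            if (value == lastSent)
                pongReceived.Set();
        });

        const int iterations = 1000;
        long totalMs = 0;
        for (int i = 0; i < iterations; i++)
        {
            lastSent = i.ToString();
            stopwatch.Restart();
            sub.Publish("ping", lastSent);  // send the ping
            pongReceived.WaitOne();         // block until the Consumer echoes it
            stopwatch.Stop();
            totalMs += stopwatch.ElapsedMilliseconds;
        }

        Console.WriteLine("Average round trip: {0}ms",
            (double)totalMs / iterations);
    }
}
```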
And the code of the Consumer, which is pretty simple, is this:
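(Again, the original listing isn’t reproduced here; a sketch of the echo side, under the same illustrative assumptions as above, might be:)

```csharp
using System.Threading;
using StackExchange.Redis;

class Consumer
{
    static void Main()
    {
        // Hypothetical connection string for the same Azure Redis cache.
        var redis = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,password=...");
        var sub = redis.GetSubscriber();

        // Echo whatever arrives on "ping" straight back on "pong".
        sub.Subscribe("ping", (channel, value) =>
        {
            sub.Publish("pong", value);
        });

        // Keep the worker role alive so the subscription stays open.
        Thread.Sleep(Timeout.Infinite);
    }
}
```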
The results of the tests were the same in both cases. After running 1000 iterations, both caches gave an average latency of 15ms for the full round trip. Assuming the delay inside the Consumer is negligible, a message takes around 7.5ms to reach its destination. That is pretty damn fast!
I hope to have some more time to learn about Redis and its uses, and to find a real project where I can test its capabilities in a real-world environment. But for starters, it is very, very promising.