Solr Index Speed on EBS

If you've got your Solr index on an Amazon EBS volume, save yourself some headache and run this every time you make a new volume:

sudo nohup dd if=/dev/xvdi of=/dev/null &

(Use your own volume in place of xvdi.)
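If you're wondering how far along it is, GNU dd prints I/O statistics to stderr when it gets a USR1 signal. Something like this should do it (pgrep -x matches the process name exactly, so adjust if you have other dd processes running):

sudo kill -USR1 $(pgrep -x dd)

Since the command was started under nohup, the stats should land in nohup.out.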

That just reads the whole volume, block by block, and dumps it to /dev/null. Seems kind of dumb on the face of it, but the Amazon docs on EBS performance say there is a 5% to 50% reduction in IOPS the first time you access data on a volume. I don't know what magic happens in Amazon's datacenter, but the solution is to read every block on the volume before you need it.
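One tweak: dd's default block size is only 512 bytes, so the warm-up goes a lot faster if you read in bigger chunks. Something like this should work (bs=1M is just a reasonable guess, not a magic number):

sudo nohup dd if=/dev/xvdi of=/dev/null bs=1M &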

That's all you have to know. If you want the backstory, read on…

We found this out the hard way while trying to pin down performance variances on new installations. Our thinking was that, in order to take advantage of AutoScaling, we'd want our index baked into an AMI so that we could add query capacity in about 10 minutes (that's about how long it takes to spin up an instance off of an AMI). If instead we opted for instance (ephemeral) storage, we'd have to wait for replication, which takes about three hours with our current index.

So this all worked well except when we went to test performance. The weird thing was, we got wildly different performance results every time we created a new stack! A while ago I saw a great ops presentation (I forget whose) at LuceneRevolution that talked about preemptively cat-ing the index to /dev/null to prime the OS disk cache. Those keywords helped me find that EBS performance page. After doing the above dd (which apparently takes its name from the "data definition" statement in IBM's JCL), our performance was much more predictable.
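For the record, that cache-priming trick looks something like this (the path is just an illustration; point it at wherever your Solr index actually lives):

cat /path/to/solr/data/index/* > /dev/null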

It still takes quite a bit of time to read every block on our EBS volumes, which means new instances in our AutoScaling Group will have degraded performance for a while. One thing I might try later is having multiple processes read from different parts of the volume in parallel; a rough sketch of what I have in mind is below.
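Untested, but the idea would be something like this: split the volume into four chunks by byte offset and point a dd at each one. (blockdev --getsize64 reports the device size in bytes; the block size and reader count here are assumptions to tune, and /dev/xvdi is a placeholder as before.)

# size of the volume in bytes
SIZE=$(sudo blockdev --getsize64 /dev/xvdi)
# a quarter of the volume, expressed in 1 MiB blocks (+1 to cover rounding)
CHUNK_MB=$(( SIZE / 4 / 1024 / 1024 + 1 ))
# four readers, each skipping ahead to its own quarter; dd just stops at end-of-device
for i in 0 1 2 3; do
  sudo dd if=/dev/xvdi of=/dev/null bs=1M skip=$((i * CHUNK_MB)) count=$CHUNK_MB &
done
wait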