What is the difference in the "Host Cache Preference" settings when adding a disk to an Azure VM?


As the name suggests, this setting controls host-side caching of disk I/O. Changing it determines whether reads, writes, or both are cached for performance. For example, if a drive holds read-only data (a read-only database, a Lucene index, static files), enabling the read cache for that drive is optimal.
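As a sketch of where this setting lives today, the host-cache preference can be passed when attaching a data disk with the Azure CLI (the resource group, VM, and disk names below are hypothetical placeholders):

```shell
# Attach a data disk with a host-cache preference.
# Valid --caching values: None, ReadOnly, ReadWrite.
az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myDataDisk \
  --caching ReadOnly
```

The same preference can also be changed later through the portal or by updating the VM's disk configuration.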

I have not seen dramatic performance changes from this setting alone (until I used SQL Server/Lucene on the drives). High I/O workloads are improved far more by striping disks. In your case, with millions of lines of code across tens of thousands of files, you could see a real improvement in read/write performance. A single drive is capped at 500 IOPS (roughly two 15k SAS drives, or a high-end SSD); if you need more than that, add more disks and stripe them.

For example, on an extra large VM you can attach 16 drives at 500 IOPS each (~8,000 IOPS total): http://msdn.microsoft.com/en-us/library/windowsazure/dn197896.aspx (there are some good write-ups/whitepapers from people who netted optimal performance by adding the maximum number of smaller drives rather than one massive one).
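The scaling math above is simple multiplication; as a quick sketch (the 500 IOPS per-disk cap and the 16-disk limit are the figures cited above):

```shell
# Estimate the aggregate IOPS ceiling of a striped volume:
# per-disk cap times the number of data disks attached.
iops_per_disk=500
disk_count=16
echo "$((disk_count * iops_per_disk)) IOPS"   # prints "8000 IOPS"
```

Real-world throughput depends on the stripe configuration and workload, so treat this as an upper bound, not a guarantee.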

Short summary: leave the caching defaults. Test with an I/O benchmarking tool against your specific workload. Single-drive performance is unlikely to matter; if I/O is your bottleneck, striping drives will help far more than the caching setting on the VHD.