If you use Analysis Services Tabular, you should dedicate a good amount of time to hardware selection. Unfortunately, simply throwing money at an expensive hardware configuration can be a very bad idea, resulting in your $1,500 desktop running faster than your expensive $100,000 server. Moreover, if you use virtualization, you have to be very careful with certain settings, which can hurt performance badly. When I say this, I'm describing queries running 2-3 times slower than in optimal conditions. So why spend time chasing a 10% gain when you have a much bigger issue to solve?
I described the main best practices in the article Optimize Hardware Settings for Analysis Services Tabular on SQLBI. It is the result of helping many companies detect hardware bottlenecks and plan the right hardware configuration. In my experience, the time you spend correctly allocating the budget has a huge return on investment. Usually you cannot change the CPU or the RAM of a brand-new server, so this step is critical. The next step is to check that the hardware configuration is correct. It's incredible how many times I have discovered that the BIOS settings of an expensive server were the reason for slow performance, so checking them is now my first priority whenever I see a benchmark with suspicious numbers compared to the tech specs of the CPU.
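To make the "suspicious benchmark" idea concrete, here is a minimal sketch (my own illustration, not taken from the article) of a quick single-threaded test that can expose a server running far below its rated clock speed, a typical symptom of BIOS power-saving settings. The function name and loop size are arbitrary assumptions; the point is to compare the same test between a known-fast desktop and the server, not to read the number as an absolute measure.

```python
# Rough single-threaded CPU check: run the same script on a desktop and on
# the server. A server that scores much lower than a comparable desktop CPU
# is a hint to review BIOS power management and hypervisor settings first.
import time

def single_thread_score(iterations: int = 20_000_000) -> float:
    """Return iterations per second for a simple integer loop on one core."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i
    elapsed = time.perf_counter() - start
    return iterations / elapsed

if __name__ == "__main__":
    score = single_thread_score()
    print(f"Single-thread score: {score:,.0f} iterations/sec")
```

This is obviously not a substitute for a real benchmark, but single-threaded speed is what dominates many Tabular query plans, so even a crude check like this can flag a misconfigured machine before you spend time on fine tuning.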
Now, a common objection I hear is "we have a standardized hardware and virtualization platform". I completely understand that, but I like to remind people that the goal is a better return on investment, and standardization's ultimate goal is to reduce costs. So we start by evaluating the cost of a solution that complies with the standards but allocates different hardware to this specific workload. The result is spending less (on hardware and licenses) while getting more (performance).
I’d like to hear your stories about this – share your experience in the comments!