Best Practice Reference

OS - Partition the disk volumes hosting SQL Server database files (data and log) with a 64 KB allocation unit size: the atomic unit of storage in SQL Server is the page, which is 8 KB in size. Extents are groups of eight physically contiguous 8 KB pages, for a total of 64 KB, and SQL Server allocates space in extents. Hence, on a SQL Server machine, the NTFS allocation unit size of volumes hosting SQL database files (including tempdb files) should be 64 KB.
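A quick way to see which volumes host database files, as a sketch (the cluster size of each volume can then be verified from the OS with `fsutil fsinfo ntfsinfo <drive>`):

```sql
-- List the distinct volumes that host data and log files for all databases.
-- sys.dm_os_volume_stats is available in SQL Server 2008 R2 SP1 and later.
SELECT DISTINCT
    vs.volume_mount_point,
    vs.file_system_type
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
```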
OS - Instant file initialization: grant the SQL Server service account the Perform Volume Maintenance Tasks privilege so data files can be created and grown without zero-initialization.
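On recent builds (SQL Server 2012 SP4, 2016 SP1 and later), you can check from T-SQL whether instant file initialization is in effect for the engine service; a minimal sketch:

```sql
-- 'Y' means the service account holds Perform Volume Maintenance Tasks
-- and data files will not be zero-initialized on create/grow.
SELECT servicename,
       instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```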
OS - Set the OS power saving plan to High Performance.
OS (All) - Antivirus exclusions for SQL Server files: add .mdf, .ndf, .ldf, .bak, and .trn files, as well as sqlservr.exe, to the on-access scanning exclusion list.
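To enumerate the file locations that the exclusions should cover, a sketch:

```sql
-- Physical locations of every data and log file on the instance;
-- the directories containing these files are candidates for
-- antivirus on-access scanning exclusions.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,
       physical_name
FROM sys.master_files
ORDER BY database_name;
```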
OS (Windows Server 2008 or later) - Lock Pages in Memory: granting the Lock Pages in Memory privilege to the SQL Server service account is recommended on Windows Server 2008 or later only when there are signs of paging. Locking pages in memory may boost performance when paging of SQL Server memory to disk would otherwise be expected.
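On SQL Server 2016 SP1 and later, you can confirm whether the privilege is actually in effect; a minimal check:

```sql
-- LOCK_PAGES indicates Lock Pages in Memory is in use;
-- CONVENTIONAL means the privilege is not held or not used.
SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info;
```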
OS - Set an appropriate page file size.
OS - Windows security policy and permissions.
OS (Windows Server 2012 cluster) - Dynamic quorum configuration in Windows Server 2012 and 2012 R2 clusters.
Full-Text (All) - To use all CPU processors or cores to the maximum, set sp_configure 'max full-text crawl range' to the number of CPUs on the system. Using this option, you can specify the number of partitions that Microsoft SQL Server should use during a full index crawl. For example, if there are many CPUs and their utilization is not optimal, you can increase the maximum value of this option. In addition to this option, SQL Server uses a number of other factors, such as the number of rows in the table and the number of CPUs, to determine the actual number of partitions used.
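'max full-text crawl range' is an advanced option; a sketch of applying it (the value 8 is a placeholder for the CPU count on the target system):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Placeholder value: set to the number of CPUs on the system.
EXEC sp_configure 'max full-text crawl range', 8;
RECONFIGURE;
```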
SQL - Maximum degree of parallelism: as a rule of thumb, use the number of cores in a NUMA node; this is subject to any third-party vendor recommendations, of course.
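To see how many schedulers (cores) each NUMA node exposes before choosing a MAXDOP value, a sketch ('max degree of parallelism' is an advanced option, so 'show advanced options' must be enabled first; the value 8 is a placeholder):

```sql
-- Visible online schedulers per NUMA node; the per-node count is a
-- reasonable starting value for 'max degree of parallelism'.
SELECT parent_node_id,
       COUNT(*) AS schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
GROUP BY parent_node_id;

-- Placeholder value: replace 8 with the per-node core count observed above.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```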
SQL - Cost threshold for parallelism: this will vary depending on your workload requirements. If your system has an OLTP-like workload, set this to 15-20 or possibly higher, as you seek to minimize parallelism and offer more concurrency. On a DSS/OLAP/data-warehousing-like workload, consider leaving it at the default and managing parallelism through tweaking of MAXDOP if concurrency is a problem.
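Applying the OLTP-oriented guidance above, a sketch (20 is the illustrative upper end of the 15-20 range; this is an advanced option):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism', 20;
RECONFIGURE;
```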
SQL - Number of tempdb files: typically we recommend that you use no fewer than 8 tempdb data files as a starting point, and increase the count if you observe PFS/GAM/SGAM contention on your system. This recommendation applies to virtualized systems as well.
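To count the current tempdb data files, and to add one if needed, a sketch (the file name, path, and sizes are hypothetical placeholders):

```sql
-- Current number of tempdb data (ROWS) files.
SELECT COUNT(*) AS data_file_count
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- Hypothetical name/path/sizes; match the size and growth
-- settings of the existing tempdb data files.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```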
SQL - Backup compression: in our testing, we've shown that compression increases backup throughput by at least 25%, reaching 35% in some scenarios. We recommend you continue to use backup compression, with very few exceptions (one being when CPU is adversely affected by the use of compression, for example).
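A sketch of enabling compression instance-wide and requesting it for a single backup (the database name and backup path are hypothetical):

```sql
-- Make compression the instance-wide default for all backups.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or request it explicitly for one backup (hypothetical name/path).
BACKUP DATABASE SalesDB
TO DISK = 'B:\Backups\SalesDB.bak'
WITH COMPRESSION;
```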