
Are Your Disks Lagging?

Updated: Jun 18


Recently, one of my clients was experiencing performance issues with their SQL Server. The server was not underpowered in terms of CPU or memory, so after reviewing the wait statistics, we decided to examine the disks and SAN to determine the cause.


I used the query below to look at the read and write latency for each database file.

SELECT
    [ReadLatency] =
        CASE WHEN [num_of_reads] = 0
            THEN 0 ELSE ([io_stall_read_ms] / [num_of_reads]) END,
    [WriteLatency] =
        CASE WHEN [num_of_writes] = 0
            THEN 0 ELSE ([io_stall_write_ms] / [num_of_writes]) END,
    [AvgBPerRead] =
        CASE WHEN [num_of_reads] = 0
            THEN 0 ELSE ([num_of_bytes_read] / [num_of_reads]) END,
    [AvgBPerWrite] =
        CASE WHEN [num_of_writes] = 0
            THEN 0 ELSE ([num_of_bytes_written] / [num_of_writes]) END,
    [AvgBPerTransfer] =
        CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
            THEN 0 ELSE
                (([num_of_bytes_read] + [num_of_bytes_written]) /
                ([num_of_reads] + [num_of_writes])) END,
    LEFT ([mf].[physical_name], 2) AS [Drive],
    DB_NAME ([vfs].[database_id]) AS [DB],
    [mf].[physical_name]
FROM
    sys.dm_io_virtual_file_stats (NULL,NULL) AS [vfs]
	INNER JOIN sys.master_files AS [mf]
		ON [vfs].[database_id] = [mf].[database_id]
		AND [vfs].[file_id] = [mf].[file_id]
ORDER BY 
    [WriteLatency] DESC;

Here is the query output, and as you can see, the write latency is quite high.

Latency Results from SQL Server

Ideally, the latency for your database files should be 20 ms or less. For tempdb, it should be 10 ms or less.
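
As a quick check against those targets, here is a hedged variation of the query above that returns only the files exceeding them (20 ms for user databases, 10 ms for tempdb, which is database_id 2):

-- Flag only the database files whose average read or write latency
-- exceeds the rough targets mentioned above.
SELECT
    DB_NAME([vfs].[database_id]) AS [DB],
    [mf].[physical_name],
    [vfs].[io_stall_read_ms]  / NULLIF([vfs].[num_of_reads], 0)  AS [ReadLatencyMs],
    [vfs].[io_stall_write_ms] / NULLIF([vfs].[num_of_writes], 0) AS [WriteLatencyMs]
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS [vfs]
INNER JOIN sys.master_files AS [mf]
    ON [vfs].[database_id] = [mf].[database_id]
    AND [vfs].[file_id] = [mf].[file_id]
WHERE [vfs].[io_stall_read_ms]  / NULLIF([vfs].[num_of_reads], 0) >
        CASE WHEN [vfs].[database_id] = 2 THEN 10 ELSE 20 END
   OR [vfs].[io_stall_write_ms] / NULLIF([vfs].[num_of_writes], 0) >
        CASE WHEN [vfs].[database_id] = 2 THEN 10 ELSE 20 END
ORDER BY [WriteLatencyMs] DESC;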


In this case, most databases consisted of a single file, and most were located on the E drive.


To reduce the latency, we need to add additional drives and split up the objects within each database.


Filegroups

I like to split up each database by table data (heap/clustered indexes), non-clustered indexes, and blob data, naming them FGData, FGIndexes, and FGBlob.


To add new filegroups, you can use the UI or run the T-SQL below.

USE [master]
GO
ALTER DATABASE [TeraData] ADD FILEGROUP [FGData]
GO
ALTER DATABASE [TeraData] ADD FILEGROUP [FGIndexes]
GO
ALTER DATABASE [TeraData] ADD FILEGROUP [FGBlob]
GO
Default Filegroup

If we create new filegroups and move the data, we will want to ensure that no new objects are created in the primary filegroup going forward, so we need to change the default filegroup.

Database Filegroup Properties window

Here is a script to manually set the default filegroup.

USE [TeraData]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups 
        WHERE is_default=1 AND name = N'FGData') 
    ALTER DATABASE [TeraData] 
        MODIFY FILEGROUP [FGData] DEFAULT
GO

Once we have added the new filegroups, we can add new files to each filegroup.

USE [master]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGBlob', FILENAME = N'D:\Databases\TeraData_FGBlob.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGBlob]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGData', FILENAME = N'E:\Databases\TeraData_FGData.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGData]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGIndexes', FILENAME = N'F:\Databases\TeraData_FGIndexes.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGIndexes]
GO
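
Adding the filegroups and files does not move anything by itself; existing tables and indexes stay in the primary filegroup until they are rebuilt. As a rough sketch (the table and index names below are hypothetical, used only for illustration), rebuilding an index with DROP_EXISTING relocates it onto one of the new filegroups:

USE [TeraData]
GO
-- Hypothetical example: rebuilding the clustered index moves the table data to FGData.
CREATE CLUSTERED INDEX [CIX_Orders] ON [dbo].[Orders] ([OrderID])
    WITH (DROP_EXISTING = ON) ON [FGData];
GO
-- Hypothetical example: rebuilding a non-clustered index moves it to FGIndexes.
CREATE NONCLUSTERED INDEX [IX_Orders_CustomerID] ON [dbo].[Orders] ([CustomerID])
    WITH (DROP_EXISTING = ON) ON [FGIndexes];
GO

Heaps and LOB data take more work to relocate (a heap has no index to rebuild, and LOB columns stay on the filegroup where the table was created), so treat this as the simple case.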

If latency persists and adding faster hardware is not an option, you can spread the load by adding more drives to your SAN and distributing additional files for each filegroup across those drives. This gives you more aggregate throughput and better performance.


Below is a sample script that adds four files for each filegroup.

USE [master]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGData1', FILENAME = N'G:\Databases\TeraData_FGData1.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGData]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGData2', FILENAME = N'F:\Databases\TeraData_FGData2.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGData]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGData3', FILENAME = N'E:\Databases\TeraData_FGData3.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGData]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGData4', FILENAME = N'D:\Databases\TeraData_FGData4.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGData]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGIndexes1', FILENAME = N'D:\Databases\TeraData_FGIndexes1.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGIndexes]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGIndexes2', FILENAME = N'E:\Databases\TeraData_FGIndexes2.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGIndexes]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGIndexes3', FILENAME = N'F:\Databases\TeraData_FGIndexes3.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGIndexes]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGIndexes4', FILENAME = N'G:\Databases\TeraData_FGIndexes4.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGIndexes]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGBlob1', FILENAME = N'D:\Databases\TeraData_FGBlob1.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGBlob]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGBlob2', FILENAME = N'E:\Databases\TeraData_FGBlob2.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGBlob]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGBlob3', FILENAME = N'F:\Databases\TeraData_FGBlob3.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGBlob]
GO

ALTER DATABASE [TeraData] ADD FILE (NAME = N'TeraData_FGBlob4', FILENAME = N'G:\Databases\TeraData_FGBlob4.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [FGBlob]
GO

Important Note

When adding multiple files per filegroup, make sure every file in a filegroup is the same size and has the same growth increment. SQL Server fills the files proportionally, so keeping them equally sized spreads the data evenly (effectively round-robin) across each filegroup. A quick way to verify the current sizes and growth settings is shown below.
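
As a hedged sanity check, run something like this in the database to list each data file's size and growth per filegroup and confirm they match (size and growth are stored in 8 KB pages, hence the conversion to MB):

USE [TeraData]
GO
-- List every data file per filegroup with its current size and growth setting.
SELECT
    fg.[name] AS [Filegroup],
    df.[name] AS [LogicalFile],
    df.[size] * 8 / 1024 AS [SizeMB],
    CASE WHEN df.[is_percent_growth] = 1
         THEN CAST(df.[growth] AS varchar(10)) + ' %'
         ELSE CAST(df.[growth] * 8 / 1024 AS varchar(10)) + ' MB'
    END AS [Growth]
FROM sys.database_files AS df
INNER JOIN sys.filegroups AS fg
    ON df.[data_space_id] = fg.[data_space_id]
ORDER BY fg.[name], df.[name];
GO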


To keep the files growing in step, you also need to enable trace flag -T1117 if you are on SQL Server 2014 or older, or, on SQL Server 2016 and newer, set each database filegroup to autogrow all files.


SQL Server 2014 and Older

On SQL Server 2014 and older, open SQL Server Configuration Manager and add -T1117 to the SQL Server startup parameters.

SQL Server Configuration Manager startup parameters added

Ensure the trace flag appears in the list and then click OK. The setting takes effect at the next service restart, or you can turn the trace flag on immediately without restarting, as shown below.
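
To enable the trace flag for the whole instance right away (the startup parameter is still needed for it to survive the next restart), you can use DBCC TRACEON and confirm it with DBCC TRACESTATUS:

-- Enable trace flag 1117 globally (SQL Server 2014 and older).
DBCC TRACEON (1117, -1);
GO
-- Confirm the trace flag is active globally.
DBCC TRACESTATUS (1117, -1);
GO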


SQL Server 2016 and Newer

If you are using SQL Server 2016 or newer, set the Autogrow All Files option on each filegroup instead. To do this, the database needs to be in single-user mode, so if this is an existing database, make sure no users are connected first.

Database Properties FileGroups Default FileGroup

Here is the script to manually set all of the filegroups to autogrow all files.

USE [master];
GO
ALTER DATABASE [TeraData] SET SINGLE_USER WITH NO_WAIT;
GO

ALTER DATABASE [TeraData] MODIFY FILEGROUP [FGData] AUTOGROW_ALL_FILES;
GO
ALTER DATABASE [TeraData] MODIFY FILEGROUP [FGIndexes] AUTOGROW_ALL_FILES;
GO
ALTER DATABASE [TeraData] MODIFY FILEGROUP [FGBlob] AUTOGROW_ALL_FILES;
GO

ALTER DATABASE [TeraData] SET MULTI_USER WITH NO_WAIT;
GO
Instant File Initialization

Be sure Instant File Initialization is enabled for the account running the SQL Server database service. Without it, every new chunk of a data file must be zero-initialized before it can be used, so enabling it keeps file growth fast as the files expand.
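
On recent builds (roughly SQL Server 2016 SP1 and later), you can check this from T-SQL; the query below against sys.dm_server_services is a quick sketch of that check:

-- Check whether the SQL Server service account can use Instant File Initialization.
-- The instant_file_initialization_enabled column only exists on newer builds.
SELECT
    [servicename],
    [service_account],
    [instant_file_initialization_enabled]
FROM sys.dm_server_services
WHERE [servicename] LIKE 'SQL Server (%';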


Conclusion

Going from one drive at around 200 MB/s to twelve drives at 200 MB/s increases our potential throughput from 200 MB/s to 2,400 MB/s. There is a catch, though: this only works if there is enough bandwidth between your server and your SAN. A 1 Gb/s connection to the SAN will quickly become the bottleneck, so I would suggest at least 8 Gb/s of dedicated bandwidth between the server and the SAN.



©2021-2025 by Jeff Taylor
