Measuring file server performance can be a little tricky. To simulate real-world usage, testing a file share from a client on the network is the best way to get a good look at the expected speeds. However, that test will traverse multiple layers, with the overall speed limited to the slowest link. At the lowest level you have the speed of the storage on the file server. Next, there's the speed of the network, including on the file server side, between the server and the clients, and on the client side. You also have to consider how fast the storage on the client end is if you will be copying files to or from the client's disks.

In this post, we are going to look at each of the components that affect the speed at which clients can read from and write to a file server. After gathering baseline stats, we'll look at two different ways to test the performance of a file server: using the Windows Explorer file copy UI and using the diskspd command line tool. We'll find that the speed shown in the Windows Explorer file copy interface isn't a reliable measurement of the disk throughput on the backend.

FS1 - Our file server that we want to test the performance of. Maximum disk performance: 4,000 IOPS and 32 MB/s.

The client machine's maximum disk performance: 16,000 IOPS and 128 MB/s.

The disk performance noted above is for the temporary drive attached to each VM. The temporary drive in Azure is an SSD disk that's directly connected to the host node that the VM is running on. We'll be using the temporary drive (D:) for our tests; you get decent throughput on this disk without having to spin up (and pay for) a separate data disk.

You may notice that the client machine is a considerably higher-spec VM than the server. This is intentional: we want to ensure that the client machine isn't a bottleneck while we're testing. In a real-world deployment, it's likely the file server would be at least as fast as the clients.
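To make it concrete what a throughput test actually measures, here is a minimal sketch in Python of a sequential-write test: write a known number of bytes, time it, and divide to get MB/s. This is an illustration only, not a substitute for diskspd (which also exercises random I/O, multiple threads, and queue depths); the path and sizes shown are hypothetical.

```python
import os
import time

def measure_write_throughput(path, total_bytes, block_size=64 * 1024):
    """Write total_bytes to path in block_size chunks and return MB/s.

    A simplified sequential-write test. Real tools like diskspd also
    test random I/O patterns, multiple threads, and deeper queues.
    """
    block = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # force data to disk so OS caching doesn't inflate the number
    elapsed = time.perf_counter() - start
    return (written / (1024 * 1024)) / elapsed

# Hypothetical usage: write 64 MB to the VM's temporary drive
# mbps = measure_write_throughput(r"D:\testfile.dat", 64 * 1024 * 1024)
```

Note the `os.fsync` call: without it, a small test file can land entirely in the OS write cache and report speeds far above what the disk can sustain, which is the same caching effect that makes the Windows Explorer copy dialog an unreliable gauge of backend throughput.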