Direct-Attached Storage Performance

The presence of 10G ports on the Synology DS2015xs opens up some interesting use-cases. For example, video production houses need high-speed storage, and direct-attached storage units usually suffice for them. Thunderbolt is popular for this purpose, both for single users and as part of a SAN. However, as 10G network interfaces become common and affordable, there is scope for NAS units to act as direct-attached storage as well. To evaluate the DAS performance of the Synology DS2015xs, we used the DAS testbed augmented with an appropriate CNA (converged network adapter), as described in the previous section. To get an idea of the available performance for different workloads, we ran a couple of quick artificial benchmarks along with a subset of our DAS test suite.

CIFS

In the first case, we evaluate the performance of a CIFS share created in a RAID-5 volume. One aspect to note is that the direct link between the NAS and the testbed is configured with an MTU of 9000 (compared to the default of 1500 used for the NAS benchmarks).
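For readers replicating this setup, enabling jumbo frames on the testbed's Linux side looks roughly like the sketch below (the interface name and NAS address are assumptions; the NAS side is set through the DSM network control panel):

```shell
# Set the 10G CNA interface to a 9000-byte MTU ("eth2" is an assumed name;
# substitute the actual interface of the converged network adapter).
ip link set dev eth2 mtu 9000

# Verify both ends agree before benchmarking: an 8972-byte ICMP payload
# plus 28 bytes of IP/ICMP headers exactly fills a 9000-byte MTU, and
# -M do forbids fragmentation, so this ping fails if either end is at 1500.
ping -M do -s 8972 -c 3 192.168.100.1   # NAS address is an assumption
```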

Synology DS2015xs - CrystalDiskMark Benchmark

Our first artificial benchmark is CrystalDiskMark, which tackles sequential accesses as well as 512 KB and 4 KB random accesses. For 4K accesses, the benchmark is repeated at a queue depth of 32. As the screenshot above shows, the Synology DS2015xs manages around 675 MBps reads and 703 MBps writes. The write number corresponds to the claims made by Synology in their marketing material, but the 675 MBps read speed is a far cry from the promised 1930 MBps. We moved on to ATTO, another artificial benchmark, to check if the results were any different.

Synology DS2015xs - ATTO Benchmark

ATTO Disk Benchmark tackles sequential accesses with different block sizes. We configured a queue depth of 10 and a master file size of 4 GB for accesses with block sizes ranging from 512 bytes to 8 MB and the results are presented above. In this benchmark, we do see 1 MB block sizes giving read speeds of around 1214 MBps.

Synology DS2015xs - 2x10 Gbps LACP - RAID-5 CIFS DAS Performance (MBps)
                    Read      Write
Photos              594.69    363.47
Videos              915.95    500.09
Blu-ray Folder      949.32    543.93

For real-world performance evaluation, we wrote and read back multi-gigabyte folders of photos, videos and Blu-ray files; the results are presented in the table above. These numbers show that it is possible to achieve speeds close to 1 GBps for real-life workloads. The advantage of a unit like the DS2015xs is that the 10G interfaces can be used as a DAS interface, while the other two 1G ports connect the unit to the rest of the network for sharing the contents seamlessly with other computers.

iSCSI

We configured a block-level (Single LUN on RAID) iSCSI LUN in RAID-5 using all available disks. Network settings were retained from the previous evaluation environment. The same benchmarks were repeated in this configuration also.
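The review connects from the Windows testbed's initiator, but for reference, discovering and logging in to such a LUN from a Linux open-iscsi initiator looks roughly like this (the NAS address and target IQN shown are placeholders, not values from our setup):

```shell
# Discover targets exported by the NAS over the direct 10G link.
iscsiadm -m discovery -t sendtargets -p 192.168.100.1

# Log in to the discovered target; the IQN below is a placeholder --
# use the one reported by the discovery step.
iscsiadm -m node -T iqn.2000-01.com.synology:DS2015xs.Target-1 \
    -p 192.168.100.1 --login

# The LUN then appears as a local block device (e.g. /dev/sdX).
```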

Synology DS2015xs - CrystalDiskMark Benchmark

Synology DS2015xs - ATTO Benchmark

The iSCSI performance trails what we obtained with CIFS, particularly for reads. Results from the real-world performance evaluation suite are presented in the table below; these numbers track what we observed in the artificial benchmarks.

Synology DS2015xs - 2x10 Gbps LACP - RAID-5 iSCSI DAS Performance (MBps)
                    Read      Write
Photos              535.28    532.54
Videos              770.41    483.97
Blu-ray Folder      734.51    505.30

Performance Analysis

The performance numbers that we obtained with teamed ports (20 Gbps) were frankly underwhelming. More worrisome, we couldn't replicate Synology's claims of upwards of 1900 MBps read throughput. To determine whether the fault lay with our particular setup, we wanted to isolate the problem to either the disk subsystem on the NAS side or the network configuration. Unfortunately, Synology doesn't provide any tools to evaluate the two separately, and 10G links require careful configuration on both sides to function optimally.

iPerf is the tool of choice for many when it comes to ensuring that the network segment is operating optimally. Unfortunately, iPerf for DSM requires an optware package that is not yet available for the Alpine platform. On the positive side, Synology has uploaded the tool chain for Alpine to SourceForge, which allowed us to cross-compile iPerf from source for the DS2015xs. Armed with iPerf on both the NAS and the testbed, we proceeded to evaluate the links operating simultaneously without the teaming overhead.
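The cross-compilation itself is a standard autoconf build pointed at the vendor toolchain. A minimal sketch, assuming the Alpine toolchain has been unpacked to /opt/alpine-toolchain and uses an arm-linux-gnueabi prefix (both are assumptions; check the actual prefix shipped in the SourceForge archive):

```shell
# Put the cross-toolchain's binaries on the PATH.
export PATH=/opt/alpine-toolchain/bin:$PATH

# Build iPerf 2 for the ARM-based Alpine SoC; --host tells configure we
# are cross-compiling, and CC selects the cross-compiler explicitly.
tar xf iperf-2.0.5.tar.gz
cd iperf-2.0.5
./configure --host=arm-linux-gnueabi CC=arm-linux-gnueabi-gcc
make

# Copy the resulting binary (src/iperf) to the NAS, e.g. over SSH,
# and run it there as the server side of the test.
```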

The screenshot above shows that the two links together saturated at around 5 Gbps (out of a theoretically possible 20 Gbps), but the culprit was our cross-compiled iPerf executable: each instance completely saturated one CPU core (25% of the quad-core CPU).
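Since each iPerf process is effectively single-threaded, the way to drive both links without a single-core bottleneck is to run one server/client pair per link as separate processes, letting the scheduler place each on its own core. A sketch of this arrangement (the NAS addresses are assumptions for a setup with one IP per 10G link):

```shell
# On the NAS: one iPerf server per port, so each TCP stream gets its own
# process and therefore its own CPU core.
iperf -s -p 5001 &
iperf -s -p 5002 &

# On the testbed: one client per server, launched concurrently, each
# targeting a different 10G link for 30 seconds.
iperf -c 192.168.100.1 -p 5001 -t 30 &
iperf -c 192.168.101.1 -p 5002 -t 30 &
wait   # collect both results before reading the reports
```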

In the CIFS case, the smbd process is not multi-threaded, and this prevents full utilization of the 10G links.
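This kind of single-core bottleneck is easy to confirm on any unit with shell access: watch per-thread CPU usage while a large transfer runs, and look for one thread pegged near 100%. A sketch (pidof can return several smbd PIDs; we take the first for illustration):

```shell
# Per-thread view in top: one smbd thread near 100% while the others
# idle confirms the single-core ceiling.
top -H -p "$(pidof smbd | awk '{print $1}')"

# Equivalent with the sysstat package installed, sampling once a second:
pidstat -t -p "$(pidof smbd | awk '{print $1}')" 1
```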

In the iSCSI case, the iscsi_trx process also seems to saturate one CPU core, leading to similar results for 10G link utilization.

On the whole, the 10G links are getting utilized, but not to the full possible extent. The utilization is definitely more than, say, four single GbE links teamed together, but the presence of two 10G links had us expecting more from the unit as a DAS.
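As a sanity check on expectations: with 9000-byte jumbo frames, a single 10 Gbps link tops out near 1239 MBps of TCP payload once Ethernet framing and IP/TCP headers are accounted for, so Synology's 1930 MBps figure is only reachable with both links driven in parallel. A quick back-of-the-envelope calculation:

```shell
# Per-link TCP goodput ceiling at 10 Gbps with MTU 9000 (IPv4, standard
# 20-byte TCP header, no options).
mtu=9000
wire=$((8 + 14 + mtu + 4 + 12))   # preamble + Ethernet header + MTU + FCS + inter-frame gap
payload=$((mtu - 20 - 20))        # MTU minus IPv4 and TCP headers
awk -v w="$wire" -v p="$payload" \
    'BEGIN { printf "%.0f MBps per link\n", 10e9 / 8 / 1e6 * p / w }'
# prints "1239 MBps per link"
```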

Comments

  • DCide - Friday, February 27, 2015 - link

    Ganesh, thanks for the response. Unless you really know the iperf code (I sure don't!) I don't believe you can make many conclusions based on the iperf performance, considering you hit a CPU bottleneck. There's no telling how much of that CPU went to other operations (such as test data creation/reading) rather than getting data across the pipe. Because of the bottleneck, the iperf results could easily have no relationship whatsoever to SSD RAID R/W performance across the network, which might not be bottlenecking at all (other than the 10GbE limits themselves, which is what we want).

    Could you please run a test with a couple of concurrent robocopys (assuming you can run multiple instances of robocopy)? I'm not sure the number of threads necessarily affects whether both teamed network interfaces are utilized. Please correct me if I'm wrong, but I think it's worth a try. In fact, if concurrent robocopys don't work, it might be worth trying concurrently running any other machine you have available with a 10GbE interface, to see if this ~1GB/s barrier can be broken.
  • usernametaken76 - Friday, February 27, 2015 - link

    Unless we're purchasing agents for the government, can we avoid terms like "COTS"? It has an odor of bureaucracy associated with it.
  • FriendlyUser - Saturday, February 28, 2015 - link

    I am curious to find out how it compares with the AMD-based QNAP 10G NAS (http://www.anandtech.com/show/8863/amd-enters-nas-... I suppose the AMD cores, at 2.4GHz, are much more powerful.
  • Haravikk - Saturday, February 28, 2015 - link

    I really don't know what to make of Synology; the hardware is usually pretty good, but the DSM OS just keeps me puzzled. On the one hand it seems flexible which is great, but the version of Linux is a mess, as most tools are implemented via a version of BusyBox that they seem unwilling to update, even though the version has multiple bugs with many of the tools.

    Granted you can install others, for example a full set of GNU tools, but there really shouldn't be any need to do this if they just kept it up-to-date. A lack of ZFS or even access to BTRFS is disappointing too, as it simply isn't possible to set these up yourself unless you're willing to waste a disk (since you HAVE to setup at least one volume before you could install these yourself).

    I dunno; if all I'm looking for is storage then I'm still inclined to go Drobo for an off-the-shelf solution, otherwise I'd look at a ReadyNAS system instead if I wanted more flexibility.
  • thewishy - Wednesday, March 4, 2015 - link

    I think the point you're missing is that people buying this sort of kit are doing so because they want to "Opt out" of managing this stuff themselves.
    I'm an IT professional, but this isn't my area. I want it to work out of the box without much fiddling. The implementation under the hood may be ugly, but I'm not looking under the hood. For me it stores my files with a decent level of data security (no substitute for backup), allows me to add extra / larger drives as I need more space, and provides a decent range of supported protocols (SMB, iSCSI, HTTP, etc.)
    ZFS and BTRFS are all well and good, but I'm not sure what practical advantage they would bring me.
  • edward1987 - Monday, February 22, 2016 - link

    You can get 1815+ a bit cheaper if you don't really need enterprise class:
    http://www.span.com/compare/DS1815+-vs-DS2015xs/46...
  • Asreenu - Thursday, September 14, 2017 - link

    We bought a couple of these a year ago. All of them had component failures, and support is notorious for running you through hoops until you give up because you don't want to be without access to your data for so long. They have ridiculous requirements to prove your purchase before they even reply to your question. In all three cases we ended up buying replacements and figuring out how to restore data ourselves. I would stick with Netgear for the support alone because that's a major sell. Anandtech shouldn't give random ratings to things they don't have experience with. Just announcing they have support doesn't mean a thing.
