The eight RAID adapters were tested on a system equipped with a Tyan Thunder K8S mainboard, a 1.4GHz Opteron 240 processor, 2GB of RAM and eight Western Digital WD740GD hard drives. The Tyan mainboard has two independent 133MHz PCI-X buses, so the adapters could be tested under optimal conditions. Thanks to the eight Raptors, the cards could be tested with the fastest SATA disks available at the time. Using Raptor WD740GD drives ensures that any real bottleneck causing performance scaling to degrade would surface during the performance tests. We are particularly grateful to the Dutch Western Digital branch office for providing the Raptor samples.
Tyan Thunder K8S
The SATA RAID adapters were rigorously tested through a vast test program in which the controllers had to work their way through an almost complete version of the Tweakers.net benchmark suite, covering a total of fifteen different RAID configurations (in the case of an 8-port card). For obvious reasons, the temperature and sound measurements were omitted. Furthermore, the low-level tests in AnalyzeDisk were not run, because this review concentrates mainly on performance under realistic workloads. The tests for this article are much more extensive than those for last year's RAID review, when only a selection of the desktop benchmarks was run on the adapters and our server workload simulations had not yet been developed. With the test system in effective use for 20 hours a day, it took roughly a week to perform all required tests on an eight-port adapter.
Because the performance scaling of the adapters is not predictable, there was no alternative but to run the tests on a large number of RAID levels. The test scheme is shown below. Naturally, a shortened test program was used on adapters with four and six ports.
RAID level | Disk drives | Cache mode | Array status
The tests with eight disks in RAID 50 were only performed if the six-drive RAID 50 configuration showed higher performance than the same six drives in RAID 5. In practice RAID 50 (a stripe consisting of two RAID 5 arrays) is usually slower than RAID 5, and because its effective capacity is lower, there is little value in testing a configuration with eight disks in RAID 50 if the RAID 50 test with six disks already performed worse than a RAID 5 configuration. No RAID 0 and RAID 10 tests were run with more than four disks. The risk of a drive failure in large RAID 0 configurations becomes so great that no one will seriously consider using more than four disks in a RAID 0 configuration, and RAID 10 offers a very low effective storage capacity. For this reason, RAID 5 is the preferred choice for large RAID arrays.
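The capacity trade-offs described above can be illustrated with a short sketch. This is a hypothetical helper written for this article, not part of the test suite; it assumes identical drives and, for RAID 50, a stripe of exactly two RAID 5 sub-arrays, as in the review.

```python
# Illustrative sketch: effective (usable) capacity of common RAID levels,
# assuming n identical disks. The 74GB default matches the Raptor WD740GD.

def effective_capacity(level: str, disks: int, disk_gb: float = 74.0) -> float:
    """Return the usable capacity in GB for an array of `disks` drives."""
    if level == "RAID0":
        return disks * disk_gb            # pure striping, no redundancy
    if level == "RAID5":
        return (disks - 1) * disk_gb      # one disk's worth of parity
    if level == "RAID10":
        return disks // 2 * disk_gb       # mirrored stripes: half the disks
    if level == "RAID50":
        # stripe of two RAID 5 sub-arrays: one parity disk per sub-array
        return (disks - 2) * disk_gb
    raise ValueError(f"unknown RAID level: {level}")

# With the eight 74GB Raptors used in this review:
for level in ("RAID0", "RAID5", "RAID10", "RAID50"):
    print(level, effective_capacity(level, 8))
```

With eight drives this yields 592GB for RAID 0, 518GB for RAID 5, 296GB for RAID 10 and 444GB for RAID 50, which makes the capacity argument against RAID 10 and RAID 50 concrete.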
The write-through tests were restricted to a number of low-level benchmarks and the workload simulations taken from the Server StorageMark 2004 suite. The majority of workstation users will use write-back caching even if no Battery Backup Unit is present, so it is not very useful to test desktop and workstation performance in write-through configurations.
To test how the adapters perform when a disk failure occurs in a RAID 5 array, tests were run on a degraded array from which one hard disk had been removed. No measurements were taken of performance during the rebuild of an array, because not all adapters use the same rebuild priority rate. A rebuild rate of 30 per cent is standard (meaning that the rebuild process is limited to a disk usage of 30 per cent), but many adapters do not offer a free choice of percentages; in many cases the user has to choose from predefined profiles. Since the available rebuild rates did not correspond across adapters, a fair comparison was not possible.
Performance tuning is extremely important to obtain the maximum performance from a RAID adapter. Some adapters, such as the RAIDCore BC4852, have a very user-friendly setup, whereas other cards offer many tuning options. Our goal was to test all RAID adapters with their optimal settings. Because it is not feasible to test all possible configurations, we had to use a pragmatic method for finding them. First the optimal cache setting for the adapter was determined at the default stripe size (64K or 128K). Then we searched for the optimal stripe size by testing the best cache configuration at larger and smaller stripe sizes, starting from the 64K or 128K default, until we found a stripe size beyond which performance no longer improved. The performance comparison was based on the weighted and indexed averages of the Desktop, Gaming, Workstation and Server StorageMark 2004 suites, in which we differentiated between desktop and server workloads. For the desktop benchmarks, the stripe size with the best average performance across the desktop, gaming and workstation indices was chosen, so on the coming pages we do not show the optimal result per desktop index. Most people use their system for all kinds of desktop, workstation and gaming applications and do not have the option of optimizing the stripe size and cache settings for specific applications. More information about the Tweakers.net benchmarks can be read in the Benchmark Database (both are only available in Dutch).
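The stripe-size search described above is essentially a neighbour-by-neighbour hill climb. The sketch below is a simplified illustration of that procedure, not the actual test harness: `run_benchmark_index` is a hypothetical stand-in for the weighted StorageMark 2004 index, and the list of stripe sizes is an assumption about what a typical adapter offers.

```python
# Hedged sketch of the pragmatic tuning procedure: starting from the
# adapter's default stripe size, keep moving to an adjacent stripe size
# as long as the indexed benchmark score improves.

STRIPE_SIZES_KB = [16, 32, 64, 128, 256, 512, 1024]  # assumed options

def find_optimal_stripe(run_benchmark_index, default_kb: int = 64) -> int:
    """Hill-climb over neighbouring stripe sizes until no improvement."""
    best = default_kb
    best_score = run_benchmark_index(best)
    improved = True
    while improved:
        improved = False
        i = STRIPE_SIZES_KB.index(best)
        # test the adjacent smaller and larger stripe sizes
        for j in (i - 1, i + 1):
            if 0 <= j < len(STRIPE_SIZES_KB):
                score = run_benchmark_index(STRIPE_SIZES_KB[j])
                if score > best_score:
                    best, best_score = STRIPE_SIZES_KB[j], score
                    improved = True
    return best
```

For example, with a toy score function that peaks at 128K, `find_optimal_stripe(lambda kb: -abs(kb - 128))` walks from the 64K default up to 128K and stops once neither neighbour improves on it.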
It is noteworthy how reliably the Raptor WD740GDs performed during the many hours of testing. Despite the high duty cycles the disks had to endure and the hundreds of millions of I/O operations they had to process, all disks are still in excellent condition. Apart from the irritation caused by the locking feature of the 3ware Escalade 9500S-8, for which the Raptors were not to blame, the configuration of the disk arrays went without complications. That is very different from large SCSI arrays. Even in server rackmounts with integrated SCSI backplanes and fixed SCSI IDs, it is sometimes problematic to get the desired bus speed, and booting an array of, let's say, four 68-pin LVD disks can be a real challenge due to loose SCSI cables and manual SCSI ID settings. Serial ATA has none of these problems thanks to its point-to-point topology and serial data transfer.