Hard drive interfaces: SCSI, SAS, FireWire, IDE, SATA. Adaptec SAS controllers.

Server hard drives: how to choose

The hard drive is the most valuable component in any computer, because it stores the information that the computer and its user work with. In the case of a personal computer, a person sitting down at the machine expects to get past the operating system loading screen and start working with data that the hard drive dutifully delivers up from its depths. If we are talking about a hard disk, or even an array of them inside a server, then there are tens, hundreds or thousands of such users expecting access to their personal or work data, and all of their quiet work, rest and entertainment depends on the devices that constantly store it. From this comparison alone it is clear that home-class and industrial-class hard drives face unequal demands: in the first case one user works with the drive, in the second, thousands. The server disk must therefore be many times more reliable, faster and more stable than the desktop one, because many users work with it and rely on it. This article looks at the types of hard drives used in the corporate sector and the design features that let them achieve the highest reliability and performance.

SAS and SATA drives - so similar and so different

Until recently, the standards for industrial-grade and consumer-grade hard drives differed significantly and were incompatible (SCSI and IDE). The situation has changed: the overwhelming majority of the market now belongs to drives of the SATA and SAS (Serial Attached SCSI) standards. The SAS connector is versatile and form-factor compatible with SATA. This allows you to connect directly to a SAS system both high-speed but relatively small-capacity SAS drives (up to 300 GB at the time of this writing) and slower but far more capacious SATA drives (up to 2 TB at the time of this writing). Thus, in one disk subsystem you can combine vital applications that require high performance and online access to data with more cost-effective applications that need a lower cost per gigabyte.

This design compatibility benefits both backplane manufacturers and end users, since it reduces equipment and design costs.

In other words, both SAS and SATA devices can be connected to SAS connectors, while only SATA devices can be connected to SATA connectors.

SAS and SATA - high speed and large capacity. What to choose?

SAS disks, which replaced SCSI disks, fully inherited the latter's key characteristics: spindle speed (15,000 rpm) and standard capacities (36, 74, 147 and 300 GB). However, SAS itself differs significantly from SCSI. Let's briefly consider the main differences and features. The SAS interface uses a point-to-point connection: each device is connected to the controller by a dedicated channel, whereas SCSI works over a shared bus.

SAS supports a large number of devices (up to 16,384), while the SCSI interface supports 8, 16 or 32 devices on the bus.

The SAS interface supports data transfer rates of 1.5, 3 or 6 Gb/s per device, while SCSI bus bandwidth is not dedicated to each device but shared among all of them.

SAS supports the connection of slower SATA devices.

A SAS configuration is much easier to assemble and install, and such a system is easier to scale. In addition, SAS drives inherited the reliability of SCSI hard drives.

When choosing between a SAS and a SATA disk subsystem, you need to be guided by the functions the server or workstation will perform. To do this, answer the following questions:

1. How many concurrent, diverse requests will the disk handle? If many, your definitive choice is SAS disks. Also choose SAS if your system will serve a large number of users.

2. How much information will be stored on the disk subsystem of your server or workstation? If more than 1-1.5 TB, you should pay attention to a system based on SATA hard drives.

3. What is the budget for the server or workstation? Remember that in addition to SAS disks you will need a SAS controller, which must also be budgeted for.

4. Do you anticipate future growth in data volume, performance or fault-tolerance requirements? If yes, you need a SAS-based disk subsystem: it is easier to scale and more reliable.

5. Will your server handle critical data and applications? If so, your choice is SAS drives, which are designed for heavy operating conditions.

A reliable disk subsystem means not only high-quality hard drives from a renowned manufacturer but also an external disk controller; controllers will be discussed in one of the following articles. Now let's consider SATA disks: what types exist and which should be used when building server systems.

SATA drives: consumer and industrial sector

SATA drives are used everywhere, from consumer electronics and home computers to high-performance workstations and servers, and they come in several subtypes. There are drives for household appliances, with low heat generation and power consumption and, as a result, low performance; there are mid-range drives for home computers; and there are drives for high-performance systems. In this article we will look at the class of hard drives intended for performance systems and servers.

Performance characteristics: Server-grade HDD / Desktop-grade HDD
Rotational speed: 7,200 rpm (nominal) / 7,200 rpm (nominal)
Cache size: no data / no data
Average latency: 4.20 ms (nominal) / 6.35 ms (nominal)
Transfer rate, read from drive cache (Serial ATA): 3 Gb/s maximum / 3 Gb/s maximum

Physical characteristics
Formatted capacity: 1,000,204 MB / 1,000,204 MB
Interface: SATA 3 Gb/s / SATA 3 Gb/s
Number of user-addressable sectors: 1,953,525,168 / 1,953,525,168

Dimensions
Height: 25.4 mm / 25.4 mm
Length: 147 mm / 147 mm
Width: 101.6 mm / 101.6 mm
Weight: 0.69 kg / 0.69 kg

Impact resistance
Operating: 65 G, 2 ms / 30 G, 2 ms
Non-operating: 250 G, 2 ms / 250 G, 2 ms

Temperature
Operating: 0°C to 60°C / 0°C to 50°C
Non-operating: -40°C to 70°C / -40°C to 70°C

Humidity
Operating: 5-95% relative humidity / 5-95% relative humidity
Non-operating: 5-95% relative humidity / 5-95% relative humidity

Vibration
Operating, linear: 20-300 Hz, 0.75 g (0 to peak) / 22-330 Hz, 0.75 g (0 to peak)
Operating, random: 0.004 g²/Hz (10-300 Hz) / 0.005 g²/Hz (10-300 Hz)
Non-operating, low frequency: 0.05 g²/Hz (10-300 Hz) / 0.05 g²/Hz (10-300 Hz)
Non-operating, high frequency: 20-500 Hz, 4.0 G (0 to peak) / 20-500 Hz, 4.0 G (0 to peak)

The table shows the characteristics of hard drives from one of the leading manufacturers: one column lists a server-class SATA hard drive, the other a regular desktop SATA hard drive.

The table shows that the disks differ not only in performance but also in operational characteristics, which directly affect the lifespan and successful operation of the drive, even though outwardly these drives differ insignificantly. Let's consider the technologies and features that make this possible:

A reinforced hard disk shaft (spindle), which some manufacturers fix at both ends; this reduces the influence of external vibration and contributes to accurate positioning of the head assembly during read and write operations.

Special intelligent technologies that compensate for both linear and angular vibration, reducing head-positioning time and increasing disk performance by up to 60%.

A function that limits error-recovery time in RAID arrays, preventing drives from being dropped out of the RAID, a failure mode characteristic of conventional hard drives.

Adjustment of head flying height, combined with technology that prevents contact with the platter surface, significantly extending the life of the disk.

A wide range of self-diagnostic functions that predict in advance when a hard drive will fail and warn the user, leaving time to save information to a backup drive.

Features that reduce the rate of unrecoverable read errors, increasing the reliability of a server hard drive compared to conventional drives.

Speaking about the practical side of the issue, we can confidently say that specialized hard drives "behave" much better in servers: technical support receives several times fewer calls about unstable RAID arrays and failed hard drives. Manufacturer support for the server segment is also much faster than for conventional drives, because the industrial sector is the priority for any storage manufacturer; it is there that the most advanced technologies stand guard over your information.

An alternative to SAS disks:

Western Digital VelociRaptor hard drives. These 10,000 rpm drives feature a 6 Gb/s SATA interface and a 64 MB cache. The MTBF of these drives is 1.4 million hours.
More details on the manufacturer's website www.wd.com

You can order a server build based on SAS drives or their alternatives from our Status company in St. Petersburg, where you can also buy or order SAS hard drives:

  • call +7-812-385-55-66 in St. Petersburg
  • write to the address
  • leave a request on our website on the "Online Application" page

SAS (Serial Attached SCSI) is a serial computer interface designed for connecting various data storage devices, such as hard drives and tape drives. SAS was designed to replace the parallel SCSI interface and uses the same SCSI command set.

SAS is backward compatible with the SATA interface: SATA II and SATA 6 Gb/s devices can be connected to a SAS controller, but SAS devices cannot be connected to a SATA controller. The latest SAS implementations provide data transfer rates of up to 12 Gb/s per lane; a 24 Gb/s SAS specification is expected by 2017.

SAS combines the advantages of SCSI (deep command-queue sorting, good scalability, high noise immunity, long maximum cable length) and of Serial ATA (thin, flexible, cheap cables; hot-plug; a point-to-point topology that achieves higher performance in complex configurations) with new features of its own, such as an advanced connection topology built on hubs called SAS expanders, the aggregation of two SAS channels into one (for both reliability and performance), and support for both SAS and SATA disks on the same controller.

Combined with a new addressing scheme, this allows up to 128 devices per port and up to 16,256 devices per controller, with no jumper manipulation required. The 2 TB limit on logical device size has also been removed.

The maximum cable length between two SAS devices is 10 m when using passive copper cables.

Strictly speaking, the SAS data transfer protocol comprises three protocols at once: SSP (Serial SCSI Protocol), which carries SCSI commands; SMP (Serial Management Protocol), which handles management commands and is responsible, for example, for interacting with SAS expanders; and STP (SATA Tunneled Protocol), which implements support for SATA devices.

SAS controllers currently produced have internal connectors of the SFF-8643 type (also called mini-SAS HD), although SFF-8087 (mini-SAS) connectors, which carry four SAS channels, may still be encountered.


The external version of the interface uses the SFF-8644 connector, though the older SFF-8088 may still be encountered; it likewise carries four SAS channels.

SAS controllers are fully compatible with SATA drives and SATA cages/backplanes; the connection is usually made with a breakout cable that looks something like this:


SFF-8643 -> 4 x SAS / SATA

SAS cages/backplanes usually have SATA connectors on the drive side, and you can always insert regular SATA drives into them, which is why such cages are usually called SAS/SATA.

There are also reverse versions of such a cable, for connecting a backplane with internal SFF-8087 connectors to a SAS controller that has regular SATA connectors. The forward and reverse cables are not interchangeable.

SAS drives cannot be connected to a SATA controller or installed in a SATA cage / backplane.


To connect SAS drives to a controller with SFF-8643 or SFF-8087 internal connectors without using SAS cages, you must use an SFF-8643 -> SFF-8482 or SFF-8087 -> SFF-8482 cable, respectively.

The existing versions of the SAS interface (1.0, 2.0 and 3.0) are mutually compatible: a SAS 2.0 disk can be connected to a SAS 3.0 controller and vice versa. The future 24 Gb/s version will also be backward compatible.

SAS connector types

(Comparison table of connector types: image, codename, also known as, external/internal placement, number of contacts, number of devices.)

Over the past two years, few changes have accumulated:

  • Supermicro is ditching the proprietary "flipped" UIO form factor for controllers. Details will be below.
  • LSI 2108 (SAS2 RAID with 512MB cache) and LSI 2008 (SAS2 HBA with optional RAID support) are still in service. Products based on these chips, both from LSI and from OEM partners, are fairly well debugged and still relevant.
  • The LSI 2208 appeared (the same SAS2 RAID with the LSI MegaRAID stack, only with a dual-core processor and 1024 MB of cache), as did the LSI 2308 (an improved version of the LSI 2008 with a faster processor and PCI-E 3.0 support).

Moving from UIO to WIO

As you remember, UIO cards are ordinary PCI-E x8 cards with the entire component base located on the back side of the board, i.e., on top when installed in the left riser. This form factor was needed to fit boards into the lowest slot of the server, which allowed four boards to be placed in the left riser. UIO is not only a form factor for expansion cards; it also covers cases designed for the risers, the risers themselves, and motherboards of a special form factor with a cutout for the lower expansion slot and slots for installing risers.
This solution had two problems. First, the non-standard form factor of the expansion cards limited the customer's choice: only a few SAS, InfiniBand and Ethernet controllers exist in the UIO form factor. Second, there are not enough PCI-E lanes in the riser slots - only 36, of which just 24 go to the left riser, which is clearly not enough for four cards with PCI-E x8.
What is WIO? It turned out that four boards can be placed in the left riser without having to flip the sandwich "butter side up", so risers for regular boards appeared (RSC-R2UU-A4E8+). The lane shortage was then solved (there are now 80 lanes) by using slots with a higher contact density.
UIO riser RSC-R2UU-UA3E8 +
WIO riser RSC-R2UW-4E8

Results:
  • WIO risers cannot be installed on UIO motherboards (such as X8DTU-F).
  • UIO risers cannot be installed in new WIO motherboards.
  • There are WIO risers (for WIO motherboards) that have a UIO slot for cards, in case you still have UIO controllers; they are used in Socket B2 platforms (6027B-URF, 1027B-URF, 6017B-URF).
  • There will be no new controllers in the UIO form factor. For example, the USAS2LP-H8iR controller on the LSI 2108 chip will be the last one; there will be no LSI 2208 under UIO - just a regular MD2 card with PCI-E x8.

PCI-E controllers

At the moment, three types are relevant: RAID controllers based on the LSI 2108/2208 and HBAs based on the LSI 2308. There is also a mysterious SAS2 HBA, the AOC-SAS2LP-MV8 on a Marvell 9480 chip, but we won't cover it because of its exoticism. Most use cases for internal SAS HBAs are ZFS storage under FreeBSD and various Solaris flavors, and because those operating systems have no support problems with these chips, the choice falls on the LSI 2008/2308 in 100% of cases.
LSI 2108
In addition to the UIO-format AOC-USAS2LP-H8iR mentioned in the addendum, two more controllers have been added:

AOC-SAS2LP-H8iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512 MB cache, 8 internal ports (2 SFF-8087 connectors). It is an analogue of the LSI 9260-8i controller but manufactured by Supermicro; there are minor differences in board layout, and the price is $40-50 lower than LSI's. All additional LSI options are supported: activation keys for FastPath and CacheCade 2.0, and battery protection of the cache via LSIiBBU07 and LSIiBBU08 (BBU08 is now preferred: it has an expanded temperature range and includes a cable for remote mounting).
Despite the emergence of more efficient controllers based on the LSI 2208, the LSI 2108 remains relevant thanks to its reduced price. Its performance with conventional HDDs is sufficient in any scenario, and its IOPS ceiling of 150,000 with SSDs is more than enough for most budget solutions.

AOC-SAS2LP-H4iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512 MB cache, 4 internal + 4 external ports. Analogous to the LSI 9280-4i4e controller. It is convenient in expander enclosures, since there is no need to route an expander output outside to connect additional JBODs, and in 1U enclosures for 4 disks when the ability to expand the number of disks must be provided. It supports the same BBUs and activation keys.
LSI 2208

AOC-S2208L-H8iR
LSI 2208, SAS2 RAID 0/1/5/6/10/50/60, 1024 MB cache, 8 internal ports (2 SFF-8087 connectors). It is analogous to the LSI 9271-8i controller. The LSI 2208 is a further development of the LSI 2108: the processor became dual-core, raising the IOPS ceiling to 465,000, PCI-E 3.0 support was added, and the cache grew to 1 GB.
The controller supports BBU09 battery cache protection and CacheVault flash protection. Supermicro supplies them under part numbers BTR-0022L-LSI00279 and BTR-0024L-LSI00297, but it is easier to purchase them from us through the LSI sales channel (the second half of each part number is the native LSI part number). MegaRAID Advanced Software Options activation keys are also supported: AOC-SAS2-FSPT-ESW (FastPath) and AOCCHCD-PRO2-KEY (CacheCade Pro 2.0).
LSI 2308 (HBA)

AOC-S2308L-L8i and AOC-S2308L-L8e
LSI 2308, SAS2 HBA (RAID 0/1/1E with IR firmware), 8 internal ports (2 SFF-8087 connectors). It is the same controller shipped with different firmware: the AOC-S2308L-L8e carries IT firmware (pure HBA), the AOC-S2308L-L8i carries IR firmware (with RAID 0/1/1E support). The difference is that the L8i can work with both IR and IT firmware, while the L8e works only with IT firmware (IR is locked). It is analogous to the LSI 9207-8i controller. Differences from the LSI 2008: a faster chip (800 MHz, raising the IOPS ceiling to 650,000) and PCI-E 3.0 support. Applications: software RAID (ZFS, for example) and budget servers.
There will be no cheap RAID 5-capable controllers based on this chip (that would be the iMR stack; among off-the-shelf controllers, the LSI 9240).

Onboard controllers

In its latest products (X9 boards and platforms built on them), Supermicro denotes the presence of an LSI SAS2 controller with the digit "7" in the part number, and a chipset SAS controller (Intel C600) with the digit "3". However, no distinction is made between the LSI 2208 and 2308, so be careful when choosing a board.
  • The LSI 2208-based controller soldered onto motherboards supports a maximum of 16 disks. When a 17th is added, it simply is not detected, and the MSM log shows the message "PD is not supported". This is compensated for by a significantly lower price: for example, the bundle "X9DRHi-F + external LSI 9271-8i controller" costs about $500 more than the X9DRH-7F with the LSI 2208 on board. Bypassing this limitation by reflashing the controller into an LSI 9271 is not possible: flashing a different SBR block, which worked for the LSI 2108, does not help.
  • Another feature is the lack of support for CacheVault modules, the boards simply lack space for a special connector, so only BBU09 is supported. The possibility of installing the BBU09 depends on the enclosure used. For example, LSI 2208 is used in 7127R-S6 blade servers, there is a BBU connector there, but to mount the module itself you need an additional MCP-640-00068-0N Battery Holder Bracket.
  • SAS HBA (LSI 2308) firmware now has to be flashed via UEFI, because under DOS on any of the boards with the LSI 2308, sas2flash.exe fails to start with the error "Failed to initialize PAL".

Controllers in Twin and FatTwin platforms

Some 2U Twin 2 platforms are available in four versions, with four kinds of controllers. For example:
  • 2027TR-HTRF + - Chipset SATA
  • 2027TR-H70RF + - LSI 2008
  • 2027TR-H71RF + - LSI 2108
  • 2027TR-H72RF + - LSI 2208
This variety is possible because the controllers are located on a special adapter board that connects to a dedicated slot on the motherboard and to the disk backplane.
BPN-ADP-SAS2-H6IR (LSI 2108)


BPN-ADP-S2208L-H6iR (LSI 2208)

BPN-ADP-SAS2-L6i (LSI 2008)

Supermicro xxxBE16 / xxxBE26 cases

Another topic directly related to controllers is the updated versions of these chassis. New variants have an additional cage for two 2.5" drives on the rear panel, intended for a dedicated boot disk (or boot mirror). Of course, the system can be booted from a small volume carved out of another disk group, or from extra disks mounted inside the case (in 846 cases you can install additional fasteners for one 3.5" or two 2.5" drives), but the updated modifications are much more convenient:




Moreover, these additional disks do not have to be connected to the chipset SATA controller. Using an SFF-8087 -> 4xSATA cable, you can connect them to the main SAS controller through a SAS expander output.
P.S. Hope the information was helpful. Remember, for the most complete information and technical support for Supermicro, LSI, Adaptec by PMC and other vendors, contact True System.

Introduction

Look at modern motherboards (or even some older platforms). Do they need a dedicated RAID controller? Most motherboards have 3 Gb/s SATA ports, as well as audio and network adapters, and most modern chipsets, such as the AMD A75 and Intel Z68, support SATA 6 Gb/s. With such support from the chipset, a powerful processor and plenty of I/O ports, do you need an additional storage card and a separate controller?

In most cases, ordinary users can create RAID 0, 1, 5 and even 10 arrays using the built-in SATA ports on the motherboard and appropriate software, and get very good performance. But when a more complex RAID level (30, 50 or 60), a higher level of disk management, or greater scalability is required, the chipset controller may not cope. In such cases, professional-grade solutions are needed.

In such cases you are no longer limited to SATA storage. A large number of specialized cards provide support for SAS (Serial-Attached SCSI) or Fibre Channel (FC) drives, each of which brings unique benefits.

SAS and FC for Professional RAID Solutions

Each of the three interfaces (SATA, SAS and FC) has its advantages and disadvantages; none of them can be unconditionally called the best. The strengths of SATA-based drives are high capacity and low cost combined with high data transfer rates. SAS drives are renowned for their reliability, scalability and high I/O rates. FC storage systems provide constant and very high data transfer rates. Some companies still use Ultra SCSI solutions, although these can handle a maximum of 16 devices (one controller and 15 drives), and their bandwidth does not exceed 320 MB/s (in the case of Ultra-320 SCSI), which cannot compete with more modern solutions.

Ultra SCSI was long the standard for professional enterprise storage. However, SAS is gaining popularity because it offers not only significantly higher bandwidth but also more flexibility in mixed SAS/SATA systems, allowing you to optimize cost, performance, availability and capacity even within a single JBOD (set of disks). In addition, many SAS drives have two ports for redundancy: if one controller card fails, switching the drive to the other controller avoids a complete system failure. Thus, SAS ensures high reliability for the entire system.

Moreover, SAS is not only a point-to-point protocol between a controller and a storage device. It supports up to 255 storage devices per SAS port when an expander is used, and with a two-tier expander structure it is theoretically possible to attach 255 x 255 (just over 65,000) storage devices to a single SAS channel, provided the controller can support that many devices.
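For readers who like to see the fan-out arithmetic spelled out, here is a minimal sketch; the 255-devices-per-expander figure comes from the paragraph above, and the two-tier layout is the theoretical case it describes:

```python
# Fan-out arithmetic for SAS expander topologies, using the
# 255-devices-per-expander figure quoted in the text above.
DEVICES_PER_EXPANDER = 255

one_tier = DEVICES_PER_EXPANDER                          # a single expander per port
two_tier = DEVICES_PER_EXPANDER * DEVICES_PER_EXPANDER   # expanders feeding expanders

print(f"one tier:  {one_tier} devices per SAS channel")
print(f"two tiers: {two_tier} devices per SAS channel")  # 65,025 - "just over 65,000"
```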

Adaptec, Areca, HighPoint and LSI: Benchmarks of Four SAS RAID Controllers

In this comparative test we examine the performance of modern SAS RAID controllers, represented by four products: the Adaptec RAID 6805, Areca ARC-1880i, HighPoint RocketRAID 2720SGL and LSI MegaRAID 9265-8i.

Why SAS and not FC? On the one hand, SAS is by far the most interesting and relevant architecture, providing features such as zoning that are very attractive to professional users. On the other hand, FC's role in the professional market is declining, and some analysts, judging by the number of hard drives shipped, even predict its complete departure. According to IDC experts, the future of FC looks rather bleak, while SAS hard drives could claim 72% of the enterprise hard drive market in 2014.

Adaptec RAID 6805

Chip manufacturer PMC-Sierra launched the Adaptec by PMC Series 6 family of RAID controllers in late 2010. Series 6 controller cards are based on a dual-core ROC (RAID on Chip) with 512 MB of cache that supports up to 6 Gb/s per SAS port. There are three low-profile models: the Adaptec RAID 6405 (4 internal ports), the Adaptec RAID 6445 (4 internal and 4 external ports), and the one we tested, the Adaptec RAID 6805 with eight internal ports, priced at about $460.

All models support JBOD and RAID of all levels - 0, 1, 1E, 5, 5EE, 6, 10, 50 and 60.

Connected to the system via PCI Express 2.0 x8, the Adaptec RAID 6805 supports up to 256 devices via SAS expanders. According to the manufacturer's specifications, the sustained data transfer rate can reach 2 GB/s, while the peak can reach 4.8 GB/s across the aggregated SAS ports and 4 GB/s over the PCI Express interface - the latter figure being the theoretical maximum for PCI Express 2.0 x8.

ZMCP: cache protection without maintenance

Our test unit came with an Adaptec Flash Module 600, which provides Zero Maintenance Cache Protection (ZMCP) and does away with the legacy Battery Backup Unit (BBU). The ZMCP is a 4 GB NAND flash unit used to back up the controller cache in the event of a power outage.

Because copying from cache to flash is extremely fast, Adaptec uses capacitors rather than batteries to hold up power. The advantage of capacitors is that they can last as long as the card itself, whereas backup batteries must be replaced every few years. In addition, data copied to flash memory can be stored there for several years; by comparison, a battery typically gives you about three days before the cached information is lost, forcing you to rush to recover the data. As the name suggests, ZMCP is a maintenance-free way to survive power failures.


Performance

The Adaptec RAID 6805 loses out in our RAID 0 streaming read/write tests, although RAID 0 is not a typical business case requiring data protection (it could well be used for a video rendering workstation). Sequential reads run at 640 MB/s and sequential writes at 680 MB/s; on these two metrics the LSI MegaRAID 9265-8i takes the top position in our tests. The Adaptec RAID 6805 performs better in the RAID 5, 6 and 10 benchmarks, but is not the absolute leader there either. In an SSD-only configuration, the Adaptec controller operates at up to 530 MB/s but is outperformed by the Areca and LSI controllers.

The Adaptec card automatically recognizes what it calls a HybridRAID configuration - a mix of hard drives and SSDs - offering RAID levels 1 to 10 in this configuration. The card outperforms its competitors here thanks to dedicated read/write algorithms, which automatically route reads to the SSDs and writes to both the hard drives and the SSDs. Thus, reads perform as in an SSD-only system, while writes perform no worse than in a hard-drive system.

However, our test results do not match this theoretical picture. Except for the web server benchmark, which consists almost entirely of reads, the hybrid system of SSDs and hard drives cannot come close to the speed of an SSD-only system.

The Adaptec controller performs much better in the hard drive I/O benchmarks. Regardless of the benchmark type (database, file server, web server or workstation), the RAID 6805 keeps pace with the Areca ARC-1880i and LSI MegaRAID 9265-8i, coming in first or second; only the HighPoint RocketRAID 2720SGL lags behind in the I/O tests. If you replace the hard drives with SSDs, the LSI MegaRAID 9265-8i significantly outperforms the other three controllers.

Installing software and configuring RAID

Adaptec and LSI provide well-organized and easy-to-use RAID management tools that give administrators remote access to the controllers over the network.

Array setup

Areca ARC-1880i

Areca is also bringing the ARC-1880 series to the 6 Gb/s SAS RAID controller market. Targeted applications range from NAS and storage servers to HPC, redundancy, security and cloud computing, the manufacturer claims.

The tested ARC-1880i samples, with eight internal SAS ports and an eight-lane PCI Express 2.0 interface, are available for $580. The low-profile card, the only one in our roundup with an active cooler, is based on an 800 MHz ROC with 512 MB of DDR2-800 data cache. Using SAS expanders, the Areca ARC-1880i supports up to 128 storage devices. To preserve the contents of the cache during a power failure, a battery pack can optionally be added.

Besides single mode and JBOD, the controller supports RAID levels 0, 1, 1E, 3, 5, 6, 10, 30, 50 and 60.

Performance

The Areca ARC-1880i performs well in the RAID 0 read/write tests, reaching 960 MB/s read and 900 MB/s write; only the LSI MegaRAID 9265-8i is faster in this particular test. Areca's controller doesn't disappoint in the other benchmarks either, consistently competing with the test winners on both hard drives and SSDs. Although it led in only one benchmark (sequential reads in RAID 10), it showed very strong results there: a read speed of 793 MB/s, while the fastest competitor, the LSI MegaRAID 9265-8i, managed only 572 MB/s.

However, sequential transfers are only one part of the picture; the other is I/O performance. The Areca ARC-1880i performs brilliantly here too, competing on equal terms with the Adaptec RAID 6805 and LSI MegaRAID 9265-8i. Matching its win in the transfer rate tests, the Areca controller also won one of the I/O tests, the web server benchmark, dominating it at RAID levels 0, 5 and 6, while the Adaptec 6805 takes the lead for RAID 10, leaving Areca in second place by a slight margin.

Web GUI and setting parameters

Like the HighPoint RocketRAID 2720SGL, the Areca ARC-1880i offers a convenient Web interface and is easy to configure.

Array setup

HighPoint RocketRAID 2720SGL

The HighPoint RocketRAID 2720SGL is a SAS RAID controller with eight internal SATA/SAS ports, each supporting 6 Gb/s. According to the manufacturer, this low-profile card is aimed at storage systems for small and medium businesses and at workstations. The key component of the card is the Marvell 9485 RAID controller; its competitive advantages are its small size and eight-lane PCIe 2.0 interface.

Besides JBOD, the card supports RAID 0, 1, 5, 6, 10 and 50.

In addition to the model we tested, there are four more models in the low-profile HighPoint 2700 series: the RocketRAID 2710, 2711, 2721 and 2722, which differ mainly in the type of ports (internal/external) and their number (4 to 8). We used the cheapest of these RAID controllers, the RocketRAID 2720SGL ($170). All cables for the controller are sold separately.

Performance

In sequential read/write on a RAID 0 array of eight Fujitsu MBA3147RC drives, the HighPoint RocketRAID 2720SGL demonstrates an excellent read speed of 971 MB/s, second only to the LSI MegaRAID 9265-8i. The write speed of 697 MB/s is not as impressive, but it still surpasses the Adaptec RAID 6805. The RocketRAID 2720SGL also produces a wide spread of results: on RAID 5 and 6 arrays it outperforms the other cards, but with RAID 10 its read speed drops to 485 MB/s, the lowest of the four samples tested, and its sequential write speed in RAID 10 is even worse at just 198 MB/s.

This controller is clearly not built for SSDs. Read speed reaches 332 MB/s and write speed 273 MB/s; even the Adaptec RAID 6805, which is also not very good with SSDs, is twice as fast. HighPoint is therefore no competitor for the two cards that perform really well with SSDs, the Areca ARC-1880i and LSI MegaRAID 9265-8i, which are at least three times faster.

That is about all the good that can be said about HighPoint's I/O performance: the RocketRAID 2720SGL ranks last in our tests across all four Iometer benchmarks. The HighPoint controller is quite competitive in the web server benchmark but loses significantly to its rivals in the other three. This becomes especially apparent in our SSD benchmarks, where the RocketRAID 2720SGL clearly shows that it is not optimized for SSDs and cannot exploit their advantages over hard drives. For example, it posts 17,378 IOPS in the database benchmark, while the LSI MegaRAID 9265-8i outperforms it fourfold with 75,037 IOPS.

Web GUI and Array Settings

The RocketRAID 2720SGL web interface is convenient and easy to use. All RAID parameters are easy to set.

Array setup

LSI MegaRAID 9265-8i

LSI positions the MegaRAID 9265-8i as a device for the SMB market, suitable for cloud computing and other business applications that demand reliability. The MegaRAID 9265-8i is one of the most expensive controllers in our test ($630), but as the tests show, that money buys real benefits. Before presenting the results, let's discuss the controller's technical features and the FastPath and CacheCade software options.

The LSI MegaRAID 9265-8i uses a dual-core LSI SAS2208 ROC with an eight-lane PCIe 2.0 interface. The "8" at the end of the device name indicates eight internal SATA/SAS ports, each supporting 6 Gb/s. Up to 128 storage devices can be connected via SAS expanders. The card carries 1 GB of DDR3-1333 cache and supports RAID levels 0, 1, 5, 6, 10 and 60.

Configuring software and RAID, FastPath and CacheCade

LSI claims FastPath can dramatically accelerate I/O when SSDs are attached. According to LSI, FastPath works with any SSD, significantly increasing the write/read performance of an SSD-based RAID array - by 2.5 times for writes and 2 times for reads, up to 465,000 IOPS. We could not verify this figure; however, the card managed to squeeze the maximum out of five SSDs even without FastPath.

The next option for the MegaRAID 9265-8i is called CacheCade. It lets you use one SSD as a cache for an array of hard drives. According to LSI, this can speed up reads by up to 50 times, depending on the size of the data set, the applications and the usage pattern. We tried it on a RAID 5 array of 7 hard drives plus 1 SSD used as cache. Compared to a RAID 5 array of 8 hard drives, it became apparent that CacheCade improves not only I/O speed but also overall performance (the smaller the constantly used data set, the greater the gain). Testing with 25 GB of data, we got 3,877 IOPS in the Iometer web server template, while the plain hard-drive array yielded only 894 IOPS.

Performance

In the end, the LSI MegaRAID 9265-8i proves to be the fastest SAS RAID controller in this roundup in terms of I/O. In sequential read/write operations, however, it demonstrates average results, as its sequential performance depends heavily on the RAID level used. Testing hard drives at RAID 0, we get a sequential read speed of 1080 MB/s (significantly higher than the competition) and a sequential write speed of 927 MB/s, also higher than the competitors. But at RAID 5 and 6 the LSI controller trails all of its rivals, beating them only in RAID 10. In the SSD RAID test, the MegaRAID 9265-8i demonstrates the best sequential write performance (752 MB/s), and only the Areca ARC-1880i beats it on sequential reads.

If you're looking for an SSD-focused RAID controller with high I/O performance, the LSI controller is the leader. With rare exceptions, it ranks first in our file server, web server and workstation I/O tests, and when the RAID array is made up of SSDs, LSI's competitors simply can't keep up. For example, in the workstation benchmark the MegaRAID 9265-8i reaches 70,172 IOPS, while the second-place Areca ARC-1880i manages barely half that, 36,975 IOPS.

RAID software and array installation

As with Adaptec, LSI has convenient tools for managing the RAID array through the controller. Here are some screenshots:

CacheCade software

RAID software

Array setup

Comparison table and test bench configuration

Manufacturer Adaptec | Areca
Product RAID 6805 | ARC-1880i
Form factor Low-profile MD2 | Low-profile MD2
SAS ports 8 | 8
SAS bandwidth per port 6 Gb/s (SAS 2.0) | 6 Gb/s (SAS 2.0)
Internal SAS ports 2x SFF-8087 | 2x SFF-8087
External SAS ports No | No
Cache memory 512 MB DDR2-667 | 512 MB DDR2-800
Main interface PCIe 2.0 (x8) | PCIe 2.0 (x8)
XOR engine and clock speed PMC-Sierra PM8013 / no data | No data / 800 MHz
Supported RAID levels 0, 1, 1E, 5, 5EE, 6, 10, 50, 60 | 0, 1, 1E, 3, 5, 6, 10, 30, 50, 60
Supported operating systems Windows 7, Windows Server 2008/2008 R2, Windows Server 2003/2003 R2, Windows Vista, VMware ESX Classic 4.x (vSphere), Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), Sun Solaris 10 x86, FreeBSD, Debian Linux, Ubuntu Linux | Windows 7/2008/Vista/XP/2003, Linux, FreeBSD, Solaris 10/11 x86/x86_64, Mac OS X 10.4.x/10.5.x/10.6.x, VMware 4.x
Battery No | Optional
Fan No | Yes

Manufacturer HighPoint | LSI
Product RocketRAID 2720SGL | MegaRAID 9265-8i
Form factor Low-profile MD2 | Low-profile MD2
SAS ports 8 | 8
SAS bandwidth per port 6 Gb/s (SAS 2.0) | 6 Gb/s (SAS 2.0)
Internal SAS ports 2x SFF-8087 | 2x SFF-8087
External SAS ports No | No
Cache memory No data | 1 GB DDR3-1333
Main interface PCIe 2.0 (x8) | PCIe 2.0 (x8)
XOR engine and clock speed Marvell 9485 / no data | LSI SAS2208 / 800 MHz
Supported RAID levels 0, 1, 5, 6, 10, 50 | 0, 1, 5, 6, 10, 60
Supported operating systems Windows 2000, XP, 2003, 2008, Vista, 7, RHEL/CentOS, SLES, OpenSuSE, Fedora Core, Debian, Ubuntu, FreeBSD up to 7.2 | Microsoft Windows Vista/2008/Server 2003/2000/XP, Linux, Solaris (x86), Netware, FreeBSD, VMware
Battery No | Optional
Fan No | No

Test configuration

We connected eight Fujitsu MBA3147RC SAS hard drives (147 GB each) to the RAID controllers and ran benchmarks at RAID levels 0, 5, 6 and 10. SSD tests were conducted with five Samsung SS1605 drives.

Hardware
CPU Intel Core i7-920 (Bloomfield) 45 nm, 2.66 GHz, 8 MB shared L3 cache
Motherboard (LGA 1366) Supermicro X8SAX, Revision: 1.0, Intel X58 Chipset + ICH10R, BIOS: 1.0B
Controller LSI MegaRAID 9280-24i4e
Firmware: v12.12.0-0037
Driver: v4.32.0.64
RAM 3 x 1 GB DDR3-1333 Corsair CM3X1024-1333C9DHX
HDD Seagate NL35 400 GB, ST3400832NS, 7,200 rpm, SATA 1.5 Gb/s, 8 MB cache
Power Supply OCZ EliteXstream 800 W, OCZ800EXS-EU
Benchmarks
Performance CrystalDiskMark 3
I / O performance Iometer 2006.07.27
File server Benchmark
Web server Benchmark
Database benchmark
Workstation Benchmark
Streaming Reads
Streaming Writes
4k Random Reads
4k Random Writes
Software and drivers
Operating system Windows 7 Ultimate

Test results

I / O performance in RAID 0 and 5

Benchmarks in RAID 0 show no significant difference between RAID controllers, with the exception of the HighPoint RocketRAID 2720SGL.




The RAID 5 benchmark does not help the HighPoint controller regain lost ground. In contrast to RAID 0, the three faster controllers show their strengths and weaknesses more clearly here.




I / O performance in RAID 6 and 10

LSI has optimized its MegaRAID 9265 controller for database, file server and workstation workloads; in the web server benchmark, all controllers show roughly the same performance.




In the RAID 10 variant, Adaptec and LSI are competing for the first place, while the HighPoint RocketRAID 2720SGL takes the last place.




SSD I / O Performance

Leading the way here is the LSI MegaRAID 9265, which takes full advantage of solid state storage.




Bandwidth in RAID 0, 5 and RAID 5 degraded mode

The LSI MegaRAID 9265 easily leads this benchmark. Adaptec RAID 6805 lags far behind.


Even without a cache, the HighPoint RocketRAID 2720SGL copes well with sequential operations in RAID 5, and the other controllers are not far behind it.


Degraded RAID 5


Bandwidth in RAID 6, 10 and RAID 6 degraded mode

As with RAID 5, the HighPoint RocketRAID 2720SGL delivers the highest throughput for RAID 6, leaving the Areca ARC-1880i second. The impression is that the LSI MegaRAID 9265-8i simply does not like RAID 6.


Degraded RAID 6


Here the LSI MegaRAID 9265-8i shows itself in a better light, although it still gives way to the Areca ARC-1880i.

LSI CacheCade




What's the best 6 Gb/s SAS controller?

Overall, all four SAS RAID controllers we tested performed well. They all offer the functionality they need and can be used successfully in entry-level and mid-range servers. Beyond raw performance, they share important capabilities such as operation in mixed SAS/SATA environments and scalability via SAS expanders. All four support the SAS 2.0 standard, which raises per-port throughput from 3 Gb/s to 6 Gb/s and introduces new features such as SAS zoning, which allows multiple controllers to access storage resources through a single SAS expander.

Despite similarities such as the low-profile form factor, eight-lane PCI Express interface and eight SAS 2.0 ports, each controller has its own strengths and weaknesses, and analyzing them yields recommendations for optimal use.

So, the fastest controller is the LSI MegaRAID 9265-8i, especially in I/O throughput, although it too has weaknesses, in particular unimpressive RAID 5 and 6 performance. The MegaRAID 9265-8i leads most benchmarks and is an excellent professional-grade solution. Its $630 price is the highest here, and that should not be forgotten, but for that money you get a controller that outperforms the competition, especially with SSDs, and delivers performance that becomes especially valuable when connecting large storage systems. What's more, you can raise its performance further with FastPath or CacheCade, which naturally cost extra.

The Adaptec RAID 6805 and Areca ARC-1880i controllers offer comparable performance and are similar in price ($460 and $540). Both perform well, as the various benchmarks show. The Adaptec controller is slightly faster than the Areca, and it also offers the highly practical ZMCP (Zero Maintenance Cache Protection) feature, which replaces conventional battery backup against power failures and lets operation continue without maintenance.

The HighPoint RocketRAID 2720SGL retails for just $170, far cheaper than the other three controllers tested. Its performance is quite adequate with regular disks, although worse than that of the Adaptec or Areca controllers, but you should not use this controller with SSDs.

Briefly about modern RAID controllers

Currently, standalone RAID controllers are aimed exclusively at the specialized server segment of the market. Indeed, all modern motherboards for consumer PCs (not server boards) have integrated firmware SATA RAID controllers that are more than enough for PC users. Keep in mind, however, that these controllers depend on the Windows operating system; in Linux-family operating systems, RAID arrays are created in software, and all calculations shift from the RAID controller to the CPU.

Servers traditionally use either hardware-software (driver-based) or purely hardware RAID controllers. A purely hardware RAID controller creates and maintains a RAID array without help from the operating system or CPU; the operating system sees such an array as a single disk (a SCSI disk), and no specialized driver is needed - the standard SCSI disk driver included with the operating system is used. Hardware controllers are therefore platform independent, with the array configured through the controller BIOS. A hardware RAID controller does not burden the central processor with checksum calculations and the like, since it uses its own specialized processor and its own RAM.

Hardware-software controllers require a dedicated driver, which replaces the standard SCSI disk driver, and come with management utilities; they are therefore tied to specific operating systems. All the necessary calculations are still performed by the RAID controller's own processor, but the driver and management utility let you control the controller from within the operating system, not just through the controller BIOS.

Considering the fact that SAS drives have already replaced SCSI server drives, all modern server RAID controllers are focused on supporting either SAS or SATA drives, which are also used in servers.

Last year, drives with the new SATA 3 (SATA 6 Gb/s) interface began to appear on the market, gradually displacing the SATA 2 (SATA 3 Gb/s) interface; similarly, SAS (3 Gb/s) drives have been replaced by SAS 2.0 (6 Gb/s) drives. Naturally, the new SAS 2.0 standard is fully backward compatible with the old one.

Accordingly, RAID controllers supporting the SAS 2.0 standard appeared. It might seem pointless to switch to SAS 2.0 when even the fastest SAS disks read and write at no more than 200 MB/s, for which the 3 Gb/s (300 MB/s) bandwidth of the older SAS protocol is quite sufficient.

Indeed, when each drive is connected to a separate port on the RAID controller, 3 Gb/s of bandwidth (300 MB/s in theory) is enough. However, not only individual disks but also disk arrays (disk cages) can be connected to each port of the RAID controller. In that case one SAS channel is shared by several drives at once, and 3 Gb/s is no longer sufficient. On top of that there are SSDs, whose read and write speeds have already passed the 300 MB/s bar; for example, the new Intel SSD 510 offers sequential reads of up to 500 MB/s and sequential writes of up to 315 MB/s.
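A quick back-of-the-envelope sketch of this bottleneck; the four-drive cage on a single channel is a hypothetical example, and the throughput figures are the rough ones quoted above:

```python
# Rough arithmetic for a disk cage sharing one SAS channel.
# Hypothetical example: 4 drives per channel; figures from the text.
DRIVE_MBPS = 200                 # a fast SAS HDD, per the text
DRIVES_IN_CAGE = 4               # assumed cage size
demand = DRIVE_MBPS * DRIVES_IN_CAGE   # aggregate streaming demand: 800 MB/s

for name, channel_mbps in (("SAS, 3 Gb/s", 300), ("SAS 2.0, 6 Gb/s", 600)):
    print(f"{name}: channel ~{channel_mbps} MB/s vs aggregate demand {demand} MB/s")
```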

After a quick look at the current situation in the server RAID controller market, let's take a look at the characteristics of the LSI 3ware SAS 9750-8i controller.

3ware SAS 9750-8i RAID Controller Specifications

This RAID controller is based on the LSI SAS2108 specialized XOR processor, clocked at 800 MHz and built on the PowerPC architecture, paired with 512 MB of DDR-II 800 MHz memory with error correction (ECC).

The LSI 3ware SAS 9750-8i controller is compatible with SATA and SAS drives (both HDDs and SSDs are supported) and allows up to 96 devices to be connected using SAS expanders. Importantly, the controller supports drives with the SATA 600 MB/s (SATA III) and SAS 2 interfaces.

For connecting drives, the controller provides eight ports, physically combined into two Mini-SAS SFF-8087 connectors (four ports per connector). That is, with disks connected directly to the ports, a total of eight disks can be attached to the controller, while with disk cages connected to each port the total disk count can grow to 96. Each of the eight ports has a bandwidth of 6 Gb/s, matching the SAS 2 and SATA III standards.

Naturally, connecting disks or disk cages to this controller requires specialized cables with an internal Mini-SAS SFF-8087 connector on one end and, on the other end, a connector that depends on what exactly is being attached. For example, to connect SAS disks directly to the controller, you need a cable with a Mini-SAS SFF-8087 connector on one side and four SFF-8484 connectors on the other, which attach directly to the SAS disks. Note that no cables are included in the package; they must be purchased separately.

The LSI 3ware SAS 9750-8i controller has a PCI Express 2.0 x8 interface providing 64 Gb/s of bandwidth (32 Gb/s in each direction), which is clearly sufficient for eight fully loaded SAS ports at 6 Gb/s each. The controller also has a special connector for an optional LSIiBBU07 backup battery.

Importantly, this controller requires a driver, i.e., it is a hardware-software RAID controller. It supports operating systems such as Windows Vista, Windows Server 2008, Windows Server 2003 x64, Windows 7, Windows 2003 Server, Mac OS X, Linux Fedora Core 11, Red Hat Enterprise Linux 5.4, OpenSuSE 11.1, SuSE Linux Enterprise Server (SLES) 11, OpenSolaris 2009.06, VMware ESX/ESXi 4.0/4.0 update-1 and other Linux systems. The package also includes the 3ware Disk Manager 2 software, which lets you manage RAID arrays from within the operating system.

The LSI 3ware SAS 9750-8i supports the standard RAID types: RAID 0, 1, 5, 6, 10 and 50. Perhaps the only array type not supported is RAID 60. This is because the controller can create a RAID 6 array only on at least five disks connected directly to its ports (in theory, RAID 6 can be built on four disks); accordingly, a RAID 60 array would require at least ten directly connected disks, and the controller simply does not have that many ports.

Clearly, RAID 1 support is of little relevance for such a controller, since that array type uses only two disks, and buying such a controller for just two disks is illogical and extremely wasteful. But support for RAID 0, 5, 6, 10 and 50 is very relevant. Although perhaps we were hasty about RAID 0: this array has no redundancy and therefore does not provide reliable data storage, so it is rarely used in servers, even though in theory it is the fastest at reading and writing data. However, let's recall how the different types of RAID array differ from each other and what they are.

RAID levels

The term "RAID array" appeared in 1987 when American researchers Patterson, Gibson and Katz from the University of California at Berkeley described in their article "A case for redundant arrays of inexpensive discs, RAID" how In this way, multiple low-cost hard drives can be combined into a single logical device so that the result is increased system capacity and performance, and the failure of individual drives does not lead to failure of the entire system. Almost 25 years have passed since the publication of this article, but the technology of building RAID arrays has not lost its relevance today. The only thing that has changed since then is the decoding of the RAID acronym. The fact is that initially RAID arrays were not built on cheap disks, so the word Inexpensive was changed to Independent, which was more in line with reality.

Fault tolerance in RAID arrays is achieved through redundancy: part of the disk space is set aside for service purposes and becomes inaccessible to the user.

The increase in the performance of the disk subsystem is provided by the simultaneous operation of several disks, and in this sense, the more disks in the array (up to a certain limit), the better.

Disks in an array can be shared using either parallel or independent access. With parallel access, disk space is divided into blocks (stripes) for recording data, and information to be written is split into blocks of the same size. On a write, individual blocks go to different disks, with several blocks written to different disks simultaneously, which improves write performance; on a read, the required information is likewise assembled from separate blocks read simultaneously from several disks, which raises performance in proportion to the number of disks in the array.

Note that the parallel access model works only if the size of a write request exceeds the block size; otherwise, writing several blocks in parallel is practically impossible. Imagine that an individual block is 8 KB and a write request is 64 KB: the original information is cut into eight 8 KB blocks. With a four-disk array, four blocks (32 KB) can be written at a time, so the whole request completes in two passes. Clearly, in this example the write and read speeds will be four times higher than with a single disk. This holds only for the ideal situation, though; the request size is not always a multiple of the block size and the number of disks in the array.
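The block arithmetic from this example can be sketched in a few lines; the 8 KB stripe, 64 KB request and four disks are exactly the numbers used above, and the round-robin placement is the usual striping scheme:

```python
# Striping arithmetic from the example above: an 8 KB stripe size,
# a 64 KB write request and a four-disk array with round-robin placement.
STRIPE = 8 * 1024
DISKS = 4
REQUEST = 64 * 1024

blocks = REQUEST // STRIPE           # 8 blocks of 8 KB each
for b in range(blocks):
    print(f"block {b} -> disk {b % DISKS}")

passes = -(-blocks // DISKS)         # ceiling division: parallel write passes
print(f"{blocks} blocks on {DISKS} disks -> {passes} parallel passes "
      f"({DISKS * STRIPE // 1024} KB per pass)")
```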

If the size of the data being written is less than the block size, a fundamentally different model is used: independent access (this model can also be applied when the data size exceeds one block). With independent access, all the data of an individual request is written to a single disk, just as when working with one drive. The advantage is that when several write (or read) requests arrive at the same time, they can all be executed on separate disks independently of each other. This situation is typical, for example, for servers.

Corresponding to the different access types, there are different types of RAID arrays, usually characterized by RAID levels. Besides the access type, RAID levels differ in how redundant information is placed and generated: it can either be stored on a dedicated disk or distributed across all disks.

Several RAID levels are in wide use today: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50 and RAID 60. There were once also RAID 2, RAID 3 and RAID 4 levels, but they are no longer used and modern RAID controllers do not support them. Note that all modern controllers also support the JBOD (Just a Bunch Of Disks) mode, which is not a RAID array at all but simply the attachment of individual drives to the RAID controller.

RAID 0

RAID 0, or striping, is strictly speaking not a RAID array, since it has no redundancy and provides no data storage reliability; historically, however, it is called one. A RAID 0 array (Fig. 1) can be built on two or more disks and is used when the disk subsystem must deliver high performance while the reliability of data storage is not critical. When a RAID 0 array is created, information is split into blocks (called stripes) that are written to separate disks simultaneously, that is, a parallel-access system is created (provided the block size permits it). By allowing simultaneous I/O on multiple disks, RAID 0 provides the fastest transfer rates and maximum disk space utilization, since no checksum storage is required. Its implementation is very simple. RAID 0 is mainly used where fast transfer of large amounts of data is required.

Fig. 1. RAID 0 array

In theory, the increase in read and write speed should be a multiple of the number of disks in the array.

The reliability of a RAID 0 array is obviously lower than the reliability of any of its disks individually and decreases as more disks are added to the array, since the failure of any one of them renders the whole array inoperable. If the MTTF of each disk is MTTF_disk, then the MTTF of a RAID 0 array of n disks is:

MTTF_RAID0 = MTTF_disk / n.

If we denote the probability of failure of one disk over a certain period of time by p, then for a RAID 0 array of n disks the probability that at least one disk fails (the probability of array failure) is:

P(array failure) = 1 − (1 − p)^n.

For example, if the probability of a single disk failing within three years of operation is 5%, then the probability of failure of a two-disk RAID 0 array is already 9.75%, and of an eight-disk array, 33.7%.
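These figures are easy to verify; below is a minimal Python check of the formula above (the function name and values are ours, chosen purely for illustration).

def raid0_failure_probability(p, n):
    """P(at least one of n disks fails) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

p = 0.05  # 5% chance that a single disk fails within three years
for n in (2, 8):
    print(f"{n} disks: {raid0_failure_probability(p, n):.2%}")
# prints: 2 disks: 9.75%   8 disks: 33.66%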

RAID 1

A RAID 1 array (Figure 2), also referred to as a mirror, is a 100 percent redundant array of two drives. That is, the data is completely duplicated (mirrored), due to which a very high level of reliability (as well as cost) is achieved. Note that RAID 1 does not require pre-partitioning of disks and data into blocks. In the simplest case, two drives contain the same information and are one logical drive. If one disk fails, its functions are performed by another (which is absolutely transparent to the user). Restoring the array is done by simple copying. In addition, in theory, a RAID 1 array should double the read speed, since this operation can be performed simultaneously from two disks. This information storage scheme is used mainly in cases where the cost of data security is much higher than the cost of implementing the storage system.

Rice. 2. RAID 1 array

If, as in the previous case, we denote the probability of failure of one disk over a certain period of time by p, then for a RAID 1 array the probability that both disks fail at the same time (the probability of array failure) is:

P(array failure) = p².

For example, if the probability of failure of one disk within three years of operation is 5%, then the probability of simultaneous failure of two disks is already 0.25%.

RAID 5

A RAID 5 array (Fig. 3) is a fault-tolerant disk array with distributed checksum storage. When writing, the data stream is divided into blocks (stripes), which are written simultaneously to all the disks of the array in round-robin order.

Fig. 3. RAID 5 array

Suppose the array contains n disks and the stripe size is d. For each portion of n − 1 stripes, a checksum p is calculated.

Stripe d1 is written to the first disk, stripe d2 to the second, and so on up to stripe dn−1, which is written to the (n − 1)-th disk. Then the checksum pn is written to the n-th disk, and the process repeats cyclically from the first disk, to which stripe dn is written.

The writing of the (n − 1) stripes and their checksum is performed simultaneously on all n disks.

The checksum is calculated using a bitwise exclusive OR (XOR) operation on the data blocks being written. So, if there are n hard drives and d is a data block (stripe), the checksum is calculated as:

pn = d1 ⊕ d2 ⊕ ... ⊕ dn−1.

If any disk fails, the data on it can be recovered from the checksum and from the data remaining on the healthy disks. Indeed, using the identities (a ⊕ b) ⊕ b = a and a ⊕ a = 0, XORing the checksum with all the surviving blocks yields the lost one:

dk = d1 ⊕ d2 ⊕ ... ⊕ dk−1 ⊕ dk+1 ⊕ ... ⊕ dn−1 ⊕ pn.

Thus, if the disk holding block dk fails, that block can be restored from the values of the remaining blocks and the checksum.
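This recovery property is easy to demonstrate. The following minimal Python sketch (names and sample data are ours) computes the XOR parity of three stripes and then rebuilds one of them from the survivors.

from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripes = [b"AAAA", b"BBBB", b"CCCC"]   # d1, d2, d3
parity = xor_blocks(stripes)            # p = d1 xor d2 xor d3

# Suppose the disk holding d2 fails; rebuild it from the rest plus parity:
recovered = xor_blocks([stripes[0], stripes[2], parity])
assert recovered == stripes[1]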

In RAID 5 all disks of the array must be the same size, but the total capacity of the disk subsystem available for writing is exactly one disk smaller. For example, if five disks are 100 GB each, the actual size of the array is 400 GB, because 100 GB is set aside for checksum information.

A RAID 5 array can be built on three or more hard drives. As the number of drives in the array grows, its relative redundancy decreases. Note also that a RAID 5 array can be recovered only if a single drive fails. If two drives fail at the same time (or if a second drive fails while the array is being rebuilt), the array cannot be recovered.

RAID 6

A RAID 5 array has been shown to be rebuildable if one disk fails. However, sometimes you need to provide a higher level of reliability than a RAID 5 array. In this case, you can use a RAID 6 array (Figure 4), which allows you to restore the array even if two drives fail at the same time.

Fig. 4. RAID 6 array

A RAID 6 array is similar to RAID 5, but it uses not one but two checksums, cyclically distributed across the disks. The first checksum, p, is calculated with the same algorithm as in a RAID 5 array, that is, as an XOR operation between the data blocks written to the different disks:

pn = d1 ⊕ d2 ⊕ ... ⊕ dn−1.

The second checksum is calculated using a different algorithm. Without going into mathematical details, let's say that this is also an XOR operation between blocks of data, but each block of data is pre-multiplied by a polynomial coefficient:

qn = g1·d1 ⊕ g2·d2 ⊕ ... ⊕ gn−1·dn−1.

Accordingly, the capacity of two disks of the array is set aside for checksums. In theory a RAID 6 array can be created on four or more drives, but many controllers require a minimum of five.
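For the curious, here is a highly simplified Python sketch of the second checksum. Real controllers use Reed-Solomon coding over the Galois field GF(2^8); purely for illustration we assume the coefficients gi = 2^(i−1) and the reducing polynomial 0x11D (the scheme used by Linux software RAID, for example), which may differ from what a given controller actually implements.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8); reducing polynomial 0x11D is an assumption."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
    return result

def q_parity(stripes):
    """q = g1*d1 xor g2*d2 xor ..., with gi = 2^(i-1) in GF(2^8)."""
    out = [0] * len(stripes[0])
    g = 1                          # coefficient for the first stripe
    for d in stripes:
        for j, byte in enumerate(d):
            out[j] ^= gf_mul(g, byte)
        g = gf_mul(g, 2)           # next coefficient: multiply by 2
    return bytes(out)

stripes = [b"AAAA", b"BBBB", b"CCCC"]
print(q_parity(stripes).hex())     # the q stripe for this row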

Bear in mind that the performance of a RAID 6 array is, as a rule, 10-15% lower than that of a RAID 5 array with the same number of disks, owing to the extra work done by the controller: the second checksum must be calculated, and more disk blocks must be read and rewritten for every block written.

RAID 10

RAID 10 (Fig. 5) is a mix of levels 0 and 1. A minimum of four drives is required for this level. In a four-disk RAID 10 array, the disks are combined in pairs into RAID 1 arrays, and both of these arrays are then combined as logical disks into a RAID 0 array. The opposite approach is also possible: the disks are first combined into RAID 0 arrays, and the logical disks based on these arrays are then combined into a RAID 1 array. A small sketch of the first layout follows the figure below.

Fig. 5. RAID 10 array
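Here is a minimal Python sketch (our own illustration, not a controller's actual layout) of the "stripe over mirrors" scheme: logical stripes alternate between two RAID 1 pairs, and every write lands on both disks of the chosen pair.

PAIRS = [(0, 1), (2, 3)]   # disks 0 and 1 mirror each other, as do 2 and 3

def disks_for_stripe(stripe_index):
    """Return the two physical disks holding a given logical stripe."""
    return PAIRS[stripe_index % len(PAIRS)]

for s in range(4):
    print(f"stripe {s} -> disks {disks_for_stripe(s)}")
# stripe 0 -> disks (0, 1), stripe 1 -> disks (2, 3), and so on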

RAID 50

RAID 50 is a mix of levels 0 and 5 (Figure 6). The minimum required for this level is six disks. In a RAID 50 array, two RAID 5 arrays are first created (at least three disks in each), which are then combined as logical disks into a RAID 0 array.

Fig. 6. RAID 50 array

LSI 3ware SAS 9750-8i Controller Test Methodology

To test the LSI 3ware SAS 9750-8i RAID controller we used the specialized IOmeter 1.1.0 test suite (version 2010.12.02). The test bench had the following configuration:

  • processor - Intel Core i7-990 (Gulftown);
  • motherboard - GIGABYTE GA-EX58-UD4;
  • memory - DDR3-1066 (3 GB, three-channel mode);
  • system disk - WD Caviar SE16 WD3200AAKS;
  • video card - GIGABYTE GeForce GTX480 SOC;
  • RAID controller - LSI 3ware SAS 9750-8i;
  • SAS drives attached to the RAID controller - Seagate Cheetah 15K.7 ST3300657SS.

Testing was conducted under the Microsoft Windows 7 Ultimate (32-bit) operating system.

We used the Windows RAID controller driver version 5.12.00.007 and also updated the controller firmware to version 5.12.00.007.

The system drive was connected to a SATA port implemented through the controller integrated in the south bridge of the Intel X58 chipset, while the SAS drives were connected directly to the RAID controller ports with two Mini-SAS SFF-8087 -> 4x SAS cables.

The RAID controller was installed in a PCI Express x8 slot on the motherboard.

The controller was tested with the following RAID arrays: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10 and RAID 50. The number of disks combined in a RAID array varied for each type of array from a minimum value to eight.

The stripe size was fixed at 256 KB for all RAID arrays.

Recall that the IOmeter package allows you to work both with disks on which a logical partition is created, and with disks without a logical partition. If a disk is tested without a logical partition created on it, then IOmeter works at the level of logical data blocks, that is, instead of the operating system, it sends commands to the controller to write or read LBA blocks.

If a logical partition is created on the disk, then initially the IOmeter utility creates a file on the disk that occupies the entire logical partition by default (in principle, the size of this file can be changed by specifying it in the number of 512 byte sectors), and then it already works with this file, that is, it reads or writes (overwrites) individual LBAs within this file. But again, IOmeter works bypassing the operating system, that is, it directly sends requests to the controller to read / write data.

In general, as practice shows, when testing HDDs there is practically no difference between the results for a disk with a logical partition and one without. At the same time, we believe it is more correct to test without a logical partition, since in that case the results do not depend on the file system used (NTFS, FAT, ext, etc.). This is why we performed testing without creating logical partitions.

In addition, the IOmeter utility allows you to set the Transfer Request Size for writing/reading data, and the test can be run both for sequential (Sequential) reads and writes, when LBA blocks are read and written one after another, and for random (Random) ones, when LBA blocks are read and written in arbitrary order. When building a load scenario you can set the test time, the ratio of sequential to random operations (Percent Random/Sequential Distribution), and the ratio of read to write operations (Percent Read/Write Distribution). IOmeter also automates the whole testing process and saves all results to a CSV file, which can easily be exported to an Excel spreadsheet.

Another setting the IOmeter utility provides is the so-called Align I/Os option, which aligns data transfer requests to hard disk sector boundaries. By default IOmeter aligns request blocks to 512-byte sector boundaries, but an arbitrary alignment can also be specified. Most hard drives have a 512-byte sector, and only recently have disks with a 4 KB sector begun to appear. Recall that in an HDD a sector is the smallest addressable unit of data that can be written to or read from the disk.

When testing, the alignment of data transfer request blocks must be set to the disk sector size. Since Seagate Cheetah 15K.7 ST3300657SS drives have 512-byte sectors, we used 512-byte alignment.

Using the IOmeter test suite, we measured the sequential read and write speed, as well as the random read and write speed of the created RAID array. The sizes of the transmitted data blocks were 512 bytes, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024 KB.

In the listed load scenarios, the test time for each data block size was 5 minutes. Also note that in all the tests above we set the task queue depth (# of Outstanding I/Os) to 4 in the IOmeter settings, which is typical for user applications.
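To make these load parameters concrete, here is a small Python sketch, ours rather than IOmeter's, of a generator that produces requests for a given block size, sequential/random mix and read/write mix, with offsets aligned to 512-byte sectors as described above.

import random

def requests(disk_bytes, block_size, percent_random, percent_read, count):
    """Yield (operation, offset, size) tuples emulating an IOmeter-style load."""
    offset = 0
    for _ in range(count):
        if random.random() * 100 < percent_random:
            # random access: jump to a 512-byte-aligned offset
            offset = random.randrange(0, disk_bytes - block_size, 512)
        op = "read" if random.random() * 100 < percent_read else "write"
        yield op, offset, block_size
        offset += block_size      # next sequential position

# Example: purely sequential reads with 256 KB blocks on a 300 GB disk;
# a queue depth of 4 would mean keeping four such requests in flight at once.
for op, off, size in requests(300 * 10**9, 256 * 1024, 0, 100, 3):
    print(op, off, size)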

Test results

Reviewing the benchmark results, we were surprised by the performance of the LSI 3ware SAS 9750-8i RAID controller - so much so that we began combing through our scripts for errors and then repeated the testing many times with other RAID controller settings, changing the stripe size and the controller cache mode. This, of course, affected the results, but did not change the general dependence of the data transfer rate on the data block size. And that dependence we simply could not explain. The behavior of this controller looks completely illogical to us. First, the results are unstable: for each fixed data block size the speed changes from run to run, and the averaged result has a large error. Note that results of testing disks and controllers with the IOmeter utility are usually stable and vary only slightly.

Second, as the block size increases, the data rate should increase or, once it reaches its maximum, stay constant in saturation. With the LSI 3ware SAS 9750-8i controller, however, the data rate drops sharply at certain block sizes. In addition, it remains a mystery to us why, with the same number of disks, the write speed of RAID 5 and RAID 6 arrays is higher than their read speed. In short, we cannot explain the operation of the LSI 3ware SAS 9750-8i controller - all that remains is to state the facts.

Test results can be grouped in different ways: by load scenario, where for each type of load the results are given for all possible RAID arrays with different numbers of connected disks; or by type of RAID array, where for each array type the results with different numbers of disks are given for the sequential read, sequential write, random read and random write scenarios. The results can also be grouped by the number of disks in the array, where for each number of disks connected to the controller the results are given for all RAID arrays possible with that number of disks in the same four scenarios.

We decided to group the results by array type since, in our opinion, despite the rather large number of graphs, this presentation is the clearest.

RAID 0

A RAID 0 array can be created with two to eight drives. The test results for a RAID 0 array are shown in Fig. 7-15.

Fig. 7. Sequential read and write speed with eight disks in a RAID 0 array

Fig. 8. Sequential read and write speed with seven disks in a RAID 0 array

Fig. 9. Sequential read and write speed with six disks in a RAID 0 array

Fig. 10. Sequential read and write speed with five disks in a RAID 0 array

Fig. 11. Sequential read and write speed with four disks in a RAID 0 array

Fig. 12. Sequential read and write speed with three disks in a RAID 0 array

Fig. 13. Sequential read and write speed with two disks in a RAID 0 array

Fig. 14. Random read speed in a RAID 0 array

Fig. 15. Random write speed in a RAID 0 array

It is clear that the fastest sequential read and write speeds in a RAID 0 array are achieved with eight disks. Note that with eight and seven disks the sequential read and write speeds are almost the same, while with fewer disks the sequential write speed becomes higher than the read speed.

Note also the characteristic dips in sequential read and write speed at certain block sizes. For example, with eight and six disks in the array such dips occur at block sizes of 1 KB and 64 KB, and with seven disks at 1, 2 and 128 KB. Similar dips, at other block sizes, also occur with four, three and two disks in the array.

In terms of sequential read and write speeds (averaged over all block sizes), RAID 0 outperforms all other possible arrays in a configuration with eight, seven, six, five, four, three, and two drives.

Random access in a RAID 0 array is also quite interesting. The random read speed at each data block size is proportional to the number of disks in the array, which is quite logical. Moreover, at a block size of 512 KB there is a characteristic dip in random read speed for any number of disks in the array.

In case of random writing with any number of disks in the array, the speed increases with the increase in the size of the data block and there are no drops in speed. At the same time, it should be noted that the highest speed in this case is achieved not with eight, but with seven disks in the array. Next in terms of random write speed is an array of six disks, then five, and only then eight disks. Moreover, in terms of random write speed, an array of eight disks is almost identical to an array of four disks.

In terms of random write speed, RAID 0 outperforms all other possible arrays in configurations with eight, seven, six, five, four, three and two drives. On the other hand, in terms of random read speed in a configuration with eight disks, RAID 0 is inferior to RAID 10 and RAID 50, but in a configuration with fewer disks, RAID 0 is the leader in random read speed.

RAID 5

A RAID 5 array can be created with three to eight drives. The test results for a RAID 5 array are shown in Fig. 16-23.

Fig. 16. Sequential read and write speed with eight disks in a RAID 5 array

Fig. 17. Sequential read and write speed with seven disks in a RAID 5 array

Fig. 18. Sequential read and write speed with six disks in a RAID 5 array

Fig. 19. Sequential read and write speed with five disks in a RAID 5 array

Fig. 20. Sequential read and write speed with four disks in a RAID 5 array

Fig. 21. Sequential read and write speed with three disks in a RAID 5 array

Fig. 22. Random read speed in a RAID 5 array

Fig. 23. Random write speed in a RAID 5 array

It is clear that the highest read and write speed is achieved with eight disks. Note that for a RAID 5 array, the sequential write speed is, on average, faster than the read speed. However, for a given request size, the sequential read speed can exceed the sequential write speed.

Note also the typical dips in sequential read and write speed at certain block sizes for any number of disks in the array.

In sequential read and write speeds in a configuration with eight drives, RAID 5 is inferior to RAID 0 and RAID 50, but outperforms RAID 10 and RAID 6. In configurations with seven drives, RAID 5 is inferior in sequential read and write speed to RAID 0 and surpasses the RAID 6 array (other types of arrays are not possible with the given number of disks).

In six-drive configurations, RAID 5 is inferior in sequential read speed to RAID 0 and RAID 50, and in sequential write speed only to RAID 0.

In configurations with five, four, and three drives, RAID 5 is second only to RAID 0 in sequential read and write speeds.

Random access in a RAID 5 array resembles random access in RAID 0: at a 512 KB block size there is the same characteristic dip in random read speed for any number of disks in the array. Note, however, that the random read speed depends only weakly on the number of disks, being approximately the same for any disk count.

In terms of random read speed, RAID 5 in a configuration with eight, seven, six, four and three drives is inferior to all other arrays. And only in a configuration with five drives does it slightly outperform a RAID 6 array.

In terms of random write speed, RAID 5 in an eight-disk configuration is second only to RAID 0 and RAID 50, and in configurations with seven, five, four and three disks, only to RAID 0.

In a six-drive configuration, RAID 5 is inferior in random write speed to RAID 0, RAID 50, and RAID 10.

RAID 6

The LSI 3ware SAS 9750-8i controller allows you to create a RAID 6 array of five to eight drives. The test results for a RAID 6 array are shown in Fig. 24-29.

Fig. 24. Sequential read and write speed with eight disks in a RAID 6 array

Fig. 25. Sequential read and write speed with seven disks in a RAID 6 array

We also note the characteristic dips in sequential read and write speed at certain block sizes for any number of disks in the array.

In terms of sequential read speed, RAID 6 is inferior to all other arrays in configurations with any number of disks (from five to eight).

In terms of sequential write speed the situation is somewhat better. In an eight-drive configuration RAID 6 outperforms RAID 10, and in a six-drive configuration both RAID 10 and RAID 50. However, in configurations with seven and five drives, where RAID 10 and RAID 50 arrays cannot be created, this array comes last in sequential write speed.

Random access in a RAID 6 array is similar to that in RAID 0 and RAID 5: the random read speed shows the characteristic dip at a 512 KB block size for any number of disks in the array. Note that the maximum random read speed is achieved with six drives in the array, while with seven and eight disks it is almost the same.

In case of random writing with any number of disks in the array, the speed increases with the increase in the size of the data block and there are no drops in speed. In addition, the random write speed is proportional to the number of disks in the array, but the speed difference is insignificant.

In terms of random read speed, a RAID 6 array in a configuration with eight and seven drives is ahead only of a RAID 5 array and is inferior to all other possible arrays.

In a six-drive configuration, RAID 6 is inferior to RAID 10 and RAID 50 in random read speed, and in a five-drive configuration, it is inferior to RAID 0 and RAID 5.

In terms of random write speed, a RAID 6 array is inferior to all other possible arrays with any number of connected drives.

On the whole, we can state that the RAID 6 array is inferior in performance to the RAID 0, RAID 5, RAID 50 and RAID 10 arrays. That is, in terms of performance, this type of array is in last place.

RAID 10

A RAID 10 array can be created with four, six or eight drives. The test results for a RAID 10 array are shown in Fig. 30-34.

Fig. 33. Random read speed in a RAID 10 array

Fig. 34. Random write speed in a RAID 10 array

Typically, in arrays of eight and six disks, the sequential read speed is higher than the write speed, while in an array of four disks, these speeds are practically the same for any data block size.

For a RAID 10 array, as well as for all other considered arrays, a drop in sequential read and write speed is typical for certain sizes of data blocks for any number of disks in the array.

In case of random writing with any number of disks in the array, the speed increases with the increase in the size of the data block and there are no drops in speed. In addition, the random write speed is proportional to the number of disks in the array.

In terms of sequential read speed, the RAID 10 array trails the RAID 0, RAID 50 and RAID 5 arrays in configurations with eight, six and four disks, and in sequential write speed it is inferior even to RAID 6, that is, it trails the RAID 0, RAID 50, RAID 5 and RAID 6 arrays.

On the other hand, in terms of random read speed, the RAID 10 array outperforms all other arrays in the configuration with eight, six and four disks. But in terms of random write speed, this array loses to RAID 0, RAID 50 and RAID 5 arrays in a configuration with eight disks, RAID 0 and RAID 50 arrays in a six-disk configuration, and RAID 0 and RAID 5 arrays in a four-disk configuration.

RAID 50

A RAID 50 array can be built on six or eight drives. The test results for a RAID 50 array are shown in Fig. 35-38.

In the random read scenario, as in all the other considered arrays, there is a characteristic drop in performance at a block size of 512 KB.

In case of random writing with any number of disks in the array, the speed increases with the increase in the size of the data block and there are no drops in speed. In addition, the random write speed is proportional to the number of disks in the array, but the difference in speed is insignificant and is observed only with a large (more than 256 KB) data block size.

In terms of sequential read speed, the RAID 50 array is second only to the RAID 0 array (in a configuration with eight and six drives). In terms of sequential write speed, RAID 50 is also second only to RAID 0 in a configuration with eight drives, and in a configuration with six drives, it loses to RAID 0, RAID 5, and RAID 6.

On the other hand, in terms of random read and write speed, the RAID 50 array is second only to the RAID 0 array and is ahead of all other arrays with eight and six disks.

RAID 1

As we have already noted, a RAID 1 array, which can be built on only two disks, is inappropriate to use on such a controller. However, for the sake of completeness, we present the results for a RAID 1 array on two disks. The test results for a RAID 1 array are shown in Fig. 39 and 40.

Fig. 39. Sequential read and write speed in a RAID 1 array

Fig. 40. Random read and write speed in a RAID 1 array

For a RAID 1 array, as for all the other arrays considered, dips in sequential read and write speed are typical at certain data block sizes.

In the random read scenario, as well as for other arrays, there is a characteristic drop in performance at a block size of 512 KB.

In case of random writing, the speed increases with the increase in the size of the data block and there are no speed dips.

A RAID 1 array can only be compared with a RAID 0 array (since no other arrays are possible with two disks). Note that the RAID 1 array outperforms a two-disk RAID 0 array in all load scenarios except random read.

Conclusions

Our impression from testing the LSI 3ware SAS 9750-8i controller with Seagate Cheetah 15K.7 ST3300657SS SAS drives is rather mixed. On the one hand, the controller has excellent functionality; on the other, the speed dips at certain data block sizes are alarming, and they will, of course, affect the performance of RAID arrays in real-world use.