For commercial workstations, access to large amounts of local NVMe storage is a necessity. Over the years, many solutions have been put forward to address this, ranging from custom FPGAs to software breakout boxes. Recently, motherboard vendors have begun offering PCIe x16 Quad M.2 cards to solve the problem.
HighPoint SSD7120 Makes Its Debut
Recently, HighPoint released the SSD7120, an NVMe SSD RAID card designed to provide access to large amounts of local NVMe storage. PCIe x16 Quad M.2 cards replace earlier solutions such as custom FPGAs and software breakout boxes. However, they still have an obvious disadvantage: they depend on processor bifurcation (that is, the processor's ability to use a single PCIe x16 slot to drive multiple devices).
The good news is that the HighPoint SSD7120 overcomes this limitation.
Function of PCIe x16 Quad M.2 Cards
On paper, putting four NVMe M.2 drives in a single PCIe x16 slot sounds easy: the slot has 16 lanes, so each drive can occupy up to four of them. So where is the difficulty? The problem lies on the CPU side of the equation.
In general, a PCIe x16 slot connects directly to a single PCIe x16 root complex on the CPU, and that root complex is typically configured to expect only one device. So when you put four devices in, the system gets confused. An intermediary is needed to sort things out, acting as a communication facilitator between the drives and the CPU.
This is where PCIe switches come in. Some motherboards support bifurcation, splitting a PCIe x16 complex into x8/x8 when more than one device is inserted, but without a switch the card is at the mercy of what the processor can do.
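To make the bifurcation dependency concrete, here is a small illustrative Python sketch. The function, its parameter names, and the split labels are hypothetical, invented for this example rather than taken from any real tool; it simply models how many drives a host could enumerate from a quad-M.2 card, with and without an onboard switch.

```python
def visible_drives(cpu_bifurcation: str, card_has_switch: bool, drives: int = 4) -> int:
    """Model how many M.2 drives the host can enumerate in one x16 slot.

    cpu_bifurcation: the root port's supported split, e.g. "x16" (no split),
    "x8x8", or "x4x4x4x4". Hypothetical labels for illustration only.
    """
    if card_has_switch:
        # A PCIe switch (such as the PLX8747) presents one upstream x16 link
        # to the CPU and fans out four downstream x4 links itself, so the
        # CPU's bifurcation support no longer matters.
        return drives
    # A passive quad-M.2 card wires each drive to four dedicated lanes;
    # the root port must be split into that many separate links.
    splits = {"x16": 1, "x8x8": 2, "x4x4x4x4": 4}
    return min(drives, splits.get(cpu_bifurcation, 1))

print(visible_drives("x16", card_has_switch=False))       # 1: only one drive seen
print(visible_drives("x4x4x4x4", card_has_switch=False))  # 4: full bifurcation
print(visible_drives("x16", card_has_switch=True))        # 4: switch handles it
```

The sketch captures why the switch-based approach works in all cases: the CPU only ever sees a single x16 device.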
It's hard to say which bus interface is better for an SSD: PCIe or SATA. Please read this post and choose the one that suits you.
HighPoint SSD7120 Solves the Problem
The best way around this limitation is to add a PCIe switch, which is why HighPoint pairs a PLX8747 chip with custom firmware for booting. A PCIe switch is expensive, but it is worth the price: it provides a configurable interface between the drives and the CPU that works in all cases.
HighPoint already has a device on the market, the SSD7101A, that allows four M.2 NVMe drives to be connected. So why launch the SSD7102 now? Because this one is even better: the firmware in the PLX chip has been changed to allow booting from a RAID of NVMe drives.
Boot Modes & Features of the NVMe RAID Controller
The SSD7102 actually offers three boot modes:
- Boot with RAID 0 across all four drives.
- Boot with RAID 1 across pairs of drives.
- Boot from a single drive in a JBOD configuration.
In the JBOD configuration, each drive can be set as a boot drive, making it possible to install multiple operating systems across different drives.
Key characteristics of the SSD7102:
- Supports RAID 0, 1, 5, and 1/0
- Supports any M.2 NVMe drives, even mixed from different suppliers
- Supports up to four off-the-shelf U.2 NVMe SSDs
- Supports multiple operating systems (Windows, Linux, and macOS)
- Scalable performance across multiple RAID controllers
- Dedicated PCIe 3.0 x16 bus bandwidth
- Provides PCIe 3.0 x4 bandwidth to each NVMe U.2 SSD
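As a rough guide to what those RAID levels mean for capacity, the sketch below computes the usable space of a four-drive array at each supported level. The helper function is hypothetical, written for illustration rather than part of HighPoint's software.

```python
def usable_capacity_tb(level: str, drive_tb: float = 1.0, drives: int = 4) -> float:
    """Usable capacity of an array at each RAID level the card supports."""
    if level == "0":           # striping: full capacity, no redundancy
        return drives * drive_tb
    if level in ("1", "1/0"):  # mirroring halves the usable space
        return drives * drive_tb / 2
    if level == "5":           # one drive's worth of capacity holds parity
        return (drives - 1) * drive_tb
    raise ValueError(f"unsupported RAID level: {level}")

for lvl in ("0", "1", "5", "1/0"):
    print(lvl, usable_capacity_tb(lvl))  # 4.0, 2.0, 3.0, 2.0 TB for four 1 TB drives
```

RAID 0 maximizes capacity and speed at the cost of redundancy, which is why the boot modes above pair it with the mirrored (RAID 1) and single-drive (JBOD) alternatives.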
HighPoint says the SSD7102 supports CPUs from both Intel and AMD. Given the presence of the PCIe switch, the card should also work in PCIe x8 and PCIe x4 modes.
You can get this PCIe card from HighPoint and its reseller/distribution partners. The MSRP is expected to be $399, the same as the current SSD7101A, which lacks the bootable RAID option.