The founders of LeftHand Networks have reinvented the iSCSI array in the shape of NexGen Storage's n5, using flash memory, deduplication and predictable performance. LeftHand Networks was acquired by HP and is now the HP P4000 line, which competes with Dell's EqualLogic iSCSI arrays, and NexGen is squaring up to both of …
CTO, Nimble Storage
Claims of performance benefits of connecting flash directly to PCIe rather than to a storage HBA are exaggerated, especially in the context of flash in networked storage.
The extra overhead of introducing a storage HBA is in the tens of microseconds, dwarfed by other latencies such as flash access and network transfer, which are often in the hundreds of microseconds. Shaving off a few tens of microseconds is not a noticeable improvement in end-to-end latency for most applications.
On the other hand, using flash in a disk drive form factor brings significant advantages such as the ability to share it across controllers and to service it non-disruptively.
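The latency-budget argument above can be sketched with some arithmetic. The figures below are illustrative, order-of-magnitude assumptions (not measurements from either vendor): an HBA overhead in the tens of microseconds against flash access and network transfer in the hundreds.

```python
# Illustrative latency budget for a networked flash read.
# All figures are assumed, order-of-magnitude values, not measurements.
hba_overhead_us = 20    # storage HBA: tens of microseconds
flash_access_us = 200   # flash access: hundreds of microseconds
network_us = 150        # network transfer: hundreds of microseconds

total_with_hba = hba_overhead_us + flash_access_us + network_us
total_without_hba = flash_access_us + network_us
saving_pct = 100 * hba_overhead_us / total_with_hba

print(f"End-to-end with HBA:    {total_with_hba} us")
print(f"End-to-end without HBA: {total_without_hba} us")
print(f"Removing the HBA saves only {saving_pct:.1f}% of total latency")
```

On these assumed numbers, bypassing the HBA trims roughly 5% off the end-to-end figure, which is the Nimble CTO's point: the saving is real but small relative to the rest of the path.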
The facts speak for themselves. The maximum throughput to the back-end disk subsystem through a RAID storage controller (storage HBA) is limited by the bandwidth of the PCIe slot the HBA is plugged into. If a single solid-state device can saturate an x8 PCIe slot, then the storage HBA is bottlenecked by one drive alone, let alone many SSDs combined with many rotating drives. It's like placing RAM behind a drive HBA. Realistically, most storage HBAs on the market can't even come close to x8 throughput because of other bottlenecks and overheads. One NexGen SSD device runs at x8 line rate, can be shared across storage processor domains, and can be serviced non-disruptively. For more details see NexGen's blog on this subject:
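The x8-slot argument can also be put in numbers. The sketch below assumes PCIe Gen 2 (roughly 500 MB/s of usable bandwidth per lane after 8b/10b encoding overhead) and a hypothetical SSD that runs at x8 line rate, as NexGen claims for its device; the HDD figure is a typical sequential rate, not a vendor number.

```python
# Back-of-the-envelope PCIe slot bandwidth check.
# Assumes PCIe Gen 2: ~500 MB/s usable per lane after 8b/10b encoding.
lanes = 8
per_lane_mb_s = 500
slot_bw_mb_s = lanes * per_lane_mb_s  # total budget for an x8 slot

ssd_mb_s = 4000   # hypothetical SSD running at x8 line rate
hdd_mb_s = 150    # typical sequential rate for one rotating drive

print(f"x8 slot bandwidth: {slot_bw_mb_s} MB/s")
print(f"One line-rate SSD consumes {100 * ssd_mb_s / slot_bw_mb_s:.0f}% of the slot")
print(f"Each rotating drive adds {100 * hdd_mb_s / slot_bw_mb_s:.1f}% more demand")
```

On these assumptions a single line-rate SSD already fills the entire x8 budget, so every additional drive behind the same HBA can only contend for bandwidth that is already spoken for.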
Should go like the wind
PCIe-attached shared storage should be insanely fast. If the data is in the RAM cache, we're talking latencies measured in nanoseconds, not microseconds. It does, though, limit the number of servers that can attach to the storage, and the distance.