InQuest has published an article comparing InfiniBand and PCI-X. The first PCI-X generation will appear in servers later this year with a bandwidth of 1GB/s; its key selling points are that it is cheap, flexible and compatible. InfiniBand, with an initial speed of 0.5GB/s, is still some time away, but unlike PCI-X it offers a number of interesting new features, which you could also read about here earlier. Although the two standards partly overlap, they are not direct competitors, since the InfiniBand standard is more universal than PCI-X:
InfiniBand and the “cloud computing” clustering model will serve as a fountain of fascinating theoretical dialog for a while. Taken to its limits, InfiniBand dissolves the current notions of computing by treating all CPUs, memory and peripherals as part of a pooled resource cloud. This is an interesting exercise, but how does InfiniBand solve the practical problems faced by the IT manager or the user?
Intel positions InfiniBand to take on everything from Ethernet for LANs to Fibre Channel for SANs to PCI-X, LDT and Rapid I/O for inside-the-box chip-to-chip wiring. As a type of LAN alternative, InfiniBand comes up short against the inertia of existing standards.
Used as a local component interconnect, however, InfiniBand's merits are even thinner. It does not meaningfully exceed PCI-X bandwidth until its costly 12x implementation is ready. By that time we should expect to see more from PCI-X as well. But, as we have seen in other cases, bandwidth is not everything. InfiniBand is a serialized, protocol-based connection technology and as such suffers from poor latency due to excessive software overhead. PCI-X is a low-latency, down-to-the-metal, pure hardware interface.
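To put the bandwidth claim in perspective, a rough back-of-the-envelope calculation is sketched below. The figures used (a 2.5 Gbit/s signalling rate per InfiniBand lane with 8b/10b encoding, and a 64-bit PCI-X bus at 133 MHz) are assumptions based on the published specifications of both standards, not numbers taken from the InQuest article; note that the 0.5GB/s starting speed quoted above counts both directions of a 1x link.

```python
# Rough peak-bandwidth comparison of InfiniBand link widths versus PCI-X.
# Assumed figures: InfiniBand signals at 2.5 Gbit/s per lane and loses 20%
# to 8b/10b line coding; PCI-X is a 64-bit parallel bus clocked at 133 MHz.

IB_LANE_GBIT = 2.5            # raw signalling rate per InfiniBand lane (Gbit/s)
IB_ENCODING_EFFICIENCY = 0.8  # 8b/10b coding leaves 80% of the raw rate for data

def infiniband_gbytes_per_s(lanes: int) -> float:
    """Effective one-directional data rate of a 1x/4x/12x link in GB/s."""
    return lanes * IB_LANE_GBIT * IB_ENCODING_EFFICIENCY / 8

# 64-bit bus * 133 MHz = 8 bytes per clock at 133 million clocks per second
PCI_X_GBYTES = 8 * 133e6 / 1e9

for lanes in (1, 4, 12):
    print(f"InfiniBand {lanes:2d}x: {infiniband_gbytes_per_s(lanes):.2f} GB/s per direction")
print(f"PCI-X (64-bit/133 MHz): {PCI_X_GBYTES:.2f} GB/s")
```

Under these assumptions a 1x link delivers about 0.25GB/s per direction and a 4x link about 1GB/s, roughly on par with PCI-X's ~1.06GB/s; only the 12x link, at around 3GB/s per direction, clearly pulls ahead, which is the point the article makes.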