I/O Virtualization and Top of Rack
What's at the top of your rack?
Like so many data centers, yours has probably been evolving over time. I can remember when every server had one or two network cables coming out at a whopping 10Mb/s. Technically, I used to wire ARCnet over RJ-11 and did plenty of coax as well (who else had to troubleshoot missing terminators?), so my memories, while hazy, go back even further than RJ-45 connectors.
As technology has evolved, the top of the rack has been in a constant state of flux. First there were switches, which dramatically improved the cabling situation, but some still felt that buying an expensive switch for the top of every rack was a waste: “just run it all to the end of the row and sort it out there…”
Today’s servers tend to have more cabling than ever before. While 10GbE prices appear to be coming down somewhat, the most popular configuration is still 4x1GbE on the motherboard (integrated LOM) plus a second 4x1GbE card in a slot (for path redundancy). That means eight network cables per server, yet still only 8Gb/s of aggregate throughput.
In reality, this is enough bandwidth for most servers, and the separate physical connections let you segment traffic, which makes management easier (but cabling harder).
But start filling the rack with servers, each carrying those 8 network connections plus a typical 4 Fibre Channel connections, and you’re looking at almost a half mile of cable per rack.
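To put that half mile in perspective, here’s a quick back-of-the-envelope estimate in Python. All of the numbers are my own assumptions (a dense rack of 40 1U servers and roughly 6 feet of cable per run), not measurements from any particular data center:

servers_per_rack = 40   # assumed: dense 1U servers in a 42U rack
network_cables = 8      # 2x quad-port 1GbE per server, as above
fc_cables = 4           # typical Fibre Channel connections per server
avg_run_ft = 6.0        # assumed average run from server to top of rack

total_ft = servers_per_rack * (network_cables + fc_cables) * avg_run_ft
print(f"{total_ft:,.0f} ft of cable per rack (~{total_ft / 5280:.2f} miles)")
# prints: 2,880 ft of cable per rack (~0.55 miles)

Under those assumptions, “almost a half mile” is, if anything, on the conservative side.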
As the top of rack continues to evolve, products like the vNET I/O Maestro are putting more intelligence and efficiency into the rack, making the network easier to manage.
IT Brand Pulse recently released a paper that looks at how the top of rack has changed over the years, and how NextIO is helping to simplify it with vNET. Check it out; it’s definitely worth the read: IT Brand Pulse Top of Rack Paper