vNET I/O Maestro
I/O Virtualization Simplified
NextIO’s vNET I/O Maestro is a rack-level appliance that uses I/O virtualization to simplify the deployment and management of complex I/O, reducing cabling and cost while increasing management efficiency. This intelligent I/O consolidation appliance allows customers to move the I/O devices out of each server in the rack and pool them at the top of the rack, removing the need for individual top of rack Ethernet and Fibre Channel switches.
Based on industry standard PCI Express switching technology, the vNET consolidates and virtualizes server I/O at the top of the rack. Individual servers have access to a shared pool of I/O resources that can be dynamically added and re-allocated depending on workload demands. The virtualization of the I/O enables a single adapter to appear as multiple virtual adapters that may be shared among hosts or virtual machines. Virtual adapters (vNICs or vHBAs) assigned to servers appear just like a physical NIC or HBA inside the server.
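To illustrate that last point, the sketch below (an assumption-laden example, not NextIO tooling) enumerates the NICs and Fibre Channel HBAs a Linux server sees through standard sysfs paths; vNICs and vHBAs assigned from a vNET would show up in these same listings just like locally installed cards.

```python
#!/usr/bin/env python3
"""List the NICs and FC HBAs visible to a Linux host via sysfs (illustrative)."""
from pathlib import Path

def list_nics():
    """Print every network interface the host sees, with its MAC address."""
    for nic in sorted(Path("/sys/class/net").iterdir()):
        mac = (nic / "address").read_text().strip()
        print(f"NIC  {nic.name:<12} MAC  {mac}")

def list_fc_hbas():
    """Print every Fibre Channel host the kernel sees, with its WWPN."""
    fc_root = Path("/sys/class/fc_host")
    if not fc_root.exists():
        print("No Fibre Channel HBAs visible")
        return
    for host in sorted(fc_root.iterdir()):
        wwpn = (host / "port_name").read_text().strip()
        print(f"HBA  {host.name:<12} WWPN {wwpn}")

if __name__ == "__main__":
    list_nics()
    list_fc_hbas()
```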
The vNET I/O Maestro delivers performance and simplicity in a cost-effective appliance that is easy to install and integrate into any data center environment.
NextIO's vNET™ I/O Maestro Delivers Clear Customer Benefits
By removing the I/O controllers from each individual server and relocating them in a vNET I/O Maestro at the top of the rack, administrators can pool those resources and better allocate them across the servers in the rack. This strategy of I/O virtualization brings several clear customer benefits:
Reduce Total Cost of Ownership
vNET I/O Maestro greatly reduces total cost of ownership by reducing its key cost components. Acquisition cost is lowered by up to 40% by removing the cost of individual controllers, cabling and leaf switches at the top of the rack. Through easier management, fewer components to troubleshoot, lower power consumption and better airflow, customers can see operational costs shrink by up to 60%. And the 80% reduction in cabling shortens installation and troubleshooting times while improving airflow.
Accelerate Server Deployment
From the time a server arrives until it is online and accessible, that large capital expenditure sits idle waiting for the installation to be completed. Multiple trouble tickets and waiting for different groups to provision resources can mean delays of several days. With vNET, the network and storage connections are already provisioned before the server ever shows up. Once the server has booted it is ready to go; there is no waiting on other teams' schedules to catch up with your provisioning requests. WWPNs and MAC addresses are held in the vNET, not the server, meaning that all provisioning, and more importantly reprovisioning, can be done quickly and efficiently.
Boost Server Availability
vNET helps boost server availability through several key capabilities. To begin with, vNET shrinks the backup window by giving servers more bandwidth during the backup process so they can be back online sooner. In the unlikely event of a server failure, new servers can be deployed and provisioned quickly because WWPNs and MAC addresses do not need to change. And because the I/O resources live in the vNET and not the server, live migration can be done in seconds, not minutes or hours.
Reduce Power and Cooling Requirements
Eliminating the controller cards, cabling and leaf switches from the rack helps reduce I/O power consumption by up to 58%. With fewer cables, servers receive better airflow, reducing the power and cooling required to keep the data center operational. With I/O relocated from the server to the vNET, smaller servers can be used to drive better power efficiency.
Achieve Better Density
With fewer controllers in the chassis, customers can choose smaller form factor servers, like 1U or 2U rack servers, and still get all of the I/O throughput of larger 4U style systems. By shattering the I/O bottleneck, servers connected to vNET can achieve far greater VM density, up to 100 VMs where standard servers might top out at 20 or 40 VMs. This greater VM density ultimately means better data center utilization, making the most of that precious floor space in the glass house.
Leverage Industry Standards
Unlike other I/O virtualization solutions, the NextIO vNET I/O Maestro uses industry-standard PCI Express for the interconnect. To the server, a vNET enclosure looks just like a regular PCIe switch, and the virtualized I/O controllers appear to the server as if they were installed in the server chassis. Best of all, the networking and storage controllers use a standard Intel 10GbE driver and a standard Emulex 8Gb Fibre Channel driver, so your software stacks can remain consistent, simplifying the management of software stacks and images.
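As a quick sanity check of that driver claim, here is a minimal sketch assuming a Linux server and the usual in-box module names (ixgbe for the Intel 10GbE family, lpfc for Emulex 8Gb Fibre Channel HBAs); those names are assumptions about the customer's OS image rather than anything NextIO-specific.

```python
#!/usr/bin/env python3
"""Confirm the standard upstream NIC/HBA drivers are loaded on a Linux host."""

# Assumed in-box module names: Intel 10GbE -> ixgbe, Emulex 8Gb FC -> lpfc.
EXPECTED = {"ixgbe": "Intel 10GbE NIC", "lpfc": "Emulex 8Gb FC HBA"}

with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}

for module, description in EXPECTED.items():
    # Note: drivers compiled directly into the kernel will not appear here.
    status = "loaded" if module in loaded else "not loaded"
    print(f"{module:<6} ({description}): {status}")
```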
Chassis and Models
- 15 or 30 non-blocking PCIe server ports
- Up to 20Gbps interconnect speed
- Industry standard PCIe interconnect cable x8 (1, 2 or 3m)
- 8x I/O module slots
- Supports any combination of 10GbE or 8Gb FC I/O adapters
- Dual-port 10Gb Ethernet I/O adapter with SFP+ connectivity, supporting 10GBase-SR
- Dual-port 8Gb Fibre Channel I/O adapter with SFP+ connectivity and LC-style connectors
- Windows Server 2008 (32-bit and 64-bit)
- Windows Server 2008 R2
- RHEL 5.3, 5.4, 5.5 and 6.0
- CentOS, SLES 11.x
- VMware ESXi 4.1 update 1
- Management Access via Ethernet 10/100/1000 Mbps
- Web-based management GUI (Firefox 3.5, Firefox 4, IE8, IE9, Chrome 11+)
- CLI through Telnet/SSH and Open API for integration with 3rd party control management software
- Lights Out Management Support
- SNMPv3 Trap Configuration with 2 user accounts and 3 trap destinations
- Environment monitoring and chassis alert
- nControl Management Console
- Redundant I/O Modules
- Redundant, hot-swappable power supplies and cooling fan modules
- Supports 10Gbps Ethernet NIC teaming and failover
- Supports Fibre Channel Multi-Pathing (MPIO)
For full specifications, please review the vNET I/O Maestro Data Sheet.
vNET I/O Maestro System
The vNET I/O Maestro is a single top of rack I/O virtualization appliance that can be configured either standalone or in pairs for redundancy. The system is designed with flexibility to help ensure that it can be configured in a variety of different modes to meet customers' needs.
Each vNET I/O Maestro, whether the 15-server or the 30-server model, features eight hot-pluggable I/O slots at the front of the system. These slots can hold any combination of Ethernet and Fibre Channel controllers; a minimum of one controller is required.
Dual-channel 10GbE and dual-channel 8Gb Fibre Channel controllers are available in hot-pluggable carriers. These can be configured at the time of purchase and can also be added at a later date without taking the vNET offline. Controller configurations can vary from all Ethernet to all Fibre Channel, or any combination of the two, up to eight controllers in total.
A PCIe cable is used to connect the server to the vNET I/O Maestro. These industry-standard cables are available in 1, 2 or 3 meter lengths. Sturdy metal connectors that lock in place help to ensure not only a strong connection, but a lasting connection as well.
Each server requires a single PCIe controller to be installed physically in the server chassis. This card takes the place of all of the existing I/O controllers and connects the server to the vNET I/O Maestro, giving it access to the I/O controllers in the top of rack system. For redundancy, a second PCIe controller can be installed in the server and connected to a second vNET I/O Maestro.
Power Supplies and Fans
High availability is achieved through redundant power supplies and fans. Each system includes all of the appropriate components; there is nothing further to configure.
NextIO's vNET™ I/O Maestro Reduces Cabling By Up to 80%
Dramatically reduce rack cabling requirements with vNET I/O Maestro. Instead of multiple NICs (for network, management, vMotion, etc.) and multiple SAN HBA adapters, a single PCIe cable connects the server to all of the top of rack peripherals. And for the vast majority of customers that require redundancy, the cabling challenge only grows: in some cases 16-18 total cables are needed per server, while vNET requires only two cables for a fully redundant configuration with two vNET I/O Maestros at the top of the rack.
In the diagrams below it is easy to see how the vNET I/O Maestro can dramatically reduce the cabling required. On the left, a traditional server with redundancy is configured with two quad-port 1GbE adapters and an 8Gb Fibre Channel HBA, providing 16Gb/s of I/O throughput per path (32Gb/s combined for the two redundant paths). The server on the right is configured to connect to the networking and SAN controllers in the vNET I/O Maestro. Each connection to the vNET is a 20Gb/s PCIe connection, so the total throughput for the redundant configuration is 40Gb/s, or 25% more aggregate throughput. In addition, the cabling is reduced by more than 80%, as two cables on the right handle more I/O traffic than the eighteen on the left.
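The arithmetic behind those figures, assuming the eighteen cables on the left break down as sixteen 1GbE links plus two Fibre Channel links:

```latex
\begin{align*}
\text{Traditional, per path}  &: 2 \times 4 \times 1\,\text{Gb/s} + 8\,\text{Gb/s} = 16\,\text{Gb/s} \\
\text{Traditional, redundant} &: 2 \times 16\,\text{Gb/s} = 32\,\text{Gb/s} \quad (18 \text{ cables}) \\
\text{vNET, redundant}        &: 2 \times 20\,\text{Gb/s} = 40\,\text{Gb/s} \quad (2 \text{ cables}) \\
\text{Throughput gain}        &: 40/32 = 1.25 \;\Rightarrow\; 25\% \text{ more} \\
\text{Cabling reduction}      &: 1 - 2/18 \approx 89\%
\end{align*}
```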
Through the reduction in cabling, customers can clearly see the following benefits:
- Quicker deployment times for better data center staff efficiency
- Quicker troubleshooting to help get servers back online faster
- No recabling needed to change/reprovision servers in the same rack for better availability
- Fewer connections to potentially fail, resulting in better availability
- Better airflow for more efficient cooling
- Lower power consumption
- Better utilization of I/O peripherals
- Much lower cost
Making Sense of I/O Virtualization, I/O Consolidation and Software Defined Networking
Today, customers are tasked to “do more with less.” What that means will vary by customer and by environment, but the message is clear: the inefficiencies and islands of automation of the past need to give way to the new realities of dense, flexible and rapidly changing data centers. This isn’t just about “the cloud,” the latest IT trend; at a higher level it is about being more flexible and more agile – fewer buzzwords, more action.
As the market has moved from a world of discrete, un-virtualized servers (think one app, one server) to a world where almost every server is hosting either multiple virtual machines (VMs) or a consolidated workload, the strains on the systems are beginning to show – mostly in I/O, which has not kept pace with the other technologies inside the server. For most customers the top of the rack means either a huge trunk of cables going directly to the core switches (inefficient) or multiple top of rack leaf switches (expensive, with added management overhead) that connect to end of row core switches. There has to be a better way.
The Need for Better I/O Management in Today’s Data Center
With both server virtualization and workload consolidation, the key components of the server – the CPU, memory, networking and storage – are being virtualized, giving customers more flexibility and more agility. The problem, however, is that the physical I/O devices – the network controllers and storage controllers – are still tied to the physical platform, creating the final bottleneck in the system.
NextIO breaks this chain with a rack-level I/O consolidation that brings the I/O components from the system level to the rack level, delivering:
- Reduced time to deploy – New servers can be deployed and provisioned faster because there are fewer connections to manage
- Reduced time to reconfigure – With drag-and-drop capability, changing I/O device assignments takes seconds, not minutes or hours, and doesn’t require a reboot
- Increased agility – Adding new capacity or changing assignments can be done on the fly
- Reduced cost – Instead of multiple adapters in each server and expensive top of rack leaf switches, the NextIO vNET Maestro consolidates I/O into a single appliance that greatly reduces both the system and cabling costs
Basic Server I/O
In a standard server, the PCIe bus is host to both the networking and storage traffic. Both of these are routed out to individual controllers that send the appropriate traffic over the appropriate network.
In this world, components tend to be overprovisioned in order to handle the peaks, leaving underutilized resources, extra cabling and extra cost. To solve this, customers have moved to virtualization and consolidation, but with those solutions, a whole new set of challenges arise.
Server-level I/O Virtualization
I/O virtualization already occurs within server virtualization: the hypervisor takes individual I/O components inside the server and assigns them out as virtual resources to the VMs. Customers have virtualized the CPU and memory through the hypervisor, virtualized the networking at the switch level with VLANs, and virtualized the storage through SAN and NAS technologies, but the pieces in the middle that still remain anchored to the physical server are the actual I/O devices.
With I/O virtualization, the physical controller and the protocol (which are normally tightly bound) are separated, creating virtual I/O controllers. In a virtualized server, this can happen with as few as one device – multiple virtual controllers bound to a single physical controller – but that is just inviting bottlenecks.
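For a concrete feel for the one-physical-to-many-virtual idea, the sketch below uses SR-IOV on a single Linux host – a standard server-level mechanism, not NextIO's rack-level approach – to carve one physical NIC into several virtual controllers. The interface name and VF count are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""Expose virtual functions on an SR-IOV capable NIC (illustrative, requires root)."""
from pathlib import Path

PF = "eth0"    # physical function; the interface name is an assumption
NUM_VFS = 4    # number of virtual functions (virtual NICs) to expose

sriov = Path(f"/sys/class/net/{PF}/device/sriov_numvfs")
sriov.write_text("0")           # clear any existing VFs before resizing
sriov.write_text(str(NUM_VFS))  # carve the physical NIC into virtual NICs

total = Path(f"/sys/class/net/{PF}/device/sriov_totalvfs").read_text().strip()
print(f"{PF}: requested {NUM_VFS} VFs (device supports up to {total})")
```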
To overcome the I/O virtualization issues, customers tend to add multiple controllers and connect to multiple switches. While this alleviates some of the challenges, it brings more cost into the equation and adds more complexity and cabling, putting more pressure on the IT infrastructure.
I/O Consolidation
I/O consolidation is a technique for moving all of the I/O traffic off of a single server onto a converged network. The benefit of this scenario is the reduction in cabling and devices required. However, the downside is twofold: first, all of your traffic is essentially tied to a single protocol, Ethernet, so there is some overhead required in converting; and second, all of the traffic utilizes the same networking structure (no physical segmentation between the server and the switch). This scenario increases the management challenges because all of the data is flowing on a single fabric, so maintaining quality of service for applications adds an extra level of complexity – there is no clear delineation of fabrics.
Network Virtualization With The NextIO vNET Maestro
NextIO utilizes an architecture where PCI Express is the underlying fabric, something that is already integrated into every server on the market today and offers a wide range of I/O devices to choose from. In fact, one of the biggest benefits of vNET is that customers can use industry-standard software drivers, meaning that when vNET is implemented, server software images do not need to change and troubleshooting/support challenges are minimized.
This low-level protocol helps to ensure that communication is optimized even though it takes place outside of the box: to the server, PCIe over a cable and PCIe in a slot are treated the same way. Instead of adding more converged adapters to help balance the traffic, you simply add more 10GbE or Fibre Channel adapters at the vNET level. This allows you to grow as needed, and the hot-plug capability of vNET means that expansion not only happens in seconds, but can be tailored to the type of traffic (network or storage) where the capacity is needed.
Not only are the controllers all virtualized, but a pool of these virtual controllers can be shared across a set of different servers. Where I/O virtualization normally means multiple virtual devices on a single physical controller, what NextIO brings to the table is multiple virtual I/O devices across multiple shared physical controllers, for far more flexibility.
What About SDN?
Everyone is talking about SDN, or Software Defined Networking, but it remains one of the most misunderstood areas today. SDN defines a network topology on top of a physical set of network devices without being restricted by the current layer 2-n standards. What SDN does is separate the data and control planes of the routing system, so it is really working within the core switches, not at the top of the rack. SDN only works with Ethernet and does not support Fibre Channel, unless you want to run FCoE and convert all of your Fibre Channel to Ethernet in order to route it.
NextIO vNET lives at the rack level, providing consolidated pools of I/O resources at the rack level; SDN lives between the top of rack and the core switches. There is nothing in vNET that prevents customers from still taking advantage of SDN in their network routing because the two technologies live in different parts of the stack. But, realistically, the problems that most customers are trying to solve today are at the top of the rack, which is why SDN, while interesting as a future technology, is not being deployed much in production.
There are multiple ways to address the challenges that customers face at the top of the rack, but the best way to tackle them is with the NextIO vNET Maestro virtualization appliance. This top of rack I/O solution allows multiple servers to share a single set of pooled I/O controllers. Resources can be deployed and re-provisioned in seconds instead of minutes, hours or days. Shared resources help do away with both the problem of over-provisioned, underutilized servers and the problem of too many cables in the rack. While I/O virtualization is something that happens automatically when running a hypervisor, rack-level I/O virtualization, which takes the I/O components out of the server, brings a whole new level of efficiency and productivity to your data center.
What Is NextIO's vNET™ I/O Maestro?
The vNET I/O Maestro is an intelligent top of rack solution that allows customers to consolidate and simplify their rack-level I/O without having to completely redesign their data center or change their processes in order to achieve the benefits.
vNET allows customers to continue to work with their existing end of row switching; there is no need to replace any core network switches in order to take advantage of all of the functionality that the vNET I/O Maestro can deliver to the rack.
Typically customers will utilize a single vNET per rack, or two if they are configuring for redundancy. Rack-to-rack configurations are supported from a cabling perspective, allowing up to 3 meters of cable length with standard PCIe cables.
Each vNET provides layer 2 switching internally, so server-to-server traffic within the rack is retained within the rack; there is no need to take that traffic up to the core. The beauty of the vNET system is that this server-to-server communication, which would normally occur at 1Gb/s for most servers, now takes place at 10Gb/s through vNET.
The Fibre Channel controllers inside the vNET I/O Maestro still require NPIV (N_Port ID Virtualization) support in order to connect to the SAN. This is typically provided by the end of row Fibre Channel switches.
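For context on what NPIV looks like from a standard host's perspective (the vNET handles this internally through its own Fibre Channel controllers), the sketch below uses the generic Linux fc_host sysfs interface; the host instance and WWPN/WWNN values are placeholders, not real addresses.

```python
#!/usr/bin/env python3
"""Check NPIV support on a Linux FC host and create a virtual port (illustrative)."""
from pathlib import Path

HOST = "host5"                 # assumed FC host instance on the Linux server
NEW_WWPN = "2001000000000001"  # placeholder WWPN, not a real address
NEW_WWNN = "2000000000000001"  # placeholder WWNN, not a real address

fc_host = Path("/sys/class/fc_host") / HOST

# The attached fabric switch port must also have NPIV enabled for the
# virtual port to log in to the SAN.
max_vports = (fc_host / "max_npiv_vports").read_text().strip()
in_use = (fc_host / "npiv_vports_inuse").read_text().strip()
print(f"{HOST}: NPIV virtual ports in use {in_use} of {max_vports}")

# Create a virtual N_Port with its own WWPN/WWNN (requires root privileges).
(fc_host / "vport_create").write_text(f"{NEW_WWPN}:{NEW_WWNN}")
```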