SuperBlade - Networking


SuperBlade Networking

SuperBlade® networking options include six different Ethernet modules. For simple Layer 2 switching at 1 Gbps, the SBM-GEM-001 switch offers a cost-effective connectivity option for 10-Blade or 14-Blade systems. Alternatively, the SBM-GEM-002 1-Gbps Ethernet Pass-Through module provides an even lower-cost access option in 10-Blade or 14-Blade systems when switched access is not required. The SBM-GEP-T20 provides this pass-through capability in TwinBlade systems.

Access to 10-Gigabit Ethernet networks is provided by either the Layer 2/3 1/10-Gbps Ethernet Switches - SBM-GEM-X2C+ and SBM-GEM-X3S+ (up to 20 Blades), the 10-Gigabit Ethernet Switch - SBM-XEM-X10SM, or the 10-Gigabit Ethernet Pass-Through Module - SBM-XEM-002M (10-Blade or 14-Blade systems).

For even faster connections, Supermicro offers three different InfiniBand connectivity options. A powerful new series of QDR InfiniBand switches (SBM-IBS-Q3618, SBM-IBS-Q3618M, SBM-IBS-Q3616, and SBM-IBS-Q3616M) provides connections from Blades to 4X QDR (40-Gbps) InfiniBand networks - the fastest networking technology available for commercial use. Other InfiniBand options include a 4X DDR switch (SBM-IBS-001) and a 14-port 4X DDR Pass-Through module (SBM-IBP-D14). All SuperBlade® networking options are hot-pluggable.

Gigabit Ethernet Modules

Switches
The 1-Gigabit Ethernet switch module (part ID SBM-GEM-001) includes ten external 1-Gb/s uplink (RJ45) ports and fourteen internal 1-Gb/s downlink ports that connect to the LAN interfaces onboard the Blade servers. This Layer 2 Ethernet switching module also has two internal Ethernet paths to the SuperBlade® Chassis Management Module(s) (CMMs) to allow configuration, management, and control of the switch and its ports through a browser-based management interface. Offering advanced features such as static link aggregation, VLAN support, and Jumbo Frame support, the switch provides a connection between the Ethernet controllers integrated on the mainboard and on the mezzanine card and external Ethernet systems.
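On the blade-server side, the switch's static link aggregation and jumbo-frame features have to be matched by the host configuration. Below is a minimal sketch for a Linux blade using iproute2; the interface names (eth0/eth1) and the example address are assumptions, and since the SBM-GEM-001 supports static (non-LACP) aggregation, a static bonding mode is used rather than 802.3ad.

```shell
# Hedged sketch: bond a blade's two onboard 1GbE interfaces to match a
# static LAG configured on the SBM-GEM-001, with 9000-byte jumbo frames.
# Interface names eth0/eth1 are placeholders; adjust to the blade's OS.
ip link add bond0 type bond mode balance-xor   # static LAG (no LACP on this switch)
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 mtu 9000                     # jumbo frames (switch supports up to 9K)
ip link set eth0 up
ip link set eth1 up
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0            # example address (RFC 5737 range)
```

Both member links must be included in the same static trunk group on the switch side for traffic to hash correctly across them.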

The 1/10-Gigabit Ethernet Layer 2/3 Switch modules (part ID SBM-GEM-X2C+ and SBM-GEM-X3S+) offer advanced switching features and connection to 10-Gigabit Ethernet networks. Internally they use the same 1-Gb/s downlink ports as the SBM-GEM-001 for connection to the SuperBlade® LAN interfaces. Externally, the SBM-GEM-X2C+ provides up to three 10-Gb/s uplink connections (two with a CX4 connector, which are stackable, and one SFP+) and two 1-Gb/s uplink connections (RJ45). The SBM-GEM-X3S+ provides up to three 10-Gb/s SFP+ uplink connections and four 1-Gb/s uplink connections (RJ45). Both switches also have two internal Ethernet paths to the CMM(s) to allow configuration, management, and control of the switch and its ports through a browser-based management interface. In addition to the Web-based GUI, they offer a CLI for flexibility in management and control of single or multiple switch networks. Both the SBM-GEM-X2C+ and SBM-GEM-X3S+ support up to 20 Blades.

The 10-Gigabit Ethernet Layer 2/3 Switch Module (part ID SBM-XEM-X10SM) connects internally to an optional 10-Gigabit mezzanine card on the blade (either AOC-XEH-iN2, AOC-IBH-XDS, or AOC-IBH-XDD). External 10-Gigabit SFP+ connectors are provided for uplinks. In ten-Blade systems, there are ten 10-Gigabit connectors; in TwinBlade systems, there are four.

SBM-XEM-X10SM (10GbE)

Internal Ports
  • 10 (Ten-Blade) or 14 (Fourteen-Blade) or 20 (TwinBlade) internal 10-Gigabit connections to Blades
External Uplink Ports
  • 4 (Twin Blade) or 10 (Ten-Blade/Fourteen-Blade) external 10-Gigabit Ethernet ports (SFP+)
  • One external 1GE port (RJ45)
Type
  • Layer-3 10G switch
Switching Capacity
  • 480 Gbps
Trunking
  • Link Aggregation - 8 groups with 8 members per group
Jumbo Frame Support
  • Up to 9K bytes
Remote Management
  • Browser-based management / CLI
Protocols
  • STP, RSTP, MSTP, IGMP snooping, 802.1x, LACP, DHCP, RIP/OSPF, ACL, QoS.
OS
  • Firmware upgradeable

SBM-XEM-F8X4SM (FCoE)

Internal Ports
  • 10 (Ten-Blade) or 14 (Fourteen-Blade) or 20 (TwinBlade) internal 10-Gigabit connections to Blades; internal ports support DCB/FCoE
External Uplink Ports
  • 4 (TwinBlade) or 6 (Ten-Blade/Fourteen-Blade) external Fibre Channel ports (SFP+) (N_port type)
  • 4 (Ten-Blade/Fourteen-Blade) external 10-Gigabit Ethernet ports (SFP+). No external 10Gb Ethernet with TwinBlade enclosure.
  • One external 1GE port (RJ45)
Type
  • Converged Data Center switch with FC/FCoE support
Switching Capacity
  • 480 Gbps
Trunking
  • Link Aggregation - 8 groups with 8 members per group
Jumbo Frame Support
  • Up to 9K bytes (Ethernet)
  • 2112 bytes (Fibre Channel)
Remote Management
  • Browser-based management / CLI
Protocols
  • STP, RSTP, MSTP, IGMP snooping, 802.1x, LACP, DHCP, RIP/OSPF, ACL, QoS.
  • FIP (FC-BB-5) gateway
OS
  • Firmware upgradeable

SBM-GEM-X3S+ (1/10 GbE)

Internal Ports
  • Up to twenty 1-Gbps downlink ports for LAN interfaces of the server blades
External Uplink Ports
  • Three 10-Gbps SFP+ uplink ports
  • Four 1-Gbps RJ-45 uplink ports
Type
  • Layer-2 / 3 switch
Switching Capacity
  • 104 Gbps
Trunking
  • Link aggregation support (802.3ad-full)
Jumbo Frame Support
  • Up to 9k bytes
Remote Management
  • Browser-based management / CLI
Protocols
  • STP, RSTP, MSTP, IGMP snooping, 802.1x, LACP, DHCP, RIP/OSPF, ACL, QoS.
OS
  • Firmware upgradeable

SBM-GEM-X2C+ (1/10 GbE)

Internal Ports
  • Up to twenty 1-Gbps downlink ports for LAN interfaces of the server blades
External Uplink Ports
  • Three 10-Gbps uplink ports (Two CX4, stackable & One SFP+)
  • Two 1-Gbps RJ-45 uplink ports
Type
  • Layer-2 / 3 switch
Switching Capacity
  • 112 Gbps
Trunking
  • Link aggregation support (802.3ad-full)
Jumbo Frame Support
  • Up to 9k bytes
Remote Management
  • Browser-based management / CLI
Protocols
  • STP, RSTP, MSTP, IGMP snooping, 802.1x, LACP, DHCP, RIP/OSPF, ACL, QoS
OS
  • Firmware upgradeable

SBM-GEM-001 (1GbE)

Internal Ports
  • Fourteen 1-Gbps downlink ports for LAN interfaces of the server blades
External Uplink Ports
  • Ten 1-Gbps uplink RJ-45 ports
Type
  • Layer-2 switch
Switching Capacity
  • 48 Gbps
Trunking
  • Link aggregation support (802.3ad-static)
Jumbo Frame Support
  • Up to 9k bytes
Remote Management
  • Browser-based management
Protocols
  • STP, RSTP, 802.1x
OS
  • Firmware upgradeable


Pass-Through
The 1-Gigabit Ethernet Pass-through Module (part ID SBM-GEM-002) is a non-configurable pass-through module that includes fourteen 1-Gb/s external uplink (RJ45) ports and fourteen internal 1-Gb/s downlink ports for the SuperBlade®'s LAN interfaces. This module also has two internal Ethernet paths to the CMM(s) for viewing module temperature and voltage (but not for configuration, since all connections are fixed).

The 10-Gigabit Ethernet Pass-through Module (part ID SBM-XEM-002M) is also a non-configurable pass-through module. It provides fourteen 10-Gb/s external uplink (SFP+) ports and fourteen internal 10-Gb/s downlink ports for the SuperBlade®'s LAN interfaces. Internal links are provided by a mezzanine card (e.g., AOC-IBH-XDS) with 10-Gigabit Ethernet support (see the AOC-IBH-XDS/XDD/XQS/XQD item descriptions below).

The TwinBlade 1-Gigabit Ethernet Pass-through Module (part ID SBM-GEP-T20) is a non-configurable pass-through module that includes twenty 1-Gb/s external uplink (RJ45) ports and twenty internal 1-Gb/s downlink ports for the TwinBlade's LAN interfaces. This module also has an internal I2C path to the CMM for viewing module temperature and voltage (but not for configuration, since all connections are fixed).

SBM-XEM-002M †

Internal Ports
  • Fourteen 10-Gbps downlink XAUI ports
External Uplink Ports
  • Fourteen SFP+ uplink ports fixed at 10Gbps (no auto-negotiation)
Type
  • Ethernet pass-through module
Connections
  • 10GBASE-SR, 10GBASE-LRM, 10GBASE-ER, 10GBASE-LR, Twinax


SBM-GEM-002

Internal Ports
  • Fourteen 1-Gbps downlink ports for LAN interfaces of server blades
External Uplink Ports
  • Fourteen RJ-45 uplink ports fixed at 1-Gbps (no auto-negotiation)
Type
  • Ethernet pass-through module
Connections
  • N/A


SBM-GEP-T20 *

Internal Ports
  • Twenty 1-Gbps downlink ports for LAN interfaces of the server blades
External Uplink Ports
  • Twenty 1-Gbps uplink RJ-45 ports fixed at 1 Gbps (no auto-negotiation)
Type
  • Ethernet pass-through module
Connections
  • N/A

† = for SBE-710E and SBE-714E series enclosure

* = for SBE-720E and SBE-720D enclosures only; one SBM-GEP-T20 per SBE-720E; two SBM-GEP-T20 modules per SBE-720D


InfiniBand Modules

InfiniBand Switch Module
The InfiniBand Switch Modules are switch-based, point-to-point, bi-directional serial link systems. They provide high-speed interconnectivity among the blade modules and to external InfiniBand peripherals, and are especially useful in clustered high-performance computing. The SBM-IBS-001 InfiniBand switch supports up to fourteen internal and ten external 4X DDR (20-Gbps) connections. The SBM-IBS-Q3616 InfiniBand Switch Module supports up to 20 internal and up to 16 external 4X QDR (40-Gbps) connections. The SBM-IBS-Q3616M adds the capability for installation of a BMB-CMM-002 mini-CMM, allowing dual/redundant links from each Blade (requires AOC-IBH-XQD) to redundant switches. The SBM-IBS-Q3618 InfiniBand Switch Module supports up to 18 internal and up to 18 external 4X QDR (40-Gbps) connections. Like the SBM-IBS-Q3616M, the SBM-IBS-Q3618M adds the mini-CMM capability for dual/redundant links from each Blade (requires AOC-IBH-XQD) to redundant switches.
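Once an HCA-equipped blade is linked through one of these switch modules, the negotiated link can be checked from the blade's OS. A minimal sketch using the standard InfiniBand diagnostics (assumes the OFED/rdma-core tools are installed on a Linux blade):

```shell
# Hedged sketch: verify an InfiniBand link from a blade through the
# chassis switch module using standard OFED/rdma-core diagnostics.
ibstat                    # per-port state, LID, and rate (40 for a 4X QDR link)
ibv_devinfo -v | grep -E 'active_width|active_speed|state'
ibhosts                   # list hosts reachable through the switch fabric
```

A 4X QDR port should report a rate of 40 and state Active once a subnet manager is running somewhere on the fabric (e.g., opensm on one node).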

InfiniBand Pass-Through
The SBM-IBP-D14 InfiniBand Pass-Through is a non-configurable pass-through with fourteen internal downlinks to blades and fourteen external 4x DDR (20-Gbps) connections using CX-4 cables of up to 7 m length.

SBM-IBS-F3616M

Switch Chip
  • Mellanox SwitchX
Internal Ports
  • Twenty FDR10/FDR ports at 40/56Gbps
External Uplink Ports
  • Sixteen 4x FDR with QSFP connectors
  • 4X FDR (56-Gbps) non-blocking architecture with 56Gbps through external ports
Bandwidth
  • 3.392Tbps total switch bandwidth (36-Port)


SBM-IBS-Q3616*/ SBM-IBS-Q3616M* and SBM-IBS-Q3618*/ SBM-IBS-Q3618M*

Switch Chip
  • Mellanox InfiniScale IV
Internal Ports
  • Twenty (SBM-IBS-Q3616/M)
  • Eighteen (SBM-IBS-Q3618/M)
External Uplink Ports
  • Sixteen 4X QDR with QSFP connectors (SBM-IBS-Q3616/M)
  • Eighteen 4X QDR with QSFP connectors (SBM-IBS-Q3618/M)
Bandwidth
  • 4x QDR (40-Gbps) non-blocking architecture 2.88Tbps total switch bandwidth (36-Port)


SBM-IBS-001 (IB Switch) †

Switch Chip
  • Mellanox InfiniScale III
Internal Ports
  • Fourteen Internal 4x DDR Ports
External Uplink Ports
  • Ten 4x DDR external copper ports (CX-4 Connectors)
Bandwidth
  • 4x DDR (20-Gbps) non-blocking architecture 960-Gbps total switch bandwidth (24-Port)


SBM-IBP-D14 (IB Pass-Through) †

Internal Ports
  • Fourteen internal 4x DDR ports (20Gbps)
External Uplink Ports
  • Fourteen external 4x DDR copper ports (20Gbps, CX-4 connectors)

* = for SBE-710Q, SBE-714Q and SBE-720E series enclosures

† = for SBE-710E and SBE-714E series enclosure


Mezzanine HCA Cards
For any blade to access the InfiniBand module, it must have an InfiniBand mezzanine HCA card installed on its mainboard. The AOC-IBH-002, AOC-IBH-XDD, and AOC-IBH-XDS mezzanine cards provide this 20-Gbps connectivity through the backplane to the InfiniBand switch module or InfiniBand Pass-through module. The AOC-IBH-XQS and AOC-IBH-XQD mezzanine cards provide this connectivity at the QDR (40Gbps) rate when used with the QDR InfiniBand switch.

The AOC-IBH-XDS, AOC-IBH-XDD, AOC-IBH-XQS, and AOC-IBH-XQD can alternatively be used for 10Gbps Ethernet connectivity when used in concert with the SBM-XEM-002M 10Gbps Pass-through module or the SBM-XEM-X10SM 10Gbps switch. The AOC-XEH-iN2 is also available for 10Gbps connectivity.
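Because these ConnectX-based cards can serve either as InfiniBand HCAs or as 10GbE NICs, the port personality is selected in the host driver. A hedged sketch for a Linux blade using the upstream mlx4 sysfs interface (the PCI address is a placeholder; vendor OFED releases may provide a helper script such as connectx_port_config instead):

```shell
# Hedged sketch: select the port personality ("ib" or "eth") on a
# ConnectX-based mezzanine card via the upstream mlx4 driver's sysfs knob.
# The PCI address below is an example only; find yours with lspci.
PCI=0000:0b:00.0
cat /sys/bus/pci/devices/$PCI/mlx4_port1            # show current mode
echo eth > /sys/bus/pci/devices/$PCI/mlx4_port1     # switch port 1 to 10GbE
```

In Ethernet mode the card then pairs with the SBM-XEM-002M or SBM-XEM-X10SM; in InfiniBand mode it pairs with the IB switch or pass-through modules.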

AOC-IBH-X3QD (Mezzanine HCA)

Chipset
  • Mellanox ConnectX-3
InfiniBand Ports
  • Two 4x FDR10 40-Gbps ports
Ethernet Ports
  • One or two 10-Gbps ports
Power Consumption
  • 8.5W typical/ 9W max


AOC-IBH-X3QS (Mezzanine HCA)

Chipset
  • Mellanox ConnectX-3
InfiniBand Ports
  • One 4x FDR10 40-Gbps port
Ethernet Ports
  • One 10-Gbps port
Power Consumption
  • 7.5W typical/ 8W max


AOC-IBH-XQD (Mezzanine HCA)

Chipset
  • Mellanox ConnectX-2
InfiniBand Ports
  • Two 4x QDR 40-Gbps ports
Ethernet Ports
  • One or two 10-Gbps ports
Power Consumption
  • 10.4W typical/ 11W max


AOC-IBH-XQS (Mezzanine HCA)

Chipset
  • Mellanox ConnectX
InfiniBand Ports
  • One 4x QDR 40-Gbps port
Ethernet Ports
  • One 10-Gbps port
Power Consumption
  • 10.4W typical/ 11W max


AOC-IBH-XDD (Mezzanine HCA)

Chipset
  • Mellanox ConnectX
InfiniBand Ports
  • One or two 4x DDR 20-Gbps ports
Ethernet Ports
  • One or two 10-Gbps ports
Power Consumption
  • 10.4W typical/ 11W max


AOC-IBH-XDS (Mezzanine HCA)

Chipset
  • Mellanox ConnectX
InfiniBand Ports
  • Single 4x DDR 20-Gbps port
Ethernet Ports
  • Single 10-Gbps port
Power Consumption
  • 10.4W typical/ 11W max


AOC-IBH-002 (Mezzanine HCA)

Chipset
  • Mellanox InfiniHost III Lx DDR
InfiniBand Ports
  • Single 4x DDR 20-Gbps port
Power Consumption
  • 10.4W typical/ 11W max


AOC-XEH-iN2 (Mezzanine HCA)

Chipset
  • Intel 82599 Niantic chip
Ethernet Ports
  • Dual 10-Gbps ports
Power Consumption
  • 6.25W max