
400 Gbps Platforms Open the Doors on Many New Topologies Even Without Widespread Optics Availability

While cloud providers have always been willing to rethink their networks outside industry consensus (pioneering disaggregation, whitebox switches, 25 Gbps, DAC, splitter cables, fixed aggregation and core boxes, etc.), one part of the network has remained consistent: cloud providers continue to run fiber from the top-of-rack switch all the way to the core, on the common reasoning that distance and reliability at those tiers dictate fiber. Part of this rationale is that the aggregation layer is usually dispersed throughout a data center rather than centrally located.

Active Ethernet Cables (AECs), such as HiWire™ AEC, which has already gained the support of a consortium of 25 industry leaders in connectivity and data-center technology, open the door to changing this part of the architecture. 400 Gbps optics lag the availability of 400 Gbps switches by almost a year, and that delay is driving an increasing amount of traffic, and an increasing number of tiers, into existing 100 Gbps switches. AEC is similar to fiber in quality and reliability, but in a smaller-diameter copper form factor and at a lower cost.

Cloud providers of all sizes, enterprises, and telco service providers can all benefit from rethinking the aggregation layers of their networks, especially now, while 400G optics are still months away. Centralizing the aggregation switches in one part of the data center and connecting them with AEC at 400 Gbps allows a reduction in both the number of switches and the number of tiers in the data center. This can lead to significant savings in both OPEX and CAPEX. Customers worried about the blast radius can deploy multiple aggregation racks.

CAPEX savings come from reducing the number of switches and optics needed. OPEX savings come from reducing the number of powered ports, and given the pressure on data-center power budgets, every bit helps.
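As a rough illustration of where those savings come from, the sketch below tallies switches and optical transceivers for one aggregation pod before and after consolidation. Every number here is an illustrative assumption, not vendor or market data.

```python
# Hypothetical before/after tally for one aggregation pod.
# All counts are illustrative assumptions, not vendor data.

LOW_RADIX_SWITCHES = 6    # 3.2 Tbps boxes replaced per pod (assumed)
HIGH_RADIX_SWITCHES = 1   # consolidated 12.8 Tbps box
OPTICS_PER_LINK = 2       # one transceiver at each end of a fiber link
LINKS_MOVED_TO_AEC = 32   # fiber links that become AEC instead (assumed)

switches_saved = LOW_RADIX_SWITCHES - HIGH_RADIX_SWITCHES
optics_saved = LINKS_MOVED_TO_AEC * OPTICS_PER_LINK

print(switches_saved)  # fewer switches to buy, power, and manage
print(optics_saved)    # fewer optical transceivers to purchase
```

Under these assumed counts, each consolidated pod needs five fewer switches (CAPEX and OPEX) and 64 fewer transceivers (CAPEX), since each link that moves to AEC eliminates the optic at both ends.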

Customers today can get immediate cost savings by switching future builds to this new topology, decreasing their reliance on 400G optics, which will be expensive and in limited supply. Another advantage is that a customer can deploy the available optics only where needed, allowing broader adoption of 400G and 12.8 Tbps.

A single 12.8 Tbps switch can replace six or more 3.2 Tbps switches, since the higher radix requires fewer inter-switch links than a tier of low-radix 3.2 Tbps switches needs to interconnect itself. The 6:1 consolidation ratio is compelling, especially when power is taken into account by comparing one 12.8 Tbps switch to six fully loaded 3.2 Tbps switches. AEC helps bridge the gap until 400G optics are available, remains a good alternative even once they arrive, and, with a roadmap to 800G, will prove to be a long-term copper alternative to optics in the data center.
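The radix arithmetic behind that 6:1 figure can be sketched as follows. The port counts and the share of ports assumed lost to same-tier inter-switch links are illustrative assumptions, not vendor specifications.

```python
# Usable downlink capacity: one high-radix switch vs. a mesh of low-radix ones.
# Port counts and the mesh-link fraction are illustrative assumptions.

HIGH_RADIX_PORTS = 32   # assumed 32 x 400G ports = 12.8 Tbps
LOW_RADIX_PORTS = 32    # assumed 32 x 100G ports = 3.2 Tbps
LOW_RADIX_SWITCHES = 6
MESH_FRACTION = 0.5     # assumed share of low-radix ports burned on
                        # inter-switch links at the same tier

high_radix_gbps = HIGH_RADIX_PORTS * 400
low_radix_gbps = int(LOW_RADIX_SWITCHES * LOW_RADIX_PORTS
                     * (1 - MESH_FRACTION) * 100)

print(high_radix_gbps)  # usable Gbps, all on one box
print(low_radix_gbps)   # usable Gbps spread across six boxes
```

Under these assumptions the single high-radix switch delivers 12,800 Gbps of usable capacity against 9,600 Gbps for the six-box mesh: the low-radix tier burns half its ports just reaching its peers, which is exactly why the higher radix lets a design drop switches and tiers.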
