San Jose, Calif., March 19, 2024 – Credo Technology Group Holding Ltd (“Credo”) (Nasdaq: CRDO), an innovator in secure, high-speed connectivity solutions that deliver improved power efficiency as data rates and corresponding bandwidth requirements increase throughout the data infrastructure market, today announced a new family of HiWire Active Electrical Cables (AECs) designed for 400G AI/ML Backend Network connections to Top of Rack (TOR) switches.

The new family of HiWire AECs includes:

“400G AI/ML backend networks have proliferated in the past year and have migrated 112G/lane connectivity to Network Interface Cards (NICs) ahead of many customers’ networking plans,” said Don Barnetson, Vice President of Product at Credo. “Credo’s innovative HiWire AECs can help bridge this gap with in-cable speed shifting, enabling the use of legacy 12.8T TORs and Y-cable configurations for 25.6T and 51.2T TORs.”

Credo will demonstrate the new 400G AI/ML Backend Network HiWire AEC family at the upcoming Optical Fiber Communication Conference (OFC) in San Diego, CA, March 26 – 28, 2024. Conference attendees are encouraged to visit Credo in booth 3601 to learn more about these new HiWire devices.

Product Availability

Members of this new HiWire AEC family are sampling now with production scheduled for Q3/2024.

To learn more about the Credo products in this release, go to the product pages linked here.

About Credo 

Our mission is to deliver high-speed solutions to break bandwidth barriers on every wired connection in the data infrastructure market. Credo is an innovator in providing secure, high-speed connectivity solutions that deliver improved power efficiency as data rates and corresponding bandwidth requirements increase exponentially throughout the data infrastructure market. Our innovations ease system bandwidth bottlenecks while simultaneously improving power, security, and reliability. Our connectivity solutions are optimized for optical and electrical Ethernet applications, including the 100G (or gigabits per second), 200G, 400G, and 800G port markets, as well as the emerging 1.6T (or terabits per second) port market. Credo products are based on our proprietary Serializer/Deserializer (SerDes) and Digital Signal Processor (DSP) technologies. Our product families include Integrated Circuits (ICs) for the optical and line card markets, Active Electrical Cables (AECs) and SerDes Chiplets. Our intellectual property (IP) solutions consist primarily of SerDes IP licensing.

For more information, please visit https://www.credosemi.com. Follow Credo on LinkedIn.

ZR/ZR+ coherent pluggable modules, initially tailored for inter-data center connections among large hyperscalers using the OIF 100G/400G ZR standards, continue to find new applications. Because the modules plug directly into switches and routers, they eliminate the need for a separate optical transport layer.

The adoption of 400G Open ZR+ has further extended the reach of these applications, thanks to advanced error correction and dispersion compensation techniques. This evolution not only empowers hyperscalers, but also opens up lucrative opportunities for service providers, enabling them to leverage these enhanced “plus” version modules in various network environments, including cloud, urban, and regional networks.

ZR/ZR+ Success with Hyperscalers: A Paradigm Shift
Hyperscalers are looking to leverage ZR/ZR+ coherent pluggable modules to reap the benefits of seamless data transmission, reduced power consumption, lower costs, and a smaller footprint. One key strength of ZR/ZR+ modules lies in their interoperability, which enables seamless integration into existing infrastructure. All of this makes them highly attractive to both hyperscalers and service providers.

Challenges with Deploying ZR/ZR+ for Service Providers
Despite these many benefits, service providers face numerous challenges and obstacles when considering 400G and upcoming 800G rollouts with ZR+ coherent optics in their CIN, IP edge, aggregation, and backbone networks. Challenges include:

Introducing HiWire P3: A Game-Changing Solution
To address these challenges head-on, Credo unveiled the HiWire Pluggable Patch Panel (P3) at the OCP ’23 Show. This innovative solution empowers both service providers and hyperscalers by decoupling pluggable optics from core switching and routing hardware using Credo’s HiWire Active Electrical Cables (AECs). Operators gain the flexibility to choose whichever coherent optics they want in their systems, with support for both 400G and 800G optics.

At its core, HiWire P3 is a simple media-conversion box operating at Layer 1, providing an electrical trace between pairs of QSFP-DD ports. Ports are powered and cooled within the system and can be accessed over an I2C bus using a low-level CMIS management API. The current model supports 32 QSFP-DD ports: 16 provide up to 25 W to support 800G ZR+ modules, while the other 16 provide up to 15 W for AEC cables.
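As a purely illustrative sketch of what that low-level access can look like, the snippet below reads two standard CMIS lower-page registers from a QSFP-DD module over I2C. The register addresses come from the public CMIS specification, but the I2C bus number and the port-to-bus mapping are assumptions made for illustration; the P3’s actual management path is not detailed in this post.

```python
# Illustrative sketch only: read standard CMIS lower-page registers from a
# QSFP-DD module over I2C. The bus number and port mapping are hypothetical.
from smbus2 import SMBus

CMIS_I2C_ADDR = 0x50      # standard two-wire address for QSFP-DD modules
REG_IDENTIFIER = 0x00     # CMIS lower page byte 0: module identifier (0x18 = QSFP-DD)
REG_MODULE_STATE = 0x03   # CMIS lower page byte 3: module state in bits 3:1

def read_module_info(i2c_bus: int) -> dict:
    """Read basic CMIS identification and state from the module on the given bus."""
    with SMBus(i2c_bus) as bus:
        identifier = bus.read_byte_data(CMIS_I2C_ADDR, REG_IDENTIFIER)
        state = (bus.read_byte_data(CMIS_I2C_ADDR, REG_MODULE_STATE) >> 1) & 0x7
    return {"identifier": hex(identifier), "module_state": state}

if __name__ == "__main__":
    # Hypothetical example: panel port 0 mapped to host I2C bus 1.
    print(read_module_info(i2c_bus=1))
```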

The flexibility and scalability the HiWire P3 offers empower organizations to choose coherent optics tailored to their specific needs, supporting both 400G and 800G optics seamlessly. Some of the features enabled by HiWire P3 are:

With the introduction of HiWire P3, service providers and hyperscalers alike can overcome the challenges associated with deploying ZR/ZR+ modules, unlock their full potential, and enjoy benefits including:

As the optical networking landscape continues to evolve, solutions like HiWire P3 are paving the way for a new era of connectivity. By empowering service providers to embrace coherent pluggable technologies with confidence, Credo is driving innovation and propelling the industry forward. The future of networking is here, and it's brighter than ever, thanks to HiWire P3.

Learn how Credo HiWire P3 gives service providers and hyperscalers the freedom to decouple pluggable optics from core switching and routing hardware using Credo’s HiWire Active Electrical Cables (AECs).

Click here to view the video

No longer a ‘nice to have’ option, the move to 800G is now a necessity for data center operators wanting to remain relevant in the age of AI. In fact, as the time between each new generation shrinks, the industry is already looking beyond 800G to 1.6T solutions.

AI is fueling an Optical Boom

In traditional data centers, data flows in, is processed by a CPU on a general-purpose server, and then flows back out. With AI networks and LLM training, data still flows in, but a large number of GPUs operate together to process the data while communicating and sharing data over a back-end network. This server-to-server backend communication can easily require 8-10 times more bandwidth than the traditional front-end networks that only facilitate data input and output.
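The multiplier is easier to see with rough numbers. The figures below are purely illustrative assumptions (the post does not specify per-GPU NIC speeds or front-end uplink capacity), but they show how the ratio arises:

```python
# Purely illustrative, assumed numbers: one back-end NIC per GPU versus a
# modest front-end uplink per server. None of these figures come from the post.
gpus_per_server = 8
backend_gbps_per_gpu = 400       # assumed: one 400G back-end NIC per GPU
frontend_gbps_per_server = 400   # assumed: e.g., 2 x 200G front-end uplinks

backend_total_gbps = gpus_per_server * backend_gbps_per_gpu   # 3,200 Gb/s
ratio = backend_total_gbps / frontend_gbps_per_server         # 8x in this example
print(f"Back-end: {backend_total_gbps} Gb/s, about {ratio:.0f}x the front-end")
```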

Huge AI-driven optical deployments are gaining industry attention as users seek practical solutions that ease the migration while decreasing the power per bit. The initial concept proposed to address this need was Linear Pluggable Optics (LPO). LPO completely removes the DSP from the transceiver in an attempt to achieve power and cost savings, relying on the host ASIC to perform most of the signal conditioning and channel equalization. While this approach does reduce cost and power, it creates major challenges in interoperability, network robustness, and volume deployments, significantly increasing OPEX, time to market, and overall risk.

Recently an alternative solution has been gaining momentum that addresses LPO shortcomings while answering the industry call for reduced power and cost. This solution is known as Linear Receive Optics (LRO). LRO removes the DSP from the module receive path but maintains the DSP in the module transmit path. The LRO implementation achieves an optimal balance of standards compliance, interoperability, network reliability, ease of deployment and power efficiency.

When compared to LPO, LRO also offers significant usability advantages. LPO optical transmit performance relies heavily on the host ASIC, which is a completely independent device. As such, it is impossible for the module vendor to calibrate an LPO transceiver at the factory. It must be shipped to the end user, who then becomes responsible for calibration, performance, and interoperability when the module is integrated into a large network. This presents a major obstacle to efficient deployment and creates roadblocks for interoperability.

With LRO, the optical transmit performance is decoupled from the host with the addition of a transmit DSP.  IEEE compliance is maintained with no manual tuning of the host ASIC.  Every module is pre-programmed to meet the same optical specifications regardless of the end application or the host system. The transmit DSP also includes additional diagnostic capabilities and permits the use of different optics, allowing for further cost optimization.

A look into the future

Moving forward, certain applications will require pluggable optical transceivers with a full transmit/receive DSP implementation. This is especially true in networks that are not entirely homogeneous and are built with components from many different vendors. However, where the system architecture allows, LRO can cut the DSP power in half and significantly reduce transceiver cost while maintaining robust performance and general interoperability. This is a clear advantage when interconnecting hundreds of thousands of GPUs in an AI cluster.

Network planners can leverage the power and cost savings of LRO for future builds even beyond the 112G/lane, 800G generation.  With its clear path to deployment at 224G/lane in 1.6Tb/s optics, LRO solutions will continue to be a vital tool for power and cost savings in future generations.

Credo Dove 850

The Credo Dove 850 is the industry’s first DSP optimized for LRO. A unidirectional 8 x 112 Gb/s DSP, the Dove 850 was purpose-built for this LRO architecture.

Visit Credo at OFC 2024 in San Diego to find out more about the Dove 850 and the advantages it has to offer. To make an appointment or learn more about Credo’s optical products, please reach out to sales@credosemi.com.

The ingenuity of Chiplets and their multitude of use cases and designs have also introduced challenges that can undermine the benefits and appeal of Chiplets. Hence the increased focus on standardization and the effort to create a level playing field that promotes adoption.

Chiplets’ growing popularity

Let’s begin by outlining Chiplet benefits, as every keynote speaker did in their first few slides, in no particular order.

Market research firm Yole presented the following data to explain the value of Chiplets, and I think it captured the overall message of the conference:

The obvious reason companies want to use Chiplets is the overwhelming economic value. Previously, companies had to spend millions of dollars on a single chip that could require millions more for revisions and still suffer further cost increases due to large die sizes and poor yields. With Chiplets, those products can shrink in size, achieve better yields, and get to market faster with higher margins. Who wouldn’t want to save money and make money at the same time?
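The yield argument can be made concrete with the standard Poisson die-yield approximation, Y = exp(-A x D0). The defect density and die areas below are illustrative assumptions, not figures from Yole, but they show why splitting a large die into smaller Chiplets improves yield:

```python
import math

# Illustrative only: Poisson die-yield approximation, Y = exp(-A * D0).
# Defect density and die areas are assumptions, not data from Yole.
defect_density = 0.1        # defects per cm^2 (assumed)
monolithic_area = 8.0       # cm^2, i.e., one 800 mm^2 die
chiplet_area = 2.0          # cm^2, i.e., four 200 mm^2 chiplets instead

monolithic_yield = math.exp(-monolithic_area * defect_density)  # ~45%
chiplet_yield = math.exp(-chiplet_area * defect_density)        # ~82% per chiplet

print(f"Monolithic die yield: {monolithic_yield:.0%}")
print(f"Per-chiplet yield:    {chiplet_yield:.0%}")
```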

What markets can take advantage of Chiplets?

Today the largest adopter is the server market, due largely to the growth of the data center and the strains created by Artificial Intelligence and Machine Learning. CPU and GPU products from AMD, NVIDIA, Intel, and others are becoming power hungry, require more compute and memory, are growing in size, and are seeing worsening yields. The ability to disaggregate these products and simplify them by making them modular is a solution to these problems.

In their presentation, Yole separated out specific technologies and markets that are adopting Chiplets, but ultimately they are all server related, as HBM (high-bandwidth memory) is now the memory of choice for AI, generative AI, and data center applications.

After server adoption, Yole sees Chiplets moving into the PC market, then smartphones and automotive. It should be noted, however, that automotive adds a layer of complexity in test and standards requirements that will slow and complicate adoption, and smartphones have an overall cost threshold that cannot be crossed.

What makes Chiplets great is also what makes them not so great. Their strength is modularity, but each Chiplet is an independent product that can be built in different ways, with different characteristics and specifications. Every Chiplet therefore requires its own independent set of quals and productization. This is not a one-size-fits-all solution, and careful consideration must be given to the cost, complexity, and advantages of building a product from Chiplets. With each variation of a Chiplet, the process resets and a brand-new analysis and qualification cycle begins.

Creating an open ecosystem

So how do you simplify this and create an ecosystem that anyone can enter? At the Chiplet Summit, everyone was there to promote options. Two standards bodies were present to promote their standardization work: the UCIe Consortium, which is focused on the interconnect, and OCP (Open Compute Project), which is focused on building an “Open Chiplet Economy”. There were packaging companies focused on 2.5D and 3D packaging technologies and on stacking dies using TSVs (through-silicon vias), as well as companies discussing interposers and CoWoS technology. But even with these potential options and offerings, it is too early to know what the right solution might be. For now, the best solution is the one that fits the needs of the end user.

Taking this a step further, most attendees had no idea what they were there for or what to ask. Our very own Jeff Twombly, Credo VP of Business Development, participated in a panel titled “Best Ways to Optimize Chiplets”. His first question was, “Who in the audience has had experience working with Chiplets to date?” Of the 30-40 attendees, only about six raised their hands, roughly 20% of the audience. Three of those six were on the panel with him, so only about 10% of the audience had hands-on experience. Most attendees do not have experience and were looking to see what the trends are and to identify key players in the Chiplet space.

There is no doubt we will start to see new players in this space, including startups looking for an exit (there was a session on the final day called Chiplets for Entrepreneurs, in which a panel of “industry experts” and incubator/VC investors gave advice on how to approach this market). Moving forward we will see a stronger push for standards and simplification, which I believe will create commodity products, and commoditization almost always results in consolidation. But we are still far from that point.

At Credo we are a leader in the IP and Chiplet business. We are in mass production, shipping Chiplets that utilize XSR and BoW (Bunch of Wires) interfaces, and we have a roadmap that will continue to define us as a leader in this space. The adoption of standards will take time and the barriers to entry will remain high, allowing us to continue to be a pioneer here.

We will have to wait for next year’s Summit to see the overall progress of the Chiplet business and what improvements and solutions are created in 2024. If you would like to learn more about our Chiplet and IP portfolio contact sales@credosemi.com. Thanks to the team at Yole for sharing their data with us.
