OmniConnect

Credo OmniConnect is a next-generation interconnect architecture designed to break through the memory bottlenecks that limit AI inference scalability. It combines ultra-efficient 112G VSR SerDes with a lightweight AXI framer to deliver high-bandwidth, low-latency, and power-optimized connectivity between compute engines and memory—whether on-die, off-substrate, or across chiplets.

Credo OmniConnect Products


Credo OmniConnect is a versatile AXI-over-VSR SerDes bus that enables both die-to-die interconnect and scale-up networking by connecting multiple compute engines via external chiplets. It incorporates robust telemetry features for reliability and uptime.

Weaver

Rate: 112G
Number of Lanes: 12
Memory Interface: LPDDR5X
Form Factor: Chiplet
Reach: 250mm
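
Taken together, the per-lane rate and lane count above imply the raw link bandwidth available per Weaver chiplet. The sketch below is a plain arithmetic estimate using only the numbers quoted here; it assumes 112G is the raw per-lane signaling rate, and since framing and encoding overhead are not specified on this page, usable throughput will be somewhat lower.

```python
# Rough bandwidth estimate per Weaver chiplet, using only the figures
# quoted in the spec list above. Assumes 112G is the raw per-lane signaling
# rate; protocol and encoding overhead are not specified on this page, so
# usable AXI payload bandwidth will be somewhat lower.

LANE_RATE_GBPS = 112   # per-lane rate (112G VSR SerDes)
NUM_LANES = 12         # lanes per Weaver chiplet

raw_gbps = LANE_RATE_GBPS * NUM_LANES
print(f"Raw link bandwidth: {raw_gbps} Gb/s "
      f"(~{raw_gbps / 1000:.2f} Tb/s, ~{raw_gbps / 8:.0f} GB/s before overhead)")
```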


Credo OmniConnect Features

As AI inference workloads grow more complex and memory-intensive, memory density and bandwidth have proven to be limiting factors in achieving maximum compute performance. Additionally, inference models and context windows are scaling much more rapidly than memory interfaces. Credo’s OmniConnect solves these challenges through a rich feature set. 

Low-cost, low-power memory fanout

Enables lower-cost LPDDR memory to achieve high bandwidth and high density for evolving AI inference workloads.

10x boost in beachfront I/O density

Weaver provides 2Tb/s/mm beachfront density compared to 0.18Tb/s/mm for conventional LPDDR5X.

20x increase in memory density

Up to 6.4TB of memory with Weaver, compared to a maximum of 256GB using LPDDR5X (the arithmetic behind the 10x and 20x figures is sketched below).

Future-proof compute for memory transition

Allows a simple transition from LPDDR5X to LPDDR6 without any change to the XPU.
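
For readers who want to see where the headline multipliers come from, the short sketch below recomputes them from the figures quoted in the feature descriptions above (2 Tb/s/mm versus 0.18 Tb/s/mm of beachfront density, and up to 6.4TB versus 256GB of capacity). It is a plain arithmetic check of the page's own numbers, not an additional specification.

```python
# Arithmetic behind the headline multipliers, using only the figures
# quoted in the feature descriptions above.

weaver_beachfront = 2.0       # Weaver beachfront I/O density, Tb/s/mm
lpddr5x_beachfront = 0.18     # conventional LPDDR5X beachfront density, Tb/s/mm

weaver_capacity_gb = 6.4 * 1024   # up to 6.4 TB with Weaver, in GB
lpddr5x_capacity_gb = 256         # quoted LPDDR5X maximum, in GB

beachfront_gain = weaver_beachfront / lpddr5x_beachfront
capacity_gain = weaver_capacity_gb / lpddr5x_capacity_gb

print(f"Beachfront I/O density gain: ~{beachfront_gain:.0f}x")  # ~11x, quoted as a 10x boost
print(f"Memory capacity gain:        ~{capacity_gain:.0f}x")    # comfortably above the quoted 20x
```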