Marvell Technology Deep Dive: Powering AI Data Centers with 1.6T PAM4 DSP, 2nm Custom SRAM, and 51.2T Teralynx Switching

Last Updated on March 29, 2026 by 総合編集組

Introduction

Marvell Technology (NASDAQ: MRVL) has successfully transformed from a traditional storage-chip supplier into a key player in AI infrastructure. The company provides essential high-speed interconnects, custom silicon solutions, and networking technologies that enable massive GPU and XPU clusters to operate efficiently. While NVIDIA dominates accelerators, Marvell acts as the “glue” for data movement, storage, and protection in modern data centers. This summary highlights the company’s evolution, core technologies, financial performance, competitive positioning, and future outlook based on recent developments.

https://www.marvell.com/products/data-center-switches.html

Company Evolution and Strategic Acquisitions

Founded in 1995 by Sehat Sutardja, Weili Dai, and Pantas Sutardja in Santa Clara, California, Marvell initially focused on high-performance, low-power semiconductors, starting with CMOS-based hard disk drive read channels. The company went public on NASDAQ in June 2000. Under CEO Matt Murphy since 2016, Marvell executed “Marvell 2.0” — exiting low-margin consumer and smartphone businesses to focus on cloud data centers, enterprise networking, 5G infrastructure, and automotive Ethernet.

Key acquisitions have built Marvell’s technology portfolio:

  • Cavium (2018): Added ARM processors, networking, and security chips for multi-core capabilities.
  • Avera Semiconductor (2019): Brought custom ASIC design expertise based on IBM technology, strengthening ties with hyperscalers.
  • Inphi (2021): Delivered high-speed electro-optics, PAM4, and coherent DSP — critical for AI cluster interconnects.
  • Innovium (2021): Introduced Teralynx Ethernet switching chips to challenge leaders in large-scale data center switching.
  • Celestial AI (completed in early 2026): Added Photonic Fabric optical interconnect technology to overcome copper limitations in next-generation scale-up connectivity.
  • XConn Technologies (2026): Expanded PCIe and CXL switching for enhanced AI memory pooling and device expansion.

These moves positioned Marvell as a full-stack data infrastructure provider rather than a simple component vendor.

Core Driver: Custom Silicon and ASIC Partnership

In the AI era, hyperscalers like Amazon, Microsoft, and Google design their own accelerators (XPUs) for better power-performance-area (PPA) efficiency. Marvell serves as a trusted ASIC partner, often called the “shepherd” for cloud chip designs.

The company has a multi-year supply agreement with Amazon Web Services (AWS). Marvell helps convert internal blueprints into silicon, integrating high-speed SerDes interfaces and memory controllers. It has been deeply involved in Trainium 2 production and continues supporting Trainium 3 and 4 development. These training-focused chips reportedly reduce training costs by up to 50% compared to general-purpose GPUs.

Microsoft selected Marvell for its Maia 100 custom AI chip, with the partnership extending to Maia 200 and 300 series. Industry estimates suggest this collaboration could generate $10-12 billion in cumulative revenue for Marvell by 2027.

Marvell’s technical edge comes from advanced platform tools:

  • Custom HBM architecture: Reduces memory interface power by 70% and saves 25% die area.
  • PIVR (Package Integrated Voltage Regulator): Integrates voltage regulation directly under the processor, cutting transmission losses by 85% and allowing more compute nodes per rack.
  • 2nm custom SRAM (announced June 2025): Halves area while boosting bandwidth 17x — vital for AI inference chips requiring large caches.
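The PIVR benefit above comes down to basic power-delivery physics: losses along the distribution path scale with the square of the current, so converting down to core voltage right under the die (instead of at the board) keeps the high-current segment vanishingly short. A minimal sketch, using hypothetical voltages and path resistance (the article's 85% figure covers the full conversion chain, including regulator efficiency, which this toy model ignores):

```python
# Illustrative I^2*R loss comparison for package-integrated voltage
# regulation (PIVR). All numbers are hypothetical, chosen only to show
# why delivering power at a higher voltage and stepping down next to
# the die cuts distribution losses; they are not Marvell specifications.

def distribution_loss_w(power_w: float, volts: float, path_mohm: float) -> float:
    """I^2 * R loss along the power-delivery path."""
    current = power_w / volts               # I = P / V
    return current ** 2 * (path_mohm / 1000.0)

CHIP_POWER_W = 700.0                        # hypothetical XPU power draw
PATH_MOHM = 0.2                             # hypothetical board/socket resistance

# Conventional: the board VRM steps down to ~0.8 V core voltage, so the
# full core current crosses the board/socket path.
conventional = distribution_loss_w(CHIP_POWER_W, volts=0.8, path_mohm=PATH_MOHM)

# PIVR-style: the same path carries power at 12 V, and the final
# step-down happens inside the package, so path current is 15x lower.
integrated = distribution_loss_w(CHIP_POWER_W, volts=12.0, path_mohm=PATH_MOHM)

print(f"conventional path loss:  {conventional:.1f} W")
print(f"integrated-VR path loss: {integrated:.2f} W")
print(f"path-loss reduction:     {1 - integrated / conventional:.1%}")
```

Because loss scales with current squared, raising the distribution voltage by 15x cuts path loss by 225x in this idealized model; real-world savings are smaller once converter losses are counted.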

High-Speed Optical Interconnect: The “Blood Vessels” of AI Clusters

As AI clusters scale to millions of XPUs, traditional copper interconnects face heat and signal degradation issues beyond rack levels. Marvell leads in optical DSP with products spanning 400G to 1.6T bandwidth.

The Ara platform features the industry’s first 3nm 1.6 Tbps PAM4 optical DSP, supporting the ultra-low-power optical modules critical for energy-constrained data centers. The Orion coherent DSP series enables 800G long-distance interconnects of up to 120 km, allowing efficient collaboration between geographically distributed data centers.
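PAM4's role in reaching 1.6 Tbps is simple arithmetic: four amplitude levels carry 2 bits per symbol, doubling throughput at a given symbol rate versus NRZ. The back-of-the-envelope check below uses lane counts and baud rates typical of 1.6T modules in the industry, not a confirmed breakdown of the Ara DSP:

```python
# Back-of-the-envelope check of how PAM4 signaling reaches 1.6 Tbps.
# Lane count and baud rate reflect common industry configurations for
# 1.6T optics, not a confirmed per-product breakdown; FEC overhead is
# ignored for simplicity.
import math

BITS_PER_SYMBOL = math.log2(4)   # PAM4: 4 amplitude levels = 2 bits/symbol
LANES = 8                        # typical electrical lane count for 1.6T
BAUD_GBD = 100                   # ~100 GBd line rate per lane

per_lane_gbps = BAUD_GBD * BITS_PER_SYMBOL    # 200 Gbps per lane
total_tbps = per_lane_gbps * LANES / 1000     # aggregate throughput

print(f"{per_lane_gbps:.0f} Gbps/lane x {LANES} lanes = {total_tbps:.1f} Tbps")
# NRZ (1 bit/symbol) would need double the baud rate for the same
# throughput, which is why PAM4 dominates at these speeds.
```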

Looking ahead, Marvell advances co-packaged optics (CPO) with 3D silicon photonics engines delivering up to 25.2 Tbps interconnect bandwidth inside processor packages. The Celestial AI acquisition brings photonic fabric technology that uses light instead of electrons for chip-to-chip data movement, aiming to break the memory wall bottleneck.

Networking Disruption with Teralynx Ethernet Switches

Broadcom has long dominated data center switching, but Marvell’s Teralynx 10 (from Innovium) offers a compelling alternative for AI training fabrics. Key specs include:

  • Up to 51.2 Tbps total throughput with 64 ports of 800GbE.
  • Ultra-low latency of ~560 nanoseconds in cut-through mode — essential for frequent parameter synchronization in distributed training.
  • High-radix architecture that reduces network tiers, cutting latency by over 40% and equipment count by 33%, improving total cost of ownership (TCO).
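The high-radix claim above follows from standard Clos (leaf-spine) topology math: the more ports per switch, the more endpoints fit in fewer tiers, and each tier avoided removes switch hops from every path. A sketch using the textbook non-blocking Clos relationships (radix and latency figures taken from the specs above; the topology formulas are generic, not Marvell-specific):

```python
# Why switch radix drives fabric size and tier count in a non-blocking
# leaf-spine/Clos topology. Formulas are the standard Clos
# relationships, not Marvell-specific data.

def two_tier_hosts(radix: int) -> int:
    """Leaf-spine: each leaf splits ports 50/50 between hosts and
    spines, so capacity is radix^2 / 2 endpoints."""
    return radix * radix // 2

def three_tier_hosts(radix: int) -> int:
    """Adding a tier multiplies reach to radix^3 / 4 endpoints."""
    return radix ** 3 // 4

RADIX = 64          # 64 x 800GbE ports on a 51.2 Tbps switch
CUT_THROUGH_NS = 560

print(f"2-tier, radix {RADIX}: {two_tier_hosts(RADIX):,} endpoints")
print(f"3-tier, radix {RADIX}: {three_tier_hosts(RADIX):,} endpoints")

# A cluster that fits in 2 tiers instead of 3 traverses 3 switches on
# the worst-case path instead of 5 — which is where a >40% switch-hop
# latency reduction comes from:
print(f"2-tier worst path: {3 * CUT_THROUGH_NS} ns of switch latency")
print(f"3-tier worst path: {5 * CUT_THROUGH_NS} ns of switch latency")
```

With radix 64, a two-tier fabric reaches 2,048 endpoints; flattening a three-tier design into two cuts the worst-case path from five switch traversals to three, a 40% reduction in cumulative switch latency.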

Teralynx 10 fully supports open-source SONiC OS, enabling customers to build custom telemetry and self-healing tools for predictive maintenance.
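What "custom telemetry and self-healing" might look like in practice: a hypothetical sketch of threshold logic over streamed per-port counters, the kind of tool a SONiC user could build on the platform's open counter data. The counter names, threshold, and drain workflow here are illustrative inventions, not the actual SONiC schema or API:

```python
# Hypothetical sketch of a self-healing check built on open switch
# telemetry: flag ports whose FEC-corrected error rate crosses a
# threshold so a maintenance workflow can drain and reroute them.
# Counter names, the threshold, and the drain concept are illustrative
# only — not the real SONiC database schema or any Marvell API.

ERROR_RATE_THRESHOLD = 1e-9   # hypothetical acceptable corrected-error rate

def links_to_drain(port_stats: dict[str, dict[str, int]]) -> list[str]:
    """Return ports whose FEC-corrected-error rate exceeds the threshold."""
    bad = []
    for port, stats in port_stats.items():
        bits = stats["tx_bits"]
        if bits and stats["fec_corrected"] / bits > ERROR_RATE_THRESHOLD:
            bad.append(port)
    return bad

# Sample counter snapshot (fabricated numbers for illustration):
sample = {
    "Ethernet0": {"tx_bits": 10**15, "fec_corrected": 10**4},  # healthy
    "Ethernet8": {"tx_bits": 10**15, "fec_corrected": 10**8},  # degrading
}
print(links_to_drain(sample))  # → ['Ethernet8']
```

In a real deployment this logic would subscribe to streamed counters rather than poll a snapshot, but the predictive-maintenance idea is the same: act on error-rate trends before a link fails outright.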

Advanced Process and Packaging Leadership

Marvell is an early adopter of TSMC’s most advanced nodes. In March 2025, it unveiled a 2nm silicon platform for next-generation AI accelerators, CPUs, and high-performance networking chips. Innovations include 3D bidirectional I/O at 6.4 Gbit/s per direction (doubling traditional unidirectional bandwidth) and modular 3nm/2nm IP portfolios covering 112G/224G SerDes, CXL controllers, and security IP.

The 2nm process addresses exploding AI model sizes and data center power demands, projected to consume 6-12% of U.S. electricity by 2028.

Financial Performance and Growth Outlook

Marvell has completed its structural shift toward AI and cloud. For fiscal 2026 (ended January 31, 2026), net revenue reached a record $8.195 billion, up 42% year-over-year. Data center revenue hit $6.1 billion (74% of total), growing 46.6%. The company swung to GAAP net income of $2.67 billion. Cash reserves strengthened significantly.

Analysts project fiscal 2027 revenue approaching $11 billion, with data center growth around 40%, driven by Agentic AI traffic demands and optical module upgrades. Non-GAAP EPS is expected to exceed $5. Recent quarterly results show continued momentum, with Q4 FY2026 data center revenue up 21% and management guiding for accelerating growth throughout FY2027, including contributions from Celestial AI and XConn.

Competitive Landscape vs. Broadcom

Marvell is often compared to Broadcom (AVGO). As of early 2026, Broadcom’s market cap stood near $1.8 trillion while Marvell’s hovered around $80-83 billion. Broadcom benefits from software (VMware) for recurring revenue and higher margins (~60%+ EBITDA). Marvell, as a more pure-play semiconductor firm, has operating margins of 15-20% with room for improvement, but offers greater cyclical upside in AI booms.

Marvell’s strengths in coherent optics, DSP leadership, and customer collaboration provide differentiation. Post-VMware pricing changes, some enterprises seek non-Broadcom alternatives, where Marvell’s transparent partnership model gains traction. If photonic interconnect becomes mainstream, Marvell could achieve a technology leap.

Community Feedback and Risks

Engineers on platforms like Reddit praise Marvell’s SerDes validation and firmware expertise, viewing it as a rigorous training ground for hardware talent. However, a high-pressure delivery culture and intense interviews (especially for 3nm backend design) are commonly noted.

Investors debate valuation — forward P/E around 22-25x appears attractive versus NVIDIA (~38x) and Broadcom (~46x). Concerns around Amazon potentially shifting some Trainium work exist, but interconnect and DSP remain Marvell’s core moat.

Major risks include high customer concentration (top 10 clients >80% revenue), dependency on hyperscaler AI capex, and geopolitical/export controls affecting supply chains and China exposure.

Conclusion

Marvell Technology has built a formidable moat across advanced process design, ultra-high-speed optical communications, and efficient networking. In an AI world shifting from single-chip performance to system-scale efficiency, seamless low-latency data movement across chips, racks, and data centers is becoming the most valuable asset. With solutions like 1.6T DSP, Teralynx 10, and 2nm custom platforms, Marvell is well-positioned as an AI infrastructure architect. While competition and concentration risks persist, its technical foresight makes it a key barometer for the AI semiconductor cycle.
