AI Compute Capital Watch

Competitive dynamics from AMD, Broadcom, Marvell and AI chip startups versus Nvidia in accelerators and custom silicon


AMD, Broadcom And Emerging AI Chip Rivals

Competitive Dynamics in AI Accelerators: Nvidia Faces New Challenges from AMD, Broadcom, Marvell, and Innovative Startups in 2024

The landscape of AI hardware in 2024 is more competitive and multifaceted than ever. While Nvidia continues to dominate the GPU ecosystem, especially in inference workloads, a dynamic array of players—ranging from established semiconductor giants like AMD, Broadcom, and Marvell to a surge of innovative startups—is actively challenging Nvidia's leadership. These companies are deploying strategic product innovations, securing large-scale customer wins, and leveraging regional manufacturing and geopolitical shifts to stake their claims in the rapidly expanding AI compute market.

Nvidia’s Continued Leadership and Upcoming Launch

Nvidia remains the undisputed leader in AI accelerators, primarily through its GPU ecosystem and inference hardware. The company is preparing a new inference-focused chip, slated for unveiling in March 2024, that aims to cement its dominance in AI inference even as competition intensifies.

The new Nvidia chip is expected to target:

  • High-performance inference workloads
  • Large-scale deployment scenarios
  • Improved efficiency and scalability

The market reaction to Nvidia's upcoming launch will be pivotal. As industry insiders speculate, the chip's performance, power efficiency, and supply availability will influence customer choices and could prompt strategic responses from competitors.

AMD’s Strategic Moves and Large-Scale Wins

AMD continues to aggressively challenge Nvidia, leveraging both product innovation and substantial contract wins:

  • OpenAI Deal: AMD’s landmark agreement with OpenAI covers up to 6 GW of Instinct GPUs, with deployments beginning in 2026. The deal, reportedly worth tens of billions of dollars over its life, includes warrants for up to 160 million AMD shares and signals confidence from one of the world’s most prominent AI labs.
  • Product Development: AMD’s Instinct MI300 series is gaining traction among cloud providers and enterprise clients, and the upcoming MI450 is designed around HBM4 memory, addressing the need for large-memory AI models and real-time inference.
  • Strategic Focus: Engagements with Meta and other hyperscalers further demonstrate AMD’s push to become a key alternative supplier in the AI infrastructure ecosystem.

Implication: AMD’s large-scale deals and innovative hardware position it as a formidable alternative to Nvidia, especially as hyperscalers look to diversify their supply sources and mitigate supply chain risks.

Broadcom and Marvell: Masters of Custom Silicon and Data Interconnects

While AMD is focusing on accelerators, Broadcom (AVGO) and Marvell (MRVL) are emphasizing custom silicon and high-speed interconnects, vital components in large AI systems:

  • Broadcom: Known as the "king of custom silicon," Broadcom is designing specialized ASICs tailored for AI inference and data center workloads. Its expertise in ASIC design allows it to provide bespoke hardware solutions that optimize power and performance for specific AI tasks.
  • Marvell: Recently expanding into AI optics and ASIC-based solutions, Marvell’s Alaska P PCIe Gen 6 retimers enhance high-speed data movement, a critical factor for large AI training and inference infrastructure.

Strategic significance: Both companies are well-positioned as key suppliers of interconnects, ASICs, and custom silicon, giving hyperscalers an alternative to Nvidia’s vertically integrated stack and hardware tailored to specific AI workloads.

The Startup Ecosystem: Innovation and Diversification

A vibrant startup ecosystem is significantly reshaping the AI hardware landscape. Companies like FuriosaAI, SambaNova, Niobium, and SEMIFIVE are developing specialized inference chips that often outperform general-purpose GPUs in both performance and power efficiency.

Notable startups and their recent developments:

  • FuriosaAI: Led by industry veterans, FuriosaAI reports inference throughput of 17,000 tokens per second, roughly a tenfold improvement over mainstream GPUs, while consuming less power.
  • SambaNova: Focuses on flexible, high-performance inference accelerators, attracting significant funding and strategic partnerships.
  • Niobium and SEMIFIVE: Collaborating on Fully Homomorphic Encryption (FHE) accelerators and other custom solutions, particularly aimed at U.S. domestic AI hardware independence and regional deployment strategies.

Funding and investment: These startups have attracted over $1.1 billion in recent months, reflecting strong investor confidence in their potential to diversify AI compute away from Nvidia’s dominant GPU ecosystem.

Supply Chain, Geopolitics, and Regional Strategies

The evolution of AI hardware is also heavily influenced by supply chain constraints and geopolitical tensions:

  • Advanced Packaging & Memory: Companies like Samsung and TSMC are investing billions into next-generation fabrication and chiplet architectures, including 3D stacking and HBM4 memory, essential for high-performance AI hardware.
  • Export Controls: US-led export restrictions, notably on Chinese AI hardware, have prompted regional strategies:
    • Chinese firms such as Horizon Robotics and Cambricon are ramping up domestic ASIC development and training large language models locally.
    • The recent US approval allowing Nvidia to export H200 chips to China underscores the complex geopolitical landscape, balancing market access and technological restrictions.

Current Market Outlook and Implications

As Nvidia prepares to launch its new inference chip in March 2024, the competitive environment is poised for a significant shift:

  • Intensity of Competition: Nvidia’s new product will likely push rivals to accelerate their own hardware innovations and customer outreach.
  • Customer Choices: Cloud providers and enterprise clients will evaluate:
    • Performance and efficiency of new chips
    • Supply chain resilience
    • Customization capabilities
    • Regional and geopolitical considerations

The future of AI hardware in 2024 hinges on technological innovation, supply chain robustness, and geopolitical navigation. Companies that deliver efficient, scalable, and regionally supported solutions will capture the largest share of the market.


In summary, the AI accelerator market is more fragmented and competitive than ever. Nvidia remains a dominant force, but AMD’s large-scale contracts and innovative hardware, Broadcom and Marvell’s custom silicon expertise, and the vibrant startup ecosystem are reshaping the landscape. The upcoming Nvidia chip launch in March will be a critical event to watch, potentially triggering a new wave of strategic responses across the industry. As AI models continue to grow in complexity, the ability to develop custom, efficient accelerators will be essential for future market leadership.

Updated Mar 1, 2026