Semiconductors | Updated 2026

List of Fabless AI Chip Design Companies

Comprehensive directory of fabless semiconductor companies designing custom AI accelerators, inference chips, and training processors. Covers startups through public companies with chip architecture, target workload, funding stage, and foundry partnerships.

Available Data Fields

Company Name
Chip Product Name
Target Workload
Chip Architecture
Foundry Partner
Headquarters
Total Funding
Founded Year
Key Investors
Employee Count
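The fields above can be modeled as a simple record type. The following is an illustrative sketch only; the field names mirror the list above, but the types, optionality, and the sample values are assumptions, not the vendor's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FablessChipCompany:
    """One directory record; field names follow the list above (types are assumed)."""
    company_name: str
    chip_product_name: str
    target_workload: str                        # e.g. "Inference", "Edge Inference"
    chip_architecture: str                      # e.g. "Wafer-scale", "RISC-V Tensix"
    foundry_partner: str                        # e.g. "TSMC"
    headquarters: str
    total_funding_usd: Optional[int] = None     # may be undisclosed for private firms
    founded_year: Optional[int] = None
    key_investors: Optional[list[str]] = None
    employee_count: Optional[int] = None

# Sample record built from the preview table below; architecture/foundry are illustrative.
groq = FablessChipCompany(
    company_name="Groq",
    chip_product_name="LPU (Language Processing Unit)",
    target_workload="Inference",
    chip_architecture="Deterministic streaming",
    foundry_partner="(see dataset)",
    headquarters="Mountain View, CA",
)
```

Optional fields default to `None` because funding, investors, and headcount are frequently undisclosed for private companies.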

Data Preview

* Full data requires registration
| Company | Chip Product | Target Workload | Headquarters |
| --- | --- | --- | --- |
| Cerebras Systems | WSE-3 (Wafer-Scale Engine) | Training & Inference | Sunnyvale, CA |
| Groq | LPU (Language Processing Unit) | Inference | Mountain View, CA |
| Tenstorrent | Wormhole / Blackhole | Training & Inference | Santa Clara, CA |
| Hailo | Hailo-8 / Hailo-10 | Edge Inference | Tel Aviv, Israel |
| SambaNova Systems | SN40L RDU | Inference | Palo Alto, CA |

100+ records available for download.
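The preview rows can be handled like any tabular dataset. A minimal sketch using only Python's standard library and the five preview records; the CSV layout is an assumption, since the downloadable format is not specified here.

```python
import csv
import io

# The five preview rows re-keyed as CSV (column names taken from the table above).
PREVIEW = """company,chip_product,target_workload,headquarters
Cerebras Systems,WSE-3 (Wafer-Scale Engine),Training & Inference,"Sunnyvale, CA"
Groq,LPU (Language Processing Unit),Inference,"Mountain View, CA"
Tenstorrent,Wormhole / Blackhole,Training & Inference,"Santa Clara, CA"
Hailo,Hailo-8 / Hailo-10,Edge Inference,"Tel Aviv, Israel"
SambaNova Systems,SN40L RDU,Inference,"Palo Alto, CA"
"""

rows = list(csv.DictReader(io.StringIO(PREVIEW)))

# Example query: every company whose chip targets inference in some form.
inference = [r["company"] for r in rows if "Inference" in r["target_workload"]]
print(inference)  # all five preview rows mention inference in their workload
```

The full download would be loaded the same way by pointing `csv.DictReader` at the file instead of an in-memory string.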


The Fabless AI Chip Landscape

The AI semiconductor industry is undergoing a structural shift. While NVIDIA dominates GPU-based training, a growing wave of fabless companies is designing purpose-built silicon for specific AI workloads — from large-scale training clusters to edge inference on embedded devices. These firms outsource fabrication to foundries like TSMC, Samsung Foundry, and GlobalFoundries, focusing their resources entirely on chip architecture, software toolchains, and go-to-market.

Why Fabless Matters for AI Silicon

Custom AI accelerators, typically ASICs or other domain-specific architectures, can deliver 10–100x better performance-per-watt than general-purpose GPUs on targeted workloads. The fabless model lets startups compete by eliminating the multi-billion-dollar cost of owning a fab. Companies like Cerebras (wafer-scale integration), Groq (deterministic streaming architecture), and Tenstorrent (RISC-V-based Tensix cores) have each taken radically different design approaches to challenge incumbent GPU architectures.
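Performance-per-watt is simply throughput divided by power draw. A toy calculation with made-up numbers (neither column reflects any real chip) shows how an "x-times better" claim is computed:

```python
def perf_per_watt(throughput_tokens_per_s: float, power_watts: float) -> float:
    """Throughput divided by power draw: the metric behind perf/watt comparisons."""
    return throughput_tokens_per_s / power_watts

# Hypothetical figures for illustration only, not measurements of any real chip.
gpu_ppw = perf_per_watt(throughput_tokens_per_s=3_000, power_watts=700)    # general-purpose GPU
asic_ppw = perf_per_watt(throughput_tokens_per_s=12_000, power_watts=280)  # domain-specific ASIC

print(round(asic_ppw / gpu_ppw, 1))  # → 10.0
```

Note that such ratios hold only on the targeted workload; on workloads outside the chip's domain, the specialized design may perform far worse than the general-purpose part.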

Market Segmentation

Cloud Training ASICs
Companies like Cerebras and SambaNova target hyperscaler and enterprise training workloads, competing with NVIDIA A100/H100 clusters. Custom ASIC shipments from cloud providers are projected to grow 44.6% in 2026.
Inference Accelerators
Groq, d-Matrix, and Untether AI focus on low-latency, high-throughput inference — a market growing faster than training as deployed AI models scale.
Edge AI Processors
Hailo, Blaize, and Syntiant design ultra-low-power chips for autonomous vehicles, smart cameras, and IoT devices where cloud connectivity is impractical.

Funding and Investment Trends

Venture capital and strategic investment into fabless AI chip companies surged past $20 billion cumulatively by 2024. Notable rounds include Cerebras raising $1.1B (Series G), Tenstorrent closing a $693M Series D backed by Jeff Bezos, and Groq securing $750M as inference demand surged. Hyperscalers like Google, Amazon, and Microsoft are also designing in-house AI chips (TPU, Trainium, Maia), further validating the custom silicon thesis.

Key Foundry Relationships

Nearly all leading fabless AI chip firms rely on TSMC for advanced nodes (5nm, 4nm, 3nm). Samsung Foundry and GlobalFoundries serve as secondary options. Access to cutting-edge process nodes is a critical competitive factor — companies without TSMC allocation face 12–18 month delays that can be existential in a fast-moving market.

Frequently Asked Questions

Q. Does this dataset include hyperscaler in-house chip teams like Google TPU or AWS Trainium?

This dataset focuses on independent fabless companies. Hyperscaler in-house efforts (Google TPU, AWS Trainium, Microsoft Maia) are not included as they are internal divisions, not standalone semiconductor firms.

Q. How is chip architecture information sourced?

Architecture details are gathered from public sources including company websites, technical white papers, conference presentations (Hot Chips, ISSCC), and press releases. When our AI crawls the web at request time, it pulls the latest publicly available specifications.

Q. Are Chinese fabless AI chip companies included?

Yes. The dataset covers global companies including Chinese firms like Cambricon, Biren Technology, and Enflame, based on publicly available information. Coverage depends on the availability of English or local-language public data.

Q. What distinguishes fabless from fab-lite companies in this dataset?

We include only companies that outsource 100% of wafer fabrication to third-party foundries. Companies that retain any in-house manufacturing, whether fab-lite firms or integrated device manufacturers like Intel (which designs AI chips but also operates fabs), are excluded. The criterion is whether the company owns manufacturing capacity.