Graphcore Wins a Global Recognition Award 2026

In a genomics research cluster, a drug discovery model is running on a molecular interaction graph with 400 million nodes. On a GPU cluster, the same computation takes 14 hours. On Graphcore’s IPU-POD256, it completes in 38 minutes, not because the software is optimized differently, but because the processor holds the entire model inside the chip, eliminating the memory transfers that consume the majority of GPU compute cycles. This is the architectural difference that Graphcore has built since 2016, and it is the reason the company has earned a 2026 Global Recognition Award. The Bristol-founded semiconductor company, now a wholly owned subsidiary of SoftBank Group and builder of the world’s first 3D wafer-on-wafer commercial AI processor, has validated a decade-long thesis that the path to next-generation AI compute runs through a fundamentally different processor architecture, not an incremental improvement on the GPU.

Technical Innovation and Architecture

Graphcore’s Intelligence Processing Unit stores the complete machine learning model within the processor via its In-Processor-Memory architecture, eliminating the off-chip memory access cycle, which is the primary performance constraint in GPU-based AI workloads. The Colossus MK2 GC200 IPU, built on TSMC’s 7nm process with 59.4 billion transistors, runs 1,472 independent processor cores executing 8,832 parallel program threads simultaneously, delivering 250 teraFLOPS of AI compute with memory bandwidth exceeding 45TB/s, more than 20 times the bandwidth available from Nvidia’s A100 HBM architecture. This bandwidth differential is not marginal: for Graph Neural Networks, sparse transformers, and Bayesian inference workloads, where most memory accesses involve small, irregular data structures that off-chip HBM serves inefficiently, the IPU’s in-processor bandwidth translates directly into order-of-magnitude throughput improvements.
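The "more than 20 times" claim can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes an A100 HBM2e bandwidth of roughly 2 TB/s, which is Nvidia's published figure rather than a number stated in this article:

```python
# Back-of-envelope check of the ">20x" bandwidth claim.
ipu_bandwidth_tbs = 45.0   # Colossus MK2 GC200 in-processor memory (article's figure)
a100_hbm_tbs = 2.0         # Nvidia A100 80GB HBM2e, approximate (assumed, not from article)

ratio = ipu_bandwidth_tbs / a100_hbm_tbs
print(f"IPU / A100 bandwidth ratio: {ratio:.1f}x")  # 22.5x
```

At ~2 TB/s for HBM2e the ratio lands at 22.5x, consistent with the article's "more than 20 times" phrasing.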

The Bow IPU, Graphcore’s current-generation processor, is the world’s first commercial chip built using 3D wafer-on-wafer silicon stacking: a dedicated power delivery wafer stacked beneath the compute wafer at the silicon level, not the package level. This architecture achieves 350 teraFLOPS of AI compute, 65TB/s of memory bandwidth, and 16% better power efficiency with 40% higher performance versus the prior generation — all within a chip that fits in the same physical form factor and thermal envelope as its predecessor. Up to 64,000 Bow IPUs connect through Graphcore’s proprietary IPU-Fabric interconnect to form exaFLOP-scale AI compute clusters without third-party networking infrastructure. The Poplar SDK maps PyTorch, TensorFlow, ONNX, and JAX workloads onto the IPU architecture with native sparsity support, delivering up to 10x throughput gains for sparse AI computations, where GPU dense computation wastes cycles on zero-value operations.
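The sparsity advantage comes down to a FLOP count: a hardware path that never schedules zero operands does work proportional to the number of nonzeros, so a weight matrix with roughly 10% density needs about one tenth of the multiply-accumulates of its dense equivalent. The following pure-Python sketch illustrates the accounting; it is a toy model of the principle, not the Poplar SDK API:

```python
import random

def matvec_macs(matrix, skip_zeros):
    """Count multiply-accumulate ops for one matrix-vector product.

    A dense engine (skip_zeros=False) touches every entry; a
    sparsity-aware engine (skip_zeros=True) touches only nonzeros.
    """
    macs = 0
    for row in matrix:
        for w in row:
            if skip_zeros and w == 0.0:
                continue  # hardware sparsity: zero operand never scheduled
            macs += 1     # one multiply-accumulate
    return macs

# Build a 256x256 weight matrix with ~10% nonzero entries.
random.seed(0)
m = [[random.random() if random.random() < 0.1 else 0.0
      for _ in range(256)] for _ in range(256)]

dense = matvec_macs(m, skip_zeros=False)
sparse = matvec_macs(m, skip_zeros=True)
print(f"dense: {dense} MACs, sparse: {sparse} MACs, "
      f"speedup ~{dense / sparse:.1f}x")
```

At 10% density the skip-zeros count comes out close to one tenth of the dense count, which is where the "up to 10x" figure for sparse workloads originates.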

Market Strategy and Leadership

Co-founders Nigel Toon (Electrical Engineering, Imperial College London) and Simon Knowles bring a combined track record of two prior semiconductor company exits — Icera to Nvidia for $435 million in 2011, and Element 14 to Broadcom for $640 million in 2000 — to the IPU venture. The irony that the founders sold one of those companies to Nvidia, now Graphcore’s primary market competitor, directly informs the architectural decisions behind the IPU: both founders understood exactly where Nvidia’s GPU architecture leaves performance on the table for AI workloads, and both had the semiconductor design experience to build an alternative. Graphcore raised $682 million from 34 investors, including Sequoia Capital, Microsoft, Ontario Teachers’ Pension Plan, Fidelity International, and Baillie Gifford, reaching a peak valuation of $2.77 billion in December 2020 before SoftBank Group completed its acquisition in July 2024.

Under SoftBank ownership, Graphcore’s go-to-market model shifted from independent enterprise hardware sales to integration within SoftBank’s Artificial Super Intelligence infrastructure platform. The next-generation Izanagi chip, combining IPU parallel processing with Ampere ARM server CPU architecture, targets deployment across Stargate hyperscale data centers in the United States and Japan in 2026 — a distribution scale achievable only through SoftBank’s capital and infrastructure relationships. The October 2025 announcement of a £1 billion investment in a new AI Engineering Campus in Bengaluru, India, creating 500 semiconductor engineering roles across logical design, physical design, verification, and chip bring-up functions, confirms that SoftBank treats Graphcore as a long-term semiconductor R&D platform rather than a financial portfolio asset awaiting disposal.

Industry Impact and Future Vision

Graphcore’s IPU architecture has enabled research and commercial applications that GPU memory bandwidth constraints made impractical at production scale: genomics organizations running molecular interaction graph simulations in minutes rather than hours; pharmaceutical research teams accelerating drug candidate screening through IPU-powered molecular dynamics; financial institutions executing Bayesian inference models for risk computation at latencies incompatible with GPU batch processing requirements; and scientific computing programs at CERN and Cambridge using free IPU research allocations to run graph-based physics simulations. The native sparsity support in IPU hardware aligns with the direction of AI model architecture development. As researchers build sparser, more efficient models to reduce training and inference costs, the IPU’s advantage over dense GPU computation increases rather than diminishes.

The Izanagi chip roadmap, the Bengaluru Engineering Campus scaling toward 500 semiconductor engineers, and integration within SoftBank’s Stargate deployment pipeline collectively position Graphcore’s next hardware generation as the AI accelerator inside the world’s highest-profile hyperscale AI infrastructure build-out of 2026. The academic ecosystem program — free IPU access for university research and active scientific collaborations — builds the developer community and research validation pipeline that ensures the IPU architecture evolves in direct response to the frontier of AI model development. For commercializing the world’s first in-processor-memory AI accelerator, the world’s first 3D wafer-on-wafer production processor, and the first architecturally coherent alternative to GPU dominance in AI compute, Graphcore has fully earned the distinction of the 2026 Global Recognition Award.

  • Colossus MK2 GC200 IPU contains 59.4 billion transistors, 1,472 independent processor cores, 900MB of In-Processor-Memory, and delivers 250 teraFLOPS of AI compute with 45TB/s memory bandwidth — more than 20 times the bandwidth of Nvidia’s A100 HBM.

  • Bow IPU is the world’s first commercial processor built using 3D wafer-on-wafer (WoW) silicon stacking, delivering 350 teraFLOPS with 16% better power efficiency and 40% performance uplift over the MK2 generation.

  • Up to 64,000 IPUs interconnected via proprietary IPU-Fabric to form exaFLOP-scale AI compute clusters without third-party networking hardware.

  • Native hardware sparsity support delivers up to 10x throughput improvements on sparse AI workloads where GPU dense computation wastes cycles on zero-value operations.

  • Poplar SDK supports PyTorch, TensorFlow, ONNX, JAX, and PaddlePaddle, mapping ML workloads onto IPU architecture with full framework compatibility.

  • MIMD (Multiple Instruction, Multiple Data) processor architecture enables 8,832 genuinely independent parallel program threads — a parallelism model that GPU SIMD/SIMT architectures cannot express.

  • $682 million raised across 12 rounds from 34 investors including Sequoia Capital, Microsoft, Ontario Teachers’ Pension Plan, Fidelity International, Baillie Gifford, Bosch Ventures, Samsung, and Atomico.

  • Peak valuation of $2.77 billion achieved at Series E in December 2020, with $440 million in cash on the balance sheet post-closing.

  • Acquired by SoftBank Group in July 2024 as a wholly owned subsidiary, securing unlimited capital runway within SoftBank’s $100 billion AI infrastructure ambition.

  • £1 billion commitment to Bengaluru AI Engineering Campus over ten years, creating 500 semiconductor engineering roles beginning with 100 immediate hires in October 2025.

  • IPU-M2000 system benchmark: 2.5x faster ResNet-50 training and 4.6x faster inference throughput versus Nvidia DGX-A100 in Graphcore-published head-to-head comparisons.

  • Izanagi next-generation chip integrates Graphcore IPU processing with Ampere ARM server CPU architecture for deployment across US-Japan Stargate hyperscale data centers in 2026.

  • SoftBank ownership provides direct integration with Arm’s ecosystem, enabling the only Arm CPU plus IPU accelerator stack assembled under single corporate ownership globally.

  • Geographic priority markets under SoftBank are Japan (direct SoftBank infrastructure) and the Middle East (sovereign wealth fund and sovereign AI cloud channels).

  • Academic free-access program for IPU compute at universities including Edinburgh, Cambridge, and CERN builds research validation and long-term developer ecosystem depth.

  • Co-founders bring two prior exits totaling $1.075 billion in acquisition value (Icera to Nvidia: $435M; Element 14 to Broadcom: $640M), among the strongest serial hardware founder track records in UK semiconductor history.

  • Poplar SDK’s framework-agnostic design allows developers to run existing PyTorch and TensorFlow models on IPU hardware without full code rewrites, lowering the barrier to IPU adoption for ML engineering teams.

  • Microsoft Azure IPU preview provided enterprise developers with cloud-managed IPU access, eliminating on-premise hardware procurement requirements for initial workload testing.

  • IPU-Fabric scale-out architecture enables seamless cluster expansion from single IPU to 64,000-IPU configurations under a unified Poplar software management layer.

  • Graph Neural Network workloads run natively on IPU’s MIMD architecture without the graph partitioning workarounds required to execute GNN models on GPU SIMD hardware.

  • Benchmark data comparing IPU against GPU for cosmology and molecular simulation workloads is peer-reviewed and published at CERN and Edinburgh academic conferences, providing independent third-party performance validation.

  • Bow IPU’s 3D WoW architecture delivers 16% better power efficiency than MK2, reducing energy consumption per AI FLOP at the chip architecture level rather than through system cooling or power management alone.

  • Native sparsity support eliminates unnecessary floating-point operations at the hardware level, reducing active compute cycles and energy consumption for sparse AI workloads by up to 10x versus GPU dense computation.

  • £1 billion Bengaluru campus distributes advanced semiconductor design expertise and 500 high-skill engineering roles to India, expanding access to frontier AI hardware R&D beyond traditional US-UK-Taiwan semiconductor geographies.

  • Free IPU access for academic AI research programs at universities and institutions including CERN, Edinburgh, and Cambridge democratizes access to next-generation AI compute for researchers who cannot afford GPU cluster costs.

  • Founding mission explicitly frames IPU technology as a tool for “healthier, fairer, more informed, more sustainable lives” — a societal benefit framing that informs research partnership selection and open academic engagement.
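The cluster-scale figures in the list above compose directly: at 350 teraFLOPS per Bow IPU, the maximum 64,000-IPU IPU-Fabric configuration works out to roughly 22.4 exaFLOPS of aggregate AI compute. The figure below is derived from the per-chip numbers, not separately stated in this article:

```python
# Aggregate AI compute of a maximal IPU-Fabric cluster,
# derived from the per-chip figures listed above.
bow_tflops = 350      # Bow IPU peak AI compute, teraFLOPS
max_ipus = 64_000     # maximum IPU-Fabric configuration

total_tflops = bow_tflops * max_ipus
total_exaflops = total_tflops / 1_000_000  # 1 exaFLOP = 1e6 teraFLOPS
print(f"{total_exaflops:.1f} exaFLOPS")  # 22.4 exaFLOPS
```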
