On April 14, 2026, NVIDIA released Ising: two AI models targeting the two hardest operational problems in fault-tolerant quantum computing — decoding errors faster than they accumulate, and calibrating qubits without burning days of engineering time. Both are open-weight and available on GitHub and HuggingFace. Neither replaces the tools quantum teams already rely on. What they do is make those tools meaningfully faster and more accurate, with caveats that the launch press releases largely glossed over.

What NVIDIA Ising Actually Is (and What It Isn’t)

The Ising family comprises two distinct products with different architectures, different hardware requirements, and different licenses.

Ising Decoding is a pair of 3D convolutional neural networks that run before PyMatching — the standard minimum-weight perfect matching decoder — not instead of it. (NVIDIA Ising Introduces AI-Powered Workflows to Build Fault-Tolerant Quantum Systems — NVIDIA Technical Blog) The CNNs pre-process syndrome data to sparsify it, reducing the work PyMatching has to do. The fast variant has roughly 912K parameters with a 9×9×9 receptive field; the accurate variant has roughly 1.79M parameters with a 13×13×13 receptive field. (NVIDIA Ising Introduces AI-Powered Workflows to Build Fault-Tolerant Quantum Systems — NVIDIA Technical Blog)
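The pre-decoding idea is easiest to see in a toy sketch. The code below is illustrative only: `sparsify()` is a trivial neighbour-count rule standing in for Ising's 3D CNN, and `decode()` is a stand-in for PyMatching whose cost is just the number of detection events it must pair up. None of these names come from the actual Ising API.

```python
# Hedged sketch of the pre-decode -> match pipeline. The sparsify() rule
# is a toy stand-in for Ising's 3D CNN; names are illustrative only.

def sparsify(syndrome, _keep_threshold=1):
    """Drop isolated detection events (toy proxy for what the CNN
    pre-decoder handles itself), keeping bits with a flipped neighbour."""
    kept = []
    for i, bit in enumerate(syndrome):
        if bit == 0:
            kept.append(0)
            continue
        neighbours = syndrome[max(0, i - 1):i] + syndrome[i + 1:i + 2]
        kept.append(1 if sum(neighbours) >= _keep_threshold else 0)
    return kept

def decode(syndrome):
    """Stand-in for PyMatching: cost here is the number of detection
    events left for the matcher to pair up."""
    return sum(syndrome)

raw = [0, 1, 1, 0, 0, 1, 0, 0]
sparse = sparsify(raw)
# The matching step now sees fewer events; sparser input to PyMatching
# is where the reported throughput gain comes from.
assert decode(sparse) <= decode(raw)
```

The point of the sketch is the division of labour: the neural network never replaces the matcher, it only shrinks the problem the matcher sees.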

Ising Calibration is a 35-billion-parameter sparse mixture-of-experts vision-language model (3B parameters active per token), built on Qwen3.5-35B-A3B and fine-tuned on 72,500 synthetically generated quantum experiment entries spanning 22 experiment families. (Ising-Calibration-1-35B-A3B Model Card — Hugging Face) It analyzes calibration data and suggests tuning actions — but NVIDIA’s own model card explicitly states that outputs should be validated by domain experts before acting on experimental conclusions. (Ising-Calibration-1-35B-A3B Model Card — Hugging Face)

Ising Decoding: Fast vs. Accurate — The Decision You Actually Have to Make

At surface code distance d=13 with physical error rate p=0.003, the two variants produce meaningfully different tradeoffs: (NVIDIA Ising Introduces AI-Powered Workflows to Build Fault-Tolerant Quantum Systems — NVIDIA Technical Blog)

| Variant  | Parameters | Receptive Field | Throughput vs. PyMatching | LER Improvement |
|----------|------------|-----------------|---------------------------|-----------------|
| Fast     | ~912K      | 9×9×9           | 2.5×                      | 1.11×           |
| Accurate | ~1.79M     | 13×13×13        | 2.25×                     | 1.53×           |

The headline figure you’ll see in coverage — “3× more accurate” — applies to the accurate variant tested at d=31, where the model generalizes beyond its d=13 training data. (NVIDIA Ising Introduces AI-Powered Workflows to Build Fault-Tolerant Quantum Systems — NVIDIA Technical Blog) That generalization result is genuinely notable, but it is not the d=13 benchmark condition. If your current bottleneck is throughput (you need more syndrome decoding cycles per second), reach for the fast variant. If your bottleneck is logical error rate (you need fewer logical failures per shot), the accurate variant’s 1.53× LER reduction at standard conditions is the relevant number.
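The decision rule above can be written down directly. The dictionary encodes the published d=13 numbers; the function and its keys are illustrative, not part of any Ising tooling.

```python
# Published d=13, p=0.003 figures for the two decoder variants.
ISING_DECODING_VARIANTS = {
    "fast":     {"params": 912_000,   "field": (9, 9, 9),
                 "throughput_x": 2.5,  "ler_x": 1.11},
    "accurate": {"params": 1_790_000, "field": (13, 13, 13),
                 "throughput_x": 2.25, "ler_x": 1.53},
}

def pick_variant(bottleneck):
    """bottleneck: 'throughput' (more decode cycles/sec) or 'ler'
    (fewer logical failures). Mirrors the decision rule in the text."""
    return "fast" if bottleneck == "throughput" else "accurate"

assert pick_variant("throughput") == "fast"
assert pick_variant("ler") == "accurate"
```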

An independent evaluation by UCSD’s Picasso Lab — the only third-party assessment published as of 2026-04-20 — tested the 3D CNN pre-decoder approach on surface codes at d=9 and found a 1.66× LER reduction, enabling PyMatching to run up to 2.12× faster through syndrome sparsification. (Evaluating Neural Pre-Decoding with NVIDIA Ising: From Surface to Bivariate Bicycle Codes — Quantum Computing Report) The lab noted that the speedup advantage grows with code distance, which aligns with NVIDIA’s d=13 and d=31 numbers.

A practical advantage that doesn’t require real hardware: training the decoder requires only specifying a noise model, rotated surface code orientation, and desired model depth. Synthetic training data is auto-generated via NVIDIA cuQuantum’s cuStabilizer library. (NVIDIA Ising Introduces AI-Powered Workflows to Build Fault-Tolerant Quantum Systems — NVIDIA Technical Blog) Teams without existing hardware datasets can still train and deploy.
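Those three inputs are the whole specification. A hypothetical configuration might look like the following; every field name here is illustrative, not the actual cuStabilizer or Ising training API.

```python
# Hypothetical training configuration mirroring the three inputs the
# blog post describes: noise model, rotated surface code orientation,
# and model depth. Field names are illustrative only.
decoder_training_config = {
    "noise_model": {"kind": "depolarizing", "p": 0.003},
    "code": {"family": "rotated_surface", "distance": 13},
    "model": {"variant": "accurate", "depth": 13},  # 13x13x13 receptive field
}
```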

Ising Calibration: What a 35B Quantum VLM Can and Cannot Do

On NVIDIA’s own QCalEval benchmark, Ising Calibration scores 74.7% overall (62.5–90.5% depending on question type), outperforming Gemini 3.1 Pro by 3.27 percentage points, Claude Opus 4.6 by 9.68 points, and GPT 5.4 by 14.5 points. (Ising-Calibration-1-35B-A3B Model Card — Hugging Face)

The model’s training corpus is entirely synthetic — 72,500 entries generated to represent 22 families of quantum calibration experiments. (Ising-Calibration-1-35B-A3B Model Card — Hugging Face) That’s both a strength (no proprietary hardware data required to train it) and a known limitation. Synthetic data covers the distribution it was designed to cover; real hardware produces edge cases that synthetic generation may not anticipate. NVIDIA’s own card acknowledges this directly.

The practical pitch is calibration time: NVIDIA claims reduction from days to hours for workflows where teams currently iterate manually. (NVIDIA Launches Ising, the World’s First Open AI Models to Accelerate the Path to Useful Quantum Computers — NVIDIA Newsroom)

The Adoption Bar: Hardware, Licensing, and Integration Requirements

Self-hosting Ising Calibration requires either two NVIDIA L40S GPUs (48 GB each) or one H100 (80 GB). It runs via vLLM with FlashAttention at BF16 precision on Ubuntu 22.04 or later. (Ising-Calibration-1-35B-A3B Model Card — Hugging Face) Teams without that hardware can access it as a hosted NVIDIA NIM microservice on build.nvidia.com. (Ising-Calibration-1-35B-A3B Model Card — Hugging Face)
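A self-hosted deployment would typically be queried through vLLM's OpenAI-compatible chat endpoint. The sketch below only builds the request payload; the model name, system prompt, and helper function are assumptions for illustration, not documented Ising usage.

```python
import json

def build_calibration_request(model, observation):
    """Assemble an OpenAI-style chat payload of the kind a vLLM server
    accepts. Prompt wording and model name are illustrative."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You analyze qubit calibration data. Flag anomalies "
                        "and suggest the next tuning action."},
            {"role": "user", "content": observation},
        ],
        "temperature": 0.0,  # deterministic suggestions, easier to review
    }

payload = build_calibration_request(
    "nvidia/Ising-Calibration-1-35B-A3B",  # hypothetical model id
    "Rabi oscillation amplitude decays after 40 cycles; suggest next step.",
)
body = json.dumps(payload)
assert "Ising-Calibration" in body
```

Whatever the transport, the model card's caveat applies at this step: the returned suggestion is input to a human expert's review, not an action to execute automatically.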

The licensing situation is split, and it matters for procurement and compliance decisions:

The Ising Calibration weights permit commercial use under the NVIDIA Open Model License, but those terms are distinct from Apache 2.0. Teams planning to self-host the calibration model should review that license separately rather than assuming open-source permissiveness.

Ising Decoding training code carries no such complication — it’s straightforwardly Apache 2.0.

Surface Code Assumption: Where Ising Works and Where It Doesn’t

This is the caveat most coverage has missed entirely. The 3D CNN architecture in Ising Decoding is designed to exploit the grid topology of surface codes. That topology is what makes a convolutional approach efficient — the local spatial structure of surface code syndromes maps naturally onto a CNN’s receptive field.

On bivariate bicycle (qLDPC) codes, the UCSD evaluation found that an MLP-based pre-decoder achieved 14× LER reduction at very low error rates (p=0.01) but the benefit disappeared at higher error rates. (Evaluating Neural Pre-Decoding with NVIDIA Ising: From Surface to Bivariate Bicycle Codes — Quantum Computing Report) The lab’s conclusion: “a neural pre-decoder is most robust when its internal structure reflects the physical connectivity of the quantum code’s Tanner graph.” (Evaluating Neural Pre-Decoding with NVIDIA Ising: From Surface to Bivariate Bicycle Codes — Quantum Computing Report)

qLDPC codes like bivariate bicycle codes have different connectivity structure. The benchmarks for Ising Decoding do not transfer to them, and teams building on non-surface-code architectures should not treat the published numbers as predictive of their results.
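The connectivity difference is concrete enough to illustrate. In the toy below, a surface code check only touches data qubits on adjacent grid sites, while a bicycle-style check applies long-range cyclic shifts on a torus; the shift values are illustrative, not a real bivariate bicycle code construction.

```python
# Toy illustration of why a grid-local CNN suits surface codes but not
# bivariate bicycle codes: how far (in Manhattan distance) a check's
# support reaches from the check itself.

def surface_check_support(x, y):
    # A plaquette stabilizer touches its four adjacent data qubits.
    return [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]

def bicycle_check_support(x, y, L=12):
    # BB-style checks apply cyclic shifts on an L x L torus.
    shifts = [(0, 0), (3, 0), (0, 6), (9, 0)]  # illustrative monomials
    return [((x + dx) % L, (y + dy) % L) for dx, dy in shifts]

def reach(support, x, y):
    return max(abs(a - x) + abs(b - y) for a, b in support)

# A CNN with a small receptive field can cover the surface code check,
# but not the long-range bicycle check.
assert reach(surface_check_support(2, 2), 2, 2) == 2
assert reach(bicycle_check_support(2, 2), 2, 2) > 2
```

A fixed 9×9×9 or 13×13×13 receptive field covers the first kind of support by construction; it has no such guarantee for the second, which is the UCSD point about matching the pre-decoder's structure to the code's Tanner graph.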

Who’s Already Using It and What That Signals

NVIDIA’s launch announcement listed adopters as of April 14, 2026: IonQ, Infleqtion, IQM, Atom Computing, Fermi National Accelerator Laboratory, Lawrence Berkeley National Laboratory, Harvard, Sandia National Laboratories, Cornell, UC San Diego, University of Chicago, and SEEQC. (NVIDIA Launches Ising, the World’s First Open AI Models to Accelerate the Path to Useful Quantum Computers — NVIDIA Newsroom)

The breadth of that list — spanning commercial quantum hardware vendors, national labs, and academic groups — reflects that Ising is positioned as infrastructure rather than a specialized tool. The inclusion of Fermilab and Lawrence Berkeley National Laboratory is notable because national lab calibration workflows involve hardware that can't easily be sent to cloud services, making the self-hostable weights and Apache 2.0 decoder code directly relevant.


FAQ

Does Ising Decoding require me to collect real hardware error data to train?

No. The decoder training pipeline auto-generates synthetic syndrome data using NVIDIA cuQuantum’s cuStabilizer library. You specify your noise model, surface code orientation, and desired model depth — cuStabilizer generates the training set. (NVIDIA Ising Introduces AI-Powered Workflows to Build Fault-Tolerant Quantum Systems — NVIDIA Technical Blog) This removes a significant bootstrapping barrier for teams that haven’t yet accumulated large hardware datasets.

The QCalEval benchmark shows Ising Calibration outperforming major general-purpose models by a wide margin. Is that independently verified?

Not yet, as of 2026-04-20. QCalEval is an NVIDIA-created benchmark specifically designed to evaluate quantum calibration tasks. (Ising-Calibration-1-35B-A3B Model Card — Hugging Face) The comparison scores against other models are plausible given the domain specialization, but no independent third-party replication of those numbers has been published at launch. The more conservative reading is that Ising Calibration is highly competitive on a domain-specific benchmark designed by its creator — which is a real signal, but not the same as independent validation.

Can I use Ising Decoding on qLDPC codes if I just retrain it on that code family?

Possibly, but with caveats. The 3D CNN architecture is geometrically suited to surface code syndrome structure. Retraining on qLDPC data may help, but the UCSD evaluation’s finding — that benefits disappear at higher error rates for bivariate bicycle codes even with an adapted pre-decoder — suggests the architecture itself may not be the right fit for codes with different Tanner graph connectivity. (Evaluating Neural Pre-Decoding with NVIDIA Ising: From Surface to Bivariate Bicycle Codes — Quantum Computing Report) Teams pursuing this path should treat it as original research rather than a supported use case.

Sources

  1. NVIDIA Ising Introduces AI-Powered Workflows to Build Fault-Tolerant Quantum Systems — NVIDIA Technical Blog (vendor; accessed 2026-04-24)
  2. NVIDIA Launches Ising, the World's First Open AI Models to Accelerate the Path to Useful Quantum Computers — NVIDIA Newsroom (analysis; accessed 2026-04-24)
  3. Ising-Calibration-1-35B-A3B Model Card — Hugging Face (community; accessed 2026-04-24)
  4. Evaluating Neural Pre-Decoding with NVIDIA Ising: From Surface to Bivariate Bicycle Codes — Quantum Computing Report (analysis; accessed 2026-04-24)
  5. NVIDIA Launches Ising: Open AI Models for Quantum Processor Calibration and Error Correction — Quantum Computing Report (analysis; accessed 2026-04-24)
