Topic

#quantization

3 articles exploring quantization. Expert insights and analysis from our editorial team.

Showing 1–3 of 3 articles

Articles

Models & Research

DuQuant++ Makes FP4 Quantization Practical for LLM Inference: What Fine-Grained Rotation Means for Blackwell Deployments

DuQuant++ aligns rotation block size with MXFP4 microscaling groups, halving preprocessing cost and pushing W4A4 accuracy close to FP8 as Blackwell FP4 Tensor Cores ship.

Models & Research

DuQuant++ Brings Fine-Grained Rotation to FP4: What Microscaling Quantization Means for Running Larger Models on the Same GPU

DuQuant++ adapts outlier-aware rotation to MXFP4, halving online rotation cost on LLaMA 3 and shifting the FP4 deployment bottleneck from memory to calibration engineering.

Models & Research

Running DeepSeek R1 Locally: Hardware Requirements, Quantization, and Real Throughput

What hardware actually runs DeepSeek R1 at useful speeds? Specific token/s benchmarks across GPU configs, quantization options, and the honest tradeoffs.

9 min read