Category: Models & Research

Foundation models, releases, benchmarks, and AI research.

33 articles exploring Models & Research. Expert analysis and insights from our editorial team.

Showing 16–30 of 33 articles · Page 2 of 3

Latest in Models & Research

16. Running DeepSeek R1 Locally: Hardware Requirements, Quantization, and Real Throughput

What hardware actually runs DeepSeek R1 at useful speeds? Concrete tokens-per-second benchmarks across GPU configurations, quantization options, and the honest tradeoffs.

9 min read
17. Chinese AI Models Compared: DeepSeek, Qwen, Kimi, Doubao, and Ernie

DeepSeek isn't China's only frontier AI. Compare DeepSeek, Qwen, Kimi, Doubao, and Ernie on benchmarks, licensing, API access, and use-case fit.

9 min read
18. Executing Programs Inside Transformers: The Inference Breakthrough Nobody Expected

A new architecture from Percepta embeds a program interpreter directly into transformer weights, achieving logarithmic-time execution lookups that could reshape how AI agents handle deterministic computation, if the early claims survive scrutiny.

8 min read
19. Fish-Speech: The Open-Source TTS Model That's Threatening ElevenLabs

Fish Audio's S2 model reached SOTA benchmarks in March 2026 with sub-100ms latency, 80+ languages, and open-sourced weights, directly challenging ElevenLabs' commercial dominance while exposing the real costs of 'free' voice AI.

8 min read
20. Claude's Web Search Changes Everything for AI Research

Anthropic's web search integration removes the static knowledge ceiling from Claude, enabling real-time retrieval directly inside the reasoning loop, with verifiable citations, domain filtering, and a new dynamic filtering layer that cuts token use by 24% while improving accuracy by 11%.

8 min read
21. DeepSeek V3/R1: How Chinese Engineers Matched GPT-4 for $6 Million

DeepSeek's V3 and R1 models match GPT-4-class performance using a fraction of the compute through architectural innovations in Mixture of Experts, attention compression, and reinforcement learning, demonstrating that training efficiency may matter more than raw hardware scale.

10 min read
22. Gemini 2.0 Pro's 2 Million Token Context: What Can You Actually Do With It?

Google's Gemini 2.0 Pro Experimental ships with a 2 million token context window, the largest among production-accessible models. Here's what practitioners have discovered works, what doesn't, and what the hard limits are.

9 min read
23. Google's TimesFM: A Foundation Model for Time Series

TimesFM is Google's pretrained, decoder-only transformer model for zero-shot time-series forecasting, trained on ~100 billion real-world time points to deliver accurate predictions across domains without retraining.

9 min read
24. The Million-Token Context Window: What Can You Actually Do?

Million-token context windows let you feed entire codebases, legal contracts, and hours of video to an LLM in one pass, but advertised limits routinely overstate practical capability. Here's what the benchmarks, failure modes, and real deployment patterns actually show.

9 min read
25. Synthetic Data Is Eating AI Training

The internet's supply of [high-quality human-generated text](/articles/there-will-be-a-scientific-theory-of-deep-learning-what-arxiv-2604-21691-argues/) is approaching exhaustion. Synthetic data (AI-generated training corpora) is filling the gap, but it introduces new failure modes practitioners must understand, including model collapse and quality drift.

9 min read
26. Gemini 3.1 Pro: Google's New Reasoning Model Explained

Gemini 3.1 Pro is Google's latest reasoning-focused AI model, achieving 77.1% on the ARC-AGI-2 benchmark, more than double the performance of its predecessor. Here's how it compares to Claude and GPT.

8 min read
27. DjVu and Its Connection to Deep Learning: An Unexpected History

DjVu, the 1998 image compression format created by future Turing Award winners at AT&T Labs, pioneered techniques like layer separation and multi-resolution encoding that directly influenced modern neural image compression methods.

7 min read
28. Kimi Claw: Moonshot AI's Answer to Claude and ChatGPT

Moonshot AI's Kimi series has emerged as China's leading open-source AI challenger, offering trillion-parameter models with advanced agentic capabilities at a fraction of Western competitors' costs.

8 min read
29. WiFi DensePose: Full-Body Tracking Through Walls Using Your Router

WiFi-based DensePose uses commodity mesh routers to perform dense human pose estimation through walls. Researchers have demonstrated that standard WiFi signals can track body movements and positions without consent or line of sight, raising serious privacy concerns.

6 min read
30. AI Code Generation Benchmarks 2026: Which Model Actually Writes Better Code?

Claude 3.5 Sonnet, GPT-4o, Gemini 2.5 Pro, and open-source models like Qwen2.5-Coder and DeepSeek post competitive benchmark scores, but real-world coding tasks reveal significant gaps between those scores and practical utility.

8 min read
