Category

Models & Research

Foundation models, releases, benchmarks, and AI research.

23 articles exploring Models & Research, with expert analysis and insights from our editorial team.

Showing 16–23 of 23 articles · Page 2 of 2

Latest in Models & Research

16. Gemini 3.1 Pro: Google's New Reasoning Model Explained

Gemini 3.1 Pro is Google's latest reasoning-focused AI model, scoring 77.1% on the ARC-AGI-2 benchmark, more than double its predecessor's result. Here's how it compares to Claude and GPT.

· 8 min read
17. DjVu and Its Connection to Deep Learning: An Unexpected History

DjVu, the 1998 image compression format created at AT&T Labs by a team including future Turing Award winner Yann LeCun, pioneered techniques like layer separation and multi-resolution encoding that directly influenced modern neural image compression methods.

· 7 min read
18. Kimi Claw: Moonshot AI's Answer to Claude and ChatGPT

Moonshot AI's Kimi series has emerged as China's leading open-source AI challenger, offering trillion-parameter models with advanced agentic capabilities at a fraction of Western competitors' costs.

· 8 min read
19. WiFi DensePose: Full-Body Tracking Through Walls Using Your Router

WiFi-based DensePose technology uses commodity mesh routers to perform dense human pose estimation through walls. Researchers have demonstrated that standard WiFi signals can track body movements and positions without consent or line of sight, raising serious privacy concerns.

· 6 min read
20. AI Code Generation Benchmarks 2026: Which Model Actually Writes Better Code?

Claude 3.5 Sonnet, GPT-4o, Gemini 2.5 Pro, and open-source models like Qwen2.5-Coder and DeepSeek show competitive performance on benchmarks, but real-world coding tasks reveal significant gaps between benchmark scores and practical utility.

· 8 min read
21. Two Different Tricks for Fast LLM Inference: Speeding Up AI Responses

Speculative decoding and efficient memory management through PagedAttention are two proven techniques that accelerate LLM inference by 2–24x without sacrificing output quality, enabling production deployments at scale.

· 7 min read
22. Fine-Tune LLMs 2x Faster with 70% Less VRAM: The Unsloth Guide

Discover how Unsloth's Triton-optimized kernels enable 2x faster LLM fine-tuning with 70% less VRAM, making it possible to train DeepSeek, Qwen, and Llama models on consumer GPUs without sacrificing accuracy.

· 8 min read
23. The Best AI Models for OpenClaw in 2026

A comprehensive guide to selecting the right LLM for your OpenClaw workflows, covering coding, writing, reasoning, and cost-effective options.

Explore More Categories

Discover insights across different technology domains.

Browse All Articles