Editor's Picks
Handpicked stories worth your time
MLX vs llama.cpp on Apple Silicon: Which Runtime to Use for Local LLM Inference
MLX delivers 20-87% faster generation on Apple Silicon for models under 14B parameters. llama.cpp wins for cross-platform use and long contexts.
Microsoft's BitNet: How 1-Bit LLMs Could Make GPU Farms Obsolete
Synthetic Data Is Eating AI Training
Recent Stories
Fresh off the press
America's AI Researcher Pipeline Dropped 89% — What the Stanford Index Means for Teams Hiring AI Engineers
Stanford's 2026 AI Index reports an 89% collapse in AI researcher inflows to the US. Here's what it means for teams actively building AI engineering capacity.
Atlassian Turned On AI Training Data Collection by Default — Here's What to Disable
Atlassian's data contribution policy sends Jira and Confluence content to AI training by default. Here's the exact settings path to opt out before August 17.
Cloudflare Browser Run's CDP and MCP Support: Serverless Browser Automation for AI Agents
Cloudflare renamed Browser Rendering to Browser Run in April 2026 and added CDP and MCP support, letting AI agents use managed headless Chrome with a single config change.
Devstral 2 from Mistral: A Fully Open-Source Coding Agent Model You Can Run on a Laptop
Devstral Small 2 is genuinely Apache 2.0-licensed and fits in 14 GB. The 123B flagship looks open-source but carries a revenue cap most enterprises will violate.
ggsql Alpha: Write ggplot2-Style Visualizations Directly in SQL
Posit shipped the ggsql alpha today — a SQL extension that adds grammar-of-graphics clauses to DuckDB and SQLite queries. Here's what works, what's missing, and when to use it.
GitHub CLI's `gh skill` Command: One Standard to Rule Claude Code, Copilot, Cursor, and Gemini
GitHub shipped `gh skill` in public preview on April 16, 2026. Here's how the command works, what the open Agent Skills spec promises, and why the ecosystem is already compromised.