Editor's Picks
Handpicked stories worth your time
The Million-Token Context Window: What Can You Actually Do?
Million-token context windows let you feed entire codebases, legal contracts, or roughly an hour of video to an LLM in one pass, but advertised limits routinely overstate practical capability. Here's what the benchmarks, failure modes, and real deployment patterns actually show.
The AI Agent Marketplace: An Economy of Digital Workers Emerges
Perplexity API: Adding Real-Time Search to Your Apps in Minutes
Recent Stories
Fresh off the press
I Was Interviewed by an AI Bot—Here's What Nobody Warns You About
AI-conducted job interviews have moved from fringe experiment to standard practice, with platforms like Paradox and HireVue now handling roughly 1 in 10 U.S. job interviews. The experience is unsettling, the bias risks are real, and the legal protections are actively weakening.
SWE-bench's Dirty Secret: Benchmark-Passing AI PRs That Real Engineers Would Reject
New research from METR shows that roughly half of AI-generated PRs that pass SWE-bench would be rejected by actual project maintainers, exposing a 24-percentage-point gap between benchmark scores and real-world code acceptability.
Wrongfully Jailed by an Algorithm: AI Facial Recognition's Misidentification Crisis
At least eight innocent people, nearly all of them Black, have been wrongfully arrested because police trusted AI facial recognition systems that government studies show misidentify darker-skinned faces at rates 10 to 100 times higher than for white faces. The crisis isn't the technology alone; it's the institutional trust placed in systems with documented bias.
Microsoft's BitNet: How 1-Bit LLMs Could Make GPU Farms Obsolete
Microsoft's BitNet inference framework runs billion-parameter LLMs on ordinary CPUs using ternary weights, delivering up to 6x faster inference and 82% lower energy consumption than comparable llama.cpp baselines, potentially upending the assumption that AI inference requires expensive GPU hardware.
Document Poisoning: How Attackers Are Corrupting Your AI's Knowledge Base
Retrieval-augmented generation (RAG) systems trust their document stores, and attackers know it. Document poisoning injects false or malicious content into knowledge bases, causing AI systems to return attacker-controlled output to every user whose query retrieves the poisoned content. Here's what the research shows.
How Researchers Hacked McKinsey's AI Platform—and What It Reveals
Security researchers at CodeWall used an autonomous AI agent to breach McKinsey's Lilli platform in approximately two hours, exposing 46.5 million messages through SQL injection—a decades-old technique that enterprise AI teams consistently fail to prevent.