On April 22, David Crawshaw published “I am building a cloud” and closed a $35M Series A[1] for exe.dev on the same day. The post is part manifesto, part sales pitch. That’s not a criticism; the strongest infrastructure arguments have always come from people with something better to ship. For platform teams, the specific claims about IOPS, egress, and Kubernetes overhead now have a funded counterexample attached.

What Shipped on April 22

Crawshaw co-founded Tailscale in 2019 with Avery Pennarun and David Carney (Brad Fitzpatrick joined soon after), and he is a longtime contributor in the golang GitHub org. exe.dev is his new company, operating its own machines in data centers rather than building atop hyperscalers. The “one-person cloud” framing in the post refers to operability, not headcount: he has a co-founder and $35M[1] from Amplify (lead), CRV, and Heavybit.

The stack described in the Series A announcement includes CPU and memory as independent pools rather than pre-bundled instance sizes, local NVMe with asynchronous off-machine block replication, an anycast global frontend, built-in TLS and auth proxies, SSH-based onboarding, and flat-rate pricing for individual developers. On the roadmap but not yet shipped: static IPs, automatic historical disk snapshots, additional data center locations, and custom network design.

The Four Claims Against the Hyperscaler Default

Instance bundles. Cloud providers sell fixed CPU-memory-storage ratios optimized for average workloads. If your workload is memory-heavy with light CPU, you pay for the CPU anyway. exe.dev’s separate pools address this directly, though the same criticism applies to most managed alternatives today.

Block storage. According to the post, configuring an EC2 instance for 200k IOPS costs roughly $10k/month[2], while a MacBook delivers 500k IOPS locally. The numbers are plausible for io2 Block Express provisioning but depend on instance type and configuration, and Crawshaw’s interest in the comparison landing hard is obvious. Whether or not the specific figures survive your own pricing analysis, the structural claim holds: cloud block storage hasn’t kept pace with commodity NVMe, and you pay for the gap.
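For a rough sanity check on that figure, the tiered arithmetic is easy to reproduce. The sketch below uses approximate us-east-1 list prices for io2 Block Express provisioned IOPS and storage; the tier boundaries and per-unit prices are assumptions to verify against the current EBS pricing page, and the instance required to drive 200k EBS IOPS adds to the total.

```python
# Back-of-envelope check of the provisioned-IOPS cost claim.
# Tier prices below are approximate us-east-1 list prices for io2 Block
# Express at the time of writing -- treat them as assumptions and verify
# against the current EBS pricing page before acting on the result.

IOPS_TIERS = [
    (32_000, 0.065),        # first 32k provisioned IOPS, $/IOPS-month
    (64_000, 0.046),        # 32,001-64,000
    (float("inf"), 0.032),  # above 64,000
]
STORAGE_PER_GB_MONTH = 0.125  # io2 storage, $/GB-month

def io2_monthly_cost(provisioned_iops: int, size_gb: int) -> float:
    cost, prev_cap = 0.0, 0
    for cap, price in IOPS_TIERS:
        in_tier = min(provisioned_iops, cap) - prev_cap
        if in_tier <= 0:
            break
        cost += in_tier * price
        prev_cap = cap
    return cost + size_gb * STORAGE_PER_GB_MONTH

# Roughly $9,150/month for 200k IOPS on a 10 TB volume, before the
# instance itself -- in the same ballpark as the post's figure.
print(f"${io2_monthly_cost(200_000, 10_000):,.0f}/month")
```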

Egress. The post states that standard egress from a cloud provider runs 10x what a colocation customer pays at a normal data center, with meaningful discounts starting only at eight-figure monthly spend. This is widely reported and rarely disputed by anyone who has run the math.

Kubernetes. Crawshaw writes that Kubernetes is “attempting to solve an impossible problem: make clouds portable and usable. It cannot be done.” The argument is that the abstraction exists to paper over cloud primitives that were deliberately made awkward, and that the right fix is better primitives, not a better wrapper around bad ones.

The Numbers Practitioners Can Act On

As of late April, the HN thread[3] had drawn 1,115 points and 561 comments. Among the notable reports: engineers citing six-figure monthly Kubernetes bills for sub-5,000-concurrent-user workloads, and accounts of migrations to Kamal and Docker on a single VM. The thread is self-selected and skews toward the most dramatic cases, but the specificity and volume suggest the problem isn’t isolated.

The IOPS figure is Crawshaw’s own benchmark and should be verified against your workload and your actual EC2 configuration before it drives any decisions. The egress 10x figure is easier to validate independently against your own billing data.
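One way to do that verification is to measure the volumes you already have. Below is a minimal sketch, assuming fio is installed on the host; the flags are standard fio options, but the JSON field layout can shift between fio versions, and libaio is Linux-specific (macOS would use posixaio).

```python
# Minimal sketch: measure 4k random-read IOPS on a target path with fio,
# so the MacBook-vs-EBS comparison can be rerun on your own volumes.
# Assumes fio is installed and the path has room for the test file.
import json
import subprocess

def random_read_iops(path: str, runtime_s: int = 30) -> float:
    result = subprocess.run(
        [
            "fio", "--name=iops-check", f"--filename={path}/fio-test",
            "--rw=randread", "--bs=4k", "--direct=1", "--ioengine=libaio",
            "--iodepth=64", "--numjobs=4", "--size=2G",
            "--time_based", f"--runtime={runtime_s}",
            "--group_reporting", "--output-format=json",
        ],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    # With --group_reporting, fio emits one aggregated job entry.
    return report["jobs"][0]["read"]["iops"]

if __name__ == "__main__":
    print(f"{random_read_iops('/mnt/data'):,.0f} IOPS")
```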

Why ‘We Use Kubernetes Because Everyone Does’ Got More Expensive to Defend

‘Kubernetes is too much’ is a recurring essay genre. Kamal, Coolify, Dokploy, and various bare-metal advocates have been making versions of this argument for years. None forced a reckoning at scale because none came with funded infrastructure as a demonstrated alternative.

Crawshaw’s post is different on two counts: a networking critique from someone who co-built production-grade mesh networking at Tailscale, and a shipped commercial stack that translates the critique into running primitives. That doesn’t prove the alternative is better for your use case. It does mean the default position, “we use Kubernetes because it’s what serious teams use,” now requires a TCO answer, not just a convention.

Where Kubernetes Still Earns Its Overhead

Defenders in the HN thread[3] made the counter-case: managed Kubernetes requires near-zero weekly maintenance after initial setup for small teams, and K3s on a single node is considerably simpler than assembling cloud-init, service discovery, and secrets management from scratch. Several characterized K8s as an extension of the Linux OS layer rather than a microservices artifact.

The cases where the overhead is justified: multi-tenant platforms where namespace isolation is a hard requirement, compliance environments that demand auditable workload separation, and teams already operating across multiple clouds where portability is real rather than theoretical. It’s also worth separating the Kubernetes tax from the organizational dysfunction that six-figure bills often reflect: an orchestration layer doesn’t generate runaway service sprawl; it reveals it.

What Platform Teams Should Do Now

Run the egress and block storage math against your actual bill. If neither is a significant cost driver, Crawshaw’s benchmarks don’t apply to your situation. If they are, the gap between what you’re paying and what bare-metal NVMe or colo egress would cost is worth calculating explicitly rather than leaving as a vague assumption.
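A rough shape for that calculation: compare per-GB billing against the effective cost of a committed transit port. The $0.09/GB figure below is the commonly cited AWS internet-egress list rate after the free tier; the $0.50/Mbps commit price is an illustrative placeholder, so substitute quotes from your own colo or transit provider.

```python
# Rough egress comparison: hyperscaler per-GB list rate vs. the effective
# cost of a committed colo/transit port. The colo commit price is an
# illustrative assumption -- replace both inputs with your own numbers.

def cloud_egress_monthly(gb_out: float, per_gb: float = 0.09) -> float:
    return gb_out * per_gb

def colo_egress_monthly(gb_out: float, commit_mbps: float,
                        per_mbps_month: float = 0.50) -> float:
    # A flat-rate commit: you pay for the port, not per byte, as long as
    # average transfer fits within the committed bandwidth.
    needed_mbps = gb_out * 8_000 / (30 * 24 * 3600)  # avg Mbps over a month
    assert needed_mbps <= commit_mbps, "traffic exceeds the commit"
    return commit_mbps * per_mbps_month

monthly_tb = 50  # adjust to your bill
gb = monthly_tb * 1000
print(f"cloud: ${cloud_egress_monthly(gb):,.0f}/mo")
print(f"colo : ${colo_egress_monthly(gb, commit_mbps=1000):,.0f}/mo")
```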

Track the exe.dev roadmap. The current stack is missing static IPs and automatic historical snapshots, which are table stakes for most production workloads. The Series A runway suggests they will arrive within twelve months. Evaluate when they do, not now.

Distinguish Kubernetes-as-platform from Kubernetes-as-default. If the stack was inherited or nobody owns the decision, that’s a different situation from running it because multi-tenancy or compliance genuinely requires it. The former is worth revisiting; the latter isn’t.

The $35M[1] is a data point, not a verdict. What Crawshaw has built is a benchmark for what the alternative actually costs. For platform teams, that benchmark is more useful than the manifesto that accompanied it.

Frequently Asked Questions

What workloads should avoid exe.dev in its current state?

Database workloads requiring a zero recovery-point objective can’t safely use the block layer yet: replication is asynchronous, so a window of potential data loss exists between a local write and its off-machine replica. The default image is Ubuntu-only, excluding teams with Alpine or distroless security-hardening requirements. Anything tied to fixed IP addresses (DNS glue records, compliance allowlists) must wait for the static IPs roadmap item.

At what monthly cloud spend does the egress markup become the primary cost driver?

Rarely. At the $10–50k/month range typical of mid-market cloud users, compute and managed services dominate the bill; egress at 10x colo rates is a real line item but usually not the largest. The IOPS gap bites sooner for database-heavy workloads: paying $10k/month for 200k IOPS on io2 Block Express when commodity NVMe delivers 500k is a higher percentage of spend. The quietest savings often come from auditing CPU-to-memory utilization ratios on fixed instance sizes.
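That audit reduces to a small piece of arithmetic: on a fixed bundle, the resource you exhaust first decides how many instances you buy, and the other pool rides along partly idle. The sketch below uses illustrative numbers, not measurements of any particular instance family.

```python
# Sketch of the CPU-to-memory ratio audit: for a fixed instance bundle,
# whichever resource you exhaust first forces you to over-buy the other.
# Instance shape and utilization numbers are illustrative -- pull yours
# from your own metrics.

def bundle_waste(inst_vcpu: int, inst_gib: int,
                 peak_vcpu: float, peak_gib: float) -> dict:
    # Fraction of each pool actually used at peak.
    cpu_util = peak_vcpu / inst_vcpu
    mem_util = peak_gib / inst_gib
    # The binding resource sets how many instances you must buy;
    # the other pool is the one you pay for but leave idle.
    binding = "cpu" if cpu_util >= mem_util else "memory"
    idle_fraction = 1 - min(cpu_util, mem_util) / max(cpu_util, mem_util)
    return {"binding": binding, "idle_fraction_of_other_pool": idle_fraction}

# e.g. a 16 vCPU / 64 GiB box running a cache-heavy service:
print(bundle_waste(inst_vcpu=16, inst_gib=64, peak_vcpu=3.5, peak_gib=58))
```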

How does Uncloud compare to exe.dev for a team evaluating small-cloud options?

Uncloud (psviderski/uncloud) is open-source and self-hosted, running Docker Compose over a WireGuard overlay across any VMs you choose, including on hyperscalers. It imposes no vendor lock-in or per-seat pricing, but you own hardware failure recovery, TLS termination, and network routing. exe.dev operates its own machines and handles those primitives as managed services: you trade that independence for operational simplicity, accepting single-provider dependency on a platform still missing static IPs and disk snapshots.

What exe.dev roadmap items change the evaluation for production database workloads?

Automatic historical disk snapshots are the key blocker — point-in-time recovery is non-negotiable for any primary data store. Custom network design would enable the private VPC-like topologies that compliance frameworks (SOC 2, HIPAA) require for data-in-transit isolation. Until both ship, exe.dev is strongest for stateless frontends, API layers, and batch jobs rather than systems of record.

What’s the minimum team size where Kubernetes overhead actually pays for itself?

It’s not team size but workload topology that determines the threshold. K3s on a single node provides declarative service discovery, secrets management, and rolling deployments with less operational complexity than assembling Puppet or Chef with cloud-init, even for a solo operator. The overhead justifies itself when you need namespace isolation for multi-tenant workloads, auditable separation for compliance, or genuine multi-cloud portability. Below those requirements, Kamal or Docker Compose cover the same deployment benefits without the control plane.

Footnotes

  1. exe.dev Series A

  2. Building a Cloud

  3. HN Discussion

Sources

  1. exe.dev Series A (vendor), accessed 2026-04-29
  2. Building a Cloud (primary), accessed 2026-04-29
  3. HN Discussion (community), accessed 2026-04-29
