The road to fully offline AI runs through here.
Four months ago we set out to build AI that runs on your hardware, with your data, under your control. No cloud. No subscriptions. No one training on your ideas.
We had six inference nodes online at one point — two Blackwell workstation cards, three Strix Halo unified-memory boxes, and a smart NAS that runs compute alongside storage. Two more Strix Halos are racked and waiting; a three-card RDNA4 build is on deck. Then we rebuilt the Beacon — the control plane, the brain — from scratch on a clean box, because the first one had grown too tangled to trust. Now the lab is being migrated back node by node, deliberately, every machine planned out before it gets a wire.
Beacon — the control plane, the brain — runs clean on a single workstation-class box. Two production inference lanes are live alongside it: one tuned for code generation, one for long-document reasoning. Both run entirely on local hardware. Memory is local. Logs are local. Nothing leaves the building.
What's still moving. A third inference node is being brought online — kernel work in progress. The autonomous build loop — the part where the lab builds itself — ran for nineteen hours straight, then stopped in the way we knew it eventually would. Fix is small and known: better-trained planning model, tighter target ownership, templates instead of free-form code. Bringing it back online deliberately, like everything else.
Next. Weekly Build Logs from here. Plan written down before each cable goes in.
Shipped this week. The website you're reading. The vision finally written in our own voice instead of borrowed pitch language. Lab section restructured around weekly Build Logs and a snapshot of where the lab actually is — not where the marketing says it should be.
What broke and what we did about it. Documentation grew faster than our ability to navigate it. We collapsed sixteen working files into one entry-point index pointing at sectioned reference files. Every doc has a parent, every claim has a source. The discipline is the moat.
Next. Bring the autonomous build loop back online with the fixes already identified. Plan written down before the cable goes in.
Shipped this week. A documentation discipline. Sixteen scattered notes collapsed into one entry-point index pointing at sectioned reference files, each capped at single-sitting read length. Not glamorous. The work product now stays organized at the speed it's being made.
What broke and what we did about it. Our first autonomous build loop ran on cron jobs that produced no output. We caught it because every step is supposed to leave a receipt, and the receipts kept coming back empty. We switched to systemd timers with health-check guards that refuse to start unless the upstream service is reachable. Failed-loud, not silent. Receipts catch what good intentions miss.
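The cron-to-systemd move above can be sketched as a timer plus a guarded service. This is a minimal illustration, not the lab's actual config — the unit names, binary paths, and health endpoint are all assumptions:

```ini
# build-loop.timer — replaces the cron entry (illustrative names throughout)
[Unit]
Description=Run the autonomous build loop hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

# build-loop.service — the unit the timer activates
[Unit]
Description=Autonomous build loop step
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Health-check guard: curl --fail exits non-zero when the upstream
# service is unreachable, so the unit fails loudly before ExecStart
# ever runs, instead of silently producing nothing the way cron did.
ExecStartPre=/usr/bin/curl --fail --silent --max-time 5 http://127.0.0.1:8080/health
ExecStart=/usr/local/bin/build-loop-step
```

A failed `ExecStartPre` marks the service as failed, which shows up in `systemctl status` and the journal — failed-loud by construction.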
What we learned. The first version of any build loop will lie to you about whether it ran. Build the receipt format before you build the loop.
Shipped this week. The control plane went up in twenty-eight hours. Most of the work units passed independent verification on the first run. Some failed for the same root cause — file-write permissions in a sandboxed runner — and waited there until we built around them instead of through them. A few were deferred. Every one has a receipt: the command run, the output captured, the timestamp.
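A receipt with those three fields — command, output, timestamp — could look like the sketch below. This is an illustration of the idea under assumed names (`run_with_receipt`, the `receipts/` directory), not the lab's actual format:

```python
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def run_with_receipt(command: list[str], receipt_dir: Path = Path("receipts")) -> dict:
    """Run a command and leave a receipt: the command run, the output
    captured, and a timestamp. Refuse to pass silently on emptiness."""
    result = subprocess.run(command, capture_output=True, text=True)
    receipt = {
        "command": command,
        "exit_code": result.returncode,
        "output": result.stdout,
        "stderr": result.stderr,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # One JSON file per step, named by timestamp, so verification can
    # replay exactly what ran and when.
    receipt_dir.mkdir(exist_ok=True)
    name = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f") + ".json"
    (receipt_dir / name).write_text(json.dumps(receipt, indent=2))
    # Fail loud: an empty receipt means the step lied about running.
    if result.returncode != 0 or not result.stdout.strip():
        raise RuntimeError(f"step left an empty or failed receipt: {command}")
    return receipt


receipt = run_with_receipt(["echo", "hello"])
```

The point of writing the receipt before checking it is that even a failed step leaves evidence on disk — the format exists before the loop does.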
What broke and what we did about it. The plan has been rewritten more than once because the first few drafts were wrong. Building in public means showing the rewrites, not hiding them.
What's still moving. A third inference node is blocked on a kernel issue. Voice pipeline still in design. Closeout pipeline writes some sessions but not all — partial fix landed, hardening this week.