Self-Hosting Netwarden: The Single-Binary Preview
An open-beta preview of the self-hosted Netwarden edition: one binary compiled with Bun, SQLite inside, no Postgres, no Redis, no cloud. Honest notes on what's in, what's out, and how to run it on your own box.
I'd been telling people self-hosting was "on the roadmap" for about eight months. That answer was true, in the way "we'll get to it eventually" is technically a true statement about most things, and increasingly I hated giving it.
The thing that finally made me stop saying it was a DM. A homelabber running fourteen WordPress sites for small clients on a single Hetzner box told me, in so many words, he'd pay double for a version that didn't phone home. He had a tidy on-prem stack, a tidy backup story, and one piece — monitoring — that pulled his customer data through somebody else's cloud. He didn't want that. Most of the people I get on calls with don't want that.
So this post is the result. Netwarden's self-hosted edition is now in open beta as a single Bun-compiled binary. SQLite is inside. There is no Postgres to administer, no Redis to babysit, no Docker Compose stack with seven services and a lurking version-skew problem. Download the binary. Run it. Open localhost:3000.
That's the whole bar. Below is what's actually in it, what isn't, and how to put it on a box.
Why a single binary, instead of Helm or Compose
The audience this targets — homelabbers, agencies running a fleet of client sites on a couple of VPSes, IT folks running a half-rack at the office — does not run Kubernetes. A surprising number of them don't run Docker by default either. They run "a Debian box, sometimes Ubuntu, sometimes Alpine, with systemd and SSH." So the install story has to be smaller than a Compose file.
A few things shaped the call to ship a single binary first:
One file is the smallest possible install story. "Download this. Run it." beats "install Docker, then docker-compose up, then wait for the database to come up, then run the migration, then check this env var" by a wide margin. It's also the version of self-hosting where, when something breaks at 11pm, you have one process and one file to look at.
SQLite is genuinely good enough for the target. A homelab with 1-25 hosts produces, in a year, somewhere south of what any modest SQLite deployment handles every minute. The metrics_ts time-series table is the heaviest write path in the system — even there, with WAL mode enabled and the right pragmas, SQLite holds up fine for the scale this binary is for. Postgres + TimescaleDB is the right answer for the SaaS — not for someone running this on a NUC under the desk.
Bun's bun build --compile produces a real native binary. Not a self-extracting archive. Not an Electron-style packed Node runtime. A statically-linked executable. This matters because the install story collapses to "scp it" or "wget it." There's nothing to install alongside it.
Self-hosting is not a port of the SaaS. It's a different deployment shape with different trade-offs. The interesting design question wasn't "how do we make Postgres-on-Kubernetes run on a homelab box," it was "what's the smallest thing that does the same job." This binary is that.
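The "WAL mode and the right pragmas" point is concrete enough to sketch. This is a hypothetical pragma set for a write-heavy embedded SQLite deployment, not Netwarden's actual configuration — the values are illustrative:

```typescript
// Sketch: the pragmas a write-heavy embedded SQLite setup typically wants.
// Values are assumptions for this post, not Netwarden's real config.
function sqlitePragmas(): string[] {
  return [
    "PRAGMA journal_mode = WAL;",   // readers no longer block the writer
    "PRAGMA synchronous = NORMAL;", // fsync at checkpoint, not every commit
    "PRAGMA busy_timeout = 5000;",  // wait 5s on lock contention instead of erroring
    "PRAGMA cache_size = -64000;",  // ~64 MB page cache (negative means KiB)
    "PRAGMA foreign_keys = ON;",    // enforce FKs; off by default in SQLite
  ];
}

// Applied once per connection at startup, e.g.:
// for (const p of sqlitePragmas()) db.exec(p);
```

WAL plus `synchronous = NORMAL` is the usual trade for this scale: a crash can lose the last checkpoint's worth of metrics, but never corrupts the file.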
What's in the binary
Everything the SaaS does, end-to-end, in one process:
- Full web UI. Same dashboards, same widgets, same hosts page. The only things missing are the multi-tenant flows (sign-up, billing, admin tenant switcher) — self-hosted is single-tenant by design.
- The complete API. All /api/* routes are mounted. Your agents talk to it the same way they talk to api.netwarden.com.
- Agent ingestion. The same /agent endpoint is in there. Point existing agents at the binary's URL and they ingest fine.
- SQLite, built in. No external database to set up. The schema is created on first boot and migrated forward on subsequent ones.
- Security wedge. All of it — the four agent collectors (failed logins, SSH config, listening ports, installed packages), the CVE matcher, the per-finding routing. CVE feed refresh runs as an internal cron inside the binary; same Ubuntu USN + Debian DSA + Red Hat sources the SaaS uses. The implementation deep dive lives in How Netwarden's security wedge works if you want the architecture.
- The full notification pipeline. Email (your SMTP server), mobile push, and outbound webhooks. Same alert config UI, same severity tiers.
- Auto-discovery. Docker, Podman, MariaDB/MySQL, Postgres, libvirt, Proxmox — all of it works out of the box, same as on SaaS.
What's not in the box, and you should know up front:
- GeoIP without your own MaxMind file. GeoLite2-City's binary database is free but not redistributable. The lookup gracefully no-ops when the file is missing — failed-login data still flows, just without country enrichment. If you want country-level alerts, sign up for a free MaxMind license, drop the .mmdb file on disk, and point GEOLITE2_CITY_PATH at it. The geoipupdate cron handles refreshes.
- Mobile push without your own Firebase project. The push pipeline routes through FCM. To use it on self-hosted, you need a Firebase service account JSON. It's a 10-minute setup if you want it; if you don't, email + webhook still cover most of what push does.
- Postgres support, yet. The architecture is adapter-based — there's a DatabaseAdapter interface with a SQLite implementation today and a Postgres implementation that's the SaaS path. A Postgres self-hosted deployment is a follow-on, not a v1.
That's the whole list of caveats. Everything else works.
The Bun-compile reality
A note for the kind of readers who care how the sausage is made — skip if not. This part of the project was harder than I expected, and the broad strokes are useful for anyone trying to compile a Next.js + SQLite + native-bindings stack into one file.
The build script is short. The interesting bits are in the Bun layer.
```bash
# build-selfhosted.sh
export BUILD_TARGET=bun
export NODE_ENV=production
export NEXT_PUBLIC_API_URL=
export NEXT_PUBLIC_SELF_HOSTED=true

bun install
bun --bun next build
npx next-bun-compile "$TARGET"
```
The first non-obvious thing: NEXT_PUBLIC_API_URL is set to the empty string. Self-hosted runs the UI and the API on the same origin, so every template literal like ${NEXT_PUBLIC_API_URL}/hosts collapses to a relative path like /hosts. No CORS, no host config, no proxy_pass needed.
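A toy illustration of that collapse — the helper and its names are made up for this post, not the actual codebase:

```typescript
// Sketch: an empty NEXT_PUBLIC_API_URL collapses every API call to a
// same-origin relative path. `apiUrl` stands in for the inlined env var.
function apiPath(apiUrl: string, route: string): string {
  return `${apiUrl}/${route.replace(/^\//, "")}`;
}

// SaaS build: absolute URL, cross-origin request
apiPath("https://api.netwarden.com", "/hosts"); // "https://api.netwarden.com/hosts"
// Self-hosted build: empty string, same-origin relative path
apiPath("", "/hosts"); // "/hosts"
```

Because Next.js inlines NEXT_PUBLIC_* values at build time, the self-hosted bundle never contains an absolute API origin at all.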
The second is the database adapter. The codebase has a DatabaseAdapter interface with two implementations — one wraps pg and is the path the SaaS takes; one wraps bun:sqlite and is the path the binary takes. Selection happens at startup based on DATABASE_URL. Tree-shaking removes whichever one isn't reachable, so the binary doesn't drag a pg dependency around it can't use. The query translator handles the differences ($1/$2 → ?, NOW() → datetime('now'), JSON ops, the usual suspects).
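A minimal sketch of the placeholder translation, assuming nothing about the real translator beyond the substitutions named above:

```typescript
// Hypothetical sketch of the Postgres → SQLite query translation for the
// two simplest cases: numbered placeholders and NOW(). A real translator
// must also re-order bound arguments when a placeholder like $1 repeats,
// and handle JSON operators; this skips both.
function pgToSqlite(sql: string): string {
  return sql
    .replace(/\$\d+/g, "?")                     // $1, $2, … → positional ?
    .replace(/\bNOW\(\)/gi, "datetime('now')"); // NOW() → SQLite datetime
}

pgToSqlite("SELECT * FROM hosts WHERE id = $1 AND seen_at > NOW()");
// → "SELECT * FROM hosts WHERE id = ? AND seen_at > datetime('now')"
```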
The third is the schema. Postgres-side, the schema lives in 90+ migration files driven by a custom migrate tool. SQLite-side, the schema is generated at startup from a single sqlite-schema.ts file that mirrors the production schema with the Postgres-specific bits stripped (TimescaleDB hypertables, continuous aggregates, stored procedures). The metrics_ts table is replaced by a SQLite analogue with app-level rollups instead of TimescaleDB continuous aggregates — same multi-resolution query shape, different storage backend.
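The app-level rollup is just a bucketing step run on a timer. A sketch with illustrative shapes — the real metrics_ts schema and resolutions may differ:

```typescript
// Sketch: collapse raw samples into fixed-width buckets (avg/max), the job
// TimescaleDB's continuous aggregates do automatically on the SaaS path.
interface Sample {
  ts: number;    // epoch seconds
  value: number; // e.g. CPU percent
}

function rollup(
  samples: Sample[],
  bucketSecs: number,
): Map<number, { avg: number; max: number }> {
  // Group raw values by bucket start time
  const buckets = new Map<number, number[]>();
  for (const s of samples) {
    const b = Math.floor(s.ts / bucketSecs) * bucketSecs;
    if (!buckets.has(b)) buckets.set(b, []);
    buckets.get(b)!.push(s.value);
  }
  // Reduce each bucket to the aggregates the dashboard queries need
  const out = new Map<number, { avg: number; max: number }>();
  for (const [b, vs] of buckets) {
    out.set(b, {
      avg: vs.reduce((a, v) => a + v, 0) / vs.length,
      max: Math.max(...vs),
    });
  }
  return out;
}
```

In the binary, something shaped like this would write bucketed rows back into a rollup table, so dashboard queries hit the same multi-resolution shape either way.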
Native bindings were the hairy part. The bcrypt and bun:sqlite modules both need their native code reachable in the compiled binary. Bun handles this for first-party modules; for bcrypt we ended up using the JavaScript-only port (bcryptjs) on the self-hosted path, gated behind the same BUILD_TARGET=bun flag. The performance hit on password hashing is real but invisible at homelab scale.
End result: a single statically-linked executable, around 100MB, that launches in well under a second and serves the whole app on :3000.
The install walkthrough
The actual one-liner, on Linux:
```bash
curl -fsSL https://get.netwarden.com/self-hosted/install.sh | bash
```
That fetches the binary, drops it in /opt/netwarden, generates a netwarden.conf with reasonable defaults, and registers a systemd unit. It does not start the service — read the config, set your SMTP credentials, then systemctl start netwarden-server.
The systemd unit ends up looking like this:
```ini
[Unit]
Description=Netwarden self-hosted server
After=network.target

[Service]
Type=simple
User=netwarden
ExecStart=/opt/netwarden/server
Restart=on-failure
WorkingDirectory=/opt/netwarden
Environment=DATABASE_URL=sqlite:///opt/netwarden/data/netwarden.sqlite

[Install]
WantedBy=multi-user.target
```
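That DATABASE_URL line is also how the binary picks its database adapter at startup. A hedged sketch of the branch, with made-up return values standing in for the real DatabaseAdapter implementations:

```typescript
// Sketch: select the database adapter from the DATABASE_URL scheme.
// The string return values are illustrative; the real code would
// construct a DatabaseAdapter instance instead.
function adapterFor(databaseUrl: string): "sqlite" | "postgres" {
  const scheme = new URL(databaseUrl).protocol.replace(/:$/, "");
  if (scheme === "sqlite") return "sqlite";
  if (scheme === "postgres" || scheme === "postgresql") return "postgres";
  throw new Error(`unsupported DATABASE_URL scheme: ${scheme}`);
}

adapterFor("sqlite:///opt/netwarden/data/netwarden.sqlite"); // "sqlite"
```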
If you'd rather containerize it — and a meaningful chunk of homelab folks would — the binary runs fine in any minimal base image. A FROM debian:stable-slim, copy the binary in, expose :3000, mount a volume at /data for the SQLite file, you're done. The image ends up under 150MB. There's no official Docker image yet because I don't want to ship one until the install path stabilizes; running it in your own image takes about three minutes.
For the agent side, the install command is unchanged: curl -sSL get.netwarden.com | bash on each host you want monitored. The only thing that changes is the agent_url in netwarden.conf on each agent — point it at your self-hosted binary's URL instead of api.netwarden.com.
What "open beta" means here
I want to be specific about what beta does and doesn't mean, because "beta" gets used to mean too many different things.
Schema migrations are stable enough to upgrade in place — the binary runs forward migrations on startup, same as the SaaS does. That said: take a backup before you upgrade. The whole database is one file. Copying the file is the backup.
The Pro tier (commercial license, multi-tenant, SSO) is a separate private beta with separate licensing terms. The binary documented in this post is for personal / homelab / single-team use under the open-source-style license that ships with it. If you want to self-host for a customer-facing agency setup or with multiple isolated tenants, that's the conversation; email [email protected].
I am personally responding to bug reports for the next four weeks. Not "the team." Me. File an issue on the GitHub repo, or email me. Issues that affect more than one person get prioritised; weird single-host issues get answered but not always fixed fast.
The nw export SaaS-to-self-hosted data migration is coming, not done yet. If you're a SaaS customer who wants to move to self-hosted with your historical data intact, hold off. The export tool will land in a follow-up release. New self-hosted installs are fine starting fresh.
What I'm watching for before I push to GA
This is what would convince me the binary is ready for general availability. Roughly in order:
- A week of clean upgrades on real homelab installs. The binary upgrade path is "stop service, replace binary, start service" — but I want to see that done by other people, on their boxes, with their data, before I claim it's a non-event. If migration N+1 corrupts somebody's database I want to know before it's a thousand boxes, not after.
- GeoIP works for everyone who tries it. The MaxMind setup story is the thing I'm least sure about. The .mmdb lookup itself is robust; the failure modes I haven't seen yet are around geoipupdate cron permissions, file ownership, and license-key rotation. A few people running through the setup against their own data will surface those.
- No CVE-feed regressions on self-hosted. The matcher pulls Ubuntu USN, Debian DSA, and Red Hat security data daily. The feeds change shape occasionally; the parsers have to keep up. If a feed format change silently breaks matching on the self-hosted binary while the SaaS catches it via metrics, that's a class of bug I want to design out before GA.
- Agent compatibility holds. The same agent binary should ingest into SaaS or self-hosted with no code changes. Today it does. I want a few weeks of homelab usage to confirm there's no edge case where the self-hosted ingest path treats a payload differently.
If those four are clean for a few weeks, this stops being a preview. I'm not putting a date on that publicly because I don't want to ship to a calendar — but it's not "next year." It's the kind of milestone where I'll know it when I see it.
A note on the buyers I built this for
If you've read why I built Netwarden you know the original problem was a homelab monitoring gap. The self-hosted binary closes the rest of that loop. The SaaS exists because some people (me, on weekdays, when I'm not on call for my own box) genuinely want a hosted version. The binary exists because most of the people I talk to who care about monitoring their own infrastructure also care about not handing it to someone else's cloud.
The honest pitch hasn't changed. Free tier, on the binary, covers up to 5 hosts. That's enough for almost every personal homelab in the world. The companion piece on what to actually monitor and how is the small-team monitoring playbook; the Proxmox-specific walkthrough is in Monitor a Proxmox cluster without Datadog. All of that applies on the binary the same way it does on the SaaS.
Try it, file issues, tell me what's bad
Go to /self-hosted for the install command and the latest binary. The full setup walkthrough lives in /docs/self-hosting. The repo is on GitHub; the issue tracker is the right place for bugs.
The thing I want most from this beta is the kind of feedback you only get from people running the binary on real hardware with real data. "It crashed on this distro." "The migration broke after upgrade." "The CVE matcher missed this advisory." "The systemd unit doesn't restart cleanly." If you find any of those, please tell me. The point of a beta isn't to charge for it. It's to find the things I missed before twelve thousand homelabbers do.
Build something on top of this. Run it on a box that bothers you. Let me know what's broken.
— Thiago
Keep reading
- How Netwarden's security wedge works — the implementation deep dive on findings, the CVE pipeline, and GeoIP.
- Why I built Netwarden — the founder note on why the self-hosted edition was inevitable.
- The small-team monitoring playbook — what to actually monitor on the homelab once the server is running.
- Monitor a Proxmox cluster without Datadog — a worked example on a real homelab.
- Self-hosting documentation — the install reference for the binary.