Replacing AppDaemon with Rust Binaries
Tonight I pulled one automation out of my AppDaemon instance and replaced it with a compiled Rust binary. The binary uses 26 MB of RAM, zero measurable CPU, and sleeps until the kernel wakes it. AppDaemon - still running the other seven lighting apps - was using 88 MB and 33% of a core.
This is the start of something I’ve been thinking about for a while.
The problem with AppDaemon
I run AppDaemon to control the lights in my house. It’s a Python runtime that connects to Home Assistant over WebSocket and lets you write automations as Python classes. It’s powerful - I’ve got apps for porch lights, kitchen lights, bathroom lights, living room zones, Christmas lights, guest bedroom, landing, the lot. Eight apps in total, each one a Python class that listens for state changes and calls services.
The problem is observability. All eight apps share a single Python process, so you can’t tell which one is using memory, which one is burning CPU, or which one is misbehaving. top shows you one process: appdaemon, 88 MB, 33% CPU. That’s it. You can’t attribute any of that to a specific automation.
And it’s a lot of machinery for what these apps actually do. AppDaemon brings its own runtime, its own scheduler, its own plugin loader, its own restart logic - all running inside a Docker container with its own Python environment and config files. Meanwhile, the operating system already has all of this. It has a process supervisor. It has a scheduler. It has logging. It has resource limits. AppDaemon is reimplementing the OS, in userspace.
What if each automation was just a process?
The idea behind signal-ha is simple: each automation is a standalone binary. One process, one job, managed by systemd.
No plugin loader. No shared runtime. No Docker container. Just a main() function that connects to HA, subscribes to the entities it cares about, and sleeps until something happens.
You don’t need a framework for any of the operational stuff. systemd already exists:
- Restart on crash - Restart=on-failure
- Resource limits - MemoryMax=64M, CPUQuota=10%
- Logging - stdout goes straight to journald, which feeds into VictoriaLogs
- Dependency ordering - After=network-online.target
- Status - systemctl status porch-lights
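Collected into a unit file, the whole supervision story fits in a dozen lines. This is an illustrative sketch - the unit name and binary path are mine, not from the actual deployment:

```ini
# /etc/systemd/system/porch-lights.service  (illustrative)
[Unit]
Description=Porch lights automation
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/porch-lights
Restart=on-failure
MemoryMax=64M
CPUQuota=10%
# stdout/stderr go to journald by default; no logging config needed

[Install]
WantedBy=multi-user.target
```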
The automation doesn’t need to know about any of this. It just runs. And because each one is its own process, top tells you exactly what each automation costs.
signal-ha: the library
The core is a small async Rust library called signal-ha. It provides three things:
HaClient - a WebSocket client that handles authentication, state queries, service calls, and real-time state subscriptions. It also has a REST API client for creating transient entities (the same set_state that AppDaemon uses to publish debug sensors).
Scheduler - sun-aware timers. You ask for a stream that fires at sunrise, or sunset, or 05:30 every morning. Under the hood it’s the sunrise crate for solar calculations and tokio timers.
Types - EntityState, StateChange. Thin wrappers around HA’s JSON, just enough structure to avoid stringly-typed bugs.
That’s it. No framework, no lifecycle hooks, no plugin API. You import the library and write async Rust.
The first automation: porch lights
My porch lights follow a simple pattern:
- Morning window (05:30 → sunrise): lights on if ambient lux is below 700
- Sunset window (sunset → midnight): same rule
- Override: an input_boolean freezes the automation
- Halloween: candle effect on October 31st
The Rust version is about 300 lines of main.rs. It connects to HA, reads the initial lux and override state, subscribes to changes on both, sets up four timer streams (morning, sunrise, sunset, midnight), and enters a tokio::select! loop.
When a lux reading arrives or a timer fires, it evaluates the desired state and applies it. Between events, it does literally nothing. The process sits in epoll_wait, fully asleep, using zero CPU.
porch-lights   26 MB RES   0.0% CPU   0:00.15 TIME+
appdaemon      88 MB RES    33% CPU   176:41  TIME+
The AppDaemon process had accumulated nearly three hours of CPU time in less than a day of uptime. The Rust binary had used 0.15 seconds.
Transient entities
One thing AppDaemon does well is set_state() - you can create entities on the fly in HA’s state machine. My automations use this to publish a “reason” sensor that shows why the lights are on or off: “sunset window, lux 1 < 700” or “override active”.
HA supports this via its REST API: POST /api/states/sensor.porch_lights_reason. The entity appears immediately in the HA UI, complete with friendly name and icon. It’s transient - it survives until HA restarts, then gets recreated the next time the automation writes to it.
I added this to signal-ha as HaClient::set_state(), so any automation can publish debug state just like AppDaemon did.
What’s next
This is one automation out of eight that AppDaemon is still running. The porch lights were the simplest - a good proving ground for the library. The kitchen and living room automations are more complex, with circadian colour temperature, presence scoring, and multi-zone coordination. Those are still Python classes inside AppDaemon, and they’ll stay there until the Rust equivalents are ready.
I’m also thinking about giving each binary a tiny built-in HTTP endpoint - a /status page that shows the entities it watches, the current window, the last decision and why. A mini dashboard per automation, served by the process itself. No Grafana, no external UI. Just curl localhost:9001/status.
But for now, there’s a 26 MB Rust binary running on my NAS, controlling my porch light, sleeping until the kernel tells it something changed. And that feels right.
signal-ha is very early and very specific to my setup, but the pattern is general: one process per automation, systemd as the framework, sleep until woken.
