FlowLayer
Server-driven local service runtime
FlowLayer starts and orchestrates local services from a JSONC config, exposes a live Session API, streams logs in real time, and lets clients operate the runtime while it is running. The official client is the TUI, and the protocol is open.
Up and running in two commands
# 1. Start the server
flowlayer-server -s 127.0.0.1:6999 -c flowlayer.jsonc
[flowlayer] boot: session token: fl_abc123
# 2. Connect the TUI
flowlayer-client-tui -addr 127.0.0.1:6999 -token <token>
Session token
The token is printed by the server at startup. The server logs show boot: session token: fl_xxxxx; copy that value and pass it as -token to the TUI.
Server uses the config file
Pass -c flowlayer.jsonc to specify which services to start, how they depend on each other, and readiness rules.
TUI supports direct or config mode
Connect directly with -addr and -token, or use -config flowlayer.jsonc to read session.addr and session.token. All service truth stays in the server.
Why FlowLayer Exists
Modern stacks mix services, builds, migrations, scripts and containers across multiple technologies. Managing them with scattered shell scripts leads to race conditions and broken environments.
FlowLayer models cross-tech dependencies, enforces deterministic execution order, and ensures every service starts only when its prerequisites are satisfied — reproducibly, for every developer.
Determinism
Same execution order every run. The static dependency graph guarantees that Wave A always precedes Wave B.
Beyond Containers
Orchestrate builds, code generation, migrations, scripts and services in one cohesive workflow — not just Docker containers.
Team Ready
One config, identical behavior for every developer. Simplify onboarding, eliminate scattered scripts, and stabilize environments.
Hot Reload Friendly
Hot-reload watchers start only after builds and dependencies are healthy — no premature startup, no inconsistent state.
Docker as a Complement
Run Docker services alongside native processes in one workflow. Docker handles isolation; FlowLayer handles orchestration.
One Runtime Server, Multiple Clients
FlowLayer runs as the server runtime. It owns service lifecycle, state transitions, and log retrieval. Clients connect through the Session API and protocol to observe and control the same live runtime. The official client is the TUI.
FlowLayer Server
Loads config, computes waves, starts and stops processes, tracks states, and exposes Session API endpoints.
/ws → WebSocket protocol
/health → authenticated health probe
Commands → start/stop/restart/get_snapshot/get_logs
Clients (TUI and others)
Connect using bearer auth, receive hello + snapshot, consume live events, replay logs with sequence continuity, and send control commands.
Events → service_status, log
Session lifecycle → hello, snapshot, live stream
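The client flow above can be sketched as a small message dispatcher. The message and field names used here (`type`, `service`, `status`, `seq`, `line`, `session`) are illustrative assumptions, not the published wire format:

```python
import json

# Hypothetical message shapes, inferred from the event names above;
# the real protocol's field names may differ.
def dispatch(raw: str, handlers: dict) -> str:
    """Route one incoming protocol message to a handler keyed by its type."""
    msg = json.loads(raw)
    kind = msg.get("type")
    if kind in ("hello", "snapshot", "service_status", "log"):
        return handlers[kind](msg)
    raise ValueError(f"unknown message type: {kind!r}")

handlers = {
    "hello": lambda m: f"session {m['session']}",
    "snapshot": lambda m: f"{len(m['services'])} services",
    "service_status": lambda m: f"{m['service']} -> {m['status']}",
    "log": lambda m: f"[{m['seq']}] {m['service']}: {m['line']}",
}

print(dispatch('{"type": "service_status", "service": "billing", "status": "ready"}', handlers))
# billing -> ready
```

A real client would feed each WebSocket frame through such a dispatcher after the initial hello + snapshot handshake.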
[09:21:11] log billing stdout seq=1539 booting HTTP server...
[09:21:12] log billing stdout seq=1540 listening on :3002
[09:21:12] event service_status billing → ready
[09:21:13] log billing stdout seq=1541 GET /health 200
[focus] command restart_service billing acknowledged
[09:21:14] log billing stdout seq=1542 POST /api/invoice 201
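"Sequence continuity" means a reconnecting client can replay logs without duplicates or silent gaps. A sketch of how a client might filter a replayed batch against the last sequence number it already rendered (the gap-handling policy here is an assumption, not FlowLayer's documented behavior):

```python
def resume_logs(replayed, last_seq):
    """Keep only entries newer than last_seq, preserving order.
    Raises if a hole in the sequence is detected."""
    fresh = []
    expected = last_seq + 1
    for entry in replayed:
        seq = entry["seq"]
        if seq <= last_seq:
            continue  # already rendered before the reconnect
        if seq != expected:
            raise RuntimeError(f"gap: expected seq {expected}, got {seq}")
        fresh.append(entry)
        expected += 1
    return fresh

# mirrors the transcript above: client last saw seq=1540 before reconnecting
replay = [{"seq": s, "line": f"line {s}"} for s in range(1539, 1543)]
print([e["seq"] for e in resume_logs(replay, 1540)])
# [1541, 1542]
```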
Single JSONC File
Define the runtime in one strict JSONC file: session bind/token, services, dependencies, and readiness rules.
{
  "session": { "bind": "127.0.0.1:6999", "token": "flowlayer-dev-token" },
  "services": {
    "ping": {
      "cmd": "docker compose -f docker/python.compose.yml up",
      "ready": { "type": "http", "url": "http://localhost:8080/health" }
    },
    "users": {
      "cmd": "npm --prefix services/users run start:dev",
      "port": 3001,
      "ready": { "type": "http", "url": "http://localhost:3001/health" }
    },
    "billing": {
      "cmd": "npm --prefix services/billing run start:dev",
      "port": 3002,
      "dependsOn": ["users"],
      "ready": { "type": "http", "url": "http://localhost:3002/health" }
    }
  }
}
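Because JSONC allows comments, a plain JSON parser would reject an annotated config. A minimal illustrative parser (not FlowLayer's actual loader) that strips `//` line comments outside of strings, so URLs like `http://localhost:3001/health` survive:

```python
import json

def parse_jsonc(text: str):
    """Strip // line comments (outside strings) and parse as JSON.
    A minimal sketch: handles // comments but not /* */ blocks."""
    out = []
    for line in text.splitlines():
        in_str = False
        i = 0
        while i < len(line):
            ch = line[i]
            if ch == '"' and (i == 0 or line[i - 1] != "\\"):
                in_str = not in_str
            elif ch == "/" and not in_str and line[i:i + 2] == "//":
                line = line[:i]  # cut the comment off
                break
            i += 1
        out.append(line)
    return json.loads("\n".join(out))

config = parse_jsonc('''
{
  // local-only bind; the token here is a dev placeholder
  "session": { "bind": "127.0.0.1:6999", "token": "flowlayer-dev-token" }
}
''')
print(config["session"]["bind"])
# 127.0.0.1:6999
```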
Dependency DAG
FlowLayer builds a directed acyclic graph from service dependencies. Cycles are rejected before execution. Startup flows top → down. Shutdown reverses the graph.
Topological Waves
The DAG produces a deterministic execution plan. Each wave runs in parallel; the next wave starts only after all services in the current wave resolve.
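Wave computation is a standard topological sort (Kahn's algorithm). A minimal sketch, not FlowLayer's actual implementation, with alphabetical tie-breaking inside each wave so the plan is identical run to run:

```python
from collections import defaultdict

def waves(deps):
    """Group services into topological waves: each wave holds every
    service whose dependencies are all satisfied by earlier waves."""
    indeg = {svc: len(ds) for svc, ds in deps.items()}
    dependents = defaultdict(list)
    for svc, ds in deps.items():
        for dep in ds:
            dependents[dep].append(svc)
    plan = []
    ready = sorted(s for s, n in indeg.items() if n == 0)
    while ready:
        plan.append(ready)
        nxt = []
        for done in ready:
            for svc in dependents[done]:
                indeg[svc] -= 1
                if indeg[svc] == 0:
                    nxt.append(svc)
        ready = sorted(nxt)  # deterministic order within a wave
    if sum(len(w) for w in plan) != len(deps):
        raise ValueError("dependency cycle detected")
    return plan

# mirrors the sample config: billing depends on users; ping has no deps
print(waves({"ping": [], "users": [], "billing": ["users"]}))
# [['ping', 'users'], ['billing']]
```

With the sample config this yields two waves: ping and users start in parallel, and billing starts only after users resolves.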
What FlowLayer Is Not
FlowLayer is built for predictability, not general-purpose automation. To preserve that focus, several standard orchestrator features are deliberately left out.
"A deterministic local runtime with practical observability, not a full production monitoring stack."
No monitoring
Does not collect CPU, memory, or other performance metrics.
Not a cluster orchestrator
Does not schedule workloads across a cluster and is not a replacement for Kubernetes.
No auto-restart
Does not restart failed processes automatically (fail fast principle).
No multi-node orchestration
Runs local services on one machine; no multi-node orchestration.
FlowLayer is designed to be the invisible connective tissue of your development environment. It does one thing with surgical precision: ensuring that your local services start and stop exactly as they should, every single time.