
12
Oct 24, 2025

I'll be honest: I like order and organization as much as the next obsessive person. But sometimes I think we invent complexity just to justify... well, whatever.

The same goes for jargon.

Today, however, I won't complain about that. I find the Twelve-Factor methodology hits the sweet spot (though the jargon still bothers me).

12-Factor Methodology

It's a practical set of constraints for building services that deploy cleanly, scale predictably, and don't turn into "works on my laptop" folklore. The core ideas map extremely well to modern Rust services running in containers, Kubernetes, Nomad, systemd, or basically anything that can start a process and feed it environment variables.

This post walks through all 12 factors with a concrete Rust shape: a small HTTP API using axum, tokio, and sqlx.

A tiny service that reads its config from the environment, binds a port, talks to Postgres, and logs to stdout.

Factor I — Codebase: one codebase, many deploys

One service = one codebase tracked in version control. If you have multiple codebases for one app, you're already in distributed-system territory; treat each component as its own app.

Rust fit: a single repo can still contain multiple binaries (like a server binary and a migrate binary under src/bin/), and each one still deploys from the same codebase.
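A possible layout sketch (the file names are illustrative; the migrate binary reappears under Factor XII):

```
src/
  main.rs          # the server binary
  config.rs
  db.rs
  bin/
    migrate.rs     # one-off admin binary (Factor XII)
migrations/        # sqlx migration files
```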

Factor II — Dependencies: declare and isolate

A twelve-factor app declares dependencies explicitly and avoids assuming system-wide packages exist.

Rust fit: Cargo.toml declares every dependency explicitly, and Cargo.lock pins exact versions, so a build never depends on whatever happens to be installed system-wide.
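A minimal Cargo.toml sketch for the service described in this post (the crate versions are illustrative, not prescriptive):

```toml
[package]
name = "twelve-factor-api"
version = "0.1.0"
edition = "2021"

[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
sqlx = { version = "0.8", features = ["runtime-tokio", "postgres"] }
serde = { version = "1", features = ["derive"] }
config = "0.14"
anyhow = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }
```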

Factor III — Config: store config in the environment

Configuration that varies between deploys (ports, DB URLs, API keys) should come from environment variables.

A practical Rust pattern: deserialize env into a typed Settings struct.

src/config.rs:

use serde::Deserialize;

#[derive(Clone, Debug, Deserialize)]
pub struct Settings {
    pub host: String,          // e.g. "0.0.0.0"
    pub port: u16,             // e.g. 3000
    pub database_url: String,  // e.g. postgres://...
    pub log_level: String,     // e.g. "info" / "debug"
}

pub fn from_env() -> anyhow::Result<Settings> {
    let cfg = config::Config::builder()
        .add_source(config::Environment::default().separator("__"))
        .build()?;

    Ok(cfg.try_deserialize()?)
}

Local dev convenience: use a .env file locally, but treat it as developer tooling, not the deployment system.

Factor IV — Backing services: treat them as attached resources

Databases, caches, queues, and object storage are backing services and should be treated as swappable attached resources.

Rust fit: build a connection pool from the DATABASE_URL that config hands you, so swapping Postgres instances (or pointing at a replica) is a config change, not a code change.

src/db.rs:

use sqlx::{postgres::PgPoolOptions, PgPool};
use std::time::Duration;

pub async fn connect(database_url: &str) -> anyhow::Result<PgPool> {
    let pool = PgPoolOptions::new()
        .acquire_timeout(Duration::from_secs(5))
        .max_connections(10)
        .connect(database_url)
        .await?;
    Ok(pool)
}

Factor V — Build, release, run: strictly separate

The methodology wants strict separation between build, release, and run.

Rust fit: cargo build --release produces the immutable build artifact; a release combines that artifact with a deploy's config; run is just starting the binary.

A simple container flow:
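One possible multi-stage sketch (the base image tags and the binary name "server" are assumptions, not requirements):

```dockerfile
# ---- build stage: compile the release binary ----
FROM rust:1.81 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# ---- run stage: small image with just the artifact ----
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/server /usr/local/bin/server
# Config arrives via environment variables at run time (Factor III).
CMD ["server"]
```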

Key idea: don't SSH into prod and edit code. If you changed code, you made a new build.

Factor VI — Processes: stateless, share-nothing

Processes should be stateless and share-nothing; persistent state belongs in backing services.

Rust fit: keep state out of the process. No long-lived in-memory session maps, no local files that must survive a restart; anything held in memory is a disposable cache at best.

If you need sessions, use Redis or the DB. If you need files, use object storage.

Factor VII — Port binding: export services via a port

The app should be self-contained and bind to a port to serve requests.

Rust fit (axum):

use axum::{routing::get, Router};
use std::net::SocketAddr;
use tokio::net::TcpListener;

pub async fn serve(host: &str, port: u16) -> anyhow::Result<()> {
    let app = Router::new().route("/healthz", get(|| async { "ok\n" }));

    let addr: SocketAddr = format!("{host}:{port}").parse()?;
    let listener = TcpListener::bind(addr).await?;

    axum::serve(listener, app).await?;
    Ok(())
}

Factor VIII — Concurrency: scale out via the process model

The factor emphasizes scaling out by running more processes.

Rust reality check: Rust async can handle high concurrency inside one process, but twelve-factor wants you to be able to scale horizontally anyway.

So you do both: use async tasks for concurrency inside a process, and run more replicas of that process when you need to scale out.

Factor IX — Disposability: fast startup, graceful shutdown

Processes should start quickly and shut down gracefully for resilience and rapid deploys.

Rust fit: handle SIGTERM/CTRL-C and allow in-flight requests to finish.

Tokio provides guidance for graceful shutdown patterns.

Axum includes a graceful shutdown example you can adapt.

src/main.rs:

use axum::{routing::get, Router};
use tokio::{net::TcpListener, signal};
use tracing_subscriber::EnvFilter;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .json()
        .init();

    let app = Router::new()
        .route("/healthz", get(|| async { "ok\n" }));

    let listener = TcpListener::bind("0.0.0.0:3000").await?;

    axum::serve(listener, app)
        .with_graceful_shutdown(shutdown_signal())
        .await?;

    Ok(())
}

async fn shutdown_signal() {
    let _ = signal::ctrl_c().await;
}

In production you'll also want SIGTERM handling on Unix; the axum example shows the pattern.

Factor X — Dev/prod parity: keep them similar

Minimize gaps between dev, staging, and prod; avoid "SQLite locally, Postgres in prod" surprises.

Rust fit: run the same Postgres version locally (for example via Docker Compose) that you run in prod; sqlx can even check your queries against a real database at compile time.

Example docker-compose.yml idea:
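Something along these lines (service names, ports, and credentials are placeholders):

```yaml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```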

Factor XI — Logs: treat logs as event streams

A twelve-factor app should not manage log files; it writes its event stream to stdout, and the environment routes and aggregates it.

Rust fit: tracing + tracing-subscriber with JSON to stdout.

tracing-subscriber's fmt subscriber formats events and logs them to stdout.

Good defaults: JSON output to stdout, log level controlled via RUST_LOG (EnvFilter), and no file rotation logic inside the app.

Factor XII — Admin processes: run one-off tasks as one-off processes

Migrations, data backfills, and maintenance tasks should run in the same environment (same code + config) as the app.

Two common Rust approaches:

sqlx-cli

sqlx migrate run compares the DB migration history with migrations/ and runs pending migrations.
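The CLI flow might look like this (it needs a reachable DATABASE_URL; the migration name is illustrative):

```shell
cargo install sqlx-cli --no-default-features --features native-tls,postgres
sqlx migrate add create_users   # writes a timestamped file under migrations/
sqlx migrate run                # applies anything the database hasn't seen yet
```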


This is often perfect in CI/CD: run the migration as a release step, with the same DATABASE_URL the app itself uses.

A dedicated admin binary

src/bin/migrate.rs:

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let database_url = std::env::var("DATABASE_URL")?;
    let pool = sqlx::PgPool::connect(&database_url).await?;

    sqlx::migrate!("./migrations").run(&pool).await?;
    Ok(())
}

Run it as:

DATABASE_URL=... cargo run --bin migrate

(And yes, cargo run -- ... passes args to your binary if you need them.)

🎉