Layoffs
Today I have an interview. I was expecting to look for a job in a few years, but sadly, layoffs started happening at my current company. There have been two waves already. It was supposed to be over, but it got me thinking: maybe it's time to move on. So I started interviewing. The current climate isn't great, but well... I have to.
Today's interview was for a company that uses Go. I played with Go a long time ago and haven't touched it since. Still, it's just a language. One of the nice things about getting older (and hopefully wiser) is realizing that languages are just tools. As long as you understand algorithms, design approaches, and so on, the language isn't a barrier. Or at least, that's what I believe.
Anyway, I picked up Go again to prepare for the next interview, which will happen next year. It's kind of weird, since it'll be an interview with seven software engineers. I don't feel pressure, but... it's weird.
Anyhow, I really love how Elixir handles asynchronous tasks, and revisiting Go reminded me why I find its approach a bit off-putting.
Here's what I've learned.
Go vs Elixir
This is a practical comparison of how Go and Elixir (BEAM/OTP) approach doing many things at once: concurrent I/O, background jobs, fan-out/fan-in pipelines, and fault-tolerant workers.
Terminology note: both ecosystems often say “concurrency” rather than “asynchronous.” In practice, “async tasks” usually means “run work concurrently without blocking the caller.”
TL;DR
- Go: you build concurrency explicitly with goroutines + channels (and context for cancellation). You also build your own supervision patterns (restart, isolation, backoff) or use libraries.
- Elixir: the runtime is built around isolated lightweight processes + message passing + OTP supervision. Starting concurrent work is easy, and restarts/fault-handling are first-class.
Both can be excellent; they just optimize for different defaults.
Core mental model
Go
- Concurrency primitive: goroutine (cheap thread-like unit).
- Communication: channels (typed queues) or shared memory with locks.
- Failure model: panics exist, but errors are usually returned; a crashed goroutine is not restarted automatically.
- You typically implement:
- cancellation via context.Context
- structured concurrency via errgroup
- restart/backoff via your own loops or orchestration (systemd/K8s)
Elixir (BEAM/OTP)
- Concurrency primitive: process (even cheaper, isolated, preemptively scheduled).
- Communication: message passing (send/2, receive), mailboxes.
- Failure model: “let it crash” + supervisors restart children.
- Structured concurrency & lifecycle are built into OTP: Supervisor, Task.Supervisor, GenServer, GenStage, Broadway.
Side-by-side mapping
| Goal | Go | Elixir |
|---|---|---|
| Start a concurrent task | go fn() | Task.async(fn -> ... end) / Task.start_link |
| Wait for result | channel receive / WaitGroup | Task.await(task, timeout) |
| Run N tasks, collect results | fan-out channels / errgroup.Group | Task.async_stream/3 |
| Cancel work | context.WithCancel/Timeout | Task.shutdown/2, process exit signals, timeouts, receive ... after |
| Backpressure | bounded channels, semaphores | Task.async_stream(max_concurrency:), GenStage/Broadway |
| Supervise & restart | DIY + systemd/K8s / libraries | OTP Supervisor strategies, retries, backoff |
| Queue jobs | external (Redis/SQS/etc) + worker pool | Oban/Que/Exq + supervised workers |
| Distributed concurrency | explicit RPC, gRPC, etc. | built-in distribution (Node, GenServer.call across nodes) |
1) “Fire-and-forget” background work
Go
go func() {
    // do work
}()
Concerns (one way to handle them is sketched after this list):
- where do you log errors?
- how do you stop it on shutdown?
- who restarts it if it dies?
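Here's a hedged sketch of how I'd cover those concerns in Go: doWork and the timings are placeholders, but the shape is the usual one — a deferred recover gives you somewhere to log panics, and a context ties the goroutine to shutdown. Restarting it if it dies is still on you (a loop like the one in the error-handling section below, or the process supervisor).
package main

import (
    "context"
    "log"
    "time"
)

// doWork stands in for the real background job; it should return
// promptly once ctx is canceled.
func doWork(ctx context.Context) {
    select {
    case <-ctx.Done():
    case <-time.After(50 * time.Millisecond):
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // tie the goroutine's lifetime to shutdown

    go func() {
        defer func() {
            if r := recover(); r != nil {
                log.Printf("background task panicked: %v", r) // "where do you log errors?"
            }
        }()
        doWork(ctx)
    }()

    time.Sleep(100 * time.Millisecond) // stand-in for the rest of the program
}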
Elixir
Task.start(fn ->
# do work
end)
If you want it tied to a supervision tree:
Task.Supervisor.start_child(MyTaskSup, fn ->
# do work
end)
2) Run a task and await result (with timeout)
Go (channel + timeout)
resultCh := make(chan int, 1)

go func() {
    resultCh <- slowComputation()
}()

select {
case v := <-resultCh:
    _ = v // use the result
case <-time.After(2 * time.Second):
    // timed out; the buffered channel lets the goroutine finish without leaking
}
Elixir
task = Task.async(fn -> slow_computation() end)
value = Task.await(task, 2_000)
3) Fan-out/fan-in with cancellation + error propagation
Go (structured concurrency with errgroup)
g, ctx := errgroup.WithContext(parentCtx)

for _, item := range items {
    item := item // capture the loop variable (needed before Go 1.22)
    g.Go(func() error {
        return doWork(ctx, item)
    })
}

if err := g.Wait(); err != nil {
    // first non-nil error; ctx is already canceled so the other goroutines can stop early
}
Elixir (Task.async_stream)
items
|> Task.async_stream(
fn item -> do_work(item) end,
max_concurrency: System.schedulers_online(),
timeout: 5_000,
on_timeout: :kill_task
)
|> Enum.to_list()
Notes:
- async_stream gives you controlled concurrency + backpressure knobs.
- Errors come back as {:exit, reason} or {:error, reason} depending on your function.
Cancellation and timeouts
Go: context.Context is the contract
- Functions accept ctx context.Context
- You cancel via cancel() or deadline expiry
- Code must cooperatively check ctx.Done()
Example loop:
for {
    select {
    case <-ctx.Done():
        return ctx.Err()
    default:
        // do one unit of work, then loop and re-check ctx
    }
}
Elixir: processes can be told to stop
- You can:
- use timeouts in GenServer.call/3 and Task.await/2
- send messages that tell the process to stop
- exit/kill processes (stronger than cooperative cancellation)
- link/monitor to propagate shutdown
Example timeout receive:
receive do
  {:ok, v} -> v   # the message pattern here is illustrative
after
  2_000 -> :timeout
end
Shutdown a task:
Task.shutdown(task, :brutal_kill)
Error handling and “what happens if it crashes?”
Go
- If a goroutine returns an error, you must send it somewhere (channel, errgroup, etc.).
- A panic in a goroutine can crash the whole program if not recovered.
- Restart strategies are not automatic at the language/runtime level.
Typical approach (sketched below):
- keep workers in a loop
- log errors
- retry with backoff
- rely on process supervisor (systemd/Kubernetes) to restart the service
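A minimal sketch of that loop, assuming hypothetical nextJob and handle functions and made-up backoff numbers:
package main

import (
    "context"
    "log"
    "time"
)

type Job struct{ ID string }

// nextJob and handle are placeholders for your queue and your work.
func nextJob(ctx context.Context) (Job, error)  { return Job{ID: "42"}, nil }
func handle(ctx context.Context, job Job) error { return nil }

// runWorker keeps one worker alive: log errors, retry with capped
// exponential backoff, and stop cooperatively when ctx is canceled.
func runWorker(ctx context.Context) {
    backoff := time.Second
    for {
        select {
        case <-ctx.Done():
            return
        default:
        }

        job, err := nextJob(ctx)
        if err != nil {
            log.Printf("fetch job: %v", err)
            time.Sleep(backoff)
            if backoff < time.Minute {
                backoff *= 2
            }
            continue
        }
        backoff = time.Second // reset after a successful fetch

        if err := handle(ctx, job); err != nil {
            log.Printf("handle job %s: %v", job.ID, err)
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()
    runWorker(ctx) // returns once the context expires
}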
Elixir
- If a process crashes, its supervisor can restart it automatically (with strategies like :one_for_one, etc.).
- Crashes are expected: you isolate work in processes and design for clean restarts.
- This is a big reason Elixir shines for always-on systems.
Backpressure and throughput control
Go
Common tools:
- bounded channels to limit queue size
- worker pools + semaphore patterns
- select with default cases to avoid blocking
Example: semaphore for max concurrency
sem := make(chan struct{}, 20) // max 20 in flight

for _, item := range items {
    item := item      // capture the loop variable (pre-Go 1.22)
    sem <- struct{}{} // acquire a slot (blocks when 20 are busy)
    go func() {
        defer func() { <-sem }() // release the slot
        process(item)            // process is a placeholder for the real work
    }()
}
Elixir
Common tools:
- Task.async_stream(max_concurrency:)
- GenStage/Broadway pipelines (explicit demand-driven flow)
- OTP mailboxes (be careful: unbounded mailbox can become a memory pressure point)
CPU-bound vs I/O-bound work
Go
- Great for I/O concurrency.
- CPU-bound work scales with GOMAXPROCS; goroutines run on OS threads managed by the scheduler.
Elixir
- BEAM is excellent for I/O and many concurrent processes.
- CPU-heavy workloads can still work well, but:
- long CPU loops can hurt scheduler responsiveness
- you often move heavy numeric work to NIFs (Rust/C) or ports if needed
Observability and debugging
Go
- Standard tooling: pprof, tracing, metrics, logs.
- Concurrency bugs often involve:
- goroutine leaks
- deadlocks
- races (use -race; see the sketch after this list)
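For a feel of what the race detector flags, here's a deliberately racy counter (toy code, not from any real service); run it with go run -race:
package main

import "sync"

func main() {
    var n int
    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                n++ // unsynchronized read-modify-write: -race reports this
            }
        }()
    }
    wg.Wait()
    println(n) // usually not 2000
}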
Elixir
- The runtime provides introspection tools:
- process listings, mailbox sizes, reductions, traces
- Concurrency bugs often involve:
- mailbox growth
- message ordering assumptions
- supervision misconfiguration
When one is a better fit
Pick Go when...
- you want a small static binary, simple deployment footprint
- you're building system/network services with explicit concurrency control
- you're in an ecosystem heavily invested in gRPC/K8s and Go tooling
- you want close-to-the-metal performance with relatively simple concurrency primitives
Pick Elixir when...
- you need fault tolerance as a default (telecom-style always up)
- you want thousands/millions of concurrent lightweight processes
- you benefit from OTP patterns: supervision, hot code upgrades (rare), distributed messaging
- you want restartable components inside the app, not just at the container level
Practical guidance: translating patterns
I have background jobs
- Go: worker pool + queue (Redis/SQS/etc) + process-level restart by K8s/systemd
- Elixir: Oban/Que + supervised workers + retries/backoff built-in
I need a pipeline with backpressure
- Go: bounded channels + workers + careful select logic
- Elixir: GenStage/Broadway (demand-driven) or async_stream for simpler cases
I need request-scoped cancellation
- Go: context everywhere
- Elixir: timeouts and process links/monitors; pass deadlines explicitly as data
Common footguns
Go
- goroutine leaks (started, never stopped; see the sketch after this list)
- forgetting to respect ctx.Done()
- data races from shared memory (use -race)
- unbuffered channels causing deadlocks in production
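The goroutine leak usually has this shape; one fix is shown below (a buffered channel), with expensive() standing in for the real work:
package leakdemo

// expensive stands in for whatever the goroutine actually computes.
func expensive() int { return 42 }

// Leaky shape: if the caller never reads from the returned channel, the
// goroutine blocks on the unbuffered send forever and is never cleaned up.
func leaky() <-chan int {
    ch := make(chan int)
    go func() { ch <- expensive() }()
    return ch
}

// One fix: buffer the channel so the send always completes and the goroutine
// can exit even if the caller walks away.
func notLeaky() <-chan int {
    ch := make(chan int, 1)
    go func() { ch <- expensive() }()
    return ch
}
When buffering isn't an option, the other common fix is a select that races the send against ctx.Done().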
Elixir
- mailbox growth (slow consumer, too many messages)
- blocking the scheduler with long CPU loops
- misusing Task.await (awaiting from a process that must remain responsive)
- everything is a GenServer (over-serialization)
Mini cheat-sheet: what's the equivalent of...?
- goroutine → Elixir process (spawn, Task, GenServer)
- channel → Elixir mailbox (message passing); for queues use :queue, GenStage, or external queues
- WaitGroup → Task.yield_many / Task.async_stream / supervised child tracking
- select → receive with pattern matching + after
- context cancellation → shutdown signals, timeouts, explicit stop messages, Task.shutdown
Go: request handler that fans out with cancellation
func Handler(w http.ResponseWriter, r *http.Request) {
    // r.Context() is canceled if the client disconnects; errgroup propagates that
    g, ctx := errgroup.WithContext(r.Context())

    var a, b string
    g.Go(func() (err error) { a, err = callServiceA(ctx); return }) // service calls are illustrative
    g.Go(func() (err error) { b, err = callServiceB(ctx); return })

    if err := g.Wait(); err != nil {
        http.Error(w, err.Error(), http.StatusBadGateway)
        return
    }
    fmt.Fprintln(w, a, b)
}
Elixir: concurrent calls with timeout + supervised isolation
results =
[:a, :b]
|> Task.async_stream(&call_service/1, timeout: 2_000, on_timeout: :kill_task)
|> Enum.to_list()