
Layoffs
Dec 19, 2025

Today I have an interview. I was expecting to look for a job in a few years, but sadly, layoffs started at my current company. There have been two waves already. They were supposed to be over, but the whole thing got me thinking: maybe it's time to move on. So I started interviewing. The current climate isn't great, but well... I have to.

Today's interview was for a company that uses Go. I played with Go a long time ago and haven't touched it since. Still, it's just a language. One of the nice things about getting older (and hopefully wiser) is realizing that languages are just tools. As long as you understand algorithms, design approaches, and so on, the language isn't a barrier, or at least that's what I believe.

Anyway, I picked up Go again to prepare for the next interview, which will happen next year. It's kind of weird, since it'll be an interview with seven software engineers. I don't feel pressure, but... it's weird.

Anyhow, I really love how Elixir handles asynchronous tasks. Revisiting Go reminded me why I find its approach a bit off-putting.

Here's what I've learned.

Go vs Elixir

This is a practical comparison of how Go and Elixir (BEAM/OTP) approach doing many things at once: concurrent I/O, background jobs, fan-out/fan-in pipelines, and fault-tolerant workers.

Terminology note: both ecosystems tend to say "concurrency" rather than "asynchronous". In practice, "async tasks" usually means "run work concurrently without blocking the caller".

TL;DR

Both can be excellent; they just optimize for different defaults.

Core mental model

Go

Elixir (BEAM/OTP)

Side-by-side mapping

Goal                         | Go
-----------------------------|------------------------------------------
Start a concurrent task      | go fn()
Wait for result              | channel receive / WaitGroup
Run N tasks, collect results | fan-out channels / errgroup.Group
Cancel work                  | context.WithCancel / context.WithTimeout
Backpressure                 | bounded channels, semaphores
Supervise & restart          | DIY + systemd/K8s / libraries
Queue jobs                   | external (Redis/SQS/etc.) + worker pool
Distributed concurrency      | explicit RPC, gRPC, etc.

Goal                         | Elixir
-----------------------------|------------------------------------------
Start a concurrent task      | Task.async(fn -> ... end) / Task.start_link
Wait for result              | Task.await(task, timeout)
Run N tasks, collect results | Task.async_stream/3
Cancel work                  | Task.shutdown/2, process exit signals, timeouts, receive ... after
Backpressure                 | Task.async_stream(max_concurrency:), GenStage/Broadway
Supervise & restart          | OTP Supervisor strategies, retries, backoff
Queue jobs                   | Oban/Que/Exq + supervised workers
Distributed concurrency      | built-in distribution (Node, GenServer.call across nodes)

1) Fire-and-forget task

Elixir

Task.start(fn ->
  # do work
end)

If you want it tied to a supervision tree:

Task.Supervisor.start_child(MyTaskSup, fn ->
  # do work
end)

2) Run a task and await result (with timeout)

Go (channel + timeout)

resultCh := make(chan int, 1)

go func() {
    resultCh <- slowComputation()
}()

select {
case v := <-resultCh:
    _ = v
case <-time.After(2 * time.Second):
    // timeout
}

Elixir

task = Task.async(fn -> slow_computation() end)
# Task.await exits the caller if the timeout is exceeded;
# Task.yield/2 + Task.shutdown/2 is the non-raising alternative.
value = Task.await(task, 2_000)

3) Fan-out/fan-in with cancellation + error propagation

Go (structured concurrency with errgroup)

g, ctx := errgroup.WithContext(parentCtx) // errgroup is golang.org/x/sync/errgroup

for _, item := range items {
    item := item // capture the loop variable (required before Go 1.22)
    g.Go(func() error {
        return doWork(ctx, item) // should respect ctx
    })
}

if err := g.Wait(); err != nil {
    // first error cancels ctx, other workers should stop
}

Elixir (Task.async_stream)

items
|> Task.async_stream(
  fn item -> do_work(item) end,
  max_concurrency: System.schedulers_online(),
  timeout: 5_000,
  on_timeout: :kill_task
)
|> Enum.to_list()

Notes:

Cancellation and timeouts

Go: context.Context is the contract

Example loop:

for {
    select {
    case <-ctx.Done():
        return ctx.Err()
    case msg := <-ch:
        handle(msg)
    }
}

Elixir: processes can be told to stop

Example timeout receive:

receive do
  {:msg, v} -> v
after
  2_000 -> {:error, :timeout}
end

Shutdown a task:

Task.shutdown(task, :brutal_kill)

Error handling: what happens when it crashes?

Go

Typical approach:

Elixir

Backpressure and throughput control

Go

Common tools:

Example: semaphore for max concurrency

sem := make(chan struct{}, 20) // at most 20 in flight
for _, item := range items {
    sem <- struct{}{} // acquire a slot; blocks when all 20 are taken
    go func(it Item) {
        defer func() { <-sem }() // release the slot
        _ = do(it)
    }(item)
}
// to also wait for completion, pair this with a sync.WaitGroup

Elixir

Common tools:

CPU-bound vs I/O-bound work

Go

Elixir

Observability and debugging

Go

Elixir

When one is a better fit

Pick Go when...

Pick Elixir when...

Practical guidance: translating patterns

I have background jobs

I need a pipeline with backpressure

I need request-scoped cancellation

Common footguns

Go

Elixir

Mini cheat-sheet: what's the equivalent of...?

Go: request handler that fans out with cancellation

func Handler(w http.ResponseWriter, r *http.Request) {
    ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
    defer cancel()

    g, ctx := errgroup.WithContext(ctx)
    g.Go(func() error { return callA(ctx) })
    g.Go(func() error { return callB(ctx) })

    if err := g.Wait(); err != nil {
        http.Error(w, err.Error(), 500)
        return
    }
    w.WriteHeader(200)
}

Elixir: concurrent calls with timeout + supervised isolation

def handle_call(:fetch, _from, state) do
  results =
    [:a, :b]
    |> Task.async_stream(&call_service/1, timeout: 2_000, on_timeout: :kill_task)
    |> Enum.to_list()

  {:reply, results, state}
end

Go Cheatsheet
