Concurrency in Go: Worker Pool (Dummy Example)

2026-02-13

Go concurrency becomes much easier to reason about when you treat each goroutine as a focused unit of work, and each channel as a contract for communication. The most common production shape is a worker pool: a fixed number of workers consume jobs from a channel and push results to another channel.

This pattern is useful when you want bounded parallelism. Instead of launching one goroutine per item and risking resource spikes, you choose a worker count based on CPU count, latency budget, and downstream limits.

Stock-free Golang concurrency illustration showing jobs, worker pool, and results
Stock-free custom illustration for Go concurrency concepts.

Why this pattern works

  • You can limit concurrency by controlling worker count.
  • You can apply backpressure with buffered/unbuffered channels.
  • You can cancel all workers quickly with context.Context.
  • You can collect errors in one place and decide retry behavior.

Minimal worker-pool example

package main
 
import (
    "context"
    "fmt"
    "sync"
    "time"
)
 
type Job struct {
    ID int
}
 
type Result struct {
    JobID int
    Value string
}
 
func worker(ctx context.Context, id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
    defer wg.Done()
 
    for {
        select {
        case <-ctx.Done():
            return
        case job, ok := <-jobs:
            if !ok {
                return
            }
 
            // Simulate I/O or compute work
            time.Sleep(50 * time.Millisecond)

            // Send with select so a cancelled consumer cannot
            // block this worker forever (avoids a goroutine leak).
            select {
            case results <- Result{JobID: job.ID, Value: fmt.Sprintf("worker-%d done", id)}:
            case <-ctx.Done():
                return
            }
        }
    }
}
 
func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
 
    jobs := make(chan Job, 10)
    results := make(chan Result, 10)
 
    const workers = 3
    var wg sync.WaitGroup
 
    for i := 1; i <= workers; i++ {
        wg.Add(1)
        go worker(ctx, i, jobs, results, &wg)
    }
 
    for i := 1; i <= 8; i++ {
        jobs <- Job{ID: i}
    }
    close(jobs)
 
    go func() {
        wg.Wait()
        close(results)
    }()
 
    for r := range results {
        fmt.Printf("job=%d %s\n", r.JobID, r.Value)
    }
}

Diagram

Worker pool architecture with buffered jobs channel
Jobs are queued, workers consume concurrently, results flow to an aggregator.

What to check before production

  1. Add timeout and cancellation to all external calls.
  2. Ensure channels are closed by one clear owner.
  3. Avoid goroutine leaks by handling all exit paths.
  4. Track queue depth and worker saturation in metrics.
  5. Keep worker logic idempotent where retries are possible.

This is a dummy post for layout preview, but the structure is production-friendly for real technical writing: short sections, one visual, one diagram, and one runnable code block.