Keep locks as small and as few as possible.

TL;DR

In concurrent code, keep locks as few and as small in scope as possible.

Context

This was my first real Go project after mostly Python and JavaScript. I learned the value of tiny locks the hard way while building a connection pool.

Case: Connection Pool Management

The pool needed to hand out connections, take them back, and keep only healthy ones. I started with a simple array of connections, a per-connection flag for in-use/available, and one lock. Picking and returning a connection both grabbed that lock, and a separate health-check goroutine inspected the pool while holding the same one.
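
Roughly, that first version looked like the sketch below. The names (conn, ping, ConnectionPool) are illustrative stand-ins, not the actual project code.

package pool

import (
    "errors"
    "sync"
)

// conn is a stand-in for a real connection handle.
type conn struct {
    inUse   bool
    healthy bool
}

// ping is a stand-in for a real liveness probe.
func ping(c *conn) bool { return true }

type ConnectionPool struct {
    mu    sync.Mutex
    conns []*conn
}

// Get scans the whole slice for a free, healthy connection under the one lock.
func (p *ConnectionPool) Get() (*conn, error) {
    p.mu.Lock()
    defer p.mu.Unlock()
    for _, c := range p.conns {
        if !c.inUse && c.healthy {
            c.inUse = true
            return c, nil
        }
    }
    return nil, errors.New("no available connection")
}

// Put returns a connection to the pool under the same lock.
func (p *ConnectionPool) Put(c *conn) {
    p.mu.Lock()
    c.inUse = false
    p.mu.Unlock()
}

// healthCheck also walks the whole slice under the same lock,
// blocking Get and Put for the duration of the scan.
func (p *ConnectionPool) healthCheck() {
    p.mu.Lock()
    defer p.mu.Unlock()
    for _, c := range p.conns {
        c.healthy = ping(c)
    }
}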

Issues

Before (Initial Design)
────────────────────────────────────────────
┌────────────────────────────┐
│ ConnectionPool             │
│ ┌───────────────┐          │
│ │ []connections │◄────────┐│
│ └───────────────┘          ││
│   ▲ Lock (single global)   ││
│   ▼                        ││
│ [Pick / Return / Health]   ││
└────────────────────────────┘│
           ▲                  │
           └──────────────────┘
All operations share one big lock

It looked simple, but the single lock froze the entire pool whenever a health check ran. One lock guarded both the array and the flags, finding an available connection meant scanning the whole array under that lock, and every goroutine that touched the pool (health checks, logging, metrics) queued behind it. The wide scope also tangled unrelated concerns into one critical section.

Revised Design

After (Re-Architected Design)
────────────────────────────────────────────
┌──────────────────────────────┐
│ ConnectionPool               │
│ ┌────────────┐   ┌─────────┐ │
│ │ Queue      │   │ sync.Map│ │
│ │(available) │   │(in-use) │ │
│ └────────────┘   └─────────┘ │
│   ▲    ▲   ▲                 │
│   │    │   │                 │
│   │    │   └─ atomic counter │
│   │    └── health check gor. │
│   └── atomic health flag     │
│                              │
│ messageCh  →  reconcile loop │
└──────────────────────────────┘
Smaller, isolated locks — each goroutine independent

I replaced the array with a queue of available connections, so enqueue and dequeue handled their own locking. Each connection got its own health-check goroutine and an atomic health flag. In-use connections moved into a sync.Map, and an atomic counter tracked how many were checked out. Each piece owned its own small synchronization, and pool metrics came from the queue length plus the counters.
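
Roughly, the reworked layout looks like this; the names, the buffered channel used as the queue, and the retry count are illustrative assumptions rather than the exact project code.

package pool

import (
    "context"
    "errors"
    "sync"
    "sync/atomic"
    "time"
)

// conn carries its own health flag; only its watcher goroutine writes it.
type conn struct {
    healthy atomic.Bool
}

// ping is a stand-in for a real liveness probe.
func ping(c *conn) bool { return true }

type ConnectionPool struct {
    available chan *conn   // queue of ready connections; channel ops do the locking
    inUse     sync.Map     // set of checked-out connections
    inUseN    atomic.Int64 // checked-out count, used only for metrics
}

// Get pops from the queue without touching any pool-wide lock.
func (p *ConnectionPool) Get(ctx context.Context) (*conn, error) {
    for attempt := 0; attempt < 3; attempt++ { // bounded retries
        select {
        case c := <-p.available:
            if !c.healthy.Load() {
                continue // drop a bad connection; reconciliation replaces it
            }
            p.inUse.Store(c, struct{}{})
            p.inUseN.Add(1)
            return c, nil
        case <-ctx.Done():
            return nil, ctx.Err()
        }
    }
    return nil, errors.New("no healthy connection available")
}

// Put hands a connection back to the queue.
func (p *ConnectionPool) Put(c *conn) {
    p.inUse.Delete(c)
    p.inUseN.Add(-1)
    p.available <- c
}

// watch runs once per connection and is the only writer of its health flag.
func (p *ConnectionPool) watch(c *conn, interval time.Duration) {
    for range time.Tick(interval) {
        c.healthy.Store(ping(c))
    }
}

// Stats derives pool metrics from the queue length and the counter.
func (p *ConnectionPool) Stats() (available, inUse int) {
    return len(p.available), int(p.inUseN.Load())
}

A buffered channel works as the queue here because send and receive are already synchronized, so the hot path of picking and returning a connection needs no pool-wide lock at all.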

Maintaining the Pool Size

Pool Reconciliation Flow
────────────────────────────────────────────
[ goroutine A ]   \
[ goroutine B ]   --->  messageCh  --->  [ reconcileLoop ]
[ goroutine C ]   /
            (processed sequentially, not concurrently)

Keeping the pool at the right size needed care. Picking a connection retries a bounded number of times before returning an error, so callers can decide whether to try again. Refilling the pool must not run concurrently, or the connection count overshoots the target, so I funneled all reconciliation commands through a single channel and processed them one by one.
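
A rough sketch of that reconcile loop under the same assumptions; dial is a stand-in for opening a real connection, and only the loop tracks the live count, so refills can never race.

package pool

// conn and the available queue are as in the sketch above (details omitted).
type conn struct{}

// reconcileMsg reports that a connection was lost (closed or found unhealthy).
type reconcileMsg struct {
    reason string
}

type ConnectionPool struct {
    available   chan *conn
    reconcileCh chan reconcileMsg
    target      int // desired number of live connections
}

// dial is a stand-in for opening a real connection.
func dial() (*conn, error) { return &conn{}, nil }

// reconcileLoop owns pool sizing. Because it is the only goroutine that dials,
// two senders can never refill the pool for the same gap and overshoot.
func (p *ConnectionPool) reconcileLoop() {
    live := p.target // only the loop tracks the live count
    for msg := range p.reconcileCh {
        _ = msg.reason // could be logged or counted here
        live--         // each message means one connection is gone
        for live < p.target {
            c, err := dial()
            if err != nil {
                break // give up for now; the next message triggers another attempt
            }
            p.available <- c
            live++
        }
    }
}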

Conclusion

Small, focused synchronization points made the pool stable and the code cleaner. The redesign cut lock contention and made each concern independent.