Implementing a Simple Worker Pool in Vanilla JS

Spawning a new Worker for every computational task adds measurable main-thread latency (script fetch, parse, and thread startup), increases GC pressure, and fragments memory. A fixed-size pool avoids that per-task initialization cost by recycling pre-warmed workers, and it gives you deterministic message routing and explicit lifecycle control.

1. Core Architecture: Fixed-Size Pool & FIFO Queue

A production pool requires three tightly coupled components:

  • A static array of pre-initialized Worker instances.
  • An internal FIFO queue for pending computational payloads.
  • A synchronous dispatcher that binds queued tasks to idle workers.

Pool size should be capped at navigator.hardwareConcurrency, which reports the number of logical cores. Exceeding it adds OS-level context-switching overhead without any additional parallelism, degrading throughput.
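A small sizing helper makes the cap explicit. This is a sketch: the fallback value of 4 is an assumption for environments where hardwareConcurrency is unavailable, and reserving one core for the main thread is a common heuristic, not a requirement.

```javascript
// Cap pool size at the logical core count, reserving one core for the
// main thread. Falls back to 4 cores when the value is unavailable.
const poolSize = (hardwareConcurrency) => {
  const cores = hardwareConcurrency || 4;
  return Math.max(1, cores - 1);
};

// In the browser:
// const size = poolSize(navigator.hardwareConcurrency);
```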

2. Step-by-Step Implementation

2.1 Worker Initialization & State Tracking

Each worker requires explicit state tracking to prevent race conditions during rapid dispatch cycles. Attach handlers immediately upon instantiation.

const createWorker = (id, scriptUrl, poolInstance) => {
  const worker = new Worker(scriptUrl);
  worker.state = 'idle'; // 'idle' | 'busy' | 'terminating'
  worker.id = id;
  worker.onmessage = (e) => poolInstance.handleWorkerMessage(worker, e.data);
  worker.onerror = (err) => poolInstance.handleWorkerError(worker, err);
  return worker;
};
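The pool assumes a companion worker script that echoes the taskId back alongside its result or error. A minimal sketch of that script; the squaring computation is a placeholder for real work:

```javascript
// worker.js — runs inside the Worker context.
// The compute step (squaring a number) is a stand-in for real work.
const compute = (payload) => payload * payload;

if (typeof self !== 'undefined' && typeof self.postMessage === 'function') {
  self.onmessage = (e) => {
    const { taskId, payload } = e.data;
    try {
      // Echo taskId so the pool can route the result to the right Promise.
      self.postMessage({ taskId, result: compute(payload) });
    } catch (err) {
      self.postMessage({ taskId, error: err.message });
    }
  };
}
```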

2.2 Task Queue & Dispatch Logic

The WorkerPool class manages the queue and correlates tasks to pending Promises using a unique taskId. The dispatcher runs synchronously on every enqueue() call to minimize latency.

class WorkerPool {
  constructor(size, scriptUrl) {
    this.workers = Array.from({ length: size }, (_, i) => createWorker(i, scriptUrl, this));
    this.queue = [];
    this.pending = new Map();
  }

  enqueue(task) {
    const promise = new Promise((resolve, reject) => {
      this.pending.set(task.id, { resolve, reject });
    });
    // Keep a handle to the promise so drain() can await it later.
    this.pending.get(task.id).promise = promise;
    this.queue.push(task);
    this.dispatch();
    return promise;
  }

  dispatch() {
    while (this.queue.length) {
      const idle = this.workers.find(w => w.state === 'idle');
      if (!idle) break;

      const task = this.queue.shift();
      idle.state = 'busy';
      idle.currentTaskId = task.id; // lets handleWorkerError reject the in-flight task
      idle.postMessage({ taskId: task.id, payload: task.data });
    }
  }
}
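The dispatcher correlates results via task.id, so every enqueued task needs a unique id. A minimal sketch of a task factory; the names and the id scheme are illustrative:

```javascript
// Wrap raw payloads in task envelopes with monotonically increasing ids.
let nextTaskId = 0;
const makeTask = (data) => ({ id: `task-${nextTaskId++}`, data });

// Usage with the pool (in the browser):
// const pool = new WorkerPool(4, 'worker.js');
// const result = await pool.enqueue(makeTask(inputData));
```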

2.3 Message Routing & Promise Resolution

Incoming messages must be routed to the correct Promise resolver. Worker crashes are isolated to prevent pool-wide failure.

  handleWorkerMessage(worker, data) {
    const { taskId, result, error } = data;
    const pendingTask = this.pending.get(taskId);

    if (!pendingTask) return; // Task already resolved or timed out

    if (error) {
      pendingTask.reject(error);
    } else {
      pendingTask.resolve(result);
    }

    this.pending.delete(taskId);
    worker.state = 'idle';
    this.dispatch(); // Attempt to process next queued item immediately
  }

  handleWorkerError(worker, err) {
    console.error(`Worker ${worker.id} crashed:`, err);
    // Reject the task that was in flight on this worker, if any
    const pendingTask = this.pending.get(worker.currentTaskId);
    if (pendingTask) {
      pendingTask.reject(err);
      this.pending.delete(worker.currentTaskId);
    }
    worker.state = 'idle';
    this.dispatch(); // Hand queued work to the remaining workers
  }
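Beyond rejecting the in-flight task, a production pool usually replaces the crashed worker so capacity does not shrink over time. A sketch, assuming a factory callback that builds a fresh worker (the helper name and factory signature are hypothetical):

```javascript
// Replace a crashed worker with a fresh instance built by `factory`.
// `factory(id)` is an assumed callback, e.g. wrapping createWorker.
const replaceCrashedWorker = (pool, crashed, factory) => {
  const idx = pool.workers.indexOf(crashed);
  if (idx === -1) return null;  // already removed from the pool
  crashed.terminate?.();        // release the dead thread if still attached
  const fresh = factory(crashed.id);
  pool.workers[idx] = fresh;    // slot the replacement into the same position
  return fresh;
};
```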

3. Memory & Serialization Trade-offs

Passing large datasets via postMessage triggers structured-clone serialization on the sending thread and deserialization on the receiving one. For large payloads this blocks the main thread and spikes heap usage.

Data Size | Serialization Overhead | Recommended Strategy
< 5 MB    | Negligible (< 1 ms)    | Standard postMessage
5–50 MB   | 2–8 ms (GC pressure)   | Chunk payloads, batch dispatch
> 50 MB   | > 10 ms (jank risk)    | Transferables (zero-copy)

Zero-copy transfers bypass serialization entirely by moving ownership of ArrayBuffer instances to the worker. Prefer transferables whenever a memory-constrained SPA must stay within a strict heap budget.

// Zero-copy example
const buffer = new ArrayBuffer(1024 * 1024 * 60); // 60MB
worker.postMessage({ taskId: 'heavy', payload: buffer }, [buffer]);
// `buffer` is now detached on the main thread. Do not access it.
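For the mid-range (5–50 MB) row of the table above, chunking keeps each structuredClone call short instead of one long, blocking clone. A sketch of a chunk splitter; the 1 MB default is an assumption to tune:

```javascript
// Split a large ArrayBuffer into fixed-size chunks so each postMessage
// serializes a small payload. ArrayBuffer.prototype.slice copies the
// range and clamps the end index, so the final chunk may be shorter.
const chunkBuffer = (buffer, chunkBytes = 1024 * 1024) => {
  const chunks = [];
  for (let offset = 0; offset < buffer.byteLength; offset += chunkBytes) {
    chunks.push(buffer.slice(offset, offset + chunkBytes));
  }
  return chunks;
};

// Dispatch each chunk as its own task:
// chunkBuffer(bigBuffer).forEach((c, i) => pool.enqueue({ id: `chunk-${i}`, data: c }));
```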

4. Step-by-Step Diagnostics & Performance Tuning

4.1 DevTools Profiling for Thread Starvation

Thread starvation occurs when queue depth consistently exceeds worker capacity. Diagnose it precisely:

  1. Open Chrome DevTools > Performance tab.
  2. Record a session with the Web Worker track enabled.
  3. Look for message-passing events in the flame chart (e.g. "Schedule postMessage" / "On message" in recent Chrome versions) to isolate transfer costs.
  4. Instrument the code with performance.mark() calls at enqueue and resolve, then performance.measure('pool-latency', 'enqueue', 'resolve').
  5. Inspect the timeline for sustained idle gaps or queue backlog spikes.
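Step 4 above only works if matching marks exist. A sketch of the instrumentation; the mark and measure names are the ones assumed in the steps:

```javascript
// Mark the enqueue and resolve points, then measure the span between them.
// The resulting entry also appears in the DevTools Performance timeline.
performance.mark('enqueue');
// ... pool.enqueue(task) runs and the result promise settles ...
performance.mark('resolve');
const entry = performance.measure('pool-latency', 'enqueue', 'resolve');
console.log(`pool latency: ${entry.duration.toFixed(2)} ms`);
```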

4.2 Backpressure & Queue Depth Monitoring

Unbounded queues trigger V8 heap expansion and main-thread jank. Implement explicit backpressure thresholds.

  enqueue(task) {
    const BACKPRESSURE_LIMIT = this.workers.length * 3;
    if (this.queue.length >= BACKPRESSURE_LIMIT) {
      console.warn('Pool backpressure threshold reached. Dropping task.');
      return Promise.reject(new Error('Queue full'));
    }
    // ... rest of enqueue logic
  }

5. Graceful Termination & Resource Cleanup

Workers that are never terminated keep their heaps alive and leak memory in long-running dashboards. Implement a deterministic drain sequence.

  isDraining = false;

  async drain() {
    this.isDraining = true; // enqueue() should reject new tasks while draining
    // Wait for in-flight tasks to settle (assumes each pending entry stores its promise)
    await Promise.allSettled([...this.pending.values()].map(p => p.promise));

    this.workers.forEach(worker => {
      worker.state = 'terminating';
      worker.terminate();
    });
    this.workers = [];
    this.pending.clear();
  }

// Bind to lifecycle. 'pagehide' fires more reliably than 'beforeunload',
// but the browser will not wait for the async drain to finish during
// unload, so termination here is best-effort.
window.addEventListener('pagehide', () => pool.drain());

6. When to Scale Beyond Vanilla

Vanilla postMessage pools excel at batch processing but hit hard limits in sub-16 ms real-time rendering pipelines. When cross-thread synchronization latency exceeds roughly 2 ms, transition to SharedArrayBuffer with Atomics for lock-free concurrent reads; note that browsers only expose SharedArrayBuffer on cross-origin-isolated pages (COOP/COEP headers set). For server-side or heavy I/O workloads, migrate to Node.js worker_threads. Evaluate pool scaling against actual frame budgets before adopting complex shared-memory architectures.
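A minimal sketch of the shared-memory handoff. The reader side is shown commented out because Atomics.wait throws on the main thread; it is only legal inside a worker:

```javascript
// Shared state visible to every worker without copying or serialization.
const shared = new SharedArrayBuffer(4);
const flags = new Int32Array(shared);

// Writer side (e.g. the main thread):
Atomics.store(flags, 0, 1); // publish a value atomically
Atomics.notify(flags, 0);   // wake any worker blocked in Atomics.wait

// Reader side (inside a worker, where blocking is allowed):
// Atomics.wait(flags, 0, 0);     // sleep until index 0 changes from 0
// const value = Atomics.load(flags, 0);
```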