Concurrency and Parallelism in Node.js

Introduction
When people think of Node.js, they think “non-blocking I/O”, “event loop”, or “JavaScript on the server.” But one of the most underrated capabilities of Node.js is how elegantly it handles concurrency and parallelism - despite JavaScript being single-threaded.
If you’ve ever believed that Node.js isn’t built for CPU-intensive tasks or can’t run things in parallel, think again. Underneath the simplicity of asynchronous fs.readFile lies a surprisingly powerful ecosystem for managing concurrent and parallel tasks.
Let’s dive into this hidden gem and learn how to unlock its full potential.
Concurrency vs. Parallelism
Before we get into Node.js internals, let’s understand what these terms mean:
- Concurrency is the ability to manage multiple tasks at once, regardless of whether they run in parallel.
- Parallelism is doing multiple things at the same time, often leveraging multiple cores or threads.
Node.js runs your JavaScript on a single thread, because JavaScript itself is single-threaded. Yet under the hood it is concurrent - and can even be parallel. That’s the key insight most developers miss.
Node.js Event Loop
The event loop is the heart of Node.js concurrency. It allows Node to handle thousands of simultaneous I/O operations - reading files, querying databases, handling HTTP requests - without blocking.
Let’s consider an example:
const fs = require('fs');
console.log("Start");
fs.readFile('file.txt', 'utf8', () => {
  console.log("File read");
});
console.log("End");
Output:
Start
End
File read
Without using threads, Node.js can process the file read operation asynchronously, thanks to its event loop and non-blocking APIs.
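To see that concurrency in action with more than one operation, here is a minimal sketch (the file names are placeholders) that starts several reads at once with fs.promises and Promise.all:
const fs = require('fs/promises');

async function readAll() {
  // All three reads are started together; the event loop handles them
  // concurrently instead of waiting for each one to finish first
  const [a, b, c] = await Promise.all([
    fs.readFile('a.txt', 'utf8'),
    fs.readFile('b.txt', 'utf8'),
    fs.readFile('c.txt', 'utf8'),
  ]);
  console.log(a.length, b.length, c.length);
}

readAll().catch(console.error);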
But what about Parallelism?
This is where things get really interesting - and where Node.js is most often underestimated.
Node.js can do real parallelism, even for CPU-heavy operations, via the following methods:
1. Worker Threads: Actual Multithreading in JavaScript
A surprisingly underused module in Node.js is worker_threads. It allows you to run JavaScript code in actual separate threads, perfect for CPU-bound work.
Let’s see an example of a CPU-intensive operation being run on a worker thread.
// index.js
const { Worker } = require('worker_threads');

const run = () => {
  return new Promise((resolve, reject) => {
    // Spawn a separate thread and pass it the input via workerData
    const worker = new Worker('./worker.js', { workerData: 40 });
    worker.on('message', resolve);
    worker.on('error', reject);
    worker.on('exit', (code) => {
      if (code !== 0)
        reject(new Error(`Worker stopped with exit code ${code}`));
    });
  });
};

run().then(console.log);

// worker.js
const { workerData, parentPort } = require('worker_threads');

// Deliberately slow recursive Fibonacci to keep the CPU busy
function fibonacci(n) {
  return n <= 1 ? 1 : fibonacci(n - 1) + fibonacci(n - 2);
}

parentPort.postMessage(fibonacci(workerData));
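Because every Worker gets its own thread, CPU-bound jobs can be fanned out across cores. Here is a minimal sketch along the same lines (the runFib helper and the hard-coded inputs are illustrative, not part of the example above):
const { Worker } = require('worker_threads');

const runFib = (n) =>
  new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js', { workerData: n });
    worker.on('message', resolve);
    worker.on('error', reject);
  });

// The three computations run on separate threads at the same time
Promise.all([runFib(38), runFib(39), runFib(40)]).then(console.log);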
2. Child Processes: Run external programs or scripts
Need to run a Python script, Bash command, or an isolated Node process? The child_process module makes this easy.
Let’s see how a child process can be spawned to carry out an operation. As an example, we’ll run a Unix command that lists the files in the current directory in long format.
const { exec } = require('child_process');
exec('ls -la', (err, stdout) => {
  if (err) return console.error(err);
  console.log(stdout);
});
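exec buffers the entire output before invoking the callback. For longer-running programs - say, that Python script - child_process.spawn streams the output instead. A small sketch, where python3 and script.py are placeholders for whatever you actually need to run:
const { spawn } = require('child_process');

const child = spawn('python3', ['script.py']); // placeholders - any command and args work

// Stream output as it arrives instead of buffering it like exec does
child.stdout.on('data', (chunk) => process.stdout.write(chunk));
child.stderr.on('data', (chunk) => process.stderr.write(chunk));
child.on('close', (code) => console.log(`child exited with code ${code}`));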
3. Cluster Module: Scaling across CPU Cores
For web servers, the cluster module can be used to fork multiple instances of Node.js. It distributes the incoming workload across multiple CPU cores, making the app truly parallel.
const cluster = require('node:cluster');
const http = require('node:http');
const os = require('node:os');
const process = require('node:process');

const numCPUs = os.availableParallelism();

if (cluster.isPrimary) {
  console.log(`Primary process with ID:${process.pid} is running`);
  // Fork one worker process per available CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker process with ID:${worker.process.pid} exited`);
  });
} else {
  // Worker processes share the same port; incoming connections are distributed among them
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(3000);
  console.log(`Worker process with ID:${process.pid} started`);
}
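In production you would usually fork a replacement when a worker dies. That is a one-line addition to the exit handler above, shown here as a sketch rather than part of the original example:
cluster.on('exit', (worker) => {
  console.log(`worker process with ID:${worker.process.pid} exited, forking a replacement`);
  cluster.fork();
});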
The libuv Thread Pool
Even when you don’t use workers or child processes, Node.js runs certain operations in a thread pool (via libuv) under the hood - allowing tasks like fs.readFile, crypto.pbkdf2, and dns.lookup to execute in parallel.
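The pool defaults to 4 threads and can be resized with the UV_THREADPOOL_SIZE environment variable. A small sketch that makes the parallelism visible - with 4 pool threads, four expensive pbkdf2 calls finish in roughly the time of one (the iteration count is arbitrary):
const crypto = require('crypto');

console.time('4 hashes');
let pending = 4;
for (let i = 0; i < 4; i++) {
  // Each call is dispatched to a libuv pool thread and runs off the main thread
  crypto.pbkdf2('secret', 'salt', 1000000, 64, 'sha512', () => {
    if (--pending === 0) console.timeEnd('4 hashes');
  });
}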
Conclusion
Node.js is not just for I/O. It’s a surprisingly capable runtime for concurrent and even parallel workloads. What’s often seen as a limitation - “JavaScript is single-threaded” - is in fact a design trade-off backed by powerful native systems like libuv, worker threads, and OS-level processes.
Whether you’re building APIs, doing heavy computation, or scraping the web in batches, Node.js gives you the tools - you just have to reach for them.