
Node.js Performance Optimization

Here’s how to measure, test, and optimize your code to boost Node.js performance: JavaScript memory optimization tips, CPU profiling and benchmarking tools, and practical advice to save you time, money, and performance headaches.

Node.js performance can have a major impact on your app’s success and cost. If your code takes too long to run, users get frustrated and core business logic can even fail. Memory leaks crash whole servers and processes. Requiring more CPU means unnecessary cloud expenses for your company as your app scales.

You can prevent all those problems by optimizing Node’s performance.

Key Takeaways

  • The Node.js event loop is the loop that runs your Node.js tasks. It runs on the main thread, and blocking it with long tasks can make your whole application slower.
  • You can add more tasks to the event loop via asynchronous interfaces such as timers, promises, and child processes.
  • Garbage collection is the process that clears unused objects from memory. The more objects there are to clear, the longer it takes. Garbage collection runs on the main thread and can therefore block your event loop.
  • Big data should be handled in chunks using streams. Otherwise you risk choking your app’s memory.
  • Use caching to prevent costly resources from doing unnecessary work when a valid result already exists for the same query.
  • Sometimes you have to optimize functions that handle large amounts of data in order to save CPU and memory.
  • Load balancing is a strategy that lets an application withstand huge amounts of traffic by spinning up new machines to handle traffic spikes. It usually requires a stateless application.
  • Profiling is a must when solving performance issues. You can easily profile a Node.js application using the `--inspect` flag and Google Chrome DevTools.
  • Optimizing performance requires you to identify the problem from a performance profile. Most optimizations can then be applied using well-known design patterns.

How Node.js Works and What Causes Performance Problems in Your Web Apps

The Node.js Event Loop

In order to understand most runtime performance problems in Node.js, you must understand a core principle: how the Node.js event loop works and where exactly it might slow parts of the process down.

Node.js is the most popular web application server these days for a few reasons. Its main benefits are, in my opinion, a low learning curve, really fast scaling capabilities, and the fact that it runs in a single main thread.

The unique nature of Node.js performance comes from it having a single main thread. What do we mean by that? Is it really single-threaded, as many people claim?

The simple answer is no. Node.js runs on multiple threads to allow asynchronous processes to run in parallel.

Node.js has a single-threaded process that is called the Event Loop. This event loop loops over a task list until the list is empty. We can look at the process this way:

while (tasks.length) {
    tasks.shift()();
}

This simplistic view of the event loop is its actual core. The pseudo-code above says: while we still have tasks, take the first task from the task queue and execute it. Where do these tasks come from? This is where asynchronous processes and other threads enter the picture.

The source of Node.js tasks

Let’s look at what happens when we run the following command:

node index.js 

Node.js takes the content of `index.js` and turns it into a task. If the only line in it is 

console.log("Hello world");

then Node.js will have one task: to log "Hello world". Once this task is done, the process exits with code 0 (no errors).

If, on the other hand, you create a timer of some kind, a new task enters the task queue, and Node.js will run more than one iteration of the event loop:

console.log("Hello world");
setTimeout(() => console.log("Goodbye world"), 100);

This will cause two tasks to be in the task list. Here’s how: the first task is the synchronous task of logging "Hello world". `setTimeout` initiates a call to something outside Node.js (in this case, via the timers API). It sends the callback and the delay (in our case 100ms) to this API. Node.js knows it has a pending request from this API, so it waits to see when it returns. Once the 100ms pass, the timers API adds the callback to the task queue. The event loop then runs the callback, sees it has nothing pending, and quits.

If we were to use `setInterval` instead of `setTimeout`, the Node process would run forever without exiting.
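A small sketch of that difference: an interval keeps re-registering itself as a pending listener, and only clearing it lets the event loop drain and the process exit.

```javascript
// setTimeout fires once, so its task is removed after the callback runs.
// setInterval keeps a listener registered, so the process keeps looping
// until the interval is cleared.
let ticks = 0;
const interval = setInterval(() => {
  ticks += 1;
  if (ticks === 3) {
    clearInterval(interval); // without this line, Node would never exit
  }
}, 10);
```

Comment out the `clearInterval` call and the process will run indefinitely.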

Hence our event loop code can be extended now:

while (tasks.length || listeners.length) {
    if (tasks.length) tasks.shift()();
}

One can argue that a listener is itself a task, but that’s beside the point. Note that this algorithm is an oversimplification of the Node.js event loop.

Node.js timers are one source of tasks. There are many more task initiators, such as event listeners. For instance, the Node.js `http.Server` has a `listen` method which keeps the event loop expecting new events that trigger the server. In the HTTP case, it awaits HTTP requests to the server.

What are the implications of the event loop on your application?

The fact that everything runs in that same loop means the following: while one task is running, no other task runs. This means you have to make sure your tasks complete as fast as possible, so your app is ready to handle new tasks quickly.

Learning how to optimize your Node.js code’s performance will help your application to be ready for more tasks and respond faster to queries sent to it.

Garbage Collection, Memory Budget and the Event Loop in Node

Garbage collection is the process that eventually clears unneeded variables from memory. When you create new objects or arrays in your application, the JavaScript engine allocates memory for them. When your application doesn’t need these objects anymore, the JavaScript engine marks them for removal via the Garbage Collection mechanism.

The Garbage collector goes over your application’s variables in memory and seeks the ones that have no reference from your code. This is how the process knows that your application doesn’t need an object. Here’s an example:

function allocateAnArray() {
    return new Array(100000).fill(8);
}

allocateAnArray();
let newArray = allocateAnArray();

In the code snippet above, the function `allocateAnArray` creates an array of 100,000 integers (the number 8). When the function is called the first time, nothing references its result, and thus the array has no reference from the code once the function returns. The Garbage Collector will mark this array for removal from memory.

The second time, the variable `newArray` references the returned array. In this case, the garbage collector will not mark the array for removal from memory. If we were to write `newArray = null`, the array would no longer have a reference and would be marked for deletion.

This is, in essence, the Garbage Collection process.

There are a few issues with this process:

  1. It takes time – the more memory you have to clear, the longer it takes to mark and sweep it. It also takes CPU power.
  2. Because this process is running in our event loop’s thread, it blocks the event loop while it’s running.
  3. If your tasks queue is full, the garbage collector might not get to run as frequently as needed. This might result in a memory leak.

Understanding how to mitigate Garbage Collection and memory allocation in your application can be crucial and save money, especially if your application is part of a big data pipeline.

3 ways to handle Data in order to optimize Node.js application speed and performance

Data handling in Node.js has a huge influence on your app’s performance. Node.js applications sit at the crux of big data pipelines nowadays. Their speed and efficiency can be the key to a system’s success or failure.

There are 3 main optimization techniques that give the best value when handling data in Node. 

Use streams to handle big amounts of data

Streams allow your application to handle big amounts of data in chunks. There are a few reasons why you would like to do that:

  1. V8’s heap is limited by default (historically around 1.5GB of old space on 64-bit systems). The limit can be increased with `--max-old-space-size`, but this is usually not recommended except in rare cases.
  2. Choking your Node.js app’s memory will cause slower runtime. Memory allocation and Garbage Collection will slow down your system.
  3. Using a lot of memory during runtime will require stronger machines, which cost more money to your business.

Node.js memory optimization techniques might help, but they are time consuming and not as effective as removing the problem altogether.

A very common scenario of using streams is to read from a big file. Here’s how it is done:

import * as fs from 'fs';

async function handleStreamContent(myStream, cb) {
    for await (const chunk of myStream) {
        cb(chunk);
    }
}

const myStream = fs.createReadStream('bigData.json', { encoding: 'utf8' });
handleStreamContent(myStream, console.log);

In the example above, we create an async iterator inside `handleStreamContent`. It receives a stream and a callback and iterates asynchronously over the stream. On every iteration, it runs the callback with a chunk received from the stream.

The app that uses `handleStreamContent` creates a file stream. The file can hold a few GB of data. We know that our Node.js process cannot load such a big file into memory at once. Luckily, we don’t need to – we just read it in small chunks.

`handleStreamContent` doesn’t care what the source of the stream is. It can be a file, data from a server, a database, or Kafka topic messages.

Cache frequent requests to handle big amount of traffic

Caching is a key concept when your Node.js app handles big amounts of traffic. It usually relates to requests to external resources like databases. Either way, caching will improve your Node.js application’s response to client requests and reduce the load on your server.

There are many ways to create a caching mechanism. The underlying principle is the same: 

  1. Find a request that is common in your application;
  2. On receiving such a request see if it has a valid cache result;
  3. If yes – return the cached result;
  4. If no –
    1. Fulfill the request (fetch data from DB, make certain data transformation etc.);
    2. Save the result in cache;
  5. If something happens in the system that changes our result (new data came in or just enough time passed) invalidate the cache.
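The steps above can be sketched as a minimal in-memory cache with time-based invalidation. The names (`cachedQuery`, `invalidate`) and the TTL value are illustrative assumptions, and `fetchResult` stands in for any expensive operation (a DB query, a data transformation, etc.):

```javascript
const cache = new Map();
const TTL_MS = 60_000; // illustrative: consider entries stale after a minute

async function cachedQuery(key, fetchResult) {
  const entry = cache.get(key);
  // Steps 2-3: return a still-valid cached result.
  if (entry && Date.now() - entry.storedAt < TTL_MS) {
    return entry.value;
  }
  // Step 4: fulfill the request and save the result in cache.
  const value = await fetchResult(key);
  cache.set(key, { value, storedAt: Date.now() });
  return value;
}

// Step 5: invalidate explicitly when the underlying data changes.
function invalidate(key) {
  cache.delete(key);
}
```

A production setup would typically put the cache in a shared store such as Redis, so that all instances of the app benefit from it.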

Caching saves a lot of time and money. It prevents redundant calls to expensive resources in your system.

Optimize data handling functions

Optimizing your data handling functions can become crucial. It is usually harder than using streams or caching requests, but it can be very rewarding. Sometimes, changing a single for loop can yield a 3x runtime speedup and a major reduction in memory allocation, garbage collection, and CPU usage.

There is no “how to” here. Optimizing functions for performance is a case by case process. Each function should be optimized according to the way it is written and the context.

Sometimes you’ll have the opportunity to improve an O(n^2) algorithm to O(n). Will this have a real impact on your application? We’ll talk about it in the Profiling section below. There might be small changes you can make that have a big impact in cases of massive traffic to your servers.
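As a hypothetical example of such a case-by-case change (the function names here are illustrative), replacing a nested array scan with a Set lookup turns an O(n^2) intersection into O(n):

```javascript
// O(n^2): for each item of `a`, scan the whole of `b`.
function intersectSlow(a, b) {
  return a.filter((x) => b.includes(x));
}

// O(n): build a Set once, then each lookup is O(1) on average.
function intersectFast(a, b) {
  const seen = new Set(b);
  return a.filter((x) => seen.has(x));
}
```

Both return the same result; on arrays with millions of items, only the second one finishes in reasonable time.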

Memory optimization might also come into play. We already learned about Garbage Collection, but there are more issues that demand optimization. Memory leaks that lead to "JavaScript heap out of memory" errors are one. At other times you might need to apply memory-saving design patterns like the Flyweight.
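A minimal sketch of the Flyweight idea (the `getFlyweight` factory name is illustrative): share one object per distinct value instead of allocating a new object for every occurrence.

```javascript
const flyweights = new Map();

// Returns one shared object per distinct character, instead of
// allocating a fresh object for every occurrence.
function getFlyweight(char) {
  if (!flyweights.has(char)) {
    flyweights.set(char, { char });
  }
  return flyweights.get(char);
}

// Three occurrences of 'a' now reference a single shared object;
// with millions of occurrences the memory savings are dramatic.
const glyphs = 'aaab'.split('').map(getFlyweight);
```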

Scaling Node.js applications

As your business grows, your applications need to scale. Usually, when we talk about scaling Node.js applications, we mean creating new instances. When your application’s machine reaches a certain limit (CPU usage, memory usage, number of requests, etc.), you spin up a new machine to handle requests. This process is called Load Balancing.

There are a few stages to pass through before starting Load Balancing.

For instance, imagine you have a service that serves data to a client. This client is being used by 10 million users at any given time. You can expect billions of requests on your server. One server will be hard pressed to handle so many requests.

If a server gets more requests than it can handle, the requests will queue up and the clients of this server will experience long wait times or even timeouts. In addition, your server will be hard pressed to handle the load, garbage collection will not have time to do its work, and a memory leak will build up. Memory leaks usually lead to app crashes. Not a good experience.

Caching is one thing you can do to reduce the load on a server. You can also create a managed request queue for the server. It will allow the server to handle X requests at a time and hold off other requests until the current ones are done.

Eventually, there’s only so much traffic one server can handle, and you’ll have to use a Load Balancing technique.

Load balancing and runtime optimization in Node.js

Load balancing allows systems to grow with demand indefinitely. For this to work, you need to design your application so that multiple instances can work simultaneously. More specifically, your app needs to be stateless.

Creating a stateless application is less related to runtime performance, so we will not cover this topic here. On the other hand, runtime performance can have an impact on your cloud computing costs when load balancing at scale. Load balancing can become costly if your servers are not optimized. The math is simple: if each server can handle X requests/second, you will need Y machines. The bigger X is, the smaller Y is. The fewer machines you need, the less you pay your cloud vendor.

That is a very good reason to understand how to profile and optimize your application’s runtime.

Profiling a Nodejs Application Runtime Performance

What is profiling?

Profiling is a form of analysis which measures certain metrics regarding your software. In our case, we would like to profile certain metrics during runtime. The metrics we will focus on are CPU load, memory and run time of functions.

Profiling is the first step in diagnosing a performance issue. The most common case is reacting to a performance issue that showed up in your application. This means you have hints that will guide you where to look. The other case of profiling is when you are exploring ways to improve your performance. That means you think you can save costs or reaction time of your application. You look at the runtime performance profiling and see what parts of your application can benefit best from a performance boost.

Debugging a performance issue requires a performance test tool. Luckily, V8 comes with such a tool bundled inside. You need nothing more than Node.js (if you are here, you probably have it) and Chrome (if you live on planet Earth, you probably have that too).

Create a Node.js performance test profile

Starting ANY Node.js application with the `--inspect` flag starts the application in debug mode. This opens a socket connection from V8 for whoever wants to listen. Choosing a Node.js performance test tool is an individual matter. You can use WebStorm or Visual Studio Code, for instance. You can even use the command line. The most common visual profiling and debugging tool is Google Chrome DevTools. I also use Chrome DevTools as my main Node.js benchmarking tool.
Once you start your application with the `--inspect` flag, you will get a port where your app’s debugging and profiling data is broadcast (marked with red in the figure below).

Once you have this port, you can head over to chrome://inspect in the browser (marked with red in the image below). Inside you will have the option to Open dedicated DevTools for Node (marked with blue in the image below).

The dedicated DevTools for Node that opens allows you to debug and profile your Node.js application running with the `--inspect` flag.

You should head over to the Connection tab and make sure your running application’s debug port is set (marked red in the image below). If not, just click “Add connection” (green arrow) and add your port (usually localhost:{{port}}).

[Image: Node.js DevTools connection settings]

Note that you can theoretically debug and profile remote machines by adding their address to the list.

Now that your server is connected to the Chrome profiler, we can start profiling memory and CPU.

Head over to the Memory tab and profile your application’s memory – either take a snapshot or profile the heap over time. You can also see how much memory each function allocated during runtime.

In the Profiler tab you will be able to track what is happening in your application in real time. You start recording, use your application (e.g. send a GET or POST request to the server), and once you stop the recording, you’ll see the functions that ran during that time. The information you get includes the amount of time each function ran, CPU usage, and more.

Other Nodejs Profiling Tools

There are many other profiling tools. The most basic is the internal profiler, enabled with the `--prof` flag (`node --prof myApp.js`). This outputs a log file that can be consumed like this: `node --prof-process file.log > results.txt`.

Some tools are not even related to the actual runtime performance. autocannon, for instance, is a tool that lets you “bomb” your server with HTTP requests and thus benchmark your API endpoints’ response times.

Optimizing a Node.js Application Runtime Performance

Here are some practical tips that you can use right now to improve Node.js performance.

Go asynchronous and run things in parallel

Yes, I know – JS is single threaded, right? Not really, as we saw above. There’s a main thread, but there are other threads to do work on. Running scripts asynchronously allows you to run processes in parallel and save your application a lot of time.

You can always spin up a child process to take some of the load off the current process. Let’s say you have a heavy computation you can run while still doing other things as you wait for its result. Here’s how you might use a child process:


In the example above, we use Node’s `exec` command to run a file that performs a long process. We send this file an argument, and it runs its course. The answer is sent back via the child process’s `stdout`. You can always create more elaborate mechanisms, like saving data to a file and streaming it. This leaves your app’s main thread available for other tasks.

Similarly, you can use a microservice architecture to spin up remote servers that offload specific work. Instead of using a child process, you’d make HTTP requests.

Profile and use design patterns to solve your problem

You have a performance problem and start tackling it. You profile your app and find out where the problem is. Know that 95% of the time, someone has had your problem before and documented a solution. Design patterns are solutions to commonly occurring problems.

Let’s say you profiled your application and found a lot of garbage collection occurrences. This is bad, because we’ve learned that GC takes CPU resources from your main thread. GC issues like this are commonly solved with the Object Pool design pattern. Implementing the right design pattern can save you up to 90% CPU time. Here are the results of a simple Object Pool example:

[Image: Object Pool benchmark results]

In the picture above, we can see the version with the pool took 0.4187 seconds. That’s roughly 4 times faster than the same code without an Object Pool (1.6888 seconds)!
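A minimal Object Pool sketch (class and method names are illustrative): instead of allocating a fresh object per use and leaving it for the GC, you acquire objects from a pre-allocated pool and release them back when done.

```javascript
class ObjectPool {
  constructor(factory, size) {
    this.factory = factory;
    // Pre-allocate once; these objects get reused instead of collected.
    this.free = Array.from({ length: size }, factory);
  }

  acquire() {
    // Reuse a pooled object when available; fall back to allocation.
    return this.free.pop() ?? this.factory();
  }

  release(obj) {
    this.free.push(obj); // return the object for reuse instead of GC
  }
}

const pool = new ObjectPool(() => ({ x: 0, y: 0 }), 100);
const point = pool.acquire();
point.x = 1;
pool.release(point); // no garbage created: the same object will be reused
```

Note that released objects should be reset before reuse in real code, or stale state will leak between uses.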

This is just one example of how to profile and optimize your JavaScript. A benchmark for your use case will tell you how effective the change you’ve just made is.

Summary

Dealing with Node.js performance can be fun and rewarding.

In this article we learned how Node.js works internally (the event loop). We learned a little about memory management (the Garbage Collector). We then learned about ways to scale Node.js applications via streams, caching, and load balancing.

We also saw how to profile Node.js runtime. Profiling is the key to solving performance issues. Without knowing where your bottlenecks are, you might flail around and waste a lot of time on irrelevant sections of the code.

As in the design patterns example explained above, profiling Node.js runtime can lead you to see the bottleneck in your application – and identifying the sources of this bottleneck can lead you to a ready-made solution like a design pattern.

