A few tricks to speed up your JavaScript application
Bartosz Slysz
Software Engineer
With the advancement of browser technology, web applications have begun to move more and more logic to the front-end, relieving the server and lowering the number of operations it has to perform. In basic CRUD applications, the role of the server comes down to authorization, validation, communication with databases and the required business logic. The rest of the data logic can easily be handled by the code responsible for presenting the application on the UI side.
In this article, I will show you a few examples and patterns that will help keep your code efficient, neat and fast.
Before we go deeper into specific examples, a caveat: I want to focus only on cases that, in my opinion, can affect the speed of an application in a surprising way. That does not mean the fastest solution is the best choice in every possible case. Treat the tips below as things worth looking into when your application runs slowly, e.g. in products that render games or advanced graphs on a canvas, process video, or sync data in real time with the lowest possible latency.
First of all – Array.prototype methods
We base a large part of our application logic on arrays – mapping, sorting, filtering, summing their elements, and so on. Their built-in methods let us perform all kinds of calculations and groupings in an easy, transparent and natural way. They all work similarly: we pass a function that, in most cases, receives the element value, the index and the whole array in turn on each iteration. The function is executed for every element of the array, and the result is interpreted differently depending on the method. I will not elaborate on the individual Array.prototype methods, because I want to focus on why they run slowly in a large number of cases.
The Array methods are slow because they execute a function for every element. From the engine’s perspective, each call has to be set up anew – the appropriate scope and a lot of other dependencies must be prepared – which makes the process much longer than repeating a specific block of code within a single scope. And this is probably enough background knowledge to understand the following example:
(() => {
  const randomArray = [...Array(1E6).keys()].map(() => ({ value: Math.random() }));

  console.time('Sum by reduce');
  const reduceSum = randomArray
    .map(({ value }) => value)
    .reduce((a, b) => a + b);
  console.timeEnd('Sum by reduce');

  console.time('Sum by for loop');
  let forSum = randomArray[0].value;
  for (let index = 1; index < randomArray.length; index++) {
    forSum += randomArray[index].value;
  }
  console.timeEnd('Sum by for loop');

  console.log(reduceSum === forSum);
})();
I know that this test is not as reliable as a benchmark (we will come back to benchmarks later), but it lights a warning lamp. For a random case on my computer, the for loop turns out to be about 50 times faster than mapping and then reducing the elements to achieve the same effect! This is because getting the sum with reduce in this particular case first requires mapping the array down to the pure values we want to sum. So, let’s write something more legit to be objective about the Array methods:
(() => {
  const randomArray = [...Array(1E6).keys()].map(() => ({ value: Math.random() }));

  console.time('Sum by reduce');
  const reduceSum = randomArray
    .reduce((a, b) => ({ value: a.value + b.value })).value;
  console.timeEnd('Sum by reduce');

  console.time('Sum by for loop');
  let forSum = randomArray[0].value;
  for (let index = 1; index < randomArray.length; index++) {
    forSum += randomArray[index].value;
  }
  console.timeEnd('Sum by for loop');

  console.log(reduceSum === forSum);
})();
And as it turns out, our 50x boost drops to a 4x boost. Apologies if you feel disappointed! To stay objective to the end, let’s analyze both snippets again. First of all, these innocent-looking changes halved our theoretical computational complexity: instead of first mapping and then summing plain numbers, we now make a single pass, but we operate on objects the whole time, building a new wrapper object on every step only to pull out the field we are interested in at the very end. The problem arises when another programmer takes a look at the code – compared to the snippets shown earlier, this one loses its abstraction at some point.
That is because, from the second iteration onwards, the reduce callback mixes two kinds of values: a strange artificial object holding only the field we care about, and the standard objects of the iterated array. I don’t know what you think about it, but from my perspective, the logic of the for loop in the second snippet is much clearer and reveals its intent better than this strange-looking reduce. And still, even though it is no longer the mythical 50, it is 4 times faster in computation time! As every millisecond is valuable, the choice in this case is simple.
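For the record, there is also a middle ground that keeps the declarative style without allocating a wrapper object on every step: a reduce with a plain numeric accumulator. The sketch below is my own illustration, not one of the variants measured above, so whether it catches up with the for loop is something you should benchmark yourself:

(() => {
  const randomArray = [...Array(1E6).keys()].map(() => ({ value: Math.random() }));

  // Accumulate a plain number instead of building an object per iteration;
  // the initial value 0 lets reduce start directly from the first element.
  const reduceSum = randomArray.reduce((sum, { value }) => sum + value, 0);

  console.log(reduceSum);
})();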
The most surprising example
The second thing I wanted to compare involves the Math.max method or, more precisely, stuffing a million elements into it and then extracting the largest one. I prepared the code and the time-measuring methods, fired it up – and got a very strange error: the stack size was exceeded. Here’s the code:
(() => {
  const randomValues = [...Array(1E6).keys()].map(() => Math.round(Math.random() * 1E6) - 5E5);

  console.time('Math.max with ES6 spread operator');
  const maxBySpread = Math.max(...randomValues);
  console.timeEnd('Math.max with ES6 spread operator');

  console.time('Math.max with for loop');
  let maxByFor = randomValues[0];
  for (let index = 1; index < randomValues.length; index++) {
    if (randomValues[index] > maxByFor) {
      maxByFor = randomValues[index];
    }
  }
  console.timeEnd('Math.max with for loop');

  console.log(maxByFor === maxBySpread);
})();
It turns out that the spread operator passes every element of the array to the function as a separate argument, and in V8 the number of arguments is limited by the call stack, whose size depends on the environment. This surprised me a lot, but it carries a conclusion: the native method is faster as long as our array does not exceed a certain magical number of elements, which in my case turned out to be 125375. Up to that size, the native method was about 5 times faster than the loop. Above that number, however, the for loop definitely wins – unlike its opponent, it actually returns a correct result.
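If you want to keep a Math.max-based one-liner that survives arbitrarily long arrays, a reduce-based maximum sidesteps the limit, because Math.max only ever receives two arguments per step. This is a sketch of mine, assuming a non-empty array of plain numbers, not a variant from the benchmarks below:

(() => {
  const randomValues = [...Array(1E6).keys()].map(() => Math.round(Math.random() * 1E6) - 5E5);

  // Math.max is called with just two arguments at a time, so the array
  // length no longer matters for the call stack.
  const max = randomValues.reduce((a, b) => Math.max(a, b));

  console.log(max);
})();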
Recursion
The concept I want to mention in this paragraph is recursion. In the previous example, we already bumped into the call stack limit with Math.max and argument spreading, where it was impossible to get a result once a specific stack size was exceeded.
Now we will see what recursion looks like in code written in JS itself, rather than in built-in methods. Perhaps the most classic example we can use here is, of course, finding the nth term of the Fibonacci sequence. So, let’s write it!
(() => {
  const fiboIterative = (n) => {
    let [a, b] = [0, 1];
    for (let i = 0; i < n; i++) {
      [a, b] = [b, a + b];
    }
    return a;
  };

  const fiboRecursive = (n) => {
    if (n < 2) {
      return n;
    }
    return fiboRecursive(n - 2) + fiboRecursive(n - 1);
  };

  console.time('Fibonacci sequence by for loop');
  const resultIterative = fiboIterative(30);
  console.timeEnd('Fibonacci sequence by for loop');

  console.time('Fibonacci sequence by recursion');
  const resultRecursive = fiboRecursive(30);
  console.timeEnd('Fibonacci sequence by recursion');

  console.log(resultRecursive === resultIterative);
})();
Okay, in this particular case of computing the 30th term of the sequence on my computer, the iterative algorithm delivers the result about 200 times faster. This is hardly surprising once you count the calls: the naive recursion branches in two on nearly every step, so the number of calls grows exponentially with n.
There is, however, one thing that can be rectified in the recursive algorithm – as it turns out, it works much more efficiently when we use a technique called tail recursion. This means that we pass the partial result obtained so far down to deeper calls through the arguments. This reduces the number of calls required and, as a result, speeds up the computation. Let’s correct our code accordingly!
(() => {
  const fiboIterative = (n) => {
    let [a, b] = [0, 1];
    for (let i = 0; i < n; i++) {
      [a, b] = [b, a + b];
    }
    return a;
  };

  const fiboTailRecursive = (n, first = 0, second = 1) => {
    if (n === 0) {
      return first;
    }
    return fiboTailRecursive(n - 1, second, first + second);
  };

  console.time('Fibonacci sequence by for loop');
  const resultIterative = fiboIterative(30);
  console.timeEnd('Fibonacci sequence by for loop');

  console.time('Fibonacci sequence by tail recursion');
  const resultRecursive = fiboTailRecursive(30);
  console.timeEnd('Fibonacci sequence by tail recursion');

  console.log(resultRecursive === resultIterative);
})();
Something I didn’t quite expect happened here – in some cases, the tail-recursive algorithm delivered the result (computing the 30th element of the sequence) almost twice as fast as the iterative one. I am not entirely sure whether this is due to V8’s optimization of tail recursion or a lack of optimization of the for loop for this specific number of iterations, but the result is unambiguous – tail recursion wins.
This is strange because, essentially, the for loop imposes much less abstraction on the low-level computation and, you could say, sits closer to basic machine operations. Yet the results are undeniable – cleverly designed recursion can turn out to be faster than iteration.
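Tail calls are not the only way to tame the naive version, by the way. Caching already-computed terms collapses the exponential call tree down to a linear number of calls. The sketch below is my own illustration, not something benchmarked in this article:

(() => {
  const fiboMemoized = (n, cache = new Map()) => {
    if (n < 2) {
      return n;
    }
    if (!cache.has(n)) {
      // Each term is computed once and reused by every later call.
      cache.set(n, fiboMemoized(n - 2, cache) + fiboMemoized(n - 1, cache));
    }
    return cache.get(n);
  };

  console.log(fiboMemoized(30)); // 832040
})();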
Use asynchronous calls as often as you can
I would like to devote the last paragraph to a short reminder about a way of performing operations that can also greatly affect the speed of our application. As you should know, JavaScript is a single-threaded language that schedules all operations with an event-loop mechanism. It is a cycle that runs over and over, and each of its steps handles a dedicated, specified set of actions.
To keep this loop fast and let queued tasks wait as short a time as possible for their turn, every task should be as quick as possible. Avoid running long operations on the main thread – if something takes too long, try to move the computation into a Web Worker or split it into parts that run asynchronously. This may slow individual operations down, but it boosts the whole JS ecosystem, including IO operations such as handling mouse movement or a pending HTTP request.
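As an illustration, here is one way to chop a long computation into chunks so the event loop can handle input and other callbacks in between. This is a sketch of my own, not code from the benchmarks, and the chunk size is an arbitrary assumption to tune per use case:

// Split a long-running sum into chunks; between chunks, control returns
// to the event loop, so UI events and IO callbacks are not starved.
const sumInChunks = (values, chunkSize = 1E4) => new Promise((resolve) => {
  let sum = 0;
  let index = 0;
  const step = () => {
    const end = Math.min(index + chunkSize, values.length);
    for (; index < end; index++) {
      sum += values[index];
    }
    if (index < values.length) {
      setTimeout(step); // yield before processing the next chunk
    } else {
      resolve(sum);
    }
  };
  step();
});

sumInChunks([...Array(1E6).keys()]).then((sum) => console.log(sum));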
Summary
In conclusion, as mentioned earlier, chasing the milliseconds that can be saved by choosing a different algorithm may turn out to be pointless in some cases. On the other hand, neglecting such things in applications that require smooth running and fast results can be deadly. Besides the speed of the algorithm, one additional question should be asked in some instances – is the abstraction operating at the right level? Will the programmer who reads the code understand it without any problems?
The only way is to strike a balance between performance, ease of implementation and appropriate abstraction, and to be confident that the algorithm works correctly for both small and large amounts of data. The way to do this is quite simple – be smart, consider the different cases when designing the algorithm, and arrange it to behave as efficiently as possible for typical executions. It is also advisable to write tests – make sure the algorithm returns the right results for different data, no matter how it works internally. And take care of the right interfaces – so that both the input and output of your methods are readable, clear and reflect exactly what they do.
I mentioned earlier that I would circle back to the reliability of measuring the speed of the algorithms in the examples above. Measurements with console.time are not very reliable, but they reflect the standard use case best. Either way, I present the benchmarks below – some of them look a bit different from a single execution, because benchmarks simply repeat a given activity over a certain time window and make use of V8’s optimization of hot loops.
https://jsben.ch/KhAqb – reduce vs for loop
https://jsben.ch/F4kLY – optimized reduce vs for loop
https://jsben.ch/MCr6g – Math.max vs for loop
https://jsben.ch/A0CJB – recursive fibo vs iterative fibo