
Precision and Speed with Numbers in JavaScript or That Time I Over-Optimized A Recursive Fibonacci Algorithm

I recently experienced one of those “let’s just go do some maths for fun” episodes we’re all so familiar with, and I had occasion to write a recursive Fibonacci algorithm.

For anyone unfamiliar with the concept, there’s a simple method for finding the nth value in the Fibonacci sequence using recursion. It might look something like this:

// Naive recursion: fib(0) and fib(1) are 1, and everything else
// is the sum of the two previous values.
function fib(n) {
  return n <= 1 ? 1 : fib(n - 1) + fib(n - 2);
}

This function effectively works out how many 1s need to be added together to reach the desired value, recomputing the same subproblems over and over, and that can become quite time consuming. When working on recursive algorithms like this, it’s a great idea to implement memoization, or a cache of results that have already been computed.

There are a few approaches to memoization one can take when working with JavaScript: you might build the memo directly into the function you’re calling, or you might use a helper (perhaps from a library like Lo-Dash) that gives you a new function which wraps your function and provides a memo.
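
As a rough sketch of the wrapper approach (a hand-rolled stand-in for a library helper, keyed on a single argument; the names here are mine, not from a library):

function memoize(fn) {
  const memo = new Map();
  return function (n) {
    // Compute and cache the result on the first call for each argument.
    if (!memo.has(n)) {
      memo.set(n, fn(n));
    }
    return memo.get(n);
  };
}

// Recursive calls must go through the wrapped binding to hit the cache.
const memoFib = memoize(function (n) {
  return n <= 1 ? 1 : memoFib(n - 1) + memoFib(n - 2);
});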

Aware of these options, I thought it would be interesting to experiment with using a Proxy to provide memoization. Replicating the wrapped function with a proxy allows for the same functionality, managed in what is, in my opinion, a more readable fashion.
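
This isn’t the linked code, but a minimal sketch of the idea: a Proxy’s apply trap intercepts calls to the underlying function and consults a cache before doing any work.

function memoizeWithProxy(fn) {
  const memo = new Map();
  return new Proxy(fn, {
    // The apply trap runs whenever the proxied function is called.
    apply(target, thisArg, args) {
      const key = args[0];
      if (!memo.has(key)) {
        memo.set(key, Reflect.apply(target, thisArg, args));
      }
      return memo.get(key);
    },
  });
}

const proxiedFib = memoizeWithProxy(function (n) {
  return n <= 1 ? 1 : proxiedFib(n - 1) + proxiedFib(n - 2);
});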

This all went off without a hitch (here’s the code), but I was curious about the performance of this method, and I still needed to address the approximated values my function was returning.

And down the rabbit hole I went…

Speed

When refactoring JavaScript, there are obvious optimizations, such as not looping through values when you don’t need to or performing fewer existential checks, and then there are micro optimizations: small, low-level tweaks that shave time off your code without changing its functionality.
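
As a contrived example of the latter (hypothetical, and one that modern engines may well make unnecessary):

const values = [1, 2, 3, 4, 5];

// Hoisting the length lookup out of the loop condition is a classic
// micro optimization: same behavior, one fewer property access per pass.
let total = 0;
for (let i = 0, len = values.length; i < len; i++) {
  total += values[i];
}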

If you’re calling a function five or ten times here or there, there’s not much difference between 125ms and 250ms. I like to play code golf too, but those pull requests may result in a few rolled eyes.

However, in situations where you need to walk through dozens of directories and hundreds of files while aggregating data, you’ve entered a new territory. Here’s a fun game to illustrate: conversationally, how would you refer to the difference between 12,500,000ms and 25,000,000ms?

Feel free to check my math, but that’s nearly three and a half hours: the difference is 12,500,000ms, or 12,500 seconds, which works out to about 208 minutes.

While this is very far removed from most JavaScript developers’ typical, everyday work, when one finds oneself in this scenario, these optimizations end up not so micro after all.

Before you start performing micro optimizations, you may want to investigate what is actually causing bottlenecks (it would be unfortunate to optimize a particular piece of your codebase only to discover that another bottleneck is holding you back from better performance), and to rethink what you’re doing in contrast with what you want to accomplish. I think of micro optimizations as a seldom needed last step.

For example, in this case, exploring matrix exponentiation and fast doubling would have been much more productive than fidgeting with memoization, but, fortunately for the sake of this post, fidget with memoization I did!
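
For the curious, here’s a rough sketch of the fast-doubling approach, which reaches the nth value in O(log n) steps rather than O(n). It uses the standard convention F(0) = 0 and F(1) = 1, so the fib(n) above corresponds to F(n + 1):

// Fast doubling: F(2k)   = F(k) * (2F(k+1) - F(k))
//                F(2k+1) = F(k)^2 + F(k+1)^2
// Returns the pair [F(n), F(n + 1)].
function fastFib(n) {
  if (n === 0) return [0, 1];
  const [a, b] = fastFib(Math.floor(n / 2)); // [F(k), F(k+1)]
  const c = a * (2 * b - a); // F(2k)
  const d = a * a + b * b;   // F(2k+1)
  return n % 2 === 0 ? [c, d] : [d, c + d];
}

fastFib(10)[0]; // 55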

function fib(n) {
  // Return the cached result if it exists; otherwise compute it,
  // store it in the memo, and return it.
  return fib.memo[n] || (fib.memo[n] = n <= 1 ? 1 : fib(n - 1) + fib(n - 2));
}
fib.memo = {};

This method checks if a key/value pair for the provided argument exists in the memo (attached to the function), and, if it does, returns that value instead of doing the work.

According to my tests using Benchmark.js, this method was significantly (relatively speaking, of course) more efficient. And don’t fret—if you don’t want to bother with setting up a project for benchmarking your work, there’s always console.time. A few rounds of timing the execution of your candidates should be accurate enough when making decisions of this nature.
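
A quick-and-dirty timing pass might look like this (the label and the call are just illustrative):

console.time('memoized fib');
fib(500);
console.timeEnd('memoized fib'); // prints the elapsed time for this label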

Precision

While going several steps beyond what was necessary in regard to speed, I started putting larger and larger values into my Fibonacci function and was, not for the first time, confronted with JavaScript’s lack of precision at larger values.

A fun thing about numbers in JavaScript is that there is only one option: 64-bit floating point (an IEEE 754 double), whose 53-bit significand means integers are only exact up to 2^53 − 1. This results in all sorts of wonky behavior, like the need for special comparisons (checking against a small epsilon rather than using strict equality) when manipulating floating point values.
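
A couple of classic examples of that wonkiness:

0.1 + 0.2;                                   // 0.30000000000000004, not 0.3
Math.abs(0.1 + 0.2 - 0.3) < Number.EPSILON;  // true: the epsilon-based equality check
Number.MAX_SAFE_INTEGER;                     // 9007199254740991, i.e. 2^53 - 1
9007199254740993;                            // 9007199254740992: precision already lost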

In order to represent integers larger than 2^53 − 1, we need to cast them as strings or, in order to effectively manipulate them, hold their digits in an array. As you can imagine, this requires adjusting how you address all arithmetic in your application.
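
To make that concrete, here’s a hypothetical sketch of schoolbook addition over arrays of digits, the kind of bookkeeping a big-number library handles for you:

// Digits are stored least-significant first, so carries move left to right.
function addDigits(a, b) {
  const out = [];
  let carry = 0;
  for (let i = 0; i < Math.max(a.length, b.length) || carry; i++) {
    const sum = (a[i] || 0) + (b[i] || 0) + carry;
    out.push(sum % 10);
    carry = Math.floor(sum / 10);
  }
  return out;
}

addDigits([9, 9], [2]); // [1, 0, 1], i.e. 99 + 2 = 101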

To solve this problem and work with large numbers in JavaScript, I use big.js.

const Big = require('big.js');

function bigFib(n) {
  // The base cases are Big instances too, so .plus is always available.
  return bigFib.memo[n] ||
    (bigFib.memo[n] = n <= 1 ? new Big(1) : bigFib(n - 1).plus(bigFib(n - 2)));
}
bigFib.memo = {};

bigFib(100).toString(); // '573147844013817084101', exact

new Big creates a new big.js number instance, and then I’m using the plus method to perform the addition. Big.js is a powerful tool, and this is a very simple use case, but it allows us to go from results like 573147844013817200000 to results like 573147844013817084101. I don’t know much about maths, but I know that some people are kind of sticklers when it comes to having the correct numbers!

Not Too Shabby

And now we’re on the other side of the rabbit hole, having seen that JavaScript, with a bit of tweaking, can be a fine instrument for common maths (even without squeezing out that extra ounce of performance). If you have any questions or you’d like to share your own thoughts on the subject, please reach out in the comments!