High Performance JavaScript – Array Creation & Population

Preallocating arrays in JavaScript is about 65% faster than not preallocating!
JavaScript Array Performance in Chrome V8 using a size of 10,000 elements

JavaScript Arrays

Arrays in JavaScript are very easy to use, and there are some simple tricks to make them perform at peak efficiency. Testing shows that the recommended standard method for declaring arrays in JavaScript (var a = [];) performs quite poorly compared to the less popular alternative (var a = new Array(100);).
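As a rough sketch of the two approaches being compared (the benchmark numbers above come from JSPerf, not from this snippet):

```javascript
// Standard method: start empty and let the engine grow the backing store
// incrementally as elements are appended.
var standard = [];
for (var i = 0; i < 10000; i++) {
  standard[i] = i;
}

// Preallocated method: request the full length up front, then fill.
var preallocated = new Array(10000);
for (var j = 0; j < 10000; j++) {
  preallocated[j] = j;
}
```

Both loops produce identical contents; only how the underlying storage is allocated differs.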

Most people argue that the performance difference is negligible and the benefits of the standard method outweigh any performance gains possible. However, I am working on developing a high performance Neural Network Library in JavaScript, and the more evaluations per second I can get through my network the better!

Creating and evaluating neural networks is mostly composed of reading and writing values to arrays, so benchmarking array performance has been the first step towards optimizing the library.

I used JSPerf.com to do these early benchmarks.  In another post I will use chrome’s built-in JavaScript profiler to do more in-depth benchmarks of the Neural Network library once the library is more complete.


JavaScript Array Allocation

According to a discussion on stack overflow, there is no real good reason to use anything other than the standard method of creating an array in JavaScript:

It's more readable, the overhead is small, and it leaves all the memory management to the JavaScript engine. However, allocating all the space at once instead of incrementally during the filling process is generally quicker.

This method is generally frowned upon because it's easier to make mistakes and it's "harder to read". In fact, w3schools says to avoid it outright! (See the "When to Use Arrays?" section.) However, for my Neural Network this method is considerably more efficient and faster!

If you want to test it on your machine / browser here is the link:  http://jsperf.com/preallocating-array/8

Large JavaScript Arrays in Chrome V8

It turns out that V8 does a neat trick internally with an object's hidden class when it sees a large array length: it switches from a flat array of unsigned integers to a sparse array (i.e., a map). To force this optimization you can manually set the length property.
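For illustration, a minimal sketch of the trick described above (the 10,000-element size mirrors the benchmark; the internal representation switch itself is not observable from script):

```javascript
// Hint the engine at the eventual size by setting .length before filling.
var a = [];
a.length = 10000;

// Fill the array as usual; semantics are identical to the standard method.
for (var i = 0; i < a.length; i++) {
  a[i] = i;
}
```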

This is ONLY faster when working with large arrays (roughly 1,000 elements or more). For smaller arrays it is only slightly slower, if not the same speed.

If you want to test this on your machine / browser here is the link: http://jsperf.com/preallocating-array/9


In the next post I will benchmark the fastest ways to read and write values in JavaScript arrays.

Travis Payton

  5 comments for “High Performance JavaScript – Array Creation & Population”

  1. June 14, 2019 at 11:43 pm

    The downside of new Array(n) is that it is HOLEY. It is better from an allocation point of view, considering the memory reallocation and copying that PACKED arrays require. But its downside is access and operation performance, which is much slower than PACKED (multiple lookup steps) and makes it hard for V8 to optimize.

  2. April 9, 2019 at 8:20 am

    Thank you so much for the information. It gave me the assurance to use new Array to create arrays.

  3. ngryman
    March 18, 2017 at 11:15 am

    Have you then benched reads/writes? You may have a bad surprise…

    Doing what you advise basically switches from fast linear storage to a hash table. So yes, you gain creation time, as it doesn’t make sense to pre-allocate a hash table; basically the VM does nothing, hence your perf boost.
    But then the performance impact on further reads/writes is huge!

    To access linear storage you only need the start memory address and the offset (index). This is a very cheap operation; this is fast.
    To access a hash map entry, you first need to compute the hash using a hashing function and then jump to the memory that holds the actual value. This is way more expensive.
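    To make the commenter's point concrete, here is a hedged sketch contrasting the two access patterns. The Map below merely stands in for the engine-internal dictionary mode; V8's actual storage choice is not observable from script:

    ```javascript
    // Dense array: an index read is effectively base address + offset.
    const dense = [10, 20, 30];

    // Map: each read hashes the key before it can reach the value.
    const sparse = new Map([[0, 10], [1, 20], [2, 30]]);

    // Same observable result; the dictionary path does more work per access.
    console.log(dense[1] === sparse.get(1)); // true
    ```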


    • March 20, 2017 at 9:45 pm

      Great point and excellent article. I believe I pointed out that V8 switches to sparse arrays / maps when using the [] / .length = N hack. Here are some more performance benchmarks geared towards sequential access and storing numbers instead of strings or a mixture of types. You’ll notice that performance is almost identical between writing to the .length = N array and the new Array(N) one: http://jsperf.com/preallocating-array/15. I ultimately went with pre-allocating the array with new Array(N) in my Neural Network code, as it consistently proved slightly faster not only in V8 but also in SpiderMonkey.

      Another interesting note is that doubles are much faster than floats or ints, as mentioned in that article. It probably has to do with the fact that JS natively stores numbers as doubles, so there is no typecasting or manipulation overhead. This benchmark shows that the native double array performs identically to the typed array: http://jsperf.com/array-comparison-typed-vs-regular/4

      I’ve learned that it is often best to avoid any performance tricks or hacks, as the V8 team has done an amazing job with the JIT compiler and its optimizations for the more common use cases. For example, here is a comparison of various array topologies and data types during an ANN evaluation; the fastest is the simple native multi-dimensional array. http://jsperf.com/array-comparison-for-network-evaluations

  4. Anonymous
    February 28, 2016 at 11:21 am

    thanks !
