What is the performance of Objects/Arrays in JavaScript? (specifically for Google V8)


I created a test suite, precisely to explore these issues (and more) (archived copy).

And in that sense, you can see the performance issues in this tester of 50+ test cases (it will take a long time to run).

Also, as its name suggests, it explores the use of the DOM structure's native linked-list nature.

(Currently down; rebuild in progress.) More details on my blog regarding this.

The summary is as follows:

  • V8 Array is Fast, VERY FAST
  • Array push / pop / shift is ~20x+ faster than any object equivalent.
  • Surprisingly, Array.shift() is fast: only ~6x slower than an array pop, but ~100x faster than an object attribute deletion.
  • Amusingly, Array.push(data) is faster than Array[nextIndex] = data by almost 20x (dynamic array) to 10x (fixed array).
  • Array.unshift(data) is slower, as expected: ~5x slower than adding a new property.
  • Nulling an array value (array[index] = null) is ~4x faster than deleting it (delete array[index]).
  • Surprisingly, nulling a value in an object (obj[attr] = null) is ~2x slower than just deleting the attribute (delete obj[attr]).
  • Unsurprisingly, mid-array Array.splice(index, 0, data) is slow, very slow.
  • Surprisingly, Array.splice(index, 1, data) has been optimized (no length change) and is 100x faster than Array.splice(index, 0, data).
  • Unsurprisingly, the divLinkedList is inferior to an array in every respect, except dll.splice(index, 1) removal (where it broke the test system).
  • BIGGEST SURPRISE of it all [as jjrv pointed out]: V8 array writes are slightly faster than V8 reads =O
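The null-vs-delete comparison above also has a semantic side worth knowing; a minimal, engine-agnostic sketch:

```javascript
// Nulling keeps the slot occupied, while delete leaves a hole;
// holey arrays can push engines like V8 into slower element
// representations, which lines up with the timing gap above.
const nulled = [10, 20, 30];
nulled[1] = null;              // slot still present
console.log(1 in nulled);      // true
console.log(nulled.length);    // 3

const holed = [10, 20, 30];
delete holed[1];               // leaves a hole; length is unchanged
console.log(1 in holed);       // false
console.log(holed.length);     // 3
```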

Note: These metrics apply only to large arrays/objects which V8 does not "entirely optimise out". There can be very isolated optimised performance cases for arrays/objects smaller than an arbitrary size (24?). More details can be seen extensively across several Google I/O videos.

Note 2: These wonderful performance results are not shared across browsers, especially *cough* IE. Also, the test is huge, hence I have yet to fully analyze and evaluate the results; please edit it in =)

Updated note (Dec 2012): Google representatives have videos on YouTube describing the inner workings of Chrome itself (such as when it switches from a linked-list array to a fixed array, etc.), and how to optimize for them. See GDC 2012: From Console to Chrome for more.


At a basic level that stays within the realms of JavaScript, properties on objects are much more complex entities. You can create properties with setters/getters, with differing enumerability, writability, and configurability. An item in an array isn't able to be customized in this way: it either exists or it doesn't. At the underlying engine level this allows for a lot more optimization in terms of organizing the memory that represents the structure.
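As a sketch of that flexibility, using the standard Object.defineProperty API (nothing engine-specific):

```javascript
// An object property can carry a getter and non-default attributes;
// an array element cannot be configured this way.
const obj = {};
Object.defineProperty(obj, 'x', {
  get() { return 42; },   // value computed on every access
  enumerable: false,      // hidden from Object.keys / for..in
  configurable: false     // cannot be deleted or redefined
});

console.log(obj.x);             // 42
console.log(Object.keys(obj));  // []
```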

In terms of identifying an array from an object (dictionary), JS engines have always made explicit lines between the two. That's why there's a multitude of articles on methods of trying to make a semi-fake Array-like object that behaves like one but allows other functionality. The reason this separation even exists is because the JS engines themselves store the two differently.
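For illustration, here is a minimal array-like object (the names are made up for this example):

```javascript
// Indexed keys plus a length property make an object "array-like",
// but it is still stored as an ordinary object, not as array elements.
const arrayLike = { 0: 'a', 1: 'b', 2: 'c', length: 3 };

// Array methods can still be borrowed onto it:
const copy = Array.prototype.slice.call(arrayLike);

console.log(Array.isArray(arrayLike)); // false
console.log(Array.isArray(copy));      // true
console.log(copy.join(''));            // "abc"
```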

Properties can be stored on an array object but this simply demonstrates how JavaScript insists on making everything an object. The indexed values in an array are stored differently from any properties you decide to set on the array object that represents the underlying array data.
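A quick sketch of that separation:

```javascript
// Named properties on an array are kept apart from its indexed data.
const arr = [1, 2, 3];
arr.note = 'metadata';            // ordinary property, not an element

console.log(arr.length);          // 3 (unaffected by the property)
console.log(JSON.stringify(arr)); // "[1,2,3]" (property is ignored)
console.log(arr.note);            // "metadata"
```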

Whenever you're using a legit array object and using one of the standard methods of manipulating that array you're going to be hitting the underlying array data. In V8 specifically, these are essentially the same as a C++ array so those rules will apply. If for some reason you're working with an array that the engine isn't able to determine with confidence is an array, then you're on much shakier ground. With recent versions of V8 there's more room to work though. For example, it's possible to create a class that has Array.prototype as its prototype and still gain efficient access to the various native array manipulation methods. But this is a recent change.
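For example, with ES2015+ class syntax a subclass of Array (a hypothetical Tagged class here) keeps the native array behavior:

```javascript
// Subclassing Array yields a real array: the exotic length behavior
// and native methods are preserved on instances.
class Tagged extends Array {}

const t = new Tagged();
t.push(1, 2, 3);                // native method works on the subclass
t.tag = 'demo';                 // named property, separate from elements

console.log(Array.isArray(t));  // true
console.log(t.length);          // 3
// Methods like map even produce subclass instances (species):
console.log(t.map(x => x * 2) instanceof Tagged); // true
```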

Specific links to recent changes to array manipulation may come in handy here:

As a bit of extra, here are ArrayPop and ArrayPush directly from V8's source, both implemented in JS itself:

function ArrayPop() {
  if (IS_NULL_OR_UNDEFINED(this) && !IS_UNDETECTABLE(this)) {
    throw MakeTypeError("called_on_null_or_undefined",
                        ["Array.prototype.pop"]);
  }

  var n = TO_UINT32(this.length);
  if (n == 0) {
    this.length = n;
    return;
  }
  n--;
  var value = this[n];
  this.length = n;
  delete this[n];
  return value;
}

function ArrayPush() {
  if (IS_NULL_OR_UNDEFINED(this) && !IS_UNDETECTABLE(this)) {
    throw MakeTypeError("called_on_null_or_undefined",
                        ["Array.prototype.push"]);
  }

  var n = TO_UINT32(this.length);
  var m = %_ArgumentsLength();
  for (var i = 0; i < m; i++) {
    this[i+n] = %_Arguments(i);
  }
  this.length = n + m;
  return this.length;
}


I'd like to complement the existing answers with an investigation into how implementations behave when growing arrays: if they implement them the "usual" way, one would see many quick pushes with rare, interspersed slow pushes, at which point the implementation copies the internal representation of the array from one buffer to a larger one.

You can see this effect very nicely; this is from Chrome:

16: 4ms
40: 8ms 2.5
76: 20ms 1.9
130: 31ms 1.7105263157894737
211: 14ms 1.623076923076923
332: 55ms 1.5734597156398105
514: 44ms 1.5481927710843373
787: 61ms 1.5311284046692606
1196: 138ms 1.5196950444726811
1810: 139ms 1.5133779264214047
2731: 299ms 1.5088397790055248
4112: 341ms 1.5056755767118273
6184: 681ms 1.5038910505836576
9292: 1324ms 1.5025873221216042

Even though each push is profiled, the output contains only those that take longer than a certain threshold. For each test I adjusted the threshold to exclude all the pushes that appear to represent the fast pushes.

So the first number indicates which element was inserted (the first line is for the 17th element), the second is how long it took (the benchmark runs over many arrays in parallel), and the last value is the ratio of the first number to the one on the previous line.

All lines that have less than 2ms execution time are excluded for Chrome.

You can see that Chrome increases array size in powers of 1.5, plus some offset to account for small arrays.
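The observed sizes are consistent with a grow-by-half-plus-constant rule. The sketch below uses new = old + old/2 + 16, which reproduces the element indices in the Chrome trace above; the exact formula is an assumption about V8's internals and may vary by version:

```javascript
// Grow capacity by 50% plus a small constant offset (assumed +16).
function nextCapacity(old) {
  return old + (old >> 1) + 16;
}

const caps = [];
for (let cap = 16; caps.length < 8; cap = nextCapacity(cap)) {
  caps.push(cap);
}
console.log(caps); // [ 16, 40, 76, 130, 211, 332, 514, 787 ]
```

These are exactly the indices at which the slow pushes appear in the Chrome output.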

For Firefox, it's a power of two:

126: 284ms
254: 65ms 2.015873015873016
510: 28ms 2.0078740157480315
1022: 58ms 2.003921568627451
2046: 89ms 2.0019569471624266
4094: 191ms 2.0009775171065494
8190: 364ms 2.0004885197850513

I had to raise the threshold quite a bit for Firefox; that's why it starts at #126.

With IE, we get a mix:

256: 11ms 256
512: 26ms 2
1024: 77ms 2
1708: 113ms 1.66796875
2848: 154ms 1.6674473067915691
4748: 423ms 1.6671348314606742
7916: 944ms 1.6672283066554338

It starts with powers of two and then moves to a growth factor of five thirds.

So all common implementations use the "normal" way for arrays (instead of going crazy with ropes, for example).

Here's the benchmark code and here's the fiddle it's in.

var arrayCount = 10000;

var dynamicArrays = [];
for (var j = 0; j < arrayCount; j++)
    dynamicArrays[j] = [];

var lastLongI = 1;

for (var i = 0; i < 10000; i++)
{
    var before = Date.now();
    for (var j = 0; j < arrayCount; j++)
        dynamicArrays[j][i] = i;
    var span = Date.now() - before;
    if (span > 10)
    {
        console.log(i + ": " + span + "ms" + " " + (i / lastLongI));
        lastLongI = i;
    }
}