Time complexity of unshift() vs. push() in Javascript
push() is faster.
```
js> function foo() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.unshift(1); return((new Date)-start)}
js> foo()
2190
js> function bar() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.push(1); return((new Date)-start)}
js> bar()
10
```
Update
The above does not take into account the order of the resulting arrays. If you want to compare them properly, you must reverse the pushed array. However, push then reverse is still faster, by ~10ms for me on Chrome, with this snippet:

```javascript
var a = [];
var start = new Date;
for (var i = 0; i < 100000; i++) { a.unshift(1); }
var end = (new Date) - start;
console.log(`Unshift time: ${end}`);

var a = [];
var start = new Date;
for (var i = 0; i < 100000; i++) { a.push(1); }
a.reverse();
var end = (new Date) - start;
console.log(`Push and reverse time: ${end}`);
```
The JavaScript language spec does not mandate the time complexity of these functions, as far as I know.
It is certainly possible to implement an array-like data structure (O(1) random access) with O(1) `push` and `unshift` operations. The C++ `std::deque` is an example. A JavaScript implementation that used C++ deques to represent JavaScript arrays internally would therefore have O(1) `push` and `unshift` operations.
But if you need to guarantee such time bounds, you will have to roll your own.
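As a minimal sketch of what "rolling your own" could look like (my own illustration, not the answer's original code; the class and method names are made up), you can store elements in a plain object keyed by a signed integer index, so prepending never has to move existing entries:

```javascript
// Sketch of an array-like structure with O(1) push and unshift.
// The names (Deque, get, etc.) are illustrative, not a standard API.
class Deque {
  constructor() {
    this._items = {}; // elements keyed by a signed integer index
    this._head = 0;   // index of the first element
    this._tail = 0;   // one past the index of the last element
  }
  get length() { return this._tail - this._head; }
  push(value)    { this._items[this._tail++] = value; } // O(1): append at the end
  unshift(value) { this._items[--this._head] = value; } // O(1): no shifting needed
  pop()   { return this.length ? this._items[--this._tail] : undefined; }
  shift() { return this.length ? this._items[this._head++] : undefined; }
  get(i)  { return this._items[this._head + i]; }       // O(1) random access
}

const d = new Deque();
d.push(2); d.push(3); d.unshift(1);
console.log(d.get(0), d.get(1), d.get(2), d.length); // 1 2 3 3
```

Real engines use contiguous backing stores for cache locality, so this object-keyed sketch trades constant-factor speed for the asymptotic guarantee; a production deque would use a ring buffer instead.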
For people curious about the v8 implementation, here is the source. Because `unshift` takes an arbitrary number of arguments, the array will shift itself to accommodate all the arguments.

`UnshiftImpl` ends up calling `AddArguments` with a `start_position` of `AT_START`, which kicks it to this `else` statement:
```cpp
// If the backing store has enough capacity and we add elements to the
// start we have to shift the existing objects.
Isolate* isolate = receiver->GetIsolate();
Subclass::MoveElements(isolate, receiver, backing_store, add_size, 0,
                       length, 0, 0);
```
and takes it to `MoveElements`:
```cpp
static void MoveElements(Isolate* isolate, Handle<JSArray> receiver,
                         Handle<FixedArrayBase> backing_store, int dst_index,
                         int src_index, int len, int hole_start,
                         int hole_end) {
  Heap* heap = isolate->heap();
  Handle<BackingStore> dst_elms = Handle<BackingStore>::cast(backing_store);
  if (len > JSArray::kMaxCopyElements && dst_index == 0 &&
      heap->CanMoveObjectStart(*dst_elms)) {
    // Update all the copies of this backing_store handle.
    *dst_elms.location() =
        BackingStore::cast(heap->LeftTrimFixedArray(*dst_elms, src_index))
            ->ptr();
    receiver->set_elements(*dst_elms);
    // Adjust the hole offset as the array has been shrunk.
    hole_end -= src_index;
    DCHECK_LE(hole_start, backing_store->length());
    DCHECK_LE(hole_end, backing_store->length());
  } else if (len != 0) {
    WriteBarrierMode mode = GetWriteBarrierMode(KindTraits::Kind);
    dst_elms->MoveElements(heap, dst_index, src_index, len, mode);
  }
  if (hole_start != hole_end) {
    dst_elms->FillWithHoles(hole_start, hole_end);
  }
}
```
I also want to call out that v8 has a concept of different element kinds, depending on what the array contains. This can also affect performance.

It's hard to say definitively what the performance is, because truthfully it depends on what types of elements are passed, how many holes are in the array, etc. If I dig through this more, maybe I can give a definitive answer, but in general: since `unshift` has to move the existing elements to make room at the front, you can assume it's O(n) (it will scale linearly with the number of elements). But someone please correct me if I'm wrong.
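To make that O(n) intuition concrete, here is a deliberately naive model (my own sketch, not V8's actual code) of what a single unshift must do when elements are stored contiguously; the loop is the per-element version of what `MoveElements` does in bulk:

```javascript
// Naive model of unshift on a contiguous array: every existing element
// must move one slot to the right before the new value can go in front.
// This illustrates the O(n) cost; it is NOT how V8 implements it.
function naiveUnshift(arr, value) {
  for (let i = arr.length; i > 0; i--) {
    arr[i] = arr[i - 1]; // shift each element right: n moves total
  }
  arr[0] = value;
  return arr.length; // like Array#unshift, return the new length
}

const xs = [1, 2, 3];
naiveUnshift(xs, 0);
console.log(xs); // [ 0, 1, 2, 3 ]
```

By contrast, a push only writes one slot at the end (plus an occasional amortized reallocation), which is why the benchmarks at the top differ by orders of magnitude.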