
Why is array.prototype.slice() so slow on sub-classed arrays?


V8 developer here. What you're seeing is fairly typical:

The built-in .slice() function for regular arrays is heavily optimized, taking all sorts of shortcuts and specializations (it even goes as far as using memcpy for arrays containing only numbers, hence copying more than one element at a time using your CPU's vector registers!). That makes it the fastest option.

Calling Array.prototype.slice on a custom object (like a subclassed array, or a plain array-like object such as let obj = {length: 100_000_000, foo: "bar", ...}) doesn't fit the restrictions of the fast path, so it's handled by a generic implementation of the .slice builtin, which is much slower but can handle anything you throw at it. This is not JavaScript code, so it doesn't collect type feedback and can't get optimized dynamically. The upside is that it gives you the same performance every time, no matter what. This performance is not actually bad, it just pales in comparison to the optimizations you get with the alternatives.
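To illustrate, here's a minimal sketch of the kinds of callers that land on the generic path (the object and class names are made up for the example):

```javascript
// A plain array-like object: not an actual Array, so the builtin
// .slice() has to take its generic path.
const arrayLike = { length: 5, 0: "a", 1: "b", 2: "c" };

// Array.prototype.slice works on anything with a length and indexed
// properties, and returns a real Array.
const copy = Array.prototype.slice.call(arrayLike, 0, 3);
console.log(copy); // ["a", "b", "c"]

// A subclassed array is also off the fast path: .slice() has to
// consult @@species and construct a MyArray, not a plain Array.
class MyArray extends Array {}
const sub = MyArray.from([1, 2, 3, 4]);
const subSlice = sub.slice(1, 3);
console.log(subSlice instanceof MyArray); // true
```

Both calls are perfectly correct; they just can't use the specialized copy routines that plain arrays get.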

Your own implementation, like all JavaScript functions, gets the benefit of dynamic optimization, so while it naturally can't have any fancy shortcuts built into it right away, it can adapt to the situation at hand (like the type of object it's operating on). That explains why it's faster than the generic builtin, and also why it provides consistent performance in both of your test cases. That said, if your scenario were more complicated, you could probably pollute this function's type feedback to the point where it becomes slower than the generic builtin.

With the for (const [i, item] of source.entries()) approach you come close to the spec behavior of .slice() very concisely, at the cost of some iterator overhead; a plain old for (let i = 0; i < source.length; i++) {...} loop would be about twice as fast, even if you add an if (i in source) check on every iteration to mirror .slice()'s "HasElement" check.
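Such a loop could be sketched like this (hypothetical helper name; fixed [start, end) range for brevity):

```javascript
// A plain for-loop that mirrors .slice()'s per-element "HasElement"
// check, so holes in the source stay holes in the copy.
function sliceWithHoles(source, start, end) {
  const result = [];
  for (let i = start; i < end; i++) {
    if (i in source) {       // the spec's HasProperty check
      result.push(source[i]);
    } else {
      result.length++;       // preserve the hole instead of storing undefined
    }
  }
  return result;
}

const sparse = [1, , 3, 4];  // index 1 is a hole
const partial = sliceWithHoles(sparse, 0, 3);
console.log(0 in partial, 1 in partial); // true false
```

Dropping the i in source check makes the loop faster still, but then holes come back as undefined properties rather than staying absent.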


More generally: you'll probably see the same general pattern for many other JS builtins -- it's a natural consequence of running on an optimizing engine for a dynamic language. As much as we'd love to just make everything fast, there are two reasons why that won't happen:

(1) Implementing fast paths comes at a cost: it takes more engineering time to develop (and debug) them; it takes more time to update them when the JS spec changes; it creates an amount of code complexity that quickly becomes unmanageable, leading to further development slowdown and/or functionality bugs and/or security bugs; it takes more binary size to ship them to our users and more memory to load such binaries; it takes more CPU time to decide which path to take before any of the actual work can start; etc. Since none of those resources are infinite, we'll always have to choose where to spend them, and where not.

(2) Speed is fundamentally at odds with flexibility. Fast paths are fast because they get to make restrictive assumptions. Extending fast paths as much as possible so that they apply to as many cases as possible is part of what we do, but it'll always be easy for user code to construct a situation that makes it impossible to take the shortcuts that make a fast path fast.