Does Google Closure Compiler ever decrease performance?

javascript


The answer is maybe.

Let's look at what the Closure team says about it.

From the FAQ:

Does the compiler make any trade-off between my application's execution speed and download code size?

Yes. Any optimizing compiler makes trade-offs. Some size optimizations do introduce small speed overheads. However, the Closure Compiler's developers have been careful not to introduce significant additional runtime. Some of the compiler's optimizations even decrease runtime (see next question).

Does the compiler optimize for speed?

In most cases smaller code is faster code, since download time is usually the most important speed factor in web applications. Optimizations that reduce redundancies speed up the run time of code as well.

I flatly challenge the first assumption they've made here. The length of the variable names you use does not directly affect how the various JavaScript engines treat the code; in fact, JS engines don't care whether you call your variable supercalifragilisticexpialidocious or x (though I, as a programmer, certainly do). Download time is the most important factor if you're worried about delivery, but a slow-running script can be caused by millions of things that I suspect the tool simply cannot account for.
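To illustrate (the function and parameter names below are mine, purely for demonstration): both of these functions should behave identically at runtime in any engine, because identifiers are resolved during parsing; the long names only cost bytes on the wire.

function add(a, b) {
    return a + b;
}

function addWithExtremelyLongDescriptiveIdentifiers(firstOperandValue, secondOperandValue) {
    return firstOperandValue + secondOperandValue;
}

// Any timing difference between these two calls is noise, not name length:
add(1, 2);
addWithExtremelyLongDescriptiveIdentifiers(1, 2);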

To truly understand why the answer to your question is "maybe", the first thing you need to ask is: "What makes JavaScript fast or slow?"

Then, of course, we run into the next question: "Which JavaScript engine are we talking about?"

We have:

  • Carakan (Opera)
  • Chakra (IE9+)
  • SpiderMonkey (Mozilla/Firefox)
  • SquirrelFish (Apple's WebKit)
  • V8 (Chrome)
  • Futhark (Opera)
  • JScript (All versions of IE before 9)
  • JavaScriptCore (Konqueror, Safari)
  • I've skipped out on a few.

Does anyone here really think they all work the same? Especially JScript and V8? Heck no!

So again, when Google Closure compiles code, which engine is it building for? Are you feeling lucky?

Okay, since we'll never cover all of these bases, let's look at this more generally: "old" vs. "new" engines.

Here's a quick summary of this particular point from one of the best presentations on JS engines I've ever seen.

Older JS engines

  • Code is interpreted, or compiled directly to bytecode
  • No optimization: you get what you get
  • Code is hard to run fast because the language is loosely typed

Newer JS engines

  • Introduce Just-In-Time (JIT) compilers for fast execution
  • Introduce type-optimizing JIT compilers for really fast code (think near-C speeds)

The key difference here is that newer engines introduce JIT compilers.

In essence, the JIT will optimize your code so that it runs faster, but if something happens that it doesn't like, it turns around and makes it slow again (deoptimizes it).

You can see this type-specific behavior with two functions like these:

var FunctionForIntegersOnly = function(int1, int2) {
    return int1 + int2;
};

var FunctionForStringsOnly = function(str1, str2) {
    return str1 + str2;
};

alert(FunctionForIntegersOnly(1, 2) + FunctionForStringsOnly("a", "b"));

Running that through the Closure Compiler actually simplifies the whole thing down to:

alert("3ab");

And by every metric in the book, that's way faster. What really happened here is that it simplified my very simple example, because it performs a bit of partial execution. This is where you need to be careful, however.

Let's say we have a Y combinator in our code; the compiler turns it into something like this:

(function(a) {
  return function(b) {
    return a(a)(b)
  }
})(function(a) {
  return function(b) {
    if (b > 0) {
      return console.log(b), a(a)(b - 1)
    }
  }
})(5);

Not really faster; it just minified the code.

The JIT would normally observe that, in practice, your code only ever passes two string inputs to the second function and returns a string (or integers for the first function), and would therefore route each into the type-specific JIT, which makes them really quick. Now, if the Closure Compiler does something strange, like transforming those two nearly-identically-signatured functions into one (for code that is non-trivial), you may lose JIT speed because the compiler produced something the JIT doesn't like.
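To sketch what such a merge could look like (FunctionForAnything is a hypothetical name, not actual compiler output):

// Hypothetical merged version of the two functions above. Because it is now
// called with both numbers and strings, a type-optimizing JIT can no longer
// specialize it for a single type and may fall back to slower generic code.
var FunctionForAnything = function(a, b) {
    return a + b;
};

FunctionForAnything(1, 2);     // JIT specializes for numbers first...
FunctionForAnything("a", "b"); // ...then sees strings and may deoptimize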

So, what did we learn?

  • You might have JIT-optimized code, but the compiler re-organizes your code into something else
  • Old browsers don't have JIT but still run your code
  • Closure-compiled JS makes fewer function calls by partially executing your code's simple functions

So what do you do?

  • Write small and to-the-point functions; the compiler will be able to deal with them better (see the sketch after this list)
  • If you have a very deep understanding of JIT behavior and have already hand-optimized your code with that knowledge, the Closure Compiler may not be worthwhile
  • If you want the code to run a bit faster on older browsers, it's an excellent tool
  • Trade-offs are generally worthwhile, but be careful to check things over rather than blindly trusting it all the time
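As a minimal sketch of the kind of small, to-the-point function the compiler handles well (milesToKm is my own example), in the spirit of the alert("3ab") example above:

// A small, single-purpose function called with a literal argument is easy
// for the compiler to inline and partially execute:
function milesToKm(miles) {
    return miles * 1.609344;
}

alert(milesToKm(26.2));
// ...which the compiler can plausibly reduce to a single precomputed alert.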

In general, your code will be faster. You may introduce things that various JIT compilers don't like, but they will be rare if your code uses small functions and sound prototypal object-oriented design. If you think about the full scope of what the compiler is doing (shorter download AND faster execution), then strange things like var i = true, m = null, r = false; may be a worthwhile trade-off for the compiler to make: even if those aliases run slightly slower, the code's total lifespan (download plus execution) is faster.
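As a rough sketch of that aliasing trade-off (the alias names come from the snippet above; the surrounding code is invented for illustration, not verbatim compiler output):

var done = true, cache = {}, dirty = true;

// Before compilation: literals repeated throughout the source.
if (done == true) {
    cache = null;
    dirty = false;
}

// After compilation (roughly): shared aliases save a few bytes per use.
var i = true, m = null, r = false;
if (done == i) {
    cache = m;
    dirty = r;
}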

It's also worth noting that the most common bottleneck in web-app execution is the Document Object Model (DOM), so I suggest you put your effort there if your code is slow.


It would appear that in modern browsers, using the literal true or null versus a variable makes absolutely no difference in almost all cases (as in zero difference; they perform exactly the same). In a very few cases, the variable is actually faster.

So those extra bytes saved are worth it and cost nothing.

true vs. variable: http://jsperf.com/true-vs-variable

null vs. variable: http://jsperf.com/null-vs-variable
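If those jsperf pages ever disappear, here is a crude sketch of how you could reproduce the comparison yourself. Note that modern engines may optimize both loops heavily (or away entirely), so treat the numbers with suspicion:

// Crude micro-benchmark sketch: literal vs. aliased boolean.
var t = true;       // alias, as the compiler would emit
var iterations = 1e7;
var x = 0;

var start = Date.now();
for (var n = 0; n < iterations; n++) { if (true) { x++; } }
console.log("literal:", Date.now() - start, "ms");

start = Date.now();
for (var n = 0; n < iterations; n++) { if (t) { x++; } }
console.log("alias:  ", Date.now() - start, "ms");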


I think there will be a very slight performance penalty, but it's unlikely to matter much in newer, modern browsers.

Notice that the Closure Compiler's standard alias variables are all globals. This means that in an old browser whose JavaScript engine takes linear time to walk function scopes (e.g. IE < 9), the deeper you are within nested function calls, the longer it takes to find the variable that holds true or false, etc. Almost all modern JavaScript engines optimize global variable access, so this penalty no longer holds in most cases.
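A sketch of why that lookup was costly in old engines (the nesting here is contrived for illustration):

var t = true; // global alias, as emitted by the compiler

function outer() {
    function middle() {
        function inner() {
            // In an engine with linear scope-chain lookup (e.g. IE < 9),
            // resolving t walks inner -> middle -> outer -> global scope,
            // whereas the literal true requires no lookup at all.
            return t;
        }
        return inner();
    }
    return middle();
}

outer();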

In addition, there really shouldn't be many places where you see true, false, or null directly in compiled code, except in assignments or arguments. For example, if (someFlag == true) ... is usually just written if (someFlag) ..., which the compiler compiles into a && .... You mostly only see the literals in assignments (someFlag = true;) and arguments (someFunc(true);), which do not happen very frequently.
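Spelled out, that transformation looks roughly like this (the output shape is approximated, not verbatim compiler output):

var someFlag = true;
function doWork() { /* ... */ }

// What you write:
if (someFlag == true) {
    doWork();
}

// What the compiler emits (roughly), with someFlag renamed to a:
var a = someFlag;
a && doWork();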

The conclusion: although many people (me included) doubt the usefulness of the Closure Compiler's standard aliases, you shouldn't expect any material performance hit. You also shouldn't expect any material benefit in gzipped size, though.