How to predict the maximum call depth of a recursive method?


This is clearly JVM- and possibly also architecture-specific.

I've measured the following:

    static int i = 0;

    public static void rec0() {
        i++;
        rec0();
    }

    public static void main(String[] args) {
        ...
        try {
            i = 0;
            rec0();
        } catch (StackOverflowError e) {
            System.out.println(i);
        }
        ...
    }

using

    Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
    Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)

running on x86.

With a 20MB Java stack (-Xss20m), the amortized cost fluctuated around 16-17 bytes per call. The lowest I've seen was 16.15 bytes/frame. I therefore conclude that the per-frame cost is 16 bytes and the rest is other (fixed) overhead.
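The amortized figure can be reproduced by dividing the configured stack size by the measured recursion depth. A minimal sketch of that measurement (the class name and the `STACK_BYTES` constant are mine; `STACK_BYTES` is an assumption that must match the `-Xss` value the JVM was actually started with):

```java
public class FrameCostEstimator {
    // Assumption: the JVM was started with -Xss20m.
    static final long STACK_BYTES = 20L * 1024 * 1024;

    static int depth;

    public static void rec() {
        depth++;
        rec();
    }

    // Recurses until the stack overflows and returns the depth reached.
    public static int measureDepth() {
        depth = 0;
        try {
            rec();
        } catch (StackOverflowError e) {
            // Stack exhausted: depth now holds the maximum call depth.
        }
        return depth;
    }

    public static void main(String[] args) {
        int d = measureDepth();
        System.out.printf("depth = %d, ~%.2f bytes/frame%n",
                          d, (double) STACK_BYTES / d);
    }
}
```

Note that the quotient is only an upper bound on the true frame size, since some of the stack is consumed by fixed overhead (main's own frame, VM guard pages, etc.).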

A function that takes a single int has basically the same cost, 16 bytes/frame.

Interestingly, a function that takes ten ints requires 32 bytes/frame. I am not sure why the cost is so low.
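One way to see the parameter cost directly is to race a no-argument method against one taking ten ints and compare the depths reached; the ratio of the two depths approximates the ratio of the frame sizes. A hypothetical sketch (class and helper names are mine):

```java
public class FrameSizeComparison {
    static int depth;

    public static void rec0() {
        depth++;
        rec0();
    }

    public static void rec10(int a, int b, int c, int d, int e,
                             int f, int g, int h, int i, int j) {
        depth++;
        rec10(a, b, c, d, e, f, g, h, i, j);
    }

    // Runs r until the stack overflows and reports the depth reached.
    public static int depthOf(Runnable r) {
        depth = 0;
        try {
            r.run();
        } catch (StackOverflowError e) {
            // Expected: the stack is exhausted.
        }
        return depth;
    }

    public static void main(String[] args) {
        int d0 = depthOf(FrameSizeComparison::rec0);
        int d10 = depthOf(() -> rec10(0, 0, 0, 0, 0, 0, 0, 0, 0, 0));
        System.out.println("no args:  depth " + d0);
        System.out.println("ten ints: depth " + d10);
        // If the ten-int frame is roughly twice the size, d10 should be
        // roughly half of d0 (after JIT compilation, as noted above).
    }
}
```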

The above results apply after the code's been JIT compiled. Prior to compilation the per-frame cost is much, much higher. I haven't yet figured out a way to estimate it reliably. However, this does mean that you have no hope of reliably predicting maximum recursion depth until you can reliably predict whether the recursive function has been JIT compiled.

All of this was tested with ulimit stack sizes of 128K and 8MB. The results were the same in both cases.


Only a partial answer: from JVM Spec 7, 2.5.2, stack frames can be allocated on the heap, and the stack size may be dynamic. I couldn't say for certain, but it seems it should be possible to have your stack size bounded only by your heap size:

Because the Java virtual machine stack is never manipulated directly except to push and pop frames, frames may be heap allocated.

and

This specification permits Java virtual machine stacks either to be of a fixed size or to dynamically expand and contract as required by the computation. If the Java virtual machine stacks are of a fixed size, the size of each Java virtual machine stack may be chosen independently when that stack is created.

A Java virtual machine implementation may provide the programmer or the user control over the initial size of Java virtual machine stacks, as well as, in the case of dynamically expanding or contracting Java virtual machine stacks, control over the maximum and minimum sizes.

So it'll be up to the JVM implementation.


Adding to NPE's answer:

The maximum stack depth seems to be flexible. The following test program prints vastly different numbers:

    public class StackDepthTest {
        static int i = 0;

        public static void main(String[] args) throws Throwable {
            for (int i = 0; i < 10; ++i) {
                testInstance();
            }
        }

        public static void testInstance() {
            StackDepthTest sdt = new StackDepthTest();
            try {
                i = 0;
                sdt.instanceCall();
            } catch (StackOverflowError e) {}
            System.out.println(i);
        }

        public void instanceCall() {
            ++i;
            instanceCall();
        }
    }

The output is:

    10825
    10825
    59538
    59538
    59538
    59538
    59538
    59538
    59538
    59538

I've used the default of this JRE:

    java version "1.7.0_09"
    OpenJDK Runtime Environment (IcedTea7 2.3.3) (7u9-2.3.3-0ubuntu1~12.04.1)
    OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)

So the conclusion is: if you push hard enough (i.e. more than twice), you get a second chance ;-)