
What determines number of simultaneous connections


The biggest bottleneck I have experienced is the time it takes to process the request. The faster you can service a request, the more connections you can handle.

It's a difficult question to answer because every application is different. To figure this out for an application I support, I created a unit test that spawns many threads, and I watch the memory usage in VisualVM while the test runs.

You can see how your memory consumption changes with the number of threads in use. You should also be able to take a thread dump and see how much memory each thread is using. From that you can extrapolate an average to estimate how much RAM you might need for N users.
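
If it helps, here is a minimal sketch of that kind of test (not my actual test; the thread count and the ~20 KB per-thread payload are made-up values). Run it, attach VisualVM, and take a heap snapshot and thread dump while it is paused:

    import java.util.concurrent.CountDownLatch;

    public class ThreadMemoryProbe {
        public static void main(String[] args) throws Exception {
            int threads = 500;                           // simulated users (made-up number)
            CountDownLatch allStarted = new CountDownLatch(threads);
            CountDownLatch release = new CountDownLatch(1);

            for (int i = 0; i < threads; i++) {
                new Thread(() -> {
                    byte[] perRequestState = new byte[20 * 1024]; // ~20 KB working set per "request"
                    allStarted.countDown();
                    try {
                        release.await();                 // hold the thread (and its state) alive
                    } catch (InterruptedException ignored) {
                    }
                    perRequestState[0] = 1;              // keep the array reachable until here
                }).start();
            }

            allStarted.await();
            Runtime rt = Runtime.getRuntime();
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println(threads + " threads alive, ~" + usedMb + " MB heap in use");
            System.in.read();                            // pause here: attach VisualVM, take a thread dump
            release.countDown();
        }
    }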

The bottleneck will be a moving target since you'll optimize one area until you can scale larger, then another area will become your bottleneck.

If the response time of the servlet is a bottleneck, you could use some queuing mathematics to determine how many requests can be queued optimally, based on the average response time.

http://www4.ncsu.edu/~hp/SSME_QueueingTheory.pdf
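
To give a flavour of that math, here is a toy M/M/1 estimate (it assumes Poisson arrivals and a single server; the rates below are made-up numbers):

    public class QueueEstimate {
        public static void main(String[] args) {
            double serviceTime = 0.2;                 // avg seconds to serve one request (example)
            double mu = 1.0 / serviceTime;            // service rate: 5 requests/second
            double lambda = 4.0;                      // arrival rate: 4 requests/second (example)
            double rho = lambda / mu;                 // utilization: 0.8
            double avgInSystem = rho / (1 - rho);     // avg requests queued or in service: 4
            double avgResponse = 1.0 / (mu - lambda); // avg time in system incl. queueing: 1 s
            System.out.printf("utilization=%.2f, in system=%.1f, response=%.1fs%n",
                    rho, avgInSystem, avgResponse);
        }
    }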

Hope this helps.

Updated to address your additional questions:

Can my Tomcat server handle 6000 simultaneous HTTP connections? Why not (file handles? CPU time per request?)?

It's possible but probably not. Also you should probably add a web layer in front of the application server if you plan on doing high volume.
Suppose you have 6000 users all pounding away on your application. Each request a user sends only exists on the server for a moment [hopefully], so your peak thread count may never go above 20.
I'd recommend setting up some monitoring to understand how your application performs under real use cases. Check out http://Hawt.io which uses Jolokia to grab JMX metrics via http.
If you're serious about analytics, I'd recommend using something like Graphite to aggregate your JMX metrics. https://github.com/graphite-project/graphite-web
I've written a collector for Jolokia to send metrics to Carbon/Graphite, and may be able to open-source it with approval from my management. Let me know if you are interested.
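
If you just want a quick look before wiring up Jolokia/Graphite, the same thread pool metrics can be read in-process over JMX. This is only a sketch: it must run inside the Tomcat JVM (e.g. called from a servlet), and the connector name "http-nio-8080" is an assumption that varies by setup:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class ThreadPoolStats {
        // Must run inside the Tomcat JVM; the connector name is an assumption.
        public static String snapshot() throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            ObjectName pool = new ObjectName("Catalina:type=ThreadPool,name=\"http-nio-8080\"");
            return "busy=" + mbs.getAttribute(pool, "currentThreadsBusy")
                 + " current=" + mbs.getAttribute(pool, "currentThreadCount")
                 + " max=" + mbs.getAttribute(pool, "maxThreads");
        }
    }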

Can I have thread pool size as 5000 (do idle threads cost CPU/RAM)?

Idle threads are not much to worry about, though setting your thread pool too high could allow your application server to receive too many requests. If this happens, you may end up flooding your DB with more connections than it can handle, or your memory allocation may not be enough to handle so many requests. Either can degrade overall application performance.
Set it too low, and your app server could start queuing requests, again causing performance degradation.
It's normal to have some queuing during spikes or high-volume times, but you don't want to overload your application server. Check out queuing theory to understand more about this.
Also, this is where having a web server in front of the app server could help you. If you have Apache serve your static content, only dynamic requests will reach the application servers in most cases.
Tuning is very specific to your individual application. I'd recommend staying with the defaults and just optimizing your code until you can gather enough data to know which knob should be turned.
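
For concreteness, the knobs in question live on the Connector in server.xml; the values shown below are just Tomcat's usual defaults, as a starting point rather than a recommendation:

    <!-- Sketch of a Tomcat server.xml HTTP connector. maxThreads is the
         request-processing pool size (default 200); acceptCount is the
         backlog of connections queued when all threads are busy (default 100). -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               maxThreads="200"
               acceptCount="100" />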

Can I have oracle connection pool size as 500 connections (do idle connections cost CPU/RAM)?

Same situation as the application thread pool size, though your DB pool size should be much smaller than the app thread count.
500 would be too high for most web applications unless you have very high volume, in which case you may need a DB cluster environment like Oracle RAC.
If the pool is set too high and you start using a lot of connections, your DB hardware will not be able to keep up and you will end up with performance problems on the database server.
The time it takes for a query to return may increase, in turn causing your application response time to increase: the "log jam" effect.
Use profiling or metrics to determine the avg number of active DB connections under normal use, and use that as a baseline for determining the max allowed.
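
As a sketch of where that baseline ends up, a Tomcat context.xml resource might look like this (the attribute names are the older Tomcat/DBCP ones; the JNDI name, URL, credentials, and pool sizes are placeholders to replace with your measured values):

    <!-- Hypothetical Oracle pool in context.xml; size it from your measured
         average of active connections, not from the app thread count. -->
    <Resource name="jdbc/AppDB" auth="Container" type="javax.sql.DataSource"
              driverClassName="oracle.jdbc.OracleDriver"
              url="jdbc:oracle:thin:@//dbhost:1521/APPSVC"
              username="app_user" password="changeit"
              maxActive="50"
              maxIdle="10"
              maxWait="10000" />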

Does the amount of garbage that is generated for each connection have an impact? For example, if for each HTTP connection 20KB of objects are created and left behind by Tomcat, then by the time 2500 requests are processed ~50MB of heap would be used, and this may trigger a GC pause of 300ms.

The numbers would be different, but yes. Also remember that full GCs are the bigger concern; the incremental GCs will not pause your application. Check out "concurrent mark and sweep" and "Garbage First".
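
For reference, those two collectors are selected with the standard HotSpot flags:

    java -XX:+UseConcMarkSweepGC ...   # concurrent mark and sweep (CMS)
    java -XX:+UseG1GC ...              # Garbage-First (G1)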

Can we say something like this: if Tomcat uses 0.2 sec of CPU time for processing a single HTTP request, then it would be able to handle roughly 500 http connections in a second. So, 6000 connections would need 5 seconds.

It's not quite that easy: as each request comes in, others are also being processed and completed. Check out queuing theory to understand this better. http://www4.ncsu.edu/~hp/SSME_QueueingTheory.pdf
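
A back-of-the-envelope check, using the question's own 0.2 s figure plus an assumed 8-core machine, shows why the numbers don't quite work:

    public class BackOfEnvelope {
        public static void main(String[] args) {
            double cpuPerRequest = 0.2;  // seconds of CPU per request (the question's figure)
            int cores = 8;               // assumption: an 8-core server
            double maxThroughput = cores / cpuPerRequest;    // 40 requests/second, not 500
            double secondsFor6000 = 6000 / maxThroughput;    // 150 s, not 5 s
            // Little's Law: requests in flight = throughput * time per request
            double inFlight = maxThroughput * cpuPerRequest; // = cores when purely CPU bound
            System.out.printf("throughput=%.0f/s, 6000 reqs take %.0fs, in flight=%.0f%n",
                    maxThroughput, secondsFor6000, inFlight);
        }
    }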


Interesting question. If we leave aside all the other performance-deciding attributes, it finally boils down to how much work you are doing in the servlet, or how much time it takes where it is heaviest on I/O, CPU, and memory. Now let's go down your list with the above statement in mind:

Number of HTTP connections the server can allow per port

There are limits on file descriptors, but again what triggers them is how much time a servlet takes to complete a request, i.e. the time from receiving the first byte of the request to finishing sending the entire response. If that takes only 1 ms and you are using Netty with persistent connections, you can reach something really high, >> 6000.

Number of servlets in pool

Theoretically >> 6000. But how many threads are processing your requests? Is there a thread pool that is burning through your requests? So you want to increase threads, but by how much? Say 2000 concurrent threads: is your CPU behaving poorly from context switching? Is the work I/O bound? If yes, it makes sense to context switch, but then you will hit those network limits, because a lot of threads will be waiting on network I/O. Ultimately it comes back to how much time you spend on one piece of work. A common sizing rule of thumb is sketched below.
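
The often-quoted rule of thumb from "Java Concurrency in Practice" sizes the pool from the wait/compute ratio; the 50 ms / 5 ms split below is a made-up example you would replace with measured values:

    public class PoolSize {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            double targetUtilization = 0.8; // leave some CPU headroom
            double waitTime = 50;           // ms blocked on I/O per request (example)
            double computeTime = 5;         // ms of CPU per request (example)
            // threads = cores * utilization * (1 + wait/compute)
            int threads = (int) (cores * targetUtilization * (1 + waitTime / computeTime));
            System.out.println("suggested pool size ~ " + threads);
        }
    }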

DB

If it is Oracle, bless you with connection management; you definitely need rigorous monitoring here. This is just another limiting factor and can be treated as just another piece of blocking I/O. As with any I/O, latency and throughput matter, and it becomes the bottleneck the moment it is bigger than the rest of the work.

So, finally, you need to break down the following (and more) attributes for all your servlets:

  1. Is it CPU bound? If yes, how many cycles does it take, or can that be safely converted to some time unit, e.g. 1ms for just the compute piece of the work.
  2. Is it I/O bound? If yes, similarly find the unit.
  3. and others
  4. A long list of what you have, e.g. CPU, memory, GB/s

Now you know how much work needs to be done, and all you do is divide that by what you have and keep tuning, so that you find the optimum, and also find out which other attributes you have not considered (and consider them). One way to measure the CPU vs. I/O split per request is sketched below.
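
A hedged way to get that split is a servlet filter that compares wall-clock time with thread CPU time per request; this is only a sketch, with the filter mapping and real logging omitted:

    import java.io.IOException;
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;
    import javax.servlet.*;

    public class TimingFilter implements Filter {
        private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            long wallStart = System.nanoTime();
            long cpuStart = threads.getCurrentThreadCpuTime();
            try {
                chain.doFilter(req, res);
            } finally {
                long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
                long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
                // wall >> cpu means the request spent its time waiting (I/O bound)
                System.out.println("wall=" + wallMs + "ms cpu=" + cpuMs + "ms");
            }
        }

        @Override public void init(FilterConfig cfg) {}
        @Override public void destroy() {}
    }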


There is another common bottleneck: the size of the database connection pool. But I have an additional remark: when you exhaust the number of allowed HTTP connections, or the number of threads allowed to serve requests, you will only reject some requests. But when you exhaust memory (too many sessions with too much data, for example), you can crash the whole application.

The difference is that, after heavy load for a short time, once the load falls back down:

  • in the first case, the application is up and can serve requests normally
  • in the second case, the application is down and must be restarted

EDIT:

I forgot to mention real use cases. The biggest problem I have ever found when serving numerous concurrent connections is the quality of the database requests (assuming you use a database). There is no direct impact, since there is no maximum number, but you can easily hog all the database server's resources. Common examples of poor database requests:

  • no index on a table with a large number of rows
  • a request (on a big table) that makes no use of any index
  • the n+1 syndrome: with an ORM, when you map a one-to-many relation as a lazy collection but you always need the data from the collection (see the sketch below)
  • the load-full-database syndrome: with an ORM, when you map all relations as eager, any single request ends up loading a large quantity of dependent data

What is worse with those problems is that they cause no harm in tests, when the database is young and there are not that many rows; but with time and an increasing number of rows, performance falls, leaving the application unusable with only a few users.
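
To make the n+1 syndrome concrete, here is a JPA sketch; the entities are made up and the EntityManager setup is assumed:

    import java.util.List;
    import javax.persistence.*;

    @Entity @Table(name = "orders")          // "order" is a reserved word in SQL
    class Order {
        @Id Long id;
        @OneToMany(mappedBy = "order")       // @OneToMany is lazy by default
        List<OrderLine> lines;
        List<OrderLine> getLines() { return lines; }
    }

    @Entity
    class OrderLine {
        @Id Long id;
        @ManyToOne Order order;
    }

    public class NPlusOneDemo {

        // 1 query for the orders, then 1 extra lazy-load query PER order:
        static long countLinesNaively(EntityManager em) {
            List<Order> orders =
                em.createQuery("select o from Order o", Order.class).getResultList();
            long total = 0;
            for (Order o : orders) {
                total += o.getLines().size(); // triggers a query each time
            }
            return total;
        }

        // One query loads the orders and their lines together:
        static long countLinesWithFetch(EntityManager em) {
            List<Order> orders = em.createQuery(
                    "select distinct o from Order o join fetch o.lines", Order.class)
                .getResultList();
            long total = 0;
            for (Order o : orders) {
                total += o.getLines().size(); // no extra queries
            }
            return total;
        }
    }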