No operations allowed after connection closed errors in Slick/HikariCP


HikariCP determined that the connection was dead, hence the closure reason (connection is evicted or dead), and therefore attempted to close it. The driver then responded, in effect, "Sorry, the connection is already closed", which is not unexpected.

You might think, "Why do you need to close a dead connection?" Well, maybe it was only temporarily unavailable (or slow), so the validation test failed even though the connection is still "alive" from the driver's perspective. Closing it, or at least attempting to, is essential to give the driver an opportunity to clean up resources.

HikariCP closes connections in five cases:

  1. The connection failed validation. This is invisible to your application. The connection is retired and replaced. You would see a log message to the effect of Failed to validate connection....
  2. A connection was idle for longer than idleTimeout. This is invisible to your application. The connection is retired and replaced. You would see a closure reason of (connection has passed idleTimeout).
  3. A connection reached its maxLifetime. This is invisible to your application. The connection is retired and replaced. You would see a closure reason of (connection has passed maxLifetime), or if the connection is in use at the time of reaching maxLifetime you would see (connection is evicted or dead) at a later time.
  4. The user manually evicted a connection. This is invisible to your application. The connection is retired and replaced. You would see a closure reason of (connection evicted by user).
  5. A JDBC call threw an unrecoverable SQLException. This should be visible to your application. You would see a closure reason of (connection is broken).
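The timeouts behind cases 1–3 are ordinary HikariCP properties. The sketch below shows where they would be set when building the pool by hand in Scala; the URL, credentials, and values are illustrative assumptions, not recommendations (when you use Slick, these are normally derived from Slick's own configuration rather than set directly).

```scala
import com.zaxxer.hikari.{HikariConfig, HikariDataSource}

// Minimal sketch of the HikariCP settings related to the closure reasons above.
// All values here are assumptions for illustration, not recommendations.
val config = new HikariConfig()
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb") // hypothetical database
config.setUsername("user")
config.setPassword("secret")
config.setMaximumPoolSize(30)
config.setValidationTimeout(5000)  // case 1: max time (ms) a validation test may take
config.setIdleTimeout(600000)      // case 2: idle connections retired after 10 minutes
config.setMaxLifetime(1800000)     // case 3: every connection retired after 30 minutes

val dataSource = new HikariDataSource(config)
```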

There are quite a few variables here. I do not know which HikariCP default settings might be altered by Slick, in addition to user-specified settings. You do not show the surrounding logs, so I cannot tell whether there are other related issues. It is also strange that your configuration shows 222 connections, but the pool stats logged at the Timeout Failure are (total=30, active=0, idle=30, waiting=0), so it appears that RDS may be capping you (?).
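If Slick is building the pool for you, the sizing comes from Slick's configuration keys rather than HikariCP's. The following is a hedged sketch of what that typically looks like in Slick 3.x; the "mydb" path, URL, and numbers are assumptions. Comparing values like these against the pool settings HikariCP logs at startup is the quickest way to see which configuration actually took effect.

```scala
import com.typesafe.config.ConfigFactory
import slick.jdbc.MySQLProfile.api._

// Hypothetical Slick configuration; Slick translates these keys into the
// HikariCP settings shown above, so the effective pool size is whatever lands here.
// The MySQL driver jar must be on the classpath for the URL to resolve.
val config = ConfigFactory.parseString("""
  mydb {
    connectionPool = "HikariCP"
    url = "jdbc:mysql://localhost:3306/mydb"
    user = "user"
    password = "secret"
    numThreads = 30
    maxConnections = 30   // if this is 30, (total=30, ...) in the pool stats is expected
  }
""")

val db = Database.forConfig("mydb", config)
```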

I suggest opening an issue on GitHub, attaching the log messages containing the pool settings at startup, attaching the section of the log covering the minute preceding the exception, and grepping the log file for any other relevant warnings.