mysqld service stops once a day on ec2 server


Use 50% of available RAM to test:

You can set innodb_buffer_pool_size very low to see if that helps:

# /etc/my.cnf
innodb_buffer_pool_size = 1M

A rule of thumb is to set innodb_buffer_pool_size to 50% of available RAM for your low-memory testing: start the server and everything except MySQL/InnoDB, see how much RAM is left, then give 50% of that to InnoDB.
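As a worked sketch (the numbers here are assumptions for a small instance): check free memory with everything except MySQL running, then give InnoDB about half of it:

free -m
# if roughly 600 MB are free, set in /etc/my.cnf:
innodb_buffer_pool_size = 300M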

To try many low-memory settings at once, you can start from one of the small-memory example configuration files that ship with MySQL (such as my-small.cnf).

A more likely culprit is whatever else is on that server, such as a webserver.
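To see what is actually holding the RAM, a quick look at the top consumers (standard Linux ps) can help:

ps aux --sort=-%mem | head -n 10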

Apache?

Are you using Apache and/or another webserver? If so, try to decrease its RAM usage. For example, in your Apache conf, consider low-RAM settings like these:

StartServers 1
MinSpareServers 1
MaxSpareServers 5
MaxClients 5

And cap the requests like this:

MaxRequestsPerChild 300

Then restart Apache.
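For instance, on a Red Hat-style system (the service name is an assumption; adjust for your distro):

apachectl configtest
sudo service httpd restart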

mod_wsgi:

If you're using Apache with mod_python, switch to Apache with mod_wsgi.
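A minimal daemon-mode sketch for mod_wsgi; the app name and paths here are placeholders, and maximum-requests mirrors the Apache request cap above:

# Apache conf, assuming mod_wsgi is installed; names and paths are placeholders
LoadModule wsgi_module modules/mod_wsgi.so
WSGIDaemonProcess myapp processes=1 threads=5 maximum-requests=300
WSGIProcessGroup myapp
WSGIScriptAlias / /var/www/myapp/wsgi.py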

Pympler:

If it's still happening, possibly your Django process is steadily growing. Try Django memory profiling with Pympler.
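A minimal sketch using Pympler's muppy/summary API; where you hook it into Django (a debug view, middleware, etc.) is up to you:

# requires: pip install pympler
from pympler import muppy, summary

all_objects = muppy.get_objects()             # snapshot of live Python objects
mem_summary = summary.summarize(all_objects)  # aggregate object sizes by type
summary.print_(mem_summary)                   # print the biggest memory consumers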

SAR:

Your report of once-per-day failures, then once-per-week failures, could point to some kind of cron job running daily or weekly. For example, perhaps there's a batch process that takes up a lot of RAM, or a database dump, etc.

To track RAM use and look for RAM spikes in the hour before MySQL dies, take a look at SAR, which is a great tool: http://www.thegeekstuff.com/2011/03/sar-examples/
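For example, assuming the sysstat package is installed and collecting; the file name and time window below are placeholders:

# live memory usage, one sample per 60 seconds, 10 samples
sar -r 60 10

# memory usage from a saved daily file, limited to the hour before a crash
sar -r -f /var/log/sa/sa03 -s 06:00:00 -e 07:30:00

# and check for daily/weekly jobs that line up with the failures
ls /etc/cron.daily /etc/cron.weekly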


You have to decrease your innodb_buffer_pool_size (it should be at most 60-80% of your main memory).
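For example (the value is an assumption for an instance with about 1 GB of RAM):

# /etc/my.cnf
innodb_buffer_pool_size = 600M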

Solution for an InnoDB error:

110603  7:34:15 [ERROR] Plugin 'InnoDB' init function returned error.
110603  7:34:15 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
110603  7:34:15 [ERROR] Unknown/unsupported storage engine: InnoDB
110603  7:34:15 [ERROR] Aborting
110603  7:34:15 [Note] /usr/sbin/mysqld: Shutdown complete

I moved ib_logfile0 and ib_logfile1 aside and started MySQL again. This time it is working fine:

[root@xxx mysql]# mv ib_logfile0 ib_logfile0-bak
[root@xxx mysql]# mv ib_logfile1 ib_logfile1-bak

Source: http://www.onaxer.com/tag/error-plugin-innodb-init-function-returned-error/


As others have mentioned, the problem appears to be your system running low on RAM, with MySQL blowing up as a result. Below is how to narrow down where your system's memory is being used, and how to recover when the database goes down.

Take a look at collectd and its plugins. Two applicable ones here are the processes plugin and the memory plugin; with those you can see your system's memory usage and which processes are taking up most of it.
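A minimal collectd.conf sketch; the process names below are assumptions, so match them to your own setup:

# /etc/collectd.conf (or a file under /etc/collectd.d/)
LoadPlugin memory
LoadPlugin processes

<Plugin processes>
    Process "mysqld"
    ProcessMatch "django" "python.*manage.py"
</Plugin>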

Depending on how you are running Django, you can configure the worker processes to only process a certain number of requests and then terminate. That way if there is some sort of memory leak in your application it will not persist past that number of requests. For example, if you use Gunicorn, you can use the --max-requests option. Setting it to 500 will drop the worker after it has processed 500 requests.
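For example (the module path and worker count are placeholders):

gunicorn myproject.wsgi:application --workers 3 --max-requests 500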

The above combined with stats collection will show you some interesting memory usage trends.

As for the database going down, you can set up process supervision so that if MySQL does die, it is relaunched automatically. MySQL in recent versions of Ubuntu uses Upstart to do just that: if the process dies, Upstart brings it back up immediately. If you're on another distro that doesn't have this built in, take a look at Supervisor. While this doesn't fix the problem, it at least mitigates its effects; see it not as the fix but as a way to keep your application running in case something does go wrong.
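A minimal Supervisor sketch; the mysqld path is an assumption, so check where your distro installs it:

; /etc/supervisord.conf (or a file under /etc/supervisord.d/)
[program:mysqld]
command=/usr/sbin/mysqld
autostart=true
autorestart=true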