Container killed by the ApplicationMaster Exit code is 143


Exit code 143 is usually related to memory/GC issues. The default mapper/reducer memory settings may not be sufficient to run a large data set, so try setting higher ApplicationMaster (AM), map, and reduce memory when invoking a large YARN job.
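For example, with PySpark on YARN the equivalent knobs can be raised when building the session. The property names below are standard Spark settings; the sizes are placeholders you would tune to your data volume and to your cluster's container limits (e.g. `yarn.scheduler.maximum-allocation-mb`). A minimal sketch:

```python
from pyspark.sql import SparkSession

# Placeholder sizes -- tune these to your workload and cluster limits.
spark = (
    SparkSession.builder
    .appName("large-yarn-job")
    # Driver memory (in yarn-cluster mode the driver runs inside the AM).
    .config("spark.driver.memory", "4g")
    # Per-executor heap; raise this if containers are killed on large inputs.
    .config("spark.executor.memory", "8g")
    # Off-heap overhead that YARN also counts against the container size.
    .config("spark.executor.memoryOverhead", "2g")
    .getOrCreate()
)
```

The same properties can be passed on the command line via `spark-submit --conf`, which avoids hard-coding cluster sizes in the application.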

Please check out this link: https://community.hortonworks.com/questions/96183/help-troubleshoot-container-killed-by-the-applicat.html

Please also look into: https://www.slideshare.net/SparkSummit/top-5-mistakes-when-writing-spark-applications-63071421

It is an excellent resource for optimizing your code.


I found out I had mixed up two separate things. The 143 exit code came from the metrics collector, which was down. The jobs are killed, as far as I understand, not due to memory issues. The real problem is with large window functions that cannot reduce the data until the last window, which contains all the data.
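That situation can arise when a window specification has no partition key, so Spark must collect every row into a single partition on one executor. A sketch of the problem and the usual fix; the column names (`user_id`, `event_time`, `amount`) are hypothetical, chosen only for illustration:

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# A window with no partitionBy forces the whole dataset into one
# partition -- this is what fails to scale on large inputs.
w_all = Window.orderBy("event_time") \
              .rowsBetween(Window.unboundedPreceding, Window.currentRow)

# Adding a partition key lets Spark spread the window computation
# across executors, one group per key.
w_keyed = (
    Window.partitionBy("user_id")   # hypothetical key column
          .orderBy("event_time")
          .rowsBetween(Window.unboundedPreceding, Window.currentRow)
)

# Usage: df.withColumn("running_total", F.sum("amount").over(w_keyed))
```

If no natural key exists, the work can sometimes be restructured as an aggregation plus a join instead of a single global window.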

However, the place in the logs that gives the reason why the job was killed still eludes me.