GC overhead while running pig job, after hadoop job ends


This looks like it is coming from your ApplicationMaster, since you mention the error is returned after all mappers/reducers have finished. Try increasing the memory allocated to the ApplicationMaster.

In a YARN cluster, you can use the following two properties to control the amount of memory available to your ApplicationMaster:

  1. yarn.app.mapreduce.am.command-opts

  2. yarn.app.mapreduce.am.resource.mb

As a rule of thumb, you could set -Xmx (in the former) to about 75% of the resource.mb value, as in the sketch below.
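For example, a minimal sketch for mapred-site.xml (or a per-job configuration); the 2048 MB / -Xmx1536m figures are illustrative, not recommendations:

    <!-- Memory for the MapReduce ApplicationMaster container -->
    <property>
      <name>yarn.app.mapreduce.am.resource.mb</name>
      <value>2048</value>
    </property>
    <!-- JVM heap for the ApplicationMaster: roughly 75% of resource.mb -->
    <property>
      <name>yarn.app.mapreduce.am.command-opts</name>
      <value>-Xmx1536m</value>
    </property>

Since this is a Pig job, you can also set these Hadoop properties per script with Pig's set statement, e.g. `set yarn.app.mapreduce.am.resource.mb 2048;`, instead of changing the cluster-wide configuration.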

Details regarding the parameters can be found here.