
Hadoop: Split metadata size exceeded 10000000


Can you try setting the following property in conf/mapred-site.xml:

<!-- No limits if set to -1 -->
<property>
    <name>mapreduce.jobtracker.split.metainfo.maxsize</name>
    <value>-1</value>
</property>

Not sure if the following will help (give it a shot):

hadoop jar xxx.jar -D mapreduce.jobtracker.split.metainfo.maxsize=-1
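If editing mapred-site.xml isn't an option, you can also set the property programmatically in the job driver. Below is a minimal sketch, assuming a Hadoop 2.x-style API; the class and job names are hypothetical. Note that the `-D` generic option above only takes effect when the driver parses generic options, e.g. via ToolRunner as shown here.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver class; only the property name comes from the answer above.
public class SplitMetaInfoDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        // Same effect as the XML snippet above: -1 disables the limit.
        conf.set("mapreduce.jobtracker.split.metainfo.maxsize", "-1");

        Job job = Job.getInstance(conf, "split-metainfo-example");
        // ... configure mapper, reducer, input and output paths here ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses generic options, so a command-line
        // -D mapreduce.jobtracker.split.metainfo.maxsize=-1
        // is also picked up when the driver is invoked this way.
        System.exit(ToolRunner.run(new Configuration(), new SplitMetaInfoDriver(), args));
    }
}
```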

Reference: https://archive.cloudera.com/cdh/3/hadoop/mapred-default.html

| Name | Default Value | Description |
|------|---------------|-------------|
| mapred.jobtracker.job.history.block.size | 3145728 | The block size of the job history file. Since job recovery uses job history, it's important to dump job history to disk as soon as possible. Note that this is an expert-level parameter. The default value is set to 3 MB. |
| mapreduce.jobtracker.split.metainfo.maxsize | 10000000 | The maximum permissible size of the split metainfo file. The JobTracker won't attempt to read split metainfo files bigger than the configured value. No limits if set to -1. |