
how to change the task status into failed in hadoop mapreduce on exception


Use the JobClient to get at the RunningJob class (I am on the 1.0.4 API).

So the code looks like this:

Have a JobClient and a RunningJob reference in your setup().

The method is as follows:

import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TaskAttemptID;

public void setup(Context context) throws IOException
{
    JobClient jobClient;
    RunningJob runningJob = null;
    try
    {
        // in 1.x the Configuration inside a task is actually a JobConf
        jobClient = new JobClient((JobConf) context.getConfiguration());
        runningJob = jobClient.getJob((JobID) context.getJobID()); // mapred.JobID!
    }
    catch (IOException e)
    {
        System.out.println("IO Exception");
    }
    try
    {
        // propertyName, session, FindPath and EncodingConstants are from my own code
        System.out.println(propertyName);
        session = FindPath.createSession("localhost", 3250, EncodingConstants.en_ISO_8859_1);
        session.open();
    }
    catch (Exception e)
    {
        System.out.println("error");
        runningJob.killTask((TaskAttemptID) context.getTaskAttemptID(), true); // cast as mapred.TaskAttemptID
    }
}
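If you'd rather not rely on the raw casts, Hadoop 1.x also ships downgrade helpers on the old-API ID classes for converting the new mapreduce IDs explicitly. A minimal sketch of the same logic (the variable names are mine):

JobID oldJobId = JobID.downgrade(context.getJobID());
TaskAttemptID oldAttemptId = TaskAttemptID.downgrade(context.getTaskAttemptID());

runningJob = jobClient.getJob(oldJobId);
// later, in the catch block:
runningJob.killTask(oldAttemptId, true); // true = fail the attempt, not just kill it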

The second argument to killTask (shouldFail = true) causes the TaskAttempt to fail rather than merely be killed, so it counts against the attempt limit.

Finally, you should probably set mapred.map.max.attempts to 1 so that a single failed TaskAttempt immediately fails the whole task (the default is 4 attempts).
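For example, in the driver (a minimal sketch; everything except the property name is assumed boilerplate):

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
conf.setInt("mapred.map.max.attempts", 1); // default is 4 attempts
// ...then build the Job with this conf as usual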

Note:

You should also consider altering mapred.max.map.failures.percent, which controls what percentage of map tasks may fail before the whole job is marked as failed; it reflects your job's tolerance for failed tasks.
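For instance (a sketch; the value 10 is an arbitrary example, not a recommendation):

// tolerate up to 10% failed map tasks before the whole job is failed
conf.setInt("mapred.max.map.failures.percent", 10);
// old-API equivalent setter on JobConf:
// jobConf.setMaxMapTaskFailuresPercent(10);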