PIG: how to efficiently LOAD and FILTER a large dataset?

Tags: hadoop



I would say both would perform the same. A MapReduce job is initiated only when you have a STORE or DUMP. You should probably look into where exactly Pig stores its relations.
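To illustrate the point about lazy evaluation, here is a minimal sketch (the file path and field names are hypothetical, not from the question): Pig only builds a logical plan for LOAD and FILTER statements, and nothing actually runs on the cluster until it hits a DUMP or STORE.

```pig
-- Hypothetical input path and schema, for illustration only.
-- These two lines only extend the logical plan; no job runs yet:
raw = LOAD 'input/data.txt' USING PigStorage('\t') AS (id:int, value:chararray);
filtered = FILTER raw BY id > 100;

-- Only this statement triggers the actual MapReduce job:
DUMP filtered;
```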


If the filter only removes a small amount of data, applying it early will not enhance performance much. Even so, one of the recommended best practices is to filter early and often; see the Pig performance documentation under "Filter Early and Often" for details.
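As a sketch of what "filter early and often" means in practice (the datasets and fields below are hypothetical), the idea is to apply FILTER before expensive operations like JOIN or GROUP, so that less data is shuffled between the map and reduce phases:

```pig
-- Hypothetical schemas, for illustration only.
users  = LOAD 'users.csv'  USING PigStorage(',') AS (uid:int, country:chararray);
clicks = LOAD 'clicks.csv' USING PigStorage(',') AS (uid:int, url:chararray);

-- Filter BEFORE the join, so only the matching users are shuffled:
us_users = FILTER users BY country == 'US';
joined   = JOIN us_users BY uid, clicks BY uid;

STORE joined INTO 'output/us_clicks';
```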


I think the second one will be more efficient.

During the logical build-up of the program, Pig checks all the statements. When it sees a DUMP or STORE command, it starts the MapReduce program. Now, in the second scenario you have given two FILTER statements, which would mean two filtering stages. So it can be more efficient if the number of mappers and reducers is left at the default.
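The question's exact scripts aren't shown, so the two variants below are an assumed reconstruction of the kind of comparison being discussed: one combined filter versus two chained filters over the same relation.

```pig
-- Assumed reconstruction; relation names and predicates are hypothetical.
a = LOAD 'data' AS (x:int, y:int);

-- Variant 1: one combined filter
b = FILTER a BY (x > 0) AND (y > 0);

-- Variant 2: two chained filters
c = FILTER a BY x > 0;
d = FILTER c BY y > 0;
```

Note that Pig's logical optimizer typically merges consecutive FILTER statements, and a plain FILTER is applied map-side, so in practice the two variants often compile to the same physical plan.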

I am not very sure about my answer. Please let me know if you find something new.