Copy files from an HDFS folder to another HDFS location by filtering with modified date using a shell script


To copy the last 6 months of files from one HDFS location to another, we can use the script below.

The script should be run from your local Linux machine.

#!/bin/bash
# List files under the source directory and keep only those modified within the last 180 days.
hdfs dfs -ls /hive/warehouse/data.db/all_history/ | awk 'BEGIN{ SIXMON=60*60*24*180; "date +%s" | getline NOW } { cmd="date -d'\''"$6" "$7"'\'' +%s"; cmd | getline WHEN; DIFF=NOW-SIXMON; if(WHEN > DIFF){print $8}}' >> TempFile.txt

# Copy each matching file, preserving permissions and timestamps (-p).
cat TempFile.txt | while read line
do
   echo $line
   hdfs dfs -cp -p $line /user/can_anns/all_history_copy/
done

The hdfs dfs -ls | awk command writes the list of files modified within the last 180 days to TempFile.txt. We then iterate through this temp file and copy each listed file to the destination.
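To sanity-check which files fall inside the 180-day window before copying anything, the same awk filter (the paths and the 180-day constant are taken from the script above) can print the modification date next to each path instead of writing to the temp file:

# Print "<date> <time> <path>" for every file modified within the last 180 days; nothing is copied.
hdfs dfs -ls /hive/warehouse/data.db/all_history/ | awk 'BEGIN{ SIXMON=60*60*24*180; "date +%s" | getline NOW } { cmd="date -d'\''"$6" "$7"'\'' +%s"; cmd | getline WHEN; if(WHEN > NOW-SIXMON){print $6" "$7" "$8}}'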

If you write the script on Windows and then copy it to the Linux machine, it may fail with a syntax error because of Windows carriage returns. To remove them, run the command below on the script after copying it to the Linux machine, then run the script.

sed -i 's/\r//' FileName.sh
sh FileName.sh


I think you can do it with a shell script like the one below, run three times. It is just a modified version of your script; I tried it and it works for me.

In each run, modify the grep condition to the required month (2019-03, 2019-02, 2019-01).

Script:

hdfs dfs -ls /hive/warehouse/data.db/all_history/ | grep "2019-03" | awk '{print $8}' >> Files.txt

cat Files.txt | while read line
do
    echo $line
    hdfs dfs -cp $line /user/can_anns/all_history_copy/
done
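If you would rather not edit the grep pattern between runs, a minimal variation (same source and destination paths as above) loops over the three month prefixes in a single script:

#!/bin/bash
# Collect the file paths for all three months, then copy them in one pass.
for month in 2019-01 2019-02 2019-03
do
    hdfs dfs -ls /hive/warehouse/data.db/all_history/ | grep "$month" | awk '{print $8}' >> Files.txt
done

cat Files.txt | while read line
do
    echo $line
    hdfs dfs -cp $line /user/can_anns/all_history_copy/
done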

Hope that helps!


I assume the dataset has a date column. If so, you could create an external Hive table on that dataset and extract just the required data.
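A minimal sketch of that approach, run from the shell with hive -e. The table name, the column names, and the assumption that the files are comma-delimited text are hypothetical; only the HDFS paths are taken from the answers above:

# Hypothetical sketch: expose the directory as an external Hive table, then write only
# the required date range to the destination directory. Note that INSERT OVERWRITE
# DIRECTORY replaces whatever is already in the destination path.
hive -e "
CREATE EXTERNAL TABLE IF NOT EXISTS all_history_ext (id STRING, payload STRING, txn_date STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/hive/warehouse/data.db/all_history/';

INSERT OVERWRITE DIRECTORY '/user/can_anns/all_history_copy/'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT * FROM all_history_ext
WHERE txn_date >= '2019-01-01' AND txn_date <= '2019-03-31';
"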

If there is a huge number of records for a given date range, the shell script approach will be very slow.