
Is it possible to set the Hadoop block size to 24 MB?


Yes, it is possible to set the HDFS block size to 24 MB. The default is 64 MB in Hadoop 1.x and 128 MB in Hadoop 2.x.

In my opinion, you should increase the block size rather than decrease it. A larger block size means fewer map tasks, so less scheduling overhead and less intermediate data to merge in the reduce phase, which speeds things up. If you reduce the block size instead, each map task finishes faster, but there are more of them, and chances are that more time will be spent in the shuffle and reduce phase, increasing the overall job time.

You can change the block size for a single file with the following command while transferring it from the local file system to HDFS:

hadoop fs -D dfs.blocksize=<blocksize> -put <source_filename> <destination>
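For the 24 MB case asked about here, `dfs.blocksize` takes a value in bytes, so you first need 24 MB expressed as bytes. A small sketch (the file name and HDFS path are hypothetical; substitute your own):

```shell
# dfs.blocksize is given in bytes: 24 MB = 24 * 1024 * 1024 = 25165824.
# The value must be a multiple of dfs.bytes-per-checksum (default 512),
# which 24 MB is.
BLOCKSIZE=$((24 * 1024 * 1024))
echo "$BLOCKSIZE"   # 25165824

# Hypothetical source file and HDFS destination -- adjust to your cluster:
# hadoop fs -D dfs.blocksize="$BLOCKSIZE" -put data.csv /user/hadoop/data.csv
```

This sets the block size only for the file being written; it does not change the cluster-wide default.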

A permanent change of the default block size can be made by adding the following property to hdfs-site.xml:

<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
  <description>Block size</description>
</property>
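For the 24 MB asked about in the question, the same property would carry 25165824 (24 × 1024 × 1024) bytes. Note that on Hadoop 2.x the preferred property name is `dfs.blocksize` (`dfs.block.size` is the deprecated older name):

```xml
<property>
  <name>dfs.blocksize</name>
  <value>25165824</value>
  <description>Default block size set to 24 MB</description>
</property>
```

Existing files keep the block size they were written with; the new default applies only to files written afterwards.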


Yes, it is possible to set the block size in the Hadoop environment. Simply go to /usr/local/hadoop/conf/hdfs-site.xml and change the block size value. Refer: http://commandstech.com/blocksize-in-hadoop/