Unable to close file because the last block does not have enough number of replicas
hadoop

Unable to close file because the last block does not have enough number of replicas


We had a similar issue. It was primarily attributed to dfs.namenode.handler.count not being high enough. Increasing it may help on small clusters, but the underlying problem is a DoS-like situation where the NameNode cannot keep up with the number of connections/RPC calls, and the pending deletion block count grows enormous. Check the HDFS audit logs for mass deletions or other heavy HDFS operations and correlate them with the jobs that might be overwhelming the NameNode. Stopping those tasks will help HDFS recover.
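
For reference, a minimal sketch of the settings usually looked at for this error, in hdfs-site.xml; the values below are illustrative only, not recommendations, so tune them for your cluster. dfs.namenode.handler.count sizes the NameNode RPC handler thread pool, and dfs.client.block.write.locateFollowingBlock.retries controls how many times the client retries while waiting for the last block's replicas before close() fails with this message:

<property>
  <name>dfs.namenode.handler.count</name>
  <!-- NameNode RPC server threads; raise if client calls are being queued or dropped -->
  <value>100</value>
</property>
<property>
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <!-- Client-side retries while waiting for the last block to reach minimal replication -->
  <value>10</value>
</property>

To get a quick sense of whether mass deletions are hitting the NameNode, you can count delete operations in the audit log (the log path below is an assumption, adjust for your install):

grep -c 'cmd=delete' /var/log/hadoop-hdfs/hdfs-audit.log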