
How does Hadoop get input data not stored on HDFS?


Actually, HDFS is needed in real-world applications for several reasons:

  • Very high bandwidth to support MapReduce workloads, plus scalability.
  • Data reliability and fault tolerance, through replication and its distributed nature; this is essential for critical data systems.
  • Flexibility: you don't have to pre-process data before storing it in HDFS (see the sketch after this list).
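
On that last point: raw files can be dropped into HDFS exactly as they arrive. A minimal sketch using Hadoop's Java FileSystem API; the NameNode address and file paths below are placeholders (the address is normally picked up from core-site.xml):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class IngestToHdfs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder NameNode address; usually comes from core-site.xml.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");
            FileSystem fs = FileSystem.get(conf);

            // Copy a raw local file into HDFS as-is -- no pre-processing needed.
            fs.copyFromLocalFile(new Path("/tmp/raw-events.log"),
                                 new Path("/data/raw/raw-events.log"));
            fs.close();
        }
    }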

Hadoop is designed to be write once and read many concept. Kafka, Flume and Sqoop which are generally used for ingestion are themselves very fault tolerant and provide high-bandwidth for data ingestion to HDFS. Sometimes it is required to ingest data from thousands for sources per minute with data in GBs. For this these tools are required as well as fault tolerant storage system-HDFS.