
Distributed, error-handling copying of TBs of data


Take a look at Flume: http://archive.cloudera.com/cdh/3/flume/UserGuide.html

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. The system is centrally managed and allows for intelligent dynamic management. It uses a simple extensible data model that allows for online analytic applications.

To install it, see https://wiki.cloudera.com/display/DOC/Flume+Installation
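The exact configuration depends heavily on the Flume version: the CDH3 guide linked above covers the older Flume (0.9.x), which is driven from a central master rather than a per-agent properties file. Purely as a rough sketch, here is what a single-agent pipeline looks like in the newer Flume NG properties syntax; the agent name, log path, and HDFS path are placeholders, not anything from the question:

    # Minimal single-agent pipeline: tail a log file and land the events in HDFS.
    agent1.sources  = src1
    agent1.channels = ch1
    agent1.sinks    = sink1

    # Source: run a tail-like command and turn each line into an event (path is a placeholder)
    agent1.sources.src1.type     = exec
    agent1.sources.src1.command  = tail -F /var/log/app/app.log
    agent1.sources.src1.channels = ch1

    # Channel: file-backed, so buffered events survive an agent restart
    agent1.channels.ch1.type = file

    # Sink: write the collected events into HDFS (namenode and directory are placeholders)
    agent1.sinks.sink1.type      = hdfs
    agent1.sinks.sink1.channel   = ch1
    agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events

The file-backed channel is one concrete piece of the fault tolerance the guide describes: events already buffered on an agent are not lost if the agent is restarted.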


As already mentioned, Hadoop is the answer because it is built exactly for this kind of large-scale data. You can set up a Hadoop cluster, store the data there, and use the cores of the boxes to analyze it with MapReduce.
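To make "analyze it with MapReduce" concrete, here is a minimal Hadoop MapReduce job in the classic word-count shape; the class names and input/output paths are placeholders rather than anything from the question, and the same skeleton applies to whatever per-record analysis you actually need:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Minimal job: counts token occurrences across every file in the input directory.
    public class WordCount {

      public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);   // emit (token, 1) for every token in the line
          }
        }
      }

      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();             // add up the 1s emitted for this token
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);   // pre-aggregate on each node to cut shuffle traffic
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

You would package this into a jar and run it across the cluster with something like "hadoop jar analysis.jar WordCount /input /output", where the two paths are HDFS directories of your choosing.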