
using pyspark, read/write 2D images on hadoop file system


I have found a solution that works: the pyspark 1.2.0 binaryFiles method does the job. It is flagged as experimental, but I was able to read TIFF images by combining it with OpenCV.

    import cv2
    import numpy as np

    # build rdd and take one element for testing purpose
    L = sc.binaryFiles('hdfs://localhost:9000/*.tif').take(1)

    # convert to bytearray and then to np array
    file_bytes = np.asarray(bytearray(L[0][1]), dtype=np.uint8)

    # use opencv to decode the np bytes array
    R = cv2.imdecode(file_bytes, 1)

Note the pyspark help text:

    binaryFiles(path, minPartitions=None)

        :: Experimental

        Read a directory of binary files from HDFS, a local file system
        (available on all nodes), or any Hadoop-supported file system URI
        as a byte array. Each file is read as a single record and returned
        in a key-value pair, where the key is the path of each file, the
        value is the content of each file.

        Note: Small files are preferred, large file is also allowable, but
        may cause bad performance.
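Since binaryFiles returns an RDD of (path, bytes) pairs, the same decoding step can be pushed onto the workers with mapValues to handle a whole directory rather than a single file taken to the driver. The sketch below is an assumption on my part (the helper name bytes_to_array and the mapValues pipeline are not from the original answer); only the bytearray/np.asarray conversion is runnable without a Spark cluster, so the Spark part is shown as a comment.

    import numpy as np

    def bytes_to_array(raw_bytes):
        # sc.binaryFiles yields (path, bytes); convert the byte payload
        # into a flat uint8 array suitable for cv2.imdecode
        return np.asarray(bytearray(raw_bytes), dtype=np.uint8)

    # With a live SparkContext, decoding every file in the directory
    # (rather than take(1) on the driver) would look like:
    #
    #   images = (sc.binaryFiles('hdfs://localhost:9000/*.tif')
    #               .mapValues(bytes_to_array)
    #               .mapValues(lambda a: cv2.imdecode(a, 1)))

This keeps the image decoding distributed across executors, which matters given the docstring's warning that large files may cause bad performance.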