
Python Multiprocessing Pool Doesn't Create Enough Processes


Thanks everyone for your input. Here is my current solution to the problem; I plan to make it more efficient in the coming week. I took Martin's advice and glue the files together once they're all done. However, I'd like to implement daphtdazz's solution of having a dedicated process do the gluing from a queue while I produce more files.

import glob
import multiprocessing
import os
from multiprocessing import Pool

import pandas


def do_analysis(file):
    # To keep the file names unique, I append the process id to the end
    process_id = multiprocessing.current_process().pid
    # doing analysis work... (this builds `dataframe` and a per-key `filename`)
    for key, value in dataframe.iteritems():
        if os.path.isfile(filename):
            value.to_csv(filename, mode='a', header=False, encoding='utf-8')
        else:
            value.to_csv(filename, header=True, encoding='utf-8')


def merge_files(base_file_name):
    write_directory = 'write_directory'
    all_files = glob.glob('{0}*'.format(base_file_name))
    is_file_created = False
    for file in all_files:
        if is_file_created:
            print('File already exists, appending')
            dataframe = pandas.read_csv(file, index_col=0)
            dataframe.to_csv('{0}{1}.csv'.format(write_directory, os.path.basename(base_file_name)), mode='a', header=False, encoding='utf-8')
        else:
            print('File does not exist, creating.')
            dataframe = pandas.read_csv(file, index_col=0)
            dataframe.to_csv('{0}{1}.csv'.format(write_directory, os.path.basename(base_file_name)), header=True, encoding='utf-8')
            is_file_created = True


if __name__ == '__main__':
    # Run the code to do analysis and group files by the id in the json lines
    directory = 'directory'
    file_names = glob.glob(directory)
    pool = Pool()
    pool.imap_unordered(do_analysis, file_names, 1)
    pool.close()
    pool.join()

    # Merge all of the files together
    base_list = get_unique_base_file_names('file_directory')
    pool = Pool()
    pool.imap_unordered(merge_files, base_list, 100)
    pool.close()
    pool.join()
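The helper get_unique_base_file_names isn't shown in the snippet above. A minimal sketch, assuming each per-process output file follows a <base>_<pid>.csv naming convention (an assumption on my part, not something spelled out above), could look like this:

import glob
import os


def get_unique_base_file_names(file_directory):
    # Collect every per-process output file, e.g. results_1234.csv,
    # and strip the trailing _<pid> so each base name appears only once.
    base_names = set()
    for path in glob.glob(os.path.join(file_directory, '*.csv')):
        name, _ = os.path.splitext(path)
        base, _, suffix = name.rpartition('_')
        if base and suffix.isdigit():
            base_names.add(base)
        else:
            base_names.add(name)
    return list(base_names)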

The solution saves each file with a unique process id appended to the end of the file name, then goes back, finds all of the files that share an id from the json file, and merges them together. While creating the files, CPU usage sits between 60 and 70%, which is decent. While merging the files, CPU usage is around 8%, because the files merge so quickly that I don't need all of the processing power I have. This solution works, but it could be more efficient. I'm going to work on doing both steps simultaneously. Any suggestions are welcome.
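To overlap the two phases, daphtdazz's idea of a dedicated merging process fed by a queue could look roughly like the sketch below. This is only a sketch under assumptions not in the code above: it assumes do_analysis is modified to return the list of paths it wrote, that output files follow the <base>_<pid>.csv naming convention, and the names analyse_and_queue and merge_worker are made up for illustration.

import glob
import multiprocessing
import os
from multiprocessing import Pool, Process

import pandas

SENTINEL = None
write_directory = 'write_directory'


def analyse_and_queue(args):
    # Hypothetical wrapper: assumes do_analysis (above) is changed to return
    # the per-process files it wrote, so they can be handed to the merger.
    file_name, queue = args
    for written_path in do_analysis(file_name):
        queue.put(written_path)


def merge_worker(queue):
    # Single dedicated process: appends each finished file to its merged
    # output as soon as it arrives, while the pool keeps producing files.
    created = set()
    while True:
        path = queue.get()
        if path is SENTINEL:
            break
        # Assumes files are named <base>_<pid>.csv, as in the sketch above.
        base = os.path.basename(path).rsplit('_', 1)[0]
        target = '{0}{1}.csv'.format(write_directory, base)
        dataframe = pandas.read_csv(path, index_col=0)
        if base in created:
            dataframe.to_csv(target, mode='a', header=False, encoding='utf-8')
        else:
            dataframe.to_csv(target, header=True, encoding='utf-8')
            created.add(base)


if __name__ == '__main__':
    file_names = glob.glob('directory')

    # A plain multiprocessing.Queue can't be passed to pool workers,
    # so use a Manager queue, which is a picklable proxy.
    manager = multiprocessing.Manager()
    queue = manager.Queue()

    merger = Process(target=merge_worker, args=(queue,))
    merger.start()

    pool = Pool()
    pool.imap_unordered(analyse_and_queue, [(name, queue) for name in file_names], 1)
    pool.close()
    pool.join()

    queue.put(SENTINEL)   # nothing left to merge, let the merger exit
    merger.join()

With this layout the merging work happens while the analysis pool is still busy, so the merge phase no longer sits at 8% CPU on its own at the end.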