
KubeFlow, handling large dynamic arrays and ParallelFor with current size limitations


The array comes in as an HTTP URL to a file, due to the pipeline input argument size limitations of Argo and Kubernetes.

Usually the external data is first imported into the pipeline (downloaded and written out as a file). From then on, components use inputPath and outputPath to pass the big pieces of data around as files. The size limitation only applies to data that you consume as a value (inputValue) instead of as a file (inputPath).
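For reference, a minimal sketch of that pattern with the KFP v1 SDK (the component names such as `download_json` and `count_items` are just placeholders, not anything from your pipeline): the downloader writes the big array to an `OutputPath` file, and the downstream component declares an `InputPath` parameter, so the data travels as a file artifact rather than as an Argo parameter value.

```python
import kfp
from kfp.components import InputPath, OutputPath, create_component_from_func


def download_json(url: str, data_path: OutputPath('JsonArray')):
    """Import the external data into the pipeline by downloading it to an output file."""
    import urllib.request
    urllib.request.urlretrieve(url, data_path)


def count_items(data_path: InputPath('JsonArray')) -> int:
    """Consume the data as a file (inputPath), so the parameter size limit does not apply."""
    import json
    with open(data_path) as f:
        return len(json.load(f))


download_json_op = create_component_from_func(download_json, base_image='python:3.9')
count_items_op = create_component_from_func(count_items, base_image='python:3.9')
```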

Loops (ParallelFor) consume their loop argument by value, so the size limit applies to them.

What you can do is make the data that goes into the loop smaller. For example, if your data is a JSON list of big objects [{obj1}, {obj2}, ..., {objN}], you can transform it into a list of indexes [1, 2, ..., N], pass that small list to the loop, and then inside the loop have a component that uses the index together with the full data file to select the single piece to work on (N -> {objN}).
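A sketch of that fan-out, again with the KFP v1 SDK and placeholder component names: only the small index list is passed by value through ParallelFor, while each iteration receives the big array as a file plus one index, and picks out its single object.

```python
import kfp
from kfp import dsl
from kfp.components import InputPath, OutputPath, create_component_from_func


def download_json(url: str, data_path: OutputPath('JsonArray')):
    """Download the big JSON array once and pass it around as a file."""
    import urllib.request
    urllib.request.urlretrieve(url, data_path)


def get_indexes(data_path: InputPath('JsonArray')) -> list:
    """Return the small list [0, 1, ..., N-1], which is safe to pass by value."""
    import json
    with open(data_path) as f:
        return list(range(len(json.load(f))))


def process_item(index: int, data_path: InputPath('JsonArray')) -> str:
    """Pick a single object out of the big file using the loop index."""
    import json
    with open(data_path) as f:
        obj = json.load(f)[index]
    # ... do the real per-object work here ...
    return str(obj)


download_json_op = create_component_from_func(download_json, base_image='python:3.9')
get_indexes_op = create_component_from_func(get_indexes, base_image='python:3.9')
process_item_op = create_component_from_func(process_item, base_image='python:3.9')


@dsl.pipeline(name='fan-out-over-big-list')
def fan_out_pipeline(data_url: str):
    download_task = download_json_op(url=data_url)
    # Only the small index list goes through Argo parameters and ParallelFor.
    indexes_task = get_indexes_op(data=download_task.outputs['data'])
    with dsl.ParallelFor(indexes_task.output) as index:
        # Each iteration gets the big data as a file plus one small index value.
        process_item_op(index=index, data=download_task.outputs['data'])
```

Note that the big array itself never appears in a loop argument or any other parameter value; every iteration re-reads the file artifact and selects its object locally.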