spark-submit job by doing exec on a master pod in k8s
As you can read from the error:
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/Users/atekade/IdeaProjects/scala-spark-streaming/logstreamer.sh\": stat /Users/atekade/IdeaProjects/scala-spark-streaming/logstreamer.sh: no such file or directory": unknown command terminated with exit code 126
The part that interests us most is `/Users/atekade/IdeaProjects/scala-spark-streaming/logstreamer.sh: no such file or directory`, which means the pod is unable to locate the `logstreamer.sh` file.
The `logstreamer.sh` script needs to be uploaded to the `spark-master` pod. The `scala-spark-streaming_2.11-1.0.jar` needs to be there as well.
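As a sketch, you could copy both files into the pod with `kubectl cp` and then invoke the script by its path inside the pod rather than the local path on your machine. The pod name `spark-master`, the `default` namespace, and the target directory `/opt/spark/work-dir` are assumptions here; adjust them to your cluster:

```shell
# Copy the script and the application jar into the spark-master pod
# (pod name, namespace, and target paths are assumptions -- adjust to your setup)
kubectl cp ./logstreamer.sh default/spark-master:/opt/spark/work-dir/logstreamer.sh
kubectl cp ./scala-spark-streaming_2.11-1.0.jar \
    default/spark-master:/opt/spark/work-dir/scala-spark-streaming_2.11-1.0.jar

# Exec the script using its in-pod path, not the local filesystem path
kubectl exec -it spark-master -- /bin/bash /opt/spark/work-dir/logstreamer.sh
```

Note that `kubectl exec` runs the command inside the container's filesystem, which is why the original local path under `/Users/atekade/...` could not be found.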
You can configure a PersistentVolume for storage. This is useful because if your pod is ever rescheduled, any data that was not stored on a PV will be lost.
Here is a link to the minikube documentation for Persistent Volumes.
You can also use different Storage Classes.
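As a minimal sketch of the PersistentVolume approach on minikube (all names, paths, and sizes below are illustrative assumptions, not taken from your setup), a `hostPath` PV and a matching claim could look like:

```yaml
# Hypothetical hostPath PV for minikube; name, path, and size are illustrative
apiVersion: v1
kind: PersistentVolume
metadata:
  name: spark-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/spark-pv
---
# Claim that the spark-master pod spec can reference in its volumes section
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The pod would then mount the claim via a `volumes` entry referencing `spark-pvc` and a corresponding `volumeMounts` path, so files such as the script and jar placed there survive rescheduling.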