
What is XLA_GPU and XLA_CPU for tensorflow?


As mentioned in the docs, XLA stands for "accelerated linear algebra". It's Tensorflow's relatively new optimizing compiler, which can further speed up your ML models' GPU operations by fusing what would otherwise be multiple CUDA kernels into one (I'm simplifying here because the details aren't that important for your question).
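For reference, here's a minimal sketch (not from your code, just an illustration) of how you can opt a function into XLA compilation in TF 2.x. The `jit_compile=True` argument (named `experimental_compile` in older 2.x releases) asks Tensorflow to compile the traced graph with XLA, which is where the kernel fusion happens:

```python
import tensorflow as tf

# jit_compile=True requests XLA compilation for this function's graph
@tf.function(jit_compile=True)
def dense_layer(x, w, b):
    # matmul + add + relu are the kind of ops XLA can fuse into one kernel
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([8, 16])
w = tf.random.normal([16, 4])
b = tf.zeros([4])
print(dense_layer(x, w, b).shape)  # (8, 4)
```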

To your question, my understanding is that XLA is separate enough from the default Tensorflow compiler that they separately register GPU devices and have slightly different constraints on which GPUs they treat as visible (see here for more on this). Looking at the output of the command you ran, it looks like XLA is registering 1 GPU and normal TF is registering 3.
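To see this for yourself, a device listing along these lines is typically what produces output like yours; on older TF versions XLA registers its own XLA_CPU/XLA_GPU devices alongside the regular ones (newer releases have dropped the separate XLA devices, so your exact output may differ):

```python
from tensorflow.python.client import device_lib

# Lists every device Tensorflow has registered, including XLA's
for d in device_lib.list_local_devices():
    print(d.name, d.device_type)

# Example (illustrative) output interleaving normal and XLA devices:
#   /device:CPU:0        CPU
#   /device:XLA_CPU:0    XLA_CPU
#   /device:GPU:0        GPU
#   /device:XLA_GPU:0    XLA_GPU
```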

I'm not sure if you're having issues or are just curious, but if it's the former, I recommend taking a look at the issue I linked above and this one. Tensorflow is finicky about which CUDA/cuDNN versions it works with flawlessly, and it's possible you're using incompatible versions. (If you're not having issues, then hopefully the first part of my answer is sufficient.)
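If you do suspect a version mismatch, a quick sanity check in TF 2.x is to compare the CUDA/cuDNN versions Tensorflow was built against with what's installed (the exact keys returned by `get_build_info()` can vary a bit between releases):

```python
import tensorflow as tf

# Versions Tensorflow itself was compiled against
info = tf.sysconfig.get_build_info()
print("CUDA version TF was built with :", info.get("cuda_version"))
print("cuDNN version TF was built with:", info.get("cudnn_version"))

# GPUs Tensorflow can actually see
print("GPUs visible to TF:", tf.config.list_physical_devices("GPU"))
```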