Does Numpy automatically detect and use GPU?
Does NumPy/Python automatically detect the presence of a GPU and utilize it to speed up matrix computation (e.g. numpy.multiply, numpy.linalg.inv, ... etc)?
No. NumPy always runs on the CPU; it does not detect or use a GPU.
Or do I have to code in a specific way to exploit the GPU for fast computation?
Yes. Look into Numba, CuPy, Theano, PyTorch, or PyCUDA; each offers a different paradigm for accelerating Python with GPUs.
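Of the libraries above, CuPy is the closest to a drop-in replacement, since it mirrors much of the NumPy API. A minimal sketch (assuming CuPy and a CUDA GPU are installed; on a CPU-only machine it falls back to plain NumPy, so the snippet still runs):

```python
import numpy as np

# CuPy mirrors the NumPy API: if a CUDA GPU and CuPy are available,
# the same calls execute on the GPU. Otherwise fall back to NumPy,
# so only the array module is swapped, not the algorithm code.
try:
    import cupy as xp  # GPU-backed, NumPy-compatible API
    on_gpu = True
except ImportError:
    xp = np            # CPU fallback: plain NumPy
    on_gpu = False

a = xp.asarray([[1.0, 2.0], [3.0, 4.0]])
b = xp.multiply(a, a)    # elementwise multiply, same call either way
inv = xp.linalg.inv(a)   # matrix inverse, same call either way

# cupy.asnumpy copies device memory back to the host; with the NumPy
# fallback the array is already on the host.
result = xp.asnumpy(inv) if on_gpu else inv
print(result)
```

Note that moving data between host and device has a cost, so GPU libraries pay off mainly on large arrays and repeated computations.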