How to use the GPU version of TensorFlow

A GPU-enabled version of TensorFlow can always be installed in a user environment. If you prefer not to go through the installation yourself, a cluster-wide version is available.

You can load the environment and verify GPU access by typing the following (the check uses PyTorch, which is also available in this environment, to confirm that CUDA sees the GPUs):

conda activate tensorflow-gpu
python

>>> import torch
>>> print('Torch Version: '+torch.__version__)
>>> print('CUDA Availability: '+str(torch.cuda.is_available()))
>>> print('GPU Name: '+str(torch.cuda.get_device_name(0)))
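
If you prefer, the same check can be written as a short, non-interactive script. This is only a sketch based on standard PyTorch calls (torch.cuda.device_count and torch.cuda.get_device_name), and the file name check_gpu.py is just an example:

# check_gpu.py - hypothetical helper script; run with: python check_gpu.py
import torch

print('Torch Version: ' + torch.__version__)
print('CUDA Availability: ' + str(torch.cuda.is_available()))

# List every GPU visible to this process instead of querying them one by one
for i in range(torch.cuda.device_count()):
    print('GPU ' + str(i) + ': ' + torch.cuda.get_device_name(i))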

Here is what an actual session looks like (on a node with eight NVIDIA A100 GPUs):

(base)$ conda activate tensorflow-gpu
(tensorflow-gpu)$ python
Python 3.7.15 (default, Nov 24 2022, 21:12:53)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print('Torch Version: '+torch.__version__)
Torch Version: 1.13.1
>>> print('CUDA Availability: '+str(torch.cuda.is_available()))
CUDA Availability: True
>>> print('GPU Name: '+str(torch.cuda.get_device_name(0)))
GPU Name: NVIDIA A100-SXM4-80GB
>>> print('GPU Name: '+str(torch.cuda.get_device_name(1)))
GPU Name: NVIDIA A100-SXM4-80GB
>>> print('GPU Name: '+str(torch.cuda.get_device_name(2)))
GPU Name: NVIDIA A100-SXM4-80GB
>>> print('GPU Name: '+str(torch.cuda.get_device_name(3)))
GPU Name: NVIDIA A100-SXM4-80GB
>>> print('GPU Name: '+str(torch.cuda.get_device_name(4)))
GPU Name: NVIDIA A100-SXM4-80GB
>>> print('GPU Name: '+str(torch.cuda.get_device_name(5)))
GPU Name: NVIDIA A100-SXM4-80GB
>>> print('GPU Name: '+str(torch.cuda.get_device_name(6)))
GPU Name: NVIDIA A100-SXM4-80GB
>>> print('GPU Name: '+str(torch.cuda.get_device_name(7)))
GPU Name: NVIDIA A100-SXM4-80GB
>>> quit()
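
To go one step beyond listing the devices, you can run a small computation on one of the GPUs. This is a minimal sketch using standard PyTorch calls (torch.rand, torch.matmul); device index 0 is just an example, and any of the visible GPUs can be used:

>>> import torch
>>> device = torch.device('cuda:0')             # pick the first visible GPU
>>> x = torch.rand(1000, 1000, device=device)   # allocate a matrix directly on the GPU
>>> y = torch.matmul(x, x)                       # matrix multiply runs on the GPU
>>> print(y.device)
cuda:0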