Using GPU - Developer Documentation

Using GPU

GPU instances are available for GPU-capable code and frameworks and can significantly speed up many machine learning training and inference workloads. GPU instance support is available for PrL workspaces and for environments used by jobs.

The provided GPU instances are not clustered; each instance offers up to 8 GPUs.

Run the command below to view the CUDA toolkit version (`nvcc --version` reports the toolkit; the driver version can be checked with `nvidia-smi`):

nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Wed_Apr_11_23:16:29_CDT_2018
Cuda compilation tools, release 9.2, V9.2.88
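If a script needs to adapt to the installed toolkit version, the release number can be parsed out of this output. A minimal sketch; `cuda_toolkit_version` is a hypothetical helper for illustration, not part of any provided API:

```python
import re

def cuda_toolkit_version(nvcc_output: str) -> str:
    """Extract the CUDA release number (e.g. '9.2') from `nvcc --version` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    if match is None:
        raise ValueError("could not find a CUDA release in the nvcc output")
    return match.group(1)

# Sample output as printed by nvcc above
sample = (
    "nvcc: NVIDIA (R) Cuda compiler driver\n"
    "Cuda compilation tools, release 9.2, V9.2.88\n"
)
print(cuda_toolkit_version(sample))  # 9.2
```

In practice the output would come from running `nvcc --version` via `subprocess.run(..., capture_output=True)` rather than a hard-coded string.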


Running GPU code

Once you know your system's GPU defaults, install any GPU-enabled framework you require and try out your GPU code:

import torch

cuda0 = torch.device('cuda:0')  # the first GPU on the instance
torch.ones([2, 4], dtype=torch.float64, device=cuda0)  # allocate a tensor directly on the GPU
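If the same code may also run on machines without a GPU, a common PyTorch pattern is to fall back to the CPU when CUDA is unavailable. A minimal sketch:

```python
import torch

# Use the first GPU when CUDA is available, otherwise fall back to the CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# The tensor is allocated on whichever device was selected.
x = torch.ones([2, 4], dtype=torch.float64, device=device)
print(x.device)
```

This keeps notebooks and jobs portable between GPU and CPU-only environments without code changes.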

Last update: June 15, 2023

Except where otherwise noted, content on this site is licensed under the Development License Agreement.