With the right tools, running GPU-accelerated workloads like deep learning or scientific computing on an Ubuntu server becomes much simpler. In this guide, we'll explore how to use the NVIDIA Container Toolkit for Dockerized GPU access and Miniconda for flexible environment management.

Whether you are building AI models or managing CUDA-based applications, combining these tools gives you a powerful, modular, and reproducible setup.

Prerequisites

Before diving in, ensure you have:

  • An Ubuntu 24.04 server with an NVIDIA GPU.
  • A non-root user with sudo privileges.
  • NVIDIA drivers installed.
  • Basic knowledge of Docker and Conda.

Verify NVIDIA Drivers and Docker

1. Verify that the NVIDIA drivers are installed.

nvidia-smi

Output.

Sun Apr  6 05:53:34 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07              Driver Version: 550.90.07      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA A40-8Q                  On  |   00000000:06:00.0 Off |                    0 |
| N/A   N/A    P8             N/A /  N/A  |       1MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
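
If you prefer a machine-readable summary over the full table, nvidia-smi can also query individual fields. For example, the following prints the GPU name, driver version, and total memory as CSV:

nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv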

2. Make sure the Docker package is installed.

docker --version

Output.

Docker version 28.0.1, build 068a01e
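
If the command is not found, Docker is not yet installed. On Ubuntu 24.04, one quick option is the distribution package (you may prefer Docker's official repository for newer releases):

sudo apt update
sudo apt install -y docker.io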

Configure Docker to Use the NVIDIA Runtime

1. Configure Docker to use the NVIDIA runtime. This command registers the nvidia runtime in /etc/docker/daemon.json, so it needs root privileges.

sudo nvidia-ctk runtime configure --runtime=docker
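
You can confirm the change by inspecting Docker's daemon configuration.

cat /etc/docker/daemon.json

Output (yours may contain additional settings, but a runtimes entry similar to this should be present).

{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}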

2. Restart Docker to apply the changes.

sudo systemctl restart docker
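
To confirm that Docker picked up the new runtime, check its runtime list; nvidia should now appear alongside the default runc.

docker info | grep -i runtime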

3. Run a PyTorch container and verify that CUDA is available. The --ipc=host and --ulimit flags follow NVIDIA's recommendations for this image: PyTorch data loaders use shared memory, which Docker limits by default.

docker run --rm -it --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:24.08-py3 python3 -c "import torch;print('CUDA available:', torch.cuda.is_available())"

Output.

=============
== PyTorch ==
=============

NVIDIA Release 24.08 (build 107063150)
PyTorch Version 2.5.0a0+872d972
Container image Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2014-2024 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies    (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU                      (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006      Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015      Google Inc.
Copyright (c) 2015      Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

NOTE: CUDA Forward Compatibility mode ENABLED.
  Using CUDA 12.6 driver version 560.35.03 with kernel driver version 550.90.07.
  See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.

CUDA available: True
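
As a quick follow-up check, you can also ask PyTorch inside the container which device it sees. This minimal example uses the same image; the reported name should match the GPU shown by nvidia-smi earlier.

docker run --rm --gpus all nvcr.io/nvidia/pytorch:24.08-py3 python3 -c "import torch; print('Device:', torch.cuda.get_device_name(0))"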

Install Miniconda

Miniconda provides lightweight Python environment management.

1. Create a directory for Miniconda.

mkdir -p ~/miniconda3

2. Download the installer.

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh

3. Run the installer.

bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3

4. Remove the installer script.

rm ~/miniconda3/miniconda.sh

5. Initialize Conda for the current user.

~/miniconda3/bin/conda init bash

6. Reload the shell to apply changes.

source ~/.bashrc

7. Verify Conda installation.

conda --version

Output.

conda 24.7.1
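
By default, conda init activates the base environment in every new shell. If you would rather activate environments explicitly, you can optionally disable this behavior:

conda config --set auto_activate_base false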

Set Up a Conda Environment with PyTorch (GPU Support)

1. Create a new environment named torch with Python 3.10.

conda create -n torch python=3.10

2. Activate the environment.

conda activate torch

3. Install PyTorch with CUDA 12.1.

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
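
Note that the pytorch-cuda=12.1 metapackage bundles the CUDA 12.1 runtime libraries inside the environment, so it does not need to match the CUDA version reported by nvidia-smi; the host driver only has to be new enough to support it. You can check which CUDA version PyTorch was built against:

python3 -c "import torch; print(torch.version.cuda)"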

4. Verify GPU support.

python3 -c "import torch; print('CUDA available:', torch.cuda.is_available())"

Output.

CUDA available: True
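
For a slightly more thorough smoke test, the following sketch creates a tensor directly on the GPU and runs a small matrix multiplication (the device name will differ on your hardware):

python3 - <<'EOF'
import torch

# Report the first visible GPU
print("Device:", torch.cuda.get_device_name(0))

# Allocate a random matrix on the GPU and multiply it by itself
x = torch.rand(1024, 1024, device="cuda")
y = x @ x

# Confirm the result still lives on the GPU
print("Result on:", y.device)
EOF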

Conclusion

By combining the NVIDIA Container Toolkit and Miniconda, you get a modular, GPU-powered Python environment that is portable and easy to manage. Use this setup for:

  • Training PyTorch/TensorFlow models
  • Scientific computing
  • Reproducible ML pipelines

This combination is especially valuable for teams or production pipelines that need consistency, isolation, and GPU access. Try it today on GPU hosting from Atlantic.Net!