How to Install PyTorch on Ubuntu 24.04
PyTorch is an open-source deep learning framework that provides tensor computation with GPU acceleration and automatic differentiation for building neural networks. It has become the most widely adopted framework for model training, with a 63% adoption rate according to the Linux Foundation.
In this tutorial, you will learn how to install PyTorch on Ubuntu 24.04 using both Pip and Anaconda. We cover CPU-only and GPU-accelerated setups, virtual environment configuration, verification steps, and common troubleshooting solutions.
#What is PyTorch?
PyTorch is a free and open-source machine learning framework built on Python and the Torch library. Meta AI originally developed it, and the PyTorch Foundation (a subsidiary of the Linux Foundation) now governs the project. PyTorch supports GPU-accelerated computing through NVIDIA CUDA and AMD ROCm, making it well-suited for training deep neural networks.
#What is PyTorch used for?
PyTorch is used across a wide range of AI and deep learning tasks. These include image classification, object detection, speech recognition, natural language processing, reinforcement learning, and generative AI. Its modular architecture lets users build neural networks from composable building blocks, making it practical for both research prototyping and production deployment.
Major projects like ChatGPT, Tesla Autopilot, and Hugging Face Transformers all use PyTorch as their training framework. For GPU programming fundamentals, see our guide on CUDA and Python programming.
#Prerequisites
Before you start the process of installing PyTorch on Ubuntu, ensure your Ubuntu system meets the following requirements:
- Python 3.10 or later (PyTorch 2.10.0 dropped support for Python 3.9 and below). Run python3 --version to check. If you need to install or upgrade Python, see our guide on how to install Python on Ubuntu.
- If your Ubuntu system or server has a supported GPU, ensure you have the NVIDIA CUDA drivers and toolkit installed.
- At least 4 GB of free disk space for the PyTorch packages and dependencies. GPU builds are larger (around 2-3 GB for the torch package alone).
- An Ubuntu 24.04 server or virtual machine with sudo access. You can deploy one quickly on Cherry Servers.
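If you'd like to script these checks, here is a minimal Python sketch (the 4 GB threshold mirrors the list above; raise it if you plan to install a GPU build):

```python
import shutil
import sys

# Check the interpreter version (PyTorch 2.10.0 requires Python 3.10+)
python_ok = sys.version_info >= (3, 10)

# Check free disk space on the current filesystem (at least 4 GB)
free_gb = shutil.disk_usage(".").free / (1024 ** 3)
disk_ok = free_gb >= 4

print(f"Python {sys.version.split()[0]}: {'OK' if python_ok else 'too old'}")
print(f"Free disk space: {free_gb:.1f} GB ({'OK' if disk_ok else 'insufficient'})")
```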
Deploy and scale your Python projects effortlessly on Cherry Servers' robust and cost-effective dedicated or virtual servers. Benefit from an open cloud ecosystem with seamless API & integrations and a Python library.
#How to install PyTorch on Ubuntu?
There are a few ways to install PyTorch on Ubuntu, including building from source, but this guide will show you how to install PyTorch using Pip and how to install it using Anaconda. PyTorch also provides both CPU (Central Processing Unit) and GPU (Graphics Processing Unit) builds, so if your system doesn't have GPU support, you can install PyTorch with CPU support only.
#1. Pip: install PyTorch
You can install PyTorch on Ubuntu using Pip (Python's native package manager) in the following steps:
Step 1 - Update system packages
First, update the system packages to ensure that you're using the latest packages.
sudo apt update
Step 2 - Install Python3-venv
Next, you need to install Python3-venv, which you can use to create an isolated Python environment for your project so you don't have to install Python packages globally, thereby preventing possible compatibility issues.
sudo apt install python3-venv -y
Step 3 - Set up the environment
Now create and navigate to a new directory that will be used to store the virtual environment files and settings. Here pytorch_env is the name of the directory; you can use a different name if you want.
mkdir pytorch_env
cd pytorch_env
Next, run the following commands to create and activate the virtual environment.
python3 -m venv pytorch_env
source pytorch_env/bin/activate
Once the virtual environment has been activated, you should see your command prompt change to show the name of the virtual environment. With it activated, any Python packages or scripts you install go into this virtual environment rather than the system-wide Python, avoiding conflicts with projects that require different package versions.
Note: Always use a virtual environment for PyTorch projects. Different projects may need different PyTorch or CUDA versions, and isolating each one prevents dependency conflicts. If you prefer Conda environments, skip ahead to the Conda section below.
Step 4 - Install PyTorch using Pip
Now with your virtual environment activated, you can go on to install PyTorch on your Ubuntu system.
Visit the official PyTorch installation page to generate the exact command for your system. Select your OS, package manager, Python version, and compute platform (CPU, ROCm 7.1, CUDA 12.6, CUDA 12.8, or CUDA 13.0) to get a copy-paste-ready command.
To install PyTorch with CPU support only, run:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cpu
With GPU support (CUDA):
To install PyTorch with CUDA 13.0 GPU support, run:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
For CUDA 12.8 support:
#For Linux x86 (the default wheel includes CUDA 12.8):
pip3 install torch torchvision
#For Linux Aarch64, use the dedicated index:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu128
For CUDA 12.6 support:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu126
How to choose the right CUDA version: Run nvidia-smi on your system. The output shows the maximum supported CUDA version for your driver. Pick a PyTorch CUDA build that matches or is lower than that version. For example, if nvidia-smi shows "CUDA Version: 12.8," you can safely use cu128 or cu126, but not cu130.
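The matching rule above can be sketched in a few lines of Python. The build tags are the ones from the install commands earlier, and the helper name is illustrative:

```python
def pick_cuda_build(driver_cuda, builds=("cu130", "cu128", "cu126")):
    """Pick the newest PyTorch CUDA build whose version does not
    exceed the driver's supported CUDA version (e.g. "12.8")."""
    driver = tuple(int(part) for part in driver_cuda.split("."))
    for tag in builds:  # builds are ordered newest first
        major, minor = int(tag[2:-1]), int(tag[-1])
        if (major, minor) <= driver:
            return tag
    return "cpu"  # no compatible CUDA build; fall back to the CPU wheel

print(pick_cuda_build("12.8"))  # → cu128 (cu130 would be too new)
```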
Step 5 - Verify the installation
You can run the following commands to verify that PyTorch has been installed:
python
import torch
print(torch.__version__)
The commands above will start the Python interpreter, import the PyTorch library, and print the version of PyTorch that is currently installed. The session should look similar to this:
python
Python 3.12.3 (main, Jan 22 2026, 20:57:42) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
2.10.0+cpu
>>> exit()
Step 6 - Verify GPU access (optional)
If you installed PyTorch with CUDA support, confirm that PyTorch can detect your GPU:
python3 -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('GPU name:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'N/A')"
If CUDA available prints True and you see your GPU name, the installation is working correctly. If it prints False, check that your NVIDIA drivers are properly installed by running nvidia-smi and that you selected the correct CUDA version during installation.
#2. Conda: install PyTorch
Another way you can install PyTorch is by using an open-source platform called Anaconda. You can install PyTorch using Conda in the following steps:
Important note: Starting with PyTorch 2.6.0 (January 2025), the conda install method from the pytorch channel is no longer available. The recommended approach is to use pip inside a Conda environment. The instructions below reflect this change: we use Conda to manage the environment and Python version, then install PyTorch via pip.
Step 1 - Update system packages
As usual, you want to make sure that your system packages are up to date.
sudo apt update
Step 2 - Install Anaconda
Next, you need to install Anaconda, and you can do that in the following steps.
First, you need to install Curl. This will be used to download the Anaconda installer script.
sudo apt install curl -y
Next, as a best practice, you want to isolate the installation files. Navigate to the /tmp directory (you can create or use a different directory for this if you want):
cd /tmp
Next, download the Anaconda installer script using Curl by running the command:
curl --output anaconda.sh https://repo.anaconda.com/archive/Anaconda3-2025.12-2-Linux-x86_64.sh
The command above will download the Anaconda installer script and save it as a file called "anaconda.sh". The latest version as of this writing is Anaconda 2025.12, which ships with Python 3.13.9 and Conda 25.11.0. You can always get the URL for the latest version of the script on their archive page.
Now, verify the integrity of the downloaded file using:
sha256sum anaconda.sh
You want to confirm that the sha256sum checksum value matches that on the official site.
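If you prefer to compute the checksum from Python, here is a small sketch using the standard hashlib module (the filename matches the one downloaded above; compare the printed digest against the archive page yourself):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published on the Anaconda archive page:
# print(sha256_of("anaconda.sh"))
```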
After verifying the downloaded script, you can go on to run the Anaconda installer script and start the installation process by running:
bash anaconda.sh
After the installation process is complete, you need to update the current shell session to ensure that the Anaconda environment and its commands are available for use in the current terminal session. Do this by running:
source ~/.bashrc
It will also activate Anaconda, as shown by the prompt changing to "(base)", the default environment Anaconda creates during installation. Subsequently, you can use the conda activate command to activate environments.
You can verify that Anaconda has been installed using the conda --version command:
conda --version
Step 3 - Install PyTorch using pip inside a Conda environment
Now with Anaconda installed and activated, create a dedicated Conda environment with a compatible Python version:
conda create -n pytorch_env python=3.12 -y
conda activate pytorch_env
With CPU support only:
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cpu
With GPU support (CUDA 12.6):
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu126
Why pip instead of conda? PyTorch dropped its Conda packages starting with version 2.6.0. Using pip inside a Conda environment gives you the best of both worlds: Conda handles the Python version and environment isolation, while pip installs the latest official PyTorch packages directly from the PyTorch wheel index.
Step 4 - Verify the installation
Run the following command to verify that PyTorch has been installed:
python
import torch
print(torch.__version__)
exit()
#Running a basic PyTorch example
After installation, test your setup with a simple tensor operation and a basic neural network layer.
#Tensor operations
Create a file named test_pytorch.py and add the following content:
import torch
# Create tensors
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([4.0, 5.0, 6.0])
# Basic operations
print("Addition:", x + y)
print("Dot product:", torch.dot(x, y))
print("Matrix multiplication:")
a = torch.randn(3, 4)
b = torch.randn(4, 2)
print(a @ b)
Run the script:
python3 test_pytorch.py
Output
Addition: tensor([5., 7., 9.])
Dot product: tensor(32.)
Matrix multiplication:
tensor([[-1.0347, -0.3055],
[ 1.5307, -4.7465],
[ 1.5422, -2.7926]])
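The dot product in this output is easy to verify by hand, since torch.dot multiplies the tensors elementwise and sums the result:

```python
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]

# Elementwise multiply, then sum: 1*4 + 2*5 + 3*6 = 32
dot = sum(a * b for a, b in zip(x, y))
print(dot)  # → 32.0
```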
#GPU tensor operations
If you have a CUDA-capable GPU, move tensors to the GPU and run computations there:
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
x = torch.randn(1000, 1000, device=device)
y = torch.randn(1000, 1000, device=device)
# Matrix multiplication on GPU
z = torch.mm(x, y)
print(f"Result shape: {z.shape}")
print(f"Computed on: {z.device}")
else:
print("CUDA is not available. Running on CPU.")
#Simple neural network
Create a file, test_neural_netwrk.py, and add the content below:
import torch
import torch.nn as nn
# Define a simple feed-forward network
model = nn.Sequential(
nn.Linear(10, 64),
nn.ReLU(),
nn.Linear(64, 32),
nn.ReLU(),
nn.Linear(32, 1)
)
# Create random input
x = torch.randn(5, 10) # batch of 5, 10 features each
# Forward pass
output = model(x)
print(f"Input shape: {x.shape}")
print(f"Output shape: {output.shape}")
print(f"Model parameters: {sum(p.numel() for p in model.parameters()):,}")
Execute the script:
python3 test_neural_netwrk.py
Output
Input shape: torch.Size([5, 10])
Output shape: torch.Size([5, 1])
Model parameters: 2,817
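The parameter count in the output can be reproduced with plain arithmetic: each nn.Linear(in, out) layer owns in*out weights plus out biases:

```python
# Layer sizes from the model above: 10 -> 64 -> 32 -> 1
layers = [(10, 64), (64, 32), (32, 1)]

# Each nn.Linear(n_in, n_out) has n_in*n_out weights and n_out biases
total = sum(n_in * n_out + n_out for n_in, n_out in layers)
print(f"{total:,}")  # → 2,817
```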
These examples confirm that PyTorch is correctly installed and that tensor operations, GPU access (if applicable), and neural network layers all work as expected.
#How to uninstall PyTorch?
If you need to uninstall PyTorch on your Ubuntu system, you can do that with Pip by running the following command:
pip3 uninstall torch -y
Found existing installation: torch 2.10.0+cpu
Uninstalling torch-2.10.0+cpu:
Successfully uninstalled torch-2.10.0+cpu
To remove all PyTorch packages (torch, torchvision, and torchaudio) at once:
pip3 uninstall torch torchvision torchaudio -y
Output
WARNING: Skipping torch as it is not installed.
Found existing installation: torchvision 0.25.0+cpu
Uninstalling torchvision-0.25.0+cpu:
Successfully uninstalled torchvision-0.25.0+cpu
WARNING: Skipping torchaudio as it is not installed.
If you used a virtual environment, you can also delete the entire environment directory:
deactivate
rm -rf ~/pytorch_env
To uninstall PyTorch from a Conda environment:
conda activate pytorch_env
pip3 uninstall torch torchvision torchaudio -y
Or delete the entire Conda environment:
conda deactivate
conda env remove -n pytorch_env
#Troubleshooting
#"No module named torch"
The most common cause is running python3 outside the virtual environment where you installed PyTorch. Activate your environment first:
source ~/pytorch_env/pytorch_env/bin/activate
Or for Conda:
conda activate pytorch_env
Then try import torch again.
#GPU is present but torch.cuda.is_available() returns False
You likely installed the CPU-only version of PyTorch. Uninstall it and reinstall with the correct CUDA index URL:
pip3 uninstall torch torchvision torchaudio -y
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu128
Make sure the CUDA version in the URL matches or is lower than the version shown by `nvidia-smi`.
#Installation fails with "externally-managed-environment" error
Ubuntu 24.04 uses PEP 668, which prevents pip from installing packages into the system Python. Always use a virtual environment (python3 -m venv) or a Conda environment. Do not use --break-system-packages unless you understand the risks.
#Version mismatch between torch, torchvision, and torchaudio
These packages must be compatible versions. The safest approach is to install all three in a single pip command from the same index URL. Check the PyTorch version compatibility table for matching versions.
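To see at a glance which of the three packages are installed and at what versions, you can query pip's package metadata from Python (a sketch using the standard importlib.metadata module):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Mismatched or missing packages stand out immediately in this listing
for pkg in ("torch", "torchvision", "torchaudio"):
    v = installed_version(pkg)
    print(f"{pkg}: {v if v else 'not installed'}")
```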
#Out of memory during installation
The PyTorch GPU package is large (2-3 GB). If pip fails during download or extraction, free disk space or use the --no-cache-dir flag:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128 --no-cache-dir
#Conclusion
You now have PyTorch installed on Ubuntu 24.04 and verified that it works with both CPU and GPU (if available). We covered installation via pip in a virtual environment and via pip inside a Conda environment, along with verification steps, basic examples, and common troubleshooting solutions.
FAQs
Can I install PyTorch without a GPU?
Yes. The CPU-only version works on any system. Use the `--index-url https://download.pytorch.org/whl/cpu` flag with pip to install the CPU build. Training will be slower than on a GPU, but PyTorch is fully functional for learning, prototyping, and running smaller models.
What is the difference between PyTorch and TensorFlow?
Both are deep learning frameworks, but they differ in design philosophy. PyTorch uses dynamic computation graphs (eager execution by default), which makes it easier to debug and experiment. TensorFlow uses static graphs by default (though TensorFlow 2.x added eager mode). PyTorch currently dominates in research and training workloads, while TensorFlow remains strong in production inference and mobile deployment.
Do I need to install CUDA separately?
No, not for most use cases. PyTorch pip packages bundle the required CUDA runtime libraries. You only need compatible NVIDIA GPU drivers installed on your system. Run `nvidia-smi` to confirm your driver is working. The full CUDA Toolkit is only needed if you plan to compile custom CUDA extensions.
Can I use PyTorch with AMD GPUs?
Yes. PyTorch supports AMD GPUs through ROCm (Radeon Open Compute). On the PyTorch installation page, select ROCm as the compute platform to get the correct install command. ROCm support is Linux-only.
How do I switch between CPU and GPU in my code?
Use `torch.device` to specify where tensors and models should run. Moving the data and model to the same device is required before running forward or backward passes.
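As a sketch of that pattern (assuming PyTorch is installed; the layer sizes are arbitrary):

```python
import torch

# Pick the device once, then move both the model and the data to it
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 1).to(device)
x = torch.randn(8, 10, device=device)

# Model and input live on the same device, so the forward pass works
output = model(x)
print(f"Running on: {device}")
print(f"Output shape: {output.shape}")
```

The same script runs unchanged on a CPU-only machine and on a GPU machine; only the value of device differs.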
Harness the power of GPU acceleration anywhere. Deploy CUDA and machine learning workloads on robust hardware tailored for GPU intensive tasks.