CUDA on Nvidia MX130 GPU
By Muhammad Fareez Iqmal (@iqfareez)
Welcome to this guide on how to enable CUDA on an Nvidia MX130 GPU for machine vision inference on laptops. CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs).
The Nvidia MX130 is a dedicated GPU commonly found in laptops. It is not as powerful as the RTX GPU families, but it can still run some machine vision tasks efficiently. In this guide, we will cover the installation of the necessary drivers and software, as well as the configuration required to properly utilize the GPU for these kinds of computations using PyTorch. Keep in mind that your mileage may vary with other frameworks.
At the end, I will demo the performance using SuperGlue's demo script.
Checking the GPU you have
Usually, laptops with an NVIDIA GPU come preinstalled with the NVIDIA drivers and the NVIDIA Control Panel. Open it to see which GPU you have.
I have the MX130. According to its specs page, it supports CUDA. If you have a different GPU, you can still follow along with this guide, but your mileage may vary.
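Alternatively, the NVIDIA driver ships with the nvidia-smi command-line utility, so you can also list the GPUs it detects from a terminal:
nvidia-smi -L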
Optionally, you can verify CUDA support using Python (you need Python 3.9.x, as PyTorch doesn't support the latest Python version yet).
from numba import cuda # run pip install numba
cuda.detect()
Output:
Found 1 CUDA devices
id 0 b'NVIDIA GeForce MX130' [SUPPORTED (DEPRECATED)]
Compute Capability: 5.0
PCI Device ID: 0
PCI Bus ID: 1
UUID: GPU-7b092133-34da-571d-9506-9de68403ed55
Watchdog: Enabled
Compute Mode: WDDM
FP32/FP64 Performance Ratio: 32
Summary:
1/1 devices are supported
Process finished with exit code 0
Checking CUDA availability using PyTorch
Even though the script above says that our GPU supports CUDA, PyTorch still cannot 'see' the GPU yet.
tip
I recommend creating a virtual environment to easily manage your packages and Python version per project.
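For example, with Python's built-in venv module (the folder name .venv here is just a common convention):
python -m venv .venv
.venv\Scripts\activate
In PowerShell, the activation script is .venv\Scripts\Activate.ps1. Run the pip and python commands in this guide from inside the activated environment.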
Try running the script below:
import torch # pip install torch
yes_cuda = torch.cuda.is_available()
print(yes_cuda)
Output:
False
So, we need to install some tools to make the CUDA GPU visible to PyTorch.
Install Visual Studio
You may need to install Visual Studio 2022 to correctly install CUDA.
In my case, I didn't want to install the full Visual Studio due to storage constraints, but I already had the VS Build Tools installed with the C++ development workload. I also downloaded the Microsoft Visual C++ Redistributable for Visual Studio 2022, just in case. You can get it from the download page; scroll down to Other Tools, Frameworks, and Redistributables.
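If you're unsure whether the C++ toolchain is present, you can open a Developer Command Prompt for VS and invoke the compiler with no arguments; it prints a version banner when the Build Tools with the C++ workload are installed:
cl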
Setup CUDA
Download a supported CUDA version
The latest CUDA version is 12.0. However, PyTorch doesn't support it yet, so you'll need an older version of CUDA (11.6 or 11.7). Go to the CUDA Toolkit Archive to download version 11.7.1.
You may set the Installer Type to local and proceed with downloading.
Install CUDA
Once the file has downloaded, double-click it to begin the installation.
Choose the Express Installation. It will install the CUDA Toolkit, your display driver, and a few other components.
When the installation finished, I got notified about trouble installing Nsight.
As the description says, it may not be related to CUDA, so just hit Next and complete the installation.
Verify the installation
Open Command Prompt or Powershell, run the following command:
nvidia-smi
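You should see a table with your driver version and the GPU. You can also verify the CUDA compiler that came with the toolkit:
nvcc --version
It should report the release you just installed (11.7).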
Setup PyTorch (correctly)
Go to PyTorch's Get Started page. Select the settings accordingly, then copy and run the generated command.
note
You may need to uninstall the existing PyTorch installation (pip uninstall torch) before running the command below.
Command:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
If you re-run the earlier script, the output should now be True, indicating that PyTorch is able to recognise your CUDA GPU.
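For a more detailed check, PyTorch also exposes the CUDA version it was built against and the name of the detected device:
import torch
print(torch.version.cuda)  # CUDA version the PyTorch build targets, e.g. 11.7
print(torch.cuda.get_device_name(0))  # should print the MX130's name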
Demo
I'm going to demo the machine vision project SuperGlue Inference and Evaluation Demo Script, first without CUDA (running on the CPU), then with CUDA on the MX130.
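For reference, the two runs below used the demo script from the SuperGluePretrainedNetwork repository, roughly like this (the --force_cpu flag is how that script disables CUDA; check the repo's README for the exact usage):
python demo_superglue.py --force_cpu
python demo_superglue.py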
Without CUDA
Running inference on device "cpu"
Average FPS = 0.4
With CUDA
Running inference on device "cuda"
Average FPS = 1.0
The result with CUDA is roughly a 2.5x improvement. Still, don't expect it to run at a much higher FPS, as this GPU may not be capable of delivering such compute-intensive calculations.