NVIDIA® cuOpt™ is a GPU-accelerated optimization library that solves Mixed Integer Linear Programming (MILP), Linear Programming (LP), and Vehicle Routing Problems (VRP). It enables solutions for large-scale problems with millions of variables and constraints, offering seamless deployment across hybrid and multi-cloud environments.
Using accelerated computing, NVIDIA® cuOpt optimizes operations research and logistics by enabling better, faster decisions.
Please refer to system requirements for NVIDIA cuOpt.
The tag naming scheme for NVIDIA cuOpt images incorporates key platform details into the tag as shown below:
<image_name>:<version>-<cuda_version>-<py_version>
For example, the tag cuopt:25.8-cuda12.8-py3.12 indicates the following:

- cuOpt version: 25.8
- CUDA version: cuda12.8
- Python version: py3.12
The latest tag always points to the most recent release and is overwritten with every release.
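The tag scheme above can be split mechanically into its three components. The sketch below is illustrative only (the function name and returned keys are not part of cuOpt):

```python
# Minimal sketch: split a cuOpt image tag of the form
# <version>-<cuda_version>-<py_version> into its parts.
# Function and key names are illustrative, not a cuOpt API.
def parse_cuopt_tag(tag: str) -> dict:
    """Split a tag like '25.8-cuda12.8-py3.12' into its components."""
    version, cuda, py = tag.split("-", 2)
    return {
        "cuopt_version": version,
        "cuda_version": cuda,
        "python_version": py,
    }

print(parse_cuopt_tag("25.8-cuda12.8-py3.12"))
# {'cuopt_version': '25.8', 'cuda_version': 'cuda12.8', 'python_version': 'py3.12'}
```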
The following command opens a shell in the container. From there, you can run the cuOpt CLI, Python API, and more.

docker run -it --gpus all --rm -v $(pwd):/workspace -w /workspace nvidia/cuopt:25.8.0-cuda12.8-py312 /bin/bash
If you want to run the cuOpt server, use the following command:
docker run -it --gpus all --rm -v $(pwd):/workspace -w /workspace -e CUOPT_SERVER_PORT=8000 -p 8000:8000 nvidia/cuopt:25.8.0-cuda12.8-py312
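Once the server container is up on port 8000, you can submit optimization requests over HTTP. The sketch below builds a small VRP-style payload; the field names and endpoint path are assumptions for illustration, so check the cuOpt server documentation for the actual request schema:

```python
import json

# Illustrative payload for a tiny 3-location, 1-vehicle routing problem.
# Field names here are assumptions, not the confirmed cuOpt server schema.
payload = {
    "cost_matrix_data": {"data": {"0": [[0, 10, 15], [10, 0, 20], [15, 20, 0]]}},
    "fleet_data": {"vehicle_locations": [[0, 0]]},
    "task_data": {"task_locations": [1, 2]},
}

# Serialize the request body; sending it requires the running container, e.g.:
#   requests.post("http://localhost:8000/<endpoint>", json=payload)
# (endpoint path omitted here -- see the cuOpt server docs)
body = json.dumps(payload)
print(json.loads(body)["task_data"]["task_locations"])
```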
Content type: Image
Digest: sha256:bd02c0e53…
Size: 2.9 GB
To pull the image:

docker pull nvidia/cuopt:25.8.0-cuda12.8-py312