APEX is an NVIDIA tool for easy mixed-precision and distributed training in PyTorch.
For more information, see the official guide.
Environments:
- Ubuntu 16.04
- CUDA 10.0
- Anaconda
- PyTorch 1.1.0
- Python 3.6
For best performance and full functionality, the official site recommends installing Apex with CUDA and C++ extensions, using the steps below.
Step 1: Install Anaconda with Python 3.7.
Go to the Anaconda website, download the installer, and install Anaconda.
Step 2: Create virtual environment and install PyTorch 1.1.0.
$ conda create -n torch-1.1.0 python==3.6
$ conda activate torch-1.1.0
$ conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
Step 3: Check CUDA version.
(torch-1.1.0) chunming@: $ python
Python 3.6.0 |Continuum Analytics, Inc.| (default, Dec 23 2016, 12:22:00)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
>>> import torch
>>> from torch.utils.cpp_extension import CUDA_HOME
>>> torch.version.cuda
'10.0.130'
>>> CUDA_HOME
'/usr/local/cuda'
>>> import subprocess
>>> subprocess.check_output([CUDA_HOME + '/bin/nvcc', '-V'], universal_newlines=True)
'nvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2018 NVIDIA Corporation\nBuilt on Sat_Aug_25_21:08:01_CDT_2018\nCuda compilation tools, release 10.0, V10.0.130\n'
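The long string returned by check_output can also be parsed programmatically, which is handy if you want a script to compare the system's CUDA release against torch.version.cuda. A minimal sketch, reusing the sample nvcc output captured above (comparing against torch.version.cuda itself would additionally require importing torch):

```python
import re

# Sample "nvcc -V" output, as captured in the session above
nvcc_output = (
    "nvcc: NVIDIA (R) Cuda compiler driver\n"
    "Copyright (c) 2005-2018 NVIDIA Corporation\n"
    "Built on Sat_Aug_25_21:08:01_CDT_2018\n"
    "Cuda compilation tools, release 10.0, V10.0.130\n"
)

# Pull "10.0" and "10.0.130" out of the final line
match = re.search(r"release ([\d.]+), V([\d.]+)", nvcc_output)
release, full_version = match.group(1), match.group(2)
print(release)       # 10.0
print(full_version)  # 10.0.130

# This full_version should equal torch.version.cuda ('10.0.130' above)
```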
We can see that torch.version.cuda outputs 10.0.130, which is the CUDA version PyTorch was built with. The path of CUDA_HOME is /usr/local/cuda, and we used the nvcc command under this path to print the CUDA version installed on the system; in my case, release 10.0, V10.0.130 appears at the end of the output. If CUDA_HOME is not set, set this environment variable manually to point at your CUDA installation before building. Note that the two CUDA versions have to match each other.
Step 4: Clone APEX and build it with CUDA.
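Before building, you can set CUDA_HOME from the shell if it is missing. A minimal sketch; /usr/local/cuda is the common default path (and matches the transcript above), but adjust it if your install lives elsewhere:

```shell
# Point CUDA_HOME at your CUDA installation before building Apex.
# /usr/local/cuda is an assumption based on the common default path.
export CUDA_HOME=/usr/local/cuda
echo "CUDA_HOME is set to $CUDA_HOME"
```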
$ git clone https://github.com/NVIDIA/apex
$ cd apex
$ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .
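Once the build finishes, Apex's mixed-precision entry point is the amp module mentioned at the top of this post. A minimal usage sketch, not a full training script: the Linear model, SGD optimizer, and random input below are placeholders, and a CUDA-capable GPU plus the Apex install from Step 4 are assumed:

```python
import torch
from apex import amp

# Placeholder model and optimizer -- substitute your own
model = torch.nn.Linear(10, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Wrap both for mixed precision; opt_level "O1" casts
# whitelisted ops to fp16 while keeping master weights in fp32
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

loss = model(torch.randn(4, 10).cuda()).sum()

# Scale the loss before backward to avoid fp16 gradient underflow
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```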