How to Build AlphaPose - Installation



AlphaPose is an accurate multi-person pose estimator. It is the first real-time open-source system to achieve 70+ mAP (72.3 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, the authors also provide an efficient online pose tracker called Pose Flow, the first open-source online pose tracker to achieve both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.



Paper [v5]: https://arxiv.org/abs/1612.00137
Source code: https://github.com/MVIG-SJTU/AlphaPose

Environment

  • Ubuntu 16.04
  • PyTorch 0.4.0
  • CUDA 9.0 / cuDNN 7.0
  • Anaconda 3.x
  • Python 3.6
  • gcc/g++ 5.4.0

Install PyTorch

Create a new environment named torch-py3-0.4.0 with Python 3.6 using Anaconda.
$ conda create -n torch-py3-0.4.0 pip python=3.6

Activate the environment.

$ conda activate torch-py3-0.4.0

See the official guide to install PyTorch with the indicated version. AlphaPose only runs with PyTorch 0.4.0 for now; other versions are likely to end up with runtime errors.
$ conda install pytorch=0.4.0 cuda90 -c pytorch
$ conda install torchvision -c pytorch
$ pip install cython setuptools tqdm
$ pip install -e git+https://github.com/ncullen93/torchsample.git#egg=torchsample
$ pip install visdom
$ pip install nibabel

Check PyTorch version

$ python
>>> import torch
>>> torch.__version__
'0.4.0'
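As an extra sanity check (my own, not part of the original guide), confirm that PyTorch can see the CUDA 9.0 installation; if this prints False, revisit the CUDA/cuDNN setup before continuing.
$ python -c "import torch; print(torch.cuda.is_available())"
True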


Build AlphaPose

Step 1: Get the code

$ git clone https://github.com/MVIG-SJTU/AlphaPose.git

Step 2: Build non-maximum suppression (NMS)

Modify Makefile in AlphaPose/human-detection/lib/

Makefile
$ cd AlphaPose/human-detection/lib/
$ vim Makefile
# replace `python2` with `python`
   python setup.py build_ext --inplace
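If you prefer a non-interactive edit, a sed one-liner (my own shortcut, not from the AlphaPose instructions) performs the same replacement:
$ sed -i 's/python2/python/g' Makefile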

Build NMS library
$ make clean
$ make


Modify files in AlphaPose/human-detection/lib/newnms

Makefile
$ cd newnms
$ vim Makefile
# replace `python2` with `python`
   python setup_linux.py build_ext --inplace

setup_linux.py
$ vim setup_linux.py
# line 51: replace `.iteritems()` with `.items()`
for k, v in cudaconfig.items():
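Both edits in newnms can also be scripted with sed instead of vim (again my own shortcut; double-check the files afterwards):
$ sed -i 's/python2/python/g' Makefile
$ sed -i 's/\.iteritems()/.items()/g' setup_linux.py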

Step 3: Build Deepmatching

Install the dependencies needed to compile the deepmatching module.
$ sudo apt-get update
$ sudo apt-get install cmake libatlas-dev libatlas-base-dev

Assume we are under the root of the AlphaPose folder.
$ cd PoseFlow/deepmatching
Update a few lines in the Makefile as shown below. Note that you must replace the CPYTHONFLAGS path with your own Python include path (a lookup command follows the snippet).
...
LAPACKLDFLAGS=/usr/lib/libsatlas.so # single-threaded blas
...
STATICLAPACKLDFLAGS=-fPIC -Wall -g -fopenmp -static -static-libstdc++ /usr/lib/x86_64-linux-gnu/libjpeg.a /usr/lib/x86_64-linux-gnu/libpng.a /usr/lib/x86_64-linux-gnu/libz.a /usr/lib/libblas.a /usr/lib/gcc/x86_64-linux-gnu/5/libgfortran.a /usr/lib/gcc/x86_64-linux-gnu/5/libquadmath.a # statically linked version
...
CPYTHONFLAGS=-I/home/chunming/anaconda3/envs/torch-py3-0.4.0/include/python3.6m
LIBFLAGS= -L/lib/x86_64-linux-gnu -lpng -ljpeg -lz -lblas
...
all: deepmatching-static
...
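To look up the Python include directory for CPYTHONFLAGS (assuming the conda environment created earlier is active), you can ask Python directly; the path printed below is from my machine and will differ on yours:
$ python -c "from sysconfig import get_paths; print(get_paths()['include'])"
/home/chunming/anaconda3/envs/torch-py3-0.4.0/include/python3.6m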

For reference, the original (unmodified) Makefile looks like this:
CC=g++

OS_NAME=$(shell uname -s)
ifeq ($(OS_NAME),Linux) 
  LAPACKLDFLAGS=/usr/local/atlas/lib/libsatlas.so   # single-threaded blas
  #LAPACKLDFLAGS=/usr/lib64/atlas/libtatlas.so  # multi-threaded blas
  #BLAS_THREADING=-D MULTITHREADED_BLAS # remove this if wrong
endif
ifeq ($(OS_NAME),Darwin)  # Mac OS X
  LAPACKLDFLAGS=-framework Accelerate # for OS X
endif
LAPACKCFLAGS=-Dinteger=int $(BLAS_THREADING)
STATICLAPACKLDFLAGS=-fPIC -Wall -g -fopenmp -static -static-libstdc++ /home/lear/douze/tmp/jpeg-6b/libjpeg.a /usr/lib64/libpng.a /usr/lib64/libz.a /usr/lib64/libblas.a /usr/lib/gcc/x86_64-redhat-linux/4.9.2/libgfortran.a /usr/lib/gcc/x86_64-redhat-linux/4.9.2/libquadmath.a # statically linked version

CFLAGS= -fPIC -Wall -g -std=c++11 $(LAPACKCFLAGS) -fopenmp -DUSE_OPENMP -O3
LDFLAGS=-fPIC -Wall -g -ljpeg -lpng -fopenmp 
CPYTHONFLAGS=-I/usr/include/python2.7

SOURCES := $(shell find . -name '*.cpp' ! -name 'deepmatching_matlab.cpp')
OBJ := $(SOURCES:%.cpp=%.o)
HEADERS := $(shell find . -name '*.h')


all: deepmatching 

.cpp.o:  %.cpp %.h
 $(CC) -o $@ $(CFLAGS) -c $+

deepmatching: $(HEADERS) $(OBJ)
 $(CC) -o $@ $^ $(LDFLAGS) $(LAPACKLDFLAGS) -I/home/ibal_109/atlas/build/include

deepmatching-static: $(HEADERS) $(OBJ)
 $(CC) -o $@ $^ $(STATICLAPACKLDFLAGS)

python: $(HEADERS) $(OBJ)
# swig -python $(CPYTHONFLAGS) deepmatching.i # not necessary, only do if you have swig compiler
 g++ $(CFLAGS) -c deepmatching_wrap.c $(CPYTHONFLAGS)
 g++ -shared $(LDFLAGS) $(LAPACKLDFLAGS) deepmatching_wrap.o $(OBJ) -o _deepmatching.so $(LIBFLAGS) 

clean:
 rm -f $(OBJ) deepmatching *~ *.pyc .gdb_history deepmatching_wrap.o _deepmatching.so deepmatching.mex???

deepmatching_wrap.c
At line 2983, change #include <numpy/arrayobject.h> to the full path of the header under your Anaconda environment. In my case, I changed it to #include </home/chunming/anaconda3/envs/torch-py3-0.4.0/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h>.
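The numpy include directory for this edit can be printed with numpy.get_include(); append numpy/arrayobject.h to the printed directory (the output below is from my environment and will differ on yours):
$ python -c "import numpy; print(numpy.get_include())"
/home/chunming/anaconda3/envs/torch-py3-0.4.0/lib/python3.6/site-packages/numpy/core/include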

Generate libsatlas.so by linking the static ATLAS/LAPACK archives into a single shared object.
$ cd /usr/lib
$ sudo ld -shared -o libsatlas.so --whole-archive libatlas.a liblapack.a --no-whole-archive libf77blas.a libcblas.a
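A quick check (my own, not part of the original steps) that the shared object was created and is a proper shared library:
$ ls -l /usr/lib/libsatlas.so
$ file /usr/lib/libsatlas.so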

Clean and build everything in the deepmatching directory.
$ make clean all
$ make python
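Before the import test, you can verify that the Python extension _deepmatching.so (the output of the Makefile's python target) was produced:
$ ls -l _deepmatching.so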

Importing the deepmatching module should now work without any errors.
$ python
>>> import deepmatching


Test AlphaPose

Here we use a sample video, input.mp4, from the MOT Challenge for testing.

Step 1: Fetch media information

Install mediainfo and ffmpeg to check the frame rate. Run the following from the root of the AlphaPose directory.
$ sudo apt-get install mediainfo ffmpeg
$ mediainfo examples/input.mp4 | grep FPS
Frame rate                        : 15.000 FPS
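If mediainfo is not available, ffprobe (installed with ffmpeg) reports the same frame rate, just in a different format (15/1 rather than 15.000 FPS):
$ ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of csv=p=0 examples/input.mp4
15/1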

Step 2: Generate images from a video. 

Use ffmpeg to extract images from the video, changing the frame rate to match your video file.
$ mkdir examples/demo-images
$ ffmpeg -i examples/input.mp4 -vf fps=15 examples/demo-images/out%07d.png
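Optionally count the extracted frames to confirm the extraction worked (my own sanity check, not part of the original steps):
$ ls examples/demo-images | wc -l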

Step 3: Generate multi-person pose estimation result.

Run demo.py in AlphaPose. Use --sp to run in single-process mode; running without --sp may end up with errors.
$ python demo.py --sp --indir examples/demo-images --outdir examples/demo-results

Run PoseFlow to track the detected people across frames by reading the generated JSON.
$ python PoseFlow/tracker-general.py --imgdir examples/demo-images \
--in_json examples/demo-results/alphapose-results.json \
--out_json examples/demo-results/alphapose-tracked.json \
--visdir examples/demo-render
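Both JSON files named in the commands above should now exist; a quick check:
$ ls -l examples/demo-results/alphapose-results.json examples/demo-results/alphapose-tracked.json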

Step 4: Convert a set of images to a video

Tracked and rendered frames are generated in examples/demo-render. Now convert these images back into a video.
$ cd examples/demo-render
$ ls -U | head -1
out0000001.png.png

Note that each filename is suffixed with an extra .png, so the input pattern below accounts for it.
$ ffmpeg -i out%07d.png.png -c:v libx264 -vf fps=15 out.mp4
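You can preview the result with ffplay, which is installed alongside ffmpeg on Ubuntu:
$ ffplay out.mp4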

Step 5: Demo
