A simple, high-performance inference implementation of deep head pose estimation, ported to ncnn. This project is based on the deep-head-pose project by Nataniel Ruiz, which is detailed in a CVPR Workshop paper. Retinaface is used for the face detection step.
Re-build ncnn for your target platform.
- The official ncnn documentation describes in detail how to build and use ncnn on any platform.
- If you use my docker environment, the ncnn library is already built inside the container at:
/home/ncnn_original/build
This directory contains the compiled ncnn library and the tools for converting and quantizing ncnn models.
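As a sketch, a standard out-of-source ncnn build follows the steps below; the job count and install location are examples, so check the official ncnn build guide for platform-specific cmake options:

```shell
# Standard ncnn build from source (Linux); see the official ncnn
# build guide for platform-specific options such as Vulkan support.
git clone https://github.com/Tencent/ncnn.git
cd ncnn
git submodule update --init        # pulls submodule dependencies
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)
make install                       # places headers, library and tools under build/install
```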
Convert models to ncnn format.
- The original deep head pose project uses the PyTorch framework, so we need to convert the PyTorch model to an ncnn model.
- The ncnn wiki details this guide here. After converting the PyTorch model to ONNX format, use the ncnn build tools to convert ONNX to ncnn. Inside my docker environment, these tools are ready to use.
- Note: Netron can visualize the network, which makes it easy to find the output node required by the ncnn specification.
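As a sketch, the PyTorch → ONNX → ncnn pipeline could look like the following. The model file names are hypothetical, the onnx-simplifier step is an optional extra I am assuming (it is not part of the original instructions), and the exact tool path depends on where ncnn was built in your environment:

```shell
# 1. Export the PyTorch model to ONNX first (in Python, via torch.onnx.export).
# 2. (Optional, assumed) simplify the ONNX graph so ncnn parses it more easily:
python -m onnxsim hopenet.onnx hopenet-sim.onnx
# 3. Convert ONNX to the ncnn param/bin pair using the tool built alongside ncnn
#    (path assumed from the docker build directory mentioned above):
/home/ncnn_original/build/tools/onnx/onnx2ncnn hopenet-sim.onnx hopenet.param hopenet.bin
```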
Build and run the test.
git clone https://github.com/docongminh/ncnn-deep-head-pose
- enter the docker environment:
docker exec -it deep-head-pose bash
- cd to the mounted source directory:
- cd to the ncnn build library:
- in the project root inside docker: `mkdir -p build && cd build`
- cmake and build:
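A minimal sketch of the cmake-and-build step, assuming the project's CMakeLists.txt locates ncnn via `ncnn_DIR` and that ncnn was installed with `make install` under the docker path mentioned earlier (both are assumptions; adjust to your setup):

```shell
# Run from the build/ directory created above.
# The ncnn_DIR path is an assumption based on a default `make install` of ncnn.
cmake -Dncnn_DIR=/home/ncnn_original/build/install/lib/cmake/ncnn ..
make -j$(nproc)
```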
Notes during development.
The inference pipeline includes these steps:
- create an extractor instance
- normalize the image
- resize the image
Known limitations:
- This project is a work in progress, so the code still has a number of performance issues.
- A quantized model version will be added as soon as possible.