The open-source code for RetinaFace TensorRT inference is available at https://github.com/linghu8812/tensorrt_inference/tree/master/RetinaFace.
First clone the insightface code with the command

git clone https://github.com/deepinsight/insightface.git

then copy the export_onnx.py file into the ./detection/RetinaFace or ./detection/RetinaFaceAntiCov folder and generate the ONNX file there. The RetinaFace-R50, RetinaFace-MobileNet0.25, and RetinaFaceAntiCov models are all supported. Export the models with the following commands:
python3 export_onnx.py                                             # RetinaFace-R50 (default)
python3 export_onnx.py --prefix ./model/mnet.25                    # RetinaFace-MobileNet0.25
python3 export_onnx.py --prefix ./model/mnet_cov2 --network net3l  # RetinaFaceAntiCov
As with the YOLOv4 model, the network outputs are concatenated into a single tensor, as shown in the figure below.
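The row count of that concatenated tensor follows from the feature strides and anchor sizes in the configuration shown further below. Here is a minimal sketch of that bookkeeping, assuming a 640x640 input and two anchor sizes per feature level; the exact column layout (scores, boxes, landmarks) should be checked against the repository:

#include <cstdio>
#include <vector>

// Sketch: count the rows of the concatenated RetinaFace output.
// Assumes two anchor sizes per FPN level, as in the config below.
int main() {
    const int width = 640, height = 640;                 // IMAGE_WIDTH / IMAGE_HEIGHT
    const std::vector<int> feature_steps = {32, 16, 8};  // strides of the three FPN levels
    const int anchors_per_cell = 2;                      // e.g. sizes [512, 256] at stride 32
    int total = 0;
    for (int step : feature_steps) {
        int rows = (height / step) * (width / step) * anchors_per_cell;
        std::printf("stride %2d: %6d anchors\n", step, rows);
        total += rows;
    }
    std::printf("total: %d anchors in the concatenated output\n", total);
    return 0;  // 800 + 3200 + 12800 = 16800 anchors for a 640x640 input
}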
The TensorRT model is TensorRT's inference engine, implemented here in C++. The relevant configuration lives in the config.yaml file: if the path given by engine_file exists, the engine is read from engine_file; otherwise the engine is built from onnx_file and saved to engine_file.
void RetinaFace::LoadEngine() {
    // create and load engine
    std::fstream existEngine;
    existEngine.open(engine_file, std::ios::in);
    if (existEngine) {
        readTrtFile(engine_file, engine);
        assert(engine != nullptr);
    } else {
        onnxToTRTModel(onnx_file, engine_file, engine, BATCH_SIZE);
        assert(engine != nullptr);
    }
}
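Here readTrtFile deserializes a previously saved engine, while onnxToTRTModel parses the ONNX graph and builds a new one. The repository's implementations are not reproduced above; the following is a minimal sketch of the build path against the TensorRT 7/8-era C++ API, where the logger class, workspace size, and function signature are assumptions rather than the repository's exact code:

#include <cassert>
#include <fstream>
#include <iostream>
#include <string>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Minimal logger required by the TensorRT builder and ONNX parser.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char *msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
} gLogger;

// Sketch: parse an ONNX file, build an engine, and serialize it to disk.
nvinfer1::ICudaEngine *buildEngineFromOnnx(const std::string &onnx_file,
                                           const std::string &engine_file) {
    auto builder = nvinfer1::createInferBuilder(gLogger);
    // ONNX models require an explicit-batch network definition.
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);
    auto parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile(onnx_file.c_str(),
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return nullptr;
    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30);  // 1 GiB build scratch space (assumption)
    auto engine = builder->buildEngineWithConfig(*network, *config);
    assert(engine != nullptr);
    // Serialize the engine so later runs can skip the slow build step.
    auto serialized = engine->serialize();
    std::ofstream out(engine_file, std::ios::binary);
    out.write(static_cast<const char *>(serialized->data()), serialized->size());
    return engine;
}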
The config.yaml file sets the batch size, image size, model anchors, and other parameters.
RetinaFace:
    onnx_file:      "../R50.onnx"
    engine_file:    "../R50.trt"
    BATCH_SIZE:     1
    INPUT_CHANNEL:  3
    IMAGE_WIDTH:    640
    IMAGE_HEIGHT:   640
    obj_threshold:  0.5
    nms_threshold:  0.45
    detect_mask:    False
    mask_thresh:    0.5
    landmark_std:   1
    feature_steps:  [32, 16, 8]
    anchor_sizes:   [[512, 256], [128, 64], [32, 16]]
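These values can be read in C++ with a YAML parser such as yaml-cpp; the repository's own config-loading code may differ, so the snippet below is only an illustrative sketch:

#include <iostream>
#include <string>
#include <vector>
#include <yaml-cpp/yaml.h>

int main() {
    // Load the config and select the RetinaFace section.
    YAML::Node root = YAML::LoadFile("../config.yaml");
    YAML::Node cfg  = root["RetinaFace"];
    auto onnx_file   = cfg["onnx_file"].as<std::string>();
    auto engine_file = cfg["engine_file"].as<std::string>();
    int  batch_size  = cfg["BATCH_SIZE"].as<int>();
    int  width       = cfg["IMAGE_WIDTH"].as<int>();
    int  height      = cfg["IMAGE_HEIGHT"].as<int>();
    auto obj_threshold = cfg["obj_threshold"].as<float>();
    auto feature_steps = cfg["feature_steps"].as<std::vector<int>>();
    auto anchor_sizes  = cfg["anchor_sizes"].as<std::vector<std::vector<int>>>();
    std::cout << onnx_file << " -> " << engine_file << ", batch " << batch_size
              << ", input " << width << "x" << height
              << ", obj_threshold " << obj_threshold
              << ", " << feature_steps.size() << " feature levels\n";
    return 0;
}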
Build the project with the following commands, which produce the RetinaFace_trt executable:
mkdir build && cd build
cmake ..
make -j
Run the project with the following commands to get the inference results:
./RetinaFace_trt ../config.yaml ../samples       # RetinaFace-R50 / RetinaFace-MobileNet0.25
./RetinaFace_trt ../config_anti.yaml ../samples  # RetinaFaceAntiCov
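The executable takes the config file as its first argument and a folder of sample images as its second. A minimal sketch of such an entry point follows; the class and method names are illustrative assumptions (with stub bodies so the sketch compiles), not necessarily the repository's:

#include <iostream>
#include <string>

// Hypothetical detector interface mirroring the steps described above:
// construct from a config file, load or build the engine, then run inference.
class RetinaFace {
public:
    explicit RetinaFace(const std::string &config_file) : config_(config_file) {}
    void LoadEngine() {                             // stub: read or build the .trt engine
        std::cout << "loading engine per " << config_ << "\n";
    }
    void InferenceFolder(const std::string &folder) {  // stub: detect faces in every image
        std::cout << "running inference on " << folder << "\n";
    }
private:
    std::string config_;
};

int main(int argc, char **argv) {
    if (argc < 3) {
        std::cerr << "usage: " << argv[0] << " <config.yaml> <sample_folder>\n";
        return 1;
    }
    RetinaFace detector(argv[1]);       // parse config.yaml
    detector.LoadEngine();              // load engine_file, or build it from onnx_file
    detector.InferenceFolder(argv[2]);  // run detection on the sample folder
    return 0;
}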