English | 简体中文
# MCFairMOT (Multi-class FairMOT)

## Introduction

MCFairMOT is the multi-class extended version of FairMOT.
In addition, PaddleDetection provides the PP-Tracking real-time multi-object tracking system. PP-Tracking is the first open-source real-time multi-object tracking system based on the PaddlePaddle deep learning framework, featuring rich models, wide applicability, and efficient deployment.

PP-Tracking supports two paradigms: single-camera tracking (MOT) and multi-camera tracking (MTMCT). Targeting the difficulties and pain points of real-world business scenarios, PP-Tracking provides MOT functions and applications such as pedestrian tracking, vehicle tracking, multi-class tracking, small-object tracking, traffic statistics, and multi-camera tracking. Deployment is supported through both an API and a GUI, in Python and C++, on platforms including Linux and NVIDIA Jetson.

PP-Tracking also provides a public AI Studio project tutorial; please refer to this tutorial.
## Model Zoo

### MCFairMOT Results on VisDrone2019 Val Set

| backbone | input shape | MOTA | IDF1 | IDS | FPS | download | config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| DLA-34 | 1088x608 | 24.3 | 41.6 | 2314 | - | model | config |
| HRNetV2-W18 | 1088x608 | 20.4 | 39.9 | 2603 | - | model | config |
| HRNetV2-W18 | 864x480 | 18.2 | 38.7 | 2416 | - | model | config |
| HRNetV2-W18 | 576x320 | 12.0 | 33.8 | 2178 | - | model | config |

**Notes:**
### MCFairMOT Results on VisDrone Vehicle Val Set

| backbone | input shape | MOTA | IDF1 | IDS | FPS | download | config |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| DLA-34 | 1088x608 | 37.7 | 56.8 | 199 | - | model | config |
| HRNetV2-W18 | 1088x608 | 35.6 | 56.3 | 190 | - | model | config |

**Notes:**
### MCFairMOT Offline Quantization Results on VisDrone Vehicle Val Set

| Model | Compression Strategy | Prediction Latency (T4, ms) | Prediction Latency (V100, ms) | Model Configuration File | Compression Algorithm Configuration File |
| :---: | :---: | :---: | :---: | :---: | :---: |
| DLA-34 | baseline | 41.3 | 21.9 | Configuration File | - |
| DLA-34 | offline quantization | 37.8 | 21.2 | Configuration File | Configuration File |
## Getting Started

### 1. Training

Train MCFairMOT on 4 GPUs with the following command:

```bash
python -m paddle.distributed.launch --log_dir=./mcfairmot_dla34_30e_1088x608_visdrone/ --gpus 0,1,2,3 tools/train.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone.yml
```
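If training is interrupted, it can usually be resumed from a saved checkpoint with the `-r` flag of `tools/train.py`; a minimal sketch, assuming a checkpoint saved under the default `output/` directory (the epoch number `10` is a hypothetical example):

```bash
# resume training from a previously saved epoch checkpoint (path is an assumption
# based on PaddleDetection's default output_dir and per-epoch snapshot naming)
python -m paddle.distributed.launch --log_dir=./mcfairmot_dla34_30e_1088x608_visdrone/ --gpus 0,1,2,3 \
    tools/train.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone.yml \
    -r output/mcfairmot_dla34_30e_1088x608_visdrone/10
```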
### 2. Evaluation

Evaluate the tracking performance of MCFairMOT on the val dataset on a single GPU with the following commands:

```bash
# use weights released in the PaddleDetection model zoo
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/mcfairmot_dla34_30e_1088x608_visdrone.pdparams

# use a checkpoint saved during training
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone.yml -o weights=output/mcfairmot_dla34_30e_1088x608_visdrone/model_final.pdparams
```
**Notes:**
- The evaluation dataset is configured in `configs/datasets/mcmot.yml`; to evaluate on a different dataset, modify the following section:

```yaml
EvalMOTDataset:
  !MOTImageFolder
    dataset_dir: dataset/mot
    data_root: your_dataset/images/val
    keep_ori_im: False # set True if save visualization images or video
```

- Tracking results are saved in `{output_dir}/mot_results/`, with one txt file per sequence. Each line of a txt file is `frame,id,x1,y1,w,h,score,cls_id,-1,-1`. You can set `{output_dir}` with `--output_dir`. A concrete example follows these notes.
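Putting the two notes above together, a minimal sketch that writes the results to a custom directory and inspects the txt format (the directory name `./eval_output` is an arbitrary example):

```bash
# evaluate and redirect results via --output_dir (described in the notes above)
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone.yml \
    -o weights=output/mcfairmot_dla34_30e_1088x608_visdrone/model_final.pdparams \
    --output_dir=./eval_output

# each line is frame,id,x1,y1,w,h,score,cls_id,-1,-1
head -n 5 ./eval_output/mot_results/*.txt
```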
### 3. Inference

Run inference on a video on a single GPU with the following command:

```bash
# inference on a video, saving the visualization video
CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/mcfairmot_dla34_30e_1088x608_visdrone.pdparams --video_file={your video name}.mp4 --save_videos
```
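If you also want txt result files at this step, the `--save_mot_txts` flag described in the deployment notes below can, to our understanding, be appended to `tools/infer_mot.py` as well; a sketch under that assumption:

```bash
# inference on a video, saving both the visualization video and the txt results
CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/mot/mcfairmot_dla34_30e_1088x608_visdrone.pdparams \
    --video_file={your video name}.mp4 --save_videos --save_mot_txts
```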
**Notes:**
- Please make sure ffmpeg is installed first. On Linux (Ubuntu), you can install it directly with `apt-get update && apt-get install -y ffmpeg`.

### 4. Export model

Export the model with the following command:

```bash
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/mcfairmot_dla34_30e_1088x608_visdrone.pdparams
```
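By default, `tools/export_model.py` writes the inference model to `output_inference/<config_name>/`, which is the directory the deployment command below points at. The contents look roughly like this (a sketch; exact file names can vary across PaddleDetection versions):

```bash
# list the exported inference model files
ls output_inference/mcfairmot_dla34_30e_1088x608_visdrone/
# infer_cfg.yml  model.pdiparams  model.pdiparams.info  model.pdmodel
```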
### 5. Using the exported model for Python inference

```bash
python deploy/pptracking/python/mot_jde_infer.py --model_dir=output_inference/mcfairmot_dla34_30e_1088x608_visdrone --video_file={your video name}.mp4 --device=GPU --save_mot_txts
```
**Notes:**
- Add `--save_mot_txts` to save the txt result files, or `--save_images` to save the visualization images.
- Each line of the result txt file is `frame,id,x1,y1,w,h,score,cls_id,-1,-1`.
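Because each result file is plain comma-separated text in the format above, per-class results can be inspected with standard shell tools; a minimal sketch (the file path `mot_results/your_sequence.txt` is a hypothetical example):

```bash
# keep only boxes of class 0: cls_id is the 8th comma-separated field
awk -F, '$8 == 0' mot_results/your_sequence.txt

# count tracked boxes per class across the sequence
awk -F, '{n[$8]++} END {for (c in n) print "cls_id " c ": " n[c]}' mot_results/your_sequence.txt
```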
### 6. Offline quantization

The offline quantization model is calibrated on the VisDrone Vehicle val-set, run as follows:

```bash
CUDA_VISIBLE_DEVICES=0 python3.7 tools/post_quant.py -c configs/mot/mcfairmot/mcfairmot_dla34_30e_1088x608_visdrone_vehicle_bytetracker.yml --slim_config=configs/slim/post_quant/mcfairmot_ptq.yml
```
**Notes:**
## Citations

```
@article{zhang2020fair,
  title={FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking},
  author={Zhang, Yifu and Wang, Chunyu and Wang, Xinggang and Zeng, Wenjun and Liu, Wenyu},
  journal={arXiv preprint arXiv:2004.01888},
  year={2020}
}

@ARTICLE{9573394,
  author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Detection and Tracking Meet Drones Challenge},
  year={2021},
  pages={1-1},
  doi={10.1109/TPAMI.2021.3119563}
}

@article{zhang2021bytetrack,
  title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
  author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
  journal={arXiv preprint arXiv:2110.06864},
  year={2021}
}
```