
Sharpiless / Yolov5-distillation-train-inference

License: GPL-3.0
Yolov5 distillation training | Yolov5 knowledge distillation training, with support for training on your own data

Programming Languages

Python, Jupyter Notebook, Shell, Dockerfile

Projects that are alternatives of or similar to Yolov5-distillation-train-inference

simpleAICV-pytorch-ImageNet-COCO-training
SimpleAICV: PyTorch training examples on the ImageNet (ILSVRC2012), COCO2017, and VOC2007+2012 datasets. Includes ResNet/DarkNet/RetinaNet/FCOS/CenterNet/TTFNet/YOLOv3/YOLOv4/YOLOv5/YOLOX.
Stars: ✭ 276 (+228.57%)
Mutual labels:  distillation, yolov5
ZAQ-code
CVPR 2021 : Zero-shot Adversarial Quantization (ZAQ)
Stars: ✭ 59 (-29.76%)
Mutual labels:  model-compression, distillation
YOLOv4MLNet
Use the YOLO v4 and v5 (ONNX) models for object detection in C# using ML.Net
Stars: ✭ 61 (-27.38%)
Mutual labels:  yolov5
ROS-Object-Detection-2Dto3D-RealsenseD435
Uses the Intel RealSense D435 depth camera for object detection with the Yolov3-5 framework via OpenCV DNN (old version) / TensorRT (current) under ROS Melodic. Real-time display of the point cloud in the camera coordinate system.
Stars: ✭ 45 (-46.43%)
Mutual labels:  yolov5
yolov5 onnx2caffe
yolov5 onnx caffe
Stars: ✭ 73 (-13.1%)
Mutual labels:  yolov5
YOLOv5-Lite
🍅🍅🍅YOLOv5-Lite: lighter, faster and easier to deploy. Evolved from yolov5; the model is only 930+ KB (int8) or 1.7 MB (fp16) and can reach 10+ FPS on a Raspberry Pi 4B with a 320×320 input.
Stars: ✭ 1,230 (+1364.29%)
Mutual labels:  yolov5
yolov5 for rknn
YOLOv5 in PyTorch > ONNX > RKNN
Stars: ✭ 79 (-5.95%)
Mutual labels:  yolov5
bert-squeeze
🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-33.33%)
Mutual labels:  distillation
realtime-object-detection
Detects objects in images/streaming video
Stars: ✭ 16 (-80.95%)
Mutual labels:  yolov5
FKD
A Fast Knowledge Distillation Framework for Visual Recognition
Stars: ✭ 49 (-41.67%)
Mutual labels:  distillation
YOLOX deepsort tracker
using yolox+deepsort for object-tracking
Stars: ✭ 228 (+171.43%)
Mutual labels:  yolov5
yolov5-deepsort-tensorrt
A C++ implementation of yolov5 and deepsort
Stars: ✭ 207 (+146.43%)
Mutual labels:  yolov5
AIGO-Pedestrian-Crosswalk-Guide
Autonomous-driving AI guide dog: AIGO - crosswalk guide
Stars: ✭ 12 (-85.71%)
Mutual labels:  yolov5
yolov5 face landmark
Face detection based on yolov5, with facial keypoint (landmark) detection
Stars: ✭ 159 (+89.29%)
Mutual labels:  yolov5
Comet.Box
Collection of Object Detection and Segmentation Pipelines🛸🚀
Stars: ✭ 24 (-71.43%)
Mutual labels:  yolov5
yolov5-opencv-cpp-python
Example of using ultralytics YOLO V5 with OpenCV 4.5.4, C++ and Python
Stars: ✭ 122 (+45.24%)
Mutual labels:  yolov5
yolov5 tensorrt int8 tools
TensorRT int8 quantization of yolov5 onnx models
Stars: ✭ 105 (+25%)
Mutual labels:  yolov5
yolov5-crowdhuman
Head and person detection using yolov5, including detection in crowds.
Stars: ✭ 79 (-5.95%)
Mutual labels:  yolov5
allie
🤖 A machine learning framework for audio, text, image, video, or .CSV files (50+ featurizers and 15+ model trainers).
Stars: ✭ 93 (+10.71%)
Mutual labels:  model-compression
Regularization-Pruning
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (-47.62%)
Mutual labels:  model-compression

Code repository:

https://github.com/Sharpiless/Yolov5-distillation-train-inference

Latest version:

Please see: https://github.com/Sharpiless/yolov5-distillation-5.0

Teacher model weights:

Link: https://pan.baidu.com/s/13gq5QwCrRNdRXWzSYUeJIw

Extraction code: 4ppv

Distillation training:

python train_distill.py --weights yolov5s.pt \
    --teacher weights/yolov5l_voc.pt --distill_ratio 0.001 \
    --teacher-cfg model/yolov5l.yaml --data data/voc.yaml \
    --epochs 30 --batch-size 16

Training parameters:

--weights: pretrained model weights (used to initialize the student)

--teacher: teacher model weights

--distill-ratio: weight of the distillation loss

--with-gt-loss: whether to also use the ground-truth loss during distillation

--soft-loss: whether to use KL divergence as the distillation class loss (the default is an L2 loss on the logits; see the sketch after this list)

--full-output-loss: whether to use the loss from "Object Detection at 200 Frames Per Second"
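For reference, here is a minimal sketch of the two class-loss options that --soft-loss switches between. It assumes student and teacher class logits of the same shape; the function and tensor names are illustrative and not taken from this repository.

import torch.nn.functional as F

def class_distill_loss(student_logits, teacher_logits, soft_loss=False, T=1.0):
    # student_logits, teacher_logits: [N, num_classes] raw class outputs.
    if soft_loss:
        # KL divergence between the softened teacher and student distributions.
        p_teacher = F.softmax(teacher_logits / T, dim=-1)
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    # Default: L2 ("logits") loss computed directly on the raw logits.
    return F.mse_loss(student_logits, teacher_logits)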

That paper ("Object Detection at 200 Frames Per Second") modifies each of these loss terms; the key idea is that the bounding-box coordinates and class probabilities are learned from the teacher only when the teacher network's objectness value is high.
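A rough sketch of that gating idea, assuming per-anchor box, objectness, and class outputs from both models (again, a hypothetical illustration rather than this repository's implementation):

import torch
import torch.nn.functional as F

def gated_distill_loss(s_box, s_obj, s_cls, t_box, t_obj, t_cls):
    # All tensors share the same grid/anchor layout: box [..., 4], obj [..., 1], cls [..., C].
    t_obj_prob = torch.sigmoid(t_obj)  # teacher objectness in [0, 1]
    # The objectness term is always distilled.
    obj_loss = F.mse_loss(torch.sigmoid(s_obj), t_obj_prob)
    # Box and class terms only contribute where the teacher is confident an object exists.
    box_loss = (t_obj_prob * (s_box - t_box).pow(2)).mean()
    cls_loss = (t_obj_prob * (torch.sigmoid(s_cls) - torch.sigmoid(t_cls)).pow(2)).mean()
    return obj_loss + box_loss + cls_loss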

Prepare the dataset:

By default, data/voc.yaml is used and the VOC dataset is downloaded automatically for training.

Alternatively, run data/scripts/get_voc2007.sh manually to download it.

To train on your own dataset, you only need to point the yaml file at your own data paths.
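A minimal example of what such a yaml could look like, following the standard YOLOv5 dataset format; the paths and class names below are placeholders:

# my_dataset.yaml (placeholder paths and class names)
train: ../datasets/my_data/images/train   # directory of training images
val: ../datasets/my_data/images/val       # directory of validation images

nc: 2                      # number of classes
names: ['person', 'car']   # class names, in label-index order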

Experimental results:

Dataset:

VOC2007 (VOC2012 is used as additional unlabeled data)

GPU: 2080Ti × 1

Batch size: 16

Epochs: 30

Baseline: Yolov5s

Teacher model: Yolov5l (mAP 0.5:0.95 = 0.541)

Here the images newly added in VOC2012 (about 2k) are treated as unlabeled data.

Teacher model | Training method | Distillation loss | P      | R      | mAP50
(none)        | normal training | none              | 0.7756 | 0.7115 | 0.7609
Yolov5l       | output based    | L2                | 0.7585 | 0.7198 | 0.7644
Yolov5l       | output based    | KL                | 0.7417 | 0.7207 | 0.7536
Yolov5m       | output based    | L2                | 0.7682 | 0.7436 | 0.7976
Yolov5m       | output based    | KL                | 0.7731 | 0.7313 | 0.7931

Training results

Parameters and details are still being refined; KL divergence, L2 logits loss, and a sigmoid distillation loss are supported, among others.
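One plausible reading of the sigmoid distillation loss (it is not spelled out above, so this is an assumption) is binary cross-entropy between the student's class logits and the teacher's sigmoid probabilities used as soft targets:

import torch
import torch.nn.functional as F

def sigmoid_distill_loss(student_cls_logits, teacher_cls_logits):
    # Teacher sigmoid probabilities serve as soft targets for the student's per-class logits.
    soft_targets = torch.sigmoid(teacher_cls_logits)
    return F.binary_cross_entropy_with_logits(student_cls_logits, soft_targets)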

To-do list:

  • [√] Use the logit outputs as the input to the distillation loss
  • [√] Improve the code structure and the related parameter settings
  • [×] Find out why the distillation loss has little effect (or converges slowly)
  • [×] Complete the related experiments and test the accuracy
  • [√] Modify the dataloader to speed up training
  • [√] Batch the teacher model's inference to speed up training

Possible issues:

  • 1. Too few training epochs for convergence; distillation training may converge more slowly yet reach a higher final result.
  • 2. The teacher model is Yolov5l trained on VOC for 30 epochs (mAP 0.5:0.95 = 0.541); its quality is worse than the annotations, which may hurt the distillation results.
  • 3. There are still many tunable parameters (the teacher model's detection and IOU thresholds, the type of distillation loss, the distillation loss ratio, etc.).

My WeChat official account:

