AB distillation: Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019)
Stars: ✭ 105 (-96.01%)
Pretrained Language Model: Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab.
Stars: ✭ 2,033 (-22.82%)
ACCV TinyGAN: BigGAN; Knowledge Distillation; Black-Box; Fast Training; 16x compression
Stars: ✭ 62 (-97.65%)
FGD: Focal and Global Knowledge Distillation for Detectors (CVPR 2022)
Stars: ✭ 124 (-95.29%)
ProSelfLC-2021: Noisy labels; missing labels; semi-supervised learning; entropy; uncertainty; robustness and generalisation.
Stars: ✭ 45 (-98.29%)
SemCKD: Official implementation of the AAAI 2021 paper "Cross-Layer Distillation with Semantic Calibration".
Stars: ✭ 42 (-98.41%)
FKD: A Fast Knowledge Distillation Framework for Visual Recognition
Stars: ✭ 49 (-98.14%)
bert-AAD: Adversarial Adaptation with Distillation for BERT Unsupervised Domain Adaptation
Stars: ✭ 27 (-98.97%)
mmrazor: OpenMMLab Model Compression Toolbox and Benchmark.
Stars: ✭ 644 (-75.55%)
WSDM2021 NSM: Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals (WSDM 2021).
Stars: ✭ 84 (-96.81%)
MoTIS: Mobile (iOS) text-to-image search powered by multimodal semantic representation models (e.g., OpenAI's CLIP). Accepted at NAACL 2022.
Stars: ✭ 60 (-97.72%)
MutualGuide: Localize to Classify and Classify to Localize: Mutual Guidance in Object Detection
Stars: ✭ 97 (-96.32%)
neural-compressor: Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool), which provides unified APIs for network compression techniques such as low-precision quantization, sparsity, pruning, and knowledge distillation across different deep learning frameworks, in pursuit of optimal inference performance.
Stars: ✭ 666 (-74.72%)
SAN: [ECCV 2020] Scale Adaptive Network: Learning to Learn Parameterized Classification Networks for Scalable Input Images
Stars: ✭ 41 (-98.44%)
LD: Localization Distillation for Dense Object Detection (CVPR 2022)
Stars: ✭ 271 (-89.71%)
MLIC-KD-WSD: Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection (ACM MM 2018)
Stars: ✭ 58 (-97.8%)
Distill-BERT-Textgen: Research code for the ACL 2020 paper "Distilling Knowledge Learned in BERT for Text Generation".
Stars: ✭ 121 (-95.41%)
LabelRelaxation-CVPR21: Official PyTorch implementation of Embedding Transfer with Label Relaxation for Improved Metric Learning (CVPR 2021)
Stars: ✭ 37 (-98.6%)
awesome-efficient-gnn: Code and resources on scalable and efficient Graph Neural Networks
Stars: ✭ 498 (-81.09%)
BAKE: Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification
Stars: ✭ 79 (-97%)
distill-and-select: Authors' official PyTorch implementation of "DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval" (IJCV 2022)
Stars: ✭ 43 (-98.37%)
head-network-distillation: [IEEE Access] "Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-constrained Edge Computing Systems" and [ACM MobiCom HotEdgeVideo 2019] "Distilled Split Deep Neural Networks for Edge-assisted Real-time Systems"
Stars: ✭ 27 (-98.97%)
kdtf: Knowledge Distillation using TensorFlow (a minimal sketch of the classic soft-target distillation loss appears after this list)
Stars: ✭ 139 (-94.72%)
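Most of the repositories above build on the classic soft-target distillation objective of Hinton et al. (2015). The following is a minimal PyTorch sketch of that loss for reference; it is not taken from any specific repository in this list, and the function name, temperature `T`, and mixing weight `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target knowledge distillation loss (Hinton et al., 2015) sketch.

    Mixes the KL divergence between temperature-softened teacher and student
    distributions with the ordinary cross-entropy on the hard labels.
    """
    # Soft targets: KL(teacher || student) at temperature T, scaled by T^2
    # to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

The T² scaling is the usual convention for keeping the soft-target gradients on the same scale as the hard-label term; individual repositories above replace or extend this objective with feature-, boundary-, or localization-level distillation terms.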