fasterai1
FasterAI: A repository for making smaller and faster models with the FastAI library.
Stars: ✭ 34 (-83.33%)
Auto-Compression
Automatic DNN compression tool with various model compression and neural architecture search techniques
Stars: ✭ 19 (-90.69%)
Pocketflow
An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
Stars: ✭ 2,672 (+1209.8%)
Jfasttext
Java interface for fastText
Stars: ✭ 193 (-5.39%)
SIGIR2021 Conure
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-88.73%)
Ld Net
Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling
Stars: ✭ 148 (-27.45%)
Collaborative Distillation
PyTorch code for our CVPR'20 paper "Collaborative Distillation for Ultra-Resolution Universal Style Transfer"
Stars: ✭ 138 (-32.35%)
sparsify
Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint
Stars: ✭ 138 (-32.35%)
agegenderLMTCNN
Jia-Hong Lee, Yi-Ming Chan, Ting-Yen Chen, and Chu-Song Chen, "Joint Estimation of Age and Gender from Unconstrained Face Images using Lightweight Multi-task CNN for Mobile Applications," IEEE International Conference on Multimedia Information Processing and Retrieval, MIPR 2018
Stars: ✭ 39 (-80.88%)
dynetx
Dynamic Network Analysis library
Stars: ✭ 75 (-63.24%)
RMNet
The RM operation can equivalently convert ResNet to VGG, which is better for pruning, and can help RepVGG perform better when the depth is large.
Stars: ✭ 129 (-36.76%)
Ntagger
Reference PyTorch code for named entity tagging
Stars: ✭ 58 (-71.57%)
Nni
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Stars: ✭ 10,698 (+5144.12%)
nuxt-prune-html
🔌⚡ Nuxt module to prune HTML before sending it to the browser (it removes elements matching CSS selector(s)); useful for boosting performance by showing different HTML to bots/audits, removing all the scripts via dynamic rendering
Stars: ✭ 69 (-66.18%)
Aquvitae
The Easiest Knowledge Distillation Library for Lightweight Deep Learning
Stars: ✭ 71 (-65.2%)
Hrank
PyTorch implementation of our CVPR 2020 (Oral) paper "HRank: Filter Pruning using High-Rank Feature Map"
Stars: ✭ 164 (-19.61%)
CResMD
(ECCV 2020) Interactive Multi-Dimension Modulation with Dynamic Controllable Residual Learning for Image Restoration
Stars: ✭ 92 (-54.9%)
Aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+122.06%)
Bipointnet
This project is the official implementation of our accepted ICLR 2021 paper "BiPointNet: Binary Neural Network for Point Clouds".
Stars: ✭ 27 (-86.76%)
mmrazor
OpenMMLab Model Compression Toolbox and Benchmark.
Stars: ✭ 644 (+215.69%)
Skimcaffe
Caffe for Sparse Convolutional Neural Network
Stars: ✭ 230 (+12.75%)
Ghostnet.pytorch
[CVPR 2020] GhostNet: More Features from Cheap Operations
Stars: ✭ 440 (+115.69%)
Generalizing-Lottery-Tickets
This repository contains code to replicate the experiments from the NeurIPS 2019 paper "One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers"
Stars: ✭ 48 (-76.47%)
jetbrains-utility
Remove/Backup – settings & cli for macOS (OS X) – DataGrip, AppCode, CLion, Gogland, IntelliJ, PhpStorm, PyCharm, Rider, RubyMine, WebStorm
Stars: ✭ 62 (-69.61%)
DLCV2018SPRING
Deep Learning for Computer Vision (CommE 5052) at NTU
Stars: ✭ 38 (-81.37%)
Pytorch Pruning
PyTorch implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference
Stars: ✭ 740 (+262.75%)
FisherPruning
Group Fisher Pruning for Practical Network Compression (ICML 2021)
Stars: ✭ 127 (-37.75%)
CAE-ADMM
CAE-ADMM: Implicit Bitrate Optimization via ADMM-Based Pruning in Compressive Autoencoders
Stars: ✭ 34 (-83.33%)
Distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+1743.14%)
torchprune
A research library for PyTorch-based neural network pruning, compression, and more.
Stars: ✭ 133 (-34.8%)
SMSR
[CVPR 2021] Exploring Sparsity in Image Super-Resolution for Efficient Inference
Stars: ✭ 205 (+0.49%)
neural-compressor
Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) provides unified APIs for network compression technologies, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, with the goal of optimal inference performance.
Stars: ✭ 666 (+226.47%)
NeuralMerger
Yi-Min Chou, Yi-Ming Chan, Jia-Hong Lee, Chih-Yi Chiu, Chu-Song Chen, "Unifying and Merging Well-trained Deep Neural Networks for Inference Stage," International Joint Conference on Artificial Intelligence (IJCAI), 2018
Stars: ✭ 20 (-90.2%)
Awesome Emdl
Embedded and mobile deep learning research resources
Stars: ✭ 554 (+171.57%)
DIN-Group-Activity-Recognition-Benchmark
A new codebase for Group Activity Recognition. It contains code for the ICCV 2021 paper "Spatio-Temporal Dynamic Inference Network for Group Activity Recognition" and some other methods.
Stars: ✭ 26 (-87.25%)
batchnorm prune
TensorFlow code for "Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers"
Stars: ✭ 30 (-85.29%)
Cen
[NeurIPS 2020] Code release for the paper "Deep Multimodal Fusion by Channel Exchanging" (in PyTorch)
Stars: ✭ 112 (-45.1%)
sparsezoo
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
Stars: ✭ 264 (+29.41%)
Selecsls Pytorch
Reference ImageNet implementation of the SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On implicit filter level sparsity in Convolutional Neural Networks".
Stars: ✭ 251 (+23.04%)
GAN-LTH
[ICLR 2021] "GANs Can Play Lottery Too" by Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen
Stars: ✭ 24 (-88.24%)
Filter Grafting
Filter Grafting for Deep Neural Networks (CVPR 2020)
Stars: ✭ 110 (-46.08%)
Keras Surgeon
Pruning and other network surgery for trained Keras models.
Stars: ✭ 339 (+66.18%)
deep-compression
Learning both Weights and Connections for Efficient Neural Networks (https://arxiv.org/abs/1506.02626)
Stars: ✭ 156 (-23.53%)