Nni - An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
Stars: ✭ 10,698 (+1448.19%)
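Nearly every hyper-parameter tuner in this list automates some variant of the same loop: sample a configuration, evaluate it, keep the best. As a self-contained illustration of that loop (a toy sketch, not NNI's actual API), a minimal random search over a made-up objective:

```python
import random

# Toy objective standing in for validation accuracy; a real AutoML run
# would train and score a model here. Peaks near lr=0.1, num_layers=3.
def objective(lr, num_layers):
    return -(lr - 0.1) ** 2 - 0.01 * (num_layers - 3) ** 2

def random_search(trials, seed=0):
    """Sample `trials` random configurations and return the best one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {"lr": rng.uniform(1e-4, 1.0), "num_layers": rng.randint(1, 8)}
        score = objective(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(200)
```

Toolkits like NNI wrap this loop with smarter samplers (TPE, evolution, Bayesian optimization), early stopping, and distributed trial execution.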
Hypernets - A General Automated Machine Learning framework to simplify the development of End-to-end AutoML toolkits in specific domains.
Stars: ✭ 221 (-68.02%)
Paddleslim - PaddleSlim is an open-source library for deep model compression and architecture search.
Stars: ✭ 677 (-2.03%)
Awesome Autodl - A curated list of automated deep learning (including neural architecture search and hyper-parameter optimization) resources.
Stars: ✭ 1,819 (+163.24%)
Autodl Projects - Automated deep learning algorithms implemented in PyTorch.
Stars: ✭ 1,187 (+71.78%)
Nas Benchmark - "NAS evaluation is frustratingly hard", ICLR 2020
Stars: ✭ 126 (-81.77%)
Auto Sklearn - Automated Machine Learning with scikit-learn
Stars: ✭ 5,916 (+756.15%)
BossNAS - (ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Stars: ✭ 125 (-81.91%)
Hyperactive - A hyperparameter optimization and data collection toolbox for convenient and fast prototyping of machine-learning models.
Stars: ✭ 182 (-73.66%)
nas-encodings - Encodings for neural architecture search
Stars: ✭ 29 (-95.8%)
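A common family of NAS encodings, as surveyed in work like nas-encodings, represents a cell as an operation list plus an adjacency matrix over nodes, flattened into one vector a search algorithm can manipulate. A toy sketch of that idea (illustration only, not code from the repository):

```python
# Hypothetical operation vocabulary for the sketch.
OPS = ["conv3x3", "conv1x1", "maxpool", "skip"]

def encode(ops, adjacency):
    """Flatten (op indices, upper-triangular adjacency) into one vector."""
    vec = [OPS.index(op) for op in ops]
    n = len(adjacency)
    for i in range(n):
        for j in range(i + 1, n):  # DAG: only edges from earlier to later nodes
            vec.append(adjacency[i][j])
    return vec

arch_ops = ["conv3x3", "skip", "maxpool"]
arch_adj = [[0, 1, 1],
            [0, 0, 1],
            [0, 0, 0]]
encoding = encode(arch_ops, arch_adj)  # [0, 3, 2, 1, 1, 1]
```

Different encodings (adjacency-based, path-based, learned embeddings) can significantly change how well a given search algorithm performs.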
Autogluon - AutoGluon: AutoML for Text, Image, and Tabular Data
Stars: ✭ 3,920 (+467.29%)
Awesome Automl Papers - A curated list of automated machine learning papers, articles, tutorials, slides and projects
Stars: ✭ 3,198 (+362.81%)
Meta-SAC - Auto-tune the Entropy Temperature of Soft Actor-Critic via Metagradient - 7th ICML AutoML workshop 2020
Stars: ✭ 19 (-97.25%)
Hpbandster - A distributed Hyperband implementation on steroids.
Stars: ✭ 456 (-34.01%)
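Hpbandster combines Hyperband with Bayesian optimization; the successive-halving step Hyperband builds on can be sketched in a few lines of pure Python. Here `evaluate(config, budget)` is a hypothetical stand-in for partially training a model with a given resource budget:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Repeatedly evaluate all surviving configs on a growing budget,
    keeping only the top 1/eta fraction each round."""
    budget = min_budget
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget), reverse=True)
        configs = scored[: max(1, len(scored) // eta)]  # keep the best third
        budget *= eta  # survivors get eta times more resource next round
    return configs[0]

# Toy example: configs are plain numbers and "training longer" does not
# change the ranking, so the best config simply survives every round.
best = successive_halving(
    configs=[0.2, 0.9, 0.5, 0.1, 0.7, 0.4, 0.8, 0.3, 0.6],
    evaluate=lambda c, budget: c,  # higher is better; budget unused here
)
```

The appeal is that bad configurations are discarded after only a small budget, so most compute is spent on promising ones.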
Deephyper - DeepHyper: Scalable Asynchronous Neural Architecture and Hyperparameter Search for Deep Neural Networks
Stars: ✭ 117 (-83.07%)
Smac3 - Sequential Model-based Algorithm Configuration
Stars: ✭ 564 (-18.38%)
Autodl - Automated Deep Learning without ANY human intervention. 1st place solution for the AutoDL challenge@NeurIPS.
Stars: ✭ 854 (+23.59%)
Shape Adaptor - The implementation of "Shape Adaptor: A Learnable Resizing Module" [ECCV 2020].
Stars: ✭ 59 (-91.46%)
Once For All - [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment
Stars: ✭ 1,127 (+63.1%)
Fairdarts - Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search
Stars: ✭ 145 (-79.02%)
Dna - Block-wisely Supervised Neural Architecture Search with Knowledge Distillation (CVPR 2020)
Stars: ✭ 147 (-78.73%)
MetaD2A - Official PyTorch implementation of "Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets" (ICLR 2021)
Stars: ✭ 49 (-92.91%)
HyperKeras - An AutoDL tool for Neural Architecture Search and Hyperparameter Optimization on TensorFlow and Keras
Stars: ✭ 29 (-95.8%)
Nas Bench 201 - NAS-Bench-201 API and Instruction
Stars: ✭ 537 (-22.29%)
Boml - Bilevel Optimization Library in Python for Multi-Task and Meta Learning
Stars: ✭ 120 (-82.63%)
Ray - An open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.
Stars: ✭ 18,547 (+2584.08%)
Haq - [CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision
Stars: ✭ 247 (-64.25%)
CM-NAS - CM-NAS: Cross-Modality Neural Architecture Search for Visible-Infrared Person Re-Identification (ICCV 2021)
Stars: ✭ 39 (-94.36%)
pymfe - Python Meta-Feature Extractor package.
Stars: ✭ 89 (-87.12%)
BitPack - BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models.
Stars: ✭ 36 (-94.79%)
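Tools like BitPack store ultra-low-precision weights densely instead of wasting a full byte per value. The underlying idea can be sketched with standard-library integers: quantize floats to k bits, then pack several k-bit fields into one machine word. This is a toy illustration, not BitPack's actual storage format:

```python
def quantize(values, k):
    """Map floats in [0, 1] to k-bit integers (uniform quantization)."""
    levels = (1 << k) - 1
    return [round(v * levels) for v in values]

def pack(ints, k):
    """Pack k-bit integers into one Python int (little-endian bit fields)."""
    word = 0
    for i, v in enumerate(ints):
        word |= v << (i * k)
    return word

def unpack(word, k, count):
    """Recover `count` k-bit fields from a packed word."""
    mask = (1 << k) - 1
    return [(word >> (i * k)) & mask for i in range(count)]

q = quantize([0.0, 0.5, 1.0], k=2)   # [0, 2, 3]
w = pack(q, k=2)                     # three 2-bit fields in one word
assert unpack(w, k=2, count=3) == q  # round-trips losslessly
```

At 2 bits per weight this stores 16 weights per 32-bit word, a 16x reduction versus float32, at the cost of quantization error.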
maggy - Distribution-transparent machine learning experiments on Apache Spark
Stars: ✭ 83 (-87.99%)
Pocketflow - An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
Stars: ✭ 2,672 (+286.69%)
ZAQ-code - CVPR 2021: Zero-shot Adversarial Quantization (ZAQ)
Stars: ✭ 59 (-91.46%)
Auto-Compression - Automatic DNN compression tool with various model compression and neural architecture search techniques
Stars: ✭ 19 (-97.25%)
AutoSpeech - [InterSpeech 2020] "AutoSpeech: Neural Architecture Search for Speaker Recognition" by Shaojin Ding*, Tianlong Chen*, Xinyu Gong, Weiwei Zha, Zhangyang Wang
Stars: ✭ 195 (-71.78%)
Atm - Auto Tune Models - A multi-tenant, multi-data system for automated machine learning (model selection and tuning).
Stars: ✭ 504 (-27.06%)
Lale - Library for Semi-Automated Data Science
Stars: ✭ 198 (-71.35%)
ultraopt - Distributed asynchronous hyperparameter optimization, better than HyperOpt.
Stars: ✭ 93 (-86.54%)
codeflare - Simplifying the definition, execution, scaling, and deployment of pipelines on the cloud.
Stars: ✭ 163 (-76.41%)
TF-NAS - TF-NAS: Rethinking Three Search Freedoms of Latency-Constrained Differentiable Neural Architecture Search (ECCV 2020)
Stars: ✭ 66 (-90.45%)
mindware - An efficient open-source AutoML system for automating the machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
Stars: ✭ 34 (-95.08%)
syne-tune - Large-scale and asynchronous hyperparameter optimization at your fingertips.
Stars: ✭ 105 (-84.8%)
sparsify - Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint
Stars: ✭ 138 (-80.03%)
FEDOT - Automated modeling and machine learning framework FEDOT
Stars: ✭ 312 (-54.85%)
ESNAC - Learnable Embedding Space for Efficient Neural Architecture Compression
Stars: ✭ 27 (-96.09%)
allie - 🤖 A machine learning framework for audio, text, image, video, or .CSV files (50+ featurizers and 15+ model trainers).
Stars: ✭ 93 (-86.54%)
Autodeeplab - AutoDeeplab / auto-deeplab / AutoML for semantic segmentation, implemented in PyTorch
Stars: ✭ 269 (-61.07%)
Amc - [ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Stars: ✭ 298 (-56.87%)
Pnasnet.pytorch - PyTorch implementation of PNASNet-5 on ImageNet
Stars: ✭ 309 (-55.28%)
Auptimizer - An automatic ML model optimization tool.
Stars: ✭ 166 (-75.98%)
Naszilla - Naszilla is a Python library for neural architecture search (NAS)
Stars: ✭ 181 (-73.81%)
ATMC - [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
Stars: ✭ 41 (-94.07%)
Archai - Reproducible Rapid Research for Neural Architecture Search (NAS)
Stars: ✭ 266 (-61.51%)
Darts - Differentiable architecture search for convolutional and recurrent networks
Stars: ✭ 3,463 (+401.16%)
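Several entries above (Darts, Fairdarts, TF-NAS) build on DARTS's core trick: relax the discrete choice among candidate operations into a softmax-weighted mixture, so the architecture parameters become differentiable and can be optimized by gradient descent. A toy scalar version of that mixed operation (an illustration of the idea, not DARTS code):

```python
import math

def softmax(alphas):
    """Turn architecture logits into mixture weights."""
    exps = [math.exp(a) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alphas, ops):
    """DARTS-style mixed operation: softmax-weighted sum of candidates."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, ops))

# Stand-in candidate operations (real DARTS mixes convs, pooling, etc.).
ops = [lambda x: x,       # identity / skip connection
       lambda x: 2 * x,   # a "conv"-like transform stand-in
       lambda x: 0.0]     # zero op
y = mixed_op(1.0, alphas=[0.0, 0.0, 0.0], ops=ops)  # equal weights -> (1+2+0)/3
```

After training, the final discrete architecture is read off by keeping, at each edge, the operation with the largest alpha.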