Awesome Automl And Lightweight Models - A list of high-quality (newest) AutoML works and lightweight models, including 1) Neural Architecture Search, 2) Lightweight Structures, 3) Model Compression, Quantization and Acceleration, 4) Hyperparameter Optimization, and 5) Automated Feature Engineering.
Stars: ✭ 691 (+475.83%)
Hyperactive - A hyperparameter optimization and data collection toolbox for convenient and fast prototyping of machine-learning models.
Stars: ✭ 182 (+51.67%)
Auto Sklearn - Automated Machine Learning with scikit-learn
Stars: ✭ 5,916 (+4830%)
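Auto-sklearn exposes a scikit-learn-style estimator, so a minimal usage sketch (the dataset and time budgets below are illustrative, and details may vary by release) looks roughly like this:

    import autosklearn.classification
    from sklearn.datasets import load_digits
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Any tabular X, y works the same way; digits is just a small demo dataset.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Overall and per-model time budgets in seconds; values here are only illustrative.
    automl = autosklearn.classification.AutoSklearnClassifier(
        time_left_for_this_task=120,
        per_run_time_limit=30,
    )
    automl.fit(X_train, y_train)
    print(accuracy_score(y_test, automl.predict(X_test)))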
Meta-SAC - Auto-tune the Entropy Temperature of Soft Actor-Critic via Metagradient (7th ICML AutoML Workshop, 2020)
Stars: ✭ 19 (-84.17%)
Determined - Deep Learning Training Platform
Stars: ✭ 1,171 (+875.83%)
Mfe - Meta-Feature Extractor
Stars: ✭ 20 (-83.33%)
Hcn Prototypeloss Pytorch - Hierarchical Co-occurrence Network with Prototype Loss for Few-shot Learning (PyTorch)
Stars: ✭ 17 (-85.83%)
Gradient Free Optimizers - Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
Stars: ✭ 711 (+492.5%)
Hord - Efficient Hyperparameter Optimization of Deep Learning Algorithms Using Deterministic RBF Surrogates
Stars: ✭ 99 (-17.5%)
Neural Process Family - Code for the Neural Processes website and PyTorch replications of 4 papers on NPs.
Stars: ✭ 53 (-55.83%)
Smac3 - Sequential Model-based Algorithm Configuration
Stars: ✭ 564 (+370%)
Few Shot Text Classification - Few-shot binary text classification with Induction Networks and Word2Vec weights initialization
Stars: ✭ 32 (-73.33%)
Pytorch Meta - A collection of extensions and data-loaders for few-shot learning & meta-learning in PyTorch
Stars: ✭ 1,239 (+932.5%)
Rl Baselines Zoo - A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
Stars: ✭ 839 (+599.17%)
Maxl - The implementation of "Self-Supervised Generalisation with Meta Auxiliary Learning" [NeurIPS 2019].
Stars: ✭ 101 (-15.83%)
Few Shot - Repository for few-shot learning machine-learning projects
Stars: ✭ 727 (+505.83%)
Hyperparameter hunter - Easy hyperparameter optimization and automatic result saving across machine learning algorithms and libraries
Stars: ✭ 648 (+440%)
Deep architect - A general, modular, and programmable architecture search framework
Stars: ✭ 110 (-8.33%)
Tpot - A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
Stars: ✭ 8,378 (+6881.67%)
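A minimal sketch of how TPOT's genetic-programming search is typically run (generation and population sizes below are illustrative, not recommendations):

    from tpot import TPOTClassifier
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Evolve candidate pipelines for a few generations; larger budgets search more thoroughly.
    tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2, random_state=0)
    tpot.fit(X_train, y_train)
    print(tpot.score(X_test, y_test))
    tpot.export("best_pipeline.py")  # export the winning pipeline as plain scikit-learn code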
Cfnet - [CVPR'17] Training a correlation filter end-to-end allows lightweight 2-layer networks (600 kB) to achieve high performance at high speed.
Stars: ✭ 496 (+313.33%)
R2d2 - [ICLR'19] Meta-learning with differentiable closed-form solvers
Stars: ✭ 96 (-20%)
Multidigitmnist - Combine multiple MNIST digits to create datasets with 100/1000 classes for few-shot learning/meta-learning
Stars: ✭ 48 (-60%)
Hpbandster - A distributed Hyperband implementation on steroids
Stars: ✭ 456 (+280%)
Learning To Learn By Pytorch - A simple PyTorch re-implementation of "Learning to learn by gradient descent by gradient descent".
Stars: ✭ 31 (-74.17%)
Xcessiv - A web-based application for quick, scalable, and automated hyperparameter tuning and stacked ensembling in Python.
Stars: ✭ 1,255 (+945.83%)
Mt Net - Code accompanying the ICML-2018 paper "Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace"
Stars: ✭ 30 (-75%)
Talos - Hyperparameter Optimization for TensorFlow, Keras and PyTorch
Stars: ✭ 1,382 (+1051.67%)
Transferlearning - Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, tutorials.
Stars: ✭ 8,481 (+6967.5%)
Learn2learn - A PyTorch Library for Meta-learning Research
Stars: ✭ 1,193 (+894.17%)
Looper - A resource list for causality in statistics, data science and physics
Stars: ✭ 23 (-80.83%)
Chocolate - A fully decentralized hyperparameter optimization framework
Stars: ✭ 112 (-6.67%)
Learningtocompare fsl - PyTorch code for the CVPR 2018 paper "Learning to Compare: Relation Network for Few-Shot Learning" (few-shot learning part)
Stars: ✭ 837 (+597.5%)
Mgo - Purely functional genetic algorithms for multi-objective optimisation
Stars: ✭ 63 (-47.5%)
Nni - An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyperparameter tuning.
Stars: ✭ 10,698 (+8815%)
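For NNI's hyperparameter-tuning workflow, the trial script pulls a sampled configuration and reports its result back to the tuner; a rough sketch of a trial (the training function is a hypothetical placeholder, and the search space and tuner live in a separate experiment config) might look like:

    import nni

    def train_and_evaluate(params):
        # Hypothetical placeholder for a real training run returning a validation metric.
        return 1.0 - (params.get("lr", 0.01) - 0.01) ** 2

    params = nni.get_next_parameter()   # hyperparameters sampled by the chosen tuner
    score = train_and_evaluate(params)
    nni.report_final_result(score)      # reported back so the tuner can propose the next trial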
Reinforcement learning tutorial with demo - DP (Policy and Value Iteration), Monte Carlo, TD Learning (SARSA, Q-Learning), Function Approximation, Policy Gradient, DQN, Imitation Learning, Meta Learning, papers, courses, etc.
Stars: ✭ 442 (+268.33%)
Test Tube - Python library to easily log experiments and parallelize hyperparameter search for neural networks
Stars: ✭ 663 (+452.5%)
Bayeso - Simple, but essential Bayesian optimization package
Stars: ✭ 57 (-52.5%)
Deep traffic - MIT DeepTraffic top 2% solution (75.01 mph) 🚗.
Stars: ✭ 47 (-60.83%)
Metaoptnet - Meta-Learning with Differentiable Convex Optimization (CVPR 2019 Oral)
Stars: ✭ 412 (+243.33%)
Deephyper - Scalable Asynchronous Neural Architecture and Hyperparameter Search for Deep Neural Networks
Stars: ✭ 117 (-2.5%)
Hyperband - Tuning hyperparams fast with Hyperband
Stars: ✭ 555 (+362.5%)
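Independent of that repository's own API, the successive-halving step at the heart of Hyperband is easy to sketch: evaluate many random configurations on a small budget, keep the best fraction, and repeat with a larger budget (the sample_config and evaluate callables below are hypothetical stand-ins):

    import random

    def successive_halving(sample_config, evaluate, n_configs=27, min_budget=1, eta=3):
        """Keep the top 1/eta configurations at each rung, multiplying the budget by eta."""
        configs = [sample_config() for _ in range(n_configs)]
        budget = min_budget
        while len(configs) > 1:
            scored = sorted(((evaluate(c, budget), c) for c in configs),
                            key=lambda sc: sc[0], reverse=True)  # higher score is better
            configs = [c for _, c in scored[: max(1, len(configs) // eta)]]
            budget *= eta
        return configs[0]

    # Toy usage: tune a single learning rate; evaluation noise shrinks as the budget grows.
    best = successive_halving(
        sample_config=lambda: {"lr": 10 ** random.uniform(-4, -1)},
        evaluate=lambda cfg, b: -abs(cfg["lr"] - 0.01) + random.gauss(0, 0.01 / b),
    )
    print(best)

Hyperband itself runs several such brackets that trade off the number of configurations against the starting budget.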
Atm - Auto Tune Models, a multi-tenant, multi-data system for automated machine learning (model selection and tuning).
Stars: ✭ 504 (+320%)
Gnn Meta Attack - Implementation of the paper "Adversarial Attacks on Graph Neural Networks via Meta Learning".
Stars: ✭ 99 (-17.5%)
Meta Dataset - A dataset of datasets for learning to learn from few examples
Stars: ✭ 483 (+302.5%)
G Meta - Graph meta-learning via local subgraphs (NeurIPS 2020)
Stars: ✭ 50 (-58.33%)
Optuna - A hyperparameter optimization framework
Stars: ✭ 5,679 (+4632.5%)
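A minimal sketch of Optuna's define-by-run interface (the objective below is a toy stand-in for a real training run):

    import optuna

    def objective(trial):
        # Hyperparameters are sampled inside the objective (define-by-run).
        x = trial.suggest_float("x", -10.0, 10.0)
        layers = trial.suggest_int("layers", 1, 4)
        return (x - 2.0) ** 2 + 0.1 * layers  # value to be minimized

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=50)
    print(study.best_params, study.best_value)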
Meta Blocks - A modular toolbox for meta-learning research with a focus on speed and reproducibility.
Stars: ✭ 110 (-8.33%)
Simple - Experimental Global Optimization Algorithm
Stars: ✭ 450 (+275%)
L2p Gnn - Code and datasets for the AAAI-2021 paper "Learning to Pre-train Graph Neural Networks"
Stars: ✭ 48 (-60%)
Meta Transfer Learning - TensorFlow and PyTorch implementations of "Meta-Transfer Learning for Few-Shot Learning" (CVPR 2019)
Stars: ✭ 439 (+265.83%)
Hyperopt Keras Cnn Cifar 100 - Auto-optimizing a neural net (and its architecture) on the CIFAR-100 dataset. Could be easily transferred to another dataset or another classification task.
Stars: ✭ 95 (-20.83%)
Neuraxle - A scikit-learn-like framework for hyperparameter tuning and AutoML in deep learning projects, with the abstractions and design patterns to do AutoML properly: pipeline steps with hyperparameter spaces, checkpoints to cut duplicate computation, and an easy path from research to production.
Stars: ✭ 377 (+214.17%)
Maml Tf - TensorFlow Implementation of MAML
Stars: ✭ 44 (-63.33%)
Hypertunity - A toolset for black-box hyperparameter optimisation.
Stars: ✭ 119 (-0.83%)
Fewshotnlp - Source code for the papers "Improving Few-shot Text Classification via Pretrained Language Representations" and "When Low Resource NLP Meets Unsupervised Language Model: Meta-pretraining Then Meta-learning for Few-shot Text Classification".
Stars: ✭ 115 (-4.17%)
What I Have Read - Paper lists, notes and slides, focused on NLP. For summarization, please refer to https://github.com/xcfcode/Summarization-Papers
Stars: ✭ 110 (-8.33%)