ICCV2019-LearningToPaint - A painting AI that can reproduce paintings stroke by stroke using deep reinforcement learning.
Stars: ✭ 1,995 (+1162.66%)
AutoGrad.jl - Julia port of the Python autograd package.
Stars: ✭ 147 (-6.96%)
Padasip - Python Adaptive Signal Processing
Stars: ✭ 138 (-12.66%)
Nettack - Implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".
Stars: ✭ 156 (-1.27%)
BNAF - PyTorch implementation of Block Neural Autoregressive Flow
Stars: ✭ 138 (-12.66%)
Rainbow - A PyTorch implementation of the Rainbow DQN agent
Stars: ✭ 147 (-6.96%)
KitNET-py - KitNET is a lightweight online anomaly detection algorithm that uses an ensemble of autoencoders.
Stars: ✭ 152 (-3.8%)
Flowpp - Code for reproducing Flow++ experiments
Stars: ✭ 137 (-13.29%)
incremental_learning.pytorch - A collection of incremental learning paper implementations, including PODNet (ECCV 2020) and Ghost.
Stars: ✭ 145 (-8.23%)
AI-plays-snake - AI trained using a genetic algorithm and deep learning to play the game of Snake
Stars: ✭ 137 (-13.29%)
SAVN - Learning to Learn How to Learn: Self-Adaptive Visual Navigation using Meta-Learning (https://arxiv.org/abs/1812.00971)
Stars: ✭ 135 (-14.56%)
Chess-Alpha-Zero - Chess reinforcement learning by AlphaGo Zero methods.
Stars: ✭ 1,868 (+1082.28%)
Policy-Gradient - Minimal Monte Carlo policy gradient (REINFORCE) algorithm implementation in Keras
Stars: ✭ 135 (-14.56%)
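The REINFORCE update that this entry implements in Keras can be sketched in a few lines of NumPy. This is an illustrative toy on a two-armed bandit, not the repository's code: a softmax policy's logits are nudged by `alpha * R * grad log pi(a)` (the log-derivative trick), so the higher-reward arm accumulates probability mass.

```python
import numpy as np

# Minimal REINFORCE sketch on a 2-armed bandit (illustrative only,
# not the repository's Keras implementation).
rng = np.random.default_rng(0)
theta = np.zeros(2)                  # logits of a softmax policy
alpha = 0.1                          # learning rate
true_rewards = np.array([0.2, 1.0])  # arm 1 pays more on average

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)                    # sample action from policy
    r = true_rewards[a] + rng.normal(0, 0.1)   # noisy reward
    grad_logpi = -pi
    grad_logpi[a] += 1.0                       # d/dtheta log softmax(theta)[a]
    theta += alpha * r * grad_logpi            # REINFORCE ascent step

print(softmax(theta))  # probability mass concentrates on arm 1
```

A baseline (e.g. a running mean of rewards) would reduce the variance of this estimator, which is what most practical implementations add next.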
NCRF++ - A neural sequence labeling toolkit, easy to apply to any sequence labeling task (e.g. NER, POS tagging, segmentation). It includes character LSTM/CNN, word LSTM/CNN, and softmax/CRF components.
Stars: ✭ 1,767 (+1018.35%)
Resources - Resources on various topics being worked on at IvLabs
Stars: ✭ 158 (+0%)
Gym-FX - Forex trading simulator environment for OpenAI Gym; observations contain the order status, performance, and a time series loaded from a CSV file of rates and indicators. Work in progress.
Stars: ✭ 151 (-4.43%)
Myriam - A vulnerable iOS app with security challenges for the security researcher inside you.
Stars: ✭ 146 (-7.59%)
Reinforcement-Learning-in-Python - Implements the Q-learning and SARSA algorithms for global path planning of a mobile robot in an unknown environment with obstacles, with a comparative analysis of the two.
Stars: ✭ 134 (-15.19%)
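The two algorithms this entry compares differ only in their bootstrap target: Q-learning is off-policy (it bootstraps from the greedy `max` over next actions), while SARSA is on-policy (it bootstraps from the action actually taken next). A minimal tabular sketch (hypothetical code, not the repository's implementation):

```python
# Illustrative tabular update rules for Q-learning vs. SARSA.
# Q is a dict mapping state -> list of action-values.

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Off-policy: target uses the best next action, regardless of policy.
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # On-policy: target uses the next action the agent actually takes.
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_next] - Q[s][a])

# Toy table: two states, two actions each.
Q = {0: [0.0, 0.0], 1: [1.0, 0.5]}
q_learning_update(Q, 0, 0, 0.0, 1)        # bootstraps from max(Q[1]) = 1.0
print(round(Q[0][0], 4))                  # 0.09
sarsa_update(Q, 0, 1, 0.0, 1, 1)          # bootstraps from Q[1][1] = 0.5
print(round(Q[0][1], 4))                  # 0.045
```

The divergence between the two targets is exactly what produces the classic behavioral difference (e.g. SARSA's safer paths near cliffs) that comparisons like this repository's analysis highlight.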
Show-Adapt-and-Tell - Code for "Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner" (ICCV 2017)
Stars: ✭ 146 (-7.59%)
PyTorch-101-Tutorial-Series - PyTorch 101 series covering everything from the basic building blocks all the way to building custom architectures.
Stars: ✭ 136 (-13.92%)
CryptoNets - A demonstration of running neural networks over data encrypted with homomorphic encryption. Homomorphic encryption allows operations such as addition and multiplication to be performed on data while it remains encrypted, so data can be kept private while computation is outsourced. The scenario in mind is a provider offering Prediction as a Service (PaaS) where the data needing predictions may be private, as in health or finance. With CryptoNets, the user encrypts their data with homomorphic encryption and sends only the encrypted message to the service provider. Because the provider can operate on the data while it is encrypted, it can evaluate a pre-trained neural network with the data remaining encrypted throughout, and finally return the prediction to the user, who decrypts the result. The provider learns nothing about the input, the prediction, or any intermediate result, since everything stays encrypted end to end. The project uses the Simple Encrypted Arithmetic Library (SEAL), version 3.2.1, an implementation of homomorphic encryption developed at Microsoft Research.
Stars: ✭ 152 (-3.8%)
TON - Telegram Open Network research group. Telegram: https://t.me/ton_research
Stars: ✭ 146 (-7.59%)
Artemis - A free genome viewer and annotation tool that allows visualization of sequence features and the results of analyses within the context of the sequence and its six-frame translation
Stars: ✭ 135 (-14.56%)
Holdem - 🃏 OpenAI Gym No-Limit Texas Hold'em environment for reinforcement learning
Stars: ✭ 135 (-14.56%)
100DaysOfMLCode - My journey to learn and grow in machine learning and artificial intelligence through the #100DaysofMLCode challenge.
Stars: ✭ 146 (-7.59%)
Hindsight-Experience-Replay - PyTorch implementation of Hindsight Experience Replay (HER), with experiments on all Fetch robotic environments.
Stars: ✭ 134 (-15.19%)
Ravens - Train robotic agents to learn pick-and-place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020.
Stars: ✭ 133 (-15.82%)
MuZero - A structured implementation of MuZero
Stars: ✭ 156 (-1.27%)
Chainer-Cifar10 - Various CNN models for CIFAR-10 with Chainer
Stars: ✭ 134 (-15.19%)
Deep-Learning-with-Python - Example projects completed to understand deep learning techniques with TensorFlow. Note: this repository is no longer maintained.
Stars: ✭ 134 (-15.19%)
Saltie - 🚗 Rocket League distributed deep reinforcement learning bot
Stars: ✭ 134 (-15.19%)
Role2vec - A scalable Gensim implementation of "Learning Role-based Graph Embeddings" (IJCAI 2018).
Stars: ✭ 134 (-15.19%)
TradzQAI - Trading environment for RL agents, with backtesting and training.
Stars: ✭ 150 (-5.06%)
Tensor2Tensor - Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Stars: ✭ 11,865 (+7409.49%)
Ed4 - Computational Cognitive Neuroscience, Fourth Edition
Stars: ✭ 133 (-15.82%)
DNW - Discovering Neural Wirings (https://arxiv.org/abs/1906.00586)
Stars: ✭ 133 (-15.82%)
OpenBot - OpenBot leverages smartphones as brains for low-cost robots. We have designed a small electric vehicle that costs about $50 and serves as a robot body. Our software stack for Android smartphones supports advanced robotics workloads such as person following and real-time autonomous navigation.
Stars: ✭ 2,025 (+1181.65%)
hep_ml - Machine learning for high energy physics.
Stars: ✭ 133 (-15.82%)
Reinforcement-Learning - Implementations of selected reinforcement learning algorithms in TensorFlow: A3C, DDPG, REINFORCE, DQN, etc.
Stars: ✭ 132 (-16.46%)
PARL - A high-performance distributed training framework for reinforcement learning
Stars: ✭ 2,348 (+1386.08%)
AirSim - Open-source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research
Stars: ✭ 12,528 (+7829.11%)
Tidyversity - 🎓 Tidy tools for academics
Stars: ✭ 155 (-1.9%)
ML-Workspace - 🛠 All-in-one web-based IDE specialized for machine learning and data science.
Stars: ✭ 2,337 (+1379.11%)
SUMO-RL - A simple interface to instantiate reinforcement learning environments with SUMO for traffic signal control. Compatible with Gym Env from OpenAI and MultiAgentEnv from RLlib.
Stars: ✭ 145 (-8.23%)
Move37 - Coding demos from the School of AI's Move 37 course
Stars: ✭ 130 (-17.72%)
Robin - RObust document image BINarization
Stars: ✭ 131 (-17.09%)
Persephone - A tool for automatic phoneme transcription
Stars: ✭ 130 (-17.72%)
AI-Blocks - A powerful and intuitive WYSIWYG interface that allows anyone to create machine learning models.
Stars: ✭ 1,818 (+1050.63%)
Uncertainty-Metrics - An easy-to-use interface for measuring uncertainty and robustness.
Stars: ✭ 145 (-8.23%)
VizDoom-Keras-RL - Reinforcement learning in Keras on VizDoom
Stars: ✭ 130 (-17.72%)
Automata - A comprehensive autonomous decentralized systems framework for AI control architects.
Stars: ✭ 130 (-17.72%)
Machin - A reinforcement learning library (framework) for PyTorch; implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA, and more.
Stars: ✭ 145 (-8.23%)