hpc: Learning and practice of high performance computing (CUDA, Vulkan, OpenCL, OpenMP, TBB, SSE/AVX, NEON, MPI, coroutines, etc.)
Stars: ✭ 39 (+95%)
Super Mario Neat: This program evolves an AI using the NEAT algorithm to play Super Mario Bros.
Stars: ✭ 64 (+220%)
nbodykit: Analysis kit for large-scale structure datasets, the massively parallel way
Stars: ✭ 93 (+365%)
mpi-parallelization: Examples of MPI spawning and splitting, and the differences between the two approaches
Stars: ✭ 16 (-20%)
Pytorch-RL-CPP: A repository with C++ implementations of reinforcement learning algorithms (PyTorch)
Stars: ✭ 73 (+265%)
neuro-evolution: A project on improving neural network performance by using genetic algorithms.
Stars: ✭ 25 (+25%)
Tensorflow-Neuroevolution: Neuroevolution framework for TensorFlow 2.x focusing on modularity and high performance. Pre-implements NEAT, DeepNEAT, CoDeepNEAT, etc.
Stars: ✭ 109 (+445%)
Galaxy: Galaxy is an asynchronous parallel visualization ray tracer for performant rendering in distributed computing environments. Galaxy builds upon Intel OSPRay and Intel Embree, including ray queueing and sending logic inspired by TACC GraviT.
Stars: ✭ 18 (-10%)
neat-openai-gym: NEAT for reinforcement learning on the OpenAI Gym
Stars: ✭ 19 (-5%)
safe-control-gym: PyBullet CartPole and quadrotor environments, with CasADi symbolic a priori dynamics, for learning-based control and RL
Stars: ✭ 272 (+1260%)
pytorch-gym: Implementation of the Deep Deterministic Policy Gradient (DDPG) in Bullet Gym using PyTorch
Stars: ✭ 39 (+95%)
freqtrade-gym: A customized gym environment for developing and comparing reinforcement learning algorithms in crypto trading.
Stars: ✭ 192 (+860%)
fml: Fused Matrix Library
Stars: ✭ 24 (+20%)
sst-core: SST Structural Simulation Toolkit Parallel Discrete Event Core and Services
Stars: ✭ 82 (+310%)
gym-cryptotrading: A Bitcoin trading environment based on the OpenAI Gym environment API
Stars: ✭ 111 (+455%)
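Several of the entries in this list (gym-cryptotrading, freqtrade-gym, CartPole, safe-control-gym) expose environments through the OpenAI Gym `reset()`/`step()` API. As a rough illustration of what that interface looks like, here is a minimal toy trading environment in plain Python with no gym dependency; the class name, price model, and reward scheme are invented for illustration, not taken from any of the projects above:

```python
import random

class TinyTradingEnv:
    """Toy environment following the Gym-style reset()/step() protocol.

    The random-walk price series and reward scheme are invented for
    illustration; real environments such as gym-cryptotrading use
    actual market data.
    """

    # Action encoding: 0 = hold, 1 = buy (go long), 2 = sell (go flat)

    def __init__(self, length=10, seed=0):
        self.length = length
        self.rng = random.Random(seed)

    def reset(self):
        # Generate a random walk of prices and start flat (no position).
        self.prices = [100.0]
        for _ in range(self.length):
            self.prices.append(self.prices[-1] + self.rng.uniform(-1, 1))
        self.t = 0
        self.position = 0  # 0 = flat, 1 = long
        return self._obs()

    def _obs(self):
        return (self.prices[self.t], self.position)

    def step(self, action):
        # Reward is the price change captured while holding a position.
        delta = self.prices[self.t + 1] - self.prices[self.t]
        reward = delta if self.position == 1 else 0.0
        if action == 1:
            self.position = 1
        elif action == 2:
            self.position = 0
        self.t += 1
        done = self.t >= self.length
        return self._obs(), reward, done, {}

env = TinyTradingEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(1)  # always-long policy
    total += reward
```

Any agent that speaks this four-tuple protocol (observation, reward, done flag, info dict) can be pointed at such an environment, which is why the Gym API shows up so often in this list.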
raptor: General, high-performance algebraic multigrid solver
Stars: ✭ 50 (+150%)
bsuir-csn-cmsn-helper: Repository containing ready-made laboratory assignments for the computing machines, systems and networks specialty
Stars: ✭ 43 (+115%)
Harris-Hawks-Optimization-Algorithm-and-Applications: Source code for the HHO paper "Harris hawks optimization: Algorithm and applications" (https://www.sciencedirect.com/science/article/pii/S0167739X18313530). The paper proposes a novel population-based, nature-inspired optimization paradigm called the Harris Hawks Optimizer (HHO).
Stars: ✭ 31 (+55%)
libquo: Dynamic execution environments for coupled, thread-heterogeneous MPI+X applications
Stars: ✭ 21 (+5%)
scr: SCR caches checkpoint data in storage on the compute nodes of a Linux cluster to provide a fast, scalable checkpoint/restart capability for MPI codes.
Stars: ✭ 84 (+320%)
neat-python: Python implementation of the NEAT neuroevolution algorithm
Stars: ✭ 32 (+60%)
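neat-python, Neatron, evo-NEAT, and the other NEAT projects in this list all revolve around the same genome representation: node genes plus connection genes tagged with innovation numbers. As a heavily simplified sketch of the core "add node" structural mutation, here is a self-contained version in plain Python; the class and function names are invented for illustration and do not match neat-python's actual API:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Connection:
    src: int
    dst: int
    weight: float
    enabled: bool = True
    innovation: int = 0

@dataclass
class Genome:
    nodes: list = field(default_factory=list)        # node ids
    connections: list = field(default_factory=list)  # Connection genes

def add_node_mutation(genome, next_node_id, next_innovation, rng=random):
    """NEAT-style add-node mutation: split one enabled connection in two.

    The old connection is disabled; a new node is inserted with an
    incoming weight of 1.0 and an outgoing weight equal to the old
    weight, so the network's behavior is initially unchanged.
    """
    enabled = [c for c in genome.connections if c.enabled]
    if not enabled:
        return next_node_id, next_innovation
    conn = rng.choice(enabled)
    conn.enabled = False
    genome.nodes.append(next_node_id)
    genome.connections.append(
        Connection(conn.src, next_node_id, 1.0, True, next_innovation))
    genome.connections.append(
        Connection(next_node_id, conn.dst, conn.weight, True, next_innovation + 1))
    return next_node_id + 1, next_innovation + 2

# Split the single connection 0 -> 1 into 0 -> 2 -> 1.
g = Genome(nodes=[0, 1], connections=[Connection(0, 1, 0.5, True, 0)])
add_node_mutation(g, next_node_id=2, next_innovation=1)
```

The innovation numbers attached to the two new connections are what later allow NEAT to align genomes during crossover and measure compatibility distance for speciation.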
pbdML: No description or website provided.
Stars: ✭ 13 (-35%)
GoBigger: Come and try a decision-intelligence version of "Agar"! GoBigger can also help with multi-agent decision intelligence research.
Stars: ✭ 410 (+1950%)
ravel: Ravel MPI trace visualization tool
Stars: ✭ 26 (+30%)
EDLib: Exact diagonalization solver for quantum electron models
Stars: ✭ 18 (-10%)
CartPole: Run OpenAI Gym on a server
Stars: ✭ 16 (-20%)
faabric: Messaging and state layer for distributed serverless applications
Stars: ✭ 39 (+95%)
ACCL: Accelerated Collective Communication Library, providing MPI-like communication operations for Xilinx Alveo accelerators
Stars: ✭ 28 (+40%)
cgp-cnn-design: Using Cartesian genetic programming to find an efficient convolutional neural network architecture
Stars: ✭ 25 (+25%)
h5fortran-mpi: HDF5-MPI parallel Fortran object-oriented interface
Stars: ✭ 15 (-25%)
gslib: Sparse communication library
Stars: ✭ 22 (+10%)
SIRIUS: Domain-specific library for electronic structure calculations
Stars: ✭ 87 (+335%)
ecole: Extensible Combinatorial Optimization Learning Environments
Stars: ✭ 249 (+1145%)
Neatron: Yet another NEAT implementation
Stars: ✭ 14 (-30%)
t8code: Parallel algorithms and data structures for tree-based AMR with arbitrary element shapes.
Stars: ✭ 37 (+85%)
reinforcement learning ppo rnd: Deep reinforcement learning using Proximal Policy Optimization and Random Network Distillation in TensorFlow 2 and PyTorch, with some explanation
Stars: ✭ 33 (+65%)
sboxgates: Program for finding low gate count implementations of S-boxes.
Stars: ✭ 30 (+50%)
mloperator: Machine Learning Operator & Controller for Kubernetes
Stars: ✭ 85 (+325%)
mpiBench: MPI benchmark to test and measure collective performance
Stars: ✭ 39 (+95%)
FluxUtils.jl: Sklearn interface and distributed training for Flux.jl
Stars: ✭ 12 (-40%)
az-hop: The Azure HPC On-Demand Platform provides an HPC cluster-ready solution
Stars: ✭ 33 (+65%)
arbor: The Arbor multi-compartment neural network simulation library.
Stars: ✭ 87 (+335%)
alsvinn: A fast finite-volume simulator with uncertainty quantification (UQ) support.
Stars: ✭ 22 (+10%)
evo-NEAT: A Java implementation of NEAT (NeuroEvolution of Augmenting Topologies) from scratch for the generation of evolving artificial neural networks. For educational purposes only.
Stars: ✭ 34 (+70%)
DQN: Deep Q-Network reinforcement learning algorithm applied to a simple 2D car-racing environment
Stars: ✭ 42 (+110%)
flutter: Flutter fitness/workout app for wger
Stars: ✭ 106 (+430%)
Theano-MPI: MPI parallel framework for training deep learning models built in Theano
Stars: ✭ 55 (+175%)
exact: EXONA: The Evolutionary eXploration of Neural Networks Framework -- EXACT, EXALT and EXAMM
Stars: ✭ 43 (+115%)
pacman-ai: An AI plays the original 1980 Pac-Man using NeuroEvolution of Augmenting Topologies and deep Q-learning
Stars: ✭ 26 (+30%)
fenics-DRL: Repository from the paper https://arxiv.org/abs/1908.04127, for training deep reinforcement learning agents in a fluid mechanics setup.
Stars: ✭ 40 (+100%)
ParMmg: Distributed parallelization of 3D volume mesh adaptation
Stars: ✭ 19 (-5%)
SWCaffe: A deep learning framework customized for Sunway TaihuLight
Stars: ✭ 37 (+85%)
muster: Massively Scalable Clustering
Stars: ✭ 22 (+10%)
mpiGraph: MPI benchmark to generate network bandwidth images
Stars: ✭ 17 (-15%)
eventgrad: Event-Triggered Communication in Parallel Machine Learning
Stars: ✭ 14 (-30%)