SeaPearl.jl

License: BSD-3-Clause
Julia hybrid constraint programming solver enhanced by a reinforcement learning driven search.


SeaPearl is a Constraint Programming solver that can use Reinforcement Learning agents as value-selection heuristics, using graphs as input to the agent's function approximator. It is intended as a research tool, designed so that researchers can build on it and go beyond what has already been done with it.

The paper accompanying this solver can be found on the arXiv. If you use SeaPearl in your research, please cite our work.

The RL agents are defined with ReinforcementLearning.jl, and their inputs are processed with Flux.jl. The CP part, inspired by MiniCP, is focused on readability. The code is meant to be clear and modular so that researchers can easily access CP data and use it as input for their ML models.

Installation

]add SeaPearl
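The `]` prefix above enters the Pkg REPL mode. Equivalently, from a script or the Julia prompt, SeaPearl can be installed through the Pkg API:

```julia
using Pkg

# Add SeaPearl from the General registry.
Pkg.add("SeaPearl")
```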

Use

Working examples can be found in SeaPearlZoo and documentation can be found here.

SeaPearl can be used either as a classic CP solver with predefined variable- and value-selection heuristics, or as a Reinforcement Learning driven CP solver that learns by solving automatically generated instances of a given problem (knapsack, tsptw, graphcoloring, EternityII ...).

SeaPearl as a classic CP solver:

To use SeaPearl as a classic CP solver, one needs to:

  1. declare a variable selection heuristic:
YourVariableSelectionHeuristic{TakeObjective} <: SeaPearl.AbstractVariableSelection{TakeObjective}
  2. declare a value selection heuristic:
BasicHeuristic <: ValueSelection
  3. create a Constraint Programming model:
trailer = SeaPearl.Trailer()
model = SeaPearl.CPModel(trailer)

# create a variable:
SeaPearl.addVariable!(...)

# add constraints:
SeaPearl.addConstraint!(model, SeaPearl.AbstractConstraint(...))

# add an optional objective function:
SeaPearl.addObjective!(model, ObjectiveVar)
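Putting these steps together, a minimal toy model might look like the sketch below. The constructors used here (`IntVar`, `NotEqual`, `MinDomainVariableSelection`) and the keyword arguments of `solve!` are taken from SeaPearl's examples and may change between versions, so treat this as illustrative rather than canonical:

```julia
using SeaPearl

trailer = SeaPearl.Trailer()
model = SeaPearl.CPModel(trailer)

# Two integer variables over the domain 1..2
# (IntVar constructor signature assumed from SeaPearl's examples).
x = SeaPearl.IntVar(1, 2, "x", trailer)
y = SeaPearl.IntVar(1, 2, "y", trailer)
SeaPearl.addVariable!(model, x)
SeaPearl.addVariable!(model, y)

# Require x != y.
SeaPearl.addConstraint!(model, SeaPearl.NotEqual(x, y, trailer))

# Solve with a predefined variable heuristic and the basic value heuristic.
variableHeuristic = SeaPearl.MinDomainVariableSelection{false}()
SeaPearl.solve!(model; variableHeuristic=variableHeuristic,
                valueSelection=SeaPearl.BasicHeuristic())
```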

SeaPearl as an RL-driven CP solver:

To use SeaPearl as an RL-driven CP solver, one needs to:

  1. declare a variable selection heuristic:
CustomVariableSelectionHeuristic{TakeObjective} <: SeaPearl.AbstractVariableSelection{TakeObjective}
  2. declare a learned value selection heuristic:
LearnedHeuristic{SR<:AbstractStateRepresentation, R<:AbstractReward, A<:ActionOutput} <: ValueSelection
  3. define an agent:
agent = RL.Agent(
    policy=(...),
    trajectory=(...),
)
  4. optionally, declare a custom reward:
CustomReward <: SeaPearl.AbstractReward
  5. optionally, declare a custom state representation (instead of the default tripartite-graph representation):
CustomStateRepresentation <: SeaPearl.AbstractStateRepresentation
  6. optionally, declare a custom featurization for the state representation:
CustomFeaturization <: SeaPearl.AbstractFeaturization
  7. create a generator for your problem, which will produce the different instances used during the learning process:
CustomProblemGenerator <: AbstractModelGenerator
  8. set a number of training epochs, and declare an evaluator, a strategy, and a metric for benchmarking:
nb_epochs = 3000
CustomStrategy <: SearchStrategy # DFS, RBS, ILDS
CustomEvaluator <: AbstractEvaluator # or use a predefined one: SeaPearl.SameInstancesEvaluator(...)
function CustomMetricsFun
  9. launch the training:
metricsArray, eval_metricsArray = SeaPearl.train!(
    valueSelectionArray=valueSelectionArray,
    generator=tsptw_generator,
    nbEpisodes=nbEpisodes,
    strategy=strategy,
    eval_strategy=eval_strategy,
    variableHeuristic=variableSelection,
    out_solver=true,
    verbose=true,
    evaluator=SeaPearl.SameInstancesEvaluator(valueSelectionArray, tsptw_generator; evalFreq=evalFreq, nbInstances=nbInstances, evalTimeOut=evalTimeOut),
    restartPerInstances=restartPerInstances
)
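For the optional custom reward step, a reward is a subtype of SeaPearl.AbstractReward whose update logic is attached through SeaPearl's reward hooks. The skeleton below is only a sketch: the struct layout mirrors SeaPearl's built-in DefaultReward, but the phase type and `set_reward!` signature are assumptions that should be checked against the SeaPearl version in use:

```julia
using SeaPearl

# Hypothetical custom reward; the field layout mirrors SeaPearl's DefaultReward.
mutable struct MyReward <: SeaPearl.AbstractReward
    value::Float32
end
MyReward(model::SeaPearl.CPModel) = MyReward(0.0f0)

# Penalize every decision to push the agent toward shorter searches
# (method signature assumed from SeaPearl's built-in rewards; verify before use).
function SeaPearl.set_reward!(::Type{SeaPearl.DecisionPhase},
                              lh::SeaPearl.LearnedHeuristic{SR,MyReward,A},
                              model::SeaPearl.CPModel) where {SR,A}
    lh.reward.value -= 1
    nothing
end
```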

Contributing to SeaPearl

All contributions are welcome! Have a look at our contributing guidelines.
