
Rookfighter / least-squares-cpp

License: MIT License
A single header-only C++ library for least squares fitting.

Programming Languages

C++
36643 projects - #6 most used programming language
CMake
9771 projects

Projects that are alternatives of or similar to least-squares-cpp

nim-mpfit
A wrapper for the cMPFIT library for the Nim programming language, https://vindaar.github.io/nim-mpfit/
Stars: ✭ 18 (-60.87%)
Mutual labels:  least-squares, levenberg-marquardt
Ceres Solver
A large scale non-linear optimization library
Stars: ✭ 2,180 (+4639.13%)
Mutual labels:  levenberg-marquardt, gauss-newton
dfogn
DFO-GN: Derivative-Free Optimization using Gauss-Newton
Stars: ✭ 20 (-56.52%)
Mutual labels:  optimization, least-squares
Cppnumericalsolvers
a lightweight C++17 library of numerical optimization methods for nonlinear functions (Including L-BFGS-B for TensorFlow)
Stars: ✭ 638 (+1286.96%)
Mutual labels:  optimization, gradient-descent
Far Ho
Gradient based hyperparameter optimization & meta-learning package for TensorFlow
Stars: ✭ 161 (+250%)
Mutual labels:  optimization, gradient-descent
Raytracer.jl
Differentiable RayTracing in Julia
Stars: ✭ 110 (+139.13%)
Mutual labels:  optimization, gradient-descent
Optim
OptimLib: a lightweight C++ library of numerical optimization methods for nonlinear functions
Stars: ✭ 411 (+793.48%)
Mutual labels:  optimization, gradient-descent
theseus
A library for differentiable nonlinear optimization
Stars: ✭ 1,257 (+2632.61%)
Mutual labels:  levenberg-marquardt, gauss-newton
descent
First-order optimization tools
Stars: ✭ 23 (-50%)
Mutual labels:  optimization, gradient-descent
eqsat
A language-generic implementation of equality saturation in Haskell
Stars: ✭ 15 (-67.39%)
Mutual labels:  optimization
pytorch
Improved LBFGS optimizer in PyTorch.
Stars: ✭ 16 (-65.22%)
Mutual labels:  optimization
yoga-image-optimizer
A graphical tool to convert and optimize JPEG, PNG and WebP images (based on YOGA)
Stars: ✭ 85 (+84.78%)
Mutual labels:  optimization
tungsten
Bring fusion to everyone
Stars: ✭ 13 (-71.74%)
Mutual labels:  optimization
olympus
Olympus: a benchmarking framework for noisy optimization and experiment planning
Stars: ✭ 38 (-17.39%)
Mutual labels:  optimization
NCVX
NCVX: A User-Friendly and Scalable Package for Nonconvex Optimization in Machine Learning
Stars: ✭ 19 (-58.7%)
Mutual labels:  optimization
bio ik
MoveIt kinematics_base plugin based on particle optimization & GA
Stars: ✭ 104 (+126.09%)
Mutual labels:  optimization
dmipy
The open source toolbox for reproducible diffusion MRI-based microstructure estimation
Stars: ✭ 58 (+26.09%)
Mutual labels:  optimization
simsopt
Simons Stellarator Optimizer Code
Stars: ✭ 28 (-39.13%)
Mutual labels:  optimization
Train plus plus
Repo and code of the IEEE UIC paper: Train++: An Incremental ML Model Training Algorithm to Create Self-Learning IoT Devices
Stars: ✭ 17 (-63.04%)
Mutual labels:  optimization
structured-volume-sampling
A clean room implementation of Structured Volume Sampling by Bowles and Zimmermann in Unity
Stars: ✭ 27 (-41.3%)
Mutual labels:  optimization

least-squares-cpp


least-squares-cpp is a header-only C++ library for unconstrained non-linear least squares optimization using the Eigen3 library.

It provides the following Newton-type optimization algorithms:

  • Gradient Descent
  • Gauss-Newton
  • Levenberg-Marquardt

The library also includes the following step size calculation methods; a brief sketch of the underlying update rules follows the list:

  • Constant Step Size
  • Barzilai-Borwein Method
  • Armijo Backtracking
  • Wolfe Backtracking
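
For orientation, this is the standard unconstrained nonlinear least squares setting these components address. The formulas below are a textbook sketch, not copied from the library's documentation, so the symbols are mine:

  \min_{x} \; f(x) = \tfrac{1}{2} \lVert r(x) \rVert^2, \qquad r : \mathbb{R}^n \to \mathbb{R}^m

  \text{Gradient Descent:} \qquad x_{k+1} = x_k - t_k \, J(x_k)^\top r(x_k)

  \text{Gauss-Newton:} \qquad J(x_k)^\top J(x_k) \, p_k = -J(x_k)^\top r(x_k), \qquad x_{k+1} = x_k + t_k \, p_k

  \text{Levenberg-Marquardt:} \qquad \bigl( J(x_k)^\top J(x_k) + \lambda_k I \bigr) p_k = -J(x_k)^\top r(x_k)

Here r(x) is the error vector computed by your functor, J(x) its Jacobian, t_k the step size chosen by one of the methods above, and \lambda_k the Levenberg-Marquardt damping factor.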

Install

Simply copy the header file into your project or install it using the CMake build system by typing

cd path/to/repo
mkdir build
cd build
cmake ..
make install

The library requires Eigen3 to be installed on your system. On Debian-based systems you can simply type

apt-get install libeigen3-dev

Make sure Eigen3 can be found by your build system.

You can use the CMake Find module in cmake/ to find the installed header.

Usage

For examples on using different algorithms and stop criteria, please have a look at the examples/ directory.

There are three major steps to use least-squares-cpp:

  • Implement your error function as a functor
  • Instantiate the optimization algorithm of your choice
  • Pick the line search algorithm and parameters of your choice
#include <iostream>
#include <lsqcpp.h>

// Implement an objective functor.
struct ParabolicError
{
    void operator()(const Eigen::VectorXd &xval,
        Eigen::VectorXd &fval,
        Eigen::MatrixXd &) const
    {
        // Omit the calculation of the jacobian, so finite differences will
        // be used to estimate it numerically.
        fval.resize(xval.size() / 2);
        for(lsq::Index i = 0; i < fval.size(); ++i)
            fval(i) = xval(i*2) * xval(i*2) + xval(i*2+1) * xval(i*2+1);
    }
};

int main()
{
    // Create GaussNewton optimizer object with ParabolicError functor as objective.
    // There are GradientDescent, GaussNewton and LevenbergMarquardt available.
    //
    // You can specify a StepSize functor as template parameter.
    // There are ConstantStepSize, BarzilaiBorwein, ArmijoBacktracking and
    // WolfeBacktracking available. (Default for GaussNewton is ArmijoBacktracking)
    //
    // You can additionally specify a Callback functor as template parameter.
    //
    // You can additionally specify a FiniteDifferences functor as template
    // parameter. There are Forward-, Backward- and CentralDifferences
    // available. (Default is CentralDifferences)
    //
    // For GaussNewton and LevenbergMarquardt you can additionally specify a
    // linear equation system solver.
    // There are DenseSVDSolver and DenseCholeskySolver available.
    lsq::GaussNewton<double, ParabolicError, lsq::ArmijoBacktracking<double>> optimizer;

    // Set number of iterations as stop criterion.
    // Set it to 0 or negative for infinite iterations (default is 0).
    optimizer.setMaxIterations(100);

    // Set the minimum length of the gradient.
    // The optimizer stops minimizing if the gradient length falls below this
    // value.
    // Set it to 0 or negative to disable this stop criterion (default is 1e-9).
    optimizer.setMinGradientLength(1e-6);

    // Set the minimum length of the step.
    // The optimizer stops minimizing if the step length falls below this
    // value.
    // Set it to 0 or negative to disable this stop criterion (default is 1e-9).
    optimizer.setMinStepLength(1e-6);

    // Set the minimum least squares error.
    // The optimizer stops minimizing if the error falls below this
    // value.
    // Set it to 0 or negative to disable this stop criterion (default is 0).
    optimizer.setMinError(0);

    // Set the parametrized StepSize functor used for the step calculation.
    optimizer.setStepSize(lsq::ArmijoBacktracking<double>(0.8, 1e-4, 1e-10, 1.0, 0));

    // Turn verbosity on, so the optimizer prints status updates after each
    // iteration.
    optimizer.setVerbosity(4);

    // Set initial guess.
    Eigen::VectorXd initialGuess(4);
    initialGuess << 1, 2, 3, 4;

    // Start the optimization.
    auto result = optimizer.minimize(initialGuess);

    std::cout << "Done! Converged: " << (result.converged ? "true" : "false")
        << " Iterations: " << result.iterations << std::endl;

    // do something with final function value
    std::cout << "Final fval: " << result.fval.transpose() << std::endl;

    // do something with final x-value
    std::cout << "Final xval: " << result.xval.transpose() << std::endl;

    return 0;
}
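
The same pattern applies to the other optimizers. As a minimal sketch, a Levenberg-Marquardt setup with Wolfe backtracking might look as follows; the three-parameter template form simply mirrors the GaussNewton example above and is an assumption, not something verified against the library headers:

// Minimal sketch, reusing the ParabolicError functor and initialGuess from
// the example above. The template parameter order (scalar type, objective,
// step size functor) is assumed to match the GaussNewton example.
lsq::LevenbergMarquardt<double, ParabolicError,
    lsq::WolfeBacktracking<double>> lmOptimizer;

lmOptimizer.setMaxIterations(100);
lmOptimizer.setMinGradientLength(1e-6);
lmOptimizer.setMinStepLength(1e-6);

auto lmResult = lmOptimizer.minimize(initialGuess);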

Performance Considerations

The performance of solving an optimization problem with least-squares-cpp is significantly influenced by the following factors:

  • line search, such as Armijo or Wolfe backtracking, is expensive
    • limit the maximum number of iterations for line search algorithms
      • if you have not fully analyzed your objective, numerical issues can make the algorithm get stuck in line search loops for quite some time
  • calculating jacobians numerically by finite differences is expensive and has low precision
    • consider providing an explicit jacobian in your error functor (see the sketch after this list)
    • consider using algorithmic differentiation in your error functor (not necessarily faster, but more precise)
  • parallelize your error functor
    • usually the calculations for different parts of an error vector can be done independently
  • compile in Release mode
    • Eigen3 performs many runtime checks in Debug mode, so Release builds give a significant performance gain
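
As an illustration of the explicit jacobian suggestion above, the parabolic error functor could fill the third argument directly. This is a sketch based on the functor signature shown earlier, not on the library's documentation; in particular, the assumed layout (row i, column j holds the derivative of fval(i) with respect to xval(j)) is the usual convention, but verify it before relying on it:

// Sketch: same parabolic error as above, but with an explicit jacobian.
// Assumes the jacobian is expected as an (fval.size() x xval.size()) matrix,
// where entry (i, j) holds d fval(i) / d xval(j).
struct ParabolicErrorWithJacobian
{
    void operator()(const Eigen::VectorXd &xval,
        Eigen::VectorXd &fval,
        Eigen::MatrixXd &jacobian) const
    {
        fval.resize(xval.size() / 2);
        jacobian.setZero(fval.size(), xval.size());
        for(lsq::Index i = 0; i < fval.size(); ++i)
        {
            // f_i(x) = x_{2i}^2 + x_{2i+1}^2
            fval(i) = xval(i*2) * xval(i*2) + xval(i*2+1) * xval(i*2+1);
            // d f_i / d x_{2i} = 2 x_{2i},  d f_i / d x_{2i+1} = 2 x_{2i+1}
            jacobian(i, i*2)   = 2 * xval(i*2);
            jacobian(i, i*2+1) = 2 * xval(i*2+1);
        }
    }
};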

