itsrainingdata / sparsebn

License: other
Software for learning sparse Bayesian networks

Programming language: R
Projects that are alternatives to or similar to sparsebn

Dowhy
DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions. DoWhy is based on a unified language for causal inference, combining causal graphical models and potential outcomes frameworks.
Stars: ✭ 3,480 (+8387.8%)
Mutual labels:  graphical-models, bayesian-networks
pyRiemann
Python machine learning package based on sklearn API for multivariate data processing and statistical analysis of symmetric positive definite matrices via Riemannian geometry
Stars: ✭ 470 (+1046.34%)
Mutual labels:  covariance-matrices
traj-pred-irl
Official implementation of "Regularizing neural networks for future trajectory prediction via IRL framework"
Stars: ✭ 23 (-43.9%)
Mutual labels:  regularization
deep-learning-notes
🧠👨‍💻 Deep Learning Specialization • Lecture Notes • Lab Assignments
Stars: ✭ 20 (-51.22%)
Mutual labels:  regularization
mixup
speechpro.com/
Stars: ✭ 23 (-43.9%)
Mutual labels:  regularization
SSE-PT
Code and datasets for the RecSys'20 paper "SSE-PT: Sequential Recommendation Via Personalized Transformer" and the NeurIPS'19 paper "Stochastic Shared Embeddings: Data-driven Regularization of Embedding Layers"
Stars: ✭ 103 (+151.22%)
Mutual labels:  regularization
Machine Learning From Scratch
Machine learning models from scratch, with better visualisation
Stars: ✭ 15 (-63.41%)
Mutual labels:  regularization
hyperstar
Hyperstar: Negative Sampling Improves Hypernymy Extraction Based on Projection Learning.
Stars: ✭ 24 (-41.46%)
Mutual labels:  regularization
PenaltyFunctions.jl
Julia package of regularization functions for machine learning
Stars: ✭ 25 (-39.02%)
Mutual labels:  regularization
Distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Stars: ✭ 3,760 (+9070.73%)
Mutual labels:  regularization
Deeplearning
Python code for Deep Learning (the "flower book"): mathematical derivations, principle analysis, and source-level implementations
Stars: ✭ 4,020 (+9704.88%)
Mutual labels:  regularization
tulip
Scalable input gradient regularization
Stars: ✭ 19 (-53.66%)
Mutual labels:  regularization
dbnR
Gaussian dynamic Bayesian networks structure learning and inference based on the bnlearn package
Stars: ✭ 33 (-19.51%)
Mutual labels:  bayesian-networks
Machine-Learning-Andrew-Ng
Machine Learning, Coursera, Andrew Ng: Python + Matlab implementations
Stars: ✭ 127 (+209.76%)
Mutual labels:  regularization
derivative
Optimal numerical differentiation of noisy time series data in Python.
Stars: ✭ 34 (-17.07%)
Mutual labels:  experimental-data
Regularization-Pruning
[ICLR'21] PyTorch code for our paper "Neural Pruning via Growing Regularization"
Stars: ✭ 44 (+7.32%)
Mutual labels:  regularization
AMP-Regularizer
Code for our paper "Regularizing Neural Networks via Adversarial Model Perturbation", CVPR2021
Stars: ✭ 26 (-36.59%)
Mutual labels:  regularization
SparseRegression.jl
Statistical Models with Regularization in Pure Julia
Stars: ✭ 37 (-9.76%)
Mutual labels:  regularization
Statistical-Learning-using-R
A statistical learning application consisting of various machine learning algorithms implemented in R, with in-depth interpretations. Documents and reports on the techniques covered can be found on my RPubs profile.
Stars: ✭ 27 (-34.15%)
Mutual labels:  regularization
pycid
Library for graphical models of decision making, based on pgmpy and networkx
Stars: ✭ 64 (+56.1%)
Mutual labels:  bayesian-networks

sparsebn

Project status: active. The project has reached a stable, usable state and is being actively developed.

Introducing sparsebn: a new R package for learning sparse Bayesian networks and other graphical models from high-dimensional data via sparse regularization. It is designed from the ground up to handle:

  • Experimental data with interventions (see the sketch after this list)
  • Mixed observational / experimental data
  • High-dimensional data with p >> n
  • Datasets with thousands of variables (tested up to p=8000)
  • Continuous and discrete data
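
As a quick sketch of how intervention information enters the workflow, the data are first wrapped in a sparsebnData object. The cytometryContinuous dataset and its "data"/"ivn" fields below follow the package documentation; treat the exact names as assumptions:

    library(sparsebn)

    # Flow-cytometry data bundled with the package: raw measurements plus,
    # for each observation, the set of nodes that were intervened on
    data(cytometryContinuous)

    # Wrap the data, recording the experimental interventions; leaving
    # `ivn` as NULL would treat every row as purely observational
    cyto.data <- sparsebnData(cytometryContinuous[["data"]],
                              type = "continuous",
                              ivn  = cytometryContinuous[["ivn"]])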

The emphasis of this package is scalability and statistical consistency on high-dimensional data: compared to existing structure-learning algorithms, sparsebn scales to much larger problems, and it is under active development. For more details on this package, including worked examples and the methodological background, please see our preprint [1].

Overview

The main methods for learning graphical models are:

  • estimate.dag for directed acyclic graphs (Bayesian networks).
  • estimate.precision for undirected graphs (Markov random fields).
  • estimate.covariance for covariance matrices.

Currently, estimation of precision and covariance matrices is limited to Gaussian data.
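
Continuing the sketch above, each method takes a sparsebnData object and returns a solution path of estimates, one per regularization level (a minimal illustration; see the package documentation for exact return types):

    # Learn a solution path of DAGs over a grid of regularization parameters
    cyto.learn <- estimate.dag(cyto.data)
    cyto.learn

    # For Gaussian data, covariance and precision matrices can be
    # estimated along a path in the same way
    cyto.cov  <- estimate.covariance(cyto.data)
    cyto.prec <- estimate.precision(cyto.data)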

The workhorse behind sparsebn is the sparsebnUtils package, which provides various S3 classes and methods for representing and manipulating graphs. The basic algorithms are implemented in ccdrAlgorithm and discretecdAlgorithm.
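
For example, sparsebnUtils provides helpers for working with a fitted solution path; a brief sketch, assuming select.parameter and get.adjacency.matrix behave as documented in sparsebnUtils:

    # Pick one model from the solution path using the package's
    # built-in parameter selection
    idx <- select.parameter(cyto.learn, cyto.data)

    # Extract the selected graph as a sparse adjacency matrix
    adj <- get.adjacency.matrix(cyto.learn[[idx]])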

Installation

You can install:

  • the latest CRAN version with

    install.packages("sparsebn")
  • the latest development version from GitHub with

    devtools::install_github(c("itsrainingdata/sparsebn", "itsrainingdata/sparsebnUtils/dev", "itsrainingdata/ccdrAlgorithm/dev", "gujyjean/discretecdAlgorithm"))

References

[1] Aragam, B., Gu, J., and Zhou, Q. (2017). Learning large-scale Bayesian networks with the sparsebn package. arXiv: 1703.04025.

[2] Aragam, B. and Zhou, Q. (2015). Concave penalized estimation of sparse Gaussian Bayesian networks. Journal of Machine Learning Research, 16(Nov):2273-2328.

[3] Fu, F., Gu, J., and Zhou, Q. (2014). Adaptive penalized estimation of directed acyclic graphs from categorical data. arXiv: 1403.2310.

[4] Aragam, B., Amini, A. A., and Zhou, Q. (2015). Learning directed acyclic graphs with penalized neighbourhood regression. arXiv: 1511.08963.

[5] Fu, F. and Zhou, Q. (2013). Learning sparse causal Gaussian networks with experimental intervention: Regularization and coordinate descent. Journal of the American Statistical Association, 108: 288-300.
