tansey / sdp

Licence: other
Deep nonparametric estimation of discrete conditional distributions via smoothed dyadic partitioning

Programming Languages

python

Projects that are alternatives to or similar to sdp

Letslearnai.github.io
Lets Learn AI
Stars: ✭ 33 (+120%)
Mutual labels:  machine-learning-algorithms, tensorflow-models
Facial-Expression-Recognition
Facial-Expression-Recognition using tensorflow
Stars: ✭ 19 (+26.67%)
Mutual labels:  tensorflow-models
AverageShiftedHistograms.jl
⚡ Lightning fast density estimation in Julia ⚡
Stars: ✭ 52 (+246.67%)
Mutual labels:  density-estimation
glossary
https://machinelearning.wtf/ - An online glossary of machine learning terms.
Stars: ✭ 28 (+86.67%)
Mutual labels:  machine-learning-algorithms
PyImpetus
PyImpetus is a Markov-blanket-based feature subset selection algorithm that considers features both separately and together as a group, in order to provide not just the best set of features but also the best combination of features.
Stars: ✭ 83 (+453.33%)
Mutual labels:  machine-learning-algorithms
adventures-with-ann
All the code for a series of Medium articles on Approximate Nearest Neighbors
Stars: ✭ 40 (+166.67%)
Mutual labels:  machine-learning-algorithms
genieclust
Genie++ Fast and Robust Hierarchical Clustering with Noise Point Detection - for Python and R
Stars: ✭ 34 (+126.67%)
Mutual labels:  machine-learning-algorithms
Probability Theory
A quick introduction to the most important concepts of probability theory; only freshman-level mathematics is needed as a prerequisite.
Stars: ✭ 25 (+66.67%)
Mutual labels:  probability-distribution
data-algorithms-with-spark
O'Reilly book: Data Algorithms with Spark, by Mahmoud Parsian
Stars: ✭ 34 (+126.67%)
Mutual labels:  machine-learning-algorithms
Android-Machine-Learning-With-TensorFlow
TensorFlow implementation for Android
Stars: ✭ 17 (+13.33%)
Mutual labels:  tensorflow-models
Customer segmentation
Analysing the content of an e-commerce database that contains a list of purchases. Based on the analysis, I develop a model that anticipates the purchases a new customer will make during the year following their first purchase.
Stars: ✭ 80 (+433.33%)
Mutual labels:  machine-learning-algorithms
OpencvInstallation
Shell script for OpenCV installation and configuration on Linux-based systems. The easiest way to configure OpenCV: you only need to run the opencv.sh shell file.
Stars: ✭ 16 (+6.67%)
Mutual labels:  machine-learning-algorithms
ZS-Data-Science-Challenge
A data science challenge, "Mekktronix Sales Forecasting", organised by ZS through the HackerEarth platform. Rank: 223 out of 4743.
Stars: ✭ 21 (+40%)
Mutual labels:  machine-learning-algorithms
FB-Ads-Opt-UCB
The easiest way to optimize Facebook Ads using Upper Confidence Bound Algorithm. 💻
Stars: ✭ 23 (+53.33%)
Mutual labels:  machine-learning-algorithms
Seating Chart
Optimizing a Wedding Reception Seating Chart Using a Genetic Algorithm
Stars: ✭ 25 (+66.67%)
Mutual labels:  machine-learning-algorithms
COVID19
Using Kalman Filter to Predict Corona Virus Spread
Stars: ✭ 78 (+420%)
Mutual labels:  machine-learning-algorithms
FineGrainedVisualRecognition
Fine grained visual recognition tensorflow baseline on CUB, Stanford Cars, Dogs, Aircrafts, and Flower102.
Stars: ✭ 19 (+26.67%)
Mutual labels:  tensorflow-models
reweighted-ws
Implementation of the reweighted wake-sleep machine learning algorithm
Stars: ✭ 39 (+160%)
Mutual labels:  machine-learning-algorithms
fastML
A Python package built on sklearn for running a series of classification Algorithms in a faster and easier way.
Stars: ✭ 40 (+166.67%)
Mutual labels:  machine-learning-algorithms
structured-volume-sampling
A clean room implementation of Structured Volume Sampling by Bowles and Zimmermann in Unity
Stars: ✭ 27 (+80%)
Mutual labels:  density-estimation

Smoothed Dyadic Partitions

Deep nonparametric conditional discrete probability estimation via smoothed dyadic partitioning.

  • See the tfsdp directory (specifically models.py) for the details of all the models we implemented. Note that this file contains a lot of garbage code and legacy naming that needs to be cleaned up. Our SDP model is named LocallySmoothedMultiscaleLayer and is referred to in some of the experiments as trendfiltering-multiscale.

  • See the experiments directory for the code to replicate our experiments.

Note that you should be able to install the package as a local pip package via pip install -e . in this directory. The best example of how to run the models is experiments/uci/main.py, which contains the most recent code and should not have any API issues.
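
For intuition about what the layer computes, here is a minimal NumPy sketch of the dyadic-partitioning idea (our own illustration, not the tfsdp API): a distribution over 2^depth outcomes is parameterized by one split probability per internal node of a balanced binary tree, and the probability of an outcome is the product of the branch probabilities along its root-to-leaf path. The sketch omits the smoothing across neighboring leaves that gives LocallySmoothedMultiscaleLayer its name.

import numpy as np

def dyadic_density(split_logits):
    """Turn per-node split logits into a normalized distribution.

    split_logits has 2**depth - 1 entries in breadth-first order,
    so node i has children 2*i + 1 (left) and 2*i + 2 (right).
    """
    splits = 1.0 / (1.0 + np.exp(-split_logits))  # P(go right) at each node
    n_leaves = len(split_logits) + 1
    depth = int(np.log2(n_leaves))
    probs = np.empty(n_leaves)
    for leaf in range(n_leaves):
        p, node = 1.0, 0
        for level in reversed(range(depth)):  # walk root -> leaf
            go_right = (leaf >> level) & 1    # read leaf index MSB-first
            p *= splits[node] if go_right else 1.0 - splits[node]
            node = 2 * node + 1 + go_right
        probs[leaf] = p
    return probs  # sums to 1 by construction

logits = np.random.randn(7)          # depth-3 tree -> 8 outcomes
print(dyadic_density(logits).sum())  # -> 1.0 (up to float error)

Because each observed label touches only the O(log n) nodes on its root-to-leaf path, training scales well in the size of the output space, which is presumably what the "relevant dyadic nodes" added by fill_train_dict below refer to.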

Installation

You can install it via pip: pip install tf-sdp
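
To confirm the install worked, a quick smoke test (hypothetical, but the module and class names come from the usage example below):

# Hypothetical smoke test: verifies the package installed and the SDP
# layer class used in the example below is importable.
from tfsdp.models import LocallySmoothedMultiscaleLayer
print(LocallySmoothedMultiscaleLayer.__name__)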

Using SDP

Adding an SDP layer to your code is straightforward:

import tensorflow as tf
from keras import backend as K
from keras.regularizers import l2
from keras.layers import Dense, Dropout, Flatten
from tfsdp.models import LocallySmoothedMultiscaleLayer

# Load everything else you need
# ...
num_classes = (32, 45) # Discrete output space with shape 32 x 45

# Create your awesome deep model with lots of layers and whatnot
# ...
final_hidden_layer = Dense(final_hidden_size, W_regularizer=l2(0.01), activation=K.relu)(...) # Keras 1 API: W_regularizer became kernel_regularizer in Keras 2
final_hidden_drop = Dropout(0.5)(final_hidden_layer)
model = LocallySmoothedMultiscaleLayer(final_hidden_drop, final_hidden_size, num_classes, one_hot=False)

# ...
# You can get the training loss for an optimizer
opt = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=args.epsilon)
train_step = opt.minimize(model.train_loss)

# Training evaluation and learning is straightforward
feed_dict = {} # fill the feed dict with other needed params like the training flag and input vars
model.fill_train_dict(feed_dict, labels) # add the relevant dyadic nodes for the observed labels
sess.run(train_step, feed_dict=feed_dict)

# You can also get the testing loss for validation
feed_dict = {} # fill the feed dict with other needed params like the training flag and input vars
model.fill_test_dict(feed_dict, labels) # add the relevant dyadic nodes for the observed labels
loss += sess.run(model.test_loss, feed_dict=feed_dict)

# If you want the full conditional distribution over the entire space:
feed_dict = {} # fill the feed dict with other needed params like the training flag and input vars
density = sess.run(model.density, feed_dict=feed_dict) # density has shape [batch_size, num_classes]
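
If you need point predictions rather than the full distribution, here is a small post-processing sketch in plain NumPy (not part of tfsdp; it assumes density can be reshaped to the batch dimension plus the 32 x 45 output grid from the example above):

import numpy as np

# Hypothetical post-processing: recover the MAP prediction per example.
flat = density.reshape(density.shape[0], -1)   # (batch_size, 32 * 45)
map_flat = flat.argmax(axis=1)                 # most probable cell, flattened
map_coords = np.stack(np.unravel_index(map_flat, num_classes), axis=1)
# map_coords[i] is the (row, col) of the most likely outcome for example i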

See experiments/uci/model.py and experiments/uci/main.py for complete examples of how to set up and run the model.

Citation

If you use this code in your work, please cite the following:

@article{tansey:etal:2017:sdp,
  title={Deep Nonparametric Estimation of Discrete Conditional Distributions via
  Smoothed Dyadic Partitioning},
  author={Tansey, Wesley and Pichotta, Karl and Scott, James G.},
  journal={arXiv preprint arXiv:1702.07398},
  year={2017}
}

The paper is available at https://arxiv.org/abs/1702.07398.
