
shchur / ifl-tpp

License: MIT License
Implementation of "Intensity-Free Learning of Temporal Point Processes" (Spotlight @ ICLR 2020)

Programming Languages

Python: 139,335 projects - #7 most used programming language
Jupyter Notebook: 11,667 projects

Projects that are alternatives of or similar to ifl-tpp

constant-memory-waveglow
PyTorch implementation of NVIDIA WaveGlow with constant memory cost.
Stars: ✭ 36 (-37.93%)
Mutual labels:  normalizing-flows
InvertibleNetworks.jl
A Julia framework for invertible neural networks
Stars: ✭ 86 (+48.28%)
Mutual labels:  normalizing-flows
continuous-time-flow-process
PyTorch code of "Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows" (NeurIPS 2020)
Stars: ✭ 34 (-41.38%)
Mutual labels:  normalizing-flows
MongeAmpereFlow
Continuous-time gradient flow for generative modeling and variational inference
Stars: ✭ 29 (-50%)
Mutual labels:  normalizing-flows
deeprob-kit
A Python Library for Deep Probabilistic Modeling
Stars: ✭ 32 (-44.83%)
Mutual labels:  normalizing-flows
NanoFlow
PyTorch implementation of the paper "NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity." (NeurIPS 2020)
Stars: ✭ 63 (+8.62%)
Mutual labels:  normalizing-flows
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+1987.93%)
Mutual labels:  normalizing-flows
flowtorch-old
Separating Normalizing Flows code from Pyro and improving API
Stars: ✭ 36 (-37.93%)
Mutual labels:  normalizing-flows
nessai
nessai: Nested Sampling with Artificial Intelligence
Stars: ✭ 18 (-68.97%)
Mutual labels:  normalizing-flows
score flow
Official code for "Maximum Likelihood Training of Score-Based Diffusion Models", NeurIPS 2021 (spotlight)
Stars: ✭ 49 (-15.52%)
Mutual labels:  normalizing-flows
UMNN
Implementation of Unconstrained Monotonic Neural Network and the related experiments. These architectures are particularly useful for modelling monotonic transformations in normalizing flows.
Stars: ✭ 63 (+8.62%)
Mutual labels:  normalizing-flows
gradient-boosted-normalizing-flows
We got a stew going!
Stars: ✭ 20 (-65.52%)
Mutual labels:  normalizing-flows
semi-supervised-NFs
Code for the paper Semi-Conditional Normalizing Flows for Semi-Supervised Learning
Stars: ✭ 23 (-60.34%)
Mutual labels:  normalizing-flows
normalizing-flows
Implementations of normalizing flows using python and tensorflow
Stars: ✭ 15 (-74.14%)
Mutual labels:  normalizing-flows
introduction to normalizing flows
Jupyter Notebook corresponding to 'Going with the Flow: An Introduction to Normalizing Flows'
Stars: ✭ 21 (-63.79%)
Mutual labels:  normalizing-flows
cflow-ad
Official PyTorch code for WACV 2022 paper "CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows"
Stars: ✭ 138 (+137.93%)
Mutual labels:  normalizing-flows
Normalizing Flows
Implementation of Normalizing flows on MNIST https://arxiv.org/abs/1505.05770
Stars: ✭ 14 (-75.86%)
Mutual labels:  normalizing-flows

Intensity-Free Learning of Temporal Point Processes

PyTorch implementation of the paper "Intensity-Free Learning of Temporal Point Processes", Oleksandr Shchur, Marin Biloš and Stephan Günnemann, ICLR 2020.

Refactored code

The master branch contains a refactored version of the code. Some of the original functionality is missing, but the code is much cleaner and should be easier to extend.

You can find the original code (used for experiments in the paper) on branch original-code.

Usage

To run the code, first install the dpp library, which contains all the algorithms described in the paper:

cd code
python setup.py install

The Jupyter notebook code/interactive.ipynb contains the code for training models on the datasets used in the paper. The same code can also be run as a Python script, code/train.py.
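For example, assuming train.py needs no additional command-line arguments (the dataset and model settings may be configured inside the script itself), training can be launched from the code directory:

cd code
python train.py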

Using your own data

You can save your custom dataset in the format used in our code as follows:

import torch

# Each sequence records the event arrival times and categorical marks observed in [t_start, t_end].
dataset = {
    "sequences": [
        {"arrival_times": [0.2, 4.5, 9.1], "marks": [1, 0, 4], "t_start": 0.0, "t_end": 10.0},
        {"arrival_times": [2.3, 3.3, 5.5, 8.15], "marks": [4, 3, 2, 2], "t_start": 0.0, "t_end": 10.0},
    ],
    "num_marks": 5,
}
torch.save(dataset, "data/my_dataset.pkl")
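As a quick sanity check (this is plain PyTorch, not part of the dpp API), you can load the saved file back and verify that each sequence is consistent:

import torch

dataset = torch.load("data/my_dataset.pkl")
print("num_marks:", dataset["num_marks"])
print("number of sequences:", len(dataset["sequences"]))
for seq in dataset["sequences"]:
    # every event needs a mark, and all arrival times must lie inside [t_start, t_end]
    assert len(seq["arrival_times"]) == len(seq["marks"])
    assert seq["t_start"] <= min(seq["arrival_times"]) <= max(seq["arrival_times"]) <= seq["t_end"]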

Defining new models

RecurrentTPP is the base class for marked TPP models.

You just need to inherit from it and implement the get_inter_time_dist method that defines how to obtain the distribution (an instance of torch.distributions.Distribution) over the inter-event times given the context vector. For example, have a look at the LogNormMix model from our paper. You can also change the get_features and get_context methods of RecurrentTPP to, for example, use a transformer instead of an RNN.
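As an illustration, here is a minimal sketch of such a subclass. It assumes that RecurrentTPP stores the size of the context vector in self.context_size and that get_inter_time_dist receives a context tensor of shape (batch_size, seq_len, context_size); check the LogNormMix source for the exact signature and for any additional methods the returned distribution must provide (e.g. a survival function for the last inter-event interval).

import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Exponential

from dpp.models import RecurrentTPP  # import path is an assumption, see code/dpp


class ExponentialTPP(RecurrentTPP):
    """Toy model: inter-event times are exponentially distributed,
    with the rate predicted from the context vector."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Map the context vector to a single unconstrained rate parameter.
        self.linear_rate = nn.Linear(self.context_size, 1)

    def get_inter_time_dist(self, context):
        # context: (batch_size, seq_len, context_size)
        rate = F.softplus(self.linear_rate(context)).squeeze(-1) + 1e-8
        # Return a torch.distributions.Distribution over positive inter-event times.
        return Exponential(rate)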

Mistakes in the old version

  • In the old code we normalized the NLL of each sequence by its number of events; this was incorrect. When computing the NLL for multiple TPP sequences, the NLL of every sequence must be divided by the same constant (see the sketch after this list).
  • In the old code we didn't include the survival time of the last event (i.e. the time from the last event until the end of the observed interval) in the NLL computation. This is fixed in the refactored version (this also seems to be a common mistake in other TPP implementations online).
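In code, the correct aggregation looks roughly like the following sketch (not the library implementation; names and shapes are illustrative):

def batch_nll(event_log_density, last_log_survival):
    # event_log_density: tensor of shape (batch_size,), sum of log-densities of the
    #                    inter-event times of each sequence
    # last_log_survival: tensor of shape (batch_size,), log-probability that no event
    #                    occurs between the last event and t_end (the survival term)
    per_seq_nll = -(event_log_density + last_log_survival)
    # Divide by the same constant for every sequence (here, the number of sequences),
    # NOT by each sequence's own number of events.
    return per_seq_nll.sum() / len(per_seq_nll)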

Requirements

numpy=1.16.4
pytorch=1.2.0
scikit-learn=0.21.2
scipy=1.3.1
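For example, the pinned versions above can be installed with pip (note that the PyTorch package is published on PyPI as torch):

pip install numpy==1.16.4 torch==1.2.0 scikit-learn==0.21.2 scipy==1.3.1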

Cite

Please cite our paper if you use the code or datasets in your own work:

@article{
    shchur2020intensity,
    title={Intensity-Free Learning of Temporal Point Processes},
    author={Oleksandr Shchur and Marin Bilo\v{s} and Stephan G\"{u}nnemann},
    journal={International Conference on Learning Representations (ICLR)},
    year={2020},
}