
chncwang / InsNet

License: Apache 2.0
InsNet runs instance-dependent neural networks with padding-free dynamic batching.

Programming Languages

C++
36643 projects - #6 most used programming language
Fortran
972 projects
C
50402 projects - #5 most used programming language
CMake
9771 projects
Cuda
1817 projects
Python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to InsNet

nuts-ml
Flow-based data pre-processing for deep learning
Stars: ✭ 32 (-44.83%)
Mutual labels:  deep-learning-library
mlpractical
Machine Learning Practical Course Code Repository
Stars: ✭ 26 (-55.17%)
Mutual labels:  deep-learning-library
Dandelion
A lightweight deep learning framework, on top of Theano, offering a better balance between flexibility and abstraction
Stars: ✭ 15 (-74.14%)
Mutual labels:  deep-learning-library
The Incredible PyTorch
The Incredible PyTorch: a curated list of tutorials, papers, projects, communities and more relating to PyTorch.
Stars: ✭ 8,584 (+14700%)
Mutual labels:  deep-learning-library
Lasagne
Lightweight library to build and train neural networks in Theano
Stars: ✭ 3,800 (+6451.72%)
Mutual labels:  deep-learning-library

InsNet documentation

InsNet (documentation) is a powerful neural network library aimed at building instance-dependent computation graphs. It is designed to support padding-free dynamic batching, allowing users to focus on building the model for a single instance. This design has at least four advantages:

  1. It can batch not only operators across a mini-batch but also operators within the same instance. For example, it can batch two parallel Transformers from the same instance.
  2. It makes it straightforward to build NLP models with instance-dependent computation graphs, such as Tree-LSTMs and hierarchical Transformers, and to execute them in batch (a minimal sketch follows this list).
  3. It frees users from the intellectual burden of manual batching, since InsNet efficiently takes over all batching procedures. As a result, users need not know the concept of a tensor, nor of padding, but only of matrices and vectors (a vector being a one-column matrix).
  4. It significantly reduces memory usage, since no padding is needed and lazy execution can release useless tensors immediately.
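
To make the single-instance programming model concrete, here is a minimal sketch. All names below (the insnet/insnet.h header, insnet::Graph, insnet::Node, insnet::embedding, insnet::lstm, and the ModelParams container) are assumptions modeled on the operator and module names mentioned in this README, not the verbatim InsNet API; consult the documentation for the real signatures.

#include <string>
#include <vector>
#include <insnet/insnet.h>  // assumed umbrella header

// Build the computation graph for ONE sentence of arbitrary length.
// Lazy execution later batches parallel operators across instances,
// so nothing is ever padded to a fixed length.
insnet::Node *encodeSentence(insnet::Graph &graph,
                             const std::vector<std::string> &words,
                             ModelParams &params) {  // ModelParams: hypothetical
    // Embedding lookup for a variable-length word sequence; the result is
    // a plain matrix with one column per word.
    insnet::Node *emb = insnet::embedding(graph, words, params.emb);
    // Encode this single instance, e.g., with a provided LSTM module.
    return insnet::lstm(*emb, params.encoder);
}

int main() {
    ModelParams params;  // hypothetical parameter container
    insnet::Graph graph;
    // Two instances of different lengths in the same mini-batch; each is
    // described independently, with no batch dimension in user code.
    insnet::Node *a = encodeSentence(graph, {"deep", "learning"}, params);
    insnet::Node *b = encodeSentence(graph, {"padding", "free", "batching"}, params);
    graph.forward();  // lazy execution: parallel operators are batched here
    return 0;
}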

To summarize, we believe that padding-free dynamic batching is a feature NLP practitioners will embrace, yet one that is surprisingly unsupported by today's deep learning libraries.

In addition, InsNet has the following features:

  1. It is written in C++14 and is built as a static library.
  2. For GPU computation, we write almost all CUDA kernels by hand, allowing efficient parallel computation on matrices of unaligned shapes.
  3. Both lazy and eager execution are supported, with the former enabling automatic batching and the latter facilitating debugging (sketched after this list).
  4. At present, it provides about thirty operators with both GPU and CPU implementations, supporting modern NLP models for sentence classification, sequence tagging, and language generation. It also provides NLP modules such as attention, RNNs, and the Transformer, built from these operators.
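
As a rough illustration of feature 3, the sketch below contrasts the two modes. The eager-mode constructor flag and the operator names are assumptions for illustration only; the actual switch and signatures are described in the documentation.

#include <insnet/insnet.h>  // assumed umbrella header

void lazyVsEager(ModelParams &params) {  // ModelParams: hypothetical
    // Lazy execution: operators only record themselves in the graph;
    // nothing runs until forward(), which batches parallel operators.
    insnet::Graph lazy_graph;
    insnet::Node *a = insnet::embedding(lazy_graph, {"hello"}, params.emb);
    insnet::Node *b = insnet::embedding(lazy_graph, {"world"}, params.emb);
    insnet::Node *ha = insnet::tanh(*a);  // recorded, not yet computed
    insnet::Node *hb = insnet::tanh(*b);  // recorded, not yet computed
    lazy_graph.forward();                 // the two tanh ops run as one batch

    // Eager execution (hypothetical constructor flag): each operator runs
    // immediately, so intermediate values can be inspected while debugging.
    insnet::Graph eager_graph(/*eager=*/true);
    insnet::Node *c = insnet::embedding(eager_graph, {"debug"}, params.emb);
    insnet::Node *hc = insnet::tanh(*c);  // computed right away
}

The point of the sketch is only the control flow: in lazy mode the batching decision is deferred until forward(), while in eager mode each node's value is available as soon as the node is created.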

Studies using InsNet are listed below, and we look forward to enriching this list:

InsNet is released under the Apache 2.0 license, allowing you to use it in any project. However, if you use InsNet for research, please cite the paper below and note that it describes an early version of InsNet, since the InsNet paper itself is not yet complete:

@article{wang2019n3ldg,
  title={N3LDG: A Lightweight Neural Network Library for Natural Language Processing},
  author={Wang, Qiansheng and Yu, Nan and Zhang, Meishan and Han, Zijia and Fu, Guohong},
  journal={Beijing Da Xue Xue Bao},
  volume={55},
  number={1},
  pages={113--119},
  year={2019},
  publisher={Acta Scientiarum Naturalium Universitatis Pekinensis}
}

Due to incorrect Git operations, the very early history of InsNet has been erased, but you can still see it in another repo.

If you have any questions about InsNet, feel free to post an issue or send me an email: [email protected]

See the documentation for more details.
