benedekrozemberczki / TADW

License: GPL-3.0
An implementation of "Network Representation Learning with Rich Text Information" (IJCAI '15).

Projects that are alternatives to or similar to TADW

Gemsec
The TensorFlow reference implementation of 'GEMSEC: Graph Embedding with Self Clustering' (ASONAM 2019).
Stars: ✭ 210 (+388.37%)
Mutual labels:  unsupervised-learning, word2vec, matrix-factorization, gensim
RolX
An alternative implementation of Recursive Feature and Role Extraction (KDD11 & KDD12)
Stars: ✭ 52 (+20.93%)
Mutual labels:  word2vec, matrix-factorization, gensim, unsupervised-learning
Gensim
Topic Modelling for Humans
Stars: ✭ 12,763 (+29581.4%)
Mutual labels:  data-science, data-mining, word2vec, gensim
Bagofconcepts
Python implementation of bag-of-concepts
Stars: ✭ 18 (-58.14%)
Mutual labels:  unsupervised-learning, word2vec, text-mining
Nlp In Practice
Starter code to solve real world text data problems. Includes: Gensim Word2Vec, phrase embeddings, Text Classification with Logistic Regression, word count with pyspark, simple text preprocessing, pre-trained embeddings and more.
Stars: ✭ 790 (+1737.21%)
Mutual labels:  word2vec, text-mining, gensim
Php Ml
PHP-ML - Machine Learning library for PHP
Stars: ✭ 7,900 (+18272.09%)
Mutual labels:  data-science, data-mining, unsupervised-learning
Awesome Community Detection
A curated list of community detection research papers with implementations.
Stars: ✭ 1,874 (+4258.14%)
Mutual labels:  data-science, unsupervised-learning, matrix-factorization
Vizuka
Explore high-dimensional datasets and how your algo handles specific regions.
Stars: ✭ 100 (+132.56%)
Mutual labels:  data-science, data-mining, unsupervised-learning
Danmf
A sparsity aware implementation of "Deep Autoencoder-like Nonnegative Matrix Factorization for Community Detection" (CIKM 2018).
Stars: ✭ 161 (+274.42%)
Mutual labels:  data-science, unsupervised-learning, word2vec
NMFADMM
A sparsity aware implementation of "Alternating Direction Method of Multipliers for Non-Negative Matrix Factorization with the Beta-Divergence" (ICASSP 2014).
Stars: ✭ 39 (-9.3%)
Mutual labels:  word2vec, matrix-factorization, unsupervised-learning
Mlxtend
A library of extension and helper modules for Python's data analysis and machine learning libraries.
Stars: ✭ 3,729 (+8572.09%)
Mutual labels:  data-science, data-mining, unsupervised-learning
Aravec
AraVec is a pre-trained distributed word representation (word embedding) open source project which aims to provide the Arabic NLP research community with free to use and powerful word embedding models.
Stars: ✭ 239 (+455.81%)
Mutual labels:  word2vec, text-mining, gensim
Shallowlearn
An experiment about re-implementing supervised learning models based on shallow neural network approaches (e.g. fastText) with some additional exclusive features and nice API. Written in Python and fully compatible with Scikit-learn.
Stars: ✭ 196 (+355.81%)
Mutual labels:  word2vec, text-mining, gensim
Pyod
A Python Toolbox for Scalable Outlier Detection (Anomaly Detection)
Stars: ✭ 5,083 (+11720.93%)
Mutual labels:  data-science, data-mining, unsupervised-learning
Gwu data mining
Materials for GWU DNSC 6279 and DNSC 6290.
Stars: ✭ 217 (+404.65%)
Mutual labels:  data-science, data-mining, text-mining
Artificial Adversary
🗣️ Tool to generate adversarial text examples and test machine learning models against them
Stars: ✭ 348 (+709.3%)
Mutual labels:  data-science, data-mining, text-mining
Graph2vec
A parallel implementation of "graph2vec: Learning Distributed Representations of Graphs" (MLGWorkshop 2017).
Stars: ✭ 605 (+1306.98%)
Mutual labels:  unsupervised-learning, word2vec, matrix-factorization
Cookbook 2nd
IPython Cookbook, Second Edition, by Cyrille Rossant, Packt Publishing 2018
Stars: ✭ 704 (+1537.21%)
Mutual labels:  data-science, data-mining
Text2vec
Fast vectorization, topic modeling, distances and GloVe word embeddings in R.
Stars: ✭ 715 (+1562.79%)
Mutual labels:  word2vec, text-mining
Dataproofer
A proofreader for your data
Stars: ✭ 628 (+1360.47%)
Mutual labels:  data-science, data-mining

TADW

An implementation of **Network Representation Learning with Rich Text Information**. Text Attributed Deep Walk (TADW) is a node embedding algorithm which learns an embedding of nodes and fuses the node representations with node attributes. The procedure places nodes in an abstract feature space where information about fixed-order proximity is preserved and the attributes of neighbours within that proximity are also part of the representation. TADW learns the joint feature-proximal representations using regularized non-negative matrix factorization. Our implementation assumes that the proximity matrix used in the approximation is sparse, hence the runtime can be linear in the number of nodes for a low proximity order. For a large proximity order (larger than the graph diameter) the runtime is quadratic. The model can treat the node-feature matrix as either sparse or dense, which changes the runtime considerably.
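For reference, the optimization problem solved by TADW, as formulated in the paper, is roughly the following, where M is the node proximity matrix, T is the node-feature (text) matrix, and the learned factors W and HT are concatenated to form the final representations:

    \min_{W,H} \; \lVert M - W^{\top} H T \rVert_F^2 + \frac{\lambda}{2}\left(\lVert W \rVert_F^2 + \lVert H \rVert_F^2\right)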

The model is now also available in the Karate Club package.
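A minimal usage sketch with Karate Club (the class name and call signature below assume a recent karateclub release; the toy graph and random features are purely illustrative, and Karate Club uses its own default hyperparameters rather than the ones in this repository):

```python
import networkx as nx
import numpy as np
from karateclub import TADW

# Toy graph whose nodes are indexed 0..n-1, as Karate Club expects.
graph = nx.newman_watts_strogatz_graph(100, 10, 0.2)

# Random dense node-feature matrix, purely for illustration.
features = np.random.uniform(0, 1, (100, 200))

model = TADW()
model.fit(graph, features)
embedding = model.get_embedding()  # one row per node
```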

This repository provides an implementation for TADW as described in the paper:

Network Representation Learning with Rich Text Information. Yang Cheng, Liu Zhiyuan, Zhao Deli, Sun Maosong and Chang Edward Y. IJCAI, 2015. https://www.ijcai.org/Proceedings/15/Papers/299.pdf

The original MATLAB implementation is available [here], while another Python implementation is available [here].

Requirements

The codebase is implemented in Python 2.7. The package versions used for development are listed below.

networkx          2.4
tqdm              4.28.1
numpy             1.15.4
pandas            0.23.4
texttable         1.5.0
scipy             1.1.0
argparse          1.1.0
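
Apart from argparse, which ships with Python, these packages can be installed with pip, for example:

$ pip install networkx==2.4 tqdm==4.28.1 numpy==1.15.4 pandas==0.23.4 texttable==1.5.0 scipy==1.1.0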

Datasets

The code takes an input graph in a csv file. Every row indicates an edge between two nodes separated by a comma. The first row is a header. Nodes should be indexed starting with 0. Sample graphs for the `Wikipedia Chameleons` and `Wikipedia Giraffes` are included in the `input/` directory.
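For illustration, a tiny edge list in this format might look like the snippet below (the header names here are placeholders, not necessarily those used in the bundled sample files):

    node_1,node_2
    0,1
    0,2
    1,3
    2,3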

The feature matrix can be stored two ways:

If the feature matrix is sparse and binary, it is stored as a JSON file. Node ids are the keys, and for each node the ids of its feature columns are stored as a list. The feature matrix is structured as follows:

{ 0: [0, 1, 38, 1968, 2000, 52727],
  1: [10000, 20, 3],
  2: [],
  ...
  n: [2018, 10000]}
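
A minimal sketch of reading this JSON layout into a SciPy sparse node-feature matrix (an illustration, not the repository's own loader):

```python
import json

import numpy as np
from scipy.sparse import coo_matrix

def read_sparse_features(path):
    """Load a {node id: [feature column ids]} JSON file as a binary sparse matrix."""
    with open(path) as source:
        features = json.load(source)
    rows, cols = [], []
    for node, feature_ids in features.items():
        for feature_id in feature_ids:
            rows.append(int(node))
            cols.append(int(feature_id))
    shape = (max(int(node) for node in features) + 1, max(cols) + 1)
    values = np.ones(len(rows))
    return coo_matrix((values, (rows, cols)), shape=shape).tocsr()
```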

If the feature matrix is dense, it is assumed to be stored as a comma-separated CSV file. It has a header, the first column contains node identifiers, and the rows are sorted by these identifiers. It should look like this:

| NODE ID | Feature 1 | Feature 2 | Feature 3 | Feature 4 |
|---------|-----------|-----------|-----------|-----------|
| 0       | 3         | 0         | 1.37      | 1         |
| 1       | 1         | 1         | 2.54      | -11       |
| 2       | 2         | 0         | 1.08      | -12       |
| 3       | 1         | 1         | 1.22      | -4        |
| ...     | ...       | ...       | ...       | ...       |
| n       | 5         | 0         | 2.47      | 21        |
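
Reading the dense layout is straightforward with pandas; again, a sketch rather than the repository's exact code:

```python
import pandas as pd

def read_dense_features(path):
    """Load the dense feature CSV, sort rows by node id, and drop the id column."""
    features = pd.read_csv(path)
    features = features.sort_values(features.columns[0])
    return features.iloc[:, 1:].values
```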

Options

Learning of the embedding is handled by the `src/main.py` script which provides the following command line arguments.

Input and output options

  --edge-path      STR      Input graph path.           Default is `input/chameleon_edges.csv`.
  --feature-path   STR      Input Features path.        Default is `input/chameleon_features.json`.
  --output-path    STR      Embedding path.             Default is `output/chameleon_tadw.csv`.

Model options

  --dimensions     INT        Number of embedding dimensions.                    Default is 32.
  --order          INT        Order of adjacency matrix powers.                  Default is 2.
  --iterations     INT        Number of gradient descent iterations.             Default is 200.
  --alpha          FLOAT      Learning rate.                                     Default is 10**-6.
  --lambd          FLOAT      Regularization term coefficient.                   Default is 1000.0.  
  --lower-control  FLOAT      Overflow control parameter.                        Default is 10**-15.
  --features       STR        Structure of the feature matrix.                   Default is `sparse`. 
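
These flags map onto a standard argparse parser. A hedged sketch of how they could be declared (the repository's actual parameter parser may differ in its details and function names):

```python
import argparse

def parameter_parser():
    """Command line argument parser mirroring the options listed above."""
    parser = argparse.ArgumentParser(description="Run TADW.")
    parser.add_argument("--edge-path", default="input/chameleon_edges.csv", help="Input graph path.")
    parser.add_argument("--feature-path", default="input/chameleon_features.json", help="Input features path.")
    parser.add_argument("--output-path", default="output/chameleon_tadw.csv", help="Embedding path.")
    parser.add_argument("--dimensions", type=int, default=32, help="Number of embedding dimensions.")
    parser.add_argument("--order", type=int, default=2, help="Order of adjacency matrix powers.")
    parser.add_argument("--iterations", type=int, default=200, help="Number of gradient descent iterations.")
    parser.add_argument("--alpha", type=float, default=10**-6, help="Learning rate.")
    parser.add_argument("--lambd", type=float, default=1000.0, help="Regularization term coefficient.")
    parser.add_argument("--lower-control", type=float, default=10**-15, help="Overflow control parameter.")
    parser.add_argument("--features", default="sparse", help="Structure of the feature matrix (sparse or dense).")
    return parser.parse_args()
```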

Examples

The following commands learn a graph embedding and write the embedding to disk. The node representations are ordered by node ID.

Creating a sparse TADW embedding of the default dataset with the default hyperparameter settings. Saving the embedding at the default path.

$ python src/main.py

Creating a TADW embedding of the default dataset with 128x2 dimensions and approximation order 1.

$ python src/main.py --dimensions 128 --order 1

Creating a TADW embedding with high regularization.

$ python src/main.py --lambd 2000

Creating an embedding of another dataset with dense features, the Wikipedia Giraffes. Saving the output in a custom folder.

$ python src/main.py --edge-path input/giraffe_edges.csv --feature-path input/giraffe_features.csv --output-path output/giraffe_tadw.csv --features dense
