
CyberZHG / keras-pos-embd

License: MIT
Position embedding layers in Keras

Programming Languages

python
shell

Projects that are alternatives to or similar to keras-pos-embd

playing with vae
Comparing FC VAE / FCN VAE / PCA / UMAP on MNIST / FMNIST
Stars: ✭ 53 (-13.11%)
Mutual labels:  embedding
Embedding
A summary of embedding model code and study notes
Stars: ✭ 25 (-59.02%)
Mutual labels:  embedding
Chinese Word Vectors
100+ Chinese Word Vectors: hundreds of pre-trained Chinese word vectors
Stars: ✭ 9,548 (+15552.46%)
Mutual labels:  embedding
text-classification-cn
Chinese text classification practice based on the Sogou news corpus, using traditional machine learning methods as well as pre-trained models
Stars: ✭ 81 (+32.79%)
Mutual labels:  embedding
RolX
An alternative implementation of Recursive Feature and Role Extraction (KDD11 & KDD12)
Stars: ✭ 52 (-14.75%)
Mutual labels:  embedding
XLNet embbeding
Using XLNet as an embedding layer in Keras
Stars: ✭ 32 (-47.54%)
Mutual labels:  embedding
GLOM-TensorFlow
An attempt at an implementation of GLOM, Geoffrey Hinton's paper on emergent part-whole hierarchies from data
Stars: ✭ 32 (-47.54%)
Mutual labels:  embedding
Cool-NLPCV
Some Cool NLP and CV Repositories and Solutions (a collection of open-source solutions, datasets, tools, and learning materials for common NLP tasks)
Stars: ✭ 143 (+134.43%)
Mutual labels:  embedding
pymde
Minimum-distortion embedding with PyTorch
Stars: ✭ 420 (+588.52%)
Mutual labels:  embedding
Milvus
An open-source vector database for embedding similarity search and AI applications.
Stars: ✭ 9,015 (+14678.69%)
Mutual labels:  embedding
BERT-embedding
A simple wrapper class for extracting features (embeddings) and comparing them using BERT in TensorFlow
Stars: ✭ 24 (-60.66%)
Mutual labels:  embedding
walklets
A lightweight implementation of Walklets from "Don't Walk Skip! Online Learning of Multi-scale Network Embeddings" (ASONAM 2017).
Stars: ✭ 94 (+54.1%)
Mutual labels:  embedding
NMFADMM
A sparsity aware implementation of "Alternating Direction Method of Multipliers for Non-Negative Matrix Factorization with the Beta-Divergence" (ICASSP 2014).
Stars: ✭ 39 (-36.07%)
Mutual labels:  embedding
AnnA Anki neuronal Appendix
Using machine learning on your anki collection to enhance the scheduling via semantic clustering and semantic similarity
Stars: ✭ 39 (-36.07%)
Mutual labels:  embedding
Awesome Community Detection
A curated list of community detection research papers with implementations.
Stars: ✭ 1,874 (+2972.13%)
Mutual labels:  embedding
exembed
Go Embed experiments
Stars: ✭ 27 (-55.74%)
Mutual labels:  embedding
FSCNMF
An implementation of "Fusing Structure and Content via Non-negative Matrix Factorization for Embedding Information Networks".
Stars: ✭ 16 (-73.77%)
Mutual labels:  embedding
fastwalk
A multi-thread implementation of node2vec random walk.
Stars: ✭ 24 (-60.66%)
Mutual labels:  embedding
Siamese Triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch
Stars: ✭ 2,564 (+4103.28%)
Mutual labels:  embedding
TransE-Knowledge-Graph-Embedding
TensorFlow implementation of TransE and its extended models for Knowledge Representation Learning
Stars: ✭ 64 (+4.92%)
Mutual labels:  embedding

Keras Position Embedding

[中文|English]

Position embedding layers in Keras.

Install

pip install keras-pos-embd

Usage

Trainable Embedding

from tensorflow import keras
from keras_pos_embd import PositionEmbedding

model = keras.models.Sequential()
model.add(PositionEmbedding(
    input_shape=(None,),
    input_dim=10,     # The maximum absolute value of positions.
    output_dim=2,     # The dimension of embeddings.
    mask_zero=10000,  # The index that represents padding (because `0` will be used in relative positioning).
    mode=PositionEmbedding.MODE_EXPAND,
))
model.compile('adam', 'mse')
model.summary()
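
As a quick sanity check (a sketch, not from the original README), you can run the model above on a small batch of position indices; the assumption here is that in `expand` mode a `(batch, seq_len)` input of integer positions yields a `(batch, seq_len, output_dim)` tensor:

import numpy as np

# Sketch only: positions should stay within the configured `input_dim` range.
positions = np.array([[1, 2, 3, 4]])   # shape (1, 4)
print(model.predict(positions).shape)  # expected: (1, 4, 2)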

Note that you don't need to enable `mask_zero` on the position embedding if you add or concatenate it with other layers, such as word embeddings, that already carry masks:

from tensorflow import keras
from keras_pos_embd import PositionEmbedding

model = keras.models.Sequential()
model.add(keras.layers.Embedding(
    input_shape=(None,),
    input_dim=10,
    output_dim=5,
    mask_zero=True,
))
model.add(PositionEmbedding(
    input_dim=100,
    output_dim=5,
    mode=PositionEmbedding.MODE_ADD,
))
model.compile('adam', 'mse')
model.summary()
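
`PositionEmbedding` also appears to support a concat mode (`PositionEmbedding.MODE_CONCAT`) that appends the position vectors to the incoming features instead of adding them. A minimal sketch, assuming that constant exists and that the concatenated feature size is the token embedding width plus `output_dim`:

from tensorflow import keras
from keras_pos_embd import PositionEmbedding

model = keras.models.Sequential()
model.add(keras.layers.Embedding(
    input_shape=(None,),
    input_dim=10,
    output_dim=5,
    mask_zero=True,
))
model.add(PositionEmbedding(
    input_dim=100,
    output_dim=5,
    mode=PositionEmbedding.MODE_CONCAT,  # assumed: output features become 5 + 5 = 10
))
model.compile('adam', 'mse')
model.summary()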

Sin & Cos Embedding

The sine and cosine embedding has no trainable weights. The layer has three modes; in `expand` mode it works just like `PositionEmbedding`:

from tensorflow import keras
from keras_pos_embd import TrigPosEmbedding

model = keras.models.Sequential()
model.add(TrigPosEmbedding(
    input_shape=(None,),
    output_dim=30,                      # The dimension of embeddings.
    mode=TrigPosEmbedding.MODE_EXPAND,  # Use `expand` mode
))
model.compile('adam', 'mse')
model.summary()
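
The same kind of shape check (again a sketch, under the assumption that `expand` mode consumes raw position indices) can be run on this model; no weights are involved, so the output is deterministic:

import numpy as np

# Sketch only: sinusoidal vectors are computed on the fly, nothing is trained.
positions = np.array([[0, 1, 2, 3]])   # shape (1, 4)
print(model.predict(positions).shape)  # expected: (1, 4, 30)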

If you want to add this embedding to an existing embedding, there is no need for a separate position input in `add` mode:

from tensorflow import keras
from keras_pos_embd import TrigPosEmbedding

model = keras.models.Sequential()
model.add(keras.layers.Embedding(
    input_shape=(None,),
    input_dim=10,
    output_dim=5,
    mask_zero=True,
))
model.add(TrigPosEmbedding(
    output_dim=5,
    mode=TrigPosEmbedding.MODE_ADD,
))
model.compile('adam', 'mse')
model.summary()
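
A final sketch (an assumption about shapes, not part of the original README): feeding dummy token ids through the `add`-mode model above should leave the word-embedding shape unchanged, since the sinusoidal vectors are added element-wise:

import numpy as np

# Sketch only: token ids must be smaller than the Embedding layer's `input_dim`.
token_ids = np.array([[1, 2, 3, 4, 0, 0]])  # shape (1, 6); zeros are masked padding
print(model.predict(token_ids).shape)       # expected: (1, 6, 5)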