TwinBert - PyTorch implementation of the TwinBERT paper
Stars: ✭ 36 (-89.66%)
semantic-document-relations - Implementation, trained models, and result data for the paper "Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles"
Stars: ✭ 21 (-93.97%)
Similarity - A similarity computation toolkit written in Java, for similarity computation on words, phrases, and sentences, as well as lexical analysis, sentiment analysis, semantic analysis, and related tasks.
Stars: ✭ 760 (+118.39%)
beir - A Heterogeneous Benchmark for Information Retrieval. Easy to use; evaluate your models across 15+ diverse IR datasets.
Stars: ✭ 738 (+112.07%)
v-semantic - Semantic-UI 2 + Vue 2
Stars: ✭ 23 (-93.39%)
BERT-QE - Code and resources for the paper "BERT-QE: Contextualized Query Expansion for Document Re-ranking"
Stars: ✭ 43 (-87.64%)
BERTOverflow - A BERT model pre-trained on a StackOverflow corpus
Stars: ✭ 40 (-88.51%)
ReactionDecoder - Reaction Decoder Tool (RDT), an atom-atom mapping tool
Stars: ✭ 59 (-83.05%)
Fill-the-GAP - [ACL-WS] 4th-place solution to the gendered pronoun resolution challenge on Kaggle
Stars: ✭ 13 (-96.26%)
Duplicate-Image-Finder - difPy, a Python package for finding duplicate or similar images within folders
Stars: ✭ 187 (-46.26%)
opentrack-cg - Repository for the OpenTrack Community Group
Stars: ✭ 21 (-93.97%)
TriB-QA - "We take bragging seriously"
Stars: ✭ 45 (-87.07%)
AliceMind - ALIbaba's collection of encoder-decoders from the MinD (Machine IntelligeNce of Damo) Lab
Stars: ✭ 1,479 (+325%)
ganbert - Enhancing BERT training with semi-supervised generative adversarial networks
Stars: ✭ 205 (-41.09%)
sister - SImple SenTence EmbeddeR
Stars: ✭ 66 (-81.03%)
korpatbert - KorPatBERT, a Korean AI language model specialized for the patent domain
Stars: ✭ 48 (-86.21%)
Transformer-QG-on-SQuAD - Question generator implemented with SOTA pre-trained language models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (-91.95%)
neuro-comma - 🇷🇺 Production-ready punctuation restoration model for the Russian language 🇷🇺
Stars: ✭ 46 (-86.78%)
TabFormer - Code & data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP 2021)
Stars: ✭ 209 (-39.94%)
JMantic - Java library for connecting to sc-machine
Stars: ✭ 14 (-95.98%)
siamese dssm - Siamese DSSM for sentence similarity ranking, implemented in TensorFlow
Stars: ✭ 59 (-83.05%)
DiscEval - Discourse-based evaluation of language understanding
Stars: ✭ 18 (-94.83%)
CheXbert - Combining automatic labelers and expert annotations for accurate radiology report labeling using BERT
Stars: ✭ 51 (-85.34%)
NSL - Implementation of "Neural Similarity Learning" (NeurIPS 2019)
Stars: ✭ 33 (-90.52%)
nxontology - NetworkX-based Python library for representing ontologies
Stars: ✭ 45 (-87.07%)
pyhaystack - Pyhaystack is a module that allows Python programs to connect to a Haystack server (project-haystack.org). Connections can be established with the Niagara platform running nhaystack, SkySpark, and WideSky. For this to work with the Anaconda IPython Notebook on Windows, be sure to run "python setup.py install" from the Anaconda Command Prompt.…
Stars: ✭ 57 (-83.62%)
bert attn viz - Visualize BERT's self-attention layers on text classification tasks
Stars: ✭ 41 (-88.22%)
oreilly-bert-nlp - Code for the O'Reilly Live Online Training for BERT
Stars: ✭ 19 (-94.54%)
LAMB Optimizer TF - LAMB optimizer for large-batch training (TensorFlow version)
Stars: ✭ 119 (-65.8%)
vektonn - vektonn.github.io/vektonn
Stars: ✭ 109 (-68.68%)
FasterTransformer - Transformer-related optimizations, including BERT and GPT
Stars: ✭ 1,571 (+351.44%)
NLPDataAugmentation - Chinese NLP data augmentation, BERT contextual augmentation
Stars: ✭ 94 (-72.99%)
question generator - An NLP system for generating reading comprehension questions
Stars: ✭ 188 (-45.98%)
Kaleido-BERT - (CVPR 2021) Kaleido-BERT: Vision-Language Pre-training on Fashion Domain
Stars: ✭ 252 (-27.59%)
wisdomify - A BERT-based reverse dictionary of Korean proverbs
Stars: ✭ 95 (-72.7%)
Sohu2019 - 2019 Sohu Campus Algorithm Competition
Stars: ✭ 26 (-92.53%)
cmrc2019 - A sentence cloze dataset for Chinese machine reading comprehension (CMRC 2019)
Stars: ✭ 118 (-66.09%)
ExpBERT - Code for our ACL 2020 paper "Representation Engineering with Natural Language Explanations"
Stars: ✭ 28 (-91.95%)
OpenUE - OpenUE is a lightweight knowledge graph extraction toolkit (An Open Toolkit for Universal Extraction from Text, published at EMNLP 2020: https://aclanthology.org/2020.emnlp-demos.1.pdf)
Stars: ✭ 274 (-21.26%)
NDD - Drug-drug interaction prediction by neural network using integrated similarity
Stars: ✭ 25 (-92.82%)
R-AT - Regularized Adversarial Training
Stars: ✭ 19 (-94.54%)
CAIL - Competition model for the CAIL 2019 "Legal Research Cup" reading comprehension task
Stars: ✭ 34 (-90.23%)
backprop - Backprop makes it simple to use, fine-tune, and deploy state-of-the-art ML models
Stars: ✭ 229 (-34.2%)
semantic role labeling deep learning - SRL deep learning model based on DB-LSTM, as described in the paper [End-to-end learning of semantic role labeling using recurrent neural networks](http://www.aclweb.org/anthology/P15-1109)
Stars: ✭ 20 (-94.25%)