Spark NLP - State-of-the-Art Natural Language Processing
Stars: ✭ 2,518 (+2108.77%)
classy - A simple-to-use library for building high-performance Machine Learning models in NLP.
Stars: ✭ 61 (-46.49%)
eve-bot - EVE bot, a customer-service chatbot to enhance virtual engagement for Twitter Apple Support.
Stars: ✭ 31 (-72.81%)
Cluener2020 - CLUENER2020: fine-grained named entity recognition for Chinese.
Stars: ✭ 689 (+504.39%)
Pytorch-NLU - A Chinese text classification and sequence labeling toolkit; supports multi-class and multi-label classification of Chinese long and short texts, and sequence labeling tasks such as Chinese named entity recognition, part-of-speech tagging, and word segmentation.
Stars: ✭ 151 (+32.46%)
deep-atrous-ner - Deep-Atrous-CNN-NER: word-level model for Named Entity Recognition.
Stars: ✭ 35 (-69.3%)
TorchBlocks - A PyTorch-based toolkit for natural language processing.
Stars: ✭ 85 (-25.44%)
Kashgari - A production-level NLP transfer-learning framework built on top of tf.keras for text labeling and text classification; includes Word2Vec, BERT, and GPT2 language embeddings.
Stars: ✭ 2,235 (+1860.53%)
Simpletransformers - Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI.
Stars: ✭ 2,881 (+2427.19%)
adversarial-code-generation - Source code for the ICLR 2021 paper "Generating Adversarial Computer Programs using Optimized Obfuscations".
Stars: ✭ 16 (-85.96%)
simple NER - Simple rule-based named entity recognition.
Stars: ✭ 29 (-74.56%)
keras-chatbot-web-api - Simple Keras chatbot using a seq2seq model, served over the web with Flask.
Stars: ✭ 51 (-55.26%)
minimal-nmt - A minimal NMT example to serve as a seq2seq+attention reference.
Stars: ✭ 36 (-68.42%)
language-planner - Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents".
Stars: ✭ 84 (-26.32%)
gnn-lspe - Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022.
Stars: ✭ 165 (+44.74%)
DocSum - A tool to automatically summarize documents abstractively using the BART or PreSumm machine learning models.
Stars: ✭ 58 (-49.12%)
AlpacaTag - An Active-Learning-based Crowd Annotation Framework for Sequence Tagging (ACL 2019 Demo).
Stars: ✭ 126 (+10.53%)
Nuts - A testbed for solutions to common NLP tasks, mainly text classification, sequence labeling, and question answering.
Stars: ✭ 21 (-81.58%)
DeepLearning-Lab - Code lab for deep learning, including RNNs, seq2seq, word2vec, cross entropy, bidirectional RNNs, convolution and pooling operations, InceptionV3, and transfer learning.
Stars: ✭ 83 (-27.19%)
xpandas - Universal 1D/2D data containers with transformer functionality for data analysis.
Stars: ✭ 25 (-78.07%)
modules - The official repository for the paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks": a method for analyzing emergent functional modularity in neural networks based on differentiable weight masks, used to point out important issues in current-day neural networks.
Stars: ✭ 25 (-78.07%)
TransCenter - Official implementation of TransCenter; code and pretrained models are available at https://gitlab.inria.fr/yixu/TransCenter_official.
Stars: ✭ 82 (-28.07%)
neji - Flexible and powerful platform for biomedical information extraction from text.
Stars: ✭ 37 (-67.54%)
iPerceive - Applying common-sense reasoning to multi-modal dense video captioning and video question answering; Python 3, PyTorch, CNNs, LSTMs, Transformers, multi-head self-attention. Published at the IEEE Winter Conference on Applications of Computer Vision (WACV) 2021.
Stars: ✭ 52 (-54.39%)
chatbot - TensorFlow chatbot covering KBQA, task-oriented QA, seq2seq, and IR, backed by Neo4j and Jena.
Stars: ✭ 32 (-71.93%)
deepfrog - An NLP suite powered by deep learning.
Stars: ✭ 16 (-85.96%)
transformer - A PyTorch implementation of "Attention Is All You Need".
Stars: ✭ 28 (-75.44%)
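The core operation the paper above introduces is scaled dot-product attention. As a rough orientation (not code from any of the listed repositories), a minimal NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_queries, n_keys) similarity scores
    # Numerically stable row-wise softmax over the keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights  # each output row is a weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # 2 queries, d_k = 4
K = rng.normal(size=(3, 4))  # 3 keys
V = rng.normal(size=(3, 4))  # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one value-mix per query
```

The 1/sqrt(d_k) scaling keeps the dot products from saturating the softmax as the key dimension grows.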
optimum - 🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools.
Stars: ✭ 567 (+397.37%)
slotminer - Tool for slot extraction from text.
Stars: ✭ 15 (-86.84%)
FDDC - Named entity recognition and relation classification.
Stars: ✭ 29 (-74.56%)
pytorch-vit - "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale".
Stars: ✭ 250 (+119.3%)
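The "16x16 words" in the ViT paper's title refers to splitting an image into fixed-size patches that are flattened and fed to a transformer as tokens. A minimal sketch of that patching step, assuming NumPy (illustrative only, not taken from the repository above):

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an (H, W, C) image into flattened non-overlapping patch vectors."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0, "image must tile evenly"
    # Carve the image into a (rows, cols) grid of patch x patch x C blocks,
    # then flatten each block into one token vector.
    patches = (img.reshape(H // patch, patch, W // patch, patch, C)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(-1, patch * patch * C))
    return patches  # (num_patches, patch * patch * C)

img = np.zeros((224, 224, 3))       # standard ImageNet-sized input
tokens = image_to_patches(img)
print(tokens.shape)                  # (196, 768): a 14x14 grid of "words"
```

In the full model each such vector is then linearly projected to the transformer's hidden dimension and given a learned position embedding.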
InformationExtractionSystem - Performs NLP tasks such as named entity recognition, sentence simplification, and relation extraction.
Stars: ✭ 27 (-76.32%)
fiction generator - Fiction generator built with TensorFlow that imitates the writing style of Wang Xiaobo.
Stars: ✭ 27 (-76.32%)
WellcomeML - Repository for machine learning utilities at the Wellcome Trust.
Stars: ✭ 31 (-72.81%)
molecule-attention-transformer - PyTorch reimplementation of the Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules.
Stars: ✭ 46 (-59.65%)
lingvo--Ner-ru - Named entity recognition (NER) in Russian texts.
Stars: ✭ 38 (-66.67%)
bert-squeeze - 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡.
Stars: ✭ 56 (-50.88%)
converse - Conversational text analysis using various NLP techniques.
Stars: ✭ 147 (+28.95%)
sentence2vec - Deep sentence embedding using sequence-to-sequence learning.
Stars: ✭ 23 (-79.82%)
nervaluate - Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval 2013.
Stars: ✭ 40 (-64.91%)
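Entity-level evaluation scores whole predicted entities against gold entities rather than per-token tags. As a rough illustration of the idea (this is not nervaluate's API, and SemEval 2013 defines several matching schemes beyond the strict one sketched here), assuming entities are represented as hypothetical (type, start, end) tuples:

```python
def strict_entity_prf(gold, pred):
    """Strict entity-level precision/recall/F1: a prediction counts as a true
    positive only if both its span boundaries and its type exactly match a
    gold entity."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [("PER", 0, 2), ("LOC", 5, 6)]
pred = [("PER", 0, 2), ("ORG", 5, 6)]  # correct span but wrong type: no credit
print(strict_entity_prf(gold, pred))   # (0.5, 0.5, 0.5)
```

Token-level scores can look deceptively high when boundaries are slightly off; entity-level metrics like this expose those errors directly.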
deep-keyphrase - Seq2seq-based keyphrase generation models, including CopyRNN, CopyCNN, and CopyTransformer.
Stars: ✭ 51 (-55.26%)
trinity-ie - Information extraction pipeline with coreference resolution, named entity linking, and relation extraction.
Stars: ✭ 59 (-48.25%)
Shukongdashi - A fault-diagnosis expert system for the CNC (computer numerical control) machining domain, built in Python using knowledge graphs, natural language processing, and convolutional neural networks.
Stars: ✭ 109 (-4.39%)
metamaplite - A near-real-time named entity recognizer.
Stars: ✭ 37 (-67.54%)
YodaSpeak - Translating English to Yoda-English using sequence-to-sequence with TensorFlow.
Stars: ✭ 25 (-78.07%)
lightning-transformers - Flexible components pairing 🤗 Transformers with PyTorch Lightning.
Stars: ✭ 551 (+383.33%)
CVAE Dial - CVAE_XGate model from the paper "Xu, Dusek, Konstas, Rieser. Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity".
Stars: ✭ 16 (-85.96%)
Shakespearizing-Modern-English - Code for "Jhamtani H.*, Gangal V.*, Hovy E. and Nyberg E. Shakespearizing Modern Language Using Copy-Enriched Sequence to Sequence Models", Workshop on Stylistic Variation, EMNLP 2017.
Stars: ✭ 64 (-43.86%)