head-qa - HEAD-QA: A Healthcare Dataset for Complex Reasoning
Stars: ✭ 20 (-56.52%)
DVQA dataset - DVQA Dataset: A bar chart question answering dataset presented at CVPR 2018
Stars: ✭ 20 (-56.52%)
KrantikariQA - An information-gain-based question answering system over knowledge graphs.
Stars: ✭ 54 (+17.39%)
erc - Emotion recognition in conversation
Stars: ✭ 34 (-26.09%)
OpenDialog - An open-source package for a Chinese open-domain conversational chatbot (Chinese chit-chat dialogue system; deploy a WeChat chit-chat bot with one click)
Stars: ✭ 94 (+104.35%)
bern - A neural named entity recognition and multi-type normalization tool for biomedical text mining
Stars: ✭ 151 (+228.26%)
policy-data-analyzer - Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US, and India. Bringing NLP to the world of policy analysis through an extensible framework that includes scraping, preprocessing, active learning, and text analysis pipelines.
Stars: ✭ 22 (-52.17%)
TextPair - Text pair relation comparison: semantic similarity, literal similarity, textual entailment, and more
Stars: ✭ 44 (-4.35%)
Mengzi - Mengzi Pretrained Models
Stars: ✭ 238 (+417.39%)
golgotha - Contextualised embeddings and language modelling with BERT and friends in R
Stars: ✭ 39 (-15.22%)
NER-FunTool - This NER project covers several Chinese datasets; models include BiLSTM+CRF, BERT+Softmax, BERT+Cascade, and BERT+WOL, with TFServing used for model deployment and both online and offline inference.
Stars: ✭ 56 (+21.74%)
deformer - [ACL 2020] DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering
Stars: ✭ 111 (+141.3%)
bert-as-a-service TFX - End-to-end pipeline with TFX to train and deploy a BERT model for sentiment analysis.
Stars: ✭ 32 (-30.43%)
PersianQA - Persian (Farsi) Question Answering Dataset (+ Models)
Stars: ✭ 114 (+147.83%)
text2class - Multi-class text categorization using state-of-the-art pre-trained contextualized language models, e.g. BERT
Stars: ✭ 15 (-67.39%)
ParsBigBird - Persian BERT for long-range sequences
Stars: ✭ 58 (+26.09%)
semantic-document-relations - Implementation, trained models, and result data for the paper "Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles"
Stars: ✭ 21 (-54.35%)
denspi - Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index (DenSPI)
Stars: ✭ 188 (+308.7%)
LMMS - Language Modelling Makes Sense: WSD (and more) with contextual embeddings
Stars: ✭ 79 (+71.74%)
VideoBERT - Using VideoBERT to tackle video prediction
Stars: ✭ 56 (+21.74%)
TOEFL-QA - A question answering dataset for machine comprehension of spoken content
Stars: ✭ 61 (+32.61%)
ganbert-pytorch - Enhancing BERT training with semi-supervised generative adversarial networks in PyTorch/Hugging Face
Stars: ✭ 60 (+30.43%)
patrick-wechat - ⭐️🐟 A questionnaire WeChat mini-program built with Taro, taro-ui, and heart
Stars: ✭ 74 (+60.87%)
TEXTOIR - TEXTOIR is a flexible toolkit for open intent detection and discovery. (ACL 2021)
Stars: ✭ 31 (-32.61%)
CoronaXiv - First Prize in HackJaipur Hackathon 2020 for Best ElasticSearch-based Product! Website: http://coronaxiv2.surge.sh/#/
Stars: ✭ 15 (-67.39%)
ODSQA - ODSQA: Open-Domain Spoken Question Answering Dataset
Stars: ✭ 43 (-6.52%)
productqa - Product-Aware Answer Generation in E-Commerce Question-Answering
Stars: ✭ 29 (-36.96%)
NCE-CNN-Torch - Noise-Contrastive Estimation for Question Answering with Convolutional Neural Networks (Rao et al., CIKM 2016)
Stars: ✭ 54 (+17.39%)
cherche - 📑 Neural Search
Stars: ✭ 196 (+326.09%)
MICCAI21 MMQ - Multiple Meta-model Quantifying for Medical Visual Question Answering
Stars: ✭ 16 (-65.22%)
NSP-BERT - The code for the paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task —— Next Sentence Prediction"
Stars: ✭ 166 (+260.87%)
Xpersona - XPersona: Evaluating Multilingual Personalized Chatbot
Stars: ✭ 54 (+17.39%)
strategy - Improving Machine Reading Comprehension with General Reading Strategies
Stars: ✭ 35 (-23.91%)
bert experimental - Code and supplementary materials for a series of Medium articles about the BERT model
Stars: ✭ 72 (+56.52%)
ai web RISKOUT BTS - Defense risk management platform (🏅 Minister of National Defense Award)
Stars: ✭ 18 (-60.87%)
keras-bert-ner - Keras solution for Chinese NER using BiLSTM-CRF/BiGRU-CRF/IDCNN-CRF models with a pretrained language model, supporting BERT/RoBERTa/ALBERT
Stars: ✭ 7 (-84.78%)
extractive rc by runtime mt - Code and datasets for "Multilingual Extractive Reading Comprehension by Runtime Machine Translation"
Stars: ✭ 36 (-21.74%)
textgo - Text preprocessing, representation, similarity calculation, text search, and classification. Let's go and play with text!
Stars: ✭ 33 (-28.26%)
co-attention - PyTorch implementation of "Dynamic Coattention Networks For Question Answering"
Stars: ✭ 54 (+17.39%)
textwiser - [AAAI 2021] TextWiser: Text Featurization Library
Stars: ✭ 26 (-43.48%)
squadgym - An environment for evaluating the reasoning capabilities of artificial agents
Stars: ✭ 27 (-41.3%)
tfbert - Pretrained-model invocation based on TensorFlow 1.x, supporting single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision; flexible training, validation, and prediction.
Stars: ✭ 54 (+17.39%)
LightLM - High-performance small-model evaluation. Shared Tasks in NLPCC 2020, Task 1: Light Pre-Training Chinese Language Model for NLP Task
Stars: ✭ 54 (+17.39%)
JD2Skills-BERT-XMLC - Code and dataset for Bhola et al. (2020), "Retrieving Skills from Job Descriptions: A Language Model Based Extreme Multi-label Classification Framework"
Stars: ✭ 33 (-28.26%)
Filipino-Text-Benchmarks - Open-source benchmark datasets and pretrained transformer models in the Filipino language.
Stars: ✭ 22 (-52.17%)
ALBERT-Pytorch - PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations)
Stars: ✭ 214 (+365.22%)
mirror-bert - [EMNLP 2021] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels.
Stars: ✭ 56 (+21.74%)
WikiTableQuestions - A dataset of complex questions on semi-structured Wikipedia tables
Stars: ✭ 81 (+76.09%)