
trib-plan / TriB-QA

License: MIT
We take our bragging seriously.



TriB-QA

Prosperity, democracy, civility, harmony, freedom, equality, justice, rule of law, patriotism, dedication, integrity, friendliness.

TriB-QA Brief Intro.

This is our group's project for the CCF & Baidu 2019 Reading Comprehension Competition; the dataset is Baidu DuReader.

The code will be completed once the competition finishes. The whole project is based on a PyTorch version of BERT and consists of three stages: passage reranking, answer prediction, and YES/NO answer classification. You may need to download the pretrained language model, config file, and vocab list in advance, or use our pretrained model to get the final prediction.

Later, if possible, we will build a simple pipeline to ease the complicated procedure, and a web API may be added as well.

(At first we thought entering meant giving our effort away for free; turns out we might actually earn some pocket money! o( ̄▽ ̄)ブ)

Thanks to Naturali for going easy on us; wish granted! 🙇‍

1. Task Description

See the official Baidu Reading Comprehension Competition website for the detailed requirements.

The current training dataset can be copied from me via USB drive.

Project notes and progress can be tracked here.

The data format can be viewed and completed here.

2. Time Management

The key competition dates are as follows:

| Event | Date |
| --- | --- |
| Registration opens; training data released | 02/25 |
| Registration closes; dev data and test set 1 released | 03/31 |
| Test set 2 released | 05/13 |
| Result submission deadline | 05/20 |
| Results announced; report papers accepted | 05/31 |

3. Submitted Result History

| Name | Rouge | Bleu | Date | Rouge Gain | Bleu Gain |
| --- | --- | --- | --- | --- | --- |
| BIT_03/31 (simple) | 37.15 | 23.41 | 2019/3/31 | | |
| 冲鸭04/01 (single) | 40.84 | 24.82 | 2019/4/1 | 3.69 | 1.41 |
| 冲鸭^2 (single) | 44.54 | 27.6 | 2019/4/2 | 3.7 | 2.78 |
| 冲鸭^3 (single) | 45.91 | 35 | 2019/4/8 | 1.37 | 7.4 |
| 冲鸭^4 (single) | 47.85 | 45.61 | 2019/4/9 | 1.94 | 10.61 |
| 冲鸭^5 (single) | 48.03 | 46.09 | 2019/4/10 | 0.18 | 0.48 |
| 冲鸭^6 (果断就会白给) | 48.13 | 46.7 | 2019/4/12 | 0.1 | 0.61 |
| 冲鸭^7 (single) | 50.3 | 52.77 | 2019/4/15 | 2.17 | 6.07 |
| 冲鸭^8 (single) | 48.27 | 49.65 | 2019/5/5 | -2.03 | -3.12 |
| 冲鸭^8 (single) | 46.35 | 48.07 | 2019/5/6 | -1.92 | -1.58 |
| 冲鸭^8 (single) | 50.46 | 52.37 | 2019/5/7 | 4.11 | 4.3 |
| 冲鸭^9 (single) | 52.5 | 54.3 | 2019/5/8 | 2.04 | 1.93 |
| 冲鸭^10 (single) | 53.13 | 54.63 | 2019/5/12 | 0.63 | 0.33 |
| 冲鸭^11 (single) | 54.12 | 55.82 | 2019/5/13 | 0.99 | 1.19 |
| 果断就会白给 (single) | 54.54 | 55.87 | 2019/5/15 | 0.42 | 0.05 |
| 果断就会白给 (single) | 54.47 | 55.67 | 2019/5/16 | -0.07 | -0.2 |
| 果断就会白给 (single) | 54.97 | 56.05 | 2019/5/18 | 0.5 | 0.38 |
| 果然还是白给了吗 (single) | 55.3 | 56.09 | 2019/5/19 | 0.33 | 0.04 |
| .... | 18.15 | 32.68 | | | |
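The Rouge score tracked above is Rouge-L, an F-measure over the longest common subsequence (LCS) of candidate and reference tokens. A minimal illustrative implementation follows; the official DuReader evaluation scripts are authoritative, and the `beta` default below is an assumption, not the competition's exact setting:

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference, beta=1.2):
    # Precision and recall from the LCS, combined into a weighted F-score.
    lcs = lcs_len(candidate, reference)
    if lcs == 0:
        return 0.0
    prec = lcs / len(candidate)
    rec = lcs / len(reference)
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)

print(round(rouge_l("the cat sat".split(), "the cat sat down".split()), 4))  # → 0.8356
```

Larger `beta` weights recall more heavily; the toy example above splits on whitespace, whereas Chinese evaluation operates on segmented tokens.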

4. Model Structure

Our model was initially designed as a pipeline of three BERT models, which is why we call it TriB(ert) in our group :>.

It sounds rough, but it is surprisingly effective.

Model Flow Chart

Passage Reranking

Answer Prediction
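The flow above (rerank passages, predict an answer span, then resolve YES/NO questions) can be sketched in plain Python. This is a hypothetical outline, not the project's actual API: `score`, `span_scores`, and `yesno` are stubs standing in for the three fine-tuned BERT models, and all names are our own illustrations.

```python
def rerank(question, passages, score):
    # Stage 1: keep the passage the reranker scores highest for the question.
    return max(passages, key=lambda p: score(question, p))

def predict_span(question, passage, span_scores):
    # Stage 2: span_scores yields per-token start and end scores (as BERT-style
    # QA heads do); pick the (start, end) pair with the highest combined score.
    starts, ends = span_scores(question, passage)
    best = max(
        ((i, j) for i in range(len(starts)) for j in range(i, len(ends))),
        key=lambda ij: starts[ij[0]] + ends[ij[1]],
    )
    return passage[best[0]:best[1] + 1]

def answer(question, passages, score, span_scores, yesno=None):
    passage = rerank(question, passages, score)
    span = predict_span(question, passage, span_scores)
    # Stage 3: for opinion-type questions, a third model maps the span to
    # a Yes/No/Depends label; otherwise the span itself is the answer.
    return yesno(question, span) if yesno else span
```

With token-overlap stubs in place of the models, `answer(["q"], [["a", "b"], ["x", "q", "y", "z"]], ...)` picks the second passage and extracts the highest-scoring span from it.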

5. Task Allocation

The group currently has five members: 任慕成, 魏然, 柏宇, 王洋, and 刘宏玉.

Everyone is an alchemist (that is, a hands-on model tuner).
