
zhpmatrix / Bertem

Licence: apache-2.0
Paper implementation (ACL 2019): "Matching the Blanks: Distributional Similarity for Relation Learning"

Projects that are alternatives of or similar to Bertem

Cnn Re Tf
Convolutional Neural Network for Multi-label Multi-instance Relation Extraction in Tensorflow
Stars: ✭ 190 (+30.14%)
Mutual labels:  jupyter-notebook, relation-extraction
Ruijin round2
Second round of the Ruijin Hospital MMC competition on AI-assisted knowledge graph construction
Stars: ✭ 159 (+8.9%)
Mutual labels:  jupyter-notebook, relation-extraction
Pytorch graph Rel
A PyTorch implementation of GraphRel
Stars: ✭ 204 (+39.73%)
Mutual labels:  jupyter-notebook, relation-extraction
Relation Classification Using Bidirectional Lstm Tree
TensorFlow Implementation of the paper "End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures" and "Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Paths" for classifying relations
Stars: ✭ 167 (+14.38%)
Mutual labels:  jupyter-notebook, relation-extraction
Deepke
An open-source, deep-learning-based framework for Chinese relation extraction
Stars: ✭ 525 (+259.59%)
Mutual labels:  jupyter-notebook, relation-extraction
Shufflenet V2 Tensorflow
A lightweight convolutional neural network
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Deep Deep
Adaptive crawler which uses Reinforcement Learning methods
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Data Driven Prediction Of Battery Cycle Life Before Capacity Degradation
Code for Nature energy manuscript
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Scipy con 2019
Tutorial Sessions for SciPy Con 2019
Stars: ✭ 142 (-2.74%)
Mutual labels:  jupyter-notebook
100daysofmlcode
My journey to learn and grow in the domain of Machine Learning and Artificial Intelligence by performing the #100DaysofMLCode Challenge.
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Python Machine Learning Book
The "Python Machine Learning (1st edition)" book code repository and info resource
Stars: ✭ 11,428 (+7727.4%)
Mutual labels:  jupyter-notebook
Hypertools Paper Notebooks
Supporting notebooks and data from hypertools paper
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Textbook
Principles and Techniques of Data Science, the textbook for Data 100 at UC Berkeley
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Alta
The Art of Literary Text Analysis
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Rloss
Regularized Losses (rloss) for Weakly-supervised CNN Segmentation
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Deep Learning With Tensorflow Book
An open-source introductory deep learning book with hands-on examples, based on TensorFlow 2.0.
Stars: ✭ 12,105 (+8191.1%)
Mutual labels:  jupyter-notebook
Machinelearning Az
Repository for the "Machine Learning A to Z" course with R and Python
Stars: ✭ 144 (-1.37%)
Mutual labels:  jupyter-notebook
Citeomatic
A citation recommendation system that allows users to find relevant citations for their paper drafts. The tool is backed by Semantic Scholar's OpenCorpus dataset.
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Digital video introduction
A hands-on introduction to video technology: image, video, codec (av1, vp9, h265) and more (ffmpeg encoding).
Stars: ✭ 12,184 (+8245.21%)
Mutual labels:  jupyter-notebook
Ppdai risk evaluation
"Magic Mirror Cup" risk-control algorithm competition: a PPDai credit-risk model, close to the winning score
Stars: ✭ 144 (-1.37%)
Mutual labels:  jupyter-notebook

Implementation Notes

This mainly implements the work in the first half of the paper, in PyTorch, building on Hugging Face's codebase (PyTorch really is the best framework out there).

Implementation Reference

(image: implementation reference diagram)

Code Notes

(1) Main modifications: modeling.py

output representation: BertForSequenceClassification

input representation: BertEmbeddings

Multiple strategies are implemented for both the input and the output representations; for a given task you can search over them to find the best combination.
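
For illustration, here is a minimal sketch of the two ideas above: an "entity marker" input representation that wraps the two entity spans with special tokens, and an output head that sums the hidden states at the entity-start markers before classification. The marker strings, function names, and module are assumptions for this sketch, not the repository's exact code.

```python
import torch
import torch.nn as nn

# Hypothetical marker tokens; the actual tokens used in the repo may differ.
E1_START, E1_END, E2_START, E2_END = "[E1]", "[/E1]", "[E2]", "[/E2]"

def add_entity_markers(tokens, e1_span, e2_span):
    """Wrap the two entity spans (start, end indices, end exclusive) with marker tokens."""
    marked = []
    for i, tok in enumerate(tokens):
        if i == e1_span[0]:
            marked.append(E1_START)
        if i == e2_span[0]:
            marked.append(E2_START)
        marked.append(tok)
        if i == e1_span[1] - 1:
            marked.append(E1_END)
        if i == e2_span[1] - 1:
            marked.append(E2_END)
    return marked

class EntityStartSumHead(nn.Module):
    """Output strategy: sum the encoder states at the two entity-start marker
    positions, then classify the relation with a linear layer."""
    def __init__(self, hidden_size, num_relations):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_relations)

    def forward(self, sequence_output, e1_start_idx, e2_start_idx):
        # sequence_output: (batch, seq_len, hidden); *_start_idx: (batch,)
        batch = torch.arange(sequence_output.size(0), device=sequence_output.device)
        h = sequence_output[batch, e1_start_idx] + sequence_output[batch, e2_start_idx]
        return self.classifier(h)
```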

(2) Secondary changes: the classification-related files under examples

(3) Serving: a local service can be started with Flask; see tacred_run_infer.py for the implementation.
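
A rough sketch of what such a Flask service could look like; the endpoint name and helper functions below are illustrative placeholders, not the actual contents of tacred_run_infer.py.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Illustrative placeholders: in practice these would load the fine-tuned BERT
# model and run tokenization plus a forward pass, as tacred_run_infer.py does.
def load_model():
    ...

def predict_relation(model, sentence, head_entity, tail_entity):
    ...

model = load_model()

@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json()
    relation = predict_relation(model, data["sentence"], data["head"], data["tail"])
    return jsonify({"relation": relation})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```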

(4) The code is for reference only; no dataset, pretrained model, or fine-tuned model is provided (please understand).

(5) For related work, see my blog post on neural relation extraction, which may be more useful than this code.

Results

Results on the TACRED dataset:

| Model # | Input type    | Output type       | Metric | P    | R    | F1   | Notes                          |
|---------|---------------|-------------------|--------|------|------|------|--------------------------------|
| 0       | entity marker | sum(entity start) | micro  | 0.68 | 0.63 | 0.65 | base model, lr=3e-5, epoch=3   |
|         |               |                   | macro  | 0.60 | 0.54 | 0.55 |                                |
| 1       | entity marker | sum(entity start) | micro  | 0.70 | 0.62 | 0.65 | large model, lr=3e-5, epoch=1  |
|         |               |                   | macro  | 0.63 | 0.52 | 0.55 |                                |
| -1      | None          | None              | micro  | 0.69 | 0.66 | 0.67 | lost after a careless mistake and never reproduced, embarrassingly |
|         |               |                   | macro  | 0.58 | 0.50 | 0.53 |                                |

Results on the SemEval 2010 Task 8 dataset:

| Model # | Input type    | Output type              | Metric | P    | R    | F1   | Notes      |
|---------|---------------|--------------------------|--------|------|------|------|------------|
| 0       | entity marker | maxpool(entity emb)+relu | micro  | 0.86 | 0.86 | 0.86 | bert-large |
|         |               |                          | macro  | 0.82 | 0.83 | 0.82 |            |

Mixed-Precision Speedup Results

For this task, following the earlier setting, train and dev are merged into a new train set and the test set is left unchanged. Under the fp32 and fp16 settings, we compare the time per epoch (or per iteration) at the same batch_size.

| Aspect           | fp32      | fp16      | Notes                                      |
|------------------|-----------|-----------|--------------------------------------------|
| Training         | 1.04 it/s | 4.41 it/s | 12.76 it/s with exclusive use of the GPU   |
| Inference        | 4.14 it/s | 8.63 it/s |                                            |
| Test-set metrics | 0.65/0.55 | 0.64/0.53 | format: micro/macro                        |
| Model size       | 421M      | 212M      |                                            |
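
For reference, a minimal sketch of an fp16 training step using torch.cuda.amp; the repository may rely on a different mechanism (e.g. NVIDIA apex), and model, optimizer, and train_loader are assumed to be built exactly as in the fp32 setting.

```python
import torch
from torch.cuda.amp import GradScaler, autocast

# Gradient scaler used to avoid fp16 underflow in the backward pass.
scaler = GradScaler()

def train_one_epoch(model, optimizer, train_loader, device="cuda"):
    model.train()
    for batch in train_loader:
        optimizer.zero_grad()
        with autocast():  # run the forward pass in fp16 where it is safe
            outputs = model(**{k: v.to(device) for k, v in batch.items()})
            loss = outputs[0]
        scaler.scale(loss).backward()  # scaled backward pass
        scaler.step(optimizer)         # unscale gradients, then optimizer step
        scaler.update()
```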