n-waves / Multifit

License: MIT
Code to reproduce the results from the paper "MultiFiT: Efficient Multi-lingual Language Model Fine-tuning" (https://arxiv.org/abs/1909.04761).

Projects that are alternatives to or similar to Multifit

Study Group
Deep Learning Study Group
Stars: ✭ 258 (-1.53%)
Mutual labels:  jupyter-notebook
Artline
A Deep Learning based project for creating line art portraits.
Stars: ✭ 3,061 (+1068.32%)
Mutual labels:  jupyter-notebook
Understanding Nn
Tensorflow tutorial for various Deep Neural Network visualization techniques
Stars: ✭ 261 (-0.38%)
Mutual labels:  jupyter-notebook
Spark Jupyter Aws
A guide on how to set up Jupyter with Pyspark painlessly on AWS EC2 clusters, with S3 I/O support
Stars: ✭ 259 (-1.15%)
Mutual labels:  jupyter-notebook
Nn dynamics
Stars: ✭ 260 (-0.76%)
Mutual labels:  jupyter-notebook
Matplotlib Venn
Area-weighted Venn diagrams for Python/matplotlib
Stars: ✭ 260 (-0.76%)
Mutual labels:  jupyter-notebook
Dask Examples
Easy-to-run example notebooks for Dask
Stars: ✭ 257 (-1.91%)
Mutual labels:  jupyter-notebook
Applied Ml
https://madewithml.com/
Stars: ✭ 252 (-3.82%)
Mutual labels:  jupyter-notebook
Python Is Cool
Cool Python features for machine learning that I used to be too afraid to use. Will be updated as I have more time / learn more.
Stars: ✭ 2,962 (+1030.53%)
Mutual labels:  jupyter-notebook
Lstm pit speech separation
Two-talker speech separation with LSTM/BLSTM using the Permutation Invariant Training method.
Stars: ✭ 262 (+0%)
Mutual labels:  jupyter-notebook
Image Matching Benchmark
Public release of the Image Matching Benchmark: https://image-matching-challenge.github.io
Stars: ✭ 258 (-1.53%)
Mutual labels:  jupyter-notebook
Restapi
REST API is a web-based API using a WebSocket connection. Developers and investors can create custom trading applications, integrate with our platform, backtest strategies, and build trading robots.
Stars: ✭ 260 (-0.76%)
Mutual labels:  jupyter-notebook
Startupdatascience
Stars: ✭ 260 (-0.76%)
Mutual labels:  jupyter-notebook
Npy Matlab
Experimental code to read/write NumPy .NPY files in MATLAB
Stars: ✭ 258 (-1.53%)
Mutual labels:  jupyter-notebook
Anomaly detection
Stars: ✭ 262 (+0%)
Mutual labels:  jupyter-notebook
Siamesenetwork Tensorflow
Using a Siamese network for dimensionality reduction and similar-image retrieval
Stars: ✭ 258 (-1.53%)
Mutual labels:  jupyter-notebook
Tensorflow Segmentation
Semantic image segmentation in Tensorflow
Stars: ✭ 260 (-0.76%)
Mutual labels:  jupyter-notebook
Docs L10n
Translations of TensorFlow documentation
Stars: ✭ 262 (+0%)
Mutual labels:  jupyter-notebook
Reborn
Stars: ✭ 262 (+0%)
Mutual labels:  jupyter-notebook
Machine Learning In Action
Peter Harrington's "Machine Learning in Action" code, personally organized in Jupyter notebooks to make it more structured and coherent, with some of my own modifications and annotations added
Stars: ✭ 261 (-0.38%)
Mutual labels:  jupyter-notebook

MultiFiT: Efficient Multi-lingual Language Model Fine-tuning

Code to reproduce the paper "MultiFiT: Efficient Multi-lingual Language Model Fine-tuning".

Here is a blog post introducing our paper: http://nlp.fast.ai/classification/2019/09/10/multifit.html

This repository contains a small framework on top of fastai v1.0; the code is compatible with fastai v1.0.47 through v1.0.59 (the latest as of 2019-11-03). Results may differ between fastai versions due to optimizations added to fastai. Our models were trained with v1.0.47.
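
As a quick sanity check before running anything, you can confirm that the installed fastai version falls inside the supported range (a minimal snippet; the version bounds are simply those stated above):

import fastai

# MultiFiT is known to work with fastai 1.0.47 through 1.0.59;
# print the installed version and compare against that range.
print(fastai.__version__)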

The framework was rewritten to make it easier to use with the newest fastai.

We have released seven language models, each trained on the corresponding Wikipedia dump:

  • de_multifit_paper_version
  • es_multifit_paper_version
  • fr_multifit_paper_version
  • it_multifit_paper_version
  • ja_multifit_paper_version
  • ru_multifit_paper_version
  • zh_multifit_paper_version

To fetch a model, use the multifit.from_pretrained function. Here are some example notebooks showing how to train a classifier using the pretrained models.
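
For example, fetching the German model looks like this (a minimal sketch; any model name from the list above can be substituted):

import multifit

# Download the German paper-version model and set up an experiment object;
# the other six model names from the list above work the same way.
exp = multifit.from_pretrained("de_multifit_paper_version")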

Results

MLDoc

Document classification results on the MLDoc dataset (Schwenk and Li, 2018):

Model      de     es     fr     it     ja     ru     zh
LASER      92.70  88.75  90.80  85.93  85.15  84.65  88.98
MultiBERT  94.00  95.15  93.20  85.82  87.48  86.85  90.72
MultiFiT   95.90  96.07  94.77  90.25  90.03  87.65  92.52

Amazon CLS

Sentiment classification results on the CLS dataset (Prettenhofer and Stein, 2010). Each cell lists the three CLS domains as books / DVD / music:

Model      DE                     FR                     JA
MultiBERT  86.05 / 84.90 / 82.00  86.15 / 86.90 / 86.65  80.87 / 82.83 / 79.95
MultiFiT   93.19 / 90.54 / 93.00  91.25 / 89.55 / 93.40  86.29 / 85.75 / 86.59

How to use it with fastai v1.0

You can use the pretrained models with the fastai library as follows:

from fastai.text import *
import multifit

bs = 18  # batch size; an illustrative value, tune to your GPU memory
imdb_path = untar_data(URLs.IMDB)  # IMDb dataset path (or point to your own corpus)

# Load a pretrained MultiFiT model, e.g. "de_multifit_paper_version"
exp = multifit.from_pretrained("name of the model")
fa_config = exp.pretrain_lm.tokenizer.get_fastai_config(add_open_file_processor=True)

# Build a language-model DataBunch using the MultiFiT tokenizer settings
data_lm = (TextList.from_folder(imdb_path, **fa_config)
            .filter_by_folder(include=['train', 'test', 'unsup'])
            .split_by_rand_pct(0.1)
            .label_for_lm()
            .databunch(bs=bs))

# learn is a preconfigured fastai learner with the pretrained model loaded
learn = exp.finetune_lm.get_learner(data_lm)
learn.fit_one_cycle(10)
learn.save_encoder("enc")
...
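
After fine-tuning, a quick way to sanity-check the language model is to sample a continuation from it. This uses fastai v1's LanguageLearner.predict; the prompt and word count here are arbitrary illustrative values:

# Generate a short continuation as a rough check of the fine-tuned LM.
print(learn.predict("This movie was", n_words=10))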

Reproducing the results

This repository is a rewrite of the original training scripts, so it does not yet include all the scripts used in the paper. We are working on a port to fastai v2.0, after which we will add scripts showing how to reproduce the results. If you need them sooner, the original scripts are available here.

Citation

@article{Eisenschlos2019MultiFit,
  title={MultiFiT: Efficient Multi-lingual Language Model Fine-tuning},
  author={Julian Eisenschlos and Sebastian Ruder and Piotr Czapla and Marcin Kardas and Sylvain Gugger and Jeremy Howard},
  journal={Proceedings of EMNLP-IJCNLP 2019},
  year={2019}
}