alibaba / AliceMind

License: Apache-2.0
ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab

Programming Languages

Python
139335 projects - #7 most used programming language
Shell
77523 projects
Cuda
1817 projects
Jupyter Notebook
11667 projects
C++
36643 projects - #6 most used programming language
Cython
566 projects

Projects that are alternatives of or similar to AliceMind

TradeTheEvent
Implementation of "Trade the Event: Corporate Events Detection for News-Based Event-Driven Trading." In Findings of ACL2021
Stars: ✭ 64 (-95.67%)
Mutual labels:  bert
OpenUE
OpenUE is a lightweight toolkit for knowledge graph extraction (An Open Toolkit for Universal Extraction from Text, published at EMNLP 2020: https://aclanthology.org/2020.emnlp-demos.1.pdf)
Stars: ✭ 274 (-81.47%)
Mutual labels:  bert
NLPDataAugmentation
Chinese NLP Data Augmentation, BERT Contextual Augmentation
Stars: ✭ 94 (-93.64%)
Mutual labels:  bert
bert-sentiment
Fine-grained Sentiment Classification Using BERT
Stars: ✭ 49 (-96.69%)
Mutual labels:  bert
ChineseNER
The ins and outs of Chinese NER (named entity recognition)
Stars: ✭ 241 (-83.71%)
Mutual labels:  bert
wisdomify
A BERT-based reverse dictionary of Korean proverbs
Stars: ✭ 95 (-93.58%)
Mutual labels:  bert
FinBERT-QA
Financial Domain Question Answering with pre-trained BERT Language Model
Stars: ✭ 70 (-95.27%)
Mutual labels:  bert
ERNIE-text-classification-pytorch
This repo contains a PyTorch implementation of a pretrained ERNIE model for text classification.
Stars: ✭ 49 (-96.69%)
Mutual labels:  bert
Kaleido-BERT
(CVPR2021) Kaleido-BERT: Vision-Language Pre-training on Fashion Domain.
Stars: ✭ 252 (-82.96%)
Mutual labels:  bert
Fill-the-GAP
[ACL-WS] 4th place solution to gendered pronoun resolution challenge on Kaggle
Stars: ✭ 13 (-99.12%)
Mutual labels:  bert
py-lingualytics
A text analytics library with support for codemixed data
Stars: ✭ 36 (-97.57%)
Mutual labels:  bert
Text Classification TF
Various text classification models implemented in TensorFlow, wrapped with a RESTful API and ready for production use
Stars: ✭ 32 (-97.84%)
Mutual labels:  bert
TwinBert
pytorch implementation of the TwinBert paper
Stars: ✭ 36 (-97.57%)
Mutual labels:  bert
vietnamese-roberta
A Robustly Optimized BERT Pretraining Approach for Vietnamese
Stars: ✭ 22 (-98.51%)
Mutual labels:  bert
sister
SImple SenTence EmbeddeR
Stars: ✭ 66 (-95.54%)
Mutual labels:  bert
pn-summary
A well-structured summarization dataset for the Persian language!
Stars: ✭ 29 (-98.04%)
Mutual labels:  bert
cmrc2019
A Sentence Cloze Dataset for Chinese Machine Reading Comprehension (CMRC 2019)
Stars: ✭ 118 (-92.02%)
Mutual labels:  bert
embedding study
Learning character embeddings from Chinese pre-trained models; evaluating BERT and ELMo on Chinese text
Stars: ✭ 94 (-93.64%)
Mutual labels:  bert
les-military-mrc-rank7
Rank 7 solution to the LES Cup: the 2nd national "Military Intelligence Machine Reading" challenge
Stars: ✭ 37 (-97.5%)
Mutual labels:  bert
neuro-comma
🇷🇺 Punctuation restoration production-ready model for Russian language 🇷🇺
Stars: ✭ 46 (-96.89%)
Mutual labels:  bert

AliceMind

AliceMind: ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab

This repository provides pre-trained encoder-decoder models and their related optimization techniques developed by Alibaba's MinD (Machine IntelligeNce of Damo) Lab.

The family of AliceMind:

  • Pre-trained Models:
    • Language understanding model: StructBERT (ICLR 2020)
    • Generative language model: PALM (EMNLP 2020)
    • Cross-lingual language model: VECO (ACL 2021)
    • Cross-modal language model: StructVBERT (CVPR 2020 VQA Challenge Runner-up)
    • Structural language model: StructuralLM (ACL 2021)
    • Chinese language understanding model with multi-granularity inputs: LatticeBERT (NAACL 2021)
    • Pre-training table model: SDCUP (Under Review)
    • Large-scale Chinese understanding and generation model: PLUG
    • Large-scale vision-language understanding and generation model: mPLUG
  • Fine-tuning Methods:
    • Effective and generalizable fine-tuning method ChildTuning (EMNLP 2021)
  • Model Compression:
    • Language model compression methods ContrastivePruning (AAAI 2022)
    • Parameter-Efficient Sparsity methods PST (IJCAI 2022)

News

  • March, 2021: AliceMind released!
  • May, 2021: VECO and StructuralLM were accepted by ACL 2021.
  • September, 2021: The first Chinese pre-trained table model, SDCUP, was released!
  • October, 2021: ChildTuning was accepted by EMNLP 2021.
  • December, 2021: ContrastivePruning was accepted by AAAI 2022.
  • April, 2022: The SOFA modeling toolkit was released, providing standard implementations of AliceMind models and techniques and supporting their direct use in Transformers!
  • May, 2022: PST was accepted by IJCAI 2022.

Pre-trained Models

  • StructBERT (March 15, 2021): pre-trained models for natural language understanding (NLU). We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training. Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively. "StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding" (ICLR 2020)
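
The word-level auxiliary task can be pictured with a small sketch (a hypothetical helper based on the paper's description, not code from this repo): a random trigram in the input is shuffled, and the model must predict the original order.

```python
import random

def word_order_example(tokens, span=3, seed=0):
    """StructBERT-style word structural objective (sketch): shuffle one
    random `span`-gram and keep the original order as the target."""
    rng = random.Random(seed)
    start = rng.randrange(len(tokens) - span + 1)
    perm = list(range(span))
    rng.shuffle(perm)
    shuffled = (tokens[:start]
                + [tokens[start + i] for i in perm]
                + tokens[start + span:])
    # The model is trained to predict `perm`, i.e. to restore the order.
    return shuffled, start, perm
```

The sentence-level task is analogous but operates on sentence order rather than word order.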

  • PALM (March 15, 2021): pre-trained models for natural language generation (NLG). We propose a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus, specifically designed for generating new text conditioned on context. It achieves new SOTA results in several downstream tasks. "PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation" (EMNLP 2020)
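
As a rough illustration of how PALM builds self-supervised examples from unlabeled text (a hypothetical helper, not this repo's API), each passage can be split into a context span for the autoencoding encoder and a continuation span as the autoregressive generation target:

```python
def palm_pretraining_pair(tokens, context_ratio=0.8):
    """PALM-style data construction (sketch): the leading span is the
    encoder's context, the trailing span is the decoder's target."""
    cut = max(1, int(len(tokens) * context_ratio))
    return tokens[:cut], tokens[cut:]
```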

  • VECO v0 (March 15, 2021): pre-trained models for cross-lingual (x) natural language understanding (x-NLU) and generation (x-NLG). VECO (v0) achieves new SOTA results on various cross-lingual understanding tasks of the XTREME benchmark, covering text classification, sequence labeling, question answering, and sentence retrieval. For cross-lingual generation tasks, it also outperforms all existing cross-lingual models and state-of-the-art Transformer variants on the WMT14 English-to-German and English-to-French translation datasets, with gains of up to 1~2 BLEU. “VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation" (ACL 2021)

  • StructVBERT (March 15, 2021): pre-trained models for vision-language understanding. We propose a new single-stream visual-linguistic pre-training scheme by leveraging multi-stage progressive pre-training and multi-task learning. StructVBERT obtained the 2020 VQA Challenge Runner-up award and a SOTA result on the VQA 2020 public Test-standard benchmark (June 2020). "Talk Slides" (CVPR 2020 VQA Challenge Runner-up).

  • StructuralLM (March 15, 2021): pre-trained models for document-image understanding. We propose a new pre-training approach, StructuralLM, to jointly leverage cell and layout information from scanned documents. The pre-trained StructuralLM achieves new state-of-the-art results in different types of downstream tasks. "StructuralLM: Structural Pre-training for Form Understanding" (ACL 2021)
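
The core idea, cell-level positions, can be sketched in a few lines (a hypothetical helper illustrating the paper, not this repo's API): every token in the same document cell shares one cell position id, so layout is modeled at cell granularity.

```python
def cell_position_ids(cells):
    """StructuralLM-style layout encoding (sketch): tokens belonging to
    the same scanned-document cell share a single cell position id."""
    ids = []
    for cell_id, cell_tokens in enumerate(cells):
        ids.extend([cell_id] * len(cell_tokens))
    return ids
```

For cells [["Invoice"], ["No.", "42"], ["Total", ":", "$10"]] this yields [0, 1, 1, 2, 2, 2].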

  • LatticeBERT (March 15, 2021): we propose a novel pre-training paradigm for Chinese — Lattice-BERT which explicitly incorporates word representations with those of characters, thus can model a sentence in a multi-granularity manner. "Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models" (NAACL 2021)
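
A minimal sketch of the multi-granularity input (hypothetical helper; `lexicon` stands in for a word vocabulary and is not part of this repo's API): the lattice contains every character plus every dictionary word found in the sentence, each annotated with its character span.

```python
def build_lattice(chars, lexicon):
    """Lattice-BERT-style input construction (sketch): character units
    plus multi-character word units, each with a (start, end) span."""
    units = [(c, i, i) for i, c in enumerate(chars)]
    n = len(chars)
    for i in range(n):
        for j in range(i + 2, n + 1):  # multi-character words only
            word = "".join(chars[i:j])
            if word in lexicon:
                units.append((word, i, j - 1))
    return units
```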

  • SDCUP (September 6, 2021): pre-trained models for table understanding. We design a schema dependency pre-training objective to impose the desired inductive bias into the learned representations for table pre-training. We further propose a schema-aware curriculum learning approach to alleviate the impact of noise and learn effectively from the pre-training data in an easy-to-hard manner. The experimental results on SQUALL and Spider demonstrate the effectiveness of our pre-training objective and curriculum in comparison to a variety of baselines. "SDCUP: Schema Dependency Enhanced Curriculum Pre-Training for Table Semantic Parsing" (Under Review)

  • PLUG (September 1, 2022): large-scale Chinese pre-trained model for understanding and generation. PLUG (27B) is trained in two stages: the first stage is a 24-layer StructBERT encoder, and the second is a 24-6-layer (24-layer encoder, 6-layer decoder) PALM encoder-decoder.

  • mPLUG (September 1, 2022): large-scale pre-trained model for vision-language understanding and generation. mPLUG is pre-trained end-to-end on large scale image-text pairs with both discriminative and generative objectives. It achieves state-of-the-art results on a wide range of vision-language downstream tasks, including image-captioning, image-text retrieval, visual grounding and visual question answering.

Fine-tuning Methods

Model Compression

  • ContrastivePruning (December 17, 2021): ContrAstive Pruning (CAP) is a general pruning framework under the pre-training and fine-tuning paradigm, which aims at maintaining both task-specific and task-agnostic knowledge during pruning. CAP is compatible with both structured and unstructured pruning. Unified in contrastive learning, CAP encourages the pruned model to learn from the pre-trained model, the snapshots (intermediate models during pruning), and the fine-tuned model, respectively. “From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" (AAAI 2022)
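
The contrastive signal can be sketched with a plain InfoNCE-style loss (a hypothetical NumPy helper based on the paper's description, not this repo's API): the pruned model's representation is pulled toward teacher representations (the pre-trained model, pruning snapshots, the fine-tuned model) and pushed away from other examples.

```python
import numpy as np

def cap_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss (sketch): `anchor` is the pruned model's
    representation; `positives` are teacher representations."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.array([cos(anchor, p) for p in positives]) / tau
    neg = np.array([cos(anchor, n) for n in negatives]) / tau
    log_denom = np.log(np.exp(np.concatenate([pos, neg])).sum())
    # Mean negative log-softmax over the positives.
    return float(np.mean(log_denom - pos))
```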

  • PST (May 23, 2022): Parameter-Efficient Sparse Training (PST) reduces the number of trainable parameters during sparsity-aware training on downstream tasks. It combines data-free and data-driven criteria to measure the importance of weights efficiently and accurately, and it exploits the intrinsic redundancy of data-driven weight importance, which exhibits two clear characteristics, low-rankness and structuredness, making sparse training both resource-efficient and parameter-efficient. “Parameter-Efficient Sparsity for Large Language Models Fine-Tuning" (IJCAI 2022)
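
The combined importance score and the resulting pruning step might look like this (hypothetical NumPy sketch with made-up shapes; only |W| is data-free, while A, B, r, c stand for the small trainable low-rank and structured data-driven factors):

```python
import numpy as np

def pst_importance(W, A, B, r, c):
    """PST-style weight importance (sketch): data-free magnitude plus a
    low-rank term (A @ B) and structured per-row/per-column terms."""
    return np.abs(W) + A @ B + r[:, None] + c[None, :]

def magnitude_prune(W, importance, density=0.5):
    """Keep the `density` fraction of weights with highest importance."""
    k = int(W.size * density)
    thresh = np.sort(importance.ravel())[::-1][k - 1]
    mask = importance >= thresh
    return W * mask
```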

Modeling toolkit

  • SOFA: SOFA aims to facilitate easy use and distribution of the pre-trained language models from the Alibaba DAMO Academy AliceMind project. In addition, detailed examples in the project make it simple for any end user to access those models.

Contact Information

AliceMind Official Website: https://nlp.aliyun.com/portal#/alice

AliceMind Open Platform: https://alicemind.aliyuncs.com

Please submit a GitHub issue if you want help or have issues using AliceMind.

For more information, you can join the AliceMind Users Group on DingTalk to contact us. The number of the DingTalk group is 35738533.

For other business communications, please contact [email protected]

License

AliceMind is released under the Apache 2.0 license.

Copyright 1999-2020 Alibaba Group Holding Ltd.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at the following link.

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].