ParsBigBird: Persian BERT for long-range sequences
Stars: ✭ 58 (-45.28%)
language-planner: Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"
Stars: ✭ 84 (-20.75%)
Pytorch-NLU: A Chinese text classification and sequence-labeling toolkit. Supports multi-class and multi-label classification of Chinese long and short texts, as well as sequence-labeling tasks such as Chinese named entity recognition, part-of-speech tagging, and word segmentation.
Stars: ✭ 151 (+42.45%)
text2keywords: Trained T5 and T5-large models for creating keywords from text
Stars: ✭ 53 (-50%)
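Keyword generation with a fine-tuned T5 checkpoint typically reduces to a standard seq2seq generate call. A minimal sketch via HuggingFace transformers, assuming a generic `t5-base` placeholder rather than this repo's actual trained checkpoint:

```python
# Minimal sketch of seq2seq keyword generation with transformers.
# "t5-base" is a placeholder assumption; substitute the repo's trained
# keyword checkpoint for meaningful output.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

text = "Transformers use self-attention to model long-range dependencies in text."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```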
nuwa-pytorch: Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in PyTorch
Stars: ✭ 347 (+227.36%)
Aspect-Based-Sentiment-Analysis: A Python program that implements an aspect-based sentiment analysis classification system for the SemEval 2016 dataset.
Stars: ✭ 57 (-46.23%)
long-short-transformer: Implementation of the Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch
Stars: ✭ 103 (-2.83%)
iPerceive: Applying common-sense reasoning to multi-modal dense video captioning and video question answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self-Attention | Published at the IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (-50.94%)
ttt: A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+
Stars: ✭ 35 (-66.98%)
TorchBlocks: A PyTorch-based toolkit for natural language processing
Stars: ✭ 85 (-19.81%)
molecule-attention-transformer: PyTorch reimplementation of the Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules
Stars: ✭ 46 (-56.6%)
eve-bot: EVE bot, a customer-service chatbot to enhance virtual engagement for Twitter Apple Support
Stars: ✭ 31 (-70.75%)
text2class: Multi-class text categorization using state-of-the-art pre-trained contextualized language models, e.g. BERT
Stars: ✭ 15 (-85.85%)
xpandas: Universal 1D/2D data containers with Transformers functionality for data analysis.
Stars: ✭ 25 (-76.42%)
robustness-vit: Contains code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
Stars: ✭ 78 (-26.42%)
lightning-transformers: Flexible components pairing 🤗 Transformers with PyTorch Lightning
Stars: ✭ 551 (+419.81%)
robo-vln: PyTorch code for the ICRA'21 paper "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Stars: ✭ 34 (-67.92%)
MISE: Multimodal Image Synthesis and Editing: A Survey
Stars: ✭ 214 (+101.89%)
deepconsensus: DeepConsensus uses gap-aware sequence transformers to correct errors in Pacific Biosciences (PacBio) Circular Consensus Sequencing (CCS) data.
Stars: ✭ 124 (+16.98%)
anonymisation: Anonymization of French legal cases based on Flair embeddings
Stars: ✭ 85 (-19.81%)
omikuji: An efficient implementation of Partitioned Label Trees and its variations for extreme multi-label classification
Stars: ✭ 69 (-34.91%)
Product-Categorization-NLP: Multi-class text classification of products based on their descriptions, using machine learning algorithms and neural networks (MLP, CNN, DistilBERT).
Stars: ✭ 30 (-71.7%)
TransCenter: The official implementation of TransCenter; code and pretrained models are available at https://gitlab.inria.fr/yixu/TransCenter_official.
Stars: ✭ 82 (-22.64%)
golgotha: Contextualised embeddings and language modelling with BERT and friends, from R
Stars: ✭ 39 (-63.21%)
deepfrog: An NLP suite powered by deep learning
Stars: ✭ 16 (-84.91%)
pytorch-vit: "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale"
Stars: ✭ 250 (+135.85%)
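The paper's core idea is to treat an image as a sequence of 16x16 patch tokens. A minimal sketch of that patch embedding in plain PyTorch (shapes and dimensions are illustrative, not taken from this repo):

```python
# Patch embedding as in ViT: a strided convolution splits the image into
# 16x16 patches and projects each patch to a token vector.
import torch
import torch.nn as nn

patch_size, dim = 16, 768
to_tokens = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)

image = torch.randn(1, 3, 224, 224)         # one 224x224 RGB image
tokens = to_tokens(image)                   # (1, 768, 14, 14)
tokens = tokens.flatten(2).transpose(1, 2)  # (1, 196, 768): 196 patch tokens
print(tokens.shape)
```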
small-text: Active learning for text classification in Python
Stars: ✭ 241 (+127.36%)
converse: Conversational text analysis using various NLP techniques
Stars: ✭ 147 (+38.68%)
Caver: A toolkit for multi-label text classification.
Stars: ✭ 38 (-64.15%)
modules: The official repository for the paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks", which develops a method for analyzing emerging functional modularity in neural networks based on differentiable weight masks and uses it to point out important issues in current-day neural networks.
Stars: ✭ 25 (-76.42%)
napkinXC: Extremely simple and fast extreme multi-class and multi-label classifiers.
Stars: ✭ 38 (-64.15%)
gnn-lspe: Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (+55.66%)
text2text: Cross-lingual natural language processing and generation toolkit
Stars: ✭ 188 (+77.36%)
DocSum: A tool to automatically summarize documents abstractively using the BART or PreSumm machine learning models.
Stars: ✭ 58 (-45.28%)
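For the BART route, abstractive summarization is a one-liner with the transformers pipeline API. A sketch assuming the public facebook/bart-large-cnn checkpoint, illustrating the approach rather than DocSum's own interface:

```python
# Abstractive summarization with a public BART checkpoint (assumption:
# this shows the general technique, not DocSum's exact API).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
document = "Paste a long document here ..."
result = summarizer(document, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```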
WellcomeML: Repository for machine learning utilities at the Wellcome Trust
Stars: ✭ 31 (-70.75%)
extremeText: Library for fast text representation and extreme classification.
Stars: ✭ 141 (+33.02%)
bert-squeeze: 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-47.17%)
Transformers-Tutorials: This repository contains demos I made with the Transformers library by HuggingFace.
Stars: ✭ 2,828 (+2567.92%)
classifier multi label: Multi-label text classification with BERT and ALBERT
Stars: ✭ 127 (+19.81%)
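Multi-label classification differs from multi-class in the head: one independent sigmoid logit per label instead of a softmax over classes. A minimal sketch on top of a BERT encoder (label count, checkpoint, and threshold are illustrative assumptions, not taken from this repo):

```python
# Multi-label head over a BERT encoder: sigmoid per label, trainable with
# BCEWithLogitsLoss; labels are decided independently at inference.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoder = AutoModel.from_pretrained("bert-base-chinese")
head = nn.Linear(encoder.config.hidden_size, 10)  # assumption: 10 labels

inputs = tokenizer("待分类的一段中文文本", return_tensors="pt")  # "a Chinese text to classify"
cls = encoder(**inputs).last_hidden_state[:, 0]   # [CLS] token representation
probs = torch.sigmoid(head(cls))
predicted_labels = (probs > 0.5).nonzero()        # independent per-label decisions
```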
label-studio-transformers: Label data using HuggingFace's transformers and automatically get a prediction service
Stars: ✭ 117 (+10.38%)
BERT-NER: Using pre-trained BERT models for Chinese and English NER with 🤗 Transformers
Stars: ✭ 114 (+7.55%)
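With 🤗 Transformers, inference with a pre-trained NER model is a pipeline call. A sketch assuming the public dslim/bert-base-NER English checkpoint rather than this repo's own models:

```python
# Token classification (NER) via the transformers pipeline. The checkpoint
# is an assumption; swap in a Chinese or English model as needed.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```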
Text-Summarization: Abstractive and extractive text summarization using Transformers.
Stars: ✭ 38 (-64.15%)
spark-transformers: Library for exporting Apache Spark MLlib models for use in any Java application with no other dependencies.
Stars: ✭ 39 (-63.21%)
remixer-pytorch: Implementation of the Remixer block from the Remixer paper, in PyTorch
Stars: ✭ 37 (-65.09%)
n-grammer-pytorch: Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch
Stars: ✭ 50 (-52.83%)
smaller-transformers: Load What You Need: smaller multilingual Transformers for PyTorch and TensorFlow 2.0.
Stars: ✭ 66 (-37.74%)
minicons: Utility for analyzing Transformer-based representations of language.
Stars: ✭ 28 (-73.58%)
optimum: 🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools
Stars: ✭ 567 (+434.91%)