961 open source projects that are alternatives to or similar to FewCLUE

http bench
Golang HTTP stress-testing tool, supporting both single-node and distributed modes
Stars: ✭ 142 (-43.43%)
Mutual labels:  benchmark
LearningToCompare-Tensorflow
TensorFlow implementation of the paper "Learning to Compare: Relation Network for Few-Shot Learning"
Stars: ✭ 17 (-93.23%)
Mutual labels:  few-shot-learning
revl
Helps to benchmark code for Autodesk Maya.
Stars: ✭ 14 (-94.42%)
Mutual labels:  benchmark
npm-yarn-benchmark
Bash script for comparing NPM and Yarn performance
Stars: ✭ 42 (-83.27%)
Mutual labels:  benchmark
benchmarking-fft
Choosing an FFT library...
Stars: ✭ 74 (-70.52%)
Mutual labels:  benchmark
keras-bert-ner
Keras solution for the Chinese NER task using BiLSTM-CRF/BiGRU-CRF/IDCNN-CRF models with a pretrained language model; supports BERT/RoBERTa/ALBERT
Stars: ✭ 7 (-97.21%)
Mutual labels:  bert
laboratorio-de-ideias
Repository of project ideas that can be used for study, practice, or learning new technologies and resources.
Stars: ✭ 13 (-94.82%)
Mutual labels:  pet
caliper-benchmarks
Sample benchmark files for Hyperledger Caliper https://wiki.hyperledger.org/display/caliper
Stars: ✭ 69 (-72.51%)
Mutual labels:  benchmark
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-90.44%)
Mutual labels:  bert
mirror-bert
[EMNLP 2021] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels.
Stars: ✭ 56 (-77.69%)
Mutual labels:  bert
kaldi-timit-sre-ivector
Develops a speaker recognition model based on i-vectors using the TIMIT database
Stars: ✭ 17 (-93.23%)
Mutual labels:  chinese
bert-squeeze
🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-77.69%)
Mutual labels:  bert
BinKit
Binary Code Similarity Analysis (BCSA) Benchmark
Stars: ✭ 54 (-78.49%)
Mutual labels:  benchmark
nowplaying-RS-Music-Reco-FM
#nowplaying-RS: Music Recommendation using Factorization Machines
Stars: ✭ 23 (-90.84%)
Mutual labels:  benchmark
SemEval2019Task3
Code for ANA at SemEval-2019 Task 3
Stars: ✭ 41 (-83.67%)
Mutual labels:  bert
task-transferability
Data and code for our paper "Exploring and Predicting Transferability across NLP Tasks", to appear at EMNLP 2020.
Stars: ✭ 35 (-86.06%)
Mutual labels:  bert
kwx
BERT-, LDA-, and TF-IDF-based keyword extraction in Python
Stars: ✭ 33 (-86.85%)
Mutual labels:  bert
berserker
Berserker - BERt chineSE woRd toKenizER
Stars: ✭ 17 (-93.23%)
Mutual labels:  bert
latenz
JavaScript HTTP latency analyzer
Stars: ✭ 18 (-92.83%)
Mutual labels:  benchmark
trove
Weakly supervised medical named entity classification
Stars: ✭ 55 (-78.09%)
Mutual labels:  bert
MeTAL
Official PyTorch implementation of "Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning" (ICCV2021 Oral)
Stars: ✭ 24 (-90.44%)
Mutual labels:  few-shot-learning
gl-bench
⏱ WebGL performance monitor with CPU/GPU load.
Stars: ✭ 146 (-41.83%)
Mutual labels:  benchmark
RGBD-SODsurvey
RGB-D Salient Object Detection: A Survey
Stars: ✭ 171 (-31.87%)
Mutual labels:  benchmark
BERT-for-Chinese-Question-Answering
No description or website provided.
Stars: ✭ 75 (-70.12%)
Mutual labels:  bert
TEXTOIR
TEXTOIR is a flexible toolkit for open intent detection and discovery. (ACL 2021)
Stars: ✭ 31 (-87.65%)
Mutual labels:  bert
LuaJIT-Benchmarks
LuaJIT Benchmark tests
Stars: ✭ 20 (-92.03%)
Mutual labels:  benchmark
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (-33.86%)
Mutual labels:  benchmark
FinBERT
A Pretrained BERT Model for Financial Communications. https://arxiv.org/abs/2006.08097
Stars: ✭ 193 (-23.11%)
Mutual labels:  bert
hashcat-benchmark-comparison
Hashcat Benchmark Comparison
Stars: ✭ 22 (-91.24%)
Mutual labels:  benchmark
bern
A neural named entity recognition and multi-type normalization tool for biomedical text mining
Stars: ✭ 151 (-39.84%)
Mutual labels:  bert
embeddings
Embeddings: state-of-the-art text representations for natural language processing tasks; the initial version of the library focuses on the Polish language
Stars: ✭ 27 (-89.24%)
Mutual labels:  benchmark
policy-data-analyzer
Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US and India. Bringing NLP to the world of policy analysis through an extensible framework that includes scraping, preprocessing, active learning and text analysis pipelines.
Stars: ✭ 22 (-91.24%)
Mutual labels:  bert
benchmarkjs-pretty
Tiny wrapper around Benchmark.js with a nicer API
Stars: ✭ 20 (-92.03%)
Mutual labels:  benchmark
Functional-Light-JS-Zh
Chinese translation of "Functional-Light-JS"
Stars: ✭ 14 (-94.42%)
Mutual labels:  chinese
SQL-ProcBench
SQL-ProcBench is an open benchmark for procedural workloads in RDBMSs.
Stars: ✭ 26 (-89.64%)
Mutual labels:  benchmark
mcQA
🔮 Answering multiple choice questions with Language Models.
Stars: ✭ 23 (-90.84%)
Mutual labels:  bert
TV4Dialog
No description or website provided.
Stars: ✭ 33 (-86.85%)
Mutual labels:  chinese
hack-pet
🐰 Manage command snippets for hackers/bug-bounty hunters with pet.
Stars: ✭ 77 (-69.32%)
Mutual labels:  pet
classifier multi label seq2seq attention
Multi-label text classification using BERT/ALBERT with seq2seq, attention, and beam search
Stars: ✭ 26 (-89.64%)
Mutual labels:  bert
PIE
Fast + Non-Autoregressive Grammatical Error Correction using BERT. Code and Pre-trained models for paper "Parallel Iterative Edit Models for Local Sequence Transduction": www.aclweb.org/anthology/D19-1435.pdf (EMNLP-IJCNLP 2019)
Stars: ✭ 164 (-34.66%)
Mutual labels:  bert
ufw
A minimalist framework for rapid server side applications prototyping in C++ with dependency injection support.
Stars: ✭ 19 (-92.43%)
Mutual labels:  benchmark
few shot slot tagging and NER
PyTorch implementation of the paper: Vector Projection Network for Few-shot Slot Tagging in Natural Language Understanding. Su Zhu, Ruisheng Cao, Lu Chen and Kai Yu.
Stars: ✭ 17 (-93.23%)
Mutual labels:  few-shot-learning
Python-Complementary-Languages
Just a small test to see which language is better for extending Python when working with lists of lists
Stars: ✭ 32 (-87.25%)
Mutual labels:  benchmark
quic vs tcp
A Survey and Benchmark of QUIC
Stars: ✭ 41 (-83.67%)
Mutual labels:  benchmark
WSDM-Cup-2019
[ACM-WSDM] 3rd place solution at WSDM Cup 2019, Fake News Classification on Kaggle.
Stars: ✭ 62 (-75.3%)
Mutual labels:  bert
bert nli
A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT)
Stars: ✭ 97 (-61.35%)
Mutual labels:  bert
hyperspectral-soilmoisture-dataset
Hyperspectral and soil-moisture data from a field campaign based on a soil sample. Karlsruhe (Germany), 2017.
Stars: ✭ 23 (-90.84%)
Mutual labels:  benchmark
Text-Summarization
Abstractive and Extractive Text summarization using Transformers.
Stars: ✭ 38 (-84.86%)
Mutual labels:  bert
hashcatbenchmark
Hashcat benchmarks for different GPUs
Stars: ✭ 19 (-92.43%)
Mutual labels:  benchmark
anonymisation
Anonymization of legal cases (Fr) based on Flair embeddings
Stars: ✭ 85 (-66.14%)
Mutual labels:  bert
OpenGNT
Open Greek New Testament Project; NA28 / NA27 Equivalent Text & Resources
Stars: ✭ 55 (-78.09%)
Mutual labels:  chinese
Black-Box-Tuning
ICML'2022: Black-Box Tuning for Language-Model-as-a-Service
Stars: ✭ 99 (-60.56%)
Mutual labels:  few-shot-learning
ALBERT-Pytorch
PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations)
Stars: ✭ 214 (-14.74%)
Mutual labels:  bert
typescript-orm-benchmark
⚖️ ORM benchmarking for Node.js applications written in TypeScript
Stars: ✭ 106 (-57.77%)
Mutual labels:  benchmark
node-vs-ruby-io
Node vs Ruby I/O benchmarks when resizing images with libvips.
Stars: ✭ 11 (-95.62%)
Mutual labels:  benchmark
semantic-document-relations
Implementation, trained models and result data for the paper "Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles"
Stars: ✭ 21 (-91.63%)
Mutual labels:  bert
rop-benchmark
ROP Benchmark is a tool to compare ROP compilers
Stars: ✭ 23 (-90.84%)
Mutual labels:  benchmark
chinese-calendar
🔖 Chinese calendar control in C#
Stars: ✭ 22 (-91.24%)
Mutual labels:  chinese
ttskit
Text-to-speech toolkit. A handy Chinese speech synthesis toolbox, including a speech encoder, synthesizer, vocoder, and visualization modules.
Stars: ✭ 336 (+33.86%)
Mutual labels:  chinese
Medi-CoQA
Conversational Question Answering on Clinical Text
Stars: ✭ 22 (-91.24%)
Mutual labels:  bert
241-300 of 961 similar projects