http bench: Golang HTTP stress test tool, supporting single-node and distributed modes
Stars: ✭ 142 (-43.43%)
LearningToCompare-Tensorflow: TensorFlow implementation of the paper "Learning to Compare: Relation Network for Few-Shot Learning".
Stars: ✭ 17 (-93.23%)
revl: Helps to benchmark code for Autodesk Maya.
Stars: ✭ 14 (-94.42%)
npm-yarn-benchmark: Bash script for comparing NPM and Yarn performance.
Stars: ✭ 42 (-83.27%)
keras-bert-ner: Keras solution for the Chinese NER task using BiLSTM-CRF/BiGRU-CRF/IDCNN-CRF models with a pretrained language model; supports BERT/RoBERTa/ALBERT.
Stars: ✭ 7 (-97.21%)
laboratorio-de-ideias: Repository for project ideas we can use for studying, improving, or learning new technologies or resources.
Stars: ✭ 13 (-94.82%)
caliper-benchmarks: Sample benchmark files for Hyperledger Caliper. https://wiki.hyperledger.org/display/caliper
Stars: ✭ 69 (-72.51%)
mirror-bert: [EMNLP 2021] Mirror-BERT: converting pretrained language models to universal text encoders without labels.
Stars: ✭ 56 (-77.69%)
kaldi-timit-sre-ivector: Speaker recognition model based on i-vectors, using the TIMIT database.
Stars: ✭ 17 (-93.23%)
bert-squeeze: 🛠️ Tools for Transformer compression using PyTorch Lightning ⚡
Stars: ✭ 56 (-77.69%)
BinKit: Binary Code Similarity Analysis (BCSA) benchmark.
Stars: ✭ 54 (-78.49%)
task-transferability: Data and code for the paper "Exploring and Predicting Transferability across NLP Tasks" (EMNLP 2020).
Stars: ✭ 35 (-86.06%)
kwx: BERT-, LDA-, and TFIDF-based keyword extraction in Python.
Stars: ✭ 33 (-86.85%)
berserker: BERt chineSE woRd toKenizER.
Stars: ✭ 17 (-93.23%)
latenz: JavaScript HTTP latency analyzer.
Stars: ✭ 18 (-92.83%)
trove: Weakly supervised medical named entity classification.
Stars: ✭ 55 (-78.09%)
MeTAL: Official PyTorch implementation of "Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning" (ICCV 2021 Oral).
Stars: ✭ 24 (-90.44%)
gl-bench: ⏱ WebGL performance monitor with CPU/GPU load.
Stars: ✭ 146 (-41.83%)
RGBD-SODsurvey: RGB-D Salient Object Detection: A Survey.
Stars: ✭ 171 (-31.87%)
TEXTOIR: A flexible toolkit for open intent detection and discovery (ACL 2021).
Stars: ✭ 31 (-87.65%)
CARLA: A Python library to benchmark algorithmic recourse and counterfactual explanation algorithms.
Stars: ✭ 166 (-33.86%)
FinBERT: A pretrained BERT model for financial communications. https://arxiv.org/abs/2006.08097
Stars: ✭ 193 (-23.11%)
bern: A neural named entity recognition and multi-type normalization tool for biomedical text mining.
Stars: ✭ 151 (-39.84%)
embeddings: State-of-the-art text representations for natural language processing tasks; the initial version of the library focuses on the Polish language.
Stars: ✭ 27 (-89.24%)
policy-data-analyzer: Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US, and India, bringing NLP to policy analysis through an extensible framework with scraping, preprocessing, active learning, and text analysis pipelines.
Stars: ✭ 22 (-91.24%)
SQL-ProcBench: An open benchmark for procedural workloads in RDBMSs.
Stars: ✭ 26 (-89.64%)
mcQA: 🔮 Answering multiple-choice questions with language models.
Stars: ✭ 23 (-90.84%)
TV4Dialog: No description or website provided.
Stars: ✭ 33 (-86.85%)
hack-pet: 🐰 Managing command snippets for hackers/bug bounty hunters, with pet.
Stars: ✭ 77 (-69.32%)
PIE: Fast, non-autoregressive grammatical error correction using BERT. Code and pretrained models for the paper "Parallel Iterative Edit Models for Local Sequence Transduction" (EMNLP-IJCNLP 2019): www.aclweb.org/anthology/D19-1435.pdf
Stars: ✭ 164 (-34.66%)
ufw: A minimalist framework for rapid server-side application prototyping in C++, with dependency injection support.
Stars: ✭ 19 (-92.43%)
few shot slot tagging and NER: PyTorch implementation of the paper "Vector Projection Network for Few-shot Slot Tagging in Natural Language Understanding" by Su Zhu, Ruisheng Cao, Lu Chen, and Kai Yu.
Stars: ✭ 17 (-93.23%)
quic vs tcp: A Survey and Benchmark of QUIC.
Stars: ✭ 41 (-83.67%)
WSDM-Cup-2019: [ACM WSDM] 3rd-place solution at WSDM Cup 2019, Fake News Classification on Kaggle.
Stars: ✭ 62 (-75.3%)
bert nli: A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT).
Stars: ✭ 97 (-61.35%)
Text-Summarization: Abstractive and extractive text summarization using Transformers.
Stars: ✭ 38 (-84.86%)
anonymisation: Anonymization of French legal cases based on Flair embeddings.
Stars: ✭ 85 (-66.14%)
OpenGNT: Open Greek New Testament Project; NA28/NA27-equivalent text and resources.
Stars: ✭ 55 (-78.09%)
Black-Box-Tuning: [ICML 2022] Black-Box Tuning for Language-Model-as-a-Service.
Stars: ✭ 99 (-60.56%)
ALBERT-Pytorch: PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations).
Stars: ✭ 214 (-14.74%)
node-vs-ruby-io: Node vs. Ruby I/O benchmarks when resizing images with libvips.
Stars: ✭ 11 (-95.62%)
semantic-document-relations: Implementation, trained models, and result data for the paper "Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles".
Stars: ✭ 21 (-91.63%)
rop-benchmark: A tool to compare ROP compilers.
Stars: ✭ 23 (-90.84%)
ttskit: Text-to-speech toolkit. An easy-to-use Chinese speech synthesis toolbox, including a speech encoder, synthesizer, vocoder, and visualization module.
Stars: ✭ 336 (+33.86%)
Medi-CoQA: Conversational question answering on clinical text.
Stars: ✭ 22 (-91.24%)