
backprop-ai / backprop

Licence: other
Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to backprop

Haystack
🔍 Haystack is an open source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
Stars: ✭ 3,409 (+1388.65%)
Mutual labels:  transformers, question-answering, transfer-learning, language-model, bert
Nlp chinese corpus
Large-scale Chinese corpus for natural language processing
Stars: ✭ 6,656 (+2806.55%)
Mutual labels:  text-classification, question-answering, language-model, bert
Bert language understanding
Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN
Stars: ✭ 933 (+307.42%)
Mutual labels:  text-classification, question-answering, transfer-learning, language-model
wechsel
Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
Stars: ✭ 39 (-82.97%)
Mutual labels:  transformers, transfer-learning, language-model, bert
FNet-pytorch
Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms
Stars: ✭ 204 (-10.92%)
Mutual labels:  text-classification, image-classification, language-model
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-89.52%)
Mutual labels:  text-classification, transformers, bert
TorchBlocks
A PyTorch-based toolkit for natural language processing
Stars: ✭ 85 (-62.88%)
Mutual labels:  text-classification, transformers, bert
Skin Lesions Classification DCNNs
Transfer Learning with DCNNs (DenseNet, Inception V3, Inception-ResNet V2, VGG16) for skin lesions classification
Stars: ✭ 47 (-79.48%)
Mutual labels:  image-classification, transfer-learning, fine-tuning
HugsVision
HugsVision is an easy-to-use HuggingFace wrapper for state-of-the-art computer vision
Stars: ✭ 154 (-32.75%)
Mutual labels:  transformers, image-classification, bert
Text and Audio classification with Bert
Text Classification in Turkish Texts with Bert
Stars: ✭ 34 (-85.15%)
Mutual labels:  text-classification, transformers, bert
Filipino-Text-Benchmarks
Open-source benchmark datasets and pretrained transformer models in the Filipino language.
Stars: ✭ 22 (-90.39%)
Mutual labels:  text-classification, transfer-learning, bert
Spark Nlp
State of the Art Natural Language Processing
Stars: ✭ 2,518 (+999.56%)
Mutual labels:  text-classification, transformers, bert
Clue
Chinese Language Understanding Evaluation benchmark: datasets, baselines, pre-trained models, corpus and leaderboard
Stars: ✭ 2,425 (+958.95%)
Mutual labels:  transformers, language-model, bert
Pytorch-NLU
Pytorch-NLU, a Chinese text classification and sequence annotation toolkit. It supports multi-class and multi-label classification of Chinese long and short texts, as well as sequence annotation tasks such as Chinese named entity recognition, part-of-speech tagging, and word segmentation.
Stars: ✭ 151 (-34.06%)
Mutual labels:  text-classification, transformers, bert
Tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
Stars: ✭ 5,077 (+2117.03%)
Mutual labels:  transformers, language-model, bert
text2class
Multi-class text categorization using state-of-the-art pre-trained contextualized language models, e.g. BERT
Stars: ✭ 15 (-93.45%)
Mutual labels:  text-classification, transformers, bert
Simpletransformers
Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
Stars: ✭ 2,881 (+1158.08%)
Mutual labels:  text-classification, transformers, question-answering
ParsBigBird
Persian Bert For Long-Range Sequences
Stars: ✭ 58 (-74.67%)
Mutual labels:  transformers, transfer-learning, bert
text2text
Text2Text: Cross-lingual natural language processing and generation toolkit
Stars: ✭ 188 (-17.9%)
Mutual labels:  transformers, question-answering, bert
policy-data-analyzer
Building a model to recognize incentives for landscape restoration in environmental policies from Latin America, the US and India. Bringing NLP to the world of policy analysis through an extensible framework that includes scraping, preprocessing, active learning and text analysis pipelines.
Stars: ✭ 22 (-90.39%)
Mutual labels:  text-classification, transformers, bert

Backprop

Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.

Solve a variety of tasks with pre-trained models or finetune them in one line for your own tasks.

Out-of-the-box tasks you can solve with Backprop:

  • Conversational question answering in English
  • Text Classification in 100+ languages
  • Image Classification
  • Text Vectorisation in 50+ languages
  • Image Vectorisation
  • Summarisation in English
  • Emotion detection in English
  • Text Generation
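
For instance, text classification works zero-shot: you pass the text along with candidate labels and get back label probabilities. A minimal sketch (the TextClassification class and its call signature are assumptions patterned on the QA and TextGeneration examples below):

import backprop

# Assumed zero-shot classification interface: text plus candidate labels
tc = backprop.TextClassification()

probs = tc("I am mad because my product broke.", ["product issue", "nature"])

print(probs)
# Prints something like: {"product issue": 0.98, "nature": 0.02}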

For more specific use cases, you can adapt a task with little data and a single line of code via finetuning.

  • Getting started: Installation, few-minute introduction
  • 💡 Examples: Finetuning and usage examples
  • 📙 Docs: In-depth documentation about task inference and finetuning
  • ⚙️ Models: Overview of available models

Getting started

Installation

Install Backprop via PyPI:

pip install backprop

Basic task inference

Tasks act as interfaces that let you easily use a variety of supported models.

import backprop

context = "Take a look at the examples folder to see use cases!"

qa = backprop.QA()

# Start building!
answer = qa("Where can I see what to build?", context)

print(answer)
# Prints: "the examples folder"

You can run all tasks and models on your own machine, or in production with our inference API, simply by specifying your api_key.
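
For example, a hypothetical hosted-inference call (the api_key keyword on the task constructor is an assumption mirroring the upload example below):

import backprop

# With an api_key, inference runs against the hosted API instead of locally
qa = backprop.QA(api_key="abc")

answer = qa("Where can I see what to build?", "Take a look at the examples folder!")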

See how to use all available tasks.

Basic finetuning and uploading

Each task implements finetuning that lets you adapt a model for your specific use case in a single line of code.

A finetuned model is easy to upload to production, letting you focus on building great applications.

import backprop

tg = backprop.TextGeneration("t5-small")

# Any text works as training data
inp = ["I really liked the service I received!", "Meh, it was not impressive."]
out = ["positive", "negative"]

# Finetune with a single line of code
tg.finetune({"input_text": inp, "output_text": out})

# Use your trained model
prediction = tg("I enjoyed it!")

print(prediction)
# Prints: "positive"

# Upload to Backprop for production ready inference
# Describe your model
name = "t5-sentiment"
description = "Predicts positive and negative sentiment"

tg.upload(name=name, description=description, api_key="abc")

See finetuning for other tasks.

Why Backprop?

  1. No experience needed

    • Entrance to practical AI should be simple
    • Get state-of-the-art performance in your task without being an expert
  2. Data is a bottleneck

    • Solve real world tasks without any data
    • With transfer learning, even a small amount of data can adapt a task to your niche requirements
  3. There is an overwhelming number of models

    • We offer a curated selection of the best open-source models and make them simple to use
    • A few general models can accomplish more with less optimisation
  4. Deploying models cost-effectively is hard work

    • If our models suit your use case, no deployment is needed: just call our API
    • Adapt and deploy your own model with just a few lines of code
    • Our API scales, is always available, and you only pay for usage

Examples

Documentation

Check out our docs for in-depth task inference and finetuning.

Model Hub

Curated list of state-of-the-art models.

Demos

Zero-shot image classification with CLIP.
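
As a rough sketch of what the demo boils down to (the ImageClassification task taking an image path plus candidate labels is an assumption patterned on the text tasks above):

import backprop

ic = backprop.ImageClassification()

# Zero-shot: CLIP scores the image against arbitrary candidate labels
probs = ic("dog.png", ["dog", "cat"])

print(probs)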

Credits

Backprop relies on many great open-source libraries to work.

Feedback

Found a bug or have ideas for new tasks and models? Open an issue.
