Abhijit-2592 / spacy-langdetect

License: MIT
A fully customisable language detection pipeline for spaCy


Projects that are alternatives of or similar to spacy-langdetect

spacymoji
💙 Emoji handling and meta data for spaCy with custom extension attributes
Stars: ✭ 174 (+102.33%)
Mutual labels:  spacy, spacy-extension
contextualSpellCheck
✔️Contextual word checker for better suggestions
Stars: ✭ 274 (+218.6%)
Mutual labels:  spacy, spacy-extension
spacy conll
Pipeline component for spaCy (and other spaCy-wrapped parsers such as spacy-stanza and spacy-udpipe) that adds CoNLL-U properties to a Doc and its sentences and tokens. Can also be used as a command-line tool.
Stars: ✭ 60 (-30.23%)
Mutual labels:  spacy, spacy-extension
extractacy
Spacy pipeline object for extracting values that correspond to a named entity (e.g., birth dates, account numbers, laboratory results)
Stars: ✭ 47 (-45.35%)
Mutual labels:  spacy, spacy-extension
Pytextrank
Python implementation of TextRank for phrase extraction and summarization of text documents
Stars: ✭ 1,675 (+1847.67%)
Mutual labels:  spacy, spacy-extension
spacy-iwnlp
German lemmatization with IWNLP as extension for spaCy
Stars: ✭ 22 (-74.42%)
Mutual labels:  spacy, spacy-extension
spacy hunspell
✏️ Hunspell extension for spaCy 2.0.
Stars: ✭ 94 (+9.3%)
Mutual labels:  spacy, spacy-extension
hmrb
Python Rule Processing Engine 🏺
Stars: ✭ 65 (-24.42%)
Mutual labels:  spacy, spacy-extension
augmenty
Augmenty is an augmentation library based on spaCy for augmenting texts.
Stars: ✭ 101 (+17.44%)
Mutual labels:  spacy, spacy-extension
amrlib
A python library that makes AMR parsing, generation and visualization simple.
Stars: ✭ 107 (+24.42%)
Mutual labels:  spacy, spacy-extension
spacy-fastlang
Language detection using Spacy and Fasttext
Stars: ✭ 34 (-60.47%)
Mutual labels:  language-detection, spacy
Neuralcoref
✨Fast Coreference Resolution in spaCy with Neural Networks
Stars: ✭ 2,453 (+2752.33%)
Mutual labels:  spacy, spacy-extension
spaczz
Fuzzy matching and more functionality for spaCy.
Stars: ✭ 215 (+150%)
Mutual labels:  spacy, spacy-extension
Whatthelang
Lightning Fast Language Prediction 🚀
Stars: ✭ 130 (+51.16%)
Mutual labels:  language-detection
DrFAQ
DrFAQ is a plug-and-play question answering NLP chatbot that can be generally applied to any organisation's text corpora.
Stars: ✭ 29 (-66.28%)
Mutual labels:  spacy
Fasttext.js
FastText for Node.js
Stars: ✭ 127 (+47.67%)
Mutual labels:  language-detection
Padatious
A neural network intent parser
Stars: ✭ 124 (+44.19%)
Mutual labels:  language-detection
spacy-dbpedia-spotlight
A spaCy wrapper for DBpedia Spotlight
Stars: ✭ 85 (-1.16%)
Mutual labels:  spacy
spacy-sentence-bert
Sentence transformers models for SpaCy
Stars: ✭ 88 (+2.33%)
Mutual labels:  spacy
Nlp Models Tensorflow
Gathers machine learning and Tensorflow deep learning models for NLP problems, 1.13 < Tensorflow < 2.0
Stars: ✭ 1,603 (+1763.95%)
Mutual labels:  language-detection

spacy-langdetect

Fully customizable language detection pipeline for spaCy

Installation

pip install spacy-langdetect

NOTE:

Requires spaCy >= 2.0. The spaCy version is not pinned as an install dependency of spacy-langdetect, so the package can also be used with nightly versions of spaCy.

Basic usage

Out of the box, it uses langdetect under the hood to detect languages on spaCy's Doc and Span objects.

import spacy
from spacy_langdetect import LanguageDetector
nlp = spacy.load("en")
nlp.add_pipe(LanguageDetector(), name="language_detector", last=True)
text = "This is English text. Er lebt mit seinen Eltern und seiner Schwester in Berlin. Yo me divierto todos los días en el parque. Je m'appelle Angélica Summer, j'ai 12 ans et je suis canadienne."
doc = nlp(text)
# Document-level language detection. Think of it as the average language of the document.
print(doc._.language)
# Sentence-level language detection
for i, sent in enumerate(doc.sents):
    print(sent, sent._.language)

# Token-level language detection, available from version 0.1.2.
# Use this with caution: language detection may not make sense for individual tokens.
for token in doc:
    print(token, token._.language)
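The document-level value can be read as a rough majority vote over the sentence-level detections. Purely as an illustration of that idea, here is a stdlib-only sketch; the per-sentence (language, score) pairs below are hypothetical data, not real output from the pipeline:

```python
from collections import Counter

def aggregate_language(sent_langs):
    """Pick the most common language label from per-sentence detections."""
    counts = Counter(lang for lang, _score in sent_langs)
    return counts.most_common(1)[0][0]

# Hypothetical per-sentence results: (language, confidence) pairs.
sent_langs = [("en", 0.99), ("de", 0.98), ("es", 0.97), ("fr", 0.99), ("en", 0.95)]
print(aggregate_language(sent_langs))  # "en" wins with two sentences
```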

Using your own language detector

Suppose you are not happy with the accuracy of the out-of-the-box language detector, or you have your own detector that you want to use in a spaCy pipeline. That's where the language_detection_function argument comes in. The function takes a spaCy Doc or Span object and can return any Python object, which is stored in doc._.language and span._.language. For example, let's say you want to use googletrans as your language detection module:

import spacy
from spacy.tokens import Doc, Span
from spacy_langdetect import LanguageDetector
# install using pip install googletrans
from googletrans import Translator
nlp = spacy.load("en")

def custom_detection_function(spacy_object):
    # A custom detection function should take a spaCy Doc or Span object.
    assert isinstance(spacy_object, (Doc, Span)), \
        "spacy_object must be a spaCy Doc or Span, but it is a {}".format(type(spacy_object))
    detection = Translator().detect(spacy_object.text)
    return {"language": detection.lang, "score": detection.confidence}

nlp.add_pipe(LanguageDetector(language_detection_function=custom_detection_function), name="language_detector", last=True)
text = "This is English text. Er lebt mit seinen Eltern und seiner Schwester in Berlin. Yo me divierto todos los días en el parque. Je m'appelle Angélica Summer, j'ai 12 ans et je suis canadienne."
doc = nlp(text)
# Document-level language detection. Think of it as the average language of the document.
print(doc._.language)
# Sentence-level language detection
for i, sent in enumerate(doc.sents):
    print(sent, sent._.language)

Similarly, you can also use pycld2 and other language detectors with spaCy.
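If all you need is to satisfy the contract described above, even a deliberately naive detector will do. The sketch below uses the standard library only; the stopword lists and the scoring are toy assumptions for illustration, not a real detector:

```python
# Toy detector: scores languages by counting common function words.
# Illustrative only -- use langdetect, googletrans, pycld2, etc. in practice.
STOPWORDS = {
    "en": {"the", "is", "and", "of", "this"},
    "de": {"der", "die", "und", "mit", "seiner"},
    "es": {"el", "los", "en", "todos", "me"},
}

def naive_detection_function(spacy_object):
    # Only spacy_object.text is used, so any Doc or Span works here.
    words = [w.strip(".,!?").lower() for w in spacy_object.text.split()]
    scores = {lang: sum(w in sw for w in words) for lang, sw in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return {"language": best, "score": scores[best] / total}
```

Plug it in exactly like the googletrans example: nlp.add_pipe(LanguageDetector(language_detection_function=naive_detection_function), name="language_detector", last=True).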
