
erickrf / ppdb

License: MIT
Interface for reading the Paraphrase Database (PPDB)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to ppdb

NLP-Natural-Language-Processing
Projects and useful articles / links
Stars: ✭ 149 (+577.27%)
Mutual labels:  nlp-resources, nlp-library
TextFeatureSelection
Python library for feature selection for text features. It has filter method, genetic algorithm and TextFeatureSelectionEnsemble for improving text classification models. Helps improve your machine learning models
Stars: ✭ 42 (+90.91%)
Mutual labels:  nlp-resources, nlp-library
minie
An open information extraction system that provides compact extractions
Stars: ✭ 83 (+277.27%)
Mutual labels:  nlp-resources, nlp-library
Urduhack
An NLP library for the Urdu language. It comes with a lot of batteries-included features to help you process Urdu data in the easiest way possible.
Stars: ✭ 200 (+809.09%)
Mutual labels:  nlp-library
Fnlp
Toolkit for Chinese natural language processing
Stars: ✭ 2,468 (+11118.18%)
Mutual labels:  nlp-library
indic nlp resources
Resources to go with the Indic NLP Library
Stars: ✭ 55 (+150%)
Mutual labels:  nlp-resources
spaczz
Fuzzy matching and more functionality for spaCy.
Stars: ✭ 215 (+877.27%)
Mutual labels:  nlp-library
Pyarabic
pyarabic
Stars: ✭ 183 (+731.82%)
Mutual labels:  nlp-library
Nlp profiler
A simple NLP library that allows profiling datasets with one or more text columns. When given a dataset and a column name containing text data, NLP Profiler will return either high-level insights or low-level/granular statistical information about the text in that column.
Stars: ✭ 181 (+722.73%)
Mutual labels:  nlp-library
preprocess-conll05
Scripts for preprocessing the CoNLL-2005 SRL dataset.
Stars: ✭ 17 (-22.73%)
Mutual labels:  nlp-resources
TutorialBank
No description or website provided.
Stars: ✭ 85 (+286.36%)
Mutual labels:  nlp-resources
Multi Task Nlp
multi_task_NLP is a utility toolkit enabling NLP developers to easily train and infer a single model for multiple tasks.
Stars: ✭ 221 (+904.55%)
Mutual labels:  nlp-library
kbbi-python
A Python module that fetches a page of a word/phrase from the Online Indonesian Dictionary (https://kbbi.kemdikbud.go.id).
Stars: ✭ 58 (+163.64%)
Mutual labels:  nlp-resources
Sudachipy
Python version of Sudachi, a Japanese tokenizer.
Stars: ✭ 207 (+840.91%)
Mutual labels:  nlp-library
py-lingualytics
A text analytics library with support for codemixed data
Stars: ✭ 36 (+63.64%)
Mutual labels:  nlp-library
lima
The Libre Multilingual Analyzer, a Natural Language Processing (NLP) C++ toolkit.
Stars: ✭ 75 (+240.91%)
Mutual labels:  nlp-library
linguistic-datasets-portuguese
Linguistic Datasets for Portuguese: a list of linguistic datasets for the Portuguese language with flexible licenses: databases, word lists, synonyms, antonyms, thematic dictionaries, thesauri, linked data, semantics, ontologies and knowledge representation
Stars: ✭ 46 (+109.09%)
Mutual labels:  nlp-resources
awesome-yoruba-nlp
📖 A curated list of resources dedicated to Natural Language Processing (NLP) in the Yoruba Language.
Stars: ✭ 21 (-4.55%)
Mutual labels:  nlp-resources
schrutepy
The Entire Transcript from the Office in Tidy Format
Stars: ✭ 22 (+0%)
Mutual labels:  nlp-library
nlp-notebooks
A collection of natural language processing notebooks.
Stars: ✭ 19 (-13.64%)
Mutual labels:  nlp-resources

PPDB

This module contains functions for reading the Paraphrase Database (PPDB), an automatically generated database of paraphrases in different languages.

Long story short, PPDB was generated by selecting words and phrases that were translated in the same way to English. Thus, a known problem with it is that many entries are not real paraphrases, but variations in gender, number, case or other morphological subtleties not present in English.

This package provides an easy interface for using the PPDB in your code, as well as entry points for defining filter functions.

Filters

Currently, only filters for Portuguese are implemented. They discard paraphrase rules that only change gender or number, as well as leading or trailing articles and commas in the rules.

Here is an example of a filter function that you can implement:

def my_filter(lhs, rhs):
    """
    Return True if the pair should be filtered out (it is not a relevant
    paraphrase); otherwise return False.
    """
    stripped_lhs = strip_suffix(lhs)
    stripped_rhs = strip_suffix(rhs)

    if stripped_lhs == stripped_rhs:
        return True

    return False

def strip_suffix(word):
    # language-specific logic goes here
    ...
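
As an example of what the language-specific part could look like, here is a very naive suffix stripper in the spirit of the Portuguese filters (purely illustrative, not the code shipped in ppdb_pt); it collapses common gender/number endings so that rules differing only in them are filtered out:

def strip_suffix(word):
    # Purely illustrative: drop common Portuguese gender/number endings so
    # that e.g. "amigo", "amiga", "amigos" and "amigas" compare as equal.
    for suffix in ('as', 'os', 'es', 'a', 'o', 's'):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word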

Then you call ppdb.load_ppdb with it:

import ppdb
ppdb_rules = ppdb.load_ppdb(path, my_filter)

Loading a PPDB file and filtering pairs can be time-consuming, especially for the larger files. For this reason, I recommend using pickle to serialize the TransformationDict after it is created, so that the next time it can be loaded much faster. If you pass a path ending in .pickle, ppdb.load_ppdb() will just load it and skip the filtering logic.
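
For example (the file names here are placeholders):

import pickle
import ppdb

# First run: parse and filter the raw PPDB file (slow).
ppdb_rules = ppdb.load_ppdb('ppdb-xl-lexical', my_filter)

# Serialize the resulting TransformationDict for later use.
with open('ppdb-rules.pickle', 'wb') as f:
    pickle.dump(ppdb_rules, f)

# Later runs: a path ending in .pickle is loaded directly, skipping the filters.
ppdb_rules = ppdb.load_ppdb('ppdb-rules.pickle')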

If you want to use the existing Portuguese filters, import ppdb_pt:

import ppdb
from ppdb import ppdb_pt

ppdb_rules = ppdb.load_ppdb(path)

And if you happen to write filter functions for another language, please submit a pull request!

Singleton Usage

Once a dataset is loaded, the ppdb module stores the object it returns as a module-level singleton. You can then call ppdb.get_rhs() to get the RHS of a given LHS.

In order to replace the singleton object inside the module, call ppdb.load_ppdb() with force=True.
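
A minimal sketch of the singleton usage (the path and the query string are placeholders):

import ppdb

ppdb.load_ppdb('ppdb-rules.pickle')   # reads the data and stores the singleton
ppdb.get_rhs('A')                     # queries the stored singleton

# Load a different file, replacing the stored singleton:
ppdb.load_ppdb('other-ppdb-file', force=True)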

TransformationDict

The paraphrase rules are stored in a data structure called TransformationDict, which is a subclass of Python's built-in dict. The TransformationDict returned by load_ppdb maps the left-hand side (LHS) of each rule to its right-hand sides (RHS).

Each value in the dictionary is a tuple containing the set of RHS for that LHS and another TransformationDict with possible continuations of the LHS (longer left-hand sides that begin with the same tokens).

Confused? Let's start simple. Suppose there are two paraphrase rules:

A -> X
A B -> Y

A TransformationDict storing them would look like this:

>>> ppdb_rules
{'A': ({('X',)}, {'B': ({('Y',)}, {})})}
>>> rhs, more_rules = ppdb_rules['A']
>>> rhs
{('X',)}  # a set with the only RHS for "A"
>>> more_rules
{'B': ({('Y',)}, {})}  # more nested stuff
>>> rhs, more_rules = ppdb_rules[('A', 'B')]
>>> rhs
{('Y',)}  # a set with the only RHS for "A B"
>>> more_rules
{}
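
Since the nested structure is made of plain dicts, sets and tuples, you can also walk it yourself. The helper below is not part of ppdb, just a sketch of how the rules above could be flattened back into (LHS, RHS set) pairs:

def iter_rules(rules, prefix=()):
    # Recursively yield (lhs_tokens, rhs_set) pairs from a TransformationDict.
    for token, (rhs_set, continuations) in rules.items():
        lhs = prefix + (token,)
        if rhs_set:
            yield lhs, rhs_set
        yield from iter_rules(continuations, lhs)

# With the rules above, this yields (('A',), {('X',)}) and (('A', 'B'), {('Y',)}).
list(iter_rules(ppdb_rules))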

If you only want the RHS for a specific LHS, you can use get_rhs(), like in the singleton usage:

>>> ppdb_rules.get_rhs('A')
{('X',)}