datasciencecampus / pygrams

Extracts key terminology (n-grams) from any large collection of documents (>1000) and forecasts emergence

Description of tool

This Python-based app (pygrams.py) is designed to extract popular or emergent n-grams/terms (words or short phrases) from free text within a large (>1,000) corpus of documents. Example corpora of granted patent document abstracts are included for testing purposes.

The app pipeline (more details in the user option section; an illustrative code sketch follows the list):

  1. Input Text Data Text data can be supplied in several document formats (e.g. csv, xls, pickled Python dataframes).
  2. TFIDF Dictionary This is the processed list of terms (n-grams) extracted from the whole corpus. These terms are the columns of the TFIDF sparse matrix. The user can control the following parameters: minimum document frequency, stopwords, and n-gram range.
  3. TFIDF Computation Grab a coffee if your text corpus is large (>1 million docs) :)
  4. Filters These are filters applied to the computed TFIDF matrix. They consist of document filters and term filters.
    1. Document Filters These filters work at document level. Examples are: date range, column features (e.g. CPC classification).
    2. Term Filters These filters work at term level. Examples are: a search terms list (e.g. pharmacy, medicine, chemist).
  5. Mask the TFIDF Matrix Apply the filters to the TFIDF matrix.
  6. Emergence
    1. Emergence Calculations Options include Porter 2018 emergence calculations, curve fitting, or calculations designed to favour exponential-like emergence.
    2. Emergence Forecasts Options include ARIMA, linear and quadratic regression, Holt-Winters, and state-space models.
  7. Outputs The default 'report' output is a ranked and scored list of 'popular' n-grams, or of emergent ones if selected. Other outputs include a word cloud and an HTML emergence report.
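As an illustration of steps 2, 3 and 7, the following minimal sketch builds a TF-IDF matrix over one of the bundled datasets and ranks terms by summed score. It uses pandas and scikit-learn directly and is not pyGrams' actual implementation; it assumes the bundled dataset's text column is 'abstract' (the default text header).

    # Illustrative sketch only (not pyGrams' internal code): load a corpus,
    # build a TF-IDF matrix over 1-3 grams, and rank terms by summed score.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = pd.read_pickle('data/USPTO-random-1000.pkl.bz2')['abstract']

    vectorizer = TfidfVectorizer(ngram_range=(1, 3),   # unigrams to trigrams
                                 max_df=0.05,          # maximum document frequency
                                 stop_words='english')
    tfidf = vectorizer.fit_transform(docs)             # sparse documents x terms matrix

    scores = tfidf.sum(axis=0).A1                      # summed TF-IDF per term
    terms = vectorizer.get_feature_names_out()         # get_feature_names() on older scikit-learn
    for term, score in sorted(zip(terms, scores), key=lambda x: -x[1])[:10]:
        print(f'{term}\t{score:.4f}')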

Installation guide

Setup using Docker

Ensure that docker is installed on your machine.

To build your own Docker image

Navigate to the root directory of the project and build the Docker image:

docker build -t pygrams .

-t - tags the image

pygrams - the image tag

. - the build context (the current directory)

To use a pre-built Docker image

The latest version of pyGrams has been added to docker.io at https://hub.docker.com/r/datasciencecampus/pygrams. To use this:

docker pull datasciencecampus/pygrams

To run the Docker image

Run the built or pulled docker image using

docker run pygrams

If you would like to pass parameters when running the program (as described in the User guide), append them to the docker run command:

docker run pygrams -mn=1 -mx=3

Have a look at User guide for further runtime parameters.

Setup without Docker

pyGrams.py has been developed to work on both Windows and macOS. To install:

  1. Please make sure Python 3.6 is installed and set in your path.

    To check your system's default Python version, run the following in a command line/terminal:

    python --version
    

    Note: If Python 2.x is the default Python version but you have installed Python 3.x, your path may be set up to use python3 instead of python.

  2. To install pyGrams packages and dependencies, from the root directory (./pyGrams) run:

    pip install -e .
    

    This will install all the libraries and then download their required datasets (namely NLTK's data). Once installed, setup will run some tests. If the tests pass, the app is ready to run. If any of the tests fail, please open a GitHub issue on the project repository.

System Performance

System performance was tested on a MacBook Pro (2.7 GHz Intel Core i7, 16 GB RAM) with 3.2M US patent abstracts from approximately 2005 to 2018. Indicatively, it initially takes about 6 hours to produce a specially optimised 100,000-term TFIDF dictionary with a file size under 100 MB. Once this is created, however, a popular terminology query takes approximately 1 minute, and an emerging terminology query approximately 7 minutes.

User guide

pyGrams is command line driven, and called in the following manner:

python pygrams.py

Input Text Data

Selecting the document source (-ds)

This argument is used to select the corpus of documents to analyse. The default source is a pre-created random 1,000 patent dataset from the USPTO, USPTO-random-1000.pkl.bz2.

Pre-created datasets of 100, 1,000, 10,000, 100,000, and 500,000 patents are available in the ./data folder:

  • USPTO-random-100.pkl.bz2
  • USPTO-random-1000.pkl.bz2
  • USPTO-random-10000.pkl.bz2
  • USPTO-random-100000.pkl.bz2
  • USPTO-random-500000.pkl.bz2

For example, to load the 10,000 pickled dataset for patents, use:

python pygrams.py -ds=USPTO-random-10000.pkl.bz2

To use your own document dataset, place it in the ./data folder of pyGrams (an example of preparing a compatible dataset follows the table below). File types currently supported are:

  • pkl.bz2: compressed pickle file containing a dataset
  • xlsx: new Microsoft Excel format
  • xls: old Microsoft Excel format
  • csv: comma separated value file (with headers)

Datasets should contain the following columns:

Column           Required?  Comments
Free text field  Yes        Terms are extracted from here
Date             Optional   Compulsory for emergence analysis
Other headers    Optional   Can be used to filter by content
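As an illustration only (this is not part of pyGrams), a compatible dataset could be prepared and saved as a compressed pickle with pandas; the column names below are examples, to be passed via -th, -dh and -fh:

    # Illustrative only: build a pkl.bz2 dataset that pyGrams can read.
    # Column names ('abstract', 'publication_date', 'female', 'british') are
    # examples; pass your own names via -th, -dh and -fh when running pygrams.py.
    import pandas as pd

    df = pd.DataFrame({
        'abstract': ['An unmanned aerial vehicle with a heat exchanger ...',
                     'A fuel cell stack with improved energy storage ...'],
        'publication_date': ['2016/03/01', '2017/07/31'],  # YYYY/MM/DD
        'female': [1, 0],   # optional binary columns for filtering
        'british': [0, 1],
    })

    # pandas infers bz2 compression from the file extension
    df.to_pickle('data/my-documents.pkl.bz2')

The resulting file could then be analysed with something like python pygrams.py -ds=my-documents.pkl.bz2 -th=abstract -dh=publication_date.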

Selecting the text and date column header names (-th, -dh)

When loading a document dataset, you can specify the text and date column header names if they differ from the defaults:

  • -th: free text field column ('text heading'; default is 'abstract')
  • -dh: date column ('date heading'; default is None, as pyGrams can also run as a keyword extractor only, without timeseries analysis); dates are expected in 'YYYY/MM/DD' format

For example, for a corpus of book blurbs you could use:

python pygrams.py -th='blurb' -dh='published_date'

Using cached files to speed up processing (-uc)

To save processing time, pyGrams caches data structures that are costly and slow to compute at various stages of the pipeline, such as the compressed tf-idf matrix, the timeseries matrix, and the smoothed series and its derivatives from the Kalman filter:

python pygrams.py -uc all-mdf-0.05-200501-201841
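The cache name above follows pyGrams' own conventions. As a generic illustration of the underlying pattern only (not pyGrams' internal format), costly objects can be stored and reloaded as compressed pickles:

    # Generic caching pattern, for illustration only (pyGrams manages its own
    # cache files and naming scheme).
    import bz2
    import pickle

    def save_cached(obj, path):
        with bz2.open(path, 'wb') as f:
            pickle.dump(obj, f)

    def load_cached(path):
        with bz2.open(path, 'rb') as f:
            return pickle.load(f)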

TFIDF Dictionary

N-gram selection (-mn, -mx)

An n-gram is a contiguous sequence of n items. N-grams can be unigrams (single words, e.g., vehicle), bigrams (sequences of two words, e.g., aerial vehicle), trigrams (sequences of three words, e.g., unmanned aerial vehicle), or any other number of contiguous terms.

The following arguments set the n-gram range, for example unigrams, bigrams, and trigrams (the default):

python pygrams.py -mn=1 -mx=3

To analyse only unigrams:

python pygrams.py -mn=1 -mx=1
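To illustrate what an n-gram range covers, the sketch below (using scikit-learn directly, not pyGrams) lists the terms extracted from a single phrase with ngram_range=(1, 3), the equivalent of -mn=1 -mx=3:

    # Illustrative only: the n-grams extracted from one phrase for a 1-3 range.
    from sklearn.feature_extraction.text import CountVectorizer

    vec = CountVectorizer(ngram_range=(1, 3))
    vec.fit(['unmanned aerial vehicle'])
    print(sorted(vec.get_feature_names_out()))
    # ['aerial', 'aerial vehicle', 'unmanned', 'unmanned aerial',
    #  'unmanned aerial vehicle', 'vehicle']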

Maximum document frequency (-mdf)

Identified terms are filtered by the maximum proportion of documents that may contain them; the default is 0.05, representing an upper limit of 5% of documents containing a given term. If a term occurs in more than 5% of documents, it is rejected.

For example, to set the maximum document frequency to 5% (the default), use:

python pygrams.py -mdf 0.05

Using a small (5% or less) maximum document frequency may help remove generic words, or stop words.

Stopwords

There are three configuration files available inside the config directory:

  • stopwords_glob.txt
  • stopwords_n.txt
  • stopwords_uni.txt

The first file (stopwords_glob.txt) contains stopwords that are applied to all n-grams. The second file (stopwords_n.txt) contains stopwords that are applied to n-grams with n > 1 (bigrams and trigrams). The last file (stopwords_uni.txt) contains stopwords that apply only to unigrams. Users can append stopwords to these files to suppress undesirable output terms.
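For illustration only, the three lists could be combined in code as follows; pyGrams reads these files itself, so this sketch merely shows their intended roles:

    # Illustrative only: the roles of the three stopword files.
    from pathlib import Path

    def read_words(path):
        return {w.strip().lower() for w in Path(path).read_text().splitlines() if w.strip()}

    config = Path('config')
    stop_all = read_words(config / 'stopwords_glob.txt')   # applied to all n-grams
    stop_ngram = read_words(config / 'stopwords_n.txt')    # applied to n-grams with n > 1
    stop_uni = read_words(config / 'stopwords_uni.txt')    # applied to unigrams only

    unigram_stopwords = stop_all | stop_uni
    multigram_stopwords = stop_all | stop_ngram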

Pre-filter Terms

Many terms are very rare and of no use when looking for popular terms, yet the total number of terms can easily exceed 1,000,000 and slow pyGrams down. To avoid this, a prefilter is applied as soon as the TFIDF matrix is created, retaining only the highest-scoring terms by TFIDF (as calculated and reported at the end of the main pipeline). The default is to retain the top 100,000 terms; setting the value to 0 disables the prefilter:

python pygrams.py -pt 0

Or changed to a different threshold such as 10,000 terms (using the longer argument name for comparison):

python pygrams.py -prefilter_terms 10000

Note that the prefilter changes TFIDF results: removing rare unigrams and bigrams increases the scores of the bigrams and trigrams that contain them, as results are unbiased to avoid double or triple counting contained n-grams.
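A minimal sketch of such a prefilter, assuming a scipy sparse document-term matrix and a list of scikit-learn feature names (illustrative only, not pyGrams' exact code):

    # Keep only the top N terms by summed TF-IDF score (illustrative only).
    import numpy as np

    def prefilter_terms(tfidf_matrix, feature_names, n_keep=100_000):
        scores = np.asarray(tfidf_matrix.sum(axis=0)).ravel()
        keep = np.argsort(scores)[::-1][:n_keep]        # column indices of the top-N terms
        return tfidf_matrix[:, keep], [feature_names[i] for i in keep]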

Document Filters

Time filters (-df, -dt)

These arguments can be used to restrict documents to a certain timeframe. For example, the following restricts the document cohort to those from 20 February 2000 up to now (the default start date being 1 January 1900).

python pygrams.py -dh publication_date -df=2000/02/20

The following will restrict the document cohort to only those between 1 March 2000 and 31 July 2016.

python pygrams.py -dh publication_date -df=2000/03/01 -dt=2016/07/31

Column features filters (-fh, -fb)

If you want to filter documents by the content of other columns, such as female and british in the example below, you can specify the column names to filter by and the type of filter to apply, using:

  • -fh: the list of column names (default is None)
  • -fb: the type of filter (choices are 'union' (default), where a document is kept if any of the fields is 'yes', or 'intersection', where all fields must be 'yes')

python pygrams.py -fh=['female','british'] -fb='union'

This filter assumes that values are '0'/'1', or 'Yes'/'No'.
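A pandas sketch of the filter semantics described above (illustrative only; the column names are examples and values may be 0/1 or 'Yes'/'No'):

    # Illustrative only: keep rows where any ('union') or all ('intersection')
    # of the chosen binary columns are set.
    import pandas as pd

    def filter_rows(df, columns, how='union'):
        flags = df[columns].apply(lambda col: col.astype(str).str.lower().isin(['1', 'yes']))
        mask = flags.any(axis=1) if how == 'union' else flags.all(axis=1)
        return df[mask]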

Choosing CPC classification (Patent specific) (-cpc)

This subsets the chosen patents dataset to a particular Cooperative Patent Classification (CPC) class, for example Y02. The Y02 classification is for "technologies or applications for mitigation or adaptation against climate change". An example script is:

python pygrams.py -cpc=Y02 -ds=USPTO-random-10000.pkl.bz2

The number of patents in the subset is reported in the console. For example, for python pygrams.py -cpc=Y02 -ds=USPTO-random-10000.pkl.bz2 the number of Y02 patents is 197, so the TFIDF is computed over 197 patents.

Term Filters

Search terms filter (-st)

This subsets the TFIDF term dictionary by only keeping terms related to the given search terms.

python pygrams.py -st pharmacy medicine chemist

Timeseries Calculations

Timeseries (-ts)

An option to choose between popular or emergent terminology outputs. Popular terminology is the default option; emergent terminology can be used by typing:

python pygrams.py -ts

Emergence Index (-ei)

An option to choose the emergence index: quadratic fitting, Porter 2018, or gradients from a state-space model with Kalman filter smoothing. Porter is used by default; quadratic fitting can be selected instead, for example:

python pygrams.py -ts -ei quadratic
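As a rough illustration of quadratic fitting (not the exact pyGrams emergence index), a quadratic can be fitted to a term's yearly counts and its leading coefficient read as an acceleration signal; the counts below are made-up example data:

    # Illustrative only: fit a quadratic to yearly term counts with numpy.
    import numpy as np

    yearly_counts = np.array([3, 4, 7, 12, 20, 33], dtype=float)  # example data
    years = np.arange(len(yearly_counts))

    a, b, c = np.polyfit(years, yearly_counts, deg=2)  # y ~ a*x^2 + b*x + c
    print(f'quadratic coefficient (acceleration of usage): {a:.3f}')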

Exponential (-exp)

An option designed to favour exponential-like emergence, based on a yearly weighting function that increases linearly from zero, for example:

python pygrams.py -ts -exp

Timeseries Forecasts

Various options are available to control how emergence is forecasted.

Predictor Names (-pns)

The forecast method is selected using the -pns argument; the examples below correspond to Linear (2, the default) and Holt-Winters (6).

python pygrams.py -pns=2
python pygrams.py -pns=6

The full list of options is included below; multiple inputs are allowed. (A minimal sketch of a linear forecast follows the list.)

  0. All options
  1. Naive
  2. Linear
  3. Quadratic
  4. Cubic
  5. ARIMA
  6. Holt-Winters
  7. SSM
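The sketch below illustrates the simplest of these, a linear forecast of quarterly term counts with numpy; the counts are made-up example data and this is not pyGrams' predictor implementation:

    # Illustrative only: linear forecast of quarterly counts (predictor 2).
    import numpy as np

    counts = np.array([12, 15, 19, 22, 27, 31], dtype=float)  # counts per quarter (example)
    x = np.arange(len(counts))

    slope, intercept = np.polyfit(x, counts, deg=1)           # fit a straight line
    steps_ahead = 5                                           # matches the -stp default
    future_x = np.arange(len(counts), len(counts) + steps_ahead)
    print(np.round(slope * future_x + intercept, 1))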

Other options

number of terms to analyse (default: 25)

python pygrams.py -nts=25

minimum number of patents per quarter referencing a term (default: 15)

python pygrams.py -mpq=15

number of steps ahead to analyse for (default: 5)

python pygrams.py -stp=5

analyse using test or not (default: False)

python pygrams.py -tst=False

analyse using normalised patents counts or not (default: False)

python pygrams.py -nrm=False

Outputs (-o)

pyGrams outputs a report of top-ranked terms (popular or emergent). Additional command line arguments provide alternative outputs, for example a word cloud.

python pygrams.py -o wordcloud

Timeseries analysis also supports a multiplot presenting up to 30 term timeseries (emergent and declining), output in the outputs/emergence folder:

python pygrams.py -ts -dh 'publication_date' -o multiplot

The output options generate:

  • report: a text file containing the top n terms (default is 250 terms; see -np for more details)
  • wordcloud: a word cloud containing the top n terms (default is 250 terms; see -nd for more details)

Note that all outputs are generated in the outputs subfolder. Below are some example outputs:

Report

The report will output the top n number of terms (default is 250) and their associated TFIDF score. Below is an example for patent data, where only bigrams have been analysed.

Term                     TFIDF Score
1.  fuel cell            2.143778
2.  heat exchanger       1.697166
3.  exhaust gas          1.496812
4.  combustion engine    1.480615
5.  combustion chamber   1.390726
6.  energy storage       1.302651
7.  internal combustion  1.108040
8.  positive electrode   1.100686
9.  carbon dioxide       1.092638
10. control unit         1.069478

Wordcloud ('wordcloud')

A wordcloud, or tag cloud, is a visual representation of text data in which each word's (tag's) importance is shown by its font size and colour. Below is a wordcloud generated from patent data: the greater a term's TFIDF score, the larger the font size of the term.
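As an aside, a minimal word cloud can be produced from term scores with the wordcloud Python package (assumed available; pyGrams generates its wordcloud output internally, so this is only an illustration):

    # Illustrative only: render a word cloud from TFIDF-style term scores.
    from wordcloud import WordCloud

    scores = {'fuel cell': 2.14, 'heat exchanger': 1.70, 'exhaust gas': 1.50}
    wc = WordCloud(width=800, height=400, background_color='white')
    wc.generate_from_frequencies(scores)
    wc.to_file('example_wordcloud.png')   # output path is an example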

Folder structure

  • pygrams.py is the main Python program file in the root folder (pyGrams).
  • README.md is this markdown readme file in the root folder.
  • pipeline.py, in the scripts folder, provides the main program sequence along with pygrams.py.
  • The 'data' folder is where to place the source text data files.
  • The 'outputs' folder contains all the program outputs.
  • The 'config' folder contains the stop word configuration files.
  • The setup file in the root folder, along with the meta folder, contains installation-related files.
  • The test folder contains unit tests.

Help

A help function details the range and usage of these command line arguments:

python pygrams.py -h

The help output is included below. This starts with a summary of arguments:

usage: pygrams.py [-h] [-ds DOC_SOURCE] [-uc USE_CACHE] [-th TEXT_HEADER]
                  [-dh DATE_HEADER] [-fc FILTER_COLUMNS]
                  [-st SEARCH_TERMS [SEARCH_TERMS ...]]
                  [-stthresh SEARCH_TERMS_THRESHOLD] [-df DATE_FROM]
                  [-dt DATE_TO] [-tsdf TIMESERIES_DATE_FROM]
                  [-tsdt TIMESERIES_DATE_TO] [-mn {1,2,3}] [-mx {1,2,3}]
                  [-mdf MAX_DOCUMENT_FREQUENCY] [-ndl] [-pt PREFILTER_TERMS]
                  [-o [{wordcloud,multiplot} [{wordcloud,multiplot} ...]]]
                  [-on OUTPUTS_NAME] [-wt WORDCLOUD_TITLE] [-nltk NLTK_PATH]
                  [-np NUM_NGRAMS_REPORT] [-nd NUM_NGRAMS_WORDCLOUD]
                  [-nf NUM_NGRAMS_FDG] [-cpc CPC_CLASSIFICATION] [-ts]
                  [-pns PREDICTOR_NAMES [PREDICTOR_NAMES ...]] [-nts NTERMS]
                  [-mpq MINIMUM_PER_QUARTER] [-stp STEPS_AHEAD]
                  [-ei {porter,net-growth}] [-sma {kalman,savgol}] [-exp]
                  [-nrm]

extract popular n-grams (words or short phrases) from a corpus of documents

It continues with a detailed description of the arguments:

  -h, --help            show this help message and exit
  -ds DOC_SOURCE, --doc_source DOC_SOURCE
                        the document source to process (default: USPTO-
                        random-1000.pkl.bz2)
  -uc USE_CACHE, --use_cache USE_CACHE
                        Cache file to use, to speed up queries (default: None)
  -th TEXT_HEADER, --text_header TEXT_HEADER
                        the column name for the free text (default: abstract)
  -dh DATE_HEADER, --date_header DATE_HEADER
                        the column name for the date (default: None)
  -fc FILTER_COLUMNS, --filter_columns FILTER_COLUMNS
                        list of columns with binary entries by which to filter
                        the rows (default: None)
  -st SEARCH_TERMS [SEARCH_TERMS ...], --search_terms SEARCH_TERMS [SEARCH_TERMS ...]
                        Search terms filter: search terms to restrict the
                        tfidf dictionary. Outputs will be related to search
                        terms (default: [])
  -stthresh SEARCH_TERMS_THRESHOLD, --search_terms_threshold SEARCH_TERMS_THRESHOLD
                        Provides the threshold of how related you want search
                        terms to be Values between 0 and 1: 0.8 is considered
                        high (default: 0.75)
  -df DATE_FROM, --date_from DATE_FROM
                        The first date for the document cohort in YYYY/MM/DD
                        format (default: None)
  -dt DATE_TO, --date_to DATE_TO
                        The last date for the document cohort in YYYY/MM/DD
                        format (default: None)
  -tsdf TIMESERIES_DATE_FROM, --timeseries-date-from TIMESERIES_DATE_FROM
                        The first date for the document cohort in YYYY/MM/DD
                        format (default: None)
  -tsdt TIMESERIES_DATE_TO, --timeseries-date-to TIMESERIES_DATE_TO
                        The last date for the document cohort in YYYY/MM/DD
                        format (default: None)
  -mn {1,2,3}, --min_ngrams {1,2,3}
                        the minimum ngram value (default: 1)
  -mx {1,2,3}, --max_ngrams {1,2,3}
                        the maximum ngram value (default: 3)
  -mdf MAX_DOCUMENT_FREQUENCY, --max_document_frequency MAX_DOCUMENT_FREQUENCY
                        the maximum document frequency to contribute to TF/IDF
                        (default: 0.05)
  -ndl, --normalize_doc_length
                        normalize tf-idf scores by document length (default:
                        False)
  -pt PREFILTER_TERMS, --prefilter_terms PREFILTER_TERMS
                        Initially remove all but the top N terms by TFIDF
                        score before pickling initial TFIDF (removes 'noise'
                        terms before main processing pipeline starts)
                        (default: 100000)
  -o [{wordcloud,multiplot} [{wordcloud,multiplot} ...]], --output [{wordcloud,multiplot} [{wordcloud,multiplot} ...]]
                        Note that this can be defined multiple times to get
                        more than one output. (default: [])
  -on OUTPUTS_NAME, --outputs_name OUTPUTS_NAME
                        outputs filename (default: out)
  -wt WORDCLOUD_TITLE, --wordcloud_title WORDCLOUD_TITLE
                        wordcloud title (default: Popular Terms)
  -nltk NLTK_PATH, --nltk_path NLTK_PATH
                        custom path for NLTK data (default: None)
  -np NUM_NGRAMS_REPORT, --num_ngrams_report NUM_NGRAMS_REPORT
                        number of ngrams to return for report (default: 250)
  -nd NUM_NGRAMS_WORDCLOUD, --num_ngrams_wordcloud NUM_NGRAMS_WORDCLOUD
                        number of ngrams to return for wordcloud (default:
                        250)
  -cpc CPC_CLASSIFICATION, --cpc_classification CPC_CLASSIFICATION
                        the desired cpc classification (for patents only)
                        (default: None)
  -ts, --timeseries     denote whether timeseries analysis should take place
                        (default: False)
  -pns PREDICTOR_NAMES [PREDICTOR_NAMES ...], --predictor_names PREDICTOR_NAMES [PREDICTOR_NAMES ...]
                        0. All standard predictors, 1. Naive, 2. Linear, 3.
                        Quadratic, 4. Cubic, 5. ARIMA, 6. Holt-Winters, 7.
                        SSM; multiple inputs are allowed. (default: [2])
  -nts NTERMS, --nterms NTERMS
                        number of terms to analyse (default: 25)
  -mpq MINIMUM_PER_QUARTER, --minimum-per-quarter MINIMUM_PER_QUARTER
                        minimum number of patents per quarter referencing a
                        term (default: 15)
  -stp STEPS_AHEAD, --steps_ahead STEPS_AHEAD
                        number of steps ahead to analyse for (default: 5)
  -ei {porter,net-growth}, --emergence-index {porter,net-growth}
                        Emergence calculation to use (default: porter)
  -sma {kalman,savgol}, --smoothing-alg {kalman,savgol}
                        Time series smoothing to use (default: savgol)
  -exp, --exponential_fitting
                        analyse using exponential type fit or not (default:
                        False)
  -nrm, --normalised    analyse using normalised patents counts or not
                        (default: False)

Acknowledgements

Patent data

Patent data was obtained from the United States Patent and Trademark Office (USPTO) through the Bulk Data Storage System (BDSS). In particular we used the Patent Grant Full Text Data/APS (JAN 1976 - PRESENT) dataset, using the data from 2004 onwards in XML 4.* format.

scikit-learn usage

Sections of this code are based on scikit-learn sources.

3rd Party Library Usage

Various third-party libraries are used in this project; they are listed on the dependencies page, and we gratefully acknowledge their contributions.
