
tmtoolkit: Text mining and topic modeling toolkit
=================================================

|pypi| |pypi_downloads| |rtd| |travis| |coverage| |zenodo|

tmtoolkit is a set of tools for text mining and topic modeling with Python, developed especially for use in the social sciences. It aims for easy installation, extensive documentation and a clear programming interface, while offering good performance on large datasets by means of vectorized operations (via NumPy) and parallel computation (using Python's multiprocessing module). It builds on several well-known and well-tested packages such as spaCy and SciPy.

At the moment, tmtoolkit focuses on methods based on the bag-of-words model, but word vectors (word embeddings) can also be generated.
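To make the bag-of-words idea concrete, here is a minimal, self-contained sketch in plain Python (an illustration of the concept, not tmtoolkit's actual API) that builds a document-term matrix of raw counts from already tokenized documents:

```python
from collections import Counter

# Toy tokenized corpus; in tmtoolkit, tokenization itself is handled by spaCy.
docs = {
    "d1": ["the", "cat", "sat"],
    "d2": ["the", "cat", "ate", "the", "fish"],
}

# Vocabulary: sorted set of all distinct tokens across documents.
vocab = sorted({tok for tokens in docs.values() for tok in tokens})

# Document-term matrix: one row per document, one raw count per vocabulary term.
dtm = [[Counter(docs[label])[term] for term in vocab] for label in sorted(docs)]

print(vocab)  # ['ate', 'cat', 'fish', 'sat', 'the']
print(dtm)    # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```

In practice such matrices are stored sparsely (tmtoolkit uses SciPy sparse matrices, as noted below); the dense nested lists here are only for readability.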

The documentation for tmtoolkit is available online, as is the GitHub code repository.


Text preprocessing
^^^^^^^^^^^^^^^^^^

tmtoolkit implements or provides convenient wrappers for several preprocessing methods, including:

  • tokenization and part-of-speech (POS) tagging (via spaCy)
  • lemmatization and term normalization
  • extensive pattern matching capabilities (exact matching, regular expressions or "glob" patterns), used in many methods of the package, e.g. for filtering on the token, document or document label level, or for keywords-in-context (KWIC)
  • adding and managing custom token metadata
  • accessing word vectors (word embeddings)
  • generating n-grams
  • generating sparse document-term matrices
  • expanding compound words and "gluing" specified subsequent tokens, e.g. ["White", "House"] becomes ["White_House"]

All text preprocessing methods can operate in parallel to speed up computations with large datasets.
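To illustrate what KWIC with "glob" patterns and token "gluing" do conceptually, here is a hedged, stdlib-only sketch (not tmtoolkit's interface; function names are made up for this example):

```python
from fnmatch import fnmatchcase

tokens = ["the", "White", "House", "said", "the", "sky", "was", "white"]

def kwic(tokens, pattern, context=2):
    """Return (left context, keyword, right context) for each token
    matching the case-sensitive "glob" pattern."""
    return [(tokens[max(0, i - context):i], tok, tokens[i + 1:i + 1 + context])
            for i, tok in enumerate(tokens) if fnmatchcase(tok, pattern)]

def glue_tokens(tokens, seq, glue="_"):
    """Replace each occurrence of the token subsequence `seq`
    with a single glued token."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i:i + len(seq)] == seq:
            out.append(glue.join(seq))
            i += len(seq)
        else:
            out.append(tokens[i])
            i += 1
    return out

print(kwic(tokens, "White*"))
# [(['the'], 'White', ['House', 'said'])]
print(glue_tokens(tokens, ["White", "House"]))
# ['the', 'White_House', 'said', 'the', 'sky', 'was', 'white']
```

Note that `fnmatchcase` matches case-sensitively, so the lowercase "white" at the end is not reported; regular-expression matching would follow the same scheme with `re.fullmatch`.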

Topic modeling
^^^^^^^^^^^^^^

  • model computation in parallel for different corpora and/or parameter sets

  • support for the lda, scikit-learn and gensim topic modeling backends

  • evaluation of topic models (e.g. in order to find an optimal number of topics for a given dataset) using several implemented metrics:

    • model coherence (Mimno et al. 2011) or coherence metrics implemented in Gensim
    • KL divergence method (Arun et al. 2010)
    • probability of held-out documents (Wallach et al. 2009)
    • pair-wise cosine distance method (Cao Juan et al. 2009)
    • harmonic mean method (Griffiths, Steyvers 2004)
    • the log-likelihood or perplexity methods natively implemented in lda, scikit-learn or gensim
  • plotting of evaluation results

  • common statistics for topic models such as word saliency and distinctiveness (Chuang et al. 2012) and topic-word relevance (Sievert and Shirley 2014)

  • finding / filtering topics with pattern matching

  • export of estimated document-topic and topic-word distributions to Excel

  • visualization of topic-word distributions and document-topic distributions as word clouds or heatmaps

  • model coherence (Mimno et al. 2011) for individual topics

  • integration of pyLDAvis to visualize results
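To give an idea of how one of these metrics works, the pair-wise cosine distance method (Cao Juan et al. 2009) scores a model by how similar its topic-word distributions are to one another: lower average similarity means better-separated, less redundant topics. The following is a rough stdlib-only sketch with made-up toy distributions, not tmtoolkit's actual implementation:

```python
import math

# Made-up topic-word distributions for a 3-topic model over a 4-word
# vocabulary (each row sums to 1); a real fitted model would provide these.
topics = [
    [0.60, 0.20, 0.10, 0.10],
    [0.10, 0.10, 0.20, 0.60],
    [0.55, 0.25, 0.10, 0.10],
]

def cosine_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def mean_pairwise_cosine(topics):
    """Average cosine similarity over all topic pairs; when comparing
    candidate models, prefer the one with the lower value."""
    pairs = [(i, j) for i in range(len(topics)) for j in range(i + 1, len(topics))]
    return sum(cosine_sim(topics[i], topics[j]) for i, j in pairs) / len(pairs)

# Topics 0 and 2 are nearly identical, so they dominate the redundancy score:
assert cosine_sim(topics[0], topics[2]) > cosine_sim(topics[0], topics[1])
```

In model selection, this score would be computed for models fitted with different numbers of topics and the results compared, e.g. with the plotting utilities mentioned above.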

Other features
^^^^^^^^^^^^^^

  • loading and cleaning of raw text from text files, tabular files (CSV or Excel), ZIP files or folders
  • common statistics and transformations for document-term matrices, such as word cooccurrence and tf-idf
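As a sketch of the tf-idf transformation mentioned above, here is one common smoothed variant in plain Python (tmtoolkit's exact weighting scheme may differ):

```python
import math

# Toy document-term matrix of raw counts (rows: documents, columns: terms).
dtm = [
    [3, 0, 1],
    [0, 2, 1],
]

def tfidf(dtm):
    """Weight raw counts by tf * (log(N / df) + 1), where df is the number
    of documents containing a term and N the total number of documents."""
    n_docs, n_terms = len(dtm), len(dtm[0])
    df = [sum(1 for row in dtm if row[t] > 0) for t in range(n_terms)]
    return [[row[t] * (math.log(n_docs / df[t]) + 1) for t in range(n_terms)]
            for row in dtm]

weighted = tfidf(dtm)
# The last term appears in every document (df = N), so its weight stays at
# the raw count, while terms occurring in fewer documents are boosted.
```

In tmtoolkit these transformations operate on sparse matrices; the dense nested lists here are only for readability.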


Limits
^^^^^^

  • only languages for which spaCy language models are available are supported
  • all data must reside in memory, i.e. there is no streaming of large data from the hard disk (which, for example, Gensim supports)

Requirements and installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For requirements and installation procedures, please have a look at the installation section in the documentation.


License
^^^^^^^

Code licensed under the Apache License 2.0. See the LICENSE file.

.. |pypi| image:: :target: :alt: PyPI Version

.. |pypi_downloads| image:: :target: :alt: Downloads from PyPI

.. |travis| image:: :target: :alt: Travis CI Build Status

.. |coverage| image:: :target: :alt: Coverage status

.. |rtd| image:: :target: :alt: Documentation Status

.. |zenodo| image:: :target: :alt: Citable Zenodo DOI
