
Pinafore / Qb

Licence: MIT
QANTA Quiz Bowl AI

Programming Languages

python

Projects that are alternatives to or similar to Qb

Jupyterlab Prodigy
🧬 A JupyterLab extension for annotating data with Prodigy
Stars: ✭ 97 (-36.6%)
Mutual labels:  artificial-intelligence, natural-language-processing
Nonautoreggenprogress
Tracking the progress in non-autoregressive generation (translation, transcription, etc.)
Stars: ✭ 118 (-22.88%)
Mutual labels:  artificial-intelligence, natural-language-processing
Neuronblocks
NLP DNN Toolkit - Building Your NLP DNN Models Like Playing Lego
Stars: ✭ 1,356 (+786.27%)
Mutual labels:  artificial-intelligence, natural-language-processing
Simplednn
SimpleDNN is a machine learning lightweight open-source library written in Kotlin designed to support relevant neural network architectures in natural language processing tasks
Stars: ✭ 81 (-47.06%)
Mutual labels:  artificial-intelligence, natural-language-processing
Ncrfpp
NCRF++, a Neural Sequence Labeling Toolkit. Easy use to any sequence labeling tasks (e.g. NER, POS, Segmentation). It includes character LSTM/CNN, word LSTM/CNN and softmax/CRF components.
Stars: ✭ 1,767 (+1054.9%)
Mutual labels:  artificial-intelligence, natural-language-processing
Ml
A high-level machine learning and deep learning library for the PHP language.
Stars: ✭ 1,270 (+730.07%)
Mutual labels:  artificial-intelligence, natural-language-processing
Xlnet extension tf
XLNet Extension in TensorFlow
Stars: ✭ 109 (-28.76%)
Mutual labels:  artificial-intelligence, natural-language-processing
Comet
A Neural Framework for MT Evaluation
Stars: ✭ 58 (-62.09%)
Mutual labels:  artificial-intelligence, natural-language-processing
Cocoaai
🤖 The Cocoa Artificial Intelligence Lab
Stars: ✭ 134 (-12.42%)
Mutual labels:  artificial-intelligence, natural-language-processing
Zamia Ai
Free and open source A.I. system based on Python, TensorFlow and Prolog.
Stars: ✭ 133 (-13.07%)
Mutual labels:  artificial-intelligence, natural-language-processing
Get started with deep learning for text with allennlp
Getting started with AllenNLP and PyTorch by training a tweet classifier
Stars: ✭ 69 (-54.9%)
Mutual labels:  artificial-intelligence, natural-language-processing
Awesome Nlp Resources
This repository contains landmark research papers in Natural Language Processing that came out in this century.
Stars: ✭ 145 (-5.23%)
Mutual labels:  artificial-intelligence, natural-language-processing
Hackerrank
This is the Repository where you can find all the solution of the Problems which you solve on competitive platforms mainly HackerRank and HackerEarth
Stars: ✭ 68 (-55.56%)
Mutual labels:  artificial-intelligence, natural-language-processing
Virtual Assistant
A linux based Virtual assistant on Artificial Intelligence in C
Stars: ✭ 88 (-42.48%)
Mutual labels:  artificial-intelligence, natural-language-processing
Botsharp
The Open Source AI Chatbot Platform Builder in 100% C# Running in .NET Core with Machine Learning algorithm.
Stars: ✭ 1,103 (+620.92%)
Mutual labels:  artificial-intelligence, natural-language-processing
Ios ml
List of Machine Learning, AI, NLP solutions for iOS. The most recent version of this article can be found on my blog.
Stars: ✭ 1,409 (+820.92%)
Mutual labels:  artificial-intelligence, natural-language-processing
Coursera Natural Language Processing Specialization
Programming assignments from all courses in the Coursera Natural Language Processing Specialization offered by deeplearning.ai.
Stars: ✭ 39 (-74.51%)
Mutual labels:  artificial-intelligence, natural-language-processing
Thot
Thot toolkit for statistical machine translation
Stars: ✭ 53 (-65.36%)
Mutual labels:  artificial-intelligence, natural-language-processing
Awesome Ai Services
An overview of the AI-as-a-service landscape
Stars: ✭ 133 (-13.07%)
Mutual labels:  artificial-intelligence, natural-language-processing
Nlpaug
Data augmentation for NLP
Stars: ✭ 2,761 (+1704.58%)
Mutual labels:  artificial-intelligence, natural-language-processing

QANTA

Downloading Data

Whether you would like to use our system or only our dataset, the easiest way to do so is to use our dataset.py script. It is a standalone script whose only dependencies are Python 3.6 and the package click, which can be installed via pip install click.

The following commands can be used to download our dataset, or the datasets we use in either the system or the paper plots. Data will be downloaded to data/external/datasets by default, but this can be changed with the --local-qanta-prefix option.

  • ./dataset.py download: Download only the qanta dataset
  • ./dataset.py download wikidata: Download our preprocessed wikidata.org instance of attributes
  • ./dataset.py download plotting: Download the squad, simple questions, jeopardy, and triviaqa datasets we compare against in our paper plots and tables
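
For example, a first run might look like the following; the option placement shown here is an assumption, so check ./dataset.py --help for the exact interface:

# Install the script's single dependency, then download the qanta dataset
# to a custom location (the prefix path below is illustrative)
pip install click
./dataset.py download --local-qanta-prefix data/my-datasets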

File Description:

  • qanta.unmapped.2018.04.18.json: All questions in our dataset, without mapped Wikipedia answers. Sourced from protobowl and quizdb. Light preprocessing has been applied to remove quiz bowl specific syntax such as instructions to moderators
  • qanta.processed.2018.04.18.json: The prior dataset with added fields for the first sentence and sentence tokenizations of the question paragraph, for convenience.
  • qanta.mapped.2018.04.18.json: The processed dataset with Wikipedia pages matched to the answer where possible. This includes all questions, even those without matched pages.
  • qanta.2018.04.18.sqlite3: Equivalent to qanta.mapped.2018.04.18.json but in sqlite3 format
  • qanta.train.2018.04.18.json: Training data which is the mapped dataset filtered down to only questions with non-null page matches
  • qanta.dev.2018.04.18.json: Dev data which is the mapped dataset filtered down to only questions with non-null page matches
  • qanta.test.2018.04.18.json: Test data which is the mapped dataset filtered down to only questions with non-null page matches
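
Once downloaded, the JSON files can be inspected with the standard library alone. The snippet below is a minimal sketch; the top-level questions key and the page field are assumptions based on the descriptions above, so adjust if the actual structure differs.

import json

# Load the training split from the default download location
with open("data/external/datasets/qanta.train.2018.04.18.json") as f:
    data = json.load(f)

# Assumed layout: a top-level "questions" list with per-question fields
questions = data.get("questions", data)
print(len(questions), "training questions")
print(questions[0].get("page"))  # matched Wikipedia page for the first question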

Dependencies

The recommended way to run our system is to use the Anaconda python distribution. The environment.yaml can be used to create a conda environment with all the necessary software versions installed.
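
For example (the environment name below is an assumption; use whatever name environment.yaml actually declares):

# Create the conda environment from the provided file, then activate it
conda env create -f environment.yaml
conda activate qanta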

The qanta system has the following dependencies; depending on your objective, however, not all are necessary. The python packages are generally required so that imports resolve, Apache Spark is required for many data preprocessing steps, Vowpal Wabbit is only needed for training a linear model, Spacy is required for preprocessing, Elastic Search is required for the IR-based models, and lz4 and the AWS CLI are necessary for downloading data not handled by the dataset.py script.

  • Python 3.6 (Anaconda distribution)
  • PyTorch 0.3.X
  • Apache Spark 2.2.0 with Scala/JVM
  • Vowpal Wabbit 8.4
  • Spacy 2.0
  • Elastic Search 5.6.X
  • lz4
  • AWS CLI
  • All python packages listed in environment.yaml

Installing Apache Spark

packer/bin/install-spark.sh

You can test whether Spark is installed properly by running something like the following:

> python
Python 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017, 13:09:58)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from qanta.spark import create_spark_context
>>> sc = create_spark_context()
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/07/25 10:04:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/07/25 10:04:01 WARN Utils: Your hostname, hongwu resolves to a loopback address: 127.0.0.2; using 192.168.2.2 instead (on interface eth0)
17/07/25 10:04:01 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address

Installing Elastic Search 5.6

packer/bin/install-elasticsearch.sh

Install version 5.6.X; do not use 6.X. Also be sure that the bin/ directory within the extracted files is on your $PATH, as it contains the necessary elasticsearch binary.
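
For example, assuming a hypothetical extraction location:

# Adjust the path to wherever you extracted Elastic Search 5.6.x
export PATH="$HOME/elasticsearch-5.6.16/bin:$PATH"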

Installing Python packages

Either use environment.yaml or:

pip install -r packer/requirements.txt

NLTK Models

# Download nltk data
$ python3 setup.py download

Qanta on Path

In addition to these steps you need to either run python setup.py develop or include the qanta directory in your PYTHONPATH environment variable. We intend to fix these path issues in the future by cleaning up absolute/relative paths.
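
A sketch of both options, assuming a hypothetical checkout location:

# Option 1: install qanta in development mode
python setup.py develop

# Option 2: add the repository checkout to PYTHONPATH (path is illustrative)
export PYTHONPATH="$HOME/qb:$PYTHONPATH"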

Configuration

QANTA configuration is done through a combination of environment variables and the qanta-defaults.yaml/qanta.yaml files. QANTA will read a qanta.yaml first if it exists, otherwise it will fall back to reading qanta-defaults.yaml. This is meant to allow for custom configuration of qanta.yaml after copying it via cp qanta-defaults.yaml qanta.yaml.

The configuration of most interest is how to enable or disable specific guesser implementations. In the guesser config, keys such as qanta.guesser.dan.DanGuesser correspond to the fully qualified paths of each guesser. Each of these keys contains an array of configurations (signified in yaml by the -). Our code inspects all of these configurations looking for those that have enabled: true, and only runs those guessers. By default we have enabled: false for all models. If you simply want to perform a sanity check we recommend enabling qanta.guesser.tfidf.TfidfGuesser. If you are looking for our best model and configuration you should enable qanta.guesser.rnn.RnnGuesser.
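
A minimal sketch of what such a guesser block might look like in qanta.yaml; apart from the class paths and the enabled flag described above, the surrounding structure is an assumption, so defer to qanta-defaults.yaml for the real layout:

# Hypothetical sketch only -- copy the actual structure from qanta-defaults.yaml
guessers:
  qanta.guesser.tfidf.TfidfGuesser:
    - enabled: true
  qanta.guesser.rnn.RnnGuesser:
    - enabled: false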

Running QANTA

Running qanta is managed primarily by two methods: ./cli.py and Luigi. The former is used to run specific commands such as starting/stopping elastic search, but in general luigi is the primary method for running our system.

Luigi Pipelines

Luigi is a pure python make-like framework for running data pipelines. Below we give sample commands for running different parts of our pipeline. In general, you should either append --local-scheduler to all commands or learn about using the Luigi Central Scheduler.

For these common tasks you can use the command luigi --local-scheduler followed by one of the options below (full example invocations are shown after the list):

  • --module qanta.pipeline.preprocess DownloadData: This downloads any necessary data and preprocesses it. This will download a copy of our preprocessed Wikipedia stored in AWS S3 and turn it into the format used by our code. This step requires the AWS CLI, lz4, Apache Spark, and may require a decent amount of RAM.
  • --module qanta.pipeline.guesser AllGuesserReports: Train all enabled guessers, generate guesses for them, and produce a report of their performance into output/guesser.
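
For example, the full invocations for the two tasks above are:

luigi --local-scheduler --module qanta.pipeline.preprocess DownloadData
luigi --local-scheduler --module qanta.pipeline.guesser AllGuesserReports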

Certain tasks might require Spacy models (e.g. en_core_web_lg) or nltk data (e.g. wordnet) to be downloaded. See the FAQ section for more information.

Qanta CLI

You can start/stop elastic search with

  • ./cli.py elasticsearch start
  • ./cli.py elasticsearch stop

AWS S3 Checkpoint/Restore

To provide an easy way to version, checkpoint, and restore runs of qanta, we provide the script aws_checkpoint.py. We assume that you set the environment variable QB_AWS_S3_BUCKET to the bucket you want to checkpoint to and restore from. The script assumes full access to all the contents of the bucket, so we suggest creating a dedicated bucket.
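
For example (the bucket name below is a placeholder for one you own):

# aws_checkpoint.py reads this variable to decide where to checkpoint/restore
export QB_AWS_S3_BUCKET=my-qanta-checkpoints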

Information on our data sources

Wikipedia Dumps

As part of our ingestion pipeline we access raw wikipedia dumps. The current code is based on the English wikipedia dumps created on 2017/04/01, available at https://dumps.wikimedia.org/enwiki/20170401/

Of these we use the following (you may need to use more recent dumps)

  • Wikipedia page text: This is used to get the text, title, and id of wikipedia pages
  • Wikipedia titles: This is used for more convenient access to wikipedia page titles
  • Wikipedia redirects: DB dump for wikipedia redirects, used for resolving different ways of referencing the same wikipedia entity
  • Wikipedia page to ids: Contains a mapping of wikipedia pages and ids, necessary for making the redirect table useful

To process wikipedia we use https://github.com/attardi/wikiextractor with the following command:

$ WikiExtractor.py --processes 15 -o parsed-wiki --json enwiki-20170401-pages-articles-multistream.xml.bz2

Do not use the flag to filter disambiguation pages. It uses a simple string regex to check the title and article contents, which introduces both false positives and false negatives. We instead handle filtering these out by using the wikipedia categories dump.

Afterwards we use the following command to tar and compress it with lz4; the resulting archive is then uploaded to S3:

tar cvf - parsed-wiki | lz4 - parsed-wiki.tar.lz4

Wikipedia Redirect Mapping Creation

The output of this process is stored in s3://pinafore-us-west-2/public/wiki_redirects.csv

All the wikipedia database dumps are provided as MySQL sql files. This guide has a good explanation of how to install MySQL, which is necessary to use the SQL dumps. For this task we will need the redirect and page tables:

To install, prepare MySQL, and read in the Wikipedia SQL dumps execute the following:

  1. Install MySQL sudo apt-get install mysql-server and sudo mysql_secure_installation
  2. Log in with something like mysql --user=root --password=something
  3. Create a database and use it with create database wikipedia; and use wikipedia;
  4. source enwiki-20170401-redirect.sql; (in MySQL session)
  5. source enwiki-20170401-page.sql; (in MySQL session)
  6. This will take quite a long time, so wait it out...
  7. Finally, run the query that fetches the redirect mapping and writes it to a CSV by executing bin/redirect.sql with source bin/redirect.sql (a rough sketch of such a query is shown after this list). The file will be located at /var/lib/mysql/redirect.csv, which requires sudo access to copy
  8. The result of that query is a CSV file containing a source page id, source page title, and target page title. This can be interpreted as the source page redirecting to the target page. We filter to namespace=0 to keep only redirects/pages that are main pages and discard things like list/category pages
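
The following is a rough sketch of what such a query looks like, assuming the standard MediaWiki page and redirect table schemas; the actual bin/redirect.sql may differ in its details:

-- Join each redirect's source page to its target title, keep only
-- main-namespace (0) entries, and write the result out as CSV
SELECT p.page_id, p.page_title, r.rd_title
FROM redirect r
JOIN page p ON p.page_id = r.rd_from
WHERE p.page_namespace = 0 AND r.rd_namespace = 0
INTO OUTFILE '/var/lib/mysql/redirect.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';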

Wikipedia Category Links Creation

The purpose of this step is to use wikipedia category links to filter out disambiguation pages. Every wikipedia page has a list of categories it belongs to. We filter out any page that has a category whose name includes the string disambiguation. The output of this process is a json file containing a list of page_ids that correspond to known disambiguation pages. These are then used downstream to filter down to only non-disambiguation wikipedia pages.

The output of this process is stored in s3://pinafore-us-west-2/public/disambiguation_pages.json with the csv also saved at s3://pinafore-us-west-2/public/categorylinks.csv

The process for this is similar to the redirects, except that you should instead source a file named similar to enwiki-20170401-categorylinks.sql, run the script bin/categories.sql, and copy categorylinks.csv. Afterwards run ./cli.py categories disambiguate categorylinks.csv data/external/wikipedia/disambiguation_pages.json (a rough sketch of what this step does is shown below). This file is automatically downloaded by the pipeline code, like the redirects file, so unless you would like to change this or inspect the results you shouldn't need to worry about it.
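
A rough Python sketch of the disambiguation filtering this command performs; the column layout of categorylinks.csv and the exact output format are assumptions, so treat this as illustrative only:

import csv
import json

# Collect page ids whose category name contains "disambiguation"
# (assumed CSV layout: page id in the first column, category name in the second)
disambiguation_ids = set()
with open("categorylinks.csv", newline="") as f:
    for row in csv.reader(f):
        page_id, category = row[0], row[1]
        if "disambiguation" in category.lower():
            disambiguation_ids.add(int(page_id))

# Write the page id list consumed by the downstream filtering step
with open("data/external/wikipedia/disambiguation_pages.json", "w") as f:
    json.dump(sorted(disambiguation_ids), f)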

Debugging FAQ and Solutions

pyspark uses the wrong version of python

Set PYSPARK_PYTHON to be python3

ImportError: No module named 'pyspark'

export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/build:$PYTHONPATH

ValueError: unknown locale: UTF-8

export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8

TypeError: namedtuple() missing 3 required keyword-only arguments: 'verbose', 'rename', and 'module'

Python 3.6 needs Spark 2.1.1

OSError: [E050] Can't find model 'en_core_web_lg'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.

To download the required Spacy model, run:

python -m spacy download en_core_web_lg

Missing "wordnet" data for nltk

In a Python interactive shell, run the following commands to download wordnet data:

import nltk
nltk.download('wordnet')

Qanta ID Numbering

  • Default dataset starts near 0
  • PACE Adversarial Writing Event May 2018 starts at 1,000,000
  • December 15 2018 event starts at 2,000,000
  • Dataset for HS students of ACF 2018 Regionals starts at 3,000,000