
microsoft / RAT-SQL

License: MIT
A relation-aware semantic parsing model from English to SQL

Programming Languages

Python

Projects that are alternatives to or similar to RAT-SQL

Haystack
🔍 Haystack is an open source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
Stars: ✭ 3,409 (+1917.16%)
Mutual labels:  question-answering
Gossiping Chinese Corpus
A Chinese question-answering corpus from the PTT Gossiping board
Stars: ✭ 137 (-18.93%)
Mutual labels:  question-answering
Denspi
Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index (DenSPI)
Stars: ✭ 162 (-4.14%)
Mutual labels:  question-answering
Dynamic Memory Networks Plus Pytorch
Implementation of Dynamic Memory Networks Plus (DMN+) in PyTorch
Stars: ✭ 123 (-27.22%)
Mutual labels:  question-answering
Kbqa Ar Smcnn
Question answering over Freebase (single-relation)
Stars: ✭ 129 (-23.67%)
Mutual labels:  question-answering
Cape Webservices
Entry point for all backend Cape webservices
Stars: ✭ 149 (-11.83%)
Mutual labels:  question-answering
Bi Att Flow
The Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization.
Stars: ✭ 1,472 (+771.01%)
Mutual labels:  question-answering
Improved Dynamic Memory Networks Dmn Plus
Theano implementation of DMN+ (Improved Dynamic Memory Networks) from "Dynamic Memory Networks for Visual and Textual Question Answering" by Xiong, Merity, and Socher at MetaMind, http://arxiv.org/abs/1603.01417
Stars: ✭ 165 (-2.37%)
Mutual labels:  question-answering
Question Answering
TensorFlow implementation of Match-LSTM and Answer Pointer for the popular SQuAD dataset.
Stars: ✭ 133 (-21.3%)
Mutual labels:  question-answering
Chinese Rc Datasets
Collections of Chinese reading comprehension datasets
Stars: ✭ 159 (-5.92%)
Mutual labels:  question-answering
Knowledge Aware Reader
PyTorch implementation of the ACL 2019 paper "Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader"
Stars: ✭ 123 (-27.22%)
Mutual labels:  question-answering
Medquad
Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites
Stars: ✭ 129 (-23.67%)
Mutual labels:  question-answering
Pytorch Question Answering
Important paper implementations for Question Answering using PyTorch
Stars: ✭ 154 (-8.88%)
Mutual labels:  question-answering
Clicr
Machine reading comprehension on clinical case reports
Stars: ✭ 123 (-27.22%)
Mutual labels:  question-answering
Rczoo
A question answering and reading comprehension toolkit
Stars: ✭ 163 (-3.55%)
Mutual labels:  question-answering
Dynamic Coattention Network Plus
Dynamic Coattention Network Plus (DCN+) TensorFlow implementation. Question answering using deep NLP.
Stars: ✭ 117 (-30.77%)
Mutual labels:  question-answering
Question answering models
This repo collects and reproduces models for question answering and machine reading comprehension
Stars: ✭ 139 (-17.75%)
Mutual labels:  question-answering
Hq bot
📲 Bot to help solve HQ trivia
Stars: ✭ 167 (-1.18%)
Mutual labels:  question-answering
Awesomemrc
This repo is our research summary and playground for MRC. More features are coming.
Stars: ✭ 162 (-4.14%)
Mutual labels:  question-answering
Nspm
🤖 Neural SPARQL Machines for Knowledge Graph Question Answering.
Stars: ✭ 156 (-7.69%)
Mutual labels:  question-answering

RAT-SQL

This repository contains code for the ACL 2020 paper "RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers".

If you use RAT-SQL in your work, please cite it as follows:

@inproceedings{rat-sql,
    title = "{RAT-SQL}: Relation-Aware Schema Encoding and Linking for Text-to-{SQL} Parsers",
    author = "Wang, Bailin and Shin, Richard and Liu, Xiaodong and Polozov, Oleksandr and Richardson, Matthew",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    pages = "7567--7578"
}

Changelog

2020-08-14:

  • The Docker image now inherits from a CUDA-enabled base image.
  • Clarified memory and dataset requirements on the image.
  • Fixed the issue where token IDs were not converted to word-piece IDs for BERT value linking.

Usage

Step 1: Download third-party datasets & dependencies

Download the datasets: Spider and WikiSQL. For Spider, make sure to download the 08/03/2020 version or newer. Unpack the datasets somewhere outside this project to create the following directory structure (a quick sanity check of this layout is sketched after the tree):

/path/to/data
├── spider
│   ├── database
│   │   └── ...
│   ├── dev.json
│   ├── dev_gold.sql
│   ├── tables.json
│   ├── train_gold.sql
│   ├── train_others.json
│   └── train_spider.json
└── wikisql
    ├── dev.db
    ├── dev.jsonl
    ├── dev.tables.jsonl
    ├── test.db
    ├── test.jsonl
    ├── test.tables.jsonl
    ├── train.db
    ├── train.jsonl
    └── train.tables.jsonl
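As mentioned above, here is a minimal sketch for sanity-checking this layout before moving on. It is not part of the repository, and DATA_ROOT is whatever unpack location you chose:

import pathlib

DATA_ROOT = pathlib.Path("/path/to/data")  # adjust to your unpack location

# Files (and one directory) that the later preprocessing steps expect to find.
expected = [
    "spider/database",
    "spider/dev.json", "spider/dev_gold.sql", "spider/tables.json",
    "spider/train_gold.sql", "spider/train_others.json", "spider/train_spider.json",
    "wikisql/dev.db", "wikisql/dev.jsonl", "wikisql/dev.tables.jsonl",
    "wikisql/test.db", "wikisql/test.jsonl", "wikisql/test.tables.jsonl",
    "wikisql/train.db", "wikisql/train.jsonl", "wikisql/train.tables.jsonl",
]

missing = [p for p in expected if not (DATA_ROOT / p).exists()]
print("layout OK" if not missing else f"missing: {missing}")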

To work with the WikiSQL dataset, clone its evaluation scripts into this project:

mkdir -p third_party
git clone https://github.com/salesforce/WikiSQL third_party/wikisql
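With the data unpacked, you can also sanity-check one WikiSQL example before preprocessing. A minimal sketch using only the standard library (adjust the path to your data root from above):

import json

# Print the top-level fields of the first WikiSQL training example.
with open("/path/to/data/wikisql/train.jsonl") as f:
    example = json.loads(next(f))
print(sorted(example))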

Step 2: Build and run the Docker image

We have provided a Dockerfile that sets up the entire environment for you. It assumes that you mount the datasets downloaded in Step 1 as a volume at /mnt/data in the running container. Thus, the environment setup for RAT-SQL is:

docker build -t ratsql .
docker run --rm -m4g -v /path/to/data:/mnt/data -it ratsql

Note that the image requires at least 4 GB of RAM to run preprocessing. By default, Docker Desktop for Mac and Docker Desktop for Windows run containers with 2 GB of RAM. The -m4g switch overrides it; alternatively, you can increase the default limit in the Docker Desktop settings.
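The BERT configuration in Step 3 additionally needs a GPU, so it is worth confirming that CUDA is visible inside the container before starting a long run. A one-liner, assuming the image ships a CUDA-enabled PyTorch build (the base image is CUDA-enabled per the changelog):

python -c "import torch; print(torch.cuda.is_available())"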

If you prefer to set up and run the codebase without Docker, follow the steps in the Dockerfile one by one. Note that this repository requires Python 3.7 or higher and a JVM to run Stanford CoreNLP.
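For a quick pre-flight check of both requirements, here is a minimal sketch (standard library only, not part of the repository):

import shutil
import sys

# Python 3.7+ is required by this repository.
assert sys.version_info >= (3, 7), "Python 3.7 or higher is required"
# Stanford CoreNLP runs on the JVM, so `java` must be on PATH.
assert shutil.which("java") is not None, "no `java` executable found on PATH"
print("environment looks OK")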

Step 3: Run the experiments

Every experiment has its own config file in experiments. The pipeline for working with any model version or dataset is:

python run.py preprocess experiment_config_file  # Step 3a: preprocess the data
python run.py train experiment_config_file       # Step 3b: train a model
python run.py eval experiment_config_file        # Step 3c: evaluate the results

Use the following experiment config files to reproduce our results (a concrete invocation follows the list):

  • Spider, GloVe version: experiments/spider-glove-run.jsonnet
  • Spider, BERT version (requires a GPU with at least 16 GB of memory): experiments/spider-bert-run.jsonnet
  • WikiSQL, GloVe version: experiments/wikisql-glove-run.jsonnet
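For example, the full pipeline for the Spider GloVe configuration is:

python run.py preprocess experiments/spider-glove-run.jsonnet
python run.py train experiments/spider-glove-run.jsonnet
python run.py eval experiments/spider-glove-run.jsonnet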

The exact model accuracy may vary by ±2% depending on the random seed; see the paper for details.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].