
kermitt2 / article-dataset-builder

Licence: Apache-2.0 license
Open Access PDF harvester, metadata aggregator and full-text ingester

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to article-dataset-builder

Walrus
🔥 Fast, Secure and Reliable System Backup, Set up in Minutes.
Stars: ✭ 197 (+1415.38%)
Mutual labels:  s3-storage
concourse-ci-formula
All-in-one Concourse VM with S3-compatible storage and Vault secret manager
Stars: ✭ 26 (+100%)
Mutual labels:  s3-storage
wakemeops
A Debian repository for portable applications
Stars: ✭ 54 (+315.38%)
Mutual labels:  s3-storage
image-uploader
JavaScript Image Uploader Library for use with Amazon S3
Stars: ✭ 19 (+46.15%)
Mutual labels:  s3-storage
PDFConverter
Best PDF Converter! PDF to any format, pdf2word/excel/xml/html/txt...
Stars: ✭ 94 (+623.08%)
Mutual labels:  pdf2xml
s3cli
Command line tool for S3
Stars: ✭ 21 (+61.54%)
Mutual labels:  s3-storage
Seaweedfs
SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.
Stars: ✭ 13,380 (+102823.08%)
Mutual labels:  s3-storage
ionic-image-upload
Ionic Plugin for Uploading Images to Amazon S3
Stars: ✭ 26 (+100%)
Mutual labels:  s3-storage
unpywall
Interfacing the Unpaywall Database with Python
Stars: ✭ 22 (+69.23%)
Mutual labels:  unpaywall
awesome-storage
A curated list of storage open source tools. Backups, redundancy, sharing, distribution, encryption, etc.
Stars: ✭ 324 (+2392.31%)
Mutual labels:  s3-storage
frisbee
Collect email addresses by crawling search engine results.
Stars: ✭ 29 (+123.08%)
Mutual labels:  harvester
GEANet-BioMed-Event-Extraction
Code for the paper Biomedical Event Extraction with Hierarchical Knowledge Graphs
Stars: ✭ 52 (+300%)
Mutual labels:  cord-19
react-native-appsync-s3
React Native app for image uploads to S3 and storing their records in Amazon DynamoDB using AWS Amplify and AppSync SDK
Stars: ✭ 18 (+38.46%)
Mutual labels:  s3-storage
osint
Docker image for osint
Stars: ✭ 92 (+607.69%)
Mutual labels:  harvester
thoth
Metadata management and dissemination system for Open Access books
Stars: ✭ 25 (+92.31%)
Mutual labels:  openaccess
Cloudexplorer
Cloud Explorer
Stars: ✭ 170 (+1207.69%)
Mutual labels:  s3-storage
minio
Minio Object Storage in Kubernetes, used by Deis Workflow.
Stars: ✭ 51 (+292.31%)
Mutual labels:  s3-storage
oai-harvest
Python package for harvesting records from OAI-PMH provider(s).
Stars: ✭ 57 (+338.46%)
Mutual labels:  harvester
site
The OpenScienceMOOC website
Stars: ✭ 20 (+53.85%)
Mutual labels:  openaccess
chiadog
A watch dog providing a peace in mind that your Chia farm is running smoothly 24/7.
Stars: ✭ 466 (+3484.62%)
Mutual labels:  harvester

Open Access PDF harvester and ingester

Python utility for efficiently harvesting a large Open Access collection of PDF (fault tolerant, resumable, with parallel download and ingestion) and for transforming them into structured XML suitable for text mining and information retrieval applications.

Input currently supported:

  • list of DOI in a file, one DOI per line
  • metadata csv input file from the CORD-19 dataset; see the CORD-19 result section below for the capacity of the tool to get more full texts and better data quality than the official dataset
  • list of PMID in a file, one PMID per line
  • list of PMC ID in a file, one PMC ID per line

The harvesting follows fair use (which means that it also covers non-re-sharable articles) and exploits various Open Access sources. It should thus result in a close-to-optimal discovery of full texts. For instance, from the same CORD-19 metadata file, the tool can harvest 35.5% more usable full texts than are available in the CORD-19 dataset (140,322 articles with at least one usable full text versus 103,587 articles with at least one usable full text for the CORD-19 dataset version 2020-09-11), see statistics here.

To do:

  • list of ISTEX identifiers or ark, one identifier per line
  • Apache Airflow for the task workflow
  • Consolidate/resolve bibliographical references obtained via Pub2TEI

What

  • Perform some metadata enrichment/aggregation via biblio-glutton & CrossRef web API and output consolidated metadata in a JSON file

  • Harvest PDF from the specification of the article set (list of strong identifiers or basic metadata provided in a csv file), typically PDF available in Open Access via the Unpaywall API (and some heuristics)

  • Perform Grobid full processing of PDF (including bibliographical reference consolidation and OA access resolution of the cited references), converting them into structured XML TEI

  • For PMC files (Open Access set only), also harvest the XML JATS (NLM) files and convert them into XML TEI (same TEI customization as Grobid) via Pub2TEI

Optionally:

  • Generate thumbnails for articles (based on the first page of the PDF), in small/medium/large sizes

  • Upload the generated dataset to S3 instead of the local file system

  • Generate JSON PDF annotations (with coordinates) for inline reference markers and bibliographical references (see here)

Requirements

The utility has been tested with Python 3.5+. It is developed for deployment on a POSIX/Linux server (it uses imagemagick as an external process to generate thumbnails, and wget). An S3 account and bucket must have been created for non-local storage of the data collection.

To install imagemagick:

  • on Linux Ubuntu:
sudo apt update
sudo apt install imagemagick
  • on macOS:
brew install imagemagick

Installation

Third party services

The following tools need to be installed and running, with access information specified in the configuration file (config.json):

  • Grobid, for converting PDF into XML TEI

  • biblio-glutton, for metadata retrieval and aggregation

  • Pub2TEI, for converting PMC XML files into XML TEI

It should be possible to use the public demo instance of biblio-glutton, as configured by default in the config.json file (the service scales to more than 6,000 queries per second). For Grobid, however, we strongly recommend installing a local instance, because the online public demo will not be able to scale and won't be reliable, given that it is more or less always overloaded.

As biblio-glutton relies on dataset dumps, there is a gap of several months in terms of bibliographical data freshness. The CrossRef web API and Unpaywall API services are therefore used as complements to cover this gap. For these two services, you need to indicate your email in the config file (config.json) to follow their etiquette policies. If the configuration parameters for biblio-glutton are empty, only the CrossRef REST API will be used.
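For illustration, here is a minimal sketch of such an Unpaywall lookup. The URL shape and the best_oa_location/url_for_pdf fields follow the public Unpaywall v2 API, but the helper names are hypothetical and are not part of this tool:

```python
# Sketch of an Unpaywall v2 lookup (illustrative; the harvester's real logic differs).
# The email parameter implements the etiquette policy mentioned above.
from urllib.parse import quote, urlencode

def unpaywall_url(doi, email):
    """Build the Unpaywall v2 request URL for a DOI."""
    return "https://api.unpaywall.org/v2/" + quote(doi) + "?" + urlencode({"email": email})

def extract_oa_pdf_url(record):
    """Return the best OA PDF URL from an Unpaywall JSON record, or None."""
    location = record.get("best_oa_location") or {}
    return location.get("url_for_pdf")

# Example with a mock response (no network call):
mock = {"doi": "10.1234/example", "best_oa_location": {"url_for_pdf": "https://example.org/a.pdf"}}
print(unpaywall_url("10.1234/example", "you@example.com"))
print(extract_oa_pdf_url(mock))
```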

An important parameter in the config.json file is the number of documents that may be processed in parallel, specified by the attribute batch_size, with a default value of 10 (so at most 10 documents downloaded in parallel with distinct threads/workers and processed by Grobid in parallel). You can set this number according to your available number of threads. Be careful: parallel downloads from the same source might be blocked, or might result in black-listing for some OA publisher sites, so it might be better to keep batch_size low.
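As orientation, here is a hypothetical sketch of the shape of such a configuration. Only batch_size and data_path are attributes confirmed by this document; all other field names are illustrative placeholders to be checked against the config.json shipped with the project:

```python
# Illustrative shape of config.json; apart from batch_size and data_path, all
# field names below are placeholders (check the project's actual config.json).
import json

example_config = {
    "data_path": "./data",        # local storage root for harvested files
    "batch_size": 10,             # number of documents downloaded/processed in parallel
    "grobid_server": "http://localhost:8070",  # placeholder field name
    "biblio_glutton_base": "",    # placeholder: empty -> only CrossRef REST API is used
    "crossref_email": "you@example.com",       # etiquette email (placeholder field name)
}
print(json.dumps(example_config, indent=4))
```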

These tools require Java 8 or higher.

To preferentially download the full texts available at PubMed Central from the NIH site rather than from publisher sites, you need to download the Open Access list file from PMC, which maps PMC identifiers to PMC resource archive URLs:

cd resources
wget https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_file_list.txt

If this file is available under resources/oa_file_list.txt, an index will be built at first launch and the harvester will prioritize access to the NIH resources.
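As an illustration of the kind of index that can be built from this file, here is a minimal sketch mapping PMC IDs to their archive paths. The tab-separated column layout assumed below (archive path first, PMC ID third, after a one-line date header) should be verified against the downloaded file, and build_pmc_index is a hypothetical helper, not the tool's actual code:

```python
# Sketch: build a PMC ID -> archive path index from oa_file_list.txt lines.
# Assumed layout: one date header line, then tab-separated rows with the
# archive path in column 1 and the PMC ID in column 3 (verify on your file).
def build_pmc_index(lines):
    index = {}
    for line in lines[1:]:  # skip the date header line
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 3:
            archive_path, _citation, pmcid = fields[0], fields[1], fields[2]
            index[pmcid] = archive_path
    return index

sample = [
    "2020-09-11 12:00:00",
    "oa_package/08/e0/PMC13900.tar.gz\tBreast Cancer Res. 2001\tPMC13900\t2019-11-05\tPMID:11056684\tNO-CC CODE",
]
index = build_pmc_index(sample)
print(index["PMC13900"])
```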

Article dataset builder

Create a virtual environment and install the python mess:

virtualenv --system-site-packages -p python3 env
source env/bin/activate
pip3 install -r requirements.txt

Docker

TBD

Usage

usage: harvest.py [-h] [--dois DOIS] [--cord19 CORD19] [--pmids PMIDS]
                  [--pmcids PMCIDS] [--config CONFIG] [--reset] [--reprocess]
                  [--thumbnail] [--annotation] [--diagnostic] [--dump]

COVIDataset harvester

optional arguments:
  -h, --help       show this help message and exit
  --dois DOIS      path to a file describing a dataset articles as a simple
                   list of DOI (one per line)
  --cord19 CORD19  path to the csv file describing the CORD-19 dataset
                   articles
  --pmids PMIDS    path to a file describing a dataset articles as a simple
                   list of PMID (one per line)
  --pmcids PMCIDS  path to a file describing a dataset articles as a simple
                   list of PMC ID (one per line)
  --config CONFIG  path to the config file, default is ./config.json
  --reset          ignore previous processing states, and re-init the
                   harvesting process from the beginning
  --reprocess      reprocess existing failed entries
  --thumbnail      generate thumbnail files for the front page of the
                   harvested PDF
  --annotation     generate bibliographical annotations with coordinates for
                   the harvested PDF
  --diagnostic     perform a full consistency diagnostic on the harvesting and
                   transformation process
  --dump           write all the consolidated metadata in json in the file
                   consolidated_metadata.json
  --download       only download the raw files (PDF, NLM/JATS) without 
                   processing them

Fill the file config.json with the relevant service URLs and parameters.

For instance to process a list of DOI (one DOI per line):

python3 harvest.py --dois test/dois.txt 

Similarly for a list of PMID or PMC ID:

python3 harvest.py --pmids test/pmids.txt 
python3 harvest.py --pmcids test/pmcids.txt 

For instance, for the CORD-19 dataset, you can use the metadata.csv file (last tested version from 2020-06-29) by running:

python3 harvest.py --cord19 metadata.csv  

This will generate a consolidated metadata file (specified by --out, or consolidated_metadata.json by default), and upload full text files, converted tei.xml files and other optional files either to the local file system (under the data_path indicated in the config.json file) or to an S3 bucket if the corresponding fields are filled in config.json.

You can set a specific config file name with --config :

python3 harvest.py --cord19 metadata.csv --config my_config.json    

To resume an interrupted processing, simply re-run the same command.

To re-process the failed articles of a harvesting run, use:

python3 harvest.py --reprocess --config my_config.json  

To entirely reset an existing harvesting and restart from scratch:

python3 harvest.py --cord19 metadata.csv --reset --config my_config.json  

To only download the full texts (PDF and JATS/NLM) without GROBID processing, use the --download parameter:

python3 harvest.py --cord19 metadata.csv --config my_config.json --download

To create a dump of the consolidated metadata of all the processed files (including the UUID identifier and the state of processing), add the parameter --dump:

python3 harvest.py --dump --config my_config.json  

The generated metadata file is named consolidated_metadata.json.

To produce thumbnail images of the article first page, use the --thumbnail argument. This option requires imagemagick to be installed on your system and will produce 3 PNG files with heights of 150, 300 and 500 pixels. These thumbnails can be useful for offering an article preview in an application using these data.

python3 harvest.py --cord19 metadata.csv --thumbnail --config my_config.json  

To produce PDF annotations in JSON format corresponding to the bibliographical information (reference markers in the article body and bibliographical references in the bibliography section), use the argument --annotation. See more information about these annotations here. They make it possible to enrich the display of PDF and make it more interactive.

python3 harvest.py --cord19 metadata.csv --annotation --config my_config.json  

Finally you can run a short diagnostic/reporting on the latest harvesting like this:

python3 harvest.py --diagnostic --config my_config.json  

Generated files

Default

Structure of the generated files for an article with UUID identifier 98da17ff-bf7e-4d43-bdf2-4d8d831481e5:

98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5.pdf
98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5.json
98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5.grobid.tei.xml

The *.json file above gives the metadata of the harvested item, based on CrossRef entries, with additional information provided by biblio-glutton, the status of the harvesting and GROBID processing, and the UUID (field id).
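The sharded layout above (the first 8 hexadecimal characters of the identifier split into four 2-character subdirectories) can be sketched as follows; entry_path is a hypothetical helper, not the tool's actual function:

```python
# Sketch of the sharded storage layout: split the first 8 characters of the
# identifier into four 2-character subdirectories, then nest a directory named
# after the full identifier.
import os

def entry_path(identifier, extension=".pdf"):
    shards = [identifier[i:i + 2] for i in range(0, 8, 2)]
    return os.path.join(*shards, identifier, identifier + extension)

print(entry_path("98da17ff-bf7e-4d43-bdf2-4d8d831481e5"))
```

The same scheme also covers the 8-character cord ids used for CORD-19 entries (see below).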

Optional additional files:

98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5.nxml
98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5.pub2tei.tei.xml
98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5-ref-annotations.json
98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5-thumb-small.png
98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5-thumb-medium.png
98/da/17/ff/98da17ff-bf7e-4d43-bdf2-4d8d831481e5/98da17ff-bf7e-4d43-bdf2-4d8d831481e5-thumb-large.png

The UUID identifier for a particular article is given in the generated consolidated_metadata.json file (obtained with the option --dump, see above).

The *.nxml files correspond to the JATS files available for PMC (Open Access set only).

CORD-19

The tool can perform its own harvesting and ingestion of CORD-19 papers based on an official version of the metadata.csv file of CORD-19. It provides two main advantages compared to the official CORD-19 dataset:

  • Harvest around 35% more usable full texts
  • Structure the full texts into high quality TEI format (both from PDF and from JATS), with much more information than the JSON format of the CORD-19 dataset. The JATS conversion into TEI in particular does not lose any information from the original XML file.

Be sure to install the latest available version of GROBID: many recent improvements regarding the support of bioRxiv and medRxiv preprints have been added to the tool in the last months.

To launch the harvesting (see above for more details):

python3 harvest.py --cord19 metadata.csv  

For the CORD-19 dataset, for simplification and clarity, we reuse the cord id, which is a random string of 8 characters in [0-9a-z]:

00/0a/je/vz/000ajevz/000ajevz.pdf
00/0a/je/vz/000ajevz/000ajevz.json
00/0a/je/vz/000ajevz/000ajevz.grobid.tei.xml

Optional additional files:

00/0a/je/vz/000ajevz/000ajevz.nxml
00/0a/je/vz/000ajevz/000ajevz.pub2tei.tei.xml
00/0a/je/vz/000ajevz/000ajevz-ref-annotations.json
00/0a/je/vz/000ajevz/000ajevz-thumb-small.png
00/0a/je/vz/000ajevz/000ajevz-thumb-medium.png
00/0a/je/vz/000ajevz/000ajevz-thumb-large.png

For harvesting and structuring, you only need the metadata file of the CORD-19 dataset, available at:

https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/<date_iso_str>/metadata.csv

where <date_iso_str> should match a release date indicated on the CORD-19 release page.
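The URL pattern above can be written as a trivial helper (hypothetical, for illustration; the date must still match an actual release on the CORD-19 release page):

```python
# Sketch: build the metadata.csv download URL for a given CORD-19 release date.
CORD19_BASE = "https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com"

def metadata_url(date_iso_str):
    return "{}/{}/metadata.csv".format(CORD19_BASE, date_iso_str)

print(metadata_url("2021-03-22"))
```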

For running the coverage script, which compares the full text coverage of the official CORD-19 dataset with the one produced by the present tool, you will need the full CORD-19 dataset.

On harvesting and ingesting the CORD-19 dataset

Adding a local PDF repository for Elsevier OA COVID papers

The CORD-19 dataset includes more than 19k articles corresponding to a set of Elsevier articles on COVID-19 recently put in Open Access. As Unpaywall does not cover these OA articles (as of 23.03.2020 at least), you first need to download these PDF and indicate to the harvesting tool where the local repository of PDF is located:

  • download the PDF files on the COVID-19 FTP server:

Use beat_corona as the password. See the instruction page in case of trouble.

cd pdf
mget *
  • indicate the local repository where you have downloaded the dataset in the config.json file:
"cord19_elsevier_pdf_path": "/the/path/to/the/pdf"

That's it. The file ./elsevier_covid_map_28_06_2020.csv.gz contains a map of DOI and PII (the Elsevier article identifiers) for these OA articles.

Incremental harvesting

CORD-19 is updated regularly. Suppose that you have harvested one release of the CORD-19 full texts and, a few weeks later, you would like to refresh your local corpus. Incremental harvesting is supported, so only the new entries will be downloaded and ingested.

If the harvesting was done with one version of the metadata file metadata-2020-09-11.csv (from the 2020-09-11 release):

python3 harvest.py --cord19 metadata-2020-09-11.csv --config my_config.json   

The incremental update will be realized with a new version of the metadata file simply by specifying it:

python3 harvest.py --cord19 metadata-2021-03-22.csv --config my_config.json  

The constraint is that the same data repository path is kept in the config file. The repository and its state will be reused to check whether an entry has already been harvested or not.

As an alternative, it is also possible to point to an older local data directory in the config file, with the parameter legacy_data_path. Before trying to download a file from the internet, the harvester will first check whether the PDF file is already available in this older data directory, based on the same identifiers.
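The lookup described above can be sketched as follows, assuming the sharded layout shown in the "Generated files" section; both helper names are hypothetical, not the tool's actual functions:

```python
# Sketch of the legacy-directory lookup: before downloading, check whether a
# PDF with the same identifier already exists under legacy_data_path (or the
# current data_path), using the same sharded directory layout.
import os

def sharded_path(root, identifier, extension=".pdf"):
    shards = [identifier[i:i + 2] for i in range(0, 8, 2)]
    return os.path.join(root, *shards, identifier, identifier + extension)

def find_existing_pdf(identifier, legacy_data_path, data_path):
    for root in (legacy_data_path, data_path):
        if root and os.path.isfile(sharded_path(root, identifier)):
            return sharded_path(root, identifier)
    return None  # not found locally: fall back to downloading
```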

Results with CORD-19

Here are the results for CORD-19 from 2020-09-11 (cord-19_2020-09-11.tar.gz, 4.6GB), to illustrate the interest of the tool. We ran the present tool on the CORD-19 metadata file (metadata.csv), re-harvested the full texts and converted everything into the same target TEI XML format (without information loss with respect to the available publisher XML and the GROBID PDF-to-XML conversion).

                                                        official CORD-19                 this harvester
total entries                                           253,454                          253,454
without cord id duplicates                              241,335                          241,335
without duplicates                                      -                                161,839
entries with valid OA URL                               -                                141,142
entries with successfully downloaded PDF                -                                139,565
entries with structured full texts via GROBID           94,541 (PDF JSON)                138,440 (TEI XML)
entries with structured full texts via PMC JATS         77,115 (PMC JSON)                104,288 (TEI XML)
total entries with at least one structured full text    103,587 (PDF JSON or PMC JSON)   140,322 (TEI XML)

Other information for this harvester:

  • total OA URL not found or invalid: 20,697 (out of the 161,839 distinct articles)
  • 760 GROBID PDF-to-TEI-XML conversion failures (the average failure rate on randomly downloaded scholarly PDF is normally around 1%, so 0.5% here is good)
  • 45 Pub2TEI transformations (NLM (JATS) -> TEI XML) reported as containing some kind of failure

Other main differences include:

  • the XML TEI contains richer structured full text (section titles, notes, formulas, etc.),
  • usage of up-to-date GROBID models for PDF conversion (with extra medRxiv and bioRxiv training data),
  • PMC JATS file conversion with Pub2TEI (normally without information loss, because the TEI customization we are using supersedes the structures covered by JATS). Note that a conversion from PMC JATS files has been introduced in CORD-19 from version 6.
  • full consolidation of the bibliographical references with publisher metadata, DOI, PMID, PMC ID, etc. when available (if you are into citation graphs)
  • consolidation of article metadata with CrossRef and PubMed aggregations for the entries
  • optional coordinates of structures on the original PDF
  • optional thumbnails for article preview

Converting the PMC XML JATS files into XML TEI

After the harvesting and processing realized by harvest.py, it is possible to convert the PMC XML JATS files into XML TEI. This provides better XML quality than what can be extracted automatically by Grobid from the PDF. The conversion allows having all the documents in the same XML TEI customization format. As the TEI format supersedes JATS, there is no loss of information from the JATS file.

To launch the conversion under the default data/ directory:

python3 nlm2tei.py

If a custom config file and custom data/ path are used:

python3 nlm2tei.py --config ./my_config.json

This will apply Pub2TEI (a set of XSLT) to all the harvested *.nxml files and add a new TEI file to the document repository, for instance for a CORD-19 entry:

00/0a/je/vz/000ajevz/000ajevz.pub2tei.tei.xml

Note that Pub2TEI supports many other publishers' XML formats (and variants of these formats), so the same principle and tool could be used to transform different publisher XML formats into a single one (TEI) - not just NLM/JATS - facilitating and centralizing further ingestion and processing by avoiding the need to write complicated XML parsers for each case.

Checking CORD-19 dataset coverage

The following script checks the number of duplicated cord ids (also done by the normal harvester), and also counts the number of articles with at least one JSON full text file:

usage: check_cord19_coverage.py [-h] [--documents DOCUMENTS]
                                [--metadata METADATA]

COVIDataset harvester

optional arguments:
  -h, --help            show this help message and exit
  --documents DOCUMENTS
                        path to the official CORD-19 uncompressed document dataset
  --metadata METADATA   path to the CORD-19 CSV metadata file

For example:

python3 check_cord19_coverage.py --metadata cord-19/2021-03-22/metadata.csv --documents cord-19/2021-03-22/ --config my_config.json

The path for --documents is the path where the folder document_parses is located.

Troubleshooting with imagemagick

A recent update (end of October 2018) of imagemagick breaks the normal conversion usage: the converter refuses PDF conversion by default, for security reasons related to server usage. For the non-server usage involved in our module, it is not a problem to allow PDF conversion. To do so, simply edit the file /etc/ImageMagick-6/policy.xml (or /etc/ImageMagick/policy.xml) and comment out the following line:

<!-- <policy domain="coder" rights="none" pattern="PDF" /> -->
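The edit can also be scripted with sed. The sketch below demonstrates it on a local copy of the file; for the real file, run the sed command with sudo on /etc/ImageMagick-6/policy.xml (or /etc/ImageMagick/policy.xml):

```shell
# Demonstrated on a local copy; adapt the path for the real policy.xml.
POLICY=policy-demo.xml
printf '%s\n' '<policymap>' \
  '  <policy domain="coder" rights="none" pattern="PDF" />' \
  '</policymap>' > "$POLICY"
# Wrap the PDF policy line in an XML comment to re-enable PDF conversion:
sed -i 's|<policy domain="coder" rights="none" pattern="PDF" />|<!-- & -->|' "$POLICY"
cat "$POLICY"
```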

How to cite

For citing this software work, please refer to the present GitHub project, together with the Software Heritage project-level permanent identifier. For example, with BibTeX:

@misc{articledatasetbuilder,
    title = {Article Dataset Builder},
    howpublished = {\url{https://github.com/kermitt2/article-dataset-builder}},
    publisher = {GitHub},
    year = {2020--2021},
    archivePrefix = {swh},
    eprint = {1:dir:adc1581a092560c0ac4a82256c0c905859ec15fc}
}

License and contact

Distributed under Apache 2.0 license.

Main author and contact: Patrice Lopez ([email protected])
