playing with the common crawl

serious work in progress

common crawl is a freely available 25+TB web crawl.

dependencies

* boilerpipe for visible text extraction; the KeepEverythingWithMinKWordsExtractor has been working well for me.
* tika for language detection.
* the stanford parser for general NLP witchcraft.

method

pass 0) download the data

download the data from s3 to hdfs, unmodified, using jets3t. was previously using the common crawl input format (which handled the download itself) but had lots of problems with it.

see simple_dist_cp.sh
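
the per-file copy is roughly the following; a sketch only, assuming jets3t's RestS3Service and the hadoop FileSystem api, with bucket / key / destination as placeholder args (simple_dist_cp.sh is what actually fans this out across the cluster).

```java
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.model.S3Bucket;
import org.jets3t.service.model.S3Object;
import org.jets3t.service.security.AWSCredentials;

public class S3ToHdfsCopy {
  public static void main(String[] args) throws Exception {
    String bucket = args[0];   // source s3 bucket (placeholder)
    String key = args[1];      // path of a single arc.gz file (placeholder)
    String hdfsDest = args[2]; // hdfs destination directory (placeholder)

    RestS3Service s3 = new RestS3Service(
        new AWSCredentials(System.getenv("AWS_ACCESS_KEY"),
                           System.getenv("AWS_SECRET_KEY")));

    // stream the object from s3 ...
    S3Object object = s3.getObject(new S3Bucket(bucket), key);
    InputStream in = object.getDataInputStream();

    // ... straight into hdfs, unmodified
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path(hdfsDest, key));
    IOUtils.copyBytes(in, out, 4096, true); // closes both streams when done
  }
}
```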

pass 1) filter text/html

map-only pass using the nutch arc input format; ignores everything but mime_type 'text/html'

also converts the raw http response (i.e. ascii headers + encoded bytes) to just utf-8 encoded html

want to keep this intermediate form so experiments can be run against either the link graph or the visible text

outputs (as sequence file) key: url, value: html response (utf-8 encoded)

see text_html.sh
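
a rough sketch of the mapper (old mapred api). it assumes the arc input format hands over the arc header line (url, ip, date, mime type, length) as the key and the raw response bytes as the value; it also just decodes as latin-1 and splits at the blank line rather than sniffing the declared charset, which the real pass has to care about.

```java
import java.io.IOException;
import java.nio.charset.Charset;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// key: arc header line "url ip date mime_type length", value: raw http response bytes
public class TextHtmlMapper extends MapReduceBase
    implements Mapper<Text, BytesWritable, Text, Text> {

  public void map(Text key, BytesWritable value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    String[] header = key.toString().split("\\s+");
    if (header.length < 4) {
      return;
    }
    String url = header[0];
    String mimeType = header[3];

    // ignore everything but text/html
    if (!mimeType.toLowerCase().startsWith("text/html")) {
      return;
    }

    // raw record = ascii http headers, a blank line, then the encoded body
    String raw = new String(value.getBytes(), 0, value.getLength(),
                            Charset.forName("ISO-8859-1"));
    int bodyStart = raw.indexOf("\r\n\r\n");
    if (bodyStart == -1) {
      return;
    }
    String html = raw.substring(bodyStart + 4);

    // Text serialises as utf-8, so the output sequence file is utf-8 encoded html
    output.collect(new Text(url), new Text(html));
  }
}
```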

pass 2) visible text extraction

map-only pass; runs html through boilerpipe to extract visible text

uses the boilerpipe KeepEverythingWithMinKWordsExtractor to ignore block elements that don't have at least 5 terms

outputs (as sequence file) key: url, value: visible text; each line denotes a separate block element from the html
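
the mapper for this pass is something like the following sketch; the class name is made up, but the KeepEverythingWithMinKWordsExtractor call is the actual boilerpipe api.

```java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

import de.l3s.boilerpipe.BoilerpipeProcessingException;
import de.l3s.boilerpipe.extractors.KeepEverythingWithMinKWordsExtractor;

// key: url, value: utf-8 encoded html (output of pass 1)
public class VisibleTextMapper extends MapReduceBase
    implements Mapper<Text, Text, Text, Text> {

  // drop block elements with fewer than 5 terms
  private final KeepEverythingWithMinKWordsExtractor extractor =
      new KeepEverythingWithMinKWordsExtractor(5);

  public void map(Text url, Text html,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    try {
      // boilerpipe emits one line per retained block element
      String visibleText = extractor.getText(html.toString());
      if (visibleText.trim().length() > 0) {
        output.collect(url, new Text(visibleText));
      }
    } catch (BoilerpipeProcessingException e) {
      reporter.getCounter("visible_text", "boilerpipe_failures").increment(1);
    }
  }
}
```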

pass 3) filter english text only

map-only pass; runs visible text through tika to identify the language and ignores everything but language 'en'

outputs (as sequence file) key: url, value: visible text

see visible_en_text.sh
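
the language check is roughly the following sketch, using tika's LanguageIdentifier (class name made up).

```java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.tika.language.LanguageIdentifier;

// key: url, value: visible text (output of pass 2)
public class EnglishOnlyMapper extends MapReduceBase
    implements Mapper<Text, Text, Text, Text> {

  public void map(Text url, Text visibleText,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    // tika guesses the language from character n-gram profiles
    LanguageIdentifier identifier = new LanguageIdentifier(visibleText.toString());
    if ("en".equals(identifier.getLanguage())) {
      output.collect(url, visibleText);
    } else {
      reporter.getCounter("language_dropped", identifier.getLanguage()).increment(1);
    }
  }
}
```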

pass 4) tokenisation

map/reduce pass; feeds visible text, a paragraph at a time, through the stanford parser and extracts sentences / tokens

ignores sentences that tokenise to fewer than 3 terms.

only emits each sentence once per page since the vast majority of these duplicates represent noise (headers / footers / list structures etc.)

outputs (as sequence file) key: url \t paragraph_idx \t sentence_in_paragraph_idx, value: one sentence, tokens space separated

the number of reducers is chosen so each output part is roughly 3gb, keeping files under the 5gb s3 object limit (i.e. without multipart upload)

see sentences.sh
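
the map side is roughly the following sketch (class name made up; the reduce side is essentially an identity reduce, only there to control output part sizes as noted above).

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

import edu.stanford.nlp.ling.HasWord;
import edu.stanford.nlp.process.DocumentPreprocessor;

// key: url, value: visible text with one block element (paragraph) per line
public class SentenceMapper extends MapReduceBase
    implements Mapper<Text, Text, Text, Text> {

  public void map(Text url, Text visibleText,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    Set<String> seen = new HashSet<String>(); // emit each sentence once per page
    String[] paragraphs = visibleText.toString().split("\n");

    for (int paragraphIdx = 0; paragraphIdx < paragraphs.length; paragraphIdx++) {
      // DocumentPreprocessor splits a paragraph into tokenised sentences
      DocumentPreprocessor sentences =
          new DocumentPreprocessor(new StringReader(paragraphs[paragraphIdx]));

      int sentenceIdx = -1;
      for (List<HasWord> sentence : sentences) {
        sentenceIdx++;
        if (sentence.size() < 3) {
          continue; // ignore sentences that tokenise to fewer than 3 terms
        }
        StringBuilder tokens = new StringBuilder();
        for (HasWord token : sentence) {
          if (tokens.length() > 0) {
            tokens.append(' ');
          }
          tokens.append(token.word());
        }
        String joined = tokens.toString();
        if (seen.add(joined)) { // skip duplicate sentences (headers / footers / lists)
          Text outKey = new Text(url.toString() + "\t" + paragraphIdx + "\t" + sentenceIdx);
          output.collect(outKey, new Text(joined));
        }
      }
    }
  }
}
```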

pass 2 -> pass 4

see run.sh for a ChainMapper version that does steps 2 -> 4 in a single map/reduce pass
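
wiring-wise that looks roughly like the sketch below, against the old mapred ChainMapper api and reusing the made-up mapper class names from the earlier sketches; run.sh is the real thing, and the reducer count here is just a placeholder.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.lib.ChainMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class ChainedPipeline {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(ChainedPipeline.class);
    job.setJobName("common crawl passes 2-4");

    // input is the pass 1 sequence file of <url, utf-8 html>
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.setInputFormat(SequenceFileInputFormat.class);
    job.setOutputFormat(SequenceFileOutputFormat.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);

    // html -> visible text -> english only -> tokenised sentences,
    // all inside a single map task
    job.setMapperClass(ChainMapper.class);
    ChainMapper.addMapper(job, VisibleTextMapper.class,
        Text.class, Text.class, Text.class, Text.class, true, new JobConf(false));
    ChainMapper.addMapper(job, EnglishOnlyMapper.class,
        Text.class, Text.class, Text.class, Text.class, true, new JobConf(false));
    ChainMapper.addMapper(job, SentenceMapper.class,
        Text.class, Text.class, Text.class, Text.class, true, new JobConf(false));

    // identity reduce; the reducer count (placeholder) sizes the output parts
    job.setReducerClass(IdentityReducer.class);
    job.setNumReduceTasks(8);

    JobClient.runJob(job);
  }
}
```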
