
TeamHG-Memex / Deep Deep

Adaptive crawler which uses Reinforcement Learning methods

Projects that are alternatives to, or similar to, Deep Deep

Pyeng
Python for engineers
Stars: ✭ 144 (-0.69%)
Mutual labels:  jupyter-notebook
Machinelearning Az
Repository for the Machine Learning A to Z course in R and Python
Stars: ✭ 144 (-0.69%)
Mutual labels:  jupyter-notebook
Recurrent neural network
This is the code for "Recurrent Neural Networks - The Math of Intelligence (Week 5)" by Siraj Raval on YouTube
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Google2csv
Google2Csv a simple google scraper that saves the results on a csv/xlsx/jsonl file
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Jupyter
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Rloss
Regularized Losses (rloss) for Weakly-supervised CNN Segmentation
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Tf2 course
Notebooks for my "Deep Learning with TensorFlow 2 and Keras" course
Stars: ✭ 1,826 (+1159.31%)
Mutual labels:  jupyter-notebook
Hypertools Paper Notebooks
Supporting notebooks and data from hypertools paper
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Scipy con 2019
Tutorial Sessions for SciPy Con 2019
Stars: ✭ 142 (-2.07%)
Mutual labels:  jupyter-notebook
Kaggle Titanic
kaggle titanic solution
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Course Python Data Science
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Mlmodels
mlmodels : Machine Learning and Deep Learning Model ZOO for Pytorch, Tensorflow, Keras, Gluon models...
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Shufflenet V2 Tensorflow
A lightweight convolutional neural network
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Introduccion a python Curso online
Repository containing assorted materials, code, videos, and exercises for learning the Python language.
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Ppdai risk evaluation
"Magic Mirror Cup" risk-control algorithm competition: a PPDai credit-risk model that comes close to the winning score
Stars: ✭ 144 (-0.69%)
Mutual labels:  jupyter-notebook
Ipython Notebooks
Informal IPython experiments and tutorials. TensorFlow, machine learning/deep learning/RL, NLP applications.
Stars: ✭ 144 (-0.69%)
Mutual labels:  jupyter-notebook
Data Driven Prediction Of Battery Cycle Life Before Capacity Degradation
Code for Nature energy manuscript
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Sqlcell
SQLCell is a magic function for the Jupyter Notebook that executes raw, parallel, parameterized SQL queries with the ability to accept Python values as parameters and assign output data to Python variables while concurrently running Python code. And *much* more.
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Citeomatic
A citation recommendation system that allows users to find relevant citations for their paper drafts. The tool is backed by Semantic Scholar's OpenCorpus dataset.
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook
Textbook
Principles and Techniques of Data Science, the textbook for Data 100 at UC Berkeley
Stars: ✭ 145 (+0%)
Mutual labels:  jupyter-notebook

Deep-Deep: Adaptive Crawler
===========================

.. image:: https://travis-ci.org/TeamHG-Memex/deep-deep.svg?branch=master
   :target: http://travis-ci.org/TeamHG-Memex/deep-deep
   :alt: Build Status

.. image:: http://codecov.io/github/TeamHG-Memex/deep-deep/coverage.svg?branch=master
   :target: http://codecov.io/github/TeamHG-Memex/deep-deep?branch=master
   :alt: Code Coverage

Deep-Deep is a Scrapy-based crawler which uses Reinforcement Learning methods to learn which links to follow.

It is called Deep-Deep, but it doesn't use Deep Learning, and it is not only for the deep web. Weird.

Running
-------

To run the spider, you need some seed URLs and a relevancy function that provides a reward value for each crawled page. There are several scripts in ./scripts covering common use cases:

  • crawl-forms.py learns to find password recovery forms (classified with Formasaurus). This is a good benchmark task because the spider must learn to plan several steps ahead (such forms are often best reachable via login links).
  • crawl-keywords.py starts a crawl whose relevance function is determined by a keywords file (keywords starting with "-" are treated as negative).
  • crawl-relevant.py starts a crawl where the reward is given by a classifier that returns a score via a .predict_proba method; a minimal sketch of such a classifier follows this list.
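
For crawl-relevant.py, any object exposing a .predict_proba method will do. Below is a minimal, hypothetical sketch of training and saving such a relevancy classifier with scikit-learn; the example texts, labels, and the way the script loads and feeds the classifier are assumptions, not part of deep-deep::

    import joblib
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled pages: text plus a 0/1 relevance label.
    texts = ['reset your password here', 'unrelated page about cooking']
    labels = [1, 0]

    # HashingVectorizer keeps memory bounded; LogisticRegression
    # provides the .predict_proba the spider needs for rewards.
    clf = make_pipeline(HashingVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict_proba(['password recovery']))  # [[P(neg), P(pos)]]

    # Serialize for later use (assumption: the script accepts a
    # joblib-serialized classifier; see its --help for specifics).
    joblib.dump(clf, 'relevancy.joblib')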

There is also an extraction spider, deepdeep.spiders.extraction.ExtractionSpider, which learns to extract unique items from a single domain given an item extractor.

For the keywords and relevancy crawlers, the following files are created in the result folder:

  • items.jl.gz - depending on the export_cdr argument, either items in CDR format (the default) or spider stats, including learning statistics (pass -a export_cdr=0); a short reading sketch follows this list
  • meta.json - arguments of the spider
  • params.json - full spider parameters
  • Q-*.joblib - Q-model snapshots
  • queue-*.csv.gz - queue snapshots
  • events.out.tfevents.* - a log in TensorBoard_ format. Install TensorFlow_ to view it with the ``tensorboard --logdir <result folder parent>`` command.
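
As mentioned in the items.jl.gz entry above, the exported items are easy to inspect from Python. A minimal sketch, assuming the default CDR export (gzipped JSON lines); treat the exact field names as an assumption::

    import gzip
    import json

    with gzip.open('items.jl.gz', 'rt', encoding='utf-8') as f:
        for line in f:
            item = json.loads(line)
            # CDR items carry the page URL and raw content,
            # among other fields.
            print(item.get('url'), len(item.get('raw_content', '')))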

Using trained model
-------------------

You can use deep-deep simply to run adaptive crawls, updating the link model and collecting crawled data at the same time. In some cases, though, it is more efficient to first train a link model with deep-deep and then use that model in another crawler: deep-deep uses a lot of memory to store page and link features, and extra CPU to update the link model, so if the link model is general enough to freeze, you can run a more efficient crawl. You might also want to use a deep-deep link model in an existing project.

This is all possible with deepdeep.predictor.LinkClassifier: just load it from a Q-*.joblib checkpoint and use the .extract_urls_from_response or .extract_urls methods to get a list of URLs with scores. An example of using this classifier in a simple Scrapy spider is given in examples/standalone.py. Note that in order to use the default Scrapy queue, the float link score is converted to an integer priority value.
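
For orientation, here is a hedged sketch of such a standalone spider. The checkpoint file name, the loading call, and the shape of the returned (url, score) pairs are assumptions; examples/standalone.py is the canonical version::

    import scrapy
    from deepdeep.predictor import LinkClassifier

    class FrozenModelSpider(scrapy.Spider):
        name = 'frozen-model'
        start_urls = ['http://example.com']  # assumption: your own seeds

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # Assumption: the Q-*.joblib checkpoint loads via a
            # classmethod; examples/standalone.py shows the exact call.
            self.link_clf = LinkClassifier.load('Q-0.joblib')

        def parse(self, response):
            # Score every link on the page with the frozen model.
            for url, score in self.link_clf.extract_urls_from_response(response):
                # The default Scrapy queue needs integer priorities,
                # so the float score is scaled and truncated.
                yield scrapy.Request(url, priority=int(score * 100))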

Note that in some rare cases the model might fail to generalize from the crawl it was trained on to the new crawl.

Model explanation
-----------------

It is possible to explain model weights and predictions using the eli5_ library. For that you'll need to crawl with model checkpointing enabled and with items stored in CDR format: the crawled items are used to invert the hashing vectorizer features and to explain predictions.

./scripts/explain-model.py can save a model explanation to pickle or HTML, or print it in the terminal, though the output is hard to analyze because character n-gram features are used.

./scripts/explain-predictions.py produces an HTML file for each crawled page, showing explanations for all link scores.
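
For orientation, here is a rough, hypothetical sketch of the idea behind these scripts: invert the hashed features using documents seen during the crawl, then ask eli5 to explain the model's weights. The vectorizer settings, the sample documents, and the assumption that the Q-*.joblib checkpoint yields a linear model eli5 understands are all illustrative; the real scripts handle these details::

    import joblib
    import eli5
    from eli5.sklearn import InvertableHashingVectorizer
    from sklearn.feature_extraction.text import HashingVectorizer

    # Assumption: character n-gram hashing features, as noted above.
    vec = HashingVectorizer(analyzer='char', ngram_range=(3, 5))
    ivec = InvertableHashingVectorizer(vec)
    # Assumption: link/page texts recovered from CDR items.
    ivec.fit(['password recovery', 'log in', 'contact us'])

    # Assumption: the checkpoint is (or contains) a linear model;
    # the real script knows how to extract it.
    clf = joblib.load('Q-0.joblib')
    print(eli5.format_as_text(eli5.explain_weights(clf, vec=ivec, top=20)))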

Testing
-------

To run tests, execute the following command from the deep-deep folder::

    ./check.sh

It requires Python 3.5+, pytest_, pytest-cov_, pytest-twisted_ and mypy_.

Alternatively, run ``tox`` from the deep-deep folder.

.. _eli5: http://eli5.readthedocs.io/
.. _pytest: http://pytest.org/latest/
.. _pytest-cov: https://pytest-cov.readthedocs.io/
.. _pytest-twisted: https://github.com/schmir/pytest-twisted
.. _mypy: http://mypy-lang.org/
.. _TensorBoard: https://www.tensorflow.org/how_tos/summaries_and_tensorboard/
.. _TensorFlow: https://www.tensorflow.org/


.. image:: https://hyperiongray.s3.amazonaws.com/define-hg.svg
   :target: https://www.hyperiongray.com/?pk_campaign=github&pk_kwd=deep-deep
   :alt: define hyperiongray
