webXray

webXray is a tool for analyzing third-party content on webpages and identifying the companies which collect user data. A command line user interface makes webXray easy to use for non-programmers, and those with advanced needs may analyze millions of pages with proper configuration. webXray is a professional tool designed for academic research, and may be used by privacy compliance officers, regulators, and those who are generally curious about hidden data flows on the web.

webXray uses a custom library of domain ownership to chart the flow of data from a given third-party domain to a corporate owner, and if applicable, to parent companies. Tracking attribution reports produced by webXray provide robust granularity. Reports of the average numbers of third-parties and cookies per-site, most commonly occurring third-party domains and elements, volumes of data transferred, use of SSL encryption, and more are provided out-of-the-box. A flexible data schema allows for the generation of custom reports as well as authoring extensions to add additional data sources.
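As a hedged sketch of what a custom report could look like (the database file name, table, and column names below are hypothetical placeholders, not webXray's actual schema), a short query against the SQLite database might aggregate third-party domains directly:

import sqlite3

# 'webxray.db' and the 'third_party_requests' table with its 'domain'
# column are hypothetical placeholders for the real schema
conn = sqlite3.connect('webxray.db')
query = ('SELECT domain, COUNT(*) AS hits FROM third_party_requests '
         'GROUP BY domain ORDER BY hits DESC LIMIT 10')
for domain, hits in conn.execute(query):
    print(domain, hits)
conn.close()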

The public version of webXray uses Chrome to load pages, stores data in a SQLite database, and can be used on a normal desktop computer. There is also a proprietary forensic version of webXray designed to meet the demands of academic research and litigation. If you have academic needs, please contact Tim Libert (https://timlibert.me); if you have litigation needs, please contact us at the webXray company website (https://webxray.eu).

More information and detailed installation instructions may be found on the project website.

Dependencies

webXray depends on several pieces of software being installed on your computer in advance. The webXray website has detailed instructions for setting up the software on Ubuntu and macOS. If you are familiar with installing dependencies on your own, the following are needed:

Python 3.4+			https://www.python.org

Google Chrome 75+		https://www.google.com/chrome/
Chrome Driver 75		https://sites.google.com/a/chromium.org/chromedriver/

Selenium			https://pypi.python.org/pypi/selenium
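As a quick sketch (assuming pip3 is on your path, and the google-chrome command on Ubuntu), Selenium can be installed from PyPI and the browser and driver versions checked from the shell:

# install the Selenium Python bindings from PyPI
pip3 install selenium

# Chrome and Chrome Driver major versions must match (e.g. both 75);
# 'google-chrome' is the Ubuntu command name and may differ on macOS
google-chrome --version
chromedriver --version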

Installation

If the dependencies above are met, you can clone this repository and get started:

git clone https://github.com/timlib/webXray.git
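Then change into the newly cloned directory before running anything; the directory name follows from the repository URL above:

cd webXray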

Again, see the webXray website for installation guides for Ubuntu and macOS.

Using webXray

To start webXray in interactive mode, type:

python3 run_webXray.py

The prompts will guide you through scanning a sample list of websites using the default settings: Chrome in windowed mode and a SQLite database. If you wish to run several browsers in parallel to increase speed, leverage a more powerful database engine, or perform other advanced tasks, please see the project website for details.

Using webXray to Analyze Your Own List of Pages

The raison d'être of webXray is to allow you to analyze pages of your choosing. To do so, first place all of the page addresses you wish to scan into a text file and place this file in the "page_lists" directory. Make sure your addresses start with "http://" or "https://"; otherwise, webXray will not recognize them as valid addresses (see the example below). Once you have placed your page list in the proper directory, you may run webXray and it will allow you to select your page list.
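For illustration, a page list file (the file name "my_pages.txt" here is just an example) would contain one address per line:

https://example.com
https://example.org/news
http://example.net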

Viewing and Understanding Reports

Once you have completed your data collection, use the interactive mode to generate an analysis. When it is complete, the analysis will be output to the '/reports' directory as a number of csv files (a short sketch of loading one of them follows the list below):

  • db_summary.csv: a basic report of what is in the database and how many pages loaded
  • stats.csv: provides top-level stats on how many domains are contacted, cookies, javascript, etc.
  • aggregated_tracking_attribution.csv: details on percentages of sites tracked by different companies and their subsidiaries
  • 3p_domain.csv: most frequently occurring third-party domains
  • 3p_element.csv: most frequently occurring third-party elements of all types
  • 3p_image.csv: most frequently occurring third-party images
  • 3p_javascript.csv: most frequently occurring third-party javascript
  • 3p_ssl_use.csv: rates at which detected third-parties encrypt requests
  • data_xfer_summary.csv: volume and percentage of data received from first- and third-party domains
  • data_xfer_aggregated.csv: volume and percentage of data received from various companies
  • data_xfer_by_domain.csv: volume and percentage of data received from specific third-party domains
  • network: pairings between page domains and third-party domains; this data can be imported into network visualization software
  • per_page_data_flow.csv: a single large file listing the requests made for each page; off by default
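As a minimal sketch, assuming reports were written to the default './reports' directory, any of the csv files above can be loaded with Python's standard csv module for further analysis:

import csv

# print every row of the top-level stats report;
# no column layout is assumed here, rows are printed as-is
with open('./reports/stats.csv', newline='') as f:
    for row in csv.reader(f):
        print(row)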

Important Note on Speed and Parallelization

webXray can load many pages in parallel and may be used to analyze millions of pages fairly quickly. Out of the box, however, webXray is configured to scan only one page at a time. If you think your system can handle more (and chances are it can!), open the 'run_webXray.py' file and search for the first occurrence of the 'pool_size' variable; instructions there explain how to increase the number of pages scanned concurrently. The sketch below illustrates the general idea, and additional information is available on the project website.
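For illustration only (this is not webXray's internal code), a pool size of N with Python's multiprocessing module allows N pages to be processed at once; scan_page below is a hypothetical stand-in for the real per-page scan:

from multiprocessing import Pool

# hypothetical stand-in for webXray's per-page scan logic
def scan_page(url):
    return 'scanned ' + url

if __name__ == '__main__':
    pages = ['https://example.com', 'https://example.org']
    pool_size = 2  # number of pages scanned concurrently
    with Pool(pool_size) as pool:
        print(pool.map(scan_page, pages))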

Academic Citation

This tool is produced by Timothy Libert; if you are using it for academic research, please cite the most pertinent publication from his Google Scholar page.

License

webXray is FOSS and licensed under GPLv3, see LICENSE.md for details.
