icy / Google Group Crawler

Get (almost) original messages from google group archives. Your data is yours.

Programming Languages

shell
77523 projects
bash
514 projects

Projects that are alternatives of or similar to Google Group Crawler

Googliser
a fast BASH multiple-image downloader
Stars: ✭ 202 (+6.32%)
Mutual labels:  wget, google, curl
Gdown
Download a large file from Google Drive (curl/wget fails because of the security notice).
Stars: ✭ 962 (+406.32%)
Mutual labels:  wget, curl
Autocrawler
Google, Naver multiprocess image web crawler (Selenium)
Stars: ✭ 957 (+403.68%)
Mutual labels:  google, crawler
Wsend
wsend: The opposite of wget
Stars: ✭ 64 (-66.32%)
Mutual labels:  wget, curl
Sitemap Generator
Easily create XML sitemaps for your website.
Stars: ✭ 273 (+43.68%)
Mutual labels:  google, crawler
Zhihu Login
Simulated Zhihu login, with support for extracting captchas and saving cookies
Stars: ✭ 340 (+78.95%)
Mutual labels:  crawler, cookie
Bashupload
PHP/JavaScript file upload web app to upload files from command line & browser, and download them elsewhere. Frequently used to upload/download files on servers. Hosted version is available at bashupload.com.
Stars: ✭ 56 (-70.53%)
Mutual labels:  wget, curl
squirrel
Like curl or wget, but downloads go directly into a SQLite database
Stars: ✭ 24 (-87.37%)
Mutual labels:  curl, wget
Http Client
A high-performance, high-stability, cross-platform HTTP client.
Stars: ✭ 86 (-54.74%)
Mutual labels:  wget, curl
D4n155
OWASP D4N155 - Intelligent and dynamic wordlist using OSINT
Stars: ✭ 105 (-44.74%)
Mutual labels:  google, crawler
Fawkes
Fawkes is a tool to search for targets vulnerable to SQL injection. It performs the search using the Google search engine.
Stars: ✭ 108 (-43.16%)
Mutual labels:  google, crawler
Host
Expose your LocalHost with this tool
Stars: ✭ 268 (+41.05%)
Mutual labels:  wget, curl
dePAC
seamless Proxy Auto-Config (a.k.a. Web Proxy Auto Discovery) for CLI apps
Stars: ✭ 26 (-86.32%)
Mutual labels:  curl, wget
Xidel
Command line tool to download and extract data from HTML/XML pages or JSON-APIs, using CSS, XPath 3.0, XQuery 3.0, JSONiq or pattern matching. It can also create new or transformed XML/HTML/JSON documents.
Stars: ✭ 335 (+76.32%)
Mutual labels:  wget, curl
1c http
A 1C subsystem for working with HTTP
Stars: ✭ 48 (-74.74%)
Mutual labels:  curl, cookie
Php Educational Administration
University grade lookup via WeChat: data scraping, data analysis, checking grades through WeChat, captcha recognition, Redis caching
Stars: ✭ 38 (-80%)
Mutual labels:  curl, cookie
Magic google
Google search results crawler, get google search results that you need
Stars: ✭ 247 (+30%)
Mutual labels:  google, crawler
Curlsharp
CurlSharp - .Net binding and object-oriented wrapper for libcurl.
Stars: ✭ 153 (-19.47%)
Mutual labels:  curl, cookie
Is Google
Verify that a request is from Google crawlers using Google's DNS verification steps
Stars: ✭ 82 (-56.84%)
Mutual labels:  google, crawler
Youtube Projects
This repository contains all the code I use in my YouTube tutorials.
Stars: ✭ 144 (-24.21%)
Mutual labels:  google, crawler

Download all messages from Google Group archive

google-group-crawler is a Bash-4 script to download all (original) messages from a Google Group archive. Private groups require a cookie string or cookie file. Groups with adult content are not supported yet.

Installation

The script requires bash-4, sort, curl, sed, awk.

Make the script executable with chmod 755 and put it somewhere in your PATH (e.g., /usr/local/bin/).
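
For example, a typical installation might look like the following (a sketch; the clone location and target directory are assumptions, adjust them to your setup):

git clone https://github.com/icy/google-group-crawler.git
cd google-group-crawler/
chmod 755 crawler.sh
cp crawler.sh /usr/local/bin/     # or any other directory in your PATH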

The script may not work in a Windows environment, as reported in https://github.com/icy/google-group-crawler/issues/26.

Usage

The first run

For a private group, please prepare your cookie file first (see the private group section below).

# export _CURL_OPTIONS="-v"       # use curl options to provide, e.g., cookies
# export _HOOK_FILE="/some/path"  # provide a hook file, see The hook section

# export _ORG="your.company"      # required, if you are using Gsuite
export _GROUP="mygroup"           # specify your group
./crawler.sh -sh                  # first run for testing
./crawler.sh -sh > curl.sh        # save your script
bash curl.sh                      # downloading mbox files

You can execute the curl.sh script multiple times; curl will quickly skip any files that have already been fully downloaded.

Update your local archive with the RSS feed

After you have an archive from the first run, you only need to add the latest messages shown in the feed. You can do that with the -rss option and the optional _RSS_NUM environment variable:

export _RSS_NUM=50                # (optional. See Tips & Tricks.)
./crawler.sh -rss > update.sh     # using rss feed for updating
bash update.sh                    # download the latest posts

It's useful to run this update frequently to keep your local archive current, for example from a cron job with a small wrapper like the one below.
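
Here is a minimal sketch of such a wrapper; the group name and paths are assumptions, adjust them to your setup:

#!/usr/bin/env bash
# update-archive.sh -- fetch the latest posts via the RSS feed.
# The group name and directory below are examples.
set -e
export _GROUP="mygroup"
export _RSS_NUM=50
cd "$HOME/groups"                 # the directory that contains "$_GROUP/"
crawler.sh -rss > update.sh       # generate the update script
bash update.sh                    # download the latest posts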

Private group or group hosted by an organization

To download messages from a private group or a group hosted by your organization, you need to provide some cookie information to the script. In the past the script used wget and the Netscape cookie file format; it now uses curl with a cookie string and a configuration file.

  1. Open Firefox, press F12 to open the developer tools, and select the Network tab. (You can find a similar way in your favorite browser.)

  2. Log in to the Google account you are testing with, and access your group. For example, https://groups.google.com/forum/?_escaped_fragment_=categories/google-group-crawler-public (replace google-group-crawler-public with your group name). Make sure you can read some content with your own group URI.

  3. From the Network tab, select the request and choose Copy -> Copy Request Headers. The result contains many headers; paste it into your text editor and keep only the Cookie part (a one-liner for this is shown after this list).

  4. Now prepare a file curl-options.txt as below

     user-agent = "Mozilla/5.0 (X11; Linux x86_64; rv:74.0) Gecko/20100101 Firefox/74.0"
     header = "Cookie: <snip>"
    

    Of course, replace the <snip> part with your own cookie string. See man curl for more details of the file format.

  5. Specify your options file via _CURL_OPTIONS:

     export _CURL_OPTIONS="-K /path/to/curl-options.txt"
    

    Now every hidden group can be downloaded :)
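
If you paste the copied request headers into a file, the Cookie header mentioned in step 3 can be extracted and wrapped for curl with a one-liner like the following (a sketch; request-headers.txt is an assumed filename):

grep -i '^Cookie:' request-headers.txt \
  | sed -e 's/^/header = "/' -e 's/$/"/' >> curl-options.txt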

The hook

If you want to execute a hook command after an mbox file is downloaded, you can do as below.

  1. Prepare a Bash script file that contains a definition of the __curl_hook function. The first argument is the output filename, and the second argument is the URL. For example, here is a simple hook:

     # $1: output file
     # $2: url (https://groups.google.com/forum/message/raw?msg=foobar/topicID/msgID)
     __curl_hook() {
       if [[ "$(stat -c %b "$1")" == 0 ]]; then
         echo >&2 ":: Warning: empty output '$1'"
       fi
     }
    

    In this example, the hook will check if the output file is empty, and send a warning to the standard error device.

  2. Set your environment variable _HOOK_FILE which should be the path to your file. For example,

     export _GROUP=archlinuxvn
     export _HOOK_FILE=$HOME/bin/curl.hook.sh
    

    The hook file will now be loaded in the future output of crawler.sh -sh or crawler.sh -rss.
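
As another example, a hook can simply log every downloaded message so you can audit the crawl later (a sketch; the log path is an assumption):

# $1: output file
# $2: url
__curl_hook() {
  printf '%s %s %s\n' "$(date -u '+%FT%TZ')" "$1" "$2" >> "$HOME/ggc-download.log"
}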

What to do with your local archive

The downloaded messages are found under $_GROUP/mbox/*.

They are in RFC 822 format (possibly with obfuscated email addresses) and can easily be converted to mbox format before being imported into your email client (Thunderbird, claws-mail, etc.).

You can also use the mhonarc utility to convert the downloaded messages to HTML files.
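
For example, here is a minimal sketch that glues the raw messages into a single mbox file by adding the "From " separator line that the mbox format requires (the output filename is an example; a complete converter would also quote body lines starting with "From ", which this sketch skips):

out="$_GROUP.mbox"
: > "$out"                        # start with an empty mbox file
for f in "$_GROUP"/mbox/m.*; do
  # each mbox message starts with a "From <sender> <date>" separator line
  printf 'From crawler@localhost %s\n' "$(date -u '+%a %b %e %T %Y')" >> "$out"
  cat "$f" >> "$out"
  printf '\n' >> "$out"           # blank line between messages
done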

Rescan the whole local archive

Sometimes you may need to rescan or re-download all messages. You can do that by removing all temporary files:

rm -fv $_GROUP/threads/t.*    # this is a must
rm -fv $_GROUP/msgs/m.*       # see also Tips & Tricks

or you can use the _FORCE option:

_FORCE="true" ./crawler.sh -sh

Another option is to delete all files under the $_GROUP/ directory. As usual, remember to back up before you delete anything.

Known problems

  1. Fails on groups with adult content (https://github.com/icy/google-group-crawler/issues/14).
  2. This script may not recover emails from public groups. With valid cookies, you may see the original emails if you are a manager of the group. See also https://github.com/icy/google-group-crawler/issues/16.
  3. When cookies are used, the original emails may be recovered, and you must filter them before making your archive public.
  4. The script can't fetch from a group whose name contains special characters (e.g., +). See also https://github.com/icy/google-group-crawler/issues/30.

Contributions

  1. Parallel support: @Pikrass has a script to download messages in parallel. It's discussed in https://github.com/icy/google-group-crawler/issues/32; the script itself is at https://gist.github.com/Pikrass/f8462ff8a9af18f97f08d2a90533af31.
  2. Raw access denied: @alexivkin mentioned he could use the print function to work around the issue. See https://github.com/icy/google-group-crawler/issues/29#issuecomment-468810786.

License

This work is released under the terms of the MIT license.

Author

This script is written by Anh K. Huynh.

He wrote this script because he couldn't solve the problem with nodejs, phantomjs, or Watir.

New web technology just makes life harder, doesn't it?

For script hackers

Please skip this section unless you really know how to work with Bash and shells.

  1. If you clean your files (as below), you may notice that re-downloading all files is very slow. Consider using the -rss option instead; it fetches data from an RSS feed.

    It's recommended to use the -rss option for daily updates. By default the number of items is 50; you can change it with the _RSS_NUM variable. However, don't use a very big number, because Google will ignore it.

  2. Because Topics is a FIFO list, you only need to remove the last file. The script will re-download the last item, and if there is a new page, that page will be fetched. The snippet below prints the last item of every topic:

     ls $_GROUP/msgs/m.* \
     | sed -e 's#\.[0-9]\+$##g' \
     | sort -u \
     | while read f; do
         last_item="$f.$( \
           ls $f.* \
           | sed -e 's#^.*\.\([0-9]\+\)#\1#g' \
           | sort -n \
           | tail -1 \
         )";
         echo $last_item;
       done
    
  3. The list of threads is a LIFO list. If you want to rescan your list, you will need to delete all files under $_D_OUTPUT/threads/

  4. You can set the timestamp of each mbox output file to the date of its message, as below:

     ls $_GROUP/mbox/m.* \
     | while read FILE; do \
         date="$( \
           grep ^Date: $FILE\
           | head -1\
           | sed -e 's#^Date: ##g' \
         )";
         touch -d "$date" $FILE;
       done
    

    This will be very useful, for example, when you want to use the mbox files with mhonarc.
