
MarshalX / telegram-crawler

License: MIT
🕷 Automatically detect changes made to the official Telegram sites, clients and servers.

Programming Languages

Python

Projects that are alternatives of or similar to telegram-crawler

BaiduSpider
The project has moved to https://github.com/BaiduSpider/BaiduSpider !! A crawler for Baidu search results; it currently supports Baidu web search, Baidu image search, Baidu Zhidao (Q&A) search, Baidu video search, Baidu news search, Baidu Wenku (documents) search, Baidu Jingyan (how-to) search, and Baidu Baike (encyclopedia) search.
Stars: ✭ 29 (-65.48%)
Mutual labels:  crawling, crawling-python
the-seinfeld-chronicles
A dataset for textual analysis on arguably the best written comedy television show ever.
Stars: ✭ 14 (-83.33%)
Mutual labels:  crawling
Colly
Elegant Scraper and Crawler Framework for Golang
Stars: ✭ 15,535 (+18394.05%)
Mutual labels:  crawling
core
The complete web scraping toolkit for PHP.
Stars: ✭ 1,110 (+1221.43%)
Mutual labels:  crawling
Memorious
Distributed crawling framework for documents and structured data.
Stars: ✭ 248 (+195.24%)
Mutual labels:  crawling
mal-analysis
github repo for MyAnimeList analysis. Also links to the MAL dataset.
Stars: ✭ 31 (-63.1%)
Mutual labels:  crawling
Nutch
Apache Nutch is an extensible and scalable web crawler
Stars: ✭ 2,277 (+2610.71%)
Mutual labels:  crawling
scrapy-fieldstats
A Scrapy extension to log items coverage when the spider shuts down
Stars: ✭ 17 (-79.76%)
Mutual labels:  crawling
xXx dead xXx
blogging in the dark
Stars: ✭ 19 (-77.38%)
Mutual labels:  crawling
scrape-github-trending
Tutorial for web scraping / crawling with Node.js.
Stars: ✭ 42 (-50%)
Mutual labels:  crawling
podcastcrawler
PHP library to find podcasts
Stars: ✭ 40 (-52.38%)
Mutual labels:  crawling
puppet-master
Puppeteer as a service hosted on Saasify.
Stars: ✭ 25 (-70.24%)
Mutual labels:  crawling
tech-seo-crawler
Build a small, 3 domain internet using Github pages and Wikipedia and construct a crawler to crawl, render, and index.
Stars: ✭ 57 (-32.14%)
Mutual labels:  crawling
Cdp4j
cdp4j - Chrome DevTools Protocol for Java
Stars: ✭ 232 (+176.19%)
Mutual labels:  crawling
auctus
Dataset search engine, discovering data from a variety of sources, profiling it, and allowing advanced queries on the index
Stars: ✭ 34 (-59.52%)
Mutual labels:  crawling
Antch
Antch, a fast, powerful and extensible web crawling & scraping framework for Go
Stars: ✭ 198 (+135.71%)
Mutual labels:  crawling
double-agent
A test suite of common scraper detection techniques. See how detectable your scraper stack is.
Stars: ✭ 123 (+46.43%)
Mutual labels:  crawling
crawling-framework
Easily crawl news portals or blog sites using Storm Crawler.
Stars: ✭ 22 (-73.81%)
Mutual labels:  crawling
diffbot-php-client
[Deprecated - Maintenance mode - use APIs directly please!] The official Diffbot client library
Stars: ✭ 53 (-36.9%)
Mutual labels:  crawling
socials
👨‍👩‍👦 Social account detection and extraction in Python, e.g. for crawling/scraping.
Stars: ✭ 37 (-55.95%)
Mutual labels:  crawling

🕷 Telegram Crawler

This project is developed to automatically detect changes made to the official Telegram sites and beta clients. This is useful for anticipating future updates and other news (new job vacancies, API updates, etc.).

Name                   Status
Data tracker           Fetch new content of tracked links and files
Site links collector   Generate or update the list of tracked links

  • passing – new changes
  • failing – no changes

You should subscribe to the alerts channel to stay updated. A copy of the Telegram websites and clients' resources is stored here.

GitHub pretty diff example

How it works

  1. Link crawling runs as often as possible. It starts from the home page of the site, detects relative and absolute sub-links, and repeats the operation recursively, writing out a list of unique links for later content comparison. Links can also be added by hand to help the script find hidden pages (pages that nothing links to). Exceptions are managed through a system of rules for the link crawler (see the first sketch after this list and the example configuration below).

  2. Content crawling is launched as often as possible and uses the list of links collected in step 1. It walks through that list, fetches the contents, and builds a tree of subfolders and files, removing all dynamic content from the files. It also downloads the beta version of the Android client, decompiles it, and tracks its resources; the resources of Telegram for macOS are tracked as well (see the second sketch after this list).

  3. Everything runs on GitHub Actions, so no servers of your own are needed: you can simply fork this repository and run your own tracker. The workflows launch the scripts and commit the changes. All file changes are tracked by Git and are displayed nicely on GitHub. A GitHub Actions run is expected to succeed only if there are changes on the Telegram websites; otherwise the workflow fails. If the build was successful, notifications can be sent to a Telegram channel, and so on.
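
The link-collection step can be pictured roughly as the sketch below. It is not the project's actual code: the start URL, the naive href regex, the use of the requests library, and the is_allowed placeholder are illustrative assumptions.

import re
import requests

HREF_RE = re.compile(r'href=[\'"]?([^\'" >]+)')  # naive link extractor

def is_allowed(url):
    # Placeholder for the rule system (see the CRAWL_RULES example below).
    return 'telegram.org' in url

def collect_links(url, collected):
    # Fetch the page and bail out quietly on network errors.
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return
    for href in HREF_RE.findall(html):
        absolute = requests.compat.urljoin(url, href)  # resolve relative links
        if absolute in collected or not is_allowed(absolute):
            continue
        collected.add(absolute)             # remember the unique link
        collect_links(absolute, collected)  # recurse into the new page

links = set()
collect_links('https://telegram.org/', links)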

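A content snapshot for a single tracked link might look roughly like the sketch below. The output directory layout, the example "dynamic" patterns that are stripped, and the function name are assumptions for illustration, not the project's implementation.

import re
from pathlib import Path
from urllib.parse import urlparse

import requests

# Example patterns that change on every page load and would produce noisy diffs.
DYNAMIC_PATTERNS = [
    re.compile(r'\?hash=[0-9a-f]+'),  # cache-busting query strings (illustrative)
    re.compile(r'nonce="[^"]+"'),     # per-request nonces (illustrative)
]

def save_snapshot(url, root='data/web'):
    text = requests.get(url, timeout=10).text
    for pattern in DYNAMIC_PATTERNS:
        text = pattern.sub('', text)  # drop dynamic content before diffing
    parsed = urlparse(url)
    # Mirror the URL as a folder/file tree, e.g. data/web/telegram.org/tos.html
    path = Path(root, parsed.netloc, parsed.path.strip('/') or 'index').with_suffix('.html')
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text, encoding='utf-8')

save_snapshot('https://telegram.org/tos')
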
FAQ

Q: How often is "as often as possible"?

A: TL;DR: the content update action runs every ~10 minutes. More info:

Q: Why are there two separate crawl scripts instead of one?

A: Because the original idea was to update the tracked links once an hour, and it was more convenient to use separate scripts and workflows. After the Telegram 7.7 update, I realised that discovering new blog posts that slowly was a bad idea.

Q: Why does the script for sending alerts have a while loop?

A: Because the GitHub API doesn't return information about a commit immediately after it is pushed to the repository, so the script waits for the information to appear...
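
A rough sketch of such a waiting loop is shown below; the repository name is real, but the retry count, delay, and function name are illustrative assumptions. A personal access token (see the questions below) is passed in the Authorization header to raise the API rate limits.

import time
import requests

def wait_for_commit(sha, token, repo='MarshalX/telegram-crawler'):
    # Right after a push the commit may not yet be visible through the API,
    # so poll the commits endpoint until it appears (or give up).
    url = f'https://api.github.com/repos/{repo}/commits/{sha}'
    headers = {'Authorization': f'token {token}'}
    for _ in range(30):
        response = requests.get(url, headers=headers, timeout=10)
        if response.status_code == 200:
            return response.json()  # commit information is finally available
        time.sleep(10)              # not there yet, wait and retry
    return None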

Q: Why are you using a GitHub Personal Access Token in the actions/checkout workflow step?

A: To be able to trigger other workflows with the on push trigger. More info:

Q: Why are you using GitHub PAT in make_and_send_alert.py?

A: To increase the GitHub API rate limits.

Q: Why are you decompiling the .apk file on each run?

A: Because it doesn't take much time. Only the resources are decompiled (the -s flag of apktool disables disassembly of the dex files). Writing a check based on the hash of the apk file to decide whether decompilation is needed would take more time.
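
For illustration only, such a resource-only decompilation could be invoked as below; the file and directory names are made up, and only the apktool d and -s (skip sources) behaviour is taken from the answer above.

import subprocess

# Decompile only the resources of the beta client:
#   -s / --no-src skips disassembly of the dex files,
#   -f overwrites the previous output directory.
subprocess.run(
    ['apktool', 'd', '-s', '-f', 'telegram-beta.apk', '-o', 'data/client/android'],
    check=True,
)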

Example of link crawler rules configuration

CRAWL_RULES = {
    # every rule is regex
    # empty string means match any url
    # allow rules with higher priority than deny
    'translations.telegram.org': {
        'allow': {
            r'^[^/]*$',  # root
            r'org/[^/]*/$',  # 1 lvl sub
            r'/en/[a-z_]+/$'  # 1 lvl after /en/
        },
        'deny': {
            '',  # all
        }
    },
    'bugs.telegram.org': {
        'deny': {
            '',    # deny all sub domain
        },
    },
}
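
As a rough illustration of how such rules could be applied, the sketch below checks a URL against the configuration above: allow patterns are checked before deny patterns, and an empty pattern matches any URL. The function name and the default behaviour for hosts without rules are assumptions, not the project's code.

import re

def is_url_allowed(url, host, rules=CRAWL_RULES):
    host_rules = rules.get(host, {})
    # Allow rules have higher priority than deny rules.
    if any(re.search(pattern, url) for pattern in host_rules.get('allow', ())):
        return True
    if any(re.search(pattern, url) for pattern in host_rules.get('deny', ())):
        return False
    return True  # hosts without rules are crawled by default (assumption)

# 'en/android/' on translations.telegram.org matches the allow rule r'/en/[a-z_]+/$',
# while any URL on bugs.telegram.org hits the catch-all deny rule.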

Current list of hidden URLs

HIDDEN_URLS = {
    # 'corefork.telegram.org', # disabled

    'telegram.org/privacy/gmailbot',
    'telegram.org/tos',
    'telegram.org/tour',
    'telegram.org/evolution',

    'desktop.telegram.org/changelog',
}

License

Licensed under the MIT License.
