toadlyBroodle / Spam Bot 3000

Social media research and promotion, semi-autonomous CLI bot

Projects that are alternatives of or similar to Spam Bot 3000

Socialmanagertools Gui
🤖 👻 Desktop application for Instagram Bot, Twitter Bot and Facebook Bot
Stars: ✭ 293 (+270.89%)
Mutual labels:  bot, scraper, twitter, facebook, instagram
Skraper
Kotlin/Java library and cli tool for scraping posts and media from various sources with neither authorization nor full page rendering (Facebook, Instagram, Twitter, Youtube, Tiktok, Telegram, Twitch, Reddit, 9GAG, Pinterest, Flickr, Tumblr, IFunny, VK, Pikabu)
Stars: ✭ 72 (-8.86%)
Mutual labels:  scraper, twitter, facebook, instagram, reddit
Social ids
Get user ids from social network handlers
Stars: ✭ 9 (-88.61%)
Mutual labels:  cli, twitter, facebook, instagram
Instagramfirstcommenter
This bot will post a predefined comment as fast as possible to a new post on the target profile. I used this to successfully win tickets for a big music festival.
Stars: ✭ 26 (-67.09%)
Mutual labels:  bot, automation, instagram, selenium
Socialmanagertools Igbot
🤖 📷 Instagram Bot made with love and nodejs
Stars: ✭ 699 (+784.81%)
Mutual labels:  bot, instagram, social-media, selenium
Media Scraper
Scrapes all photos and videos in a web page / Instagram / Twitter / Tumblr / Reddit / pixiv / TikTok
Stars: ✭ 206 (+160.76%)
Mutual labels:  scraper, twitter, instagram, reddit
Nvidia Sniper
🎯 Autonomously buy Nvidia Founders Edition GPUs as soon as they become available.
Stars: ✭ 193 (+144.3%)
Mutual labels:  bot, automation, selenium, firefox
Instagram Bot
An Instagram bot developed using the Selenium Framework
Stars: ✭ 138 (+74.68%)
Mutual labels:  bot, automation, instagram, selenium
Instapy
📷 Instagram Bot - Tool for automated Instagram interactions
Stars: ✭ 12,473 (+15688.61%)
Mutual labels:  bot, automation, instagram, selenium
Brotab
Control your browser's tabs from the command line
Stars: ✭ 137 (+73.42%)
Mutual labels:  command-line-tool, automation, cli, firefox
Socialreaper
Social media scraping / data collection library for Facebook, Twitter, Reddit, YouTube, Pinterest, and Tumblr APIs
Stars: ✭ 338 (+327.85%)
Mutual labels:  twitter, facebook, reddit, social-media
Bash2mp4
Video Downloader for Termux .
Stars: ✭ 68 (-13.92%)
Mutual labels:  twitter, facebook, instagram
Singlefilez
Web Extension for Firefox/Chrome/MS Edge and CLI tool to save a faithful copy of an entire web page in a self-extracting HTML/ZIP polyglot file
Stars: ✭ 882 (+1016.46%)
Mutual labels:  cli, selenium, firefox
Instagram Profilecrawl
📝 quickly crawl the information (e.g. followers, tags etc...) of an instagram profile.
Stars: ✭ 816 (+932.91%)
Mutual labels:  automation, instagram, selenium
Memedensity
CLI tool to let you know amount of memes in facebook feed.
Stars: ✭ 44 (-44.3%)
Mutual labels:  cli, facebook, selenium
Feeds
Importiert Daten aus API-Quellen wie Facebook, Instagram, Twitter, YouTube, Vimeo oder RSS (ehemals YFeed)
Stars: ✭ 34 (-56.96%)
Mutual labels:  twitter, facebook, instagram
Social Amnesia
Forget the past. Social Amnesia makes sure your social media accounts only show your posts from recent history, not from "that phase" 5 years ago.
Stars: ✭ 656 (+730.38%)
Mutual labels:  twitter, reddit, social-media
Huginn
Create agents that monitor and act on your behalf. Your agents are standing by!
Stars: ✭ 33,694 (+42550.63%)
Mutual labels:  automation, scraper, twitter
Nemiro.oauth.dll
Nemiro.OAuth is a class library for authorization via OAuth protocol in .NET Framework
Stars: ✭ 45 (-43.04%)
Mutual labels:  twitter, facebook, instagram
Foxify Cli
💻 Firefox Command-Line Theme Manager 🦊 Inspired by spicetify-cli 🔥
Stars: ✭ 55 (-30.38%)
Mutual labels:  command-line-tool, cli, firefox

spam-bot-3000

A python command-line (CLI) bot for automating research and promotion on popular social media platforms (reddit, twitter, facebook, [TODO: instagram]). With a single command, scrape social media sites using custom queries and/or promote to all relevant results.

Please use with discretion, i.e. choose your input arguments wisely, or your bot could quickly find itself, along with any associated accounts, banned from these platforms. The bot has some built-in anti-spam filter avoidance features to help you remain undetected; however, no amount of avoidance can hide blatantly abusive use of this tool.

features

  • reddit
    • scrape subreddit(s) for lists of keywords, dump results in local file (red_scrape_dump.txt)
      • separate keyword lists for AND, OR, NOT search operations (red_subkey_pairs.json)
      • search new, hot, or rising categories
    • reply to posts in red_scrape_dump.txt with random promotion from red_promos.txt
      • ignore posts by marking them in dump file with "-" prefix
    • praw.errors.HTTPException handling
    • write all activity to log (log.txt)
  • twitter
    • maintain separate jobs for different promotion projects
    • update user status
    • unfollow users who don't reciprocate your follow
    • scan twitter for list of custom queries, dump results in local file (twit_scrape_dump.txt)
      • scan continuously or in overwatch mode
    • optional bypassing of proprietary twitter APIs and their inherent limitations
    • promotion abilities
      • tweepy api
        • follow original posters
        • favorite relevant tweets
        • direct message relevant tweets
        • reply to relevant tweets with random promotional tweet from file (twit_promos.txt)
      • Selenium GUI browser
        • favorite, follow, reply to scraped results while bypassing API limits
      • ignore tweets by marking them in dump file with "-" prefix
    • script for new keyword, hashtag research by gleaning scraped results
    • script for filtering out irrelevant keywords, hashtags, screen names
    • script for automating scraping, filtering, and spamming only most relevant results
    • relatively graceful exception handling
    • write all activity to log (log.txt)
  • facebook
    • zero reliance on proprietary facebook APIs and their inherent limitations
    • Selenium GUI browser agent
    • scrape public and private user profiles for keywords using AND, OR, NOT operators
      • note: access to private data requires logging in to an authorized account with the associated access
    • scrape public and private group feeds for keywords using AND, OR, NOT operators
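Both the reddit and twitter dump files use the same "-" prefix convention to mark results the bot should skip. A minimal sketch of that convention (the helper name is illustrative, not from the project source):

```python
def unignored_lines(dump_path):
    """Yield scrape-dump lines that are not marked ignored.

    Lines beginning with '-' (the ignore marker described above)
    and empty lines are skipped.
    """
    with open(dump_path) as f:
        for line in f:
            line = line.rstrip("\n")
            if line and not line.startswith("-"):
                yield line
```

The promotion steps would then iterate only over `unignored_lines("twit_scrape_dump.txt")` (or the reddit equivalent), leaving the marked lines untouched in the file.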

dependencies

  • install the dependencies you probably don't have already; errors will indicate any others you're missing
    • install pip3: sudo apt install python3-pip
    • install dependencies: pip3 install --user tweepy bs4 praw selenium

reddit initial setup

  • update 'praw.ini' with your reddit app credentials
  • replace example promotions (red_promos.txt) with your own
  • replace example subreddits and keywords (red_subkey_pairs.json) with your own
    • you'll have to follow the existing json format
    • keywords_and: all keywords in this list must be present for positive matching result
    • keywords_or: at least one keyword in this list must be present for positive match
    • keywords_not: none of these keywords can be present in a positive match
    • any of the three lists may be omitted by leaving it empty - e.g. "keywords_not": []

<praw.ini>

...

[bot1]
client_id=Y4PJOclpDQy3xZ
client_secret=UkGLTe6oqsMk5nHCJTHLrwgvHpr
password=pni9ubeht4wd50gk
username=fakebot1
user_agent=fakebot 0.1

<red_subkey_pairs.json>

{"sub_key_pairs": [
{
  "subreddits": "androidapps",
  "keywords_and": ["list", "?"],
  "keywords_or": ["todo", "app", "android"],
  "keywords_not": ["playlist", "listen"]
}
]}
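The three keyword lists amount to a simple predicate over post text. A minimal sketch of that matching rule, assuming case-insensitive substring matching (the function name is illustrative, not from the project source):

```python
def matches(text, keywords_and, keywords_or, keywords_not):
    """Return True when text satisfies the AND/OR/NOT keyword lists.

    An empty list imposes no constraint, mirroring the
    '"keywords_not": []' behaviour described above.
    """
    text = text.lower()
    return (all(k.lower() in text for k in keywords_and)
            and (not keywords_or
                 or any(k.lower() in text for k in keywords_or))
            and not any(k.lower() in text for k in keywords_not))
```

With the example lists above, a post titled "List of todo apps?" matches, while "Best playlist apps?" is rejected by the "playlist" NOT keyword.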

reddit usage

usage: spam-bot-3000.py reddit [-h] [-s N] [-n | -H | -r] [-p]

optional arguments:
  -h,	--help		show this help message and exit
  -s N,	--scrape N	scrape subreddits in subreddits.txt for keywords in red_keywords.txt; N = number of posts to scrape
  -n,	--new		scrape new posts
  -H,	--hot		scrape hot posts
  -r,	--rising	scrape rising posts
  -p,	--promote	promote to posts in red_scrape_dump.txt not marked with a "-" prefix
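The usage text above suggests argument wiring along these lines; this is a sketch inferred from the help output, not the project's actual source:

```python
import argparse

# Reconstruct the reddit subcommand from the usage text above.
parser = argparse.ArgumentParser(prog="spam-bot-3000.py")
subparsers = parser.add_subparsers(dest="platform")

reddit = subparsers.add_parser("reddit")
reddit.add_argument("-s", "--scrape", type=int, metavar="N",
                    help="number of posts to scrape per subreddit")
# -n | -H | -r are mutually exclusive category switches
category = reddit.add_mutually_exclusive_group()
category.add_argument("-n", "--new", action="store_true")
category.add_argument("-H", "--hot", action="store_true")
category.add_argument("-r", "--rising", action="store_true")
reddit.add_argument("-p", "--promote", action="store_true")

args = parser.parse_args(["reddit", "-s", "100", "-H"])
```

Passing `-n -H` together would then fail with an argparse error, matching the `[-n | -H | -r]` grouping shown in the usage line.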

twitter initial setup

<credentials.txt>

your_consumer_key
your_consumer_secret
your_access_token
your_access_token_secret
your_twitter_username
your_twitter_password
  • create new 'twit_promos.txt' in job directory to store your job's promotions to spam
    • individual tweets on separate lines
    • each line must be <= 140 characters long
  • create new 'twit_queries.txt' in job directory to store your job's queries to scrape twitter for
  • create new 'twit_scrape_dump.txt' file to store your job's returned scrape results
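The six lines of credentials.txt can be read positionally in the order shown above. A minimal sketch, assuming that order (the function and key names are illustrative, not from the project source):

```python
def load_credentials(path):
    """Read the six-line credentials.txt into a dict, in file order."""
    keys = ["consumer_key", "consumer_secret", "access_token",
            "access_token_secret", "username", "password"]
    with open(path) as f:
        values = [line.strip() for line in f]
    return dict(zip(keys, values))

# The first four values would then feed tweepy's OAuth flow, e.g.:
#   auth = tweepy.OAuthHandler(creds["consumer_key"], creds["consumer_secret"])
#   auth.set_access_token(creds["access_token"], creds["access_token_secret"])
# while the last two are what the Selenium browser mode would use to log in.
```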

twitter usage

usage: spam-bot-3000.py twitter [-h] [-j JOB_DIR] [-t] [-u UNF] [-s] [-c] [-e] [-b]
                          [-f] [-p] [-d]
spam-bot-3000
optional arguments:
 -h, --help		show this help message and exit
 -j JOB_DIR, --job JOB_DIR
	                choose job to run by specifying job's relative directory
 -t, --tweet-status 	update status with random promo from twit_promos.txt
 -u UNF, --unfollow UNF
                        unfollow users who aren't following you back, UNF=number to unfollow

 query:
 -s, --scrape		scrape for tweets matching queries in twit_queries.txt
 -c, --continuous	scrape continuously - suppress prompt to continue after 50 results per query
 -e, --english         	return only tweets written in English

spam -> browser:
 -b, --browser          favorite, follow, reply to all scraped results and
                        thwart api limits by mimicking human in browser!

spam -> tweepy api:
 -f, --follow		follow original tweeters in twit_scrape_dump.txt
 -p, --promote		favorite tweets and reply to tweeters in twit_scrape_dump.txt with random promo from twit_promos.txt
 -d, --direct-message	direct message tweeters in twit_scrape_dump.txt with random promo from twit_promos.txt

twitter example workflows

  1. continuous mode
    • -cspf scrape and promote to all tweets matching queries
  2. overwatch mode
    • -s scrape first
    • manually edit twit_scrape_dump.txt
      • add '-' to beginning of line to ignore
      • leave line unaltered to promote to
    • -pf then promote to remaining tweets in twit_scrape_dump.txt
  3. glean common keywords, hashtags, screen names from scrape dumps
    • bash gleen_keywords_from_twit_scrape.bash
      • input file: twit_scrape_dump.txt
      • output file: gleened_keywords.txt
        • results ordered by most occurrences first
  4. filter out keywords/hashtags from scrape dump
    • manually edit gleened_keywords.txt by removing all relevant results
    • filter_out_strings_from_twit_scrape.bash
      • keywords input file: gleened_keywords.txt
      • input file: twit_scrape_dump.txt
      • output file: twit_scrp_dmp_filtd.txt
  5. browser mode
    • -b thwart api limits by promoting to scraped results directly in firefox browser
      • add username and password to lines 5 and 6 of credentials.txt respectively
  6. automatic scrape, filter, spam
    • auto_spam.bash
      • automatically scrape twitter for queries, filter out results to ignore, and spam remaining results
  7. specify job
    • -j studfinder_example/ specify which job directory to execute

Note: if you don't want to maintain individual jobs in separate directories, you may create single credentials, queries, promos, and scrape dump files in the main working directory.

facebook initial setup

  • create new client folder in 'facebook/clients/YOUR_CLIENT'
  • create new 'jobs.json' file to store your client's job information in the following format:

<jobs.json>

{"client_data":
	{"name": "",
	"email": "",
	"fb_login": "",
	"fb_password": "",
	"jobs": [
		{"type": "groups",
			"urls": ["",""],
			"keywords_and": ["",""],
			"keywords_or": ["",""],
			"keywords_not": ["",""] },
		{"type": "users",
			"urls": [],
			"keywords_and": [],
			"keywords_or": [],
			"keywords_not": [] }
	]}
}
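Loading a client's jobs.json reduces to plain stdlib JSON handling. A minimal sketch, assuming the structure shown above (the helper name is illustrative, not from the project source):

```python
import json
import os

def load_jobs(client_dir):
    """Load a client's jobs.json (format shown above).

    Returns (fb_login, fb_password, jobs), where each job dict carries
    the 'type', 'urls', and AND/OR/NOT keyword lists for the scraper.
    """
    with open(os.path.join(client_dir, "jobs.json")) as f:
        client = json.load(f)["client_data"]
    return client["fb_login"], client["fb_password"], client["jobs"]
```

The scraper would then log in with the returned credentials and walk each job's `urls`, applying the same AND/OR/NOT keyword matching used for reddit.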

facebook usage

  • scrape user and group feed urls for keywords
    • facebook-scraper.py clients/YOUR_CLIENT/
      • results output to 'clients/YOUR_CLIENT/results.txt'

TODO

  • Flesh out an additional suite of promotion and interaction tools for the facebook platform
  • Organize platforms and their associated data and tools into their own folders and python scripts
  • Future updates will include modules for scraping and promoting to Instagram.