
thoppe / today-AI-learned

License: other
Training a classifier on reddit's TIL to find new things on Wikipedia

Programming Languages

Python
139335 projects - #7 most used programming language
Makefile
30231 projects

Projects that are alternatives to or similar to today-AI-learned

scripts
A collection of random scripts I coded up
Stars: ✭ 17 (-51.43%)
Mutual labels:  reddit
vosonSML
R package for collecting social media data and creating networks for analysis.
Stars: ✭ 65 (+85.71%)
Mutual labels:  reddit
roux
Simple and (a)synchronous Reddit API wrapper for Rust.
Stars: ✭ 41 (+17.14%)
Mutual labels:  reddit
wistalk
Wistalk: Analyze Wikipedia users' activity
Stars: ✭ 19 (-45.71%)
Mutual labels:  wikipedia
wikipedia-preview
wikimedia.github.io/wikipedia-preview/main
Stars: ✭ 42 (+20%)
Mutual labels:  wikipedia
reddit2telegram
Bot to supply a Telegram channel with hot Reddit submissions.
Stars: ✭ 202 (+477.14%)
Mutual labels:  reddit
cat-message
Finds cat images/videos/gifs on reddit, sends them to my mom via applescript
Stars: ✭ 35 (+0%)
Mutual labels:  reddit
insta reddit bot
[UNMAINTAINED] A bot which pulls images from Reddit and uploads them to Instagram. Former source code of @me_irl_bot
Stars: ✭ 26 (-25.71%)
Mutual labels:  reddit
wikifox
A clean and simplified Wikipedia powered by wikifox.js
Stars: ✭ 50 (+42.86%)
Mutual labels:  wikipedia
space-wiki
太空维基 (Space Wiki): A Chrome plug-in for Wikipedia
Stars: ✭ 19 (-45.71%)
Mutual labels:  wikipedia
wikibot
A 🤖 which provides features from Wikipedia, like summaries, title searches, a location API, etc.
Stars: ✭ 25 (-28.57%)
Mutual labels:  wikipedia
OddshotConverter
Gets oddshot.tv clips posted on Reddit and converts them into YouTube videos.
Stars: ✭ 48 (+37.14%)
Mutual labels:  reddit
analyzing-reddit-sentiment-with-aws
Learn how to use Kinesis Firehose, AWS Glue, S3, and Amazon Athena by streaming and analyzing reddit comments in realtime. 100-200 level tutorial.
Stars: ✭ 40 (+14.29%)
Mutual labels:  reddit
ducky
Chrome extension to overlay a (super adorable) rubber duck, as a virtual companion during rubber duck debugging.
Stars: ✭ 80 (+128.57%)
Mutual labels:  reddit
naacl2018-fever
Fact Extraction and VERification baseline published in NAACL2018
Stars: ✭ 109 (+211.43%)
Mutual labels:  wikipedia
Spell4Wiki
Spell4Wiki is a mobile application to record and upload audio for Wiktionary words to Wikimedia Commons. It also acts as a Wiki-Dictionary.
Stars: ✭ 17 (-51.43%)
Mutual labels:  wikipedia
orca
C Multi-REST API library for Discord, Slack, Reddit, etc.
Stars: ✭ 360 (+928.57%)
Mutual labels:  reddit
awesome-alternatives
A list of alternative websites/software to popular proprietary services.
Stars: ✭ 123 (+251.43%)
Mutual labels:  wikipedia
TelegramBot-Go
Telegram bot that searches Wikipedia for information, written in Go
Stars: ✭ 35 (+0%)
Mutual labels:  wikipedia
subreddit-archiver
Python utility to create and keep up-to-date archives of reddit subreddits, stored as SQLite databases.
Stars: ✭ 21 (-40%)
Mutual labels:  reddit

today-AI-learned

Hello reddit! I'm the semi-autonomous bot u/possible_urban_king

TL;DR: I was created to mine reddit's r/today-I-learned (TIL) subreddit for new and interesting things using machine learning. If karma/upvotes measure success, I passed the Turing test.


Press & Presentations


BuzzFeed News: Meet The Man Who Gamed Reddit With A Bot

H&&T: Round 20: Severe Municipal Jazz, May 11, 2015, presentation link

Data Science DC: Lightning Talks! (IV), September 29th, 2015, presentation link


Description

From the author, Travis Hoppe:

It is an exciting time right now if you're interested in Machine Learning. With modest effort, anyone with an idea can transform it into a working algorithm. I've been a fan of the subreddit r/today-I-learned and I always found it interesting that top posts would build upon my current knowledge and append a new factoid. In contrast to traditional machine learning tasks such as image recognition or time-series prediction, the concept of an interesting post is vague and undefined. This makes it an exciting topic to study!

The metric for a successful post on reddit is the upvote. These votes are an aggregated poll over the reddit vox populi, and in a limited sense constitute tests for intelligence. The TIL subreddit in particular requires higher-order cognitive skills from Bloom's Taxonomy, like Knowledge, Synthesis, and Evaluation. If a machine were to act like a (human) redditor, it would have to emulate these submissions with novel posts of its own.

In this context u/possible_urban_king passes the Turing test. Over the last three months I've been running an experiment, posting about 50 submissions to TIL. The bot's posts have made it to the front page multiple times, and the majority of posts are well-received (see results).

The bot was trained over a selection of previously successful TIL posts (see methods) that used Wikipedia as a source. Classification worked well, sometimes too well. I found that media characters (books, movies, etc.) were disproportionately tagged as interesting. These characters would be interesting too, if only they were real people! Additionally, sections in Wikipedia that were salacious or required a [Citation Needed] were often removed by the time they were to be posted.

  • Semi-autonomous?

It turns out that writing the title of a post is really hard, and ultimately I decided that this was outside the scope of the experiment. For all of the posts, I wrote the title and submitted it by hand. I was, however, limited to using the information taken from the paragraph marked by the bot.

  • Which algorithm/classifier?

Extremely Randomized Trees (ExtraTrees).

  • Why the name possible_urban_king?

It's a colorless green idea.


Results

Upvotes Post
4726 TIL The Founder Of Japans Mcdonalds Stated
4123 TIL Mike Kurtz An American Burglar Found Out That
2899 TIL A Woman That Reported 100 Incidents Of
1551 TIL During The Sentencing Of His War Crimes Trial
1144 TIL That Art Spiegelman The Creator Of Maus A
640 TIL That Once Officially Labeled As Retarded
498 TIL Before World War Ii It Was Very Rare For
142 TIL That A Study Showed Those With A Distressed
135 TIL Frankie Fraser A Notorious English Gangster
68 TIL Rafael Quintero A Mexican Drug Trafficker
55 TIL The Summer Of Shark Refers To The Medias
49 TIL The Indian Head Eagle Coin Minted In America
42 TIL There Is A 1 Million Dollar Prize For
42 TIL A Murder Victim Was Dismembered So Precisely
40 TIL Daigo Fukuryu Maru A Japanese Fishing Boat
38 TIL It Was 1883 When Kerckhoff Laid Out The
38 TIL An Overcrowded Trailer Carrying 70 People To
37 TIL Machon Ayalon Was A Secret Underground Bullet
36 TIL That Joe Pullen An Africanamerican Tenant
29 TIL Peter Fat Pete Chiodo A Capo In The Lucchese
29 TIL Pinochets Government In Chile 19731990 Had A
24 TIL During Wwi The British Forbade Incendiary
24 TIL That Even Professional Herbalists Avoid The
21 TIL Hm Prison Liverpool Charges Prisoners To
17 TIL Women In Norway That Fraternized With German
16 TIL The Saab 96 Engine Was Tested Under Extreme
15 TIL That The Male Clouded Leopard Is Extremely
15 TIL The Tactic Of Marching Fire Where Rounds Are
12 TIL Captain James Cook Was Killed While
10 TIL Chrysomya Rufifacies Are Usually The First
10 TIL Oskar Daubmann Was A Con Man Who Convinced
9 TIL Captain Strong Is A Dc Clone Of Popeye Except
8 TIL Peter Sawyer Is Credited As The First
7 TIL The Stock Expression Thats A Joke Son Came
6 TIL In The Summer Of 2011 Three Enforcers Ice
6 TIL Morality Follows In The Wake Of Malt Liquors
5 TIL Frances Parker A British Suffragette Was
4 TIL Of The Rogue Elephant Of Aberdare Forest An
4 TIL Nasenbluten A Band Credited For Pioneering
3 TIL Sulfa The First Effective Antibiotic
2 TIL Fiddlin John Carson An American Oldtime
2 TIL While Investigating The Phenomena Of Entombed
2 TIL There Is A Hazemaking Compound That Designers
2 TIL During Kobe Bryants Sexual Assault Case It
1 TIL Of The Worst Deal Made In The Dotcom Era The
1 TIL Primate Experiments At Cambridge Incorrectly
1 TIL The Mushroom Poisonous Mushroom Hapalopilus
1 TIL Sahar Gul Was An Afghan Teenager Who Was
1 TIL That Up Until 1996 Japan Had A Law To Stop
0 TIL Prior To The Commencement Of An Elimination
0 TIL Former Congressman Cleo Fields Achieved
0 TIL There Is A Canadian Bill Called The Blood
0 TIL There Was A NC Sheriff That Dressed In

Methods

In the interests of scientific reproducibility, all of the code used in the experiment is hosted in this project. If you'd like to repeat the experiment yourself, however, it will require a bit of tinkering to get it to work on your system. A zipped sqlite3 database of the raw paragraphs marked as interesting can be found in db/report.db.bz2. Feel free to fork and do whatever you like with this repo as long as you follow the CC Attribution 3.0 license.

Data collection

Supervised machine learning requires a massive tagged collection of high-quality data to be effective. Fortunately, past submissions to r/TIL provide just that. Redditors have carefully curated a selection of posts that they collectively find interesting through their voting system. We can filter these posts to just those that point to Wikipedia as a source. This way, the source of each post uses a somewhat standardized language and grammar.

src/subreddit_dl.py

Initially I started with the top 1000 posts of all time (due to an API restriction in reddit's search), collected using praw. Ultimately, however, I extended that to all posts with a score > 1000 in the years 2013 and 2014 (resulting in about 5000 quality TIL posts), using an alternate database.
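A minimal sketch of this collection step, assuming PRAW 7 and hypothetical credentials (the real logic lives in src/subreddit_dl.py and may differ):

# Sketch: collect top TIL posts that cite Wikipedia (credentials are placeholders).
import praw

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="TIL-research")

wiki_posts = []
for post in reddit.subreddit("todayilearned").top(time_filter="all", limit=1000):
    if "wikipedia.org" in post.url:   # keep only Wikipedia-sourced submissions
        wiki_posts.append((post.title, post.url, post.score))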

src/wikipedia_dl.py

From here it is relatively easy to download a pared-down version of the wiki page linked to by the reddit post.
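One way to grab such a pared-down, plain-text version is the MediaWiki API's extracts property; a minimal sketch (src/wikipedia_dl.py may do this differently):

# Sketch: fetch the plain-text extract of an article via the MediaWiki API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_plaintext(title):
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,    # strip markup, return plain text
        "titles": title,
        "format": "json",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")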

Data wrangling

src/attribute_TIL.py

Somehow, we have to link the pithy one-line TIL title to the correct paragraph in the Wikipedia article. This is a non-trivial task, as simple word frequencies are not enough. Ultimately I settled on a sort of "word-entropy": each paragraph was stripped to its unique words, and these sets formed a frequency vector for each paragraph. The vectors were normalized so that words unique to a paragraph carried more weight. We then compared the title of the TIL post to each paragraph's vector and settled on the paragraph with the closest match. This turns out to work surprisingly well.
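A rough sketch of this matching scheme as described (the names here are illustrative, not the repo's):

# Sketch: match a TIL title to the paragraph with the most distinctive
# word overlap; rarer words carry more weight ("word-entropy").
from collections import Counter

def best_paragraph(title, paragraphs):
    title_words = set(title.lower().split())
    # Document frequency: in how many paragraphs does each word appear?
    doc_freq = Counter(w for p in paragraphs for w in set(p.lower().split()))
    def score(p):
        words = set(p.lower().split())
        return sum(1.0 / doc_freq[w] for w in words & title_words)
    return max(paragraphs, key=score)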

Additionally, I saved the non-matching paragraphs as some useful false positives.

src/build_decoy_db.py

The next step was to prep the Wikipedia corpus. Using a full XML corpus of Wikipedia (not provided; parsed with bs4), I tokenized and stemmed each paragraph of text for each article. This uses nltk for word tokenization and stop words, and the porter2 stemmer from the aptly named package stemming.
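The per-paragraph processing might look roughly like this (requires nltk's punkt and stopwords data, plus the stemming package):

# Sketch: tokenize a paragraph, drop stop words, and stem what remains.
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from stemming.porter2 import stem

STOP = set(stopwords.words("english"))

def process(paragraph):
    tokens = [w.lower() for w in word_tokenize(paragraph) if w.isalpha()]
    return [stem(w) for w in tokens if w not in STOP]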

This creates a rather massive SQLite database with each paragraph and the associated metadata (title, paragraph number, word-entropy, ...). Since there are many millions of assorted paragraphs (and I assume very few of them are interesting), I am going to use a random sampling of these as true negatives in my machine learning.
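Drawing that sample is a one-liner in SQLite; the table and column names below are assumptions, not the repo's actual schema:

# Sketch: sample random paragraphs as true negatives (schema is assumed).
import sqlite3

conn = sqlite3.connect("decoy.db")
decoys = conn.execute(
    "SELECT title, par_idx, text FROM paragraphs ORDER BY RANDOM() LIMIT 5000"
).fetchall()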

Machine Learning

src/build_features.py

Initially, I experimented with simple word frequencies as my feature vector. While this works for toy problems, the corpus of Wikipedia needed a smarter way to condense the data. Fortunately, a neat textual feature generator, Word2Vec (developed by Google), is available in gensim.

Using Word2Vec requires two complete passes over the data, though it allows you to use an iterator, keeping the memory requirements rather small.
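A minimal sketch of the two-pass, iterator-driven training, assuming a recent gensim (4.x; older versions spell vector_size as size):

# Sketch: stream the corpus from disk so memory stays small.
from gensim.models import Word2Vec

class CorpusIterator:
    # Yields one tokenized paragraph per line; restartable for multiple passes.
    def __init__(self, path):
        self.path = path
    def __iter__(self):
        with open(self.path) as f:
            for line in f:
                yield line.split()

sentences = CorpusIterator("corpus.txt")   # hypothetical one-paragraph-per-line file
model = Word2Vec(vector_size=100, min_count=5)
model.build_vocab(sentences)                                          # pass one
model.train(sentences, total_examples=model.corpus_count, epochs=5)   # pass two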

src/train.py

Here, perhaps, lies the most contentious part of the project: the construction of the classifier. In the end, I settled on the Extremely Randomized Trees implementation in scikit-learn. This classifier, while fairly poor at detecting new true positives (about 10%), was extremely proficient at marking the true negatives. Since the assumption is that most of Wikipedia is, in fact, quite boring, this helps narrow down the results immensely.

Training classifier
Test Accuracy: 0.878
Test Accuracy on TP: 0.116
Test Accuracy on TN: 0.998
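The training itself is only a few lines of scikit-learn; a sketch, with the feature matrices assumed to come from build_features.py and illustrative hyperparameters:

# Sketch: train and evaluate the Extremely Randomized Trees classifier.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

X = np.load("features.npy")   # assumed output of build_features.py
y = np.load("labels.npy")     # 1 = interesting, 0 = decoy

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = ExtraTreesClassifier(n_estimators=200, n_jobs=-1)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))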

src/score.py

With the classifier trained, the next step is to score each and every paragraph in Wikipedia. The classifier marks about 6 per 10,000 as potential candidates.
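Continuing the sketch above, the scoring pass might look like this; iterate_paragraph_vectors is a hypothetical helper standing in for the database walk in score.py:

# Sketch: flag paragraphs whose positive-class probability clears a threshold.
candidates = []
for meta, vec in iterate_paragraph_vectors():   # hypothetical helper
    if clf.predict_proba([vec])[0, 1] > 0.9:    # threshold is illustrative
        candidates.append(meta)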

src/report.py

With the positives marked, we need to render the potentially interesting things in a human-readable format! report.py builds a new database that contains only the positive entries and the associated Wikipedia text from the original source.

src/cross_reference.py

Nobody likes a repost (unless it's better, or more aptly timed...), so we need to find out what has already been posted to reddit. To do so, we need a proper search name for the Wikipedia article. The module mediawiki-utils can do this, but stupidly requires python3. Thus the cross-reference program makes a system call to properly encode the name as a search query for reddit. We then take the top search result (if one exists) and store it; this info serves as the criterion for deciding post versus repost.
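Checking for a prior post might look like this with praw's subreddit search (a sketch; the repo instead shells out to a python3 helper for the name encoding):

# Sketch: look up whether an article title already appears in r/TIL.
def already_posted(reddit, article_title):
    query = 'title:"{}"'.format(article_title)
    results = reddit.subreddit("todayilearned").search(query, limit=1)
    return next(iter(results), None)   # top hit, or None if nothing found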

src/plot_times.py

With the potential TIL candidates identified, let's find the best time to post! Note that we are going to posit that post time has a causal relationship with the ultimate score. Since reddit is dynamic and viewership depends on a steady stream of upvotes, this should be a reasonable assumption. Going back over our training set, we can map the distribution of times for an r/TIL post:

It seems like the sweet spot for a submission is between 9 AM and 11 AM!
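A minimal sketch of how such a distribution can be plotted, assuming created_utc_times is a list of UTC timestamps from the training set:

# Sketch: histogram of submission hours (the input list is assumed).
import datetime
import matplotlib.pyplot as plt

hours = [datetime.datetime.utcfromtimestamp(t).hour for t in created_utc_times]
plt.hist(hours, bins=24, range=(0, 24))
plt.xlabel("Hour of submission (UTC)")
plt.ylabel("Number of posts")
plt.show()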

What about the bottom r/TIL posts, those with a score < 1000? Considering only the ones our algorithm found, the posting time is dramatically different:

src/mine_submissions.py

Since we are going to have a few false positives, I set up a simple script to help determine quality TILs. A random unlabeled TIL that hasn't been posted already is pulled from the database and displayed both on screen and in the browser, to quickly determine if it is "something worth learning". The script shows both the tagged interesting paragraph and the corresponding Wikipedia page, and a simple prompt allows you to mark an item to post later.
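The review loop can be as simple as the following sketch (database and column names are assumptions, not the repo's schema):

# Sketch: pull a random unposted candidate, show it, and prompt for a verdict.
import sqlite3
import webbrowser

conn = sqlite3.connect("db/report.db")
row = conn.execute(
    "SELECT rowid, title, text, url FROM candidates "
    "WHERE reviewed = 0 ORDER BY RANDOM() LIMIT 1"
).fetchone()

rowid, title, text, url = row
print(title + "\n\n" + text)
webbrowser.open(url)   # show the full Wikipedia page alongside
if input("Worth posting? [y/N] ").lower() == "y":
    conn.execute("UPDATE candidates SET flagged = 1 WHERE rowid = ?", (rowid,))
    conn.commit()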


License

CC Attribution 3.0.
