
mez-0 / linky

License: MIT License
Yet Another LinkedIn Scraper...

Programming Languages

python

Projects that are alternatives to, or similar to, linky

Operative Framework
operative framework is an OSINT investigation framework: you can interact with multiple targets, execute multiple modules, create links between targets, export reports to PDF, add notes to targets or results, interact with a RESTful API, and write your own modules.
Stars: ✭ 511 (+1061.36%)
Mutual labels:  scraper, osint, linkedin
onedrive user enum
onedrive user enumeration - pentest tool to enumerate valid onedrive users
Stars: ✭ 223 (+406.82%)
Mutual labels:  osint, enumeration, user-enumeration
linkedinscraper
LinkedinScraper is another information-gathering tool written in Python. You can scrape a company's employees on Linkedin.com and collect their names, titles, and emails.
Stars: ✭ 22 (-50%)
Mutual labels:  scraper, osint, linkedin
Scrape Linkedin Selenium
`scrape_linkedin` is a python package that allows you to scrape personal LinkedIn profiles & company pages - turning the data into structured json.
Stars: ✭ 239 (+443.18%)
Mutual labels:  scraper, linkedin
Instaloctrack
An Instagram OSINT tool to collect all the geotagged locations available on an Instagram profile, plot them on a map, and dump them to a JSON file.
Stars: ✭ 85 (+93.18%)
Mutual labels:  scraper, osint
Youtube Comment Suite
Download YouTube comments from numerous videos, playlists, and channels for archiving, general search, and showing activity.
Stars: ✭ 120 (+172.73%)
Mutual labels:  scraper, osint
Gosint
OSINT Swiss Army Knife
Stars: ✭ 401 (+811.36%)
Mutual labels:  scraper, osint
gHarvester
Proof of concept for a security issue (in my opinion) that I found in accounts.google.com
Stars: ✭ 20 (-54.55%)
Mutual labels:  scraper, osint
findcdn
findCDN is a tool created to help accurately identify what CDN a domain is using.
Stars: ✭ 64 (+45.45%)
Mutual labels:  osint, enumeration
bing-ip2hosts
bingip2hosts is a Bing.com web scraper that discovers websites by IP address
Stars: ✭ 99 (+125%)
Mutual labels:  scraper, osint
Linkedin-Client
Web scraper for grabbing data from Linkedin profiles or company pages (personal project)
Stars: ✭ 42 (-4.55%)
Mutual labels:  scraper, linkedin
dorkscout
DorkScout - Golang tool to automate Google dork scans against the entire internet or specific targets
Stars: ✭ 189 (+329.55%)
Mutual labels:  scraper, osint
Reconky-Automated Bash Script
Reconky is a content-discovery Bash script for bug bounty hunters that automates many tasks and organizes the results in a well-structured form to help them move forward.
Stars: ✭ 167 (+279.55%)
Mutual labels:  osint, enumeration
Scrapedin
LinkedIn Scraper (currently working 2020)
Stars: ✭ 453 (+929.55%)
Mutual labels:  scraper, linkedin
Linkedin Profile Scraper
🕵️‍♂️ LinkedIn profile scraper returning structured profile data in JSON. Works in 2020.
Stars: ✭ 171 (+288.64%)
Mutual labels:  scraper, linkedin
Linkedin scraper
A library that scrapes Linkedin for user data
Stars: ✭ 413 (+838.64%)
Mutual labels:  scraper, linkedin
AzureAD Autologon Brute
Brute force attack tool for Azure AD Autologon/Seamless SSO - Source: https://arstechnica.com/information-technology/2021/09/new-azure-active-directory-password-brute-forcing-flaw-has-no-fix/
Stars: ✭ 90 (+104.55%)
Mutual labels:  enumeration, user-enumeration
Linkedin
Linkedin Scraper using Selenium Web Driver, Chromium headless, Docker and Scrapy
Stars: ✭ 309 (+602.27%)
Mutual labels:  scraper, linkedin
Osi.ig
Information gathering for Instagram.
Stars: ✭ 377 (+756.82%)
Mutual labels:  scraper, osint
LinkedIn-Scraper
A LinkedIn Scraper to scrape up to 10k LinkedIn profiles from company profile links and save their e-mail addresses if available!
Stars: ✭ 62 (+40.91%)
Mutual labels:  scraper, linkedin

Linky


Yet another LinkedIn Scraper...

Linky is yet another LinkedIn scraper, inspired by vysecurity and his LinkedInt project.

Currently, this method of extracting data from LinkedIn is limited to 1,000 users at a time. To help with that, Linky's HTML output includes a small table at the bottom of the page listing the five most common occupations found. This way, if a company uses an unusual naming scheme for its developers, Linky should still be able to spot it and report it back. With those data points, the --keyword flag can then be used to filter the output.
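
As a rough illustration of what that summary table does (a minimal sketch, not Linky's actual implementation), counting the most common occupations from a list of scraped job titles could look like this:

from collections import Counter

# Hypothetical job titles as scraped from LinkedIn search results.
job_titles = [
    "Software Developer",
    "Systems Developer",
    "Software Developer",
    "Project Manager",
    "Senior Software Developer",
]

# Count each title and keep the five most common, as in the HTML summary table.
for title, count in Counter(job_titles).most_common(5):
    print(f"{title}: {count}")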


Note

This is no longer maintained. As far as I know, the validation method via O365 has been patched. I also removed the blog post detailing this a while ago, so for clarity: the cookie.txt referenced in this README contains the li_at cookie from LinkedIn.
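
For context, li_at is LinkedIn's session cookie. A minimal sketch of how a scraper could attach it to its requests (the cookie.txt filename matches this README; the rest is illustrative, not Linky's exact code):

import requests

# cookie.txt is expected to hold only the li_at cookie value.
with open("cookie.txt") as f:
    li_at = f.read().strip()

session = requests.Session()
session.cookies.set("li_at", li_at, domain=".linkedin.com")

# Any authenticated LinkedIn page should now be reachable with this session.
response = session.get("https://www.linkedin.com/feed/")
print(response.status_code)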


Installing

pip3 install -r requirements.txt

Help Page

usage: linky.py [-h] [-c] [-i] [-k] [-d] [-o] [-f] [-v] [-a] [-t]
                [--valid-emails-only] [--verbose] [--debug]
                [--list-email-schemes | --version]

Yet another LinkedIn scraper.

optional arguments:
  -h, --help            show this help message and exit
  -c , --cookie         Cookie to authenticate to LinkedIn with [li_at]
  -i , --company-id     Company ID number
  -k , --keyword        Keyword for searches
  -d , --domain         Company domain name
  -o , --output         File to output to: Writes CSV, JSON and HTML.
  -f , --format         Format for email addresses
  -v , --validate       Validate email addresses: O365/Hunter API
  -a , --api            API Key for Hunter API
  -t , --threads        Amount of threads to use [default 5]
  --valid-emails-only   When you literally only want a txt of valid emails.
  --verbose             Verbosity of the output
  --debug               Enable debugging, will spam.
  --list-email-schemes  List available email schemes
  --version             Print current version

Example: python3 linky.py --cookie cookie.txt --company-id 1441 --domain
google.com --output google_employees --format 'firstname.surname'

Usage

Get Employees

python3 linky.py --cookie cookie.txt --company-id 1441 --domain google.com --output google_employees --format 'firstname.surname'

Get Employees with keyword

python3 linky.py --cookie cookie.txt --company-id 1441 --domain google.com --output google_employees --format 'firstname.surname' --keyword developer

Supported email formats

Run linky.py --list-email-schemes to see all current formats:

firstname.surname:john.doe
firstnamesurname:johndoe
f.surname:j.doe
fsurname:jdoe
surname.firstname:doe.john
surnamefirstname:doejohn
s.firstname:d.john
sfirstname:djohn
firstname.msurname:john.jdoe

They can all be referenced in --format, e.g.:

f.surname: --format f.surname
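
To make the schemes concrete, here is a small sketch (not Linky's code) of how a chosen scheme could be applied to a scraped name and a target domain:

def format_email(first_name, surname, domain, scheme="f.surname"):
    # Local parts for a subset of the schemes listed by --list-email-schemes.
    local_parts = {
        "firstname.surname": f"{first_name}.{surname}",
        "firstnamesurname": f"{first_name}{surname}",
        "f.surname": f"{first_name[0]}.{surname}",
        "fsurname": f"{first_name[0]}{surname}",
        "surname.firstname": f"{surname}.{first_name}",
        "sfirstname": f"{surname[0]}{first_name}",
    }
    return f"{local_parts[scheme]}@{domain}".lower()

print(format_email("John", "Doe", "google.com", scheme="f.surname"))  # j.doe@google.com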

Job Role Count

By default, Linky will count the occurrence of job roles and write it out to HTML. It will also do so in a standard JSON file. The structure is as seen below:

{
  "Software Developer": 24,
  "Systems Developer": 14,
  "Senior Software Developer": 11,
  "Project Manager": 10,
  "System Developer": 9,
  "Cyber Security Consultant": 7,
  "Project Developer": 7,
  "Programme Manager": 6,
  "Software Architect": 6,
  "Development Manager": 6
}
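
If you prefer Python over jq, a short sketch of reading that file back (assuming the job_role_count.json filename used in the next section) could be:

import json

# Load the job role counts written alongside the CSV/JSON/HTML output.
with open("job_role_count.json") as f:
    role_counts = json.load(f)

# Sort roles by frequency to pick keywords for a follow-up --keyword run.
for role, count in sorted(role_counts.items(), key=lambda item: item[1], reverse=True):
    print(f"{count:>4}  {role}")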

Efficient usage

  1. Run once to gain the initial data:

python3 linky.py --cookie cookie.txt --company-id 1441 --domain google.com --output google_employees --format 'firstname.surname'

  2. Find the job role occurrence:

    cat job_role_count.json | jq

  3. With the roles identified, use the keyword feature:

python3 linky.py --cookie cookie.txt --company-id 1441 --domain google.com --output google_employees --format 'firstname.surname' --keyword developer

Only print a list of validated email addresses

The --valid-emails-only flag performs the same level of enumeration, but only outputs validated emails to a txt file. This also assumes O365 validation.

python3 linky.py --cookie cookie.txt --company-id 1441 --domain google.com --output google_employees --format 'firstname.surname' --keyword developer --valid-emails-only

From this command, a txt file will be created containing only the emails that were found to be valid via O365.
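
For background only, since the note above says this has been patched: O365 validation of this kind generally relied on Microsoft's GetCredentialType endpoint, roughly as sketched below. This illustrates the technique, not Linky's exact code.

import requests

def o365_user_exists(email):
    # Historically, IfExistsResult == 0 from GetCredentialType indicated a
    # valid account; this behaviour has reportedly been changed since.
    response = requests.post(
        "https://login.microsoftonline.com/common/GetCredentialType",
        json={"Username": email},
        timeout=10,
    )
    return response.json().get("IfExistsResult") == 0

print(o365_user_exists("john.doe@google.com"))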

This is basically the TL;DR version of Linky.

Happy Stalking.
