
cr0hn / Festin

License: BSD-3-Clause
FestIn - S3 Bucket Weakness Discovery

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Festin

ionic-image-upload
Ionic Plugin for Uploading Images to Amazon S3
Stars: ✭ 26 (-83.85%)
Mutual labels:  s3, s3-bucket
simply-static-deploy
WordPress plugin to deploy static sites easily to an AWS S3 bucket.
Stars: ✭ 48 (-70.19%)
Mutual labels:  s3, s3-bucket
docker-aws-s3-sync
Docker container to sync a folder to Amazon S3
Stars: ✭ 21 (-86.96%)
Mutual labels:  s3, s3-bucket
flask-drive
A simple Flask app to upload and download files off Amazon's S3
Stars: ✭ 23 (-85.71%)
Mutual labels:  s3, s3-bucket
Google Sheet S3
Google Apps Script that publishes a Google Sheet to Amazon S3 as a JSON file. Auto-updates on edit & maintains data types. Creates an array of objects keyed by column header.
Stars: ✭ 81 (-49.69%)
Mutual labels:  s3, s3-bucket
Bucket-Flaws
Bucket Flaws ( S3 Bucket Mass Scanner ): A Simple Lightweight Script to Check for Common S3 Bucket Misconfigurations
Stars: ✭ 43 (-73.29%)
Mutual labels:  s3, s3-bucket
terraform-aws-s3-bucket
A Terraform module to create a Simple Storage Service (S3) Bucket on Amazon Web Services (AWS). https://aws.amazon.com/s3/
Stars: ✭ 47 (-70.81%)
Mutual labels:  s3, s3-bucket
s3cr3t
A supercharged S3 reverse proxy
Stars: ✭ 55 (-65.84%)
Mutual labels:  s3, s3-bucket
Minio Hs
MinIO Client SDK for Haskell
Stars: ✭ 39 (-75.78%)
Mutual labels:  s3, s3-bucket
S3 Site Cache Optimizer
Optimize a static website for hosting in S3, by including a fingerprint into all assets' filenames. The optimized website is uploaded into the specified S3 bucket with the right cache headers.
Stars: ✭ 9 (-94.41%)
Mutual labels:  s3, s3-bucket
gatsby-source-s3
A Gatsby source plugin to query against an S3 bucket (including images!)
Stars: ✭ 19 (-88.2%)
Mutual labels:  s3, s3-bucket
Sbt S3 Resolver
☁️Amazon S3-based resolver for sbt
Stars: ✭ 112 (-30.43%)
Mutual labels:  s3, s3-bucket
BlobHelper
BlobHelper is a common, consistent storage interface for Microsoft Azure, Amazon S3, Komodo, Kvpbase, and local filesystem written in C#.
Stars: ✭ 23 (-85.71%)
Mutual labels:  s3, s3-bucket
s3-proxy
S3 Reverse Proxy with GET, PUT and DELETE methods and authentication (OpenID Connect and Basic Auth)
Stars: ✭ 106 (-34.16%)
Mutual labels:  s3, s3-bucket
awesome-storage
A curated list of storage open source tools. Backups, redundancy, sharing, distribution, encryption, etc.
Stars: ✭ 324 (+101.24%)
Mutual labels:  s3, s3-bucket
radio
Redundant Array of Distributed Independent Objectstores in short RADIO performs synchronous mirroring, erasure coding across multiple object stores
Stars: ✭ 25 (-84.47%)
Mutual labels:  s3, s3-bucket
terraform-aws-s3
Terraform module to create default S3 bucket with logging and encryption type specific features.
Stars: ✭ 22 (-86.34%)
Mutual labels:  s3, s3-bucket
s3recon
Amazon S3 bucket finder and crawler.
Stars: ✭ 111 (-31.06%)
Mutual labels:  s3, s3-bucket
Goofys
a high-performance, POSIX-ish Amazon S3 file system written in Go
Stars: ✭ 3,932 (+2342.24%)
Mutual labels:  s3, s3-bucket
S3fs
S3 FileSystem (fs.FS) implementation
Stars: ✭ 93 (-42.24%)
Mutual labels:  s3, s3-bucket

Festin logo

FestIn: a powerful S3 bucket finder and content discovery tool

What is FestIn

FestIn is a tool for discovering open S3 buckets starting from a domain.

It performs many tests and collects information from:

  • DNS
  • Web pages (crawler)
  • The S3 buckets themselves (e.g. S3 redirections)

Why FestIn

There are many tools for enumerating and discovering S3 buckets. Some of them are great, but none offers the complete feature set that FestIn has.

Main features that make FestIn great:

  • Various techniques for finding buckets: crawling, DNS crawling, and S3 response analysis.
  • Proxy support for tunneling requests.
  • No AWS credentials needed.
  • Works with any S3-compatible provider, not only AWS.
  • Allows configuring custom DNS servers.
  • Integrated high-performance HTTP crawler.
  • Recursive search with feedback across the 3 engines: a domain found by the DNS crawler is sent to the S3 and HTTP crawler analyzers, and likewise for S3 and crawler results.
  • 'Watching' mode: listens for new domains in real time.
  • Saves all discovered domains to a separate file for further analysis.
  • Can download bucket objects and feed them into a full-text search engine (Redis Search) automatically, indexing the objects' content for powerful searching later.
  • Can limit the search to specific domain(s).
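
To give a flavour of how these features combine on the command line, here is an illustrative run using only flags documented in the Full options section below (the domain and file names are placeholders):

> festin --tor -c 10 -rr results.json -rd new-domains.txt mydomain.com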

Install

Using Python

Python 3.8 or above needed!
$ pip install festin
$ festin -h

Using Docker

$ docker run --rm -it cr0hn/festin -h
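
To scan a domain through the container, pass the same arguments you would pass to the festin command (assuming, as the -h example suggests, that the image's entrypoint is festin itself):

$ docker run --rm -it cr0hn/festin mydomain.com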

Full options

$ festin -h
usage: __main__.py [-h] [--version] [-f FILE_DOMAINS] [-w] [-c CONCURRENCY] [--no-links] [-T HTTP_TIMEOUT] [-M HTTP_MAX_RECURSION] [-dr DOMAIN_REGEX] [-rr RESULT_FILE] [-rd DISCOVERED_DOMAINS] [-ra RAW_DISCOVERED_DOMAINS]
                   [--tor] [--debug] [--no-print] [-q] [--index] [--index-server INDEX_SERVER] [-dn] [-ds DNS_RESOLVER]
                   [domains [domains ...]]

Festin - the powered S3 bucket finder and content discover

positional arguments:
  domains

optional arguments:
  -h, --help            show this help message and exit
  --version             show version
  -f FILE_DOMAINS, --file-domains FILE_DOMAINS
                        file with domains
  -w, --watch           watch for new domains in file domains '-f' option
  -c CONCURRENCY, --concurrency CONCURRENCY
                        max concurrency

HTTP Probes:
  --no-links            extract web site links
  -T HTTP_TIMEOUT, --http-timeout HTTP_TIMEOUT
                        set timeout for http connections
  -M HTTP_MAX_RECURSION, --http-max-recursion HTTP_MAX_RECURSION
                        maximum recursison when follow links
  -dr DOMAIN_REGEX, --domain-regex DOMAIN_REGEX
                        only follow domains that matches this regex

Results:
  -rr RESULT_FILE, --result-file RESULT_FILE
                        results file
  -rd DISCOVERED_DOMAINS, --discovered-domains DISCOVERED_DOMAINS
                        file name for storing new discovered after apply filters
  -ra RAW_DISCOVERED_DOMAINS, --raw-discovered-domains RAW_DISCOVERED_DOMAINS
                        file name for storing any domain without filters

Connectivity:
  --tor                 Use Tor as proxy

Display options:
  --debug               enable debug mode
  --no-print            doesn't print results in screen
  -q, --quiet           Use quiet mode

Redis Search:
  --index               Download and index documents into Redis
  --index-server INDEX_SERVER
                        Redis Search ServerDefault: redis://localhost:6379

DNS options:
  -dn, --no-dnsdiscover
                        not follow dns cnames
  -ds DNS_RESOLVER, --dns-resolver DNS_RESOLVER
                        comma separated custom domain name servers

Usage

Configure search domains

By default FestIn accepts a start domain as a command line parameter:

> festin mydomain.com

But you can also set up an external file with a list of domains:

> cat domains.txt
domain1.com
domain2.com
domain3.com
> festin -f domains.txt 

Concurrency

FestIn performs a lot of tests for each domain, and the tests run concurrently. By default concurrency is set to 5. To increase the number of concurrent tests, set the -c option:

> festin -c 10 mydomain.com
Be careful with the concurrency level, or alarms could be raised on some web sites.

HTTP Crawling configuration

FestIn embeds a small crawler to discover links to S3 buckets. The crawler accepts these options:

  • Timeout (-T or --http-timeout): configures a timeout for HTTP connections. If the website of the domain you want to analyze is slow, we recommend increasing this value. The default timeout is 5 seconds.
  • Maximum recursion (-M or --http-max-recursion): this value sets a limit on crawling recursion; without it FestIn would scan the whole Internet. The default value is 3, meaning FestIn will only follow: domain1.com -> [link] -> domain2.com -> [link] -> domain3.com -> [link] -> maximum recursion reached, stop.
  • Limit domains (-dr or --domain-regex): set this option to limit the crawler to domains that match this regex.
  • Blacklist (-B): configures a blacklist word file. Any domain that matches a word in the blacklist will be skipped.
  • Whitelist (-W): configures a whitelist word file. Any domain that DOESN'T match a word in the whitelist will be skipped (see the whitelist example after the regex note below).

Example:

> echo "cdn" > blacklist.txt
> echo "photos" >> blacklist.txt
> festin -T 20 -M 8 -B blacklist.txt -dr .mydomain. mydomain.com 
BE CAREFUL: -dr (or --domain-regex) accepts only valid POSIX regexes.

*mydomain.com* -> not a valid POSIX regex
.mydomain\.com. -> a valid POSIX regex
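
A whitelist works like the blacklist example above, but inverted: only domains matching a listed word are followed (the file names here are placeholders):

> echo "mydomain" > whitelist.txt
> festin -W whitelist.txt mydomain.com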

Manage results

When FestIn runs, it discovers a lot of useful information, not only about S3 buckets but also from the other probes it performs.

After running FestIn we can use the discovered information (domains, links, resources, other buckets...) as input for other tools, like nmap.

For this reason FestIn has 3 different modes for storing discovered information, and we can combine them:

  • FestIn result file (-rr or --result-file): this file contains one JSON object per line for each bucket found. Each JSON object includes the origin domain, the bucket name, and the list of objects in the bucket.
  • Filtered discovered domains file (-rd or --discovered-domains): this file contains one domain per line. These domains were discovered by the crawler, DNS, or S3 probes, but only those matching the user and internal filters are stored.
  • Raw discovered domains file (-ra or --raw-discovered-domains): this file contains every domain discovered by FestIn, one per line, without any filter. This option is useful for post-processing and analysis.

Example:

> festin -rr festin.results -rd discovered-domains.txt -ra raw-domains.txt mydomain.com

And, chaining with Nmap:

> festin -rd domains.txt mydomain.com && nmap -Pn -A -iL domains.txt -oN nmap-domains.txt
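
Because the result file holds one JSON object per line, it is easy to post-process with standard tools. A minimal sketch with jq; the key name bucket_name is an assumption, so check a line of your own result file for the actual field names:

> jq -r '.bucket_name' festin.results | sort -u > buckets.txt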

Proxy usage

FestIn embeds the option --tor. To use this parameter you need a local Tor proxy running at 127.0.0.1 on port 9050.

> tor &
> festin --tor mydomain.com 

DNS Options

Some tests made by FestIn involve DNS. It supports these options:

  • Disable DNS discovery (-dn or --no-dnsdiscover)
  • Custom DNS server (-ds or --dns-resolver): sets a custom DNS server. If you plan to perform a lot of tests, you should use a DNS server other than the one your browser uses.

Example:

> festin -ds 8.8.8.8 mydomain.com 
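
Since -ds accepts a comma-separated list (per the help output above), several resolvers can be supplied at once; the addresses here are just well-known public resolvers used as placeholders:

> festin -ds 8.8.8.8,1.1.1.1 mydomain.com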

Full Text Support

FestIn can not only discover open S3 buckets; it can also download all of their content and store it in a full-text search engine. This means you can run full-text queries against the content of the buckets!

FestIn uses the open source project Redis Search as its full-text engine.

This feature has two options:

  • Enable indexing (--index): set this flag to enable indexing into the search engine.
  • Redis Search config (--index-server): you only need to set this option if your server is running on an IP/port other than localhost:6379.

Example:

> docker run --rm -d -p 6700:6379 redislabs/redisearch:latest
> festin --index --index-server redis://127.0.0.1:6700 mydomain.com
Pay attention: the `--index-server` value must have the **redis://** prefix.
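
Once indexing finishes you can query the engine directly, for example with redis-cli and RediSearch's FT.SEARCH command. The index name FestIn creates is not documented here, so treat festin below as a placeholder and list the real indexes first with FT._LIST:

> redis-cli -p 6700 FT._LIST
> redis-cli -p 6700 FT.SEARCH festin "password" LIMIT 0 10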

Running as a service (or watching mode)

Sometimes we don't want to stop FestIn and relaunch it every time we have a new domain to inspect, or whenever an external tool discovers new domains we want to check.

FestIn supports a watching mode. This means that FestIn will start up and listen for new domains. The way to 'send' new domains to FestIn is through the domains file: FestIn monitors this file for changes.

This feature is useful for combining FestIn with other tools, like dnsrecon.

Example:

> festin --watch -f domains.txt 

In a different terminal we can write:

> echo "my-second-domain.com" >> domains.txt 
> echo "another-domain.com" >> domains.txt 

Each new domain added to domains.txt will wake up FestIn.
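
For example, watch mode can be fed directly by the dnsrecon pipeline described in the next section (the file names are placeholders):

> festin --watch -f domains.txt &
> dnsrecon -d mydomain.com -t crt -c out.csv
> tail -n +2 out.csv | cut -d "," -f 2 | sort -u >> domains.txt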

Example: Mixing FestIn + DnsRecon

Using DnsRecon

The domain chosen for this example is target.com.

Step 1 - Run dnsrecon with the desired options against the target domain and save the output

>  dnsrecon -d target.com -t crt -c target.com.csv

With this command we find other domains related to target.com, which helps maximize our chances of success.

Step 2 - Prepare the previously generated file to feed FestIn

> tail -n +2 target.com.csv | sort -u | cut -d "," -f 2 >> target.com.domains

With this command we generate a file with one domain per line, which is the input format FestIn needs.

Step 3 - Run FestIn with the desired options and save the output

>  festin -f target.com.domains -c 5 -rr target.com.result.json --tor -ds 212.166.64.1 >target.com.stdout 2>target.com.stderr

In this example the resulting files are:

  • target.com.result.json - Main result file with one line per bucket found. Each line is a JSON object.
  • target.com.stdout - The standard output of festin command execution
  • target.com.stderr - The standard error of festin command execution

To ease the processing of multiple domains, we provide a simple script, examples/loop.sh, that automates this.
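
A minimal sketch of what such a loop might look like; the bundled examples/loop.sh may differ in details:

#!/usr/bin/env bash
# For each target domain: enumerate related domains with dnsrecon,
# convert the CSV into one domain per line, then run FestIn on the list.
while read -r domain; do
    dnsrecon -d "$domain" -t crt -c "$domain.csv"
    tail -n +2 "$domain.csv" | sort -u | cut -d "," -f 2 > "$domain.domains"
    festin -f "$domain.domains" -c 5 -rr "$domain.result.json" \
        > "$domain.stdout" 2> "$domain.stderr"
done < targets.txt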

Using FestIn with DnsRecon results

Run against target.com with default options, leaving results in target.com.result.json:

> festin target.com -rr target.com.result.json

Run against target.com using the Tor proxy, with a concurrency of 5, using DNS 212.166.64.1 for resolving CNAMEs, leaving results in target.com.result.json:

> festin target.com -c 5 -rr target.com.result.json --tor -ds 212.166.64.1

F.A.Q.

Q: AWS bans my IP

A: When you perform a lot of tests against AWS S3, AWS may add your IP to a blacklist. After that, every time you try to access any S3 bucket, whether with FestIn or with your browser, you will be blocked.

We recommend setting up a proxy (for example the --tor option) when you use FestIn.

Who uses FestIn

MrLooquer

They analyze and assess your company's risk exposure in real time.

License

This project is distributed under the BSD-3-Clause license.
