
lgraubner / Sitemap Generator Cli

License: MIT
Creates an XML-Sitemap by crawling a given site.

Programming Languages

javascript
184084 projects - #8 most used programming language

Projects that are alternatives of or similar to Sitemap Generator Cli

Sitemap Generator
Easily create XML sitemaps for your website.
Stars: ✭ 273 (+27.57%)
Mutual labels:  google, sitemap, crawler, seo
Sitemap Generator Crawler
Script that generates a sitemap by crawling a given URL
Stars: ✭ 169 (-21.03%)
Mutual labels:  sitemap, crawler, seo
Laravel Sitemap
Create and generate sitemaps with ease
Stars: ✭ 1,325 (+519.16%)
Mutual labels:  google, sitemap, seo
Trino
Trino: Master your translations with command line!
Stars: ✭ 118 (-44.86%)
Mutual labels:  cli, google
Lumberjack
An automated website accessibility scanner and cli
Stars: ✭ 109 (-49.07%)
Mutual labels:  cli, crawler
Gkeep
Google Keep Command Line Interface (CLI)
Stars: ✭ 114 (-46.73%)
Mutual labels:  cli, google
Laravel Seo Tools
Laravel Seo package for Content writer/admin/web master who do not know programming but want to edit/update SEO tags from dashboard
Stars: ✭ 99 (-53.74%)
Mutual labels:  sitemap, seo
Craft Seomatic
SEOmatic facilitates modern SEO best practices & implementation for Craft CMS 3. It is a turnkey SEO system that is comprehensive, powerful, and flexible.
Stars: ✭ 135 (-36.92%)
Mutual labels:  sitemap, seo
Curatedseotools
Best SEO Tools Stash
Stars: ✭ 128 (-40.19%)
Mutual labels:  google, seo
Youtube Projects
This repository contains all the code I use in my YouTube tutorials.
Stars: ✭ 144 (-32.71%)
Mutual labels:  google, crawler
Ngmeta
Dynamic meta tags in your AngularJS single page application
Stars: ✭ 152 (-28.97%)
Mutual labels:  crawler, seo
Youtubeshop
Youtube autolike and autosubs script
Stars: ✭ 177 (-17.29%)
Mutual labels:  cli, google
Fawkes
Fawkes is a tool to search for targets vulnerable to SQL Injection. Performs the search using Google search engine.
Stars: ✭ 108 (-49.53%)
Mutual labels:  google, crawler
Craft Sitemap
Craft plugin to generate a sitemap.
Stars: ✭ 105 (-50.93%)
Mutual labels:  sitemap, seo
Prerender Java
java framework for prerender
Stars: ✭ 115 (-46.26%)
Mutual labels:  crawler, seo
D4n155
OWASP D4N155 - Intelligent and dynamic wordlist using OSINT
Stars: ✭ 105 (-50.93%)
Mutual labels:  google, crawler
Search Engine Optimization
🔍 A helpful checklist/collection of Search Engine Optimization (SEO) tips and techniques.
Stars: ✭ 1,798 (+740.19%)
Mutual labels:  google, seo
Is Google
Verify that a request is from Google crawlers using Google's DNS verification steps
Stars: ✭ 82 (-61.68%)
Mutual labels:  google, crawler
Google Group Crawler
Get (almost) original messages from google group archives. Your data is yours.
Stars: ✭ 190 (-11.21%)
Mutual labels:  google, crawler
Rendora
dynamic server-side rendering using headless Chrome to effortlessly solve the SEO problem for modern javascript websites
Stars: ✭ 1,853 (+765.89%)
Mutual labels:  crawler, seo

Sitemap Generator CLI


Create XML sitemaps from the command line.

Generates a sitemap by crawling your site. Uses streams to write the sitemap to your drive efficiently. Capable of creating multiple sitemaps if a threshold is reached. Respects robots.txt and meta tags.

Table of contents

  • Install
  • Usage
  • Options
  • License

Install

This module is available on npm.

npm install -g sitemap-generator-cli
# or execute it directly with npx (since npm v5.2)
npx sitemap-generator-cli https://example.com
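
To check that the install worked, you can print the installed version:

sitemap-generator --version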

Usage

The crawler fetches all folder URLs and the file types parsed by Google. If a robots.txt is present, it is taken into account and its rules are applied to each URL to decide whether it should be added to the sitemap. The crawler also will not fetch URLs from a page if a robots meta tag with the value nofollow is present, and it will ignore pages completely if a noindex rule is present. The crawler is able to apply the base value to found links.

sitemap-generator [options] <url>

When the crawler has finished, the XML sitemap is built and saved to your specified filepath. If the count of fetched pages is greater than 50000, it is split into several sitemap files and a sitemapindex file is created, as Google does not allow more than 50000 items in one sitemap.

Example:

sitemap-generator http://example.com

Options

sitemap-generator --help

  Usage: cli [options] <url>

  Options:

    -V, --version                           output the version number
    -f, --filepath <filepath>               path to file including filename (default: sitemap.xml)
    -m, --max-entries <maxEntries>          limits the maximum number of URLs per sitemap file (default: 50000)
    -d, --max-depth <maxDepth>              limits the maximum distance from the original request (default: 0)
    -q, --query                             consider query string
    -u, --user-agent <agent>                set custom User Agent
    -v, --verbose                           print details when crawling
    -c, --max-concurrency <maxConcurrency>  maximum number of requests the crawler will run simultaneously (default: 5)
    -r, --no-respect-robots-txt             controls whether the crawler should respect rules in robots.txt
    -l, --last-mod                          add Last-Modified header to xml
    -g, --change-freq <changeFreq>          adds a <changefreq> line to each URL in the sitemap.
    -p, --priority-map <priorityMap>        priority for each depth url, values between 1.0 and 0.0, example: "1.0,0.8 0.6,0.4"
    -x, --proxy <url>                       Use the passed proxy URL
    -h, --help                              output usage information

filepath

Path to the file to write, including the filename itself. The path can be absolute or relative. Default is sitemap.xml.

Examples:

  • sitemap.xml
  • mymap.xml
  • /var/www/sitemap.xml
  • ./sitemap.myext
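
For example, to write the sitemap to an absolute path (the path below is purely illustrative):

sitemap-generator -f /var/www/sitemap.xml https://example.com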

maxConcurrency

Sets the maximum number of requests the crawler will run simultaneously (default: 5).
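
For example, to allow up to 10 simultaneous requests (the value is illustrative):

sitemap-generator -c 10 https://example.com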

maxEntries

Defines a limit of URLs per sitemap file, useful for sites with lots of URLs. Defaults to 50000.
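
For example, to split the output into files of at most 10000 URLs each (the limit is illustrative):

sitemap-generator -m 10000 https://example.com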

maxDepth

Set a maximum distance from the original request to crawl URLs, useful for generating smaller sitemap.xml files. Defaults to 0, which means it will crawl all levels.
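
For example, to crawl no deeper than two levels from the start URL (the depth is illustrative):

sitemap-generator -d 2 https://example.com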

noRespectRobotsTxt

Controls whether the crawler should respect rules in robots.txt.
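
For example, to crawl without applying robots.txt rules:

sitemap-generator --no-respect-robots-txt https://example.com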

query

Consider URLs with query strings like http://www.example.com/?foo=bar as individual sites and add them to the sitemap.
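
For example, to include query-string variants as separate entries:

sitemap-generator -q https://example.com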

user-agent

Set a custom User Agent used for crawling. Default is Node/SitemapGenerator.
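
For example, to crawl with a custom agent (the agent string below is a placeholder):

sitemap-generator -u "MyBot/1.0" https://example.com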

verbose

Print debug messages during the crawling process. Also prints out a summary when finished.
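
For example, to print details while crawling:

sitemap-generator -v https://example.com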

last-mod

Adds the Last-Modified date to each URL in the XML output.
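
For example, to include Last-Modified information in the output:

sitemap-generator -l https://example.com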

change-freq

Adds a <changefreq> line to each URL in the sitemap.
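
For example, using daily, one of the change frequency values defined by the sitemaps protocol (the value is illustrative):

sitemap-generator -g daily https://example.com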

priority-map

Adds a priority value for each URL depth; values range between 1.0 and 0.0. Example: "1.0,0.8 0.6,0.4"
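
For example, using the priority map from above:

sitemap-generator -p "1.0,0.8 0.6,0.4" https://example.com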

License

MIT © Lars Graubner
