cloudharmony / network
License: Apache-2.0
Tests network performance characteristics including latency, throughput and DNS
Network Benchmark

Tests network performance characteristics including latency, throughput and DNS query performance. Common Linux tools including curl, ping and dig are used to conduct testing. Test endpoints must have CloudHarmony test files installed on an http/https accessible URI (see https://github.com/cloudharmony/web-probe).

NOTE: When cloning this repository, please use the --recurse-submodules option to also pull the https://github.com/cloudharmony/benchmark submodule.

TESTING PARAMETERS
The following test parameters are supported. Parameters with a 'meta_' prefix are informational and used in conjunction with saving results (see save.sh)

* collectd_rrd: If set, collectd rrd stats will be captured from --collectd_rrd_dir. To do so, when testing starts, existing directories in --collectd_rrd_dir will be renamed with a .bak suffix, and upon test completion any directories not ending in .bak will be zipped and saved along with the other test artifacts (as collectd-rrd.zip). The user MUST have sudo privileges to use this option

* collectd_rrd_dir: Location where collectd rrd files are stored - default is /var/lib/collectd/rrd

* abort_threshold: Number of failures to permit before aborting testing. If set and this number of failures is reached, testing will stop and no result metrics will be generated

* conditional_spacing: Conditional spacing (milliseconds) to apply before a test interval if the prior interval hits a defined threshold. This parameter must match the regular expression /^[><][0-9]+=[0-9]+$/, where the value on the left defines the threshold and the value on the right the spacing. For example, if this argument is ">30000=60000" for a throughput test, a 60 second (60000 ms) spacing will be applied before a test interval whenever the prior interval produces a throughput value higher than 30000 Mb/s.
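The --conditional_spacing parsing rule above can be sketched as follows. This is a hypothetical helper for illustration only (it is not part of the benchmark): the character before the '=' sign is the comparison operator plus threshold, and the value after it is the spacing in milliseconds.

```shell
# Hypothetical sketch of how a --conditional_spacing value could be validated
# and split into its parts; the real benchmark is implemented in PHP.
parse_conditional_spacing() {
  local arg="$1"
  # must match /^[><][0-9]+=[0-9]+$/ per the parameter documentation
  if ! printf '%s' "$arg" | grep -Eq '^[><][0-9]+=[0-9]+$'; then
    echo "invalid"
    return 1
  fi
  local op="${arg:0:1}"                 # comparison operator: > or <
  local threshold="${arg%%=*}"          # left side including operator
  threshold="${threshold:1}"            # strip the operator
  local spacing="${arg#*=}"             # right side: spacing in ms
  echo "$op $threshold $spacing"
}

parse_conditional_spacing ">30000=60000"   # prints: > 30000 60000
```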
* discard_fastest: If set, this percentage of the fastest metrics will be discarded prior to metric calculations (mean, median, standard deviation)

* discard_slowest: If set, this percentage of the slowest metrics will be discarded prior to metric calculations (mean, median, standard deviation)

* dns_one_server: If set, only 1 (randomly selected) authoritative DNS server will be tested for each --test_endpoint for DNS testing

* dns_recursive: Use recursive instead of authoritative queries for DNS tests (uses the name servers in /etc/resolv.conf)

* dns_retry: Optional explicit number of (UDP) DNS query retries (default is 2)

* dns_samples: The number of test samples for DNS tests. Default is 10. DNS queries will be performed against delegated servers in round robin order. If --dns_one_server is set, each test will use just 1 randomly selected server

* dns_tcp: Perform DNS tests with TCP queries (default is UDP)

* dns_timeout: Timeout in seconds for DNS queries. Default is 5 seconds

* font_size: The base font size (pt) to use in reports and graphs. All text will be sized relative to this value (i.e. smaller, larger). Default is 9. Graphs use this value + 4 (i.e. 13 by default). Open Sans is included with this software. To change the font, simply replace the reports/font.ttf file with your desired font

* geoiplookup: Look up --test_endpoint locations using geoiplookup if --test_location is not specified. To use this option, geoiplookup must be installed with current country/state GeoIP databases (may require commercial licensing - geoiplookup is a command line tool from MaxMind included in the GeoIP package)

* geo_regions: Geo regions to use for *_geo_region parameters and included in results. This parameter should be a comma or space separated list of the desired geo region identifiers. The file lib/config/geo-regions.ini lists and defines the possible identifiers, including the associated countries/states. Default for this parameter is: us_west us_central us_east canada eu_west eu_central eu_east oceania asia america_south africa. Geo region associations are based on first match. For example, based on the default value above, Australia matches 'oceania' even though AU is listed in both the oceania and asia_apac regions

* latency_interval: Wait interval in seconds between sending each packet. Only the super-user may set the interval to values less than 0.2 seconds. Default is 0.2

* latency_samples: The number of test samples for latency tests. Default is 100

* latency_skip: Endpoint, service or provider ID to skip latency tests for. May be repeated for multiple values

* latency_timeout: Timeout in seconds for latency tests. Default is 3 seconds

* max_runtime: Optional max runtime in seconds - if this time is reached before all tests have completed, testing will stop and report on the completed tests

* max_tests: Optional max number of tests to perform. If the number of tests assigned exceeds this number, testing will stop and report on the completed tests

* meta_compute_service: Optional name of the service for the compute instance performing the tests (e.g. Amazon EC2)

* meta_compute_service_id: Optional ID of the service for the compute instance performing the tests (e.g. aws:ec2)

* meta_cpu: CPU descriptor - if not specified, it will be set using 'model name' from /proc/cpuinfo

* meta_instance_id: Optional compute service instance type identifier (e.g. c3.xlarge)

* meta_location: Optional geographical location of the compute instance performing the tests. This parameter may be either a two character ISO 3166 country code, or a state abbreviation and country code (e.g. --meta_location "CA, US" or --meta_location US)

* meta_memory: Memory description - if not specified, the system memory size will be used

* meta_os: Operating system description - if not set, the first line of /etc/issue will be used

* meta_provider: Optional name of the provider for the compute instance performing the tests (e.g. Amazon)

* meta_provider_id: Optional ID of the provider for the compute instance performing the tests (e.g. aws)

* meta_region: Optional region name or identifier for the compute instance performing the tests (e.g. us-east-1)

* meta_resource_id: Optional unique identifier of the compute instance performing the tests (e.g. 1234)

* meta_run_id: Optional unique identifier for the test (e.g. 4567)

* meta_test_id: Optional unique identifier for a sequence of tests (e.g. aws-0914)

* min_runtime: Optional minimum runtime in seconds. If testing completes before this time is reached, the process will sleep for the remaining duration

* min_runtime_in_save: If set, --min_runtime will be applied by save.sh

* nopdfreport: Don't generate the PDF version of the test report - report.pdf (removes the wkhtmltopdf dependency if specified)

* noreport: Don't generate HTML or PDF test reports - report.zip and report.pdf (removes the gnuplot, wkhtmltopdf and zip dependencies if specified)

* output: The output directory for writing test artifacts. If not specified, the current working directory will be used

* params_url: Optional URL that will respond to requests with one or more JSON encoded test parameters. This URL should support GET requests and respond with a 2XX response code. The response body should be a JSON encoded string containing a hash with one or more parameters. If duplicate parameters exist between the command line and the URL, command line parameters take precedence

* params_url_service_type: Optional service type filter to apply to test endpoints defined by --params_url. Services not of this type (or with no type defined) will not be tested. This parameter may be repeated for multiple service types

* params_url_header: Optional request header(s) to set for --params_url. These should use the format [name]:[value]. This parameter may be repeated for multiple headers (e.g. api_key:12345)

* randomize: If set, the order of testing will be randomized (if multiple tests are defined)

* same_continent_only: If set, only --test_endpoint hosts located on the same continent will be tested (others are skipped). Does not apply to CDN or DNS services

* same_country_only: If set, only --test_endpoint hosts located in the same country will be tested (others are skipped). Does not apply to CDN or DNS services

* same_geo_region: If set, only --test_endpoint hosts located in the same geo region will be tested (others are skipped). See the --geo_regions parameter above. Does not apply to CDN or DNS services

* same_provider_only: If set, only --test_endpoint hosts from the same provider (e.g. aws) will be tested (others are skipped)

* same_region_only: If set, only --test_endpoint hosts from the same service and service region (e.g. us-east-1) will be tested (others are skipped)

* same_service_only: If set, only --test_endpoint hosts from the same service (e.g. aws:ec2) will be tested (others are skipped)

* same_state_only: If set, only --test_endpoint hosts located in the same country and state will be tested (others are skipped). Does not apply to CDN or DNS services

* service_lookup: If set, the CloudHarmony 'Identify Service' API method will be used to attempt to correlate --test_endpoint hosts to their associated cloud provider, service, service type, region and location. For more information, see: https://cloudharmony.com/docs/api#!/api/GET_Identify_Service NOTE: if used, responses will be cached in /tmp

* sleep_before_start: An optional numeric value or range defining a sleep period (seconds) to apply before starting testing. If a single numeric value, that exact period will be applied. If a range of values (e.g. 30-90), a random sleep period within that range will be applied

* spacing: Spacing in milliseconds to apply between each test (default is 200 ms => 1/5 second)

* suppress_failed: If set, failed tests will be excluded from the results generated by save.sh. Otherwise, they are included with status=failed

* tcp_file: File(s) to use for TCP tests. Default is 'ping.js'. May be set to a comma separated list of file names from the web-probe repository, a size or size range (bytes assumed if no size suffix is designated), or the keyword 'small' which will select a random file <=128KB in size for each sample (same effect as --tcp_file "8-128KB"). Examples:
  --tcp_file "ping.js"
  --tcp_file "test10kb.jpg,test20kb.jpg,test30kb.jpg,test40kb.jpg,test50kb.jpg"
  --tcp_file "512"
  --tcp_file "0-10MB"
  --tcp_file "10KB-1MB"
  --tcp_file "small"

* tcp_header: Optional headers to include in HTTP requests - multiple OK. For example, to simulate a user agent: User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3)

* tcp_samples: The number of test samples for TCP tests (rtt, ssl or ttfb). Default is 10

* tcp_timeout: Timeout in seconds for TCP tests. Default is 30 seconds

* tcp_uri: Defines the base URI/location of the http/https accessible CloudHarmony web-probe directory on --test_endpoint. Default is '/web-probe'. May be overridden using a URI suffix within --test_endpoint

* test: The test(s) to perform - one or more of the following:
  latency: test latency using ping - use of this test requires ICMP connectivity to --test_endpoint
  downlink: test downlink throughput - use of this or uplink tests requires the CloudHarmony web-probe repository to be http/https accessible on --test_endpoint (see --throughput_uri)
  uplink: test uplink throughput - use of this test requires support for large POST requests against the URI [throughput_uri]/up.html
  throughput: test both downlink and uplink
  dns: measure the time to make a DNS query.
Authoritative name servers for the domain in --test_endpoint will be used for this testing unless the --dns_recursive flag is set
  rtt: measure TCP/IP round trip time (RTT). Use of this test requires the CloudHarmony web-probe repository to be http/https accessible on --test_endpoint (see the --tcp_* parameters). If a test_endpoint is designated without an http prefix, http will be assumed. RTT is calculated using curl timing metrics as: rtt = %{time_connect} - %{time_namelookup} (the metric saved is milliseconds). See https://blog.cloudflare.com/a-question-of-timing/
  ssl: measure SSL handshake time. Use of this test requires the CloudHarmony web-probe repository to be https accessible on --test_endpoint (see the --tcp_* parameters). If an endpoint is specified with an http prefix, it will be assumed to support https. SSL handshake time is calculated using curl timing metrics as: ssl_time = %{time_appconnect} - %{time_connect} (the metric saved is milliseconds). See https://blog.cloudflare.com/a-question-of-timing/
  ttfb: measure time to first byte (TTFB). Use of this test requires the CloudHarmony web-probe repository to be http/https accessible on --test_endpoint (see the --tcp_* parameters). If a test_endpoint is designated without an http prefix, http will be assumed. TTFB is calculated using curl timing metrics as: ttfb = %{time_starttransfer} - %{time_pretransfer} (the metric saved is milliseconds). This metric is similar to rtt but incorporates the time spent by the server processing an HTTP GET request. This metric assumes the TCP socket connection has already been established (TCP + SSL handshakes) and disregards DNS lookup time. See https://blog.cloudflare.com/a-question-of-timing/
  tcp: run all TCP tests - rtt, ssl and ttfb
Multiple tests may be specified, each separated by a space or comma. Default test is latency

* test_cmd_downlink: May be used to override the use of curl for downlink tests. If set, this argument should designate the structure of a CLI command to use to download (web-probe) test files. The contents of the requested file should be written to stdout or [dest]. If the latter, [dest] will be replaced with a random file name in the 'output' directory and deleted upon test completion. On error, this command should exit with a non-zero status code. This argument must contain the substring [file], which will be replaced at runtime with the path to the test file to download. This value is constructed using a combination of the test_endpoint and throughput_uri arguments followed by the web-probe repository file name. For example, the following downlink commands designate use of the aws s3 cp CLI command, a bucket named 'mybucket' and web-probe repositories located in that bucket under the '/probe' prefix:
  Stdout: --test_cmd_downlink "aws s3 cp s3://mybucket/probe/[file] -"
  [dest]: --test_cmd_downlink "aws s3 cp s3://mybucket/probe/[file] [dest]"
Command line dash style arguments (e.g. -a / --arg) must include a leading backslash before each dash. Alternatively, if the URL portion (test_endpoint and throughput_uri) should be separate from the file name, this argument may contain a [url] substring. For example:
  --test_cmd_downlink "azcopy copy https://[url]/[file]<your sas> >/dev/stdout"

* test_cmd_downlink_bytes: Set this flag if --test_cmd_downlink outputs the number of bytes transferred to stdout on success. When used, do not include [dest] in the --test_cmd_downlink argument

* test_cmd_downlink_dir: If test_cmd_downlink contains [dest], this parameter may be set to specify an alternate directory to write temporary downloaded files to (in place of the default 'output' directory). If set, this directory must be writable by the test process

* test_cmd_downlink_sleep: Optional seconds to sleep between each downlink command. May be a fractional value

* test_cmd_token: This parameter may optionally be set to one or more tokens for use in conjunction with test_cmd_downlink, test_cmd_uplink and test_cmd_uplink_del, whereby the string [token] will be replaced by the value specified by this parameter. If more than one token is set, they should be delimited by a pipe character (|) and prefixed with [test_service_id]=. For example, if 2 tokens, token1 and token2, were specified by this parameter, one for test service ID azure:storage and the second for test service ID azure:blob, this parameter would be set to: --test_cmd_token "azure:storage=token1|azure:blob=token2"

* test_cmd_uplink: Like test_cmd_downlink, but used in place of curl for uplink tests. This command must also contain the substring [file], which will be replaced at runtime with a random name to assign to the test file. Additionally, this command must contain the substring [source], which will be replaced at runtime with the path to a local file to be uploaded. On error, this command should exit with a non-zero status code; otherwise it will be considered to have been successful. stdout from this command is ignored. For example, the following uplink command designates use of the aws s3 cp CLI command, a bucket named 'mybucket' and a name prefix of '/test' for uploaded files:
  --test_cmd_uplink "aws s3 cp [source] s3://mybucket/test/[file]"
Command line dash style arguments (e.g. -a / --arg) must include a leading backslash before each dash. Alternatively, if the URL portion (test_endpoint and throughput_uri) should be separate from the file name, this argument may contain a [url] substring:
  --test_cmd_uplink "azcopy copy [source] https://[file]<your sas>"

* test_cmd_uplink_del: This argument must be used in conjunction with test_cmd_uplink, designating a command to use to remove the test files created as a result of uplink testing.
Like test_cmd_uplink, it may contain the substring [file], which will be replaced with the name of each test file uploaded (the command will be invoked once per file). In place of [file], this argument may contain a wildcard '*' character, in which case it is assumed this command need only be invoked once to remove all test files. For example, the following commands designate removal of the files from the uplink command example above using both the individual file and wildcard methods:
  Individual (run once per file): --test_cmd_uplink_del "aws s3 rm s3://mybucket/test/[file]"
  Wildcard (run once per test iteration): --test_cmd_uplink_del "aws s3 rm s3://mybucket/test/ \-\-recursive \-\-include '*'"
Command line dash style arguments (e.g. -a / --arg) must include a leading backslash before each dash. Alternatively, if the URL portion (test_endpoint and throughput_uri) should be separate from the file name, this argument may contain a [url] substring. For example:
  --test_cmd_uplink_del "azcopy remove https://[file]<your sas>"

* test_cmd_uplink_sleep: Optional seconds to sleep between each uplink command. May be a fractional value

* test_cmd_url_strip: Optional string to remove from file URL values (i.e. the test_endpoint and throughput_uri arguments). For example, if testing was set up for Amazon S3 via curl/HTTP requests (e.g. http://mybucket.s3.amazonaws.com/probe), this argument could be set to --test_cmd_url_strip ".s3.amazonaws.com" (http:// and https:// are automatically removed) to convert HTTP URLs to S3 URLs. Multiple strings may be specified, each separated by a pipe character (|)

* test_endpoint: REQUIRED: hostname or IP address to perform tests against. For throughput tests this may include an optional http/https prefix (if set, overrides the --throughput_https parameter) and a web-probe URI suffix (if set, overrides the --throughput_uri parameter). May also contain a wildcard character, which will be replaced with a random string for each test. Examples:
  --test_endpoint test.mydomain.com
  --test_endpoint *.test.mydomain.com
  --test_endpoint https://test.mydomain.com
  --test_endpoint https://test.mydomain.com/test-files
For DNS tests, the name servers used during testing are those delegated for the base domain (e.g. mydomain.com), unless the --dns_recursive flag is set. However, if this parameter contains comma or space separated values, the values following the first will be treated as custom name servers to use instead of those delegated. For test endpoints with both public and private hostnames/IP addresses, this parameter may be a space or comma separated pair where the second value is the private hostname/IP. The private hostname/IP will be used if the compute instance and the test endpoint are from the same provider, service and service region (if it fails, the public hostname/IP will be used instead)

* test_files_dir: May be set to the location of a local directory containing the web-probe test files (https://github.com/cloudharmony/web-probe). If uplink testing is conducted, these files will be used in place of generating random files/bytes for such tests, which shortens test duration and reduces local resource overhead. Multiple directories may be specified, separated by a comma, if files might exist in more than one

* test_instance_id: Optional instance type that --test_endpoint belongs to (e.g. c3.xlarge). If multiple --test_endpoint parameters are specified, --test_instance_id may be set only once (same instance type for all endpoints), or the same number of times as --test_endpoint (a different instance type for each endpoint)

* test_location: The geographic location of --test_endpoint. The value for this parameter may be either a two character ISO 3166 country code, or a state abbreviation and country code (e.g. --test_location "CA, US" or --test_location US). If multiple --test_endpoint parameters are specified, --test_location may be set only once (same location for all endpoints), or the same number of times as --test_endpoint (a different location for each endpoint)

* test_private_network_type: If --test_endpoint contains both public and private hostnames/IP addresses, this parameter may describe the type of private network it refers to (e.g. vpc, vpc-enhanced-networking). The value of this parameter is included in the corresponding results (testing logic does not change). If multiple --test_endpoint parameters are specified, --test_private_network_type may be set only once (same private network type for all endpoints), or the same number of times as --test_endpoint (a different private network type for each endpoint)

* test_provider: Optional name of the provider that --test_endpoint belongs to. If multiple --test_endpoint parameters are specified, --test_provider may be set only once (same provider for all endpoints), or the same number of times as --test_endpoint (a different provider for each endpoint)

* test_provider_id: Optional ID of the provider that --test_endpoint belongs to. If multiple --test_endpoint parameters are specified, --test_provider_id may be set only once (same provider for all endpoints), or the same number of times as --test_endpoint (a different provider for each endpoint)

* test_region: Optional service region that --test_endpoint is located in (e.g. --test_region us-east-1 for EC2). If multiple --test_endpoint parameters are specified, --test_region may be set only once (same for all endpoints), or the same number of times as --test_endpoint (different for each endpoint)

* test_service: Optional name of the service that --test_endpoint belongs to.
If multiple --test_endpoint parameters are specified, --test_service may be set only once (same service for all endpoints), or the same number of times as --test_endpoint (a different service for each endpoint)

* test_service_id: Optional ID of the service that --test_endpoint belongs to. If multiple --test_endpoint parameters are specified, --test_service_id may be set only once (same service for all endpoints), or the same number of times as --test_endpoint (a different service for each endpoint)

* test_service_type: Optional type of service that --test_endpoint belongs to. If multiple --test_endpoint parameters are specified, --test_service_type may be set only once (same service type for all endpoints), or the same number of times as --test_endpoint (a different service type for each endpoint). Only the following values are allowed: compute, storage (i.e. object storage), cdn or dns. Not required for DNS tests. Optionally, the service type can be embedded into --test_service_id (e.g. google:compute). If used, this attribute will also be used to determine which --test values are supported by each endpoint, based on the following type to test correlations:
  compute => throughput, latency
  storage => downlink, latency
  cdn => downlink, latency
  dns => dns
If an endpoint is specified for which there are no supported tests, it will be disregarded

* throughput_header: Optional headers to include in HTTP requests - multiple OK. For example, to simulate a user agent: User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3)

* throughput_https: If set, the protocol for throughput tests will default to https - otherwise it defaults to http

* throughput_inverse: If set, and a throughput test record contains both meta_compute_service_id and test_service_id, and test_service_type is 'compute', and --test is 'downlink' OR 'uplink' (not 'throughput'), then an inverse of each test record will be added to the results. The inverse record will use the same metrics, but replace 'test' with the opposite type (e.g. downlink => uplink or uplink => downlink), and the following test record attributes will be substituted:
  meta_compute_service <=> test_service
  meta_compute_service_id <=> test_service_id
  meta_geo_region <=> test_geo_region
  meta_instance_id <=> test_instance_id
  meta_hostname <=> test_endpoint
  [ip of host] => test_ip
  meta_location <=> test_location
  meta_location_country <=> test_location_country
  meta_location_state <=> test_location_state
  meta_provider <=> test_provider
  meta_provider_id <=> test_provider_id
  meta_region <=> test_region
Additionally, the following attributes will be set to null: meta_cpu, meta_memory, meta_memory_gb, meta_memory_mb, meta_os_info, meta_resource_id

* throughput_keepalive: If set, throughput tests will use HTTP keep-alive, meaning HTTP connections will be re-used for multiple requests. When used, throughput_samples will be spread equally across throughput_threads for each test. Cannot be used in conjunction with test_cmd_downlink or test_cmd_uplink

* throughput_same_continent: Throughput test size to use in megabytes if the compute instance performing tests is on the same continent as --test_endpoint. Overrides --throughput_size in that case. CDN services will always match this parameter. Default is 10

* throughput_same_country: Throughput test size to use in megabytes if the compute instance performing tests is in the same country as --test_endpoint. Overrides --throughput_size in that case. CDN services will always match this parameter. Default is 20

* throughput_same_geo_region: Throughput test size to use in megabytes if the compute instance performing tests is in the same geo region as --test_endpoint. Overrides --throughput_size in that case (see the --geo_regions parameter above). CDN services will always match this parameter. Default is 30

* throughput_same_provider: Throughput test size to use in megabytes if the compute instance performing tests is from the same provider as --test_endpoint. Overrides --throughput_size in that case. Default is 10

* throughput_same_region: Throughput test size to use in megabytes if the compute instance performing tests is from the same service AND in the same region as --test_endpoint. Overrides --throughput_size in that case. Default is 100

* throughput_same_service: Throughput test size to use in megabytes if the compute instance performing tests is from the same service as --test_endpoint. Overrides --throughput_size in that case. No default value

* throughput_same_state: Throughput test size to use in megabytes if the compute instance performing tests is located in the same country and state as --test_endpoint. Overrides --throughput_size in that case. Default is 50

* throughput_samples: The number of test samples for throughput tests. Default is 5 unless --throughput_small_file is set or --throughput_size is 0, in which case it is 10. The total number of test samples is [throughput_samples] * [throughput_threads]

* throughput_size: Default size for throughput tests in megabytes. For downlink throughput tests, the test file from the CloudHarmony web-probe repository with the closest matching size will be used. For uplink throughput tests, POST requests of this exact size (request body containing random data) will be used. If set to 0, tests will use an 8 byte request and --throughput_time will be set to true (result metrics will represent request times in ms instead of rates in Mb/s). Default is 5

* throughput_slowest_thread: If set, throughput metrics will be based on the speed of the slowest thread instead of the average speed X number of threads

* throughput_small_file: If set, --throughput_size is ignored and throughput tests are constrained to test files smaller than 128KB. Each thread of each request will randomly select one such file. When used, the throughput_size result value will be the average file size

* throughput_threads: The number of concurrent threads for throughput tests. Default is 2. May contain [cpus], which will be automatically replaced with the number of CPU cores

* throughput_time: If set, throughput metrics will be average request times (ms) instead of rates (Mb/s). When used with throughput_webpage, metrics will be the total page load time

* throughput_timeout: Timeout in seconds for throughput tests. Default is 900 seconds, unless --throughput_size is 0 or --throughput_small_file is set, in which case it is 30

* throughput_tolerance: The permitted variation between requested and transferred bytes for a throughput test. Default is 0.6 - meaning transferred bytes must be within 60%

* throughput_uri: Defines the base URI/location of the http/https accessible CloudHarmony web-probe directory on --test_endpoint. Default is '/web-probe'. May be overridden using a URI suffix within --test_endpoint

* throughput_use_mean: If set, mean metrics will be used for reporting and calculations instead of the default median

* throughput_webpage: May be used to designate the contents of a single web page. When set, the value should be a space or comma separated list of URIs relative to test_endpoint (or optionally absolute for an external endpoint). If set, throughput_same_*, throughput_size, throughput_small_file and throughput_uri will be ignored, throughput_keepalive will be implicitly set, and throughput_samples will designate the number of full page loads to perform (each metric representing one such load). To accomplish this, webpage resources will be evenly divided between throughput_threads

* throughput_webpage_check: If set, the URLs designated by throughput_webpage will be individually checked for validity before testing begins. To be considered valid, a URL should return a 2XX response and be within 5% of the same size as on the first endpoint. If any URL is not valid, that index will be removed for all test endpoints. To use this parameter, the number of URLs in each throughput_webpage parameter must be equal

* traceroute: Perform a traceroute if a test fails - the results of the traceroutes are written to traceroute.log in the --output directory

* verbose: Show verbose output

DEPENDENCIES
This benchmark has the following dependencies:
  curl        Used for throughput, rtt, ttfb and ssl testing
  dig         Used for DNS testing
  GeoIP       If --geoiplookup is set - used to look up locations of --test_endpoint using its IP address
  php-cli     Used for test automation
  ping        Used for latency testing
  traceroute  If --traceroute is set - used to traceroute hosts following failed tests
  zip         Used to compress test artifacts

TEST ARTIFACTS
This benchmark generates the following artifacts (written to --output):
  collectd-rrd.zip  collectd RRD files (see --collectd_rrd)
  traceroute.log    traceroutes for any failed tests

SAVE SCHEMA
The following columns are included in the CSV files/tables generated by save.sh. Indexed MySQL/PostgreSQL columns are identified by *. Columns without descriptions are documented as runtime parameters above. Data types are defined in save/schema/network.json.
Columns can be removed using the save.sh --remove parameter

  benchmark_version: [benchmark version]
  collectd_rrd: [URL to zip file containing collectd rrd files]
  dns_recursive
  dns_servers: [number of unique DNS servers queried]
  iteration: [iteration number (used with incremental result directories)]
  meta_compute_service
  meta_compute_service_id*
  meta_cpu: [CPU model info]
  meta_cpu_cores: [# of CPU cores]
  meta_instance_id*
  meta_geo_region: [geo region of the testing compute instance (derived from meta_location)]
  meta_hostname: [hostname of the testing compute instance]
  meta_location
  meta_location_country
  meta_location_state
  meta_memory
  meta_memory_gb: [memory in gigabytes]
  meta_memory_mb: [memory in megabytes]
  meta_os_info: [operating system name and version]
  meta_provider
  meta_provider_id*
  meta_region*
  meta_resource_id
  meta_run_id
  meta_test_id*
  metric: [median test metric: ms for latency and DNS, Mb/s or ms for throughput]
  metric_10: [10th (slowest to fastest) percentile metric]
  metric_25: [25th (slowest to fastest) percentile metric]
  metric_75: [75th (slowest to fastest) percentile metric]
  metric_90: [90th (slowest to fastest) percentile metric]
  metric_fastest: [fastest metric - lowest value for dns/latency, highest for others]
  metric_max: [largest metric]
  metric_mean: [mean metric]
  metric_min: [smallest metric]
  metric_rstdev: [sample relative standard deviation %: (metric_stdev/metric)*100]
  metric_slowest: [slowest metric - highest value for dns/latency, lowest for others]
  metric_stdev: [sample standard deviation]
  metric_sum: [summation of individual measurements]
  metric_sum_squares: [summation of squared individual measurements]
  metric_timed: [throughput based on time - not curl reported speed - uplink/downlink tests only]
  metric_unit: [unit of measure for the metrics - e.g. ms or Mb/s]
  metric_unit_long: [long form unit of measure for the metrics - e.g. milliseconds or megabits per second]
  metrics: [pipe separated string containing all metrics]
  samples: [number of test samples]
  status: [test status - success, partial or failed]
  test: [type of test: dns, latency, uplink or downlink]
  test_endpoint: [user designated IP/hostname of the test endpoint - private hostname/IP if used]
  test_geo_region*: [geo region of the test endpoint]
  test_instance_id*
  test_ip: [actual IP of the test_endpoint]
  test_location
  test_location_country
  test_location_state
  test_private_endpoint: [true if a private hostname/IP was used for testing]
  test_private_network_type
  test_provider
  test_provider_id
  test_region*
  test_service
  test_service_id*
  test_service_type
  test_started*: [when the test started]
  test_stopped: [when the test ended]
  tests_failed: [number of failed test samples]
  tests_success: [number of successful test samples]
  throughput_custom_cmd: [true if a custom command was used for throughput testing]
  throughput_https: [true if throughput test was over https]
  throughput_size: [throughput test size for each sample in megabytes]
  throughput_time: [true if throughput metrics are based on request times (ms) instead of transfer rate (Mb/s)]
  throughput_transfer: [total transfer for throughput tests in MB]
  throughput_threads
  timeout: [test timeout]
  traceroute: [URL to traceroute - if status is failed (if --store option used)]


USAGE
# perform downlink throughput tests against the test endpoint
# cloudfront.cloudharmony.net
./run.sh --meta_compute_service_id aws:ec2 --meta_region us-east-1 --test downlink --test_endpoint cloudfront.cloudharmony.net --test_service_id aws:cloudfront

# perform latency and throughput tests against 3 google compute instances
./run.sh --test latency --test throughput --test_endpoint us-central1.gce.cloudharmony.net --test_endpoint europe-west1.gce.cloudharmony.net --test_endpoint asia-east1.gce.cloudharmony.net --test_service_id google:compute

# save.sh saves results to CSV, MySQL, PostgreSQL, BigQuery, Librato Metrics or
# via HTTP callback. It can also save artifacts (traceroutes) to S3, Azure Blob
# Storage or Google Cloud Storage

# save results to CSV files
./save.sh

# save results in ~/stream-testing
./save.sh ~/stream-testing

# save results to a PostgreSQL database
./save.sh --db postgresql --db_user dbuser --db_pswd dbpass --db_host db.mydomain.com --db_name benchmarks

# save results to BigQuery and artifacts to S3
./save.sh --db bigquery --db_name benchmark_dataset --store s3 --store_key THISIH5TPISAEZIJFAKE --store_secret thisNoat1VCITCGggisOaJl3pxKmGu2HMKxxfake --store_container benchmarks1234

# save results to Librato Metrics using the median metric and custom name/source
./save.sh --db librato --db_user [user] --db_pswd [API key] -v --db_librato_aggregate --db_librato_value metric

# save results to Librato Metrics using count + sum, custom name/source and other attributes
./save.sh --db librato --db_user [user] --db_pswd [API key] -v --db_librato_aggregate --db_librato_count samples --db_librato_display_units_short ms --db_librato_max metric_max --db_librato_min metric_min --db_librato_measure_time test_stopped --db_librato_name "{benchmark}-{test}" --db_librato_period 300 --db_librato_source "{meta_geo_region}" --db_librato_sum metric_sum --db_librato_sum_squares metric_sum_squares
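The throughput_webpage URL validation described earlier (a 2XX response whose size is within 5% of the first endpoint's) can be sketched with curl's standard --write-out variables. This is a hypothetical illustration, not part of the benchmark, and the example URL is a placeholder:

```shell
#!/bin/sh
# Hedged sketch of the throughput_webpage validation rule: a URL is valid
# if it returns a 2XX status and its response size is within 5% of the
# size of the first endpoint's response.

# within_5_percent BASE SIZE -> exit status 0 if SIZE is within 5% of BASE
within_5_percent() {
  base=$1; size=$2
  diff=$(( size > base ? size - base : base - size ))
  [ $(( diff * 100 )) -le $(( base * 5 )) ]
}

# url_status_and_size URL -> prints "HTTP_CODE SIZE_BYTES"
# %{http_code} and %{size_download} are standard curl --write-out variables
url_status_and_size() {
  curl -s -o /dev/null -w '%{http_code} %{size_download}' "$1"
}

# Hypothetical usage: fetch the first endpoint's URL to establish the
# reference size, then validate a candidate URL against it
# set -- $(url_status_and_size "https://example.com/web-probe/test.html")
# base_code=$1; base_size=$2
# case $base_code in 2??) within_5_percent "$base_size" "$candidate_size";; esac
```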