analysis-dev / save

License: MIT license
Universal test framework for CLI tools (mainly for code analyzers and compilers)

Projects that are alternatives to or similar to save

go-recipes
🦩 Tools for Go projects
Stars: ✭ 2,490 (+7445.45%)
Mutual labels:  static-analysis, compilers
Bolt
Bolt is a language with in-built data-race freedom!
Stars: ✭ 215 (+551.52%)
Mutual labels:  static-analysis, compilers
Static Analysis
⚙️ A curated list of static analysis (SAST) tools for all programming languages, config files, build tools, and more.
Stars: ✭ 9,310 (+28112.12%)
Mutual labels:  static-analysis, static-analyzers
Fortran-Tools
Fortran compilers, preprocessors, static analyzers, transpilers, IDEs, build systems, etc.
Stars: ✭ 31 (-6.06%)
Mutual labels:  static-analysis, compilers
go-mnd
Magic number detector for Go.
Stars: ✭ 153 (+363.64%)
Mutual labels:  static-analysis
mllint
`mllint` is a command-line utility to evaluate the technical quality of Python Machine Learning (ML) projects by means of static analysis of the project's repository.
Stars: ✭ 67 (+103.03%)
Mutual labels:  static-analysis
phpstan-nette
Nette Framework class reflection extension for PHPStan & framework-specific rules
Stars: ✭ 87 (+163.64%)
Mutual labels:  static-analysis
phpstan
PHP Static Analysis in Github Actions.
Stars: ✭ 41 (+24.24%)
Mutual labels:  static-analysis
PhpCodeAnalyzer
PhpCodeAnalyzer scans a codebase and analyzes which non-built-in PHP extensions are used
Stars: ✭ 91 (+175.76%)
Mutual labels:  static-analysis
eslint-plugin-vue-scoped-css
ESLint plugin for Scoped CSS in Vue.js
Stars: ✭ 58 (+75.76%)
Mutual labels:  static-analysis
rstatic
An R package for static analysis of R code.
Stars: ✭ 32 (-3.03%)
Mutual labels:  static-analysis
jitana
A graph-based static-dynamic hybrid DEX code analysis tool
Stars: ✭ 35 (+6.06%)
Mutual labels:  static-analysis
unimport
unimport is a Go static analysis tool to find unnecessary import aliases.
Stars: ✭ 64 (+93.94%)
Mutual labels:  static-analysis
awesome-malware-analysis
Defund the Police.
Stars: ✭ 9,181 (+27721.21%)
Mutual labels:  static-analysis
wasm-script
Compile WebAssembly in your HTML
Stars: ✭ 28 (-15.15%)
Mutual labels:  compilers
klara
Automatic test case generation and static analysis library for Python
Stars: ✭ 250 (+657.58%)
Mutual labels:  static-analysis
phpstan-webmozart-assert
PHPStan extension for webmozart/assert
Stars: ✭ 132 (+300%)
Mutual labels:  static-analysis
phpstan-dba
PHPStan based SQL static analysis and type inference for the database access layer
Stars: ✭ 163 (+393.94%)
Mutual labels:  static-analysis
sonarlint4netbeans
SonarLint integration for Apache Netbeans
Stars: ✭ 23 (-30.3%)
Mutual labels:  static-analysis
OCCAM
OCCAM: Object Culling and Concretization for Assurance Maximization
Stars: ✭ 20 (-39.39%)
Mutual labels:  static-analysis

Save is an all-purpose command-line test framework that can be used for testing development tools, especially those that work with code. It is a fully native, multiplatform application.

Quick start

• CLI properties
• examples
• save.toml config
• Warn plugin
• Fix plugin
• Save presentation

What is SAVE?

Static Analysis Verification and Evaluation (SAVE) is an ecosystem (see also save-cloud) for the evaluation, testing, and certification of static analyzers. Instead of writing your own test framework, you can use SAVE as a ready-made command-line test application. The only thing you need is to prepare your test resources in the proper format.

SAVE can be used not only with static analyzers: it also works as a test framework for writing functional tests for other development tools, such as compilers, since the testing principles remain the same.

How to start

  1. Prepare and configure your test base in the proper format. See test_detection and plugins.
  2. Run save "/my/path/to/tests". The tests directory should contain a save.toml configuration file (see the sketch below).
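
For example, with a layout like this (the test file name is illustrative):

| tests
  | save.toml
  | MyFirstTest.kt

save "/my/path/to/tests"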

Plugins with examples

Here is a list of standard default plugins:

  • warn plugin for testing tools that find problems in the source code and emit warnings
  • fix plugin for testing tools, such as static analyzers, that mutate the text
  • fix-and-warn plugin, an optimization for when you would like to fix a file and then, in the same execution, check the warnings that the tool was not able to fix

If you would like several plugins to work on the same test files (resources) in one directory, simply add them all to the save.toml config, as in the skeleton below:

[general]
...

[fix]
...

[warn]
...

[other plugin]
...
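
A hypothetical filled-in version of that skeleton might look like the following. The [general] keys come from the table described below; the contents of the plugin tables are purely illustrative, so see each plugin's documentation for its real options:

[general]
tags = ["null-pointer"]
description = "Tests that the tool fixes code and reports remaining warnings"
suiteName = "NpeTests"
# hypothetical tool under test
execCmd = "./my-analyzer"

[fix]
# fix-plugin options for this suite go here

[warn]
# warn-plugin options for this suite go here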

Save warnings DSL

You can read more about the warn plugin here.

How to configure

SAVE has a command-line interface that runs the framework and your executable. You simply need to configure the output of your static analyzer so that SAVE can check whether the proper error was raised on the proper line of the test code.

For SAVE to check that a warning is correct, your static analyzer must print its results to stderr/stdout or to some log file.
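
For instance, a tool under test might print lines like this to stdout. The format is purely illustrative: SAVE matches whatever format your plugin configuration describes:

MyTest.kt:5:11: avoid using magic numbers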

The general behavior of SAVE can be configured using command-line arguments or a save.properties configuration file placed in the same folder as the root test config save.toml.

For the complete list of supported options that can be passed to SAVE via the command line or the save.properties file, please refer to the options table or run save --help. Note that options with a fixed set of choices are case-sensitive.

Example of save.properties file:

reportType=plain
language=c++

Or you can pass these arguments directly on the command line:

save --report-type json --language java

The SAVE framework automatically detects your tests, runs your analyzer on them, calculates the pass rate, and returns the test results in the expected format.

Test detection and save.toml file

To make SAVE detect your test suites, you need to put a save.toml file in each directory containing tests that should be run. Note that these configuration files inherit configuration from the levels of directories above them.

Although almost all fields can be left undefined at lower levels and inherited from the top level, be careful: some fields in the [general] section are required for execution, so you need to provide them in at least one config in the inheritance chain for each test that should be run. See which fields are required.

For example, given the following hierarchy of directories:

| A
  | save.toml
  | B
    | save.toml

the save.toml in directory B will inherit settings and properties from directory A.
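
A minimal sketch of that inheritance, with illustrative values:

# A/save.toml: defines the required settings once at the top level
[general]
suiteName = "MySuite"
description = "Shared settings for all nested suites"
tags = ["common"]
execCmd = "./ktlint -R diktat-0.4.2.jar"

# A/B/save.toml: overrides only what differs; everything else is inherited from A
[general]
tags = ["chapter1"]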

Please note that SAVE detects all files with the Test postfix and automatically uses the configuration from the save.toml file placed in the directory. Tests are named after the test resource file name without the 'Test' suffix, so a resource named MyTest.java yields a test named My. If SAVE detects a file with the Test postfix in the test resources but cannot find any save.toml configuration in the directory hierarchy, it will raise an error.

For example, the following layout is invalid and will cause an error, because the SAVE framework will not be able to find a save.toml configuration file:

| A
  | B
  | myTest.java

As described above, save.toml is needed to configure tests. The idea is to have only one configuration file per directory of tests (a one-to-many relation); we call such directories test suites. We decided on a single configuration file because we have often seen configuration duplicated across tests within the same suite.

save.toml configuration file

SAVE configuration uses the TOML format. As noted above, save.toml settings can be inherited through the directory hierarchy. The configuration file has a [general] table and plugin tables. To see more information about plugins, read this section. Here we describe only the [general] table, which can be used with all plugins.

[general]
# your custom tags that will be used to detect groups of tests (required)
tags = ["parsing", "null-pointer"]

# custom free text that describes the test suite (required)
description = "My suite description"

# simple suite name (required), e.g. DocsCheck, CaseCheck, NpeTests
suiteName = "DocsCheck"

# FixMe: add tests that check that it is required and that it can be overwritten by child configs
# execution command (required at least once in the configuration hierarchy)
execCmd = "./ktlint -R diktat-0.4.2.jar"

# tests excluded from the suite (optional): provide the names of excluded tests, separated by commas. By default no tests are excluded.
# to exclude tests, use paths relative to the root of the test project (the directory of the root `save.toml`)
excludedTests = ["warn/chapter1/GarbageTest.kt", "warn/otherDir/NewTest.kt"]

# execution time limit for one test (milliseconds)
timeOutMillis = 10000

# language for tests
language = "Kotlin"

Executing specific tests

It can be useful to execute only some tests instead of all tests under a particular save.toml config. To do so, pass a relative path to the test file after all configuration options:

$ save [options] /path/to/tests/Test1

or a list of relative paths to test files (separated with spaces)

$ save [options] /path/to/tests/Test1 /path/to/tests/Test2

SAVE will detect the closest save.toml file and use the configuration from there.

Note: on Windows, you may need to use a double backslash \\ as the path separator.
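
For example, an illustrative Windows invocation:

$ save [options] path\\to\\tests\\Test1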

Using plugins for specific test-scenarios

SAVE doesn't have any inspections active by default; instead, the behavior of the analysis is fully configurable using plugins.

// FixMe: Custom plugins are not yet fully supported; do not use custom plugins. Plugins are dynamic libraries (.so or .dll) that should be provided using the --plugins-path argument. Some plugins are bundled with SAVE out of the box and don't require additional setup.

SAVE output

SAVE supports several formats for test result output: PLAIN (a markdown-like table with all test results), PLAIN_FAILED (same as PLAIN, but without passed tests) and JSON (a structured representation of the execution result). The format can be selected with the --report-type option.
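
For example, to report only failed tests (the flag value follows the lowercase style of the earlier examples):

$ save --report-type plain_failed "/my/path/to/tests"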

Purpose of Static Analysis Verification and Evaluation (SAVE) project

Using static analyzers is a very important part of developing any software product. Everyone makes mistakes in code, even a developer who writes all kinds of tests and has very good test coverage, and such issues can lead to real financial losses for companies. Static analysis of programs helps to reduce the number of bugs and issues that cannot be found by validation on the compiler's side.

There are different kinds and purposes of static analysis: it can be a simple analysis using an AST (abstract syntax tree), or more complex CFA (control-flow analysis), interprocedural analysis, context-sensitive analysis, etc. Static analyzers can check code style, find potential runtime issues in the logic of an application, check for code smells, and suggest best practices. But what exactly should static analyzers do? How can their functionality be measured? What are the acceptance criteria? Which functionality do developers really need when they are writing a brand-new analyzer? These questions remain unanswered, even after decades of static analyzer development.

The problem

Each and every creator of static analyzers starts the development journey with a very simple thing: the types of issues the tool will detect. This leads to a search for existing lists of potential issues, or for test packages that can be used to measure the results of the work or be used for TDD (test-driven development). In other areas of system programming such benchmarks and test sets already exist; for example, SPEC.org benchmarks are used all over the world to test functionality and to evaluate and measure the performance of different applications and hardware: from compilers to CPUs, from web servers to Java clients. But there are no test sets, and not even strict standards, for the detection of issues in popular programming languages. There are coding guidelines for C/C++ from MISRA, but no analogues exist even for the most popular languages in the world, such as Python and the JVM languages. There are existing test suites at NIST, but the framework and ecosystem around them remain limited.

In this situation, each new developer who invents a new code style or static analysis mechanism also reinvents a brand-new test framework, writing test sets for their analyzer/linter that have already been written thousands of times. Some use existing guidelines such as the Google code style or PMD rules. But in all cases, a lot of time is spent on reinventing, writing, and debugging tests.

Development

Build

The project uses Gradle as its build system and can be built with the command ./gradlew build. To compile native artifacts, you will need to install the prerequisites described in the Kotlin/Native documentation.

To access dependencies hosted on the GitHub Package Registry, you need to add the following to gradle.properties or ~/.gradle/gradle.properties:

gprUser=<GH username>
gprKey=<GH personal access token>

A personal access token should be generated via https://github.com/settings/tokens/new with a scope containing at least read:packages.

Because of generated code, you will need to run the build once before the project can be correctly imported into an IDE with resolved imports.

Contribution

You can always contribute to the main SAVE framework: just create a PR for it. But to contribute to or change tests in categories, you will need approval from the maintainer of the test package/analysis category. Please see the list of maintainers.

License

This project is licensed under the MIT license.
