
Kesin11 / CIAnalyzer

License: MIT License
A tool that collects build data from multiple CI services and exports it for creating a self-hosted build dashboard.

Programming Languages

TypeScript
Earthly
JavaScript

Projects that are alternatives to or similar to CIAnalyzer

megalinter
🦙 Mega-Linter analyzes 48 languages, 22 formats, 19 tooling formats, excessive copy-pastes, spelling mistakes and security issues in your repository sources with a GitHub Action, other CI tools or locally.
Stars: ✭ 534 (+926.92%)
Mutual labels:  jenkins, ci, github-actions
CI-Utils
Utilities for running Common Lisp on CI platforms
Stars: ✭ 18 (-65.38%)
Mutual labels:  circleci, ci, github-actions
Env Ci
Get environment variables exposed by CI services
Stars: ✭ 180 (+246.15%)
Mutual labels:  jenkins, circleci, ci
Ci Matters
Integration (comparison) of different continuous integration services on Android project
Stars: ✭ 119 (+128.85%)
Mutual labels:  jenkins, circleci, ci
Ci Detector
Detect continuous integration environment and get information of current build
Stars: ✭ 138 (+165.38%)
Mutual labels:  jenkins, circleci, ci
Nevergreen
🐤 A build monitor with attitude
Stars: ✭ 170 (+226.92%)
Mutual labels:  jenkins, circleci, ci
ci-skip
CI skip comment
Stars: ✭ 35 (-32.69%)
Mutual labels:  circleci, bitrise, github-actions
jcefbuild
Binary builds of java-cef
Stars: ✭ 160 (+207.69%)
Mutual labels:  ci, github-actions
install-swift
GitHub Action to install a version of Swift 🏎
Stars: ✭ 23 (-55.77%)
Mutual labels:  ci, github-actions
arduino-lint-action
GitHub Actions action to check Arduino projects for problems
Stars: ✭ 20 (-61.54%)
Mutual labels:  ci, github-actions
steps-git-clone
No description or website provided.
Stars: ✭ 14 (-73.08%)
Mutual labels:  ci, bitrise
update-container-description-action
github action to update a Docker Hub, Quay or Harbor repository description from a README file
Stars: ✭ 20 (-61.54%)
Mutual labels:  ci, github-actions
setup-scheme
Github Actions CI / CD setup for Scheme
Stars: ✭ 13 (-75%)
Mutual labels:  ci, github-actions
steps-xcode-test
Xcode Test step
Stars: ✭ 26 (-50%)
Mutual labels:  ci, bitrise
googletest-ci
Continuous integration (CI) + Google Test (gtest) + CMake example boilerplate demo
Stars: ✭ 14 (-73.08%)
Mutual labels:  circleci, github-actions
setup-swift
GitHub Action that setup a Swift environment
Stars: ✭ 114 (+119.23%)
Mutual labels:  ci, github-actions
docker-coala-base
coala base docker image
Stars: ✭ 20 (-61.54%)
Mutual labels:  circleci, ci
overview
Automate your workflows with GitHub actions for MATLAB.
Stars: ✭ 40 (-23.08%)
Mutual labels:  ci, github-actions
steps-cocoapods-install
No description or website provided.
Stars: ✭ 19 (-63.46%)
Mutual labels:  ci, bitrise
drupal9ci
One-line installers for implementing Continuous Integration in Drupal 9
Stars: ✭ 137 (+163.46%)
Mutual labels:  circleci, github-actions

CIAnalyzer


CIAnalyzer is a tool for collecting build data from CI services. You can create a dashboard from the collected data to analyze your builds.

Motivation

Today, many CI services provide the ability to build applications, docker images, and many other things. Since some of these builds can take a long time, you may want to analyze your build data: average build time, success rate, etc.

Unfortunately, few services provide a dashboard for analyzing build data. As far as I know, Azure Pipelines provides a great feature called Pipeline reports, but it only shows data for builds that were run in Azure Pipelines.

CIAnalyzer collects build data using each service's API, then normalizes the data format and exports it. So you can create a dashboard that allows you to analyze build data across multiple CI services using your favorite BI tools.

Sample dashboard

CIAnalyzer sample dashboard (DataStudio)

It was created with DataStudio using BigQuery.

Architecture

CIAnalyzer Architecture

Export data

Workflow

A workflow is data about a job executed in CI. The workflow data includes the following items:

  • Executed date
  • Duration time
  • Status (Success, Failed, Aborted, etc.)
  • Build number
  • Trigger type
  • Repository
  • Branch
  • Tag
  • Queued time
  • Commit
  • Actor
  • Workflow URL
  • Executor data

See full schema: workflow.proto

Test report

A test report is data about tests. If you output test results as JUnit-format XML and store them as build artifacts, CIAnalyzer can collect them.

  • Executed date
  • Duration time
  • Status (Success, Failed, Skipped, etc.)
  • Test name
  • Number of tests
  • Number of failed tests
  • Branch

See full schema: test_report.proto
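If your CI jobs output JUnit XML and archive it as a build artifact, the paths to scan are configured per repository in the config YAML (see "Edit config YAML" below). A rough sketch, assuming a per-repository tests glob like the one used in the sample config:

github:
  repos:
    - name: your-org/your-repo
      tests:
        - "**/*.xml"   # glob for JUnit XML files stored as build artifacts (illustrative)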

Supported services

  • CI services
    • GitHub Actions
    • CircleCI (the enterprise version is also supported)
    • Jenkins (Pipeline jobs only)
    • Bitrise
  • Export
    • BigQuery
    • Local file (outputs JSON or JSON Lines)

USAGE

docker run \
  --mount type=bind,src=${PWD},dst=/app/ \
  --mount type=bind,src=${SERVICE_ACCOUNT},dst=/service_account.json \
  -e GITHUB_TOKEN=${GITHUB_TOKEN} \
  -e CIRCLECI_TOKEN=${CIRCLECI_TOKEN} \
  -e JENKINS_USER=${JENKINS_USER} \
  -e JENKINS_TOKEN=${JENKINS_TOKEN} \
  -e BITRISE_TOKEN=${BITRISE_TOKEN} \
  -e GOOGLE_APPLICATION_CREDENTIALS=/service_account.json \
  ghcr.io/kesin11/ci_analyzer:v4 -c ci_analyzer.yaml

Container tagging scheme

The versioning follows Semantic Versioning:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

The most recommended tag for users is v{major}. If you prefer more conservative versioning, v{major}.{minor} or v{major}.{minor}.{patch} are also available.

Tag                       Updated when     For
v{major}                  Create release   User
v{major}.{minor}          Create release   User
v{major}.{minor}.{patch}  Create release   User
latest                    Create release   Developer
master                    Push to master   Developer

Setup ENV

  • Services
    • GITHUB_TOKEN: GitHub auth token
    • CIRCLECI_TOKEN: CircleCI API token
    • JENKINS_USER: Username for logging in to your Jenkins
    • JENKINS_TOKEN: Jenkins user API token
    • BITRISE_TOKEN: Bitrise personal access token
  • Exporter
    • GOOGLE_APPLICATION_CREDENTIALS: GCP service account JSON path
  • LastRunStore
    • GOOGLE_APPLICATION_CREDENTIALS

Setup BigQuery (Recommended)

If you want to use the bigquery exporter, you have to create the dataset and tables that CIAnalyzer will export data to.

# Prepare bigquery schema json files
git clone https://github.com/Kesin11/CIAnalyzer.git
cd CIAnalyzer

# Create dataset
bq mk \
  --project_id=${GCP_PROJECT_ID} \
  --location=${LOCATION} \
  --dataset \
  ${DATASET}

# Create tables
bq mk \
  --project_id=${GCP_PROJECT_ID} \
  --location=${LOCATION} \
  --table \
  --time_partitioning_field=createdAt \
  ${DATASET}.${WORKFLOW_TABLE} \
  ./bigquery_schema/workflow_report.json

bq mk \
  --project_id=${GCP_PROJECT_ID} \
  --location=${LOCATION} \
  --table \
  --time_partitioning_field=createdAt \
  ${DATASET}.${TEST_REPORT_TABLE} \
  ./bigquery_schema/test_report.json

The GCP service account used for CIAnalyzer also needs some BigQuery permissions. Please attach roles/bigquery.dataEditor and roles/bigquery.jobUser. For more detail, check the BigQuery access control documentation.
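The dataset and table names created above then go into the bigquery exporter section of the config YAML. A rough sketch (key names are illustrative; the sample ci_analyzer.yaml is the authoritative reference):

github:
  exporter:
    bigquery:
      project: $GCP_PROJECT_ID     # placeholders matching the bq mk variables above; replace with real names
      dataset: $DATASET
      reports:
        - name: workflow           # exports workflow data to the workflow table
          table: $WORKFLOW_TABLE
        - name: test_report        # exports test reports to the test report table
          table: $TEST_REPORT_TABLE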

Setup GCS bucket (Recommended)

What is LastRunStore

CIAnalyzer collects build data from each CI service API, but some of it may duplicate data collected in previous runs. To remove these duplicates, it is necessary to save the last build number from the previous run and output only the difference.

After CIAnalyzer collects build data successfully, it saves each job's last build number and loads it before the next execution. This feature is called LastRunStore.

By default, CIAnalyzer uses a local JSON file as the backend for LastRunStore. However, the last build number sometimes needs to be shared across machines, for example when running CIAnalyzer on a Jenkins setup with multiple nodes.

To resolve this problem, CIAnalyzer can use GCS as the LastRunStore backend and read/write the last build number from any machine. This was inspired by the Terraform backend.

Create GCS bucket

If you want to use lastRunStore.backend: gcs, you have to create a GCS bucket before executing CIAnalyzer.

gsutil mb -b on -l ${LOCATION} gs://${BUCKET_NAME}

The GCP service account also needs read and write permissions for the target bucket. For more detail, check the GCS access control documentation.
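The bucket then goes into the lastRunStore section of the config YAML. A rough sketch, assuming project and bucket keys alongside the documented backend: gcs (check the sample ci_analyzer.yaml for where this block sits and for the exact keys):

lastRunStore:
  backend: gcs                # documented backend name
  project: $GCP_PROJECT_ID    # illustrative placeholder; replace with your project
  bucket: $BUCKET_NAME        # the bucket created above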

Edit config YAML

Copy ci_analyzer.yaml and edit it to your preferred configuration. CIAnalyzer uses ci_analyzer.yaml as its config file by default, but this can be changed with the -c option.

For more detail about the config file, please check ci_analyzer.yaml and the sample files.
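As a starting point, a minimal configuration might look roughly like the sketch below (key names are illustrative; the sample ci_analyzer.yaml in the repository is the authoritative reference):

github:
  repos:
    - name: your-org/your-repo   # repositories to collect workflow data from
  exporter:
    local:
      outDir: ./output           # write output to a local directory (illustrative keys)
      format: json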

Execute on CI service with cron (Recommended)

CIAnalyzer is designed as a one-shot tool that is run periodically, not as a resident agent. It's a good idea to run it on a cron schedule on CI services such as CircleCI or Jenkins.

Please check the samples, then copy one and edit it to fit your configuration.
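For example, a scheduled GitHub Actions workflow along the following lines would run CIAnalyzer once a day (a sketch only; adjust the schedule, secrets, mounts, and config path to your environment):

name: Run CIAnalyzer
on:
  schedule:
    - cron: '0 0 * * *'   # run once a day
jobs:
  ci_analyzer:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run CIAnalyzer
        run: |
          docker run \
            --mount type=bind,src=${PWD},dst=/app/ \
            -e GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }} \
            ghcr.io/kesin11/ci_analyzer:v4 -c ci_analyzer.yaml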

Sample output JSON

Workflow report

{
  "service": "circleci",
  "workflowId": "Kesin11/CIAnalyzer-ci",
  "buildNumber": 306,
  "workflowRunId": "Kesin11/CIAnalyzer-ci-306",
  "workflowName": "ci",
  "createdAt": "2020-05-21T01:08:06.800Z",
  "trigger": "github",
  "status": "SUCCESS",
  "repository": "Kesin11/CIAnalyzer",
  "headSha": "09f1d6d398c108936ff7973139fcbf1793d74f8f",
  "branch": "master",
  "tag": "v0.2.0",
  "startedAt": "2020-05-21T01:08:09.632Z",
  "completedAt": "2020-05-21T01:08:53.469Z",
  "workflowDurationSec": 40.752,
  "sumJobsDurationSec": 39.959,
  "successCount": 1,
  "parameters": [],
  "jobs": [
    {
      "workflowRunId": "Kesin11/CIAnalyzer-ci-306",
      "buildNumber": 306,
      "jobId": "24f03e1a-1699-4237-971c-ebc6c9b19baa",
      "jobName": "build_and_test",
      "status": "SUCCESS",
      "startedAt": "2020-05-21T01:08:28.347Z",
      "completedAt": "2020-05-21T01:08:53.469Z",
      "jobDurationSec": 25.122,
      "sumStepsDurationSec": 24.738,
      "steps": [
        {
          "name": "Spin Up Environment",
          "status": "SUCCESS",
          "number": 0,
          "startedAt": "2020-05-21T01:08:28.390Z",
          "completedAt": "2020-05-21T01:08:30.710Z",
          "stepDurationSec": 2.32
        },
        {
          "name": "Preparing Environment Variables",
          "status": "SUCCESS",
          "number": 99,
          "startedAt": "2020-05-21T01:08:30.956Z",
          "completedAt": "2020-05-21T01:08:30.984Z",
          "stepDurationSec": 0.028
        },
        {
          "name": "Checkout code",
          "status": "SUCCESS",
          "number": 101,
          "startedAt": "2020-05-21T01:08:30.993Z",
          "completedAt": "2020-05-21T01:08:31.502Z",
          "stepDurationSec": 0.509
        },
        {
          "name": "Restoring Cache",
          "status": "SUCCESS",
          "number": 102,
          "startedAt": "2020-05-21T01:08:31.509Z",
          "completedAt": "2020-05-21T01:08:32.737Z",
          "stepDurationSec": 1.228
        },
        {
          "name": "npm ci",
          "status": "SUCCESS",
          "number": 103,
          "startedAt": "2020-05-21T01:08:32.747Z",
          "completedAt": "2020-05-21T01:08:37.335Z",
          "stepDurationSec": 4.588
        },
        {
          "name": "Build",
          "status": "SUCCESS",
          "number": 104,
          "startedAt": "2020-05-21T01:08:37.341Z",
          "completedAt": "2020-05-21T01:08:43.371Z",
          "stepDurationSec": 6.03
        },
        {
          "name": "Test",
          "status": "SUCCESS",
          "number": 105,
          "startedAt": "2020-05-21T01:08:43.381Z",
          "completedAt": "2020-05-21T01:08:53.369Z",
          "stepDurationSec": 9.988
        },
        {
          "name": "Save npm cache",
          "status": "SUCCESS",
          "number": 106,
          "startedAt": "2020-05-21T01:08:53.376Z",
          "completedAt": "2020-05-21T01:08:53.423Z",
          "stepDurationSec": 0.047
        }
      ]
    }
  ]
}

Test report

[
  {
    "workflowId": "Kesin11/CIAnalyzer-CI",
    "workflowRunId": "Kesin11/CIAnalyzer-CI-170",
    "buildNumber": 170,
    "workflowName": "CI",
    "createdAt": "2020-08-09T10:20:28.000Z",
    "branch": "feature/fix_readme_for_v2",
    "service": "github",
    "status": "SUCCESS",
    "successCount": 1,
    "testSuites": {
      "name": "CIAnalyzer tests",
      "tests": 56,
      "failures": 0,
      "time": 9.338,
      "testsuite": [
        {
          "name": "__tests__/analyzer/analyzer.test.ts",
          "errors": 0,
          "failures": 0,
          "skipped": 0,
          "timestamp": "2020-08-09T10:22:18",
          "time": 3.688,
          "tests": 17,
          "testcase": [
            {
              "classname": "Analyzer convertToReportTestSuites Omit some properties",
              "name": "testcase.error",
              "time": 0.003,
              "successCount": 1,
              "status": "SUCCESS"
            },
            {
              "classname": "Analyzer convertToReportTestSuites Omit some properties",
              "name": "testcase.failure",
              "time": 0,
              "successCount": 1,
              "status": "SUCCESS"
            },
    ...

Collect and export any JSON from build artifacts

You can export any data related to a build with CustomReport. CIAnalyzer can collect JSON files of any structure from CI build artifacts. If you want to collect some data and export it to BigQuery (or elsewhere), just create a JSON file that includes your preferred data and store it in the CI build artifacts.

1. Create schema file for your CustomReport table

Create a BigQuery schema JSON like this sample schema JSON and save it to any path you want.

These columns are required in your schema:

name            type
workflowId      STRING
workflowRunId   STRING
createdAt       TIMESTAMP

2. Create BigQuery table

As introduced in "Setup BigQuery" above, create the BigQuery table using the bq mk command like this.

bq mk \
  --project_id=${YOUR_GCP_PROJECT_ID} \
  --location=${LOCATION} \
  --table \
  --time_partitioning_field=createdAt \
  ${DATASET}.${TABLE} \
  /path/to/your/custom_report_schema.json

3. Add CustomReport config

Add your CustomReport JSON path (import target) to each repo (job) entry and the BigQuery table info (export target) to your config YAML.

See sample ci_analyzer.yaml.

bigquery.customReports[].schema is the BigQuery schema JSON created in step 1. It accepts an absolute path or a path relative to your config YAML.
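Putting the import and export sides together, the relevant parts of the config might look roughly like this (a sketch; apart from bigquery.customReports[].schema, key names may differ slightly from the sample ci_analyzer.yaml):

github:
  repos:
    - name: your-org/your-repo
      customReports:
        - name: my_custom_report         # illustrative report name
          paths:
            - "custom_report.json"       # JSON file stored in the build artifacts
  exporter:
    bigquery:
      customReports:
        - name: my_custom_report         # must match the report name above
          table: $CUSTOM_REPORT_TABLE
          schema: ./custom_schema/custom_report_schema.json   # schema created in step 1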

NOTICE: When you run CIAnalyzer using docker, bigquery.customReports[].schema is a path inside the CIAnalyzer docker container. This can be confusing, so it is recommended to mount your custom schema JSON at the same path as your ci_analyzer.yaml, as described in the next step.

4. Mount custom schema JSON at docker run (Only using docker)

To let CIAnalyzer, which runs inside the container, load your custom schema JSON, you also have to mount the JSON with an additional docker run --mount option, for example:

--mount type=bind,src=${CUSTOM_SCHEMA_DIR_PATH},dst=/app/custom_schema

See sample cron.jenkinsfile.

Roadmap

  • Collect test data
  • Collect arbitrary JSON from build artifacts
  • Support Bitrise
  • Support CircleCI API v2
  • Implement better logger
  • Better error message
  • Export commit message
  • Export executor data (CircleCI, Bitrise)

Debug options

  • Fetch from selected services only
    • --only-services
    • ex: --only-services github circleci
  • Use selected exporters only
    • --only-exporters
    • ex: --only-exporters local
  • Enable debug mode
    • --debug
    • Limits fetching to only 10 build results per service
    • Exports results to the local exporter only
    • Skips loading and storing the last build number
  • Enable debug log
    • export CI_ANALYZER_DEBUG=1

Development

Install and test

npm ci
npm run build
npm run test

Generate pb_types and bigquery_schema from .proto files

Install Earthly first and then execute this command.

npm run proto

Docker build

Install Earthly first and then execute this command.

npm run docker

Execute CIAnalyzer using Node.js

npm run start
# or
node dist/index.js -c your_custom_config.yaml

LICENSE

MIT
