
vmarchaud / openprofiling-node

License: Apache-2.0
OpenProfiling is a toolkit for collecting profiling data from production workload safely.

Programming Languages

typescript

Projects that are alternatives of or similar to openprofiling-node

Home
Project Glimpse: Node Edition - Spend less time debugging and more time developing.
Stars: ✭ 260 (+333.33%)
Mutual labels:  diagnostics, profiling
Gops
A tool to list and diagnose Go processes currently running on your system
Stars: ✭ 5,404 (+8906.67%)
Mutual labels:  diagnostics, cpu-profile
Wtrace
Command line tracing tool for Windows, based on ETW.
Stars: ✭ 563 (+838.33%)
Mutual labels:  diagnostics, profiling
express-sls-app
How to deploy a Node.js application to AWS Lambda using Serverless, a quick start.
Stars: ✭ 20 (-66.67%)
Mutual labels:  production
rest-api-node-typescript
This is a simple REST API with node and express with typescript
Stars: ✭ 154 (+156.67%)
Mutual labels:  production
kataw
A 100% spec compliant ES2022 JavaScript toolchain
Stars: ✭ 303 (+405%)
Mutual labels:  diagnostics
ecutools
IoT Automotive Tuning, Diagnostics & Analytics
Stars: ✭ 144 (+140%)
Mutual labels:  diagnostics
terranetes
Terraform boilerplate for production-grade Kubernetes clusters on AWS (optionally includes kube-system components, OpenVPN, an ingress controller, monitoring services...)
Stars: ✭ 15 (-75%)
Mutual labels:  production
audria
audria - A Utility for Detailed Resource Inspection of Applications
Stars: ✭ 35 (-41.67%)
Mutual labels:  profiling
vim-profiler
A vim plugin profiler and data plotter
Stars: ✭ 31 (-48.33%)
Mutual labels:  profiling
editions
📆🆕 Daily Edition app
Stars: ✭ 42 (-30%)
Mutual labels:  production
eaf-linter
🤪 A linter, prettier, and test suite that does everything as-simple-as-possible.
Stars: ✭ 17 (-71.67%)
Mutual labels:  production
vue-production-server-proxy
Boilerplate for Vue project ready for production, with neat implementation of "devServer proxy" in production environment, using Nginx
Stars: ✭ 27 (-55%)
Mutual labels:  production
esp-insights
ESP Insights: A remote diagnostics/observability framework for connected devices
Stars: ✭ 31 (-48.33%)
Mutual labels:  diagnostics
iopipe-go
Go agent for AWS Lambda metrics, tracing, profiling & analytics
Stars: ✭ 18 (-70%)
Mutual labels:  profiling
imgui-flame-graph
A Dear ImGui Widget for displaying Flame Graphs.
Stars: ✭ 93 (+55%)
Mutual labels:  profiling
clockwork-firefox
Clockwork - php dev tools integrated to your browser - Firefox add-on
Stars: ✭ 22 (-63.33%)
Mutual labels:  profiling
defcon
DefCon - Status page and API for production status
Stars: ✭ 12 (-80%)
Mutual labels:  production
adapt
A package for designing activity-informed nucleic acid diagnostics for viruses.
Stars: ✭ 16 (-73.33%)
Mutual labels:  diagnostics
ros jetson stats
🐢 The ROS jetson-stats wrapper. The status of your NVIDIA jetson in diagnostic messages
Stars: ✭ 55 (-8.33%)
Mutual labels:  diagnostics


NOTE: This project is deprecated; OpenTelemetry is discussing adding support for profiling.

OpenProfiling is a toolkit for collecting profiling data from production workload safely.

The project's goal is to empower developers to understand how their applications behave in production, with minimal performance impact and without vendor lock-in.

The library is in an alpha stage and the API is subject to change.

I expect that the library will not match everyone's use cases, so if that is your situation, please open an issue so we can discuss how the toolkit could meet yours.

The NodeJS implementation is currently tested against all recent NodeJS LTS releases (10, 12) and the most recent major release (14).

Use cases

An application has a memory leak

The recommended profiler is the Heap Sampling Profiler, which has the lowest performance impact; here are the instructions on how to use it (a minimal setup sketch is also included at the end of this section). After getting the exported file, you can go to speedscope to analyze it. If we load an example heap profile and head to the Sandwich panel, we can see a list of functions sorted by how much memory they allocated.

At the left of the table there are two columns (illustrated by the sketch below):

  • self memory: how much memory the function itself allocated, not counting any functions it called.
  • total memory: the memory the function allocated plus all the memory allocated by the functions it called.
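
As a contrived illustration (the function names below are made up for the example), buildPayload would show a large self memory because the allocation happens in its own body, while handler would show a large total memory but a small self memory:

function buildPayload (): number[] {
  // the allocation happens here, so it counts towards buildPayload's *self* memory
  return new Array(1_000_000).fill(0)
}

function handler (): number[] {
  // handler allocates almost nothing itself (low self memory), but the memory
  // allocated by buildPayload is included in handler's *total* memory
  return buildPayload()
}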

Note that the top function in the view should not automatically be considered a leak: for example, when you receive an HTTP request, NodeJS allocates some memory for it, but that memory is freed after the request finishes. The view only shows where memory is allocated, not where it leaks.

We highly recommend reading the profiler's documentation to understand all the pros and cons of using it.
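
For reference, a minimal setup for the heap sampling profiler could look like the sketch below; it mirrors the CPU example from the Configure section further down, swapping in the @openprofiling/inspector-heap-profiler package, and SIGUSR2 is just one possible choice of signal:

import { ProfilingAgent } from '@openprofiling/nodejs'
import { FileExporter } from '@openprofiling/exporter-file'
import { InspectorHeapProfiler } from '@openprofiling/inspector-heap-profiler'
import { SignalTrigger } from '@openprofiling/trigger-signal'

const profilingAgent = new ProfilingAgent()
// collect a heap sampling profile when the process receives SIGUSR2
profilingAgent.register(new SignalTrigger({ signal: 'SIGUSR2' }), new InspectorHeapProfiler({}))
// write the resulting profile to disk (by default under /tmp)
profilingAgent.start({ exporter: new FileExporter() })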

An application is using too much CPU

The recommended profiler is the CPU JS Sampling Profiler, which is designed for production profiling (low overhead); check the instructions to get it running. After getting the exported file, you can go to speedscope to analyze it. If we load an example CPU profile and head to the Sandwich panel again, we can see a list of functions sorted by how much CPU they used.

As with the heap profiler, there are two concepts for reading the table (illustrated by the sketch below):

  • self time: the CPU time spent in the function itself, not counting the functions it called.
  • total time: the opposite of self; it represents the time spent in the function plus the time spent in all the functions it called.
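
Again purely as an illustration (the names are invented for the example), computePrimes below would show a high self time because the loop itself burns the CPU, while handleRequest would show a high total time but almost no self time:

function computePrimes (limit: number): number[] {
  const primes: number[] = []
  for (let n = 2; n < limit; n++) {
    // the CPU time is spent in this loop, so it is attributed to computePrimes' *self* time
    if (primes.every(p => n % p !== 0)) primes.push(n)
  }
  return primes
}

function handleRequest (): number[] {
  // very little work of its own (low self time), but a high *total* time
  // because it calls computePrimes
  return computePrimes(100000)
}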

You should then look for functions that have a high self time, which means that their own code takes a lot of time to execute.

We highly recommend reading the profiler's documentation to understand all the pros and cons of using it.

Installation

Install OpenProfiling for NodeJS with:

yarn add @openprofiling/nodejs

or

npm install @openprofiling/nodejs

Configure

Before running your application with @openprofiling/nodejs, you will need to choose three things:

  • What do you want to profile: a profiler
  • How to start this profiler: a trigger
  • Where to send the profiling data: an exporter

Typescript Example

import { ProfilingAgent } from '@openprofiling/nodejs'
import { FileExporter } from '@openprofiling/exporter-file'
import { InspectorHeapProfiler } from '@openprofiling/inspector-heap-profiler'
import { InspectorCPUProfiler } from '@openprofiling/inspector-cpu-profiler'
import { SignalTrigger } from '@openprofiling/trigger-signal'

const profilingAgent = new ProfilingAgent()
/**
 * Register a profiler for a specific trigger
 * ex: we want to collect a CPU profile when the application receives a SIGUSR2 signal
 */
profilingAgent.register(new SignalTrigger({ signal: 'SIGUSR2' }), new InspectorCPUProfiler({}))
/**
 * Start the agent (which will tell the trigger to start listening) and
 * configure where to output the profiling data
 * ex: the file exporter will output on the disk, by default in /tmp
 */
profilingAgent.start({ exporter: new FileExporter() })

JavaScript Example

const { ProfilingAgent } = require('@openprofiling/nodejs')
const { FileExporter } = require('@openprofiling/exporter-file')
const { InspectorCPUProfiler } = require('@openprofiling/inspector-cpu-profiler')
const { SignalTrigger } = require('@openprofiling/trigger-signal')

const profilingAgent = new ProfilingAgent()
/**
 * Register a profiler for a specific trigger
 * ex: we want to collect a CPU profile when the application receives a SIGUSR2 signal
 */
profilingAgent.register(new SignalTrigger({ signal: 'SIGUSR2' }), new InspectorCPUProfiler({}))
/**
 * Start the agent (which will tell the trigger to start listening) and
 * configure where to output the profiling data
 * ex: the file exporter will output on the disk, by default in /tmp
 */
profilingAgent.start({ exporter: new FileExporter(), logLevel: 4 })
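
With either of these configurations running, a profile is collected when the process receives SIGUSR2, which you can send from a separate script or terminal. The sketch below is only illustrative: it assumes the target PID is passed as an argument and that sending the signal a second time stops the collection (check the signal trigger's documentation for its exact behaviour):

// hypothetical trigger script, run as: node trigger-profile.js <pid>
const targetPid = Number(process.argv[2])

// start collecting on the target process
process.kill(targetPid, 'SIGUSR2')

// assumption: a second signal stops the collection so the exporter can write the file
setTimeout(() => process.kill(targetPid, 'SIGUSR2'), 30000)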

Triggers

A trigger is simply a way to start collecting data. You can choose between the following:

Profilers

Profilers are the implementations that collect profiling data from different sources. The currently available profilers are:

Exporters

OpenProfiling aims to be vendor-neutral and can push profiling data to any backend with different exporter implementations. Currently, it supports:

Versioning

This library follows Semantic Versioning.

Note that before the 1.0.0 release, any minor update can have breaking changes.

LICENSE

Apache License 2.0
