
zodb / perfmetrics

Licence: other
A library for sending software performance metrics from Python libraries and apps to statsd.

Programming Languages

Python
139335 projects - #7 most used programming language
PowerShell
5483 projects
Batchfile
5799 projects
Shell
77523 projects
Cython
566 projects

Projects that are alternatives of or similar to perfmetrics

Appmetrics
Node Application Metrics provides a foundational infrastructure for collecting resource and performance monitoring data for Node.js-based applications.
Stars: ✭ 864 (+3223.08%)
Mutual labels:  performance-metrics, performance-monitoring
Spm Agent Nodejs
NodeJS Monitoring Agent
Stars: ✭ 51 (+96.15%)
Mutual labels:  performance-metrics, performance-monitoring
Apm Server
APM Server
Stars: ✭ 878 (+3276.92%)
Mutual labels:  performance-metrics, performance-monitoring
ember-appmetrics
Ember library used to measure various metrics in your Ember app with ultra simple APIs.
Stars: ✭ 16 (-38.46%)
Mutual labels:  performance-metrics, performance-monitoring
Opbeat Node
DEPRECATED - See Elastic APM instead: https://github.com/elastic/apm-agent-nodejs
Stars: ✭ 155 (+496.15%)
Mutual labels:  performance-metrics, performance-monitoring
Apm Agent Nodejs
Elastic APM Node.js Agent
Stars: ✭ 467 (+1696.15%)
Mutual labels:  performance-metrics, performance-monitoring
Corefreq
CoreFreq is CPU monitoring software designed for 64-bit processors.
Stars: ✭ 1,026 (+3846.15%)
Mutual labels:  performance-metrics, performance-monitoring
performance-budget-plugin
Performance budget plugin for Webpack (https://webpack.js.org/)
Stars: ✭ 65 (+150%)
Mutual labels:  performance-metrics, performance-monitoring
Nemetric
Monitoring, collection, and reporting of front-end performance metrics. Measures time to first paint (FP/FCP/LCP), earliest time the user can interact (FID/TTI), component lifecycle performance, network conditions, resource sizes, and more, reporting real user measurements to a monitoring backend.
Stars: ✭ 145 (+457.69%)
Mutual labels:  performance-metrics, performance-monitoring
Scouter
Scouter is an open source APM (Application Performance Management) tool.
Stars: ✭ 1,792 (+6792.31%)
Mutual labels:  performance-metrics, performance-monitoring
zipkin-javascript-opentracing
OpenTracing implementation for Zipkin in JavaScript
Stars: ✭ 19 (-26.92%)
Mutual labels:  performance-metrics, performance-monitoring
Myperf4j
High performance Java APM. Powered by ASM. Try it. Test it. If you feel it's better, use it.
Stars: ✭ 2,281 (+8673.08%)
Mutual labels:  performance-metrics, performance-monitoring
javametrics
Application Metrics for Java™ instruments the Java runtime for performance monitoring, providing the monitoring data visually with its built-in dashboard
Stars: ✭ 19 (-26.92%)
Mutual labels:  performance-metrics, performance-monitoring
Spm Agent Mongodb
Sematext Agent for monitoring MongoDB
Stars: ✭ 7 (-73.08%)
Mutual labels:  performance-metrics, performance-monitoring
compile-time-perf
Measures high-level timing and memory usage metrics during compilation
Stars: ✭ 64 (+146.15%)
Mutual labels:  performance-metrics, performance-monitoring
Vsphere2metrics
VMware vSphere Performance Metrics Integration with Graphite & InfluxDB
Stars: ✭ 28 (+7.69%)
Mutual labels:  performance-metrics, performance-monitoring
Pcm
Processor Counter Monitor
Stars: ✭ 1,240 (+4669.23%)
Mutual labels:  performance-metrics, performance-monitoring
Apm Agent Rum Js
Elastic APM Real User Monitoring JavaScript agent
Stars: ✭ 166 (+538.46%)
Mutual labels:  performance-metrics, performance-monitoring
jamonapi
Another repo for jamonapi.com, which is primarily hosted on SourceForge
Stars: ✭ 57 (+119.23%)
Mutual labels:  performance-metrics, performance-monitoring
Statsite
C implementation of statsd
Stars: ✭ 1,791 (+6788.46%)
Mutual labels:  statsd

perfmetrics

The perfmetrics package provides a simple way to add software performance metrics to Python libraries and applications. Use perfmetrics to find the true bottlenecks in a production application.

The perfmetrics package is a client of the Statsd daemon by Etsy, which is in turn a client of Graphite (specifically, the Carbon daemon). Because the perfmetrics package sends UDP packets to Statsd, perfmetrics adds no I/O delays to applications and little CPU overhead. It can work equally well in threaded (synchronous) or event-driven (asynchronous) software.

Complete documentation is hosted at https://perfmetrics.readthedocs.io


Usage

Use the @metric and @metricmethod decorators to wrap functions and methods that should send timing and call statistics to Statsd. Add the decorators to any function or method that could be a bottleneck, including library functions.

Caution!

These decorators are generic and cause the actual function signature to be lost, replaced with *args, **kwargs. This can break certain types of introspection, including zope.interface validation. As a workaround, setting the environment variable PERFMETRICS_DISABLE_DECORATOR before importing perfmetrics or code that uses it will cause @perfmetrics.metric, @perfmetrics.metricmethod, @perfmetrics.Metric(...) and @perfmetrics.MetricMod(...) to return the original function unchanged.
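
For example, a minimal sketch (the value used here is an assumption; the variable only needs to be set before the import):

import os

os.environ['PERFMETRICS_DISABLE_DECORATOR'] = '1'  # assumed value; set before perfmetrics is first imported
import perfmetrics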

Sample:

from perfmetrics import metric
from perfmetrics import metricmethod

@metric
def myfunction():
    """Do something that might be expensive"""

class MyClass(object):
    @metricmethod
    def mymethod(self):
        """Do some other possibly expensive thing"""

Next, tell perfmetrics how to connect to Statsd. (Until you do, the decorators have no effect.) Ideally, either your application should read the Statsd URI from a configuration file at startup time, or you should set the STATSD_URI environment variable. The example below uses a hard-coded URI:

from perfmetrics import set_statsd_client
set_statsd_client('statsd://localhost:8125')

for i in range(1000):
    myfunction()
    MyClass().mymethod()

If you run that code, it will fire 2000 UDP packets at port 8125. However, unless you have already installed Graphite and Statsd, all of those packets will be ignored and dropped. Dropping is a good thing: you don't want your production application to fail or slow down just because your performance monitoring system is stopped or not working.
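
Rather than hard-coding the URI, you can read it from the environment yourself; a minimal sketch using the STATSD_URI variable mentioned above:

import os
from perfmetrics import set_statsd_client

statsd_uri = os.environ.get('STATSD_URI')
if statsd_uri:
    set_statsd_client(statsd_uri)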

Install Graphite and Statsd to receive and graph the metrics. One good way to install them is the graphite_buildout example on GitHub, which installs Graphite and Statsd in a custom location without root access.

Pyramid and WSGI

If you have a Pyramid app, you can set the statsd_uri for each request by including perfmetrics in your configuration:

config = Configurator(...)
config.include('perfmetrics')

Also add a statsd_uri setting such as statsd://localhost:8125. Once configured, the perfmetrics tween will set up a Statsd client for the duration of each request. This is especially useful if you run multiple apps in one Python interpreter and you want a different statsd_uri for each app.
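
Put together, a minimal sketch that passes the setting directly to the Configurator (in a real deployment the statsd_uri setting would usually live in your ini file):

from pyramid.config import Configurator

settings = {'statsd_uri': 'statsd://localhost:8125'}
config = Configurator(settings=settings)
config.include('perfmetrics')
app = config.make_wsgi_app()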

Similar functionality exists for WSGI apps. Add the app to your Paste Deploy pipeline:

[statsd]
use = egg:perfmetrics#statsd
statsd_uri = statsd://localhost:8125

[pipeline:main]
pipeline =
    statsd
    egg:myapp#myentrypoint

Threading

While most programs send metrics from any thread to a single global Statsd server, some programs need to use a different Statsd server for each thread. If you only need a global Statsd server, use the set_statsd_client function at application startup. If you need to use a different Statsd server for each thread, use the statsd_client_stack object in each thread. Use the push, pop, and clear methods.
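
A minimal sketch of per-thread clients (statsd_client_from_uri is assumed here as the helper that builds a client from a URI):

import threading
from perfmetrics import statsd_client_from_uri, statsd_client_stack

def worker(uri):
    # Give this thread its own Statsd client for the duration of its work.
    statsd_client_stack.push(statsd_client_from_uri(uri))
    try:
        myfunction()  # decorated code now reports to this thread's server
    finally:
        statsd_client_stack.pop()

threads = [threading.Thread(target=worker, args=(uri,))
           for uri in ('statsd://stats-a:8125', 'statsd://stats-b:8125')]
for t in threads:
    t.start()
for t in threads:
    t.join()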

Graphite Tips

Graphite stores each metric as a time series with multiple resolutions. The sample graphite_buildout stores 10 second resolution for 48 hours, 1 hour resolution for 31 days, and 1 day resolution for 5 years. To produce a coarse grained value from a fine grained value, Graphite computes the mean value (average) for each time span.

Because Graphite computes mean values implicitly, the most sensible way to treat counters in Graphite is as a "hits per second" value. That way, a graph can produce correct results no matter which resolution level it uses.

Treating counters as hits per second has unfortunate consequences, however. If some metric sees a 1000 hit spike in one second, then falls to zero for at least 9 seconds, the Graphite chart for that metric will show a spike of 100, not 1000, since Graphite receives metrics every 10 seconds and the spike looks to Graphite like 100 hits per second over a 10 second period.

If you want your graph to show 1000 hits rather than 100 hits per second, apply the Graphite hitcount() function, using a resolution of 10 seconds or more. The hitcount function converts per-second values to approximate raw hit counts. Be sure to provide a resolution value large enough to be represented by at least one pixel width on the resulting graph, otherwise Graphite will compute averages of hit counts and produce a confusing graph.

It usually makes sense to treat null values in Graphite as zero, though that is not the default; by default, Graphite draws nothing for null values. You can turn on that option for each graph.
