
Austin

A Frame Stack Sampler for CPython

         



Synopsis • Installation • Usage • Compatibility • Why Austin • Examples • Contribute



This is the nicest profiler I’ve found for Python. It’s cross-platform, doesn’t need me to change the code that’s being profiled, and its output can be piped directly into flamegraph.pl. I just used it to pinpoint a gross misuse of SQLAlchemy at work that’s run in some code at the end of each day, and now I can go home earlier.

-- gthm on lobste.rs

If people are looking for a profiler, Austin looks pretty cool. Check it out!

-- Michael Kennedy on Python Bytes 180



Synopsis

Austin is a Python frame stack sampler for CPython written in pure C. Samples are collected by reading the CPython interpreter virtual memory space in order to retrieve information about the currently running threads along with the stack of the frames that are being executed. Hence, one can use Austin to easily make powerful statistical profilers that have minimal impact on the target application and that don't require any instrumentation.

The key features of Austin are:

  • Zero instrumentation;
  • Minimal impact;
  • Fast and lightweight;
  • Time and memory profiling;
  • Built-in support for multi-process applications (e.g. mod_wsgi).

The simplest way to turn Austin into a full-fledged profiler is to combine it with FlameGraph or Speedscope. However, Austin's simple output format can be piped into any other external or custom tool for further processing. Look, for instance, at the Austin TUI described in the Examples section below.

Keep reading for more tool ideas and examples!

Installation

Austin is available from the major software repositories of the most popular platforms. Check out the latest release page for pre-compiled binaries and installation packages.

On Linux, it can be installed using autotools or as a snap from the Snap Store. The latter will automatically perform the steps of the autotools method with a single command. On distributions derived from Debian, Austin can be installed from the official repositories with Aptitude. Anaconda users can install Austin from Conda Forge.

On Windows, Austin can be easily installed from the command line using either Chocolatey or Scoop. Alternatively, you can download the installer from the latest release page.

On macOS, Austin can be easily installed from the command line using Homebrew. Anaconda users can install Austin from Conda Forge.

For any other platform, compiling Austin from sources is as easy as cloning the repository and running the C compiler.

With autotools

Installing Austin using autotools amounts to the usual ./configure, make and make install finger gymnastics. The only dependency is the standard C library.

git clone --depth=1 https://github.com/P403n1x87/austin.git
cd austin
autoreconf --install
./configure
make
make install

Alternatively, sources can be compiled with just a C compiler (see below).

From the Snap Store

Austin can be installed on many major Linux distributions from the Snap Store with the following command

sudo snap install austin --classic

On Debian and Derivatives

On March 30, 2019, Austin was accepted into the official Debian repositories and can therefore be installed with the apt utility.

On macOS

Austin can be installed on macOS using Homebrew:

brew install austin

From Chocolatey

To install Austin from Chocolatey, run the following command from the command line or from PowerShell:

choco install austin

To upgrade run the following command from the command line or from PowerShell:

choco upgrade austin

From Scoop

To install Austin using Scoop, run the following command from the command line or from PowerShell:

scoop install austin

To upgrade run the following command from the command line or from PowerShell:

scoop update austin

From Conda Forge

Anaconda users on Linux and macOS can install Austin from Conda Forge with the command

conda install -c conda-forge austin

From Sources without autotools

To install Austin from sources using the GNU C compiler, without autotools, clone the repository with

git clone --depth=1 https://github.com/P403n1x87/austin.git

On Linux one can then use the command

gcc -O3 -Os -Wall -pthread src/*.c -o src/austin

whereas on macOS it is enough to run

gcc -O3 -Os -Wall src/*.c -o src/austin

On Windows, the -lpsapi switch is needed

gcc -O3 -Os -Wall -lpsapi src/*.c -o src/austin

Add -DDEBUG if you need a more verbose log. This is useful if you encounter a bug with Austin and you want to report it on the issue tracker.

Usage

Usage: austin [OPTION...] command [ARG...]
Austin -- A frame stack sampler for Python.

  -a, --alt-format           Alternative collapsed stack sample format.
  -C, --children             Attach to child processes.
  -e, --exclude-empty        Do not output samples of threads with no frame
                             stacks.
  -f, --full                 Produce the full set of metrics (time +mem -mem).
  -i, --interval=n_us        Sampling interval in microseconds (default is
                             100). Accepted units: s, ms, us.
  -m, --memory               Profile memory usage.
  -o, --output=FILE          Specify an output file for the collected samples.
  -p, --pid=PID              The ID of the process to which Austin should
                             attach.
  -s, --sleepless            Suppress idle samples.
  -t, --timeout=n_ms         Start up wait time in milliseconds (default is
                             100). Accepted units: s, ms.
  -x, --exposure=n_sec       Sample for n_sec seconds only.
  -?, --help                 Give this help list
      --usage                Give a short usage message
  -V, --version              Print program version

Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.

Report bugs to <https://github.com/P403n1x87/austin/issues>.

The output is a sequence of frame stack samples, one on each line. The format is the collapsed one that is recognised by FlameGraph so that it can be piped straight to flamegraph.pl for a quick visualisation, or redirected to a file for some further processing.

By default, each line has the following structure:

P<pid>;T<tid>[;[frame]]* [metric]*

where the structure of [frame] and the number and type of metrics on each line depend on the mode.

Normal Mode

When no special switches are passed to Austin from the command line, the process identifier is omitted and [frame] has the structure

[frame] := <function> (<module>);L<line number>

The reason for not including the line number in the (<module>) part, as one might have expected, is that this way the flame graph shows the total time spent in each function, plus the finer detail of the time spent on each line. A drawback of this format is that frame stacks double in height. If you prefer something more conventional, you can use the -a option to switch to the alternative format, in which [frame] has the structure

[frame] := <function> (<module>:<line number>)

Each line then ends with a single [metric], i.e. the sampling time measured in microseconds.
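As a purely illustrative example (the thread identifier, function and module names, line numbers and timing below are all made up), a single sample in the default format could look like

T140692874917696;main (test.py);L3;do_work (worker.py);L42 10000

whereas with the -a switch the same sample would collapse each frame into a single field:

T140692874917696;main (test.py:3);do_work (worker.py:42) 10000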

Memory and Full Metrics

When profiling in memory mode with the -m or --memory switch, the metric value at the end of each line is the memory delta between samples, measured in bytes. In full mode (-f or --full switch), each sample ends with three values: the time delta, any positive memory delta (memory allocations) or zero, and any negative memory delta (memory releases) or zero.

NOTE The reported memory allocations and deallocations are obtained by computing resident memory deltas between samples. Hence these values give an idea of how much physical memory is being requested/released.
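As a purely illustrative example, a sample collected in full mode could end with the three metrics

T140692874917696;main (test.py);L3;do_work (worker.py);L42 10000 4096 0

meaning 10000 μs of sampled time, 4096 bytes of resident memory allocated and no memory released between samples (all values are made up).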

Multi-process Applications

Austin can be told to profile multi-process applications with the -C or --children switch. This way Austin will look for new children of the parent process.

Logging

Austin uses syslog on Linux and macOS, and %TEMP%\austin.log on Windows for log messages, so make sure to watch these to get execution details and statistics. Bad frames are output together with the other frames. In general, entries for bad frames will not be visible in a flame graph as all tests show error rates below 1% on average.

Compatibility

Austin supports Python 2.3-2.7 and 3.3-3.8 and has been tested on the following platforms and architectures:

  • Platforms: Linux *, macOS **, Windows
  • Architectures: x86_64, i686, arm64, ppc64le

* In order to attach to an external process, Austin requires the CAP_SYS_PTRACE capability. This means that you will have to either use sudo when attaching to a running Python process or grant the CAP_SYS_PTRACE capability to the Austin binary with, e.g.

sudo setcap cap_sys_ptrace+ep `which austin`

In order for Austin to work with Docker, the --cap-add SYS_PTRACE option needs to be passed when starting a container.

** Due to the System Integrity Protection introduced in macOS with El Capitan, Austin cannot profile Python processes that use an executable located in the /bin folder, even with sudo. Hence, either run the interpreter from a virtual environment or use a Python interpreter that is installed in, e.g., /Applications or via brew with the default prefix (/usr/local). Even in these cases, though, the use of sudo is required.

NOTE Austin might work with other versions of Python on all the platforms and architectures above. So it is worth giving it a try even if your system is not listed above.

Why Austin

When there are already similar tools out there, it's natural to wonder why one should be interested in yet another one. So here is a list of features that currently distinguish Austin.

  • Written in pure C Austin is written in pure C code. There are no dependencies on third-party libraries with the exception of the standard C library and the API provided by the Operating System.

  • Just a sampler Austin is just a frame stack sampler. It looks into a running Python application at regular intervals of time and dumps whatever frame stack it finds. The samples can then be analysed at a later time, so Austin can sample at higher rates than other non-C alternatives that also analyse the samples as they run.

  • Simple output, powerful tools Austin uses the collapsed stack format of FlameGraph, which is easy to parse. You can then go and build your own tools to analyse Austin's output; a minimal example is sketched after this list. You could even make a player that replays the application execution in slow motion, so that you can see what has happened in temporal order.

  • Small size Austin compiles to a single binary executable of just a bunch of KB.

  • Easy to maintain Occasionally, the Python C API changes and Austin will need to be adjusted to new releases. However, given that Austin, like CPython, is written in C, implementing the new changes is rather straightforward.
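To expand on the simple output, powerful tools point above, here is a minimal sketch of a custom tool that aggregates the own time of each leaf frame from Austin's output. It is not part of Austin; it assumes time-mode samples produced with the -a switch, so that each frame is a single <function> (<module>:<line>) field, and the file name totals.py used below is just a hypothetical example.

import sys
from collections import Counter

own = Counter()  # sampled microseconds attributed to each leaf frame

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    # Each sample has the form "<stack> <time_us>", where <stack> is a
    # ;-separated list of fields ending with the frame being executed.
    stack, _, metric = line.rpartition(" ")
    try:
        time_us = int(metric)
    except ValueError:
        continue  # not a sample line
    own[stack.split(";")[-1]] += time_us

for frame, time_us in own.most_common(20):
    print(f"{time_us:>12} us  {frame}")

Saved as totals.py, it could be used as austin -a ./test.py | python3 totals.py to get a rough picture of where the sampled time is being spent.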

Examples

The following flame graph has been obtained with the command

austin -i 1ms ./test.py | ./flamegraph.pl --countname=μs > test.svg

where the sample test.py script has the following content

import psutil

for i in range(1000):
    list(psutil.process_iter())

To profile an Apache2 WSGI application, one can attach Austin to the web server with

austin -Cp `pgrep apache2 | head -n 1`

Any child processes will be automatically detected as they are created and Austin will sample them too.

Austin TUI

The Austin TUI is a text-based user interface for Austin that gives you a top-like view of what is currently running inside a Python application. It is most useful for scripts with long-running procedures, as you can see where execution currently is without adding tracing instructions to your code. You can also save the collected data from within the TUI and feed it to FlameGraph for visualisation, or convert it to the pprof format.

If you want to give it a go you can install it using pip with

pip install austin-tui --upgrade

and run it with

austin-tui [OPTION...] command [ARG...]

with the same command line as Austin. Please note that the austin binary should be available from within the PATH environment variable in order for the TUI to work.

The TUI is based on python-curses. The version included with the standard Windows installations of Python is broken, so it won't work out of the box. A solution is to install the wheel of the Windows port from this page. Wheel files can be installed directly with pip, as described in the linked page.

Austin Web

Austin Web is a web application that wraps around Austin. At its core, Austin Web is based on d3-flame-graph to display a live flame graph in the browser, which refreshes every 3 seconds with newly collected samples. Austin Web can also be used for remote profiling by setting the --host and --port options.

If you want to give it a go you can install it using pip with

pip install austin-web --upgrade

and run it with

austin-web [OPTION...] command [ARG...]

with the same command line as Austin. This starts a simple HTTP server that serves on localhost by default. When no explicit port is given, Austin Web will use an ephemeral one.

Please note that the austin binary should be available from within the PATH environment variable in order for Austin Web to work.

Speedscope

Austin output is now supported by Speedscope. However, the austin-python library also comes with format conversion tools that allow you to convert Austin's output to the Speedscope JSON format.

If you want to give it a go you can install it using pip with

pip install austin-python --upgrade

and run it with

austin2speedscope [-h] [--indent INDENT] [-V] input output

where input is a file containing the output from Austin and output is the name of the JSON file to use to save the result of the conversion, ready to be used on Speedscope.

Google pprof

Austin's format can also be converted to the Google pprof format using the austin2pprof utility that comes with austin-python. If you want to give it a go you can install it using pip with

pip install austin-python --upgrade

and run it with

austin2pprof [-h] [-V] input output

where input is a file containing the output from Austin and output is the name of the protobuf file to use to save the result of the conversion, ready to be used with Google's pprof tools.

IDE Extensions

It is easy to write your own extension for your favourite text editor. For example, there is a demo Visual Studio Code extension that highlights the most frequently hit lines of code straight in the editor.

Contribute

If you like Austin and you find it useful, there are ways for you to contribute.

If you want to help with the development, then have a look at the open issues and read the contributing guidelines before you open a pull request.

You can also contribute to the development of Austin by becoming a sponsor and/or by buying me a coffee on BMC or by chipping in a few pennies on PayPal.Me.



