
genuinetools / Bpfd

Licence: MIT
Framework for running BPF programs with rules on Linux as a daemon. Container aware.

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to Bpfd

pwru
Packet, where are you? -- Linux kernel networking debugger
Stars: ✭ 694 (+75.25%)
Mutual labels:  kernel, tracing, ebpf, bpf
Cilium
eBPF-based Networking, Security, and Observability
Stars: ✭ 10,256 (+2489.9%)
Mutual labels:  kernel, bpf, ebpf, containers
ebpfpub
ebpfpub is a generic function tracing library for Linux that supports tracepoints, kprobes and uprobes.
Stars: ✭ 86 (-78.28%)
Mutual labels:  tracing, ebpf, bpf
bpflock
bpflock - eBPF driven security for locking and auditing Linux machines
Stars: ✭ 54 (-86.36%)
Mutual labels:  kernel, ebpf, bpf
oxdpus
A toy tool that leverages the super powers of XDP to bring in-kernel IP filtering
Stars: ✭ 59 (-85.1%)
Mutual labels:  kernel, ebpf, bpf
Ebpf exporter
Prometheus exporter for custom eBPF metrics
Stars: ✭ 829 (+109.34%)
Mutual labels:  tracing, bpf, ebpf
KubeArmor
Cloud-native Runtime Security Enforcement System
Stars: ✭ 434 (+9.6%)
Mutual labels:  kernel, ebpf, bpf
Bpftrace
High-level tracing language for Linux eBPF
Stars: ✭ 4,526 (+1042.93%)
Mutual labels:  tracing, bpf, ebpf
packiffer
lightweight cross-platform networking toolkit
Stars: ✭ 52 (-86.87%)
Mutual labels:  ebpf, bpf
uprobe-http-tracer
uprobe-based HTTP tracer for Go binaries
Stars: ✭ 45 (-88.64%)
Mutual labels:  tracing, ebpf
bpfps
A tool to list and diagnose bpf programs. (Who watches the watchers..? :)
Stars: ✭ 93 (-76.52%)
Mutual labels:  tracing, bpf
aya
Aya is an eBPF library for the Rust programming language, built with a focus on developer experience and operability.
Stars: ✭ 950 (+139.9%)
Mutual labels:  ebpf, bpf
XDP-Firewall
An XDP firewall that is capable of filtering specific packets based off of filtering rules specified in a config file. IPv6 is supported!
Stars: ✭ 129 (-67.42%)
Mutual labels:  ebpf, bpf
btfhub
BTFHub, together with BTFHub Archive repository, provides BTF files for existing published kernels that don't support embedded BTF.
Stars: ✭ 100 (-74.75%)
Mutual labels:  kernel, ebpf
libebpf
Experimental userspace eBPF library
Stars: ✭ 14 (-96.46%)
Mutual labels:  ebpf, bpf
el7-bpf-specs
RPM specs for building bpf related tools on CentOS 7
Stars: ✭ 38 (-90.4%)
Mutual labels:  ebpf, bpf
go-tc
traffic control in pure go - it allows to read and alter queues, filters and classes
Stars: ✭ 245 (-38.13%)
Mutual labels:  ebpf, bpf
Cinf
Command line tool to view namespaces and cgroups, useful for low-level container prodding
Stars: ✭ 389 (-1.77%)
Mutual labels:  cli, containers
Rbpf
Rust virtual machine and JIT compiler for eBPF programs
Stars: ✭ 306 (-22.73%)
Mutual labels:  bpf, ebpf
Img
Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder.
Stars: ✭ 3,512 (+786.87%)
Mutual labels:  cli, containers

bpfd


Framework for running BPF tracers with rules on Linux as a daemon. Container aware.

This is not just "yet another tool to trace"...

Since it uses BPF and allows for any implementation of the Tracer interface, you can use it to do all sorts of things: from modifying a file every time a call to open is made on it, to hot patching an internal kernel function to mitigate a known vulnerability without having to upgrade your kernel.

More use cases with examples are coming soon... for now, see how it works.

Table of Contents

  • How it Works
      • Tracers
      • Rules
      • Actions
  • Installation
      • Binaries
      • Via Go
      • Via Docker
  • Usage
      • Run the daemon
      • Create rules dynamically
      • Remove rules dynamically
      • List active rules
      • Live tracing events

How it Works

Tracers retrieve the data... Rules filter the data... Actions perform actions on the data.

The tracers are in the tracer/ folder. The idea is that you can add any tracers you would like and then create rules for the data they retrieve. Any event whose data passes the filters is passed on to the specified action.

Tracers

The tracers that exist today are based on a few of the bcc-tools tracers.

You can always add your own tracers in a fork if you are worried that people will reverse engineer the data you are collecting and alerting on.

The currently compiled-in tracers are:

  • dockeropenbreakout: trace when files that are not inside the container rootfs are being accessed
  • bashreadline: trace commands being entered into the bash command line
  • exec: trace calls to exec binaries
  • open: trace calls to open files

These must implement the Tracer interface:

// Tracer defines the basic capabilities of a tracer.
type Tracer interface {
    // Load creates the bpf module and starts collecting the data for the tracer.
    Load() error
    // Unload closes the bpf module and all the probes that are attached to it.
    Unload()
    // WatchEvent defines the function to watch the events for the tracer.
    WatchEvent() (*grpc.Event, error)
    // Start starts the map for the tracer.
    Start()
    // String returns a string representation of this tracer.
    String() string
}

As you can see from above, you could technically implement this interface with something other than BPF ;)
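
For example, here is a minimal sketch of a non-BPF tracer that satisfies the interface. It is purely illustrative: the package name, the grpc import path (assumed from api/grpc/api.proto), and how the daemon registers tracers are all assumptions, and a real tracer would read events from a BPF map rather than a timer.

package heartbeat

import (
    "time"

    "github.com/genuinetools/bpfd/api/grpc" // assumed import path for the Event type
)

// tracer emits a synthetic event once per second instead of reading a BPF map.
type tracer struct {
    events chan *grpc.Event
    quit   chan struct{}
}

// Load would normally compile and attach the BPF program; here there is nothing to load.
func (t *tracer) Load() error {
    t.events = make(chan *grpc.Event)
    t.quit = make(chan struct{})
    return nil
}

// Start begins producing events, analogous to starting to poll the BPF map.
func (t *tracer) Start() {
    go func() {
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-t.quit:
                return
            case <-ticker.C:
                select {
                case t.events <- &grpc.Event{Data: map[string]string{"note": "heartbeat"}}:
                case <-t.quit:
                    return
                }
            }
        }
    }()
}

// WatchEvent blocks until the next event is available.
func (t *tracer) WatchEvent() (*grpc.Event, error) {
    return <-t.events, nil
}

// Unload stops the event loop.
func (t *tracer) Unload() {
    close(t.quit)
}

// String returns a string representation of this tracer.
func (t *tracer) String() string {
    return "heartbeat"
}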

The Event type defines the data returned from the tracer. As you can see below, Data is of type map[string]string, meaning any key-value pairs can be returned for the data. The rules then filter using those key-value pairs.

// Event defines the data struct for holding event data.
type Event struct {
    PID              uint32            // Process ID.
    TGID             uint32            // Task group ID.
    UID              uint32            // User ID.
    GID              uint32            // User group ID.
    Command          string            // The command for the process.
    ReturnValue      int32             // The return value for the function.
    Data             map[string]string // Key/value data returned by the tracer.
    ContainerRuntime string            // Filled in after the tracer is run so you don't need to.
    ContainerID      string            // Filled in after the tracer is run so you don't need to.
    Tracer           string            // Filled in after the tracer is run so you don't need to.
}
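
For example, an event from the open tracer might look something like this after the daemon has annotated it. The values are hypothetical; the Data keys (filename, command, returnval) are the ones visible in the trace output later in this README.

// A hypothetical event from the open tracer, after annotation by the daemon.
event := &grpc.Event{
    PID:              12893,
    Command:          "sudo",
    Tracer:           "open",
    ContainerRuntime: "not-found",
    Data: map[string]string{
        "filename":  "/etc/shadow",
        "command":   "sudo",
        "returnval": "4",
    },
}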

Rules

These are TOML files that hold the logic for what you would like to trace. You can filter on anything returned by a Tracer in its map[string]string data struct.

You can also filter based on the container runtime you would like to alert on. The container runtime must be one of the strings defined here.

If you provide no rules for a tracer, then all the events will be passed to actions.

The example below describes a rule file to filter the data returned from the exec tracer. Events from exec will only be returned if the command matches one of those values AND the container runtime is docker or kube.

tracer = "exec"

actions = ["stdout"]

[filterEvents]
  [filterEvents.command]
  values = ["sshd", "dbus-daemon-lau", "ping", "ping6", "critical-stack-", "pmmcli", "filemng", "PassengerAgent", "bwrap", "osdetect", "nginxmng", "sw-engine-fpm", "start-stop-daem"]

containerRuntimes = ["docker","kube"]

If you are wondering where the command key comes from, it's defined in the exec tracer here.
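
To make the filtering semantics concrete, the rule matching described above behaves roughly like the Go sketch below. This is only an illustration of the semantics, not the daemon's actual code: an event passes when, for every filtered key, its Data value matches one of the allowed values, and, if containerRuntimes is set, its runtime is one of those listed.

// matches reports whether an event passes a rule's filters (illustrative only).
// filters maps a Data key (e.g. "command") to its allowed values;
// runtimes is the rule's containerRuntimes list.
func matches(event *grpc.Event, filters map[string][]string, runtimes []string) bool {
    for key, allowed := range filters {
        ok := false
        for _, v := range allowed {
            if event.Data[key] == v {
                ok = true
                break
            }
        }
        if !ok {
            return false
        }
    }
    // If the rule lists container runtimes, the event's runtime must be one of them.
    if len(runtimes) > 0 {
        ok := false
        for _, r := range runtimes {
            if event.ContainerRuntime == r {
                ok = true
                break
            }
        }
        if !ok {
            return false
        }
    }
    return true
}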

Rules can be controlled dynamically via bpfd's gRPC interface. The CLI can also be used to create rules dynamically; see the create usage below.

The protobuf definition lives in api/grpc/api.proto.

To interact with the gRPC API you can use the --grpc-addr flag; the default is a unix socket at /run/bpfd/bpfd.sock.
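
As a rough sketch, a Go client could reach that socket as shown below and then use the client generated from api/grpc/api.proto to create, list, or remove rules. The dialing code is standard grpc-go; the generated client's constructor and method names are not reproduced here because they come from the generated code.

package main

import (
    "context"
    "net"

    "google.golang.org/grpc"
)

func main() {
    // Dial the default bpfd socket (the value of --grpc-addr).
    conn, err := grpc.Dial("/run/bpfd/bpfd.sock",
        grpc.WithInsecure(),
        grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
            return (&net.Dialer{}).DialContext(ctx, "unix", addr)
        }),
    )
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Construct the client generated from api/grpc/api.proto with conn
    // and call its rule management methods from here.
}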

Actions

Actions do "something" on an event. This way you can send filtered events to Slack or email, or even run arbitrary code. You could kill a container, pause a container, or checkpoint a container to restore it elsewhere, all without having to log in to a machine.

The currently compiled-in actions include stdout, which is used in the rule example above.

Actions implement the Action interface:

// Action performs an action on an event.
type Action interface {
    // Do runs the action on an event.
    Do(event *grpc.Event) error
    // String returns a string representation of this action.
    String() string
}
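
As with tracers, you can write your own action. Below is a hypothetical one (not one of the compiled-in actions) that forwards matched events to syslog; the grpc import path is assumed from api/grpc/api.proto, and the field names follow the Event struct shown earlier.

package syslogaction

import (
    "fmt"
    "log/syslog"

    "github.com/genuinetools/bpfd/api/grpc" // assumed import path for the Event type
)

// action forwards matched events to the local syslog daemon.
type action struct {
    w *syslog.Writer
}

// New returns a syslog-backed action.
func New() (*action, error) {
    w, err := syslog.New(syslog.LOG_ALERT|syslog.LOG_DAEMON, "bpfd")
    if err != nil {
        return nil, err
    }
    return &action{w: w}, nil
}

// Do writes a one-line summary of the event to syslog.
func (a *action) Do(event *grpc.Event) error {
    return a.w.Alert(fmt.Sprintf("tracer=%s pid=%d command=%q data=%v",
        event.Tracer, event.PID, event.Command, event.Data))
}

// String returns a string representation of this action.
func (a *action) String() string {
    return "syslog"
}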

Installation

To build, you need to have libbcc installed (see instructions here).

Binaries

For installation instructions from binaries please visit the Releases Page.

Via Go

$ go get github.com/genuinetools/bpfd

Via Docker

$ docker run --rm -it \
    --name bpfd \
    -v /lib/modules:/lib/modules:ro \
    -v /usr/src:/usr/src:ro \
    --privileged \
    r.j3ss.co/bpfd daemon

Usage

$ bpfd -h
bpfd -  Framework for running BPF tracers with rules on Linux as a daemon.

Usage: bpfd <command>

Flags:

  -d, --debug  enable debug logging (default: false)
  --grpc-addr  Address for gRPC api communication (default: /run/bpfd/bpfd.sock)

Commands:

  create   Create one or more rules.
  daemon   Start the daemon.
  ls       List rules.
  rm       Remove one or more rules.
  trace    Live trace the events returned after filtering.
  version  Show the version information.

Run the daemon

You can preload rules by passing --rules-dir to the command or placing rules in the default directory: /etc/bpfd/rules.

$ bpfd daemon -h
Usage: bpfd daemon [OPTIONS]

Start the daemon.

Flags:

  -d, --debug  enable debug logging (default: false)
  --grpc-addr  Address for gRPC api communication (default: /run/bpfd/bpfd.sock)
  --rules-dir  Directory that stores the rules files (default: /etc/bpfd/rules)

Create rules dynamically

You can create rules on the fly with the create command. You can pass more than one file at a time.

$ bpfd create -h
Usage: bpfd create [OPTIONS] RULE_FILE [RULE_FILE...]

Create one or more rules.

Flags:

  -d, --debug  enable debug logging (default: false)
  --grpc-addr  Address for gRPC api communication (default: /run/bpfd/bpfd.sock)

Remove rules dynamically

You can delete rules with the rm command. You can pass more than one rule name at a time.

$ bpfd rm -h
Usage: bpfd rm [OPTIONS] RULE_NAME [RULE_NAME...]

Remove one or more rules.

Flags:

  -d, --debug  enable debug logging (default: false)
  --grpc-addr  Address for gRPC api communication (default: /run/bpfd/bpfd.sock)

List active rules

You can use the ls command to list the rules the daemon is currently filtering with.

$ bpfd ls
NAME                TRACER
bashreadline        bashreadline
password_files      open
setuid_binaries     exec

Live tracing events

You can live trace the events returned after filtering with the trace command.

This does not include past events. Consider it like a tail.

$ bpfd trace
INFO[0000] map[string]string{"filename":"/etc/shadow", "command":"sudo", "returnval":"4"}  container_id= container_runtime=not-found pid=12893 tracer=open tgid=0
INFO[0000] map[string]string{"command":"sudo", "returnval":"4", "filename":"/etc/sudoers.d/README"}  container_id= container_runtime=not-found pid=12893 tracer=open tgid=0
INFO[0000] map[string]string{"command":"sudo", "returnval":"4", "filename":"/etc/sudoers.d"}  container_id= container_runtime=not-found pid=12893 tracer=open tgid=0
INFO[0000] map[string]string{"filename":"/etc/sudoers", "command":"sudo", "returnval":"3"}  container_id= container_runtime=not-found pid=12893 tracer=open tgid=0
INFO[0000] map[string]string{"command":"sudo bpfd trace"}  container_id= container_runtime=not-found pid=23751 tracer=bashreadline tgid=0
INFO[0000] map[string]string{"command":"vim README.md"}  container_id= container_runtime=not-found pid=23751 tracer=bashreadline tgid=0
INFO[0000] map[string]string{"filename":"/etc/shadow", "command":"sudo", "returnval":"4"}  container_id= container_runtime=not-found pid=12786 tracer=open tgid=0
INFO[0000] map[string]string{"command":"sudo", "returnval":"4", "filename":"/etc/sudoers.d/README"}  container_id= container_runtime=not-found pid=12786 tracer=open tgid=0
INFO[0000] map[string]string{"filename":"/etc/sudoers.d", "command":"sudo", "returnval":"4"}  container_id= container_runtime=not-found pid=12786 tracer=open tgid=0
INFO[0000] map[string]string{"filename":"/etc/sudoers", "command":"sudo", "returnval":"3"}  container_id= container_runtime=not-found pid=12786 tracer=open tgid=0