
rstade / TrafficEngine

License: MIT
A high-performance TCP traffic generator

Programming Languages

Rust

Projects that are alternatives to or similar to TrafficEngine

Fastclick
FastClick - A faster version of the Click Modular Router featuring batching, advanced multi-processing and improved Netmap and DPDK support (ANCS'15). Check the metron branch for Metron specificities (NSDI'18).
Stars: ✭ 100 (+354.55%)
Mutual labels:  networking, dpdk
Nff Go
NFF-Go - Network Function Framework for Go (formerly YANFF)
Stars: ✭ 1,036 (+4609.09%)
Mutual labels:  networking, dpdk
Npf
NPF: packet filter with stateful inspection, NAT, IP sets, etc.
Stars: ✭ 160 (+627.27%)
Mutual labels:  networking, dpdk
Pcapplusplus
PcapPlusPlus is a multiplatform C++ library for capturing, parsing and crafting of network packets. It is designed to be efficient, powerful and easy to use. It provides C++ wrappers for the most popular packet processing engines such as libpcap, WinPcap, DPDK and PF_RING.
Stars: ✭ 1,322 (+5909.09%)
Mutual labels:  networking, dpdk
Capsule
A framework for network function development. Written in Rust, inspired by NetBricks and built on DPDK.
Stars: ✭ 217 (+886.36%)
Mutual labels:  networking, dpdk
Trex Core
trex-core site
Stars: ✭ 776 (+3427.27%)
Mutual labels:  dpdk
Zerotierone
A Smart Ethernet Switch for Earth
Stars: ✭ 7,839 (+35531.82%)
Mutual labels:  networking
Evillimiter
Tool that monitors, analyzes and limits the bandwidth of devices on the local network without administrative access.
Stars: ✭ 764 (+3372.73%)
Mutual labels:  networking
Seastar
High performance server-side application framework
Stars: ✭ 6,328 (+28663.64%)
Mutual labels:  dpdk
Pavlos
A light-weight container runtime for Linux with NVIDIA GPU support, allows developers to quickly set up development environments for dev and test. Pavlos can emulate any Linux rootfs image as a container.
Stars: ✭ 22 (+0%)
Mutual labels:  networking
Gleri
Network protocol, service, and API for using OpenGL remotely.
Stars: ✭ 16 (-27.27%)
Mutual labels:  networking
Kube Ovn
A Kubernetes Network Fabric for Enterprises that is Rich in Functions and Easy in Operations
Stars: ✭ 798 (+3527.27%)
Mutual labels:  networking
Bandwhich
Terminal bandwidth utilization tool
Stars: ✭ 6,737 (+30522.73%)
Mutual labels:  networking
Sv3
A userspace software-switch including Intel 82599 10G NIC driver
Stars: ✭ 6 (-72.73%)
Mutual labels:  networking
Shadow
Shadow is a unique discrete-event network simulator that runs real applications like Tor, and distributed systems of thousands of nodes on a single machine. Shadow combines the accuracy of emulation with the efficiency and control of simulation, achieving the best of both approaches.
Stars: ✭ 769 (+3395.45%)
Mutual labels:  networking
Tcp Connection Hijack Reset
Simple scapy-based tool to hijack and reset existing TCP connections
Stars: ✭ 17 (-22.73%)
Mutual labels:  networking
Jsftp
Light and complete FTP client implementation for Node.js
Stars: ✭ 766 (+3381.82%)
Mutual labels:  networking
Swift Nio
Event-driven network application framework for high performance protocol servers & clients, non-blocking.
Stars: ✭ 6,777 (+30704.55%)
Mutual labels:  networking
P2p
Practice project to demonstrate p2p file sharing.
Stars: ✭ 16 (-27.27%)
Mutual labels:  networking
Mongo Cxx Driver
C++ Driver for MongoDB
Stars: ✭ 792 (+3500%)
Mutual labels:  networking

TrafficEngine Overview

TrafficEngine is a stateful user-space TCP traffic generator written in Rust with the following properties:

It may be used for (load) testing TCP-based application servers and TCP proxies. TrafficEngine maintains TCP state and can therefore set up and release complete TCP connections.

Multi-core scaling is supported by steering packets of the same TCP connection, based on the TCP port or the IP address, to the core which handles that connection. Port resources can thus be assigned to cores (based on the parameter dst_port_mask in the configuration file). Alternatively, if the NIC does not support port masks, steering can be based on the IP address.
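
As an illustration of port-based steering, the sketch below (an assumption about the mechanism, not actual TrafficEngine code) derives the owning core from the bits of the destination port selected by dst_port_mask:

    // Sketch only, not actual TrafficEngine code: the port bits selected
    // by dst_port_mask identify the core owning the connection, while the
    // unmasked low bits distinguish connections handled by that core.
    fn core_for_port(dst_port: u16, dst_port_mask: u16) -> u16 {
        debug_assert!(dst_port_mask != 0, "mask must select at least one bit");
        (dst_port & dst_port_mask) >> dst_port_mask.trailing_zeros()
    }
    // e.g. with dst_port_mask = 0xfc00, ports 0x0800..=0x0bff map to core 2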

TrafficEngine builds on NetBricks, which itself utilizes DPDK for user-space networking. Starting with version 0.2.0, more generic code has been moved to an application-independent crate netfcts (in sub-directory netfcts).

TrafficEngine Installation

First install NetBricks. TrafficEngine needs the branch e2d2-rstade of the fork at https://github.com/rstade/Netbricks. The required NetBricks version is tagged (starting with v0.2.0). Install NetBricks locally on your (virtual) machine by following the NetBricks documentation. The installation path of e2d2 then needs to be updated in the dependency section of TrafficEngine's Cargo.toml.
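
For illustration, the dependency entry might look as follows (the local path is an assumption and depends on where NetBricks was cloned):

    # Sketch only: point e2d2 at the local NetBricks checkout on branch
    # e2d2-rstade; adjust the path to your installation.
    [dependencies]
    e2d2 = { path = "../NetBricks/framework" }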

Note that a local installation of NetBricks is necessary, as it includes DPDK and some C libraries for interfacing the Rust code of NetBricks with DPDK. If the optional KNI interface is needed, the DPDK kernel module must be re-compiled each time the kernel version changes. This can be done with the NetBricks script build.sh. Note also that the Linux linker ld needs to be made aware of the location of the .so libraries created by NetBricks. This can be solved using ldconfig.
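
For example (the library path is an assumption and depends on where NetBricks was built):

  • "echo /path/to/NetBricks/native > /etc/ld.so.conf.d/netbricks.conf && ldconfig"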

Secondly, TrafficEngine depends on the crate netfcts, an extension of NetBricks with helper functions and data structures. It needs to be built against the locally installed NetBricks to ensure consistent dependencies.

The network interfaces of the test machine need to be prepared (see prepNet.sh):

First, a network interface for user-space DPDK is needed. This interface is used by the engine to connect to servers (in the example configuration this interface uses PCI slot 07:00.0; a manual binding example is shown below). The latest code is tested with the NIC X520-DA2 (82599).

Secondly, an extra Linux interface is required, which is used by the test modules for placing server stacks.

For some integration tests both interfaces must be interconnected. Physical interfaces may be connected by a crossover cable; virtual interfaces may, for example, be attached to a host-only network of the hypervisor. Running Wireshark on the Linux interface allows us to observe the traffic exchanged between the clients, TrafficEngine, and the servers. However, as Wireshark may not keep up with the transmission speeds of modern line cards, packets may be lost.
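
Binding the DPDK interface is presumably covered by prepNet.sh; done by hand with the standard DPDK tool, the step for the example PCI slot would look roughly like this (the kernel driver choice is an assumption):

  • "modprobe uio_pci_generic && dpdk-devbind.py --bind=uio_pci_generic 07:00.0"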

In addition, some parameters, such as the Linux interface name (linux_if) and the IP and MAC addresses in the test module configuration files tests/*.toml, need to be adapted.
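
A rough sketch of such entries is shown below; linux_if is named above, while the other keys and all values are assumptions (consult the shipped tests/*.toml for the actual schema):

    # Sketch only; see tests/*.toml for the authoritative layout.
    linux_if = "enp1s0f1"          # Linux interface for the server stacks
    ip = "192.168.222.32"          # example IP address, adapt to your setup
    mac = "a0:36:9f:00:00:01"      # example MAC address, adapt to your setup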

The latest code of TrafficEngine is tested on a 2-socket NUMA server, each socket hosting 4 physical cores, running the real-time kernel of CentOS 7.5.

Testing

The executables must currently be run with superuser rights, as otherwise DPDK cannot be initialized. However, to avoid having to run Cargo itself as root, the shell script test.sh can be used, for example:

  • "./test.sh test_as_client --release" or "./test.sh test_as_server --release".

The script requires the jq tool, which can be installed e.g. with "yum install jq".

In addition, the script allows running a simple loopback helper tool called macswap:

  • "./test.sh macswap --release"

This tool can be used in cases where the loopback mode of the NIC is not working, as happened with the X710-DA2. Ideally the tool is run on a second server. It swaps the source and destination MAC addresses and sends the frames back towards the origin.
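
Conceptually the per-frame operation is a plain Ethernet header swap; below is a minimal Rust sketch of that behavior (an illustration, not the actual tool's code):

    // Sketch only, not the actual macswap code: exchange the destination
    // MAC (bytes 0..6) and source MAC (bytes 6..12) of a raw Ethernet
    // frame in place, so it can be sent back towards its origin.
    fn swap_macs(frame: &mut [u8]) {
        assert!(frame.len() >= 12, "frame too short for an Ethernet header");
        for i in 0..6 {
            frame.swap(i, i + 6);
        }
    }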

Performance

Our test scenario is as follows:

  • We connect the client side with the server side of TrafficEngine by using the loopback feature of the NIC (see loopback_run.toml). For this we used an 82599-based NIC.
  • After the client has set up the TCP connection, it sends a small payload packet to the server. After receiving the payload, the server side releases the TCP connection. In total we exchange seven packets per connection (see the breakdown after this list).
  • The same TrafficEngine instance operates concurrently as client and as server. Therefore, when comparing our connections-per-second (cps) figures with those of a standalone TCP server, our figures can be approximately doubled.
  • Tests were run on a two-socket server with two rather old 4-core L5520 CPUs @ 2.27 GHz with 32K/256K/8192K L1/L2/L3 cache and a recent CentOS 7.6 real-time kernel, e.g. from the repository http://linuxsoft.cern.ch/cern/centos/7/rt/CentOS-RT.repo. We also performed the basic tuning steps to isolate the cores which run our worker threads. The real-time kernel increases determinism significantly compared to the usual CentOS non-real-time kernel. For more information see rt-tuning.md.
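
The seven packets per connection presumably break down into the standard three-way handshake, one payload segment, and a three-segment teardown (this breakdown is inferred from the scenario described above, not stated explicitly):

    1. SYN        client → server
    2. SYN-ACK    server → client
    3. ACK        client → server
    4. payload    client → server
    5. FIN        server → client   (server releases the connection)
    6. FIN-ACK    client → server
    7. ACK        server → client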

The following figure shows the achieved connections per second as a function of the number of cores used for the forwarding pipelines. The measurements are based on NetBricks using DPDK 18.11. The upper curve shows the results with generation of connection records switched off; each point is the average of four runs with 2 million TCP connections per core each. The lower curve shows the results with generation of connection records switched on; in this case each run comprises 200 thousand TCP connections per core.

Figure: TrafficEngine performance

Limitations

Currently only a basic TCP state machine is implemented, without retransmission, flow control, etc.
