
hpc / Spindle

Licence: other
Scalable dynamic library and Python loading in HPC environments

Programming Languages

c
50402 projects - #5 most used programming language

Projects that are alternatives of or similar to Spindle

Mud
MUD is a layer over Overtone to make live composition more powerful and immediate.
Stars: ✭ 58 (-18.31%)
Mutual labels:  performance
Faststring
Strings in a hurry.
Stars: ✭ 65 (-8.45%)
Mutual labels:  performance
Query Monitor
The Developer Tools Panel for WordPress
Stars: ✭ 1,156 (+1528.17%)
Mutual labels:  performance
Packagephobia
⚖️ Find the cost of adding a new dependency to your project
Stars: ✭ 1,110 (+1463.38%)
Mutual labels:  performance
Traceshark
This is a tool for Linux kernel ftrace and perf events visualization
Stars: ✭ 63 (-11.27%)
Mutual labels:  performance
React
Smart Server Performance
Stars: ✭ 65 (-8.45%)
Mutual labels:  performance
Performance Column
🚅 Performance Column (性能专栏)
Stars: ✭ 1,097 (+1445.07%)
Mutual labels:  performance
Md5 Simd
Accelerate aggregated MD5 hashing performance up to 8x for AVX512 and 4x for AVX2. Useful for server applications that need to compute many MD5 sums in parallel.
Stars: ✭ 71 (+0%)
Mutual labels:  performance
Wprig
A progressive theme development rig for WordPress.
Stars: ✭ 1,125 (+1484.51%)
Mutual labels:  performance
Execution time
How fast is your code? See it directly in Rails console.
Stars: ✭ 67 (-5.63%)
Mutual labels:  performance
Swisstable
Access Abseil Swiss Tables from C
Stars: ✭ 61 (-14.08%)
Mutual labels:  performance
Efsecondlevelcache
Entity Framework 6.x Second Level Caching Library.
Stars: ✭ 63 (-11.27%)
Mutual labels:  performance
Powa Web
PoWA user interface
Stars: ✭ 66 (-7.04%)
Mutual labels:  performance
Phpspy
Low-overhead sampling profiler for PHP 7+
Stars: ✭ 1,105 (+1456.34%)
Mutual labels:  performance
Yall.js
A fast, flexible, and small SEO-friendly lazy loader.
Stars: ✭ 1,163 (+1538.03%)
Mutual labels:  performance
Cmov
Measuring cmov vs branch-mov performance
Stars: ✭ 58 (-18.31%)
Mutual labels:  performance
Gl vs vk
Comparison of OpenGL and Vulkan API in terms of performance.
Stars: ✭ 65 (-8.45%)
Mutual labels:  performance
Calip
calip(er): all functions deserve to be measured and debugged at runtime
Stars: ✭ 71 (+0%)
Mutual labels:  performance
Datatable
A Python package for manipulating 2-dimensional tabular data structures
Stars: ✭ 1,166 (+1542.25%)
Mutual labels:  performance
Go Tdigest
A T-Digest implementation in golang
Stars: ✭ 67 (-5.63%)
Mutual labels:  performance

=============================================================================
== SPINDLE: Scalable Parallel Input Network for Dynamic Load Environments ==
=============================================================================

Authors:
  SPINDLE: Matthew LeGendre (legendre1 at llnl dot gov)
           W. Frings <W.Frings at fz-juelich dot de>
  COBO:    Adam Moody

Version: 0.13 (Aug 2020)

Summary:

Spindle is a tool for improving the performance of dynamic library and Python loading in HPC environments.

Documentation:

https://computation.llnl.gov/spindle/

Overview:

Using dynamically-linked libraries is common in most computational environments, but they can cause serious problems when used on large clusters and supercomputers. Shared libraries are frequently stored on shared file systems, such as NFS. When thousands of processes start simultaneously and attempt to search for and load libraries, it resembles a denial-of-service attack against the shared file system. This "attack" doesn't just slow down the application; it impacts every user on the system. We have encountered cases where it took over ten hours for a dynamically-linked MPI application running on 16K processes to reach main.

Spindle presents a novel solution to this problem. It transparently runs alongside your distributed application and takes over its library loading mechanism. When processes start to load a new library, Spindle intercepts the operation, designates one process to read the file from the shared file system, then distributes the library's contents to every process with a scalable broadcast operation.
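
To make the idea concrete, here is a minimal illustrative sketch, not Spindle's actual implementation: Spindle works underneath the dynamic loader and uses its own communication layer rather than MPI, and the file path, buffer handling, and program structure below are placeholders. The sketch only shows the core pattern of one process reading from the shared file system and a scalable broadcast delivering the bytes to everyone else:

    /* Illustrative only -- not Spindle's code. One rank reads a file from
     * the shared file system, then a tree-based broadcast (here MPI_Bcast)
     * delivers the bytes to every other rank, so the file server sees one
     * read instead of thousands. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Placeholder path; any library on a shared file system would do. */
        const char *path = (argc > 1) ? argv[1] : "/shared/lib/libexample.so";
        long size = 0;
        char *buf = NULL;

        if (rank == 0) {
            /* Only rank 0 touches the shared file system. */
            FILE *f = fopen(path, "rb");
            if (!f) { perror("fopen"); MPI_Abort(MPI_COMM_WORLD, 1); }
            fseek(f, 0, SEEK_END);
            size = ftell(f);
            fseek(f, 0, SEEK_SET);
            buf = malloc(size);
            if (fread(buf, 1, size, f) != (size_t)size) {
                perror("fread"); MPI_Abort(MPI_COMM_WORLD, 1);
            }
            fclose(f);
        }

        /* Scalable broadcast of the file size and contents. */
        MPI_Bcast(&size, 1, MPI_LONG, 0, MPI_COMM_WORLD);
        if (rank != 0)
            buf = malloc(size);
        MPI_Bcast(buf, (int)size, MPI_BYTE, 0, MPI_COMM_WORLD);

        /* Each rank could now write buf to node-local storage (e.g. a
         * ramdisk) and dlopen() it from there. */
        printf("rank %d received %ld bytes of %s\n", rank, size, path);

        free(buf);
        MPI_Finalize();
        return 0;
    }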

Spindle is very scalable. On a cluster at LLNL, the Pynamic benchmark (which measures library-loading performance) was unable to scale much past 100 nodes; even at that small scale it caused significant performance problems that impacted everyone on the cluster. When running Pynamic under Spindle, we were able to scale up to the maximum job size of 1,280 nodes without any signs of file-system stress or library-related slowdowns.

Unlike competing solutions, Spindle does not require any special hardware, and libraries do not have to be staged into special locations. Applications work out of the box and do not need any special compile or link flags. Spindle runs entirely in userspace and does not require kernel patches or root privileges.

Spindle can trigger scalable loading of dlopen'd libraries, dependent libraries, executables, Python modules, and specified application data files.
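
As an illustration of what "transparent" means here, the application code itself contains nothing Spindle-specific. A hypothetical program like the one below (library and symbol names are placeholders) would have its dlopen() call, along with the loads of its dependent libraries at startup, served through Spindle's scalable distribution path when launched under spindle:

    /* Hypothetical application code; build with: cc app.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Under Spindle, this load is intercepted and the library's
         * contents arrive via the scalable broadcast instead of a
         * per-process read from the shared file system. */
        void *handle = dlopen("libexample.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        void (*fn)(void) = (void (*)(void))dlsym(handle, "example_function");
        if (fn)
            fn();
        dlclose(handle);
        return 0;
    }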

Compilation:

Please see INSTALL file in the Spindle source tree.
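
The INSTALL file is the authoritative reference; as a rough sketch, a typical autotools-style build (the exact configure options you need depend on your system and launcher support) looks like:

    ./configure --prefix=/path/to/install
    make
    make install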

Usage:

Put 'spindle' before your job launch command. For example:

spindle mpirun -n 128 mpi_hello_world
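
The same pattern applies to other job launchers supported by your Spindle build; for example, on a SLURM system one would expect:

spindle srun -n 128 mpi_hello_world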
