Benchmark orchestration for Linux nodes
Linux Benchmark Library
Run repeatable workloads, collect multi-level metrics, and generate reports with a clean CLI and stable Python APIs.
Quick run
```bash
lb config init -i
lb plugin list --enable stress_ng
lb run --remote --run-id demo-run
```
Provisioned runs are available in dev mode via `--docker` or `--multipass`.
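If you would rather drive the same flow from the stable Python API mentioned above, a minimal sketch is shown below. The module path (`lb_app`), the `runs`/`plugins` helpers, and their arguments are assumptions for illustration, not the library's published interface; see the API reference for the real entry points.

```python
# Hypothetical sketch of the CLI quick run driven through the stable Python API.
# The module layout, function names, and keyword arguments are assumptions.
from lb_app import plugins, runs  # assumed module layout

plugins.enable("stress_ng")                       # mirrors `lb plugin list --enable stress_ng`
run = runs.start(remote=True, run_id="demo-run")  # mirrors `lb run --remote --run-id demo-run`
print(run.status())                               # assumed helper for polling run state
```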
Why this exists
- **Repeatable workloads**: standardize load patterns and run them across hosts or provisioned targets.
- **Layered architecture**: runner, controller, app, and UI are cleanly separated to keep coupling low.
- **Actionable artifacts**: raw metrics, journals, reports, and exports are organized per run and host.
- **Extensible plugins**: add new workloads via entry points and a user plugin directory (see the sketch after this list).
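Since plugins are discovered through Python entry points, a new workload is typically a small class registered under the library's entry-point group. The sketch below shows what that could look like; the group name `lb.plugins`, the expected interface, and the result shape are assumptions, not the documented plugin contract.

```python
# Hypothetical workload plugin sketch. The entry-point group ("lb.plugins"),
# the expected methods, and the return payload are assumptions; see the
# plugin documentation for the real contract.
#
# Registration in the plugin package's pyproject.toml (assumed group name):
#   [project.entry-points."lb.plugins"]
#   my_workload = "my_pkg.workload:MyWorkload"
import subprocess


class MyWorkload:
    """Runs a fixed stress-ng invocation and reports its exit status."""

    name = "my_workload"

    def run(self, duration_s: int = 30) -> dict:
        # Launch the workload and collect a minimal result payload.
        proc = subprocess.run(
            ["stress-ng", "--cpu", "1", "--timeout", f"{duration_s}s"],
            capture_output=True,
            text=True,
        )
        return {"returncode": proc.returncode, "stderr": proc.stderr[-1000:]}
```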
Core layers
| Layer | Responsibility |
|---|---|
| `lb_runner` | Execute workloads and collect metrics on a node. |
| `lb_controller` | Orchestrate remote runs via Ansible and manage state. |
| `lb_app` | Stable API for CLIs/UIs and integrations. |
| `lb_ui` | CLI/TUI implementation. |
| `lb_analytics` | Reporting and post-processing. |
| `lb_provisioner` | Docker/Multipass helpers for the CLI. |
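The intended dependency direction is top-down: `lb_ui` calls `lb_app`, which coordinates `lb_controller` and `lb_runner`, while `lb_analytics` consumes the resulting artifacts. The snippet below is only a sketch of an integration that respects that boundary; every imported name and signature is an assumption for illustration, not the published API.

```python
# Hypothetical layering sketch: an integration imports only the stable lb_app
# layer and the reporting layer, never lb_runner or lb_controller directly.
# Module paths and function names are assumptions for illustration.
from lb_app import runs
from lb_analytics import reports


def report_for(run_id: str) -> str:
    run = runs.get(run_id)                    # assumed lookup by run id
    return reports.generate(run, fmt="html")  # assumed report generator
```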
Where to go next
- Read the Quickstart for CLI and Python examples.
- Use the CLI reference for all commands.
- Browse the API reference for stable modules.
- Check Diagrams for architecture visuals and release artifacts.