1652791e5d
Rusty's load balancer calculates load differently based on average system CPU utilization in create_domain_hierarchy(): at >= 99.999% utilization, a task's load is the product of its weight and duty cycle; below that, load is just the task's duty cycle. populate_tasks_by_load(), however, always uses the product when calculating per-task load, so in the sub-99.999% util case task load is inflated, typically by a factor of 100 for a normal-priority task. Tasks then look too heavy to migrate, because a single task would transfer more load than the domain imbalance allows, leading to significant imbalance in some cases. Make populate_tasks_by_load() calculate task load the same way as domain load, by checking lb_apply_weight.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
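The fix described above can be sketched as follows. This is a minimal, illustrative version, not scx_rusty's actual code: the function name and signature are hypothetical, but it shows the invariant the commit restores, namely that per-task load uses the same formula as domain load, gated on lb_apply_weight.

```rust
// Hypothetical, simplified sketch of per-task load calculation after the
// fix. Both domain load and task load must branch on lb_apply_weight.
fn task_load(weight: f64, duty_cycle: f64, lb_apply_weight: bool) -> f64 {
    if lb_apply_weight {
        // At very high system utilization: weight-scaled load.
        weight * duty_cycle
    } else {
        // Below the ~99.999% utilization threshold: raw duty cycle.
        duty_cycle
    }
}

fn main() {
    // A normal-priority task (weight 100) running at a 50% duty cycle.
    let (weight, dcycle) = (100.0, 0.5);
    // Before the fix, per-task load was always weight * duty_cycle, a
    // 100x inflation relative to the domain's duty-cycle-based load.
    assert_eq!(task_load(weight, dcycle, false), 0.5);
    assert_eq!(task_load(weight, dcycle, true), 50.0);
    println!("ok");
}
```

With both sides agreeing, a task's load is never larger than the imbalance accounting expects, so migrations are no longer spuriously rejected.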
scx_rusty
This is a single user-defined scheduler used within sched_ext, a Linux kernel feature that enables implementing kernel thread schedulers in BPF and dynamically loading them. Read more about sched_ext.
Overview
A multi-domain, BPF / user space hybrid scheduler. The BPF portion of the scheduler does a simple round robin in each domain, and the user space portion (written in Rust) calculates the load factor of each domain, and informs BPF of how tasks should be load balanced accordingly.
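The user-space side's role can be sketched in a few lines. This is an illustrative simplification, not scx_rusty's actual implementation: given per-domain load sums, compute how far each domain sits from the average, which tells the BPF side which domains should shed load and which can absorb it.

```rust
// Illustrative sketch of the user-space balancing step: positive values
// mean a domain should shed that much load, negative means it can absorb.
fn domain_imbalances(domain_loads: &[f64]) -> Vec<f64> {
    let avg = domain_loads.iter().sum::<f64>() / domain_loads.len() as f64;
    domain_loads.iter().map(|load| load - avg).collect()
}

fn main() {
    // Two domains: one overloaded (4.0), one underloaded (2.0).
    let imb = domain_imbalances(&[4.0, 2.0]);
    assert_eq!(imb, vec![1.0, -1.0]);
    println!("{:?}", imb);
}
```

The real scheduler is considerably more involved (it weighs migration costs and respects per-domain limits), but the per-interval loop follows this shape: aggregate loads in user space, then push migration hints down to BPF.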
How To Install
Available as a Rust crate: cargo add scx_rusty
Typical Use Case
Rusty is designed to be flexible and to accommodate different architectures and workloads. Various load balancing thresholds (e.g. greediness, frequency, etc.), as well as how Rusty should partition the system into scheduling domains, can be tuned to achieve the optimal configuration for any given system or workload.
Production Ready?
Yes. If tuned correctly, rusty should be performant across various CPU architectures and workloads. Rusty by default creates a separate scheduling domain per LLC, so its default configuration may already be performant. Note, however, that scx_rusty does not yet disambiguate between LLCs in different NUMA nodes, so it may perform better on multi-CCX machines where all the LLCs share the same socket, as opposed to multi-socket machines.
Note as well that you may run into an issue with infeasible weights, where a task with a very high weight may cause the scheduler to incorrectly leave cores idle because it thinks they're necessary to accommodate the compute for a single task. This can also happen in CFS, and should soon be addressed for scx_rusty.
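A worked example helps show why infeasible weights cause idle cores. The numbers below are illustrative only: with weight-scaled load, one very-high-weight task can dominate the total load even though it can never consume more than one CPU's worth of compute.

```rust
// Illustrative numbers: why one very-high-weight task can make a
// weight-based load balancer reserve capacity it can never use.
fn main() {
    // Four tasks, each fully using one CPU (duty cycle 1.0). One has
    // weight 10000; the others have the default weight of 100.
    let weights = [10000.0_f64, 100.0, 100.0, 100.0];
    // With duty cycle 1.0, each task's load is simply its weight.
    let total_load: f64 = weights.iter().sum();
    let share = weights[0] / total_load;
    // The high-weight task accounts for ~97% of total load, yet it is a
    // single thread and can only ever occupy one CPU.
    assert!(share > 0.9);
    println!("high-weight task's load share: {:.2}", share);
}
```

Capping each task's load contribution at what a single CPU can actually deliver (making the weights "feasible") is the general direction of the fix mentioned above.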