scx-upstream/scheds/rust/scx_rustland
David Vernet 4520514fe8
rusty: Account for disabled but offline CPUs
As described in https://bugzilla.kernel.org/show_bug.cgi?id=218109,
https://github.com/sched-ext/scx/issues/147 and
https://github.com/sched-ext/sched_ext/issues/69, AMD chips can
sometimes report fully disabled CPUs as offline, which causes us to
count them when looking at /sys/devices/system/cpu/possible.

Additionally, systems can have holes in their active CPU maps. For
example, a system with CPUs 0, 1, 2, 3 possible, may have only 0 and 2
active. To address this, we need to do a few things:

1. Update topology.rs to be clear that it's returning the number of
   _possible_ CPUs in the system. Also update Topology to only record
   online CPUs when creating its span, and to iterate over sysfs when
   creating domains (see the first sketch after this list). It was
   previously trying to record when a CPU was online, but this was
   actually broken, as the topology directory isn't present in sysfs
   when the CPU is offline.

2. Schedulers should not rely on nr_possible_cpus for anything other
   than interacting with per-CPU data (e.g. for stats extraction) or
   verifying maximum sizes of statically sized arrays in BPF. It should
   _not_ be used for e.g. performing load calculations. With that said,
   we'll also need to update schedulers to not rely on the
   nr_possible_cpus figure exported by the topology crate. We do that
   for rusty in this patch, but leave the other schedulers alone apart
   from updating how they call topology.rs.

3. Account for the fact that LLC IDs may be non-contiguous. For
   example, if there is a single core in an LLC, then assigning LLC IDs
   to domains leaves the domain IDs non-contiguous. This doesn't fit
   our current model, which is used by e.g. infeasible_weights.rs.
   We'll update some of the code in rusty to accommodate this (see the
   second sketch below), but we'll need to do more.

4. Update schedulers to properly reset themselves in the event of a
   hotplug event. We'll take care of that in a follow-on change.
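
To make the possible-vs-online distinction in point 1 concrete, here is a
minimal, self-contained sketch (not the actual topology.rs code) that parses
the cpulist format (e.g. "0,2-3") used by /sys/devices/system/cpu/possible
and /sys/devices/system/cpu/online:

```rust
use std::fs;

/// Parse a sysfs CPU list such as "0,2-3,8" into a Vec of CPU IDs.
/// This is the format exposed by /sys/devices/system/cpu/possible
/// and /sys/devices/system/cpu/online.
fn parse_cpu_list(list: &str) -> Vec<usize> {
    let mut cpus = Vec::new();
    for group in list.trim().split(',') {
        if group.is_empty() {
            continue;
        }
        match group.split_once('-') {
            Some((lo, hi)) => {
                let lo: usize = lo.parse().unwrap();
                let hi: usize = hi.parse().unwrap();
                cpus.extend(lo..=hi);
            }
            None => cpus.push(group.parse().unwrap()),
        }
    }
    cpus
}

fn main() -> std::io::Result<()> {
    let possible = parse_cpu_list(&fs::read_to_string("/sys/devices/system/cpu/possible")?);
    let online = parse_cpu_list(&fs::read_to_string("/sys/devices/system/cpu/online")?);
    // On a machine with disabled cores (or hotplugged-off CPUs), these two
    // figures differ; per point 2 above, only the online span is safe to
    // use for load calculations.
    println!("{} possible CPUs, {} online", possible.len(), online.len());
    Ok(())
}
```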
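The ID densification in point 3 could look something like the following;
this is an illustrative sketch under assumed names, not rusty's actual
implementation:

```rust
use std::collections::BTreeMap;

/// Map possibly non-contiguous LLC IDs (e.g. 0, 2, 5) to dense,
/// contiguous domain IDs (0, 1, 2), so that array-indexed bookkeeping
/// such as load tracking keeps working.
fn dense_domain_ids(llc_ids: &[usize]) -> BTreeMap<usize, usize> {
    let mut sorted: Vec<usize> = llc_ids.to_vec();
    sorted.sort_unstable();
    sorted.dedup();
    sorted
        .into_iter()
        .enumerate()
        .map(|(dom, llc)| (llc, dom))
        .collect()
}

fn main() {
    // A topology with holes in its LLC IDs.
    let map = dense_domain_ids(&[0, 2, 5]);
    assert_eq!(map[&2], 1); // LLC 2 becomes domain 1
    assert_eq!(map[&5], 2); // LLC 5 becomes domain 2
    println!("{map:?}");
}
```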

Signed-off-by: David Vernet <void@manifault.com>
2024-03-14 11:15:28 -05:00

scx_rustland

This is a single user-defined scheduler used within sched_ext, a Linux kernel feature that enables implementing kernel thread schedulers in BPF and dynamically loading them. Read more about sched_ext.

Overview

scx_rustland is made of a BPF component (scx_rustland_core) that implements the low-level sched-ext functionality, and a user-space counterpart (the scheduler), written in Rust, that implements the actual scheduling policy.
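
To make that split concrete, here is a minimal, self-contained sketch of the policy half. The types and names below are stand-ins, not the real scx_rustland_core API (in the real crate, a BpfScheduler handles the plumbing between BPF and user-space), and the vruntime ordering is only roughly in the spirit of scx_rustland's actual policy:

```rust
use std::collections::VecDeque;

// Stand-in for the task metadata the BPF side would hand to user-space.
// The real scx_rustland_core types differ; these names are illustrative.
struct QueuedTask {
    pid: i32,
    vruntime: u64,
}

// The policy half: the BPF component enqueues runnable tasks, and
// user-space decides the order in which they run.
fn schedule(queue: &mut VecDeque<QueuedTask>) -> Option<QueuedTask> {
    // Pick the task with the smallest vruntime.
    let idx = queue
        .iter()
        .enumerate()
        .min_by_key(|(_, t)| t.vruntime)
        .map(|(i, _)| i)?;
    queue.remove(idx)
}

fn main() {
    let mut queue: VecDeque<QueuedTask> = VecDeque::new();
    queue.push_back(QueuedTask { pid: 101, vruntime: 3000 });
    queue.push_back(QueuedTask { pid: 102, vruntime: 1000 });
    while let Some(task) = schedule(&mut queue) {
        println!("dispatch pid {} (vruntime {})", task.pid, task.vruntime);
    }
}
```

Everything in the policy half runs as ordinary user-space Rust, which is what makes it straightforward to pull in external libraries or tooling.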

How To Install

Available as a Rust crate: cargo add scx_rustland

Typical Use Case

scx_rustland is designed to prioritize interactive workloads over background CPU-intensive workloads. For this reason, the typical use case of this scheduler involves low-latency interactive applications, such as gaming, video conferencing and live streaming.

scx_rustland is also designed to be an "easy to read" template that can be used by any developer to quickly experiment with more complex scheduling policies fully implemented in Rust.

Production Ready?

Not quite. For production scenarios, other schedulers are likely to exhibit better performance, as offloading all scheduling decisions to user-space comes with a certain cost.

However, a scheduler entirely implemented in user-space holds the potential for seamless integration with sophisticated libraries, tracing tools, external services (e.g., AI), etc.

Hence, there might be situations where the benefits outweigh the overhead, justifying the use of this scheduler in a production environment.

Demo

[Demo video: scx_rustland-terraria]

The key takeaway of this demo is that, despite the overhead of running a scheduler in user-space, we can still obtain interesting results and, in this particular case, even outperform the default Linux scheduler (EEVDF) in terms of application responsiveness (fps) while a CPU-intensive workload (a parallel kernel build) runs in the background.