We want to activate the user-space scheduler only when there are pending
tasks that require scheduling actions.
To do so we keep track of the queued tasks via nr_queued, which is
incremented in .enqueue() when a task is sent to the user-space
scheduler and decremented in .dispatch() when a task is dispatched.
However, we may introduce an imbalance if the same pid is sent to the
scheduler multiple times (because the scheduler stores all the tasks by
their unique pid).
When this happens nr_queued is never decremented back to 0, causing the
user-space scheduler to spin constantly, even when there is no work
left to do.
To prevent this from happening, split nr_queued into nr_queued and
nr_scheduled. The former is updated by the BPF component every time a
task is sent to the scheduler, and it's up to the user-space scheduler
to reset the counter once the queue is fully drained. The latter is
maintained by the user-space scheduler and represents the number of
tasks that are still being processed by the scheduler and are waiting
to be dispatched.
The sum nr_queued + nr_scheduled is called nr_waiting, and we can rely
on this metric to determine whether the user-space scheduler has some
pending work to do or not.
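A minimal sketch of the resulting test, assuming both counters live in
memory shared between the BPF component and the user-space scheduler
(the helper name is illustrative):

  /* Updated by the BPF component, reset by the user-space scheduler */
  volatile u64 nr_queued;
  /* Updated by the user-space scheduler */
  volatile u64 nr_scheduled;

  /* Return true if the user-space scheduler has pending work to do */
  static bool usersched_has_pending_tasks(void)
  {
          return nr_queued + nr_scheduled > 0;
  }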
This change makes scx_rustland more reliable and it significantly
reduces the CPU usage of the user-space scheduler by eliminating many
unnecessary activations.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Considering the CPU where the user-space scheduler is running as busy
doesn't provide any benefit, as the scheduler consistently dispatches a
number of tasks equal to the number of idle CPUs and then yields
(therefore its own CPU should be considered idle).
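In other words, when looking for busy CPUs, the CPU owned by the
user-space scheduler itself must be skipped; a sketch of the check,
assuming usersched_pid holds the scheduler's pid (get_cpu_owner() is
the CPU ownership primitive introduced by an earlier commit below):

  /* Return true only if the CPU is owned by a regular task */
  static bool is_cpu_busy(s32 cpu)
  {
          u32 owner = get_cpu_owner(cpu);

          /* The user-space scheduler's own CPU counts as idle */
          return owner != 0 && owner != usersched_pid;
  }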
This also allows us to reduce the overall CPU utilization of the
user-space scheduler, especially when the system is mostly idle,
without introducing any measurable performance regression.
Measuring the average CPU utilization of a (mostly) idle system over a
time period of 60 sec:
- without this patch: 5.41% avg CPU util
- with this patch: 2.26% avg CPU util
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Move the logic to activate the user-space scheduler to an update_idle()
callback, which is called when a CPU is about to go idle.
This disables the built-in idle tracking mechanism, allowing us to rely
entirely on the internal CPU ownership logic (via get_cpu_owner() and
set_cpu_owner()) and to share the idle state with the user-space
scheduler via the BPF_MAP_TYPE_ARRAY cpu_map.
Moreover, when the user-space scheduler is activated, kick the idle CPU
to trigger an immediate dispatch and avoid bubbles in the scheduling
pipeline.
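A sketch of the callback (the struct-ops name and the pending-work
helper are illustrative; scx_bpf_kick_cpu() is the sched-ext kfunc used
to wake up a CPU):

  void BPF_STRUCT_OPS(rustland_update_idle, s32 cpu, bool idle)
  {
          if (!idle)
                  return;
          /* Publish the idle state via cpu_map */
          set_cpu_owner(cpu, 0);
          /* Activate the user-space scheduler only if there is work to do */
          if (usersched_has_pending_tasks()) {
                  set_usersched_needed();
                  /* Kick the idle CPU to trigger an immediate dispatch */
                  scx_bpf_kick_cpu(cpu, 0);
          }
  }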
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
This reverts commit 9237e1d ("scx_rustland: always dispatch kthreads on
the local CPU").
Do not always prioritize all kthreads: we may have unbound workqueue
workers that can consume a lot of CPU cycles (e.g., encryption workers),
so we definitely want to apply the scheduling policy to those as well.
Therefore, restore the old behavior of prioritizing only per-CPU
kthreads.
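A sketch of the restored check in .enqueue(), assuming slice_ns and
enq_flags come from the surrounding code (scx_bpf_dispatch() sends the
task to the local dispatch queue of its CPU):

  /* Only kthreads that are pinned to a single CPU bypass the scheduler */
  static bool is_percpu_kthread(const struct task_struct *p)
  {
          return (p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1;
  }

  /* In .enqueue() */
  if (is_percpu_kthread(p)) {
          scx_bpf_dispatch(p, SCX_DSQ_LOCAL, slice_ns, enq_flags);
          return;
  }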
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Adding extra overhead to any kthread can potentially slow down the
entire system, so make sure this never happens by dispatching all
kthreads (not just the per-CPU kthreads) directly on their local CPU,
bypassing the user-space scheduler.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Trigger the user-space scheduler only upon a task's CPU release event
(avoiding its activation during each enqueue event) and only if there
are tasks waiting to be processed by the user-space scheduler.
This should avoid unnecessary calls to the user-space scheduler,
reducing its overall overhead.
Moreover, rename nr_enqueues to nr_queued and use it to store the
number of tasks currently queued to the user-space scheduler (i.e.,
waiting to be dispatched).
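A sketch of the conditional activation, assuming the CPU release point
is the .stopping() callback (the exact hook and the struct-ops name are
illustrative):

  void BPF_STRUCT_OPS(rustland_stopping, struct task_struct *p, bool runnable)
  {
          /* Wake up the user-space scheduler only if tasks are queued */
          if (nr_queued > 0)
                  set_usersched_needed();
  }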
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Provide the get_cpu_owner() and set_cpu_owner() primitives to get and
set CPU ownership in the BPF part. This improves code readability, and
these primitives can be used by the BPF part as a baseline to implement
better CPU idle tracking in the future.
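A minimal sketch of the two primitives, wrapping lookups into the
per-CPU ownership map (cpu_map, introduced by an earlier commit below):

  /* Return the pid of the task owning the CPU (0 if the CPU is idle) */
  static u32 get_cpu_owner(u32 cpu)
  {
          u32 *owner = bpf_map_lookup_elem(&cpu_map, &cpu);

          return owner ? *owner : 0;
  }

  /* Assign the CPU to a task (pid == 0 marks the CPU as idle) */
  static void set_cpu_owner(u32 cpu, u32 pid)
  {
          u32 *owner = bpf_map_lookup_elem(&cpu_map, &cpu);

          if (owner)
                  *owner = pid;
  }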
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
BPF doesn't have a full memory model yet, and while strict atomicity
might not be necessary in this context, it is advisable to make the
interlocking model clearer.
To achieve this, provide the following primitives to operate on
usersched_needed:
static void set_usersched_needed(void)
static bool test_and_clear_usersched_needed(void)
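A sketch of a possible implementation, using the GCC-style atomic
builtins supported by BPF (the flag layout is illustrative):

  static volatile u32 usersched_needed;

  /* Request an activation of the user-space scheduler */
  static void set_usersched_needed(void)
  {
          __sync_fetch_and_or(&usersched_needed, 1);
  }

  /* Consume a pending activation request, if any */
  static bool test_and_clear_usersched_needed(void)
  {
          return __sync_fetch_and_and(&usersched_needed, 0) == 1;
  }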
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Dispatch tasks in batches whose size matches the number of idle CPUs in
the system.
This reduces the pressure on the dispatcher queues, improving the
effectiveness of the scheduler (by keeping more tasks in the scheduler
task pool) and mitigating potential priority inversion issues.
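Conceptually, the dispatch path looks like the following sketch (the
actual loop lives in the user-space scheduler and all the helper names
here are hypothetical):

  /* One scheduling round: dispatch at most one task per idle CPU */
  static void dispatch_batch(void)
  {
          u64 nr_idle = nr_idle_cpus();  /* hypothetical helper */

          while (nr_idle-- > 0) {
                  /* Pick the highest priority task from the pool */
                  struct task *task = task_pool_pop();

                  if (!task)
                          break;
                  dispatch_task(task);   /* hypothetical helper */
          }
  }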
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Provide an interface for the BPF dispatcher and user-space scheduler to
share CPU information. This information can empower the user-space
scheduler to make more informed decisions and enable the implementation
of a broader range of scheduling policies.
With this change the BPF dispatcher provides a CPU map (one entry per
CPU) that stores the pid that is running on each CPU (0 if the CPU is
idle). The CPU map is updated by the BPF dispatcher in the .running()
and .stopping() callbacks.
The dispatcher then sends to the user-space scheduler a suggestion of
the candidate CPU for each task that needs to run (currently always the
previously used CPU), along with all the task's information.
The user-space scheduler can decide to confirm the selected CPU or to
choose a different one, using all the shared CPU information.
Lastly, the selected CPU is communicated back to the dispatcher along
with all the task's information, and the BPF dispatcher takes care of
executing the task on the selected CPU, triggering a migration if
needed.
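A sketch of the CPU map and of the two callbacks that keep it up to
date (MAX_CPUS and the struct-ops names are illustrative):

  /* One entry per CPU: pid of the task running on it (0 = idle) */
  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, MAX_CPUS);  /* assumed compile-time limit */
          __type(key, u32);
          __type(value, u32);
  } cpu_map SEC(".maps");

  void BPF_STRUCT_OPS(rustland_running, struct task_struct *p)
  {
          u32 cpu = scx_bpf_task_cpu(p);
          u32 *owner = bpf_map_lookup_elem(&cpu_map, &cpu);

          if (owner)
                  *owner = p->pid;
  }

  void BPF_STRUCT_OPS(rustland_stopping, struct task_struct *p, bool runnable)
  {
          u32 cpu = scx_bpf_task_cpu(p);
          u32 *owner = bpf_map_lookup_elem(&cpu_map, &cpu);

          if (owner)
                  *owner = 0;
  }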
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Do not report an exit error message if it's empty. Moreover,
distinguish between a graceful and a non-graceful exit.
In general, try to follow the behavior of user_exit_info.h for the C
schedulers.
NOTE: in the future the whole exit handling can probably be moved to a
more generic place (scx_utils) to prevent code duplication across
schedulers and to avoid small inconsistencies like this one.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Rename scx_rustlite to scx_rustland to better reflect that it mirrors
scx_userland (written in C), but is implemented in Rust.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
This scheduler is made of a BPF component (dispatcher) that implements
the low-level sched-ext functionalities and a user-space counterpart
(scheduler), written in Rust, that implements the actual scheduling
policy.
The main goal of this scheduler is to be easy to read and well
documented, so that newcomers (i.e., students, researchers, junior
devs, etc.) can use it as a template to quickly experiment with
scheduling theory.
For this reason the design of this scheduler is mostly focused on
simplicity and code readability.
Moreover, the BPF dispatcher is completely agnostic of the particular
scheduling policy implemented by the user-space scheduler. For this
reason, developers who want to use this scheduler to experiment with
scheduling policies should be able to simply modify the Rust component,
without having to deal with any internal kernel / BPF details.
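To give an idea of the split, the two components only exchange compact,
policy-neutral task descriptors; a sketch of this interface (the struct
and field names are illustrative, not the exact layout):

  /* Task information sent from the BPF dispatcher to the scheduler */
  struct queued_task_ctx {
          s32 pid;
          u64 sum_exec_runtime;  /* example input for the policy */
  };

  /* Scheduling decision sent back from the scheduler to the dispatcher */
  struct dispatched_task_ctx {
          s32 pid;
  };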
Future improvements:
- Transfer the responsibility of determining the CPU for executing a
particular task to the user-space scheduler.
Right now this logic is still fully implemented in the BPF part and
the user-space scheduler can only decide the order of execution of
the tasks, which significantly restricts the scheduling policies that
can be implemented in the user-space scheduler.
- Experiment with the possibility of sending tasks from the user-space
scheduler to the BPF dispatcher using a batch size, instead of
draining the task queue completely and sending all the tasks at once
every single time.
A batch size should help reduce the overhead and it should also help
reduce the number of wakeups of the user-space scheduler.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
- combine c and kernel-examples as it's confusing to have both
- rename 'rust-user' and 'c-user' to just 'rust' and 'c', which is simpler
- update and fix sync-to-kernel.sh