Commit Graph

848 Commits

Author SHA1 Message Date
Andrea Righi
086c6dffc8 scx_rustlite: simple user-space scheduler written in Rust
This scheduler is made of a BPF component (dispatcher) that implements
the low-level sched-ext functionality and a user-space counterpart
(scheduler), written in Rust, that implements the actual scheduling
policy.

The main goal of this scheduler is to be easy to read and well
documented, so that newcomers (e.g., students, researchers, junior devs)
can use it as a template to quickly experiment with scheduling theory.

For this reason the design of this scheduler is mostly focused on
simplicity and code readability.

Moreover, the BPF dispatcher is completely agnostic of the particular
scheduling policy implemented by the user-space scheduler. For this
reason, developers who want to use this scheduler to experiment with
scheduling policies should be able to simply modify the Rust component,
without having to deal with any internal kernel / BPF details.

Future improvements:

 - Transfer to the user-space scheduler the responsibility of
   determining the CPU where a particular task should run.

   Right now this logic is still fully implemented in the BPF part and
   the user-space scheduler can only decide the order of execution of
   the tasks, which significantly restricts the scheduling policies that
   can be implemented in the user-space scheduler.

 - Experiment with sending tasks from the user-space scheduler to the
   BPF dispatcher in batches of a given size, instead of draining the
   task queue completely and sending all the tasks at once every single
   time.

   A batch size should help reduce the overhead and also reduce the
   number of wakeups of the user-space scheduler.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-21 18:53:30 +01:00
David Vernet
eb7b3c99f0
Merge pull request #40 from sched-ext/ci
scx: Add CI action that builds schedulers for PRs
2023-12-18 21:17:47 -06:00
David Vernet
4523b10e45
scx: Add CI action that builds schedulers for PRs
When Ubuntu ships with sched_ext, we can also maybe test loading the
schedulers (not sure if the runners can run as root though). For now, we
should at least have a CI job that lets us verify that the schedulers
can _build_. To that end, this patch adds a basic CI action that builds
the schedulers.

As is, this is a bit brittle in that we're having to manually download
and install a few dependencies. I don't see a better way for now without
hosting our own runners with our own containers, but that's a bigger
investment. For now, hopefully this will get us _some_ coverage.

Signed-off-by: David Vernet <void@manifault.com>
2023-12-18 21:12:50 -06:00
David Vernet
318c06fa9c
nest: Skip out of idle cpu selection on exec() path
The core sched code calls select_task_rq() in a few places: the task
wakeup path (typical path), the fork() path, and the exec() path. For
nest scheduling, we don't want to select a core from the nest on the
exec() path. If we were previously able to find an idle core, we would
have found it on the fork() path, so we don't gain much by checking on
the exec() path. In fact, it's actually harmful, because we could
unnecessarily blow up the primary nest by bumping the same task between
multiple cores. Let's just opt out of select_task_rq calls on the
exec() path.

Suggested-by: Julia Lawall <julia.lawall@inria.fr>
Signed-off-by: David Vernet <void@manifault.com>
2023-12-18 13:51:15 -06:00
David Vernet
ab0e36f9ce
scx_nest: Apply r_impatient if no task is found in primary nest
Julia pointed out that our current implementation of r_impatient is
incorrect. r_impatient is meant to be a mechanism for more aggressively
growing the primary nest if a task repeatedly isn't able to find a core.
Right now, we trigger r_impatient if we're not able to find an attached
or previous core in the primary nest, but we _should_ be triggering it
only if we're unable to find _any_ core in the primary nest. Fixing the
implementation to do this drastically decreases how aggressively we grow
the primary nest when r_impatient is in effect.

Reported-by: Julia Lawall <julia.lawall@inria.fr>
Signed-off-by: David Vernet <void@manifault.com>
2023-12-18 11:05:36 -06:00
Jordan Rome
e9a9d32ab6 Restructure scheds folder names
- combine c and kernel-examples as it's confusing to have both
- rename 'rust-user' and 'c-user' to just 'rust' and 'c', which is simpler
- update and fix sync-to-kernel.sh
2023-12-17 13:14:31 -08:00
Daniel Müller
fed1dae9da rust: Update libbpf-rs & libbpf-cargo to 0.22
This is a follow-on to #32, which got reverted. I wrongly assumed that
scx_rusty resides in the sched_ext tree and consumes the published
version of scx_utils.
With this change we update the other in-tree dependencies. I built
scx_layered & scx_rusty. I bumped scx-utils to 0.4, because libbpf-cargo
seems to be part of the public API surface and libbpf-cargo 0.21 and
0.22 are not compatible with each other.

Signed-off-by: Daniel Müller <deso@posteo.net>
2023-12-14 14:33:58 -08:00
David Vernet
b8f70fa09a
Merge pull request #31 from jordalgo/minor-rusty-refactor
minor refactor of scx_rusty
2023-12-14 09:35:18 -06:00
Jordan Rome
ba35b97bb7 minor refactor of scx_rusty 2023-12-14 07:33:53 -08:00
Andrea Righi
dc81311d79 scx_userland: align MAX_ENQUEUED_TASKS to dispatch batch
With commit 48bba8e ("scx_userland: survive to dispatch failures")
scx_userland can better tolerate dispatch failures, so we can reduce
MAX_ENQUEUED_TASKS a bit and align it with the batch size used in
bpf_repeat(), when tasks are actually dispatched in the BPF counterpart.

This allows reducing the memory footprint of the scheduler and makes it
more consistent between enqueue and dispatch events.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-13 22:25:04 +01:00
Andrea Righi
48bba8e4f6 scx_userland: survive to dispatch failures
If the scheduler fails to dispatch a task we immediately give up,
exiting with an error like the following:

 Failed to dispatch task 251 in 1
 EXIT: BPF scheduler unregistered

This scenario can be simulated by dramatically decreasing the value of
MAX_ENQUEUED_TASKS.

We can make the scheduler a little more robust simply by re-adding the
task that cannot be dispatched to vruntime_head and stopping the
dispatch of additional tasks in the same batch.

This can give enough room, under such a "dispatch overload" condition,
to catch up and resume normal execution without crashing.

Moreover, introduce nr_vruntime_failed to report failed dispatch events
in the scheduler's statistics.
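
As an illustration only, here is a small self-contained C model of the
idea (all names besides nr_vruntime_failed are hypothetical; the real
scheduler keeps tasks in a vruntime-ordered list and dispatches them
through BPF): on a dispatch failure the task stays queued, the failure
counter is bumped, and the batch ends instead of the scheduler exiting.

  #include <stdio.h>

  #define NR_PENDING 5

  static int pending[NR_PENDING] = { 101, 102, 103, 104, 105 };
  static int next, nr_vruntime_failed;

  /* stand-in for the real dispatch path, which can fail under overload */
  static int try_dispatch(int pid)
  {
      return pid == 103 ? -1 : 0;
  }

  static void dispatch_batch(void)
  {
      while (next < NR_PENDING) {
          if (try_dispatch(pending[next])) {
              /* keep the task queued and stop the batch
               * instead of exiting with an error */
              nr_vruntime_failed++;
              break;
          }
          next++;
      }
  }

  int main(void)
  {
      dispatch_batch();
      printf("dispatched %d, failed %d\n", next, nr_vruntime_failed);
      return 0;
  }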

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-13 22:19:36 +01:00
Andrea Righi
1e9e6778bc scx_userland: allocate tasks array based on kernel.pid_max
Currently the array of enqueued tasks is statically allocated to a fixed
size of USERLAND_MAX_TASKS to avoid potential deadlocks that could be
introduced by performing dynamic allocations in the enqueue path.

However, this also adds a limit on the maximum pid that the scheduler
can handle, since the pid is used as the index to access the array.

In fact, it is quite easy to trigger the following failure on an average
desktop system (making this scheduler pretty much unusable in such a
scenario):

 $ sudo scx_userland
 ...
 Failed to enqueue task 33258: No such file or directory
 EXIT: BPF scheduler unregistered

Prevent this by using sysctl's kernel.pid_max as the size of the tasks
array (and still allocate it all at once during initialization).
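
A rough, self-contained sketch of the approach (hypothetical names,
minimal error handling): read kernel.pid_max once at startup and
allocate the whole array up front, so nothing is allocated in the
enqueue path.

  #include <stdio.h>
  #include <stdlib.h>

  struct enqueued_task {
      long long vtime;
      int dispatched;
  };

  static struct enqueued_task *tasks;
  static int nr_task_slots;

  static int init_task_array(void)
  {
      FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
      int ok;

      if (!f)
          return -1;
      ok = fscanf(f, "%d", &nr_task_slots) == 1;
      fclose(f);
      if (!ok)
          return -1;

      /* one slot per possible pid, indexed directly by pid */
      tasks = calloc(nr_task_slots, sizeof(*tasks));
      return tasks ? 0 : -1;
  }

  int main(void)
  {
      if (init_task_array())
          return 1;
      printf("allocated %d task slots\n", nr_task_slots);
      return 0;
  }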

The downside of this change is that scx_userland may require additional
memory to start, and on small systems it could even trigger OOMs. For
this reason, add an explicit message to the command help suggesting to
reduce kernel.pid_max in case of OOM conditions.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-13 17:33:10 +01:00
Tejun Heo
8a07bcc31b Bump versions and add LICENSE symlinks for scx_layered and scx_rusty 2023-12-12 11:21:08 -10:00
Kumar Kartikeya Dwivedi
c4c994c9ce
scx_central: Break dispatch_to_cpu loop when running out of buffer slots
For the case where many tasks being popped from the central queue cannot
be dispatched to the local DSQ of the target CPU, we will keep bouncing
them to the fallback DSQ and continue the dispatch_to_cpu loop until we
find one which can be dispatched to the local DSQ of the target CPU.

In a contrived case, it might be that all tasks pin themselves to CPUs
other than the target CPU and, due to their affinity, cannot be
dispatched to that CPU's local DSQ. If all of them are filling up the
central queue,
then we will keep looping in the dispatch_to_cpu loop and eventually run
out of slots for the dispatch buffer. The nr_mismatched counter will
quickly rise and sched-ext will notice the error and unload the BPF
scheduler.

To remedy this, ensure that we break the dispatch_to_cpu loop when we
can no longer perform a dispatch operation. The outer loop in
central_dispatch for the central CPU should ensure that the loop breaks
when we run out of these slots, schedule a self-IPI to the central core,
and allow sched-ext to consume the dispatch buffer before restarting the
dispatch loop.
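
As a plain-C illustration of the control flow only (not the actual BPF
code; the buffer model and names are made up), the point is to stop
consuming the central queue once the dispatch buffer has no free slots
and defer the remainder to the next dispatch pass:

  #include <stdio.h>

  #define DISPATCH_SLOTS 4

  static int nr_free_slots = DISPATCH_SLOTS;

  /* stand-in for a dispatch operation that consumes one buffer slot */
  static int dispatch_one(int pid)
  {
      (void)pid;
      if (!nr_free_slots)
          return -1;
      nr_free_slots--;
      return 0;
  }

  int main(void)
  {
      int central_q[] = { 10, 11, 12, 13, 14, 15 };
      int n = sizeof(central_q) / sizeof(central_q[0]);
      int i;

      for (i = 0; i < n; i++) {
          if (dispatch_one(central_q[i])) {
              /* out of slots: break instead of looping forever, let
               * the buffer drain, then pick the rest up next time */
              break;
          }
      }
      printf("dispatched %d, deferred %d\n", i, n - i);
      return 0;
  }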

A basic way to reproduce this scenario is to do:
taskset -c 0 perf bench sched messaging

The error in the kernel will be:
sched_ext: BPF scheduler "central" errored, disabling
sched_ext: runtime error (dispatch buffer overflow)
bpf_prog_6a473147db3cec67_dispatch_to_cpu+0xc2/0x19a
bpf_prog_c9e51ba75372a829_central_dispatch+0x103/0x1a5

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
2023-12-12 07:50:46 +00:00
Tejun Heo
c9d0cc640a
Merge pull request #22 from arighi/enable-rust-build-option
build: introduce enable_rust build option
2023-12-09 15:59:19 -10:00
Tejun Heo
abbb6a0276
Merge pull request #20 from arighi/scx-rusty-fix
scx_rusty: fix "subtract with overflow" error
2023-12-09 15:58:11 -10:00
Andrea Righi
6343bcf360 build: introduce enable_rust build option
Introduce an option to enable/disable the build of all the Rust
sub-projects.

This can be useful for building scx on systems where Rust is not fully
supported (e.g., armhf).

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 15:05:23 +01:00
Andrea Righi
0637b6a0b5 scx_nest: use proper format string for u64 types
This prevents some warnings when building scx_nest on 32-bit
architectures.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 14:49:50 +01:00
Andrea Righi
adc01140aa scx_qmap: use proper format string for u64 types
This prevents some warnings when building scx_qmap on 32-bit
architectures.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 14:49:44 +01:00
Andrea Righi
4df979ccb7 scx_pair: use proper format string for u64 types
This prevents some warnings when building scx_pair on 32-bit
architectures.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 14:49:38 +01:00
Andrea Righi
14e70fd134 scx_flatcg: use proper data size for hweight_gen
We should explicitly use u64 for hweight_gen to prevent the following
build failures on 32-bit architectures:

scheds/kernel-examples/scx_flatcg.p/scx_flatcg.bpf.skel.h: In function ‘scx_flatcg__assert’:
scheds/kernel-examples/scx_flatcg.p/scx_flatcg.bpf.skel.h:3523:9: error: static assertion failed: "unexpected size of \'hweight_gen\'"
 3523 |         _Static_assert(sizeof(s->data->hweight_gen) == 8, "unexpected size of 'hweight_gen'");

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 14:49:30 +01:00
Andrea Righi
00c5d2dfb7 scx_qmap: use proper data size for scheduler stats
We should explicitly use u64 for scheduler statistics to prevent the
following build failures on 32-bit architectures:

scheds/kernel-examples/scx_qmap.p/scx_qmap.bpf.skel.h: In function ‘scx_qmap__assert’:
scheds/kernel-examples/scx_qmap.p/scx_qmap.bpf.skel.h:2560:9: error: static assertion failed: "unexpected size of \'nr_enqueued\'"
 2560 |         _Static_assert(sizeof(s->bss->nr_enqueued) == 8, "unexpected size of 'nr_enqueued'");
      |         ^~~~~~~~~~~~~~
scheds/kernel-examples/scx_qmap.p/scx_qmap.bpf.skel.h:2561:9: error: static assertion failed: "unexpected size of \'nr_dispatched\'"
 2561 |         _Static_assert(sizeof(s->bss->nr_dispatched) == 8, "unexpected size of 'nr_dispatched'");
      |         ^~~~~~~~~~~~~~
scheds/kernel-examples/scx_qmap.p/scx_qmap.bpf.skel.h:2562:9: error: static assertion failed: "unexpected size of \'nr_reenqueued\'"
 2562 |         _Static_assert(sizeof(s->bss->nr_reenqueued) == 8, "unexpected size of 'nr_reenqueued'");
      |         ^~~~~~~~~~~~~~
scheds/kernel-examples/scx_qmap.p/scx_qmap.bpf.skel.h:2563:9: error: static assertion failed: "unexpected size of \'nr_dequeued\'"
 2563 |         _Static_assert(sizeof(s->bss->nr_dequeued) == 8, "unexpected size of 'nr_dequeued'");
      |         ^~~~~~~~~~~~~~
scheds/kernel-examples/scx_qmap.p/scx_qmap.bpf.skel.h:2564:9: error: static assertion failed: "unexpected size of \'nr_core_sched_execed\'"
 2564 |         _Static_assert(sizeof(s->bss->nr_core_sched_execed) == 8, "unexpected size of 'nr_core_sched_execed'");
      |         ^~~~~~~~~~~~~~

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 14:49:25 +01:00
Andrea Righi
4c65e71c48 scx_central: use proper format string for u64
When printing scheduler statistics we use %lu to print u64 values, which
works well on 64-bit architectures, but on 32-bit architectures we get
errors like the following:

  106 |                 printf("total   :%10lu    local:%10lu   queued:%10lu  lost:%10lu\n",
      |                                  ~~~~^
      |                                      |
      |                                      long unsigned int
      |                                  %10llu
  107 |                        skel->bss->nr_total,
      |                        ~~~~~~~~~~~~~~~~~~~
      |                                 |
      |                                 u64 {aka long long unsigned int}

Fix this by using the proper format %llu.
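
A minimal userspace example of the fix (assuming u64 resolves to
unsigned long long, as the kernel's __u64 does in userspace):

  #include <stdio.h>

  typedef unsigned long long u64;

  int main(void)
  {
      u64 nr_total = 42;

      /* %lu only matches u64 on 64-bit targets; %llu is correct on both */
      printf("total   :%10llu\n", nr_total);
      return 0;
  }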

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 14:49:20 +01:00
Andrea Righi
e396f1e467 scx_userland: get rid of strings.h include
Use the compiler's built-in stack initialization instead of memset().

In this way we can get rid of the string.h include and make
cross-compilation easier in certain small environments (e.g., arm).
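
A minimal illustration of the difference (the struct name is made up):

  #include <stdio.h>

  struct dispatched_task {
      int pid;
      unsigned long long vtime;
  };

  int main(void)
  {
      /* zero-initialized by the compiler; no memset()/<string.h> needed */
      struct dispatched_task task = {};

      printf("%d %llu\n", task.pid, task.vtime);
      return 0;
  }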

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 14:49:14 +01:00
Andrea Righi
c5d1bc3577 scx_rusty: fix "subtract with overflow" error
It seems that under certain conditions, the difference between the
current and the previous procfs::CpuStat values may become negative,
triggering the following crash/trace:

thread 'main' panicked at /build/rustc-VvCkKl/rustc-1.73.0+dfsg0ubuntu1/library/core/src/ops/arith.rs:217:1:
attempt to subtract with overflow
stack backtrace:
...
  19:     0x590d8481909e - scx_rusty::calc_util::h46f2af9c512c2ecd
                               at /home/arighi/src/scx/scheds/rust-user/scx_rusty/src/main.rs:217:31
  20:     0x590d8481c794 - scx_rusty::Tuner::step::h2e51076f043a8593
                               at /home/arighi/src/scx/scheds/rust-user/scx_rusty/src/main.rs:444:38
  21:     0x590d84828270 - scx_rusty::Scheduler::run::hb5483f1e585f52fe
                               at /home/arighi/src/scx/scheds/rust-user/scx_rusty/src/main.rs:1198:17
  22:     0x590d848289e9 - scx_rusty::main::h9ba8c62ad33aeee1
...

Prevent this by introducing a sub_or_zero() helper function that returns
zero if the difference is negative.
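
The guard itself is a one-liner; a C rendering of the same idea (the
actual helper is Rust code in scx_rusty):

  #include <stdio.h>

  typedef unsigned long long u64;

  /* clamp at zero instead of letting the unsigned subtraction wrap around */
  static u64 sub_or_zero(u64 curr, u64 prev)
  {
      return curr >= prev ? curr - prev : 0;
  }

  int main(void)
  {
      printf("%llu\n", sub_or_zero(3, 5)); /* prints 0, not a wrapped value */
      return 0;
  }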

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2023-12-09 14:47:35 +01:00
David Vernet
c953ee47a6
scx_nest: Reset schedulings when a task is dispatched
In scx_nest, we currently count the number of times that a core is
scheduled for compaction before we eventually just eagerly compact the
core. The idea is that the core could thrash between being scheduled and
then "de-scheduled" for compaction if there are a couple of tasks that
are bouncing between cores in the primary nest often enough to kick them
out of being compacted.

We're currently resetting schedulings when a core is eagerly compacted,
but to be precise we should probably also reset the count when a core
consumes a task from the fallback DSQ, as this indicates that the system
is overcommitted and that we likely won't benefit from compacting the
primary nest.

Signed-off-by: David Vernet <void@manifault.com>
2023-12-08 13:16:40 -06:00
Tejun Heo
11c6a809b2
Merge pull request #12 from sched-ext/scx_nest
scx_nest: Add scx_nest scheduler
2023-12-07 12:19:40 -10:00
David Vernet
ca21842908
scx_nest: Add scx_nest scheduler
The scx_nest scheduler seems to be behaving well. Let's merge it to the
scx repo so that CachyOS can package and use it more easily.

Signed-off-by: David Vernet <void@manifault.com>
2023-12-07 13:28:09 -06:00
David Vernet
b53e8251a1
rusty: Fix calc_util() in rusty
We were assigning curr to prev stats, and vice versa, in calc_util().
This was causing the following crash on debug builds:

[void@maniforge scheds]$ sudo RUST_BACKTRACE=1 scx_rusty
00:00:56 [INFO] CPUs: online/possible = 32/32
00:00:56 [INFO] DOM[00] cpumask 0000000000FF00FF (16 cpus)
00:00:56 [INFO] DOM[01] cpumask 00000000FF00FF00 (16 cpus)
00:00:56 [INFO] Rusty Scheduler Attached
thread 'main' panicked at /rustc/475c71da0710fd1d40c046f9cee04b733b5b2b51/library/core/src/ops/arith.rs:217:1:
attempt to subtract with overflow
stack backtrace:
   0: rust_begin_unwind
             at /rustc/475c71da0710fd1d40c046f9cee04b733b5b2b51/library/std/src/panicking.rs:597:5
   1: core::panicking::panic_fmt
             at /rustc/475c71da0710fd1d40c046f9cee04b733b5b2b51/library/core/src/panicking.rs:72:14
   2: core::panicking::panic
             at /rustc/475c71da0710fd1d40c046f9cee04b733b5b2b51/library/core/src/panicking.rs:127:5
   3: <u64 as core::ops::arith::Sub>::sub
             at /rustc/475c71da0710fd1d40c046f9cee04b733b5b2b51/library/core/src/ops/arith.rs:217:1
   4: <&u64 as core::ops::arith::Sub<&u64>>::sub
             at /rustc/475c71da0710fd1d40c046f9cee04b733b5b2b51/library/core/src/internal_macros.rs:55:17
   5: scx_rusty::calc_util
             at ./rust-user/scx_rusty/src/main.rs:216:29
   6: scx_rusty::Tuner::step
             at ./rust-user/scx_rusty/src/main.rs:444:38
   7: scx_rusty::Scheduler::run
             at ./rust-user/scx_rusty/src/main.rs:1198:17
   8: scx_rusty::main
             at ./rust-user/scx_rusty/src/main.rs:1261:5
   9: core::ops::function::FnOnce::call_once
             at /rustc/475c71da0710fd1d40c046f9cee04b733b5b2b51/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Flip them to avoid the crash. Rusty now runs fine.

Signed-off-by: David Vernet <void@manifault.com>
2023-12-06 18:25:27 -06:00
David Vernet
eba9155a7f
README: Add scheds/ README's
There's a fairly comprehensive README in the kernel's tools/sched_ext
directory which describes each of the example schedulers. Let's pull it
into this repository, and split it into the various subdirectories
containing the kernel-examples/ and rust-user/ schedulers.

Signed-off-by: David Vernet <void@manifault.com>
2023-12-06 16:55:02 -06:00
Tejun Heo
8ee1bc706f scx_central: Implement fallback for missing BPF_F_TIMER_CPU_PIN support 2023-12-05 14:50:45 -10:00
David Vernet
13586dc2ab scx_simple: Don't vtime dispatch to SCX_DSQ_GLOBAL
SCX_DSQ_GLOBAL no longer supports vtime dispatching. scx_simple uses
it to do vtime scheduling, so let's update it to create and use a
separate DSQ that it can both FIFO and PRIQ dispatch to.

Signed-off-by: David Vernet <void@manifault.com>
2023-12-04 18:06:47 -06:00
Tejun Heo
c42ba105ca scx_layered: Use p->thread_node to iterate threads in tp_cgroup_attach_task()
tp_cgroup_attach_task() walks p->thread_group to visit all member threads
and set tctx->refresh_layer. However, the upstream kernel recently removed
p->thread_group in 8e1f385104ac ("kill task_struct->thread_group") as it
was mostly a duplicate of the p->signal->thread_head list, which goes
through p->thread_node.

Switch to iterating via p->thread_node instead, add a comment explaining why
it's using the cgroup TP instead of scx_ops.cgroup_move(), and make
iteration failure non-fatal as the iteration is racy.
2023-12-04 10:58:06 -10:00
Tejun Heo
d6742a9b1b scx_rusty: Add comment explaining spurious delete_elem failures 2023-12-04 10:51:05 -10:00
Tejun Heo
66150490a1 scx_rusty: Work around spurious task_ctx update failures
As in scx_layered, bpf_map_delete_elem() can fail due to recursion
protection triggering spuriously, which can then lead to task_ctx creation
failure after PIDs wrap. Work around by dropping BPF_NOEXIST.
2023-12-03 15:46:29 -10:00
Tejun Heo
bf186f95f3 sync-to-kernel.sh: Drop unused shopt globstar 2023-12-03 15:35:15 -10:00
Tejun Heo
255f614615 sync-to-kernel.sh: Updated to sync to kernel and renamed accordingly
The scx repo is going to serve as the source of truth for sched_ext
schedulers. Reverse the sync direction and include syncing rust-user
schedulers too.
2023-12-03 15:34:26 -10:00
Tejun Heo
1a4734bb4c scx_flatcg: Drop unnecessary include 2023-12-03 14:22:59 -10:00
Tejun Heo
fcab460386 scx_utils: Bump version and publish
The include directory structure has changed (a breaking change) and the
doc had a misleading error. Let's cut a new release.
2023-12-03 12:51:16 -10:00
Tejun Heo
d0ed7913b4 scheds: Rearrange include files to match kernel/tools/sched_ext/include
Build scripts are updated accordingly.
2023-12-03 12:47:23 -10:00
Tejun Heo
41a4f6407e build: Add dummy gnu/stubs.h to fix build on systems without 32bit glibc-dev
See comment in sched/include/bpf-compat/gnu/stubs.h for details.
2023-12-02 13:25:02 -10:00
Tejun Heo
0f1ed894bd build: rust projects now link against libbpf.a if provided 2023-12-02 06:41:26 -10:00
Tejun Heo
30f6a38573 build: Pipe down build options to scx_utils::BuildHelpers using env vars 2023-12-01 14:49:32 -10:00
Tejun Heo
6b9c392bf0 build: "meson install" works now 2023-12-01 13:37:28 -10:00
Tejun Heo
a55fc6893b build: Trigger cargo build on rust sub-projects from meson.build 2023-12-01 11:58:56 -10:00
Tejun Heo
01d8351616 rustu: Import scx_rusty and scx_layered from kernel tree 2023-11-30 13:13:41 -10:00
Tejun Heo
302ea57798 scheds: Remove now unnecessary ravg_read.rs.h and relocate sync script 2023-11-28 08:55:41 -10:00
Tejun Heo
68b6d37800 scx: Initial repo setup and import of example schedulers from kernel tree 2023-11-27 14:47:04 -10:00