As described in https://bugzilla.kernel.org/show_bug.cgi?id=218109,
https://github.com/sched-ext/scx/issues/147 and
https://github.com/sched-ext/sched_ext/issues/69, AMD chips can
sometimes report fully disabled CPUs as offline, which causes us to
count them when looking at /sys/devices/system/cpu/possible.
Additionally, systems can have holes in their active CPU maps. For
example, a system with CPUs 0, 1, 2 and 3 possible may have only 0 and 2
active. To address this, we need to do a few things:
1. Update topology.rs to be clear that it's returning the number of
_possible_ CPUs in the system. Also update Topology to only record
online CPUs when creating its span, and to iterate over sysfs when
creating domains. It was previously trying to record whether a CPU was
online, but this was actually broken, as the topology directory isn't
present in sysfs when the CPU is offline (see the sketch after this
list).
2. Schedulers should not rely on nr_possible_cpus for anything other
than interacting with per-CPU data (e.g. for stats extraction) or
verifying the maximum size of statically sized arrays in BPF. It
should _not_ be used for e.g. performing load calculations. With that
said, we also need to update schedulers to not rely on the
nr_possible_cpus figure exported by the topology crate. We do that for
rusty in this patch, but don't fix any of the others other than
updating how they call topology.rs.
3. Account for the fact that LLC IDs may be non-contiguous. For example,
if an LLC contains a single core, then assigning LLC IDs to domains
can leave the domain IDs non-contiguous. This doesn't fit our current
model, which is used by e.g. infeasible_weights.rs. We'll update some
of the code in rusty to accommodate this, but more work will be
needed.
4. Update schedulers to properly reset themselves in the event of a
hotplug event. We'll take care of that in a follow-on change.
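As a rough illustration of point 1, reading the online CPU list from
sysfs (instead of trusting the possible count) can be sketched like
this in Rust (the path is the standard sysfs one, but this is not the
actual topology.rs code):

use std::fs;

fn read_online_cpus() -> std::io::Result<Vec<usize>> {
    // The file contains ranges such as "0-3,5,7-8".
    let raw = fs::read_to_string("/sys/devices/system/cpu/online")?;
    let mut cpus = Vec::new();
    for group in raw.trim().split(',') {
        match group.split_once('-') {
            Some((lo, hi)) => {
                let (lo, hi): (usize, usize) =
                    (lo.parse().unwrap(), hi.parse().unwrap());
                cpus.extend(lo..=hi); // inclusive: keep the last CPU
            }
            None => cpus.push(group.parse().unwrap()),
        }
    }
    Ok(cpus)
}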
Signed-off-by: David Vernet <void@manifault.com>
We're iterating from min..max cpu in cpus_online(), but that range is not
inclusive of the max CPU. Let's include it as well so we don't think the
last CPU is offline.
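A minimal sketch of the inclusive-range fix (names are illustrative,
not the actual cpus_online() code):

fn online_span(min_cpu: usize, max_cpu: usize) -> Vec<usize> {
    // `min_cpu..max_cpu` would exclude max_cpu; use an inclusive range
    // so the last CPU is not reported as offline.
    (min_cpu..=max_cpu).collect()
}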
Signed-off-by: David Vernet <void@manifault.com>
Most of the schedulers assume that the number of possible CPUs in the
system represents the actual number of CPUs available.
This is not always true: some CPUs may be offline or certain CPU models
(AMD CPUs for example) may include unavailable CPUs in this number.
This can lead to sub-optimal performance or even errors in the scheduler
(see for example [1][2]).
Ideally, we need to attack this issue in a more generic way, such as
having a proper API provided by a C library, that can be used by all
schedulers and the topology Rust module (scx_utils crate).
But for now, let's try to mitigate most of the common sub-optimal cases
separately inside each scheduler.
For rustland we can apply some mitigations both in select_cpu() (for the
BPF part) and in the user-space part:
- the former is fixed in the sched-ext kernel by commit 94dc0c01b957
("scx: Use cpu_online_mask when resetting idle masks"). However,
adding an extra check `cpu < num_possible_cpus` in select_cpu()
allows us to properly support AMD CPUs, even with kernels that don't
have the cpu_online_mask fix yet (this doesn't always guarantee the
validity of cpu, but it should be enough to mitigate the majority of
the potential sub-optimal cases, without introducing any significant
overhead)
- the latter can be fixed by relying on topology.span(), instead of
topology.nr_cpus(), to count the number of available CPUs in the
system (as sketched below).
[1] https://github.com/sched-ext/sched_ext/issues/69
[2] https://github.com/sched-ext/scx/issues/147
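A sketch of the user-space side, counting CPUs from the topology span
instead of nr_cpus() (method names are assumptions and may not match
the scx_utils API exactly):

use scx_utils::Topology;

fn nr_usable_cpus() -> usize {
    let topo = Topology::new().expect("failed to read the CPU topology");
    // Count only the CPUs present in the span, not the possible count.
    topo.span().weight()
}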
Link: 94dc0c01b9
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In order to use the new consume_raw() API we need to depend on a version
of libbpf-rs that is not released yet.
Apparently, adding such a dependency may introduce a potential conflict
with libbpf-sys.
Therefore, revert this change and go back to the previous consume() API.
Once a new version of libbpf-rs is out we can update all our
dependencies to use the new libbpf-rs and re-apply this patch to
scx_rustland_core.
Fixes: 7c8c5fd ("scx_rustland_core: use new consume_raw() libbpf-rs API")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In line with rustland's focus on prioritizing interactive tasks, set the
default base time slice to 5ms.
This helps mitigate potential audio cracking issues or system lags
when the system is overloaded or under memory pressure conditions (i.e.,
https://github.com/sched-ext/scx/issues/96#issuecomment-1978154324).
A downside of this change is that it may introduce regressions in the
throughput of CPU-intensive workloads, but in such scenarios rustland
may not be the optimal choice and alternative schedulers may be
preferred.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Some high-priority tasks may have a weight so high that it can
potentially disrupt the slice boost optimization logic, causing
interactive tasks to be less responsive.
In line with rustland's focus on prioritizing interactive tasks, prevent
giving too much CPU bandwidth to such high-priority tasks by limiting
the maximum task weight to 1000.
This helps maintain a good level of system responsiveness even in the
presence of tasks with a really high priority.
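A minimal sketch of the clamp (constant and function names are
illustrative):

const MAX_TASK_WEIGHT: u64 = 1000;

fn effective_weight(weight: u64) -> u64 {
    // Cap very high priorities so they cannot defeat the slice boost logic.
    weight.min(MAX_TASK_WEIGHT)
}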
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Use the new consume_raw() API provided by libbpf-rs with
https://github.com/libbpf/libbpf-rs/pull/680.
This allows us to be more precise and efficient at processing tasks
consumed from the BPF ring buffer.
NOTE: the new consume_raw() API is not available yet in any official
release of the libbpf-rs crate, but cargo allows us to pick versions
directly from git. This slightly increases the build time of
scx_rustland_core and the schedulers based on this crate (since we need
to recompile libbpf-rs from source), but we can re-add a proper
versioned dependency once the new libbpf-rs release is out.
TODO: this new API also offers the possibility to consume multiple items
from the BPF ring buffer with a single call to consume_raw(). This could
be investigated and implemented as a potential future enhancement.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The current topology.rs crate assumes that all cores in a system have
unique core IDs. This need not be the case: certain Intel Xeon
processors, for example, reuse core IDs across NUMA nodes. Let's update
the crate to assume unique core IDs only per socket.
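A rough sketch of keying cores by (package, core) instead of core ID
alone (types and field names are illustrative, not the actual
topology.rs structures):

use std::collections::BTreeMap;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct CoreKey {
    package_id: usize, // socket
    core_id: usize,    // only unique within a socket
}

fn add_cpu(cores: &mut BTreeMap<CoreKey, Vec<usize>>,
           package_id: usize, core_id: usize, cpu: usize) {
    cores.entry(CoreKey { package_id, core_id }).or_default().push(cpu);
}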
Signed-off-by: David Vernet <void@manifault.com>
Provide distinct methods to set the target CPU and the per-task time
slice to dispatched tasks.
Moreover, also provide a constructor to create a DispatchedTask from a
QueuedTask (this allows us to automatically bounce a task from the
scheduler to the BPF dispatcher without having to take care of setting
the individual task's attributes).
This also allows us to make most of the attributes of DispatchedTask
private; in particular, it hides cpumask_cnt, which should only be used
internally between the BPF and the user-space components.
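A rough sketch of the resulting API shape (field and method names are
assumptions, not the exact scx_rustland_core definitions):

pub struct QueuedTask { pub pid: i32, pub cpu: i32, pub cpumask_cnt: u64 }

pub struct DispatchedTask {
    pid: i32,
    cpu: i32,
    slice_ns: u64,    // 0 = use the global time slice
    cpumask_cnt: u64, // internal BPF/user-space handshake, kept private
}

impl DispatchedTask {
    // Bounce a queued task back to the BPF dispatcher without having to
    // set the individual attributes by hand.
    pub fn new(task: &QueuedTask) -> Self {
        Self {
            pid: task.pid,
            cpu: task.cpu,
            slice_ns: 0,
            cpumask_cnt: task.cpumask_cnt,
        }
    }

    pub fn set_cpu(&mut self, cpu: i32) { self.cpu = cpu; }
    pub fn set_slice_ns(&mut self, slice_ns: u64) { self.slice_ns = slice_ns; }
}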
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Provide a way to set a different time slice per-task, by adding a new
attribute slice_ns to the DispatchedTask struct.
This attribute determines the time slice assigned to the task, if it is
set to 0 then the global time slice (either the default one or the
effective one, if set) will be used.
At the same time, remove the payload attribute, which is basically unused
(scx_rustland uses it to send the task's vruntime to the BPF dispatcher
for debugging purposes, but it's not very useful anymore at this point).
In the future we may introduce a proper interface to attach a custom
payload to each task.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
This is to potentially reduce issues with folks
using different versions of libbpf at runtime.
This also:
- makes static linking of libbpf the default
- adds steps in `meson setup` to fetch libbpf and make it
There is no need to generate source code in a temporary directory with
RustLandBuilder(): we can simply generate code in-tree and exclude the
generated source files from git via .gitignore.
Having the generated source files in-tree can help to debug potential
build issues (and it also allows us to drop the tempfile crate
dependency).
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce a wrapper to scx_utils::BpfBuilder that can be used to build
the BPF component provided by scx_rustland_core.
The source of the BPF component (main.bpf.c) is included in the crate
as an array of bytes; the content is then unpacked into a temporary file
to perform the build.
The RustLandBuilder() helper is also used to generate bpf.rs (that
implements the low-level user-space Rust connector to the BPF
component).
Schedulers based on scx_rustland_core can simply use RustLandBuilder()
to build the backend provided by scx_rustland_core.
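A hypothetical build.rs for a scheduler built on scx_rustland_core,
assuming RustLandBuilder follows the new()/build() pattern of the other
scx_utils builders:

fn main() {
    scx_rustland_core::RustLandBuilder::new()
        .expect("failed to initialize RustLandBuilder")
        .build()
        .expect("failed to build the BPF backend and generate bpf.rs");
}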
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce a helper function to update the counter of queued and
scheduled tasks (used to notify the BPF component whether the user-space
scheduler still has some pending work to do).
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_rustland has significantly evolved since its original design.
With the introduction of scx_rustland_core and the inclusion of the
scx_rlfifo example, scx_rustland's focus can be shifted from solely
being an "easy-to-read Rust scheduler template" to a fully functional
scheduler.
For this reason, update the README and documentation to reflect its
revised design, objectives, and intended use cases.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Move the BPF component of scx_rustland to scx_rustland_core and make it
available to other user-space schedulers.
NOTE: main.bpf.c and bpf.rs are not pre-compiled in the
scx_rustland_core crate; they need to be included in the user-space
scheduler's source code in order to be compiled/linked properly.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce a separate crate (scx_rustland_core) that can be used to
implement sched-ext schedulers in Rust that run in user-space.
This commit only provides the basic layout for the new crate and the
abstraction to the custom allocator.
In general, any scheduler that has a user-space component needs to use
the custom allocator to prevent potential deadlock conditions caused by
page faults (a kthread needs to run to resolve the page fault, but the
scheduler is blocked waiting for the user-space page fault to be
resolved => deadlock).
However, we don't want to necessarily enforce this constraint on all the
existing Rust schedulers: some of them may do all their user-space
allocations in safe paths, hence the separate scx_rustland_core crate.
Merging this code in scx_utils would force all the Rust schedulers to
use the custom allocator.
In a future commit the scx_rustland backend will be moved to
scx_rustland_core, making it a totally generic BPF scheduler framework
that can be used to implement user-space schedulers in Rust.
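A sketch of how a scheduler opts into the custom allocator (the module
path and construction details here are assumptions; only the
RustLandAllocator name comes from the crate):

use scx_rustland_core::alloc::RustLandAllocator;

// Route every user-space allocation through the preallocated,
// fault-free allocator so no allocation can page-fault while the
// scheduler itself would be needed to service the fault.
#[global_allocator]
static ALLOCATOR: RustLandAllocator = RustLandAllocator;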
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The new topology crate allows us to replace the custom rustland topology
logic with the logic in the topology crate itself.
Signed-off-by: David Vernet <void@manifault.com>
Add a command line option to enable/disable the sched-ext built-in idle
selection logic in the user-space scheduler.
With this option the user-space scheduler will try to dispatch tasks on
the CPU selected during the .select_cpu() phase (using the built-in idle
selection logic).
Without this option the user-space scheduler will try to dispatch tasks
to the first CPU available.
The former can be useful to improve throughput, since tasks are more
likely to stick on the same CPU, while the latter can provide better
system responsiveness, especially when the system is significantly busy.
Given that, by default, tasks can be dispatched directly bypassing the
user-space scheduler if an idle CPU is found during .select_cpu(), the
user-space scheduler is primarily engaged only when the system is busy
(no idle CPUs are available). Under these circumstances, it is typically
more efficient to dispatch tasks on the first available CPU. Hence, the
default behavior is to ignore built-in idle selection logic in the
user-space scheduler.
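A hypothetical clap sketch of such an option (the real flag name and
default may differ):

use clap::Parser;

#[derive(Debug, Parser)]
struct Opts {
    /// Use the built-in idle selection logic when dispatching from the
    /// user-space scheduler; off by default (dispatch to the first CPU).
    #[clap(short = 'i', long)]
    builtin_idle: bool,
}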
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Checking if a CPU is idle or busy in the user-space scheduler is a bit
redundant, considering that we also rely on the built-in idle selection
logic in the BPF part.
Therefore get rid of the additional idle selection logic in the
user-space scheduler and rely on the built-in idle selection.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce an option to send all scheduling events and actions to
user-space, disabling any form of in-kernel optimization.
Enabling this option will likely make the system less responsive (but
more predictable in terms of performance) and it can be useful for
debugging purposes.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The buffer used to store struct queued_task_ctx items fetched from the
BPF ring buffer needs to be aligned to the architecture register size,
otherwise we may hit misaligned pointer dereference issues, such as:
thread 'main' panicked at src/bpf.rs:162:43:
misaligned pointer dereference: address must be a multiple of 0x8 but is 0x56516a51e004
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Prevent this by making sure the buffer is always aligned to 64 bits.
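A minimal sketch of the alignment fix (the wrapper type is
illustrative):

const BUFSIZE: usize = 4096;

// Force 8-byte alignment on the byte buffer that receives the raw
// queued_task_ctx items, so casting to the struct pointer is safe.
#[repr(align(8))]
struct AlignedBuffer([u8; BUFSIZE]);

static mut BUF: AlignedBuffer = AlignedBuffer([0; BUFSIZE]);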
Fixes: 93dc615 ("scx_rustland: use a ring buffer for queued tasks")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Switch from a BPF_MAP_TYPE_QUEUE to a BPF_MAP_TYPE_RINGBUF to store the
tasks that need to be processed by the user-space scheduler.
A ring buffer allows us to save a lot of memory copies and syscalls,
since the memory is directly shared between the BPF and the user-space
components.
Performance profile before this change:
2.44% [kernel] [k] __memset
2.19% [kernel] [k] __sys_bpf
1.59% [kernel] [k] __kmem_cache_alloc_node
1.00% [kernel] [k] _copy_from_user
After this change:
1.42% [kernel] [k] __memset
0.14% [kernel] [k] __sys_bpf
0.10% [kernel] [k] __kmem_cache_alloc_node
0.07% [kernel] [k] _copy_from_user
The overhead of both sys_bpf() and copy_from_user() is now reduced by a
factor of ~15x (only the dispatch path still uses sys_bpf()).
NOTE: despite being very effective, the current implementation is a bit
of a hack. This is because the present ring buffer API exclusively
permits consumption in a greedy manner, where multiple items can be
consumed simultaneously. However, libbpf-rs does not provide precise
information regarding the exact number of items consumed. By utilizing a
more refined libbpf-rs API [1] we may be able to improve this code a
bit.
Moreover, libbpf-rs doesn't provide an API for the user_ring_buffer, so
at the moment there's not a trivial way to apply the same change to the
dispatched tasks.
However, just with this change applied, the overhead of sys_bpf() and
copy_from_user() is already minimal, so we wouldn't get much benefit by
changing the dispatch path to use a BPF ring buffer.
[1] https://github.com/libbpf/libbpf-rs/pull/680
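For reference, draining the queued-task ring buffer with libbpf-rs looks
roughly like this (a sketch: the map handle and error handling are
illustrative, and the exact builder signatures depend on the libbpf-rs
version in use):

use std::time::Duration;
use libbpf_rs::RingBufferBuilder;

fn drain_queued_tasks(queued_map: &libbpf_rs::Map) -> Result<(), libbpf_rs::Error> {
    let mut builder = RingBufferBuilder::new();
    builder.add(queued_map, |data: &[u8]| {
        // `data` points directly into the shared ring buffer: no extra
        // syscall or per-item kernel copy is needed to consume a task.
        process_queued_task(data);
        0 // keep consuming the remaining items
    })?;
    let ringbuf = builder.build()?;
    // Consume everything currently available without blocking.
    ringbuf.poll(Duration::from_millis(0))?;
    Ok(())
}

fn process_queued_task(_data: &[u8]) {
    // push the raw queued_task_ctx bytes into the scheduler's task queue
}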
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Instead of using a BPF_MAP_TYPE_ARRAY to store which tasks are running
on which CPU we can simply use a global array, mapped in the user-space
address space.
In this way we can avoid a lot of memory copies and calls to sys_bpf(),
significantly reducing the scheduler's overhead.
Keep in mind that we don't need to be 100% correct while accessing this
information, so we can accept some fuzziness in order to significantly
reduce the scheduler's overhead.
Performance profile before this change:
5.52% [kernel] [k] __sys_bpf
4.84% [kernel] [k] __kmem_cache_alloc_node
4.71% [kernel] [k] map_lookup_elem
4.10% [kernel] [k] _copy_from_user
3.51% [kernel] [k] bpf_map_copy_value
3.12% [kernel] [k] check_heap_object
After this change:
2.20% [kernel] [k] __sys_bpf
1.91% [kernel] [k] map_lookup_and_delete_elem
1.60% [kernel] [k] __kmem_cache_alloc_node
1.10% [kernel] [k] _copy_from_user
0.12% [kernel] [k] check_heap_object
n/a bpf_map_copy_value
n/a map_lookup_elem
With this change we can reduce the overhead of sys_bpf() by ~2x and
the overhead of copy_from_user() by ~4x.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Currently, the primary bottleneck in scx_rustland lies within its custom
memory allocator, which is used to prevent page faults in the user-space
scheduler.
This is pretty evident looking at perf top:
39.95% scx_rustland [.] <scx_rustland::bpf::alloc::RustLandAllocator as core::alloc::global::GlobalAlloc>::alloc
3.41% [kernel] [k] _copy_from_user
3.20% [kernel] [k] __kmem_cache_alloc_node
2.59% [kernel] [k] __sys_bpf
2.30% [kernel] [k] __kmem_cache_free
1.48% libc.so.6 [.] syscall
1.45% [kernel] [k] __virt_addr_valid
1.42% scx_rustland [.] <scx_rustland::bpf::alloc::RustLandAllocator as core::alloc::global::GlobalAlloc>::dealloc
1.31% [kernel] [k] _copy_to_user
1.23% [kernel] [k] entry_SYSRETQ_unsafe_stack
However, there's no need to reinvent the wheel here: rather than relying
on an overly simplistic and inefficient allocator, we can rely on
buddy-alloc [1], which is also capable of operating on a preallocated
memory buffer.
After switching to buddy-alloc, the performance profile under the same
workload conditions looks like the following:
6.01% [kernel] [k] _copy_from_user
5.21% [kernel] [k] __kmem_cache_alloc_node
4.45% [kernel] [k] __sys_bpf
3.80% [kernel] [k] __kmem_cache_free
2.79% libc.so.6 [.] syscall
2.34% [kernel] [k] __virt_addr_valid
2.26% [kernel] [k] _copy_to_user
2.14% [kernel] [k] __check_heap_object
2.10% [kernel] [k] __check_object_size.part.0
2.02% [kernel] [k] entry_SYSRETQ_unsafe_stack
With this change in place, the primary overhead is now moved to the
bpf() syscall and the copies between kernel and user-space (this could
potentially be optimized in the future using BPF ring buffers, instead
of BPF FIFO queues).
A closer look at the allocator overhead before vs. after this change:
[before]
39.95% scx_rustland [.] core::alloc::global::GlobalAlloc>::alloc
1.42% scx_rustland [.] core::alloc::global::GlobalAlloc>::dealloc
[after]
1.50% scx_rustland [.] core::alloc::global::GlobalAlloc>::alloc
0.76% scx_rustland [.] core::alloc::global::GlobalAlloc>::dealloc
[1] https://crates.io/crates/buddy-alloc
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In order to prevent duplicate PIDs in the TaskTree (BTreeSet), we
perform an O(N) search each time we add an item, to verify whether the
PID already exists or not.
Under heavy stress test conditions the O(N) complexity can have a
potential impact on the overall performance.
To mitigate this, introduce a HashMap that can be used to retrieve tasks
by PID, typically with O(1) complexity. This could potentially degrade
to O(N) in the presence of hash collisions, but even in this case,
accessing the hash map is still more efficient than scanning all the
entries in the BTreeSet to search for the target PID.
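A rough sketch of the idea (struct and field names are illustrative,
not the actual scx_rustland types):

use std::collections::{BTreeSet, HashMap};

#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
struct TaskKey { vruntime: u64, pid: i32 }

#[derive(Default)]
struct TaskTree {
    tasks: BTreeSet<TaskKey>,      // ordered by vruntime for dispatching
    by_pid: HashMap<i32, TaskKey>, // O(1) duplicate detection by PID
}

impl TaskTree {
    fn insert(&mut self, pid: i32, vruntime: u64) {
        // If the PID is already queued, drop the stale entry first
        // instead of scanning the whole BTreeSet.
        if let Some(old) = self.by_pid.remove(&pid) {
            self.tasks.remove(&old);
        }
        let key = TaskKey { vruntime, pid };
        self.tasks.insert(key);
        self.by_pid.insert(pid, key);
    }
}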
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce a per-task generation counter to check the validity of the
cpumask at dispatch time.
The logic is the following:
- the cpumask generation number is incremented every time a task
calls .set_cpumask()
- when a task is enqueued the current generation number is stored in
the queued_task_ctx and relayed to the user-space scheduler
- the user-space scheduler can decide to dispatch the task on the CPU
determined by the BPF layer in .select_cpu(), redirect the task to
any other specific CPU, or redirect to the first CPU available (using
NO_CPU)
- the task is then dispatched back to the BPF code along with its
cpumask generation counter
- at dispatch time the BPF code checks if the generation number is the
same and it discards the dispatch attempt if the cpumask is not valid
anymore (the task will be automatically re-enqueued by the sched-ext
core code, potentially selecting another CPU / cpumask)
- if the cpumask is valid, but the CPU selected by the user-space
scheduler is invalid (according to the cpumask), the task will be
transparently bounced by the BPF code to the shared DSQ (in this way
the user-space code can be completely abstracted and dispatches that
target invalid CPUs can be automatically fixed by the BPF layer)
This solution can prevent stalls due to dispatches targeting invalid
CPUs and it can also avoid redundant dispatch events, making the code
more efficient and the cpumask interlocking more reliable.
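A minimal user-space sketch of the generation handshake (struct layouts
are illustrative; only cpumask_cnt mirrors the real field name):

struct QueuedTask { pid: i32, cpu: i32, cpumask_cnt: u64 }
struct DispatchedTask { pid: i32, cpu: i32, cpumask_cnt: u64 }

fn dispatch_on(task: &QueuedTask, target_cpu: i32) -> DispatchedTask {
    // Relay the generation number untouched: at dispatch time the BPF
    // code compares it with the task's current counter and discards the
    // dispatch if the cpumask changed in the meantime.
    DispatchedTask {
        pid: task.pid,
        cpu: target_cpu,
        cpumask_cnt: task.cpumask_cnt,
    }
}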
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Dispatch to the shared DSQ (NO_CPU) only when the assigned CPU is not
idle anymore; otherwise, keep the same CPU that has been assigned by
the BPF layer.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
When the system is not being fully utilized there may be delays in
promptly awakening the user-space scheduler.
This can happen, for example, when some CPU-intensive tasks are
constantly dispatched bypassing the user-space scheduler (e.g., using
SCX_DSQ_LOCAL) and other CPUs are completely idle.
Under this condition update_idle() can fail to activate the user-space
scheduler, because there are no pending events, and only the periodic
timer will wake up the scheduler, potentially introducing lags of up to
1 sec.
This can be reproduced, for example, by running a video game that
doesn't use all the CPUs available in the system (e.g., Team Fortress
2). With this game it is pretty easy to notice sporadic lags that
resolve after ~1 sec, due to the periodic timer kicking the scheduler.
To prevent this from happening, wake up the user-space scheduler
immediately as soon as a CPU is released, speculating on the fact that
most of the time there will always be another task ready to run.
This can introduce a little more overhead in the scheduler (due to
potentially unnecessary wake-up events), but it also prevents stuttery
behavior and makes the system much smoother and more responsive,
especially with video games.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Use scx_bpf_dispatch_cancel() to invalidate dispatches on the wrong
per-CPU DSQ, due to cpumask race conditions, and redirect them to the
shared DSQ.
This prevents dispatching tasks to CPUs that cannot be used according to
the task's cpumask.
With this applied the scheduler passed all the `stress-ng --race-sched`
stress tests.
Moreover, introduce a counter that is periodically reported to stdout as
an additional statistic, which can be helpful for debugging.
Link: https://github.com/sched-ext/sched_ext/pull/135
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Print all the scheduler statistics before exiting. Reporting the very
last state of the scheduler can help to debug events that could trigger
error conditions (such as page faults, scheduler congestion, etc.).
While at it, also fix some minor coding style issues (tabs vs. spaces).
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
SCX_KICK_IDLE is a new feature which isn't defined in older kernels. Add
a compat wrapper and use it for idle CPU wakeups.
Signed-off-by: Tejun Heo <tj@kernel.org>
Items in the task BTreeSet are stored by pid and vruntime. Make sure
that we never store multiple items with the same PID, so that
re-enqueued tasks are not dispatched multiple times.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Allow scaling the effective time slice down to 250 us. This can help to
maintain good audio quality even when the system is overloaded by
multiple CPU-intensive tasks.
Moreover, always round up the time slice scaling factor to be a little
more aggressive at shrinking the time slice, so that we can prioritize
low-latency tasks even more.
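A hypothetical sketch of the scaling logic with the 250 us floor (the
real formula in the scheduler may differ):

const SLICE_NS: u64 = 5_000_000;   // base time slice (5 ms)
const SLICE_NS_MIN: u64 = 250_000; // minimum effective slice (250 us)

fn effective_slice(nr_waiting: u64) -> u64 {
    // Round the scaling factor up to shrink the slice a bit more
    // aggressively when tasks are waiting.
    let scale = nr_waiting + 1;
    (SLICE_NS / scale).max(SLICE_NS_MIN)
}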
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Evaluate the number of voluntary context switches per second (nvcsw/sec)
for each task using an exponentially weighted moving average (EWMA) with
weight 0.5, which allows interactive tasks to be classified with more
accuracy.
Using a simple average over a period of 10 sec can introduce small lags
every 10 sec, as the statistics for the number of voluntary context
switches are refreshed. This can result in interactive tasks taking a
brief time to catch up before being accurately classified as such,
causing for example short audio cracks, small drops of 5-10 fps in
games, etc.
Using an EWMA allows us to smooth the average of nvcsw/sec, preventing
short lags for interactive tasks, while also preventing tasks that
experience an isolated short burst of voluntary context switches from
being incorrectly classified as interactive.
This patch has been tested with the usual test case of playing a
videogame while running a parallel kernel build in the background.
Without this patch the short lag every 10 sec is clearly noticeable;
with this patch applied the game and audio run smoothly.
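The update itself is trivial; with weight 0.5 it reduces to a simple
average of the previous value and the new sample (a sketch, names are
illustrative):

fn update_avg_nvcsw(avg_nvcsw: u64, nvcsw_per_sec: u64) -> u64 {
    // EWMA with weight 0.5: new_avg = 0.5 * old_avg + 0.5 * sample
    (avg_nvcsw + nvcsw_per_sec) / 2
}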
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Simplify the idle selection logic by relying only on the built-in idle
selection performed in the BPF layer.
When there are idle CPUs available in the system, tasks are dispatched
directly by the BPF dispatcher without invoking the user-space
scheduler. This avoids the user-space overhead and provides the best
system performance when CPU resources are not overcommitted.
Once the number of tasks exceeds the available CPUs, the user-space
scheduler takes over. However, by this time, the system is already
overcommitted, so there's little advantage in attempting to pinpoint the
optimal idle CPU through the user-space scheduler. Instead, tasks can be
executed on the first available CPU, consistently dispatching them to
the shared DSQ.
This achieves optimal performance both when the system is
under-utilized and when it is over-utilized.
With this change in place the user-space scheduler won't dispatch tasks
directly to specific CPUs, but we still want to keep this as a generic
feature in the BPF layer, so that it can potentially be used in the
future by this scheduler or even by other user-space schedulers (once
the BPF layer is moved to a more generic place).
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
When the user-space scheduler dispatches a task on a specific CPU, that
CPU might not be valid, since user-space doesn't have visibility into
the task's cpumask.
When this happens the BPF dispatcher (that has direct visibility of the
cpumask) should automatically redirect the task to a valid CPU, but
instead of bouncing the task on the shared DSQ, we should try to use the
CPU assigned by the built-in idle selection logic.
If this CPU is also not valid, then we can simply ignore the task, which
has been de-queued and re-enqueued, since a valid CPU will naturally be
re-selected at a later time.
Moreover, avoid kicking any specific CPU when the task is dispatched to
the shared DSQ, since the task can be consumed on any CPU and the
additional kick would simply add more overhead.
Lastly, rename dsq_id_to_cpu() to dsq_to_cpu() and cpu_to_dsq_id() to
cpu_to_dsq() for more clarity.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
With commit c6ada25 ("scx_rustland: use custom pcpu DSQ instead of
SCX_DSQ_LOCAL{_ON}") we tried to introduce custom per-CPU DSQs, instead
of using SCX_DSQ_LOCAL and SCX_DSQ_LOCAL_ON to dispatch tasks.
This was required, because dispatching tasks using SCX_DSQ_LOCAL_ON
doesn't provide a guarantee that the cpumask, checked at dispatch time
to determine the validity of a target CPU, remains valid.
This method solved the cpumask validity issue, but unfortunately it
introduced a noticeable performance regression and a potential
starvation issue (that were probably caused by the same problem): if a
task is assigned to a CPU in select_cpu() and the scheduler decides to
dispatch it on a different CPU, the task will be added to the new CPU's
DSQ, but if no dispatch event happens there, the task may remain stuck
in the per-CPU DSQ for a long time, triggering the sched-ext watchdog
timeout that would kick out the scheduler, for example:
12:53:28 [WARN] FAIL: IPC:CSteamEngin[7217] failed to run for 6.482s (err=1026)
12:53:28 [INFO] Unregister RustLand scheduler
Therefore, we reverted this change with 6d89ece ("scx_rustland: dispatch
tasks only on the global DSQ"), dispatching all the tasks to the global
DSQ and completely delegating to the kernel the distribution of tasks
among the available CPUs.
This is not the ideal solution, because we still want to give the
user-space scheduler the ability to assign tasks to specific CPUs.
Therefore, re-introduce distinct per-CPU DSQs, but also provide a global
shared DSQ. Tasks dispatched in the per-CPU DSQs are consumed from the
dispatch() callback of their corresponding CPU, tasks dispatched in the
global shared DSQ are consumed from any CPU.
In this way the BPF layer is able to provide an interface that gives
user-space the flexibility to dispatch a task to a specific CPU or to
the first available CPU, depending on the particular scheduler's needs.
If an invalid CPU (according to the cpumask) is selected the BPF
dispatcher will transparently redirect the task to a valid CPU, selected
using the built-in idle selection logic.
In the future we may want to improve this part, giving user-space
visibility into the cpumask, so that a valid CPU can be picked in
advance in a properly synchronized way.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
No functional change, just some refactoring to make the code more clear.
We have is_usersched_needed() and set_usersched_needed() that do
different things (the former checks if there are pending tasks for the
scheduler, the latter sets the usersched_needed flag to activate the
dispatch of the user-space scheduler).
Rename is_usersched_needed() to usersched_has_pending_tasks() to make
the code more clear and understandable.
Also move dispatch_user_scheduler() closer to the other dispatch-related
helper functions.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>