Overview
========
Currently, a task's time slice is determined based on the total number
of tasks waiting to be scheduled: the more overloaded the system, the
shorter the time slice.
This approach can help reduce the average wait time of all tasks,
allowing them to progress more slowly but uniformly, thus providing
smoother overall system performance.
However, under heavy system load, this approach can lead to very short
time slices distributed among all tasks, causing excessive context
switches that can badly affect soft real-time workloads.
Moreover, the scheduler tends to operate in a bursty manner (tasks are
queued and dispatched in bursts). This can also result in fluctuations
of longer and shorter time slices, depending on the number of tasks
still waiting in the scheduler's queue.
Such behavior can also negatively impact soft real-time workloads, such
as real-time audio processing.
Virtual time slice
==================
To mitigate this problem, introduce the concept of virtual time slice:
the idea is to evaluate the optimal time slice of a task, considering
the vruntime as a deadline for the task to complete its work before
releasing the CPU.
This is accomplished by calculating the difference between the task's
vruntime and the global current vruntime and using this value as the
task's time slice:
task_slice = task_vruntime - min_vruntime
In this way, tasks that "promise" to release the CPU quickly (based on
their previous work pattern) get a much higher priority (due to
vruntime-based scheduling and the additional priority boost for being
classified as interactive), but they are also given a shorter time slice
to complete their work and fulfill their promise of rapidity.
At the same time, tasks that are more CPU-intensive are de-prioritized,
but they tend to receive a longer time slice, thereby reducing the
number of context switches that could negatively affect their
performance.
In conclusion, latency-sensitive tasks get a high priority and a short
time slice (and they can preempt other tasks), while CPU-intensive
tasks get a low priority and a long time slice.
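As a minimal sketch of this evaluation (with illustrative slice bounds;
the actual implementation may differ):

  #include <stdint.h>

  /* Illustrative slice bounds; the real scheduler tunes its own limits. */
  #define SLICE_NS_MIN   500000ULL  /* 0.5 ms */
  #define SLICE_NS_MAX  5000000ULL  /* 5.0 ms */

  /* The vruntime delta, clamped to sane bounds, becomes the time slice:
   * tasks close to min_vruntime (little recent CPU usage) get a short
   * slice, CPU hogs get a long one. */
  static uint64_t task_slice(uint64_t task_vruntime, uint64_t min_vruntime)
  {
      uint64_t slice = task_vruntime > min_vruntime ?
                       task_vruntime - min_vruntime : SLICE_NS_MIN;

      if (slice < SLICE_NS_MIN)
          slice = SLICE_NS_MIN;
      if (slice > SLICE_NS_MAX)
          slice = SLICE_NS_MAX;
      return slice;
  }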
Example
=======
Let's consider the following theoretical scenario:
task | time
-----+-----
A | 1
B | 3
C | 6
D | 6
In this case task A represents a short interactive task, tasks C and D
are CPU-intensive, and task B is mainly interactive but also requires
some CPU time.
With a uniform time slice, scaled by the number of tasks, the
scheduling looks like this (assuming the time slice is 2):
A B B C C D D A B C C D D C C D D
| | | | | | | | |
`---`---`---`-`-`---`---`---`----> 9 context switches
With the virtual time slice the scheduling changes to this:
A B B C C C D A B C C C D D D D D
| | | | | | |
`---`-----`-`-`-`-----`----------> 7 context switches
In the latter scenario, tasks do not receive the same time slice scaled
by the total number of tasks waiting to be scheduled. Instead, their
time slice is adjusted based on their previous CPU usage. Tasks that
used more CPU time are given longer slices and their processing time
tends to be packed together, reducing the number of context switches.
Meanwhile, latency-sensitive tasks can still be processed as soon as
they need to, because they get a higher priority and they can preempt
other tasks. However, they will get a short time slice, so tasks that
were incorrectly classified as interactive will still be forced to
release the CPU quickly.
Experimental results
====================
This patch has been tested on an AMD Ryzen 7 5800X 8-core processor
(16 threads with SMT), 16GB RAM, NVIDIA GeForce RTX 3070.
The test case involves the usual benchmark of playing a video game while
simultaneously overloading the system with a parallel kernel build
(`make -j32`).
The average frames per second (fps) reported by Steam is used as a
metric for measuring system responsiveness (the higher the better):
Game | before | after | delta |
---------------------------+---------+---------+--------+
Baldur's Gate 3 | 40 fps | 48 fps | +20.0% |
Counter-Strike 2 | 8 fps | 15 fps | +87.5% |
Cyberpunk 2077 | 41 fps | 46 fps | +12.2% |
Terraria | 98 fps | 108 fps | +10.2% |
Team Fortress 2 | 81 fps | 92 fps | +13.6% |
WebGL demo (firefox) [1] | 32 fps | 42 fps | +31.2% |
---------------------------+---------+---------+--------+
Apart from the massive boost with Counter-Strike 2 (that should be taken
with a grain of salt, considering the overall poor performance in both
cases), the virtual time slice seems to systematically provide a boost
in responsiveness of around +10-20% fps.
It also seems to significantly prevent potential audio crackling issues
when the system is massively overloaded: no audio crackling was
detected during the entire run of these tests with the virtual deadline
change applied.
[1] https://webglsamples.org/aquarium/aquarium.html
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Make restart handling with user_exit_info simpler and use the load and
report macros consistently across the rust schedulers. This makes all
schedulers automatically handle auto restarts from CPU hotplug events.
Note that this is necessary even for scx_lavd, which has its own CPU
hotplug handling, as hotplug operations that take place between skel
open and scheduler init can still trigger a restart.
In cpumask_intersects_domain(), we check whether a given cpumask has any
CPUs in common with the specified domain by looking at the const, static
dom_cpumasks map. This map is only really necessary when creating the
domain struct bpf_cpumask objects at scheduler load time. After that, we
can just use the actual struct bpf_cpumask object embedded in the domain
context. Let's use that and cpumask kfuncs instead.
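A rough sketch of the new helper, assuming a lookup_dom_ctx() accessor
for the domain context (name illustrative):

  static bool cpumask_intersects_domain(const struct cpumask *cpumask, u32 dom_id)
  {
      struct dom_ctx *domc = lookup_dom_ctx(dom_id);  /* illustrative */
      struct bpf_cpumask *dmask;

      if (!domc)
          return false;

      /* The struct bpf_cpumask embedded in the domain context. */
      dmask = domc->cpumask;
      if (!dmask)
          return false;

      return bpf_cpumask_intersects((const struct cpumask *)dmask, cpumask);
  }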
This allows rusty to load with
https://github.com/sched-ext/sched_ext/pull/216.
Signed-off-by: David Vernet <void@manifault.com>
Commit 23b0bb5f ("scx_rustland: dispatch interactive tasks on any CPU")
allows only interactive tasks to be dispatched on any CPU, enabling them
to quickly use the first idle CPU available. Non-interactive tasks, on
the other hand, are kept on the same CPU as much as possible.
This change deprioritizes CPU-intensive tasks further, but it also helps
to exploit cache locality, while latency-sensitive tasks are dispatched
sooner, improving overall responsiveness, despite the potential
migration cost.
Given this new logic, the builtin-idle option, which forces all tasks
to be dispatched on the CPU assigned during select_cpu(), no longer
offers significant benefits. It would merely reduce the responsiveness
of interactive tasks.
Therefore, simply remove this option, allowing the scheduler to
determine the target CPU(s) for all tasks based on their nature.
Fixes: 23b0bb5f ("scx_rustland: dispatch interactive tasks on any CPU")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In order to prevent the compiler from merging or refetching load/store
operations, or performing unwanted reordering, we take the
implementation of READ_ONCE()/WRITE_ONCE() from the kernel sources
under "/include/asm-generic/rwonce.h".
Use WRITE_ONCE() in the function flip_sys_cpu_util() to ensure the
compiler doesn't perform unnecessary optimizations and won't make
incorrect assumptions when performing the bit-flipping store.
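For reference, the core of the kernel's definitions boils down to
volatile accesses (simplified here; the kernel version also
special-cases non-scalar types):

  #define READ_ONCE(x)  (*(const volatile typeof(x) *)&(x))

  #define WRITE_ONCE(x, val)                    \
  do {                                          \
      *(volatile typeof(x) *)&(x) = (val);      \
  } while (0)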
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
layered_dispatch() was incorrectly continuing down to the lower priority
DSQs after successfully consuming from HI_FALLBACK_DSQ which can lead to
latency issues. Fix it.
Use the GNU built-in __sync_fetch_and_xor() to perform the XOR
operation on the global variable "__sys_cpu_util_idx" and ensure the
operation's visibility.
The built-in function "__sync_fetch_and_xor()" provides both an atomic
operation and a full memory barrier, which is needed by every operation
(especially stores) on global variables.
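A minimal sketch of the resulting flip (the variable declaration is
shown only for context):

  static u64 __sys_cpu_util_idx;  /* index of the active utilization slot */

  static void flip_sys_cpu_util(void)
  {
      /* Atomic XOR with a full barrier: flips the index and makes the
       * store immediately visible to all CPUs. */
      __sync_fetch_and_xor(&__sys_cpu_util_idx, 1);
  }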
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
Newer sched_ext kernel versions set the scheduler to schedule all tasks
within the system by default. However, some users are still running
older kernels.
Therefore, call "__COMPAT_scx_bpf_switch_all()" to move all tasks to
the "SCHED_EXT" class, so scx_central schedules all tasks by default on
older kernels as well.
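A sketch of the init callback (the real scx_central init does more than
this):

  s32 BPF_STRUCT_OPS_SLEEPABLE(central_init)
  {
      /* No-op on kernels that already switch all tasks by default;
       * moves everything to SCHED_EXT on older kernels. */
      __COMPAT_scx_bpf_switch_all();

      return 0;
  }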
The main reason why custom affinities are tricky for scx_layered is
that if we put a task which doesn't allow all CPUs into a layer's DSQ,
it may not get consumed for an indefinite amount of time. However, this
is only true for confined layers. Both open and grouped layers are
always consumed from all CPUs and thus don't have this risk.
Let's allow tasks with custom affinities in open and grouped layers.
- In select_cpu(), don't consider direct dispatching to a local DSQ as
an affinity violation if the layer is open, even when the target CPU is
outside the layer's cpumask.
- In enqueue(), separate out per-cpu kthread special case into its own
block. Note that this is only applied if the layer is not preempting as a
preempting layer has a higher priority than HI_FALLBACK_DSQ anyway.
- Trigger the LO_FALLBACK_DSQ path for other threads only if the layer is
confined.
- The preemption path now also runs for tasks with a custom affinity in open
and grouped layers. Update it so that it only considers the CPUs in the
preempting task's allowed cpumask.
(cherry picked from commit 82d2f887a4608de61ddf5e15643c10e504a88f7b)
- AFFN_VIOL for per-cpu tasks could be double counted. Once in select_cpu()
and again in enqueue(). Count in select_cpu() only when direct
dispatching.
- Violating tasks were prioritized over non-violating ones because they were
queued on SCX_DSQ_GLOBAL which has priority over all user DSQs. This
doesn't make sense. Let's introduce two fallback DSQs - HI_FALLBACK_DSQ
and LO_FALLBACK_DSQ. HI is used for violating kthreads and LO for
violating user threads. HI is dispatched after preempting layers and LO
after all other layers. This shouldn't change the behavior too much for
kthreads while punishing, rather than rewarding, violating user threads.
(cherry picked from commit 67f69645667ba8a155cae9a9b7e90c055d39e23c)
Dispatch non-interactive tasks on the CPU selected by the built-in idle
selection logic and allow interactive tasks to be dispatched on any CPU.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Do not always assign the maximum time slice to interactive tasks, but
use the same value of the dynamic time slice for everyone.
This seems to prevent potential audio crackling when the system is
overcommitted.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The option --full-user is provided to delegate *all* scheduling
decisions to the user-space scheduler with no exception, including the
idle selection logic.
Therefore, make this option incompatible with --builtin-idle and
completely bypass the built-in idle selection logic when running in
full-user mode.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Provide a knob in scx_rustland_core to automatically turn the scheduler
into a simple FIFO when the system is underutilized.
This choice is based on the assumption that, when the system is
underutilized (fewer tasks running than available CPUs), the best
scheduling policy is FIFO.
With this option enabled the scheduler starts in FIFO mode. If most of
the CPUs are busy (nr_running >= num_cpus - 1), the scheduler
immediately exits from FIFO mode and starts to apply the logic
implemented by the user-space component. Then the scheduler can switch
back to FIFO if there are no tasks waiting to be scheduled (evaluated
using a moving average).
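A stateless sketch of the mode selection described above (all names are
illustrative stand-ins for the real counters):

  static bool use_fifo_mode(void)
  {
      if (!fifo_sched)
          return false;

      /* Most CPUs are busy: fall back to the user-space policy. */
      if (nr_running >= num_cpus - 1)
          return false;

      /* Underutilized: the moving average of queued tasks is zero,
       * so plain FIFO is good enough. */
      return nr_queued_avg == 0;
  }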
This option can be enabled/disabled by the user-space scheduler using
the fifo_sched parameter in BpfScheduler: if set, the BPF component will
periodically check for system utilization and switch back and forth to
FIFO mode based on that.
This improves the performance of workloads that use only a small
fraction of the available CPUs, while still maintaining the same good
level of responsiveness for interactive tasks when the system is
overcommitted.
In certain video games, such as Baldur's Gate 3 or Counter-Strike 2,
running in "normal" system conditions, we can experience a boost in fps
of approximately 4-8% with this change applied.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
This merge included additional commits that were supposed to be included
in a separate pull request and have nothing to do with the fifo-mode
changes.
Therefore, revert the whole pull request and create a separate one with
the correct list of commits required to implement this feature.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Dispatch non-interactive tasks on the CPU selected by the built-in idle
selection logic and allow interactive tasks to be dispatched on any CPU.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Do not always assign the maximum time slice to interactive tasks, but
use the same value of the dynamic time slice for everyone.
This seems to prevent potential audio crackling when the system is
overcommitted.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Provide a knob in scx_rustland_core to automatically turn the scheduler
into a simple FIFO when the system is underutilized.
This choice is based on the assumption that, when the system is
underutilized (fewer tasks running than available CPUs), the best
scheduling policy is FIFO.
With this option enabled the scheduler starts in FIFO mode. If most of
the CPUs are busy (nr_running >= num_cpus - 1), the scheduler
immediately exits from FIFO mode and starts to apply the logic
implemented by the user-space component. Then the scheduler can switch
back to FIFO if there are no tasks waiting to be scheduled (evaluated
using a moving average).
This option can be enabled/disabled by the user-space scheduler using
the fifo_sched parameter in BpfScheduler: if set, the BPF component will
periodically check for system utilization and switch back and forth to
FIFO mode based on that.
This improves the performance of workloads that use only a small
fraction of the available CPUs, while still maintaining the same good
level of responsiveness for interactive tasks when the system is
overcommitted.
In certain video games, such as Baldur's Gate 3 or Counter-Strike 2,
running in "normal" system conditions, we can experience a boost in fps
of approximately 4-8% with this change applied.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_simple is a basic scheduler that does either basic vtime or global
FIFO scheduling. At first glance, it may be confusing why we create a
separate DSQ rather than just using SCX_DSQ_GLOBAL. Let's add a comment
explaining the reason for this, so that users reading scx_simple as an
example scheduler don't get confused.
Signed-off-by: David Vernet <void@manifault.com>
Report the number of running tasks to stdout. This value also
represents the number of active CPUs that are currently executing a
task.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Although newer kernels default to switching-all, some users might still
be using the scheduler with older kernels.
Therefore, ensure all tasks are moved to the SCHED_EXT class by calling
__COMPAT_scx_bpf_switch_all() during init, so that scx_simple can still
operate on these older kernels as well.
Fixes: cf66e58 ("Sync from kernel (670bdab6073)")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The dynamic slice boost is not used anymore in the code, so there is no
reason to keep evaluating it.
Moreover, using it instead of the static slice boost seems to make
things worse, so let's just get rid of it.
Fixes: 0b3c399 ("scx_rustland: introduce dynamic slice boost")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_rustland has a function called get_cpu_owner() in BPF which
currently has no callers. There's nothing wrong with the function, but
it causes a warning due to an unused function. Let's just annotate it
with __maybe_unused to tell the compiler that it's not a problem.
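The annotation is just the compiler's unused attribute, e.g.:

  #define __maybe_unused __attribute__((__unused__))

  /* Keep the currently unused helper without tripping -Wunused-function. */
  static __maybe_unused s32 get_cpu_owner(s32 cpu)
  {
      /* ... original body unchanged; placeholder return for the sketch ... */
      return 0;
  }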
Signed-off-by: David Vernet <void@manifault.com>
When building with warnings enabled, a few obvious bugs are pointed out:
- We're not correctly calculating waker frequency
- We're not taking the min of avg_run_raw compared to max latency
- We're missing an element from sched_prio_to_weight
Fix these. With these changes, interactivity is seemingly improved. We
go from ~12 sec / turn -> 11 seconds / turn in the Civ 6 AI benchmark
with a 4 x nproc CPU hogging workload in the background. It's clear,
however, that we really need preemption.
Signed-off-by: David Vernet <void@manifault.com>
C SCX_OPS_ATTACH() and rust scx_ops_attach() macros were not calling
.attach() and were only attaching the struct_ops. This meant that all
non-struct_ops BPF programs contained in the skels were never attached which
breaks e.g. scx_layered.
Let's fix it by adding an .attach() invocation to the attach macros.
Originally, the implementation of rsigmoid_u64() would perform a
subtraction even when the value of "v" equals the value of "max", in
which case the result is certainly zero.
We can avoid this redundant subtraction by changing the condition from
">" to ">=", since we know that when "v" and "max" are equal we can
return 0 without any subtract operation.
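Sketch of the change (the function body is abbreviated; sigmoid_u64()
is its assumed companion helper):

  static u64 rsigmoid_u64(u64 v, u64 max)
  {
      /* Was "v > max": with ">=", the v == max case returns 0 directly
       * instead of going through a subtraction that yields 0 anyway. */
      if (v >= max)
          return 0;

      return sigmoid_u64(max - v, max);  /* assumed remaining body */
  }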
Now that the scx_ops_open!() macro is available, let's use it in scx_rusty to
cover all cases of when hotplug can happen.
Signed-off-by: David Vernet <void@manifault.com>
Now that the kernel exports the SCX_ECODE_ACT_RESTART exit code, we can
remove the custom hotplug logic from scx_rusty, and instead rely on the
built-in logic from the kernel. There's still a corner case that we're not
honoring: when a hotplug event happens on the init path. A future change will
address this as well.
Signed-off-by: David Vernet <void@manifault.com>
Introduce a low-power mode to force the scheduler to operate in a very
non-work conserving way, causing a significant saving in terms of power
consumption, while still providing a good level of responsiveness in the
system.
This option can be enabled in scx_rustland via the --low_power / -l
option.
The idea is to not immediately re-kick a CPU when it enters an idle
state, but do that only if there are no other tasks running in the
system.
In this way, latency-critical tasks can still be dispatched immediately
on the other active CPUs, while CPU-bound tasks will be forced to spend
more time waiting to be scheduled, basically enforcing a special CPU
throttling mechanism that affects only the tasks that are not latency
critical.
The consequence is a reduction in the overall system throughput, but
also a significant reduction of power consumption, that can be useful
for mobile / battery-powered devices.
Test case (using `scx_rustland -l`):
- play a video game (Terraria) while recompiling the kernel
- measure game performance (fps) and core power consumption (W)
- compare the result of normal mode vs low-power mode
Result:
               | Game performance | Power consumption |
---------------+------------------+-------------------+
normal mode    | 60 fps           | 6W                |
low-power mode | 60 fps           | 3W                |
As we can see from the results, the reduction in power consumption is
quite significant (50%), while the responsiveness of the game (fps)
remains the same. This means battery life can potentially be doubled
without significantly affecting system responsiveness.
The overall throughput of the system is, of course, affected in a
negative way (kernel build is approximately 50% slower during this
test), but the goal here is to save power while still maintaining a good
level of responsiveness in the system.
For this reason, the low-power mode should be considered only in
emergency conditions, for example when the system is close to
completely running out of power, or simply to extend the battery life
of a mobile device without compromising its responsiveness.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
During the initialization phase the scheduler needs to be aware of all
the available CPUs in the system (also those that are offline), in order
to create a proper per-CPU DSQ for all of them.
Otherwise, if some cores are offline, we may get errors like the
following:
swapper/7[0] triggered exit kind 1024:
runtime error (invalid DSQ ID 0x0000000000000007)
Backtrace:
scx_bpf_consume+0xaa/0xd0
bpf_prog_42ff1b9d1ac5b184_rustland_dispatch+0x12b/0x187
Change the code to configure the BpfScheduler object with the total
number of CPUs available in the system and prevent such failures.
This fixes #280.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Always dispatch at least one task, even if all the CPUs are busy.
This small overcommitment makes it possible to maximize CPU utilization
without introducing bubbles in the scheduling and without introducing
regressions in terms of responsiveness.
Before this change the average CPU utilization of a `stress-ng -c 8` on
an 8-cores system is around 95%. With this change applied the CPU
utilization goes up to a consistent 100%.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Add a method to TopologyMap to get the number of online CPUs.
Considering that most schedulers do not handle CPU hotplugging, it can
be useful to also expose this metric in addition to the number of
available CPUs in the system.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Drop the global effective time-slice and use the more fine-grained
per-task time-slice to implement the dynamic time-slice capability.
This reduces the scheduler's overhead (dropping the global time slice
volatile variable shared between user-space and BPF) and provides more
fine-grained control over the per-task time slice.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
If there is a higher priority task when running the ops.tick(),
ops.select_cpu(), and ops.enqueue() callbacks, the currently running
task yields its CPU by shrinking its time slice to zero, and a higher
priority task can run on the current CPU.
As low-cost, fine-grained preemption becomes available, default
parameters are adjusted as follows:
- Raise the bar for remote CPU preemption to avoid IPIs.
- Increase the maximum time slice.
- Gradually enforce the fair use of CPU time (i.e., ineligible duration)
Lastly, using CAS, we ensure that a remote CPU is preempted by only one
CPU. This removes unnecessary remote preemptions (and IPIs).
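A condensed sketch of the two mechanisms (struct fields and helper
names are illustrative):

  /* Yield the current task: a zeroed slice makes sched_ext schedule the
   * task out at the next opportunity. */
  static void try_yield_current_cpu(struct task_struct *p_run)
  {
      if (higher_prio_task_waiting())  /* illustrative check */
          WRITE_ONCE(p_run->scx.slice, 0);
  }

  /* Preempt a remote CPU: the CAS guarantees only one CPU sends the IPI. */
  static bool try_kick_cpu(struct cpu_ctx *victim)
  {
      if (!__sync_bool_compare_and_swap(&victim->victim_flag, 0, 1))
          return false;

      scx_bpf_kick_cpu(victim->cpu_id, SCX_KICK_PREEMPT);
      return true;
  }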
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Replace the BPF_MAP_TYPE_QUEUE with a BPF_MAP_TYPE_USER_RINGBUF to store
the tasks dispatched from the user-space scheduler to the BPF component.
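On the BPF side this looks roughly like the following (map size and
callback body are illustrative):

  struct {
      __uint(type, BPF_MAP_TYPE_USER_RINGBUF);
      __uint(max_entries, 4096 * 4096);
  } dispatched SEC(".maps");

  static long handle_dispatched(struct bpf_dynptr *dynptr, void *ctx)
  {
      /* Read one task dispatched by the user-space scheduler and
       * forward it to the target DSQ. */
      return 0;
  }

  /* In the dispatch path: drain everything user space has published,
   * with no bpf() syscall involved. */
  bpf_user_ringbuf_drain(&dispatched, handle_dispatched, NULL, 0);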
This eliminates the need for bpf() syscalls, significantly reducing the
overhead of the user-space to kernel communication and delivering a
notable performance boost in the overall system throughput.
Based on experimental results, this change reduces the scheduling
overhead by approximately 30-35% when the system is overcommitted.
This improvement has the potential to make user-space schedulers based
on scx_rustland_core viable options for real production systems.
Link: https://github.com/libbpf/libbpf-rs/pull/776
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_rusty's intention is to support hotplug by automatically restarting
whenever a hotplug event is encountered. Now that we're not trying to
consume a bogus DSQ in the rusty_dispatch() on a newly hotplugged CPU,
let's just remove offline tracking. It's really just there as a sanity
check, but it triggers if an offline task is made runnable during a
hotplug event before the ops.hotplug() callback has been invoked.
Signed-off-by: David Vernet <void@manifault.com>
There's currently a slight issue on existing kernels on the hotplug
path wherein we can start to receive scheduling callbacks on a CPU
before that CPU has received hotplug events. For CPUs going online, this
can possibly confuse a scheduler because it may not be expecting
anything to ever happen on that CPU, and therefore may do things that
could cause the scheduler to crash. For example, without this patch in
scx_rusty, we try to consume from a bogus DSQ that doesn't exist, which
causes ext.c to boot out the scheduler.
Though this issue will soon be fixed in ext.c, let's explicitly avoid
dispatching from an onlining CPU in rusty so that we properly support
hotplug on older kernels as well.
Signed-off-by: David Vernet <void@manifault.com>
We can hint to the compiler about paths we'll take in a scheduler. This
is a common pattern, so let's provide convenience macros.
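The macros wrap __builtin_expect() in the familiar kernel form:

  #define likely(x)    __builtin_expect(!!(x), 1)
  #define unlikely(x)  __builtin_expect(!!(x), 0)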
Signed-off-by: David Vernet <void@manifault.com>
scx_lavd implemented 32 and 64 bit versions of a base-2 logarithm
function. This is now also used in rusty. To avoid code duplication,
let's pull it into a shared header.
Note that there is technically a functional change here as we remove the
always inline compiler directive. We instead assume that the compiler
will know best whether or not to inline the function.
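For reference, a compact base-2 logarithm can be built on the
count-leading-zeros builtins (a sketch, not necessarily the shared
header's exact implementation):

  /* floor(log2(v)); the "| 1" keeps the builtin's input non-zero. */
  static inline u32 log2_u32(u32 v)
  {
      return 31 - __builtin_clz(v | 1);
  }

  static inline u32 log2_u64(u64 v)
  {
      return 63 - __builtin_clzll(v | 1);
  }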
Signed-off-by: David Vernet <void@manifault.com>
In user space in rusty, the tuner detects system utilization, and uses
it to inform how we do load balancing, our greedy / direct cpumasks,
etc. Something else we could be doing, but currently aren't, is using
system utilization to inform how we dispatch tasks. We currently have a
static, unchanging slice length for the runtime of the program, but no
single length is efficient for all scenarios.
Giving a task a long slice length does have advantages, such as
decreasing the number of involuntary context switches, decreasing the
overhead of preemption by doing it less frequently, possibly getting
better cache locality due to a task running on a CPU for a longer amount
of time, etc. On the other hand, long slices can be problematic as well.
When a system is highly utilized, a CPU-hogging task running for too
long can harm interactive tasks. When the system is under-utilized,
those interactive tasks can likely find an idle, or under-utilized core
to run on. When the system is over-utilized, however, they're likely to
have to park in a runqueue.
Thus, in order to better accommodate such scenarios, this patch
implements a rudimentary slice scaling mechanism in scx_rusty. Rather
than having one global, static slice length, we instead have a dynamic,
global slice length that can be changed depending on system utilization.
When over-utilized, we go with a longer slice length, and vice versa for
when the system is under-utilized. With Terraria, this results in
roughly a 50% improvement in mean FPS when playing on an AMD Ryzen 9
7950X, while running Spotify, and stress-ng -c $((4 * $(nproc))).
Signed-off-by: David Vernet <void@manifault.com>
scx_rusty doesn't do terribly well with interactive workloads. In order
to improve the situation, this patch adds support for basic deadline
scheduling in rusty. This approach doesn't incorporate eligibility, and
simply uses a crude avg_runtime tracking approach to scaling a task's
deadline.
In a series of follow-on changes, we'll update the scheduler to use more
indicators for interactivity that affect both slice length, and deadline
calculation.
Signed-off-by: David Vernet <void@manifault.com>
To know the required CPU performance (e.g., frequency) demand, we keep
track of 1) utilization of each CPU and 2) _performance criticality_ of
each task. The performance criticality of a task denotes how critical it
is to CPU performance (frequency). Like the notion of latency
criticality, we use three factors: the task's average runtime, its
wake-up frequency, and its woken-up frequency. The longer a task's
runtime and the higher its two frequencies, the more
performance-critical the task is, because it would be a bottleneck in
the middle of the task chain.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Let's remove the extraneous copy pasting and use a lookup helper like we
do for task and pcpu context.
Signed-off-by: David Vernet <void@manifault.com>
A LoadEntity gets the load to transfer between two entities by taking
the minimum of their imbalances and reducing its abs value by
xfer_ratio.
In practice self.imbal(), the push node or domain, always has positive
imbalance and other.imbal(), the pull node or domain, always has
negative imbalance, so other.imbal() is always the minimum even though
the abs value of its imbalance might be greater than the abs value of
self.imbal(). It seems like the intent is to take the minimum of the
two absolute values instead to avoid overbalancing at the puller, so
make both values abs.
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Rusty's load balancer calculates load differently based on average
system CPU utilization in create_domain_hierarchy(). At >= 99.999%
utilization, load is the product of a task's weight and duty cycle;
below that, load is the same as the task's duty cycle.
populate_tasks_by_load(), however, always uses the product when
calculating per-task load so that in the sub-99.999% util case, load is
inflated, typically by a factor of 100 with a normal priority task.
Tasks look too heavy to migrate as a result because a single task would
transfer more load than the domain imbalance allows, leading to
significant imbalance in some cases.
Make populate_tasks_by_load() calculate task load the same way as
domain load, checking lb_apply_weight.
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
The current code replenishes the task's time slice whenever the task
becomes ops.running(). However, there is a case where such behavior can
starve the other tasks, causing the watchdog timeout error. One (if not
all) such case is when a running task is preempted by a higher
scheduler class (e.g., RT, DL). In such a case, the task will
transition through a cycle of ops.running() -> ops.stopping() ->
ops.running() -> etc. Whenever it resumes running, it is placed at the
head of the local DSQ and ops.running() renews its time slice. Hence,
in the worst case, the task can run forever since its time slice is
never exhausted.
The fix is to assign the time slice only once, by checking whether the
time slice was already calculated.
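Sketch of the fix in ops.running() (the flag and helper names are
illustrative):

  void BPF_STRUCT_OPS(lavd_running, struct task_struct *p)
  {
      struct task_ctx *taskc = get_task_ctx(p);  /* illustrative lookup */

      /* Assign a slice only if none was computed for this runnable
       * period; a task bounced by a higher class keeps its remaining
       * budget instead of getting a fresh one. */
      if (taskc && !taskc->slice_assigned) {
          p->scx.slice = calc_time_slice(p);  /* illustrative helper */
          taskc->slice_assigned = 1;
      }
  }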
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Provide a command line option to print the version of the scheduler and
the scx_rustland_core crate.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Given that rustland_core now supports task preemption and it has been
tested successfully, it's worthwhile to cut a new version of the crate.
In Rust c_char can be aliased to i8 or u8, depending on the particular
target architecture.
For example, trying to build scx_lavd on ppc64 triggers the following
error:
error[E0308]: mismatched types
--> src/main.rs:200:38
|
200 | let c_tx_cm: *const c_char = (&tx.comm as *const [i8; 17]) as *const i8;
| ------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `*const u8`, found `*const i8`
| |
| expected due to this
|
= note: expected raw pointer `*const u8`
found raw pointer `*const i8`
To fix this, consistently use c_char instead of assuming it corresponds
to i8.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In _some_ kernel versions, loading scx_lavd fails with an error of
"bpf_rcu_read_unlock is missing". The usage of
bpf_rcu_read_lock/unlock() in proc_dump_all_tasks() is correct but the
BPF verifier still thinks bpf_rcu_read_unlock() is missing. The most
plausible reason so far is that the problematic kernel does not have a
commit 6fceea0fa59f ("bpf: Transfer RCU lock state between subprog
calls"), failing inter-procedural analysis between proc_dump_all_tasks()
and submit_task_ctx(). Thus, we force inline submit_task_ctx() (no
inter-procedural analysis by the verifier is necessary) for the time
being.
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Only the very newest kernels support scx_bpf_cpuperf_set(). Let's update
scx_layered to accommodate older kernels as well.
Signed-off-by: David Vernet <void@manifault.com>
Looking at perf top, it seems that the scheduler can spend a
significant amount of time iterating over the CPU topology/cpumask
information, especially when the system is running a large number of
tasks:
2.57% scx_rustland [.] <scx_utils::cpumask::CpumaskIntoIterator as core::iter::traits::iterator::Iterator>::next
Considering that scx_rustland doesn't support CPU hotplugging yet (it
requires a full restart to properly handle CPU hotplug events), we can
completely avoid this overhead by caching a TopologyMap object at the
beginning, when the scheduler starts, instead of constantly
re-evaluating the CPU topology information.
This reduces the scheduler overhead by ~5% CPU utilization under heavy
load conditions (from ~65% to ~60%, according to top).
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
This change adds `scx_bpf_cpuperf_cap`, `scx_bpf_cpuperf_cur` and
`scx_bpf_cpuperf_set` definitions that were recently introduced into
[`sched_ext`](https://github.com/sched-ext/sched_ext/pull/180). It adds
a `perf` field to `scx_layered` to allow for controlling performance per
layer.
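The kfunc declarations, roughly as exposed to BPF programs:

  u32 scx_bpf_cpuperf_cap(s32 cpu) __ksym;
  u32 scx_bpf_cpuperf_cur(s32 cpu) __ksym;
  void scx_bpf_cpuperf_set(s32 cpu, u32 perf) __ksym;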
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
If a library creates threads, those threads will often have the same
name. If two different processes of different priority both use a
library, it may be that we want the library's threads in each process to
be put into different layers.
To support this, let's add the ability to filter not only by task name,
but also by process name via the task thread group leader's comm.
Tested by creating two executables named "foo" and "bar", which both
spawn a bunch of tasks named "exp_worker" that spin until being
interrupted. With this config: https://pastebin.com/Uz2phzxQ, the tasks
were correctly matched to the expected layers.
Signed-off-by: David Vernet <void@manifault.com>
We're currently cloning cpumasks returned by calls to {Core, Cache,
Node, Topology}::span(). If a caller needs to clone it, they can. Let's
not penalize the callers that just want to query the underlying cpumask.
Signed-off-by: David Vernet <void@manifault.com>
Some people have expressed confusion at this behavior. Let's be a bit
more explicit in the documentation.
Signed-off-by: David Vernet <void@manifault.com>
Provide a run-time option to disable task preemption.
This option can be used to improve the throughput of the CPU-intensive
tasks while still providing a good level of responsiveness in the
system.
By default preemption is enabled, to provide a higher level of
responsiveness to the interactive tasks.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Use the new scx_rustland_core dispatch flag RL_PREEMPT_CPU to allow
interactive tasks to preempt other tasks with scx_rustland.
If the built-in idle selection logic is enforced (option `-i`), the
scheduler prioritizes keeping tasks on the target CPU designated by this
logic. With preemption enabled, these tasks have a higher likelihood of
reusing their cached working set, potentially improving performance.
Alternatively, when tasks are dispatched to the first available CPU
(default behavior), interactive tasks benefit from running more promptly
by kicking out other tasks before their assigned time slice expires.
This potentially makes it possible to increase the default time slice
in the future, improving the overall throughput of the system while
still maintaining a good level of responsiveness, because interactive
tasks are now able to run almost immediately, independently of the
remaining time slice of the other tasks contending for the CPUs in the
system.
= Results =
Measuring the performance of the usual benchmark "playing a video game
while running a parallel kernel build in the background" seems to give
around a 2-10% fps boost with preemption enabled, depending on the
particular video game.
Results were obtained running a `make -j32` kernel build on an AMD
Ryzen 7 5800X 8-core system with 16GB RAM, while testing video games
such as Baldur's Gate 3 (a solid +10% fps), Counter-Strike 2 (around
+5%) and Team Fortress 2 (a +2% boost).
Moreover, some WebGL applications (such as
https://webglsamples.org/aquarium/aquarium.html) seem to benefit even
more with preemption enabled, providing up to a +15% fps boost.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Reserve some bits of the `cpu` attribute of a task to store special
dispatch flags.
Initially, let's introduce just RL_CPU_ANY to replace the special value
NO_CPU, indicating that the task can be dispatched on any CPU,
specifically the first CPU that becomes available.
This makes it possible to keep the CPU value assigned by the builtin
idle selection logic, which can potentially be used later for further
optimizations.
Moreover, the possibility to specify dispatch flags gives more
flexibility and makes it possible to map new scheduling features to
such flags.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
When I transitioned layered to using task local storage, I messed up
initializing the task ctx, not realizing we previously had a separate
variable that was initializing the hashmap entry. We need to initialize
the task's layer to -1, and also set refresh_layer to 1.
Signed-off-by: David Vernet <void@manifault.com>
scx_simple no longer supports running in "partial" mode, with only
certain tasks using scx_simple. When this option was removed, we also
removed the call to scx_bpf_switch_all().
While switching-all is the default behavior for newer kernels, let's add
__COMPAT_scx_bpf_switch_all() so that scx_simple can work on older
kernels as well.
Signed-off-by: David Vernet <void@manifault.com>
We have a lot of boilerplate code where we create a cpumask, initialize
it, and then bpf_kptr_xchg() it into the map. In an effort to slightly
reduce the amount of boilerplate, let's create a helper that can
alleviate some of it.
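The helper can look something like this:

  static int create_save_cpumask(struct bpf_cpumask **kptr)
  {
      struct bpf_cpumask *cpumask;

      cpumask = bpf_cpumask_create();
      if (!cpumask)
          return -ENOMEM;

      /* Stash the new mask in the map-held kptr, releasing any
       * previous occupant. */
      cpumask = bpf_kptr_xchg(kptr, cpumask);
      if (cpumask)
          bpf_cpumask_release(cpumask);

      return 0;
  }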
Signed-off-by: David Vernet <void@manifault.com>
There are some random issues in the code, like unused variables, and bad
print formatters. I'm not sure why the compiler isn't consistently
complaining, but let's fix them.
Signed-off-by: David Vernet <void@manifault.com>
In scx_rusty, now that we have a complete view of the host's topology
thanks to the Topology crate, we can update our calls to
scx_bpf_create_dsq() to create the DSQ on the NUMA node of the domain.
It's unclear how much this will end up mattering for performance in the
typical case, but we might as well do the right thing given that host
topology is static, and we have the information.
Signed-off-by: David Vernet <void@manifault.com>
* scx-lavd: preemption of a lower-priority task using kick cpu
When a task is enqueued to the global queue, the scheduler checks if
there is a lower priority task than the enqueued task. If so, it kicks
out the lower-priority task, hoping the newly enqueued task or another
higher-priority task runs on the kicked CPU. Kicking another CPU is
expensive as an IPI is involved, so the scheduler judiciously kicks the
CPU when its benefit (i.e., priority gap) is clear enough.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The scx_rusty scheduler does not support hotplug, and expects a static
host topology throughout its runtime. Though the kernel does have
support for detecting hotplug events, we currently don't detect this in
the kernel, nor surface it to user space when it happens. Now that we
have scx_bpf_exit(), we can gracefully exit the kernel in the event of a
hotplug, and communicate to user space that it should restart the
scheduler.
This patch adds that support to scx_rusty. Note that this assumes that
we're running on a recent enough kernel that has scx_bpf_exit(). If it
doesn't, then we instead just error out of the kernel scheduler and exit
the application.
Signed-off-by: David Vernet <void@manifault.com>
If we try to cross-build scx on builders with older versions of system's
linux headers (such as those provided by linux-libc-headers in older
releases of Ubuntu), we may hit build failures, due to the different
kernel ABI, such as:
error: invalid use of undefined type ‘struct btf_enum64’
To address this, introduce a new build option called "kernel_headers"
that makes it possible to specify a custom path for the kernel headers
required during the build process.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Synchronize stragglers.
- Bug fix in __COMPAT_read_enum().
- A cosmetic difference in scx_qmap.bpf.c.
- Stray 'p' when calling getopt() in scx_simple.c.
After this the kernel tree and scx repo are in sync.
In rusty_select_cpu(), if a task is WAKE_SYNC, we'll currently migrate
the task to that CPU if there are any idle cores on the system. As in
[0], this condition is insufficient, as there could be idle cores
elsewhere on the system, but still tasks piled up on a single local DSQ.
Let's add a condition that the local DSQ has to be empty in order to
apply the WAKE_SYNC migration.
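In rusty_select_cpu() the condition becomes roughly the following (a
sketch; the surrounding idle-core check is omitted):

  static bool wake_sync_ok(u64 wake_flags, s32 waker_cpu)
  {
      if (!(wake_flags & SCX_WAKE_SYNC))
          return false;

      /* New condition: the waker's local DSQ must be empty, otherwise
       * tasks keep piling up on one CPU while cores sit idle elsewhere. */
      return scx_bpf_dsq_nr_queued(SCX_DSQ_LOCAL_ON | waker_cpu) == 0;
  }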
Before patch:
[void@maniforge src]$ hackbench
Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks)
Each sender will pass 100 messages of 100 bytes
Time: 0.433
With patch:
[void@maniforge src]$ hackbench
Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks)
Each sender will pass 100 messages of 100 bytes
Time: 0.035
Signed-off-by: David Vernet <void@manifault.com>
Change the upper bound of ineligible duration (LAVD_ELIGIBLE_TIME_MAX).
The updated (2x increased) upper bound reflects the distribution of
tasks' eligible_delta_ns better.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Change the calculation of run_freq to use the wait_period from the last
time the task yielded the CPU to the time when it runs again. The old
implementation measured the time interval between the last stopping and
the current running, increasing run_freq without reason.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Change the last_{start/stop/wait/wake}_clk in task_ctx to
last_{running/stopping/quiescent/runnable}_clk, matching with state
transition names. In addition, add comments and reorder fields in
task_ctx for readability.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Sync from kernel to receive new vmlinux.h and the updates to common headers.
This includes the following updates:
- scx_bpf_switch_all() is replaced by SCX_OPS_SWITCH_PARTIAL flag.
- sched_ext_ops.exit_dump_len added to allow customizing dump buffer size.
- scx_bpf_exit() added.
- Common headers updated to provide backward compatibility in a way which
hides most complexities from scheduler implementations.
scx_simple, qmap, central and flatcg are updated accordingly. Other
schedulers are broken for the moment.
When a task runs more than once (running <-> stopping) within one
runnable-quiescent transition, accumulate the runtime of the multiple
runnings for statistics. This helps to obtain the task's runtime per
schedule as if a huge time slice were given, which is what we want to
collect for scheduling decisions.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Remove runtime_boost using slice_boost_prio. Without slice_boost_prio,
the scheduler collects the exact time slice.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Let's change the function names of update_stat_for_*() to follow their
callers, for consistency and less confusion.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The run_time_boosted_ns calculation requires updated slice_boost_prio,
so updating slice_boost_prio should be done before updating
run_time_boosted_ns.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
In scx_layered, we're using a BPF_MAP_TYPE_HASH map (indexed by pid)
rather than a BPF_MAP_TYPE_TASK_STORAGE, to track local storage for a
task. As far as I can tell, there's no reason we need to be doing this.
We never access the map from user space, and we're even passing a
struct task_struct * to a helper subprog to look up the task context
rather than only doing it by pid.
Using a hashmap is error prone for this because we end up having to
manually track lifecycles for entries in the map rather than relying on
BPF to do it for us. For example, BPF will automatically free a task's
entry from the map when it exits. Let's just use TLS here rather than a
hashmap to avoid issues from this (e.g. we've observed the scheduler
getting evicted because we're accessing a stale map entry after a task
has been destroyed).
Reported-by: Valentin Andrei <vandrei@meta.com>
Signed-off-by: David Vernet <void@manifault.com>
transit_task_stat() is now tracking the same runnable, running, stopping,
quiescent transitions that sched_ext core already tracks and always returns
%true. Let's remove it.
LAVD_TASK_STAT_ENQ is tracking a subset of runnable task state transitions -
the ones which end up calling ops.enqueue(). However, what it is trying to
track is a task becoming runnable so that its load can be added to the cpu's
load sum.
Move the LAVD_TASK_STAT_ENQ state transition and update_stat_for_enq()
invocation to ops.runnable() which is called for all runnable transitions.
Note that when all the methods are invoked, the invocation order would be
ops.select_cpu(), runnable() and then enqueue(). So, this change moves
update_stat_for_enq() invocation before calc_when_to_run() for
put_global_rq(). update_stat_for_enq() updates taskc->load_actual which is
consumed by calc_greedy_ratio() and thus affects calc_when_to_run().
Before this patch, calc_greedy_ratio() would use load_actual which doesn't
reflect the last running period. After this patch, the latest running period
will be reflected when the task gets queued to the global queue.
The difference is unlikely to matter but it'd probably make sense to make it
more consistent (e.g. do it at the end of quiescent transition).
After this change, transit_task_stat() doesn't detect any invalid
transitions.
scx_lavd tracks task state transitions and updates statistics on each valid
transition. However, there's an asymmetry between the runnable/running and
stopping/quiescent transitions. In the former, the runnable and running
transitions are accounted separately in update_stat_for_enq() and
update_stat_for_run(), respectively. However, in the latter, the two
transitions are combined together in update_stat_for_stop().
This asymmetry leads to incorrect accounting. For example, a task's load
should be added to the cpu's load sum when the task gets enqueued and
subtracted when the task is no longer runnable (quiescent). The former is
accounted correctly from update_stat_for_enq() but the latter is done
whenever the task stops. A task can transit between running and stopping
multiple times before becoming quiescent, so the asymmetry can end up
subtracting the load of a task which is still running from the cpu's load
sum.
This patch:
- introduces LAVD_TASK_STAT_QUIESCENT and updates transit_task_stat() so
that it can handle all valid state transitions including the multiple back
and forth transitions between two pairs - QUIESCENT <-> ENQ and RUNNING
<-> STOPPING.
- restores the symmetry by moving load adjustments part from
update_stat_for_stop() to new update_stat_for_quiescent().
This removes a good chunk of ignored transitions. The next patch will take
care of the rest.
lookup_task_ctx(), lookup_task_ctx_may_fail(), and lookup_layer()
currently don't have the static keyword, so BPF may treat them as
global functions. We don't actually want these to be global, so let's
make them static to avoid confusing the verifier.
Signed-off-by: David Vernet <void@manifault.com>