During the initialization phase the scheduler needs to be aware of all
the CPUs available in the system (including those that are offline), in
order to create a proper per-CPU DSQ for each of them.
Otherwise, if some cores are offline, we may get errors like the
following:
swapper/7[0] triggered exit kind 1024:
runtime error (invalid DSQ ID 0x0000000000000007)
Backtrace:
scx_bpf_consume+0xaa/0xd0
bpf_prog_42ff1b9d1ac5b184_rustland_dispatch+0x12b/0x187
Change the code to configure the BpfScheduler object with the total
number of CPUs available in the system, preventing such failures.
This fixes #280.
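As a sketch of the BPF side, initialization can create a DSQ for every
possible CPU; here nr_cpu_ids stands for the value passed in from user
space, and all names are illustrative rather than the scheduler's exact
code:

```c
/* Sketch: create one DSQ per possible CPU (online or not), so that
 * dispatch never references a DSQ that was never created.
 * nr_cpu_ids is assumed to be set from user space at load time. */
const volatile u32 nr_cpu_ids;

s32 BPF_STRUCT_OPS_SLEEPABLE(rustland_init)
{
	s32 cpu, err = 0;

	bpf_for(cpu, 0, nr_cpu_ids) {
		err = scx_bpf_create_dsq(cpu, -1);
		if (err)
			break;
	}
	return err;
}
```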
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Always dispatch at least one task, even if all the CPUs are busy.
This small overcommitment makes it possible to maximize CPU utilization
without introducing bubbles in the scheduling and without regressing
responsiveness.
Before this change the average CPU utilization of `stress-ng -c 8` on
an 8-core system was around 95%. With this change applied, CPU
utilization consistently reaches 100%.
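A minimal sketch of the overcommit rule (the function and variable
names are illustrative):

```c
/* Sketch: decide how many tasks the user-space scheduler should
 * dispatch in this round. Never return 0: even with all CPUs busy,
 * one extra task is queued so a CPU never sits idle waiting for the
 * next scheduling round. */
static u64 nr_tasks_to_dispatch(u64 nr_idle_cpus)
{
	return nr_idle_cpus > 0 ? nr_idle_cpus : 1;
}
```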
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The comment that describes rustland_update_idle() still incorrectly
reports an old implementation detail. Update its description for better
clarity.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Change the BPF CPU selection logic as follows:
- if the previously used CPU is idle, keep using it
- if the task is not coming from a wait state, try to stick as much as
possible to the same CPU (for better cache usage)
- if the task is waking up from a wait state, rely on the sched_ext
built-in idle selection logic
This logic can be completely disabled when the full user-space mode is
enabled. In this case tasks will always be assigned to the previously
used CPU and the user-space scheduler should take care of distributing
them among the available CPUs.
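A sketch of the default (non-user-space-mode) policy as a select_cpu
callback, written against the standard sched_ext helpers; the structure
is illustrative and the scheduler's actual code may differ:

```c
s32 BPF_STRUCT_OPS(rustland_select_cpu, struct task_struct *p,
		   s32 prev_cpu, u64 wake_flags)
{
	bool is_idle = false;

	/* If the previously used CPU is idle, keep using it. */
	if (scx_bpf_test_and_clear_cpu_idle(prev_cpu))
		return prev_cpu;

	/* Not waking from a wait state: stick to the same CPU for
	 * better cache usage. */
	if (!(wake_flags & SCX_WAKE_TTWU))
		return prev_cpu;

	/* Waking from a wait state: rely on the sched_ext built-in
	 * idle selection logic. */
	return scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
}
```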
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Some users are running with NUMA disabled, which makes sense given that it's
useless in a lot of contexts. Let's make the Topology crate assume a default
node with ID 0 in such cases.
Signed-off-by: David Vernet <void@manifault.com>
Add a method to TopologyMap to get the number of online CPUs.
Considering that most schedulers do not handle CPU hotplugging, it can
be useful to expose this metric in addition to the total number of CPUs
available in the system.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Drop the global effective time-slice and use the more fine-grained
per-task time-slice to implement the dynamic time-slice capability.
This reduces the scheduler's overhead (dropping the volatile global
time-slice variable shared between user space and BPF) and provides
more fine-grained control over each task's time slice.
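A sketch of what dispatching with a per-task slice can look like on the
BPF side (names are illustrative):

```c
/* Sketch: use the slice the user-space scheduler assigned to this
 * task instead of reading a global variable shared with user space. */
static void dispatch_task(struct task_struct *p, u64 dsq_id, u64 slice_ns)
{
	/* Fall back to the sched_ext default when no per-task slice
	 * was set. */
	if (!slice_ns)
		slice_ns = SCX_SLICE_DFL;
	scx_bpf_dispatch(p, dsq_id, slice_ns, 0);
}
```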
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
If another scheduler is already running, the Rust schedulers based on
scx_utils report an error like the following, which can be a bit
difficult to understand:
Error: Failed to attach struct ops
Caused by:
bpf call "libbpf_rs::map::Map::attach_struct_ops::{{closure}}" returned NULL
Change the scx_ops_attach macro to check if another sched_ext scheduler
is running and in that case report a more explicit error.
With this applied:
$ sudo scx_rustland
Error: another sched_ext scheduler is already running
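One way to implement the check, sketched here against the state file
that sched_ext exposes in sysfs (the real macro lives in Rust in
scx_utils; error handling is simplified):

```c
#include <stdio.h>
#include <string.h>

/* Sketch: report whether a sched_ext scheduler is already enabled by
 * reading /sys/kernel/sched_ext/state ("disabled" means none). */
static int scx_scheduler_running(void)
{
	char state[32] = {0};
	FILE *f = fopen("/sys/kernel/sched_ext/state", "r");

	if (!f)
		return 0; /* sched_ext not available: nothing running */
	if (!fgets(state, sizeof(state), f))
		state[0] = '\0';
	fclose(f);
	return strncmp(state, "disabled", 8) != 0;
}
```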
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
If a higher-priority task exists when the ops.tick(), ops.select_cpu(),
or ops.enqueue() callbacks run, the currently running task yields its
CPU by shrinking its time slice to zero, so the higher-priority task can
run on the current CPU.
As low-cost, fine-grained preemption becomes available, default
parameters are adjusted as follows:
- Raise the bar for remote CPU preemption to avoid IPIs.
- Increase the maximum time slice.
- Gradually enforce the fair use of CPU time (i.e., ineligible duration).
Lastly, using CAS, we ensure that a remote CPU is preempted by only one
CPU. This removes unnecessary remote preemptions (and IPIs).
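A sketch of the CAS guard (the per-CPU flag and names are illustrative):

```c
/* Sketch: per-CPU flag ensuring a remote CPU is preempted by only one
 * CPU at a time; losers of the CAS skip the kick (and thus the IPI). */
struct cpu_ctx {
	u64 preempting;
};

static bool try_claim_remote_preemption(struct cpu_ctx *cpuc)
{
	/* Atomically move 0 -> 1; only one contender can win. */
	return __sync_val_compare_and_swap(&cpuc->preempting, 0, 1) == 0;
}
```

The winner would clear the flag again once the preemption has been
delivered.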
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Currently, if scx.service fails to launch due to issues, systemd will
try to restart the scheduler over and over. This results in a massive
flood to the kernel and does not bring the service back up.
Explanation of the changes:
The StartLimitBurst=2 and StartLimitIntervalSec=30 settings tell systemd that if the service unsuccessfully tries to restart itself twice within 30 seconds, it should enter a failed state and no longer try to restart. This ensures that if the service is truly broken, systemd won't continuously try to restart it.
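In unit-file terms the change amounts to the following (only the two
settings are from this change; section placement follows systemd
conventions):

```ini
[Unit]
# Stop restarting after 2 failed start attempts within 30 seconds.
StartLimitBurst=2
StartLimitIntervalSec=30
```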
Signed-off-by: Peter Jung <admin@ptr1337.dev>
Replace the BPF_MAP_TYPE_QUEUE with a BPF_MAP_TYPE_USER_RINGBUF to store
the tasks dispatched from the user-space scheduler to the BPF component.
This eliminates the need for bpf() syscalls, significantly reducing
the overhead of user-space-to-kernel communication and delivering a
notable performance boost in overall system throughput.
Based on experimental results, this change reduces the scheduling
overhead by approximately 30-35% when the system is overcommitted.
This improvement has the potential to make user-space schedulers based
on scx_rustland_core viable options for real production systems.
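On the BPF side this boils down to declaring the user ring buffer and
draining it from the dispatch path, roughly as sketched below (the map
size, names, and the empty callback body are illustrative):

```c
struct {
	__uint(type, BPF_MAP_TYPE_USER_RINGBUF);
	__uint(max_entries, 4096 * 4096);
} dispatched SEC(".maps");

/* Callback invoked for each task descriptor posted by user space. */
static long handle_dispatched_task(struct bpf_dynptr *dynptr, void *ctx)
{
	/* Read the descriptor from the dynptr and dispatch the task. */
	return 0;
}

/* Drain everything the user-space scheduler queued; no bpf() syscall
 * is involved on the producer side. */
static void drain_user_dispatches(void)
{
	bpf_user_ringbuf_drain(&dispatched, handle_dispatched_task, NULL, 0);
}
```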
Link: https://github.com/libbpf/libbpf-rs/pull/776
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_rusty's intention is to support hotplug by automatically restarting
whenever a hotplug event is encountered. Now that we're not trying to
consume a bogus DSQ in rusty_dispatch() on a newly hotplugged CPU,
let's just remove offline tracking. It's really just there as a sanity
check, but it triggers if an offline task is made runnable during a
hotplug event before the ops.hotplug() callback has been invoked.
Signed-off-by: David Vernet <void@manifault.com>
There's currently a slight issue on existing kernels on the hotplug
path wherein we can start to receive scheduling callbacks on a CPU
before that CPU has received hotplug events. For CPUs going online, this
can possibly confuse a scheduler because it may not be expecting
anything to ever happen on that CPU, and therefore may do things that
could cause the scheduler to crash. For example, without this patch in
scx_rusty, we try to consume from a bogus DSQ that doesn't exist, which
causes ext.c to boot out the scheduler.
Though this issue will soon be fixed in ext.c, let's explicitly avoid
dispatching from an onlining CPU in rusty so that we properly support
hotplug on older kernels as well.
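A sketch of the guard in the dispatch path; is_offline_cpu() stands in
for whatever online tracking the scheduler keeps and is purely
illustrative:

```c
void BPF_STRUCT_OPS(rusty_dispatch, s32 cpu, struct task_struct *prev)
{
	/* The CPU may be onlining and its hotplug callback may not
	 * have run yet: don't touch DSQs that may not exist. */
	if (is_offline_cpu(cpu))
		return;

	/* ... normal dispatch path ... */
}
```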
Signed-off-by: David Vernet <void@manifault.com>
We can hint to the compiler about paths we'll take in a scheduler. This
is a common pattern, so let's provide convenience macros.
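These are presumably the classic __builtin_expect wrappers:

```c
/* Branch-prediction hints: tell the compiler which path is expected. */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)
```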
Signed-off-by: David Vernet <void@manifault.com>
scx_lavd implemented 32 and 64 bit versions of a base-2 logarithm
function. This is now also used in rusty. To avoid code duplication,
let's pull it into a shared header.
Note that there is technically a functional change here as we remove the
always inline compiler directive. We instead assume that the compiler
will know best whether or not to inline the function.
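For reference, a base-2 logarithm of this shape looks roughly as
follows (a sketch; the shared header's exact code may differ):

```c
/* Base-2 logarithm of a u32, computed by successive range halving. */
static u32 u32_log2(u32 v)
{
	u32 r, shift;

	r = (v > 0xFFFF) << 4; v >>= r;
	shift = (v > 0xFF) << 3; v >>= shift; r |= shift;
	shift = (v > 0xF) << 2; v >>= shift; r |= shift;
	shift = (v > 0x3) << 1; v >>= shift; r |= shift;
	r |= (v >> 1);
	return r;
}

/* 64-bit variant built on the 32-bit one. */
static u32 u64_log2(u64 v)
{
	u32 hi = v >> 32;

	return hi ? u32_log2(hi) + 32 : u32_log2((u32)v);
}
```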
Signed-off-by: David Vernet <void@manifault.com>
In user space in rusty, the tuner detects system utilization, and uses
it to inform how we do load balancing, our greedy / direct cpumasks,
etc. Something else we could be doing, but currently aren't, is using
system utilization to inform how we dispatch tasks. We currently have a
static, unchanging slice length for the runtime of the program, but no
single slice length is efficient for all scenarios.
Giving a task a long slice length does have advantages, such as
decreasing the number of involuntary context switches, decreasing the
overhead of preemption by doing it less frequently, possibly getting
better cache locality due to a task running on a CPU for a longer amount
of time, etc. On the other hand, long slices can be problematic as well.
When a system is highly utilized, a CPU-hogging task running for too
long can harm interactive tasks. When the system is under-utilized,
those interactive tasks can likely find an idle, or under-utilized core
to run on. When the system is over-utilized, however, they're likely to
have to park in a runqueue.
Thus, in order to better accommodate such scenarios, this patch
implements a rudimentary slice scaling mechanism in scx_rusty. Rather
than having one global, static slice length, we instead have a dynamic,
global slice length that can be changed depending on system utilization.
When under-utilized, we go with a longer slice length, and vice versa
for when the system is over-utilized. With Terraria, this results in
roughly a 50% improvement in mean FPS when playing on an AMD Ryzen 9
7950X, while running Spotify, and stress-ng -c $((4 * $(nproc))).
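The core of the mechanism can be sketched as follows; the threshold and
both lengths are illustrative (scx_rusty exposes them as tunables):

```c
/* Sketch: choose the global slice length from system utilization. */
static u64 scale_slice_ns(u64 util_pct, u64 slice_ns_underutil,
			  u64 slice_ns_overutil)
{
	/* Under-utilized: longer slices are safe (interactive tasks
	 * can find idle cores) and cut context-switch overhead.
	 * Over-utilized: shorter slices keep CPU hogs from starving
	 * interactive tasks parked in runqueues. */
	return util_pct < 90 ? slice_ns_underutil : slice_ns_overutil;
}
```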
Signed-off-by: David Vernet <void@manifault.com>
scx_rusty doesn't do terribly well with interactive workloads. In order
to improve the situation, this patch adds support for basic deadline
scheduling in rusty. This approach doesn't incorporate eligibility, and
simply uses a crude avg_runtime tracking approach to scale a task's
deadline.
In a series of follow-on changes, we'll update the scheduler to use more
indicators for interactivity that affect both slice length, and deadline
calculation.
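A crude sketch of the idea; the weight scaling and all names are
illustrative:

```c
/* Sketch: derive a task's deadline from its tracked average runtime.
 * Tasks that tend to run briefly (interactive) get earlier deadlines;
 * CPU hogs are pushed further out. */
static u64 task_deadline(u64 vtime_now, u64 avg_runtime_ns, u64 weight)
{
	/* Inverse-weight scaling: higher-priority tasks get earlier
	 * deadlines for the same average runtime. */
	return vtime_now + avg_runtime_ns * 100 / weight;
}
```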
Signed-off-by: David Vernet <void@manifault.com>
To estimate the required CPU performance (e.g., frequency), we keep
track of 1) the utilization of each CPU and 2) the _performance
criticality_ of each task. The performance criticality of a task
denotes how critical it
is to CPU performance (frequency). Like the notion of latency
criticality, we use three factors: the task's average runtime, how
frequently it wakes other tasks (wake-up frequency), and how frequently
it is woken up (waken-up frequency). The longer a task's runtime and the
higher its two frequencies, the more performance-critical the task is,
because it is likely to be a bottleneck in the middle of a task chain.
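Sketched as a formula, reusing a u64_log2() helper like the one
sketched earlier to compress the raw values (illustrative, not lavd's
exact math):

```c
/* Sketch: a task is more performance-critical the longer it runs and
 * the more frequently it wakes others or is woken up, since such a
 * task likely sits in the middle of a task chain. */
static u64 calc_perf_criticality(u64 avg_runtime, u64 wake_freq,
				 u64 waken_freq)
{
	return u64_log2(avg_runtime + 1) + u64_log2(wake_freq + 1) +
	       u64_log2(waken_freq + 1);
}
```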
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The current Slack URL gives a link to the Slack workspace, but doesn't
include the invite. Update the URL to include the invite URL to make it
easier for people to join the Slack workspace.
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
Let's remove the extraneous copy-pasting and use a lookup helper like
we do for the task and pcpu contexts.
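The pattern is the usual BPF lookup wrapper, roughly as below (the map
and struct names are illustrative):

```c
/* Sketch: centralize the lookup, NULL check, and error report so call
 * sites don't repeat the boilerplate. */
static struct dom_ctx *lookup_dom_ctx(u32 dom_id)
{
	struct dom_ctx *domc;

	domc = bpf_map_lookup_elem(&dom_ctx_map, &dom_id);
	if (!domc)
		scx_bpf_error("Failed to look up dom[%u]", dom_id);
	return domc;
}
```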
Signed-off-by: David Vernet <void@manifault.com>
A LoadEntity gets the load to transfer between two entities by taking
the minimum of their imbalances and scaling its absolute value by
xfer_ratio.
In practice self.imbal(), the push node or domain, always has positive
imbalance and other.imbal(), the pull node or domain, always has
negative imbalance, so other.imbal() is always the minimum even though
the abs value of its imbalance might be greater than the abs value of
self.imbal(). It seems like the intent is to take the minimum of the
two absolute values instead to avoid overbalancing at the puller, so
make both values abs.
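The corrected computation, sketched below; the actual code is Rust, and
the names here are illustrative:

```c
#include <math.h>

/* Sketch: take the minimum of the two absolute imbalances, then scale
 * by xfer_ratio, so the puller is never overbalanced. */
static double load_to_transfer(double push_imbal, double pull_imbal,
			       double xfer_ratio)
{
	return fmin(fabs(push_imbal), fabs(pull_imbal)) * xfer_ratio;
}
```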
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Rusty's load balancer calculates load differently based on average
system CPU utilization in create_domain_hierarchy(). At >= 99.999%
utilization, load is the product of a task's weight and duty cycle;
below that, load is the same as the task's duty cycle.
populate_tasks_by_load(), however, always uses the product when
calculating per-task load so that in the sub-99.999% util case, load is
inflated, typically by a factor of 100 for a normal-priority task.
Tasks look too heavy to migrate as a result because a single task would
transfer more load than the domain imbalance allows, leading to
significant imbalance in some cases.
Make populate_tasks_by_load() calculate task load the same way as
domain load, checking lb_apply_weight.
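Sketched, the task-side calculation mirrors the domain-side one; the
actual code is Rust, and the names here are illustrative:

```c
/* Sketch: weight the duty cycle only when the load balancer applies
 * weights (i.e., the system is >= 99.999% utilized); otherwise task
 * load is just the duty cycle, matching domain load. */
static double task_load(double duty_cycle, double weight,
			int lb_apply_weight)
{
	return lb_apply_weight ? duty_cycle * weight : duty_cycle;
}
```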
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>