Add a `scripts` folder to hold scripts that are useful for the project,
and add a bpftrace program to measure the runqueue latency for a process.
bpftrace's built-in runqlat.bt program instruments runqueue latency for
all processes and does not provide a way to filter by PID. For sched_ext
performance work, we are interested in the runqueue latency of a
specific process we are trying to optimize, such as a video game.
Therefore, we need to create a custom bpftrace program to achieve this.
`process_runqlat.bt` instruments runqueue latency for a specified PID,
including all threads spawned by that process.
USAGE: sudo ./scripts/process_runqlat.bt <PID>
The program output will include:
- Stats by thread (count, avg latency, total latency)
- A histogram of all latency measurements
- Aggregated total stats (count, avg latency, total latency)
Example output when targeting Terraria's main process:
$ sudo ./scripts/process_runqlat.bt 652644
Attaching 5 probes...
Instrumenting runqueue latency for PID 652644. Hit Ctrl-C to end.
@tasks[AsyncActionDisp, 652676]: count 24, average 2, total 67
@tasks[Finalizer, 652646]: count 24, average 6, total 151
@tasks[Main Thread, 652668]: count 1432, average 8, total 12561
@tasks[Terraria.b:gl0, 652672]: count 1421, average 9, total 13120
@tasks[Main Thread, 652667]: count 1037, average 9, total 10091
@tasks[FACT Thread, 652679]: count 1033, average 10, total 11005
@tasks[Terraria:gdrv0, 652671]: count 3511, average 10, total 37047
@tasks[SDLAudioP3, 652678]: count 982, average 10, total 10210
@tasks[Main Thread, 652666]: count 104, average 10, total 1088
@tasks[SDLAudioP2, 652675]: count 982, average 10, total 10461
@tasks[Main Thread, 652644]: count 5840, average 11, total 69177
@tasks[Main Thread, 694917]: count 44, average 14, total 659
@tasks[Terraria.b:cs0, 652650]: count 3288, average 14, total 47300
@tasks[Thread Pool Wor, 695239]: count 1001, average 35, total 35873
@tasks[Thread Pool Wor, 696059]: count 986, average 35, total 34848
@tasks[Thread Pool Wor, 695458]: count 985, average 36, total 35836
@tasks[Thread Pool Wor, 696058]: count 982, average 37, total 36518
@usec_hist:
[0] 16 | |
[1] 360 |@ |
[2, 4) 4650 |@@@@@@@@@@@@@@ |
[4, 8) 14517 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[8, 16) 16593 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[16, 32) 5056 |@@@@@@@@@@@@@@@ |
[32, 64) 6795 |@@@@@@@@@@@@@@@@@@@@@ |
[64, 128) 7106 |@@@@@@@@@@@@@@@@@@@@@@ |
[128, 256) 1926 |@@@@@@ |
[256, 512) 130 | |
[512, 1K) 2 | |
@usec_total_stats: count 57152, average 29, total 1699656
Signed-off-by: Jose Fernandez <josef@netflix.com>
During the initialization phase the scheduler needs to be aware of all
the CPUs available in the system (including those that are offline), in
order to create a proper per-CPU DSQ for each of them.
Otherwise, if some cores are offline, we may get errors like the
following:
swapper/7[0] triggered exit kind 1024:
runtime error (invalid DSQ ID 0x0000000000000007)
Backtrace:
scx_bpf_consume+0xaa/0xd0
bpf_prog_42ff1b9d1ac5b184_rustland_dispatch+0x12b/0x187
Change the code to configure the BpfScheduler object with the total
number of CPUs available in the system and prevent such a failure.
This fixes #280.
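A minimal sketch of the idea, assuming the number of possible CPUs is
passed to the BPF side as read-only data (names are illustrative, not
the actual scx_rustland code):

  const volatile u32 nr_possible_cpus = 1;

  s32 BPF_STRUCT_OPS_SLEEPABLE(rustland_init)
  {
      s32 cpu, err;

      bpf_for(cpu, 0, nr_possible_cpus) {
          /* create a per-CPU DSQ for every possible CPU, even offline ones */
          err = scx_bpf_create_dsq(cpu, -1);
          if (err)
              return err;
      }
      return 0;
  }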
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Always dispatch at least one task, even if all the CPUs are busy.
This small overcommitment allows us to maximize CPU utilization without
introducing bubbles in the scheduling and without regressing
responsiveness.
Before this change, the average CPU utilization of a `stress-ng -c 8` on
an 8-core system is around 95%. With this change applied, the CPU
utilization goes up to a consistent 100%.
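A sketch of the overcommit rule in the dispatch path (names are made up,
not the actual scx_rustland code):

  static u32 nr_tasks_to_dispatch(u32 nr_idle_cpus)
  {
      /* always dispatch at least one task, even if no CPU is idle */
      return nr_idle_cpus > 0 ? nr_idle_cpus : 1;
  }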
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The comment that describes rustland_update_idle() still reports an old
implementation detail. Update its description for better clarity.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Change the BPF CPU selection logic as follows:
- if the previously used CPU is idle, keep using it
- if the task is not coming from a wait state, try to stick as much as
possible to the same CPU (for better cache usage)
- if the task is waking up from a wait state, rely on the sched_ext
  built-in idle selection logic
This logic can be completely disabled when the full user-space mode is
enabled. In this case tasks will always be assigned to the previously
used CPU and the user-space scheduler should take care of distributing
them among the available CPUs.
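A minimal sketch of the selection logic described above (the kfunc names
are real sched_ext APIs, but the structure is illustrative, not the
actual scx_rustland code):

  s32 BPF_STRUCT_OPS(rustland_select_cpu, struct task_struct *p,
                     s32 prev_cpu, u64 wake_flags)
  {
      bool is_idle = false;

      /* 1) previously used CPU is idle: keep using it */
      if (scx_bpf_test_and_clear_cpu_idle(prev_cpu))
          return prev_cpu;

      /* 2) not coming from a wait state: stick to the same CPU */
      if (!(wake_flags & SCX_WAKE_TTWU))
          return prev_cpu;

      /* 3) waking from a wait state: use the built-in idle selection */
      return scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
  }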
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Some users are running with NUMA disabled, which makes sense given that it's
useless in a lot of contexts. Let's make the Topology crate assume a default
node with ID 0 in such cases.
Signed-off-by: David Vernet <void@manifault.com>
Add a method to TopologyMap to get the number of online CPUs.
Considering that most schedulers do not handle CPU hotplugging, it can
be useful to also expose this metric in addition to the number of CPUs
available in the system.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Drop the global effective time-slice and use the more fine-grained
per-task time-slice to implement the dynamic time-slice capability.
This reduces the scheduler's overhead (dropping the global time-slice
volatile variable shared between user-space and BPF) and provides more
fine-grained control over the per-task time slice.
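A minimal sketch of the BPF side, assuming the user-space scheduler
attaches a time slice to each dispatched task (names are illustrative):

  static void dispatch_task(struct task_struct *p, u64 dsq_id, u64 slice_ns)
  {
      /* use the task's own slice; fall back to the default if none was set */
      scx_bpf_dispatch(p, dsq_id, slice_ns ?: SCX_SLICE_DFL, 0);
  }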
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
If another scheduler is already running, the Rust schedulers based on
scx_utils report an error like the following, which can be a bit
difficult to understand:
Error: Failed to attach struct ops
Caused by:
bpf call "libbpf_rs::map::Map::attach_struct_ops::{{closure}}" returned NULL
Change the scx_ops_attach macro to check if another sched_ext scheduler
is running and in that case report a more explicit error.
With this applied:
$ sudo scx_rustland
Error: another sched_ext scheduler is already running
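A sketch of one way to perform such a check, assuming the
/sys/kernel/sched_ext/state sysfs interface is available (not
necessarily what the scx_ops_attach macro actually does):

  #include <stdio.h>
  #include <string.h>

  static int scx_scheduler_running(void)
  {
      char state[32] = {};
      FILE *f = fopen("/sys/kernel/sched_ext/state", "r");

      if (!f)
          return 0;
      if (fgets(state, sizeof(state), f))
          state[strcspn(state, "\n")] = '\0';
      fclose(f);

      /* anything other than "disabled" means a scheduler is active */
      return strcmp(state, "disabled") != 0;
  }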
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
If there is a higher priority task when running the ops.tick(),
ops.select_cpu(), and ops.enqueue() callbacks, the currently running
task yields its CPU by shrinking its time slice to zero, so that the
higher priority task can run on the current CPU.
As low-cost, fine-grained preemption becomes available, default
parameters are adjusted as follows:
- Raise the bar for remote CPU preemption to avoid IPIs.
- Increase the maximum time slice.
- Gradually enforce the fair use of CPU time (i.e., ineligible duration).
Lastly, using CAS, we ensure that a remote CPU is preempted by only one
CPU. This removes unnecessary remote preemptions (and IPIs).
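A sketch of the CAS idea (field and function names are made up, not the
actual scx_lavd code):

  struct cpu_ctx {
      u64 preempt_seq;    /* illustrative field */
  };

  static bool try_claim_victim(struct cpu_ctx *victim, s32 victim_cpu)
  {
      u64 seq = victim->preempt_seq;

      /* only the CPU that successfully bumps the sequence sends the kick */
      if (__sync_val_compare_and_swap(&victim->preempt_seq, seq, seq + 1) != seq)
          return false;

      scx_bpf_kick_cpu(victim_cpu, SCX_KICK_PREEMPT);
      return true;
  }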
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Currently, if scx.service fails to launch due to issues, systemd will
keep trying to start the scheduler over and over.
This results in a massive flood of load attempts against the kernel and
does not bring the service up again.
Explanation of the changes:
The StartLimitBurst=2 and StartLimitIntervalSec=30 settings tell systemd that if the service unsuccessfully tries to restart itself twice within 30 seconds, it should enter a failed state and no longer try to restart. This ensures that if the service is truly broken, systemd won't continuously try to restart it.
Signed-off-by: Peter Jung <admin@ptr1337.dev>
Replace the BPF_MAP_TYPE_QUEUE with a BPF_MAP_TYPE_USER_RINGBUF to store
the tasks dispatched from the user-space scheduler to the BPF component.
This eliminates the need for bpf() syscalls, significantly reducing the
overhead of user-space to kernel communication and delivering a notable
boost in overall system throughput.
Based on experimental results, this change reduces the scheduling
overhead by approximately 30-35% when the system is overcommitted.
This improvement has the potential to make user-space schedulers based
on scx_rustland_core viable options for real production systems.
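A minimal sketch of the BPF side of such a user ring buffer (the struct
and map names are illustrative, not the actual scx_rustland_core code):

  struct dispatched_task_ctx {
      s32 pid;
      s32 cpu;
      u64 slice_ns;
  };

  struct {
      __uint(type, BPF_MAP_TYPE_USER_RINGBUF);
      __uint(max_entries, 4 * 1024 * 1024);
  } dispatched SEC(".maps");

  static long handle_dispatched_task(struct bpf_dynptr *dynptr, void *ctx)
  {
      struct dispatched_task_ctx task;

      if (bpf_dynptr_read(&task, sizeof(task), dynptr, 0, 0))
          return 1;    /* stop draining */

      /* ... dispatch the task to the proper DSQ here ... */
      return 0;        /* continue draining */
  }

  /* called from the dispatch path to consume user-space dispatches */
  static void drain_user_dispatches(void)
  {
      bpf_user_ringbuf_drain(&dispatched, handle_dispatched_task, NULL, 0);
  }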
Link: https://github.com/libbpf/libbpf-rs/pull/776
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_rusty's intention is to support hotplug by automatically restarting
whenever a hotplug event is encountered. Now that we're no longer trying
to consume from a bogus DSQ in rusty_dispatch() on a newly hotplugged
CPU, let's just remove offline tracking. It's really just there as a sanity
check, but it triggers if an offline task is made runnable during a
hotplug event before the ops.hotplug() callback has been invoked.
Signed-off-by: David Vernet <void@manifault.com>
There's currently a slight issue on existing kernels on the hotplug
path wherein we can start to receive scheduling callbacks on a CPU
before that CPU has received hotplug events. For CPUs going online, this
can possibly confuse a scheduler because it may not be expecting
anything to ever happen on that CPU, and therefore may do things that
could cause the scheduler to crash. For example, without this patch in
scx_rusty, we try to consume from a bogus DSQ that doesn't exist, which
causes ext.c to boot out the scheduler.
Though this issue will soon be fixed in ext.c, let's explicitly avoid
dispatching from an onlining CPU in rusty so that we properly support
hotplug on older kernels as well.
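A minimal sketch of the guard (not the actual rusty code; the
online-tracking array is illustrative):

  #define MAX_CPUS 1024

  /* hypothetical online flags, maintained from the hotplug callbacks */
  volatile bool cpu_onlined[MAX_CPUS];

  void BPF_STRUCT_OPS(rusty_cpu_online, s32 cpu)
  {
      if (cpu >= 0 && cpu < MAX_CPUS)
          cpu_onlined[cpu] = true;
  }

  void BPF_STRUCT_OPS(rusty_dispatch, s32 cpu, struct task_struct *prev)
  {
      /* don't consume from DSQs on a CPU that hasn't come online yet */
      if (cpu < 0 || cpu >= MAX_CPUS || !cpu_onlined[cpu])
          return;

      /* ... normal dispatch path ... */
  }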
Signed-off-by: David Vernet <void@manifault.com>
We can hint to the compiler about paths we'll take in a scheduler. This
is a common pattern, so let's provide convenience macros.
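Such macros typically boil down to __builtin_expect, along these lines:

  #define likely(x)   __builtin_expect(!!(x), 1)
  #define unlikely(x) __builtin_expect(!!(x), 0)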
Signed-off-by: David Vernet <void@manifault.com>
scx_lavd implemented 32- and 64-bit versions of a base-2 logarithm
function. These are now also used in rusty. To avoid code duplication,
let's pull them into a shared header.
Note that there is technically a functional change here as we remove the
always inline compiler directive. We instead assume that the compiler
will know best whether or not to inline the function.
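For reference, a sketch of what such shared helpers typically look like
(names are illustrative):

  static u32 u32_log2(u32 v)
  {
      u32 r, shift;

      r = (v > 0xFFFF) << 4; v >>= r;
      shift = (v > 0xFF) << 3; v >>= shift; r |= shift;
      shift = (v > 0xF) << 2;  v >>= shift; r |= shift;
      shift = (v > 0x3) << 1;  v >>= shift; r |= shift;
      r |= (v >> 1);
      return r;
  }

  static u32 u64_log2(u64 v)
  {
      u32 hi = v >> 32;

      /* fall back to the 32-bit version for the low word */
      return hi ? u32_log2(hi) + 32 : u32_log2(v);
  }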
Signed-off-by: David Vernet <void@manifault.com>
In user space in rusty, the tuner detects system utilization and uses
it to inform how we do load balancing, our greedy / direct cpumasks,
etc. Something else we could be doing, but currently aren't, is using
system utilization to inform how we dispatch tasks. We currently have a
static, unchanging slice length for the runtime of the program, but a
single static value cannot be efficient in all scenarios.
Giving a task a long slice length does have advantages, such as
decreasing the number of involuntary context switches, decreasing the
overhead of preemption by doing it less frequently, possibly getting
better cache locality due to a task running on a CPU for a longer amount
of time, etc. On the other hand, long slices can be problematic as well.
When a system is highly utilized, a CPU-hogging task running for too
long can harm interactive tasks. When the system is under-utilized,
those interactive tasks can likely find an idle, or under-utilized core
to run on. When the system is over-utilized, however, they're likely to
have to park in a runqueue.
Thus, in order to better accommodate such scenarios, this patch
implements a rudimentary slice scaling mechanism in scx_rusty. Rather
than having one global, static slice length, we instead have a dynamic,
global slice length that can be changed depending on system utilization.
When under-utilized, we go with a longer slice length, and vice versa
for when the system is over-utilized. With Terraria, this results in
roughly a 50% improvement in mean FPS when playing on an AMD Ryzen 9
7950X, while running Spotify, and stress-ng -c $((4 * $(nproc))).
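A sketch of the scaling rule, with made-up threshold and slice values:

  static unsigned long pick_slice_us(double system_util)
  {
      /* long slice when there is idle capacity, short slice to protect
       * interactivity when the system is over-utilized
       */
      const unsigned long slice_underutil_us = 20000;
      const unsigned long slice_overutil_us = 1000;

      return system_util < 0.9 ? slice_underutil_us : slice_overutil_us;
  }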
Signed-off-by: David Vernet <void@manifault.com>
scx_rusty doesn't do terribly well with interactive workloads. In order
to improve the situation, this patch adds support for basic deadline
scheduling in rusty. This approach doesn't incorporate eligibility, and
simply uses a crude avg_runtime tracking approach to scale a task's
deadline.
In a series of follow-on changes, we'll update the scheduler to use more
indicators for interactivity that affect both slice length and deadline
calculation.
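An illustrative sketch of how average runtime could feed into a deadline
(not the actual rusty math):

  static u64 task_deadline(u64 vtime_now, u64 avg_runtime_ns, u64 weight)
  {
      /* tasks with shorter average runtimes (and higher weights) get
       * earlier deadlines than long-running CPU hogs
       */
      return vtime_now + avg_runtime_ns * 100 / weight;
  }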
Signed-off-by: David Vernet <void@manifault.com>