Although newer kernels switch all tasks to SCHED_EXT by default, some
users might still be running the scheduler on older kernels.
Therefore, ensure all tasks are moved to the SCHED_EXT class by calling
__COMPAT_scx_bpf_switch_all() during init, so that scx_simple can still
operate on these older kernels as well.
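A minimal sketch of the fix in the init callback (the DSQ creation shown
here is only illustrative of scx_simple's usual init shape):

s32 BPF_STRUCT_OPS_SLEEPABLE(simple_init)
{
	/*
	 * A no-op on kernels that already switch all tasks to SCHED_EXT
	 * by default; on older kernels it moves every task to the class.
	 */
	__COMPAT_scx_bpf_switch_all();

	return scx_bpf_create_dsq(SHARED_DSQ, -1);
}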
Fixes: cf66e58 ("Sync from kernel (670bdab6073)")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Add the '-Wno-maybe-uninitialized' option, since gcc emits a
maybe-uninitialized warning (treated as an error) in release builds.
The following is the error output without the option.
❯ meson setup build -Dbuildtype=release --prefix ~
❯ meson compile -C build --jobs=$(nproc)
INFO: autodetecting backend as ninja
INFO: calculating backend command to run: /usr/bin/ninja -C /home/changwoo/ws-multics69/dev/scx-github/build -j 16
ninja: Entering directory `/home/changwoo/ws-multics69/dev/scx-github/build'
[3/40] Generating libbpf with a custom command
FAILED: cc_cflags_probe.c.__PHONY__
/home/changwoo/ws-multics69/dev/scx-github/meson-scripts/build_libbpf /usr/bin/jq /usr/bin/make /home/changwoo/ws-multics69/dev/scx-github/build/libbpf/src 16
In function ‘elf_close’,
inlined from ‘elf_close’ at elf.c:53:6,
inlined from ‘elf_find_func_offset_from_file’ at elf.c:384:2:
elf.c:57:9: error: ‘elf_fd.elf’ may be used uninitialized [-Werror=maybe-uninitialized]
57 | elf_end(elf_fd->elf);
| ^~~~~~~~~~~~~~~~~~~~
elf.c: In function ‘elf_find_func_offset_from_file’:
elf.c:377:23: note: ‘elf_fd.elf’ was declared here
377 | struct elf_fd elf_fd;
| ^~~~~~
In function ‘elf_close’,
inlined from ‘elf_close’ at elf.c:53:6,
inlined from ‘elf_find_func_offset_from_file’ at elf.c:384:2:
elf.c:58:9: error: ‘elf_fd.fd’ may be used uninitialized [-Werror=maybe-uninitialized]
58 | close(elf_fd->fd);
| ^~~~~~~~~~~~~~~~~
elf.c: In function ‘elf_find_func_offset_from_file’:
elf.c:377:23: note: ‘elf_fd.fd’ was declared here
377 | struct elf_fd elf_fd;
| ^~~~~~
At top level:
cc1: note: unrecognized command-line option ‘-Wno-unknown-warning-option’ may have been intended to silence earlier diagnostics
cc1: all warnings being treated as errors
make: *** [Makefile:133: staticobjs/elf.o] Error 1
make: *** Waiting for unfinished jobs....
libbpf.c: In function ‘bpf_map__init_kern_struct_ops’:
libbpf.c:1107:18: error: ‘mod_btf’ may be used uninitialized [-Werror=maybe-uninitialized]
1107 | kern_btf = mod_btf ? mod_btf->btf : obj->btf_vmlinux;
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
libbpf.c:1092:28: note: ‘mod_btf’ was declared here
1092 | struct module_btf *mod_btf;
| ^~~~~~~
In function ‘find_struct_ops_kern_types’,
inlined from ‘bpf_map__init_kern_struct_ops’ at libbpf.c:1100:8:
libbpf.c:980:21: error: ‘btf’ may be used uninitialized [-Werror=maybe-uninitialized]
980 | kern_type = btf__type_by_id(btf, kern_type_id);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
libbpf.c: In function ‘bpf_map__init_kern_struct_ops’:
libbpf.c:965:21: note: ‘btf’ was declared here
965 | struct btf *btf;
| ^~~
At top level:
cc1: note: unrecognized command-line option ‘-Wno-unknown-warning-option’ may have been intended to silence earlier diagnostics
cc1: all warnings being treated as errors
make: *** [Makefile:133: staticobjs/libbpf.o] Error 1
[4/40] Generating bpftool_target with a custom command
ninja: build stopped: subcommand failed.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The dynamic slice boost is not used anymore in the code, so there is no
reason to keep evaluating it.
Moreover, using it instead of the static slice boost seems to make
things worse, so let's just get rid of it.
Fixes: 0b3c399 ("scx_rustland: introduce dynamic slice boost")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_rustland has a function called get_cpu_owner() in BPF which currently
has no callers. There's nothing wrong with the function, but it triggers
an unused-function warning. Let's just annotate it with __maybe_unused to
tell the compiler that this is intentional.
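A sketch of the annotation (the body below is a placeholder, not the
scheduler's actual logic; u32/s32 come from vmlinux.h in the BPF sources):

/* Typically defined as: */
#define __maybe_unused __attribute__((__unused__))

/*
 * No callers at the moment; the attribute tells the compiler that the
 * lack of uses is intentional, silencing the unused-function warning.
 */
static u32 __maybe_unused get_cpu_owner(s32 cpu)
{
	return 0; /* placeholder body */
}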
Signed-off-by: David Vernet <void@manifault.com>
When building with warnings enabled, a few obvious bugs are pointed out:
- We're not correctly calculating waker frequency
- We're not taking the min of avg_run_raw compared to max latency
- We're missing an element from sched_prio_to_weight (see the full
  kernel table below)
Fix these. With these changes, interactivity is seemingly improved. We
go from ~12 sec/turn to ~11 sec/turn in the Civ 6 AI benchmark with a
4 x nproc CPU-hogging workload in the background. It's clear,
however, that we really need preemption.
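For reference, the kernel's sched_prio_to_weight table (from
kernel/sched/core.c) has exactly 40 entries, one per nice level from -20
to 19, so a missing element shifts the weight of every following level:

static const int sched_prio_to_weight[40] = {
 /* -20 */     88761,     71755,     56483,     46273,     36291,
 /* -15 */     29154,     23254,     18705,     14949,     11916,
 /* -10 */      9548,      7620,      6100,      4904,      3906,
 /*  -5 */      3121,      2501,      1991,      1586,      1277,
 /*   0 */      1024,       820,       655,       526,       423,
 /*   5 */       335,       272,       215,       172,       137,
 /*  10 */       110,        87,        70,        56,        45,
 /*  15 */        36,        29,        23,        18,        15,
};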
Signed-off-by: David Vernet <void@manifault.com>
meson has a builtin -werror option that can be passed to meson setup. To
allow users to specify whether they want to treat warnings as errors,
let's add this flag to meson.build.
We elect to make the flag the default behavior for now, as warnings in
BPF scheds are not surfaced unless there is an actual compiler error.
Warnings-as-errors can be turned off by specifying `-D werror=false`
when invoking `meson setup`.
Signed-off-by: David Vernet <void@manifault.com>
C SCX_OPS_ATTACH() and rust scx_ops_attach() macros were not calling
.attach() and were only attaching the struct_ops. This meant that all
non-struct_ops BPF programs contained in the skels were never attached,
which breaks e.g. scx_layered.
Let's fix it by adding an .attach() invocation to the attach macros.
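A hypothetical sketch of the shape of the fix (not the actual compat
header): attach the skeleton's other programs first, then attach the
struct_ops map as before.

#define SCX_OPS_ATTACH(skel, ops) ({					\
	struct bpf_link *__link;					\
	/* previously missing: attaches the non-struct_ops programs */	\
	SCX_BUG_ON(bpf_object__attach_skeleton((skel)->skeleton),	\
		   "Failed to attach skel");				\
	__link = bpf_map__attach_struct_ops((skel)->maps.ops);		\
	SCX_BUG_ON(!__link, "Failed to attach struct_ops");		\
	__link;								\
})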
Originally the implementation of the function rsigmoid_u64 would perform
a subtraction even when the value of "v" equals the value of "max", in
which case the result is certainly zero.
We can avoid this redundant subtraction by changing the condition from
">" to ">=", since when the values of "v" and "max" are equal we can
return 0 without performing any subtraction.
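A sketch of the change (the final expression is a placeholder for the
actual reverse-sigmoid computation):

static u64 rsigmoid_u64(u64 v, u64 max)
{
	/*
	 * With ">=" instead of ">", v == max returns 0 directly instead
	 * of falling through to a subtraction whose result is already
	 * known to be zero.
	 */
	if (v >= max)
		return 0;

	return max - v; /* placeholder for the real curve */
}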
Now that the scx_ops_open!() macro is available, let's use it in scx_rusty to
cover all cases where hotplug can happen.
Signed-off-by: David Vernet <void@manifault.com>
In order to make it easy for schedulers to use the hotplug_seq feature that's
available in recent kernels, we'll need to provide a macro wrapper so that we
can support the feature with backwards compatibility. This adds scx_ops_open!()
to abstract that. Any scheduler that uses scx_ops_open!() will be exited if a
hotplug event happens between opening the skeleton and loading it.
Signed-off-by: David Vernet <void@manifault.com>
Now that the kernel exports the SCX_ECODE_ACT_RESTART exit code, we can
remove the custom hotplug logic from scx_rusty, and instead rely on the
built-in logic from the kernel. There's still a corner case that we're not
honoring: when a hotplug event happens on the init path. A future change will
address this as well.
Signed-off-by: David Vernet <void@manifault.com>
Use tabs instead of spaces for indentation in process_runqlat.bt. Also
update comments and fix a typo.
Signed-off-by: Jose Fernandez <josef@netflix.com>
Introduce a low-power mode to force the scheduler to operate in a very
non-work conserving way, causing a significant saving in terms of power
consumption, while still providing a good level of responsiveness in the
system.
This option can be enabled in scx_rustland via the --low_power / -l
option.
The idea is to not immediately re-kick a CPU when it enters an idle
state, but do that only if there are no other tasks running in the
system.
In this way, latency-critical tasks can still be dispatched immediately
on the other active CPUs, while CPU-bound tasks will be forced to spend
more time waiting to be scheduled, basically enforcing a special CPU
throttling mechanism that affects only the tasks that are not latency
critical.
The consequence is a reduction in the overall system throughput, but
also a significant reduction of power consumption, that can be useful
for mobile / battery-powered devices.
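A sketch of the idea in the BPF idle callback (low_power and nr_running
are illustrative names, not the scheduler's actual variables):

void BPF_STRUCT_OPS(rustland_update_idle, s32 cpu, bool idle)
{
	if (!idle)
		return;
	/*
	 * In low-power mode don't immediately re-kick an idling CPU:
	 * wake it up only when nothing else is running, so CPU-bound
	 * tasks wait longer while latency-critical tasks keep being
	 * dispatched on the CPUs that are already active.
	 */
	if (low_power && nr_running > 0)
		return;
	scx_bpf_kick_cpu(cpu, 0);
}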
Test case (using `scx_rustland -l`):
- play a video game (Terraria) while recompiling the kernel
- measure game performance (fps) and core power consumption (W)
- compare the result of normal mode vs low-power mode
Result:

               | Game performance | Power consumption |
---------------+------------------+-------------------+
normal mode    | 60 fps           | 6W                |
low-power mode | 60 fps           | 3W                |
As we can see from the results, the reduction in power consumption is
quite significant (50%), while the responsiveness of the game (fps)
remains the same. This means battery life can potentially be doubled
without significantly affecting system responsiveness.
The overall throughput of the system is, of course, affected in a
negative way (kernel build is approximately 50% slower during this
test), but the goal here is to save power while still maintaining a good
level of responsiveness in the system.
For this reason the low-power mode should be considered only in
emergency conditions, for example when the system is close to completely
running out of power, or simply to extend the battery life of a mobile device
without compromising its responsiveness.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Schedulers and the kernel can include an exit code when exiting a scheduler.
There are some built-in codes that can be specified: SCX_ECODE_RSN_HOTPLUG
and SCX_ECODE_ACT_RESTART. Some schedulers may want to check the exit
code against these values, so let's export them from user_exit_info.rs.
We use lazy_static so that we can read the enum values for the
currently-running kernel.
Signed-off-by: David Vernet <void@manifault.com>
Add a `scripts` folder to hold scripts that are useful for the project,
and add a bpftrace program to measure the runqueue latency for a process.
bpftrace's built-in runqlat.bt program instruments runqueue latency for
all processes and does not provide a way to filter by PID. For sched_ext
performance work, we are interested in the runqueue latency of a
specific process we are trying to optimize, such as a video game.
Therefore, we need to create a custom bpftrace program to achieve this.
`process_runqlat.bt` instruments runqueue latency for a specified PID,
including all threads spawned by that process.
USAGE: sudo ./scripts/process_runqlat.bt <PID>
The program output will include:
- Stats by thread (count, avg latency, total latency)
- A histogram of all latency measurements
- Aggregated total stats (count, avg latency, total latency)
Example output when targeting Terraria's main process:
$ sudo ./scripts/process_runqlat.bt 652644
Attaching 5 probes...
Instrumenting runqueue latency for PID 652644. Hit Ctrl-C to end.
@tasks[AsyncActionDisp, 652676]: count 24, average 2, total 67
@tasks[Finalizer, 652646]: count 24, average 6, total 151
@tasks[Main Thread, 652668]: count 1432, average 8, total 12561
@tasks[Terraria.b:gl0, 652672]: count 1421, average 9, total 13120
@tasks[Main Thread, 652667]: count 1037, average 9, total 10091
@tasks[FACT Thread, 652679]: count 1033, average 10, total 11005
@tasks[Terraria:gdrv0, 652671]: count 3511, average 10, total 37047
@tasks[SDLAudioP3, 652678]: count 982, average 10, total 10210
@tasks[Main Thread, 652666]: count 104, average 10, total 1088
@tasks[SDLAudioP2, 652675]: count 982, average 10, total 10461
@tasks[Main Thread, 652644]: count 5840, average 11, total 69177
@tasks[Main Thread, 694917]: count 44, average 14, total 659
@tasks[Terraria.b:cs0, 652650]: count 3288, average 14, total 47300
@tasks[Thread Pool Wor, 695239]: count 1001, average 35, total 35873
@tasks[Thread Pool Wor, 696059]: count 986, average 35, total 34848
@tasks[Thread Pool Wor, 695458]: count 985, average 36, total 35836
@tasks[Thread Pool Wor, 696058]: count 982, average 37, total 36518
@usec_hist:
[0] 16 | |
[1] 360 |@ |
[2, 4) 4650 |@@@@@@@@@@@@@@ |
[4, 8) 14517 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[8, 16) 16593 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[16, 32) 5056 |@@@@@@@@@@@@@@@ |
[32, 64) 6795 |@@@@@@@@@@@@@@@@@@@@@ |
[64, 128) 7106 |@@@@@@@@@@@@@@@@@@@@@@ |
[128, 256) 1926 |@@@@@@ |
[256, 512) 130 | |
[512, 1K) 2 | |
@usec_total_stats: count 57152, average 29, total 1699656
Signed-off-by: Jose Fernandez <josef@netflix.com>
During the initialization phase the scheduler needs to be aware of all
the available CPUs in the system (including those that are offline), in
order to create a proper per-CPU DSQ for all of them.
Otherwise, if some cores are offline, we may get errors like the
following:
swapper/7[0] triggered exit kind 1024:
runtime error (invalid DSQ ID 0x0000000000000007)
Backtrace:
scx_bpf_consume+0xaa/0xd0
bpf_prog_42ff1b9d1ac5b184_rustland_dispatch+0x12b/0x187
Change the code to configure the BpfScheduler object with the total
number of CPUs available in the system and prevent such failures.
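On the BPF side, init can then create one DSQ per possible CPU, roughly
like this sketch (num_possible_cpus is assumed to be set by user-space):

s32 BPF_STRUCT_OPS_SLEEPABLE(rustland_init)
{
	int err;
	s32 cpu;

	/* Include offline CPUs: any of them may come online later. */
	bpf_for(cpu, 0, num_possible_cpus) {
		err = scx_bpf_create_dsq(cpu, -1);
		if (err)
			return err;
	}
	return 0;
}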
This fixes #280.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Always dispatch at least one task, even if all the CPUs are busy.
This small overcommitment makes it possible to maximize CPU utilization
without introducing bubbles in the scheduling and without introducing
regressions in terms of responsiveness.
Before this change, the average CPU utilization of `stress-ng -c 8` on
an 8-core system is around 95%. With this change applied, the CPU
utilization goes up to a consistent 100%.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The comment that describes rustland_update_idle() still refers to an old
implementation detail. Update its description for better
clarity.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Change the BPF CPU selection logic as follows:
- if the previously used CPU is idle, keep using it
- if the task is not coming from a wait state, try to stick as much as
  possible to the same CPU (for better cache usage)
- if the task is waking up from a wait state, rely on the sched_ext
  built-in idle selection logic
This logic can be completely disabled when the full user-space mode is
enabled. In this case tasks will always be assigned to the previously
used CPU and the user-space scheduler should take care of distributing
them among the available CPUs.
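A sketch of the resulting logic (is_waking() is an illustrative stand-in
for the actual wakeup check; the two kfuncs are kernel-provided):

s32 BPF_STRUCT_OPS(rustland_select_cpu, struct task_struct *p,
		   s32 prev_cpu, u64 wake_flags)
{
	bool is_idle = false;

	/* If the previously used CPU is idle, keep using it. */
	if (scx_bpf_test_and_clear_cpu_idle(prev_cpu))
		return prev_cpu;

	/* Not coming from a wait state: stick to the same CPU. */
	if (!is_waking(p))
		return prev_cpu;

	/* Waking from a wait state: use the built-in idle selection. */
	return scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
}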
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Some users are running with NUMA disabled, which makes sense given that it's
useless in a lot of contexts. Let's make the Topology crate assume a default
node with ID 0 in such cases.
Signed-off-by: David Vernet <void@manifault.com>