Commit Graph

899 Commits

Author SHA1 Message Date
Tejun Heo
0d4f6829a8
Merge pull request #284 from vax-r/Fix_typo
Fix typo
2024-05-15 06:47:54 -10:00
Jose Fernandez
1beb4ed205
scripts: Add script to measure runqlat for a process
Add a `scripts` folder to hold scripts that are useful for the project,
and add a bpftrace program to measure the runqueue latency for a process.

bpftrace's built-in runqlat.bt program instruments runqueue latency for
all processes and does not provide a way to filter by PID. For sched_ext
performance work, we are interested in the runqueue latency of a
specific process we are trying to optimize, such as a video game.
Therefore, we need to create a custom bpftrace program to achieve this.

`process_runqlat.bt` instruments runqueue latency for a specified PID,
including all threads spawned by that process.

USAGE: sudo ./scripts/process_runqlat.bt <PID>

The program output will include:
- Stats by thread (count, avg latency, total latency)
- A histogram of all latency measurements
- Aggregated total stats (count, avg latency, total latency)

Example output when targeting Terraria's main process:

$ sudo ./scripts/process_runqlat.bt 652644

Attaching 5 probes...
Instrumenting runqueue latency for PID 652644. Hit Ctrl-C to end.

@tasks[AsyncActionDisp, 652676]: count 24, average 2, total 67
@tasks[Finalizer, 652646]: count 24, average 6, total 151
@tasks[Main Thread, 652668]: count 1432, average 8, total 12561
@tasks[Terraria.b:gl0, 652672]: count 1421, average 9, total 13120
@tasks[Main Thread, 652667]: count 1037, average 9, total 10091
@tasks[FACT Thread, 652679]: count 1033, average 10, total 11005
@tasks[Terraria:gdrv0, 652671]: count 3511, average 10, total 37047
@tasks[SDLAudioP3, 652678]: count 982, average 10, total 10210
@tasks[Main Thread, 652666]: count 104, average 10, total 1088
@tasks[SDLAudioP2, 652675]: count 982, average 10, total 10461
@tasks[Main Thread, 652644]: count 5840, average 11, total 69177
@tasks[Main Thread, 694917]: count 44, average 14, total 659
@tasks[Terraria.b:cs0, 652650]: count 3288, average 14, total 47300
@tasks[Thread Pool Wor, 695239]: count 1001, average 35, total 35873
@tasks[Thread Pool Wor, 696059]: count 986, average 35, total 34848
@tasks[Thread Pool Wor, 695458]: count 985, average 36, total 35836
@tasks[Thread Pool Wor, 696058]: count 982, average 37, total 36518

@usec_hist:
[0]                   16 |                                                    |
[1]                  360 |@                                                   |
[2, 4)              4650 |@@@@@@@@@@@@@@                                      |
[4, 8)             14517 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@       |
[8, 16)            16593 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[16, 32)            5056 |@@@@@@@@@@@@@@@                                     |
[32, 64)            6795 |@@@@@@@@@@@@@@@@@@@@@                               |
[64, 128)           7106 |@@@@@@@@@@@@@@@@@@@@@@                              |
[128, 256)          1926 |@@@@@@                                              |
[256, 512)           130 |                                                    |
[512, 1K)              2 |                                                    |

@usec_total_stats: count 57152, average 29, total 1699656

Signed-off-by: Jose Fernandez <josef@netflix.com>
2024-05-15 09:18:29 -06:00
vax-r
f293995b59 Fix typo
Fix the usage of "scheduler" in the comment of main.bpf.c; it should be
the verb "schedule".
2024-05-15 23:02:35 +08:00
Changwoo Min
971ea6629e
Merge pull request #283 from multics69/scx-lavd-misc
scx_lavd: add non-functional misc updates
2024-05-15 17:18:41 +09:00
Changwoo Min
08e7e23cbe scx_lavd: print out the current limitation of scx_lavd for users
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-15 12:04:09 +09:00
Changwoo Min
a4560c7f7f scx_lavd: add comments describing the idea of preemption
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-15 12:04:03 +09:00
Tejun Heo
5ad79952b7
Merge pull request #282 from sirlucjan/pacman-hooks-systemd
Add pacman hooks for systemd
2024-05-13 05:36:42 -10:00
Piotr Gorski
1fbf4f4f9b
Add pacman hooks for systemd
Signed-off-by: Piotr Gorski <lucjan.lucjanov@gmail.com>
2024-05-13 15:52:06 +02:00
Andrea Righi
fa1c146cad
Merge pull request #281 from sched-ext/rustland-fix-offline-cpus
scx_rustland: properly support offline CPUs
2024-05-12 09:30:44 +02:00
Andrea Righi
2a7b1cc3c4 scx_rustland: properly support offline CPUs
During the initialization phase the scheduler needs to be aware of all
the available CPUs in the system (including those that are offline), in
order to create a proper per-CPU DSQ for each of them.

Otherwise, if some cores are offline, we may get errors like the
following:

  swapper/7[0] triggered exit kind 1024:
    runtime error (invalid DSQ ID 0x0000000000000007)

  Backtrace:
    scx_bpf_consume+0xaa/0xd0
    bpf_prog_42ff1b9d1ac5b184_rustland_dispatch+0x12b/0x187

Change the code to configure the BpfScheduler object with the total
number of CPUs available in the system, preventing this failure.

This fixes #280.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-12 08:42:46 +02:00
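The idea in the commit above can be illustrated with a minimal BPF sketch (not the actual scx_rustland code; nr_cpu_ids is an assumed global set from user space at load time, and the scx common headers are assumed):

const volatile u64 nr_cpu_ids;	/* total CPUs, online or offline; set at load time */

s32 BPF_STRUCT_OPS_SLEEPABLE(sketch_init)
{
	s32 cpu;

	/* Create one DSQ per possible CPU, including offline ones, so that
	 * dispatching to any CPU never references a DSQ that was never
	 * created (the "invalid DSQ ID" error above). */
	bpf_for(cpu, 0, nr_cpu_ids) {
		s32 err = scx_bpf_create_dsq(cpu, -1);
		if (err)
			return err;
	}
	return 0;
}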
Andrea Righi
5de8ff5bd8
Merge pull request #279 from sched-ext/rustland-max-cpu-util
scx_rustland: maximize CPU utilization
2024-05-11 17:08:27 +02:00
Andrea Righi
a31bcc6847 scx_rustland: maximize CPU utilization
Always dispatch at least one task, even if all the CPUs are busy.

This small overcommitment makes it possible to maximize CPU utilization
without introducing bubbles in the scheduling and without regressing
responsiveness.

Before this change the average CPU utilization of a `stress-ng -c 8` run
on an 8-core system is around 95%. With this change applied, the CPU
utilization goes up to a consistent 100%.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-11 16:23:12 +02:00
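A minimal sketch of the overcommit rule described in the commit above (the function and variable names are illustrative, not the actual scx_rustland code):

/* Dispatch one task per idle CPU, but never fewer than one, so a task is
 * always queued behind the running ones and CPUs never sit idle while
 * work is still pending. */
static u64 nr_tasks_to_dispatch(u64 nr_idle_cpus)
{
	return nr_idle_cpus > 0 ? nr_idle_cpus : 1;
}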
Andrea Righi
5f9ce3bba6
Merge pull request #272 from sched-ext/rustland-reduce-scheduling-overhead
rustland: reduce scheduling overhead
2024-05-11 10:17:01 +02:00
Andrea Righi
209c454149 scx_rustland_core: fix update_idle description
The comment that describes rustland_update_idle() still refers to an
old implementation detail. Update its description for better clarity.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-11 07:37:33 +02:00
Andrea Righi
311b7f861c scx_rustland_core: refine built-in CPU idle selection logic
Change the BPF CPU selection logic as following:

 - if the previously used CPU is idle, keep using it
 - if the task is not coming from a wait state, try to stick as much as
   possible to the same CPU (for better cache usage)
 - if the task is waking up from a wait state, rely on the sched_ext
   built-in idle selection logic

This logic can be completely disabled when the full user-space mode is
enabled. In this case tasks will always be assigned to the previously
used CPU and the user-space scheduler should take care of distributing
them among the available CPUs.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-11 07:37:31 +02:00
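A minimal C sketch of the selection policy described in the commit above, assuming the scx common headers; the helper name and the exact wake-flag handling are assumptions, not the actual scx_rustland_core code:

static s32 pick_cpu(struct task_struct *p, s32 prev_cpu, u64 wake_flags)
{
	bool is_idle = false;

	/* If the previously used CPU is idle, keep using it. */
	if (scx_bpf_test_and_clear_cpu_idle(prev_cpu))
		return prev_cpu;

	/* Not coming from a wait state: stick to the same CPU for better
	 * cache usage. */
	if (!(wake_flags & SCX_WAKE_TTWU))
		return prev_cpu;

	/* Waking up from a wait state: rely on the sched_ext built-in idle
	 * selection logic. */
	return scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
}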
Tejun Heo
291e2cc996
Merge pull request #278 from sched-ext/topology_numa
topology: Support CONFIG_NUMA=n in Topology crate
2024-05-10 12:47:56 -10:00
David Vernet
de512b6bfd
Merge pull request #277 from ptr1337/cachyos-debug
INSTALL.md: Add info about debug kernel on arch based
2024-05-10 16:52:22 -05:00
David Vernet
904a89117c
topology: Support CONFIG_NUMA=n in Topology crate
Some users are running with NUMA disabled, which makes sense given that it's
useless in a lot of contexts. Let's make the Topology crate assume a default
node with ID 0 in such cases.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-10 16:46:15 -05:00
Peter Jung
c10fcf47f7
INSTALL.md: Add info about debug kernel on arch based
Signed-off-by: Peter Jung <admin@ptr1337.dev>
2024-05-10 21:26:13 +02:00
Andrea Righi
63feba9c2b topology: TopologyMap: add nr_cpus_online()
Add a method to TopologyMap to get the number of online CPUs.

Considering that most of the schedulers do not handle CPU hotplugging,
it can be useful to expose this metric as well, in addition to the
number of available CPUs in the system.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-10 17:24:20 +02:00
Andrea Righi
f052493005 scx_rustland_core: implement effective time slice on a per-task basis
Drop the global effective time-slice and use the more fine-grained
per-task time-slice to implement the dynamic time-slice capability.

This reduces the scheduler's overhead (dropping the volatile global
time-slice variable shared between user-space and BPF) and provides
finer-grained control over each task's time slice.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-10 17:24:20 +02:00
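A minimal sketch of what a per-task time slice looks like on the BPF side (the struct and field names are assumptions, not the actual scx_rustland_core layout):

struct dispatched_task {
	s32 pid;
	s32 cpu;
	u64 slice_ns;	/* per-task time slice decided by the user-space scheduler */
};

static void dispatch_one(struct task_struct *p, const struct dispatched_task *task)
{
	/* Use the task's own slice instead of a global one; fall back to the
	 * default slice when the user-space scheduler didn't set one. */
	u64 slice = task->slice_ns ?: SCX_SLICE_DFL;

	scx_bpf_dispatch(p, SCX_DSQ_LOCAL, slice, 0);
}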
Andrea Righi
382ef72999
Merge pull request #276 from sched-ext/scx-utils-sched-running-error
scx_utils: report an explicit error when another scheduler is running
2024-05-10 16:17:36 +02:00
Andrea Righi
887812197c scx_utils: report an explicit error when another scheduler is running
If another scheduler is already running, the Rust schedulers based on
scx_utils report an error like the following, which can be a bit
difficult to understand:

 Error: Failed to attach struct ops

 Caused by:
     bpf call "libbpf_rs::map::Map::attach_struct_ops::{{closure}}" returned NULL

Change the scx_ops_attach macro to check if another sched_ext scheduler
is running and in that case report a more explicit error.

With this applied:

 $ sudo scx_rustland
 Error: another sched_ext scheduler is already running

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-10 11:20:19 +02:00
Changwoo Min
01faf9408b
Merge pull request #274 from multics69/scx-lavd-preemption02
scx_lavd: support yield-based preemption
2024-05-10 11:32:29 +09:00
Changwoo Min
446de3ef3c scx_lavd: minor style changes
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-10 11:07:32 +09:00
Tejun Heo
6ae1031acd
Merge pull request #273 from ptr1337/systemd-service-restart
systemd-service: Don't restart always
2024-05-09 06:50:59 -10:00
Changwoo Min
7fcc6e4576 scx_lavd: support yield-based preemption
If there is a higher priority task when running the ops.tick(),
ops.select_cpu(), and ops.enqueue() callbacks, the currently running
task yields its CPU by shrinking its time slice to zero, so that a
higher priority task can run on the current CPU.

As low-cost, fine-grained preemption becomes available, default
parameters are adjusted as follows:
  - Raise the bar for remote CPU preemption to avoid IPIs.
  - Increase the maximum time slice.
  - Gradually enforce the fair use of CPU time (i.e., ineligible duration)

Lastly, using CAS, we ensure that a remote CPU is preempted by only one
CPU. This removes unnecessary remote preemptions (and IPIs).

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-10 00:54:41 +09:00
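A minimal sketch of the yield-based preemption described in the commit above, assuming the scx common headers; the per-CPU context struct and its victim_pid field are assumptions used only for illustration:

struct cpu_ctx {
	u32 victim_pid;		/* pid being preempted on this CPU, 0 if none */
};

static bool try_yield_current(struct task_struct *victim, struct cpu_ctx *cctx)
{
	/* CAS: only one CPU wins the right to preempt this victim, avoiding
	 * duplicate remote preemptions (and IPIs). */
	if (!__sync_bool_compare_and_swap(&cctx->victim_pid, 0, victim->pid))
		return false;

	/* Shrink the victim's remaining time slice to zero so it yields the
	 * CPU and a higher priority task can run next. */
	WRITE_ONCE(victim->scx.slice, 0);
	return true;
}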
Peter Jung
cb8928260e
systemd-service: Don't restart always
Currently, if scx.service fails to launch due to issues, systemd keeps trying to start the scheduler over and over.
This floods the kernel with a massive number of restart attempts and does not bring the service up again.

Explanation of the changes:
The StartLimitBurst=2 and StartLimitIntervalSec=30 settings tell systemd that if the service unsuccessfully tries to restart itself twice within 30 seconds, it should enter a failed state and no longer try to restart. This ensures that if the service is truly broken, systemd won't continuously try to restart it.

Signed-off-by: Peter Jung <admin@ptr1337.dev>
2024-05-09 14:54:07 +02:00
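For reference, the relevant part of the unit file would look roughly like this (only the two StartLimit* settings are named in the commit; the Restart= policy shown here is an assumption):

[Unit]
StartLimitIntervalSec=30
StartLimitBurst=2

[Service]
Restart=on-failure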
Andrea Righi
7bc62d8db8
Merge pull request #270 from sched-ext/rustland-user-ringbuffer
scx_rustland_core: use a BPF_MAP_TYPE_USER_RINGBUF to dispatch tasks
2024-05-09 06:50:19 +02:00
Andrea Righi
eb2b0b0fa3
Merge pull request #271 from vax-r/Fix_typo
Fix typo
2024-05-09 06:49:59 +02:00
vax-r
093a08356e Fix typo
Fix "expermentation" to "experimentation".
2024-05-09 12:10:55 +08:00
Andrea Righi
5da4602ad7 scx_rustland_core: use a BPF_MAP_TYPE_USER_RINGBUF to dispatch tasks
Replace the BPF_MAP_TYPE_QUEUE with a BPF_MAP_TYPE_USER_RINGBUF to store
the tasks dispatched from the user-space scheduler to the BPF component.

This eliminates the need for bpf() syscalls, significantly reducing
the overhead of the user-space->kernel communication and delivering a
notable boost in overall system throughput.

Based on experimental results, this change reduces the scheduling
overhead by approximately 30-35% when the system is overcommitted.

This improvement has the potential to make user-space schedulers based
on scx_rustland_core viable options for real production systems.

Link: https://github.com/libbpf/libbpf-rs/pull/776
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-08 22:16:53 +02:00
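A minimal sketch of the BPF side of such a change (the map name, sample struct, and callback are assumptions, not the actual scx_rustland_core code): the user-space scheduler writes dispatched tasks into a BPF_MAP_TYPE_USER_RINGBUF and the BPF component drains it with bpf_user_ringbuf_drain(), so dispatching no longer requires a bpf() syscall per task.

struct dispatched_task {
	s32 pid;
	s32 cpu;
	u64 slice_ns;
};

struct {
	__uint(type, BPF_MAP_TYPE_USER_RINGBUF);
	__uint(max_entries, 128 * 1024);	/* bytes; power of two, page aligned */
} dispatched SEC(".maps");

static long handle_dispatched(struct bpf_dynptr *dynptr, void *ctx)
{
	struct dispatched_task task;

	/* Copy one sample out of the ring buffer. */
	if (bpf_dynptr_read(&task, sizeof(task), dynptr, 0, 0))
		return 1;	/* stop draining on error */

	/* ...dispatch the task to the target CPU's DSQ here... */
	return 0;
}

static void drain_user_dispatches(void)
{
	/* Consume everything the user-space scheduler has queued so far. */
	bpf_user_ringbuf_drain(&dispatched, handle_dispatched, NULL, 0);
}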
David Vernet
07b521b3d0
Merge pull request #266 from sched-ext/rusty_hot_plug
rusty: Support CPU hotplug onlining
2024-05-04 21:56:51 -05:00
David Vernet
b9b9875aa7
rusty: Remove task offline tracking
scx_rusty's intention is to support hotplug by automatically restarting
whenever a hotplug event is encountered. Now that we're not trying to
consume a bogus DSQ in rusty_dispatch() on a newly hotplugged CPU,
let's just remove offline tracking. It's really just there as a sanity
check, but it triggers if an offline task is made runnable during a
hotplug event before the ops.hotplug() callback has been invoked.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-04 21:33:55 -05:00
David Vernet
6f1dc6067a
rusty: Check for offline CPU in rusty_dispatch()
There's currently a slight issue on existing kernels on the hotplug
path wherein we can start to receive scheduling callbacks on a CPU
before that CPU has received hotplug events. For CPUs going online, this
can possibly confuse a scheduler because it may not be expecting
anything to ever happen on that CPU, and therefore may do things that
could cause the scheduler to crash. For example, without this patch in
scx_rusty, we try to consume from a bogus DSQ that doesn't exist, which
causes ext.c to boot out the scheduler.

Though this issue will soon be fixed in ext.c, let's explicitly avoid
dispatching from an onlining CPU in rusty so that we properly support
hotplug on older kernels as well.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-04 21:33:54 -05:00
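A minimal sketch of this kind of guard, assuming one DSQ per CPU indexed by CPU id and an all_cpumask tracking the CPUs the scheduler has set up (both assumptions; scx_rusty actually uses per-domain DSQs):

struct bpf_cpumask *all_cpumask;	/* CPUs the scheduler has initialized */

void BPF_STRUCT_OPS(sketch_dispatch, s32 cpu, struct task_struct *prev)
{
	/* Ignore dispatch callbacks on a CPU we haven't set up yet (e.g. a
	 * CPU that is onlining but hasn't gone through ops.hotplug()), so we
	 * never consume from a DSQ that was never created. */
	if (!all_cpumask || !bpf_cpumask_test_cpu(cpu, cast_mask(all_cpumask)))
		return;

	scx_bpf_consume(cpu);
}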
David Vernet
0d6b00238f
common: Add likely/unlikely macros
We can hint to the compiler about paths we'll take in a scheduler. This
is a common pattern, so let's provide convenience macros.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-04 21:33:53 -05:00
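The macros in question are the usual __builtin_expect() wrappers; a sketch of how they typically look and are used:

#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Example: hint that the error path is rarely taken. */
static int do_something(int err)
{
	if (unlikely(err))
		return err;
	return 0;
}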
David Vernet
4b16f5117a
rusty: Fix alignment
Found a misaligned conditional in main.rs. Fix it.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-04 21:33:19 -05:00
Changwoo Min
01e5a46371
Merge pull request #263 from multics69/scx_lavd-power01
scx_lavd: support CPU frequency scaling
2024-05-05 10:16:00 +09:00
Tejun Heo
b0a759b40d
Merge pull request #268 from ptr1337/readme-nix
README: Add missing link to Nix Install instructions
2024-05-04 04:34:14 -10:00
Peter Jung
1a19ddabd4
README: Add missing link to Nix Install instructions
Signed-off-by: Peter Jung <admin@ptr1337.dev>
2024-05-04 11:16:52 +02:00
Tejun Heo
43e7752998
Merge pull request #267 from ptr1337/nix-install
INSTALL: Add instructions for Nix
2024-05-03 21:58:53 -10:00
Peter Jung
5551b99ac0
INSTALL: Add instructions for Nix
Signed-off-by: Peter Jung <admin@ptr1337.dev>
2024-05-04 09:55:35 +02:00
Changwoo Min
a24e1d7adf scx_lavd: more comments about CPU frequency scaling
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-04 10:41:13 +09:00
David Vernet
acef503373
Merge pull request #264 from sched-ext/log2_helpers
2024-05-03 18:34:48 -05:00
Tejun Heo
257d48d65a
Merge pull request #265 from sched-ext/ci-enable-kvm
ci: enable kvm support in the github workflow
2024-05-03 11:50:57 -10:00
Andrea Righi
0d26219fad ci: enable kvm support in the github workflow
Enable kvm acceleration and qemu microvm to speed up CI tests inside
virtme-ng.

Also adjust the regex to catch potential errors, excluding a false
positive triggered by the new configuration.

Link: https://github.blog/changelog/2023-02-23-hardware-accelerated-android-virtualization-on-actions-windows-and-linux-larger-hosted-runners/
Link: https://github.blog/2024-01-17-github-hosted-runners-double-the-power-for-open-source/
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-03 23:20:10 +02:00
David Vernet
9bb8e9a548
common: Pull bpf_log2l() into helper function header
scx_lavd implemented 32-bit and 64-bit versions of a base-2 logarithm
function. This is now also used in rusty. To avoid code duplication,
let's pull it into a shared header.

Note that there is technically a functional change here as we remove the
always inline compiler directive. We instead assume that the compiler
will know best whether or not to inline the function.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:50:24 -05:00
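The shared helpers follow the classic branchless bit-shift formulation of floor(log2); a sketch of the 32-bit and 64-bit variants (the names here are illustrative, the shared header may differ):

static u32 sketch_log2_u32(u32 v)
{
	u32 r, shift;

	r = (v > 0xffff) << 4; v >>= r;
	shift = (v > 0xff) << 3; v >>= shift; r |= shift;
	shift = (v > 0xf) << 2; v >>= shift; r |= shift;
	shift = (v > 0x3) << 1; v >>= shift; r |= shift;
	r |= (v >> 1);
	return r;
}

static u32 sketch_log2_u64(u64 v)
{
	u32 hi = v >> 32;

	/* Use the upper 32 bits if any are set, otherwise the lower ones. */
	return hi ? sketch_log2_u32(hi) + 32 : sketch_log2_u32((u32)v);
}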
David Vernet
efb97de785
Merge pull request #261 from sched-ext/rusty_interactive
Make scx_rusty interactive
2024-05-03 14:42:15 -05:00
David Vernet
2403f60631
rusty: Dynamically scale slice according to system util
In user space in rusty, the tuner detects system utilization, and uses
it to inform how we do load balancing, our greedy / direct cpumasks,
etc. Something else we could be doing, but currently aren't, is using
system utilization to inform how we dispatch tasks. We currently have a
static, unchanging slice length for the entire runtime of the program,
but no single slice length is a good fit for all scenarios.

Giving a task a long slice length does have advantages, such as
decreasing the number of involuntary context switches, decreasing the
overhead of preemption by doing it less frequently, possibly getting
better cache locality due to a task running on a CPU for a longer amount
of time, etc. On the other hand, long slices can be problematic as well.
When a system is highly utilized, a CPU-hogging task running for too
long can harm interactive tasks. When the system is under-utilized,
those interactive tasks can likely find an idle or under-utilized core
to run on. When the system is over-utilized, however, they're likely to
have to park in a runqueue.

Thus, in order to better accommodate such scenarios, this patch
implements a rudimentary slice scaling mechanism in scx_rusty. Rather
than having one global, static slice length, we instead have a dynamic,
global slice length that can be changed depending on system utilization.
When over-utilized, we go with a shorter slice length, and a longer one
when the system is under-utilized. With Terraria, this results in
roughly a 50% improvement in mean FPS when playing on an AMD Ryzen 9
7950X while also running Spotify and stress-ng -c $((4 * $(nproc))).

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:17:58 -05:00
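The mechanism boils down to picking the global slice from the measured utilization; a minimal sketch (the threshold and slice values are assumptions, not scx_rusty's actual defaults):

static u64 scale_slice_ns(u64 util_permille)	/* system utilization, 0..1000 */
{
	const u64 slice_underutil_ns = 20 * 1000 * 1000ULL;	/* long slice */
	const u64 slice_overutil_ns  =  1 * 1000 * 1000ULL;	/* short slice */

	/* Over-utilized system: shorter slices protect interactive tasks.
	 * Under-utilized system: longer slices reduce preemption overhead. */
	return util_permille >= 900 ? slice_overutil_ns : slice_underutil_ns;
}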
David Vernet
76618989f8
rusty: Implement basic eligible deadline scheduling in rusty
scx_rusty doesn't do terribly well with interactive workloads. In order
to improve the situation, this patch adds support for basic deadline
scheduling in rusty. This approach doesn't incorporate eligibility, and
simply uses a crude avg_runtime tracking approach to scale a task's
deadline.

In a series of follow-on changes, we'll update the scheduler to use more
indicators of interactivity that affect both slice length and deadline
calculation.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:17:56 -05:00
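A minimal sketch of the kind of deadline computation described in the commit above (the task context fields and the scaling are assumptions, not scx_rusty's exact math): a task with a shorter average runtime gets an earlier deadline and is therefore picked sooner from a deadline-ordered DSQ.

struct task_ctx {
	u64 avg_runtime;	/* running average of recent runtimes, in ns */
	u64 weight;		/* load weight from the nice level (100 = default) */
};

static u64 task_deadline(const struct task_ctx *taskc, u64 vtime_now)
{
	/* Scale the average runtime by the inverse of the weight, then push
	 * the deadline out by that amount: short-running (interactive) tasks
	 * land closer to vtime_now and get dispatched earlier. */
	return vtime_now + taskc->avg_runtime * 100 / taskc->weight;
}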