Commit Graph

2026 Commits

Author SHA1 Message Date
Changwoo Min
9acf950b75 scx_lavd: change how to use the context information for latency criticality
Previously, contextual information, such as sync wakeup and kernel
task, was incorporated into the final latency criticality value ad
hoc by adding a constant. Instead, let's make everything proportional
to run time and to the waker and wakee frequencies by scaling the run
time and the frequencies up or down.
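
A minimal C sketch of the proportional idea; the helper name and the
scaling factors below are illustrative assumptions, not the actual
scx_lavd code:

```c
typedef unsigned long long u64;

/*
 * Instead of adding a flat constant for sync wakeups or kernel tasks,
 * scale the inputs (run time, waker/wakee frequencies) that feed the
 * latency-criticality calculation, so the boost stays proportional.
 */
static u64 scale_by_context(u64 value, int is_sync_wakeup, int is_kernel_task)
{
	if (is_sync_wakeup)
		value = value * 3 / 2;	/* hypothetical 1.5x boost */
	if (is_kernel_task)
		value = value * 5 / 4;	/* hypothetical 1.25x boost */
	return value;
}
```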

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-23 21:32:18 +09:00
Changwoo Min
6fb57643fb scx_lavd: remove the time restriction in preemption
Previously, preemption was allowed only when a task was early in its
time slice, enforced via LAVD_PREEMPT_KICK_MARGIN and
LAVD_PREEMPT_TICK_MARGIN. This is no longer necessary because lock
holder preemption avoids harmful preemptions. So remove
LAVD_PREEMPT_KICK_MARGIN and LAVD_PREEMPT_TICK_MARGIN and unleash
preemption.
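
For context, a sketch of the kind of time-slice guard being removed;
only the constant name comes from the message, and its value and the
check's exact shape are assumptions:

```c
typedef unsigned long long u64;

/* value is a guess; the real constant is removed by this commit */
#define LAVD_PREEMPT_KICK_MARGIN	(1ULL * 1000 * 1000)

/*
 * Old-style guard: preemption (kicking another CPU) was attempted only
 * while the task was still early in its time slice. The commit drops
 * this condition entirely.
 */
static int can_preempt(u64 slice_consumed_ns)
{
	return slice_consumed_ns <= LAVD_PREEMPT_KICK_MARGIN;
}
```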

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-22 17:48:56 +09:00
Changwoo Min
07ed821511 scx_lavd: incorporate task's weight to latency criticality
When calculating a task's latency criticality, incorporate the
task's weight into runtime, wake_freq, and wait_freq more
systematically. It looks nicer and works better under heavy load.
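
A hedged sketch of what folding the weight in uniformly could look
like; the helper and the nice-0 baseline of 100 are assumptions:

```c
typedef unsigned long long u64;

/* Apply the task's weight to each input the same way, rather than
 * special-casing individual terms (100 is the nice-0 baseline). */
static u64 apply_weight(u64 value, u64 weight)
{
	return value * weight / 100;
}

/*
 * runtime   = apply_weight(raw_runtime,   weight);
 * wake_freq = apply_weight(raw_wake_freq, weight);
 * wait_freq = apply_weight(raw_wait_freq, weight);
 */
```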

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-22 17:48:56 +09:00
Changwoo Min
47dd1b9582 scx_lavd: respect a chosen cpu even if it is not idle
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-22 17:48:56 +09:00
Changwoo Min
257a3db376 scx_lavd: add ops.cpu_release()
When a CPU is released to serve a higher-priority scheduler class,
requeue the tasks in its local DSQ to the global DSQ.
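
In sched_ext, ops.cpu_release() handlers commonly do this with
scx_bpf_reenqueue_local(); a sketch in that style, assuming the scx
common BPF headers (this is not the literal scx_lavd body):

```c
void BPF_STRUCT_OPS(lavd_cpu_release, s32 cpu,
		    struct scx_cpu_release_args *args)
{
	/*
	 * Push the tasks sitting on this CPU's local DSQ back out so
	 * they can be re-enqueued and run elsewhere while this CPU
	 * serves the higher-priority scheduler class.
	 */
	scx_bpf_reenqueue_local();
}
```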

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-22 17:48:56 +09:00
Changwoo Min
89749ecad7 scx_lavd: fix/work around a verifier error
Without this, the BPF verifier emits the following error with *some*
versions of vmlinux.h, so +1 was added to work around the problem.

---------------
; bpf_for(j, 0, 64) { @ main.bpf.c:1926
509: (bf) r1 = r8                     ; R1_w=fp-32 R8_w=fp-32 refs=66,2035
510: (b4) w2 = 0                      ; R2_w=0 refs=66,2035
511: (b4) w3 = 64                     ; R3_w=64 refs=66,2035
512: (85) call bpf_iter_num_new#104189        ; R0=scalar() fp-32=iter_num(ref_id=2048,state=active,depth=0) refs=66,2035,2048
513: (bf) r1 = r8                     ; R1=fp-32 R8=fp-32 refs=66,2035,2048
514: (85) call bpf_iter_num_next#104191 515: R0_w=rdonly_mem(id=2049,ref_obj_id=2048,sz=4) R6=scalar(id=2047,smin=smin32=0,smax=umax=smax32=umax32=7,var_off=(0x0; 0x7)) R7=scalar() R8=fp-32 R9=map_value(map=bpf_bpf.bss,ks=4,vs=4584,off=384,smin=smin32=0,smax=umax=smax32=umax32=3968,var_off=(0x0; 0xf80)) R10=fp0 fp-16=iter_num(ref_id=66,state=active,depth=1) fp-24=iter_num(ref_id=2035,state=active,depth=1) fp-32=iter_num(ref_id=2048,state=active,depth=1) fp-80=scalar(id=1) fp-88=map_value(map=.data.LAVD,ks=4,vs=1320,off=40,smin=smin32=0,smax=umax=smax32=umax32=1240,var_off=(0x0; 0x7f8)) fp-96=????0 fp-112=rcu_ptr_bpf_cpumask() fp-120=rcu_ptr_bpf_cpumask() fp-128=rcu_ptr_bpf_cpumask() fp-136=rcu_ptr_bpf_cpumask() refs=66,2035,2048
; bpf_for(j, 0, 64) { @ main.bpf.c:1926
515: (15) if r0 == 0x0 goto pc+49     ; R0_w=rdonly_mem(id=2049,ref_obj_id=2048,sz=4) refs=66,2035,2048
516: (64) w6 <<= 6                    ; R6=scalar(smin=smin32=0,smax=umax=smax32=umax32=448,var_off=(0x0; 0x1c0)) refs=66,2035,2048
517: (61) r8 = *(u32 *)(r0 +0)        ; R0=rdonly_mem(id=2049,ref_obj_id=2048,sz=4) R8_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff)) refs=66,2035,2048
518: (26) if w8 > 0x3f goto pc+46     ; R8_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=63,var_off=(0x0; 0x3f)) refs=66,2035,2048
; if (cpumask & 0x1LLU << j) { @ main.bpf.c:1927
519: (bf) r1 = r7                     ; R1_w=scalar(id=2053) R7=scalar(id=2053) refs=66,2035,2048
520: (7f) r1 >>= r8                   ; R1_w=scalar() R8_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=63,var_off=(0x0; 0x3f)) refs=66,2035,2048
521: (57) r1 &= 1                     ; R1_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=1,var_off=(0x0; 0x1)) refs=66,2035,2048
522: (15) if r1 == 0x0 goto pc+38     ; R1_w=1 refs=66,2035,2048
; cpu = (i * 64) + j; @ main.bpf.c:1928
523: (4c) w8 |= w6                    ; R6=scalar(smin=smin32=0,smax=umax=smax32=umax32=448,var_off=(0x0; 0x1c0)) R8_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=511,var_off=(0x0; 0x1ff)) refs=66,2035,2048
; bpf_cpumask_set_cpu(cpu, cd_cpumask); @ main.bpf.c:1929
524: (bc) w1 = w8                     ; R1_w=scalar(id=2054,smin=smin32=0,smax=umax=smax32=umax32=511,var_off=(0x0; 0x1ff)) R8_w=scalar(id=2054,smin=smin32=0,smax=umax=smax32=umax32=511,var_off=(0x0; 0x1ff)) refs=66,2035,2048
525: (79) r2 = *(u64 *)(r10 -88)      ; R2_w=map_value(map=.data.LAVD,ks=4,vs=1320,off=40,smin=smin32=0,smax=umax=smax32=umax32=1240,var_off=(0x0; 0x7f8)) R10=fp0 fp-88=map_value(map=.data.LAVD,ks=4,vs=1320,off=40,smin=smin32=0,smax=umax=smax32=umax32=1240,var_off=(0x0; 0x7f8)) refs=66,2035,2048
526: (85) call bpf_cpumask_set_cpu#93595
invalid access to map value, value_size=1320 off=1280 size=48
R2 max value is outside of the allowed memory range
processed 24200 insns (limit 1000000) max_states_per_insn 19 total_states 961 peak_states 789 mark_read 44
---------------
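
The verifier lost track of the index's upper bound, so the computed
map-value offset could exceed value_size. A plain-C sketch of the
bounding pattern such a workaround relies on; the array name and sizes
are hypothetical:

```c
typedef unsigned long long u64;

#define NR_CPU_WORDS	8

struct cpu_data {
	u64 cpumask[NR_CPU_WORDS];
};

static u64 read_word(struct cpu_data *cd, unsigned int i)
{
	/*
	 * An explicit bound (with +1 slack in the actual workaround)
	 * lets the verifier prove the access stays within the map
	 * value even when its range tracking is imprecise.
	 */
	if (i >= NR_CPU_WORDS)
		return 0;
	return cd->cpumask[i];
}
```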

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-22 17:19:37 +09:00
Changwoo Min
d5b8aafa1a
Merge pull request #822 from multics69/lavd-tuning-v3
scx_lavd: misc performance tuning
2024-10-22 09:57:58 +09:00
Tejun Heo
6ea15f9f9f
Merge pull request #819 from minosfuture/vmlinux_per_arch
Use per-arch vmlinux.h v2
2024-10-21 19:36:52 +00:00
likewhatevs
303c6d09a0
Merge pull request #824 from likewhatevs/layered-exit-task-no-missing-ctx
scx_layered: fix exit_task ctx lookup err
2024-10-21 14:52:07 +00:00
6216a4b3b1
Merge pull request #826 from JakeHillion/pr826
layered: bpf: add layer kind to layer
2024-10-21 10:47:39 +00:00
Jake Hillion
55c9636f78 layered: bpf: add layer kind to layer
Currently we have an approximation of LayerKind in the BPF code with `open` on
the layer, but it is difficult/impossible to tell the difference between an
Open and a Grouped layer. Add a `kind` field to the BPF `layer` and plumb
through an enum from the Rust side.
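
A sketch of the shape of the change on the BPF side; the enum values
and field name are illustrative assumptions:

```c
/* Replaces the lone `open` flag, so Open and Grouped are distinct. */
enum layer_kind {
	LAYER_KIND_CONFINED,
	LAYER_KIND_GROUPED,
	LAYER_KIND_OPEN,
};

struct layer {
	/* ... existing fields ... */
	int kind;	/* written by the Rust side at init time */
};
```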
2024-10-21 11:32:17 +01:00
Changwoo Min
5f19fa0bab scx_lavd: refill time slice once for a lock holder
When a task holds a lock, refill its time slice once on the
ops.dispatch() path to avoid the lock holder preemption problem.
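
A plain-C sketch of the refill logic, with hypothetical field and
constant names:

```c
typedef unsigned long long u64;

#define LAVD_SLICE_NS	(4ULL * 1000 * 1000)	/* hypothetical slice */

struct task_ctx {
	int lock_holder;	/* task currently holds a lock/futex */
	int slice_refilled;	/* already refilled once this run */
};

/* Refill at most once so a lock holder can leave its critical section
 * instead of being preempted while holding the lock. */
static u64 pick_slice(struct task_ctx *tctx, u64 remaining_ns)
{
	if (tctx->lock_holder && !tctx->slice_refilled) {
		tctx->slice_refilled = 1;
		return LAVD_SLICE_NS;
	}
	return remaining_ns;
}
```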

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-21 15:56:51 +09:00
Changwoo Min
5a852dc3d9 scx_lavd: direct dispatch when there is an idle CPU
When there is an idle CPU, direct dispatch is performed to reduce
scheduling latency. This didn't work well before, but it seems
to work well now with other tunings.
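
The common direct-dispatch pattern in ops.select_cpu() looks roughly
like this (a sketch assuming the scx common BPF headers and the
scx_bpf_dispatch() API of this era, not the literal scx_lavd code):

```c
s32 BPF_STRUCT_OPS(lavd_select_cpu, struct task_struct *p,
		   s32 prev_cpu, u64 wake_flags)
{
	bool is_idle = false;
	s32 cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);

	/* An idle CPU was found: queue the task straight onto its local
	 * DSQ, skipping ops.enqueue() and cutting wakeup latency. */
	if (is_idle)
		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);

	return cpu;
}
```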

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-21 15:56:51 +09:00
Changwoo Min
420de70159 scx_lavd: give more penalty to long-running tasks
Giving a larger penalty to long-running tasks helps segregate
latency-critical tasks, which are usually short-running, from
long-running tasks, which are compute-intensive.
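
A hypothetical shape for such a penalty (purely illustrative):

```c
typedef unsigned long long u64;

/* The longer a task's average run time, the lower its latency
 * criticality, pushing compute-bound tasks away from the
 * latency-critical end of the spectrum. */
static u64 penalize_long_runner(u64 lat_cri, u64 avg_runtime_ns)
{
	return lat_cri / (1 + avg_runtime_ns / (1000 * 1000));
}
```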

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-21 15:56:41 +09:00
Pat Somaru
d89c571593
scx_layered: do not attempt ctx lookup on tasks exited before running on scx
2024-10-20 17:47:24 -04:00
Andrea Righi
fb3f1d0b43
Merge pull request #821 from sched-ext/rustland-min-vtime-budget
scx_rustland: Adjust task's vruntime budget based on latency weight
2024-10-20 07:44:35 +00:00
Changwoo Min
bf1b014d63
Merge pull request #818 from multics69/lavd-tuning
scx_lavd: add missing reset_lock_futex_boost()
2024-10-20 01:41:54 +00:00
Daniel Hodges
e72e5ce0f4
Merge pull request #744 from minosfuture/main
scx_layered: Fix crash on aarch64 due to unavailable cache id file
2024-10-19 22:33:53 +00:00
Ming Yang
1b5359ef4a Use per-arch vmlinux.h v2
Rework the per-arch vmlinux solution:
* add a per-arch directory under sched/include/arch/, in which we
  maintain a vmlinux.h symlink and the real file
  vmlinux-{kernel_ver}-g{sha1}.h. The original sched/include/vmlinux/
  folder is removed.
* update the meson build `-I` option to find the new vmlinux.h location
* update the cargo build scripts to use the per-arch vmlinux.h for
  generating bindings
* keep the original ClangInfo refactoring changes

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-10-19 10:50:59 -07:00
Andrea Righi
30a2a2013c scx_rustland: Adjust task's vruntime budget based on latency weight
Adjust the amount of vruntime budget an idle task can accumulate as a
function of its latency weight, which is derived from the average
number of voluntary context switches.

This ensures that latency-sensitive tasks naturally receive an
additional priority boost, and we can avoid scaling down the vruntime
to determine the task's deadline, making the scheduler fairer.

It also makes the scheduler more robust: rustland can now survive
intensive stress tests, such as `stress-ng --cpu-sched 64` or
hackbench.
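
A sketch of the budget idea in C; the base budget and the baseline of
100 are assumptions (the real logic lives in the scx_rustland
user-space code):

```c
typedef unsigned long long u64;

#define BASE_BUDGET_NS	(20ULL * 1000 * 1000)	/* hypothetical */

/* Budget grows with the latency weight, which is derived from the
 * average voluntary context switch rate (100 = neutral weight). */
static u64 max_vtime_budget(u64 latency_weight)
{
	return BASE_BUDGET_NS * latency_weight / 100;
}

/*
 * An idle task's vruntime is then clamped to at most
 * max_vtime_budget() behind the current minimum vruntime, so
 * latency-sensitive tasks start further ahead in the queue.
 */
```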

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-19 19:32:14 +02:00
Daniel Hodges
8f3b75acb9
Merge pull request #820 from hodgesds/rusty-cleanup
scx_rusty: Cleanup cpumask casting
2024-10-19 16:12:11 +00:00
Daniel Hodges
b1b76ee72a
scx_rusty: Cleanup cpumask casting
Use the cast_mask() helper function to clean up scx_rusty.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-19 12:01:36 -04:00
Changwoo Min
2fd395bbbf scx_lavd: remove unnecessary load tracking
The algorithm has evolved to decide the time slice without tracking
the system-wide load, so remove the obsolete load-tracking code.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-19 15:39:24 +09:00
Changwoo Min
8d63024be7 scx_lavd: add missing reset_lock_futex_boost()
reset_lock_futex_boost() should be called on every context switch of
a task. Otherwise, in the worst case, a task and its CPU could block
preemption. To avoid such a situation, add the missing
reset_lock_futex_boost() calls.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-19 15:39:18 +09:00
Ming Yang
f3f4726c09 scx_layered: Read CPU topology for building CpuPool
Building CpuPool from the cache-CPU topology did not work on arm,
because the `/sys/devices/system/cpu/cpu{}/cache/index{}/id` file is
unavailable.

Read the CPU topology instead.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-10-17 23:41:08 -07:00
Andrea Righi
f37bc0db7f
Merge pull request #813 from sched-ext/bpfland-lowlatency-rework
scx_bpfland: rework lowlatency mode
2024-10-17 19:56:00 +00:00
Andrea Righi
48bbcd24dd scx_bpfland: tune default settings
Adjust some default settings after the rework done with commit 112a5d4
("scx_bpfland: rework lowlatency mode to adjust tasks priority").

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-17 21:46:51 +02:00
Andrea Righi
4d68133f3b scx_bpfland: rework lowlatency mode to adjust tasks priority
Rework lowlatency mode as follows:
 - introduce a dynamic task priority: the task's weight multiplied by
   the average amount of voluntary context switches
 - use the dynamic priority to determine the task's vruntime (instead
   of the static task weight)
 - evaluate the task's minimum vruntime as a function of the dynamic
   priority (tasks with a higher dynamic priority can have a smaller
   vruntime than tasks with a lower dynamic priority)

The dynamic priority makes it possible to maintain good system
responsiveness without classifying tasks as "interactive" or
"regular"; therefore, in lowlatency mode only the shared DSQ is used
(the priority DSQ is disabled).
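
A sketch of the dynamic priority and how it replaces the static
weight in the vruntime charge; names are illustrative, not the literal
scx_bpfland identifiers:

```c
typedef unsigned long long u64;

/* Dynamic priority: static weight scaled by the average number of
 * voluntary context switches (a proxy for latency sensitivity). */
static u64 dyn_prio(u64 weight, u64 avg_nvcsw)
{
	return weight * (1 + avg_nvcsw);
}

/* vruntime charge, using dyn_prio() in place of the static weight: */
static u64 charge(u64 slice_used_ns, u64 weight, u64 avg_nvcsw)
{
	return slice_used_ns * 100 / dyn_prio(weight, avg_nvcsw);
}
```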

Using a separate priority queue to dispatch "interactive" tasks makes
the scheduler less fair, allowing latency-sensitive tasks to be
prioritized even when there is a high number of tasks in the system
(e.g., `stress-ng -c 1024` or similar scenarios), where relying solely
on dynamic priority may not be sufficient.

On the other hand, disabling the classification of "interactive" tasks
results in a fairer scheduler and more predictable performance, making
it better suited for soft real-time applications (e.g., audio and
multimedia).

Therefore, the --lowlatency option is retained to allow users to choose
between more predictable performance (by disabling the interactive task
classification) or a more responsive system (default).

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-17 21:46:51 +02:00
Andrea Righi
d336892c71
Merge pull request #816 from sched-ext/rustland-core-update-doc
scx_rustland_core: update documentation about the new API
2024-10-17 19:18:16 +00:00
likewhatevs
9a65fea75e
Merge pull request #817 from likewhatevs/fix-ci
remove apt fast from ci setup
2024-10-17 17:42:20 +00:00
Pat Somaru
d944a39a7f
remove apt fast from ci setup
remove apt-fast from the CI setup to reduce non-core dependencies
2024-10-17 13:08:03 -04:00
Andrea Righi
a155ff2ada scx_rustland_core: update documentation about the new API
Update the documentation, adding the new task statistics provided by
scx_rustland_core.

Fixes: be681c7 ("scx_rustland_core: pass nvcsw, slice and dsq_vtime to user-space")
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-17 19:07:51 +02:00
f1b1830512
Merge pull request #814 from JakeHillion/pr814
layered: add RandomTopo layer growth algorithm
2024-10-17 17:05:53 +00:00
abc202c972
Merge pull request #815 from JakeHillion/pr815
layered: make disable_topology arg require equals
2024-10-17 17:05:51 +00:00
Jake Hillion
1415b4a454 layered: make disable_topology arg require equals
The recent change to `disable_topology`, making the arg an
`Option<bool>` instead of a `bool`, caused it to incorrectly attach
arguments. Make the argument `require_equals` to fix this case.

This is a behaviour change for anybody previously relying on `-t true`,
`-t false`, `--disable-topology true`, or `--disable-topology false`. The
equals syntax worked before and continues to work after, as demonstrated in the
CI.

Test plan:

Before:
```sh
$ sudo target/release/scx_layered -t f:/tmp/test.json
error: invalid value 'f:/tmp/test.json' for '--disable-topology
[<DISABLE_TOPOLOGY>]'
  [possible values: true, false]

  For more information, try '--help'.
```

After:
```sh
$ sudo target/release/scx_layered -t f:/tmp/test.json
14:44:00 [INFO] CPUs: online/possible=176/176 nr_cores=88
14:44:00 [INFO] Disabling topology awareness
...
^CEXIT: Scheduler unregistered from user space
```
2024-10-17 15:46:30 +01:00
Jake Hillion
a0fe303b61 layered: add RandomTopo layer growth algorithm
Add an additional layer growth algorithm, named 'RandomTopo'. It follows these
rules:
- Randomise NUMA nodes. List each core in each NUMA node before a core from
  another NUMA node.
- Randomise LLCs within each NUMA node. List each core in each LLC before a
  core in a different LLC.
- Randomise the core order within each LLC.

This attempts to provide a relatively evenly distributed set of cores while
considering topology. Unlike `Topo`, it does not require you to specify the
ordering and instead generates it from the hardware, making desyncs between the
config and the hardware less likely.
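
A plain-C sketch of the grouped shuffle (the real implementation is
on the Rust side of scx_layered; the array shapes here are simplified
assumptions):

```c
#include <stdlib.h>

/* Fisher-Yates shuffle. */
static void shuffle(int *arr, int n)
{
	for (int i = n - 1; i > 0; i--) {
		int j = rand() % (i + 1);
		int tmp = arr[i];

		arr[i] = arr[j];
		arr[j] = tmp;
	}
}

/*
 * Emit whole LLCs in shuffled order, with the cores shuffled within
 * each LLC (the NUMA level works the same way and is omitted for
 * brevity). Fixed 8 cores per LLC for simplicity.
 */
static int build_core_order(int llcs[][8], int *llc_idx, int nr_llcs, int *out)
{
	int n = 0;

	shuffle(llc_idx, nr_llcs);
	for (int l = 0; l < nr_llcs; l++) {
		shuffle(llcs[llc_idx[l]], 8);
		for (int c = 0; c < 8; c++)
			out[n++] = llcs[llc_idx[l]][c];
	}
	return n;
}
```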

Currently `RandomTopo` considers topology even with `--disable-topology=true`.
I can see the arguments for this going both ways. On one hand requesting
disable topology suggests you want no consideration of machine topology, and
`RandomTopo` should decay to `Random` (which it does on single node/LLC machines
anyway). On the other hand, the config explicitly specifies `RandomTopo` and
should consider the topology. If anyone feels strongly I can change this to
respect `disable_topology`.

Test plan:
```sh
$ sudo target/release/scx_layered -v f:/tmp/test.json
...
14:31:19 [DEBUG] layer: batch algo: RandomTopo core order: [47, 44, 43, 42, 40, 45, 46, 41, 38, 37, 36, 39, 34, 32, 35, 33, 54, 49, 50, 52, 51, 48, 55, 53, 68, 64, 66, 67, 70, 69, 71, 65, 9, 10, 12, 15, 14, 11, 8, 13, 59, 60, 57, 63, 62, 56, 58, 61, 2, 3, 5, 4, 0, 6, 7, 1, 86, 83, 85, 87, 84, 81, 80, 82, 20, 22, 19, 23, 21, 18, 17, 16, 30, 25, 26, 31, 28, 27, 29, 24, 78, 73, 74, 79, 75, 77, 76, 72]
14:31:19 [DEBUG] layer: immediate algo: RandomTopo core order: [45, 40, 46, 42, 47, 43, 41, 44, 80, 82, 83, 84, 85, 86, 81, 87, 13, 10, 9, 15, 14, 12, 11, 8, 36, 38, 39, 32, 34, 35, 33, 37, 7, 3, 1, 0, 2, 5, 4, 6, 53, 52, 54, 48, 50, 49, 55, 51, 76, 77, 79, 78, 73, 74, 72, 75, 71, 66, 64, 67, 70, 69, 65, 68, 24, 26, 31, 25, 28, 30, 27, 29, 58, 56, 59, 61, 57, 62, 60, 63, 16, 19, 17, 23, 22, 20, 18, 21]
...
```

This is a machine with 1 NUMA/11 LLCs with 8 cores per LLC and you can see the
results are grouped by LLC but random within.
2024-10-17 15:36:00 +01:00
Daniel Hodges
b01ff79080
Merge pull request #805 from hodgesds/layered-refresh-cleanup
scx_layered: Refactor refresh cpumasks
2024-10-16 19:06:15 +00:00
Andrea Righi
2ea47af4bc
Merge pull request #804 from sched-ext/rustland-fixes
scx_rustland fixes and improvements
2024-10-16 18:26:03 +00:00
Tejun Heo
58093eace5
Merge pull request #809 from sched-ext/htejun/revert-arch-vmlinux_h
Revert #793
2024-10-16 16:52:02 +00:00
Tejun Heo
84d8abf913 Revert "Use per-arch vmlinux.h"
This reverts commit a23f3566e3.
2024-10-16 06:42:28 -10:00
Tejun Heo
bd79059f1a Revert "Add vmlinux.h for multiple arch"
This reverts commit 7067092555.
2024-10-16 06:42:18 -10:00
Dan Schatzberg
730052a0c4
Merge pull request #803 from dschatzberg/mitosis_fallback_dsq
scx_mitosis: Handle pinned tasks
2024-10-16 13:26:23 +00:00
Andrea Righi
763da6ab55 scx_rlfifo: operate in a more work-conserving way
Make scx_rlfifo even simpler and keep dispatching tasks even when all
CPUs are busy.

This allows better stress-testing of the scx_rustland_core backend,
using both the per-CPU DSQs and the global shared DSQ.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-16 14:06:00 +02:00
Andrea Righi
b07de1d7d5 scx_rustland: clarify EDF scheduling
scx_rustland is now effectively a deadline-based scheduler and not a
pure vruntime-based scheduler.

Clarify this in the source code. No functional change.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-16 14:06:00 +02:00
Andrea Righi
c4b6408e92 scx_rustland: smooth vruntime update
Update vruntime by adding each task's used virtual time slice as soon
as it is scheduled.
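
A one-liner sketch of the charge, using the usual 100/weight
normalization (an assumption, not the literal rustland code):

```c
typedef unsigned long long u64;

/* Charge the consumed slice immediately at schedule time, so vruntime
 * advances smoothly rather than in one lump when the slice ends. */
static void charge_vtime(u64 *vruntime, u64 slice_used_ns, u64 weight)
{
	*vruntime += slice_used_ns * 100 / weight;
}
```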

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-16 14:06:00 +02:00
Andrea Righi
0b2de2c10c scx_rustland: use built-in nvcsw metrics
Use the nvcsw metric from the scx_rustland_core backend, instead of
retrieving this metric in user-space via procfs.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-16 14:06:00 +02:00
Andrea Righi
97629178e2 scx_rustland_core: bump up version to 2.2.2
Bump up the minor version to reflect the new backward-compatible
functionality added.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-16 14:06:00 +02:00
Andrea Righi
704fe95f51 scx_rustland_core: get rid of the SCX_ENQ_WAKEUP logic
With user-space scheduling we don't usually dispatch a task
immediately after selecting an idle CPU, so there's not much benefit
in trying to optimize the WAKE_SYNC scenario (when a task is waking up
another task and releasing the CPU) while picking an idle CPU.

Therefore, get rid of the WAKE_SYNC logic in select_cpu() and rely on
the user-space logic (which has access to the WAKE_SYNC information)
to handle this particular case.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-16 14:05:58 +02:00
Andrea Righi
67ec1af5cf scx_rustland_core: kick an idle CPU after global dispatch
Do not kick a CPU from rs_select_cpu() (called by the user-space
scheduler), since we may not immediately dispatch the task.

Instead, always try to wake up the task's assigned CPU after dispatching
to a global DSQ, ensuring it can be consumed immediately.
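
A sketch of the dispatch-then-kick pattern, assuming the scx common
BPF headers and a hypothetical SHARED_DSQ id (not the literal
scx_rustland_core code):

```c
#define SHARED_DSQ	0	/* hypothetical global DSQ id */

static void dispatch_global(struct task_struct *p, s32 cpu, u64 slice_ns)
{
	scx_bpf_dispatch(p, SHARED_DSQ, slice_ns, 0);

	/* SCX_KICK_IDLE wakes the CPU only if it is actually idle, so
	 * the freshly dispatched task gets consumed immediately. */
	scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
}
```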

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-16 14:05:33 +02:00
Andrea Righi
0a05f1f193 scx_rustland_core: keep CPUs alive with pending tasks
Prevent CPUs from going idle when the user-space scheduler has some
pending activities to complete.

Keeping the CPU alive allows tasks from the user-space scheduler to
be consumed more efficiently, preventing bubbles in the scheduling
pipeline.

To achieve this, trigger a CPU kick from ops.update_idle() and set a
flag in the CPU context to prevent it from going idle. Then keep kicking
the CPU from ops.dispatch() until the flag is cleared, which occurs when
no more tasks are pending or when the CPU exits idle as a task starts
running on it.
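
A sketch of the keep-alive mechanism, assuming the scx common BPF
headers; the context field and helper functions are hypothetical:

```c
struct cpu_ctx {
	int keep_alive;		/* user-space scheduler still has work */
};

/* hypothetical helpers, assumed to exist elsewhere in the scheduler */
struct cpu_ctx *lookup_cpu_ctx(s32 cpu);
bool usersched_has_pending_work(void);

void BPF_STRUCT_OPS(rustland_update_idle, s32 cpu, bool idle)
{
	struct cpu_ctx *cctx = lookup_cpu_ctx(cpu);

	if (!cctx || !idle)
		return;
	if (usersched_has_pending_work()) {
		/* Flag the CPU and kick it so it re-enters
		 * ops.dispatch() instead of going idle; dispatch keeps
		 * kicking until the flag is cleared. */
		cctx->keep_alive = 1;
		scx_bpf_kick_cpu(cpu, 0);
	}
}
```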

This fixes the performance regression introduced by the
put_prev_task_scx() behavior change in Linux 6.12 (see #788).

Link: https://lore.kernel.org/lkml/20241015111539.12136-1-andrea.righi@linux.dev/
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-16 10:43:43 +02:00