Commit Graph

Changwoo Min
b1bc4033b4
Merge pull request #673 from multics69/lavd-prop-lat-cri
scx_lavd: propagate waker's latency criticality to its wakee
2024-09-24 07:34:07 +09:00
Daniel Hodges
35477970bd scx_layered: Cleanup dump format
Clean up the dump format for topology-aware dumps in scx_layered.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-23 10:02:49 -07:00
Daniel Hodges
8b14e48994
Merge pull request #671 from hodgesds/layered-last-waker
scx_layered: Add waker stats per layer
2024-09-23 10:58:54 -04:00
Changwoo Min
71fa92cf1c scx_lavd: propagate waker's latency criticality to its wakee
If a waker is more latency critical than a wakee, inherit the waker's
latency criticality for the wakee. This allows the wakee to consider the
context of who woke it up. For now, we limit such inheritance to one
hop and one schedule.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-23 12:56:16 +09:00
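A minimal plain-C sketch of the inheritance rule described above; the field names (lat_cri, lat_cri_inherited) are assumptions, not necessarily the actual scx_lavd names:

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-in for scx_lavd's per-task context (field names assumed). */
struct task_ctx {
	uint64_t lat_cri;       /* latency criticality score */
	bool lat_cri_inherited; /* reset after the wakee runs once */
};

/* Propagate the waker's latency criticality to the wakee, one hop only. */
static void inherit_waker_lat_cri(const struct task_ctx *waker,
				  struct task_ctx *wakee)
{
	if (waker->lat_cri > wakee->lat_cri) {
		wakee->lat_cri = waker->lat_cri;
		/* Marking the value as inherited lets the scheduler reset
		 * it after one schedule, so it never propagates further. */
		wakee->lat_cri_inherited = true;
	}
}
```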
Changwoo Min
ad8536b4a4
Merge pull request #670 from multics69/lavd-opt-preemption
scx_lavd: find a victim cpu for preemption within task's compute domain
2024-09-23 10:22:08 +09:00
Daniel Hodges
91d32663bd
scx_layered: Refactor waker tracking to only use last waker
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 18:05:54 -04:00
Daniel Hodges
1a2f82b91c
Merge pull request #666 from hodgesds/layered-local-llc
scx_layered: Add topology aware preemption
2024-09-22 17:36:32 -04:00
Daniel Hodges
326f3b7988
Merge pull request #667 from hodgesds/layered-pcore-grow
scx_layered: Add Big/Little core growth algos
2024-09-22 16:59:42 -04:00
Daniel Hodges
1ac9712d2e
scx_layered: Refactor preemption into a separate function
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:54:11 -04:00
Daniel Hodges
bc34bd867b
scx_layered: Add option to enable XNUMA preemption
Disable XNUMA preemption by default and add an option to enable it.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:52:57 -04:00
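For illustration, a default-off switch could gate remote-node candidates as below; this is a sketch, with the flag name mirroring the option described above and the surrounding code assumed:

```c
#include <stdbool.h>

/* Off by default, flipped by the new command-line option. */
static bool xnuma_preemption = false;

/* Allow a preemption candidate on a remote NUMA node only if enabled. */
static bool may_preempt_node(int cand_node, int task_node)
{
	return cand_node == task_node || xnuma_preemption;
}
```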
Daniel Hodges
e105d9f8b1
scx_layered: Use cast_mask helper
Use the cast_mask helper to clean up some of the BPF cpumask conversion
code for preemption.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:52:57 -04:00
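The kind of cleanup this enables, shown as a hypothetical before/after fragment from a BPF program (the helper itself appears in full further down this log):

```c
/* Before: an open-coded cast at every call site. */
bool was = bpf_cpumask_test_cpu(cpu, (const struct cpumask *)layer_cpumask);

/* After: the shared helper documents the conversion in one place. */
bool now = bpf_cpumask_test_cpu(cpu, cast_mask(layer_cpumask));
```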
Daniel Hodges
5d9d32b65c
scx_layered: Add stats for XLLC/XNUMA preemptions
Add stats for XLLC/XNUMA preemptions.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:52:57 -04:00
Daniel Hodges
c15ecbb3a4
scx_layered: Add topology aware preemption
Add topology-aware preemption that begins in the local LLC and attempts
to preempt CPUs nearest in the topology.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:52:56 -04:00
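A self-contained C model of that search order, under an assumed toy topology; the real scheduler walks kernel-provided cpumasks rather than these arrays:

```c
#include <stdbool.h>

#define NR_CPUS 8

/* Toy topology: which LLC and NUMA node each CPU belongs to (assumed). */
static const int cpu_llc[NR_CPUS]  = { 0, 0, 1, 1, 2, 2, 3, 3 };
static const int cpu_node[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

static bool try_preempt(int cpu) { (void)cpu; return false; /* stub */ }

/* Try victims in the task's own LLC first, then the rest of its node,
 * then remote nodes last. */
static int pick_preempt_cpu(int task_cpu)
{
	int llc = cpu_llc[task_cpu], node = cpu_node[task_cpu];

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_llc[cpu] == llc && try_preempt(cpu))
			return cpu;
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_node[cpu] == node && cpu_llc[cpu] != llc && try_preempt(cpu))
			return cpu;
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_node[cpu] != node && try_preempt(cpu))
			return cpu;
	return -1;
}
```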
Daniel Hodges
6fb2f0b2b4
scx_layered: Clean up waker code
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 06:43:10 -04:00
Daniel Hodges
c55b2c6e69
scx_layered: Add waker stats per layer
Update the task context to keep a mask of wakers and add stats for wakes
across layers.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 06:43:03 -04:00
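A sketch of the bookkeeping described above, with assumed names and a plain-C layout standing in for the BPF maps:

```c
#include <stdint.h>

#define MAX_LAYERS 16

/* Cross-layer wakeup counters: wake_stats[waker_layer][wakee_layer]. */
static uint64_t wake_stats[MAX_LAYERS][MAX_LAYERS];

struct task_ctx {
	uint32_t waker_layers; /* bitmask of layers that have woken this task */
	int layer;             /* layer this task belongs to */
};

/* Record one wakeup: remember the waker's layer and bump the stat. */
static void record_wake(const struct task_ctx *waker, struct task_ctx *wakee)
{
	wakee->waker_layers |= 1u << waker->layer;
	wake_stats[waker->layer][wakee->layer]++;
}
```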
Daniel Hodges
140a101874
Merge pull request #449 from hodgesds/layered-dsq-fixes
scx_layered: Add a hi fallback dsq per llc
2024-09-22 06:39:46 -04:00
Changwoo Min
13a68465bf common: add bpf_cpumask_weight() to common.bpf.h
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-22 12:47:38 +09:00
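One plausible shape for such a helper inside a BPF program; the actual common.bpf.h version may differ (bpf_for comes from libbpf, and nr_cpu_ids is assumed to be a global set at load time):

```c
static inline u32 bpf_cpumask_weight(const struct cpumask *mask)
{
	u32 cpu, weight = 0;

	/* Count the set bits by probing every possible CPU id. */
	bpf_for(cpu, 0, nr_cpu_ids) {
		if (bpf_cpumask_test_cpu(cpu, mask))
			weight++;
	}
	return weight;
}
```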
Changwoo Min
7321a89724 scx_lavd: find a victim cpu for preemption within task's compute domain
Previously, we searched for a victim among all CPUs, including remote
or non-compatible ones. Now we limit the victim search to a task's
compute domain.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-22 12:47:18 +09:00
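A plain-C model of the narrowed search; cpdom_mask is an assumed name for the task's compute-domain cpumask:

```c
#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS 8

static bool is_preemptible(int cpu) { (void)cpu; return true; /* stub */ }

/* Scan only CPUs inside the task's compute domain for a victim. */
static int find_victim(uint64_t cpdom_mask)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(cpdom_mask & (1ULL << cpu)))
			continue; /* remote or non-compatible CPU: skip */
		if (is_preemptible(cpu))
			return cpu;
	}
	return -1; /* no victim in the compute domain */
}
```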
Changwoo Min
a13082c2b8
Merge pull request #669 from multics69/lavd-opt-select-cpu
scx_lavd: consider waker's CPU when ops.select_cpu()
2024-09-22 09:16:06 +09:00
Andrea Righi
897977bbc1
Merge pull request #663 from vax-r/bpfland_fix
scx_bpfland: Remove the usage of cast_mask in bpfland_enqueue
2024-09-21 22:15:11 +02:00
Changwoo Min
8d8d8f9f61 scx_lavd: consider waker's CPU when ops.select_cpu()
In the case of a sync wake-up, also consider the waker's CPU to improve
cache locality.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-22 01:57:49 +09:00
Daniel Hodges
4aa841de0a
scx_layered: Rename HI_FALLBACK_DSQ to HI_FALLBACK_DSQ_BASE
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-20 17:28:38 -04:00
Daniel Hodges
a3d1344293
scx_layered: Add core growth algo for core type
Add core growth algos for Big/Little core support. The algos allow
layers to grow by preferring either big or little cores first.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-20 11:50:15 -04:00
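A compact C model of a "big cores first" growth order under an assumed core layout; a little-first algo would reverse the pass order:

```c
#define NR_CORES 8

enum core_type { LITTLE = 0, BIG = 1 };

/* Toy layout: four big cores followed by four little ones (assumed). */
static const enum core_type core_of[NR_CORES] =
	{ BIG, BIG, BIG, BIG, LITTLE, LITTLE, LITTLE, LITTLE };

/* Return the idx-th core in big-first order, or -1 past the end. */
static int grow_core_big_first(int idx)
{
	int seen = 0;

	for (int pass = BIG; pass >= LITTLE; pass--)
		for (int core = 0; core < NR_CORES; core++)
			if ((int)core_of[core] == pass && seen++ == idx)
				return core;
	return -1;
}
```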
I Hsin Cheng
7799b94f07 scx_layered: Add helper function to access cpumask within bpf_cpumask
Before passing "nodec->cpumask" and "cachec->cpumask" into
"bpf_cpumask_test_cpu()", type conversion must be done first.
Implement "cast_mask()" to convert "struct bpf_cpumask *" into "const
struct cpumask *".

Reference from
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/bpf/progs/cpumask_common.h#n63

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-09-20 20:52:03 +08:00
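For reference, the helper in the linked selftests header is essentially a one-line pointer cast:

```c
static inline const struct cpumask *cast_mask(struct bpf_cpumask *mask)
{
	return (const struct cpumask *)mask;
}
```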
I Hsin Cheng
5596d5e3fe scx_bpfland: Remove the usage of cast_mask in bpfland_enqueue
The usage of cast_mask() within bpfland_enqueue aims to cast the type of
"p->cpus_ptr" from "struct bpf_cpumask *" to "const struct cpumask *".
However, the type of "p->cpus_ptr" is already "const cpumask_t *", i.e.
"const struct cpumask *", so no conversion is needed.

Worse, passing a value of type "struct cpumask *" where "struct
bpf_cpumask *" is expected leads to a compile error.

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-09-20 20:45:09 +08:00
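In other words, inside a BPF program the pointer can be used directly (a fragment, assuming the usual sched_ext includes):

```c
const struct cpumask *mask = p->cpus_ptr;       /* already the right type */
bool allowed = bpf_cpumask_test_cpu(cpu, mask); /* no cast_mask() needed */
```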
Daniel Hodges
8532ba3f1e
scx_layered: Fix hi fallback dsq consumption
Fix hi fallback dsq consumption to only consume from the cache-local hi
fallback dsq.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-20 04:18:05 -04:00
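A sketch of the per-LLC DSQ id scheme implied by the HI_FALLBACK_DSQ_BASE rename further up this log; the base value and the consume call are illustrative assumptions:

```c
#include <stdint.h>

#define HI_FALLBACK_DSQ_BASE 0x8000 /* assumed value for illustration */

/* One hi fallback DSQ per LLC: derive the DSQ id from the LLC id. */
static uint64_t hi_fallback_dsq(uint32_t llc_id)
{
	return HI_FALLBACK_DSQ_BASE + llc_id;
}

/* Consumption then only touches the CPU's own LLC, e.g.:
 *     scx_bpf_consume(hi_fallback_dsq(cctx->llc_id));
 */
```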
I Hsin Cheng
e4bb99efc5 scx_layered: Refactor match_layer()
Refactor match_layer() to prevent the compile error caused by use of
the variable "nr_match_ors" before initialization.

Move the check of "nr_match_ors" to after it is assigned from
"layer->nr_match_ors", to make sure it is initialized before use.

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-09-19 22:20:03 +08:00
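The shape of the fix, as a self-contained sketch; the struct fields and the bound are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_LAYER_MATCH_ORS 32 /* assumed bound for illustration */

struct layer { uint32_t nr_match_ors; /* ... */ };

/* Assign from layer->nr_match_ors before any check, so the local
 * variable is never read uninitialized. */
static bool match_layer_checked(const struct layer *layer)
{
	uint32_t nr_match_ors = layer->nr_match_ors;

	if (nr_match_ors == 0 || nr_match_ors > MAX_LAYER_MATCH_ORS)
		return false;
	/* ... evaluate each OR group here ... */
	return true;
}
```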
Andrea Righi
3f8db5783b
Merge pull request #658 from sched-ext/rustland-core-improve-cpu-selection
scx_rustland_core: improve idle CPU selection API and logic
2024-09-17 22:38:15 +02:00
Andrea Righi
e6b624a97c scx_rustland_core: improve idle CPU selection API and logic
Pass enqueue flags to user-space: flags will be passed via
QueuedTask.flags and can be forwarded back to BPF via
DispatchedTask.flags.

These flags can be also passed to BpfScheduler.select_cpu() to apply a
more refined CPU selection policy.

Moreover, avoid prioritizing the user-space scheduler too much: dispatch
it only if there are no other tasks that need to be dispatched in
ops.dispatch().

This improves CPU utilization and enhances the fairness, robustness, and
resilience of schedulers based on scx_rustland_core, particularly under
stress test conditions.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-16 22:12:38 +02:00
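A conceptual C model of the round trip; the struct and field names are assumptions, not the actual scx_rustland_core interface:

```c
#include <stdint.h>

struct queued_task {     /* BPF -> user space */
	int32_t pid;
	uint64_t flags;  /* enqueue flags, e.g. a sync-wakeup bit */
};

struct dispatched_task { /* user space -> BPF */
	int32_t pid;
	int32_t cpu;     /* CPU picked by select_cpu(), which may use flags */
	uint64_t flags;  /* forwarded back unchanged */
};
```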
Daniel Hodges
4f98de333d
Merge pull request #652 from JakeHillion/layer-growth-rr
scx_layered: add round robin growth strategy
2024-09-16 17:34:48 +02:00
Andrea Righi
00eebaf905 scx_bpfland: refine task wakeup logic
On WAKE_SYNC, attempt to migrate the wakee to the same CPU as the waker
if the waker is not exiting, the wakee can use the waker's CPU, the
waker's L3 domain is not saturated, and there are no other tasks queued
to the local DSQ of the waker's CPU.

This is the same logic used in scx_rusty.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-15 14:50:14 +02:00
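The four conditions, modelled as one predicate in self-contained C (all names assumed):

```c
#include <stdbool.h>

/* Inputs modelling the four conditions listed above. */
struct wake_ctx {
	bool waker_exiting;      /* waker is exiting (PF_EXITING) */
	bool wakee_allowed;      /* waker's CPU is in the wakee's cpumask */
	bool l3_saturated;       /* waker's L3 domain has no idle CPU */
	int  waker_local_queued; /* tasks queued to the waker CPU's local DSQ */
};

/* Migrate the wakee to the waker's CPU only if every condition holds. */
static bool wake_sync_to_waker_cpu(const struct wake_ctx *w)
{
	return !w->waker_exiting && w->wakee_allowed &&
	       !w->l3_saturated && w->waker_local_queued == 0;
}
```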
Andrea Righi
079a53c689 scx_bpfland: get rid of preferred domain
Using the turbo-boosted CPUs as a preferred scheduling domain seems to
be beneficial in only a very few corner cases, for example on
battery-powered devices with an aggressive cpufreq governor that
constantly tries to scale down the frequency (and even in this case
it's probably better not to force tasks to run on the fast CPUs, to
save power).

In practice, the preferred domain seems to introduce more overhead than
benefit overall, so let's get rid of it.

This can be improved in the future adding multiple user-configurable
scheduling domains.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-15 14:50:14 +02:00
Changwoo Min
95e2f4dabe scx_lavd: boost the latency criticality of kernel threads
Many kernel threads perform latency-critical work (e.g., net, GPU). In
particular, the AMD GPU driver runs largely in kernel space using
kworkers. Hence, treat kernel threads as if they were just-woken tasks.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-14 00:41:02 +09:00
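A sketch of the boost, keying off the kernel's PF_KTHREAD flag; the boost factor here is an illustrative assumption:

```c
#include <stdint.h>

#define PF_KTHREAD 0x00200000 /* kernel thread flag, as in the kernel */

/* Treat kernel threads as latency critical, like freshly woken tasks. */
static uint64_t boost_lat_cri(uint64_t lat_cri, uint32_t task_flags)
{
	if (task_flags & PF_KTHREAD)
		return lat_cri * 2; /* assumed boost factor */
	return lat_cri;
}
```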
Changwoo Min
4b4f42fce1 scx_lavd: add a short circuit for the case of no turbo core
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-13 16:02:07 +09:00
Jake Hillion
3848d87895 scx_layered: add round robin growth strategy 2024-09-12 23:27:21 +01:00
Daniel Hodges
632fcfe4ae
Merge pull request #648 from hodgesds/layered-llc-stats
scx_layered: Add stats for XNUMA/XLLC migrations
2024-09-12 13:23:23 -04:00
Daniel Hodges
dde6e0c7f9 scx_utils: Add node/llc id to core topology
Add ids for node/llc in the Core topology struct.
2024-09-12 10:05:02 -07:00
Daniel Hodges
aee19dd9a1 scx_layered: Add topology aware core growth selection
Add topology aware core growth selection.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-12 06:48:51 -07:00
Daniel Hodges
14a19dc3ca scx_layered: Add random layer growth algo
Add a random layer growth algo.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-12 05:35:54 -07:00
Daniel Hodges
ae57f8d1f9 scx_rusty: Initialize node cpumask
Initialize the node cpumask, which was previously left uninitialized,
causing metric calculations to be wrong when attempting to look up CPUs
in the node cpumask.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-11 13:14:44 -07:00
Jake Hillion
8ca45cfa37
lint: enable cargo fmt (#643)
Use `cargo fmt` with a specific nightly branch in the CI to enforce formatting. Globally format these files while the diff is still small so we can stay on top of it.

Test plan:
- CI lint check passes.
2024-09-11 10:03:20 +01:00
Daniel Hodges
43ec8bfe82 scx_layered: Add stats for XNUMA/XLLC migrations
Add stats for XNUMA/XLLC migrations. An example of the output is shown:
```
  hodgesd  : util/frac=    5.4/  0.1 load/frac=    301.0/  0.3 tasks=   476
             tot=   3168 local=97.82 wake/exp/reenq= 2.18/ 0.00/ 0.00
             keep/max/busy= 0.03/ 0.00/ 0.03 kick= 0.00 yield/ign= 0.09/    0
             open_idle= 0.00 mig= 6.82 xnuma_mig= 6.82 xllc_mig= 4.86 affn_viol= 0.00
             preempt/first/idle/fail= 0.00/ 0.00/ 0.00/ 0.00 min_exec= 0.00/   0.00ms
             cpus=  2 [  2,  4] 00000000 00000010 00001000
  normal   : util/frac=   28.7/  0.7 load/frac= 101704.7/ 95.8 tasks=  2450
             tot=   4660 local=99.06 wake/exp/reenq= 0.88/ 0.06/ 0.00
             keep/max/busy= 1.03/ 0.00/ 0.00 kick= 0.06 yield/ign= 0.04/  400
             open_idle=15.73 mig=23.45 xnuma_mig=23.45 xllc_mig= 3.07 affn_viol= 0.00
             preempt/first/idle/fail= 0.00/ 0.00/ 0.00/ 0.88 min_exec= 0.00/   0.00ms
             cpus=  2 [  2,  2] 00000001 00000100 00000000
             excl_coll=12.55 excl_preempt= 0.00
  random   : util/frac=    0.0/  0.0 load/frac=      0.0/  0.0 tasks=     0
             tot=      0 local= 0.00 wake/exp/reenq= 0.00/ 0.00/ 0.00
             keep/max/busy= 0.00/ 0.00/ 0.00 kick= 0.00 yield/ign= 0.00/    0
             open_idle= 0.00 mig= 0.00 xnuma_mig= 0.00 xllc_mig= 0.00 affn_viol= 0.00
             preempt/first/idle/fail= 0.00/ 0.00/ 0.00/ 0.00 min_exec= 0.00/   0.00ms
             cpus=  0 [  0,  0] 00000000 00000000 00000000
             excl_coll= 0.00 excl_preempt= 0.00
  stress-ng: util/frac= 4189.1/ 99.2 load/frac=   4200.0/  4.0 tasks=    43
             tot=     62 local= 0.00 wake/exp/reenq= 0.00/100.0/ 0.00
             keep/max/busy=2433.9/177.4/ 0.00 kick=100.0 yield/ign= 3.23/    0
             open_idle= 0.00 mig=54.84 xnuma_mig=54.84 xllc_mig=35.48 affn_viol= 0.00
             preempt/first/idle/fail= 0.00/ 0.00/ 0.00/ 0.00 min_exec= 0.00/   0.00ms
             cpus=  4 [  4,  4] 00000300 00030000 00000000
             excl_coll= 0.00 excl_preempt= 0.00
```

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-10 19:53:28 -07:00
Tejun Heo
8f0cc89ee8
Merge pull request #645 from frelon/rusty-init-dom
scx_rusty: init domains when calculating averages
2024-09-10 12:25:51 -10:00
Andrea Righi
e6e3579a92
Merge pull request #634 from anh0516/main
scx_bpfland: Documentation consistency fix
2024-09-10 23:25:55 +02:00
Fredrik Lönnegren
f155966b77 scx_rusty: init domains when calculating averages
The domains are added to the aggregator when load is added (and
duty_cycle is not 0.0f64).

This commit makes sure that all domains are added to the aggregator even
when the calculated duty_cycle is 0.

Signed-off-by: Fredrik Lönnegren <fredrik@frelon.se>
2024-09-10 21:51:41 +02:00
likewhatevs
85863d0e1c
Merge pull request #644 from hodgesds/layered-topo-order
scx_layered: Pass layer spec for core growth algo
2024-09-10 14:49:37 -04:00
Daniel Hodges
5fdd257862 scx_layered: Pass layer spec for core growth algo
Pass in the layer spec when determining the layer core growth algo. This
should make it easier to implement layer growth algos that are
spec-specific.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-10 10:27:08 -07:00
Samuel Nair
c6af1aa1c8 scx_layered: Fix typo in stats 2024-09-10 08:44:57 -07:00
likewhatevs
c4c3659b6d
Merge pull request #638 from likewhatevs/remove-rlimit-dep
remove dependency on rlimit.rs
2024-09-10 03:14:12 -04:00
Andrea Righi
655ed5b4c6 scx_bpfland: use sum_exec_runtime to evaluate task's used time slice
Using p->scx.slice to evaluate the consumed time slice can be a bit
imprecise, because the sched_ext core implements yielding by setting
p->scx.slice to 0.

When the task's vruntime is evaluated, this makes it look as if the
task had exhausted its entire allocated time slice, even though it
voluntarily released the CPU before the slice fully expired.

To avoid this inaccuracy and prevent penalizing tasks that voluntarily
release the CPU, always evaluate the used time slice based on the
difference in the task's total execution time (p->se.sum_exec_runtime).

This method provides a more precise calculation of vruntime and results
in a fairer evaluation of task deadlines.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-10 08:03:35 +02:00
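A self-contained C model of the accounting change; the field names mirror the kernel's, and the weight scaling is an assumption:

```c
#include <stdint.h>

struct task_times {
	uint64_t sum_exec_runtime;      /* p->se.sum_exec_runtime */
	uint64_t last_sum_exec_runtime; /* snapshot taken at the last stop */
	uint64_t vruntime;
	uint64_t weight;                /* load weight, e.g. 100 = nice 0 */
};

/* Charge only the runtime actually consumed since the task last started
 * running, rather than trusting p->scx.slice (which yielding zeroes). */
static void charge_used_slice(struct task_times *t)
{
	uint64_t used = t->sum_exec_runtime - t->last_sum_exec_runtime;

	t->vruntime += used * 100 / t->weight; /* weight-scaled charge */
	t->last_sum_exec_runtime = t->sum_exec_runtime;
}
```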