Commit Graph

1028 Commits

Author SHA1 Message Date
Ming Yang
445743487a Add #stat_doc attribute macro to Stats struct
`#stat_doc` extends the generated documentation with the stat's `desc` property.

Add this attribute macro to the remaining Stats structs.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-09-30 22:12:11 -07:00
Daniel Hodges
c897511c62
scx_layered: Fix compiler warnings
Clean up various compiler warnings.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-30 20:59:24 -04:00
Tejun Heo
04648bc511
Merge pull request #703 from minosfuture/main
scx_stats: Implement macro #stat_doc to autogen doc from stat desc
2024-09-30 17:58:56 +00:00
Andrea Righi
e966455af2 scx_bpfland: fix task_avg_nvcsw() return type
task_avg_nvcsw() was incorrectly returning a bool instead of u64,
limiting the impact of the lowlatency boost.

Fix it by returning the proper type (u64).

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:32 +02:00
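A minimal sketch of the fix described in the commit above, assuming hypothetical nvcsw_sum/nvcsw_cnt counters in the per-task context (not the actual scx_bpfland layout):

    typedef unsigned long long u64;

    struct task_ctx {
        u64 nvcsw_sum;    /* accumulated voluntary context switches */
        u64 nvcsw_cnt;    /* number of samples */
    };

    /* Declaring the return type as bool would collapse the average to 0 or 1,
     * neutralizing the lowlatency boost; return the full u64 average instead. */
    static u64 task_avg_nvcsw(const struct task_ctx *tctx)
    {
        if (!tctx->nvcsw_cnt)
            return 0;
        return tctx->nvcsw_sum / tctx->nvcsw_cnt;
    }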
Andrea Righi
6e24fcc7f0 scx_bpfland: keep tasks running on full-idle SMT cores
When a task is the last one running on a CPU and still wants to
continue, allow it to run and replenish its time only if the used CPU is
part of a fully idle SMT core.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:32 +02:00
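A minimal sketch of the SMT check described above, assuming the sched_ext idle-mask kfuncs exposed via scx/common.bpf.h; the helper name is illustrative:

    #include <scx/common.bpf.h>

    /* Return true if @cpu belongs to a fully idle SMT core: the SMT idle mask
     * contains a CPU only when all of its siblings are idle. */
    static bool cpu_in_full_idle_smt_core(s32 cpu)
    {
        const struct cpumask *smt = scx_bpf_get_idle_smtmask();
        bool idle = bpf_cpumask_test_cpu(cpu, smt);

        scx_bpf_put_idle_cpumask(smt);
        return idle;
    }

When the check passes, the task's time can be replenished and it keeps running on the same CPU; otherwise it goes through the normal enqueue path.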
Andrea Righi
c20a19c946 scx_bpfland: always give tasks a chance to run on an idle CPU
During ttwu, the kernel may decide to skip ->select_task_rq() (e.g.,
when only one CPU is allowed or migration is disabled). This causes
ops.enqueue() to be called directly, without having a chance to call
ops.select_cpu().

Therefore, introduce a new flag (select_cpu_done) in the local task
context to determine if ops.select_cpu() was bypassed and, in that case,
attempt to find an idle CPU directly from ops.enqueue().

In the future this information will be supplied by the kernel through a
special enqueue flag (SCX_ENQ_CPU_SELECTED) [1]. However, the custom
flag in the local task context ensures that the same information can be
reliably determined, even on older kernels where this flag is not available.

[1] https://lore.kernel.org/lkml/20240928003840.GA2717@maniforge/T

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:19 +02:00
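A sketch of the bypass detection described above, using task local storage for the per-task flag; the struct, map, and callback names are illustrative and the dispatch kfuncs follow the sched_ext API of that time:

    #include <scx/common.bpf.h>

    struct task_ctx {
        bool select_cpu_done;    /* set when ops.select_cpu() ran for this wakeup */
    };

    struct {
        __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, struct task_ctx);
    } task_ctx_stor SEC(".maps");

    s32 BPF_STRUCT_OPS(sched_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags)
    {
        struct task_ctx *tctx = bpf_task_storage_get(&task_ctx_stor, p, 0, 0);

        if (tctx)
            tctx->select_cpu_done = true;
        return prev_cpu;
    }

    void BPF_STRUCT_OPS(sched_enqueue, struct task_struct *p, u64 enq_flags)
    {
        struct task_ctx *tctx = bpf_task_storage_get(&task_ctx_stor, p, 0, 0);

        if (tctx && !tctx->select_cpu_done) {
            /* ops.select_cpu() was bypassed: try to place the task on an
             * idle CPU directly from here. */
            s32 cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);

            if (cpu >= 0) {
                scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | cpu, SCX_SLICE_DFL, enq_flags);
                scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
                return;
            }
        }
        if (tctx)
            tctx->select_cpu_done = false;    /* re-arm for the next wakeup */
        scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
    }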
Daniel Hodges
bb560088de
scx_layered: Fix cache initialization cpumask
Fix a bug in cache initialization where the first node would repeatedly
get all CPUs added to the mask. Refactor some consts to be clearer.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-29 22:10:08 -04:00
Changwoo Min
ade6931bfc scx_lavd: fix incorrect neighbor_bit initialization
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-29 02:27:40 +09:00
Changwoo Min
6d116208c8 scx_lavd: do not perform the victim selection for an invalid cpu
When finding a victim candidate for preemption, a randomly chosen
candidate could be outside the valid CPU range due to CPU offlining, etc.
In this case, try another CPU at random.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-29 02:22:18 +09:00
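A sketch of the retry described above; the valid mask could be the online cpumask (e.g. obtained via scx_bpf_get_online_cpumask()) or, per the related scx_lavd commit further down, the task's compute domain cpumask. The retry limit and names are illustrative:

    #include <scx/common.bpf.h>

    #define PICK_RETRIES 4

    static s32 pick_random_victim(const struct cpumask *valid, u32 nr_cpu_ids)
    {
        int i;

        bpf_for(i, 0, PICK_RETRIES) {
            s32 cpu = bpf_get_prandom_u32() % nr_cpu_ids;

            /* The random pick may land on an offline or otherwise invalid
             * CPU; in that case simply try another one. */
            if (bpf_cpumask_test_cpu(cpu, valid))
                return cpu;
        }
        return -1;    /* no valid candidate found, skip preemption this round */
    }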
Ming Yang
28bfd2986a scx_stats: Implement #stat_doc to autogen doc from stat desc
The doc of scx_layered `Opt` is out of sync.

Implement attribute macro #stat_doc to generate doc from the `desc`
property.

Apply #stat_doc to `LayerStats` and `SysStats` in scx_layered.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-09-28 09:32:48 -07:00
Changwoo Min
e8ebc09ced
Merge pull request #702 from multics69/lavd-dyn-pc-thr
scx_lavd: more accurately determine the performance criticality threshold
2024-09-28 04:35:45 +00:00
Changwoo Min
cd7846f4d2 scx_lavd: more accurately determine the performance criticality threshold
We used the average performance criticality of tasks as a threshold to
determine the proper core type (big or little). However, if the big
cores' compute capacity is not half of the total compute capacity, such
an average-based determination becomes suboptimal. If fewer tasks are
classified as performance-critical and requested to run on big cores,
the big cores would be wasted on stealing arbitrary
non-performance-critical tasks. That could result in performance
instability.

Hence, determine the threshold more accurately by considering (active)
big cores' compute capacity and the (approximated) distribution of
performance criticality of tasks.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-27 16:56:30 +09:00
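A hypothetical sketch of the idea: pick the threshold so that roughly the share of tasks matching the (active) big cores' fraction of total compute capacity is classified as performance-critical. The histogram layout and names are illustrative, not the actual scx_lavd code:

    typedef unsigned int u32;
    typedef unsigned long long u64;

    #define NR_BINS 16

    static u32 calc_thr_perf_cri(const u32 hist[NR_BINS], u32 nr_tasks,
                                 u32 big_capacity, u32 total_capacity)
    {
        /* Number of tasks the active big cores can be expected to absorb. */
        u32 target = (u64)nr_tasks * big_capacity / total_capacity;
        u32 acc = 0;
        int bin;

        /* Walk the approximated distribution from the most critical bin down
         * until the accumulated count covers the big-core share. */
        for (bin = NR_BINS - 1; bin >= 0; bin--) {
            acc += hist[bin];
            if (acc >= target)
                break;
        }
        return bin < 0 ? 0 : (u32)bin;    /* threshold expressed as a bin index */
    }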
Changwoo Min
f07023e42b scx_lavd: rename avg_perf_cri to thr_perf_cri
In preparation for improving the performance criticality logic, first
rename "avg_perf_cri" to "thr_perf_cri", since the average is no longer
the threshold.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-27 13:12:20 +09:00
Daniel Hodges
326ccb3385
Merge pull request #696 from hodgesds/layered-growth-fixes
scx_layered: Fix idle core selection
2024-09-26 11:59:19 -04:00
Daniel Hodges
d1b425d1fa scx_layered: Fix idle core selection
Fix idle core selection to correctly use pick_idle_cpu_from.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-26 07:46:26 -07:00
Tejun Heo
540576ac30 scheds/c: Re-enable scx_flatcg and scx_pair
cgroup support is available again, so re-enable scx_flatcg and scx_pair.
2024-09-25 12:38:45 -10:00
Tejun Heo
466b7984c2 Sync from kernel - a748db0c8c6a ("tools/sched_ext: Receive misc updates from SCX repo")
Sync from sched_ext/for-6.12-fixes a748db0c8c6a ("tools/sched_ext: Receive
misc updates from SCX repo")

  git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext.git for-6.12-fixes

- cgroup support + scx_flatcg updates.

- scx_bpf_dispatch[_vtime]_from_dsq() + scx_qmap updates.

- COMPAT macros.
2024-09-25 12:34:03 -10:00
Daniel Hodges
f9b39244cc
Merge pull request #687 from hodgesds/layered-growth-enum-refactor
scx_layered: Add layer growth algo to layer bpf config
2024-09-25 15:10:00 -04:00
Daniel Hodges
bce840d9e5 scx_layered: Add layer growth algo to layer bpf config
Add an enum for the layer growth algo to the bpf layer config. This will
be useful for implementing topology aware layer growth algorithms.
When selecting an idle CPU, the current logic tries to keep tasks
local to the LLC/NUMA node. However, for certain growth algorithms (e.g.,
RoundRobin) this is suboptimal. Adding the layer growth algorithm
will allow for different CPU selection paths in the idle/preemption
code.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-25 12:00:24 -07:00
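An illustrative sketch of carrying the growth algorithm in the BPF-side layer config so idle CPU selection can branch on it; the enum values and struct layout are hypothetical, not the actual scx_layered definitions:

    #include <stdbool.h>

    enum layer_growth_algo {
        GROWTH_ALGO_STICKY,
        GROWTH_ALGO_ROUND_ROBIN,
        GROWTH_ALGO_BIG_LITTLE,
    };

    struct layer_config {
        unsigned int growth_algo;    /* enum layer_growth_algo, set from userspace */
    };

    /* Round-robin style growth deliberately spreads a layer across the
     * machine, so LLC/NUMA-local idle CPU picks would work against it. */
    static bool growth_prefers_locality(const struct layer_config *cfg)
    {
        return cfg->growth_algo != GROWTH_ALGO_ROUND_ROBIN;
    }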
likewhatevs
99d1179866
enable IDEs etc. to work on bpf.c files (#668)
* enable IDEs etc. to work on the bpf.c files
This makes it so that clangd and IDE tools which use clangd
can work on the bpf.c code.

Nothing should actually be changed outside of that IDE/editor
environment; all the changes are ifdef'ed on LSP, which is set
in the added .clangd file.

* move intf include out of both sides of ifdef toggle
2024-09-24 16:55:02 -04:00
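An illustrative example of the pattern: the LSP macro is defined only by the added .clangd file, so normal builds are unaffected while clangd sees include paths it can resolve. The exact paths here are hypothetical:

    #ifdef LSP
    #define __bpf__
    #include "../../../include/scx/common.bpf.h"
    #else
    #include <scx/common.bpf.h>
    #endif

    #include "intf.h"    /* shared interface header, kept outside the toggle */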
Daniel Hodges
2805bb77a5
Merge pull request #683 from hodgesds/layered-idle-topo
scx_layered: Make layered idle CPU selection topology aware
2024-09-24 16:37:23 -04:00
Daniel Hodges
679dd5920c scx_layered: Fix comment
Fix comment to be more accurate.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-24 12:31:03 -07:00
Daniel Hodges
87c6e276d9 scx_layered: Restrict idle selection to layer cpus
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-24 09:40:38 -07:00
Daniel Hodges
e68fccd26c scx_layered: Update comments on layer preemption
Update comments on layer preemption to be more descriptive.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-24 08:24:04 -07:00
Daniel Hodges
6b966cda0c scx_layered: Restrict preemption to layer cpumask
When preempting, restrict preemption to the current layer's cpumask. This
may reduce the amount of preemption, but results in better cache locality
for preempted tasks.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-24 06:23:01 -07:00
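A sketch of the cpumask check, assuming the layer's cpumask is already available as a plain const struct cpumask * (e.g. via the cast_mask() helper referenced elsewhere in this log); the helper name is illustrative:

    #include <scx/common.bpf.h>

    static bool try_preempt_cpu(s32 cand, const struct cpumask *layer_cpumask)
    {
        /* Preempting a CPU outside the layer would pull the preempted task
         * away from the layer's cache footprint, so skip such candidates. */
        if (!layer_cpumask || !bpf_cpumask_test_cpu(cand, layer_cpumask))
            return false;

        scx_bpf_kick_cpu(cand, SCX_KICK_PREEMPT);
        return true;
    }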
I Hsin Cheng
61cb3f7fc5 scx_common_bpf: Append cast_mask()
Remove the cast_mask() function duplicated throughout different schedulers
and add it to common.bpf.h so every scheduler can reference it whenever
needed.

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-09-24 16:01:19 +08:00
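The helper being consolidated is typically the one-line cast below (as commonly defined in BPF schedulers); shown here for reference:

    static __always_inline const struct cpumask *cast_mask(struct bpf_cpumask *mask)
    {
        return (const struct cpumask *)mask;
    }

It lets a struct bpf_cpumask * be passed to helpers and kfuncs that expect a const struct cpumask *.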
Changwoo Min
b1bc4033b4
Merge pull request #673 from multics69/lavd-prop-lat-cri
scx_lavd: propagate waker's latency criticality to its wakee
2024-09-24 07:34:07 +09:00
Daniel Hodges
29fb647c93 scx_layered: Refactor idle core selection
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-23 12:01:42 -07:00
Daniel Hodges
380fd1f3b3 scx_layered: Make idle select topology aware
Make idle CPU selection topology aware.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-23 10:10:43 -07:00
Daniel Hodges
35477970bd scx_layered: Cleanup dump format
Clean up the dump format for topology aware dumps in scx_layered.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-23 10:02:49 -07:00
Daniel Hodges
8b14e48994
Merge pull request #671 from hodgesds/layered-last-waker
scx_layered: Add waker stats per layer
2024-09-23 10:58:54 -04:00
Changwoo Min
71fa92cf1c scx_lavd: propagate waker's latency criticality to its wakee
If a waker is more latency critical than its wakee, the wakee inherits the
waker's latency criticality. This allows the wakee to consider the
context of who woke it up. For now, we limit such inheritance to one
hop and one schedule.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-23 12:56:16 +09:00
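A sketch of the one-hop inheritance described above; the field names and helpers are illustrative, not the actual scx_lavd code:

    #include <scx/common.bpf.h>

    struct task_ctx {
        u64 lat_cri;          /* the task's own latency criticality */
        u64 lat_cri_waker;    /* inherited from the last waker, one schedule only */
    };

    /* Called on wakeup: only inherit upward, a less critical waker must not
     * demote the wakee. */
    static void inherit_lat_cri(const struct task_ctx *waker, struct task_ctx *wakee)
    {
        if (waker->lat_cri > wakee->lat_cri)
            wakee->lat_cri_waker = waker->lat_cri;
    }

    /* At the next schedule the inherited value is consumed and cleared,
     * which limits the boost to one hop and one schedule. */
    static u64 effective_lat_cri(struct task_ctx *tctx)
    {
        u64 cri = tctx->lat_cri;

        if (tctx->lat_cri_waker > cri)
            cri = tctx->lat_cri_waker;
        tctx->lat_cri_waker = 0;
        return cri;
    }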
Changwoo Min
ad8536b4a4
Merge pull request #670 from multics69/lavd-opt-preemption
scx_lavd: find a victim cpu for preemption within task's compute domain
2024-09-23 10:22:08 +09:00
Daniel Hodges
91d32663bd
scx_layered: Refactor waker tracking to only use last waker
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 18:05:54 -04:00
Daniel Hodges
1a2f82b91c
Merge pull request #666 from hodgesds/layered-local-llc
scx_layered: Add topology aware preemption
2024-09-22 17:36:32 -04:00
Daniel Hodges
326f3b7988
Merge pull request #667 from hodgesds/layered-pcore-grow
scx_layered: Add Big/Little core growth algos
2024-09-22 16:59:42 -04:00
Daniel Hodges
1ac9712d2e
scx_layered: Refactor preemption into a separate function
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:54:11 -04:00
Daniel Hodges
bc34bd867b
scx_layered: Add option to enable XNUMA preemption
Disable XNUMA preemption by default and add an option to enable it.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:52:57 -04:00
Daniel Hodges
e105d9f8b1
scx_layered: Use cast_mask helper
Use the cast_mask helper to clean up some of the bpf cpumask conversion
code for preemption.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:52:57 -04:00
Daniel Hodges
5d9d32b65c
scx_layered: Add stats for XLLC/XNUMA preemptions
Add stats for XLLC/XNUMA preemptions.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:52:57 -04:00
Daniel Hodges
c15ecbb3a4
scx_layered: Add topology aware preemption
Add topology aware preemption that begins in the local LLC and attempts
to preempt from CPUs nearest in the topology.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 16:52:56 -04:00
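A sketch of the expanding search order, which also reflects the XNUMA option above: try the local LLC first, then the local NUMA node, and cross NUMA boundaries only when enabled. The mask sources, CPU bound, and the simplified "kick the first CPU" policy are illustrative:

    #include <scx/common.bpf.h>

    #define MAX_CPUS 1024

    /* Kick the first CPU found in @mask; the real "can this CPU's task be
     * preempted" policy is elided from this sketch. */
    static bool try_preempt_in(const struct cpumask *mask)
    {
        s32 cpu;

        bpf_for(cpu, 0, MAX_CPUS) {
            if (!bpf_cpumask_test_cpu(cpu, mask))
                continue;
            scx_bpf_kick_cpu(cpu, SCX_KICK_PREEMPT);
            return true;
        }
        return false;
    }

    static bool preempt_topo_aware(const struct cpumask *llc_mask,
                                   const struct cpumask *node_mask,
                                   const struct cpumask *all_mask,
                                   bool xnuma_enabled)
    {
        /* Nearest first: preempting within the local LLC keeps the most
         * cache state warm for the task being placed. */
        if (try_preempt_in(llc_mask))
            return true;
        if (try_preempt_in(node_mask))
            return true;
        return xnuma_enabled && try_preempt_in(all_mask);
    }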
Daniel Hodges
6fb2f0b2b4
scx_layered: Clean up waker code
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 06:43:10 -04:00
Daniel Hodges
c55b2c6e69
scx_layered: Add waker stats per layer
Update the task context to keep a mask of wakers and add stats for wakes
across layers.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-22 06:43:03 -04:00
Daniel Hodges
140a101874
Merge pull request #449 from hodgesds/layered-dsq-fixes
scx_layered: Add a hi fallback dsq per llc
2024-09-22 06:39:46 -04:00
Changwoo Min
13a68465bf common: add bpf_cpumask_weight() to common.bpf.h
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-22 12:47:38 +09:00
Changwoo Min
7321a89724 scx_lavd: find a victim cpu for preemption within task's compute domain
Previously, we found a victim among all CPUs, which may include remote
or incompatible CPUs. Now we limit the victim search to a task's compute
domain.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-22 12:47:18 +09:00
Changwoo Min
a13082c2b8
Merge pull request #669 from multics69/lavd-opt-select-cpu
scx_lavd: consider waker's CPU when ops.select_cpu()
2024-09-22 09:16:06 +09:00
Andrea Righi
897977bbc1
Merge pull request #663 from vax-r/bpfland_fix
scx_bpfland: Remove the usage of cast_mask in bpfland_enqueue
2024-09-21 22:15:11 +02:00
Changwoo Min
8d8d8f9f61 scx_lavd: consider waker's CPU when ops.select_cpu()
In case of a sync wake-up, also consider the waker's CPU to improve cache
locality.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-22 01:57:49 +09:00
Daniel Hodges
4aa841de0a
scx_layered: Rename HI_FALLBACK_DSQ to HI_FALLBACK_DSQ_BASE
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-20 17:28:38 -04:00