Commit Graph

2223 Commits

Author SHA1 Message Date
I Hsin Cheng
72ecf3c8e3 scx_rusty: Temporary fix of duplicate active tptr
Under severe load-imbalance scenarios, such as mixtures of CPU-intensive
and I/O-intensive workloads, the same tptr may be written into the same
dom_active_tptrs array.

This leads to load balancer failures: when the tptr's task carries a large
enough load, it tends to be selected repeatedly, so warnings about the
same tptr being set in "lb_data" keep popping up.

Use a workaround for now: keep a HashSet in userspace recording the
currently active tptrs under each domain, and do not generate the same
task repeatedly.

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-11-19 22:03:18 +08:00
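The HashSet workaround above can be sketched as follows. This is a minimal illustrative version, not scx_rusty's actual code; `DomainBalancer`, `try_activate`, and `deactivate` are hypothetical names standing in for the per-domain bookkeeping the commit describes.

```rust
use std::collections::HashSet;

// Hypothetical sketch of the workaround: track active tptrs per domain in
// userspace and skip generating a balance entry for a tptr that is already
// active, so the same tptr is never pushed into lb_data twice.
struct DomainBalancer {
    active_tptrs: HashSet<u64>, // tptrs currently active in this domain
}

impl DomainBalancer {
    fn new() -> Self {
        Self { active_tptrs: HashSet::new() }
    }

    /// Returns true only the first time a tptr is seen while active;
    /// a duplicate is rejected instead of being generated again.
    fn try_activate(&mut self, tptr: u64) -> bool {
        self.active_tptrs.insert(tptr) // false if already present
    }

    fn deactivate(&mut self, tptr: u64) {
        self.active_tptrs.remove(&tptr);
    }
}
```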
Tejun Heo
489ce8a766
Merge pull request #939 from sched-ext/htejun/layered-updates
scx_layered: Work around verification failure in antistall_set() on o…
2024-11-19 08:02:03 +00:00
Tejun Heo
dbcd233f17 scx_layered: Work around verification failure in antistall_set() on old kernels
In earlier kernels, the iterator variable wasn't trusted, making the verifier
choke on calling kfuncs on its dereferences. Work around this by re-looking
up the task by PID.
2024-11-18 21:37:36 -10:00
Changwoo Min
61f378c1cd
Merge pull request #931 from multics69/lavd-osu
scx_lavd: Factor the task's runtime more aggressively in a deadline calculation
2024-11-19 00:29:33 +00:00
Tejun Heo
88c7d47314
Merge pull request #934 from sched-ext/htejun/layered-updates
scx_layered: Cleanups around topology handling
2024-11-18 23:12:48 +00:00
Tejun Heo
aec9e86797 Merge branch 'main' into htejun/layered-updates 2024-11-18 12:19:42 -10:00
Tejun Heo
10bf25a65f topology, scx_layered: Make --disable-topology handling more consistent
When --disable-topology is specified, the topology information (e.g. llc map)
supplied to the BPF code disagrees with how the scheduler operates, requiring
code paths to be split unnecessarily and making things error-prone (e.g.
layer_dsq_id() returned the wrong value with --disable-topology).

- Add Topology::with_flattened_llc_node() which creates a dummy topo with one
  llc and node regardless of the underlying hardware, and make layered use it
  when --disable-topology is specified.

- Add explicit nr_llcs == 1 handling to layer_dsq_id() to generate better
  code when topology is disabled and remove explicit disable_topology
  branches in the callers.

- Fix layer->cache_mask when a layer doesn't explicitly specify nodes and
  drop the disable_topology branch in layered_dump().
2024-11-18 12:19:01 -10:00
Daniel Hodges
ff0e9c621c
Merge pull request #933 from hodgesds/layered-verifier-nested
scx_layered: Fix verifier issues on older kernels
2024-11-18 21:48:59 +00:00
Daniel Hodges
1869dd8a2d scx_layered: Fix verifier issues on older kernels
On 6.9 kernels the verifier is not able to track `struct bpf_cpumasks`
properly on nested structs. Move the cpumasks from the `cached_cpus`
struct back to the `task_ctx` struct so older versions of the verifier
can pass.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-18 13:18:57 -08:00
Tejun Heo
68e1741351 scx_layered: Use cached cpu_ctx->hi_fallback_dsq_id and cpu_ctx->cached_idx
- Remember hi_fallback_dsq_id for each CPU in cpu_ctx and use the remembered
  values.

- Make antistall_scan() walk each hi fallback DSQ once instead of multiple
  times through CPU iteration.

- Remove unused functions.
2024-11-18 10:09:24 -10:00
Tejun Heo
827af0b7ef scx_layered: Fix dsq_id indexing bugs
keep_running() and antistall_scan() were incorrectly assuming that
layer->index equals DSQ ID. Fix them. Also, while at it, remove a compile
warning around a cpumask cast.
2024-11-18 09:53:39 -10:00
Tejun Heo
f2c9e7fddd scx_layered: Don't use tctx->last_cpu when picking target llc
It's confusing to use tctx->last_cpu for making active choices as it makes
layered deviate from other schedulers unnecessarily. Use last_cpu only for
migration accounting in layered_running().

- In layered_enqueue(), layered_select_cpu() already returned prev_cpu for
  non-direct-dispatch cases and the CPU the task is currently on should
  match tctx->last_cpu. Use task_cpu instead.

- In keep_running(), the current CPU always matches tctx->last_cpu. Always
  use bpf_get_smp_processor_id().
2024-11-18 09:16:08 -10:00
Tejun Heo
519a27f920
Merge pull request #932 from sched-ext/htejun/layered-updates
scx_layered: Don't limit antistall execution to layered_cpumask
2024-11-18 18:51:25 +00:00
Tejun Heo
ce300101ed scx_layered: Don't limit antistall execution to layered_cpumask
A task may end up in a layer which doesn't have any CPUs that are allowed
for the task. Such tasks are accounted as affinity violations and put onto a
fallback DSQ. When antistall_set() is trying to find the CPU to run a
stalled DSQ, it ignores CPUs that are not in the first task's
layered_cpumask. This makes antistall skip stalling DSQs that have
affinity-violating tasks at the front.

Consider all allowed CPUs for affinity violating tasks. While at it, combine
the two if blocks to set antistall to improve readability.
2024-11-18 08:41:20 -10:00
Tejun Heo
77eec19792
Merge pull request #929 from sched-ext/htejun/layered-updates
scx_layered: Perf improvements and a bug fix
2024-11-18 17:41:40 +00:00
Tejun Heo
65b49f8d30
Merge pull request #928 from purplewall1206/patch-1
fix compile errors
2024-11-18 17:35:50 +00:00
Tejun Heo
8e6e3de639
Merge branch 'main' into patch-1 2024-11-18 05:23:51 -10:00
Andrea Righi
a7fcda82cc
Merge pull request #924 from sched-ext/scx-fair
scheds: introduce scx_flash
2024-11-18 08:21:36 +00:00
Andrea Righi
5b4b6df5e4
Merge branch 'main' into scx-fair 2024-11-18 07:42:09 +01:00
Changwoo Min
3292be7b72 scx_lavd: Factor the task's runtime more aggressively in a deadline calculation
Instead of using a constant runtime value in the deadline calculation,
use the adjusted runtime value of a task. Since tasks' runtime values
follow a highly skewed distribution, convert it to a mildly skewed
distribution to avoid stalls. This resolves the audio breaking issue in
osu! under heavy background workloads.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-11-18 11:55:12 +09:00
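One way to picture the skew conversion described above: a logarithmic transform compresses outliers so a single long-running task cannot dominate the deadline term. This is a hedged illustration of the general idea only, not necessarily scx_lavd's exact formula; `tame_runtime` is a hypothetical name.

```rust
// Illustrative log2-style compression of a heavily skewed runtime
// distribution: 64 - leading_zeros yields the bit width of the value,
// turning an exponential spread into a roughly linear one.
fn tame_runtime(runtime_ns: u64) -> u64 {
    64 - runtime_ns.leading_zeros() as u64
}
```

A task that ran 1,000x longer ends up only ~10 "units" apart after the transform, rather than 1,000x, which keeps deadlines bounded and avoids stalls.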
Tejun Heo
56e0dae81d scx_layered: Fix linter disagreement 2024-11-17 06:03:30 -10:00
Tejun Heo
93a0bc9969 scx_layered: Fix consume_preempting() when --local-llc-iteration
consume_preempting() wasn't testing layer->preempt when
--local-llc-iterations was set, ending up treating all layers as preempting
layers and often leading to HI fallback starvation under saturation. Fix
it.
2024-11-17 05:54:03 -10:00
Tejun Heo
51d4945d69 scx_layered: Don't call scx_bpf_cpuperf_set() unnecessarily
layered_running() is calling scx_bpf_cpuperf_set() whenever a task of a
layer w/ a cpuperf setting starts running, which can be every task switch.
There's no reason to repeatedly call it with the same value. Remember the
last value and call iff the new value is different.

This reduces the bpftop reported CPU consumption of scx_bpf_cpuperf_set()
from ~1.2% to ~0.7% while running rd-hashd at full CPU saturation on Ryzen
3900x.
2024-11-16 05:45:44 -10:00
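The caching pattern from this commit is generic: remember the last value passed to an expensive setter and skip the call when it hasn't changed. A minimal Rust sketch, where the counter stands in for the `scx_bpf_cpuperf_set()` kfunc call (the real code lives in BPF C; `PerfCtl` is an illustrative name):

```rust
// Cache the last value handed to an expensive setter and only call
// through when the value actually changes.
struct PerfCtl {
    last: Option<u32>, // last cpuperf value applied, None if never set
    calls: usize,      // stand-in for actual scx_bpf_cpuperf_set() calls
}

impl PerfCtl {
    fn set(&mut self, perf: u32) {
        if self.last == Some(perf) {
            return; // same value as last time; skip the expensive call
        }
        self.calls += 1; // would be scx_bpf_cpuperf_set(cpu, perf) in BPF
        self.last = Some(perf);
    }
}
```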
Andrea Righi
678b10133d scheds: introduce scx_flash
Introduce scx_flash (Fair Latency-Aware ScHeduler), a scheduler that
focuses on ensuring fairness among tasks and performance predictability.

This scheduler is introduced as a replacement of the "lowlatency" mode
in scx_bpfland, that has been dropped in commit 78101e4 ("scx_bpfland:
drop lowlatency mode and the priority DSQ").

scx_flash operates based on an EDF (Earliest Deadline First) policy,
where each task is assigned a latency weight. This weight is adjusted
dynamically, influenced by the task's static weight and how often it
releases the CPU before its full assigned time slice is used: tasks that
release the CPU early receive a higher latency weight, granting them
a higher priority over tasks that fully use their time slice.

The combination of dynamic latency weights and EDF scheduling ensures
responsive and stable performance, even in overcommitted systems, making
the scheduler particularly well-suited for latency-sensitive workloads,
such as multimedia or real-time audio processing.

Tested-by: Peter Jung <ptr1337@cachyos.org>
Tested-by: Piotr Gorski <piotrgorski@cachyos.org>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-11-16 14:49:25 +01:00
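The latency-weight mechanism described above can be sketched roughly as follows. This is an illustrative model of the stated policy, not scx_flash's actual code; every name and constant here is an assumption.

```rust
// Illustrative EDF-with-latency-weights sketch: tasks that release the
// CPU before exhausting their slice earn a higher latency weight, which
// shortens their virtual deadline relative to slice-hogging tasks.
struct Task {
    static_weight: u64,  // weight derived from the task's nice value
    latency_weight: u64, // dynamically adjusted latency weight
    avg_used_slice: u64, // average ns actually consumed per slice
}

const SLICE_NS: u64 = 5_000_000; // assumed full time-slice length

fn update_latency_weight(t: &mut Task) {
    // The earlier a task releases the CPU, the larger the boost on top
    // of its static weight.
    let unused = SLICE_NS.saturating_sub(t.avg_used_slice);
    t.latency_weight = t.static_weight + t.static_weight * unused / SLICE_NS;
}

fn deadline(now_ns: u64, t: &Task) -> u64 {
    // Higher latency weight => earlier deadline => dispatched sooner.
    now_ns + SLICE_NS * 100 / t.latency_weight.max(1)
}
```

Under this model an interactive task that uses 1 ms of a 5 ms slice gets a larger latency weight, and therefore an earlier deadline, than a task that always runs its slice to completion.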
ppw
c7faf70a26
fix compile errors 2024-11-16 15:56:20 +08:00
Tejun Heo
75dd81e3e6 scx_layered: Improve topology aware select_cpu()
- Cache llc and node masked cpumasks instead of calculating them each time.
  They're recalculated only when the task has migrated across the matching
  boundary and recalculation is necessary.

- llc and node masks should be taken from the wakee's previous CPU not the
  waker's CPU.

- idle_smtmask is already considered by scx_bpf_pick_idle_cpu(). No need to
  AND it in manually.

- big_cpumask code updated to be simpler. This should also be converted to
  use cached cpumask. big_cpumask portion is not tested.

This brings down CPU utilization of select_cpu() from ~2.7% to ~1.7% while
running rd-hashd at saturation on Ryzen 3900x.
2024-11-15 16:29:47 -10:00
Tejun Heo
2b52d172d4 scx_layered: Encapsulate per-task layered cpumask caching
and fix build warnings while at it. Maybe we should drop const from
cast_mask().
2024-11-15 14:30:03 -10:00
Tejun Heo
1293ae21fc scx_layered: Stat output format update
Rearrange things a bit so that lines are not too long.
2024-11-15 13:38:56 -10:00
66223bf235
Merge pull request #926 from JakeHillion/pr926
layered: split out common parts of LayerKind
2024-11-15 22:48:10 +00:00
Jake Hillion
d35d5271f5 layered: split out common parts of LayerKind
We duplicate the definition of most fields in every layer kind. This makes
reading the config harder than it needs to be, and turns every simple read of a
common field into a `match` statement that is largely redundant.

Utilise `#[serde(flatten)]` to embed a common struct into each of the LayerKind
variants. Rather than matching on the type this can be directly accessed with
`.kind.common()` and `.kind.common_mut()`. Alternatively, you can extend
existing matches to match out the common parts as demonstrated in this diff
where necessary.

There is some further code cleanup that can be done in the changed read sites,
but I wanted to make it clear that this change doesn't change behaviour, so
tried to make these changes in the least obtrusive way.

Drive-by: fix the formatting of the lazy_static section in main.rs by using
`lazy_static::lazy_static`.

Test plan:
```
# main
$ cargo build --release && target/release/scx_layered --example /tmp/test_old.json
# this change
$ cargo build --release && target/release/scx_layered --example /tmp/test_new.json
$ diff /tmp/test_{old,new}.json
# no diff
```
2024-11-15 21:57:22 +00:00
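The shape of the refactor above, minus serde, can be sketched like this: common fields live in one struct embedded in every variant, and a `common()` accessor replaces the per-field `match` at read sites. The field and variant names below are illustrative approximations, not the real config types; in the actual change, `#[serde(flatten)]` on the embedded struct keeps the JSON layout flat.

```rust
// Common fields shared by every layer kind, factored into one struct.
struct LayerCommon {
    slice_us: u64,
    preempt: bool,
}

// Each variant embeds the common struct instead of duplicating fields.
enum LayerKind {
    Confined { common: LayerCommon, cpus_range: (usize, usize) },
    Open { common: LayerCommon },
}

impl LayerKind {
    // One or-pattern match replaces a redundant match at every read site.
    fn common(&self) -> &LayerCommon {
        match self {
            LayerKind::Confined { common, .. }
            | LayerKind::Open { common } => common,
        }
    }
}
```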
Daniel Hodges
90164160a2
Merge pull request #925 from hodgesds/layered-lol
scx_layered: Fix formatting
2024-11-15 17:01:56 +00:00
Daniel Hodges
1afb7d5835 scx_layered: Fix formatting
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-15 08:54:05 -08:00
Daniel Hodges
79125ef613
Merge pull request #919 from hodgesds/layered-dispatch-local
scx_layered: Consume from local LLCs for dispatch
2024-11-15 14:29:09 +00:00
Daniel Hodges
3a3a7d71ad
Merge branch 'main' into layered-dispatch-local 2024-11-14 16:10:12 -05:00
Daniel Hodges
db46e27651
Merge pull request #923 from hodgesds/layered-dsq-preempt-fix
scx_layered: Fix cost accounting for fallback dsqs
2024-11-14 13:47:11 +00:00
Daniel Hodges
4fc0509178 scx_layered: Add flag to control llc iteration on dispatch
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-13 12:43:45 -08:00
Daniel Hodges
0096c0632b scx_layered: Fix cost accounting for dsqs
Fix cost accounting for fallback DSQs on refresh so that DSQ budgets
get refilled appropriately. Add helper functions for converting between
a DSQ id and an LLC budget id. During preemption, a layer should check
whether the layer it is attempting to preempt from has more budget, and
only preempt if the preempting layer has more budget.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-13 07:23:53 -08:00
Daniel Hodges
72f21dba06
Merge pull request #922 from hodgesds/layered-cost-dump-fixes
scx_layered: Fix dump format
2024-11-12 18:28:31 +00:00
Daniel Hodges
76310497bb
Merge pull request #921 from hodgesds/layered-formatting-fix
scx_layered: Fix formatting
2024-11-12 18:26:50 +00:00
Daniel Hodges
f7009f7960 scx_layered: Fix dump format
Fix a small bug where incorrect per CPU costs were being dumped. The
output format should now appropriately match the per-CPU costs. The
following dump shows the correct format:

    HI_FALLBACK[1024] nr_queued=46 -25755ms
    HI_FALLBACK[1025] nr_queued=43 -25947ms
    LO_FALLBACK nr_queued=0 -0ms
    COST GLOBAL[0][random] budget=16791955959896739
    capacity=16791955959896739
    COST GLOBAL[1][hodgesd] budget=16791955959896739
    capacity=16791955959896739
    COST GLOBAL[2][stress-ng] budget=43243243243243243
    capacity=43243243243243243
    COST GLOBAL[3][normal] budget=33583911919793478
    capacity=33583911919793478
    COST FALLBACK[1024][0] budget=16791955959896739
    capacity=16791955959896739
    COST FALLBACK[1025][1] budget=16791955959896739
    capacity=16791955959896739
    COST CPU[0][0][random] budget=5405405405405405 capacity=5405405405405405
    COST CPU[0][1][hodgesd] budget=2702702694605435
    capacity=2702702702702702
    COST CPU[0][2][stress-ng] budget=540514231324919
    capacity=540540540540540
    COST CPU[0][3][normal] budget=5405405342325615 capacity=5405405405405405
    COST CPU[0]FALLBACK[0][1024] budget=0 capacity=5405405405405405
    COST CPU[0]FALLBACK[1][1025] budget=1 capacity=2702702694605435
    COST CPU[1][0][random] budget=5405405405405405 capacity=5405405405405405
    COST CPU[1][1][hodgesd] budget=2702702675501951
    capacity=2702702702702702
    COST CPU[1][2][stress-ng] budget=540514250569731
    capacity=540540540540540

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-12 10:22:17 -08:00
Daniel Hodges
ff15f257be scx_layered: Fix formatting
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-12 10:09:22 -08:00
Daniel Hodges
673316827b
Merge pull request #918 from hodgesds/layered-slice-helper
scx_layered: Add helper for layer slice duration
2024-11-11 18:24:43 +00:00
Daniel Hodges
775d09ae1f scx_layered: Consume from local LLCs for dispatch
When dispatching, consume from DSQs in the local LLC first before trying
remote DSQs. This should still be fair as the layer iteration order will
be maintained.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-11 09:22:03 -08:00
Daniel Hodges
4fb05d9252
Merge pull request #920 from hodgesds/layered-consume-fix
scx_layered: Fix error in dispatch consumption
2024-11-11 16:43:25 +00:00
Daniel Hodges
b2505e74df
Merge branch 'main' into layered-consume-fix 2024-11-11 11:29:43 -05:00
Daniel Hodges
1ed387d7f3 scx_layered: Fix error in dispatch consumption
Fix a bug in consume_non_open where it improperly returns 0 when the DSQ
is not consumed.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-11 08:19:54 -08:00
Daniel Hodges
cad3413886 scx_layered: Add helper for layer slice duration
Add a helper for returning the appropriate slice duration for a layer
and replace the various instances where the slice value was being
recalculated.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-11-11 06:11:32 -08:00
likewhatevs
835f0d0e6a
Merge pull request #890 from likewhatevs/layered-dsq-timer
scx_layered: add timer antistall
2024-11-09 01:39:35 +00:00
Pat Somaru
89f4aa1351
scx_layered: add antistall
add timer-based antistall to scx_layered, along with new flags to
enable/disable it and to specify the seconds of delay before it
kicks in.

also update the ci config to make sure this verifies/runs.
2024-11-08 20:31:02 -05:00
Tejun Heo
38512bfce8
Merge pull request #916 from sched-ext/htejun/scx_layered-verifier-workaround
scx_layered: Work around older kernels choking on function calls from…
2024-11-08 19:37:36 +00:00