Commit Graph

1066 Commits

Author SHA1 Message Date
Pat Somaru
e0ce4711d4
flatten and simplify dispatch 2024-10-07 18:36:07 -04:00
Daniel Hodges
24fba4ab8d scx_layered: Add idle smt layer configuration
Add support for per-layer configuration of idle CPU selection. This allows
layers to choose whether or not to restrict idle CPU selection to SMT-idle
CPUs.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-07 06:58:54 -07:00
Daniel Hodges
2f280ac025 scx_layered: Use idle smt mask for idle selection
In the non-topology-aware code the idle SMT mask is used for finding idle
CPUs. Update the topology-aware idle selection to also use the idle SMT
mask. In certain benchmarks this can improve performance.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-07 05:40:59 -07:00
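The two idle-SMT commits above boil down to one decision: when a layer restricts itself to SMT-idle CPUs, pick only from the set of CPUs whose whole core is idle. A minimal standalone Rust sketch of that decision, with hypothetical names and plain sets standing in for kernel cpumasks (the real logic lives in the scx_layered BPF code):

```rust
use std::collections::BTreeSet;

/// Hypothetical per-layer setting: restrict idle selection to SMT-idle CPUs.
struct LayerConfig {
    idle_smt_only: bool,
}

/// Pick an idle CPU for a layer. `idle_cpus` holds every idle CPU;
/// `idle_smt_cpus` holds only CPUs whose whole SMT core is idle.
fn pick_idle_cpu(
    cfg: &LayerConfig,
    idle_cpus: &BTreeSet<usize>,
    idle_smt_cpus: &BTreeSet<usize>,
) -> Option<usize> {
    // Prefer a CPU from a fully idle SMT core.
    if let Some(&cpu) = idle_smt_cpus.iter().next() {
        return Some(cpu);
    }
    // Fall back to any idle CPU only if the layer allows it.
    if cfg.idle_smt_only {
        None
    } else {
        idle_cpus.iter().next().copied()
    }
}

fn main() {
    let cfg = LayerConfig { idle_smt_only: true };
    let idle: BTreeSet<usize> = [1, 3].into();
    let idle_smt: BTreeSet<usize> = BTreeSet::new();
    // No fully idle core and the layer restricts itself to SMT-idle CPUs.
    assert_eq!(pick_idle_cpu(&cfg, &idle, &idle_smt), None);
}
```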
Daniel Hodges
30feecc5ae
Merge pull request #743 from hodgesds/layered-big-little-mask
scx_layered: Add big cpumask
2024-10-07 11:05:01 +00:00
Daniel Hodges
d86638ef0b
scx_layered: Add big cpumask
Add a big cpumask to scx_layered and prefer selecting idle big cores when
using the BigLittle growth algo.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-06 14:05:12 -04:00
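A small sketch of the preference the BigLittle growth commit describes: try an idle CPU on a big core first, then any idle CPU. The set representation and function name are illustrative, not the scx_layered implementation:

```rust
use std::collections::BTreeSet;

/// Hypothetical sets: CPUs belonging to big cores and currently idle CPUs.
fn pick_cpu(big_cpus: &BTreeSet<usize>, idle_cpus: &BTreeSet<usize>) -> Option<usize> {
    // Prefer an idle big CPU when growing with the BigLittle algorithm...
    if let Some(&cpu) = idle_cpus.intersection(big_cpus).next() {
        return Some(cpu);
    }
    // ...and only then fall back to any other idle CPU.
    idle_cpus.iter().next().copied()
}

fn main() {
    let big: BTreeSet<usize> = [0, 1, 2, 3].into();
    let idle: BTreeSet<usize> = [3, 5].into();
    assert_eq!(pick_cpu(&big, &idle), Some(3));
}
```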
Andrea Righi
9a29547e5b scx_bpfland: rework lowlatency mode
In lowlatency mode (option --lowlatency) tasks are ordered using a
deadline that is evaluated as the vruntime minus a certain "bonus",
determined as a function of the max time slice and the average amount of
voluntary context switches, to amplify the priority boost of tasks that
voluntarily release the CPU (which are typically interactive).

However, this method can be extremely unfair in some cases: tasks with
short bursts of voluntary context switches may receive a huge priority
boost, making the rest of the system almost unresponsive (see massive
hackbench stress tests for example).

To prevent this, rework the task's deadline logic to use the vruntime and
a "deadline component" that is a function of the average used time slice,
scaled using a dynamic task priority (evaluated from the static task
priority and the task's average amount of voluntary context switches).

This logic seems to prevent excessive prioritization of tasks performing
short intensive bursts of voluntary context switches.

It also makes lowlatency mode in scx_bpfland (somewhat) more similar to
the deadline logic used by scx_rusty.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-05 17:44:09 +02:00
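The reworked deadline above amounts to: deadline = vruntime + a component proportional to the average used time slice and inversely proportional to a dynamic priority built from the static priority plus a clamped voluntary-context-switch rate. A standalone Rust model of that arithmetic, with illustrative constants and field names (not the actual scx_bpfland code):

```rust
/// Illustrative per-task state (not the real scx_bpfland task context).
struct Task {
    vruntime: u64,       // ns
    avg_used_slice: u64, // average used time slice, ns
    static_prio: u64,    // e.g. derived from the task weight
    avg_nvcsw: u64,      // average voluntary context switches per second
}

const MAX_AVG_NVCSW: u64 = 128;

/// Dynamic priority: static priority boosted by the voluntary context
/// switch rate, clamped so short intensive bursts cannot dominate.
fn dyn_prio(task: &Task) -> u64 {
    task.static_prio + task.avg_nvcsw.min(MAX_AVG_NVCSW)
}

/// Deadline = vruntime + a component proportional to the average used
/// time slice and inversely proportional to the dynamic priority.
fn task_deadline(task: &Task) -> u64 {
    task.vruntime + task.avg_used_slice / dyn_prio(task).max(1)
}

fn main() {
    let interactive = Task { vruntime: 1_000_000, avg_used_slice: 200_000, static_prio: 100, avg_nvcsw: 64 };
    let batch = Task { vruntime: 1_000_000, avg_used_slice: 4_000_000, static_prio: 100, avg_nvcsw: 1 };
    // The interactive task gets an earlier deadline, but the boost stays bounded.
    assert!(task_deadline(&interactive) < task_deadline(&batch));
}
```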
Changwoo Min
a673dcf809
Merge pull request #736 from multics69/scx-futex-v1
scx_lavd: split main.bpf.c into multiple files
2024-10-05 13:11:15 +09:00
Pat Somaru
efabcfcdc3
Replace PID with Task Pointer in Rusty
Replace PID with Task Pointer in Rusty

Fixes: #610
2024-10-04 18:06:37 -04:00
Daniel Hodges
c56e60b86a scx_layered: Add better debug output of iter algo
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 11:36:36 -07:00
Daniel Hodges
e1241d6e52 scx_layered: Cleanup layer growth weight limits
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 11:16:58 -07:00
Daniel Hodges
17f9b3f4f3 scx_layered: Cleanup layer infeasible weight calc
Clean up the calculation of the infeasible weight to not use an
unnecessary collect.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 10:12:22 -07:00
Daniel Hodges
0476a10f83 scx_layered: Cleanup from code review
Cleanup from code review.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 10:09:38 -07:00
Daniel Hodges
817e310a31 scx_layered: Add default dsq iter algo
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:58:26 -07:00
Daniel Hodges
7ee12091c3 scx_layered: Add DSQ iteration algo
Add DSQ iteration algorithms.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:58:23 -07:00
Daniel Hodges
6929501aea scx_layered: Refactor stats variable names
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
f066580612 scx_layered: Use dcycle for infeasible weights
Fix a bug so that duty cycle is used for infeasible weight calculations.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
c55d34c319 scx_layered: Cleanup unused metrics
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
c0c4e183f0 scx_layered: Cargo fmt
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
f3b3d4f19c scx_layered: Add weighted layer DSQ iteration
Add a flag to control DSQ iteration across layers by layer weight. This
helps prevent starvation by iterating over layers with the lowest weight
first.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
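A sketch of the iteration order this flag enables, as described in the message: layer DSQs are visited lowest weight first so light layers are not starved behind heavier ones. Names are illustrative:

```rust
/// Hypothetical layer descriptor: the id of its DSQ and its configured weight.
struct Layer {
    dsq_id: u64,
    weight: u32,
}

/// Build the order used when consuming layer DSQs: lowest weight first,
/// per the commit message above.
fn dsq_iter_order(layers: &[Layer]) -> Vec<u64> {
    let mut order: Vec<&Layer> = layers.iter().collect();
    order.sort_by_key(|l| l.weight);
    order.iter().map(|l| l.dsq_id).collect()
}

fn main() {
    let layers = vec![
        Layer { dsq_id: 0, weight: 100 },
        Layer { dsq_id: 1, weight: 10 },
        Layer { dsq_id: 2, weight: 50 },
    ];
    assert_eq!(dsq_iter_order(&layers), vec![1, 2, 0]);
}
```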
Daniel Hodges
bd75ac8dbf scx_layered: Add flags for growth and preemption
Add two new flags `layer_preempt_weight_disable` and
`layer_growth_weight_disable` to disable preemption and layer growth
when the weighted layer load exceeds the configured threshold.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
e48e675cff scx_layered: Remove LoadLedger from stats
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
2518c99bf2 scx_layered: Refactor load calculation
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
54dbf35680 scx_layered: Add weights to userspace layer config
Add weights to userspace layer config.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
07be9dcf59 scx_layered: Add stats for adjusted layer weights
Add stats for layer weights adjusted by the infeasible weights calculation.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
Daniel Hodges
da38d69009 scx_layered: Add layer weights
Add weights to layers and use the infeasible weights crate to properly
apply weights during contention to prevent starvation.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-10-04 09:56:37 -07:00
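Several commits above rely on the infeasible-weights idea: when a layer's weight entitles it to more CPU than its duty cycle can actually consume, cap it and redistribute the excess entitlement to the other layers. A rough standalone Rust model of that adjustment over normalized shares; this is not the scx_utils crate's actual API or algorithm:

```rust
/// One illustrative layer: its configured weight and the maximum fraction of
/// total CPU it can actually consume (roughly its measured duty cycle).
struct Layer {
    weight: f64,
    max_util: f64,
}

/// Compute adjusted ("feasible") shares: a layer never receives more than it
/// can use, and the excess is redistributed among the remaining layers in
/// proportion to their weights. Iterates until no layer exceeds its cap.
fn feasible_shares(layers: &[Layer]) -> Vec<f64> {
    let mut share = vec![0.0; layers.len()];
    let mut capped = vec![false; layers.len()];
    let mut capacity = 1.0; // fraction of total CPU still to distribute

    loop {
        let mut wsum = 0.0;
        for (i, l) in layers.iter().enumerate() {
            if !capped[i] {
                wsum += l.weight;
            }
        }
        if wsum <= 0.0 {
            break;
        }

        let mut newly_capped = false;
        for (i, l) in layers.iter().enumerate() {
            if capped[i] {
                continue;
            }
            let s = capacity * l.weight / wsum;
            if s > l.max_util {
                // Infeasible: cap this layer and return the excess to the pool.
                share[i] = l.max_util;
                capped[i] = true;
                capacity -= l.max_util;
                newly_capped = true;
            } else {
                share[i] = s;
            }
        }
        if !newly_capped {
            break;
        }
    }
    share
}

fn main() {
    let layers = [
        Layer { weight: 80.0, max_util: 0.10 }, // heavy weight, tiny duty cycle
        Layer { weight: 10.0, max_util: 1.00 },
        Layer { weight: 10.0, max_util: 1.00 },
    ];
    let s = feasible_shares(&layers);
    // The first layer is capped at 10%; the other two split the remaining 90%.
    assert!((s[0] - 0.10).abs() < 1e-9 && (s[1] - 0.45).abs() < 1e-9);
}
```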
Ming Yang
d76036b7cb scx_layered: Add Reverse layer growth algo
Add `LayerGrowthAlgo::Reverse`, which is the reverse of the Linear order.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-10-04 09:29:36 -07:00
Ming Yang
35d5c082d5 scx_layered: Break up layer_core_order function
`layer_core_order` provided multiple core growth implementations.

Break it up into smaller functions and attach the methods to
LayerGrowthAlgo. `LayerCoreOrderGenerator` is added to make future
growth algo extensions easy.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-10-04 09:18:01 -07:00
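A compact sketch of what the refactored growth-order generation looks like conceptually, including the new Reverse algo. The enum and variant names come from the commits above; the method and its signature are illustrative:

```rust
/// Subset of the growth algorithms named in the commits above.
enum LayerGrowthAlgo {
    Linear,
    Reverse,
}

impl LayerGrowthAlgo {
    /// Order in which a layer grows onto cores: Linear walks core IDs in
    /// ascending order, Reverse is simply the Linear order reversed.
    fn core_order(&self, nr_cores: usize) -> Vec<usize> {
        let linear: Vec<usize> = (0..nr_cores).collect();
        match self {
            LayerGrowthAlgo::Linear => linear,
            LayerGrowthAlgo::Reverse => linear.into_iter().rev().collect(),
        }
    }
}

fn main() {
    assert_eq!(LayerGrowthAlgo::Linear.core_order(4), vec![0, 1, 2, 3]);
    assert_eq!(LayerGrowthAlgo::Reverse.core_order(4), vec![3, 2, 1, 0]);
}
```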
Ming Yang
29308d4705 scx_layered: Move layer core growth logic to separate module
Move layer core growth logic to separate module for further refactoring.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-10-04 08:51:11 -07:00
Changwoo Min
7c5c83a3a2 scx_lavd: split main.bpf.c into multiple files
As the main.bpf.c file grows, it gets hard to maintain.
So, split it into multiple logical files. There is no
functional change.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-05 00:25:40 +09:00
Ming Yang
658a75df73 scx_layered: Add per layer time slices to stats
Quoting issue #720 description from @hodgesds:

> In `scx_layered` the time slice can be [configured per layer](https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/main.rs#L493).
> This should be added to the
> [`LayerStats`](https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/stats.rs#L51)
> for each layer. During stats
> [refresh](https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/main.rs#L852)
> read the time slice duration (from the bpf skel) to the layer and add it
> to the stats. Finally, update the
> [format](https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/stats.rs#L218)
> method for `LayerStats` to print the per layer time slices.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-10-03 20:56:03 -07:00
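A cut-down Rust sketch of the shape of this change: a per-layer slice field populated during stats refresh and included in the formatted output. Field and helper names are hypothetical, not the real scx_layered stats code:

```rust
/// Cut-down, illustrative version of a per-layer stats record; the real
/// LayerStats struct in scx_layered has many more fields.
struct LayerStats {
    util: f64,
    slice_us: u64, // per-layer time slice, read from the BPF side at refresh
}

impl LayerStats {
    fn format(&self, name: &str) -> String {
        format!("{name}: util={:.1}% slice={}us", self.util * 100.0, self.slice_us)
    }
}

/// Stand-in for reading the layer's current slice duration (ns) out of the
/// BPF skeleton during stats refresh.
fn read_layer_slice_ns(layer_idx: usize) -> u64 {
    20_000_000 + layer_idx as u64 // dummy value for the sketch
}

fn main() {
    let stats = LayerStats {
        util: 0.42,
        slice_us: read_layer_slice_ns(0) / 1000,
    };
    println!("{}", stats.format("batch"));
}
```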
Fredrik Lönnegren
4b290a1757 scx_rusty: fix single dom short-circuit
Remove a short-circuit in cpu_to_dom_id that returns domain id 0 for
any input.

This fixes a crash in scx_rusty when running with a single domain while
any CPU is offline.

Signed-off-by: Fredrik Lönnegren <fredrik@frelon.se>
2024-10-03 20:34:18 +02:00
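An illustrative model of the fixed behavior: the CPU-to-domain lookup answers only for CPUs it actually knows about, instead of short-circuiting to domain 0. The map-based representation is a stand-in for the real scx_rusty data structures:

```rust
use std::collections::BTreeMap;

/// Illustrative CPU -> domain mapping. The bug described above amounted to
/// always answering domain 0 in the single-domain case, even for CPUs that
/// were offline and therefore not mapped at all.
fn cpu_to_dom_id(map: &BTreeMap<usize, usize>, cpu: usize) -> Option<usize> {
    // No single-domain short-circuit: unmapped (e.g. offline) CPUs yield None
    // instead of a bogus domain id.
    map.get(&cpu).copied()
}

fn main() {
    let map: BTreeMap<usize, usize> = [(0, 0), (1, 0)].into();
    assert_eq!(cpu_to_dom_id(&map, 1), Some(0));
    assert_eq!(cpu_to_dom_id(&map, 7), None); // offline CPU, no fake answer
}
```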
Changwoo Min
b1070449b2
Merge pull request #714 from multics69/lavd-hotplug
scx_lavd: support CPU hotplug correctly
2024-10-03 07:35:25 +09:00
Tejun Heo
7402895f4a version: v1.0.5 2024-10-02 08:34:57 -10:00
Daniel Hodges
054352f172
Merge pull request #716 from vax-r/lavd_typo
scx_lavd: Fix typo
2024-10-02 13:21:15 +00:00
I Hsin Cheng
3055716382 scx_rusty: Delete unused function variable
"struct task_struct *p" isn't used within the function
"task_load_adj()". Delete the function parameter for cleaner code.

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-10-02 17:49:13 +08:00
I Hsin Cheng
7fbef2aa0b scx_lavd: Fix typo
Fix "alreay" to "already".

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-10-02 17:39:10 +08:00
Changwoo Min
770a59f69d scx_lavd: support CPU hotplug correctly
Use scx_utils::NR_CPU_IDS to iterate over all possible CPUs and separately
count the number of online CPUs to support CPU hotplug correctly.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-02 14:19:18 +09:00
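A minimal sketch of the pattern described above: iterate over every possible CPU ID (which does not shrink on hotplug) while counting online CPUs separately. NR_CPU_IDS is faked as a constant here; in scx_lavd it comes from scx_utils:

```rust
/// Illustrative stand-in for scx_utils::NR_CPU_IDS: the number of possible
/// CPU IDs, which stays fixed when CPUs go offline.
const NR_CPU_IDS: usize = 8;

fn main() {
    // Online state per possible CPU (faked here; normally read from the system).
    let online = [true, true, false, true, false, false, false, false];

    let mut nr_online = 0;
    // Walk every possible CPU ID so hotplugged CPUs keep valid slots...
    for cpu in 0..NR_CPU_IDS {
        if online[cpu] {
            // ...but count online CPUs separately for load/averaging purposes.
            nr_online += 1;
        }
    }
    assert_eq!(nr_online, 3);
}
```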
Changwoo Min
fb7bc0a850 scx_lavd: Fix incorrect preemptability test
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-10-02 13:24:43 +09:00
Ming Yang
445743487a Add #stat_doc attribute macro to Stats struct
`#stat_doc` generates the doc comment from the stat `desc` property.

Add this attribute macro to the remaining Stats structs.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-09-30 22:12:11 -07:00
Daniel Hodges
c897511c62
scx_layered: Fix compiler warnings
Cleanup various compiler warnings.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-30 20:59:24 -04:00
Tejun Heo
04648bc511
Merge pull request #703 from minosfuture/main
scx_stats: Implement macro #stat_doc to autogen doc from stat desc
2024-09-30 17:58:56 +00:00
Andrea Righi
e966455af2 scx_bpfland: fix task_avg_nvcsw() return type
task_avg_nvcsw() was incorrectly returning a bool instead of u64,
limiting the impact of the lowlatency boost.

Fix it by returning the proper type (u64).

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:32 +02:00
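A tiny illustration of why the wrong return type mattered: in C, pushing a u64 average through a function declared to return bool collapses every non-zero value to 1, so the boost barely varies between tasks:

```rust
/// Model of the effect of the bug described above: the conversion a C bool
/// return type performs on a u64 average.
fn main() {
    let averages: [u64; 3] = [0, 7, 120];
    for avg in averages {
        let as_bool_return = (avg != 0) as u64; // what the bool return type did
        println!("avg nvcsw {avg:>3} -> boost input {as_bool_return}");
    }
}
```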
Andrea Righi
6e24fcc7f0 scx_bpfland: keep tasks running on full-idle SMT cores
When a task is the last one running on a CPU and still wants to
continue, allow it to run and replenish its time slice only if the used
CPU is part of a fully idle SMT core.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:32 +02:00
Andrea Righi
c20a19c946 scx_bpfland: always give tasks a chance to run on an idle CPU
During ttwu, the kernel may decide to skip ->select_task_rq() (e.g.,
when only one CPU is allowed or migration is disabled). This causes
ops.enqueue() to be called directly, without having a chance to call
ops.select_cpu().

Therefore, introduce a new flag (select_cpu_done) in the local task
context to determine if ops.select_cpu() was bypassed and, in that case,
attempt to find an idle CPU directly from ops.enqueue().

In the future this information will be supplied by the kernel through a
special enqueue flag (SCX_ENQ_CPU_SELECTED) [1]. However, the custom
flag in the local task context makes it possible to reliably determine
the same information, even on older kernels where this flag is not
available.

[1] https://lore.kernel.org/lkml/20240928003840.GA2717@maniforge/T

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:19 +02:00
Daniel Hodges
bb560088de
scx_layered: Fix cache initialization cpumask
Fix a bug in cache initialization where the first node would repeatedly
get all CPUs added to the mask. Refactor some consts to be clearer.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-09-29 22:10:08 -04:00
Changwoo Min
ade6931bfc scx_lavd: fix incorrect neighbor_bit initialization
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-29 02:27:40 +09:00
Changwoo Min
6d116208c8 scx_lavd: do not perform the victim selection for an invalid cpu
When finding a victim candidate for preemption, a randomly chosen
candidate could be out of the valid CPU range due to CPU offlining, etc.
In this case, try another CPU randomly.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-29 02:22:18 +09:00
Ming Yang
28bfd2986a scx_stats: Implement #stat_doc to autogen doc from stat desc
The doc of the scx_layered `Opt` struct is out of sync.

Implement the attribute macro #stat_doc to generate doc comments from the
`desc` property.

Apply #stat_doc to `LayerStats` and `SysStats` in scx_layered.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-09-28 09:32:48 -07:00
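A conceptual sketch of what #stat_doc buys: the doc comment is derived from the same `desc` string, so the two can no longer drift apart. The attribute syntax shown in the comment is illustrative, not necessarily the exact scx_stats form:

```rust
// Hand-written today (and easy to let drift out of date):
//
//     #[stat(desc = "CPU utilization of the layer (100% means one full CPU)")]
//     pub util: f64,
//
// With #stat_doc the doc comment below is generated from that same string,
// giving the documentation a single source of truth.

pub struct Example {
    /// CPU utilization of the layer (100% means one full CPU)
    pub util: f64,
}

fn main() {
    let ex = Example { util: 0.0 };
    println!("util = {}", ex.util);
}
```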
Changwoo Min
e8ebc09ced
Merge pull request #702 from multics69/lavd-dyn-pc-thr
scx_lavd: more accurately determine the performance criticality threshold
2024-09-28 04:35:45 +00:00
Changwoo Min
cd7846f4d2 scx_lavd: more accurately determine the performance criticality threshold
We used the average performance criticality of tasks as a threshold to
determine the proper core type (big or little). However, if the big
cores' compute capacity is not half of the total compute capacity, such
an average-based determination becomes suboptimal. If too few tasks are
classified as performance-critical and requested to run on big cores,
the big cores end up stealing arbitrary non-performance-critical tasks
and are effectively wasted. That could result in performance
instability.

Hence, determine the threshold more accurately by considering (active)
big cores' compute capacity and the (approximated) distribution of
performance criticality of tasks.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-09-27 16:56:30 +09:00
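A hedged sketch of the threshold selection described above: pick the performance-criticality value so that the fraction of tasks above it roughly matches the fraction of total compute capacity provided by the (active) big cores. The use of a sorted sample instead of an approximated distribution, and all names, are simplifications:

```rust
/// Illustrative threshold selection: given per-task performance-criticality
/// samples and the share of total compute capacity in big cores, pick the
/// cutoff value so that roughly that share of tasks lands above it.
fn perf_crit_threshold(mut perf_crit: Vec<u64>, big_capacity_ratio: f64) -> u64 {
    assert!(!perf_crit.is_empty());
    perf_crit.sort_unstable();
    // Index below which tasks are treated as non-performance-critical.
    let cut = ((perf_crit.len() as f64) * (1.0 - big_capacity_ratio)).round() as usize;
    perf_crit[cut.min(perf_crit.len() - 1)]
}

fn main() {
    let samples = vec![10, 20, 30, 40, 50, 60, 70, 80, 90, 100];
    // Big cores provide ~30% of total capacity: only the top ~30% of tasks
    // should be classified as performance-critical.
    let thr = perf_crit_threshold(samples, 0.3);
    assert_eq!(thr, 80);
}
```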