The keep_running path relies on the implicit last task enqueue which makes
the statistics a bit difficult to track. Let's make the enqueue path
comprehensive:
- Set SCX_OPS_ENQ_LAST and handle the last runnable task enqueue explicitly.
- Implement layered_cpu_release() to re-enqueue tasks from a CPU preempted
by a higher priority sched class and handle the re-enqueued tasks explicitly
in layered_enqueue() (see the sketch after this list).
- Add more statistics to track all enqueue operations.
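The following is a minimal sketch, not the literal scx_layered code, of the
explicit handling described above; lstat_inc() and the LSTAT_* counters stand
in for whatever stat bookkeeping the scheduler uses.
```c
/* Sketch: SCX_OPS_ENQ_LAST is assumed to be set in the ops flags so the last
 * runnable task is delivered to ops.enqueue() instead of being implicitly
 * kept on the local DSQ. */
void BPF_STRUCT_OPS(layered_cpu_release, s32 cpu, struct scx_cpu_release_args *args)
{
	/* CPU taken over by a higher priority sched class: push the tasks
	 * queued on its local DSQ back through ops.enqueue() */
	scx_bpf_reenqueue_local();
}

void BPF_STRUCT_OPS(layered_enqueue, struct task_struct *p, u64 enq_flags)
{
	if (enq_flags & SCX_ENQ_REENQ)
		lstat_inc(LSTAT_ENQ_REENQ);	/* re-enqueued after CPU release */
	else if (enq_flags & SCX_ENQ_LAST)
		lstat_inc(LSTAT_ENQ_LAST);	/* explicit last-task enqueue */

	/* ... rest of the enqueue path ... */
}
```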
When a task exhausts its slice, layered currently doesn't make any effort to
keep it on the same CPU. It dispatches the next task to run and then
enqueues the running one. This leads to suboptimal behaviors, e.g. when this
happens to a task in a preempting layer, the task will most likely find an
idle CPU or a task to preempt and then migrate there, causing a completely
unnecessary migration.
This patch makes layered_dispatch() test whether the current task should keep
running on the CPU and, if so, skip dispatching to keep the task running. This
behavior depends on the implicit local DSQ enqueue mechanism which triggers
when there are no other tasks to run.
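A sketch of the dispatch-side check, assuming a hypothetical keep_running()
helper that encapsulates the actual decision:
```c
void BPF_STRUCT_OPS(layered_dispatch, s32 cpu, struct task_struct *prev)
{
	/*
	 * If @prev should keep running, dispatch nothing. With no other task
	 * placed on the local DSQ, the implicit last-task enqueue kicks in
	 * and @prev keeps running on this CPU.
	 */
	if (prev && keep_running(prev))
		return;

	/* ... otherwise consume from the layer DSQs as usual ... */
}
```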
- scx_utils: Replace kfunc_exists() with ksym_exists() which doesn't care
about the type of the symbol.
- scx_layered: Fix load failure on kernels >= v6.10-rc due to
scheduler_tick() -> sched_tick rename. Attach the tick fentry function to
either scheduler_tick() or sched_tick().
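A hedged sketch of the dual attach point: two fentry programs, one per symbol
name, with user space assumed to autoload only the one whose ksym exists;
layered_tick_fn() is a hypothetical shared handler.
```c
SEC("fentry/scheduler_tick")	/* kernels < v6.10-rc */
int BPF_PROG(tick_fentry_old)
{
	return layered_tick_fn();
}

SEC("fentry/sched_tick")	/* kernels >= v6.10-rc, after the rename */
int BPF_PROG(tick_fentry_new)
{
	return layered_tick_fn();
}
```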
Make restart handling with user_exit_info simpler and consistently use the
load and report macros across the rust schedulers. This makes all schedulers
automatically handle auto restarts from CPU hotplug events.
Note that this is necessary even for scx_lavd, which has its own CPU hotplug
operations, because hotplug events that take place between skel open and
scheduler init can still trigger a restart.
layered_dispatch() was incorrectly continuing down to the lower priority
DSQs after successfully consuming from HI_FALLBACK_DSQ which can lead to
latency issues. Fix it.
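A sketch of the fix inside layered_dispatch(), returning as soon as
HI_FALLBACK_DSQ yields a task:
```c
/* after the preempting layers have been tried */
if (scx_bpf_consume(HI_FALLBACK_DSQ))
	return;	/* don't fall through to the lower priority DSQs */
```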
The main reason custom affinities are tricky for scx_layered is that
if we put a task which doesn't allow all CPUs into a layer's DSQ, it may not
get consumed for an indefinite amount of time. However, this is only true
for confined layers. Both open and grouped layers are consumed from all
CPUs and thus don't have this risk.
Let's allow tasks with custom affinities in open and grouped layers.
- In select_cpu(), don't consider direct dispatching to a local DSQ as
affinity violation even if the target CPU is outside the layer's cpumask
if the layer is open.
- In enqueue(), separate out the per-cpu kthread special case into its own
block. Note that this is only applied if the layer is not preempting as a
preempting layer has a higher priority than HI_FALLBACK_DSQ anyway.
- Trigger the LO_FALLBACK_DSQ path for other threads only if the layer is
confined.
- The preemption path now also runs for tasks with a custom affinity in open
and grouped layers. Update it so that it only considers the CPUs in the
preempting task's allowed cpumask (see the sketch below).
(cherry picked from commit 82d2f887a4608de61ddf5e15643c10e504a88f7b)
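A hypothetical sketch of the cpumask-restricted preemption scan from the last
point above; nr_possible_cpus and try_preempt_cpu() are placeholders, not
scx_layered's actual helpers.
```c
s32 cpu;

bpf_for(cpu, 0, nr_possible_cpus) {
	/* skip CPUs the preempting task isn't allowed to run on */
	if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr))
		continue;
	if (try_preempt_cpu(cpu, p))
		return;
}
```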
- AFFN_VIOL for per-cpu tasks could be double counted. Once in select_cpu()
and again in enqueue(). Count in select_cpu() only when direct
dispatching.
- Violating tasks were prioritized over non-violating ones because they were
queued on SCX_DSQ_GLOBAL which has priority over all user DSQs. This
doesn't make sense. Let's introduce two fallback DSQs - HI_FALLBACK_DSQ
and LO_FALLBACK_DSQ. HI is used for violating kthreads and LO for
violating user threads. HI is dispatched after preempting layers and LO
after all other layers. This shouldn't change the behavior much for
kthreads while punishing, rather than rewarding, violating user threads
(see the sketch below).
(cherry picked from commit 67f69645667ba8a155cae9a9b7e90c055d39e23c)
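A sketch of the enqueue-side routing described above; slice_ns is an assumed
slice length and the DSQ ids are the ones introduced by this change.
```c
/* affinity-violating tasks: kthreads go to HI, user threads to LO */
if (p->flags & PF_KTHREAD)
	scx_bpf_dispatch(p, HI_FALLBACK_DSQ, slice_ns, enq_flags);
else
	scx_bpf_dispatch(p, LO_FALLBACK_DSQ, slice_ns, enq_flags);
```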
C SCX_OPS_ATTACH() and rust scx_ops_attach() macros were not calling
.attach() and were only attaching the struct_ops. This meant that all
non-struct_ops BPF programs contained in the skels were never attached which
breaks e.g. scx_layered.
Let's fix it by adding an .attach() invocation to the attach macros.
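Roughly what the fixed macros boil down to (the skeleton, map, and
error-handling names below are placeholders, not the actual macro bodies):
```c
/* attach the struct_ops map itself ... */
struct bpf_link *link = bpf_map__attach_struct_ops(skel->maps.layered_ops);
if (!link)
	handle_error("failed to attach struct_ops");

/* ... and also run the skeleton's generated attach so the remaining
 * non-struct_ops BPF programs (fentry, etc.) get attached too */
if (scx_layered__attach(skel))
	handle_error("skel attach failed");
```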
Only the very newest kernels support scx_bpf_cpuperf_set(). Let's update
scx_layered to accommodate older kernels as well.
Signed-off-by: David Vernet <void@manifault.com>
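One way to handle the compatibility (a sketch, not necessarily what
scx_layered does): declare the kfunc weak and guard calls so the program still
verifies and loads on kernels without it.
```c
void scx_bpf_cpuperf_set(s32 cpu, u32 perf) __ksym __weak;

static void try_set_cpuperf(s32 cpu, u32 perf)
{
	/* only call the kfunc if the running kernel actually provides it */
	if (bpf_ksym_exists(scx_bpf_cpuperf_set))
		scx_bpf_cpuperf_set(cpu, perf);
}
```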
This change adds `scx_bpf_cpuperf_cap`, `scx_bpf_cpuperf_cur` and
`scx_bpf_cpuperf_set` definitions that were recently introduced into
[`sched_ext`](https://github.com/sched-ext/sched_ext/pull/180). It adds
a `perf` field to `scx_layered` to allow for controlling performance per
layer.
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
If a library creates threads, those threads will often have the same
name. If two different processes of different priority both use a
library, it may be that we want the library's threads in each process to
be put into different layers.
To support this, let's add the ability to filter not only by task name,
but also by process name via the task thread group leader's comm.
Tested by creating two executables named "foo" and "bar", which both
spawn a bunch of tasks named "exp_worker" that spin until being
interrupted. With this config: https://pastebin.com/Uz2phzxQ, the tasks
were correctly matched to the expected layers.
Signed-off-by: David Vernet <void@manifault.com>
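A sketch of the BPF side of such a matcher, written from memory rather than
copied from the tree; MAX_COMM and match_prefix() are assumed to mirror the
existing comm-prefix matching.
```c
static bool match_pcomm_prefix(struct task_struct *p, const char *prefix)
{
	char pcomm[MAX_COMM];

	/* the thread group leader's comm is the process name */
	if (bpf_probe_read_kernel_str(pcomm, sizeof(pcomm),
				      p->group_leader->comm) < 0)
		return false;
	return match_prefix(prefix, pcomm, sizeof(pcomm));
}
```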
Some people have expressed confusion at this behavior. Let's be a bit
more explicit in the documentation.
Signed-off-by: David Vernet <void@manifault.com>
When I transitioned layered to using task local storage, I messed up
initializing the task ctx, not realizing we previously had a separate
variable that was initializing the hashmap entry. We need to initialize
the task's layer to -1, and also set refresh_layer to 1.
Signed-off-by: David Vernet <void@manifault.com>
In scx_layered, we're using a BPF_MAP_TYPE_HASH map (indexed by pid)
rather than a BPF_MAP_TYPE_TASK_STORAGE, to track local storage for a
task. As far as I can tell, there's no reason we need to be doing this.
We never access the map from user space, and we're even passing a
struct task_struct * to a helper subprog to look up the task context
rather than only doing it by pid.
Using a hashmap is error prone for this because we end up having to
manually track lifecycles for entries in the map rather than relying on
BPF to do it for us. For example, BPF will automatically free a task's
entry from the map when it exits. Let's just use TLS here rather than a
hashmap to avoid issues from this (e.g. we've observed the scheduler
getting evicted because we're accessing a stale map entry after a task
has been destroyed).
Reported-by: Valentin Andrei <vandrei@meta.com>
Signed-off-by: David Vernet <void@manifault.com>
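A sketch of the task local storage map and lookup replacing the pid-keyed
hashmap; struct task_ctx stands in for scx_layered's per-task context.
```c
struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct task_ctx);
} task_ctxs SEC(".maps");

static struct task_ctx *lookup_task_ctx(struct task_struct *p)
{
	/* entries are created in ops.init_task() and freed by BPF itself
	 * when the task exits, so no manual lifecycle tracking is needed */
	return bpf_task_storage_get(&task_ctxs, p, NULL, 0);
}
```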
lookup_task_ctx(), lookup_task_ctx_may_fail(), and lookup_layer()
currently don't have the static keyword, so BPF may treat them as
global functions. We don't actually want these to be global, so let's
make them static to avoid confusing the verifier.
Signed-off-by: David Vernet <void@manifault.com>
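For illustration, a hedged sketch of one of these helpers with the static
keyword added; layers[], nr_layers, and MAX_LAYERS are assumed globals.
```c
/* static: verified as a regular subprog in the caller's context rather than
 * as a standalone global function */
static struct layer *lookup_layer(int idx)
{
	if (idx < 0 || idx >= nr_layers || idx >= MAX_LAYERS) {
		scx_bpf_error("invalid layer %d", idx);
		return NULL;
	}
	return &layers[idx];
}
```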
This is to potentially reduce issues with folks
using different versions of libbpf at runtime.
This also:
- makes static linking of libbpf the default
- adds steps in `meson setup` to fetch libbpf and make it
I have a usecase where specific nice values are used to bucket tasks
into groups that are handled separately by different `scx_layered`
policies, with no implications of relative priority between niceness X,
X + 1, X - 1, etc. In other words, nice values are used as simple tags in
this scenario.
If we wanted to treat a specific niceness this way e.g. `11`, we could
do so with AND'd MATCH_NICE_{ABOVE,BELOW} like so:
```json
"matches" : [
[
{
"NiceAbove": 10
},
{
"NiceBelow": 12
},
],
],
```
But this is unnecessarily verbose and doesn't communicate the intent of
the match very well. Adding a `NiceEquals` matcher simplifies the
config and makes intent obvious:
```json
"matches" : [
[
{
"NiceEquals": 11
},
],
],
```
This PR adds support for such a matcher.
Also, rename `layer_match.nice_above_or_below` to just
`layer_match.nice`, as the former doesn't describe the newly-added
matcher's use of the field. It's still obvious that `layer_match.nice`
is relevant to MATCH_NICE_{ABOVE, BELOW, EQUALS}.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
As mentioned in the previous commit, for some reason we're sometimes
(non-deterministically) not seeing the updated cpumask / layer values in
BPF if we initialize the cpumasks here before attaching. Let's undo this
for now to avoid the buggy behavior until we figure it out.
Signed-off-by: David Vernet <void@manifault.com>
This reverts commit 56ff3437a2.
For some reason we seem to be non-deterministically failing to see the
updated layer values in BPF if we initialize before attaching. Let's
just undo this specific part so that we're unblocked, and figure it out
async.
Signed-off-by: David Vernet <void@manifault.com>
Currently, in layered_dispatch, we do the following:
1. Iterate over all preempt=true layers, and first try to consume from
them.
2. Iterate over all confined layers, and consume from them if we find a
layer with a cpumask that contains the consuming CPU.
3. Iterate over all grouped and open layers and consume from them in
ordered sequence.
In (2), we're only iterating over confined layers, but we should also be
iterating over grouped layers. Otherwise, despite a consuming CPU being
allocated to a specific grouped layer, the CPU will consume from
whichever grouped or open layer has a task that's ready to run.
Signed-off-by: David Vernet <void@manifault.com>
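A hypothetical sketch of step (2) extended to cover grouped layers as well;
layer_cpumask(), layer_dsq_id(), and the kind checks are placeholders rather
than scx_layered's actual layout.
```c
int idx;

bpf_for(idx, 0, nr_layers) {
	struct layer *layer = &layers[idx];

	if (!layer_is_confined(layer) && !layer_is_grouped(layer))
		continue;
	/* only consume from layers whose cpumask owns this CPU */
	if (bpf_cpumask_test_cpu(cpu, layer_cpumask(idx)) &&
	    scx_bpf_consume(layer_dsq_id(idx)))
		return;
}
```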
In layered_init, we're currently setting all bits in every layer's
cpumask, and then asynchronously updating the cpumasks at a later time to
reflect their actual values at runtime. Now that we're updating the
layered code to initialize the cpumasks before we attach the scheduler,
we can instead have the init path actually refresh and initialize the
cpumasks directly.
Signed-off-by: David Vernet <void@manifault.com>
We currently have a bug in layered wherein we could fail to propagate
layer updates from user space to kernel space if a layer is never
adjusted after it's first initialized. For example, in the following
configuration:
[
{
"name": "workload.slice",
"comment": "main workload slice",
"matches": [
[
{
"CgroupPrefix": "workload.slice/"
}
]
],
"kind": {
"Grouped": {
"cpus_range": [30, 30],
"util_range": [
0.0,
1.0
],
"preempt": false
}
}
},
{
"name": "normal",
"comment": "the rest",
"matches": [
[]
],
"kind": {
"Grouped": {
"cpus_range": [2, 2],
"util_range": [
0.0,
1.0
],
"preempt": false
}
}
}
]
Both layers are static, and need only be resized a single time, so the
configuration would never be propagated to the kernel due to us never
calling update_bpf_layer_cpumask(). Let's instead have the
initialization propagate changes to the skeleton before we attach the
scheduler.
This has two advantages: it fixes the bug mentioned above where a static
configuration is never propagated to the kernel, and it avoids a short
period right after the scheduler is first attached where scheduling
decisions are suboptimal because layer resizing hasn't yet been propagated.
Signed-off-by: David Vernet <void@manifault.com>
If we are doing local dispatch, we can avoid enqueue() altogether by
dispatching from select_cpu().
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
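A minimal sketch of the pattern (closer to the stock select_cpu examples than
to scx_layered's full logic):
```c
s32 BPF_STRUCT_OPS(layered_select_cpu, struct task_struct *p, s32 prev_cpu,
		   u64 wake_flags)
{
	bool is_idle;
	s32 cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);

	/* found an idle CPU: dispatch to its local DSQ right here, which
	 * means ops.enqueue() is never called for this wakeup */
	if (is_idle)
		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
	return cpu;
}
```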
This is a really minor optimization, but we don't need idle_smtmask to
schedule pinned tasks, so defer it so the nr_cpus_allowed == 1 path is
marginally faster.
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
idle_cpumask isn't used at all in pick_idle_cpu_from. The only need for
these cpumasks is to check if prev_cpu is a wholly idle CPU (and we only
do this when smt_enabled). idle_smtmask is sufficient for that check.
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
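A sketch of the call site after the change; smt_enabled is an assumed flag.
```c
const struct cpumask *idle_smtmask = scx_bpf_get_idle_smtmask();
/* the SMT mask alone tells us whether prev_cpu's whole core is idle */
bool prev_core_idle = smt_enabled &&
		      bpf_cpumask_test_cpu(prev_cpu, idle_smtmask);

/* ... pick_idle_cpu_from(p, prev_cpu, idle_smtmask) ... */
scx_bpf_put_idle_cpumask(idle_smtmask);
```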
Prior to this patch, we only bumped LSTAT_AFFN_VIOL when the target CPU
was idle, but in both cases it should be counted as AFFN_VIOL.
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
Currently scx_layered outputs statistics periodically as info! logs. The
format of this is largely unstructured and mostly suitable for running
scx_layered interactively (e.g. observing its behavior on the command
line or via logs after the fact).
In order to run scx_layered at larger scale, it's desirable to have
statistics output in some format that is amenable to being ingested into
monitoring databases (e.g. Prometheus). This allows collection of
stats across many machines.
This commit adds a command line flag (-o) that outputs statistics to
stdout in OpenMetrics format instead of the normal log mechanism.
OpenMetrics has a public format
specification (https://github.com/OpenObservability/OpenMetrics) and is
in use by many projects.
The library for producing OpenMetrics metrics is lightweight but does
induce some changes. Primarily, metrics need to be pre-registered (see
OpenMetricsStats::new()).
Without -o, the output looks as before, for example:
```
19:39:54 [INFO] CPUs: online/possible=52/52 nr_cores=26
19:39:54 [INFO] Layered Scheduler Attached
19:39:56 [INFO] tot= 9912 local=76.71 open_idle= 0.00 affn_viol= 2.63 tctx_err=0 proc=21ms
19:39:56 [INFO] busy= 1.3 util= 65.2 load= 263.4 fallback_cpu= 1
19:39:56 [INFO] batch : util/frac= 49.7/ 76.3 load/frac= 252.0: 95.7 tasks= 458
19:39:56 [INFO] tot= 2842 local=45.04 open_idle= 0.00 preempt= 0.00 affn_viol= 0.00
19:39:56 [INFO] cpus= 2 [ 0, 2] 04000001 00000000
19:39:56 [INFO] immediate: util/frac= 0.0/ 0.0 load/frac= 0.0: 0.0 tasks= 0
19:39:56 [INFO] tot= 0 local= 0.00 open_idle= 0.00 preempt= 0.00 affn_viol= 0.00
19:39:56 [INFO] cpus= 50 [ 0, 50] fbfffffe 000fffff
19:39:56 [INFO] normal : util/frac= 15.4/ 23.7 load/frac= 11.4: 4.3 tasks= 556
19:39:56 [INFO] tot= 7070 local=89.43 open_idle= 0.00 preempt= 0.00 affn_viol= 3.69
19:39:56 [INFO] cpus= 50 [ 0, 50] fbfffffe 000fffff
19:39:58 [INFO] tot= 7091 local=84.91 open_idle= 0.00 affn_viol= 2.64 tctx_err=0 proc=21ms
19:39:58 [INFO] busy= 0.6 util= 31.2 load= 107.1 fallback_cpu= 1
19:39:58 [INFO] batch : util/frac= 18.3/ 58.5 load/frac= 93.9: 87.7 tasks= 589
19:39:58 [INFO] tot= 2011 local=60.67 open_idle= 0.00 preempt= 0.00 affn_viol= 0.00
19:39:58 [INFO] cpus= 2 [ 2, 2] 04000001 00000000
19:39:58 [INFO] immediate: util/frac= 0.0/ 0.0 load/frac= 0.0: 0.0 tasks= 0
19:39:58 [INFO] tot= 0 local= 0.00 open_idle= 0.00 preempt= 0.00 affn_viol= 0.00
19:39:58 [INFO] cpus= 50 [ 50, 50] fbfffffe 000fffff
19:39:58 [INFO] normal : util/frac= 13.0/ 41.5 load/frac= 13.2: 12.3 tasks= 650
19:39:58 [INFO] tot= 5080 local=94.51 open_idle= 0.00 preempt= 0.00 affn_viol= 3.68
19:39:58 [INFO] cpus= 50 [ 50, 50] fbfffffe 000fffff
^C19:39:59 [INFO] EXIT: BPF scheduler unregistered
```
With -o passed, the output is in OpenMetrics format:
```
19:40:08 [INFO] CPUs: online/possible=52/52 nr_cores=26
19:40:08 [INFO] Layered Scheduler Attached
# HELP total Total scheduling events in the period.
# TYPE total gauge
total 8489
# HELP local % that got scheduled directly into an idle CPU.
# TYPE local gauge
local 86.45305689716104
# HELP open_idle % of open layer tasks scheduled into occupied idle CPUs.
# TYPE open_idle gauge
open_idle 0.0
# HELP affn_viol % which violated configured policies due to CPU affinity restrictions.
# TYPE affn_viol gauge
affn_viol 2.332430203793144
# HELP tctx_err Failures to free task contexts.
# TYPE tctx_err gauge
tctx_err 0
# HELP proc_ms CPU time this binary has consumed during the period.
# TYPE proc_ms gauge
proc_ms 20
# HELP busy CPU busy % (100% means all CPUs were fully occupied).
# TYPE busy gauge
busy 0.5294061026085283
# HELP util CPU utilization % (100% means one CPU was fully occupied).
# TYPE util gauge
util 27.37195512782239
# HELP load Sum of weight * duty_cycle for all tasks.
# TYPE load gauge
load 81.55024768702126
# HELP layer_util CPU utilization of the layer (100% means one CPU was fully occupied).
# TYPE layer_util gauge
layer_util{layer_name="immediate"} 0.0
layer_util{layer_name="normal"} 19.340849995024997
layer_util{layer_name="batch"} 8.031105132797393
# HELP layer_util_frac Fraction of total CPU utilization consumed by the layer.
# TYPE layer_util_frac gauge
layer_util_frac{layer_name="batch"} 29.34063385422595
layer_util_frac{layer_name="immediate"} 0.0
layer_util_frac{layer_name="normal"} 70.65936614577405
# HELP layer_load Sum of weight * duty_cycle for tasks in the layer.
# TYPE layer_load gauge
layer_load{layer_name="immediate"} 0.0
layer_load{layer_name="normal"} 11.14363313258934
layer_load{layer_name="batch"} 70.40661455443191
# HELP layer_load_frac Fraction of total load consumed by the layer.
# TYPE layer_load_frac gauge
layer_load_frac{layer_name="normal"} 13.664744680306903
layer_load_frac{layer_name="immediate"} 0.0
layer_load_frac{layer_name="batch"} 86.33525531969309
# HELP layer_tasks Number of tasks in the layer.
# TYPE layer_tasks gauge
layer_tasks{layer_name="immediate"} 0
layer_tasks{layer_name="normal"} 490
layer_tasks{layer_name="batch"} 343
# HELP layer_total Number of scheduling events in the layer.
# TYPE layer_total gauge
layer_total{layer_name="normal"} 6711
layer_total{layer_name="batch"} 1778
layer_total{layer_name="immediate"} 0
# HELP layer_local % of scheduling events directly into an idle CPU.
# TYPE layer_local gauge
layer_local{layer_name="batch"} 69.79752530933632
layer_local{layer_name="immediate"} 0.0
layer_local{layer_name="normal"} 90.86574281031143
# HELP layer_open_idle % of scheduling events into idle CPUs occupied by other layers.
# TYPE layer_open_idle gauge
layer_open_idle{layer_name="immediate"} 0.0
layer_open_idle{layer_name="batch"} 0.0
layer_open_idle{layer_name="normal"} 0.0
# HELP layer_preempt % of scheduling events that preempted other tasks.
# TYPE layer_preempt gauge
layer_preempt{layer_name="normal"} 0.0
layer_preempt{layer_name="batch"} 0.0
layer_preempt{layer_name="immediate"} 0.0
# HELP layer_affn_viol % of scheduling events that violated configured policies due to CPU affinity restrictions.
# TYPE layer_affn_viol gauge
layer_affn_viol{layer_name="normal"} 2.950379973178364
layer_affn_viol{layer_name="batch"} 0.0
layer_affn_viol{layer_name="immediate"} 0.0
# HELP layer_cur_nr_cpus Current # of CPUs assigned to the layer.
# TYPE layer_cur_nr_cpus gauge
layer_cur_nr_cpus{layer_name="normal"} 50
layer_cur_nr_cpus{layer_name="batch"} 2
layer_cur_nr_cpus{layer_name="immediate"} 50
# HELP layer_min_nr_cpus Minimum # of CPUs assigned to the layer.
# TYPE layer_min_nr_cpus gauge
layer_min_nr_cpus{layer_name="normal"} 0
layer_min_nr_cpus{layer_name="batch"} 0
layer_min_nr_cpus{layer_name="immediate"} 0
# HELP layer_max_nr_cpus Maximum # of CPUs assigned to the layer.
# TYPE layer_max_nr_cpus gauge
layer_max_nr_cpus{layer_name="immediate"} 50
layer_max_nr_cpus{layer_name="normal"} 50
layer_max_nr_cpus{layer_name="batch"} 2
# EOF
^C19:40:11 [INFO] EXIT: BPF scheduler unregistered
```
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
After updates to reflect the updated init and direct dispatch API, the
schedulers aren't compatible with older kernels. Bump versions and publish
releases.
In the latest kernel, sched_ext API has changed in two areas:
- ops.prep_enable/cancel_enable/enable/disable() replaced with
ops.init_task/enable/disable/exit_task().
- scx_bpf_dispatch() can now be called from ops.select_cpu(). Also,
SCX_ENQ_LOCAL flag is removed. Instead, users can call
scx_bpf_select_cpu_dfl() from ops.select_cpu() and use the @is_idle out
param value to determine whether to dispatch directly.
This commit updates all schedulers so that they build.
- Init functions renamed / merged / split.
- ops.select_cpu() is added to several schedulers and local direct
dispatching logic is moved there.
This is the minimum update needed to make the schedulers build and
work. It needs further updates to e.g. move vtime updates to ops.enable().
This is because each scheduler has its own Rust crate,
and it's better if each one has an associated README.
https://crates.io/crates/scx_layered
- combine c and kernel-examples as it's confusing to have both
- rename 'rust-user' and 'c-user' to just 'rust' and 'c', which is simpler
- update and fix sync-to-kernel.sh