In scx_layered, we're using a BPF_MAP_TYPE_HASH map (indexed by pid)
rather than a BPF_MAP_TYPE_TASK_STORAGE, to track local storage for a
task. As far as I can tell, there's no reason we need to be doing this.
We never access the map from user space, and we're even passing a
struct task_struct * to a helper subprog to look up the task context
rather than only doing it by pid.
Using a hashmap for this is error prone because we end up having to
manually track the lifecycle of each entry in the map rather than
relying on BPF to do it for us. With task local storage, by contrast,
BPF automatically frees a task's entry when the task exits. Let's use
task local storage here rather than a hashmap to avoid such issues
(e.g. we've observed the scheduler getting evicted because it accessed
a stale map entry after a task had been destroyed).
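For reference, a minimal sketch of what the task local storage approach
looks like (the context struct, map name, and helper shape here are
illustrative rather than the actual scx_layered definitions):
```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* Illustrative per-task context; the real scx_layered struct differs. */
struct task_ctx {
	u32 layer_id;
};

/* Task storage maps must use BPF_F_NO_PREALLOC and an int key. */
struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct task_ctx);
} task_ctxs SEC(".maps");

static struct task_ctx *lookup_task_ctx(struct task_struct *p)
{
	/* Created on first use; freed by BPF when the task exits, so no
	 * manual lifecycle tracking is needed. */
	return bpf_task_storage_get(&task_ctxs, p, NULL,
				    BPF_LOCAL_STORAGE_GET_F_CREATE);
}
```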
Reported-by: Valentin Andrei <vandrei@meta.com>
Signed-off-by: David Vernet <void@manifault.com>
lookup_task_ctx(), lookup_task_ctx_may_fail(), and lookup_layer()
currently lack the static keyword, so BPF may treat them as global
functions. We don't actually want them to be global, so let's make
them static to avoid confusing the verifier.
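Concretely, the fix is just the storage-class specifier; a sketch, with
the return types assumed for illustration:
```c
/* Without `static`, the verifier checks each function once, in isolation,
 * with stricter rules on its arguments. With `static`, each is a plain
 * subprog verified in the context of every call site, which is what these
 * internal helpers want. */
static struct task_ctx *lookup_task_ctx(struct task_struct *p);
static struct task_ctx *lookup_task_ctx_may_fail(struct task_struct *p);
static struct layer *lookup_layer(int idx);
```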
Signed-off-by: David Vernet <void@manifault.com>
This is to potentially reduce issues with folks
using different versions of libbpf at runtime.
This also:
- makes static linking of libbpf the default
- adds steps in `meson setup` to fetch libbpf and build it
I have a use case where specific nice values are used to bucket tasks
into groups that are handled separately by different `scx_layered`
policies, with no implication of relative priority between niceness X,
X + 1, X - 1, etc. In other words, nice values are used as simple tags
in this scenario.
If we wanted to treat a specific niceness this way, e.g. `11`, we could
do so with AND'd MATCH_NICE_{ABOVE,BELOW} matchers like so:
```json
"matches" : [
[
{
"NiceAbove": 10
},
{
"NiceBelow": 12
},
],
],
```
But this is unnecessarily verbose and doesn't communicate the intent of
the match very well. Adding a `NiceEquals` matcher simplifies the
config and makes intent obvious:
```json
"matches" : [
[
{
"NiceEquals": 11
},
],
],
```
This PR adds support for such a matcher.
Also, rename `layer_match.nice_above_or_below` to just
`layer_match.nice`, as the former doesn't describe the newly-added
matcher's use of the field. It's still obvious that `layer_match.nice`
is relevant to MATCH_NICE_{ABOVE,BELOW,EQUALS}.
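On the BPF side, the evaluation is a one-line addition alongside the
existing cases; a sketch (the surrounding struct and helper shapes are
illustrative, and the niceness computation assumes the kernel's
static_prio encoding):
```c
static bool match_nice(struct layer_match *match, struct task_struct *p)
{
	/* kernel encoding: nice = static_prio - DEFAULT_PRIO (120) */
	s32 nice = (s32)p->static_prio - 120;

	switch (match->kind) {
	case MATCH_NICE_ABOVE:
		return nice > match->nice;
	case MATCH_NICE_BELOW:
		return nice < match->nice;
	case MATCH_NICE_EQUALS:
		return nice == match->nice;
	default:
		return false;
	}
}
```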
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
As mentioned in the previous commit, for some reason we're sometimes
(non-deterministically) not seeing the updated cpumask / layer values in
BPF if we initialize the cpumasks here before attaching. Let's undo this
for now to avoid the buggy behavior until we figure it out.
Signed-off-by: David Vernet <void@manifault.com>
This reverts commit 56ff3437a2.
For some reason we seem to be non-deterministically failing to see the
updated layer values in BPF if we initialize before attaching. Let's
just undo this specific part so that we're unblocked, and figure out
the root cause asynchronously.
Signed-off-by: David Vernet <void@manifault.com>
Currently, in layered_dispatch, we do the following:
1. Iterate over all preempt=true layers, and first try to consume from
them.
2. Iterate over all confined layers, and consume from them if we find a
layer with a cpumask that contains the consuming CPU.
3. Iterate over all grouped and open layers and consume from them in
ordered sequence.
In (2), we're only iterating over confined layers, but we should also be
iterating over grouped layers. Otherwise, even when a consuming CPU has
been allocated to a specific grouped layer, it will consume from
whichever grouped or open layer happens to have a runnable task.
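A sketch of the fixed step (2), with illustrative helper and field names
(scx_bpf_consume() and bpf_cpumask_test_cpu() are real kfuncs):
```c
static bool try_consume_non_open_layers(s32 cpu)
{
	u32 idx;

	bpf_for(idx, 0, nr_layers) {
		struct layer *layer = &layers[idx];
		struct cpumask *layer_cpumask;

		if (layer->open)	/* open layers are handled in step (3) */
			continue;

		/* Consume from confined *and* grouped layers, but only if
		 * this CPU is in the layer's cpumask. */
		if ((layer_cpumask = lookup_layer_cpumask(idx)) &&
		    bpf_cpumask_test_cpu(cpu, layer_cpumask) &&
		    scx_bpf_consume(idx))
			return true;
	}

	return false;
}
```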
Signed-off-by: David Vernet <void@manifault.com>
In layered_init, we're currently setting all bits in every layer's
cpumask, and then asynchronously updating the cpumasks at a later time
to reflect their actual values at runtime. Now that we're updating the
layered code to initialize the cpumasks before we attach the scheduler,
we can instead have the init path actually refresh and initialize the
cpumasks directly.
Signed-off-by: David Vernet <void@manifault.com>
We currently have a bug in layered wherein we could fail to propagate
layer updates from user space to kernel space if a layer is never
adjusted after it's first initialized. For example, in the following
configuration:
```json
[
  {
    "name": "workload.slice",
    "comment": "main workload slice",
    "matches": [
      [
        {
          "CgroupPrefix": "workload.slice/"
        }
      ]
    ],
    "kind": {
      "Grouped": {
        "cpus_range": [30, 30],
        "util_range": [0.0, 1.0],
        "preempt": false
      }
    }
  },
  {
    "name": "normal",
    "comment": "the rest",
    "matches": [
      []
    ],
    "kind": {
      "Grouped": {
        "cpus_range": [2, 2],
        "util_range": [0.0, 1.0],
        "preempt": false
      }
    }
  }
]
```
Both layers are static, and need only be resized a single time, so the
configuration would never be propagated to the kernel due to us never
calling update_bpf_layer_cpumask(). Let's instead have the
initialization propagate changes to the skeleton before we attach the
scheduler.
This has two advantages: it fixes the bug mentioned above, where a
static configuration is never propagated to the kernel, and it removes
the short window right after the scheduler attaches during which
scheduling decisions are suboptimal because the layer resizing hasn't
yet been propagated.
Signed-off-by: David Vernet <void@manifault.com>
If we are doing local dispatch, we can avoid enqueue() altogether by
dispatching from select_cpu().
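A sketch of the pattern, assuming the scx_bpf_select_cpu_dfl() /
scx_bpf_dispatch() API described elsewhere in this series (the real
layered_select_cpu() differs in how it picks the CPU):
```c
s32 BPF_STRUCT_OPS(layered_select_cpu, struct task_struct *p, s32 prev_cpu,
		   u64 wake_flags)
{
	bool is_idle = false;
	s32 cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);

	/* If an idle CPU was found, dispatch straight to its local DSQ;
	 * enqueue() is then skipped entirely for this wakeup. */
	if (is_idle)
		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);

	return cpu;
}
```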
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
This is a really minor optimization, but we don't need idle_smtmask to
schedule pinned tasks, so defer acquiring it until after the pinned-task
check so that the nr_cpus_allowed == 1 path is marginally faster.
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
idle_cpumask isn't used at all in pick_idle_cpu_from. The only need for
these cpumasks is to check if prev_cpu is a wholly idle CPU (and we only
do this when smt_enabled). idle_smtmask is sufficient for that check.
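A sketch of why the SMT mask alone suffices (the helper shape here is
illustrative; the scx_bpf_*_idle_* kfuncs are real):
```c
static s32 try_wholly_idle_prev_core(s32 prev_cpu, bool smt_enabled)
{
	const struct cpumask *idle_smtmask = scx_bpf_get_idle_smtmask();
	s32 cpu = -1;

	/* In the idle SMT mask, a CPU's bit is set only when every sibling
	 * on its core is idle, so this single test answers "is prev_cpu's
	 * whole core idle?" without ever touching idle_cpumask. */
	if (smt_enabled && bpf_cpumask_test_cpu(prev_cpu, idle_smtmask) &&
	    scx_bpf_test_and_clear_cpu_idle(prev_cpu))
		cpu = prev_cpu;

	scx_bpf_put_idle_cpumask(idle_smtmask);
	return cpu;
}
```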
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
Prior to this patch, we only bumped LSTAT_AFFN_VIOL when the target CPU
was idle, but both cases (idle and non-idle target CPU) should be
counted as AFFN_VIOL.
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
Currently scx_layered outputs statistics periodically as info! logs. The
format of this is largely unstructured and mostly suitable for running
scx_layered interactively (e.g. observing its behavior on the command
line or via logs after the fact).
In order to run scx_layered at larger scale, it's desirable to have
statistics output in some format that is amenable to being ingested into
monitoring databases (e.g. Prometheus). This allows collection of
stats across many machines.
This commit adds a command line flag (-o) that outputs statistics to
stdout in OpenMetrics format instead of the normal log mechanism.
OpenMetrics has a public format
specification (https://github.com/OpenObservability/OpenMetrics) and is
in use by many projects.
The library for producing OpenMetrics metrics is lightweight but does
require some changes. Primarily, metrics need to be pre-registered (see
OpenMetricsStats::new()).
Without -o, the output looks as before, for example:
```
19:39:54 [INFO] CPUs: online/possible=52/52 nr_cores=26
19:39:54 [INFO] Layered Scheduler Attached
19:39:56 [INFO] tot= 9912 local=76.71 open_idle= 0.00 affn_viol= 2.63 tctx_err=0 proc=21ms
19:39:56 [INFO] busy= 1.3 util= 65.2 load= 263.4 fallback_cpu= 1
19:39:56 [INFO] batch : util/frac= 49.7/ 76.3 load/frac= 252.0: 95.7 tasks= 458
19:39:56 [INFO] tot= 2842 local=45.04 open_idle= 0.00 preempt= 0.00 affn_viol= 0.00
19:39:56 [INFO] cpus= 2 [ 0, 2] 04000001 00000000
19:39:56 [INFO] immediate: util/frac= 0.0/ 0.0 load/frac= 0.0: 0.0 tasks= 0
19:39:56 [INFO] tot= 0 local= 0.00 open_idle= 0.00 preempt= 0.00 affn_viol= 0.00
19:39:56 [INFO] cpus= 50 [ 0, 50] fbfffffe 000fffff
19:39:56 [INFO] normal : util/frac= 15.4/ 23.7 load/frac= 11.4: 4.3 tasks= 556
19:39:56 [INFO] tot= 7070 local=89.43 open_idle= 0.00 preempt= 0.00 affn_viol= 3.69
19:39:56 [INFO] cpus= 50 [ 0, 50] fbfffffe 000fffff
19:39:58 [INFO] tot= 7091 local=84.91 open_idle= 0.00 affn_viol= 2.64 tctx_err=0 proc=21ms
19:39:58 [INFO] busy= 0.6 util= 31.2 load= 107.1 fallback_cpu= 1
19:39:58 [INFO] batch : util/frac= 18.3/ 58.5 load/frac= 93.9: 87.7 tasks= 589
19:39:58 [INFO] tot= 2011 local=60.67 open_idle= 0.00 preempt= 0.00 affn_viol= 0.00
19:39:58 [INFO] cpus= 2 [ 2, 2] 04000001 00000000
19:39:58 [INFO] immediate: util/frac= 0.0/ 0.0 load/frac= 0.0: 0.0 tasks= 0
19:39:58 [INFO] tot= 0 local= 0.00 open_idle= 0.00 preempt= 0.00 affn_viol= 0.00
19:39:58 [INFO] cpus= 50 [ 50, 50] fbfffffe 000fffff
19:39:58 [INFO] normal : util/frac= 13.0/ 41.5 load/frac= 13.2: 12.3 tasks= 650
19:39:58 [INFO] tot= 5080 local=94.51 open_idle= 0.00 preempt= 0.00 affn_viol= 3.68
19:39:58 [INFO] cpus= 50 [ 50, 50] fbfffffe 000fffff
^C19:39:59 [INFO] EXIT: BPF scheduler unregistered
```
With -o passed, the output is in OpenMetrics format:
```
19:40:08 [INFO] CPUs: online/possible=52/52 nr_cores=26
19:40:08 [INFO] Layered Scheduler Attached
# HELP total Total scheduling events in the period.
# TYPE total gauge
total 8489
# HELP local % that got scheduled directly into an idle CPU.
# TYPE local gauge
local 86.45305689716104
# HELP open_idle % of open layer tasks scheduled into occupied idle CPUs.
# TYPE open_idle gauge
open_idle 0.0
# HELP affn_viol % which violated configured policies due to CPU affinity restrictions.
# TYPE affn_viol gauge
affn_viol 2.332430203793144
# HELP tctx_err Failures to free task contexts.
# TYPE tctx_err gauge
tctx_err 0
# HELP proc_ms CPU time this binary has consumed during the period.
# TYPE proc_ms gauge
proc_ms 20
# HELP busy CPU busy % (100% means all CPUs were fully occupied).
# TYPE busy gauge
busy 0.5294061026085283
# HELP util CPU utilization % (100% means one CPU was fully occupied).
# TYPE util gauge
util 27.37195512782239
# HELP load Sum of weight * duty_cycle for all tasks.
# TYPE load gauge
load 81.55024768702126
# HELP layer_util CPU utilization of the layer (100% means one CPU was fully occupied).
# TYPE layer_util gauge
layer_util{layer_name="immediate"} 0.0
layer_util{layer_name="normal"} 19.340849995024997
layer_util{layer_name="batch"} 8.031105132797393
# HELP layer_util_frac Fraction of total CPU utilization consumed by the layer.
# TYPE layer_util_frac gauge
layer_util_frac{layer_name="batch"} 29.34063385422595
layer_util_frac{layer_name="immediate"} 0.0
layer_util_frac{layer_name="normal"} 70.65936614577405
# HELP layer_load Sum of weight * duty_cycle for tasks in the layer.
# TYPE layer_load gauge
layer_load{layer_name="immediate"} 0.0
layer_load{layer_name="normal"} 11.14363313258934
layer_load{layer_name="batch"} 70.40661455443191
# HELP layer_load_frac Fraction of total load consumed by the layer.
# TYPE layer_load_frac gauge
layer_load_frac{layer_name="normal"} 13.664744680306903
layer_load_frac{layer_name="immediate"} 0.0
layer_load_frac{layer_name="batch"} 86.33525531969309
# HELP layer_tasks Number of tasks in the layer.
# TYPE layer_tasks gauge
layer_tasks{layer_name="immediate"} 0
layer_tasks{layer_name="normal"} 490
layer_tasks{layer_name="batch"} 343
# HELP layer_total Number of scheduling events in the layer.
# TYPE layer_total gauge
layer_total{layer_name="normal"} 6711
layer_total{layer_name="batch"} 1778
layer_total{layer_name="immediate"} 0
# HELP layer_local % of scheduling events directly into an idle CPU.
# TYPE layer_local gauge
layer_local{layer_name="batch"} 69.79752530933632
layer_local{layer_name="immediate"} 0.0
layer_local{layer_name="normal"} 90.86574281031143
# HELP layer_open_idle % of scheduling events into idle CPUs occupied by other layers.
# TYPE layer_open_idle gauge
layer_open_idle{layer_name="immediate"} 0.0
layer_open_idle{layer_name="batch"} 0.0
layer_open_idle{layer_name="normal"} 0.0
# HELP layer_preempt % of scheduling events that preempted other tasks.
# TYPE layer_preempt gauge
layer_preempt{layer_name="normal"} 0.0
layer_preempt{layer_name="batch"} 0.0
layer_preempt{layer_name="immediate"} 0.0
# HELP layer_affn_viol % of scheduling events that violated configured policies due to CPU affinity restrictions.
# TYPE layer_affn_viol gauge
layer_affn_viol{layer_name="normal"} 2.950379973178364
layer_affn_viol{layer_name="batch"} 0.0
layer_affn_viol{layer_name="immediate"} 0.0
# HELP layer_cur_nr_cpus Current # of CPUs assigned to the layer.
# TYPE layer_cur_nr_cpus gauge
layer_cur_nr_cpus{layer_name="normal"} 50
layer_cur_nr_cpus{layer_name="batch"} 2
layer_cur_nr_cpus{layer_name="immediate"} 50
# HELP layer_min_nr_cpus Minimum # of CPUs assigned to the layer.
# TYPE layer_min_nr_cpus gauge
layer_min_nr_cpus{layer_name="normal"} 0
layer_min_nr_cpus{layer_name="batch"} 0
layer_min_nr_cpus{layer_name="immediate"} 0
# HELP layer_max_nr_cpus Maximum # of CPUs assigned to the layer.
# TYPE layer_max_nr_cpus gauge
layer_max_nr_cpus{layer_name="immediate"} 50
layer_max_nr_cpus{layer_name="normal"} 50
layer_max_nr_cpus{layer_name="batch"} 2
# EOF
^C19:40:11 [INFO] EXIT: BPF scheduler unregistered
```
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
After updates to reflect the updated init and direct dispatch API, the
schedulers aren't compatible with older kernels. Bump versions and publish
releases.
In the latest kernel, sched_ext API has changed in two areas:
- ops.prep_enable/cancel_enable/enable/disable() replaced with
ops.init_task/enable/disable/exit_task() (see the sketch after this list).
- scx_bpf_dispatch() can now be called from ops.select_cpu(). Also,
SCX_ENQ_LOCAL flag is removed. Instead, users can call
scx_bpf_select_cpu_dfl() from ops.select_cpu() and use the @is_idle out
param value to determine whether to dispatch directly.
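A sketch of the renamed callbacks (assuming argument structs named
scx_init_task_args / scx_exit_task_args; the ops prefix and bodies are
illustrative):
```c
s32 BPF_STRUCT_OPS(sched_init_task, struct task_struct *p,
		   struct scx_init_task_args *args)
{
	/* was ops.prep_enable(); returning an error aborts the task's
	 * entry into the scheduler, replacing ops.cancel_enable() */
	return 0;
}

void BPF_STRUCT_OPS(sched_exit_task, struct task_struct *p,
		    struct scx_exit_task_args *args)
{
	/* new hook: release per-task resources */
}
```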
This commit updates all schedulers so that they build.
- Init functions renamed / merged / split.
- ops.select_cpu() is added to several schedulers and local direct
dispatching logic is moved there.
This is the minimum update needed to make the schedulers build and
work. It needs further updates to e.g. move vtime updates to ops.enable().
This is because each scheduler has its own Rust crate,
and it's better if each one has a README associated with it.
https://crates.io/crates/scx_layered
- combine c and kernel-examples as it's confusing to have both
- rename 'rust-user' and 'c-user' to just 'rust' and 'c', which is simpler
- update and fix sync-to-kernel.sh