Returning prev_cpu after picking an idle CPU causes the idle CPU to
stall, because the idle core was already punched out from the idle mask
by the scx core, so it is no longer idle from the scx core's point of
view.
This fix performs the idle core selection as the last step, so prev_cpu
is never returned after an idle core has been picked.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
get_task_ctx() and try_get_task_ctx() were added for common error
handling on task lookup failure. Since the idle "swapper" task is not
managed by sched_ext, try_get_task_ctx() is added for the cases where
the idle task may be looked up.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
We don't need to test SCX_WAKE_SYNC because SCX_WAKE_SYNC should only be
set when SCX_WAKE_TTWU is set.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
scx_lavd is a BPF scheduler that implements the LAVD (Latency-criticality
Aware Virtual Deadline) scheduling algorithm. While LAVD is new and
still evolving, its core ideas are 1) measuring how latency-critical a
task is and 2) leveraging the task's latency-criticality information in
making various scheduling decisions (e.g., the task's deadline, time
slice, etc.). As the name implies, LAVD is built on the foundation of
deadline scheduling. The scheduler consists of a BPF part and a Rust
part. The BPF part makes all the scheduling decisions; the Rust part
loads the BPF code and handles other chores (e.g., printing sampled
scheduling decisions).
There were a few issues, e.g. we were still mentioning the infeasible
weights problem, and arguments were documented with underscores even
though clap renders them with dashes. Let's fix them up.
Signed-off-by: David Vernet <void@manifault.com>
As described in https://bugzilla.kernel.org/show_bug.cgi?id=218109,
https://github.com/sched-ext/scx/issues/147 and
https://github.com/sched-ext/sched_ext/issues/69, AMD chips can
sometimes report fully disabled CPUs as offline, which causes us to
count them when looking at /sys/devices/system/cpu/possible.
Additionally, systems can have holes in their active CPU maps. For
example, a system with CPUs 0, 1, 2, 3 possible, may have only 0 and 2
active. To address this, we need to do a few things:
1. Update topology.rs to be clear that it's returning the number of
_possible_ CPUs in the system. Also update Topology to only record
online CPUs when creating its span and iterating over sysfs when
creating domains. It was previously trying to record when a CPU was
online, but this was actually broken as the topology directory isn't
present in sysfs when the CPU is offline.
2. Schedulers should not be relying on nr_possible_cpus for anything
other than interacting with per-CPU data (e.g. for stats extraction),
or e.g. verifying maximum sizes of statically sized arrays in BPF. It
should _not_ be used for e.g. performing load calculations, etc. With
that said, we'll also need to update schedulers to not rely on the
nr_possible_cpus figure being exported by the topology crate. We do
that for rusty in this patch, but don't fix any of the others other
than updating how they call topology.rs.
3. Account for the fact that LLC IDs may be non-contiguous. For example,
if there is a single core in an LLC, then if we assign LLC IDs to
domains, then the domain IDs won't be contiguous. This doesn't fit
our current model which is used by e.g. infeasible_weights.rs. We'll
update some of the code in rusty to accommodate this, but we'll need
to do more.
4. Update schedulers to properly reset themselves in the event of a
hotplug event. We'll take care of that in a follow-on change.
Signed-off-by: David Vernet <void@manifault.com>
If a CPU is offline, it could cause an LLC to go offline, which could
cause us to have non-contiguous domain IDs. Right now, a few places in
the code assume contiguous domain IDs, such as in the infeasible weights
crate. Let's update domain.rs and load_balance.rs to do the right
thing. We'll fix the others later.
Signed-off-by: David Vernet <void@manifault.com>
We implement functions or(), and(), and xor() for cpumasks, but we
should also implement the bitwise ops for those operations in case
people prefer that syntax.
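A minimal sketch of the idea, using a simplified single-word stand-in
for the real Cpumask type (field layout and method bodies below are
illustrative, not the crate's actual implementation):

```rust
use std::ops::{BitAnd, BitOr, BitXor};

// Simplified stand-in for the cpumask type.
#[derive(Clone, Debug, PartialEq)]
struct Cpumask {
    bits: u64,
}

impl Cpumask {
    fn and(&self, other: &Cpumask) -> Cpumask {
        Cpumask { bits: self.bits & other.bits }
    }
    fn or(&self, other: &Cpumask) -> Cpumask {
        Cpumask { bits: self.bits | other.bits }
    }
    fn xor(&self, other: &Cpumask) -> Cpumask {
        Cpumask { bits: self.bits ^ other.bits }
    }
}

// Operator syntax simply delegates to the named methods.
impl BitAnd for Cpumask {
    type Output = Cpumask;
    fn bitand(self, rhs: Cpumask) -> Cpumask { self.and(&rhs) }
}
impl BitOr for Cpumask {
    type Output = Cpumask;
    fn bitor(self, rhs: Cpumask) -> Cpumask { self.or(&rhs) }
}
impl BitXor for Cpumask {
    type Output = Cpumask;
    fn bitxor(self, rhs: Cpumask) -> Cpumask { self.xor(&rhs) }
}

fn main() {
    let a = Cpumask { bits: 0b1100 };
    let b = Cpumask { bits: 0b1010 };
    assert_eq!(a.clone() & b.clone(), a.and(&b));
    assert_eq!(a | b, Cpumask { bits: 0b1110 });
}
```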
Signed-off-by: David Vernet <void@manifault.com>
We're iterating from min..max cpu in cpus_online(), but that's not
inclusive of the max CPU. Let's also include it so we don't mistakenly
treat the last CPU as offline.
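The fix boils down to switching to an inclusive range; a tiny,
self-contained illustration (CPU numbers are made up):

```rust
fn main() {
    let (min_cpu, max_cpu) = (0usize, 7usize);
    // `..=` makes the range inclusive of max_cpu, so the last CPU is
    // also checked instead of being silently treated as offline.
    for cpu in min_cpu..=max_cpu {
        println!("checking cpu{cpu}");
    }
}
```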
Signed-off-by: David Vernet <void@manifault.com>
Most of the schedulers assume that the amount of possible CPUs in the
system represents the actual number of CPUs available.
This is not always true: some CPUs may be offline or certain CPU models
(AMD CPUs for example) may include unavailable CPUs in this number.
This can lead to sub-optimal performance or even errors in the scheduler
(see for example [1][2]).
Ideally, we need to attack this issue in a more generic way, such as
having a proper API provided by a C library, that can be used by all
schedulers and the topology Rust module (scx_utils crate).
But for now, let's try to mitigate most of the common sub-optimal cases
separately inside each scheduler.
For rustland we can apply some mitigations both in select_cpu() (for the
BPF part) and in the user-space part:
- the former is fixed in the sched-ext kernel by commit 94dc0c01b957
("scx: Use cpu_online_mask when resetting idle masks"). However,
adding an extra check `cpu < num_possible_cpus` in select_cpu()
allows us to properly support AMD CPUs, even with kernels that don't
have the cpu_online_mask fix yet (this doesn't always guarantee the
validity of cpu, but it should be enough to mitigate the majority of
the potential sub-optimal cases, without introducing any significant
overhead)
- the latter can be fixed by relying on topology.span(), instead of
topology.nr_cpus(), to count the amount of available CPUs in the
system.
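For the user-space side, the idea is simply to count the CPUs that are
actually present in the online span rather than trusting the
possible-CPU count. A conceptual sketch (not the actual scx_utils API;
the span is modeled as a plain bitmask):

```rust
// Count available CPUs by counting the bits set in the online span.
fn nr_cpus_available(span: &[u64]) -> u32 {
    span.iter().map(|word| word.count_ones()).sum()
}

fn main() {
    // Example: CPUs 0 and 2 online out of 4 possible CPUs.
    let span = vec![0b0101u64];
    assert_eq!(nr_cpus_available(&span), 2);
    println!("{} CPUs available", nr_cpus_available(&span));
}
```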
[1] https://github.com/sched-ext/sched_ext/issues/69
[2] https://github.com/sched-ext/scx/issues/147
Link: 94dc0c01b9
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Given the complexity of migrating load between nodes (we're doing four
nested loops), we should add a comment explaining what we're doing. This
commit does that. In addition, we use a VecDeque to store (and then
restore) push nodes and push domains so that we can re-add them to their
respective lists in load-sorted order rather than reverse-load-sorted
order. This allows us to avoid having to do unnecessary right-shifts
every time a push object is re-added to its containing list.
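An illustrative sketch of the VecDeque trick, with plain integers
standing in for domains sorted by load:

```rust
use std::collections::VecDeque;

fn main() {
    // Domains sorted by load, lowest first; pop() takes the most loaded.
    let mut sorted: Vec<u32> = vec![10, 20, 30, 40];
    let mut stash: VecDeque<u32> = VecDeque::new();

    // Process the most-loaded entries, stashing them as we go.
    while let Some(load) = sorted.pop() {
        if load < 25 {
            sorted.push(load);
            break;
        }
        // push_front keeps the stash in ascending (load-sorted) order.
        stash.push_front(load);
    }

    // Restoring is a simple append: `sorted` stays load-sorted with no
    // mid-vector insertions (right-shifts).
    sorted.extend(stash);
    assert_eq!(sorted, vec![10, 20, 30, 40]);
}
```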
Signed-off-by: David Vernet <void@manifault.com>
Fixing alignment, moving a couple bail! calls around, and adding a
missing break from move_between_nodes() that lets us bail out of a loop
early.
Signed-off-by: David Vernet <void@manifault.com>
As Tejun pointed out in review, the disadvantage of using
push/pull/balanced lists is that if the domains inside the nodes are
balanced, we won't be able to push load between them. I'd originally
done it that way both as an optimization and to allow me to
iterate over the lists of pushable and pullable domains mutably. That
was addressed in the prior commit, but the nodes themselves were still
put into 3 buckets.
I think this is generally just a cleaner way of doing things, so let's
just collapse the nodes into a flat list as well. This prevents us from
having to coalesce the lists, std::mem::swap them, etc.
Signed-off-by: David Vernet <void@manifault.com>
Tejun pointed out that a possible issue exists in the current
implementation, wherein if you have two NUMA nodes that are imbalanced,
but their domains are internally balanced, we'll fail to migrate between
them if all nodes are in the balanced_nodes list.
To address this, let's just use a single global list for all types of
domains, and do checking internally for imbalances. The reason it was
done this way in the first place was to allow me to mutably iterate over
both vectors in a nested loop. The way around that is to just use loop
{} and push/pop domains from the list.
We could do the same thing for the NUMA nodes themselves, which are also
in 3 separate lists in the LoadBalancer. We'll do that in a subsequent
commit.
Signed-off-by: David Vernet <void@manifault.com>
In scx_rusty, a CPU that is going to go idle will attempt to steal tasks
from remote domains when its domain has no tasks to run, and a remote
domain has at least greedy_threshold enqueued tasks. This stealing is
temporary, but of course has a cost in that the CPU that's stealing the
task may cause it to suffer from cache misses, or in the case of
multi-node machines, remote NUMA accesses and working sets split across
multiple domains.
Given the higher cost of cross-NUMA work stealing, let's add a separate
flag that lets users tune the threshold for doing cross-NUMA greedy task
stealing.
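A sketch of what such a knob could look like with clap's derive API;
the flag names and defaults here are hypothetical, not scx_rusty's
exact interface:

```rust
use clap::Parser;

/// Illustrative subset of the scheduler's options.
#[derive(Debug, Parser)]
struct Opts {
    /// Enqueued-task threshold for greedy stealing within a NUMA node.
    #[clap(long, default_value = "1")]
    greedy_threshold: u64,

    /// Separate (typically higher) threshold for greedy stealing across
    /// NUMA nodes; 0 disables cross-NUMA stealing in this sketch.
    #[clap(long, default_value = "0")]
    greedy_threshold_x_numa: u64,
}

fn main() {
    let opts = Opts::parse();
    println!("{:?}", opts);
}
```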
Signed-off-by: David Vernet <void@manifault.com>
In order to use the new consume_raw() API we need to depend on a version
of libbpf-rs that is not released yet.
Apparently adding such dependency may introduce a potential dependency
conflict with libbpf-sys.
Therefore, revert this change and go back to the previous consume() API.
Once a new version of libbpf-rs is out we can update all our
dependencies to use the new libbpf-rs and re-apply this patch to
scx_rustland_core.
Fixes: 7c8c5fd ("scx_rustland_core: use new consume_raw() libbpf-rs API")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In line with rustland's focus on prioritizing interactive tasks, set the
default base time slice to 5ms.
This helps mitigate potential audio cracking issues or system lags
when the system is overloaded or under memory pressure (i.e.,
https://github.com/sched-ext/scx/issues/96#issuecomment-1978154324).
A downside of this change is that it may introduce regressions in the
throughput of CPU-intensive workloads, but in such scenarios rustland
may not be the optimal choice and alternative schedulers may be
preferred.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Some high-priority tasks may have an excessively high weight, which can
potentially disrupt the slice boost optimization logic, causing
interactive tasks to be less responsive.
In line with rustland's focus on prioritizing interactive tasks, prevent
giving too much CPU bandwidth to such high-priority tasks by limiting
the maximum task weight to 1000.
This helps maintain a good level of system responsiveness even in the
presence of tasks with a really high priority.
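A minimal sketch of the clamping idea (constant and function names are
illustrative):

```rust
const MAX_TASK_WEIGHT: u64 = 1000;

// Clamp whatever weight the kernel reports before it feeds the
// vruntime / slice boost calculations.
fn effective_weight(weight: u64) -> u64 {
    weight.min(MAX_TASK_WEIGHT)
}

fn main() {
    assert_eq!(effective_weight(10000), 1000); // clamped
    assert_eq!(effective_weight(100), 100);    // unchanged
}
```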
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Use the new consume_raw() API provided by libbpf-rs with
https://github.com/libbpf/libbpf-rs/pull/680.
This allows us to be more precise and efficient at processing tasks
consumed from the BPF ring buffer.
NOTE: the new consume_raw() API is not available yet in any official
release of the libbpf-rs crate, but cargo allows picking versions
directly from git. This slightly increases the build time of
scx_rustland_core and the schedulers based on this crate (since we need
to recompile libbpf-rs from source), but we can re-add a proper
versioned dependency once the new libbpf-rs is released.
TODO: this new API also offers the possibility to consume multiple items
from the BPF ring buffer with a single call to consume_raw(). This could
be investigated and implemented as a potential future enhancement.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The current topology.rs crate assumes that all cores have unique core
IDs in a system. This need not be the case, such as in certain Intel
Xeon processors which reuse core IDs in different NUMA nodes. Let's
update the crate to assume unique core IDs only per socket.
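Conceptually, this means cores have to be keyed by (socket, core ID)
rather than by core ID alone. A small self-contained illustration (IDs
below are made up):

```rust
use std::collections::BTreeMap;

fn main() {
    // Map (node_id, core_id) -> CPUs belonging to that core. Keying by
    // core_id alone would wrongly merge core 0 of both sockets.
    let mut cores: BTreeMap<(usize, usize), Vec<usize>> = BTreeMap::new();
    cores.entry((0, 0)).or_default().push(0);  // node 0, core 0 -> cpu 0
    cores.entry((1, 0)).or_default().push(16); // node 1, core 0 -> cpu 16
    assert_eq!(cores.len(), 2); // two distinct cores, not one
}
```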
Signed-off-by: David Vernet <void@manifault.com>
We removed the debug!() output that was previously present in main.rs. Let's
add more debug!() output that helps debug the current LB hierarchy.
Signed-off-by: David Vernet <void@manifault.com>
The scx_rusty load balancer is currently no longer exporting statistics such as
domain load averages, load sums, etc. Now that we're also balancing by NUMA,
we'll need a way to hierarchically illustrate load balancing statistics. This
patch adds support for that.
Signed-off-by: David Vernet <void@manifault.com>
updating stats printing
Signed-off-by: David Vernet <void@manifault.com>
Users may want to toggle whether tasks can be temporarily sent to idle CPUs on
remote NUMA nodes. By default, we want it to be disabled as a task spanning
multiple NUMA nodes will end up having its working set spanning both nodes,
which is probably not desirable. However, in case a workload really wants to
encourage work conservation, let's add a flag that allows them to toggle it.
Signed-off-by: David Vernet <void@manifault.com>
scx_rusty currently pushes tasks to idle cores if the direct greedy threshold
is exceeded, even if the core is on a remote NUMA node. This behavior is
probably not desired in most scenarios. The worst that will happen if a task is
pushed to an idle core in the same node is some L3 cache miss traffic, but for
multiple NUMA nodes, it could cause the task to have its working set span
multiple nodes.
Let's disable direct greedy work stealing across NUMA nodes. A future commit
will add a flag that's disabled by default, and lets users turn this on if
they really want to encourage work conservation.
Signed-off-by: David Vernet <void@manifault.com>
Right now, scx_rusty has no notion of domains spanning NUMA nodes, and makes no
distinction when making load balancing decisions, or work stealing. This can
cause problems on multi-NUMA machines, as load balancing and work stealing
across NUMA nodes has significantly different cost from across L3 cache
boundaries.
In order to better support multi-NUMA machines, this commit adds another layer
to the rusty load balancer, which balances across NUMA nodes using a different
cost function from balancing across domains. Load balancing now takes place
over the span of two passes:
1. In the first pass, we fix imbalances across NUMA nodes by moving tasks
between domains across those NUMA node boundaries. We require a load
imbalance of at least 17% in order to move load at this stage. The ratio of
load imbalance we attempt to adjust (50%) and the maximum amount of load
we're allowed to push out of a domain (50%) are still the same as when
balancing between domains inside a NUMA node, but this is easy to tune with
the current setup.
2. Once we've balanced across NUMA nodes, we iterate over all nodes and balance
between the domains within each NUMA node. The cost function here is the
same as what it has been thus far: we require at least a 5% imbalance in
order to trigger load balancing.
There are a few additional changes / improvements to load balancing in this
commit:
1. NUMA nodes and domains are now ordered according to their load by using
SortedVec objects. We were previously using a BTreeMap keyed by load, but
this was suboptimal because a BTreeMap doesn't allow duplicate keys (see
the sketch after this list).
2. We're no longer exporting load balancing statistics as a vector of data such
as load sums, averages, and imbalances. This is instead all encapsulated in
the load balancing hierarchy we setup in lb.load_balance(). These statistics
are not yet exported, but they will be in a subsequent commit.
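The sketch below shows the duplicate-key problem that motivated the
switch; a plain sorted Vec stands in for the SortedVec type used in the
actual code:

```rust
use std::collections::BTreeMap;

fn main() {
    // Two domains with identical load: keying a BTreeMap by load
    // silently drops one of them...
    let mut by_load: BTreeMap<u64, &str> = BTreeMap::new();
    by_load.insert(100, "dom0");
    by_load.insert(100, "dom1");
    assert_eq!(by_load.len(), 1);

    // ...whereas a load-sorted vector keeps both entries.
    let mut sorted: Vec<(u64, &str)> = vec![(100, "dom0"), (100, "dom1")];
    sorted.sort_by_key(|(load, _)| *load);
    assert_eq!(sorted.len(), 2);
}
```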
One of the issues with this commit is that it does introduce some
almost-identical logic that somehow begs to be deduplicated. For example, when
we balance between NUMA nodes, the logic for iterating over push nodes and
pushing to pull nodes is very similar to the logic of iterating over push
domains and pull domains when balancing within a node. It may be that this can
be improved.
The following are some benchmarks run on an Intel Xeon Gold 6138 (2 x 40 core
processor):
kcompile
--------
On Commit a27648c74210 ("afs: Fix setting of mtime when creating a
file/dir/symlink"):
1. make allyesconfig
2. make -j $(nproc) built-in.a
3. make -j clean
4. goto 2
Runtime
-------
         o-----------o-----------o----------o
         | scx_rusty |    CFS    |  Delta   |
---------o-----------o-----------o----------o
Mean     | 562.688s  | 566.085s  |  -.6%    |
---------o-----------o-----------o----------o
Variance | 0.54387   | 0.72431   |  -24.9%  |
---------o-----------o-----------o----------o

         o-----------o-----------o----------o
         | rusty NUMA| rusty ORIG|  Delta   |
---------o-----------o-----------o----------o
Mean     | 562.688s  | 563.209s  |  -.092%  |
---------o-----------o-----------o----------o
Variance | 0.54387   | 0.42038   |  29.38%  |
---------o-----------o-----------o----------o
scx_rusty with NUMA awareness clearly beats CFS, but only barely beats
scx_rusty without it. This isn't necessarily super surprising given that
this is kcompile, which has very poor front-end CPU locality. Further
experimentation with toggling the cost function for performing
migrations may improve this further.
CPU util
--------
         o-----------o-----------o----------o
         | scx_rusty |    CFS    |  Delta   |
---------o-----------o-----------o----------o
Mean     | 7654.25%  | 7551.67%  |  1.11%   |
---------o-----------o-----------o----------o
Variance | 165.35714 | 158.3333  |  4.436%  |
---------o-----------o-----------o----------o

         o-----------o-----------o----------o
         | rusty NUMA| rusty ORIG|  Delta   |
---------o-----------o-----------o----------o
Mean     | 7654.25%  | 7641.57%  |  0.1659% |
---------o-----------o-----------o----------o
Variance | 165.35714 | 1230.619  |  -86.5%  |
---------o-----------o-----------o----------o
As expected, CPU util is quite a bit higher with scx_rusty than it is
with CFS. Further experiments that could be interesting are always
enabling direct-greedy stealing between domains within a NUMA node, and
then comparing rusty NUMA and rusty ORIG. rusty NUMA prevents stealing
between NUMA nodes, so this would show whether the locality introduced
by NUMA awareness appropriately offsets the loss of work conservation.
Major PFs
---------
         o-----------o-----------o----------o
         | scx_rusty |    CFS    |  Delta   |
---------o-----------o-----------o----------o
Mean     | 5332      | 3950      | 36.566%  |
---------o-----------o-----------o----------o
Variance | 6975.5    | 5986.333  | 16.5237% |
---------o-----------o-----------o----------o

         o-----------o-----------o----------o
         | rusty NUMA| rusty ORIG|  Delta   |
---------o-----------o-----------o----------o
Mean     | 5332      | 5336.5    |  -.084%  |
---------o-----------o-----------o----------o
Variance | 6975.5    | 955.5     | 630.03%  |
---------o-----------o-----------o----------o
Also as expected, major page faults are far higher with scx_rusty
than with CFS. This is expected even with NUMA awareness, given that
scx_rusty is still less sticky than CFS.
Further experiments that could be interesting are tuning the threshold
at which we perform cross-NUMA migrations to try and keep this value
even lower. The rate of major page faults between rusty NUMA and rusty
ORIG was very close, though rusty NUMA was a bit lower.
Signed-off-by: David Vernet <void@manifault.com>
More cleanup of scx_rusty. Let's move the LoadBalancer out of rusty.rs and into
its own file. It will soon be extended quite a bit to support multi-NUMA and
other multivariate LB cost functions, so it's time to clean things up and split
it out.
Signed-off-by: David Vernet <void@manifault.com>
rusty.rs is growing a bit unwieldy. We're going to want to update its load
balancing logic somewhat significantly to account for multi-NUMA and other cost
functions, so let's start cleaning the code up so that things are more
logically segmented and easier to work with.
To start, we move the Tuner and DomainGroup/Domain objects into their own
modules.
Signed-off-by: David Vernet <void@manifault.com>
scx_rlfifo is provided as a simple example to show how to use
scx_rustland_core and it's not supposed to be used in a real production
environment.
To prevent performance bug reports, print an explicit warning when it
starts, clarifying the goal of this scheduler.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Small improvement to make the scheduler a bit more responsive, without
introducing too much complexity or too much CPU overhead.
This can be achieved by replacing a sleep of 1ms with a sched_yield()
every time the scheduler has finished dispatching all the queued
tasks.
This also makes the code a bit smaller and easier to read.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Provide distinct methods to set the target CPU and the per-task time
slice to dispatched tasks.
Moreover, provide a constructor to create a DispatchedTask from a
QueuedTask (this allows a task to be automatically bounced from the
scheduler to the BPF dispatcher without having to set the individual
task's attributes by hand).
This also allows most of the attributes of DispatchedTask to be made
private; in particular, it allows hiding cpumask_cnt, which should only
be used internally between the BPF and the user-space components.
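A self-contained sketch of the intended usage pattern; the structs and
method names below are simplified stand-ins, not the exact
scx_rustland_core API (the real types carry more fields, such as the
now-private cpumask_cnt):

```rust
struct QueuedTask { pid: i32, cpu: i32 }

struct DispatchedTask { pid: i32, cpu: i32, slice_ns: u64 }

impl DispatchedTask {
    // Constructor: inherit pid and the BPF-selected CPU from the
    // queued task, so callers only override what they care about.
    fn new(task: &QueuedTask) -> Self {
        DispatchedTask { pid: task.pid, cpu: task.cpu, slice_ns: 0 }
    }
    fn set_cpu(&mut self, cpu: i32) { self.cpu = cpu; }
    fn set_slice_ns(&mut self, slice_ns: u64) { self.slice_ns = slice_ns; }
}

fn main() {
    let queued = QueuedTask { pid: 1234, cpu: 2 };
    let mut task = DispatchedTask::new(&queued);
    task.set_cpu(3);                // override the target CPU
    task.set_slice_ns(5_000_000);   // 5ms slice for this dispatch only
    println!("dispatch pid={} cpu={} slice={}ns",
             task.pid, task.cpu, task.slice_ns);
}
```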
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Provide a way to set a different time slice per-task, by adding a new
attribute slice_ns to the DispatchedTask struct.
This attribute determines the time slice assigned to the task; if it is
set to 0, the global time slice (either the default one or the
effective one, if set) will be used.
At the same time, remove the payload attribute, which is basically
unused (scx_rustland used it to send the task's vruntime to the BPF
dispatcher for debugging purposes, but it's not very useful anymore at
this point).
In the future we may introduce a proper interface to attach a custom
payload to each task.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
This is to potentially reduce issues with folks
using different versions of libbpf at runtime.
This also:
- makes static linking of libbpf the default
- adds steps in `meson setup` to fetch libbpf and make it
There is no need to generate source code in a temporary directory with
RustLandBuilder(), we can simply generate code in-tree and exclude the
generated source files from .gitignore.
Having the generated source files in-tree can help to debug potential
build issues (and it also allows us to drop the tempfile crate
dependency).
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce a wrapper to scx_utils::BpfBuilder that can be used to build
the BPF component provided by scx_rustland_core.
The source of the BPF component (main.bpf.c) is included in the crate
as an array of bytes; the content is then unpacked to a temporary file
to perform the build.
The RustLandBuilder() helper is also used to generate bpf.rs (which
implements the low-level user-space Rust connector to the BPF
component).
Schedulers based on scx_rustland_core can simply use RustLandBuilder()
to build the backend provided by scx_rustland_core.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce a helper function to update the counter of queued and
scheduled tasks (used to notify the BPF component whether the
user-space scheduler still has some pending work to do).
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_rustland has significantly evolved since its original design.
With the introduction of scx_rustland_core and the inclusion of the
scx_rlfifo example, scx_rustland's focus can be shifted from solely
being an "easy-to-read Rust scheduler template" to a fully functional
scheduler.
For this reason, update the README and documentation to reflect its
revised design, objectives, and intended use cases.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Move the BPF component of scx_rustland to scx_rustland_core and make it
available to other user-space schedulers.
NOTE: main.bpf.c and bpf.rs are not pre-compiled in the
scx_rustland_core crate, they need to be included in the user-space
scheduler's source code in order to be compiled/linked properly.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce a separate crate (scx_rustland_core) that can be used to
implement sched-ext schedulers in Rust that run in user-space.
This commit only provides the basic layout for the new crate and the
abstraction to the custom allocator.
In general, any scheduler that has a user-space component needs to use
the custom allocator to prevent potential deadlock conditions, caused by
page faults (a kthread needs to run to resolve the page fault, but the
scheduler is blocked waiting for the user-space page fault to be
resolved => deadlock).
However, we don't want to necessarily enforce this constraint to all the
existing Rust schedulers, some of them may do all user-space allocations
in safe paths, hence the separate scx_rustland_core crate.
Merging this code in scx_utils would force all the Rust schedulers to
use the custom allocator.
In a future commit the scx_rustland backend will be moved to
scx_rustland_core, making it a totally generic BPF scheduler framework
that can be used to implement user-space schedulers in Rust.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Now that we have a new 'infeasible' crate that abstracts the logic for
implementing the infeasible weights solution, let's update rusty to use
it.
Signed-off-by: David Vernet <void@manifault.com>
The new topology crate allows us to replace the custom rustland topology
logic with the logic in the topology crate itself.
Signed-off-by: David Vernet <void@manifault.com>
I have a usecase where specific nice values are used to bucket tasks
into groups that are handled separately by different `scx_layered`
policies, with no implications of relative priority between niceness X,
X + 1, X - 1, etc. In other words, nicevals are used as simple tags in
this scenario.
If we wanted to treat a specific niceness this way e.g. `11`, we could
do so with AND'd MATCH_NICE_{ABOVE,BELOW} like so:
```json
"matches" : [
[
{
"NiceAbove": 10
},
{
"NiceBelow": 12
},
],
],
```
But this is unnecessarily verbose and doesn't communicate the intent of
the match very well. Adding a `NiceEquals` matcher simplifies the
config and makes intent obvious:
```json
"matches" : [
[
{
"NiceEquals": 11
},
],
],
```
This PR adds support for such a matcher.
Also, rename `layer_match.nice_above_or_below` to just
`layer_match.nice`, as the former doesn't describe the newly-added
matcher's use of the field. It's still obvious that `layer_match.nice`
is relevant to MATCH_NICE_{ABOVE, BELOW, EQUALS}.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
As mentioned in the previous commit, for some reason we're sometimes
(non-deterministically) not seeing the updated cpumask / layer values in
BPF if we initialize the cpumasks here before attaching. Let's undo this
for now to avoid observing buggy behavior, until we figure it out.
Signed-off-by: David Vernet <void@manifault.com>
This reverts commit 56ff3437a2.
For some reason we seem to be non-deterministically failing to see the
updated layer values in BPF if we initialize before attaching. Let's
just undo this specific part so that we're no longer blocked on the
breakage, and we can figure it out async.
Signed-off-by: David Vernet <void@manifault.com>
Currently, in layered_dispatch, we do the following:
1. Iterate over all preempt=true layers, and first try to consume from
them.
2. Iterate over all confined layers, and consume from them if we find a
layer with a cpumask that contains the consuming CPU.
3. Iterate over all grouped and open layers and consume from them in
ordered sequence.
In (2), we're only iterating over confined layers, but we should also be
iterating over grouped layers. Otherwise, despite a consuming CPU being
allocated to a specific grouped layer, the CPU will consume from
whichever grouped or open layer has a task that's ready to run.
Signed-off-by: David Vernet <void@manifault.com>
In layered_init, we're currently setting all bits in every layer's
cpumask, and then asynchronously updating the cpumasks at later time to
reflect their actual values at runtime. Now that we're updating the
layered code to initialize the cpumasks before we attach the scheduler,
we can instead have the init path actually refresh and initialize the
cpumasks directly.
Signed-off-by: David Vernet <void@manifault.com>
We currently have a bug in layered wherein we could fail to propagate
layer updates from user space to kernel space if a layer is never
adjusted after it's first initialized. For example, in the following
configuration:
[
  {
    "name": "workload.slice",
    "comment": "main workload slice",
    "matches": [
      [
        {
          "CgroupPrefix": "workload.slice/"
        }
      ]
    ],
    "kind": {
      "Grouped": {
        "cpus_range": [30, 30],
        "util_range": [0.0, 1.0],
        "preempt": false
      }
    }
  },
  {
    "name": "normal",
    "comment": "the rest",
    "matches": [
      []
    ],
    "kind": {
      "Grouped": {
        "cpus_range": [2, 2],
        "util_range": [0.0, 1.0],
        "preempt": false
      }
    }
  }
]
Both layers are static, and need only be resized a single time, so the
configuration would never be propagated to the kernel due to us never
calling update_bpf_layer_cpumask(). Let's instead have the
initialization propagate changes to the skeleton before we attach the
scheduler.
This has the advantage both of fixing the bug mentioned above where a
static configuration is never propagated to the kernel, and that we
don't have a short period when the scheduler is first attached where we
don't make optimal scheduling decisions due to the layer resizing not
being propagated.
Signed-off-by: David Vernet <void@manifault.com>
Add a command line option to enable/disable the sched-ext built-in idle
selection logic in the user-space scheduler.
With this option the user-space scheduler will try to dispatch tasks on
the CPU selected during the .select_cpu() phase (using the built-in idle
selection logic).
Without this option the user-space scheduler will try to dispatch tasks
to the first CPU available.
The former can be useful to improve throughput, since tasks are more
likely to stick on the same CPU, while the latter can provide better
system responsiveness, especially when the system is significantly busy.
Given that, by default, tasks can be dispatched directly bypassing the
user-space scheduler if an idle CPU is found during .select_cpu(), the
user-space scheduler is primarily engaged only when the system is busy
(no idle CPUs are available). Under these circumstances, it is typically
more efficient to dispatch tasks on the first available CPU. Hence, the
default behavior is to ignore built-in idle selection logic in the
user-space scheduler.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Checking if a CPU is idle or busy in the user-space scheduler is a bit
redundant, considering that we also rely on the built-in idle selection
logic in the BPF part.
Therefore get rid of the additional idle selection logic in the
user-space scheduler and rely on the built-in idle selection.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce an option to send all scheduling events and actions to
user-space, disabling any form of in-kernel optimization.
Enabling this option will likely make the system less responsive (but
more predictable in terms of performance) and it can be useful for
debugging purposes.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
scx_rusty has logic in the scheduler to inspect the host to
automatically build scheduling domains across every L3 cache. This would
be generically useful for many different types of schedulers, so let's
add it to the scx_utils crate so it can be used by others.
Signed-off-by: David Vernet <void@manifault.com>
The buffer used to store struct queued_task_ctx items fetched from the
BPF ring buffer needs to be aligned to the architecture register size,
otherwise we may hit misaligned pointer dereference issues, such as:
thread 'main' panicked at src/bpf.rs:162:43:
misaligned pointer dereference: address must be a multiple of 0x8 but is 0x56516a51e004
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Prevent this by making sure the buffer is always 64-bit aligned.
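One way to guarantee that alignment is to back the staging buffer with
64-bit storage and hand out a byte view of it; a minimal sketch (the
item size below is made up, not the real sizeof(struct
queued_task_ctx)):

```rust
fn main() {
    const TASK_SIZE: usize = 64; // illustrative item size, in bytes
    // A Vec<u64> allocation is always 8-byte aligned, so a byte view of
    // it can safely be cast back to the (8-byte aligned) C struct type.
    let mut storage = vec![0u64; TASK_SIZE / 8];
    let buf: &mut [u8] = unsafe {
        std::slice::from_raw_parts_mut(storage.as_mut_ptr() as *mut u8, TASK_SIZE)
    };
    assert_eq!(buf.as_ptr() as usize % 8, 0);
    buf[0] = 0xff; // the ring buffer callback would copy items here
}
```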
Fixes: 93dc615 ("scx_rustland: use a ring buffer for queued tasks")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Switch from a BPF_MAP_TYPE_QUEUE to a BPF_MAP_TYPE_RINGBUF to store the
tasks that need to be processed by the user-space scheduler.
A ring buffer allows us to save a lot of memory copies and syscalls,
since the memory is directly shared between the BPF and the user-space
components.
Performance profile before this change:
2.44% [kernel] [k] __memset
2.19% [kernel] [k] __sys_bpf
1.59% [kernel] [k] __kmem_cache_alloc_node
1.00% [kernel] [k] _copy_from_user
After this change:
1.42% [kernel] [k] __memset
0.14% [kernel] [k] __sys_bpf
0.10% [kernel] [k] __kmem_cache_alloc_node
0.07% [kernel] [k] _copy_from_user
Both the overhead of sys_bpf() and copy_from_user() are reduced by a
factor of ~15x now (only the dispatch path is using sys_bpf() now).
NOTE: despite being very effective, the current implementation is a bit
of a hack. This is because the present ring buffer API exclusively
permits consumption in a greedy manner, where multiple items can be
consumed simultaneously. However, libbpf-rs does not provide precise
information regarding the exact number of items consumed. By utilizing a
more refined libbpf-rs API [1] we may be able to improve this code a
bit.
Moreover, libbpf-rs doesn't provide an API for the user_ring_buffer, so
at the moment there's not a trivial way to apply the same change to the
dispatched tasks.
However, just with this change applied, the overhead of sys_bpf() and
copy_from_user() is already minimal, so we won't get much benefits by
changing the dispatch path to use a BPF ring buffer.
[1] https://github.com/libbpf/libbpf-rs/pull/680
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Instead of using a BPF_MAP_TYPE_ARRAY to store which tasks are running
on which CPU we can simply use a global array, mapped in the user-space
address space.
In this way we can avoid a lot of memory copies and calls to sys_bpf(),
significantly reducing the scheduler's overhead.
Keep in mind that we don't need to be 100% correct while accessing this
information, so we can accept some fuzziness in exchange for the
reduced overhead.
Performance profile before this change:
5.52% [kernel] [k] __sys_bpf
4.84% [kernel] [k] __kmem_cache_alloc_node
4.71% [kernel] [k] map_lookup_elem
4.10% [kernel] [k] _copy_from_user
3.51% [kernel] [k] bpf_map_copy_value
3.12% [kernel] [k] check_heap_object
After this change:
2.20% [kernel] [k] __sys_bpf
1.91% [kernel] [k] map_lookup_and_delete_elem
1.60% [kernel] [k] __kmem_cache_alloc_node
1.10% [kernel] [k] _copy_from_user
0.12% [kernel] [k] check_heap_object
n/a bpf_map_copy_value
n/a map_lookup_elem
With this change we can reduce the overhead of sys_bpf() by ~2x and
the overhead of copy_from_user() by ~4x.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Currently, the primary bottleneck in scx_rustland lies within its custom
memory allocator, which is used to prevent page faults in the user-space
scheduler.
This is pretty evident looking at perf top:
39.95% scx_rustland [.] <scx_rustland::bpf::alloc::RustLandAllocator as core::alloc::global::GlobalAlloc>::alloc
3.41% [kernel] [k] _copy_from_user
3.20% [kernel] [k] __kmem_cache_alloc_node
2.59% [kernel] [k] __sys_bpf
2.30% [kernel] [k] __kmem_cache_free
1.48% libc.so.6 [.] syscall
1.45% [kernel] [k] __virt_addr_valid
1.42% scx_rustland [.] <scx_rustland::bpf::alloc::RustLandAllocator as core::alloc::global::GlobalAlloc>::dealloc
1.31% [kernel] [k] _copy_to_user
1.23% [kernel] [k] entry_SYSRETQ_unsafe_stack
However, there's no need to reinvent the wheel here: rather than
relying on an overly simplistic and inefficient allocator, we can rely
on buddy-alloc [1], which is also capable of operating on a
preallocated memory buffer.
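To illustrate the general shape of such an allocator (a GlobalAlloc
implementation serving allocations out of a preallocated arena), here
is a toy bump allocator. It only sketches the concept: it never frees
memory and is not what scx_rustland or buddy-alloc actually implement:

```rust
use std::alloc::{GlobalAlloc, Layout};
use std::sync::atomic::{AtomicUsize, Ordering};

const ARENA_SIZE: usize = 64 * 1024 * 1024;
static mut ARENA: [u8; ARENA_SIZE] = [0; ARENA_SIZE];

// Bump allocator over the preallocated arena: no page faults can occur
// while allocating, since the backing memory already exists.
struct ArenaAlloc {
    next: AtomicUsize, // offset of the next free byte in ARENA
}

unsafe impl GlobalAlloc for ArenaAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let base = std::ptr::addr_of!(ARENA) as usize;
        loop {
            let cur = self.next.load(Ordering::Relaxed);
            let start = (base + cur + layout.align() - 1) & !(layout.align() - 1);
            let end = start + layout.size() - base;
            if end > ARENA_SIZE {
                return std::ptr::null_mut(); // arena exhausted
            }
            if self
                .next
                .compare_exchange(cur, end, Ordering::Relaxed, Ordering::Relaxed)
                .is_ok()
            {
                return start as *mut u8;
            }
        }
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
        // A real allocator (e.g. buddy-alloc) reclaims memory here.
    }
}

fn main() {
    // Normally this would be registered with #[global_allocator];
    // here we just exercise it directly.
    let a = ArenaAlloc { next: AtomicUsize::new(0) };
    let layout = Layout::from_size_align(128, 8).unwrap();
    let p = unsafe { a.alloc(layout) };
    assert!(!p.is_null());
    assert_eq!(p as usize % 8, 0);
}
```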
After switching to buddy-alloc, the performance profile under the same
workload conditions looks like the following:
6.01% [kernel] [k] _copy_from_user
5.21% [kernel] [k] __kmem_cache_alloc_node
4.45% [kernel] [k] __sys_bpf
3.80% [kernel] [k] __kmem_cache_free
2.79% libc.so.6 [.] syscall
2.34% [kernel] [k] __virt_addr_valid
2.26% [kernel] [k] _copy_to_user
2.14% [kernel] [k] __check_heap_object
2.10% [kernel] [k] __check_object_size.part.0
2.02% [kernel] [k] entry_SYSRETQ_unsafe_stack
With this change in place, the primary overhead is now moved to the
bpf() syscall and the copies between kernel and user-space (this could
potentially be optimized in the future using BPF ring buffers, instead
of BPF FIFO queues).
A closer look at the allocator overhead before vs. after this change:
[before]
39.95% scx_rustland [.] core::alloc::global::GlobalAlloc>::alloc
1.42% scx_rustland [.] core::alloc::global::GlobalAlloc>::dealloc
[after]
1.50% scx_rustland [.] core::alloc::global::GlobalAlloc>::alloc
0.76% scx_rustland [.] core::alloc::global::GlobalAlloc>::dealloc
[1] https://crates.io/crates/buddy-alloc
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In order to prevent duplicate PIDs in the TaskTree (BTreeSet), we
perform an O(N) search each time we add an item, to verify whether the
PID already exists or not.
Under heavy stress test conditions the O(N) complexity can have a
potential impact on the overall performance.
To mitigate this, introduce a HashMap that can be used to retrieve
tasks by PID, typically with O(1) complexity. This could potentially
degrade to O(N) in the presence of hash collisions, but even in that
case, accessing the hash map is still more efficient than scanning all
the entries in the BTreeSet to search for the target PID.
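A simplified sketch of the resulting structure (field names and the
vruntime ordering key are illustrative):

```rust
use std::collections::{BTreeSet, HashMap};

#[derive(Default)]
struct TaskTree {
    tasks: BTreeSet<(u64, i32)>, // ordered by (vruntime, pid)
    task_map: HashMap<i32, u64>, // pid -> vruntime, O(1) duplicate check
}

impl TaskTree {
    fn push(&mut self, pid: i32, vruntime: u64) {
        // If the pid is already queued, drop the stale entry first
        // instead of scanning the whole BTreeSet for it.
        if let Some(old) = self.task_map.insert(pid, vruntime) {
            self.tasks.remove(&(old, pid));
        }
        self.tasks.insert((vruntime, pid));
    }

    fn pop(&mut self) -> Option<(u64, i32)> {
        let entry = self.tasks.pop_first()?;
        self.task_map.remove(&entry.1);
        Some(entry)
    }
}

fn main() {
    let mut tree = TaskTree::default();
    tree.push(42, 1000);
    tree.push(42, 2000); // re-enqueue: replaces, doesn't duplicate
    assert_eq!(tree.pop(), Some((2000, 42)));
    assert_eq!(tree.pop(), None);
}
```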
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce a per-task generation counter to check the validity of the
cpumask at dispatch time.
The logic is the following:
- the cpumask generation number is incremented every time a task
calls .set_cpumask()
- when a task is enqueued the current generation number is stored in
the queued_task_ctx and relayed to the user-space scheduler
- the user-space scheduler can decide to dispatch the task on the CPU
determined by the BPF layer in .select_cpu(), redirect the task to
any other specific CPU, or redirect to the first CPU available (using
NO_CPU)
- task is then dispatched back to the BPF code along with its cpumask
generation counter
- at dispatch time the BPF code checks if the generation number is the
same and it discards the dispatch attempt if the cpumask is not valid
anymore (the task will be automatically re-enqueued by the sched-ext
core code, potentially selecting another CPU / cpumask)
- if the cpumask is valid, but the CPU selected by the user-space
scheduler is invalid (according to the cpumask), the task will be
transparently bounced by the BPF code to the shared DSQ (in this way
the user-space code can be completely abstracted and dispatches that
target invalid CPUs can be automatically fixed by the BPF layer)
This solution can prevent stalls due to dispatches targeting invalid
CPUs and it can also avoid redundant dispatch events, making the code
more efficient and the cpumask interlocking more reliable.
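A conceptual sketch of the dispatch-time decision (the real check lives
in the BPF code; the names here are illustrative):

```rust
struct TaskCtx { cpumask_cnt: u64 }
struct DispatchedTask { cpumask_cnt: u64 }

fn dispatch_action(task: &DispatchedTask, tctx: &TaskCtx,
                   cpu_in_cpumask: bool) -> &'static str {
    if task.cpumask_cnt != tctx.cpumask_cnt {
        // The cpumask changed after the task was queued: drop this
        // dispatch and let the sched-ext core re-enqueue the task.
        "discard"
    } else if !cpu_in_cpumask {
        // Valid cpumask, but the chosen CPU isn't in it: bounce the
        // task to the shared DSQ instead.
        "shared DSQ"
    } else {
        "dispatch to the selected CPU"
    }
}

fn main() {
    let tctx = TaskCtx { cpumask_cnt: 2 };
    let stale = DispatchedTask { cpumask_cnt: 1 };
    let fresh = DispatchedTask { cpumask_cnt: 2 };
    assert_eq!(dispatch_action(&stale, &tctx, true), "discard");
    assert_eq!(dispatch_action(&fresh, &tctx, false), "shared DSQ");
    assert_eq!(dispatch_action(&fresh, &tctx, true),
               "dispatch to the selected CPU");
}
```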
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
As described in [0], there is an open problem in load balancing called
the "infeasible weights" problem. Essentially, the problem boils down to
the fact that a task with disproportionately high load can be granted
more CPU time than it can actually consume per its duty cycle.
This patch implements a solution to that problem, wherein we apply the
algorithm described in the paper to adjust all infeasible weights in
the system down to a feasible weight that gives them their full duty
cycle, while allowing the remaining feasible tasks on the system to
share the remaining compute capacity on the machine.
[0]: https://drive.google.com/file/d/1fAoWUlmW-HTp6akuATVpMxpUpvWcGSAv/view?usp=drive_link
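A tiny worked example of the infeasibility condition described above,
with made-up weights on a 4-CPU machine:

```rust
fn main() {
    let weights = [10000.0_f64, 100.0, 100.0, 100.0];
    let total: f64 = weights.iter().sum();
    let capacity = 4.0; // CPUs

    for w in weights {
        // Naive proportional share in CPUs; anything above 1.0 CPU is
        // infeasible, since a single task can't exceed its duty cycle.
        // The algorithm lowers such weights until the share is exactly
        // feasible and lets the other tasks split what remains.
        let share = w / total * capacity;
        println!("weight {:7.0}: {:.2} CPUs ({})", w, share,
                 if share > 1.0 { "infeasible" } else { "feasible" });
    }
}
```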
Signed-off-by: David Vernet <void@manifault.com>
Dispatch to the shared DSQ (NO_CPU) only when the assigned CPU is not
idle anymore, otherwise maintain the same CPU that has been assigned by
the BPF layer.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>