get_task_ctx() and try_get_task_ctx() were added to provide common error
handling for task lookup failures. Since the idle "swapper" task is not
managed by sched_ext, try_get_task_ctx() covers the case where the
looked-up task may be the idle task and the lookup is allowed to fail.
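Purely as an illustration (the real helpers are BPF C in the scheduler;
the map and types below are stand-ins), the split between the two looks
like this:

  use std::collections::HashMap;

  // Illustrative Rust analogue only: the real helpers are BPF C and
  // operate on a BPF task-storage map.
  struct TaskCtx { /* per-task scheduling state */ }

  // Fallible lookup: returning None is fine, e.g. for the idle
  // "swapper" task, which has no sched_ext context.
  fn try_get_task_ctx(ctxs: &HashMap<i32, TaskCtx>, pid: i32) -> Option<&TaskCtx> {
      ctxs.get(&pid)
  }

  // Strict lookup: a missing context is unexpected, so complain loudly
  // before propagating the failure.
  fn get_task_ctx(ctxs: &HashMap<i32, TaskCtx>, pid: i32) -> Option<&TaskCtx> {
      let ctx = try_get_task_ctx(ctxs, pid);
      if ctx.is_none() {
          eprintln!("task ctx lookup failed for pid {}", pid);
      }
      ctx
  }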
Signed-off-by: Changwoo Min <changwoo@igalia.com>
We don't need to test SCX_WAKE_SYNC because SCX_WAKE_SYNC should only be
set when SCX_WAKE_TTWU is set.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
scx_lavd is a BPF scheduler that implements the LAVD (Latency-criticality
Aware Virtual Deadline) scheduling algorithm. While LAVD is new and
still evolving, its core ideas are 1) measuring how latency-critical a
task is and 2) leveraging that latency-criticality information in
various scheduling decisions (e.g., the task's deadline, time slice,
etc.). As the name implies, LAVD is built on the foundation of deadline
scheduling. The scheduler consists of a BPF part and a Rust part. The
BPF part makes all the scheduling decisions; the Rust part loads the
BPF code and handles other chores (e.g., printing sampled scheduling
decisions).
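As a toy illustration of the deadline idea only (not scx_lavd's actual
formula or units), a task's virtual deadline can be pictured as being
pulled earlier the more latency-critical the task is measured to be:

  // Toy sketch: the real LAVD heuristics, weights and units live in
  // the scheduler's BPF code; this only illustrates the general idea.
  fn virtual_deadline(now_ns: u64, base_slice_ns: u64, latency_criticality: f64) -> u64 {
      // A higher latency criticality shrinks the offset, yielding an
      // earlier deadline and thus earlier dispatch.
      let offset = base_slice_ns as f64 / (1.0 + latency_criticality);
      now_ns + offset as u64
  }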
There were a few issues, e.g. we were still mentioning the infeasible
weights problem, and arguments were written with underscores even though
clap renders them with dashes. Let's fix them up.
Signed-off-by: David Vernet <void@manifault.com>
As described in https://bugzilla.kernel.org/show_bug.cgi?id=218109,
https://github.com/sched-ext/scx/issues/147 and
https://github.com/sched-ext/sched_ext/issues/69, AMD chips can
sometimes report fully disabled CPUs as offline, which causes us to
count them when looking at /sys/devices/system/cpu/possible.
Additionally, systems can have holes in their active CPU maps. For
example, a system with CPUs 0, 1, 2, 3 possible may have only 0 and 2
active. To address this, we need to do a few things:
1. Update topology.rs to be clear that it's returning the number of
_possible_ CPUs in the system. Also update Topology to only record
online CPUs when creating its span and when iterating over sysfs to
create domains. It previously tried to record whether a CPU was online,
but that was actually broken, as the topology directory isn't present
in sysfs when the CPU is offline.
2. Schedulers should not rely on nr_possible_cpus for anything other
than interacting with per-CPU data (e.g. for stats extraction), or
e.g. verifying the maximum sizes of statically sized arrays in BPF. It
should _not_ be used for e.g. performing load calculations, etc. With
that said, we'll also need to update schedulers to not rely on the
nr_possible_cpus figure exported by the topology crate. We do that for
rusty in this patch, but don't fix any of the other schedulers beyond
updating how they call topology.rs.
3. Account for the fact that LLC IDs may be non-contiguous. For example,
if there is a single core in an LLC, then if we assign LLC IDs to
domains, the domain IDs won't be contiguous. This doesn't fit our
current model, which is used by e.g. infeasible_weights.rs. We'll
update some of the code in rusty to accommodate this (see the sketch
after this list), but we'll need to do more.
4. Update schedulers to properly reset themselves in the event of a
hotplug event. We'll take care of that in a follow-on change.
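For point 3, the gist is to map whatever (possibly sparse) LLC IDs sysfs
reports onto a dense range of domain IDs. A minimal sketch of that
mapping, with illustrative names rather than rusty's actual code:

  use std::collections::BTreeMap;

  // Sketch: assign contiguous domain IDs to possibly non-contiguous
  // LLC IDs, e.g. LLC IDs [0, 2, 5] -> domain IDs {0: 0, 2: 1, 5: 2}.
  fn dense_domain_ids(llc_ids: &[usize]) -> BTreeMap<usize, usize> {
      let mut dom_ids = BTreeMap::new();
      for &llc in llc_ids {
          let next = dom_ids.len();
          dom_ids.entry(llc).or_insert(next);
      }
      dom_ids
  }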
Signed-off-by: David Vernet <void@manifault.com>
If a CPU is offline, it could cause an LLC to go offline, which could
leave us with non-contiguous domain IDs. Right now, a few places in the
code assume contiguous domain IDs, such as the infeasible weights
crate. Let's update domain.rs and load_balance.rs to do the right
thing. We'll fix the others later.
Signed-off-by: David Vernet <void@manifault.com>
We implement the functions or(), and(), and xor() for cpumasks, but we
should also implement the corresponding bitwise operators in case
people prefer that syntax.
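A minimal sketch of what those operator impls can look like, assuming a
cpumask backed by equal-length vectors of u64 words (the crate's real
internals may differ):

  use std::ops::{BitAnd, BitOr, BitXor};

  // Illustrative cpumask: one bit per CPU, stored in 64-bit words.
  #[derive(Clone, Debug, PartialEq)]
  pub struct Cpumask {
      words: Vec<u64>,
  }

  impl Cpumask {
      fn zip_words(&self, rhs: &Cpumask, op: impl Fn(u64, u64) -> u64) -> Cpumask {
          // Assumes both masks cover the same number of CPUs.
          let words = self
              .words
              .iter()
              .zip(rhs.words.iter())
              .map(|(a, b)| op(*a, *b))
              .collect();
          Cpumask { words }
      }
  }

  impl BitAnd for &Cpumask {
      type Output = Cpumask;
      fn bitand(self, rhs: &Cpumask) -> Cpumask {
          self.zip_words(rhs, |a, b| a & b)
      }
  }

  impl BitOr for &Cpumask {
      type Output = Cpumask;
      fn bitor(self, rhs: &Cpumask) -> Cpumask {
          self.zip_words(rhs, |a, b| a | b)
      }
  }

  impl BitXor for &Cpumask {
      type Output = Cpumask;
      fn bitxor(self, rhs: &Cpumask) -> Cpumask {
          self.zip_words(rhs, |a, b| a ^ b)
      }
  }

  // Usage: `let overlap = &mask_a & &mask_b;` instead of `mask_a.and(&mask_b)`.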
Signed-off-by: David Vernet <void@manifault.com>
Offline CPUs don't have a /sys/devices/system/cpu/cpuN/topology
directory, so let's just skip them if they're not online. Schedulers
are expected to detect hotplug and handle it by gracefully restarting.
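Something along these lines (a sketch, not the exact topology.rs code):

  use std::path::PathBuf;

  // Sketch: only online CPUs expose a topology directory in sysfs, so
  // just skip any CPU whose directory is missing.
  fn cpu_topology_dirs(possible_cpus: impl Iterator<Item = usize>) -> Vec<PathBuf> {
      possible_cpus
          .filter_map(|cpu| {
              let dir = PathBuf::from(format!("/sys/devices/system/cpu/cpu{}/topology", cpu));
              if dir.exists() {
                  Some(dir)
              } else {
                  None // offline CPU: no topology directory, skip it
              }
          })
          .collect()
  }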
Signed-off-by: David Vernet <void@manifault.com>
We're iterating over min..max cpu in cpus_online(), but that's not
inclusive of the max CPU. Let's include it as well so we don't think
the last CPU is offline.
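In Rust terms, the fix boils down to switching from an exclusive to an
inclusive range (sketch):

  // Sketch: with `min_cpu..max_cpu`, the max CPU is never visited and
  // so looks offline; `..=` makes the upper bound inclusive.
  fn mark_online(online: &mut [bool], min_cpu: usize, max_cpu: usize) {
      for cpu in min_cpu..=max_cpu {
          online[cpu] = true;
      }
  }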
Signed-off-by: David Vernet <void@manifault.com>
get_clang_ver fails if clang is built from scratch.
Teach get_clang_ver to recognize the clang version even for clang built
from git.
These are the tests I ran:
# /usr/local/bin/clang --version
clang version 18.0.0git (https://github.com/llvm/llvm-project.git c458f928fad7bbcf08ab1da9949eb2969fc9f89c)
# meson-scripts/get_clang_ver /usr/local/bin/clang
18.0.0
# /usr/bin/clang --version
clang version 17.0.6 (CentOS 17.0.6-5.el9)
# meson-scripts/get_clang_ver /usr/bin/clang
17.0.6
Signed-off-by: Breno Leitao <leitao@debian.org>
In my dev environment, bpf_clang_ver comes back as NULL, since I am
using upstream Clang. Fail gracefully in this case.
# /usr/local/bin/clang --version
clang version 18.0.0git (https://github.com/llvm/llvm-project.git c458f928fad7bbcf08ab1da9949eb2969fc9f89c)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/local/bin
But the command below returns nothing
/home/leit/Devel/scx/meson-scripts/get_clang_ver /usr/local/bin/clang
Signed-off-by: Breno Leitao <leitao@debian.org>
Most of the schedulers assume that the number of possible CPUs in the
system represents the actual number of CPUs available.
This is not always true: some CPUs may be offline, or certain CPU models
(AMD CPUs for example) may include unavailable CPUs in this number.
This can lead to sub-optimal performance or even errors in the scheduler
(see for example [1][2]).
Ideally, we need to attack this issue in a more generic way, such as
having a proper API provided by a C library that can be used by all
schedulers and the topology Rust module (scx_utils crate).
But for now, let's try to mitigate most of the common sub-optimal cases
separately inside each scheduler.
For rustland we can apply some mitigations both in select_cpu() (for the
BPF part) and in the user-space part:
- the former is fixed in the sched-ext kernel by commit 94dc0c01b957
("scx: Use cpu_online_mask when resetting idle masks"). However,
adding an extra check `cpu < num_possible_cpus` in select_cpu()
allows us to properly support AMD CPUs, even with kernels that don't
have the cpu_online_mask fix yet (this doesn't always guarantee the
validity of cpu, but it should be enough to mitigate the majority of
the potential sub-optimal cases, without introducing any significant
overhead)
- the latter can be fixed by relying on topology.span(), instead of
topology.nr_cpus(), to count the number of available CPUs in the
system (see the sketch after this list).
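For the user-space side, the idea looks roughly like the following
sketch; `Topology`, `span` and the bit-counting helper are stand-ins
for the real scx_utils API:

  // Sketch: derive the number of usable CPUs from the topology span's
  // bitmask rather than from the possible-CPU count.
  struct Span {
      words: Vec<u64>, // one bit per possible CPU, set if available
  }

  impl Span {
      fn weight(&self) -> usize {
          self.words.iter().map(|w| w.count_ones() as usize).sum()
      }
  }

  struct Topology {
      span: Span,
  }

  impl Topology {
      // Number of CPUs actually available, not merely possible.
      fn nr_available_cpus(&self) -> usize {
          self.span.weight()
      }
  }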
[1] https://github.com/sched-ext/sched_ext/issues/69
[2] https://github.com/sched-ext/scx/issues/147
Link: 94dc0c01b9
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
We are failing to parse /sys/devices/system/cpu/online in systems with
just one CPU, for example:
$ vng -r --cpus 1 -- scx_rusty
Error: Failed to parse online cpus 0
Correctly handle strings containing only a single CPU during parsing.
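The parser needs to accept a bare CPU number as well as ranges, roughly
like this (a sketch, not the actual topology crate code):

  // Sketch: parse strings like "0", "0-3" or "0-3,5" from
  // /sys/devices/system/cpu/online into a list of CPU ids, accepting a
  // bare CPU number (the single-CPU case) as well as ranges.
  fn parse_cpulist(s: &str) -> Result<Vec<usize>, std::num::ParseIntError> {
      let mut cpus = Vec::new();
      for group in s.trim().split(',') {
          match group.split_once('-') {
              Some((lo, hi)) => {
                  // "2-5" style range
                  let (lo, hi): (usize, usize) = (lo.parse()?, hi.parse()?);
                  cpus.extend(lo..=hi);
              }
              // Single CPU, e.g. just "0" on a one-CPU system
              None => cpus.push(group.parse()?),
          }
      }
      Ok(cpus)
  }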
Fixes: c5a3b83b ("topology: Add new topology crate")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Given the complexity of migrating load between nodes (we're doing four
nested loops), we should add a comment explaining what we're doing. This
commit does that. In addition, we use a VecDeque to store (and then
restore) push nodes and push domains so that we can re-add them to their
respective lists in load-sorted order rather than reverse-load-sorted
order. This allows us to avoid having to do unnecessary right-shifts
every time a push object is re-added to its containing list.
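The VecDeque trick can be pictured with a toy sketch (a plain number
stands in for a push node/domain here):

  use std::collections::VecDeque;

  // Sketch: the list is kept sorted by load; we pop from the back,
  // process, and stash each entry in a deque.
  fn process_push_objects(mut sorted: Vec<u64>) -> Vec<u64> {
      let mut stash: VecDeque<u64> = VecDeque::new();

      while let Some(obj) = sorted.pop() {
          // ... try to migrate load for `obj` here ...

          // push_front keeps the stash in the same load-sorted order
          // the list had, so restoring it is a cheap append rather than
          // a series of insert(0, ..) right-shifts.
          stash.push_front(obj);
      }

      // Restore the objects to the list in load-sorted order.
      sorted.extend(stash);
      sorted
  }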
Signed-off-by: David Vernet <void@manifault.com>
Fixing alignment, moving a couple of bail! calls around, and adding a
missing break in move_between_nodes() that lets us bail out of a loop
early.
Signed-off-by: David Vernet <void@manifault.com>
As Tejun pointed out in review, the disadvantage of using
push/pull/balanced lists is that if the domains inside the nodes are
balanced, we won't be able to push load between them. I'd originally
done it that way partly as an optimization, but also to allow me to
iterate over the lists of pushable and pullable domains mutably. That
was addressed in the prior commit, but the nodes themselves were still
put into 3 buckets.
I think this is generally just a cleaner way of doing things, so let's
just collapse the nodes into a flat list as well. This prevents us from
having to coalesce the lists, std::mem::swap them, etc.
Signed-off-by: David Vernet <void@manifault.com>
Tejun pointed out that a possible issue exists in the current
implementation, wherein if you have two NUMA nodes that are imbalanced,
but their domains are internally balanced, we'll fail to migrate between
them if all nodes are in the balanced_nodes list.
To address this, let's just use a single global list for all types of
domains, and do checking internally for imbalances. The reason it was
done this way in the first place was to allow me to mutably iterate over
both vectors in a nested loop. The way around that is to just use loop
{} and push/pop domains from the list.
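That push/pop pattern looks roughly like the sketch below; `Domain` and
the migration step are placeholders for the real rusty types and logic:

  // Sketch: walk a single flat, load-sorted list of domains by popping
  // entries, which avoids needing two mutable iterators over the list.
  struct Domain {
      load: f64,
  }

  fn balance(doms: &mut Vec<Domain>, avg_load: f64) {
      loop {
          // The most loaded domain comes off the back of the sorted list.
          let Some(mut push_dom) = doms.pop() else { break };
          if push_dom.load <= avg_load {
              // Nothing left that needs to push load; put it back and stop.
              doms.push(push_dom);
              break;
          }

          // push_dom is owned here, so a pull candidate can be mutably
          // borrowed from the list at the same time.
          if let Some(pull_dom) = doms.first_mut() {
              let xfer = push_dom.load - avg_load;
              push_dom.load -= xfer;
              pull_dom.load += xfer;
          }
          // ... the real code re-adds push_dom after migrating its load ...
      }
  }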
We could do the same thing for the NUMA nodes themselves, which are also
in 3 separate lists in the LoadBalancer. We'll do that in a subsequent
commit.
Signed-off-by: David Vernet <void@manifault.com>
In scx_rusty, a CPU that is going to go idle will attempt to steal tasks
from remote domains when its domain has no tasks to run, and a remote
domain has at least greedy_threshold enqueued tasks. This stealing is
temporary, but of course has a cost in that the CPU that's stealing the
task may cause it to suffer from cache misses, or in the case of
multi-node machines, remote NUMA accesses and working sets split across
multiple domains.
Given the higher cost of cross-NUMA work stealing, let's add a separate
flag that lets users tune the threshold for doing cross-NUMA greedy task
stealing.
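The knob can be exposed as a regular clap argument along these lines
(flag names and defaults here are illustrative, not necessarily what
rusty ships):

  use clap::Parser;

  /// Sketch of the scheduler's CLI options; only the stealing knobs
  /// are shown, and their names/defaults are illustrative.
  #[derive(Debug, Parser)]
  struct Opts {
      /// Steal from another domain once it has this many queued tasks.
      #[clap(long, default_value = "1")]
      greedy_threshold: u64,

      /// Separate, higher threshold for stealing across NUMA nodes,
      /// since cross-NUMA migrations are more expensive.
      #[clap(long, default_value = "0")]
      greedy_threshold_x_numa: u64,
  }

  fn main() {
      let opts = Opts::parse();
      println!("{:?}", opts);
  }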
Signed-off-by: David Vernet <void@manifault.com>
In order to use the new consume_raw() API we need to depend on a version
of libbpf-rs that is not released yet.
Apparently, adding such a dependency may introduce a potential
dependency conflict with libbpf-sys.
Therefore, revert this change and go back to the previous consume() API.
Once a new version of libbpf-rs is out, we can update all our
dependencies to use the new libbpf-rs and re-apply this patch to
scx_rustland_core.
Fixes: 7c8c5fd ("scx_rustland_core: use new consume_raw() libbpf-rs API")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The flatcg scheduler uses a rb_node type - struct cgv_node - to keep
track of vtime. On cgroup init, a cgv_node is created and stashed in a
hashmap - cgv_node_stash - for later use. In cgroup_enqueued and
try_pick_next_cgroup, the node is inserted into the rbtree, which
required removing it from the stash before this patch's changes.
This patch makes cgv_node refcounted, which allows keeping it in the
stash for the entirety of the cgroup's lifetime. Unnecessary
bpf_kptr_xchg's and other boilerplate can be removed as a result.
Note that in addition to bpf_refcount patches, which have been upstream
for quite some time, this change depends on a more recent series [0].
[0]: https://lore.kernel.org/bpf/20231107085639.3016113-1-davemarchevsky@fb.com/
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>