Commit Graph

642 Commits

Author SHA1 Message Date
Tejun Heo
625bb84bc4 scx_lavd: Move load subtraction to quiescent state transition
scx_lavd tracks task state transitions and updates statistics on each valid
transition. However, there's an asymmetry between the runnable/running and
stopping/quiescent transitions. In the former, the runnable and running
transitions are accounted separately in update_stat_for_enq() and
update_stat_for_run(), respectively. However, in the latter, the two
transitions are combined in update_stat_for_stop().

This asymmetry leads to incorrect accounting. For example, a task's load
should be added to the cpu's load sum when the task gets enqueued and
subtracted when the task is no longer runnable (quiescent). The former is
accounted correctly in update_stat_for_enq(), but the latter is done
whenever the task stops. A task can transition between running and
stopping multiple times before becoming quiescent, so the asymmetry can
end up subtracting the load of a still-running task from the cpu's load
sum.

This patch:

- introduces LAVD_TASK_STAT_QUIESCENT and updates transit_task_stat() so
  that it can handle all valid state transitions, including multiple
  back-and-forth transitions between the two pairs - QUIESCENT <-> ENQ
  and RUNNING <-> STOPPING.

- restores the symmetry by moving the load adjustment part from
  update_stat_for_stop() to the new update_stat_for_quiescent().

This removes a good chunk of ignored transitions. The next patch will take
care of the rest.
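
As a rough illustration of the restored symmetry, the transition logic
could look like this (a sketch, not the actual patch; names follow the
description above but details differ):

	enum task_stat {
		LAVD_TASK_STAT_QUIESCENT,	/* not runnable */
		LAVD_TASK_STAT_ENQ,		/* runnable, enqueued */
		LAVD_TASK_STAT_RUNNING,		/* running on a CPU */
		LAVD_TASK_STAT_STOPPING,	/* off CPU, still runnable */
	};

	static void transit_task_stat(struct task_ctx *taskc, int next)
	{
		int prev = taskc->stat;

		/* Load is added exactly once on QUIESCENT -> ENQ and
		 * subtracted exactly once on STOPPING -> QUIESCENT, so
		 * RUNNING <-> STOPPING ping-pong never touches the
		 * cpu's load sum. */
		if (next == LAVD_TASK_STAT_ENQ &&
		    prev == LAVD_TASK_STAT_QUIESCENT)
			update_stat_for_enq(taskc);
		else if (next == LAVD_TASK_STAT_QUIESCENT)
			update_stat_for_quiescent(taskc);

		taskc->stat = next;
	}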
2024-03-26 12:23:19 -10:00
Tejun Heo
dd40377f03 scx_lavd: Drop unnecessary extern crates
Since https://doc.rust-lang.org/edition-guide/rust-2018/path-changes.html,
extern crate declarations aren't necessary. Let's drop them.
2024-03-26 12:23:19 -10:00
Tejun Heo
63bae69d2a
Merge pull request #198 from sched-ext/layered_static
layered: Make helper functions static
2024-03-26 10:33:08 -10:00
David Vernet
602ec5ada3
layered: Make helper functions static
lookup_task_ctx(), lookup_task_ctx_may_fail(), and lookup_layer()
currently don't have the static keyword, so BPF may treat them as
global functions. We don't actually want these to be global, so let's
make them static to avoid confusing the verifier.
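
For example (a simplified sketch, not the exact layered code; the map
name is an assumption):

	/* With "static", the verifier checks the function inline at
	 * each call site instead of treating it as a global function
	 * with unknown arguments. */
	static struct task_ctx *lookup_task_ctx(struct task_struct *p)
	{
		struct task_ctx *taskc;

		taskc = bpf_task_storage_get(&task_ctxs, p, 0, 0);
		if (!taskc)
			scx_bpf_error("task_ctx lookup failed");
		return taskc;
	}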

Signed-off-by: David Vernet <void@manifault.com>
2024-03-26 15:08:32 -05:00
Changwoo Min
83169481a6 scx_lavd: improve latency criticality to latency priority mapping
The old approach maps [0, maximum latency criticality] to [-boost
range, boost range). This approach is easily skewed by a single outlier
maximum value and suffers from integer truncation error. The new
approach divides the range into two -- [minimum latency criticality,
average latency criticality) and [average latency criticality, maximum
latency criticality] -- and maps them into [boost range/2, 0) and [0,
-boost range/2), respectively.
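
A sketch of the new mapping (hypothetical names; the actual integer
arithmetic in scx_lavd may differ):

	static int lat_cri_to_boost(u64 cri, u64 min_cri, u64 avg_cri,
				    u64 max_cri)
	{
		u64 half = LAVD_BOOST_RANGE / 2;

		/* Below average: map [min, avg) into (half, 0]. */
		if (cri < avg_cri && avg_cri > min_cri)
			return half * (avg_cri - cri) / (avg_cri - min_cri);
		/* At or above average: map [avg, max] into [0, -half]. */
		if (max_cri > avg_cri)
			return -(s64)(half * (cri - avg_cri) /
				      (max_cri - avg_cri));
		return 0;
	}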

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-25 22:13:41 +09:00
David Vernet
5bfd90bd64
Merge pull request #196 from sched-ext/topo_fallback
topology: Fall back to cache 0
2024-03-22 02:18:12 +10:00
David Vernet
1bd990fb87
topology: Fall back to cache 0
As described in https://github.com/sched-ext/scx/issues/195, apparently
some chips don't export information about their cache topology. There's
not much we can do if we don't have that information, so let's just
assume a unified cache per node if that happens.

Andrea suggested this patch -- I'm applying exactly what he proposed,
with a slightly modified comment.
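
An illustrative C rendering of the fallback (the real implementation is
in the Rust topology crate; using index3 as the LLC is an assumption):

	#include <stdio.h>

	/* Read a CPU's LLC id; if the kernel doesn't export the cache
	 * topology, fall back to a single unified cache 0 per node. */
	static int read_llc_id(int cpu)
	{
		char path[128];
		FILE *f;
		int id = 0;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cache/index3/id",
			 cpu);
		f = fopen(path, "r");
		if (!f)
			return 0;	/* no cache info: assume cache 0 */
		if (fscanf(f, "%d", &id) != 1)
			id = 0;
		fclose(f);
		return id;
	}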

Suggested-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: David Vernet <void@manifault.com>
2024-03-21 11:10:55 -05:00
Changwoo Min
2b5d3c1300 scx_lavd: change sched_prio_to_latency_weight to more skewed one
Replace the latency weight array with a more skewed one, which is the
inverse of sched_prio_to_slice_weight. It turns out the more skewed one
works better under highly CPU-overloaded cases since it gives a longer
deadline to non-latency-critical tasks.
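
Conceptually (a hypothetical sketch, not the actual table):

	/* The skewed latency weight is the inverse of the slice weight,
	 * normalized so that nice 0 (weight 1024) maps back to 1024:
	 * high slice weight => small latency weight => short deadline,
	 * and vice versa. */
	static u64 lat_weight(u64 slice_weight)
	{
		return (1024ULL * 1024ULL) / slice_weight;
	}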

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 14:01:44 +09:00
Changwoo Min
9c12b607ca scx_lavd: increase LAVD_LC_RUNTIME_MAX for improved lat_prio
Since the calculated runtime now increases by accounting for the number
of fully consumed time slices, raise the upper bound
(LAVD_LC_RUNTIME_MAX) of the runtime considered in the latency
calculation. Also, add LAVD_SLICE_BOOST_MAX_PRIO to keep
slice_boost_prio from suddenly dropping to zero.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 10:59:13 +09:00
Changwoo Min
32570789d8 scx_lavd: improve the accuracy of runtime per schedule
Take slice_boost_prio -- how many times a full time slice was consumed
-- into consideration when calculating run_time_ns (runtime per
schedule). This improves the accuracy, especially when a task is
overscheduled and its time slice is reduced to enforce fairness.
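
Roughly (hypothetical names, simplified from the actual calculation):

	/* A task that keeps consuming its full (possibly shrunken)
	 * slice would run longer if unconstrained, so scale the
	 * measured runtime by how many consecutive full slices were
	 * consumed. */
	static u64 calc_runtime_per_sched(struct task_ctx *taskc)
	{
		return taskc->avg_runtime * (1 + taskc->slice_boost_prio);
	}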

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 10:32:09 +09:00
Tejun Heo
0e53e7a00a
Merge pull request #194 from multics69/pr192-comments
scx_lavd: addressed comments from PR #192
2024-03-19 08:48:08 -07:00
Changwoo Min
b37370bb35 scx_lavd: entail two invalid task state transitions
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-20 00:15:47 +09:00
Changwoo Min
8860f26ff4 scx_lavd: add a sanity check if runtime is negative
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-20 00:15:37 +09:00
Changwoo Min
fa2282363b scx_lavd: more explanation about sched_prio_to_latency_weight
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 21:31:37 +09:00
Changwoo Min
24bddad9b4 scx_lavd: fix a typo
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 21:19:55 +09:00
Tejun Heo
6bd3e6c138
Merge pull request #193 from sirlucjan/services-update3
scx: update /etc/default/scx
2024-03-18 23:36:22 -07:00
Changwoo Min
512c4e794f scx_lavd: fix potential CPU stall in lavd_select_cpu()
Returning prev_cpu after picking an idle CPU will cause that idle CPU
to stall, because the idle core was already punched out from the idle
mask by the scx core, so it is no longer idle from the scx core's point
of view.

This fix performs the idle core selection as the last step, so it never
returns prev_cpu after picking an idle core.
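
The shape of the fix, roughly (simplified sketch):

	s32 BPF_STRUCT_OPS(lavd_select_cpu, struct task_struct *p,
			   s32 prev_cpu, u64 wake_flags)
	{
		s32 cpu;

		/* Pick an idle CPU as the very last step and return it
		 * directly: a successful scx_bpf_pick_idle_cpu() clears
		 * the CPU from the idle mask, so returning prev_cpu
		 * after that would leave the picked CPU claimed but
		 * unused, stalling it. */
		cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
		if (cpu >= 0)
			return cpu;
		return prev_cpu;
	}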

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:46 +09:00
Changwoo Min
e41c674fae scx_lavd: remove redundant latency calculation at calc_latency_weight()
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:15 +09:00
Changwoo Min
865269f438 scx_lavd: remove unnecessary condition check at slice_fully_consumed()
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:15 +09:00
Changwoo Min
c2b1a10e17 scx_lavd: remove unnecessary condition check at update_stat_for_stop()
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:15 +09:00
Changwoo Min
a27b509452 scx_lavd: use is_wakeup_ef() in checking wait flag
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:15 +09:00
Changwoo Min
419ccae8db scx_lavd: improve the clarity of the task state transition
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:01 +09:00
Changwoo Min
66e15285ea scx_lavd: move scx_bpf_error() calls to get_cpu_ctx{_id}()
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:01 +09:00
Changwoo Min
0fc5591bf6 scx_lavd: add a utility func, {try_}get_task_ctx()
get_task_ctx() and try_get_task_ctx() were added for common error
handling of task lookup failures. Since the idle "swapper" task is not
managed by sched_ext, try_get_task_ctx() is added for cases where the
idle task may be looked up and a failure is not an error.
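
Roughly (a sketch; map and error-handling details may differ from the
actual code):

	static struct task_ctx *try_get_task_ctx(struct task_struct *p)
	{
		return bpf_task_storage_get(&task_ctx_stor, p, 0, 0);
	}

	static struct task_ctx *get_task_ctx(struct task_struct *p)
	{
		struct task_ctx *taskc = try_get_task_ctx(p);

		/* Lookup failure for the idle "swapper" task is
		 * expected and handled by callers of the try_ variant;
		 * everywhere else it is an error. */
		if (!taskc)
			scx_bpf_error("task_ctx lookup failed");
		return taskc;
	}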

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:01 +09:00
Changwoo Min
97b4d9ce5a scx_lavd: remove unnecessary condition check in is_wakeup_wf()
We don't need to test SCX_WAKE_SYNC because SCX_WAKE_SYNC should only be
set when SCX_WAKE_TTWU is set.
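
That is, roughly:

	static bool is_wakeup_wf(u64 wake_flags)
	{
		/* SCX_WAKE_SYNC implies SCX_WAKE_TTWU, so testing TTWU
		 * alone is sufficient. */
		return !!(wake_flags & SCX_WAKE_TTWU);
	}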

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:01 +09:00
Changwoo Min
47e7238b13 scx_lavd: improve the description of fairness
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:45:37 +09:00
Changwoo Min
670c1b5b92 scx_lavd: print one scheduling decision by default
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:30:41 +09:00
Changwoo Min
315e5b3fe2 scx_lavd: remove unnecessary arg from put_local_rq()
cpu_id is unused and unnecessary in put_local_rq(), so it is removed.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:30:26 +09:00
Changwoo Min
ead7d55c5c scx_lavd: replace num_cpus with scx_utils::Topology
This removes the external crate dependency and avoids the known bugs
in the num_cpus crate.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:30:26 +09:00
Changwoo Min
17bce169e7 scx_lavd: fix formatting issues in main.rs and main.bpf.c
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:30:26 +09:00
Piotr Gorski
040ade57ef
scx: update /etc/default/scx
Signed-off-by: Piotr Gorski <lucjan.lucjanov@gmail.com>
2024-03-18 09:25:11 +01:00
David Vernet
80986e4a23
Merge pull request #192 from multics69/scx_lavd
scx_lavd: add LAVD (Latency-criticality Aware Virtual Deadline) scheduler
2024-03-17 22:09:56 -07:00
Changwoo Min
fb73520990 scx_lavd: add scx_lavd to the meson build 2024-03-16 10:55:37 +09:00
Changwoo Min
6ab3928a0d scx_lavd: add scx_lavd (Latency-criticality Aware Virtual Deadline) scheduler
scx_lavd is a BPF scheduler that implements the LAVD (Latency-criticality
Aware Virtual Deadline) scheduling algorithm. While LAVD is new and
still evolving, its core ideas are 1) measuring how latency-critical a
task is and 2) leveraging the task's latency-criticality information in
making various scheduling decisions (e.g., the task's deadline, time
slice, etc.). As the name implies, LAVD is built on the foundation of
deadline scheduling. This scheduler consists of a BPF part and a Rust
part. The BPF part makes all the scheduling decisions; the Rust part
loads the BPF code and handles other chores (e.g., printing sampled
scheduling decisions).
2024-03-16 10:31:07 +09:00
David Vernet
3ad0fff855
Merge pull request #188 from sched-ext/topology-fix-single-cpu
topology: support single CPU systems
2024-03-14 13:16:59 -05:00
David Vernet
6ecb6aea68
Merge pull request #191 from sched-ext/fix_possible_cpus
rusty: Account for disabled but offline CPUs
2024-03-14 13:12:26 -05:00
David Vernet
35b7dc95d0
rusty: Fix up the scheduler description
There were a few issues, e.g. we were still mentioning the infeasible
weights problem, and arguments were written with underscores despite
clap rendering them with dashes. Let's fix them up.

Signed-off-by: David Vernet <void@manifault.com>
2024-03-14 11:21:03 -05:00
David Vernet
4520514fe8
rusty: Account for disabled but offline CPUs
As described in https://bugzilla.kernel.org/show_bug.cgi?id=218109,
https://github.com/sched-ext/scx/issues/147 and
https://github.com/sched-ext/sched_ext/issues/69, AMD chips can
sometimes report fully disabled CPUs as offline, which causes us to
count them when looking at /sys/devices/system/cpu/possible.

Additionally, systems can have holes in their active CPU maps. For
example, a system with CPUs 0, 1, 2, 3 possible may have only 0 and 2
active. To address this, we need to do a few things:

1. Update topology.rs to be clear that it's returning the number of
   _possible_ CPUs in the system. Also update Topology to only record
   online CPUs when creating its span and iterating over sysfs when
   creating domains. It was previously trying to record when a CPU was
   online, but this was actually broken as the topology directory isn't
   present in sysfs when the CPU is offline.

2. Schedulers should not be relying on nr_possible_cpus for anything
   other than interacting with per-CPU data (e.g. for stats extraction),
   or e.g. verifying maximum sizes of statically sized arrays in BPF. It
   should _not_ be used for e.g. performing load calculations, etc. With
   that said, we'll also need to update schedulers to not rely on the
   nr_possible_cpus figure being exported by the topology crate. We do
   that for rusty in this patch, but don't fix any of the others other
   than updating how they call topology.rs.

3. Account for the fact that LLC IDs may be non-contiguous. For example,
   if there is a single core in an LLC, then if we assign LLC IDs to
   domains, then the domain IDs won't be contiguous. This doesn't fit
   our current model which is used by e.g. infeasible_weights.rs. We'll
   update some of the code in rusty to accommodate this, but we'll need
   to do more.

4. Update schedulers to properly reset themselves in the event of a
   hotplug event. We'll take care of that in a follow-on change.

Signed-off-by: David Vernet <void@manifault.com>
2024-03-14 11:15:28 -05:00
David Vernet
2b8a3ea984
rusty: Iterate over domains, not IDs
If a CPU is offline, it could cause an LLC to go offline, which could
cause us to have non-contiguous domain IDs. Right now, a few places in
code assume contiguous domain IDs, such as in the infeasible weights
crate. Let's update domain.rs and load_balance.rs to do the right
thing. We'll fix the others later.

Signed-off-by: David Vernet <void@manifault.com>
2024-03-14 11:02:01 -05:00
David Vernet
4e9cf5181e
rusty: Fix domain weight() function
We were looking at the domain cpumask length, instead of its weight.
Correct the oversight.

Signed-off-by: David Vernet <void@manifault.com>
2024-03-14 11:02:01 -05:00
David Vernet
bc0336d727
cpumask: Add bitwise ops for cpumask
We implement functions or(), and(), and xor() for cpumasks, but we
should also implement the bitwise ops for those operations in case
people prefer that syntax.

Signed-off-by: David Vernet <void@manifault.com>
2024-03-14 11:02:01 -05:00
David Vernet
84a202e2a0
topology: Skip offline CPUs
Offline CPUs don't have a /sys/devices/system/cpu/cpuN/topology
directory, so let's just skip them if they're not online. Schedulers
are expected to detect hotplug and gracefully restart.

Signed-off-by: David Vernet <void@manifault.com>
2024-03-14 11:02:01 -05:00
David Vernet
583696f940
topology: Include last CPU in online
We're iterating from min..max cpu in cpus_online(), but that range is
not inclusive of the max CPU. Iterate over the inclusive range
min..=max instead so we don't consider the last CPU offline.

Signed-off-by: David Vernet <void@manifault.com>
2024-03-14 11:01:52 -05:00
David Vernet
0ecac96467
Merge pull request #189 from sched-ext/rustland-offline-cpus
scx_rustland: mitigate sub-optimal performance with offline CPUs
2024-03-14 11:00:28 -05:00
Tejun Heo
e4dbc22b6c
Merge pull request #190 from leitao/main
Better parsing for clang version
2024-03-14 05:54:27 -10:00
Breno Leitao
1745daea0f meson: support clang built from git
get_clang_ver fails if clang is built from scratch.

Teach get_clang_ver to recognize the clang version even for clang built
from git.

These are the tests I ran:

	# /usr/local/bin/clang --version
	clang version 18.0.0git (https://github.com/llvm/llvm-project.git c458f928fad7bbcf08ab1da9949eb2969fc9f89c)
	# meson-scripts/get_clang_ver /usr/local/bin/clang
	18.0.0

	# /usr/bin/clang --version
	clang version 17.0.6 (CentOS 17.0.6-5.el9)
	# meson-scripts/get_clang_ver /usr/bin/clang
	17.0.6

Signed-off-by: Breno Leitao <leitao@debian.org>
2024-03-14 03:39:09 -07:00
Breno Leitao
4a18e9c7f7 meson: fail if get_clang_ver fails
In my dev environment, bpf_clang_ver comes back as NULL, since I am
using upstream Clang. Fail gracefully in this case.

	#  /usr/local/bin/clang --version
	clang version 18.0.0git (https://github.com/llvm/llvm-project.git c458f928fad7bbcf08ab1da9949eb2969fc9f89c)
	Target: x86_64-unknown-linux-gnu
	Thread model: posix
	InstalledDir: /usr/local/bin

But the command below returns nothing
	/home/leit/Devel/scx/meson-scripts/get_clang_ver /usr/local/bin/clang

Signed-off-by: Breno Leitao <leitao@debian.org>
2024-03-14 03:27:19 -07:00
Andrea Righi
2cd3929475 scx_rustland: mitigate sub-optimal performance with offline CPUs
Most of the schedulers assume that the number of possible CPUs in the
system represents the actual number of CPUs available.

This is not always true: some CPUs may be offline or certain CPU models
(AMD CPUs for example) may include unavailable CPUs in this number.

This can lead to sub-optimal performance or even errors in the scheduler
(see for example [1][2]).

Ideally, we should attack this issue in a more generic way, such as
having a proper API provided by a C library that can be used by all
schedulers and the topology Rust module (scx_utils crate).

But for now, let's try to mitigate most of the common sub-optimal cases
separately inside each scheduler.

For rustland we can apply some mitigations both in select_cpu() (for the
BPF part) and in the user-space part:

 - the former is fixed in the sched-ext kernel by commit 94dc0c01b957
   ("scx: Use cpu_online_mask when resetting idle masks"). However,
   adding an extra check `cpu < num_possible_cpus` in select_cpu()
   allows us to properly support AMD CPUs even with kernels that don't
   have the cpu_online_mask fix yet (this doesn't always guarantee the
   validity of cpu, but it should be enough to mitigate the majority of
   the potential sub-optimal cases without introducing any significant
   overhead); see the sketch after the references below

 - the latter can be fixed by relying on topology.span(), instead of
   topology.nr_cpus(), to count the number of available CPUs in the
   system.

[1] https://github.com/sched-ext/sched_ext/issues/69
[2] https://github.com/sched-ext/scx/issues/147
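
A sketch of the BPF-side check (simplified; num_possible_cpus is
assumed to be a global set from user-space):

	s32 BPF_STRUCT_OPS(rustland_select_cpu, struct task_struct *p,
			   s32 prev_cpu, u64 wake_flags)
	{
		s32 cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);

		/* Discard CPU ids that can't actually be used. */
		if (cpu >= 0 && cpu < num_possible_cpus)
			return cpu;
		return prev_cpu;
	}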

Link: 94dc0c01b9
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-03-14 10:19:31 +01:00
Andrea Righi
a1b05c5ab3 topology: support single CPU systems
We are failing to parse /sys/devices/system/cpu/online in systems with
just one CPU, for example:

 $ vng -r --cpus 1 -- scx_rusty
 Error: Failed to parse online cpus 0

Correctly handle strings containing only a single CPU during parsing.
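
An illustrative C version of the parsing (the actual fix is in the Rust
topology crate):

	#include <stdio.h>

	/* Accept both a range like "0-3" and a bare single CPU like
	 * "0", as found in /sys/devices/system/cpu/online. */
	static int parse_cpu_group(const char *s, int *min, int *max)
	{
		if (sscanf(s, "%d-%d", min, max) == 2)
			return 0;
		if (sscanf(s, "%d", min) == 1) {
			*max = *min;
			return 0;
		}
		return -1;
	}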

Fixes: c5a3b83b ("topology: Add new topology crate")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-03-14 07:46:20 +01:00
David Vernet
3cda1bc690
Merge pull request #187 from sched-ext/layered-updates
scx_layered: Make config json assume default values for unspecified fields
2024-03-13 17:15:18 -05:00