Commit Graph

380 Commits

vax-r
093a08356e Fix typo
Fix "expermentation" to "experimentation".
2024-05-09 12:10:55 +08:00
David Vernet
b9b9875aa7
rusty: Remove task offline tracking
scx_rusty's intention is to support hotplug by automatically restarting
whenever a hotplug event is encountered. Now that we're not trying to
consume a bogus DSQ in the rusty_dispatch() on a newly hotplugged CPU,
let's just remove offline tracking. It's really just there as a sanity
check, but it triggers if a task on an offlined CPU is made runnable
during a hotplug event before the ops.hotplug() callback has been
invoked.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-04 21:33:55 -05:00
David Vernet
6f1dc6067a
rusty: Check for offline CPU in rusty_dispatch()
There's currently a slight issue on existing kernels on the hotplug
path wherein we can start to receive scheduling callbacks on a CPU
before that CPU has received hotplug events. For CPUs going online, this
can possibly confuse a scheduler because it may not be expecting
anything to ever happen on that CPU, and therefore may do things that
could cause the scheduler to crash. For example, without this patch in
scx_rusty, we try to consume from a bogus DSQ that doesn't exist, which
causes ext.c to boot out the scheduler.

Though this issue will soon be fixed in ext.c, let's explicitly avoid
dispatching from an onlining CPU in rusty so that we properly support
hotplug on older kernels as well.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-04 21:33:54 -05:00
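
A sketch of the guard described above, assuming vmlinux.h types and a
hypothetical cpu_fully_online() helper standing in for however onlining
is tracked:

    /* Bail out of dispatch on a CPU that hasn't finished onlining yet,
     * so we never try to consume from a DSQ that doesn't exist. */
    void rusty_dispatch(s32 cpu, struct task_struct *prev)
    {
            if (!cpu_fully_online(cpu))     /* hypothetical online check */
                    return;
            /* ... normal dispatch path: consume from the domain's DSQ ... */
    }
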
David Vernet
0d6b00238f
common: Add likely/unlikely macros
We can hint to the compiler about paths we'll take in a scheduler. This
is a common pattern, so let's provide convenience macros.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-04 21:33:53 -05:00
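
A minimal sketch of what such a header could contain, using the standard
__builtin_expect idiom (exact contents assumed):

    #define likely(x)       __builtin_expect(!!(x), 1)
    #define unlikely(x)     __builtin_expect(!!(x), 0)

An error path can then be annotated as, e.g., if (unlikely(!taskc))
return;, nudging the compiler to lay out the hot path first.
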
David Vernet
4b16f5117a
rusty: Fix alignment
Found a misaligned conditional in main.rs. Fix it.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-04 21:33:19 -05:00
Changwoo Min
01e5a46371
Merge pull request #263 from multics69/scx_lavd-power01
scx_lavd: support CPU frequency scaling
2024-05-05 10:16:00 +09:00
Changwoo Min
a24e1d7adf scx_lavd: more comments about CPU frequency scaling
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-04 10:41:13 +09:00
David Vernet
9bb8e9a548
common: Pull bpf_log2l() into helper function header
scx_lavd implemented 32 and 64 bit versions of a base-2 logarithm
function. This is now also used in rusty. To avoid code duplication,
let's pull it into a shared header.

Note that there is technically a functional change here as we remove the
always inline compiler directive. We instead assume that the compiler
will know best whether or not to inline the function.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:50:24 -05:00
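
A sketch of the usual branchless integer log2 pair such a header would
hold, assuming vmlinux.h-style u32/u64 types (the actual helper may
differ in detail):

    static u32 bpf_log2(u32 v)
    {
            u32 r, shift;

            r = (v > 0xFFFF) << 4; v >>= r;
            shift = (v > 0xFF) << 3; v >>= shift; r |= shift;
            shift = (v > 0xF) << 2; v >>= shift; r |= shift;
            shift = (v > 0x3) << 1; v >>= shift; r |= shift;
            return r | (v >> 1);
    }

    static u32 bpf_log2l(u64 v)
    {
            u32 hi = v >> 32;

            return hi ? bpf_log2(hi) + 32 : bpf_log2((u32)v);
    }
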
David Vernet
2403f60631
rusty: Dynamically scale slice according to system util
In user space in rusty, the tuner detects system utilization, and uses
it to inform how we do load balancing, our greedy / direct cpumasks,
etc. Something else we could be doing, but currently aren't, is using
system utilization to inform how we dispatch tasks. We currently have a
static, unchanging slice length for the runtime of the program, but no
single slice length is efficient across all scenarios.

Giving a task a long slice length does have advantages, such as
decreasing the number of involuntary context switches, decreasing the
overhead of preemption by doing it less frequently, possibly getting
better cache locality due to a task running on a CPU for a longer amount
of time, etc. On the other hand, long slices can be problematic as well.
When a system is highly utilized, a CPU-hogging task running for too
long can harm interactive tasks. When the system is under-utilized,
those interactive tasks can likely find an idle, or under-utilized core
to run on. When the system is over-utilized, however, they're likely to
have to park in a runqueue.

Thus, in order to better accommodate such scenarios, this patch
implements a rudimentary slice scaling mechanism in scx_rusty. Rather
than having one global, static slice length, we instead have a dynamic,
global slice length that can be changed depending on system utilization.
When the system is under-utilized, we go with a longer slice length, and
a shorter one when it is over-utilized. With Terraria, this results in
roughly a 50% improvement in mean FPS when playing on an AMD Ryzen 9
7950X, while running Spotify, and stress-ng -c $((4 * $(nproc))).

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:17:58 -05:00
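
The mechanism boils down to something like the following sketch; the
threshold and the slice values here are assumptions, not the actual
rusty defaults:

    /* Pick the global slice from the tuner's utilization estimate: short
     * slices protect interactive tasks when the system is busy, long
     * slices cut context-switch overhead when it isn't. */
    static u64 scale_slice_ns(u64 util_pct)
    {
            const u64 slice_underutil_ns = 20 * 1000 * 1000;  /* 20 ms */
            const u64 slice_overutil_ns  =  1 * 1000 * 1000;  /*  1 ms */

            return util_pct >= 90 ? slice_overutil_ns : slice_underutil_ns;
    }
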
David Vernet
76618989f8
rusty: Implement basic eligible deadline scheduling in rusty
scx_rusty doesn't do terribly well with interactive workloads. In order
to improve the situation, this patch adds support for basic deadline
scheduling in rusty. This approach doesn't incorporate eligibility, and
simply uses crude avg_runtime tracking to scale a task's deadline.

In a series of follow-on changes, we'll update the scheduler to use more
indicators for interactivity that affect both slice length, and deadline
calculation.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:17:56 -05:00
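
In sketch form, the crude deadline described above could look as follows
(field names are assumptions; vmlinux.h types assumed):

    struct task_ctx {
            u64 avg_runtime;        /* decaying average of recent runtimes */
    };

    /* Tasks that historically run in short bursts get earlier deadlines,
     * so they are picked sooner from the vtime-ordered DSQ. */
    static u64 task_deadline(const struct task_ctx *taskc, u64 vtime_now)
    {
            return vtime_now + taskc->avg_runtime;
    }
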
Changwoo Min
6892898469 scx_lavd: support CPU frequency scaling
To know the required CPU performance (e.g., frequency) demand, we keep
track of 1) utilization of each CPU and 2) _performance criticality_ of
each task. The performance criticality of a task denotes how critical it
is to CPU performance (frequency). Like the notion of latency
criticality, we use three factors: the task's average runtime, its
wake-up frequency, and its woken-up frequency. The longer a task's
runtime and the higher its two frequencies, the more performance-critical
the task is, because it would be a bottleneck in the middle of a task
chain.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-04 00:30:25 +09:00
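
One plausible way to fold the three factors into a single score, by
analogy with lavd's latency criticality (the actual weighting in
scx_lavd may differ; bpf_log2l() is lavd's base-2 log helper, see commit
9bb8e9a548 above):

    u32 bpf_log2l(u64 v);   /* lavd's shared base-2 log helper */

    /* Longer runtime and higher wake-up/woken-up frequencies all raise
     * the score; log2 damping keeps one factor from dominating. */
    static u64 calc_perf_cri(u64 avg_runtime, u64 wake_freq, u64 waken_freq)
    {
            return bpf_log2l(avg_runtime + 1) +
                   bpf_log2l(wake_freq + 1) +
                   bpf_log2l(waken_freq + 1);
    }
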
David Vernet
925a69b156
rusty: Use helper to lookup domain context
Let's remove the extraneous copy pasting and use a lookup helper like we
do for task and pcpu context.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-02 13:56:46 -05:00
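
The helper pattern in question, sketched with assumed map and
error-reporting names:

    static struct dom_ctx *lookup_dom_ctx(u32 dom_id)
    {
            struct dom_ctx *domc;

            domc = bpf_map_lookup_elem(&dom_data, &dom_id);
            if (!domc)
                    scx_bpf_error("Failed to lookup dom_ctx for dom %u", dom_id);
            return domc;
    }

Callers then check once for NULL instead of repeating the
lookup-and-error boilerplate at every site.
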
Daniel Jordan
de2773d621 scx_rusty: compare abs values in xfer_between()
A LoadEntity gets the load to transfer between two entities by taking
the minimum of their imbalances and reducing its abs value by
xfer_ratio.

In practice self.imbal(), the push node or domain, always has positive
imbalance and other.imbal(), the pull node or domain, always has
negative imbalance, so other.imbal() is always the minimum even though
the abs value of its imbalance might be greater than the abs value of
self.imbal().  It seems like the intent is to take the minimum of the
two absolute values instead to avoid overbalancing at the puller, so
make both values abs.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
2024-05-02 11:54:13 -04:00
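
The essence of the fix, sketched in C rather than the actual Rust
(function shape assumed):

    #include <math.h>

    /* Transfer the smaller of the two absolute imbalances, scaled by
     * xfer_ratio, so the puller can never be pushed past balance. */
    static double xfer_between(double push_imbal, double pull_imbal,
                               double xfer_ratio)
    {
            double a = fabs(push_imbal), b = fabs(pull_imbal);

            return (a < b ? a : b) * xfer_ratio;
    }
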
Daniel Jordan
1652791e5d scx_rusty: make per-task loads sensitive to lb_apply_weight
Rusty's load balancer calculates load differently based on average
system CPU utilization in create_domain_hierarchy().  At >= 99.999%
utilization, load is the product of a task's weight and duty cycle;
below that, load is the same as the task's duty cycle.

populate_tasks_by_load(), however, always uses the product when
calculating per-task load, so in the sub-99.999% util case load is
inflated, typically by a factor of 100 for a normal priority task.
Tasks look too heavy to migrate as a result because a single task would
transfer more load than the domain imbalance allows, leading to
significant imbalance in some cases.

Make populate_tasks_by_load() calculate task load the same way as
domain load, checking lb_apply_weight.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
2024-05-02 11:54:05 -04:00
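
In sketch form, the corrected per-task load, again in C rather than the
actual Rust (names assumed):

    /* Match the domain-load formula: multiply in the weight only when the
     * load balancer is weight-sensitive (>= 99.999% system utilization). */
    static double task_load(int lb_apply_weight, double duty_cycle,
                            double weight)
    {
            return lb_apply_weight ? duty_cycle * weight : duty_cycle;
    }
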
Andrea Righi
11f100f043 scx_rustland: bump up version to 0.0.6
Bump up scx_rustland version to use the new scx_rustland_core crate.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-30 18:32:21 +02:00
Andrea Righi
fd68ce13a7 scx_rustland_core: bump up version to 0.4.0
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-30 18:09:09 +02:00
Tejun Heo
c77d101655 scheds/c: Sync to the new conventions
Sync with the in-kernel-tree example schedulers.
2024-04-29 10:13:46 -10:00
Tejun Heo
71d5e60093 scheds/rust: Use __COMPAT helpers instead of open coding feature tests 2024-04-29 09:58:34 -10:00
Tejun Heo
cf66e58118 Sync from kernel (670bdab6073)
And fix build breakage in scx_utils due to an enum type rename.
2024-04-29 09:58:19 -10:00
Tejun Heo
e5e88b7e18 Bump versions to prepare for a release 2024-04-29 09:07:27 -10:00
Tejun Heo
3e7ef35649
Merge pull request #250 from multics69/lavd-issue-234
scx_lavd: replenish time slice at ops.running() only when necessary
2024-04-29 09:01:04 -10:00
Tejun Heo
5b7b7d5193
Merge pull request #247 from multics69/lavd-issue-244
scx_lavd: always inline submit_task_ctx to make the verifier happy
2024-04-29 07:53:38 -10:00
Changwoo Min
5f63e0ca30 scx_lavd: replenish time slice at ops.running() only when necessary
The current code replenishes the task's time slice whenever the task
becomes ops.running(). However, there are cases where this behavior can
starve other tasks, causing a watchdog timeout error. One (if not the
only) such case is when a running task is preempted by a higher
scheduler class (e.g., RT, DL). The task then transitions in a cycle of
ops.running() -> ops.stopping() -> ops.running() -> etc. Whenever it
resumes running, it is placed at the head of the local DSQ and
ops.running() renews its time slice. Hence, in the worst case, the task
can run forever since its time slice is never exhausted. The fix is to
assign the time slice only once, by checking whether it has already been
calculated.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-29 12:13:31 +09:00
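
A sketch of the fix; the task_ctx fields are assumptions, while
p->scx.slice is the sched_ext slice field:

    struct task_ctx {
            u64 slice_ns;           /* slice computed for this episode */
            bool slice_assigned;    /* reset when the task quiesces */
    };

    /* Called from ops.running(): hand out a fresh slice only once per
     * runnable episode. A task preempted by a higher class and resumed
     * keeps whatever budget it had left, so it cannot run forever. */
    static void refill_slice_once(struct task_struct *p, struct task_ctx *taskc)
    {
            if (taskc->slice_assigned)
                    return;
            taskc->slice_assigned = true;
            p->scx.slice = taskc->slice_ns;
    }
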
Andrea Righi
cabde30736 scx_utils: bump up version to 0.8.0
Bump up scx-utils version to provide the new scx_utils::TopologyMap.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 21:01:16 +02:00
Andrea Righi
5effb4fc4c scx_rustland: bump up version to 0.0.5
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 12:01:38 +02:00
Andrea Righi
0785246ee2 scx_rustland: provide --version option
Provide a command line option to print the version of the scheduler and
the scx_rustland_core crate.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 12:01:38 +02:00
Andrea Righi
fb2f5c240e scx_rustland_core: bump up version to 0.3
Given that rustland_core now supports task preemption and it has been
tested successfully, it's worthwhile to cut a new version of the crate.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 12:01:38 +02:00
Andrea Righi
905960f752 scx_lavd: use c_char consistently
In Rust, c_char can be an alias for either i8 or u8, depending on the
target architecture.

For example, trying to build scx_lavd on ppc64 triggers the following
error:

error[E0308]: mismatched types
   --> src/main.rs:200:38
    |
200 |         let c_tx_cm: *const c_char = (&tx.comm as *const [i8; 17]) as *const i8;
    |                      -------------   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `*const u8`, found `*const i8`
    |                      |
    |                      expected due to this
    |
    = note: expected raw pointer `*const u8`
               found raw pointer `*const i8`

To fix this, consistently use c_char instead of assuming it corresponds
to i8.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-27 17:21:19 +02:00
Changwoo Min
f470b1aa13 scx_lavd: always inline submit_task_ctx to make the verifier happy
In _some_ kernel versions, loading scx_lavd fails with an error of
"bpf_rcu_read_unlock is missing". The usage of
bpf_rcu_read_lock/unlock() in proc_dump_all_tasks() is correct, but the
BPF verifier still thinks bpf_rcu_read_unlock() is missing. The most
plausible reason so far is that the problematic kernel does not have
commit 6fceea0fa59f ("bpf: Transfer RCU lock state between subprog
calls"), so inter-procedural analysis between proc_dump_all_tasks() and
submit_task_ctx() fails. Thus, we force-inline submit_task_ctx() (so no
inter-procedural analysis by the verifier is necessary) for the time
being.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-28 00:11:38 +09:00
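
The workaround is the standard always_inline attribute, illustrated
below with a simplified signature and the body elided:

    /* Inlined into proc_dump_all_tasks(), this code sits inside the
     * caller's bpf_rcu_read_lock()/unlock() section as far as the
     * verifier can see, so no inter-procedural RCU state is needed. */
    static inline __attribute__((always_inline))
    void submit_task_ctx(struct task_struct *p)
    {
            /* ... emit one row of the task dump ... */
    }
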
Changwoo Min
d0d0a18b10 scx_lavd: fix copyright information
Correct the copyright and author information

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-26 16:36:58 +09:00
Andrea Righi
973aded5a8
Merge pull request #238 from sched-ext/rustland-reduce-topology-overhead
scx_rustland: reduce overhead by caching host topology
2024-04-24 22:24:23 +02:00
David Vernet
5ba137e8c9
layered: Make layered backwards compat with cpufreq
Only the very newest kernels support scx_bpf_cpuperf_set(). Let's update
scx_layered to accommodate older kernels as well.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-24 14:01:51 -05:00
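
One plausible shape for such a compat shim, using libbpf's weak-ksym
check from <bpf/bpf_helpers.h> (the actual scx __COMPAT macro may be
implemented differently):

    void scx_bpf_cpuperf_set(s32 cpu, u32 perf) __ksym __weak;

    /* No-op on kernels that don't export the kfunc. */
    static inline void __COMPAT_scx_bpf_cpuperf_set(s32 cpu, u32 perf)
    {
            if (bpf_ksym_exists(scx_bpf_cpuperf_set))
                    scx_bpf_cpuperf_set(cpu, perf);
    }
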
Tejun Heo
9a9b4dd23e
Merge pull request #239 from hodgesds/cpufreq_helpers
Add CPU frequency related helpers and extend scx_layered
2024-04-24 07:22:15 -10:00
Andrea Righi
5302ff1cdc scx_rustland: use TopologyMap for efficient CPU topology iteration
Looking at perf top it seems that the scheduler can spend a significant
amount of time iterating over the CPU topology/cpumask information,
especially when the system is running a significant number of tasks:

  2.57% scx_rustland [.] <scx_utils::cpumask::CpumaskIntoIterator as core::iter::traits::iterator::Iterator>::next

Considering that scx_rustland doesn't support CPU hotplugging yet (it
requires a full restart to properly handle CPU hotplug events), we can
completely avoid this overhead by caching a TopologyMap object once,
when the scheduler starts, instead of constantly re-evaluating the CPU
topology information.

This reduces the scheduler overhead by ~5% CPU utilization under heavy
load (from ~65% to ~60%, according to top).

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-24 17:08:06 +02:00
Daniel Hodges
32e97bf4d5 Add CPU frequency related helpers and extend scx_layered
This change adds `scx_bpf_cpuperf_cap`, `scx_bpf_cpuperf_cur` and
`scx_bpf_cpuperf_set` definitions that were recently introduced into
[`sched_ext`](https://github.com/sched-ext/sched_ext/pull/180). It adds
a `perf` field to `scx_layered` to allow for controlling performance per
layer.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-04-24 07:27:52 -07:00
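
A sketch of per-layer performance control with the new kfuncs; the layer
plumbing here is assumed, and SCX_CPUPERF_ONE (1024) is the kfunc
scale's full-performance value:

    u32 scx_bpf_cpuperf_cap(s32 cpu) __ksym;    /* max relative capacity */
    u32 scx_bpf_cpuperf_cur(s32 cpu) __ksym;    /* current perf target */
    void scx_bpf_cpuperf_set(s32 cpu, u32 perf) __ksym;

    static void apply_layer_perf(s32 cpu, u32 layer_perf)
    {
            /* layer_perf in [1, 1024]; 0 means "leave the CPU alone" */
            if (layer_perf)
                    scx_bpf_cpuperf_set(cpu, layer_perf);
    }
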
David Vernet
a8daf372b2
Merge pull request #241 from sched-ext/cpumask_efficient
topology: Don't allocate on calls to span()
2024-04-24 09:21:15 -05:00
David Vernet
24c248eebb
layered: Add support for filtering on process name
If a library creates threads, those threads will often have the same
name. If two different processes of different priority both use a
library, it may be that we want the library's threads in each process to
be put into different layers.

To support this, let's add the ability to filter not only by task name,
but also by process name via the task thread group leader's comm.

Tested by creating two executables named "foo" and "bar", which both
spawn a bunch of tasks named "exp_worker" that spin until being
interrupted. With this config: https://pastebin.com/Uz2phzxQ, the tasks
were correctly matched to the expected layers.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-23 23:12:37 -05:00
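
Sketched on the BPF side with an assumed helper shape (bpf_strncmp() is
the stock BPF string helper):

    /* All threads of a process share the thread group leader, so matching
     * the leader's comm filters by process name rather than thread name. */
    static bool match_process_name(struct task_struct *p,
                                   const char *name, u32 len)
    {
            return bpf_strncmp(p->group_leader->comm, len, name) == 0;
    }
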
David Vernet
c187c65702
topology: Don't allocate on calls to span()
We're currently cloning cpumasks returned by calls to {Core, Cache,
Node, Topology}::span(). If a caller needs to clone it, they can. Let's
not penalize the callers that just want to query the underlying cpumask.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-23 22:59:42 -05:00
David Vernet
a998fb7d01
layered: Clarify f: and file: prefix behavior
Some people have expressed confusion at this behavior. Let's be a bit
more explicit in the documentation.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-23 20:39:28 -05:00
Andrea Righi
fbe9a80af8 scx_rustland: introduce --no-preemption
Provide a run-time option to disable task preemption.

This option can be used to improve the throughput of CPU-intensive
tasks while still providing a good level of responsiveness in the
system.

By default, preemption is enabled to provide a higher level of
responsiveness to interactive tasks.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-23 07:13:30 +02:00
Andrea Righi
0ffaaac6db scx_rustland: enable preemption
Use the new scx_rustland_core dispatch flag RL_PREEMPT_CPU to allow
interactive tasks to preempt other tasks with scx_rustland.

If the built-in idle selection logic is enforced (option `-i`), the
scheduler prioritizes keeping tasks on the target CPU designated by this
logic. With preemption enabled, these tasks have a higher likelihood of
reusing their cached working set, potentially improving performance.

Alternatively, when tasks are dispatched to the first available CPU
(default behavior), interactive tasks benefit from running more promptly
by kicking out other tasks before their assigned time slice expires.

This potentially allows increasing the default time slice in the
future, improving the overall throughput of the system while still
maintaining a good level of responsiveness, because interactive tasks
are now able to run almost immediately, independently of the remaining
time slice of the other tasks contending for the CPUs.

= Results =

Measuring the performance of the usual benchmark "playing a video game
while running a parallel kernel build in the background" shows around a
2-10% fps boost with preemption enabled, depending on the particular
video game.

Results were obtained running a `make -j32` kernel build on an AMD Ryzen
7 5800X 8-Cores 16GB RAM, while testing video games such as Baldur's
Gate 3 (with a solid +10% fps), Counter Strike 2 (around +5%) and Team
Fortress 2 (+2% boost).

Moreover, some WebGL applications (such as
https://webglsamples.org/aquarium/aquarium.html) seem to benefit even
more with preemption enabled, providing up to a +15% fps boost.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-23 07:13:30 +02:00
Andrea Righi
6d2aac1591 scx_rustland_core: introduce dispatch flags
Reserve some bits of the `cpu` attribute of a task to store special
dispatch flags.

Initially, let's introduce just RL_CPU_ANY to replace the special value
NO_CPU, indicating that the task can be dispatched on any CPU,
specifically the first CPU that becomes available.

This preserves the CPU value assigned by the built-in idle selection
logic, which can potentially be used later for further optimizations.

Moreover, the ability to specify dispatch flags gives more flexibility
and allows mapping new scheduling features to such flags.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-23 07:13:30 +02:00
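
A sketch of the encoding; the bit positions are assumptions, and
rustland_core itself is Rust, shown here in C for consistency with the
other sketches:

    #define RL_CPU_ANY      (1ULL << 32)    /* run on the first available CPU */
    #define RL_PREEMPT_CPU  (1ULL << 33)    /* added later; see commit 0ffaaac6db */

    /* The low 32 bits keep the CPU picked by the built-in idle-selection
     * logic, so that information survives even when a flag overrides it. */
    static u32 dispatch_cpu(u64 cpu_field)
    {
            return (u32)(cpu_field & 0xffffffffULL);
    }
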
takase1121
3e12676ca2
scheds-rust: add explanation for chaining schedulers 2024-04-23 08:30:38 +08:00
takase1121
5d20f89a87
scheds-rust: build rust schedulers in sequence 2024-04-23 08:06:27 +08:00
David Vernet
5f1eac85ff
layered: Fix init_task
When I transitioned layered to using task local storage, I messed up
initializing the task ctx, not realizing we previously had a separate
variable that was initializing the hashmap entry. We need to initialize
the task's layer to -1, and also set refresh_layer to 1.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-18 09:44:32 -05:00
David Vernet
45589cd0f7
lavd: Fix a few typos
Noticed a few typos. Let's fix em up

Signed-off-by: David Vernet <void@manifault.com>
2024-04-17 08:17:52 -05:00
David Vernet
eed338ef25
simple: Invoke __COMPAT_scx_bpf_switch_all();
scx_simple no longer supports running in "partial" mode, with only
certain tasks using scx_simple. When this option was removed, we also
removed the call to scx_bpf_switch_all();

While switching-all is the default behavior for newer kernels, let's add
__COMPAT_scx_bpf_switch_all() so that scx_simple can work on older
kernels as well.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-16 11:09:44 -05:00
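
One plausible shape for the shim, again via the weak-ksym pattern (the
actual scx compat header may differ): the call is a no-op where
switching all tasks is already the default and the kfunc no longer
exists:

    void scx_bpf_switch_all(void) __ksym __weak;

    static inline void __COMPAT_scx_bpf_switch_all(void)
    {
            if (bpf_ksym_exists(scx_bpf_switch_all))
                    scx_bpf_switch_all();
    }
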
David Vernet
ffced1f615
rusty: Remove explicit padding
As of libbpf-rs 0.23.0 (which contains commit
9d9e979fcf),
libbpf-rs now generates rust structs that honor padding. We can
therefore remove the custom padding in scx_rusty's struct pcpu_ctx.

For example, here is the generated pub struct pcpu_ctx:

pub struct pcpu_ctx {
    pub dom_rr_cur: u32,
    pub dom_id: u32,
    pub nr_node_doms: u32,
    pub node_doms: [u32; 64],
    pub __pad_268: [u8; 52],
}

And here is the matching struct in the BPF object file:

struct pcpu_ctx {
        u32                        dom_rr_cur;           /*     0     4 */
        u32                        dom_id;               /*     4     4 */
        u32                        nr_node_doms;         /*     8     4 */
        u32                        node_doms[64];        /*    12   256 */

        /* size: 320, cachelines: 5, members: 4 */
        /* padding: 52 */
} __attribute__((__aligned__(64)));

Signed-off-by: David Vernet <void@manifault.com>
2024-04-12 13:52:13 -05:00
David Vernet
e032ee7cc0
rusty: Add lookup_pcpu_ctx() helper
Getting rid of more boilerplate

Signed-off-by: David Vernet <void@manifault.com>
2024-04-11 19:30:23 -05:00
David Vernet
885a9fd7da
rusty: Make lookup_task_ctx() static
It doesn't need to be a global prog. Let's make it static.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-11 19:30:23 -05:00