Commit Graph

702 Commits

Author SHA1 Message Date
Andrea Righi
0d26219fad ci: enable kvm support in the github workflow
Enable kvm acceleration and qemu microvm to speed up CI tests inside
virtme-ng.

Also adjust the regex used to catch potential errors, excluding a false
positive triggered by the new configuration.

Link: https://github.blog/changelog/2023-02-23-hardware-accelerated-android-virtualization-on-actions-windows-and-linux-larger-hosted-runners/
Link: https://github.blog/2024-01-17-github-hosted-runners-double-the-power-for-open-source/
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-05-03 23:20:10 +02:00
David Vernet
efb97de785
Merge pull request #261 from sched-ext/rusty_interactive
Make scx_rusty interactive
2024-05-03 14:42:15 -05:00
David Vernet
2403f60631
rusty: Dynamically scale slice according to system util
In user space in rusty, the tuner detects system utilization, and uses
it to inform how we do load balancing, our greedy / direct cpumasks,
etc. Something else we could be doing, but currently aren't, is using
system utilization to inform how we dispatch tasks. We currently have a
static, unchanging slice length for the runtime of the program, but no
single slice length is efficient for all scenarios.

Giving a task a long slice length does have advantages, such as
decreasing the number of involuntary context switches, decreasing the
overhead of preemption by doing it less frequently, possibly getting
better cache locality due to a task running on a CPU for a longer amount
of time, etc. On the other hand, long slices can be problematic as well.
When a system is highly utilized, a CPU-hogging task running for too
long can harm interactive tasks. When the system is under-utilized,
those interactive tasks can likely find an idle or under-utilized core
to run on. When the system is over-utilized, however, they're likely to
have to park in a runqueue.

Thus, in order to better accommodate such scenarios, this patch
implements a rudimentary slice scaling mechanism in scx_rusty. Rather
than having one global, static slice length, we instead have a dynamic,
global slice length that can be changed depending on system utilization.
When over-utilized, we go with a shorter slice length, and vice versa
for when the system is under-utilized. With Terraria, this results in
roughly a 50% improvement in mean FPS when playing on an AMD Ryzen 9
7950X, while running Spotify, and stress-ng -c $((4 * $(nproc))).
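
For illustration, the scaling policy looks roughly like the following
sketch; the threshold, constants, and names here are illustrative, not
the actual scx_rusty code:

  const SLICE_NS_UNDERUTIL: u64 = 20_000_000; // 20ms: plenty of idle CPU
  const SLICE_NS_OVERUTIL: u64 = 1_000_000;   //  1ms: contended CPUs

  // Called from the tuner each time system utilization is re-sampled.
  fn scale_slice_ns(util: f64) -> u64 {
      if util >= 0.9 {
          // Over-utilized: short slices keep interactive tasks parked
          // in runqueues from being starved behind CPU hogs.
          SLICE_NS_OVERUTIL
      } else {
          // Under-utilized: longer slices reduce involuntary context
          // switches and preserve cache locality.
          SLICE_NS_UNDERUTIL
      }
  }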

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:17:58 -05:00
David Vernet
76618989f8
rusty: Implement basic eligible deadline scheduling in rusty
scx_rusty doesn't do terribly well with interactive workloads. In order
to improve the situation, this patch adds support for basic deadline
scheduling in rusty. This approach doesn't incorporate eligibility, and
simply uses a crude avg_runtime tracking approach to scaling a task's
deadline.

In a series of follow-on changes, we'll update the scheduler to use more
indicators for interactivity that affect both slice length, and deadline
calculation.
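
As a rough sketch of the idea (field names and the scaling factor are
hypothetical, not the actual rusty code):

  struct TaskStats {
      avg_runtime_ns: u64, // exponentially-averaged observed runtime
      weight: u64,         // scheduling weight, 100 == nice 0
  }

  // Tasks that historically run in short bursts get a nearer deadline,
  // so interactive work is dispatched sooner than CPU-bound work.
  fn task_deadline_ns(now_ns: u64, t: &TaskStats) -> u64 {
      now_ns + t.avg_runtime_ns * 100 / t.weight
  }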

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:17:56 -05:00
Tejun Heo
ccd91d1d6a
Merge pull request #262 from hodgesds/more-readme-updates
Update README with Slack invite link
2024-05-02 18:29:29 -10:00
Daniel Hodges
e11d022283
Update README with Slack invite link
The current Slack URL gives a link to the Slack workspace, but doesn't
include the invite. Update the URL to include the invite URL to make it
easier for people to join the Slack workspace.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-05-02 23:57:57 -04:00
David Vernet
925a69b156
rusty: Use helper to lookup domain context
Let's remove the extraneous copy-pasting and use a lookup helper like
we do for the task and pcpu contexts.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-02 13:56:46 -05:00
David Vernet
8fe900eec3
Merge pull request #260 from danieljordan10/rusty-lb-fixes
scx_rusty: load balancing fixes
2024-05-02 12:08:04 -05:00
Daniel Jordan
de2773d621 scx_rusty: compare abs values in xfer_between()
A LoadEntity gets the load to transfer between two entities by taking
the minimum of their imbalances and reducing its abs value by
xfer_ratio.

In practice self.imbal(), the push node or domain, always has positive
imbalance and other.imbal(), the pull node or domain, always has
negative imbalance, so other.imbal() is always the minimum even though
the abs value of its imbalance might be greater than the abs value of
self.imbal().  It seems like the intent is to take the minimum of the
two absolute values instead to avoid overbalancing at the puller, so
make both values abs.
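
A sketch of the corrected computation (names are illustrative):

  // Transfer the smaller of the two absolute imbalances, scaled down
  // by xfer_ratio, so the puller is never overbalanced.
  fn xfer_between(self_imbal: f64, other_imbal: f64, xfer_ratio: f64) -> f64 {
      self_imbal.abs().min(other_imbal.abs()) * xfer_ratio
  }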

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
2024-05-02 11:54:13 -04:00
Daniel Jordan
1652791e5d scx_rusty: make per-task loads sensitive to lb_apply_weight
Rusty's load balancer calculates load differently based on average
system CPU utilization in create_domain_hierarchy().  At >= 99.999%
utilization, load is the product of a task's weight and duty cycle;
below that, load is the same as the task's duty cycle.

populate_tasks_by_load(), however, always uses the product when
calculating per-task load so that in the sub-99.999% util case, load is
inflated, typically by a factor of 100 with a normal priority task.
Tasks look too heavy to migrate as a result because a single task would
transfer more load than the domain imbalance allows, leading to
significant imbalance in some cases.

Make populate_tasks_by_load() calculate task load the same way as
domain load, checking lb_apply_weight.
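
In sketch form (hypothetical names), per-task load now mirrors the
domain-level policy:

  fn task_load(lb_apply_weight: bool, weight: f64, duty_cycle: f64) -> f64 {
      if lb_apply_weight {
          // >= 99.999% average system utilization: weight the load.
          weight * duty_cycle
      } else {
          // Below that, load is just the task's duty cycle.
          duty_cycle
      }
  }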

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
2024-05-02 11:54:05 -04:00
Tejun Heo
4fad916689
Merge pull request #258 from hodgesds/readme-updates
Update README with instructions for building against local libbpf
2024-05-01 04:31:00 -10:00
Tejun Heo
dd0f2b32dd
Merge pull request #259 from hodgesds/libbpf_fetch-path-fixes
Fix issue when CDPATH contains libbpf directory
2024-05-01 04:30:04 -10:00
Daniel Hodges
b64dc740b4 Update README with instructions for linking local libbpf
Add extra info to the README for building and linking against a system
libbpf. It may not be the preferred way of building, but it can still be
useful in certain situations, such as making changes against a local
libbpf version.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-05-01 06:41:50 -07:00
Daniel Hodges
0a587d63dd
Fix issue when CDPATH contains libbpf directory
When CDPATH is set, the fetch_libbpf build script will cd into
the preferred CDPATH directory. This change removes the CDPATH
environment variable so any preferred CDPATH paths are ignored.
The issue can be reproduced with the following steps:

1) mkdir -p /tmp/libbpf
2) CDPATH=/tmp/ meson setup build --prefix /tmp

The build should fail at the fetch_libbpf step.
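
The actual fix lives in the fetch_libbpf script itself, but the idea can
be sketched in Rust: scrub CDPATH from the environment of any spawned
shell so its cd builtin cannot be redirected (names are illustrative):

  use std::process::Command;

  fn run_in_libbpf_dir(cmd: &str) -> std::io::Result<std::process::ExitStatus> {
      Command::new("sh")
          .arg("-c")
          .arg(format!("cd libbpf && {}", cmd))
          // Without this, a CDPATH entry like /tmp could make `cd`
          // enter /tmp/libbpf instead of ./libbpf.
          .env_remove("CDPATH")
          .status()
  }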

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-05-01 08:43:58 -04:00
Tejun Heo
52cbf34937
Merge pull request #257 from sched-ext/rustland-version-0.0.6
scx_rustland: bump up version to 0.0.6
2024-04-30 06:37:10 -10:00
Andrea Righi
11f100f043 scx_rustland: bump up version to 0.0.6
Bump up scx_rustland version to use the new scx_rustland_core crate.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-30 18:32:21 +02:00
Andrea Righi
33092f096e
Merge pull request #255 from sched-ext/rustland-core-relax-mm-constraint
scx_rustland_core: relax compact unevictable memory constraint
2024-04-30 18:29:05 +02:00
Andrea Righi
fd68ce13a7 scx_rustland_core: bump up version to 0.4.0
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-30 18:09:09 +02:00
Andrea Righi
be415d4c06 scx_rustland_core: relax compact unevictable memory constraint
In order to prevent deadlock conditions, user-space schedulers need to
perform memory allocations from a pool of pre-allocated memory that will
never be unmapped/reclaimed by the kernel (unevictable memory).

However, there is a special kernel sysctl setting
(vm.compact_unevictable_allowed) that allows kcompactd to reclaim
unevictable memory. This behavior should be prevented by setting
vm.compact_unevictable_allowed = 0, which is what scx_rustland_core does
transparently when a scheduler is started, restoring the previous value
when the scheduler is stopped.

Unfortunately, this is not always doable, especially when running a
scheduler inside a containerized environment or under certain
security/privilege restrictions (e.g., AppArmor confinement).

Therefore, just report a WARNING if we are unable to change this
parameter, instead of considering it a hard failure, so that
scx_rustland_core schedulers can also run inside such limited
environments.
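
The relaxed behavior amounts to something like the following sketch (the
sysctl path is real; the surrounding code is illustrative):

  use std::fs;

  fn set_compact_unevictable(allowed: bool) {
      let val = if allowed { "1" } else { "0" };
      // Failing here (e.g., under AppArmor confinement) is no longer
      // fatal; we just warn and keep running.
      if let Err(err) = fs::write("/proc/sys/vm/compact_unevictable_allowed", val) {
          eprintln!("WARNING: cannot set vm.compact_unevictable_allowed: {}", err);
      }
  }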

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-30 18:09:09 +02:00
Andrea Righi
9e4901bc53
Merge pull request #256 from sched-ext/fix_ci
ci: Also include gcc-multilib to fix CI
2024-04-30 18:08:41 +02:00
David Vernet
e1d6b4d270
ci: Also include gcc-multilib to fix CI
We're failing in CI with:

/usr/include/string.h:26:10: fatal error: 'bits/libc-header-start.h' file not found
   26 | #include <bits/libc-header-start.h>

This header is apparently shipped with the gcc-multilib Ubuntu package,
according to a Stack Overflow post. Let's see if this fixes CI.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-30 10:11:31 -05:00
Tejun Heo
b1bb2a5c5f
Merge pull request #253 from sched-ext/htejun/sync-kernel
Sync to the latest kernel
2024-04-29 10:16:35 -10:00
Tejun Heo
c77d101655 scheds/c: Sync to the new conventions
Sync with the in-kernel-tree example schedulers.
2024-04-29 10:13:46 -10:00
Tejun Heo
71d5e60093 scheds/rust: Use __COMPAT helpers instead of open coding feature tests 2024-04-29 09:58:34 -10:00
Tejun Heo
cf66e58118 Sync from kernel (670bdab6073)
And fix build breakage in scx_utils due to an enum type rename.
2024-04-29 09:58:19 -10:00
Tejun Heo
3ee64a1301
Merge pull request #252 from sched-ext/htejun/bump-versions
Bump versions to prepare for a release
2024-04-29 09:08:06 -10:00
Tejun Heo
e5e88b7e18 Bump versions to prepare for a release 2024-04-29 09:07:27 -10:00
Tejun Heo
3e7ef35649
Merge pull request #250 from multics69/lavd-issue-234
scx_lavd: replenish time slice at ops.running() only when necessary
2024-04-29 09:01:04 -10:00
Tejun Heo
5b7b7d5193
Merge pull request #247 from multics69/lavd-issue-244
scx_lavd: always inline submit_task_ctx to make the verifier happy
2024-04-29 07:53:38 -10:00
Andrea Righi
344cbd7953
Merge pull request #249 from sched-ext/scx-utils-update
scx-utils: bump up version to 0.8.0
2024-04-29 18:39:06 +02:00
Changwoo Min
5f63e0ca30 scx_lavd: replenish time slice at ops.running() only when necessary
The current code replenishes the task's time slice whenever the task
enters ops.running(). However, there is a case where such behavior can
starve the other tasks, causing the watchdog timeout error. One (if not
the only) such case is when a task is preempted while running by a
higher scheduler class (e.g., RT, DL). In such a case, the task will
cycle through ops.running() -> ops.stopping() -> ops.running() -> etc.
Whenever it starts running again, it will be placed at the head of the
local DSQ and ops.running() will renew its time slice. Hence, in the
worst case, the task can run forever since its time slice is never
exhausted. The fix is to assign the time slice only once, by checking
whether the time slice has already been calculated.
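
The logic of the fix, sketched in Rust (the real code is BPF C in
scx_lavd; names and the placeholder slice are illustrative):

  struct TaskCtx {
      slice_ns: u64, // remaining time slice
  }

  fn calc_time_slice(_taskc: &TaskCtx) -> u64 {
      5_000_000 // placeholder: 5ms
  }

  // ops.running() callback: replenish only when the slice is used up,
  // so a preempted task resumes with whatever slice it had left.
  fn on_running(taskc: &mut TaskCtx) {
      if taskc.slice_ns == 0 {
          taskc.slice_ns = calc_time_slice(taskc);
      }
  }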

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-29 12:13:31 +09:00
Andrea Righi
cabde30736 scx_utils: bump up version to 0.8.0
Bump up scx-utils version to provide the new scx_utils::TopologyMap.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 21:01:16 +02:00
Tejun Heo
d9ea53cb9d
Merge pull request #248 from sched-ext/rustland-update-version
rustland: bump up version
2024-04-28 08:19:54 -10:00
Andrea Righi
5effb4fc4c scx_rustland: bump up version to 0.0.5
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 12:01:38 +02:00
Andrea Righi
0785246ee2 scx_rustland: provide --version option
Provide a command line option to print the version of the scheduler and
the scx_rustland_core crate.
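
For illustration, a minimal clap-based sketch (the exact flag and output
format in scx_rustland may differ):

  use clap::Parser;

  #[derive(Parser)]
  struct Opts {
      /// Print scheduler and scx_rustland_core versions, then exit.
      #[clap(short = 'V', long)]
      version: bool,
  }

  fn main() {
      let opts = Opts::parse();
      if opts.version {
          // Printing the core crate's version assumes it exports one;
          // see the "make crate version accessible" commit below.
          println!("scx_rustland {}", env!("CARGO_PKG_VERSION"));
          return;
      }
      // ... start the scheduler ...
  }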

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 12:01:38 +02:00
Andrea Righi
fb2f5c240e scx_rustland_core: bump up version to 0.3
Given that rustland_core now supports task preemption and has been
tested successfully, it's worthwhile to cut a new version of the crate.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 12:01:38 +02:00
Andrea Righi
cdf3729355 scx_rustland_core: make crate version accessible
Export the version of scx_rustland_core to the users of the crate.
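
A common way to do this, and plausibly what the change amounts to (an
assumption, not a quote of the actual code):

  // In scx_rustland_core's lib.rs: bake the Cargo package version into
  // the library as a public constant.
  pub const VERSION: &str = env!("CARGO_PKG_VERSION");

Users of the crate can then print scx_rustland_core::VERSION alongside
their own version.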

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 12:01:38 +02:00
Changwoo Min
0495a5220b
Merge pull request #246 from sched-ext/lavd-fix-arch-build
scx_lavd: use c_char consistently
2024-04-28 00:49:05 +09:00
Andrea Righi
905960f752 scx_lavd: use c_char consistently
In Rust, c_char is an alias for either i8 or u8, depending on the
particular target architecture.

For example, trying to build scx_lavd on ppc64 triggers the following
error:

error[E0308]: mismatched types
   --> src/main.rs:200:38
    |
200 |         let c_tx_cm: *const c_char = (&tx.comm as *const [i8; 17]) as *const i8;
    |                      -------------   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `*const u8`, found `*const i8`
    |                      |
    |                      expected due to this
    |
    = note: expected raw pointer `*const u8`
               found raw pointer `*const i8`

To fix this, consistently use c_char instead of assuming it corresponds
to i8.
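
A sketch of the portable pattern (the helper name is hypothetical):

  use std::ffi::{c_char, CStr};

  // comm is a fixed-size, NUL-terminated C string; c_char resolves to
  // i8 or u8 as appropriate for the target.
  fn comm_to_str(comm: &[c_char; 17]) -> &str {
      unsafe { CStr::from_ptr(comm.as_ptr()).to_str().unwrap_or("") }
  }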

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-27 17:21:19 +02:00
Changwoo Min
f470b1aa13 scx_lavd: always inline submit_task_ctx to make the verifier happy
In _some_ kernel versions, loading scx_lavd fails with an error of
"bpf_rcu_read_unlock is missing". The usage of
bpf_rcu_read_lock/unlock() in proc_dump_all_tasks() is correct, but the
bpf verifier still thinks bpf_rcu_read_unlock() is missing. The most
plausible reason so far is that the problematic kernel does not have
commit 6fceea0fa59f ("bpf: Transfer RCU lock state between subprog
calls"), failing inter-procedural analysis between proc_dump_all_tasks()
and submit_task_ctx(). Thus, we force-inline submit_task_ctx() (so no
inter-procedural analysis by the verifier is necessary) for the time
being.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-28 00:11:38 +09:00
Changwoo Min
7ec098e90a
Merge pull request #245 from multics69/scx-lavd-copyright
scx_lavd: fix copyright information
2024-04-26 18:50:32 +09:00
Changwoo Min
d0d0a18b10 scx_lavd: fix copyright information
Correct the copyright and author information

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-26 16:36:58 +09:00
Andrea Righi
973aded5a8
Merge pull request #238 from sched-ext/rustland-reduce-topology-overhead
scx_rustland: reduce overhead by caching host topology
2024-04-24 22:24:23 +02:00
David Vernet
68d43040cf
Merge pull request #243 from sched-ext/cpufreq_bc
layered: Make layered backwards compat with cpufreq
2024-04-24 14:28:11 -05:00
David Vernet
5ba137e8c9
layered: Make layered backwards compat with cpufreq
Only the very newest kernels support scx_bpf_cpuperf_set(). Let's update
scx_layered to accommodate older kernels as well.

Signed-off-by: David Vernet <void@manifault.com>
2024-04-24 14:01:51 -05:00
David Vernet
46defa073d
compat: Add kfunc_exists() compat helper func
It would be useful to be able to check whether a kfunc exists from Rust
user space. For example, now that we have support for adjusting cpufreq
in scx_layered, we'll want to be able to test whether or not the
scx_bpf_cpuperf_set() (and friends) kfuncs are present for backwards
compat purposes. Let's add a kfunc_exists() function to compat.rs for
this purpose.
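
One plausible implementation, sketched below, scans /proc/kallsyms for
the symbol name; whether compat.rs does exactly this is an assumption:

  use std::fs::File;
  use std::io::{BufRead, BufReader};

  fn kfunc_exists(name: &str) -> std::io::Result<bool> {
      let reader = BufReader::new(File::open("/proc/kallsyms")?);
      for line in reader.lines() {
          // Each line looks like: "<addr> <type> <symbol> [module]".
          if line?.split_whitespace().nth(2) == Some(name) {
              return Ok(true);
          }
      }
      Ok(false)
  }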

Signed-off-by: David Vernet <void@manifault.com>
2024-04-24 13:49:31 -05:00
Tejun Heo
9a9b4dd23e
Merge pull request #239 from hodgesds/cpufreq_helpers
Add CPU frequency related helpers and extend scx_layered
2024-04-24 07:22:15 -10:00
Andrea Righi
5302ff1cdc scx_rustland: use TopologyMap for efficient CPU topology iteration
Looking at perf top it seems that the scheduler can spend a significant
amount of time iterating over the CPU topology/cpumask information,
especially when the system is running a large number of tasks:

  2.57% scx_rustland [.] <scx_utils::cpumask::CpumaskIntoIterator as core::iter::traits::iterator::Iterator>::next

Considering that scx_rustland doesn't support CPU hotplugging yet (it
requires a full restart to properly handle CPU hotplug events), we can
completely avoid this overhead by caching a TopologyMap object at the
beginning, when the scheduler starts, instead of constantly
re-evaluating the CPU topology information.

This reduces the scheduler overhead by ~5% CPU utilization under heavy
load conditions (from ~65% to ~60%, according to top).

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-24 17:08:06 +02:00
Andrea Righi
d7bb5a7cba topology: Add TopologyMap
Introduce a TopologyMap object, represented as an array of arrays,
where each inner array corresponds to a core and contains its
associated CPU IDs.

This object can be used as a cache to facilitate efficient iteration
over the entire host's topology.

Example usage:

  let topo = Topology::new()?;
  let topo_map = TopologyMap::new(topo)?;
  for (core_id, core) in topo_map.iter().enumerate() {
      for cpu in core {
          println!("core={} cpu={}", core_id, cpu);
      }
  }

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-24 17:08:06 +02:00
Daniel Hodges
32e97bf4d5 Adds CPU frequency related helpers and extends scx_layered
This change adds `scx_bpf_cpuperf_cap`, `scx_bpf_cpuperf_cur` and
`scx_bpf_cpuperf_set` definitions that were recently introduced into
[`sched_ext`](https://github.com/sched-ext/sched_ext/pull/180). It adds
a `perf` field to `scx_layered` to allow for controlling performance per
layer.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-04-24 07:27:52 -07:00