Instead clone the libbpf repo at a specific hash during setup.
This is to fix an issue whereby submodules are not included
in the release tarball and therefore can't be fetched or updated
during setup after unpacking the tarball.
This is to fix an error sometimes seen when compiling with gcc,
whereby the position-independent sched_ext Rust libraries
don't play nicely with the libbpf library if it's not also built
as position-independent code.
This also explicitly sets the rustc relocation-model to "pic",
which is the default (just so this doesn't accidentally change
out from under us).
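For reference, a minimal sketch of how the relocation model can be
pinned via RUSTFLAGS (where exactly the flag is injected in the build
is an assumption here):

    # explicitly request the default "pic" relocation model (illustrative)
    RUSTFLAGS="-C relocation-model=pic" cargo build --release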
This is to potentially reduce issues with folks
using different versions of libbpf at runtime.
This also:
- makes static linking of libbpf the default
- adds steps in `meson setup` to fetch libbpf and build it
  (see the sketch below)
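As an illustration, the fetch-and-build step might boil down to
something like this (the commit hash and paths are placeholders, not
the values actually used):

    # clone libbpf at a pinned commit and build it as a static library (illustrative)
    git clone https://github.com/libbpf/libbpf.git
    git -C libbpf checkout <pinned-hash>
    make -C libbpf/src BUILD_STATIC_ONLY=1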
This reverts commit a7b39f24e2, reversing
changes made to cf7404fb03.
The PR doesn't do what the description says. It instead limits the number of
rustc instances to 1 for each cargo build, making Rust builds extremely slow.
Let's revert and try again.
This reverts commit 5b9b953e3c, reversing
changes made to a7b39f24e2.
The current git submodule approach is a bit cumbersome and doesn't provide a
unified build environment for both libbpf and the scx scheds. Also, the build
instructions don't seem to work. Let's revert it for now.
Each cargo build is already parallelized, spreading multiple rustc
instances across all the available CPUs by default.
Allowing multiple instances of cargo to run at the same time doesn't
provide any benefit and only increases the risk of triggering OOM
conditions or overloading the build system.
Therefore, limit the number of parallel cargo build instances to 1.
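For example, a single cargo invocation already fans out one rustc
process per crate, up to the number of available CPUs (the -j flag
below is shown only for illustration; this is cargo's default):

    # cargo already parallelizes rustc across all CPUs by default
    cargo build --release -j "$(nproc)"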
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
After updates to reflect the updated init and direct dispatch API, the
schedulers aren't compatible with older kernels. Bump versions and publish
releases.
Use virtme-ng to run the schedulers after they're built; virtme-ng
lets us pick an arbitrary sched-ext enabled kernel and run it while
virtualizing the entire user-space root filesystem, so we can
basically execute the freshly compiled schedulers inside that kernel.
This should allow catching potential run-time issues in advance (both
in the kernel and in the schedulers).
The sched-ext kernel is taken from the Ubuntu ppa (ppa:arighi/sched-ext)
at the moment, since it is the easiest / fastest way to get a
precompiled sched-ext kernel to run inside the Ubuntu 22.04 testing
environment.
The schedulers are tested using the new meson target "test_sched"; the
specific actions are defined in meson-scripts/test_sched.
By default each test has a timeout of 30 sec after virtme-ng completes
the boot (which should be enough to initialize the scheduler and let it
run for a few seconds), while the total lifetime of the virtme-ng guest
is capped at 60 sec, after which the guest is killed (this allows
catching potential kernel crashes / hangs).
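As a rough sketch, the per-scheduler invocation might look like the
following (kernel path, scheduler binary and the exact vng flags are
illustrative; the real logic lives in meson-scripts/test_sched):

    # boot the sched-ext kernel in virtme-ng and run one scheduler for up
    # to 30 sec, killing the whole guest after 60 sec to catch kernel hangs
    timeout --foreground 60 \
        vng -r /path/to/sched-ext-kernel -- timeout 30 ./build/scheds/c/scx_simple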
If a single scheduler fails the test, the entire "test_sched" action
will be interrupted and the overall test result will be considered a
failure.
At the moment scx_layered is excluded from the tests, because it
requires a special configuration (we should probably pre-generate a
default config in the workflow actions and change the scheduler to use
the default config if it's executed without any arguments).
Moreover, scx_flatcg is also temporarily excluded from the tests,
because of these known issues:
- https://github.com/sched-ext/scx/issues/49
- https://github.com/sched-ext/sched_ext/pull/101
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Add the proper mapping from Debian architecture "s390x" to the kernel
architecture name "s390".
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Introduce an option to enable/disable the build of all the Rust
sub-projects.
This can be useful for building scx on systems where Rust is not
fully supported (e.g., armhf).
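For example, on such a system the Rust sub-projects could be skipped at
configure time (the option name below is illustrative; check
meson_options.txt for the real one):

    # disable the build of all Rust sub-projects (option name illustrative)
    meson setup build -D enable_rust=false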
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>