Periodically report the number of online CPUs to stdout.
The number of online CPUs is initially evaluated by looking at the
online cpumask, then the value is updated in the .cpu_offline() /
.cpu_online() callbacks.
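A minimal sketch of the approach, assuming illustrative names (the
counter and the struct_ops callback names are not necessarily the
scheduler's exact symbols):
```
/* Global counter of online CPUs, read periodically by user space. */
volatile u64 nr_online_cpus;

void BPF_STRUCT_OPS(sched_cpu_online, s32 cpu)
{
	__sync_fetch_and_add(&nr_online_cpus, 1);
}

void BPF_STRUCT_OPS(sched_cpu_offline, s32 cpu)
{
	__sync_fetch_and_sub(&nr_online_cpus, 1);
}
```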
Tested-by: Piotr Gorski <lucjan.lucjanov@gmail.com>
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Keep track of the CPUs that are running interactive tasks and report
their count to stdout.
Tested-by: Piotr Gorski <lucjan.lucjanov@gmail.com>
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
The correct default value of slice_ns is 5ms, not 5s.
This change doesn't really make any difference in practice, since these
values are changed by the Rust part when the scheduler is started, but
it's good to keep this aligned to the proper values for consistency.
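For reference, a sketch of the corrected default (the exact declaration
may differ slightly in the scheduler's source):
```
/* Default time slice: 5ms, i.e. 5,000,000ns (was mistakenly 5s) */
const volatile u64 slice_ns = 5ULL * 1000 * 1000;
```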
Tested-by: Piotr Gorski <lucjan.lucjanov@gmail.com>
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Every time we need to dispatch a task, re-evaluate its time slice as:
(unused_time_slice + min_time_slice) / 2
This refills the time slice of tasks that haven't used much of their
previously assigned time, improving fairness.
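A minimal sketch of the formula above (the helper name is illustrative):
```
/*
 * Refill a task's time slice for the next dispatch: average the unused
 * portion of the previous slice with the minimum time slice.
 */
static u64 task_refill_slice(u64 unused_time_slice, u64 min_time_slice)
{
	return (unused_time_slice + min_time_slice) / 2;
}
```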
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Make sure to always classify interactive tasks, even when the system is
not fully utilized. This ensures that if the system suddenly becomes
overloaded, we already know which tasks need to be dispatched to the
priority DSQ.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Fetch the value of "delta" directly from the return value of
__sync_fetch_and_sub(), since it returns the original value of
cgc->cvtime_delta.
An additional load of cgc->cvtime_delta would be redundant here.
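A before/after sketch of the pattern (variable names follow the commit
text; the surrounding code is illustrative):
```
/* Before: separate load of cgc->cvtime_delta */
u64 delta = cgc->cvtime_delta;
__sync_fetch_and_sub(&cgc->cvtime_delta, delta);

/* After: __sync_fetch_and_sub() already returns the original value */
u64 delta = __sync_fetch_and_sub(&cgc->cvtime_delta, cgc->cvtime_delta);
```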
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
Tasks are consumed from various DSQs in the following order:
per-CPU DSQs => priority DSQ => shared DSQ
Tasks in the shared DSQ may be starved by those in the priority DSQ,
which in turn may be starved by tasks dispatched to any per-CPU DSQ.
To mitigate this, record the timestamp of the last task scheduling event
both from the priority DSQ and the shared DSQ.
If the starvation threshold is exceeded without consuming a task, the
scheduler will be forced to consume a task from the corresponding DSQ.
The starvation threshold can be adjusted using the --starvation-thresh
command line parameter (default is 5ms).
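A sketch of the check, assuming illustrative symbol names
(starvation_thresh_ns and the timestamp variable are not necessarily
the exact ones used by the scheduler):
```
/*
 * Return true if no task has been consumed from the corresponding DSQ
 * for longer than the starvation threshold.
 */
static bool dsq_is_starving(u64 last_consume_ts)
{
	return bpf_ktime_get_ns() - last_consume_ts > starvation_thresh_ns;
}
```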
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
There is no need to RCU protect the cpumask for the offline CPUs: it is
created once when the scheduler is initialized and it's never
deallocated.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Reduce the default time slice to 5ms for faster reaction and better
system responsiveness when the system is overcommitted.
This also helps to provide a more predictable level of performance.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Always use direct CPU dispatch for kthreads: there is no need to treat
kthreads in a special way, simply reuse direct CPU dispatch to
prioritize them.
Moreover, change direct CPU dispatches to use scx_bpf_dispatch_vtime(),
since we may dispatch multiple tasks to the same per-CPU DSQ now.
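A sketch of the fast path (scx_bpf_dispatch_vtime() and its arguments
follow the scx API; the cpu_to_dsq() helper and the other names are
assumptions):
```
/* Per-CPU kthreads: dispatch directly to the target CPU's DSQ. */
if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
	/*
	 * Use vtime ordering, since several tasks may now share the
	 * same per-CPU DSQ.
	 */
	scx_bpf_dispatch_vtime(p, cpu_to_dsq(cpu), slice_ns,
			       p->scx.dsq_vtime, enq_flags);
	return;
}
```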
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Small refactoring of the idle CPU selection logic:
- optimize idle CPU selection for tasks that can run on a single CPU
  (see the sketch below)
- drop the built-in idle selection policy and completely rely on the
  custom one
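A sketch of the single-CPU fast path, as it could appear inside a
pick_idle_cpu()-style helper (scx_bpf_test_and_clear_cpu_idle() is the
scx kfunc; the rest is illustrative):
```
/*
 * Tasks that can only run on one CPU have nothing to select: just
 * check whether that CPU is idle and try to claim it atomically.
 */
if (p->nr_cpus_allowed == 1) {
	if (scx_bpf_test_and_clear_cpu_idle(prev_cpu))
		return prev_cpu;
	return -ENOENT;
}
```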
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
We are incorrectly using the SMT idle cpumask to find any idle CPU; fix
this by using the generic idle cpumask.
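A before/after sketch (both kfuncs are part of the scx API; the
surrounding code is illustrative):
```
/* Before: only CPUs whose whole SMT core is idle were considered */
idle_mask = scx_bpf_get_idle_smtmask();

/* After: any idle CPU is considered */
idle_mask = scx_bpf_get_idle_cpumask();
```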
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Implement CPU hotplugging in scx_bpfland without restarting the
scheduler.
The idle selection logic has been updated to consider online CPUs.
Additionally, a cpumask for offline CPUs has been introduced. Tasks
that have been dispatched to the DSQs associated with offline CPUs are
consumed by the other CPUs that are still online.
Moreover, the dependency on the Topology crate is temporarily dropped;
instead, /sys/devices/system/cpu/smt/active is used to determine whether
SMT should be taken into account during idle selection. The Topology
crate will be re-introduced later, when scx_bpfland gains more
topology-aware capabilities.
This fixes #406.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The stats map in scx_rusty is a BPF_MAP_TYPE_PERCPU_ARRAY, with its size
determined by num_possible_cpus(). Initializing it with nr_cpu_ids() can
result in errors such as:
Error: Failed to zero stat
Caused by:
number of values 6 != number of cpus 8
Fix by using num_possible_cpus() to initialize it.
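The user-space side is Rust, but the sizing rule is easiest to show in
libbpf C terms. A minimal sketch (the zero_stat() helper is
hypothetical; libbpf_num_possible_cpus() and bpf_map__update_elem() are
real libbpf APIs):
```
#include <errno.h>
#include <stdlib.h>
#include <bpf/libbpf.h>

/*
 * Zero one entry of a BPF_MAP_TYPE_PERCPU_ARRAY: the value buffer must
 * hold one value per *possible* CPU, as reported by
 * libbpf_num_possible_cpus(), not per usable CPU ID.
 */
static int zero_stat(struct bpf_map *map, __u32 key, size_t val_sz)
{
	int ncpus = libbpf_num_possible_cpus();
	void *vals;
	int ret;

	if (ncpus < 0)
		return ncpus;
	vals = calloc(ncpus, val_sz);
	if (!vals)
		return -ENOMEM;
	ret = bpf_map__update_elem(map, &key, sizeof(key),
				   vals, (size_t)ncpus * val_sz, 0);
	free(vals);
	return ret;
}
```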
Fixes: 263e02f6 ("rusty: Use nr_cpu_ids instead of nr_cpus_possible")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
With commit 5d20f89a ("scheds-rust: build rust schedulers in sequence"),
schedulers are now built serially one after the other to prevent meson
and cargo from forking NxN parallel tasks.
However, this change has made building a single scheduler much more
cumbersome, due to the chain of dependencies.
For example, building scx_rusty using the specific meson target would
still result in all schedulers being built, because they all depend on
each other.
To address this issue, introduce the new meson build option
`serialize=true|false` (default is false).
This option allows disabling the schedulers' build chain, restoring the
old behavior.
With the build chain disabled, it is now possible to build just a
single scheduler, properly parallelizing the cargo build, without
triggering the build of the others. Example:
$ meson setup build -Dbuildtype=release -Dserialize=false
$ meson compile -C build scx_rusty
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The competition window was 7.5 msec, half of the targeted latency.
However, it is too wide for some workloads, so unrelated tasks may
compete with each other. Hence, it is tightened to about 1 msec with
LAVD_LAT_WEIGHT_SHIFT to avoid unnecessary competition.
Also, when the system is overloaded, the time space is now stretched
more aggressively (i.e., by lat_prio^2) when a task's latency priority
is low (i.e., has a high value).
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Introduce a tunable to set a limit on the minimum vruntime used when a
task is dispatched:
vtime_min = vtime_now - slice_lag_ns
Increasing the time slice lag can make interactive tasks even more
responsive at the cost of starving regular and newly created tasks.
Default time slice lag is 0.
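A sketch of the clamp under these assumptions (vtime_now, slice_lag_ns
and the vtime_before() helper are illustrative, though the pattern is
common in scx schedulers):
```
static inline bool vtime_before(u64 a, u64 b)
{
	return (s64)(a - b) < 0;
}

/* Never let a dispatched task lag more than slice_lag_ns behind. */
u64 vtime_min = vtime_now - slice_lag_ns;

if (vtime_before(p->scx.dsq_vtime, vtime_min))
	p->scx.dsq_vtime = vtime_min;
```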
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Overview
========
This scheduler is derived from scx_rustland, but it is fully
implemented in BPF, with a minimal user-space Rust part to process
command line options, collect metrics, and log out scheduling
statistics.
Unlike scx_rustland, all scheduling decisions are made by the BPF
component.
Motivation
==========
The primary goal of this scheduler is to act as a performance baseline
for comparison with scx_rustland, allowing for a better assessment of
the overhead caused by kernel/user-space interactions.
It can also be used to deploy prototypes initially tested in the
scx_rustland scheduler. In fact, this scheduler is expected to
outperform scx_rustland, due to the elimination of the kernel/user-space
overhead.
Scheduling policy
=================
scx_bpfland is a vruntime-based sched_ext scheduler that prioritizes
interactive workloads. Its scheduling policy closely mirrors
scx_rustland, but it has been re-implemented in BPF with some small
adjustments.
Tasks are categorized as either interactive or regular based on their
average rate of voluntary context switches per second: tasks that exceed
a specific voluntary context switch threshold are classified as
interactive.
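As a sketch of the classification rule (the context names avg_nvcsw and
nvcsw_thresh are assumptions, not the scheduler's exact symbols):
```
/*
 * A task is considered interactive when its average rate of voluntary
 * context switches per second exceeds the configured threshold.
 */
static bool is_task_interactive(const struct task_ctx *tctx)
{
	return tctx->avg_nvcsw >= nvcsw_thresh;
}
```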
Interactive tasks are prioritized in a higher-priority DSQ, while
regular tasks are placed in a lower-priority DSQ. Within each queue,
tasks are sorted based on their weighted runtime, using the built-in scx
vtime ordering capabilities (scx_bpf_dispatch_vtime()).
Moreover, each task gets a time slice budget. When a task is dispatched,
it receives a time slice equivalent to the remaining unused portion of
its previously allocated time slice (with a minimum threshold applied).
This gives latency-sensitive workloads more chances to exceed their time
slice when needed to perform short bursts of CPU activity without being
interrupted (i.e., real-time audio encoding / decoding workloads).
Results
=======
Initial test results indicate that this scheduler offers around a +5%
improvement in frames-per-second (fps) compared to scx_rustland when
using the benchmark "playing a video game while recompiling the kernel".
This improvement was observed in games such as Cyberpunk 2077,
Counter-Strike 2, and Baldur's Gate 3.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The old approach was too conservative in running a new task, so when a
fork-heavy workload competes with a CPU-bound workload, the fork-heavy
one is starved. The new approach solves the starvation problem by
inheriting the parent's statistics. This seems a good guess (at least
better than the old one) at how a new task will behave.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When the system is highly loaded with compute-intensive tasks, the old
setting chokes latency-sensitive tasks, so loosen the deadline when the
system is overloaded (> 100% utilization).
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When scx_lavd is loaded, it prints out its build ID. This helps to
easily identify which version is under test.
```
01:56:54 [INFO] scx_lavd scheduler is initialized (build ID: 0.8.1-g98a5fa8595430414115c504857cea1a458393838-dirty x86_64-unknown-linux-gnu)
```
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The synchronization for mitosis is a bit ad hoc, working around the
lack of atomics in BPF. This commit updates the logic to use
READ_ONCE()/WRITE_ONCE() and compiler barriers to get the behaviors we
want.
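An illustrative sketch of the pattern with generic names (not the
mitosis code itself):
```
#define READ_ONCE(x)		(*(volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))
#define barrier()		asm volatile("" ::: "memory")

/* Writer: publish the payload before setting the ready flag. */
shared_data = value;
barrier();		/* compiler barrier: keep the stores ordered */
WRITE_ONCE(ready, 1);

/* Reader: check the flag before touching the payload. */
if (READ_ONCE(ready)) {
	barrier();
	consume(shared_data);
}
```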
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
When someone is testing schedulers, we often have to ask which version
of the scheduler they are running. Now that we can access the build ID
from Rust schedulers, let's update scx_rusty to print the build ID when
rusty first starts running.
This results in output such as the following:
```
[void@maniforge scx]$ rusty
19:04:26 [INFO] Running scx_rusty (build ID: 0.8.1-g2043d2537f37c8d75753bb65eb75bca965067564 x86_64-unknown-linux-gnu/debug)
19:04:26 [INFO] NUMA[00] mask= 0b11111111111111111111111111111111
19:04:26 [INFO] DOM[00] mask= 0b00000000111111110000000011111111
19:04:26 [INFO] DOM[01] mask= 0b11111111000000001111111100000000
19:04:26 [INFO] Rusty scheduler started!
```
Signed-off-by: David Vernet <void@manifault.com>
This is a second attempt to optimize tunables for a wider range of
games.
1) LAVD_BOOST_RANGE increased from 14 (35%) to 40 (100% of the nice
range). Now the latency priority (biased by the nice value) will decide
which task should run first. The nice value will decide the time slice.
2) The first change gives higher priority to latency-critical tasks
compared to before. To compensate, the slice boost is also increased
(2x -> 3x).
Signed-off-by: Changwoo Min <changwoo@igalia.com>
This change adds a new module to the scx_utils crate that provides a
log recorder for metrics-rs. The log recorder will log all metrics to
the console at a configurable interval in an easy-to-read format. Each
metric type will be displayed in a separate section. Indentation will
be used to show the hierarchy of the metrics. This results in a more
verbose output, but it is easier to read and understand.
scx_rusty was updated to use the log recorder and all explicit metric
logging was removed.
Counters will show the total count and the rate of change per second.
Counters with an additional label, like `type` in
`dispatched_tasks_total` in rusty, will show the count, rate, and
percentage of the total count.
Counters:
dispatched_tasks_total: 65559 [1344.8/s]
prev_idle: 44963 (68.6%) [966.5/s]
wsync_prev_idle: 15696 (23.9%) [317.3/s]
direct_dispatch: 2833 (4.3%) [35.3/s]
dsq: 1804 (2.8%) [21.3/s]
wsync: 262 (0.4%) [4.3/s]
direct_greedy: 1 (0.0%) [0.0/s]
pinned: 0 (0.0%) [0.0/s]
greedy_idle: 0 (0.0%) [0.0/s]
greedy_xnuma: 0 (0.0%) [0.0/s]
direct_greedy_far: 0 (0.0%) [0.0/s]
greedy_local: 0 (0.0%) [0.0/s]
dl_clamped_total: 1290 [20.3/s]
dl_preset_total: 514 [1.0/s]
kick_greedy_total: 6 [0.3/s]
lb_data_errors_total: 0 [0.0/s]
load_balance_total: 0 [0.0/s]
repatriate_total: 0 [0.0/s]
task_errors_total: 0 [0.0/s]
Gauges will show the last set value:
Gauges:
slice_length_us: 20000.00
Histograms will show the average, min, and max. The histogram will be
reset after each log interval to avoid memory leaks, since the data
structure that holds the samples is unbounded.
Histograms:
cpu_busy_pct: avg=1.66 min=1.16 max=2.16
load_avg node=0: avg=0.31 min=0.23 max=0.39
load_avg node=0 dom=0: avg=0.31 min=0.23 max=0.39
processing_duration_us: avg=297.50 min=296.00 max=299.00
Signed-off-by: Jose Fernandez <josef@netflix.com>
In some games (e.g., Elden Ring), it was observed that preemption
happens much less frequently. The reason is that tasks' runtime per
schedule is similar, so it does not meet the existing criteria. To
alleviate the problem, the following three tunables are revised:
1) Smaller LAVD_PREEMPT_KICK_MARGIN and LAVD_PREEMPT_TICK_MARGIN help to
trigger more preemption.
2) Smaller LAVD_SLICE_MAX_NS works better, especially on 250 or 300Hz
kernels.
3) Longer LAVD_ELIGIBLE_TIME_MAX perturbs timelines less frequently.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The original assignment of the variable ridx is equivalent to comparing
"ridx" against "wids - MAX_PIDS". Use the u64 max library helper
function to perform the comparison, which provides better readability.
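A before/after sketch using the names from the commit text, shown in
C-style form with a MAX() macro (the exact original expression, and the
language of the actual code, are assumptions):
```
#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Before: open-coded comparison */
ridx = ridx > wids - MAX_PIDS ? ridx : wids - MAX_PIDS;

/* After: a max() helper expresses the intent directly */
ridx = MAX(ridx, wids - MAX_PIDS);
```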
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
Check whether the BalanceState of pull_dom.load inside the function
try_find_move_task() is actually the NeedsPull variant. This makes task
migration a bit more conservative when the system is under heavy load.
Experiments were performed while the system was compiling the Linux
kernel and undergoing a large amount of I/O operations at the same time
using fio.
The results show that before the modification there were 126,617 task
migrations system-wide; after the modification, there were 115,419.
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
In scx_rlfifo, we're currently using topo.nr_cpus_possible() to
determine how many possible CPU IDs we could have on the system. To
properly support systems whose disabled CPUs may be in the middle of the
range of possible CPU IDs, let's instead use topo.nr_cpu_ids() so that
we don't accidentally dispatch to an invalid DSQ.
Signed-off-by: David Vernet <void@manifault.com>
In scx_rusty, we're currently using topo.nr_cpus_possible() to determine
how many possible CPU IDs we could have on the system. scx_rusty already
accounts for offlined CPUs, so to properly support systems whose
disabled CPUs may be in the middle of the range of possible CPU IDs,
let's instead use topo.nr_cpu_ids().
Signed-off-by: David Vernet <void@manifault.com>