Put a performance-critical task on a performance-critical queue and a
regular task on a regular queue.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
- Use .enumerate() consistently while building the cpu_fids vector.
- Use .then_with() to chain .cmp() when sorting cpu_fids.
Both reduce visual clutter.
With the optimizations of calculating the ineligibility duration, the
scheduler now works well under heavy load without 2-level scheduling, so we
drop it for simplicity.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
This commit includes a few changes:
- treat a newly forked task more conservatively
- defer the execution of more tasks for a longer time using the ineligibility duration
- consider whether a task was woken up when calculating the ineligibility duration
L or R: Latency-critical, Regular
H or I: performance-Hungry, performance-Insensitive
B or T: Big, liTtle
E or G: Eligible, Greedy
P or N: Preemption, Not
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Tune the time slice under high load and make the kick/tick margins for
preemption more conservative. In particular, aggressive IPI-based
preemption (kick) causes performance instability.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Instead of using the coarse-grained log(), let's directly use the ratio of
the task's service time. The virtual deadline equation is also updated to
reflect this change.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The max_entries parameter in BPF_MAP_TYPE_PERCPU_ARRAY defines the
number of values per CPU and for cpu_ctx_stor we only need one item: the
CPU context.
Set max_entries to 1 to avoid allocating unnecessary memory and slightly
reduce the memory footprint.
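As a rough sketch (assuming the usual vmlinux.h and bpf_helpers.h includes, and with an illustrative struct rather than the real cpu_ctx layout), such a map can be declared with a single entry; lookups still use key 0, and the per-CPU separation is handled by the map type itself:
```c
/* Illustrative sketch only: a per-CPU array holding one CPU-context
 * entry per CPU. The struct fields are hypothetical, not the actual
 * cpu_ctx definition.
 */
struct cpu_ctx {
	u64	util;
	u64	last_kick_clk;
};

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);		/* one value per CPU is enough */
	__type(key, u32);
	__type(value, struct cpu_ctx);
} cpu_ctx_stor SEC(".maps");
```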
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
We introduce two-level scheduling similar to scx_bpfland. The two-level
scheduling consists of two DSQs: 1) a latency-critical run queue and 2) a
regular run queue. The scheduler prioritizes tasks on the latency-critical
queue but makes its best effort to schedule tasks on the regular queue. The
scheduler can be more resilient under heavy load by segregating regular,
non-latency-critical tasks from latency-critical tasks.
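As a minimal sketch of the dispatch-side preference (the DSQ names are made up here, and this assumes the scx_bpf_consume() kfunc and the scx BPF_STRUCT_OPS helpers available at the time), the scheduler can try the latency-critical queue first and fall back to the regular one:
```c
/* Illustrative sketch only: prefer the latency-critical DSQ and fall
 * back to the regular DSQ. The DSQ ids are hypothetical names.
 */
void BPF_STRUCT_OPS(lavd_dispatch, s32 cpu, struct task_struct *prev)
{
	/* Latency-critical tasks are consumed first ... */
	if (scx_bpf_consume(LAVD_LATENCY_CRITICAL_DSQ))
		return;

	/* ... and regular tasks are served on a best-effort basis. */
	scx_bpf_consume(LAVD_REGULAR_DSQ);
}
```
A real implementation would also bound how long the regular queue can go unserved; that detail is omitted here.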
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The max frequency information from topology (from sysfs) is not always
correct. In some installations, it returns zero for all CPUs. In this case,
let's just assume all CPUs have the same capacity (1024), hoping the kernel
can give more precise information.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Latency criticality is a task's inherent property, but the starvation
factor is its dynamic status reflecting the urgency of scheduling. Hence, we
segregate the starvation factor out. Also, clean up the related unnecessary
arguments and struct fields.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When a task is running on a more performant core, the scheduler will give it
a longer time slice; on a less performant core, a shorter time slice will be
assigned. The longer time slice helps boost the clock frequency on a
performant core, and the shorter time slice gives the performant core more
chances of being utilized.
Regarding CPU capacity, we first check whether the kernel-provided capacity
values are trustworthy. If not (i.e., they are all the same value), we rely
on the user-provided values based on each CPU's maximum clock frequency.
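As an arithmetic sketch of the slice scaling (constants and the helper name are made up, not the scheduler's actual code), the assigned slice can simply scale linearly with the CPU's capacity:
```c
/* Illustrative sketch only: scale a base time slice by CPU capacity,
 * where 1024 represents the most performant CPU. Names and constants
 * are hypothetical.
 */
#define BASE_SLICE_NS	(3 * 1000 * 1000)	/* 3 msec */
#define MAX_CAPACITY	1024

static u64 scaled_time_slice(u64 cpu_capacity)
{
	/* More capable CPU -> longer slice; less capable -> shorter. */
	return BASE_SLICE_NS * cpu_capacity / MAX_CAPACITY;
}
```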
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When the --prefer-smt-core option is on, core compaction prefers to
utilize the hyper-twins first before utilizing the other physical CPUs. By
default, the option is off.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Previously, the core compaction assumed that each core's capacity was
the same. Now, we additionally consider each core's max clock frequency.
So, it always tries to use the higher-frequency cores first.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Remove unused constants and rename outdated constants to proper names
(LAVD_TC_* to LAVD_CC_* and LAVD_ELIGIBLE_DSQ to LAVD_GLOBAL_DSQ).
Signed-off-by: Changwoo Min <changwoo@igalia.com>
With all the other optimizations and tunings, it turns out that maintaining
two runqueues does more harm than good.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Further depenalize above-average latency-critical tasks and further penalize
below-average latency-critical tasks in the ineligibility duration.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
LAVD_VDL_LOOSENESS_FT represents how loose the deadline is. A smaller value
means a tighter deadline. While it is unlikely to be tuned, let's keep it as
a tunable for now.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
That is okay since the runtime is considered when calculating a virtual
deadline. A shorter runtime will linearly result in a tighter deadline.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
If it inherits the parent's properties, a newly forked task tends to be
over-prioritized; that is, many parent processes, such as `make`, are a bit
more latency-critical than average.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Advancing the clock more slowly when overloaded gives latency-critical tasks
more opportunities to cut into the run queue. Controlling the clock reflects
the actual load better than the prior approach of stretching the time-space
when overloaded.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
We now maintain two run queues, an eligible run queue (DSQ) and an
ineligible run queue (rbtree), both sorted by the task's virtual deadline.
When the eligible run queue is empty, or the ineligible run queue has
not been consumed for too long (e.g., 15 msec), a task in the ineligible
run queue is moved to the eligible run queue for execution. With these
two queues, we achieve better admission control.
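A simplified sketch of the promotion rule (all names, helpers, and the bookkeeping are hypothetical; the real ineligible queue is an rbtree ordered by virtual deadline):
```c
/* Illustrative sketch only: promote a task from the ineligible queue to
 * the eligible DSQ when the eligible queue is empty or the ineligible
 * queue has been starved too long. Helpers are hypothetical.
 */
#define INELIGIBLE_TIMEOUT_NS	(15 * 1000 * 1000)	/* 15 msec */

static void refill_eligible_queue(u64 now, u64 ineligible_last_consumed_at)
{
	bool starved = now - ineligible_last_consumed_at > INELIGIBLE_TIMEOUT_NS;

	if (eligible_queue_is_empty() || starved) {
		struct task_ctx *taskc = pop_earliest_deadline_ineligible();

		if (taskc)
			push_to_eligible_queue(taskc);
	}
}
```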
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Update libbpf-rs & libbpf-cargo to 0.24. Among other things, generated
skeletons now contain directly accessible map and program objects, no
longer necessitating the use of accessor methods. As a result, the risk
for mutability conflicts is reduced greatly.
Signed-off-by: Daniel Müller <deso@posteo.net>
Estimating the service time from run time and frequency is not
incorrect. However, it reacts slowly to sudden changes since it relies
on the moving average. Hence, we directly measure the service time to
enforce fairness.
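A hedged sketch of the direct measurement (the task-context fields and helper are assumptions, not the actual implementation): accumulate the weighted run time when the task stops running rather than deriving the service time from averaged run time and frequency.
```c
/* Illustrative sketch only: accumulate the task's service time directly
 * in ops.stopping(). The task_ctx fields and get_task_ctx() are
 * hypothetical.
 */
void BPF_STRUCT_OPS(lavd_stopping, struct task_struct *p, bool runnable)
{
	struct task_ctx *taskc = get_task_ctx(p);
	u64 now = bpf_ktime_get_ns();

	if (!taskc)
		return;

	/* Normalize by weight so fairness holds across nice levels. */
	taskc->svc_time += (now - taskc->last_running_at) * 100 / p->scx.weight;
}
```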
Signed-off-by: Changwoo Min <changwoo@igalia.com>
sched_ext is about to be merged upstream. There are some compatibility-breaking
changes and we're making the current sched_ext/for-6.11
1edab907b57d ("sched_ext/scx_qmap: Pick idle CPU for direct dispatch on
!wakeup enqueues") the baseline.
Tag everything except scx_mitosis as 1.0.0. As scx_mitosis is still in early
development and is currently temporarily disabled, only the patchlevel is
bumped.
Sync from sched_ext/for-6.11 1edab907b57d ("sched_ext/scx_qmap: Pick idle
CPU for direct dispatch on !wakeup enqueues")
git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext.git for-6.11
- cgroup support hasn't landed in the upstream kernel yet. This most likely
will happen in a few weeks. For the time being, disable scx_flatcg,
scx_pair and scx_mitosis.
- Compat macro for DSQ task iterator dropped. This is now a part of
the baseline.
- scx_bpf_consume() isn't upstream yet. BPF interfacing side is still being
discussed. Dropped example usage from tools/sched_ext. None of the
practical schedulers use it, so this should be fine for now.
- scx_bpf_cpu_rq() added.
- AUTOATTACH workaround for newer libbpf versions added.
A task can become runnable in any task's context, not only its waker task's.
Thus, we should not count a wake-up that occurs in an unrelated task's
context. With this commit, the scheduler can (much more) accurately detect
waker-wakee relationships.
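A sketch of the intended check (the surrounding code and helper names are assumptions): in ops.runnable(), only credit a wake-up when the enqueue is actually flagged as a wakeup and the current task, i.e., the context in which the wakeup happens, is a different task.
```c
/* Illustrative sketch only: record the waker-wakee relationship only when
 * p is woken up from another task's context. Helpers are hypothetical.
 */
void BPF_STRUCT_OPS(lavd_runnable, struct task_struct *p, u64 enq_flags)
{
	struct task_struct *waker;

	/* Not a wakeup (e.g., a fresh fork or a requeue): nothing to credit. */
	if (!(enq_flags & SCX_ENQ_WAKEUP))
		return;

	/* The task in whose context p became runnable. */
	waker = bpf_get_current_task_btf();
	if (!waker || waker == p)
		return;

	record_waker_wakee(waker, p);
}
```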
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The prior approach, using the sum of weights, penalizes nice tasks with
large nice values too heavily. With this commit, the time slice is
determined by the number of runnable tasks regardless of nice priority.
Note that fairness will still be enforced based on tasks' nice priorities
(weights).
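A minimal sketch of the idea (constants and the runnable-task counter are assumptions): divide a maximum slice by the number of runnable tasks and clamp it to a floor, independent of weights.
```c
/* Illustrative sketch only: the time slice depends on the number of
 * runnable tasks, not on their weights. Names and constants are hypothetical.
 */
#define SLICE_MAX_NS	(4 * 1000 * 1000)	/* 4 msec */
#define SLICE_MIN_NS	(500 * 1000)		/* 0.5 msec */

static u64 calc_time_slice(u64 nr_runnable)
{
	u64 slice;

	if (!nr_runnable)
		return SLICE_MAX_NS;

	slice = SLICE_MAX_NS / nr_runnable;
	return slice > SLICE_MIN_NS ? slice : SLICE_MIN_NS;
}
```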
Signed-off-by: Changwoo Min <changwoo@igalia.com>
To make it easy to distinguish, let's initialize the current logical clock
to zero (not the current physical time). Also, avoid the deadline
calculation becoming zero by adding +1 here and there.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
This commit changes the use of a physical clock to a virtual, logical
clock in calculating deadlines.
- When a task runs, the virtual current clock advances to the task's
virtual deadline.
- When a task is enqueued, its virtual deadline is calculated from the
virtual current clock.
With the above two changes, it is guaranteed that no task has a virtual
deadline smaller than the virtual current clock. This means any enqueuing
task can compete with all already enqueued tasks, which allows a
latency-critical task to be immediately scheduled if needed.
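The two rules can be sketched as follows (all names are illustrative, and synchronization of the shared clock is omitted):
```c
/* Illustrative sketch only of the logical-clock rules. */
static u64 cur_logical_clk;	/* the virtual current clock */

/* Rule 1: when a task starts running, the logical clock advances to the
 * task's virtual deadline (never moving backwards). */
static void advance_logical_clk(struct task_ctx *taskc)
{
	if (taskc->vdeadline > cur_logical_clk)
		cur_logical_clk = taskc->vdeadline;
}

/* Rule 2: when a task is enqueued, its virtual deadline is measured from
 * the current logical clock, so it can never be behind the clock. */
static void set_vdeadline(struct task_ctx *taskc, u64 deadline_delta)
{
	taskc->vdeadline = cur_logical_clk + deadline_delta;
}
```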
Signed-off-by: Changwoo Min <changwoo@igalia.com>
With commit 5d20f89a ("scheds-rust: build rust schedulers in sequence"),
schedulers are now built serially one after the other to prevent meson
and cargo from forking NxN parallel tasks.
However, this change has made building a single scheduler much more
cumbersome, due to the chain of dependencies.
For example, building scx_rusty using the specific meson target would
still result in all schedulers being built, because they all depend on
each other.
To address this issue, introduce the new meson build option
`serialize=true|false` (default is false).
This option makes it possible to disable the schedulers' build chain,
restoring the old behavior.
With this option enabled, it is now possible to build just a single
scheduler, parallelizing the cargo build properly, without triggering
the build of the others. Example:
$ meson setup build -Dbuildtype=release -Dserialize=false
$ meson compile -C build scx_rusty
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
The competition window was 7.5 msec, half of the target latency.
However, it is too wide for some workloads, so unrelated tasks may
compete with each other. Hence, it is tightened to about 1 msec with
LAVD_LAT_WEIGHT_SHIFT to avoid unnecessary competition.
Also, when the system is overloaded, the time space is now stretched more
aggressively (i.e., lat_prio^2) when a task's latency priority is low
(a high value).
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The old approach was too conservative in running a new task, so when a
fork-heavy workload competes with a CPU-bound workload, the fork-heavy
one is starved. The new approach solves the starvation problem by
inheriting the parent's statistics. It seems a good guess (at least better
than the old one) of how a new task will behave.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When the system is highly loaded with compute-intensive tasks, the old
setting chokes latency-sensitive tasks, so loosen the deadline when the
system is overloaded (> 100% utilization).
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When lavd is loaded, it prints out its build ID. This makes it easy to
identify which version is being tested.
```
01:56:54 [INFO] scx_lavd scheduler is initialized (build ID: 0.8.1-g98a5fa8595430414115c504857cea1a458393838-dirty x86_64-unknown-linux-gnu)
```
Signed-off-by: Changwoo Min <changwoo@igalia.com>
This is a second attempt to optimize tunables for a wider range of
games.
1) LAVD_BOOST_RANGE increased from 14 (35%) to 40 (100% of the nice range).
Now the latency priority (biased by the nice value) will decide which
task should run first. The nice value will decide the time slice.
2) The first change gives latency-critical tasks higher priority than
before. To compensate, the slice boost is also increased (2x -> 3x).
Signed-off-by: Changwoo Min <changwoo@igalia.com>
In some games (e.g., Elden Ring), it was observed that preemption
happens much less frequently. The reason is that tasks' runtime per
schedule is similar, so it does not meet the existing criteria. To
alleviate the problem, the following three tunables are revised:
1) Smaller LAVD_PREEMPT_KICK_MARGIN and LAVD_PREEMPT_TICK_MARGIN help to
trigger more preemption.
2) Smaller LAVD_SLICE_MAX_NS works better, especially on 250 or 300 Hz
kernels.
3) Longer LAVD_ELIGIBLE_TIME_MAX perturbs timelines less frequently.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Use the function can_task1_kick_task2() to replace places that also check
comp_preemption_info between two CPUs, for better consistency.
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
It seems that we are not updating `is_idle` when we find an idle CPU
with pick_cpu(), causing unnecessary rescheduling events when
select_cpu() is called.
To resolve this, ensure that the is_idle state is correctly set.
Additionally, always ensure that the task is dispatched to the local DSQ
immediately upon finding (and reserving) an idle CPU.
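A sketch of the fixed path (pick_cpu() and the simplified body are illustrative, not the exact code): report the result through is_idle and dispatch straight to the local DSQ when an idle CPU was found and reserved.
```c
/* Illustrative sketch only: propagate the idle result from pick_cpu()
 * and dispatch directly to the local DSQ of the reserved idle CPU.
 */
s32 BPF_STRUCT_OPS(lavd_select_cpu, struct task_struct *p, s32 prev_cpu,
		   u64 wake_flags)
{
	bool is_idle = false;
	s32 cpu;

	cpu = pick_cpu(p, prev_cpu, wake_flags, &is_idle);
	if (is_idle) {
		/* The idle CPU is already reserved; skip the enqueue path. */
		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
	}

	return cpu;
}
```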
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
- clean up u64 and u32 usages in structures to reduce struct size
- refactor pick_cpu() for readability
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The required CPU performance (cpuperf) was set to 1024 (100%) when the
CPU utilization was 100%. When a sudden load spike happens, this makes the
system adapt slowly during the next interval.
The new scheme always reserves some headroom in advance, setting cpuperf
to 1024 once the CPU utilization reaches 85%. This gives some room to
adapt in advance.
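As an arithmetic sketch (names are made up; the 85% threshold comes from the description above), the requested performance saturates at 1024 before utilization reaches 100%:
```c
/* Illustrative sketch only: map CPU utilization (0..1024) to the
 * requested cpuperf with ~15% headroom, saturating at ~85% utilization.
 */
#define CPUPERF_MAX		1024
#define UTIL_SATURATION		870	/* ~85% of 1024 */

static u32 calc_cpuperf_target(u64 cpu_util)
{
	u64 perf = cpu_util * CPUPERF_MAX / UTIL_SATURATION;

	return perf < CPUPERF_MAX ? perf : CPUPERF_MAX;
}
```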
Signed-off-by: Changwoo Min <changwoo@igalia.com>
In preparation of upstreaming, let's set the min version requirement at the
released v6.9 kernels. Drop __COMPAT_SCX_KICK_IDLE. The open helper macros
now check the existence of SCX_KICK_IDLE and abort if not.
In preparation of upstreaming, let's set the min version requirement at the
released v6.9 kernels. Drop __COMPAT_scx_bpf_switch_call(). The open helper
macros now check the existence of SCX_OPS_SWITCH_PARTIAL and abort if not.