Add a small section to document how to use SCX_SCHEDULER_OVERRIDE and
SCX_FLAGS_OVERRIDE with the scx systemd service.
Also fix a small typo (namspace -> namespace).
Signed-off-by: Pietro Righi <pietro.righi.email@gmail.com>
Switching the scheduler requires changing SCX_SCHEDULER (and potentially
also SCX_FLAGS) in /etc/default/scx.
This patch allows overriding these settings using systemd environment
variables SCX_SCHEDULER_OVERRIDE and SCX_FLAGS_OVERRIDE, without
changing the default configuration.
Example:
> grep SCX_SCHEDULER /etc/default/scx
SCX_SCHEDULER=scx_rusty
> sudo systemctl status scx
...
Main PID: 8021 (scx_rusty)
...
> sudo systemctl set-environment SCX_SCHEDULER_OVERRIDE=scx_rustland
> sudo systemctl restart scx
> sudo systemctl status scx
...
Main PID: 4021 (scx_rustland)
...
This feature can be useful for quickly testing different schedulers and
settings, without altering the global system configuration.
Signed-off-by: Pietro Righi <pietro.righi.email@gmail.com>
These are used in mitosis, but they belong in common code so other
schedulers can do css iteration.
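For context, a minimal sketch of the kind of css iteration this enables,
assuming the kernel's bpf_iter_css open-coded iterator and the
bpf_for_each() convenience macro (the actual helpers being moved may look
different):

  /* Walk every descendant css of @cgrp in pre-order; the iterator kfunc
   * and macro declarations are assumed to come from the schedulers'
   * common BPF headers. */
  static inline int count_descendant_csses(struct cgroup *cgrp)
  {
          struct cgroup_subsys_state *pos;
          int nr = 0;

          bpf_rcu_read_lock();
          bpf_for_each(css, pos, &cgrp->self, BPF_CGROUP_ITER_DESCENDANTS_PRE) {
                  nr++;
          }
          bpf_rcu_read_unlock();

          return nr;
  }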
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
If LLVM is compiled with the LLVM_VERSION_SUFFIX cmake option, then the
version may have an additional suffix, for example "18.1.7+libcxx".
Gentoo for example uses this to fend off ABI issues between libstdc++
and libc++.
Signed-off-by: Violet Purcell <vimproved@inventati.org>
The old logic for CPU frequency scaling checked the task's CPU
performance target (i.e., target CPU frequency) every tick interval and
updated it immediately. In other words, it sampled and applied a new
performance target on every tick, so the CPU frequency fluctuated every
tick interval, resulting in less steady performance.
Now we take a different strategy. The key idea is to increase the
frequency as soon as a task starts running, for quick adaptation to load
spikes, and, when necessary, to decrease it gradually every tick interval
to avoid frequency fluctuations.
In my testing, it shows more stable performance in many workloads
(games, compilation).
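A rough illustration of this strategy (not the actual scx_lavd code; it
assumes the sched_ext scx_bpf_cpuperf_set() kfunc, and the helper names
and parameters here are made up for the sketch):

  /*
   * Sketch: jump straight to the task's performance target when it
   * starts running, then step the level down once per tick while the
   * target allows it.
   */
  static void cpuperf_ramp_up(s32 cpu, u32 target)
  {
          /* ramp up immediately so load spikes are absorbed quickly */
          scx_bpf_cpuperf_set(cpu, target);
  }

  static void cpuperf_decay(s32 cpu, u32 cur, u32 target, u32 step)
  {
          /* ramp down gradually, one step per tick, never below target */
          if (cur > target)
                  scx_bpf_cpuperf_set(cpu, cur - step > target ?
                                           cur - step : target);
  }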
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Originally, do_update_sys_stat() simply calculated the system-wide CPU
utilization. Over time, it has evolved to collect all kinds of
system-wide, periodic statistics for decision-making, so it has become
bulky. Now, it is time to refactor it for readability. This commit does
not contain functional changes other than refactoring.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
The periodic CPU utilization routine does a lot of other work now. So we
rename LAVD_CPU_UTIL_INTERVAL_NS to LAVD_SYS_STAT_INTERVAL_NS.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When a device is suspended and resumed, the suspended duration is added
to a task's runtime if the task was running on the CPU at the time.
After the resume, the task's runtime is incorrectly long and the
scheduler starts to treat the system as being under heavy load. To avoid
this problem, measure the suspended duration and subtract it from the
task's runtime.
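One way to measure the suspended duration from BPF (a sketch, not
necessarily how scx_lavd does it) is to compare CLOCK_MONOTONIC, which
stops across suspend, with CLOCK_BOOTTIME, which keeps counting:

  /* The boottime clock advances across suspend while the monotonic
   * clock does not, so the difference of their deltas over an interval
   * is the time spent suspended in that interval. */
  struct clk_sample {
          u64 mono;
          u64 boot;
  };

  static u64 suspended_ns(const struct clk_sample *prev,
                          struct clk_sample *now)
  {
          u64 dmono, dboot;

          now->mono = bpf_ktime_get_ns();      /* CLOCK_MONOTONIC */
          now->boot = bpf_ktime_get_boot_ns(); /* CLOCK_BOOTTIME  */

          dmono = now->mono - prev->mono;
          dboot = now->boot - prev->boot;

          return dboot > dmono ? dboot - dmono : 0;
  }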
Signed-off-by: Changwoo Min <changwoo@igalia.com>
scx_mitosis is a dynamic affinity scheduler which assigns cgroups to
cells and cells to discrete sets of CPUs. The number of cells is
dynamic, as is the CPU assignment. BPF mostly just does vtime scheduling
for each cell, tracks load, and responds to reconfiguration from
userspace. Userspace makes the decisions about how to assign cgroups to
cells and cells to CPUs.
This is not yet a complete scheduler; much of the userspace logic is a
placeholder while I experiment with better logic. I also want to add
richer scheduling semantics to userspace, e.g. so that cells can have
more of a "soft affinity" rather than the strict partitioning
implemented currently.
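As a rough, hypothetical illustration of the data model described above
(all struct and field names here are made up for the sketch, not taken
from scx_mitosis):

  /* Hypothetical per-cell state: a dynamic set of CPUs plus vtime and
   * load bookkeeping for the cell's dispatch queue. */
  struct cell {
          struct bpf_cpumask __kptr *cpumask; /* CPUs currently assigned */
          u64 vtime_now;                      /* vtime frontier          */
          u64 load;                           /* aggregate cell load     */
  };

  /* Hypothetical per-cgroup state tying the cgroup to a cell. */
  struct cgrp_ctx {
          u32 cell_idx; /* which cell this cgroup is assigned to */
  };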
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
The RESIZE_ARRAY() macro assumes the presence of an in-scope "skel" variable.
This is bad practice and can cause issues in other macros that use it. Let's
update it to explicitly take a skel argument.
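A sketch of what the updated macro might look like, assuming the
existing macro resizes a skeleton datasec array via libbpf's
bpf_map__set_value_size() and bpf_map__initial_value() (the exact body
may differ):

  /* Resize global array @arr in datasec @elfsec of skeleton @skel to
   * @n elements, then refresh the skeleton's mmap'ed pointer. */
  #define RESIZE_ARRAY(skel, elfsec, arr, n)                              \
          do {                                                            \
                  size_t __sz;                                            \
                  bpf_map__set_value_size((skel)->maps.elfsec##_##arr,    \
                          sizeof((skel)->elfsec##_##arr->arr[0]) * (n));  \
                  (skel)->elfsec##_##arr = bpf_map__initial_value(        \
                          (skel)->maps.elfsec##_##arr, &__sz);            \
          } while (0)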
Signed-off-by: David Vernet <void@manifault.com>
This change adds the CPU frequency transition latency, read from the
`cpuinfo_transition_latency` file in sysfs. The value of this field is
described in the [cpufreq
docs](https://www.kernel.org/doc/Documentation/cpu-freq/user-guide.txt).
On supported systems it returns the CPU frequency transition latency in
nanoseconds. The goal of this change is to let schedulers use this data
in the future to make better frequency scaling decisions.
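For reference, the value can be read straight from sysfs; a minimal
userspace sketch (error handling simplified):

  #include <stdio.h>

  /* Read a CPU's frequency transition latency in nanoseconds from
   * sysfs. Returns a negative value if the file is missing or
   * unreadable. */
  static long long cpu_transition_latency_ns(int cpu)
  {
          char path[128];
          long long lat_ns = -1;
          FILE *f;

          snprintf(path, sizeof(path),
                   "/sys/devices/system/cpu/cpu%d/cpufreq/cpuinfo_transition_latency",
                   cpu);

          f = fopen(path, "r");
          if (!f)
                  return -1;
          if (fscanf(f, "%lld", &lat_ns) != 1)
                  lat_ns = -1;
          fclose(f);

          return lat_ns;
  }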
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
The READ_ONCE()/WRITE_ONCE() macros were added in commit 0932fde; we
should be able to use them to avoid potential data races on
domc->min_vruntime.
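A minimal sketch of the pattern (u64 and the macros come from the
schedulers' common headers; struct dom_ctx here is a stand-in for
rusty's per-domain context with a u64 min_vruntime field):

  /* Minimal stand-in for the per-domain context (illustrative). */
  struct dom_ctx {
          u64 min_vruntime;
  };

  static void set_min_vruntime(struct dom_ctx *domc, u64 vruntime)
  {
          /* single, untorn store visible to concurrent readers */
          WRITE_ONCE(domc->min_vruntime, vruntime);
  }

  static u64 get_min_vruntime(struct dom_ctx *domc)
  {
          /* single, untorn load not cached by the compiler */
          return READ_ONCE(domc->min_vruntime);
  }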
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
- pick_idle_cpu() was putting idle_smtmask that it didn't acquire.
- layered_enqueue() was unnecessarily entering preemption path after finding
an idle CPU.
- No need to test whether scx_bpf_get_idle_cpu/smtmask() return NULL. They
never do.
- Relocate cctx->yielding test into keep_running() from its caller.
scx_lavd: core compaction for low power consumption
When system-wide CPU utilization is low, it is very likely that all the
CPUs are running at very low utilization. That means all CPUs run at a
low clock frequency thanks to dynamic frequency scaling and very
frequently enter and exit C-states. That results in low performance
(i.e., low clock frequency) and high power consumption (i.e., frequent
P-/C-state transitions).
The idea of *core compaction* is to use fewer CPUs when system-wide CPU
utilization is low. The chosen cores (called "active cores") will run at
higher utilization and higher clock frequency, and the rest of the cores
(called "idle cores") will stay in a C-state for a much longer duration.
Thus, core compaction can achieve higher performance with lower power
consumption.
One potential problem of core compaction is latency spikes when all the
active cores are overloaded. A few techniques are incorporated to solve
this problem.
1) Limit each active core's utilization to below a certain threshold
(say 50%).
2) Do not use core compaction when the system-wide utilization reaches a
moderate level (say 50%).
3) Do not enforce core compaction for kernel and pinned user-space
tasks, since they are manually optimized for performance.
In my experiments, under a wide range of system-wide CPU utilization
(5%-80%), core compaction reduces power consumption by 7-30% without
sacrificing average or 99th percentile tail latency.
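A simplified sketch of the core idea, i.e. choosing how many cores stay
active from the measured system-wide utilization (thresholds and names
are illustrative, not scx_lavd's actual code):

  /* Keep enough cores active that each lands below the per-core cap,
   * and give up on compaction once system-wide utilization is moderate.
   * @sys_util_pct: average system-wide utilization in percent. */
  static u32 nr_active_cores(u32 nr_cpus, u32 sys_util_pct)
  {
          const u32 per_core_cap_pct = 50;   /* target per-core util  */
          const u32 compaction_off_pct = 50; /* stop compacting above */
          u32 nr_active;

          if (sys_util_pct >= compaction_off_pct)
                  return nr_cpus;

          /* spread total demand so each active core stays under the cap */
          nr_active = (sys_util_pct * nr_cpus + per_core_cap_pct - 1) /
                      per_core_cap_pct;

          if (nr_active < 1)
                  nr_active = 1;
          if (nr_active > nr_cpus)
                  nr_active = nr_cpus;
          return nr_active;
  }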
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Currently, when preempting, searching for the candidate CPU always starts
from the RR preemption cursor. Let's first try the previous CPU the
preempting task was on as that may have some locality benefits.
When a task is being enqueued outside wakeup path, ops.select_cpu() isn't
called, so we can end up in a situation where a newly enqueued task keeps
waiting in one of the DSQs while there are idle CPUs. Factor out idle CPU
selection path into pick_idle_cpu() and call it from the enqueue path in
such cases. This problem is shared across schedulers and likely needs a more
generic solution in the future.
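A rough sketch of the shape of this change (simplified; it assumes the
factored-out pick_idle_cpu() helper and a generic enqueue callback, so
the names and the fallback DSQ are illustrative):

  void BPF_STRUCT_OPS(sched_enqueue, struct task_struct *p, u64 enq_flags)
  {
          /* Outside the wakeup path ops.select_cpu() was never called,
           * so look for an idle CPU here and dispatch directly to it
           * instead of letting the task sit in a DSQ. */
          if (!(enq_flags & SCX_ENQ_WAKEUP)) {
                  s32 cpu = pick_idle_cpu(p, scx_bpf_task_cpu(p));

                  if (cpu >= 0) {
                          scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | cpu,
                                           SCX_SLICE_DFL, enq_flags);
                          scx_bpf_kick_cpu(cpu, 0);
                          return;
                  }
          }

          /* fall back to the normal DSQ enqueue path */
          scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
  }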
yield(2) currently gives up the entire slice. Add a "yield_ignore" layer
parameter which can modulate the magnitude of yielding. When it is 1.0,
yields are completely ignored. At 0.5, only half of the full slice is
given up, and so on.
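Concretely, the amount of slice given up scales with
(1.0 - yield_ignore); a sketch of the arithmetic (illustrative only,
e.g. as computed on the userspace side and handed to BPF in
nanoseconds):

  /* Return the slice left to the task after a yield: give up
   * (1.0 - yield_ignore) worth of the full slice, clamped at zero. */
  static unsigned long long slice_after_yield(unsigned long long remaining_ns,
                                              unsigned long long full_slice_ns,
                                              double yield_ignore)
  {
          unsigned long long give_up_ns =
                  (unsigned long long)((1.0 - yield_ignore) * full_slice_ns);

          return remaining_ns > give_up_ns ? remaining_ns - give_up_ns : 0;
  }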
Currently, a task which yields is treated the same as a task which has
run out of its slice. As the budget charged to a task is calculated from
wall clock time, a repeatedly yielding task can stay at the top of the
queue for quite a while, hogging the CPU and spiking the number of
scheduling events.
Let's add explicit yield support. A yielding task is now always charged
the full slice and is not allowed to keep running on the same CPU.
The keep_running path relies on the implicit last task enqueue which makes
the statistics a bit difficult to track. Let's make the enqueue path
comprehensive:
- Set SCX_OPS_ENQ_LAST and handle the last runnable task enqueue explicitly.
- Implement layered_cpu_release() to re-enqueue tasks from a CPU preempted
by a higher-priority sched class and handle the re-enqueued tasks
explicitly in layered_enqueue(), as sketched after this list.
- Add more statistics to track all enqueue operations.
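A sketch of what the ops.cpu_release() part might look like, assuming
the scx_bpf_reenqueue_local() kfunc (statistics bookkeeping omitted):

  /* When a higher-priority sched class preempts the CPU, push the tasks
   * sitting in its local DSQ back through ops.enqueue() so they can be
   * placed, and counted, explicitly rather than left stranded. */
  void BPF_STRUCT_OPS(layered_cpu_release, s32 cpu,
                      struct scx_cpu_release_args *args)
  {
          scx_bpf_reenqueue_local();
  }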
When a task exhausts its slice, layered currently doesn't make any effort to
keep it on the same CPU. It dispatches the next task to run and then
enqueues the running one. This leads to suboptimal behaviors; e.g. when
this happens to a task in a preempting layer, the task will most likely
find an idle CPU or a task to preempt and then migrate there, causing a
completely unnecessary migration.
This patch makes layered_dispatch() test whether the current task should
keep running on the CPU and, if so, skip dispatching so that the task
keeps running. This
behavior depends on the implicit local DSQ enqueue mechanism which triggers
when there are no other tasks to run.
- scx_utils: Replace kfunc_exists() with ksym_exists() which doesn't care
about the type of the symbol.
- scx_layered: Fix load failure on kernels >= v6.10-rc due to the
scheduler_tick() -> sched_tick rename. Attach the tick fentry function
to either scheduler_tick() or sched_tick(), as sketched below.
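A hedged sketch of one way to select the attach point with libbpf from
C (scx_layered's userspace is Rust, so this is illustrative only; the
skeleton, program names, and the ksym_exists() stand-in are
hypothetical):

  /* Define fentry programs for both symbols in BPF and, before loading,
   * disable the one whose target doesn't exist on the running kernel. */
  static void pick_tick_fentry(struct sched_bpf *skel)
  {
          if (ksym_exists("sched_tick"))
                  bpf_program__set_autoload(skel->progs.scheduler_tick_fentry,
                                            false);
          else
                  bpf_program__set_autoload(skel->progs.sched_tick_fentry,
                                            false);
  }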