Commit Graph

539 Commits

Author SHA1 Message Date
Changwoo Min
5e194330f0 scx_lavd: consider task's wakeup and vruntime (starvation) more aggressively
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-08-02 12:25:29 +09:00
Changwoo Min
fc0ffeb45b scx_lavd: print the overall status of a scheduled task
L or R: Latency-critical, Regular
H or I: performance-Hungry, performance-Insensitive
B or T: Big, liTtle
E or G: Eligible, Greedy
P or N: Preemption, Not

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-31 19:00:35 +09:00
Changwoo Min
22d4b13e8e scx_lavd: classify CPUs into BIG and little ones based on their average capacity
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-31 19:00:35 +09:00
Changwoo Min
0ad2f30fa8
Merge pull request #460 from multics69/lavd-misc
scx_lavd: misc updates
2024-07-31 08:55:04 +09:00
Daniel Hodges
c224154866
Merge pull request #459 from hodgesds/layer-cpu-counter
scx_layered: Add per cpu layer iterator offset
2024-07-30 16:00:37 -04:00
Daniel Hodges
4f12bebaa5 scx_layered: Add per cpu layer iterator offset
Add a per-CPU counter offset for round-robin iteration over layers.
This makes selection from different layers more fair.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-07-30 10:44:41 -07:00
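
Purely as an illustration of the round-robin offset described in the commit above, here is a minimal, self-contained C sketch; the counter array, layer count, and function names are hypothetical and not the actual scx_layered code:

    #include <stdio.h>

    #define NR_LAYERS 4
    #define NR_CPUS   64

    /* Hypothetical per-CPU rotation counter; in BPF this would live in
     * per-CPU storage rather than a global array. */
    static unsigned int layer_rr_offset[NR_CPUS];

    /* Start the layer iteration at a per-CPU offset that advances on every
     * call, so no single layer is always examined first. */
    static int pick_layer(int cpu)
    {
        unsigned int start = layer_rr_offset[cpu]++ % NR_LAYERS;

        for (int i = 0; i < NR_LAYERS; i++) {
            int layer = (start + i) % NR_LAYERS;
            /* ...check whether this layer has a runnable task... */
            printf("cpu %d probes layer %d\n", cpu, layer);
        }
        return -1; /* nothing found in this sketch */
    }

    int main(void)
    {
        pick_layer(0);
        pick_layer(0); /* the second call starts from the next layer */
        return 0;
    }
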
Changwoo Min
9b455cf010
Merge pull request #458 from sched-ext/lavd-fix-cpu-ctx-size
scx_lavd: set correct size for cpu_ctx_stor
2024-07-31 00:39:13 +09:00
Changwoo Min
6136cbee65 scx_lavd: tuning the time slice and preemption margins
Tune the time slice under high load and make the kick/tick margins for
preemption more conservative. In particular, aggressive IPI-based
preemption (kick) causes performance instability.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-31 00:30:59 +09:00
Changwoo Min
35b0d9f3c2 scx_lavd: improve starvation factor equation
Instead of using the coarse-grained log(), directly use the ratio of a
task's service time. The virtual deadline equation is also updated to
reflect this change.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-31 00:27:17 +09:00
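
As a rough sketch of the ratio-based idea in the commit above (the fixed-point scaling and all names are assumptions, not the actual scx_lavd equation):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical ratio-based starvation factor: a task that has received
     * less service time than the average gets a factor > 1000 (fixed point,
     * x1000), pulling its virtual deadline earlier; a task above the average
     * gets a factor < 1000. */
    static uint64_t starvation_factor(uint64_t task_svc_time, uint64_t avg_svc_time)
    {
        if (task_svc_time == 0)
            task_svc_time = 1; /* avoid division by zero */
        return avg_svc_time * 1000 / task_svc_time;
    }

    int main(void)
    {
        printf("starved task:  %lu\n", (unsigned long)starvation_factor(100, 400));
        printf("well-fed task: %lu\n", (unsigned long)starvation_factor(800, 400));
        return 0;
    }
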
Changwoo Min
f9657a549f scx_lavd: fix bpf verification error in old kernel versions
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-31 00:22:43 +09:00
Changwoo Min
d2615b4975 scx_lavd: fix warnings from the rust code
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-31 00:21:32 +09:00
Andrea Righi
2015faa745 scx_lavd: set correct size for cpu_ctx_stor
The max_entries parameter in BPF_MAP_TYPE_PERCPU_ARRAY defines the
number of values per CPU and for cpu_ctx_stor we only need one item: the
CPU context.

Set max_entries to 1 to avoid allocating unnecessary memory and slightly
reduce the memory footprint.

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
2024-07-30 09:32:55 +02:00
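
For reference, a BTF-defined per-CPU array with a single entry looks roughly like this; the struct contents are placeholders, and only the map type and max_entries reflect the commit above:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    /* Placeholder CPU context; the real scx_lavd struct has more fields. */
    struct cpu_ctx {
        u64 dummy_counter;
    };

    /* BPF_MAP_TYPE_PERCPU_ARRAY: max_entries is the number of values kept
     * per CPU, so a single per-CPU context only needs max_entries = 1. */
    struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __uint(max_entries, 1);
        __type(key, u32);
        __type(value, struct cpu_ctx);
    } cpu_ctx_stor SEC(".maps");
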
Changwoo Min
643edb5431
Merge pull request #457 from multics69/lavd-amp-v2
scx_lavd: support two-level scheduling for heavy-loaded cases (like bpfland)
2024-07-30 10:39:06 +09:00
Changwoo Min
b91c1e4759 scx_lavd: add more comments on no_2_level_scheduling implementation
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-29 12:22:28 +09:00
Changwoo Min
f71fff9bbe scx_lavd: print a warning message when system does not provide a proper freq info
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-28 15:53:02 +09:00
Changwoo Min
4449d8e31c scx_lavd: incorporate a task's static priority in calculating its latency criticality
That's because static (nice) priority is a strong hint to distinguish
latency-critical tasks.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-28 15:41:43 +09:00
Changwoo Min
221f1fe12a scx_lavd: further prioritize producers over consumers
That is because many latency-critical tasks are producers.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-28 15:38:54 +09:00
Changwoo Min
7106e8cdca scx_lavd: support two-level scheduling for heavy-loaded cases
We introduce two-level scheduling similar to scx_bpfland. The two-level
scheduling consists of two DSQs: 1) latency-critical run queue and 2)
regular run queue. The scheduler prioritizes scheduling tasks on the
latency-critical queue but makes its best effort to schedule tasks on
the regular queue. The scheduler could be more resilient under heavy
load by segregating regular, non-latency-critical tasks from
latency-critical tasks.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-28 15:33:17 +09:00
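
A very rough sketch of the dispatch side of the two-level idea above, using the sched_ext consume API; the DSQ ids, op name, and overall structure are assumptions for illustration, not the actual scx_lavd code:

    #include <scx/common.bpf.h>

    /* Hypothetical DSQ ids for the two run queues; the real ids differ. */
    enum {
        SKETCH_LC_DSQ  = 0, /* latency-critical tasks */
        SKETCH_REG_DSQ = 1, /* regular tasks */
    };

    void BPF_STRUCT_OPS(sketch_dispatch, s32 cpu, struct task_struct *prev)
    {
        /* Prefer the latency-critical queue, but always fall back to the
         * regular queue so non-latency-critical tasks keep making progress. */
        if (scx_bpf_consume(SKETCH_LC_DSQ))
            return;

        scx_bpf_consume(SKETCH_REG_DSQ);
    }
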
Changwoo Min
9236c3e57c scx_lavd: increase the targeted latency for heavy loaded cases
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-28 15:30:01 +09:00
Changwoo Min
230512208d scx_lavd: fix div by zero error in some installations
The max frequency information from the topology (sysfs) is not always
reliable. In some installations, it returns zero for all CPUs. In this
case, consider all CPUs to have the same capacity (1024), hoping the
kernel can provide more precise information.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-28 12:47:00 +09:00
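
A small sketch of the fallback described above, assuming a hypothetical per-CPU capacity array: when sysfs reports a zero max frequency for every CPU, all capacities are set to 1024:

    #include <stdint.h>

    #define NR_CPUS 8

    /* Derive per-CPU capacities from the sysfs max frequencies. If the
     * reported max frequency is zero for every CPU, fall back to a uniform
     * capacity of 1024 to avoid dividing by zero later on. */
    static void init_cpu_capacity(const uint64_t max_freq[NR_CPUS],
                                  uint64_t capacity[NR_CPUS])
    {
        uint64_t top = 0;

        for (int i = 0; i < NR_CPUS; i++)
            if (max_freq[i] > top)
                top = max_freq[i];

        for (int i = 0; i < NR_CPUS; i++)
            capacity[i] = top ? max_freq[i] * 1024 / top : 1024;
    }
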
Changwoo Min
59e54f4972 scx_lavd: print how to disable logging
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-28 12:31:51 +09:00
Changwoo Min
df1108ec6c scx_lavd: segregate starvation factor from the latency criticality (refactoring)
Latency criticality is a task's inherent property, whereas the starvation
factor is a dynamic status reflecting the urgency of scheduling. Hence,
segregate the starvation factor out. Also, clean up the related
unnecessary arguments and struct fields.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-27 17:25:39 +09:00
Changwoo Min
d4a5a629ff
Merge pull request #452 from multics69/lavd-core-compaction-v2
scx_lavd: initial support for AMP (asymmetric multi-processor) architecture
2024-07-27 16:22:27 +09:00
Changwoo Min
eeea847697 scx_lavd: adjust time slice based on CPU's capacity
When a task is running on a more performant core, the scheduler gives it
a longer time slice; on a less performant core, a shorter time slice is
assigned. The longer time slice helps boost the clock frequency of a
performant core, and the shorter time slice gives the performant cores
more chances of being utilized.

Regarding CPU capacity, we first check whether the kernel-provided
capacity values are trustworthy. If not (i.e., they are all the same), we
rely on the user-provided value, which is based on each CPU's maximum
clock frequency.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-26 18:46:21 +09:00
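
A minimal sketch of slice scaling by CPU capacity (normalized to 1024), as described in the commit above; the base slice value and function name are illustrative only:

    #include <stdint.h>
    #include <stdio.h>

    /* Scale a base time slice by the capacity of the CPU the task runs on:
     * more capable cores (capacity closer to 1024) get a longer slice,
     * less capable cores get a shorter one. */
    static uint64_t scale_slice_ns(uint64_t base_slice_ns, uint64_t cpu_capacity)
    {
        return base_slice_ns * cpu_capacity / 1024;
    }

    int main(void)
    {
        printf("big:    %lu ns\n", (unsigned long)scale_slice_ns(5000000, 1024));
        printf("little: %lu ns\n", (unsigned long)scale_slice_ns(5000000, 512));
        return 0;
    }
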
Changwoo Min
e7b6ed1838 scx_lavd: add --prefer-smt-core option
When the --prefer-smt-core option is on, core compaction prefers to
utilize a core's hyper-twin first before utilizing other physical CPUs.
By default, the option is off.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-26 18:46:21 +09:00
Changwoo Min
19e337cd9b scx_lavd: make the core compaction AMP-aware
Previously, the core compaction assumed that each core's capacity was
the same. Now, we additionally consider each core's max clock frequency.
So, it always tries to use the higher-frequency cores first.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-26 18:46:21 +09:00
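
As an illustration of the ordering described above, a sketch that sorts compaction candidates so higher-capacity (higher-frequency) cores are used first; struct and function names are hypothetical:

    #include <stdint.h>
    #include <stdlib.h>

    struct core_info {
        int      id;
        uint64_t capacity; /* derived from the core's max clock frequency */
    };

    /* Sort in descending capacity order, breaking ties by core id. */
    static int cmp_capacity_desc(const void *a, const void *b)
    {
        const struct core_info *x = a, *y = b;

        if (x->capacity != y->capacity)
            return x->capacity < y->capacity ? 1 : -1;
        return x->id - y->id;
    }

    /* Order the compaction candidate list so the highest-capacity cores are
     * considered (and thus utilized) first. */
    static void order_cores_for_compaction(struct core_info *cores, size_t n)
    {
        qsort(cores, n, sizeof(*cores), cmp_capacity_desc);
    }
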
Changwoo Min
dbb3957eb1 scx_lavd: add a missing no_freq_scaling option check
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-26 18:46:21 +09:00
Changwoo Min
90b57a3fd7 scx_lavd: put a pinned kernel task to an overflow set
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-26 18:46:21 +09:00
Changwoo Min
e76bf999df scx_lavd: clean up constants (no functional changes)
Remove unused constants and rename outdated constants to proper names
(LAVD_TC_* to LAVC_CC_* and LAVD_ELIGIBLE_DSQ to LAVD_GLOBAL_DSQ).

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-26 18:46:21 +09:00
Andrea Righi
19854f1535 scx_bpfland: allow to specify negative values with --slice-us-lag
Using negative values with --slice-us-lag can be useful to make
performance more consistent and prioritize newly created tasks over
running tasks.

Therefore, allow specifying negative values from the command line and
also update the documentation of this option.

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
2024-07-26 09:10:18 +02:00
David Vernet
5401876430
Revert "rusty: Rework deadline as a signed sum" 2024-07-25 14:50:45 -05:00
David Vernet
09536aa15d
Merge pull request #309 from sched-ext/rusty_improved_dl
rusty: Rework deadline as a signed sum
2024-07-25 13:44:54 -05:00
David Vernet
c1ad602ce5
rusty: Transfer latency priority between CPU-intensive and interactive tasks
In some scenarios, a CPU-intensive task may be on the critical path for
interactive workloads. For example, you may have a game with CPU-intensive
tasks that are crunching the logic for the game, and that's required for the
game to proceed without being choppy.

To support such workflows, this change adds logic to allow a non-interactive
task to inherit the lower (i.e. stronger) latency priority of another task if
it wakes or is woken by that task.

Signed-off-by: David Vernet <void@manifault.com>
2024-07-25 11:55:40 -05:00
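
A sketch of the inheritance rule described in the commit above: on a wakeup, the woken task may take the stronger (numerically lower) latency priority of its waker; field and function names are hypothetical, and the symmetric waker-side case is omitted:

    #include <stdint.h>

    struct task_stats {
        int32_t lat_prio; /* lower value == stronger latency priority */
    };

    /* When @waker wakes @wakee, let the woken task run with the stronger of
     * the two latency priorities, so a CPU-bound task sitting on the critical
     * path of an interactive chain is not penalized. */
    static void inherit_lat_prio(struct task_stats *wakee,
                                 const struct task_stats *waker)
    {
        if (waker->lat_prio < wakee->lat_prio)
            wakee->lat_prio = waker->lat_prio;
    }
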
David Vernet
933ea9baa1
rusty: Rework deadline as a signed sum
Currently, a task's deadline is computed as its vtime + a scaled function of
its average runtime (with its deadline being scaled down if it's more
interactive). This makes sense intuitively, as we do want an interactive task
to have an earlier deadline, but it also has some flaws.

For one thing, we're currently ignoring duty cycle when determining a task's
deadline. This has a few implications. Firstly, because we reward tasks with
higher waker and blocked frequencies due to considering them to be part of a
work chain, we implicitly penalize tasks that rarely ever use the CPU because
those frequencies are low. While those tasks are likely not part of a work
chain, they also should get an interactivity boost just by pure virtue of not
using the CPU very often. This should in theory be addressed by vruntime, but
because we cap the amount of vtime that a task can accumulate to one slice, it
may not be adequately reflected after a task runs for the first time.

Another problem is that we're minimizing a task's deadline if it's interactive,
but we're also not really penalizing a task that's a super CPU hog by
increasing its deadline. We sort of do a bit by applying a higher niceness
which gives it a higher deadline for a lower weight, but it's somewhat minimal
considering that we're using niceness, and that the best an interactive task
can do is minimize its deadline to near zero relative to its vtime.

What we really want to do is "negatively" scale an interactive task's deadline
with the same magnitude as we "positively" scale a CPU-hogging task's deadline.
To do this, we make two major changes to how we compute deadline:

1. Instead of using niceness, we now instead use our own straightforward
   scaling factor. This was chosen arbitrarily to be a scaling by 1000, but we
   can and should improve this in the future.

2. We now create a _signed_ linear latency priority factor as a sum of the
   three following inputs:
   - Work-chain factor (log_2 of product of blocked freq and waker freq)
   - Inverse duty cycle factor (log_2 of the inverse of a task's duty cycle --
     higher duty cycle means lower factor)
   - Average runtime factor (Higher avg runtime means higher average runtime
     factor)

We then compute the latency priority as:

	lat_prio := Average runtime factor - (work-chain factor + duty cycle factor)

This gives us a signed value that can be negative. With this, we can compute a
non-negative weight value by calculating a weight from the absolute value of
lat_prio, and use this to scale slice_ns. If lat_prio is negative we calculate
a task's deadline as its vtime MINUS its scaled slice_ns, and if it's positive,
it's the task's vtime PLUS scaled slice_ns.

This ends up working well because you get a higher weight both for highly
interactive tasks, and highly CPU-hogging / non-interactive tasks, which lets
you scale a task's deadline "more negatively" for interactive tasks, and "more
positively" for the CPU hogs.

With this change, we get a significant improvement in FPS. On a 7950X, if I run
the following workload:

	$ stress-ng -c $((8 * $(nproc)))

1. I get 60 FPS when playing Stellaris (while time is progressing at max
   speed), whereas EEVDF gets 6-7 FPS.

2. I get ~15-40 FPS while playing Civ6, whereas EEVDF seems to get < 1 FPS. The
   Civ6 benchmark doesn't even start after over 4 minutes in the initial frame
   with EEVDF, but gets us 13s / turn with rusty.

3. It seems that EEVDF has improved with Terraria in v6.9. It was able to
   maintain ~30-55 FPS, as opposed to the ~5-10FPS we've seen in the past.
   rusty is still able to maintain a solid 60-62FPS consistently with no
   problem, however.
2024-07-25 11:55:03 -05:00
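
A rough, self-contained sketch of the signed-sum computation described above; the log_2 factors and the scaling by 1000 follow the message, while the weight derivation is simplified to |lat_prio| and all names and sample values are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Integer log2 helper used for the factors below. */
    static uint32_t ilog2_u64(uint64_t v)
    {
        uint32_t r = 0;

        while (v >>= 1)
            r++;
        return r;
    }

    /* lat_prio := avg-runtime factor - (work-chain factor +
     * inverse-duty-cycle factor). Negative means "interactive",
     * positive means "CPU hog". */
    static int64_t calc_lat_prio(uint64_t avg_runtime, uint64_t waker_freq,
                                 uint64_t blocked_freq, uint64_t inv_duty_cycle)
    {
        uint64_t chain = waker_freq * blocked_freq;
        int64_t runtime_factor = ilog2_u64(avg_runtime ? avg_runtime : 1);
        int64_t chain_factor = ilog2_u64(chain ? chain : 1);
        int64_t duty_factor = ilog2_u64(inv_duty_cycle ? inv_duty_cycle : 1);

        return runtime_factor - (chain_factor + duty_factor);
    }

    /* Deadline is vtime minus the scaled slice for negative lat_prio, and
     * vtime plus the scaled slice for positive lat_prio. */
    static uint64_t calc_deadline(uint64_t vtime, uint64_t slice_ns, int64_t lat_prio)
    {
        uint64_t abs_prio = lat_prio < 0 ? -lat_prio : lat_prio;
        uint64_t scaled = slice_ns * abs_prio / 1000;

        if (lat_prio < 0)
            return vtime > scaled ? vtime - scaled : 0;
        return vtime + scaled;
    }

    int main(void)
    {
        int64_t lp = calc_lat_prio(1000, 4096, 4096, 2);

        printf("lat_prio=%lld deadline=%llu\n", (long long)lp,
               (unsigned long long)calc_deadline(100000, 5000000, lp));
        return 0;
    }
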
Daniel Hodges
4c3fd6cd9b scx_layered: Rename UserId and GroupId
TL;DR: rename UserId and GroupId to UIDEquals and GIDEquals.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-07-24 15:09:08 -07:00
Daniel Hodges
55f6d68eef scx_layered: Add user and group layers
Add a layer match based on either the effective user id or the effective
group id. This allows for creating layers for individual users or
groups.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-07-24 15:09:08 -07:00
Daniel Hodges
4042fc42d7
Merge pull request #446 from hodgesds/layered-topo
scx_layered: Add topology awareness for NUMA nodes and LLCs
2024-07-24 18:06:43 -04:00
Daniel Hodges
2803f9c127 scx_layered: Fix formatting issues
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-07-24 14:39:02 -07:00
Daniel Hodges
0814abf0b8 scx_layered: Add node topology awareness
Add NUMA node topology awareness to scx_layered. This borrows some of
the NUMA handling from scx_rusty and allows layers to set a node mask.
Different layer kinds will use the node mask differently.

Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
2024-07-24 09:53:48 -07:00
Daniel Müller
98af514972 scx_rusty: Simplify LoadBalancer::populate_tasks_by_load()
Simplify LoadBalancer::populate_tasks_by_load() by cutting out the
heap-allocation bits and moving mutable accesses in front of immutable
ones. Because multiple immutable accesses (between bss and rodata) do
not conflict, the intermediate PID storage is no longer needed.

Signed-off-by: Daniel Müller <deso@posteo.net>
2024-07-23 13:59:26 -07:00
Andrea Righi
46ddca6bd5 scx_bpfland: report task time slice to stdout
Periodically report to stdout samples of the effective time slice
applied to tasks.

While one could determine this metric by examining the max slice_ns and
nr_waiting metrics, directly reporting it to stdout allows users to
quickly identify what is happening and provides a clearer overview of
the scheduling behavior.

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
2024-07-22 22:01:49 +02:00
Andrea Righi
c1d93d2a00 scx_bpfland: drop kthread dispatches metric
Dispatching per-CPU kthreads directly is disabled by default, so reporting
this metric can generate some confusion (since it is always 0). Even if
local kthread dispatches are enabled, they should still be considered
regular direct dispatches (there is no difference in practice).

Therefore, merge direct kthread dispatches into direct dispatches and
drop the separate nr_kthread_dispatches metric.

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
2024-07-22 22:01:49 +02:00
Andrea Righi
a5f1d6b595 scx_bpfland: show average amount of tasks waiting to be dispatched
Periodically report the average number of tasks sitting in the priority
and shared DSQs.

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
2024-07-22 22:01:45 +02:00
Andrea Righi
5908a985bc scx_bpfland: adjust task time slice based on the amount of waiting tasks
Scale the task's time slice based on the average number of tasks that
are currently waiting to be dispatched.

Use a moving average of the number of waiting tasks to smooth out
potential spikes caused by temporary bursts of tasks piling up in the
wait queues.

This was initially modeled in scx_rustland, and it now seems to work
pretty well in scx_bpfland too.

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
2024-07-22 21:53:25 +02:00
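
A sketch of the mechanism described above: a simple moving average of the number of waiting tasks, used to shrink the per-task time slice as the queues grow; constants and names are illustrative, not the actual scx_bpfland values:

    #include <stdint.h>
    #include <stdio.h>

    #define SLICE_NS_MAX 5000000ULL /* hypothetical maximum slice */
    #define SLICE_NS_MIN  500000ULL /* hypothetical minimum slice */

    static uint64_t nr_waiting_avg; /* moving average of waiting tasks */

    /* Update the moving average: avg = (avg + sample) / 2 smooths out short
     * bursts of tasks piling up in the wait queues. */
    static void update_nr_waiting(uint64_t nr_waiting_now)
    {
        nr_waiting_avg = (nr_waiting_avg + nr_waiting_now) / 2;
    }

    /* Scale the slice inversely with the number of waiting tasks, clamped to
     * a minimum so every task always gets some runtime. */
    static uint64_t task_slice_ns(void)
    {
        uint64_t slice = SLICE_NS_MAX / (nr_waiting_avg + 1);

        return slice > SLICE_NS_MIN ? slice : SLICE_NS_MIN;
    }

    int main(void)
    {
        update_nr_waiting(8);
        update_nr_waiting(2);
        printf("slice: %llu ns\n", (unsigned long long)task_slice_ns());
        return 0;
    }
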
Changwoo Min
af75d147c8
Merge pull request #443 from multics69/lavd-vtime
scx_lavd: overhaul the virtual deadline algorithm
2024-07-21 18:00:57 +09:00
Changwoo Min
a9aab6b229 scx_lavd: fix typo
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-21 17:58:44 +09:00
Changwoo Min
add96f0e18 scx_lavd: do not maintain ineligible runnable tasks separately
With all the other optimizations and tunings, it turns out that maintaining
two runqueues does more harm than good.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-20 17:49:12 +09:00
Changwoo Min
827187d213 scx_lavd: adjust ineligible duration according to task's lat_cri
Further de-penalize above-average latency-critical tasks and further
penalize below-average latency-critical tasks in terms of ineligibility
duration.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-20 17:37:27 +09:00
Changwoo Min
c653622ed9 scx_lavd: add LAVD_VDL_LOOSENESS_FT in calculating virtual deadline
LAVD_VDL_LOOSENESS_FT represents how loose the deadline is; a smaller
value means a tighter deadline. While it is unlikely to be tuned, let's
keep it as a tunable for now.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-20 12:00:50 +09:00
Changwoo Min
e94070d5ca scx_lavd: remove LAVD_BOOST_*
These are no longer necessary after directly using latency criticality.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-07-20 11:53:20 +09:00