The max frequency information from the topology (sysfs) is not always
reliable. In some installations, it reports zero for all CPUs. In this
case, let's just consider all CPUs to have the same capacity (1024),
hoping the kernel can give more precise information.
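A minimal sketch of the fallback, with made-up variable names:

    /* If sysfs reports a zero max frequency for every CPU, assume a
     * uniform capacity of 1024 for all of them. */
    bool all_zero = true;

    for (int cpu = 0; cpu < nr_cpus; cpu++) {
        if (max_freq[cpu] > 0) {
            all_zero = false;
            break;
        }
    }

    if (all_zero) {
        for (int cpu = 0; cpu < nr_cpus; cpu++)
            capacity[cpu] = 1024;
    }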
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When a task is running on a more performant core, the scheduler will give
it a longer time slice. On the other hand, on a less performant core, a
shorter time slice will be assigned. The longer time slice helps boost
the clock frequency on a performant core. Also, the shorter time slice
gives the performant cores more chances to be utilized.
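Roughly, the slice is scaled by the core's capacity (a sketch; the real
computation has more inputs):

    /* 1024 is the capacity of the most performant core. */
    static u64 scale_slice(u64 base_slice_ns, u64 cpu_capacity)
    {
        return base_slice_ns * cpu_capacity / 1024;
    }

So a task on a max-capacity core gets the full slice, while one on a
half-capacity core gets half of it.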
Regarding the CPU capacity, we first check whether the kernel-provided
capacity values are trustworthy. If not (i.e., they are all the same), we
rely on the value provided from user space, which is based on each CPU's
maximum clock frequency.
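The check is conceptually as follows (a sketch; variable names are made
up):

    /* If the kernel reports the same capacity for every CPU, fall back
     * to capacities derived from each CPU's max clock frequency. */
    bool all_same = true;

    for (int cpu = 1; cpu < nr_cpus; cpu++) {
        if (kern_capacity[cpu] != kern_capacity[0]) {
            all_same = false;
            break;
        }
    }

    for (int cpu = 0; cpu < nr_cpus; cpu++)
        capacity[cpu] = all_same ?
            1024 * max_freq[cpu] / highest_max_freq :
            kern_capacity[cpu];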
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When the --prefer-smt-core option is on, the core compaction prefers to
utilize a core's hyper-twin first before utilizing the other physical
cores. By default, the option is off.
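Conceptually (a sketch with hypothetical helpers):

    /* Prefer the idle hyper-twin of an already active core before
     * waking up another physical core. */
    static s32 pick_compaction_cpu(bool prefer_smt_core)
    {
        if (prefer_smt_core) {
            s32 cpu = idle_sibling_of_active_core(); /* hypothetical */
            if (cpu >= 0)
                return cpu;
        }
        return idle_physical_core(); /* hypothetical */
    }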
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Previously, the core compaction assumed that each core's capacity was
the same. Now, we additionally consider each core's max clock frequency.
So, it always tries to use the higher-frequency cores first.
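For example, the compaction order can be built by sorting cores by max
frequency (an illustrative userspace sketch, not the actual code):

    /* Order cores for compaction: higher max frequency first. */
    static int cmp_core(const void *a, const void *b)
    {
        const struct core *x = a, *y = b;

        if (x->max_freq != y->max_freq)
            return x->max_freq > y->max_freq ? -1 : 1;
        return 0;
    }

    qsort(cores, nr_cores, sizeof(*cores), cmp_core);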
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Remove unused constants and rename outdated constants to proper names
(LAVD_TC_* to LAVD_CC_* and LAVD_ELIGIBLE_DSQ to LAVD_GLOBAL_DSQ).
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Using negative values with --slice-us-lag can be useful to make
performance more consistent and to prioritize newly created tasks over
already running tasks.
Therefore, allow specifying negative values from the command line and
also update the documentation of this option.
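For example (an illustrative invocation, using `=` so that the leading
dash is not parsed as another option):

$ scx_rustland --slice-us-lag=-5000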
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
In some scenarios, a CPU-intensive task may be on the critical path for
interactive workloads. For example, a game may have CPU-intensive tasks
crunching its core logic, and that work is required for the game to
proceed without being choppy.
To support such workflows, this change adds logic to allow a non-interactive
task to inherit the lower (i.e. stronger) latency priority of another task if
it wakes or is woken by that task.
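The inheritance itself is simple (a sketch; field names are made up):

    /* On a wakeup, let the wakee run with the stronger (numerically
     * lower) latency priority of the pair. */
    static void maybe_inherit_lat_prio(struct task_ctx *waker,
                                       struct task_ctx *wakee)
    {
        if (waker->lat_prio < wakee->lat_prio)
            wakee->lat_prio = waker->lat_prio;
    }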
Signed-off-by: David Vernet <void@manifault.com>
Currently, a task's deadline is computed as its vtime + a scaled function of
its average runtime (with its deadline being scaled down if it's more
interactive). This makes sense intuitively, as we do want an interactive task
to have an earlier deadline, but it also has some flaws.
For one thing, we're currently ignoring duty cycle when determining a task's
deadline. This has a few implications. Firstly, because we reward tasks with
higher waker and blocked frequencies due to considering them to be part of a
work chain, we implicitly penalize tasks that rarely ever use the CPU because
those frequencies are low. While those tasks are likely not part of a work
chain, they also should get an interactivity boost just by pure virtue of not
using the CPU very often. This should in theory be addressed by vruntime, but
because we cap the amount of vtime that a task can accumulate to one slice, it
may not be adequately reflected after a task runs for the first time.
Another problem is that we're minimizing a task's deadline if it's interactive,
but we're also not really penalizing a task that's a super CPU hog by
increasing its deadline. We sort of do a bit by applying a higher niceness
which gives it a higher deadline for a lower weight, but it's somewhat minimal
considering that we're using niceness, and that the best an interactive task
can do is minimize its deadline to near zero relative to its vtime.
What we really want to do is "negatively" scale an interactive task's deadline
with the same magnitude as we "positively" scale a CPU-hogging task's deadline.
To do this, we make two major changes to how we compute deadline:
1. Instead of using niceness, we now use our own straightforward
scaling factor. This was chosen arbitrarily to be a scaling by 1000, but we
can and should improve this in the future.
2. We now create a _signed_ linear latency priority factor as a sum of the
three following inputs:
- Work-chain factor (log_2 of product of blocked freq and waker freq)
- Inverse duty cycle factor (log_2 of the inverse of a task's duty cycle --
higher duty cycle means lower factor)
- Average runtime factor (Higher avg runtime means higher average runtime
factor)
We then compute the latency priority as:
lat_prio := Average runtime factor - (work-chain factor + duty cycle factor)
This gives us a signed value that can be negative. With this, we can compute a
non-negative weight value by calculating a weight from the absolute value of
lat_prio, and use this to scale slice_ns. If lat_prio is negative we calculate
a task's deadline as its vtime MINUS its scaled slice_ns, and if it's positive,
it's the task's vtime PLUS scaled slice_ns.
This ends up working well because you get a higher weight both for highly
interactive tasks, and highly CPU-hogging / non-interactive tasks, which lets
you scale a task's deadline "more negatively" for interactive tasks, and "more
positively" for the CPU hogs.
With this change, we get a significant improvement in FPS. On a 7950X, if I run
the following workload:
$ stress-ng -c $((8 * $(nproc)))
1. I get 60 FPS when playing Stellaris (while time is progressing at max
speed), whereas EEVDF gets 6-7 FPS.
2. I get ~15-40 FPS while playing Civ6, whereas EEVDF seems to get < 1 FPS. The
   Civ6 benchmark doesn't even start with EEVDF (it is stuck in the initial
   frame after over 4 minutes), but it gets us 13s / turn with rusty.
3. It seems that EEVDF has improved with Terraria in v6.9. It was able to
maintain ~30-55 FPS, as opposed to the ~5-10FPS we've seen in the past.
rusty is still able to maintain a solid 60-62FPS consistently with no
problem, however.
Add a layer match based on either the effective user id or the effective
group id. This allows for creating layers for individual users or
groups.
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
Add NUMA node topology awareness for scx_layered. This borrows some of
the NUMA handling from scx_rusty and allows layers to set a node mask.
Different layer kinds will use the node mask differently.
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
Simplify LoadBalancer::populate_tasks_by_load() by cutting out the
heap allocation bits and moving mutable accesses in front of immutable
ones. Because multiple immutable accesses (between bss and rodata) do
not conflict, we don't need the intermediate PID storage.
Signed-off-by: Daniel Müller <deso@posteo.net>
Add a cpus method to various subfields in the topology struct to easily get
the map of CPUs for nodes/LLCs.
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
Periodically report to stdout samples of the effective time slice
applied to tasks.
While one could determine this metric by examining the max slice_ns and
nr_waiting metrics, directly reporting it to stdout allows users to
quickly identify what is happening and provides a clearer overview of
the scheduling behavior.
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Dispatching per-CPU kthreads directly is disabled by default, so
reporting this metric can generate some confusion (since it is always 0).
Moreover, even if local kthread dispatches are enabled, they should still
be considered regular direct dispatches (there is no difference in
practice).
Therefore, merge direct kthread dispatches into direct dispatches and
drop the separate nr_kthread_dispatches metric.
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Scale the task's time slice based on the average amount of tasks that
are currently waiting to be dispatched.
Use a moving average for the amount of waiting tasks to smooth out
potential spikes caused by temporary bursts of tasks piling in the wait
queues.
This was initially modeled in scx_rustland and it seems to work pretty
well also in scx_bpfland now.
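In pseudo-C, the idea is roughly:

    /* Moving average of waiting tasks smooths out temporary bursts;
     * the slice shrinks as contention grows. */
    nr_waiting_avg = (nr_waiting_avg + nr_waiting) / 2;
    slice = slice_ns / (nr_waiting_avg + 1);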
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
With all the other optimizations and tunings, it turns out that maintaining
two runqueues does more harm than good.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Further depenalize above-average latency-critical tasks and further
penalize below-average latency-critical tasks in their ineligibility
duration.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
LAVD_VDL_LOOSENESS_FT represents how loose the deadline is. A smaller
value means a tighter deadline. While it is unlikely to be tuned,
let's keep it as a tunable for now.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
Non-kthreads with custom affinities in non-open layers are dispatched into a
LO_FALLBACK_DSQ, with the idea being that they're penalized for their custom
affinities. When a host is fully utilized, these tasks can end up being starved
due to LO_FALLBACK_DSQ being consumed only when there are no other layers to
consume from. In internal workloads at Meta, we've observed that this can
happen in practice.
Longer term, we can probably address this by implementing layer weights and
applying that to fallback DSQs to avoid starvation. For now, let's just
dispatch them to HI_FALLBACK_DSQ to avoid this starvation issue.
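In other words (a sketch; the predicate helpers are hypothetical):

    /* Route affinitized non-kthreads in non-open layers to the
     * high-priority fallback DSQ so they cannot be starved. */
    if (!is_kthread(p) && has_custom_affinity(p) && !layer->open)
        scx_bpf_dispatch(p, HI_FALLBACK_DSQ, slice_ns, 0);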
Signed-off-by: David Vernet <void@manifault.com>
Refactor the main module for scx_layered to move metrics into a separate
module. This change makes no functional difference; it only restructures
the code.
This will make it a little easier to navigate the logic in the main
scheduler code.
Signed-off-by: Daniel Hodges <hodges.daniel.scott@gmail.com>
That is okay since the runtime is considered in calculating the virtual
deadline. A shorter runtime results in a linearly tighter deadline.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
When inheriting the parent's properties, a newly forked task tends to be
over-prioritized. That is, many parent processes, such as `make`, are a
bit more latency-critical than average.
Signed-off-by: Changwoo Min <changwoo@igalia.com>