Commit Graph

120 Commits

Andrea Righi
78101e4688 scx_bpfland: drop lowlatency mode and the priority DSQ
Schedule all tasks using a single global DSQ. This gives better control
for preventing potential starvation conditions.

With this change, scx_bpfland adopts a logic similar to scx_rusty and
scx_lavd, prioritizing tasks based on the frequency of their wait and
wake-up events, rather than relying exclusively on the average amount of
voluntary context switches.

Tasks are still classified as interactive / non-interactive based on
the number of voluntary context switches, but this classification now
only affects the cpufreq logic.
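
A minimal sketch of the wait/wake-up frequency idea, with illustrative
names (not the actual scx_bpfland code):

  /*
   * Derive a latency weight from how frequently the task blocks and
   * how frequently it wakes other tasks: tasks that sleep and wake
   * often are likely interactive and get a proportionally larger
   * weight, capped at max_weight.
   */
  static u64 task_lat_weight(u64 blocked_freq, u64 waker_freq, u64 max_weight)
  {
      u64 weight = blocked_freq + waker_freq;

      return weight < max_weight ? weight : max_weight;
  }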

Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-11-06 15:06:39 +01:00
Andrea Righi
efc41dd936 scx_bpfland: strict domain affinity
Rather than always migrating tasks across LLC domains when no idle CPU
is available in their current LLC domain, allow migration but attempt to
bring tasks back to their original LLC domain whenever possible.

To do so, define the task's scheduling domain upon task creation or when
its affinity changes, and ensure the task remains within this domain
throughout its lifetime.
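
A minimal sketch of the approach, with illustrative names:

  struct task_ctx {
      s32 llc_id;    /* LLC domain assigned at creation / affinity change */
  };

  static s32 pick_cpu(struct task_struct *p, const struct cpumask *llc_mask)
  {
      s32 cpu;

      /* First try to pick an idle CPU from the task's own LLC domain... */
      cpu = scx_bpf_pick_idle_cpu(llc_mask, 0);
      if (cpu >= 0)
          return cpu;

      /* ...and only then allow a migration outside of it. */
      return scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
  }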

In the future we will add proper load balancing logic, but for now
this change seems to provide a consistent performance improvement in
certain server workloads.

For example, simple CUDA benchmarks show a performance boost of about
+10-20% with this change applied (on multi-LLC / NUMA machines).

Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-11-05 16:37:51 +01:00
Andrea Righi
064d6fb560 scx_bpfland: consider all tasks as regular if priority DSQ is congested
This helps prevent excessive starvation of regular tasks in the
presence of a large number of interactive tasks (e.g., when running
stress tests such as hackbench).
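
A hypothetical sketch of such a congestion check (the DSQ ids are
illustrative):

  static bool prio_dsq_congested(u64 prio_dsq_id, u64 shared_dsq_id)
  {
      /*
       * Consider the priority DSQ congested when it holds more tasks
       * than the shared DSQ; in that case, classify everything as
       * regular to avoid starving non-interactive tasks.
       */
      return scx_bpf_dsq_nr_queued(prio_dsq_id) >
             scx_bpf_dsq_nr_queued(shared_dsq_id);
  }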

Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-11-05 16:37:51 +01:00
Andrea Righi
8a655d94f5 scx_bpfland: do not overly prioritize WAKE_SYNC tasks
This can lead to stalls when a high number of interactive tasks are
running in the system (e.g., hackbench or similar stress tests).

Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-11-05 16:37:51 +01:00
Andrea Righi
f0c8de3477 scx_bpfland: do not exclude exiting tasks
Add SCX_OPS_ENQ_EXITING to the scheduler flags, since we are not using
bpf_task_from_pid() and the scheduler can handle exiting tasks.
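
The change amounts to adding the flag to the ops definition, roughly
(surrounding fields omitted):

  SCX_OPS_DEFINE(bpfland_ops,
                 /* ... callbacks ... */
                 .flags = SCX_OPS_ENQ_EXITING,
                 .name = "bpfland");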

Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-11-05 16:37:51 +01:00
Andrea Righi
eb99e45ced scx_bpfland: consistent vruntime update
Ensure that task vruntime is always updated in ops.running() to maintain
consistency with other schedulers.
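
A simplified sketch of the pattern (similar to what other sched_ext
schedulers do in ops.running()):

  static u64 vtime_now;

  void BPF_STRUCT_OPS(bpfland_running, struct task_struct *p)
  {
      /* Keep the global vruntime clock in sync with the running task. */
      if ((s64)(p->scx.dsq_vtime - vtime_now) > 0)
          vtime_now = p->scx.dsq_vtime;
  }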

Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-11-05 16:37:51 +01:00
Andrea Righi
628605cdee scx_bpfland: get rid of the global dynamic nvcsw threshold
The dynamic nvcsw threshold is no longer used by the scheduler and it
doesn't make sense to keep reporting it in the scheduler's statistics,
so let's just drop it.

Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-10-31 21:48:44 +01:00
Andrea Righi
827f6c6147 scx_bpfland: get rid of MAX_LATENCY_WEIGHT
Get rid of the static MAX_LATENCY_WEIGHT and always rely on the value
specified by --nvcsw-max-thresh.

This makes it possible to tune the maximum latency weight when running
in lowlatency mode (via --nvcsw-max-thresh), and it also restores the
maximum nvcsw limit in non-lowlatency mode, which was incorrectly
changed during the lowlatency refactoring.

Fixes: 4d68133 ("scx_bpfland: rework lowlatency mode to adjust tasks priority")
Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-10-31 21:48:44 +01:00
Andrea Righi
72e9451c4a scx_bpfland: evaluate nvcsw without using kernel metrics
Evaluate the number of voluntary context switches directly in the BPF
code, without relying on the kernel's p->nvcsw metric.
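
An illustrative sketch of counting nvcsw in BPF (the context field is
hypothetical; try_lookup_task_ctx() is the scheduler's task-context
lookup helper):

  struct task_ctx {
      u64 nvcsw;    /* voluntary context switches seen by the scheduler */
  };

  void BPF_STRUCT_OPS(bpfland_stopping, struct task_struct *p, bool runnable)
  {
      struct task_ctx *tctx = try_lookup_task_ctx(p);

      /*
       * A task that stops and is no longer runnable released the CPU
       * voluntarily (it blocked instead of being preempted).
       */
      if (tctx && !runnable)
          tctx->nvcsw++;
  }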

Signed-off-by: Andrea Righi <arighi@nvidia.com>
2024-10-31 21:48:44 +01:00
Andrea Righi
48bbcd24dd scx_bpfland: tune default settings
Adjust some default settings after the rework done with commit 4d68133
("scx_bpfland: rework lowlatency mode to adjust tasks priority").

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-17 21:46:51 +02:00
Andrea Righi
4d68133f3b scx_bpfland: rework lowlatency mode to adjust tasks priority
Rework lowlatency mode as follows (see the sketch after this list):
 - introduce a dynamic task priority: the task's weight multiplied by
   the average amount of voluntary context switches
 - use the dynamic priority to determine the task's vruntime (instead
   of the static task weight)
 - evaluate the task's minimum vruntime as a function of the dynamic
   priority (tasks with a higher dynamic priority can have a smaller
   vruntime compared to tasks with a lower dynamic priority)
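
A simplified sketch of the dynamic priority described above (the
constant 100 stands in for the default task weight):

  static u64 task_dyn_prio(struct task_struct *p, u64 avg_nvcsw)
  {
      /* Higher average nvcsw => higher dynamic priority. */
      return p->scx.weight * (avg_nvcsw ?: 1);
  }

  static u64 task_vtime_delta(u64 slice_used, u64 dyn_prio)
  {
      /* Tasks with a higher dynamic priority accrue vruntime slower. */
      return slice_used * 100 / dyn_prio;
  }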

The dynamic priority makes it possible to maintain good system
responsiveness even without classifying tasks as "interactive" or
"regular"; therefore, in lowlatency mode only the shared DSQ is used
(the priority DSQ is disabled).

Using a separate priority queue to dispatch "interactive" tasks makes
the scheduler less fair, allowing latency-sensitive tasks to be
prioritized even when there is a high number of tasks in the system
(e.g., `stress-ng -c 1024` or similar scenarios), where relying solely
on dynamic priority may not be sufficient.

On the other hand, disabling the classification of "interactive" tasks
results in a fairer scheduler and more predictable performance, making
it better suited for soft real-time applications (e.g., audio and
multimedia).

Therefore, the --lowlatency option is retained to allow users to choose
between more predictable performance (by disabling the interactive task
classification) or a more responsive system (default).

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-17 21:46:51 +02:00
Andrea Righi
197dee93f4 scx_bpfland: get rid of per-CPU DSQs
Using per-CPU DSQs seems to introduce more issues than benefits
(potential stalls, etc.). Therefore, let's get rid of the per-CPU DSQs
and use SCX_DSQ_LOCAL for tasks directly dispatched to specific CPUs.
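
A simplified sketch of the replacement path: dispatching to
SCX_DSQ_LOCAL from ops.select_cpu() queues the task directly on the
chosen CPU's local DSQ, with no per-CPU DSQ involved:

  s32 BPF_STRUCT_OPS(bpfland_select_cpu, struct task_struct *p,
                     s32 prev_cpu, u64 wake_flags)
  {
      s32 cpu = pick_idle_cpu(p, prev_cpu, wake_flags); /* hypothetical */

      if (cpu >= 0)
          scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);

      return cpu >= 0 ? cpu : prev_cpu;
  }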

This change seems to also improve performance on 6.12 and it makes the
scheduler a lot more stable and consistent.

The issues will be investigated separately, using a dedicated scheduler
designed to stress-test per-CPU DSQs.

Tested-by: Piotr Gorski <piotrgorski@cachyos.org>
Tested-by: Eric Naim <dnaim@cachyos.org>
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-12 08:15:51 +02:00
Andrea Righi
198f22656c scx_bpfland: clarify error code returned by pick_idle_cpu()
Return more meaningful error codes from pick_idle_cpu(). No functional
change, just improved code readability.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-12 08:08:48 +02:00
Andrea Righi
ceb4f1755f scx_bpfland: always refill task timeslice in ops.dispatch()
When a task exhausts its timeslice and no other tasks are ready to run,
we automatically refill its timeslice, but only if the current CPU is a
fully idle SMT core.

If we don’t handle the refill, the sched_ext core will default to
refilling using SCX_SLICE_DFL, which may not be optimal.

To ensure better control over the task’s timeslice, always refill it
when no other tasks are available to run.
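
A simplified sketch of the refill path in ops.dispatch():

  void BPF_STRUCT_OPS(bpfland_dispatch, s32 cpu, struct task_struct *prev)
  {
      /* ... consume queued tasks first; if none are available ... */

      if (prev && (prev->scx.flags & SCX_TASK_QUEUED))
          /* task_slice() is a hypothetical helper returning the
           * scheduler's preferred time slice for @prev. */
          prev->scx.slice = task_slice(prev);
  }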

Fixes: 6e24fcc ("scx_bpfland: keep tasks running on full-idle SMT cores")
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-12 08:08:48 +02:00
Andrea Righi
54d704ceda scx_bpfland: pick a random idle CPU when prev_cpu is not valid
Pick any random idle CPU when the previous CPU isn't valid anymore
according to the task's cpumask.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-12 08:08:48 +02:00
Andrea Righi
d62989e462 scx_bpfland: fix cpumask initialization error
In the WAKE_SYNC path, if L3 cache awareness is disabled (--disable-l3)
we may hit the following error:

  Error: EXIT: scx_bpf_error (CPU L3 cpumask not initialized)

Fix this by setting the L3 cpumask to the whole primary domain if L3
cache awareness is disabled.

Tested-by: Eric Naim <dnaim@cachyos.org>
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-10 09:30:54 +02:00
Andrea Righi
c8a9207371 scx_bpfland: prevent per-CPU DSQ stall with per-CPU kthreads
Since per-CPU kthreads may show an inconsistent prev_cpu and/or cpumask,
dispatch them directly to the local DSQ and allow them to preempt the
currently running task.

This prevents per-CPU kthread stalls and also helps to prioritize them,
as they are usually important for system performance and
responsiveness.
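
An illustrative sketch of the direct-dispatch path in ops.enqueue():

  if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
      /*
       * Per-CPU kthreads bypass the DSQs entirely and may preempt
       * the task currently running on the CPU.
       */
      scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
                       enq_flags | SCX_ENQ_PREEMPT);
      return;
  }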

Moreover, change the behavior of --local-kthreads to prioritize all
kthreads when this option is used.

This addresses issue #728.

NOTE: ideally we may want to fix this in the kernel by making sure to
always expose a consistent prev_cpu and cpumask also for kthreads, but
for now this change prevents some annoying stalls and, performance-wise,
it doesn't seem to introduce any regression. In fact, the usual
gaming/fps benchmarks show even a slight improvement in responsiveness
with this change applied.

Thanks to YUBY from the CachyOS community for all the extremely valuable
help with the intensive stress tests.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-08 15:02:31 +02:00
Andrea Righi
9a29547e5b scx_bpfland: rework lowlatency mode
In lowlatency mode (option --lowlatency) tasks are ordered using a
deadline that is evaluated as the vruntime minus a certain "bonus",
determined as a function of the max time slice and the average amount
of voluntary context switches, to amplify the priority boost of tasks
that voluntarily release the CPU (which are typically interactive).

However, this method can be extremely unfair in some cases: tasks with
short bursts of voluntary context switches may receive a huge priority
boost, making the rest of the system almost unresponsive (see massive
hackbench stress tests for example).

To prevent this, rework the task's deadline logic to use the vruntime
and a "deadline component" that is a function of the average used time
slice, scaled using a dynamic task priority (evaluated from the static
task priority and the task's average amount of voluntary context
switches).
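
A simplified sketch of the reworked deadline (the constant 100 stands
in for the default weight):

  static u64 task_deadline(u64 vruntime, u64 avg_slice, u64 dyn_prio)
  {
      /* A higher dynamic priority shrinks the deadline component. */
      return vruntime + avg_slice * 100 / dyn_prio;
  }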

This logic seems to prevent excessive prioritization of tasks performing
short intensive bursts of voluntary context switches.

It also makes lowlatency mode in scx_bpfland somewhat more similar to
the deadline logic used by scx_rusty.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-10-05 17:44:09 +02:00
Tejun Heo
7402895f4a version: v1.0.5 2024-10-02 08:34:57 -10:00
Ming Yang
445743487a Add #stat_doc attribute macro to Stats struct
`#stat_doc` extends the doc comment using the stat desc property.

Add this attribute macro to the remaining Stats structs.

Signed-off-by: Ming Yang <minos.future@gmail.com>
2024-09-30 22:12:11 -07:00
Andrea Righi
e966455af2 scx_bpfland: fix task_avg_nvcsw() return type
task_avg_nvcsw() was incorrectly returning a bool instead of u64,
limiting the impact of the lowlatency boost.

Fix it by returning the proper type (u64).

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:32 +02:00
Andrea Righi
6e24fcc7f0 scx_bpfland: keep tasks running on full-idle SMT cores
When a task is the last one running on a CPU and still wants to
continue, allow it to run and replenish its time slice only if the used
CPU is part of a fully idle SMT core.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:32 +02:00
Andrea Righi
c20a19c946 scx_bpfland: always give tasks a chance to run on an idle CPU
During ttwu, the kernel may decide to skip ->select_task_rq() (e.g.,
when only one CPU is allowed or migration is disabled). This causes
ops.enqueue() to be called directly, without ever invoking
ops.select_cpu().

Therefore, introduce a new flag (select_cpu_done) in the local task
context to determine if ops.select_cpu() was bypassed and, in that case,
attempt to find an idle CPU directly from ops.enqueue().
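
A simplified sketch of the flag (names follow the description above):

  struct task_ctx {
      bool select_cpu_done;    /* ops.select_cpu() ran for this wakeup */
  };

  /* In ops.select_cpu(): mark that CPU selection happened. */
  tctx->select_cpu_done = true;

  /* In ops.enqueue(): fall back to direct idle CPU selection. */
  if (!tctx->select_cpu_done) {
      s32 cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);

      if (cpu >= 0)
          scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | cpu, SCX_SLICE_DFL, 0);
  }
  /* Reset for the next wakeup. */
  tctx->select_cpu_done = false;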

In the future this information will be supplied by the kernel through a
special enqueue flag (SCX_ENQ_CPU_SELECTED) [1]. However, the custom
flag in the local task context makes it possible to reliably determine
the same information, even on older kernels where this flag is not
available.

[1] https://lore.kernel.org/lkml/20240928003840.GA2717@maniforge/T

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-30 14:36:19 +02:00
I Hsin Cheng
61cb3f7fc5 scx_common_bpf: Append cast_mask()
Remove the cast_mask() helper duplicated across different schedulers
and add it to common.bpf.h, so every scheduler can reference it
whenever needed.
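
The helper itself is a one-liner; the common definition looks like:

  static __always_inline const struct cpumask *
  cast_mask(struct bpf_cpumask *mask)
  {
      return (const struct cpumask *)mask;
  }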

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-09-24 16:01:19 +08:00
I Hsin Cheng
5596d5e3fe scx_bpfland: Remove the usage of cast_mask in bpfland_enqueue
The usage of cast_mask() within bpfland_enqueue aims to cast the type of
"p->cpus_ptr" from "struct bpf_cpumask *" to "const struct cpumask *".
However, the type of "p->cpus_ptr" is already "const cpumask_t *" aka
"const struct cpumask *", so no conversion is needed.

Passing a value of type "struct cpumask *" as a "struct bpf_cpumask *"
argument also leads to a compilation error.

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-09-20 20:45:09 +08:00
Andrea Righi
00eebaf905 scx_bpfland: refine task wakeup logic
On WAKE_SYNC, attempt to migrate the wakee to the same CPU as the waker
if the waker is not exiting, the wakee can use the waker's CPU, the
waker's L3 domain is not saturated, and there are no other tasks queued
to the local DSQ of the waker's CPU.

This is the same logic used in scx_rusty.
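
A sketch of the checks (llc_saturated() is a hypothetical stand-in for
the real saturation test):

  struct task_struct *waker = (void *)bpf_get_current_task_btf();
  s32 waker_cpu = bpf_get_smp_processor_id();

  if ((wake_flags & SCX_WAKE_SYNC) &&
      !(waker->flags & PF_EXITING) &&
      bpf_cpumask_test_cpu(waker_cpu, p->cpus_ptr) &&
      !llc_saturated(waker_cpu) &&
      !scx_bpf_dsq_nr_queued(SCX_DSQ_LOCAL_ON | waker_cpu))
      return waker_cpu;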

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-15 14:50:14 +02:00
Andrea Righi
079a53c689 scx_bpfland: get rid of preferred domain
Using the turbo boosted CPUs as the preferred scheduling domain seems
to be beneficial only in a very few corner cases, for example on
battery-powered devices with an aggressive cpufreq governor that
constantly tries to scale down the frequency (and even in this case
it's probably better not to force the tasks to run on the fast CPUs,
to save power).

In practice the preferred domain seems to introduce more overhead than
benefits overall, so let's get rid of it.

This can be improved in the future by adding multiple user-configurable
scheduling domains.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-15 14:50:14 +02:00
Jake Hillion
8ca45cfa37
lint: enable cargo fmt (#643)
Use `cargo fmt` with a specific nightly branch in the CI to enforce formatting. Globally format these files while the diff is still small so we can stay on top of it.

Test plan:
- CI lint check passes.
2024-09-11 10:03:20 +01:00
Andrea Righi
e6e3579a92
Merge pull request #634 from anh0516/main
scx_bpfland: Documentation consistency fix
2024-09-10 23:25:55 +02:00
likewhatevs
c4c3659b6d
Merge pull request #638 from likewhatevs/remove-rlimit-dep
remove dependency on rlimit.rs
2024-09-10 03:14:12 -04:00
Andrea Righi
655ed5b4c6 scx_bpfland: use sum_exec_runtime to evaluate task's used time slice
Using p->scx.slice to evaluate the consumed time slice can be a bit
imprecise, because the sched_ext core implements yielding by setting
p->scx.slice to 0.

When the task's vruntime is then evaluated, the task is considered to
have exhausted its entire allocated time slice, even though it
voluntarily released the CPU before the slice fully expired.

To avoid this inaccuracy and prevent penalizing tasks that voluntarily
release the CPU, always evaluate the used time slice based on the
difference in the task's total execution time (p->se.sum_exec_runtime).
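
An illustrative sketch (the context field is hypothetical):

  /* In ops.running(): snapshot the task's total execution time. */
  tctx->last_sum_exec_runtime = p->se.sum_exec_runtime;

  /* In ops.stopping(): the used slice is the runtime delta. */
  u64 slice_used = p->se.sum_exec_runtime - tctx->last_sum_exec_runtime;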

This method provides a more precise calculation of vruntime and results
in a fairer evaluation of the task's deadline.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-10 08:03:35 +02:00
patso
c1df85914b
remove dependency on rlimit.rs
the rlimit crate is the only dependency crate
with a build.rs. build.rs files complicate portability.
this removes the need for rlimit.rs
2024-09-10 01:16:53 -04:00
Avraham Hollander
f71cc646a3 scx_bpfland: Fix README.md to match the same text as a
comment in the source
2024-09-06 19:12:33 -04:00
Tejun Heo
46fc2e1a49 version: v1.0.4 2024-09-05 18:12:45 -10:00
Andrea Righi
918cfc613d scx_bpfland: optimize producer/consumer workloads
When selecting an idle CPU for a task that has been woken up, prioritize
reusing the same CPU if the waker and wakee share the same L3 cache.

Otherwise, attempt to migrate the wakee to the waker's CPU, provided it
is allowed by the wakee's scheduling domain.

This seems to consistently improve FPS performance when the system is
not operating over its full capacity.

Example:
 $ __GL_SYNC_TO_VBLANK=0 vblank_mode=0 glxgears -geometry 800x600

 - before: ~18305.77 FPS
 - after:  ~19060.62 FPS

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-05 19:02:09 +02:00
Andrea Righi
844c00fd26 scx_bpfland: enable "auto" mode by default
Rename "turbo domain" to "preferred domain", that conceptually is more
generic and introduce the new option `--preferred-domain CPUMASK`, which
allows users to define the preferred domain, specifying a cpumask as a
hex number. By default ("auto") the scheduler will always try to detect
and use the fastest CPUs in the system.

Moreover, adjust the cpufreq logic to use "auto" both with the
"balance_power" and "balance_performance" EPP profiles.

Then, enable "auto" mode by default: the scheduler will try to
automatically determine the optimal primary domain, preferred domain and
cpufreq level, based on the selected scheduler and energy profiles.

Tested-by: Piotr Gorski <piotr.gorski@cachyos.org>
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-05 16:11:12 +02:00
Andrea Righi
afc7b5404b
Merge pull request #600 from sched-ext/bpfland-cpufreq
scx_bpfland: improve cpufreq awareness
2024-09-05 07:32:10 +02:00
Tejun Heo
f010eda5c0 meson: Remove scheds/rust/*/meson.build
These aren't used since 43950c65 ("build: Use workspace to group rust
sub-projects"). Drop them.
2024-09-04 06:40:17 -10:00
Andrea Righi
918f1db4bd scx_bpfland: dynamically adjust cpufreq level in auto mode
In auto mode, rather than keeping the previous fixed cpuperf factor,
dynamically calculate it based on CPU utilization and apply it before a
task runs within its allocated time slot.

Interactive tasks consistently receive the maximum scaling factor to
ensure optimal performance.
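
A simplified sketch of the auto-mode update (the utilization source is
illustrative):

  static void update_cpuperf(s32 cpu, u64 util, bool interactive)
  {
      /* @util is expected in the [0 .. SCX_CPUPERF_ONE] range. */
      u32 perf = interactive ? SCX_CPUPERF_ONE : util;

      scx_bpf_cpuperf_set(cpu, perf);
  }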

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-03 21:36:48 +02:00
Andrea Righi
fe6ac15015 scx_bpfland: improve turbo domain CPU selection
Always consider the turbo domain when running in "auto" mode.

Additionally, when the turbo domain is used, split the CPU idle
selection logic into two stages:
 1) in ops.select_cpu(), provide the task with a second opportunity to
    remain within the same LLC
 2) in ops.enqueue(), perform another check for an idle CPU, allowing
    the task to move to a different LLC if an idle CPU within the same
    LLC is not available.

This helps tasks stick to turbo-boosted CPUs and to CPUs within the
same LLC.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-03 09:59:29 +02:00
Andrea Righi
70b93ed641 scx_bpfland: skip idle CPU selection for tasks with changing affinity
When tasks are changing CPU affinity it is pointless to try to find an
optimal idle CPU. In this case just skip the idle CPU selection step
and let the task be dispatched to a global DSQ if needed.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-03 09:59:29 +02:00
Andrea Righi
802d104b46 scx_bpfland: add basic cpufreq support
Add hints for the cpufreq governor based on the selected scheduler's
performance profile and the current energy performance preference (EPP).

With this change applied, the scheduler works as follows (see the
sketch after the list):

scheduler profile (--primary-domain option):
  - default:
    - use all cores
    - cpufreq: use default scaling factor
  - powersave:
    - use E-cores
    - cpufreq: use min scaling factor
  - performance:
    - use P-cores
    - cpufreq: use max scaling factor
  - auto:
    - EPP: power, powersave
      - use E-cores
      - cpufreq: use min scaling factor
    - EPP: balance_power (typically battery-powered systems)
      - use E-cores
      - cpufreq: use default scaling factor
    - EPP: balance_performance, performance
      - use P-cores
      - cpufreq: use max scaling factor
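
A sketch of the auto-mode EPP mapping (user-space side; the scale and
values are illustrative):

  #include <string.h>

  #define CPUFREQ_MAX 1024    /* illustrative scale */

  static unsigned long cpufreq_level_from_epp(const char *epp)
  {
      if (!strcmp(epp, "power") || !strcmp(epp, "powersave"))
          return 0;                  /* min scaling factor */
      if (!strcmp(epp, "balance_power"))
          return CPUFREQ_MAX / 2;    /* default scaling factor */
      return CPUFREQ_MAX;            /* max scaling factor */
  }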

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-09-03 09:59:29 +02:00
Andrea Righi
2cbf252019 scx_bpfland: directly dispatch only per-cpu kthreads with local_kthreads
We want to directly dispatch only kthreads when local_kthreads is
enabled, not all tasks that can run on a single CPU.

Fixes: 7cc1846 ("scx_bpfland: always rely on prev_cpu with single-CPU tasks")
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-08-31 16:35:54 +02:00
Andrea Righi
7cc18460b9 scx_bpfland: always rely on prev_cpu with single-CPU tasks
When selecting an idle CPU for tasks that can only run on a single CPU,
always check whether the previously used CPU is still usable, instead
of trying to figure out the single allowed CPU by looking at the task's
cpumask.

Apparently, single-CPU tasks can report a prev_cpu that is not in the
allowed cpumask when they rapidly change affinity.

This could lead to stalls, because we may end up dispatching the kthread
to a per-CPU DSQ that is not compatible with its allowed cpumask.

Example:

kworker/u32:2[173797] triggered exit kind 1026:
  runnable task stall (kworker/2:1[70] failed to run for 7.552s)
...
  R kworker/2:1[70] -7552ms
      scx_state/flags=3/0x9 dsq_flags=0x1 ops_state/qseq=0/0
      sticky/holding_cpu=-1/-1 dsq_id=0x8 dsq_vtime=234483011369
      cpus=04

In this case kworker/2 can only run on CPU #2 (cpus=0x4), but it's
dispatched to dsq_id=0x8, that can only be consumed by CPU 8 => stall.

To prevent this, do not try to figure out the best idle CPU for tasks
that are changing affinity; just dispatch them to a global DSQ (either
priority or regular, depending on their interactive state).

Moreover, introduce an explicit error check in dispatch_direct_cpu() to
improve detection of similar issues in the future, and drop
lookup_task_ctx() in favor of try_lookup_task_ctx(), since we can now
safely handle all the cases where the task context is not found.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-08-30 09:45:58 +02:00
Andrea Righi
28cb1ec5cb scx_bpfland: enhanced task affinity
Aggressively try to keep tasks running on the same CPU / cache / domain
to achieve higher performance when the system is not overcommitted.

This is done by giving a second chance in ops.enqueue(), in addition to
ops.select_cpu(), to find an idle CPU close to the previously used CPU.

Moreover, even if the task is dispatched to the global DSQs, always try
to check if there is an idle CPU in the primary domain that can
immediately consume the task.

= Results =

This change seems to provide a minor, but consistent, boost of
performance with the CPU-intensive benchmarks from the CachyOS
benchmarks selection [1].

Similar results can also be noticed with some WebGL benchmarks [2], when
system usage is close to its maximum capacity.

Test:
 - cachyos-benchmarker

System:
 - AMD Ryzen 7 5800X 8-Core Processor

Metrics:
 - total time: elapsed time of all benchmarks
 - total score: geometric mean of all benchmarks

NOTE: total time is the most relevant metric, since it measures the
aggregate performance, while the total score emphasizes performance
consistency across all benchmarks.

== Results: summary ==

 +-------------------------+---------------------+---------------------+
 |         Scheduler       |    Total Time       |    Total Score      |
 |                         |    (less = better)  |    (less = better)  |
 +-------------------------+---------------------+---------------------+
 |                 EEVDF   |  624.44 sec         |      123.68         |
 |               bpfland   |  625.34 sec         |      122.21         |
 | bpfland-task-affinity   |  623.67 sec         |      122.27         |
 +-------------------------+---------------------+---------------------+

== Conclusion ==

With this patch applied, bpfland shows both better performance and
better consistency. Although the gains are small (less than 1%), they
are still significant for this type of benchmark and consistently
appear across multiple runs.

[1] https://github.com/CachyOS/cachyos-benchmarker
[2] https://webglsamples.org/aquarium/aquarium.html

Tested-by: Piotr Gorski <piotr.gorski@cachyos.org>
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-08-28 10:30:54 +02:00
Andrea Righi
a155d5185d scx_bpfland: rely on Topology to classify core types
Rely on scx_utils::Topology to classify Big, Little and Turbo CPUs.

Moreover, support the special keyword "all" with --primary-domain to
include all the CPUs in the system (default).

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-08-28 00:23:55 +02:00
Andrea Righi
872e653cd2 scx_utils: introduce Turbo core type to Topology
Integrate the logic used by scx_bpfland to detect turbo-boosted cores in
Topology.

Also change the logic that detects Big/Little cores to use
base_frequency, instead of scaling_max_freq, otherwise turbo-boosted
cores in homogeneous systems may be incorrectly classified as Big.

Moreover, introduce the following new methods to Cpu to check for the
core type:
 - is_turbo(): return true if the CPU is Turbo, false otherwise
 - is_big(): return true if the CPU is either Turbo or Big
 - is_little(): return true if the CPU is Little

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-08-28 00:09:08 +02:00
Andrea Righi
e0f49a338a scx_bpfland: fix turbo boost domain nullifying primary domain limits
When creating the turbo boost scheduling domain, we might use a full CPU
mask (selecting all possible CPUs) to indicate "do not prioritize turbo
boost CPUs" or when all CPUs have the same maximum frequency.

This approach works when the primary domain also contains all the CPUs,
as the complete overlap allows the CPU selection logic to ignore the
turbo boost domain and start picking CPUs directly from the primary
domain.

However, if the primary domain doesn't include all CPUs, the two domains
won't fully overlap, which can lead to the turbo boost domain
incorrectly including all CPUs, thereby negating the restrictions set by
the primary scheduling domain.

To resolve this, an empty CPU mask should be used for the turbo boost
domain when turbo boost CPUs aren't prioritized. If the turbo boost
domain is empty, it should be entirely bypassed, and the selection
should proceed directly to the primary domain.
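
A simplified sketch of the resulting selection order (the mask
variables are illustrative):

  /* An empty turbo mask now means "no turbo preference". */
  if (bpf_cpumask_empty(cast_mask(turbo_mask)))
      cpu = scx_bpf_pick_idle_cpu(cast_mask(primary_mask), 0);
  else
      cpu = scx_bpf_pick_idle_cpu(cast_mask(turbo_mask), 0);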

Reported-by: Changwoo Min <changwoo@igalia.com>
Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-08-27 13:36:50 +02:00
Andrea Righi
a469f0f1ce
Merge pull request #561 from sched-ext/bpfland-fix-energy-profile-refresh
scx_bpfland: prevent reading energy profile if not available
2024-08-25 18:31:34 +02:00
Andrea Righi
f8acd069f0 scx_bpfland: prevent reading energy profile if not available
Avoid periodically reading the current performance profile from
/sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference
if it's not available (i.e., with older CPUs or kernels without cpufreq
support).

This fixes issue #560.

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-08-25 16:53:35 +02:00