Commit Graph

241 Commits

Author SHA1 Message Date
Tejun Heo
ba52cc131b scx_lavd: Add .gitignore 2024-04-04 07:15:37 -10:00
Tejun Heo
a60737a6bf
Merge pull request #207 from sched-ext/api-updates
scx: Apply API updates from sched_ext
2024-04-02 14:26:42 -10:00
Tejun Heo
b925bdf94d Cargo.toml: Update libbpf-rs/cargo dependencies to 0.23 and drop patch.crates-io sections
New versions of libbpf-rs and libbpf-cargo are now available with all the
needed features. Update the dependencies and drop the patch sections.
2024-04-02 11:19:39 -10:00
Tejun Heo
6f81409df4 Bump versions
- scx_utils bumped from 0.6.0 to 0.7.0.

- Repo and rust schedulers get a PATCH level bump.
2024-04-02 10:58:50 -10:00
Tejun Heo
f3e20ae9b3 scx_rustland: Apply API updates and add --exit-dump-len option to scx_rustland 2024-04-02 10:30:56 -10:00
David Vernet
5088328f9e
rusty: Check LOCAL_DSQ length for WAKE_SYNC
In rusty_select_cpu(), if a task is WAKE_SYNC, we'll currently migrate
the task to that CPU if there are any idle cores on the system. As in
[0], this condition is insufficient, as there could be idle cores
elsewhere on the system, but still tasks piled up on a single local DSQ.
Let's add a condition that the local DSQ has to be empty in order to
apply the WAKE_SYNC migration.
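
A minimal sketch of the added condition, assuming the scx kfuncs of this era (try_sync_wakeup() is a hypothetical helper; the real check lives in rusty_select_cpu()):

/* Hypothetical helper, not the actual scx_rusty code. */
static bool try_sync_wakeup(struct task_struct *p, s32 cpu, u64 wake_flags)
{
    if (!(wake_flags & SCX_WAKE_SYNC))
        return false;

    /* New condition: the target's local DSQ must be empty, so tasks
     * don't pile up on one CPU while other cores sit idle. */
    if (scx_bpf_dsq_nr_queued(SCX_DSQ_LOCAL_ON | cpu))
        return false;

    scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | cpu, SCX_SLICE_DFL, 0);
    return true;
}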

Before patch:

[void@maniforge src]$ hackbench
Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks)
Each sender will pass 100 messages of 100 bytes
Time: 0.433

With patch:
[void@maniforge src]$ hackbench
Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks)
Each sender will pass 100 messages of 100 bytes
Time: 0.035

Signed-off-by: David Vernet <void@manifault.com>
2024-04-02 15:17:32 -05:00
Tejun Heo
dfa978d166 scx_lavd: Apply API updates 2024-04-02 10:08:02 -10:00
Tejun Heo
0c07f382b1 scx_rusty: Apply API updates 2024-04-02 10:07:54 -10:00
Tejun Heo
59bbd800c1 compat: Implement scx_utils::compat and fix up scx_layered
Implement scx_utils::compat to match C's scx/compat.h and update
scx_layered. Other rust scheds are still broken.
2024-04-02 07:08:56 -10:00
Changwoo Min
3a3bd2a750 scx_lavd: increase the upper bound of ineligible duration
Change the upper bound of ineligible duration (LAVD_ELIGIBLE_TIME_MAX).
The updated (2x increased) upper bound reflects the distribution of
tasks' eligible_delta_ns better.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-30 22:59:06 +09:00
Changwoo Min
8efaf0c4c2 scx_lavd: improve the accuracy of task's run_freq
Change the run_frequency calculation to use the wait period from the
last time the task yielded the CPU to the time it starts running again.
The old implementation measured the interval between the last stopping
and the current running, which increased run_freq without reason.
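
A minimal sketch of the new interval, with hypothetical field names and a simplified formula (the real scx_lavd bookkeeping is more involved):

struct task_ctx {
    u64 last_quiescent_clk; /* when the task last yielded the CPU */
    u64 run_freq;           /* how often the task runs, in runs/sec */
};

static void update_run_freq(struct task_ctx *taskc, u64 now)
{
    /* wait_period now spans quiescent -> running, not stopping -> running */
    u64 wait_period = now - taskc->last_quiescent_clk;

    if (wait_period > 0)
        taskc->run_freq = 1000000000ULL / wait_period; /* NSEC_PER_SEC */
}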

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-30 22:55:17 +09:00
Changwoo Min
fe3efb8ce2 scx_lavd: rename last_{start/stop/wait/wake}_clk for consistency
Change the last_{start/stop/wait/wake}_clk in task_ctx to
last_{running/stopping/quiescent/runnable}_clk, matching with state
transition names. In addition, add comments and reorder fields in
task_ctx for readability.
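
The resulting mapping, sketched (field types assumed):

struct task_ctx {
    u64 last_runnable_clk;  /* was last_wake_clk:  woken, became runnable */
    u64 last_running_clk;   /* was last_start_clk: started running */
    u64 last_stopping_clk;  /* was last_stop_clk:  stopped running */
    u64 last_quiescent_clk; /* was last_wait_clk:  no longer runnable */
};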

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-30 10:13:20 +09:00
Changwoo Min
3ba10a8d4f scx_lavd: accumulate consecutive runnings
When a task runs more than once (running <-> stopping) within one
runnable-quiescent transition, accumulate the runtime of the multiple
running periods for statistics. This helps estimate the task's runtime
per schedule as if it were given an arbitrarily large time slice, which
is what we want for scheduling decisions.
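
A minimal sketch of the accumulation, assuming the renamed clock fields above plus hypothetical accumulator fields:

static void account_stopping(struct task_ctx *taskc, u64 now)
{
    /* accumulate this running period rather than overwriting it */
    taskc->acc_run_time_ns += now - taskc->last_running_clk;
}

static void account_quiescent(struct task_ctx *taskc)
{
    /* one runnable -> quiescent span is complete: fold the sum of all
     * running periods into the per-schedule runtime estimate */
    taskc->run_time_ns = taskc->acc_run_time_ns;
    taskc->acc_run_time_ns = 0;
}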

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-29 17:19:30 +09:00
Changwoo Min
7b99ed9c5c scx_lavd: drop runtime_boost using slice_boost_prio
Remove the runtime boost that used slice_boost_prio. Without
slice_boost_prio, the scheduler collects the exact time slice.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-29 16:31:03 +09:00
Changwoo Min
5629189527 scx_lavd: change update_stat_for_*() for consistency
Let's rename the update_stat_for_*() functions to follow their callers,
for consistency and less confusion.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-29 14:49:06 +09:00
Changwoo Min
04c9e7fe9d
Merge pull request #201 from multics69/perf-vdeadline01
scx_lavd: fix merge conflicts between PR 197 and 199
2024-03-28 14:15:00 +09:00
Changwoo Min
0ea1aab070 scx_lavd: fix merge conflicts
Merge branch 'perf-vdeadline01' of github.com:sched-ext/scx into perf-vdeadline01
2024-03-28 13:49:19 +09:00
Tejun Heo
340938025f
Merge pull request #200 from sched-ext/layered_delete
layered: Use TLS map instead of hash map
2024-03-27 17:09:20 -10:00
Changwoo Min
60472db845
Merge pull request #197 from multics69/perf-vdeadline01
scx_lavd: improve virtual deadline calculation
2024-03-28 11:44:54 +09:00
Changwoo Min
67f41c7d83 scx_lavd: bug fix: slice_boost should be updated before adjusting runtime
The run_time_boosted_ns calculation requires updated slice_boost_prio,
so updating slice_boost_prio should be done before updating
run_time_boosted_ns.
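
The corrected ordering, sketched with hypothetical helper names:

static void update_stat_for_stop(struct task_ctx *taskc, u64 now)
{
    update_slice_boost_prio(taskc);      /* must run first */
    update_run_time_boosted(taskc, now); /* reads slice_boost_prio */
}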

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-28 11:21:42 +09:00
David Vernet
e857dd90ab
layered: Use TLS map instead of hash map
In scx_layered, we're using a BPF_MAP_TYPE_HASH map (indexed by pid)
rather than a BPF_MAP_TYPE_TASK_STORAGE, to track local storage for a
task. As far as I can tell, there's no reason we need to be doing this.
We never access the map from user space, and we're even passing a
struct task_struct * to a helper subprog to look up the task context
rather than only doing it by pid.

Using a hashmap is error prone for this because we end up having to
manually track lifecycles for entries in the map rather than relying on
BPF to do it for us. For example, BPF will automatically free a task's
entry from the map when it exits. Let's just use TLS here rather than a
hashmap to avoid issues from this (e.g. we've observed the scheduler
getting evicted because we're accessing a stale map entry after a task
has been destroyed).
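
A minimal sketch of the task-local-storage pattern (the struct fields are illustrative, not the actual scx_layered context):

#include <scx/common.bpf.h>

struct task_ctx {
    u32 layer_idx;
};

/* The kernel ties each entry's lifetime to its task, so entries are
 * freed automatically when the task exits -- no pid-keyed bookkeeping. */
struct {
    __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
    __uint(map_flags, BPF_F_NO_PREALLOC);
    __type(key, int);
    __type(value, struct task_ctx);
} task_ctxs SEC(".maps");

static struct task_ctx *lookup_task_ctx(struct task_struct *p)
{
    return bpf_task_storage_get(&task_ctxs, p, NULL, 0);
}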

Reported-by: Valentin Andrei <vandrei@meta.com>
Signed-off-by: David Vernet <void@manifault.com>
2024-03-27 20:14:27 -05:00
Changwoo Min
31157ebc81 scx_lavd: make the comments in update_sys_cpu_load() clear
The current description is a bit confusing, so update the comments for clarity.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-28 06:45:57 +09:00
Tejun Heo
129d99f542 scx_lavd: Remove custom task state tracking
transit_task_stat() is now tracking the same runnable, running, stopping,
quiescent transitions that sched_ext core already tracks and always returns
%true. Let's remove it.
2024-03-26 12:23:19 -10:00
Tejun Heo
d7ec05e017 scx_lavd: Call update_stat_for_enq() from lavd_runnable()
LAVD_TASK_STAT_ENQ is tracking a subset of runnable task state transitions -
the ones which end up calling ops.enqueue(). However, what it is trying to
track is a task becoming runnable so that its load can be added to the cpu's
load sum.

Move the LAVD_TASK_STAT_ENQ state transition and update_stat_for_enq()
invocation to ops.runnable() which is called for all runnable transitions.

Note that when all the methods are invoked, the invocation order would be
ops.select_cpu(), runnable() and then enqueue(). So, this change moves
update_stat_for_enq() invocation before calc_when_to_run() for
put_global_rq(). update_stat_for_enq() updates taskc->load_actual which is
consumed by calc_greedy_ratio() and thus affects calc_when_to_run().

Before this patch, calc_greedy_ratio() would use load_actual which doesn't
reflect the last running period. After this patch, the latest running period
will be reflected when the task gets queued to the global queue.

The difference is unlikely to matter but it'd probably make sense to make it
more consistent (e.g. do it at the end of quiescent transition).

After this change, transit_task_stat() doesn't detect any invalid
transitions.
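
Sketched, with the scx_lavd helpers declared only as placeholders:

struct task_ctx;
struct task_ctx *try_get_task_ctx(struct task_struct *p);
void update_stat_for_enq(struct task_struct *p, struct task_ctx *taskc);

/* ops.runnable() fires on every runnable transition, including the
 * ones that never reach ops.enqueue(). */
void BPF_STRUCT_OPS(lavd_runnable, struct task_struct *p, u64 enq_flags)
{
    struct task_ctx *taskc = try_get_task_ctx(p);

    if (!taskc)
        return;

    update_stat_for_enq(p, taskc); /* add load to the cpu's load sum */
}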
2024-03-26 12:23:19 -10:00
Tejun Heo
625bb84bc4 scx_lavd: Move load subtraction to quiescent state transition
scx_lavd tracks task state transitions and updates statistics on each valid
transition. However, there's an asymmetry between the runnable/running and
stopping/quiescent transitions. In the former, the runnable and running
transitions are accounted separately in update_stat_for_enq() and
update_stat_for_run(), respectively. However, in the latter, the two
transitions are combined together in update_stat_for_stop().

This asymmetry leads to incorrect accounting. For example, a task's load
should be added to the cpu's load sum when the task gets enqueued and
subtracted when the task is no longer runnable (quiescent). The former is
accounted correctly from update_stat_for_enq() but the latter is done
whenever the task stops. A task can transit between running and stopping
multiple times before becoming quiescent, so the asymmetry can end up
subtracting the load of a task which is still running from the cpu's load
sum.

This patch:

- introduces LAVD_TASK_STAT_QUIESCENT and updates transit_task_stat() so
  that it can handle all valid state transitions including the multiple back
  and forth transitions between two pairs - QUIESCENT <-> ENQ and RUNNING
  <-> STOPPING.

- restores the symmetry by moving load adjustments part from
  update_stat_for_stop() to new update_stat_for_quiescent().

This removes a good chunk of ignored transitions. The next patch will take
care of the rest.
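
The restored symmetry, sketched with hypothetical types:

struct cpu_ctx { u64 load_sum; };
struct task_ctx { u64 load_actual; };

static void update_stat_for_enq(struct cpu_ctx *cpuc, struct task_ctx *taskc)
{
    cpuc->load_sum += taskc->load_actual; /* task became runnable */
}

static void update_stat_for_quiescent(struct cpu_ctx *cpuc, struct task_ctx *taskc)
{
    /* only when the task is no longer runnable; intermediate
     * running <-> stopping cycles no longer touch the load sum */
    cpuc->load_sum -= taskc->load_actual;
}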
2024-03-26 12:23:19 -10:00
Tejun Heo
dd40377f03 scx_lavd: Drop unnecessary extern crates
Since https://doc.rust-lang.org/edition-guide/rust-2018/path-changes.html,
extern crate declarations aren't necessary. Let's drop them.
2024-03-26 12:23:19 -10:00
David Vernet
602ec5ada3
layered: Make helper functions static
lookup_task_ctx(), lookup_task_ctx_may_fail(), and lookup_layer()
currently don't have the static keyword, so BPF may treat them as
global functions. We don't actually want these to be global, so let's
make them static to avoid confusing the verifier.
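
The change amounts to (sketched as a diff):

-struct task_ctx *lookup_task_ctx(struct task_struct *p)
+static struct task_ctx *lookup_task_ctx(struct task_struct *p)

Without static, BPF treats the function as a global subprog and
verifies it once in isolation with widened argument state; static keeps
it a local subprog verified with full context at each call site.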

Signed-off-by: David Vernet <void@manifault.com>
2024-03-26 15:08:32 -05:00
Changwoo Min
83169481a6 scx_lavd: improve latency criticality to latency priority mapping
The old approach maps [0, maximum latency criticality] to [-boost
range, boost range). This approach is easily skewed by a single outlier
maximum value and suffers from integer truncation error. The new
approach divides the range in two -- [minimum latency criticality,
average latency criticality) and [average latency criticality, maximum
latency criticality] -- and maps them to [boost range/2, 0) and [0,
-boost range/2), respectively.
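
A sketch of the two-segment mapping with hypothetical names (LAVD_BOOST_RANGE stands in for the real boost-range constant):

static s64 lat_crit_to_boost(u64 lc, u64 lc_min, u64 lc_avg, u64 lc_max)
{
    s64 half = LAVD_BOOST_RANGE / 2;
    u64 lo_span = lc_avg > lc_min ? lc_avg - lc_min : 1;
    u64 hi_span = lc_max > lc_avg ? lc_max - lc_avg : 1;

    /* [lc_min, lc_avg) -> [half, 0), [lc_avg, lc_max] -> [0, -half):
     * anchoring on the average keeps one outlier maximum from
     * squashing everyone else's boost. */
    if (lc < lc_avg)
        return half - half * (s64)(lc - lc_min) / (s64)lo_span;
    return -(half * (s64)(lc - lc_avg) / (s64)hi_span);
}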

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-25 22:13:41 +09:00
Changwoo Min
2b5d3c1300 scx_lavd: change sched_prio_to_latency_weight to more skewed one
Replace the latency weight array with a more skewed one, the inverse of
sched_prio_to_slice_weight. The more skewed array turns out to work
better under highly CPU-overloaded cases since it gives a longer
deadline to non-latency-critical tasks.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 14:01:44 +09:00
Changwoo Min
9c12b607ca scx_lavd: increase LAVD_LC_RUNTIME_MAX for improved lat_prio
Since the calculated runtime grows once the number of fully consumed
time slices is taken into account, increase the upper bound
(LAVD_LC_RUNTIME_MAX) of the runtime considered in the latency
calculation. Also, add LAVD_SLICE_BOOST_MAX_PRIO to keep
slice_boost_prio from suddenly dropping to zero.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 10:59:13 +09:00
Changwoo Min
32570789d8 scx_lavd: improve the accuracy of runtime per schedule
Take slice_boost_prio -- how many times a full time slice was consumed
-- into consideration when calculating run_time_ns (runtime per
schedule). This improves the accuracy, especially when a task is
overscheduled and its time slice is reduced to enforce fairness.
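
Sketched with hypothetical names (LAVD_SLICE_MAX_NS stands in for the maximum slice length):

static u64 calc_run_time_boosted(struct task_ctx *taskc, u64 measured_ns)
{
    /* slice_boost_prio counts how many full slices were consumed, so
     * a task whose slice was shrunk for fairness is not under-measured */
    return measured_ns + taskc->slice_boost_prio * LAVD_SLICE_MAX_NS;
}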

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 10:32:09 +09:00
Changwoo Min
b37370bb35 scx_lavd: entail two invalid task state transitions
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-20 00:15:47 +09:00
Changwoo Min
8860f26ff4 scx_lavd: add a sanity check if runtime is negative
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-20 00:15:37 +09:00
Changwoo Min
fa2282363b scx_lavd: more explanation about sched_prio_to_latency_weight
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 21:31:37 +09:00
Changwoo Min
24bddad9b4 scx_lavd: fix a typo
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 21:19:55 +09:00
Changwoo Min
512c4e794f scx_lavd: fix potential CPU stall in lavd_select_cpu()
Returning prev_cpu after picking an idle CPU can cause an idle CPU
stall, because the picked core was already punched out of the idle mask
by the scx core and is therefore no longer idle from the scx core's
point of view.

This fix performs idle core selection as the last step, so prev_cpu is
never returned after an idle core has been picked.
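
The fixed ordering, sketched (the earlier placement heuristics are elided):

s32 BPF_STRUCT_OPS(lavd_select_cpu, struct task_struct *p, s32 prev_cpu,
                   u64 wake_flags)
{
    s32 cpu;

    /* ... heuristics that may return prev_cpu go first ... */

    /* Last step: scx_bpf_pick_idle_cpu() clears the CPU's idle bit,
     * so once a CPU is picked we must return it, never prev_cpu. */
    cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
    if (cpu >= 0)
        return cpu;

    return prev_cpu;
}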

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:46 +09:00
Changwoo Min
e41c674fae scx_lavd: remove redundant latency calculation at calc_latency_weight()
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:15 +09:00
Changwoo Min
865269f438 scx_lavd: remove unnecessary condition check at slice_fully_consumed()
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:15 +09:00
Changwoo Min
c2b1a10e17 scx_lavd: remove unnecessary condition check at update_stat_for_stop()
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:15 +09:00
Changwoo Min
a27b509452 scx_lavd: use is_wakeup_ef() in checking wait flag
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:15 +09:00
Changwoo Min
419ccae8db scx_lavd: improve the clarity of the task state transition
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:01 +09:00
Changwoo Min
66e15285ea scx_lavd: move scx_bpf_error() calls to get_cpu_ctx{_id}()
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:01 +09:00
Changwoo Min
0fc5591bf6 scx_lavd: add a utility func, {try_}get_task_ctx()
get_task_ctx() and try_get_task_ctx() were added for common error
handling of task lookup failures. Since the idle "swapper" task is not
managed by sched_ext, try_get_task_ctx() is added for cases where the
idle task may be looked up.
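
Sketched, assuming a task-storage map named task_ctx_stor:

static struct task_ctx *try_get_task_ctx(struct task_struct *p)
{
    /* may legitimately fail, e.g. for the idle "swapper" task,
     * which is not managed by sched_ext */
    return bpf_task_storage_get(&task_ctx_stor, p, NULL, 0);
}

static struct task_ctx *get_task_ctx(struct task_struct *p)
{
    struct task_ctx *taskc = try_get_task_ctx(p);

    if (!taskc)
        scx_bpf_error("task_ctx lookup failed for pid %d", p->pid);
    return taskc;
}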

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:01 +09:00
Changwoo Min
97b4d9ce5a scx_lavd: remove unnecessary condition check in is_wakeup_wf()
We don't need to test SCX_WAKE_SYNC because SCX_WAKE_SYNC should only be
set when SCX_WAKE_TTWU is set.
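
Sketched (the exact predicate form is assumed):

static bool is_wakeup_wf(u64 wake_flags)
{
    /* SCX_WAKE_SYNC is only ever set alongside SCX_WAKE_TTWU, so
     * testing SCX_WAKE_TTWU alone covers both cases */
    return !!(wake_flags & SCX_WAKE_TTWU);
}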

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:46:01 +09:00
Changwoo Min
47e7238b13 scx_lavd: improve the description of fairness
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:45:37 +09:00
Changwoo Min
670c1b5b92 scx_lavd: print one scheduling decision by default
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:30:41 +09:00
Changwoo Min
315e5b3fe2 scx_lavd: remove unnecessary arg from put_local_rq()
cpu_id is unused and unnecessary in put_local_rq(), so it is removed.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:30:26 +09:00
Changwoo Min
ead7d55c5c scx_lavd: replace num_cpus with scx_utils::Topology
This removes the external crate dependency and avoids the known bugs
in the num_cpus crate.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:30:26 +09:00
Changwoo Min
17bce169e7 scx_lavd: fix formatting issues in main.rs and main.bpf.c
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-19 00:30:26 +09:00
Changwoo Min
fb73520990 scx_lavd: add scx_lavd to the meson build 2024-03-16 10:55:37 +09:00