Commit Graph

69 Commits

Author SHA1 Message Date
Changwoo Min
4c0f996ddc
Revert "scx_lavd: Enforce memory barrier in flip_sys_cpu_util" 2024-05-27 12:19:21 +09:00
I Hsin Cheng
f839106a57 scx_lavd: Enforce memory barrier in flip_sys_cpu_util
Use the GNU built-in __sync_fetch_and_xor() to perform the XOR operation
on the global variable "__sys_cpu_util_idx" to ensure the operation's
visibility.

The built-in function "__sync_fetch_and_xor()" provides both an atomic
operation and a full memory barrier, which is needed by every operation
(especially stores) on global variables.

Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
2024-05-26 15:27:10 +08:00
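
Roughly, the pattern this commit describes looks like the sketch below (variable names taken from the commit text; the typedef stands in for the usual vmlinux.h types, and the actual scx_lavd layout may differ):

    typedef unsigned long long u64;

    static volatile u64 __sys_cpu_util_idx;  /* selects the active buffer: 0 or 1 */
    static u64 __sys_cpu_util[2];            /* double-buffered system CPU utilization */

    static void flip_sys_cpu_util(void)
    {
            /*
             * Atomic XOR with a full memory barrier: earlier stores to the
             * inactive buffer become visible before readers can observe
             * the flipped index.
             */
            __sync_fetch_and_xor(&__sys_cpu_util_idx, 1);
    }
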
David Vernet
17c0c10b4e
Merge pull request #294 from sched-ext/fix_warnings
Fix warnings
2024-05-18 10:47:54 -05:00
Changwoo Min
4cba06dc33 scx_lavd: fix inconsistent indentation in main.bpf.c
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-18 22:22:16 +09:00
David Vernet
a1c60ce589
lavd: Remove unused variables from scx_lavd
Fix unused variable warnings.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-18 07:51:20 -05:00
Tejun Heo
ab25992416 Add missing skel.attach() calls
C SCX_OPS_ATTACH() and rust scx_ops_attach() macros were not calling
.attach() and were only attaching the struct_ops. This meant that all
non-struct_ops BPF programs contained in the skels were never attached which
breaks e.g. scx_layered.

Let's fix it by adding an .attach() invocation to the attach macros.
2024-05-17 14:33:04 -10:00
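
Schematically, the fix means attaching both the struct_ops map and the rest of the skeleton's programs. A sketch in plain libbpf terms (the skeleton and map names here are illustrative, not the actual macro bodies):

    #include <errno.h>
    #include <bpf/libbpf.h>
    #include "example.skel.h"   /* hypothetical generated skeleton */

    static int attach_all(struct example *skel)
    {
            struct bpf_link *link;

            /* Attach the sched_ext struct_ops itself. */
            link = bpf_map__attach_struct_ops(skel->maps.example_ops);
            if (!link)
                    return -errno;

            /* Previously missing: attach every other BPF program in the
             * skeleton (tracepoints, fentry hooks, etc.). */
            if (example__attach(skel))
                    return -errno;

            return 0;
    }
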
I Hsin Cheng
6cce01c66b Avoid redundant subtraction in rsigmoid_u64
Originally, the implementation of rsigmoid_u64 performed the
subtraction even when the value of "v" equaled the value of "max",
in which case the result is certainly zero.

We can avoid this redundant subtraction by changing the condition from
">" to ">=", since we know that when "v" and "max" are equal we can
return 0 without any subtract operation.
2024-05-16 11:58:39 +08:00
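
A sketch of the helper with the changed condition (sigmoid_u64() is assumed to be defined elsewhere in the scheduler's headers):

    /* Reversed (horizontally flipped) sigmoid over [0, max]. */
    static u64 rsigmoid_u64(u64 v, u64 max)
    {
            /*
             * ">=" instead of ">": when v == max, the call below would
             * compute sigmoid_u64(0, max) == 0 anyway, so return early
             * and skip the subtraction.
             */
            if (v >= max)
                    return 0;
            return sigmoid_u64(max - v, max);
    }
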
vax-r
f293995b59 Fix typo
Fix the usage of "scheduler" in a comment of main.bpf.c; it should be
the verb "schedule".
2024-05-15 23:02:35 +08:00
Changwoo Min
08e7e23cbe scx_lavd: print out the current limitation of scx_lavd for users
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-15 12:04:09 +09:00
Changwoo Min
a4560c7f7f scx_lavd: add comments describing the idea of preemption
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-15 12:04:03 +09:00
Changwoo Min
446de3ef3c scx_lavd: minor style changes
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-10 11:07:32 +09:00
Changwoo Min
7fcc6e4576 scx_lavd: support yield-based preemption
If there is a higher-priority task when running the ops.tick(),
ops.select_cpu(), or ops.enqueue() callbacks, the currently running
task yields its CPU by shrinking its time slice to zero, so a
higher-priority task can run on the current CPU.

As low-cost, fine-grained preemption becomes available, default
parameters are adjusted as follows:
  - Raise the bar for remote CPU preemption to avoid IPIs.
  - Increase the maximum time slice.
  - Gradually enforce the fair use of CPU time (i.e., ineligible duration)

Lastly, using CAS, we ensure that a remote CPU is preempted by only one
CPU. This removes unnecessary remote preemptions (and IPIs).

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-10 00:54:41 +09:00
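
A condensed sketch of the two mechanisms, assuming the usual scx BPF headers (struct cpu_ctx and its preempt_pending field are hypothetical; zeroing p->scx.slice and scx_bpf_kick_cpu() are real sched_ext interfaces):

    /* Local yield: zeroing the remaining slice makes sched_ext pick the
     * next task on this CPU at the next scheduling point. */
    static void try_yield_current(struct task_struct *cur)
    {
            cur->scx.slice = 0;
    }

    /* Remote preemption: the CAS guarantees that only one CPU wins the
     * right to kick a given victim, avoiding duplicate IPIs. */
    static bool try_kick_cpu(struct cpu_ctx *victim)
    {
            if (!__sync_bool_compare_and_swap(&victim->preempt_pending, 0, 1))
                    return false;
            scx_bpf_kick_cpu(victim->cpu_id, SCX_KICK_PREEMPT);
            return true;
    }
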
Changwoo Min
01e5a46371
Merge pull request #263 from multics69/scx_lavd-power01
scx_lavd: support CPU frequency scaling
2024-05-05 10:16:00 +09:00
Changwoo Min
a24e1d7adf scx_lavd: more comments about CPU frequency scaling
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-04 10:41:13 +09:00
David Vernet
9bb8e9a548
common: Pull bpf_log2l() into helper function header
scx_lavd implemented 32 and 64 bit versions of a base-2 logarithm
function. This is now also used in rusty. To avoid code duplication,
let's pull it into a shared header.

Note that there is technically a functional change here as we remove the
always inline compiler directive. We instead assume that the compiler
will know best whether or not to inline the function.

Signed-off-by: David Vernet <void@manifault.com>
2024-05-03 14:50:24 -05:00
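
The shared helper is presumably along the lines of the classic shift-and-test bit trick; a sketch (u32/u64 as provided by vmlinux.h; rounding details may differ from the actual header):

    /* Integer floor(log2(v)) for 32-bit values. */
    static u32 bpf_log2(u32 v)
    {
            u32 r, shift;

            r = (v > 0xFFFF) << 4; v >>= r;
            shift = (v > 0xFF) << 3; v >>= shift; r |= shift;
            shift = (v > 0xF) << 2; v >>= shift; r |= shift;
            shift = (v > 0x3) << 1; v >>= shift; r |= shift;
            return r | (v >> 1);
    }

    /* 64-bit wrapper. Note: plain static, not __always_inline, per the
     * commit above -- inlining is left to the compiler. */
    static u32 bpf_log2l(u64 v)
    {
            u32 hi = v >> 32;

            return hi ? bpf_log2(hi) + 32 : bpf_log2((u32)v);
    }
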
Changwoo Min
6892898469 scx_lavd: support CPU frequency scaling
To know the required CPU performance (e.g., frequency) demand, we keep
track of 1) the utilization of each CPU and 2) the _performance
criticality_ of each task. The performance criticality of a task
denotes how critical it is to CPU performance (frequency). As with the
notion of latency criticality, we use three factors: the task's
average runtime, its wake-up frequency, and its waken-up frequency.
The longer a task's runtime and the higher its two frequencies, the
more performance-critical the task is, because it would be a
bottleneck in the middle of the task chain.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-05-04 00:30:25 +09:00
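
One plausible way to combine the three factors is sketched below with hypothetical field names and log scaling; the actual formula in scx_lavd is not reproduced here:

    /*
     * Hypothetical performance-criticality score: a longer average
     * runtime and higher wake-up/waken-up frequencies yield a higher
     * score. Log scaling (bpf_log2l from the shared header) keeps a
     * single outlier factor from dominating the sum.
     */
    static u64 calc_perf_criticality(const struct task_ctx *taskc)
    {
            u64 runtime_ft = bpf_log2l(taskc->run_time_ns + 1);
            u64 wake_ft    = bpf_log2l(taskc->wake_freq + 1);
            u64 waken_ft   = bpf_log2l(taskc->waken_freq + 1);

            return runtime_ft + wake_ft + waken_ft;
    }
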
Tejun Heo
e5e88b7e18 Bump versions to prepare for a release 2024-04-29 09:07:27 -10:00
Tejun Heo
3e7ef35649
Merge pull request #250 from multics69/lavd-issue-234
scx_lavd: replenish time slice at ops.running() only when necessary
2024-04-29 09:01:04 -10:00
Tejun Heo
5b7b7d5193
Merge pull request #247 from multics69/lavd-issue-244
scx_lavd: always inline submit_task_ctx to make the verifier happy
2024-04-29 07:53:38 -10:00
Changwoo Min
5f63e0ca30 scx_lavd: replenish time slice at ops.running() only when necessary
The current code replenishes the task's time slice whenever the task
becomes ops.running(). However, there is a case where such behavior can
starve the other tasks, causing the watchdog timeout error. One (if not
the only) such case is when a task is preempted while running by a
higher scheduler class (e.g., RT, DL). In such a case, the task will
transit in a cycle of ops.running() -> ops.stopping() -> ops.running()
-> etc. Whenever it becomes running again, it will be placed at the
head of the local DSQ and ops.running() will renew its time slice.
Hence, in the worst case, the task can run forever since its time slice
is never exhausted. The fix is to assign the time slice only once, by
checking whether the time slice has already been calculated.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-29 12:13:31 +09:00
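
A sketch of the fix (get_task_ctx(), calc_time_slice(), and the slice_ns field are illustrative; BPF_STRUCT_OPS is the sched_ext helper macro, and slice_ns is assumed to be reset to 0 at the quiescent transition):

    void BPF_STRUCT_OPS(lavd_running, struct task_struct *p)
    {
            struct task_ctx *taskc = get_task_ctx(p);

            if (!taskc)
                    return;

            /*
             * Replenish only if no slice has been computed for this
             * runnable period yet. If the task was preempted by an RT/DL
             * task and is re-running, keep the remaining budget instead
             * of renewing it.
             */
            if (!taskc->slice_ns) {
                    taskc->slice_ns = calc_time_slice(p, taskc);
                    p->scx.slice = taskc->slice_ns;
            }
    }
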
Andrea Righi
cabde30736 scx_utils: bump up version to 0.8.0
Bump up scx-utils version to provide the new scx_utils::TopologyMap.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-28 21:01:16 +02:00
Andrea Righi
905960f752 scx_lavd: use c_char consistently
In Rust c_char can be aliased to i8 or u8, depending on the particular
target architecture.

For example, trying to build scx_lavd on ppc64 triggers the following
error:

error[E0308]: mismatched types
   --> src/main.rs:200:38
    |
200 |         let c_tx_cm: *const c_char = (&tx.comm as *const [i8; 17]) as *const i8;
    |                      -------------   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `*const u8`, found `*const i8`
    |                      |
    |                      expected due to this
    |
    = note: expected raw pointer `*const u8`
               found raw pointer `*const i8`

To fix this, consistently use c_char instead of assuming it corresponds
to i8.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-04-27 17:21:19 +02:00
Changwoo Min
f470b1aa13 scx_lavd: always inline submit_task_ctx to make the verifier happy
In _some_ kernel versions, loading scx_lavd fails with an error of
"bpf_rcu_read_unlock is missing". The usage of
bpf_rcu_read_lock/unlock() in proc_dump_all_tasks() is correct, but the
bpf verifier still thinks bpf_rcu_read_unlock() is missing. The most
plausible reason so far is that the problematic kernel lacks commit
6fceea0fa59f ("bpf: Transfer RCU lock state between subprog
calls"), failing inter-procedural analysis between proc_dump_all_tasks()
and submit_task_ctx(). Thus, we force-inline submit_task_ctx() (so no
inter-procedural analysis by the verifier is necessary) for the time
being.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-28 00:11:38 +09:00
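
The workaround in code form (body elided):

    /*
     * Force inlining so the verifier sees proc_dump_all_tasks()'s RCU
     * read-side section as a single function body; no RCU lock state
     * needs to be transferred across a subprog call on kernels lacking
     * commit 6fceea0fa59f.
     */
    static __always_inline
    void submit_task_ctx(struct task_struct *p, struct task_ctx *taskc, u32 cpu_id)
    {
            /* ... fill and submit a record describing @p ... */
    }
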
Changwoo Min
d0d0a18b10 scx_lavd: fix copyright information
Correct the copyright and author information

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-26 16:36:58 +09:00
takase1121
5d20f89a87
scheds-rust: build rust schedulers in sequence 2024-04-23 08:06:27 +08:00
David Vernet
45589cd0f7
lavd: Fix a few typos
Noticed a few typos. Let's fix em up

Signed-off-by: David Vernet <void@manifault.com>
2024-04-17 08:17:52 -05:00
Changwoo Min
f53c29759e
scx_lavd: support preemption (in some scenarios) (#224)
* scx-lavd: preemption of a lower-priority task using kick cpu

When a task is enqueued to the global queue, the scheduler checks if
there is a lower priority task than the enqueued task. If so, it kicks
out the lower-priority task, hoping the newly enqueued task or another
higher-priority task runs on the kicked CPU. Kicking another CPU is
expensive as an IPI is involved, so the scheduler judiciously kicks the
CPU when its benefit (i.e., priority gap) is clear enough.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-04-09 14:25:53 +09:00
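
A sketch of the "judicious kick" decision described above (victim selection, the vdeadline fields, and the margin constant are all illustrative; scx_bpf_kick_cpu() with SCX_KICK_PREEMPT is the real sched_ext interface):

    #define LAVD_PREEMPT_MARGIN_NS  (1000 * 1000ULL)   /* illustrative margin */

    static void try_find_and_kick_victim_cpu(const struct task_ctx *enq_taskc)
    {
            struct cpu_ctx *victim = find_lowest_prio_cpu();

            if (!victim)
                    return;

            /* Smaller virtual deadline == higher priority. Kick only when
             * the priority gap clearly outweighs the cost of the IPI. */
            if (victim->running_vdeadline >
                enq_taskc->vdeadline + LAVD_PREEMPT_MARGIN_NS)
                    scx_bpf_kick_cpu(victim->cpu_id, SCX_KICK_PREEMPT);
    }
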
Tejun Heo
ba52cc131b scx_lavd: Add .gitignore 2024-04-04 07:15:37 -10:00
Tejun Heo
a60737a6bf
Merge pull request #207 from sched-ext/api-updates
scx: Apply API updates from sched_ext
2024-04-02 14:26:42 -10:00
Tejun Heo
b925bdf94d Cargo.toml: Update libbpf-rs/cargo dependencies to 0.23 and drop patch.crates-io sections
New versions of libbpf-rs and libbpf-cargo are now available with all the
needed features. Update the dependencies and drop the patch sections.
2024-04-02 11:19:39 -10:00
Tejun Heo
6f81409df4 Bump versions
- scx_utils bumped from 0.6.0 to 0.7.0.

- Repo and rust schedulers get a PATCH level bump.
2024-04-02 10:58:50 -10:00
Tejun Heo
dfa978d166 scx_lavd: Apply API updates 2024-04-02 10:08:02 -10:00
Tejun Heo
59bbd800c1 compat: Implement scx_utils::compat and fix up scx_layered
Implement scx_utils::compat to match C's scx/compat.h and update
scx_layered. Other rust scheds are still broken.
2024-04-02 07:08:56 -10:00
Changwoo Min
3a3bd2a750 scx_lavd: increase the upper bound of ineligible duration
Change the upper bound of ineligible duration (LAVD_ELIGIBLE_TIME_MAX).
The updated (2x increased) upper bound reflects the distribution of
tasks' eligible_delta_ns better.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-30 22:59:06 +09:00
Changwoo Min
8efaf0c4c2 scx_lavd: improve the accuracy of task's run_freq
Change the calculation of run_freq to use the wait_period from the
last time the task yielded the CPU to the time it starts running
again. The old implementation measured the time interval between the
last stopping and the current running, increasing run_freq without
reason.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-30 22:55:17 +09:00
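
In sketch form (the field and helper names are assumptions based on the surrounding commits):

    /* run_freq is now derived from the interval between the last
     * quiescent point (when the task yielded the CPU) and the current
     * running transition, instead of the stopping->running interval. */
    static void update_run_freq(struct task_ctx *taskc, u64 now)
    {
            u64 wait_period = now - taskc->last_quiescent_clk;

            taskc->run_freq = calc_avg_freq(taskc->run_freq, wait_period);
    }
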
Changwoo Min
fe3efb8ce2 scx_lavd: rename last_{start/stop/wait/wake}_clk for consistency
Change the last_{start/stop/wait/wake}_clk in task_ctx to
last_{running/stopping/quiescent/runnable}_clk, matching with state
transition names. In addition, add comments and reorder fields in
task_ctx for readability.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-30 10:13:20 +09:00
Changwoo Min
3ba10a8d4f scx_lavd: accumulate consecutive runnings
When a task runs more than once (running <-> stopping) within one
runnable-to-quiescent transition, accumulate the runtime of the
multiple runnings for statistics. This yields the task's runtime per
schedule as if a huge time slice were given, which is what we want to
collect for scheduling decisions.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-29 17:19:30 +09:00
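
Sketched with illustrative names (calc_avg() is a hypothetical moving-average helper):

    /* Each stopping transition adds the just-finished running period to
     * a per-schedule accumulator... */
    static void account_stopping(struct task_ctx *taskc, u64 now)
    {
            taskc->acc_run_time_ns += now - taskc->last_running_clk;
    }

    /* ...and the total is folded into run_time_ns only once the task
     * becomes quiescent, approximating "runtime per schedule if the
     * time slice were unbounded". */
    static void fold_runtime_at_quiescent(struct task_ctx *taskc)
    {
            taskc->run_time_ns = calc_avg(taskc->run_time_ns,
                                          taskc->acc_run_time_ns);
            taskc->acc_run_time_ns = 0;
    }
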
Changwoo Min
7b99ed9c5c scx_lavd: drop runtime_boost using slice_boost_prio
Remove runtime_boost using slice_boost_prio. Without slice_boost_prio,
the scheduler collects the exact time slice.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-29 16:31:03 +09:00
Changwoo Min
5629189527 scx_lavd: change update_stat_for_*() for consistency
Let's change the function names of update_stat_for_*() to follow their
callers, for consistency and less confusion.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-29 14:49:06 +09:00
Changwoo Min
0ea1aab070 scx_lavd: fix merge conflicts
Merge branch 'perf-vdeadline01' of github.com:sched-ext/scx into perf-vdeadline01
2024-03-28 13:49:19 +09:00
Changwoo Min
67f41c7d83 scx_lavd: bug fix: slice_boost should be updated before adjusting runtime
The run_time_boosted_ns calculation requires updated slice_boost_prio,
so updating slice_boost_prio should be done before updating
run_time_boosted_ns.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-28 11:21:42 +09:00
Changwoo Min
31157ebc81 scx-lavd: make the comments in update_sys_cpu_load() clear
The current description is a bit confusing, so update the comments for clarity.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-28 06:45:57 +09:00
Tejun Heo
129d99f542 scx_lavd: Remove custom task state tracking
transit_task_stat() is now tracking the same runnable, running, stopping,
quiescent transitions that sched_ext core already tracks and always returns
%true. Let's remove it.
2024-03-26 12:23:19 -10:00
Tejun Heo
d7ec05e017 scx_lavd: Call update_stat_for_enq() from lavd_runnable()
LAVD_TASK_STAT_ENQ is tracking a subset of runnable task state transitions -
the ones which end up calling ops.enqueue(). However, what it is trying to
track is a task becoming runnable so that its load can be added to the cpu's
load sum.

Move the LAVD_TASK_STAT_ENQ state transition and update_stat_for_enq()
invocation to ops.runnable() which is called for all runnable transitions.

Note that when all the methods are invoked, the invocation order would be
ops.select_cpu(), runnable() and then enqueue(). So, this change moves
update_stat_for_enq() invocation before calc_when_to_run() for
put_global_rq(). update_stat_for_enq() updates taskc->load_actual which is
consumed by calc_greedy_ratio() and thus affects calc_when_to_run().

Before this patch, calc_greedy_ratio() would use load_actual which doesn't
reflect the last running period. After this patch, the latest running period
will be reflected when the task gets queued to the global queue.

The difference is unlikely to matter but it'd probably make sense to make it
more consistent (e.g. do it at the end of quiescent transition).

After this change, transit_task_stat() doesn't detect any invalid
transitions.
2024-03-26 12:23:19 -10:00
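
The move, in sketch form (get_cpu_ctx(), get_task_ctx(), and the update_stat_for_enq() signature are illustrative):

    /* ops.runnable() fires for *all* runnable transitions, including
     * wakeups that bypass ops.enqueue() via direct dispatch, so load
     * accounting now happens exactly once per runnable transition. */
    void BPF_STRUCT_OPS(lavd_runnable, struct task_struct *p, u64 enq_flags)
    {
            struct cpu_ctx *cpuc = get_cpu_ctx();
            struct task_ctx *taskc = get_task_ctx(p);

            if (cpuc && taskc)
                    update_stat_for_enq(cpuc, taskc);
    }
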
Tejun Heo
625bb84bc4 scx_lavd: Move load subtraction to quiescent state transition
scx_lavd tracks task state transitions and updates statistics on each valid
transition. However, there's an asymmetry between the runnable/running and
stopping/quiescent transitions. In the former, the runnable and running
transitions are accounted separately in update_stat_for_enq() and
update_stat_for_run(), respectively. However, in the latter, the two
transitions are combined together in update_stat_for_stop().

This asymmetry leads to incorrect accounting. For example, a task's load
should be added to the cpu's load sum when the task gets enqueued and
subtracted when the task is no longer runnable (quiescent). The former is
accounted correctly from update_stat_for_enq() but the latter is done
whenever the task stops. A task can transit between running and stopping
multiple times before becoming quiescent, so the asymmetry can end up
subtracting the load of a task which is still running from the cpu's load
sum.

This patch:

- introduces LAVD_TASK_STAT_QUIESCENT and updates transit_task_stat() so
  that it can handle all valid state transitions including the multiple back
  and forth transitions between two pairs - QUIESCENT <-> ENQ and RUNNING
  <-> STOPPING.

- restores the symmetry by moving load adjustments part from
  update_stat_for_stop() to new update_stat_for_quiescent().

This removes a good chunk of ignored transitions. The next patch will take
care of the rest.
2024-03-26 12:23:19 -10:00
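
The restored symmetry, sketched (the cpu_ctx/task_ctx fields are illustrative):

    /* Load is added when the task becomes runnable (enqueued)... */
    static void update_stat_for_enq(struct cpu_ctx *cpuc, struct task_ctx *taskc)
    {
            cpuc->load_sum += taskc->load_actual;
    }

    /* ...and subtracted only at the quiescent transition -- never on an
     * intermediate stopping transition, which a still-runnable task may
     * go through many times. */
    static void update_stat_for_quiescent(struct cpu_ctx *cpuc, struct task_ctx *taskc)
    {
            cpuc->load_sum -= taskc->load_actual;
    }
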
Tejun Heo
dd40377f03 scx_lavd: Drop unnecessary extern crates
Since https://doc.rust-lang.org/edition-guide/rust-2018/path-changes.html,
extern crate declarations aren't necessary. Let's drop them.
2024-03-26 12:23:19 -10:00
Changwoo Min
83169481a6 scx_lavd: improve latency criticality to latency priority mapping
The old approach was mapping [0, maximum latency criticality] to
[-boost range, boost range). This approach is easily affected by one
outlier maximum value and suffers from integer truncation error. The
new approach divides the range into two -- [minimum latency
criticality, average latency criticality) and [average latency
criticality, maximum latency criticality] -- and maps them into [boost
range/2, 0) and [0, -boost range/2), respectively.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-25 22:13:41 +09:00
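
A sketch of the two-segment mapping (the constant and names are illustrative, and degenerate ranges are clamped to zero; the actual arithmetic in scx_lavd may differ):

    #define LAVD_BOOST_RANGE  64   /* illustrative boost range */

    /* Map latency criticality lc in two segments around the average:
     * [lc_min, lc_avg) -> [range/2, 0) and [lc_avg, lc_max] ->
     * [0, -range/2). Splitting at the average makes the mapping robust
     * against a single outlier maximum and halves the integer
     * truncation error per segment. */
    static s64 map_lat_crit_to_boost(u64 lc, u64 lc_min, u64 lc_avg, u64 lc_max)
    {
            s64 half = LAVD_BOOST_RANGE / 2;

            if (lc_avg <= lc_min || lc_max <= lc_avg)
                    return 0;   /* degenerate distribution */

            if (lc < lc_avg)
                    return half - (s64)((half * (lc - lc_min)) / (lc_avg - lc_min));

            return -(s64)((half * (lc - lc_avg)) / (lc_max - lc_avg + 1));
    }
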
Changwoo Min
2b5d3c1300 scx_lavd: change sched_prio_to_latency_weight to more skewed one
Replace the latency weight array with a more skewed one, the inverse
of sched_prio_to_slice_weight. It turns out the more skewed one works
better under highly CPU-overloaded cases since it gives longer
deadlines to non-latency-critical tasks.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 14:01:44 +09:00
Changwoo Min
9c12b607ca scx_lavd: increase LAVD_LC_RUNTIME_MAX for improved lat_prio
As the calculated runtime now increases by considering the number of
full time-slice consumptions, raise the upper bound
(LAVD_LC_RUNTIME_MAX) of the runtime to be considered in the latency
calculation. Also, add LAVD_SLICE_BOOST_MAX_PRIO to avoid
slice_boost_prio suddenly dropping to zero.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 10:59:13 +09:00
Changwoo Min
32570789d8 scx_lavd: improve the accuracy of runtime per schedule
Take slice_boost_prio -- how many times a full time slice was consumed
-- into consideration in calculating run_time_ns (runtime per
schedule). This improves the accuracy, especially when a task is
overscheduled and its time slice is reduced to enforce fairness.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
2024-03-21 10:32:09 +09:00
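
One plausible shape of the compensation (hypothetical; the actual scaling in scx_lavd may differ):

    /*
     * If a task consumed its full slice slice_boost_prio times in a row,
     * its measured per-schedule runtime underestimates how long it would
     * have run with an unbounded slice; scale it up accordingly.
     */
    static u64 calc_boosted_runtime(const struct task_ctx *taskc)
    {
            return taskc->run_time_ns * (taskc->slice_boost_prio + 1);
    }
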