scx_lavd: Call update_stat_for_enq() from lavd_runnable()

LAVD_TASK_STAT_ENQ tracks a subset of runnable task state transitions - the
ones which end up calling ops.enqueue(). That misses runnable transitions
which skip ops.enqueue(), e.g., when ops.select_cpu() directly dispatches the
task to a local DSQ. What it is actually trying to track is a task becoming
runnable so that its load can be added to the CPU's load sum.
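
The gating contract is roughly the following; a minimal stand-in sketch, not
the actual scx_lavd code - only transit_task_stat(), update_stat_for_enq(),
and LAVD_TASK_STAT_ENQ appear in the patch, while the placeholder state and
the transition rule here are made up:

    #include <stdbool.h>

    /* LAVD_TASK_STAT_OTHER is a made-up placeholder state */
    enum { LAVD_TASK_STAT_OTHER, LAVD_TASK_STAT_ENQ };

    struct task_ctx { int stat; };

    /*
     * Models the "flip the state and report whether the transition was
     * legal" contract that gates update_stat_for_enq(). The real
     * transition rules live in scx_lavd.
     */
    static bool transit_task_stat(struct task_ctx *taskc, int tgt_stat)
    {
            bool valid = taskc->stat != tgt_stat;   /* stand-in rule */

            if (valid)
                    taskc->stat = tgt_stat;
            return valid;
    }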

Move the LAVD_TASK_STAT_ENQ state transition and the update_stat_for_enq()
invocation to ops.runnable(), which is called for all runnable transitions.

Note that when all the methods are invoked, the invocation order is
ops.select_cpu(), then ops.runnable(), and then ops.enqueue(). So, this
change moves the update_stat_for_enq() invocation ahead of calc_when_to_run()
in the put_global_rq() path. update_stat_for_enq() updates
taskc->load_actual, which is consumed by calc_greedy_ratio() and thus affects
calc_when_to_run().
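
Sketched concretely below; the function names and the taskc->load_actual
dependency come from this description, while the signatures are simplified
and the bodies are stand-ins:

    #include <stdint.h>

    struct task_ctx { uint64_t load_actual; };

    /*
     * ops.runnable() path: fold the latest running period into the task's
     * load estimate (stand-in body).
     */
    static void update_stat_for_enq(struct task_ctx *taskc)
    {
            taskc->load_actual += 1;
    }

    /*
     * ops.enqueue() -> put_global_rq() path: the virtual deadline is a
     * function of the greedy ratio, which is derived from load_actual.
     */
    static uint64_t calc_greedy_ratio(const struct task_ctx *taskc)
    {
            return taskc->load_actual;      /* stand-in derivation */
    }

    static uint64_t calc_when_to_run(const struct task_ctx *taskc)
    {
            return calc_greedy_ratio(taskc);
    }

Because ops.runnable() runs first, update_stat_for_enq() has already
refreshed load_actual by the time put_global_rq() calls calc_when_to_run().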

Before this patch, calc_greedy_ratio() would use a load_actual that doesn't
yet reflect the last running period. After this patch, the latest running
period is already reflected by the time the task gets queued to the global
queue.

The difference is unlikely to matter, but it'd probably make sense to make
this more consistent (e.g., do it at the end of the quiescent transition).

After this change, transit_task_stat() doesn't detect any invalid
transitions.
commit d7ec05e017 (parent 625bb84bc4)
Author: Tejun Heo
Date:   2024-03-26 12:23:19 -10:00


@@ -1395,14 +1395,6 @@ static bool put_local_rq(struct task_struct *p, struct task_ctx *taskc,
 	if (!is_eligible(taskc))
 		return false;
 
-	/*
-	 * Add task load based on the current statistics regardless of a target
-	 * rq. Statistics will be adjusted when more accurate statistics
-	 * become available (ops.running).
-	 */
-	if (transit_task_stat(taskc, LAVD_TASK_STAT_ENQ))
-		update_stat_for_enq(p, taskc, cpuc);
-
 	/*
 	 * This task should be scheduled as soon as possible (e.g., wakened up)
 	 * so the deadline is no use and enqueued into a local DSQ, which
@@ -1432,12 +1424,6 @@ static bool put_global_rq(struct task_struct *p, struct task_ctx *taskc,
 	 */
 	calc_when_to_run(p, taskc, cpuc, enq_flags);
 
-	/*
-	 * Reflect task's load immediately.
-	 */
-	if (transit_task_stat(taskc, LAVD_TASK_STAT_ENQ))
-		update_stat_for_enq(p, taskc, cpuc);
-
 	/*
 	 * Enqueue the task to the global runqueue based on its virtual
	 * deadline.
@@ -1527,10 +1513,24 @@ void BPF_STRUCT_OPS(lavd_dispatch, s32 cpu, struct task_struct *prev)
 void BPF_STRUCT_OPS(lavd_runnable, struct task_struct *p, u64 enq_flags)
 {
 	struct cpu_ctx *cpuc;
 	struct task_struct *waker;
-	struct task_ctx *taskc;
+	struct task_ctx *p_taskc, *waker_taskc;
 	u64 now, interval;
 
+	cpuc = get_cpu_ctx();
+	p_taskc = get_task_ctx(p);
+	if (!cpuc || !p_taskc)
+		return;
+
+	/*
+	 * Add task load based on the current statistics regardless of a target
+	 * rq. Statistics will be adjusted when more accurate statistics become
+	 * available (ops.running).
+	 */
+	if (transit_task_stat(p_taskc, LAVD_TASK_STAT_ENQ))
+		update_stat_for_enq(p, p_taskc, cpuc);
+
 	/*
 	 * When a task @p is wakened up, the wake frequency of its waker task
 	 * is updated. The @current task is a waker and @p is a waiter, which
@@ -1540,8 +1540,8 @@ void BPF_STRUCT_OPS(lavd_runnable, struct task_struct *p, u64 enq_flags)
 		return;
 
 	waker = bpf_get_current_task_btf();
-	taskc = try_get_task_ctx(waker);
-	if (!taskc) {
+	waker_taskc = try_get_task_ctx(waker);
+	if (!waker_taskc) {
 		/*
 		 * In this case, the waker could be an idle task
 		 * (swapper/_[_]), so we just ignore.
@@ -1550,9 +1550,9 @@ void BPF_STRUCT_OPS(lavd_runnable, struct task_struct *p, u64 enq_flags)
 	}
 
 	now = bpf_ktime_get_ns();
-	interval = now - taskc->last_wake_clk;
-	taskc->wake_freq = calc_avg_freq(taskc->wake_freq, interval);
-	taskc->last_wake_clk = now;
+	interval = now - waker_taskc->last_wake_clk;
+	waker_taskc->wake_freq = calc_avg_freq(waker_taskc->wake_freq, interval);
+	waker_taskc->last_wake_clk = now;
 }
 
 void BPF_STRUCT_OPS(lavd_running, struct task_struct *p)