scx/scheds/rust/scx_bpfland
Andrea Righi 7cc18460b9 scx_bpfland: always rely on prev_cpu with single-CPU tasks
When selecting an idle CPU for tasks that can only run on a single CPU,
always check whether the previously used CPU is still usable, instead of
trying to figure out the single allowed CPU by looking at the task's
cpumask.
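
In sched_ext-style BPF C the idea looks roughly like this; a hedged sketch, with an illustrative function name and return convention rather than the scheduler's actual code:

  #include <scx/common.bpf.h>

  /*
   * Sketch: for a task pinned to a single CPU, validate and reuse
   * prev_cpu instead of scanning the cpumask for the one allowed CPU.
   */
  static s32 pick_single_cpu(const struct task_struct *p, s32 prev_cpu)
  {
      if (p->nr_cpus_allowed != 1)
          return -EINVAL;

      /* prev_cpu is only usable if it is still allowed and currently idle */
      if (bpf_cpumask_test_cpu(prev_cpu, p->cpus_ptr) &&
          scx_bpf_test_and_clear_cpu_idle(prev_cpu))
          return prev_cpu;

      return -EBUSY;
  }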

Apparently, single-CPU tasks can report a prev_cpu that is not in the
allowed cpumask when they rapidly change affinity.

This could lead to stalls, because we may end up dispatching the kthread
to a per-CPU DSQ that is not compatible with its allowed cpumask.

Example:

kworker/u32:2[173797] triggered exit kind 1026:
  runnable task stall (kworker/2:1[70] failed to run for 7.552s)
...
  R kworker/2:1[70] -7552ms
      scx_state/flags=3/0x9 dsq_flags=0x1 ops_state/qseq=0/0
      sticky/holding_cpu=-1/-1 dsq_id=0x8 dsq_vtime=234483011369
      cpus=04

In this case kworker/2:1 can only run on CPU #2 (cpus=0x4), but it's
dispatched to dsq_id=0x8, which can only be consumed by CPU #8 => stall.

To prevent this, do not try to figure out the best idle CPU for tasks
that are changing affinity; just dispatch them to a global DSQ
(either priority or regular, depending on their interactive state).
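
A hedged sketch of this fallback; the DSQ ids, the interactive flag and the function name below are illustrative placeholders for the scheduler's own shared and priority DSQs:

  #include <scx/common.bpf.h>

  /*
   * Sketch: queue a task whose affinity is changing on a DSQ that any
   * allowed CPU can consume, instead of guessing a per-CPU DSQ.
   */
  static void queue_to_shared_dsq(struct task_struct *p, bool interactive,
                                  u64 prio_dsq_id, u64 shared_dsq_id, u64 slice)
  {
      u64 dsq_id = interactive ? prio_dsq_id : shared_dsq_id;

      scx_bpf_dispatch(p, dsq_id, slice, 0);
  }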

Moreover, introduce an explicit error check in dispatch_direct_cpu() to
improve detection of similar issues in the future, and drop
lookup_task_ctx() in favor of try_lookup_task_ctx(), since we can now
safely handle all the cases where the task context is not found.
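
A sketch of what such a guard could look like; only dispatch_direct_cpu() is named by the change itself, and the function body and parameters below are illustrative:

  #include <scx/common.bpf.h>

  /*
   * Sketch: refuse to dispatch a task to a per-CPU DSQ whose target CPU
   * is not in the task's allowed cpumask, and report the anomaly instead
   * of silently queuing a task that can never be consumed.
   */
  static int dispatch_direct_cpu_checked(struct task_struct *p, s32 cpu,
                                         u64 dsq_id, u64 slice)
  {
      if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
          scx_bpf_error("task %d dispatched to unallowed CPU %d",
                        p->pid, cpu);
          return -EINVAL;
      }

      scx_bpf_dispatch(p, dsq_id, slice, 0);
      return 0;
  }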

Signed-off-by: Andrea Righi <andrea.righi@linux.dev>
2024-08-30 09:45:58 +02:00
src scx_bpfland: always rely on prev_cpu with single-CPU tasks 2024-08-30 09:45:58 +02:00
build.rs scx_bpfland: update copyright info 2024-08-14 16:17:54 +02:00
Cargo.toml scx_bpfland: Convert to scx_stats 2024-08-24 23:14:55 -10:00
LICENSE scheds: introduce scx_bpfland 2024-06-27 17:28:42 +02:00
meson.build build: Use workspace to group rust sub-projects 2024-08-25 00:47:58 -10:00
README.md scheds: introduce scx_bpfland 2024-06-27 17:28:42 +02:00
rustfmt.toml scheds: introduce scx_bpfland 2024-06-27 17:28:42 +02:00

scx_bpfland

This is a single user-defined scheduler used within sched_ext, which is a Linux kernel feature that enables implementing kernel thread schedulers in BPF and dynamically loading them. Read more about sched_ext.

Overview

scx_bpfland: a vruntime-based sched_ext scheduler that prioritizes interactive workloads.

This scheduler is derived from scx_rustland, but it is fully implemented in BPF, with a minimal user-space Rust part that processes command line options, collects metrics, and logs scheduling statistics. The BPF part makes all the scheduling decisions.

Tasks are categorized as either interactive or regular based on their average rate of voluntary context switches per second. Tasks that exceed a specific voluntary context switch threshold are classified as interactive. Interactive tasks are prioritized in a higher-priority queue, while regular tasks are placed in a lower-priority queue. Within each queue, tasks are sorted based on their weighted runtime: tasks that have a higher weight (priority) or use the CPU for less time (smaller runtime) are scheduled sooner, due to their higher position in the queue.
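
As a rough illustration of these two rules, here is a hedged sketch in plain C; the threshold, the averaging window, and the weight scale (100 as the default weight) are simplifications, and the function names are made up for illustration:

  #include <stdbool.h>
  #include <stdint.h>

  /* A task is interactive once it exceeds a voluntary context switch rate. */
  static bool is_interactive(uint64_t nvcsw_delta, uint64_t interval_ns,
                             uint64_t nvcsw_thresh_per_sec)
  {
      if (!interval_ns)
          return false;
      return nvcsw_delta * 1000000000ULL / interval_ns >= nvcsw_thresh_per_sec;
  }

  /* Weighted runtime charge: a higher weight makes the task's runtime
   * count less, so the task is positioned earlier in its queue. */
  static uint64_t weighted_runtime(uint64_t runtime_ns, uint64_t weight)
  {
      return runtime_ns * 100 / (weight ? weight : 100);
  }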

Moreover, each task gets a time slice budget. When a task is dispatched, it receives a time slice equivalent to the remaining unused portion of its previously allocated time slice (with a minimum threshold applied). This gives latency-sensitive workloads more chances to exceed their time slice when needed to perform short bursts of CPU activity without being interrupted (e.g., real-time audio encoding / decoding workloads).
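
A hedged sketch of the slice refill described above; slice_min stands in for the scheduler's configurable minimum threshold:

  #include <stdint.h>

  /* Re-dispatch a task with the unused part of its previous slice,
   * never dropping below the minimum budget. */
  static uint64_t refill_slice(uint64_t slice_remaining, uint64_t slice_min)
  {
      return slice_remaining > slice_min ? slice_remaining : slice_min;
  }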

Typical Use Case

Interactive workloads, such as gaming, live streaming, multimedia, real-time audio encoding/decoding, especially when these workloads are running alongside CPU-intensive background tasks.

In this scenario scx_bpfland ensures that interactive workloads maintain a high level of responsiveness.

Production Ready?

The scheduler is based on scx_rustland, implementing nearly the same scheduling algorithm with minor changes and optimizations that allow it to run entirely in BPF.

Given that the scx_rustland scheduling algorithm has been extensively tested, this scheduler can be considered ready for production use.