Commit Graph

209 Commits

Author SHA1 Message Date
Tejun Heo
7e578bb034 sync-to-kernel.sh: Sync scx_central and scx_flatcg
They got added back as example schedulers in the kernel tree.
2024-02-23 14:21:03 -10:00
Tejun Heo
b997b4ad17 common: Cosmetic change for consistency 2024-02-23 10:55:00 -10:00
David Vernet
70960ab162
Merge pull request #155 from sched-ext/rustland_refactor
scx_rustland: cpu topology refactoring
2024-02-23 14:27:48 -06:00
Tejun Heo
c789b4dd66 scheds/sync-to-kernel.sh: Warn and skip if destination file is missing instead of failing
We aren't gonna sync all headers to the kernel tree.
2024-02-23 09:28:41 -10:00
David Vernet
87eab38506
rustland: Update rustland to use topology.rs
The new topology crate allows us to replace the custom rustland topology
logic with the implementation provided by the crate itself.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-23 13:09:06 -06:00
David Vernet
43624a87ce
rusty: Use new topology crate
Now that we have this new Topology crate, let's update Rusty to use it
instead of using the old one.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-23 10:39:55 -06:00
Tejun Heo
4dc77f8ddf
Merge pull request #149 from davemarchevsky/davemarchevsky_nice_equals
scx_layered: Add MATCH_NICE_EQUALS match kind
2024-02-22 06:38:17 -10:00
Dave Marchevsky
9f510f18cd scx_layered: Add MATCH_NICE_EQUALS match kind
I have a use case where specific nice values are used to bucket tasks
into groups that are handled separately by different `scx_layered`
policies, with no implications of relative priority between niceness X,
X + 1, X - 1, etc. In other words, nice values are used as simple tags in
this scenario.

If we wanted to treat a specific niceness this way, e.g. `11`, we could
do so with AND'd MATCH_NICE_{ABOVE,BELOW} like so:

```json
  "matches" : [
    [
      {
        "NiceAbove": 10
      },
      {
        "NiceBelow": 12
      },
    ],
  ],
```

But this is unnecessarily verbose and doesn't communicate the intent of
the match very well. Adding a `NiceEquals` matcher simplifies the
config and makes intent obvious:

```json
  "matches" : [
    [
      {
        "NiceEquals": 11
      },
    ],
  ],
```

This PR adds support for such a matcher.

Also, rename `layer_match.nice_above_or_below` to just
`layer_match.nice`, as the former doesn't describe the newly-added
matcher's use of the field. It's still obvious that `layer_match.nice`
is relevant to MATCH_NICE_{ABOVE, BELOW, EQUALS}.
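
For illustration, a minimal sketch of how these match kinds could look as a
serde-deserialized Rust enum (only the NiceAbove/NiceBelow/NiceEquals variants
are taken from the JSON above; everything else is hypothetical and not the
actual scx_layered code):

```rust
use serde::Deserialize;

// Hypothetical config enum; the JSON keys above map onto variant names.
#[derive(Debug, Deserialize)]
enum LayerMatch {
    CgroupPrefix(String),
    NiceAbove(i32),
    NiceBelow(i32),
    NiceEquals(i32),
}

// Illustrative check of a task's nice value against one matcher.
fn nice_matches(m: &LayerMatch, nice: i32) -> bool {
    match m {
        LayerMatch::NiceAbove(v) => nice > *v,
        LayerMatch::NiceBelow(v) => nice < *v,
        LayerMatch::NiceEquals(v) => nice == *v,
        _ => false,
    }
}
```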

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
2024-02-22 04:08:07 -08:00
David Vernet
615b594e1c
layered: Don't refresh cpumasks before attaching
As mentioned in the previous commit, for some reason we're sometimes
(non-deterministically) not seeing the updated cpumask / layer values in
BPF if we initialize the cpumasks here before attaching. Let's undo this
for now to avoid observing buggy behavior, until we figure it out.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-21 19:19:45 -06:00
David Vernet
68d317079a
Revert "layered: Set layered cpumask in scheduler init call"
This reverts commit 56ff3437a2.

For some reason we seem to be non-deterministically failing to see the
updated layer values in BPF if we initialize before attaching. Let's
just undo this specific part so that we're unblocked, and we can figure
it out asynchronously.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-21 19:17:19 -06:00
David Vernet
31df8fbd09
layered: Consume from layer with cpumask in layered_dispatch
Currently, in layered_dispatch, we do the following:

1. Iterate over all preempt=true layers, and first try to consume from
   them.

2. Iterate over all confined layers, and consume from them if we find a
   layer with a cpumask that contains the consuming CPU.

3. Iterate over all grouped and open layers and consume from them in
   ordered sequence.

In (2), we're only iterating over confined layers, but we should also be
iterating over grouped layers. Otherwise, despite a consuming CPU being
allocated to a specific grouped layer, the CPU will consume from
whichever grouped or open layer has a task that's ready to run.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-21 15:38:23 -06:00
David Vernet
56ff3437a2
layered: Set layered cpumask in scheduler init call
In layered_init, we're currently setting all bits in every layer's
cpumask, and then asynchronously updating the cpumasks at a later time to
reflect their actual values at runtime. Now that we're updating the
layered code to initialize the cpumasks before we attach the scheduler,
we can instead have the init path actually refresh and initialize the
cpumasks directly.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-21 15:38:23 -06:00
David Vernet
1f834e7f94
layered: Initialize layers before attaching scheduler
We currently have a bug in layered wherein we could fail to propagate
layer updates from user space to kernel space if a layer is never
adjusted after it's first initialized. For example, in the following
configuration:

```json
[
	{
		"name": "workload.slice",
		"comment": "main workload slice",
		"matches": [
			[
				{
					"CgroupPrefix": "workload.slice/"
				}
			]
		],
		"kind": {
			"Grouped": {
				"cpus_range": [30, 30],
				"util_range": [
					0.0,
					1.0
				],
				"preempt": false
			}
		}
	},
	{
		"name": "normal",
		"comment": "the rest",
		"matches": [
			[]
		],
		"kind": {
			"Grouped": {
				"cpus_range": [2, 2],
				"util_range": [
					0.0,
					1.0
				],
				"preempt": false
			}
		}
	}
]
```

Both layers are static, and need only be resized a single time, so the
configuration would never be propagated to the kernel due to us never
calling update_bpf_layer_cpumask(). Let's instead have the
initialization propagate changes to the skeleton before we attach the
scheduler.

This has the advantage both of fixing the bug mentioned above, where a
static configuration is never propagated to the kernel, and of removing
the short period right after the scheduler is first attached during which
we make suboptimal scheduling decisions because the layer resizing has
not yet been propagated.
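
A rough sketch of the resulting ordering (stub types stand in for the real
skeleton and layer objects; only update_bpf_layer_cpumask() is named in this
message, and its signature here is assumed):

```rust
// Stand-ins for the libbpf-rs skeleton and the per-layer state.
struct Skel;
struct Layer {
    cpus: Vec<usize>, // the layer's computed cpumask
}

impl Skel {
    fn attach(&mut self) -> Result<(), String> {
        Ok(())
    }
}

// Stand-in for update_bpf_layer_cpumask(): copy a layer's cpumask into the
// (hypothetical) BPF-side per-layer state.
fn update_bpf_layer_cpumask(_skel: &mut Skel, _idx: usize, _layer: &Layer) {}

fn init_and_attach(skel: &mut Skel, layers: &[Layer]) -> Result<(), String> {
    // Propagate every layer's cpumask to the kernel first...
    for (idx, layer) in layers.iter().enumerate() {
        update_bpf_layer_cpumask(skel, idx, layer);
    }
    // ...and only then attach the scheduler, so even layers that are never
    // resized again start out with the correct cpumask.
    skel.attach()
}
```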

Signed-off-by: David Vernet <void@manifault.com>
2024-02-21 15:38:21 -06:00
Tejun Heo
22d635c385
Merge pull request #141 from jordalgo/rusty-logging
Add libbpf logging to rust schedulers
2024-02-20 13:52:39 -10:00
Andrea Righi
80de48ec83 scx_rustland: introduce --builtin-idle
Add a command line option to enable/disable the sched-ext built-in idle
selection logic in the user-space scheduler.

With this option the user-space scheduler will try to dispatch tasks on
the CPU selected during the .select_cpu() phase (using the built-in idle
selection logic).

Without this option the user-space scheduler will try to dispatch tasks
to the first CPU available.

The former can be useful to improve throughput, since tasks are more
likely to stick on the same CPU, while the latter can provide better
system responsiveness, especially when the system is significantly busy.

Given that, by default, tasks can be dispatched directly bypassing the
user-space scheduler if an idle CPU is found during .select_cpu(), the
user-space scheduler is primarily engaged only when the system is busy
(no idle CPUs are available). Under these circumstances, it is typically
more efficient to dispatch tasks on the first available CPU. Hence, the
default behavior is to ignore built-in idle selection logic in the
user-space scheduler.
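
As an illustration, the flag could be wired up with clap's derive API roughly
as below (the surrounding struct is an assumption, not scx_rustland's actual
options structure):

```rust
use clap::Parser;

/// Illustrative subset of scx_rustland's command line options.
#[derive(Debug, Parser)]
struct Opts {
    /// Use the sched-ext built-in idle selection logic in the user-space
    /// scheduler: dispatch tasks on the CPU picked in .select_cpu().
    /// Off by default: dispatch to the first available CPU instead.
    #[clap(long)]
    builtin_idle: bool,
}

fn main() {
    let opts = Opts::parse();
    println!("builtin idle selection: {}", opts.builtin_idle);
}
```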

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-21 00:25:14 +01:00
Andrea Righi
e487d71032 scx_rustland: simplify CPU selection by relying on built-in idle selection
Checking if a CPU is idle or busy in the user-space scheduler is a bit
redundant, considering that we also rely on the built-in idle selection
logic in the BPF part.

Therefore get rid of the additional idle selection logic in the
user-space scheduler and rely on the built-in idle selection.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-21 00:25:14 +01:00
Andrea Righi
2cd1d4b684 scx_rustland: introduce --full-user
Introduce an option to send all scheduling events and actions to
user-space, disabling any form of in-kernel optimization.

Enabling this option will likely make the system less responsive (but
more predictable in terms of performance) and it can be useful for
debugging purposes.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-21 00:25:14 +01:00
Jordan Rome
7c32acece0 Add libbpf logging to the rust schedulers
This is to get better logs when failing to load, attach, etc.
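
A minimal sketch of what this typically looks like with libbpf-rs (assuming
libbpf_rs::set_print and PrintLevel; the filtering policy is illustrative):

```rust
use libbpf_rs::{set_print, PrintLevel};

// Forward libbpf's internal messages so load/attach failures produce
// useful diagnostics instead of failing silently.
fn print_libbpf(level: PrintLevel, msg: String) {
    match level {
        PrintLevel::Debug => (), // usually too noisy
        _ => eprint!("{}", msg),
    }
}

fn main() {
    // Register the callback before opening/loading the scheduler skeleton.
    set_print(Some((PrintLevel::Debug, print_libbpf)));
    // ... open, load and attach the skeleton here ...
}
```
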
2024-02-20 15:17:10 -08:00
David Vernet
ef8aa9ea31
add documentation
Signed-off-by: David Vernet <void@manifault.com>
2024-02-20 14:57:09 -06:00
David Vernet
8aba090d4f
rust: Add topology module to utils crate
scx_rusty has logic in the scheduler to inspect the host to
automatically build scheduling domains across every L3 cache. This would
be generically useful for many different types of schedulers, so let's
add it to the scx_utils crate so it can be used by others.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-20 14:57:09 -06:00
Andrea Righi
7ff06a6ff0 scx_rustland: prevent misaligned pointer dereference
The buffer used to store struct queued_task_ctx items fetched from the
BPF ring buffer needs to be aligned to the architecture register size,
otherwise we may hit misaligned pointer dereference issues, such as:

  thread 'main' panicked at src/bpf.rs:162:43:
  misaligned pointer dereference: address must be a multiple of 0x8 but is 0x56516a51e004
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Prevent this by making sure the buffer is always aligned to 64-bits.
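
A minimal sketch of one way to enforce this alignment in Rust (the actual fix
in scx_rustland may differ):

```rust
const BUFSIZE: usize = 4096;

// Wrapping the byte buffer in a #[repr(align(8))] type guarantees its start
// address is a multiple of 8, so the received bytes can be safely
// reinterpreted as a queued_task_ctx on 64-bit architectures.
#[repr(align(8))]
struct AlignedBuffer([u8; BUFSIZE]);

fn main() {
    let buf = AlignedBuffer([0; BUFSIZE]);
    assert_eq!(buf.0.as_ptr() as usize % 8, 0);
}
```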

Fixes: 93dc615 ("scx_rustland: use a ring buffer for queued tasks")
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-20 19:08:38 +01:00
Andrea Righi
93dc615653 scx_rustland: use a ring buffer for queued tasks
Switch from a BPF_MAP_TYPE_QUEUE to a BPF_MAP_TYPE_RINGBUF to store the
tasks that need to be processed by the user-space scheduler.

A ring buffer allows us to save a lot of memory copies and syscalls,
since the memory is directly shared between the BPF and the user-space
components.

Performance profile before this change:

  2.44%  [kernel]  [k] __memset
  2.19%  [kernel]  [k] __sys_bpf
  1.59%  [kernel]  [k] __kmem_cache_alloc_node
  1.00%  [kernel]  [k] _copy_from_user

After this change:

  1.42%  [kernel]  [k] __memset
  0.14%  [kernel]  [k] __sys_bpf
  0.10%  [kernel]  [k] __kmem_cache_alloc_node
  0.07%  [kernel]  [k] _copy_from_user

The overhead of both sys_bpf() and copy_from_user() is now reduced by a
factor of ~15x (only the dispatch path uses sys_bpf() now).

NOTE: despite being very effective, the current implementation is a bit
of a hack. This is because the present ring buffer API exclusively
permits consumption in a greedy manner, where multiple items can be
consumed simultaneously. However, libbpf-rs does not provide precise
information regarding the exact number of items consumed. By utilizing a
more refined libbpf-rs API [1] we may be able to improve this code a
bit.

Moreover, libbpf-rs doesn't provide an API for the user_ring_buffer, so
at the moment there's not a trivial way to apply the same change to the
dispatched tasks.

However, just with this change applied, the overhead of sys_bpf() and
copy_from_user() is already minimal, so we won't get much benefit by
changing the dispatch path to use a BPF ring buffer.

[1] https://github.com/libbpf/libbpf-rs/pull/680
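
For illustration, a rough sketch of the greedy consumer side using libbpf-rs's
ring buffer API (the map parameter, error handling, and the copy-out step are
assumptions; the real code parses each item as a queued_task_ctx):

```rust
use std::time::Duration;

use libbpf_rs::{Map, RingBufferBuilder};

// Drain whatever is currently available from the BPF ring buffer without
// blocking, copying each raw item out for later parsing.
fn drain_queued(queued_map: &Map) -> Result<Vec<Vec<u8>>, libbpf_rs::Error> {
    let mut pending: Vec<Vec<u8>> = Vec::new();
    {
        let mut builder = RingBufferBuilder::new();
        builder.add(queued_map, |data: &[u8]| {
            // The callback hands us bytes straight from memory shared with
            // the BPF side (this is why the buffer alignment matters).
            pending.push(data.to_vec());
            0 // returning non-zero would stop consumption
        })?;
        let ringbuf = builder.build()?;
        // Greedy, non-blocking poll: consume everything that is pending.
        ringbuf.poll(Duration::from_millis(0))?;
    }
    Ok(pending)
}
```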

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-20 12:30:22 +01:00
Andrea Righi
04685e633f scx_rustland: avoid memory copies while accessing cpu_map
Instead of using a BPF_MAP_TYPE_ARRAY to store which tasks are running
on which CPU we can simply use a global array, mapped in the user-space
address space.

In this way we can avoid a lot of memory copies and calls to sys_bpf(),
significantly reducing the scheduler's overhead.

Keep in mind that we don't need to be 100% correct while accessing this
information, so we can accept some fuzziness in order to significantly
reduce the scheduler's overhead.

Performance profile before this change:

   5.52%  [kernel]  [k] __sys_bpf
   4.84%  [kernel]  [k] __kmem_cache_alloc_node
   4.71%  [kernel]  [k] map_lookup_elem
   4.10%  [kernel]  [k] _copy_from_user
   3.51%  [kernel]  [k] bpf_map_copy_value
   3.12%  [kernel]  [k] check_heap_object

After this change:

   2.20%  [kernel]  [k] __sys_bpf
   1.91%  [kernel]  [k] map_lookup_and_delete_elem
   1.60%  [kernel]  [k] __kmem_cache_alloc_node
   1.10%  [kernel]  [k] _copy_from_user
   0.12%  [kernel]  [k] check_heap_object
                    n/a bpf_map_copy_value
                    n/a map_lookup_elem

With this change we can reduce the overhead of sys_bpf() by ~2x and
the overhead of copy_from_user() by ~4x.
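
Conceptually, the change boils down to replacing a map-lookup syscall with a
plain read of shared memory; a stand-in sketch (Bss here is a hypothetical
stand-in for the skeleton's mapped globals, not the real API):

```rust
// Stand-in for the BPF globals that libbpf maps into the process address
// space when the skeleton is loaded.
struct Bss {
    cpu_map: [u32; 1024], // which task (pid) is running on each CPU
}

fn task_on_cpu(bss: &Bss, cpu: usize) -> u32 {
    // A plain memory read: no bpf(BPF_MAP_LOOKUP_ELEM) syscall, no copy.
    // The value may be slightly stale ("fuzzy"), which is acceptable here.
    bss.cpu_map[cpu]
}
```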

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-20 12:30:16 +01:00
Andrea Righi
fc889c6995 scx_rustland: replace custom allocator with buddy-alloc
Currently, the primary bottleneck in scx_rustland lies within its custom
memory allocator, which is used to prevent page faults in the user-space
scheduler.

This is pretty evident looking at perf top:

  39.95%  scx_rustland             [.] <scx_rustland::bpf::alloc::RustLandAllocator as core::alloc::global::GlobalAlloc>::alloc
   3.41%  [kernel]                 [k] _copy_from_user
   3.20%  [kernel]                 [k] __kmem_cache_alloc_node
   2.59%  [kernel]                 [k] __sys_bpf
   2.30%  [kernel]                 [k] __kmem_cache_free
   1.48%  libc.so.6                [.] syscall
   1.45%  [kernel]                 [k] __virt_addr_valid
   1.42%  scx_rustland             [.] <scx_rustland::bpf::alloc::RustLandAllocator as core::alloc::global::GlobalAlloc>::dealloc
   1.31%  [kernel]                 [k] _copy_to_user
   1.23%  [kernel]                 [k] entry_SYSRETQ_unsafe_stack

However, there's no need to reinvent the wheel here: rather than relying
on an overly simplistic and inefficient allocator, we can rely on
buddy-alloc [1], which is also capable of operating on a preallocated
memory buffer.

After switching to buddy-alloc, the performance profile under the same
workload conditions looks like the following:

   6.01%  [kernel]                 [k] _copy_from_user
   5.21%  [kernel]                 [k] __kmem_cache_alloc_node
   4.45%  [kernel]                 [k] __sys_bpf
   3.80%  [kernel]                 [k] __kmem_cache_free
   2.79%  libc.so.6                [.] syscall
   2.34%  [kernel]                 [k] __virt_addr_valid
   2.26%  [kernel]                 [k] _copy_to_user
   2.14%  [kernel]                 [k] __check_heap_object
   2.10%  [kernel]                 [k] __check_object_size.part.0
   2.02%  [kernel]                 [k] entry_SYSRETQ_unsafe_stack

With this change in place, the primary overhead is now moved to the
bpf() syscall and the copies between kernel and user-space (this could
potentially be optimized in the future using BPF ring buffers, instead
of BPF FIFO queues).

A closer look at the allocator overhead before vs. after this change:

 [before]
 39.95%  scx_rustland  [.] core::alloc::global::GlobalAlloc>::alloc
  1.42%  scx_rustland  [.] core::alloc::global::GlobalAlloc>::dealloc

 [after]
  1.50%  scx_rustland  [.] core::alloc::global::GlobalAlloc>::alloc
  0.76%  scx_rustland  [.] core::alloc::global::GlobalAlloc>::dealloc

[1] https://crates.io/crates/buddy-alloc
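
For reference, a minimal sketch of plugging buddy-alloc in as the global
allocator over preallocated buffers, loosely following the crate's documented
usage (the sizes are illustrative, not scx_rustland's actual values):

```rust
use buddy_alloc::{BuddyAllocParam, FastAllocParam, NonThreadsafeAlloc};

const FAST_HEAP_SIZE: usize = 32 * 1024; // small fast-path heap
const HEAP_SIZE: usize = 1024 * 1024;    // main buddy heap
const LEAF_SIZE: usize = 16;             // smallest buddy block size

// Preallocated, statically reserved backing buffers: no page faults at
// allocation time.
static mut FAST_HEAP: [u8; FAST_HEAP_SIZE] = [0u8; FAST_HEAP_SIZE];
static mut HEAP: [u8; HEAP_SIZE] = [0u8; HEAP_SIZE];

#[global_allocator]
static ALLOC: NonThreadsafeAlloc = unsafe {
    let fast_param = FastAllocParam::new(FAST_HEAP.as_ptr(), FAST_HEAP_SIZE);
    let buddy_param = BuddyAllocParam::new(HEAP.as_ptr(), HEAP_SIZE, LEAF_SIZE);
    NonThreadsafeAlloc::new(fast_param, buddy_param)
};

fn main() {
    // All heap allocations now go through buddy-alloc.
    let v: Vec<u32> = (0..16).collect();
    println!("{:?}", v);
}
```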

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-11 14:33:39 +01:00
Andrea Righi
ccf5946425 scx_rustland: speed up search by PID in tasks BTreeSet
In order to prevent duplicate PIDs in the TaskTree (BTreeSet), we
perform an O(N) search each time we add an item, to verify whether the
PID already exists or not.

Under heavy stress test conditions the O(N) complexity can have a
potential impact on the overall performance.

To mitigate this, introduce a HashMap that can be used to retrieve tasks
by PID, typically with O(1) complexity. This could potentially degrade
to O(N) in the presence of hash collisions, but even in this case, accessing
the hash map is still more efficient than scanning all the entries in
the BTreeSet to search for the target PID.
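
A simplified sketch of the resulting data structure (names and fields are
illustrative, not the exact scx_rustland definitions):

```rust
use std::collections::{BTreeMap, BTreeSet, HashMap};

// Tasks are ordered by (vruntime, pid) in the BTreeSet; the HashMap keyed by
// PID lets us detect duplicates in O(1) instead of scanning the whole set.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Task {
    vruntime: u64,
    pid: i32,
}

#[derive(Default)]
struct TaskTree {
    tasks: BTreeSet<Task>,        // ordered by vruntime, then pid
    task_map: HashMap<i32, Task>, // pid -> entry currently in the set
}

impl TaskTree {
    fn push(&mut self, task: Task) {
        // If the PID is already queued, drop the stale entry first so a
        // re-enqueued task is never dispatched twice.
        if let Some(old) = self.task_map.insert(task.pid, task) {
            self.tasks.remove(&old);
        }
        self.tasks.insert(task);
    }

    fn pop(&mut self) -> Option<Task> {
        let task = self.tasks.pop_first()?;
        self.task_map.remove(&task.pid);
        Some(task)
    }
}
```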

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-11 14:11:38 +01:00
Andrea Righi
7ce0d038e4
Merge pull request #133 from sched-ext/rustland-cpumask-gen-cnt
scx_rustland: per-task cpumask generation counter
2024-02-10 19:07:02 +01:00
Andrea Righi
61d1ed338a scx_rustland: per-task cpumask generation counter
Introduce a per-task generation counter to check the validity of the
cpumask at dispatch time.

The logic is the following:

 - the cpumask generation number is incremented every time a task
   calls .set_cpumask()

 - when a task is enqueued the current generation number is stored in
   the queued_task_ctx and relayed to the user-space scheduler

 - the user-space scheduler can decide to dispatch the task on the CPU
   determined by the BPF layer in .select_cpu(), redirect the task to
   any other specific CPU, or redirect to the first CPU available (using
   NO_CPU)

 - task is then dispatched back to the BPF code along with its cpumask
   generation counter

 - at dispatch time the BPF code checks if the generation number is the
   same and it discards the dispatch attempt if the cpumask is not valid
   anymore (the task will be automatically re-enqueued by the sched-ext
   core code, potentially selecting another CPU / cpumask)

 - if the cpumask is valid, but the CPU selected by the user-space
   scheduler is invalid (according to the cpumask), the task will be
   transparently bounced by the BPF code to the shared DSQ (in this way
   the user-space code can be completely abstracted and dispatches that
   target invalid CPUs can be automatically fixed by the BPF layer)

This solution can prevent stalls due to dispatches targeting invalid
CPUs and it can also avoid redundant dispatch events, making the code
more efficient and the cpumask interlocking more reliable.
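
Conceptually, the validity check at dispatch time reduces to a generation
comparison; a stand-in sketch (the real check lives in the BPF code and uses
scx_bpf_dispatch_cancel(); the types below are illustrative):

```rust
// Per-task state maintained by the BPF side.
struct TaskCtx {
    cpumask_cnt: u64, // bumped every time .set_cpumask() runs for this task
}

// What the user-space scheduler sends back at dispatch time.
struct DispatchedTask {
    cpumask_cnt: u64, // generation observed when the task was enqueued
}

// The dispatch is only honored if the cpumask hasn't changed in between;
// otherwise it is cancelled and the task is re-enqueued by sched-ext.
fn dispatch_still_valid(cur: &TaskCtx, dispatched: &DispatchedTask) -> bool {
    cur.cpumask_cnt == dispatched.cpumask_cnt
}
```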

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-10 18:02:42 +01:00
David Vernet
1c00de9402
Merge pull request #129 from sched-ext/infeasible_weights
Implement solution to infeasible weights problem
2024-02-09 16:23:56 -06:00
David Vernet
e627176d90
scx: Implement solution to infeasible weights problem
As described in [0], there is an open problem in load balancing called
the "infeasible weights" problem. Essentially, the problem boils down to
the fact that a task with disproportionately high load can be granted
more CPU time than it can actually consume per its duty cycle.

This patch implements a solution to that problem, wherein we apply the
algorithm described in this paper to adjust all infeasible weights in
the system down to a feasible weight that gives them their full duty
cycle, while allowing the remaining feasible tasks on the system to
share the remaining compute capacity on the machine.

[0]: https://drive.google.com/file/d/1fAoWUlmW-HTp6akuATVpMxpUpvWcGSAv/view?usp=drive_link
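
In rough terms (a paraphrase of the description above, not the exact
formulation from the paper): with total capacity C, weights w_i and per-task
duty cycles u_i, task i's nominal share is (w_i / sum_j w_j) * C; a weight is
infeasible when that share exceeds u_i, and the adjustment lowers it to a
feasible w_i' chosen so that

```
\frac{w_i'}{\sum_j w_j'} \cdot C = u_i
```

while the remaining feasible tasks share the leftover capacity in proportion
to their unchanged weights.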

Signed-off-by: David Vernet <void@manifault.com>
2024-02-09 16:23:12 -06:00
Andrea Righi
8e47602f00 scx_rustland: keep default CPU selection when idle
Dispatch to the shared DSQ (NO_CPU) only when the assigned CPU is not
idle anymore; otherwise, maintain the same CPU that has been assigned by
the BPF layer.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-08 22:48:07 +01:00
Andrea Righi
7085d57709 scx_rustland: kick user-space scheduler when a CPU is released
When the system is not being fully utilized there may be delays in
promptly awakening the user-space scheduler.

This can happen, for example, when some CPU-intensive tasks are
constantly dispatched bypassing the user-space scheduler (e.g., using
SCX_DSQ_LOCAL) and other CPUs are completely idle.

Under this condition, update_idle() can fail to activate the
user-space scheduler, because there are no pending events, and only the
periodic timer will wake up the scheduler, potentially introducing lags
of up to 1 sec.

This can be reproduced, for example, by running a video game that doesn't
use all the CPUs available in the system (e.g., Team Fortress 2). With
this game it is pretty easy to notice sporadic lags that resolve only
after ~1 sec, due to the periodic timer kicking the scheduler.

To prevent this from happening, wake up the user-space scheduler as soon
as a CPU is released, speculating on the fact that most of the time there
will always be another task ready to run.

This can introduce a little more overhead in the scheduler (due to
potentially unnecessary wake-up events), but it also prevents stuttering
and makes the system much smoother and more responsive,
especially with video games.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-08 22:48:07 +01:00
Andrea Righi
cb82d91e0f scx_rustland: use scx_bpf_dispatch_cancel()
Use scx_bpf_dispatch_cancel() to invalidate dispatches on the wrong per-CPU
DSQ, due to cpumask race conditions, and redirect them to the shared
DSQ.

This prevents dispatching tasks to a CPU that cannot be used according to
the task's cpumask.

With this applied the scheduler passed all the `stress-ng --race-sched`
stress tests.

Moreover, introduce a counter that is periodically reported to stdout as
an additional statistic, which can be helpful for debugging.

Link: https://github.com/sched-ext/sched_ext/pull/135
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-08 22:48:07 +01:00
Andrea Righi
13e23e8cc9 scx_rustland: dump scheduler statistics before exiting
Print all the scheduler statistics before exiting. Reporting the very
last state of the scheduler can help debug events that could trigger
error conditions (such as page faults, scheduler congestion, etc.).

While at it, also fix some minor coding style issues (tabs vs. spaces).

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-08 15:37:44 +01:00
David Vernet
c574598dc7
scx_rusty: Fix typos
Signed-off-by: David Vernet <void@manifault.com>
2024-02-07 23:38:26 -06:00
Tejun Heo
2062d1ad1f scx: Add compat support for SCX_KICK_IDLE and use it for idle CPU wakeups
SCX_KICK_IDLE is a new feature which isn't defined in older kernels. Add
a compat wrapper and use it for idle CPU wakeups.

Signed-off-by: Tejun Heo <tj@kernel.org>
2024-02-06 15:28:40 -10:00
Tejun Heo
17014e91fb scx: Update vmlinux.h to receive SCX_KICK_IDLE
Signed-off-by: Tejun Heo <tj@kernel.org>
2024-02-06 15:02:01 -10:00
David Vernet
4108ece204
scx_userland: Increase scx_userland timeout
This is meant to be an example scheduler that won't necessarily run well
in production. Let's remove the 3 second timeout and use the system
default of 30.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-04 16:23:18 -06:00
David Vernet
28a0b82be6
scx_userland: Print warning about poor performance
Let's make it clear that this scheduler isn't expected to perform well,
and instead point people to scx_rustland.

Signed-off-by: David Vernet <void@manifault.com>
2024-02-04 16:02:57 -06:00
Tejun Heo
1ca3b8dca8 common.bpf.h: Add kfunc prototype for scx_bpf_dispatch_cancel()
And relocate scx_bpf_dispatch_nr_slots() while at it.

Signed-off-by: Tejun Heo <tj@kernel.org>
2024-02-03 09:46:13 -10:00
Andrea Righi
b6eee3a5c4
Merge pull request #121 from sched-ext/rustland-duplicate-pids
scx_rustland: prevent duplicate PIDs in the task BTreeSet
2024-02-03 17:55:43 +01:00
Andrea Righi
acb174aa51 scx_rustland: prevent duplicate PIDs in the task BTreeSet
Items in the task BTreeSet are stored by pid and vruntime. Make sure
that we never store multiple items with the same PID, so that
re-enqueued tasks are not dispatched multiple times.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-03 14:46:39 +01:00
David Vernet
7cbcc16be9
Merge pull request #119 from sched-ext/htejun
scheds/sync-to-kernel.sh: Drop most schedulers from sync
2024-02-02 14:20:02 -06:00
Tejun Heo
c7ad3a71f9 scheds/sync-to-kernel.sh: Drop most schedulers from sync
Only scx_simple/qmap are in the kernel tree now. Drop the rest from the sync
script. Also update the sync script so that it can handle an empty
rust_scheds variable.

Signed-off-by: Tejun Heo <tj@kernel.org>
2024-02-02 09:08:30 -10:00
Andrea Righi
681b3fd807 scx_rustland: more aggressive time slice scaling
Allow scaling the effective time slice down to 250 us. This can help
maintain good audio quality even when the system is overloaded by
multiple CPU-intensive tasks.

Moreover, always round up the time slice scaling factor to be a little
more aggressive when scaling the time slice, so that we can prioritize
low-latency tasks even more.
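
A sketch of the kind of scaling described here (tying the scaling factor to
the number of waiting tasks and using a 5 ms base slice are assumptions; only
the 250 us floor comes from the message above):

```rust
const SLICE_NS: u64 = 5_000_000;   // nominal 5 ms slice (assumed)
const SLICE_NS_MIN: u64 = 250_000; // never scale below 250 us

fn effective_slice(nr_waiting: u64) -> u64 {
    // Round the scaling factor up so the slice shrinks a bit more
    // aggressively as the number of waiting tasks grows.
    let scale = nr_waiting + 1;
    (SLICE_NS / scale).max(SLICE_NS_MIN)
}
```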

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-01 16:40:59 +01:00
Andrea Righi
26d6d530f0 scx_rustland: enhance interactive task classification
Evaluate the number of voluntary context switches per second (nvcsw/sec)
for each task using an exponentially weighted moving average (EWMA) with
weight 0.5, which allows classifying interactive tasks with more
accuracy.

Using a simple average over a 10 sec period can introduce small lags
every 10 sec, as the statistics for the number of voluntary context
switches are refreshed. This can result in interactive tasks taking a
brief time to catch up before being accurately classified as such,
causing for example short audio crackles, a small drop of 5-10 fps in
games, etc.

Using an EWMA smooths the average of nvcsw/sec, preventing short lags
in the interactive tasks, while also avoiding incorrectly classifying as
interactive those tasks that only experience an isolated short burst of
voluntary context switches.

This patch has been tested with the usual test case of playing a
videogame while running a parallel kernel build in the background.

Without this patch the short lag every 10 sec is clearly noticeable;
with this patch applied, the game and audio run smoothly.
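
A sketch of the EWMA with weight 0.5 described above (the sample values are
illustrative): the new average is half the old average plus half the latest
sample, so old samples decay quickly, yet an isolated burst cannot
immediately flip the classification.

```rust
// EWMA with weight 0.5: avg_new = 0.5 * avg_old + 0.5 * sample.
fn update_nvcsw_avg(avg: u64, sample: u64) -> u64 {
    (avg + sample) / 2
}

fn main() {
    // A task idling at ~10 nvcsw/sec that suddenly bursts to 1000 only
    // moves to ~505, then decays back over the next few refresh periods.
    let mut avg = 10;
    for sample in [1000, 10, 10, 10] {
        avg = update_nvcsw_avg(avg, sample);
        println!("avg = {avg}");
    }
}
```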

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-01 16:40:59 +01:00
Andrea Righi
baeea306fc scx_rustland: rely on the built-in idle selection logic
Simplify the idle selection logic by relying only on the built-in idle
selection performed in the BPF layer.

When there are idle CPUs available in the system, tasks are dispatched
directly by the BPF dispatcher without invoking the user-space
scheduler. This avoids the user-space overhead and delivers the best
system performance when CPU resources are not overcommitted.

Once the number of tasks exceeds the available CPUs, the user-space
scheduler takes over. However, by this time, the system is already
overcommitted, so there's little advantage in attempting to pinpoint the
optimal idle CPU through the user-space scheduler. Instead, tasks can be
executed on the first available CPU, consistently dispatching them to
the shared DSQ.

This achieves optimal performance both when the system is
under-utilized and when it is over-utilized.

With this change in place the user-space scheduler won't dispatch tasks
directly to specific CPUs, but we still want to keep this as a generic
feature in the BPF layer, so that it can be potentially used in the
future by this scheduler or even by other user-space schedulers (once
the BPF layer is moved to a more generic place).

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-01 16:40:59 +01:00
Andrea Righi
b9e60f71ed scx_rustland: usersched: code refactoring
No functional change, just move code around to make it more readable.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-01 16:40:59 +01:00
Andrea Righi
d13ed5c025 scx_rustland: BPF: refine CPU dispatch logic
When the user-space scheduler dispatches a task on a specific CPU, that
CPU might not be valid, since user space doesn't have visibility into
the task's cpumask.

When this happens the BPF dispatcher (that has direct visibility of the
cpumask) should automatically redirect the task to a valid CPU, but
instead of bouncing the task on the shared DSQ, we should try to use the
CPU assigned by the built-in idle selection logic.

If this CPU is also not valid, then we can simply ignore the task, which
has been dequeued and re-enqueued, since a valid CPU will naturally be
re-selected at a later time.

Moreover, avoid kicking any specific CPU when the task is dispatched to
the shared DSQ, since the task can be consumed on any CPU and the additional
kick would simply add more overhead.

Lastly, rename dsq_id_to_cpu() to dsq_to_cpu() and cpu_to_dsq_id() to
cpu_to_dsq() for more clarity.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-01 16:38:17 +01:00
Andrea Righi
45d8b54eb9 scx_rustland: re-introduce per-CPU DSQ + a global shared DSQ
With commit c6ada25 ("scx_rustland: use custom pcpu DSQ instead of
SCX_DSQ_LOCAL{_ON}") we tried to introduce custom per-CPU DSQs, instead
of using SCX_DSQ_LOCAL and SCX_DSQ_LOCAL_ON to dispatch tasks.

This was required, because dispatching tasks using SCX_DSQ_LOCAL_ON
doesn't provide a guarantee that the cpumask, checked at dispatch time
to determine the validity of a target CPU, remains valid.

This method solved the cpumask validity issue, but unfortunately it
introduced a noticeable performance regression and a potential
starvation issue (both probably caused by the same problem): if a
task is assigned to a CPU in select_cpu() and the scheduler decides to
dispatch it on a different CPU, the task will be added to the new CPU's
DSQ, but if no dispatch event happens there, the task may remain stuck
in the per-CPU DSQ for a long time, triggering the sched-ext watchdog
timeout that would kick out the scheduler, for example:

  12:53:28 [WARN] FAIL: IPC:CSteamEngin[7217] failed to run for 6.482s (err=1026)
  12:53:28 [INFO] Unregister RustLand scheduler

Therefore, we reverted this change with 6d89ece ("scx_rustland: dispatch
tasks only on the global DSQ"), dispatching all the tasks to the global
DSQ, completely delegating to the kernel the distribution of tasks among the
available CPUs.

This is not the ideal solution, because we still want to give the
user-space scheduler the possibility to assign tasks to specific
CPUs.

Therefore, re-introduce distinct per-CPU DSQs, but also provide a global
shared DSQ. Tasks dispatched in the per-CPU DSQs are consumed from the
dispatch() callback of their corresponding CPU, while tasks dispatched in the
global shared DSQ are consumed from any CPU.

In this way the BPF layer provides an interface that gives user space
the flexibility to dispatch a task on a specific CPU or on the first
CPU available, depending on the particular scheduler's needs.

If an invalid CPU (according to the cpumask) is selected the BPF
dispatcher will transparently redirect the task to a valid CPU, selected
using the built-in idle selection logic.

In the future we may want to improve this part by giving user space
visibility of the cpumask, so that it can pick a valid CPU in advance
and in a properly synchronized way.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-01 00:33:35 +01:00
Andrea Righi
b5e846c538 scx_rustland: BPF: small refactoring
No functional change, just some refactoring to make the code more clear.

We have is_usersched_needed() and set_usersched_needed() that are doing
different things (the former is checking if there are pending tasks for
the scheduler, the latter is setting the usersched_needed flag to
activate the dispatch of the user-space scheduler).

Rename is_usersched_needed() to usersched_has_pending_tasks() to make
the code more clear and understandable.

Also move dispatch_user_scheduler() closer to the other dispatch-related
helper functions.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-01 00:33:35 +01:00