Andrea Righi 93dc615653 scx_rustland: use a ring buffer for queued tasks
Switch from a BPF_MAP_TYPE_QUEUE to a BPF_MAP_TYPE_RINGBUF to store the
tasks that need to be processed by the user-space scheduler.

A ring buffer avoids a lot of memory copies and syscalls, since the
memory is shared directly between the BPF and the user-space
components.
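
For reference, the BPF-side producer conceptually looks like the sketch
below. This is only a minimal sketch, not the actual scx_rustland code:
the struct layout, map size and function names are made up, and it
assumes the standard bpf_ringbuf_reserve()/bpf_ringbuf_submit()
producer helpers; only the switch to BPF_MAP_TYPE_RINGBUF comes from
this change:

  /*
   * Minimal sketch of a ring buffer producer (illustrative names, not
   * the actual scx_rustland code).
   */
  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  /* Hypothetical per-task payload shared with the user-space scheduler. */
  struct queued_task_ctx {
      s32 pid;
      u64 sum_exec_runtime;
  };

  /*
   * Previously a BPF_MAP_TYPE_QUEUE drained via bpf() syscalls, now a
   * ring buffer whose memory is mmap-ed and shared with user space.
   */
  struct {
      __uint(type, BPF_MAP_TYPE_RINGBUF);
      __uint(max_entries, 256 * 1024);
  } queued SEC(".maps");

  /*
   * Assumes a context (e.g. a sched_ext callback) where @p can be
   * dereferenced directly.
   */
  static int queue_task(struct task_struct *p)
  {
      struct queued_task_ctx *task;

      /* Reserve space directly in the shared ring buffer. */
      task = bpf_ringbuf_reserve(&queued, sizeof(*task), 0);
      if (!task)
          return -1; /* ring buffer full */

      task->pid = p->pid;
      task->sum_exec_runtime = p->se.sum_exec_runtime;

      /*
       * Publish the entry: the consumer reads it from the shared
       * mapping, with no extra syscall or copy involved.
       */
      bpf_ringbuf_submit(task, 0);
      return 0;
  }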

Performance profile before this change:

  2.44%  [kernel]  [k] __memset
  2.19%  [kernel]  [k] __sys_bpf
  1.59%  [kernel]  [k] __kmem_cache_alloc_node
  1.00%  [kernel]  [k] _copy_from_user

After this change:

  1.42%  [kernel]  [k] __memset
  0.14%  [kernel]  [k] __sys_bpf
  0.10%  [kernel]  [k] __kmem_cache_alloc_node
  0.07%  [kernel]  [k] _copy_from_user

The overhead of both sys_bpf() and copy_from_user() is now reduced by a
factor of ~15x (only the dispatch path still uses sys_bpf()).

NOTE: despite being very effective, the current implementation is a bit
of a hack: the ring buffer API only allows consuming items greedily,
with multiple items drained at once, and libbpf-rs does not report
exactly how many items were consumed. A more fine-grained libbpf-rs
API [1] may allow us to improve this code a bit.
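
To illustrate the consumption model: libbpf-rs wraps libbpf's C ring
buffer API, which roughly works as sketched below (illustrative names,
not the actual scx_rustland user-space code). Every record currently
available is drained through a callback, so the consumer copies each
record into its own local queue and schedules from there:

  /*
   * Sketch of the underlying libbpf C consumer that libbpf-rs wraps
   * (names invented, not the actual scx_rustland code).
   */
  #include <string.h>
  #include <bpf/libbpf.h>

  /* Must match the layout produced by the BPF component. */
  struct queued_task_ctx {
      int pid;
      unsigned long long sum_exec_runtime;
  };

  static int handle_queued(void *ctx, void *data, size_t size)
  {
      struct queued_task_ctx task;

      /*
       * @data points into the mmap-ed ring buffer shared with the BPF
       * component: no bpf() syscall and no copy_from_user() here.
       */
      memcpy(&task, data, sizeof(task));
      /* Append the task to a local scheduling queue (omitted). */
      return 0;
  }

  int drain_queued_tasks(int map_fd)
  {
      struct ring_buffer *rb;
      int n;

      rb = ring_buffer__new(map_fd, handle_queued, NULL, NULL);
      if (!rb)
          return -1;

      /*
       * Greedily consume every record currently available. The C API
       * returns how many records were consumed; the libbpf-rs wrapper
       * doesn't expose this count, hence the workaround noted above.
       */
      n = ring_buffer__consume(rb);

      ring_buffer__free(rb);
      return n;
  }

(In a real scheduler the ring_buffer object would be created once and
polled from the main loop; it is created and freed here only to keep
the sketch self-contained.)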

Moreover, libbpf-rs doesn't provide an API for the user_ring_buffer, so
at the moment there is no trivial way to apply the same change to the
dispatched tasks.

However, with just this change applied, the overhead of sys_bpf() and
copy_from_user() is already minimal, so we wouldn't gain much benefit
by also converting the dispatch path to a BPF ring buffer.
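
For completeness, if the dispatch path were ever converted, the BPF
side could drain a BPF_MAP_TYPE_USER_RINGBUF with
bpf_user_ringbuf_drain(), roughly as in the hypothetical sketch below
(not part of this change; the struct, map and function names are
invented):

  /*
   * Hypothetical sketch (not part of this change): draining dispatched
   * tasks from a user ring buffer on the BPF side. Names are invented.
   */
  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  struct dispatched_task_ctx {
      s32 pid;
      s32 cpu;
      u64 slice_ns;
  };

  struct {
      __uint(type, BPF_MAP_TYPE_USER_RINGBUF);
      __uint(max_entries, 256 * 1024);
  } dispatched SEC(".maps");

  static long handle_dispatched(struct bpf_dynptr *dynptr, void *ctx)
  {
      const struct dispatched_task_ctx *task;

      task = bpf_dynptr_data(dynptr, 0, sizeof(*task));
      if (!task)
          return 1; /* malformed entry: stop draining */

      /* Dispatch task->pid to task->cpu for task->slice_ns (omitted). */
      return 0; /* keep draining */
  }

  static void drain_dispatched_tasks(void)
  {
      /* Consume all the tasks published by the user-space scheduler. */
      bpf_user_ringbuf_drain(&dispatched, handle_dispatched, NULL, 0);
  }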

[1] https://github.com/libbpf/libbpf-rs/pull/680

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
2024-02-20 12:30:22 +01:00

SCHED_EXT SCHEDULERS

Introduction

This directory contains the repo's schedulers.

Some of these schedulers are simply examples of different types of schedulers that can be built using sched_ext. They can be loaded and used to schedule on your system, but their primary purpose is to illustrate how various features of sched_ext can be used.

Other schedulers are actually performant, production-ready schedulers. That is, for the correct workload and with the correct tuning, they may be deployed in a production environment with acceptable or possibly even improved performance. Some of the examples could be improved to become production schedulers.

Please see the following README files for details on each of the various types of schedulers:

  • rust describes all of the schedulers with Rust user space components. All of these schedulers are production ready.
  • c describes all of the schedulers with C user space components. All of these schedulers are production ready.

Note on syncing

Note that there is a sync-to-kernel.sh script in this directory. This is used to sync any changes to the specific schedulers with the Linux kernel tree. If you've made any changes to a scheduler in this directory, please use the script to synchronize with the sched_ext Linux kernel tree:

$ ./sync-to-kernel.sh /path/to/kernel/tree