Reading the ABI specification, I realized that fallback_unwind_ppc64()
is completely wrong. Fix it.
Fixes: eec67768aa ("libdrgn: replace elfutils DWARF unwinder with our own")
Signed-off-by: Omar Sandoval <osandov@osandov.com>
The usage of the link register in DWARF is a little confusing. On entry
to a function, the link register contains the address that should be
returned to. However, for DWARF, the link register is usually used as
the CFI return_address_register, which means that in an unwound frame,
it will contain the same thing as the program counter. I initially
thought that this was a mistake, believing that the link register should
contain the _next_ return address. However, after a return (with the blr
instruction), the link register will indeed contain the same address as
the program counter. This is consistent with our documentation of
register values for function call frames: "the register values are the
values when control returns to this frame".
So, rename our internal "ra" register to "lr", expose it to the API, and
add a little more documentation to the ppc64 initial register code.
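As a quick illustration from the Python API (a sketch, not part of this change; "task" is a placeholder for any blocked task on a ppc64 target):

    trace = prog.stack_trace(task)
    # In an unwound (non-innermost) frame, "lr" should now hold the same
    # address as the program counter, per the reasoning above.
    frame = trace[1]
    print(hex(frame.register("lr")), hex(frame.pc))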
Fixes: 221a218704 ("libdrgn: add powerpc stack trace support")
Signed-off-by: Omar Sandoval <osandov@osandov.com>
In addition to the general-purpose registers, struct pt_regs also
provides the cs and ss segment registers and the rflags register.
elf_gregset_t provides the other segment registers as well. We should
expose all of those.
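For example, once exposed they can be read from a stack frame like any other register (a sketch, assuming an x86-64 target and the usual lowercase register names; "task" is a placeholder):

    frame = prog.stack_trace(task)[0]
    # Only registers that the unwinder actually knows for this frame are
    # available; asking for an unknown one raises an error.
    for name in ("cs", "ss", "ds", "es", "fs", "gs", "rflags"):
        print(name, hex(frame.register(name)))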
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Currently, register definitions are split across two files:
arch_foo.defs lists the names of registers, and arch_foo.c defines the
layout used to store registers in memory. The main rationale for this
was that the layout could be processed entirely by the C preprocessor,
but the register names needed an AWK script that we wanted to keep
minimal. But since commit af6f5a887d ("libdrgn: replace gen_arch.awk
with gen_arch_inc_strswitch.py"), arch_foo.defs is processed by a Python
script.
Let's define both the register names and the register layout in a new
file, arch_foo_defs.py, which is processed by gen_arch_inc_strswitch.py.
This has a few benefits:
* It puts all of the register definitions for an architecture in one
place.
* It is easier to maintain than preprocessor magic. (It also makes it
trivial to support registers that don't exist in DWARF, which would've
been harder to do with our preprocessor code.)
* It gets rid of our DSL in favor of Python (which also lets us reduce
repetition for the ppc64 definitions).
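To illustrate (the class names, fields, and DWARF numbers here are purely hypothetical; the real interface is whatever gen_arch_inc_strswitch.py ends up consuming), an arch_foo_defs.py could look something like this:

    # Hypothetical sketch of an arch_foo_defs.py: names and layout in one
    # place, with plain Python reducing repetition (as for ppc64's r0-r31).
    REGISTERS = [DrgnRegister(f"r{i}") for i in range(32)] + [DrgnRegister("lr")]

    REGISTER_LAYOUT = [
        DrgnRegisterLayout(f"r{i}", size=8, dwarf_number=i) for i in range(32)
    ] + [DrgnRegisterLayout("lr", size=8, dwarf_number=65)]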
Signed-off-by: Omar Sandoval <osandov@osandov.com>
This hasn't been used since commit eec67768aa ("libdrgn: replace
elfutils DWARF unwinder with our own").
Signed-off-by: Omar Sandoval <osandov@osandov.com>
drgn_stack_frame_register() gets the register value with copy_lsbytes()
and then byte swaps it if the program's byte order is different from the
host's. But, copy_lsbytes() already fixes the byte order, so this ends
up with the original (wrong) byte order. We also don't need to zero out
the integer that we copy into since copy_lsbytes() also does that.
Fixes: eec67768aa ("libdrgn: replace elfutils DWARF unwinder with our own")
Signed-off-by: Omar Sandoval <osandov@osandov.com>
When running a python script with the drgn cli, an import of
a module in the current directory does not find that module.
This is surprising for regular python users.
According to the python documentation, sys.path is modified when
a script is passed on the command line:
"If the script name refers directly to a Python file, the
directory containing that file is added to the start of
sys.path, and the file is executed as the __main__ module."
However, it does not set the path if the passed-in script is a
zipfile. Use pkgutil.get_importer() to check if this is the case
and only add the path if it returns None.
Add the same operation in drgn so that importing modules from
the same directory as the script works as expected.
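A sketch of that check (the "script" variable is illustrative):

    import os
    import pkgutil
    import sys

    # pkgutil.get_importer() returns a zipimporter for a zipfile; for a plain
    # .py file it returns None, in which case we mimic python3 and prepend the
    # script's directory to sys.path.
    if pkgutil.get_importer(script) is None:
        sys.path.insert(0, os.path.dirname(os.path.abspath(script)))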
Link: https://docs.python.org/3/using/cmdline.html#using-on-interface-options
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
This needed the previous small update to
tests.linux_kernel.test_debug_info.TestModuleDebugInfo.
tests.linux_kernel.helpers.test_tc.TestTc will also only work with
pyroute2 >= 0.6.10 (see svinota/pyroute2#899). No changes needed to drgn
itself.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Linux kernel commit a0e286b6a5b6 ("loop: remove lo_refcount and avoid
lo_mutex in ->open / ->release") (in v5.19-rc1) removed the lo_open
symbol that we use for
tests.linux_kernel.test_debug_info.TestModuleDebugInfo. Replace it with
a symbol from the test kernel module so we don't need to worry about it
going away.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Linux kernel commit 31cb50b5590f ("kbuild: check static EXPORT_SYMBOL*
by script instead of modpost") (in v5.19-rc1) added this script to the
build process, and the latest vmtest kernel build failed without it.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Currently, we identify explicitly-reported kernel modules by the module
name that we get from the .modinfo or the .gnu.linkonce.this_module
section. However, objcopy --only-keep-debug (used for some Linux distros'
separate debug files) does not keep these sections. This means that
passing a file processed by objcopy --only-keep-debug to, e.g., drgn -s,
fails with "could not find kernel module name".
Instead of using the module name as the identifier, let's use the
module's GNU build ID. We can get it on a live system from
/sys/module/<module>/notes/, and on a core dump from struct
module::notes_attrs (which is the implementation of that sysfs
directory).
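For reference, those sysfs files are raw ELF note sections, so on a live system the build ID can be extracted with a straightforward note parse. A Python sketch in host byte order (not the actual libdrgn C code):

    import os
    import struct

    NT_GNU_BUILD_ID = 3

    def module_build_id(name):
        notes_dir = f"/sys/module/{name}/notes"
        for entry in os.listdir(notes_dir):
            with open(os.path.join(notes_dir, entry), "rb") as f:
                data = f.read()
            offset = 0
            while offset + 12 <= len(data):
                namesz, descsz, type_ = struct.unpack_from("=III", data, offset)
                offset += 12
                note_name = data[offset:offset + namesz]
                offset += (namesz + 3) & ~3
                desc = data[offset:offset + descsz]
                offset += (descsz + 3) & ~3
                if type_ == NT_GNU_BUILD_ID and note_name == b"GNU\0":
                    return desc.hex()
        return None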
This was split out of my larger debug info discovery rework, which will
make more use of the build ID.
Closes #178.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
DEFINE_BINARY_SEARCH_TREE_TYPE() doesn't need these. This is preparation
for a potential new use of a BST. But, it's also a good cleanup on its
own and allows us to move some code out of memory_reader.h and into
memory_reader.c. (This is similar to commit 1339dc6a2f ("libdrgn:
hash_table: move entry_to_key to DEFINE_HASH_TABLE_FUNCTIONS()").)
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Instead of getting the address range from the sections we find, get it
directly from /proc/modules or from the `struct module`. (We already had
partial code to get the address range, but I can't remember why I didn't
use it.)
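For the live case, /proc/modules already has everything we need (fields: name, size, reference count, dependencies, state, base address); a sketch:

    def live_module_address_range(name):
        # Note: the base address field is zeroed for unprivileged readers.
        with open("/proc/modules") as f:
            for line in f:
                fields = line.split()
                if fields[0] == name:
                    start = int(fields[5], 16)
                    return start, start + int(fields[1])
        return None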
The real motivation for this is the upcoming module rework: it'll allow
us to report the module and its address range before iterating through
its sections. But it also means that we don't need the heuristic to
ignore special sections that shouldn't be considered part of the address
range (e.g., .init and .data..percpu; we should be ignoring the latter
anyway, but we get away with not doing so because it's excluded from sysfs).
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Like drgn_error_fwrite(), but writes to a file descriptor instead of a
stdio stream. This will be used for logging.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
drgn_error_fwrite() only calls string_builder_append_error() to get
special formatting for DRGN_ERROR_OS, but DRGN_ERROR_FAULT also needs
special formatting. Rather than needing to keep drgn_error_fwrite() and
string_builder_append_error() in sync, define them both in terms of a
common macro.
Fixes: 80fef04c70 ("Add address attribute to FaultError exception")
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Since most Linux distros enable this, we should make sure we test it. It
was added for SLUB in Linux kernel commit 2482ddec670f ("mm: add SLUB
free list pointer obfuscation") (in v4.14), so we'll still get test
coverage of the non-hardened codepath while 4.9 is around.
It was also added for SLAB in Linux kernel commit 3404be67bf73
("mm/slab: expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB") (in
v5.9), although that currently doesn't change the in-memory data
structures.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Most Linux distros (I checked Fedora, Debian, and Arch) enable this,
which requires us to de-obfuscate the pointers in the freelist.
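Roughly speaking, recent kernels store each freelist pointer XORed with a per-cache random value and the byte-swapped address of the pointer itself, so the helper has to undo that. A sketch for 64-bit targets (older hardened kernels skip the byte swap):

    def freelist_dereference(slab_cache, ptr_addr, obfuscated):
        # slab_cache is a struct kmem_cache * Object on a kernel built with
        # CONFIG_SLAB_FREELIST_HARDENED; ptr_addr is where the pointer lives.
        swabbed = int.from_bytes(ptr_addr.to_bytes(8, "little"), "big")
        return obfuscated ^ slab_cache.random.value_() ^ swabbed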
Signed-off-by: Omar Sandoval <osandov@osandov.com>
This has been useful to run manually before, but I haven't added it to
the CI because it was somewhat noisy. However, it reports some really
useful warnings, so let's configure it for our needs and add it to pre-commit.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Several flake8 errors have snuck in since the last cleanup in commit
5541fad063 ("Fix some flake8 errors"). Prepare for adding flake8 to
pre-commit by fixing them.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Splitting the tests between tests/linux_kernel and tests/helpers/linux
means that we have to set up the unit tests twice, including loading
debug info. Python 3.7 and newer have a way to get around this, but
we're still sort of supporting Python 3.6. Move them under one path to
speed up test runs.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
This one is considerably more complicated than the linked list one, but
it should catch many kinds of red-black tree mistakes.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
At LSF/MM+BPF 2022, Ted Ts'o pitched me the idea of using drgn to
validate the consistency of kernel data structures. I really liked this
idea, especially for big, complicated data structures. But first, let's
start small: document the concept of a "validator", which is just a
special kind of helper, and add some basic validator versions of linked
list helpers.
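For example, the list validators read like the normal iteration helpers but raise an error instead of silently following a corrupt pointer (a sketch, assuming the helper and exception names land as shown):

    from drgn.helpers import ValidationError
    from drgn.helpers.linux.list import validate_list_for_each_entry

    # Walk the kernel's module list (assumes CONFIG_MODULES), stopping with an
    # error if a ->next/->prev pair is inconsistent.
    try:
        for mod in validate_list_for_each_entry(
            "struct module", prog["modules"].address_of_(), "list"
        ):
            print(mod.name.string_().decode())
    except ValidationError as e:
        print("corrupt list:", e)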
Signed-off-by: Omar Sandoval <osandov@osandov.com>
The current test case for slab_cache_for_each_allocated_object() is
barely more than a smoke test. It missed the bug fixed by the previous
commit. Now that we have the test kernel module, we can do a lot better
by comparing against the exact list of objects that are allocated.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
SLUB and SLAB both have a mechanism where they'll remove some objects
from the per-slab freelist to keep them in a per-CPU cache. SLUB's is a
per-CPU freelist just like the per-slab one, and SLAB's is a per-CPU
array of free entries. We're not checking those lists, which means that
we're returning free objects as allocated. Fix it by checking against
the per-CPU lists in addition to the per-slab freelist.
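For SLUB, that means also looking at each CPU's kmem_cache_cpu; a minimal sketch of gathering the per-CPU freelist heads (the full fix also walks and de-obfuscates the rest of each chain, and SLAB needs its array_cache handled separately):

    from drgn.helpers.linux.cpumask import for_each_online_cpu
    from drgn.helpers.linux.percpu import per_cpu_ptr
    from drgn.helpers.linux.slab import find_slab_cache

    def cpu_freelist_heads(prog, name):
        # Assumes a SLUB kernel, where struct kmem_cache has cpu_slab.
        cache = find_slab_cache(prog, name)
        heads = set()
        for cpu in for_each_online_cpu(prog):
            c = per_cpu_ptr(cache.cpu_slab, cpu)
            if c.freelist:
                heads.add(c.freelist.value_())
        return heads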
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Just like we did for lists, use a test data structure instead of the vma
tree for a process. This also allows us to test RB_EMPTY_NODE, which we
couldn't test before.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Rather than picking a couple of lists which we hope won't change, use
the newly added test kernel module to define a few lists and test
against those. This also gives us proper tests for list_is_singular().
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Now that commit d999703f94 ("vmtest: add kernel module build
dependencies to kernel packages") added the files necessary to build a
test kernel module, add the module (currently a stub) and the
scaffolding necessary to build and load it.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
We have several helpers that are difficult to test because there are no
userspace APIs to test them against. Radix trees, slab objects, and
hlists are a few examples. For lists and rbtrees, luckily we can find
instances of those data structures exposed to userspace in some way, but
this is somewhat brittle.
We'd like a way to be able to unit test directly against kernel code
that sets things up in a way that is easy to test against. The easiest
way to do this is with a custom kernel module.
There are two options for how to enable this:
1. Build the custom kernel module as part of the vmtest kernel build and
package it with the vmtest kernel package.
2. Include the artifacts needed for kernel module builds in the vmtest
kernel package, then build the kernel module when running tests.
The latter makes the kernel packages significantly larger (on a build of
v5.18-rc6, 46M -> 53M compressed, 173M -> 217M decompressed), but it has
the huge advantage that it does not require a vmtest kernel rebuild to
add or modify tests. In order to optimize for making it easy to add new
helpers with test cases, this is the approach I chose.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Now `python3 -m vmtest.manage -K --no-build` can be used to get the list
of latest kernel releases that need to be built. Also rename --dry-run
to --no-upload since --no-build is also a dry run in a way.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Slab cache merging can cause major confusion when debugging using slab
cache helpers. Add a helper to detect whether a slab cache is merged and
an explanation of the implications.
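Typical usage (assuming the helper keeps this name):

    from drgn.helpers.linux.slab import find_slab_cache, slab_cache_is_merged

    cache = find_slab_cache(prog, "dentry")
    print(slab_cache_is_merged(cache))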
Signed-off-by: Omar Sandoval <osandov@osandov.com>
These are ported from https://github.com/josefbacik/debug-scripts.
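For example, counting the allocated objects in a cache might look like this (a sketch; slab_cache_for_each_allocated_object() raises an exception on SLOB, which keeps no per-cache bookkeeping to walk):

    from drgn.helpers.linux.slab import (
        find_slab_cache,
        slab_cache_for_each_allocated_object,
    )

    cache = find_slab_cache(prog, "dentry")
    n = sum(1 for _ in slab_cache_for_each_allocated_object(cache, "struct dentry"))
    print(n, "allocated dentries")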
Signed-off-by: alexlzhu <alexlzhu@fb.com>
[Omar: make test case check for exception on SLOB]
Signed-off-by: Omar Sandoval <osandov@osandov.com>
GCC and Clang have 128-bit integer types on 64-bit targets: __int128 and
unsigned __int128. Clang additionally has N-bit integers of up to 1<<24
bits with _ExtInt(N), which was standardized in C23 as _BitInt(N).
Currently, we disallow creating objects with a >64-bit integer type. Jay
Kamat reported that this would cause errors when examining some
binaries. The reason we disallow this is that we don't have a way to
represent or do operations on >64-bit values. We could make use of a
bignum library like GMP to do this in the future.
However, for now, we can loosen this restriction and at least allow
reference and absent objects with big integer types. This requires
enforcing two things: that we never create a value object with a >64-bit
integer type, and that we never read the value of a reference object
with a >64-bit integer type.
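For example, a reference object with a 128-bit type can now be created and inspected, while reading its value still fails (a sketch; the placeholder address and the exact exception may differ):

    import drgn

    int128 = prog.int_type("__int128", 16, True)
    x = drgn.Object(prog, int128, address=0xffffffff81000000)
    print(x.type_)

    try:
        x.value_()  # Reading a >64-bit value is still not supported.
    except NotImplementedError as e:
        print(e)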
Co-authored-by: Jay Kamat <jaygkamat@gmail.com>
Signed-off-by: Omar Sandoval <osandov@osandov.com>
This will be used for partial 128-bit object support. There are other
places that should probably be converted to use it.
Signed-off-by: Omar Sandoval <osandov@osandov.com>