We don't need to get the DWARF index at the time we get the Dwfl handle,
so get rid of drgn_program_get_dwarf(), add drgn_program_get_dwfl(), and
create the DWARF index right before we update it, in a new function,
drgn_program_update_dwarf_index().
After the libdwfl conversion, we apply ELF relocations with libdwfl
instead of our homegrown implementation. However, libdwfl is much slower
at it than the previous implementation. We can work around this by
(again) applying ELF relocations ourselves for architectures that we
care about (x86-64, to start). For other architectures, we can fall back
to libdwfl.
This new implementation of ELF relocation reworks the parallelization to
be per-file rather than per-relocation. The latter was done originally
because before commit 6f16ab09d6 ("libdrgn: only apply ELF relocations
to relocatable files"), we applied relocations to vmlinux, which is much
larger than most kernel modules. Now that we don't do that, it seems to
be slightly faster to parallelize by file.
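As a rough sketch of the idea (not drgn's exact code), applying an
x86-64 RELA relocation to a debug section boils down to writing the
symbol value plus the addend at the relocation's offset, with each
worker thread now processing a whole file's relocations:
#include <elf.h>
#include <stdint.h>
#include <string.h>
/* Rough sketch only, not drgn's exact code: apply one x86-64 RELA
 * relocation to a copy of a debug section from an ET_REL file. Anything
 * we don't recognize falls back to libdwfl. */
static void apply_rela_x86_64(char *section, size_t size,
                              const Elf64_Sym *symtab,
                              const Elf64_Rela *rela)
{
        uint64_t value = symtab[ELF64_R_SYM(rela->r_info)].st_value +
                         rela->r_addend;
        switch (ELF64_R_TYPE(rela->r_info)) {
        case R_X86_64_64:
                if (rela->r_offset + 8 <= size)
                        memcpy(section + rela->r_offset, &value, 8);
                break;
        case R_X86_64_32: {
                /* Truncates; a real implementation would also verify
                 * that the value fits in 32 bits. */
                uint32_t value32 = value;
                if (rela->r_offset + 4 <= size)
                        memcpy(section + rela->r_offset, &value32, 4);
                break;
        }
        default:
                break;
        }
}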
libdwfl is the elfutils "DWARF frontend library". It has high-level
functionality for looking up symbols, walking stack traces, etc. In
order to use this functionality, we need to report our debugging
information through libdwfl. For userspace programs, libdwfl has a much
better implementation than drgn for automatically finding debug
information from a core dump or PID. However, for the kernel, libdwfl
has a few issues:
- It only supports finding debug information for the running kernel, not
vmcores.
- It determines the vmlinux address range by reading /proc/kallsyms,
which is slow (~70ms on my machine).
- If separate debug information isn't available for a kernel module, it
finds it by walking /lib/modules/$(uname -r)/kernel; this is repeated
for every module.
- It doesn't find kernel modules with names containing both dashes and
underscores (e.g., aes-x86_64).
Luckily, drgn already solved all of these problems, and with some
effort, we can keep doing this ourselves and report the results to
libdwfl.
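For illustration, reporting a module whose file we located ourselves
looks roughly like this (a minimal sketch using libdwfl's public
reporting API; the wrapper itself and its error handling are
simplified):
#include <elfutils/libdwfl.h>
#include <fcntl.h>
/* Sketch: report a kernel module that we found ourselves instead of
 * letting libdwfl search for it. */
static Dwfl_Module *report_module(Dwfl *dwfl, const char *name,
                                  const char *path, GElf_Addr base)
{
        int fd = open(path, O_RDONLY);
        if (fd == -1)
                return NULL;
        dwfl_report_begin_add(dwfl);
        /* On success, libdwfl takes ownership of fd. */
        Dwfl_Module *module = dwfl_report_elf(dwfl, name, path, fd, base,
                                              true);
        dwfl_report_end(dwfl, NULL, NULL);
        return module;
}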
The conversion replaces a bunch of code for dealing with userspace core
dump notes, /proc/$pid/maps, and relocations.
This converts several open-coded dynamic arrays to the new common vector
implementation:
- drgn_lexer stack
- Array dimension array for DWARF parsing
- drgn_program_read_c_string()
- DWARF index directory name hashes
- DWARF index file name hashes
- DWARF index abbreviation table
- DWARF index shard entries
drgn has enough open-coded dynamic arrays at this point to warrant a
common implementation. Add one inspired by hash_table.h. The API is
pretty minimal. I'll add more to it as the need arises.
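For example, defining and using a vector of uint64_t looks something
like this (a sketch of the intended usage; the exact macro and function
names are assumptions):
#include <stdbool.h>
#include <stdint.h>
#include "vector.h"
/* Sketch; the names are assumed. DEFINE_VECTOR() generates struct
 * uint64_vector and the uint64_vector_*() functions. */
DEFINE_VECTOR(uint64_vector, uint64_t)
static bool collect(struct uint64_vector *vec)
{
        uint64_vector_init(vec);
        for (uint64_t i = 0; i < 10; i++) {
                if (!uint64_vector_append(vec, &i)) {
                        uint64_vector_deinit(vec);
                        return false; /* Out of memory. */
                }
        }
        return true;
}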
There are some cases where we format a path (e.g., with asprintf()) and
keep it around only in case of errors. Add drgn_error_format_os() so we
can just reformat it if we hit the error, which simplifies cleanup.
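The idea, roughly (a sketch; the exact signature of
drgn_error_format_os() and the surrounding names are assumptions):
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include "drgn.h"
/* Sketch: only pay to format the full path if the error actually
 * happens, instead of asprintf()ing it up front and freeing it on every
 * exit path. */
static struct drgn_error *open_in_dir(int dirfd, const char *dirpath,
                                      const char *name, int *fd_ret)
{
        int fd = openat(dirfd, name, O_RDONLY);
        if (fd == -1) {
                return drgn_error_format_os("open", errno, "%s/%s",
                                            dirpath, name);
        }
        *fd_ret = fd;
        return NULL;
}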
Since we currently don't parse DWARF macro information, there's no easy
way to get the value of PAGE_SIZE and friends in drgn. However, vmcoreinfo
contains the value of PAGE_SIZE, so let's add a special symbol finder
that returns that.
Currently, if we don't get vmcoreinfo from /proc/kcore, and we can't get
it from /sys/kernel/vmcoreinfo, then we manually determine the kernel
release and KASLR offset. This has a couple of issues:
1. We look for vmlinux to determine the KASLR offset, which may not be
in a standard location.
2. We might want to start using other information from vmcoreinfo which
can't be determined as easily.
Instead, we can get the virtual address of vmcoreinfo from
/proc/kallsyms and read it directly from there.
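Conceptually, that lookup is just a scan of /proc/kallsyms for the
right symbol, something like this (a simplified sketch; the exact
symbol we look up and the error handling are glossed over):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
/* Sketch: find the address of a kernel symbol in /proc/kallsyms. Reading
 * real addresses requires enough privilege (see kptr_restrict). */
static int kallsyms_lookup(const char *name, uint64_t *addr_ret)
{
        FILE *file = fopen("/proc/kallsyms", "r");
        if (!file)
                return -1;
        int ret = -1;
        char line[256];
        while (fgets(line, sizeof(line), file)) {
                uint64_t addr;
                char type, symbol[128];
                if (sscanf(line, "%" SCNx64 " %c %127s", &addr, &type,
                           symbol) != 3)
                        continue;
                if (strcmp(symbol, name) == 0) {
                        *addr_ret = addr;
                        ret = 0;
                        break;
                }
        }
        fclose(file);
        return ret;
}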
kernel_module_iterator_next() can also fail in
open_loaded_kernel_modules(), so handle it in the same way that we
currently handle kernel_module_iterator_init().
/proc/kcore contains segments which don't have a valid physical address,
which it indicates with a p_paddr of -1. Skip those segments; otherwise,
we get an overflow error from the memory reader.
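In other words, when walking /proc/kcore's program headers, something
like this (a sketch):
#include <gelf.h>
/* Sketch: skip /proc/kcore LOAD segments that have no valid physical
 * address (p_paddr == -1) when registering physical memory segments. */
static void register_physical_segments(Elf *elf, size_t phnum)
{
        for (size_t i = 0; i < phnum; i++) {
                GElf_Phdr phdr_mem, *phdr = gelf_getphdr(elf, i, &phdr_mem);
                if (!phdr || phdr->p_type != PT_LOAD)
                        continue;
                if (phdr->p_paddr == (GElf_Addr)-1)
                        continue; /* No physical address. */
                /* ... register [p_paddr, p_paddr + p_memsz) here ... */
        }
}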
The current array-based memory reader has a bug in the following
scenario:
prog.add_memory_segment(0xffff0000, 128, ...)
# This should replace a subset of the first segment.
prog.add_memory_segment(0xffff0020, 32, ...)
# This moves the first segment back to the front of the array.
prog.read(0xffff0000, 32)
# This finds the first segment instead of the second segment.
prog.read(0xffff0020, 32)
Fix it by using the newly-added splay tree. This also splits up the
virtual and physical memory segments into separate trees.
This will be used to track memory segments instead of the array we
currently use. The API is based on the hash table API; it can support
alternative implementations in the future, like red-black trees.
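To give a feel for the intended usage, here's a hypothetical sketch
(the macro, types, and function names are all made up and only mirror
the hash table conventions):
#include <stdint.h>
#include "splay_tree.h" /* Hypothetical header. */
/* Hypothetical sketch: segments are keyed by start address, and a lookup
 * finds the last segment starting at or before an address, then checks
 * that the address is actually inside it. */
struct memory_segment {
        uint64_t address, size;
        /* ... read callback, argument, etc. ... */
        struct splay_node node;
};
DEFINE_SPLAY_TREE(memory_segment_tree, struct memory_segment, node, address)
static struct memory_segment *
find_segment(struct memory_segment_tree *tree, uint64_t address)
{
        struct memory_segment *segment =
                memory_segment_tree_search_le(tree, &address);
        if (segment && address - segment->address < segment->size)
                return segment;
        return NULL;
}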
This makes several improvements to the hash table API.
The first two changes make things more general in order to be consistent
with the upcoming binary search tree API:
- Items are renamed to entries.
- Positions are renamed to iterators.
- hash_table_empty() is added.
One change makes the definition API more convenient:
- It is no longer necessary to pass the types into
DEFINE_HASH_{MAP,SET}_FUNCTIONS().
A few changes take some good ideas from the C++ STL:
- hash_table_insert() now fails on duplicates instead of overwriting.
- hash_table_delete_iterator() returns the next iterator.
- hash_table_next() returns an iterator instead of modifying it.
One change reduces memory usage:
- The lower-level DEFINE_HASH_TABLE() is cleaned up and exposed as an
alternative to DEFINE_HASH_MAP() and DEFINE_HASH_SET(). This gets rid
of the duplicated key where a hash map value already embeds the key
(the DWARF index file table) and of the need to make a dummy hash set
entry to do a search (the pointer and array type caches); see the
sketch below.
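For instance, for the DWARF index file table, the lower-level form lets
the entry's own path serve as the key, roughly like this (a sketch; the
exact DEFINE_HASH_TABLE() parameters are assumptions):
/* Sketch; the exact DEFINE_HASH_TABLE() parameters are assumptions. The
 * entry's embedded path doubles as the key, so nothing is stored twice. */
struct dwarf_index_file {
        const char *path;
        /* ... Elf handle, section data, etc. ... */
};
static const char *
dwarf_index_file_to_key(const struct dwarf_index_file *file)
{
        return file->path;
}
DEFINE_HASH_TABLE(dwarf_index_file_table, struct dwarf_index_file,
                  dwarf_index_file_to_key, c_string_hash, c_string_eq)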
Currently, we load debug information for every kernel module that we
find under /lib/modules/$(uname -r)/kernel. This has a few issues:
1. Distribution kernels have lots of modules (~3000 for Fedora and
Debian).
a) This can exceed the default soft limit on the number of open file
descriptors.
b) The mmap'd debug information can trip the overcommit heuristics
and cause OOM kills.
c) It can take a long time to parse all of the debug information.
2. Not all modules are under the "kernel" directory; some distros also
have an "extra" directory.
3. The user is not made aware of loaded kernel modules that don't have
debug information available.
So, instead of walking /lib/modules, walk the list of loaded kernel
modules and look up their debugging information.
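For the running kernel, the simplest form of that walk is reading
/proc/modules, roughly as sketched below; for vmcores, the module list
has to come from kernel memory instead:
#include <stdio.h>
/* Sketch: enumerate the loaded modules of the running kernel and look up
 * debug information for each by name. */
static void for_each_loaded_module(void (*fn)(const char *name))
{
        FILE *file = fopen("/proc/modules", "r");
        if (!file)
                return;
        char line[512];
        while (fgets(line, sizeof(line), file)) {
                char name[64];
                if (sscanf(line, "%63s", name) == 1)
                        fn(name); /* e.g., find name.ko and its debug info. */
        }
        fclose(file);
}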
const char * const * is not compatible with char * const *, so turn
c_string_hash() and c_string_eq() into macros so that they work with
both const char * and char * keys.
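A sketch of why macros side-step the issue (hash_bytes() below is a
stand-in, not the real hash function):
#include <string.h>
/* Sketch: as macros, these work whether the key is a const char * or a
 * char *; a function parameter would have to commit to exactly one of
 * const char * const * and char * const *, which are incompatible.
 * hash_bytes() is a stand-in for whatever hash function is really used. */
#define c_string_eq(a, b) (strcmp(*(a), *(b)) == 0)
#define c_string_hash(key) hash_bytes(*(key), strlen(*(key)))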
Currently, size_t and ptrdiff_t default to typedefs of the default
unsigned long and long, respectively, regardless of what the program
actually defines unsigned long or long as. Instead, make them refer to
whichever integer type (long, long long, or int) is the same size as
the word size.
Currently, programs can be created for three main use-cases: core dumps,
the running kernel, and a running process. However, internally, the
program memory, types, and symbols are pluggable. Expose that as a
callback API, which makes it possible to use drgn in much more creative
ways.
Similar to "libdrgn: make memory reader pluggable with callbacks", we
want to support custom type indexes (imagine, e.g., using drgn to parse
a binary format). For now, this disables the DWARF index tests; we'll
have a better way to test them later, so let's not bother adding more
test scaffolding.
I've been planning to make memory readers pluggable (in order to support
use cases like, e.g., reading a core file over the network), but the
C-style "inheritance" drgn uses internally is awkward as a library
interface; it's much easier to just register a callback. This change
effectively makes drgn_memory_reader a mapping from a memory range to an
arbitrary callback. As a bonus, this means that read callbacks can be
mixed and matched; a part of memory can be in a core file, another part
can be in the executable file, and another part could be filled from an
arbitrary buffer.
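The resulting shape of the API is roughly the following (a sketch; the
exact callback signature and registration function are assumptions):
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include "drgn.h"
/* Sketch; the exact callback signature and registration function are
 * assumptions. This callback is backed by an in-memory buffer; other
 * ranges of the same program could be backed by a core file, an ELF
 * file, or a network connection, each with its own callback. */
static struct drgn_error *read_from_buffer(void *buf, uint64_t address,
                                           size_t count, uint64_t offset,
                                           void *arg, bool physical)
{
        memcpy(buf, (char *)arg + offset, count);
        return NULL;
}
static struct drgn_error *add_buffer_segment(struct drgn_program *prog,
                                             uint64_t address, char *buffer,
                                             uint64_t size)
{
        return drgn_program_add_memory_segment(prog, address, size,
                                               read_from_buffer, buffer,
                                               false);
}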
Currently, we deduplicate files for userspace mappings manually.
However, to prepare for adding symbol files at runtime, move the
deduplication to the DWARF index. In the future, we probably want to
deduplicate based on build ID, as well.
Relocations are only supposed to be applied to ET_REL files, not ET_EXEC
files like vmlinux. This hasn't been an issue with the kernel builds
that I've tested on because the relocations match the contents of the
section. However, on Fedora, the relocation sections don't match,
probably because they post-process the binary in some way. This leads to
completely bogus debug information being parsed by drgn_dwarf_index. Fix
it by only relocating ET_REL files.
We need to set the value after we've reinitialized the object;
otherwise, drgn_object_deinit() may try to free a buffer that we've
overwritten. This also adds a test which triggers the crash.
There's a bug where we don't allow comparisons between void * and
other pointer types, so let's fix it by allowing all pointer comparisons
regardless of the referenced type. Although this isn't valid by the C
standard, GCC and Clang both allow it by default (with a warning).
drgn has pretty thorough in-program documentation, but it doesn't have a
nice overview or introduction to the basic concepts. This commit adds
that using Sphinx. In order to avoid documenting everything in two
places, the libdrgn bindings have their docstrings generated from the
API documentation. The alternative would be to use Sphinx's autodoc
extension, but that's not as flexible and would also require building
the extension to build the docs. The documentation for the helpers is
generated using autodoc and a small custom extension.
I went back and forth on using setuptools or autotools for the Python
extension, but I eventually settled on using only setuptools after
fighting to get the two to integrate well. However, setuptools is kind
of crappy; for one, it rebuilds every source file when rebuilding the
extension, which is really annoying for development. automake is a
better designed build system overall, so let's use that for the
extension. We override the build_ext command to build using autotools
and copy things where setuptools expects them.
The declaration file name of a DIE depends on the compilation directory,
which may not always be what the user expects. Instead, make the search
match as long as the full declaration file name ends with the given file
name. This is more convenient and more intuitive.
Running tests with Clang's AddressSanitizer fails with "runtime error:
index 1 out of bounds for type 'struct drgn_type_member [0]'".
Zero-length arrays are a GCC extension and aren't buying us much
anyways, so
just add a helper function that gets the array payload using pointer
arithmetic.
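The helper amounts to something like this (a sketch; the name and the
layout here are assumptions):
/* Sketch with an assumed name and layout: the member array lives in the
 * same allocation, immediately after the type descriptor, and the helper
 * hides the pointer arithmetic that replaces the zero-length array. */
static inline struct drgn_type_member *
type_members_payload(struct drgn_type *type)
{
        return (struct drgn_type_member *)(type + 1);
}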
Older versions of Clang generate a call to __muloti4() for
__builtin_mul_overflow() with mixed signed and unsigned types. However,
Clang doesn't link to compiler-rt by default [1]. Work around it by
making all of our calls to __builtin_mul_overflow() use unsigned types
only.
1: https://bugs.llvm.org/show_bug.cgi?id=16404
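In practice, the workaround looks something like this (a sketch; the
function and parameter names are made up):
#include <stdbool.h>
#include <stdint.h>
/* Sketch; the names are made up. With one int64_t and one uint64_t
 * operand, older Clang lowers __builtin_mul_overflow() to a __muloti4()
 * call. Checking the sign first and converting to unsigned keeps the
 * overflow check in unsigned arithmetic only. */
static bool mul_index_size(int64_t index, uint64_t size, uint64_t *bytes)
{
        if (index < 0)
                return false;
        return !__builtin_mul_overflow((uint64_t)index, size, bytes);
}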
The current mixed Python/C implementation works well, but it has a
couple of important limitations:
- It's too slow for some common use cases, like iterating over large
data structures.
- It can't be reused in utilities written in other languages.
This replaces the internals with a new library written in C, libdrgn. It
includes Python bindings with mostly the same public interface as
before, with some important improvements:
- Types are now represented by a single Type class rather than the messy
polymorphism in the Python implementation.
- Qualifiers are a bitmask instead of a set of strings.
- Bit fields are not considered a separate type.
- The lvalue/rvalue terminology is replaced with reference/value.
- Structure, union, and array values are better supported.
- Function objects are supported.
- Program distinguishes between lookups of variables, constants, and
functions.
The C rewrite is about 6x as fast as the original Python when using the
Python bindings, and about 8x when using the C API directly.
Currently, the exposed API in C is fairly conservative. In the future,
the memory reader, type index, and object index APIs will probably be
exposed for more flexibility.