I originally thought it would be enough to just check for an INTERP
section in isExecutable. However, this would mean that we don't detect
statically linked ELF files, which would break our recent improvement to
gracefully handle those.
In theory, we are only interested in ELF files that have an INTERP
section, so checking for INTERP would be enough. Unfortunately, the
isExecutable function is already used outside of autoPatchelfHook, so we
can't easily get rid of it now. So let's strive for more correctness and
make isExecutable actually match ELF files that are executable.
So what we're doing now instead is to check whether the ELF type is
EXEC *or* there is an INTERP section; if either of the two is true, we
should have an ELF executable, even if it is statically linked.
Along the way I also set LANG=C for the invocations of readelf, just to
be sure we don't get locale-dependent output.
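A minimal sketch of the resulting check (illustrative only, not the
exact hook code), assuming GNU binutils readelf:

    # Treat a file as an ELF executable if its ELF type is EXEC or if it
    # has an INTERP program header (this also covers PIE executables,
    # which are of type DYN but still request an interpreter).
    isExecutable() {
        local header
        header="$(LANG=C readelf -h -l "$1" 2> /dev/null)" || return 1
        grep -q 'Type:[[:space:]]*EXEC' <<< "$header" ||
            grep -q 'INTERP' <<< "$header"
    }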
Tested this with the following command (which contains almost[1] all the
packages using autoPatchelfHook), checking whether we run into any
library-related errors:
nix-build -E 'with import ./. { config.allowUnfree = true; };
  runCommand "test-executables" {
    drvs = [
      anydesk cups-kyodialog3 elasticsearch franz gurobi
      masterpdfeditor oracle-instantclient powershell reaper
      sourcetrail teamviewer unixODBCDrivers.msodbcsql17 virtlyst
      vk-messenger wavebox zoom-us
    ];
  } ("for i in $drvs; do for b in $i/bin/*; do " +
     "[ -x \"$b\" ] && timeout 10 \"$b\" || :; done; done")
'
Apart from testing against library-related errors I also compared the
resulting store paths against the ones prior to this commit. Only
anydesk and virtlyst had the same store paths, as they don't have
self-references; everything else differed only because of
self-references, except elasticsearch, which had the following PIE
binaries:
* modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/autoconfig
* modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/autodetect
* modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/categorize
* modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/controller
* modules/x-pack/x-pack-ml/platform/linux-x86_64/bin/normalize
These binaries were now patched, which is what this commit is all about.
[1]: I didn't include the "maxx" package (MaXX Interactive Desktop)
     because the upstream URLs no longer exist and I couldn't find the
     files elsewhere on the web.
Signed-off-by: aszlig <aszlig@nix.build>
Fixes: https://github.com/NixOS/nixpkgs/issues/48330
Cc: @gnidorah (for MaXX Interactive Desktop)
- respect libc’s incdir and libdir
- make non-unix systems single threaded
- set LIMITS_H_TEST to false for avr
- misc updates to support new libcs
- use multilib with avr
For threads we want to use (see the sketch after this list):
- posix on unix systems
- win32 on windows
- single on everything else
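A rough sketch of how this maps onto GCC's --enable-threads configure
flag (illustrative only; the real logic lives in the gcc Nix
expressions, and $target is just a placeholder for the target triplet):

    case "$target" in
        *-mingw32 | *-windows*)          threads=win32  ;;  # win32 on windows
        *-linux-* | *-darwin* | *-*bsd*) threads=posix  ;;  # posix on unix systems
        *)                               threads=single ;;  # single elsewhere
    esac
    configureFlags="$configureFlags --enable-threads=$threads"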
For avr:
- add library directories for avrlibc
- disable relro and bindnow
- avr5 should take precedence over avr3; otherwise GCC uses the wrong one
Usage: add breakpointHook to your `buildInputs` like this:
stdenv.mkDerivation rec {
  # ...
  buildInputs = [ breakpointHook ];
}
When the build fails, as shown in this example:
pkgs.hello.overrideAttrs (old: {
  buildInputs = [ breakpointHook ];
  postPatch = ''
    false
  '';
});
It will halt execution, printing the following message:
build failed in patchPhase with exit code 1
To attach to this build run the following command as root:
cntr attach -t command cntr-/nix/store/ynyb4n82x2r7sldd58pbb405jdqh5f00-hello-2.10
Installing cntr and running the command will provide shell access to the
build sandbox of the failed build:
sudo cntr attach -t command cntr-/nix/store/ynyb4n82x2r7sldd58pbb405jdqh5f00-hello-2.10
WARNING: bad ownership on /nix/var/nix/profiles/per-user/root, should be 1000
[nixbld@localhost:/var/lib/cntr]$
At /var/lib/cntr the sandbox filesystem is mounted. All commands and
files of the system are still accessible within the shell.
To execute commands from the sandbox use the `cntr exec` subcommand.
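For example (a hypothetical invocation, assuming `cntr exec` takes the
command and its arguments directly and runs them with the sandbox's
environment):

    [nixbld@localhost:/var/lib/cntr]$ cntr exec sh -c 'echo $PATH'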
Bazel computes the default value of output_user_root before parsing the
flag[0]. The computation of the default value involves getting the $USER
from the environment. I don't have that variable when building with
the sandbox enabled.
[0]: 9323c57607/src/main/cpp/startup_options.cc (L123-L124)
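One way around this (a sketch, not necessarily the exact change made
here; the build target //:hello is hypothetical) is to give bazel a USER
value, or to bypass the USER-based default entirely via the startup
flag:

    # any non-empty value works; bazel only uses it to compute the
    # default output_user_root
    export USER=nixbld
    # or sidestep the default altogether:
    bazel --output_user_root="$PWD/bazel-root" build //:hello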
Create a many-layered Docker Image.
Implements much less than buildImage:
- Doesn't support specific uids/gids
- Doesn't support running commands after building
- Doesn't require qemu
- Doesn't create mutable copies of the files in the path
- Doesn't support parent images
If you want those features, I recommend using buildLayeredImage as an
input to buildImage.
Notably, it does support:
- Caching low-level, common paths based on a graph traversal
algorithm, see referencesByPopularity in
0a80233487993256e811f566b1c80a40394c03d6
- Configurable number of layers. If you're not using AUFS or not
extending the image, you can specify a larger number of layers at
build time:
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  maxLayers = 128;
  config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
};
- Parallelized creation of the layers, improving build speed.
- The contents of the image include the closure of the configuration,
so you don't have to specify paths in contents and config.
With buildImage, paths referred to by the config were not included
automatically in the image. Thus, if you wanted to call Git, you
had to specify it twice:
pkgs.dockerTools.buildImage {
  name = "hello";
  contents = [ pkgs.gitFull ];
  config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
};
buildLayeredImage on the other hand includes the runtime closure of
the config when calculating the contents of the image:
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  config.Cmd = [ "${pkgs.gitFull}/bin/git" ];
};
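For reference, loading the result works the same way as with
buildImage; the build produces a layered image tarball (an illustrative
invocation from a nixpkgs checkout):

    nix-build -E 'with import ./. { }; dockerTools.buildLayeredImage {
      name = "hello";
      config.Cmd = [ "${gitFull}/bin/git" ];
    }'
    docker load < result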
Minor Problems
- If any of the store paths change, every layer will be rebuilt in
the nix-build. However, because the layers are bit-for-bit
reproducible, when these images are loaded into Docker they will
match existing layers and not be imported or uploaded twice.
Common Questions
- Aren't Docker layers ordered?
No. People who have used a Dockerfile before assume Docker's
Layers are inherently ordered. However, this is not true -- Docker
layers are content-addressable and are not explicitly layered until
they are composed into an Image.
- What happens if I have more than maxLayers of store paths?
The first (maxLayers-2) most "popular" paths will have their own
individual layers, then layer #(maxLayers-1) will contain all the
remaining "unpopular" paths, and finally layer #(maxLayers) will
contain the Image configuration.
Using a simple algorithm, convert the references to a path into a
sorted list of dependent paths based on how often they're referenced
and how deep in the tree they live. Equally-"popular" paths are then
sorted by name.
The existing writeReferencesToFile prints the paths in simple ASCII
sort order.
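For comparison, that flat listing is essentially just the closure in
plain store-path order, e.g. (illustrative, run from a nixpkgs
checkout):

    # the whole closure of a build result, sorted by store path only
    nix-store --query --requisites "$(nix-build -A hello)" | sort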
Sorting the paths by graph improves the chances that the difference
between two builds appears near the end of the list, instead of near
the beginning. This makes a difference for Nix builds which export a
closure for another program to consume, if that program implements its
own level of binary diffing.
For example, Docker Images: if each store path is a separate layer
then Docker Images can be very efficiently transferred between systems,
and we get very good cache reuse between images built with the same
version of Nixpkgs. However, since Docker only reliably supports a
small number of layers (42) it is important to pick the individual
layers carefully. By storing very popular store paths in the first 40
layers, we improve the chances that the next Docker image will share
many of those layers.*
Given the dependency tree:
A - B - C - D -\
 \   \   \      \
  \   \   \      \
   \   \    - E ---- F
    \- G
Nodes which have multiple references are duplicated:
A - B - C - D - F
 \   \   \
  \   \   \- E - F
   \   \
    \   \- E - F
     \
      \- G
Each leaf node is now replaced by a counter defaulted to 1:
A - B - C - D - (F:1)
 \   \   \
  \   \   \- E - (F:1)
   \   \
    \   \- E - (F:1)
     \
      \- (G:1)
Then each leaf counter is merged with its parent node, replacing the
parent node with a counter of 1 and incrementing each existing counter
by 1. That is to say `- D - (F:1)` becomes `- (D:1, F:2)`:
A - B - C - (D:1, F:2)
 \   \   \
  \   \   \- (E:1, F:2)
   \   \
    \   \- (E:1, F:2)
     \
      \- (G:1)
Then each leaf counter is merged with its parent node again, merging
any counters, then incrementing each:
A - B - (C:1, D:2, E:2, F:5)
 \   \
  \   \- (E:1, F:2)
   \
    \- (G:1)
And again:
A - (B:1, C:2, D:3, E:4, F:8)
 \
  \- (G:1)
And again:
(A:1, B:2, C:3, D:4, E:5, F:9, G:2)
and the paths then have the following "popularity":
A 1
B 2
C 3
D 4
E 5
F 9
G 2
and the popularity contest would result in the paths being printed as:
F
E
D
C
B
G
A
* Note: People who have used a Dockerfile before assume Docker's
Layers are inherently ordered. However, this is not true -- Docker
layers are content-addressable and are not explicitly layered until
they are composed into an Image.
This causes problems for packages built using a bootstrap stdenv,
resulting in references to /bin/sh or even bootstrap-tools. The darwin
stdenv is much stricter about what requisites/references are allowed but
using /bin/sh on linux is also undesirable.
e.g. https://hydra.nixos.org/build/81754896
$ nix-build -A xz
$ head -n1 result-bin/bin/xzdiff
#!/nix/store/yvc7kmw98kq547bnqn1afgyxm8mxdwhp-bootstrap-tools/bin/sh
This reverts commit f06942327a.
This reverts commit f777d2b719.
cc #34409
This breaks evaluation of the tested job:
attribute 'diskInterface' missing, at /nix/store/5k9kk52bv6zsvsyyvpxhm8xmwyn2yjvx-source/pkgs/build-support/vm/default.nix:316:24
This includes the initial commit, which was done by @Mic92, plus a few
fixes from my side. So essentially this avoids patching statically
linked executables and also speeds up searching for ELF files
altogether.
I've tested this by comparing the outputs of all the derivations which
make use of this hook using the following Nix expression:
let
  getPackagesForRev = rev: with import (builtins.fetchGit {
    url = ./.;
    inherit rev;
  }) { config.allowUnfree = true; }; [
    cups-kyodialog3 elasticsearch franz gurobi javacard-devkit
    masterpdfeditor maxx oracle-instantclient powershell reaper
    teamviewer unixODBCDrivers.msodbcsql17 virtlyst wavebox zoom-us
  ];
  pkgs = import <nixpkgs> {};
  baseRev = "ef764eb0d8314b81a012dae04642b4766199956d";
in pkgs.runCommand "diff-contents" {
  chset = pkgs.lib.zipListsWith (old: new: pkgs.runCommand "diff" {
    inherit old new;
    nativeBuildInputs = [ pkgs.nukeReferences ];
  } ''
    mkdir -p "''${NIX_STORE#/}"
    cp --no-preserve=all -r "$old" "''${NIX_STORE#/}"
    cp --no-preserve=all -r "$new" "''${NIX_STORE#/}"
    find "''${old#/}" "''${new#/}" \
      \( -type f -exec nuke-refs {} + \) -o \( -type l -delete \)
    mkdir "$out"
    echo "$old" > "$out/old-path"
    echo "$new" > "$out/new-path"
    diff -Nur "''${old#/}" "''${new#/}" > "$out/diff" || :
  '') (getPackagesForRev baseRev) (getPackagesForRev "");
} ''
  err=0
  for c in $chset; do
    if [ -s "$c/diff" ]; then
      echo "$(< "$c/old-path") -> $(< "$c/new-path")" \
        "differs, report: $c/diff" >&2
      err=1
    fi
  done
  [ $err -eq 0 ] && touch "$out"
''
With these changes there is only one derivation with altered contents,
namely "franz". However, the reason it has differing contents is not
directly because of the autoPatchelfHook changes, but
because the "env-vars" file from the builder is in
"$out/opt/franz/env-vars" (Cc: @gnidorah) and we now have different
contents for NIX_CFLAGS_COMPILE and other environment variables.
I also tested this against a random static binary and the hook no longer
tries to patch it.
Merges: #47222
The "maxx" package recursively runs isExecutable on a bunch of files and
since the change to use "readelf" instead of "file" a lot of errors like
this one are printed during build:
readelf: Error: Not an ELF file - it has the wrong magic bytes at the
start
While isExecutable was never meant to be used outside of
autoPatchelfHook, it's still a good idea to silence the errors, because
whenever readelf fails it clearly indicates that the file in question
is not a valid ELF file.
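The silencing itself is just a matter of discarding readelf's
diagnostics (a sketch, not the exact hook code), so non-ELF files merely
produce a non-zero exit status instead of an error message:

    LANG=C readelf -h -l "$file" 2> /dev/null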
Signed-off-by: aszlig <aszlig@nix.build>
If the ELF file is not an executable, we do not get a PT_INTERP section,
because after all, it's a *shared* library.
So instead of checking all ELF files for PT_INTERP (to avoid statically
linked executables), we add another check to see whether the file is an
executable and *only* skip it when it is and there is no PT_INTERP.
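Roughly, the resulting per-file logic looks like this (an illustrative
sketch; autoPatchelfFile stands in for the real patching step):

    progbits="$(LANG=C readelf -l "$file" 2> /dev/null)"
    if isExecutable "$file" && ! grep -q INTERP <<< "$progbits"; then
        # an executable without PT_INTERP is statically linked: skip it
        echo "skipping statically linked executable: $file" >&2
    else
        # shared libraries (and dynamic executables) still get patched
        autoPatchelfFile "$file"
    fi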
Signed-off-by: aszlig <aszlig@nix.build>
The `overrideScope` bound by `makeScope` (via special `callPackage`)
took an override in the form `super: self: { … }`. But this is
dangerously close to the `self: super: { … }` form used by *everything*
else, even other definitions of `overrideScope`! Since that
implementation did not share any code with the others either until I
changed it recently in 3cf43547f4, this inconsistency is almost
certainly an oversight and not intentional.
Unfortunately, just as the inconsistency is hard to debug if one just
assumes the conventional order, any sudden fix would break existing
overrides in the same hard-to-debug way. So instead of changing the
definition, a new `overrideScope'` with the conventional order is added,
and the old `overrideScope` is deprecated with a warning saying to use
`overrideScope'` instead. That will hopefully get people to stop using
`overrideScope`, freeing our hand to change or remove it in the future.
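A minimal sketch of the new calling convention (a hypothetical
two-member scope, evaluated from a nixpkgs checkout):

    nix-instantiate --eval -E "
      with import ./. { };
      # overriding 'a' via the new self-first argument order also updates
      # 'b', which is defined in terms of self.a; this prints 11
      ((lib.makeScope newScope (self: { a = 1; b = self.a + 1; }))
        .overrideScope' (self: super: { a = 10; })).b
    "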
02c09e0171 (NixOS/nixpkgs#44558) was reverted in
c981787db9 but, as it turns out, it fixed an issue
I didn't know about at the time: the values of `propagateDoc` options were
(and now again are) inconsistent with the underlying things those wrappers wrap
(see NixOS/nixpkgs#46119), which was (and now is) likely to produce more instances
of NixOS/nixpkgs#43547, if not now, then eventually as stdenv changes.
This patch (which is a simplified version of the original reverted patch) is the
simplest solution to this whole thing: it forces wrappers to directly inspect the
outputs of the things they are wrapping instead of making stdenv guess the correct
values.