Since fdf32154fc, we no longer allow
missing modules in the initrd. Unfortunately, since before that commit,
the modules-closure script has also failed on missing firmware, which
is a very common case (e.g. xhci-pci.ko.xz lists renesas_usb_fw.mem as
a firmware dependency). Fix this by only issuing a warning instead.
This was already fixed on non-Darwin, but that fix missed that the issue
had also been reintroduced for the Darwin code path at the same time.
Fixes: dd5d2482c9 ("emacs: Fix accidental double wrapping")
Warning about future breaking changes is wrong.
- It suggests that the maintainers don't value backwards compatibility.
They do.
- It implies that other parts of Nixpkgs won't ever break. They will.
- It implies that a well-defined "public" interface exists. It doesn't.
- If the reasons above didn't apply, it should have been in the manual
instead.
Breaking changes will come, especially to the interface. Sometimes that is
the only way we can make progress without breaking the image _contents_.
I don't think dockerTools is any different from most of Nixpkgs in
these regards.
`buildRustPackage` currently accepts `cargoSha256` as a hash for
vendored dependencies. This change adds `cargoHash` which accepts SRI
hashes, setting `outputHashAlgo` to `null`.
The hash mismatch message still uses `cargoSha256` as an example,
which it probably should until we completely switch to SRI hashes.
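As a hedged illustration (package name, version, source and hash below are
placeholders, not taken from this change), a derivation using the new
attribute might look like:
```
# Hypothetical example; pname, version, src and the hash are placeholders.
rustPlatform.buildRustPackage {
  pname = "example-tool";
  version = "0.1.0";
  src = ./.; # placeholder source location
  # SRI hash of the vendored Cargo dependencies; with `cargoHash`,
  # outputHashAlgo stays null because the algorithm is encoded in the hash.
  cargoHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
}
```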
By default, Perl versions since 5.8.1 use randomization to make hashes
resistant to complexity attacks.
That randomization makes building VM images such as ubuntu1804x86_64
non-deterministic because the (imported) derivations built by
deb/deb-closure.pl are not stable.
This can easily be observed by repeating the following sequence of
commands and noting the path of the image's .drv:
nix-instantiate -E '(import <nixpkgs> {}).vmTools.diskImageFuns.ubuntu1804x86_64 {}'
nix-store --delete /nix/store/*ubuntu-18.04-bionic-amd64.nix
One source of non-determinism is the handling of Provides/Replaces,
which depends on the order of iteration over %packages. Here is a
diff showing the corresponding change in output:
>>> awk
-virtual awk: using original-awk
- original-awk: libc6 (>= 2.14)
+virtual awk: using mawk
+ mawk: libc6 (>= 2.14)
- mawk: libc6 (>= 2.14)
->>> libc6
This patch sorts packages by name for Provides/Replaces processing,
which seems to result in stable output.
(If the above turns out not to be sufficient, one could also set the
PERL_HASH_SEED and PERL_PERTURB_KEYS environment variables, documented
in 'perlrun', to disable Perl's built-in randomization. Complexity
attacks are not an issue as we control and trust all inputs.)
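For reference, a minimal sketch of that fallback (not the fix applied
here): the derivation running deb/deb-closure.pl could pin Perl's hash
behaviour through ordinary environment variables. The derivation name
below is hypothetical.
```
# Sketch only: disable Perl's hash randomization for a derivation that
# runs deb-closure.pl, instead of (or in addition to) sorting %packages.
stdenv.mkDerivation {
  name = "deb-closure-example";
  PERL_HASH_SEED = "0";    # use a fixed seed instead of a random one
  PERL_PERTURB_KEYS = "0"; # keep key iteration order stable
  # ...
}
```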
Using the full store hash as the random seed occasionally caused
reference cycles when the invocation was stored in output artifacts.
For example, cross-compiled gcc was failing due to this:
https://hydra.nixos.org/eval/1631713#tabs-now-fail
Simply truncating the hash is sufficient to avoid this.
When invoking a simple Ada program with `gcc` from `gnats10`, the
following warnings are shown:
```
$ gcc -c conftest.adb
gnat1: warning: command-line option ‘-Wformat=1’ is valid for C/C++/ObjC/ObjC++ but not for Ada
gnat1: warning: command-line option ‘-Wformat-security’ is valid for C/C++/ObjC/ObjC++ but not for Ada
gnat1: warning: ‘-Werror=’ argument ‘-Werror=format-security’ is not valid for Ada
$ echo $?
0
```
On its own this is merely spammy when compiling Ada programs inside a Nix
derivation, but certain configure scripts (such as the ./configure script
of the gcc that's built by coreboot's `make crossgcc` command) fail
entirely when they see that warning output.
https://nixos.wiki/wiki/Coreboot currently suggests manually running
> NIX_HARDENING_ENABLE="${NIX_HARDENING_ENABLE/ format/}" make crossgcc
… but actually teaching the nixpkgs-provided cc wrapper that `format`
isn't supported as a hardening flag seems to be the more canonical way
to do this in nixpkgs.
After this, Ada programs still compile:
```
$ gcc -c conftest.adb
$ echo $?
0
```
And the compiler output is empty.
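For comparison, the per-package escape hatch that already exists in stdenv
is `hardeningDisable`; a derivation compiling Ada code could have used
something like the sketch below (derivation name and inputs are
illustrative), but fixing the wrapper avoids repeating this everywhere.
```
# Workaround sketch using the existing stdenv hardening interface,
# made unnecessary by teaching the gnat cc wrapper about `format`.
stdenv.mkDerivation {
  name = "ada-example";
  nativeBuildInputs = [ gnat ];
  hardeningDisable = [ "format" ];
  # ...
}
```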
As @lopsided98 points out in #105305, since the hashes are now
target-sensitive, and until we find a reason to actually care what exactly
they are, we are best off just normalizing them away in the tests.
- Generate a link to the initramfs file with an appropriate file
extension, guessed based on the compressor by default
- Use correct metadata in u-boot images if generated, up to now this
was hardcoded to gzip and would silently generate an erroneous image
if another compressor was specified
- Document all the parameters
- Improve cross-building compatibility by allowing the "compressor"
argument to be either a string, as before, or a function that takes a
package set and returns the path to a compressor (see the sketch after
this list)
- Support more compression algorithms
- Place compressor executable function and arguments in passthru, for
reuse when appending initramfses
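A rough usage sketch, assuming the existing makeInitrd interface; the
zstd choice and the contents entry are illustrative only.
```
# Sketch: "compressor" may now be a function from a package set to the
# path of a compressor executable, which helps when cross-building.
makeInitrd {
  contents = [
    { object = "${busybox}/bin/busybox"; symlink = "/bin/busybox"; }
  ];
  compressor = pkgs: "${pkgs.zstd}/bin/zstd";
}
```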
Co-Authored-By: Dominik Xaver Hörl <hoe.dom@gmx.de>
Originally this was meant to support Windows versions other than just
Windows XP, but before I actually got a chance to implement that, I left
the project I had implemented this for.
The code has been broken for years now and I highly doubt anyone is
interested in resurrecting this (including me), so in order to make this
less of a maintenance burden for everybody, let's remove it.
Signed-off-by: aszlig <aszlig@nix.build>
Docker (via containerd) and the OCI Image Configuration imply and
suggest, respectively, that the architecture set in images matches the
GOARCH values from the Go language documentation.
This changeset updates the implementation of getArch in dockerTools to
return GOARCH values, to satisfy Docker.
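Illustrative only (not the literal code of getArch): the mapping is along
these lines, translating Nix platform names into GOARCH values such as
amd64 and arm64.
```
# Sketch of the kind of mapping involved; exact cases may differ.
{
  "x86_64-linux"      = "amd64";
  "aarch64-linux"     = "arm64";
  "armv7l-linux"      = "arm";
  "i686-linux"        = "386";
  "powerpc64le-linux" = "ppc64le";
}
```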
Fixes: #106695
If I'm running an Emacs executable from emacsWithPackages as my main
programming environment, and I'm hacking on Emacs, or the Emacs
packaging in Nixpkgs, or whatever, I don't want the Emacs packages
from the wrapper to show up in the load path of that child Emacs. It
results in differing behaviour depending on whether the child Emacs is
run from Emacs or from, for example, an external terminal emulator,
which is very surprising.
To avoid this, pass another environment variable containing the
wrapper site-lisp path, and use that value to remove the corresponding
entry in EMACSLOADPATH, so it won't be propagated to child Emacsen.
An empty entry in EMACSLOADPATH gets filled with the default value.
This is presumably why the wrapper inserted a colon after the entry it
added for the dependencies. But this naive approach wasn't always
correct.
For example, if the user ran emacs with EMACSLOADPATH=foo, the wrapper
would insert the default value (by adding the trailing `:') even
though the user was trying to expressly opt out of it.
To do this correctly, here I've replaced makeWrapper with a bespoke
script that will actually parse the EMACSLOADPATH provided in the
environment (if given), and insert the wrapper's load path just before
the default value. If EMACSLOADPATH is given but contains no default
value, we respect that and don't add the wrapped dependencies at all.
If no EMACSLOADPATH is given, we insert the wrapped dependencies
before the default value, just like before. In this way, the wrapped
Emacs should now behave as if the wrapped dependencies were part of
Emacs's default load-path value.
Previously, meta wasn't being passed through at all, because it's
removed from args without being used anywhere. This made it so that
rcirc-menu wasn't being marked as broken even though it was supposed
to be.
This patch copies the meta handling from melpaBuild, including the
default home page (adapted for ELPA).
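Roughly, as a hedged sketch rather than the verbatim change: take `meta`
out of the arguments, merge it over a default, and give ELPA packages an
elpa.gnu.org home page by default.
```
# Sketch of the intended behaviour; attribute plumbing follows melpaBuild.
meta = {
  homepage = "https://elpa.gnu.org/packages/${pname}.html";
} // (args.meta or {});
```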
Use "find -exec" to strip rather than "find … | xargs …". The former
ensures that stripping is attempted for each file, whereas the latter
will stop stripping at the first failure. Unstripped files can fool
runtime dependency detection and bloat closure sizes.
There are a few operations in this library that naively run on every
iteration even though they could be cached.
For a simple test repository with a small number of files and ~1000
gitignore patterns this brings memory usage down from ~233M to ~157M
and wall time from 2.6s down to 0.78s.
This should scale similarly with the number of files in a repository.
For example graphviz has chained symlinked manpages: dot2gxl.1 is
a symlink to gv2gxl.1 which is a symlink to gxl2gv.1
The second loop replaces each non-compressed symlink with a compressed
symlink. The target is determined with 'readlink -f', which follows
links recursively until the first name that is not a link (so either
the 'target name' or the first 'dangling' symlink).
This means that if the loop converted dot2gxl.1 before converting
gv2gxl.1, it would add a symlink `dot2gxl.1.gz->gxl2gv.1.gz`. If it
converted gv2gxl.1 first, it would instead add a
`dot2gxl.1.gz->gv2gxl.1.gz` symlink.
Both are 'correct', but it's weird that the result depends on the order
in which 'find' returns the files. This PR makes the behaviour
deterministic.
fixes #104708
This is a workaround for NixOS/nix#4295, which caused single-user Linux
Nix installations using sandboxed builds to start failing to build
fetchzip derivations after 4a5c49363a.
In short: removing write permissions for the entire directory is great,
except we then can't rename(2) it to the final Nix store path out of the
sandbox, because we don't have write permission on the directory and
thus cannot update the ".." directory entry.
Before this change, NUM_JOBS was set to 1. Some crates for building
C/C++ code (e.g. the cc and cmake crates) rely on this variable to
set the number of jobs. As a consequence, we were compiling embedded
libraries serially. Change this to NIX_BUILD_CORES to permit parallel
builds.
Prior discussion:
https://github.com/NixOS/nixpkgs/pull/50452#issuecomment-439407547
This provides an /etc/passwd and /etc/group that contain root and nobody.
It is useful when packaging binaries that insist on using NSS to look up
usernames/groups (like nginx).
The current nginx example used the `runAsRoot` parameter to set up
/etc/group and /etc/passwd (a parameter that doesn't exist in
buildLayeredImage), so we can now just use fakeNss there and switch to
buildLayeredImage.
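A minimal sketch of the updated pattern (image name and Cmd are
illustrative, not the exact example from the manual):
```
# fakeNss supplies /etc/passwd and /etc/group with root and nobody,
# so no runAsRoot step is needed and buildLayeredImage can be used.
dockerTools.buildLayeredImage {
  name = "nginx-example";
  contents = [ dockerTools.fakeNss nginx ];
  config.Cmd = [ "${nginx}/bin/nginx" "-g" "daemon off;" ];
}
```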
Informational messages belong on stderr, not on stdout and intermixed
with structured output for programmatic use.
Change-Id: I34d094d04460494e9ec8953db7490f4e2292d959
The dynamic loader on powerpc64 is called ld64.so.2 rather than
ld-linux.so.*, and was not matched by the existing pattern.
We reuse the dynamicLinker name from binutils to match a wider set
of platforms and to avoid specifying this information in two places.
The chroot environment under mnt had /dev and /sys via bind mounts,
but nothing set up /proc. The `--mount-proc` argument to unshare
defaults to /proc, which is outside of the chroot environment.
This adds -frandom-seed to each compiler invocation in stdenv. The
objective here is to make the compiler invocations produce the same output
every time they are called (for the same derivation). When the
-frandom-seed option is not set, the compiler will use a combination of
random numbers (in GCC's case from /dev/urandom) and the current time to
produce a "random" input per file. This can (among other things) lead to
different ordering of symbols in the produced object files.
For reasons of reproducibility we prefer having the same derivation
produce the exact same outputs. This is not a silver bullet, but it is one
way to tame the compiler.
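The effect is roughly as if every derivation compiled with a fixed,
derivation-specific seed; a hand-written equivalent would look like the
sketch below (the seed value is a placeholder, and this is not how the
wrapper actually injects the flag).
```
# Illustration only: the real change lives in the cc wrapper, which
# derives the seed from the derivation's output store path.
stdenv.mkDerivation {
  name = "example";
  NIX_CFLAGS_COMPILE = "-frandom-seed=xxxxxxxx"; # placeholder seed
  # ...
}
```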
I made a mistaken merge. Reverting it in c778945806 undid the state
on master, but now I realize it crippled the git merge mechanism.
As the merge contained a mix of commits from `master..staging-next`
and other commits from `staging-next..staging`, it got the
`staging-next` branch into a state that was difficult to recover from.
I reconstructed the "desired" state of staging-next tree by:
- checking out the last commit of the problematic range: 4effe769e2
- `git rebase -i --preserve-merges a8a018ddc0` - dropping the mistaken
merge commit and its revert from that range (while keeping
reapplication from 4effe769e2)
- merging the last unaffected staging-next commit (803ca85c20)
- fortunately no other commits have been pushed to staging-next yet
- applying a diff on staging-next to get it into that state
Teach installShellCompletion how to install completions from a named
pipe. Also add a convenience flag `--cmd NAME` that synthesizes the name
for each completion instead of requiring repeated `--name` flags.
Usage looks something like
installShellCompletion --cmd foobar \
--bash <($out/bin/foobar --bash-completion) \
--fish <($out/bin/foobar --fish-completion) \
--zsh <($out/bin/foobar --zsh-completion)
Fixes #83284
This hook moves systemd user service files from `lib/systemd/user` to
`share/systemd/user`. This is to allow systemd to find the user
services when installed into a user profile. The `lib/systemd/user`
path does not work since `lib` is not in `XDG_DATA_DIRS`.
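Packages that install user units under lib/ can then pull the hook into
their build; a hedged sketch follows (the hook attribute name and the
package are assumptions for illustration).
```
# Sketch: add the hook so units from `lib/systemd/user` end up in
# `share/systemd/user`, which systemd searches via XDG_DATA_DIRS.
stdenv.mkDerivation {
  pname = "some-user-service";
  version = "1.0";
  nativeBuildInputs = [ moveSystemdUserUnitsHook ];
  # ...
}
```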
Revert the hack and the original faulty commit.
This reverts commit 48264ee506105a2f5e61e5d327599e9f301bd77f.
Revert "Purity checking should accept $TMP and not just /tmp"
This reverts commit fb777be7d2.