As @lopsided98 points out in #105305, the hashes are now target
sensitive, and until we find a reason to care about their exact values,
we are best off just normalizing them away in the tests.
- Generate a link to the initramfs file with an appropriate file
extension, guessed based on the compressor by default
- Use correct metadata in u-boot images if generated; up to now this
was hardcoded to gzip and would silently generate an erroneous image
if another compressor was specified
- Document all the parameters
- Improve cross-building compatibility by allowing the "compressor"
argument to be either a string, as before, or a function that takes a
package set and returns the path to a compressor (see the sketch after
this list)
- Support more compression algorithms
- Place compressor executable function and arguments in passthru, for
reuse when appending initramfses
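A rough usage sketch (attribute values here are illustrative, not
taken from the actual defaults or tests):
```
# Sketch only: the compressor may be a function of a package set, so the
# right (possibly cross-compiled) package is chosen for the target.
makeInitrd {
  contents = [ { object = ./init; symlink = "/init"; } ];
  # compressor = "gzip";                       # a plain string, as before
  compressor = pkgs: "${pkgs.zstd}/bin/zstd";  # or a function of a package set
}
```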
Co-Authored-By: Dominik Xaver Hörl <hoe.dom@gmx.de>
Originally this was meant to support other Windows versions than just
Windows XP, but before I actually got a chance to implement this I left
the project that I implemented this for.
The code has been broken for years now and I highly doubt anyone is
interested in resurrecting this (including me), so in order to make this
less of a maintenance burden for everybody, let's remove it.
Signed-off-by: aszlig <aszlig@nix.build>
Docker (via containerd) and the OCI Image Configuration imply and
suggest, respectively, that the architecture set in images should match
the GOARCH values from the Go language documentation.
This changeset updates the implementation of getArch in dockerTools to
return GOARCH values, to satisfy Docker.
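As a rough sketch of the idea (not the exact code in dockerTools; the
mapping shown is illustrative and incomplete):
```
# Sketch: translate a Nix system name into the GOARCH value Docker expects.
getArch = system:
  {
    "x86_64-linux"  = "amd64";
    "aarch64-linux" = "arm64";
    "armv7l-linux"  = "arm";
    "i686-linux"    = "386";
  }.${system} or (throw "unsupported system: ${system}");
```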
Fixes: #106695
If I'm running an Emacs executable from emacsWithPackages as my main
programming environment, and I'm hacking on Emacs, or the Emacs
packaging in Nixpkgs, or whatever, I don't want the Emacs packages
from the wrapper to show up in the load path of that child Emacs. It
results in differing behaviour depending on whether the child Emacs is
run from Emacs or from, for example, an external terminal emulator,
which is very surprising.
To avoid this, pass another environment variable containing the
wrapper site-lisp path, and use that value to remove the corresponding
entry in EMACSLOADPATH, so it won't be propagated to child Emacsen.
An empty entry in EMACSLOADPATH gets filled with the default value.
This is presumably why the wrapper inserted a colon after the entry it
added for the dependencies. But this naive approach wasn't always
correct.
For example, if the user ran emacs with EMACSLOADPATH=foo, the wrapper
would insert the default value (by adding the trailing `:') even
though the user was trying to expressly opt out of it.
To do this correctly, here I've replaced makeWrapper with a bespoke
script that will actually parse the EMACSLOADPATH provided in the
environment (if given), and insert the wrapper's load path just before
the default value. If EMACSLOADPATH is given but contains no default
value, we respect that and don't add the wrapped dependencies at all.
If no EMACSLOADPATH is given, we insert the wrapped dependencies
before the default value, just like before. In this way, the wrapped
Emacs should now behave as if the wrapped dependencies were part of
Emacs's default load-path value.
Previously, meta wasn't being passed through at all, because it's
removed from args without being used anywhere. This made it so that
rcirc-menu wasn't being marked as broken even though it was supposed
to be.
This patch copies the meta handling from melpaBuild, including the
default home page (adapted for ELPA).
Use "find -exec" to strip rather than "find … | xargs …". The former
ensures that stripping is attempted for each file, whereas the latter
will stop stripping at the first failure. Unstripped files can fool
runtime dependency detection and bloat closure sizes.
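For illustration only, a sketch of the idiom in a hypothetical fixup
snippet (not the actual stdenv hook):
```
{
  postFixup = ''
    # Before: a single xargs invocation; a failing strip stops the rest.
    #   find "$out" -type f | xargs strip
    # After: stripping is attempted on each file individually.
    find "$out" -type f -exec strip {} \;
  '';
}
```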
There are a few operations in this library that naively run on every
iteration even though they could be cached.
For a simple test repository with a small number of files and ~1000
gitignore patterns this brings memory usage down from ~233M to ~157M
and wall time from 2.6s down to 0.78s.
This should scale similarly with the number of files in a repository.
For example, graphviz has chained symlinked manpages: dot2gxl.1 is
a symlink to gv2gxl.1, which is a symlink to gxl2gv.1.
The second loop replaces each non-compressed symlink with a compressed
symlink. The target is determined with 'readlink -f', which follows
links recursively until the first name that is not a link (so either
the 'target name' or the first 'dangling' symlink).
This means that if the loop converted dot2gxl.1 before converting
gv2gxl.1 it would add a symlink `dot2gxl.1.gz->gxl2gv.1.gz`. When
it converted gv2gxl.1 first, it would then add a
`dot2gxl.1.gz->gv2gxl.1.gz` symlink.
Both are 'correct', but it's weird the result depends on the order
in which 'find' returns the files. This PR makes the behaviour
deterministic.
fixes #104708
This is a workaround for NixOS/nix#4295, which caused single-user Linux
Nix installations using sandboxed builds to start failing to build
fetchzip derivations after 4a5c49363a.
In short: removing write permissions for the entire directory is great,
except we then can't rename(2) it to the final Nix store path out of the
sandbox, because we don't have write permission on the directory and
thus cannot update the ".." directory entry.
Before this change, NUM_JOBS was set to 1. Some crates for building
C/C++ code (e.g. the cc and cmake crates) rely on this variable to
set the number of jobs. As a consequence, we were compiling embedded
libraries serially. Change this to NIX_BUILD_CORES to permit parallel
builds.
Prior discussion:
https://github.com/NixOS/nixpkgs/pull/50452#issuecomment-439407547
This provides an /etc/passwd and /etc/group that contain root and
nobody.
Useful when packaging binaries that insist on using NSS to look up
usernames/groups (like nginx).
The current nginx example used the `runAsRoot` parameter to set up
/etc/group and /etc/passwd (a parameter that doesn't exist in
buildLayeredImage), so we can now just use fakeNss there and switch to
buildLayeredImage.
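A hedged example of the kind of usage this enables (the nginx
configuration and exact attribute values are illustrative):
```
# Illustrative: give nginx the passwd/group entries it expects without
# resorting to runAsRoot, and build a layered image instead.
dockerTools.buildLayeredImage {
  name = "nginx";
  contents = [ dockerTools.fakeNss nginx ];
  config = {
    Cmd = [ "nginx" "-c" "/etc/nginx/nginx.conf" ];
    ExposedPorts = { "80/tcp" = { }; };
  };
}
```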
Informational messages belong on stderr, not on stdout, where they get
intermixed with structured output intended for programmatic use.
Change-Id: I34d094d04460494e9ec8953db7490f4e2292d959
The dynamic loader on powerpc64 is called ld64.so.2 rather than
ld-linux.so.*, and was not matched by the existing pattern.
We reuse the dynamicLinker name from binutils to match a wider set
of platforms and to avoid specifying this information in two places.
This adds -frandom-seed to each compiler invocation in stdenv. The
objective here is to make the compiler invocations produce the same
output every time they are called (for the same derivation). When the
-frandom-seed option is not set, the compiler uses a combination of
random numbers (in GCC's case from /dev/urandom) and the current time
to produce a "random" input per file. This can (among other things)
lead to different ordering of symbols in the produced object files.
For reasons of reproducibility we prefer having the same derivation
produce the exact same outputs. This is not a silver bullet, but it is
one way to tame the compiler.
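The idea, in a heavily simplified sketch (this is not the actual
stdenv hook; deriving the seed from the output path is illustrative):
```
# Sketch: any value that is stable for a given derivation works as a
# seed; here the output store path itself is used for illustration.
stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = ./.;
  NIX_CFLAGS_COMPILE = "-frandom-seed=${placeholder "out"}";
}
```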
I made a merge by mistake. Reverting it in c778945806 undid the state
on master, but now I realize it crippled the git merge mechanism.
As the merge contained a mix of commits from `master..staging-next`
and other commits from `staging-next..staging`, it got the
`staging-next` branch into a state that was difficult to recover from.
I reconstructed the "desired" state of staging-next tree by:
- checking out the last commit of the problematic range: 4effe769e2
- `git rebase -i --preserve-merges a8a018ddc0` - dropping the mistaken
merge commit and its revert from that range (while keeping
reapplication from 4effe769e2)
- merging the last unaffected staging-next commit (803ca85c20)
- fortunately no other commits have been pushed to staging-next yet
- applying a diff on staging-next to get it into that state
Teach installShellCompletion how to install completions from a named
pipe. Also add a convenience flag `--cmd NAME` that synthesizes the name
for each completion instead of requiring repeated `--name` flags.
Usage looks something like:
  installShellCompletion --cmd foobar \
    --bash <($out/bin/foobar --bash-completion) \
    --fish <($out/bin/foobar --fish-completion) \
    --zsh <($out/bin/foobar --zsh-completion)
Fixes #83284
This hook moves systemd user service files from `lib/systemd/user` to
`share/systemd/user`. This is to allow systemd to find the user
services when installed into a user profile. The `lib/systemd/user`
path does not work since `lib` is not in `XDG_DATA_DIRS`.
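A hypothetical illustration of what the hook does during fixup (the
real hook may differ in name and details):
```
# Hypothetical sketch of the move; not the actual hook implementation.
{
  postFixup = ''
    if [ -d "$out/lib/systemd/user" ]; then
      mkdir -p "$out/share/systemd"
      mv "$out/lib/systemd/user" "$out/share/systemd/user"
    fi
  '';
}
```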
Revert the hack and the original faulty commit.
This reverts commit 48264ee506105a2f5e61e5d327599e9f301bd77f.
Revert "Purity checking should accept $TMP and not just /tmp"
This reverts commit fb777be7d2.
This adds an option to skip adding --copt and --linkopt to Bazel
flags. In some cases Bazel doesn’t like these flags, for example when a
custom toolchain is being used (as opposed to the builtin one). The
compiler can still get the dependencies since it is invoked through the
gcc wrapper and picks up NIX_CFLAGS_COMPILE / NIX_LDFLAGS.
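A hedged usage sketch; the option name below is an assumption based on
this description, not confirmed by it:
```
# Illustrative fragment: skip injecting --copt/--linkopt when a custom
# toolchain already picks up NIX_CFLAGS_COMPILE / NIX_LDFLAGS via the
# cc wrapper. The attribute name here is assumed, not confirmed.
buildBazelPackage {
  # ... usual arguments (src, bazel target, fetch hash, ...) elided ...
  dontAddBazelOpts = true;
}
```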
This enables short argument attrsets similar to fetchPypi:
  src = fetchCrate {
    inherit pname version;
    sha256 = "02h8pikmk19ziqw9jgxxf7kjhnb3792vz9is446p1xfvlh4mzmyx";
  };
Before this commit, the firmware information would be loaded from the
currently running kernel, not from the kernel to be loaded.
This commit ensures the correct kernel version and modules are read.
After a recent upgrade of modinfo, its output is now incorrect for
builtin modules. This commit filters out the incorrect output until a
fix is made available upstream.
symlinkJoin can break (silently) when the passed paths contain symlinks
to directories. This should work now.
Down-side: when lib/tmpfiles.d doesn't exist for some passed package,
the error message is a little less explicit, because we never get
to the postBuild phase (and symlinkJoin doesn't provide a better way):
/nix/store/HASH-NAME/lib/tmpfiles.d: No such file or directory
Also, it seemed pointless to create symlinks for whole package trees
and use only a part of the result (usually a very small part).
* writers.makeScriptWriter: fix on Darwin/macOS
On Darwin a script cannot be used as an interpreter in a shebang line, which
causes scripts produced with makeScriptWriter (and its derivatives) to fail at
run time if the used interpreter was wrapped with makeWrapper (as in the case
of python3.withPackages).
This commit fixes the problem by detecting if the interpreter is a script
and prepending its shebang to the final interpreter line.
For example, if the used interpreter is:
```
/nix/store/ynwv137n2650qy39swcflxbcygk5jwv1-python3-3.8.3-env/bin/python
```
which is a script with the following shebang:
```
#! /nix/store/knd85yc7iwli8344ghav3zli8d9gril0-bash-4.4-p23/bin/bash -e
```
then the shebang line in the produced script will be:
```
#! /nix/store/knd85yc7iwli8344ghav3zli8d9gril0-bash-4.4-p23/bin/bash -e /nix/store/ynwv137n2650qy39swcflxbcygk5jwv1-python3-3.8.3-env/bin/python
```
This works on Darwin since there does not seem to be a limit on the
length of the shebang line, and shebang lines support multiple
arguments to the interpreter (as opposed to Linux, where the kernel
imposes a strict limit on shebang length and everything following the
interpreter is passed to it as a single string).
fixes: #93609
related to: #65351, #11133 (and probably a bunch of others)
NOTE: scripts produced on platforms other than Darwin will remain
unmodified by this PR. However, it might be worth considering extending
this fix to BSD systems in general. I didn't do it since I have no way
of testing it on systems other than macOS and Linux.
* writers.makeScriptWriter: fix typo in comment
* writers.makeScriptWriter: fail build if interpreter of interpreter is a script
"bazel fetch" will, by default, fetch everything that _might_ be used,
including things that will later be discarded due to the way the build
is configured.
Concretely, this means that for some builds of Java packages, this will
avoid failures where the builder tries to retrieve the JDK from /usr/share/java
(or equivalent).
This also means that for most packages we can fetch _fewer_ dependencies,
since the standard tree pruning for artifacts to fetch will take effect.
fetchConfigured is disabled by default because it changes the fetch
hashes of tensorflow/tensorflow2 (it ends up fetching less).
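A hedged usage sketch (the attribute name is as introduced here; the
fetch output hash must be recomputed when toggling it):
```
# Illustrative fragment: run "bazel fetch" against the configured build
# instead of fetching everything that might be used.
buildBazelPackage {
  # ... usual arguments (src, bazel target, fetch hash, ...) elided ...
  fetchConfigured = true;
}
```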