1. Update bower2nix version and add new/updated dependencies into
node-packages-generated.nix. This was done manually, with npm2nix
generating the initial set of derivations. In future, it would be
nice to have an automatic process (see #10358, #9332).
2. Add an override to nodePackages.bower2nix wrapping the commands so
that git is on the PATH.
3. Update fetchbower to support new command-line options of bower2nix,
and to allow github URL tag versions.
Previously, nix-prefetch-git would report the same JSON whether or not
submodules were fetched. With this change, passing --fetch-submodules
makes the JSON output include "fetchSubmodules": true, so that
fetchgit (builtins.fromJSON (builtins.readFile ./path/to/output.json))
works.
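A minimal sketch of the round trip (file name and URL are placeholders;
it assumes the JSON lands on stdout and contains only attributes that
fetchgit accepts):
```
# Outside of Nix:
#   nix-prefetch-git --fetch-submodules https://example.org/repo.git > repo.json
# Then let fetchgit consume the recorded url/rev/sha256/fetchSubmodules:
{ fetchgit }:

fetchgit (builtins.fromJSON (builtins.readFile ./repo.json))
```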
A recent Perl version made "keys" return hash keys in random order.
Since some packages are resolved via "provides" and depend on that
order, this randomness affects which packages get into the closure.
This problem may also exist in other Nix Perl scripts.
The importance of glibc makes it worthwhile to provide debug
symbols. However, this revealed an issue with separateDebugInfo: it
was indiscriminately adding --build-id to all ld invocations, while in
fact it should only do that for final links. Glibc also uses non-final
("relocatable") links, leading to subsequent failure to apply a build
ID ("Cannot create .note.gnu.build-id section, --build-id
ignored"). So now ld-wrapper.sh only passes --build-id for final
links.
Otherwise, when building glibc and other packages, the "strip" from
bootstrapTools is used, which doesn't recognise some tags produced by
the newer "ld" from binutils.
The two lines I removed technically assert the exact same thing, since
`!a -> b` is equivalent to `a || b`. So I replaced them with the more
symmetric form to make this clearer.
This commit fixes #6651.
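For illustration, the equivalence above expressed in Nix (the flag
names are made up):
```
# The two asserts accept and reject exactly the same configurations,
# since `!a -> b` is equivalent to `a || b`; the second form is the
# more symmetric one.
{ enableFoo ? false, enableBar ? true }:

assert !enableFoo -> enableBar;   # implication form
assert enableFoo || enableBar;    # equivalent, symmetric form
"ok"
```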
Before this change the `nix-prefetch-git` script would use a different
store name than Nix's `fetchgit` function. Because of that it was not
possible to use `nix-prefetch-git` as a way to pre-populate the store
(for example when the user is using private git dependencies that need
access to the ssh agent).
In some cases the $sourceRoot is missing. Skip the hook instead
of showing the following cryptic error:
find: cannot search `': No such file or directory
/nix/store/0p1afvl8jcpi6dvsq2n58i90w9c59vz1-set-source-date-epoch-to-latest.sh: line 12: [: : integer expression expected
vcunat removed the warning; the hook will just skip silently in these cases.
Perhaps someone can improve on it some time.
Regression introduced by 4529ed1259.
I missed this in #5096, not because of a messed-up rebase as I had
guessed from a comment on #12635, but because I simply missed it in the
first place. The testing I did while working on that pull request
wasn't exhaustive enough to cover this, because I hadn't tested with
packages that use the propagatedUserEnvPkgs attribute.
In order to make the test a bit more exhaustive this time, let's test it
using:
nix-build -E 'with import ./. {}; buildEnv {
  name = "testenv";
  paths = [
    pkgs.hello pkgs.binutils pkgs.libsoup pkgs.gnome3.yelp
    pkgs.gnome3.totem
  ];
}'
And with this commit the errors no longer show up and the environment is
built correctly.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Fixes: #12635
- Now `pkg.outputUnspecified = true`, but this attribute is missing in
every individual output, so we can recognize whether the user chose a
specific output or not. If (s)he didn't choose, we put
`pkg.bin or pkg.out or pkg` into `systemPackages` (see the sketch after
this list).
- `outputsToLink` is replaced by `extraOutputsToLink`.
We add extra outputs *regardless* of whether the user chose anything.
It's mainly meant for outputs with docs and debug symbols.
- Note that as a result, some libraries will disappear from system path.
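A simplified sketch of the selection in the first item (the function
and its argument are hypothetical, not the actual module code):
```
# For each configured package: if the user did not pick an output
# explicitly (`outputUnspecified` is present on the package itself but
# missing from every individual output), prefer .bin, then .out, then
# the package as a whole.
{ systemPackages }:

map (pkg:
  if pkg.outputUnspecified or false
  then pkg.bin or (pkg.out or pkg)
  else pkg
) systemPackages
```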
* authorization token is optional
* registry url is taken from X-Docker-Endpoints header
* pull.sh correctly resumes partial layer downloads
* detjson.py does not fail on missing keys
The comment related to the `deepClone` and `no-deepClone` options was
misleading: these options have no relation to submodules, but rather to
the depth used in `git clone --depth n`.
Login mode can cause hidden problems, e.g. #12406. Generally we don't want
to read the user's .bash_profile when we don't start an interactive shell
inside a chroot.
These environment variables allow using fetchgit with git:// URLs using
the SOCKS proxy technique described in 'Using Git with a SOCKS proxy':
http://www.patthoyts.tk/blog/using-git-with-socks-proxy.html
Briefly, GIT_PROXY_COMMAND is set to a script which invokes connect[1],
which reads SOCKS_PROXY, which might be pointing to a local instance of
'ssh -D'.
[1] pkgs/tools/networking/connect
The ld-wrapper.sh script calls `readlink` in some circumstances. We need
to ensure that this is the `readlink` from the `coreutils` package so
that flag support is as expected.
This is accomplished by explicitly setting PATH at the top of each shell
script.
Without doing this, the following happens with a trivial `main.c`:
```
$ nix-env -f "<nixpkgs>" -iA pkgs.clang
$ clang main.c -L /nix/../nix/store/2ankvagznq062x1gifpxwkk7fp3xwy63-xnu-2422.115.4/Library -o a.out
readlink: illegal option -- f
usage: readlink [-n] [file ...]
```
The key element is the `..` in the path supplied to the linker via a
`-L` flag. With this patch, the above invocation works correctly on
darwin, whose native `/usr/bin/readlink` does not support the `-f` flag.
The explicit path also ensures that the `grep` called by `cc-wrapper.sh`
is the one from Nix.
Fixes #6447
Building packages requires package-build.el from Melpa, but installing
packages only requires package.el. Packages from ELPA are already built,
so there is no need to involve package-build.el.
When building a package from a Melpa recipe file, get the Emacs package
name from the recipe. Nix is more restrictive about package names than
Emacs, so the Nix name for a package is sometimes different.
Checking file contents is redundant in this case, because we will go
ahead anyway, regardless of whether the content is the same.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Originally I wanted to include ignoreCollisions in cups-progs, but I think
it's better if we use ignoreCollisions only if there are _real_
collisions between files with different contents.
Of course, we also check whether the file permissions match, so you get
a collision if contents are the same but the permissions are different.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
For instance, a binary like libfoo.so will cause a symlink
lib/debug/libfoo.so.debug -> .build-id/<build-ID>.debug to be
created. This is primarily useful for use with eu-addr2line, if you
know the name of a binary and the relative address, but not the build
ID.
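A hedged usage sketch (the package and its source are placeholders);
opting a derivation into separate debug info is what produces the
lib/debug links described above:
```
# Placeholder derivation; the relevant bit is separateDebugInfo = true,
# which makes the setup hook create, for each binary, a link such as
# lib/debug/libfoo.so.debug -> .build-id/<build-ID>.debug.
{ stdenv }:

stdenv.mkDerivation {
  name = "libfoo-1.0";
  src = ./libfoo-1.0.tar.gz;   # placeholder source
  separateDebugInfo = true;
}
```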
- fix in silencing some moveToOutput messages
- allow removing (developer) documentation even without defining outputs
(note: some paths are auto-removed by default, e.g. gtk-doc and man3)
Unfortunately, yesterday Nix got reverted to a version with broken
passAsFile implementation on some Hydra machines, so we have corrupted
files again. (E.g. http://hydra.nixos.org/build/29777678.) Forcing
another gratuitous rebuild to get rid of them.
(cherry picked from commit 75974d9220b8397c736ada76fb24eb934fa62f6c)
Also fix the hash in goPackages.inflect, the only user of the fetcher ATM.
Closes #12002 (different `inflect` fix), fixes #12012.
Using fetchzip-derived functions is likely more efficient than fetchhg,
and it's lighter on dependencies (hash is the same as with fetchhg in this case).
fetchbzr always uses the derivation name `bzr-export`. nix-prefetch-bzr
should use the same name for its output. This avoids duplicate downloads
and problems with forbidden characters in bazaar repository names.
Fixes #10819. emacsWithPackages will know its own package set. This
requires it to be in a package set, rather than at the top level, so it
lives in emacsPackagesNg.
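A hedged usage sketch, assuming the wrapper takes a function over its
own package set (the chosen package is arbitrary):
```
# Because the wrapper now lives in emacsPackagesNg, it can hand that
# very package set to the caller.
with import <nixpkgs> {};

emacsPackagesNg.emacsWithPackages (epkgs: [ epkgs.magit ])
```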
This seems to be the root cause of the random page allocation failures
and @wizeman did a very good job on not only finding the root problem
but also giving a detailed explanation of it in #10828.
Here is an excerpt:
The problem here is that the kernel is trying to allocate a contiguous
section of 2^7=128 pages, which is 512 KB. This is way too much:
kernel pages tend to get fragmented over time and kernel developers
often go to great lengths to try allocating at most only 1 contiguous
page at a time whenever they can.
From the error message, it looks like the culprit is unionfs, but this
is misleading: unionfs is the name of the userspace process that was
running when the system ran out of memory, but it wasn't unionfs who
was allocating the memory: it was the kernel; specifically it was the
v9fs_dir_readdir_dotl() function, which is the code for handling the
readdir() function in the 9p filesystem (the filesystem that is used
to share a directory structure between a qemu host and its VM).
If you look at the code, here's what it's doing at the moment it tries
to allocate memory:
  buflen = fid->clnt->msize - P9_IOHDRSZ;
  rdir = v9fs_alloc_rdir_buf(file, buflen);
If you look into v9fs_alloc_rdir_buf(), you will see that it will try
to allocate a contiguous buffer of memory (using kzalloc(), which is a
wrapper around kmalloc()) of size buflen + 8 bytes or so.
So in reality, this code actually allocates a buffer of size
proportional to fid->clnt->msize. What is this msize? If you follow
the definition of the structures, you will see that it's the
negotiated buffer transfer size between 9p client and 9p server. On
the client side, it can be controlled with the msize mount option.
What this all means is that the reason for running out of memory is
that the code (which we can't easily change) tries to allocate a
contiguous buffer of size more or less equal to "negotiated 9p
protocol buffer size", which seems to be way too big (in our NixOS
tests, at least).
After that initial finding, @lethalman tested the gnome3 gdm test
without setting the msize parameter at all and it seems to have resolved
the problem.
The reason why I'm committing this without testing against all of the
NixOS VM tests is basically that I think we can only do better, not
worse, than the current state.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Previously it was assumed that bash was in the path when calling the
environment setup script. This changes all of the references of bash to
be absolute paths so that the user doesn't have to worry about the
environment they call it with.
Otherwise, if the upstream mirror changes (rather than deletes) a
file, then tarballs.nixos.org won't be used even if it has a copy of
the original file, and so we'll get a hash mismatch.
Emacs packages are commonly distributed as single .el files. This
unpackCmd handles them correctly and sets up sourceRoot. Other sources
are treated in the default manner.
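An illustrative sketch of such an unpackCmd (not the exact builder
code; `_defaultUnpack` is assumed to be the stdenv fallback hook, and
details like stripping the store hash from the file name are omitted):
```
# Copy a bare .el file into a fresh directory and point sourceRoot at
# it; every other kind of source goes through the normal unpackers.
unpackCmd = ''
  case "$curSrc" in
    *.el)
      mkdir el-src
      cp -v "$curSrc" el-src/
      sourceRoot=el-src
      ;;
    *)
      _defaultUnpack "$curSrc"
      ;;
  esac
'';
```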
If "fetcher" is a string, then Nix will execute it with bash already, so
the additional bash argument in that string was redundant and apparently
causes trouble on non-Linux platforms.
Hopefully fixes https://github.com/NixOS/nixpkgs/issues/11496.
The list we had before contained a lot of junk, i.e. sites that were no
longer online or no longer in sync. The new list of sites comes from
https://gnupg.org/download/index.html.
The most complex problems were from dealing with switches reverted in
the meantime (gcc5, gmp6, ncurses6).
It's likely that darwin is (still) broken nontrivially.
It turns out that cargo implicitly depends on rustc at runtime: even
`cargo help` will fail if rustc is not in the PATH.
This means that we need to wrap the cargo binary to add rustc to PATH.
However, I have opted to do something slightly unusual: instead of
tying a specific cargo to a specific rustc (i.e., wrapping cargo so
that "${rustc}/bin" is prefixed to PATH), I'm adding the rustc used to
build cargo as a fallback rust compiler (i.e., wrapping cargo so that
"${rustc}/bin" is suffixed to PATH). This means that cargo will prefer
a rust compiler that is already in the default path, and fall back to
the one used to build cargo only if there isn't any rust compiler in
the default path.
The reason I'm doing this is that otherwise it could cause unexpected
effects. For example, if you had a build environment with the
rustcMaster and cargo derivations, you would expect cargo to use
rustcMaster to compile your project (since rustcMaster would be the only
compiler available in $PATH), but this wouldn't happen if we tied down
cargo to use the rustc that was used to compile it (because the default
cargo derivation gets compiled with the stable rust compiler).
That said, I have slightly modified makeRustPlatform so that a rust
platform will always use the rust compiler that was used to build cargo,
because this prevents mistakenly depending on two different versions of
the rust compiler (stable and unstable) in the same rust platform,
something which is usually undesirable.
Fixes #11053
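A minimal sketch of that wrapping (not the actual cargo expression;
`cargoUnwrapped` is a hypothetical input):
```
# Wrap cargo so that the rustc it was built with is only a PATH
# *fallback* (--suffix): any rustc already on PATH takes precedence.
{ stdenv, makeWrapper, rustc, cargoUnwrapped }:

stdenv.mkDerivation {
  name = "cargo-wrapped";
  buildInputs = [ makeWrapper ];
  buildCommand = ''
    mkdir -p $out/bin
    makeWrapper ${cargoUnwrapped}/bin/cargo $out/bin/cargo \
      --suffix PATH : ${rustc}/bin
  '';
}
```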