To successfully build rebar packages, the builder needs to be provided with
the rebar3 plugins used to build them. This change passes the plugins via an
environment variable, from which rebar3-nix-bootstrap picks them up and
symlinks them into _build/default/plugins.
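Roughly, the idea looks like this; a minimal sketch only, where the attribute
name `buildPlugins` and the phases are illustrative, not the actual
buildRebar3/rebar3-nix-bootstrap interface:

```nix
{ stdenv, rebar3, pluginDrvs ? [ ] }:

stdenv.mkDerivation {
  name = "example-rebar3-package-0.1.0";
  src = ./.;
  nativeBuildInputs = [ rebar3 ];
  # Exported into the builder's environment as a space-separated list of
  # store paths; rebar3-nix-bootstrap symlinks each one into
  # _build/default/plugins before the build runs.
  buildPlugins = pluginDrvs;
  buildPhase = "HOME=$PWD rebar3 compile";
  installPhase = "mkdir -p $out && cp -r _build $out/";
}
```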
Regression introduced by df2b9b48cb.
This breaks the build for ltrace and other programs using libelf,
because the header file relies on features from glibc >= 2.22.
Here is an excerpt from the log output of the configure script from
ltrace:
In file included from ...elfutils-0.165/include/gelf.h:32:0,
                 from conftest.c:57:
...elfutils-0.165/include/libelf.h:280:8: error: unknown type name 'Elf32_Chdr'
extern Elf32_Chdr *elf32_getchdr (Elf_Scn *__scn);
       ^
...elfutils-0.165/include/libelf.h:281:8: error: unknown type name 'Elf64_Chdr'
extern Elf64_Chdr *elf64_getchdr (Elf_Scn *__scn);
       ^
In file included from conftest.c:57:0:
...elfutils-0.165/include/gelf.h:89:9: error: unknown type name 'Elf64_Chdr'
typedef Elf64_Chdr GElf_Chdr;
        ^
The issue has been reported in the Debian bug tracker at
https://bugs.debian.org/810885, and I'm using the patch from Mark
Wielaard posted there, which adds compatibility with older
glibc versions.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Also, install programs with the "eu-" prefix to prevent collisions
with binutils (as recommended by upstream), enable xz support, and
enable deterministic archives.
The error was due to the fact that bindings introduced by `with` have lower
priority, and we already have `darwin` in scope.
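For illustration, a minimal example of the scoping rule involved (not the
actual expression that broke):

```nix
# Names brought in by `with` lose to names that are already lexically bound,
# so `darwin` below refers to the let binding, not pkgs.darwin.
let
  darwin = "lexically bound";
  pkgs = { darwin = "from the attribute set"; };
in
with pkgs; darwin   # evaluates to "lexically bound"
```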
Fixes #12350. Closes #12351. (A slightly different fix;
I chose this one to lower the risk of people re-introducing the mistake.)
- fix the silencing of some moveToOutput messages
- allow removing (developer) documentation even without defining outputs
(note: some paths are auto-removed by default, e.g. gtk-doc and man3)
http://hydra.nixos.org/eval/1234895
The mass errors on Hydra seem transient; I verified ghc on i686-linux.
Only darwin jobs are queued ATM. There's a libpng security update
included in this merge, so I don't want to wait too long.
This improves our Bundler integration (i.e. `bundlerEnv`).
Before describing the implementation differences, I'd like to point out a
breaking change: buildRubyGem now expects `gemName` and `version` as
arguments, rather than a `name` attribute in the form of
"<gem-name>-<version>".
Now for the differences in implementation.
The previous implementation installed all gems at once in a single
derivation. This was made possible by using a set of monkey-patches to
prevent Bundler from downloading gems impurely, and to help Bundler
find and activate all required gems prior to installation. This had
several downsides:
* The patches were really hard to understand, and required subtle
interaction with the rest of the build environment.
* A single install failure would cause the entire derivation to fail.
The new implementation takes a different approach: we install gems into
separate derivations and then present Bundler with a symlink forest
thereof. This has a couple of benefits over the existing approach:
* Fewer patches are required, with less interplay with the rest of the
build environment.
* Changes to one gem no longer cause a rebuild of the entire dependency
graph.
* Builds take 20% less time (using gitlab as a reference).
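A rough sketch of the symlink-forest idea, assuming one derivation per
installed gem in `gems`; the actual bundlerEnv code differs in detail:

```nix
{ buildEnv, gems }:

# Merge the per-gem derivations into one tree that Bundler can be pointed at.
buildEnv {
  name = "gem-symlink-forest";
  paths = gems;
  pathsToLink = [ "/lib/ruby/gems" ];  # illustrative layout, not the exact one
}
```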
It's unfortunate that we still have to muck with Bundler's internals,
though it's unavoidable with the way that Bundler is currently designed.
There are a number of improvements that could be made in Bundler that would
simplify our packaging story:
* Bundler requires all installed gems reside within the same prefix
(GEM_HOME), unlike RubyGems which allows for multiple prefixes to
be specified through GEM_PATH. It would be ideal if Bundler allowed
for packages to be installed and sourced from multiple prefixes.
* Bundler installs git sources very differently from how RubyGems
installs gem packages, and, unlike RubyGems, it doesn't provide a
public interface (CLI or programmatic) to guide the installation of a
single gem. We are presented with the options of either
reimplementing a considerable portion of Bundler, or patching and using parts
of its internals; I chose the latter. Ideally, there would be a way
to install gems from git sources in a manner similar to how we drive
`gem` to install gem packages.
* When a bundled program is executed (via `bundle exec` or a
binstub that does `require 'bundler/setup'`), the setup process reads
the Gemfile.lock, activates the dependencies, re-serializes the lock
file it read earlier, and then attempts to overwrite the Gemfile.lock
if the contents aren't bit-identical. I think the reasoning is that
by merely running an application with a newer version of Bundler, you'll
automatically keep the Gemfile.lock up-to-date with any changes in the
format. Unfortunately, that doesn't play well with any form of
packaging, because bundler will immediately cause the application to
abort when it attempts to write to the read-only Gemfile.lock in the
store. We work around this by normalizing the Gemfile.lock with the
version of Bundler that we'll use at runtime before we copy it into
the store. This feels fragile, but it's the best we can do without
changes upstream, or resorting to more delicate hacks.
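A hedged sketch of that normalization step; the `bundle lock --local`
invocation is only an assumption about one way to re-serialize the lock file
with the runtime Bundler, and the real bundlerEnv code may do it differently:

```nix
{ runCommand, bundler, gemfile, lockfile }:

runCommand "normalized-gemfile-lock" { buildInputs = [ bundler ]; } ''
  cp ${gemfile} Gemfile
  cp ${lockfile} Gemfile.lock
  chmod +w Gemfile.lock
  # Let the same Bundler that will run the app re-serialize the lock file,
  # so it has nothing left to rewrite at application startup.
  bundle lock --local
  mkdir -p $out
  cp Gemfile.lock $out/Gemfile.lock
''
```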
With all of the challenges in using Bundler, one might wonder why we
can't just cut Bundler out of the picture and use RubyGems. After all,
Nix provides most of the isolation that Bundler is used for anyway.
The problem, however, is that almost every Rails application calls
`Bundler::require` at startup (by way of the default project templates).
Because bundler will then, by default, `require` each gem listed in the
Gemfile, Rails applications are almost always written such that none of
the source files explicitly require their dependencies. That leaves us
with two options: support and use Bundler, or maintain massive patches
for every Rails application that we package.
Closes #8612
Compatible with llvm+clang 3.7. Changes:
- Added Boost and Qt mappings.
- Better support for using declarations.
- Allow size_t from multiple headers.
- Fixed handling of includes with a common path prefix.
More: http://include-what-you-use.org/
This patch fixes compilation errors when using ccache wrapper:
```
cc1: error: /nix/store/19vvbsjs6l6j0r22albzhysxfvr94imf-ccache-links/lib/gcc/*/*/include-fixed: No such file or directory
```
The most complex problems came from dealing with switches that were reverted
in the meantime (gcc5, gmp6, ncurses6).
It's likely that darwin is (still) broken nontrivially.
It turns out that cargo implicitly depends on rustc at runtime: even
`cargo help` will fail if rustc is not in the PATH.
This means that we need to wrap the cargo binary to add rustc to PATH.
However, I have opted to do something slightly unusual: instead of
tying a specific cargo to a specific rustc (i.e., wrapping cargo so
that "${rustc}/bin" is prefixed to PATH), I'm adding the rustc
used to build cargo as a fallback rust compiler (i.e., wrapping cargo so
that "${rustc}/bin" is suffixed to PATH). This means that cargo will
prefer a rust compiler that is already on the default PATH, and fall back
to the one used to build cargo only if there isn't any rust compiler
on the default PATH.
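The essential part of that wrapping, as a sketch; names like `cargoUnwrapped`
are illustrative, not the real nixpkgs attributes:

```nix
{ stdenv, makeWrapper, rustc, cargoUnwrapped }:

stdenv.mkDerivation {
  name = "cargo-with-fallback-rustc";
  nativeBuildInputs = [ makeWrapper ];
  buildCommand = ''
    mkdir -p $out/bin
    ln -s ${cargoUnwrapped}/bin/cargo $out/bin/cargo
    # --suffix (not --prefix): a rustc already on PATH wins; the build-time
    # rustc is only the fallback.
    wrapProgram $out/bin/cargo --suffix PATH : ${rustc}/bin
  '';
}
```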
The reason I'm doing this is that otherwise it could cause unexpected
effects. For example, if you had a build environment with the
rustcMaster and cargo derivations, you would expect cargo to use
rustcMaster to compile your project (since rustcMaster would be the only
compiler available in $PATH), but this wouldn't happen if we tied down
cargo to use the rustc that was used to compile it (because the default
cargo derivation gets compiled with the stable rust compiler).
That said, I have slightly modified makeRustPlatform so that a rust
platform will always use the rust compiler that was used to build cargo,
because this prevents mistakenly depending on two different versions of
the rust compiler (stable and unstable) in the same rust platform,
something which is usually undesirable.
Fixes#11053
Bug fixes:
- Fixed build error related to zlib on systems with older make versions
(regression in ccache 3.2.3).
- Made conversion-to-bool explicit to avoid build warnings (and potential
runtime errors) on legacy systems.
- Improved signal handling: Kill compiler on SIGTERM; wait for compiler to
exit before exiting; die appropriately.
- Minor fixes related to Windows support.
- The correct compression level is now used if compression is requested.
- Fixed a bug where cache cleanup could be run too early for caches larger
than 64 GiB on 32-bit systems.
- systemd now puts everything into one output (except for man),
because I wasn't able to fix all systemd/udev references
for NixOS to work well
- libudev is now by default *copied* into another path,
which is what most packages will use as build input :-)
- pkgs.udev = [ libudev.out libudev.dev ]; there are too many
references that just put `udev` into build inputs to rewrite them all.
This also makes "${udev}/foo" fail at *evaluation* time,
so it's easier to catch and change to something more specific
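The evaluation-time failure from the last point looks like this (the store
paths are placeholders):

```nix
# Interpolating a list into a string is an evaluation error, so any leftover
# "${udev}/..." reference is caught immediately instead of building wrongly.
let
  libudev = {
    out = "/nix/store/placeholder-libudev";
    dev = "/nix/store/placeholder-libudev-dev";
  };
  udev = [ libudev.out libudev.dev ];
in
"${udev}/lib/udev"   # error: cannot coerce a list to a string
```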
Upstream changes to the build system required adjusting many packages'
dependencies. On the Nixpkgs side, we no longer propagate the dependency
on cmake (to reduce closure size), so downstream dependencies had to be
adjusted for most packages that depend on kdelibs.
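Schematically, the adjustment looks like this in a dependent package; the
derivation is illustrative of the pattern, not an exact diff:

```nix
{ stdenv, cmake, kdelibs }:

stdenv.mkDerivation {
  name = "some-kde-package-1.0";
  src = ./.;
  # cmake is no longer propagated by kdelibs, so list it explicitly:
  nativeBuildInputs = [ cmake ];
  buildInputs = [ kdelibs ];
}
```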
Seems cleaner.
Hm, there are also loadfiles in $out/share/doc/dbench/loadfiles/
(installed by the upstream build system), but there is no iscsi/
directory in there.