The fix to extendDerivation in #140051 unwittingly worsened eval performance by
quite a bit: set elements alone needed over 1 GB extra after the change, which
seems disproportionate to how small the change was. If we flip the logic used
to determine which outputs to install and keep a "this one exactly" flag in
the specific outputs instead of an "all of them" flag in the root, we can
avoid most of that cost.
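A rough sketch of the flipped approach (illustrative only, not the exact
lib.extendDerivation code; the attribute name is an assumption):

```nix
# Mark each output attrset as "install exactly this one" instead of keeping an
# "all of them" flag in the root set that every element has to pay for.
drv:
  map (outputName:
    drv.${outputName} // {
      inherit outputName;
      outputSpecified = true;   # per-output "this one exactly" marker
    }
  ) drv.outputs
```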
Somewhere after #110628, which replaced stdenv.lib with lib, and before
bug #134572, lib was removed from the argument list, breaking any
invocation of debBuild. This adds it back.
A few minor changes to get #119638 - nextcloud: add option to set
datadir and extensions - ready:
* `cfg.datadir` now gets `cfg.home` as default to make the type
non-nullable.
* Enhanced the `basic` test to check the behavior with a custom datadir
that's not `/var/lib/nextcloud`.
* Fix hashes for apps in option example.
* Simplify if/else for `appstoreenable` in override config.
* Simplify a few `mapAttrsToList`-expressions in
`nextcloud-setup.service`.
Note the appstoreEnable option, which prevents Nextcloud from updating
Nix-managed apps. This is needed because Nextcloud would otherwise store
another version of the app in /var/lib/nextcloud/store-apps, where it
would no longer be manageable.
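A hedged configuration sketch (hypothetical host name and paths; the
enable/hostName options are just the usual module boilerplate) showing
the result in use:

```nix
{
  services.nextcloud = {
    enable = true;
    hostName = "cloud.example.org";   # hypothetical
    home = "/var/lib/nextcloud";
    datadir = "/data/nextcloud";      # defaults to cfg.home when not set
    appstoreEnable = false;           # keep Nix in charge of installed apps
  };
}
```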
Manually remove trailing whitespace
in `pkgs/**.nix` with the help of an editor
Auto-generated Nix expressions containing trailing whitespace:
* pkgs/development/haskell-modules/hackage-packages.nix
* See issue https://github.com/NixOS/cabal2nix/issues/208
* pkgs/**/eggs.nix
* I don't know how they are generated,
but they seem to be Python-related.
Maintainers Notes below.
~~~
Hello,
New versions of all the skarnet.org packages are available.
skalibs has undergone a major update, with a few APIs having disappeared,
and others having changed. Compatibility with previous versions is *not*
assured.
Consequently, all the rest of the skarnet.org software has undergone
at least a release bump, in order to build with the new skalibs. But
some packages also have new functionality added (hence, a minor bump),
and others also have their own incompatible changes (hence, a major bump).
The new versions are the following:
skalibs-2.11.0.0 (major)
nsss-0.2.0.0 (major)
utmps-0.1.0.3 (release)
execline-2.8.1.0 (minor)
s6-2.11.0.0 (major)
s6-rc-0.5.2.3 (release)
s6-portable-utils-2.2.3.3 (release)
s6-linux-utils-2.5.1.6 (release)
s6-linux-init-1.0.6.4 (release)
s6-dns-2.3.5.2 (release)
s6-networking-2.5.0.0 (major)
mdevd-0.1.5.0 (minor)
bcnm-0.0.1.4 (release)
dnsfunnel-0.0.1.2 (release)
Additionally, a new package has been released:
smtpd-starttls-proxy-0.0.1.0
Dependencies have all been updated to the latest versions. They are,
this time, partially strict: libraries and binaries may build with older
releases of their dependencies, but not across major version bumps. The
safest approach is to upgrade everything at the same time.
You do not need to recompile your s6-rc service databases or recreate
your s6-linux-init run-images.
You should restart your supervision tree after upgrading skalibs and s6,
as soon as is convenient for you.
Details of major and minor package changes follow.
* skalibs-2.11.0.0
----------------
- A lot of obsolete or useless functionality has been removed:
libbiguint, rc4, md5, iobuffer, skasigaction, environ.h and
getpeereid.h headers, various functions that have not proven their
value in a while.
- Some functions changed signatures or changed names, or both.
- All custom types ending in _t have been renamed, to avoid treading on
POSIX namespace. (The same change has not been done yet in other
packages, but skalibs was the biggest offender by far.)
- Signal functions have been deeply reworked.
- cdb has been reworked, the API is now more user-friendly.
- New functions have been added.
The deletion of significant portions of code has made skalibs leaner.
libskarnet.so has dropped under 190 kB on x86_64.
The cdb rewrite on its own has helped reduce an important amount of
boilerplate in cdb-using code.
All in all, code linked against the new skalibs should be slightly
smaller and use a tiny bit less RAM.
https://skarnet.org/software/skalibs/
git://git.skarnet.org/skalibs
* nsss-0.2.0.0
------------
- Bugfixes.
- nsss-switch wire protocol slightly modified, which is enough to
warrant a major version bump.
- _r functions are now entirely thread-safe.
- Spawned nsssd programs are now persistent and only expire after a
timeout on non-enumeration queries. This saves a lot of forking with
applications that can call primitives such as getpwnam() repeatedly, as
e.g. mdevd does when initially parsing its configuration file.
- New nsssd-switch program, implementing real nsswitch functionality
by dispatching queries to various backends according to a script.
It does not dlopen a single library or read a single config file.
https://skarnet.org/software/nsss/
git://git.skarnet.org/nsss
* execline-2.8.1.0
----------------
- Bugfixes.
- New binary: case. It compares a value against a series of regular
expressions, executing into another command line on the first match.
https://skarnet.org/software/execline/
git://git.skarnet.org/execline
* s6-2.11.0.0
-----------
- Bugfixes.
- Some libs6 header names have been simplified.
- s6-svwait now accepts -r and -R options.
- s6-supervise now reads an optional lock-fd file in the service
directory; if it finds one, the first action of the service is to take
a blocking lock. This prevents confusion when a controller process dies
while still leaving workers holding resources; it also prevents log
spamming on user mistakes (autobackgrounding services, notably).
- New binaries: s6-socklog, s6-svlink, s6-svunlink. The former is a
rewrite of smarden.org's socklog program, in order to implement a fully
functional syslogd with only s6 programs. The latter are tools that start
and stop services by symlinking/unlinking service directories from a
scan directory, in order to make it easier to integrate s6-style services
in boot scripts for sequential service managers such as OpenRC.
https://skarnet.org/software/s6/
git://git.skarnet.org/s6
* s6-networking-2.5.0.0
---------------------
- Bugfixes.
- minidentd has been removed. It was an old and somehow still buggy
piece of code that was only hanging around for nostalgia reasons.
- Full support for client certificates. Details of the client
certificate are transmitted to the application via environment
variables (or via an environment string in the case of opportunistic
TLS).
- Full SNI support, including server-side. (That involved a deep dive
into the bearssl internals, which is why it took so long.) The filenames
containing secret keys and certificates for <domain> are read in the
environment variables KEYFILE:<domain> and CERTFILE:<domain>.
Complete client certificate and SNI support now make the TLS part of
s6-networking a fully viable replacement of stunnel and other similar
TLS tunneling tools. This is most interesting when s6-networking is
built against bearssl, which uses about 1/9 of the resources that OpenSSL
needs.
https://skarnet.org/software/s6-networking/
git://git.skarnet.org/s6-networking
* mdevd-0.1.5.0
-------------
- A new option to mdevd is available: -O <nlgroups>.
This option makes mdevd rebroadcast uevents to a netlink group (or set
of netlink groups) once they have been handled. This allows applications
to read uevents from a netlink group *after* the device manager is done
with them. This is useful, for instance, when pairing mdevd with
libudev-zero for full udev emulation.
- The * and & directives, which previously were only triggered by
"add" and "remove" actions, are now triggered by *all* action types.
This gives users full scripting access to any event, which can be
used to implement complex rules similar to udev ones.
These two changes make it possible to now build a full-featured desktop
system based on mdevd + libudev-zero, without running systemd-udevd or
eudev.
https://skarnet.org/software/mdevd/
git://git.skarnet.org/mdevd
* smtpd-starttls-proxy-0.0.1.0
----------------------------
This new package, in conjunction with the latest s6-networking,
implements the STARTTLS functionality for inetd-like mail servers that
do not already support it. (Currently only tested with qmail-smtpd.)
If you have noticed that sending mail to skarnet.org supports STARTTLS
now, it is thanks to this little piece of software.
https://skarnet.org/software/smtpd-starttls-proxy/
git://git.skarnet.org/smtpd-starttls-proxy
Enjoy,
Bug-reports welcome.
Laurent
fixes e.g.:
pkgsMusl.libfsm
pkgsMusl.libiscsi
pkgsMusl.nsjail
pkgsMusl.pv
The match strings now have whitespace on either side; previously,
leading and trailing arguments were not being matched.
fixes:
pkgsMusl.bulletml
pkgsMusl.proot
pkgsMusl.python3
Debian explains this issue well in the dpkg-buildflags manpage:
-fPIE
Can be linked into any program, but not a shared library (recommended).
-fPIC
Can be linked into any program and shared library.
On projects that build both programs and shared libraries you might need to
make sure that when building the shared libraries -fPIC is always passed last
(so that it overrides any previous -PIE) to compilation flags such as CFLAGS.
(from https://manpages.debian.org/bullseye/dpkg-dev/dpkg-buildflags.1.en.html#hardening)
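A hedged, package-level illustration (hypothetical package, and not the
change made here): if the injected -fPIE breaks a project's
shared-library objects, the usual escape hatches are disabling the pie
hardening flag for that one package, or patching its build system so
-fPIC is passed last:

```nix
stdenv.mkDerivation {
  pname = "example";   # hypothetical
  version = "1.0";
  src = ./.;
  # Avoid injecting -fPIE for this package entirely; alternatively, fix the
  # project's build system so -fPIC comes after -fPIE for shared objects.
  hardeningDisable = [ "pie" ];
}
```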
In restricted mode (and therefore with flakes), `builtins.readFile` may not be applied to the result of `builtins.toFile`.
This makes it impossible to use a generated lockFile (with or without IFD),
and causes evaluation to fail on Hydra when `system != builtins.currentSystem`,
so jobs are not delegated to eligible build machines that support that system.
This is done in a way that avoids rebuilds.
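A minimal illustration of the failing pattern (placeholder lock-file
contents):

```nix
let
  lockFile = builtins.toFile "Cargo.lock" ''
    # generated lock-file contents (placeholder)
  '';
in
  # Rejected in restricted mode, and therefore with flakes.
  builtins.readFile lockFile
```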
autoPatchelfHook actually doesn't depend on stdenv and only needs
bintools (with its wrapper). This change uses $NIX_BINTOOLS instead of
$NIX_CC and makes the dependency on bintools explicit.
Fully enabling crossSystem support for autoPatchelfHook came with a
perhaps unintended consequence: it became a bit more aggressive about
patching ELF files from architectures/ABIs that differ from the target
(previously, those files would be ignored because ldd usually couldn't
handle them).
This change adds architecture and rough OS ABI detection to the script
so that it doesn't try to blindly replace the interpreter of files that
can't possibly use that interpreter, and also makes sure it doesn't
accidentally use libraries of other architectures/ABIs.
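A usage sketch (hypothetical prebuilt package; zlib stands in for
whatever the binaries actually link against):

```nix
{ stdenv, autoPatchelfHook, zlib }:

stdenv.mkDerivation {
  pname = "prebuilt-tool";        # hypothetical
  version = "1.0";
  src = ./prebuilt-tool.tar.gz;   # hypothetical tarball of prebuilt ELF files
  nativeBuildInputs = [ autoPatchelfHook ];
  buildInputs = [ zlib ];         # libraries the prebuilt binaries are expected to need
  installPhase = ''
    mkdir -p $out/bin
    cp tool $out/bin/
  '';
}
```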
The current name is misleading: it doesn't contain CLI arguments,
but rather several constants and utility functions related to qemu.
This commit also removes the use of `with import ...` for clarity.
`--enable-deterministic-archives` is a GNU specific strip flag and
causes other strip implementations (for example LLVM's, see #138013)
to fail. Since strip failures are ignored, this means that stripping
doesn't work at all in certain situations (causing unnecessary
dependencies etc.).
To fix this, no longer pass `--enable-deterministic-archives`
unconditionally, but instead add it in a GNU binutils specific strip
wrapper only.
`commonStripFlags` was only used for this flag, so we can remove
it altogether.
Future work could be to make a generic strip wrapper, with support for
nix-support/strip-flags-{before,after} and NIX_STRIP_FLAGS_{BEFORE,AFTER}.
This is possibly overkill and unnecessary, though, especially given the
additional challenge of incorporating the darwin strip wrapper somehow.
In #84415, autoPatchelfHook was taught to use the correct path to the
readelf binary when a crossSystem is specified. Unfortunately, the
remainder of the functionality in the script depended on ldd, which only
reads ELF files of its own architecture. It has the further unfortunate
quality of not reporting any useful error, but rather that the file is
not a dynamic executable.
This change uses patchelf to directly analyze the DT_NEEDED tags in the
target files instead, which correctly works across architectures. It
also updates the use of objdump to be prefix-aware $OBJDUMP (which would
have been required in the PR mentioned above, but we never made it that
far into the script execution).
I currently do not have much time to work on nixpkgs. Remove
myself as a maintainer from a bunch of packages so that people
aren't left waiting on me for a review.
Conflicts:
pkgs/development/compilers/ghc/8.10.7.nix
pkgs/development/compilers/ghc/8.8.4.nix
I've removed the isWindows check from useLdGold in ghc, since that should
be covered by the new hasGold check.
I somehow accidentally left out the lib.flatten from mergeInputs. Without it, subtractLists won't ever remove anything from the inputs since the inputs will be a list of lists.
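A small illustration with plain strings of why the flatten matters:
subtractLists compares whole elements, so with nested lists nothing is
ever removed.

```nix
let
  lib = (import <nixpkgs> { }).lib;
in {
  withoutFlatten = lib.subtractLists [ "a" ] [ [ "a" "b" ] ];               # => [ [ "a" "b" ] ]
  withFlatten    = lib.subtractLists [ "a" ] (lib.flatten [ [ "a" "b" ] ]); # => [ "b" ]
}
```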
The motivation for inputsFrom is to create a shell environment that is suitable for development of the packages listed in inputsFrom. This commit filters out any dependencies from one package in inputsFrom to another when computing the shell environment's inputs. This supports the use case where several closely related packages (perhaps even built from the same source tree) are being mutually developed. It is assumed that the user will configure their environment to resolve dependencies between these mutually developed packages.
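A usage sketch (pkgA and pkgB are hypothetical, mutually developed
packages): any dependency edge between them is filtered out of the
shell's inputs.

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  # Hypothetical packages built from the same source tree.
  inputsFrom = [ pkgs.pkgA pkgs.pkgB ];
}
```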
This function is fundamentally broken.
Not even the ncurses example will compile.
The interface needs to be rethought for it to work (i.e. don't
unconditionally include all pc files, set include path and ld path and
rpath).
Since it is unlikely that it has any users in its current state, just drop it for now.
dyld: lazy symbol binding failed: Symbol not found: _dsyevd_
Referenced from: /nix/store/lr8grz1knmh6vc7j830gni0ka68qf1lk-xfitter-2.0.1/bin/xfitter
Expected in: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
Fixes https://github.com/dhall-lang/dhall-haskell/issues/2267
`pkgs.dhallToNix` currently fails when a Dhall package is
interpolated into the input source code, like this:
```nix
let
  pkgs = import <nixpkgs> { };

  f = { buildDhallPackage }: buildDhallPackage {
    name = "not";
    code = "λ(x : Bool) → x == False";
    source = true;
  };

  not = pkgs.dhallPackages.callPackage f {};
in
  pkgs.dhallToNix "${not}/source.dhall True"
```
This is because `dhallToNix` was using `builtins.toFile`, which
does not permit inputs with interpolated derivations. However,
`pkgs.writeText` does not have this limitation, so we can switch
to using that instead.
Apparently, a non-existent nsswitch.conf causes very misleading host
resolution, differing from the defaults people are used to.
According to
https://github.com/golang/go/issues/22846#issuecomment-346377144, glibc
says the default is "dns [!UNAVAIL=return] files".
This means, `/etc/hosts` isn't really honored, causing all sorts of
unexpected behaviour.
Let's prevent this, and first ask `/etc/hosts` before querying DNS, like
we do on NixOS too.
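A hedged sketch of the idea (not necessarily the exact file this change
touches): ship an explicit nsswitch.conf that consults /etc/hosts
before DNS, matching the NixOS default.

```nix
{ pkgs }:

# Ends up as <result>/etc/nsswitch.conf in whatever environment includes it.
pkgs.writeTextDir "etc/nsswitch.conf" ''
  hosts: files dns
''
```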
The reason for this change is explained in the long comment I added.
Here's a simple example of the problem:
let
  pkgs = import <nixpkgs> { crossSystem.system = "aarch64-linux"; };
in
  pkgs.callPackage ({ stdenv, s6-rc }: stdenv.mkDerivation {
    name = "s6-rc-compiled";
    nativeBuildInputs = [ s6-rc ];
    buildCommand = ''
      mkdir in
      s6-rc-compile $out in
    '';
  }) {}
We're cross compiling for aarch64 here, so we'd expect the scripts
generated by this derivation to be things we could run on aarch64.
But when I build this on my x86_64 machine, without this change
applied, $out/servicedirs/s6rc-oneshot-runner/run gets generated full
of references to x86_64 non-cross store paths for execline, s6, and
s6-rc.
With this change applied, the scripts generated by the above
expression now refer to the cross-compiled aarch64 store paths for
execline, s6, and s6-rc.
- Reuses the build phase from the `buildDunePackage` function.
- Only installs the package that was just built (useful for monorepo support).
- Introduces `opam-name` to override the default package name to build with Dune (see the sketch below).
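A usage sketch (hypothetical monorepo package; only `opam-name` is
taken from this change):

```nix
{ buildDunePackage }:

buildDunePackage {
  pname = "subpkg";            # hypothetical
  version = "0.1.0";
  src = ./.;                   # monorepo containing several Dune packages
  # Hypothetical value: the Dune/opam package to build when it differs
  # from pname.
  opam-name = "subpkg-core";
}
```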
GitHub user flexibeast has been porting the HTML documentation from
skarnet.org to mdoc, making it available as man pages. While the
documentation is not authoritative, it is certainly useful and is also
linked from skarnet.org.
buildManPages implements the mkDerivation machinery common to all of
the ported man page packages/repositories.
`installCheckPhase` is mainly intended for checks that are part of the
upstream package, for our 'own' checks we prefer `passthru.tests`.
This loses running `buf --help`, but I'm not sure how much that adds
on top of `buf --version`?
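A sketch of the preferred pattern (hypothetical test, not the exact one
used here; assumes runCommand and the buf package are in scope):

```nix
{
  passthru.tests.version =
    runCommand "buf-version-check" { nativeBuildInputs = [ buf ]; } ''
      buf --version > $out
    '';
}
```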
skopeo 1.4.x doesn't accept --src-tls-verify as a flag to the *program*,
only as a flag to copy; we must pass it after the "copy" verb, or it
will fail with:
> FATA[0000] unknown flag: --src-tls-verify
Adapted from `pkgs/games/osu-lazer/update.sh`.
Restore the packages to a directory with `--packages`, then run
`./nuget-to-nix.sh [path to packages] > deps.nix`.
In newer versions of mingw, programs compiled with FORTIFY_SOURCE need
to link to libssp or they will have link-time errors.
gmp has been broken since @pstn updated mingw-w64 in c60a0b0447
fetchzip downloads the file from the specified URL, renames it to the
basename of that URL, and then relies on unzip to do the unpacking.
The first consequence is that this requires the URL to end with a proper
extension—otherwise it will fail to unpack. This is not always the
case, and input-fonts works around this by adding an “&.zip” query
parameter (which is obviously a hack and is not guaranteed to work
with every URL).
The second consequence is that the basename of the URL must be a valid
filename. I tried to build a custom configuration of input-fonts
and got an error from mv that the filename is too long:
> trying https://input.djr.com/build/?fontSelection=fourStyleFamily®ular=InputMonoNarrow-Regular&italic=InputMonoNarrow-Italic&bold=InputMonoNarrow-Bold&boldItalic=InputMonoNarrow-BoldItalic&a=0&g=0&i=topserif&l=serifs_round&zero=0&asterisk=height&braces=straight&preset=default&line-height=1.2&accept=I+do&email=&.zip
> % Total % Received % Xferd Average Speed Time Time Time Current
> Dload Upload Total Spent Left Speed
> 100 406k 100 406k 0 0 230k 0 0:00:01 0:00:01 --:--:-- 230k
> mv: failed to access '/build/?fontSelection=fourStyleFamily®ular=InputMonoNarrow-Regular&italic=InputMonoNarrow-Italic&bold=InputMonoNarrow-Bold&boldItalic=InputMonoNarrow-BoldItalic&a=0&g=0&i=topserif&l=serifs_round&zero=0&asterisk=height&braces=straight&preset=default&line-height=1.2&accept=I+do&email=&.zip': File name too long
We could use the “name” parameter as the filename (that’s how it is used
in fetchurl). However, the previous attempt to do
so (fc01353703) was
reverted (24b5eb61eb) because of the
regression it introduced—many fetchzip invocations use names without
an extension (and the default name is just “source”).
This commit adds an optional “extension” parameter. If it is set,
fetchzip renames the downloaded file to “download.${extension}”,
effectively solving both problems above without introducing a massive
regression.
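A usage sketch of the new parameter (hypothetical URL and placeholder
hash; fetchzip and lib are assumed to be in scope):

```nix
fetchzip {
  url = "https://example.org/build?family=InputMono";   # hypothetical URL without a usable extension
  extension = "zip";         # the download is renamed to download.zip before unpacking
  sha256 = lib.fakeSha256;   # replace with the real hash after the first attempt
}
```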
This is a no-op for all existing packages.
Tested by updating my NixOS setup + the extra input-fonts
configuration mentioned above + tons of unstable emacs packages after
a nix-collect-garbage (3 GB downloaded) with this patch applied.
GPRbuild is a multi-language build system developed by AdaCore which
is mostly used for building Ada-related projects using GNAT.
Since GPRbuild is used to build itself and its dependency library
XML/Ada, we first build a bootstrap version of it, the gprbuild-boot
derivation, using the provided bash build script bootstrap.sh.
gprbuild-boot is then used to build xmlada and the proper gprbuild
derivation.
GPRbuild has its own search path mechanism via GPR_PROJECT_PATH, which
we address via a setupHook. It currently works quite similarly to the
pkg-config one: it accumulates all inputs into GPR_PROJECT_PATH,
GPR_PROJECT_PATH_FOR_BUILD etc. However, this is quite limited at the
moment, as we don't yet have a gprbuild wrapper which understands the
_FOR_BUILD suffix. We'll need to address this in the future, but it is
currently basically impossible to test, since the distinction only
affects cross-compilation and it is not possible to build a GNAT
cross-compiler in nixpkgs at the moment (I'm working on changing that,
however).
Another issue we had to solve was GPRbuild not finding the right GNAT
via its gprconfig tool: GPRbuild has a knowledge base with compiler
definitions which run some checks and collect info about the binaries
in PATH. In the end, the first compiler in PATH that supports the
desired language is selected.
We want GPRbuild to discover our wrapped GNAT, since the unwrapped one
is incapable of producing working binaries: it won't find the crt*.o
objects distributed with libc. GPRbuild, however, needs to find the
Ada runtime distributed with GNAT, which is not part of the wrapper
derivation, so it skips the wrapper and selects the unwrapped GNAT.
Symlinking the unwrapped GNAT's lib directory into the wrapper fixes
this problem, but breaks linking in some cases (e.g. when linking
against OMP from gcc, the runtime variant will shadow the proper
dynamic lib from buildInputs). Additionally, gprconfig uses gnatls as
an indicator that it has found GNAT, and gnatls is not part of the
wrapper.
The solution we opted to adopt here is to install a custom compiler
description into gprbuild's knowledge base which properly detects the
nixpkgs GNAT wrapper: It uses gnatmake to detect GNAT instead of
gnatls and discovers the runtime via a symlink we add to
`$out/nix-support`. This additional definition is enough to properly
detect GNAT, since the plain wrapped gcc detection works out of the
box. It may, however, be necessary to add special definitions for
other languages in the future where gprbuild also needs to discover
the runtime.
One future improvement would be to install libgpr into a separate
output or to split it into a separate derivation (which would require
always linking gprbuild statically, since otherwise we would end up
with a cyclic dependency).
Near the end of 2019, the default Cargo.lock format was changed to:
[[package]]
checksum = ...
This is what importCargoLock assumes. If the crate has not been `cargo
update`'d with a toolchain recent enough to default to the new format,
importCargoLock will fail when trying to access pkg.checksum.
I ran into such a case (shamefully, in my own crate) and it took me a
while to figure out what was going on, so here is an assert with a
more user-friendly message and a hint.
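A usage sketch (hypothetical crate; assuming the cargoLock.lockFile
interface of buildRustPackage, which funnels into importCargoLock):

```nix
{ rustPlatform }:

rustPlatform.buildRustPackage {
  pname = "mycrate";   # hypothetical
  version = "0.1.0";
  src = ./.;
  # The lock file must use the post-2019 format with per-package checksum
  # entries; regenerate it with a recent `cargo update` if it does not.
  cargoLock.lockFile = ./Cargo.lock;
}
```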
At least for now. Such changes are risky (we have very many packages),
and apparently it needs more testing/review without blocking other
changes.
This reverts the whole range 4d0e3984918^..8752c327377,
except for one commit that got reverted in 6f239d7309 already.
(that MR didn't even get its merge commit)
* bintools: disable -pie when -r or -Ur are used
ld’s -r allows you to partially link object files. When -pie is passed with -r, though, we get:
ld: -r and -pie may not be used together
Most build systems are intelligent enough to pass -no-pie before -r, but we might as well support those that
don’t.
Note: -pie is not enabled by default in Nixpkgs, but it is when you are using musl. So this solution is really
only useful for musl toolchains.
* bintools-wrapper: Add incremental -i check for pie
It's hugely inefficient as we can't use shallow cloning (--depth=1).
This has been tested and adapted for quite a few hosts fetchgit is used on in
Nixpkgs. For those where fetching the hash directly doesn't work (most notably
git.savannah.gnu.org), we simply fall back to the old method.
The NixOS pipewire module places its alsa compatibility configuration in
/etc/alsa/conf.d/ instead of /etc/asound.conf. This commit enables
applications running in a bubblewrap fhs environment to use alsa on
systems running pipewire.
According to the rustc implementation[1], `-C incremental=no` enables
incremental builds with directory name `no`. This patch removes the
`-C incremental` argument to disable incremental builds.
[1]: ee86f96ba1/compiler/rustc_session/src/options.rs (L918-L919)
I think this is due for an update. I've chosen to update to the latest
version that has been merged into Melpa.
Unfortunately we now need to hack around it trying to run VCS
commands.
My Emacs configuration with thirty-something leaf packages seems fine
after the rebuild.
skopeo will disable the progress bar if it detects that stdout isn't a
TTY. In order to make it think that stdout _isn't_ a TTY, and therefore
avoid it printing a lot of "…" on separate lines, we pipe the output
through cat.
This changes the output from:
…
…
…
…
…
…
to the eminently more useful and less spammy:
Getting image source signatures
Copying blob sha256:[snip]
Copying blob sha256:[snip]
Copying blob sha256:[snip]
Copying config sha256:[snip]
Writing manifest to image destination
Storing signatures