With #99631 and #102693 merged, it's possible to bump the babashka
version again.
However, recent versions of babashka depend on Java 11 features. I
spoke with the project lead in Slack, and this Java 11 dependency will
remain going forward.
Now that community builds of GraalVM have landed in #99631, both
clj-kondo and babashka can depend on those versions of GraalVM rather
than the one that requires building from source. The community builds
can be built in Hydra and are generally much easier to build and test.
ext/io/console/io-console.gemspec was embedding a timestamp, which made
the build unreproducible. Gems respect SOURCE_DATE_EPOCH, so it's
enough to delete that line if it exists.
This file has been fixed in
679a941d05 (diff-d8422f096931c58d4463e2489f62a228b0f24f0492950ba88c8c89a0d741cfe6)
Ruby regularly merges that gem back into its own repository. Ruby
master is fixed, but none of the Ruby releases have been fixed yet.
lib/ruby/gems/2.6.0/specifications/default/io-console-0.4.7.gemspec now
contains:
s.date = "1980-01-01"
I made a mistaken merge. Reverting it in c778945806 undid the state
on master, but I now realize it crippled the git merge mechanism.
As the merge contained a mix of commits from `master..staging-next`
and other commits from `staging-next..staging`, it got the
`staging-next` branch into a state that was difficult to recover from.
I reconstructed the "desired" state of staging-next tree by:
- checking out the last commit of the problematic range: 4effe769e2
- `git rebase -i --preserve-merges a8a018ddc0` - dropping the mistaken
merge commit and its revert from that range (while keeping
reapplication from 4effe769e2)
- merging the last unaffected staging-next commit (803ca85c20)
- fortunately no other commits have been pushed to staging-next yet
- applying a diff on staging-next to get it into that state
This reverts commit c778945806.
I believe this is exactly what brings the staging branch into
the right shape after the last merge from master (through staging-next);
otherwise part of staging changes would be lost
(due to being already reachable from master but reverted).
There are a variety of additional scripts that are included with the
JRuby installation that use JRuby itself.
For instance, `bin/gem` had the following contents:
```bash
❯ cat /nix/store/kglkqf56ii83yl6yrgcaj5r3s9m2fzr0-jruby-9.2.13.0/bin/gem
load File.join(File.dirname(__FILE__), 'jgem')
```
This is clearly wrong. patchShebangs was not picking up the fix as part
of stdenv, because the interpreter is not a build input but the final
output itself.
We have to rely on substituteInPlace so that we get the correct
interpreter path.
```bash
❯ cat /nix/store/k4fnrn0dcsh2wzw81217r0ywsspb468f-jruby-9.2.13.0/bin/gem
```
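A sketch of the kind of `substituteInPlace` fixup meant here, assuming
the scripts' shebangs referenced a bare `jruby` (the exact strings are
assumptions):
```nix
{
  postFixup = ''
    # patchShebangs only rewrites interpreters it finds among the
    # build inputs; jruby here is this derivation's own output, so
    # rewrite the shebang by hand.
    substituteInPlace $out/bin/gem \
      --replace "#!/usr/bin/env jruby" "#!$out/bin/jruby"
  '';
}
```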
Fixes c88f3adb17, which resulted in
qt 5.15 being used in pythonPackages despite 5.14 being
declared, and adapts qutebrowser accordingly.
'callPackage { pkgs = pkgs // { … }; }' does not work, because
it does not take into account the recursive evaluation of nixpkgs:
`pkgs/development/interpreters/python/default.nix` calls
`pkgs/top-level/python-packages.nix` with `callPackage`.
Thus, even if the former gets passed the updated `pkgs`,
the latter always gets passed `pkgs.pkgs`.
For the change in the qt5 version to apply consistently, 'pkgs.extend'
must be used.
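To illustrate the difference, a hedged sketch (the attribute names are
illustrative only):
```nix
# Has no effect on python-packages.nix, which is called with the
# recursively evaluated pkgs (pkgs.pkgs):
callPackage ./package.nix { pkgs = pkgs // { qt5 = pkgs.qt514; }; }

# Applies consistently, because the overlay takes part in the
# fixed-point evaluation of nixpkgs:
(pkgs.extend (self: super: { qt5 = super.qt514; })).qutebrowser
```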
qutebrowser only used the right qt5 version (5.15) because all
pythonPackages used it anyway.
The optional `format` values `"pyproject"` and `"egg"` were documented
in the manual, but they weren't listed in the function header of
`mk-python-derivation.nix`.
On darwin the compilation would fail with the following warning:
```
clang-7: error: argument unused during compilation: '-fno-strict-overflow' [-Werror,-Wunused-command-line-argument]
```
This error happens because `-fno-strict-overflow` is passed to the compiler. To fix this, disable the `strictoverflow` hardening feature. See also #39687.
ZHF: #97479
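A minimal sketch of the fix, assuming `lib` and `stdenv` are in scope
in the derivation:
```nix
{
  # The strictoverflow hardening flag injects -fno-strict-overflow,
  # which clang 7 rejects under -Werror on darwin.
  hardeningDisable = lib.optional stdenv.isDarwin "strictoverflow";
}
```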
Since wasmer 0.17, no backends are enabled by default. Backends are now detected
using the [makefile](https://github.com/wasmerio/wasmer/blob/master/Makefile).
This change enables cranelift, as this used to be the old default. At
least one backend is needed for the `run` subcommand to work. If we wanted
to replicate the actual logic in the makefile, we would probably want to
enable the singlepass and llvm backends as well. However, enabling the llvm
backend introduces a dependency on openssl, so we opted for replicating
the old default behavior.
/cc roundup issues: #96821, #96828.
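A hedged sketch of enabling the backend in a `buildRustPackage` call;
the feature name `backend-cranelift` is an assumption and should be
checked against wasmer's Makefile:
```nix
{
  # Enable the historical default backend explicitly, since 0.17
  # enables none by default.
  cargoBuildFlags = [ "--features" "backend-cranelift" ];
}
```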
The diff upstream is fairly small, so let me trust Mike Pall on this.
Both versions get a pair of commits that seem to address the CVE
https://github.com/LuaJIT/LuaJIT/issues/603
and 2.1 additionally gets one other small commit.
The erlang `generic-builder` accepts a lot of arguments that affect
the `configureFlags` passed to `mkDerivation`. However, all of these
arguments had no effect whenever a non-empty `configureFlags` was
passed as well.
This change should make it easier to compose arbitrary erlang overrides.
This commit introduces two changes.
First, cpython gets optional BlueZ support, which is needed for
AF_BLUETOOTH sockets. Therefore bluezSupport was added as a parameter.
Second, the call to the pythonFull packages has been adjusted. The
Python packages have a self-reference called `self`, which was not
adjusted for the override. As a result, Python packages for this
special version of Python were not built with the overridden Python,
but with the original one.
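A minimal sketch of the corrected override, keeping the self-reference
pointing at the overridden package (names illustrative; Nix `let`
bindings are recursive, so the self-reference is well-defined):
```nix
let
  python3Full = python3.override {
    self = python3Full;   # must refer to the overridden Python, or
                          # python3Full.pkgs uses the original one
    bluezSupport = true;  # the new parameter for AF_BLUETOOTH sockets
  };
in
python3Full
```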
Before this commit, we only built the main ACL2 executable. Most users
will also want the standard library (the "Community Books"), so after
this commit, we build the entire `make everything` suite, which includes
essentially everything provided in the ACL2 repository.
There's also a new top-level package called `acl2-minimal` which has
just the core ACL2 executable, for those who really only want that.
Future work: modularize the build so that we can support multiple
different subsets of the standard library. A lot of the stuff in this
complete build is probably superfluous to almost all users. Also,
because some of the books have unclear or idiosyncratic licenses, the
full build will not be cached on cache.nixos.org, and installing it will
mean spending a few hours building it. So it would be good to have a
pared down build which excluded non-free books and things that people
rarely or never use.
This adds a warning to the top of each “boot” package that reads:
Note: this package is used for bootstrapping fetchurl, and thus cannot
use fetchpatch! All mutable patches (generated by GitHub or cgit) that
are needed here should be included directly in Nixpkgs as files.
This makes it clear to maintainers that they may need to treat this
package a little differently than others. Importantly, we can’t use
fetchpatch here because we are using <nix/fetchurl.nix>. To avoid
having stale hashes, we need to include patches that are subject to
change over time (for instance, gitweb’s patches contain a version
number at the bottom).
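A sketch of what the note implies, with hypothetical patch names:
```nix
{
  # OK: an immutable patch file committed to Nixpkgs
  patches = [ ./gitweb-version.patch ];

  # Not OK in these packages: fetchpatch is built on top of fetchurl,
  # which is exactly what is being bootstrapped here, and patches
  # generated by GitHub or cgit can change over time.
  # patches = [ (fetchpatch { url = "..."; sha256 = "..."; }) ];
}
```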
Used together with cpython's debugging symbols, this allows inspection
of the Python stack of CPython programs in gdb. This file is a little
different from the rest of the Python code output by this package, in
that it's not intended to be run by the Python being built, but instead
by the Python used by the gdb in question, which could be very
different. It is therefore placed in its own, but hopefully logical and
predictable, location.
* Backported patches from `php-7.4` which fix the env for all
`gettext` and `zlib` tests.
* Setting `--with-libxml-dir` is still needed for versions 7.2 and 7.3.
The motivation for this change is to enable a new Dhall command-line
utility called `dhall-to-nixpkgs` which converts Dhall packages to
buildable Nix packages. You can think of `dhall-to-nixpkgs` as the
Dhall analog of `cabal2nix`.
You can find the matching pull request for `dhall-to-nixpkgs` here:
https://github.com/dhall-lang/dhall-haskell/pull/1826
The two main changes required to support `dhall-to-nixpkgs` are:
* Two new `buildDhall{Directory,GitHub}Package` utilities are added;
`dhall-to-nixpkgs` uses these in the generated output (a usage
sketch follows below)
* `pkgs.dhallPackages` now selects a default version for each package
using the `prefer` utility; all other versions are still buildable
via a `passthru` attribute
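A hypothetical invocation of one of the new utilities; the argument
names follow the utility's intent but should be checked against the
actual implementation:
```nix
dhallPackages.buildDhallGitHubPackage {
  name = "example";
  owner = "dhall-lang";
  repo = "dhall-lang";
  rev = "v17.0.0";        # illustrative tag
  sha256 = "...";         # placeholder
  directory = "Prelude";  # subdirectory containing the package
}
```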
There are too many regressions. Instead of reverting all the work that has been
done on this so far, let's just disable it Python-wide. That way we can
investigate and fix it more easily.
Related:
- 9fc5e7e473
- 593e11fd94
- 508ae42a0f
Since the last time I ran this script, the Repology API changed, so I had to
adapt the script used in the previous PR. The new API should be more robust, so
overall this is a positive change (no more grepping the error messages for our
relevant data; we get a nice JSON structure instead).
Here's the new script I used:
```sh
curl https://repology.org/api/v1/repository/nix_unstable/problems \
| jq -r '.[] | select(.type == "homepage_permanent_https_redirect") | .data | "s@\(.url)@\(.target)@"' \
| sort | uniq | tee script.sed
find -name '*.nix' | xargs -P4 -- sed -f script.sed -i
```
I will also add this script to `maintainers/scripts`.
After making `ffmpeg` point to the latest `ffmpeg_4`, all packages that
used `ffmpeg` without requiring a specific version now use `ffmpeg_3`
explicitly, so they shouldn't change.
Also, don't use autoreconfHook on Darwin with Python 3.
Darwin builds are still impure and fail with:
```
ld: warning: directory not found for option '-L/nix/store/6yhj9djska835wb6ylg46d2yw9dl0sjb-configd-osx-10.8.5/lib'
ld: warning: directory not found for option '-L/nix/store/6yhj9djska835wb6ylg46d2yw9dl0sjb-configd-osx-10.8.5/lib'
ld: warning: object file (/nix/store/0lsij4jl35bnhqhdzla8md6xiswgig5q-Libsystem-osx-10.12.6/lib/crt1.10.6.o) was built for newer OSX version (10.12) than being linked (10.6)
DYLD_LIBRARY_PATH=/private/tmp/nix-build-python3-3.8.3.drv-0/Python-3.8.3 ./python.exe -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
    echo "generate-posix-vars failed" ; \
    rm -f ./pybuilddir.txt ; \
    exit 1 ; \
fi
/nix/store/dsb7d4dwxk6bzlm845z2zx6wp9a8bqc1-bash-4.4-p23/bin/bash: line 5: 72015 Killed: 9 DYLD_LIBRARY_PATH=/private/tmp/nix-build-python3-3.8.3.drv-0/Python-3.8.3 ./python.exe -E -S -m sysconfig --generate-posix-vars
generate-posix-vars failed
make: *** [Makefile:592: pybuilddir.txt] Error 1
```
When a PEP 517 project file is present, pip will not install
prerequisites in `site-packages`:
https://pip.pypa.io/en/stable/reference/pip/#pep-517-and-518-support
For the shell hook, this has the consequence that the generated
temporary directory that is added to PYTHONPATH does not contain
`site.py`. As a result, Python does not discover the Python
module. Thus when a user executes nix-shell in a project, they cannot
import the project's Python module.
This change adds the `--no-build-isolation` option to pip when
creating the editable environment, to correctly generate `site.py`,
even when a `pyproject.toml` is present.
I took a close look at how Debian builds the Python interpreter,
because I noticed it ran substantially faster than the one in nixpkgs
and I was curious why.
One thing that I found made a material difference in performance was
this pair of linker flags (passed to the compiler):
-Wl,-O1 -Wl,-Bsymbolic-functions
In other words, effectively the linker gets passed the flags:
-O1 -Bsymbolic-functions
Doing the same thing in nixpkgs turns out to make the interpreter
run about 6% faster, which is quite a big win for such an easy
change. So, let's apply it.
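A minimal sketch of one way to pass these flags from a Nix derivation,
assuming `NIX_LDFLAGS` as the delivery mechanism (the actual commit may
wire them up differently):
```nix
{
  # NIX_LDFLAGS is passed to the linker itself, so the -Wl, prefix
  # used on the compiler command line is not needed here.
  NIX_LDFLAGS = "-O1 -Bsymbolic-functions";
}
```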
---
I had not known there was a `-O1` flag for the *linker*!
But indeed there is.
These flags are unrelated to "link-time optimization" (LTO), despite
the latter's name. LTO means doing classic compiler optimizations
on the actual code, at the linking step when it becomes possible to
do them with cross-object-file information. These two flags, by
contrast, cause the linker to make certain optimizations within the
scope of its job as the linker.
Documentation is here, though sparse:
https://sourceware.org/binutils/docs-2.31/ld/Options.html
The meaning of -O1 was explained in more detail in this LWN article:
https://lwn.net/Articles/192624/
Apparently it makes the resulting symbol table use a bigger hash
table, so the load factor is smaller and lookups are faster. Cool.
As for -Bsymbolic-functions, the documentation indicates that it's a
way of saving lookups through the symbol table entirely. There can
apparently be situations where it changes the behavior of a program,
specifically if the program relies on linker tricks to provide
customization features:
https://bugs.launchpad.net/ubuntu/+source/xfe/+bug/644645
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=637184#35
But I'm pretty sure CPython doesn't permit that kind of trick: you
don't load a shared object that tries to redefine some symbol found
in the interpreter core.
The stronger reason I'm confident using -Bsymbolic-functions is
safe, though, is empirical. Both Debian and Ubuntu have been
shipping a Python built this way since forever -- it was introduced
for Python 2.4 and 2.5 in Ubuntu "hardy" and Debian "lenny",
released in 2008 and 2009. In those 12 years they haven't seen a
need to drop this flag; and I've been unable to locate any reports
of trouble related to it, either on the Web in general or on the
Debian bug tracker. (There are reports of a handful of other
programs breaking with it, but not Python/CPython.) So that seems
like about as thorough testing as one could hope for.
---
As for the performance impact: I ran CPython upstream's preferred
benchmark suite, "pyperformance", in the same way as described in
the previous commit. On top of that commit's change, the results
across the 60 benchmarks in the suite are:
The median is 6% faster.
The middle half (aka interquartile range) is from 4% to 8% faster.
Out of 60 benchmarks, 3 come out slower, by 1-4%. At the other end,
5 are at least 10% faster, and one is 17% faster.
So, that's quite a material speedup! I don't know how big the
effect of these flags is for other software; but certainly CPython
tends to do plenty of dynamic linking, as that's how it loads
extension modules, which are ubiquitous in the stdlib as well as
popular third-party libraries. So perhaps that helps explain why
optimizing the dynamic linker has such an impact.
Without this flag, the configure script prints a warning at the end,
like this (reformatted):
If you want a release build with all stable optimizations active
(PGO, etc), please run ./configure --enable-optimizations
We're doing a build to distribute to people for day-to-day use,
doing things other than developing the Python interpreter. So
that's certainly a release build -- we're the target audience for
this recommendation.
---
And, trying it out, upstream isn't kidding! I ran the standard
benchmark suite that the CPython developers use for performance
work, "pyperformance". Following its usage instructions:
https://pyperformance.readthedocs.io/usage.html
I ran the whole suite, like so:
$ nix-shell -p ./result."$variant" --run '
cd $(mktemp -d); python -m venv venv; . venv/bin/activate
pip install pyperformance
pyperformance run -o ~/tmp/result.'"$variant"'.json
'
and then examined the results with commands like:
$ python -m pyperf compare_to --table -G \
~/tmp/result.{$before,$after}.json
Across all the benchmarks in the suite, the median speedup was 16%.
(Meaning 1.16x faster; 14% less time).
The middle half of them ranged from a 13% to a 22% speedup.
Each of the 60 benchmarks in the suite got faster, by speedups
ranging from 3% to 53%.
---
One reason this isn't just the default to begin with is that, until
recently, it made the build a lot slower. What it does is turn on
profile-guided optimization, which means first build for profiling,
then run some task to get a profile, then build again using the
profile. And, short of further customization, the task it would use
would be nearly the full test suite, which includes a lot of
expensive and slow tests, and can easily take half an hour to run.
Happily, in 2019 an upstream developer did the work to carefully
select a more appropriate set of tests to use for the profile:
https://github.com/python/cpython/commit/4e16a4a31
https://bugs.python.org/issue36044
This suite takes just 2 minutes to run. And the resulting final
build is actually slightly faster than with the much longer suite,
at least as measured by those standard "pyperformance" benchmarks.
That work went into the 3.8 release, but the same list works great
if used on older releases too.
So, start passing that --enable-optimizations flag; and backport
that good-for-PGO set of tests, so that we use it on all releases.
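The flag itself, as it would appear in the derivation (a sketch; the
backported test list is a separate patch):
```nix
{
  configureFlags = [ "--enable-optimizations" ];
}
```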
Some PECLs depend on other PECLs and, like internal PHP extension
dependencies, need to be loaded in the correct order. This commit makes
that possible by adding the argument `peclDeps` to buildPecl, which adds
the extension to buildInputs and is treated the same way as
internalDeps when the extension config is generated.
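A hypothetical usage sketch of the new argument (package and extension
names are illustrative):
```nix
buildPecl {
  pname = "example";
  version = "1.0.0";
  sha256 = "...";
  # Loaded before "example" in php.ini, like internalDeps:
  peclDeps = [ php.extensions.igbinary ];
}
```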
@the-kenny did a good job in the past and is set as maintainer in many packages;
however, since 2017-2018 he has stopped contributing. To create less confusion
in pull requests when people try to request his feedback, I removed him as
maintainer from all packages.
This should enable (manual) building of RPMs from python projects using
the `python setup.py bdist_rpm` command on systems where `rpmbuild` is
not located in `/usr/bin/`. (e.g. NixOS)
The discovery of the rpmbuild command was fixed upstream in Python 3.8,
so this commit backports the relevant patch to our currently supported
Python 3 versions.
Fixes: #85204
At the moment, using .withExtensions on a PHP derivation produces
something which can't be used inside an environment.systemPackages
list, because outputsToInstall refers to an output which doesn't exist
on the final derivation.
Instead, override it back to just containing the single output
"out".
Also pass through the meta of the package so that description,
homepage, license, maintainers and other metadata are available on
the commonly used attribute.
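A sketch of the fix inside the wrapper derivation (attribute path
illustrative):
```nix
{
  meta = php.meta // {
    # the wrapper only has a single output
    outputsToInstall = [ "out" ];
  };
}
```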
Instead of using two different php packages in php-packages.nix, one
wrapper and one unwrapped, simply use the wrapper and use its
"unwrapped" attribute when necessary. Also, get rid of the packages
and extensions attributes from the base package, since they're no
longer needed.
Since the introduction of php.unwrapped there's no real need for the
phpXXbase attributes, so let's remove them to lessen potential
confusion and clutter. Also update the docs to make it clear how to
get hold of an unwrapped PHP if needed.
Some extensions depend on other extensions. Previously, these had to
be added manually to the list of included extensions, or we got a
cryptic error message pointing to strings-with-deps.nix, which wasn't
very helpful. This makes sure all required extensions are included in
the set from which textClosureList chooses its snippets.
Rework withExtensions / buildEnv to handle currently enabled
extensions better and make them compatible with override. They now
accept a function with the named arguments enabled and all, where
enabled is a list of currently enabled extensions and all is the set
of all extensions. This gives us several nice properties:
- You always get the right version of the list of currently enabled
extensions
- Invocations chain
- It works well with overridden PHP packages - you always get the
correct versions of extensions
As a contrived example of what's possible, you can add ImageMagick,
then override the version and disable fpm, then disable cgi, and
lastly remove the zip extension like this:
```nix
{ pkgs ? (import <nixpkgs>) {} }:

with pkgs;

let
  phpWithImagick = php74.withExtensions ({ all, enabled }: enabled ++ [ all.imagick ]);

  phpWithImagickWithoutFpm743 = phpWithImagick.override {
    version = "7.4.3";
    sha256 = "wVF7pJV4+y3MZMc6Ptx21PxQfEp6xjmYFYTMfTtMbRQ=";
    fpmSupport = false;
  };

  phpWithImagickWithoutFpmZip743 = phpWithImagickWithoutFpm743.withExtensions (
    { enabled, all }:
    lib.filter (e: e != all.zip) enabled);

  phpWithImagickWithoutFpmZipCgi743 = phpWithImagickWithoutFpmZip743.override {
    cgiSupport = false;
  };
in
phpWithImagickWithoutFpmZipCgi743
```
Make buildEnv take earlier overridden values into account by
forwarding all arguments (a merge of generic's arguments, all previous
arguments and the current arguments) to the next invocation of
buildEnv.
Make all arguments to a PHP build overridable; i.e, both configuration
flags, such as valgrindSupport, and packages, such as valgrind:
php.override { valgrindSupport = false; valgrind = valgrind-light; }
This applies to packages built by generic and buildEnv/withExtensions;
i.e, it works with both phpXX and phpXXBase packages.
The following changes were also made to facilitate this:
- generic and generic' are merged into one function
- generic now takes all required arguments for a complete build and
is meant to be called by callPackage
- The main function called from all-packages.nix no longer takes all
required arguments for a complete build - all arguments passed to it
are however forwarded to the individual builds, thus default
arguments can still be overridden from all-packages.nix
This implements the override pattern for builds done with buildEnv, so
that we can, for example, write
php.override { fpmSupport = false; }
and get a PHP package with the default extensions enabled, but PHP
compiled without fpm support.
Also pass through the meta of the package so that description,
homepage, license, maintainers and other metadata are available on
the commonly used attribute.
This is a better name since we have multiple 64-bit things that could
be referred to.
LP64 : integer=32, long=64, pointer=64
ILP64 : integer=64, long=64, pointer=64
This makes packages use lapack and blas, which can wrap different
BLAS/LAPACK implementations.
treewide: cleanup from blas/lapack changes
A few issues in the original treewide:
- can’t assume blas64 is a bool
- unused commented code
This is to fix a regression in upstream Racket packaging, the upstream
bug tracking this is here:
https://github.com/racket/racket/issues/3046
When the bug is fixed this workaround will be unnecessary.
mkDerivation uses the dev output in buildInputs if it exists, hence the
php-with-extensions package was never built or put into the path of
packages dependent on it during build. With this fix, the php packages
built with buildEnv or withExtensions don't have any dev outputs;
packages which need the dev output can refer to the phpXXbase packages
instead.
This provides a means to build a PHP package based on a list of
extensions from another.
For example, to generate a package with all default extensions
enabled, except opcache, but with ImageMagick:
```nix
php.withExtensions (e:
  (lib.filter (e: e != php.extensions.opcache) php.enabledExtensions)
  ++ [ e.imagick ])
```
So now we have only packages for human interaction in php.packages and
only extensions in php.extensions. With this, php.packages.exts has
been merged into the same attribute set as all the other extensions to
make it flat and nice.
The nextcloud module has been updated to reflect this change, as has
the documentation.
Make mkExtension put headers in the dev output and use them, instead of
a different part of the current source tree, when referring to another
extension by using internalDeps.
This means external extensions can be built against the internal ones.
This means php packages can now refer to other php packages by looking
them up in the php.packages attribute and gets rid of the internal
recursive set previously defined in php-packages.nix. This also means
that in applications where previously both the php package and the
corresponding version of the phpPackages package set had to be
specified, the php package will now suffice.
This also adds the phpWithExtensions parameter to the
php-packages.nix, which can be used by extensions that need a fully
featured PHP executable.
A slight rewrite of buildEnv which:
1. Makes buildEnv recursively add itself to its output, so that it can
be accessed from any php derivation.
2. Orders the extension text strings according to their internalDeps
attribute - dependencies have to be put before dependants in the
php.ini or they will fail to load due to missing symbols.
This moves yet more extensions from the base build to
phpPackages.ext. Some of the extensions are a bit quirky and need
patching for this to work, most notably mysqlnd and opcache.
Two new parameters are introduced for mkExtension - internalDeps and
postPhpize. internalDeps is used to specify which other internal
extensions the current extension depends on, in order to provide them
at build time. postPhpize is for when patches and quirks need to be
applied after running phpize.
Patch notes:
- For opcache, older versions of PHP have a bug where header files are
included in the wrong order.
- For mysqlnd, the config.h is never included, so we include it in the
main header file, mysqlnd.h. Also, the configure script doesn't add
the necessary library link flags, so we add them to the variable
configure should have added them to.
Also, add opcache to the default extensions, since it significantly
increases PHP's performance and is enabled by default on Debian-based
distributions. Not having it enabled results in a puzzling
performance loss for anyone attempting to migrate from Debian/Ubuntu
to NixOS who is unaware of this. /talyz
The ./configure script prints a warning when passed this flag,
starting with 3.7:
configure: WARNING: unrecognized options: --with-threads
The reason is that there's no longer such a thing as a build
without threads.
Eliminate the warning, by only passing the flag on the older releases
that accept it.
Upstream change and discussion:
https://github.com/python/cpython/commit/a6a4dc816
https://bugs.python.org/issue31370
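A minimal sketch of the resulting conditional, assuming `lib` and the
interpreter's `version` are in scope:
```nix
{
  # --with-threads was removed in 3.7; builds are always threaded now.
  configureFlags = lib.optional (lib.versionOlder version "3.7") "--with-threads";
}
```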
The prefix will now be correct in the case of a Nix env.
Note, however, that creating a venv from a Nix env still does not work. This does not seem to be
possible with the current approach either, because venv will copy or symlink our Python wrapper.
If it symlinks (the default), the wrapper won't see a pyvenv.cfg; if it is copied, I think it
should work, but it does not...
This is needed in the case of `python.buildEnv`, to make sure site.PREFIXES
does not point only to the unwrapped executable prefix.
--------------------------------------------------------------------------------
This PR is a story where your valiant hero sets out on a very simple adventure, but ends up having to slay dragons, starts questioning his own sanity, and finally manages to gain enough knowledge to slay the evil dragon and win the proverbial prize.
It all started out on sunny spring day with trying to tackle the Nixops plugin infrastructure and make that nice enough to work with.
Our story begins in the shanty town of [NixOps-AWS](https://github.com/nixos/nixops-aws) where [mypy](http://mypy-lang.org/) type checking has not yet been seen.
As our deuteragonist (@grahamc) has made great strides in the capital city of [NixOps](https://github.com/nixos/nixops) our hero wanted to bring this out into the land and let the people rejoice in reliability and a wonderful development experience.
The plugin work itself was straightforward and our hero quickly slayed the first small dragon; at this point things felt good and our hero thought he was going to reach the town of NixOps-AWS very quickly.
But alas! Mypy did not want to go, it said:
`Cannot find implementation or library stub for module named 'nixops'`
Our hero felt a small sliver of life escape from his body. Things were not going to be so easy.
After some frustration our hero discovered there was a [rule of the land of Python](https://www.python.org/dev/peps/pep-0561/) that governed the import of types into the kingdom, more specifically a very special document (file) called `py.typed`.
Things were looking good.
But no, what the law said did not seem to match reality. How could things be so?
After some frustrating debugging our valiant hero thought to himself "Hmm, I wonder if this is simply a Nix idiosyncrasy", and it turns out indeed it was.
Things that were working in the blessed way of the land of Python (inside a `virtualenv`) were not working the way they were from his home town of Nix (`nix-shell` + `python.withPackages`).
After even more frustrating attempts at reading the mypy documentation and trying to understand how things were supposed to work our hero started questioning his sanity.
This is where things started to get truly interesting.
Our hero started to use a number of powerful weapons, both forged in the land of Python (pdb) & by the mages of UNIX (printf-style-debugging & strace).
After first trying to slay the dragon simply by `strace` and a keen eye our hero did not spot any weak points.
Time to break out a more powerful sword (`pdb`) which also did not divulge any secrets about what was wrong.
Our hero went back to the `strace` output and after a fair bit of thought and analysis a pattern started to emerge. Mypy was looking in the wrong place (i.e. not in the environment created by `python.withPackages` but in the interpreter store path) and our princess was in another castle!
Our hero went to the pub full of old grumpy men giving out the inner workings of the open source universe (Github) and acquired a copy of Mypy.
He littered the code with print statements & break points.
After a fierce battle full of blood, sweat & tears he ended up in 20f7f2dd71/mypy/sitepkgs.py and realised that everything came down to the Python `site` module and more specifically https://docs.python.org/3.7/library/site.html#site.getsitepackages which in turn relies on https://docs.python.org/3.7/library/site.html#site.PREFIXES .
Our hero created a copy of the environment created by `python.withPackages` and manually modified it to confirm his findings, and it turned out it was indeed the case.
Our hero had damaged the dragon and it was time for a celebration.
He went out and acquired some mead which he ingested while he typed up his story and waited for the dragon to finally die (the commit caused a mass-rebuild, I had to wait for my repro).
In the end all was good in [NixOps-AWS](https://github.com/nixos/nixops-aws)-town and type checks could run. (PR for that incoming tomorrow).
Initially discussed in #55460.
This patch adds a `buildEnv` function to `php` that has the
following features:
* `php.buildEnv { extraConfig = /* ... */; }` to specify custom
`php.ini` args for the CLI.
* `php.buildEnv { exts = phpPackages: [ phpPackages.apcu ]; }` to
create a PHP interpreter for the CLI with the `apcu` extension.
Perl on darwin (and any other sane platform) has pretty good threading
support; enable it.
As it turns out, we were building non-multithreaded perl on all systems,
since glibc was no longer part of the stdenv:
```
nix-repl> pkgs = import <nixpkgs> {}

nix-repl> pkgs.stdenv ? glibc
false
```
meaning that the comments were incorrect. Thus, clear up the confusion
and remove the misleading comments, while enabling multithreading by
default. The builds will fail on unsupported platforms, and in this case
the only place is the bootstrap, where we already force
non-multithreaded perl.
As a consequence of the above, this change will cause the full rebuild
of stdenv on all platforms, including linux.
The upstream repository has been archived, with a note that this should never be needed:
https://github.com/alexcrichton/wasm-gc
It also happens to have custom patch logic that will fail the newer cargo
verification in #79975
Changes the default fetcher in the Rust Platform to be the newer
`fetchCargoTarball`, and changes every application using the current default to
instead opt out.
This commit does not change any hashes or cause any rebuilds. Once integrated,
we will start deleting the opt-outs and recomputing hashes.
See #79975 for details.
One of the motivations for this change is the following Discourse
discussion:
https://discourse.dhall-lang.org/t/offline-use-of-prelude/137
Many users have requested Dhall support for "offline" packages
that can be fetched/built/installed using ordinary package
management tools (like Nix) instead of using Dhall's HTTP import system.
I will continue to use the term "offline" to mean Dhall package
builds that do not use Dhall's language support for HTTP imports (and
instead use the package manager's support for HTTP requests, such
as `pkgs.fetchFromGitHub`).
The goal of this change is to document the idiomatic way to
implement "offline" Dhall builds by implementing Nixpkgs support
for such builds. That way, when other package management tools ask
me how to package Dhall with their tools, I can refer them to how it
is done in Nixpkgs.
This change contains a fully "offline" build for the largest Dhall
package in existence, known as "dhall-packages" (not to be confused
with `dhallPackages`, which is our Nix attribute set containing
Dhall packages).
The trick to implementing offline builds in Dhall is to take
advantage of Dhall's support for semantic integrity checks. If an
HTTP import is protected by an integrity check and a cached build
product matches the integrity check then the HTTP import is never
resolved and the expression is instead fetched from cache.
By "installing" dependencies in a pre-seeded and isolated cache
we can replace remote HTTP imports with dependencies that have
been built and supplied by Nix instead.
The offline nature of the builds is enforced by compiling the
Haskell interpreter with the `-f-with-http` flag, which disables
the interpreter's support for HTTP imports. If a user forgets
to supply a necessary dependency as a Nix build product, the
build fails, informing them that HTTP imports are disabled.
By default, built packages are "binary distributions", containing
just a cache product and a Dhall expression which can be used to
resolve the corresponding cache product.
Users can also optionally enable a "source distribution" of a package
which already includes the equivalent fully-evaluated Dhall code (for
convenience), but this is disabled by default to keep `/nix/store`
utilization as compact as possible.