As part of the splicing, the build/host/target combinations of the interpreter
need to be passed around internally. The chosen names were not very clear:
they implied the values were package sets, whereas they were actually derivations.
This commit introduces two changes.
First, cpython gets optional BlueZ support, which is needed for
AF_BLUETOOTH sockets. Therefore bluezSupport was added as a parameter.
Second, the call to the pythonFull packages has been adjusted. The
Python package set has a self-reference called self, which was not adjusted
for the override. As a result, Python packages for this special version
of Python were not built with the overridden Python, but with the
original one.
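A minimal sketch of the shape of both changes, assuming the usual nixpkgs
idioms (surrounding attribute names are illustrative):

  # cpython/default.nix: sketch of the optional BlueZ dependency
  { lib, stdenv, bluez, bluezSupport ? false, ... }:
  stdenv.mkDerivation {
    # ...
    buildInputs = lib.optionals bluezSupport [ bluez ];
  }

  # the override must also repoint the package set's self-reference,
  # otherwise its packages are built against the original interpreter:
  python3Full = python3.override {
    self = python3Full;
    bluezSupport = true;
  };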
This adds a warning to the top of each “boot” package that reads:
Note: this package is used for bootstrapping fetchurl, and thus cannot
use fetchpatch! All mutable patches (generated by GitHub or cgit) that
are needed here should be included directly in Nixpkgs as files.
This makes it clear to maintainers that they may need to treat this
package a little differently than others. Importantly, we can’t use
fetchpatch here due to the use of <nix/fetchurl.nix>. To avoid having
stale hashes, we need to include as plain files any patches that are
subject to change over time (for instance, gitweb’s patches contain a
version number at the bottom).
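Concretely, each of these files now starts with a comment along these
lines (a sketch; the wording follows the quoted warning, the surrounding
structure is illustrative):

  # Note: this package is used for bootstrapping fetchurl, and thus
  # cannot use fetchpatch! All mutable patches (generated by GitHub or
  # cgit) that are needed here should be included directly in Nixpkgs as
  # files.
  { stdenv, ... }:
  # ...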
Used together with cpython's debugging symbols, this allows inspection of
the Python stack of cpython programs in gdb. This file is a little
different from the rest of the Python output of this package, in that it's
not intended to be run by the Python being built, but by the Python
used by the gdb in question, which could be a very different Python.
It is therefore placed in its own, but hopefully logical and predictable, location.
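As a sketch of the intent, assuming the file is the python-gdb.py that
CPython generates from Tools/gdb/libpython.py (the exact output path is
illustrative):

  postInstall = ''
    # python-gdb.py is run by the Python inside gdb, not by this build,
    # so keep it out of this interpreter's site-packages tree
    mkdir -p $out/share/gdb
    cp python-gdb.py $out/share/gdb/python-gdb.py
  '';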
Also, don't use autoreconfHook on Darwin with Python 3.
Darwin builds are still impure and fail with:
ld: warning: directory not found for option '-L/nix/store/6yhj9djska835wb6ylg46d2yw9dl0sjb-configd-osx-10.8.5/lib'
ld: warning: directory not found for option '-L/nix/store/6yhj9djska835wb6ylg46d2yw9dl0sjb-configd-osx-10.8.5/lib'
ld: warning: object file (/nix/store/0lsij4jl35bnhqhdzla8md6xiswgig5q-Libsystem-osx-10.12.6/lib/crt1.10.6.o) was built for newer OSX version (10.12) than being linked (10.6)
DYLD_LIBRARY_PATH=/private/tmp/nix-build-python3-3.8.3.drv-0/Python-3.8.3 ./python.exe -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
echo "generate-posix-vars failed" ; \
rm -f ./pybuilddir.txt ; \
exit 1 ; \
fi
/nix/store/dsb7d4dwxk6bzlm845z2zx6wp9a8bqc1-bash-4.4-p23/bin/bash: line 5: 72015 Killed: 9 DYLD_LIBRARY_PATH=/private/tmp/nix-build-python3-3.8.3.drv-0/Python-3.8.3 ./python.exe -E -S -m sysconfig --generate-posix-vars
generate-posix-vars failed
make: *** [Makefile:592: pybuilddir.txt] Error 1
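The change itself is just a conditional on the hook; a minimal sketch,
where isPy3k stands for whatever Python-3 predicate the surrounding file
already uses:

  nativeBuildInputs = [ /* ... */ ]
    # autoreconfHook currently breaks the (impure) Darwin build of Python 3
    ++ lib.optionals (!(stdenv.isDarwin && isPy3k)) [ autoreconfHook ];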
I took a close look at how Debian builds the Python interpreter,
because I noticed it ran substantially faster than the one in nixpkgs
and I was curious why.
One thing that I found made a material difference in performance was
this pair of linker flags (passed to the compiler):
-Wl,-O1 -Wl,-Bsymbolic-functions
In other words, effectively the linker gets passed the flags:
-O1 -Bsymbolic-functions
Doing the same thing in nixpkgs turns out to make the interpreter
run about 6% faster, which is quite a big win for such an easy
change. So, let's apply it.
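In the derivation this amounts to passing the pair of -Wl flags to the
compiler at link time; a minimal sketch (the exact mechanism in the real
expression may differ):

  stdenv.mkDerivation {
    # ...
    # picked up by ./configure and then used on every link
    LDFLAGS = "-Wl,-O1 -Wl,-Bsymbolic-functions";
  }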
---
I had not known there was a `-O1` flag for the *linker*!
But indeed there is.
These flags are unrelated to "link-time optimization" (LTO), despite
the latter's name. LTO means doing classic compiler optimizations
on the actual code, at the linking step when it becomes possible to
do them with cross-object-file information. These two flags, by
contrast, cause the linker to make certain optimizations within the
scope of its job as the linker.
Documentation is here, though sparse:
https://sourceware.org/binutils/docs-2.31/ld/Options.html
The meaning of -O1 was explained in more detail in this LWN article:
https://lwn.net/Articles/192624/
Apparently it makes the resulting symbol table use a bigger hash
table, so the load factor is smaller and lookups are faster. Cool.
As for -Bsymbolic-functions, the documentation indicates that it's a
way of saving lookups through the symbol table entirely. There can
apparently be situations where it changes the behavior of a program,
specifically if the program relies on linker tricks to provide
customization features:
https://bugs.launchpad.net/ubuntu/+source/xfe/+bug/644645
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=637184#35
But I'm pretty sure CPython doesn't permit that kind of trick: you
don't load a shared object that tries to redefine some symbol found
in the interpreter core.
The stronger reason I'm confident using -Bsymbolic-functions is
safe, though, is empirical. Both Debian and Ubuntu have been
shipping a Python built this way since forever -- it was introduced
for Python 2.4 and 2.5 in Ubuntu "hardy" and Debian "lenny",
released in 2008 and 2009. In those 12 years they haven't seen a
need to drop this flag; and I've been unable to locate any reports
of trouble related to it, either on the Web in general or on the
Debian bug tracker. (There are reports of a handful of other
programs breaking with it, but not Python/CPython.) So that seems
like about as thorough testing as one could hope for.
---
As for the performance impact: I ran CPython upstream's preferred
benchmark suite, "pyperformance", in the same way as described in
the previous commit. On top of that commit's change, the results
across the 60 benchmarks in the suite are:
The median is 6% faster.
The middle half (aka interquartile range) is from 4% to 8% faster.
Out of 60 benchmarks, 3 come out slower, by 1-4%. At the other end,
5 are at least 10% faster, and one is 17% faster.
So, that's quite a material speedup! I don't know how big the
effect of these flags is for other software; but certainly CPython
tends to do plenty of dynamic linking, as that's how it loads
extension modules, which are ubiquitous in the stdlib as well as
popular third-party libraries. So perhaps that helps explain why
optimizing the dynamic linker has such an impact.
Without this flag, the configure script prints a warning at the end,
like this (reformatted):
If you want a release build with all stable optimizations active
(PGO, etc), please run ./configure --enable-optimizations
We're doing a build to distribute to people for day-to-day use, for
doing things other than developing the Python interpreter itself. So
that's certainly a release build -- we're the target audience for
this recommendation.
---
And, trying it out, upstream isn't kidding! I ran the standard
benchmark suite that the CPython developers use for performance
work, "pyperformance". Following its usage instructions:
https://pyperformance.readthedocs.io/usage.html
I ran the whole suite, like so:
$ nix-shell -p ./result."$variant" --run '
cd $(mktemp -d); python -m venv venv; . venv/bin/activate
pip install pyperformance
pyperformance run -o ~/tmp/result.'"$variant"'.json
'
and then examined the results with commands like:
$ python -m pyperf compare_to --table -G \
~/tmp/result.{$before,$after}.json
Across all the benchmarks in the suite, the median speedup was 16%.
(Meaning 1.16x faster; 14% less time).
The middle half of them ranged from a 13% to a 22% speedup.
Each of the 60 benchmarks in the suite got faster, by speedups
ranging from 3% to 53%.
---
One reason this isn't just the default to begin with is that, until
recently, it made the build a lot slower. What it does is turn on
profile-guided optimization, which means first build for profiling,
then run some task to get a profile, then build again using the
profile. And, short of further customization, the task it would use
would be nearly the full test suite, which includes a lot of
expensive and slow tests, and can easily take half an hour to run.
Happily, in 2019 an upstream developer did the work to carefully
select a more appropriate set of tests to use for the profile:
https://github.com/python/cpython/commit/4e16a4a31
https://bugs.python.org/issue36044
This suite takes just 2 minutes to run. And the resulting final
build is actually slightly faster than with the much longer suite,
at least as measured by those standard "pyperformance" benchmarks.
That work went into the 3.8 release, but the same list works great
if used on older releases too.
So, start passing that --enable-optimizations flag; and backport
that good-for-PGO set of tests, so that we use it on all releases.
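In the Nix expression this boils down to a sketch like the following
(the backported test list reaches the older releases via a patch):

  configureFlags = [
    # ...
    # turns on PGO; the 2-minute good-for-PGO test list is backported
    # to the pre-3.8 releases via a patch
    "--enable-optimizations"
  ];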
This should enable (manual) building of RPMs from Python projects using
the `python setup.py bdist_rpm` command on systems where `rpmbuild` is
not located in `/usr/bin/` (e.g. NixOS).
The discovery of the rpmbuild command was fixed upstream in Python 3.8,
so this commit backports the relevant patch to our currently supported
Python 3 versions.
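A sketch of the shape of that backport, with an illustrative patch file
name and the version predicate spelled as in the surrounding file:

  patches = [
    # ...
  ] ++ lib.optionals (pythonOlder "3.8") [
    # hypothetical file name for the backported upstream fix
    ./find-rpmbuild-on-path.patch
  ];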
Fixes: #85204
The ./configure script prints a warning when passed this flag,
starting with 3.7:
configure: WARNING: unrecognized options: --with-threads
The reason is that there's no longer such a thing as a build
without threads.
Eliminate the warning, by only passing the flag on the older releases
that accept it.
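So the flag becomes conditional on the release, roughly:

  configureFlags = [
    # ...
  ] ++ lib.optionals (pythonOlder "3.7") [
    "--with-threads"
  ];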
Upstream change and discussion:
https://github.com/python/cpython/commit/a6a4dc816
https://bugs.python.org/issue31370
- Replaced the python override from the final stdenv; instead we
  propagate our bootstrap python to stage4 and override both
  CF and xnu to use it.
- Removed the CF argument from the python interpreters; this is
  redundant since CF is not overridden anymore.
- Inherit CF from stage4, making it the same as the one in the stdenv.
This will turn manylinux support back on by default.
pip will now do runtime checks against the available glibc version to
determine if the current interpreter is compatible with a given
manylinux specification. However, it will not check if any of the
required libraries are actually present.
The motivation here is that we want to support building Python packages
with wheels that require manylinux support. There is no real change for
users of source builds, as they are still building packages from source.
The real noticeable(?) change is that impure usages (e.g. running `pip
install package`) will install manylinux packages that previously
refused to install.
Previously we claimed that we were not compatible with manylinux, and
thus such packages wouldn't be installed at all.
Impure users are thus basically in the same situation as before: if you
require some wheel-only package, it didn't work before and it still
won't work properly now; the failure just moves from installation time
to runtime.
I think it is a reasonable trade-off since it allows us to install
manylinux packages with nix expressions and enables tools like
poetry2nix.
This should be a net win for users, as it allows wheels that we
previously couldn't really support to be used.
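For Nix-built packages this looks roughly like the following sketch; the
package and wheel names are hypothetical, and pythonManylinuxPackages
stands for the list of manylinux libraries exposed for this purpose:

  buildPythonPackage {
    pname = "example";            # hypothetical wheel-only package
    version = "1.0";
    format = "wheel";
    src = exampleManylinuxWheel;  # hypothetical pre-fetched wheel file
    # supply the libraries the manylinux1 tag assumes are present
    buildInputs = pythonManylinuxPackages.manylinux1;
  }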
The final release is a year away, but this will give us a chance to test
out certain features and see how it integrates with other packages.
https://www.python.org/dev/peps/pep-0596/
Python 3.8 fails to build on macOS for two reasons:
* python-3.x-distutils-C++.patch fails to apply cleanly.
* An #include for <util.h> is missing, causing a build failure:
./Modules/posixmodule.c:6586:9: error: implicit declaration of function 'openpty' is invalid in C99
if (openpty(&master_fd, &slave_fd, NULL, NULL, NULL) != 0)
^
Use the correct version of python-3.x-distutils-C++.patch, and add a
patch to #include <util.h>.
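A sketch of the resulting patch list (file names illustrative):

  patches = [
    # the variant of this patch that applies cleanly to 3.8
    ./3.8/python-3.x-distutils-C++.patch
  ] ++ lib.optionals stdenv.isDarwin [
    # hypothetical file name: adds the missing #include <util.h>
    ./3.8-darwin-include-util-h.patch
  ];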
We don’t want cpython picking up /Library/Frameworks and
/System/Library/Frameworks, which contain Tcl.framework. Instead it
should use the one provided by Nix. This would not be an issue if
sandboxing were enabled, but unfortunately that has its own issues.
Fixes #66647
There were very many conflicts, basically all due to the
name -> pname+version change. Fortunately, almost everything was auto-resolved
by kdiff3, and for now I just fixed up a couple of evaluation problems,
as verified by the tarball job. There might be some fallout from these
conflicts, but I believe it should be minimal.
Hydra nixpkgs: ?compare=1538299