stripHash uses a global variable to communicate its computation
result, but that isn't necessary: you can just pipe to stdout in a
subshell, because a function mostly behaves like just another command.
baseHash() also introduces a suffix-stripping capability, since that's
something users of the function tend to need.
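A minimal sketch of the stdout pattern (the 32-character store-hash
prefix and the exact basename handling are illustrative, not the
literal nixpkgs code):

# Sketch: return the result via stdout instead of a global variable.
stripHash() {
    local strippedName
    strippedName="$(basename -- "$1")"
    # Assumed: strip a 32-character nix store hash prefix, if present.
    if [[ "$strippedName" =~ ^[a-z0-9]{32}- ]]; then
        strippedName="${strippedName:33}"
    fi
    echo "$strippedName"
}

# Callers capture the output in a subshell instead of reading a global:
name="$(stripHash "$src")"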
This seems to be the root cause of the random page allocation failures,
and @wizeman did a very good job of not only finding the root cause
but also giving a detailed explanation of it in #10828.
Here is an excerpt:
The problem here is that the kernel is trying to allocate a contiguous
section of 2^7=128 pages, which is 512 KB. This is way too much:
kernel pages tend to get fragmented over time and kernel developers
often go to great lengths to try allocating at most only 1 contiguous
page at a time whenever they can.
From the error message, it looks like the culprit is unionfs, but this
is misleading: unionfs is the name of the userspace process that was
running when the system ran out of memory, but it wasn't unionfs that
was allocating the memory: it was the kernel; specifically, it was the
v9fs_dir_readdir_dotl() function, which is the code for handling the
readdir() function in the 9p filesystem (the filesystem that is used
to share a directory structure between a qemu host and its VM).
If you look at the code, here's what it's doing at the moment it tries
to allocate memory:
buflen = fid->clnt->msize - P9_IOHDRSZ;
rdir = v9fs_alloc_rdir_buf(file, buflen);
If you look into v9fs_alloc_rdir_buf(), you will see that it will try
to allocate a contiguous buffer of memory (using kzalloc(), which is a
wrapper around kmalloc()) of size buflen + 8 bytes or so.
So in reality, this code actually allocates a buffer of size
proportional to fid->clnt->msize. What is this msize? If you follow
the definition of the structures, you will see that it's the
negotiated buffer transfer size between 9p client and 9p server. On
the client side, it can be controlled with the msize mount option.
What this all means is that the reason for running out of memory is
that the code (which we can't easily change) tries to allocate a
contiguous buffer of size more or less equal to "negotiated 9p
protocol buffer size", which seems to be way too big (in our NixOS
tests, at least).
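For concreteness, here is roughly where msize enters the picture. This
is a hypothetical mount invocation using the standard 9p mount options;
the mount tag and mount point are made up:

# A 512 KB msize forces 512 KB contiguous kernel allocations per
# readdir buffer, matching the 2^7-page failure above:
mount -t 9p -o trans=virtio,version=9p2000.L,msize=524288 store /mnt

# Omitting msize lets the client and server negotiate a default:
mount -t 9p -o trans=virtio,version=9p2000.L store /mnt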
After that initial finding, @lethalman ran the gnome3 gdm test without
setting the msize parameter at all, and that seems to have resolved
the problem.
The reason I'm committing this without testing against all of the
NixOS VM tests is basically that I think we can only do better, not
worse, than the current state.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The most complex problems came from dealing with switches that had
been reverted in the meantime (gcc5, gmp6, ncurses6).
It's likely that darwin is (still) broken nontrivially.
While debugging an issue with running NixOps tests, I found out that the
output of debClosureGenerator is not deterministic.
The reason behind this is the way the Provides and Replaces fields are
handled. I haven't yet found the exact issue, but so far the packages
satisfying a "Provides" field appear to be picked more or less at random.
So, running the NixOps Hetzner tests, we get either mawk, original-awk or
gawk, alternating on every invocation.
While it isn't poisonous for the test whether we have mawk or gawk,
having original-awk certainly is, because live-build only works with
mawk or gawk.
The best solution would obviously be to make debClosureGenerator
deterministic, but in the case of "Provides: awk", we can safely pick
mawk by default, because it has "Priority: required" in its package
description.
This also has the advantage that we can safely cherry-pick this to
release-15.09, because it's very unlikely that we'll break
debClosureGenerator by adding a dependency to commonDebPackages.
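As a sanity check, the priority can be read straight out of a Debian
Packages index (assuming a locally downloaded and decompressed
Packages file):

# Print the Priority field of the mawk stanza.
awk '/^Package: mawk$/,/^$/' Packages | grep '^Priority:'
# => Priority: required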
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
[Note from Austin: I think @edolstra forgot to merge this to master.]
(cherry picked from commit 02b056c5b180b4b8ba22ddc3061d78258e2ef98f on
release-14.04)
The default target (i386-linux) causes flags like "-march i386" to be
added, which breaks on recent Fedora releases (18 and up), resulting
in errors like:
/usr/lib/gcc/i686-redhat-linux/4.7.2/../../../../include/c++/4.7.2/ext/atomicity.h:48: undefined reference to `__atomic_fetch_add_4'
So set the target to i686-linux.
http://hydra.nixos.org/build/6567357
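The underlying issue is that plain i386 lacks the atomic instructions
(lock xadd/cmpxchg) that GCC needs to inline the __atomic builtins, so
it emits external library calls that nothing links in by default. A
rough reproduction sketch, assuming a multilib-capable gcc:

cat > atomic-test.c <<'EOF'
int main(void) {
    int x = 0;
    return __atomic_fetch_add(&x, 1, __ATOMIC_SEQ_CST);
}
EOF
gcc -m32 -march=i386 atomic-test.c  # undefined __atomic_fetch_add_4
gcc -m32 -march=i686 atomic-test.c  # links fine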
Wheezy was released on June 15th, and on all mirrors the SHA256 hash
of Packages.bz2 has changed to reflect the new release, so let's update.
Here is the release announcement from Debian:
http://www.debian.org/News/2013/20130615
It also seems that the versioning scheme has changed with version 7.x:
Debian appears to have switched to a two-digit scheme. This means that
the attribute name "debian70..." should really be something like
"debian7...", but I'm keeping the attribute as-is so as not to break
references.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is needed in order to prevent services from starting while
populating the image with the contents of the .deb files. The procedure
used here is exactly the same as used in debootstrap.
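Concretely, debootstrap's trick is a policy-rc.d stub that denies all
service starts (invoke-rc.d treats exit status 101 as "action
forbidden"); a minimal sketch, with /mnt standing in for the image
root:

# Deny all service starts inside the image while unpacking .debs.
cat > /mnt/usr/sbin/policy-rc.d <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x /mnt/usr/sbin/policy-rc.d

# ... unpack and configure the .deb files ...

rm /mnt/usr/sbin/policy-rc.d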
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
9p (with caching enabled) is much faster than CIFS and doesn't require
Samba or virtual networking. For instance, building GNU Hello with
CIFS takes ~323s on my laptop, but with 9p it takes 54s.
More measurements will be needed to see if "cache=fscache" is really
faster than "cache=loose" (the former seems to be a little bit
faster).
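For reference, a hypothetical host/guest pair along these lines; the
qemu flag and 9p mount options are standard, while the paths and the
mount tag are made up:

# Host: export a directory over virtio-9p (no Samba, no networking);
# other qemu flags omitted for brevity.
qemu-kvm -virtfs local,path=/nix/store,security_model=none,mount_tag=store

# Guest: mount it with client-side caching enabled.
mount -t 9p -o trans=virtio,version=9p2000.L,cache=loose store /nix/store
# ...or with fscache, to compare:
mount -t 9p -o trans=virtio,version=9p2000.L,cache=fscache store /nix/store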