systemd-udev-settle is not started by default anymore.
Because checking for psmouse like that is considered legacy,
we start systemd-udev-settle manually in the test.
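A minimal sketch of how the manual start could look in the test script (a
Nix fragment with an embedded Perl test snippet; the exact surrounding test
code is assumed):

  testScript = ''
    # systemd-udev-settle isn't pulled in by default anymore, so start it
    # explicitly before checking for the psmouse module.
    $machine->succeed("systemctl start systemd-udev-settle.service");
  '';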
cc @edolstra
The most complex problems came from dealing with switches that had been
reverted in the meantime (gcc5, gmp6, ncurses6).
It's likely that darwin is (still) broken nontrivially.
Commit 9bfe92ecee ("docker: Minor improvements, fix failing test") added
the services.docker.storageDriver option and made it mandatory, but didn't
give it a default value. This results in an ugly traceback when users
enable docker but don't pay enough attention to also set the storageDriver
option. (An attempt was made to add an assertion, but it didn't work,
possibly because of how "mkMerge" works.)
The argument against a default value was that the optimal value depends on
the filesystem of the host. This is, AFAICT, only partly true (it seems
some backends are filesystem agnostic). Also, docker itself uses a default
storage driver, "devicemapper", when no --storage-driver=x option is
given. Hence, we use the same value as our default.
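For illustration, a user configuration would now look roughly like this
(option path as named above; "overlay" is shown only as an example
override):

  services.docker = {
    enable = true;
    # storageDriver now defaults to "devicemapper"; override it if another
    # backend suits the host filesystem better, e.g.:
    storageDriver = "overlay";
  };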
Add a FIXME comment that 'devicemapper' breaks NixOS VM tests (for as yet
unknown reasons), so we still run those with the 'overlay' driver.
Closes #10100 and #10217.
I'm not quite sure why the official Hydra gets a kernel panic in one of
two VMs using the exact same kernels:
https://hydra.nixos.org/build/26339384
Because the kernel panic happens before stage 1, let's wait for the
first VM to boot up and, once the bootup is done, start the second one
in the hope that it won't trigger the panic.
Oddly enough, whenever I run the test on my own Hydra and on my local
machines, I don't get anything like that.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
I forgot to do this in da0e642. It shouldn't be a big problem, but it's
cleaner to destroy the VM once we're done testing.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We previously had 1024 MB of memory to fit a VirtualBox VM with 512 MB
plus the memory needed by the VirtualBox host VM. That obviously won't
work for two VirtualBox VMs, which are used for testing networking
between two VirtualBox guests.
Now, we have 2048 MB for the qemu guest (the VirtualBox host) and 768 MB
for each VirtualBox guest. That should be enough to fit two VirtualBox
guests (I hope).
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Unfortunately, we can't test whether USB is really working, but we can
make sure that VirtualBox has access to the USB devices.
This essentially tests #9736, which I haven't yet been able to
reproduce, but it makes sense to test it so it won't happen in future
releases.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Addresses #9876 insofar as we want to make sure that VirtualBox 5.x is
properly detected. Right now the result is "kvm", so the subtest fails
as expected with:
error: systemd-detect-virt returned "kvm" instead of "oracle" at (eval
14) line 414, <__ANONIO__> line 92.
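A simplified sketch of what the subtest checks (the real test runs the
command inside the nested VirtualBox guest; the surrounding code is
assumed):

  testScript = ''
    my $virt = $machine->succeed("systemd-detect-virt");
    chomp $virt;
    die "systemd-detect-virt returned \"$virt\" instead of \"oracle\""
      unless $virt eq "oracle";
  '';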
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Makes it easier to debug and find out for which machine a certain log
socket has been started or stopped.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We're now simply using antiquotation; it has been a while since it was
introduced (in Nix 1.7), so we can safely use it, and it makes the code
much more readable.
As usual, I made sure that I didn't accidentally change any
functionality:
$ nix-instantiate nixos/tests/virtualbox.nix
...
/nix/store/cldxyrxqvwpqm02cd3lvknnmj4qmblyn-vm-test-run-virtualbox.drv
$ git stash pop
...
$ nix-instantiate nixos/tests/virtualbox.nix
...
/nix/store/cldxyrxqvwpqm02cd3lvknnmj4qmblyn-vm-test-run-virtualbox.drv
$
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is essentially not just "wrapping" the line but refactoring it into
a shorter name which is used in two places.
And yes, I know I'm very pedantic when it comes to whitespace and line
lengths, but I made sure this doesn't change any functionality:
$ nix-instantiate nixos/tests/virtualbox.nix
...
/nix/store/cldxyrxqvwpqm02cd3lvknnmj4qmblyn-vm-test-run-virtualbox.drv
$ git stash pop
...
$ nix-instantiate nixos/tests/virtualbox.nix
...
/nix/store/cldxyrxqvwpqm02cd3lvknnmj4qmblyn-vm-test-run-virtualbox.drv
$
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Instead of manually setting debug to true or false, this makes it
possible to run the test like this:
nix-build nixos/tests/virtualbox.nix --arg debug true
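For this to work, the test expression needs to accept the argument,
roughly like this (a sketch; the real expression takes more arguments):

  { system ? builtins.currentSystem, debug ? false }:

  # ... "debug" then toggles things like extra log output in the test.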
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Sometimes there are random kernel panics due to the lack of memory in the
qemu guests, but as we're setting the VirtualBox memory size relatively
low, 1024 MB should be enough for the qemu guests.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We want to check whether DBus functionality is working, so let's make
sure it is running in our mini-initrd.
Unfortunately, DBus requires users to be properly set up and a
configuration file other than the one in
${dbus.daemon}/etc/dbus-1/system.conf, so we provide that as well.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Using waitForWindow on the IceWM root window doesn't necessarily mean
that the panel will be shown. In the lightdm test, we only make sure
that the login works, so it doesn't matter how the session itself looks
or whether IceWM is broken, and thus we don't need that screenshot.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Currently the lightdm test detects a successful login by OCR'ing the
screen and searching for the clock widget's text. Since the last
IceWM update (commit bdd20ced), either the font or the colors of the
clock changed such that the OCR doesn't pick it up anymore.
Instead, just look for a matching (root) window title, e.g.
"IceWM 1.3.9 (Linux/i686)"
Commit 687caeb renamed services.virtualboxHost to programs.virtualbox,
but according to the discussion on the commit, it's probably better to
put it into virtualisation.virtualbox instead.
The discussion can be found here:
https://github.com/NixOS/nixpkgs/commit/687caeb#commitcomment-12664978
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This removes all references to .../sbin for the guest additions and also
installs all binaries to .../bin instead (so no more .../sbin).
The main motivation for doing this is commit 98cedb3 (which
unfortunately had to be reverted in a9f2e10) and pull request #9063,
the latter being an initial effort to move mount.vboxsf to .../bin
instead of .../sbin.
The commit that follows finishes the removal of .../sbin entirely.
Using $storepath/sbin is deprecated according to commit 98cedb3, so
let's avoid putting anything in .../sbin for the guest additions.
This is a continuation of the initial commit done by @ctheune at
1fb1360, which unfortunately broke VM tests and only changed the path of
the mount.vboxsf helper.
With this commit, the VM test is fixed and I've also verified on my
machine that it is indeed working again.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We no longer need to have "SUID sandbox" enabled on the chrome://sandbox
status page, and we now also check for "You are adequately sandboxed."
to be absolutely sure that we're running with proper sandboxing.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We already have separate tests for checking whether the ISO boots
correctly, so it's not necessary to do that here. Now
tests/installer.nix just tests nixos-install from a regular NixOS VM
that uses the host's Nix store. This makes running the tests more
convenient because we don't have to build a new ISO after every
change.
Serves as a regression test for #7902.
It's not yet referenced in release(-combined)?.nix because it will fail
until the issue is resolved. It was, however, tested successfully
against libgcrypt with libcap passed as null.
As for the test itself, I'm not quite sure whether checking for the time
displayed by IceWM is a good idea, but we can still fix that if it turns
out to be a problem.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This will make the test a lot more reliable, because we no longer need
to press ESC multiple times hoping that it will close the popup.
Unfortunately, in order to run this test I needed to locally revert the
gyp update from a305e6855d.
With the old gyp version, however, the test runs fine and is able to
properly detect the popup.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
OCR support is now disabled by default and has to be explicitly enabled
using "enableOCR = true". If it is set to false, any usage of
getScreenText or waitForText will fail with an error suggesting to pass
enableOCR.
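A test that needs OCR would opt in roughly like this (a sketch; the
attribute placement is assumed):

  {
    name = "some-ocr-test";   # hypothetical test name
    enableOCR = true;         # required for getScreenText / waitForText
    # ... nodes, testScript, etc.
  }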
This should get rid of the rather large dependency on tesseract, which
we don't need for most tests.
Note that I'm using system("type -P") here to check whether tesseract
is in PATH. I know it's a bashism, but we already have other bashisms in
the test scripts and we run them with bash, so IMHO it's not a problem
here.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Fixes the "blindly hope that 60 seconds is enough" issue from 1f34503,
so that we now have a (hopefully) reliable way to wait for the
passphrase prompt.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This serves as a regression test for #7859.
It's pretty straightforward, except for the fact that
nixos-generate-config doesn't detect LUKS devices, and except for the
"sleep 60".
As for the former, I have tried to add support for LUKS devices to
nixos-generate-config, but it's not as easy as it sounds, because we
need to create a device tree across all possible mappers and/or LVM up
to the "real" device and then decide whether it is relevant to what is
currently mounted. So I guess this is something for the nixpart branch
(see #2079).
And the latter isn't trivial either, because the LUKS passphrase prompt
is issued on /dev/console, which is the last "console=..." kernel
parameter (thus the `mkAfter`). So we can't simply grep the log, because
the prompt ends up on one terminal only (tty0), and using select() on
$machine->{socket} doesn't work very well because the FD is always
"ready for read". If we read the FD ourselves, we would conflict with
$machine->connect and end up in an inconsistent state. Another idea
would be to use multithreading to run $machine->connect while feeding
the passphrase prompt in a loop and stop the thread once
$machine->connect is done. It turns out that this isn't easy either,
because the threads would need to share the $machine object and, of
course, do proper locking.
In the end I decided to use the "blindly hope that 60 seconds is enough"
approach for now and come up with a better solution later. Other VM
tests surely use sleep as well, but it's $machine->sleep, which is bound
to the clock of the VM: if the build machine is under high load, a
$machine->sleep gets delayed accordingly, whereas a timer outside the VM
won't get that delay, so the test is not deterministic.
Tested against the following revisions:
5e3fe39: Before the libgcrypt cleanup (a71f78a) that broke cryptsetup.
69a6848: While cryptsetup was broken (obviously the test failed).
15faa43: After cryptsetup has been switched to OpenSSL (fd588f9).
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
These commands will be executed directly after the machine is created,
which gives us the chance to, for example, type in passphrases using the
virtual keyboard.
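A hedged sketch of how such commands could be passed to an installer
test (the parameter name and passphrase below are illustrative only):

  preBootCommands = ''
    # type the LUKS passphrase on the virtual keyboard once the prompt is up
    $machine->sleep(60);
    $machine->sendChars("supersecret\n");
  '';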
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We're going to need it for installer tests where nixos-generate-config
isn't yet able to fully detect the filesystems/hardware, for example for
device mapper configurations other than LVM.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It seems there's an upstream bug in the "lpstat" command: we need to
specify the server's port explicitly.
Further information: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=711327
[root@client:~]# lpstat -H
/var/run/cups/cups.sock
[root@client:~]# lpstat -h server -H
/var/run/cups/cups.sock:631
[root@client:~]# CUPS_SERVER=server lpstat -H
server:631
[root@client:~]# lpstat -h server:631 -H
server:631
Sometimes keys aren't properly recognized the first time, so in order
to make sure they get through, always resend the key on every retry.
In this case the worst that could happen is that the VM is started over
and over again, but never in parallel, so that's fine because we check
for successful startup 10 seconds after the keypress.
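Roughly, the retry loop now looks like this (a sketch; the startup check
is a hypothetical helper):

  testScript = ''
    for (1 .. 10) {
      $machine->sendKeys("ret");   # resend the key on every retry
      $machine->sleep(10);
      last if vmStartedUp();       # hypothetical check for successful startup
    }
  '';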
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It's not nice to send the escape key over and over again just to ensure
the popup is closed, because even *if* it fails to close the popup four
times in a row, it's very unlikely that yet another attempt will close
it. To make really sure, we might need to take a screenshot and detect
visual changes.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
I'm increasing it to 100 MB to make sure any boot loader fits with all
its stages. Of course, right now the reason GRUB doesn't fit into the
partition is that mdadm 3.3.2 bloats the initrd so that it takes up all
the space, but in order to avoid confusion about why the *boot* loader
fails in the VM tests, I've increased the size.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Currently this just makes sure that, by default, it's possible to open a
terminal.
This is exactly the point that might confuse users of i3 on NixOS,
because i3 doesn't print a warning/error if it is unable to start the
terminal emulator.
Thanks to @waaaaargh for reporting this issue.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Since Chromium version 42, we have a new user namespaces sandbox in the
upstream project. It's more integrated, so the chrome://sandbox page
reports it as "Namespace Sandbox" instead of the SUID sandbox, which we
were re-using (or abusing?) in our patch.
So if either "SUID Sandbox" or "Namespace Sandbox" reports "Yes", it's
fine on our side.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Chromium is quite memory-hungry and we frequently get random crashes in
the tests, so let's set it to 1024 MB, because new releases of Chromium
most probably won't consume *less* memory.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Added attributes to nixos/tests/mesos.nix to verify that mesos-slave
attributes work. If the generated attributes are invalid, the daemon
should fail to start.
Change-Id: I5511245add30aba658b1af22cd7355b0bbf5d15c
Especially if the user isn't in the vboxusers group anymore, this gets
VERY noisy, because the VBoxSVC process emits a warning for every single
USB device noting that it can only be accessed when the user is in the
vboxusers group.
So we now have a debug attribute, so that we can enable this output only
when necessary.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The "nix-store" command within the VM test is running without
NIX_REMOTE=daemon and since Nix 1.8 tries to open the store database in
read-write mode even for nix-store -qR.
Now, we're doing this properly and rely on setup hooks, which is the
same method that's used when you're building a library which depends on
blivet.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This creates unnecessary cruft in the root user's home directory, which
we really don't need, except for the log. So we now cat the log to
stderr instead, and the private temporary directory is cleaned up
afterwards.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Essentially adds two more VirtualBox VMs to the test and also increases
the memory size of the qemu VM to 768 MB to make sure we don't run out
of memory too soon.
We're testing whether those two VMs can talk either to each other
(currently via ICMP only) or to/from the host via TCP/IP.
Also, this restructures the VM test a bit, so that we now pass in a
custom stage2Init script that has access to the store via a private
mount over the /nix/store that's already in the initrd. The reason this
is a private mount is that we don't want to shadow the Nix store of the
initrd, which would essentially break the cleanup functionality after
the custom stage 2 script (currently this is only "poweroff -f").
Note that setting the hostname inside the VirtualBox VM is *not* for
additional fanciness but to produce a different store path for the VM
image, so that VirtualBox doesn't bail out when trying to use an image
which is already attached to another VM.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We're going to create more than one VirtualBox VM, so let's dynamically
generate subs specific to a particular VirtualBox VM, merging everything
into the testScript and machine expressions.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Currently it pretty much just tests starting up virtual machines and
shutting them down afterwards, but for both VBoxManage and the
VirtualBox GUI.
This helps catch errors in hardened mode; however, we still need to test
whether networking works as intended (and I fear that this is broken at
the moment).
The VirtualBox VM is _not_ using hardware virtualization support (thus
we use system = "i686-linux", because x86_64 has no emulation support),
because we're already inside a qemu VM, which means it's going to be
slow as hell (that's why I've written my own subs just for testing
startup/shutdown/whatnot with appropriate timeouts).
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This patch should be reverted if either:
- systemd fixes the multi-swapon issue
  (https://bugs.freedesktop.org/show_bug.cgi?id=86930), or
- we disable the autogeneration of swap and vfat units within systemd.
Following the discussion in NixOS#5021:
- obsolete the nix.proxy option
- add the networking.proxy option
- set a default no_proxy environment variable
- add an rsync option
- manual tests ok
- automatic tests ok
Amended by lethalman to simplify the option descriptions.
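For example, a configuration using the new option could look like this
(values are illustrative):

  networking.proxy = {
    default = "http://proxy.example.com:3128";
    noProxy = "127.0.0.1,localhost";
  };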
Of course, this could be done via packageOverrides, but this is more
explicit and makes it possible to run the tests with various Chromium
overrides.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Currently, the test only covers the user namespace sandbox, and even
that isn't very representative, because we're running the tests as root.
But apart from that, we should have functionality for opening/closing
windows, and the main goal here is to make those actions as
deterministic as possible, because Chromium usually isn't very nice to
chained xdotool keystrokes.
And of course, the most important "test" we have here: we at least know
whether Chromium works _at_all_.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Once nixpart 1.0 is released, we will only need to delete one single
directory rather than search for needles in a haystack, that is, all of
<nixpkgs>. Also, it keeps my sanity at an almost healthy level.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Quite a mess, but at least the mdraid tests succeed now. However, the
lvm2 tests are still failing, so we need to bring back a bit more old
crap :-(
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
I'm really not sure whether these tests are actually run upstream,
because there are quite a few oddities which are either my fault for
missing something important, or a sign that upstream really doesn't
bother to run those tests.
Examples of this are testDiskChunk1 and testDiskChunk2, which create two
non-existent partitions and try to allocate them. Now, in
allocatePartitions(), the partedPartition attributes are reset to None,
and shortly afterwards a for loop expects them to be NOT None.
So, for now I'm disabling these tests and will see whether we stumble on
them during the work on nixpart 1.0, so we can be really sure whether
it's my fault or a real bug in blivet.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Put a copy of the old version 0.17 expression into 0.17.nix and update
the pointers from nixpart0 accordingly.
This also means that plain nixpart is now way more broken than nixpart0
(we might want to temporarily fix 0.4 anyway).
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This prevents a dependency on liblapack (which randomly fails) and
TeXlive (which is huge).
http://hydra.nixos.org/build/14897240
(cherry picked from commit b9bde98161d09496421efbaafe5219cc84d05f5b)
This tells the sad tale of @the-kenny who had bind-mounted his home
directory into a container. After doing `nixos-container destroy` he
discovered that his home directory went from "full of precious data" to
"no more data".
We want to avoid similar sad tales in the future, so this is now also
checked in the containers VM test.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Currently only tests basic resource record lookup against IPv4 and IPv6.
Nothing special yet, but probably enough for most setups.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This reverts commit 469f22d717, reversing
changes made to 0078bc5d8f.
Conflicts:
nixos/modules/installer/tools/nixos-generate-config.pl
nixos/modules/system/boot/loader/grub/install-grub.pl
nixos/release.nix
nixos/tests/installer.nix
I tried to keep apparently-safe code in conflicts.
This randomly fails with "Destination Host Unreachable". That
shouldn't happen, since all interfaces/routes should be up after
"nixos-container start" returns. Need more investigation...
So far the test only uses an authorized key that is copied over to the
target machine instead of being set by the target's configuration.
Now, we cover both cases.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Attempt to fix race condition in installer tests, especially the
grub1 test.
The latter was failing when running "parted /dev/sda ..." because
/dev/sda didn't exist yet.
The installer now asks the user to set a root password if stdin is a
tty, which doesn't work for a non-interactive test.
http://hydra.nixos.org/build/11130072
Hopefully fixes failures like:
http://hydra.nixos.org/build/10712833
This shouldn't be necessary, but it might be that the use of unionfs
is interfering with a clean shutdown.
This allows specifying rules for systemd-tmpfiles.
Also, enable systemd-tmpfiles-clean.timer so that stuff is cleaned up
automatically 15 minutes after boot and every day, *if* you have the
appropriate cleanup rules (which we don't have by default).
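A minimal sketch using the standard tmpfiles.d(5) rule syntax (path and
age are illustrative):

  systemd.tmpfiles.rules = [
    # type  path                mode  user  group  age
    "d      /var/cache/example  0755  root  root   10d"
  ];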
You can now run a test in the nixos/tests directory directly using
nix-build, e.g.
$ nix-build '<nixos/tests/login.nix>' -A test
This gets rid of having to add the test to nixos/tests/default.nix.
(Of course, you still need to add it to nixos/release.nix if you want
Hydra to run the test.)
Uses standard NixOS user config merging.
Work in progress: The slave config does not actually start the slave agent; it just configures a
jenkins user if required. This is the bare minimum to enable a nice jenkins SSH slave.
By default the jenkins server is executed under the user "jenkins", which can be configured using
the users.jenkins.* options. If a different user is requested by changing services.jenkins.user,
then none of the users.jenkins options apply.
This patch does not include jenkins slave configuration. Some config options will probably change
when this is implemented.
Aspects like the user and environment are typically identical between slave and master, while the
service configs differ. The design is for users.jenkins to cover the shared aspects, while
services.jenkins and services.jenkins-slave cover the master- and slave-specific aspects,
respectively.
Another option would be to place everything under services.jenkins and have a config that selects
master vs slave.
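A sketch of the intended split (option names taken from the description
above; the exact attributes may differ):

  users.jenkins.enable = true;      # shared user/environment aspects
  services.jenkins.enable = true;   # master-specific service config
  # services.jenkins-slave.*          slave-specific aspects (work in progress)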
Now that overriding fileSystems in qemu-vm.nix works again, it's
important that the VM tests that add additional file systems use the
same override priority. Instead of using the same magic constant
everywhere, they can now use mkVMOverride.
http://hydra.nixos.org/build/6695561
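For example, a test that adds an extra file system would now write
something like this (a sketch; the exact fileSystems syntax of the time
is assumed):

  fileSystems = lib.mkVMOverride [
    { mountPoint = "/data"; device = "/dev/vdb1"; fsType = "ext4"; }
  ];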
Those tests are flaky and to some degree redundant, as two
configurations are tested in NixOps as well. So, let's deactivate them
until the 1.0 release of nixpart, which has a more general approach to
automatically partitioning NixOS installations.
automatically partitioning NixOS installations.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>