The default has been unchanged for a decade. Disk space is cheaper now,
and software has grown to match. Let's not make our testing harder
than necessary by default.
qemu_kvm is only built for one architecture, so it's smaller and takes
much less time to build if it has to be built from source. And this
module doesn't support running a VM for one architecture from another
architecture, so that one architecture is all we'll need.
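For illustration, a configuration can select it explicitly (a sketch;
`virtualisation.qemu.package` is assumed to be the option this module exposes):

```nix
# Sketch: point the VM runner at the single-architecture QEMU build.
{ pkgs, ... }: {
  virtualisation.qemu.package = pkgs.qemu_kvm;
}
```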
pathsInNixDB isn't a very accurate name when a Nix store image is
built (virtualisation.useNixStoreImage); rename it to additionalPaths,
which should be general enough to cover both cases.
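A sketch of the renamed option in use (the package is illustrative):

```nix
# Make an extra store path reachable in the guest, regardless of whether
# the store comes in via 9p or via a built store image.
{ pkgs, ... }: {
  virtualisation.additionalPaths = [ pkgs.hello ];
}
```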
Add the `useNixStoreImage` option, allowing a disk image with the
necessary contents from the Nix store to be built using
make-disk-image.nix. The image will be mounted at `/nix/store` and
acts as a drop-in replacement for the usual 9p mounting of the host's
Nix store.
This removes the performance penalty of 9p, drastically improving
execution speed of applications which do lots of reads from the Nix
store. The caveats are increased disk space usage and image build
time.
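A minimal sketch of opting in:

```nix
# Trade disk space and image build time for faster Nix store reads.
{ ... }: {
  virtualisation.useNixStoreImage = true;
}
```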
The headless option broke with 7d8b303e3f
because the path /bin/vmware-user-suid-wrapper does not exist in the
headless variant of the open-vm-tools package.
Since the vmblock FUSE mount and vmware-user-suid-wrapper seem to be
used only for shared folders and drag and drop, the vmware-guest module
should not set them up when configured as headless.
Move all `virtualisation.libvirtd.qemu*` options to a
`virtualisation.libvirtd.qemu` submodule.
Also, for consistency, add `virtualisation.libvirtd.qemu.swtpm.package`
(the only new option added in this refactor).
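A sketch of the resulting layout; the submodule option names are assumed
to mirror the old flat `virtualisation.libvirtd.qemu*` options:

```nix
{ pkgs, ... }: {
  virtualisation.libvirtd = {
    enable = true;
    qemu = {
      runAsRoot = false;          # was virtualisation.libvirtd.qemuRunAsRoot
      swtpm.enable = true;        # was virtualisation.libvirtd.qemuSwtpm
      swtpm.package = pkgs.swtpm; # the one option new in this refactor
    };
  };
}
```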
I realized quite recently that running a test VM - as documented in the
manual - like
QEMU_NET_OPTS='hostfwd=tcp::8080-:80' ./result/bin/nixos-run-vms
doesn't work anymore on `master`. After bisecting I realized that the
introduction of a forward-port option[1] is the problem since it adds a
trailing comma even if no forwarding options are specified via
`virtualisation.forwardPorts`. In that case, the networking options
would look like `-netdev user,id=user.0,,hostfwd=tcp::8080-:80`, which
confused QEMU, and thus the VM refused to start.
Now, the trailing comma is only added if additional port forwards are
specified declaratively.
[1] b8bfc81d5b
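For reference, the declarative route that exercises the comma handling
looks like this (a sketch using the documented option shape):

```nix
# Declarative equivalent of QEMU_NET_OPTS='hostfwd=tcp::8080-:80'; with
# this fix, the generated -netdev string carries exactly one comma here.
{ ... }: {
  virtualisation.forwardPorts = [
    { from = "host"; host.port = 8080; guest.port = 80; }
  ];
}
```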
The current name is misleading: it doesn't contain CLI arguments,
but several constants and utility functions related to QEMU.
This commit also removes the use of `with import ...` for clarity.
Introduce an AWS EC2 AMI which supports aarch64 and x86_64 with a ZFS
root.
This uses `make-zfs-image` which implies two EBS volumes are needed
inside EC2, one for boot, one for root. It should not matter which
is identified as `xvda` and which as `xvdb`, though I have always
uploaded `boot` as `xvda`.
iptables is currently defined in `all-packages.nix` to be
iptables-compat. That package, however, does not contain `ethertypes`;
only `iptables-nftables-compat` contains this file, so the symlink
dangles.
Both networking.nat.enable and virtualisation.docker.enable now want to
make sure that the IP forwarding sysctl is enabled, but the module
system complains when both modules define the same sysctl.
Realistically this should be refactored a bit, so that the Docker module
automatically enables the NAT module instead, but this is a more obvious
fix.
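One shape such a fix can take, sketched here under the assumption that
both modules want the same value, is to let one definition yield:

```nix
# Sketch only, not necessarily the exact fix applied here: a yielding
# definition lets the other module set the same sysctl without conflict.
{ lib, ... }: {
  boot.kernel.sysctl."net.ipv4.conf.all.forwarding" = lib.mkDefault true;
}
```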
Changed the startup timeout from 15 seconds to one minute, as 15 seconds
is really low. Also, it's currently not possible to change it without
editing your system configuration.
This is analogous to #70447 and #76487.
These are all needed to attach a container to the default bridge
network; without them, the final line of the following script fails with
the error listed below for each respective kernel module (a sketch for
loading them follows the error listing).
```sh
lxc storage create foo dir
lxc launch -s foo ubuntu:trusty bar
lxc network attach lxdbr0 bar
```
veth
----
> Error: Failed to start device 'lxdbr0': Failed to create the veth interfaces vethefbc3cd6 and vetha4abbcbc: Failed to run: ip link add dev vethefbc3cd6 type veth peer name vetha4abbcbc: RTNETLINK answers: Operation not supported
iptable_mangle
--------------
> lvl=eror msg="Failed to bring up network" err="Failed to list ipv4 rules for LXD network lxdbr0 (table mangle)" name=lxdbr0
xt_comment
----------
> lvl=error msg="Failed to bring up network" err="Failed to run: iptables -w -t filter -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT -m comment --comment generated for LXD network lxdbr0: iptables v1.8.4 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information." name=lxdbr0
xt_MASQUERADE
-------------
> lvl=eror msg="Failed to bring up network" err="Failed to run: iptables -w -t nat -I POSTROUTING -s 10.0.107.0/24 ! -d 10.0.107.0/24 -j MASQUERADE -m comment --comment generated for LXD network lxdbr0: iptables v1.8.4 (legacy): Couldn't load target `MASQUERADE':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information." name=lxdbr0
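The sketch referenced above, loading all four modules on an LXD host:

```nix
{ ... }: {
  boot.kernelModules = [ "veth" "iptable_mangle" "xt_comment" "xt_MASQUERADE" ];
}
```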
systemd-nspawn can react to SIGTERM and send a shutdown signal to the container
init process. Use that instead of going through D-Bus and machined to request
that nspawn send the signal, since during host shutdown machined or D-Bus may
have gone away by the time a container unit is stopped.
To solve the issue that a container that is still starting cannot be stopped
cleanly, we must also handle this signal in containerInit/stage-2.
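A rough sketch of the host-side half; the unit name and kill settings
are assumptions, not the exact change:

```nix
# Have systemd deliver SIGTERM straight to nspawn instead of an ExecStop
# that talks to machined.
{ ... }: {
  systemd.services."container@".serviceConfig = {
    KillMode = "mixed";     # SIGTERM goes to the main (nspawn) process first
    KillSignal = "SIGTERM"; # nspawn turns this into a container shutdown
  };
}
```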
At the moment, it's not possible to override the libvirtd package used
without supplying a nixpkgs overlay. Adding a package option makes
libvirtd more consistent and allows enabling e.g. ceph and iSCSI support
more easily.
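A sketch of what this enables, assuming pkgs.libvirt exposes these
override flags:

```nix
{ pkgs, ... }: {
  virtualisation.libvirtd.package = pkgs.libvirt.override {
    enableCeph = true;
    enableIscsi = true;
  };
}
```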
Xen is now enabled unconditionally on kernels that support it, so the
xen_dom0 feature doesn't do anything. The isXen attribute will now
produce a deprecation warning and unconditionally return true.
Passing in a custom value for isXen is no longer supported.
This dependency was added in 65eae4d, when NixOS switched to
systemd, as a substitute for the previous udevtrigger, and hasn't been
touched since. It's probably unneeded, as the upstream unit[1] doesn't
do it and I haven't found any mention of a problem in the NixOS or
upstream issue trackers.
[1]: https://gitlab.com/libvirt/libvirt/-/blob/master/src/remote/libvirtd.service.in
Related to #85746, which addresses the documentation issue. Digging
deeper for a reason why this was disabled: it simply wasn't working,
which is no longer the case.
- Actually use the zfsSupport option
- Add documentation URI to lxd.service
- Add lxd.socket to enable socket activation
- Add proper dependencies and remove systemd-udev-settle from lxd.service
- Set up /var/lib/lxc/rootfs using systemd.tmpfiles
- Configure safe start and shutdown of lxd.service
- Configure restart on failures of lxd.service
Launching a container with a private network requires creating a
dedicated networking interface for it; the name of that interface is
derived from the container name itself - e.g. a container named `foo`
gets attached to an interface named `ve-foo`.
An interface name can span up to IFNAMSIZ characters, which means that a
container name must contain at most IFNAMSIZ - 3 - 1 = 11 characters;
it's a limit that we validate using a build-time assertion.
This limit has been lifted with Linux 5.8, which allows an interface to
carry a so-called altname that can be much longer, while still being
treated as a first-class citizen.
Since altnames have been supported natively by systemd for a while now,
due diligence on our side ends with dropping the name-assertion on newer
kernels.
This commit closes #38509.
See also: systemd/systemd#14467, systemd/systemd#17220,
https://lwn.net/Articles/794289/
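A sketch of what the conditional assertion could look like; the option
paths and the cutoff check are assumptions, not the exact code:

```nix
# Sketch only; the real logic lives in the containers module.
{ lib, config, ... }:
let
  mkNameAssertion = name: {
    assertion =
      lib.versionAtLeast config.boot.kernelPackages.kernel.version "5.8"
      || builtins.stringLength name <= 11;
    message = "Container name '${name}' is too long for a pre-5.8 kernel.";
  };
in {
  assertions = map mkNameAssertion (lib.attrNames config.containers);
}
```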
Ensure that the Hyper-V keyboard driver is available in the early
stages of the boot process. This allows the user to enter a disk
encryption passphrase or repair a boot problem in an interactive
shell.
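A sketch of the change, with the module name as an assumption (modprobe
treats '-' and '_' interchangeably):

```nix
{ ... }: {
  boot.initrd.availableKernelModules = [ "hyperv_keyboard" ];
}
```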
We now set the hooks dir correctly if the OCI hook is enabled. CRI-O
supports this specific hook since v1.20.0.
Signed-off-by: Sascha Grunert <mail@saschagrunert.de>
Reintroduce the `fetch-ssh-keys` service so that GCE images that work
with NixOps can once again be built. Also, reformat the code a bit.
The service was removed in 88570538b3,
likely due to a comment saying it should be removed. It was still
needed for images to work with NixOps, however, and probably needed to
be replaced or rewritten rather than removed.
The socketActivation option was removed, but later on socket activation
was added back without the option to disable it. The description now reflects
that socket activation is used unconditionally in the current setup.
Since the introduction of option `containers.<name>.pkgs`, the
`nixpkgs.*` options (including `nixpkgs.pkgs`, `nixpkgs.config`, ...) were always
ignored in container configs, which broke existing containers.
This was due to `containers.<name>.pkgs` having two separate effects:
(1) It sets the source for the modules that are used to evaluate the container.
(2) It sets the `pkgs` arg (`_module.args.pkgs`) that is used inside the container
modules.
This happens even when the default value of `containers.<name>.pkgs` is unchanged, in which
case the container `pkgs` arg is set to the pkgs of the host system.
Previously, the `pkgs` arg was determined by the `containers.<name>.config.nixpkgs.*` options.
This commit reverts the breaking change (2) while adding a backwards-compatible way to achieve (1).
It removes option `pkgs` and adds option `nixpkgs` which implements (1).
Existing users of `pkgs` are informed by an error message to use option
`nixpkgs`, or, to achieve only (2), to set option
`containers.<name>.config.nixpkgs.pkgs`.
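Sketched together, with placeholder paths:

```nix
{ ... }: {
  containers.foo = {
    # (1) evaluate the container's modules from another nixpkgs tree:
    nixpkgs = /path/to/nixpkgs;
    config = { ... }: {
      # (2) set the pkgs used inside the container, as `pkgs` used to:
      nixpkgs.pkgs = import /path/to/nixpkgs { };
    };
  };
}
```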
It's been 8.5 years since NixOS used mingetty, but the option was
never renamed (despite the file defining the module being renamed in
9f5051b76c ("Rename mingetty module to agetty")).
I've chosen to rename it to services.getty here, rather than
services.agetty, because getty is implementation-neutral and is also
the name of the unit that is generated.
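For reference, a sketch of how such renames are usually wired up:

```nix
{ lib, ... }: {
  imports = [
    (lib.mkRenamedOptionModule [ "services" "mingetty" ] [ "services" "getty" ])
  ];
}
```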