Having a `disks` object containing a dictionary of all the disks and
their properties makes it easier to process multi-disk images.
Note that `label` is renamed to `system_label` because `$label`
is something of a special token to jq.
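A rough sketch of the per-image metadata this produces, before it is
serialized to JSON for consumers like jq; the property names other than
`system_label` are illustrative, not the exact schema:

```nix
# Illustrative only: a disks dictionary keyed by disk name, with per-disk
# properties. "label" is emitted as "system_label" so jq scripts can refer
# to it without colliding with jq's own $label / label syntax.
builtins.toJSON {
  disks = {
    root = {
      logical_bytes = "2147483648";        # hypothetical property
      file = "nixos-amazon-image.vhd";     # hypothetical property
      system_label = "nixos";
    };
  };
}
```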
Introduce an AWS EC2 AMI which supports aarch64 and x86_64 with a ZFS
root.
This uses `make-zfs-image`, which implies two EBS volumes are needed
inside EC2: one for boot, one for root. It should not matter which
is identified as `xvda` and which as `xvdb`, though I have always
uploaded `boot` as `xvda`.
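A minimal sketch of the filesystem layout such an image implies, assuming
a small ext4 boot volume and a ZFS root pool; the labels and pool/dataset
names below are assumptions, not the exact output of `make-zfs-image`:

```nix
{
  # Boot volume (in practice uploaded as xvda): a small ext4 /boot.
  fileSystems."/boot" = {
    device = "/dev/disk/by-label/boot";  # label is an assumption
    fsType = "ext4";
  };

  # Root volume (xvdb): a ZFS pool holding the root dataset.
  fileSystems."/" = {
    device = "zpool/root";               # pool/dataset name is an assumption
    fsType = "zfs";
  };
}
```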
For reasons we haven't been able to work out, the aarch64 EC2 image now
regularly exceeds the output size limit on hydra.nixos.org. As a
workaround, set this back to being statically sized again.
The other images do seem to build - it's just a case of the EC2 image
now being too large (occasionally non-deterministically).
As a temporary workaround for #120473 while the image builder is patched
to correctly look up disk sizes, partially revert
f3aa040bcb for EC2 disk images only.
We retain the type allowing "auto" but set the default back to the
previous value.
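A sketch of what retaining the "auto"-capable type with a fixed default
might look like; the option path, exact type, and numeric value are
illustrative assumptions, not a quote of the module:

```nix
{ lib, ... }:
{
  options.amazonImage.sizeMB = lib.mkOption {
    # The type still admits "auto", but the default is a fixed size again.
    type = with lib.types; either (enum [ "auto" ]) int;
    default = 3072;  # illustrative value standing in for the previous default
    description = "EC2 disk image size in MB, or \"auto\" to compute it.";
  };
}
```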
Because this script enables `set -u`, when no arguments are provided bash
exits with the error:
    $1: unbound variable
instead of printing the helpful usage message.
NixOS 20.03 is built on kernel 5.4 and 19.09 is on 4.19, so we should
update this option to the highest value possible, per the linked upstream
instructions from Amazon.
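Assuming the option in question is the NVMe I/O timeout that Amazon
documents for EBS volumes (the text above does not name it, so treat this
as an assumption), the setting would look roughly like:

```nix
{
  # Sketch only: 4294967295 is the 32-bit maximum the kernel accepts for
  # nvme_core.io_timeout; Amazon's guidance is to raise it so EBS I/O is
  # retried rather than aborted prematurely.
  boot.kernelParams = [ "nvme_core.io_timeout=4294967295" ];
}
```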
For the case of blkfront drives, there appears to be no difference
between /dev/sda1 and /dev/xvda: the drive always appears as the
kernel device /dev/xvda.
For the case of nvme drives, the root device typically appears as
/dev/nvme0n1. Amazon provides the 'ec2-utils' package for their
first-party Linux ("Amazon Linux"), which configures udev to create
symlinks from the provided name to the nvme device name. This name is
communicated through the NVMe "Identify Controller" response, which can
be inspected with:
nvme id-ctrl --raw-binary /dev/nvme0n1 | cut -c3073-3104 | hexdump -C
On Amazon Linux, where the device is attached as "/dev/xvda", this
creates:
- /dev/xvda -> nvme0n1
- /dev/xvda1 -> nvme0n1p1
On NixOS, where the device is attached as "/dev/sda1", this creates:
- /dev/sda1 -> nvme0n1
- /dev/sda11 -> nvme0n1p1
This is odd, but not inherently a problem.
NixOS unconditionally configures grub to install to `/dev/xvda`, which
fails on an instance using nvme storage. With the root device name set
to xvda, both blkfront and nvme drives are accessible as /dev/xvda,
either directly or by symlink.
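For reference, the relevant GRUB configuration is roughly the following
sketch; attaching the root volume as "xvda" keeps it valid on both
blkfront and nvme instances:

```nix
{
  # The EC2 profile installs GRUB to a fixed device node; with the root
  # volume attached as "xvda", this node exists either directly (blkfront)
  # or via the udev-provided symlink described above (nvme).
  boot.loader.grub.device = "/dev/xvda";
}
```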
Since 34234dcb51, for resize2fs to be automatically included in the
initrd, a filesystem needed for boot must be explicitly defined as an
ext* type filesystem.
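A sketch of the kind of explicit declaration this requires; the label and
mount point are illustrative:

```nix
{
  fileSystems."/" = {
    device = "/dev/disk/by-label/nixos";  # illustrative label
    fsType = "ext4";      # must be an explicit ext* type, not left unset
    autoResize = true;    # together with the ext* type, this pulls
                          # resize2fs into the initrd
  };
}
```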
CloudStack images simply use cloud-init. They are not headless, as a
user usually has access to a console. Otherwise, the differences from
OpenStack are mostly handled by cloud-init.
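A minimal sketch of what such a profile might amount to, assuming the
standard NixOS module options; the final profile may set more than this:

```nix
{
  # Assumed options: cloud-init handles SSH keys, passwords and hostname;
  # a console remains available, so the image is not headless.
  services.cloud-init.enable = true;
  services.openssh.enable = true;
}
```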
There are still some minor issues. Notably, there is no non-root user.
Other cloud images usually come with a user named after the
distribution who has sudo access. Would it make sense for NixOS to do
the same?
CloudStack gives the user the ability to change the password.
Cloud-init support for this is imperfect, and the set-passwords module
should be declared as `- [set-passwords, always]` for this to work. I
don't know if there is an easy way to "patch" the default cloud-init
configuration. However, without a non-root user, this is of no use.
Similarly, the hostname is usually set through cloud-init using the
`set_hostname` and `update_hostname` modules. While the patch declaring
NixOS to cloud-init contains some code to set the hostname, the
previously mentioned modules are not enabled.