This reverts commit 764c8203b8.
While this is desirable in principle, some of our modules and services
fail during startup if no network is available, and don't currently set
Wants=network-online.target properly.
If nothing pulls in this target anymore, systemd won't try to reach it.
We have many VM tests waiting for `network-online.target`, which after
764c8203b8 fail with the following error
message:
```
error: unit "network-online.target" is inactive and there are no pending jobs
```
Most likely, test scripts shouldn't wait for `network-online.target` in
the first place (as `network-online.target` says nothing about whether a
service has been started), but instead, the script should wait for the
network ports of the corresponding service to be open.
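For instance, a test script along these lines (a sketch; nginx and port
80 stand in for whatever service the test actually exercises) avoids the
dependency on `network-online.target` entirely:
```nix
testScript = ''
  # Wait for the service itself and its port, not network-online.target:
  machine.wait_for_unit("nginx.service")
  machine.wait_for_open_port(80)
'';
```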
Let's revert this for now, and re-apply in a draft PR, fixing the tests
before merging it back in.
This follows upstream's change in documentation. While the `[DHCP]`
section might still work, it is undocumented, and we should probably not
be using it anymore. Users can upgrade to the new option without much
hassle.
I had to create a bit of custom module deprecation code since the usual
approach doesn't support wildcards in the path.
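Roughly, the idea looks like this (a sketch with a made-up warning
message; `mkRenamedOptionModule` cannot express a wildcard path like
`systemd.network.networks.*.dhcpConfig`):
```nix
{ config, lib, ... }: {
  # Emit one warning per network that still uses the legacy option,
  # assuming the legacy dhcpConfig defaults to { }:
  warnings = lib.concatLists (lib.mapAttrsToList
    (name: net: lib.optional (net.dhcpConfig != { })
      "systemd.network.networks.${name}.dhcpConfig is deprecated, please use dhcpV4Config/dhcpV6Config instead")
    config.systemd.network.networks);
}
```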
You can now specify options for the `[DHCPv6]` section with
`systemd.network.networks.<name>.dhcpV6Config.…`. Previously you could
only use the combined legacy DHCP configuration.
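For example (a sketch; the network name, match, and values are
illustrative):
```nix
systemd.network.networks."40-uplink" = {
  matchConfig.Name = "eth0";
  networkConfig.DHCP = "yes";
  # Rendered into the [DHCPv6] section of the generated .network file:
  dhcpV6Config.UseDNS = "yes";
};
```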
Systemd upstream has deprecated CriticalConnection with v244 in favor of
KeepConfiguration, as that seems to be more flexible:

> The CriticalConnection= setting in .network files is now deprecated,
> and replaced by a new KeepConfiguration= setting which allows more
> detailed configuration of the IP configuration to keep in place.
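In a networkd-based configuration, the migration is roughly a one-line
change (a sketch; the interface name and values are placeholders):
```nix
systemd.network.networks."10-dhcp" = {
  matchConfig.Name = "eth0";
  networkConfig = {
    DHCP = "yes";
    # Previously: CriticalConnection = "yes";
    KeepConfiguration = "dhcp-on-stop";
  };
};
```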
Not all systems need to be online to boot up. So, don’t pull
network-online.target into multi-user.target. Services that need an
online network can still require it.
This reduces my boot time from ~9s to ~5s.
1d61efb7f1 accidentally changed the
restartTriggers of `systemd-networkd.service` to point to the attribute
name (in this case, a location relative to `/etc`), instead of the
location of the network-related unit files in the nix store.
This caused systemd-networkd to not get restarted on activation of new
networking config, if the file name hasn't changed.
Fix this by pointing the triggers back to the location in the nix store.
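The shape of the fix, sketched with hypothetical names (`unitFiles`
mapping unit names to entries whose `source` is the store path):
```nix
# Trigger restarts on the store paths of the generated unit files,
# not on their attribute names under /etc:
systemd.services.systemd-networkd.restartTriggers =
  map (unit: unit.source) (lib.attrValues unitFiles);
```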
systemd-tmpfiles loads all files in lexicographic order and ignores rules
for the same path in later files with a warning. Since we apply the default
rules provided by systemd, we should load user-defined rules first so users
have a chance to override the defaults.
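With user rules loaded first, a configuration like this can override the
`/tmp` rule systemd ships in `tmp.conf` (values are illustrative):
```nix
systemd.tmpfiles.rules = [
  # Age /tmp after 7 days instead of the upstream default; the later
  # duplicate rule from systemd is then ignored with a warning.
  "d /tmp 1777 root root 7d"
];
```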
In contrast to `.service` units, it's not possible to declare an
`overrides.conf` for `.nspawn` units; however, `generateUnits` creates
one for them as well. This breaks the build if you have two derivations
configuring the same nspawn unit.
This will happen in a case like this:
```nix
{ pkgs, ... }: {
  systemd.packages = [
    (pkgs.writeTextDir "etc/systemd/nspawn/container0.nspawn" ''
      [Files]
      Bind=/tmp
    '')
  ];

  systemd.nspawn.container0 = {
    /* ... */
  };
}
```
Dropbear lags behind OpenSSH significantly in support for modern
key formats like `ssh-ed25519`, let alone the recently introduced
U2F/FIDO2-based `sk-ssh-ed25519@openssh.com` (as I found when I switched
my `authorizedKeys` over to it and promptly locked myself out of my
server's initrd SSH, breaking reboots), and in security features like
multiprocess isolation. Using the same SSH daemon for stage-1 and
the main system ensures key formats will always remain compatible, as
well as more conveniently allowing the sharing of configuration and
host keys.
The main reason to use Dropbear over OpenSSH would be initrd space
concerns, but NixOS initrds are already large (17 MiB currently on my
server), and the size difference between the two isn't huge (the test's
initrd goes from 9.7 MiB to 12 MiB with this change). If the size is
still a problem, then it would be easy to shrink sshd down to a few
hundred kilobytes by using an initrd-specific build that uses musl and
disables things like Kerberos support.
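Such a build could look roughly like this (a sketch; the
`boot.initrd.network.ssh.package` option path is hypothetical, while
`pkgsMusl` and openssh's `withKerberos` flag exist in nixpkgs):
```nix
let
  # Smaller sshd: musl-based build with Kerberos support disabled.
  smallSshd = pkgs.pkgsMusl.openssh.override { withKerberos = false; };
in {
  # Hypothetical option path, shown only to illustrate the idea.
  boot.initrd.network.ssh.package = smallSshd;
}
```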
This passes the test and works on my server, but more rigorous testing
and review from people who use initrd SSH would be appreciated!
`$toplevel/system` of a system closure with an `x86_64` kernel and `i686` userland should contain "x86_64-linux".
If `$toplevel/system` contains "i686-linux", the closure will be run using `qemu-system-i386`, which is able to run an `x86_64` kernel on most Intel CPUs, but fails on AMD.
So this fix is for the rare case of an `x86_64` kernel + `i686` userland + an AMD CPU.
This mirrors the behaviour of systemd: it's udev that parses `.link`
files, not `systemd-networkd`.
This was originally applied in 36ef112a47,
but was reverted due to 1115959a8d causing
evaluation errors on Hydra.
...even when networkd is disabled
This reverts commit ce78f3ac70, reversing
changes made to dc34da0755.
I'm sorry; Hydra has been unable to evaluate, always returning
> error: unexpected EOF reading a line
and I've been unable to reproduce the problem locally. Bisecting
pointed to this merge, but I still can't see what exactly was wrong.
This is to facilitate units that should _only_ be manually started and
not activated when a configuration is switched to.
More specifically, this is to be used by the new NixOps deploy-*
targets created in https://github.com/NixOS/nixops/pull/1245, which are
triggered by NixOps before/after switch-to-configuration is called.
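A minimal sketch of such a unit (the name is hypothetical; the point is
the absence of `wantedBy`/`requiredBy`, so nothing pulls it in during
activation):
```nix
systemd.targets.deploy-pre-switch = {
  description = "Hooks run by the deploy tool before switching";
  # No wantedBy/requiredBy: only an explicit `systemctl start`
  # (e.g. from NixOps) ever activates this target.
};
```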
This avoids a possible surprise if the user is using `nixpkgs.system`
and `nesting.children`. `nesting.children` is expected to ignore all
parent configuration so we shouldn't propagate the user-facing option
`nixpkgs.system`. To avoid doing so, we introduce a new internal
option for holding the value passed to eval-config.nix, and use that
when recursing for nesting.
The current behavior lets `system` default to
`builtins.currentSystem`. The system value specified to
`eval-config.nix` has very low precedence, so this should compose
properly.
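To illustrate the surprise being avoided (hypothetical values):
```nix
{
  nixpkgs.system = "i686-linux";  # user-facing option on the parent
  # Children are expected to ignore parent configuration entirely, so
  # they should not silently inherit the i686 system set above.
  nesting.children = [
    ({ ... }: { boot.loader.grub.device = "/dev/sda"; })
  ];
}
```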
Fixes #80806
Depending on the network management backend being used, if the
interface configuration from stage 1 is not cleared, some old addresses
or routes from stage 1 might still be present in stage 2 after network
configuration has finished.
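Conceptually, the cleanup amounts to something like this (a heavily
hedged sketch; the real change hooks into stage 2 directly, and the
interface name is a placeholder):
```nix
boot.postBootCommands = ''
  # Drop addresses and routes configured by the initramfs so the
  # stage-2 network backend starts from a clean slate:
  ip -4 address flush dev eth0
  ip -4 route flush dev eth0
'';
```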