- Add serve.enable option, which configures uwsgi and nginx to serve
the mailman-web application (see the sketch after this list);
- Configure services to log to the journal, where possible. Mailman
Core does not provide any options for this, but will now log to
/var/log/mailman;
- Use a unified python environment for all components, with an
extraPackages option to allow use of postgres support and similar;
- Configure mailman's postfix module such that it can generate the
domain and lmtp maps;
- Fix formatting for option examples;
- Provide a mailman-web user to run the uwsgi service by default;
- Refactor Hyperkitty's periodic jobs to reduce repetition in the
expressions;
- Remove service dependencies not related to functionality included in
the module, such as httpd -- these should be configured in user config
when used;
- Move static files root to /var/lib/mailman-web-static by default. This avoids
permission issues when a static file web server attempts to access
/var/lib/mailman which is private to mailman. The location can still
be changed by setting services.mailman.webSettings.STATIC_ROOT;
- Remove the webRoot option, which seems to have been included by
accident, being an unsuitable directory for serving via HTTP.
- Rename mailman-web.service to mailman-web-setup.service, since it
doesn't actually serve mailman-web. There is now a
mailman-uwsgi.service if serve.enable is set to true.
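A minimal sketch of how these options might be used (a hedged example; option
paths are taken from the list above, other values are placeholders):

```nix
{
  services.mailman = {
    enable = true;
    serve.enable = true;  # uwsgi + nginx serving mailman-web
    # The new default location; can still be pointed elsewhere:
    webSettings.STATIC_ROOT = "/var/lib/mailman-web-static";
  };
}
```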
Since Buildbot 0.9.0, status targets have been deprecated and are ignored.
There's only a small line on startup explaining that, and status simply
isn't reported. Spare others the same headache and do it right in the
NixOS module.
As there might have been changes in the way reporters are organized, and
configuration might need to be migrated, remove the old option entirely
rather than just providing an alias.
Previously the NixOS-specific configuration for man-db was in the
package itself and /etc/man.conf was completely ignored.
This change moves it to /etc/man_db.conf, making declarative
configuration practical again.
It's now possible to generate the mandb caches for all packages
installed through NixOS `environment.systemPackages` at build-time.
The standard location for the stateful cache (/var/cache/man) is also
configured to allow users to run `mandb` manually if they wish.
Since generating the cache can be expensive, the option is off by
default.
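For example, the build-time cache generation can be opted into like this
(a sketch; `documentation.man.generateCaches` is the option name as I
understand it):

```nix
{
  # Build mandb caches for everything in environment.systemPackages at build time.
  documentation.man.generateCaches = true;
}
```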
In /etc/sudoers, the last-matched rule will override all
previously-matched rules. Thus, make the default rule show up first (but
still allow some wiggle room for a user to `mkBefore` it), before any
user-defined rules.
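A hedged illustration of the resulting ordering (attribute names follow my
reading of the security.sudo.extraRules submodule):

```nix
{
  # Rendered after the default rule, so it wins for the commands it matches.
  security.sudo.extraRules = [
    {
      users = [ "alice" ];
      commands = [
        { command = "/run/current-system/sw/bin/systemctl"; options = [ "NOPASSWD" ]; }
      ];
    }
  ];
  # If something really must precede the default rule, lib.mkBefore still allows it:
  #   security.sudo.extraRules = lib.mkBefore [ ... ];
}
```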
Done by setting `autopilot.min_quorum = 3`.
Technically, this would have been required to keep the test correct since
Consul's "autopilot" "Dead Server Cleanup" was enabled by default (I believe
that was in Consul 0.8). Practically, the issue only occurred with our NixOS
test with releases >= `1.7.0-beta2` (see #90613). The setting itself is
available since Consul 1.6.2.
However, this setting was not documented clearly enough for anybody to notice,
and only the upstream issue https://github.com/hashicorp/consul/issues/8118
that I filed brought it to light.
As explained there, the test could also have been made to pass by applying the
more correct rolling reboot procedure
-m.wait_until_succeeds("[ $(consul members | grep -o alive | wc -l) == 5 ]")
+m.wait_until_succeeds(
+ "[ $(consul operator raft list-peers | grep true | wc -l) == 3 ]"
+)
but we also intend to test that Consul can regain consensus even if
the quorum gets temporarily broken.
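In NixOS terms the fix boils down to something like this (a sketch;
`extraConfig` is the Consul module's passthrough for raw agent settings):

```nix
{
  services.consul.extraConfig = {
    # Keep autopilot's Dead Server Cleanup from shrinking the cluster below 3 servers.
    autopilot.min_quorum = 3;
  };
}
```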
The systemd socket unit files now more precisely track the IPFS
configuration, by including any multiaddr they can make a `ListenStream`
for. (The daemon doesn't currently support anything which would use
`ListenDatagram`, so we don't need to worry about that.)
The tests use some of these features.
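The translation is roughly of this shape (a sketch, not the module's actual
code; it only handles the simple TCP case):

```nix
let
  lib = (import <nixpkgs> {}).lib;
  # /ip4/127.0.0.1/tcp/5001  ->  "127.0.0.1:5001", usable as a ListenStream value
  multiaddrToListenStream = addr:
    let parts = lib.splitString "/" addr;   # [ "" "ip4" "127.0.0.1" "tcp" "5001" ]
    in "${builtins.elemAt parts 2}:${builtins.elemAt parts 4}";
in
  multiaddrToListenStream "/ip4/127.0.0.1/tcp/5001"   # => "127.0.0.1:5001"
```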
Specifying mailboxes as a list isn't a good approach since this makes it
impossible to override values. For backwards-compatibility, it's still
possible to declare a list of mailboxes, but a deprecation warning will
be shown.
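A hedged sketch of the two forms (option and attribute names are my
assumptions about the dovecot2 module):

```nix
{
  # Deprecated list form: a later module cannot override the Trash entry.
  # services.dovecot2.mailboxes = [
  #   { name = "Trash"; auto = "create"; specialUse = "Trash"; }
  # ];

  # Attribute-set form: individual mailboxes can be overridden by name.
  services.dovecot2.mailboxes = {
    Trash = { auto = "create"; specialUse = "Trash"; };
  };
}
```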
VirtualBox recommends VMSVGA for Linux guests.
It is also currently the only adapter supporting 3D acceleration,
and it works out of the box with NixOS, including automatic screen resizing.
VMSVGA is recommended by VirtualBox for Linux guests.
Compared to VBoxVGA and VBoxSVGA it also supports 3D acceleration.
Adding the driver makes NixOS work with all three supported graphics card
types.
We need to keep the passthru.filesInstalledToEtc and passthru.defaultBlacklistedPlugins in sync with the package contents, so let's add a test to enforce that.
Since cd1dedac67 systemd-networkd has its
netlink socket created via a systemd socket unit. One might think that
this doesn't make much sense since networkd is just going to create its
own socket on startup anyway. The difference here is that we gain
configuration-time control over things like socket buffer sizes, instead
of relying on compile-time constants.
For larger setups where networkd has to create a lot of (virtual)
devices the current default buffer size of 128MB is not enough.
A good example is a machine with >100 virtual interfaces (e.g.,
wireguard tunnels, VLANs, …) that all have to be brought up during
startup. The receive buffer usage will spike due to all the messages
generated by the new interfaces. Eventually some of the messages will be
dropped since there is not enough (permitted) buffer space available.
By having networkd start with a netlink socket created by
systemd, we can configure the `ReceiveBuffer` parameter in the socket
unit options without recompiling networkd.
Since the actual memory requirements depend on hardware, timing, exact
configurations etc., it isn't currently possible to infer a good default
from within the NixOS module system. Administrators are advised to
monitor the logs of systemd-networkd for `rtnl: kernel receive buffer
overrun` spam and increase the buffer size as required.
Note: Increasing the `ReceiveBuffer` value doesn't allocate any memory. It
just increases the upper bound on the kernel side. The actual memory
allocation depends on the number of messages that are queued on the kernel
side of the netlink socket.
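For example (a sketch; the unit name is the socket unit this module creates,
and the right size is workload-dependent):

```nix
{
  # Raise the upper bound for the netlink receive buffer; memory is only
  # consumed when messages actually queue up.
  systemd.sockets.systemd-networkd.socketConfig.ReceiveBuffer = "256M";
}
```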
Reads a bit more naturally, and now the changes to the
acme-${cert}.service actually reflect what would be needed were you to
do the same in production.
e.g. "for dns-01, your service that needs the cert needs to pull in the
cert"
udev gained native support to handle FIDO security tokens, so we don't
need a module which only added the now obsolete udev rules.
Fixes: https://github.com/NixOS/nixpkgs/issues/76482
Upstream has this alias too, so that dbus activation works.
What I don't fully understand is why this would ever be useful, given
this unit is already started early in boot, even before dbus is up.
But let's just keep the behaviour similar to upstream and then ask these
questions upstream.
With this, systemd buffers netlink messages from the kernel in early boot
and passes them on to networkd for processing once it has started.
This makes sure no routing messages are missed.
Also makes an alias so that dbus can activate this unit. Upstream has
this too.
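In NixOS module terms the alias amounts to roughly this (a sketch; the alias
name is the one upstream ships):

```nix
{
  systemd.services.systemd-networkd.aliases = [ "dbus-org.freedesktop.network1.service" ];
}
```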
This will make dbus socket activation for it work.
When `systemd-resolved` is restarted, DNS lookups would otherwise be
briefly unavailable. You're supposed to use dbus socket activation to
buffer resolve requests, so that restarts happen without downtime.
This makes it possible to only start IPFS when needed. So a user’s
IPFS daemon only starts when they actually use it.
A few important warnings though:
- This probably shouldn’t be mixed with services.ipfs.autoMount,
  since the /ipfs and /ipns mounts aren’t activated like this
- ipfs.socket assumes that you are using ports 5001 and 8080 for the
API and gateway respectively. We could do some parsing to figure
out what is in apiAddress and gatewayAddress, but that’s kind of
difficult given the nonstandard address format.
- Apparently this doesn’t work with the --api commands used in the tests.
Of course you can always start the daemon at boot with startWhenNeeded =
false, or just run ‘systemctl start ipfs.service’.
Tested with the following test (modified from tests/ipfs.nix):
```nix
import ./make-test-python.nix ({ pkgs, ... }: {
  name = "ipfs";
  nodes.machine = { ... }: {
    services.ipfs = {
      enable = true;
      startWhenNeeded = true;
    };
  };

  testScript = ''
    start_all()
    machine.wait_until_succeeds("ipfs id")
    ipfs_hash = machine.succeed("echo fnord | ipfs add | awk '{ print $2 }'")
    machine.succeed(f"ipfs cat /ipfs/{ipfs_hash.strip()} | grep fnord")
  '';
})
```
Fixes #90145
Update nixos/modules/services/network-filesystems/ipfs.nix
Co-authored-by: Florian Klink <flokli@flokli.de>
Previously we had three services for different config flavors. This is
confusing because only one instance of IPFS can run on a host / port
combination at once. So move all into ipfs.service, which contains the
configuration specified in services.ipfs.
Also remove the env wrapper and just use systemd env configuration.
This should have been done initially, as otherwise it gets awfully
awkward to boot into new generations by default.
This system-specific image wasn't expected to be long-lived, thus why it
didn't end up being polished much.
Reality shows us we may be stuck with it for a bit longer, so let's make
it easier to use for new users.
The way this is invoked by the Raspberry Pi 4 image builder means the
`-e` from the shebang is not in effect.
In turn, the build fails during cross-compilation. The wrong coreutils
ends up being used, but this is not made apparent.
The issue I faced is already fixed on master, but this ensures no one
ends up with a failed build "succeeding".
The default `undervolt` package does not accept floating point numbers for any of its numeric
arguments. This also documents the units in which the values are expressed.
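A hedged example of what integer-valued settings look like (option names are
my assumption; offsets are whole millivolts, negative meaning undervolt):

```nix
{
  services.undervolt = {
    enable = true;
    coreOffset = -50;  # mV
    gpuOffset = -50;   # mV
  };
}
```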
The setgid bit is currently required for offline enqueuing, and
unfortunately smtpctl is currently not split from sendmail, so there's
little way around it.
The OC_PASS environment variable can be used to create a user with
`occ user:add --password-from-env`. It is currently not possible to
use `nextcloud-occ` to non-interactively create a user, since this
variable is stripped by sudo.
This switches the unit to Restart=on-failure and switches the CPU policy
to fifo (the daemon tries to do that itself, but is denied permission).
Also add the package to $PATH to be able to use fs_cli easily.
It's pbPort, and it also accepts a connection string, meaning
listening on localhost only is also possible. Provide an alias for the old
option name, so old configs still work.
This patch was done by curro:
The generated /etc/pam.d/* service files invoke the pam_systemd.so
session module before pam_mount.so, if both are enabled (e.g. via
security.pam.services.foo.startSession and
security.pam.services.foo.pamMount respectively).
This doesn't work in the most common scenario where the user's home
directory is stored in a pam-mounted encrypted volume (because systemd
will fail to access the user's systemd configuration).
This fixes a regression from 993baa587c which requires
networking.hostName to be a valid DNS label [0].
Unfortunately we missed the fact that the hostname may also be empty,
if the user wants to obtain it from a DHCP server. This is even required
by a few modules/images (e.g. Amazon EC2, Azure, and Google Compute).
[0]: https://github.com/NixOS/nixpkgs/pull/76542#issuecomment-638138666
Fixes this warning at ibus-daemon startup:
(ibus-dconf:15691): dconf-WARNING **: 21:49:24.018: unable to open file '/etc/dconf/db/ibus': Failed to open file '/etc/dconf/db/ibus': open() failed: No such file or directory; expect degraded performance
xchg is advertised as a bidirectional exchange dir, but file content
transfer from host to VM fails due to caching:
If a file is read in the VM and then modified on the host, subsequent
re-reads in the VM can yield old, cached data.
This is caused by the use of 9p's cache=loose mode that is explicitly
meant for read-only mounts.
9p doesn't provide any suitable cache modes, so fix this by disabling
caching.
Also, remove a now unnecessary sync in the test driver.
This effectively disables nscd's built-in hosts cache, which turns out
to be erratic in some cases.
We only use nscd these days as a more ABI-neutral NSS dispatcher
mechanism.
Local caching should still be possible with local resolvers in
/etc/resolv.conf (via the `dns` NSS module), or without local resolvers
via systemd-resolved (via the `resolve` NSS module).
We don't set enable-cache to no due to
https://github.com/NixOS/nixpkgs/pull/50316#discussion_r241035226.
Refactor the systemd service definition for the haproxy reverse proxy,
using the upstream systemd service definition. This allows the service
to be reloaded on changes, preserving existing server state, and adds
some hardening options.
These syncs were meant to transfer host filesystem changes to the VM,
but they have no effect because 1) syncing in the VM can't possibly pull
in host data and 2) 9p accesses the host filesystem through its cache
layer anyway, so even syncing on the host would have no effect in the
VM.
The `networking.interfaces.<name?>.proxyARP` option previously mentioned it would also enable IPv6 forwarding and `proxy_ndp`.
However, the `proxy_ndp` option was never actually set (the non-existing `net.ipv6.conf.proxy_arp` sysctl was set
instead). In addition, `proxy_ndp` also needs individual entries for each IP address to proxy for (sketched below).
Proxy ARP and Proxy NDP are two different concepts, and enabling the latter
should be a conscious decision.
This commit removes the broken NDP support and stops explicitly
enabling IPv6 forwarding (which is the default in most cases anyway).
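For reference, actually proxying NDP would require both the per-interface
sysctl and an explicit neighbour proxy entry per address, along these lines
(a sketch with made-up interface and address):

```nix
{
  boot.kernel.sysctl."net.ipv6.conf.eth0.proxy_ndp" = true;
  # plus, per proxied address (e.g. from a oneshot unit):
  #   ip -6 neigh add proxy 2001:db8::1 dev eth0
}
```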
Fixes #62339.
NixOS currently has issues with setting the FQDN of a system in a way
where standard tools work. In order to help with experimentation and
avoid regressions, add a test that checks that the hostname is
reported as the user wanted it to be.
Co-authored-by: Michael Weiss <dev.primeos@gmail.com>
This fixes the output of "hostname --fqdn" (previously the domain name
was not appended). Additionally it's now possible to use the FQDN.
This works by unconditionally adding two entries to /etc/hosts:
127.0.0.1 localhost
::1 localhost
These are the first two entries and therefore gethostbyaddr() will
always resolve "127.0.0.1" and "::1" back to "localhost" [0].
This works because nscd (or rather the nss-files module) returns the
first matching row from /etc/hosts (and ignores the rest).
The FQDN and hostname entries are appended later to /etc/hosts, e.g.:
127.0.0.2 nixos-unstable.test.tld nixos-unstable
::1 nixos-unstable.test.tld nixos-unstable
Note: We use 127.0.0.2 here to follow nss-myhostname (systemd) as close
as possible. This has the advantage that 127.0.0.2 can be resolved back
to the FQDN but also the drawback that applications that only listen to
127.0.0.1 (and not additionally ::1) cannot be reached via the FQDN.
If you would like this to work you can use the following configuration:
```nix
networking.hosts."127.0.0.1" = [
  "${config.networking.hostName}.${config.networking.domain}"
  config.networking.hostName
];
```
Therefore gethostbyname() resolves "nixos-unstable" to the FQDN
(canonical name): "nixos-unstable.test.tld".
Advantages over the previous behaviour:
- The FQDN will now also be resolved correctly (the entry was missing).
- E.g. the command "hostname --fqdn" will now work as expected.
Drawbacks:
- Overrides entries from DNS (an issue if e.g. $FQDN should resolve
  to the public IP address instead of a loopback address)
- Note: This was already partly an issue as there's an entry for
$HOSTNAME (without the domain part) that resolves to
127.0.1.1 (!= 127.0.0.1).
- Unknown (could potentially cause other unexpected issues, but special
care was taken).
[0]: Some applications do apparently depend on this behaviour (see
c578924) and this is typically the expected behaviour.
Co-authored-by: Florian Klink <flokli@flokli.de>
- Update the default pause image
- Set the cgroup manager to systemd
- Enable `manage_ns_lifecycle` instead of the deprecated
`manage_network_ns_lifecycle` option
Signed-off-by: Sascha Grunert <sgrunert@suse.com>