This follows upstream's change in documentation. While the `[DHCP]`
section might still work, it is undocumented and we should probably not
be using it anymore. Users can just upgrade to the new option without
much hassle.
I had to create a bit of custom module deprecation code since the usual
approach doesn't support wildcards in the path.
You can now specify options for the `[DHCPv6]` section with
`systemd.network.<name>.dhcpV6Config.…`. Previously you could only use
the combined legacy DHCP configuration.
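A hedged example of the new option in use (the network name and the UseDNS option are illustrative; I'm assuming the full path sits under `systemd.network.networks`):
```nix
{
  systemd.network.networks."40-wan" = {
    matchConfig.Name = "eth0";
    networkConfig.DHCP = "yes";
    # Rendered into the [DHCPv6] section of the generated .network file:
    dhcpV6Config = {
      UseDNS = true;
    };
  };
}
```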
Systemd upstream has deprecated CriticalConnection= with v244 in favor of
KeepConfiguration=, as that seems to be more flexible:
The CriticalConnection= setting in .network files is now deprecated,
and replaced by a new KeepConfiguration= setting which allows more
detailed configuration of the IP configuration to keep in place.
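For reference, a hedged sketch of the replacement setting as it would appear in a NixOS networkd config (the network name and the chosen value are illustrative; KeepConfiguration= lives in the [Network] section):
```nix
{
  systemd.network.networks."40-wan".networkConfig = {
    # Keep the DHCP-assigned address configured while the service is stopped.
    KeepConfiguration = "dhcp-on-stop";
  };
}
```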
We are leveraging the systemd sandboxing features to prevent the
service from accessing locations it shouldn't. Most notably, we prevent
the prosody service from accessing /home and provide it with a private
/dev and /tmp.
Please consult man systemd.exec for further information.
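A condensed sketch of the directives involved (the exact set applied to the unit may differ):
```nix
{
  systemd.services.prosody.serviceConfig = {
    ProtectHome = true;     # deny access to /home
    PrivateDevices = true;  # private /dev
    PrivateTmp = true;      # private /tmp
  };
}
```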
Setting up an XMPP chat server is a pretty deep rabbit hole to jump
into when you're not familiar with this whole universe. Your experience
with this environment will greatly depend on whether or not your
server implements the right set of XEPs.
To tackle this problem, the XMPP community came up with the idea of
creating a meta-XEP in charge of listing the desirable XEPs to comply
with. This meta-XEP is issued every year under a new XEP number; the
2020 one is XEP-0423[1].
This prosody NixOS module refactoring makes complying with XEP-0423
easier. All the necessary extensions are enabled by default. For some
extensions (MUC and HTTP_UPLOAD), we need some input from the user and
cannot provide a sensible default nixpkgs-wide. For those, we guide
the user using a couple of assertions explaining the remaining manual
steps to perform.
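As an illustration, the user-supplied part that the assertions point at could look roughly like this (the domains are placeholders; the submodule option names are my reading of the refactored module):
```nix
{
  services.prosody = {
    enable = true;
    admins = [ "root@example.org" ];
    # MUC and HTTP upload need user-provided (sub)domains:
    muc = [ { domain = "conference.example.org"; } ];
    uploadHttp = { domain = "upload.example.org"; };
  };
}
```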
We took advantage of this substantial refactoring to refresh the
associated NixOS test.
Changelog:
- Update the prosody package to provide the necessary community
modules in order to comply with XEP-0423. This is a tradeoff, as
depending on their configuration, the user might end up not using them
and wasting some disk space. That being said, adding those will
allow the XEP-0423 users, whom I expect to be the majority of
users, to leverage the binary cache a bit more.
- Add a muc submodule populated with the prosody muc defaults.
- Add an http_upload submodule in charge of spinning up a basic
HTTP(S) server that receives and serves the users' attachments.
- Advertise both the MUCs and the http_upload endpoints using mod disco.
- Use the slixmpp library in place of the now defunct sleekxmpp for
the prosody NixOS test.
- Update the nixos test to setup and test the MUC and http upload
features.
- Add a couple of assertions triggered if the setup is not XEP-0423
compliant.
[1] https://xmpp.org/extensions/xep-0423.html
The elasticsearch-curator was not deleting indices because the indices
had ILM policies associated with them. This is now fixed by
configuring the elasticsearch-curator with `allow_ilm_indices: true`.
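Concretely, that amounts to something along these lines in the curator action definition (shown via the module's `actionYAML` option; the option name and the surrounding filter are assumptions on my part):
```nix
{
  services.elasticsearch-curator.actionYAML = ''
    actions:
      1:
        action: delete_indices
        options:
          # Allow curator to act on indices that have an ILM policy attached.
          allow_ilm_indices: true
        filters:
        - filtertype: age
          source: creation_date
          direction: older
          unit: days
          unit_count: 30
  '';
}
```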
Also see: https://github.com/elastic/curator/issues/1490
Not all systems need to be online to boot up. So, don’t pull
network-online.target into multi-user.target. Services that need the
network to be online can still require it.
This decreases my boot time from ~9s to ~5s.
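For example, such a service can still pull the target in itself (the unit name is hypothetical):
```nix
{
  systemd.services.my-sync-job = {
    wants = [ "network-online.target" ];
    after = [ "network-online.target" ];
  };
}
```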
Rework withExtensions / buildEnv to handle currently enabled
extensions better and make them compatible with override. They now
accept a function with the named arguments enabled and all, where
enabled is a list of currently enabled extensions and all is the set
of all extensions. This gives us several nice properties:
- You always get the right version of the list of currently enabled
extensions
- Invocations chain
- It works well with overridden PHP packages - you always get the
correct versions of extensions
As a contrived example of what's possible, you can add ImageMagick,
then override the version and disable fpm, then disable cgi, and
lastly remove the zip extension like this:
{ pkgs ? (import <nixpkgs>) {} }:
with pkgs;
let
  phpWithImagick = php74.withExtensions ({ all, enabled }: enabled ++ [ all.imagick ]);
  phpWithImagickWithoutFpm743 = phpWithImagick.override {
    version = "7.4.3";
    sha256 = "wVF7pJV4+y3MZMc6Ptx21PxQfEp6xjmYFYTMfTtMbRQ=";
    fpmSupport = false;
  };
  phpWithImagickWithoutFpmZip743 = phpWithImagickWithoutFpm743.withExtensions (
    { enabled, all }:
      lib.filter (e: e != all.zip) enabled);
  phpWithImagickWithoutFpmZipCgi743 = phpWithImagickWithoutFpmZip743.override {
    cgiSupport = false;
  };
in
  phpWithImagickWithoutFpmZipCgi743
Instead of hardcoding all nss modules that are added into nsswitch,
there are now options exposed.
This allows users to add their own NSS modules (I had this issue with
winbindd, for example).
Also, NSS modules could be moved to their respective NixOS modules,
which would make the nsswitch module slimmer.
As the lists are now handled by the module system, we can use mkOrder
to ensure a proper order as well as mkForce to override one specific
database type instead of the entire file.
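A hedged sketch of what this enables, using the winbindd case as an example (the option names follow this change; the package providing the NSS plugin and the database entries are assumptions):
```nix
{ lib, pkgs, ... }:
{
  # Make the NSS plugin libraries available to glibc.
  system.nssModules = [ pkgs.samba ];  # assumed to provide libnss_winbind
  # Append "winbind" to the relevant databases; mkOrder/mkForce can be used
  # to control placement or to override a database entirely.
  system.nssDatabases.passwd = lib.mkAfter [ "winbind" ];
  system.nssDatabases.group = lib.mkAfter [ "winbind" ];
}
```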
nix build should store its temporary files on the target filesystem.
This should fix 'No space left on device' errors on systems
with a low amount of RAM when there is a need to build something
like the Linux kernel.
Fixes this warning at ibus-daemon startup:
(ibus-dconf:15691): dconf-WARNING **: 21:49:24.018: unable to open file '/etc/dconf/db/ibus': Failed to open file “/etc/dconf/db/ibus”: open() failed: No such file or directory; expect degraded performance
Fixes #85800
1d61efb7f1 accidentally changed the
restartTriggers of `datadog-agent.service` to point to the attribute
name (in this case, a location relative to `/etc`), instead of the
location of the config files in the nix store.
This caused datadog to not get restarted on activation of new
config, if the file name hasn't changed.
Fix this by pointing it back to the location in the nix store.
1d61efb7f1 accidentally changed the
restartTriggers of `systemd-networkd.service` to point to the attribute
name (in this case, a location relative to `/etc`), instead of the
location of the network-related unit files in the nix store.
This caused systemd-networkd to not get restarted on activation of new
networking config, if the file name hasn't changed.
Fix this by pointing it back to the location in the nix store.
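In both cases the fix amounts to feeding the store path, rather than the /etc-relative attribute name, into the triggers. Roughly (the file name is illustrative):
```nix
{ config, ... }:
{
  # Before the fix the trigger was effectively the /etc-relative name,
  # e.g. "systemd/network/40-wan.network", which rarely changes.
  # Pointing at the generated file in the store makes the trigger change
  # whenever the content changes:
  systemd.services.systemd-networkd.restartTriggers = [
    config.environment.etc."systemd/network/40-wan.network".source
  ];
}
```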
It currently says that everything will be backward compatible between lego and simp-le certificates, but it’s not.
(cherry picked from commit 21c4a33ceef77dec2b821f7164e13971862d5575)
What's happening now is that both cri-o and podman are creating
/etc/containers/policy.json.
By splitting out the creation of configuration files we can make the
podman module leaner & compose better with other container software.
For imports, it is better to use ‘modulesPath’ than rely on <nixpkgs>
being correctly set. Some users may not have <nixpkgs> set correctly.
In addition, when ‘pure-eval=true’, <nixpkgs> is unset.
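For example (the imported profile is only an illustration):
```nix
{ modulesPath, ... }:
{
  imports = [
    # Resolved relative to the module set in use; works with pure-eval and
    # does not depend on a <nixpkgs> search-path entry.
    (modulesPath + "/profiles/qemu-guest.nix")
    # instead of: <nixpkgs/nixos/modules/profiles/qemu-guest.nix>
  ];
}
```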
Context: discussion in https://github.com/NixOS/nixpkgs/pull/82630
Mesa has been supporting S3TC natively without requiring these libraries
since the S3TC patent expired in December 2017.
Use types.str instead of types.path to exclude private information from
the derivation.
Add a warning about the contents of acl being included in the nix
store.
Enables multi-site configurations.
This breaks compatibility with prior configurations that expect options
for a single dokuwiki instance in `services.dokuwiki`.
Shimming out the Let's Encrypt domain name to reuse client configuration
doesn't work properly (Pebble uses different endpoint URL formats), is
recommended against by upstream,[1] and is unnecessary now that the ACME
module supports specifying an ACME server. This commit changes the tests
to use the domain name acme.test instead, and renames the letsencrypt
node to acme to reflect that it has nothing to do with the ACME server
that Let's Encrypt runs. The imports are renamed for clarity:
* nixos/tests/common/{letsencrypt => acme}/{common.nix => client}
* nixos/tests/common/{letsencrypt => acme}/{default.nix => server}
The test's other domain names are also adjusted to use *.test for
consistency (and to avoid misuse of non-reserved domain names such
as standalone.com).
[1] https://github.com/letsencrypt/pebble/issues/283#issuecomment-545123242
Co-authored-by: Yegor Timoshenko <yegortimoshenko@riseup.net>
This was added in aade4e577b, but the
implementation of the ACME module has been entirely rewritten since
then, and the test seems to run fine on AArch64.
These now depend on an external patch set; add them to the release tests
to ensure that the build doesn't break silently as new kernel updates
are merged.
linux-hardened sets kernel.unprivileged_userns_clone=0 by default; see
anthraxx/linux-hardened@104f44058f.
This allows the Nix sandbox to function while reducing the attack
surface posed by user namespaces, which allow unprivileged code to
exercise lots of root-only code paths and have led to privilege
escalation vulnerabilities in the past.
We can safely leave user namespaces on for privileged users, as root
already has root privileges, but if you're not running builds on your
machine and really want to minimize the kernel attack surface then you
can set security.allowUserNamespaces to false.
Note that Chrome's sandbox requires either unprivileged CLONE_NEWUSER or
setuid, and Firefox's silently reduces the security level if it isn't
allowed (see about:support), so desktop users may want to set:
boot.kernel.sysctl."kernel.unprivileged_userns_clone" = true;
* nixos/k3s: simplify config expression
* nixos/k3s: add config assertions and trim unneeded bits
* nixos/k3s: add a test that k3s works; minor module improvements
This is a single-node test. Eventually we should also have a multi-node
test to verify the agent bit works, but that one's more involved.
* nixos/k3s: add option description
* nixos/k3s: add defaults for token/serveraddr
Now that the assertion enforces their presence, we don't need to use the type system for it (see the sketch after this list).
* nixos/k3s: remove unneeded sudo in test
* nixos/k3s: add to test list
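A minimal single-node sketch of the resulting usage, as referenced above (option names are as I recall them from the module; adjust as needed):
```nix
{
  services.k3s = {
    enable = true;
    role = "server";
    # token and serverAddr can stay at their defaults for a single-node
    # server; the assertions check for them where they are required.
  };
}
```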
After the recent rewrite, enabled extensions are passed to php programs
through an extra ini file by a wrapper. Since httpd uses the shared
module instead of the program, the wrapper did not affect it and no extensions
other than built-ins were loaded.
To fix this, we are passing the extension config another way – by adding it
to the service's generated config.
For now we are hardcoding the path to the ini file. It would be nice to add
the path to the passthru and use that once the PHP expression settles down.
Add a distinctive `unit-script` prefix to systemd unit scripts to make
them easier to find in the store directory. Do not add this prefix to
actual script file name as it clutters logs.
Current journal output from services started by `script` rather than
`ExecStart` is unreadable because the name of the file (which journalctl
records and outputs) quite literally takes 1/3 of the screen (on smaller
screens).
Make it shorter. In particular:
* Drop the `unit-script` prefix as it is not very useful.
* Use `writeShellScriptBin` to write them (see the sketch after this list) because:
* It has a `checkPhase` which is better than no checkPhase.
* The script itself ends up having a short name.
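For illustration, a sketch of what `writeShellScriptBin` buys us (script name and contents are hypothetical):
```nix
{ pkgs ? import <nixpkgs> {} }:
# Produces <store path>/bin/foo-start, gets a shell syntax check at build
# time (checkPhase), and journald then logs the short name "foo-start"
# instead of a long unit-script-prefixed file name.
pkgs.writeShellScriptBin "foo-start" ''
  echo "starting foo"
''
```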
systemd-tmpfiles loads all files in lexicographic order and ignores rules
for the same path in later files with a warning. Since we apply the default rules
provided by systemd, we should load user-defined rules first so users have a
chance to override the defaults.
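In NixOS terms, a rule declared via the module can now win over the systemd-provided default for the same path, e.g. (the rule shown is only an example):
```nix
{
  systemd.tmpfiles.rules = [
    # Loaded before the rules shipped by systemd, so this /tmp cleanup age
    # takes precedence over the upstream default for the same path.
    "d /tmp 1777 root root 30d"
  ];
}
```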
This reverts commit 5532065d06.
As far as I can tell setting RemainAfterExit=true here completely breaks
certificate renewal, which is really bad!
The systemd timer will activate the service unit every OnCalendar=;
however, with RemainAfterExit=true the service is already active, so the
timer doesn't rerun the service!
The commit also broke the actual tests (as it broke activation too),
but this was fixed later in https://github.com/NixOS/nixpkgs/pull/76052
I wrongly assumed that PR fixed renewal too, which it didn't!
Testing renewals is hard, as we need to sleep in tests.
For reasons yet unknown, the vxlan backend doesn't work (at least inside
the qemu networking), so this is moved to the udp backend.
Note that changing the backend apparently also changes the interface name;
it's now `flannel0`, not `flannel.1`.
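Assuming the module's `backend` option (my recollection of its shape; verify against the module), the switch looks like:
```nix
{
  # The default backend is vxlan; udp works inside the qemu test network.
  services.flannel.backend = {
    Type = "udp";
  };
}
```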
Fixes #74941
This was whitespace-sensitive, kept fighting with my editor, and broke
the tests easily. To fix this, let Python convert the output into
individual lines and strip whitespace from them before comparing.
When trying to build a VM using `nixos-build-vms` with a configuration
that doesn't evaluate, an error "at `<unknown-file>`" is usually shown.
This happens because `build-vms.nix` creates a VM network of
NixOS configurations that are attribute sets or functions and don't contain
any file information. This patch manually adds the `_file` attribute to
tell the module system which file contained the broken configuration:
```
$ cat vm.nix
{ vm.invalid-option = 1; }
$ nixos-build-vms vm.nix
error: The option `invalid-option' defined in `/home/ma27/Projects/nixpkgs/vm.nix@node-vm' does not exist.
(use '--show-trace' to show detailed location information)
```
This commit:
1. Updates the path of the traefik package, so that the out output is
used.
2. Adapts the configuration settings and options to Traefik v2 (see the sketch after this list).
3. Formats the NixOS traefik service using nixfmt.
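A hedged sketch of what a v2-style configuration could look like with the reworked module (the option names `staticConfigOptions` and `dynamicConfigOptions` and the values shown are assumptions, not taken from this change):
```nix
{
  services.traefik = {
    enable = true;
    # Traefik v2 splits configuration into a static and a dynamic part.
    staticConfigOptions = {
      entryPoints.web.address = ":80";
    };
    dynamicConfigOptions = {
      http.routers.dashboard = {
        rule = "Host(`traefik.example.org`)";
        service = "api@internal";
      };
    };
  };
}
```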
According to my analysis the last critical fix went into v5.4.23. I have
confirmed this by running WebGL overnight and haven't seen a single
i915 GPU hang. Let's remove the notes from the release notes.
(cherry picked from commit da764d22ce3b698707861d58824843ded87cbb0a)
We already set the relevant env vars in the systemd services. That does
not help one when executing any of the executables outside a service,
e.g. when creating a new user.
We use stdenv.hostPlatform.uname.processor, which I believe is just like
`uname -p`.
Example values:
```
(import <nixpkgs> { system = "x86_64-linux"; }).stdenv.hostPlatform.uname.processor
"x86_64"
(import <nixpkgs> { system = "aarch64-linux"; }).stdenv.hostPlatform.uname.processor
"aarch64"
(import <nixpkgs> { system = "armv7l-linux"; }).stdenv.hostPlatform.uname.processor
"armv7l"
```
The new wording does not assume the user is upgrading.
This is because a user could be setting up a new installation of 20.03
on a server that has a 19.09 or earlier stateVersion!!
The new wording ensures that confusion is reduced by stating that they
do not have to care about the assumed 16→17 transition.
Then, the wording explains that they should upgrade to version 18, and
how to do so.
It also revises the confusing wording about "multiple" upgrades.
* * *
The only thing we cannot really do is stop a fresh install of 17 if
there was no previous install, as it cannot be detected. That forces a
useless upgrade on new users with old state versions.
It is also important to state that they must set their package to
Nextcloud 18, as future upgrades to Nextcloud will not allow an upgrade
from 17!
I assume future warning messages will exist specifically stating what to
do to go from 18 to 19, then 19 to 20, etc...
This allows having multiple certificates with the same common name.
Lego uses the common name to name the certificate in its internal directory.
Fixes #84409
This is a backward-incompatible change from upstream dhcpcd [0], as
this could have easily locked me out of my box.
As dhcpcd doesn't allow using only a blacklist (denyinterfaces in
dhcpcd.conf) of devices and using all remaining devices while explicitly
allowing some interfaces like bridges, I think the best option is
to not change anything about it and just educate the users here about
that edge case and how to solve it.
[0] https://roy.marples.name/archives/dhcpcd-discuss/0002621.html
(cherry picked from commit eeeb2bf8035b309a636d596de6a3b1d52ca427b1)
I've had Netdata crash on me sometimes. Rarely but more than once. And I lost days of data before I noticed.
Let's be nice and restart it on failures by default.
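At the unit level, this boils down to roughly the following (a sketch, not necessarily the exact directives used):
```nix
{
  systemd.services.netdata.serviceConfig = {
    # Restart the daemon if it crashes instead of silently staying down.
    Restart = "on-failure";
  };
}
```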
This properly supports the `supportedSystems` and
`limitedSupportedSystems` arguments of `release-combined.nix`.
Previously, evaluation would fail if `x86_64-linux` was not part of
either of those, since the tested job always referenced the `x86_64-linux`
nixos tests (which won't exist in an aarch64-only eval).
Since the hydra configuration for the jobset `trunk-combined` has both
`aarch64-linux` and `x86_64-linux` as supported systems, this will make
aarch64 be part of the tested job on that jobset.