systemd.exec(5) on DynamicUser:
> If a statically allocated user or group of the configured name
> already exists, it is used and no dynamic user/group is allocated.
Using DynamicUser while still setting a group name can be
useful for granting access to resources that can otherwise only be
accessed with entirely static IDs.
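A minimal sketch of how this combination might look in a NixOS module (the service name and ExecStart are hypothetical placeholders; the statically allocated group is assumed to already exist and to own the resource the service needs to read):
```
{ pkgs, ... }:
{
  systemd.services.my-daemon = {
    serviceConfig = {
      DynamicUser = true;        # user/group normally allocated at runtime
      Group = "postdrop";        # existing static group is reused, per systemd.exec(5)
      ExecStart = "${pkgs.hello}/bin/hello";  # placeholder ExecStart
    };
  };
}
```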
The /run/wrapper directory is a tmpfs. Unfortunately, it is mounted so that
its root directory has the standard (for tmpfs) mode: 1777 (world-writable,
sticky -- the standard mode of shared temporary directories). This means that
every user can create new files and subdirectories there, but can't
move/delete/rename files that belong to other users.
* programs.neovim: init
Allows building a proper runtime folder with after/, ftplugin/, parser/ subfolders, etc.
(neo)vim expects a few different folders, for instance to load
treesitter parsers.
This PR reuses the builder from the etc module, notwithstanding the
different modes/uid/gid.
This allows getting rid of some autocmds in customRC (via proper use of
the folder hierarchy), which is a win in my opinion.
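A rough sketch of the resulting usage (option names assumed from the module as merged; the ftplugin entry is only an illustration):
```
{
  programs.neovim = {
    enable = true;
    # entries are assembled into the runtime folder much like environment.etc,
    # e.g. an ftplugin/ file instead of an autocmd in customRC
    runtime."ftplugin/nix.vim".text = ''
      setlocal shiftwidth=2 softtabstop=2 expandtab
    '';
  };
}
```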
Both packages will get EOLed within the lifetime of 20.09. `nextcloud17`
can be removed entirely (the attribute-path is kept however to provide
meaningful errors), however `nextcloud18` must be kept as `insecure` to
make sure that users from `nextcloud17` can properly upgrade to
`nextcloud19` on NixOS 20.09.
Turns out, `dd_url` should only be used in proxy scenarios, not to point
datadog to their EU endpoint - `site` should be used for that.
The `dd_url` setting doesn't affect APM, Logs or Live Process intake
which have their own "*_dd_url" settings.
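As a sketch (assuming the module exposes a `site` option alongside `apiKeyFile`; the key path is hypothetical):
```
{
  services.datadog-agent = {
    enable = true;
    apiKeyFile = "/run/keys/datadog-api-key";  # hypothetical path
    site = "datadoghq.eu";   # EU intake endpoint
    # dd_url would only be set when traffic has to go through a proxy
  };
}
```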
The postfix exporter needs to access postfix's `queue/public/` directory
to read the `showq` socket inside. Instead of making the public
directory world accessible, this sets the postfix exporter's group to
`postdrop` by default, when the postfix service is enabled.
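Roughly, the idea looks like this (a sketch, not the module's literal code):
```
{ config, lib, ... }:
{
  # give the exporter read access to postfix's queue/public/showq socket
  # by defaulting its group to postdrop whenever postfix itself is enabled
  services.prometheus.exporters.postfix.group =
    lib.mkIf config.services.postfix.enable (lib.mkDefault "postdrop");
}
```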
- This is fetched from a different URL, so allow passing that explicitly.
- There also isn't an nvidia-persistenced or nvidia-settings release for
this version, so use 450.57 instead. Also implement passing the
persistenced and settings versions explicitly.
Co-authored-by: Dmitry Kalinkin <dmitry.kalinkin@gmail.com>
Secrets are injected from the environment into the rendered
configuration before each startup using envsubst.
The test now makes use of this feature for the db password.
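A sketch of the mechanism with hypothetical service names and paths, assuming the a8m/envsubst CLI packaged as pkgs.envsubst:
```
{ pkgs, ... }:
{
  systemd.services.example = {
    serviceConfig = {
      # secrets such as DB_PASSWORD live here, outside the Nix store
      EnvironmentFile = "/run/keys/example.env";
      RuntimeDirectory = "example";
    };
    preStart = ''
      # render the final config before each startup, substituting
      # ''${DB_PASSWORD} and friends from the environment
      ${pkgs.envsubst}/bin/envsubst \
        -i ${./config.template.yaml} \
        -o /run/example/config.yaml
    '';
  };
}
```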
Otherwise, stage-2-init.sh will complain about not having access to
/dev/fd/62 as of systemd v246.
On IRC, flokli said:
15:14 <flokli> cole-h: hmmm... I could imagine some of the setup inside /dev has been moved into other parts of systemd
15:14 <flokli> And given we run systemd much later (outside initramfs only) it doesn't work properly here
15:17 <flokli> We probably don't invoke udev correctly
The format of the listenAddress option was recently changed to separate
the address and the port parts. There is now a legacy check that
tells users to update to the new format. This legacy check produces
a false positive on IPv6 addresses, since they contain colons.
Fix the regex to make it not match colons within IPv6 addresses.
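As an illustration of the idea (not the module's actual check; the names here are made up), the legacy detection has to stop treating a bare IPv6 address as a legacy host:port value:
```
let
  # illustration only: count a value as legacy "host:port" when it is a
  # single colon-free host followed by a numeric port; bare IPv6 addresses
  # contain several colons and therefore no longer match
  isLegacy = addr: builtins.match "[^:]+:[0-9]+" addr != null;
in {
  examples = map isLegacy [ "127.0.0.1:8080" "::1" "[::1]" ];
  # => [ true false false ]
}
```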
This splits PulseAudio and JACK emulation into separate outputs. Doing
so provides a number of benefits.
First it fixes pw-pulse and pw-jack. Prior to this they pointed to bogus
locations because the environment variables were not evaluated.
Technically, fixing this only requires setting libpulse-path and
libjack-path to any absolute path, not necessarily to separate outputs,
but the separation comes as a nice result.
Secondly it allows overriding libpulseaudio with pipewire.pulse in many
packages. This is possible because the new outputs have a more standard
layout.
This adds two tests. One is for whether the paths used by the module are
present, while the other is for testing functionality of PipeWire
itself. This is done using the installed tests recently added by
upstream.
This allows for transparent JACK and PulseAudio emulation. With this you
can essentially replace your entire audio framework with just PipeWire
for almost no configuration.
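For example, the whole stack can look roughly like this (a rough sketch; the exact option names have shifted between releases, and at the time of this commit the pw-pulse/pw-jack wrappers were the entry point):
```
{
  services.pipewire = {
    enable = true;
    alsa.enable = true;
    pulse.enable = true;   # PulseAudio emulation
    jack.enable = true;    # JACK emulation
  };
  hardware.pulseaudio.enable = false;  # PipeWire replaces the PulseAudio daemon
}
```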
It had confusing semantics, being somewhere between a boolean option and
a FontPath specification. Introduce fontPath to replace it and mark the
old option as removed.
As of version 1.18.0 Appindicator support is available in the official
network-manager-applet package. To use nm-applet in an Appindicator
environment the applet should be started with the following command:
$ nm-applet --indicator
Without this option it does not appear in the Enlightenment panel systray,
for instance.
Regression introduced by 053b05d14d.
The commit in question essentially removed the "with pkgs;" from the
scope around the various packages added to environment.systemPackages.
Since services.colord.enable and services.xserver.wacom.enable are false
by default, the change above didn't directly result in an evaluation
error.
Tested evaluation before and after this change via:
for cfg in hardware.bluetooth.enable \
           networking.networkmanager.enable \
           hardware.pulseaudio.enable \
           powerManagement.enable \
           services.colord.enable \
           services.samba.enable \
           services.xserver.wacom.enable; do
  nix-instantiate --eval nixos --arg configuration '{
    services.xserver.desktopManager.plasma5.enable = true;
    '"$cfg"' = true;
  }' -A config.environment.systemPackages > /dev/null
done
Signed-off-by: aszlig <aszlig@nix.build>
Cc: @ttuegel
Since using flakes disallows the usage of <unstable> (which I use in
some tests), this adds an alternative. By setting specialArgs, all VMs
can get the `unstable` flake input as an arg. This is not possible with
extraConfigurations, as that would lead to infinite recursions.
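A rough sketch of how a test might consume this (the `inputs`/`unstable` names are placeholders for a flake input, and the exact attribute names in the test framework may differ):
```
{ inputs, ... }:
{
  # sketch: every node receives `unstable` as a module argument via specialArgs
  myTest = import ./make-test-python.nix ({ pkgs, ... }: {
    name = "unstable-example";
    specialArgs = { unstable = inputs.unstable; };
    nodes.machine = { unstable, ... }: {
      environment.systemPackages = [ unstable.legacyPackages.x86_64-linux.hello ];
    };
    testScript = "machine.wait_for_unit('multi-user.target')";
  });
}
```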
This removes the `services.dbus.socketActivated` and
`services.xserver.startDbusSession` options. Instead the user D-Bus
session is always socket activated.
In version 2.0.15 `gotify` switched to `packr` 2.x, which is why the
UI can't be served properly via HTTP; this causes an empty 500 response and
the following errors in `journald`:
```
2020/09/12 19:18:33 [Recovery] 2020/09/12 - 19:18:33 panic recovered:
GET / HTTP/1.1
Host: localhost:8080
Accept: */*
User-Agent: curl/7.72.0
stat /home/ma27/Projects/ui/build/index.html: no such file or directory
```
This wasn't caught by the VM test as it only tested the REST and push
APIs. Using their internal `packr.go` script in our build, as is done in
the upstream build system[1], fixes the issue.
[1] https://github.com/gotify/server/pull/277/files#diff-b67911656ef5d18c4ae36cb6741b7965R48
This hook moves systemd user service files from `lib/systemd/user` to
`share/systemd/user`. This is to allow systemd to find the user
services when installed into a user profile. The `lib/systemd/user`
path does not work since `lib` is not in `XDG_DATA_DIRS`.
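The idea, sketched as a manual overrideAttrs rather than the hook's actual code (`pkgs` and `somePackage` are assumed/hypothetical here):
```
# sketch: applying the relocation by hand to a hypothetical package
pkgs.somePackage.overrideAttrs (old: {
  postInstall = (old.postInstall or "") + ''
    if [ -d "$out/lib/systemd/user" ]; then
      mkdir -p "$out/share/systemd"
      mv "$out/lib/systemd/user" "$out/share/systemd/user"
    fi
  '';
})
```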
ecb73fd555 introduced a new keepVmState
CLI flag for test-driver.py. This CLI flag gets forwarded to the
Machine class through create_machine.
It created a regression for the boot tests, where __main__ ends up not
being evaluated. See
https://github.com/NixOS/nixpkgs/pull/97346#issuecomment-690951837 for
the bug report.
keepVmState now defaults to false when __main__ ends up not being
evaluated.
This reverts commit 42eebd7ade, reversing
changes made to b169bfc9e2.
This breaks the nfs3.simple test, and even the current PR #97656 wouldn't fix it.
Therefore let's revert for now to unblock the channels.
This is a temporary fix for #97433. A more proper fix has been
implemented upstream in systemd/systemd#17001, however until it gets
backported, we are stuck with ignoring the error.
After the backport lands, this commit should be reverted.
`rngd` seems to be the root cause for slow boot issues, and its functionality is
redundant since kernel v3.17 (2014), which introduced a `krngd` task (in kernel
space) that takes care of pulling in data from hardware RNGs:
> commit be4000bc4644d027c519b6361f5ae3bbfc52c347
> Author: Torsten Duwe <duwe@lst.de>
> Date: Sat Jun 14 23:46:03 2014 -0400
>
> hwrng: create filler thread
>
> This can be viewed as the in-kernel equivalent of hwrngd;
> like FUSE it is a good thing to have a mechanism in user land,
> but for some reasons (simplicity, secrecy, integrity, speed)
> it may be better to have it in kernel space.
>
> This patch creates a thread once a hwrng registers, and uses
> the previously established add_hwgenerator_randomness() to feed
> its data to the input pool as long as needed. A derating factor
> is used to bias the entropy estimation and to disable this
> mechanism entirely when set to zero.
Closes: #96067
This commit fixes the ejabberd tests for hydra:
mod_http_upload and mod_disco need to be explicitly enabled, and a
handler needs to be set up to make it work. Also, the client needs to be
able to contact the server.
The commit also fixes the situation where HTTP upload failed: in that
case the client would wait forever because nothing caught the error.
Finally, there remains a non-reproducible error where ejabberd server
fails to start with an error like:
format: "Failed to create cookie file '/var/lib/ejabberd/.erlang.cookie': eacces"
(happens ~15% of the time). I tried to check the existence of
/var/lib/ejabberd/ in the pre-start script and saw nothing that would
explain this error, so I gave up on this error in particular.
Now allows applying external overlays either in the form of a .dts file,
literal DTS content added to the store, or a precompiled .dtbo.
If overlays are defined, kernel device trees are compiled with '-@'
so the .dtb files contain symbols which we can reference in our
overlays.
Since `fdtoverlay` doesn't respect `/ compatible` by itself,
we query the compatible strings of both the `dtb` and the `dtbo(verlay)`
and apply the overlay only if the latter is a substring of the former.
Also adds support for filtering .dtb files (as there are now nearly 1k
dtbs).
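For example (attribute names as assumed in the hardware.deviceTree module; the paths, names and filter pattern are hypothetical):
```
{
  hardware.deviceTree = {
    enable = true;
    filter = "*-rpi-*.dtb";    # only keep/apply against matching dtbs
    overlays = [
      { name = "spi";      dtsFile  = ./overlays/spi.dts; }   # external .dts
      { name = "pps-gpio"; dtboFile = ./overlays/pps.dtbo; }  # precompiled .dtbo
    ];
  };
}
```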
Co-authored-by: georgewhewell <georgerw@gmail.com>
Co-authored-by: Kai Wohlfahrt <kai.wohlfahrt@gmail.com>
This option is only available as a command-line flag and not from the
config file (that is, `services.picom.settings`). Therefore it is all the
more important that it gets its own option.
One reason one might need this set is that blur methods other than
kernel do not work with the old backends, see yshui/picom#464.
For reference, the home-manager picom module exposes this option too.
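A sketch of the intended use (assuming the option landed as `services.picom.experimentalBackends`):
```
{
  services.picom = {
    enable = true;
    experimentalBackends = true;          # the CLI-only flag exposed here
    settings.blur-method = "dual_kawase"; # needs the new backends (yshui/picom#464)
  };
}
```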
The commit enforces buildPackages in the builder but neglects
the fact that the builder is intended to run on the target system.
Because of that, the builder will fail when remotely building a
configuration, e.g. with NixOps or nix-copy-closure.
This reverts commit a6ac6d00f9.
Following changes in https://github.com/NixOS/nixpkgs/pull/91092 the `path` attribute is now a list
instead of being a string. This resulted in the following evaluation error:
"cannot coerce a list to a string, at [...]/nixos/modules/services/networking/openvpn.nix:16:18"
so we now need to convert it to the right type ourselves.
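An illustration of the kind of conversion needed (not the module's exact code; `lib` and `pkgs` are assumed to be in scope and the package list is made up):
```
let
  # `path` used to be a plain string; now it is a list of packages, so it
  # has to be turned back into a PATH-style string before being spliced
  # into the generated up/down scripts
  pathList = [ pkgs.iproute2 pkgs.nettools ];
  pathString = lib.makeBinPath pathList;   # "/nix/store/...-iproute2/bin:..."
in
  "PATH=${pathString}:$PATH"
```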
Closes https://github.com/NixOS/nixpkgs/issues/97360.
Add the option `environmentFile` to allow passing secrets to the service
without adding them to the Nix store, while keeping the current
configuration via the existing environment file intact.
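A sketch with a hypothetical service name and secret path:
```
{
  services.example = {
    enable = true;
    # contains lines like SOME_TOKEN=..., read at startup and never
    # copied into the Nix store
    environmentFile = "/run/keys/example.env";
  };
}
```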
The previous version of the code would only kick in if the state
directory path pointed at a *file*, which never occurs. Making that
codepath actually work reveals an ordering bug, which this patch fixes
as well.
It also replaces the confusing, imperative-mood log message "delete VM
state directory" with "deleting VM state directory".
Finally, we hint the user about how to prevent this deletion, i.e. by
passing the --keep-vm-state flag.
Bug report:
https://github.com/NixOS/nixpkgs/pull/91046#issuecomment-685568750
Credit goes to Edef for the rebase on top of a recent nixpkgs commit
and for writing most of this commit message.
Co-authored-by: edef <edef@edef.eu>
Right now the UX for installing NixOS on a headless system is very bad.
To enable sshd, users either need physical access or need to be very
knowledgeable to figure out how to modify the installation image by hand
to put an `sshd.service` symlink in the right directory in /nix/store.
This is in particular a problem on ARM SBCs (single-board computers) but
also on other hardware where the network is the only meaningful way to
access the machine.
This commit enables sshd by default. This does not give anyone access to
the NixOS installer, since by default there is no user with a non-empty
password or key. It does, however, make it easy to add ssh keys to the
installation image (USB stick, SD card on ARM boards) by simply mounting
it and adding keys to `/root/.ssh/authorized_keys`.
Importantly, this does not require nix/nixos on the machine that prepares
the installation device, and is even feasible on non-Linux systems by
using third-party ext4 drivers.
Potential new threats: since this enables sshd by default, a potential
bug in OpenSSH could lead to remote code execution. However, OpenSSH has
a very good track record over the last 20 years, which makes it far more
likely that Linux itself would have a remote code execution vulnerability
first. It is trusted by millions of servers on many operating systems to
be exposed to the internet by default.
Co-authored-by: Samuel Dionne-Riel <samuel@dionne-riel.com>
Re-add perl (used in shell scripts), rsync (needed for NixOps) and strace
(a common debugging tool). They were previously removed in
https://github.com/NixOS/nixpkgs/pull/91213
Co-authored-by: Timo Kaufmann <timokau@zoho.com>
Co-authored-by: 8573 <8573@users.noreply.github.com>
systemd-confinement's automatic package extraction does not work correctly
if ExecStart, ExecReload, etc. are lists.
Add an extra flatten to make things smooth.
Fixes #96840.
xss-lock needs XDG_SESSION_ID to respond to loginctl lock-session(s)
(and possibly other session operations such as idle hint management).
This change adds XDG_SESSION_ID to the list of imported environment
variables when starting systemctl.
Inspired by home-manager, add importVariables configuration.
Set session to XDG_SESSION_ID when running xss-lock as a service.
Co-authored-by: misuzu <bakalolka@gmail.com>
We apparently didn't fit anymore. I don't think this test is meant
to (also) check closure size.
Note: as of this commit, the test is blocked by a fontconfig problem,
so I tested with that merge temporarily reverted.
This goes through a recent example of 19.09 (because the workflow
should be ever-changing, so our example needs to be recent).
Lots of changes, just read idk.
Attempting to reuse keys on a basis different to the cert (AKA,
storing the key in a directory with a hashed name different to
the cert it is associated with) was ineffective since when
"lego run" is used it will ALWAYS generate a new key. This causes
issues when you revert changes since your "reused" key will not
be the one associated with the old cert. As such, I tore out the
whole keyDir implementation.
As for the race condition, checking the mtime of the cert file
was not sufficient to detect changes. In testing, selfsigned
and full certs could be generated/installed within 1 second of
each other. cmp is now used instead.
Also, I removed the nginx/httpd reload waiters in favour of
simple retry logic for the curl-based tests.
The cyclic dependency of systemd → cryptsetup → lvm2 → udev=systemd
needs to be broken somewhere. The previous strategy of building
cryptsetup with an lvm2 built without udev (#66856) caused the
installer.luksroot test to fail. Instead, build lvm2 with a udev built
without cryptsetup.
Fixes #96479.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Testing of certs failed randomly when the web server was still
returning old certs even after the reload was "complete". This was
because the reload commands send process signals and do not wait
for the worker processes to restart. This commit adds log watchers
which wait for the worker processes to be restarted.
- Use an acme user and group, allow group override only
- Use hashes to determine when certs actually need to regenerate
- Avoid running lego more than necessary
- Harden permissions
- Support "systemctl clean" for cert regeneration
- Support reuse of keys between some configuration changes
- Permissions fix service solves for previously root-owned certs
- Add a note about multiple account creation and emails
- Migrate extraDomains to a list
- Deprecate user option
- Use minica for self-signed certs
- Rewrite all tests
I thought of a few more cases where things may go wrong,
and added tests to cover them. In particular, the web server
reload services were depending on the target - which stays alive,
meaning that the renewal timer wouldn't be triggering a reload
and old certs would stay on the web servers.
I encountered some problems ensuring that the reload took place
without accidentally triggering it as part of the test. The sync
commands I added ended up being essential and I'm not sure why,
it seems like either node.succeed ends too early or there's an
oddity of the vm's filesystem I'm not aware of.
- Fix duplicate systemd rules on reload services
Since useACMEHost is not unique to every vhost, if one cert
was reused many times it would create duplicate entries in
${server}-config-reload.service for Wants, Before and
ConditionPathExists.