- Remove xdg-desktop-menu-dummy.menu from the kbuildsycoca5 call. Not sure why we
need it, but it is a pretty big failure if it exists.
See issue #56176.
- plasma: clear ksycoca cache before building
This is needed to pick up on software removed since the last cache
update. Otherwise it hangs around as zombies forever (or until the
cache is cleared).
- Add the above + the icon cache cleanup to plasmaSetup
This will be run for the logged in user on each nixos-rebuild.
Unfortunately this only works if you are managing software through
nixos-rebuild (nix-env users need to run this manually or else log out
and log back in).
I think the bepasty nixos service has been broken since c539c02, since
bepasty changed from using python2.7 to python3.7. This updates the
nixos module to refer to the matching python version.
A nixos module for configuring the server side of pkgs.snapcast.
The module is named "snapserver" following upstream convention.
This commit does not provide a module for the corresponding client.
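A hypothetical sketch of what enabling the server might look like (the option
names below are assumptions, not a verbatim excerpt from the module):
```
services.snapserver = {
  enable = true;
  port = 1704;                      # stream/client port (see "port" below)
  streams.mpd = {
    type = "pipe";                  # read audio from a FIFO written by mpd
    location = "/run/snapserver/mpd";
  };
};
```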
Fix handling of port and controlPort
Fix stream uri generation & address review
Remove unused streams options & add description
Add missing description & Remove default fs path
Use types.port for ports & formatting improvements
Force mpd and mopidy to wait for snapserver
by adding targets and curl wait loops to services, to ensure services
are not started before the services they depend on are reachable.
Extra targets cfssl-online.target and kube-apiserver-online.target
synchronize starts across machines, and node-online.target ensures
docker is restarted, and ready to have containers deployed onto it, only
after flannel has negotiated the network CIDR with the apiserver.
Since flannel needs to be started before addon-manager to configure
the docker interface, it has to have its own RBAC bootstrap service.
The curl wait loops within the other services exist to ensure that a
service, once started, can do its work immediately without
cluttering the log with failing-condition messages.
By ensuring kubernetes.target is only reached after starting the
cluster, it can be used in the tests as a wait condition.
In kube-certmgr-bootstrap, an mkdir is needed so that it does not fail to start.
The following is the relevant part of systemctl list-dependencies:
```
default.target
● ├─certmgr.service
● ├─cfssl.service
● ├─docker.service
● ├─etcd.service
● ├─flannel.service
● ├─kubernetes.target
● │ ├─kube-addon-manager.service
● │ ├─kube-proxy.service
● │ ├─kube-apiserver-online.target
● │ │ ├─flannel-rbac-bootstrap.service
● │ │ ├─kube-apiserver-online.service
● │ │ ├─kube-apiserver.service
● │ │ ├─kube-controller-manager.service
● │ │ └─kube-scheduler.service
● │ └─node-online.target
● │   ├─node-online.service
● │   ├─flannel.target
● │   │ ├─flannel.service
● │   │ └─mk-docker-opts.service
● │   └─kubelet.target
● │     └─kubelet.service
● ├─network-online.target
● │ └─cfssl-online.target
● │   ├─certmgr.service
● │   ├─cfssl-online.service
● │   └─kube-certmgr-bootstrap.service
```
This protects services from crashing and clobbering the logs when
certificates are not yet in place, and makes sure services are activated
when certificates are ready.
To prevent errors similar to "kube-controller-manager.path: Failed to
enter waiting state: Too many open files"
fs.inotify.max_user_instances has to be increased.
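On NixOS the limit can be raised via sysctl; the value below is only
illustrative:
```
boot.kernel.sysctl."fs.inotify.max_user_instances" = 8192;
```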
PlexPy was renamed to Tautulli.
This renames the module as well as the application accordingly.
Aliases are kept for backwards compatibility.
# Conflicts:
# nixos/modules/services/misc/tautulli.nix
The overwriteprotocol option can be used to force Nextcloud to generate
URLs with the given protocol. This is useful for instances behind
reverse proxies that serve Nextcloud with HTTPS.
In this case Nextcloud can't determine the proper protocol and it needs
to be configured manually.
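A minimal sketch for an instance behind an HTTPS-terminating reverse proxy,
assuming the setting is exposed as `services.nextcloud.config.overwriteProtocol`:
```
services.nextcloud.config.overwriteProtocol = "https";
```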
https://github.com/golang/go/issues/9334 describes how net.Listen (as used by Alertmanager):
* listens on 127.0.0.1 if the listenAddress is "localhost"
* listens on all interfaces if the listenAddress is ""
Before flannel is ready there is a brief time where docker will be
running with a default docker0 bridge. If kubernetes happens to spawn
containers before flannel is ready, docker can't be restarted when
flannel is ready because some containers are still running on the
docker0 bridge with potentially different network addresses.
Environment variables in `EnvironmentFile` override those defined via
`Environment` in the systemd service config.
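An illustrative sketch of that precedence (the service name and file path here
are hypothetical):
```
systemd.services.flannel = {
  # rendered as Environment=NODE_NAME=... in the unit file
  environment.NODE_NAME = "node1";
  # a NODE_NAME entry in this file wins over the Environment= value above
  serviceConfig.EnvironmentFile = "/run/flannel/node.env";
};
```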
Co-authored-by: Christian Albrecht <christian.albrecht@mayflower.de>
+ isolate etcd on the master node by letting it listen only on loopback
+ enable kubelet on the master and taint the master with NoSchedule
The reason for the latter is that flannel requires all nodes to be "registered"
in the cluster in order to setup the cluster network. This means that the
kubelet is needed even at nodes on which we don't plan to schedule anything.
- All kubernetes components have been separated into different files
- All TLS-enabled ports have been deprecated and disabled by default
- EasyCert option added to support automatic cluster PKI-bootstrap
- RBAC has been enforced for all cluster components by default
- NixOS kubernetes test cases make use of easyCerts to setup PKI
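A sketch of what a minimal single-node cluster using the new PKI bootstrap
might look like (option names per the list above; the address is illustrative):
```
services.kubernetes = {
  roles = [ "master" "node" ];
  masterAddress = "kube.example.local";
  easyCerts = true;                 # automatic cluster PKI bootstrap
};
```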
This round is without the systemd CVE,
as we don't have binaries for that yet.
BTW, I just ignore darwin binaries these days,
as I'd have to wait for weeks for them.
The module is indeed very large but allows configuring every aspect of
icingaweb2. The built-in monitoring module is in its own file because
there are actually more (third-party) modules, and this structure means
every module can get its own file.
Patch by @Baughn, who noticed these imports being very slow when run
serially with many datasets, so much so that the service would time out
and fail; this fixes that.
With this option it's possible to specify a custom expression for
`roundcube`, e.g. a roundcube environment with third-party plugins as
shown in the testcase.
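A sketch of the idea, assuming the option is `services.roundcube.package` and a
`withPlugins` helper as used in the testcase:
```
services.roundcube = {
  enable = true;
  package = pkgs.roundcube.withPlugins (plugins: [ plugins.persistent_login ]);
};
```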
* pr-55320:
nixos/release-notes: mention breaking changes with matrix-synapse update
nixos/matrix-synapse: reload service with SIGHUP
nixos/tests/matrix-synapse: generate ca and certificates
nixos/matrix-synapse: use python to launch synapse
pythonPackages.pymacaroons-pynacl: remove unmaintained fork
matrix-synapse: 0.34.1.1 -> 0.99.0
pythonPackages.pymacaroons: init at 0.13.0
Force this option to false. Leaving this as true (currently the default)
is dangerous. If the TT-RSS installation upgrades itself to a newer
version requiring a schema update, the installation will break the next
time the TT-RSS systemd service is restarted.
Ideally, the installation itself should be immutable (see
https://github.com/NixOS/nixpkgs/issues/55300).
* redmine: 3.4.8 -> 4.0.1
* nixos/redmine: update nixos test to run against both redmine 3.x and 4.x series
* nixos/redmine: default new installs from 19.03 onward to redmine 4.x series, while keeping existing installs on redmine 3.x series
* nixos/redmine: add comment about default redmine package to 19.03 release notes
* redmine: add aandersea as a maintainer
munin_update relies on a stats file that exists, but isn't found in the
default location on NixOS; the appropriate plugin configuration is
added.
munin_stats relies on munin-cron writing a logfile, which the NixOS
build of munin does not. (This is probably fixable in the munin package,
but I don't have time to dig into that right now.)
This permits custom styling of the generated HTML without needing to
build your own Munin package from source. Also comes with an example
that works as a passable dark theme for Munin.
extraAutoPlugins lets you list plugins and plugin directories to be
autoconfigured, and extraPlugins lets you enable plugins on a one-by-one
basis. This can be used to enable plugins from contrib (although you'll
need to download and check out contrib yourself, then point these
options at it), or plugins you've written yourself.
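A hypothetical sketch, assuming the options live on `services.munin-node`
(the paths are illustrative):
```
services.munin-node = {
  enable = true;
  # directories (e.g. a checkout of munin contrib) to autoconfigure plugins from
  extraAutoPlugins = [ "/opt/munin-contrib/plugins" ];
  # individual plugins enabled one by one, keyed by the name to install them as
  extraPlugins = {
    zfs_arcstats = "/opt/munin-contrib/plugins/zfs/zfs_arcstats";
  };
};
```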
munin-graph is hardcoded to use DejaVu Mono for the graph legends; if it
can't find it, there's no guarantee it finds a monospaced font at all,
and if it can't find a monospaced font, the legends come out badly
formatted.
This is just a set of globs to remove from the active plugins directory
after autoconfiguration is complete.
I also removed the hard-coded disabling of "diskstats", since it seems
to work just fine now.
Since this module was written, Munin has moved their documentation from
munin-monitoring.org/wiki to guide.munin-monitoring.org. Most of the
links were broken, and the ones that weren't went to "please use the new
site" pages.
NixOS currently defaults services.nginx.package to
nginxStable. Including configuration files from nginxMainline could
potentially cause incompatible configuration.
- added two options:
  - agree to the EULA (eula.txt): true/false; will create a symlink over an
    existing eula.txt pointing to `/nix/store/...`.
  - whitelist users (optional; will symlink over an existing
    whitelist.json and create a backup)
- server.properties can be configured with the serverProperties
  option. If there is an existing server.properties it will be
  copied to server.properties.old to keep the old
  one. server.properties MUST be writable, thus symlinking is not
  an option.
- all ports that are stated in `server.properties` are exposed
  properly in the firewall.
(infinisil) nixos/minecraft-server: Fix, refactor and polish
Adds an option `declarative` (defaulting to false), in order to stay
(mostly) backwards compatible. The only thing that's not backwards
compatible is that you now need to agree to the EULA at evaluation time,
but that's guarded by an assertion and therefore doesn't need a release
note.
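A sketch of a declarative configuration using the options described above
(the player entry is, of course, made up):
```
services.minecraft-server = {
  enable = true;
  declarative = true;
  eula = true;                       # must be accepted at evaluation time
  whitelist.somebody = "11111111-2222-3333-4444-555555555555";  # name -> UUID
  serverProperties = {
    server-port = 25565;             # ports from server.properties are opened in the firewall
    white-list = true;
  };
};
```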
New option `extraPluginPaths' that allows users to supply additional
paths for netdata plugins. Very useful for when you want to use
custom collection scripts.
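Example usage (the directory is hypothetical and would contain your own
collection scripts):
```
services.netdata = {
  enable = true;
  extraPluginPaths = [ "/etc/netdata/custom-plugins.d" ];
};
```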
1. Allow syslog identifiers with special characters
2. Do not write a pid file as we are running in foreground anyway
3. Clean up the module for readability
Without this, when deploying using nixops, restarting sshguard would make
nixops show an error about restarting the service although the service is
actually being restarted.
Don't add the testing "webcam" device,
which is unexpected to see when querying
what devices fwupd believes exist :).
Won't change behavior for anyone defining
the blacklistPlugin option already,
but doesn't seem worth making more complicated.
Systemd provides some functionality to escape strings that are supposed
to be part of a unit name[1]. This seems to be used for interface names
in `sys-subsystem-net-devices-{interface}.device` and breaks
wpa_supplicant if the wireless interface name has a dash, which is
encoded to \x2d.
Such an interface name is rather rare, but used e.g. when configuring
multiple wireless interfaces with `networking.wlanInterfaces`[2] to have one
interface for `wpa_supplicant` and another one for `hostapd`.
[1] https://www.freedesktop.org/software/systemd/man/systemd-escape.html
[2] https://nixos.org/nixos/options.html#networking.wlaninterfaces
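The kind of setup where this matters, roughly as in the manual example[2]
(the device name is illustrative); note the dash in `wlan-station0`, which
systemd escapes to `wlan\x2dstation0` in the corresponding device unit name:
```
networking.wlanInterfaces = {
  wlan-station0 = { device = "wlp6s0"; };
  wlan-ap0 = { device = "wlp6s0"; };
};
networking.wireless.interfaces = [ "wlan-station0" ];
services.hostapd.interface = "wlan-ap0";
```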
Add an ExecReload command to the prosody service, to allow reloading
prosody by sending SIGHUP to the main process, for example to update
certificates without restarting the server. This is exactly how the
`prosodyctl` tool does it.
Note: Currently there is a bug which prevents mod_http from reloading the
certificates properly: https://issues.prosody.im/1216.
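Roughly what that amounts to in the unit (a sketch; the exact path to the kill
binary is an assumption, not the module verbatim):
```
systemd.services.prosody.serviceConfig.ExecReload =
  "${pkgs.util-linux}/bin/kill -HUP $MAINPID";
```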
`collectd' might fail because of a failure in any of its numerous plugins.
For example, the `virt' plugin sometimes fails if `collectd' is started before `libvirtd'.
The default galera_new_cluster script tries to set this environment
variable using systemctl set-environment which doesn't work if the
variable is not being used in the unit file ;)
Without this line, attempting to copy and paste non-ASCII characters
will result in error messages like the following (and pasting from the
server to the client will not work):
```
CLIPBOARD clipboard_send_data_response_for_text: 823 : ERROR: clipboard_send_data_response_for_text: bad string
```
For large setups it is useful to list all databases explicitly
(for example if temporary databases are also present) and store them in
separate files.
For smaller setups it is more convenient to just back up all databases at once,
because it is easy to forget to update the configuration when adding/renaming
databases. pg_dumpall also has the advantage that it backs up users/passwords.
As a result the module becomes easier to use, because in the default case it
is sufficient to just set one option (services.postgresqlBackup.enable).
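In other words, the default case is now a one-liner, while explicit
per-database dumps remain available:
```
# Default: pg_dumpall of everything, including roles/passwords.
services.postgresqlBackup.enable = true;

# Larger setups can still list databases explicitly (one dump file each):
# services.postgresqlBackup.databases = [ "db1" "db2" ];
```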
Although this can be added to `extraOptions`, I figured that it makes
sense to add an option to explicitly promote this feature in our
documentation, since most self-hosted gitea instances probably aren't
intended for public registration.
Also added a notice that this should only be set after the initial deploy,
as you have to register yourself using that feature unless the install
wizard is used.
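A sketch, assuming the new option is named `disableRegistration`; per the
notice above, only set it after you have registered your own account (or used
the install wizard):
```
services.gitea = {
  enable = true;
  disableRegistration = true;
};
```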
Symlinking works for most plugins and themes, but Avada, for instance, fails to
understand the symlink, causing its file path stripping to fail. This results in
requests that look like:
https://example.com/wp-content//nix/store/...plugin/path/some-file.js
Since hard linking directories is not allowed, copying is the next best thing.
While at it (see previous commit), using attrNames in combination with
length is a bit verbose for checking whether the filtered attribute set
is empty, so let's just compare it against an empty attribute set.
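The simplification in question, as a tiny self-contained example (the names
are illustrative):
```
let
  lib = (import <nixpkgs> { }).lib;
  filtered = lib.filterAttrs (name: value: value != null) { a = null; };
in {
  verbose = lib.length (lib.attrNames filtered) == 0;  # the old check
  simpler = filtered == { };                           # the new check
}
```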
Signed-off-by: aszlig <aszlig@nix.build>
When generating values for the services.nsd.zones attribute using values
from pkgs, we'll run into an infinite recursion because the nsd module
has a condition on the top-level definition of nixpkgs.config.
While it would work to push the definition a few levels down, it will
still only work if we don't use bind tools for generating zones.
As far as I could see, Python support for BIND seems to be only needed
for the dnssec-* tools, so instead of using nixpkgs.config, we now
directly override pkgs.bind instead of globally in nixpkgs.
To illustrate the problem with a small test case, instantiating the
following Nix expression from the nixpkgs source root will cause the
mentioned infinite recursion:
  (import ./nixos {
    configuration = { lib, pkgs, ... }: {
      services.nsd.enable = true;
      services.nsd.zones = import (pkgs.writeText "foo.nix" ''
        { "foo.".data = "xyz";
          "foo.".dnssec = true;
        }
      '');
    };
  }).vm
With this change, generating zones via import-from-derivation is now
possible again.
Signed-off-by: aszlig <aszlig@nix.build>
Cc: @pngwjpgh
This adds a NixOS option for setting the CPU max and min frequencies
with `cpufreq`. The two options that have been added are:
- `powerManagement.cpufreq.max`
- `powerManagement.cpufreq.min`
It also adds an alias to the `powerManagement.cpuFreqGovernor` option as
`powerManagement.cpufreq.governor`. This updates the installer to use
the new option name. It also updates the manual with a note about
the new name.
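Example usage (the frequency values are illustrative; see the option
descriptions for the expected unit):
```
powerManagement.cpufreq = {
  min = 800000;
  max = 2400000;
  governor = "ondemand";   # alias for powerManagement.cpuFreqGovernor
};
```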
Although the package itself builds fine, the module fails because it
tries to log into a non-existent file in `/var/log`, which breaks the
service. Patching the default config to log to stdout by default fixes
the issue. Additionally this is the better solution, as NixOS heavily
relies on systemd (and thus journald) for logging.
Also, the runtime relies on `/etc/localtime` to start; as its existence is
not guaranteed by the module system, we set UTC as a sensible default when
using the module.
To ensure that the service's basic functionality is available, a simple
NixOS test has been added.
This flag causes the shairport-sync server to attempt to daemonize, but systemd already handles that. With the `-d` argument, shairport-sync exits immediately; it seems that something (systemd, I'm guessing) is sending it SIGINT or SIGTERM.
The [upstream systemd unit](https://github.com/mikebrady/shairport-sync/blob/master/scripts/shairport-sync.service.in#L10) doesn't pass `-d`.
pkgs.owncloud still pointed to owncloud 7.0.15 (from May 13 2016)
Last owncloud server update in nixpkgs was in Jun 2016.
At the same time Nextcloud forked away from it, indicating users
switched over to that.
cc @matej (original maintainer)
Systemd provides an option for allocating DynamicUsers,
which we want to use in NixOS to harden service configuration.
However, we discovered that the user wasn't allocated properly
for services. After some digging this turned out to be, of course,
a cache inconsistency problem.
When a DynamicUser creation is performed, systemd checks beforehand
whether the requested user already exists statically. If it does,
it bails out. If it doesn't, systemd continues with allocating the
user.
However, by checking whether the user exists, nscd will store
the fact that the user does not exist in its negative cache.
When the service tries to look up which user is associated with its
uid (by calling whoami, for example), it will try to consult
libnss_systemd.so. However, this will read from the cache and
report that the user doesn't exist, and thus will return that
there is no user associated with the uid. It will continue
to do so for the cache duration. If the service
doesn't immediately look up its username, this bug is not
triggered, as the cache will be invalidated around that time.
However, if the service is quick enough, it might end up
in a situation where it's incorrectly reported that the
user doesn't exist.
Preferably, we would not be using nscd at all. But we need to
use it because glibc reads NSS modules from /etc/nsswitch.conf
by looking relative to the global LD_LIBRARY_PATH. Because LD_LIBRARY_PATH
is not set globally (as that would lead to impurities and ABI issues),
glibc will fail to find any NSS modules.
Instead, as a hack, we start up nscd with LD_LIBRARY_PATH set
for only that service. Glibc will forward all NSS lookups to
nscd, which will then respect the LD_LIBRARY_PATH and only
read from locations specified in the NixOS config. This way
we can load NSS modules in a pure fashion.
However, I think by accident we just copied over the default
settings of nscd, which actually caches user and group lookups.
We already disable this when sssd is enabled, as it interferes
with the correct working of libnss_sss.so, which already
does its own caching of LDAP requests.
(See https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/usingnscd-sssd)
Because nscd caching is now also interfering with libnss_systemd.so,
and probably with other NSS modules too, let's just pre-emptively
disable caching for all options related to users and groups,
but keep it for caching host and service lookups.
Note that we cannot just put
  enable-cache passwd no
in /etc/nscd.conf, as this will actually cause glibc to _not_ forward the
call to nscd at all, and thus never reach the NSS modules. Instead we set
the negative and positive cache TTLs to 0 seconds as a workaround.
This way, glibc will always forward requests to nscd, but results
will never be cached.
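The relevant nscd.conf directives therefore end up looking roughly like this
(a sketch of the idea, not the literal generated file):
```
enable-cache            passwd  yes
positive-time-to-live   passwd  0
negative-time-to-live   passwd  0
enable-cache            group   yes
positive-time-to-live   group   0
negative-time-to-live   group   0
# host and service lookups keep their caches
```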
Fixes #50273
Could also move kdc.conf, but this makes it inconvenient to use command line
utilities with heimdal, as it would require specifying --config-file with every
command.
Allow switching out kerberos server implementation.
Sharing config is probably sensible, but implementation is different enough to
be worth splitting into two files. Not sure this is the correct way to split an
implementation, but it works for now.
Uses the switch from config.krb5 to select implementation.
As the cassandra start script hardcodes the location of the logback
configuration to `CASSANDRA_CONF_DIR/logback.xml`, there is no way to
pass an alternate file via `$JVM_OPTS`, for example.
Also, without a logback configuration the DEBUG level is used, which is not
necessary for standard usage.
With this commit a default logback configuration is set with log level
INFO.
Configuration borrowed from:
https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configLoggingLevels.html
Instead of setting User/Group only when DynamicUser is disabled, the
previous version of the code set it only when it was enabled. This
caused services with DynamicUser enabled to actually run as nobody, and
services without DynamicUser enabled to run as root.
Regression from fbb7e0c82f.
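A sketch of the corrected logic (the service and option names are
hypothetical, not the actual module code):
```
{ config, lib, ... }:
let
  cfg = config.services.someDaemon;   # hypothetical service options
in {
  systemd.services.someDaemon.serviceConfig = {
    DynamicUser = cfg.dynamicUser;
  } // lib.optionalAttrs (!cfg.dynamicUser) {
    # Pin User/Group only when DynamicUser is disabled; the regression had
    # this condition inverted (static setups ran as root, dynamic as nobody).
    User = cfg.user;
    Group = cfg.group;
  };
}
```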
This cleans up the CockroachDB expression, with a few suggestions from
@aszlig.
However, it brought up the idea of using systemd's StateDirectory=
directive, which is a nice feature for managing long-term data files,
especially for services with assigned UIDs/GIDs. It can only manage
directories under /var/lib (for global services), though, so special
handling would be needed to make use of it at all in case someone wants a
path at a different root.
While the dataDir directive at the NixOS level is _occasionally_ useful,
I've gone ahead and removed it for now, as this expression is so new,
and it makes the expression cleaner, while other kinks can be worked out
and people can test drive it.
CockroachDB's dataDir directive, instead, has been replaced with
systemd's StateDirectory management to place the data under
/var/lib/cockroachdb for all uses.
There's an included RequiresMountsFor= clause like usual though, so if
people want dependencies for any kind of mounted device at boot
time/before database startup, it's easy to specify using their own
mount/filesystems clause.
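Roughly what the unit relies on now (a sketch, not the module verbatim):
```
systemd.services.cockroachdb = {
  unitConfig.RequiresMountsFor = "/var/lib/cockroachdb";
  serviceConfig = {
    StateDirectory = "cockroachdb";   # systemd creates and owns /var/lib/cockroachdb
    StateDirectoryMode = "0700";
  };
};
```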
This can also be reverted if necessary, but, we can see if anyone ever
actually wants that later on before doing it -- it's a backwards
compatible change, anyway.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Currently there are two calls to curl in the reloadScript, neither of which
checks for errors. If something is misconfigured (like a wrong authToken),
the only trace that something went wrong is this log message:
Asking Jenkins to reload config
<h1>Bad Message 400</h1><pre>reason: Illegal character VCHAR='<'</pre>
The service isn't marked as failed, so it's easy to miss.
Fix it by passing --fail to curl.
While at it:
* Add $curl_opts and $jenkins_url variables to keep the curl command
lines DRY.
* Add --show-error to curl to show short error message explanation when
things go wrong (like HTTP 401 error).
* Lower-case the $CRUMB variable as upper case is for exported environment
variables.
The new behaviour, with a wrong accessToken:
Asking Jenkins to reload config
curl: (22) The requested URL returned error: 401
And the service is clearly marked as failed in `systemctl --failed`.
This also includes a full end-to-end CockroachDB clustering test to
ensure everything basically works. However, this test is not currently
enabled by default, though it can be run manually. See the included
comments in the test for more information.
Closes #51306. Closes #38665.
Co-authored-by: Austin Seipp <aseipp@pobox.com>
Signed-off-by: Austin Seipp <aseipp@pobox.com>
As the comment notes, restarts/exits of dhcpcd generally require
restarting the NTP service since, if name resolution fails for a pool of
servers, the service might break itself. To be on the safe side, try
restarting Chrony in these instances, too.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Setting the server list to be empty is useful e.g. for hardware-only
or virtualized reference clocks that are passed through to the system
directly. In this case, initstepslew has no effect, so don't emit it.
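For example, a machine driven purely by a reference clock could now be
configured along these lines (the refclock line is illustrative):
```
services.chrony = {
  enable = true;
  servers = [ ];              # no NTP servers, so no initstepslew is emitted
  extraConfig = ''
    refclock PHC /dev/ptp0 poll 2
  '';
};
```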
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Part of #49783. Nextcloud tracks the application's state in its
`config.php`, which makes it hard for the module to modify the configuration
during upgrades.
It will take time until the issue is properly fixed, therefore we
decided to warn about this in the manual.
This PR addresses two things:
* Adding a basic example for nextcloud. I figured it to be helpful to
add some basic usage instructions when adding a new manual entry.
Advanced documentation may follow later.
For now this document actively links to the service options, so users
are guided to the remaining options that can be helpful in certain
cases.
* Add a warning about upgrades and manual changes in
`/var/lib/nextcloud`. This will be fixed in the future, but it's
definitely helpful to document the current issues in the manual (as
proposed in https://github.com/NixOS/nixpkgs/issues/49783#issuecomment-439691127).
This allows, finally, proper detection when postgresql is ready to
accept connections. Until now, it was possible that services depending
on postgresql would fail in a race condition trying to connect
to postgresql.
When reworking the rspamd workers I disallowed `proxy` as a type and
instead used `rspamd_proxy`, which is the correct name for that worker
type. That change breaks people's existing configs, so I have made this
commit, which allows `proxy` as a worker type again but makes it behave
as `rspamd_proxy` and prints a warning if you use it.
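So existing configs along these lines keep working (a sketch; the worker
attribute layout is assumed):
```
services.rspamd.workers.proxy = {
  type = "proxy";        # now treated as rspamd_proxy, with a warning
  # ...
};
```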
This commit adds an assertion that checks that either `configFile` or
`configuration` is set for alertmanager. The alertmanager config
cannot be an empty attribute set. The check executed with `amtool` fails
before the service even has the chance to start. We should probably not
allow a broken alertmanager configuration anyway.
This also introduces a test for alertmanager configuration that
piggybacks on the existing prometheus tests.
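A minimal `configuration` that satisfies the new assertion and the `amtool`
check (the attribute layout follows upstream's alertmanager.yml schema):
```
services.prometheus.alertmanager = {
  enable = true;
  configuration = {
    route.receiver = "default";
    receivers = [ { name = "default"; } ];
  };
};
```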
The default of systemd is to kill
the whole cgroup of a service. For slurmd
this means that all running jobs get killed
as well whenever the configuration is updated (and activated).
To avoid this behaviour we set "KillMode=process"
to kill only slurmd on reload. This is how
slurm configures the systemd service.
See:
https://bugs.schedmd.com/show_bug.cgi?id=2095#c24508f866ea1
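Which essentially boils down to:
```
systemd.services.slurmd.serviceConfig.KillMode = "process";
```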
Otherwise netdata will not find its python modules.
To make sure netdata still picks up our setuid version of apps.plugin,
we rename the original executable.
These days build systems are more robust w.r.t. concurrency.
Most users will have at least two cores in their machines.
Therefore I suggest increasing the number of cores used for building.
fixes #50376
Based on reports X wouldn't start out of the box and seems OK now.
In case there are still some problems, we can improve later.
I checked that nixos.tests.virtualbox.* still succeed.