When the iPXE tests were added, an option was added for configuring only
the frontend, and the backend configuration was dropped entirely. This
caused most installer tests to fail.
- Create a child configuration named "Work" with an extra config file.
- Name the default configuration "Home" :-)
- Once the VM is set up, reboot and verify that it has booted into the
  default configuration.
- Reboot into the "Work" configuration via GRUB.
- Verify that we have booted into the "Work" configuration and that
  the extra config file is present.
This test works for the simple GRUB configuration and the simple UEFI
GRUB configuration. UEFI systemd-boot is not included in the test.
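A very rough sketch of what such a child configuration can look like,
using the nesting module (the option names exist, but whether the test
uses nesting.clone or nesting.children is an assumption here):

  { lib, ... }:
  {
    # label the default configuration
    boot.loader.grub.configurationName = "Home";
    # child configuration inheriting from the parent
    nesting.clone = [
      {
        boot.loader.grub.configurationName = lib.mkForce "Work";
        # the extra config file the test checks for
        environment.etc."work.conf".text = "work-specific settings";
      }
    ];
  }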
Basic test which confirms that new inputs can be created and that
messages can be sent to a GELF UDP input using `netcat`.
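For illustration, sending such a message from the (Perl-based) test
script might look roughly like this; the port and the message fields
are assumptions based on Graylog's GELF UDP defaults:

  testScript = ''
    # send a minimal GELF message to the assumed input on UDP port 12201
    $machine->succeed(
      "echo '{\"version\":\"1.1\",\"host\":\"node1\","
      . "\"short_message\":\"test\"}' | nc -w 1 -u localhost 12201"
    );
  '';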
This test requires 4 GB of RAM for Elasticsearch to avoid failures due
to insufficient memory (please refer to `nixos/tests/elk.nix` for a
detailed explanation of the issue).
It's also ensured that Elasticsearch has an open HTTP port before
`graylog` is started. This is a workaround to ensure that all services
are started in the proper order, even on less powerful test machines.
However, this shouldn't be implemented in the `nixos/graylog` module,
as it might be harmful when using Elasticsearch clusters that e.g.
require authentication and/or run on different servers.
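The shape of that workaround in the test is roughly the following (a
sketch; 9200 is Elasticsearch's default HTTP port):

  systemd.services.graylog = {
    path = [ pkgs.curl ];
    preStart = ''
      # block until Elasticsearch answers on its HTTP port
      until curl -s -o /dev/null http://localhost:9200; do
        sleep 1
      done
    '';
  };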
This commit adds new options to the Deluge service:
- Allow configuration of the user/group which runs the deluged daemon.
- Allow configuration of the user/group which runs the deluge web
daemon.
- Allow opening firewall for the deluge web daemon.
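Put together, the new knobs might be used like this (a sketch; the
exact option names are assumptions):

  services.deluge = {
    enable = true;
    # user/group running the deluged daemon
    user = "media";
    group = "media";
    web = {
      enable = true;
      # open the firewall for the deluge web daemon
      openFirewall = true;
    };
  };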
Below that it works, but only when supplying a custom password file with
restricted permissions (i.e. outside the Nix store). We can't do that
using an absolute path in the tests.
It seems you can't have a node as its own seed when it's listening on
an interface instead of an IP address. At least the way it was done in
the test doesn't work, and I can't figure out any other way than to
just listen on the IP address instead.
This is a simple exporter which exports the information
provided by `wg show all dump` to Prometheus.
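If wired up through the Prometheus exporters module, enabling it might
look like this (a sketch; the option path and port are assumptions):

  services.prometheus.exporters.wireguard = {
    enable = true;
    port = 9586;
  };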
Co-authored-by: Franz Pletz <fpletz@fnordicwalking.de>
Somewhere between systemd v239 and v242, upstream decided to no longer
run a few system services with `DynamicUser=1` but failed to provide a
migration path for all the state those services left behind.
In the case of systemd-timesyncd, the state has to be moved from
/var/lib/private/systemd/timesync to /var/lib/systemd/timesync if
/var/lib/systemd/timesync is currently a symlink.
We only do this if the stateVersion is still below 19.09, to avoid
accumulating an ever-growing activation script for (then) ancient
systemd migrations that are no longer required.
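A hedged sketch of such a guarded migration (not the verbatim module
code):

  system.activationScripts.systemd-timesyncd-migration =
    lib.mkIf (lib.versionOlder config.system.stateVersion "19.09") ''
      # the state dir used to be a symlink into /var/lib/private
      if [ -L /var/lib/systemd/timesync ]; then
        rm /var/lib/systemd/timesync
        mv /var/lib/private/systemd/timesync /var/lib/systemd/timesync
      fi
    '';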
See https://github.com/systemd/systemd/issues/12131 for details about
the missing migration path and related discussion.
With the systemd update to v242, five lines are no longer sufficient to
confirm that the storage was verified. In order to reduce future test
failures, increasing it to 10 lines sounds like a sane amount.
We differentiate between modules and baseModules in the
VM builder for NixOS tests. This way nesting.children, even though
it doesn't inherit from its parent, still has enough config to
actually complete the test. Otherwise the qemu modules
would not be loaded, for example, and a nesting.children
statement would not evaluate.
This is actually very useful: it allows you to test switch-to-configuration.
nesting.children is currently still broken, as it will throw
away 'too much' of the config, including the modules that make
NixOS tests work in the first place. But that's something for
another time.
nixos/nextcloud: Add documentation for nextcloud app installation and updates
nixos/nextcloud: Enable autoUpdateApps in nextcloud test
nixos/nextcloud: Fix typo in nixos/modules/services/web-apps/nextcloud.xml
Co-Authored-By: Florian Klink <flokli@flokli.de>
nixos/nextcloud: Escape html in option description
nixos/nextcloud: Fix autoUpdateApps URL in documentation.
Co-Authored-By: Florian Klink <flokli@flokli.de>
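For reference, the auto-updater exercised by the test is enabled
roughly like this (the startAt value is illustrative):

  services.nextcloud.autoUpdateApps = {
    enable = true;
    startAt = "05:00:00"; # systemd calendar expression
  };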
Quite a bit of fixing was needed to get this to work.
Changes in VirtualBox and additions:
- VirtualBox is no longer officially supported on 32-bit hosts, so i686-linux is removed from platforms
  for VirtualBox and the extension pack. 32-bit additions still work.
- There was a refactoring of kernel module makefiles and two resulting bugs affected us which had to be patched.
These bugs were reported to the bug tracker (see comments near patches).
- The Qt5X11Extras makefile patch broke. Fixed it to apply again, making the libraries logic simpler
and more correct (it just uses a different base path instead of always linking to Qt5X11Extras).
- Added a patch to remove "test1" and "test2" kernel messages due to forgotten debugging code.
- virtualbox-host NixOS module: the VirtualBoxVM executable should be setuid, not VirtualBox.
  This matches how the official installer sets it up.
- Additions: replaced a for loop for installing kernel modules with just a "make install",
which seems to work without any of the things done in the previous code.
- Additions: The package defined buildCommand, which resulted in phases not running, including RUNPATH
  stripping in fixupPhase; an installPhase was also defined but never run. Fixed this by
  refactoring using phases. Had to set dontStrip, otherwise binaries were broken by stripping.
  The libdbus path had to be added later in fixupPhase because it is used via dlopen, not directly linked.
- Additions: Added zlib and libc to patchelf, otherwise runtime library errors result from some binaries.
  For some reason the missing libc only manifested itself for mount.vboxsf when included in the initrd.
Changes in nixos/tests/virtualbox:
- Update the simple-gui test to send the right keys to start the VM. With VirtualBox 5
  it was enough to just send "return", but with 6 the Tools entry may be selected by
  default. Send "home" to reliably select Tools, "down" to move to the VM and "return"
  to start it.
- Disable the VirtualBox UART by default because it causes a crash due to a regression
in VirtualBox (specific to software virtualization and serial port usage). It can
still be enabled using an option but there is an assert that KVM nested virtualization
is enabled, which works around the problem (see below).
- Add an option to enable nested KVM virtualization, allowing VirtualBox to use hardware
virtualization. This works around the UART problem and also allows using 64-bit
guests, but requires a kernel module parameter.
- Add an option to run 64-bit guests. Tested that the tests pass with that. As mentioned
this requires KVM nested virtualization.
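The kernel module parameter in question can be set like this on the
machine running the tests (a sketch for Intel CPUs; AMD uses kvm-amd):

  # allow nested virtualization so VirtualBox can use hardware
  # virtualization inside the test VM
  boot.extraModprobeConfig = "options kvm-intel nested=1";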
Ideally, private keys never leave the host they're generated on, just
like with SSH. Setting generatePrivateKeyFile to true causes the
private key to be generated automatically.
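In use, this looks roughly as follows (interface name and key path
illustrative):

  networking.wireguard.interfaces.wg0 = {
    # generate the key on the host at first start instead of passing one in
    generatePrivateKeyFile = true;
    privateKeyFile = "/var/lib/wireguard/wg0.key";
  };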
This is to make sure that we get different ETag values whenever we
switch to a different store path but with the same file contents.
I've checked this against the old behaviour without the patch and it
fails as expected.
Signed-off-by: aszlig <aszlig@nix.build>
This adds the following new packages:
+ elasticsearch7
+ elasticsearch7-oss
+ logstash7
+ logstash7-oss
+ kibana7
+ kibana7-oss
+ filebeat7
+ heartbeat7
+ metricbeat7
+ packetbeat7
+ journalbeat7
The default major version of the ELK stack stays at 6. We should
probably set it to 7 in a subsequent commit.
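Until then, the new major version can be opted into explicitly, e.g.:

  services.elasticsearch.package = pkgs.elasticsearch7;
  services.kibana.package = pkgs.kibana7;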
Documize is an open-source alternative to wiki software like Confluence,
based on Go and EmberJS. This patch adds the sources for the community
edition[1]; for commercial use, their paid plan[2] needs to be used.
For commercial use, a derivation that bundles the commercial package and
contains a `$out/bin/documize` can be passed to
`services.documize.enable`.
The package compiles the Go sources; the build process also bundles the
pre-built frontend from `gui/public` into the binary.
The NixOS module generates a simple `systemd` unit which starts the
service as a dynamic user; a database and a reverse proxy won't be
configured.
[1] https://www.documize.com/get-started/
[2] https://www.documize.com/pricing/
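A minimal sketch of enabling the module (the port option is an
assumption):

  services.documize = {
    enable = true;
    port = 3000; # assumed option name; pick any free port
  };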
Since the switch to checking the nginx config with gixy in
59fac1a6d7, the ACME test doesn't build
anymore, because gixy reports the following false positive (reindented):
>> Problem: [alias_traversal] Path traversal via misconfigured alias.
   Severity: MEDIUM
   Description: Using alias in a prefixed location that doesn't ends with
                directory separator could lead to path traversal
                vulnerability.
   Additional info: https://github.com/yandex/gixy/blob/master/docs/en/plugins/aliastraversal.md
   Pseudo config:
   server {
       server_name letsencrypt.org;
       location /documents/2017.11.15-LE-SA-v1.2.pdf {
           alias /nix/store/y4h5ryvnvxkajkmqxyxsk7qpv7bl3vq7-2017.11.15-LE-SA-v1.2.pdf;
       }
   }
The reason this is a false positive is that the destination is not a
directory, so something like "/foo.pdf../other.txt" won't work here,
because the resulting path would be ".../destfile.pdf../other.txt".
Nevertheless, it's a good idea to use the exact match operator (=), not
only to shut up gixy but also to gain a bit of lookup performance (not
that it would matter in our test).
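Expressed through the NixOS nginx module, the exact-match variant looks
roughly like this (store path taken from the pseudo config above):

  services.nginx.virtualHosts."letsencrypt.org".locations = {
    # "= " makes this an exact-match location, silencing gixy
    "= /documents/2017.11.15-LE-SA-v1.2.pdf".alias =
      "/nix/store/y4h5ryvnvxkajkmqxyxsk7qpv7bl3vq7-2017.11.15-LE-SA-v1.2.pdf";
  };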
Signed-off-by: aszlig <aszlig@nix.build>
which was wrongly specified as types.lines
Prevent it from getting copied to the Nix store, as people might use it
for credentials, and make the tests cover it.
Also add back the tests; they don't seem broken anymore.
This is just fine:
nix-build ./nixos/release.nix -A tests.kafka.kafka_2_1.x86_64-linux -A tests.kafka.kafka_2_2.x86_64-linux
Currently, if you want to properly chroot a systemd service, you could do
it using BindReadOnlyPaths=/nix/store or use a separate derivation which
gathers the runtime closure of the service you want to chroot. The
former is the easier method and there is also a method directly offered
by systemd, called ProtectSystem, which still leaves the whole store
accessible. The latter however is a bit more involved, because you need
to bind-mount each store path of the runtime closure of the service you
want to chroot.
This can be achieved using pkgs.closureInfo and a small derivation that
packs everything into a systemd unit, which later can be added to
systemd.packages.
However, this process is a bit tedious, so the changes here implement
this in a more generic way.
Now if you want to chroot a systemd service, all you need to do is:
  {
    systemd.services.myservice = {
      description = "My Shiny Service";
      wantedBy = [ "multi-user.target" ];
      confinement.enable = true;
      serviceConfig.ExecStart = "${pkgs.myservice}/bin/myservice";
    };
  }
If more than the dependencies for the ExecStart* and ExecStop* (which
btw. also includes script and {pre,post}Start) need to be in the chroot,
it can be specified using the confinement.packages option. By default
(which uses the full-apivfs confinement mode), a user namespace is set
up as well and /proc, /sys and /dev are mounted appropriately.
In addition - and by default - a /bin/sh executable is provided, which
is useful for most programs that use the system() C library call to
execute commands via shell.
Unfortunately, there are a few limitations at the moment. The first
being that DynamicUser doesn't work in conjunction with tmpfs, because
systemd seems to ignore the TemporaryFileSystem option if DynamicUser is
enabled. I started implementing a workaround to do this, but I decided
to not include it as part of this pull request, because it needs a lot
more testing to ensure it's consistent with the behaviour without
DynamicUser.
The second limitation/issue is that RootDirectoryStartOnly doesn't work
right now, because it only affects the RootDirectory option and doesn't
include/exclude the individual bind mounts or the tmpfs.
A quirk we do have right now is that systemd tries to create a /usr
directory within the chroot, which subsequently fails. Fortunately, this
is just an ugly error and not a hard failure.
The changes also come with a changelog entry for NixOS 19.03, which is
why I asked for a vote of the NixOS 19.03 stable maintainers whether to
include it (I admit it's a bit late, a few days before the official
release, sorry for that):
@samueldr:
Via pull request comment[1]:
+1 for backporting as this only enhances the feature set of nixos,
and does not (at a glance) change existing behaviours.
Via IRC:
new feature: -1, tests +1, we're at zero, self-contained, with no
global effects without actively using it, +1, I think it's good
@lheckemann:
Via pull request comment[2]:
I'm neutral on backporting. On the one hand, as @samueldr says,
this doesn't change any existing functionality. On the other hand,
it's a new feature and we're well past the feature freeze, which
AFAIU is intended so that new, potentially buggy features aren't
introduced in the "stabilisation period". It is a cool feature
though? :)
A few other people on IRC didn't have opposition either against late
inclusion into NixOS 19.03:
@edolstra: "I'm not against it"
@Infinisil: "+1 from me as well"
@grahamc: "IMO its up to the RMs"
So that makes +1 from @samueldr, 0 from @lheckemann, 0 from @edolstra
and +1 from @Infinisil (even though he's not a release manager) and no
opposition from anyone, which is the reason why I'm merging this right
now.
I also would like to thank @Infinisil, @edolstra and @danbst for their
reviews.
[1]: https://github.com/NixOS/nixpkgs/pull/57519#issuecomment-477322127
[2]: https://github.com/NixOS/nixpkgs/pull/57519#issuecomment-477548395
eb90d97009 broke nslcd, as /run/nslcd was
created/chowned as the root user, while nslcd wants to do parts of its
work as the nslcd user.
This commit changes the nslcd to run with the proper uid/gid from the
start (through User= and Group=), so the RuntimeDirectory has proper
permissions, too.
In some cases, secrets are baked into nslcd's config file during startup
(so we don't want to provide it from the store).
This config file is normally hard-wired to /etc/nslcd.conf, but we don't
want to use PermissionsStartOnly anymore (#56265), and activation
scripts are ugly, so redirect /etc/nslcd.conf to /run/nslcd/nslcd.conf,
which now gets provisioned inside ExecStartPre=.
This change requires the files referenced in
users.ldap.bind.passwordFile and users.ldap.daemon.rootpwmodpwFile to be
readable by the nslcd user (in the non-nslcd case, this was already the
case for users.ldap.bind.passwordFile).
Fixes #57783
users.ldap.daemon.rootpwmodpw -> users.ldap.daemon.rootpwmodpwFile
users.ldap.bind.password -> users.ldap.bind.passwordFile
As users.ldap.daemon.rootpwmodpw was never part of a release, no
mkRenamedOptionModule is introduced.
* WIP: Run Docker containers as declarative systemd services
* PR feedback round 1
* docker-containers: add environment, ports, user, workdir options
* docker-containers: log-driver, string->str, line wrapping
* ExecStart instead of script wrapper, %n for container name
* PR feedback: better description and example formatting
* Fix docbook formatting (oops)
* Use a list of strings for ports, expand documentation
* docker-containers: add a simple nixos test
* waitUntilSucceeds to avoid potential weird async issues
* Don't enable docker daemon unless we actually need it
* PR feedback: leave ExecReload undefined
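After this PR, a declarative container looks roughly like this (image
and values illustrative):

  docker-containers.webserver = {
    image = "nginx:1.15";
    ports = [ "127.0.0.1:8080:80" ];
    environment = { TZ = "UTC"; };
    log-driver = "journald";
  };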
In Linux 4.19 there has been a major rework of the overlayfs
implementation and it now opens files in lowerdir with O_NOATIME, which
in turn caused issues in our VM tests because the process owner of QEMU
doesn't match the file owner of the lowerdir.
The crux here is that 9p propagates the O_NOATIME flag to the host and
the guest kernel has no way of verifying whether that flag will lead to
any problems beforehand.
There is ongoing work to possibly fix this in the kernel, but it will
take a while until there is a working patch and consensus.
So in order to bring our default kernel back to 4.19 and of course make
it possible to run newer kernels in VM tests, I'm merging a small QEMU
patch as an interim solution, which we can drop once we have a working
fix in the next round of stable kernels.
We already had Linux 4.19 set as the default kernel, but that was
subsequently reverted in 048c36ccaa
because the patch we used was the revert of the commit I bisected a
while ago.
This patch broke overlayfs in other ways, so I'm also merging in a VM
test by @bachp, which only tests whether overlayfs is working, just to
be on the safe side that something like this won't happen in the future.
Even though this change could be considered a moderate mass rebuild at
least for GNU/Linux, I'm merging this to master, mainly to give us some
time to get it into the current 19.03 release branch (and the subsequent
testing window) once Hydra shows no new breaking builds.
Cc: @samueldr, @lheckemann
Fixes: https://github.com/NixOS/nixpkgs/issues/54509
Fixes: https://github.com/NixOS/nixpkgs/issues/48828
Merges: https://github.com/NixOS/nixpkgs/pull/57641
Merges: https://github.com/NixOS/nixpkgs/pull/54508
After working on the last wireguard bump (#57534), we figured that it's
probably a good idea to have a basic test which confirms that a simple
VPN with wireguard still works.
This test starts two peers with a `wg0` network interface and adds a v4
and a v6 route that goes through `wg0`.
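The shape of one peer in such a setup is roughly the following (keys,
addresses and port illustrative):

  networking.wireguard.interfaces.wg0 = {
    ips = [ "10.23.42.1/32" "fc00::1/128" ];
    listenPort = 23542;
    privateKeyFile = "/etc/wireguard/private";
    peers = [
      {
        # yields the v4 and v6 routes through wg0
        allowedIPs = [ "10.23.42.2/32" "fc00::2/128" ];
        publicKey = "<peer public key>";
        endpoint = "192.168.1.2:23542";
      }
    ];
  };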
From @edolstra at [1]:
BTW we probably should take the closure of the whole unit rather than
just the exec commands, to handle things like Environment variables.
With this commit, there is now a "fullUnit" option, which can be enabled
to include the full closure of the service unit into the chroot.
However, I did not enable this by default, because I do disagree here
and *especially* things like environment variables or environment files
shouldn't be in the closure of the chroot.
For example if you have something like:
  { pkgs, ... }:
  {
    systemd.services.foobar = {
      serviceConfig.EnvironmentFile = pkgs.writeText "secrets" ''
        user=admin
        password=abcdefg
      '';
    };
  }
We really do not want the *file* to end up in the chroot, but rather
just the environment variables to be exported.
Another thing is that this makes it less predictable what actually will
end up in the chroot, because we have a "globalEnvironment" option that
will get merged in as well, so users adding stuff to that option will
also make it available in confined units.
I also added a big fat warning about that in the description of the
fullUnit option.
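Opting in is a one-liner on the affected service:

  # include the full closure of the unit file in the chroot
  systemd.services.foobar.confinement.fullUnit = true;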
[1]: https://github.com/NixOS/nixpkgs/pull/57519#issuecomment-472855704
Signed-off-by: aszlig <aszlig@nix.build>
Another thing requested by @edolstra in [1]:
We should not provide a different /bin/sh in the chroot, that's just
asking for confusion and random shell script breakage. It should be
the same shell (i.e. bash) as in a regular environment.
While I personally would go as far as to have a very restricted
shell that is not even a shell and basically *only* allows "/bin/sh -c"
with only *very* minimal parsing of shell syntax, I do agree that people
expect /bin/sh to be bash (or the one configured by environment.binsh)
on NixOS.
So this should make both others and me happy in that I could just use
confinement.binSh = "${pkgs.dash}/bin/dash" for the services I confine.
[1]: https://github.com/NixOS/nixpkgs/pull/57519#issuecomment-472855704
Signed-off-by: aszlig <aszlig@nix.build>
Quoting @edolstra from [1]:
I don't really like the name "chroot", something like "confine[ment]"
or "restrict" seems better. Conceptually we're not providing a
completely different filesystem tree but a restricted view of the same
tree.
I already used "confinement" as a sub-option and I do agree that
"chroot" sounds a bit too specific (especially because not *only* chroot
is involved).
So this changes the module name and its options to use "confinement"
instead of "chroot" and also renames "chroot.confinement" to
"confinement.mode".
[1]: https://github.com/NixOS/nixpkgs/pull/57519#issuecomment-472855704
Signed-off-by: aszlig <aszlig@nix.build>
Currently, if you want to properly chroot a systemd service, you could
do it using BindReadOnlyPaths=/nix/store (which is not what I'd call
"properly", because the whole store is still accessible) or use a
separate derivation that gathers the runtime closure of the service you
want to chroot. The former is the easier method and there is also a
method directly offered by systemd, called ProtectSystem, which still
leaves the whole store accessible. The latter however is a bit more
involved, because you need to bind-mount each store path of the runtime
closure of the service you want to chroot.
This can be achieved using pkgs.closureInfo and a small derivation that
packs everything into a systemd unit, which later can be added to
systemd.packages. That's also what I did several times[1][2] in the
past.
However, this process got a bit tedious, so I decided that it would be
generally useful for NixOS, so this very implementation was born.
Now if you want to chroot a systemd service, all you need to do is:
  {
    systemd.services.yourservice = {
      description = "My Shiny Service";
      wantedBy = [ "multi-user.target" ];
      chroot.enable = true;
      serviceConfig.ExecStart = "${pkgs.myservice}/bin/myservice";
    };
  }
If more than the dependencies for the ExecStart* and ExecStop* (which
btw. also includes "script" and {pre,post}Start) need to be in the
chroot, it can be specified using the chroot.packages option. By
default (which uses the "full-apivfs"[3] confinement mode), a user
namespace is set up as well and /proc, /sys and /dev are mounted
appropriately.
In addition - and by default - a /bin/sh executable is provided as well,
which is useful for most programs that use the system() C library call
to execute commands via shell. The shell providing /bin/sh is dash
instead of the default in NixOS (which is bash), because it's way more
lightweight and after all we're chrooting because we want to lower the
attack surface and it should be only used for "/bin/sh -c something".
Prior to submitting this here, I did a first implementation of this
outside[4] of nixpkgs, which duplicated the "pathSafeName" functionality
from systemd-lib.nix, just because it's only a single line.
However, I decided to just re-use the one from systemd here and
subsequently made it available when importing systemd-lib.nix, so that
the systemd-chroot implementation also benefits from fixes to that
functionality (which is now a proper function).
Unfortunately, we do have a few limitations as well. The first being
that DynamicUser doesn't work in conjunction with tmpfs, because it
already sets up a tmpfs in a different path and simply ignores the one
we define. We could probably solve this by detecting it and try to
bind-mount our paths to that different path whenever DynamicUser is
enabled.
The second limitation/issue is that RootDirectoryStartOnly doesn't work
right now, because it only affects the RootDirectory option and not the
individual bind mounts or our tmpfs. It would be helpful if systemd
would have a way to disable specific bind mounts as well or at least
have some way to ignore failures for the bind mounts/tmpfs setup.
Another quirk we do have right now is that systemd tries to create a
/usr directory within the chroot, which subsequently fails. Fortunately,
this is just an ugly error and not a hard failure.
[1]: https://github.com/headcounter/shabitica/blob/3bb01728a0237ad5e7/default.nix#L43-L62
[2]: https://github.com/aszlig/avonc/blob/dedf29e092481a33dc/nextcloud.nix#L103-L124
[3]: The reason this is called "full-apivfs" instead of just "full" is
to make room for a *real* "full" confinement mode, which is more
restrictive even.
[4]: https://github.com/aszlig/avonc/blob/92a20bece4df54625e/systemd-chroot.nix
Signed-off-by: aszlig <aszlig@nix.build>
by adding targets and curl wait loops to services, to ensure that
services are not started before the services they depend on are
reachable.
The extra targets cfssl-online.target and kube-apiserver-online.target
synchronize starts across machines, and node-online.target ensures that
docker is restarted and ready to deploy containers once flannel has
negotiated the network CIDR with the apiserver.
Since flannel needs to be started before addon-manager to configure
the docker interface, it has to have its own rbac bootstrap service.
The curl wait loops within the other services exist to ensure that a
service, when started, is able to do its work immediately without
cluttering the log with failing conditions.
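Such a wait loop has roughly the following shape (a sketch; URL and
service internals are assumptions):

  systemd.services.kube-apiserver-online = {
    path = [ pkgs.curl ];
    script = ''
      # block until the apiserver answers health checks
      until curl -ksSf https://localhost:6443/healthz >/dev/null; do
        sleep 1
      done
    '';
  };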
By ensuring that kubernetes.target is only reached after starting the
cluster, it can be used in the tests as a wait condition.
In kube-certmgr-bootstrap, a mkdir is needed so that it doesn't fail to
start.
The following is the relevant part of systemctl list-dependencies
default.target
● ├─certmgr.service
● ├─cfssl.service
● ├─docker.service
● ├─etcd.service
● ├─flannel.service
● ├─kubernetes.target
● │ ├─kube-addon-manager.service
● │ ├─kube-proxy.service
● │ ├─kube-apiserver-online.target
● │ │ ├─flannel-rbac-bootstrap.service
● │ │ ├─kube-apiserver-online.service
● │ │ ├─kube-apiserver.service
● │ │ ├─kube-controller-manager.service
● │ │ └─kube-scheduler.service
● │ └─node-online.target
● │ ├─node-online.service
● │ ├─flannel.target
● │ │ ├─flannel.service
● │ │ └─mk-docker-opts.service
● │ └─kubelet.target
● │ └─kubelet.service
● ├─network-online.target
● │ └─cfssl-online.target
● │ ├─certmgr.service
● │ ├─cfssl-online.service
● │ └─kube-certmgr-bootstrap.service
to protect services from crashing and clobbering the logs when
certificates are not in place yet, and to make sure services are
activated when certificates are ready.
To prevent errors similar to "kube-controller-manager.path: Failed to
enter waiting state: Too many open files",
fs.inotify.max_user_instances has to be increased.
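In NixOS this boils down to a sysctl setting (the value here is
illustrative; the kernel default is 128):

  boot.kernel.sysctl."fs.inotify.max_user_instances" = 512;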
+ isolate etcd on the master node by letting it listen only on loopback
+ enable the kubelet on the master and taint the master with NoSchedule
The reason for the latter is that flannel requires all nodes to be "registered"
in the cluster in order to set up the cluster network. This means that the
kubelet is needed even on nodes on which we don't plan to schedule anything.
- All Kubernetes components have been separated into different files
- All TLS-enabled ports have been deprecated and disabled by default
- easyCerts option added to support automatic cluster PKI bootstrap
- RBAC has been enforced for all cluster components by default
- NixOS kubernetes test cases make use of easyCerts to set up the PKI
The `| tee` invocation always masked the return value of the
switch-to-configuration test.
```
~ $ false | tee && echo "oh no"
oh no
```
The added wrapper script will still output everything to stderr, while
passing failures to the test harness.
trace: warning: config.services.gitea.database.password will be stored as plaintext
in the Nix store. Use database.passwordFile instead.
(Arguably, this shouldn't be a warning at all. But making it happy is
easier than having a debate on the value of this warning.)
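The warning-free form uses the file-based option instead (path
illustrative):

  services.gitea.database.passwordFile = "/run/keys/gitea-dbpassword";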
trace: warning: The options services.ndppd.interface and services.ndppd.network will probably be removed soon,
please use services.ndppd.proxies.<interface>.rules.<network> instead.
trace: warning: The option `services.rspamd.bindUISocket' defined in `<unknown-file>' has been renamed to `services.rspamd.workers.controller.bindSockets'.
trace: warning: The option `services.rspamd.bindSocket' defined in `<unknown-file>' has been renamed to `services.rspamd.workers.normal.bindSockets'.
trace: warning: The option `services.rspamd.workers."rspamd_proxy".type` defined in `<unknown-file>' has enum value `proxy` which has been renamed to `rspamd_proxy`
With this option it's possible to specify a custom expression for
`roundcube`, i.e. a roundcube environment with third-party plugins, as
shown in the test case.
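As used in the test case, this looks roughly like the following (the
plugin is just an example):

  services.roundcube.package = pkgs.roundcube.withPlugins
    (plugins: [ plugins.persistent_login ]);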
* pr-55320:
nixos/release-notes: mention breaking changes with matrix-synapse update
nixos/matrix-synapse: reload service with SIGHUP
nixos/tests/matrix-synapse: generate ca and certificates
nixos/matrix-synapse: use python to launch synapse
pythonPackages.pymacaroons-pynacl: remove unmaintained fork
matrix-synapse: 0.34.1.1 -> 0.99.0
pythonPackages.pymacaroons: init at 0.13.0
Hydra should support multiple Nix versions (and currently contains fixes
to work with Nix 2.0 and higher).
Further Nix versions can be added to the `hydraPkgs` expression in the
test case, which lists all supported Nix versions for Hydra.
* redmine: 3.4.8 -> 4.0.1
* nixos/redmine: update nixos test to run against both redmine 3.x and 4.x series
* nixos/redmine: default new installs from 19.03 onward to redmine 4.x series, while keeping existing installs on redmine 3.x series
* nixos/redmine: add comment about default redmine package to 19.03 release notes
* redmine: add aandersea as a maintainer
This is just a set of globs to remove from the active plugins directory
after autoconfiguration is complete.
I also removed the hard-coded disabling of "diskstats", since it seems
to work just fine now.
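A sketch of the resulting option (option name and glob values assumed):

  services.munin-node.disabledPlugins = [
    "cpuspeed"   # exact plugin name
    "meminfo_*"  # glob matching several plugins
  ];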